Introduction to Linear Control Systems
ISBN: 9780128127483, 0128127481

Table of contents:
Cover......Page 1
Introduction to Linear Control Systems......Page 3
Copyright......Page 4
Dedication......Page 5
Preface......Page 6
Acknowledgments......Page 13
Part I: Foundations......Page 14
1.1 Introduction......Page 15
1.2 Why control?......Page 16
1.3 History of control......Page 17
1.4 Why feedback?......Page 19
1.5 Magic of feedback......Page 20
1.6 Physical elements of a control system......Page 21
1.7 Abstract elements of a control system......Page 22
1.8 Design process......Page 23
1.9 Types of control systems......Page 25
1.10.1 Stability and performance......Page 26
1.10.2 Sensitivity and robustness......Page 28
1.10.3 Disturbance......Page 29
1.11.1 Stability and performance......Page 30
1.11.2 Sensitivity and robustness......Page 31
1.11.3 Disturbance and noise......Page 32
1.11.4 Reliability, economics, and linearity......Page 33
1.12 The 2-DOF control structure......Page 39
1.13 The Smith predictor......Page 44
1.15 Modern representation—Generalized model......Page 45
1.16 Status quo......Page 47
1.16.1 Overview......Page 48
1.16.1.1 Summary......Page 52
1.16.1.2 The forgotten......Page 56
1.16.2 Relation with other disciplines......Page 57
1.16.3 Challenges......Page 58
1.16.4 Outlook......Page 60
1.18 Notes and further readings......Page 63
1.19 Worked-out problems......Page 68
1.20 Exercises......Page 77
References......Page 86
Further Reading......Page 91
2.1 Introduction......Page 96
2.2 System modeling......Page 97
2.2.1 State-space......Page 98
2.2.1.1 Linearization......Page 100
2.2.1.2 Number of inputs and outputs......Page 105
2.2.2 Frequency domain......Page 106
2.2.2.1 Finding the output......Page 107
2.2.3 Zero, pole, and minimality......Page 110
2.3 Basic examples of modeling......Page 111
2.3.1 Electrical system as the plant......Page 112
2.3.2 Mechanical system as the plant......Page 113
2.3.3 Liquid system as the plant......Page 114
2.3.4 Thermal system as the plant......Page 115
2.3.5 Hydraulic system as the plant......Page 116
2.3.6 Chemical system as the plant......Page 117
2.3.7 Structural system as the plant......Page 119
2.3.9 Economics system as the plant......Page 121
2.3.10 Ecological system as the plant......Page 122
2.3.11 Societal system as the plant......Page 123
2.3.12 Physics system as the plant......Page 125
2.3.13.1 Exact modeling of delay......Page 126
2.3.13.2 Approximate modeling of delay......Page 131
2.3.14.2 Amplifiers......Page 132
2.4 Block diagram......Page 134
2.5 Signal flow graph......Page 139
2.5.1 Basic terminology of graph theory......Page 140
2.5.2 Equivalence of BD and SFG methods......Page 142
2.5.3 Computing the transmittance of an SFG......Page 143
2.7 Notes and further readings......Page 146
2.8 Worked-out problems......Page 151
2.9 Exercises......Page 183
References......Page 203
3.1 Introduction......Page 211
3.2 Lyapunov and BIBO stability......Page 212
3.3 Stability tests......Page 216
3.4 Routh’s test......Page 218
3.4.1 Special cases......Page 220
3.5 Hurwitz’ test......Page 223
3.6 Liénard and Chipart test......Page 224
3.7 Relative stability......Page 225
3.8 D-stability......Page 227
3.9 Particular relation with control systems design......Page 229
3.10 The Kharitonov theory......Page 230
3.11 Internal stability......Page 231
3.12 Strong stabilization......Page 234
3.13 Stability of LTV Systems......Page 235
3.14 Summary......Page 238
3.15 Notes and further readings......Page 239
3.16 Worked-out problems......Page 242
3.17 Exercises......Page 252
References......Page 262
4.1 Introduction......Page 267
4.2 System type and system inputs......Page 268
4.3 Steady-state error......Page 269
4.4 First-order systems......Page 273
4.5 Second-order systems......Page 274
4.5.1 System representation......Page 275
4.5.2 Impulse response......Page 276
4.5.3 Step response......Page 277
4.5.3.1 Time response characteristics......Page 278
4.6 Bandwidth of the system......Page 283
4.6.1 First-order systems......Page 284
4.6.2 Second-order systems......Page 286
4.6.3 Alternative derivation......Page 289
4.6.4 Higher-order systems......Page 291
4.7 Higher-order systems......Page 293
4.8 Model reduction......Page 295
4.9 Effect of addition of pole and zero......Page 299
4.10 Performance region......Page 302
4.11 Inverse response......Page 303
4.12 Analysis of the actual system......Page 305
4.12.1 Sensor dynamics......Page 306
4.12.2 Delay dynamics......Page 314
4.13 Introduction to robust stabilization and performance......Page 317
4.13.1 Open-loop control......Page 319
Design for disturbance and noise rejection......Page 320
Design for sinusoidal reference tracking......Page 323
4.15 Notes and further readings......Page 326
4.16 Worked-out problems......Page 328
4.17 Exercises......Page 350
References......Page 358
5.1 Introduction......Page 361
5.2 The root locus method......Page 364
5.3 The root contour......Page 382
5.4 Finding the value of gain from the root locus......Page 383
5.5.1.2 Systems with NMP zeros......Page 386
5.5.1.3 Examples of systems without NMP zeros......Page 387
5.5.1.4 Examples of system with NMP zeros......Page 396
5.5.2 Simple systems......Page 402
5.6 Summary......Page 409
5.7 Notes and further readings......Page 410
5.8 Worked-out problems......Page 411
5.9 Exercises......Page 440
References......Page 450
Part II: Frequency domain analysis & synthesis......Page 452
6.1 Introduction......Page 453
6.2 Nyquist plot......Page 455
6.2.1 Principle of argument......Page 456
6.2.2 Nyquist stability criterion......Page 458
6.2.3 Drawing of the Nyquist plot......Page 459
6.2.4 The high- and low-frequency ends of the plot......Page 462
6.2.5 Cusp points of the plot......Page 465
6.2.6 How to handle the proportional gain/uncertain parameter......Page 466
6.2.7 The case of j-axis zeros and poles......Page 467
6.2.8 Relation with root locus......Page 478
6.3 Gain, phase, and delay margins......Page 480
6.3.1 The GM concept......Page 481
6.3.1.1 Definition of GM in the Nyquist plot context......Page 490
6.3.2 The PM and DM concepts......Page 492
6.3.3 Stability in terms of the GM and PM signs......Page 507
6.3.4 The high sensitivity region......Page 509
6.5 Notes and further readings......Page 511
6.6 Worked-out problems......Page 513
6.7 Exercises......Page 531
References......Page 537
7.1 Introduction......Page 539
7.2.3 Log magnitude......Page 540
7.2.5 Octave and decade......Page 541
7.2.7.1 Gain K......Page 542
7.2.7.2 Zeros at origin (jω)^(+m)......Page 543
7.2.7.4 Real zeros not at origin (1+jωT)^(+m)......Page 544
7.2.7.6 Error in Lm......Page 545
7.2.7.9 Double poles [1 + (2ζ/ωn)jω + (1/ωn²)(jω)²]^(−m)......Page 546
7.2.8 How to draw the Bode diagram by hand......Page 547
7.3 Bode diagram and the steady-state error......Page 552
7.4 Minimum phase and nonminimum phase systems......Page 555
7.4.4 NMP pole with negative gain: −1/(p−s) = 1/(s−p), p > 0......Page 556
7.4.5 Determination of NMP systems from the Bode diagram......Page 561
7.5 Gain, phase, and delay margins......Page 562
7.6 Stability in the Bode diagram context......Page 567
7.8 Relation with Nyquist plot and root locus......Page 568
7.9 Standard second-order systems......Page 569
7.10 Bandwidth......Page 570
7.11 Summary......Page 573
7.13 Worked-out problems......Page 574
7.14 Exercises......Page 589
References......Page 596
8.1 Introduction......Page 597
8.2 S-Circles......Page 599
8.3 M-Circles......Page 600
8.4 N-circles......Page 602
8.5 M- and N-Contours......Page 603
8.6 KMN chart......Page 604
8.7.1 Gain, phase, and delay margins......Page 607
8.7.2 Stability......Page 608
8.7.3 Bandwidth......Page 611
8.8 The high sensitivity region......Page 613
8.9 Relation with Bode diagram, Nyquist plot, and root locus......Page 614
8.11 Notes and further readings......Page 615
8.12 Worked-out problems......Page 616
8.13 Exercises......Page 630
References......Page 635
9.1 Introduction......Page 637
9.2 Basic controllers: proportional, lead, lag, and lead-lag......Page 639
9.3 Controller simplifications: PI, PD, and PID......Page 657
9.4 Controller structures in the Nyquist plot context......Page 660
9.5 Effect of the controllers on the root locus......Page 663
9.6 Design procedure......Page 666
9.7.1 Heuristic rules......Page 668
9.7.2.1 Pole placement method......Page 669
9.7.2.2 Direct synthesis......Page 671
9.7.2.3 Skogestad tuning rules......Page 672
9.7.3 Optimization-based rules......Page 676
9.8 Internal model control......Page 678
9.9 The Smith predictor......Page 679
9.10.2 Integral control—I-term......Page 680
9.10.6 Series proportional-integral-derivative—Series PID......Page 681
9.10.8 Lag......Page 682
9.11 Summary......Page 683
9.12 Notes and further readings......Page 684
9.13 Worked-out problems......Page 685
9.14 Exercises......Page 722
References......Page 727
Part III: Advanced Issues......Page 730
10.1 Introduction......Page 731
10.2 Relation between time and frequency domain specifications......Page 732
10.3 The ideal transfer function......Page 734
10.4 Controller design via the TS method......Page 741
10.5 Interpolation conditions......Page 742
10.6 Integral and Poisson integral constraints......Page 745
10.7.1 Implications of open-loop integrators......Page 752
10.7.2 MP and NMP poles and zeros......Page 754
10.7.3 Imaginary-axis poles and zeros......Page 758
10.8.1 Maximal actuator movement......Page 760
10.8.2 Minimal actuator movement......Page 762
10.8.4 Sensor speed......Page 763
10.9 Delay......Page 764
10.10 Eigenstructure assignment by output feedback......Page 765
10.10.1 Regulation......Page 768
10.10.2 Tracking......Page 772
10.11 Noninteractive performance......Page 773
10.12 Minimal closed-loop pole sensitivity......Page 777
10.13.1 Structured perturbations......Page 781
10.13.2 Unstructured perturbations......Page 782
10.14 Special results for positive systems......Page 783
10.15 Generic design procedure......Page 784
10.16 Summary......Page 788
10.17 Notes and further readings......Page 789
10.18 Worked-out problems......Page 792
10.19 Exercises......Page 805
References......Page 815
Appendices A–G......Page 823
A.1 Introduction......Page 824
A.2 Basic properties and pairs......Page 826
A.2.2 Table of some Laplace transform pairs......Page 827
A.3 Differentiation and integration in time domain and frequency domain......Page 829
A.3.2 Differentiation formula in time domain......Page 831
A.3.4 Frequency domain formulae......Page 832
A.3.5 Some consequences......Page 833
A.4 Existence and uniqueness of solutions to differential equations......Page 834
References......Page 835
B.1.1 Electrical systems......Page 836
B.1.2 Mechanical systems......Page 837
B.1.3 Chemical systems......Page 838
B.2 Equivalent systems......Page 839
B.3 Worked-out problems......Page 840
References......Page 845
C.1 Introduction......Page 847
C.2 MATLAB®......Page 848
C.2.1.1 Script file......Page 850
C.2.1.2 Function file......Page 851
C.2.2.1 LTI models......Page 853
C.2.2.6 Model dynamics......Page 854
C.2.2.11 Pole placement......Page 855
C.3 Simulink......Page 856
C.4 Worked-out problems......Page 858
References......Page 874
D.1 Introduction......Page 875
D.2.1 Deterministic systems......Page 876
D.2.2 Stochastic systems......Page 878
D.2.3 Miscellaneous......Page 879
D.3 Lipschitz stability......Page 880
D.4 Lagrange, Poisson, and Lyapunov stability......Page 881
D.5 Finite-time and fixed-time stability......Page 883
D.5.1 Fixed-time decentralized stability of large-scale systems......Page 884
D.5.1.1 Large-scale system description......Page 885
D.6 Summary......Page 886
References......Page 887
E.2 Applications of the Routh’s array......Page 890
References......Page 893
F.2 Convex optimization......Page 897
F.3 Nonconvex optimization......Page 898
F.5 Genetic algorithms......Page 899
References......Page 911
G.1 Sample Midterm Exam (ME) (4h, closed book/notes)......Page 913
G.2 Sample Endterm Exam (EE) (4h, closed book/notes)......Page 914
Index......Page 918
Back Cover......Page 929

Introduction to Linear Control Systems

Yazdan Bavafa-Toosi

Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1800, San Diego, CA 92101-4495, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2017 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress ISBN: 978-0-12-812748-3 For Information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Mara Conner
Acquisition Editor: Sonnini R Yura
Editorial Project Manager: Ana Claudia Garcia
Production Project Manager: Mohana Natarajan
Cover Designer: Victoria Pearson
Typeset by MPS Limited, Chennai, India

Dedication

To men possessed of minds

With gratitude to all our professors at our alma maters, especially advisors Ali Khaki-Sedigh, Volker Mehrmann, Hiromitsu Ohmori, and Hossein Tabatabaei-Yazdi

Preface

The book Introduction to Linear Control Systems is designed as a comprehensive introduction to linear control systems for all those who, in one way or another, deal with control systems. It can be used as a reference for a one-semester, 3-credit undergraduate course on linear control systems, as the first course on this topic. It is designed for as large an audience as possible and can be adopted in all departments where courses on linear control systems are offered. These include the faculties of electrical engineering, mechanical engineering, industrial engineering, aerospace engineering, civil engineering, bio-engineering, chemical and petroleum engineering, physics, mathematics, economics, management, and the social sciences. Although the material is at the undergraduate level, the approach is theoretically firm and contemporary. The author was educated in engineering and mathematics departments in Iran, Germany, and Japan. He obtained a BEng degree in EE (power) from Ferdowsi University of Mashhad in 1997, and an MEng degree in EE (control) from K. N. Toosi University of Technology of Tehran in 2000, both in Iran. From 2001 to 2003 he held a research position at the Department of Mathematics, Technical University of Berlin, Germany. He earned a PhD degree in Integrated Design Engineering (also known as EE, Systems & Control) from Keio University of Japan in 2006. His research interests span systems and control theory and applications. He has held various research and teaching positions since his student years, and the book reflects his wide experience in the field. The idea of writing an undergraduate book on linear control systems was conceived when he was an undergraduate student; he has been expanding on the material taught to him in college and obtaining new results ever since.
In fact, almost all of the results in the book have been obtained independently by the author; however, since some of them have appeared in scientific forums before, he acknowledges this fact by citing the appropriate references and giving credit to them. The core material of this book has been taught by him at different universities in Iran over the past 14 years, starting in 2003. The book discusses and corrects numerous mistakes in the available texts and offers several contributions to the existing literature on this topic. In addition to students, our colleagues in academia as well as engineers in industry will find it beneficial. It offers over 600 versatile examples and worked-out problems as well as 1800 unsolved exercises. The problems, both solved and unsolved, are carefully designed one by one so as to shed more light on the details of the subjects and to facilitate and enhance the learning of the lessons by manifesting new facets of them. Many modern issues are discussed therein. The book also features a chapter on advanced topics that undergraduate students need to know at least at an introductory level. Every chapter includes a part on further readings, where more advanced topics and pertinent references are introduced for further study. Many of these articles were published in the 21st century, especially after 2010. A short description of each chapter, as well as a summary of the unique features of the book and the acknowledgments, follows.

PART I: In this part of the book, Chapters 1-5, we present the foundations of linear control systems. These include the introduction to control systems, their raison d'être, their different types, modeling of control systems, different methods for their representation and fundamental computations, basic stability concepts and tools for both analysis and design, basic time domain analysis and design details, and the root locus as a stability analysis and synthesis tool.

Chapter 1: In this chapter we introduce what control theory is and why it is needed. Open-loop and closed-loop control structures are studied, and their features are compared. The issues covered include stability, performance, sensitivity/robustness, disturbance rejection, reliability, economics, and linearization. We observe that a well-designed feedback can stabilize and robustify an unstable or poorly stable system, enhance its performance, and reject the disturbance. On the other hand, a poorly designed feedback can bring about the opposite outcomes. The 2-DOF and 3-DOF control structures, the internal model control structure, the Smith predictor, and the modern representation of control systems are also discussed. We have a glimpse at the history of automatic control and a detailed study of its status quo as well. The chapter includes about 30 examples and worked-out problems as well as over 100 exercises to enhance and facilitate the learning of the subject.

Chapter 2: In this chapter we learn how to model a control system and its constituents.
Particular attention is paid to the plant, for which different examples are provided, including electrical, mechanical, liquid, thermal, hydraulic, chemical, structural, biological, economic, ecological, societal, physics, and time-delay systems. Instances of discrete-time, discrete-event, stochastic, and nonlinear models are also discussed. State-space and transfer function methods are introduced as the two basic frameworks for modeling. Linearization is then introduced so as to convert a nonlinear model to a linear one. The presentation is followed by the introduction of block diagrams and their algebra for representing the interconnection of the different components of a control system. The signal flow graph is then introduced as an alternative to the block diagram representation. We learn how to compute the transmittance of a given block diagram or signal flow graph. The chapter offers about 70 examples and worked-out problems along with over 150 exercises which help shed more light on the details of the lessons and boost the learning of the reader.

Chapter 3: In this chapter we introduce the concept of stability. The reader becomes familiar with basic definitions of stability and tests for its verification. The stability concepts included are Lyapunov and BIBO stability. The stability tools we discuss are the Routh, Hurwitz, Liénard-Chipart, and Kharitonov methods. We show how these results should be used for controller design. Also included are the issues of relative stability, D-stability, internal stability, strong stabilization, and stability of LTV systems. Further, in the exercises we briefly discuss several other modern results, including the secant condition for stability, the Gerschgorin circles, the stability of the internal model control structure, the Hermite-Biehler theorem and its extensions, and the stability of linear time-delay, linear time-varying, and switching systems. The chapter features about 60 examples and worked-out problems as well as over 150 exercises to improve the learning of the subject. Two pertinent appendices at the end of the book provide further specialized details of the topic.

Chapter 4: In this chapter we are interested in the characteristics of the time response of the system for different inputs. We consider both the transient state and the steady state of the response. The inputs we consider are the impulse, step, ramp, parabolic, and sinusoid. For a reason that will become clear in the text, in the literature the response of the system to a sinusoidal input is called the frequency response of the system, and thus sinusoidal inputs and bandwidth are often treated at the end of the course, along with Bode diagrams. In reality, however, despite what the name suggests, this response also evolves in time and is in fact the time response of the system. We thus study sinusoidal inputs in this chapter, and the students will have more time to master this important topic along with the concept of bandwidth. Particular attention is paid to second-order systems.
Included in this chapter are also the bandwidth of the system, a study of high-order systems, model reduction, the effect of the addition of a pole/zero, the performance region, inverse response, analysis of the actual system considering the effects of sensor dynamics and delay, and an introductory study of and design for robustness in both stability and performance. The chapter presents about 70 examples and worked-out problems in addition to over 200 exercises so as to assist and increase the learning of the lessons.

Chapter 5: In this chapter we learn how to find the locations of the closed-loop poles of the system (the so-called root locus of the system) without explicitly computing them. The method is called the root locus and uses the open-loop information of the system. More precisely, with some simple rules we draw the root locus of the system from the open-loop poles and zeros of the system. We also consider the problem of the root contour, which is the root-locus problem when more than one parameter of the system varies. Included in this chapter are finding the appropriate value of the gain from the root locus for satisfactory performance, and the implications of the root locus for controller design. Numerous sophisticated and instructive systems, especially NMP ones, are discussed. The chapter includes about 80 examples and worked-out problems as well as more than 250 exercises to enhance the learning of the subject.
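The root-locus idea is easy to illustrate numerically. For a hypothetical unity-feedback loop with open loop L(s) = K/(s(s+2)) (an illustrative plant, not an example from the book), the closed-loop poles are the roots of s² + 2s + K = 0, and sweeping K traces their locus; a minimal pure-Python sketch:

```python
import cmath

def closed_loop_poles(K):
    """Roots of the characteristic equation s^2 + 2s + K = 0,
    i.e. 1 + K/(s(s+2)) = 0 for a unity-feedback loop."""
    disc = cmath.sqrt(4 - 4 * K)
    return (-2 + disc) / 2, (-2 - disc) / 2

# Sweep the gain: for 0 < K < 1 the poles are real and distinct,
# they meet at s = -1 when K = 1 (the breakaway point), and for
# K > 1 they form a complex-conjugate pair with Re(s) = -1.
for K in (0.5, 1.0, 4.0):
    p1, p2 = closed_loop_poles(K)
    print(K, p1, p2)
```

This reproduces, for one simple system, exactly the picture the root-locus rules of the chapter predict: two real branches that break away from the real axis at s = -1 and then move vertically.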


PART II: In this part of the book, Chapters 6-9, we present what is generally referred to as the frequency domain methods. This refers to the experiment of applying a sinusoidal input to the system and studying its output. There are basically three different methods for representing and studying the data of the aforementioned frequency response experiment: the Nyquist plot, the Bode diagram, and the Krohn-Manger-Nichols chart. We study these methods in detail. We learn that the output is also a sinusoid with the same frequency but generally with a different phase and magnitude. By dividing the output by the input we obtain the so-called sinusoidal or frequency transfer function of the system, which is the same as the transfer function with the Laplace variable s substituted by jω. Finally, we use the Bode diagram for the design process.

Chapter 6: In this chapter we study the Nyquist plot. The analysis starts with a review of the principle of the argument and how it is used to obtain the Nyquist stability criterion. We then focus on the details of drawing the plot. This includes the behavior of the plot at the high- and low-frequency ends, the cusp points of the plot, handling of the proportional gain, the case of j-axis zeros and poles, and the relation with the root locus. The presentation continues with the introduction of the gain, phase, and delay margin concepts and a stability analysis in terms of them. The chapter features about 70 examples and worked-out problems as well as over 200 exercises to boost the learning of the lessons.

Chapter 7: In this chapter we introduce the Bode diagram. We learn how the Bode magnitude and phase diagrams are constructed, with emphasis on details for second-order systems, and how they are beneficial to control systems analysis and synthesis. The inverse problem, identification of the transfer function from the Bode diagram, is also studied.
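The substitution s = jω described above is easy to check numerically. For the hypothetical first-order system G(s) = 1/(s+1) (chosen only for illustration), evaluating G(jω) gives the gain and phase of the sinusoidal steady-state output; a minimal sketch:

```python
import cmath, math

def freq_response(omega):
    """Evaluate G(jw) for G(s) = 1/(s + 1): the gain and phase of the
    sinusoidal steady-state output for the input sin(w*t)."""
    G = 1 / (1j * omega + 1)
    return abs(G), cmath.phase(G)

# At the corner frequency w = 1 rad/s the gain is 1/sqrt(2)
# (about -3 dB) and the phase lag is 45 degrees.
gain, phase = freq_response(1.0)
print(gain, math.degrees(phase))
```

These are precisely the values that the Bode magnitude and phase diagrams display graphically over a whole range of frequencies.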
The relation between the Bode diagram and the steady-state error of the system is discussed as well. Included are a study of nonminimum-phase systems and the concepts of gain margin, phase margin, delay margin, bandwidth, stability, and sensitivity. Special attention is paid to the case of multiple crossover frequencies. Relations to the Nyquist plot and root locus are also discussed. The chapter includes about 50 examples and worked-out problems along with over 200 exercises to improve the learning of the subject.

Chapter 8: In this chapter we study what is known in the literature as the Nichols chart, whose essence is actually due to Krohn, and which may be best referred to as the Krohn-Manger-Nichols (KMN) chart. We start with the introduction of the S-circles, M-circles, and N-circles, and end up with the M- and N-contours. We learn how the KMN plot is constructed and used in the analysis and synthesis of a control system. Also included is a study of the system features gain margin, phase margin, delay margin, bandwidth, stability, and sensitivity in this context. Special attention is paid to the case of multiple crossover frequencies and nonminimum-phase systems. Relations to the method of robust quantitative feedback theory as well as to the Bode diagram, Nyquist plot, and root locus are discussed. The shortcomings of the method are explained, and we learn why it is becoming more and more obsolete
over time. The chapter includes about 30 examples and worked-out problems as well as over 200 exercises to facilitate the learning of the lessons.

Chapter 9: In this chapter we study the design procedure in the Bode diagram context. Through some motivating examples we introduce three basic dynamic controllers (lead, lag, and lead-lag) and learn how they affect the system and when they should be used. Their simplifications as PI, PD, and PID controllers are also studied. The effect of the controllers is also studied in the Nyquist and root-locus contexts. The development is followed by a general design procedure in the Bode diagram context. Specialized design and tuning rules for PID controllers are also included. Next, design in the IMC and Smith predictor structures is briefly presented. Finally, implementation of the controllers with operational amplifiers is discussed. The chapter features about 40 examples and worked-out problems, many of them with unstable and NMP plants, as well as over 400 exercises to enhance the learning of the reader.

PART III: In this part we introduce some miscellaneous advanced topics under the theme of fundamental limitations, which should be included in this undergraduate course at least at an introductory level. We build bridges between some seemingly disparate aspects of a control system and theoretically complement the previously studied subjects.

Chapter 10: In this chapter we briefly study the relation between time and frequency domain constraints, ideal transfer functions, controller design via the TS method, interpolation conditions, integral and Poisson integral constraints, constraints implied by poles and zeros, actuator and sensor limitations, delay, eigenstructure assignment, eigenvalue sensitivity, noninteractive performance, robust stabilization, and positive systems.
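One of the interpolation conditions just mentioned is easy to see numerically: at a right-half-plane zero z of the open loop, L(z) = 0, so the sensitivity S = 1/(1+L) satisfies S(z) = 1 no matter how the controller gain is chosen. A minimal sketch with a hypothetical plant (the plant and gains below are illustrative, not from the book):

```python
def S(s, K):
    """Sensitivity S = 1/(1 + L) for the hypothetical open loop
    L(s) = K (1 - s) / ((s + 2)(s + 3)), which has a nonminimum-phase
    (right-half-plane) zero at s = 1."""
    L = K * (1 - s) / ((s + 2) * (s + 3))
    return 1 / (1 + L)

# Interpolation constraint: at the open-loop RHP zero s = 1 we get
# S(1) = 1 for every value of K, so the sensitivity cannot be reduced
# there by tuning the gain -- a small instance of the fundamental
# limitations studied in this chapter.
for K in (0.5, 1.0, 4.0):
    print(S(1.0, K))   # prints 1.0 each time
```

At any other point S depends on K; the constraint pins S only at the open-loop RHP zeros (and, dually, T = 1 at the open-loop RHP poles).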
Among the numerous things we learn in this chapter are that the transfer function that is ideal with respect to sensitivity is not consistent with setpoint-tracking requirements; that, in general, when we reduce sensitivity in one frequency range we inevitably increase it in another; that, in general, small imaginary zeros result in large control inputs; and that, in general, pole placement by output feedback, the ideal minimal eigenvalue sensitivity, and noninteractive performance may not be achievable. The chapter contains about 80 examples and worked-out problems in addition to over 150 exercises to boost the learning of the subject.

Appendices: The book contains seven appendices. Appendix A is on the Laplace transform and differential equations. Appendix B is an introduction to dynamics. Appendix C is an introduction to MATLAB®, including Simulink. Appendix D is a survey of stability concepts and tools; a glossary and road map of the available stability concepts and tests is provided, which is missing even in the research literature. Appendix E is a survey of the Routh-Hurwitz method, also missing in the literature. Appendix F is an introduction to genetic algorithms as a randomized optimization technique, along with convex and nonconvex problems. Finally, Appendix G
presents sample midterm and endterm exams, which have been class-tested several times. These appendices include about 80 examples and worked-out problems as well as about 20 unsolved problems.

Unique features:

- Presenting a detailed contemporary perspective of the field of systems and control theory and applications;
- a contemporary approach even for classical issues;
- discussing and correcting numerous mistakes in the available literature;
- collecting and discussing numerous important points which are scattered in the research literature;
- many new results and/or details in Chapters 3-10 and Appendices A and D;
- a detailed glossary and road map of stability results scattered throughout the literature;
- addressing numerous sophisticated NMP and unstable plants in our examples;
- a chapter on advanced topics in fundamental limitations;
- discussing alternative facets of the lessons, not available in the literature, with the help of especially designed versatile problems: over 600 examples and worked-out problems, along with their simulation source codes available on the website of the book for download, as well as 1800 unsolved exercises;
- presenting the latest results, many of which were obtained in the 21st century, wherever appropriate;
- allocating a subchapter to Further Readings in each chapter, where more advanced topics and references are introduced.

Our final words are about using the book. Our objective is to write a comprehensive, self-contained reference book for newcomers to control that may also be used as the text for a first course on linear control systems. As such, as many pertinent results as are appropriate for the contemporary undergraduate level are collected in the book. The instructor may wish to omit some parts from the syllabus, especially in Chapters 1, 3, 4, 9, and 10, due to taste and lack of time. For the convenience of the reader, the accompanying CD (which is also available on the webpage of the book for download) contains all the MATLAB and SIMULINK codes (of the 2015a release) of the examples and worked-out problems. The naming of the files is as follows. For MATLAB files:

Example X.Y is named ExampleXpointY.m. Hence, for instance, to run the m-file of Example 1.3 (which is Example 3 of Chapter 1), you simply go to Chapter 1 and run the m-file Example1point3 in the MATLAB environment, which produces the desired figures. (The same for Problem X.Y.)


For SIMULINK files:

Example X.Y is named ExampleXpointYS.slx. Hence, for instance, to run the slx file of Example 1.3 (which is Example 3 of Chapter 1), you simply go to Chapter 1 and run the slx file Example1point3S in the SIMULINK environment. Then in the MATLAB environment you run the corresponding m-file Example1point3, which produces the desired figures. (The same for Problem X.Y.)

The reader is encouraged to visit the companion website of the book on the Elsevier homepage (https://www.elsevier.com/books). Alternatively, a simple search on the web directs the reader to the exact page. Updates and relevant information are announced there. Several instances of faulty operation of MATLAB are pointed out in this book. Rectifying some of them is straightforward and they will probably be put right in near-future releases of MATLAB. The reader should be aware that when we discuss the functionality of MATLAB we refer to its 2015a release. We have made every possible effort to remove the mistakes that might have inadvertently crept into the text. We would appreciate the feedback of our colleagues who choose the book as their text. Future editions will correct the aforementioned oversights and address their concerns. On the other hand, we add that several questions and exercises of the book are advanced topics whose answers deserve publication in scientific forums. If the reader manages to publish an article on them, we would appreciate being informed of it so that it can be announced in the "Updates" on the webpage of the book.

Yazdan Bavafa-Toosi
Mashhad, Iran
July 2017
[email protected]

Acknowledgments

The author is grateful to Drs K.J. Astrom (Sweden), S.P. Boyd (United States), G.C. Goodwin (Australia), M. Halpern (Australia), M. Fazel (United States), J.S. Freudenberg (United States), Z.H. Guan (China), P.W. Heath (United Kingdom), H. Hemami (United States), V.L. Kharitonov (Russia), M. Morari (Switzerland), G.G. Naumis (Mexico), A. Saberi (United States), D.D. Siljak (United States), and S. Skogestad (Norway) who kindly provided their research articles and let him use them in his book. He also thanks Drs N. Safari-Shad (UW Platteville, United States), S. Tavakoli (U Sistan and Baluchestan, Iran), K. Hosseini Suni (Ferdowsi U, Iran), M. Tavakoli (U Alberta, Canada), H. Gholizade-Narm (TU Shahrood, Iran), R. Ghabcheloo (TU Tampere, Finland), and R. Banaei Khosrowshahi (Honeywell Aerospace, Canada) for their careful and constructive critiques on the summary of the manuscript. Additionally, the original Word documents have been edited and transformed into book format by the publication department of Elsevier and the author wishes to thank them for their nice work. The positive attitude, assistance, and efforts of all the members of the Elsevier group in all the stages of the work are also appreciated.

1 Introduction

1.1 Introduction

The course linear control systems is a three-credit undergraduate course. It is or can be offered in a variety of disciplines and departments including electrical engineering, mechanical engineering, industrial engineering, aerospace engineering, civil engineering, bio-engineering and bio-medicine, chemical and petroleum engineering, physics, mathematical sciences, economics, management and social sciences,1 etc. Electronics engineering! What do you think of when you hear this term? What image do you have of it? Your answer is probably diodes, transistors, capacitors, resistors, electrons, radio, TV... And your answer is sort of correct. Mechanical engineering! What do you think of when you hear this term? What image do you have of this term? Perhaps your answer is springs, dashpots, gears, levers, mechanical arms and cranes, robots... Well, your answer is kind of correct. Now consider other branches like Aerospace engineering! Civil engineering! Chemical engineering! What perception and image do you have of these fields? Your answer includes space shuttles and satellites, structures, roads, urban infrastructures, chemical products, etc. Your answer shows that you do have an image of all these fields, and your answer is to some extent correct. How about control! Is your answer "A system to be controlled"? Well, your answer is sort of correct. But what is that system? The answer is that it can be anything, for instance society, economy, ecology, or a chemical/pharmaceutical/biological/mechanical/electrical/structural product/device/system. In fact control is nothing in itself, but a theory dealing with all these systems. This mathematical theory (thus also at home in mathematics departments)—i.e., tools, algorithms, and rules, for analysis and synthesis, to achieve the desired objectives—is what control is. More precisely, it is an interdisciplinary field where mathematics is used for the purpose of controlling a phenomenon that can have any nature.
In this book we take a journey from the rudiments of control theory and practice to its status quo. It can be used by any person interested in becoming familiar with the topic. The materials that are appropriate for a first course on linear control systems are treated in detail. Other materials that are outside the scope of this book are also briefly discussed/named so that the reader perceives the contemporary general picture of the field. Each chapter features numerous especially designed examples, worked-out problems, and exercises to shed more light on the details of the

1. It is good to know that in 2014 the control society witnessed the inauguration of the IEEE Transactions on Computational Social Systems as well as the IEEE Transactions on Control of Network Systems. (IEEE stands for Institute of Electrical and Electronics Engineers.)

Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00001-X © 2017 Elsevier Inc. All rights reserved.


lessons and to facilitate and enhance the learning of the reader. They are indispensable parts of the text. Guidelines for further research are provided at the end of each chapter. The appendices of the book make it self-contained. In the rest of this chapter we talk about the raison d'être of control in Section 1.2, the history of control in Section 1.3, and the philosophy behind feedback in Section 1.4. The magic of feedback is discussed in Section 1.5. Physical elements and abstract elements of a control system are studied in Sections 1.6 and 1.7. Then we briefly comment on the design procedure in Section 1.8. Types of control systems and open-loop and closed-loop structures follow in Sections 1.9-1.11. Next we proceed to the two-degree-of-freedom (2-DOF) control structure, the internal model control (IMC) structure, and the Smith predictor in Sections 1.12-1.14. The modern representation of control systems and the status quo of the field are presented in Sections 1.15 and 1.16. The chapter is wrapped up by the summary, further readings, worked-out problems, and exercises in Sections 1.17-1.20.

1.2 Why control?

Two tangible and most noticeable reasons to control something are: (1) To keep a variable near a constant target value, also called regulation, e.g., the speed of rotation of a computer disk, moisture content of paper in paper making, chemical composition of products (drugs, pills, etc.), and physical characteristics of products (width of paper, metal plates, wood, etc.), and (2) To keep a variable near a varying target value, also named tracking or servo, e.g., a robot manipulator along a trajectory, homing devices (missiles, rockets, and aircraft), and target tracking by antennas and cameras. We should add that sometimes the use of the term regulation is restricted to the class of problems whose target value is zero, and the term tracking is used for the class of problems whose target values are nonzero, either constant or nonconstant. The most advanced control system is the human body, which is replete with control mechanisms. Regulation includes temperature, blood sugar, etc., whereas tracking comprises daily activities, reading this book, going to work and pursuing our daily schedule, and so on—God must be the supreme control engineer! (Note that in the previous sentence the tracking problems are beyond the conventional tracking problems in control systems as exemplified in the above paragraph.) There are numerous reasons to control a system. In the broad sense all these reasons fall into the following categories:2 stability (in some sense safety), linearization, performance, reliability, and economics. Roughly speaking, and in most cases, a system is said to be stable (or in some sense safe) if it just works, either well or poorly, and thus in particular does not explode, break, burn, etc. (See item 1 of Further Readings.) Linearity of the system has the common meaning as in the mathematics literature and we will deal with it in Chapter 2, System Representation. High-gain

2. It should be emphasized that some consider different categorizations. In particular, robustness and optimality considerations are sometimes considered as separate categories.


feedback control pushes a nonlinear system towards linearity.3 Performance includes regulation/tracking, time-response specifications such as rise time, settling time, and steady-state error, robustness/low sensitivity, optimality such as minimum energy and minimum pollution, etc. That is, the system must work well and as desired. By reliability (or failure tolerance) we mean that the system works and keeps its performance as unchanged as possible if a failure occurs. Economics (or savings) is especially relevant for continuous (in the sense of long-running) processes, mass production and usage, and large-scale industries. For instance, small savings in the gas consumption (or, in general, optimizing and enhancing the efficiency) of cars, public transportation services, power plants, and printed circuit boards of mass electronic products will result in huge economic savings.

1.3 History of control

History of control is not apart from the history of science and humankind in general. This history is interleaved with many unfortunate periods. In the past millennia and centuries much historical evidence and many scientific achievements included in books and libraries were burned to ashes, and many people—including scientists—were massacred when a country was invaded. This is in particular true of Iran. Before about 650 AD Iran was the most advanced country in the world. Owing to the practice and teachings of divine religion, scientific research was firmly intertwined with and rooted in the educational system of the empire. Many foreign scientists and philosophers actually came to Iran for (continuing) their education. Astrology, physics, chemistry, mathematics, medicine, music, agriculture, industries (e.g., metal and textile), etc. were very advanced in the Iran of that time. In fact, scientists and musicians were highly revered by the people and were among the nearest consultants to the kings. Through the invasion of Iran by Arab Moslems in about 650 AD (starting from many years before that) the situation changed. For a nice account of this un-nice history the reader is referred to such history books as "History of Prophets and Kings" (Al-Tabari, 1989), which is a millennium-old book. Anyway, Iran was under the siege of foreigners. Nevertheless, despite the conditions, Iranian people continued their culture and tradition of scientific research, many historical advances and achievements were made in the subsequent centuries, and the country rose to prominence again. This continued until about 1220 AD, when the country was invaded again, this time by Mongol Moslems, and history was repeated. In recent centuries invasions of the country in the "modern fashion" have had a long-lasting hindering effect on its progress. Well, coming back to our theme of control, it may be a matter of debate who first invented the feedback mechanism.
Countries with ancient civilizations may claim that their engineers designed some feedback systems for the control of water levels in dams and tanks (or for similar tasks) ages before other countries. What is

3. However, a precise proof and mathematical exposition of this feature is beyond this undergraduate course.


true is not known for sure. However, regardless of who first accomplished it, the accomplishment itself is certain, since there is historical evidence for it. Practices of control mechanisms and ideas date back to time immemorial. One of the first applications of control ideas was float-valve regulation for keeping accurate track of time over three millennia ago. Other applications were designing float regulators for maintaining a constant level of oil in a lamp or a constant level of water in pools. In recent times some historical events are as follows: After the invention of the steam engine in the 18th century, the need for controlling the speed of the engine was felt. It was in 1769 that J. Watt invented the flyball governor for this purpose. The device was theoretically analyzed by J. C. Maxwell in 1868. About the same time a theory for regulators was supplied by I. A. Vyshnegradskii. In 1855, 1877, 1892, 1895, and 1898, C. Hermite, E. J. Routh, A. M. Lyapunov, A. Hurwitz, and H. Poincaré, respectively, proposed their methods for stability analysis/determination of dynamical systems (see also Appendix D). In 1910 E. A. Sperry developed the gyroscope and autopilot for airplanes. At this time control theory and practice were being developed in ad hoc manners in the US, Europe, and Russia. In Russia it was mostly dominated by mathematicians dealing with differential equations describing the problem of interest. In Europe and the US electronic amplifier design was the main concern of engineers. Following this line of thought, the precursor of stability and robustness analysis and design methods stemmed from the work of the physicist H. Barkhausen around 1921 [4] (see Chapter 6 for details), about two decades before its rediscovery and publicizing by H. W. Bode. One year later, in 1922, N. Minorsky, who was concerned with the automatic steering of ships, showed how stability of the system can be determined from the governing differential equations of the system.
Five years later, in 1927, the first feedback control system was designed by H. S. Black (see the following Section 1.4). Five years later, in 1932, H. Nyquist introduced his criterion for the determination of the closed-loop stability of LTI systems from their open-loop data. Two years on, in 1934, H. L. Hazen introduced the term servomechanism for position tracking systems and proposed a relay feedback theory for it. Four years later, in 1938, H. W. Bode introduced his Bode diagrams for the analysis and synthesis of control systems—and in the 1940s rediscovered and publicized the gain and phase margin concepts. In 1947 the Krohn-Manger-Nichols chart [5] and the bounded-input bounded-output stability concept were introduced by E. Krohn and by H. M. James & P. R. Weiss, respectively. In 1948 W. R. Evans proposed the root locus technique. In 1950 L. A. Zadeh published his classical results on linear time-varying systems. In the late 1940s and early 1950s pioneering steps on discrete-time control were taken independently in Russia (Y. Z. Tsypkin), Europe (D. F. Lawden and R. H. Barker), and the US (W. Hurewicz, W. K. Linvill, J. R. Ragazzini, L. A. Zadeh, E. I. Jury). In 1952 L. A. Zadeh proposed a general mathematical theory for linear systems in function spaces.

4. It is quite unfortunate that this point remains unacknowledged by the majority of the control community.
5. The correct and fair name is probably the Krohn-Manger-Nichols chart (see Chapter 8). It was proposed by E. Krohn.


In 1953 and 1956 S. J. Mason proposed his signal flow graph method. On the other hand, the technological advancements of Germany resulted, by worldwide consensus, in the first international conference on automatic control being held in Heidelberg in 1956. At that conference the participants pledged to promote the formation of national organizations and also to found an international organization of automatic control, which led to the inauguration of the International Federation of Automatic Control (IFAC) in 1957. With the passage of time the scientific community witnessed a boom of further discoveries. The main technological achievements of the time were the successful launch of the first earth satellite, Sputnik, in 1957, and the landing of the first rocket on the moon in 1959, both by Russia. Subsequently the first IFAC world congress was held in Moscow in 1960. After many years of extensive research by the scientific community, new classical results in the state-space domain were obtained in the 1960s. At the same time, formal ramification into and formation of different branches of control theory (like optimal, nonlinear, robust, etc. control; as well as identification, filtering, etc.—see Section 1.16) took place in the late 1950s and early 1960s through distinctive contributions of the time whose details go well beyond the scope of this book. Of course, pertinent results had started to appear at least one or two decades earlier. Some typical fundamental results in this category are the filtering theory of Wiener (1942) and its generalization by Zadeh and Ragazzini (1950) and Zadeh (1953), the initiation of cybernetics by N. Wiener in 1948 (see Section 1.16.1), the finite-time control problem by G. Kamenkov in 1953 (see Appendix D), the formulation of and classical results on identification theory by L. A. Zadeh in 1956, who coined the term identification (see Chapter 2), classical converse results on Lyapunov stability theory by J. L. Massera and J. Kurzweil in 1949, 1950, and 1956, and the invariance principle by E. A. Barbashin and N. N. Krasovskii in 1959 and J. P. LaSalle in 1960.

1.4 Why feedback?

From one standpoint, there are two types of control: open-loop control and closed-loop or feedback control. These will be rigorously dealt with in the rest of this book, where the characteristics of each type are precisely studied. For the moment some general information will suffice. The structure of an open-loop control system is shown in Fig. 1.1.

Figure 1.1 Schematic of an open-loop control system.

To explain why the input of the system, i.e., the reference input r, is chosen to be yd, i.e., the desired output, we start by noting that in an open-loop control system there is in the first place no controller C and we barely have the plant. The plant is, for example, a room whose temperature is defined as the output. If the temperature is as desired, i.e., y = yd, owing to the environmental conditions, there is no need to exert a control. Otherwise an operator input, like a cooler or heater, must be used. But how much should they affect the plant? In other words, how


should the plant know that its output is desired to be yd? The answer is clear: we must tell it, and this is done through its input gate. This is the only way we can talk to the plant, so r is chosen as yd. If we have accurate enough knowledge about the plant P, open-loop control might work, depending on the application. Some typical examples are traditional idle-speed control of an engine, the traditional printer, traditional traffic lights, traditional screen display in computer monitors, and the traditional washing machine. We will shortly see in Section 1.10 how the controller C must be designed. The structure of closed-loop control or feedback control is given in Fig. 1.2. The explanation is as follows: In open-loop control, after the exertion of the input, there is no measurement of the output. So we do not know whether y = yd or not. To verify this, we must compare them, i.e., we must form their difference, and this is done by feeding back the output to the input through a comparison element or the feedback element as shown in the right panel of Fig. 1.2.

Figure 1.2 Schematic of a closed-loop control system. Left: abstract; Right: actual.

In comparison to open-loop control, if our knowledge of the plant P is not precise, e.g., because of aging, friction, etc., open-loop control will not work well, if it is applicable at all. Thus we use feedback control. Of course, this lack of precise knowledge is not the only reason we use feedback. Sometimes we have to use feedback to make the system stable (i.e., roughly said, to have it work at all) in the first place, or to have it work as desired, as will be discussed later. Some examples include the closed-loop versions of the aforementioned open-loop control systems, robot manipulators, homing devices, and sophisticated industrial systems. We wrap up this part by stressing that for the sake of instructiveness we have introduced the variable yd in this section. Otherwise it is superfluous and we can simply use r instead, and this is actually what we do in the rest of the book, with the understanding that r = yd. In Section 1.11 we shall introduce a different yd (along with yr and yn), which should not be mistaken for the desired output yd of this section.
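The benefit of feedback under imprecise plant knowledge can be made concrete with a tiny numerical sketch (our own illustration, not from the book). Suppose the plant is a static gain whose true value differs from the model used at design time; all numbers below (true gain 1.2, assumed gain 1.0, loop gain K = 100) are hypothetical:

```python
# Hypothetical illustration: a static plant with true gain 1.2, while the
# designer believes the gain is 1.0 (e.g., due to aging or friction).

r = 1.0          # desired output yd (setpoint)
P_true = 1.2     # actual plant gain
P_model = 1.0    # gain assumed at design time

# Open-loop: the controller inverts the *model*, u = r / P_model,
# so the output is y = P_true * r / P_model.
y_open = P_true * (r / P_model)

# Closed-loop with a high-gain proportional controller K:
# steady state of y = P*K*(r - y)  gives  y = P*K/(1 + P*K) * r.
K = 100.0
y_closed = (P_true * K) / (1.0 + P_true * K) * r

print(f"open-loop output:   {y_open:.3f}")    # 1.200  (20% error)
print(f"closed-loop output: {y_closed:.3f}")  # 0.992  (<1% error)
```

The open-loop scheme inherits the full 20% modeling error, whereas the high-gain loop shrinks it to under 1%; this is a first taste of the sensitivity reduction discussed later in the book.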

1.5 Magic of feedback

The aforementioned five reasons of Section 1.2 and a lot more (including disturbance rejection—see the next section) are why we use feedback, and they constitute what is achieved by feedback. This we summarize as "the magic of feedback!" The use of this term will be further clarified later, where some open-loop and closed-loop systems are compared in detail. Feedback is the most pervasive mechanism in nature—it is ubiquitous. When a speaker asks whether the audience hears him/her clearly or not, so that he/she can speak louder or more quietly, the speaker invokes a feedback mechanism. When a


horse-rider or a biker rides, they use a feedback mechanism to keep their balance. When a government exercises its controls (subsidy, loan, bond, interest rates, etc.) with a view to stabilizing the national economy, it attempts to use feedback. When the presence of employees or students is checked by a roll-call mechanism, feedback is used. When a football coach applies his controls (tactics, replacements, etc.) in order to achieve his objectives, he invokes a feedback mechanism. When one's body shivers or sweats, it uses a feedback mechanism to control its temperature. It is feedback by which the temperature in a building or oven is kept within a small neighborhood of a desired setting. It is feedback by which unmanned aircraft fly without the intervention of human beings.

Remark 1.1: When we talk about control, either in this book or in the general literature on control, we mean closed-loop or feedback control, unless otherwise explicitly mentioned. For example, this course is called linear control systems, not linear closed-loop control systems!

1.6 Physical elements of a control system

A typical multitank fluid level and flow control system is shown in Fig. 1.3. To consider all control systems and plants in a unified pictorial framework, we use Fig. 1.4 in which the physical elements of a general control system are shown (see also Problem 1.5). We should clarify that the plant may have several sub-plants as in Fig. 1.3 and that in general there are interactions between the sub-plants. A typical example is the electric power system of a country where there are several power stations, and numerous “loads” that consume the energy.

Figure 1.3 Schematic of a typical three-tank fluid level and flow control system (with sensor and actuator labeled).

Figure 1.4 Physical elements of a control system: operator, control elements, amplifier, actuator, plant, and sensor, with disturbance and sensor noise entering the loop.


The objective of control is to stabilize the voltage and frequency at the loads (which are the end plants). While such systems may be pictorially depicted as the interconnected version of several systems as in Fig. 1.4, we may simply use a single system as in Fig. 1.4 with the understanding that the signals around the loop are not scalar valued but vector valued, and that each block (e.g., Amplifier) constitutes several sub-blocks. Therefore, a 3-input 2-output system, for example, is not shown by a system which has 3 arrows entering its plant and 2 arrows coming out of its plant, but with a single vector-valued arrow at its input and output. It is noteworthy that the plant is not necessarily a physical one, like that in Fig. 1.3. The economy of a nation, for instance, is a nonphysical plant to control. For this plant, inputs are the monetary and fiscal policies, and outputs are the gross national product, growth rate, unemployment reduction (rate), inflation reduction (rate), etc. Sensors can be various relevant criteria and standards (observing whether the desired outputs are achieved, e.g., by considering the satisfaction of people or by statistical results/polling). Amplifiers and actuators can be support and encouragement for creative industries and job-creating companies (e.g., by levying less tax on them) and injection of some money from the government savings (e.g., the oil revenue of oil-exporting countries). The latter case is in the form of subsidy on some goods or founding low-price governmental shops so that private shop owners have to lower their prices as well. Finally, sensor noise refers to mistakes in polling and statistics, whereas disturbance may refer to fraud, war, or serious natural disasters, like a catastrophic flood or earthquake, which considerably affect the economy. In the case of a mechanical plant such as moving a load by a robot manipulator, the manipulator is rotated by a motor, which is the actuator. The control elements are a computer, microprocessor, or even some TTL ICs. The output of these devices is in the range of volts (5, 12, 24, etc.) and milliamperes (e.g., 1-10). This is not enough to drive a motor (i.e., the actuator, needing, e.g., 220 V and 10 A), and thus an amplifier is needed in between. These are the power amplifiers we design in courses on electronics.

1.7 Abstract elements of a control system

The abstract elements of a control system are shown in Fig. 1.5, which is the abstract version of Fig. 1.4. As shown, the actuator and amplifier are combined with and hidden in the controller box.6 The boxes C, P, Ps denote the controller, plant, and sensor. (We do not use the letter S for the sensor and reserve it for another purpose.) The signals r, u, y, ym denote the setpoint (which is the desired output), control signal, actual output, and measured output. Here as well we note that the signals around the loop are in general vector valued and the blocks constitute several sub-blocks. (It is also possible to include the amplifier and actuator in the plant.) It must be noted that in the aforementioned example of the economy of a nation, taking a poll (statistics) takes some time. That is, the sensor has some delay.

6. An actual system may be more sophisticated than this, with more elements around the loop—see Worked-Out Problem 1.5.


Figure 1.5 Abstract elements of a control system.

That is why a box has been considered for its dynamics. This means that even if n = 0,7 r is constant, and e = 0, ym = y does not hold at all times; i.e., the measured output equals the actual output not at all times but only, roughly, after the settling time of the sensor. We will talk more about delay in Chapter 2, where we see that—at least in classical physics—all actual systems are Lipschitz and in fact beyond that: the slope of the output at the starting time is zero. In fact, in practice, every system (including any sensor) has some delay. Moreover, note that the representation of measurement/sensor noise also includes measurement/sensor error, if any.
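As a toy illustration of this point (ours, not the book's), model the sensor Ps as a first-order lag with unit DC gain and an assumed time constant tau. Even with zero noise and a constant actual output y, the measured output ym only approaches y after roughly the sensor's settling time:

```python
import math

# Hypothetical sketch: a noise-free first-order sensor Ps(s) = 1/(tau*s + 1)
# measuring a constant actual output y.  The values of tau and y are
# illustrative assumptions, not from the book.

tau = 2.0   # sensor time constant (seconds)
y = 1.0     # constant actual output

def y_measured(t):
    """Step response of the sensor: ym(t) = y * (1 - exp(-t/tau))."""
    return y * (1.0 - math.exp(-t / tau))

print(y_measured(0.0))      # 0.0    -> ym differs from y initially
print(y_measured(4 * tau))  # ~0.982 -> within ~2% after the settling time
```

So ym catches up with y only after about four time constants, which is exactly the "roughly after the settling time of the sensor" statement above.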

1.8 Design process

The process can largely be divided into four rather independent steps, namely:

- Modeling
- Formulation
- Computation
- Implementation

Modeling refers to finding a model that represents the problem at hand. Formulation refers to formulating the problem and finding the solution in the form of a formula (usually for the control signal). Computation refers to numerically producing the answer of the former step.8 Implementation means designing (and probably manufacturing) and implementing a device (or rather a system—a combination of different devices) that performs the task. To be more specific, note that the least objective of control is to keep y as close as possible to r. That is, tracking—and note that tracking includes regulation. It may be more; in fact there are often some time specifications, energy specifications, etc., as well. Regardless of what exactly the control objectives are, the above-mentioned design process is further divided into

7. In the literature the letters n and v are both used to refer to noise. Because v is used in Section 1.15 for another purpose, we prefer to use n for noise, although, as we shall see in Chapter 2, n is used for other purposes as well! Do not mistake them for each other! On the other hand, the letters d and w are both used to denote the disturbance. Similarly, since w is used in Section 1.15 for another purpose, we here prefer to use d for disturbance.
8. More explanation is supplied in Section 1.16.1, in the paragraph on "scientific computing," and also in Example 1.9.


two distinct parts dealing respectively with the physical and abstract levels, as discussed in Sections 1.6 and 1.7. The outcomes of the design at the abstract level are the controller C (formula/operator),9 its computing engine,10 the operator-machine interface in theory, amplifier, actuator, and sensor. To this end we must have (1) An approximation of the plant P,11 which is called a mathematical model of the plant, (2) Characteristics of r and d in the time domain or frequency domain, (3) Models of actuators and sensors, the latter including sensor noise, (4) A set of performance specifications, and probably (5) Restrictions on the structure of C, e.g., the proportional-integral-derivative (PID) controller u(t) = K(r - ym) + (1/Ti) ∫0^t (r - ym) dt + Td d(r - ym)/dt. In this case the design is thus reduced to specifying K, Ti, Td. However, if we opt for digital control—which is certainly the case at the present time—then sophisticated controller structures can easily be programmed and implemented, and item (5) is thus irrelevant. In fact, for complicated control objectives (like almost all optimal control problems) and plants where the controller has a sophisticated structure, this item is irrelevant (but some other constraints may exist, as we shall see in the rest of the book). On the other hand, the outcomes of the design at the physical level are again the controller C, the computing engine, operator-machine interface, amplifier, actuator, and sensor, but this time all implemented. To this end, one is engaged with: (1) Selection of actuators and sensors. Sensors must have minimum effect on the plant dynamics—they must not interact with the system. (2) Choice of the control elements. Now that digital controllers are pervasively used in practice, this step inevitably includes a computer or microprocessor, which gives rise to issues of both software and hardware. (3) Design of the operator-machine interface. This comprises the construction of alarms, displays, and backup devices. (4) Implementation, which is usually tied to other branches of electrical engineering and perhaps other fields of engineering. Depending on the application this may well include electronics, communications, mechanical engineering, chemical engineering, etc. We have implicitly assumed three points: (1) The designer has knowledge of the market (economics, availability of equipment, reliability of equipment and vendors, etc.), and (2) The designer has project and managerial skills. And last but not least: (3) The problem setting is given, i.e., it is known which variables are the inputs, which are the outputs, what input-output pairing or control configuration is chosen, where the sensors and actuators are placed, what control hierarchy and

9 Which is the outcome of the Formulation step.
10 Which is the outcome of the Computation step. For instance, suppose that the control signal contains integral, eigenspace, and mini-max terms. The computing engine refers to the algorithm for numerically computing the aforementioned integral, eigenspace, and mini-max problems as well as the basic operations.
11 Which is the outcome of the Modeling step. Note that this is needed in this course, which is the first course on control theory. Otherwise, model-free control strategies do already exist in theory and have been successfully implemented in practice as well. These are advanced topics which have been seriously investigated since the beginning of the 21st century. Their introduction, however, is much older and is traced back to the fuzzy control theory of the 1970s. The notions of fuzzy logic and fuzzy sets were initiated in 1962-63 and 1973 by the Iranian scientist Lotfi A. Zadeh, who emigrated to the US in the early 1940s. See also item 15 of Further Readings.

Introduction

[Figure 1.6 blocks: Management; Research & Development; Pilot Design; Software Experiment; Pilot Manufacturing; Pilot On-Board Test; Mass Production; Quality Test]

Figure 1.6 A simple schematic representation of an engineering project.

loops exist, etc. In terms of actual problems this last item is by no means a trivial matter and is the main stepping-stone to the solution. This problem is best cast in an optimization setting, and in fact there are different frameworks and layers, such as plant-wide optimization, top-down optimization, bottom-up optimization, control layer optimization, etc., which, independently of the managerial side of the problem, take an optimization viewpoint in each control layer of the whole process. Such issues go beyond the scope of this first undergraduate course, but in graduate courses and research you will gain more familiarity with them. It suffices for us to pose a simple question. Suppose you want to stabilize an inverted pendulum, or similarly a rod, on your palm. What should be the output, i.e., what should be measured by the sensor, or what should you look at? The palm-end position of the rod, the far-end position of the rod, the rod angle, the palm-end speed, etc.? As we shall see in future chapters, the answer does affect the performance and in fact the practical achievability of the control objective! Also note that the abstract and physical design levels interact to a great extent, and the overall process is iterative. For instance, performance specifications are rarely set beforehand, and are often the result of tradeoffs between the preliminarily desired set of performance specifications, hardware (actuators, sensors, computers, etc.) costs, and the needs of the end user. Today, on the industrial scale, this is dealt with in an optimization framework; see also Further Readings. Carrying out this task on an industrial scale is an engineering project, a team undertaking, and is also referred to as a business or company (see Fig. 1.6). In actual situations there are more divisions/subsystems and more feedback loops and interconnections in the system; this figure is just a simple general overview.
It is all but impossible for one person to be an expert in all parts of this business. This course/book is an introduction to design at the abstract level. Nevertheless, the reader is encouraged to become familiar with the other fields (physical level, management, marketing, etc.) in order to be more effective in the team in his or her future career. In light of this, students are encouraged to plan their courses so as to gain at least some acquaintance with those subjects. It may be beneficial for you to know that as students we ourselves took courses in management and economics as well, and since then have kept our contact with these branches of science!
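As a concrete illustration of the fixed-structure option (item (5) of the abstract-level design), here is a minimal discrete-time PID loop closed around a toy first-order plant. The plant 1/(s + 2), the gains, and the step size are our own illustrative choices, not prescriptions from the text:

```python
def pid_step_response(K=4.0, Ti=0.5, Td=0.05, dt=1e-3, t_end=10.0):
    """Discrete PID law u = K*e + (1/Ti)*integral(e) + Td*de/dt
    closed around the toy plant dy/dt = -2*y + u (Euler steps)."""
    y, integ, e_prev = 0.0, 0.0, 0.0
    r = 1.0                                       # unit-step setpoint
    for k in range(int(t_end / dt)):
        e = r - y
        integ += e * dt
        deriv = (e - e_prev) / dt if k else 0.0   # skip the initial step jump
        u = K * e + integ / Ti + Td * deriv
        e_prev = e
        y += dt * (-2.0 * y + u)                  # plant update
    return y
```

Once the structure is fixed, "design" indeed amounts to picking the three numbers K, T_i, T_d; the integral term drives the steady-state step error to zero.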

1.9 Types of control systems

It is clear that human beings can be classified in different ways, for instance according to their age (e.g., intervals), gender, place of birth and/or living, education, job, etc. Different combinations thus follow. As an example: 14-18, student, born/living in


Kermanshah. The same is true for control systems, i.e., different classifications exist from different standpoints. For example, control systems can be categorized as being linear or nonlinear, deterministic or stochastic, intelligent or otherwise, robust or otherwise, optimal or otherwise, adaptive or otherwise, hybrid or otherwise. Likewise, different combinations follow accordingly, e.g., nonlinear robust adaptive control systems. Linear control systems is the first and basic course in this arena. It must be noted that the term linear traditionally implies that all the elements around the loop are not only linear but also time-invariant: they are all linear time-invariant (LTI) systems, and so is the overall system. In particular, the controller is time-invariant at all times, not just in certain intervals of time or working regimes. A brief explanation of some of the technical terms introduced above is in order. If the system is not linear, it is nonlinear. These will be treated in more detail in Chapter 2. If the input(s) and parameters of the system are deterministic, the system is called deterministic, and if either of them is stochastic the system is called stochastic. If expert knowledge (based on fuzzy control theory) is included in designing the controller, the system is called intelligent. In modern usage, neural networks and evolutionary algorithms are essential constituents of intelligent systems in addition to fuzzy control theory. If the system is robust against uncertainties, including uncertainties in parameters (also called parameter variations), uncertainties in modeling due to nonlinear or higher-order terms, backlash, deadzone, etc. (also called neglected dynamics), and external disturbances, the system is called robust. If the controller changes with time, or in other words adapts itself over time so as to address the design objective, the system is called adaptive.
Adaptation is traditionally used to address nonlinearities or time variations in system parameters. The other technical terms, like optimal control, will be defined in Section 1.16; these topics are studied in graduate courses in the field of control engineering. However, elementary studies of certain parts of robust control theory are included in this course.

1.10 Open-loop control

In this part we will study the stability and performance issues of an open-loop control scheme. It must be noted that at this stage, the beginning of the book, as we have not yet studied much, the treatment is rather conceptual. In particular, sensitivity (or robustness) and disturbance are studied as independent items due to their importance. Rigorous treatment will be possible after reading Chapters 1-4, and this is what you are recommended to do in some exercises like Exercise 1.8.

1.10.1 Stability and performance
Recall that: (1) tracking includes regulation (and is above it), and (2) performance includes both of them (and is beyond them: it also includes rise time, settling time, etc.) as well as sensitivity reduction and disturbance/noise attenuation. However, as stated before, the last two items are studied independently. Now, consider an open-loop control system as in Fig. 1.7.


Figure 1.7 Schematic of an open-loop control system.

Before considering these issues let us see how the system is represented by equations. Denote the output of C, which is the input to P, by u. Thus,12 Y = PU = PCR. We shall see shortly in Chapter 2, System Representation, how P and C are represented. Here it is sufficient to say that in multiinput multioutput (MIMO) systems, in general, PC ≠ CP, and in fact for the generic MIMO system equality never holds. However, for single-input single-output (SISO) systems there always holds PC = CP. In this course we are mostly concerned with SISO systems and thus we simply use CP instead of PC. Consider a tracking task, i.e., suppose it is desired to find the controller C, if possible, such that y = r. We have:13

Y = CPR =: H_r R
E = R − Y = (1 − CP)R = (1 − H_r)R

Thus in order for the tracking error to be zero we must have C = P⁻¹. What this means and when it is possible will be discussed in detail after studying stability in mathematical terms. Here it is enough to say that two necessary (but not sufficient) conditions for the possibility of open-loop control (with the objective of "perfect control," i.e., E ≡ 0) are that P be both stable and minimum phase. See also Problem 1.8 and Exercises 1.8 and 1.43.

12 The input signal to the controller block C is r = r(t) in the time domain or R = R(s) in the Laplace domain. The same is true for the other signals u and y. However, the input-output relation of an "LTI" block is given by the multiplication rule only in the Laplace domain. For instance, it holds that U(s) = C(s)R(s) but not u(t) = Cr(t) unless C is a constant. Note that in general P and C are actually P(s) and C(s); however, in the figures throughout the book, in order to convey the general meaning of a controller or plant, we suppress the argument s and simply write C and P. Moreover, note that if a block is "nonlinear," even if time-invariant, then in general the multiplication rule does not hold for it; that is, e.g., y ≠ Pu but y = f(u), where f is a nonlinear function. This will be further clarified in Chapter 2. In brief, the block diagram representation is conceptually valid for all types of systems; however, the governing equations are not the same. All the formulas that we write are for LTI systems, unless explicitly stated otherwise or clear from the context.
13 The signal summation formulas are valid in both the time and Laplace domains. For instance, e = r − y and E = R − Y are both valid. See also the explanation of Fig. 2.3 of Chapter 2.

At the moment let us see if it has any conceptual meaning. Let P be a human being and y be his/her behavior and conduct. The control task is to make the behavior and conduct of the person P as desired, i.e., r. We saw that to this end C must be equal to P⁻¹.

1. C = P⁻¹ means we know the character of the person P so that we know how to talk to him so as to get him to conduct and behave as desired.
2. Certainly it does not apply to all people. Some people are wrongdoers, and in some sense are not stable. Some must be forced; their behavior and conduct must be brought


to, i.e., fed back to, the rules and compared with them in order to make them understand the errors in their behavior/conduct so as to correct them (make them stable).
3. Even those who perform as we want will not accept all things. For specific behavior/conduct, i.e., specific performance, they should also be forced. That is, when we want performance, feedback is necessary also for them.

In general, for stability and performance (including tracking, regulation, and more) open-loop control is not enough and feedback/closed-loop control is necessary.
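Before moving on, the open-loop algebra can be made concrete with a toy simulation (the plant, gain, and step size are our own illustrative choices): for the stable, minimum-phase plant P(s) = 1/(s + 2), choosing C as the inverse of the plant's DC gain, C = 2, already gives perfect steady-state step tracking with no feedback at all.

```python
def open_loop_step(pole=2.0, C=2.0, dt=1e-3, t_end=8.0):
    """Open-loop control u = C*r of the plant dy/dt = -pole*y + u
    (transfer function 1/(s + pole)), integrated with Euler steps.
    C = pole inverts the DC gain 1/pole, so y should settle at r."""
    y, r = 0.0, 1.0
    for _ in range(int(t_end / dt)):
        u = C * r                   # no feedback: u never sees y
        y += dt * (-pole * y + u)
    return y
```

If the true plant drifts to 1/(s + 4) while C stays at 2 (the uncertainty discussed next), the same scheme settles at 0.5 instead of 1: open-loop control has no way to notice the change.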

1.10.2 Sensitivity and robustness
If (1) our knowledge about the system is wrong and the system is P′ (≠ P), not P, or (2) our knowledge is right and the system is P but it changes14 through time (aging) or because of the environment or working regime from P to P′ (≠ P), then the tracking error will not be zero:

E = (1 − P⁻¹P′)R ≠ 0.

Therefore, in case of model uncertainty (i.e., the system being P′, not P) open-loop control does not suffice. Lack of model uncertainty is the third necessary condition for the possibility of open-loop perfect control. At this stage we cannot say more about the achievable performance, but later in the course this will be addressed. The fourth necessary condition for the possibility of open-loop perfect control is lack of uncertainty in the controller. However, it should be stressed that uncertainty in the controller (controller, amplifier, actuator) is unavoidable both in software and in hardware. In software there is always some precision error, at least due to round-off errors and numerical problems of the algorithms used. As for hardware, there is always some precision error. Suppose the controller computes the control to be φ m³/s, the volume of water that should pass through a valve. Will it be implemented exactly? Certainly not. And this is not typical only of mechanical systems; in all-electrical systems the same is true as well. As such, at the present time, when we are well aware of the benefits of closed-loop control (as you shall see in Section 1.11), open-loop control systems are rarely found. A restatement of the above argument about model uncertainty with a more mathematically quantifiable nature is as follows. We call the (percent) change in the output because of the (percent) change in the plant parameters the sensitivity of the output (or gain, transmittance function, transfer function) of the plant. In mathematical terms:

S_P^{Hr} := (percent change in H_r)/(percent change in P) × 100% = (∂H_r/H_r)/(∂P/P) × 100% = (∂H_r/∂P)·(P/H_r) × 100%

In open-loop control Y = CPR = H_r R and thus S_P^{Hr} = C·[P/(CP)] × 100% = 100%. That is, any percent change in the plant translates to exactly the same percent change in the output, which is clearly undesirable. We will shortly see that feedback can reduce sensitivity to model uncertainty, or equivalently, feedback can increase

14 We assume that the change is such that the plant is in effect time-invariant; otherwise the story is completely different.

Figure 1.8 (A)-(C) Disturbance transferred to the output in open-loop control.

robustness to model uncertainty. By "can" we mean perhaps not always, i.e., not at all frequencies, which means not for all disturbances. As for uncertainty in the controller, by definition it holds that

S_C^{Hr} := (percent change in H_r)/(percent change in C) × 100% = (∂H_r/H_r)/(∂C/C) × 100% = (∂H_r/∂C)·(C/H_r) × 100%.

Thus in this case S_C^{Hr} = 100%, which is undesirable. The same argument as before can be made.
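The two 100% figures can be checked with a finite-difference computation; the numeric values below (controller gain 10, plant gain 0.5) are our own illustrative choices:

```python
def rel_sensitivity(H, x, dx=1e-8):
    """(dH/H)/(dx/x): percent change in H per percent change in x."""
    return (H(x + dx) - H(x)) / H(x) / (dx / x)

C0, P0 = 10.0, 0.5
# open-loop H_r = C*P, perturbed w.r.t. the plant and the controller
wrt_plant = rel_sensitivity(lambda P: C0 * P, P0)
wrt_ctrl = rel_sensitivity(lambda C: C * P0, C0)
# both equal 1, i.e. 100%: the open loop passes parameter changes straight through
```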

1.10.3 Disturbance
In Fig. 1.8 the block P_d is used to show that disturbance has different natures. For instance, for the economy of a nation, it may be a fatal flood, a catastrophic earthquake, a war, etc. Moreover, the right panel of the same figure shows the disturbance transferred to the output.15 Thus Y = CPR + D = H_r R + H_d D, with the obvious definitions for H_r, H_d. Hence, with C = P⁻¹, Y = R + D and E = R − Y = −D. In other words, the disturbance is exactly transmitted to the output; open-loop control neither increases nor decreases the disturbance. Let us see what this means in connection with a person as the plant P, just discussed. Disturbance means a bad environment. The meaning of this observation is that a bad environment affects people and makes good people corrupt ones. This is certainly correct and in accordance with experience.16 We also know that by enforcing people we can decrease the effect of the environment. This will be shown shortly in the context of closed-loop control: with feedback (i.e., force) we can decrease the effect of disturbance. Here again, "can" means perhaps not always, i.e., not at all frequencies, which means not for all disturbances. Finally, note that r and d are in general two time-varying functions. Thus, in order to have tracking, C = (R − D)/(PR) is not acceptable since it is not time-invariant (recall that in this course everything is LTI), regardless of other limitations. Moreover, even if it were, note that the disturbance is accounted for in the controller formula; but if the disturbance then becomes zero, the output will be R − D.

15 Indeed, in both open-loop and closed-loop control (to follow), the effect of disturbance can be modeled at both the plant input and the plant output. For the sake of brevity and simplicity, here we restrict the presentation to the given structure: disturbance at the plant output. In the worked-out problems we shall address the other cases. Further explanations are given in Problems 1.15 and 1.16.
16 The reason that some people do not exactly become like their environment is the internal feedback they have due to moral obligations, etc.


1.10.4 Reliability, economics, and linearity
In general, reliability and economic considerations cannot be addressed in open-loop control, since the controller can only be P⁻¹. The essence of the above exposition is that in general only tracking may be obtained (in special situations) and not the aforementioned objectives. However, if we relax the objective of "perfect control" to "almost perfect control" or, simply said, control, which means that at the initial stage (the technical term being transient response) we allow for some error, then we have more freedom in choosing the controller and may to a larger extent satisfy some economic considerations. Also note that, in practice, we often even allow for some negligible error in the subsequent phase (the technical term being steady-state response), and this gives us more freedom and a higher chance of fulfilling this design objective. With regard to linearity, the question is that of linearizing a nonlinear plant by open-loop control, i.e., making the controlled plant, which is the whole system, linear. The idea again is to use the inverse of the plant as the controller so that the mapping from the setpoint to the output becomes the identity, which is interpreted as a linear system. In the case of a linear plant P, you can easily find the conditions for its invertibility and provide a rigorous characterization of the aforementioned arguments of this section when you study Chapters 1-4 of this book. However, for a nonlinear plant the situation is different; we cannot discuss the theoretical conditions, but only the concept. A simple conceptual technique to construct the inverse of a plant (for any purpose, e.g., open-loop control) is studied in Exercise 1.57, in which we use the concept of feedback, studied in the next section.

1.11 Closed-loop control

In this part we will develop results similar to those of Section 1.10 for closed-loop control systems. Here again, due to their importance, sensitivity (or robustness) and disturbance (or noise) are studied as independent items. Note that, as in the previous section, because we have not yet studied much, the arguments are somewhat conceptual, although accompanied by mathematical formulations and justifications. More rigorous treatment will be possible after reading future chapters, and this is what you are recommended to do in some exercises, in particular Exercises 1.9 and 1.43.

1.11.1 Stability and performance
Recalling footnote 15 about modeling disturbance at the plant output, consider the closed-loop structure of Fig. 1.9. Explanations about d are relegated to Problem 1.15 and Exercise 1.45.

Figure 1.9 (A) and (B) Disturbance, noise, and sensor dynamics in closed-loop control.


The block P_s denotes the sensor dynamics, with the explanation that a sensor does not respond instantaneously and has some delay, even if its noise/error is zero. Thus, Y_m = P_s Y + N. It is instructive to suppose P_s = 1 and analyze the idealized situation first; the actual system is addressed in Exercise 1.9. Coming back to our analysis of the idealized situation, it is easy to verify (using the superposition theorem; the system is LTI) that:

Y = [CP/(1 + CP)]R + [1/(1 + CP)]D − [CP/(1 + CP)]N = H_r R + H_d D + H_n N

If CP = −1, then y = ∞, which means instability, because it is equivalent to a breakdown, explosion, burnout, etc. (Recall that in actual systems all quantities, either internal states or external measured outputs, are bounded.) Hence, for stability we must have CP ≠ −1. Practically, the requirement is beyond this: in actual systems all quantities must be not only bounded but also inside their corresponding safe operation region. Consequently, CP must not even tend to −1, i.e., their difference must not become too small. In Chapter 6 we will see that this is equivalent to some acceptable stability margins. Yet stability is even beyond this; however, here we cannot discuss its formulation in more detail; this is done in Chapter 3.
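These three transmissions can be evaluated as plain numbers at a single frequency; the values C = 10 and P = 0.5 below are our own illustrative choices:

```python
def closed_loop_gains(C, P):
    """Transmissions from r, d, n to y in the unity-feedback loop:
    y = CP/(1+CP) r + 1/(1+CP) d - CP/(1+CP) n."""
    L = C * P
    return L / (1 + L), 1 / (1 + L), -L / (1 + L)

# DC values for C = 10, P = 0.5
Hr, Hd, Hn = closed_loop_gains(10.0, 0.5)
# Hr + Hd = 1 always, and Hn = -Hr: tracking and noise rejection conflict
```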

1.11.2 Sensitivity and robustness
Let n = 0. It is observed that we can reduce the error, or in other words we can improve tracking and disturbance rejection, by making |CP| ≫ 1.17 However, CP is a function of frequency. The question that thus arises is at what frequencies we can/should make |CP| ≫ 1. This question will be answered later in the course. Here, we simply say that the answer is the bandwidth of the system, defined roughly as the frequency range in which the system works acceptably. Another question that arises is whether the effect of disturbance totally vanishes. If the answer were positive, it would somehow be against a conservation law! Indeed, it is negative (under mild conditions). Because of Bode's integral theorem, which will be introduced in Chapter 10, both disturbance rejection and setpoint tracking take place in the bandwidth and not outside it. In other words (under mild conditions; see Chapter 10 for details), with |CP| ≫ 1 the disturbance is attenuated in the passband and amplified outside it! Now suppose there is model uncertainty, i.e., the system is P′ and not P. If for the uncertain system |CP′| ≫ 1 also holds and the system is stable, the error

17 In fact it must be "so large (or large enough)" that "tracking takes place" and "the effect of multiplication by D in the second term is negligible." We stress that the "conceptual condition" of the previous sentence has a precise mathematical explanation, which we shall present at the end of this Section 1.11. For the moment, for simplicity, it is instructive to go on with the aforementioned conceptual condition.


will again tend to zero, i.e., reference tracking and disturbance rejection will be achieved. Note that exact knowledge of the uncertainty is not needed, i.e., we do not know what P′ exactly is; we only know that |CP′| ≫ 1 is satisfied. The same argument is valid for uncertainty in the controller: if |C′P| ≫ 1 (or |C′P′| ≫ 1) holds and the system is stable, it is observed that the setpoint tracking and disturbance rejection objectives are met. Similar to the open-loop case, we can put the above arguments about model uncertainty in more mathematical terms. That is, we can compute the sensitivity in the closed-loop control structure as follows. Here it holds that H_r = CP/(1 + CP). Hence, the sensitivity will be S_P^{Hr} = (∂H_r/∂P)·(P/H_r) = 1/(1 + CP). Recalling that in open-loop control S_P^{Hr} = 1, it is thus seen that by applying feedback the sensitivity is multiplied (i.e., increased or decreased, depending on the frequency) by the factor 1/(1 + CP). Thus in the passband we must decrease the sensitivity. Therefore, if the feedback/controller is designed correctly, meaning that in the passband 1/(1 + CP) is small (or |CP| ≫ 1), the sensitivity is reduced (or the robustness is increased). As for S_C^{Hr}, i.e., the sensitivity of the transfer function with respect to uncertainty in the controller, the answer is the same as for S_P^{Hr}. The same arguments can be repeated.
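The factor 1/(1 + CP) can itself be verified by a finite-difference check; the numbers C = 10, P = 0.5 are illustrative choices of our own:

```python
def rel_sensitivity(H, x, dx=1e-8):
    """(dH/H)/(dx/x): relative sensitivity of H with respect to x."""
    return (H(x + dx) - H(x)) / H(x) / (dx / x)

C0, P0 = 10.0, 0.5
closed_Hr = lambda P: C0 * P / (1 + C0 * P)   # H_r = CP/(1+CP)
# predicted closed-loop sensitivity: 1/(1 + C0*P0) = 1/6, versus 1 in open loop
```

The same call with the open-loop H_r = C·P returns 1; feedback has divided the sensitivity by the factor 1 + CP.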

1.11.3 Disturbance and noise
As for the disturbance, the disturbance-to-output transmission is H_d = 1/(1 + CP), whereas in open-loop control H_d = 1. In other words, by the application of feedback the disturbance is multiplied (i.e., increased or decreased, depending on the frequency) by the factor 1/(1 + CP). Here again, the effect of disturbance should be decreased in the passband. That is, 1/(1 + CP) must be small (or |CP| ≫ 1) in the passband. It is observed that this requirement coincides with the requirement for sensitivity reduction. As for noise, let n ≠ 0. It is observed that H_r and H_n are the same (up to sign). Thus the requirements of setpoint tracking and noise reduction contradict each other: if we improve one, the other automatically deteriorates. In other words, |CP| ≫ 1 will improve setpoint tracking (as well as disturbance rejection and sensitivity reduction) but at the expense of exact noise transmission to the output. That is, in this case we will not achieve Y = R but Y = R − N. These observations (analyses of the system for n = 0 and n ≠ 0) can be rephrased as: the feedback can be18 at most as good as the measurement is. See the worked-out problem 10.22 of Chapter 10. To have n = 0, sensors should be of high quality (thus expensive) and at the right positions. Sensor positioning is a topic of extensive research in the field of systems and control, although it may not seem so to newcomers,

18 We say "can be," not "is," to refer to the requirement |CP| ≫ 1; otherwise, nothing is achieved!


see Further Readings. In other words, noise can be reduced only by hardware and implementation.19

1.11.4 Reliability, economics, and linearity
In graduate courses these issues will be addressed in detail. There you will see that by feedback control we may achieve reliability, can address some economic considerations, and can reduce the effects of nonlinearities through high gain. In simple words, the reason may be attributed to the flexibility inherent in |CP| ≫ 1. That is to say, unlike open-loop control, where the controller was uniquely C = P⁻¹, here the controller is not unique: an infinite number of controllers satisfy |CP| ≫ 1, and we may be able to exploit this flexibility to fulfill some design objectives. See also Exercise 1.57 and the worked-out problem 3.26 of Chapter 3, Stability Analysis. We should clarify that the above parallel statements about open-loop and closed-loop control should not be understood as a "comparison," since that would be unfair: for open-loop control we consider perfect control and for closed-loop control we just consider control. The explanation is that in closed-loop control of actual systems perfect control is almost always impossible (see Exercise 1.43), and on the other hand we often try to exploit the benefits of feedback as much as possible.

Remark 1.2: The following notation is used: L = CP is the loop gain, S = 1/(1 + CP) is the sensitivity function, and T = 1 − S = CP/(1 + CP) is the complementary sensitivity function. Note that with this notation Y = H_r R + H_d D + H_n N = TR + SD − TN. In this terminology, by designing the controller we increase the loop gain, resulting in an increase (towards 1) of the complementary sensitivity function and a reduction (towards 0) of the sensitivity function.

Remark 1.3: The order of the arguments in the above analyses can be changed without making any difference to the conclusions. Instead of considering first n = 0 / n ≠ 0 and then |CP| ≫ 1, we can consider first |CP| ≫ 1 and then n = 0 / n ≠ 0. This way the analysis is more concise. In doing so, the first step results in Y = R − N, meaning that setpoint tracking and disturbance rejection are achieved at the expense of exact noise transmission to the output. That is, roughly said, the feedback can be at most as good as the measurement is. Thus we should have n = 0.

Remark 1.4: The conceptual meaning of |CP| ≫ 1 in human beings is that great people follow their objectives (setpoints) and reject the disturbances. Sarcasm and

In this course, linear control systems, which emphasizes on the linearity and time invariance of the controller. Otherwise e.g., by adaptive control and filtering the same objectives may be (in a sense) achieved in the presence of noise. You will learn this in graduate studies, see item 19 of Further Readings. However, even in the case of using adaptive control it is advisable to try to reduce the noise by hardware and implementation as well.


side-challenges such as bribery do not deflect them from the right path. They have great knowledge, experience, patience, perseverance, good intentions, morality, etc.

Remark 1.5: The ratio

(Y/R)|_{N=D=0} / (Y/D)|_{N=R=0} = CP

is called the signal-to-disturbance ratio in the output, or simply the signal-to-disturbance ratio. It is observed that increasing the signal-to-disturbance ratio coincides with the setpoint tracking, disturbance rejection, and sensitivity reduction objectives.

To present examples for the aforementioned analyses we should first study system modeling. This will be done in Chapter 2, System Representation. Nevertheless, we assume a simple system model here and "observe" the aforementioned arguments in the simulation results of the subsequent examples. You are encouraged to come back to these examples after studying Chapter 4 and analyze the systems.

Example 1.1: Consider the open-loop system P(s) = 1/(s + 2). The closed-loop reference tracking of this system for the step input is given in Fig. 1.10. Now we design the two controllers C1(s) = 10 and C2(s) = 2/s for the system. The reference tracking of the system is shown in the same figure.

Figure 1.10 Example 1.1, Reference tracking for step input.


As observed, the best tracking performance takes place with C2, for which C2P|s=0 = ∞ ≫ 1. The second best is obtained with C1, for which C1P|s=0 = 5 > 1. Finally, we note that the closed-loop tracking performance with no controller is worse than that of the open-loop system, for which P|s=0 = 1/2; but note that this is not always the case. (Question: Why have we chosen the frequency zero?)
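These steady-state values can be reproduced with a crude Euler simulation of the loop; the discretization below is our own sketch, not from the text:

```python
def loop_final_value(controller, dt=1e-3, t_end=20.0):
    """Unity-feedback step response of P(s) = 1/(s+2), i.e. dy/dt = -2y + u.
    `controller` maps (error, state, dt) -> (u, new_state)."""
    y, state, r = 0.0, 0.0, 1.0
    for _ in range(int(t_end / dt)):
        u, state = controller(r - y, state, dt)
        y += dt * (-2.0 * y + u)
    return y

C1 = lambda e, x, dt: (10.0 * e, x)                          # C1(s) = 10
C2 = lambda e, x, dt: (x + 2.0 * e * dt, x + 2.0 * e * dt)   # C2(s) = 2/s
# C1: y settles at C1*P/(1 + C1*P)|_{s=0} = 5/6; C2: the integrator forces y -> 1
```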

Example 1.2: In Example 1.1, suppose that the plant parameters vary by up to 100%. For instance, suppose that the plant changes to P′(s) = 0.2/(1.5s + 4). The output of the system for both the open-loop and closed-loop structures is depicted in Fig. 1.11, left panel. It is observed that the tracking objective is again achieved with C2, however with a different transient performance. We also note that the responses of the open-loop and the closed-loop with no controller are almost identical, shown by the solid line. (Their steady-state difference is less than 0.0025.)

Figure 1.11 Example 1.2, Reference tracking with parameter variations.

Parameter uncertainty may be present in the controller as well. In the right panel, simulation results are given for the original plant with the controllers C3(s) = 5, C4(s) = 5/s, and C5(s) = (2s + 50)/[s(s + 100)]. As is observed, as long as the condition |CP| ≫ 1 is maintained the tracking objective is fulfilled; however, in the transient phase the output shape will be different.


Example 1.3: In Example 1.1 we assume that a negative step disturbance enters at the system output at t = 8 seconds. The simulation results are presented in Fig. 1.12. As seen, it changes the final value of the output with no controller or with C1, however not with C2. Of course in all cases, except the open loop, it also has a transient effect. We also note that the reference tracking performance (which is in the starting phase of the output) is the same as the disturbance rejection performance (which is right after t = 8), because the deviation from the target is governed by the same dynamics in both cases (H_r = 1 − H_d) and r = −d.

Figure 1.12 Example 1.3, Reference tracking and disturbance rejection.
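A sketch of this experiment in code (our own Euler discretization; the −1 step disturbance is added at the plant output from t = 8 s on):

```python
def final_output(Kp=0.0, Ki=0.0, dt=1e-3, t_end=20.0, t_d=8.0):
    """P(s) = 1/(s+2) under unity feedback; a -1 step disturbance is added
    at the plant output for t >= t_d. Returns the final measured output."""
    x, integ, r = 0.0, 0.0, 1.0           # x: plant state, measured y = x + d
    for k in range(int(t_end / dt)):
        d = -1.0 if k * dt >= t_d else 0.0
        e = r - (x + d)
        integ += Ki * e * dt
        u = Kp * e + integ
        x += dt * (-2.0 * x + u)
    return x - 1.0                        # disturbance still active at t_end

# with C1(s) = 10 (Kp=10): final y = 5/6 - 1/6 = 2/3
# with C2(s) = 2/s (Ki=2): the integrator restores final y = 1
```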

Example 1.4: In Example 1.1 suppose that the controller changes to C1(s) = 50/s, C2(s) = 2/(s − 1), or C3(s) = 1/s². The simulation results are offered in Fig. 1.13. It is observed that with C1 the output becomes oscillatory, and with C2 and C3 it is unstable (unbounded over a longer time).

Introduction


Figure 1.13 Example 1.4, Reference tracking for step input, Left: C1 , Right: C2 , Bottom: C3 .

The observation that we shall make of the above examples, which is in accordance with the theoretical arguments of Sections 1.10 and 1.11, is that, in brief, with a well-designed feedback we can (stabilize and) robustify a system, enhance its performance, and reject disturbances. On the other hand, a poorly designed feedback can render the opposite results. You will learn many more details in future chapters. For the moment let us ponder the question "What disturbances are rejected, and what does largeness of CP mean mathematically?" The answers to these questions are related to footnote 17. Now we discuss in a mathematical setting the conceptual arguments of this Section. We have Y = Hr R + Hd D + Hn N. Denote Y = Yr + Yd + Yn with obvious meanings for the terms. In the time domain denote y = yr + yd + yn, likewise with obvious meanings. For instance, yr = L^{-1}{Yr} = L^{-1}{[CP/(1 + CP)]R} is the output due to the reference input. We wish that yr = r, yd = 0, yn = 0. In the case that r(t) = Ar step(t), at steady state yr,ss = lim_{s→0} s [CP/(1 + CP)] (Ar/s), provided the analyticity of the limit argument


in the closed right half plane (CRHP), see Appendix A. Thus if L(0) ≫ 1, preferably infinity, we have yr,ss ≈ Ar (exactly Ar when L(0) = ∞). This happens if L has the factor s in its denominator, e.g., L = 1/[s(s + 1)]. But does that mean that the disturbance is rejected? To answer this let us distinguish some cases: (1) step disturbance d = Ad step(t), (2) ramp disturbance d = Ad t, (3) sinusoidal disturbance d = Ad sin(ωt). Of course we can consider more complicated signals as the disturbance, but as we shall shortly explain, the chosen ones suffice to make the point. With Hd = 1/(1 + L) = s(s + 1)/[s(s + 1) + 1], there holds:
1. In this case we see that yd,ss = lim_{s→0} s {s(s + 1)/[s(s + 1) + 1]} (Ad/s) = 0 and the objective is achieved.
2. In this case we have yd,ss = lim_{s→0} s {s(s + 1)/[s(s + 1) + 1]} (Ad/s²) = Ad ≠ 0.
3. In this case yd = L^{-1}{ {s(s + 1)/[s(s + 1) + 1]} [Ad ω/(s² + ω²)] } ≠ 0. The exact solution of this inverse Laplace transform can be obtained in line with the derivations of Section 4.6.2. In Section 4.6.3 we prove that at steady state the answer is a sinusoidal term with the same frequency but with a phase shift and a different magnitude. In the transient phase there are some decaying terms.

All in all, we observe that in cases (2) and (3) the objective is not fulfilled, but in case (1) it is. The first part of the first sentence of footnote 17, which reads 'it must be so large that tracking takes place,' should now be meaningful: it is large enough for this purpose. On the other hand, the second part of the first sentence of footnote 17, which reads 'it must be so large that the effect of multiplication by D in the second term is negligible,' should also be meaningful: although L(0) = ∞ ≫ 1, it is not large enough to make yd,ss = 0. However, suppose that we have L = (s + 1)/[s²(s + 2)], which also fulfills L(0) = ∞ ≫ 1. This time we get yd,ss = 0. That is, in this case it is large enough. (See Exercise 1.49.) On the other hand, suppose that r(t) = Ar t. Then, as we shall learn in Chapter 4, a loop gain like L = 1/[s(s + 1)] (although it satisfies L(0) = ∞ ≫ 1) is not large enough to achieve reference tracking, but, e.g., L = (s + 1)/[s²(s + 2)] is. (See Exercise 1.48.) This mathematical formulation also explains the first sentence of footnote 17. These issues are known as the 'type' of the system (plant plus controller) and also the 'kind/order/type' of the input/disturbance/noise. More precisely, the number of integrators in the plant and controller and the exact formula of the input/disturbance/noise do affect the details of the analysis. The aforementioned analysis in the text is crude and conceptual, but instructive and insightful. On the other hand, to consider the effect of noise we define (1) step noise n = An step(t), (2) ramp noise n = An t, (3) sinusoidal noise n = An sin(ωt). Here, in parallel with the previously discussed system (L = 1/[s(s + 1)], so that Hn = −1/[s(s + 1) + 1]), there hold:
1. In this case we see that yn,ss = lim_{s→0} s {−1/[s(s + 1) + 1]} (An/s) = −An and the noise is transmitted to the output.
2. In this case we have yn = L^{-1}{ {−1/[s(s + 1) + 1]} (An/s²) } = −An t + …, which grows unbounded.
3. In this case yn = L^{-1}{ {−1/[s(s + 1) + 1]} [An ω/(s² + ω²)] } ≠ 0. The same explanation as in the case of the disturbance is true here.

We observe that noise ‘badly’ affects the performance, much worse than a disturbance does, and this is not a surprise as Hn equals Hr except for a minus sign.
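These steady-state values can be spot-checked numerically with the final-value theorem, y_ss = lim_{s→0} sY(s). A minimal sketch for the loop gain L = 1/[s(s + 1)] used above (the amplitudes and tolerances are our choices):

```python
# Final-value checks for L(s) = 1/[s(s+1)]: Hd = S = 1/(1+L), Hn = -L/(1+L).
Ad = An = 2.0
L = lambda s: 1.0/(s*(s + 1.0))
S = lambda s: 1.0/(1.0 + L(s))        # output-disturbance-to-output map
Hn = lambda s: -L(s)/(1.0 + L(s))     # noise-to-output map
eps = 1e-8                            # small s as a surrogate for s -> 0

# (1) step disturbance: y_d,ss = lim s*S(s)*Ad/s = 0 (rejected)
assert abs(eps*S(eps)*Ad/eps) < 1e-6
# (2) ramp disturbance: y_d,ss = lim s*S(s)*Ad/s**2 = Ad (not rejected)
assert abs(eps*S(eps)*Ad/eps**2 - Ad) < 1e-6
# (1') step noise: y_n,ss = lim s*Hn(s)*An/s = -An (fully transmitted)
assert abs(eps*Hn(eps)*An/eps + An) < 1e-6
```

The limits agree with cases (1) and (2) of the disturbance analysis and case (1) of the noise analysis above.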


To prevent any probable misconception let us add a few more points, see Exercise 1.49:
- For the disturbance d = Ad sin(ωt), if |S| = |1/(1 + L)| = |Hd| at s = jω is greater than 1 then the disturbance is amplified; otherwise, if |Hd| at s = jω is less than 1, it is 'attenuated' but it does not become zero. For certain systems ∀ω: |Hd| at s = jω is less than 1, while for other systems it depends on the frequency ω.
- Let L = (s + 1)(s + 2)/[s(s² + ω²)]. Then for d = Ad sin(ωt) we get yd,ss = 0. (For r = Ar step(t) we have yr,ss = Ar.)
- Let L = (s² + ω²)/[s(s + 1)(s + 2)]. Then for n = An sin(ωt) we get yn,ss = 0. (For r = Ar step(t) we have yr,ss = Ar.)
- In certain respects there is a difference between a disturbance which is at the plant input (the so-called input disturbance) and a disturbance which is at the plant output (the so-called output disturbance).
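The second point above can be checked numerically: placing the disturbance's poles s = ±jω inside L drives the sensitivity to zero at that frequency. A quick sketch with the hypothetical choice ω = 3:

```python
# Sensitivity S = 1/(1+L) for L = (s+1)(s+2)/[s(s^2 + w0^2)], written as a
# ratio of polynomials so the evaluation at s = j*w0 is exact.
w0 = 3.0

def S(s):
    num = s*(s**2 + w0**2)                # denominator of L
    return num/(num + (s + 1.0)*(s + 2.0))  # = 1/(1 + L)

assert abs(S(1j*w0)) < 1e-12   # sinusoidal disturbance at w0 fully rejected
assert abs(S(0.0)) < 1e-12     # the integrator in L also rejects step disturbances
```

Both zeros of S confirm the claims in the bullet: yd,ss = 0 for the sinusoid at ω and yr,ss = Ar for the step.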

In the rest of the course we shall discuss these points in more detail. For this introductory Chapter this level of exposition seems sufficient. Now, let us discuss some practical issues of theoretical relevance. For disturbance: Case (1) is often realistic for many actual systems, at least in a piecewise manner. Case (2) is unrealistic except over short durations of time or somewhat periodically. Case (3) is realistic for some systems, at least approximately. This is better understood after reading Problems 1.15 and 1.16. For noise: Case (1) refers to a 'drift' in measurement and is rarely realistic in engineering applications but may be the case, e.g., in monetary systems. Case (2) is indeed unrealistic, but is somewhat realistic if it occurs over a short interval of time or is periodic and has a low magnitude. Case (3) is better than the others, but of course it has a low magnitude. In fact, actual noise is almost always a high-frequency low-amplitude stochastic signal. The best simple approximate representation of noise (for hand drills) is perhaps the deterministic summation of two or three high-frequency low-amplitude sinusoids. With regard to formulation, since the formulas are the same it suffices to consider only one such sinusoid. We wrap up this section by adding that these issues are reconsidered in particular in Chapters 4, 7, and 10. See also Remark 1.13. In Exercise 1.53 and Section 4.13 we shall discuss the more general setting.

1.12 The 2-DOF control structure

The feedback structure we have considered so far is called the 1-degree-of-freedom, or 1-DOF, control structure, as it has only one controller. In the 1-DOF control structure it is desired that y = r. In this Section we introduce the 2-DOF control structure. The 2-DOF control structure has become such a classic in the control literature that unfortunately no reference is cited for it; we do not know who first proposed it. Nevertheless, despite its popularity it is still not correctly studied in some references. We instructively consider this


Figure 1.14 A 2-DOF control structure.

structure in detail in this Section, the Further Readings, and the Exercises. The classical version of the 2-DOF control structure is provided in Fig. 1.14. In this system the output is given by Y = [CP/(1 + CP)]FR + [1/(1 + CP)]D − [CP/(1 + CP)]N. Note that here again we have to have n = 0 by hardware design and implementation; the feedback can be at most as good as the measurement is. Apart from this, as before, we must have |CP| ≫ 1. The disturbance attenuation of the 1-DOF and 2-DOF systems is thus the same; however, their setpoint tracking is not. Here we have Yr = [CP/(1 + CP)]FR and by the appropriate choice of the dynamics of F we can get a better setpoint tracking. Now we add some analysis of this system. Suppose r(t) = A step(t) (see footnote 20). Thus, due to the input, at steady state we will have yss = lim_{s→0} s {F(s)L(s)/[1 + L(s)]} (A/s) = A F(0)L(0)/[1 + L(0)], where L = CP is the loop gain. In order to have yss = A, we should specify the dynamics of F, for which we distinguish two cases in the sequel: (1) L(0) = ∞, as for the system L(s) = 1/[s(s + 1)]. Then, writing F = N/D, there should hold F(0) = N(0)/D(0) = 1. (2) L(0) ≠ ∞, as for the system L(s) = 1/(s² + s + 1). In this case we should have F(0) = [1 + L(0)]/L(0). Remark 1.6: We can equally obtain the above results by analyzing the error signal: E(s) = R − Y = {1 − F(s)L(s)/[1 + L(s)]}R = {[1 + L(s)(1 − F(s))]/[1 + L(s)]}R. Thus

ess = lim_{s→0} s {[1 + L(s)(1 − F(s))]/[1 + L(s)]} (A/s) = A[1 + L(0)(1 − F(0))]/[1 + L(0)]. In order to have ess = 0, we should specify the dynamics of F. The result is exactly as before.
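The DC-gain condition of case (2) can be verified with a one-line computation; a minimal sketch assuming the type-0 loop gain L(s) = 1/(s² + s + 1) mentioned above, so L(0) = 1:

```python
# 2-DOF steady state for a unit step reference:
# y_ss = F(0)L(0)/(1+L(0))*A and e_ss = [1 + L(0)(1-F(0))]/(1+L(0))*A.
A = 1.0
L0 = 1.0                    # L(0) for L(s) = 1/(s**2 + s + 1)
F0 = (1 + L0)/L0            # prefilter DC gain required by case (2); here 2
y_ss = F0*L0/(1 + L0)*A
e_ss = (1 + L0*(1 - F0))/(1 + L0)*A
assert abs(y_ss - A) < 1e-12
assert abs(e_ss) < 1e-12
```

With any other F(0) the step is tracked with a nonzero steady-state error, which is the point of the two cases above.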

Remark 1.7: Note that in this structure the error signal is not the output of the feedback element, but r − y. In fact the error signal which must be reduced to zero is always the quantity r − y. Remark 1.8: A restatement of this observation is that we use C (the first degree of freedom) to shape the response to d (being [1/(1 + CP)]D) and F (the second degree of freedom) to shape the response to r (being [CP/(1 + CP)]FR). The dynamics of F are chosen as follows. We first diagnose the reason for the poor transient response. It may be due to a (stable) term in the numerator of T, a (stable) term in the denominator of T, or both. We choose F so as to cancel them out or substitute them with especially designed ones.

20. The case of other inputs is discussed in the worked-out problems of Chapter 4.


Some examples follow: Example 1.5: Consider the system P(s) = 1/[s²(s + 10)] in a negative unity feedback structure. It is desired that the system track a step input and reject an output step disturbance. We first try a 1-DOF control structure. The controller is designed as C(s) = 500(s + 1)/(s + 20), see Problem 4.9 of Chapter 4. With this controller the transient responses for the reference tracking and disturbance rejection objectives are the same since Hr = Hd in the 1-DOF structure. Next we design the prefilter F1(s) = (1/3)(s + 3)/(s + 1). The reference tracking of these two cases is shown in the left panel of Fig. 1.15. In the right panel the output of the 2-DOF structure is shown where an output disturbance enters the system at t = 6 seconds. As is observed, the transient performance for reference tracking and disturbance rejection are not the same; the former is better. We also observe that by using a more sophisticated prefilter we can even further enhance the performance. For instance, the reference tracking with the prefilter F2(s) = (s⁴ + 30s³ + 200s² + 500s + 500)/(s⁴ + 30s³ + 230s² + 700s + 500) is provided in the bottom panel of the same figure.

Figure 1.15 Example 1.5, Left: Reference tracking in the 1-DOF and 2-DOF structures with F1; Right: Output, reference input at t = 0, output disturbance at t = 6; Bottom: Reference tracking in the 1-DOF and 2-DOF structures with F1 and F2.


Remark 1.9: The philosophy behind the 2-DOF control structure is sound and correct; however, its use is rather intricate. The reason is that with a well-designed controller C in the first place we may obtain an acceptable performance with the 1-DOF control structure. This is shown in the ensuing example and also in a worked-out problem. Also note that while the use of the 2-DOF control structure for stable plants is case-dependent, its use for plants with unstable poles and/or zeros is often inevitable, as we will show in a worked-out problem and exercises. Example 1.6: The second degree of freedom F(s) = 1/(2s + 1) is used "to avoid the overshoot" for the system P(s) = 1/s and C(s) = (2s + 1)/s, see Fig. 1.16, left panel. However, if we allow high-gain control this can be simply avoided by the controller C(s) = 200(s + 2)²/[s(s + 10)], which results in the superb response of the right panel of the same figure. Note that the settling time has been reduced from 5.8347 to 0.0879 seconds! Also note that the overshoot may be reduced in the first place by reducing the magnitude of the zero. For instance, if we choose C(s) = (10s + 1)/s the overshoot decreases but the initial value of the control signal increases and the response will have a long slow tail. The settling time will be 0.9043 seconds. Consequently, a compromise should be made at the designer's discretion, especially noting that, precisely speaking, the system exhibits a slight overshoot at a larger time but within the negligible band of 2%. In practice, depending on the application, such overshoots may be neglected (Fig. 1.16). For the sake of completeness let us add that in practice many other factors are also included as design objectives and limitations. That is, the decision is not made simply by the aforementioned considerations. We shall study the other factors throughout the book. One of them is that such a small rise time is not acceptable for some systems, see Section 4.10 of Chapter 4.

Figure 1.16 Example 1.6, Left: A solution, Right: Another solution.


Figure 1.17 Use of transducers.

Remark 1.10: If one simulates the system of Example 1.5 with the controller C(s) = 3s/(s + 1) one observes no overshoot! Similarly, for Example 1.6 the controller C(s) = 1/(s + 2) or even the simple proportional controller C(s) = K results in no overshoot! Consequently one may wonder why we have not used these controllers. The answers will become clear later in the course. As we shall learn, the first system is internally unstable and the second system does not reject a step input disturbance and/or a ramp output disturbance, which we have implicitly assumed as design objectives. Question 1.2: Is this analysis valid for nonconstant setpoints as well? If so, why? If not, does a 2-DOF control structure make sense for such systems? You had better come back to this question again after studying Chapter 4. Question 1.3: Can CP have RHP zeros or poles? Come back to this question after we study the stability notion in Chapter 3, Stability Analysis. We wrap up this part by mentioning that some relevant issues of the 2-DOF control structure are discussed in Problems 1.13, 1.14, and Exercises 1.12–1.18. Remark 1.11: In certain applications the desired output (i.e., the reference input) is different in units from the measured output. This is tackled in either of the structures of Fig. 1.17, in which C2 is the transducer (see footnote 21) used to convert the units and is thus fixed. These structures are 1-DOF structures and should not be mistaken for the 2-DOF structure just studied. In the system on the left, the transducer acts on r, whereas in the system on the right it acts on y. For example, if r is velocity and y is position, in the left panel C2 is an integrator whereas in the right panel C2 is a differentiator. Note that in these structures the error signal is not r − y, but the output of the feedback element. From a practical point of view these "idealized" systems are not good and are avoided in practice.
The reason is discussed further in another undergraduate course, on industrial control systems. Recall that there are four undergraduate courses in the control discipline: this course, which focuses on control systems in the frequency domain, control systems in state space, control systems in discrete time, and industrial control.

21. A transducer is a device whose input and output have different units. For instance, one is velocity and the other is acceleration, or one is temperature, position, or velocity, and the other is voltage.


Analysis of the situation in the right panel is, however, important in the case that C2 represents a controller. That is another 2-DOF control structure and will be addressed in Worked-out Problem 1.10. We close this Section by bringing the reader's attention to item 11 of the further readings in Section 1.18, concerning 3-DOF control structures.

1.13 The Smith predictor

As you recall from differential equations, a delay is represented by e^{−sτ}. The effect of delay on a time function f(t) is illustrated in Fig. 1.18, in which τ is called the dead time; there holds f(t − τ) ↔ e^{−sτ}F(s), τ > 0, where f(t) ↔ F(s) is a Laplace-transform pair. The question that we are concerned with in this Section is designing a controller for the plant P(s) = e^{−sτ}P0(s), i.e., the plant P0(s) affected by the time delay τ. A solution to this problem is known as the Smith predictor. Smith's approach to this problem was as follows: Given the delay-free plant P0(s), we design the controller C0(s) for the desired performance manifested by the closed-loop transfer function T0(s). Now consider the plant P(s) = e^{−sτ}P0(s). We wish to design the controller C(s) such that y(t) = y0(t − τ). This translates to designing C(s) so that the closed-loop transfer function becomes T(s) = e^{−sτ}T0(s). To this end we simply find C from CP/(1 + CP) = e^{−sτ} C0P0/(1 + C0P0). The answer is C = C0/[1 + C0P0(1 − e^{−sτ})]. Note that the controller can be simply implemented as in Fig. 1.19. (Convince yourself that it is correct!)
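The identity T = e^{−sτ}T0 can be checked pointwise on the imaginary axis. A minimal numeric sketch, with a hypothetical first-order plant P0(s) = 1/(s + 1), controller C0 = 5, and delay τ = 0.8 (none of these values come from the text):

```python
import cmath

tau = 0.8
P0 = lambda s: 1.0/(s + 1.0)   # hypothetical delay-free plant
C0 = lambda s: 5.0             # hypothetical controller designed for P0

def C(s):                      # Smith predictor controller C0/[1 + C0*P0*(1 - e^{-s*tau})]
    return C0(s)/(1.0 + C0(s)*P0(s)*(1.0 - cmath.exp(-s*tau)))

def T(s):                      # closed loop with the true delayed plant
    Lg = C(s)*P0(s)*cmath.exp(-s*tau)
    return Lg/(1.0 + Lg)

def T0(s):                     # delay-free closed loop
    L0 = C0(s)*P0(s)
    return L0/(1.0 + L0)

for w in (0.1, 1.0, 10.0):
    s = 1j*w
    assert abs(T(s) - cmath.exp(-s*tau)*T0(s)) < 1e-12
```

The assertion confirms that the delayed loop reproduces the delay-free design shifted by τ, which is exactly the design goal y(t) = y0(t − τ).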


Figure 1.18 Effect of delay on a signal.

Figure 1.19 The Smith predictor structure. Adopted from Smith O.J.M., "A controller to overcome deadtime," ISA J., 6, 2, pp. 28–33, 1959, with permission.


Figure 1.20 Construction of the 1-DOF IMC structure. Adopted from Garcia C.E. and M. Morari, "Internal model control—1. A unified review and some new results," Ind. Eng. Chem. Process Des. Dev., 21, pp. 308–323, 1982, with permission.

1.14 Internal model control structure

The Internal Model Control or IMC structure is built as follows: Consider Fig. 1.9 with n = 0 for simplicity, and let P0 be a model of the plant P. Insertion of a "zero gain path" into it results in the equivalent left panel of Fig. 1.20. This in turn is equivalent to the right panel of the same figure, in which the following hold: Q = C/(1 + CP0) and Y = {QP/[1 + Q(P − P0)]}R + {(1 − QP0)/[1 + Q(P − P0)]}D. We note that the feedback signal in the right panel is (P − P0)U + D. Thus if P0 = P and D = 0 then the feedback signal is zero and the system acts like the open-loop system Y = QP0R. In passing, note that this observation is another facet of our statement in Section 1.11 that feedback is needed when there is model uncertainty, i.e., P0 ≠ P, or signal uncertainty, i.e., D ≠ 0. There are many pros and cons to the implementation of control systems in the IMC structure, which has been successfully used in some process control industries. Among its advantages are: (1) it simply allows a parameterization of all stabilizing controllers; (2) the controllers designed for many plant models are automatically in the PID structure, and thus easily implementable and tunable; (3) its extensions to MIMO systems as well as to some nonlinear systems are straightforward; (4) in the case of actuator limitations there is no need for special antiwindup measures if the bounded control signal (and not the computed control signal) is also inputted to the model. On the other hand, its applicability is restricted to open-loop stable plants, and the stability and design analysis in this framework for complex systems (of high order, with time delay and unstable zeros) needs special treatment, see also Exercise 3.18 of Chapter 3, Stability Analysis, and Section 9.9. We close this part by providing the schematic of the 2-DOF IMC structure in Fig. 1.21. In this structure E = R − Y = {1 − QrP/[1 + Qd(P − P0)]}R − {(1 − QdP0)/[1 + Qd(P − P0)]}D.
In the case of a perfect model, i.e., P0 = P, the error simplifies to E = (1 − QrP0)R − (1 − QdP0)D. In this structure Qd is designed for disturbance rejection and Qr is designed for reference tracking.
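The equivalence between the classical loop and the 1-DOF IMC form, with Q = C/(1 + CP0), holds even when the model is imperfect. A pointwise numeric sketch with hypothetical P, P0, and C (our own illustrative choices):

```python
# Evaluate both closed-loop maps r -> y at one complex frequency and compare.
s = 0.3 + 1.0j
P = 1.0/(s + 1.0)             # hypothetical true plant
P0 = 1.0/(1.2*s + 0.9)        # hypothetical (mismatched) model
C = (2.0*s + 1.0)/s           # hypothetical controller
Q = C/(1.0 + C*P0)            # IMC parameter
T_imc = Q*P/(1.0 + Q*(P - P0))  # r -> y in the IMC structure
T_std = C*P/(1.0 + C*P)         # r -> y in the classical structure
assert abs(T_imc - T_std) < 1e-12
```

Substituting Q = C/(1 + CP0) into the IMC expression and clearing denominators recovers CP/(1 + CP) exactly, which is what the assertion verifies numerically.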

1.15 Modern representation—Generalized model

An alternative representation of a feedback control system which in comparison to the previous representations is concise and more abstract is known as the



Figure 1.21 The 2-DOF IMC structure. Adopted from Morari M. and E. Zafiriou, Robust Process Control, Prentice Hall, 1989, with permission.


Figure 1.22 The generalized system representation—concise and abstract. Derived from Doyle J.C., “Synthesis of robust controllers and filters,” In Proceedings of the IEEE Conf. Decision and Control, pp. 109-114, San Antonio, Texas, 1983, with permission.

generalized model, see Fig. 1.22. It is usually used in connection with robust control systems analysis and synthesis, where a more complete version of it is employed. It is instructive to become familiar with its core concept now. In this perspective P and K are the generalized plant and controller, w is the vector of exogenous inputs to the system consisting of the setpoint, disturbance(s), and sensor noise, z is the vector of exogenous outputs comprising (error) signals to be minimized (see footnote 22), u is the vector of control signals, and v is the vector of controller inputs for the generalized/extended plant, such as setpoints, measured outputs, and measured disturbances. It should be stressed that the choice of these signals is not unique and different generalized models may be proposed for a given system. This is shown in the worked-out problems and unsolved exercises. The utility of this approach is highlighted in solving robust control problems: robust stability and robust performance in the presence of uncertainties and disturbances. In this perspective this representation has turned into a modern and standard setup in which the analysis and synthesis of problems are facilitated. Almost all sophisticated structures can be transformed into this structure. In the following and also in the worked-out problems, through some basic examples, we demonstrate how systems may be recast in this setup.

22. It is possible to include other signals here as well, especially in weighted form. Indeed, in every actual system it is desired that the control signal be as small as possible; thus a weighted control signal may be included in z. On the other hand, in every actual system the input to the plant, and in certain actual systems the output of the plant, should be within specified regions given by its SOR, otherwise the plant will be damaged or will not function properly. An example of the latter is the drum-boiler of a power plant, whose output cannot be lowered arbitrarily. Another example is the bicycle, whose speed must be higher than a critical minimum, otherwise it will not be rideable/stable.

Partitioning the generalized plant P as

P = [ P11  P12
      P21  P22 ],

the equations governing the system in Fig. 1.22 are given by

Z = P11 W + P12 U,
V = P21 W + P22 U,
U = K V.

It should be stressed that in Fig. 1.22 the signal u is in general different from the signal u which customarily denotes the controller output in standard structures. To avoid ambiguity we may use a different font for the u of Fig. 1.22, e.g., the same font used for the generalized plant P and controller K. However, for notational simplicity this is understandably not done. In the subsequent Example 1.7 these two signals appear to be the same, but as Problems 1.15, 1.17 and Exercises 1.32–1.34 show, they are in general different. To avoid ambiguity, in Example 1.7 we denote the controller output by u.

Example 1.7: As an example consider the 1-DOF control system in Fig. 1.23.

Figure 1.23 Example 1.7, A 1-DOF control problem with output disturbance and sensor noise.

It can be represented by w = [w1 w2 w3]^T = [r d n]^T, v = r − ym = r − y − n, u = u, and z = r − y. Therefore,

Z = R − Y = I W1 − (P U + I W2) = I W1 − I W2 + 0 W3 − P U,
V = R − Ym = I W1 − (P U + I W2 + I W3) = I W1 − I W2 − I W3 − P U.

The generalized plant is thus

P = [ I  −I   0  −P
      I  −I  −I  −P ]

with partitioning P11 = [I  −I  0], P12 = −P, P21 = [I  −I  −I], P22 = −P. We also have K = C.

Question 1.4: In Example 1.7 can we define v 5 ym ?

1.16 Status quo

After the 1927 work of H. S. Black, the control community witnessed a boom in control analysis and synthesis techniques. The contributions are so vast that by no means can they be summarized here. Broadly, from the 1930s to early 1960s the scientific community witnessed the flourishing of frequency domain techniques. From the early 1960s to late 1970s the community was overwhelmed with

36

Introduction to Linear Control Systems

state-space methods; frequency-domain methods were at issue to a lesser extent. Lack of robustness and, more seriously, malfunctioning and even instability of the control systems designed by state-space methods shifted the attention of scientists back to frequency-domain techniques. This happened in the late 1970s and early 1980s, in particular through a new problem formulation in the frequency domain, the so-called H∞ problem. Subsequently it was solved (and computed) more efficiently in the state-space domain. Since then the same trend has more or less continued, which can be roughly regarded as an integrated approach to control problems. More recent trends are largely in the time domain, with respect to both formulation and computation. At the same time, of course, other branches of control theory (as we shall name in the sequel) have also witnessed noticeable improvements. On the other hand, model-free control techniques (see footnote 11 of Section 1.8) emerged in the mid-1970s and have undergone considerable development since then. Today, intelligent control systems have revolutionized industries and many more advancements are expected. Starting from simple position and velocity control problems, control theory has by now touched virtually every aspect of our daily life. For instance, it is used in treating patients with mental and cardiac disorders, controlling traffic over the Internet, and controlling epidemics, species populations, and the ecology of a region, issues that may seem surprising to newcomers to control at first glance. The application area of control is quite vast. It is virtually present in all technologies, although perhaps not in plain view.
Some applications where control theory is practiced are as follows: amplifier/power electronics/power system control; ecological/biological/biomedical systems control; economic/managerial systems control; process control (like oil refineries, sewage recycling, paper pulp, glassware, casting and hot rolling, etc.); robotics control; spacecraft/satellite/aircraft/missile/ship/automotive control; structural and vibration control; flow control; and traffic control (air, train, urban, Internet, and communications).

1.16.1 Overview

Needless to say, the aforementioned accomplishments are the result of endless theoretical investigation by scientists throughout the world. Today it is about a century that researchers have been working in the realm of control theory and practice. Control theory has grown so vast that almost unanimously four undergraduate courses and numerous graduate courses are offered on it. The number of these courses is so large that it is impossible for a student to pass them all by the time he or she earns the PhD degree! The undergraduate courses are this course, fundamentals of state-space methods, fundamentals of discrete-time control systems, and industrial control. The graduate courses are: optimal control, identification, neural networks and control, multivariable control, robust control, adaptive control, nonlinear control, fuzzy and intelligent control, soft computing or intelligent computation, optimization, large-scale and network systems and control, filtering and estimation, discrete-event and hybrid systems and control, monitoring and fault detection and isolation, stochastic systems and control, infinite-dimensional systems and control, quantum systems and control, and fractional-order systems and controllers.

Introduction

37

It should be stressed that this list is by no means exhaustive. For instance, some universities offer graduate courses on robotics, electromechanical/mechatronic and embedded systems, process control, (computational) systems biology, micro/nano control theory and technology, traffic control, propulsion and navigation systems, game theory, singular perturbation, the behavioral approach, chaos and bifurcation control, time-series analysis and control, descriptor systems and control, cognitive systems, artificial intelligence and machine learning, real-time software, signal processing, and scientific computation in systems and control for students of the control major. There are also several specialized topics and/or tools that are sometimes offered as courses, like: polynomial systems (and the polynomial matrix approach to control systems), algebraic-geometric theory of control systems, model order reduction, constrained control, model predictive control (MPC), learning control (iterative, repetitive, run-to-run, etc.), antiwindup techniques, time-varying systems, and time-delay systems. All in all, there are above 30 rather independent (see footnote 23) graduate courses/topics while the student passes at most 16 courses (usually between 12 and 16, depending on the university). It does not seem proper to elaborate on the aforementioned courses such as adaptive control in detail; nonetheless, a few words about some of them are in order. Optimal control refers to controlling the system in such a way that a cost function is optimized/minimized. For instance, what should the path and speed of a car be so that by the destination the fuel consumption is minimal? Identification comprises the theories for the construction of a model for a given system from its input-output data, or in other words, identifying (a model of) the system.
Robust control concerns the design of controllers that are capable of maintaining some level of performance in the face of uncertainties in the system, either parameter uncertainties or signal uncertainties. By adaptive control we mean designing a controller whose parameters and/or structure adapt over time, i.e., it is not a fixed controller. This kind of control is used when robust control does not suffice for the control of uncertain systems. If you have guessed that in adaptive control we in general need to identify the system during online operation, your guess is correct. Nonlinear control refers to the case that the plant and/or controller are nonlinear as opposed to linear. Hybrid control addresses the design of control systems whose dynamics evolve both in continuous time and in discrete time. This is tied to events which happen and trigger the system at discrete times. A discrete-event system is a system whose state belongs to a discrete set and takes another value in that set due to triggering by some discrete-time event. Stochastic systems and control means that an element of the system has a stochastic nature, like the load of a power system or an inventory system where customers arrive at cashiers stochastically. The source of stochasticity may be an input, time delay, disturbance, coefficient, or even the state of the system, as in statistical physics. Hence, unlike a deterministic system, a stochastic system does not always produce the same output for a given

23. However, there is a noticeable overlap between them in the sense that, e.g., some adaptive control methods are robust, some nonlinear control methods are adaptive, some optimal control methods are nonlinear, some multivariable control methods are both stochastic and robust, etc.


input and initial condition. Other examples of stochastic systems include stock market and exchange rate, speech/audio/video/seismic/cosmic/ocean wave/etc. signals, medical data like a patient’s electrocardiogram signal or blood pressure, inventory and renewal processes, random movement such as Brownian motion or random walk, traffic systems, etc. Filtering and estimation concern finding the state or output of the system in the past or future, respectively, from the information of the system at other times. Neural networks are mathematical models for the neural system of the body. They have found numerous applications in engineering in general and control in particular. Today they are utilized as a class of nonlinear and adaptive controllers under the name neural(-network) controllers. Fuzzy and intelligent control, roughly stated, tries to find a mathematical model for the approximate reasoning done by humans in various instances of functionality. The model is then employed as a controller. A large-scale system comprises numerous subsystems which are interconnected to each other (forming a System of Systems—SoS) and, in the literal meaning of the phrase, form a network. Typical examples are power systems, traffic networks, and communication networks, etc. Whether the control acts centrally or decentrally, what information is available to each local controller, and what local and global control objectives are considered, are the issues discussed and addressed in the control of large-scale systems and networked systems. What distinguishes a network from a large-scale system is that in the former there is a hierarchy in the control levels, and the interconnections and operation of the system are demarcated by certain protocols and communication among different parts. Moreover, a network is not necessarily large-scale and may be of moderate or small size, although the most challenging and interesting networks are the large-scale ones. 
It should be clear that for such systems the control may be either adaptive, robust, nonlinear, etc., or a combination, such as robust adaptive control. Monitoring and fault detection and isolation refers to detecting a failure, like a sensor failure, in the system, locating it, and taking measures such that the normal operation of the system continues to the greatest extent possible. This is important in all systems, especially in large-scale control systems (like refinery and petrochemical industries) where hundreds and even thousands of control loops exist. Fractional-order systems and controllers are those in which fractional-order differential and integral calculus is used. That is, e.g., we work with expressions like $d^{0.2}f(x)/dx^{0.2}$ or $(s^{0.3}-1)/(s+s^{1.5}+2)$. It has been shown, as we may guess, that such systems provide a larger framework for modeling and control of systems, and performance can in general be enhanced. The fractional-order versions of all types of control are under development by researchers from all over the world. The field is still quite young. Constrained control refers to the case where there are some constraints in the system, e.g., on the allowable error signal, control signal (actuator constraint), measured output (sensor constraint), states, control horizon, etc. (Needless to say, there may be some constraints at the higher level as well: availability of equipment, production level, and management.) Learning control refers to tracking control schemes which are developed for repetitive processes. The basic example is a robot doing/repeating the same action over time. The controller learns from past experience to improve its performance in the current cycle. Various other examples are rotary systems, power systems, chemical systems, semiconductor processes, optical disc systems, biomedical systems, etc. Let us present more detailed discussions of scientific computing and intelligent computing. Scientific computing is a multidisciplinary field which is tied with mathematics and computer science and engineering. It focuses on the numerical analysis, computation, and simulation of mathematical formulae, and the development of software (computer algorithms) and hardware for their implementation. Thus part of the field concerns the efficiency, precision, sensitivity, and error analysis of computations. (Note that in this definition we are including "computational complexity," which is actually offered as an independent course to students of computer science.) To get a flavor of what is done in this regard, consider the simple example of solving the nonsingular square system $Ax = b$ with $n$ unknowns. For large $n$, the number of (addition, subtraction, multiplication, and division) operations needed to solve the equation by finding $A^{-1}$ is on the order of $2n^3$. The same number for Gaussian elimination with back substitution is $2n^3/3$. Finally this number, if we have the LU factorization, is $2n^2$. Thus in this case we roughly make the solution $n$ times more time-efficient, which is quite noticeable: as an example suppose $n = 10^4$! Of course the actual problems in this field are much more involved than this simple classical example.
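These operation counts can be felt directly in code. The sketch below (NumPy/SciPy, with random data and arbitrary sizes purely for illustration) factors $A$ once and then reuses the LU factorization for many right-hand sides, instead of forming $A^{-1}$ explicitly:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 200))       # 200 right-hand sides

# Factor once: ~ (2/3) n^3 operations (Gaussian elimination / LU)
lu, piv = lu_factor(A)

# Each subsequent solve costs only ~ 2 n^2 operations per right-hand side
X = lu_solve((lu, piv), B)

# Forming the explicit inverse costs ~ 2 n^3 and is also less accurate
X_inv = np.linalg.inv(A) @ B
print(np.allclose(X, X_inv))            # same solution up to rounding
```

The point is the reuse: once the one-off $\sim 2n^3/3$ factorization is paid for, every additional right-hand side costs only $\sim 2n^2$ operations.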
In the second undergraduate course on state-space methods you will learn some pole assignment and/or eigenstructure assignment formulae and algorithms due to Ackermann, Bass-Gura, Mayne-Murdoch, Davison-Smith, Gourishankar-Ramar, Maki-VanDeVegte, Srinathkumar-Rhoten, Barnett, Klein-Moore, Porter, Munro, Wonham, Varga, Seraji, Fahmy-O'Reilly, Miminis-Paige, Shafai-Bhattacharyya, Kautsky-Nichols-VanDooren, Kwon-Youn, Tarokh, Syrmos-Lewis, Duan, Alexandridis-Paraskevopoulos, Khaki Sedigh-Bavafa Toosi, Wang, Ichikawa, Askarpour-Owens, Esna Ashari, Mehrmann-Xu, Calvetti-Lewis-Reichel, Abdelaziz, and very many other researchers (Bavafa-Toosi, 2000). Which one is computationally more efficient and precise than the others? Or should we look for another computational method for the same problem? Similarly, there are different tests for the state controllability condition, like those of Kalman, Hautus, etc. Which one should we use from the computational point of view? Also, you will study such equations as the Lyapunov equation or the coupled Riccati equations. How should we numerically compute the solution? On the other hand, intelligent computation or soft computing aims at bringing intelligence to computation. It is tied with the fields of fuzzy logic, evolutionary algorithms, and neural networks. The idea of soft computing was initiated by L. A. Zadeh in the 1990s. In 2006 the IEEE Transactions on Computational Intelligence was launched, and in 2017 the new IEEE Transactions on Emerging Topics in Computational Intelligence was inaugurated. It should also be mentioned that a "brief" list of pertinent texts is provided at the end of the chapter in item 21 of Further Readings, where the interested reader can acquire specialized information on the graduate topics in control theory and practice. Actually, the literature on most of these topics is so vast that it is possible to offer two courses on them, I and II, each of three credits. We mention a few of them: adaptive control, robust control, nonlinear control, filtering and estimation, identification, and neural networks. In this case, for adaptive control, the first course spans the classical topics more or less at the level of (Ioannou and Sun, 2012), while the second course covers the $\mathcal{L}_1$ and backstepping adaptive techniques of (Hovakimyan and Cao, 2010; Smyshlyaev and Krstic, 2009), as well as the general topics of multiple model adaptive control, nonlinearly parameterized adaptive control, dual adaptive control, etc. The case of robust control is briefed in Chapter 8. As for nonlinear control, the main issues which are almost unanimously out of syllabus (or very briefly introduced) are the important topics of Input-to-State Stability (ISS) and its variants, see Appendix D.2, advanced versions (ISS, nonlinear, mean-square, scaled, etc.) of the small-gain theorem, recent developments in sliding-mode control techniques, robust control of nonlinear systems in the $H_\infty$ setting, robust control of nonlinear systems with structured perturbations, max-plus methods, etc. See e.g., (Helton and James, 1999; Isidori, 2016; Ito, 1999; Karafyllis and Jiang, 2012; McEneaney, 2006; Sontag, 2006). With reference to filtering and estimation, the first course is more or less at the level of (Kailath et al., 2000), while the second course covers the topics of (Hassibi et al., 1999; Saberi et al., 2007; Sira-Ramirez et al., 2014), etc. With regard to identification, the first course covers the classical methods more or less at the level of (Ljung, 1999), while the second course includes the advanced techniques of nonlinear/multivariable/subspace/sparse/blind/fuzzy/etc. identification as well as some fundamental results such as (Sun et al., 2001) scattered in the literature.
As for neural networks, the first course spans most of (Gupta et al., 2003) and the second course covers the advanced results on fuzzy/Fourier/discrete/binary/flexible/LIF (Leaky-Integrate-and-Fire)/SRM (Spike Response Model)/etc. neural networks, advanced learning methods, and advanced neural controllers (adaptive, stochastic, optimal, discrete-time, etc.). See e.g., (Bavafa-Toosi, 2006, 2016; Sarangapani, 2006; Vidyasagar, 2002). We close this paragraph by stressing that, despite the above discussion, we are not necessarily recommending offering two courses; there are pros and cons to either option and the decision needs further consideration.[24] See the last paragraph of Section 1.16.1.2.
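To give a concrete flavor of the computational questions raised above (which controllability test to run, how to assign poles, how to solve a Lyapunov equation numerically), here is a small sketch on a hypothetical two-state system using standard NumPy/SciPy routines:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # toy stable system, eigenvalues -1 and -2
B = np.array([[0.0],
              [1.0]])

# Kalman rank test: (A, B) is controllable iff rank [B, AB] = n
ctrb = np.hstack([B, A @ B])
print(np.linalg.matrix_rank(ctrb))              # 2 -> controllable

# Hautus test: rank [sI - A, B] = n at every eigenvalue s of A
for s in np.linalg.eigvals(A):
    M = np.hstack([s * np.eye(2) - A, B])
    print(np.linalg.matrix_rank(M))             # 2 at each eigenvalue

# Pole assignment: move the closed-loop poles to -4 and -5
K = place_poles(A, B, [-4.0, -5.0]).gain_matrix
print(np.sort(np.linalg.eigvals(A - B @ K).real))   # approximately [-5, -4]

# Lyapunov equation A^T P + P A = -Q, solved numerically
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)
print(np.allclose(A.T @ P + P @ A, -Q))         # True
```

Which algorithm runs behind `place_poles` or `solve_continuous_lyapunov`, and how it behaves on large or ill-conditioned problems, is precisely the kind of question scientific computing addresses.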

1.16.1.1 Summary

We summarize the current ramifications of systems and control theory, as discussed above, in Table 1.1. It is possible to add further details to this table, but for the sake of brevity we do not do so.

Table 1.1 A nonexhaustive ramification of control systems

However, it is instructive to add a few technical points, from among the many relevant issues: (1) For some of the properties both "weak" and "strong" versions are studied in the literature, e.g., weakly/strongly time-varying, weakly/strongly nonminimum phase, weakly/strongly nonlinear, weakly/strongly stable, etc. (2) A plant itself may also be, e.g., optimal or hybrid. For instance, optimality of weight/size is usually considered in the design of an airplane. (3) As you will learn in graduate studies, there are "neural-network-based controllers" or simply "neural controllers." We have not written them in the second column because they are actually a class of nonlinear adaptive controllers. The reason that they are offered as a course is their importance, widespread usage, and the large volume of pertinent literature. (4) Some control strategies fall partially in both the model-based and the model-free classes. (5) The control strategies and the courses have several sub-branches that you will study in graduate courses. (6) For large-scale systems centralized control is often not a good choice and decentralized control is adopted. Decentralized means that the controller matrix is diagonal: either fully diagonal, block diagonal, or block diagonal with overlap. In brief this means that not all the measurements are fed back to all the inputs and some kind of input/output pairing is done on the system. We also have the concept of "communication-aware decentralized control." It should be stressed that deciding upon the size of the system as small-/medium-/large-scale depends on the specific application and the associated numerical algorithms and hardware. For instance, in nonconvex situations a 15-dimensional problem (i.e., 15 unknowns) and even a 5-dimensional problem may not be numerically tractable, especially if it is ill-conditioned, see Appendix F. On the other hand, in convex situations some $10^6$-dimensional problems are numerically tractable. This is the status of the problem in the year 2017. In the 1980s we could handle problems only up to the size of a few hundred, and it is reasonable to expect that this progress will continue in the future. We shall speak more about numerical algorithms in Section 1.16.3. Additionally, we should mention that the terms "mapping/time series" and "DAE (Differential Algebraic Equation): descriptor/behavioral" will be explained in Chapter 2, System Representation, and its Further Readings. The term "(un)constrained" will be elaborated a little more in Chapters 2 and 10. With regard to the term "convex" we refer you to item 3 of Further Readings of Chapter 9 and Appendix F. For the notions of "fixed-/finite-/infinite-time" control we direct you to Appendix D. Moreover, in the literature you will encounter numerous other technical terms such as structured systems, linear parameter varying (LPV) systems, diagonally dominant systems, observer-based/nonobserver-based output control, model predictive control (MPC), loop transfer recovery (LTR) controllers, model-following control, predictive-feedback stabilization, dissipativity, synchronous/asynchronous control, the dual control problem, the certainty-equivalence problem, the internal model principle (IMP), complementarity systems, mixed logical-dynamical (MLD) systems, finite-state systems, etc. You will learn more in graduate studies!

Remark 1.12: It should have become clear that the mathematical side of a control problem is much more involved than its engineering side. It is realistic to say that, in complicated industrial problems, more than 70% of the problem is its mathematical side. We have:

- Modeling: Mathematics + Other Sciences/Engineering
- Formulation: Mathematics
- Computation: Mathematics + Computer Science & Engineering
- Implementation: Mathematics + Other Sciences/Engineering

[24] We should also add that some of the abovementioned texts can be used for other courses as well, e.g., Smyshlyaev and Krstic (2009) for nonlinear control and Hassibi et al. (1999) for robust and optimal control.

We should emphasize that computer science itself has a mathematical nature. See also the last part of item 12 of Further Readings. With regard to the role of mathematics in the 'modeling step' and 'implementation step' we should add that, as for the former, in certain systems (see e.g., Section 2.3.12) this mathematics is beyond the average undergraduate engineering level, and as for the latter, it is at least in the form of analytical thinking and optimization. See also footnote 26 for further information. (Moreover, in certain restricted cases the computation is done by implementation; in particular, the solution to some optimization problems may be found by implementation on some neural network circuitry.) Without going into the details of Table 1.1, the mathematical nature of the 'formulation step' and 'computation step' of certain systems and control problems is concisely evinced by the following representative Examples 1.8 and 1.9, respectively. These examples do not show the full mathematical side of the field of systems and control but are good tokens.

Example 1.8: Take the model
$$\dot{x}_1 = -x_1(t-\tau_1) + a_1(t)\,\mathrm{sign}(x_1)\,x_2^2,$$
$$\dot{x}_2 = -2x_1 + a_2 e^{-t} x_1\, x_2(t-\tau_2(t)) + x_1 u(t),$$
$$y(t) = x_1(t) + 2x_2(t-1), \qquad t \ge 0, \quad y(0) = 0.$$
We have $a_2 \in [0,\,1]$, which is unknown but fixed; $a_1(t) \in [-1,\,2]$ is unknown but continuously differentiable; $\tau_1 \in [0,\,0.1]$ is unknown but fixed; and $\tau_2(t) \in [0,\,0.2]$ is unknown and perhaps discontinuous. The measured output is $y(t)$. Synthesize and design $h_s$ or $h_o$ in $u(t) = \mathrm{sat}(h_s(x_1(t), x_2(t), r(t)))$ or $u(t) = \mathrm{sat}(h_o(y(t), r(t)))$, where $\mathrm{sat}(\cdot)$ is the saturation function, such that $y(t)$ tracks the reference input $r = \sin t$ as closely as possible in some appropriate sense.

Let us add that although this system is complicated it does not have all the aforementioned complications. For instance, it is not singularly perturbed and it does not involve the issue of input/output pairing. Moreover, it does not restrict $h_s, h_o$ with regard to convexity, continuity, staticity, time invariance, etc. It is conceivable, and you will learn in graduate studies, that in general not only the performance but indeed even the mere objective of stabilization crucially depends on the aforementioned properties of the control law. What is probably less conceivable is that the effects of $\tau_1, \tau_2(t)$ and $a_2, a_1(t)$ are different, in that a control law that can accommodate an uncertainty of the kind $a_2, \tau_1$ in general cannot accommodate an uncertain parameter of the kind $a_1(t), \tau_2(t)$, unless it is explicitly taken into account. Moreover, the rate of variation, smoothness/differentiability, continuity, etc. of the parameters play a due role. Also note that in the case of $h_s$ we actually have a state-feedback problem where the state $x = (x_1, x_2)^T$ of the system should first somehow be constructed from the measured output $y(t)$. We shall talk about the 'state' of a system in Chapter 2.
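As a rough, purely illustrative sketch, the open-loop response of this model can be simulated with an explicit Euler scheme and history buffers for the delayed terms. The nominal parameter values below are hypothetical choices from the stated uncertainty intervals, with u = 0 since no controller has been designed yet:

```python
import numpy as np

# Hypothetical nominal parameters picked from the uncertainty intervals of
# Example 1.8 (a1 in [-1,2], a2 in [0,1], tau1 in [0,0.1], tau2 in [0,0.2]).
a1, a2 = 0.5, 0.5
tau1, tau2 = 0.05, 0.1

dt, T = 1e-3, 2.0
n = int(T / dt)
x1 = np.zeros(n + 1)
x2 = np.zeros(n + 1)
x1[0] = x2[0] = 0.1              # constant pre-history for t <= 0

def delayed(x, k, tau):
    """x at time k*dt - tau, using the constant pre-history for negative times."""
    j = k - int(round(tau / dt))
    return x[j] if j >= 0 else x[0]

for k in range(n):
    u = 0.0                      # open loop: the controller h_s/h_o is yet to be designed
    dx1 = -delayed(x1, k, tau1) + a1 * np.sign(x1[k]) * x2[k] ** 2
    dx2 = (-2.0 * x1[k] + a2 * np.exp(-k * dt) * x1[k] * delayed(x2, k, tau2)
           + x1[k] * u)
    x1[k + 1] = x1[k] + dt * dx1
    x2[k + 1] = x2[k] + dt * dx2

y_final = x1[n] + 2.0 * delayed(x2, n, 1.0)   # y(t) = x1(t) + 2 x2(t-1) at t = T
print(y_final)
```

Even this naive simulation shows the bookkeeping that the delays $\tau_1$, $\tau_2(t)$ and the one-second output delay impose before any controller synthesis starts.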

Example 1.9: In a certain control problem the control signal is given by $u = h - B^{-1}(v + f)$, $u = [u_1 \; \cdots \; u_m]^T$, where except for the vectors $v, f$ all other terms are known. The vectors $u, v, f$ depend on time. Component $h_i$ is computed by matrix inversion and multiplication. Component $f_i$ is a hyperbolic tangent function whose argument depends on the vector $z$ defined below. Component $v_i = a_i\,\mathrm{sat}(e_i/\varepsilon)$, where except for $a_i$ all other parameters are known. The scalar $e_i$ is a computed error and depends on time. The unknown time-dependent $a_i$ is the solution of
$$a_i = \min_{\omega \in D_1}\, \max_{z \in D_2}\, \mathrm{sign}(e_{\varepsilon i})\left(f_i - \hat{f}_i + w_i(\omega^T \tilde{z})\right).$$
In this formula we have the time-dependent variables $\omega = [\omega_1 \; \cdots \; \omega_n]^T$, $\omega_i = \arg(a_i)$, and $\tilde{z} = [\tilde{z}_1 \; \cdots \; \tilde{z}_n]^T = \hat{z} - z$, where $z$ is fixed and thus $\dot{\tilde{z}} = \dot{\hat{z}}$; there also hold $e_{\varepsilon i} = e_i - \varepsilon\,\mathrm{sign}(e_i/\varepsilon)$ and $\dot{\hat{z}} = (w^T e_\varepsilon)\,\omega$. Hence, we have a set of coupled optimization problems and integrations in addition to the basic computations. For the skipped details see (Bavafa-Toosi, 2006).

How should we compute $u(t)$? Note that this question comprises every item, from the seemingly trivial operations of addition and multiplication to the more advanced operations of inversion, nonlinear function computation, integration, and coupled optimization. We briefly add that there are dedicated scientific methods even for the seemingly trivial operation of addition, let alone for the other issues. See the sequel of this book on state-space methods.

1.16.1.2 The forgotten

To the best of our knowledge there is no written agreement on what courses MS and PhD students should pass. It appears to depend on the expertise and taste of the faculty members as well as their respective department and its strategic plans. However, an unwritten partial agreement can be detected in different departments. For instance, the unwritten agreement that is almost unanimously complied with in electrical engineering departments is that MS students take the following courses among those that they pass: adaptive control, optimal control, nonlinear control, neural networks, and a combination of robust control and multivariable control in one course (although they are actually two different courses). The rest of the courses that they pass seem to depend on the expertise and taste of the instructor. In fact, the syllabus of any course that they pass partly depends on the taste of the instructor. The same is true for the courses that PhD students pass. And this does not sound good to us. There are some important points and results (at least in the author's opinion) which are completely ignored by some, and sometimes most, instructors. To mention a few we cite:

- 3-DOF control structure (see item 11 of Further Readings)
- Role of information and communications in control with regard to delay and fundamental limitations (see item 7 of Further Readings of Chapter 10)
- Exact infimum of output feedback $H_\infty$ design (Chen et al., 1992)
- Special Coordinate Basis of P. Sannuti and A. Saberi (see item 2.20 of Further Readings of Chapter 2)
- Stabilization of constrained systems (Lin and Saberi, 1993; Lin and Hu, 2001; Saberi et al., 1996, 2012; Stoorvogel and Saberi, 2016)
- Instability of the weighted sensitivity $H_\infty$ controller (Ito et al., 1993)
- Integral Quadratic Constraints (Megretski and Rantzer, 1995, 1997)
- Matrix perturbation theory and robust control, see e.g., (Konstantinov et al., 2003; Esna Ashari and Labibi, 2012) and item 5 of Further Readings of Chapter 8
- Advanced structural methods and structural results for linear and nonlinear systems (Khorasani, 1990; Castro-Linares and Moog, 1994; Mehl, 1999; Chen et al., 2004; Boerm and Mehl, 2012)
- Generalized Kalman-Yakubovich-Popov lemma (Iwasaki and Hara, 2005)
- Distributed filtering, estimation, and control (Olfati-Saber, 2007, 2009; Olfati-Saber and Jalalkamali, 2012)

- Pole assignment and periodic feedback (Lavaei et al., 2010)
- Role of sparse/compressed and event-triggered sensing and control (Babazadeh and Nobakhti, 2017; Fazel et al., 2013; Heemels et al., 2008; Mostofi, 2011; Oymak et al., 2015; Ryll et al., 2016)
- Hypothesis testing (Naghshvar and Javidi, 2013)
- Distributed learning by social sampling (Sarwate and Javidi, 2015)
- Relaxation and convexity-related issues and formulations of and for controller design and optimization (Rotkowitz and Lall, 2006; Sojoudi and Lavaei, 2014; Dvijotham et al., 2015)
- Design of fixed-order $H_\infty$ controllers (Ohmori and Sano, 1992; Ohmori and Shrivastava, 1995; Babazadeh and Nobakhti, 2015)
- Control of systems with mismatched disturbance/uncertainty, see e.g., (Takamatsu and Ohmori, 2016)
- Recent advances in the theory of chaos in PDEs (Lan and Li, 2013; Li, 2017)

The same is more or less true for the general and important issues of advanced stabilization techniques (nonquadratic and indefinite Lyapunov functions, universal stabilizers, etc.), stability notions other than Lyapunov's (see Appendix D), Kharitonov theory, DAE systems, singular perturbation theory, game theory, infinite dimensional systems, the computational side of the problem, and so on. Finally, we add that some universities also offer courses on "advanced mathematics," briefing the mathematical topics that are most important for students to master. Other universities require the students to acquire this knowledge through self-directed study. It would be very nice if these issues were addressed by a worldwide consensus. Probably such a consensus will allow for some degree of freedom, due to the vast number of possible courses on the one hand, and the focused expertise that the student is to acquire by doing his/her dissertation on the other. Increasing the number of courses and requiring a postdoctoral research period (which is actually compulsory in many countries, especially in Europe and Russia, where it is known as "habilitation," and which has become common in other countries as a bonus for employment since a few decades ago) seem to be inevitable in the foreseeable future.

1.16.2 Relation with other disciplines

It should have become clear from the discussions of this chapter and the aforementioned courses that control theory is intertwined with system theory, information theory, and computer science. The explanation is that not only is the plant to be controlled a system on its own, but every part in a control system (see Figs. 1.3-1.6) is also a system on its own: some are mechanical, some electronic/electrical, some biological, some managerial, etc. Sometimes wireless/wired communication is also needed, e.g., for remote control of an underwater or space vehicle, or for tele-robotic operations. For the functioning of the system, information/data should be communicated to, propagate through, and be gathered from the whole system, and qualified, analyzed, stored, etc. And the integration of all these items is also a system whose implementation is intertwined with computer science. As such, today a successful control theorist is actually concerned with "systems and control theory" (which is understood to overlap[25] computer science, communications and signal theory, information theory, etc.), and this is very different from the modest origin of control theory in the early 20th century. Indeed, this is even beyond the original definition of cybernetics (as control and communication in human/animal and machine) as proposed by N. Wiener in 1948. The progress and evolution of the field to its status quo as discussed above has been punctuated by several momentous theoretical contributions. A thorough discussion of the mathematical side of the issue is outside the scope of this book, but we should add that the mathematical tools involved in this field are versatile. For instance, advanced tools in equations (differential, partial differential, integral, integro-differential, differential-algebraic, etc.), analysis (linear, matrix, numerical, real, complex, nonlinear, etc.), probability and stochastic theory, differential and algebraic geometry, graph theory, topology, operator theory, measure theory, optimization, etc. are being used in this field. Control is the most mathematized branch of engineering, and today it can well be considered a large subset of applied mathematics overlapping with almost all subsets of applied mathematics and some branches of pure mathematics; see e.g., (Falb, 1990; Kunkel and Mehrmann, 2006; Stefani et al., 2014; Troeltzsch, 2010), item 14 of Further Readings, item 8 of Further Readings of Chapter 10, and Appendices D and F. To better highlight this issue we should stress that while all the research results which are published in the field of systems and control have a mathematical nature, parts of them are published by mathematical societies (like AMS: American Mathematical Society, EMS: European Mathematical Society, or SIAM: Society for Industrial and Applied Mathematics) rather than engineering and general publishers (like IEEE or Elsevier).
It is good to know that several of the journals of the general publishers like Elsevier are actually mathematical journals that also publish systems and control theoretic results, if appropriate.

[25] Lest there be a misunderstanding, we should stress that this overlap is not onto, i.e., surjective. For instance, the field of communications is a well-established branch of electrical engineering which has its own basics, details, and ramifications. It is good to know that communications is a highly mathematized branch of engineering.

1.16.3 Challenges

Every seemingly simple problem has its own challenges, and the control performance may be improved by the practice of other control strategies or even clever retuning of the controller parameters. On the other hand, every branch of control as listed above has its own difficulties, challenges, and open problems, which are of course solved and replaced by new ones through time. Some of the main general challenges that the control community is encountering today can be listed as:

- Delay systems, PDE (partial differential equation) and IE (integral equation) systems, time-varying systems, constrained systems, hybrid systems, chaotic systems, fault detection and isolation
- Reliable, efficient, strongly backward stable numerical computation and simulation
- Bringing intelligence to the computation and control practice by designing humanoid robots for daily-life activities or space/underwater/mine/surgery/etc. operations
- Control of and over general networks such as a smart power grid, communication network, social network, or cyber-physical network, such as traffic control in automated highways
- Medicinal and biological systems modeling and control for, e.g., the brain, heart, and pancreas, as well as systems biology
- Control in physics and chemistry, such as crystal growth, glass formation, fluid dynamics, the Landau-Lifschitz-Gilbert equation, reaction-diffusion models, etc.
- Control of the economy
- Quantum control
- Fractional-order control

Concerning the first item we should say that although the first results in these cases appeared more or less in the mid-20th century, it is fair to say that these fields are much less developed than others with respect to both basic theory and computational algorithms. Tangible and tractable, and at the same time general and fundamental, results are yet to be developed. With regard to numerical solvers, note that efficiency refers to computational load, with respect to both speed and memory usage. In particular, speed depends on computational complexity, i.e., all three of the number of operations, the type of operations, and the I/O complexity (or communication cost) of the algorithm. (Note that we actually have both notions of time complexity and space complexity. On the other hand, the role of the hardware is clear.) Moreover, a numerical algorithm is said to be strongly backward stable if it is able to find the exact solution to a nearby problem (i.e., the slightly perturbed system) as well. These issues are more noticeable in large-scale systems, where the problem almost always inevitably becomes ill-conditioned, a phenomenon that aggravates the already complicated situation. One of its challenging application fields is optimization.[26] Intelligent computation has proven effective in strengthening classical computation techniques (sometimes outperforming them) and vice versa. Its use in humanoid robots is a fruitful research direction. In particular, one of the applications of approximate computation is in image de-noising and image processing, which are used in humanoid robots as well; it also reduces the computation time and makes such processing effective for real-time use. It should also be clarified that a general network is hybrid, stochastic, nonlinear, uncertain, delayed, etc., and you can thus imagine that control of and over networked systems is still very young despite the availability of some good results.
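The ill-conditioning mentioned above can be made concrete with the classical Hilbert matrix, whose condition number explodes with its size; the accuracy of a numerically computed solution degrades accordingly (a small sketch for illustration only, not a statement about any particular large-scale control problem):

```python
import numpy as np
from scipy.linalg import hilbert    # the classical ill-conditioned test matrix

errs = []
for n in (5, 8, 12):
    H = hilbert(n)
    x_true = np.ones(n)
    b = H @ x_true                  # right-hand side with known exact solution
    x = np.linalg.solve(H, b)
    # heuristic: relative error ~ cond(H) * machine epsilon
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    errs.append(err)
    print(f"n={n:2d}  cond={np.linalg.cond(H):9.2e}  relative error={err:8.1e}")
```

Already at dimension 12 the condition number is near $1/\varepsilon_{\text{machine}}$, and the computed solution loses most of its significant digits; this is the regime where backward stability of the algorithm starts to matter.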
Some particular challenges in advanced control systems, especially in networks, are as follows: (1) The interaction between human and machine. A particular case in this general domain is addressed in (Cai and Mostofi, 2016). (2) The effect of the network topology. Some issues are addressed in (Shahinpour et al., 2016). (3) The need for special treatment of some conventional concepts, such as fragility and robustness, in cyber-physical network systems (Rungger and Tabuada, 2016). (4) Cyber security and the analysis and computation of the colossal volume of data in large-scale networks, which hinders real-time applications. (5) The importance of co-design of control and the implementation platform in large-scale systems, especially cyber-physical networks (Soudbakhsh et al., 2016). In many interconnected systems, such as biological systems, a key challenge is that of scale, wherein the elements which are at the heart of modeling and the outputs which are observed may differ in scale colossally. In biological systems the ratio (between protein foldings and life expectancy) is larger than $10^{20}$. So what should be modeled and how, and how we should computationally deal with such systems, are questions of main concern. The issue, in appearance, resembles the problem of a general theory sought in physics, which desires to combine quantum physics and cosmic physics in one. Systems biology refers to studying a living thing as a whole complex networked system rather than studying its components individually. Exemplary recognitions of the role of systems and control theory in medicine are the induction of the control theorist F. Doyle into the National Academy of Medicine of the US in 2016, and the 1963 Nobel Prize in medicine, which was awarded to a work utilizing feedback theory. Interesting examples of physics and chemistry systems are presented in Chapter 2, System Representation. Control and even modeling of such systems often requires advanced mathematical tools. It should be noted that to date some Nobel Prizes in physics have been awarded to novel applications of control theory in physics, such as the 1912 and 1984 prizes. As for the economy, we suffice to say that to date several Economics Prizes have been awarded to works related to control theory, as in the years 1978, 1986, 1997, 2003, and 2005. Additionally, the 2013 Bode Prize Lecture[27] was delivered under the title "Can Control Science Bring New Insights to Stock Trading Research?" by the control theorist B. R. Barmish.

[26] Note that in the larger picture the numerical solvers are also used in optimization problems (plant-wide, top-down, etc., see Section 1.8), the design of printed circuit boards (PCBs) for electronic implementation, and even in the design of integrated circuits (ICs), transistors, etc. For instance, it is good to know that once upon a time a transistor design was made merely by physical justification, but now it is the outcome of an optimization problem (which is of course physically meaningful).
In conventional quantum control the quantum measurement by the sensor inevitably disturbs the dynamics of the system; the system is thus stochastic and the control is called incoherent. The alternative approach is that of coherent control, in which the sensor, controller, and actuator are also quantum and interact coherently with the system. Hence the system is not stochastic and the state of the system is not destroyed. There has been significant progress, but in comparison to the classical theory for nonquantum systems the field is still young. In the field of fractional-order control researchers try to develop the fractional-order versions of the available control theory, which is formulated in integer-order calculus. In comparison to the conventional theory in integer order, this field too is still very young.
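As a concrete illustration of fractional-order calculus, the Grünwald–Letnikov definition approximates a derivative of noninteger order by a weighted sum over the function's history. The sketch below is our own illustration (the function name and step size are not from the book); it checks the approximation against the known result that the half-derivative of f(t) = t is t^(1/2)/Γ(3/2):

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f at t.

    The weights (-1)^k * binom(alpha, k) are generated by the recursion
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k).
    """
    n = int(t / h)
    total, w = 0.0, 1.0
    for k in range(n + 1):
        total += w * f(t - k * h)        # weighted history sample
        w *= 1.0 - (alpha + 1.0) / (k + 1)  # next Grunwald-Letnikov weight
    return total / h ** alpha

# Half-derivative of f(t) = t at t = 1; exact value is 1/Gamma(1.5) = 2/sqrt(pi)
approx = gl_fractional_derivative(lambda x: x, 0.5, 1.0)
exact = 2.0 / math.sqrt(math.pi)
```

Note that for alpha = 1 the weights collapse to {1, −1, 0, 0, ...} and the formula reduces to the ordinary backward difference, which is one way to see that fractional calculus contains integer-order calculus as a special case.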

1.16.4 Outlook

Among the perspectives that can be envisioned for future research in control are:

- Further development of the theory and control of model-free, fuzzy, and intelligent systems
- Further development of the algebraic and geometric theory of control
- Control of infinite dimensional systems
- Control in nonintegrable spaces such as measure spaces
- Control in nonconventional spaces such as indefinite metric spaces and non-Banach spaces
- Control of multiscale systems
- Virtual design
- Biomimetics
- Genetic, DNA, and RNA engineering
- Biomedicinal and biochemical engineering

[27] This is the highest prize of the Control Systems Society of the IEEE. It was inaugurated in 1989 in recognition of H. W. Bode's seminal contributions to control theory. The prize is awarded on a yearly basis to the most prominent figure in the field, and the recipient delivers a plenary lecture at the ceremony.

It is worth commenting a little on fuzzy theory. Fuzzy logic is based on considering a degree of truth for a statement, or equivalently a degree of membership of an element in a set. The advent of fuzzy theory culminated in a revolution in all science and engineering fields, for it soon found its way into basic mathematics and gave birth to fuzzy mathematics. Today there are several specialized journals on fuzzy mathematics, fuzzy systems, and fuzzy control. Terms like "fuzzy arithmetic, fuzzy number, fuzzy mean value, fuzzy ODE/PDE, fuzzy Fourier transform, fuzzy wavelet, fuzzy topology, fuzzy measure, fuzzy Lyapunov function, etc." are quite often encountered in the literature. On the other hand, mathematics is the core of all the other science and engineering fields. It is used in thinking/reasoning, modeling, formulation, and computation. Further investment in this direction will be quite fruitful. The actual impact is yet to be felt in the time to come.

Currently research on the algebraic and geometric frameworks of control theory is quite restricted in scope. (An avenue for further research is exploring the connections with and using automorphic forms, L-functions, etc.) The same is true for infinite dimensional systems (including PDE and delay systems as special cases), measure spaces, and nonconventional spaces. The main reason is the sophisticated mathematical nature of the problems; thus only a small percentage of researchers, more mathematically oriented than others, are active in these fields. These fields are immature, some even embryonic, and expansion of the work is greatly welcome.

Multiscale systems refer to systems whose dynamics contain more than a single set of modes, possibly even more than the two extreme sets of fast and slow modes. They can be regarded as a generalization of singularly perturbed systems. Theory, analysis, synthesis, computation, simulation, and controller implementation of such systems are quite sophisticated.
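Coming back to fuzzy logic for a moment, the degree-of-membership idea can be made concrete in a few lines. The sketch below is our own illustration (the sets and numbers are invented); it defines a triangular membership function and the standard min/max fuzzy intersection and union:

```python
def triangular(a, b, c):
    """Return a triangular membership function peaking at b on the interval [a, c]."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)  # rising edge
        return (c - x) / (c - b)      # falling edge
    return mu

# "warm" and "hot" as fuzzy sets over temperature in degrees Celsius
warm = triangular(15.0, 22.0, 30.0)
hot = triangular(25.0, 35.0, 45.0)

t = 27.0
membership_warm = warm(t)   # partial truth in [0, 1], here 0.375
membership_hot = hot(t)     # here 0.2
both = min(membership_warm, membership_hot)    # fuzzy AND (intersection)
either = max(membership_warm, membership_hot)  # fuzzy OR (union)
```

A temperature of 27 °C is thus simultaneously "warm" to degree 0.375 and "hot" to degree 0.2, which is exactly the departure from the crisp {0, 1} membership of classical set theory described above.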
It is somewhat correct to say that the emergence of such systems has given rise to multiscale mathematics. Only elementary results in a few branches of control are reported in the literature. We suffice to say that in 2009 a SIAM prize on numerical analysis and computation was given to the mathematician A. Abdulle for his contribution to the computation and simulation of such systems. Also, the 2017 SIAM/ACM (Association for Computing Machinery) prize on computational science and engineering was awarded to the mathematician T. J. R. Hughes for his contribution to the finite element method for PDE systems, including multiscale ones.

Virtual design refers to providing theory, algorithms, software, and perhaps hardware for the first three steps of a problem solution (see Section 1.8), so that they are done automatically. It is particularly desirable for multibody, large-scale, and networked systems.

Thus, e.g., the individual components of the whole system are judiciously integrated together algorithmically without the designer's influence. To be general purpose, the method must allow the model of the system to be a set of Differential-Algebraic Equations (DAEs), either linear or nonlinear, time-invariant or time-varying, in integer or fractional calculus, single- or multiscale, etc. Despite advances in the theory of DAEs, this objective is still far from being achieved. The reason is threefold: (1) The theory of DAEs is more involved than the theory of conventional systems, and hence the counterparts of the existing control theory for conventional systems are not yet fully available for DAEs. (2) DAEs are in general ill-posed and have to be regularized. At present there is no general (algorithmic) theory for regularization of DAEs in their full generality. (3) At present a reliable, efficient, strongly backward stable numerical solver for general DAEs looms out of reach. See also items 4 and 5 of Further Readings in Chapter 2, System Representation, for more information.

In the large, the term biomimetics means the imitation of natural elements and systems with the aim of solving complex human problems. Thus, from a certain perspective it refers to designing and manufacturing with biological systems similar to what is practiced today with electronic and mechanical components. It has been accomplished only in a handful of applications, such as biomorphic mineralization. This is a technique which produces materials with structures and morphologies similar to those of natural living organisms, by employing bio-structures as templates for mineralization.

Regarding genetic engineering, we should mention that every cell contains long strands of DNA and RNA that encode the information essential to the cell's functioning and reproduction. An ultimate goal is to engineer an organism's genome.
This has been achieved in a number of applications such as agriculture, industrial biotechnology, medicine, and genetically modified animals. However, we should add that the products of genetic engineering, which all bear the mark Genetically Modified Organism, are vociferously criticized and disapproved of by a great number of scientists who, regardless of the morality side of the issue, assert that they are harmful to both our health and nature.

Finally, in biomedicinal and biochemical engineering an ultimate goal can be to design and use control systems to create new molecules and new drug delivery mechanisms so as to repair a damaged tissue in its original place in the body, without interfering with the normal operation of other parts of the body. This general perspective may be called programming molecular functions.

What we will study in the rest of this course is of course relatively limited. We study the basics of control theory, which comprise the methods up to 1950, but in a modern fashion, including the latest developments of the respective theory whenever appropriate. It should be noted that the original theory of that time which appears in the available textbooks is full of mistakes. We discuss and correct these mistakes in the present book and contribute significantly beyond them. In the second and third undergraduate courses on linear control systems, the fundamentals of state-space control systems and of discrete-time control systems, which roughly span the period 1950/1960–80, are covered. Part 2 of the present book is devoted to state-space methods and will hopefully appear in the foreseeable future.
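Returning to the DAE models mentioned under virtual design above, their flavor can be conveyed by a toy example. The sketch below is our own illustration (the system and function names are invented): it integrates a semi-explicit index-1 DAE x' = f(x, z), 0 = g(x, z) by solving the algebraic constraint for z at every explicit Euler step.

```python
def solve_semi_explicit_dae(f, g_solve, x0, h, steps):
    """Euler integration of x' = f(x, z) subject to the constraint 0 = g(x, z).

    g_solve(x) must return the z satisfying the algebraic constraint;
    for an index-1 DAE this solve is locally well defined.
    """
    x = x0
    trajectory = [(x, g_solve(x))]
    for _ in range(steps):
        z = g_solve(x)          # enforce the algebraic constraint
        x = x + h * f(x, z)     # advance the differential variable
        trajectory.append((x, g_solve(x)))
    return trajectory

# Toy index-1 DAE: x' = -x + z with constraint 0 = z - x**2, i.e. z = x**2.
# Eliminating z gives the ODE x' = -x + x**2, which decays to 0 from x0 = 0.5.
traj = solve_semi_explicit_dae(
    f=lambda x, z: -x + z,
    g_solve=lambda x: x * x,
    x0=0.5, h=0.01, steps=1000,
)
x_final, z_final = traj[-1]
```

Here the constraint could be solved in closed form; in general g_solve hides a nonlinear solve, and for higher-index DAEs no such local solve exists at all, which is one concrete face of the regularization difficulties noted above.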

1.17 Summary

In this chapter we have introduced what systems and control theory is and why it is needed. The open-loop and closed-loop control structures have been studied and their features have been compared. The issues considered include stability, performance, sensitivity/robustness, disturbance rejection, reliability, economics, and linearization. We have argued that a well-designed feedback can stabilize and robustify an unstable or poorly stable system, enhance its performance, reject the disturbance, and push the system towards a linear one. On the other hand, a poorly designed feedback can bring about the opposite outcomes. The 2-DOF control structure, the Smith predictor, the IMC structure, and the modern representation of control systems have also been discussed. We have had a glimpse at the history of automatic control and a comprehensive study of its status quo as well. We have learned that through time it has had its impact on conventionally nonengineering fields such as economics, physics, medicine, and biology. It is quite interdisciplinary, and in fact today it can be traced into virtually all areas in which human beings devise a device or an idea to somehow interfere with a process or nature. The evolution of the field has been punctuated by several key theoretical contributions. Today it is a highly ramified field with which researchers in various academic departments are engaged. It is the most mathematized engineering field and can well be regarded as a large subset of applied mathematics in which various advanced mathematical tools are utilized. Systems and control theory overlaps communications, signal theory, information theory, computer science, etc. In general, the mathematical side of a control problem is much more complicated than its engineering side. Numerous worked-out problems, which follow at the end of the chapter, enhance and facilitate the learning of the subject.

1.18 Notes and further readings

1. More will be said about stability in Chapter 3, Stability Analysis, and Appendices D and E. Also note that some systems, like oscillators, are designed to work on the verge of instability. On the other hand, numerous scientists have played and are playing their roles in laying the foundations of analysis and synthesis techniques of control systems. The number of contributors is so large that it is not possible to introduce all of them here. However, as we discuss different topics the pioneering contributors are introduced.
2. As we mentioned in Section 1.8, the design process is actually an optimization process. This is more noticeable in network, large-scale, and MIMO systems than in simple SISO control systems, although every seemingly simple SISO system actually has its own difficulties and needs the aforementioned optimization, but to a lesser extent. Also note that the control configuration refers not only to the feedback loops but also to the forward paths in the whole system. Some pertinent references for further studies are: (Jagtap et al., 2013; Khaki-Sedigh and Moaveni, 2009; Lakerveld et al., 2013; Skogestad, 2015; Sojoudi et al., 2011).


3. An industrial control system has a hierarchy of control where at each level or layer a specific task is performed. Not all the signals are available at all the levels. In the lowest or first layer the controller parameters are retuned, e.g., every second; in intermediate layers the variables are manipulated, e.g., every hour; in the upper intermediate layer the management is reconsidered, e.g., every day; and in the top layer the management is done on a long-run basis, where future plans are also made.
4. A number of researchers such as Marshal, Kleinman, Zames, Youla, and El-Sakkary have argued that in the presence of additive disturbances at the output, the feedback gain must be different from unity. In particular, El-Sakkary uses and indeed develops a gap-metric framework for the analysis. See (El-Sakkary, 1981) and its references. Apart from this, for technical reasons it is often assumed that a scaling is done on different parameters in the system so that, simply said, the quantities become comparable. You will learn more in future studies.
5. The basics of the representation of MIMO systems are given in Chapter 2, System Representation. The analysis and synthesis of MIMO systems in general differs—though not without some similarities—from that of SISO systems. Just to give a flavor, for MIMO systems: (1) the equations of signals will have a special order—the elements cannot be interchanged, and (2) the sensitivity function S and complementary sensitivity function T can be and indeed are defined both at the input and the output: S_I, S_O and T_I, T_O, respectively. For SISO (and not MIMO) systems they are equal, S_I = S_O and T_I = T_O, and are those functions defined in the text. With regard to 'structural methods' mentioned in Section 1.16.1.2 we shall talk more in item 20 of Further Readings of Chapter 2.
6. There are different ways of defining sensitivity in a system, like differential sensitivity (which we consider in this chapter), comparison sensitivity, logarithmic sensitivity, etc. The interested reader is referred to (Boyd and Barratt, 1991) for some details. The definition of differential sensitivity is due to Bode (1945). He considers the closely related notions of return difference and return ratio as well. However, the notion of sensitivity had also appeared before, e.g., in 1934 in the work of H. S. Black. From another standpoint, we may define sensitivity with respect to parameter variations and sensitivity with respect to signal variations/uncertainty.
7. Sensor and actuator placement are important topics in control system design. There is strong mathematical theory and there are different criteria behind these conceptually obvious issues. On the other hand there are different types of actuators and sensors for different applications. The interested reader is referred to the respective literature, including (Tzoumas et al., 2016) and those introduced in the references; see item 19 of this Section.
8. The classical Smith predictor controller was proposed by O. J. M. Smith (1917–2009) in 1959. It has been extended in various ways, and such predictors are pervasively used in practice. Recent theoretical developments include the adaptive Smith predictor (Bai et al., 2008), modified Smith predictor for unstable systems (Astrom et al., 1994; Padhan and Majhi, 2012), Smith predictor for time-varying systems (Normey-Rico et al., 2012), robust Smith predictor (de Oliviera and Karimi, 2013), filtered Smith predictor (Rodriguez et al., 2016), MIMO Smith predictor (Santos et al., 2016), etc.
9. The IMC structure was proposed by C. E. Garcia and M. Morari in 1982. Similar structures had been discussed previously by other researchers. At that time the most prominent IMC-type schemes were Model Algorithmic Control (Richalet et al., 1978), Dynamic Matrix Control (C. R. Cutler and B. L. Ramaker, 1979), and Inferential Control (C. B. Brosilow, 1979); see (Garcia and Morari, 1982; Morari and Zafiriou, 1989) and their bibliographies. After the 1982 article

M. Morari and coworkers published a series of other results on IMC. The interested reader is referred to the book (Morari and Zafiriou, 1989). Recent theoretical works on IMC include decentralized IMC (Salcedo et al., 2013), time-varying IMC (Zhang et al., 2014), decoupling IMC (Garrido et al., 2014), intelligent IMC (Yadav and Gaur, 2015), etc. Applications abound.
10. The generalized model of Section 1.15 was proposed in 1983 by J. C. Doyle. See also Van Diggelen and Glover (1994), who showed that it does not cover all systems and proposed a modified solution.
11. In the case of measurable disturbances a third degree of freedom can be used through feedforward compensation of the disturbance. This is discussed in Exercise 1.19 concerning the 3-DOF control structures. See also Li et al. (2015).
12. With regard to scientific computation we should mention that this issue began to be firmly considered in control systems in the 1980s. It is interesting to note that many of the former design algorithms in control systems, such as for pole assignment, were shown to be numerically unstable; see e.g., Kautsky and Nichols (1983) and Mehrmann (1991). This is of course true in all engineering fields. Since the 1980s only a few numerically unstable algorithms have been reported. A modern challenge in computations is that of "structure/property preservation"; see e.g., Benner et al. (2003) and Reis and Stykel (2011) as some early references. For instance, if the original system is passive, the model reduction technique should preserve this property in its reduced-order model. Moreover, see e.g., Ballard et al. (2012) for the communication costs of an algorithm. In the second part of the book on state-space methods we shall discuss the computational side of a control problem in a whole chapter.
13. With respect to the computer science and engineering part of the problem, we should add that apart from faster clocks and higher memory, special computers and/or special CPU/GPU programming techniques are also needed for problems with truly heavy computations. There is evidence that nonconvex problems are harder than convex ones. More generally, some problems are not numerically tractable; they are "hard" or even "NP-hard," where the term means "Nondeterministic Polynomial-time hard." Although it is not proven, it is rather stereotypical to assume that there is no polynomial-time algorithm for NP-hard problems (Van Leeuwen, 1998), and thus they are also said to be "numerically intractable." A typical example is the rank minimization problem, which has several applications in systems and control theory. It seems, and perhaps is, reasonable to say that from the computational point of view the best case for a problem is the convex case; see also Appendix F. Thus, one main direction of research—apart from the basic idea of providing more powerful algorithms—is to find conditions that make a problem convex so that it is numerically desirable, see e.g., Rotkowitz and Lall (2006). Another perspective is to reformulate the problem, at least locally, as a convex problem so that the surrogate problem is easier to solve. A remarkable example in this field is Recht et al. (2010). Another novel idea is to propose alternative nonstandard, yet meaningful, convex formulations for standard nonconvex problems. A pioneering work in this field is Dvijotham et al. (2015). See also (Oymak et al., 2015; Babazadeh and Nobakhti, 2017). We close this item by adding that somewhat precise definitions of the notion of tractability can be found in the contemporary literature on algorithm and computational complexity, see e.g., Irrgeher et al. (2016).
14. Measure theory is used in (Zarre et al., 2002) for controller design. By admitting measures as controls instead of integrable control functions, the support of the control law can even have a zero measure. It also helps regularize some ill-posed problems. See Casas and Troeltzsch (2014) and their bibliography. For further technical references see item 19.
15. In the field of mathematics the original idea of fractional-order calculus dates back to 1695. In control engineering it first briefly appeared in the 1945 book of H. W. Bode. In 1958, A. Tustin et al. implemented Bode's opinion. The methodical introduction of fractional-order calculus to classical control systems was pioneered by S. Manabe (1961) and a series of his works henceforth.
16. With regard to the status quo of control we should add that a particularly interesting direction of research is control over "rings." See Further Readings of Chapter 2, System Representation. Moreover, developing the discrete-time counterpart of the existing theory, which is mostly devoted to continuous-time systems, would be desirable. It is due to add that research in this direction is underway; several well-understood continuous-time problems are open in the discrete-time setting. On the other hand, another "model-free" control strategy has been proposed and advocated since 2008 by M. Fliess and coworkers, see e.g., Fliess and Join (2013). See also Blanchini et al. (2017) for the status quo. Model-free neural controllers (and in general intelligent controllers) can also be found in the literature.
17. The chapter can be best followed by a study of the internal model principle (IMP) and the lifting techniques. The IMP was proposed by Francis and Wonham (1975) and concerns controller design in the 1-DOF structure for the case where the input disturbance, output disturbance, and reference input have the same 'generating function'. It is the general setting of the sinusoidal disturbance and noise rejection arguments that we had at the end of Section 1.11. Lifting techniques parallelize and combine plant inputs and outputs to achieve zero assignment/annihilation. A particular systematic methodology is offered by Y. Wan, S. Roy, and A. Saberi, and will be discussed in item 18 of Further Readings of Chapter 2. It is noteworthy that repetitive control methods are based on the IMP. Iterative learning control (ILC) and repetitive control (RC) were pioneered by M. Uchiyama (1978) and T. Inoue et al. (1981) for industrial applications in Japan. See (Arimoto et al., 1984; Hara et al., 1988) for further developments. Run-to-run (R2R) control was proposed by Sachs et al. (1990). See Wang et al. (2009) for a survey of these 'learning methods'. Advanced versions—like intelligent, decentralized, robust, etc.—and various further details of these methods are available in the literature.
18. The classical works of Zadeh (1950a–c, 1958, 1962a,b, 1968), which seem to remain hot possibly even for the times to come, are recommended to any reader. His visions have come true, and in addition these papers give a summary of the then frontiers of knowledge. The papers Zadeh and Miller (1955) and Zadeh (1956a,b) are also recommended to readers of electrical engineering. In particular, L. A. Zadeh's 1962a paper discusses the shift of the standpoint from circuit theory to system theory, which was subsequently enhanced by the fuzzy theory that he proposed.
19. There are different frameworks for linearization of a nonlinear plant so that the controlled plant (i.e., the whole system) is linear. This is an advanced topic and is partly covered in the sequel of the book on state-space methods. Here we propose a conceptual approximate solution to this problem in Exercise 1.57. Note that this is closely connected to the concept of 'inverse control', for which numerous results, including intelligent and model-free control techniques, are available. The oldest fundamental result on 'invertibility' of a nonlinear system of which we are aware is (Hirschorn, 1979). More modern results are of course available. For active noise control we refer the reader to e.g., (Jiang et al., 1997; Kouno et al., 2002; Okumura et al., 2010; Saberi et al., 1993).
20. We briefly discussed the role of computer science in control theory. The inverse relation has been discussed at least as early as the 1990s. Roozbehani et al. (2013) put forward and formulated the software verification problem as a control-theoretic problem.
This is important in particular for software used in safety-critical systems, especially large-scale and network systems where the associated software is quite complicated. Many pros and cons can be raised in connection with such works. For instance, there remains the general logical problem that translating the original software to control-theoretic issues is not much simpler than manual verification of the original software, and implementation of the verification method is itself algorithmic. Nevertheless, the work takes the first step in this direction and the topic certainly deserves further investigation.
21. For further details and graduate-level studies the reader is referred to the representative references introduced at the end of the chapter. After reading the "basics" which are presented in these books and the like, the reader can easily follow the cutting-edge results published in scientific journals. The references are as follows: Optimal Control (Liberzon, 2012; Saberi et al., 1995; Troeltzsch, 2010), Identification (Katayama, 2005; Ljung, 1999; Ogunfunmi, 2007), Neural Networks (Gupta et al., 2003; da Silva et al., 2017; Sarangapani, 2006), Multivariable Control (Isidori, 2016; Skogestad and Postlethwaite, 2005), Robust Control (Barmish, 1994; Dahleh and Diaz-Bobillo, 1995; Zhou et al., 1996), Adaptive Control (Hovakimyan and Cao, 2010; Ioannou and Sun, 2012; Smyshlyaev and Krstic, 2009), Nonlinear Control (Helton and James, 1999; Khalil, 2014; Shtessel et al., 2014), Fuzzy and Intelligent Control (Nazmul, 2014; Tanaka and Wang, 2001; Wang, 1996), Softcomputing (Kruse et al., 2016; Keller et al., 2016), Optimization (Bertsekas, 2015; Giorgi et al., 2004; Walter, 2011), Large-Scale Systems (Jamshidi, 1996; Sojoudi et al., 2011; Zecevic and Siljak, 2010), Network Control (Bemporad et al., 2010; Mesbahi and Egerstedt, 2010; Sarangapani and Xu, 2015), Infinite-Dimensional Systems (Bensoussan et al., 2007; Fattorini, 2005), Filtering and Estimation (Kailath et al., 2000; Saberi et al., 2007; Sira-Remirez et al., 2014), Discrete-Event and Hybrid Control (Cassandras and Lafortune, 2008; Lygeros et al., 2014; Villani et al., 2007), Fault-Tolerant Control (Campbell and Nikoukhah, 2002; Meskin and Khorasani, 2011; Noura et al., 2009), Stochastic Systems and Control (Costa et al., 2005; Pham, 2009; Sun, 2006), Fractional-Order Systems and Controllers (Caponetto et al., 2010; Monje et al., 2010; Valerio and Sa Da Costa, 2012), Game Theory (Basar and Bernhard, 2008; Osborne and Rubinstein, 1994), Singular Perturbation (Johnson, 2005; Verhulst, 2005; Bonnard and Chyba, 2003), Behavioral Approach (Markovsky et al., 2006; Willems, 1998), Model Order Reduction (Antoulas, 2005), Quantum Control (Cong, 2014), Constrained Control (Abu-Khalaf et al., 2006; Saberi et al., 2012; Sajjadi-Kia and Jabbari, 2013), Anti-Windup Techniques (Hippe, 2006; Zaccarian and Teel, 2011), Model Predictive Control (Borelli et al., 2014; Wang, 2009), Bifurcation and Chaos Control (Aihara et al., 2015; Azar and Vaidyanathan, 2015), Descriptor Systems (Duan, 2010; Kunkel and Mehrmann, 2006; Lamour et al., 2013), Time-Varying Systems (Bourles and Marinescu, 2011; Ichikawa and Katayama, 2008), Time-Delay Systems (Kharitonov, 2013; Mayer, 2016), Actuator and Sensor Placement (Batou, 2015; Chi et al., 2015; Tzoumas et al., 2016), Scientific Computing (Gustafsson, 2011; Heath, 2001; Konstantinov et al., 2003), Algebraic-Geometric Theory of Control (Falb, 1990; Jurdjevic, 2008; Stefani et al., 2014), Amplifiers and Power Electronics Control (Kazmierkowski et al., 2002; Patil and Rodey, 2015; Shirvani and Wooley, 2003), Biological and Biomedicinal Systems Control (Costentino and Bates, 2011; Lessard, 2009; Rao and Rao, 2009), Economical and Managerial Systems Control (Burger et al., 2006; Deissenberg and Hartl, 2005; Hosking and Venturino, 2008), Power Systems Control (Bevrani et al., 2014; Murty, 2011), Process Control (Bequette, 2003; Gonzalez et al., 2016), Robotics Control (Bloch et al., 2003; Spong et al., 2005), Satellite, Aircraft, Missile, Ship, and Automotive Control (Do and Pan, 2009; Durham, 2013; Halderman, 2015), Structural and Vibration Control (Kermani et al., 2008; Lagaros et al., 2013; Mao and Pietrzko, 2013), Traffic Control: Air, Urban, Internet, Communications (Domzal et al., 2015; Janert, 2013; Kerner, 2009).

1.19 Worked-out problems

Problem 1.1: Is there any feedback mechanism in a purely electrical system? If so, give a simple example.
Yes. Apart from the feedback mechanisms used in designing (different parts of) electronic circuits, a remarkable example is the Automatic Gain Control (AGC) system in radios, for instance. The AGC is another feedback loop that attempts to nullify the effect of the distance of the receiver from the transmitter on the quality and volume of the received message. The result is that within a certain distance from the transmitter the quality of the received signal will be almost the same, i.e., independent of the distance. It is of course clear that (1) this distance is limited, and (2) beyond it the quality deteriorates, and past a certain distance there will be no received signal at all. It is worth remembering from the theory of communications that these distances depend on the wavelength of the signal. The same concept exists also in televisions and mobile phones.
Problem 1.2: Consider a class with its students, lecturer, facilities, etc. in a school. From a control point of view explain the constituents of this system and their roles.
The instructor is the controller; the control signal is what he teaches. Giving a bonus for attendance (through roll call), active and constructive participation in question-and-answer discussions, solving examples (by the instructor), teaching aids such as handouts (of taught materials, further readings, etc.), and computer and visual facilities play the role of the amplifier. (In this system, in some sense the amplifier may be seen as part of the controller.) The actuator is doing the homework and term projects. The students constitute the plant. An evaluation mechanism such as exams is the sensor. Note that this can simply be combined with the feedback element, which is asking questions and soliciting answers.
Problem 1.3: Consider traffic control via traffic lights and prescribed maximum speeds on streets and lanes of highways. Is that open-loop control or closed-loop control?
Traditional ones are open-loop. Modern ones, which are in operation in some cities in the world, are closed-loop. More precisely, the speed cameras observe the speed of the vehicles and if the maximum allowable speed is exceeded then a fine is issued. Also, traffic-light cameras observe the flow of vehicles in all directions at the junctions, and thus lengthen or shorten the green and red lights as appropriate.
Problem 1.4: Compare traditional and modern video cameras: modern ones are able to filter out the vibrations in the shot film due to vibrations in the hand of the cameraman (or in the carried camera position).
Traditional ones are open-loop. Modern ones are closed-loop. In modern cameras consecutive picture shots are compared to each other. If a vibration is observed in the picture, then based on a filtering rule some shots are removed or filtered out and a smooth film is obtained.
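The AGC loop of Problem 1.1 can be sketched numerically. In the toy model below (all numbers and names are invented for illustration) the receiver gain integrates the error between a target output level and the measured one, so the output level becomes nearly independent of the received signal strength:

```python
def agc_output(signal_strength, target=1.0, k=0.5, steps=200):
    """Simulate a discrete-time automatic gain control loop.

    The gain g is adapted so that the output level g * signal_strength
    settles near the target, regardless of signal_strength.
    """
    g = 1.0
    for _ in range(steps):
        output = g * signal_strength
        g += k * (target - output)  # integral action on the level error
    return g * signal_strength

near = agc_output(signal_strength=0.8)   # strong signal, close to transmitter
far = agc_output(signal_strength=0.05)   # weak signal, far away
```

Both outputs settle near the target level, which mirrors the observation in the text that within range the received quality is almost independent of distance; for a very weak signal the loop needs a much larger gain and converges more slowly, the numeric face of "the quality deteriorates beyond a certain distance."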

Problem 1.5: An actual control system may be more detailed than that discussed in Section 1.6 of the text in that it may involve more elements. Enumerate some of these components.
Some other elements of the loop are: encoder, decoder, digital-to-analogue converter, analogue-to-digital converter, transducer, and optical couplers and sensors. Moreover, note that some connections in the system may be wireless; thus transmitters and receivers are also needed for those signals. In certain applications like welding machines, communication between the measured output and the controller is sometimes done by an optocoupler.
Problem 1.6: Discuss the correctness or incorrectness of the following statement and reasoning: "Given that the output perfectly tracks the setpoint in the negative unity feedback control, thus: E(s) = R(s) − Y(s) = 0 and hence Y(s) = CP E(s) = 0! That is, perfect tracking never takes place."
Assuming the system is linear, whether tracking takes place or not it always holds that Y(s) = CP E(s). Given that the tracking takes place, we have Y(s) = CP × 0. However, this does not result in Y(s) = 0; it is a mathematical trick, because the zero on the right-hand side of Y(s) = CP × 0 depends on Y(s): it is Y_d(s) − Y(s). Thus, Y(s) should always be calculated from Y(s) = CP E(s) = CP (R(s) − Y(s)), yielding Y(s) = CP R(s)/(1 + CP).
Problem 1.7: Explain Bode's integral theorem through simple examples.
Disturbance is an unwanted input to the plant. It is unwanted in the system, like garbage, dust, or unwanted furniture in a place. We can move the garbage to another place (e.g., to a garbage can or outside) but we cannot remove it. There is a kind of conservation law in this respect. The same is true for disturbance. It cannot be removed (under mild conditions), but it can be moved out of the bandwidth, to the frequency range outside it. Details will be provided in Chapter 10.
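The conservation claim of Problem 1.7 can be checked numerically. For a stable loop gain of relative degree at least two, Bode's sensitivity integral ∫₀^∞ ln|S(jω)| dω is exactly zero; the sketch below (our own numeric check, with the loop gain L(s) = 1/(s+1)² chosen purely for illustration) approximates the integral by the trapezoidal rule:

```python
import math

def log_abs_sensitivity(omega):
    """ln |S(jw)| for S = 1/(1 + L), with loop gain L(s) = 1/(s+1)**2."""
    s = complex(0.0, omega)
    L = 1.0 / (s + 1.0) ** 2
    return math.log(abs(1.0 / (1.0 + L)))

# Trapezoidal approximation of the Bode sensitivity integral on [0, 200]
n, w_max = 200_000, 200.0
h = w_max / n
integral = 0.5 * (log_abs_sensitivity(0.0) + log_abs_sensitivity(w_max))
for i in range(1, n):
    integral += log_abs_sensitivity(i * h)
integral *= h
# The negative area (attenuation, |S| < 1) at low frequencies is balanced
# by positive area (amplification, |S| > 1) elsewhere: the integral is ~0.
```

Attenuating the disturbance over one frequency range thus necessarily amplifies it over another, the "garbage cannot be removed, only moved" picture of the text.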
Problem 1.8: Given the SISO plant (2s + 1)/(s² + 6s + 1), design a controller such that in open-loop control the setpoint to output transmittance becomes (2s + 1)/(s³ + 6s² + 2s + 1). Repeat the problem for (2s + 1)/(s³ + 2s² + 3s + 1). What is the observation? This problem concerns the achievable performance in open-loop control. The controllers are as follows: Denote System 1: (2s + 1)/(s² + 6s + 1), System 2: (2s + 1)/(s³ + 6s² + 2s + 1), and System 3: (2s + 1)/(s³ + 2s² + 3s + 1). To achieve System 2 the controller is (s² + 6s + 1)/(s³ + 6s² + 2s + 1), and to achieve System 3 the controller is (s² + 6s + 1)/(s³ + 2s² + 3s + 1). The outputs are depicted in Fig. 1.24. We observe that by changing the controller dynamics a better/worse output will be obtained. Clearly System 3 is the best. We will see in Chapter 4 in more detail that all these systems are nonminimum phase. Also note that in this problem we do not require perfect control, which means that the error is zero at all times, but merely tracking, which means that the steady-state error is zero. (Question: What will happen if we insert inputs such as a sinusoid, a ramp, and a parabola to the system?)


Figure 1.24 Problem 1.8, Output of the systems for a step input.
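Fig. 1.24 can be reproduced with a short script, reading the three transmittances of Problem 1.8 as (2s + 1)/(s² + 6s + 1), (2s + 1)/(s³ + 6s² + 2s + 1), and (2s + 1)/(s³ + 2s² + 3s + 1):

```python
import numpy as np
from scipy import signal

t = np.linspace(0, 30, 3000)
systems = {
    'System 1': signal.TransferFunction([2, 1], [1, 6, 1]),     # (2s+1)/(s^2+6s+1)
    'System 2': signal.TransferFunction([2, 1], [1, 6, 2, 1]),  # (2s+1)/(s^3+6s^2+2s+1)
    'System 3': signal.TransferFunction([2, 1], [1, 2, 3, 1]),  # (2s+1)/(s^3+2s^2+3s+1)
}
for name, sys in systems.items():
    _, y = signal.step(sys, T=t)
    print(name, round(float(y[-1]), 2))  # every DC gain is 1, so all track a step
```

All three settle at 1 (tracking); they differ only in the transient, which is the point of the problem.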

Figure 1.25 Disturbance, noise, and sensor dynamics in closed-loop control.

Problem 1.9: Take the system in Fig. 1.25. Discuss the design of the controller C for the tracking objective in the presence of the noise n, the disturbance d, and uncertainty in the plant P. The output is given by

Y = [CP/(1 + CP)] R + [P/(1 + CP)] D - [CP/(1 + CP)] N.

Now, the analysis is as before. We may analyze either y (to become yd) or the error signal (to become zero). We analyze y. Suppose that |C| >> 1 is so large that also |CP| >> 1. Then Y ≈ R + (1/C) D - N ≈ R - N. That is, setpoint tracking and disturbance rejection are achieved at the expense of exact noise transmission to the output. In other words, the feedback can be at most as good as the measurement is. Thus we should have n = 0. Similar to what we had in the text, |CP| >> 1 can and should be achieved only over the bandwidth of the system.

Remark 1.13: The disturbances at the plant input and output are called the input and output disturbance, respectively. The "crude" analysis we have provided above and also in the text actually makes a difference between them, as shall be further detailed in Chapter 10. The difference is that integrators in the loop gain (CP) affect the output disturbance while integrators in the controller (C, and not the plant) affect the input disturbance.
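The trade-off in Problem 1.9 (make |CP| large only over the bandwidth) can be visualized by evaluating S = 1/(1 + CP) and T = CP/(1 + CP) along the imaginary axis. The plant and controller below are illustrative assumptions, not from the text:

```python
import numpy as np

w = np.logspace(-2, 2, 400)   # frequency grid (rad/s)
s = 1j * w
P = 1/(s + 1)                 # illustrative plant
C = 5*(s + 1)/s               # illustrative controller with an integrator
L = C*P                       # loop gain CP
S = 1/(1 + L)                 # setpoint/disturbance -> error
T = L/(1 + L)                 # noise -> output (and setpoint -> output)

assert np.allclose(S + T, 1)  # the algebraic identity S + T = 1
print(round(abs(S[0]), 3), round(abs(T[-1]), 3))
# |S| is small at low frequency (tracking and disturbance rejection there),
# while |T| is near 1, so noise passes; at high frequency the roles swap.
```

This is the "conservation law" of Problem 1.7 in miniature: pushing |S| down in one band pushes the sensitivity elsewhere.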


Figure 1.26 Problem 1.10, A 2-DOF control structure.

Problem 1.10: Another version of the 2-DOF control structure is given in Fig. 1.26 below. Analyze this system. It is easy to show that

Y = [C1P/(1 + C1C2P)] R + [1/(1 + C1C2P)] D - [C1C2P/(1 + C1C2P)] N.

The crude analysis is that C2 must be such that at steady state Y = R. Also C1P must be large enough such that C1C2P is large and D is eliminated. Of course this will be at the expense of noise transmission, which has to be eliminated by hardware and implementation. For a more detailed analysis, we can analyze either the error signal or the output. Let us do it for the output. As you know from Section 1.11 the analysis depends on r, d, n. Assume r(t) = Ar step(t). Hence there holds

yr,ss = [(C1P)(0)/(1 + (C1P)(0)C2(0))] Ar.

(For simplicity we have written, e.g., (C1P)(0) instead of lim_{s→0} (C1P)(s).) If (C1P)(0) = ∞ then there must hold C2(0) = 1. If (C1P)(0) ≠ ∞ then there must hold C2(0) = ((C1P)(0) - 1)/(C1P)(0). On the other hand, yd,ss and yn,ss depend on d, n. In the simplest case of step disturbance and noise there holds

yss = Ar + [1/(C1P)(0)] Ad - [((C1P)(0) - 1)/(C1P)(0)] An.

That is, tracking takes place except for the presence of disturbance and/or noise depending on the value of (C1P)(0). The usual application is that (C1P)(0) = ∞, C2(0) = 1. That is, disturbance is eliminated and noise is transmitted. As in other structures noise must be eliminated by hardware and implementation. On the other hand, by the appropriate choice of the dynamics of C2(s) a better transient response for setpoint tracking will be obtained by shaping Y = [C1P/(1 + C1C2P)] R. Needless to say, the dynamics of C2(s) must maintain not only stability but also some other design objectives like some stability margins, which you will learn later. (Question: Can we choose (C1P)(0) = 1 to eliminate the noise?)

Remark 1.14: A statement as nice as Remark 1.8 of Section 1.11 cannot be made here since it is observed that C2 appears in both Hr = C1P/(1 + C1C2P) (being the transmittance from the setpoint) and Hd = 1/(1 + C1C2P) (being the transmittance from d). That is, the design does not have two completely independent parts. Nonetheless, it is a 2-DOF control structure. (When you study block diagram algebra in Section 2.4 of the next chapter, investigate whether Rule 6 helps in this regard.)

Question 1.5: Is this analysis valid for nonconstant setpoints as well? If so, why? If not, does a 2-DOF control structure exist and make sense for such systems? You had better come back to this question after reading Chapter 4.

Question 1.6: Can C1P, C2, C1C2P have RHP zeros or poles? Come back to this question later when you study stability.
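The DC conditions of Problem 1.10 are easy to verify numerically. With an illustrative finite DC loop gain L0 = (C1P)(0) (an assumed number, not from the text), setting C2(0) = (L0 - 1)/L0 gives Hr(0) = 1:

```python
# DC-gain check for the 2-DOF structure of Problem 1.10 (illustrative numbers)
L0 = 8.0                   # (C1 P)(0), assumed finite
C2_0 = (L0 - 1)/L0         # the tracking condition derived in the text
Hr_0 = L0/(1 + L0*C2_0)    # Hr = C1 P/(1 + C1 C2 P) evaluated at s = 0
print(Hr_0)                # 1.0: the step setpoint is tracked
```

Note also that Hd(0) = 1/(1 + L0·C2(0)) = 1/L0 and Hn(0) = (L0 - 1)/L0, matching the steady-state formula for yss above.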


Figure 1.27 Problem 1.11.

Problem 1.11: Calculate the transfer functions between all the designated signals and the inputs r, d, n in the 2-DOF control structure given in Fig. 1.27. For the sake of conciseness, these transfer functions are given in the following matrix:

[Uc]   [  F/(1+CP)    -P/(1+CP)    -1/(1+CP) ] [R]
[U ]   [ CF/(1+CP)   -CP/(1+CP)    -C/(1+CP) ] [D]
[Up] = [ CF/(1+CP)     1/(1+CP)    -C/(1+CP) ] [N]
[Y ]   [ CPF/(1+CP)    P/(1+CP)   -CP/(1+CP) ]
[Ym]   [ CPF/(1+CP)    P/(1+CP)     1/(1+CP) ]

It is left to the reader to obtain these by direct calculations. However, note that they can be simply obtained by way of the rules of the Block Diagram Algebra, which will be presented in Chapter 2, System Representation. What matters now is to point out that among these fifteen transfer functions some are repetitive. More precisely, there are only seven different transfer functions among them, namely 1/(1+CP), F/(1+CP), C/(1+CP), P/(1+CP), CF/(1+CP), CP/(1+CP), CPF/(1+CP). Excluding the second transfer function, the rest of them were called The Gang of Six in the old literature. Also, the phrase The Gang of Four referred to the four transfer functions obtained from the gang of six by setting F = 1. These transfer functions are used in the context of internal stability. Note that the above analysis is not complete. The complete analysis and the underlying philosophy are provided in Chapter 3, Stability Analysis.

Problem 1.12: It is well known (and you will see in the ensuing chapters) that by feedback control the designer changes the closed-loop poles of the (open-loop) system to fulfill some design objectives (like transient and tracking performance) which are not fulfilled otherwise. In other words, the designer does not (and cannot!) change the plant itself, but makes it behave as desired. How does this take effect?


The answer is very simple: through feedback the input to the system is changed; instead of r, the error r - y (or FR - Y, etc.) is applied to the system. By the application of feedback we speak to the system (controller and plant) in a different way, and this is the right way.

Problem 1.13: The use of the 2-DOF control structure is often essential for unstable plants, like the one in this problem, because a large overshoot (and even oscillation) in disturbance rejection (which is the same as in reference tracking in the 1-DOF structure) is inevitable. Consider the system P(s) = 1/[(s - 1)(s - 2)] in the 2-DOF control structure with C(s) = 250(s + 1)(s + 2)/[s(s + 20)] and F(s) = (100s² + 685s + 500)/(250s² + 750s + 500). Simulate the output and observe the effect of the prefilter; see also Exercise 1.22. The answer is given in Fig. 1.28. The setpoint and disturbance are applied at t = 0 and t = 6 seconds, respectively. It should be clarified that the slight overshoot in reference tracking can be eliminated, but at the expense of a larger settling time. This is achieved, e.g., if the coefficient 685 in F is replaced by 640. Also note that alternatively we may use a prefilter like F(s) = (s² + 10.58s + 121.1)/(s² + 34s + 121.1). The philosophy behind this prefilter is that its numerator is the same as a term in the denominator of Hr, so as to cancel it out. At this stage its denominator is chosen by trial and error, keeping the DC gain at one.

Figure 1.28 Problem 1.13, Reference tracking and disturbance rejection.
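The reference-tracking channel of Problem 1.13 can be reproduced with scipy using the data given in the problem; the disturbance applied at t = 6 s in Fig. 1.28 is omitted here for brevity.

```python
import numpy as np
from scipy import signal

# Data of Problem 1.13: P = 1/((s-1)(s-2)), C = 250(s+1)(s+2)/(s(s+20)),
# F = (100s^2 + 685s + 500)/(250s^2 + 750s + 500)
num_CP = 250 * np.polymul([1, 1], [1, 2])                      # 250(s+1)(s+2)
den_CP = np.polymul([1, 20, 0], np.polymul([1, -1], [1, -2]))  # s(s+20)(s-1)(s-2)
den_T = np.polyadd(den_CP, num_CP)                             # 1 + CP denominator
num_F, den_F = [100, 685, 500], [250, 750, 500]

# Reference-to-output transmittance Hr = F * CP/(1 + CP)
Hr = signal.TransferFunction(np.polymul(num_F, num_CP),
                             np.polymul(den_F, den_T))
t, y = signal.step(Hr, T=np.linspace(0, 12, 1200))
print(round(float(y[-1]), 2))  # close to 1: the step setpoint is tracked
```

The closed-loop denominator works out to s⁴ + 17s³ + 192s² + 790s + 500, which is Hurwitz, so the open-loop unstable plant is stabilized and the prefilter only shapes the transient.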


Problem 1.14: Consider the plant P(s) = 1/[(s + 1)(s + 2)] and the controller C1(s) = 10(s + 0.5)/s. Investigate the design of a prefilter to enhance the reference tracking performance of the system. The step response of the system is shown in Fig. 1.29, left panel. As observed, it has an overshoot. However, with a proper choice of the controller as C2(s) = K(s + 1)/s, 1 ≤ K ≤ 1.3, not only can we avoid the overshoot but we can also reduce the large initial value of the actuator, Fig. 1.29 right panel. So the system does not need any prefilter for the overshoot. However, we may wish a faster response. This is achieved, e.g., by the prefilter F(s) = a²(s + 1)²/(s + a)², a > 1.

Problem 1.15: Consider the system in Fig. 1.30. Propose a generalized model for it. Before answering the problem let us comment on the type of disturbance and the modeling of the plant disturbance (what we schematically did in Fig. 1.8 and footnote 15). The complete figure is Fig. 1.30. The signals di and do are called the input and output disturbances, respectively. The explanation is as follows: Consider a load like a drum which is rotated by a motor and the objective is controlling its

Figure 1.29 Problem 1.14, Left: Reference tracking, Right: Control signal.

Figure 1.30 Problem 1.15.


speed. In its 1-DOF control structure the setpoint, error signal, and output signal have the unit of speed. The control signal, which is the output of the motor (actuator) and the input to the plant, has the unit of torque. If we insert some disturbance in the form of "friction" on the load/plant, we note that no signal has the unit of friction. We can either model it as some negative torque (which will be an input disturbance di) or some negative speed (which will be an output disturbance do). Moreover, if we exert some torque on the load (e.g., by our hand) so as to reduce its speed, the torque is an input disturbance di, which can also be modeled as an output disturbance do (with the unit of speed). In brief, the plant disturbance can be modeled either as di or do. On the other hand, recall that the controller box is actually the controller-amplifier-actuator box. Suppose that the controller has computed that the torque u = τ should be applied to the plant. The imprecision in implementing this torque (either because of the amplifier or the actuator) is an input disturbance di which has the unit of torque. It should be clear that conceptually it is possible to model this di (with the unit of torque) also as a do (with the unit of speed), but we usually do not do this. That is, the imprecision in the control signal is usually modeled only as an input disturbance. Finally, for the sake of completeness, after studying Sections 4.13 and 10.7 investigate whether the above arguments can be somehow complemented. Now we present the solution of the problem. As we stated in Section 1.15 the generalized model is not unique and depends on the choice of signals. This is shown in both this problem and Problem 1.17, which presents another solution for the above-given system. One possibility is to define the signals of the generalized structure as follows: w = [w1 w2 w3 w4]^T = [r di do n]^T, u = [u1 u2]^T as shown in the picture, z = r - y, and v = [r - u2, ym]^T.
Hence,

Z = R - Y = I W1 - (Do + P(Di + U1)) = I W1 - P W2 - I W3 + 0 W4 - P U1 + 0 U2,

V = [R - U2; Ym] = [I W1 - I U2; Ps(Do + P(Di + U1)) + N]
  = [I W1 + 0 W2 + 0 W3 + 0 W4 + 0 U1 - I U2; 0 W1 + PsP W2 + Ps W3 + I W4 + PsP U1 + 0 U2].

The generalized plant is therefore

G = [ I    -P   -I   0  |  -P    0
      I     0    0   0  |   0   -I
      0   PsP   Ps   I  |  PsP   0 ],

with the partitioning G11 = [I  -P  -I  0], G12 = [-P  0], G21 = [I 0 0 0; 0 PsP Ps I], G22 = [0 -I; PsP 0]. We also have K = [C1 0; 0 C2] obtained from U = KV. We close this problem by adding that the notation yf := u2 is often used for the "feedback output/signal" in this structure.
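A numeric sanity check of the generalized model: evaluating all blocks at one frequency point and closing u = Kv should reproduce z = r - y of the underlying 2-DOF loop. The scalar C1, C2, P values below are arbitrary illustrative choices, K = diag(C1, C2) is the controller as read off here, and the sensor dynamics are taken as unity.

```python
import numpy as np

# Evaluate everything at one frequency point s = 1j (scalar blocks, Ps = 1)
s = 1j
P  = 1/(s + 1)          # plant (illustrative choice)
C1 = 2 + 1/s            # forward controller (illustrative)
C2 = (s + 2)/(s + 3)    # feedback-path controller (illustrative)
Ps = 1.0                # sensor dynamics taken as unity here

# Generalized plant of Problem 1.15: z = G11 w + G12 u, v = G21 w + G22 u
G11 = np.array([[1, -P, -1, 0]])
G12 = np.array([[-P, 0]])
G21 = np.array([[1, 0, 0, 0], [0, Ps*P, Ps, 1]])
G22 = np.array([[0, -1], [Ps*P, 0]])
K   = np.array([[C1, 0], [0, C2]])     # u = K v

# Close the loop: u = (I - K G22)^{-1} K G21 w
Tzw = G11 + G12 @ np.linalg.inv(np.eye(2) - K @ G22) @ K @ G21

# Direct computation of z = r - y for the same 2-DOF loop
z_direct = 1 - C1*P/(1 + C1*C2*P)
print(np.isclose(Tzw[0, 0], z_direct))  # True
```

The first column of Tzw is the r-to-z channel; the remaining columns give the di, do, and n channels in the same way.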


Problem 1.16: Explain the elements of a liquid tank system. A typical system is a liquid tank with an outlet and an inlet. The objective can be (see the next paragraph) controlling the liquid level via the inlet valve. In the control structure the setpoint, output, and error have the unit of meters (height) whereas the control signal (inlet flow) has the unit of m³/s. The water leaving through the outlet gate is a di (with the unit of m³/s) which can also be modeled as a do (with the unit of meters; height). Similarly, apart from the outlet, if we add some water to or take some water from the tank, it is also a di which can be modeled as a do as well. On the other hand, imprecision in the implementation of the inlet flow (which is computed by the controller to be u = φ m³/s via the inlet valve) is an input disturbance di. As we mentioned before, we do not model it as a do, although that is possible. For the sake of completeness let us add that in this example if the outlet valve is controlled, then that is also an input of the system. On the other hand, it is possible to regard the outlet as an output on which we have no control objective. This is the counterpart of the similar problem in which our objective is controlling the outlet flow, not the height, via the inlet flow.

Problem 1.17: Repeat the above Problem 1.15 with w = [w1 w2 w3 w4]^T = [r di do n]^T, u = u1 as shown in the picture, z = r - y, and v = [r, ym]^T. Hence,

Z = R - Y = I W1 - (Do + P(Di + U)) = I W1 - P W2 - I W3 + 0 W4 - P U,

V = [R; Ym] = [I W1; Ps(Do + P(Di + U)) + N]
  = [I W1 + 0 W2 + 0 W3 + 0 W4 + 0 U; 0 W1 + PsP W2 + Ps W3 + I W4 + PsP U].

Hence the generalized plant is

G = [ I    -P   -I   0  |  -P
      I     0    0   0  |   0
      0   PsP   Ps   I  |  PsP ],

with the partitioning G11 = [I  -P  -I  0], G12 = [-P; 0], G21 = [I 0 0 0; 0 PsP Ps I], G22 = [0; PsP]. Here we have K = [C1  -C1C2] obtained from U = KV.
Problem 1.18: The effect of the Smith predictor is shown in this problem. We consider the plant P(s) = e^(-s)/(s + 1) and the controller C′(s) = (s + 2)/s, which is designed for P0(s) = 1/(s + 1). We implement the Smith predictor as specified in Section 1.13. Denote the output of the delay-free system T0 = C′P0/(1 + C′P0) by z. The outputs of the systems are depicted in Fig. 1.31, left panel. As is observed, y(t) = z(t - 1).
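The delay-free part of Problem 1.18 is easy to check: with C′(s) = (s + 2)/s and P0(s) = 1/(s + 1), the loop gain is (s + 2)/[s(s + 1)], so T0 = (s + 2)/(s² + 2s + 2), whose step response z(t) settles at 1. The Smith predictor then delivers y(t) = z(t - 1), the same response shifted by the dead time.

```python
import numpy as np
from scipy import signal

# Delay-free closed loop of Problem 1.18: C' = (s+2)/s, P0 = 1/(s+1)
# C'P0 = (s+2)/(s(s+1)), so T0 = (s+2)/(s^2 + 2s + 2)
T0 = signal.TransferFunction([1, 2], [1, 2, 2])
t, z = signal.step(T0, T=np.linspace(0, 10, 1000))
print(round(float(z[-1]), 2))  # 1.0; with the Smith predictor, y(t) = z(t - 1)
```

In other words, the predictor lets us design for P0 as if there were no delay; the delay then only shifts the achieved response in time.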


Figure 1.31 The effect of the Smith predictor.

1.20 Exercises

Over 100 exercises are given in the following in the form of some collective problems. It is advisable to try as many of them as your time allows. In Exercises 1.53–1.56 you are required to find some transfer functions. Because the systems are somewhat complex you can come back to these exercises after reading Chapter 2; then they should be pretty easy.

Exercise 1.1: Briefly explain the mechanism of feedback and why we use it.
Exercise 1.2: What is the most important requirement for a system? Why?
Exercise 1.3: Given an organization with its president, employees, etc., from a control point of view discuss the different constituents of this system and their roles.
Exercise 1.4: Repeat Exercise 1.3 for a field with its crops, farmer, facilities, etc. on a farm. Define the objective as good cultivation.
Exercise 1.5: Repeat Exercise 1.3 for a sports team with the coaches, players, etc. The objective is winning the games.
Exercise 1.6: Consider the following systems. Describe the elements of their control system. 1. A ship, airplane, or satellite navigated on board or from the earth. 2. A power station and the different loads it feeds. 3. The temperature control system in a building. 4. An oil refinery and its products.
Exercise 1.7: Given the SISO plant (2s + 2)(s - 5)/[(s + 1)(s + 3)(s + 10)²]. (1) Design a controller such that in open-loop control the setpoint to output transmittance becomes (2s + 2)(s - 5)/(s + 1)⁴. (2) By trial and error design a prefilter for this system to improve its transient response.


Exercise 1.8: This problem has several parts. (1) What does P⁻¹ mean? (2) What are the necessary and sufficient conditions for the existence/implementation of P⁻¹? (3) What are the necessary and sufficient conditions for sufficiency of open-loop control for the purposes of perfect control and almost perfect control? You had better reconsider this problem after studying Chapters 3 and 4. (Note that the phrase "almost perfect control" refers, e.g., to the case CP = [(s + 2)/(2.3s + 1)] × [(3s + 2)/(s + 4)] for a step reference tracking problem.) Discuss different setpoints.
Exercise 1.9: This problem has two parts. (i) Consider the sensor dynamics as Ps(s) = T/(s + T), T >> 1. What is the relation between ym(t) and y(t)? (ii) Provide an analysis of the actual system, i.e., when the sensor dynamics are considered, regarding the benefits of feedback control.
Exercise 1.10: One might think that by reducing the magnitude of CP the noise contribution to the output will be reduced in the closed-loop structure, which is desirable. Discuss this matter.
Exercise 1.11: Derive a negative unity-feedback equivalent of a nonunity feedback system. That is, in Fig. 1.32 find P′ and C′ such that these systems are equivalent. Discuss.
Exercise 1.12: Given the 2-DOF control system of Fig. 1.33, discuss the design of the controllers C and F for the tracking objective in the presence of the noise n, the disturbances di, do, and uncertainty in the plant P.
Exercise 1.13: In the above Exercise 1.12 compute S_F^{Hr}, S_C^{Hr}, and S_{Ps}^{Hr}. What is the observation?
Exercise 1.14: Given the 2-DOF control system of Fig. 1.34, discuss the design of the controllers C1 and C2 for the tracking objective in the presence of the noise n, the disturbances di, do, and uncertainty in the plant P.
Exercise 1.15: In the above Exercise 1.14 compute S_{C1}^{Hr}, S_{C2}^{Hr}, and S_{Ps}^{Hr}.
Exercise 1.16: Two different structures of the 2-DOF control systems have been given in the text. Now consider their combination. Does it have any advantage over either of them? Explain.
Exercise 1.17: Other versions of the 2-DOF control structure are discussed in this exercise. (1) One possibility is given in Fig. 1.35, left panel. (2) Another possibility is given in the right panel of Fig. 1.35, in which yf denotes the feedback output/signal. Discuss the analysis and synthesis of these systems. (3) Repeat the problem by inclusion of the sensor dynamics. (4) Can you think of another 2-DOF control structure?
Exercise 1.18: In Exercise 1.17 compute S_C^{Hr} and S_F^{Hr} in all cases.

Figure 1.32 Exercise 1.11.

Figure 1.33 Exercise 1.12.




Figure 1.34 Exercise 1.14.

Figure 1.35 Exercise 1.17.

Figure 1.36 Exercise 1.19.

Exercise 1.19: Consider the left panel of Fig. 1.36. (1) This structure is called forward disturbance compensation. Find C0 such that the effect of d on the output is nullified. When can we use such a structure? Explain. (2) Repeat the problem with input disturbance, instead of output disturbance. (3) Draw the closed-loop version of this structure and explain it. (4) Repeat part (3) when another controller is included as a prefilter or feedback-path controller. Note that C0 can be regarded as the third degree of freedom, and the system is a 3-DOF control structure. A possible answer is given in the right panel of the same figure. The idea of 3-DOF control is to have more freedom in shaping the response to wanted and unwanted inputs. (5) Find S_{C0}^{Hr}. (6) Propose and discuss other versions of the 3-DOF control structure.
Exercise 1.20: Consider the system P(s) = 2/[s³(s + 1)] in the 2-DOF control structure with C(s) = 0.5(s + 0.1)²/(s + 2)² and F(s) = (s⁶ + 6s⁵ + 8s⁴ + 4s³ + s² + 0.2s + 0.01)/(s⁶ + 6s⁵ + 8s⁴ + 5.5s³ + 1.9s² + 0.24s + 0.01). Simulate the system and observe the effect of the prefilter; the disturbance rejection is quite oscillatory but the reference tracking is excellent; see also Example 5.39 of Chapter 5. Also, propose a lower order prefilter for the system.
Exercise 1.21: Consider the system P(s) = 1/[(s - 1)(s - 2)(s - 3)] in the 2-DOF control structure with C(s) = 1000 × [(s + 0.5)/s] × [(0.1867s + 1)/(0.0134s + 1)] and F(s) = (s² + 11.51s + 991.4)/(s² + 991.4s + 991.4). Simulate the system and observe that there is no overshoot in reference tracking but inevitably an oscillation in disturbance rejection. Note that the numerator of F is chosen to cancel out the same term in the denominator of Hr. Verify that if the denominator of F is replaced, e.g., by s² + 300s + 991.4, the response is sharply speeded up.


Exercise 1.22: A better controller for Problem 1.13 is C(s) = 70 × [(s + 0.1)/s] × [(0.3733s + 1)/(0.0267s + 1)]. Simulate the system and design a prefilter for it.
Exercise 1.23: Consider the system P(s) = 1/(s² + 1) in the 2-DOF control structure with C(s) = 90(s + 1)(s + 0.5)/[s(s + 20)]. Simulate the system and design a prefilter for it; see also Example 5.38 of Chapter 5.
Exercise 1.24: Consider the system P(s) = (s - 2)/(s² - 1) in the 1-DOF control structure with C(s) = (-31) × (s + 0.2)/[s(s + 40)]. Simulate the system and observe the output. Design a prefilter for it to improve the tracking performance in the 2-DOF control structure.
Exercise 1.25: Repeat Exercise 1.24 for P(s) = (s - 1)/(s² - 2) and C(s) = 75 × (s - 0.2)/[s(s - 60)].
Exercise 1.26: Consider a plant with both input and output disturbances which is supposed to be controlled. Is it possible to design a control structure such that the setpoint, input disturbance, and output disturbance are all separately shaped?
Exercise 1.27: Consider the Smith predictor structure where the path gain (1 - e^(-sT))P0 is replaced by (1 - e^(-sT))P. (1) Compute S_P^{Hr} and S_T^{Hr}. (2) What differences should we make in the problem formulation in order to consider the sensor dynamics? (3) Discuss the effect of output disturbance.
Exercise 1.28: Derive the output formula of the IMC structure, considering the sensor dynamics and noise.
Exercise 1.29: Enumerate the advantages and problems associated with the IMC structure. You had better come back to this exercise after reading Chapters 3 and 10, and may wish to consult the pertinent literature as well.
Exercise 1.30: Referring to Fig. 1.21, discuss the design of the 2-DOF IMC structure where the feedback-path controller Qd is used in the forward path as Qc.
Exercise 1.31: In Fig. 1.37, which is called the IMC forward disturbance compensation scheme, explain the choice of Q0, P0d. When can we use it? Discuss.
Exercise 1.32: Convert the following feedback system, see Fig. 1.38, to a generalized abstract one in the following ways, and find the generalized controller in each case. First discuss whether the proposed signal definitions are correct.
1. w = [w1 w2 w3 w4]^T = [r d1 d2 n]^T, u = [u1 u2]^T with ui the output of the controller Ci, z = yd - y, and v = [r, u2, -ym]^T.
2. w = [w1 w2 w3 w4]^T = [r d1 d2 n]^T, u = [u1 u2]^T with ui the output of the controller Ci, z = r - y, and v = [r, ym]^T.
3. w = [w1 w2 w3 w4]^T = [r d1 d2 n]^T, u defined as the output of controller C1, z = r - y, and v = [r, ym]^T.
Finally, can you define another signal definition for this system?

Figure 1.37 Exercise 1.31. Adapted from Morari M. and E. Zafiriou, Robust Process Control, Prentice Hall, 1989, with permission.


Exercise 1.33: Find the generalized model of the sequel system, see Fig. 1.39. Define w = [w1 w2 w3]^T = [r d uΔ]^T, u as the input to P, z = [z1 z2]^T = [yΔ y]^T, and v = r - y.
Exercise 1.34: Propose generalized models for the sequel systems, see Fig. 1.40. Also try to analyze what is achievable and what is not. (This part is rather advanced and you will learn more in future courses.)
Exercise 1.35: To overcome the stability requirement of the plant in the IMC structure, one may suggest first stabilizing the plant P by the controller C1 in a feedback loop to get the (closed-loop) system P′ = C1P/(1 + C1P) and then applying the IMC structure to P′. Discuss this opinion, i.e., whether we can get the advantages of the IMC structure in this setting.
Exercise 1.36: Discuss the design of the 2-DOF control structure for the Smith predictor system.
Exercise 1.37: Propose generalized models for the Smith predictor and the IMC structure.
Exercise 1.38: Analyze the IMC structure in the presence of sensor dynamics and noise.
Exercise 1.39: Analyze the Smith predictor in the presence of sensor dynamics and noise.
Exercise 1.40: Consider the different control structures that we studied in the text and the feedback element in them. (1) What happens if we add the feedback signal to (instead of subtracting it from) the other input of the feedback element? Explain. (2) When there is a controller in the feedback path, is it essential to use negative feedback, i.e., a subtractor as the feedback element?
Exercise 1.41: Consider the standard 1-DOF and 2-DOF control structures. Derive the output expression if the system is MIMO.
Exercise 1.42: One may wish to use the control structure of Fig. 1.41 in which seemingly y ≈ r. Discuss this design.
Exercise 1.43: Consider an unstable plant. What are the necessary and sufficient conditions for the achievement of perfect control? For the solution, consider all the structures that you have learned: 1-DOF/2-DOF/3-DOF (Exercise 1.19), and Exercise 1.42. It is instructive to consider the plants P1 = (s + 2)/(s + 1), P2 = (s + 2)/(s - 1)², P3 = (s - 2)/(s - 1), P4 = (s - 2)/(s - 1)². You had better reconsider this problem after studying Chapters 3 and 4. (Hint: P2 refers to the case of "almost perfect control" in which a large stable pole has negligible effect. Discuss different inputs.)

Figure 1.38 Exercise 1.32.

Figure 1.39 Exercise 1.33.



Figure 1.40 Exercise 1.34, The above four panels.

(Block diagram: r → (+/−) → C → P → y.)

Figure 1.41 Exercise 1.42, A proposed control structure.

Exercise 1.44: Consider the 2-DOF and 3-DOF control structures. (1) What is the effect of the extra degrees of freedom on the control signal? (2) How about the error signal? In your answer include an analysis of the frequency content of the reference input.
Exercise 1.45: We have addressed the cases of "input and output disturbances" di, do. Another possibility for considering a disturbance is the "feedback-element or error disturbance," where an additive disturbance de is added to the output of the feedback element in either the 1-DOF, 2-DOF, or 3-DOF structure. Yet another possibility for considering a disturbance is the "setpoint disturbance," where an additive disturbance dr is added to the


setpoint. How should we take a measure against these disturbances? For the sake of completeness let us also add that considering a disturbance at the output but "outside" the feedback loop is meaningless, or in other words impossible. (Why?)
Exercise 1.46: From the theory of communications we know that signals may be corrupted by a multiplicative noise/disturbance as well. (1) Discuss the relevance of this issue in the framework of a control system. (2) How should we analyze the effect of such noise/disturbance?
Exercise 1.47: Consider a 15 × 15 matrix. How much time do you think one needs to compute its determinant via the classical definition? This is a basic example in scientific computation. (Hint: The number of +, -, ×, / operations is between n! and (n + 1)!. For simplicity ignore the other times and assume the lower bound.)
Exercise 1.48: As in the previous Exercise 1.47, this exercise is to bring your attention to some classical issues in scientific computation. We encourage you to find the answers by referring to the literature. (1) Is there any difference between addition and subtraction, and positive and negative numbers, for computers? (2) Can we find the precise answer when we add any two numbers by computers? (3) How much time does matrix multiplication take for n × n matrices? (4) In modern computations, matrix inversion and even multiplication are avoided. Why? And how? (5) What does numerical sensitivity mean? (6) What does numerical instability mean? (7) To what accuracy do you think we can compute the eigenvalues of a given matrix? (8) How much time does it take for an n × n matrix? (9) Can we compute the rank of a matrix precisely? (10) What do you know about machine precision? (11) Have you heard about fast algorithms? (12) What does pre-conditioning mean? (13) What does the IO complexity of an algorithm mean? (14) What is the philosophy in using orthogonal matrices for computations? (15) How can we find the minimum of a nonconvex function?
(16) What does the curse of dimensionality mean? (17) There are electronic chips/processors that perform approximate computations (as opposed to the usual crisp/binary computations). What do you know about the details of this field? We close the exercise by adding that a concise one-paragraph answer for each item serves the purpose of opening your eyes (for non-mathematics readers) to the wonderful field of scientific computation. At this stage you do not need to master that field, and indeed it is impossible for you!
Exercise 1.49: Consider L1 = 1/[s(s + 1)] and L2 = (s + 1)/[s²(s + 2)]. We observe that L1(0) = ∞ and L2(0) = ∞. In Section 1.11 we learned that L1(0) = ∞ is not large enough to reject a ramp disturbance or to track a ramp reference input, but L2(0) = ∞ is large enough for these purposes. In 'folk language' this means that 'the infinity of L2(0) is larger than the infinity of L1(0)'. What is the precise mathematical expression for this folk statement?
Exercise 1.50: Although it is irrelevant to this course, it is instructive for readers outside mathematics departments to consider this exercise, which is somehow similar to Exercise 1.49. We know that there are infinitely many numbers in the classical sets N, Z, R. We also know that N ⊂ Z ⊂ R. In 'folk language' this means that 'the infinity (as the number of its elements) of N is smaller than the infinity (as the number of its elements) of Z, which in turn is smaller than the infinity (as the number of its elements) of R'. What is the precise mathematical expression of this folk statement?
Exercise 1.51: Consider the statement of footnote 17. What is its precise expression when the reference input, disturbance, and noise are sinusoidal?
Exercise 1.52: Consider a 2-DOF control system in which tracking takes place. Does the output of the feedback element become zero at steady state?
(Hint: (1) It is instructive to analyze 2-DOF control systems for which step tracking takes place with F1 = (s + 1)/(s + 2) and F2 = (2s + 3)/[(s + 1)(s + 2)]. Find yF (the output of the prefilter) at steady state in either case. (2) Consider systems with other reference inputs like ramp, sinusoid, etc.)
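The operation-count estimate of Exercise 1.47 is easy to make concrete: cofactor (Leibniz) expansion of a 15 × 15 determinant needs at least 15! operations. Assuming, for illustration, a machine performing 10⁹ operations per second:

```python
import math

n = 15
ops = math.factorial(n)          # lower bound of Exercise 1.47
rate = 1e9                       # assumed operations per second (illustrative)
print(ops, round(ops/rate/60))   # about 1.3e12 operations, roughly 22 minutes
```

By contrast, Gaussian elimination needs on the order of n³ ≈ 3,400 operations for n = 15, which is why the classical definition is never used numerically.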


Exercise 1.53: In this exercise we discuss an unfortunate situation in the literature. Except for the standard 1-DOF control structure without disturbances, there is no consensus for labeling the signals around a control system. In fact, there are some misleading denotations in some texts, like using e for the output of the feedback element in the 2-DOF structure. It may not matter much how to name the signals in a control system as far as we are concerned with formulas, or when we are practicing some particular lessons as we did throughout this introductory chapter. For instance, we have used the denotations u; u1 ; u2 ; d1 ; d2 in some structures. However, from another standpoint it is certainly good if there is a consensus in this regard, especially one that makes sense and is veritable. We do not intend to set the standard for this, as we know it is partly a matter of opinion and taste. Nevertheless, we discuss an idea. For a complicated system, like Parts B-D of Exercise 1.34, Examples 2.3132, and the systems in Fig. 2.106 of Chapter 2, ‘a’ most neutral method is that for a block A we use uA ; yA for its input and output signals. This is particularly useful for software development. Of course then the input and output of the whole system are denoted by r; y, the control signal in which we are interested is defined appropriately (whether in a conventional model or in a generalized model), and the signals which have a physical interpretation are defined accordingly (like the error, the measured output, or the feedback output.) We can further simplify the method if we use different indices i for the blocks, and then simply ui ; yi , see the lower panel in Fig. 1.42. No matter whether we do this or not, in certain systems some signals may be the same. For instance, in Panel A of Fig. 1.40 we have YAmp 5 UAct , in Fig. 2.49 of Chapter 2 we have Y 5 Y3 5 U5 5 U6 and in Fig. 2.78 of Chapter 2 there holds Y2 5 U3 5 U4 . For 2-DOF systems we have Fig. 1.42. 
For the standard 1-DOF structure and the 2-DOF structures as given in Fig. 1.42, answer the following parts in this taxonomy:
1. Express $Y = H_r^y R + H_{d_i}^y D_i + H_{d_o}^y D_o + H_n^y N$, $Y = Y_r + Y_{d_i} + Y_{d_o} + Y_n$, and $y = y_r + y_{d_i} + y_{d_o} + y_n$. Provide the details of the formulas.
2. Define $U = Y_C$ or $Y_{C_1}$. Express $U = H_r^u R + H_{d_i}^u D_i + H_{d_o}^u D_o + H_n^u N$, $U = U_r + U_{d_i} + U_{d_o} + U_n$, and $u = u_r + u_{d_i} + u_{d_o} + u_n$. Provide the details of the formulas.
3. Express $E = H_r^e R + H_{d_i}^e D_i + H_{d_o}^e D_o + H_n^e N$, $E = E_r + E_{d_i} + E_{d_o} + E_n$, and $e = e_r + e_{d_i} + e_{d_o} + e_n$. Provide the details of the formulas.
4. Express $Y_f = H_r^{y_f} R + H_{d_i}^{y_f} D_i + H_{d_o}^{y_f} D_o + H_n^{y_f} N$, $Y_f = Y_{f,r} + Y_{f,d_i} + Y_{f,d_o} + Y_{f,n}$, and $y_f = y_{f,r} + y_{f,d_i} + y_{f,d_o} + y_{f,n}$. Provide the details of the formulas.
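For the standard 1-DOF loop the four transfer functions of Part (1) can be checked numerically. The sketch below uses a hypothetical first-order plant and static controller (illustrative choices, not taken from the exercise) and evaluates $H_r^y = PC/(1+PC)$, $H_{d_i}^y = P/(1+PC)$, $H_{d_o}^y = 1/(1+PC)$, and $H_n^y = -PC/(1+PC)$ at one test frequency:

```python
# Hedged sketch of Part (1) for the standard 1-DOF loop. The plant P and
# controller C below are hypothetical illustrations, not data from the text.

def closed_loop_gains(P, C, s):
    """The four transfer functions from (R, Di, Do, N) to Y, evaluated at s."""
    L = P(s) * C(s)            # loop gain
    Hr = L / (1 + L)           # Hr^y  : reference to output (= T)
    Hdi = P(s) / (1 + L)       # Hdi^y : input disturbance to output
    Hdo = 1 / (1 + L)          # Hdo^y : output disturbance to output (= S)
    Hn = -L / (1 + L)          # Hn^y  : sensor noise to output (= -T)
    return Hr, Hdi, Hdo, Hn

P = lambda s: 1.0 / (s + 1.0)  # assumed first-order plant
C = lambda s: 10.0             # assumed static controller

Hr, Hdi, Hdo, Hn = closed_loop_gains(P, C, 2j)
# Sanity checks implied by the formulas: S + T = 1 and Hn^y = -Hr^y.
assert abs(Hr + Hdo - 1) < 1e-12
assert abs(Hn + Hr) < 1e-12
```

The two assertions restate the identities $S + T = 1$ and $H_n^y = -H_r^y$, which any correct answer to Part (1) must satisfy.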

Figure 1.42 Exercise 1.53: Taxonomy of different signals around the loop.

We close this exercise by mentioning that because we are mainly interested in $Y, y$, for notational simplicity we often omit the superscript $y$ in Part (1) and use, e.g., $H_r$ instead of $H_r^y$. Moreover, in the standard 1-DOF system we define $e, u$ as the input and output of the controller, $u_p = u + d_i$, and $y = y_p + d_o$.

Exercise 1.54: Consider Panel B of Fig. 1.40 of Exercise 1.34.
1. Write down the expressions for $Y, U_p, U_2, E$ in terms of the system inputs $r, d, n$ (i.e., in the manner of Exercise 1.53). Note that $U_p, U_2, E$ are the input to the plant $P_2$, the output of controller $C_2$, and the tracking error.
2. For all the transfer functions that are used in the expressions of Part (1) as $H_B^A$, find the sensitivity $S_G^{H_B^A}$, where $G$ is any of $C_1, P_2, F$.
3. Assume that $P_1, P_2$ are two liquid tanks. Draw the schematic of the actual system.
4. Draw a 1-DOF/2-DOF/3-DOF equivalent control structure for the system in which one block is used for the plant and one/two/three for the controller(s), perhaps as some matrices, and show the interconnections in the blocks.

Exercise 1.55: Consider Panel C of Fig. 1.40 of Exercise 1.34.
1. Write down the expressions for $Y_1, Y_2, U_p, U_1, U_3, E$ in terms of the system inputs $r_1, r_2, d, n$ (i.e., in the manner of Exercise 1.53). Note that $U_p, U_1, U_3, E$ are the input to the plant $P_2$, the output of controller $C_1$, the output of controller $C_3$, and the tracking error.
2. For all the transfer functions that are used in the expressions of Part (1) as $H_B^A$, find the sensitivity $S_G^{H_B^A}$, where $G$ is any of $C_1, P_2, C_3$.
3. Assume that $P_1, P_2$ are two liquid tanks. Draw the schematic of the actual system.
4. Draw a 1-DOF/2-DOF/3-DOF equivalent control structure for the system in which one block is used for the plant and one/two/three for the controller(s), perhaps as some matrices, and show the interconnections in the blocks. Use a vector-valued arrow for the inputs and outputs of the system, as in the case of all structures shown in the text.
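Part (2) of Exercises 1.54 and 1.55 asks for sensitivities of the form $S_G^H = \frac{G}{H}\frac{\partial H}{\partial G}$. As a hedged numeric sketch, the choice $H = T = PC/(1+PC)$ with $G = C$ and the scalar gains below are illustrative assumptions, not data from the exercises:

```python
# Hedged numeric sketch of the sensitivity S_G^H = (G/H)(dH/dG) asked for in
# Part (2). The scalar values and the choice H = T, G = C are assumptions.

def relative_sensitivity(H, G0, eps=1e-6):
    """S_G^H at G0 via a central difference: (dH/dG) * (G/H)."""
    dH = (H(G0 + eps) - H(G0 - eps)) / (2.0 * eps)
    return dH * G0 / H(G0)

P = 0.5                                  # assumed plant gain at one frequency
C0 = 4.0                                 # nominal controller gain
T = lambda C: P * C / (1.0 + P * C)      # complementary sensitivity, H(G)

S_num = relative_sensitivity(T, C0)
# Known closed form for this H and G: S_C^T = 1/(1 + PC), the loop sensitivity.
S_exact = 1.0 / (1.0 + P * C0)
assert abs(S_num - S_exact) < 1e-6
```

The numeric value matching $1/(1+PC)$ illustrates the classical result that the closed-loop gain's relative sensitivity to the controller equals the sensitivity function of the loop.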
Exercise 1.56: Repeat Exercise 1.55 for Panel D of Fig. 1.40 of Exercise 1.34. For Part (3) assume that $P_1, P_2, P_3$ are three liquid tanks.

Exercise 1.57: Consider the system in the left panel of Fig. 1.43, in which $P$ is the nonlinear plant and $C$ is the nonlinear controller which should be designed such that $y = P(u) = P(C(r)) = r$. Note that parentheses denote nonlinear dependence, not multiplication. Rigorous treatment of this problem is outside the scope of this book; nevertheless, we present a simple approximate solution. We use the structure in the right panel of the same figure, in which $G(\cdot)$ is unknown and $P_0$ is an approximate model of the plant, $P_0 \approx P$, the more exact the better (preferably $P_0 = P$). There holds $u = G(r - P_0(u))$. Assuming $G$ is invertible, we have $G^{-1}(u) = r - P_0(u)$. Hence, if $G^{-1}(u) \approx 0$, then $P_0(u) \approx r$ and $y = P(u) \approx P_0(u) \approx r$. That is, the system $r \to y$ is not only almost linear, but also almost has unity gain and tracks an arbitrary reference input well. If $G(\cdot)$ is a high-gain system, then $G^{-1}(\cdot) \approx 0$. That is, with the help of 'high gain' and 'feedback' we can linearize a nonlinear system. A simple choice for $G(\cdot)$ is $G = k \gg 1$, i.e., a static

Figure 1.43 Exercise 1.57: Linearizing a nonlinear plant.

large gain (which is also a linear block). (1) You are encouraged to come back to this exercise in the future and provide a rigorous mathematical analysis of this 'conceptual analysis'. (2) Propose other choices for $G(\cdot)$ and analyze their impact on the result. (3) Propose an alternative structure for the same purpose. Note that this is an advanced exercise; this course does not provide the knowledge needed to answer it.
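The high-gain linearization idea can be demonstrated numerically. In the sketch below $G$ is realized as a high-gain integrator, $\dot u = k(r - P_0(u))$, one possible concrete choice for $G(\cdot)$; the cubic plant and all parameter values are hypothetical illustrations, not from the text:

```python
# Minimal numeric sketch of Exercise 1.57's idea: a high-gain feedback loop
# makes the nonlinear plant behave almost like a unity-gain linear system.
# Plant P(u) = u + u**3 and the gains below are assumed for illustration.

def P(u):
    return u + u**3                    # nonlinear plant (assumed); P0 = P here

def high_gain_track(r, k=50.0, dt=1e-3, steps=2000):
    """Euler-integrate du/dt = k*(r - P0(u)) and return the plant output y."""
    u = 0.0
    for _ in range(steps):
        u += dt * k * (r - P(u))       # feedback of r - P0(u) through gain k
    return P(u)

y = high_gain_track(2.0)
# At steady state P0(u) = r, so y = P(u) tracks r almost exactly.
assert abs(y - 2.0) < 1e-6
```

Doubling $k$ (with a correspondingly smaller step `dt` for numerical stability) shrinks the residual tracking error, consistent with the claim that $G^{-1}(\cdot) \approx 0$ for high-gain $G$.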

References Al-Tabari M.I.J., History of the Prophets and Kings (Tarikh e Tabari), Translated from Farsi to English by F. Rosenthal, State University of New York Press, 1989. Arimoto, S., Kawamura, S., Miyazaki, F., 1984. Bettering operation of dynamic systems by learning: a new control theory for servomechanism or mechatronic system. In: 23rd Conf. Decision and Control, Las Vegas, Nevada, pp. 10641069. Astrom, K.J., Hang, C.C., Lim, B.C., 1994. A new Smith predictor for controlling a process with an integrator and long dead-time. IEEE Trans. Autom. Control. 39 (2), 343345. Babazadeh, M., Nobakhti, A., 2015. Direct synthesis of fixed-order HN controllers. IEEE Trans. Autom. Control. 60 (10), 27042709. Babazadeh, M., Nobakhti, A., 2017. Sparsity promotion in state feedback controller design. IEEE Trans. Autom. Control. 62 (8), 40664072. Bai, J., Wang, S., Zhang, X., 2008. Development of an adaptive Smith predictor-based selftuning PI controller for an HVAC system in a test room. Energy Build. 40 (12), 22442252. Ballard, G., Demmel, J., Holtz, O., Schwartz, O., 2012. Graph expansion and communication costs of fast matrix multiplication. J. ACM. 59 (6), 123. Bavafa-Toosi, Y., 2000. Eigenstructure Assignment by Output Feedback, MEng Thesis (Control). Department of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran. Bavafa-Toosi, Y., 2006. Decentralized Adaptive Control of Large-Scale Systems, PhD Dissertation (Systems and Control), School of Integrated design Engineering. Keio University, Yokohama, Japan. Bavafa-Toosi, Y., 2016. On the theory of flexible neural networks—Part I: A survey paper. Int. J. Syst. Sci. (available online). Benner, P., Kressner, D., Mehrmann, V., 2003. Structure preservation: a challenge in computational control. Future Gener. Comput. Syst. 19, 12431252. Bensoussan, A., Da Prato, G., Delfour, M.C., Mitter, S.K., 2007. Representation and Control of Infinite Dimensional Systems. Birkhauser. 
Blanchini, F., Fenu, G., Giordano, G., Pellegrino, F.A., 2017. Model-free plant tuning. IEEE Trans. Autom. Control. 62 (6), 26232635. Bode, H.W., 1945. Network Analysis and Feedback Amplifier Design. Van Nostrand, NY. Boerm, S., Mehl, C., 2012. Numerical Methods for Eigenvalue Problems. De Gruyter, Berlin. Boyd, S., Barratt, C., 1991. Limits of Performance. Prentice-Hall, NJ. Bonnard, B., Chyba, M., 2003. Singular Trajectories and their Role in Control Theory. Springer-Verlag, New York. Cai, H., Mostofi, Y., 2016. When human visual performance is imperfect—how to optimize the collaboration between one human operator and multiple field robots. In: Wang, Y. (Ed.), Trends in Control and Decision-Making for Human-Robot Collaboration Systems. Springer.

Introduction

75

Campbell, S., Nikoukhah, R., 2002. Auxiliary Signal Design for Failure Detection. Princeton University Press. Casas, E., Troeltzsch, F., 2014. Second-order and stability analysis for state-constrained elliptic optimal control problems with sparse controls. SIAM J. Control Optim. 52, 10101033. Castro-Linares, R., Moog, C.H., 1994. Structure invariance for uncertain nonlinear systems. IEEE Trans. Autom. Control. 39 (10), 21542158. Chen, B.M., Saberi, A., Ly, Y.-L., 1992. Exact computation of the infimum in HN optimization via output feedback. IEEE Trans. Autom. Control. 37 (1), 7078. Chen, B.M., Lin, Z., Shamash, Y., 2004. Linear Systems Theory: A Structural Decomposition Approach. Birkhauser, New York. de Oliviera, V., Karimi, A., 2013. Robust Smith predictor design for time-delay systems with HN performance. IFAC Proc. 46 (3), 102107. Doyle J.C., “Synthesis of robust controllers and filters,” In Proceedings of the IEEE Conf. Decision and Control, pp. 109114, San Antonio, Texas, 1983. Dvijotham, K., Todorov, E., Fazel, M., 2015. Convex structured controller design in finite horizon. IEEE Trans. Control Netw. Syst. 2 (1), 110. El-Sakkary, A.K., 1981. The Gap Metric for Unstable Systems, PhD Dissertation. Department of Electrical and Computer Engineering, McGill University, Montreal, Canada. Esna Ashari, A., Labibi, B., 2012. Application of matrix perturbation theory in robust control of large-scale systems. Automatica. 48 (8), 18681873. Falb, P., 1990. Methods of Algebraic Geometry in Control Theory: Part I and II. Birkhauser. Fazel, F., Fazel, M., Stojanovic, M., 2013. Random access compressed sensing over fading and noisy communication channels. IEEE Trans. Wireless Commun. 12 (5), 21142125. Fliess, M., Join, C., 2013. Model-free control. Int. J. Control. 5 (4), 123. Francis, B.A., Wonham, W.M., 1975. Internal model principle for linear-multivariable regulators. Appl. Math. Optim. 2, 170194. Garcia, C.E., Morari, M., 1982. Internal model control—1. 
A unified review and some new results. Ind. Eng. Chem. Process Des. Dev. 21, 308323. Garrido, J., Vazquez, F., Morilla, F., 2014. Inverted decoupling internal model control for square stable multivariable time delay systems. J. Process Control. 24 (11), 17101719. Gupta, M.M., Jin, L., Homma, N., 2003. Static and dynamic neural networks: from fundamentals to advanced theory. IEEE Press. Wiley-Interscience. Hara, S., Yamamoto, Y., Omata, T., Nakano, M., 1988. Repetitive control system: a new type servo system for periodic exogenous signals. IEEE Trans. Autom. Contr. 33 (7), 659668. Hassibi, B., Sayed, A.H., Kailath, T., 1999. Indefinite-Quadratic Estimation and Control—A Unified Approach to H2 and HN Theories. SIAM. Heemels, W.P.M.H., Sandee, J.H., Van Den Bosch, P.P.J., 2008. Analysis of event-driven controllers for linear systems. Int. J. Control. 81 (4), 571590. Helton, J.W., James, M., 1999. Extending Hinf Control to Nonlinear Systems. SIAM, Philadelphia. Hirschorn, R.M., 1979. Invertibility on nonlinear control systems. SIAM J Control Optim. 17 (2), 289297. Hovakimyan, N., Cao, C., 2010. L1 Adaptive Control Theory: Guaranteed Robustness with Fast Adaptation. SIAM. Ioannou, P., Sun, J., 2012. Robust Adaptive Control. Dover Publications. Irrgeher, C., Kritzer, P., Pillichshammer, F., Wozniakowski, H., 2016. Tractability of multivariate approximation defined over Hilbert spaces with exponential weights. J. Approx. Theory. 207, 301338.

76

Introduction to Linear Control Systems

Isidori, A., 2016. Lectures in Feedback Design for Multivariable Systems. Springer. Ito, H., Ohmori, H., Sano, A., 1993. Design of stable controllers attaining low HN weighted sensitivity. IEEE Trans. Autom. Control. 38 (3), 485488. Ito, H., 1999. Local stability and performance robustness of nonlinear systems with structured uncertainty. . IEEE Trans. Autom. Control. 44 (6), 12501254. Iwasaki, T., Hara, S., 2005. Generalized KYP lemma: unified frequency domain inequalities with design applications. IEEE Trans. Autom. Control. 50, 4159. Jagtap, R., Kaistha, N., Skogestad, S., 2013. Economic plantwide control over a wide throughput range: a systematic design procedure. J. AIChE. 59 (7), 24072426. Jiang, F., Tsuji, H., Ohmori, H., Sano, A., 1997. Adaptation for active noise control. IEEE Control Syst. Mag. 17 (6), 3647. Jurdjevic, V., 2008. Geometric Control Theory. Cambridge University Press. Kailath, T., Sayed, A.H., Hassibi, B., 2000. Linear Estimation. Prentice Hall. Karafyllis, I., Jiang, Z.-P., 2012. A new small-gain theorem with an application to the stabilization of the chemostat. Int. J. Robust Nonlinear Control. 22, 16021630. Kautsky, J., Nichols, N.K., 1983. Matrix factorisation and some applications in linear systems. In Proceedings of the IEE Colloquium on Reliable Numerical Procedures in Control System Design 5/15/5. Khaki-Sedigh, A., Moaveni, B., 2009. Control Configuration Selection for Multivariable Plants. Springer. Kharitonov, V.L., 2013. Time-Delay System: Lyapunov Functionals and Matrices. Birkhauser. Khorasani, K., 1990. Feedback equivalence for a class of nonlinear singularly perturbed systems. IEEE Trans. Autom. Control. 35 (12), 13591363. Kouno, T., Ohmori, H., Sano, A., 2002. Adaptive active noise control for uncertain secondary pathes. In: Proceedings of the 11th European Signal Processing Conference, Toulouse, France. ID: 7071987. Lakerveld, R., Benyahia, B., Braatz, R.D., Barton, P.I., 2013. 
Model-based design of a plantwide control strategy for a continuous pharmaceutical plant. AIChE J. 59, 36713685. Lan, Y., Li, Y.C., 2013. A resolution of the turbulence paradox: Numerical implementation. Int. J. Non-Linear Mech. 51, 19. Lavaei, J., Sojoudi, S., Aghdam, A., 2010. Pole assignment with improved control performance by means of periodic feedback. IEEE Trans. Autom. Control. 55 (1), 248252. Li, Z., Yu, J., Xing, X., Gao, H., 2015. Robust output-feedback attitude control of a threedegree-of-freedom helicopter via sliding-mode observation technique. IET Control Theory Appl. 9, 16371643. Li, Y.C., 2017. Rough dependence upon initial data exemplified by explicit solutions and the effect of viscosity. Nonlinearity. 30, 10971108. Lin, Z., Saberi, A., 1993. Semi-global exponential stabilization of linear systems subject to input saturation via linear feedbacks. Syst. Control Lett. 21 (3), 225239. Lin, Z., Hu, T., 2001. Semi-global stabilization of linear systems subject to output saturation. Syst. Control Lett. 43 (3), 211217. Ljung, L., 1999. System Identification: Theory for the User. 2nd ed. Prentice Hall. Manabe, S., 1961. The non integer integral and its application to control systems. J. Inst. Electr. Eng. Jpn. 6 (3/4), 8387. McEneaney, W.M., 2006. Max-Plus Methods for Nonlinear Control and Estimation. Birkhauser. Megretski, A., Rantzer, A., 1995. “System Analysis via Integral Quadratic Constraints—Part I & II,” Technical Report. Department of Automatic Control, Lund Institute of Technology, 1997.

Introduction

77

Mehl, C., 1999. Compatible Lie and Jordan Algebras and Applications to Structured Matrices and Pencils. Logos, Berlin. Mehrmann, V., 1991. The autonomous linear quadratic control problem, Lecture Notes in Control and Information Sciences, No. 163. Springer-Verlag. Morari, M., Zafiriou, E., 1989. Robust Process Control. Prentice Hall. Mostofi, Y., 2011. Compressive cooperative sensing and mapping in mobile networks. IEEE Trans. Mobile Comput. 10 (12), 17701785. Naghshvar, M., Javidi, T., 2013. Active sequential hypothesis testing. Annal. Stat. 41 (6), 27032738. Normey-Rico, J.E., Garcia, P., Gonzalez, A., 2012. Robust stability analysis of filtered Smith predictor for time-varying delay processes. J. Process Control. 22 (10), 19751984. Ohmori, H., Sano, A., 1992. Design of robust adaptive control system with a fixed compensator. In: Davisson, L.D., et al., (Eds.), Robust Control, Lecture Notes in Control and Information Sciences, vol. 183. Springer, Berlin, pp. 146153. Ohmori, H., Shrivastava, Y., 1995. Robust adaptive control system with fixed compensator attaining robust positive realness. In: Proceedings of the American Control Conference, vol. 1, pp. 592596. Seattle, WA, USA. Okumura, H., Ohno, T., Ohmori, H., Sano, A., 2010. New hybrid structure for adaptive active noise control. In: UKACC International Conference on Control. Coventry, UK. pp. 16. Olfati-Saber, R., 2007. Distributed Kalman filtering for sensor networks. In Proceedings of the 46th IEEE Conference on Decision and Control, 54925498, New Orleans, USA. Olfati-Saber, R., 2009. Kalman-consensus filter: optimality, stability, and performance. In Proc. the 48th IEEE Conference on Decision and Control and the 28th Chinese Control Conference (CDC/CCC), 70367042, Shanghai, China. Olfati-Saber, R., Jalalkamali, P., 2012. Coupled distributed estimation and control for mobile sensor networks. IEEE Trans. Autom. Contr. 57 (10), 26092614. Oymak, S., Jalali, A., Fazel, M., Eldar, Y., Hassibi, B., 2015. 
Simultaneously structured models with applications to sparse and low-rank matrices. IEEE Trans. Inform. Theory. 61 (5), 28862908. Padhan, D.G., Majhi, S., 2012. Modified Smith predictor based cascade control of unstable time delay processes. ISA Trans. 51 (1), 95104. Recht, B., Fazel, M., Parrilo, P., 2010. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52 (3), 471501. Reis, T., Stykel, T., 2011. Lyapunov balancing for passivity-preserving model reduction of RC circuits. SIAM J. Appl. Dyn. Syst. 10 (1), 134. Richalet, J.A., Rault, A., Testud, J.L., Papon, J., 1978. Model predictive heuristic control: applications to an industrial process. Automatica. 14, 413428. Rodriguez, C., Normey-Rico, J.E., Guzman, J.L., Berenguel, M., 2016. On the filtered Smith predictor with feedforward compensation. J. Process Control. 41, 3546. Roozbehani, M., Megretski, A., Feron, E., 2013. Optimization of Lyapunov invariants in verification of software systems. IEEE Trans. Autom. Contr. 58 (3), 696711. Rotkowitz, M., Lall, S., 2006. A characterization of convex problems in decentralized control. IEEE Trans. Autom. Control. 51 (2), 274286. Rungger, M., Tabuada, P., 2016. A notion of robustness for cyber-physical systems. IEEE Trans. Autom. Control. 61 (8), 21082124. Ryll, C., Lober, J., Martens, S., Engel, H., Troeltzsch, F., 2016. Analytical, optimal, and sparse optimal control of traveling wave solutions to reaction-diffusion systems. In: Scholl, E., Klapp, S.H.L., Hovel, P. (Eds.), Understanding complex systems, Control of self-organizing nonlinear systems. Springer, pp. 189210.

78

Introduction to Linear Control Systems

Saberi, A., Chen, B.M., Sannuti, P., 1993. Loop Transfer Recovery: Analysis and Design. Springer-Verlag. Saberi, A., Lin, Z., Teel, A.R., 1996. Control of linear systems subject to input saturation. IEEE Trans. Autom. Contr. 41 (3), 368378. Saberi, A., Stoorvogel, A.A., Sannuti, P., 2007. Filtering Theory with Applications to Fault Detection and Isolation. Birkhauser. Saberi, A., Stoorvogel, A., Sannuti, P., 2012. Internal and External Stabilization of Linear Systems with Constraints. Birkhauser. Sachs, E., Guo, R.-S., Ha, S., Hu, A., 1990. On-line process optimization and control using the sequential design of experiments. Symposium on VLSI Technology, Honolulu, pp. 99100. Salcedo, C.A.G., Hernandez, A.I., Vilanova, R., Cuartas, J.H., 2013. Inventory control of supply chains: Mitigating the bullwhip effect by centralized and decentralized internal model control approaches. Eur. J. Oper. Res. 224 (2), 261272. Santos, T.L.M., Torrico, B.C., Normey-Rico, J.E., 2016. Simplified filtered Smith predictor for MIMO processes with multiple time delays. ISA Trans. in press. Sarangapani, J., 2006. Neural Network Control of Nonlinear Discrete-Time Systems. Taylor & Francis. Sarwate, A.D., Javidi, T., 2015. Distributed learning of distributions via social sampling. IEEE Trans. Autom. Control. 60 (1), 3445. Shahinpour, S., Rakhlin, A., Jadbabaie, A., 2016. Distributed detection: finite-time analysis and impact of network topology. IEEE Trans. Autom. Control. 61 (11), 32563268. Sira-Remirez, H., Rodriguez, C.G., Romero, J.C., Juirez, A.L., 2014. Algebraic Identification and Estimation Methods in Feedback Control Systems. John Wiley & Sons. Skogestad, S., 2015. Control structure selection. In: Baillieul, J., Samad, Tariq, T. (Eds.), Encyclopedia of Systems and Control. Springer. Smith, O.J.M., 1959. A controller to overcome deadtime. ISA J. 6 (2), 2833. Smyshlyaev, A., Krstic, M., 2009. Adaptive Control of Parabolic PDEs. Princeton University Press. 
Sojoudi, S., Lavaei, J., Aghdam, A.G., 2011. Structurally Constrained Controllers: Analysis and Synthesis. Springer. Sojoudi, S., Lavaei, J., 2014. Exactness of semidefinite relaxations for nonlinear optimization problems with underlying graph structure. SIAM J. Optimization. 24 (4), 17461778. Sontag, E.D., 2006. Input to state stability: basic concepts and results. In: Nistri, P., Stefani, G. (Eds.), Nonlinear and Optimal Control Theory. Springer, pp. 163220. Soudbakhsh, D., Phan, L.T.X., Annaswamy, A.M., Sokolsky, O., 2016. Co-design of arbitrated network control systems with overrun strategies. IEEE Trans. Contr. Network Systems (in press). Stefani, G., Boscain, U., Gauthier, J.-P., Sarychev, A., Sigalotti, M. (Eds.), 2014. Geometric Control Theory and sub-Riemannian geometry. Springer. Stoorvogel, A.A., Saberi, A., 2016. Necessary and sufficient conditions for global external stochastic stabilization of linear systems with input saturation. IEEE Trans. Autom. Contr. 61 (5), 13681372. Sun, L., Ohmori, H., Sano, A., 2001. Output intersampling approach to direct closed-loop identification. IEEE Trans. Autom. Control. 46 (12), 19361941. Takamatsu, T., Ohmori, H., 2016. Sliding mode controller design based on backstepping technique for fractional order system. . SICE J. Control, Measurement, and System Integration. 9 (4), 151157.

Introduction

79

Troeltzsch F., Optimal control of partial differential equations—Theory, methods and applications, Graduate Studies in Mathematics, 112, American Mathematical Society, 2010 (Translation of the second German edition by J. Sprekels). Tzoumas, V., Rahimian, M.A., Pappas, G.J., Jadbabaie, A., 2016. Minimal actuator placement with bounds on control effort. IEEE Trans. Control Netw. Syst. 3 (1), 6778. Van Diggelen, F., Glover, K., 1994. State-space solutions to Hadamard weighted HN and H2 controlproblems. Int. J. Control. 59 (2), 357394. Van Leeuwen, J., 1998. Handbook of Theoretical Computer Science, Vol. A: Algorithms and Complexity. Elsevier. Vidyasagar, M., 2002. Learning and Generalization: With Applications to Neural Networks, 2nd ed. Springer. Wang, Y., Gao, F., Doyle, F.J., 2009. Survey on iterative learning control, repetitive control, and run-to-run control. J. Process Control. 19, 15891600. Wiener, N., 1942. Extrapolation, interpolation & smoothing of stationary time series, Technical Report of the Services 19. Research Project DIC-6037. MIT. Yadav, A.K., Gaur, P., 2015. Intelligent modified internal model control for speed control of nonlinear uncertain heavy duty vehicles. ISA Trans. 56, 288298. Zadeh, L.A., 1950a. Thinking machines—a new field in electrical engineering. Columbia Eng. Q. 3, 1213, 30, 31. Zadeh, L.A., 1950b. The determination of impulsive response of variable networks. J. Appl. Phys. 21, 624645. Zadeh, L.A., 1950c. Frequency analysis of variable networks. Proc. I.R.E. 38, 291299. Zadeh, L.A., 1953. Theory of filtering. SIAM J. 1, 3551. Zadeh, L.A., 1956a. On the identification problem. IRE Trans. Circuit Theory CT. 3, 277281. Zadeh, L.A., 1956b. On passive and active networks and generalized Norton’s and Thevinin’s theorems. Proc. IRE. 44, 378. Zadeh, L.A., 1958. “What is optimal,” (editorial). IRE Trans. Inf. Theory IT. 4, 3. Zadeh, L.A., 1962a. From circui t theory to system theory. Proc. IRE. 50, 856865. Zadeh, L.A., 1962b. 
On the extended definition of linearity. Proc. IRE. 50, 20002001. Zadeh, L.A., 1968. The concepts of system, aggregate and state in system theory. In: Zadeh, L.A., Polak, E. (Eds.), System Theory. McGraw-Hill, pp. 342. Zadeh, L.A., Miller, K.S., 1955. Generalized Fourier integrals. Trans. PGCT CT. 2, 256260. Zadeh, L.A., Ragazzini, J.R., 1950d. An extension of Wiener’s theory of prediction. J. Appl. Phys. 21, 645655. Zarre, A., Vahidian Kamyad, A., Khaki-Sedigh, A., 2002. Application of measure theory in the design of multivariable PID controllers. Int. J. of Engineering Sciences. 13 (1), 162176, In Persian.

Further Reading Zhang, Z., Yan, P., Wang, P., 2014. Discrete time-varying internal model-based control of a novel parallel kinematics multi-axis servo gantry. IFAC Proc. 47 (3), 20402045. Abu-Khalaf, M., Huang, J., Lewis, F.L., 2006. Nonlinear H2/HN Constrained Feedback Control. Springer. Aihara, K., Imura, J.-I., Ueta, T., 2015. Analysis and Control of Complex Dynamical Systems—Robust Bifurcation, Dynamic Attractors, and Network Complexity. Springer.

80

Introduction to Linear Control Systems

Antoulas, A.C., 2005. Approximation of Linear Dynamical Systems. SIAM. Azar, A.-T., Vaidyanathan, S., 2015. Chaos Modelling and Control Systems Design. Springer. Barker, R.H., 1952. The pulse transfer function and its applications to sampling servo systems. Proc. IEE. 99, 302317. Barmish, B.R., 1994. New Tools for Robustness of Linear Systems. MacMillan, NY. Basar, T., Bernhard, P., 2008. H-infinity Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Springer. Batou, A., 2015. Model updating in structural dynamics—uncertainties on the position and orientation of sensors and actuators. J. Sound Vibr. 354, 4764. Bavafa-Toosi, Y., 1996. A C11 Implementation of the Critical Path Method for Project Management, Technical Report. Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran. Bavafa-Toosi, Y., 1997. Design and Implementation of HEXFET-based High-Frequency Full-Bridge PWM Welding Sets, BEng Thesis (Power). Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran. Bavafa-Toosi, Y., 1999. PID Control: Theory, Design, And Tuning, MS Seminar (Control). Department of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran. Bavafa-Toosi, Y., Blendinger, C., Mehrmann, V., Steinbrecher, A., Unger, R., 2008. A new methodology for modeling, analysis, synthesis, and simulation of time-optimal train traffic in large networks. IEEE Trans. Autom. Sci. Eng. 5, 4352. Bemporad, A., Heemels, M., Johansson, M., 2010. Networked Control Systems. Springer. Bequette, B.W., 2003. Process Control: Modeling, Design and Simulation. Prentice Hall. Bertsekas, D., 2015. Convex Optimization Algorithms. Athena Scientific. Bevrani, H., Watanabe, M., Mitani, Y., 2014. Power System Monitoring and Control. WileyIEEE Press. Bloch, A.M., Baillieul, J., Crouch, P.E., Marsden, J.E., 2003. Mechanics and Control: Nonholonomic Mechanics, Control, and Variational Principles. Springer. 
Borelli, F., Bemporad, A., Morari, M., 2014. Predictive control for linear and Hybrid systems. Updated Version. Cambridge University Press. Bourles, H., Marinescu, B., 2011. Linear Time-Varying Systems: Algebraic-Analytic Approach. Springer. Burger, M., Micheletti, A., Morale, D., 2006. Math Everywhere, Deterministic and Stochastic Modelling in Biomedicine, Economics, and Industry. Springer. Caponetto, R., Dongola, G., Fortuna, L., Petras, I., 2010. Fractional Order Systems. World Scientific. Cassandras, C.G., Lafortune, S., 2008. An Introduction to Discrete-Event Systems. 2nd ed. Springer. Chi, G., Wang, D., Zhu, S., 2015. An integrated approach for sensor placement in linear dynamic systems. J. Franklin Inst. 352 (3), 10561079. Cong, S., 2014. Control of Quantum Systems—Theory and Methods. Wiley. Costa, O.L.V., Fragoso, M.D., Marques, R.P., 2005. Discrete-Time Markov Jump Linear Systems. Springer. Costentino, C., Bates, D., 2011. Feedback Control in Systems Biology. CRC Press. Dahleh, M.A., Diaz-Bobillo, I., 1995. Control of Uncertain Systems: A Linear Programming Approach. Prentice-Hall. da Silva, I.N., Spatti, D.H., Flauzino, R.A., Liboni, L.H.B., dos Reis Alves, S.F., 2017. Artificial Neural Networks: A Practical Course. Springer.

Introduction

81

Deissenberg, C., Hartl, R.F. (Eds.), 2005. Optimal Control and Dynamic Games: Applications in Finance, Management Science and Economics. Springer. Do, K.D., Pan, J., 2009. Control of Ships and Underwater Vehicles: Design for Underactuated and Nonlinear Marine Systems. Springer. Domzal, J., Wojcik, R., Jajszczyk, A., 2015. Guide to Flow-Aware Networking: Quality of Service Architectures and Techniques for Traffic Management. Springer. Duan, G.-R., 2010. Analysis and Design of Descriptor Linear Systems. Springer. Durham, W., 2013. Aircraft Flight Dynamics and Control. John Wiley & Sons. Fattorini, H.O., 2005. Infinite Dimensional Linear Control Systems. Elsevier. Giorgi, G., Guerraggio, A., Thierfelder, J., 2004. Mathematics of Optimization: Smooth and Nonsmooth Case. Elsevier. Gonzalez, R.T., Qi, F., Huang, B., 2016. Process Control Systems Fault Diagnosis: A Bayesian Approach. Wiley. Gustafsson, B., 2011. Fundamental of Scientific Computing. Springer. Halderman, J.D., 2015. Automotive Fuel and Emission Control System, 4th ed. Prentice Hall. Hazen, H.L., 1934. Design and test of a high performance servo-mechanisms. J. Franklin Inst. 218, 543580. Heath, M.T., 2001. Scientific Computing. 2nd ed. McGraw Hill. Hippe, P., 2006. Windup in Control: Its Effect and Their Prevention. Springer. Aspects of mathematical modelling. In: Hosking, R.J., Venturino, E. (Eds.), Applications in Science, Medicine, Economics and Management. Birkhauser. Ichikawa, A., Katayama, H., 2008. Linear Time-Varying Systems and Sampled-Data Systems. Springer. Jamshidi, M., 1996. Large-Scale Systems: Modeling, Control and Fuzzy Logic. Prentice Hall. Janert, P.K., 2013. Feedback Control for Computer Systems. O’Reilly Media. Johnson, R.S., 2005. Singular Perturbation Theory: Mathematical and Analytical Techniques with Applications to Engineering. Springer. Katayama, T., 2005. Subspace Methods for System Identification. Springer. Kazmierkowski, M.P., Krishnan, R., Blaabjerg, F. (Eds.), 2002. 
Control in Power Electronics: Selected Problems. Academic Press. Keller, J.M., Liu, D., Fogel, D.B., 2016. Fundamentals of Computational Intelligence: Neural Networks, Fuzzy Systems, and Evolutionary Computation. Wiley-IEEE Press. Kermani, M.R., Moallem, M., Patel, R.V., 2008. Applied Vibration Suppression Using Piezoelectric Materials. Nova Science Publishers, NY. Kerner, B.S., 2009. Introduction to Modern Traffic Flow Theory and Control—The Long Road to Three-Phase Traffic Theory. Springer. Khalil, H., 2014. Nonlinear Control. Prentice Hall. Konstantinov, M., Gu, D.-W., Mehrmann, V., Petkov, P., 2003. Perturbation Theory for Matrix Equations. North-Holland. Kruse, R., Borgelt, C., Braune, C., Mostaghim, S., Steinbrecher, M., 2016. Computational Intelligence - A Methodological Introduction. 2nd ed Springer, Berlin. Kunkel, P., Mehrmann, V., 2006. Differential-Algebraic Equations: Analysis and Numerical Solution. European Mathematical Society. Lamour, R., Maerz, R., Tischendorf, C., 2013. Differential Algebraic Equations: A Projector Based Analysis. Springer. Lawden, D.F., 1952. A general theory of sampling servomechanisms. Proc. IEE. 98, 3136. Lessard, C.S., 2009. Basic Feedback Controls in Biomedicine. Morgan and Claypool. Liberzon, D., 2012. Calculus of Variations and Optimal Control Theory: A Concise Introduction. Princeton University Press.

82

Introduction to Linear Control Systems


2 System representation

2.1 Introduction

The classical method of designing a controller for a system pivots on describing the system by a model. There are different frameworks for model representation, as will shortly be introduced. Such a model is a mathematical representation which is desired to demonstrate the effect of certain system variables on certain other system variables. Let us further explain some phrases in this sentence:

- Mathematical representation: As we shall shortly see, there are different ways of describing a model. Thus, there are different mathematical representations for the system.

- Desired/Demonstrate: There are always inaccuracies in system modeling. In this course we do not deal with such inaccuracies; they are handled in graduate courses. However, some simplifications are always made, e.g., in modeling a nonlinear spring or friction with a linear one. Hence the aforementioned words have relative strength and are accurate only to a certain extent.

- Certain system variables: We note that in general it is neither necessary nor possible to model the interrelation between all variables in a system. That is, we cannot and do not really need to model the effect of all variables on each other. Examples that—depending on the application—are needed include: the effect of the setpoint on the control signal, the effect of the setpoint on the output, the effect of the control signal on the internal system variables, the effect of the control signal on the output, the effect of the disturbance on the control signal, the effect of the disturbance on the internal system variables, the effect of the disturbance on the output, the effect of the internal system variables on each other, the effect of internal system variables on the output, etc.

Having clarified this, we give an overview of the organization of the rest of this chapter. We start with system modeling below in Section 2.2. We present the basic mathematical frameworks—at the undergraduate level—for system modeling. These are the transfer function and the state-space frameworks. With emphasis on SISO LTI systems, some details of both frameworks are elaborated. We learn how to linearize a model so as to convert it to a transfer function in the case that it is time invariant. This is expounded by basic examples of modeling of a wide variety of plants in Section 2.3. We consider typical examples of electrical, mechanical, liquid, thermal, hydraulic, chemical, structural, biological, economic, ecological, societal, physical, and time-delay systems. In the examples and worked-out problems some discrete-time, discrete-event, stochastic, and nonlinear models are also discussed. In Section 2.4 we study block diagrams as a tool for representation of the system constituents as they interconnect in a system. We study their governing algebra and learn how to use them to simplify a complex interconnected system and find the transmittance between different signals in the system. This is followed by signal flow graphs in Section 2.5 as an alternative tool for the same purpose. The chapter is wrapped up by the summary, further readings, worked-out problems, and exercises in Sections 2.6–2.9.

Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00002-1 © 2017 Elsevier Inc. All rights reserved.


2.2 System modeling

There are three main approaches to system modeling: (i) fundamental principles, (ii) black-box methods, and (iii) gray-box methods. In the first approach, the dynamics of the system are described by writing down the formulae of the fundamental principles of the underlying process. The most common fundamental principles are as follows: Kirchhoff's laws in electrical networks, Newton's laws in dynamics, conservation laws in chemical processes, Maxwell's equations in electromagnetics, and the laws of thermodynamics in thermal processes. Economic, social, hydraulic, pneumatic, etc., processes can also be described by the respective fundamental principles governing their dynamics. This requires some specialized knowledge which goes beyond the scope of this book, and thus these systems are not treated in detail here; however, for the sake of familiarity a variety of interesting examples are provided. In the second approach the input and output terminal(s), and nothing more, are given to us. We apply some appropriate bounded input(s) and measure/observe the output(s). In other words, the system is identified from its input-output (I/O) data. The third approach will be explained after Example 2.2. The second and third approaches are studied in graduate studies in the course on system identification. The key early contributions to this field, which formally marked its advent, were made by L.A. Zadeh in 1956 and 1963, who coined its name and furthered its dissemination. Of course, various minor results under different names, e.g., the 'gedanken experiment', were available at that time. Simple examples of the first and second approaches are presented in the sequel.

Example 2.1: Consider a mass M pushed by a force F. The translational movement is opposed by friction, which is a function of velocity, f(v), as shown in Fig. 2.1. Describe the dynamics of this system.

Figure 2.1 Translational movement and friction.

Newton's second law says that $F - f(v) = Ma$, where $z, v, a, M$ are the position, velocity, acceleration, and mass of the object, with $\dot{v} = a$, $\dot{z} = v$. Defining $x_1 = z$, $x_2 = v$, and $y = z$ as the output, the equations of motion will be

\[
\begin{cases}
\dot{x}_1 = x_2 \\
\dot{x}_2 = -\dfrac{1}{M} f(x_2) + \dfrac{1}{M} F \\
y = x_1
\end{cases}
\]
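Remark 2.4 later in this section mentions Runge-Kutta methods for solving such equations numerically. As a minimal sketch of that route (the viscous friction law f(v) = b·v and all parameter values are illustrative assumptions, not from the text), the state equations above can be integrated with a fixed-step classical Runge-Kutta scheme:

```python
# Simulate Example 2.1: M*dv/dt = F - f(v), dz/dt = v, with fixed-step RK4.
# The viscous friction model f(v) = b*v and all parameter values are
# illustrative assumptions; any friction law f(v) could be substituted.

M, b, F = 1.0, 2.0, 4.0          # mass, friction coefficient, applied force

def f(v):                        # friction force as a function of velocity
    return b * v

def xdot(x):                     # state x = [z, v]; returns [dz/dt, dv/dt]
    z, v = x
    return [v, (F - f(v)) / M]

def rk4_step(x, h):              # one classical Runge-Kutta step of size h
    k1 = xdot(x)
    k2 = xdot([x[i] + 0.5*h*k1[i] for i in range(2)])
    k3 = xdot([x[i] + 0.5*h*k2[i] for i in range(2)])
    k4 = xdot([x[i] + h*k3[i] for i in range(2)])
    return [x[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

x, h = [0.0, 0.0], 0.01
for _ in range(int(10.0 / h)):   # integrate from t = 0 to t = 10
    x = rk4_step(x, h)

# With viscous friction the velocity settles where F = f(v), i.e., at F/b = 2.0.
print(x[1])
```

With viscous friction the velocity settles where F = f(v), i.e., at v = F/b; any other friction law can be substituted in f without touching the integrator.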


Example 2.2: Given the I/O data in Fig. 2.2, it is clear that the system on the left is a proportional system (with gain given by the slope) with saturation, and the one in the middle is a sinusoid (whose frequency depends on the value of u on the axis). What is the system on the right? We will shortly see in Chapter 4 that this can be a second-order or higher-order system.

Figure 2.2 Examples of I/O data.

Finally it should be noted that in between the above two 'end classes' (i) and (ii) explained above, there are the gray-box methods as the third approach. They branch into two directions: (a) the structure of the model is known, but not its parameters, for instance $a_1\dot{y} + a_2 y + a_3 u = 0$ or $a_1\ddot{y} + a_2(\dot{y} - u) + \sin(a_3 u) = 0$; or (b) both the structure and the parameters are partially unknown, for example $2\ddot{y} + a\dot{y} = u + f(u)$. In this book, which is on linear control systems, the first course in the control major, we focus on the fundamental-principles approach. The model that we obtain from the above modeling process can be in two main frameworks: (A) state-space domain, and (B) frequency domain. If (A), we transform it to (B), since it is (B) which forms the core of this course. It should be noted that there are third and fourth frameworks for system models: (C) mappings and operators,[1] which are studied mainly in mathematics departments and to some extent in other graduate departments. An example of an operator is $T(x,u) = x(0) + \int_1^2 (x^2 + xu)$. In fact (A) and (B) are special forms of (C). Framework (D) is discussed in item 5 of Further Readings. In the ensuing Subsections 2.2.1 and 2.2.2 state-space models and transfer function models are introduced, respectively.

2.2.1 State-space

Consider the above-given Example 2.1. In this example, $x_1, x_2$ are called the state variables, F is the input, and y is the output. The equations

\[
\begin{cases}
\dot{x}_1 = x_2 \\
\dot{x}_2 = -\dfrac{1}{M} f(x_2) + \dfrac{1}{M} F
\end{cases}
\]

are the state equations and $y = x_1$ is the output equation. If there is more than one

[1] Precisely speaking, a mapping is different from an operator, but the two together constitute this third class.


output one obtains output equations. The general form, valid for a huge body of systems (see Remark 2.7), is as follows:

\[
\begin{cases}
\dot{x}_1 = f_1(x_1, \ldots, x_n, u_1, \ldots, u_m, t) \\
\quad\vdots \\
\dot{x}_n = f_n(x_1, \ldots, x_n, u_1, \ldots, u_m, t)
\end{cases}
\qquad
\begin{cases}
y_1 = g_1(x_1, \ldots, x_n, u_1, \ldots, u_m, t) \\
\quad\vdots \\
y_l = g_l(x_1, \ldots, x_n, u_1, \ldots, u_m, t)
\end{cases}
\tag{2.1}
\]

where $x_1, \ldots, x_n$ are the state variables, i.e., the smallest set of variables (or the least number of variables) completely characterizing the dynamics of the system. Moreover, $u_1, \ldots, u_m$ are the inputs and $y_1, \ldots, y_l$ are the outputs. Define

\[
x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix},\quad
u = \begin{bmatrix} u_1 \\ \vdots \\ u_m \end{bmatrix},\quad
y = \begin{bmatrix} y_1 \\ \vdots \\ y_l \end{bmatrix},\quad
f(x,u,t) = \begin{bmatrix} f_1(x,u,t) \\ \vdots \\ f_n(x,u,t) \end{bmatrix},\quad
g(x,u,t) = \begin{bmatrix} g_1(x,u,t) \\ \vdots \\ g_l(x,u,t) \end{bmatrix}.
\tag{2.2}
\]

Therefore, the system will be described by

\[
\begin{cases}
\dot{x} = f(x,u,t) \\
y = g(x,u,t)
\end{cases}
\tag{2.3}
\]

being the state and output equations, respectively. The question that arises at this point is what a linear system is. To answer this question note that the system is linear if f and g are linear. Thus

\[
\begin{cases}
\dot{x} = A(t)x + B(t)u \\
y = C(t)x + D(t)u
\end{cases}
\]

describes a linear time-varying (LTV) system. In this model $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, $y \in \mathbb{R}^l$, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{l \times n}$, $D \in \mathbb{R}^{l \times m}$. Moreover, A, B, C, and D are the state, input, output, and feedthrough matrices, respectively. Fig. 2.3 shows the interrelation of the state-space matrices. The initial condition is also shown in this figure, if we allow for this method of its inclusion/representation in a block diagram. If A, B, C, and D are time invariant the system is called linear time-invariant (LTI):

\[
\begin{cases}
\dot{x} = Ax + Bu \\
y = Cx + Du
\end{cases}
\tag{2.4}
\]

Note that Fig. 2.3 is valid for the time-domain representation of both LTV and LTI systems, but not for the Laplace domain.

Figure 2.3 Interrelation of state-space matrices.


What if f and/or g are/is not linear? If this is the case, then the system is nonlinear. At the undergraduate level there is a simple approach to tackling nonlinear time-invariant systems: some points are considered on the operating trajectory, and by the use of a Taylor series expansion (considering first-order terms only) a linear model is obtained in the neighborhood of each operating point. In other words, the system is linearized about each operating point. Then linear control is used along with gain scheduling. Flight control systems (airplanes and spacecraft) are probably the most prominent examples where gain scheduling is practiced. Other examples include missile control, power plant control, wind energy systems control, and process control. Although through the methodology just explained we use linear control and thus may expect perfection of the design, there are several critical issues to address, among which are the range of validity of the models, bumpless transfer among the models, and the stability, robustness, and performance of the 'overall' system. At the graduate level there are advanced approaches for tackling such systems: nonlinear control, robust control, adaptive control, switching control, etc. These methods are beyond the scope of this book. For the sake of completeness let us add that when only one of f and g is nonlinear, the system is again nonlinear but there are special methods for controlling it.
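As a toy illustration of the scheduling idea (every number and the interpolation rule below are invented for illustration; real schedules interpolate over measured operating conditions and must be checked against the issues just listed):

```python
# A toy sketch of gain scheduling: two operating points of some plant, a
# controller gain tuned for the linearized model at each, and the gain applied
# at an intermediate operating condition obtained by linear interpolation.
# All numbers are invented for illustration.

op_points  = [0.0, 10.0]     # values of the scheduling variable (e.g., output level)
ctrl_gains = [2.0, 0.5]      # controller gain tuned for each local linearized model

def scheduled_gain(y):
    """Controller gain, linearly interpolated over the scheduling variable y."""
    y0, y1 = op_points
    k0, k1 = ctrl_gains
    if y <= y0:
        return k0
    if y >= y1:
        return k1
    w = (y - y0) / (y1 - y0)
    return (1 - w) * k0 + w * k1

print(scheduled_gain(0.0), scheduled_gain(5.0), scheduled_gain(10.0))  # 2.0 1.25 0.5
```

The interpolation gives a bumpless blend between the two local designs, but, as noted above, stability and performance of the overall scheduled loop still have to be verified separately.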

2.2.1.1 Linearization

Consider the nonlinear model (2.3), i.e., $\dot{x}(t) = f(x,u,t)$, $y(t) = g(x,u,t)$, where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, $y \in \mathbb{R}^l$. Throughout the book we assume that x, y, u, f, g are continuous and smooth enough so that they are differentiable to any required order. The Taylor series expansion around the point $x^*$ results in the model

\[
\begin{cases}
\dot{x} \approx f(x^*, u^*, t) + A(t)(x - x^*) + B(t)(u - u^*) \\
y \approx g(x^*, u^*, t) + C(t)(x - x^*) + D(t)(u - u^*)
\end{cases}
\]

if valid in the vicinity of $x^*$; see Question 2.1. In this neighborhood we neglect the higher-order terms. The point $x^*$ is often the equilibrium point, which is the solution of $\dot{x} = 0$ evaluated with the nominal input $u = u^*$; see also Remark 2.1. The parameters of the system are given by:

\[
A(t) = \begin{bmatrix}
\dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\
\vdots & & \vdots \\
\dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_n}
\end{bmatrix},\quad
B(t) = \begin{bmatrix}
\dfrac{\partial f_1}{\partial u_1} & \cdots & \dfrac{\partial f_1}{\partial u_m} \\
\vdots & & \vdots \\
\dfrac{\partial f_n}{\partial u_1} & \cdots & \dfrac{\partial f_n}{\partial u_m}
\end{bmatrix},
\]
\[
C(t) = \begin{bmatrix}
\dfrac{\partial g_1}{\partial x_1} & \cdots & \dfrac{\partial g_1}{\partial x_n} \\
\vdots & & \vdots \\
\dfrac{\partial g_l}{\partial x_1} & \cdots & \dfrac{\partial g_l}{\partial x_n}
\end{bmatrix},\quad
D(t) = \begin{bmatrix}
\dfrac{\partial g_1}{\partial u_1} & \cdots & \dfrac{\partial g_1}{\partial u_m} \\
\vdots & & \vdots \\
\dfrac{\partial g_l}{\partial u_1} & \cdots & \dfrac{\partial g_l}{\partial u_m}
\end{bmatrix}
\tag{2.5}
\]


where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{l \times n}$, $D \in \mathbb{R}^{l \times m}$. The above matrices are computed at $x = x^*$ and $u = u^*$. Denoting $\Delta x = x - x^*$ and $\Delta u = u - u^*$, we note that for small $\Delta u$, i.e., $\Delta u \approx 0$, there approximately holds

\[
\begin{cases}
\Delta\dot{x} = A(t)\Delta x + B(t)\Delta u \\
\Delta y = C(t)\Delta x + D(t)\Delta u
\end{cases}
\]

which is the linearized model of the original nonlinear model and is also called the small-signal linear approximation of the original nonlinear equation—see Remark 2.2. If at least one of the matrices A, B, C, D depends on time then the aforementioned model is LTV, and if none of them depends on time then it is LTI; see Examples 2.3 and 2.4.

Question 2.1: Consider the approximation made in the Taylor series expansion. To what extent and under what conditions is it valid? Lest you are misled by some references, let us answer this question. For brevity of notation consider the time-invariant and time-varying systems $\dot{x} = f(x)$, $f(0) = 0$ and $\dot{x} = f(x,t)$, $f(0,t) = 0$, and define $f_r = f(x) - Ax$ or $f_r = f(x,t) - A(t)x$, where r stands for remainder. Let $\|\cdot\|$ denote any vector norm (say the Euclidean norm); see Section 10.12 if you are not familiar with it. Then for time-invariant systems it holds that $\lim_{\|x\| \to 0} \|f_r\|/\|x\| = 0$ and the linearized model is always valid. Nevertheless, for time-varying systems it does not necessarily hold that $\lim_{\|x\| \to 0} \sup_{t \ge 0} \|f_r\|/\|x\| = 0$. If it holds, which means that the system uniformly remains in its equilibrium over all time, then $\dot{x} = A(t)x$ is actually the linearized model of the system. Otherwise, precisely speaking, it is not, and it is said that the system is not linearizable or that the linearized model is not valid. For instance, consider the system $\dot{x} = f(x,t) = -xe^{-t}\cos t + te^{-2t}x^2$, $t \ge 0$. We have $f(0,t) = 0$ and around $x = 0$, $\dot{x} = A(t)x = -xe^{-t}\cos t$. Here $\lim_{\|x\| \to 0} \sup_{t \ge 0} \|f_r\|/\|x\| = 0$ and thus $\dot{x} = -xe^{-t}\cos t$ is actually the linearized model. However, if the system is $\dot{x} = f(x,t) = -xe^{-t}\cos t + tx^2$ then this is no longer the case.
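The uniformity condition of Question 2.1 can also be probed numerically; the following sketch (the time grid and sample values of x are arbitrary choices) evaluates the supremum of $\|f_r\|/\|x\|$ over a finite time grid for the two remainders above:

```python
# Numerically probe the validity condition of Question 2.1:
# does sup_t |f_r(x,t)| / |x| -> 0 as |x| -> 0?
import math

def ratio_sup(fr, x, t_grid):
    """sup over the time grid of |f_r(x,t)| / |x| (scalar state)."""
    return max(abs(fr(x, t)) for t in t_grid) / abs(x)

fr_valid   = lambda x, t: t * math.exp(-2*t) * x**2   # remainder of -x e^-t cos t + t e^-2t x^2
fr_invalid = lambda x, t: t * x**2                    # remainder of -x e^-t cos t + t x^2

t_grid = [0.01 * k for k in range(10001)]             # t in [0, 100]

for x in (1e-1, 1e-2, 1e-3):
    r1 = ratio_sup(fr_valid, x, t_grid)
    r2 = ratio_sup(fr_invalid, x, t_grid)
    print(x, r1, r2)
```

For the first remainder the ratio shrinks proportionally to |x| (its supremum, attained near t = 1/2, is (1/2)e⁻¹·|x|), while for the second it equals T·|x| on the grid [0, T]; since T is unbounded, the supremum over all t ≥ 0 is infinite for every x ≠ 0 and the limit cannot be zero.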
Remark 2.1: The equation $\dot{x} = 0$ may have several solutions. Which ones are acceptable depends on their stability—the stable ones are locally considered. Stability of nonlinear systems is reviewed to a great extent in the sequel of the book on state-space methods; for linear systems the basics are studied in Chapter 3. Here our focus is not the stability, but merely the linearization technique.

Remark 2.2: Suppose the linearized model is valid. It is important to note that the above-mentioned 'small-signal' model can actually be used to approximate the original nonlinear model for 'large signals' around $u = u^*$. We have said 'large' because $u^*$ is not necessarily small, i.e., about zero. In fact, in most actual applications $u^*$ is not small. To this end we approximate the system as

\[
\begin{cases}
\dot{x} = A(t)x + B(t)u - [A(t)x^* + B(t)u^*] \\
y = C(t)x + D(t)u - [C(t)x^* + D(t)u^*]
\end{cases}
\]

and note that the terms in the brackets have known values. In this equation we define $x_l := x$ and $y_l := y$, where the subscript l stands for linear. In fact this equation is also the linearized model of the original nonlinear model (2.3), for which we

System representation

91

define $x_{nl} := x$ and $y_{nl} := y$, where the subscript nl stands for nonlinear. Now, the nearer the input signal u to its nominal value $u^*$, the nearer the signals $x_l, y_l$ to their true values $x_{nl}, y_{nl}$. As the input gets far from its nominal value, the discrepancy between $x_l, y_l$ and $x_{nl}, y_{nl}$ increases, i.e., the linearized model becomes less valid. See Example 2.3.

Example 2.3: Linearize and simulate the system $\dot{x} = -x^2 + u^3$, $t \ge 0$, with the nominal input $u = u^* = 2$.
The equilibrium is found from $\dot{x} = 0$ with $u = u^*$. Thus $x^* = 2\sqrt{2}$. Note that the solution $x^* = -2\sqrt{2}$ results in an unstable system, as you learn in Chapter 3, and hence is discarded. In the linearized model $A(t) = -2x$ and $B(t) = 3u^2$, where both should be computed at $x = x^*$ and $u = u^*$. Thus $A(t) = A = -4\sqrt{2} \approx -5.6569$ and $B(t) = B = 12$. The linearized model is thus LTI. Following Remark 2.2 we denote $\dot{x}_{nl} = -x_{nl}^2 + u^3$ as the original nonlinear equation. The linearized model is $\dot{x}_l = Ax_l + Bu - [Ax^* + Bu^*]$, or $\dot{x}_l = Ax_l + Bu - [-16 + 24] = Ax_l + Bu - 8$. Now we simulate the systems for the input signal u given in the left panel of Fig. 2.4, which is not a small signal. The states $x_{nl}$ and $x_l$ are provided in the right panel of the same figure as the solid and dashed curves, respectively. We note that the nearer the input signal u to its nominal value $u^*$, the nearer $x_l$ to $x_{nl}$, i.e., the more valid the linearized model.

Figure 2.4 Left: Input signal. Right: Nonlinear and linearized states.
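A minimal numerical check of Example 2.3 can be run even without the time-varying input of Fig. 2.4: with the constant input u ≡ u* = 2 and an arbitrary initial state (chosen here as x(0) = 2, an assumption for illustration), both the nonlinear model and the linearized one should settle at the equilibrium x* = 2√2:

```python
# Example 2.3, integrated with fixed-step RK4 for u(t) = u* = 2:
# nonlinear model:  dx/dt   = -x^2 + u^3
# linearized model: dx_l/dt = A x_l + B u - 8, with A = -4*sqrt(2), B = 12
import math

A, B, u = -4 * math.sqrt(2), 12.0, 2.0
f_nl = lambda x: -x**2 + u**3
f_l  = lambda x: A*x + B*u - 8.0

def rk4(f, x, h, n):
    for _ in range(n):
        k1 = f(x); k2 = f(x + 0.5*h*k1); k3 = f(x + 0.5*h*k2); k4 = f(x + h*k3)
        x += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return x

x_nl = rk4(f_nl, 2.0, 1e-3, 5000)   # integrate to t = 5 from x(0) = 2
x_l  = rk4(f_l,  2.0, 1e-3, 5000)

print(x_nl, x_l, 2 * math.sqrt(2))  # all three should agree closely
```

Note that the linear model's equilibrium, A·x + 24 − 8 = 0, is again x = 16/(4√2) = 2√2, so for this constant input the two models agree in steady state; the discrepancy of Remark 2.2 appears only as u moves away from u*.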

Remark 2.3: Is there any tool to measure or quantify the degree of nonlinearity? This interesting question, which deserves investigation, is among the issues which have not yet been addressed explicitly by the control community. However, in the mathematics literature there exist some bounds regarding the approximation error of a Taylor series expansion when only a finite number of terms are considered in the summation. The interested reader is referred to the pertinent literature. See also item 10 of further readings.


Remark 2.4: With regard to finding the exact solution of $\dot{x} = f(x,t)$, note that if we just write down a set of equations which cross our mind it will most probably not have a directly and easily computable compact-form analytical solution (with limited terms); in fact, it may not have any solution at all![2] The methods to solve a set of differential equations fall into two main categories. The first comprises methods to find an approximate analytical solution. There are numerous methods, like polynomial ones, in this category and the research is quite actively ongoing. These methods are dealt with in graduate studies in mathematics and are outside the scope of this book. The second approach is to solve them numerically. The core of such numerical methods is taught in undergraduate courses on numerical computations even in engineering faculties. The most famous are probably the Runge-Kutta methods, which have been implemented in MATLAB as well; see Appendix C. The literature on this category is of course much richer than this simple method, and here too the research is earnestly ongoing. In order to admit an analytical solution (with limited terms) the set of equations must be carefully designed. See Exercises 2.9, 2.10, and Appendix A.

Remark 2.5: The equilibrium points of a time-dependent system described by (2.3) may be either time dependent or time independent. Example 2.4 is an instance of the former. As for the latter, consider the system $\dot{x} = -x^2(t+1) + 2u^3(t+1)$, $t \ge 0$, $u^* = 1$.

Example 2.4: Linearize the system $\dot{x} = -x^2/(t+1) + te^{-t} + 2u^3 - 6u$, $t \ge 0$, with the nominal input $u = u^* = -1$.
To find the equilibrium we solve $\dot{x} = 0$ with $u = u^*$. Thus $x^* = \pm\sqrt{(t+1)(te^{-t}+4)}$. The positive answer results in a stable system. In the linearized model $A(t) = -2x/(t+1)$ and $B(t) = 6u^2 - 6$, where both are computed at $x = x^*$ and $u = u^*$. Hence $A(t) = -2x^*/(t+1) = -2\sqrt{(te^{-t}+4)/(t+1)}$ and $B(t) = 6u^{*2} - 6 = B = 0$. As is observed, at least one of the system matrices depends on time and thus the linearized model, if valid, is $\Delta\dot{x} = A(t)\Delta x + B(t)\Delta u$ and is LTV. The linearized model is stable.[3] Note that the linearized model is valid. At this stage we cannot draw any conclusions about the original system.

[2] The study of the existence and uniqueness of the solution is an important topic in differential equations. This is not the focus of this book; however, we do briefly discuss some relevant results in Appendix A. Regarding this example as well as the Worked-Out Problems 2.4–2.8 we pose this question for the reader: Does the system admit any other solution? See also Exercise 2.6.
[3] Note that in this model $\lambda(t) = A(t)$ is the time-varying eigenvalue of the system. Stability of LTV systems is outside the scope of this book. In fact, stability of LTV systems in its full generality is an open problem, although many interesting results are available. Some general information is provided in Section 3.13 of Chapter 3. For this particular problem it is easy to show, e.g., using the Lyapunov function $V = x^2$, that the given system is stable since $A(t) < 0$ for all $t \ge 0$. You will study the details of the Lyapunov stability method in the sequel of the book on state-space systems.


Example 2.5: Linearize the system

\[
\begin{cases}
\dot{x}_1(t) = -x_1 + x_1^2 x_2^2 t e^{-2t} \\
\dot{x}_2(t) = x_2 + (x_1^2 + x_2^2)\sin(t + \pi/3)
\end{cases}
\qquad t \ge 0,
\]

around the origin.
We first note that the origin makes $\dot{x} = 0$ and is thus an equilibrium point. The linearized model, if valid, is given by $\dot{x} = A(t)x$ where

\[
A(t) = \begin{bmatrix}
-1 + 2x_1 x_2^2 t e^{-2t} & 2x_1^2 x_2 t e^{-2t} \\
2x_1 \sin(t + \pi/3) & 1 + 2x_2 \sin(t + \pi/3)
\end{bmatrix}_{x=0}
= \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}.
\]

We note that

\[
f_r = \begin{bmatrix} x_1^2 x_2^2 t e^{-2t} \\ (x_1^2 + x_2^2)\sin(t + \pi/3) \end{bmatrix}
\]

and $\lim_{\|x\| \to 0} \sup_{t \ge 0} \|f_r\|/\|x\| = 0$. Hence the linearized model is valid. In a future course you will learn that because the linearized model is unstable the original system is, too.

Example 2.6: A nonlinear system is described by

\[
\begin{cases}
\dot{x}_1(t) = x_1(t)x_2(t) + x_1^2(t) + x_1(t)u_1(t) \\
\dot{x}_2(t) = x_1(t)x_2^2(t) + x_2(t)u_1^2(t) + u_2^2(t)
\end{cases}
\]

Find the linearized model when the input is $u^* = [0 \;\; 1]^T$.
Solving $\dot{x} = 0$ we get

\[
\begin{cases}
x_1 x_2 + x_1^2 = 0 \\
x_1 x_2^2 + 1 = 0
\end{cases}
\]

From the first equation we find either $x_1 = 0$ or $x_1 = -x_2$. Considering the second equation we conclude that $x_1 = 0$ must be discarded. The choice $x_1 = -x_2$ results in the equilibrium point $x_1 = -x_2 = -1$, i.e., $x^* = [-1 \;\; 1]^T$. The linearized model, if valid, is $\Delta\dot{x} = A(t)\Delta x + B(t)\Delta u$ where

\[
A(t) = \begin{bmatrix}
x_2 + 2x_1 + u_1 & x_1 \\
x_2^2 & 2x_1 x_2 + u_1^2
\end{bmatrix}_{(x^*,u^*)}
= \begin{bmatrix} -1 & -1 \\ 1 & -2 \end{bmatrix},\quad
B(t) = \begin{bmatrix}
x_1 & 0 \\
2x_2 u_1 & 2u_2
\end{bmatrix}_{(x^*,u^*)}
= \begin{bmatrix} -1 & 0 \\ 0 & 2 \end{bmatrix}.
\]

The linearized model is valid.

Remark 2.6: For your knowledge we mention a basic result for the formulations in Question 2.1. Consider a time-invariant or linearizable time-varying nonlinear system with bounded $A$, $A(t)$. If the linearized model is exponentially stable then so is the original nonlinear system. Whether the answer is global or local depends on the system. Some converse and instability results are also known. In this direction other tools include the so-called Lyapunov exponents, characteristic exponents, Perron effects, strange attractors, etc. We shall review them in the sequel of the book on state-space methods.

Question 2.2: This question has some parts; note that the answers are technical. (i) Suppose we can solve the system (2.3) for the exact solution. If we use it as $x^*$ in (2.5), do we get the linearized model around the solution? (ii) If not, how can we linearize a system about its solution trajectory (if this makes sense)? (iii) With


regard to Question 2.1, what is the role of $\sup_{t \ge 0}$? (iv) Provide the counterparts of the arguments in Question 2.1 for the systems (a) $\dot{x} = f(x)$, $f(x \ne 0) = 0$, (b) $\dot{x} = f(x,t)$, $f(x \ne 0, t) = 0$, (c) $\dot{x} = f(x,u)$, $f(x \ne 0, u \ne 0) = 0$, and (d) $\dot{x} = f(x,u,t)$, $f(x \ne 0, u \ne 0, t) = 0$. (v) Consider a system whose linearization is stable but invalid. How close is its response to the response of the original system? (vi) What if the equation $\dot{x} = 0$ does not admit any bounded and/or real solution?
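The hand computation of Example 2.6 can be cross-checked by a finite-difference approximation of the Jacobians in (2.5); a minimal sketch (the step size eps is an arbitrary choice):

```python
# Finite-difference check of the Jacobians of Example 2.6 at x* = (-1, 1), u* = (0, 1).

def f(x, u):
    x1, x2 = x
    u1, u2 = u
    return [x1*x2 + x1**2 + x1*u1,
            x1*x2**2 + x2*u1**2 + u2**2]

def jacobian(fun, p, eps=1e-6):
    """Central-difference Jacobian of fun with respect to the list p."""
    n = len(fun(p))
    J = [[0.0] * len(p) for _ in range(n)]
    for j in range(len(p)):
        hi, lo = list(p), list(p)
        hi[j] += eps
        lo[j] -= eps
        fhi, flo = fun(hi), fun(lo)
        for i in range(n):
            J[i][j] = (fhi[i] - flo[i]) / (2 * eps)
    return J

xs, us = [-1.0, 1.0], [0.0, 1.0]
assert max(abs(v) for v in f(xs, us)) < 1e-12   # (x*, u*) is indeed an equilibrium

A = jacobian(lambda x: f(x, us), xs)            # expect [[-1, -1], [1, -2]]
B = jacobian(lambda u: f(xs, u), us)            # expect [[-1, 0], [0, 2]]
print(A, B)
```

Such a numerical Jacobian is a convenient sanity check on hand-derived linearizations, though it says nothing about the validity question discussed above, which concerns the remainder term, not the Jacobian itself.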

2.2.1.2 Number of inputs and outputs

Note that in general the system has m inputs and l outputs. Such systems are called multi-input multi-output (MIMO), in which $m \ge l$ in general. Otherwise there are not enough degrees of freedom to control the system. In scientific terms $m \ge l$ is a necessary condition for the generic system to be 'functionally controllable'. This issue is rigorously handled in volume 2 of this book focusing on state-space models. Here it will suffice to provide a conceptual meaning for $m < l$. Is it possible to hit two targets with one arrow? It goes without saying that in general this is impossible. It is possible if the targets depend on each other in a 'special' way. Other possibilities are multi-input single-output (MISO, $m > 1$, $l = 1$), single-input multi-output (SIMO, $m = 1$, $l > 1$), and single-input single-output (SISO, $m = l = 1$). It should thus be clear that SIMO systems are very special and few (unlike the claim of some references). In this course, inside the class of LTI systems, only SISO systems are considered. It is noteworthy that for such systems the structure of A, B, C, D is as follows:

\[
A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix},\quad
B = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix},\quad
C = \begin{bmatrix} c_1 & \cdots & c_n \end{bmatrix},\quad
D = d.
\tag{2.6}
\]

This part is wrapped up by mentioning that the concept of the 'state' of a system was further developed by L.A. Zadeh in a series of papers from 1962 to 1968. State-space methods began to be developed in the 1950s and 1960s by many researchers around the world, as you will learn in the sequel of the book on state-space methods. The first major textbook on state-space methods was written in 1963 by L.A. Zadeh and C.A. Desoer, which marked the advent of this framework for the analysis and synthesis of control systems.

Remark 2.7: Equations of the form $\dot{x}(t) = f(x,u,t)$, $y(t) = g(x,u,t)$ are called ordinary differential equations (ODEs). Not all systems are described by ODEs. Some systems are described by partial differential equations (PDEs), in which the partial derivatives of the unknowns appear. Whilst this book is not focused on the study of such systems, it is good to have some examples in mind. The equations of water flow in a river or heat transfer in a body are described by PDEs. See also Further Readings.


Remark 2.8: The definition of output is important in the sense that if it is not chosen appropriately then control is not effective, even if enough information is present in the output so that it is both observable and controllable. For instance it is shown that if the output of the inverted pendulum system, see Appendix B, is defined as the position of the cart, then the system is effectively impossible to control, see Chapter 10, although it is both controllable and observable. However, if the output is defined as the angle of the pendulum then it can be easily controlled.

2.2.2 Frequency domain

This method originated in the 1930s and has been developed since then. It lost its popularity with the advent of the state-space method, but regained its status in the 1980s. Today, a combination of both is sometimes used. As stated before, frequency-domain methods constitute the core of this book. These methods hinge on the Laplace transform, which is reviewed in Appendix A. Applying the Laplace transform to the MIMO LTI system (2.4) we obtain

\[
\begin{cases}
sX(s) - x(0) = AX(s) + BU(s) \\
Y(s) = CX(s) + DU(s)
\end{cases}
\tag{2.7}
\]

Assuming $x(0) = 0$, $X(s) = (sI - A)^{-1}BU(s)$, and thus $\frac{Y(s)}{U(s)} = C(sI - A)^{-1}B + D$. This is called the transfer function of the system. It must be noted that (i) no initial condition is considered, and (ii) the transfer function depends on the nature of the system (i.e., the A, B, C, D matrices), not on the inputs or the outputs. A general transfer function is usually denoted by G, i.e., $G(s) = \frac{Y(s)}{U(s)}$. Thus, if the input is the impulse function (whose Laplace transform is one), we conclude that the transfer function of a system is its impulse response with zero initial conditions.[4] What structure does a transfer function have? For the SISO systems considered in this book, G is a rational transfer function of the form

\[
G(s) = \frac{b_m s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}.
\tag{2.8}
\]
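The passage from (A, B, C, D) to the rational form (2.8) is mechanical and can be automated, e.g., with scipy.signal.ss2tf; a small sketch with an illustrative two-state system (the numbers are made up for illustration):

```python
# Transfer function from a state-space model via scipy.signal.ss2tf.
# The example system is arbitrary: two states, one input, one output.
import numpy as np
from scipy import signal

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])

num, den = signal.ss2tf(A, B, C, D)
# By hand: C(sI-A)^{-1}B + D = 1/(s+1) + 1, written over the characteristic
# polynomial of A:  G(s) = (s^2 + 5s + 6) / (s^2 + 4s + 3).
# Note that ss2tf does not cancel common factors between num and den.
print(num, den)
```

The denominator returned is the characteristic polynomial of A, which is why, as discussed in Remark 2.9, the number of states bounds the denominator order from above: pole-zero cancellations (here the factor s + 3) are not performed automatically.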

Remark 2.9: It must be noted that it is customary to denote the orders of the numerator and denominator by m and n, respectively. These symbols have been previously used for the number of inputs and the number of states, respectively. Is there any relation between these two m's and these two n's? The answer is that there is no relation

[4] That is, the impulse response and the transfer function contain the same information about the system—all the information about the system. It is noteworthy that in the old literature g(t) is also called the weighting function of the system. This terminology is rather ambiguous and unfortunate, the reason being that the same term is used in the optimal control and optimization related literature, where a term in an optimization cost function is weighted.


between these two m’s. For instance for SISO systems the number of inputs is m 5 1 whereas the order of the numerator may be 0, 1, or larger. However, there does exist a relation between these two n’s. The number of states is greater than or equal to the denominator order. More explanation will follow in the next Section and in Chapter 3. Remark 2.10: It must be noted that D 5 0 (i.e., there is no feedthrough, see Fig. 2.3) iff m , n. In this case the system is said to be strictly proper. On the other hand, D 6¼ 0 (i.e., there is a feedthrough) iff m 5 n. In this case the system is called proper. Examples of both cases are given in the worked-out problems.5 The difference n 2 m is called the relative degree of the system. Thus for strictly proper systems relative degree is positive and for proper systems it is zero. If the relative degree is negative the system is called improper. Such systems are not causal and thus cannot be implemented, in other words they do not exist in the real world— they exist only on paper! (Note that a state-space model can be proposed for them e.g., by considering the matrix D as a function of the Laplace variable s. Question: Can we do this for other matrices of the model?) Remark 2.11: The transfer function thus relates the input and output of the system in the following manner, which is an algebraic equation in the Laplace variable s: sn YðsÞ 1 an21 sn21 YðsÞ 1 ? 1 a1 sYðsÞ 1 a0 YðsÞ 5 bm sm UðsÞ 1 bm21 sm21 UðsÞ 1 ? 1 b1 sUðsÞ 1 b0 UðsÞ:

(2.9)

The time-domain version of this algebraic equation is the following differential equation in the time variable t: _ 1 a0 yðtÞ yðnÞ ðtÞ 1 an21 yðn21Þ ðtÞ 1 ? 1 a1 yðtÞ _ 1 b0 uðtÞ: 5 bm uðmÞ ðtÞ 1 bm21 uðm21Þ ðtÞ 1 ? 1 b1 uðtÞ

(2.10)
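The classification of Remark 2.10 can be checked computationally. The sketch below (Python with SciPy; the two transfer functions are made-up illustrations, not from the text) realizes a strictly proper and a proper system and inspects the feedthrough matrix D:

```python
from scipy import signal

# Strictly proper: m = 1 < n = 2, relative degree n - m = 1.
A, B, C, D = signal.tf2ss([1, 2], [1, 3, 2])         # G(s) = (s+2)/(s^2+3s+2)
# Proper: m = n = 2, relative degree 0, nonzero feedthrough.
A2, B2, C2, D2 = signal.tf2ss([2, 1, 3], [1, 3, 2])  # G(s) = (2s^2+s+3)/(s^2+3s+2)

print(D)   # [[0.]] -> no feedthrough, strictly proper
print(D2)  # [[2.]] -> feedthrough equals the limit of G(s) as s -> infinity
```

The feedthrough of the proper system equals the high-frequency gain, here the ratio of the leading coefficients, 2.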

2.2.2.1 Finding the output

The initial condition x(0) is not reflected in the transfer function but can be considered in addition to it. More precisely there holds

$$X(s) = (sI-A)^{-1}x(0) + (sI-A)^{-1}BU(s), \qquad Y(s) = C(sI-A)^{-1}x(0) + \left[C(sI-A)^{-1}B + D\right]U(s). \quad (2.11)$$

The output in the time domain is thus obtained by computing the inverse Laplace transform of Y(s).

⁵ A mistake in part of the literature is the claim that all actual systems are strictly proper. Examples of actual proper systems are versatile; we see some in this book.

There are several ways to accomplish this. The direct one is to

expand Y(s) into partial fractions whose inverse Laplace transforms are directly available. An example is provided below.

Example 2.7: Find the time response of the system described by

$$A = \begin{bmatrix} -1 & 2 \\ 0 & -3 \end{bmatrix},\quad B = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix},\quad C = \begin{bmatrix} 1 & 0 \end{bmatrix},\quad D = \begin{bmatrix} 1 & -2 \end{bmatrix},\quad x(0) = \begin{bmatrix} 1 \\ -1 \end{bmatrix},\quad u(t) = \begin{bmatrix} \mathrm{step}(t) \\ \mathrm{ramp}(t) \end{bmatrix}.$$

The system has 1 output and 2 inputs. Its output comprises three terms: the term due to the initial conditions, the term due to the first input, and the term due to the second input. These are shown in the following expansion. We have

$$Y(s) = C(sI-A)^{-1}x(0) + \left[C(sI-A)^{-1}B + D\right]U(s), \qquad (sI-A)^{-1} = \frac{1}{\Delta}\begin{bmatrix} s+3 & 2 \\ 0 & s+1 \end{bmatrix},$$

where Δ = (s + 1)(s + 3). Therefore

$$Y(s) = \frac{1}{\Delta}\begin{bmatrix} s+3 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ -1 \end{bmatrix} + \left(\frac{1}{\Delta}\begin{bmatrix} s+3 & 2 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 1 & -2 \end{bmatrix}\right)\begin{bmatrix} 1/s \\ 1/s^2 \end{bmatrix} = Y_{x(0)} + Y_{U_1} + Y_{U_2},$$

where Y_{x(0)} = (s+1)/Δ = 1/(s+3), Y_{U1} = (s+3)/(sΔ) + 1/s, and Y_{U2} = (2s+8)/(s²Δ) − 2/s² are due to the initial conditions, the first input, and the second input, respectively. Now we expand the formula into partial fractions and obtain

$$Y(s) = \frac{1}{s+3} + \left(\frac{2}{s} - \frac{1}{s+1}\right) + \left(\frac{2/3}{s^2} - \frac{26/9}{s} + \frac{3}{s+1} - \frac{1/9}{s+3}\right).$$

Consequently, y(t) = y_{x(0)}(t) + y_{u1}(t) + y_{u2}(t), where y_{x(0)} = e^{−3t} (t ≥ 0), y_{u1} = 2 − e^{−t} (t ≥ 0), and y_{u2} = (2/3)t − 26/9 + 3e^{−t} − (1/9)e^{−3t} (t ≥ 0). It is observed that the pole at s = −3 shows itself in y_{x(0)}, but the pole at s = −1 does not. On the other hand, y_{u1} consists of terms due to u1 and the pole at s = −1. Finally, y_{u2} consists of terms due to u2 (i.e., (2/3)t − 26/9) and the poles at s = −1 and s = −3. Questions like why s = −1 has no effect in y_{x(0)} and why s = −3 does not show up in y_{u1} are dealt with and answered in detail in the second volume of this book, which is designed for the next undergraduate course in the field of control: control systems in the state-space domain.

In total the output is

$$Y(s) = \frac{2/3}{s^2} - \frac{8/9}{s} + \frac{2}{s+1} + \frac{8/9}{s+3}$$

and y(t) = (2/3)t − 8/9 + 2e^{−t} + (8/9)e^{−3t} (t ≥ 0). It should be noted that this system will be dealt with again in Chapter 3, where the stability of the system is studied.

Remark 2.12: As stated before, some questions might now arise for the reader, e.g., which poles (the technical term being modes) appear in which states and/or output? How about zeros, if any? Or, how can we write a state-space form for a given transfer function (the terminology being realization)? These questions and the like are answered in detail in Part 2 of this book on state-space methods. However, in Chapter 10 they are briefly addressed as well.

Remark 2.13: An alternative method is to find y(t) directly from (2.11) by the inverse Laplace transform. In Part 2 of the book it will be shown that y(t) is given by

$$y(t) = Ce^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} Ce^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t).$$

In that course you will learn how to compute e^{At}. Here we do not pursue this method further.

Remark 2.14: For SIMO systems the transfer function has the form

$$G = \begin{bmatrix} G_{11} \\ \vdots \\ G_{l1} \end{bmatrix}.$$

It is worth restating that these systems are very special and few. For MISO systems the transfer function is of the form G = [G₁₁ ⋯ G₁ₘ], whereas for MIMO systems the transfer function has the structure

$$G = \begin{bmatrix} G_{11} & \cdots & G_{1m} \\ \vdots & & \vdots \\ G_{l1} & \cdots & G_{lm} \end{bmatrix}.$$

Finally, it is noted that the constituent elements of SIMO, MISO, and MIMO transfer functions, i.e., the Gᵢⱼ's, are all of the form (2.8) given before. In particular, for a SISO system in state-space form we have A ∈ ℝⁿˣⁿ, B ∈ ℝⁿˣ¹, C ∈ ℝ¹ˣⁿ, D ∈ ℝ. Thus it is in accordance with the usual consensus in mathematics (that a vector is defined as a column vector) to use b instead of B and cᵀ instead of C. In fact, for this reason some authors use Cᵀ instead of C for the multi-output case. However, in the control and engineering literature the commonest notation is to use C (not Cᵀ), for the nicety of the formulation. It is also good to know that sometimes for special emphasis we prefer to use the lowercase letters b, c, d. This is seen, e.g., in some proofs in the state-space framework where a rank-one matrix of the form bc appears. We do not have it in this book, but in the sequel of the book on state-space methods.


2.2.3 Zero, pole, and minimality

Let G ≔ num(s)/den(s). Then the zeros of the numerator are called the zeros of G. That is, if num(s) = 0, then z ≔ s is called a zero of G. Likewise, the zeros of the denominator are called the poles of G. In other words, if den(s) = 0 then p ≔ s is named a pole of G. It must be noted that the poles of the transfer function form a subset of the eigenvalues of A. That is, {s | den(s) = 0} ⊆ {s | det(sI − A) = 0}. The cardinality of the set {s | den(s) = 0}, which is the order of the equation den(s) = 0, is the number of poles of the transfer function. The cardinality of the set {s | det(sI − A) = 0}, which is the order of the equation⁶ det(sI − A) = 0, is exactly the number of eigenvalues⁷ of A, which is the number of states of the system. If these sets are identical, then the system is called (completely) controllable and observable, and the corresponding state-space representation is a minimal representation of that system. These issues are also studied in detail in volume 2 of this book. Moreover, it is worth noting that for MISO, SIMO, and MIMO systems, poles and zeros need special treatment.

Remark 2.15: A restatement of the above definition is that the transfer function loses its rank at a zero: its rank reduces from 1 to 0, i.e., zᵢ is a zero of G(s) iff G(zᵢ) = 0. Zeros have no effect on the stability of the open-loop system (to be precisely addressed in Chapter 3) but do affect the transient response of the system, either open-loop or closed-loop. On the other hand, poles affect both the transient response and stability. If the system has an unstable pole then its output grows unbounded even if its input is bounded, and the system is said to be unstable. Note that in this case the state corresponding to the unstable pole grows unboundedly and causes the output to grow unboundedly.

Example 2.8: Consider the system described by the transfer function G(s) = (s + 1)/((s + 2)(s − 3)). Its normal (i.e., conceptually meaning usual) rank is 1, which reduces to 0 at s = −1. Now consider the bounded input u(t) = step(t). Thus the output is computed as the inverse Laplace transform of G(s)U(s), namely y(t) = −1/6 − (1/10)e^{−2t} + (4/15)e^{3t}, t ≥ 0. It is observed that the term e^{3t} (which is due to the pole at p = 3) grows unboundedly; thus the output grows unboundedly and the system is said to be unstable. More details will be given in the next chapter on stability.

⁶ This equation is called the characteristic equation of the system.
⁷ It is good to know that the word 'eigen' is a German word and means 'characteristic'. The words eigenvalue and eigenvector are translations of the German words eigenwert and eigenvektor, respectively. Probably the words were first used by David Hilbert (1862–1943) in 1904. He is one of the most influential mathematicians of all time, who made significant contributions to invariant theory, the axiomatization of geometry, Hilbert spaces, etc.


Question 2.3: What is the conceptual meaning, if any, of poles and zeros? The literal meaning of the term zero may tempt one to guess that, “A system has a zero when its dynamics is such that output is zero even if the input and the states are not identically zero.” Discuss the correctness of this guess and provide an example for it, if possible. What is the correct counterpart of this statement, if existent, about a ‘stable’ pole?

Example 2.9: What is the effect of feedback, i.e., closed-loop control, on the zeros and poles of a system? Consider the controller C(s) = N_c/D_c and the plant P(s) = N_p/D_p in a negative unity feedback structure. The open-loop zeros and poles of the system are the roots of N_p = 0 and D_p = 0, respectively. When the feedback is present the closed-loop system is N_cN_p/(D_cD_p + N_cN_p). Thus the closed-loop zeros and poles of the system are the roots of N_cN_p = 0 and D_cD_p + N_cN_p = 0, respectively. That is, the closed-loop zeros are the union of the open-loop plant zeros and the controller zeros (if they are not canceled). On the other hand, the closed-loop poles have no relation with the open-loop plant poles and controller poles. They are changed.⁸

Question 2.4: What is the effect of open-loop control on the zeros and poles of the system?

Question 2.5: What is the effect of closed-loop control on the zeros and poles of the system if the controller is used in the feedback loop? How about the 2-DOF control structure?
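The pole/zero bookkeeping of Example 2.9 is easy to reproduce numerically. In the sketch below (Python/NumPy) the plant and controller are made-up illustrations; the closed-loop zeros coincide with the open-loop ones while the poles move:

```python
import numpy as np

# Hypothetical plant P = Np/Dp and controller C = Nc/Dc in negative unity feedback.
Np, Dp = np.poly1d([1.0]), np.poly1d([1, 1, 0])   # P(s) = 1/(s^2 + s)
Nc, Dc = np.poly1d([1, 2]), np.poly1d([1, 5])     # C(s) = (s+2)/(s+5)

num_cl = Nc * Np                    # closed-loop numerator Nc*Np
den_cl = Dc * Dp + Nc * Np          # closed-loop denominator Dc*Dp + Nc*Np

print(num_cl.roots)   # [-2.] : the open-loop zeros survive in closed loop
print(den_cl.roots)   # closed-loop poles differ from the roots of Dc*Dp = {0, -1, -5}
```

Here den_cl = s³ + 6s² + 6s + 2, whose roots are not among the open-loop poles.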

⁸ Of course it is possible that some open-loop and closed-loop poles are the same. Provide an example!

2.3 Basic examples of modeling

Figure 2.5 Constituents of a control system.

Modeling the constituents of a control system, see Fig. 2.5, is studied in this part. As we are concerned with the control side of this issue, the first examples to follow are restricted to simple plants. Examples include simple electrical, mechanical,


liquid, thermal, hydraulic, chemical, structural, biological, economics, ecological, societal, physics, and time-delay systems, for which specialized knowledge of the respective fields is not required. Delay, which is a realistic, ubiquitous phenomenon in many systems, is treated next. Sensors and amplifiers are also considered. The other constituents, i.e., actuators and some applied gadgets such as encoders, transducers, feedback elements, and opto-couplers, are studied in books on industrial control; they are outside the scope of this book. Finally, the remaining constituent, i.e., the controller, is what we design in Chapters 3, 4, 5, 9, and 10. The simple analog implementation of basic controllers via electronic elements is also considered in this book, in Chapter 9. The reader is encouraged to see Appendix B for an introduction to dynamics and the two equivalence relations between mechanical and electrical systems: (i) the current–force equivalence, and (ii) the voltage–force equivalence. It should be noted that these namings are traditionally used as given. Precisely speaking, they are (i) current–integral_force or derivative_current–force, and (ii) voltage–integral_force or derivative_voltage–force.

2.3.1 Electrical system as the plant

A simple electrical plant is discussed in the ensuing example.

Example 2.10: Find the transfer function between the input and output of the circuit in Fig. 2.6.

Figure 2.6 A simple electrical system.

It is easy to verify that application of Kirchhoff's voltage law results in

$$\frac{V_o(s)}{V_i(s)} = \frac{R_1 + \dfrac{1}{C_1s}}{\dfrac{R_2}{R_2C_2s + 1} + R_1 + \dfrac{1}{C_1s}}.$$

This system is proper and its relative degree is zero.

Electrical and electronic systems are pervasively used in present technology. Every device that works with electrical power has some electrical parts, which may be quite complicated (like a satellite receiver, television, mobile phone, digital camera, etc.) or rather moderate (like a door opener). Interested readers from other disciplines can consult the literature on electrical engineering, e.g., Razavi (2014).
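As a numerical check of Example 2.10, the sketch below evaluates the transfer function on the imaginary axis for assumed component values (not from the text). The gain tends to one at both frequency extremes, consistent with a proper system of relative degree zero:

```python
import numpy as np

# Assumed component values (illustrative only).
R1, R2, C1, C2 = 1e3, 1e4, 1e-6, 1e-8

# Clearing fractions in Vo/Vi = (R1 + 1/(C1 s)) / (R2/(R2 C2 s + 1) + R1 + 1/(C1 s)):
num = np.polymul([R1*C1, 1], [R2*C2, 1])   # (R1 C1 s + 1)(R2 C2 s + 1)
den = np.polyadd([R2*C1, 0], num)          # R2 C1 s + numerator

G = lambda s: np.polyval(num, s) / np.polyval(den, s)
print(abs(G(1e-3j)))   # ~1 at DC (C1 blocks the output branch current)
print(abs(G(1e9j)))    # ~1 at high frequency (relative degree zero)
print(abs(G(1e3j)))    # < 1 in between: the network attenuates mid frequencies
```

The equal leading and trailing coefficients of num and den confirm unity gain at both ends of the frequency axis.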


2.3.2 Mechanical system as the plant

A simple mechanical system is considered in the sequel.

Example 2.11: Find a model of the system of Fig. 2.7 in both the frequency and time domains, i.e., a transfer function and a state-space representation.

Figure 2.7 Mass-dashpot-spring: a configuration.

For this system Newton's second law gives f − bż − kz = mz̈. Defining y = z as the output of the system, taking the Laplace transform of both sides, and assuming zero initial conditions, the transfer function is easily obtained as

$$G(s) = \frac{Y(s)}{F(s)} = \frac{1}{ms^2 + bs + k}.$$

To write a state-space model for the system, define x₁ = z and x₂ = ż; the state equations of the system are

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -\frac{k}{m}x_1 - \frac{b}{m}x_2 + \frac{1}{m}f,$$

which can be reformulated as

$$\dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & -b/m \end{bmatrix}x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix}f,$$

in which x = [x₁ x₂]ᵀ is the state vector of the system and f is its input. With y = z as the output of the system, the output equation of the system is given by y = [1 0]x. From these equations the transfer function can be computed via G(s) = Y(s)/F(s) = C(sI − A)⁻¹B = 1/(ms² + bs + k). The system is strictly proper and its relative degree is two. The voltage–force and current–force (or precisely speaking, voltage–integral_force and current–integral_force) equivalents of this system are given in Fig. 2.8, with the governing equations f = v = Lż + Rz + (1/C)∫z dt and f = i = Cż + (1/R)z + (1/L)∫z dt, respectively.

Figure 2.8 The electrical equivalents of the configuration in Fig. 2.7.

Now let us find the unforced dynamics of the system, i.e., f = 0, but with nonzero initial conditions. In this case mz̈ + bż + kz = 0. Hence m(s²Y(s) − sy(0) − ẏ(0)) + b(sY(s) − y(0)) + kY(s) = 0 and

$$Y(s) = \frac{(ms + b)y(0) + m\dot{y}(0)}{ms^2 + bs + k}.$$

With the force restored, the complete output is Y(s) = [(ms + b)y(0) + mẏ(0)]/(ms² + bs + k) + F(s)/(ms² + bs + k).

Mechanical systems and structures—and more generally, systems working with mechanical engineering concepts like thermodynamics—are pervasively used in today’s life. Examples include the dentists’ drill, refrigerator, bicycle, car, helicopter, airplane, space shuttle, satellite, etc. Some are purely mechanical and some have electrical parts as well. With regard to the bicycle, the reader—especially of mechanical engineering—is referred to items 15-17 of further readings.
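A quick simulation of Example 2.11 (Python/SciPy, with assumed values m = 1 kg, b = 2 N·s/m, k = 4 N/m, not from the text) illustrates the strictly proper transfer function and the steady-state deflection f/k predicted by the DC gain:

```python
import numpy as np
from scipy import signal

# Assumed numeric values for the mass-dashpot-spring system.
m, b, k = 1.0, 2.0, 4.0
G = signal.TransferFunction([1], [m, b, k])   # G(s) = 1/(m s^2 + b s + k)

t = np.linspace(0, 20, 2001)
t, y = signal.step(G, T=t)
print(y[-1])   # ~ 1/k = 0.25: a unit step force deflects the spring by f/k at rest
```

The final value matches the DC gain G(0) = 1/k, as the final value theorem predicts.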

2.3.3 Liquid system as the plant

A simple liquid system is modeled in the subsequent example.

Example 2.12: An example of a liquid-level system of connected tanks is given in Fig. 2.9. Let A denote a tank's area, h its content level, R the flow resistance at an orifice, and q the rate of flow of liquid. For simplicity assume that in each tank and at each orifice the following relations hold, respectively: input rate of flow − output rate of flow = Aḣ, and q = h/R. Find a model for the system.

Figure 2.9 Liquid-level system in connected tanks.

Let qᵢ denote the input rate of flow. We write down the basic equations for all tanks as follows:

$$q_i - q_1 = q_i - (h_1 - h_2)/R_1 = A_1\dot{h}_1,$$
$$q_1 - q_2 = (h_1 - h_2)/R_1 - (h_2 - h_3)/R_2 = A_2\dot{h}_2,$$
$$q_2 - q_3 = (h_2 - h_3)/R_2 - h_3/R_3 = A_3\dot{h}_3.$$

Defining x = [x₁ x₂ x₃]ᵀ = [h₁ h₂ h₃]ᵀ as the state vector, the state equations of the system will be

$$\dot{x} = \begin{bmatrix} -\dfrac{1}{A_1R_1} & \dfrac{1}{A_1R_1} & 0 \\ \dfrac{1}{A_2R_1} & -\dfrac{1}{A_2R_1} - \dfrac{1}{A_2R_2} & \dfrac{1}{A_2R_2} \\ 0 & \dfrac{1}{A_3R_2} & -\dfrac{1}{A_3R_2} - \dfrac{1}{A_3R_3} \end{bmatrix}x + \begin{bmatrix} \dfrac{1}{A_1} \\ 0 \\ 0 \end{bmatrix}q_i.$$

Output definition depends on the application. If we want to control the liquid level in all tanks, then the output vector is defined as y = [y₁ y₂ y₃]ᵀ = [x₁ x₂ x₃]ᵀ and thus the output equations of the system are y = Cx, where C is the 3 × 3 identity matrix. It is left to the reader to find the transfer functions.

Question 2.6: In the above example is it possible to control all the outputs independently? If not, why, and what is achievable? Moreover, is it possible to have, e.g., h₂ < h₃? Discuss.

We close this example by adding that liquid-level precision control systems are a topic of extensive research, from sensor and actuator design to controller implementation; see e.g., Basci and Derdiyok (2016), Ran et al. (2016). Nonlinear models are also available. A typical reference on control of an actual nonlinear system is Sha Sadeghi et al. (2014).
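The steady state of Example 2.12 can be checked numerically: with a constant inflow every orifice eventually passes qᵢ, so the levels are h₃ = qᵢR₃, h₂ = qᵢ(R₂ + R₃), h₁ = qᵢ(R₁ + R₂ + R₃). The sketch below (assumed areas and resistances, not from the text) solves Ax + Bqᵢ = 0:

```python
import numpy as np

# Assumed tank areas and flow resistances.
A1, A2, A3 = 1.0, 2.0, 1.5
R1, R2, R3 = 1.0, 0.5, 2.0

A = np.array([
    [-1/(A1*R1),              1/(A1*R1),                        0],
    [ 1/(A2*R1), -1/(A2*R1) - 1/(A2*R2),              1/(A2*R2)],
    [          0,             1/(A3*R2), -1/(A3*R2) - 1/(A3*R3)],
])
B = np.array([1/A1, 0, 0])

qi = 0.3                           # constant inflow
h_ss = np.linalg.solve(-A, B*qi)   # steady state: A h + B qi = 0
print(h_ss)  # [qi*(R1+R2+R3), qi*(R2+R3), qi*R3]: each orifice passes qi
```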

2.3.4 Thermal system as the plant

We address a simple thermal system below.

Example 2.13: Consider a mercury thermometer which is stabilized at the temperature θ₀. It is put in an environment with the temperature θₑ. Find a mathematical model for the dynamics of the temperature in the thermometer.

In a simplified analysis where we consider uniform temperature in a body and in its environment, when the temperature of the body is raised from θ₁ to θ₂ [°C] the heat energy stored in the body is given by h = C(θ₂ − θ₁) [J], where C [J/°C] denotes the thermal capacitance. The rise in temperature is due to the passage of heat to the body, which is assumed to be at the heat flow rate q given by q = C(θ̇₂ − θ̇₁) [J/s]. On the other hand, the rate of heat flow also depends on the boundary temperatures θ₃ and θ₄ and is given by q = (θ₃ − θ₄)/R, where R [°C/(J/s)] is the thermal resistance. Now in our system we denote the temperature at the center of the mercury by θₘ. Therefore (1/R)(θₑ − θₘ) = C(θ̇ₘ − θ̇₀), from which we have RCθ̇ₘ + θₘ = θₑ. Defining x = θₘ and u = θₑ as the state and input of the system, the system is described by ẋ = −(1/RC)x + (1/RC)u. The output of the system is also defined as y = x.

There are various kinds of thermometers. Recent studies include optical thermometers (He et al., 2017). Additionally, the general theory of thermal systems, or rather thermodynamics, has found numerous industrial as well as apparently only theoretical applications, such as in astrophysics. The latter is so at least at the present time, but will probably find application in the foreseeable future. The interested reader is referred to Cabeza (2015), Moran et al. (2003) and the pertinent physics literature. Among the industrial applications is the use of smart nanofluid in thermal systems, see e.g., Mashaei et al. (2016).
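Example 2.13 is the standard first-order lag, whose solution is θₘ(t) = θₑ + (θ₀ − θₑ)e^{−t/RC}. The sketch below (assumed values, not from the text) integrates the state equation and compares with this closed form:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed values: RC = 5 s, ambient 30 degC, thermometer initially at 20 degC.
RC, theta_e, theta_0 = 5.0, 30.0, 20.0

sol = solve_ivp(lambda t, x: [(theta_e - x[0])/RC], (0, 25), [theta_0], rtol=1e-8)

# First-order lag: theta_m(t) = theta_e + (theta_0 - theta_e) * exp(-t/RC)
exact = theta_e + (theta_0 - theta_e)*np.exp(-sol.t/RC)
print(np.max(np.abs(sol.y[0] - exact)))  # tiny integration error
```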

2.3.5 Hydraulic system as the plant

There are different hydraulic systems in industry. A hydraulic actuator is considered in the following example.

Example 2.14: The schematic of a simple hydraulic actuator is provided in Fig. 2.10. In this system the motion of the valve regulates the flow of oil from the high-pressure source Pₕ to either side of the piston, and the drain of oil from the other side of the piston to a drain at the low pressure Pₗ. Assume that the flow rate through the input orifice is modeled by q = ca × f(ΔP), where c is the orifice coefficient (also called the discharge coefficient) and a is the orifice area. Find a model for the dynamics of the system.

Figure 2.10 The schematic of a hydraulic actuator.


We assume that the high pressure is constant during the operation of the system; it is maintained by an external source. Because Pₕ is high and Pₗ is low, an input motion of a few thousandths of a centimeter causes a large change in the oil flow and thus a large pressure at the load side, given by P_L = P₁ − P₂. In other words, a small input force is amplified to a large output force; that is, the device is a force amplifier. Next we note that the orifice area a is proportional to the valve displacement v. The parameters c and ΔP are actually functions of the displacement v; that is, we have q = K(v) × v. But in the simplified analysis we assume them to be independent of v, and hence q = K₁v. Neglecting the oil leakage around the valve and the main piston as well as the compressibility of the oil, the displacement of the main piston is proportional to the volume of oil that enters the cylinder. In other words, the rate of flow is proportional to the rate of displacement, or q = K₂ż. Thus the dynamics of the system is given by ż = Kv. In this system x = y = z is the state and output of the system and u = v is the input of the system. The model is valid for small inputs under the aforementioned assumptions.

This example is a simple and modified model from Merritt (1966) and Parr (2011), which study various hydraulic and pneumatic actuators in more detail. For instance, the function f in the statement of the problem is formulated as a function of ΔP and ρ, the fluid mass density. Leakage is also modeled. Modeling of hydraulic systems using the basic approach of fundamental principles has inaccuracies, and in recent studies identification techniques have been used to find more precise models (Mihajlov et al., 2008). More advanced structural designs can also be found in various sources such as Altare and Vacca (2015). Finally, from another standpoint it is good to know that the above force amplifier falls in the class of 'mechanical amplifiers', which have different forms, like force, torque, displacement, velocity, and acceleration amplifiers in either the linear or rotational framework. Some pertinent devices are the lever and the gear. A particular reference on control of actual (nonlinear and uncertain) hydraulic actuators is Niksefat and Sepehri (2000).

2.3.6 Chemical system as the plant

Examples of control practice in chemical engineering are versatile. A basic example is as follows.


Example 2.15: The continuous stirred isothermal reactor is schematically given in Fig. 2.11. In this system it is desired to produce the product B from A by the reaction A → B in the reactor. For a typical system assume that the reaction rate is given by r = kA², where k: (m³)(kg mole)⁻¹(hr)⁻¹ and A: (kg mole)(m³)⁻¹. The reactor has the volumetric feed and effluent rates q_f and q_e. The feed mixture has the average composition A_f: (kg mole)(m³)⁻¹. The material volume inside the reactor is denoted by V. Find a model for the system.

Figure 2.11 The continuous stirred isothermal reactor.

Denote the amount of reactant present in the tank at the time t by (VA)(t). At the time t + Δt this becomes (VA)(t + Δt). Therefore the accumulation of the reactant is (VA)(t + Δt) − (VA)(t). This quantity equals the amount of material entering the tank minus the amount of material leaving the tank minus the amount of reactant converted in the reactor. That is, (VA)(t + Δt) − (VA)(t) = q_fA_fΔt − q_eAΔt − kVA²Δt. We divide both sides by Δt and let Δt → 0. Hence

$$\frac{d}{dt}(VA) = q_fA_f - q_eA - kVA^2.$$

This is the governing equation of the system. In the case q = q_f = q_e the volume V remains constant and the equation simplifies to V(dA/dt) = q(A_f − A) − kVA², which can be rewritten as

$$\dot{A}(t) + \frac{q}{V}A(t) + kA(t)^2 = \frac{q}{V}A_f(t).$$

In this equation x ≔ A is the state, u ≔ A_f is the input, and y ≔ kA² is the output of the system. It is good to know that this is a nonhomogeneous nonlinear ODE in the form of a Riccati equation. (Scalar Riccati equations have the general form ẋ(t) + p(t)x(t) + q(t)x(t)² = r(t), in which of course q is different from that of this example.)
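The constant-volume reactor equation of Example 2.15 can be explored numerically. For constant A_f the concentration settles at the positive root of kA² + (q/V)A − (q/V)A_f = 0; the sketch below (assumed parameters, not from the text) verifies this:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameters (illustrative only).
q, V, k, Af = 1.0, 2.0, 0.5, 4.0

rhs = lambda t, A: [q/V*(Af - A[0]) - k*A[0]**2]
sol = solve_ivp(rhs, (0, 50), [0.0], rtol=1e-9)

# Equilibrium: k A^2 + (q/V) A - (q/V) Af = 0, positive root.
A_eq = (-q/V + np.sqrt((q/V)**2 + 4*k*(q/V)*Af))/(2*k)
print(sol.y[0, -1], A_eq)   # the simulation settles at the equilibrium
```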

The phrase 'continuous stirred tank reactor' should not be understood as 'continuously stirred tank reactor'. The complete name is 'continuous-flow stirred tank reactor', which is a classical system in chemical engineering where the acronym CSTR (as a short form of CFSTR) is well known. The device approximately provides the ideal environment for a chemical reaction. For the details of the operation of the system (in particular handling the heat issue via a heat transfer fluid, which we did not discuss in this example) the interested reader is referred to the pertinent literature in chemical engineering (Schmidt, 1998; Towler and Sinnott, 2013). The system has found applications in bio/biochemical engineering as well. Various processes are discussed in the literature; see e.g., Fathi Roudsari et al. (2013) for methyl methacrylate (MMA) solution polymerization, and Shakeri Yekta et al. (2017) for speciation of sulfur and metals in biogas reactors.

2.3.7 Structural system as the plant

Modern skyscrapers are protected against seismic excitations. A simple structural control system is considered in the following example.

Example 2.16: The schematic of a three-floor building is given in Fig. 2.12. The structure is protected against seismic excitations in the form of ẍ_g by a ground hydraulic actuator which applies the force f to the first floor. The dashed crosses represent cross-braces, which are used to restrict the three-dimensional oscillations of the structure effectively to one dimension. The symbol mᵢ refers to the mass of the ith floor; m₁ also includes the mass of the actuator rod and piston, which are of course negligible compared to the mass of the first floor. By assuming stiffness and damping constants as appropriate, find a model for the plant.

Figure 2.12 The schematic of a three-floor building. Adopted from Chung et al. (1988), with permission.

We assume independent and mutual stiffness and damping elements for the 1-dimensional model of the plant as in Fig. 2.13. The displacement of floor i is represented by zᵢ. We also note that an earthquake signal is approximately modeled as x_g = ẋ_g = 0, ẍ_g ≠ 0. Hence, similar to Example 2.11, the governing equations of the system are written as

$$\begin{bmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{bmatrix}\begin{bmatrix} \ddot{z}_1 + \ddot{x}_g \\ \ddot{z}_2 + \ddot{x}_g \\ \ddot{z}_3 + \ddot{x}_g \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{12} & b_{22} & b_{23} \\ b_{13} & b_{23} & b_{33} \end{bmatrix}\begin{bmatrix} \dot{z}_1 \\ \dot{z}_2 \\ \dot{z}_3 \end{bmatrix} + \begin{bmatrix} k_{11} & k_{12} & k_{13} \\ k_{12} & k_{22} & k_{23} \\ k_{13} & k_{23} & k_{33} \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}f,$$

where f is the force applied by the hydraulic actuator. Note that in the terms z̈ᵢ + ẍ_g the plus sign must be used, not the minus sign (Why?). Defining x = [x₁ x₂ x₃]ᵀ = [z₁ z₂ z₃]ᵀ as the state vector of the model, the model is therefore given by

$$M\ddot{x} + B\dot{x} + Kx = \Gamma u - M\mathbf{1}d,$$

where u = f, d = ẍ_g, Γ = [1 0 0]ᵀ, 1 = [1 1 1]ᵀ is the vector of ones, and the other terms have obvious definitions. In this model u = f is the control signal and d = ẍ_g is the disturbance to the plant. The control objective is to design f such that the floor displacements and their first and second derivatives due to d are minimized. This is an example of stochastic systems. The controller design is of course outside the scope of this book.

Figure 2.13 The 1-dimensional equivalent model of the structure.

For the sake of completeness let us add that the signal ẍ_g ≠ 0, which approximately satisfies x_g = ẋ_g = 0, is like a low-amplitude high-frequency sinusoid. Of course, the earthquake signal is a stochastic signal, not a simple sinusoid, but it also has high-frequency oscillations.

It should be noted that some papers consider a rotating actuator on top of the roof instead of the ground actuator of this system. The model that we have used is a classical one in the literature, a particular reference is Chung et al. (1988). For the purpose of safety and reliability, structures—especially critical ones like skyscrapers and high bridges—have to be designed based on extensive simulation and sometimes pilot-scale tests. Modeling of these systems—and sometimes control of them—is thus an important issue. Pertinent research results are often published in civil and mechanical engineering forums. See also Jalayer and Ebrahimian (2017) and Jalayer et al. (2012).
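The second-order model Mz̈ + Bż + Kz = Γu − M·1·d of Example 2.16 is routinely converted to first-order state-space form with x = [z; ż]. The sketch below (illustrative masses, stiffnesses, and damping values, not those of Chung et al. (1988)) performs the conversion and confirms that the damped structure is stable:

```python
import numpy as np

# Assumed shear-building values: uniform masses, story stiffness pattern, damping.
M = np.diag([1000.0, 1000.0, 1000.0])
K = 1e6 * np.array([[ 2, -1,  0],
                    [-1,  2, -1],
                    [ 0, -1,  1]], dtype=float)
B_damp = 2e3 * np.array([[ 2, -1,  0],
                         [-1,  2, -1],
                         [ 0, -1,  1]], dtype=float)

Minv = np.linalg.inv(M)
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [-Minv @ K,        -Minv @ B_damp]])   # state x = [z; z_dot]

eigs = np.linalg.eigvals(A)
print(np.max(eigs.real))   # negative: the damped structure is asymptotically stable
```

Since M, K, and the damping matrix are positive definite here, all six eigenvalues have negative real parts.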


2.3.8 Biological system as the plant

The general example in the field of biological systems is that of nutrient/food and micro-organism/consumer, which we discuss below.

Example 2.17: Let a(t) and b(t) denote the concentrations of a nutrient and a micro-organism at the time instant t. Suppose that: (i) the amount a₀ is supplied at the constant rate D to the system, i.e., the rate of increase of the nutrient concentration at time t due to the supply is Da₀; (ii) the amount a(t) is removed from the system at the constant rate D, i.e., the system loses Da(t) of its nutrient concentration at time t; (iii) each micro-organism consumes the amount C(a(t)) of the nutrient at the rate k, i.e., the nutrient concentration is reduced by the amount kC(a(t))b(t) at the time t because of consumption by b; (iv) the amount of consumption of the nutrient by the species b translates to the amount of growth of b; (v) the micro-organism b is removed from the system at the constant rate D. The equations governing this system are

$$\dot{a}(t) = Da_0 - Da(t) - kC(a(t))b(t), \qquad \dot{b}(t) = kC(a(t))b(t) - Db(t).$$

Defining x₁ = a, x₂ = b, and u = a₀, the state equations of the system can easily be written as

$$\dot{x}_1 = -Dx_1 - kC(x_1)x_2 + Du, \qquad \dot{x}_2 = -Dx_2 + kC(x_1)x_2.$$

The input to the system can be defined as either u = a₀ or u = Da₀. The output of the system can be defined as either y = x₁, y = x₂, or y = x = [x₁ x₂]ᵀ.

It is interesting to note that such models have been extensively studied from a control-theoretic point of view; see the respective literature. The most common consumption functions are: (a) C(x) = x: Lotka–Volterra or Holling type I; (b) C(x) = x/(m + x): Michaelis–Menten or Holling type II; (c) C(x) = x²/[(p + x)(q + x)]: sigmoidal or Holling type III. See Dawes and Souza (2013) and Huang et al. (2006) for their derivation. The book Rao and Rao (2009) is a typical reference which also considers this example. In the larger setting of biological systems, models of different parts of the body like the heart, eye, etc., as well as models of a whole living being, are studied. Models of particular issues like cancer systems are also under investigation, see e.g., Masoudi-Nejad et al. (2015). Cancer diagnosis using control theory for the analysis of biomedical images has also been studied, see e.g., Khodadadi et al. (2016).
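For the Michaelis–Menten (Holling type II) choice, the model of Example 2.17 is the classical chemostat. Its nontrivial equilibrium satisfies kC(a*) = D, i.e., a* = mD/(k − D), and, since ȧ + ḃ = D(a₀ − a − b) drives a + b toward a₀, b* = a₀ − a*. The sketch below (assumed parameters, not from the text) verifies this numerically:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed chemostat parameters with Michaelis-Menten uptake C(a) = a/(m + a).
D, k, m, a0 = 0.4, 1.0, 0.6, 3.0
C = lambda a: a / (m + a)

def rhs(t, x):
    a, b = x
    return [-D*a - k*C(a)*b + D*a0, -D*b + k*C(a)*b]

sol = solve_ivp(rhs, (0, 200), [a0, 0.1], rtol=1e-9)

# Nontrivial equilibrium: k*C(a*) = D  =>  a* = m*D/(k - D); then b* = a0 - a*.
a_star = m*D/(k - D)
print(sol.y[0, -1], a_star)   # the nutrient settles at a*
```

The interior equilibrium is reached because the growth rate at the supply level, kC(a₀), exceeds the removal rate D for these values.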

2.3.9 Economics system as the plant

Actual economic systems have delay, also called gestation lags. The reason is that a finite period of time should pass after a decision in order for its effect to appear. We present a simple example.


Example 2.18: Suppose that we split our total income T(t) into consumption C(t), investment I(t), and expenditure E(t). That is, T(t) = C(t) + I(t) + E(t). With reference to consumption we may define C(t) = cT(t), where c is the consumption rate. As for the investment, assume that a finite time τ should pass between ordering (i.e., the decision) to invest D(t) (in some new equipment, etc.) and the time its effect appears (manifested in both I(t) and the availability of the new equipment). Thus

$$I(t) = \frac{1}{\tau}\int_{t-\tau}^{t} D(s)\,ds \qquad \text{and} \qquad \dot{K}(t) = D(t-\tau),$$

where K(t) is the stock of capital assets. On the other hand, economic rationale suggests that D(t) is determined by the rate of saving (proportional to T(t)) and by the capital stock K(t). Formulation-wise, D(t) = α(1 − c)T(t) − βK(t) + ε, in which ε is the trend rate and α, β are positive. Combining these relations (with E treated as constant) one can get I(t) = (1/τ)[K(t) − K(t − τ)],

$$T(t) = \frac{1}{\tau(1-c)}\left[K(t) - K(t-\tau)\right] + \frac{E}{1-c},$$

and finally

$$\dot{K}(t) = \frac{\alpha}{\tau}K(t) - \left(\beta + \frac{\alpha}{\tau}\right)K(t-\tau) + \alpha E + \varepsilon,$$

which determines the rate of delivery of the new equipment. This last equation is an example of delay functional differential equations. Defining the state and output as x(t) = K(t) and y(t) = I(t), respectively, the state and output equations of the system are

$$\dot{x}(t) = \frac{\alpha}{\tau}x(t) - \left(\beta + \frac{\alpha}{\tau}\right)x(t-\tau) + \alpha E + \varepsilon \qquad \text{and} \qquad y(t) = \frac{1}{\tau}\left[x(t) - x(t-\tau)\right].$$

There are numerous mathematical models and theories for economics. Some are due to A. Smith, W. Leontief, J. M. Keynes, J. H. von Thuenen, D. Ricardo, R. Luxemburg, L. Walras, etc. See e.g., Alavi et al. (2016) and Zhang (2009) and the pertinent economics literature. The traditional/simple economics models of the mid-20th century are often in the realm of matrix models and are treated by basic matrix operations, at most the linear programming technique. Recent studies consider nonlinear and time-varying models and are better capable of representing the actual market, which shows regular fluctuation near its equilibrium, recession (depression), large growth cycles, irrational exuberance, and flash crashes. They require a higher level of mathematics; see e.g., Li and Yang (2017) and its bibliography.
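Delay differential equations like the one in Example 2.18 can be integrated with a simple fixed-step scheme that keeps a history buffer. The sketch below uses explicit Euler with made-up, stable parameter values (ε = 0, zero history) and checks convergence to the equilibrium x* = αE/β:

```python
import numpy as np

# Assumed parameters (illustrative only; chosen so the delay system is stable).
alpha, beta, tau, E, eps = 0.1, 0.5, 1.0, 1.0, 0.0

h = 0.002                      # step size; tau/h must be an integer
delay = round(tau / h)
n = int(60 / h)
x = np.zeros(n + 1)            # history: K(t) = 0 for t <= 0

for i in range(n):
    x_delayed = x[i - delay] if i >= delay else 0.0
    dx = (alpha/tau)*x[i] - (beta + alpha/tau)*x_delayed + alpha*E + eps
    x[i + 1] = x[i] + h*dx     # explicit Euler with a stored history buffer

# Equilibrium: (alpha/tau)x* - (beta + alpha/tau)x* + alpha*E = 0 => x* = alpha*E/beta
print(x[-1], alpha*E/beta)     # both ~0.2 for these parameters
```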

2.3.10 Ecological system as the plant

A basic example in ecology is population growth, which we discuss below.

Example 2.19: The simplest model for the population growth of a species in an environment is obtained by assuming that the population x(t) of the species at time t grows with the constant rate a, that is, ẋ(t) = ax(t). A comparatively sophisticated model is given by assuming ẋ(t) = ax(t)(1 − x(t)/b). Compare the two models with each other.

Introduction to Linear Control Systems

Denote x0 = x(0). In the first model the population is given by x(t) = x0·e^{at}. In the second model it is given by x(t) = b·x0·e^{at}/(b − x0 + x0·e^{at}) (see below). It is observed that in the first model the population grows exponentially and becomes unbounded with increasing time. This is certainly an unrealistic model, since a population never grows unboundedly, due to restrictions on resources such as space, nutrition, and sunlight, or the presence of enemies. On the other hand, in the second model x(t) does not grow unboundedly, and in fact x(t) → b as t → ∞, regardless of the value of x0; see Fig. 2.14 for the shape with typical values of the parameters. It is good to know that in the respective literature b is called the 'carrying capacity' of the environment, which reflects the availability of its resources. If the initial population is larger than the carrying capacity of the environment it will decrease to it. Similarly, if the initial population is smaller than the carrying capacity it will increase to it.
Figure 2.14 x(t) of the second model with b = 4, x0 = 1 and x0 = 7.

For the derivation of x(t) in the second model we write the equation as ∫ b dx/(x(b − x)) = ∫ a dt. Thus ∫ (1/x + 1/(b − x)) dx = ∫ a dt, or ln(x/(b − x)) = at + c, which results in x/(b − x) = d·e^{at}, d = e^c, or x = db/(d + e^{−at}). By imposing the initial condition x(0) = x0 we find d = x0/(b − x0), which gives the desired answer. These models (and more) can be found in various research articles and books like Hui (2015), Lindenstrand and Svensson (2013), and Rao and Rao (2009). A stochastic perspective is presented in Gerami and Ejtehadi (2009).
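As a quick numerical check (a sketch, with the illustrative values a = 1, b = 4, x0 = 1), we can integrate ẋ = ax(1 − x/b) with a forward-Euler step and compare against the closed-form solution derived above:

```python
import math

def logistic_closed_form(t, a=1.0, b=4.0, x0=1.0):
    """x(t) = b*x0*e^(a*t) / (b - x0 + x0*e^(a*t)), the solution derived above."""
    e = math.exp(a * t)
    return b * x0 * e / (b - x0 + x0 * e)

def logistic_euler(t_end, a=1.0, b=4.0, x0=1.0, dt=1e-4):
    """Forward-Euler integration of x' = a*x*(1 - x/b)."""
    x = x0
    for _ in range(int(round(t_end / dt))):
        x += dt * a * x * (1.0 - x / b)
    return x

for t in (1.0, 3.0, 10.0):
    print(t, logistic_closed_form(t), logistic_euler(t))
```

The two agree to a few digits, and both approach the carrying capacity b = 4 for large t, as claimed.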

The study of ecological systems is rather new, although the problem is certainly very important: the wellbeing of our ecology has been in need of consideration for a long time. Some particular fields include wildlife in the oceans (with respect to fishing) and protecting certain species from extinction.

2.3.11 Societal system as the plant

A typical relevant example that can be discussed is a dynamical model for rumor or gossip spreading. The model in the sequel is called the ISS (Ignorant-Spreader-Stifler) model.

System representation

Example 2.20: Consider a total population T of people and a rumor. People can have three behaviors or conditions with regard to the rumor: they have not heard it (with the population I of Ignorants), they spread it (with the population S of Spreaders), or they have heard it but are no longer spreading it (with the population R of Stiflers). There holds T = I + S + R. The dynamical behavior of the model depends on how the people (more precisely the Spreaders and Ignorants) meet each other. Suppose that the probability of turning an Ignorant into a Spreader is β. Also suppose that spreading decays due to a forgetting process, or because the Spreader realizes that the rumor is no longer valuable. In this model the forgetting process occurs when a Spreader meets another Spreader or a Stifler, and both contacts have the probability α. In fact the parameters α and β can be estimated if we consider the experimental data as a Markov chain. On the other hand, we assume that the graph of the social network among individuals presents homogeneous mixing, with k denoting the average number of contacts of each person. For simplicity we assume that T is constant. Normalizing T = I + S + R to 1, we can represent the model as

İ = −βkI
Ṡ = βkIS − αkS(S + R)
Ṙ = αkS(S + R)

As is observed, the dynamics of the first subsystem is independent from the rest of the system in this simple model. The dynamics of the network depends on the initial condition of the states, which is created by the media and campaign agents. It should be added that today such studies are especially welcome in election campaigns.
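A minimal forward-Euler simulation of these three equations sketches the typical course of a rumor. The values β = 0.3, α = 0.2, k = 10 and the initial fractions are illustrative assumptions only:

```python
def simulate_rumor(beta=0.3, alpha=0.2, k=10.0,
                   I0=0.99, S0=0.01, R0=0.0, t_end=10.0, dt=1e-4):
    """Forward-Euler integration of the ISS rumor model as quoted above."""
    I, S, R = I0, S0, R0
    for _ in range(int(round(t_end / dt))):
        dI = -beta * k * I
        dS = beta * k * I * S - alpha * k * S * (S + R)
        dR = alpha * k * S * (S + R)
        I += dt * dI
        S += dt * dS
        R += dt * dR
    return I, S, R

I, S, R = simulate_rumor()
print("final fractions (I, S, R):", I, S, R)
```

The Ignorant fraction decays exponentially on its own, the Spreader fraction rises and then dies out through the forgetting contacts, and the Stiflers accumulate, which matches the qualitative description in the example.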

There are many models for rumor spreading, such as the SICR, SIHR, SSIC, ICSAR, ISS, etc. Some of these models are considered both in a stochastic and a deterministic setting. Control of rumor spreading or (in a more general setting) of information epidemics is an interesting issue which is also discussed in the literature. Some representative references are Ajorlou et al. (2016), Giorno and Spina (2016), Goswamy and Kumar (1990), Ikehara and Ohmori (2010), Jadbabaie (2012), Khanafer et al. (2016), Olfati-Saber (2007), and Wang et al. (2015). The model we discussed is derived as a simple model from Wang et al. (2015).

Remark 2.16: Similar models exist in the literature for studying computer viruses over a network of computers, a disease over a population, etc.

2.3.12 Physics system as the plant

For the sake of a surface familiarity with the models with which physicists are engaged, we 'read' the mathematical model of some representative systems in the subsequent example and also in the worked-out problems.

Example 2.21: We consider an example from the branch 'physics of materials' in which one of the most important open problems is glass formation, since it requires techniques beyond the statistical mechanics of equilibrated systems. When we cool a liquid melt, a glass or a phase transition can result, depending on the cooling rate. This phenomenon is not yet fully described and solved in energy-landscape[9] models, and the reason is the sophisticated high-dimensional topology involved. Indeed, we do not even know definitively how a phase transition is related to the topology of the landscape, i.e., why a global minimum leads to singularities in the thermodynamic behavior. A first step towards the understanding of this problem is taken in Naumis (2012) and Toledo-Marina et al. (2016), among others. We describe the model in the sequel.

Take a two-level system[10] where state zero has energy ε0 = 0 and state one has energy E1 = Nε1 with the degeneracy[11] g1 = 2^N. Here N is the number of particles in the system and g1 is the complexity of the energy landscape. Suppose the system is at equilibrium at the temperature T. Thus the canonical partition function[12] is Z(T, N) = 1 + g1·e^{−E1/T}, and the equilibrium probability p(T) of finding the system in state one is given by the ensemble average p(T) = g1·e^{−E1/T}/(1 + g1·e^{−E1/T}). It is known that for this equilibrium probability the system has a phase transition associated with crystallization when the temperature crosses the critical value Tc = ε1/log 2.

To study the system out of the equilibrium state we may assume a simple landscape topology where all transition rates between metastable[13] states are the same, and the transition from the metastable states to the ground state is also the same for all metastable states. In this framework the probability p(t) of finding the system in one of the states with energy E1 at time t is governed by the so-called master equation ṗ(t) = −Γ10·p(t) + Γ01·g1·(1 − p(t)). In this equation Γij is the transition probability of going from state i to state j, and Γ01 = Γ10·e^{−E1/T}. The cooling temperature is the input to the system, which can be given by e.g., the

Notes:
9. An energy landscape is a mapping of all possible spatial positions of interacting molecules in a system and their corresponding energy levels.
10. A two-level system is one which has two energy minima separated by a well. This is quite common in physics; for example, the NH3 (ammonia or hydrogen nitride) molecule is such a system.
11. Degeneracy refers to the case that there are (many) different atomic structures with the same energy.
12. The definition comes from statistical mechanics, and means that the system is in equilibrium with a thermal bath at temperature T.
13. In physics, a metastable state is one whose energy is a local (and not global) minimum. Thus the system gets trapped in a metastable state before relaxing to the global minimum (which is the ground state).

hyperbolic formula T(t) = T0/(1 + Rt), where T0 is the initial temperature at which the system is at equilibrium and R is the cooling rate. We note that this is a nonlinear system. Denoting x(t) = p(t), u(t) = T(t) or u(t) = e^{−E1/T}, and y(t) = x(t), the state equation and output of the system can be written and considered for solution. By solving the state equation of the system, the minimum speed of cooling required for producing the glass is computed. The aforementioned references provide many more details as well.
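Under the hyperbolic cooling schedule the master equation can be integrated numerically. The sketch below uses forward Euler; the values of N, ε1, Γ10, T0, and R are assumed for illustration and are not taken from the references:

```python
import math

def cool_two_level(N=10, eps1=1.0, gamma10=1.0,
                   T0=5.0, R=0.5, t_end=20.0, dt=1e-3):
    """Forward-Euler integration of the master equation
       p'(t) = -Gamma10*p + Gamma01*g1*(1 - p),  Gamma01 = Gamma10*exp(-E1/T),
       under the hyperbolic cooling schedule T(t) = T0/(1 + R*t)."""
    E1 = N * eps1
    g1 = 2.0 ** N
    # start from the equilibrium occupation p(T0) at the initial temperature
    w = g1 * math.exp(-E1 / T0)
    p = w / (1.0 + w)
    for k in range(int(round(t_end / dt))):
        T = T0 / (1.0 + R * k * dt)
        gamma01 = gamma10 * math.exp(-E1 / T)
        p += dt * (-gamma10 * p + gamma01 * g1 * (1.0 - p))
    return p

p_final = cool_two_level()
print("occupation of the excited level after cooling:", p_final)
```

With these numbers the excited-level occupation relaxes toward the ground state as T drops; repeating the run for larger cooling rates R is the kind of experiment the references use to locate the minimum cooling speed for glass formation.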

For pertinent results see Reyes-Retana and Naumis (2015) and Perepezko et al. (2014). To better understand the exact meaning of the technical terms involved in the statement and solution of the above example (and the corresponding worked-out problems and exercises), the interested reader should consult the pertinent literature. However, it suffices for our purpose to focus on the mathematics involved. In departments of physics different branches are investigated. These include cosmology, astrophysics, high-energy physics, plasma physics, electromagnetics, condensed matter physics, quantum mechanics, nuclear physics, etc. In these branches the researchers work with mathematical models of the system. Parts of the mathematical methods and frameworks invoked in this research are beyond the engineering undergraduate level; some of them are advanced, see e.g., Szekeres (2004) and Whelan (2016). Consequently, presenting a comprehensive example from this field does not seem feasible, as it requires specialized knowledge. However, for our purpose we can 'read' some problems and get familiar with some models. In the ensuing section we discuss the 'delay' phenomenon which we encountered in our economic model.

2.3.13 Delay

We divide our arguments into parts A and B, where exact and approximate modeling of delay are considered, respectively.

2.3.13.1 Exact modeling of delay

Delay is intrinsic to any physical system, because signals are transmitted between the components of the system with limited speed and measurements are made at the end point. Also, sometimes there is a delay in 'communication signals' in some systems. The first case is clearly visible in the following basic examples: the shower, rolling machines, systems with a conveyor belt to transport material, chemical processes, communication networks, underwater vehicles, combustion systems, exhaust gas recirculation systems, nuclear reactors, bio-systems, prey-predator systems, economic systems, and social systems. More precisely,

sometimes there is a physical distance between the place where the control force is exerted (the input) and the place where the output is sensed or measured. Thus if the speed of the process is low there will be an appreciable delay. This is easily observable in non-electrical (hydraulic, pneumatic, chemical, etc.) systems, or in electrical systems where the aforementioned distance is long, such as the electrical power network of a big country. This means that there is a temporal distance between input exertion and output measurement. We have just seen an example of this in economic systems. Three other specific examples are provided below. First, let us mention that the issue has been known to the control community since at least as early as the 1940s, as demonstrated in the research literature of the time. In the broader scene the topic was certainly observed, e.g., in conversation over telephone lines. We do not know exactly when the scientific community started to seriously tackle the problem.

Example 2.22: The mixer/bathroom shower and a rolling machine are typical delay systems; see Fig. 2.15. They are some millennia old; according to Swank (1892) the latter is at least as old as 600 BCE in ancient Iran (the Middle East). A relevant nineteenth-century paper is Haines (1893). Note that the reason for the existence of a distance between the input and output places is not the same across delay systems (such as these) and depends on the nature of the system.

Figure 2.15 Examples of delay systems: Mixer/Shower on the left, Rolling Machine on the right.

In Fig. 2.15, d, v, and T denote the distance, the speed, and the delay time, respectively. The input is exerted at point a, and the output is measured at point b. It is good to know that precision control of rolling machines is a challenging problem; see Engler et al. (2016) for a recent article.

If the delay time T is the only difference between the quantities at points a and b, then b(t) = a(t − T), or B(s)/A(s) = e^{−Ts}. In other words, a pure delay has the input-output description y(t) = u(t − T) and the transfer function Y(s)/U(s) = e^{−Ts}. It should be clarified that |e^{−jωT}| = 1 and ∠e^{−jωT} = −ωT. That is, the delay term is an all-pass system/filter and its phase lag increases linearly with frequency.
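These two properties of the pure delay (unit magnitude and linear phase) are easy to confirm numerically. A small check, with T chosen arbitrarily as 0.5 s and frequencies kept below the phase-wrapping point ωT = π:

```python
import cmath

T = 0.5  # delay time in seconds (an arbitrary illustrative value)
for omega in (0.1, 1.0, 5.0):               # omega*T stays below pi: no phase wrapping
    H = cmath.exp(-1j * omega * T)          # frequency response of e^{-Ts} at s = j*omega
    assert abs(abs(H) - 1.0) < 1e-12        # all-pass: unit magnitude at every frequency
    assert abs(cmath.phase(H) - (-omega * T)) < 1e-12   # phase lag omega*T, linear in omega
    print(omega, abs(H), cmath.phase(H))
```

Note that `cmath.phase` returns the principal value in (−π, π], so for ωT > π the computed phase wraps around even though the true accumulated lag keeps growing.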

System representation

117

Question 2.7: Recalling the initial and final value theorems of the Laplace transform, can we conclude that at time zero, which corresponds to infinite frequency, the delay is at its maximum, and that at time infinity, which corresponds to zero frequency, there is no delay?

Example 2.23: Another typical delay system is that of hot water circulation for heating purposes; see Fig. 2.16. The delay phenomenon of the model is better justified if the distance is short (and the pipe has insulation, which should be the case outdoors) so that the temperature remains constant. From the theory of thermodynamics we know that at steady state the transfer function between the voltage v (or the rate of heat flow to the heating element) and the water temperature in the vicinity of the heating element is K/(s + a); see Problem 2.18. Thus the transfer function between the input voltage v (applied to the heating element) and the temperature (at the measurement point) is approximately given by Y(s)/U(s) = K·e^{−Ts}/(s + a).


Figure 2.16 Hot water circulation system.
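A simulation sketch of the step response of K·e^{−Ts}/(s + a) (with the illustrative values K = a = 1, T = 2) shows the dead time: the output stays at zero for the first T seconds and only then rises as a first-order lag.

```python
def delayed_first_order_step(K=1.0, a=1.0, T=2.0, t_end=10.0, dt=1e-3):
    """Step response of K*e^{-Ts}/(s+a): integrate x' = -a*x + K*u(t - T),
    where u is a unit step, so the lag sees zero input until t = T."""
    x, out = 0.0, []
    for k in range(int(round(t_end / dt))):
        u_delayed = 1.0 if k * dt >= T else 0.0
        x += dt * (-a * x + K * u_delayed)
        out.append((k * dt, x))
    return out

resp = delayed_first_order_step()
print("y(1.5) =", resp[1500][1], "  y(10) =", resp[-1][1])
```

The output is identically zero before t = T and then converges to the DC gain K/a = 1, which is exactly the shape sketched for pure delay in Remark 2.18 below, smoothed by the lag.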

Almost all classical synthesis and analysis tools, e.g., Routh's test for stability which is to come in Chapter 3, work with rational functions. Thus, in general, the delay must be approximated by a rational function.[14] To this end, two main approaches have been offered in the literature: Taylor/Maclaurin[15] series expansion (Eves, 1990), and Pade's approximation[16] (Baker, 1975). The first approach itself may be used in two different ways, as explained below.

1.1 Truncated Taylor/Maclaurin series expansion, e.g., e^{−Ts} ≈ 1 − Ts + T^2s^2/2! − T^3s^3/3!
1.2 Inverse truncated Taylor/Maclaurin series expansion, i.e., e^{−Ts} = 1/e^{Ts} ≈ 1/(1 + Ts + T^2s^2/2! + T^3s^3/3!)
2. Pade's approximation formula, e.g., the first-order one: e^{−Ts} ≈ (1 − Ts/2)/(1 + Ts/2)
Notes:
14. There are advanced techniques that do not use an approximation of delay. See item 4 of Further Readings of Chapter 4. We shall learn some of them in Part 2 of the book on state-space methods.
15. The idea of what is today known as the Taylor series is due to James Gregory, Scottish mathematician and astronomer (1638-1675). It was formally proposed by Brook Taylor, English mathematician (1685-1731). When used around the origin it is called the Maclaurin series, named after Colin Maclaurin, Scottish mathematician (1698-1746), who made extensive use of it.
16. Named after Henri Eugene Pade, French mathematician (1863-1953); see also Further Readings at the end of this chapter.

118

Introduction to Linear Control Systems

The philosophy behind Pade's approximation is as follows. We try to match a rational function with e^{−Ts} ≈ 1 − Ts + T^2s^2/2! − T^3s^3/3!. If we choose the rational function as e^{−Ts} ≈ (1 − as)/(1 + as), then by long division we have (1 − as)/(1 + as) = 1 − 2as + 2a^2s^2 − 2a^3s^3 + ⋯. Thus if we choose a = T/2 the first three terms match. See also Further Readings. The second approach is in general better; however, it gives rise to stability problems, which have been discussed in some research papers. On the other hand, both the direct Maclaurin expansion (1.1) and Pade's approximation give rise to nonminimum phase systems (see Remark 2.19), a characteristic of delayed systems (see Example 4.22 of Chapter 4), meaning that these approximations are arguably good ones. Considering the first three terms only, we see that the nonminimum phase zeros z1,2 = (1/T)(1 ± j) and z = 2/T are introduced by (1.1) and (2), respectively.
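Both claims in this paragraph can be checked with a few lines of arithmetic: the long-division coefficients of (1 − as)/(1 + as), and the zeros of the truncated expansion 1 − Ts + T^2s^2/2. A sketch, with T = 1 for concreteness:

```python
import cmath

T = 1.0
a = T / 2.0

# Long division of (1 - a*s)/(1 + a*s): coefficients 1, -2a, 2a^2, -2a^3, ...
coeffs = [1.0] + [((-1) ** n) * 2.0 * a ** n for n in range(1, 4)]
# With a = T/2 the first three coefficients, 1, -T, T^2/2, match those of e^{-Ts}
assert coeffs[0] == 1.0 and coeffs[1] == -T and coeffs[2] == T ** 2 / 2.0

# Zeros of the truncated Taylor approximation 1 - T*s + (T*s)^2/2, by the
# quadratic formula: they sit at (1 +/- j)/T, i.e., in the right half plane.
disc = cmath.sqrt(T ** 2 - 2.0 * T ** 2)      # discriminant of (T^2/2)s^2 - T*s + 1
z1 = (T + disc) / (T ** 2)
z2 = (T - disc) / (T ** 2)
for z in (z1, z2):
    assert abs(1.0 - T * z + (T * z) ** 2 / 2.0) < 1e-12   # really a zero
    assert z.real > 0                                      # nonminimum phase
print("zeros:", z1, z2)
```

The positive real parts of z1,2 confirm the nonminimum phase character stated in the text.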

Remark 2.17: To obtain higher-order Pade approximations, in the control literature it is suggested that the so-called n-th order Pade approximation, given by the formula e^{−Ts} ≈ ((1 − as)/(1 + as))^n, be used. Another possibility is to use a different criterion for 'matching' a function with a rational function. One such criterion is given in Further Readings. Continuing that approach, the following are some higher-order Pade approximants of the pure delay:

e^{−Ts} ≈ (1 − Ts/3)/(1 + 2Ts/3 + (Ts)^2/6),
e^{−Ts} ≈ (1 − Ts/2 + (Ts)^2/12)/(1 + Ts/2 + (Ts)^2/12),
e^{−Ts} ≈ (1 − 2Ts/5 + (Ts)^2/20)/(1 + 3Ts/5 + 3(Ts)^2/20 + (Ts)^3/60),
e^{−Ts} ≈ (1 − Ts/2 + (Ts)^2/10 − (Ts)^3/120)/(1 + Ts/2 + (Ts)^2/10 + (Ts)^3/120),

etc.
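The quality of these approximants is easy to compare on the imaginary axis. The sketch below evaluates the first-order approximant and the cubic-over-cubic approximant quoted above at s = jω and measures their distance from e^{−jωT}; the higher-order one should be markedly closer over the band shown (T = 1, illustrative):

```python
import cmath

def pade_1(s, T):
    """First-order approximant (1 - Ts/2)/(1 + Ts/2)."""
    return (1 - T * s / 2) / (1 + T * s / 2)

def pade_33(s, T):
    """The cubic-over-cubic approximant with the (Ts)^3/120 terms quoted above."""
    x = T * s
    num = 1 - x / 2 + x ** 2 / 10 - x ** 3 / 120
    den = 1 + x / 2 + x ** 2 / 10 + x ** 3 / 120
    return num / den

T = 1.0
for omega in (0.5, 1.0, 2.0):
    s = 1j * omega
    exact = cmath.exp(-T * s)
    e1 = abs(pade_1(s, T) - exact)
    e33 = abs(pade_33(s, T) - exact)
    print(omega, e1, e33)
    assert e33 < e1   # higher order: smaller error at these frequencies
```

This is the numerical counterpart of Question 2.8 below; the same comparison can of course be repeated for the truncated-Taylor approximations.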

Remark 2.18: For the sake of completeness we depict the effect of a time delay of T sec (in the Laplace domain, e^{−Ts}) on a typical signal in Fig. 2.17. The initial interval of T sec in which the output is zero is called the deadzone.

Question 2.8: How do the effects of the aforementioned approximations compare with each other? Try the step and sinusoidal inputs in a MATLAB simulation.

Remark 2.19: The term nonminimum phase (NMP) comes from the fact that if the system P(s) is minimum phase (MP), then there is no other system P̄(s) which has the same magnitude |P̄(jω)| = |P(jω)|, ∀ω, but a smaller phase lag. On the other hand, an NMP system does not have the least phase lag that is possible for a system with that magnitude. For instance, the systems P1(s) = (1 − s)/(1 + s) and P2(s) = e^{−s} have the same magnitude |P1(jω)| = |P2(jω)| = 1 and the phase lags as

Figure 2.17 Remark 2.18, effect of pure delay on a typical signal.



Figure 2.18 Remark 2.19, phase lag of P1 and P2.

shown in Fig. 2.18. However, the system P(s) = 1 has the same magnitude and the least phase lag, which is zero. (Question: What if P1(s) is written as P1(s) = (−1 + s)/(1 + s)?)

Many plants (or systems) and many interconnected systems (or networks), which are also called Systems of Systems (SoS), exhibit NMP zero(s). An SoS is not necessarily a large-scale system. It may be small-scale but have some subplants/subsystems which are interconnected to each other. Depending on the specific configuration and parameters, the system or SoS may be NMP. The question "When does a system have an NMP zero?" is quite interesting. This question is treated in a graph-theoretic setting in Abad Torres and Roy (2015) and Daasch et al. (2016a, 2016b) as a structural property of the system. Further theoretical properties and details can also be found in Nokhbatolfoghahaee et al. (2016). We also add that many interesting results on NMP systems are studied in the rest of the book, especially in Chapters 4 and 10. As we shall see, NMP zeros are in general the main stumbling block in the stabilization task as well as in performance achievement.

Examples of NMP systems are versatile and numerous. Some systems are NMP inherently, some are NMP due to the effect of delay, and some are NMP for both reasons. Examples that can be found in the literature include, but are not limited to: the inverted pendulum (which is a simple model of a space shuttle), the double inverted pendulum, missiles, space shuttles, tactical and hypersonic aerospace vehicles, helicopters, the VTOL (Vertical Takeoff and Landing) airplane, some other airplanes (by design), some airplanes and helicopters in some flight regimes, the rotational-translational actuator system (which is a simple model for seismic structure control), seismic structure control, the differential-drive robot, slave tele-robotics, hard disk drives, the bicycle, rear-steered bicycles, cars, high-speed trains, ships, underwater vehicles, the power system (load-frequency), the hydro power plant, pulse-width-modulation systems, the DC-DC boost converter, galvanometer and wafer scanners, the hydro turbine, the floating wind turbine, some electric motors, the electrohydraulic actuator, water level control, some configurations of connected tanks, rolling mills, vibratory systems (such as a loudspeaker and microphone in a cavity, or active noise control), remote satellite control, economic systems, biological systems, transcription-translation systems, and some processes like the bioreactor/CSTR, pH neutralization, the distillation column, the combustion engine, the boiler, etc. We emphasize that most of the above systems have jω-axis/NMP pole(s) as well and are among the most challenging and

Figure 2.19 The effect of lag. Left: Step response, Right: Sinusoid response.

difficult actual systems. It should also be stressed that simplified/approximate MP models of almost all of the above systems are also available in the literature. The NMP models are obtained when we consider the coupling and nonlinear terms, which are neglected in simplified analyses. The pertinent references are quite extensive; it will suffice to refer the reader to Exercises 2.14, 2.37, 2.38, 10.65; Problems 2.1, 2.18, 2.20, B.2; and items 15, 16 of the further readings of this chapter.

2.3.13.2 Approximate modeling of delay

Apart from what we have said above, we can present a simplified and approximate version of delay as follows. If two signals are conceptually identical (save the initial phase) except that one lags the other, this means that there is a delay between them. This is true between the input and output of a plant which consists of one or more poles (assuming its stability, which will be studied in Chapter 3). As such, the term 1/(s + a) in a transfer function is called a lag term. This is consistent with item 1.2 of the previous Section 2.3.13.1 on exact modeling of delay (i.e., the inverse truncated Taylor/Maclaurin series) when we use the first-order approximation e^{−Ts} ≈ 1/(1 + Ts). The effects of a lag on a step and a sinusoid are demonstrated in Fig. 2.19. The systems are sys_i = 1/(s + 1)^i, i = 1, 2, 3, 4. As is observed, by increasing the order of the lag, the lag or delay effect increases.

Question 2.9: Does such an effect exist for other inputs? Try a MATLAB simulation in addition to theoretical analysis.

Remark 2.20: In nature, in contrast to mathematics, there is no abrupt change (at least in classical physics). Specifically, this means that the slope at time zero is zero. Formulation-wise this is loosened a little, and it is said that all actual systems are Lipschitz,[17] meaning that the slope is bounded over all time. (Note that

17. Named after Rudolf Otto Sigismund Lipschitz, German mathematician (1832-1903), who was active in several fields of mathematics.

System representation

121

Figure 2.20 A Lipschitz function on the left, the actual step in the middle and on the right.

Lipschitzness does not specifically say that at the origin the slope is zero.) The most prominent example is probably the ideal step function, which is not realizable and exists only in theory. A Lipschitz function is shown in Fig. 2.20 on the left panel. The dashed line is the slope at a point on the curve, which is the tangent line to the curve. When the curve is Lipschitz this tangent line is not vertical. The other panels show actual step functions, which are Lipschitz, causal, and realizable. The thick lines show tangent lines at the starting point and have slope zero.
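The growing delay-like effect of repeated lags (Section 2.3.13.2, Fig. 2.19) can be reproduced with a simple simulation. The sketch below integrates the step responses of 1/(s + 1)^i as a chain of first-order lags and reports when each output first reaches half of its final value:

```python
def lag_chain_t50(order, t_end=15.0, dt=1e-3):
    """Time at which the step response of 1/(s+1)^order first reaches 0.5.
    The transfer function is realized as `order` cascaded first-order lags
    x_i' = x_{i-1} - x_i, the first one driven by a unit step."""
    x = [0.0] * order
    for k in range(int(round(t_end / dt))):
        for i in range(order):
            src = 1.0 if i == 0 else x[i - 1]
            x[i] += dt * (src - x[i])       # forward-Euler update of stage i
        if x[-1] >= 0.5:
            return k * dt
    return None

t50 = [lag_chain_t50(i) for i in (1, 2, 3, 4)]
print("50%-rise times for orders 1..4:", t50)
```

The 50%-rise time grows with the order of the lag, which is the numerical counterpart of the increasing delay effect seen in Fig. 2.19 (for order 1 it is near ln 2 ≈ 0.693 s, as expected for 1/(s + 1)).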

2.3.14 The other constituents

The other elements in a control system, i.e., the operator-machine interface as well as the feedback elements, amplifiers, actuators, sensors, transducers, encoders, decoders, etc., are studied in books on industrial control/electronics. Whilst an exact treatment of these items is not provided in this book, some general information on sensors and amplifiers is given below.

2.3.14.1 Sensors

A sensor is a device that senses or measures or reproduces a signal quite fast. The ideal sensor would have zero delay (and zero error/noise) and thus be able to reproduce a signal exactly, but actual sensors have both delay and error. Technological advances have made it possible to build sensors with negligible delay and noise. However, these sensors must be protected from damage (and thus malfunction) by shields. (In the case of temperature sensors these shields are thermowells.) This, however, introduces a delay (in the form of a lag) which is not negligible in comparison to the intrinsic, negligible delay of the core sensor. The sensor pack (simply called the sensor) is thus approximated by the transfer function 1/(s + a). The parameter a determines the speed of the sensor. Common sense tells us that a must be large so that the sensor responds fast; however, it depends on the system. This issue will be addressed in Chapter 10, where fundamental limitations of a control system are studied.

2.3.14.2 Amplifiers

An amplifier falls into the category of electronic power amplifiers. In the case of voltage amplifiers, usually an OpAmp (Operational Amplifier) circuit is used. Some decades ago, when BJT (Bipolar Junction Transistor) technology was in practice, OpAmps were not very accurate. This has probably been an unknown source of some problems experienced in control systems. Through the passage of time, on the one hand better microelectronic circuits were designed, and on the other hand JFET (Junction

122

Introduction to Linear Control Systems

Figure 2.21 A voltage amplifier.

Field Effect Transistor) technology was introduced, which is vastly superior to BJT technology. These advances rendered possible the production of greatly improved amplifiers. Later, MOSFET technology offered further enhancements. Current improvements (as of 2017) include DMOS (Double-Diffused Metal Oxide Semiconductor), ELT (Enclosed Layout Transistor), RHBD (Radiation-Hardened-By-Design), BSIM (Berkeley Short-channel IGFET Model), etc. A voltage amplifier circuit looks like Fig. 2.21, in which Vo/Vi = (R1 + R2)/R1 over a wide range. Note that the output current is limited. In applications in which more output current is required, a power amplifier is called for. To make this more tangible for readers in departments other than electrical engineering, we mention two examples: (i) A power amplifier is needed to drive a loudspeaker. A spoken voice signal is usually at most a few hundred microwatts, whereas the output of a loudspeaker for home applications may be hundreds of watts, and for concert applications some tens of thousands of watts. (ii) A power amplifier is needed to drive the motor of a robot manipulator or a crane to move a load. The output of the control circuitry, which is made of TTL ICs, is below 12 volts and 10 milliamperes, whereas the motor may need tens or even hundreds of amperes at 380 volts. In older times, say the mid-20th century, power amplifiers were not of high quality because they were not linear over the whole frequency range. In terms of control systems, linearity of the power amplifier (over the bandwidth of the system) is a must. In comparison to voltage amplifiers (which are a subclass of power amplifiers and were just discussed above), common sense suggests that power amplifiers are the more probable source of unwanted problems in control systems. A detailed study of this subject requires advanced knowledge of microelectronics; see Further Readings at the end of the chapter.
We wrap up this discussion by stressing that a thorough design considers uncertainty (perhaps in the form of nonlinearity) in the amplifier as well as in the other constituents of a control system. Such design methods, introduced in Chapter 1, are called robust control methods and are detailed in graduate courses on control engineering. In this undergraduate course we can only occasionally mention specific parts of that material, and even this will not be in complete detail. In analogue implementation the feedback element depends on the kind (unit) of the measured output. For instance, if it is a position in the form of a mechanical movement, then a potentiometer (actually two potentiometers) may be used. A potentiometer is a device which transduces a mechanical movement (either rotational or translational) to an electrical voltage.

Figure 2.22 Example of block diagram representation of a control system.

Through the theoretical advancements in discrete-time control systems and the availability of powerful microprocessors since the 1990s, almost all industrial control problems are implemented with digital technology. This calls for the usage of some further concepts and devices, which are studied in the third undergraduate course under the name of digital or discrete-time control systems. Discrete-time control systems became a hot topic for research in the 1950s after certain pioneering works, as we mentioned in Section 1.3. So far, we have seen how the constituents of a control system are modeled. How their connection in a control system is depicted is what follows in the next two sections. These two general and alternative methods are the Block Diagram and the Signal Flow Graph, whose equivalence is also shown.

2.4 Block diagram

The general structure we have already used to depict a control system falls in the domain of block diagrams, sometimes referred to as BDs. For instance, Fig. 2.22 is a block-diagram representation of a closed-loop system subject to noise and output disturbance. This form of illustration is further elaborated in this part. To obtain a block-diagram representation, the first step is to apply the Laplace transform to the state and output equations (of a state-space model developed in the previous part). Their interrelation is then shown by block diagrams.

Example 2.24: The system shown in Fig. 2.22 is the block diagram representation of

Y = CPE + D
Ym = Y + N
E = R − Ym

which in turn is the result of the application of the Laplace transform to a set of state-space equations.
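Eliminating E and Ym from these three equations gives Y = (CP·R + D − CP·N)/(1 + CP). A quick numerical sanity check, treating C, P, R, D, N as the (arbitrarily chosen) complex values of the corresponding transfer functions and signals at one fixed s:

```python
# Arbitrary complex values standing in for C(s), P(s), R(s), D(s), N(s) at a fixed s
C, P = 0.8 + 0.2j, 0.5 - 0.3j
R, D, N = 1.0 + 0.0j, 0.2 + 0.1j, 0.05 - 0.02j

# "Run" the loop equations Y = CPE + D, Ym = Y + N, E = R - Ym by fixed-point
# iteration; this converges here because |CP| < 1 (the update is a contraction).
Y = 0.0 + 0.0j
for _ in range(200):
    Ym = Y + N
    E = R - Ym
    Y = C * P * E + D

Y_formula = (C * P * R + D - C * P * N) / (1 + C * P)
print(Y, Y_formula)
```

Both routes give the same Y, confirming the algebraic elimination; the same closed-form expression reappears as Rule 12 below for the special case D = N = 0.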

In studying complicated block diagrams of SISO systems,[18] the following equivalence rules help us simplify the problem.

Rule 1: Series feedback elements can be interchanged (Fig. 2.23):


Figure 2.23 Block diagram algebra—Rule 1.

Rule 2: Series feedback elements can be combined into one (Fig. 2.24):


Figure 2.24 Block diagram algebra—Rule 2.

Rule 3: Series blocks can be interchanged (Fig. 2.25):


Figure 2.25 Block diagram algebra—Rule 3.

Rule 4: Series blocks can be combined into one (Fig. 2.26):


Figure 2.26 Block diagram algebra—Rule 4.

Rule 5: Parallel blocks can be combined into one (Fig. 2.27):


Figure 2.27 Block diagram algebra—Rule 5.

18. Some are valid for MIMO systems as well, but we do not go into the details in this book.

Rule 6: Feeding a block in through a feedback element (Fig. 2.28):

Figure 2.28 Block diagram algebra—Rule 6.

Rule 7: Taking a block out through a feedback element (Fig. 2.29):

Figure 2.29 Block diagram algebra—Rule 7.

Rule 8: Feeding a block in through a node (Fig. 2.30):

Passing A through the block G and then branching yields AG on both branches; equivalently, branch A first and place a copy of G on each branch.

Figure 2.30 Block diagram algebra—Rule 8.

Rule 9: Taking a block out through a node (Fig. 2.31):

Branching before the block G (to keep the signal A) is equivalent to branching after it and placing 1/G on the branch that recovers A from AG.

Figure 2.31 Block diagram algebra—Rule 9.

Rule 10: Feeding a feedback element in through a node (Fig. 2.32):

Branching the signal A − B after the summing junction is equivalent to branching A and B before it and repeating the junction on each branch.

Figure 2.32 Block diagram algebra—Rule 10.


Rule 11: Taking a feedback element out through a node (Fig. 2.33):

Figure 2.33 Block diagram algebra—Rule 11.

Example 2.25: Using the above rules the following equivalents can be obtained from Rule 6 and Rule 9, respectively.19 See Fig. 2.34. In the top row, a feedback loop with forward block G1 and feedback block G2 (input A, output B) is redrawn as a unity-feedback loop around the block G1G2 together with a 1/G2 compensator, since G1/(1 + G1G2) = (1/G2) · G1G2/(1 + G1G2). In the bottom row, the parallel pair G1, G2 producing AG1 + AG2 is redrawn as a single block G1 preceded by a node whose branch G2/G1 is added to the direct path, since G1(A + (G2/G1)A) = AG1 + AG2.

Figure 2.34 Example 2.25. Top row: One system. Bottom row: Another system.

Finally, the following rule simplifies a closed-loop (feedback) system to an open-loop one. Rule 12: Open-loop equivalence of a closed-loop system (Fig. 2.35):

A negative-feedback loop from A to B with forward block G1 and feedback block G2 is equivalent to the single open-loop block G1/(1 + G1G2).

Figure 2.35 Block diagram algebra—Rule 12.
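Rule 12 can be checked numerically. With arbitrary illustrative gains (our own values, not from the text), the open-loop equivalent G1/(1 + G1G2) must satisfy the original closed-loop equation B = G1(A − G2B):

```python
# arbitrary illustrative gains and input, not from the text
G1, G2, A = 5.0, 0.3, 2.0

# open-loop equivalent predicted by Rule 12
B = G1 / (1 + G1 * G2) * A

# the closed-loop equation B = G1*(A - G2*B) must hold
assert abs(B - G1 * (A - G2 * B)) < 1e-12
print(round(B, 4))
```

This is the scalar shadow of the rule; with transfer functions the same algebra holds at every frequency s.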

19 Note that the first one is a unity-feedback equivalence (with a pre-compensator) of a non-unity feedback system. Another equivalence (without a pre-compensator) was given in Exercise 1.11 of Chapter 1.


Example 2.26: Obtain the input-output transfer function of the following system in Fig. 2.36.

Figure 2.36 Example 2.26. Original system.

In the first step, the block G1 is fed in through the feedback element (Fig. 2.37); the diagram now contains the branch G2/G1 and the cascade G1G3, with G4 in the outer feedback path.

Figure 2.37 Example 2.26. Reduced system.

In the second step, the inner series feedback elements are interchanged (Fig. 2.38).

Figure 2.38 Example 2.26. Reduced system.

In the third step, the two inner feedback loops are simplified (Fig. 2.39): what remains is the cascade of (G1 + G2)/G1 and G1G3/(1 + G1G3) inside the outer loop through G4.

Figure 2.39 Example 2.26. Reduced system.

Then the series blocks are simplified to their multiplication. This step is skipped and combined with the final step, which is simplifying the remaining feedback loop (Fig. 2.40), giving the overall transfer function

(G1 + G2)G3 / (1 + G1G3 + (G1 + G2)G3G4).

Figure 2.40 Example 2.26. Equivalent system.
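The reduction steps can be checked numerically: with arbitrary illustrative gains (our own values, not from the text), chaining the simplified inner blocks and then applying Rule 12 to the outer loop must reproduce the closed-form answer of Fig. 2.40. A minimal sketch:

```python
# arbitrary illustrative gains, not from the text
G1, G2, G3, G4 = 2.0, 0.5, 3.0, 0.1

# step 3: the two simplified inner blocks
pre = (G1 + G2) / G1              # (G1 + G2)/G1
inner = G1 * G3 / (1 + G1 * G3)   # G1*G3/(1 + G1*G3)

# final step: Rule 12 on the outer loop through G4
reduced = pre * inner / (1 + pre * inner * G4)

# closed-form answer of Fig. 2.40
closed_form = (G1 + G2) * G3 / (1 + G1 * G3 + (G1 + G2) * G3 * G4)

assert abs(reduced - closed_form) < 1e-12
print(round(closed_form, 4))
```

The assertion is an algebraic identity, so it holds for any gain values for which the denominators are nonzero.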

Having introduced the block diagram method, the signal flow graph method is introduced next as the other representation method of control systems.

2.5 Signal flow graph

An alternative to block diagram representation of a model or system is through graph theory. Graph theory has a long history in mathematics, starting from Euler's20 paper on the 'seven bridges of Königsberg' in 1736. A group of mathematicians (Vandermonde21, Leibniz22, Cauchy23, etc.) contributed to the problem, and this gave birth to graph theory as an independent branch of mathematics. This itself resulted in the emergence of a closely related but independent branch of mathematics, that of topology. The issues discussed in these branches of mathematics are quite versatile and fascinating. We refer the interested reader to such pertinent literature as Golumbic (1980) and Runde (2005). In contemporary studies of systems in general, and control systems in particular, techniques of graph theory and topology are in use, as we mentioned in Section 1.16 of Chapter 1. They are seen especially in the study and synthesis of networks. See item 25 of Further Readings.

20 Leonhard Euler was a Swiss scientist (1707–1783) active in many fields such as mathematics, physics, astronomy, and engineering. He was one of the most influential mathematicians of all time, with significant contributions to, e.g., calculus, graph theory, topology, number theory, etc.
21 Alexandre-Théophile Vandermonde was a French mathematician (1735–1796) whose main works were concerned with determinant theory.
22 Gottfried Wilhelm von Leibniz was a German mathematician (1646–1716) and philosopher who made significant contributions to different branches of mathematics, one of the most influential of all time. He developed differential and integral calculus independently of others. Some of his ideas, like the 'law of continuity', the 'transcendental law of homogeneity', and the 'refinement of the binary number system', found application two centuries later, in the 20th century.
23 Augustin Louis Cauchy was a French mathematician (1789–1857), another of the most influential mathematicians of all time. He was one of the pioneers of analysis, especially complex analysis. He is probably the most prolific mathematician of all time, writing about 800 research articles and five textbooks.


An early and very limited application of graph theory to electrical engineering (and circuit theory in particular) was made in 1953 in Mason (1953), in which no reference was given to the mathematics literature. However, Mason managed to derive a formula for the transmittance or gain of a graph which was new (Mason, 1956) and is now a classical tool in control and graph theory. We use it to find the transfer function of a system whose block diagram representation is given. In the sequel, Section 2.5.1 first briefly introduces graph theory terminology as required for our discussion. Then Section 2.5.2 comments on the equivalence of directed weighted graphs and the block diagram representation. Finally, Section 2.5.3 presents Mason's rule for computing the transmittance of a directed weighted graph, which Mason termed a Signal Flow Graph (SFG).

2.5.1 Basic terminology of graph theory

A graph is a pair (V, E) where V is the set of vertices or nodes and E is the set of edges or branches. A particular class of graphs is that of directed weighted graphs (V, E, W) where each edge is directed and weighted. The matrix W denotes the weights. The idea can be easily demonstrated in a simple example.

Example 2.27: Consider the set V = {x1, x2, x3, u2} of vertices and the weights W as specified below. Fig. 2.41 illustrates the associated graph. The associated system, which shows the dynamics of the graph, is obtained via V = WV:

x1 = a11 x1 + a12 x2
x2 = a21 x1 + a22 x2 + b22 u2
x3 = a32 x2

that is, with V = (x1, x2, x3, u2)^T,

W = [ a11  a12  0  0
      a21  a22  0  b22
      0    a32  0  0
      0    0    0  0 ],

where the last row corresponds to the source node u2, which has no incoming branches.

Figure 2.41 The SFG of the equations in Example 2.27.


Formal definitions in the context of control systems are as follows:

Vertex or Node: A node is depicted as a point and represents a variable or signal. (In this system x1, x2, x3, and u2.)

Edge or Branch: A branch is a directed line connecting two nodes. A gain is associated with every branch and is called its transmittance.

Input node (Source): A source has only outgoing branches. (In this system u2.)

Output node (Sink): A sink has only incoming branches. (In this system x3.)

Mixed node: A mixed node has both outgoing and incoming branches. (In this system x1, x2.)

Path: A path is a traversal of concatenated branches in the direction of their arrows, in which no node is traversed more than once. (In this system x1-x2-x3, u2-x2-x1, and u2-x2-x3.)

Loop: A loop is a closed path, one ending at its starting node. No node is traversed more than once. (In this system x1-x2-x1, x2-x1-x2, x1-x1, and x2-x2. Note that the first two loops are in effect identical.)

Loop gain: The loop gain is the product of all the transmittances/gains of the branches of the loop, that is, the product of all the gains around the loop. (In this system a21a12, a12a21 = a21a12, a11, and a22.)

Non-touching loops: Two loops are non-touching if they do not share any node or branch. (In this system x1-x1 and x2-x2.)

Forward path: A forward path is a path starting from a source and ending at a sink (or, in general, at any node). (In this system u2-x2-x1, u2-x2, and u2-x2-x3.)

Forward path gain: A forward path gain is the product of all the gains associated with the branches of a forward path. (In this system b22a12, b22, and b22a32.)

It should be noted that it is possible to make an output node from a mixed node by adding an outgoing branch with gain unity to that node. For the above example this is shown in Fig. 2.42. Many other notions are important and defined in graph theory, for instance degree, connectivity, trees, forests, cycles, the adjacency matrix, contraction, minors, arboricity, planarity, etc., and there are various simple and sophisticated algorithms for graph-related problems. However, for our usage the above notions suffice. Mason calls what we have studied in the above development a signal flow graph: SFG. It should be stressed that the SFG of a given system is not unique, because the equations/formulae representing the dynamics of the system are not unique in formulation. To see this just note that the equations of the associated system of Example 2.27 can be re-arranged, resulting in a new SFG.

Figure 2.42 Making output nodes from mixed nodes.


2.5.2 Equivalence of BD and SFG methods The SFG method is neither particularly superior nor inferior to the block diagram method. Their equivalence is clear from the discussion in the previous section and is further shown via two examples in the sequel. They are alternative methods for the task of showing the connections of elements of a control system.

Example 2.28: Fig. 2.43 shows a typical system in block diagram representation on the left panel and in SFG representation on the right panel. They contain the same information.

Figure 2.43 The equivalence of block diagram and SFG methods for a particular system.

Example 2.29: Fig. 2.44 shows another typical system in block diagram representation on the left panel and in SFG representation on the right panel. They convey the same information.

Figure 2.44 The equivalence of block diagram and SFG methods for a particular system.

It is thus clear that the SFG algebra is the same as the block diagram algebra detailed in Figs. 2.23–2.35 of Section 2.4. As such we do not reproduce the rules here and instead leave it to the reader to practice them. A simple example is provided.


Example 2.30: Simplify the SFG in Fig. 2.45, whose branch gains are a (from x1 to x2), d (self-loop at x2), b (from x2 to x3), and c (from x3 back to x2).

Figure 2.45 The SFG of Example 2.30.

The simplification is shown in the following three steps from left to right (Fig. 2.46):

Figure 2.46 Simplification steps of the SFG of Example 2.30.

Formulation-wise, the analysis of the original SFG is as follows. It holds that x2 = a x1 + d x2 + c x3, and therefore x2 = (a/(1 − d)) x1 + (c/(1 − d)) x3. Using x3 = b x2, this last formula results in x3 = (ab/(1 − d)) x1 + (bc/(1 − d)) x3, whence we get x3/x1 = ab/(1 − (d + bc)). On the other hand, from Rule 4 we see that the simplified SFG in Fig. 2.46 represents the same equations.
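The same elimination can be verified numerically; the branch-gain values below are arbitrary illustrative choices, not from the text:

```python
# arbitrary illustrative branch gains for the SFG of Example 2.30
a, b, c, d = 2.0, 3.0, 0.05, 0.1

# solve the node equations x2 = a*x1 + d*x2 + c*x3, x3 = b*x2 with x1 = 1
x1 = 1.0
x2 = a * x1 / (1 - d - b * c)   # after eliminating x3 = b*x2
x3 = b * x2

# both node equations must hold...
assert abs(x2 - (a * x1 + d * x2 + c * x3)) < 1e-12

# ...and the transmittance matches ab/(1 - (d + bc))
assert abs(x3 / x1 - a * b / (1 - (d + b * c))) < 1e-9
print(round(x3 / x1, 4))
```

Note that 1 − (d + bc) is exactly the graph determinant that Mason's rule of the next section produces for this SFG: both loops (the self-loop d and the loop bc) touch node x2, so no non-touching combinations appear.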

All in all, the block diagram and SFG methods are equivalent, alternative methods for system representation. However, for complicated systems with many loops the SFG method requires less work, as it does not need the redrawing of the reduced system at every step of reduction if we use the method of the next section.

2.5.3 Computing the transmittance of an SFG

In complicated systems, obtaining the transfer function between two nodes by following the SFG algebra is cumbersome. In such systems it is convenient to use S. J. Mason's rule.

Mason's rule: Mason's rule or formula is given by

P = (1/Δ) Σ_k P_k Δ_k,   (2.12)


where:

P_k is the path gain or transmittance of the kth forward path;

Δ is the determinant of the graph, defined as 1 − (sum of individual loop gains) + (sum of gain products of all possible combinations of two non-touching loops) − (sum of gain products of all possible combinations of three non-touching loops) + ..., i.e., Δ = 1 − ΣL1 + ΣL2 − ΣL3 + ... (with obvious meanings for the sigmas);

Δ_k is the cofactor of the kth forward path: it is what remains of Δ when the gains of the loops touching the kth forward path are removed, i.e., substituted by zero.
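To see how the formula mechanizes, the following is a minimal pure-Python sketch (the graph encoding and all function names are our own, not from the text): it enumerates the simple forward paths and simple loops of a numerically weighted SFG and evaluates Eq. (2.12). It is an illustration, not an optimized algorithm.

```python
from itertools import combinations

def path_gain(adj, path, closed=False):
    """Product of branch gains along a node sequence (closing edge if a loop)."""
    g = 1.0
    for a, b in zip(path, path[1:]):
        g *= adj[a][b]
    if closed:
        g *= adj[path[-1]][path[0]]
    return g

def forward_paths(adj, src, sink):
    """All simple paths from the source node to the sink node."""
    out, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == sink:
            out.append(path)
            continue
        for nxt in adj.get(node, {}):
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return out

def simple_loops(adj):
    """All simple loops, each reported once (smallest node name first)."""
    out = []
    for s in sorted(adj):
        stack = [(s, [s])]
        while stack:
            node, path = stack.pop()
            for nxt in adj.get(node, {}):
                if nxt == s:
                    out.append(path)
                elif nxt not in path and nxt > s:
                    stack.append((nxt, path + [nxt]))
    return out

def determinant(adj, loops):
    """Graph determinant 1 - sum(L1) + sum(L2) - ..., over non-touching sets."""
    d = 1.0
    for r in range(1, len(loops) + 1):
        for combo in combinations(loops, r):
            sets = [set(l) for l in combo]
            if all(sets[i].isdisjoint(sets[j])
                   for i in range(r) for j in range(i + 1, r)):
                prod = 1.0
                for l in combo:
                    prod *= path_gain(adj, l, closed=True)
                d += (-1) ** r * prod
    return d

def mason(adj, src, sink):
    loops = simple_loops(adj)
    total = 0.0
    for p in forward_paths(adj, src, sink):
        untouched = [l for l in loops if not set(l) & set(p)]
        total += path_gain(adj, p) * determinant(adj, untouched)
    return total / determinant(adj, loops)

# single negative-feedback loop (the configuration of Rule 12):
# forward gain G1 = 2, feedback gain -G2 = -3
adj = {"R": {"E": 1.0}, "E": {"Y": 2.0}, "Y": {"E": -3.0}}
print(round(mason(adj, "R", "Y"), 6))  # G1/(1 + G1*G2) = 2/7
```

Applied to the single-loop system above it returns 2/7, exactly G1/(1 + G1G2), as Rule 12 predicts; the same function reproduces the transmittance ab/(1 − d − bc) of Example 2.30.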

Example 2.31: Find the transfer function in the following SFG in Fig. 2.47.

Figure 2.47 A complex example illustrating the Mason’s rule.

We start by finding the forward paths. There are two forward paths, 1-a-c-d-f-g-i-k-n-1 and 1-a-c-d-f-m-n-1, with path gains P1 = acdfgikn and P2 = acdfmn, respectively. Then we find and denote the loop gains: l1 = ab, l2 = de, l3 = h, l4 = il, l5 = j, l6 = acdfmno, l7 = acdfgikno. Thus,

ΣL1 = l1 + l2 + l3 + l4 + l5 + l6 + l7
ΣL2 = l1l2 + l1l3 + l1l4 + l1l5 + l2l3 + l2l4 + l2l5 + l3l5 + l3l6 + l4l6 + l5l6
ΣL3 = l1l2l3 + l1l2l4 + l1l2l5 + l1l3l5 + l2l3l5 + l3l5l6
ΣL4 = l1l2l3l5

Thus we can form Δ as Δ = 1 − ΣL1 + ΣL2 − ΣL3 + ΣL4. Next we form Δ1 and Δ2 associated with the first and second forward paths, whose path gains are P1 and P2, respectively. Δ1 is what remains from Δ when we set to zero the gains of all the loops li touching the first forward path. As all loops touch the first forward path we get Δ1 = 1. As for Δ2, it is observed that l3, l4, l5 do not touch the second forward path and hence Δ2 = 1 − (l3 + l4 + l5) + l3l5. Finally we can compute the system gain as P = (1/Δ)(P1 + P2Δ2). We close this example by adding that the corresponding block diagram representation of the system is given in Fig. 2.48.


Figure 2.48 Block diagram representation of the system in Example 2.31.

Mason's rule can also be applied directly to a given block diagram, as in Fig. 2.48. This however requires some practice and experience, which the student is expected to acquire. We close this section with another example, which is tricky and instructive in this direction.

Example 2.32: Find the transfer function of the following system in Fig. 2.49.

Figure 2.49 System of Example 2.32.

Without depicting the SFG, it should be clear from Fig. 2.49 that P1 = G2G3, P2 = G1G3, l1 = −G2G4, l2 = −G3G5, l3 = −G2G3G6, Δ = 1 − (l1 + l2 + l3) + l1l2, Δ1 = 1, Δ2 = 1 − l1, and thus P = (1/Δ)(P1 + Δ2P2). In fact the SFG is given in Fig. 2.50, which verifies our answer.

Figure 2.50 SFG representation of the system in Example 2.32.
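The stated paths and loops pin down one consistent wiring, which we reconstruct here as an assumption since the figures are not reproduced: G1 and G2 in parallel from the input, local feedback G4 around the G2 branch, feedback G5 around G3, and G6 from the output back to the input of the G2 branch. Solving the resulting node equations directly and comparing with Mason's formula gives a sanity check (all numeric gains are arbitrary illustrative values):

```python
# arbitrary illustrative gains, not from the text
G1, G2, G3, G4, G5, G6 = 1.0, 2.0, 3.0, 0.5, 0.2, 0.1

# Mason's-rule answer as derived in Example 2.32
l1, l2, l3 = -G2 * G4, -G3 * G5, -G2 * G3 * G6
P1, P2 = G2 * G3, G1 * G3
Delta = 1 - (l1 + l2 + l3) + l1 * l2
P_mason = (P1 + (1 - l1) * P2) / Delta

# direct solution of the reconstructed node equations for u = 1:
#   e = u - G6*y,  v = G2*(e - G4*v),  w = v + G1*u,  y = G3*(w - G5*y)
u = 1.0
k = G2 / (1 + G2 * G4)                      # eliminate the G4 loop: v = k*e
y = (G1 + k) * u / ((1 + G3 * G5) / G3 + k * G6)

assert abs(P_mason - y / u) < 1e-12
print(round(P_mason, 6))
```

That the two computations agree for arbitrary gains supports the reconstruction: the non-touching pair (l1, l2), and the fact that only l1 misses the G1G3 path, both follow from this wiring.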

2.6 Summary

In this chapter we have focused on system representation. We have seen how to model the plant and other constituents in a control system and how to represent the whole control system. In particular for engineers the model of the plant is usually in two main frameworks—state space or transfer function. The fundamentals of the process of modeling have been presented and practiced in several instructive examples. We have also studied details of both of the aforementioned frameworks. In this chapter and in fact in this book our main focus is on the transfer function, and thus special attention is paid to linearization of nonlinear models. Next we learned how to show the interconnection of the system constituents in the block diagram or signal flow graph frameworks. The underlying algebra and rules of these frameworks have been studied as well. In particular we have learned how to find simpler equivalents of a complicated system and how to find its transmittance. Numerous and versatile examples and worked-out problems assist and boost the learning of the lessons.

2.7 Notes and further readings

1. More details can be presented about transforming a set of differential equations to a state-space form, i.e., the way to define the state variables. This is discussed in the second course. Moreover, note that the energy-storing elements in modeling physical engineering systems are as follows, in which we use the format 'element (energy, variable)': Capacitor (½Cv², v), Inductor (½Li², i), Mass (½Mv², v), Moment of inertia (½Jω², ω), Translational spring (½Kx², x), Rotating spring (½Kθ², θ), Fluid compressibility ((V/2K)P², P), Fluid capacitor (½ρAh², h), Thermal capacitor (Cθ, θ). For more information the reader is referred to the pertinent literature. For further details about the concept of the state of a system we refer you to L. A. Zadeh 1968 of Chapter 1.
2. Traditionally all the results in the mathematical theory of systems have been obtained for systems defined over R, the field of real numbers, as a normed space. For technical reasons some results have been extended to C as well. It should be noted that it is possible to define systems over rings. Indeed this began in the early 1970s with the pioneering works of Y. Rouchaleau, B. F. Wyman, and R. E. Kalman, and interesting results, though limited, are now available. In this setting modules rather than vector spaces are used. Realistic applications of this mathematical theory include, for instance, modeling and studying the effect of fixed-precision integer arithmetic, and delay systems.
3. As stated in the text, there is another main category for system modeling under the title of mapping. We cannot reproduce even one complete example since it necessitates the introduction of certain mathematical terms. However, we can say that if the input and output spaces are normed spaces (rather than topological spaces, measure spaces, manifolds, etc.) then mappings can take the simple form of transfer functions, for instance.
4. By the state-space method a model is obtained for a system as a set of differential equations. Sometimes a system is represented by a set of differential equations along with some algebraic equations, which are its constraints. Such a system is said to be in Differential Algebraic Equations (DAE) form. An example is ẋ1 = −t x1 + sin(x2) + e^(−t), 0 = t x1 x2 + x2² − t³. The model is termed descriptor, singular,


semistate, or generalized, and the system is called a descriptor, singular, semistate, or generalized system. The linear DAE looks like Eẋ = Ax + Bu, y = Cx + Du, where det(E) = 0, or in more technical terms E is rank deficient. For instance, consider the system

[1 0 0; 0 1 0; 0 0 0] ẋ = [Af; Ad] x + [Bf; Bd] u.

The partitioning of A and B corresponds to the nonzero and zero rows of E. Thus, here the index f refers to the first two rows (whose corresponding rows in E are full rank) and the index d refers to the third row (whose corresponding row in E is rank deficient). Therefore the equations actually represent the system ẋf = Af x + Bf u, Ad x + Bd u = 0, in which the second equation represents the algebraic constraints. It should be noted that any rank-deficient E can be transformed to an E as given above, which is in the 'rank normal form', via left and right multiplications by appropriate elementary non-singular matrices for complete row and column reductions, respectively. Both steps affect E, A, B, and the second step (column reduction) affects E, A, x. Descriptor models thus represent generalized models for systems. This field originated in the work of H. H. Rosenbrock24 in the 1970s. Some excellent references on descriptor systems are introduced at the end of Chapter 1. The theory of descriptor systems is closely tied to the theory of DAEs in mathematics. Since their introduction, descriptor systems have undergone considerable developments. Many efforts have been made to extend the existing methods of control (robust, adaptive, optimal, ...) of conventional models to descriptor models, and significant progress has been made. Nevertheless, a wide gap remains. In particular, note that except for a few, the topics that we study in the remainder of the book do not have counterparts for descriptor systems. For instance, the counterparts of concepts like rise time, overshoot, MP/NMPness, root locus, Nyquist theory, gain/phase/delay margin, lead-lag design, etc., have not been developed for descriptor systems. It is not difficult to envision that a possible approach to some issues may be through constrained optimization, where (part of) the constraint is the algebraic constraint of the model. However, it will be acceptable only if it is accompanied by an efficient numerical solver, preferably a strongly backward stable one.
5.
By the final years of the 1970s, the method of the Behavioral Approach was suggested by J. C. Willems25 as a generalized framework for modeling systems. This method uses observables and latent variables of the system for modeling and introduces some structural indices for the system. Roughly speaking, it combines the inputs and outputs of the system into a new vector and blends the classical equations of the system into an Auto-Regressive (AR) system in a time-series format. However, it does not actually distinguish a priori between the inputs and outputs of the system. Instead of input-output data, here we work with observed time-series data of the system, which summarizes all the input-output data without distinguishing between the two. Similar to the theory of descriptor systems, it is also closely connected to the theory of DAEs. The behavioral approach can be seen as a generalization of descriptor system theory. This method has not gained much popularity for two main reasons: (i) It is only a mathematical abstraction: in dealing with actual systems one always distinguishes between their inputs and outputs. (ii) It is more involved than the descriptor system approach but does not seem to

24 British control theorist (1920–2010), well known for pioneering contributions to multivariable control and numerical optimization. He also proposed the descriptor systems formulation; however, the theory has been developed mainly since the 1980s by others.
25 Belgian control theorist (1939–2013), well known for the introduction of dissipativity and the behavioral approach.

offer a proven advantage over it, except in certain restricted cases. This is at least the status quo, although the future cannot be predicted for sure. We close with a rare example which justifies this framework. Consider the linear DAE with n = 3, m = 2, and rank(E) = rank(A) = 2. If the row reduction of E results in Ad = [0 0 0] and Bd = [0 1], then u2 = 0 and we cannot freely choose it. In other words it is not really an input, and hence we cannot (or should not) distinguish between input(s), output(s), and state(s). Of course such a model hardly refers to an actual system. Finally, it is important to add that for sophisticated systems like time-varying systems, the 'behavior approach' as defined in an algebraic setting (modules, isomorphisms, etc.), which has connections to the behavioral approach of Willems, proves to be a powerful framework; see, e.g., Bourles et al. (2015). However, whether it is more powerful than other approaches or not needs further investigation.
6. Some systems evolve according to difference equations instead of differential equations. In the case of LTI systems, the state-space representation looks like x(n + 1) = Ax(n) + Bu(n), y(n) = Cx(n) + Du(n), where n denotes the time instant. It should be noted that for digital implementation of continuous-time systems, models should be discretized, by which they turn into discrete-time models, i.e., difference models like the one just mentioned. You will work with these models in the third undergraduate course on control. See also worked-out Problem 2.23 and Exercise 2.51.
7. For nonlinear systems the terms time invariant and time varying are sometimes replaced by the terms autonomous and non-autonomous. The linearization approach and the stability of the resulting system are much richer topics than we can present in this undergraduate course. The interested reader may consult graduate texts on nonlinear control. It is also good to learn the terminology affine. When a nonlinear system is affine in a variable it is linear in that variable. Some examples are as follows. Systems like ẋ = A(t)x + B(t)u + d(t) are linear. The system ẋ = x² + (x + 1)u is nonlinear but affine in u. The system ẋ = (t³ − 2tu)x + tu² + d(t) is nonlinear but affine in x.
8. The models described in the text by the pure transfer function or the pure state space are finite dimensional. Systems like those described by ordinary differential equations (ODEs) evolving in Banach spaces, partial differential equations (PDEs), and delay differential (also termed hereditary) equations are infinite dimensional. Infinite-dimensional systems are also called distributed systems. The transfer function of an infinite-dimensional system is irrational. An example for PDE systems is G(s) = (e^(−s) + s − 1)/s² and an example for hereditary systems is G(s) = 1/(e^(−s) − s + 1). Besides, sometimes it is possible to model infinite-dimensional systems as systems over rings of operators. Further details can be found in the respective literature. A particular issue with regard to PDE systems is that of chaos. Important results in this direction can be found in the prize-winning works of the mathematician C. Li: a resolution of the Sommerfeld/turbulence paradox and a theory of rough dependence on initial conditions for fully developed turbulence. Note that these works have implications for the related control problems of various actual systems.
9. Among the systems for which we do not provide any examples are singularly perturbed systems, multi-time-scale systems, etc. You will deal with these systems in graduate studies. However, the interested reader is encouraged to consult specialized books on modeling of other plants: electronics (consisting of transistors—Exercises 2.26, 2.27), geological, managerial, physics (cosmology, high energy, etc.), and so forth. Note that there are also specialized books on the control of time-delay systems like the ones introduced in the references of Chapter 1. A general model of large-scale systems is provided in (D.3) of Appendix D.5.1.1. Interesting models of decentralized economy are studied in Abramov (2014). Reading Maciejowski (2003) is recommended.


10. The Pade approximation method is named after Henri Pade, who developed the method around 1890. It is the best approximation of a function by a rational function of a given order. The basics are as follows. A function f(x) is approximated by the Pade approximant P(x) = (1 + b1x + ⋯ + bm x^m)/(1 + a1x + ⋯ + an x^n) by finding the coefficients through f(0) = P(0), f′(0) = P′(0), ..., f^(m+n)(0) = P^(m+n)(0). The Pade approximant often gives a better approximation of a function than a truncated Taylor series does, and may still work where the Taylor series does not converge. As such, Pade approximants are used quite often in computer calculations. Apart from their general usage, they have also found specialized applications in mathematics, as auxiliary functions in Diophantine approximation and transcendental number theory. For further information the interested reader is referred to the pertinent literature, where generalizations and specialized topics like the Riemann-Pade zeta function are studied.
11. It is worthy of mention that the rational approximation of functions was first proposed by Ferdinand Georg Frobenius, German mathematician (1849–1917), who made significant contributions to various fields of mathematics, such as elliptic functions, differential equations, group theory, number theory, manifolds, etc. The 'Frobenius norm' is frequently used in systems and control, as you will learn in Chapter 10.
12. Taylor-Maclaurin series and Pade approximation methods are still under investigation and of course in extensive use. Some pertinent recent results are Costin and Xia (2015) and Ysern and Ceniceros (2016).
13. Time-varying systems are often discussed in the state-space approach. However, they have been studied in the frequency-domain framework as well. Serious studies started with the classical results of L. A. Zadeh in the early 1950s (Zadeh, 1950a, 1950b, 1961). The interested reader is encouraged to consult also Emre et al. (1999), Kamen et al. (1985), Kamen and Hafez (1979), and Piwowar (2015). It is due to add that the theory of non-autonomous systems is rather open and undeveloped in contrast to that of autonomous systems. In particular, the transfer function of TV systems is by no means a trivial and handy issue; this is the status quo.
14. The footnote explanations for Example 2.21 are provided by G. G. Naumis (Naumis, 2016), author of the pertinent papers (Naumis, 2012; Toledo-Marina et al., 2016).
15. Some references on NMP systems are as follows. We encourage the reader to derive the respective model if the system falls in his or her field of expertise. Rotational Translational Actuator system, known as RTAC (or TORA: Translational Oscillator—Rotational Actuator) (Kiseleva et al., 2016); conventional and rear-steered bicycle (Edelmann et al., 2015; Astrom et al., 2005; Whitt and Wilson, 2004); helicopter (Bittanti and Lovera, 1996); airplane (Sri Namachchivaya and Ariaratnam, 1986); missile (Devaud and Siguerdidjane, 2002); tactical aerospace vehicles (Vignesh et al., 2014); car (Andersson and Ljung, 1982); high-speed trains (Faieghi et al., 2014); roll dynamics of ships (Ren et al., 2014); underwater vehicles (Ruiz-Duarte and Loukianov, 2015); combustion engine (Daasch et al., 2016a, 2016b); power systems (Khodabakhshian and Hemmati, 2012); slave tele-robotics (Atashzar et al., 2011); DC motor (Mohammadzaman et al., 2006); hard disk drive (Ataollahi et al., 2011); DC-DC boost converters (Antritter and Tautz, 2007; Martinez-Salamero et al., 2011); galvanometer scanner (Woong et al., 2016); some vibratory systems (like a loudspeaker and microphone in a cavity, or active noise control) (Paul et al., 2015; Zhou et al., 2013); rolling mills (Li et al., 2014); water level control (Ansarifar et al., 2012); some configurations of connected tanks (Ionescu et al., 2016); boiler (Labibi et al., 2009); bioreactor/CSTR (Dochain and Perrier, 1991); pH neutralization process (Chen et al., 2011); distillation column (Tabrizi and Edwards, 1992); transcription-translation systems (Yeung et al., 2013). For more references the interested reader is referred to the literature.
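The matching conditions of item 10 amount to a small linear solve. The sketch below is our own construction (the function names are not from the text): it computes an [m/n] Pade approximant from Taylor coefficients with exact rational arithmetic, and recovers the classical [1/1] approximant of e^x, namely (1 + x/2)/(1 − x/2):

```python
from fractions import Fraction

def solve(A, rhs):
    """Tiny exact Gaussian elimination with partial pivoting."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].

    Returns (num, den): coefficients of the numerator (degree m) and
    denominator (degree n), with the denominator normalized so den[0] = 1.
    """
    c = [Fraction(x) for x in c]
    # solve sum_{j=1..n} b_j * c[m+i-j] = -c[m+i] for i = 1..n (with b_0 = 1)
    A = [[c[m + i - j] if 0 <= m + i - j <= m + n else Fraction(0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    b = solve(A, [-c[m + i] for i in range(1, n + 1)])
    den = [Fraction(1)] + b
    # numerator coefficients from the Cauchy product of den and the series
    num = [sum(den[j] * c[k - j] for j in range(0, min(k, n) + 1))
           for k in range(m + 1)]
    return num, den

# e^x has Taylor coefficients 1/k!; the [1/1] approximant is (1 + x/2)/(1 - x/2)
num, den = pade([1, 1, Fraction(1, 2)], 1, 1)
print(num, den)  # [Fraction(1, 1), Fraction(1, 2)] [Fraction(1, 1), Fraction(-1, 2)]
```

This [1/1] approximant of the exponential is, incidentally, the one behind the first-order Pade delay approximation e^(−sT) ≈ (1 − sT/2)/(1 + sT/2) used in the main text.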


16. The nature of the drilling and welding processes (which can be experienced by every user) strongly suggest that these processes are also NMP, at least in certain conditions. However we have not encountered any NMP model for them in the literature. This sounds like an interesting topic for further research. 17. Apart from the systems of item 15, excellent advanced modeling exercises for readers of mechanical engineering are haptic robots and minimally invasive robotic surgery, parallel robots, and flexible-joint robots. Typical pertinent references are (Sharifi et al., 2017; Tavakoli et al., 2008), (Khosravi and Taghirad, 2014; Taghirad, 2013), and (Moallem et al., 2000; Karimi et al., 2006; Nanos and Papadopoulos, 2015; Ott, 2008), respectively. 18. A systematic novel methodology for moving and in fact assigning the closed-loop (transmission) zeros of a (multivariable) system is proposed by Wan et al. (2010). It is a pre 1 post 1 feedforward compensator. 19. In contemporary research the idea of transfer function models of nonlinear systems is under development, see e.g., Halas (2008) and its bibliography. This is done in an algebraic framework and requires advanced tools of mathematics. In an algebraic framework (differential modules, (non-)commutative rings, etc.) the concept of poles and zeros of time-varying systems is also under development. 20. The study of poles, zeros, and structural properties of MIMO systems is more intricate than that of SISO systems. There are two different frameworks for this: one is the classical framework which can be found in every text on multivariable control, and the other is the Special Coordinate Basis (SCB) of P. Sannuti and A. Saberi, which is the basis for a number of further developments. See the texts Saberi et al. (1995, 2007, 2012) as well as Chen et al. (2004) of Chapter 1. It is due to add that many researchers contributed to the classical approach, among the main were H.H. Rosenbrock and A.G.J. MacFarlane. 
We should also add that the SCB of Section 1.16.1.2 is in the spirit of the 'structural methods' that we mentioned there, but we categorized it separately due to its importance. Among the other important results in the same perspective are Saberi and Sannuti (1989) and Saberi et al. (1992).
21. In control theory we work with some functions (perhaps in the form of sequences), which represent some signals in actual applications or merely some mathematical functions with certain properties. The actual signals are functions of time, but their study is sometimes conducted in a transformed domain, usually the Laplace domain, the Fourier domain, or the z domain, as this is often more convenient than working in the time domain. What we can mathematically do with the aforementioned functions depends on their properties. In mathematical terms this means that it depends on the space the given function belongs to. Thus the chapter can be best continued by a study of 'mathematical spaces', including spaces, signals, systems, and norms. We shall do this in the sequel of the book on state-space methods. Among the things that we learn there are the function or measure spaces L₁, L₂, L∞, the Hardy spaces H₁, H₂, H∞ (where L₂ and H₂ are Hilbert spaces as well), the Sobolev spaces W^{k,p}, indefinite metric spaces, etc.,²⁶ and their associated norms, as well as some

²⁶ Named after Henri Leon Lebesgue, French mathematician (1875–1941), Godfrey Harold Hardy, English mathematician (1877–1947), David Hilbert, German mathematician (1862–1943), Sergei Lvovich Sobolev, Russian mathematician (1908–1989), and Stefan Banach, Polish mathematician (1892–1945). However, according to [Bourbaki 1987] Lebesgue spaces were first introduced by Frigyes Riesz, Hungarian mathematician (1880–1956). Moreover, Hardy spaces were introduced by F. Riesz in 1923, who named them after G.H. Hardy because of a former paper of his in 1915.

other norms. They have their own details and applications. For instance, PDE systems are primarily studied in Sobolev spaces. Throughout this book we are restricted to the real field Rⁿ, as the simplest Banach space, and the state of the system is assumed continuous and sufficiently smooth, i.e., as many times differentiable as required.
22. The technical term for the usual control problem in which the final time is not specified and can be infinity is infinite-horizon control. The other case is the so-called finite-horizon or finite-time control. We also have fixed-time control; see Appendix D. The model predictive control (MPC) scheme is also called receding-horizon control, for a reason which will become clear to you in further studies.
23. The state-space systems that we have considered are also called square systems (either conventional or descriptor) in that the state matrix A is square. We also have rectangular systems, in which the state matrix A is rectangular. Moreover, higher-order conventional systems of the form A_m x^(m) + ... + A₁ẋ + A₀x = B_m u^(m) + ... + B₁u̇ + B₀u are also studied in the literature. For realizability we assume that A_m ≠ 0. See e.g., Yu and Duan (2009).
24. Recall that the condition of functional controllability is that the number of outputs should not be more than the number of inputs. Conceptually, in a network this condition translates to the requirement that the number of sensors should not be more than the number of agents (systems at nodes). In recent studies advanced estimation and control techniques are used to circumvent this requirement; R. Olfati-Saber and P. Jalalkamali (2012) present an outstanding result in this direction.
25. With regard to the role of graph theory and topology in systems and control theory, some typical results are graph theory and stability of nonlinear systems (Hemami and Cosgriff, 1966), graphs and differential equations (Lazebnik and Woldar, 2000), graph rigidity and network control problems (Olfati-Saber and Murray, 2002), Laplacian graphs for the synthesis of information graphs (Kim and Mesbahi, 2006), multiagent networks and graph theory (Mesbahi and Egerstedt, 2010, 2014), graph theory and H∞ control (Mehrmann and Poloni, 2013), study of NMP zeros in a graph-theoretic setting (Abad Torres and Roy, 2015), effective resistance of directed graphs and network control problems (Young et al., 2016), control of networks with switching topology (Olfati-Saber and Murray, 2004; Rezaee and Abdollahi, 2017; Saboori and Khorasani, 2015; Shiota and Ohmori, 2011), distributed learning and network topology (Sarwate and Javidi, 2015), impact of network topology on distributed detection (Shahinpour et al., 2016), and consensus control and topology of networks (Li et al., 2017).
26. Heavy investment is made in developing specialized theory and algorithms for the simulation of different systems, especially for industrial applications. This is partly initiated in the form of some PhD dissertations. Some particular references are (Bambang, 2006; Ebert, 2008; Kobayashi, 2017; Naito, 2015; Schlauch, 2007; Steinbrecher, 2006; Takahashi, 2004).

2.8 Worked-out problems

Problem 2.1: A differential drive robot (DDR) is given by the nonlinear equations
ẋ₁ = a x₂² + b(u₁ + u₂)
ẋ₂ = c x₁x₂ + d(u₁ − u₂)
where c < 0; see Exercise 2.38 for details. Define y = [x₁ x₂]ᵀ and u = [u₁ u₂]ᵀ as the output and input of the system, and find the linearized model of the system in the state-space domain around x* = [1 1]ᵀ, u* = [1 1]ᵀ. Analyze the system.


The linearized model is given by Δẋ = AΔx + BΔu + f(x*, u*), where
A = [[0, 2a x₂*],[c x₂*, c x₁*]] = [[0, 2a],[c, c]]  and  B = [[b, b],[d, −d]].
The output definition results in y = Cx and Δy = CΔx, where C = [[1, 0],[0, 1]]

is the identity. The transfer function is
T(s) = C(sI − A)⁻¹B = (1/Δ)[[s − c, 2a],[c, s]][[b, b],[d, −d]] = (1/Δ)[[bs − bc + 2ad, bs − bc − 2ad],[ds + cb, −ds + cb]],
with Δ = s(s − c) − 2ac.
As is observed, the characteristic polynomial Δ = s² − cs − 2ac has positive coefficients (recall c < 0) and is always stable. Now we consider the numerators of the transfer functions T_ij, i, j = 1, 2. The transfer function T₁₁ is MP with positive gain. The transfer function T₁₂ is MP or NMP depending on the values of the parameters. The transfer function T₂₁ is NMP. The transfer function T₂₂ is MP with negative gain. See also Exercise 2.38. (Question: Can we find a linearized model about a given point like x* = [1 1]ᵀ for arbitrary inputs?)
Problem 2.2: The Van der Pol oscillator (Van der Pol, 1920) is given by v̈ − ε(1 − v²)v̇ + v = 0. Nonlinear control techniques such as the averaging method and the singular perturbation method are used to study this nonlinear system, for reasons that will become clear to you in graduate studies. Here, write a state-space model for the system and linearize it.
Defining x₁ = v, x₂ = v̇, one gets
ẋ₁ = x₂
ẋ₂ = −x₁ + ε(1 − x₁²)x₂.
The state matrix of the linearized model is thus
A = [[0, 1],[−1 − 2εx₁x₂, ε(1 − x₁²)]].
Problem 2.3: Linearize the system
ẋ₁ = −x₁ + x₂ + x₁x₂ cos t + e⁻²ᵗ(2x₁² + x₂²)
ẋ₂ = −2x₂ + (e⁻ᵗ + te⁻ᵗ)x₁x₂,  t ≥ 0, about the origin.
We first note that the origin is indeed an equilibrium point. Next, for the linearized model, if valid, we have ẋ = Ax where
A = [[−1 + x₂ cos t + 4x₁e⁻²ᵗ, 1 + x₁ cos t + 2x₂e⁻²ᵗ],[x₂(e⁻ᵗ + te⁻ᵗ), −2 + x₁(e⁻ᵗ + te⁻ᵗ)]] evaluated at the origin = [[−1, 1],[0, −2]].
There holds
f_r = [x₁x₂ cos t + e⁻²ᵗ(2x₁² + x₂²); (e⁻ᵗ + te⁻ᵗ)x₁x₂].
It is easy to see that lim_{‖x‖→0} sup_{t≥0} ‖f_r‖/‖x‖ = 0, and thus the linearized model is acceptable. Moreover, the linearized model is exponentially stable at the origin, and so is the original nonlinear system. In the second course we shall learn how to analyze and determine whether this is global or local.
Problem 2.4: Linearize the system ẋ = −2t/(1 + t²) x. The initial condition is x(0) = −3.
This differential equation is linear from the outset, and if we try to linearize it the same model will be obtained. A solution to this equation is x = c/(1 + t²) with
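The Jacobian linearization of Problem 2.1 can also be cross-checked numerically. The sketch below uses central finite differences; the parameter values a, b, c, d are arbitrary test choices (with c < 0 as required), not values from the text.

```python
# Numerical check of the linearization in Problem 2.1 (a sketch; the
# parameter values a, b, c, d are arbitrary test choices, not from the text).

def f(x, u, a=1.0, b=1.0, c=-1.0, d=1.0):
    # DDR dynamics: x1' = a*x2^2 + b*(u1+u2),  x2' = c*x1*x2 + d*(u1-u2)
    return [a * x[1] ** 2 + b * (u[0] + u[1]),
            c * x[0] * x[1] + d * (u[0] - u[1])]

def jacobian(fun, v0, other, wrt, eps=1e-6):
    # Central-difference Jacobian of fun with respect to `wrt` ('x' or 'u').
    n = len(v0)
    J = [[0.0] * n for _ in range(2)]
    for j in range(n):
        vp, vm = list(v0), list(v0)
        vp[j] += eps
        vm[j] -= eps
        if wrt == 'x':
            fp, fm = fun(vp, other), fun(vm, other)
        else:
            fp, fm = fun(other, vp), fun(other, vm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

xs, us = [1.0, 1.0], [1.0, 1.0]
A = jacobian(f, xs, us, 'x')   # expected [[0, 2a], [c*x2, c*x1]] = [[0, 2], [-1, -1]]
B = jacobian(f, us, xs, 'u')   # expected [[b, b], [d, -d]]      = [[1, 1], [1, -1]]
```

For the chosen values the finite-difference Jacobians reproduce the analytic A and B of the solution above.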


c = −3 due to the initial condition. (Question: Does the system admit any other solution?)
Problem 2.5: Linearize the system
ẋ₁ = x₁x₂ − 2x₂ + x₂u₁ + u₂
ẋ₂ = 3x₁ + x₂² + 2u₁² + x₁u₂
with the input u* = [−1 0]ᵀ.
We first find the equilibrium points. From the first equation in
ẋ₁ = x₁x₂ − 2x₂ − x₂ = 0
ẋ₂ = 3x₁ + x₂² + 2 = 0
we have x₁ = 3 or x₂ = 0. Considering the second equation we see that x₁ = 3 is not acceptable, but x₂ = 0 results in x₁ = −2/3. Thus with u* = [−1 0]ᵀ the equilibrium is x* = [−2/3 0]ᵀ. The linearized model, if valid, is given by Δẋ = AΔx + BΔu, where
A = [[x₂*, x₁* − 2 + u₁*],[3 + u₂*, 2x₂*]] = [[0, −11/3],[3, 0]],
B = [[x₂*, 1],[4u₁*, x₁*]] = [[0, 1],[−4, −2/3]].
(Question: Is the linearized model valid?)
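The equilibrium computed in Problem 2.5 is easy to confirm numerically; the sketch below simply evaluates the right-hand side at x* = [−2/3, 0]ᵀ, u* = [−1, 0]ᵀ and checks that both derivatives vanish.

```python
# Quick numerical confirmation of the equilibrium found in Problem 2.5:
# with u* = [-1, 0], the point x* = [-2/3, 0] should make both rates zero.

def f(x, u):
    # x1' = x1*x2 - 2*x2 + x2*u1 + u2,  x2' = 3*x1 + x2^2 + 2*u1^2 + x1*u2
    return [x[0] * x[1] - 2 * x[1] + x[1] * u[0] + u[1],
            3 * x[0] + x[1] ** 2 + 2 * u[0] ** 2 + x[0] * u[1]]

xs, us = [-2.0 / 3.0, 0.0], [-1.0, 0.0]
rates = f(xs, us)   # both components should (numerically) vanish
```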

Problem 2.6: Solve the system ẋ = (1/x)(t² + u²) with the input u = 2. The initial condition is x(0) = 3.
Referring to the materials of the course on differential equations, which you have certainly taken, it is fairly easy to find an answer of this separable equation as x = √((2/3)t³ + 8t + k). Considering the initial value we find k = 9. (Question: What is the linearized model of this system?)
Problem 2.7: Linearize the system
ẋ₁ = 2x₁² − 2x₂u
ẋ₂ = −x₁^(1/2) − 3u³
where u(t) = e⁻ᵗ, 0 ≤ t < ∞. Also solve the system.
The linearized model, if valid, has the form Δẋ = AΔx + BΔu, where
A = [[4x₁, −2u],[−(1/2)x₁^(−1/2), 0]] and B = [[−2x₂],[−9u²]],
evaluated along the trajectory of interest. (Is the linearized model valid?) On the other hand, a moment of thought on the structure of this system, considering the input, suggests that the answers are probably in the form of exponential functions. This is correct; it is easy to verify that x₁(t) = e⁻²ᵗ and x₂(t) = e⁻ᵗ + e⁻³ᵗ. (Does the system admit any other solution?)
Problem 2.8: Solve the system
ẋ₁ = −x₁/(t + 1) + u₁x₂/(2(t + 1)²x₁)
ẋ₂ = −2(t + 1)²x₁²u₁ + 2u₂²,  t ≥ 0,
with the input u = [1 cos(t + 1)]ᵀ.
The structure of the system reveals, after some thought, that the answers are probably related to the sine and cosine functions. This is correct, and it can easily be verified that x₁(t) = sin(t + 1)/(t + 1) and x₂(t) = sin(2t + 2). (Question: Does the system admit any other solution? How can we linearize the system about the exact solution trajectory?)
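The closed-form solutions claimed in Problems 2.6 and 2.8 can be spot-checked by comparing a finite-difference derivative of each candidate solution with the corresponding right-hand side; the equation forms used below are our reading of the (garbled) statements.

```python
import math

# Spot-check of the claimed closed-form solutions of Problems 2.6 and 2.8
# by comparing a finite-difference derivative with the right-hand side.

def deriv(g, t, h=1e-6):
    return (g(t + h) - g(t - h)) / (2 * h)

# Problem 2.6: xdot = (t^2 + u^2)/x with u = 2, x(0) = 3.
x26 = lambda t: math.sqrt((2.0 / 3.0) * t ** 3 + 8.0 * t + 9.0)
res26 = deriv(x26, 1.0) - (1.0 ** 2 + 4.0) / x26(1.0)

# Problem 2.8 with u = [1, cos(t+1)]^T.
x1 = lambda t: math.sin(t + 1) / (t + 1)
x2 = lambda t: math.sin(2 * t + 2)
t0 = 0.5
res81 = deriv(x1, t0) - (-x1(t0) / (t0 + 1)
                         + x2(t0) / (2 * (t0 + 1) ** 2 * x1(t0)))
res82 = deriv(x2, t0) - (-2 * (t0 + 1) ** 2 * x1(t0) ** 2
                         + 2 * math.cos(t0 + 1) ** 2)
```

All three residuals are numerically zero, consistent with the stated solutions.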


Question 2.10: What happens if t + 1 is substituted with t in the statement of Problem 2.8? Discuss.
Problem 2.9: Find the output of the system
A = [[−1, 0],[3, 0]], B = [1 2]ᵀ, C = [0 1], D = 0, x(0) = [−1 1]ᵀ
to the input u = 2 step(t).
Y(s) = C(sI − A)⁻¹x(0) + [C(sI − A)⁻¹B + D]U(s)
= [0 1][[s + 1, 0],[−3, s]]⁻¹[−1; 1] + [0 1][[s + 1, 0],[−3, s]]⁻¹[1; 2](2/s)
= (1/Δ)[3, s + 1][−1; 1] + (1/Δ)[3, s + 1][1; 2](2/s),  Δ = s(s + 1),
= (s − 2)/(s(s + 1)) + ((2s + 5)/(s(s + 1)))(2/s)
= (−2/s + 3/(s + 1)) + (10/s² − 6/s + 6/(s + 1)).
Thus, y(t) = (−2 + 3e⁻ᵗ) + (10t − 6 + 6e⁻ᵗ), t ≥ 0. Note that the first term is the contribution of the initial condition and the second term that of the input.
Problem 2.10: Find a model for the system in Fig. 2.51.
We solve this example in a more general setting. Denote the impedance connecting the input and the inverting input of the OpAmp by Z₁. Also denote the impedance connecting the inverting input of the OpAmp and its output by Z₂. Then, if the noninverting OpAmp input is grounded, there holds (Vᵢ − 0)/Z₁ = (0 − V_o)/Z₂, and therefore V_o/Vᵢ = −Z₂/Z₁.
In the first stage of this circuit Z₁ = R₂(R₁ + 1/(C₁s))/(R₂ + R₁ + 1/(C₁s)) and Z₂ = R₂. As for the second stage, Z₁ = R₃ and Z₂ = R₄. Therefore
V_o/Vᵢ = (R₄/R₃)((R₁ + R₂)C₁s + 1)/(R₁C₁s + 1).
In Chapter 9 we will see that this circuit can be used for the implementation of a lag controller. The system is proper and its relative degree is zero.

Figure 2.51 Problem 2.10, a particular OpAmp circuit (input vᵢ, output v_o).
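The two-stage OpAmp transfer function of Problem 2.10 can be verified numerically by composing the two inverting-stage gains and comparing with the closed-form expression; the component values below are arbitrary samples, not values from the text.

```python
# Numerical check of the two-stage OpAmp transfer function of Problem 2.10
# (sample component values; any complex frequency s works).

def vo_over_vi(s, R1, R2, R3, R4, C1):
    # Stage 1: inverting amplifier with Z1 = R2 || (R1 + 1/(C1 s)), Z2 = R2.
    Z1 = R2 * (R1 + 1 / (C1 * s)) / (R2 + R1 + 1 / (C1 * s))
    g1 = -R2 / Z1
    # Stage 2: inverting amplifier with gain -R4/R3.
    g2 = -R4 / R3
    return g1 * g2

def closed_form(s, R1, R2, R3, R4, C1):
    return (R4 / R3) * ((R1 + R2) * C1 * s + 1) / (R1 * C1 * s + 1)

s = 1j * 10.0                       # evaluate at w = 10 rad/s
p = (1e3, 2e3, 1e3, 5e3, 1e-6)      # R1, R2, R3, R4, C1 (arbitrary samples)
err = abs(vo_over_vi(s, *p) - closed_form(s, *p))
```

The two evaluations agree to machine precision, confirming the algebra of the cascade.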


Problem 2.11:²⁷ Find a model for the system in Fig. 2.52.
The equations governing this system are
f = k(z₁ − z₂)
k(z₁ − z₂) − bż₂ = mz̈₂.
Solving these equations is straightforward: f = bż₂ + mz̈₂, and by applying the Laplace transform we get Z₂/F(s) = 1/(s(ms + b)). Substituting in the first equation we get Z₁/F(s) = (ms² + bs + k)/(ks(ms + b)). A canonical²⁸ method to solve such equations is offered in the next problem. Note that Z₂/F and Z₁/F are strictly proper and proper, respectively.
The voltage-force and current-force, or precisely speaking voltage-integral_force and current-integral_force, equivalents of this system are given in Fig. 2.53 with the governing equations
v = f = (1/C)(∫z₁ − ∫z₂),  (1/C)(∫z₁ − ∫z₂) = Lż₂ + Rz₂,
and
i = f = (1/L)(∫z₁ − ∫z₂),  (1/L)(∫z₁ − ∫z₂) = Cż₂ + (1/R)z₂,
respectively.
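A quick consistency check for Problem 2.11: the derived transfer functions must satisfy the two governing equations in the Laplace domain at any complex frequency. The m, b, k values below are arbitrary samples.

```python
# Verify that the Problem 2.11 transfer functions satisfy the governing
# equations f = k(z1 - z2) and k(z1 - z2) - b*s*Z2 = m*s^2*Z2 in the
# Laplace domain (sample m, b, k; F normalized to 1).

m, b, k = 2.0, 3.0, 5.0
s = 0.7 + 1.3j
F = 1.0
Z2 = F / (s * (m * s + b))                          # Z2/F = 1/(s(ms+b))
Z1 = F * (m * s**2 + b * s + k) / (k * s * (m * s + b))
res1 = F - k * (Z1 - Z2)                            # first governing equation
res2 = k * (Z1 - Z2) - b * s * Z2 - m * s**2 * Z2   # second governing equation
```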

Problem 2.12: Find a model for the system in Fig. 2.54.
In this system the governing equations are
f − b(ż₁ − ż₂) = mz̈₁
b(ż₁ − ż₂) = kz₂.
Here again we take the Laplace transform and then eliminate one variable (like Z₁(s)) and thus obtain the transfer function of the other variable (Z₂(s)/F(s)), and whence the transfer
Figure 2.52 Particular configuration of a mass-dashpot-spring system.

Figure 2.53 The electrical equivalents of the system in Fig. 2.52.

²⁷ Problems 2.11–2.13 are somewhat similar in appearance to each other, and also to Example 2.11; however, they are different. We address all of them so that by comparison the student better masters the technique.
²⁸ It is good to know that in the scientific literature the word 'canonical' is encountered frequently. For instance, in the second course on state-space methods you study the Kalman canonical decomposition. The irony is that the root of this word and its precise meaning are not known to the majority of the scientific community. This word is the adjective of the noun canon, in the meaning of rule, standard, or criterion; hence the word canonical should mean rule-based. Interestingly, the word canon is also the English pronunciation/form of the Farsi/Persian word 'Ghanoon', which has the same meaning. The book 'Ghanoon e Teb' of the Iranian scientist Abu Ali Sina (known as Avicenna in the western world) was translated as the 'Canon of Medicine' when it was adopted as the text of medicine in European universities in the 17th and 18th centuries.


function of the eliminated variable (Z₁(s)/F(s)). The complication of this system is modest, there are only two variables, and the elimination strategy is not difficult. In complicated systems where there are more variables (e.g., in Exercise 2.19) a structured method is better, as in the following.²⁹ We take the Laplace transform of the equations and then rewrite them as PZ = Q, whose solution is Z = P⁻¹Q. This is demonstrated below.
The Laplace transform results³⁰ in
(bs + ms²)Z₁ − bsZ₂ = F
bsZ₁ − (bs + k)Z₂ = 0,
or
[[bs + ms², −bs],[bs, −(bs + k)]][Z₁; Z₂] = [F; 0].
These equations are rewritten as
[Z₁; Z₂] = (1/det P)[[−(bs + k), bs],[−bs, bs + ms²]][F; 0],  det P = −(bs + ms²)(bs + k) + b²s²,
from which we obtain
Z₁/F = (bs + k)/((bs + ms²)(bs + k) − b²s²) = (bs + k)/(s(mbs² + mks + bk))
and
Z₂/F = bs/((bs + ms²)(bs + k) − b²s²) = b/(mbs² + mks + bk).

Question 2.11: It is observed that the first transfer function is of third order, but the reader probably expects a lower order. How do you justify the answer? Also, reconsider this problem after studying Chapter 3.
The voltage-force and current-force, or precisely speaking voltage-integral_force and current-integral_force, equivalents of this system are provided in Fig. 2.55 with the governing equations
v = f = Lż₁ + R(z₁ − z₂),  R(z₁ − z₂) = (1/C)∫z₂,
and
i = f = Cż₁ + (1/R)(z₁ − z₂),  (1/R)(z₁ − z₂) = (1/L)∫z₂,
respectively.
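The PZ = Q elimination of Problem 2.12 can also be carried out numerically at a single complex frequency and compared with the closed-form transfer functions; m, b, k below are arbitrary sample values.

```python
# The P Z = Q scheme of Problem 2.12, carried out numerically: solve the
# 2x2 Laplace-domain system at one frequency and compare with the
# closed-form transfer functions (sample m, b, k values).

def solve2(P, Q):
    # Cramer's rule for a 2x2 complex system P z = Q.
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    z1 = (Q[0] * P[1][1] - P[0][1] * Q[1]) / det
    z2 = (P[0][0] * Q[1] - Q[0] * P[1][0]) / det
    return z1, z2

m, b, k = 1.5, 2.0, 4.0
s = 0.3 + 0.9j
P = [[b * s + m * s**2, -b * s],
     [b * s,            -(b * s + k)]]
Z1, Z2 = solve2(P, [1.0, 0.0])          # F normalized to 1
tf1 = (b * s + k) / (s * (m * b * s**2 + m * k * s + b * k))
tf2 = b / (m * b * s**2 + m * k * s + b * k)
err = max(abs(Z1 - tf1), abs(Z2 - tf2))
```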

Problem 2.13: Find a model for the system in Fig. 2.56.
The governing equations of this system are
f = k(z₁ − z₂)
k(z₁ − z₂) = b(ż₂ − ż₃)
b(ż₂ − ż₃) = mz̈₃.
We first note that there holds f = mz̈₃ and thus Z₃/F = 1/(ms²). By substituting in the last equation of the above set we get Z₂/F = (ms + b)/(bms²). And by substituting in the first equation we get Z₁/F = (bms² + kms + kb)/(kbms²).
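The chain structure of Problem 2.13 makes a similar Laplace-domain check easy: each of the three governing equations should be satisfied exactly by the derived transfer functions (sample m, b, k values below).

```python
# Check of the Problem 2.13 transfer functions against the three governing
# equations f = k(z1-z2), k(z1-z2) = b*s*(z2-z3), b*s*(z2-z3) = m*s^2*z3.

m, b, k = 1.0, 2.0, 3.0
s = 0.4 + 1.1j
F = 1.0
Z3 = F / (m * s**2)
Z2 = F * (m * s + b) / (b * m * s**2)
Z1 = F * (b * m * s**2 + k * m * s + k * b) / (k * b * m * s**2)
r1 = F - k * (Z1 - Z2)
r2 = k * (Z1 - Z2) - b * s * (Z2 - Z3)
r3 = b * s * (Z2 - Z3) - m * s**2 * Z3
```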

Figure 2.54 Particular configuration of a mass-dashpot-spring system.

²⁹ We are interested in a straightforward procedure, not error-prone in hand computations, for finding a closed-form solution. Otherwise, from the numerical/computational point of view we do know that this may not be a good choice; this is a classical issue in scientific computing.
³⁰ For notational simplicity the argument s is suppressed. You should be careful not to confuse time- and frequency-domain variables. For instance, in the original equations given in the previous paragraph the argument t is suppressed.


Figure 2.55 The electrical equivalents of the system in Fig. 2.54.

Figure 2.56 Particular configuration of a mass-dashpot-spring system.

Figure 2.57 The electrical equivalents of the system in Fig. 2.56.

The voltage-force and current-force, or precisely speaking voltage-integral_force and current-integral_force, equivalents of this system are offered in Fig. 2.57 with the governing equations
v = f = (1/C)(∫z₁ − ∫z₂),  (1/C)(∫z₁ − ∫z₂) = R(z₂ − z₃),  R(z₂ − z₃) = Lż₃,
and
i = f = (1/L)(∫z₁ − ∫z₂),  (1/L)(∫z₁ − ∫z₂) = (1/R)(z₂ − z₃),  (1/R)(z₂ − z₃) = Cż₃,
respectively.

Problem 2.14: Obtain a model for the following system; see Fig. 2.58. The system describes a motor rotating a load L (with torque T_L) through two shafts connected by a gear as shown in the picture. The shafts have inertias J_m and J_L, and the gear has the tooth ratio 1:N as shown. The equation v = Ri + Li̇ + k_mω_m holds for the motor, in which v, i, and L denote the applied voltage, the current of the voltage source, and the inductance of the winding; k_m is a constant. Find a model for the system.
The equations of the system are as follows:
T_m = J_mω̇_m + T₁,  T₂ = J_Lω̇ + T_L,  and  T₂/T₁ = ω_m/ω = N,
with the motor torque T_m = k_mi. We define J := J_L + N²J_m and thus obtain ω̇ = (Nk_m/J)i − (1/J)T_L (i). From v = Ri + Li̇ + k_mω_m we calculate i̇ as i̇ = −(Nk_m/L)ω − (R/L)i + (1/L)v (ii). Finally, there holds θ̇ = ω (iii). Equations (i)–(iii) can be combined in the following matrix form and are the state


equations of the system; x = [x₁ x₂ x₃]ᵀ = [θ ω i]ᵀ denotes the state vector of the system. Defining θ and ω as the outputs of the system, the output equations are given as in the following.
[θ̇; ω̇; i̇] = [[0, 1, 0],[0, 0, Nk_m/J],[0, −Nk_m/L, −R/L]][θ; ω; i] + [[0, 0],[0, −1/J],[1/L, 0]][v; T_L]  (State Equations)
[θ; ω] = [[1, 0, 0],[0, 1, 0]][θ; ω; i]  (Output Equations)
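A small sanity check of the motor model: with arbitrary sample parameters (not from the text), the first column of A is zero, so θ enters no derivative and the model has the expected open-loop integrator pole at s = 0, and the trace of A equals −R/L.

```python
# State matrices of the DC motor model of Problem 2.14 with arbitrary
# sample parameters; we verify two structural facts of the model.

Jm, JL, N, km, R, L = 0.01, 0.1, 2.0, 0.5, 1.0, 0.05
J = JL + N**2 * Jm
A = [[0.0, 1.0,         0.0],
     [0.0, 0.0,         N * km / J],
     [0.0, -N * km / L, -R / L]]
B = [[0.0,     0.0],
     [0.0,    -1.0 / J],
     [1.0 / L, 0.0]]

col0 = [row[0] for row in A]          # theta influences no derivative
trace = sum(A[i][i] for i in range(3))  # sum of eigenvalues = -R/L
```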

It is noteworthy that the above system is linear, as opposed to the systems of Problems 2.1–2.8, all nonlinear, for which a linearized model was also obtained. Before reading on, it is advisable to peruse Appendix B, and in particular Lagrange's/Euler–Lagrange's method, if you are not familiar with it.
Problem 2.15: Obtain a model for the two-arm robot manipulator of Fig. 2.59. Note that in case τ₂ = 0 the system can be seen as a version of the double-inverted
Figure 2.58 A DC motor.
Figure 2.59 A two-arm robot.


pendulum system, where the task is to balance the upper pendulum (l₂, m₂) through the exertion of τ₁ on the lower part.
We use Lagrange's method.
KE = (1/2)m₁[((l₁ sin θ₁)˙)² + ((l₁ cos θ₁)˙)²] + (1/2)m₂[((l₁ sin θ₁ + l₂ sin θ₂)˙)² + ((l₁ cos θ₁ + l₂ cos θ₂)˙)²]
= (1/2)m₁(l₁θ̇₁)² + (1/2)m₂[(l₁θ̇₁)² + (l₂θ̇₂)² + 2l₁l₂θ̇₁θ̇₂ cos(θ₁ + θ₂)]
PE = V₀₁ + m₁gl₁ cos θ₁ + V₀₂ + m₂g(l₁ cos θ₁ + l₂ cos θ₂)
L = KE − PE, and the system is assumed conservative. Hence, from d/dt(∂L/∂q̇ᵢ) − ∂L/∂qᵢ = Qᵢ, with q₁ = θ₁, q₂ = θ₂, Q₁ = τ₁, Q₂ = τ₂, we get:
d/dt[m₁l₁²θ̇₁ + m₂l₁²θ̇₁ + m₂l₁l₂θ̇₂ cos(θ₁ + θ₂)] + m₂l₁l₂θ̇₁θ̇₂ sin(θ₁ + θ₂) − m₁gl₁ sin θ₁ − m₂gl₁ sin θ₁ = τ₁,
d/dt[m₂l₂²θ̇₂ + m₂l₁l₂θ̇₁ cos(θ₁ + θ₂)] + m₂l₁l₂θ̇₁θ̇₂ sin(θ₁ + θ₂) − m₂gl₂ sin θ₂ = τ₂,
resulting in
(m₁ + m₂)l₁²θ̈₁ + [m₂l₁l₂ cos(θ₁ + θ₂)]θ̈₂ = τ₁ + m₂l₁l₂θ̇₂² sin(θ₁ + θ₂) + (m₁ + m₂)gl₁ sin θ₁,
[m₂l₁l₂ cos(θ₁ + θ₂)]θ̈₁ + m₂l₂²θ̈₂ = τ₂ + m₂l₁l₂θ̇₁² sin(θ₁ + θ₂) + m₂gl₂ sin θ₂.
Now we form Mq̈ = F, where
M = [[(m₁ + m₂)l₁², m₂l₁l₂ cos(θ₁ + θ₂)],[m₂l₁l₂ cos(θ₁ + θ₂), m₂l₂²]],  q̈ = [θ̈₁; θ̈₂],
F = [τ₁ + m₂l₁l₂θ̇₂² sin(θ₁ + θ₂) + (m₁ + m₂)gl₁ sin θ₁; τ₂ + m₂l₁l₂θ̇₁² sin(θ₁ + θ₂) + m₂gl₂ sin θ₂].


M is the so-called mass matrix, as expected symmetric (and positive definite, thus invertible). Hence, q̈ = M⁻¹F:
q̈ = (1/Δ)[ m₂l₂²[τ₁ + m₂l₁l₂θ̇₂² sin(θ₁ + θ₂) + (m₁ + m₂)gl₁ sin θ₁] − m₂l₁l₂ cos(θ₁ + θ₂)[τ₂ + m₂l₁l₂θ̇₁² sin(θ₁ + θ₂) + m₂gl₂ sin θ₂];
−m₂l₁l₂ cos(θ₁ + θ₂)[τ₁ + m₂l₁l₂θ̇₂² sin(θ₁ + θ₂) + (m₁ + m₂)gl₁ sin θ₁] + (m₁ + m₂)l₁²[τ₂ + m₂l₁l₂θ̇₁² sin(θ₁ + θ₂) + m₂gl₂ sin θ₂] ],
in which
Δ = det M = m₂(m₁ + m₂)l₁²l₂² − m₂²l₁²l₂² cos²(θ₁ + θ₂) = m₂l₁²l₂²[m₁ + m₂ sin²(θ₁ + θ₂)].
Note that Δ ≠ 0. Then, defining x = [x₁ x₂ x₃ x₄]ᵀ = [θ₁ θ₂ θ̇₁ θ̇₂]ᵀ as the state vector, the dynamics of this system can be modeled by
ẋ = [x₃(t); x₄(t); f₃(x, τ₁, τ₂, t); f₄(x, τ₁, τ₂, t)],
in which
f₃ = (1/Δ){m₂l₂²[τ₁ + m₂l₁l₂x₄² sin(x₁ + x₂) + (m₁ + m₂)gl₁ sin x₁] − m₂l₁l₂ cos(x₁ + x₂)[τ₂ + m₂l₁l₂x₃² sin(x₁ + x₂) + m₂gl₂ sin x₂]},
f₄ = (1/Δ){−m₂l₁l₂ cos(x₁ + x₂)[τ₁ + m₂l₁l₂x₄² sin(x₁ + x₂) + (m₁ + m₂)gl₁ sin x₁] + (m₁ + m₂)l₁²[τ₂ + m₂l₁l₂x₃² sin(x₁ + x₂) + m₂gl₂ sin x₂]},

Δ = m₂l₁²l₂²[m₁ + m₂ sin²(x₁ + x₂)].
The output is defined according to the task of the system. As a robot manipulator, the output is certainly the coordinates of the end-point load m₂, given by x_o = l₁ cos θ₁ + l₂ cos θ₂, y_o = l₁ sin θ₁ + l₂ sin θ₂. As for the double inverted pendulum, the balancing task is performed if θ₂ = 90° and θ̇₂ = 0. Hence, as a robot manipulator,
y = [l₁ cos x₁ + l₂ cos x₂; l₁ sin x₁ + l₂ sin x₂],


and as a double inverted pendulum,
y = [[0, 1, 0, 0],[0, 0, 0, 1]]x, which should track r = [90°; 0].
It should be noted that the state equations for both systems are the same, except that in the double inverted pendulum τ₂ = 0. The linearized model around a trajectory (the robot manipulator case) is given by the general formula of Section 2.2.1.1, where the operating point is the general state x* = [x₁* x₂* x₃* x₄*]ᵀ = [θ₁* θ₂* θ̇₁* θ̇₂*]ᵀ. In other words, f(x*) ≠ 0 and it cannot be further simplified. However, for the inverted pendulum case it can be simplified; for instance, linearizing about the upright equilibrium θ₁ = θ₂ = 0, θ̇₁ = θ̇₂ = 0 (with τ₂ = 0 and input τ₁) one obtains
A = [[0, 0, 1, 0],[0, 0, 0, 1],[(1 + m₂/m₁)g/l₁, −m₂g/(m₁l₁), 0, 0],[−(1 + m₂/m₁)g/l₂, (1 + m₂/m₁)g/l₂, 0, 0]],
B = [0; 0; 1/(m₁l₁²); −1/(m₁l₁l₂)],  C = [[0, 1, 0, 0],[0, 0, 0, 1]].
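The explicit accelerations of Problem 2.15 amount to inverting the 2 × 2 mass matrix by its adjugate. The sketch below verifies, at one arbitrary state and with sample parameter values, that they satisfy Mq̈ = F and that det M matches m₂l₁²l₂²[m₁ + m₂ sin²(θ₁ + θ₂)].

```python
import math

# Problem 2.15: check the explicit expressions for the joint accelerations
# against a direct solve of M qdd = F (sample parameters and state).

m1, m2, l1, l2, g = 1.0, 0.8, 0.6, 0.4, 9.81
th1, th2, w1, w2 = 0.3, -0.2, 0.5, -0.7
tau1, tau2 = 0.2, 0.1

c = math.cos(th1 + th2)
s = math.sin(th1 + th2)
M11 = (m1 + m2) * l1**2
M12 = m2 * l1 * l2 * c
M22 = m2 * l2**2
F1 = tau1 + m2 * l1 * l2 * w2**2 * s + (m1 + m2) * g * l1 * math.sin(th1)
F2 = tau2 + m2 * l1 * l2 * w1**2 * s + m2 * g * l2 * math.sin(th2)

det = M11 * M22 - M12**2
qdd1 = (M22 * F1 - M12 * F2) / det      # f3 in the text's notation
qdd2 = (-M12 * F1 + M11 * F2) / det     # f4

# Residuals of M qdd = F, and the determinant identity:
r1 = M11 * qdd1 + M12 * qdd2 - F1
r2 = M12 * qdd1 + M22 * qdd2 - F2
d_check = det - m2 * l1**2 * l2**2 * (m1 + m2 * s**2)
```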

Problem 2.16: Develop a model for the double pendulum system; see Fig. 2.60.
We use Lagrange's method. Let z denote the position of the center of mass of the cart on which the pivots are mounted. Thus,
KE = (1/2)Mż² + (1/2)m₁[((z + l₁ sin θ₁)˙)² + ((l₁ cos θ₁)˙)²] + (1/2)m₂[((z + l₂ sin θ₂)˙)² + ((l₂ cos θ₂)˙)²]
PE = V₀₁ + m₁gl₁ cos θ₁ + V₀₂ + m₂gl₂ cos θ₂
Figure 2.60 A double pendulum system.


L = KE − PE, and the system is assumed conservative. Hence, from d/dt(∂L/∂q̇ᵢ) − ∂L/∂qᵢ = Qᵢ, with q₁ = z as the position of the cart, q₂ = θ₁, q₃ = θ₂, we get:
d/dt[Mż + m₁(ż + l₁θ̇₁ cos θ₁) + m₂(ż + l₂θ̇₂ cos θ₂)] = F,
d/dt[m₁l₁²θ̇₁ + m₁l₁ż cos θ₁] + m₁l₁żθ̇₁ sin θ₁ − m₁gl₁ sin θ₁ = 0,
d/dt[m₂l₂²θ̇₂ + m₂l₂ż cos θ₂] + m₂l₂żθ̇₂ sin θ₂ − m₂gl₂ sin θ₂ = 0,
yielding,
(M + m₁ + m₂)z̈ + (m₁l₁ cos θ₁)θ̈₁ + (m₂l₂ cos θ₂)θ̈₂ = m₁l₁θ̇₁² sin θ₁ + m₂l₂θ̇₂² sin θ₂ + F,
(m₁l₁ cos θ₁)z̈ + m₁l₁²θ̈₁ = m₁gl₁ sin θ₁,
(m₂l₂ cos θ₂)z̈ + m₂l₂²θ̈₂ = m₂gl₂ sin θ₂.
Therefore, Mq̈ = F, where
M = [[M + m₁ + m₂, m₁l₁ cos θ₁, m₂l₂ cos θ₂],[m₁l₁ cos θ₁, m₁l₁², 0],[m₂l₂ cos θ₂, 0, m₂l₂²]],  q̈ = [z̈; θ̈₁; θ̈₂],
F = [F + m₁l₁θ̇₁² sin θ₁ + m₂l₂θ̇₂² sin θ₂; m₁gl₁ sin θ₁;

m₂gl₂ sin θ₂].
M is the so-called mass matrix, as expected symmetric (and positive definite and invertible). Hence, q̈ = M⁻¹F. Before doing the calculations, the following simplification is made: the second and third rows are divided by m₁l₁ and m₂l₂, respectively, to obtain M̄q̈ = F̄, in which
M̄ = [[M + m₁ + m₂, m₁l₁ cos θ₁, m₂l₂ cos θ₂],[cos θ₁, l₁, 0],[cos θ₂, 0, l₂]],
F̄ = [F + m₁l₁θ̇₁² sin θ₁ + m₂l₂θ̇₂² sin θ₂; g sin θ₁; g sin θ₂].


Thus,
M̄⁻¹ = (1/Δ)[[l₁l₂, −l₂ cos θ₁, −l₁ cos θ₂],[−m₁l₁l₂ cos θ₁, (M + m₁ + m₂)l₂ − m₂l₂ cos²θ₂, m₁l₁ cos θ₁ cos θ₂],[−m₂l₁l₂ cos θ₂, m₂l₂ cos θ₁ cos θ₂, (M + m₁ + m₂)l₁ − m₁l₁ cos²θ₁]]ᵀ,
where
Δ = (M + m₁ + m₂)l₁l₂ − m₁l₁l₂ cos²θ₁ − m₂l₁l₂ cos²θ₂ = l₁l₂[M + m₁(1 − cos²θ₁) + m₂(1 − cos²θ₂)] = l₁l₂(M + m₁ sin²θ₁ + m₂ sin²θ₂)
(note that Δ ≠ 0), and
q̈ = (1/Δ)[ l₁l₂[F + m₁l₁θ̇₁² sin θ₁ + m₂l₂θ̇₂² sin θ₂] − m₁l₁l₂ cos θ₁[g sin θ₁] − m₂l₁l₂ cos θ₂[g sin θ₂];
−l₂ cos θ₁[F + m₁l₁θ̇₁² sin θ₁ + m₂l₂θ̇₂² sin θ₂] + [(M + m₁ + m₂)l₂ − m₂l₂ cos²θ₂][g sin θ₁] + m₂l₂ cos θ₁ cos θ₂[g sin θ₂];
−l₁ cos θ₂[F + m₁l₁θ̇₁² sin θ₁ + m₂l₂θ̇₂² sin θ₂] + m₁l₁ cos θ₁ cos θ₂[g sin θ₁] + [(M + m₁ + m₂)l₁ − m₁l₁ cos²θ₁][g sin θ₂] ].

Then, defining x = [x₁ x₂ x₃ x₄ x₅ x₆]ᵀ = [z ż θ₁ θ̇₁ θ₂ θ̇₂]ᵀ as the state vector, and y = [θ₁ θ₂]ᵀ as the output vector, the dynamics of this system can be modeled by
ẋ = [x₂(t); f₂(x, F, t); x₄(t); f₄(x, F, t); x₆(t); f₆(x, F, t)],
y = [[0, 0, 1, 0, 0, 0],[0, 0, 0, 0, 1, 0]]x,
in which
f₂ = (1/Δ){l₁l₂[F + m₁l₁x₄² sin x₃ + m₂l₂x₆² sin x₅] − m₁l₁l₂ cos x₃[g sin x₃] − m₂l₁l₂ cos x₅[g sin x₅]},


f₄ = (1/Δ){−l₂ cos x₃[F + m₁l₁x₄² sin x₃ + m₂l₂x₆² sin x₅] + [(M + m₁ + m₂)l₂ − m₂l₂ cos²x₅][g sin x₃] + m₂l₂ cos x₃ cos x₅[g sin x₅]},
f₆ = (1/Δ){−l₁ cos x₅[F + m₁l₁x₄² sin x₃ + m₂l₂x₆² sin x₅] + m₁l₁ cos x₃ cos x₅[g sin x₃] + [(M + m₁ + m₂)l₁ − m₁l₁ cos²x₃][g sin x₅]},
Δ = l₁l₂(M + m₁ sin²x₃ + m₂ sin²x₅).
The linearized model around the equilibrium point x₃ = x₄ = x₅ = x₆ = 0 is given by Δẋ = AΔx + BΔu, Δy = CΔx + DΔu, where
A = [[0, 1, 0, 0, 0, 0],
[0, 0, −m₁g/M, 0, −m₂g/M, 0],
[0, 0, 0, 1, 0, 0],
[0, 0, (M + m₁)g/(Ml₁), 0, m₂g/(Ml₁), 0],
[0, 0, 0, 0, 0, 1],
[0, 0, m₁g/(Ml₂), 0, (M + m₂)g/(Ml₂), 0]],
B = [0; 1/M; 0; −1/(l₁M); 0; −1/(l₂M)],
C = [[0, 0, 1, 0, 0, 0],[0, 0, 0, 0, 1, 0]],  D = [0; 0].
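The linearization of the double pendulum on a cart can be verified by finite differences of the nonlinear accelerations about the origin. The acceleration formulas below follow our reading of the M̄⁻¹F̄ expressions, with sample parameter values, so the partial derivatives should reproduce entries such as (M + m₁)g/(Ml₁) and −m₁g/M.

```python
import math

# Finite-difference verification of the linearized double-pendulum-on-cart
# model of Problem 2.16 about the origin (sample parameter values).

M_, m1, m2, l1, l2, g = 2.0, 0.5, 0.3, 0.7, 0.4, 9.81

def accels(th1, th2, w1, w2, F):
    s1, c1 = math.sin(th1), math.cos(th1)
    s2, c2 = math.sin(th2), math.cos(th2)
    D = l1 * l2 * (M_ + m1 * s1**2 + m2 * s2**2)
    Fb = F + m1 * l1 * w1**2 * s1 + m2 * l2 * w2**2 * s2
    zdd = (l1 * l2 * Fb - m1 * l1 * l2 * c1 * g * s1
           - m2 * l1 * l2 * c2 * g * s2) / D
    t1dd = (-l2 * c1 * Fb + ((M_ + m1 + m2) * l2 - m2 * l2 * c2**2) * g * s1
            + m2 * l2 * c1 * c2 * g * s2) / D
    t2dd = (-l1 * c2 * Fb + m1 * l1 * c1 * c2 * g * s1
            + ((M_ + m1 + m2) * l1 - m1 * l1 * c1**2) * g * s2) / D
    return zdd, t1dd, t2dd

eps = 1e-6
# d(theta1dd)/d(theta1) at the origin: expected (M + m1) g / (M l1)
a43 = (accels(eps, 0, 0, 0, 0)[1] - accels(-eps, 0, 0, 0, 0)[1]) / (2 * eps)
# d(zdd)/d(theta1): expected -m1 g / M
a23 = (accels(eps, 0, 0, 0, 0)[0] - accels(-eps, 0, 0, 0, 0)[0]) / (2 * eps)
# d(theta2dd)/dF: expected -1/(M l2)
b6 = (accels(0, 0, 0, 0, eps)[2] - accels(0, 0, 0, 0, -eps)[2]) / (2 * eps)
```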

We close the problem by encouraging the reader to analyze when the stabilization objective is achievable. Moreover, it would be instructive to repeat this problem and Problem B.2 of Appendix B for the case where the cart is on a slope and the force is applied at an arbitrary angle with the horizon.
Problem 2.17: Obtain a model for the system given in Fig. 2.61. The cylinder has mass m and radius r. The force f is applied to its axis.
The motion of this system is governed by the equation ((1/2)mr² + mr²)θ̈ = −fr − F_z r, in which F_z is the horizontal component of the spring force. Let the spring have length L₀ at rest. Hence dL = √((h + r)² + z²) − L₀ is the
Figure 2.61 Confined rotational mass system.


increase in the length of the spring, in which z is the position of the cylinder. Projecting the spring force horizontally, there holds
F_z = k dL · z/√((h + r)² + z²) = kz(1 − L₀/√((h + r)² + z²)).
On the other hand, θ = z/r rad and θ̈ = z̈/r rad/s². Thus
(3/2)mz̈ + kz(1 − L₀/√((h + r)² + z²)) = −f.
Defining x₁ = z and x₂ = ż we have
ẋ₁ = x₂
ẋ₂ = −(2k/3m)x₁(1 − L₀/√((h + r)² + x₁²)) − (2/3m)f.
We leave the linearization of this system to the reader. It is good to note that the interpretation of the second-order equation in z is that the spring opposes the motion/rotation of the cylinder by a nonlinear force, although the spring is itself linear; this is because of the configuration of the system. We close this problem by encouraging the reader, especially of mechanical engineering, to repeat the problem when the force f is applied at an arbitrary point on the outer surface of the cylinder, perhaps at an angle with the horizon.
Problem 2.18: Obtain a model for the Translational Oscillator Rotational Actuator (TORA), also known as the Rotational Translational ACtuator (RTAC) system, given in Fig. 2.62. Consider two different cases: (i) lossless system; (ii) actual system to be mounted on a magnetically levitated table, thus suffering from eddy-current adverse effects denoted by viscous friction c, as well as braking forces denoted by viscous friction b.
(i) Applying Lagrange's method one obtains:
KE = (1/2)Mż² + (1/2)m[((z + l sin θ)˙)² + ((l cos θ)˙)²] + (1/2)Jθ̇²
PE = (1/2)kz²

Figure 2.62 The TORA system. Adopted from Kiseleva et al. (2016), with permission.


L = KE − PE, and the system is conservative. Hence, from d/dt(∂L/∂q̇ᵢ) − ∂L/∂qᵢ = 0 (plus the generalized forces), with q₁ = z, q₂ = θ, we get:
d/dt(Mż + mż + mlθ̇ cos θ) + kz = f,
d/dt(ml²θ̇ + mlż cos θ + Jθ̇) + mlżθ̇ sin θ = τ,
yielding,
(M + m)z̈ + kz = −ml(θ̈ cos θ − θ̇² sin θ) + f,  (2.13)
(ml² + J)θ̈ = −mlz̈ cos θ + τ.
Therefore, with x = [x₁, x₂, x₃, x₄]ᵀ = [z, ż, θ, θ̇]ᵀ, a state-space model can easily be obtained. The procedure is similar to the previous problems and is thus left to the reader. The reader should observe the NMPness of the system. (ii) In this case equation (2.13) is replaced by (M + m)z̈ + kz + bż = −ml(θ̈ cos θ − θ̇² sin θ) − cż + f. Here again it should be fairly easy for the reader to complete the rest of the procedure. We close this problem by adding that with a change of variables we can find a simpler and nicer model representation. The system is adopted from Kiseleva et al. (2016), where hidden attractors of the system are studied. Note that the system can be considered as a simplified model of stabilizing a seismically excited structure by a rotating actuator; recall Example 2.16.
Problem 2.19: Consider a two-room building where room 1 is heated by a source. The ambient (i.e., outside) temperature is assumed constant. The roof and floor are assumed to be insulated. Derive a mathematical model for this system, which is shown in Fig. 2.63, left panel. Denote the rate of heat flow of the source by q, the average temperature of room i = 1, 2 by θᵢ, the thermal resistance between room i = 1, 2 and the ambient by R_ia, the thermal resistance between the rooms by R₁₂ = R₂₁, and the thermal capacitance of room i = 1, 2 by Cᵢ.
The conservation law for heat is ⟨Heat in⟩ = ⟨Heat out⟩ + ⟨Heat stored⟩. See Fig. 2.63, right panel. Thus for room 1 we have

Figure 2.63 Problem 2.19, Left: A typical thermal system. Right: Circuit equivalent.


q = (θ1 − θa)/R1a + (θ1 − θ2)/R12 + C1 d(θ1 − θa)/dt.

Note that the first and second terms are the ⟨heat out⟩ and the third term is the ⟨heat stored⟩. For room 2 there holds

0 = (θ2 − θa)/R2a + (θ2 − θ1)/R12 + C2 d(θ2 − θa)/dt.

Hence, noting that dθa/dt = 0, and defining x1 = θ1, x2 = θ2, and x = [x1 x2]^T as the state of the system, the state equations are given by

ẋ1 = −(1/C1)(1/R1a + 1/R12)x1 + (1/(R12C1))x2 + (1/(R1aC1))ua + (1/C1)uq
ẋ2 = (1/(R12C2))x1 − (1/C2)(1/R2a + 1/R12)x2 + (1/(R2aC2))ua

in which ua = θa and uq = q. The input to the system is u = [ua uq]. Note that for a constant θa, without loss of generality it is often assumed that θa = 0 (why can we do so?) and thus the system will have the single input u = uq = q. Finally, we should add that for a single room which is heated, the model gives the transfer function of Example 2.23.

Problem 2.20: The pH neutralization reactor is used to neutralize the pH of an acid or a base. Consider the process of pH neutralization of a strong acid by a strong base in the presence of a buffer agent. The neutralization takes place in a continuous stirred-tank reactor. The volume V of the tank is assumed constant. A strong acid with time-varying volumetric flow rate qA ≠ 0 and fixed composition zA runs into the tank and is neutralized by the volumetric flow rate qB of a strong base of known composition zB and a buffer agent of composition zBU. Derive a model for this system. We assume that, due to the high reaction rates of the neutralization process, the chemical equilibrium condition is achieved instantaneously. Besides, we assume that the acid, base, and buffer are strong enough that total dissociation³¹ of the three compounds takes place. The dynamics of the process are governed by the law of mass conservation and the electro-neutrality condition. Let [·] denote the concentration of '·'. Denote x1 = [A⁻], x2 = [B⁺], x3 = [X⁻], and x = [x1 x2 x3]^T as the state of the system. Also, for convenience let x_{i,1} = zA, x_{i,2} = zB, x_{i,3} = zBU. Assuming constant V, the dynamics of the system are given by

ẋ1 = (1/ϑ)(x_{i,1} − x1) − (1/V)x1u,

31 A dissociation constant is a specific type of equilibrium constant that measures the propensity of a larger object to dissociate (or separate) reversibly into smaller components. For the general reaction AxBy ⇌ xA + yB, in which the complex AxBy dissociates into x A subunits and y B subunits, the dissociation constant is defined as Kd = [A]^x [B]^y / [AxBy], where the brackets denote concentration.

System representation

157

ẋ2 = −(1/ϑ)x2 + (1/V)(x_{i,2} − x2)u,
ẋ3 = −(1/ϑ)x3 + (1/V)(x_{i,3} − x3)u.

On the other hand, the celebrated pH equation is

f(x, ζ) = −x1 + x2 + x3 + ζ − KW/ζ − x3/(1 + (KBU/KW)ζ) = 0.
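For given state values, the pH is obtained from the positive root ζ of the equation above; a bisection sketch with illustrative constants (the concentrations and the buffer constant below are not the values of Galan et al.):

```python
import math

# Bisection (on a log scale) for the pH equation
#   f(zeta) = -x1 + x2 + x3 + zeta - KW/zeta - x3/(1 + (KBU/KW)*zeta) = 0,
# with zeta = 10**(-pH). f is strictly increasing in zeta > 0, so the root is unique.
KW = 1e-14                      # dissociation constant of water
KBU = 1e-7                      # illustrative buffer constant (not from Galan et al.)
x1, x2, x3 = 1e-3, 2e-3, 5e-4   # illustrative concentrations [A-], [B+], [X-]

def f(zeta):
    return -x1 + x2 + x3 + zeta - KW / zeta - x3 / (1 + (KBU / KW) * zeta)

lo, hi = 1e-14, 1.0             # bracket with f(lo) < 0 < f(hi)
for _ in range(200):
    mid = math.sqrt(lo * hi)    # geometric midpoint suits the 14-decade range
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

zeta = math.sqrt(lo * hi)
pH = -math.log10(zeta)
```

For these (basic-excess) illustrative numbers the root lands near pH 11; with the actual parameter values of a given plant the same routine traces out the titration curve point by point.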

Note that this is a third-order equation in ζ. In the above equations ζ = 10^(−pH), ϑ = V/qA, and u = qA/qB. The symbols KW and KBU, with known values, denote the dissociation constants of the water and the buffer. The output of the system is the pH of the solution, which should actually be controlled by manipulating qB; this shows itself in the form of manipulating u = qA/qB, which is defined as the input in the above model. The titration curve³² (or input-output map) of the system is also known. It is given by a curve like³³ that of Fig. 2.64, whose exact values are known for a specific example. The problem is adopted from Galan et al. (2000), where further details, including parameter values for a specific example, are also provided. We leave it to the reader to investigate whether the system is NMP.

Figure 2.64 Problem 2.20. A typical acid-base titration curve (pH versus qB/qA).

32 Titration or titrimetry refers to a laboratory method of quantitative chemical analysis used to determine the unknown concentration of an analyte. A reagent, called the titrant or titrator, of known concentration is used to react with the analyte, or titrand, in order to determine its concentration. A titration curve is a curve in the plane whose x-coordinate is the volume of titrant added since the beginning of the titration and whose y-coordinate is the concentration of the analyte at the corresponding stage of the titration. The most common titration is the acid-base titration or pH neutralization, for whose titration curve the x-coordinate is qB/qA and the y-coordinate is the pH of the solution.
33 The rates of rise depend on whether the acid and base are strong or not.

Problem 2.21: Not every chemical process is describable by deterministic population balances or the moment equation. For instance, droplet-based microfluidic systems are stochastic and are modeled by the so-called Master equation. In this


problem a model for crystal nucleation in such systems is proposed. The nucleation expression can have any form under the conditions of time-varying supersaturation. Assume that a nucleus can grow large enough to become observable in a negligible time and that the solution is spatially uniform. Denote by k(t) the nucleation rate in a whole droplet. Thus k(t)dt is the probability that a critical nucleus will form during an infinitesimal time interval dt. The dynamics of the probability Pn(t) that a droplet contains n crystals is governed by the so-called Master equation,

dP0(t)/dt = −k(t)P0(t),  P0(0) = 1
dPn(t)/dt = k(t)[P_{n−1}(t) − Pn(t)],  Pn(0) = 0, n = 1, 2, ...

It is further assumed in the equations for n = 1, 2, ... that earlier nuclei do not grow fast enough to considerably deplete solute from the solution. This is a reasonable assumption provided that the crystals observed in each droplet are approximately the same size. The above-mentioned equations describe a non-stationary Poisson process and can be solved recursively, or by defining a probability-generating function, to yield

P0(t) = exp(−∫₀ᵗ k(τ)dτ)
Pn(t) = (1/n!)(∫₀ᵗ k(τ)dτ)^n exp(−∫₀ᵗ k(τ)dτ), n = 1, 2, ...

In the case of a constant nucleation rate k(t) = k we get Pn(t) = (1/n!)(kt)^n e^(−kt), n = 0, 1, 2, .... The problem is adopted from Goh et al. (2010), where many other details are also provided. In the sequel we cite a little more discussion of the system from the aforementioned reference. Consider a high-throughput microfluidic device with a large number of droplets. Assume that each droplet moves from under-saturated to saturated and then to super-saturated conditions. The mean number of crystals at time t (averaged over a sufficiently large number of droplets) is computed from

E[N(t)] = Σ_{n=0}^∞ nPn(t) = Σ_{n=0}^∞ (n/n!)(∫_{tsat}^{t} k(τ)dτ)^n exp(−∫_{tsat}^{t} k(τ)dτ) = ∫_{tsat}^{t} k(τ)dτ,

where E denotes the expectation. Hence, the time t at which the mean number of crystals equals 1 is found from ∫_{tsat}^{t} k(τ)dτ = 1.
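With a constant rate the Master equation describes an ordinary Poisson counting process; a forward-Euler sketch confirming the closed-form Pn(t) numerically (the rate k, time step, and truncation level are illustrative):

```python
import math

# Forward-Euler integration of the Master equation with constant nucleation
# rate k(t) = k, truncated at N crystals. With constant k the solution should
# match the Poisson distribution P_n(t) = (k t)^n e^{-k t} / n!.
def master_equation(k=1.0, t_end=1.0, dt=1e-4, N=25):
    P = [1.0] + [0.0] * N          # P_0(0) = 1, P_n(0) = 0 for n >= 1
    steps = int(round(t_end / dt))
    for _ in range(steps):
        dP = [-k * P[0]] + [k * (P[n - 1] - P[n]) for n in range(1, N + 1)]
        P = [p + dt * d for p, d in zip(P, dP)]
    return P

P = master_equation()
poisson = [math.exp(-1.0) / math.factorial(n) for n in range(26)]  # k*t = 1
```

The total probability is conserved up to the (negligible) mass leaking past the truncation level, which is one quick sanity check on the scheme.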


Problem 2.22: In the statement of Example 2.17 assume that there is a recycling of the microorganism, which is added to the nutrient concentration, and that growth does not take place immediately after consumption. Modify the model to address these realistic assumptions. The model can have the general form

ẋ1 = −Dx1 − kC(x1)x2 + ∫_{−∞}^{t} f1(t − τ)x2(τ)dτ + Du
ẋ2 = −Dx2 + kx2(t)∫_{−∞}^{t} f2(t − τ)C(x1(τ))dτ.

This is a set of integro-differential equations (see Appendix A). The answer certainly depends on the delay kernels f1, f2. Such models have been extensively studied in the literature; the details go beyond the scope of this book. The model is adopted from Rao and Rao (2009) and slightly modified. The reference gives an excellent overview of the subject. We cannot say much about the theoretical properties of such models, including their stability, as they require graduate knowledge of the field.

Problem 2.23: In genetics and hereditary studies we are interested in knowing the genotype of offspring after n generations. Consider the two genes A and a with the possible pairings AA, Aa, and aa, which specify the individual's genotype. One type of inheritance is the autosomal, in which every offspring takes one gene from each of its parents' genotypes to form its own particular genotype. For instance, in humans eye coloration is dictated by autosomal inheritance. The same type of inheritance also dictates many traits in animals and plants. Now assume a farmer has a cultivation program in which each plant in the population is always fertilized with a plant of genotype AA and is then replaced by one of its offspring. Find a model for the representation of the distribution of genotypes after n generations. We start by finding the genotype probabilities of the offspring for different pairs of parents. This is summarized below.

Genotype of offspring        Genotype of parents
                    AA-AA   AA-Aa   AA-aa   Aa-Aa   Aa-aa   aa-aa
AA                    1      1/2      0      1/4      0       0
Aa                    0      1/2      1      1/2     1/2      0
aa                    0       0       0      1/4     1/2      1

Now let αn, βn, and γn denote the fractions of the offspring with genotypes AA, Aa, and aa in the nth generation. There holds αn + βn + γn = 1. If we derive the table for n = 2, 3, ... by inspection (which can also easily be proven precisely) we see that αn = α_{n−1} + (1/2)β_{n−1}, βn = (1/2)β_{n−1} + γ_{n−1}, and γn = 0 for n = 1, 2, .... Hence, defining x(n) = [αn βn γn]^T as the state of the system at the nth generation, we observe that

x(n) = Mx(n − 1),   M = [ 1  1/2  0
                          0  1/2  1
                          0   0   0 ].
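The recursion can be checked with exact rational arithmetic; the sketch below iterates x(n) = Mx(n − 1) from an arbitrary illustrative initial distribution and compares it against the closed-form solution αn = 1 − β0(1/2)^n − γ0(1/2)^(n−1), βn = β0(1/2)^n + γ0(1/2)^(n−1), γn = 0 obtained below via the Jordan form:

```python
from fractions import Fraction as F

# Transition matrix of the AA-fertilization program: x(n) = M x(n-1),
# with x = [alpha, beta, gamma]^T the fractions of genotypes AA, Aa, aa.
M = [[F(1), F(1, 2), F(0)],
     [F(0), F(1, 2), F(1)],
     [F(0), F(0),    F(0)]]

def step(x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

x = [F(1, 4), F(1, 2), F(1, 4)]   # illustrative initial distribution (sums to 1)
a0, b0, g0 = x
history = [x]
for n in range(1, 11):
    x = step(x)
    history.append(x)
```

Exact fractions avoid any floating-point doubt about whether the iterates really coincide with the closed form.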


We note that this equation can be simplified as x(n) = Mx(n − 1) = M²x(n − 2) = ⋯ = M^n x(0), where x(0) denotes the initial distribution of the genotypes. This is the desired model. It is a discrete-time model, in which the continuous-time variable t is replaced by the discrete-time variable n. It is possible to further simplify the model by finding a simple expression for M^n. To this end we use the Jordan form of M, M = VJV⁻¹, and thus M^n = VJ^nV⁻¹. In this example

V = [ 1   1   1        J = [ 1   0   0
      0  −1  −2              0  1/2  0
      0   0   1 ],           0   0   0 ],

and hence (one may verify that V⁻¹ = V)

M^n = VJ^nV⁻¹ = [ 1   1 − (1/2)^n   1 − (1/2)^(n−1)
                  0     (1/2)^n       (1/2)^(n−1)
                  0        0               0        ].

Thus x(n) = M^n x(0) and, because α0 + β0 + γ0 = 1, we have αn = 1 − β0(1/2)^n − γ0(1/2)^(n−1), βn = β0(1/2)^n + γ0(1/2)^(n−1), and γn = 0. Therefore, in the limit as n → ∞ we get αn → 1, βn → 0, and γn = 0. That is, the genotype AA will eventually be the only type in the population. Other scenarios of the problem are addressed in Exercises 2.48 and 2.49. This problem and these exercises are taken from Bavafa-Toosi (1996), in which other types of inheritance are also discussed.

Problem 2.24: Mathematical modeling of the biological and physiological functioning of the human body first became a hot topic of research as early as the 1960s; see, e.g., Howard (1966). It is now a greatly expanded and established field: today, sophisticated models of the brain, heart, ocular, respiratory, gait, etc. systems are available. To give you a flavor of the subject we present a model developed for the left heart. The following model is extracted from Lessard (2009) and modified to our context. The cardiovascular system comprises the heart, the blood vessels, and the blood. The role of the cardiac system is to pump the blood (containing oxygen, nutrients, immune cells, etc.) to all the tissues and organs of the body. The human heart consists of four chambers (two atria and two ventricles) grouped into the left heart and the right heart, which function as independent pumps. The right atrium receives blood from the venae cavae and sends it to the right ventricle, from which the blood is pumped to the lungs for oxygenation. The left atrium receives oxygenated blood from the lungs and sends it to the left ventricle, which pumps the oxygenated blood to the whole body. Considering the left heart as the plant, the input to the plant is the left atrial pressure and the output of the plant is the systemic pressure (or, simply said, the blood pressure). There are different systems, like the respiratory and nervous systems, which affect the functioning of the left heart; for simplicity we ignore them. We model the left ventricle as the time-varying capacitor Cv.
The systemic circulation, representing a model for the body through which oxygenated blood is circulated, is modeled by rs, Rs, and Cs. It seems reasonable to include at least one inductor

System representation

vl

161

il Dm Cl

Rs

vv Da

Rm Cv

va Ra

ia

rs

vs

Ls Cs

Figure 2.65 Problem 2.24. Left heart. Adopted from Lessard 2009, with permission.

as well; it is shown by Ls. Note that in a more precise analysis all four of these parameters can also be time-varying. The pulmonary (or lung) circulation and the left atrium are modeled as Cl. The heart valves, which guarantee the unidirectional flow of blood, are modeled as diodes with resistances Dm, Rm (for the mitral valve) and Da, Ra (for the aortic valve). The circuit is given in Fig. 2.65. The input to the system is vl, which plays the role of the left atrial pressure. The output of the system is vs, which plays the role of the systemic pressure. The ventricular and aortic pressures are denoted by vv and va, respectively. The blood flows through the mitral and aortic valves are represented by il and ia, respectively. Apart from the input-output transfer function Vs(s)/Vl(s), two other transfer functions can also be defined in this model. One is the 'filling' transfer function Vv(s)/Il(s), and the other is the 'ejection' transfer function Va(s)/Ia(s). The philosophy behind the terms filling and ejection should be clear. We should discuss a slight mistake in the literature. We cannot use the theory of LTI systems in the usual manner to find the aforementioned transfer functions. This would be correct only if we considered time-invariant elements, in particular the capacitor Cv (and rs, Rs, Ls, Cs). Under this simplifying condition, we leave it to the reader to find the transfer functions by assuming a short circuit for the diodes. In fact, by identification of the system from clinical data, the values of the parameters of these transfer functions are also available in the literature. We also add that whether we had better place Ls in the shown position, in series with Rs, or even elsewhere can best be decided if we have access to clinical data. On the other hand, we should add that if we wish to derive an LTV model for the system, then we can for instance use the ZTF, standing for Zadeh Transfer Function, which was proposed by L. A. Zadeh in 1950 and is of course outside the scope of this book (Zadeh, 1950a, 1950b, 1961). For the ease of your future reference we also add that pertinent research papers often appear in forums related to bioengineering, biomechanics, biomathematics, neurobiology, neuroscience, physiology, etc. An interesting result on quantifying the health of the heart is provided in Gholizade-Narm et al. (2010). Among the hot topics in bioengineering is modeling the brain from different perspectives; see, e.g., Hemami and Moussavi (2014), Shanechi et al. (2014).

Problem 2.25: Among the systems of economics, two are due to Wassily Leontief, a Nobel laureate in economics. These models are called input-output models:


one is the closed economy and the other is the open economy (Leontief, 1966). In the closed economy there is no profit: the different sectors/regions balance out. In the open economy there is a level of profitability. In the closed economy, as we shall see in this problem and also in Exercises 2.52-2.55, the system is described by the equation x = Ex. In the open economy, as we shall have in Exercises 2.56 and 2.57, the system is described by the equation x = Cx + d. The matrix C and vector d may be called the consumption matrix and the production vector, respectively. The following example is a simple system based on Leontief's notion of the closed economy. Consider an economy consisting of three workers: an electrician, a tailor, and a mechanic. They decide to do service for each other according to the following agreement, in which each person works 10 hours:

                               Hours of work done by
                               Electrician   Tailor   Mechanic
Work done for the electrician       2           4         1
Work done for the tailor            4           1         6
Work done for the mechanic          4           5         3

What should their hourly wages be so that each worker's expenditure equals his income with regard to the aforementioned service? Denoting their hourly wages by w1 = we, w2 = wt, and w3 = wm for the electrician, tailor, and mechanic, respectively, the income of worker i = 1, 2, 3 is thus 10wi. On the other hand, the total expenditure of, e.g., the electrician is 2w1 + 4w2 + w3. Thus there should hold Ew = 10w, where

E = [ 2  4  1        w = [ w1
      4  1  6              w2
      4  5  3 ],           w3 ].
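That w = k[33 52 56]^T satisfies Ew = 10w, as claimed next, can be verified in a few lines:

```python
# Numerical check of the closed-economy wage equation E w = 10 w, i.e. that
# w is an eigenvector of E for the eigenvalue 10 (values from the problem).
E = [[2, 4, 1],
     [4, 1, 6],
     [4, 5, 3]]
w = [33, 52, 56]   # the claimed solution, up to an arbitrary scalar k

Ew = [sum(E[i][j] * w[j] for j in range(3)) for i in range(3)]
```

Each entry of Ew comes out to exactly ten times the corresponding wage, so each worker's 10-hour income equals his expenditure.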

Reformulated otherwise, it is Ēw = w, where Ē = 0.1E. It is noted that each column of Ē sums to 1; the matrix Ē may be called the exchange matrix. It is easy to show that the answer to our particular problem is w = k[33 52 56]^T for arbitrary k. In our problem setting, of course, only positive values of the gain make sense. In general, it is easy to show that this problem always has a nonnegative solution. In practice we are interested in problems which admit a strictly positive solution. It can be shown that a sufficient (but not necessary) condition for this is that there exist an integer m such that all the entries of Ē^m are strictly positive. The problem and the associated exercise are taken from Bavafa-Toosi (1996). It should be noted that Leontief's models have been a topic


of further investigations and developments (Raa, 2005). See also Meng and Xue (2015) for further results.

Problem 2.26: A social network describes the connections and relations among individuals. In an actual social network there is a hierarchy (and of course some feedback) among the different systems which comprise the network. The regulations and structure of the network (which depend on the country in question) demarcate the connections among its different systems (like the police department, universities, syndicates of laborers, governors, industries, etc.), among the different subsystems in each system (like faculties, admission, library, housing, etc. in a university), and also among the different individuals in each subsystem (like students or faculty members), who are also individuals of the whole network. There are sophisticated models for the description of the evolution of social networks. Among the uses made of these models is assessing the influence of a stimulus (in the language of the social sciences) or an input/disturbance (in the language of control theory) on the evolution of the system. The input can be anything, like a particular behavior of a music icon who is heard and seen by (a large part of) the whole network, or a propaganda campaign which may be spread by individuals or subsystems. In the case that the spreader is the government, the propaganda is regarded as an input; in the case that the spreader is an enemy or opposing party, it is regarded as a disturbance. There are instances in which some inputs or disturbances cause 'communities', i.e., particular systems, to form and evolve in the network, e.g., a group which opposes the government with regard to specific policies. Another use of these models is to 'identify' the source that diffuses a rumor or harmful propaganda. The study of the models of such networks is outside the scope of this book. However, it is good to know that such models are built upon Markov chains and stochastic graphs, where studies of directions, trees, clusters, etc. are made. An indispensable part of the model is the 'protocols', which represent specific regulations. These are certain specific mathematical relations (differential, algebraic, logical, etc.) among certain nodes and paths in the network. The model is often studied in an optimization framework using specially designed algorithms. The interested reader may consult the pertinent literature, such as the research forums on social systems, computational social systems, networking, etc. See also Mitra et al. (2016) and Du et al. (2016).

Problem 2.27: A simple and approximate mathematical model of star formation is considered in this example. In particular we focus on blue stars. These stars are the most luminous, most massive, and largest among all stars. Nevertheless, their lifetime is shorter than that of the sun and of medium-size stars, due to their high rate of burnup. The importance of studying them is that they lead us to galactic regions in which star formation is possible. This in turn has connections with our general understanding of the origin of the universe. A nice general account of the star


formation process can be found in the astrophysics literature, such as Sharaf et al. (2012), from which we extract this problem. The star formation process involves three main components: cool atomic cloud (or gas) of mass A, cool molecular cloud (or gas) of mass M, and young active stars of mass S. When these components are present in a certain region and gain close proximity to each other, they interact with each other and with the rest of the galaxy; the outcome may be the birth of a core-hydrogen-burning star. We assume that the total mass T = S + M + A is constant. The atomic-cloud component evolves as follows. (i) There is a constant replenishment by new atomic clouds. It is reasonable to assume that this amount equals the amount that leaves the system by stellar evolution; thus we have the term k1S, where k1 is a constant. (ii) The atomic cloud is increased as young stars lose mass by stellar winds. This process is proportional to the number of young stars and therefore to the stellar mass S; hence we have the term k2S. (iii) The atomic cloud transforms into molecular clouds. This process is proportional to A and becomes more effective with the cooling capacity of the cloud, which in turn increases with M². Therefore we approximate it by the term −k3AM². All in all, for the evolution of the atomic cloud we have dA/dt = k1S + k2S − k3AM². With similar physically justified arguments we conclude that dS/dt = −k1S − k2S + k4SM^n and dM/dt = −k4SM^n + k3AM². The aforementioned reference presents simulation results with different values of the parameters, including n, and derives further theoretical conclusions about the validity of the model by observing the convergence of the system to equilibrium points and limit cycles, which go beyond the scope of our context. The essence of the conclusion is that the model, although approximate, is a good one and is capable of modeling the actual situation.
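Since the right-hand sides sum to zero, the total mass T = A + M + S is conserved along trajectories; a quick forward-Euler sketch (all parameter values and initial conditions are illustrative, not those of Sharaf et al.):

```python
# Forward-Euler simulation of the star-formation model
#   dA/dt =  k1*S + k2*S - k3*A*M**2
#   dS/dt = -k1*S - k2*S + k4*S*M**n
#   dM/dt = -k4*S*M**n + k3*A*M**2
# Illustrative parameters; the total mass T = A + M + S should stay constant.
k1, k2, k3, k4, n = 0.1, 0.1, 1.0, 1.0, 2
A, M, S = 0.8, 0.15, 0.05        # initial masses, chosen so that T = 1
dt, steps = 1e-3, 10_000         # integrate over 10 time units

for _ in range(steps):
    dA = k1 * S + k2 * S - k3 * A * M**2
    dS = -k1 * S - k2 * S + k4 * S * M**n
    dM = -k4 * S * M**n + k3 * A * M**2
    A, S, M = A + dt * dA, S + dt * dS, M + dt * dM
```

Conservation of T is a useful regression check when experimenting with other values of n, as the reference does.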
We also mention that clearly the number of state equations can be reduced from three to two because of the assumption T = S + M + A, and that the state equations are nonlinear. We wrap up the problem by adding that in such systems designing a control law and its implementation (steps 2 and 4 in the solution of a control problem) are almost irrelevant, at least at present and in the foreseeable future. A similar problem in astrophysics is discussed in Spitoni et al. (2017).

Problem 2.28: In this example we 'read' a model used in fluid dynamics, as part of the physics of matter. We start by saying that an adiabatic process is one that occurs with no transfer of matter or heat between a thermodynamic system and its surroundings; in such a process energy is transferred only as work. An isentropic process is an idealized thermodynamic process that is adiabatic and in which the work transfers of the system are frictionless; there is no transfer of matter or heat and the process is reversible. Clearly, an actual process is non-isentropic. On the other hand, precisely speaking, every fluid is compressible. A common assumption for compressibility is polytropic compressibility, which means that the fluid obeys Pυ^n = constant, where P is the pressure, υ is the specific volume, and the real number n is the polytropic index. Now consider the non-isentropic Euler equations for a polytropic compressible fluid in a one-dimensional flow. They are given by


∂t ρ + ∂x(ρv) = 0
∂t(ρv) + ∂x(ρv² + P) = 0
∂t(ρv²/2 + P/(γ − 1)) + ∂x(ρv³/2 + γPv/(γ − 1)) = 0.

In the above equations ∂t = ∂/∂t, ∂x = ∂/∂x, ρ = ρ(t, x) is the local density of the fluid at time t ∈ (0, tf) and position x ∈ (0, xf), v is the local velocity, P > 0 is the pressure, and γ > 0 is the adiabatic constant. In tf, xf the subscript f means final. The equations are obtained from the laws of conservation of mass, momentum, and energy, respectively. The state of the system is defined as x = (ρ, v, P), the output of the system is defined as the state itself, y = x, and the input to the system is the pair of boundary conditions u = (x0, xf). The controllability problem is to determine what final states are achievable; in other words, the control objective is to find the input u = (x0, xf) such that the desired final state is achieved. Note that this is a sophisticated nonlinear problem. The model is adopted from Glass (2014), where the problem is solved in detail.

Problem 2.29: We 'read' another system of physics in this problem. Quantum physics,³⁴ also known as quantum mechanics or quantum theory, gradually arose from the work of Max K. E. L. Planck (1858-1947), a German physicist, concerning the black-body radiation problem; he won the Nobel Prize in Physics in 1918. In this theory a central role is played by the Schrödinger equation, a partial differential equation which describes how the quantum state of a quantum system changes with time. The Schrödinger equation was formulated in late 1925, and published in 1926, by the Austrian physicist Erwin Schrödinger (1887-1961), who subsequently won the Nobel Prize in Physics in 1933. The form of the Schrödinger equation depends on the physical situation in question. The most general form is the time-dependent version, which describes a model for a system that evolves in time. It is iℏ∂tΨ(r, t) = ĤΨ(r, t), where i = √(−1), ℏ is the Planck constant divided by 2π, ∂t = ∂/∂t, Ψ is the wave function of the quantum system, and Ĥ is the Hamiltonian operator, which characterizes the

34 Some definitions are in order. (i) Quantum physics/mechanics/theory is a branch of physics concerned with processes involving, e.g., atoms and photons. (ii) A quantum system is a part of the whole universe or environment that is chosen for study in quantum mechanics with regard to the wave-particle duality in that system. The environment, i.e., everything outside the system, is studied only to observe its effects on the system. A quantum system thus comprises the wave function and its constituents. (iii) Wave-particle duality is the concept that every elementary particle or quantic entity may be partly described in terms of not only particles but also waves. Note that at present the phenomena of light can be explained only with this concept and not via either the particle concept or the wave concept alone. (iv) Quantum state refers to the state of an isolated quantum system; it is the probability distribution for the value of each 'observable', i.e., for the outcome of each possible measurement on the system.


total energy of the given wave function and takes different forms depending on the problem. The time-independent version of the equation is EΨ(r) = ĤΨ(r), in which the proportionality constant E is the energy of the state Ψ. Note that in the terminology of linear algebra this equation is an eigenvalue problem (compare with λx = Ax). A particular case is that of a non-relativistic particle in an electric (not magnetic) field. For this system the time-dependent Schrödinger equation is iℏ∂tΨ(r, t) = [−ℏ²/(2μ)∇² + V(r, t)]Ψ(r, t) and the time-independent Schrödinger equation is EΨ(r) = [−ℏ²/(2μ)∇² + V(r)]Ψ(r), where μ is the reduced mass of the particle, ∇² is the Laplacian, and V is the potential energy. The above information is classical and can be found in all physics forums. Control of the Schrödinger equation is extensively studied in the literature; see, e.g., Feng et al. (2014), Guo et al. (2015), and the references therein. In the control problem V often acts as the control input, and Ψ is the state and also the output of the system. We close this problem with further details about the one-dimensional time-independent Schrödinger equation in an electric field. The equation is given by d²Ψ(x)/dx² = (2μ/ℏ²)[V(x) − E]Ψ(x). Thus if V(x) = V < E the solution is of the form Ψ(x) = A sin(ax + b), and if V(x) = V > E the solution is of the form Ψ(x) = Ce^(cx) + De^(−cx). See Exercises 2.59-2.62.

Problem 2.30: Modeling of discrete-event systems is outside the scope of this book. However, for your passing familiarity we say a few words about such systems in this problem, with which we close the modeling of systems in this section. In practice the state of many actual systems is an element of a discrete set. For instance, the state of a machine may be an element of {ON, OFF} or {IDLE, BUSY, DOWN}. The case BUSY may itself be broken down into further states; for instance, in the case of an automated teller machine it comprises {PAYING CASH, PAYMENT, TRANSFER, REPORT, etc.}. Such systems are called discrete-event systems: the state of the system changes from the present one to the next when a discrete event happens, or triggers the system; hence the phrase discrete-event systems. In our example the event is a customer pressing a key for the desired functionality of the machine. There are different alternative methods for showing the evolution of the state of a discrete-event system. For example, consider a discrete-event system whose state space is {s1, s2, s3, s4, s5}.
If the state of the system changes from s1 to s4 due to event e1 at time t1, then changes to s2 due to event e2 at time t2, then changes to s5 due to event e3 at time t3, and so forth, one possible way to show this is by the diagram of Fig. 2.66. If we assign typical increasing values to the states, an alternative way to show the state-space information is as in Fig. 2.67. Such models are teeming in the research literature, a typical reference being Cassandras and Lafortune (2008). Finally, as a hybrid system consider the dynamics of traffic flow. The dynamics of the system is continuous and is triggered by certain


discrete events, i.e., traffic lights, police commands, and the presence of obstacles such as pedestrians.

Problem 2.31: Using the BD algebra, obtain the input-output transfer function of the system of Fig. 2.68. In the first step, the block G1 is fed through the feedback element (Fig. 2.69). In the second step, the inner series feedback elements are interchanged (Fig. 2.70). In the third step, the inner feedback loop is simplified (Fig. 2.71).

Figure 2.66 Problem 2.30. A typical state-space information.
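The three transitions just described can be reproduced by a minimal event-driven simulation (a sketch; the transition map encodes only the transitions stated in the text):

```python
# A discrete-event system as a map (state, event) -> next state.
# Only the transitions stated in the text are encoded.
transitions = {
    ('s1', 'e1'): 's4',
    ('s4', 'e2'): 's2',
    ('s2', 'e3'): 's5',
}

def run(initial, events):
    """Apply a sequence of events, returning the list of visited states."""
    state, visited = initial, [initial]
    for e in events:
        state = transitions[(state, e)]
        visited.append(state)
    return visited

trajectory = run('s1', ['e1', 'e2', 'e3'])
```

The state thus jumps only at event instants, which is exactly the picture drawn in Figs. 2.66 and 2.67.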


Figure 2.67 Problem 2.30. The state-space information.

Figure 2.68 Problem 2.31. Original system.

Figure 2.69 Problem 2.31. Reduced system (G5 moved ahead of G1 as G5/G1).

Then the series blocks are simplified (Fig. 2.72), and the upper feedback loop is simplified (Fig. 2.73).

Figure 2.70 Problem 2.31. Reduced system.

Finally, the lower feedback loop is simplified (Fig. 2.74).

Figure 2.71 Problem 2.31. Reduced system (inner loop replaced by G1G2/(1 − G1G2G4)).

Problem 2.32: Using the BD algebra, obtain the input-output transfer function of the system of Fig. 2.75.

Figure 2.72 Problem 2.31. Reduced system (forward path G1G2G3/(1 − G1G2G4)).

Figure 2.73 Problem 2.31. Reduced system (loop gain G1G2G3/(1 − G1G2G4 + G2G3G5) in unity negative feedback).

The equivalent transfer function is therefore

Y/U = G1G2G3/(1 − G1G2G4 + G2G3G5 + G1G2G3).

Figure 2.74 Problem 2.31. Equivalent system.
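The step-by-step reduction can be cross-checked numerically against the closed-form result by substituting gains (the numeric values below are arbitrary):

```python
# Cross-check of the block-diagram reduction of Problem 2.31 against the
# closed-form transfer function, using arbitrary numerical gains.
G1, G2, G3, G4, G5 = 2.0, 3.0, 5.0, 0.1, 0.2

# Step-by-step reduction as in Figs. 2.69-2.74:
inner = G1 * G2 / (1 - G1 * G2 * G4)           # positive feedback through G4
mid = inner * G3 / (1 + inner * G3 * G5 / G1)  # negative feedback through G5/G1
T_reduction = mid / (1 + mid)                  # outer unity negative feedback

# Closed-form result of the reduction:
T_formula = G1 * G2 * G3 / (1 - G1 * G2 * G4 + G2 * G3 * G5 + G1 * G2 * G3)
```

Since Mason's rule applied to this system (Problem 2.33 below) yields the same closed form, the check also ties the two solution methods together.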


First we change the order of the inner feedback elements and redraw the diagram as follows (Fig. 2.76).

Figure 2.75 Problem 2.32. Original system.

Next we simplify the innermost loops to get (Fig. 2.77). Now it is easily seen that the inner loop is simplified to the gain 1/5. Thus the system is equivalent to the gain 2/7.

Figure 2.76 Problem 2.32. Reduced system.

Figure 2.77 Problem 2.32. Reduced system.

Problem 2.33: Draw the SFG equivalent of the system of Problem 2.31. Then, via Mason's rule, obtain its input-output transfer function. We start by labeling the signals as shown in Fig. 2.78. Note that to this end we have used our suggestion in Exercise 1.53 of Chapter 1. In this system, e.g., Y2 = U3 = U4, and we have used U3. Next we draw its SFG equivalent (Fig. 2.79).

Figure 2.78 Problem 2.33. Original system.


To apply Mason's rule we first find the forward paths. There is only one such path, U - E - U1 - U2 - U3 - Y, with the gain P1 = G1G2G3. Next we identify and name the loop gains: l1 = G1G2G4, l2 = -G2G3G5, l3 = -G1G2G3. Hence,

Figure 2.79 Problem 2.33. SFG of the system.

ΣL1 = l1 + l2 + l3 = G1G2G4 - G2G3G5 - G1G2G3.

Therefore we have Δ as Δ = 1 - ΣL1 = 1 - G1G2G4 + G2G3G5 + G1G2G3. Next we find Δ1. Because all loops touch the only forward path, Δ1 = 1, and the system gain is computed via P = (1/Δ)P1Δ1 = G1G2G3/(1 - G1G2G4 + G2G3G5 + G1G2G3).

Problem 2.34: Depict the SFG equivalent of the system of Problem 2.32. Then, via Mason's rule, compute its input-output transfer function. The reader should note that in this fictitious system a signal E is not really an error signal. We simply denote the signals by E. If you wish you can denote them by, say, X, U, F, etc.; it makes no difference. To this end we first label the signals as shown in Fig. 2.80. Then we draw its SFG equivalent as follows (Fig. 2.81). Now we embark on finding the forward paths. There are two forward paths, R - E1 - E2 - E3 - Y and R - E1 - E3 - Y, with path gains P1 = 1 and P2 = 1,

Figure 2.80 Problem 2.34. Original system.

Figure 2.81 Problem 2.34. SFG of the system.


respectively. Next we identify and label the loop gains: l1 = E2-E3-E2, l2 = E3-Y-E3, l3 = E1-E2-E3-Y-E1, l4 = E1-E3-Y-E1, l5 = Y-Y. Therefore,

ΣL1 = l1 + l2 + l3 + l4 + l5 = -5, ΣL2 = l1 l5 = 1.

Hence we can form Δ as Δ = 1 - ΣL1 + ΣL2 = 7. Next we find Δ1 and Δ2. Since all loops touch both of the forward paths, we have Δ1 = 1, Δ2 = 1. Lastly the system gain is computed via P = (1/Δ)(P1 + P2) = 2/7.

Problem 2.35: Determine the outputs y1 and y2 of the SFG in Fig. 2.82. Let us start by saying that an SFG has general usage. It is not restricted to control systems. As such its input and output may be labeled by any letters: they do not

Figure 2.82 Problem 2.35. SFG of the system.

necessarily refer to the labels in standard control structures. More precisely, in this figure u, x, y do not necessarily refer to the control signal, state, and output in the notation of control systems. Denote Gi1 = (yi/u1)|u2=0 and Gi2 = (yi/u2)|u1=0 for i = 1, 2. Therefore yi = Gi1 u1 + Gi2 u2 for i = 1, 2. We compute the transfer functions Gij, i, j = 1, 2, in the following. The loop gains are l1 = bc, l2 = e, l3 = fg, l4 = abdfji, l5 = abhi. For G11 we have ΣL1 = l1 + l2 + l3 + l4 + l5, ΣL2 = l1l2 + l1l3 + l2l5 + l3l5, and Δ = 1 - ΣL1 + ΣL2. P1 = abdfj, P2 = abh, Δ1 = 1, and Δ2 = 1 - (l2 + l3). Consequently G11 = (1/Δ)(P1Δ1 + P2Δ2) is known. For G12, Δ is as before; P1 = bdfj, P2 = bh, Δ1 = 1, and Δ2 = 1 - (l2 + l3) are as before. As a result G12 = (1/Δ)(P1Δ1 + P2Δ2) can be computed. For G21, Δ is as before; P1 = abdf, Δ1 = 1. Therefore G21 can be computed as G21 = (1/Δ)P1Δ1. For G22, Δ is as before; P1 = bdf, Δ1 = 1. Hence we find G22 by G22 = (1/Δ)P1Δ1. Finally the outputs are found through yi = Gi1 u1 + Gi2 u2 for i = 1, 2.

Problem 2.36: For the system of the above example determine the ratio (y1/x)|u1=0.


We can compute the desired transfer function directly. Without having any real advantage, we practice finding it through (y1/x)|u1=0 = ((y1/u2)/(x/u2))|u1=0. Note that if we denote x = H1u1 + H2u2, there holds (y1/x)|u1=0 = G12/H2, and thus we find H2. For this transfer function we have Δ as before, P1 = b, Δ1 = 1, and thus H2 is found from H2 = (1/Δ)P1Δ1.
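Mason's-rule computations like those of Problems 2.35 and 2.36 are easy to mechanize; the sketch below rebuilds Δ and G11 from the loop and path gains identified above, using sympy for the symbolic algebra:

```python
import sympy as sp

a, b, c, d, e, f, g, h, i, j = sp.symbols('a b c d e f g h i j')

# Loop gains of the SFG of Fig. 2.82 as identified in Problem 2.35
l1, l2, l3, l4, l5 = b*c, e, f*g, a*b*d*f*j*i, a*b*h*i
sum_L1 = l1 + l2 + l3 + l4 + l5
sum_L2 = l1*l2 + l1*l3 + l2*l5 + l3*l5   # products of the non-touching loop pairs
Delta = 1 - sum_L1 + sum_L2

# Forward paths from u1 to y1 with their cofactors
P1, D1 = a*b*d*f*j, 1                    # touches all loops
P2, D2 = a*b*h, 1 - (l2 + l3)            # does not touch l2 and l3
G11 = sp.cancel((P1*D1 + P2*D2)/Delta)
```

The remaining transfer functions G12, G21, G22 follow the same pattern with the path gains listed in the text.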

2.9 Exercises

Over 150 exercises are offered in the form of some collective problems in the sequel. It is advisable to try as many of them as your time allows.

Exercise 2.1: How does one obtain a model for a given system? Enumerate and briefly explain.

Exercise 2.2: What is meant by system delay? Does it exist in the plant only or also in other elements of a closed-loop control system? Briefly explain.

Exercise 2.3: This problem is for the readers of mathematics. (i) Provide an example of system modeling over a ring. (ii) Provide an example of mappings in topological and measure spaces.

Exercise 2.4: This problem has several parts. (i) How does feedback (i.e., closed-loop control) affect the zeros of a system? (ii) How can we move the zeros of a system? (iii) How can we change the placement of the poles of a system?

Exercise 2.5: Given a SISO plant and a state-space and transfer function representation of it, how do the initial and final values of the step response of the plant relate to the system parameters?

Exercise 2.6: Many differential equations have more than one solution. (i) Show that the solutions to the differential equation x⁽⁴⁾ + 4x = 0 are given by x = e^t(c1 sin t + c2 cos t) + e^{−t}(c3 sin t + c4 cos t), where one of them is the trivial solution. (ii) Show that the differential equation ẍ − ((t+2)/t)ẋ + ((t+2)/t²)x = 0 has a solution x = t and then find a second solution for it. Does it have a third, fourth, etc. solution? (iii) Find all the solutions of x⃛ − 3ẍ + 4x = 0.

Exercise 2.7: Solve the following differential equations:
i. ẋ = (t + 1)/x
ii. ẋ = 2√x e^{−t}/t
iii. ẋ = (t + 1)/x + 1
iv. ẋ = 2t√(t² + 1)/sin x
v. ẋ = 2x/(t + 1)
vi. ẋ = 2tx²/(1 + t²)
vii. ẋ = 2tx²/(1 + x²)
viii. ẋ = t²e^{−x}
ix. ẋ = (4 − t)/(3x² − 1)
x. ẍ + 2ẋ + 4x = 4t − 1
xi. ẍ + 2ẋ + 4x = e^{−2t}
xii. ẍ + 2ẋ + 4x = sin 3t
xiii. ẍ + 2ẋ + 4x = 3te^{−t}
xiv. ẍ + 2ẋ + 4x = 3t²e^{3t}
xv. ẍ + 2ẋ + 4x = e^{−t} cos t + 3t²
xvi. ẍ + 2ẋ + 4x = 3t sin 2t
xvii. ẍ + 2ẋ + 4x = te^{−t} cos 3t


xviii. ẍ + 2ẋ + 4x = 3 sec 3t
xix. t²ẍ + 2tẋ + 2t⁴e^x = 0
xx. t²ẍ − tẋ + 4x + ln t = 0
xxi. t³x⃛ − tẍ + 4t²ẋ + 5x + 3 + ln t² = 0

Exercise 2.8: Consider the nonlinear Eq. (2.3), where x and y are vectors. Write the Taylor series expansion up to the 4th power. What is the formula of the general nth-order term?

Exercise 2.9: Consider the signals x1 = (t+1)^{−1} − 2(t+1)^{−1/2}, x2 = 3(t+1)^{−3/2} + 1, and u = (t+1)^{−1/2}. (i) By differentiation of the signals construct the system ẋ1 = f1(x1, x2, u, t), ẋ2 = f2(x1, x2, u, t). Note that the representation is not unique. (ii) Do your proposed models admit any other solution?

Exercise 2.10: Repeat Exercise 2.9 for the signals x1 = e^{−2t} − 2te^{−3t}, x2 = e^{−4t}, u1 = e^{−t}, and u2 = −e^{−t/2}.

Exercise 2.11: This problem has several parts. (i) Find the equilibrium of the following systems, perhaps with the help of MATLAB®. (ii) Find their linearized model. (iii) Simulate them as we did in Example 2.4. (iv) In the case that they admit an explicit closed-form solution (with limited terms) find it!
i. ẋ = x^{−3} + u² with u = 2
ii. ẋ = 2x² + sin²x
iii. ẋ = x|x| + u²
iv. ẋ = x sign(x) + |u|
v. ẋ = (x − 1)⁴
vi. ẍ + ẋ² + sin x = x
vii. x⃛ + ẍ² − ẋx + sin²x = 1
viii. ẋ = −e^{−t}(1 + t)x³ with x(0) = 1
ix. ẋ = 3t²x − 1.5t²x² with x(0) = 2
x. ẋ = −e^{−t}x + x² cos t with x(0) = −1
xi. ẋ = 2x²/(t + 1) + te^{−t} + 2u³ with u = 1
xii. ẋ = 2x² + e^{−t} + u³ with u = 2
xiii. ẋ = 2x² + sin t + 2u² with u = 1
xiv. ẋ = 2x² + te^{−t}u² with u = 1
xv. ẋ = 2x^{−1} + e^{−t}u² with u = 3
xvi. ẋ = −√x + te^{−t} + 3u² with u = 2
xvii. ẋ = 2x²/(t + 1) + e^{−t}u² with u = 1
xviii. ẋ = (1/x)(t⁵ − 2t + u) with x(0) = 2 and u = 1
xix. ẋ1 = x1 + 1.2x1² + 4.7x1³ + x1x2², ẋ2 = x1 + 5.1x2 + 2
xx. ẋ1 = x1 + sin(x1x2) + u1, ẋ2 = 2x1x2 + u1 + u2², with u = [2 1]^T
xxi. ẋ1 = x1 + x1 sin x1 + x1u1, ẋ2 = 2x1x2 + x2² − u1 + u2², with u = [1 1.2]^T
xxii. ẋ1 = x1 + x2(1 − x1) + u1(x1 + x2), ẋ2 = 2x1x2 + u2, with u = [1 2]^T
xxiii. ẋ1 = x2, ẋ2 = 1 − f(x1 − x2), with f(y) = y + y³ for |y| < 1 and f(y) = 3y − sign(y) for |y| ≥ 1
xxiv. ẋ1 = x2u, ẋ2 = x1, with x(0) = [1 1/2]^T and u = 2


xxv. ẋ1 = −4x2, ẋ2 = x1u, with x(0) = [1 2]^T and u = 1
xxvi. ẋ1 = x1 + x2 − u2x2², ẋ2 = −x2u1, with x(0) = [1 1]^T and u = [1 3]^T
xxvii. ẋ1 = −x1/(t + 1) − (x2/(2(t + 1)))u2 − sin(t + 1), ẋ2 = −(t + 1)^{−2}x1²u2 − 2u1², with u = [1 1]^T
xxviii. ẋ1 = x1/(t + 1) + (x2/(2x1))(t + 1)^{−2}u1, ẋ2 = −(t + 1)^{−2}x2²u1 − 2u2², with the input u = [2 + sin(t + 1) 1]^T
xxix. ẋ1 = 4x3^{−2} − 4x3 − 3(4t + 1)u1, ẋ2 = −(2/3)t^{−2}x2³, ẋ3 = (1/3)x2^{−1}u2^{1/2}, with u1 = e^{−2t}
xxx. ẋ1 = −(1/2)x3 + (1/2)u², ẋ2 = −(1/2)x2²u, ẋ3 = 1/(2x1) − (3/4)u, with u = 1/√(t + 1)

Exercise 2.12: Consider the sequel systems. (i) Are they ODEs, PDEs, or integral equations? (ii) Determine the properties of (non)linearity, time (in)variance, and (non)causality of the following systems. (iii) What is the role of the initial condition?
i. ẋ = −x + 2u; y = x + u
ii. ẋ = −x + u; y = −x + 2ẋ + u
iii. ẋ = (1 − cos t)x; y = x + e^{−t}u
iv. ẋ = cos x + u³; y = x
v. ẋ = x ln(t + 1) + u; y = x + u − u̇
vi. ẋ = sin(tx) + u; y = te^{−t}x + u
vii. ẋ = −x + 1 − u; y = x + u
viii. ẋ = x + xu; y = te^{−t}x + u̇
ix. ẋ = x(t − 1) + u; y = x
x. ẋ = x + u(t − τ(t)); y = 3x + u̇
xi. y + ẏ − e^{−t}y = u(t − 1)
xii. y + ẏ = u − e^{−t}ü
xiii. y + ẏ − e^{−t}y = u u̇(t − 1)
xiv. y = u − e^{−t}ü ẏ

Exercise 2.13: Show that the motion of a satellite of mass m = 1, considered as a point, in the two-dimensional gravitational field of the form h(r) = −αr^{−2}, r ≠ 0, is described by r̈(t) = r(t)θ̇²(t) − αr^{−2}(t) + u1(t), r(t)θ̈(t) = −2ṙ(t)θ̇(t) + u2(t), where r(t) is the distance of the satellite from the center


of gravitation at time t, θ̇(t) is the angular velocity of the satellite at time t, and u1(t), u2(t) are the radial and tangential thrusts as the control inputs. Write a state-space model for the system and linearize it around the operating point x*(t) = [r ṙ θ θ̇]^T = [1 0 √α t √α]^T, which represents rotation on a constant orbit with constant angular velocity.

Exercise 2.14: The nonlinear model of a VTOL (Vertical Takeoff and Landing) aircraft is given in the equations below. In these equations x1, x3 are the horizontal position and vertical position of the aircraft's center of mass in the body-fixed reference. The roll angle is denoted by x5 and the corresponding velocities are x2, x4, x6. The control inputs are u1 (the thrust directed out of the bottom of the aircraft) and u2 (the rolling moment). The disturbances corresponding to u1, u2 are denoted by d1, d2, respectively. The small coefficient ε characterizes the coupling between the rolling moment and the lateral force. The constant −1 is the gravitational acceleration. The outputs are y1 = x1 and y2 = x3. Find the linearized model of the system and in particular show that it is NMP. The model is adopted from Hauser et al. (1992). Finally we encourage the reader to derive this model by herself/himself as well.

ẋ1 = x2
ẋ2 = −(u1 + d1) sin x5 + (u2 + d2)ε cos x5
ẋ3 = x4
ẋ4 = (u1 + d1) cos x5 + (u2 + d2)ε sin x5 − 1
ẋ5 = x6
ẋ6 = u2 + d2

Exercise 2.15: Find the equilibrium point of the biological system in Example 2.17. Find the linearized model for the type II consumption functions.

Exercise 2.16: This problem has two parts. (i) Using MATLAB® depict and compare the phase and magnitude of the given approximations for the delay; see Subsection 2.3.13. Explain the observation. (ii) To obtain other rational functions approximating the delay one possibility is to use a numerical/compact-form curve-fitting method. This can be done especially if we want to put a weight on particular frequencies. Try such methods.

Exercise 2.17: How do the phase lags caused by the systems e^{−sT} and 1/(s + a) compare with each other for a sinusoidal input?

Exercise 2.18: Provide the details of the Jordan form technique which we have used in computing the nth power of a matrix in the worked-out Problem 2.23.

Exercise 2.19: Derive a model for each of the systems of Fig. 2.83. Draw their mechanical equivalents. Discuss.

Exercise 2.20: Derive a model for each of the following systems in Fig. 2.84.

Exercise 2.21: The following Fig. 2.85 shows a load rotated through a rotary-spring shaft, gear, and a DC motor. The DC motor is as that in the worked-out Problem 2.14. Obtain a model for each of these systems.

Exercise 2.22: In Exercise 2.21 assume that the torque-speed relation of the motor is given by the following curve; see Fig. 2.86. Write down the equations of the system and find the transfer function θL/V.

Exercise 2.23: Fig. 2.87 shows a drum rotated by two motors. The drum is subject to the load torque and inertia TL and JL. The motors are assumed to have inertia Jm and negligible inductance. Their shafts are modeled by rotary springs. Obtain a model for


Figure 2.83 Exercise 2.19. The above six panels.

Figure 2.84 Exercise 2.20. The above two panels.

the system. Analyze the effect of equality or otherwise of r1 and r2, similarly for other parameters.

Exercise 2.24: In the subsequent Fig. 2.88 the solenoid produces an electromagnetic force at the distance z given by f = ki²/(z + a). Obtain a state-space representation. Linearize the system around the equilibrium point z = 0.5 m and obtain the transfer function Z/V. Reconsider this problem also after studying Chapter 3.

Exercise 2.25: This Exercise has several parts. 1. By analyzing the top left panel of Fig. 2.89 show that the Op-Amp and the lever are equivalent. 2. The top right panel of the same Figure shows a simplified version of an electromagnetic actuator. The solenoid produces a magnetic force proportional to the current, i.e., f = ki. Suppose that the system is implemented horizontally, thus there is no gravitational effect. Obtain a model for the system. 3. Repeat part (2) if the electromagnetic actuator is implemented vertically. 4. In parts (2) and (3) investigate the effect of a spring placed between the mass m and the ground.


Figure 2.85 Exercise 2.21. Load and rotary-spring shaft.

Figure 2.86 Exercise 2.22. The torque-speed curve of the motor.

Figure 2.87 Exercise 2.23. Drum speed control.



Figure 2.88 Exercise 2.24. Magnetic suspension system.

Figure 2.89 Exercise 2.25.

5. The bottom row of the same Figure shows two mechanical systems with feedback. Analyze the feedback mechanisms and derive their electric equivalents. 6. In the systems of the bottom row add a dashpot and mass as you wish and repeat part (5). We close this Exercise by adding that, precisely speaking, there is a difference among a current amplifier, voltage amplifier, and power amplifier. However, since it is a technical issue in electrical engineering we simply refer to it as the amplifier. The readers of electrical engineering are encouraged to master the topic.

Exercise 2.26: This exercise is for the readers of electrical engineering and has various parts. 1. Under what conditions, e.g., in what frequency range, etc., is an OpAmp—say LTC6261—a linear element? 2. Answer part (1) for other elements of an electronic circuit. 3. Two basic kinds of resistance are defined for electronic elements/devices, namely static resistance (rstat, which is the usual resistance r) and differential resistance (rdiff). Negative resistance, which can be of either kind, is the property that an increase in the (differential) voltage results in a decrease in the (differential) current. It appears in some nonlinear systems. Elements/devices can have rstat > 0, rdiff > 0 like resistors


and ordinary diodes; rstat < 0, rdiff > 0 like batteries, generators, and transistors; rstat > 0, rdiff < 0 like tunnel diodes and Gunn diodes; or rstat < 0, rdiff < 0 like feedback oscillators with positive feedback and negative impedance converters with positive feedback. You are encouraged to master this concept by referring to the pertinent literature. Provide further details. 4. In the literature on electronic circuit design the term 'positive feedback' is sometimes encountered. It often refers to the case that the positive gate of an OpAmp is used for feedback. (4i) What exactly does it mean in the context of the lessons of control theory? (4ii) Draw the control-theoretic block diagram, i.e., something like a 1-DOF/2-DOF/3-DOF control structure, of such a typical system. (4iii) When is it used? 5. Apart from feedback in individual one-stage amplifiers, e.g., through the emitter resistance as appropriate, the basic use of feedback in electronic circuits is for 'robustifying' an amplifier gain—in control-theoretic terms. In the electronics literature, especially of the 20th century, this is sometimes referred to as 'stabilizing' the amplifier. The core philosophy is that it is not possible to design a circuit which accurately provides a desired amplification (especially when it is large). For instance, if we wish it to be 200 it can easily be 100 or 350 upon implementation and in use. The classical remedy for this problem is that a circuit with a much larger gain is designed in the forward path and then a small but (almost) accurate gain is inserted in the feedback path which equals the reciprocal of the desired gain. This is often done by high-precision resistors whose 'tolerance' (in the language of electronics, or 'uncertainty' in ours) is guaranteed to be less than 0.005%. Call the forward and feedback path gains A and b, respectively. Then the system gain will be A/(1 + Ab).
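The desensitizing effect of this feedback scheme can be seen numerically; in the sketch below the desired gain is 200 (so b = 1/200), and the forward gains are illustrative values of the kind discussed here:

```python
def closed_loop_gain(A, b):
    # Closed-loop gain of a forward gain A with feedback gain b
    return A / (1 + A*b)

b = 1/200
for A in (100_000, 200_000, 350_000):   # wildly different forward gains
    print(A, closed_loop_gain(A, b))    # each result lies close to the desired 200
```

Even a 3.5-fold change in A moves the closed-loop gain by a fraction of a percent.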
The value of A is such that Ab ≫ 1, and thus A/(1 + Ab) ≈ 1/b = desired gain, which is accurate and independent of A and its inaccuracy. For instance, for the desired gain of 200, b = 1/200 and A ≥ 100,000. (5i) For the circuit in the top left panel of Fig. 2.90 identify the feedback loops and draw their control-theoretic block diagrams. (Note that the series capacitor in the feedback path is used to prevent constant wasteful consumption of the battery. The battery is used whenever there is an AC input.) (5ii) With the above-explained philosophy, propose typical values for the elements of the feedback (and of course for other elements) of part (5i). (5iii) Can you exemplify an electronic circuit with feedback, where an unstable (in the sense of control theory) electronic circuit as the plant P is stabilized by a controller C which itself is an electronic circuit? Note that the output is not required to become equal to the input but is required to become as desired, e.g., 10 times the input or anything as desired. (5iv) Can you exemplify an electronic circuit with feedback in the standard 1-DOF control structure where the output is desired to become equal to the input? 6. Draw a model for the top right panel of Fig. 2.90. 7. In the same figure, the bottom three rows are a limiter, voltage-controlled current source (VCCS), half-wave rectifier, full-wave rectifier, DC restorer (peak clamper), and dead-band circuit. (7i) Explain their functionality and find a model for each. (7ii) Repeat part (7i) for the circuits where the marked diode(s) is (are) reversed. (7iii) Explain the feedback mechanism(s) in these circuits.

Exercise 2.27: This exercise is for the readers of electrical engineering. As you know or will study in future courses, power converters are used for reducing power consumption.


Figure 2.90 Exercise 2.26.

They are used in a variety of applications ranging from power supplies of personal computers to power supplies of spacecraft. They have a number of different versions and are ubiquitous in the pertinent literature. Their modeling and control are extensively studied in the literature; see, e.g., Bavafa-Toosi (1997), Ding et al. (2004), and Sabzehgar and Moallem (2012). This Exercise is extracted from Bavafa-Toosi (1997). 1. As a simple example consider the top panel of Fig. 2.91. Analyze and explain the functionality of the circuit. By modeling the MOSFET as an ideal switch show that the

model of the system is given by di/dt = (1/L)((a(t) − 1)v + Vi) and dv/dt = (1/C)((1 − a(t))i − (1/R)v), in which a(t) = 1 when the switch is closed and a(t) = 0 when the switch is open. Write the state-space model of the system. Note that this is a switching model and is a subclass of hybrid models. 2. Find a model for the systems in the other three rows of Fig. 2.91 and explain their functionality.


Figure 2.91 Exercise 2.27. Some typical power converters.

Finally we add that “average models” are also available in the literature which are not hybrid and are an average (in a certain sense) of the different possibilities of the hybrid model. For their construction the reader can refer to the pertinent literature. Exercise 2.28: Obtain a model for each of the systems given in Fig. 2.92. In the system on the left define the position of the mass m2 as the output and in the other systems the position of m3 as the output. Write the state-space formulae for computing the transfer functions. Moreover, depict the electrical equivalents of these systems. Suppose that all forces are applied on the center of gravity. Exercise 2.29: Obtain a model for the mechanical systems depicted in Fig. 2.93. Additionally, draw the electrical equivalents of these systems. For the middle row assume that the motion is two-dimensional and the angles are small. Exercise 2.30: Obtain a model for the connected inverted pendulums system given in Fig. 2.94. Exercise 2.31: This exercise is for the readers of mechanical engineering. Obtain a simple model for the tightrope artist as shown in Fig. 2.95. The mass of the artist is M concentrated at its center of gravity which is assumed to be at the height l.
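The switching model of part 1 of Exercise 2.27 can be simulated directly with forward Euler; the component values, switching frequency, and duty ratio below are assumed purely for illustration (with a(t) as a fixed-frequency PWM signal the circuit behaves as a boost converter, so the average capacitor voltage settles near Vi/(1 − D)):

```python
# Forward-Euler simulation of  di/dt = ((a-1)v + Vi)/L,  dv/dt = ((1-a)i - v/R)/C.
# All numerical values are illustrative assumptions, not taken from the text.
L, C, R, Vi = 1e-3, 1e-4, 10.0, 12.0    # inductance, capacitance, load, source
dt, f_sw, D = 1e-6, 20e3, 0.5           # step size, switching frequency, duty ratio

i, v = 0.0, 0.0
for k in range(200_000):                # 0.2 s of simulated time
    t = k*dt
    a = 1.0 if (t*f_sw) % 1.0 < D else 0.0   # PWM switch position a(t)
    di = ((a - 1.0)*v + Vi)/L
    dv = ((1.0 - a)*i - v/R)/C
    i, v = i + dt*di, v + dt*dv

print(round(v, 1))                      # settles near Vi/(1 - D) = 24 V
```

The averaged value Vi/(1 − D) quoted in the comment is the standard ideal boost relation; the simulated voltage carries a small switching ripple around it.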


Figure 2.92 Exercise 2.28. The above three panels.

Figure 2.93 Exercise 2.29. The above seven panels.

Exercise 2.32: The following Fig. 2.96 shows a vehicle suspension system. The mass M refers to the body of the vehicle whereas the mass m refers to the unsprung part, being the tires, axle, and so forth. The force u is provided by a hydraulic actuator and is the control input. Spring k2 models the stiffness of the tires, acting between the axle and the road. The varying height h of the road is the disturbance to the system. The vehicle speed is V. (i) Obtain a model for the system. (ii) Is this model deterministic? (iii) The objective of control is to ensure a quality ride. How should we express it formulation-wise? (iv) Consider another dashpot in series or parallel with k2 and for all models simulate the free


Figure 2.94 Exercise 2.30. Connected pendulums.


Figure 2.95 Exercise 2.31. Tightrope artist.

Figure 2.96 Exercise 2.32. Vehicle active suspension. Adopted from Thaller et al. (2016), with permission.

response of the system for typical parameter values. Which model is better? The system is extracted from Thaller et al. (2016).

Exercise 2.33: The sequel Fig. 2.97 shows the schematic of a train. There are two locomotives, one pulling at the head and the other pushing at the rear (not shown). There are N identical wagons. The parameters are as shown. The control objective is to keep constant speed and avoid overstressing the couplings. Obtain a model for the system and formulate the control objective.


Figure 2.97 Exercise 2.33. Train.

Figure 2.98 Exercise 2.34. A mass-spring configuration.

Figure 2.99 Exercise 2.35. Triple pendulum.

Figure 2.100 Exercise 2.36. Palm and beam/Space rocket balancing.

Exercise 2.34: Obtain a model for the system shown in Fig. 2.98. Define the outputs as the angle of the pendulum and the position of the cart. Find the transfer functions.

Exercise 2.35: Write a model for the three-pendulum system shown in Fig. 2.99.

Exercise 2.36: The subsequent Fig. 2.100 shows the three-dimensional schematic of an inverted pendulum which may fall at any α and β angles. This is a model of beam


Figure 2.101 Exercise 2.37. Rotating inverted pendulum: The swing-up system.

Figure 2.102 Exercise 2.38. A differential drive robot.

balancing on palm, and also a better but still restricted model of a space-rocket balancing system. Write a model for the system. What should be defined as the output in order to have the beam vertical? Formulate the problem and obtain the transfer function.

Exercise 2.37: This exercise is for the readers of mechanical engineering. Write down a model for the rotating inverted pendulum given in Fig. 2.101. Note that the manipulator (whose length is l1) rotates horizontally, and the pendulum (whose length is l2) can fall only in the plane perpendicular to the manipulator.

Exercise 2.38: Differential Drive Robots (DDR) abound in the literature. They are a topic of extensive research (Mathew and Hiremath, 2016) and have different forms. The schematic of a typical DDR is provided in Fig. 2.102. This robot has two motors at the right and left wheels which produce the torques τr and τl, respectively. Let the parameters L, d, and r be as designated in the figure, where d is the distance between the mid-wheels point and the center of mass. Denote by mc, mw, and m = mc + 2mw the mass of the robot without the wheels and motors, the mass of each wheel and motor, and the total mass. Also denote by Iw, Im, and Ic the moment of inertia of each wheel about the wheel axis, the moment of inertia of each wheel about the wheel diameter, and the moment of inertia of the robot through the center of mass.


(i) Show that the equations of motion of the DDR are given by (m + 2Iw/r²)v̇ − mcLω² = (τr + τl)/r and (I + 2d²Iw/r²)ω̇ + mcLvω = d(τr − τl)/r, where v and ω = θ̇ are the linear and angular velocities of the robot and I = Ic + mcd² + 2mwL² + 2Im is the total equivalent inertia. (ii) Define y = [v ω]^T and u = [τr τl]^T as the output and input of the system and find the linearized model of the system in the state-space domain about an arbitrary operating point/trajectory. Discuss the MP/NMPness of the transfer functions Tij, i, j = 1, 2.

Exercise 2.39: Find a model for the systems in Fig. 2.103. In each tank and orifice the following relations hold, respectively: input rate of flow − output rate of flow = Aḣ, and q = h/R, where A, h, R, and q are the tank area, content height, fluid resistance at the valve, and liquid flow. Note that the right panel shows two systems: one without the dash-dotted path and one with the dash-dotted path.

Exercise 2.40: Consider the system of Example 2.23. The hair drier has the same transfer function between the voltage applied to the heating element and the temperature at the exit point of the pipe. Assume that the temperature of the output air (in the direct line) reduces linearly with distance and that at the distance of 2 meters it becomes equal to the temperature of the environment, denoted by θ0. Find the transfer function between the input voltage v and the output temperature at a point of distance d from the nozzle.

Exercise 2.41: Derive a model for the thermal system shown in Fig. 2.104.

Exercise 2.42: Consider a tank with an inlet and an outlet. Cold water runs into the tank. The tank is insulated on five of its six sides; its top side is at the ambient temperature, and from there a hot metal is submerged in it. Derive a simple model for this system.

Exercise 2.43: This problem has three parts. (i) Assume a mass as the load and complete the model of the hydraulic actuator of Example 2.14.
(ii) Assume a series mass-dashpot and repeat part (i). (iii) Provide a more detailed model of the hydraulic actuator by not assuming a constant orifice factor c.
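Part (ii) of Exercise 2.38 can be set up symbolically: the sketch below solves the two equations of motion of part (i) for v̇ and ω̇ and takes Jacobians with sympy; evaluating A and B at a chosen trajectory (v0, ω0) then gives the linearized model.

```python
import sympy as sp

v, w, tr, tl = sp.symbols('v omega tau_r tau_l')
m, mc, Lp, d, r, Iw, It = sp.symbols('m m_c L d r I_w I', positive=True)

# Equations of motion of Exercise 2.38(i), solved for the state derivatives
vdot = ((tr + tl)/r + mc*Lp*w**2) / (m + 2*Iw/r**2)
wdot = (d*(tr - tl)/r - mc*Lp*v*w) / (It + 2*d**2*Iw/r**2)
F = sp.Matrix([vdot, wdot])

A = F.jacobian(sp.Matrix([v, w]))       # state matrix of the linearized model
B = F.jacobian(sp.Matrix([tr, tl]))     # input matrix
```

Note, for instance, that ∂v̇/∂v = 0, so the coupling between the two states enters only through ω.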

Figure 2.103 Exercise 2.39. Some three-tank systems.

Figure 2.104 Exercise 2.41.


Exercise 2.44: This exercise is for the readers of chemical engineering. Derive the state equations of the pH neutralization Problem 2.20 in detail.

Exercise 2.45: Consider the worked-out Problem 2.21. (i) Find the most likely time of having exactly one crystal in the system. (ii) At the time in part (i) find the probability distributions. (iii) Find the cumulative distribution function and probability distribution function for the time when at least n crystals have nucleated. (iv) Find the mean time for the appearance of at least n crystals.

Exercise 2.46: In Fig. 2.105 the left panel is the heated blending tank and the right panel is the heated stirred reactor whose production process is the same as that of Example 2.15. Find a mathematical model for the chemical system of this Figure in four scenarios. (i) Both paths P1 and P2 are disconnected. Thus there are two independent systems. (ii) Path P1 is connected. (iii) Path P2 is connected. (iv) Both paths P1 and P2 are connected.

Exercise 2.47: This exercise is also for the readers of chemical engineering. The 'chemostat' problem in biology is the growth of a bio species fed by a nutrient. Today this is available as a laboratory device. State reasonable assumptions and find a model for it.

Exercise 2.48: Reconsider the settings of the worked-out Problem 2.23. Find the model of the genotype distribution in the following scenarios. (i) Each plant is fertilized by the type Aa. (ii) Each plant is fertilized with the genotypes AA and Aa every other generation. (iii) Each plant is fertilized with its own type.

Exercise 2.49: In genetics some diseases are transmitted to the offspring via autosomal inheritance. Suppose that the gene A is normal but the gene a is abnormal. Then the genotype AA is normal, the genotype Aa is a carrier but is not afflicted with the disease, and the genotype aa is afflicted with the disease. We assume that the unhealthy individual aa dies before maturity.
One simple way to prevent the disease is to always mate a female (either normal or carrier) with a normal male. Provide a model for the system and find and analyze the distribution of the normals and carriers in the population after n generations.

Exercise 2.50: Consider a virus which infects any person on first contact if the person does not have immunity. The society has three subsystems with regard to it: the infected (I), the susceptible (S) who are susceptible to being infected by the infected group, and the removed (R) who are at home and are being cured. These people develop immunity and will not be infected again. Develop a model for this system. In the literature on epidemiology the model is known as the SIR model. Then extend your model to the case that the immunity of the removed people is relative. This model is known as the SIRS model, which indicates the change of status from R back to S.

Exercise 2.51: Consider a savings account which yields 20% interest per annum. Let the starting deposit be 200000 Rials and assume that a yearly deposit of k(i) is made at the end of the ith year. Find a model that shows the evolution of the balance at the end of the nth year. In the systems terminology, what are the starting deposit and the yearly deposit k(i)?
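The savings account of Exercise 2.51 is a first-order discrete-time system: the balance is the state, the yearly deposit k(i) is the input, and the starting deposit is the initial condition. A minimal sketch (the constant deposit schedule in the example call is an arbitrary illustration):

```python
def balance(n, k, b0=200_000, rate=0.20):
    """Balance at the end of year n, following b(i) = (1 + rate)*b(i-1) + k(i).

    b0 is the starting deposit (the initial condition of the system);
    k is a function giving the deposit k(i) made at the end of year i
    (the input signal in systems terminology).
    """
    b = b0
    for i in range(1, n + 1):
        b = (1 + rate)*b + k(i)
    return b

# Example: a constant yearly deposit of 50000 Rials
print(balance(3, lambda i: 50_000))
```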

Figure 2.105 Exercise 2.46. Five different systems. (Left: heated blending tank; right: heated stirred reactor; streams (z1, q1, T1) and (z2, q2, T2), feed gate (zf, qf, Tf), effluent gate (ze, qe, Te), tank state (z, q, T), heat inputs qH, and the connection paths P1, P2.)

188

Introduction to Linear Control Systems

Exercise 2.52: Three farmers A, B, and C grow rice, wheat, and barley, respectively. They agree to divide their produce among each other on the following basis. Farmer A gets 1/4 of the rice, 1/2 of the wheat, and 1/4 of the barley. Farmer B gets 1/3 of the rice, 1/4 of the wheat, and 1/6 of the barley. Finally, farmer C gets 5/12 of the rice, 1/4 of the wheat, and 7/12 of the barley. (i) What prices should the farmers assign to their respective crops in order to be at equilibrium? (ii) How many solutions does the problem have? (iii) Repeat part (i) if the minimum price is supposed to be $1000.

Exercise 2.53: Consider the closed economy with the exchange matrix E = [0.4 0; 0.6 1]. Show that it does not admit a strictly positive solution.

Exercise 2.54: Consider a closed economy whose exchange matrix E is rank deficient and has some zero entries. Show that it can nevertheless admit a strictly positive solution.

Exercise 2.55: Consider the closed economy. Under what conditions on the exchange matrix does it not admit a strictly positive solution?

Exercise 2.56: A company has three sectors A, B, and C. Sector A consumes $0.2 of its own output, $0.3 of B's output, and $0.4 of C's output for every $1 it produces. Sector B consumes $0.3 of its own output, $0.3 of A's output, and $0.1 of C's output for every $1 it produces. Finally, sector C consumes $0.4 of its own output, $0.1 of A's output, and $0.2 of B's output for every $1 it produces. Determine the production of each sector so that each month they meet the external demand of $10000, $20000, and $15000 for A's, B's, and C's products, respectively. The external demand is also called the net production.

Exercise 2.57: Consider the open economy. It is productive if (and only if) (I − C)⁻¹ exists and has only nonnegative entries. (i) What is the philosophy behind this naming? (ii) Show that an open economy is productive iff there is a production vector x ≥ 0 such that x > Cx. (iii) Show that a sufficient condition for productivity of an open economy is that all the principal minors of I − C are positive. (iv) Show that a sufficient condition for productivity of an open economy is that each of the row sums of C is less than 1.

Exercise 2.58: This exercise is for all readers, but is most appropriate for readers of mathematics. Consider an ecology with two species (such as prey and predator) whose populations are denoted by x1 and x2. In the literature different models are proposed for their dynamic interaction, such as the Lotka-Volterra model, the competition model, the symbiosis model, etc. Suppose that the dynamics of the system are given by the following equations. (i) Explain the systems in as much detail as possible. (ii) Simulate the systems for various typical values of the parameters.
(iii) Find the linearized model in each case and investigate its stability. Note that stability analysis of the actual nonlinear system is a challenging problem which you are encouraged to conduct in your graduate studies.

i.  ẋ1 = x1(a − b x2),  ẋ2 = x2(c − d x1)

ii.  ẋ1 = a1 x1 (1 − x1/b11 − x2/b12),  ẋ2 = a2 x2 (1 − x1/b21 − x2/b22)

iii.  ẋ1 = a1 x1 (1 − α x1/(1 + x2)),  ẋ2 = a2 x2 (1 − β x2/(1 + x1))

iv.  ẋ1 = x1(a − x1 − x2),  ẋ2 = x2(b − x1 − x2)

v.  ẋ1 = a1 x1 + b1 x1 x2,  ẋ2 = a2 x2 + b2 x1 x2
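For part (ii) of Exercise 2.58, system (i) can be simulated with a fixed-step forward-Euler scheme. This is only a sketch; the step size, horizon, and parameter values below are arbitrary illustrative choices:

```python
def simulate_pp(a, b, c, d, x1, x2, dt=0.01, steps=2000):
    """Forward-Euler integration of system (i):
    dx1/dt = x1*(a - b*x2),  dx2/dt = x2*(c - d*x1)."""
    traj = [(x1, x2)]
    for _ in range(steps):
        # simultaneous update: both derivatives use the old state
        x1, x2 = (x1 + dt * x1 * (a - b * x2),
                  x2 + dt * x2 * (c - d * x1))
        traj.append((x1, x2))
    return traj

# The equilibria are (0, 0) and (c/d, a/b); the Jacobian evaluated at an
# equilibrium gives the linearized model asked for in part (iii).
```

A fixed-step Euler scheme is adequate for a first look; a variable-step solver is preferable for long horizons or stiff parameter choices.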

Exercise 2.59: Reconsider the last part of Problem 2.29. Find the unknown parameters and analyze the stability of the solution in each case.

Exercise 2.60: Reconsider the last part of Problem 2.29. Solve the equation for the spherically symmetric three-dimensional case.

Exercise 2.61: Consider the one-dimensional time-independent Schrödinger equation given by (d²/dx² + k²)Ψ(x) = U(x)Ψ(x). Solve the equation for the two cases (i) x > 0 and (ii) −∞ < x < ∞, for an arbitrary function U(x).

Exercise 2.62: This exercise is for readers in physics departments. When a particle moves with a speed much less than the speed of light in an electromagnetic field, the relativistic effect can be neglected. In this case the Schrödinger equation is known as Pauli's equation, which was formulated by the Austrian physicist Wolfgang Pauli in 1927. He subsequently won the Nobel Prize in Physics in 1945 for his Exclusion Principle. Derive Pauli's equation by referring to the literature on electromagnetics.

Exercise 2.63: Using block diagram algebra, obtain the input-output transfer functions of the systems in Fig. 2.106.

Exercise 2.64: Draw the SFG equivalents of the systems of Exercise 2.63. Then compute their input-output transfer functions via Mason's rule.

Exercise 2.65: With the help of BD algebra determine whether any of the structures of Fig. 2.107 is a PID controller, given by PID: k1 + k2/s + k3 s, in the unity feedback structure. Note that PI: a + b/s, PD: c + ds, I: e/s, and D: fs.

Exercise 2.66: Draw the SFG equivalents of the systems of Exercise 2.65.

Exercise 2.67: Determine the transfer functions of the SFGs in Fig. 2.108.

Exercise 2.68: Draw the BD equivalents of the systems of Exercise 2.67.

Exercise 2.69: In the SFGs of Fig. 2.109 determine the outputs y1 and y2.

Exercise 2.70: For the systems of Exercise 2.69 determine the ratios y1/x with u1 = 0, y1/x with u2 = 0, y2/x with r1 = 0, and y2/x with r2 = 0.
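As a quick numeric sanity check related to Exercise 2.65: a series cascade of a PI block and a PD block expands to the PID form, since (a + b/s)(c + ds) = (ac + bd) + (bc)/s + (ad)s. The sketch below verifies this identity at a few sample points; it is an illustrative check, not a substitute for the BD-algebra derivation:

```python
def pi_blk(s, a, b):
    """PI block: a + b/s."""
    return a + b / s

def pd_blk(s, c, d):
    """PD block: c + d*s."""
    return c + d * s

def pid(s, k1, k2, k3):
    """PID: k1 + k2/s + k3*s."""
    return k1 + k2 / s + k3 * s

# (a + b/s)(c + d*s) = (a*c + b*d) + (b*c)/s + (a*d)*s  -- check numerically:
a, b, c, d = 2.0, 3.0, 5.0, 7.0
for s in (0.1, 1.0, 4.0, 10.0):
    assert abs(pi_blk(s, a, b) * pd_blk(s, c, d)
               - pid(s, a*c + b*d, b*c, a*d)) < 1e-9
```

Evaluating transfer functions at sample points this way is a cheap regression test for any block-diagram reduction you carry out by hand.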
Exercise 2.71: Draw the block diagram equivalents of the systems of Exercise 2.69.

Exercise 2.72: This exercise has five parts. 1. In Fig. 2.48 denote the outputs of the blocks c and g by Yc and Yg, respectively. Find Y/Yc, Y/Yg, Yg/Yc, Yc/Yg, Yg/R, and Yc/R. 2. In Fig. 2.49 denote the input to the block G3 by Up and the input to the block G2 by U2. Find Up/R, Up/U2, U2/Up, Y/Up, and Y/U2. 3. In all panels of Fig. 2.109 express x in terms of the inputs u1 and u2. 4. Find the sensitivity of either of the transfer functions in Part (1) to c and k; that is, find, e.g., the sensitivity of H := Yg/R with respect to c. 5. Find the sensitivity of either of the transfer functions in Part (2) to G2 and G5; that is, find, e.g., the sensitivity of H := U2/Up with respect to G5.
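Parts 4 and 5 of Exercise 2.72 use the classical sensitivity S = (∂H/∂G)(G/H) of a transfer function H to a parameter G. A hand-computed sensitivity can be checked numerically with a finite difference; for instance, for the unity-feedback map T(G) = G/(1 + G) the sensitivity with respect to G is 1/(1 + G). A sketch:

```python
def sensitivity(H, G, dG=1e-6):
    """Numerical sensitivity S = (dH/dG) * G / H(G), with the derivative
    approximated by a central difference of width 2*dG."""
    dH = (H(G + dG) - H(G - dG)) / (2 * dG)
    return dH * G / H(G)

# For T(G) = G / (1 + G) the classical result is S = 1 / (1 + G):
for G in (0.5, 2.0, 10.0):
    assert abs(sensitivity(lambda g: g / (1 + g), G) - 1 / (1 + G)) < 1e-4
```

The same one-liner check applies to any of the transfer functions in Parts 1 and 2 once they have been reduced to explicit expressions.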


Figure 2.106 Exercise 2.63. Eight panels of block diagrams built from blocks G1 through G10 with various series, parallel, and feedback interconnections.


Figure 2.107 Exercise 2.65. Six panels of unity-feedback structures built from P, I, D, PI, and PD blocks.

Figure 2.108 Exercise 2.67. Two signal-flow graph panels with branch gains a through r, input r, and output y.


Figure 2.109 Exercise 2.69. Three signal-flow graph panels with inputs r1, r2 (and u1, u2), branch gains a through q, internal node x, and outputs y1, y2.


3 Stability analysis

3.1 Introduction

Stability is at the heart of all analysis and synthesis considerations. Conceptually, “a” most neutral and general definition of stability that one may think of is that a system is stable if no quantity in the system grows unboundedly. This suggests that there is no overvoltage, overpressure, overtemperature, etc., hence no burnout, breakdown, explosion, etc., and the nominal functionality of the system continues. This definition seems physically justifiable and veritable; however, in certain nonlinear systems the state of the system may converge to limit cycles or strange attractors, and thus the nominal functioning of the system is disrupted. A better definition is thus required. In fact there are different notions of stability.¹ These notions, and the tools for studying stability, depend on the dynamical system being linear, nonlinear, continuous, discrete, discontinuous, deterministic, stochastic, etc. The literature is teeming with disparate and seemingly unrelated results on this issue. Even graduate textbooks, and the literature as a whole, to the best of our knowledge, do not provide a thorough overview of the subject. This shortcoming is briefly addressed in this book. In Appendix D we offer a general picture of stability theory, which will help ease your further understanding and investigation of the topic, even in cutting-edge graduate research. This is complemented by Appendix E on a particular stability method, namely that of Routh–Hurwitz, which is the main tool in this undergraduate course; the graduate-level research results on this tool are also discussed there. In this course we present basic undergraduate-level material. One of the main notions of stability is “Lyapunov stability” or “stability in the sense of Lyapunov.” A system is said to be Lyapunov stable if, starting near the equilibrium set, the state converges to it eventually;² see Appendix D.
If the equilibrium of the system is the origin, the definition thus says that the state of the unforced system must eventually converge to zero. The definition has a general usage, but in this course we shall use it only for LTI systems. Another useful stability notion that we use for LTI systems is that of “Bounded-Input Bounded-Output (BIBO) stability.” In this notion a system is stable if the output of the system remains bounded for all bounded inputs. For LTI systems both of the aforementioned properties are related to the roots of the characteristic equation of the system, which are the eigenvalues of the state matrix of the system; also recall Section 2.2.3. This level of exposition is sufficient for this first course in the control discipline. In the second course on state-space methods and in graduate courses you will of course learn more definitions and results about the stability issue.

The rest of this chapter is organized as follows: The concepts of Lyapunov and BIBO stability are discussed in Section 3.2. In Sections 3.3–3.6 we introduce the stability tests of Routh, Hurwitz, and Lienard–Chipart. Relative stability and D-stability are studied in Sections 3.7 and 3.8, respectively. The particular relation of the stability tests to controller design is addressed in Section 3.9. Next, the Kharitonov theory is introduced in Section 3.10. Internal stability and strong stabilization are considered in Sections 3.11 and 3.12, respectively. Stability of LTV systems is briefly studied in Section 3.13, along with some new results. Summary, further readings, worked-out problems, and exercises follow in Sections 3.14–3.17. Many important issues are discussed in the exercises; as in the other chapters, the exercises are an indispensable part of the chapter. Appendices D and E are a must for readers of mathematics.

¹ Note that no matter what notion we adopt, stability can be strengthened to robust stability if we consider the ability of the system to resist the influence of a stimulus which is unknown a priori. A system is said to be robustly stable if such stimuli (in the form of perturbations in its parameters or perhaps in its inputs, disturbances, nonlinearities, and initial conditions) do not essentially change it, so that it remains stable in the adopted notion. For your knowledge let us add that this is very similar to the concept of Input-to-State Stability (ISS), and its variants, which we mention in Appendix D.
² More precisely, if there is a domain (called the region of attraction) for an equilibrium point such that if the initial state is in it, then the state remains in a prespecified neighborhood of that equilibrium point. Of course, there are various details; for example, the region of attraction must contain the equilibrium point as an interior point, stability may be uniform or nonuniform, and some details are specific to autonomous, nonautonomous, and periodic solutions. We shall treat the subject in the sequel of the book on state-space methods.

Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00003-3 © 2017 Elsevier Inc. All rights reserved.

3.2 Lyapunov and BIBO stability

Around the end of the 19th century the stability problem was one of the issues the mathematics community was concerned with. More precisely, they were interested in the stability of an equation (in particular, an ODE), and one proposed solution was to study it in the Laplace domain. The four stability-theory pioneers of the nineteenth century were C. Hermite (French, 1822–1901), A. Hurwitz (German, 1859–1919), E. J. Routh (English, 1831–1907), and A. M. Lyapunov (Russian, 1857–1918). Hermite was concerned with the problem of whether the roots of an equation all have positive imaginary parts. We can now interpret this as a stability determination via a simple change of variables. Hermite’s result is based on quadratic forms, and it probably helped the others in obtaining their results. Those of Hurwitz and Routh will be briefly reviewed in the ensuing sections. The results of Lyapunov (Lyapunov, 1992) are much more advanced and are studied in the next course on state-space methods as well as in graduate courses. They have a general form and are applicable to time-varying and nonlinear systems as well. In fact, although they have been extended in certain respects, they have remained the main techniques for stability studies despite being over a century old. Lyapunov is one of the most influential mathematicians of all time. The references (Jury, 1996; Siljak, 1976) give a general nontheoretical account of the issue. It seems that the phrase “Lyapunov stable” was coined by the control community, not by Lyapunov himself, in order to distinguish it from other types of stability; however, it is not known to us who first proposed it. The concept of BIBO stability was first proposed by James and Weiss in 1946 (James et al., 1947). Two well-known results are as follows; see also Questions 3.1 and 3.2.


Theorem 3.1: An LTI system is stable in the sense of Lyapunov iff the roots of the characteristic equation are all in the Closed Left-Hand Plane (CLHP), with any roots on the jω-axis being of multiplicity one. Δ

That is, if the system has Open Right-Hand Plane (ORHP) poles, or jω-axis poles of multiplicity larger than one, it becomes unstable if the initial conditions are nonzero.

Theorem 3.2: An LTI system is BIBO stable iff all the roots of the characteristic equation are in the OLHP. Δ

Note that if a system has jω-axis poles of multiplicity one it is not BIBO stable, because a sinusoid of the same frequency drives it unstable. To see this, recall that

$$\mathcal{L}^{-1}\!\left[\frac{\omega}{s^2+\omega^2}\cdot\frac{\omega^2}{s^2+\omega^2}\right]=\frac{1}{2}\left(\sin\omega t-\omega t\cos\omega t\right),\qquad \mathcal{L}^{-1}\!\left[\frac{s}{s^2+\omega^2}\cdot\frac{\omega^2}{s^2+\omega^2}\right]=\frac{\omega}{2}\,t\sin\omega t,$$

and $\mathcal{L}^{-1}\!\left[\frac{1}{s}\cdot\frac{1}{s}\right]=t$. In the first case the system is $\omega^2/(s^2+\omega^2)$ and the input is the sine function. In the second case the same system is driven by the cosine function. And in the third case the system is $1/s$ (a pole at frequency zero) and the input is the step function (a cosine of frequency zero).
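The third case, a step driving the integrator 1/s, is easy to check numerically. Below is a minimal sketch (not from the book; the forward-Euler solver, step size, and horizon are our own arbitrary choices) contrasting the marginally stable integrator with a strictly stable first-order lag under the same bounded input:

```python
def euler_response(deriv, t_end=50.0, dt=0.001):
    """Forward-Euler integration of y' = deriv(t, y), y(0) = 0."""
    y, t = 0.0, 0.0
    while t < t_end:
        y += dt * deriv(t, y)
        t += dt
    return y

# Integrator 1/s driven by a unit step: y' = 1, so y(t) = t -> unbounded.
y_integrator = euler_response(lambda t, y: 1.0)

# Stable lag 1/(s + 1) driven by the same step: y' = -y + 1, so y(t) -> 1.
y_lag = euler_response(lambda t, y: -y + 1.0)

print(y_integrator)  # grows like t
print(y_lag)         # settles near 1
```

The bounded step input thus produces an unbounded output exactly when there is a pole at the input's frequency (here, zero).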

Remark 3.1: In engineering practice, except in a limited number of situations, like solving certain equations or optimization problems by neural network circuitry, the system always works with a nonzero input. That is, what matters in practice is BIBO stability and not Lyapunov stability. In other words, in practice the OLHP, otherwise described as “the region to the left of the jω-axis,” is the stability region; the jω-axis itself is not part of the stability region and is unstable.

Remark 3.2: The comparison of these two stability concepts is that Lyapunov stability is weaker than BIBO stability: every BIBO stable system is also Lyapunov stable, but the converse is not necessarily true. Moreover, note that if an LTI system whose characteristic-equation roots are in the OLHP has no input, then its state (due to initial conditions) tends to zero; see Exercise 3.1. The above theorems are easily demonstrated in the following examples.

Example 3.1: Consider the system $L(s)=\frac{3s^2+3s+4}{s^3+s^2+3s+3}$ with the initial condition $x_0$. It is Lyapunov stable since $L(s)=\frac{2s}{s^2+3}+\frac{1}{s^2+3}+\frac{1}{s+1}$, and thus the conditions of Theorem 3.1 are satisfied. The inverse Laplace transform is $l(t)=2\cos\sqrt{3}t+\frac{1}{\sqrt{3}}\sin\sqrt{3}t+e^{-t}$. That is, the unforced response of the system (or the response to the initial condition) is $l(t)x_0=\big(2\cos\sqrt{3}t+\frac{1}{\sqrt{3}}\sin\sqrt{3}t+e^{-t}\big)x_0$, which is bounded as $t\to\infty$. Note that the first two terms are bounded sinusoidal functions of time while the third term is an exponential that decays to zero.

Example 3.2: Consider the system $L(s)=\frac{s^4+2s^3+11s^2+4s+10}{s^6+2s^5+7s^4+12s^3+15s^2+18s+9}$ with the initial condition $x_0$. It is not Lyapunov stable since $L(s)=\frac{2s}{(s^2+3)^2}+\frac{1}{(s^2+3)^2}+\frac{1}{(s+1)^2}$, whose inverse Laplace transform is $l(t)=\frac{1}{\sqrt{3}}t\sin\sqrt{3}t+\frac{1}{6\sqrt{3}}\left(\sin\sqrt{3}t-\sqrt{3}t\cos\sqrt{3}t\right)+te^{-t}$. That is, the unforced response of the system is $l(t)x_0$, which grows unboundedly as $t\to\infty$. Note that the unboundedness is because the first two terms grow unboundedly; the third one does not.

Example 3.3: Consider the system $L(s)=\frac{2s^3-1.2s^2+1.4s-1}{s^4-0.6s^3-2.6s^2+4.2s-2}$ with the initial condition $x_0$. It is not Lyapunov stable since $L(s)=\frac{1}{s^2-1.6s+1}+\frac{1}{s-1}+\frac{1}{s+2}$, whose inverse Laplace transform is $l(t)=\left[\frac{1}{0.6}e^{0.8t}\sin 0.6t\right]+\left[e^{t}\right]+\left[e^{-2t}\right]$. That is, the unforced response of the system is $l(t)x_0$, which grows unboundedly as $t\to\infty$. The unboundedness is because the first two terms grow unboundedly.

An important situation that may happen in a system is that of pole-zero cancellation. That is, when we form the transfer function of the system from its state-space formulation, some pole(s) and zero(s) cancel each other out, and thus the order of the transfer function, being the degree of the denominator, becomes less than the number of states. The questions of when and why this happens are answered thoroughly in the second undergraduate course on linear control in the state-space domain, under the topics of controllability, observability, and minimality. The relation of this phenomenon to the stability concept is that if a system is stable in the transfer function formulation (meaning that all its poles are stable) but has undergone an unstable pole-zero cancellation, it is in fact unstable. This is demonstrated in the following Example 3.4.


Example 3.4: Consider the system

$$\dot{x}=\begin{bmatrix}1&0\\-2&-3\end{bmatrix}x+\begin{bmatrix}0\\1\end{bmatrix}u,\qquad y=\begin{bmatrix}1&1\end{bmatrix}x.$$

We find its transfer function:

$$\frac{Y(s)}{U(s)}=\frac{s-1}{(s-1)(s+3)}=\frac{1}{s+3}.$$

The transfer function of the system is $\frac{1}{s+3}$, which is stable, so apparently the system is stable, but it is not, because the transfer function is actually $\frac{s-1}{(s-1)(s+3)}$, which hides an unstable pole-zero cancellation. This can be further verified in the state-space formulation as follows. We write down the state equations of the system as $\dot{x}_1=x_1$, $\dot{x}_2=-2x_1-3x_2+u$. From the first equation we get $x_1=x_1(0)e^{t}$, which grows unboundedly if the initial condition is nonzero. The system is not even Lyapunov stable, let alone BIBO stable.

Remark 3.3: It is worth emphasizing that the stability of a system cannot be assessed from its transfer function unless we know that there has been no unstable pole-zero cancellation in it. In this course we assume this for a given transfer function.

In the next example we discuss the effect of the input.

Example 3.5: We reconsider Example 2.7 of Chapter 2, the LTI system described there by the matrices $A=\begin{bmatrix}-1&2\\0&-3\end{bmatrix}$, $C=\begin{bmatrix}1&0\end{bmatrix}$, $D=\begin{bmatrix}1&-2\end{bmatrix}$, and the $B$ given in that example. For $x(0)=[1\;\; -1]^T$ and $u(t)=\begin{bmatrix}u_1(t)\\u_2(t)\end{bmatrix}=\begin{bmatrix}\mathrm{step}(t)\\\mathrm{ramp}(t)\end{bmatrix}$ we found that the solution is given by $y(t)=y_{x(0)}(t)+y_{u_1}(t)+y_{u_2}(t)$, where $y_{x(0)}=e^{-3t}$ $(t\ge 0)$, $y_{u_1}=2-e^{-t}$ $(t\ge 0)$, and $y_{u_2}=-t-\frac{2}{3}+\frac{1}{2}e^{-t}+\frac{1}{6}e^{-3t}$ $(t\ge 0)$. We note that $|y_{u_2}|\to\infty$ (and thus $|y|\to\infty$) as $t\to\infty$, but this does not mean that BIBO stability fails, because it is due to $u_2\to\infty$ as $t\to\infty$. In other words, the input $u_2$ is not a bounded input, and thus we are not guaranteed a bounded output. On the other hand, note that the bounded input $u_1$ results in a bounded output. All in all, for BIBO stability of LTI systems only the eigenvalues of the matrix $A$ matter; the other system matrices do not play any role. (The reader should be careful that such a simple analysis is not valid for time-varying and nonlinear systems!)

Because it is BIBO stability that matters, for brevity the term BIBO is sometimes omitted and we simply use the word stable. This is also common practice in the literature. In particular, for the stability tests in the rest of this chapter we actually mean BIBO stability, but it is customary to simply say stability.


Question 3.1: This question has three parts. (1) Where is (are) the equilibrium point(s) in the aforementioned examples? (2) Does the state of the system converge to it (them)? (3) In Example 3.4, does $x_2$ necessarily diverge?

The theorem we offered on BIBO stability is so classical that, unfortunately, a reference is no longer cited for it in the literature. The earliest reference we know for it is (James et al., 1947); in Chapter 2 of that reference it is discussed and proven by H. M. James and P. R. Weiss. The proof is easy and you will provide it in Exercise 3.1. James and Weiss also show that BIBO stability of an LTI system is related to a property of its impulse response function: it must be absolutely integrable.

Question 3.2: This question has two parts. (1) In the previous theorems, have we implicitly assumed that the system is not improper? Discuss the stability of an improper transfer function. (2) Is absolute integrability of the impulse response also a necessary and sufficient condition for BIBO stability of LTV systems? By analysis and a counterexample show that the answer is negative.

3.3 Stability tests

Our problem of determining the stability of the system is thus reduced to deciding whether the roots of an equation are in the OLHP. As we said before, in the late 19th century mathematicians were concerned with this problem and approached it in the Laplace domain. But is there any method to find this out without solving the equation? This question was independently answered by Routh and Hurwitz, in the method now known as the Routh–Hurwitz stability test. The most famous stability criteria/tests in the continuous-time domain are as follows: Routh’s test, Hurwitz’ test, Lienard–Chipart’s test, Nyquist’s test, Bode’s test, Nichols’ test, the Root Locus method, Lyapunov’s methods, Kharitonov’s method, and the Direct (or Computerized) test.³ For further information and details see Appendix D as well as the sequel of the book on state-space methods.

Let $\Delta(s)$ denote the “Characteristic Equation,” which is the denominator of the transfer function of interest.⁴ If we define $\Delta(s)=a_n s^n+a_{n-1}s^{n-1}+\cdots+a_1 s+a_0$, it is easy to verify that the roots of $\Delta(s)=0$ satisfy

$$\begin{aligned}\sum_i s_i&=-a_{n-1}/a_n\\ \sum_{i<j} s_i s_j&=a_{n-2}/a_n\\ \sum_{i<j<k} s_i s_j s_k&=-a_{n-3}/a_n\\ &\ \ \vdots\\ s_1\cdots s_n&=(-1)^n a_0/a_n\end{aligned}\qquad(3.1)$$

³ As we shall learn in Chapters 7 and 8, the stability conditions in the Bode and Nichols contexts are erroneous.
⁴ If there is no pole-zero cancellation (which we assume in this course), this is also $|sI-A|=0$, where $A$ refers to the state matrix of an associated state-space model.


Thus a necessary condition for stability is that all coefficients be strictly positive or strictly negative; that is, they must all be nonzero and have the same sign. Equivalently, a sufficient condition for instability is a missing coefficient or a sign change among them. See also item 3 of Further Readings. The condition that all the coefficients of the characteristic equation be nonzero translates to the following important remark.

Remark 3.4: Let the open-loop transfer function be given by $L(s)=\frac{n(s)}{s^N d(s)}$, where $\deg(n(s))=m$. Then a necessary condition for stability of the closed-loop system (in a negative unity-feedback structure) is that $N-m\le 1$. Equivalently, a sufficient condition for instability is that $N-m>1$. For instance, the closed-loop systems of the open-loop systems $L(s)=\frac{k}{s^2 d(s)}$ and $L(s)=\frac{as+b}{s^3 d(s)}$ are unstable.
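The coefficient condition above is mechanical enough to code. A minimal sketch (the function name is ours, not the book's):

```python
def passes_necessary_condition(coeffs):
    """Necessary (not sufficient!) stability condition on the coefficients
    [a_n, ..., a_1, a_0] of the characteristic polynomial:
    all nonzero and of one sign."""
    if any(c == 0 for c in coeffs):
        return False
    return all(c > 0 for c in coeffs) or all(c < 0 for c in coeffs)

# Remark 3.4 with L(s) = k/(s^2 d(s)), d(s) = s + 1, k = 2: the closed-loop
# characteristic polynomial is s^3 + s^2 + 0*s + 2, a missing coefficient.
print(passes_necessary_condition([1, 1, 0, 2]))   # False: unstable for sure
print(passes_necessary_condition([1, 2, 3, 4]))   # True (necessary only)
```

A True answer proves nothing by itself; the Routh test of the next section settles the question.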

Question 3.3: Let the plant transfer function be given by $P(s)=\frac{N_p(s)}{D_p(s)}$, and let it be unstable. What is the minimum order of the controller (and what is a/the controller itself) that stabilizes the closed-loop system (in a negative unity feedback)? Here of course we mean internal stability; see Section 3.11. It is possible to find a sufficient condition, or even a necessary and sufficient condition, for this question at least in certain cases, but to the best of our knowledge a necessary and sufficient answer is not known for the general case; see Chapter 5 and item 8 of Further Readings of Chapter 10. An example is discussed in the following:

Example 3.6: Consider the system $P(s)=\frac{1}{(s-1)(s-2)}$. Find a minimum-order stabilizing controller for it. We start with $C_1(s)=k$. The characteristic equation, $s^2-3s+2+k=0$, is easily seen to be unstable because its coefficients cannot all have the same sign. Next we try $C_2(s)=k/(s+p)$. With this controller the characteristic equation is $s^3+(p-3)s^2+(2-3p)s+2p+k=0$. This equation is always unstable as well, since positivity of its second and third coefficients is conflicting: $p-3>0$ and $2-3p>0$ cannot hold simultaneously. Then we try $C_3(s)=k(s+z)/(s+p)$. This time we arrive at $s^3+(p-3)s^2+(k+2-3p)s+2p+kz=0$. This equation can be stabilized by proper choice of the controller parameters. (For stability there should hold $p-3>0$, $k+2-3p>0$, $2p+kz>0$, and $2p+kz<(p-3)(k+2-3p)$; it is easy to verify that these conditions are not conflicting.) We will reconsider this system in Example 5.22 in Chapter 5 in the context of the root locus, where we will present a more tangible approach to its stabilization.

If the system is of a higher order, the above approach is in effect impossible to apply. This shows, at least from a certain perspective, why Question 3.3 has not yet been answered in its generality.


3.4 Routh’s test

Routh’s original results on stability are not very user-friendly. The Routh tabular form, or the Routh table, is given by the following array. It should be noted that this form of Routh’s test has been termed the Routh–Hurwitz test by the scientific community, as it is obtained with the help of the Hurwitz determinants; see Section 3.5.

    s^n     | a_n      a_{n-2}   a_{n-4}   a_{n-6}   ...
    s^{n-1} | a_{n-1}  a_{n-3}   a_{n-5}   a_{n-7}   ...
    s^{n-2} | b_1      b_2       b_3       b_4       ...
    s^{n-3} | c_1      c_2       c_3       c_4       ...
    s^{n-4} | d_1      d_2       d_3       d_4       ...
      ...   | ...      ...
    s^2     | e_1      e_2
    s^1     | f_1
    s^0     | g_1                                        (3.2)

in which

$$b_1=\frac{-1}{a_{n-1}}\begin{vmatrix}a_n&a_{n-2}\\a_{n-1}&a_{n-3}\end{vmatrix}=\frac{a_n a_{n-3}-a_{n-2}a_{n-1}}{-a_{n-1}},\qquad b_2=\frac{-1}{a_{n-1}}\begin{vmatrix}a_n&a_{n-4}\\a_{n-1}&a_{n-5}\end{vmatrix}=\frac{a_n a_{n-5}-a_{n-4}a_{n-1}}{-a_{n-1}},$$
$$b_3=\frac{-1}{a_{n-1}}\begin{vmatrix}a_n&a_{n-6}\\a_{n-1}&a_{n-7}\end{vmatrix}=\frac{a_n a_{n-7}-a_{n-6}a_{n-1}}{-a_{n-1}},\ \ldots$$
$$c_1=\frac{-1}{b_1}\begin{vmatrix}a_{n-1}&a_{n-3}\\b_1&b_2\end{vmatrix}=\frac{a_{n-1}b_2-a_{n-3}b_1}{-b_1},\qquad c_2=\frac{-1}{b_1}\begin{vmatrix}a_{n-1}&a_{n-5}\\b_1&b_3\end{vmatrix}=\frac{a_{n-1}b_3-a_{n-5}b_1}{-b_1},$$
$$c_3=\frac{-1}{b_1}\begin{vmatrix}a_{n-1}&a_{n-7}\\b_1&b_4\end{vmatrix}=\frac{a_{n-1}b_4-a_{n-7}b_1}{-b_1},\ \ldots$$
$$d_1=\frac{-1}{c_1}\begin{vmatrix}b_1&b_2\\c_1&c_2\end{vmatrix}=\frac{b_1c_2-b_2c_1}{-c_1},\qquad d_2=\frac{-1}{c_1}\begin{vmatrix}b_1&b_3\\c_1&c_3\end{vmatrix}=\frac{b_1c_3-b_3c_1}{-c_1},\ \ldots,\qquad g_1=e_2.\qquad(3.3)$$


Remark 3.5: The structure of the table is as follows, shown schematically for n = 9 (each × is an entry):

    s^9 | ×  ×  ×  ×  ×
    s^8 | ×  ×  ×  ×  (×)
    s^7 | ×  ×  ×  ×
    s^6 | ×  ×  ×  (×)
    s^5 | ×  ×  ×
    s^4 | ×  ×  (×)
    s^3 | ×  ×
    s^2 | ×  (×)
    s^1 | ×
    s^0 | (×)                                (3.4)

That is, from bottom to top every two consecutive rows have the same number of columns, and this number is incremented every other row. Also, if n is odd the first two rows have the same number of columns; otherwise the first row is longer than the second by one column. Moreover, the entries enclosed in parentheses are all equal (to $a_0$).

Theorem 3.3: The number of sign changes in the first column is equal to the number of poles in the ORHP. In particular, this means that the system is stable if there is no sign change in the first column. Δ

Theorem 3.4: Any row can be divided or multiplied by a positive constant without making any difference to the result. (This may be done to simplify the computations when the numbers are too large or decimal.) Δ

Example 3.7: Determine the number of unstable poles, if any, of the system with characteristic equation $s^6+3s^5+2s^4+9s^3+5s^2+12s+20$. The Routh tabular form is given in the following:

    s^6 | 1       2    5    20
    s^5 | 3       9    12
    s^4 | -1      1    20
    s^3 | 12      72
    s^2 | 7       20
    s^1 | 37.71
    s^0 | 20

It is observed that there are two sign changes (from 3 to −1 and from −1 to 12) in the first column, and thus there exist two ORHP poles.
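The tabular procedure can be automated. The following is a sketch of our own (it covers only the regular case, with no zero first-column entries and no zero rows); on Example 3.7 it reports the same count of two ORHP poles:

```python
def next_row(upper, lower):
    """Next Routh row computed from the two rows above it (zero-padded)."""
    width = max(len(upper), len(lower))
    u = list(upper) + [0.0] * (width - len(upper))
    l = list(lower) + [0.0] * (width - len(lower))
    return [(l[0] * u[i + 1] - u[0] * l[i + 1]) / l[0] for i in range(width - 1)]

def routh_rhp_count(coeffs):
    """Number of ORHP roots of a_n s^n + ... + a_0 via sign changes in the
    first column of the Routh array (regular case only)."""
    rows = [list(map(float, coeffs[0::2])), list(map(float, coeffs[1::2]))]
    for _ in range(len(coeffs) - 2):          # n + 1 rows in total
        rows.append(next_row(rows[-2], rows[-1]))
    first = [r[0] for r in rows]
    return sum(1 for x, y in zip(first, first[1:]) if x * y < 0)

print(routh_rhp_count([1, 3, 2, 9, 5, 12, 20]))  # 2, matching Example 3.7
```

The special cases of Section 3.4.1 (a zero first-column entry or a whole zero row) would make the division by l[0] fail and need the remedies described there.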

Example 3.8: Find the range of parameters such that the system $s^3+as^2+bs+c=0$ is stable. We first note that because one of the coefficients (that of $s^3$) is positive, all the rest should also be positive. Hence it is necessary that $a>0,\ b>0,\ c>0$. Next we form the Routh array:

    s^3 | 1             b
    s^2 | a             c
    s^1 | (ab - c)/a
    s^0 | c

Thus, in addition to the previously derived conditions, there should hold $c<ab$.
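For the cubic of Example 3.8 the whole test collapses to four inequalities, which the following sketch (helper name ours) encodes:

```python
def cubic_is_stable(a, b, c):
    """Stability of s^3 + a*s^2 + b*s + c per Example 3.8:
    a, b, c > 0 and ab > c."""
    return a > 0 and b > 0 and c > 0 and a * b > c

print(cubic_is_stable(3, 3, 1))   # True: this is (s + 1)^3
print(cubic_is_stable(1, 1, 2))   # False: ab = 1 < c = 2
```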

3.4.1 Special cases

The derivation of the Routh tabular form fails in two special cases, in which indefinite results are obtained: (1) there is a zero in the first column (but the corresponding row has at least one nonzero element); (2) there is a complete zero row. It must be mentioned that if (2) occurs, it occurs in an odd-numbered row, never in an even-numbered one; however, (1) can happen in both even- and odd-numbered rows. In the sequel we discuss possible ways of addressing the above problems. There are three typical approaches to tackle the first problem:
1. Substitute s with 1/x. Clearly, this method does not work if the coefficients of the original characteristic equation are symmetric.
2. Multiply by (s + a), a > 0, e.g., (s + 1), and carry out the original method.
3. Substitute the 0 with ε > 0. The number of sign changes in the first column is equal to the number of ORHP poles. If the elements above and below ε have the same sign, there is a pair of imaginary roots.⁵

⁵ It is easy to provide examples for which the method fails. In fact, this seemingly simple problem is not yet fully resolved and new results are still being reported, although not frequently. The interested reader should consult the respective literature, including Appendix E, for further results.


As for the second special case, a recipe is as follows. Substitute the zero row with the coefficients of the derivative of the auxiliary equation obtained from the previous row; then resume the usual procedure. The details are as follows. Suppose the row $s^{2k-1}$ of the table is identically zero. The auxiliary equation has the form $\alpha_{2k}s^{2k}+\alpha_{2k-2}s^{2k-2}+\cdots+\alpha_2 s^2+\alpha_0=0$, where its coefficients $\alpha_{2i}$ are the entries of the previous row, $s^{2k}$. Note that the number of these coefficients is $k+1$. The following points are noteworthy:
1. The auxiliary equation is always even.
2. The roots of the auxiliary equation are roots of the original characteristic equation.
3. These roots occur in pairs that are the negatives of each other; that is, they have the general forms $\pm\alpha$, $\pm\beta j$, $\pm(\alpha\pm\beta j)$.

The subsequent examples demonstrate the above cases and the proposed solutions.

Example 3.9: Determine the stability of the equation $s^4+2s^3+2s^2+4s+3=0$.

    s^4 | 1    2    3
    s^3 | 2    4
    s^2 | 0    3

In forming the Routh table we face a row whose first element is zero but which is not identically zero. We try all three proposed methods to address it.
1. By the application of the first method we arrive at $3x^4+4x^3+2x^2+2x+1=0$, whose Routh tabular procedure can be completed as follows:

    x^4 | 3      2    1
    x^3 | 4      2
    x^2 | 0.5    1
    x^1 | -6
    x^0 | 1

As is seen, there are two sign changes in the first column and thus there are two ORHP roots.
2. Through the second method we obtain $s^5+3s^4+4s^3+6s^2+7s+3=0$, whose Routh tabular form is completed as follows:

    s^5 | 1     4    7
    s^4 | 3     6    3
    s^3 | 2     6
    s^2 | -3    3
    s^1 | 8
    s^0 | 3

Likewise, here we conclude that there are two ORHP poles.
3. As for the third method, the table is completed as follows:

    s^4 | 1              2    3
    s^3 | 2              4
    s^2 | 0 (ε)          3
    s^1 | 4 - 6/ε < 0
    s^0 | 3

It is observed that for small ε the sign of 4 − 6/ε is negative. There are two sign changes in the first column and thus there are two ORHP poles.
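Method 1 works because x^n p(1/x) is exactly the polynomial with reversed coefficients, so the roots map to their reciprocals, and Re(1/r) has the same sign as Re(r). A small sanity check of that identity on the polynomial of Example 3.9 (helper names ours):

```python
def polyval(coeffs, x):
    """Evaluate a_n x^n + ... + a_0 by Horner's rule."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

p = [1, 2, 2, 4, 3]          # s^4 + 2s^3 + 2s^2 + 4s + 3 from Example 3.9
p_rev = p[::-1]              # 3x^4 + 4x^3 + 2x^2 + 2x + 1

for x in (0.5, -1.7, 2.0):
    lhs = x ** 4 * polyval(p, 1.0 / x)   # x^n * p(1/x)
    rhs = polyval(p_rev, x)              # reversed-coefficient polynomial
    assert abs(lhs - rhs) < 1e-9
print("x^4 * p(1/x) matches the reversed-coefficient polynomial")
```

This is also why the method fails for symmetric (palindromic) coefficients: the reversed polynomial is then the same one, and nothing is gained.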

Example 3.10: Given $s^4+3s^3+6s^2+12s+8=0$, determine its stability.

    s^4 | 1        6     8
    s^3 | 3        12
    s^2 | 2        8
    s^1 | 0 (4)
    s^0 | 8

In constructing the table we encounter a row which is identically zero. We thus replace its zeros with the coefficients of the derivative of the auxiliary equation made from the previous row. More precisely, the auxiliary equation is $2s^2+8=0$ and its derivative is $4s=0$. So we replace the 0 with 4, shown in parentheses. Then we continue forming the table as usual. It is observed that there is no sign change in the first column and thus there is no root in the ORHP. This also means that the roots of the auxiliary equation are on the jω-axis; in fact they are ±2j. Also note that the roots of the auxiliary equation are roots of the original equation. Its other roots are −1, −2, which can be found by dividing the original equation by the auxiliary equation.
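The auxiliary-polynomial claims of Example 3.10 can be verified directly: the roots ±2j of 2s² + 8 = 0 are also roots of the original quartic, and differentiating the auxiliary polynomial gives the replacement row. A quick check (helper names ours):

```python
def polyval(coeffs, s):
    """Evaluate a polynomial (highest power first) at a complex point."""
    acc = 0
    for c in coeffs:
        acc = acc * s + c
    return acc

quartic = [1, 3, 6, 12, 8]   # s^4 + 3s^3 + 6s^2 + 12s + 8 from Example 3.10

# Roots of the auxiliary equation 2s^2 + 8 = 0 are +/- 2j ...
for s in (2j, -2j):
    assert polyval(quartic, s) == 0   # ... and they are roots of the original.

# Derivative of the auxiliary polynomial 2s^2 + 8: coefficients of 4s,
# i.e. [4, 0], which is what replaces the zero row.
aux = [2, 0, 8]
deriv = [c * (len(aux) - 1 - i) for i, c in enumerate(aux)][:-1]
print(deriv)  # [4, 0]
```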


Example 3.11: Given $s^5-s^4+6s^3-6s^2+25s-25=0$, determine its stability. The Routh table is:

    s^5 | 1          6          25
    s^4 | -1         -6         -25
    s^3 | 0 (-4)     0 (-12)
    s^2 | -3         -25
    s^1 | 64/3
    s^0 | -25

In forming the table we face a row which is identically zero. Hence we replace its zeros with the coefficients of the derivative of the auxiliary equation made from the previous row. The auxiliary equation is $-s^4-6s^2-25=0$ and its derivative is $-4s^3-12s=0$. So we replace the zeros of the $s^3$ row with −4 and −12, as shown in parentheses. Then we continue forming the table in the usual manner. It is seen that there are three sign changes in the first column and thus there are three roots in the ORHP. The roots of the auxiliary equation are $\pm(1\pm 2j)$, which are also roots of the original equation; its remaining root is 1. Also note that if the auxiliary equation is taken as $s^4+6s^2+25=0$, the entries of the first column will change but the number of sign changes will not. (Verify this!)

We close this section by mentioning that the Routh’s table has found numerous applications in mathematics and engineering, especially control engineering. While many of these applications are of a former time, the mid-20th century, some of them are rather new, of the 21st century. A good collection of these applications is provided in Appendix E.

3.5 Hurwitz’ test

Hurwitz’ test states that for stability of the polynomial $\Delta(s)=a_n s^n+a_{n-1}s^{n-1}+\cdots+a_1 s+a_0$ it is necessary and sufficient that all the leading principal minors of the Hurwitz matrix be positive. The Hurwitz matrix is given by

$$H=\begin{bmatrix}a_{n-1}&a_{n-3}&a_{n-5}&\cdots&\\a_n&a_{n-2}&a_{n-4}&\cdots&\\0&a_{n-1}&a_{n-3}&\cdots&\\0&a_n&a_{n-2}&\cdots&\\ \vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&a_0\end{bmatrix}.\qquad(3.5)$$


Its leading principal minors are the following determinants, which are called the Hurwitz determinants:

$$D_1=|a_{n-1}|,\quad D_2=\begin{vmatrix}a_{n-1}&a_{n-3}\\a_n&a_{n-2}\end{vmatrix},\quad D_3=\begin{vmatrix}a_{n-1}&a_{n-3}&a_{n-5}\\a_n&a_{n-2}&a_{n-4}\\0&a_{n-1}&a_{n-3}\end{vmatrix},\quad\ldots,\quad D_n=|H|.\qquad(3.6)$$

It is clear that direct manual application of this test is rather cumbersome. This problem can be circumvented by the tabular procedure of Section 3.4, which was termed the Routh–Hurwitz method. We also have a remark.

Remark 3.6: A matrix/polynomial whose eigenvalues are all in the OLHP is sometimes called Hurwitz stable, or a Hurwitz matrix/polynomial. Sometimes, especially in the mathematics literature, an unstable matrix/polynomial whose eigenvalues are all in the ORHP is called antistable.
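For small n the Hurwitz minors are easy to compute by code as well. The sketch below (pure-Python helpers of our own, using naive Laplace expansion, which is fine at these sizes) reproduces D1, D2, D3 for (s + 1)³:

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def hurwitz_minors(coeffs):
    """Leading principal minors D_1..D_n of the Hurwitz matrix of
    a_n s^n + ... + a_0 (coeffs listed highest power first)."""
    n = len(coeffs) - 1
    a = lambda k: coeffs[n - k] if 0 <= k <= n else 0   # a_k, zero off-range
    H = [[a(n - 1 + i - 2 * j) for j in range(n)] for i in range(n)]
    return [det([row[:k] for row in H[:k]]) for k in range(1, n + 1)]

# s^3 + 3s^2 + 3s + 1 = (s + 1)^3: all minors positive, hence stable.
print(hurwitz_minors([1, 3, 3, 1]))  # [3, 8, 8]
```

A single nonpositive minor, e.g. D2 = −1 for s³ + s² + s + 2, already settles instability.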

3.6 Lienard and Chipart test

A. Lienard and M. H. Chipart streamlined the Routh–Hurwitz criterion and proved that only about half of the Hurwitz determinants actually need to be computed; the remaining determinants can be substituted by certain polynomial coefficients. More precisely, let $\Delta(s)=a_n s^n+a_{n-1}s^{n-1}+\cdots+a_1 s+a_0$ (with at least one positive coefficient) and let the determinants $D_i$ $(i=1,\ldots,n)$ be defined as in Eq. (3.6) of Section 3.5. They proved that the system is stable iff one of the following four equivalent conditions is satisfied:

i. $a_0>0,\ a_2>0,\ \ldots;\quad D_1>0,\ D_3>0,\ \ldots$
ii. $a_0>0,\ a_2>0,\ \ldots;\quad D_2>0,\ D_4>0,\ \ldots$
iii. $a_0>0,\ a_1>0,\ a_3>0,\ \ldots;\quad D_1>0,\ D_3>0,\ \ldots$
iv. $a_0>0,\ a_1>0,\ a_3>0,\ \ldots;\quad D_2>0,\ D_4>0,\ \ldots$

Therefore, for instance, if we use condition (iv) we have the following Table 3.1 for n = 1, ..., 7:

Table 3.1 The Lienard–Chipart test

    n | Positivity conditions
    1 | a0, a1
    2 | a0, a1, a2
    3 | a0, a1, D2, a3
    4 | a0, a1, D2, a3, D4
    5 | a0, a1, D2, a3, D4, a5
    6 | a0, a1, D2, a3, D4, a5, D6
    7 | a0, a1, D2, a3, D4, a5, D6, a7


As an example, note that for $n=3$ we have already seen the positivity of $D_2$ in Example 3.8; if we repeat it for the general case $a_3\neq 1$ and assume that at least one coefficient is positive, the Lienard–Chipart method has a computational advantage over the Routh–Hurwitz test. However, it should be noted that in the computer era we simply use the command “roots” of MATLAB® (or similar commands of other software packages) to find the roots.

3.7 Relative stability

In modeling a system there is always some uncertainty about its parameters. This uncertainty is reflected in the coefficients of the characteristic equation. As such, the locations of the poles of the system are not certain; they are subject to some changes or uncertainties. In fact, depending on the range of uncertainty in the parameters, a region of possible pole locations is obtained, if we are able to compute it. If these regions overlap with the CRHP then the system is unstable for certain parameter values. This problem, as we outlined in Section 3.1, falls into the branch of robust control, where numerous tools have been developed for the analysis and synthesis of systems in the presence of uncertainties. Among the first solutions to this problem, proposed in the first half of the twentieth century, is to place the poles far from the jω-axis; see also Remark 3.7. This idea works in general; see the following Example 3.12. Among the other early solutions to this problem is to design the system with an acceptable stability margin, as translated into its gain margin and phase margin. These ideas will be introduced later in Chapter 6. Let us investigate the claim that by bringing the poles farther to the left the system can, in general, accommodate larger perturbations.

Example 3.12: Consider the system ẋ = Ax where A = [−1 0; 3 −2]. The system can tolerate the "individual" additive perturbations a, b, c, d in its elements −1, 3, 0, −2 up to a < 1, ∀b, c < 2/3, d < 2. (Of course some may happen simultaneously.) Now suppose we bring the poles of the system more to the left and make it, e.g., A′ = [−4 0; 3 −5]. Now the system can tolerate a < 4, ∀b, c < 20/3, d < 5. On the other hand suppose the new system is A″ = [−4 300; 0 −5], which has the same eigenvalues. This time the system can tolerate c < 2/30.


Introduction to Linear Control Systems

Figure 3.1 Relative stability for: Left panel m = 0. Right panel m = 1. (Each panel shows f(s) and the shifted f(s − σ), with the line s = −σ marked.)

Rigorous analysis of the above problem falls into the domain of structured and unstructured perturbation analysis, which is briefed in Chapter 10. For simplicity we often assume (or hope!) that such pathological cases, with some abuse of terminology, do not occur. In the sequel we tackle the problem by bringing the poles farther to the left in the OLHP. This is done in two main frameworks: one is discussed in the rest of this section, and the other is discussed in the ensuing Section 3.8. Let us start. We note that if Δ(s) = 0 has m roots to the right of s = −σ, then Δ(s − σ) = 0 has m roots to the right of the j-axis (the σ = 0 line) and vice versa. The situation is depicted for m = 0, 1 in Fig. 3.1. Thus to obtain the number of roots of Δ(s) = 0 to the right of s = −σ, the Routh test is applied to Δ(s − σ) = 0. If Δ(s − σ) = 0 has m roots in the ORHP (or on the j-axis), then Δ(s) = 0 has m roots to the right of s = −σ (or on this line).

Example 3.13: For the control system shown in Fig. 3.2 obtain the range of K for which: (1) The system is stable. (2) The system is stable and only two closed-loop poles satisfy −3 < Re(s) < 0. The parameters are M = 26, N = 134, P = 2. The forward path consists of the controller K/s followed by the plant P/(s² + Ms + N).

Figure 3.2 An example for the application of relative stability.

First we find the characteristic equation of the system. It is Δ(s) = s³ + 26s² + 134s + 2K = 0. For part (1) of the problem, similar to Example 3.8, we find 0 < K < 26 × 134/2 = 1742. For part (2) we find K such that there are no sign changes in the first column of the Routh array of Δ(s) = 0, but there are two sign changes in the first column of the Routh array of Δ(s − 3) = 0. This is done in the sequel. For the stability of Δ(s) = 0 there should hold 0 < K < 1742. On the other hand Δ(s − 3) = s³ + 17s² + 5s + 2K − 195 = 0, whose first column reads 1, 17, (280 − 2K)/17, 2K − 195; it exhibits two sign changes iff K > 140. The final answer is the intersection of the derived conditions on K. Hence 140 < K < 1742.
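Where the closed-loop poles fall for sample gains can be cross-checked numerically; a sketch assuming Python with NumPy (the polynomial coefficients are those derived above, and the helper names are ours):

```python
import numpy as np

def poles(K):
    # Closed-loop characteristic polynomial Delta(s) = s^3 + 26s^2 + 134s + 2K
    return np.roots([1, 26, 134, 2 * K])

def count_in_strip(K, sigma=3.0):
    # Number of closed-loop poles with -sigma < Re(s) < 0
    p = poles(K)
    return int(np.sum((p.real > -sigma) & (p.real < 0)))

for K in (120, 1000):
    print(K, count_in_strip(K), bool(poles(K).real.max() < 0))
```

At K = 120 the system is stable but the complex pair sits just left of the line s = −3, while at K = 1000 exactly two poles lie in the strip −3 < Re(s) < 0.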


Remark 3.7: There is another reason for bringing the poles of the system farther to the left. In fact the aforementioned reason is another look at the problem, which we present in the sequel. Examples 3.1–3.3 reveal that in the time response of the system the poles of the transfer function play a crucial role. They are called the "modes" of the system as they determine the mode of the system in the literal meaning. The modes appear in the form of e^(−σt) in the time response, where −σ is the real part of the poles. Therefore when the system poles are to the left of s = −σ, i.e., with real parts less than −σ, they decay to zero faster than e^(−σt). As such the term exponential stability (or σ-exponential stability) is also used in part of the literature instead of relative stability. This is especially done when the problem is posed in the state-space framework.

3.8 D-stability

In the previous section the poles were brought to the left of the line s = −σ. Another idea is to bring them to the region Ω which lies to the left of both of the lines y = ±cot(δ)(x + σ), where 0 ≤ δ < π/2 and σ ≥ 0. This region is depicted in Fig. 3.3 and is alternatively described by Ω = {λ: (Re(λ) + σ)cos δ ± Im(λ)sin δ < 0}. Because of the shape of the region Ω this kind of stability is called D-stability,⁶ sometimes Ω-stability. Let the second-order poles of the system be formulated as k/(s² + 2ζωₙs + ωₙ²). The poles are thus s = −ζωₙ ± jωₙ√(1 − ζ²) ≕ −σ ± jω_d. As you will learn in Chapter 4, σ and ζ (which relates to the angle δ) are called the damping and damping ratio of the poles and determine the "quality" of the response. Thus if the poles of the system are in this region the damping and damping ratio of individual modes are lower bounded, and this means that individual modes of the system show a good behavior. However we should stress that, as we shall learn in Chapter 4, this does not guarantee a good performance of the whole system, although the stabilization will be (in general) stronger than the usual OLHP BIBO stability. The following theorem relates the D-stability of a system to its BIBO stability (in the OLHP).

Figure 3.3 Region Ω.

6. Note that the definition of D-stability is different in mathematics; see Appendix D.


Theorem 3.5: The system ẋ = Ax is Ω-stable iff the augmented system ẋ = A_Ω x with A_Ω ≔ Θ(δ) ⊗ (A + σI) is OLHP BIBO stable, where Θ(δ) = [cos δ −sin δ; sin δ cos δ] and ⊗ denotes the Kronecker⁷ matrix product. Δ.

Note that the Kronecker product of matrices A = [a_ij] and B = [b_ij] is defined as A ⊗ B = [a_ij B].

Example 3.14: Investigate whether the system ẋ = [−5 13; −1 1]x is D-stable for the region δ = 30° and σ = 1. We apply the Routh test to the characteristic equation of the system ẋ = A_Ω x and observe that it is unstable with two unstable poles. Indeed the eigenvalues of A are −2 ± 2j, which are stable but outside the specified region Ω.

Question 3.4: What is the exact relation between σ, δ and the quality of the response? You had better reconsider this question after reading Chapter 4.

Remark 3.8: Sometimes it is preferred to use the region of Fig. 3.4, although there is not much difference between them.

Question 3.5: How should we verify D-stability in this case? (Note that a possible answer in all cases is via LMIs (Linear Matrix Inequalities), which you will learn about in the sequel of the book on state-space methods.)

Figure 3.4 Another region Ω.

7. Named after Leopold Kronecker, German mathematician (1823–1891), whose scientific career was pursued in Berlin.

3.9 Particular relation with control systems design

In connection with control systems the Routh array can be used to derive conditions on the controller parameters so as to guarantee closed-loop stability. As we shall see in an example, for systems of order four or higher this is usually a complicated process, and for the sake of computability some parameters are chosen based on other considerations and a single parameter, usually the gain of the controller or a single pole or zero, is left to be bounded by the method.

Example 3.15: Obtain the range of K, a, and b for the stability of the system of Fig. 3.5, in which the controller K(s + b)/(s(s + a)) is in series with the plant 1/(s² + 2s + 4).

Figure 3.5 A typical controller design.

The characteristic equation of the system is s⁴ + (2 + a)s³ + (4 + 2a)s² + (4a + K)s + Kb = 0. Therefore, the Routh table is

s⁴ | 1        4 + 2a    Kb
s³ | 2 + a    4a + K
s² | A        Kb
s¹ | B
s⁰ | Kb

in which A = [(4 + 2a)(2 + a) − (4a + K)]/(2 + a) and B = [A(4a + K) − (2 + a)Kb]/A. Deriving explicit conditions on K, a, b from positivity of A and B does not seem pragmatic in this system. So, as we said before, some parameters are determined from other considerations. You will learn in Chapter 9 that a probably good choice for the controller is a = 0.02, b = 10a = 0.2. Thus the problem reduces to bounding K. From the polynomial coefficients we should have K > 0. From the table there should hold A = (K − 8.0808)/(−2.02) > 0 and B = (K² − 7.18472K − 0.646464)/(−2.02A) > 0. Solving for positivity of A and B we find {K < 8.0808} ∩ {−0.08887815 < K < 7.27359815}. All in all we should have 0 < K < 7.27359815. Note that if we use the command "allmargin" we find 0 < K < 7.27259552; the difference and the slight inaccuracy is because the command uses a numerical method.

We close this section with some important points in the following remark.


Remark 3.9: Take heed of these points: (i) Recall that the controller is in general designed to increase the loop gain, and the upper limit is dictated by the stability (and transient response⁸ and SOR⁹). We considered only the stability requirements in this example. (ii) The stability range that we find guarantees the stability of the system for any unknown but 'fixed' parameter in that range. The parameter is not allowed to be discontinuous, time-varying, etc., even if it stays in that range. See Exercise 3.56.

3.10 The Kharitonov theory

The problem discussed in the preceding sections can be recast as follows: K, appearing usually in one of the coefficients of the characteristic polynomial, represents the uncertainty in a certain parameter of the model. That is, another statement of the problem is: What is the range of the uncertainty so that the stability of the system is maintained? This problem is in the field of robust control theory. From one standpoint, results in this field can be categorized into the two classes of structured uncertainty and unstructured uncertainty, see Chapter 10. The solution for the class of unstructured uncertainty with independent variations in the system parameters is well connected to the Routh–Hurwitz test, as explained below. Uncertainty in the parameters of the model translates to uncertainty in the coefficients of the characteristic polynomial. Let the characteristic equation be given by Δ(s, a) = a₀ + a₁s + a₂s² + ⋯ + aₙsⁿ where the coefficients belong to aᵢ⁻ ≤ aᵢ ≤ aᵢ⁺. This is called an interval polynomial. Two questions arise which are duals of each other. (1) What is the range of such uncertainties so that the system remains stable? (2) Given an interval polynomial how can we determine its stability/instability? The solution to problem (1) is outside the scope of this undergraduate book (but we consider a simple example, Problem 3.23); however, the solution to problem (2) fits our current theme well, as we shall present next. Assuming that the coefficients are unknown but fixed, if we want to use the Routh test we should apply it an infinite number of times. However, this is obviously impossible. V. L. Kharitonov proved in a seminal work that the stability of the aforementioned interval polynomial Δ(s, a) is (necessarily and sufficiently) equivalent to the stability of the following four polynomials, the so-called

8. 'In general', the higher the gain K, the faster the response, the more the oscillations (above a certain value of the gain), and the larger the overshoot (above a certain value of the gain). The effect of K on the performance is shown in various examples in Chapters 4, 5, 9. However, we shall see some examples where the aforementioned general rule does not hold.
9. SOR stands for Safe Operation Region. Actual systems all have an SOR outside which the operation of the system is not allowed although the system may be theoretically stable. The SOR is usually determined by the minimum and maximum allowable change and rate of change in different parameters of the system.


Kharitonov polynomials, provided the interval polynomial is of the invariant degree n, i.e., provided aₙ ≠ 0, from which either aₙ⁻ > 0 or aₙ⁺ < 0 follows:

Theorem 3.6: The interval polynomial Δ(s, a) = a₀ + a₁s + a₂s² + ⋯ + aₙsⁿ, aᵢ⁻ ≤ aᵢ ≤ aᵢ⁺, is stable iff the following Kharitonov polynomials are stable.

k₁(s) = a₀⁻ + a₁⁻s + a₂⁺s² + a₃⁺s³ + a₄⁻s⁴ + a₅⁻s⁵ + a₆⁺s⁶ + a₇⁺s⁷ + ⋯
k₂(s) = a₀⁻ + a₁⁺s + a₂⁺s² + a₃⁻s³ + a₄⁻s⁴ + a₅⁺s⁵ + a₆⁺s⁶ + a₇⁻s⁷ + ⋯
k₃(s) = a₀⁺ + a₁⁻s + a₂⁻s² + a₃⁺s³ + a₄⁺s⁴ + a₅⁻s⁵ + a₆⁻s⁶ + a₇⁺s⁷ + ⋯
k₄(s) = a₀⁺ + a₁⁺s + a₂⁻s² + a₃⁻s³ + a₄⁺s⁴ + a₅⁺s⁵ + a₆⁻s⁶ + a₇⁻s⁷ + ⋯

Δ.

Example 3.16: Is the following interval polynomial stable? P(s, a) = a₀ + a₁s + a₂s² + a₃s³, where 0.2 ≤ a₀ ≤ 0.6, 1 ≤ a₁ ≤ 2, a₂ = 2, 2 ≤ a₃ ≤ 3. We form the Kharitonov polynomials as follows:

k₁(s) = 0.2 + s + 2s² + 3s³
k₂(s) = 0.2 + 2s + 2s² + 2s³
k₃(s) = 0.6 + s + 2s² + 3s³
k₄(s) = 0.6 + 2s + 2s² + 2s³

It is easily verified by the extended version of Example 3.8 that these polynomials are all stable. Therefore, the given interval polynomial family is stable. See also Problem 3.23.
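The four Kharitonov polynomials follow a fixed min/max pattern of period four in the coefficient index, so they are easy to generate mechanically. A sketch, assuming NumPy and representing polynomials by ascending coefficient lists (the function names are ours):

```python
import numpy as np

def kharitonov_polys(lo, hi):
    """lo[i], hi[i] bound coefficient a_i of a_0 + a_1 s + ... + a_n s^n.
    Returns the four Kharitonov polynomials as ascending coefficient lists."""
    # per-index choice of lower ('l') or upper ('h') bound, period four
    patterns = {1: "llhh", 2: "lhhl", 3: "hllh", 4: "hhll"}
    return [[lo[i] if pat[i % 4] == "l" else hi[i] for i in range(len(lo))]
            for pat in patterns.values()]

def interval_stable(lo, hi):
    # The interval polynomial is stable iff all four Kharitonov polynomials are.
    return all(np.all(np.roots(p[::-1]).real < 0)
               for p in kharitonov_polys(lo, hi))

# Example 3.16: 0.2 <= a0 <= 0.6, 1 <= a1 <= 2, a2 = 2, 2 <= a3 <= 3
print(interval_stable([0.2, 1, 2, 2], [0.6, 2, 2, 3]))  # True
```

Note the reversal p[::-1]: numpy.roots expects coefficients from the highest degree down, while the Kharitonov pattern is naturally indexed from a₀ up.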

We conclude this section by emphasizing that Kharitonov's theory in its original form is valid for independent variations in the coefficients of the characteristic equation. In the case of dependent variations the theorem is no longer valid; in this case its extensions, mainly the Edge Theorem, should be used. The theory has been extended to systems with time delays, the case of dependent perturbations, Markov parameters, discrete-time systems, etc.; see Further Readings.

3.11 Internal stability

The following theorem and definition relate to the stability of the system in a larger framework. Consider the standard 1-DOF control system of Fig. 3.6, left panel.


Figure 3.6 The 1-DOF control structure.

It is said that the system is internally BIBO stable iff all the signals around the loop are bounded for bounded inputs to the system. It is important to note that in practice the system is perturbed by some bounded inputs which are added to different signals in the system. These are shown by dr, de, di, do, n in the right panel of Fig. 3.6. Internal stability is BIBO stability around the loop between all the signals. To verify it we must actually write the transfer functions between all the (input) signals and all the (output) signals around the loop. The input signals are r, dr, de, di, do, n as well as r, e, uc, u, up, yp, y, y + n, and the output signals are r, e, uc, u, up, yp, y, y + n. (The signals uc, u and up, yp are the input and output signals of the controller and plant, respectively.) It is easy to verify that these transfer functions are 1 (between r and r, dr) as well as the ones that appear in

[UP; YP] = (1/(1 + CP)) [C, C, C, −CP, −C, −C; CP, CP, CP, P, −CP, −CP] [R; Dr; De; Di; Do; N].  (3.6)

Hence, internal stability is equivalent to the stability of these transfer functions, which can be summarized as 1/(1 + CP), C/(1 + CP), P/(1 + CP), and CP/(1 + CP). Note that internal stability is beyond stability of the input–output transfer function of the system, i.e., CP/(1 + CP), as is shown in the next examples. Before this we have an important remark.

Remark 3.10: Observe that R, Dr, De have the same effect on any (output) signal (except for the trivial cases r, e) around the loop in the sense that the transfer functions are the same. And this is one of the reasons that we do not consider any "setpoint and error disturbances dr, de" and suffice with the "input and output disturbances di, do". See also Exercise 1.45.

Example 3.17: Consider the 1-DOF system with P(s) = 2/((s + 1)(s − 3)) and C(s) = (s − 3)/s. It is seen that the input–output transfer function, i.e., CP/(1 + CP) = 2/(s² + s + 2), is stable. However the system is not internally stable since the transfer function P/(1 + CP) = 2s/((s − 3)(s² + s + 2)) is not stable. Note that this transfer function is


the one between Di and YP: a bounded di will produce an unbounded yP and thus unbounded y!

Example 3.18: Consider the 1-DOF system with P(s) = (s − 1)/(s(s − 2)) and the controller C(s) = K(s + 1)/((s − 1)(s + 10)). We observe that the input–output transfer function, i.e., CP/(1 + CP) = K(s + 1)/(s³ + 8s² + (K − 20)s + K), is stable for K > 22.86. Nevertheless, the transfer function C/(1 + CP) = Ks(s + 1)(s − 2)/((s − 1)(s³ + 8s² + (K − 20)s + K)) is unstable. Regardless of a sign difference, this is the transfer function between Yd, Do, N and UP. That is, a bounded yd, do, n produces an unbounded uP and hence unbounded y!

Remark 3.11: The observation, which can be supported by a theoretical proof and is actually a theorem, is that CRHP pole-zero cancellation should not take place between the controller and the plant. (Recall that CRHP is the union of the ORHP and the j-axis.) Of course there is another reason that this should not take place, as discussed next.

Example 3.19: It is clear that exact pole-zero cancellation is unlikely to happen in practice because of imprecise implementation of the controller or parameter variations of the plant. For instance in Example 3.18 if the controller is implemented as C(s) = K(s + 1)/((s − 1.01)(s + 10)) (which is certainly so in practice) then the input–output transfer function (which is otherwise stable) will be unstable for any value of the gain since it is given by CP/(1 + CP) = K(s² − 1)/(s⁴ + 6.99s³ + (K − 28.08)s² + 20.2s − K).

All in all, a design which does not guarantee internal stability, but merely input–output BIBO stability, should be avoided in practice as it is doomed to fail.

Question 3.6: The four transfer functions of interest are T/L, T/C, T/P, and T. In the above examples we saw cases where T is stable but T/C and T/P are not. Is it possible that T is stable but T/L is not?

3.12 Strong stabilization

If internal stability can be achieved by a controller which is itself (Hurwitz) stable, the plant is said to be strongly stabilizable. The importance of this notion is twofold. (1) One is when the plant itself is stable. In such cases if the loop is disconnected then the open-loop system remains stable and a hazard is circumvented. (2) The other usage is with respect to its connection with simultaneous stabilization, meaning that two or more plants are stabilizable by the same controller. The importance of this latter issue is obvious. Among its applications is stabilization of a plant linearized over its nonlinear domain. It also has applications in robust control of uncertain systems. However, we should add that when the plant itself is unstable, then the argument in (1) is not that compelling. The subsequent theorem presents conditions for strong stabilizability.

Theorem 3.7: Consider the plant P(s) with some NMP poles and zeros (and possibly some MP ones). Then it is strongly stabilizable iff it has an even number of real ORHP poles between every pair of real CRHP zeros. Δ.

Note that in the above theorem we are concerned with real (not complex) ORHP poles and CRHP zeros. The aforementioned condition is called the Parity Interlacing Property (PIP).

Corollary 3.1: Consider the strictly proper plant P(s) = ((s − z)/(s − p))P₁(s) where z, p > 0 and P₁(s) is MP and stable. Then it is strongly stabilizable iff z > p. Δ.

Example 3.20: The plant P(s) = K(s − 1)/(s(s − 2)) is not strongly stabilizable.

Example 3.21: Is the following system stabilizable by a stable controller? L(s) = K(s² − 2s + 13)(s − 2)(s − 4)/(s(s² + 2s + 10)(s − 5)²). The plant has its real ORHP zeros at 2, 4, ∞. It has an even number of real ORHP poles between every pair of them: no real poles between 2, 4 and two real poles between 4, ∞. In the above system if the multiplicity of the ORHP pole at 5 is one (instead of two) it will still be strongly stabilizable, but if it is three then it will not be.


Another direct and important corollary of Theorem 3.7 is,

Corollary 3.2: The plant P(s) = (s − z)/((s − p₁)(s − p₂)) P₁(s) where z, p₁, p₂ > 0 and P₁(s) is MP, proper, and stable is strongly stabilizable iff z does not lie between p₁ and p₂, in particular if p₁, p₂ > z. Δ.

Finally, an important issue is discussed in item 12 of Further Readings. We close this section by stressing a major drawback of Theorem 3.7: it does not guarantee internal stability. More precisely, its proof relies on admitting unstable pole-zero cancellation in that the controller may need to have NMP zeros cancelling NMP poles of the plant. See item 12 of Further Readings.

3.13 Stability of LTV Systems

Study of time-varying systems, and in particular LTV systems, is becoming more and more important as some actual plants are indeed time-varying. The basic examples are air vehicles and missiles, whose mass is a function of time since fuel is consumed during the journey. It is well known that for the LTV system ẋ = A(t)x the eigenvalues of the matrix A(t) are not indicative of the stability of the system (or the kind of stability: exponential, asymptotic, etc., as you will learn in future courses), as opposed to the case of LTI systems (unless under extra assumptions). In particular, it is notable that the solution is not given in parallel with that of LTI systems; see Remark 2.13 of Chapter 2. Full treatment of this topic is outside the scope of this book; however it is instructive to briefly review some key issues. We are motivated by the example of Wu (1974), who reported a 2-dimensional stable LTV system with a positive eigenvalue. By construction we show that:

Theorem 3.8: There exist stable n-dimensional LTV systems with n − 1 positive/unstable eigenvalues while the remaining eigenvalue is negative/stable. Δ.

To the best of our knowledge this observation is missing in the literature. We shall show the construction in Examples 3.24–3.26. The following Examples 3.22 and 3.23 are classical.



Example 3.22: Consider the LTV system ẋ = A(t)x where A(t) = [−1 e^(2t); 0 −1]. Investigate the stability of the system. In this simple system we can directly solve the system and find the general solution as x₁ = (a/2)e^t + be^(−t), x₂ = ae^(−t). (First we solve for x₂ and then by back substitution we solve for x₁. To this end recall that the linear system ẋ = p(t)x + q(t) has the general solution x(t) = exp(∫p(t)dt)[∫q(t)exp(−∫p(t)dt)dt + k].) Thus the system is unstable, as the first component is. However, we note that the eigenvalues of the system are independent of time and are both stable, λ₁,₂ = −1.






Example 3.23: Consider the LTV system ẋ = A(t)x where A(t) = [−2 e^t; 0 −3]. Investigate the stability properties of the system. In this simple system we can directly solve the system and find the general solution as x₁ = (at + b)e^(−2t), x₂ = ae^(−3t). Thus the system is stable at its equilibrium point, the origin. (Note that the unboundedness of e^t plays no role.)

Example 3.24: Consider the LTV system ẋ = A(t)x where

A(t) =
[ 1   −2e^t                              ]
[      2    −4e^t                        ]
[            ⋱          ⋱               ]
[               n−1   −2(n−1)e^t         ]
[ 0                        −n            ]

and x(0) = a[1 ⋯ 1]ᵀ. It is easy to solve for the exact solution of the system as x(t) = a[e^(−t) e^(−2t) ⋯ e^(−nt)]ᵀ, which is exponentially stable at the origin. However all the eigenvalues of A(t), except one of them, are unstable.

Discussion: We have made this system by inverse engineering. We started from x(t) = [e^(−t) e^(−2t) ⋯ e^(−nt)]ᵀ and we wrote a possible representation of A(t) as the one given. However, when we solve this system we see that the general answer is different. For brevity, we suffice to exemplify that the answer of A(t) = [1 −2e^t; 0 −2] is given by x(t) = [ae^(−t) + be^t, ae^(−2t)]ᵀ. The coefficient a is fine, but b is not, and we have to set it to zero. Thus we have to define x(0) = a[1 ⋯ 1]ᵀ. Apart from this point, note that the representation of A(t) is not unique. For instance, the first row can be A(1,:) = [k (−k − 1)e^t 0 ⋯ 0], A(1,:) = [3 −e^t −3e^(2t) 0 ⋯ 0], etc.

Remark 3.12: It is obvious that other classes of systems can also be included here, e.g., those like in Example 2.23 or those which are related by a constant time-invariant similarity transformation. A classification of general classes of stable systems which are representable (and solvable) in this form would be desirable. The next example, which for brevity is 3-dimensional, is another such class.

Example 3.25: Consider the exponentially stable function x(t) = [3e^(−2t)cos t, e^(−t)sin t, e^(−t)]ᵀ. Write a corresponding stable LTV system ẋ = A(t)x which has n − 1 = 3 − 1 = 2 unstable eigenvalues.


We have ẋ₁ = −6e^(−2t)cos t − 3e^(−2t)sin t. A possible (non-unique) way of representing it is ẋ₁ = 3e^(−2t)cos t − 9e^(−2t)cos t − 3e^(−2t)sin t = x₁ − 3e^(−2t)(3cos t + sin t) = x₁ − 3e^(−t)(3cos t + sin t)x₃. The non-uniqueness is in the coefficient of x₁ and the exclusion of x₂. A non-unique representation is also ẋ₂ = x₂ + (cos t − 2sin t)x₃. The non-uniqueness is in the coefficient of x₂. Hence, a possible non-unique matrix A(t) is

A(t) = [1 0 −3e^(−t)(3cos t + sin t); 0 1 cos t − 2sin t; 0 0 −1].

It is direct, but a little cumbersome, to find the general answer as x(t) = [3ae^(−2t)cos t + ce^t, ae^(−t)sin t + be^t, ae^(−t)]ᵀ. We want the answer to be unique and stable. Thus we choose x(0) = [3a 0 a]ᵀ.

Question 3.7: If we write another representation, will the general answer of the system be different? How about the appropriate initial condition?

Next we present two interesting examples.

Example 3.26: Consider the scalar function x(t) = exp(p(t)) where p(t) is a polynomial. When is it stable? Represent it as a differential equation. It is stable iff the coefficient of the leading term in p(t) is negative. Moreover, its differential equation representation is ẋ(t) = ṗ(t)exp(p(t)) = ṗ(t)x(t). If ṗ(t) has n distinct positive real roots then in positive time λ(t) = ṗ(t) (which is the eigenvalue of the system) will be positive in ⌈n/2⌉ bounded time intervals. (The ceiling function is meant.) For instance, if λ(t) = −(t + t₁)(t − t₂)(t − t₃)(t − t₄), x(0) = x₀, we have ẋ(t) = λ(t)x(t); then the solution of the system is x(t) = exp(p(t)), p(t) = −∫₀ᵗ (τ + t₁)(τ − t₂)(τ − t₃)(τ − t₄)dτ + c, where the constant term of the integral is c = log x₀. The eigenvalue of the system will be positive in 2 bounded time intervals and then for the rest of the time will be negative.

Example 3.27: Consider the scalar system ẋ(t) = (−t + sin t + t cos t)x(t), x(0) = e^(−1). Is the system stable? The answer is x(t) = exp(−t²/2 + t sin t − 1), which decays to zero. So the system is stable. Note that the answer is unique and global.
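The closed form of Example 3.27 can be checked numerically. The sketch below (assuming NumPy; the helper names are ours) verifies by central finite differences that the claimed solution satisfies ẋ = λ(t)x and that it decays:

```python
import numpy as np

def lam(t):
    # time-varying "eigenvalue" lambda(t) = -t + sin t + t cos t
    return -t + np.sin(t) + t * np.cos(t)

def x(t):
    # claimed closed-form solution with x(0) = e^-1
    return np.exp(-t**2 / 2 + t * np.sin(t) - 1)

h = 1e-6
for t in (0.3, 1.7, 4.0):
    dx = (x(t + h) - x(t - h)) / (2 * h)       # central difference for x'(t)
    print(abs(dx - lam(t) * x(t)) < 1e-6)      # True: ODE satisfied

print(x(0.0))            # e^-1, the prescribed initial condition
print(x(10.0) < 1e-20)   # True: exp(-50 + 10 sin 10 - 1) is negligible
```

The same finite-difference pattern works for Example 3.26 with any polynomial p(t).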


This part is wrapped up with some points in the following remarks.

Remark 3.13: Note these points: (i) The inclusion of a coefficient, a power, or both of them are equivalent when we solve a system like (for simplicity) ẋ = −x. The answer is either of x = ae^(−t), x = e^(−t+b), or x = ae^(−t+b). In the third case one of the parameters is obviously redundant. In the above examples we have used either of the first two solutions, the one that looks nicer. (ii) Specifying the initial condition is somewhat tricky. We bring your attention to this question: In Example 3.22 can we choose x(0) = [a a]ᵀ?

Remark 3.14: It is worth paraphrasing the important point of constancy of parameters in LTI systems (and time-invariant controllers in general, e.g., for nonlinear systems). As we have mentioned in Chapter 2, a method of control is that of gain scheduling. An example is a power station, for which we discuss two scenarios. (1) The control engineer tunes the controller parameters to new ones when needed. The 'simple' explanation is that we have two LTI systems (in the respective time durations) with their own controllers. Precisely speaking, stability of the transition from one system to the other has to be proven and should not be taken for granted. However, for most actual systems some conditions are satisfied such that this is assured. (2) Suppose that the operator knows the range of controller parameters for stability. If the operator keeps turning the knobs on the control panel 'even inside the allowable ranges' (as opposed to case (1) where this is done only once in a while) then it is possible that the system becomes unstable. This is shown in Exercise 3.56. In the case of digital controllers, if the operator keeps inserting new parameters, instability may happen.

3.14 Summary

In this chapter we have studied the stability of control systems. Stability is the cornerstone of a control system; performance cannot be achieved without stability. The simplest concepts of stability, namely Lyapunov stability and BIBO stability, have been introduced. The connection with the state-space representation has also been briefly pointed out. The stability tests of Routh, Hurwitz, and Lienard–Chipart have then been introduced, and their usage in controller design has been discussed. The concepts of relative stability, D-stability, internal stability, strong stabilization, and stability of LTV systems have also been studied. The chapter has briefly featured Kharitonov's theory as a general setting for the consideration of unstructured parameter uncertainty in control systems. Numerous examples and worked-out problems to follow enhance the learning of the subject. Further study in the realm of stability is facilitated by Appendix D, which provides a glossary and road map to the available stability tests and concepts. Appendix E provides a tutorial on the Routh stability test.

3.15 Notes and further readings

1. The importance of "tending to zero" of the state of an unforced system in Lyapunov stability is in nonlinear systems, where limit cycles and attractors may exist. Three pertinent famous issues in the large are the Aizerman, Kalman, and Markus–Yamabe conjectures, which are all proven to be false in general. Recent results on these conjectures are reported in (Castaneda and Guinez, 2017; Cima et al., 1997, 1999; Heath et al., 2015). It also has implications, at least in the larger picture, on the Hilbert 16th Problem¹⁰ (Ilyashenko, 2002). On the other hand, lest there is a misunderstanding, we should mention a lesser-known fact that a limit cycle is not always bad. It may be effective, e.g., in the control of periodic systems, nonholonomic systems, and under-actuated systems. Also note that periodicity may arise due to different reasons; one of them is multi-rate sampling in discrete-time control. Finally, notice that some actual systems, like oscillators, are designed to work on the verge of instability.
2. Hermite's result was derived in 1855. Routh's method was proposed in 1877 (Routh, 1877). Hurwitz's result was published in 1895 (Hurwitz, 1895). Lienard–Chipart's result is from 1914 (Lienard and Chipart, 1914). The Routh–Hurwitz method has found numerous applications in systems and control theory. Most of these applications were reported in the second half of the twentieth century, but even in the 21st century there have been research papers addressing new corners of its application. A good account of further theoretical details can be found in (Gantmakher, 1959; Porter, 1968), from which Section 3.4.1 is extracted. A summary of its applications as well as new theoretical investigations (like the Routh–Hurwitz method for fractional-order systems) is given in Appendix E (Siljak, 1976).
3. We saw that a necessary condition for stability of a polynomial is that all the coefficients be nonzero and have the same sign. The converse statement is a sufficient condition.
Let Δ(s) = aₙsⁿ + ⋯ + a₁s + a₀ = aₙ∏ₚ(s − λₚ)∏_q(s − α_q − jβ_q)(s − α_q + jβ_q), with obvious meaning for the terms. If the system is stable, i.e., all λₚ, α_q < 0, then all the terms (s − λₚ) and (s − α_q − jβ_q)(s − α_q + jβ_q) = s² − 2α_qs + (α_q² + β_q²) have positive coefficients. Hence Δ(s) has the same property. That is, stability is a sufficient condition that all the coefficients of Δ(s) are nonzero and have the same sign. We should also add that various properties of polynomials have been studied in the literature. Some old and recent results are surveyed and/or reported in (Borwein and Erdelyi, 1995; Goodarzi, 2016; Milovanovic et al., 1999; Nikseresht, 2017; Oboudi, 2016; Rahman and Schmeisser, 2002; Szego, 1975; Tavares, 2006; Zekavat, 2017). In particular, with regard to their zeros (distribution, computation, bounds, etc.) some old and recent results are due to A. L. Cauchy, J.L. Lagrange, M. Fujiwara, T. Kojima, N. Rouche, L. W. Jensen, K. Mahler, Y.J. Sun, J.G. Hsieh, A.G. Akritas, A.W. Strzebonski, P.S. Vigklas, P. Erdos, P. Turan, J.H. Wilkinson, J. Davenport, K. Farahmand, A. Grigorash, A. Eigenwillig, S. Rezakhah, S. Shemehsavar, J.J. Moreno-Balcazar, V. Sharma, K. Castillo, H. Majidiian, etc. The results are quite interesting; however, we do not include them here in order to avoid digression from our theme, except for two typical problems in Exercises 3.64 and 3.65.
4. The result of Kharitonov was proven in 1978 (Kharitonov, 1978) in a rather complicated manner. Simpler proofs were subsequently proposed by different researchers like (Minnichelli et al., 1989). Moreover, the original proof of Kharitonov does not allow for degree drop in the equation. The case of polynomials with degree drop was subsequently

[10] Number 16 of the list of "23 Problems in Mathematics" posed by D. Hilbert at the International Congress of Mathematicians, Paris, 1900.

proven by different researchers, see e.g., (Willems and Tempo, 1999; Mori and Kokame, 1992; Hernandez and Dormido, 1996). With regard to these proofs it is worth noting that while the proofs of (Mori and Kokame, 1992; Hernandez and Dormido, 1996) are quite independent, the proof of (Willems and Tempo, 1999) is based on Bezoutians, whose connection with Kharitonov's theorem was first discovered in (Auba and Funahashi, 1993). The Kharitonov theory was a hot topic of active research in the last two decades of the 20th century. Since the mid-2000s it has almost reached a state of dormancy and new results, although strong, are only sporadically reported. Among the interesting issues discussed in the literature are the relation between the Kharitonov theorem and Bezoutians, the rank-1 μ problem, interval polynomial matrices, extensions to the case of dependent uncertainties, discrete-time systems, fractional systems, Markov parameters, etc. See e.g., (Barmish, 1994; Chowdary and Chidambaram, 2014; Jury and Katbab, 1993; Kharitonov and Tempo, 1994; Olshevsky and Olshevsky, 2003; Sondhi and Hote, 2016; Wang, 2003; Xu et al., 1998; Young, 1994) and the references therein.
5. An alternative approach to unstructured robust parametric control was initiated by N. E. Mastorakis in 1997 with the help of Rouche's theorem, see (Mastorakis, 2000) and its bibliography. Recent results in this domain are provided in (Bavafa-Toosi, 2016).
6. The Hermite-Biehler theorem is another classical result, which has been extended to complex and time-delay systems as well. In particular it has been used for controller design of time-delay systems (Roy and Iqbal, 2005). Further results can be found in (Ho et al., 1999, 2000; Oliviera et al., 2003). We discuss the basic result in Exercises 3.41-3.43, adopted from these references.
7. Recent results on the stability of time-delay, time-varying, and failure-tolerant systems can be found in (Du et al., 2013; Mondie and Cuvas, 2016; Nasiri and Haeri, 2014; Ngoc, 2016; Nestler et al., 2016; Xiang et al., 2016), etc. See also item 4 of Further Readings of Chapter 4.
8. As we mentioned in the text, many other stability tests and several other stability notions exist, which you can find in Appendix D. A representative one is Floquet's theorem (and its generalizations and extensions) for the solution and stability of periodic differential equations.
9. Another important issue in the context of stability is that of sensitivity, which will be discussed in Chapter 10. Here it suffices to say that sensitive or ill-conditioned systems change their stability condition under small perturbations in their elements. Such designs must be avoided in practice.
10. The state matrix of some systems has a special form, such as an M-matrix, P-matrix, Z-matrix, Metzler matrix, etc. There exist specialized results for the stability of such systems, see e.g., (Shafai et al., 1997).
11. The original concept of internal stability was proposed in Desoer and Chan (1975). We have presented it in a modified and enhanced manner. It should be noted that some authors attribute different names to some stability theories. The ones that we consider in this book, including Appendices D and E, are the most pervasively used.
12. The notion of strong stability is due to Youla et al. (1974). It is an interesting question whether there is any advantage, of any kind, in insisting on using a stable controller for an unstable plant. Note that this question is explicitly answered with respect to sensitivity in the worked-out problem 10.16 of Chapter 10: it has a good effect. With respect to other design specifications and objectives the answer is not straightforward. Simultaneous stabilization was first treated by Saeks and Murray (1982). We cannot go into the theoretical difficulties of a proof of Theorem 3.7, due to which it has to allow for unstable controller-zero/plant-pole cancellation. However, we shall simply

justify it pictorially in Chapter 5, summarized in Exercise 5.56. We have an open question: "How can we modify the theorem to guarantee internal stability as well?" Note that the actual value of Theorem 3.7 is for the cases of internally stable results. Otherwise the design is obviously worth nothing.
13. Based on the result of P. C. Parks in 1962, recently Shorten and Narendra (2014) have studied the relation between the classical results of Routh, Hurwitz, Biehler, and Kharitonov on the one hand and a Schwarz matrix representation of stable systems on the other hand.
14. The important case of more than one zero row in the Routh-Hurwitz table is addressed in Choghadi and Talebi (2013). In (Borobia and Dormido, 2001, 2006) it is shown that under certain conditions three coefficients of the characteristic polynomial determine its stability or sector stability. The aforementioned results are generalized in (Yang, 2003, 2004). In (Otsuka and Okada, 2003) the results of Kharitonov and Tempo for stability of weighted diamonds of polynomials of low degrees are simplified. Similar problems are also discussed. In (Li, 2007) the computational cost of verifying whether a matrix is Hurwitz or an M-matrix is reduced from checking n determinants to checking one determinant with the help of Gaussian elimination. A procedure for finding non-conservative upper bounds for perturbations in the coefficients such that the stability of the system is preserved is offered in (Argoun, 1986, 1987).
15. The chapter can be best continued by the stability table of Elizondo-Gonzalez (2001), the Routh-Pascal polynomial of De Paor (2003), the dynamic-Routh test of Sahu et al. (2013), the so-called factorization approach, singular values, structured singular values, results of the Mansour-Kraus type, the Youla-Kucera parameterization to find the set of all stabilizing controllers, stability of delay systems, linear programming, hybrid systems, nonlinear systems, set stabilization, stability of interval matrices, etc. You are referred to the pertinent literature. See also Appendices D and E. In particular, with regard to interval matrices and polynomial matrices, which are problems of rising importance, some results can be found in (Firouzbahrami et al., 2013; Shafai et al., 2003).
16. With respect to parametrization of controllers for a specific task, it is due to clarify that the general picture has two dual perspectives: (1) parametrization of controllers (to fulfill some design objective(s)), (2) parametrization of plants (which admit the fulfillment of some objective(s) by general/certain controllers). Some results are classical and straightforward. For instance, the IMC controller is the parametrization of all stabilizing controllers for stable MP plants. (Partial extensions of this result are available.) However, some other results are yet to be discovered, like what we discussed at the end of item 12. (Partial and specific results are available.) Some pertinent results can be found in Dorato et al. (1989), Saif et al. (1998), Glaria and Goodwin (1994), Hoshikawa et al. (2013, 2015), Mevenkamp and Linnemann (1988), Ozbay and Gundes (2008), Seraji (1975, 1981), Yamada et al. (2002). The terms 'internal stabilization' and 'external stabilization' exist in the literature also with other meanings. The former refers to stabilization of systems without stimuli, whereas the latter refers to that of systems with stimuli, in particular a disturbance, and is addressed especially in the general field of Input-to-State Stability (ISS) and its variants.
17. Development of the counterparts of the results of this chapter for DAE, fractional-order, positive, constrained, multiscale, etc. systems is desirable. Note that to some extent they already exist in the literature. On the other hand, an important class of systems is the so-called pseudo-linear systems. Interesting results on these systems can be found in Ghane-Sasansaraee and Menhaj (2014). Further developments are of course welcome.

3.16 Worked-out problems

In the following problems we only discuss the stabilization problem. That is to say, we do not discuss the exact determination of the unknown parameter(s) for good performance. This latter problem will be addressed in future chapters. The only exception is Problem 3.22, due to its importance.

Problem 3.1: The output of a given system is stable according to calculations from the model. Is the system stable?
Maybe. The answer is in the affirmative if there is no unstable pole-zero cancellation.

Problem 3.2: Verify the Lyapunov and BIBO stability of the system L(s) = 1/s² + (1 + 4s)/(s² − 3s + 2) + 1/(s + 2) + 1/(s² + 2).
The system is not stable because of the first and second terms. Note that the inverse Laplace transforms of the first and second terms are t and a combination of e^t and e^(2t), all growing unboundedly as time approaches infinity. The third and fourth terms are stable, with the explanation that the third term is BIBO stable and the fourth term is Lyapunov stable (and thus from a practical point of view unstable). Note that the numerators do not have any effect on the stability.
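The term-by-term reasoning can be mechanized by classifying each term from its pole locations. A minimal sketch in Python (the four denominators below follow the terms of L(s) as reconstructed here):

```python
import numpy as np

def classify(den):
    """Classify a rational term by the poles of its denominator
    (coefficients in descending powers of s)."""
    r = np.roots(den)
    if np.any(r.real > 1e-9):
        return "unstable"
    axis = r[np.abs(r.real) <= 1e-9]          # poles on the imaginary axis
    if axis.size == 0:
        return "asymptotically stable"        # BIBO stable
    _, counts = np.unique(np.round(axis, 6), return_counts=True)
    if np.any(counts > 1):
        return "unstable"                     # repeated pole on the axis
    return "marginally stable"                # Lyapunov stable only

# The four denominators of Problem 3.2
for den in ([1, 0, 0], [1, -3, 2], [1, 2], [1, 0, 2]):
    print(den, "->", classify(den))
```

Only the pole locations matter here, in line with the remark that the numerators have no effect on stability.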

Problem 3.3: Verify the stability of the system ẋ = [−1 0; α 2]x + [0; 1]u, y = [1 0]x.
The transfer function is C(sI − A)⁻¹B = 0/((s + 1)(s − 2)). If we consider its value alone, it is not informative. However, if we work with its denominator (which has no pole-zero cancellation) then we can decide its instability due to the unstable pole at 2. Note that the result is independent of the value of the parameter α. Also note that the alternative and shorter way is to find the eigenvalues of A directly. They are −1, 2 and thus the system is unstable.

Problem 3.4: Verify the stability of the system ẋ = [0 1 0; 0 2 1; 0 −1 0]x + [1; 2; 1]u, y = [0 0 1]x.
If we form the transfer function we arrive at s(s − 4)/(s(s − 1)²) = (s − 4)/(s − 1)². The system is neither (BIBO) stable nor Lyapunov stable. The alternative and shorter way is to find the eigenvalues of A directly. They are 0, 1, 1 and thus the system is (BIBO and Lyapunov) unstable.
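The eigenvalue shortcut is easy to reproduce numerically; a minimal sketch (the entries of A follow the reconstruction used in this problem):

```python
import numpy as np

# State matrix of Problem 3.4 (as reconstructed here)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, -1.0, 0.0]])

eigs = np.linalg.eigvals(A)    # roots of |sI - A| = 0
print(np.sort(eigs.real))      # 0, 1, 1: Lyapunov and BIBO unstable
```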

Problem 3.5: Verify the stability of the system ẋ = [−9 −26 −24; 1 0 0; 0 1 0]x + [1; 0; 0]u, y = [1 −1 −2]x.
The transfer function is (s + 1)(s − 2)/((s + 2)(s + 3)(s + 4)) and thus the system is stable. Note that the unstable zero has no effect on the stability/instability of the system. Like the previous examples, this is also more easily decidable from the roots of |sI − A| = 0, which are −2, −3, −4, showing stability of the system, both in the Lyapunov and BIBO senses.

Problem 3.6:

Verify the stability and BIBO stability of the fourth-order system ẋ = Ax + Bu, y = Cx, with the given numerical matrices A, B, C.
Calculating the transfer function using MATLAB we arrive at G(s) = (s³ + 9s² + 26s + 24)/(s⁴ + 10s³ + 35s² + 50s + 24). It is observed that G(s) = (s + 2)(s + 3)(s + 4)/((s + 1)(s + 2)(s + 3)(s + 4)) = 1/(s + 1). Because all the poles (including those canceled with zeros) are stable, the system is stable.

Problem 3.7: Consider the system ẋ = Ax + Bu, y = Cx + Du, where the system matrices are given below. Determine the range of the unknown parameters such that the system is stable.
A = [a 0 −1; b 0 2; −1 1 1], B = [1 0 1; 1 1 2]′, C = [1 0 1; c 1 1], D = [d 2; 0 0]
Stability depends only on the poles of the system, which are the roots of the characteristic equation. This itself is given by |sI − A| = 0. (That is, B, C, and D play no role.) Hence (s − a)(s(s − 1) − 2) + (b − s) = 0 or s³ + (−1 − a)s² + (a − 3)s + (2a + b) = 0. Recalling Example 3.8, given that the coefficient of s³ is positive, the condition for stability is that the following are simultaneously satisfied: −1 − a > 0, a − 3 > 0, 2a + b > 0, 2a + b < (−1 − a)(a − 3). The intersection region is empty since the first two conditions are conflicting! Thus the system cannot be stabilized by the selection of its unknown parameters.

Problem 3.8: Consider this situation. In forming the Routh's table a complete row of zeros is faced and the first column does not have any sign change. What can be said about the roots of the system?
It has no roots in the ORHP. It has at least two pure imaginary roots. The number of these roots is the order of the auxiliary equation. If the system has non-pure-imaginary roots (which can be verified only by computation) they are in the OLHP. (Note that the auxiliary equation cannot have non-pure-imaginary roots. Why?)

Problem 3.9: Determine the stability of the system with characteristic equation s⁵ + 2s⁴ + 12s³ + 24s² + 8s + 16 = 0.

s⁵ |  1      12     8
s⁴ |  2      24     16
s³ |  0(8)   0(48)
s² |  12     16
s¹ |  112/3
s⁰ |  16

In forming the Routh's table a complete row of zeros is encountered. We thus substitute the zeros with the coefficients of the derivative of the auxiliary equation, shown in parentheses. It is seen that there is no sign change in the first column and thus we conclude that there are no ORHP roots. Hence the auxiliary equation has no ORHP roots, and because it is of fourth order it is of the form (s² + a²)(s² + b²). This can be verified by hand. Its roots are ±j0.8417, ±j3.3603. These roots are also roots of the original equation. Its other root is −2, which can be found by dividing the original equation by the auxiliary equation.

Problem 3.10: Determine the stability of the system with characteristic equation s⁵ + 2s⁴ + 24s³ + 48s² − 25s − 50 = 0.

s⁵ |  1      24     −25
s⁴ |  2      48     −50
s³ |  0(8)   0(96)
s² |  24     −50
s¹ |  112.7
s⁰ |  −50

We encounter one complete row of zeros. The zero elements are substituted by the coefficients of the derivative of the auxiliary equation, written in parentheses. The first column has one change of sign and thus the system has one root in the ORHP. Because the auxiliary equation is of order four, we can be sure that it is of the form (s² + a²)(s² − b²). This is indeed the case and it can be factorized by hand as (s² + 5²)(s² − 1²). Its roots are also roots of the original system. Its other root is −2.

Problem 3.11: Determine the stability of the system with characteristic equation s⁸ + s⁷ + s⁶ − s⁵ + 2s³ + 5s² + s + 2 = 0.

s⁸ |  1       1       0      5    2
s⁷ |  1       −1      2      1
s⁶ |  2       −2      4      2
s⁵ |  0(12)   0(−8)   0(8)
s⁴ |  −2/3    8/3     2
s³ |  40      44
s² |  3.4     2
s¹ |  20.47
s⁰ |  2

A complete row of zeros is encountered in this system. The zero coefficients are thus substituted by the coefficients of the derivative of the auxiliary equation. The new entries are shown in parentheses. It is observed that there are two sign changes in the first column and thus the system has two ORHP roots. We know that they must be roots of the auxiliary equation. Let us verify: The auxiliary equation is 2s⁶ − 2s⁴ + 4s² + 2 = 0, which has the roots ±(1.0706 ± j0.6707), ±j0.6266. As expected it has two ORHP roots. All the roots of the auxiliary equation are roots of the original equation. Its other roots are −0.5 ± j1.3229. Note that factorizing the auxiliary equation by hand is a little time consuming in this case and we had better use MATLAB for verification. If we want hand calculations we should try both (s² + a²)(s² − b²)(s² − c²) and ((s + a)² + b²)((s − a)² + b²)(s² + c²).

Problem 3.12: Determine the stability of the system with characteristic equation s⁵ − s⁴ + 4s³ − 4s² − 5s + 1 = 0.

s⁵ |  1                            4     −5
s⁴ |  −1                           −4    1
s³ |  0(ε)                         −4
s² |  −4(1 + ε)/ε                  1
s¹ |  (ε² − 16(1 + ε))/(4(1 + ε))
s⁰ |  1

We encounter a row with a zero first element, but the row is not identically zero. We use the ε method and leave it to the reader to try the other two solutions we have proposed for this case. So we substitute the zero with ε. It is observed that there are four sign changes and hence there should be four roots in the ORHP. This is the case; the roots are 0.0595 ± 2.2148j, 1.4747, 0.1788, −0.7725. Also note that the elements above and below ε have the same sign and thus we can expect two roots close to the imaginary axis, which is also the case.

Problem 3.13: Determine the stability of the system with characteristic equation s⁴ − s² + 2s + 2 = 0.

s⁴ |  1                  −1    2
s³ |  0(ε)               2
s² |  −(2 + ε)/ε         2
s¹ |  2 + 2ε²/(2 + ε)
s⁰ |  2

There are two sign changes in the first column and thus two ORHP poles. In fact, the roots are 1 ± j, −1, −1. Note that this example reveals that when the elements above and below ε have opposite signs, the (ORHP) roots need not be close to the imaginary axis.
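The tabular procedure of Problems 3.9-3.13 can be automated. The following is a sketch, not a general-purpose implementation: the complete-row-of-zeros case is handled with the derivative of the auxiliary polynomial, a lone zero leading element is replaced by a small numerical ε, and the function names are ours:

```python
import numpy as np

def routh_first_column(coeffs, eps=1e-9, tol=1e-12):
    """First column of the Routh array for a polynomial given by its
    coefficients in descending powers of s."""
    n = len(coeffs) - 1                       # degree of the polynomial
    w = (n + 2) // 2                          # width of the array
    rows = np.zeros((n + 1, w))
    rows[0, :len(coeffs[0::2])] = coeffs[0::2]
    rows[1, :len(coeffs[1::2])] = coeffs[1::2]
    for i in range(2, n + 1):
        if np.all(np.abs(rows[i - 1]) < tol):      # complete row of zeros
            deg = n - (i - 2)                      # degree of the auxiliary polynomial
            powers = np.clip(deg - 2 * np.arange(w), 0, None)
            rows[i - 1] = rows[i - 2] * powers     # derivative of the auxiliary polynomial
        if abs(rows[i - 1, 0]) < tol:              # lone zero leading element
            rows[i - 1, 0] = eps
        a, b = rows[i - 2], rows[i - 1]
        rows[i, :-1] = (b[0] * a[1:] - a[0] * b[1:]) / b[0]
    return rows[:, 0]

def orhp_count(coeffs):
    """Number of ORHP roots = number of sign changes in the first column."""
    col = routh_first_column(coeffs)
    s = np.sign(col)
    return int(np.sum(s[:-1] != s[1:]))

# Problem 3.9 has no sign change; Problem 3.12 has four ORHP roots
print(orhp_count([1, 2, 12, 24, 8, 16]), orhp_count([1, -1, 4, -4, -5, 1]))  # -> 0 4
```

For Problem 3.9 the computed first column is 1, 2, 8, 12, 112/3, 16, matching the hand calculation.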

Problem 3.14: Determine the stability of the system with characteristic equation s⁴ + 1 = 0.

s⁴ |  1      0      1
s³ |  0(4)   0(0)
s² |  0(ε)   1
s¹ |  −4/ε
s⁰ |  1

In forming the table we encounter a complete row of zeros. We thus use the auxiliary equation, being s⁴ + 1 = 0. The zero coefficients are thus substituted by 4 and 0, shown in parentheses. Then we encounter a row whose first entry is zero, but the row is not identically zero. Now we use the epsilon method and form the table. There are two sign changes in the first column, indicating that there are two ORHP poles. In fact the roots are (√2/2)(±1 ± j).

Problem 3.15: Determine the stability of the system with characteristic equation s⁴ − 1 = 0.

s⁴ |  1      0      −1
s³ |  0(4)   0(0)
s² |  0(ε)   −1
s¹ |  4/ε
s⁰ |  −1

The problem is as before except that this time there is one sign change and thus the system has one ORHP pole. Here the roots are ±1, ±j.

Problem 3.16: Determine the stability of the system with characteristic equation s⁶ + s⁵ + 5s⁴ + 5s³ + 7s² + 6s + 3 = 0. The Routh's table is as follows:

s⁶ |  1                             5            7    3
s⁵ |  1                             5            6
s⁴ |  0(ε)                          1            3
s³ |  (5ε − 1)/ε                    (6ε − 3)/ε
s² |  (8ε − 1 − 6ε²)/(5ε − 1)       3
s¹ |  (−36ε² − 9ε)/(8ε − 1 − 6ε²)
s⁰ |  3

There are two sign changes in the first column and thus there are two ORHP poles. To verify this we compute the poles; they are 0.1217 ± 1.3066j, ±1.7321j, −0.6217 ± 0.4406j.

Problem 3.17: Repeat Problem 3.10 for the system M = 5, N = 11, P = 15 such that only two closed-loop poles satisfy −2 < Re(s) < 0.
The characteristic equation of the system is Δ(s) = s³ + Ms² + Ns + PK = s³ + 5s² + 11s + 15K = 0. Thus for stability there should hold K > 0, 15K < 55, or 0 < K < 11/3. For the second part of the problem we should compute Δ(s − 2), which is s³ − s² + 3s − 10 + 15K. Now the Routh's table is:

s³ |  1     3
s² |  −1    −10 + 15K
s¹ |  A
s⁰ |  −10 + 15K

where A = 3 + (−10 + 15K) = 15K − 7. For this system we must have two sign changes in the first column. There are two options for this purpose: (1) A > 0, −10 + 15K > 0; (2) A < 0, −10 + 15K > 0. The first case is achieved if K > 2/3. The second case is self-contradictory and cannot be met. (Why?)

Problem 3.18: Obtain the range of the parameters a, b for which the system with this characteristic equation is stable: s⁴ + 2s³ + as² + s + b = 0.
Applying the Routh's test we determine the range of stability as a > 0.5, 0 < b < 0.5a − 0.25.

Problem 3.19: Is it possible to stabilize the system L = 1/(s − 1)² with PI control? How about PD control?
With the PI controller C = k + 1/(Ts) the characteristic equation is Ts³ − 2Ts² + T(k + 1)s + 1 = 0. This equation is always unstable because its coefficients will not have the same sign for any choice of the parameters. With the PD controller C = k + Ds the characteristic equation is s² + (D − 2)s + k + 1 = 0, which is stable for any k > −1, D > 2. However, this controller is not causal; we should try C = k + Ds/(1 + D′s). The characteristic equation now is D′s³ + (1 − 2D′)s² + (D′(1 + k) + D − 2)s + 1 + k = 0. Assuming positive k (or D′) it is stable if D′ > 0, 1 − 2D′ > 0, D′(1 + k) + D − 2 > 0, and D′(1 + k) − (1 − 2D′)(D′(1 + k) + D − 2) < 0. The first two conditions dictate 0 < D′ < 0.5. The third condition yields D > 2 − D′(1 + k). The fourth condition with use of the third condition results in D > 2 + D′. Thus in total k > 0, 0 < D′ < 0.5, and D > 2 + D′.
Question 3.7: Can you find a simpler mathematical argument to prove that when a system is stabilizable by ideal PD control it is always stabilizable by actual PD control? (And the same for ideal and actual PID?)
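A sample point of the region just derived for the causal PD controller can be verified by rooting the closed-loop characteristic polynomial; a sketch with the illustrative values k = 1, D′ = 0.2, D = 2.5:

```python
import numpy as np

# One admissible point of the derived region: k > 0, 0 < D' < 0.5, D > 2 + D'
k, Dp, D = 1.0, 0.2, 2.5

# Closed-loop characteristic polynomial D's^3 + (1 - 2D')s^2 + (D'(1 + k) + D - 2)s + (1 + k)
char_poly = [Dp, 1 - 2 * Dp, Dp * (1 + k) + D - 2, 1 + k]
roots = np.roots(char_poly)
print(roots.real.max())   # negative: the causal PD controller stabilizes 1/(s - 1)^2
```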


Problem 3.20: Consider the plant P = 20/[s(s + 2)(s + 8)] preceded by the controller C = K(s + z)/(s + p) in the conventional 1-DOF control structure. Propose some stabilizing controller for it. Try with p = 1.
The characteristic equation of the system is s(s + 2)(s + 8)(s + p) + k(s + z) = 0, k = 20K, which can be written as s⁴ + 11s³ + 26s² + (k + 16)s + kz = 0. Thus there should hold −16 < k < 270, kz > 0, k² − 254k − 4320 < −121kz. A stabilizing controller is thus obtained, e.g., with k = 1, by which we get z < 4573/121 = 37.79. At this stage we can check the output response only by simulation. In Chapter 4 we will learn more performance specifications. For instance, to minimize the steady-state error to a ramp input, kz/p must be large.

Problem 3.21: Consider the plant P = 1/[s(s − 1)] preceded by the controller C = k(s + z)/(s + p) in the conventional 1-DOF control structure. Propose some stabilizing controller for it.
The characteristic equation of the system is s(s − 1)(s + p) + k(s + z) = 0, which can be written as s³ + (p − 1)s² + (k − p)s + kz = 0. Recalling Exercise 3.8, the system is stable if the following conditions are simultaneously satisfied: p − 1 > 0, k − p > 0, kz > 0, kz < (p − 1)(k − p). Thus k > p > 1, z > 0, kz < (p − 1)(k − p). A stabilizing controller in this region is obtained, e.g., by choosing p = 2. Thus k > p = 2, 0 < z < 1 − 2/k. What controller (what k and z) in this class (or with another p) results in good performance needs to be verified by simulation. (However, in Chapter 4 we will learn some design specifications. For example, to have low steady-state error, kz/p must be large.)

Problem 3.22: Determine the simplest controller structure C(s) such that the closed-loop system of the plant P = 1/(s³(s + 1)) is stable.
Recalling Remark 3.4, we conclude that it is necessary that the order of the numerator of the controller be at least two. Because the controller should be realizable, the order of its denominator should also be at least two. The first thought that may cross one's mind is to use the controller C1 = Ks²/(s + p)², and it is of course easy to verify that the closed-loop system is stabilizable by an appropriate choice of K for any positive p. (For instance, for p = 1 we obtain 0 < K < 0.44.) However, the system is internally unstable as there is a CRHP pole-zero cancellation. (Recall that the jω-axis is part of the CRHP.) In fact, the transfer function P/(1 + C1P) = (s + p)²/(s²[s(s + 1)(s + p)² + K]) is unstable. Thus the zeros of the controller should not be at the origin. At this stage, specifying further the parameters of the numerator and denominator is possible only by trial and error. It is easy to verify by MATLAB that, e.g., the controller C2 = K(s + 0.1)²/(s + 2)² with 0.165 < K < 1.890 is a stabilizing controller. (However, if we choose the zeros, e.g., at −0.3, then the system will not be stabilizable.) Note that a higher-order controller like C3 = K(s + z)³/(s + p)³ is also a stabilizing controller, where, e.g., z = 0.3, p = 2. If we try this controller we find out that 1.38 < K < 7.95.
Discussion: Using the command "impulse" of MATLAB we can actually produce the response to initial conditions, which is the stabilization (as opposed
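Whether a given K stabilizes the loop with C2 can be spot-checked by rooting the characteristic polynomial s³(s + 1)(s + 2)² + K(s + 0.1)²; a minimal sketch (the expansion and helper names are ours):

```python
import numpy as np

def charpoly(K):
    # s^3 (s + 1)(s + 2)^2 + K (s + 0.1)^2, coefficients in descending powers
    open_loop = np.polymul([1, 0, 0, 0], np.polymul([1, 1], [1, 4, 4]))
    return np.polyadd(open_loop, K * np.polymul([1, 0.1], [1, 0.1]))

def is_stable(K):
    return bool(np.all(np.roots(charpoly(K)).real < 0))

print(is_stable(0.01), is_stable(1.0))   # a vanishing gain fails; K = 1 stabilizes
```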


Figure 3.7 Simulation with C2. Left: stabilization. Right: step response.

to tracking) performance of the system, see Fig. 3.7. For the impulse response with C2 we observe that by decreasing the value of the gain the overshoot decreases, but at the expense of more oscillations and a longer settling time. The designer may choose that the best answer is with K = 0.3. The tracking performance for a step input is also shown in the same figure. However, here the answer with K = 0.6 is clearly better than that with K = 0.3. This shows that the controller which is designed for one objective (such as stabilization) may fulfill a second objective as well (like step tracking). [11] However, to have the best performance the controller parameters often have to be modified. In brief, a controller is designed for a particular objective. For other objectives the controller parameters, if not the controller structure, have to be redesigned.

Problem 3.23: Determine the range of the proportional controller K such that the following uncertain system is closed-loop stable in a negative unity feedback: L = K([1, 5]s² + [2, 6]s + [−2, 5])/(s³ − s² + [−1, 3]s + [1, 2]).
The characteristic equation of the system is Δ(s, K) = s³ + [−1 + K, −1 + 5K]s² + [−1 + 2K, 3 + 6K]s + [1 − 2K, 2 + 5K] = 0. For third-order systems it can easily be shown that it suffices to check the stability of k₃ from among the four Kharitonov polynomials. (How?) Thus we should have 2 + 5K < (−1 + K)(−1 + 2K), or K > 2 + √4.5. Note that the other answer (K < 2 − √4.5) is not acceptable. (Why?)

Problem 3.24: From both theoretical and practical points of view discuss the stability of the negative unity feedback control system with P(s) = (s − 2)/[(s − 1)(s + 1)] and C(s) = 0.4(s − 1)/(s + 1).

[11] In fact this system even tracks a ramp input, as we shall learn in the next chapter. But the best performance is not obtained with K = 0.3.


It is seen that L = CP reduces to L(s) = 0.4(s − 2)/(s + 1)² and thus the closed-loop transfer function is T = L/(1 + L) = 0.4(s − 2)/(s² + 2.4s + 0.2), which is stable. However, the system is unstable because an unstable pole-zero cancellation has occurred in the loop gain. That is to say, the loop gain is actually L(s) = 0.4(s − 1)(s − 2)/[(s − 1)(s + 1)²] and hence the closed-loop transfer function is indeed T(s) = 0.4(s − 1)(s − 2)/(s³ + 1.4s² − 2.2s − 0.2), which is evidently unstable. (Discuss the problem in terms of T/C and T/P as well.)

Problem 3.25: Discuss the stability of the negative unity feedback control system with P(s) = (s − 2)/[s(s − 1)(s + 1)] and C(s) = K(s + 2)(s + 3)/[(s − 2)(s + 10)].
The loop gain is CP(s) = K(s + 2)(s + 3)/[s(s − 1)(s + 1)(s + 10)]. Because there is a CRHP pole-zero cancellation between the controller and the plant, the closed-loop system is not internally stable, although the input-output transfer function is stable for approximately K > 26.02. (Discuss the problem also in terms of T/C and T/P.)

Problem 3.26: This problem concerns the failure tolerance feature of feedback that we mentioned in Chapter 1. Failure Tolerant Control (FTC) schemes fall into two main categories: active FTC and passive FTC. The second branch uses the ideas of robust control design. This problem presents a passive FTC design in the realm of decentralized control. Decentralized control means that not all the outputs are fed back to all the inputs; the controller has a decentralized or (generally speaking) block-diagonal structure. [12] Consider the unstable system ẋ = Ax + Bu, y = Cx, where

A = [−8.2 2.3 0.7 −0.1 −2.13 −12.3 −0.86 5.15;
     −3.2 −4.6 0.43 0.1 −9.81 −0.12 0.15 −2.2;
     0.93 0.21 −7.86 0.38 −11.2 −12.59 −0.1 1.4;
     −1.2 0.91 −1.32 −5.12 10.56 0.8 0.2 −1.81;
     −0.2 −2.3 0.65 0.23 −9.3 1.2 4.5 0.66;
     −0.1 0.15 0.15 2.1 0.1 0.01 0.21 0.1;
     2.2 7.29 −1.2 −5.1 0.9 0.5 −2.25 2.1;
     −12.32 −0.23 0.12 4.31 4.2 5.11 −1.4 −8.3],

B = [1 1 0 0 0 0 0 0; 0 0 1 1 0 0 0 0; 0 0 0 0 1 1 0 0; 0 0 0 0 0 0 1 1]′, C = B′.

For this system we solve the failure-tolerant stabilization problem in the face of islanding [13] of any subsystem. This problem is adopted from (Bavafa-Toosi, 2006; Bavafa-Toosi et al., 2006b), where it is solved in a more general context. The static control signal is given by u = Ke = K(r − y). In the stabilization problem r = y_d = 0 and the problem reduces to u = −Ky and thus to ẋ = (A − BKC)x. Therefore K must be chosen such that the eigenvalues of A − BKC are in the OLHP. Solving the problem in the context of decentralized FTC means that the controller has the structure K = diag{k₁, k₂, k₃, k₄} and that it must also stabilize the system when any one of the kᵢ, i = 1, ..., 4, is set to zero. By an optimization procedure we find an infinite number of solutions for this problem, such as K = −diag{5.69, 6.65, 5.81, 15.65}. It is verified in the accompanying CD (on the website of the book [14]) that the FTC problem is solved for D-stability in the region σ = 0.5, δ = π/12. Further details can be found in the aforementioned references. Note that the output feedback passive FTC problem is a quite difficult problem and in general does not admit a solution, especially if we require D-stability.

[12] In modern research, issues such as overlapping decentralized control are also addressed. We simply consider that the controller is block diagonal where there is no overlap between the subcontrollers.

Problem 3.27: Consider the LTV system ẋ = A(t)x, t ≥ 0, where A(t) = −2t + 1 and x(0) = e. Investigate the stability properties of the system.
The solution of the system is evidently x = c·e^(−t² + t) and from the initial condition we find c = e and thus x = e^(−t² + t + 1). Since x is bounded for all t ≥ 0 the system is stable. However, we note that the eigenvalue of the system changes sign with time: for 0 ≤ t < 1/2, λ > 0; for t = 1/2, λ = 0; and for t > 1/2, λ < 0. For the sake of completeness we add that depending on the type of stability that we have in mind, we should also consider the region of attraction of the system. And this is not restricted to this problem only. We leave these details to the second course on state-space methods.
Problem 3.28: Consider the function x(t) = (t + 2)/(2t² + 1), t ≥ 0. Is it stable?
Yes. It is positive and bounded, and decays from x(0) = 2 to x(∞) = 0. The function is stable at the equilibrium point x = 0. (Question: We note that the system can be written as ẋ = A(t)x where A(t) = (−2t² − 8t + 1)/[(t + 2)(2t² + 1)], for all t ≥ 0. How can we investigate the stability of this differential equation?)

[13] Islanding is the technical term for disconnecting a subsystem from the rest of the system and is represented by setting its subcontroller to zero.
[14] The reader is encouraged to visit the companion website of the book on the Elsevier homepage (https://www.elsevier.com/books). Alternatively, a simple search on the web directs you to the exact page. Updates and relevant information are announced there.


Problem 3.29: Consider the LTV system x_ 5 AðtÞx, t $ 0, where

3t 2 5texpðt2 Þ AðtÞ 5 . Investigate the stability properties of the 2 2 2texpð 2 t Þ 2 2t system. An answer of the system is found as xðtÞ 5 a½expð2t2 Þ expð22t2 ÞT which is (exponentially) stable at origin. However, direct computation shows that the eigenpffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi values of AðtÞ are λ1;2 5 2t ð1 6 1 1 4 3 16Þ, one of which is nonnegative. (Question: What is the general answer and what should the initial condition of the system be?) Problem 3.30: A sufficient, but not necessary, condition for the stability of the LTV system x_ 5 AðtÞx is that the eigenvalues of AðtÞ 1 AT ðtÞ are stable. You will learn the proof and details of this conservative result in the second course on state-space systems via the Lyapunov stability method. In particular this means that a sufficient condition for the stability of the scalar system x_ 5 λðtÞx is that λðtÞ , 0 for all t $ 0. Note that we have used this result to conclude the stability of Example 2.3 of Chapter 2. Problem 3.31: Consider the LTI system x_ 5 Ax. (1) Show that the state transformation x 5 Tx, where T is a time independent full rank matrix, does not change the stability properties of the system. (2) Show that if T is time dependent the stability condition may change. 1. We have T 21 x_ 5 AT 21 x and thus x_ 5 TAT 21 x. The eigenvalues of the system are the roots of jsI 2 TAT 21 j 5 0. There holds jsI 2 TAT 21 j 5 jsTIT 21 2 TAT 21 j 5 jTjjsI 2 AjjT 21 j 5 jTjjT 21 jjsI 2 Aj 5 jsI 2 Aj. That is, the eigenvalues are not changed and thus the stability properties are the same. 2. As for the second part we use a simple example. Consider the system x_ 5 x which is unstable. Now by x 5 e22t x we have x_ 5 2x which is stable.

For the sake of completeness we add that the exposition suffices for our purpose. However, in fact, the validity of the transformation should also be discussed. Provide the analysis! Also consider x̄ = e^{2t}x, which results in dx̄/dt = 3x̄, which is unstable! Analyze this transformation.
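The invariance argument of Problem 3.31(1) is easy to spot-check numerically. The sketch below uses arbitrary illustrative matrices (not taken from the text), assuming NumPy is available:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
T = np.array([[1.0, 2.0], [0.0, 1.0]])     # any constant full-rank T

# Time-independent similarity: T A T^{-1} has the same spectrum as A,
# so the stability properties are unchanged.
eig_A = np.sort(np.linalg.eigvals(A).real)
eig_T = np.sort(np.linalg.eigvals(T @ A @ np.linalg.inv(T)).real)
```

For a time-dependent T no such identity holds, which is exactly the point of part (2).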

3.17

Exercises

Over 150 exercises are offered in the form of several collective problems in the following. It is advisable to try as many of them as your time allows. Exercises 3.54–3.60 are a little advanced and are more appropriate for readers of mathematics or for the general reader after studying the sequel of the book on state-space methods. Yet most of them are elementary with regard to the pertinent cutting-edge results.

Exercise 3.1: This problem has several parts. (1) Prove Theorems 3.1 and 3.2. (2) Prove the impulse response condition for BIBO stability of LTI systems. (3) In supplementing Remark 3.2 we should add that for certain nonzero inputs the state of a BIBO stable LTI

Stability analysis


system eventually tends to zero as well. Provide an example. Can you characterize those inputs? (4) What is the relation between the growths of the state and the input of a BIBO stable LTI system? (5) What is the relation between the growths of the output and the input of a BIBO stable LTI system?

Exercise 3.2: Verify the Lyapunov and BIBO stability of the sequel systems and their individual terms:
1. L(s) = 1/s + 2s/(s^2 + 1) + (s - 1)/(s + 1) + (s - 2)/(s + 2)^2
2. L(s) = 1 + (s - 1)/(s + 1) + 2s/(s + 1)^2 + (s - 2)/(s + 2)^2
3. L(s) = 1/s + 2s/(s^2 + s - 1) + s + 1
4. L(s) = 1/s + (1 - 2s)/(s^2 - s + 1) + 1/(s + 1)^2
5. L(s) = -1 + s/(s^2 + s + 1) + (2s + 2)/s + 1/(s + 2)^2
6. L(s) = (s - 1)/s - 2s/(3s^2 + 2s + 1) + (1 - 2s)/(s^2 + 1) + s/(s + 2)^2
7. L(s) = (s - 1)/(s^2 + 1) + (1 - 2s)/(3s^2 + s + 2) + s/(s + 1)
8. L(s) = 1/(s^2 - 1) + (1 + s)/(s^2 + s - 2) + s/(s^2 + 1)
9. L(s) = (1 - s^2)/(s + 1) + (1 + s)/(s^2 + s - 1) + (s - 1)/(s^2 + 1)^2
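For Exercise 3.2, each additive term can be screened numerically: a term with coprime numerator and denominator is BIBO stable iff all its poles lie in the open left half-plane. A minimal helper sketch (pole locations only; it does not detect improper terms or pole–zero cancellations, and the function names are our own):

```python
import numpy as np

def poles(den):
    """Poles of a term, given its denominator coefficients in descending powers of s."""
    return np.roots(den)

def bibo_stable(den):
    # BIBO stable (for a coprime, proper term) iff every pole has Re < 0.
    return bool(np.max(poles(den).real) < 0)
```

For instance, 1/(s + 1) passes, while 1/s (pole at the origin) and any term with denominator s^2 + 1 (poles on the imaginary axis) fail.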

Exercise 3.3: Show the equivalence of Routh's tabular form and the Hurwitz determinant condition for polynomials of degree four. Extend your exposition to general nth-degree polynomials.
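Routh's tabular form lends itself to a short computation. The sketch below builds the first column of the Routh array for a polynomial with no zero pivots (the classical epsilon and auxiliary-polynomial special cases are deliberately not handled); the function names are our own:

```python
import numpy as np

def routh_first_column(coeffs):
    """First column of the Routh array for a polynomial given by its
    coefficients in descending powers of s. Assumes no zero pivot arises."""
    n = len(coeffs)
    width = (n + 1) // 2
    table = np.zeros((n, width))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]   # s^n row
    table[1, :len(coeffs[1::2])] = coeffs[1::2]   # s^{n-1} row
    for i in range(2, n):
        for j in range(width - 1):
            table[i, j] = (table[i - 1, 0] * table[i - 2, j + 1]
                           - table[i - 2, 0] * table[i - 1, j + 1]) / table[i - 1, 0]
    return table[:, 0]

def hurwitz_stable(coeffs):
    # Stable iff the first column has no sign change.
    col = routh_first_column(coeffs)
    return np.all(col > 0) if coeffs[0] > 0 else np.all(col < 0)
```

For example, (s + 1)(s + 2)(s + 3) = s^3 + 6s^2 + 11s + 6 passes, while (s - 1)(s + 2)(s + 3) = s^3 + 4s^2 + s - 6 produces a sign change.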

Exercise 3.4: Consider the matrix A = [A11 A12; 0 A22]. (1) Show that the eigenvalues of A are the union of the eigenvalues of A11 and A22. (2) Suppose that λ and γ are eigenvalues of A11 and A22, respectively, with x and y as their right and left eigenvectors. What can be said about the left and right eigenvectors of λ and γ as eigenvalues of A? (3) Repeat part (2) for general block diagonal matrices. (4) Repeat the problem for the matrix A = [A11 0; A21 A22].
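Part (1) of Exercise 3.4 can be illustrated numerically: for a block upper-triangular matrix the spectrum is the union of the spectra of the diagonal blocks, regardless of the coupling block. The numbers below are arbitrary test values:

```python
import numpy as np

A11 = np.array([[1.0, 2.0], [0.0, 3.0]])   # eigenvalues 1, 3
A22 = np.array([[5.0]])                    # eigenvalue 5
A12 = np.array([[7.0], [-4.0]])            # arbitrary coupling block

# Block upper-triangular assembly [A11 A12; 0 A22]
A = np.block([[A11, A12], [np.zeros((1, 2)), A22]])
eigs = np.sort(np.linalg.eigvals(A).real)
```

Changing A12 leaves `eigs` unchanged, which is exactly the claim to be proven.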

Exercise 3.5: What are the eigenvalues of the matrix A given below:
1. the n × n matrix A with a1, a2, …, an on its anti-diagonal and all other entries zero
2. the matrix of part 1 with the additional entry b at position (n, n)

Exercise 3.6: This problem has three parts. (1) Consider the matrix A = [0 A12; A21 A22]. What can be said about the eigenvalues of A versus the eigenvalues of A12 and A21? (2) How about the matrix A = [0 B; B* 0], where B ∈ C^{m×n} and B* is the conjugate transpose of B? (3) How about the matrix A = [0 I_n; -I_n B]?

Exercise 3.7: Denote by Λ(A) the set of the eigenvalues of A. (1) Show that if A and B commute, then Λ(A + B) ⊆ Λ(A) + Λ(B). (2) What if A and B do not commute?

Exercise 3.8: The spectral radius of the matrix A is defined as ρ(A) = max_i |λ_i(A)|. Show that ρ(A) ≤ min{ max_i Σ_j |a_ij|, max_j Σ_i |a_ij| }.
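The bound of Exercise 3.8 is quick to verify on a sample matrix (an arbitrary illustrative choice, not from the text):

```python
import numpy as np

A = np.array([[0.5, -1.2, 0.3],
              [0.1,  0.4, -0.7],
              [-0.9, 0.2,  0.6]])

rho = np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius
row_bound = np.max(np.abs(A).sum(axis=1))    # max absolute row sum
col_bound = np.max(np.abs(A).sum(axis=0))    # max absolute column sum
```

The exercise asserts rho ≤ min(row_bound, col_bound); the assertion holds for any square matrix, so the choice of A above is immaterial.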


Exercise 3.9: Let A = [a_ij] and w1, …, wn be positive real numbers. (1) Show that the eigenvalues of A lie in the regions ∪_{i=1}^{n} { z ∈ C : |z - a_ii| ≤ (1/w_i) Σ_{j≠i} w_j |a_ij| } as well as in the regions ∪_{j=1}^{n} { z ∈ C : |z - a_jj| ≤ w_j Σ_{i≠j} (1/w_i) |a_ij| }. That is, they lie in the intersection of the above regions. In the case that the weights w_i are all equal to 1 these circles are called Gerschgorin circles¹⁵ (Varga, 2004). In the general case they are the generalized or weighted Gershgorin circles. (2) Show on a simple example that by appropriate selection of weights a tighter and hence more informative region may be obtained. (3) Show that if a Gershgorin circle is isolated from the others, then it contains precisely one eigenvalue.

Exercise 3.10: Express the Gershgorin result as a sufficient condition for instability and provide an example.

Exercise 3.11: A matrix A is called normal if AA* = A*A, where A* denotes the conjugate transpose of A. Show that for the normal matrix A = [a_ij] with eigenvalues λ_i there holds Σ|a_ij|^2 = Σ|λ_i|^2, where the summations are taken over all elements and eigenvalues.

Exercise 3.12: A matrix A is called Hermitian if A = A* and skew-Hermitian if A = -A*. (1) Show that a Hermitian matrix has real eigenvalues whereas a skew-Hermitian matrix has pure imaginary eigenvalues. (2) Show that a normal matrix is Hermitian iff all its eigenvalues are real. (3) Show that a normal matrix is skew-Hermitian iff all its eigenvalues are pure imaginary. (4) Show that if A is Hermitian then A + A*, AA*, and A*A are also Hermitian, and A - A* and jA are skew-Hermitian. (5) Let A and B be Hermitian. Arrange the eigenvalues λ_i(A), λ_i(B), and λ_i(A + B) in increasing order λ1 ≤ λ2 ≤ ⋯ ≤ λn. Show that ∀i there holds λ_i(A) + λ_1(B) ≤ λ_i(A + B) ≤ λ_i(A) + λ_n(B). (6) In the settings of part (5) show that ∀ i + j = k + n there holds λ_k(A + B) ≤ min{ λ_i(A) + λ_j(B), λ_j(A) + λ_i(B) }. (7) In the settings of part (5) assume that A - B has only nonnegative eigenvalues. Then show that ∀i there holds λ_i(A) ≥ λ_i(B). (8) What is the restatement of part (7) for nonpositive eigenvalues and thus the usage for stability?

Exercise 3.13: Consider the sequel matrix.
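The unweighted Gershgorin statement of Exercise 3.9 (all w_i = 1) is easy to check numerically: every eigenvalue must fall in at least one disc centered at a diagonal entry with the off-diagonal absolute row sum as radius. An illustrative sketch with an arbitrary matrix:

```python
import numpy as np

A = np.array([[4.0, 0.5, 0.2],
              [0.3, -3.0, 0.4],
              [0.1, 0.2, 1.0]])

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # off-diagonal row sums

def in_some_disc(z):
    """True if z lies in at least one Gershgorin disc of A."""
    return bool(np.any(np.abs(z - centers) <= radii + 1e-12))

eigs = np.linalg.eigvals(A)
```

Here the three discs are well separated, so part (3) of the exercise applies as well: each disc contains exactly one eigenvalue.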
Show that the matrix is Hurwitz stable, i.e., has all its eigenvalues in the OLHP, if the given condition is satisfied. This condition is termed the "secant condition" for stability in the context of biological systems, where it finds numerous applications. Its extension to delay systems is also available (Ahsen et al., 2015).

A = [-α1, 0, ⋯, 0, -β1; β2, -α2, 0, ⋯, 0; ⋮, ⋱, ⋱, ⋮; 0, ⋯, 0, βn, -αn],   (β1⋯βn)/(α1⋯αn) < sec(π/n)^n = 1/(cos(π/n))^n
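The secant condition of Exercise 3.13 can be probed numerically. For n = 3 with all α_i = 1, the characteristic polynomial of the cyclic matrix works out to (λ + 1)^3 + β1β2β3, so the exact stability boundary sits at β1β2β3 = 8 = sec(π/3)^3; the β values below are illustrative choices straddling that boundary:

```python
import numpy as np

def cyclic_matrix(alphas, betas):
    """Cyclic feedback matrix of Exercise 3.13: -alpha_i on the diagonal,
    beta_{i+1} on the subdiagonal, and -beta_1 in the top-right corner."""
    n = len(alphas)
    A = -np.diag(alphas).astype(float)
    for i in range(1, n):
        A[i, i - 1] = betas[i]
    A[0, n - 1] = -betas[0]
    return A

def hurwitz(A):
    return bool(np.max(np.linalg.eigvals(A).real) < 0)

# n = 3, equal alphas: products just below/above the boundary 8
b_ok = 7.9 ** (1 / 3)
b_bad = 8.1 ** (1 / 3)
```

With product 7.9 the matrix is Hurwitz; with 8.1 a complex pair has crossed into the right half-plane.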

Exercise 3.14: Consider the LTI system ẋ = Ax + Bu, y = Cx. Define the "trace" of A as tr A := Σ_i a_ii, where the summation is taken over all i. Prove that tr A = Σ_i λ_i, where λ_i is the ith eigenvalue of A and the summation is taken over all eigenvalues. Then conclude that if the trace is positive then the system is unstable. That is, a sufficient condition for instability is positivity of the trace. Note that an equivalent statement is that a necessary condition for stability is negativity of the trace. (A special case of this result is considered in Exercise 10.60.)
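The trace identity of Exercise 3.14 is immediate to confirm numerically (arbitrary test matrix):

```python
import numpy as np

A = np.array([[2.0, -5.0, 1.0],
              [0.5, -1.0, 0.0],
              [3.0, 2.0, -4.0]])

trace = np.trace(A)                          # sum of diagonal entries: -3
eig_sum = np.sum(np.linalg.eigvals(A)).real  # sum of eigenvalues
```

Since a stable matrix has all eigenvalue real parts negative, a nonnegative trace immediately rules out stability, which is the instability test of the exercise.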

15 Named after Semyon Aranovich Gerschgorin, Belarusian mathematician (1901–1933), who studied, worked, and died of illness at a young age in Russia.


Exercise 3.15: Consider an LTI system and its characteristic equation given by the conventional finite-dimensional polynomial Δ(s). Show that a necessary condition for stability of the system is that the derivative of Δ(s) does not change sign for positive s.

Exercise 3.16: Consider the polynomial Δ(s) = a_n s^n + a_{n-1} s^{n-1} + ⋯ + a_1 s + a_0, where a_n ≠ 0. Prove that it is stable iff a_{n-1} and a_n have the same sign and the polynomial Δ(s) - (a_n/a_{n-1})(a_{n-1} s^n + a_{n-3} s^{n-2} + a_{n-5} s^{n-4} + ⋯) is also stable.

Exercise 3.17: Show that for the Smith predictor to work the plant must be open-loop stable. Note that this is needed even in MATLAB simulations. For instance, simulate the Smith predictor for the system P(s) = e^{-s}/s with C0(s) = (3s + 1)/s designed for P0(s) = 1/s. Plot the outputs in the same figure. Note that not only does y(t) = y0(t - 1) not hold, but also the output y(t) diverges as you increase the simulation time to, e.g., t = 100 s.

Exercise 3.18: The following structure, Fig. 3.8, is known as the IMC structure, as introduced in Chapter 1, Introduction. (1) Suppose P = 1/(s - 1), P0 = 1/(s - 1), and Q = 2. Is the system stable? (2) Show that if the system is stable, then necessarily P and P0 do not have common unstable poles. (3) Assume that the model is perfect, i.e., P = P0. Prove that the system is internally stable iff both P and Q are stable. (4) Derive the internal stability condition for the case of the imperfect model P0 ≠ P. (5) Derive the internal stability condition for the 2-DOF IMC structure. Consider both perfect and imperfect models.


Figure 3.8 Internal Model Control (IMC) structure.

Exercise 3.19: (1) Consider Fig. 3.9, left panel. Find the range of k for stability. (2) In the other panels find the condition for stability, using both exact and approximate computations. What if the delays T1 sec and T2 sec are in the forward and feedback paths, respectively? Explain the systems and the result.


Figure 3.9 Presence of delay in a typical system.

Exercise 3.20: Repeat Exercise 3.19 when the system k/s is replaced by the following systems: (1) P(s) = k/(s + p), (2) P(s) = k/[s(s + p)], (3) P(s) = k(s + z)/(s + p).

Exercise 3.21: A Phase-Lock Loop (PLL) is a device that finds the frequency and phase of an input sinusoid and outputs a sinusoid with the same frequency whose phase differs from that of the input by a constant. A PLL is a useful device because if it is fed by a sinusoid and some other signals it detects the sinusoid and "locks" on its phase and frequency. It consists of a multiplier as the feedback element and a low-pass filter cascaded by a voltage-controlled oscillator in its forward path. With respect to phase it can be modeled by the following control system, see Fig. 3.10. Find the range of the parameters of the model for its stability. For a modern treatment of PLLs the reader is referred to Karimi-Ghartemani (2014).



Figure 3.10 Exercise 3.21. A PLL system with respect to phase.

Exercise 3.22: Apply Routh's tabular test to the following systems:
1. s^5 + s^4 + 2s^3 + 2s^2 + 6s + 2 = 0
2. 2s^6 + s^5 + 7s^4 + 2s^3 + 8s^2 + s + 3 = 0
3. s^6 + s^5 + 4s^4 + 2s^3 + 5s^2 + s + 2 = 0
4. s^7 + s^6 + 4s^5 + 2s^4 + 7s^3 + 3s^2 + 7s + 1 = 0
5. s^8 + 2s^7 + 2s^6 + 2s^5 + 3s^4 + 4s^3 + s^2 - 2s - 2 = 0
6. 2s^8 + s^7 + 8s^6 + 4s^5 + 4s^4 + 2s^3 + s^2 + 3s + 2 = 0
7. s^8 + 2s^7 + s^6 + 2s^5 + 2s^4 + 2s^3 + 2s^2 + 4s + 1 = 0
8. 2s^9 + s^8 + 5s^7 + 2s^6 + 4s^5 + 3s^4 + s^3 + 2s^2 + 2s + 2 = 0

Exercise 3.23: Apply Routh's tabular test to the following systems:
1. s^4 + s^3 - 8s^2 - 9s - 9 = 0
2. s^4 + s^3 + 2s^2 + s + 1 = 0
3. s^4 + 3s^3 + 9s^2 + 9s + 10 = 0
4. s^4 - 3s^3 + 5s^2 + s + 10 = 0
5. s^5 + s^4 - 13s^3 - 13s^2 + 36s + 36 = 0
6. s^5 + 2s^4 - 2s^3 - 4s^2 + s + 2 = 0
7. s^5 + 2s^4 + 3s^3 + 6s^2 - 4s - 8 = 0
8. s^5 + 2s^4 + 13s^3 + 26s^2 + 36s + 72 = 0
9. s^7 + s^6 + 2s^5 + 2s^4 + s^3 + s^2 - 100s - 100 = 0
10. s^7 + 2s^6 + 10s^5 + 20s^4 + 49s^3 + 98s^2 + 100s + 200 = 0
11. s^7 + s^6 + 7s^5 + 6s^4 + 14s^3 + 10s^2 + 6s + 4 = 0
12. 3s^7 + s^6 + 6s^5 + 2s^4 + 3s^3 + s^2 + 9s + 3 = 0
13. s^8 + 3s^7 + 3s^6 + 6s^5 + 3s^4 + 3s^3 + 4s^2 + 9s + 3 = 0
14. s^9 + s^8 + 6s^7 + 3s^6 + 9s^5 + 3s^4 + 7s^3 + 4s^2 + 12s + 3 = 0

Exercise 3.24: Obtain the range of a, b for which the system is stable in unity feedback control.
1. L(s) = 1/(s(s + a)(s + b))
2. L(s) = (s + a)/(s(s + b))
3. L(s) = (s + a)/(s^2(s + b))
4. L(s) = (s + a)/(s^3(s + b))

Exercise 3.25: Obtain the range of parameters a, b, c for which the systems with these characteristic equations are stable:
1. s^4 + as^3 + 2s^2 + s + 1 = 0
2. as^4 + 2s^3 + 4s^2 + s + b = 0
3. s^5 + as^4 + 2s^3 + 4s^2 + bs + 1 = 0
4. as^5 + 2s^4 + 2s^3 + bs^2 + s + 1 = 0
5. s^5 + as^4 + 2s^3 + 4s^2 + bs + c = 0

Exercise 3.26: Consider the system ẋ = Ax + Bu, y = Cx + Du, where the system matrices are given below. Determine the range of the unknown parameters such that the system is stable.


1. A = [a 1; 0 -2], B = [1 b]', C = [c 1], D = d
2. A = [a 1; b -2], B = [1 0]', C = [c 1], D = 1
3. A = [0 a; -1 -2], B = [b 0]', C = [1 -1], D = d
4. A = [-1 0 0; 1 a -2; 1 0 -1], B = [1 0 b]', C = [1 c 1], D = d
5. A = [a 0 b; 0 -1 2; 1 1 -1], B = [1 0 1]', C = [1 c 1], D = d
6. A = [a 0 0 1; 0 1 1 -2; b 0 0 2; 0 0 1 -3], B = [1 1 1 1; 1 0 -1 2]', C = [1 1 c 1], D = [d 1]'

Exercise 3.27: Is it possible to stabilize the sequel plants with proportional control preceding the plant in a negative unity feedback structure? Repeat the problem also for PI, PD, and PID controllers.
1. P(s) = 1/(s - 2)
2. P(s) = (s + 1)/(s - 1)
3. P(s) = (s - 1)/(s + 1)
4. P(s) = (s - 1)/(s - 2)
5. P(s) = (s - 1)/(s(s - 2))
6. P(s) = 1/(s^2(s - 1))
7. P(s) = 1/s
8. P(s) = 1/s^2
9. P(s) = 1/s^3

Exercise 3.28: Repeat the above exercise with lag and lead controllers given by C(s) = k(1 + k1/(Ts + 1)) and C(s) = k(αTs + 1)/(Ts + 1), α > 1, respectively. When needed, for simplicity try k = 9 and α = 10.

Exercise 3.29: Repeat the above Exercises 3.27 and 3.28 with the controllers in the feedback path.

Exercise 3.30: Determine the simplest controller C(s) such that the closed-loop system of the sequel systems is stable.
1. P(s) = (s - 2)/(s - 10)
2. P(s) = (s - 3)/(s - 10)
3. P(s) = 1/(s^2(s + 1))
4. P(s) = 1/(s^2(s - 1))
5. P(s) = 1/(s^3(s - 1))
6. P(s) = 1/(s^4(s + 1))
7. P(s) = 1/(s^4(s - 1))

Exercise 3.31: Find the range of parameters of the sequel systems such that only two closed-loop poles satisfy -2 < Re(s) < 0:
1. Δ(s) = s^3 + 2s^2 + s + K
2. Δ(s) = s^3 + 2s^2 + as + K
3. Δ(s) = 2s^3 + s^2 + (K - 1)s + K
4. Δ(s) = s^3 - 2s^2 + s + K
5. Δ(s) = s^3 - 2s^2 + as + K
6. Δ(s) = 2s^3 - (K - 1)s^2 + s + K
7. Δ(s) = s^4 + 2s^3 + 3s^2 + s + K + 1
8. Δ(s) = s^4 + 2s^3 - 3s^2 + s + K + 1


Exercise 3.32: Consider the system ẋ = Ax. Investigate the exponential stability and D-stability of the sequel systems for the regions σ = -2, and σ = -1, δ = 30° (Fig. 3.3), and σ = -1, δ = 30° (Fig. 3.4).
1. A = [-3 -2; 1 -1]
2. A = [-3 1; -1 -1]
3. A = [-4 1; -2 -1]
4. A = [-1.5 0.5; 1 -1]

Exercise 3.33: Consider the square matrices A: m × m and B: n × n with eigenvalues γ_i and λ_j, respectively. Investigate the validity of the sequel statements.
i. A ⊗ B has the mn eigenvalues {γ_i λ_j}, i = 1, …, m, j = 1, …, n
ii. (A ⊗ I_n) + (B ⊗ I_m) has the mn eigenvalues {γ_i + λ_j}, i = 1, …, m, j = 1, …, n
Other properties of the Kronecker product are studied in the second course.

Exercise 3.34: Consider the system Eẋ = Ax, where E, A ∈ R^{n×n}, rank(E) ≤ n. The system is called a descriptor system. Its finite spectrum is given by the finite roots of det(λE - A) = 0. It is called regular if det(λE - A) is not identically zero. Prove that the finite spectrum of the regular system is D-stable (as we defined in Section 3.6) iff the finite spectrum of the augmented system (I ⊗ E)ẋ = (Θ(δ) ⊗ (A + σE))x is in the OLHP. This theorem is adopted from (Bavafa-Toosi et al., 2006a).

Exercise 3.35: Using Kharitonov's theorem, verify the stability of the following systems:
1. [2, 2.3]s^3 + [2.1, 2.5]s^2 + s + [1, 1.5] = 0
2. [2, 2.3]s^3 + [2.8, 2.9]s^2 + [1, 2]s + 3 = 0
3. [1, 2]s^4 + [14, 15]s^3 + [70, 73]s^2 + [139, 140]s + [80, 85] = 0
4. [1, 3]s^4 + [7, 14]s^3 + [2, 4]s^2 + [4, 13]s + [3, 10] = 0
5. [1, 2]s^5 + [10, 12]s^4 + [46, 48]s^3 + [92, 96]s^2 + [80, 82]s + [25, 27] = 0
6. [1, 1.2]s^6 + [15, 16]s^5 + [88, 90]s^4 + [230, 235]s^3 + [295, 304]s^2 + [182, 190]s + [161, 167] = 0
7. 10,000s^7 + 49,000s^6 + 98,700s^5 + 105,000s^4 + [50,000, 55,000]s^3 + 12,000s^2 + 1300s + 50 = 0

Exercise 3.36: Repeat the worked-out Problem 3.23 for the subsequent systems:
1. L(s) = K[1, 5]/(s^3 + [-1, 5]s)
2. L(s) = K([-1, 6]s + [2, 3])/(2s^2 + [1, 2]s + [1, 1.5])
3. L(s) = K[-2, 3]/(s^4 + 4s^3 + s^2 + 2s + [1, 2])
4. L(s) = K[-1, 3]/(s^4 + 4s^3 + 2s^2 + [1, 2]s + [1, 2])

Exercise 3.37: Consider the open-loop system L(s) = (b3 s^3 + b2 s^2 + b1 s + b0)/(a4 s^4 + a3 s^3 + a2 s^2 + a1 s + a0). Suppose that the coefficients vary in the intervals a4 ∈ [1, 1.2], a3 ∈ [2, 2.1], a2 ∈ [2, 4], a1 ∈ [2, 2.5], a0 ∈ [0.5, 1], b3 ∈ [1.9, 2.5], b2 ∈ [4, 4.1], b1 ∈ [2.5, 4], b0 ∈ [0.4, 1]. Is the closed-loop system stable in a negative unity feedback structure?

Exercise 3.38: Consider the stable polynomial Δ(s) = a0 + a1 s + ⋯ + a_{n-1} s^{n-1} + a_n s^n. Decompose it as Δ(s) = Δe(s^2) + sΔo(s^2), where Δe(s^2) = a0 + a2 s^2 + a4 s^4 + ⋯ and Δo(s^2) = a1 + a3 s^2 + a5 s^4 + ⋯. Prove that the two polynomials Δe + dΔe/ds = a0 + 2a2 s + a2 s^2 + 4a4 s^3 + a4 s^4 + ⋯ and sΔo + d(sΔo)/ds = a1 + a1 s + 3a3 s^2 + a3 s^3 + 5a5 s^4 + ⋯ are stable. Is the polynomial Δo + dΔo/ds also stable?


Exercise 3.39: Decompose a given polynomial Δ(s) as Δ(s) = Δe(s^2) + sΔo(s^2) as in Exercise 3.38. Now suppose the polynomials Δ1(s) = Δe(s^2) + sΔo1(s^2) and Δ2(s) = Δe(s^2) + sΔo2(s^2) are two stable polynomials with the same "even part." Prove that the polynomial Γ(s) = Δe(s^2) + s(γ1 Δo1(s^2) + γ2 Δo2(s^2)) is stable for all positive γ1, γ2. Is the similar statement also true if the original stable polynomials share the same odd part and differ in the even part?

Exercise 3.40: Consider the stable polynomial Δ(s) = a0 + a1 s + ⋯ + a_{n-1} s^{n-1} + a_n s^n and decompose it as Δ(s) = Δe(s^2) + sΔo(s^2) as in Exercise 3.38. Prove that the polynomial Γ(s) = Γe(s^2) + sΓo(s^2) is stable, where Γe = Δe - γΔo, 0 ≤ γ < a0/a1, and Γo = Δo. Deduce that a_{2k}/a_{2k+1} ≥ a0/a1 for all k ≥ 1.

Exercise 3.41: The Hermite–Biehler theorem for real polynomials is as follows. Let Δ(s) = a0 + a1 s + ⋯ + a_{n-1} s^{n-1} + a_n s^n and Δ(s) = Δe(s^2) + sΔo(s^2), where Δe(s^2) = a0 + a2 s^2 + a4 s^4 + ⋯ and Δo(s^2) = a1 + a3 s^2 + a5 s^4 + ⋯. Also let ze1, ze2, … and zo1, zo2, … denote the distinct nonnegative real zeros of Δe(-z^2) and Δo(-z^2), respectively, both in ascending order of magnitude. Then Δ(s) is Hurwitz stable iff all the zeros of Δe(-z^2) and Δo(-z^2) are real and distinct, a_n and a_{n-1} are of the same sign, and the nonnegative real zeros satisfy the so-called interlacing property 0 < ze1 < zo1 < ze2 < zo2 < ⋯.

Exercise 3.42: The Hermite–Biehler theorem for complex polynomials is as follows. Consider the complex polynomial Δ(s) = (a0 + jb0) + (a1 + jb1)s + ⋯ + (a_{n-1} + jb_{n-1})s^{n-1} + (a_n + jb_n)s^n. Denote Δ(jω) = Δr(ω) + jΔi(ω), where Δr(ω) = a0 - b1 ω - a2 ω^2 + b3 ω^3 + ⋯ and Δi(ω) = b0 + a1 ω - b2 ω^2 - a3 ω^3 + ⋯. Show that Δ(s) is Hurwitz stable iff a_{n-1} a_n + b_{n-1} b_n > 0 and the zeros of Δr and Δi are all real, simple, and interlace as ω runs from -∞ to +∞.

Exercise 3.43: In control systems the simplest expression for the characteristic polynomial of a time-delay system is of the form e^{-sT}N(s) + D(s). The Hermite–Biehler theorem for time-delay systems in its general form is as follows. Consider the entire function δ(s) = Σ_{k=1}^{N} e^{sλ_k} Δk(s), in which the Δk(s), k = 1, …, N, are arbitrary real or complex polynomials, and the λk's are real numbers with λ1 < λ2 < ⋯ < λN, |λ1| < λN. Denote δ(jω) = δr(ω) + jδi(ω). Prove that δ(s) is Hurwitz stable iff: (1) δr and δi have only simple interlacing roots, and (2) δi'δr - δr'δi > 0 for all ω in R. Moreover, in part (2), the statement "for all ω in R" can be replaced by "for some ω0 in R," i.e., it is enough that the condition is satisfied at a single point.

Exercise 3.44: Show that the phase of a Hurwitz polynomial Δ(s) = a0 + a1 s + ⋯ + a_{n-1} s^{n-1} + a_n s^n is monotonically increasing as the frequency runs from zero to infinity. That is, the plot of Δ(jω) in the complex plane, with ℜe(Δ(s)) and ℑm(Δ(s)) as its axes, looks like the curves in Fig. 3.11. In the left panel a0 < 0 and in the right panel a0 > 0.

Figure 3.11 The phase curve of a stable polynomial.


Exercise 3.45: Discuss the stability and internal stability of the sequel systems:
1. C1(s) = (Ks + 1)/s and P1(s) = 1/(s^2(s + 10));
2. C2(s) = K(s + 1)/(s(s^2 + 1)) and P2(s) = (s^2 + 1)/((s - 1)(s + 10));
3. C3(s) = K(s - 1)/(s + 2) and P3(s) = (s^2 + 1)/((s - 1)(s + 1)).

Exercise 3.46: Write down the conditions for internal stability of the 1-DOF control structure in the presence of sensor dynamics.

Exercise 3.47: Write down the conditions for internal stability of: (1) The 2-DOF control structure in all its variants, (2) The 2-DOF control structures with sensor dynamics, (3) The 2-DOF IMC structure, (4) The 2-DOF IMC structure with sensor dynamics, (5) The Smith predictor structure (with sensor), (6) The 3-DOF control structure with all its variants (and sensor).

Exercise 3.48: Are the sequel plants strongly stabilizable?
1. P(s) = (s^2 - 4)/((s - 3)(s + 1)^2)
2. P(s) = (s^2 - 9)/((s - 2)(s + 1))
3. P(s) = (s^2 - 1)/(s(s - 2))
4. P(s) = (s^2 - 2)/(s(s - 1))
5. P(s) = (s - 1)(s - 3)/(s(s^2 - 4s + 5)(s + 4))
6. P(s) = (s - 1)(s - 3)/(s(s^2 - 4s + 5)(s + 4)^2)
7. P(s) = (s - 1)(s - 3)^2/(s^2 - 4s + 1)^2
8. P(s) = (s - 1)(s - 3)^2/(s(s^2 - 4s + 5)(s + 4)^2)
9. P(s) = (s - 1)^3/(s(s - 2)^2)
10. P(s) = (s - 1)^2(s - 3)/((s + 1)^3(s - 2)^2)
11. P(s) = s(s - 2)/((s - 1)^2(s - 3)^2)
12. P(s) = s(s - 2)^2/((s - 1)^2(s - 3)^2)

Exercise 3.49: Consider the LTV system ẋ = A(t)x. (i) Show that in general the solution is not given by x(t) = exp(∫0^t A(s)ds)x0, which is the counterpart of the solution of LTI systems. (ii) Under what condition(s) is the proposed answer correct? (iii) By direct analysis solve the following systems and investigate their stability. (iv) Investigate the stability of the given systems through the argument of Problem 3.30.
i. A(t) = [-1 0; sin t 0]
ii. A(t) = [-1 0; sin t -2]
iii. A(t) = [-1 0; e^{2t+1} -2]
iv. A(t) = [-t - 1, 0.5(1 + t); 0.5(1 + t), -t - 1]
v. A(t) = [0 0; 2t 0]
vi. A(t) = -1 - sin t
vii. A(t) = -(1 + sin t) - t cos t - 2 sin t.


Exercise 3.50: Consider the following functions, which are stable at the origin. (i) Construct 'a' corresponding stable LTV system ẋ = A(t)x with as many nonnegative eigenvalues as possible. Note that for item (iv) it is n - 2. (ii) Does your system admit any other solution?
i. x(t) = [-2t e^{-t+1}, (t - 1)e^{-t}, 2e^{-t+2}]^T
ii. x(t) = [e^{-t} - e^{-2t}, t e^{-t}, -2e^{-t}]^T
iii. x(t) = [e^{-t} + 2t e^{-4t+1}, (t + 2)e^{-3t}, e^{-2t+1}]^T
iv. x(t) = [e^{-t}, 2e^{-2t}, …, (n - 1)e^{-(n-1)t}, e^{-t^n}]^T
v. x(t) = [e^{-t}, e^{-t} sin t, 5e^{-2t} cos(3t)]^T
vi. x(t) = [e^{-t}, t e^{-t} cos t, e^{-2t} sin(3t)]^T.

Exercise 3.51: Investigate the application and usefulness of the similarity transformation x̄ = Tx, where T is full rank, with regard to Remark 3.11. Discuss both time-independent and time-dependent T's.

Exercise 3.52: Analyze the stability of the system ẋ = -x with the transformations x̄ = e^{-2t}x and x̂ = e^{2t}x.

Exercise 3.53: Consider the system

ẋ(t) = [cos t, -sin t; sin t, cos t] [-1, -5; 0, -1] [cos t, sin t; -sin t, cos t] x(t).

The eigenvalues of the system are both at -1. By direct computation of the fundamental matrix show that the system is unstable. The problem is adopted from Coppel (1978).

Exercise 3.54: Consider the system ẋ(t) = A(t)x(t), where for each t the eigenvalues of A(t) have negative real parts bounded away from zero. Prove that ||A(t2) - A(t1)|| ≤ ε|t2 - t1|, ∀t2, t1 > 0 and ε sufficiently small, is then a sufficient condition for exponential stability of the system. The problem is adopted from Coppel (1978).

Exercise 3.55: Consider the system ẋ(t) = A(t)x(t) under the frozen-time eigenvalue assumption of Exercise 3.54. Prove that ||Ȧ(t)|| ≤ ε, ∀t ≥ 0 and ε sufficiently small, is a sufficient condition for exponential stability of the system.

Exercise 3.56: Consider the system ẋ(t) = A(t)x(t), where A(t) = [0, 1; -1 - a(t), b] and b < 0 is a known constant. (1) Let a(t) = a be a (time-invariant) constant. Find the range of a as |a| < ā for stability of the system. (2) Show that there exists an appropriate value for b (in fact infinitely many values) along with a time-varying a(t) (in fact infinitely many such parameters) with |a(t)| < ā for which the system becomes unstable (for nonzero initial values).

Exercise 3.57: This question has several parts and concerns the situation discussed in Exercise 3.56. (1) Can it happen if there is uncertainty in only one element of the matrix A? (2) Discuss versus the order of the system.

Exercise 3.58: This exercise is for your further acquaintance with time-varying systems. Stability analysis of these systems in its full generality is a difficult issue, partly because of the following facts:
1. Given a function f(t), the condition df/dt = ḟ(t) → 0 does not imply that f(t) has a limit as t → ∞. An example is f(t) = sin(ln t), t > 0.
2. The condition that f(t) has a limit as t → ∞ does not imply ḟ(t) → 0. An example is f(t) = sin(t^2)/t, t > 0.
You will learn the exact relevance of these statements in future courses; however, even now you certainly have some understanding of the situation: roughly speaking, with the condition either on ḟ(t) or f(t) we want to conclude the limiting behavior of the other. Provide other examples for the above statements in the context of LTV systems.

Exercise 3.59: This exercise is for your further information. Stability of hybrid systems is a challenging issue. The basic difference with the theory of LTI systems is as follows. Consider the system ẋ = A_i x, i = 1, …, N, x ∈ R^{n×1}, where all the matrices A_i ∈ R^{n×n} are


constant, but their choice (i.e., presence) in the system depends on the time or the state of the system. This is the basic formulation of a switching system. By constructing an example (for n = 2, N = 2) show that:
1. It is possible that the A_i's are all stable but the system is unstable.
2. It is possible that the A_i's are all unstable but the system is stable.

Exercise 3.60: This exercise is for the readers of mathematics. Consider the function x = sin t / t, t ≥ 0. The function decays to zero with some oscillations; it converges to and is stable at the origin x = 0. Can you study it in the context of differential equations?

Exercise 3.61: This exercise has several parts. (1) Can a purely resistive circuit be unstable? Answer this question also for purely inductive (L), capacitive (C), resistive-inductive (RL), resistive-capacitive (RC), and inductive-capacitive (LC) circuits. (2) In the left panel of Fig. 3.12 propose a typical design for the impedances Z1, Z2 such that the system becomes unstable (if possible). Discuss and distinguish (if possible) between the roles of Z1 and Z2. (3) Discuss the stability of the circuits in the middle and right panels.
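For part 1 of Exercise 3.59, a concrete numeric sketch: two Hurwitz matrices whose periodic alternation is unstable. The instability is detected through the spectral radius of the state map over one full switching period; since each A_i here is -aI plus a nilpotent part, the matrix exponentials are written in closed form. All the numbers are illustrative choices:

```python
import numpy as np

a, b, h = 0.1, 10.0, 1.0
A1 = np.array([[-a, b], [0.0, -a]])    # Hurwitz: double eigenvalue -a
A2 = np.array([[-a, 0.0], [b, -a]])    # Hurwitz: double eigenvalue -a

# A_i = -a*I + N_i with N_i nilpotent, so expm(A_i*h) = exp(-a*h)*(I + N_i*h).
E1 = np.exp(-a * h) * np.array([[1.0, b * h], [0.0, 1.0]])
E2 = np.exp(-a * h) * np.array([[1.0, 0.0], [b * h, 1.0]])

M = E2 @ E1                                  # state map over one switching period
rho = np.max(np.abs(np.linalg.eigvals(M)))   # > 1 => the switched system is unstable
```

Although each subsystem decays, the large skewed transients of the two modes reinforce each other over a period, so rho comes out far above 1.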

Figure 3.12 Exercise 3.61, three typical electrical circuits.

For the sake of completeness let us add that 'a' general theory for handling such problems is that of passivity and dissipativity, which we shall discuss in the sequel of the book on state-space methods.

Exercise 3.62: This exercise is for the readers of electrical engineering. Reconsider the circuit in the top left panel of Fig. 2.90 of Chapter 2 and the typical values you proposed in item (5ii) of Exercise 2.26. What happens if via the series RC (resistor-capacitor) path the output is fed back to the source of the first-stage MOSFET instead of its emitter?

Exercise 3.63: Discuss the stability of all the systems in general in the Exercises of Chapter 2, in particular in Exercise 2.77.

Exercise 3.64: Consider the polynomial a_n s^n + ⋯ + a_1 s + a_0 = 0 whose coefficients are all positive. Show that it does not have a root in the sector |∠s| < π/n.

Exercise 3.65: The Hurwitz theorem is originally proven for finite-dimensional matrices/systems. Consider an infinite-dimensional polynomial like Δ(s) = a0 + a1 s + ⋯ + a_n s^n + ⋯. Is the theorem still valid? The general question is treated in (Bacchelli, 1999; Barkovsky and Tyaglov, 2011; Brown and Eastham, 2004; Dyachenko, 2017; Holtz et al., 2016).

References

Ahsen, M.E., Ozbay, H., Niculescu, S.-I., 2015. A secant condition for cyclic systems with time delays and its application to gene regulatory networks. IFAC-PapersOnLine 48 (12), 171–176.
Argoun, M.B., 1986. Allowable coefficient perturbations with preserved stability of a Hurwitz polynomial. Int. J. Control 44 (4), 927–934.


Argoun, M.B., 1987. Stability of a Hurwitz polynomial under coefficient perturbations: necessary and sufficient conditions. Int. J. Control 45 (2), 739–744.
Auba, T., Funahashi, Y., 1993. A note on Kharitonov's theorem. IEEE Trans. Autom. Control 38 (4), 663–664.
Bacchelli, S., 1999. Block Toeplitz and Hurwitz matrices: a recursive approach. Adv. Appl. Math. 23 (3), 199–210.
Barkovsky, Y., Tyaglov, M., 2011. Hurwitz rational functions. Linear Algebra Its Appl. 435 (8), 1845–1856.
Barmish, B.R., 1994. New Tools for Robustness of Linear Systems. MacMillan, NY.
Bavafa-Toosi, Y., 2000. Eigenstructure Assignment by Multivariable Output Feedback. MEng Thesis (Systems & Control), Department of Electrical Engineering, K.N. Toosi University of Technology, Tehran, Iran.
Bavafa-Toosi, Y., 2006. Decentralized Adaptive Control of Large-Scale Systems. PhD Dissertation (Systems & Control), School of Integrated Design Engineering, Keio University, Yokohama, Japan.
Bavafa-Toosi, Y., 2016. Simple criteria for stability and instability of discrete-time systems. J. Control Syst. Eng. 4 (1), 10–19.
Bavafa-Toosi, Y., Ohmori, H., Labibi, B., 2006a. Note on finite eigenvalues of regular descriptor systems. IEE Proc. Control Theory Appl. 153 (4), 502–503.
Bavafa-Toosi, Y., Ohmori, H., Labibi, B., 2006b. A generic approach to the design of decentralized linear output-feedback controllers. Syst. Control Lett. 55, 282–292.
Borobia, A., Dormido, S., 2001. Three coefficients of a polynomial can determine its instability. Linear Algebra Its Appl. 338 (1–3), 67–76.
Borobia, A., Dormido, S., 2006. Three coefficients of a polynomial can determine its instability. Linear Algebra Its Appl. 416 (2–3), 857–867.
Borwein, P., Erdelyi, T., 1995. Polynomials and Polynomial Inequalities. Springer, Berlin.
Brown, B.M., Eastham, M.S.P., 2004. Extended Hurwitz results for hypergeometric functions arising in spectral theory. J. Comput. Appl. Math. 171 (1–2), 113–121.
Castaneda, A., Guinez, V., 2017. Injectivity and almost global asymptotic stability of Hurwitz vector fields. J. Math. Anal. Appl. 449 (2), 1670–1683.
Choghadi, M.A., Talebi, H.A., 2013. The Routh–Hurwitz stability criterion, revisited: the case of multiple poles on the imaginary axis. IEEE Trans. Autom. Control 58, 1866–1869.
Chowdary, N.V., Chidambaram, M., 2014. Robust controller design for first order plus time delay systems using Kharitonov theorem. IFAC Proc. 47 (1), 184–191.
Cima, A., Gasull, A., Manosas, F., 1999. The discrete Markus–Yamabe problem. Nonlin. Anal.: Theory, Methods & Appl. 35 (3), 343–354.
Cima, A., van den Essen, A., Gasull, A., Hubbers, E., Manosas, F., 1997. A polynomial counterexample to the Markus–Yamabe conjecture. Adv. Math. 131 (2), 453–457.
Coppel, W.A., 1978. Dichotomies in Stability Theory. Lecture Notes in Mathematics, no. 629. Springer.
De Paor, A., 2003. Pascal–Routh polynomials: a first exploration in feedback system design. Int. J. Control 76 (4), 386–389.
Desoer, C.A., Chan, W.S., 1975. The feedback interconnection of lumped linear time-invariant systems. J. Franklin Inst. 300 (5,6), 335–351.
Dorato, P., Park, H.B., Li, Y., 1989. An algorithm for interpolation with units in H∞ with applications to feedback stabilization. Automatica 25 (3), 427–430.
Du, N.H., Linh, V.H., Mehrmann, V., Thuan, D.D., 2013. Stability and robust stability of linear time-invariant delay differential-algebraic equations. SIAM J. Matrix Anal. Appl. 34, 1631–1654.


Dyachenko, A., 2017. Hurwitz matrices of doubly infinite series. Linear Algebra Its Appl. 530, 266–287.
Elizondo-Gonzalez, C., 2001. A new stability criterion on space of coefficients. In: Proceedings of the 40th IEEE Conf. Decision and Control, pp. 2663–2664.
Firouzbahrami, M., Babazadeh, M., Nobakhti, A., Karimi, H., 2013. Improved bounds for the spectrum of interval matrices. IET Control Theory Appl. 7 (7), 1022–1028.
Gantmakher, F.R., 1959. Theory of Matrices. Chelsea, NY.
Ghane-Sasansaraee, H., Menhaj, M.B., 2014. Pseudo linear systems: stability analysis and limit cycle emergence. Control Eng. Appl. Inf. 16 (2), 78–89.
Glaria, J.J., Goodwin, G.C., 1994. A parametrization for the class of all stabilizing controllers for linear minimum phase plants. IEEE Trans. Autom. Control 39, 433–434.
Goodarzi, A., 2016. Dimension filtration, sequential Cohen–Macaulayness and a new polynomial invariant of graded algebras. J. Algebra 456, 250–265.
Heath, P.W., Carrasco, J., de la Sen, M., 2015. Second-order counterexamples to the discrete-time Kalman conjecture. Automatica 60, 140–144.
Hernandez, R., Dormido, S., 1996. Kharitonov theorem extension to interval polynomials which can drop in degree. IEEE Trans. Autom. Control 41, 1009–1012.
Ho, M.-T., Datta, A., Bhattacharyya, S.P., 1999. A generalization of the Hermite–Biehler theorem. Linear Algebra Appl. 302–303, 135–153.
Ho, M.-T., Datta, A., Bhattacharyya, S.P., 2000. Generalizations of the Hermite–Biehler theorem: the complex case. Linear Algebra Appl. 320, 23–36.
Holtz, O., Khrushchev, S., Kushel, O., 2016. Generalized Hurwitz matrices, generalized Euclidean algorithm, and forbidden sectors of the complex plane. Comput. Methods Function Theory 16 (3), 395–431.
Hoshikawa, T., Yamada, K., Tatsumi, Y., 2013. The parameterization of all two-degree-of-freedom strongly stabilizing controllers. ECTI Trans. Comput. Inf. Technol. 7 (1), 88–96.
Hoshikawa, T., Yamada, K., Tatsumi, Y., 2015. The parameterization of all semistrongly stabilizing controllers. Int. J. Innovative Comput. Inf. Control 11 (4), 1127–1137.
Hurwitz, A., 1895. Ueber die Bedingungen, unter welchen eine Gleichung nur Wurzeln mit negativen reellen Theilen besitzt. Mathematische Annalen 46, 273–284.
Ilyashenko, Y., 2002. Centennial history of Hilbert's 16th problem. Bull. AMS 39 (3), 301–354.
James, H.M., Nichols, N.B., Phillips, R.S., 1947. Theory of Servomechanisms. McGraw-Hill, NY.
Jury, E.I., 1996. Remembering the four stability pioneers of the nineteenth century. IEEE Trans. Autom. Control 41 (9), 1242–1244.
Jury, E.I., Katbab, A., 1993. A note on Kharitonov-type results in the space of Markov parameters. IEEE Trans. Autom. Control 31 (1), 155–158.
Karimi-Ghartemani, M., 2014. Enhanced Phase-Locked Loop Structures for Power and Energy Applications. Wiley-IEEE Press, New Jersey.
Kharitonov, V.L., 1978. Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. Differentsial'nye Uravneniya 14 (11), 2086–2088. In Russian. (English translation by A.A. Zhdanov in Differential Equations 14 (11), 1483–1485, 1979.)
Kharitonov, V.L., Tempo, R., 1994. On the stability of a weighted diamond of real polynomials. Syst. Control Lett. 22, 5–7.
Li, W., 2007. Practical criteria for positive-definite matrix, M-matrix and Hurwitz matrix. Appl. Math. Comput. 185 (1), 397–401.

Stability analysis


Lienard, A., Chipart, M.H., 1914. Sur le signe de la partie réelle des racines d'une équation algébrique. J. Math. Pures Appl. 10, 291–346.
Lyapunov, A.M., 1992. The General Problem of the Stability of Motion (translated by A.T. Fuller). Taylor & Francis, London.
Mastorakis, N.E., 2000. On the robust stability of 2D Schur polynomials. J. Optimization Theory Appl. 106 (2), 431–439.
Mevenkamp, M., Linnemann, A., 1988. Generic stabilizability of single-input single-output plants. Syst. Control Lett. 10 (2), 93–94.
Milovanovic, G.V., Mitrinovic, D.S., Rassias, T.M., 1999. Topics in Polynomials: Extremal Problems, Inequalities, Zeros. World Scientific Publishing, NY.
Minnichelli, R.J., Anagnost, J.J., Desoer, C.A., 1989. An elementary proof of Kharitonov's theorem. IEEE Trans. Autom. Control 34, 995–998.
Mondie, S., Cuvas, C., 2016. Necessary stability conditions for delay systems with multiple pointwise and distributed delays. IEEE Trans. Autom. Control 60 (7), 1987–1995.
Mori, T., Kokame, H., 1992. Stability of interval polynomials with vanishing extreme coefficients. In: Recent Advances in Mathematical Theory of System, Control, Networks, and Signal Processing. Mita, Tokyo.
Nasiri, H.R., Haeri, M., 2014. How BIBO stability of LTI fractional order time delayed systems relates to their approximated integer order counterparts. IET Proc. Control Theory Appl. 8 (8), 598–605.
Nestler, P., Scholl, E., Troltzsch, F., 2016. Optimization of nonlocal time-delayed feedback controllers. Comput. Optimization Appl. 64 (1), 265–294.
Nikseresht, A., 2017. Dual of codes over finite quotients of polynomial rings. Finite Fields Their Appl. 45, 323–340.
Ngoc, P.H.A., 2016. Novel criteria for exponential stability of linear neutral time-varying differential systems. IEEE Trans. Autom. Control 61 (6), 1590–1595.
Oboudi, M.R., 2016. On the roots of domination polynomial of graphs. Discrete Appl. Math. 205, 126–131.
Oliviera, V.A., Tiexeira, M.C.M., Cossi, L., 2003. Stabilization of a class of time delay systems using the Hermite–Biehler theorem. Linear Algebra Appl. 369, 203–216.
Olshevsky, A., Olshevsky, V., 2003. Kharitonov's theorem and Bezoutians. Linear Algebra Appl. 399, 285–297.
Otsuka, N., Okada, T., 2003. On simplified Hurwitz stability conditions for low-degree weighted diamond polynomials. J. Franklin Institute 340 (6–7), 399–406.
Ozbay, H., Gundes, A.N., 2008. Strongly stabilizing controller synthesis for a class of MIMO plants. In: Proc. 17th IFAC World Congress, pp. 359–363.
Porter, B., 1968. Stability Criteria for Linear Dynamic Systems. Academic Press.
Rahman, Q.I., Schmeisser, G., 2002. Analytic Theory of Polynomials. Oxford University Press, Oxford, UK.
Routh, E.J., 1877. A Treatise on the Stability of a Given State of Motion, Particularly Steady Motion. MacMillan, NY.
Roy, A., Iqbal, K., 2005. PID controller tuning for the first-order-plus-dead-time process model via Hermite–Biehler theorem. ISA Trans. 44 (3), 363–378.
Saeks, R., Murray, J., 1982. Fractional representation, algebraic geometry, and the simultaneous stabilization problem. IEEE Trans. Autom. Control 27, 895–903.
Sahu, B.K., Gupta, M.M., Subudhi, B., 2013. Stability analysis of nonlinear systems using dynamic-Routh's stability criterion: a new approach. In: Proceedings of the Int. Conf. Advances in Computing, Communications, and Informatics, pp. 1765–1769.


Saif, A., Gu, D.W., Postlethwaite, I., 1998. Strong stabilization of MIMO systems via unimodular interpolation in H∞. Int. J. Control 69 (6), 797–818.
Seraji, H., 1975. An approach to dynamic compensator design for pole assignment. Int. J. Control 21 (6), 955–966.
Seraji, H., 1981. Output control in linear systems: a transfer-function approach. Int. J. Control 33 (4), 649–676.
Shafai, B., Chen, J., Kothandaraman, M., 1997. Explicit formulas for stability radii of nonnegative and Metzlerian matrices. IEEE Trans. Autom. Control 42 (2), 265–270.
Shafai, B., Wilson, B.H., Chen, J., 2003. Robust stability of a special class of polynomial matrices with control applications. Comput. Electr. Eng. 29 (7), 781–800.
Shorten, R., Narendra, K.S., 2014. Classical results on the stability of linear time-invariant systems, and the Schwarz form. IEEE Trans. Autom. Control 59, 3020–3025.
Siljak, D.D., 1976. Alexander Michailovich Lyapunov. J. Dyn. Syst. Meas. Control 6 (6), 121–122.
Sondhi, S., Hote, Y.V., 2016. Fractional order PID controller for perturbed load frequency control using Kharitonov's theorem. Int. J. Electrical Power Energy Syst. 78, 884–896.
Szego, G., 1975. Orthogonal Polynomials, 4th ed. American Mathematical Society, Providence.
Tavares, S.A., 2006. Generation of Multivariate Hermite Interpolating Polynomials. Chapman & Hall/CRC, NY.
Varga, R.S., 2004. Gerschgorin and His Circles. Springer.
Wang, L., 2003. Kharitonov-like theorems for robust performance of interval systems. J. Math. Anal. Appl. 279 (2), 430–441.
Willems, J.C., Tempo, R., 1999. The Kharitonov theorem with degree drop. IEEE Trans. Autom. Control 44 (11), 2218–2220.
Wu, M.Y., 1974. A note on stability of linear time-varying systems. IEEE Trans. Autom. Control 19, 192.
Xiang, W., Zhai, G., Briat, C., 2016. Stability analysis for LTI control systems with controller failures and its application in failure tolerant control. IEEE Trans. Autom. Control 60 (3), 811–817.
Xu, S.J., Rachid, A., Darouach, M., 1998. Robustness analysis of interval matrices based on Kharitonov's theorem. IEEE Trans. Autom. Control 43 (2), 273–278.
Yamada, K., Satoh, K., Okuyama, T., 2002. The parametrization of all stabilizing repetitive controllers for a certain class of non-minimum phase systems. IFAC Proc. 35 (1), 205–210.
Yang, X., 2003. Necessary conditions of Hurwitz polynomials. Linear Algebra Its Appl. 359 (1–3), 21–27.
Yang, X., 2004. Some necessary conditions for Hurwitz stability. Automatica 40 (3), 527–529.
Youla, D.C., Bongiorno, J.J., Lu, C.N., 1974. Single-loop feedback stabilization of linear multivariable dynamical plants. Automatica 10, 159–173.
Young, P.M., 1994. The rank one mixed μ problem and Kharitonov-type analysis. Automatica 30 (12), 1899–1911.
Zekavat, M., 2017. On a relation between boundedness and degree boundedness of a sequence of polynomials. J. Math. Anal. Appl. 450 (1), 77–80.

4 Time response

4.1 Introduction

So far we have seen how to model a given system and how to analyze its stability. As such, we have been able to obtain a preliminary design which satisfies the minimum requirement of stability. The objective in a control system, however, goes beyond stability: we require tracking/regulation, which is reflected in the steady-state error and also in the transient response. The requirement of a good transient response is important. Consider a car, train, airplane, or elevator. A good transient response guarantees a quality ride and transport of people, or safe transportation of goods that should be handled carefully, such as glassware. In certain systems, like the rolling industry, the transient response should have no or negligible overshoot; otherwise it means that the product is (almost) cut. Other examples include industries making products of a certain thinness, like glass panes, plastic, paper, etc. The same is true for the first systems that we named: car, train, airplane, and elevator. If there are overshoots and undershoots, i.e., oscillations around the desired target, this is felt by the user and is certainly undesirable. On the other hand, the importance of the steady-state behavior of the system goes without saying: the steady-state tracking error must be zero or within a prescribed allowable band. In this chapter we study the time response of the system. The inputs that we consider for tracking systems (traditionally called servo-systems or servomechanisms) are the step, ramp, parabolic, higher powers of time, and sinusoidal inputs. The case of sinusoidal inputs is considered under the section on bandwidth. In the literature it is customary to study sinusoidal inputs at the end of the course, where Bode diagrams are introduced. Because the step input can be considered a sinusoidal input (cosine function) with frequency zero, the response of the system to sinusoidal inputs of nonzero frequency is customarily called the "frequency response" of the system.
However, apart from the name, the reality is that the response evolves in time; that is to say, what actually matters is the time characteristics of the response for any kind of input. Thus we study this topic in this chapter under "time response." There is another reason for this: the important notion of bandwidth is related to sinusoidal/alternating inputs. Thus, if we present these topics in this chapter, students will have more time to master these important notions. Nonetheless, when we introduce Bode diagrams in Chapter 7 we will reconsider this topic. On our journey through this chapter we mention some errors and mistakes in the existing texts and correct them. We start with the system type and system inputs in Section 4.2. Steady-state error is considered in Section 4.3. In the literature it is customary to study the error signal instead of the output signal; if the error signal is small or zero it means that tracking is achieved. The idea is elegant; however, the procedure found in the literature contains a slight flaw. We shall mention the problem and correct it. Then in Sections 4.4 and 4.5 we restrict our attention to first- and second-order systems and study both the transient and steady-state modes. Here the existing books contain some mistakes as well, which are corrected. Next, the topic of bandwidth is introduced and studied in Section 4.6. This is followed by an analysis of higher-order systems in Section 4.7. We proceed to the performance region, model reduction, the effect of the addition of a pole/zero, and the inverse response in Sections 4.8–4.11, respectively. It is unfortunate that, regarding some of the aforementioned topics, there are wrong stereotypes in the existing literature. These mistakes are discussed and corrected as well. Finally, we present the analysis and synthesis of actual systems, i.e., systems with the inclusion of sensor dynamics and a delay term in different control structures. This important issue, which is missing in the existing literature, is carried out in Section 4.12. Section 4.13 provides elementary robustness analysis with regard to both stabilization and time-domain performance. This includes ultimate disturbance and noise rejection of the sinusoidal kind. The chapter closes with a summary, further readings, worked-out problems, and exercises in Sections 4.14–4.17. For the sake of completeness we add that although our presentation is modern and contains partially original contributions, studies relevant to the issues we discuss in this chapter can be somehow traced back to the stability studies of the late 19th century. In a more direct manner, however, it is fair to say that they were seriously, though partially, commenced by the work of Hazen (1934). Further results were subsequently supplied by numerous researchers, especially in the 1940s and 1950s. The publications were, however, circumscribed by wartime restrictions. References (Brown and Campbell, 1948; Chestnut and Mayer, 1959; Gardner and Barnes, 1942; Hall, 1943) and (James et al., 1947) summarize part of the results of that time.

Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00004-5 © 2017 Elsevier Inc. All rights reserved.

4.2 System type and system inputs

The general framework for studying the error signal is as follows. Consider the 1-DOF control structure of a SISO system given in Fig. 4.1. Tracking takes place if the output of the feedback element is zero; in particular, this is essential in the steady state. Defining e_ss = lim_{t→∞} e(t), the control design must guarantee a zero or small e_ss. To find this quantity the final value theorem is invoked in the literature, that is,

e_ss = lim_{s→0} sE(s) = lim_{s→0} sR(s)/(1 + L(s)),   (4.1)

where L(s) = C(s)P(s). What part of the literature has failed to pay attention to is that this theorem is valid only when sE(s) is analytic in the CRHP. If this condition is not satisfied the method is not conclusive. However, the literature has applied the final value theorem even in this case. This is discussed in the ensuing development.

Figure 4.1 The 1-DOF control structure.


The details of the procedure are as follows. Let L be of the general form

L(s) = K(T_1 s + 1)(T_2 s + 1)⋯ / [s^N (T_a s + 1)(T_b s + 1)⋯].   (4.2)

The parameter N is the number of pure integrators and is called the type of the system. More precisely, we have

N = 0: Type 0,   N = 1: Type 1,   N = 2: Type 2,   …,   N = N: Type N.   (4.3)

It is good to learn the following historical terminologies, although they are becoming obsolete. The step, ramp, and parabolic inputs are called position, velocity, and acceleration inputs, respectively. Accordingly, the errors in tracking the step, ramp, and parabolic inputs are termed position, velocity, and acceleration errors, respectively.

4.3 Steady-state error

We consider three different inputs: step, ramp, and parabolic inputs, which are the usual inputs in practice, with the explanation that the last two apply to the system only for a limited duration of time.¹ Computing (4.1) for the step reference input r(t) = A step(t), one gets the position error as e_ss = lim_{s→0} A/(1 + L(s)) = A/(1 + lim_{s→0} L(s)); for the ramp input r(t) = At step(t) we find the velocity error as e_ss = lim_{s→0} A/(s + sL(s)) = A/lim_{s→0} sL(s); and for the parabolic input r(t) = 0.5At² step(t) we have the acceleration error as e_ss = lim_{s→0} A/(s² + s²L(s)) = A/lim_{s→0} s²L(s).

Now define:

Kp = lim_{s→0} L(s): Position constant (the term shows that the input is a step/position).
Kv = lim_{s→0} sL(s): Velocity constant (the term shows that the input is a ramp/velocity).
Ka = lim_{s→0} s²L(s): Acceleration constant (the term shows that the input is a parabola/acceleration).

The literature then presents Table 4.1, which relates the steady-state errors to the type of the system and the order of the input. All the entries of this table are correct. However, a point should be explained. The final value theorem does not apply to the cases where sE(s) is not analytic in the CRHP. These cases are those corresponding to the infinity entries in this table, although these entries are nevertheless correct. The reason for their correctness is shown in the next example.¹

¹ Otherwise it means that the input signals become unbounded. No actual system works with unbounded inputs. On the one hand, such unbounded inputs cannot be built. On the other hand, in actual systems the only inputs which can be applied for unlimited time are bounded inputs, perhaps in the form of periodic ramp and parabolic functions; see Examples 4.25 and 4.27.


Table 4.1 Steady-state error versus type of the system and type of the input

Input                                     Type 0     Type 1   Type 2   Type ≥ 3
r(t) = A step(t) = A, A ≥ 0               A/(1+Kp)   0        0        0          (position error)
r(t) = A ramp(t) = At, A ≥ 0              ∞          A/Kv     0        0          (velocity error)
r(t) = 0.5A parabola(t) = 0.5At², A ≥ 0   ∞          ∞        A/Ka     0          (acceleration error)

Example 4.1: The steady-state error of a type zero system to the velocity input, called the velocity error, is claimed to be infinity. This is correct, as shown for the following system. Consider the system L(s) = 1/(s + 2). The error is given by E(s) = R(s)/(1 + L(s)). Thus, for the ramp input R(s) = 1/s²,

E(s) = [(s + 2)/(s + 3)]·(1/s²) = (1/9)/s + (2/3)/s² − (1/9)/(s + 3),

and therefore e(t) = 1/9 + (2/3)t − (1/9)e^(−3t), whose steady-state value is infinity. It is simple to verify this for the other two entries which are infinity.
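The computation above can be cross-checked symbolically; the following is a sketch assuming the Python library sympy is available (the check itself is not part of the text):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
L = 1 / (s + 2)                       # type-0 loop gain of Example 4.1
E = sp.cancel((1 / s**2) / (1 + L))   # ramp input R(s) = 1/s^2, with A = 1

# The partial fractions match (1/9)/s + (2/3)/s^2 - (1/9)/(s + 3).
expected = sp.Rational(1, 9)/s + sp.Rational(2, 3)/s**2 - sp.Rational(1, 9)/(s + 3)
assert sp.simplify(sp.apart(E, s) - expected) == 0

# The "final value" lim_{s->0} sE(s) is indeed unbounded.
assert sp.limit(s * E, s, 0) == sp.oo
```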

Remark 4.1: The above observation is valid for the general system given by (4.2) as well. It is left as an easy exercise for the reader to verify this.

Remark 4.2: The philosophy of the above discussions is valid for 2-DOF control systems as well. However, developing the details is not as neat as in this case. See Section 4.12 and the worked-out Problems 4.30–4.34.

Remark 4.3: The step input is of the kind t⁰·A step(t) and is thus said to have order zero. Similarly, the ramp and parabolic inputs are of order one and two, respectively. It is thus observed that in order for the system to have a bounded or zero steady-state error, the type of the system must be at least as large as the order of the input. Moreover, the errors can be evaluated in terms of the defined constants Kp, Kv, Ka: the higher these constants, the smaller the errors. Note that for the zero entries in Table 4.1 these constants are infinity, for the nonzero bounded entries these constants have nonzero bounded values, and for the infinity entries in the table the constants Kv and/or Ka are zero.
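As a mechanical aid, the constants Kp, Kv, Ka of Remark 4.3 can be computed as limits; a minimal sketch, assuming sympy and using an illustrative type-1 loop gain of our own choosing:

```python
import sympy as sp

s = sp.symbols('s')

def error_constants(L):
    """Return (Kp, Kv, Ka) = limits of L, sL, s^2 L as s -> 0."""
    return (sp.limit(L, s, 0), sp.limit(s * L, s, 0), sp.limit(s**2 * L, s, 0))

# Type-1 example: Kp = infinity (zero step error), Kv finite, Ka = 0.
Kp, Kv, Ka = error_constants(10 * (s + 1) / (s * (s + 5)))
assert Kp == sp.oo and Kv == 2 and Ka == 0
```

Consistent with Table 4.1, this loop gain tracks a step with zero error, a ramp with error A/Kv = A/2, and a parabola with unbounded error.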


Remark 4.4: The question that arises at this point is thus: how large can Kp, Kv, Ka be? To answer this question we should study their effect on stability and transient performance (overshoot, etc.). The general trend, which of course has exceptions, is that the larger these parameters, up to a certain degree, the better and faster the transient response. Beyond that, however, the transient response becomes oscillatory and the stability deteriorates. For certain systems, beyond a certain value of these parameters the system becomes unstable. This is shown in the examples to follow.

Remark 4.5: Recall from Chapter 1, Introduction, that by feedback we make |L| = |CP| ≫ 1 in order to achieve the design objectives. There it was mentioned that CP has both magnitude and phase. Here we are working with its magnitude at zero frequency in the form of Kp, Kv, Ka. (Kp is the so-called DC gain.) What will follow in Chapters 9 and 10 is the design with regard to both magnitude (at all frequencies) and phase.

Example 4.2: Obtain the steady-state errors and steady-state values of the following systems in a unity-feedback structure. Find the unknown parameters such that the nonzero steady-state errors become 2%.

L1 = K1/(s + 1),   r1(t) = A step(t);
L2 = K2(s + 2)/[s(s + 5)(s² + 2s + 5)],   r2(t) = At;
L3 = (s + K3)/[s²(s + 10)²],   r3(t) = 0.5At² step(t).

For simplicity of notation we omit the term lim_{s→0} in the expressions of the position, velocity, and acceleration constants. Therefore, for L1, e_ss = A/(1 + L(0)) = A/(1 + K1). For L2, e_ss = A/[sL(s)]|_{s=0} = A/(2K2/25). For L3, e_ss = A/[s²L(s)]|_{s=0} = A/(K3/100). In each case it is desired that e_ss ≤ 0.02A. Thus K1 > 49, K2 > 625, K3 > 5000. On the other hand, the systems must be stable. The ranges of the Ki for which the systems remain stable are K1 > −1, 0 < K2 < 28.1, 0 < K3 < 4.99. Therefore only K1 > 49 is acceptable, by which the design objective for L1 is satisfied. As for L2 and L3, the maximum values of their parameters are K2 = 28.1 and K3 = 4.99, by which the steady-state errors are A/(2K2/25) ≈ 0.44A and A/(K3/100) ≈ 20A, that is, 44% and about 2000%. These two systems certainly need compensation.
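The quoted stability range 0 < K2 < 28.1 for L2 can be spot-checked numerically: the closed-loop characteristic polynomial is s(s + 5)(s² + 2s + 5) + K2(s + 2) = s⁴ + 7s³ + 15s² + (25 + K2)s + 2K2. A sketch assuming numpy is available:

```python
import numpy as np

def closed_loop_stable(K2):
    # Roots of s^4 + 7 s^3 + 15 s^2 + (25 + K2) s + 2 K2
    roots = np.roots([1, 7, 15, 25 + K2, 2 * K2])
    return bool(np.all(roots.real < 0))

assert closed_loop_stable(28)       # just inside the range
assert not closed_loop_stable(29)   # just outside the range
```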

Remark 4.6: Consider a system L′ for which we obtain, say, K′ > 390 for the desired steady-state error and 0 < K′ < 400 for stability. Can we use the limit values of the parameter? The answer is that we never do so, because with these marginal values the system is on the verge of instability: some poles are very close to the jω-axis and the system is oscillatory, suffering from a poor transient response. Such a system also needs compensation.² This is also shown in the next chapter on root locus. Another reason is that a small uncertainty (about 2.5%) in the parameter value will result in an unstable system (390 + 390 × 2.56% ≈ 400). It is said that the system does not have a good stability margin, as shall be studied in Chapter 6.

Remark 4.7: Some systems may show a slightly and acceptably larger steady-state error by tuning the controller parameters (modifying the controller gain, zero locations, pole locations) at the benefit of a better transient response. Thus, when some degrees of freedom exist, it is advisable that simulations always be performed in order to fine-tune the controller parameters. This is shown in the worked-out problems.

Remark 4.8: Unlike the claims of some references, it is not known for sure how to find the gain so as to get the minimum overshoot, although the general trend appears to be that it occurs where the rightmost conjugate poles of the system have the largest damping ratio (whose definition will be provided in Section 4.5). We show in the worked-out problems that in some cases there is not much difference in the overshoot, and because of smaller rise and settling times, higher values of gain (resulting in poles with larger damping ratios) should be preferred; see, e.g., Problem 4.13 and Examples 5.20–5.22 and 5.38. Of course, other factors like the control effort, sensitivity, etc. need to be considered as well. Therefore, especially at this stage of the course where the controller is not the outcome of a comprehensive optimization problem, it is advisable that simulations always be carried out to fine-tune the controller parameters.

Question 4.1: In a practical control system, when we design the system we know a priori both the input and the plant. Thus the controller is designed such that the type of the loop gain is at least as large as the order of the input, and thus the tracking error is either bounded or zero. Now consider two situations. (1) A disturbance of order larger than the controller type enters the system as an unwanted input for a limited duration of time and then fades away. (2) A disturbance of order equal to the controller type enters the system as an unwanted input for a limited/unlimited time. Discuss both the stability and the steady-state error of the system. Consider the disturbance in two forms: additive input disturbance and additive output disturbance. This question will be answered in Chapter 10.

In the following Example 4.3 we prove an important property of systems: the correlation between the step response quality and the type of the system; see also the worked-out Problem 4.6 and Exercise 4.7.

² For further explanation of a special case see the Discussion of the worked-out Problem 9.13. In this problem one marginal side is oscillatory and the other is not.


Example 4.3: Consider the standard configuration of Fig. 4.1. Prove that if a system is of type 2 then its step response necessarily exhibits an overshoot.

There holds E(s) = R(s)/(1 + L(s)) = (1/s)/(1 + L(s)) for the unit step input, and thus lim_{s→0} E(s) = lim_{s→0} 1/(s + sL(s)) = 1/Kv. Because the system is of type two, lim_{s→0} sL(s) = Kv = ∞ and hence lim_{s→0} E(s) = 0. On the other hand, by definition E(s) = ∫₀^∞ e^(−st) e(t) dt and thus lim_{s→0} E(s) = ∫₀^∞ e(t) dt. As a result, ∫₀^∞ e(t) dt = 0. This means that e(t), which is the error signal for the step input, changes sign, i.e., the step response shows at least one overshoot. See also Chapter 5, Remark 5.28.
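The conclusion can also be observed in simulation; a sketch assuming scipy, with an illustrative type-2 loop L(s) = (s + 1)/s² of our own choosing, whose closed loop is (s + 1)/(s² + s + 1):

```python
import numpy as np
from scipy import signal

cl = signal.TransferFunction([1, 1], [1, 1, 1])  # closed loop for L = (s + 1)/s^2
t = np.linspace(0, 30, 3001)
t, y = signal.step(cl, T=t)

assert y.max() > 1.0             # at least one overshoot, as proved above
assert abs(y[-1] - 1.0) < 1e-3   # zero steady-state error to the step
```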

In the ensuing sections we study the time response of first- and second-order systems in detail. Then we comment on the bandwidth of the system, higher-order systems, model reduction, the effect of the addition of a pole and a zero, and the inverse response. Finally, we end the chapter with a discussion of the actual system, where the sensor dynamics and the delay term are included. We consider four different inputs: impulse, step, ramp, and parabolic inputs. Because the chosen input is actually the desired output y_d(t) of the system, it should be noted that apparently the impulse input does not make sense, because it does not represent any actual situation. However, theoretically speaking it is useful for some purposes: (1) It will be verified as a characteristic of LTI systems that the step response is the integral of the impulse response, with the constant of integration chosen such that the signal remains causal. It should be noted that the same relation exists between the parabola and ramp responses. (2) The impulse response has other applications, such as in determining the causality and stability of the system. (3) It can be used for the purpose of identification of the system, since the Laplace transform of the impulse response of a system is its transfer function. Nevertheless, it should be added that in practice the impulse input is not used for the purpose of identification, for the least reason that applying an impulse input damages an actual system.

4.4 First-order systems

First note that the term "first-order" means that the highest power of s in the denominator is one. The structure given in Fig. 4.2, referred to as the standard first-order system, is only one possibility for first-order systems; other structures are also possible, namely plants with a pole not at the origin and plants having both a zero and a pole.

Figure 4.2 The standard first-order system. Left: Closed-loop, Right: Equivalent open-loop.


4.4.1 Impulse input

Let the reference input be r(t) = Aδ(t). Hence, Y(s) = A/(Ts + 1) and y(t) = (A/T)e^(−t/T). It is observed that the slope at the beginning is (d/dt)y(t)|_{t=0} = −A/T². Also note that e_ss = (r − y)|_{t→∞} is not defined.

4.4.2 Step, ramp, and parabolic inputs

For these inputs, the corresponding output formulae are obtained by partial fraction expansion and then the inverse Laplace transform. There holds:

(A/s)·1/(Ts + 1) = A/s − AT/(Ts + 1) = A/s − A/(s + 1/T),
(A/s²)·1/(Ts + 1) = A/s² − AT/s + AT²/(Ts + 1) = A/s² − AT/s + AT/(s + 1/T),
(A/s³)·1/(Ts + 1) = A/s³ − AT/s² + AT²/s − AT³/(Ts + 1) = A/s³ − AT/s² + AT²/s − AT²/(s + 1/T).

Output formulae, initial slopes, and steady-state errors are summarized below.

First-order closed loop 1/(Ts+1)   Step input A step(t)   Ramp input At             Parabolic input 0.5At²
Y(s)                               (A/s)·1/(Ts+1)         (A/s²)·1/(Ts+1)           (A/s³)·1/(Ts+1)
y(t)                               A − Ae^(−t/T)          At − AT(1 − e^(−t/T))     0.5At² − AT(t − T + Te^(−t/T))
(d/dt)y(t)|_{t=0}                  A/T                    0                         0
e_ss                               0                      AT                        ∞

It is good to observe what was previously emphasized:

Remark 4.9: As the input is differentiated or integrated, the output is differentiated or integrated as well, where the constants of integration are such that the system remains causal, i.e., at time zero the output is zero. For instance, the parabolic response is the integral of the ramp response, and the ramp response is the derivative of the parabolic response. Note that this is a characteristic of LTI systems only, not of LTV or nonlinear systems. The case of sinusoidal inputs is considered in Section 4.6.
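The integral relation of Remark 4.9 can be checked numerically for the standard first-order system; a sketch assuming scipy, with T = 2 chosen arbitrarily:

```python
import numpy as np
from scipy import signal

T = 2.0
sys = signal.TransferFunction([1], [T, 1])   # closed loop 1/(Ts + 1)

t = np.linspace(0, 20, 2001)
_, y_imp = signal.impulse(sys, T=t)
_, y_step = signal.step(sys, T=t)

# Trapezoid-rule integral of the impulse response, causal (zero at t = 0).
dt = t[1] - t[0]
y_int = np.concatenate(([0.0], np.cumsum((y_imp[1:] + y_imp[:-1]) / 2) * dt))

assert np.max(np.abs(y_int - y_step)) < 1e-3   # step response = integral of impulse response
```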

4.5 Second-order systems

The materials are presented in four parts. In Section 4.5.1 we study the details of the system representation. In Section 4.5.2 we study the impulse response. Next, in Section 4.5.3 we focus on the step response. Finally, in Section 4.5.4 we deal with the ramp and parabola responses of the system.


4.5.1 System representation

Similar to first-order systems, the term "second-order" means that the denominator is of second order in s. The structure in Fig. 4.3, referred to as the standard second-order system, is only one possibility; other structures are also possible, such as −2/(s² + 1), (2s² − 3)/(s² + s + 1), etc. For the sake of notational and computational simplicity, rewrite the above system as

M/[s(s + N)] = ωn²/[s(s + 2ζωn)]

and thus the closed-loop system is

M/(s² + Ns + M) = ωn²/(s² + 2ζωn s + ωn²).

(Note that for closed-loop stability both M and N have to be positive. Why?) For reasons that will become clear shortly, when the time response of the system is studied, ζ is termed the damping ratio and ωn the undamped natural frequency. In order to find the response of the system to given inputs, the Laplace transform of the output is decomposed into partial fractions, whereof the inverse Laplace transform is performed. Thus, for any given input, the partial fraction expansion of the transfer function is also needed. We start by finding the roots of the denominator. The closed-loop poles are given by

s = −ζωn ± jωn√(1 − ζ²) =: −σ ± jωd,

as illustrated in Fig. 4.4. It should be noted that the term ωd is called the damped natural frequency of the system. The reason is that ωd is the oscillation frequency of the response to come, and ωd < ωn (in the case 0 < ζ < 1, which is the case in which we are interested, as will become clear), and thus the modifier is "damped." Moreover, σ is called

Figure 4.3 The standard second-order system. Left: Closed-loop, Right: Equivalent open loop.

Figure 4.4 Closed-loop poles of the standard second-order system.


damping of the system, as it appears in the exponential term of the output function. Further, note that the loci of closed-loop poles with the same ζ, ωn, σ, or ωd are as depicted in Fig. 4.5.

Let us now derive the output formula. There holds Y(s) = [ωn²/(s² + 2ζωn s + ωn²)]R(s). We shall consider r(t) = Aδ(t) and r(t) = A step(t). We do not derive the output formulae for the ramp and parabolic inputs because they are not of much practical value; we did so for first-order systems in order to make the observation of Remark 4.9. In this case it is left as an exercise to the reader (Exercise 4.6).
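Reading the parameters off a monic denominator s² + a1 s + a0 is a one-line computation; a minimal helper of our own (not from the text), valid for the underdamped case:

```python
import math

def second_order_params(a1, a0):
    """From s^2 + a1 s + a0, return (wn, zeta, sigma, wd); assumes 0 < zeta < 1."""
    wn = math.sqrt(a0)                 # undamped natural frequency
    zeta = a1 / (2 * wn)               # damping ratio
    sigma = zeta * wn                  # damping
    wd = wn * math.sqrt(1 - zeta**2)   # damped natural frequency
    return wn, zeta, sigma, wd

# s^2 + 2s + 4: wn = 2, zeta = 0.5, sigma = 1, wd = sqrt(3).
wn, zeta, sigma, wd = second_order_params(2.0, 4.0)
assert (wn, zeta, sigma) == (2.0, 0.5, 1.0)
assert abs(wd - math.sqrt(3)) < 1e-12
```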

4.5.2 Impulse response

When the input is the impulse function r(t) = Aδ(t), the output for A = 1 is given by Y(s) = ωn²/(s² + 2ζωn s + ωn²). Now, depending on the value of ζ, the output has different expressions:

For |ζ| > 1: Y(s) = [ωn/(2√(ζ² − 1))]·[1/(s + ωn(ζ − √(ζ² − 1))) − 1/(s + ωn(ζ + √(ζ² − 1)))].
For 0 < |ζ| < 1: Y(s) = (ωn²/ωd)·ωd/[(s + ζωn)² + ωd²], where ωd = ωn√(1 − ζ²).
For ζ = 0: Y(s) = ωn²/(s² + ωn²).
For ζ = 1: Y(s) = ωn²/(s + ωn)².
For ζ = −1: Y(s) = ωn²/(s − ωn)².

We take the inverse Laplace transform of the above expressions and find the output in the time domain as follows. Note that in the first and last cases s1 = ωn(ζ + √(ζ² − 1)) and s2 = ωn(ζ − √(ζ² − 1)).
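The overdamped partial-fraction form above can be verified symbolically; a sketch assuming sympy:

```python
import sympy as sp

s = sp.symbols('s')
wn = sp.Symbol('omega_n', positive=True)
z = sp.Symbol('zeta', positive=True)      # interpreted as zeta > 1 below

Y = wn**2 / (s**2 + 2*z*wn*s + wn**2)

r = sp.sqrt(z**2 - 1)
s1, s2 = wn*(z + r), wn*(z - r)
decomp = wn/(2*r) * (1/(s + s2) - 1/(s + s1))

assert sp.simplify(Y - decomp) == 0       # the two forms agree
```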

Figure 4.41 Effect of ωn on the time response.


Figure 4.42 The locus of the closed-loop poles when p < 2ωn.

Problem 4.21: Given the plant P = ωn²/[s(s + 2ζωn)], 0 < ζ < 1, in a unity negative feedback preceded by the controller C = (s + 2ζωn)/(s + p). Discuss the effect of this controller on the specifications of the original system.

First note that for stability there must hold p > 0. With this controller the undamped natural frequency of the system does not change, but the damping does, and so do the damping ratio and the damped natural frequency. (The loop gain becomes L = CP = ωn²/[s(s + p)], so the closed-loop characteristic polynomial is s² + ps + ωn² and the new damping ratio is ζ′ = p/(2ωn).) In the case (1) p < 2ωn the closed-loop poles are on the constant-ωn curve (see Fig. 4.42), and in the case (2) p ≥ 2ωn they are both negative real. In the former case (1) we distinguish two subcases: (1.1) If ζ′ = p/(2ωn) < ζ, or equivalently p < 2ζωn, the damping ratio decreases and the damped natural frequency increases. As a result, the settling time and the maximum percent overshoot increase, and the peak time and rise time decrease. (1.2) If ζ < ζ′ = p/(2ωn) (< 1), or equivalently 2ζωn < p (< 2ωn), the damping ratio increases and the damped natural frequency decreases. Consequently, the settling time and the maximum percent overshoot decrease, and the peak time and rise time increase. Note that all the aforementioned dependencies are monotonic. (Prove it!) In the latter case (2) there is no overshoot and the rise time is, precisely speaking, infinity; by the increase of p the system becomes more sluggish and the settling time increases.

Problem 4.22: It is known that for standard second-order systems the percent overshoot depends only on the damping ratio. Is this true when the system has a zero as well?

The example of systems S2 and S3 in Section 4.10 on the performance region shows that it is not correct. The overshoot will depend on the zero magnitude and on the gain or undamped natural frequency as well. This can also be verified theoretically. (Show it!)

Problem 4.23: This problem has four parts: (1) Analyze the structure of Fig. 4.43 with C(s) = 1. (2) Under what conditions are the following specifications satisfied: MP < 10%, ts < 2 s? Assume N = 2P = 1 and determine M and Q. (3) With the parameters of part (2), what should the simplest controller C(s) be so that the system exhibits no overshoot? (4) With the parameters of part (2), design the simplest controller C(s) such that the steady-state error to the parabolic input r = 0.5t² is less than 2%.

Time response


Figure 4.43 Problem 4.23.

Figure 4.44 Problem 4.23: The equivalent system.

Figure 4.45 Problem 4.23.

(1) With C(s) = 1, by the help of block diagram algebra it is observed that the system is equivalent to the system in Fig. 4.44. Note that this is called velocity feedback. The closed-loop transfer function is thus M/[s(Ns + P + MQ) + M]. Comparing with the plant M/[s(Ns + P)] in a negative unity feedback structure, we observe that the undamped natural frequencies are the same in both systems, being ωn = √(M/N). However, the damping ratio increases from P/(2Nωn) to (P + QM)/(2Nωn). As a result MP and ts decrease, but tr increases. This is the analysis of the system. (2) Now if we require MP < 10%, there should hold ζ > 0.59 ≈ 0.6. We take ζ = 0.7, by which MP ≈ 4.3%. As for ts < 2 s, using the 2% criterion we must have ζωn = 2 and thus ωn = 2.86. In brief, M/N = ωn² = 8.1633 and (P + QM)/N = 2ζωn = 4. With N = 2P = 1 one obtains M = 8.1633 and Q = 0.4287. (3) Now the system has the structure of Fig. 4.45 with M = 8.1633, N = 2P = 1, and Q = 0.4287. In order for the system to have no overshoot we must increase its damping ratio to at least 1. This is achieved if C(s) is a gain K < 1. Denoting C(s) = K, we find K from 2ζ√(KM/N) = 4 with ζ = 1. Hence K = 0.49. (4) We must increase the type of the system to at least 2. Thus the controller should have an integrator. In order to fulfill stability it must have a zero as well. Therefore its simplest form is C(s) = (as + b)/s. The requirement on the steady-state error determines b > 200/M = 24.49. On the other hand, for stability we must have a > b/4 = 6.125.
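The design values of parts (2) and (3) can be re-derived with a few lines of arithmetic. The sketch below is in Python, standing in for the book's MATLAB computations:

```python
# Problem 4.23, parts (2)-(3): a quick numeric re-derivation of the design
# values (Python sketch, standing in for the book's MATLAB computations).
zeta = 0.7                        # chosen damping ratio (MP ~ 4.3%)
wn = 2.0 / zeta                   # from ts < 2 s (2% criterion): zeta*wn = 2
N, P = 1.0, 0.5                   # given N = 2P = 1
M = wn ** 2 * N                   # from M/N = wn^2
Q = (2 * zeta * wn * N - P) / M   # from (P + Q*M)/N = 2*zeta*wn = 4
K = 4.0 * N / M                   # part (3): 2*zeta*sqrt(K*M/N) = 4 with zeta = 1

print(M, Q, K)
```

Running it reproduces M ≈ 8.1633, Q ≈ 0.4287, and K = 0.49, in agreement with the text.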

Problem 4.24: In the structure of the left panel of Fig. 4.46, given C(s) = 1, the impulse response of the plant P is depicted in the right panel of the same figure, in which S = 1.25 and T = 1. (1) Find a second-order approximation for P. (2) In the same structure with C(s) = 1, if the input is the unit step, find the rise time, delay time, and settling time.


Introduction to Linear Control Systems


Figure 4.46 Problem 4.24. Left: Test structure, Right: Impulse response.

We can prove that if P is the standard second-order system, then there holds S = 1 + Mp and T = tp. To prove the claim it suffices to investigate the impulse and step response formulae of the standard second-order system given in Section 4.5. This is left to the reader. Thereafter from MP we find ζ and from tp we compute ωd, whence we obtain ωn. That is, we have found the exact second-order model that matches these experimental data. With these data we find ζ = 0.4 and ωn = 3.4278. Part (2) needs direct substitution in the respective formulas and is left to the reader.

Problem 4.25: Dynamics of some airplanes may be represented by the transfer function X(s) = A(s + 2ζωn)/(s² + 2ζωn s + ωn²). Obtain the time response of this system to an impulse input, which represents a disturbance in the form of turbulence. Discuss versus the values of ζ.

We expand X(s) into partial fractions, whence we find x(t) by the inverse Laplace transform. For A = 1 the answer is:

1. ζ = 0: x(t) = cos ωn t
2. 0 < ζ < 1: x(t) = e^(−ζωn t)[cos ωd t + (ζ/√(1 − ζ²)) sin ωd t]
3. ζ = 1: x(t) = e^(−ωn t)(1 + ωn t)
4. ζ > 1: x(t) = [1/(2ωn√(ζ² − 1))](s₁e^(−s₂t) − s₂e^(−s₁t)), with s₁ = ωn(ζ + √(ζ² − 1)) and s₂ = ωn(ζ − √(ζ² − 1))

For negative ζ, cases (2) and (4) are obtained the same as above, except that this time the answers are divergent. The third case (ζ = −1) is x(t) = e^(ωn t)(1 − ωn t), which diverges because the second term does. It is good to know that, depending on the flight regime, the airplane demonstrates both positive and negative damping ratios. Additionally, it is worth recalling that this system also shows the dynamics of the unforced mass-spring-damper of Example 2.9 of Chapter 2, System Representation, in the presence of initial conditions. That system, however, is always stable.

Problem 4.26: Given the dynamic mass-spring-damper configuration of Example 2.9 of Chapter 2 with u = F sin ωt. Obtain the response of this system.

There holds m z̈ + b ż + k z = F sin ωt. It is left to the reader to compute the solution, with zero initial conditions, as z(t) = F′ sin(ωt − θ) where F′ = F/√((k − mω²)² + b²ω²) and θ = tan⁻¹[bω/(k − mω²)]. (Question: What if the input is u = F cos ωt?) It is worth stressing that the total answer is the forced answer plus the unforced one. The unforced answer is of the form given in Problem 4.25.
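The forced-response amplitude formula above is easy to sanity-check numerically. The sketch below (Python, in place of the book's MATLAB files; the parameter values m, b, k, F, ω are arbitrary illustrative choices, not from the text) integrates m z̈ + b ż + k z = F sin ωt with simple semi-implicit Euler steps and compares the late-time peak amplitude with F′:

```python
import math

# Problem 4.26 sanity check: simulate m*z'' + b*z' + k*z = F*sin(w*t) and
# compare the steady-state amplitude with F' = F/sqrt((k - m*w^2)^2 + (b*w)^2).
# The numbers below are illustrative choices, not values from the text.
m, b, k, F, w = 1.0, 0.5, 4.0, 1.0, 1.0
Fp = F / math.sqrt((k - m * w**2) ** 2 + (b * w) ** 2)  # predicted amplitude

z, v, dt = 0.0, 0.0, 1e-3
peak, t = 0.0, 0.0
while t < 200.0:
    a = (F * math.sin(w * t) - b * v - k * z) / m  # acceleration
    v += a * dt
    z += v * dt                                    # semi-implicit Euler step
    t += dt
    if t > 150.0:                                  # transients have died out
        peak = max(peak, abs(z))

print(Fp, peak)
```

The simulated peak agrees with the predicted F′ to a few parts per thousand, which is the expected accuracy of the first-order integrator.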


Problem 4.27: Given the system P(s) = 1/[s(s + 1)] preceded by a controller k in a negative unity feedback, (1) Find the bandwidth of the system for k = 1. (2) In what frequency range is the input amplified and in what frequency range is it attenuated? (3) Find k such that r(t) = sin 0.3t appears in the output with the same amplitude.

For part (1) the closed-loop system is 1/(s² + s + 1), i.e., ωn = 1 and ζ = 0.5, and we use the bandwidth formula BW = ωn[1 − 2ζ² + √(4ζ⁴ − 4ζ² + 2)]^(1/2) = 1·[1 − 2×0.5² + √(4×0.5⁴ − 4×0.5² + 2)]^(1/2) = 1.272 rad/s. For part (2) we note that 0 < ζ < 0.7, and thus in the frequency range 0 < ω < ωn√(2 − 4ζ²), i.e., 0 < ω < 1, the input is amplified in the output, and in the frequency range ωn√(2 − 4ζ²) < ω < BW, i.e., 1 < ω < 1.272, the input is attenuated in the output. As for part (3) the proportional gain is given by

k = [(1 − ω²/ωn²)² + 4ζ²ω²/ωn²]^(1/2) = ((1 − 0.09)² + 0.09)^(1/2) = 0.9582.

We note that k < 1, as expected from Example 4.10.

Problem 4.28: Consider the following systems: P₁(s) = A₁/(s + 2) and P₂(s) = A₁/(s + 2) + A₂/(s + 100). We know that the mode at s = −100 dies out much faster than the mode at s = −2. Can we conclude that P₁ is a first-order approximate of P₂?

The statement that "the mode at s = −100 dies out much faster than the mode at s = −2" is correct, but we can draw the conclusion that "P₁ is a first-order approximate of P₂" only provided the condition A₁/2 ≫ A₂/100, say A₁/2 = 100·(A₂/100), holds, so that their steady-state values are about the same.

Problem 4.29: Find the simplest second-order approximate for the system P(s) = (4s + 1.2)/[(s + 15)(s² + 3s + 2)] in the same pole/zero pattern.

We only need to consider the steady-state value of the system. The answer is thus Pl(s) = (4s + 1.2)/[15(s² + 3s + 2)]. (The answer is obviously valid with respect to step tracking. Question: To what extent can we draw conclusions about tracking other inputs like a sinusoid?)

Problem 4.30: Consider the actual 1-DOF control structure of Fig. 4.25. Design a parabola-tracking controller for the system P(s) = (s² + 1)/[s(s + 1)] with Ps(s) = 50/(s + 50).

The error signal is given by E = R − Y. Thus E(s) = [1 − (N/D)·((s² + 1)/[s(s + 1)])/(1 + (N/D)·((s² + 1)/[s(s + 1)])·(50/(s + 50)))]R(s), where C(s) = N(s)/D(s). For simplicity of notation we drop the argument s. Therefore ess = lim_{s→0} (1/s²)·[Ds(s + 1)(s + 50) − Ns(s² + 1)]/[Ds(s + 1)(s + 50) + 50N(s² + 1)]. In order for ess to be zero,

1 − T = [Ds(s + 1)(s + 50) − Ns(s² + 1)]/[Ds(s + 1)(s + 50) + 50N(s² + 1)]

must be of the form s³Ê(s). This simplifies to requiring [D(s + 1)(s + 50) − N(s² + 1)]/[Ds(s + 1)(s + 50) + 50N(s² + 1)] = s²Ê(s). In solving this equation by inspection we should be careful that the resulting controller must be causal and the resulting system must be internally stable. A simple possibility is C₁(s) = N/D with D = s² + 1 and N = 51s + 50, which is however internally unstable. So we should discard it (although we provide the simulation in the left panel of Fig. 4.47). An internally stable design is C₂(s) = N/D with D = s + 1 and N = 101s + 50. The simulation result is provided in the right panel of the same figure.
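The second design can be verified with a short polynomial computation (a Python sketch, offered as an alternative to the book's MATLAB m.files): the numerator factor must lose its constant and first-order terms (the s² factor that yields zero parabolic error), and the closed-loop characteristic polynomial must be Hurwitz.

```python
import numpy as np

# Problem 4.30, design C2(s) = (101s + 50)/(s + 1): check that
# D(s+1)(s+50) - N(s^2+1) has a factor s^2 (zero parabola error) and that
# the closed-loop characteristic polynomial is Hurwitz.
D = np.array([1.0, 1.0])             # D(s) = s + 1
N = np.array([101.0, 50.0])          # N(s) = 101s + 50

num = np.polysub(np.polymul(D, np.polymul([1, 1], [1, 50])),
                 np.polymul(N, [1, 0, 1]))       # D(s+1)(s+50) - N(s^2+1)

# characteristic polynomial: D*s*(s+1)*(s+50) + 50*N*(s^2+1)
char = np.polyadd(np.polymul(D, np.polymul([1, 0], np.polymul([1, 1], [1, 50]))),
                  50 * np.polymul(N, [1, 0, 1]))

print(num)                      # constant and s coefficients vanish
print(np.roots(char).real.max())  # negative: all closed-loop poles in the LHP
```

The same two checks applied to the first design (D = s² + 1) expose the imaginary-axis pole-zero cancellation that makes it internally unstable.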


Figure 4.47 Problem 4.30. Left: First design, internally unstable. Right: Second design, internally stable.

Figure 4.48 Problem 4.31.

Problem 4.31: Consider the alternative structure of Fig. 4.31 with Ps(s) = 50/(s + 50) and P(s) = 1/(s + 3). Determine the feedback controller C(s) such that tracking takes place for a step input.

The error signal is given by E = R − Y. Thus E(s) = [1 − (1/(s + 3))/(1 + (N/D)·(1/(s + 3))·(50/(s + 50)))]R(s), where C(s) = N(s)/D(s). For simplicity of notation we drop the argument s. Therefore ess = lim_{s→0} (1 − T(s)) = 1 − (1/3)/(1 + 50N(0)/(150D(0))). In order for ess to be zero there should hold T(0) = 1, or N(0)/D(0) = −2. With the choice C(s) = −2 the system is stable and thus it is acceptable. Simulation is provided in Fig. 4.48.
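Both claims admit a one-line numeric confirmation (Python sketch in place of the book's MATLAB simulation): with C = −2 we have T = (s + 50)/[(s + 3)(s + 50) + 50C], whose DC gain is 1 and whose poles are in the LHP.

```python
import numpy as np

# Problem 4.31: with the feedback controller C(s) = -2, P = 1/(s+3),
# Ps = 50/(s+50), we have T = P/(1 + P*C*Ps) = (s+50)/[(s+3)(s+50) + 50*C].
C = -2.0
den = np.polyadd(np.polymul([1, 3], [1, 50]), [50 * C])  # (s+3)(s+50) + 50C
T0 = 50.0 / den[-1]                                      # DC gain T(0)
print(T0, np.roots(den))
```

The printed roots are both negative real, and T0 = 1, i.e., zero steady-state step error.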


Problem 4.32: Repeat Problem 4.31 for a ramp input.

This time 1 − T(s) = [D(s + 50)(s + 2) + 50N]/[D(s + 50)(s + 3) + 50N] should be of the form s²Ê(s). Note that for notational simplicity D(s) and N(s) are written as D and N. This problem does not admit any acceptable solution! It is noted that it does not admit a solution for a parabola input either!

Problem 4.33: Repeat Problem 4.31 for a ramp input and the plant P(s) = (s + 1)/(2s + 1).

The error signal is given by e = r − y. Thus E(s) = [1 − ((s + 1)/(2s + 1))/(1 + (N/D)·((s + 1)/(2s + 1))·(50/(s + 50)))]R(s), where C(s) = N(s)/D(s). For simplicity of notation we drop the argument s. Therefore ess = lim_{s→0} (1/s)·[Ds² + 50Ds + 50N(s + 1)]/[D(2s + 1)(s + 50) + 50N(s + 1)]. In order for ess to be zero, 1 − T = [Ds² + 50Ds + 50N(s + 1)]/[D(2s + 1)(s + 50) + 50N(s + 1)] must be of the form s²Ê(s), with the obvious conditions as in the previous exercises. A simple possibility is D = s + 1 and N = −s, which satisfies these conditions. Performance of the controller is excellent; output and input are barely distinguishable, see Fig. 4.49. Note that, as shown in the same figure, the same controller tracks the parabolic input as well. Why? Step tracking of the system is also excellent. Why?
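The design of Problem 4.33 can be checked with the same polynomial bookkeeping as before (a Python sketch standing in for the m.file): with D = s + 1 and N = −s the numerator of 1 − T collapses to s²(s + 1), and the denominator is Hurwitz.

```python
import numpy as np

# Problem 4.33 check: C(s) = -s/(s+1), P = (s+1)/(2s+1), Ps = 50/(s+50).
D, N = np.array([1.0, 1.0]), np.array([-1.0, 0.0])   # D = s+1, N = -s

# numerator of 1 - T: D*s^2 + 50*D*s + 50*N*(s+1)
num = np.polyadd(np.polyadd(np.polymul(D, [1, 0, 0]),
                            50 * np.polymul(D, [1, 0])),
                 50 * np.polymul(N, [1, 1]))
# denominator of 1 - T: D*(2s+1)*(s+50) + 50*N*(s+1)
den = np.polyadd(np.polymul(D, np.polymul([2, 1], [1, 50])),
                 50 * np.polymul(N, [1, 1]))

print(num)            # s^3 + s^2 = s^2 (s+1): zero ramp error
print(np.roots(den))  # all poles in the LHP
```

Note that the common factor (s + 1) cancels, leaving 1 − T = s²/(2s² + 51s + 50); this is the quantity behind the "Why?" questions about parabolic and step tracking.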

Problem 4.34: Repeat Problem 4.31 for a parabolic input and the system P(s) = (2s² + s + 1)/[s(s + 1)].

The error signal is given by E = R − Y. Thus E(s) = [1 − ((2s² + s + 1)/[s(s + 1)])/(1 + (N/D)·((2s² + s + 1)/[s(s + 1)])·(50/(s + 50)))]R(s), where C(s) = N(s)/D(s). For simplicity of notation we drop the argument s. Therefore ess = lim_{s→0} (1/s²)·[−D(s³ + 50s² + s + 50) + 50N(2s² + s + 1)]/[Ds(s + 1)(s + 50) + 50N(2s² + s + 1)]. In order for ess to be zero,

1 − T = [−D(s³ + 50s² + s + 50) + 50N(2s² + s + 1)]/[Ds(s + 1)(s + 50) + 50N(2s² + s + 1)]

must be of the form s³Ê(s), with the obvious conditions as in the previous exercises. A simple possibility is D = 50(2s² + s + 1) and N = 50s² + s + 50, which satisfies these conditions. Performance of the controller is excellent; output and input are barely distinguishable, see Fig. 4.50. Note that if we zoom in on the figures, similarly to Problem 4.33, we observe a slight tracking error.

Problem 4.35: Consider the system P(s) = 3/(2s + 1) in a negative unity feedback structure with the delay T = 0.2 second in the forward path. Design the simplest controller in the forward path so that the system tracks a step input and is critically damped. Use the second-order Taylor approximation for the delay.

It is clear that the controller is of the form C(s) = k/s. Thus we actually have the system L(s) = 3ke^(−0.2s)/[s(2s + 1)] in the forward path. The closed-loop transfer function is given by


Figure 4.49 Problem 4.33, Excellent tracking performance for ramp, parabolic and step inputs.

T = L/(1 + L) = 3ke^(−0.2s)/[s(2s + 1) + 3ke^(−0.2s)] ≈ 3ke^(−0.2s)/[s(2s + 1) + 3k(1 − 0.2s + (0.2s)²/2)] = e^(−0.2s)·[3k/(2 + 0.06k)]/[s² + ((1 − 0.6k)/(2 + 0.06k))s + 3k/(2 + 0.06k)].


Figure 4.50 Problem 4.34, Top left: Parabola tracking, Top right: Ramp tracking, Bottom: Step tracking.

There holds ωn² = 3k/(2 + 0.06k) and 2ζωn = (1 − 0.6k)/(2 + 0.06k). In order for the system to be critically damped there should hold ζ = 1, and thus from [(1 − 0.6k)/(2(2 + 0.06k))]² = 3k/(2 + 0.06k) we find k = 0.0397 ≈ 0.04. Note that the equation has the other solution k = −70.0397, which should be discarded. (Why?)

Problem 4.36: Consider the open-loop systems L₁(s) = K(s + 1)(s + 2)/[s(s + 5)(s² + 1)], L₂(s) = K(s² + 400)/[s(s + 1)(s + 2)], and L₃(s) = K(s + 1)(s² + 1)/[s(s + 2)(s + 3)(s² + 4)] in the standard closed-loop 1-DOF structure. Simulate the system and observe the performance with regard to both the disturbance/noise rejection and constant setpoint tracking. Respectively assume d1i = sin t + step(t), d1o = sin t + step(t), n2 = sin 20t, d3i = sin 2t + step(t), d3o = sin 2t + step(t), n3 = sin t, r1 = r2 = r3 = step(t).

The analyses of these systems were provided in Examples 4.33-35. For the sake of brevity we provide simulations of disturbance rejection only for the case of output disturbance, for which the decomposition of L = CP does not appear; we have ydo = S·Do = [1/(1 + L)]·Do. That is, it does not matter which terms of L belong to


C and which belong to P. For the input disturbance we have ydi = PS·Di = [P/(1 + L)]·Di, and thus the decomposition of L matters. The interested reader can simply obtain the simulations of ydi by appropriately changing the associated m.file of this Problem. The stability ranges are: L₁(s): K ≥ 6.67; L₂(s): K ≤ 0.015; L₃(s): 0 < K < ∞. For the sake of brevity we give the simulations for 3 values of K, see Fig. 4.51. Panels 1-3 concern L₁ with K = 7, 30, 200. Panels 4 and 5 are related to L₂ with K = 0.001, 0.005, 0.01. Panels 6-8 are for L₃ with K = 1, 7, 50. See the m.file of the Problem for more simulations. As observed, performance depends on the gain value. Of course, it depends on the 'rest' (as we defined in item (3) of Section 4.13.2.1) of the controller as well, i.e., all the loop parameters. We encourage the reader to come back to this Exercise after reading Chapter 5 and have a look at the root locus of these systems. The philosophy of the controller design will be easily grasped after Chapter 5. Exercise 4.57 is in this direction.

Discussion: An actual noise almost always has a much higher frequency than the disturbance. This makes the rejection of such simultaneous noise and disturbance difficult. For instance, note that the simple system L₃′(s) = K(s + 1)(s² + 4)/[s(s + 2)(s + 3)(s² + 1)] is not stabilizable by K. However, it may be possible to do so even for more difficult systems, like item (9) of Exercise 4.56. Let us add that it does admit a solution, e.g., C(s) = [K(s² + 20²)(s + 0.1)³]/[s(s² + 1²)(s + 5)²] · [(s + 0.1)² + 1²]/[(s + 0.1)² + 20²]. The system is stable for K > 95.282. The philosophy behind this controller design should be clear to you after reading Chapter 5. We use the rightmost term of the controller for stabilizing its other imaginary poles and zeros. If we do not use that term it is possible to stabilize the system with the choice of K, but we should then change (s + 5)² to, e.g., (s + 10)² or (s + 15)².
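The quoted stability range for L₁ can be spot-checked numerically; the short Python sketch below (used here in place of the book's MATLAB m.files) examines the closed-loop characteristic polynomial s(s + 5)(s² + 1) + K(s + 1)(s + 2) just below and just above K = 6.67:

```python
import numpy as np

# Problem 4.36, L1(s) = K(s+1)(s+2)/[s(s+5)(s^2+1)]:
# closed-loop characteristic polynomial s(s+5)(s^2+1) + K(s+1)(s+2).
den = np.polymul(np.polymul([1, 0], [1, 5]), [1, 0, 1])   # s(s+5)(s^2+1)
num = np.polymul([1, 1], [1, 2])                          # (s+1)(s+2)

def max_real_pole(K):
    return np.roots(np.polyadd(den, K * num)).real.max()

print(max_real_pole(6.0), max_real_pole(7.0))  # unstable below, stable above K = 20/3
```

A Routh computation gives the same boundary exactly: the first-column entry 3K − 20 changes sign at K = 20/3 ≈ 6.67.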
We emphasize that our objective is only to provide an answer, not an answer with good stability margins, etc.; that needs further considerations. Of course the second design is better than the first.

Problem 4.37: Reconsider Example 4.36. We have P(s) = (s + 1)/[s(s − 1)] in the standard 1-DOF control structure. We consider two scenarios: (1) The controller is C₁(s) = K₁·[(s² + 100)(s + 0.1)³((s + 0.5)² + 1)]/[s(s + 1)(s² + 9)(s + 10)³((s + 0.5)² + 225)]. (2) The controller is C₂(s) = K₂·[(s² + 100)(s + 0.1)³((s + 0.5)² + 1)]/[s(s + 1)(s² + 9)(s + 10)³((s + 5)² + 225)]. Simulate the system.

Apart from simulating the system and observing the performance, which has its own importance, our objective is to point out a serious shortcoming of MATLAB®, namely of MATLAB® 2015a and all its predecessors. If you simulate the system with input disturbance or noise you see a divergent output (an unstable system) over a long time, approximately longer than 35 sec (with MATLAB® 2015a). However, verifying the stability of the system is quite simple. Do not doubt it! The


Figure 4.51 Problem 4.36, Panels 1-3: L1(s), Panels 4, 5: L2(s), Panels 6-8: L3(s).


problem is the faulty operation of MATLAB®! And the villain of the piece can be very easily detected. The transfer functions Hn = −CP/(1 + CP) and Hdi = P/(1 + CP), which are needed for simulating yn and ydi, are computed in a wrong way! That may sound surprising to you, but let us see how it works with a simple system. Let P(s) = (s + 1)/(s − 1) and C(s) = 10(s + 2)/(s − 2). Then,

T = CP/(1 + CP) = [10·((s + 2)/(s − 2))·((s + 1)/(s − 1))]/[1 + 10·((s + 2)/(s − 2))·((s + 1)/(s − 1))] = (10s⁴ − 50s² + 40)/(11s⁴ − 6s³ − 37s² − 12s + 44) = 0.9090(s + 2)(s + 1)(s − 1)(s − 2)/[(s − 1)(s − 2)(s² + 2.455s + 2)].

That is, MATLAB® does not cancel out poles and zeros that must be canceled out. Obviously, this is not a forbidden unstable pole-zero cancellation of the kind we discussed in Chapter 3. Because this cancellation is not done and the uncanceled factors are unstable, over long-time simulations every such system has the potential to become unstable due to numerical inaccuracies. We should add that the phrase 'long-time' in the previous sentence has an imprecise meaning; it may be either shorter or longer than the settling time of the system, and this is well observed in panels five and six in Fig. 4.52. For instance, for the above simple second-order system, if you use the command 'step' for a time larger than 15 sec, say 20 sec or larger, then you clearly see the divergence of the output. The same is true for the command 'lsim' and/or more complicated systems as well. Note that in computing T in the above formula the last fraction is produced by the command 'zpk'. When we do this process for the system of Problem 4.37 we see that the terms s, (s − 1) exist in both the numerator and denominator of Hn, Hdi, and thus yn, ydi diverge. Anyway, the simulation results are provided in Fig. 4.52. The case (A) refers to do = sin t + sin 3t + 1. The case (B) refers to do = sin t + sin 3t + t. For the input disturbance we use di = sin t + sin 3t + 1. For the noise we use n = sin 10t. The reader is encouraged to use the m.file of the problem to better observe the performance by zooming in on the figures. We emphasize that it is possible to improve the performance, but we do not bother to do so, as our only purpose is to provide a controller with the aforementioned tracking/rejection properties.

Remark 4.22: The aforementioned shortcoming of MATLAB® is encountered in all such systems that we work with in this book, but for the sake of brevity we do not mention it again.
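The uncancelled computation can be reproduced outside MATLAB® as well; the Python sketch below builds T the naive way (cross-multiplying without cancellation) and shows that the resulting denominator still contains the unstable roots +1 and +2, exactly as in the zpk form above:

```python
import numpy as np

# P = (s+1)/(s-1), C = 10(s+2)/(s-2); naive closed-loop computation:
# T = CP/(1+CP) with num = 10(s+2)(s+1)(s-2)(s-1) (no cancellation) and
# den = (s-2)(s-1)*[(s-2)(s-1) + 10(s+2)(s+1)].
Cn, Cd = 10 * np.poly([-2]), np.poly([2])
Pn, Pd = np.poly([-1]), np.poly([1])

num = np.polymul(np.polymul(Cn, Pn), np.polymul(Cd, Pd))
den = np.polymul(np.polymul(Cd, Pd),
                 np.polyadd(np.polymul(Cd, Pd), np.polymul(Cn, Pn)))

print(num)            # 10 s^4 - 50 s^2 + 40
print(den)            # 11 s^4 - 6 s^3 - 37 s^2 - 12 s + 44
print(np.roots(den))  # includes the unstable roots +1 and +2
```

Any simulator fed these quartic coefficients integrates the unstable modes as well, and round-off eventually excites them, which is precisely the divergence mechanism described above.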


Figure 4.52 Problem 4.37, Top and middle row: Output disturbance, Bottom row: Input disturbance and noise.

Problem 4.38: Consider P(s) = (s² + 100)/[s(s² + 4)] and C(s) = [(s² + 400)/(s²(s² + 9))]·C₁(s) in the standard 1-DOF control structure. The system is internally stable. Discuss the tracking and rejection properties of (un)wanted inputs of this system.

First we note that from the statement it is obvious (an implicit assumption) that C₁(s) does not have any term on the imaginary axis. We have the term s² + 400 in the numerator of the controller and the term s²(s² + 9) in the denominator of the


controller. Moreover, we have the term (s² + 100)(s² + 400) in the numerator of the loop gain and the term s³(s² + 4)(s² + 9) in the denominator of the loop gain. Hence:

1. The references r(t) = step(t), r(t) = ramp(t), r(t) = parabola(t) will be tracked with zero steady-state error. The reference r(t) = sin ωt, ω ≠ 10, 20, is tracked with a phase shift and a different magnitude; the respective output at steady state is yss(t) = |T(jω)| sin(ωt + ∠T(jω)), where T = CP/(1 + CP) is the transfer function of the system. The references r(t) = sin 10t and r(t) = sin 20t are rejected. (Needless to say we can have the superposition of the above references, e.g., r(t) = ramp(t) + sin ωt.)
2. The input disturbance di(t) = A₁ sin 3t + d̄i(t) is rejected, where d̄i(t) = A₂ step(t) + A₃ ramp(t).
3. The output disturbance do(t) = A₄ sin 2t + A₅ sin 3t + d̄o(t) is rejected, where d̄o(t) = A₆ step(t) + A₇ ramp(t) + A₈ parabola(t).
4. The noise n(t) = A₉ sin 10t + A₁₀ sin 20t is rejected.
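These claims can be spot-checked in the frequency domain; the Python sketch below (with the illustrative choice C₁ = 1, which the problem leaves free) evaluates the loop gain on the imaginary axis and verifies that |T| vanishes at the controller's imaginary zeros (ω = 10, 20), while |S| vanishes at the loop's imaginary poles (e.g., ω = 2, 3):

```python
import numpy as np

# Problem 4.38 with the illustrative choice C1 = 1:
# L(s) = (s^2+100)(s^2+400) / [s^3 (s^2+4)(s^2+9)]
Ln = np.polymul([1, 0, 100], [1, 0, 400])
Ld = np.polymul([1, 0, 0, 0], np.polymul([1, 0, 4], [1, 0, 9]))

def T_and_S(w):
    s = 1j * w
    a, b = np.polyval(Ln, s), np.polyval(Ld, s)
    return abs(a / (a + b)), abs(b / (a + b))   # |T|, |S| evaluated safely

for w in (10.0, 20.0):
    assert T_and_S(w)[0] < 1e-9      # references sin(10t), sin(20t) are blocked
for w in (2.0, 3.0):
    assert T_and_S(w)[1] < 1e-9      # disturbances at 2 and 3 rad/s are rejected
print("checks passed")
```

Writing T = Ln/(Ln + Ld) and S = Ld/(Ln + Ld) avoids the division by a vanishing loop-gain denominator that a direct L/(1 + L) evaluation would hit at ω = 2, 3.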

4.17 Exercises

Over 200 exercises are offered in the form of various collective problems below. It is advisable to try as many of them as your time allows.

Exercise 4.1: What is meant by model reduction? Briefly explain how it is done and what it is good for.

Exercise 4.2: What is the general effect of the addition/removal of zeros and/or poles on the time response of a system? Briefly discuss.

Exercise 4.3: Find the impulse, step, ramp, and parabolic responses of the first-order plant P(s) = k/(s + a).

Exercise 4.4: Repeat the above problem with the general first-order plant P(s) = k(s + b)/(s + a).

Exercise 4.5: Draw the impulse response of the standard second-order system versus values of ζ as categorized in the text.

Exercise 4.6: Derive the output formulae of standard second-order systems for ramp and parabolic inputs and verify the observation of Remark 4.9 for them.

Exercise 4.7: Consider Example 4.3 and Problem 4.6. What can be said in the same spirit about type-one systems?

Exercise 4.8: Consider the standard second-order system. What is the effect of changing the damped natural frequency on the time response of the system? Discuss.

Exercise 4.9: Repeat Exercise 4.8 if we change the damping of the system.

Exercise 4.10: Prove, by argument, that the bandwidth of the system k/[(s + p₁)⋯(s + pₙ)], with pᵢ > 0 for all i, is smaller than the smallest pᵢ, i.e., the smallest-in-magnitude pole of the system.

Exercise 4.11: Discuss the correctness or falseness of the following arguments. “Given the system (as + b)G(s), analyze the system G(s), so that you know the bandwidth and the output response y of G(s) for a sinusoidal input u. The output is a sinusoid of the same frequency as that of the input. Because the system is linear, the output of the system (as + b)G(s) is of the form aẏ + by. Thus in increasing the input frequency the output magnitude grows (even if both a, b are less than one); it is multiplied by the frequency. This amplification is higher if the zero is of higher order.
For instance for a third order zero the third order derivative of y appears in the output. The output is hence multiplied by the third power of the input frequency. Thus at low input frequencies the output


magnitude is small and at high input frequencies the output magnitude is large. In particular this means that the bandwidth is infinity.”

Exercise 4.12: Derive the expression of the step response of the system pωn²/[(s + p)(s² + 2ζωn s + ωn²)]. To simplify computations solve the problem for a given set of parameters of the second-order part of the system. Then analyze the response and explain Fig. 4.19 of the text.

Exercise 4.13: Derive the expression of the step response of the system (s + z)(ωn²/z)/(s² + 2ζωn s + ωn²). To simplify computations solve the problem for a given set of parameters of the second-order part of the system. Then analyze the response and explain Fig. 4.20 of the text.

Exercise 4.14: Given the SISO closed-loop system with P(s) = K/(2s + 1), (1) Design the static controller C(s) = K such that the steady-state error to the unit step input is 2%. (2) What if the nonzero steady-state error is wanted to not exceed 5%? (3) How can we make the error zero?

Exercise 4.15: Given the SISO closed-loop system with L(s) = (k + 1/(Ti s))·1/(s + 2), (1) Design the PI controller C(s) = k + 1/(Ti s) such that the steady-state error to the ramp input is 5%. (2) What if the nonzero steady-state error is required to not exceed 2%? (3) How can we make the error zero?

Exercise 4.16: Design a PID controller C(s) = k + 1/(Ti s) + Td s/(1 + Td s/N) for the plant P(s) = 1/(s² + 4). The steady-state error to the ramp input should not exceed 5%. Find the parameters such that the controller has two real zeros, say at s₁,₂ = −1, −3. Analyze this design.

Exercise 4.17: Given the SISO closed-loop system with L(s) = C(s)·1/[(s ± 1)(s + 10)], design the simplest controller C(s) such that the steady-state error to: (1) A step input is 2%; (2) A step input is 0; (3) A ramp input is 2%; (4) A ramp input is 0; (5) A parabolic input is 2%; (6) A parabolic input is 0.

Exercise 4.18: The controller C(s) = k(s + a)/(s + b), a < b, is called a lead controller, which will be exclusively treated in Chapter 9. Here we take an approach to its design based on what we have learned in Problem 4.13. Design the controller for the plant P(s) = 20/[s²(s + 1)] such that the steady-state error to ramp inputs is at most 1%.

Exercise 4.19: The controller C(s) = k(s + a)/(s + b), a > b, is called a lag controller, which will be exclusively treated in Chapter 9. Design the controller for the plant P(s) = 180/[s(s + 10)(s + 20)] such that the steady-state error to ramp inputs is at most 2%. Note that we have already designed one such controller in Problem 4.14. Repeat the same problem with a = 30b and analyze the systems.

Exercise 4.20: Given the SISO closed-loop system with L(s) = C(s)·(s − 2)/[s(s − 1)]. Design a PI, PD, and PID controller C(s) = k + 1/(Ti s), C(s) = k + Td s/(1 + Td s/N), and C(s) = k + 1/(Ti s) + Td s/(1 + Td s/N), whichever is possible, such that the steady-state error to the parabolic input is less than 2%. If this is not possible, propose a stabilizing controller that fulfills this requirement.

Exercise 4.21: The output of the MP system P(s) = 50(s + 2)/[(s + 5)(s + 20)] to the inputs u1 = sin t, u2 = sin 5t is given in Fig. 4.53. Analyze and explain the response of the system. Note that it is considered as an open-loop system, and thus the input and reference input are actually the same.

Exercise 4.22: The output of the NMP system P(s) = 50(s − 2)/[(s + 5)(s + 20)] to the inputs u1 = sin t, u2 = sin 5t is given in Fig. 4.54. Analyze and explain the response of the system. Note that it is considered as an open-loop system, and thus the input and reference input are actually the same.


Figure 4.53 Output of the system for different inputs: Left u1 , Right u2 .

Figure 4.54 Output of the system for different inputs: Left u1 , Right u2 .

Exercise 4.23: Given the system P(s) = 3/[s(2s + 1)] preceded by a controller k in a negative unity feedback, find the bandwidth of the system for k = 1. For a sinusoidal input, in what frequency range is the input amplified and in what frequency range is it attenuated? Find k such that u(t) = sin 0.4t appears in the output with the same amplitude.

Exercise 4.24: Given the plant P(s) = ωn²/[s(s + 2ζωn)], 0 < ζ < 1, in a unity negative feedback preceded by the controller C. Determine the controller such that the closed-loop poles are located on a constant damped-natural-frequency curve. Discuss the specifications of the new system.

Exercise 4.25: Given the plant P(s) = ωn²/[s(s + 2ζωn)] in a negative unity feedback structure, (1) Show that if the input is filtered by an ideal PD controller k + TD s, or even the simpler filter 1 + TD s, then by proper choice of the filter parameter(s) we can make the ramp error zero. (2) Noting that the ideal PD filter is not realizable, propose a modification of the filter so that the same objective is fulfilled, if possible.

Exercise 4.26: Let the closed-loop transfer function of a negative unity feedback system be given by L/(1 + L) = ∏ᵢ₌₁ᵐ(Ti s + 1)/∏ᵢ₌₁ⁿ(Ti′ s + 1). (1) Show that ∫₀^∞ e(t) dt = Σᵢ₌₁ⁿ Ti′ − Σᵢ₌₁ᵐ Ti, where e(t) is the error in the unit step response. (2) Show that the above expression is also the


value of 1/Kv, Kv being the velocity constant of the system. (3) What is the implication of this result? (4) Let p(s) be a Hurwitz polynomial. Thus the system p(0)/p(s) tracks the step input. Show that ∫₀^∞ e²(t) dt has a simple expression in terms of the Hurwitz determinants. The answer is given in (Allwright, 1980).

Exercise 4.27: In the course Industrial Control you will be introduced to the important performance indices of a control system, which are decisive in empirical tuning of controller parameters. Consider the standard second-order system. Compute the following quantities for a unit step input: (1) J = ∫₀^∞ |e(t)| dt, (2) J = ∫₀^∞ t|e(t)| dt, (3) J = ∫₀^∞ [t|e(t)| + |ė(t)|] dt. What is the name of each index?

Exercise 4.28: Using the approach of Section 4.8, find a second- and a first-order approximate for the systems G₃(s) = 1/(1 + 2.5s + 2s² + 0.3s³) and G₂(s) = 1/(1 + 2.5s + s²), respectively.

Exercise 4.29: Using MATLAB find second- and third-order approximates for the following systems and verify the result for step inputs.
1. L(s) = 2(s + 1)⁴(s + 10)⁵/[(s² + 2s + 2)(s + 2)⁴(s + 12)⁴]
2. L(s) = 2(s − 1)⁴(s + 10)⁵/[(s² + 2s + 10)(s + 2)⁴(s + 12)⁴]
3. L(s) = 2(s² − 2s + 10)⁴(s + 10)⁵/(s + 1)¹⁰
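Returning to Exercise 4.26, identities of that kind are convenient to check numerically on a particular instance before attempting a proof. The Python sketch below uses the hypothetical closed loop T(s) = (0.5s + 1)/[(s + 1)(2s + 1)] (an example of our own choosing, not from the text), for which the identity predicts ∫₀^∞ e(t) dt = (1 + 2) − 0.5 = 2.5:

```python
# Exercise 4.26 sanity check on a hypothetical example (not from the text):
# T(s) = (0.5s+1)/[(s+1)(2s+1)] = (0.25s+0.5)/(s^2+1.5s+0.5); the identity
# predicts int_0^inf e(t) dt = (1 + 2) - 0.5 = 2.5 for the step error e = 1 - y.
dt, t_end = 1e-3, 60.0
x1 = x2 = 0.0                           # controllable-canonical states
acc, t = 0.0, 0.0
while t < t_end:
    y = 0.5 * x1 + 0.25 * x2            # output of T for the unit step input
    acc += (1.0 - y) * dt               # accumulate the step error
    dx1, dx2 = x2, -0.5 * x1 - 1.5 * x2 + 1.0
    x1 += dx1 * dt
    x2 += dx2 * dt
    t += dt
print(acc)
```

The crude Euler integration already lands within about one part in a thousand of the predicted 2.5, since the slowest error mode (at s = −0.5) has decayed completely by t = 60.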

Exercise 4.30: This exercise has seven parts; provide a theoretical analysis for each. (1) Characterize the phenomena of Example 4.21. (2) When does the phenomenon of Example 4.22 occur? (Hint: It is helpful to recall Exercise 2.5 of Chapter 2: System Representation.) (3) Can an MP system exhibit only the initial undershoot and not the overshoot? (4) Can an MP system show the inverse response feature multiple times (similar to an NMP system with multiple NMP zeros)? (5) Can an MP system exhibit the initial inverse-sign phenomenon (similar to Example 4.21)? (6) Consider the system P(s) = (as³ + bs² + cs + d)/(s³ + 12s² + 16s + 13). When does it show inverse response as an MP system? (7) Repeat part (6) when the denominator is a general stable third-order polynomial.

Exercise 4.31: Consider the system P(s) = A₁/(s + p₁) + A₂/(s + p₂). (1) Under what condition does the system exhibit an overshoot? (2) Under what condition does the system exhibit inverse response?

Exercise 4.32: Do the following systems exhibit inverse response? Why?
1. P(s) = (2s² + 100s + 1)/(s + 1)³
2. P(s) = (2s² − 100s + 1)/(s + 1)³
3. P(s) = (2s² + 100s − 1)/(s + 1)³
4. P(s) = (s² − 100s + 1)/(s + 1)³
5. P(s) = (s² + 100s − 1)/(s + 1)³
6. P(s) = (s² − 100s − 1)/(s + 1)³
7. P(s) = (s² + 0.4s + 1)/(s² + 1.2s + 1)
8. P(s) = (s² − 0.4s + 1)/(s² + 1.2s + 1)
9. P(s) = (2s + 1)(s − 1)/(s + 1)²
10. P(s) = (s² − 1)/(s + 1)²

Exercise 4.33: How many overshoots and undershoots do the following systems exhibit? Why?
1. P(s) = (2s² + 100s + 1)(s − 50)/(s + 1)⁴
2. P(s) = (s² − 100s + 1)(s − 50)/(s + 1)⁴
3. P(s) = (s² − 100s + 1)(s − 1)/(s + 1)⁴
4. P(s) = (s² + 100s − 1)(s − 50)/(s + 1)⁴
5. P(s) = (s² + 100s − 1)(s − 1)/(s + 1)⁴

Figure 4.55 Exercise 4.34.

Exercise 4.34: A project manager is faced with the following problem. He needs a tracking system for his type-0 plant to follow a step input. The system is subjected to an input disturbance of the kind d = At, see Fig. 4.55. He forwards this problem to his R&D team to design the control system. He is given the following two answers and is to decide between them or end up with a third, best option. What is the right option for him?

Design 1: A 2-DOF control structure is proposed as in the following. Engineers first design C₂ for disturbance rejection. They say Y = [P/(1 + C₁C₂P)]D, thus yss = lim_{s→0} s·[P/(1 + C₁C₂P)]·(A/s²), and thus choose C₂ = N₂/(s²D₂), a causal one being something like C₂ = (as² + bs + c)/s². Then they design C₁ for reference following. They use E = [1 − C₁P/(1 + C₁C₂P)]R and thus decide that C₁ = P⁻¹·1/(1 − C₂). They also perform a stability analysis to choose the unknown parameters in C₂.

Design 2: A 1-DOF control structure. They say E = [P/(1 + CP)]D and thus conclude that the controller should be of the kind C = N/(s²D). Then they continue and say that with this controller the tracking objective is also achieved, and thus conclude the design by a stability analysis.

Exercise 4.35: Consider the 2-DOF control structure as given in the previous Exercise 4.34. Discuss the correctness of the following arguments: “Let C₂ = 1/s. Thus Y = sR and yss(t) = lim_{s→0} s²R. Hence for t ≥ 0, if r = 1 then yss(t) = 0, if r = t then yss(t) = 1, and if r = 0.5t² then yss(t) = ∞.” If needed discuss versus C₁ and P.

Exercise 4.36: Consider the actual 1-DOF control structure of Fig. 4.25. Design a step-tracking controller for the system with the sensor dynamics Ps(s) = T/(s + T), T ≫ 1, and the general plant models P(s) = k/(s + p), P(s) = k(s + z)/(s + p), P(s) = k/[s(s + p)], P(s) = k(s + z)/[s(s + p)], P(s) = (bm s^m + b_(m−1) s^(m−1) + ⋯ + b₁s + b₀)/(s^n + a_(n−1) s^(n−1) + ⋯ + a₁s + a₀). Note that the parameters are general. In particular, this means that we consider NMP and unstable systems like P(s) = (s − z)/[s(s − p)], z ≥ 0, p ≥ 0, as well.

Exercise 4.37: Repeat Exercise 4.36 for the ramp input.

Exercise 4.38: Repeat Exercise 4.36 for the parabola input.

Exercise 4.39: Consider the alternative actual 1-DOF control structure of Fig. 4.31. Repeat Exercise 4.36.

Exercise 4.40: Repeat Exercise 4.39 for the ramp input.

Exercise 4.41: Repeat Exercise 4.39 for the parabola input.

Exercise 4.42: Reconsider Exercises 4.36-4.41. For cases in which the problem does not admit a solution try a 2-DOF control structure, see Fig. 4.32 of Section 4.12 as well as the left panel of Fig. 1.35 of Chapter 1 with sensor dynamics.

Time response

345

Exercise 4.43: Study the effect of input and output disturbances on the control systems designed in Exercises 4.36–4.42.
Exercise 4.44: Reconsider Examples 4.25, 4.27, 4.28, and 4.30. Analyze the systems in the following cases: (1) The delay T = 0.2 second is in the forward path. (2) The delay T = 0.3 second is in the feedback path. (3) The delays T = 0.2 second and T = 0.3 second are in the forward and feedback paths, respectively.
Exercise 4.45: Design a step-tracking controller in the forward path for the following systems and analyze the stability and error of the system. The sensor and plant are given by Ps(s) = 50/(s + 50), P1(s) = 1/(s + 2), P2(s) = 2/(3s + 1), P3(s) = 1/[s(2s + 1)], P4(s) = (s − 1)/s, P5(s) = 1/(s − 1), and P6(s) = (s + z)/(s + p), z, p ∈ ℝ. (1) The delay T = 0.2 second is in the forward path. (2) The delay T = 0.3 second is in the feedback path. (3) The delays T = 0.2 second and T = 0.3 second are in the forward and feedback paths, respectively.
Exercise 4.46: Design a step-tracking controller in the feedback path for the following systems and analyze the stability and error of the system. The sensor and plant are given by Ps(s) = 50/(s + 50), P1(s) = 1/(s + 2), P2(s) = 2/(3s + 1), P3(s) = 1/[s(2s + 1)], P4(s) = (s − 1)/s, P5(s) = 1/(s − 1), and P6(s) = (s + z)/(s + p), z, p ∈ ℝ. (1) The delay T = 0.2 second is in the forward path. (2) The delay T = 0.3 second is in the feedback path. (3) The delays T = 0.2 second and T = 0.3 second are in the forward and feedback paths, respectively.
Exercise 4.47: Repeat Exercises 4.45 and 4.46 for the ramp input.
Exercise 4.48: Repeat Exercises 4.45 and 4.46 for the parabola input.
Exercise 4.49: Consider the control structures we studied in Section 4.12 in which the feedback gain is different from 1. Derive the conditions for their internal stability. This exercise is followed up in Exercise 10.62 of Chapter 10.
Exercise 4.50: In the standard 1-DOF control structure we have observed that as long as stability of the system is maintained, perturbations in the parameters of the system do not affect the steady-state error, provided that the type of the system is not decreased. Thus the steady-state error of the system is robust, or insensitive, to perturbations (in the above settings). For the control structures considered in Section 4.12, investigate the sensitivity of the steady-state error to perturbations. What is the conclusion?
Exercise 4.51: Consider a problem in the control structures we studied in Section 4.12 which does not admit an exact solution. (1) Propose an alternative design procedure which results in an acceptable predetermined steady-state error. (2) Develop a counterpart theory for 'error constants' and 'steady-state errors' in these structures.
Exercise 4.52: In the standard feedback structure a necessary condition for tracking step, ramp, and parabolic inputs is that the system does not have any zero at the origin. (1) Is this also the case for sinusoidal inputs? (2) How about other feedback structures? (3) How about other reference inputs?
Exercise 4.53: Reconsider Example 4.27. (1) Repeat it when the controller is designed in the forward path. (2) In both cases simulate the control signal, i.e., the output of the controller. Compare the two with each other.
Exercise 4.54: Repeat the counterpart of Exercise 4.53 for Examples 4.28 and 4.29 and Problems 4.30, 4.31, 4.33, and 4.34.
Exercise 4.55: Reconsider the 2-DOF control structures in Exercise 1.53 with u1 := yC and u2 := yF. How do the control signals differ from each other with regard to peak value and energy? (This is a general form of Exercises 4.53 and 4.54.)

346

Introduction to Linear Control Systems

Exercise 4.56: Repeat Exercise 4.55 when we include the sensor dynamics and/or delay. (This is the general form of Exercises 4.53 and 4.54.)
Exercise 4.57: Reconsider the 2-DOF control structures in Exercise 1.53. Compare the error signals in these structures. Which one is better? (In a certain sense that should be defined, e.g., smaller, or smoother, etc. Note that again a measure should be defined for quantifying these features.)
Exercise 4.58: Consider the standard open-loop control structure with output disturbance. Let P(s) = (s + 2)/(2s + 3) and d(t) = t. If we choose C(s) = [(s² − 1)/s²]P⁻¹(s) then we get Y(s) = R(s). 1. Explain why this design is not acceptable. 2. Explain the design of a forward disturbance compensator for this system.
Exercise 4.59: Consider a standard 1-DOF control structure. Let C(s) = 1/s and P(s) = 1/(s + 1). Find y(t) = y_r(t) + y_di(t) + y_do(t) + y_n(t) in the following cases:
1. r(t) = 2 step(t), di(t) = 2 step(t), do(t) = 0.25 step(t), n(t) = 0.01 sin(40t) + 0.02 sin(60t).
2. r(t) = 2 step(t), di(t) = 0.5t, do(t) = 0.25 sin(t), n(t) = 0.
3. r(t) = 2 step(t), di(t) = 0.25 sin(t), do(t) = 0.25t, n(t) = 0.
4. r(t) = 2 step(t), di(t) = 0.25t², do(t) = 0.25t², n(t) = 0.
Exercise 4.60: Provide the details of the analyses and design procedures in Section 4.13. In particular, explicitly express all the assumptions that we need to make. Note that so far you have acquired the knowledge to answer this question; just pay enough attention!
Exercise 4.61: Consider the systems given below. Design a controller for the rejection of the given disturbance(s) and/or noise and tracking of the given reference input. (They all admit a solution.) You had better come back to this exercise after reading Chapter 5; then you can simply carry out the design procedure.
1. P(s) = (s + 1)/[s(s + 1)], do = sin 2t + step(t), r = step(t), ramp(t), parabola(t)
2. P(s) = (s + 1)/[s(s + 1)], di = sin 2t + step(t), r = step(t), ramp(t), parabola(t)
3. P(s) = (s + 1)/[s(s + 1)], di = sin 2t + ramp(t), r = step(t), ramp(t), parabola(t)
4. P(s) = (s − 1)/[s(s + 1)], do = sin 2t + step(t), r = step(t), ramp(t), parabola(t)
5. P(s) = (s − 1)/[s(s + 1)], di = sin 2t + step(t), r = step(t), ramp(t), parabola(t)
6. P(s) = (s − 1)/[s(s + 1)], di = sin 2t + parabola(t), r = step(t), ramp(t), parabola(t)
7. P(s) = (s − 1)/[s(s + 1)], do = sin 2t + ramp(t), r = step(t), ramp(t), parabola(t)
8. P(s) = (s − 1)/[s(s + 1)], di = sin 2t + ramp(t), r = step(t), ramp(t), parabola(t)
9. P(s) = (s − 1)/[s(s + 1)], n = sin 20t, r = step(t), ramp(t), parabola(t)
10. P(s) = (s − 1)/[s(s + 1)], n = sin 20t, r = step(t), ramp(t), parabola(t)
11. P(s) = (s − 1)/[s(s + 1)], di = sin t + step(t), do = sin t, n = sin 20t, r = ramp(t)
12. P(s) = (s − 1)/[s(s + 1)], di = sin t + step(t), do = sin t + ramp(t), n = sin 20t, r = parabola(t)
Exercise 4.62: Consider the following internally stable systems (a)–(f) in the standard 1-DOF structure. Some obvious assumptions are made: the plant and controller are both causal, and P1(s), C1(s) do not have any poles or zeros on the imaginary axis.
1. Discuss the tracking and rejection properties for different (un)wanted inputs as we did in Example 4.36.
2. Denote up(t) = u(t) + di(t), y(t) = yp(t) + do(t), ym(t) = y(t) + n(t), with obvious meaning for the terms. For different (un)wanted inputs find e_ss(t), u_ss(t), up_ss(t), yp_ss(t), ym_ss(t), y_ss(t). The subscript ss stands for steady state.
(a) P(s) = [1/(s² + 9)]P1(s), C(s) = (1/s)C1(s)
(b) P(s) = (1/s)P1(s), C(s) = (s² + 400)C1(s)
(c) P(s) = (s² + 100)P1(s), C(s) = {1/[s(s² + 4)]}C1(s)


(d) P(s) = [1/(s² + 9)]P1(s), C(s) = [(s² + 400)/(s(s² + 4))]C1(s)

(e) P(s) = (s² + 100)P1(s), C(s) = [(s² + 400)/(s(s² + 4))]C1(s)

(f) P(s) = (s² + 100)/[s(s² + 1)], C(s) = [(s² + 400)/(s(s² + 4))]C1(s)
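To see the internal-model idea behind Exercise 4.62 at work, the sketch below takes case (a) with the simplifying assumption P1(s) = C1(s) = 1 (our choice, not the book's), so that L(s) = 1/[s(s² + 9)]. Wherever L has an imaginary-axis pole, the sensitivity S = 1/(1 + L) vanishes; this is a formal frequency-response identity, and closed-loop stability must still be verified separately.

```python
# Sensitivity magnitude for L(s) = 1/(s (s^2 + 9)), i.e., Exercise 4.62
# case (a) with the assumed P1 = C1 = 1. |S(jw)| -> 0 near w = 0 (pole of L
# at the origin) and near w = 3 (poles at +/- 3j), but not at other frequencies.
def S(s):
    L = 1.0 / (s * (s**2 + 9.0))
    return 1.0 / (1.0 + L)

for w in [1e-4, 3.0001, 20.0]:
    print(f"|S(j{w:g})| = {abs(S(1j * w)):.6f}")
```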

Exercise 4.63: Consider a system which rejects the disturbance sin ωt. What happens if the disturbance is cos ωt?
Exercise 4.64: Consider a system which rejects the noise sin ωt. What happens if the noise is cos ωt?
Exercise 4.65: Consider a system which tracks the reference sin ωt. What happens if the reference is cos ωt?
Exercise 4.66: Consider a system which tracks the reference r(t) = 0.5t². (1) What happens if the reference is r(t) = t² sin ωt? (2) What happens if the reference is r(t) = t² − t + 2 step(t)?
Exercise 4.67: This exercise has two parts. (1) Explain the procedure for designing, if possible, an LTI controller for rejecting a step/ramp/parabolic/sinusoidal additive disturbance d_r in r(t) (as r + d_r). (2) How about an additive disturbance d_e in e(t) (as e + d_e)?
Exercise 4.68: Explain the procedure for designing, if possible, an LTI controller in the standard 1-DOF structure for rejecting a disturbance and/or noise and/or tracking a setpoint, when the (un)wanted input is (1) cos ωt, (2) cos ωt + t, (3) tanh ωt, (4) t sin ωt.
Exercise 4.69: Consider a 3-DOF control structure as we introduced in Exercise 1.19 of Chapter 1. Discuss how we should take advantage of the 3-DOF controller and when we need it.
Exercise 4.70: Explain, via a simple example, how with a non-causal controller we may be able to improve the performance of a system.
Exercise 4.71: Consider the stable systems in items (a)–(g) below and a step input.
1. Assume that they represent the loop gain G = L = CP. By simulation find the maximum and minimum of t_r, t_s, M_p, M_os, M_us in the open-loop structure. Note that M_os, M_us are defined for inverse-response curves as in Fig. 4.56, and more will be said about them in Chapter 10. For the case y0 = 0, y_ss = 1, the quantities M_os, M_us are defined as the magnitude (not percentage) of the overshoot and undershoot. (How should we define them otherwise?) In particular there holds M_p = 100 M_os. For the steady-state error compute the bounds theoretically.
2. Assume that they represent the closed-loop transfer function in the 1-DOF structure, G = L/(1 + L). In addition to t_r, t_s, M_p, M_os, M_us, find the maximum and minimum of the error signal as well.
3. For item (g), in addition to the parameters in part (2), find the range of K for closed-loop stability and the maximum and minimum of the control signal as well.

Figure 4.56 Exercise 4.71.


(a) G(s) = ½1 3 2s 1 ½1 4 3s 1 ½ 2 2 6
(b) G(s) = 2s 1 ½1 4 3s 1 ½ 2 2 6
(c) G(s) = 2s 1 ½1 4 ½22 6
(d) G(s) = ½0:1 4s 1 ½0:5 2 2s2 1 ½1 2 6s 1 ½ 2 1 2
(e) G(s) = ½0:1 ½ 2 2s 2 1 ½1 4s 1 ½0:5 2 2 1½21 2s 1 ½ 2 0:1
(f) G(s) = ½1 ½ 2 2 3s3 16s ½1 2s2 1 ½8 9s 1 ½2
(g) C(s) = K ½1½2 3s6s11½ 2½12 20, P(s) = ½1 ½0:1 ½1 ½0:1 ½21 ½0:1 4 3 s 1 ½1 2 2s 1 ½ 2 2 2 1

Exercise 4.72: Consider a general system which can track a step reference, and whose step response is like the right panel of Fig. 4.21. Can its second peak (first undershoot) be larger than its first peak (first overshoot)? Consider both linear and nonlinear systems.
Exercise 4.73: This question is for widening your understanding of the lessons of this chapter. Suppose that our plant output is desired to track a setpoint which is not any of the setpoints we consider in this chapter. What do you think we should do? (You will learn the answer in graduate courses. See also Bazaei et al. (2017).)

References

Allwright, D.J., 1980. A note on Routh-Hurwitz determinants and integral square errors. Int. J. Control. 31 (4), 807-810.
Azimi-Sadjadi, M.R., Khorasani, K., 1992. A model reduction method for a class of 2-D systems. IEEE Trans. Circuits Syst. I: Fund. Theory Appl. 39 (1), 28-41.
Bavafa-Toosi, Y., 2000. Eigenstructure Assignment by Multivariable Output Feedback. MEng Thesis (Systems & Control). Department of Electrical Engineering, K.N. Toosi University of Technology, Tehran, Iran.
Bazaei, A., Chen, X., Yong, Y.K., Moheimani, S.O.R., 2017. A novel state transformation approach to tracking of piecewise linear trajectories. IEEE Trans. Control Syst. Tech. in press.
Benhamou, F., Goualard, F., Granvilliers, L., 1999. Revising hull and box consistency. In: Proc. of Int. Conf. Logic Programming, Las Cruces, NM. MIT Press, Boston, pp. 230-244.
Benner, P., Mehrmann, V., Sorensen, D. (Eds.), 2005. Dimension Reduction of Large-Scale Systems. LNCSE, vol. 45. Springer-Verlag, Berlin.
Besselink, B., Tabak, U., Lutowska, A., van de Wouw, N., Nijmeijer, H., Rixen, D.J., et al., 2013. A comparison of model reduction techniques from structural dynamics, numerical mathematics and systems and control. J. Sound Vib. 332 (19), 4403-4422.
Boje, E., 2003. Pre-filter design for tracking error specifications in QFT. Int. J. Robust Nonlinear Control. 13 (7), 637-642.
Brown, G.S., Campbell, D.P., 1948. Principles of Servomechanisms. Wiley, New York.
Chen, J., Xu, D., Shafai, B., 1995. On sufficient conditions for stability independent of delay. IEEE Trans. Autom. Control. 40 (9), 1675-1680.
Chestnut, H., Mayer, R.W., 1959. Servomechanisms and Regulating System Design, vol. 1, 2nd ed. Wiley, New York.
Daroogheh, N., Meskin, N., Khorasani, K., 2017. A dual particle filter-based fault diagnosis scheme for nonlinear systems. IEEE Trans. Control Syst. Tech. 1-18, in press.


Eslami, M., Nobakhti, A., 2016. Integrity of LTI time-delay systems. IEEE Trans. Autom. Control. 61 (2), 562-567.
Farnam, A., Esfanjani, R.M., 2016. Improved linear matrix inequality approach to stability analysis of linear systems with interval time-varying delays. J. Comput. Appl. Mathem. 294 (1), 49-56.
Gardner, M.F., Barnes, J.L., 1942. Transients in Linear Systems. Wiley, New York.
Glover, K., 1984. All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds. Int. J. Control. 39 (6), 1115-1193.
Hall, A.C., 1943. The Analysis and Synthesis of Linear Servomechanisms. Technology Press, MIT, Cambridge, MA.
Hazen, H.L., 1934. Theory of servo-mechanisms. J. Franklin Inst. 218, 279-331.
James, H.M., Nichols, N.B., Phillips, R.S., 1947. Theory of Servomechanisms. McGraw-Hill, New York.
Karimi-Ghartemani, M., Karimi, H., Bakhshai, A.R., 2009. A filtering technique for three-phase power systems. IEEE Trans. Instrum. Meas. 58 (2), 389-396.
Kenney, C., Hewer, G., 1987. Necessary and sufficient conditions for balancing unstable systems. IEEE Trans. Autom. Control. 32 (2), 157.
Khaki-Sedigh, A., Shahmansorian, M., 1996. Input-output pairing using balanced realizations. Electronic Lett. 32 (21), 2027-2028.
Kim, K., Shafai, B., 1995. Finite impulse response estimator (FIRE). IEEE Trans. Signal Process. 43 (9), 2186-2189.
Lang, N., Saak, J., Stykel, T., 2016. Balanced truncation model reduction for linear time-varying systems. Math. Comput. Model. Dyn. Syst. 22 (4), 267-281.
Lehotzky, D., Insperger, T., Stepan, G., 2016. Extension of the spectral element method for stability analysis of time-periodic delay-differential equations with multiple and distributed delays. Commun. Nonlinear Sci. Numer. Simul. 53, 177-189.
Malek-Zavarei, M., Jamshidi, M., 1981. Time Delay Systems: Analysis, Optimization and Applications. North-Holland, New York.
Miyamoto, H., Ohmori, H., Sano, A., 1999. Parameterization of all plug-in adaptive controllers for sinusoidal disturbance rejection. In: Proc. Am. Control Conf., vol. 3, pp. 2033-2034.
Mondie, S., Cuvas, C., 2016. Necessary stability conditions for delay systems with multiple pointwise and distributed delays. IEEE Trans. Autom. Control. 60 (7), 1987-1995.
Morari, M., Grimm, W., Oglesby, M.J., Prosser, I.D., 1985. Design of resilient processing plants—VII. Design of energy management system for unstable reactors—new insights. Chem. Eng. Sci. 40, 187-198.
Normey-Rico, J.-E., Camacho, E.F., 2007. Control of Dead-Time Processes. Springer, Berlin.
Ohmori, H., Narita, N., Sano, A., 1998. Plug-in adaptive controller for periodic disturbance rejection. In: Proceedings of the IEEE Conf. Decision and Control, vol. 4, pp. 4529-4530.
Ozcetin, H., Saberi, A., Sannuti, P., 1990. Special coordinate basis for order reduction of linear multivariable systems. Int. J. Control. 52 (1), 191-226.
Perev, K., Shafai, B., 1994. Balanced realization and model reduction of singular systems. Int. J. Syst. Sci. 25 (6), 1039-1052.
Porter, B., Khaki-Sedigh, A., 1987. Singular perturbation analysis of the step response matrices of a class of linear multivariable systems. Int. J. Syst. Sci. 2, 205-211.
Qi, T., Zhu, J., Chen, J., 2017. Fundamental limits on uncertain delays: when is a delay system stabilizable by LTI controllers? IEEE Trans. Autom. Control. 62 (3), 1314-1329.


Rohn, J., 1994. NP-hardness results for linear algebraic problems with interval data. In: Herzberger, J. (Ed.), Topics in Validated Computations. North-Holland, pp. 463-471.
Scarciotti, G., Astolfi, A., 2016a. Model reduction of neutral linear and nonlinear time-invariant time-delay systems with discrete and distributed delays. IEEE Trans. Autom. Control. 60 (6), 1438-1452.
Scarciotti, G., Astolfi, A., 2016b. Model reduction by matching the steady-state response of explicit signal generators. IEEE Trans. Autom. Control. 60 (7), 1995-2001.
Shafai, B., Ghadami, R., 2008. Model reduction for Metzlerian systems. In: Proceedings of the 47th IEEE Conf. Decision and Control, Cancun, Mexico, pp. 1833-1838.
Sottile, F., 2010. Frontiers of reality in Schubert calculus. Bull. Am. Math. Soc. 47 (1), 31-71.
Taghirad, H.D., Belanger, P.R., 1999. Intelligent built-in torque sensor for harmonic drive systems. IEEE Trans. Instrum. Meas. 48 (6), 1201-1207.
Taghirad, H.D., Belanger, P.R., 2001. H∞-based robust torque control of harmonic drive systems. J. Dyn. Syst. Meas. Control. 123 (9), 338-345.
Tavazoei, M.S., 2015. Reduction of oscillations via fractional order pre-filtering. Sig. Process. 107, 407-414.
Trombettoni, G., Papegay, Y., Chabert, G., Pourtallier, O., 2010. A box-consistency contractor based on extremal functions. In: Cohen, D. (Ed.), Principles and Practice of Constraint Programming. Lecture Notes in Computer Science, vol. 6308. Springer, Berlin, pp. 491-498.
Tsuruoka, H., Ohmori, H., 2003. Adaptive output regulation for nonlinear systems with unknown periodic disturbances. IEEJ Trans. Ind. Appl. 123 (10), 1104-1110.
Vidyasagar, M., 1986. On undershoot and nonminimum phase systems. IEEE Trans. Autom. Control. 31, 440.
Volkov, D.P., Zubkov, Y.N., 1978. Vibration in a drive with harmonic gear transmission. Russ. Eng. J. 58 (1), 17-21.
Wilson, B.H., Shafai, B., Uddin, V., 1998. A structurally based approach to model order reduction using bond graphs. In: Proceedings of the American Control Conference, vol. 1, Philadelphia, USA, pp. 151-156.
Zhang, L., 2016. Multi-input partial eigenvalue assignment for high order control systems with time-delay. Mech. Syst. Sig. Process. 72-73, 376-382.
Zhong, Q.-C., 2006. Robust Control of Time-Delay Systems. Springer, Berlin.

5 Root locus

5.1 Introduction
Apart from direct computation of the closed-loop poles, which was not a good choice before the computer era, we have learned two main methods for verifying the stability of a given system: the Hurwitz and Routh stability tests. Despite their usefulness they have a serious shortcoming: they give only a yes/no answer to the stability question and no qualitative information. In control system analysis and synthesis we are further interested in qualitative information on how the system performs: time-response quality in addition to stability. In Evans (1948, 1950)1 W. R. Evans initiated a depictive method for finding the locus of the closed-loop poles. From the location of the dominant poles (the ones closer to the origin) one could thus get qualitative information on whether the system is slow, oscillatory, etc. This method, called the root locus method,2 was further enhanced by others over the decades.3 In brief, it is used both for the stabilization task (it provides guidelines on how to design a stabilizing controller) and for the objective of performance achievement (qualitative information about the response). The first issue is discussed in this chapter. The second issue requires depictive analysis and is not really practiced today; we address it to a modest extent. It is noteworthy that part of the existing literature provides somewhat false guidelines in the aforementioned respects; these problems will be discussed and rectified in this chapter.
The root locus is the locus of the closed-loop poles (i.e., the roots of the characteristic equation) as the gain (in general, any single parameter) varies4 from zero to infinity (or minus infinity). It shows the contribution of each open-loop pole and zero to the locations of the closed-loop poles. In other words, the closed-loop poles are obtained from the open-loop poles and zeros as the gain varies, i.e., with the gain as the parameter. The method is introduced in Section 5.2. In Section 5.3 we extend the method to the so-called root contour method, where more than one parameter varies. Next, we discuss the problem of determining the gain from the root locus for satisfactory performance in Section 5.4. Finally, the implications of the root locus method for controller design are considered in Section 5.5. With numerous instructive examples we learn how the method can be used for controller design for simple and difficult systems, i.e., systems

1. American control theorist (1920-99). It is noteworthy that he proposed the root locus method when he had a bachelor's degree, earned in 1941. He obtained his master's degree in electrical engineering in 1951.
2. The root locus method is not restricted to LTI systems. However, we consider only LTI systems.
3. Later another look at the root locus was cast in the context of robust control. This will be explained later.
4. Note that the gain does not really vary. It is not time-varying. What is meant is that it takes any constant value in the stated range, e.g., zero to infinity. It is customary to express this issue by the term 'vary'. (The story of LTV systems is different. Recall that in general the eigenvalues, or the closed-loop poles, of LTV systems are not indicative of their stability.)

Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00005-7 © 2017 Elsevier Inc. All rights reserved.


without and with non-minimum phase (NMP) zeros. The chapter is wrapped up by a summary, further readings, worked-out problems, and exercises in Sections 5.6-5.9. Before presenting the method of depicting and constructing the root locus for a given system in Section 5.2, we illustrate through several examples what the locus of the closed-loop poles is and what qualitative information it gives. It must be clarified that what is done in the following to obtain the locus of the closed-loop poles is not what is done in the root locus method.

Example 5.1: Let the open-loop system be L(s) = K(s + z)/(s + p). The closed-loop pole, computed from 1 + K(s + z)/(s + p) = 0, is s = −(Kz + p)/(1 + K), and there hold the relations K = 0 → s = −p and K = ∞ → s = −z. For positive values of the gain, different possibilities of the root locus are shown in Fig. 5.1.

Figure 5.1 Root locus of Example 5.1.

It is seen that the two systems on the left are stable for any positive value of K. The third one is stable for large values of K and the fourth one for small values of K. If both p and z are negative, then the system is unstable for positive values of the gain. This is not shown in this figure.
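The single closed-loop pole of Example 5.1 can also be traced numerically. The sketch below (with illustrative values z = 1, p = 3, which are our assumptions and not from the book) roots the characteristic polynomial (1 + K)s + (p + Kz) for a few gains:

```python
import numpy as np

# Closed-loop pole of L(s) = K(s+z)/(s+p) for assumed values z = 1, p = 3:
# the characteristic equation is (1+K)s + (p + K z) = 0, so the pole is
# s = -(K z + p)/(1 + K).
z, p = 1.0, 3.0
for K in [0.0, 1.0, 10.0, 1e6]:
    pole = np.roots([1.0 + K, p + K * z])[0]
    print(f"K = {K:g}: pole = {pole:.6f}")
# The pole drifts from -p (at K = 0) toward -z (as K -> infinity).
```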

Remark 5.1: In the above figure and the next problems and figures in this chapter (and in fact in future chapters when needed) a finite pole is designated by a cross mark whereas a finite zero is designated by a circle. Infinite zeros (like in Example 5.2) or poles (like in the Worked-out Problem 5.30) are not marked by any symbol. Further, observe in this and the next examples that the locus of the closed-loop pole(s) starts at the open-loop pole(s) and ends at the open-loop zero(s). We will shortly see that this is always the case. (Note that any rational fraction has the same number of poles and zeros with the explanation that some zeros/poles may be infinite. In particular, some zeros are infinite when the fraction is strictly proper, and some poles are infinite when it is improper.) Remark 5.2: Another important observation should be made. When open-loop stable NMP systems are put under feedback, by high gain they become unstable, because by high gain some closed-loop poles approach the NMP zeros which are to the right of the j-axis. See the right panel in Fig. 5.1.


Example 5.2: Consider the system L(s) = K/((s + σ)² + ω²). The closed-loop poles are found from 1 + K/((s + σ)² + ω²) = 0 and thus are given by s = −σ ± j√(K + ω²). There hold the relations K = 0 → s = −σ ± jω and K = ∞ → s = −σ ± j∞. The root locus of the closed-loop system is given in Fig. 5.2.

Figure 5.2 Root locus of Example 5.2.

It is easily seen that the system is stable for all values of K. Moreover, an increase in K results in a decrease in ζ, more oscillations, an increase in M_p, an increase in ω_n and ω_d, and no change in σ.
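These trends can be made concrete. With the pole pair −σ ± j√(K + ω²), the natural frequency is ω_n = √(σ² + ω² + K) and the damping ratio is ζ = σ/ω_n. The sketch below uses the illustrative values σ = 1, ω = 2 (our assumptions):

```python
import math

# Damping trend of Example 5.2 for assumed sigma = 1, omega = 2: the poles
# -sigma +/- j*sqrt(K + omega^2) give omega_n = sqrt(sigma^2 + omega^2 + K)
# and zeta = sigma/omega_n, so zeta falls and omega_n grows with K while
# the real part -sigma is unchanged.
sigma, omega = 1.0, 2.0
for K in [0.0, 5.0, 50.0]:
    wn = math.sqrt(sigma**2 + omega**2 + K)
    print(f"K = {K:g}: omega_n = {wn:.3f}, zeta = {sigma / wn:.3f}")
```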

Example 5.3: Given the system L(s) = K/[(s + p1)(s + p2)], the closed-loop poles are given by s = −(p1 + p2)/2 ± (1/2)√((p1 − p2)² − 4K). We have the relations K = 0 → s = −p1, −p2 and K = ∞ → s = −(p1 + p2)/2 ± j∞. The root locus is depicted in Fig. 5.3.

Figure 5.3 Root locus of Example 5.3.

It is observed that for small K there holds ζ > 1 and the system is overdamped. At a certain value of K the system is critically damped and s1 = s2 = −(p1 + p2)/2. (What is the aforementioned certain value?) For larger K's the system becomes underdamped, ζ < 1, and the argument of Example 5.2 applies.


Example 5.4: Suppose the system is given by L(s) = K(s + 2)/(s² + 2s + 2). The closed-loop poles are found from s² + (K + 2)s + 2(K + 1) = 0, or s = −(K + 2)/2 ± (1/2)√((K − 2)² − 8). It is observed that for −0.82 = 2 − 2√2 < K < 2 + 2√2 = 4.82 we have (K − 2)² − 8 < 0 and thus the system has complex poles. There hold
K = 0 → s1, s2 = −1 ± j;
K = 4.82 → s1 = s2 = −2 − √2 = −3.41;
K = ∞ → s1, s2 = −(1/2)[(K + 2) ± (K − 2)] = −2, −K = −2, −∞.
See Fig. 5.4. It is observed that the system is stable for all positive values of K. For small K the system is underdamped, ζ < 1, and an increase in K results in an increase in ζ and a decrease in M_p. For larger K the system is overdamped.

Figure 5.4 Root locus of Example 5.4.
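The transition between complex and real closed-loop poles in Example 5.4 can be checked numerically by rooting the characteristic polynomial for a few gains; the discriminant (K − 2)² − 8 changes sign at K = 2 ± 2√2. A minimal sketch:

```python
import numpy as np

# Closed-loop poles of Example 5.4: roots of s^2 + (K+2)s + 2(K+1).
# They are complex exactly when (K-2)^2 - 8 < 0, i.e., for
# 2 - 2*sqrt(2) < K < 2 + 2*sqrt(2).
for K in [0.0, 2.0, 4.83, 10.0]:
    roots = np.roots([1.0, K + 2.0, 2.0 * (K + 1.0)])
    kind = "complex" if abs(roots[0].imag) > 1e-9 else "real"
    print(f"K = {K:g}: {kind} poles {np.round(roots, 3)}")
```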

Question 5.1: Before going to the next section, draw the root locus of the aforementioned Examples 5.1-5.4 for negative values of the gain. Repeat this for the systems with open-loop transfer functions L(s) = K/s, L(s) = K/(s + p), L(s) = K/s², L(s) = K/(s² + ω²) and analyze them. Can you infer any further information or governing rule? The answer is provided in the ensuing Section 5.2.

5.2 The root locus method

In the above examples we saw only the locus of the closed-loop poles for some simple systems. For this we solved for the closed-loop poles in terms of K. For higher-order systems this is practically impossible to do. This gave rise to the need for a method for obtaining the locus of the closed-loop poles without computing them. The method is called "the root locus," and as stated before it was

Root locus

355

“initiated” by W. R. Evans. In the following, several rules are presented based on which the root locus is built upon the open-loop poles and zeros (without solving for the closed-loop poles). These rules are much richer than the original contribution of Evans. If we accept approximate drawing of the root locus, in general the method can easily be implemented “by hand,” as you will learn in this chapter. Only in pathological cases does the method get complicated, as will be shown in Remark 5.17. It should be noted that the method was further facilitated by a tool called the “Spirule” (due to Evans himself). With the introduction of computers, however, the tool fell out of use, and specialized software like MATLAB has built-in commands that accomplish this task. Now let us start.
The root locus is built on the open-loop poles and zeros obtained from L(s) = K′(T1 s + 1)(T2 s + 1)···/[s^N (Ta s + 1)(Tb s + 1)···], where 1 + L(s) = 0 is the characteristic equation. Rewrite L(s) as L(s) = K(s + z1)···(s + zm)/[s^N (s + p1)···(s + pn′)] (after factorizing the T's and embedding them in K). In the context of steady-state analysis, where pure integrators play a crucial role, we worked with this model. In the context of root locus analysis a more useful form is L(s) = K(s + z1)···(s + zm)/[(s + p1)···(s + pn)], where N + n′ = n. In the case of pure integrators some p's are zero. The root locus is constructed using the eight rules listed below. The characteristic equation is 1 + L = 0.

Rule one
1. The root locus (with either positive gain, negative gain, or both) is symmetric with respect to the real axis.
2. The root locus with both positive and negative gain5 is symmetric with respect to the (vertical) axis of symmetry of the open-loop pole/zero configuration, if any.

Proof (1): The root locus is the locus of the closed-loop poles, which are always symmetric with respect to the real axis, because they are the roots of the characteristic equation, which has real coefficients.6 It is worth noting that the real axis is always an (in fact, the horizontal) axis of symmetry of the open-loop pole/zero configuration.
Proof (2): If the system possesses such an axis of symmetry, then with a linear transformation we can map it onto the real axis of the transformed plane. The root locus is thus symmetric with respect to that axis of symmetry as well. Thus, e.g., if you see a root locus that is asymmetric about the real axis (irrespective of the sign of K), it is certainly wrong. The counterpart statement holds for the axis of symmetry of Part 2.
Rule two
The root locus has max{m, n} branches, since the characteristic equation has max{m, n} roots, starting at the open-loop poles (with K = 0) and ending at the open-loop zeros (with K = ∞).

5. The so-called complete root locus, according to some.
6. This symmetry is not valid for polynomial equations with complex coefficients. For example, the equation s² + 2s + 1 − 2j = 0 has the roots j and −2 − j, which are not symmetric about the real axis.


Proof: From the definition of the root locus it follows that any point on it (which is a closed-loop pole) satisfies the characteristic equation. Define 1 + L(s) := 1 + K N(s)/D(s). Hence D(s) + K N(s) = 0. Thus there holds K = 0 → D(s) = 0, i.e., we are at the open-loop poles. On the other hand, 1/K + N(s)/D(s) = 0, so there holds K = ∞ → N(s)/D(s) = 0, which happens in two cases. The first case is N(s) = 0, which is an open-loop zero. The second case is s = ∞ for strictly proper systems.7 Such infinite zeros are also open-loop zeros and their number is n − m. In brief, the root locus starts at the open-loop poles (with K = 0) and ends at the open-loop zeros (with K = ∞). Because there are max{m, n} closed-loop poles, the root locus has max{m, n} branches.
Rule three
For positive gain, a point on the real axis belongs to the root locus if the total number of open-loop poles and zeros to its right is odd.
Proof: Break the characteristic equation into (1) the angle condition and (2) the magnitude condition.
Angle condition: There holds 1 + L(s) = 0. Hence L(s) = −1 and ∠L(s) = (1 + 2h)180°, h = 0, ±1, ±2, .... Thus,8,9 Σ∠z − Σ∠p = (1 + 2h)180°. Because transfer functions are usually strictly proper, for convenience we write it as Σ∠p − Σ∠z = (1 + 2h)180°. On the other hand, let s be a point on the real axis. Then there holds
∠L(s) = contribution of complex poles and zeros + contribution of real poles and zeros left of s + contribution of real poles and zeros right of s.
In the above sum the first two terms are zero, and therefore
∠L(s) = 180° × (number of real zeros right of s − number of real poles right of s).
Thus the number of real zeros right of s minus the number of real poles right of s must be odd, which for the sake of simplicity is restated as "the number of real zeros right of s plus the number of real poles right of s must be odd." That is, the total number of real open-loop poles and zeros to its right must be odd.
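The start and end points just derived can be observed numerically by rooting D(s) + KN(s) for a very small and a very large gain. The example below is ours (an assumed L(s) = K(s + 1)/(s² + 2s + 2), not one from the book):

```python
import numpy as np

# Roots of D(s) + K N(s) for N = s + 1 and D = s^2 + 2s + 2 (assumed example).
N = np.array([0.0, 1.0, 1.0])   # s + 1, padded to the length of D
D = np.array([1.0, 2.0, 2.0])   # s^2 + 2s + 2
for K in [1e-9, 1e9]:
    print(f"K = {K:g}: roots = {np.roots(D + K * N)}")
# For K ~ 0 the roots sit at the open-loop poles -1 +/- j; as K -> infinity
# one branch ends at the finite zero s = -1 while the other runs off toward
# s ~ -K, the "infinite zero" of this strictly proper L(s).
```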

7 Any rational transfer function has the same number of poles and zeros, with the explanation that some may be at infinity.
8 Recall that ∠(c1 c2) = ∠c1 + ∠c2 and ∠(c1/c2) = ∠c1 - ∠c2 for complex numbers c1, c2.
9 For K < 0: ∠L(s) = 360h, h = 0, ±1, ±2, .... The remaining part of the real axis (i.e., the portion where the total number of poles and zeros to the right of a point is even) relates to K < 0.

Root locus


Example 5.5: The real-axis part of the root locus (with positive gain) for some typical systems is given in Fig. 5.5. The multiplicity of repeated poles/zeros is written in parentheses next to them. The others are simple.
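The real-axis test of Rule three is a simple parity count and can be cross-checked against the angle condition directly. A sketch with illustrative pole/zero data (L(s) = K(s + 1)/[s(s + 2)], not one of the systems in Fig. 5.5):

```python
def on_positive_gain_locus_real_axis(sigma, poles, zeros):
    """Rule three: a real-axis point sigma is on the K > 0 locus iff the
    total number of real open-loop poles and zeros to its right is odd."""
    count = sum(1 for p in poles if p.imag == 0 and p.real > sigma)
    count += sum(1 for z in zeros if z.imag == 0 and z.real > sigma)
    return count % 2 == 1

# L(s) = K(s+1)/(s(s+2)); real-axis locus is (-1, 0) and (-inf, -2).
poles, zeros = [0 + 0j, -2 + 0j], [-1 + 0j]
print(on_positive_gain_locus_real_axis(-0.5, poles, zeros))   # True
print(on_positive_gain_locus_real_axis(-1.5, poles, zeros))   # False
print(on_positive_gain_locus_real_axis(-3.0, poles, zeros))   # True

# Cross-check with the angle condition: on the K > 0 locus, L(sigma) must be
# a negative real number (angle 180 degrees), since K = -1/L(sigma) > 0.
L = lambda s: (s + 1) / (s * (s + 2))
print(L(-0.5) < 0, L(-1.5) < 0, L(-3.0) < 0)
```

The parity predicate and the sign of L(σ) agree on all three test points.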


Figure 5.5 Real axis part of the root locus of some typical systems.

We should add that the right panel shows an improper system, like Problem 5.30.

Remark 5.3: For negative gain, in the above statement "odd" is replaced by "even." That is, the part of the real axis that belongs to the root locus is that for which the number of poles and zeros to its right is even. Note that this is the rest of the real axis.

Remark 5.4: The angle condition alone gives the root locus. The magnitude condition is introduced later in Remark 5.13.

Example 5.6: It is easy to verify the angle condition for Example 5.3, see Fig. 5.6. For any point on the root locus φ1 + φ2 = 180, where φ1, φ2 are the marked angles.


Figure 5.6 Angle conditions for the system of Example 5.6.

Question 5.2: How can we pictorially verify the angle condition for Example 5.4?

Rule four If n > m, as K → ∞, n - m branches tend to straight-line asymptotes radiating from a common point on the real axis, called the centroid or abscissa. If n < m, as K → 0, m - n branches tend to straight-line asymptotes radiating from a common point on the real axis, called the centroid or abscissa. This case refers to improper transfer functions and thus we present only one example of it, the worked-out Problem 5.30.


The abscissa on the real axis is at the point s = -σ where σ = (Σp - Σz)/(n - m), and the asymptote angles are ∠s = 180(2h + 1)/(n - m).

Proof: The root locus is the locus of s satisfying L(s) = -1. Let s be large. Then for positive gain lim_{s→∞} L(s) = K/s^(n-m) = -1, or (n - m)∠s = 180(2h + 1), and hence ∠s = 180(2h + 1)/(n - m).10

As for the centroid, we proceed as follows. For large values of s, only the first terms need to be considered. Thus, 1 + K s^m/s^n = 0 or 1 + K/s^(n-m) = 0. However, this equation, which is an approximation, indicates that the abscissa is at the origin, s = 0. A better approximation is obtained by considering the characteristic equation in the form 1 + K/(s + σ)^(n-m) = 0. Now, if we consider the first two terms of both the numerator and denominator of the characteristic equation and write it as 1 + K/[s^(n-m) + (Σp - Σz)s^(n-m-1) + ...] = 0, by equating these last equations we obtain σ = (Σp - Σz)/(n - m), which can be reduced to σ = (ΣRe(p) - ΣRe(z))/(n - m). The abscissa point on the real axis is given by s = -σ. Note that Σ(p) - Σ(z) should not be mistaken for Σ(poles) - Σ(zeros). In fact, there holds Σ(p) - Σ(z) = Σ(-poles) - Σ(-zeros).
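The centroid and asymptote angles are one-liners once the sign convention is fixed. The sketch below works with the actual pole/zero locations (so it returns the centroid point s = -σ directly, not the book's σ); the pole set is a sample four-pole configuration used only for illustration:

```python
def centroid_and_angles(poles, zeros):
    """Asymptote centroid (the point s = -sigma) and asymptote angles in
    degrees for positive gain, computed from actual pole/zero locations."""
    excess = len(poles) - len(zeros)
    point = (sum(p.real for p in poles) - sum(z.real for z in zeros)) / excess
    angles = [180.0 * (2 * h + 1) / excess for h in range(excess)]
    return point, angles

# Illustration: poles 0, -50, -40 +/- 10j, no finite zeros (n - m = 4).
poles = [0 + 0j, -50 + 0j, -40 + 10j, -40 - 10j]
c, ang = centroid_and_angles(poles, [])
print(c)     # -32.5
print(ang)   # [45.0, 135.0, 225.0, 315.0], i.e., +/-45 and +/-135 degrees
```

Note how the imaginary parts of the complex pair cancel, so only real parts matter, exactly as the reduction to ΣRe(p) - ΣRe(z) states.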

Remark 5.5: The sequel Fig. 5.7 illustrates the asymptotes for different systems, for positive gain. Note that for positive gain, only in the case of odd pole-zero excess is the real axis (more precisely, its left part) one of the asymptotes.

Remark 5.6: For negative gain, the previous figure is replaced by Fig. 5.8. Note that for negative gain, in the case of odd pole-zero excess the right part of the real axis is one of the asymptotes, and in the case of even pole-zero excess both the left and right parts of the real axis are two of the asymptotes.

Figure 5.7 Asymptotes for positive gain (panels for n - m = 1 through 6).

10 If n = m the root locus has no asymptotes.


Figure 5.8 Asymptotes for negative gain (panels for n - m = 1 through 6).

Rule five Break-away and break-in points are the roots of11 dK/ds = 0. Consequently, (d/ds)(-D/N) = 0, or D'N - N'D = 0. We note the following:

1. If the root locus exists between two poles, there is an odd number of break points between them.
2. If the root locus exists between two zeros, there is an odd number of break points between them.
3. If the root locus exists between a pole and a zero, between them there is either no break-away/break-in point or there are both in equal numbers; simply said, there is an even number of break points.

If a root of the above equation lies on the portion of the real axis belonging to the root locus, then that root is an actual break-away or break-in point. If there is a pair of complex conjugate roots, then the sign of K must be checked at that pair. If the root locus is drawn for positive K and the sign is also positive, then that pair is an actual pair of break-away or break-in points. Otherwise, that answer must be discarded. The same is true for negative K and negative sign.

Example 5.7: Some break-away and break-in points are given in Fig. 5.9.

Figure 5.9 Some typical configurations of break-away and break-in points.
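The condition D'N - N'D = 0 is easy to evaluate with elementary polynomial arithmetic. A sketch for the illustrative loop gain L(s) = K/[s(s + 2)] (not one of the configurations in Fig. 5.9), whose break-away point is known to be the midpoint -1:

```python
def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polyder(c):
    n = len(c) - 1
    return [coef * (n - i) for i, coef in enumerate(c[:-1])]

def polyval(c, s):
    r = 0
    for coef in c:
        r = r * s + coef
    return r

# L(s) = K/[s(s+2)]: N = 1, D = s^2 + 2s (coefficients, highest power first).
N, D = [1], [1, 2, 0]
# Numerator of d/ds(-D/N): D'N - N'D.
DpN = polymul(polyder(D), N)
NpD = polymul(polyder(N), D)   # N' = 0 here, so this is the zero polynomial
g = [a - b for a, b in zip(DpN, [0] * (len(DpN) - len(NpD)) + NpD)]
print(g)   # [2, 2], i.e., 2s + 2, so the only candidate is s = -1
# s = -1 lies on the real-axis locus between the poles 0 and -2, so it is an
# actual break-away point: an odd number of break points between two poles.
```

The candidate root must still pass the real-axis (or K-sign) test of this rule before being accepted, which it does here.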

Note that in some sense the terms break-away and break-in may be used interchangeably, in that, for example, in Fig. 5.57 at least some break points can actually be regarded as both a break-away and a break-in point. Also note that in this figure the lines which are shown quite vertically actually have some curvature; see worked-out Problem 5.25 and its explanation. Moreover, the existence of break-away and break-in points also depends on the system parameters. For example, in the middle panel, for some systems there are no break-away and break-in points: the root locus has three branches, one on the real axis and two symmetric branches which connect the complex poles and zeros to each other. Also note that certain systems with a pole-zero pattern like the middle panel have an intersection point, as will be explained in Rule eight.

11 Equivalently, it can be written as (d/ds)(N/D) = 0, thus N'D - D'N = 0.

Example 5.8: A complex break-away/break-in point is shown in Fig. 5.10. The systems are (s + 2)^2(s^2 + 2s + 2)/s^4 and (s^2 + 4s + 40)/[s(s + 4)(s^2 + 4s + 20)] on the left and right panels, respectively.

Figure 5.10 Complex break-away and break-in points.

The reader is also referred to the worked-out Problems 5.22-5.25 and Question 5.9.

Rule six The angles of departure from or arrival at complex poles or zeros are given by:

Angle of departure from a complex pole p = 180(2h + 1) - Σ∠(p - pi) + Σ∠(p - zi)

Angle of arrival at a complex zero z = 180(2h + 1) + Σ∠(z - pi) - Σ∠(z - zi)

in which the summation index i runs over the other poles and zeros. This is shown in Fig. 5.11 for a typical system.



Figure 5.11 Illustrating angle definition and computation of angles of departure and arrival.

Regarding Fig. 5.11, first note that angles are defined counterclockwise, as shown in the middle part. Next, if we denote ∠(p - pi) = αi (or ∠(z - pi) = αi) and ∠(p - zi) = βi (or ∠(z - zi) = βi), then the angle of departure from the pole p (the left panel) and the angle of arrival at the zero z (the right panel) are given by 180(2h + 1) - Σαi + Σβi and 180(2h + 1) + Σαi - Σβi, respectively.

Remark 5.7: For negative gain the term 180(2h + 1) is substituted by 360h. We emphasize that the above formulae are for complex poles and zeros. For real poles and zeros, if they are simple the angles do not need any computation, because of Rule three. If they are repeated then the angles are always symmetric with respect to the real axis, in a way further specified by Rule eight, which follows.

Example 5.9: Consider the systems S1: 1 + K(s + 2)/(s^2 + 2s + 2) and S2: 1 + K(s^2 + 2s + 2)/[(s - 1)(s - 2)]. For the system S1 the angle of departure from the pole -1 + j is computed as 180 - 90 + 45 = 135, which is correct, as verified by Fig. 5.12, produced by MATLAB. For the system S2 the angle of arrival at the zero -1 + j is computed as 180 + 161.5 + 153.4 - 90 = 405, i.e., 45, which is correct, as shown by the picture produced by MATLAB.

Figure 5.12 Illustrating angles of departure and arrival of Example 5.9.
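The departure-angle formula of Rule six translates directly into code; the following sketch reproduces the S1 computation of Example 5.9 (the function handles only a simple complex pole, which suffices here):

```python
import cmath

def departure_angle_deg(p, other_poles, zeros, h=0):
    """Rule six, positive gain: angle of departure from a simple complex pole p."""
    ang = 180.0 * (2 * h + 1)
    ang -= sum(cmath.phase(p - pi) for pi in other_poles) * 180.0 / cmath.pi
    ang += sum(cmath.phase(p - zi) for zi in zeros) * 180.0 / cmath.pi
    return ang % 360.0

# S1 of Example 5.9: 1 + K(s+2)/(s^2+2s+2); departure from p = -1 + j.
theta = departure_angle_deg(-1 + 1j, other_poles=[-1 - 1j], zeros=[-2])
print(theta)   # 135.0, i.e., 180 - 90 + 45
```

`cmath.phase` measures angles counterclockwise from the positive real axis, matching the convention in Fig. 5.11.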


Remark 5.8: In case of repeated poles or zeros the angles must be divided by their multiplicities, as shown next.

Example 5.10: Consider the systems S1: 1 + K(s + 1)^2(s^2 + 4s + 5)/s^4 and S2: 1 + K[(s + 1)^2 + 4]^2/[(s - 1)^2 + 1]^2, whose root loci are depicted in Fig. 5.13.

Figure 5.13 Angles of departure and arrival. S1 on the left; S2 on the right.

For S1: The angles of departure from the poles at the origin are given by 4θ = 180(2h + 1) + 2 × 0 + α + (360 - α). Thus θ = ±45, ±135. The angles of arrival at the zeros at -1 are found from 2θ = 180(2h + 1) + 4 × 180 - (α + 360 - α). Hence θ = ±90. The angle of arrival at the zero at -2 + j is computed as θ = 180(2h + 1) + 4 × 153.4 - (90 + 2 × 135) = 73.7. The angle of arrival at the conjugate zero is -73.7 deg.

For S2: The angles of departure from the upper complex poles are given by 2θ = 180(2h + 1) - 2 × 90 + 2 × ((180/π)tan^-1(1.5) - (180/π)tan^-1(0.5)), or θ = 29.74, 209.74 deg. The angles of arrival at the upper complex zeros are 2θ = 180(2h + 1) + 2 × (180 - (180/π)tan^-1(1.5) + 180 - (180/π)tan^-1(0.5)) - 2 × 90, or θ = 97.125, 277.125 deg. For the lower complex poles and zeros the angles are the negatives of the above values.

Question 5.3: Computation of the contribution of a pair of complex poles/zeros (like -1 ± 2j) to the angle at a pole/zero (like -1 ± j) can be somewhat simplified, or rather reformulated, via a simple trigonometric formula. How?

Remark 5.9: This remark has three parts: (1) A direct result of Rule six is that the angles of departure from poles or arrival at zeros, if they are repeated of order two, always differ by 180 degrees. If they are repeated of order three they form angles of 120 degrees with each other. If they are of order four they form angles of 90 degrees with each other, and so forth. (2) If these poles or zeros are real then necessarily the angles are symmetric with respect to the real axis. Thus in the case that the multiplicity order is two the angles are either ±90 or 0, 180. If the multiplicity order is three the angles are either 180, ±60 or 0, ±120. If the multiplicity order is four the angles are either ±45, ±135 or 0, 180, ±90. (3) For complex poles and zeros, part (1) is valid but part (2) is not necessarily correct. See Example 5.10 and Problem 5.14.

Remark 5.10: Note that the angle formulae are exact and can be easily proven by analysis. (Consider a small neighborhood around the zero/pole of interest and apply the general angle condition to an arbitrary point in it.) If we wish direct analytical computation of the angles of departure/arrival, it is impossible in the general case, and even in cases which admit a solution it is quite cumbersome. The direct computation is as follows: we should substitute s = σ + jω in the characteristic equation, find ω = h(σ), and then compute dh/dσ. More precisely, we denote 1 + K N(s)/D(s) = 1 + K [fN(σ, ω) + jgN(σ, ω)]/[fD(σ, ω) + jgD(σ, ω)] = 0, omit K in a straightforward manner, and arrive at fN gD - gN fD = 0. However, this last equation is not always transformable to ω = h(σ).

Rule seven Imaginary axis crossing points. There are two main analytical approaches to find these points:

i. Routh's test.
ii. Set s = jω and solve 1 + L(s) = 0 by equating both the real and imaginary parts to zero.

The second method usually involves cumbersome computations and is not easily tractable by hand. It is used in software development. The first method is preferred for hand computations. In this method the one and only element in the s^1 row of the Routh array is set to zero, the unknown parameter(s) is (are) computed, and then the auxiliary equation is solved. The roots of the auxiliary equation are the j-axis crossings. Note that there may be multiple j-axis crossings, as will be shown in the worked-out problems.

Example 5.11: Consider the system with 1 + Ka/[s(s^2 + bs + c)] = 0, where a, b, c are known. Determine the j-axis crossings, if any, and the value of K at the crossings.

We implement the described procedure. To form the Routh array we first form the characteristic equation. It is s^3 + bs^2 + cs + Ka = 0. Thus,

s^3 | 1 | c
s^2 | b | Ka
s^1 | (bc - Ka)/b |
s^0 | Ka |

We must have

(bc - Ka)/b = 0 and bs^2 + Ka = 0.

Hence K = bc/a and s = ±j√(Ka/b) = ±j√c. On the other hand, the stability region is 0 < K < bc/a.
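The symbolic result of Example 5.11 is easy to spot-check numerically: at the gain K = bc/a, the predicted crossing s = j√c must satisfy the characteristic equation exactly. A sketch with illustrative sample values a = 2, b = 3, c = 4 (any positive numbers would do):

```python
import cmath

# Example 5.11 with sample values: characteristic equation s^3 + b s^2 + c s + K a = 0.
a, b, c = 2.0, 3.0, 4.0
K = b * c / a                   # gain at the j-axis crossing, from the s^1 row
s = 1j * cmath.sqrt(c)          # predicted crossing s = +j*sqrt(c)
residual = s**3 + b * s**2 + c * s + K * a
print(K)               # 6.0
print(abs(residual))   # ~0: the predicted crossing satisfies the equation
```

The conjugate crossing -j√c works identically, since the polynomial has real coefficients.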

Question 5.4: How should we complete the details of the above solution? (Hint: Note that, e.g., the root locus of the system 1 + 2K/[s(s^2 + 1)] = 0 does not have any j-axis "crossings," but at K = 0 the origin is part of the root locus.)

Remark 5.11: Some systems have several j-axis crossings. It is important to note that at crossings larger frequencies do not necessarily correspond to larger values of the gain.

Example 5.12: Consider the system L(s) = K(s + 0.5)/[(s^2 - 10)(s^2 + s + 10)]. The root locus is provided in Fig. 5.14. The j-axis crossings are the pairs (ω, K) = (0, 200), (3.08, 19.5).

Figure 5.14 Root locus of Example 5.12.


Example 5.13: Consider the system L(s) = K(s + 0.5)[(s + 2)^2 + 1][(s + 1)^2 + 9][(s + 1)^2 + 25] / [s^2(s - 1)(s - 2)(s - 3)(s - 4)(s - 5)(s - 6)]. The root locus is given in Fig. 5.15. The j-axis crossings are the pairs (ω, K) = (0, 0), (1.23, 1.04), (3.61, 8.62), (9.40, 2.25). Further examples are encountered in more sophisticated systems, like the ones discussed in Appendix C, Section 5.5, and Chapter 9.

Figure 5.15 Root locus of Example 5.13.

Remark 5.12: If the gain of the system is multiplied by a constant, then the gain values at the j-axis crossings (if any) are divided by the same value, but the frequencies do not change. Note that this is an important property which will be frequently used in the gain margin context, Chapter 6.

Rule eight Intersection of root locus branches (beyond break-away and break-in points).

1. If at a point on the root locus dK/ds ≠ 0, there is one and only one branch of the root locus through this point.
2. If at a point s on the root locus d^i K/ds^i = 0 (i = 1, ..., r) and d^(r+1) K/ds^(r+1) ≠ 0, there are r + 1 branches entering and r + 1 branches leaving this point. In this case it is said that there are root locus intersections at that point. The angle between two adjacent approaching branches, or two adjacent leaving branches, is ±360/(r + 1), and between every two adjacent branches (of which always one is approaching and one is leaving) is ±180/(r + 1).


Note that the conditions of part (2) are satisfied iff sT is a root of multiplicity r of the equation dK/ds = 0. (In other words, we do not really need to verify the conditions of part (2).) In this case dK/ds is of the form (s - sT)^r f(s) with f(sT) ≠ 0. Moreover, sT is a root of multiplicity r + 1 of the characteristic equation. In other words, it is a closed-loop pole of multiplicity r + 1, or the characteristic equation has the factor (s - sT)^(r+1). It is due to add that the above Rule eight is a classical result in the theory of differential equations. Its proof is straightforward, based on the theory of complex analysis (Mac Duff, 1954). As an instance of Remark 5.12, consider the system L'(s) = 10L(s), where L(s) is the system of Example 5.13. The j-axis crossings are the pairs (ω, K) = (0, 0), (1.23, 0.104), (3.61, 0.862), (9.40, 0.225).

Example 5.14: The point s = -3 is an intersection point of four branches of the system L(s) = K/[(s + 2)(s + 4)(s^2 + 6s + 10)]. The reason is that dK/ds = 0 has s = -3 as its root with multiplicity three. (Thus there necessarily hold d^r K/ds^r = 0 at s = -3 for r = 1, 2, 3, and d^4 K/ds^4 ≠ 0 at s = -3.)
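The multiplicity claim of Example 5.14 can be verified with plain polynomial arithmetic: on the locus K(s) = -(s + 2)(s + 4)(s^2 + 6s + 10), and the first three derivatives of K must vanish at s = -3 while the fourth does not. A sketch:

```python
def polymul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polyder(c):
    n = len(c) - 1
    return [coef * (n - i) for i, coef in enumerate(c[:-1])]

def polyval(c, s):
    r = 0.0
    for coef in c:
        r = r * s + coef
    return r

# Example 5.14: K(s) = -(s+2)(s+4)(s^2+6s+10), built by multiplying factors.
Kpoly = [-1.0]
for factor in ([1, 2], [1, 4], [1, 6, 10]):
    Kpoly = polymul(Kpoly, factor)

# Derivatives of orders 1..3 vanish at s = -3; order 4 does not (it is -24).
d = Kpoly
for order in range(1, 5):
    d = polyder(d)
    print(order, polyval(d, -3.0))
```

Indeed dK/ds expands to -4(s + 3)^3, a triple root at -3, so four branches intersect there.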

The reader is also referred to Exercise 5.18.

The above eight rules summarize the procedure for constructing the root locus. Now, with several remarks and examples, we wrap up this section.

Remark 5.13: The magnitude condition, given below, gives the exact value of the gain K at a point on the root locus. On the root locus there holds L(s) = -1. Hence |L(s)| = 1 and |K| = |D(s)|/|N(s)| = Π|s - p| / Π|s - z|.

If the value of K is known, then the exact values (locations) of the closed-loop poles can be obtained by trial and error. This, of course, is not done today, in the computer era, but one must be aware of it.

Remark 5.14: The relative degree rule, which is a conservation law for systems satisfying n ≥ m + 2, states that the sum of the closed-loop poles is equal to the sum of the open-loop poles; it is constant and independent of K. Thus, the total distance some closed-loop poles move to the left is equal to the total distance other poles move to the right.

Remark 5.15: Once the locus is sketched, determine the dominant poles from the performance specifications, in particular from ζ. The remaining poles may be obtained by division of the characteristic equation by the dominant poles (with some approximation) or from the relative degree rule, up to the sum of the real parts. If n = 3, i.e., there is only one more pole left, it will be real and the relative degree rule gives its exact value. If n = 4, i.e., there are two more poles left, and they are complex conjugate (which can be understood from the overall root locus), the relative degree rule gives the real part. The approximate value of the imaginary part of the remaining poles is then obtained from the root locus (by drawing a vertical line). Today, with computers in pervasive use, this is not done; however, we solve an example to demonstrate the methodology.

Remark 5.16: At the intersection points, including the break-away and break-in points, the sensitivity of the closed-loop poles to the gain K is infinite: S^s_K = (ds/s)/(dK/K) = (ds/dK)(K/s) = (1/0)(K/s) → ∞. (Question: Does any problem happen if the gain is chosen such that the closed-loop poles are at the intersection points?)

Remark 5.17: Some systems are pathological in the sense that small perturbations in one (or some) parameter(s) of the system result in a considerably different root locus. So for such systems the root locus must be drawn carefully. This may not be easy for hand drawing; computer software like MATLAB is needed.
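Remark 5.14 follows from Vieta's formulas: when n - m ≥ 2, the term KN(s) cannot affect the s^(n-1) coefficient of D(s) + KN(s), so the pole sum -a_(n-1)/a_n is independent of K. A sketch using, as sample data, the fourth-order loop-gain denominator that reappears in Example 5.17:

```python
def polymul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# D(s) = s(s+50)(s^2+80s+1700); numerator N = 85000 is constant (n - m = 4).
D = [1.0, 0.0]
for factor in ([1, 50], [1, 80, 1700]):
    D = polymul(D, factor)
print(D)   # [1.0, 130.0, 5700.0, 85000.0, 0.0]

for K in (1.0, 8.7221, 38.8165):
    char = D[:]
    char[-1] += K * 85000.0          # D(s) + K*N only changes the constant term
    pole_sum = -char[1] / char[0]    # Vieta: sum of closed-loop poles
    print(K, pole_sum)               # always -130.0, independent of K
```

Since only the constant term moves with K, the closed-loop pole sum stays at -130 for every gain, which is exactly the conservation law used in Remark 5.15.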

Example 5.15: In Fig. 5.16 the open-loop poles and zeros are almost the same, whereas the root loci are considerably different. The system is (s + 0.5)/[s(s - 1)(s^2 + 4s + α)]. In the left panel α = 12 and in the right panel α = 14.

Figure 5.16 A pathological case in root locus drawing.


Example 5.16: Similar to the previous example, in Fig. 5.17 the open-loop poles and zeros are almost the same, whereas the root loci are drastically different. The system is L = K(αs + 64)/[s(s + 1)(s^2 + 7s + 25)]. In the left panel α = 38, in the right panel α = 39, and in the lower panel α = 40.

Figure 5.17 A pathological case in root locus drawing.

The student is encouraged to investigate by direct computation whether in the right panel the break point is exactly realized.

Question 5.5: As is observed, root locus drawing may be pathologically intricate or sensitive. Probably more important than the shape of the root locus is the range of stability. Do these systems have drastically different stability ranges versus K?

With the ensuing comprehensive Examples 5.17 and 5.18 we close this section.


Example 5.17: Consider the plant P(s) = 1700/[s(s^2 + 80s + 1700)] preceded by the controller C(s) = K in a negative feedback structure, with the sensor dynamics Ps(s) = 1/(0.02s + 1) in the feedback loop. It is desired that the rightmost poles have ζ = 0.6. In the context of root locus find the controller C and the time response for a step input. Are the rightmost poles dominant? The problem is solved in the following steps (1)-(14).

1. We form the loop gain as L = CPPs = 1700K/[s(s^2 + 80s + 1700)(0.02s + 1)] = 85,000K/[s(s + 50)(s^2 + 80s + 1700)].
2. The open-loop poles are s = 0, -50, -40 ± 10j. There are no open-loop zeros.
3. There are four branches of the root locus, and all of them end at infinite zeros.
4. The root locus on the real axis lies between 0 and -50.
5. The angles of the asymptotes are ±45, ±135.
6. The real axis abscissa is -(0 + 50 + 40 + 40)/4 = -32.5.
7. The break-away points are computed from dK/ds = 0, i.e., (d/ds)[s(s + 50)(s^2 + 80s + 1700)] = 0; therefore s = -43.0747 ± j4.0896, -11.3505. The acceptable answer is s = -11.3505.
8. The angle of departure from p3 = -40 + 10j is computed as ∠p3 = 180(2h + 1) - (α1 + α2 + α4) = (1 + 2h)180 - (45 + 90 + 165.96) = 239.03, see Fig. 5.18.


Figure 5.18 Angle definition and computation.

Similarly, ∠p2 can be obtained. A simpler way is to resort to the symmetry of the root locus with respect to the real axis and conclude that ∠p2 = -239.03.

9. The j-axis crossings: The characteristic equation12 is s^4 + 130s^3 + 5700s^2 + 85,000s + 85,000K = 0. The Routh array (with rows normalized) is

s^4 | 1 | 5700 | 85,000K
s^3 | 130 → 1 | 85,000 → 653.8462 |
s^2 | 5046.1538 → 1 | 85,000K → 16.8445K |
s^1 | 653.8462 - 16.8445K | |
s^0 | 16.8445K | |

Therefore, we must have

653.8462 - 16.8445K = 0 and s^2 + 16.8445K = 0,

and thus K = 38.8165 and s = ±j√653.8462 = ±j25.5704. On the other hand, the stability region is 0 < K < 38.8165.


12 Note that the transfer function is CP/(1 + CPPs) = 1700K(s + 50)/(s^4 + 130s^3 + 5700s^2 + 85,000s + 85,000K).


10. The root locus is sketched in Fig. 5.19, with care around the origin and the j-axis.

Figure 5.19 Root locus of Example 5.17.

11. The radial line ζ = 0.6 is drawn, giving the pole -8.7 + 11.6j. Hence, the rightmost poles are -8.7 ± 11.6j.
12. The gain is given by the magnitude condition 85,000K = |s| · |s + 50| · |s^2 + 80s + 1700|. Hence K = 8.7221.
13. The other poles are obtained by either division (with approximation) or the relative degree rule.
a. The quadratic form representing the dominant poles is (s + 8.7)^2 + 11.6^2 = s^2 + 17.4s + 210.25. After division of the characteristic equation by this polynomial and ignoring the remainder we arrive at s^2 + 112.6s + 3530.51, yielding the other roots as -56.29 ± j18.99.
b. Alternatively, from the relative degree rule we have (0) + (-50) + (-40 + 10j) + (-40 - 10j) = (-8.7 + 11.6j) + (-8.7 - 11.6j) + (α + jβ) + (α - jβ). Thus, with α = -56.3 we refer to the root locus, draw a vertical line at α = -56.3, and obtain -56.3 ± j19.5.
14. The transfer function13 is CP/(1 + CPPs) = 1700K(s + 50)/[(s^2 + 17.4s + 210.25)(s^2 + 112.6s + 3530.51)], which can be written as

CP/(1 + CPPs) = [4.1998(s + 50)/(s^2 + 17.4s + 210.25)] × 1/[(1/3530.51)s^2 + (112.6/3530.51)s + 1].

It is observed that the second term is negligible and thus CP/(1 + CPPs) ≈ 4.1998(s + 50)/(s^2 + 17.4s + 210.25).


13 We use the poles obtained by long division. These poles are often more accurate than those of the relative degree rule.


It is noteworthy that the DC gain must be (approximately) 1, i.e., the steady-state error must be zero, since P is type one. This requirement holds with high accuracy. The responses of the original system and of the reduced model are plotted in Fig. 5.20. The approximation looks fine. Finally, we should say that the rightmost poles are dominant.

Figure 5.20 Step response of Example 5.17. Solid: original system; dashed: reduced model.
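Step 12 of the worked example can be reproduced in a few lines; the sketch below evaluates the magnitude condition at the dominant-pole location -8.7 + 11.6j quoted above:

```python
# Magnitude condition at the desired dominant pole on the zeta = 0.6 line:
# 85,000 K = |s| |s + 50| |s^2 + 80s + 1700|.
s = complex(-8.7, 11.6)
K = abs(s) * abs(s + 50) * abs(s**2 + 80 * s + 1700) / 85000.0
print(K)   # approximately 8.72
```

Python's built-in complex type makes the product of vector lengths a direct transcription of the formula.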

Example 5.18: Draw the root locus of the system L(s) = K(2 - s)/[s(s + 1)(s^2 + 6s + 12)] for positive gain.

In our formulation the system is -K(s - 2)/[s(s + 1)(s^2 + 6s + 12)], where K > 0, or K' = -K < 0. Thus we note that: (1) On the real axis, the total number of poles and zeros to the right of a test point is even. (2) The term 180(2h + 1) present in the asymptote angles is substituted with 360h, i.e., we should follow Remark 5.6. (3) The terms 180(2h + 1) present in the formulae for angles of departure and arrival should be substituted with 360h.

The finite open-loop poles and zero are at 0, -1, -3 ± j√3 and 2, respectively. There are also three infinite open-loop zeros. The break points are the roots of dK/ds = 0, i.e., (d/ds)[s(s + 1)(s^2 + 6s + 12)/(s - 2)] = 0, at which K > 0. The roots are 3.1681, -0.3872, -2.3904 ± j0.8985. Because we know that there are break points between the real poles and also to the right of the real zero, we do not really need to check the sign of K; 3.1681 and -0.3872 are acceptable break points. The abscissa is at -[(0 + 1 + 6) - (-2)]/3 = -3. The asymptotes are the lines at 0, ±120 deg. The angle of departure from the complex pole at -3 + j√3 is 360h - (90 + 139.11 + 150) + 160.89 = 141.79 deg. The root locus is depicted in Fig. 5.21.

Figure 5.21 Root locus of Example 5.18.
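The break-point value 3.1681 claimed in Example 5.18 can be confirmed without a root finder: with K(s) = s(s + 1)(s^2 + 6s + 12)/(s - 2) on the locus, a break point is a root of the numerator of dK/ds, g(s) = A'(s)(s - 2) - A(s), where A(s) = s(s + 1)(s^2 + 6s + 12) = s^4 + 7s^3 + 18s^2 + 12s. A sketch using a sign-change test:

```python
# A(s) and its derivative, evaluated by Horner's scheme.
def A(s):  return ((s + 7) * s + 18) * s * s + 12 * s
def Ap(s): return ((4 * s + 21) * s + 36) * s + 12
def g(s):  return Ap(s) * (s - 2) - A(s)     # numerator of dK/ds

# g changes sign on [3.16, 3.18], bracketing the break point near 3.1681 ...
print(g(3.16) < 0 < g(3.18))          # True
# ... and K there is positive, so it is an actual break point for K > 0.
print(A(3.1681) / (3.1681 - 2) > 0)   # True
```

The sign change brackets the root without ever solving the quartic, and the positivity of K confirms the point belongs to the positive-gain locus.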

5.3 The root contour

When more than one parameter changes in an equation, the locus of the roots is called the root contour, not the root locus. We will analyze this situation for the two-parameter case; the case of more parameters follows directly. Let the characteristic equation be 1 + L(s) := 1 + K1 N1(s)/D(s) + K2 N2(s)/D(s) = 0, or equivalently D(s) + K1 N1(s) + K2 N2(s) = 0. The procedure is as follows. We set K2 = 0, thus D(s) + K1 N1(s) = 0 or 1 + K1 N1(s)/D(s) = 0. We call this "System 1" and sketch the root locus for it. Then we keep K1 = constant, hence (D(s) + K1 N1(s)) + K2 N2(s) = 0 or 1 + K2 N2(s)/(D(s) + K1 N1(s)) = 0. We call this "System 2." The root locus of System 2, drawn for values of K1 from zero to infinity, is called the root contour of the original system. It is observed that the open-loop poles of System 2 are the closed-loop poles of System 1, which in turn form the root locus of System 1. Hence, the root contour of the original system starts from the root locus of System 1 and ends at the zeros of N2(s)/(D(s) + K1 N1(s)), i.e., at the zeros of N2(s), finite or infinite.

Example 5.19: Let a system be given by s^3 + K2 s^2 + K1(s + 1) = 0. Draw its root contour for different values of the parameters.

We start by setting K2 = 0. Thus "System 1" will be s^3 + K1(s + 1) = 0, or 1 + K1(s + 1)/s^3 = 0. It has two asymptotes, as evident in Fig. 5.22. The angles of departure from the triple pole at s = 0 are 180, +60, -60. To verify the last two in the figure, note that the axes scaling is not the same. Now we form "System 2" as follows. Setting K1 constant we obtain (s^3 + K1(s + 1)) + K2 s^2 = 0, or 1 + K2 s^2/(s^3 + K1(s + 1)) = 0. Now, by varying K1 from zero to infinity and drawing the root locus of System 2, we have actually drawn the root contour of the original system, as shown in Fig. 5.22.

Figure 5.22 Root contour of Example 5.19.
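The departure angles 180, ±60 of System 1 can be checked numerically: for small K1 the branch near the origin behaves like s^3 ≈ -K1, so a good initial guess is K1^(1/3) e^(j60 deg), which a few Newton iterations refine to a true root whose phase is close to 60 degrees. A sketch (K1 = 1e-6 is an arbitrary small value):

```python
import cmath

# System 1 of Example 5.19: f(s) = s^3 + K1(s + 1) = 0.
K1 = 1e-6
f  = lambda s: s**3 + K1 * (s + 1)
fp = lambda s: 3 * s**2 + K1

# Newton's method from the asymptotic guess K1^(1/3) * e^(j*60 deg).
s = K1 ** (1 / 3) * cmath.exp(1j * cmath.pi / 3)
for _ in range(20):
    s = s - f(s) / fp(s)

print(abs(f(s)) < 1e-15)                # True: converged to a true root
print(cmath.phase(s) * 180 / cmath.pi)  # close to 60 degrees
```

The phase is not exactly 60 because K1 is small but finite; it tends to 60 as K1 → 0, matching the triple-pole departure angle.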

5.4 Finding the value of gain from the root locus

In parts of the literature it is often claimed that by looking at the root locus it is possible to find the right value of the gain K in order to achieve the desired performance. If we are content with stabilization, it is of course possible. However, if we aim at performance achievement, this is rather complicated and in fact seems impossible in the general case. It is possible only in the cases in which some rightmost poles are "truly dominant." This is for example the case for the system in Example 5.35 and Problems 5.1 and 5.4. In the general case, where poles and zeros have a more complicated pattern, finding the value of K is possible only by simulation of different cases or optimization of a cost function. We adopt the first method here, which gives more insight into the dynamics of the system. The second is often geared to the course on industrial control, as we have mentioned in Chapter 4, Time Response. The following Examples 5.20 and 5.21 show that in general it is not possible to guess the right value of the gain by looking at the root locus; see also Examples 5.22 and 5.38.

Example 5.20: Design the controller C = K(s + 3)/(s + 30) for the plant P = 20/[s(s - 1)] to have a good step response.

Note that this is part of Problem 4.13 of Chapter 4, Time Response, where we designed the controller for input ramp tracking. For the complete solution we refer the reader to that problem. Here our emphasis is on choosing K for good step tracking. The root locus is given in Fig. 5.23. It shows that K = 15.7 results in the smallest damping ratio for the pair of conjugate poles of the system, much smaller than that for, e.g., K = 50, and thus may suggest choosing the former case. But let us have a look at the computed outputs in the right panel of Fig. 5.23. We observe that the overshoots are almost the same. In the former case MP ≈ 33% and in the latter case MP ≈ 35%. Because of smaller rise and settling times it is clear that we prefer K = 50.14 (It is noteworthy that with K = 40 even the overshoots will be the same. But then the ramp error will be above the desired value. Refer to Problem 4.13 for details.)

Figure 5.23 System of Example 5.20. Left: Root locus; Right: Input tracking.


14 In the simulations of this chapter we consider the output shape only. In future chapters we shall learn other design specifications like gain margin and phase margin, which are measures of robustness, and many others. They will be considered in the design procedure in Chapter 9.


Discussion: We complement the above argument by adding that with K = 15.7 and K = 50 the closed-loop poles are at -10.43, -9.285 ± j2.0196 and -3.397, -12.8 ± j26.8172, respectively. This clearly shows that in this system it is not possible to determine the value of gain by looking at the root locus.

Example 5.21: Design the controller C = K(s + 1.5)/s for the plant P = 1/(s^2 + 5s + 6) to have a good step response.

This is also a former problem, Problem 4.8 of Chapter 4, where we designed the controller for input ramp tracking. For the complete explanation the reader should refer to that problem. Here we consider the problem of choosing K for good step tracking. The root locus of the system shows that with approximately K > 0.64 two closed-loop poles are complex and thus may suggest an overshoot in the step response. However, simulations reveal that only with K > 6 does the system exhibit an overshoot. Indeed, with values K ≈ 0.64 not only is there not any overshoot but also the system is very sluggish. Simulations with some values of K are provided in Fig. 5.24.

Figure 5.24 System of Example 5.21. Left: Root locus; Right: Input tracking.

Discussion: Here as well we complement the above argument by adding that with K = 0.5, 6, 10 the closed-loop poles are at -2.674, -2.198, -0.1276; -1.217, -1.8915 ± j1.9543; and -1.357, -1.8215 ± j2.7825, respectively. This also clearly shows that in this system it is not possible to determine the value of gain by looking at the root locus.
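The pole locations quoted in this discussion can be verified by substituting them back into the characteristic equation; for the loop gain K(s + 1.5)/[s(s^2 + 5s + 6)] it is s^3 + 5s^2 + (6 + K)s + 1.5K = 0. A sketch for K = 6:

```python
# Characteristic equation of Example 5.21: s(s^2+5s+6) + K(s+1.5) = 0.
def charpoly(s, K):
    return s**3 + 5 * s**2 + (6 + K) * s + 1.5 * K

# The quoted closed-loop poles for K = 6 should give (near-)zero residuals.
for root in (-1.217, complex(-1.8915, 1.9543), complex(-1.8915, -1.9543)):
    print(abs(charpoly(root, 6.0)) < 0.01)   # True for all three
```

The residuals are on the order of 1e-3, consistent with the four-digit rounding of the quoted values.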


Remark 5.18: In certain cases the root locus near the origin is so compact that that part is visible only by zoom and magnification. This is for example the case in Example 5.24. When we use this feature of MATLAB, the current releases do not produce the value of gain if we click on the locus in the zoomed portion. It would be very good if this problem were addressed in future releases.

5.5 Controller design implications

It is good to have some insight into the stabilization, tracking, and performance issues in the context of root locus. We present the developments in two sections: difficult systems and simple systems. It is instructive to start with difficult systems.

5.5.1 Difficult systems

Let us start by stressing that stabilizing unstable systems is particularly studied in the graduate course Robust Control Systems. There you will learn specialized results and frameworks like the factorization approach, H∞, and μ. For instance, one of these specialized results is that the strictly proper system P(s) = (s - z)P1(s)/(s - p), where P1(s) is stable, minimum phase (MP), and (strictly) proper, can be stabilized by a stable controller iff z > p. Another result is that the unstable plant P(s) can be stabilized by a stable controller iff there is an even number of real unstable poles between every pair of real unstable zeros of P(s). At this stage we certainly cannot talk much about these issues in a theoretical framework. However, we endeavor to learn as much as reasonable, now in the context of root locus. It is advisable that when you take that graduate course, you conceptualize the lessons in the context of root locus as well; it enhances your insight. Now we embark on our discussion. Recall Example 5.1 and Remark 5.2. An open-loop unstable system may become closed-loop stable by proper choice of the gain. This is shown for instance in Example 5.1. But what if this is not the case? We discuss some general possibilities in the following.
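The parity condition quoted above is a pure counting test and can be sketched in a few lines. Note that the full parity interlacing property in the robust control literature also involves the zero at infinity; the sketch below implements only the simplified statement quoted here, and the pole-zero data are illustrative:

```python
def stabilizable_by_stable_controller(real_unstable_poles, real_unstable_zeros):
    """Simplified parity test: between every pair of adjacent real unstable
    zeros there must be an even number of real unstable poles."""
    zs = sorted(real_unstable_zeros)
    for lo, hi in zip(zs, zs[1:]):
        count = sum(1 for p in real_unstable_poles if lo < p < hi)
        if count % 2 == 1:
            return False
    return True

# Zeros at 1, 3 with one pole at 2 in between: one (odd) pole -> not possible.
print(stabilizable_by_stable_controller([2], [1, 3]))      # False
# Zeros at 1, 4 with poles at 2, 3 in between: two (even) poles -> possible.
print(stabilizable_by_stable_controller([2, 3], [1, 4]))   # True
```

The test says nothing about performance, only about the existence of a stable stabilizing controller.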

5.5.1.1 Systems without NMP zeros

The system is something like P(s) = K/[(s - 1)(s - 2)], K/[(s - 1)(s - 2)(s - 3)], K/[(s - 1)(s - 2)(s - 3)(s - 4)], etc., perhaps with some stable part in its dynamics as well, e.g., K(s + 1)/[(s + 2)(s - 1)(s - 2)(s - 3)]. We solve this problem for both stabilization and tracking objectives. See also Remark 5.26.

5.5.1.2 Systems with NMP zeros

The system is something like P(s) = K(s - 1)(s - 3)/[(s - 2)(s - 4)(s - 5)] or with any other "pattern" for the pole-zero locations. Recall that this is the same as what we considered in Question 3.3 of Chapter 3. This problem in its full generality does


not admit a solution unless we relax the internal stability requirement and allow for unstable pole-zero cancellations between the controller and the plant. As you already know, such designs are not acceptable. However, merely for the purpose of illustrating the design complexities we include an example of such unacceptable designs. Apart from this, and maintaining internal stability, we present several special and interesting cases that admit a solution in a tractable manner for this undergraduate course in the context of the root locus. It must be stressed that the solutions to the problems in this category are somewhat innovative.

Remark 5.19: In the following two sections we present several examples. Except for the problems for which we present simulation results in some figures, our presentation for the other problems merely demonstrates the design procedure and points out the stability region regarding the gain value. More precisely, the controller parameters are just typical values that admit a solution regardless of the performance. In order to have good performance, trial and error should be conducted on the controller "synthesis" and "analysis," which we do not do in this chapter. Note that, precisely speaking, analysis and synthesis are different tasks, as will be further clarified in Chapter 9. See also Remark 5.20.

5.5.1.3 Examples of systems without NMP zeros

Suppose that our plant is P(s) = 1/[(s - 1)(s - 2)]. To answer this question we note that the relative degree of the plant is 2. The relative degree of the controller can be at least zero, i.e., a proper controller. Thus the overall system will have a relative degree of at least 2, which means that two of its root locus branches will go to infinity (on ±90° asymptotes). What we should do is bring these branches to the left of the j-axis (beyond a certain value of the gain), and this is accomplished by proper choice of the controller pole(s) and zero(s). These branches will end at stable zeros, either finite or infinite. The idea is illustrated in the subsequent Example 5.22 and Remark 5.20.

Example 5.22: Consider the system P(s) = 1/[(s - 1)(s - 2)]. Imagine the root locus of this system. It is clear that it cannot be stabilized by the choice of the controller C(s) = K and needs to be compensated by a dynamic controller. Recalling the relative degree rule, we design a controller such that a branch of the root locus goes to the right so that the two unstable branches come to the left. This suggests a controller of the form (s + z)/(s + p) where p > z, not vice versa. Moreover, the unstable branches must be brought all the way to the left of the j-axis. To this end we use the abscissa formula and get [(1 + 2 - p) - (-z)]/(3 - 1) < 0, or p - z > 3.


What specific choice should be made from among an infinite number of options depends on the design specifications, if achievable by this controller. These include the steady-state error in reference tracking, maximum overshoot, bandwidth, etc., which will be studied in future chapters. However, here we add a few words. You already know that the controller (s + z)/(s + p) does not suffice for step tracking; there will be a nonzero steady-state error. A possible solution would be to use the controller (s + z1)(s + z2)/[s(s + p)]. For instance, the controller (s + 0.5)(s + 1)/[s(s + 15)] (with K > 55, e.g., K = 200) results in zero steady-state error in tracking a step input. Note that this is only a possibility. The exact values of the parameters need to be found by further investigation and simulation. In the worked-out problems we will propose controllers which render zero steady-state error in tracking ramp and parabolic inputs for this system. Note that the crude philosophy behind the factor (s + 0.5)/s of the controller is as follows. For tracking a step input with zero steady-state error the controller must have the factor 1/s, as you learned in the previous chapter. The zero factor s + 0.5 is then added to "roughly cancel out" the effect of the term 1/s on the root locus so that the overall root locus remains unchanged (to a good extent), i.e., in particular the stability property remains valid. (But the exact value of the gain at the j-axis crossing will change.) The value K > 55.36 refers to the j-axis crossing. The exact value of K is chosen such that the maximum overshoot is minimized, and this requires trial and error and simulation. The responses for K = 250 and K = 130 are offered in Fig. 5.25. (With K = 100 the system shows 85% overshoot; this case is not shown in the figure.) Note that K = 130 results in the most highly damped complex poles, but it is not the best choice, as the simulation shows. And this is what we said in Section 5.4.
In general it is not possible to assess the overshoot from the place (i.e., the damping ratio) of the complex conjugate poles of the system.

Discussion: With K < 130 the simulation (which is not provided) shows that the output has excessive overshoot. For higher values of gain, K > 250, the improvement in the output is negligible. So the given controller with K = 250 is a good choice. With these values of gain the closed-loop poles are given by: K = 130: -3.676, -0.3289, -3.997 ± j6.147; and K = 250: -1.77, -0.3814, -4.924 ± j12.682. The example shows that by looking at the root locus it may not be possible to choose the right value for the gain, in accordance with what we said in Section 5.4. In some systems the complexity of the problem is such that trial and error and simulations are called for to find the value of the gain. Also recall from Remark 4.8 of Chapter 4 that there are some other factors that for simplicity we ignore, like


the control effort, sensitivity, etc. Finally, note that in Chapter 9 we will propose a controller synthesis for this system whose root locus is slightly different and results in a smaller overshoot. That is Problem 9.12; see also Exercise 5.57.

Figure 5.25 Example 5.22. Left: Root locus; Right: Step response; Solid: K = 250; Dash-dotted: K = 130.
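The crossing value K > 55.36 quoted in Example 5.22 can be verified with a Routh-Hurwitz test on the closed-loop characteristic polynomial s(s - 1)(s - 2)(s + 15) + K(s + 0.5)(s + 1) = s^4 + 12s^3 + (K - 43)s^2 + (1.5K + 30)s + 0.5K. The following sketch is for illustration only (it is not from the book; routh_stable is a helper written here):

```python
def routh_stable(coeffs):
    """Routh-Hurwitz test. coeffs = [a_n, ..., a_0] with a_n > 0.
    Returns True iff all polynomial roots are in the open left half-plane."""
    r1 = list(coeffs[0::2])                      # first row of the Routh array
    r2 = list(coeffs[1::2])                      # second row
    r2 += [0.0] * (len(r1) - len(r2))            # pad to equal length
    first_col = [r1[0]]
    for _ in range(len(coeffs) - 1):
        first_col.append(r2[0])
        if r2[0] == 0.0:                         # zero pivot: not strictly stable
            return False
        nxt = [(r2[0] * r1[j + 1] - r1[0] * r2[j + 1]) / r2[0]
               for j in range(len(r2) - 1)] + [0.0]
        r1, r2 = r2, nxt
    return all(v > 0 for v in first_col)

def char_poly(K):
    # s(s-1)(s-2)(s+15) + K(s+0.5)(s+1), coefficients from the highest power
    return [1.0, 12.0, K - 43.0, 1.5 * K + 30.0, 0.5 * K]

print(routh_stable(char_poly(55)), routh_stable(char_poly(56)))  # -> False True
```

With K = 55 the Routh array has a negative first-column entry, while K = 56 passes, consistent with the quoted crossing at K ≈ 55.36.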

Remark 5.20: Another "synthesis" could be to use the controller C(s) = K(s + b)^3/[s(s + a)^2] where, e.g., b = 0.5, a = 5. (Draw the root locus!) Note that: (1) There are actually several possibilities, since the poles and zeros can be simple and/or conjugate, like C(s) = K(s + b1)((s + b2)^2 + b3^2)/[s(s + a1)(s + a2)]. (2) As we mentioned in Remark 5.19, to have good performance the controller parameters have to be carefully designed. The given numerical values show only a stable case.

Remark 5.21: You may wonder what the influence of different patterns of the controller poles and zeros is. Well, the answer is twofold. (1) For simple systems like that of Example 5.22, it does not much matter with reference to the mere objective of stabilization. However, for complex systems like the ones in Exercises 5.27 and 5.28 they do matter. The student is encouraged to try other pole-zero patterns for the aforementioned exercises and verify that in many cases they fail to stabilize the system! (2) It has an influence on, e.g., the robustness/sensitivity of the system. Full details go outside the scope of this undergraduate course but to a great extent are discussed in Chapter 10. It is worth mentioning that the collection of topics that we consider in Chapter 10 is richer than that of any other text.

For higher relative degrees the problem gets rather complicated. Let the system be, e.g., P(s) = 1/[(s - 1)(s - 2)(s - 3)], which has relative degree three. Again the best we can do is to choose a proper controller, and thus the controlled system (like


the uncontrolled system) will have three asymptotic branches, two of which are necessarily unstable for high gains. Our only hope to stabilize this system is to choose the pole-zero configuration of the controller in such a way that the system has j-axis crossings and becomes stable for a bounded range of the gain. See also Exercise 5.57.

Example 5.23: Let the system be given by P(s) = 1/[(s - 1)(s - 2)(s - 3)]. The root locus has three branches, two of which (start from a break-away point between the unstable poles at 2 and 3 and) are necessarily unstable for large values of the gain. The third branch is on the real axis and is stable for high gains. Let us first take care of this branch. The smallest among the poles and zeros of the controller is chosen to be a stable zero, e.g., s = -0.02. The rest of the poles and zeros of the controller are chosen such that the unstable branches of the root locus are pulled to the left, have a j-axis crossing, and then turn back to the right and cross the j-axis again and become unstable. A controller which serves this purpose is C(s) = K(s + 0.02)(s + 0.05)(s + 5)(s + 10)/[(s + 0.2)(s + 1)(s + 100)^2]; see also the worked-out Problem 5.12 for a similar case. Part of the root locus of the system is given in Fig. 5.26.


Figure 5.26 Root locus of Example 5.23, not drawn to scale.

Discussion: The values of the gain at the j-axis crossings A, B, and C and the corresponding frequencies are given as (ω, K) pairs as follows: (0, 2.4 × 10^5), (14.243, 1.6383 × 10^5), (77.094, 1.2179 × 10^6). Hence, the range of stability is 2.4 × 10^5 < K < 1.2179 × 10^6. Note that the controller is not unique, and any controller with parameters near this controller's will work as well. For instance, 0.02 can be substituted by 0.01 or 0.03, and the same for the other parameters. The range of stability will however be different. Another possibility for stabilizing this system is to try to have a root locus like that of Fig. 5.27, in which max{k1, k2} < k3. (Recall from Remark 5.11 that this condition is not satisfied by every controller which results in the desired root locus shape, and this is a difficulty in finding the controller parameters. Also note that in the given figure and similar ones like Fig. 5.29 we are not interested in the exact angles of arrival and departure, but the


"general shape" of the locus.) This is possible for instance with the controller C(s) = K(s + 1)((s + 1)^2 + 9)/(s + 30)^3. With this controller the j-axis crossings are as follows:
Gain values: 1.6200 × 10^4, 6.6267 × 10^4, 1.5752 × 10^5
Frequencies: 0, ±3.7412, ±44.7514

If we want a tracking controller for parabolic inputs we can use C(s) = K(s + 0.5)^4((s + 1)^2 + 9)/[s^3(s + 30)^3] as the controller. The root locus of the resultant system is also shown in the right panel of the same Fig. 5.27.


Figure 5.27 Root locus of Example 5.23, not drawn to scale. Left: Stabilization; Right: Parabola tracking.

Example 5.24: An argument similar to Remarks 5.20 and 5.21 can be made for this system as well. For instance, in the above root locus the complex zeros of the controller may be real, and may be the same as the other real zero of the controller. For instance the controller C(s) = K(s + 0.5)^4(s + 2.5)^2/[s^3(s + 37)^3] is also a tracking controller for this system. With this controller the crossings are as follows:
Gain values: 3.0818 × 10^4, 5.1911 × 10^4, 2.7616 × 10^5
Frequencies: 0, ±0.5567, ±4.2908, ±53.5860.

The controller (s + 2.5)^6/[s^3(s + 37)^3] is also acceptable.

Remark 5.22: For unstable plants with four NMP poles we do not know of any system with a root locus of the shape given in Fig. 5.28 (with k1 < k2).


Figure 5.28 Probably impossible root locus.


Figure 5.29 A general possibility for stabilizing the system, figure not drawn to scale.

Another possibility one may think of is to stabilize the system by a root locus like that of Fig. 5.29, in which max{k1, k2} < k3. This is done in the next two examples. Note that an argument similar to Remarks 5.20 and 5.21 can be made here too. For example, in this figure the double zeros may be real and/or separate from each other. The controller poles may also have a different pattern. These cases are not shown in the picture.

Example 5.25: For the plant P(s) = 1/[(s - 1)(s - 2)(s - 3)(s - 4)] such a root locus is obtained by a controller like C(s) = K((s + 1)^2 + 4)^2/(s + 40)^4 and is stable by proper choice of K. The j-axis crossing frequencies and their corresponding gain values are as follows:
Gain values: 6.7941 × 10^6, 5.1964 × 10^6, 7.0668 × 10^6
Frequencies: ±1.1949, ±5.1842, ±32.1176

Thus, for example, with K = 7 × 10^6 the system is stable. Note that we can easily turn the controller into a tracking one, but to maintain max{k1, k2} < k3 we have to change the controller parameters. With the controller C(s) = K(s + 0.5)^2((s + 1)^2 + 9)^2/[s^2(s + 40)^4], the j-axis crossings are:
Gain values: 0, 3.8080 × 10^6, 5.9411 × 10^6, 6.9081 × 10^6
Frequencies: 0, ±1.8607, ±6.0641, ±31.4296



Thus by the proper choice of K (e.g., 6.2 × 10^6), the system tracks the ramp input with zero steady-state error. The reader is encouraged to draw the root locus of the resultant system and analyze the philosophy behind the proposed controller.

Example 5.26: The plant of Example 5.25 is also stabilizable with the controller C(s) = K(s + 3)^4/(s + 37)^4. The root locus is like the one given in Fig. 5.29 except that the zeros are all real. The j-axis crossings are:
Gain values: 7.9491 × 10^5, 2.1313 × 10^6, 3.4228 × 10^6
Frequencies: ±1.0584, ±10.5375, ±22.0637.

Remark 5.23: This strategy is ad hoc at the moment in the sense that the controller parameters have to be chosen by trial and error. For example, in the above system, if the controller poles are at -20 or the zeros are at -1 ± j, the system will not be stabilized. Moreover, although we have achieved reference tracking we have not considered performance in the form of a "good" output. This requires extensive simulation with different controller values (i.e., designs) and perhaps different controller structures (i.e., syntheses).

Remark 5.24: Examples of fifth- and sixth-order systems are considered in Exercises 5.27 and 5.28. But it is not possible to try all orders of systems one by one to verify whether the above procedure succeeds. We should certainly try to prove a theorem and present a canonical procedure for it. The question that we are concerned with is as follows: Given the unstable but MP plant P(s) = Np/Dp, order(Np) ≤ order(Dp), are there polynomials Nc, Dc with order(Nc) ≤ order(Dc) such that the polynomial NpNc + DpDc is stable? If we desire a tracking controller, then it is required that the polynomial DcDp have 1, 2, or 3 zeros at the origin (not cancelled with zeros of NcNp) as appropriate. This question is a restricted version of Question 3.3 of Chapter 3, in which Np was allowed to be NMP. It was a topic of extensive research in the 1980s, when numerous results under the brand "stabilization" were reported. The "factorization approach" was the main tool for tackling this problem.
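The closed-loop polynomial NpNc + DpDc in Remark 5.24 is just polynomial multiplication (coefficient convolution) and addition. As a minimal illustration (not from the book), here is how the characteristic polynomial of Example 5.22's loop can be assembled:

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists, highest power first."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyadd(a, b):
    """Add two polynomials, aligning at the constant term."""
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

# P = 1/[(s-1)(s-2)], C = K(s+0.5)(s+1)/[s(s+15)] with K = 200 (Example 5.22)
Np, Dp = [1.0], polymul([1.0, -1.0], [1.0, -2.0])       # 1 and s^2 - 3s + 2
K = 200.0
Nc = [K * c for c in polymul([1.0, 0.5], [1.0, 1.0])]   # K(s+0.5)(s+1)
Dc = polymul([1.0, 0.0], [1.0, 15.0])                   # s(s+15)
char_poly = polyadd(polymul(Np, Nc), polymul(Dp, Dc))   # Np*Nc + Dp*Dc
print(char_poly)   # -> [1.0, 12.0, 157.0, 330.0, 100.0]
```

The result matches s^4 + 12s^3 + (K - 43)s^2 + (1.5K + 30)s + 0.5K at K = 200; stability of the loop is then a question about this single polynomial.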


Remark 5.25: The above technique for stabilizing the unstable branches of the system can be seen in the context of "lead control." In Chapter 9 we will learn that a controller like (s + z)/(s + p), 0 < z < p, is called a lead controller. Note that: (1) The controller is not unique. (2) The choice of p and z drastically affects the output and must be verified by simulation of different scenarios. Usually an acceptable relation between p and z is p ≥ 10z. The effect of the choice of p and z on the result is clear but is quite problem-dependent; it is shown for a specific system in Example 5.27. (3) What this controller does is add a positive phase to the system. (4) Not every lead controller stabilizes the system, since an appropriate condition (like that of Examples 5.23-5.26) on the values of the gain at the crossings must be satisfied. The condition is not satisfied in Example 5.28, which will follow.

Example 5.27: In this example we investigate the effect of the parameters p and z in the controller of Example 5.22 on the output. The plant is P(s) = 1/[(s - 1)(s - 2)] and the controller is C(s) = K(s + a)(s + z)/[s(s + p)]. We consider nine different controllers in three groups:
C1(s) = (s + 1)(s + 3)/[s(s + 30)]; C2(s) = (s + 1)(s + 2)/[s(s + 20)]; C3(s) = (s + 1)(s + 1)/[s(s + 10)];
C4(s) = (s + 0.5)(s + 3)/[s(s + 30)]; C5(s) = (s + 0.5)(s + 2)/[s(s + 20)]; C6(s) = (s + 0.5)(s + 1)/[s(s + 10)];
C7(s) = (s + 0.1)(s + 3)/[s(s + 30)]; C8(s) = (s + 0.1)(s + 2)/[s(s + 20)]; C9(s) = (s + 0.1)(s + 1)/[s(s + 10)]
The outputs with these controllers, as well as the control signals of controllers C2, C5, and C8, are provided in Fig. 5.30. The least overshoots are obtained with K around 250; for instance, for C2 it is 260. Also note that it is possible to slightly further reduce the overshoots if we, e.g., increase the gain to 300 or even higher for C8. But it does not pay off, as the increase in the control signal is noticeable, because the initial value of the control signal is the value of the gain. As is observed, with reducing magnitude of a the settling time increases.[15] On the other hand, the decrease in the overshoot of the output is negligible and the undershoot of the control signal only slightly reduces.[16] We emphasize that the choice of this parameter is problem-dependent; a = 1 is appropriate for this system and may not be good for other systems. We also emphasize that in this specific example the choice of p = 10z = 20 is better than the other choices. However, for other plants it may not be the case. In industrial design extensive trial and error should always be undertaken.

[15] Considering the inverse Laplace transform of the system, this can easily be proven theoretically.
[16] For other examples see the worked-out Problems 5.31-5.33.
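The claim in Example 5.27 that the initial value of the control signal equals the gain follows from the initial value theorem: for a unit step reference, u(0+) = lim s·U(s) as s → ∞, i.e., lim C(s)/(1 + C(s)P(s)), which is the high-frequency gain K of the biproper controller since P is strictly proper. A quick numerical sanity check (not from the book), evaluating at a large s:

```python
def u_initial(K, a, z, p, s=1e8):
    """Approximate u(0+) = lim_{s->inf} C(s)/(1 + C(s)P(s)) for the loop of
    Example 5.27: P = 1/[(s-1)(s-2)], C = K(s+a)(s+z)/[s(s+p)]."""
    C = K * (s + a) * (s + z) / (s * (s + p))
    P = 1.0 / ((s - 1.0) * (s - 2.0))
    return C / (1.0 + C * P)

print(u_initial(260.0, 1.0, 2.0, 20.0))   # controller C2 with K = 260: ~260
```

This is why raising K to shave a little overshoot directly costs a proportionally larger control spike at t = 0.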


Figure 5.30 Example 5.27. Top left: Output with C1-C3; Top right: Output with C4-C6; Bottom left: Output with C7-C9; Bottom right: Control signals with C2, C5, and C8.

Example 5.28: The root locus of the plant P(s) = 1/[(s - 1)(s - 2)(s - 3)(s - 4)(s - 5)(s - 6)] with the controller C(s) = K(s + 3)^8/(s + 37)^8 is provided in Fig. 5.31. The system is not stable, since the values of the gain at the crossings are as follows:
Gain values: 4.2504 × 10^11, 3.7155 × 10^11, 4.4279 × 10^10, 2.4789 × 10^10
Frequencies: ±0.6666, ±2.5894, ±10.7879, ±21.5067.


Figure 5.31 Root locus of Example 5.28.

5.5.1.4 Examples of systems with NMP zeros

Another possibility is when the unstable plant has NMP zero(s) as well. To motivate the discussion, consider a plant with unstable poles and zeros such that a branch of the root locus connects an unstable pole and zero and resides in the ORHP. This branch is unstable for any value of the gain (with the same sign), and thus the whole system will be unstable unless we in some way stabilize this unstable branch. To this end, one may think of the following schemes; see Fig. 5.32. Note that in the right panel the relative degree of the controller is 3, so it has 3 asymptotes on the ±60°, 180° lines. If the relative degree is 2, then the asymptotes will be on the ±90° lines (which should also be stable). Our answer to this argument is that[17] these schemes are not correct, since it is inevitable that in the left panel k1 < k2 and in the right panel k1 < k2 < k3. That is, these systems are always unstable. Thus we should think of other schemes. The other possibilities that we may think of are provided in the panels of Fig. 5.33. Note that in the left panel the controller has both stable and unstable dynamics. The condition k1 > k2 must be satisfied in the left panel.[18] In the right panel the controller has an unstable dynamics (as shown) and some

[17] Note that such root loci exist, e.g., for the system K(s - 2)(s + 60)/[(s - 180)(s + 4)(s + 9)].
[18] Note that the root locus may have other branches as well. We have shown just the part that is directly related to the original unstable branch.


Figure 5.32 Proposed stabilization schemes for the unstable root locus branch.


Figure 5.33 Proposed stabilization schemes for unstable NMP systems.

other dynamics (not shown) in such a way that the root locus has the given shape. Fortunately these root loci are achievable for some systems; we provide an example.[19] The difficulty is in finding the dynamics of the controller! Unless we use the theoretical framework of the aforementioned "factorization approach" or a MATLAB command which numerically finds an answer (in the Robust Control Toolbox of MATLAB, if the algorithm converges, which is not guaranteed), finding an answer by trial and error is certainly quite difficult. However, this is not our concern here; our purpose is to propose a solution and gain more insight, which we have fulfilled. After this motivation we present an example.

Example 5.29: In the first example we show that certain systems have root locus branches like the ones given in Fig. 5.33. The system L(s) = -s(s - 6.594)(s - 0.367)(s + 0.1824)(s^2 + 1.126s + 0.8796)(s^2 + 2.022s + 2.347)/[(s - 1)(s - 2)(s + 1)^7] is such a system; see Fig. 5.34. The j-axis crossings are:
Gain values: ∞, 1.7946, 1.7783, 1.8185
Frequencies: 0, 1.0435, 1.3017, 1.7690

and thus the system is stable.[20]

[19] Note that the unstable pole and zero of the controller may coincide with those of the plant so that the overall system has a double unstable pole and a double unstable zero.
[20] If the zero at -0.1824 is substituted with a zero at -0.181, the root locus branches will be like the right panel of Fig. 5.33.


Figure 5.34 Example 5.29. Left: Whole root locus; Right: Magnification of a part of it.

However, we should note that the gain values are such that the response will be oscillatory. That is, the system has barely met the stability condition alone, and it does not have a good performance! Unfortunately, even this mere objective of internal stability is not achievable for every given system; see Exercise 5.56. Another interesting plant is discussed in the following Example 5.30.

Example 5.30: Let the system be given by P(s) = (s - 1)(s - 3)(s - 6)/[(s - 2)(s - 4)(s - 5)]. See Fig. 5.35, top left panel. If we want to stabilize it by an internally stable controller, there are some possibilities for the root locus of the controlled system, among which are the ones provided in the other panels of Fig. 5.35, with appropriate conditions on the gain values at the crossings. (Note that the rest of the root locus is not shown.) For instance, in the top right panel, denoting the gain values in increasing order of frequency as k1, ..., k5, there should hold min{k1, k3, k5} > max{k2, k4}.

Figure 5.35 Example 5.30. Top left: Pole-zero pattern of the plant; Other panels: The proposed stabilizing root loci.


We close this example by encouraging the reader to investigate whether this system is indeed internally stabilizable and to provide an internally stabilizing controller, if one exists.

Example 5.31: For the purpose of illustration (and not implementation!) we show how we can stabilize the plant of Example 5.30 by an internally unstable controller. We show this for the even more difficult task of tracking: we design a ramp-tracking controller for it. To this end we cancel out the NMP zeros of the plant with NMP poles of the controller. Then we simply proceed as in Examples 5.22 and 5.23. A controller like C(s) = K(s + 0.5)^3((s + 1)^2 + 9)/[s^2(s + 30)^3(s - 1)(s - 3)(s - 6)] or C(s) = K(s + 2)^5/[s^2(s + 38)^3(s - 1)(s - 3)(s - 6)] works.

Remark 5.26: The problem that we are concerned with here is exactly Question 3.3 of Chapter 3. It must have become clear that NMP zeros are the main stumbling block in the stabilization problem. (Of course a finer characterization of this observation would be desirable.) However, the good news is that sometimes they can be avoided by a redefinition of the output or a structural redesign of the original plant. The interested reader is referred to the pertinent literature like (Nokhbatolfoghahaee et al., 2016) and Chapter 10. Apart from the factorization approach, another framework for tackling this problem was (and still is) optimization, like H∞ norm optimization. However, the bottlenecks of optimization problems exist here as well; see Section 4.13. On the other hand, we should stress that we cannot guarantee a good performance, but merely stabilization (recalling Example 5.29), if we succeed. (Note that we may formulate and encapsulate the performance requirement in the optimization framework, as we do in Chapter 10, but we still face the intrinsic difficulties of nonconvex optimization problems.) Finally, we add that recent results use algebraic geometry techniques as well. (Question: What is the impact of the pole-zero pattern, for both cases of systems with or without NMP zeros? See Exercise 5.56. Note that the precise answer is not known yet!) See also item 4 of the Further Readings. In the next examples and worked-out problems to follow we discuss some tractable cases which are appropriate for this undergraduate course.


Example 5.32: Propose stabilizing and tracking controllers for the system P(s) = (s - 2)/(s - 1). It is easily seen that the proportional controller with -1 < K < -0.5 stabilizes the system. If we want to try a dynamic controller, then, e.g., either of the root loci of Fig. 5.36 is achievable. For instance, the controllers C1 = K(s - 3)/(s + 10) and C2 = K(s - 3)(s + 3)^2/[(s + 2)((s + 1)^2 + 25)] result in the root loci of the left and right panels, respectively.

Figure 5.36 The proposed root loci for the controlled system of Example 5.32.

Note that in order for the left panel to be realized, the real pole of the controller must be far from the origin. If the system has small unwanted poles, like P0(s) = K(s - 2)/[(s - 1)(s + 0.5)], then we cancel out that pole with a zero of the controller. Thus we must also add a pole to the controller. (Why?) For the aforementioned system the controllers can be C'1 = K(s - 3)(s + 0.5)/(s + 20)^2 and C'2 = K(s - 3)(s + 3)^2(s + 0.5)/[(s + 4)(s + 2)((s + 1)^2 + 25)]. Note that we have altered the pole of the controller C'1 from -10 to, e.g., -20; otherwise it will not stabilize the system. We do not need to alter the parameters of C'2. With C'2, however, the root locus will have a different shape. To have a visible root locus we provide the answer for the controller C'2new = K(s + 0.5)(s + 7)^2(s - 3)/[(s + 8)(s + 6)((s + 1)^2 + 25)]; see Fig. 5.37.

Figure 5.37 The resultant root locus for the system P0 of Example 5.32 with C'2new.
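The proportional range -1 < K < -0.5 quoted in Example 5.32 can be seen directly: the closed-loop characteristic equation (s - 1) + K(s - 2) = 0 is first order, with the single pole at (1 + 2K)/(1 + K). A one-line check (an illustrative sketch, not from the book):

```python
def closed_loop_pole(K):
    # 1 + K(s - 2)/(s - 1) = 0  =>  (1 + K)s - (1 + 2K) = 0
    return (1.0 + 2.0 * K) / (1.0 + K)

# the pole is negative exactly for -1 < K < -0.5
print(closed_loop_pole(-0.7))   # a stable (negative) pole
```

Outside that window the sign of either the numerator or the denominator flips and the pole moves into the right half-plane.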


Example 5.33: Design the simplest step-tracking controller C(s) for the system P(s) = (s - 2)/(s - 1). It is left to the reader to investigate that the answer is a controller like C(s) = K(s + 0.2)/[s(s + 40)], or a nearby controller. With this controller the root locus for negative gain looks like Fig. 5.38, which is not drawn to scale. The j-axis crossings are (K, ω) = (-22.5261, 0.7396), (-38.4723, 5.4065). The best step response is obtained with K = -31, given in the same figure, right panel. The poor transient response of the system will be theoretically justified in Chapter 10. (Also recall that the pole and zero of the controller affect the performance, and with another controller we can have a better result.)


Figure 5.38 Example 5.33. Left: Root locus for negative gain; Right: Step response.
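The crossings quoted in Example 5.33 can be checked with the third-order Routh-Hurwitz conditions. The closed-loop characteristic polynomial is s(s - 1)(s + 40) + K(s - 2)(s + 0.2) = s^3 + (39 + K)s^2 + (-40 - 1.8K)s - 0.4K, and s^3 + a2 s^2 + a1 s + a0 is stable iff a2 > 0, a0 > 0, and a2·a1 > a0. A small sketch (not from the book):

```python
def stable_for(K):
    # s^3 + a2 s^2 + a1 s + a0 is Hurwitz iff a2 > 0, a0 > 0 and a2*a1 > a0
    a2, a1, a0 = 39.0 + K, -40.0 - 1.8 * K, -0.4 * K
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

print([K for K in range(-45, 0) if stable_for(K)])   # roughly -38 .. -23
```

The resulting window agrees with the quoted crossings at K ≈ -38.47 and K ≈ -22.53 and contains the recommended K = -31.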

Question 5.6: How should we modify the controller to address the plant P(s) = (s - 2)/[(s + 1)(s - 1)]?

Example 5.34: Design the simplest stabilizing and step-tracking controller C(s) for the system P(s) = (s - 1)/(s - 2). The answer for the first part is the proportional controller with -2 < K < -1, and for the second part a controller like C(s) = K(s - 0.2)/[s(s - 60)], or a nearby controller. With this controller the root locus for positive gain looks like Fig. 5.39, which is not drawn to scale. The j-axis crossings are (K, ω) = (62.2753, 6.7280), (99.5581, 0.7281). The best step response is with K = 75, given in the same figure, right panel. Note that an argument similar to that of the previous example is valid here as well.



Figure 5.39 Example 5.34. Left: Root locus for positive gain; Right: Step response.
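The crossing pairs (62.2753, 6.7280) and (99.5581, 0.7281) in Example 5.34 can be recovered by hand. The characteristic polynomial is s(s - 2)(s - 60) + K(s - 1)(s - 0.2) = s^3 + (K - 62)s^2 + (120 - 1.2K)s + 0.2K, and a cubic crosses the j-axis where the Routh boundary a2·a1 = a0 holds, with crossing frequency ω = sqrt(a1). That boundary is a quadratic in K; solving it (a sketch, not from the book):

```python
import math

# (K - 62)(120 - 1.2K) = 0.2K   =>   -1.2K^2 + 194.2K - 7440 = 0
a, b, c = -1.2, 194.2, -7440.0
disc = math.sqrt(b * b - 4.0 * a * c)
K_low, K_high = (-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)
w_low = math.sqrt(120.0 - 1.2 * K_low)    # frequency at the K_low crossing
w_high = math.sqrt(120.0 - 1.2 * K_high)  # frequency at the K_high crossing
print(K_low, w_low)    # ~62.275, ~6.728
print(K_high, w_high)  # ~99.558, ~0.728
```

The recommended K = 75 sits comfortably inside the window (62.28, 99.56).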

Question 5.7: How should we modify the controller to address the plant P(s) = (s - 1)/[(s + 1)(s - 2)]?

The above-discussed problems are "difficult" problems which are "totally NMP pole/zero" and rarely represent an actual system. In fact, the actual difficult systems that we listed in Sections 2.3.1-3.1 of Chapter 2 usually have at most one NMP pole and one NMP zero. Examples that have more NMP zeros are rare. They are usually found in networks, e.g., certain configurations of interconnected systems (like tanks) and vibratory systems (loudspeakers and microphones in a chamber). We studied the more complicated cases for their theoretical importance and also in order to get deeper insight into the problem. More realistic cases are treated in the following section under "Simple Systems," although there the problem has its own complexities.

5.5.2 Simple systems

In practice many actual systems have only CLHP poles and/or zeros. Such systems can be treated more easily than the previous "totally unstable" ones. The design philosophy can however be the same. That is, based on the tracking objective we add enough integrators to the controller. If needed, we should also add zeros to the controller.[21] This is shown in the following examples.

[21] The stabilization method (i.e., the synthesis, as the term will be defined in Chapter 9) is not unique. Nevertheless, in this chapter this seems to be the only tractable way. As we have mentioned in Remark 5.25, in Chapter 9 we will learn other systematic methods for stabilization of unstable branches of the root locus, namely by adding phase to the system by the use of lead control.


Example 5.35: Design a controller C(s) for the plant P(s) = 1/[(s + 1)(s + 2)] to have negligible error in tracking ramp inputs. We know from Chapter 4 that the controller should have the factor 1/s. But with this simple controller the steady-state error in ramp tracking (1/Kv = (2/K) × 100%) will be considerable, because, as the root locus shows, the gain cannot exceed a certain value (K = 6) due to the stability requirement; see the left panel of Fig. 5.40. Indeed, even before instability occurs the output will be oscillatory, as the "dominant" poles of the system approach the j-axis with increasing gain value. This calls for pulling the unstable branches of the root locus to the left on ±90° asymptotes, which is achieved by considering a zero for the controller as well. Thus with the controller C(s) = K(s + z)/s the objective is met as follows. First we consider the abscissa for stability: -[(0 + 1 + 2) - z]/2 < 0, or z < 3. Now if we choose K = 100 the steady-state error will be 2%, which is negligible. But where should we choose z: in 0 < z ≤ 1, 1 < z ≤ 2, or 2 < z < 3? The answer can be found only by simulation. The best answer is obtained with z = 0.2. (Question: What happens if we choose z = 1 or z = 2 in the above consideration?) Note that the root locus of this case is not shown. The right panel of Fig. 5.40 shows the ramp tracking performance.

Figure 5.40 Example 5.35. Left: Root locus; Right: Ramp tracking with z = 0.2.
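As a quick numerical cross-check of this design, the Routh-Hurwitz condition for the closed-loop polynomial s(s + 1)(s + 2) + K(s + z) = s^3 + 3s^2 + (2 + K)s + Kz can be scripted. Below is a minimal sketch in Python (the book itself works in MATLAB; the helper names are ours):

```python
def routh_stable_cubic(a3, a2, a1, a0):
    # Routh-Hurwitz for a3 s^3 + a2 s^2 + a1 s + a0:
    # stable iff all coefficients are positive and a2*a1 > a3*a0
    return min(a3, a2, a1, a0) > 0 and a2 * a1 > a3 * a0

def closed_loop(K, z):
    # 1 + C(s)P(s) = 0 with C = K(s+z)/s and P = 1/[(s+1)(s+2)]:
    # s(s+1)(s+2) + K(s+z) = s^3 + 3 s^2 + (2+K) s + K z
    return (1.0, 3.0, 2.0 + K, K * z)

K = 100.0
assert routh_stable_cubic(*closed_loop(K, 0.2))      # the design point z = 0.2
assert routh_stable_cubic(*closed_loop(K, 2.9))      # still stable below z = 3
assert not routh_stable_cubic(*closed_loop(K, 3.2))  # past the limit: unstable
```

For large K the exact Routh bound is z < 3(2 + K)/K, which tends to the abscissa condition z < 3.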

Remark 5.27: Due to uncertainties it is wise not to choose z close to 3 (which results in a poorly stable abscissa).

Example 5.36: Analyze the response of the system of Example 5.35 if the input is a step function. We know that the steady-state error is certainly zero. But is the output satisfactory? Because of large complex poles we can expect an oscillatory output, and this is verified by the simulation results given in Fig. 5.41. Note that the answer is probably different from the expectation conceived from Chapter 4 (Continued)

Introduction to Linear Control Systems

(cont'd) that zeros near the origin result in large overshoots. In this example, because of the pole at the origin, that is not the case; in fact the case z = 0.2 has the least overshoot.

Figure 5.41 Example 5.36. Left: Step response with K = 100: least overshoot z = 0.2, middle z = 1.2, largest overshoot z = 2.2; Right: Step response with different values of K as specified in the panel.

What is the best choice for z, if we wish to use this controller structure? The answer depends on the value of the gain we choose. That is, it is possible to get acceptable results for any z by proper choice of the gain K. The best answer is obtained with z = 1, K = 1. We assume that due to uncertainties exact pole-zero cancellation does not take place, and thus we provide the simulations with z = 1.2, K = 1.

Example 5.37: Design a tracking controller for ramp inputs for the system of Example 5.35 such that the steady-state error is zero. From the previous examples we know that the answer is, e.g., (1) a controller like C1 = K(s + z)^2/s^2 where z is small, like z = 0.5, or a pair of complex conjugate zeros in that vicinity, or (2) a controller like C2 = K(s + z)/s^2 where z is small, like z = 0.1 (refer to its root locus in the right panel of Fig. 5.42; the range of stability is K ∈ (0, 5)). Now we discuss an important issue. Recall from Example 4.3 of Chapter 4, Time Response, that a system with zero ramp error inevitably shows at least one overshoot in tracking a step input. In Example 5.35 (where the error is a small nonzero constant) the output is highly oscillatory for step tracking. It turns out that with the controller of this example, which results in zero steady-state error, if we want a fast and good response in ramp tracking we should choose a large value for the gain, and this (Continued)


(cont'd) results in an oscillatory response in step tracking as well. The ramp response with C1 is given for K = 1 and K = 10. Note that the ramp response with C2 is poor as well (not depicted). If with C1 we increase K to K = 100 the ramp response will be satisfactory, but the step response will be oscillatory as before. See Fig. 5.42. (Note that with C2 we cannot increase the gain beyond K = 5, and thus C2 is not a good control structure for ramp tracking.)

Figure 5.42 Example 5.37. Ramp tracking. Left: C1 with K = 1, 10; Right: Root locus with C2.

Remark 5.28: The observation of Example 5.36 is important and worth emphasizing. It turns out that for certain systems, controllers which result in negligible or zero steady-state error in tracking a ramp input and have a fast and good transient response result in poor and oscillatory responses in step tracking. (This is beyond the well-established fact of at least one overshoot; see Example 4.3 and also Remark 4.20 of Chapter 4.) This is an interesting topic and is worth further theoretical investigation. Studying the same issue for parabolic and ramp inputs is desirable as well; see Problem 4.6 and Exercise 4.7 of Chapter 4.

Remark 5.29: On the other hand, Question 4.17 of Chapter 4 is noticeable in this system. If we increase the period of the input (or decrease its frequency) the performance improves.

Example 5.38: Design a step tracking controller for the system P(s) = 1/(s^2 + 1). We design a proper controller for this system, whose relative degree is two. By proper choice of the controller parameters we stabilize the abscissa and the two branches of the root locus which originate from the imaginary poles. A possible controller is C(s) = K(s + z1)(s + z2)/[s(s + p)], which can be (Continued)


(cont'd) realized as an actual PID. For stabilization there should hold p > z1 + z2. The exact values of the parameters are determined by simulation. To demonstrate the impact of this choice, two controllers which result in different root loci are discussed in the following; see Fig. 5.43. C1 = K(s + 1)(s + 0.5)/[s(s + 20)]: the values K = 90, 119, 200 are designated on the root locus, and the step responses are also shown. Note that the outputs may not be in accordance with one's expectations; this clearly shows that it is not possible to find the appropriate values of the gain by looking at the root locus alone. C2 = K(s + 2)(s + 1)/[s(s + 20)]: the values K = 100, 150, 300 are shown on the root locus, and the step responses are also given. Here again the outputs may not be in accordance with expectations, leading to the same conclusion.

Figure 5.43 Example 5.38. Top left: Root locus with C1; Top right: Step response: fastest K = 200, middle K = 119, slowest K = 90; Bottom left: Root locus with C2; Bottom right: Step response: fastest K = 300, middle K = 150, slowest K = 100.
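The stability of all the designated gains, and the necessity of p > z1 + z2, can be verified with a fourth-order Routh test on the closed-loop polynomial s(s + p)(s^2 + 1) + K(s + z1)(s + z2). A Python sketch (our own cross-check, not from the text):

```python
def routh_stable_quartic(a):
    # Routh-Hurwitz for a[0] s^4 + a[1] s^3 + a[2] s^2 + a[3] s + a[4]
    a4, a3, a2, a1, a0 = a
    if min(a4, a3, a2, a1, a0) <= 0:
        return False
    b1 = (a3 * a2 - a4 * a1) / a3   # first element of the s^2 row
    if b1 <= 0:
        return False
    c1 = (b1 * a1 - a3 * a0) / b1   # first element of the s^1 row
    return c1 > 0

def char_poly(K, z1, z2, p):
    # closed loop of P = 1/(s^2+1) with C = K(s+z1)(s+z2)/[s(s+p)]:
    # s(s+p)(s^2+1) + K(s+z1)(s+z2)
    return (1.0, p, 1.0 + K, p + K * (z1 + z2), K * z1 * z2)

# C1 = K(s+1)(s+0.5)/[s(s+20)]: the gains marked on the locus are all stable
for K in (90.0, 119.0, 200.0):
    assert routh_stable_quartic(char_poly(K, 1.0, 0.5, 20.0))

# C2 = K(s+2)(s+1)/[s(s+20)]
for K in (100.0, 150.0, 300.0):
    assert routh_stable_quartic(char_poly(K, 2.0, 1.0, 20.0))

# the condition p > z1 + z2 is necessary: here the s^2 pivot is K(p - z1 - z2)/p
assert not routh_stable_quartic(char_poly(100.0, 2.0, 1.0, 2.0))
```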


Figure 5.44 Example 5.38. Left: Root locus with ideal controller; Right: Output with C2, C2i, C3, C3i.

Finally we note that the choice of the controller parameters certainly depends on the design specifications. If smaller overshoot is desired then smaller zeros are needed, and if smaller settling time is needed then larger zeros are used. These two objectives are contradictory and thus a trade-off is called for. (Question: What is the effect of the pole location?)

Remark 5.30: This remark has three parts. 1. Consider a stable/unstable open-loop system which is closed-loop stable. As we have previously mentioned in Chapter 4, if we add an open-loop pole (e.g., due to the sensor) to it, we can justify that the closed-loop system retains its stability if the pole is large and the gain is chosen properly. The root locus of course changes. For instance, if it has two asymptotes on ±90 deg for positive gain, then it will have three, on 180 and ±60 deg. However, as the root loci of Examples 4.24-4.26 show, if the pole is large enough and the gain is chosen appropriately then the system remains stable. On the other hand, if we add a large pole to the derivative term of a PD or PID controller in order to make it causal, then the open-loop zero(s) of the system is (are) changed and the above justification does not seem effective anymore. Nevertheless, common sense tells us that because the number of degrees of freedom is increased, if the parameters are chosen properly then the system will remain stable. However, if the pole is added to the whole controller then no open-loop zero changes. In any case, the stability issue is an interesting topic for rigorous analytical consideration. 2. To simplify the computations, in a great portion of the literature authors consider 'ideal' PD/PID controllers and provide exact values for the common design objectives like overshoot, sensitivities, and gain and phase margins (which are stability margins, to be introduced in Chapter 6). Then they say that a large pole is added to the differentiator upon implementation and that the same values will be obtained by the 'actual' controller. The point is that even if the system retains its stability (which needs to be precisely addressed and should not be taken for granted), all the design specifications will be


different (often all worse), sometimes the difference is quite noticeable, and in any case they depend on the added pole. It also turns out that the initial value of the control signal increases as the added pole gets larger, and this is not desirable if the pole has to be unreasonably large to guarantee stability. Hence, for industrial design (as opposed to a simple homework) it is wiser to work with the actual control structure, if restricted, from the outset, although it is computationally more expensive. (In Chapter 10 we summarize the design procedure.) 3. Finally, for the system of Example 5.38 reconsider C2 = K(s + 1)(s + 2)/[s(s + 20)]. Put in the context of our discussion, it means that the corresponding ideal controller has been C2i = (1/20) K(s + 1)(s + 2)/s. For the sake of comparison, suppose that the ideal controller is C3i = 20 C2i. Then its corresponding actual controller is C3 = 20 C2. The output of the system with these controllers for K = 100 is provided in the right panel of Fig. 5.44. We observe that the difference between the ideal and actual performance can be considerable. The left panel gives the root locus with the ideal controller. The gain margin (GM) and phase margin (PM) are provided below. In Chapter 6 you will learn the meaning of these values, and that the actual systems are inferior to their ideal counterparts.

C2:  GM = -20.9 dB, PM = 44.1 deg
C2i: GM = -23.5 dB, PM = 60.1 deg
C3:  GM = -46.9 dB, PM = 21.1 deg
C3i: GM = -49.5 dB, PM = 88.3 deg

Example 5.39: Design a stabilizing controller for the system P(s) = 2/[s^3(s + 1)]. Recall that this is Problem 3.22, which we considered in Chapter 3, Stability Analysis. There we concluded that the order of the numerator of the controller must be at least two. Because the controller should be realizable, the order of its denominator should also be at least two. We proposed three controllers: C1 = Ks^2/(s + 2)^2 with 0 < K < 2.88, which is internally unstable and has to be discarded, as well as C2 = K(s + 0.1)^2/(s + 2)^2 with 0.165 < K < 1.890, and C3 = K(s + 0.3)^3/(s + 2)^3 with 1.38 < K < 7.95. Here we have a look at the root loci; see Fig. 5.45. Note that the left panel is stable but the design is unacceptable. (Continued)


(cont’d)

Figure 5.45 Root loci of Example 5.39. Left: C1 ; Right: C2 ; Bottom: C3 .
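The quoted range for C2 can be cross-checked numerically with a generic Routh-Hurwitz routine. Below is a Python sketch (the routine and its names are ours, not from the text):

```python
def routh_stable(coeffs):
    # Generic Routh-Hurwitz test; coeffs in descending powers of s.
    # Returns True iff the first column of the Routh array is all positive.
    r1, r2 = list(coeffs[0::2]), list(coeffs[1::2])
    first_col = [r1[0]]
    while r2:
        first_col.append(r2[0])
        if r2[0] == 0:
            return False  # zero pivot: at best marginally stable
        nxt = []
        for j in range(len(r1) - 1):
            b = r2[j + 1] if j + 1 < len(r2) else 0.0
            nxt.append((r2[0] * r1[j + 1] - r1[0] * b) / r2[0])
        r1, r2 = r2, nxt
    return all(x > 0 for x in first_col)

def char_coeffs(K):
    # closed loop of P = 2/[s^3 (s+1)] with C2 = K(s+0.1)^2/(s+2)^2:
    # s^3 (s+1)(s+2)^2 + 2K (s+0.1)^2
    return (1.0, 5.0, 8.0, 4.0, 2 * K, 0.4 * K, 0.02 * K)

assert routh_stable(char_coeffs(1.0))      # inside the quoted range (0.165, 1.890)
assert not routh_stable(char_coeffs(0.1))  # below it
assert not routh_stable(char_coeffs(2.0))  # above it
```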

5.6 Summary

The root locus of a system refers to the locus of the poles of the closed-loop system. In this chapter we have studied the root locus method, by which we can draw the root locus using the open-loop information of the system, without computing the closed-loop poles. The method has some simple rules, which have been fully detailed, and the nitty-gritty of the subject has been addressed. We have then extended the root locus method to the case where more than one parameter in the system varies, the so-called root contour problem. The root locus method also gives us guidelines for controller design. We have discussed controller design in the two frameworks of difficult and simple systems in detail, with the aid of several instructive examples. Numerous worked-out problems at the end of the chapter enhance the learning of the subject.

5.7 Notes and further readings

1. Evans did not present any set of rules in his articles. The rules that we have presented were later assembled in the literature. Moreover, numerous results exist in the literature concerning the root sensitivity problem. The interested reader can consult the pertinent literature. The contributions of this chapter are excerpts of Bavafa-Toosi, 1996. 2. The root locus can be seen as stability analysis and synthesis when there is uncertainty in one parameter. If there is more than one uncertain parameter it is named the root contour, as we discussed in Section 5.3 (Byrnes, 1981). The latter problem can also be recast as the robust root locus problem, in the general setting 1 + L(s) := 1 + K N(s,q)/D(s,q) = 0, where q is the vector of uncertain parameters, or the uncertainty vector, in the bounding set q ∈ Q. A version of this problem is finding the uncertainty set Q so as to assure stability. A nice research paper appeared in 1990 on this topic (Barmish and Tempo, 1990). Further results are, e.g., in (Hwang and Yang, 2005; Nesenchuk, 2002). 3. Some modern studies in the context of the root locus are as follows: fractional-order systems are discussed in (Tenreiro Machado, 2011; Patil et al., 2014; Fioravanti et al., 2012); controller design for damping oscillations with comparisons to LMIs is given in (Pal et al., 2001); plug-in adaptive controller design in (Miyamoto et al., 2000); coincident root loci can be found in (Kurmann, 2012); duality of multiple root loci is handled in (Lee and Sturmfels, 2016); the root locus for SISO dead-time processes is revisited in (Gumussoy, 2012); the saturated root locus is presented in (Ching et al., 2008); multivariable root locus in (Guo et al., 1994; Owens, 1983; Saberi and Sannuti, 1989). The last reference presents a design method which assigns the rates at which the closed-loop poles move along their loci. Other interesting results can also be found in (Byrnes et al., 1994; Sekara and Rapaic, 2015; Khodaverdian et al., 2015; Kwakernaak and Sivan, 1972). 4. In brief, we should clarify that the answer to Question 3.3 of Chapter 3 is not known yet, except in certain very restricted cases. For instance, in (Smith and Sondergeld, 1986) it is shown that if the plant has more than one NMP zero then the minimum order of the controller, in general, is not expressible in terms of the order of the plant. Further explanations of the difficulty of the problem are provided in Exercise 5.56. Some other pertinent results are reported in (Chen et al., 1995; Du et al., 2012; Linnemann, 1988; Seraji, 1975, 1981; Smith, 1986; Tarokh, 1987). 5. The stabilization problem that we have considered in this chapter is, in other terms, the "dynamic output feedback" problem, where we design a dynamic controller. A simpler version of it is the "static output feedback" problem, where we use merely a constant-gain controller. The output feedback problem has been investigated for virtually all classes of systems: switching systems, stochastic systems, large-scale systems, nonlinear and time-varying systems, etc. 6. In Chapter 2 you have seen several models of electrical and mechanical systems. Apart from this, if you have specialized expertise in these fields you can conceive even more complicated systems and the associated models. Thus, proposing a plant whose model is that of a stable controller should be easy for you. But how about an unstable controller? How should we implement, e.g., the controller in Example 5.34? 7. Development of the counterparts of the results of this chapter for DAE, fractional-order, positive, constrained, multiscale, etc. systems is desirable.

5.8 Worked-out problems

Problem 5.1: Draw the root locus for the system L(s) = K/[s(s + 1)(s + 20)] and determine K such that the dominant poles have ζ = 0.5. For this value of gain, obtain the step response in the time domain, both exactly and approximately, i.e., considering the dominant poles only.

First we specify the open-loop poles and zeros on the s-plane. Then we determine the part(s) of the real axis which belong(s) to the root locus. Next we note that the root locus has three main branches. One is already identified: the one going to minus infinity from -20. The other two start from a break-away point which resides between 0 and -1 on the negative real axis and then go to infinity at angles ±60 deg. Therefore there are necessarily j-axis crossings. We find the break point, the j-axis crossings, and the abscissa in the following. To find the break-away point we compute K = -s(s + 1)(s + 20) and solve dK/ds = 0. Thus s = -13.5, -0.49. As we have analyzed before, we know that s = -13.5 is not acceptable but s = -0.49 is. However, let us verify this. At s = -13.5 the sign of K is negative, which means that s = -13.5 must be discarded; at s = -0.49 the sign of K is positive, which means that s = -0.49 is an actual break point. To determine the j-axis crossings we construct Routh's array, set the element of the s^1 row equal to zero, and solve the auxiliary equation made from the s^2 row. Hence,

s^3 : 1            20
s^2 : 21           K
s^1 : -K/21 + 20
s^0 : K

Therefore -K/21 + 20 = 0 and K = 21 × 20 = 420. This is the value of K at the crossings. On the other hand, 21s^2 + 420 = 0, and thus s = ±j4.47. The abscissa is at the point s = -(0 + 1 + 20)/3 = -7. The asymptotes cross the j-axis at ω = ±7 tan 60° = ±12.12. Thus, practicing care, we must draw the root locus (the upper part suffices), then draw the line of ζ = 0.5, and read the value of the intersection point s. This results in s1,2 = -0.45 ± j1.65. The other pole is obtained by dividing the characteristic equation by (s - s1)(s - s2) = s^2 + 0.9s + 2.92 and ignoring the remainder. Although in this problem the division can be made independently of the value of K, in general this is not the case and the value of K is needed for doing the division. Thus we show how to find it. There holds K = |si| · |si + 1| · |si + 20|, in which i is either 1 or 2. Hence K = 58.36. The characteristic equation is s^3 + 21s^2 + 20s + 58.36, the divisor is s^2 + 0.9s + 2.92, and the quotient is s + 20.1. That is, the other pole is s = -20.1. Simulations are given in Fig. 5.46. Note that the second-order approximation refers to the system (58.36/20.1)/(s^2 + 0.9s + 2.92).
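The key numbers of this problem are easy to reproduce with a few lines of Python (our own sketch, standard library only; the book itself works with MATLAB):

```python
import math

# break points: dK/ds = 0 with K = -s(s+1)(s+20) = -(s^3 + 21 s^2 + 20 s),
# i.e. 3 s^2 + 42 s + 20 = 0
disc = 42.0 ** 2 - 4 * 3 * 20.0
s_a = (-42.0 + math.sqrt(disc)) / 6.0   # ~ -0.49: actual break-away point
s_b = (-42.0 - math.sqrt(disc)) / 6.0   # ~ -13.5: rejected (K < 0 there)
assert abs(s_a + 0.49) < 0.01 and abs(s_b + 13.5) < 0.01

# j-axis crossing from the Routh array: K = 21*20 = 420, then 21 s^2 + 420 = 0
assert abs(math.sqrt(420.0 / 21.0) - 4.47) < 0.01

# gain at the chosen dominant poles s = -0.45 +/- j1.65 (magnitude condition)
s1 = complex(-0.45, 1.65)
K = abs(s1) * abs(s1 + 1) * abs(s1 + 20)
assert abs(K - 58.36) < 0.2

# third pole from the sum of the closed-loop poles: 21 - 0.9 = 20.1
assert abs(21.0 - 0.9 - 20.1) < 1e-9
```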


Figure 5.46 Problem 5.1. Left: Part of the root locus; Right: Step response; Solid: original system; Dotted: second-order approximation.

Problem 5.2: Two very similar systems are considered in this problem and the next, Problem 5.3. Draw the root locus for the system L(s) = K(s + 1)/[s(s - 1)(s^2 + 6s + α)], α = 28, and determine the range of K for stability.

We start by specifying the open-loop poles and zeros on the s-plane, and specify the parts of the real axis which belong to the root locus. These are the segments [0, 1] and (-∞, -1]. We note that there should necessarily exist two break points, one in each of these segments. To determine them we solve dK/ds = 0, where K = -s(s - 1)(s^2 + 6s + α)/(s + 1). This results in -2.6164, 0.442, and -1.2461 ± j2.553. We know that the first two are acceptable. (This can easily be verified by noticing that K > 0 at these points.) As for the complex points, we compute K at them; K is complex, and thus these points are not break points and must be discarded. Next we find the j-axis crossings:

s^4 : 1                22        K
s^3 : 5                K - 28
s^2 : (138 - K)/5      K
s^1 : -(K^2 - 141K + 3864)/(138 - K)
s^0 : K

We set the element in the s^1 row to zero and solve the auxiliary equation. Thus K^2 - 141K + 3864 = 0, and K1 = 37.2397, K2 = 103.7603. Now we solve ((138 - K)/5) s^2 + K = 0. With K1 we find s = ±j1.3594, and with K2 we find s = ±j3.8926. The range of stability is the range of K for which the first column is positive: K1 < K < K2. We also compute the angles of departure from the complex poles. For the upper pole it is 180(2h + 1) - (90 + 124.53 + 132.54) +


Figure 5.47 Root locus of Problem 5.2.

(114.64) = -52.43 deg. We note that there are two possibilities for drawing the root locus: (1) The unstable poles break away, cross the j-axis with the gain value K1 at the computed frequencies, and then break in at the negative break-in point, while the stable complex poles cross the imaginary axis with the gain value K2 at the computed frequencies and then tend to infinity on the ±60-deg asymptotes, whose abscissa is at -[(0 - 1 + 6) - (1)]/3 = -4/3. (2) The unstable poles have two j-axis crossings and then tend to the asymptotes, while the stable poles break in at the negative break-in point. Which one is correct cannot be decided visually, and a software package like MATLAB is called for. Using MATLAB we find that the second possibility is the answer, as depicted in Fig. 5.47. Discussion: We can use the MATLAB command "allmargin" for the computation of j-axis crossings. Because the problem is solved numerically, the computed answers are slightly different from the exact answers we have found. MATLAB's answers are K1 = 37.2445 and K2 = 103.675, resulting in s = ±j1.3596 and s = ±j3.8899, respectively. For the sake of brevity, in the following problems we use this command to find the j-axis crossings.

Problem 5.3: This problem is the continuation of Problem 5.2. Draw the root locus for the system L(s) = K(s + 1)/[s(s - 1)(s^2 + 6s + α)], α = 30, and determine the range of K for stability.

From the outset we can expect that the root locus of this system is of the form of the first possibility explained in the previous problem. This is indeed the case, which has to be (and is) verified by MATLAB. We leave it to the reader to follow the solution procedure of the previous problem in detail. The angle of departure from the upper stable complex pole is -50.74 deg, very near that of the previous problem.
The j-axis crossings are at K1 = 38.692 and K2 = 116.2266, resulting in s = ±j1.3184 and s = ±j4.1523, respectively, which are also very near those of the previous problem. The root locus is given in Fig. 5.48.
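Problems 5.2 and 5.3 fit one template. Working out the Routh array of s^4 + 5s^3 + (α - 6)s^2 + (K - α)s + K for general α, the s^1 row vanishes when K^2 - (7α - 55)K + 6α^2 - 30α = 0 (our derivation; for α = 28 it reduces to K^2 - 141K + 3864). A Python sketch verifying both cases, with function names of our own choosing:

```python
import math

def jaxis_crossings(alpha):
    # closed loop: s(s-1)(s^2 + 6s + alpha) + K(s+1)
    #   = s^4 + 5 s^3 + (alpha - 6) s^2 + (K - alpha) s + K
    # The s^1 row of the Routh array vanishes when
    #   K^2 - (7*alpha - 55) K + 6*alpha^2 - 30*alpha = 0
    b, c = 7 * alpha - 55, 6 * alpha ** 2 - 30 * alpha
    d = math.sqrt(b * b - 4 * c)
    K1, K2 = (b - d) / 2, (b + d) / 2
    # auxiliary equation ((6*alpha - 30 - K)/5) s^2 + K = 0 gives the frequency
    def w(K):
        return math.sqrt(5 * K / (6 * alpha - 30 - K))
    return (K1, w(K1)), (K2, w(K2))

(K1, w1), (K2, w2) = jaxis_crossings(28.0)   # Problem 5.2
assert abs(K1 - 37.2397) < 1e-3 and abs(w1 - 1.3594) < 1e-3
assert abs(K2 - 103.7603) < 1e-3 and abs(w2 - 3.8926) < 1e-3

(K1, w1), (K2, w2) = jaxis_crossings(30.0)   # Problem 5.3
assert abs(K1 - 38.69) < 0.01 and abs(w2 - 4.1523) < 0.01
```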


Figure 5.48 Root locus of Problem 5.3.

Problem 5.4: Three very similar systems are considered in this problem and the next two (Problems 5.5, 5.6). Draw the root locus for the system L(s) = K/[s(s^2 + 6s + α)], α = 11, and determine the range of K for stability.

The relative degree of the system is three. It thus has three asymptotes, on the 180 and ±60 deg lines. The abscissa is at -[0 + 6]/3 = -2. The j-axis crossings are at s = ±j3.3166 with K = 66. The break points are the solutions of dK/ds = 0, where K = -s(s^2 + 6s + α). Thus s = -2.57, -1.42. Both are acceptable because K has positive sign at them. The angle of departure from the upper complex pole is 180(2h + 1) - (90 + 154.76) = -64.76 deg. The root locus is as shown in Fig. 5.49.

Problem 5.5: This problem is the continuation of Problem 5.4. Draw the root locus for the system L(s) = K/[s(s^2 + 6s + α)], α = 12, and determine the range of K for stability.

The solution is as that of the previous problem, except that the j-axis crossings are at s = ±j3.4641 with K = 72, and the break points are at s = -2, -2. Thus there is an intersection point of order three at this point. The angle of departure from the upper complex pole is 180(2h + 1) - (90 + 150) = -60 deg. The root locus is as shown in Fig. 5.50. It is easy to prove that the root locus consists of three straight lines. [Hint: Consider the geometry of the configuration, substitute s = σ + jω in the angle condition (which gives the root locus), and thence derive the equations of the three branches of the locus.]

Problem 5.6: This problem is the continuation of Problems 5.4, 5.5. Draw the root locus for the system L(s) = K/[s(s^2 + 6s + α)], α = 13, and determine the range of K for stability.

Here as well the solution is similar to that of the previous problems, except that the j-axis crossings are at s = ±j3.6056 with K = 78, and the break points are

Figure 5.49 Root locus of Problem 5.4.

Figure 5.50 Root locus of Problem 5.5.


at s = -2 ± j0.57. Because K is complex at these points, they must be discarded. The angle of departure from the upper complex pole is 180(2h + 1) - (90 + 146.31) = -56.31 deg. The root locus is as shown in Fig. 5.51.
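Since Problems 5.4-5.6 differ only in α, the computations can be packaged once: the Routh array of s^3 + 6s^2 + αs + K gives the crossing at K = 6α with ω = sqrt(α), and the break-point candidates solve 3s^2 + 12s + α = 0. A Python sketch (our own helper, standard library only):

```python
import cmath, math

def analyze(alpha):
    # L(s) = K / [s (s^2 + 6 s + alpha)]; closed loop: s^3 + 6 s^2 + alpha s + K
    # Routh: j-axis crossing at K = 6*alpha, auxiliary 6 s^2 + 6*alpha = 0
    K_cross = 6 * alpha
    w_cross = math.sqrt(alpha)
    # break points: dK/ds = 0 with K = -(s^3 + 6 s^2 + alpha s),
    # i.e. 3 s^2 + 12 s + alpha = 0
    d = cmath.sqrt(144 - 12 * alpha)
    return K_cross, w_cross, (-12 + d) / 6, (-12 - d) / 6

K, w, s1, s2 = analyze(11.0)          # Problem 5.4: two real break points
assert (K, round(w, 4)) == (66.0, 3.3166)
assert abs(s1 + 1.42) < 0.01 and abs(s2 + 2.57) < 0.01

K, w, s1, s2 = analyze(12.0)          # Problem 5.5: triple intersection at -2
assert (K, round(w, 4)) == (72.0, 3.4641)
assert abs(s1 + 2.0) < 1e-9 and abs(s2 + 2.0) < 1e-9

K, w, s1, s2 = analyze(13.0)          # Problem 5.6: complex pair, discarded
assert (K, round(w, 4)) == (78.0, 3.6056)
assert abs(s1.imag) > 0.5
```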

Figure 5.51 Root locus of Problem 5.6.

Problem 5.7: Three very similar systems are considered in this problem and the next two (Problems 5.8, 5.9). Draw the root locus for the system L(s) = K(s + 1)/[s^2(s + α)] for α = 8.

The pole-zero excess of the system is two, so it has two asymptotes, on the ±90 deg lines. The abscissa is at -[(0 + 0 + 8) - (1)]/2 = -3.5. The solutions of dK/ds = 0 are 0 and -2.75 ± j0.6614. At s = 0, K = 0, which is the starting point of two branches of the root locus. The points -2.75 ± j0.6614 are not acceptable break points. The angles of departure from the double pole at s = 0 are computed from 2θ = 180(2h + 1) - 0 + 0; thus θ = ±90. The j-axis crossings are at s = 0 with K = 0. Actually, this is not a crossing point, since the locus is tangential to the j-axis. The root locus is thus as shown in Fig. 5.52.

Problem 5.8: This problem is the continuation of Problem 5.7. Draw the root locus for the system L(s) = K(s + 1)/[s^2(s + α)] for α = 9.

The solution to this problem is as that of Problem 5.7, except that this time the abscissa is at -[(0 + 0 + 9) - (1)]/2 = -4. The solutions of dK/ds = 0 are 0, -3, -3. Thus there is an intersection point of order three: three branches of the locus intersect at this point. Hence, the root locus is as depicted in Fig. 5.53.


Figure 5.52 Root locus of Problem 5.7.

Figure 5.53 Root locus of Problem 5.8.

Problem 5.9: This problem is the continuation of Problems 5.7, 5.8. Draw the root locus for the system L(s) = K(s + 1)/[s^2(s + α)] for α = 10.

The solution is again similar to that of Problem 5.7, except that here the abscissa is at -[(0 + 0 + 10) - (1)]/2 = -4.5. The solutions of dK/ds = 0 are 0, -2.5, -4. The point s = 0 is, as before, the starting point of two branches of the locus, which are tangent to the j-axis at this point. The points -2.5 and -4 are both acceptable break points. Therefore, the root locus is as given in Fig. 5.54.
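For this family, K = -s^2(s + α)/(s + 1), and dK/ds = 0 reduces to s[2s^2 + (α + 3)s + 2α] = 0 (our factorization). A Python sketch checking the three cases:

```python
import cmath

def break_candidates(alpha):
    # L(s) = K(s+1) / [s^2 (s+alpha)]; on the locus K = -s^2 (s+alpha)/(s+1)
    # dK/ds = 0  <=>  s [2 s^2 + (alpha+3) s + 2*alpha] = 0
    d = cmath.sqrt((alpha + 3) ** 2 - 16 * alpha)
    return 0.0, (-(alpha + 3) + d) / 4, (-(alpha + 3) - d) / 4

_, s1, s2 = break_candidates(8.0)    # Problem 5.7: complex pair, discarded
assert abs(s1 - complex(-2.75, 0.6614)) < 1e-3

_, s1, s2 = break_candidates(9.0)    # Problem 5.8: double root at -3
assert abs(s1 + 3) < 1e-9 and abs(s2 + 3) < 1e-9

_, s1, s2 = break_candidates(10.0)   # Problem 5.9: two real break points
assert abs(s1 + 2.5) < 1e-9 and abs(s2 + 4) < 1e-9
```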


Figure 5.54 Root locus of Problem 5.9.

Problem 5.10: Stability of a system or its root locus may be discontinuous in the parameter. Some such systems are presented in Problems 5.10-5.13. Numerous other systems are found in other problems as well as in the Exercises. Draw the root locus of the system L(s) = K(s + 2)^2/(s + 0.1)^3.

The system has one zero at infinity, approached along the 180-deg asymptote. There are two j-axis crossings, with K = 0.0029 and K = 0.6895. The first is at ±j0.2039 and the second at ±j1.6698. The angles of departure from the triple pole at s = -0.1 are found from 3θ = 180(2h + 1); thus θ = 180, ±60. The range of stability is K ∈ (0, 0.0029) ∪ (0.6896, ∞). For instance, with K = 0.001 or K = 1 the system is stable, but with K = 0.5 it is unstable. The answer is given in Fig. 5.55.
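The crossing gains follow from the Routh condition on (s + 0.1)^3 + K(s + 2)^2, which reduces to the quadratic 4K^2 - 2.77K + 0.008 = 0 (our expansion). A short Python check:

```python
import math

# closed loop: (s+0.1)^3 + K (s+2)^2
#   = s^3 + (0.3+K) s^2 + (0.03+4K) s + (0.001+4K)
# Routh boundary: (0.3+K)(0.03+4K) = 0.001+4K, i.e. 4K^2 - 2.77K + 0.008 = 0
d = math.sqrt(2.77 ** 2 - 4 * 4 * 0.008)
K1, K2 = (2.77 - d) / 8, (2.77 + d) / 8
assert abs(K1 - 0.0029) < 1e-4 and abs(K2 - 0.6896) < 1e-3

def w_cross(K):
    # auxiliary equation (0.3+K) s^2 + (0.001+4K) = 0
    return math.sqrt((0.001 + 4 * K) / (0.3 + K))

assert abs(w_cross(K1) - 0.2039) < 1e-3
assert abs(w_cross(K2) - 1.6698) < 1e-2
```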

Problem 5.11: Draw the root locus of the system L(s) = K(s^2 + 2s + 2)/[(s + 0.1)(s + 0.2)^2].

The system has one zero at infinity, approached along the 180-deg asymptote. There is one real break point, between -0.1 and -0.2. There are two j-axis crossings, with K = 0.0431 and K = 0.4166; the first is at ±j0.4079 and the second at ±j0.9556. The angle of arrival at the upper complex zero is 180(2h + 1) + (131.98 + 2 × 128.65) - (90) = 119.28 deg. The range of stability is K ∈ (0, 0.0431) ∪ (0.4167, ∞). For instance, with K = 0.01 or K = 1 the system is stable, but with K = 0.1 it is unstable. The root locus is shown in Fig. 5.56.

Problem 5.12: Draw the root locus of the system L(s) = K(s + 10)(s + 5)(s + 0.05)(s + 0.02)/[(s + 100)^2 (s + 1)(s + 0.2)(s + 0.01)(s + 0.001)(s - 0.0005)].


Figure 5.55 Root locus of Problem 5.10.

Figure 5.56 Root locus of Problem 5.11.

The answer is given in Fig. 5.57. The j-axis crossings computed by MATLAB are as follows. Frequencies at the crossings: [0, 0.0038956, 0.0219541, 0.4403157, 6.8626275, 84.9333959]. The respective gain values at the crossings: [2.0000e-04, 6.6372e-03, 2.8895e-01, 4.5979e+01, 3.1878e+04, 1.4496e+06].



Figure 5.57 Root locus of Problem 5.12, not drawn to scale.

Denote these values of the gain by KA through KF, where A-F refer to the designated crossing points in Fig. 5.57. Then the system is stable if K ∈ (KA, KB) ∪ (KC, KD) ∪ (KE, KF). Thus, e.g., with K = 0.1 ∈ (KB, KC) the system is unstable, and with K = 1 ∈ (KC, KD) the system is stable. Note that if the unstable pole at 0.0005 is substituted by a stable pole at -0.0005, and/or a pair of complex conjugate NMP zeros at 1 ± j20 is added to the system, the shape of the root locus does not change. However, the frequencies and the corresponding gain values at the j-axis crossings will change. The student is encouraged to verify, using MATLAB, that in the former case the change in the gain value and frequency of the near crossings (A-C) is drastic, but in the far crossings (D-F) it is quite small. In the latter case all gain values change drastically; moreover, the frequencies of the near crossings (A-C) only slightly alter, contrary to those of the far crossings (D-F).

Problem 5.13: Draw the root locus of the system L(s) = K(s + 1000)(s + 500)(s + 10)(s + 5)(s + 0.05)(s + 0.02)/[(s + 10,000)(s + 100)(s + 50)(s + 1)(s + 0.2)(s + 0.01)(s + 0.001)(s + 0.0005)].

The answer is given in Fig. 5.58. The j-axis crossings computed by MATLAB are as follows. Frequencies at the crossings: [0.0082072, 0.0184208, 0.4395549, 7.4381190, 71.8912330, 589.5774, Inf]. The corresponding gain values at the crossings: [3.2144e-04, 1.9758e-03, 4.5822e-01, 3.7694e+02, 1.5129e+04, 2.3282e+06, Inf]. Denote the finite values of the gain by KA through KF, where A-F refer to the designated crossing points in the figure. Then the system is stable if K ∈ (0, KA) ∪ (KB, KC) ∪ (KD, KE) ∪ (KF, ∞). Thus, e.g., with K = 10 ∈ (KC, KD) the system is unstable, and with K = 1000 ∈ (KD, KE) the system is stable.


Figure 5.58 Root locus of Problem 5.13, not drawn to scale.

Also note that if the pole at -0.0005 is substituted by the unstable pole 0.0005, the shape of the root locus remains unchanged; however, the parameters of the j-axis crossings will change. Discussion: Let KZ denote the value of K (when it assumes negative values) at the designated j-axis crossing Z, which is at the origin. Then it should be noted that for the range K ∈ (KZ, 0) the system is stable as well. This value is approximately -0.00000204, at zero frequency.

Problem 5.14: Draw the root loci of the systems L1(s) = K(s + 1)(s + 2)^3/s^4 and L2(s) = K[(s + 1)^2 + 4]^2/[(s + 3)^2 + 1]^3.

The answer computed by MATLAB is given in Fig. 5.59. Note that, especially for the right panel, because the scales on the vertical and horizontal axes are not the same, the correct angles are not easily viewable. For L1 the angles of arrival at the triple zero at s = -2 are θ = 0, ±120. The angles of departure from the quadruple


Figure 5.59 Root locus of Problem 5.14. L1 on the left; L2 on the right.

pole at the origin are θ = ±45, ±135. For L2 the angles of arrival at the double zeros at -1 + 2j are 124.31 and 304.31 deg, and the angles of departure from the triple poles at -3 + j are 70.17, 190.17, and 310.17 deg. Respectively, they have been computed from 2θ = 180(2h + 1) + 3(26.56 + 56.31) - 2(90) and 3θ = 180(2h + 1) - 3(90) + 2(206.56 + 123.69). Note that this is a verification of Remark 5.9.

Problem 5.15: Draw the root loci of the systems L1(s) = K(s^2 + 1)/[s(s^2 + 4)] and L2(s) = K(s^2 + 4)/[s(s^2 + 1)].

Figure 5.60 Root loci of Problem 5.15. L1 on the left; L2 on the right.

It is left to the reader to verify that the answers are given in Fig. 5.60.


It is worth stressing that a wrong conclusion should not be drawn here: when the imaginary zeros are larger than the poles the system may still be stable. An example is item (9) of Exercise 4.55 of Chapter 4, which was considered in the discussions of Problem 4.36.

Problem 5.16: At the point s* on the root locus of a system there holds (d^i/ds^i) K|_{s=s*} = 0 for i = 1, 2, and (d^3/ds^3) K|_{s=s*} ≠ 0. Draw the root locus in the vicinity of this point with straight (uncurved) lines for negative values of K. Specify the direction of movement of the poles as K varies from 0 to minus infinity. Note that this problem is not as simple as it looks!

We start by saying that the intersection condition is independent of the sign of K, since if it is satisfied for K it is also satisfied for -K. Moreover, the given locus in Rule 8 is for positive K. Thus the statement of the problem is misleading. In the vicinity of the point s*, i.e., with straight lines, the root locus is given in Fig. 5.61, upper panel. Note that another possibility is that all the arrows have the opposite direction (as in Problem 5.5). For negative K the answer is problem-dependent. For instance, for the systems of Problems 5.5 and 5.8 the answers are given in the bottom left and bottom right panels of the same figure, respectively.

Figure 5.61 Root loci of Problem 5.16.

Question 5.8: Can the point s* be complex? If so, what will the root locus be like in its vicinity?

Problem 5.17: Draw the root locus for the system L(s) = K(s + 1)(s + 10)/[s³(s + 100)(s + 1000)].

414

Introduction to Linear Control Systems

We start by specifying the open-loop poles and zeros on the s-plane and determining the parts of the real axis which are on the root locus. Then we find possible break points by solving dK/ds = 0. Two are found on the portion [−100, −10], and their corresponding K values are positive. Thus they are acceptable, and the only possibility for the root locus is as given in Fig. 5.62.

Figure 5.62 Upper part of the root locus of Problem 5.17, not drawn to scale.
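The two break points on [−100, −10] can be checked numerically. The following is a minimal sketch (not from the book), assuming NumPy is available:

```python
import numpy as np

# Problem 5.17: L(s) = K (s+1)(s+10) / [ s^3 (s+100)(s+1000) ]
N = np.polymul([1.0, 1.0], [1.0, 10.0])
D = np.polymul([1.0, 0.0, 0.0, 0.0], np.polymul([1.0, 100.0], [1.0, 1000.0]))

# Break points solve dK/ds = 0 with K = -D/N, i.e. D'N - DN' = 0
f = np.polysub(np.polymul(np.polyder(D), N), np.polymul(D, np.polyder(N)))
cands = np.roots(f)

# Keep the real candidates on the segment [-100, -10] with positive gain
real = cands.real[np.abs(cands.imag) < 1e-8]
breaks = [s for s in real
          if -100 < s < -10 and -np.polyval(D, s) / np.polyval(N, s) > 0]
print(sorted(breaks))   # two acceptable break points, as stated in the text
```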

Problem 5.18: Draw the root locus of the system L(s) = K/(s + 1)³ and find the range of K for stability. It is easy to verify that the root loci of L and −L are as given in the left and right panels of Fig. 5.63. They are stable for 0 < K < 8 and −1 < K < 0, respectively. Thus the system is stable if −1 < K < 8.

Figure 5.63 Root locus of Problem 5.18. Left: System L; Right: System −L.
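The combined range −1 < K < 8 can be confirmed directly from the closed-loop poles; a quick check, assuming NumPy:

```python
import numpy as np

def stable(K):
    """All closed-loop poles of (s+1)^3 + K = 0 in the open left half-plane?"""
    return all(p.real < 0 for p in np.roots([1, 3, 3, 1 + K]))

print([stable(K) for K in (-0.9, 0.5, 7.9)])   # inside -1 < K < 8: all True
print([stable(K) for K in (-1.1, 8.1)])        # outside the range: all False
```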

Problem 5.19: Draw the root locus of the system L(s) = K(1 − αs)/[s(βs + 1)], where the parameters α, β are known and positive. We note that L is not in the general form considered before, since a coefficient of s is negative. Thus L(s) = −K(αs − 1)/[s(βs + 1)], which means that the root locus of the original system must be sketched for negative values of K′ = −K. Recalling Example 5.18, this in particular means that: (1) On the real axis the total


number of poles and zeros to the right of a test point is even. (2) The term 180(2h + 1) present in the formulae for the angles of departure and arrival should be substituted with 360h. (3) The term 180(2h + 1) present in the asymptote angles is substituted with 360h. In this example the asymptote angle is 360h/(2 − 1) = 0 degrees. The root locus is shown in Fig. 5.64. It is left to the reader to complete the rest of the details: the break-away point, the jω-axis crossing, and the range of K for stability.

Figure 5.64 Root locus of Problem 5.19.

Problem 5.20: Draw the root locus of the system L(s) = K((s + 1)² + 1)/[s(1 − s)] and find the range of K for stability. It is easy to verify that the root locus of L is given in the left panel of Fig. 5.65 and that of −L in the right panel of the same figure. Using MATLAB®, and also verifiable by Routh's test, the left panel is stable for K > 1, whereas the right panel is stable for K < −0.5. Thus the system is stable if K ∈ (−∞, −0.5) ∪ (1, ∞).

Figure 5.65 Root locus of Problem 5.20. Left: System L; Right: System −L.
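The two stability ranges follow from the closed-loop characteristic polynomial s(1 − s) + K((s + 1)² + 1) = (K − 1)s² + (1 + 2K)s + 2K; a small check, assuming NumPy:

```python
import numpy as np

def stable(K):
    """Closed-loop poles of s(1-s) + K((s+1)^2+1) = 0,
    i.e. (K-1)s^2 + (1+2K)s + 2K = 0."""
    return all(p.real < 0 for p in np.roots([K - 1, 1 + 2*K, 2*K]))

print([stable(K) for K in (2.0, 10.0, -1.0, -0.6)])  # in (1, inf) or (-inf, -0.5)
print([stable(K) for K in (0.5, -0.4, 0.999)])       # in between: unstable
```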


Problem 5.21: Consider the plant P(s) = −3(s + 8)(s − 8)/[s(s + 3)((s + 4)² + 10)] and the controller C = K > 0 in the feedback path. Draw the root locus of this system and determine the value of K such that the complex poles have ζ = 0.707. Plot and analyze the step response of the system.

Note that the "effective gain" is K′ = −K < 0, but the actual implemented gain is K > 0. If we work with this K > 0, similarly to the usual case, those break points are acceptable at which K > 0 (or equivalently K′ < 0). From dK/ds = 0 we find three acceptable break points. The angle of departure from the pole at −4 + j√10 is 360h − (141.67 + 107.54 + 90) + (165.23 + 38.32) = 230.34°. The root locus is given in Fig. 5.66, left panel. Note that the asymptote angles are 360h/(4 − 2) = 0, 180 degrees. The jω-axis crossing occurs at K = 1.4266. The range of stability is thus 0 < K < 1.4266. Next we draw the line ζ = 0.707, which makes the angle α = 45 degrees with the negative real axis, and carefully read the intersection coordinates. The point is −1.03 ± j1.04. The value of the gain at this point is found from |L| = 1. Thus K = |s|·|s + 3|·|(s + 4)² + 10|/[3|s² − 64|], where s is either −1.03 + j1.04 or −1.03 − j1.04. It will be K = 0.3187. The characteristic equation is s⁴ + 11s³ + (50 − 3K)s² + 78s + 192K = s⁴ + 11s³ + 49.0438s² + 78s + 61.1955. The quotient of dividing the characteristic equation by (s + 1.03 + j1.04)(s + 1.03 − j1.04) is s² + 8.94s + 28.4849. Thus the other two poles are the roots of this polynomial, i.e., −4.47 ± j2.91. The step response is shown in Fig. 5.66, right panel. As expected, the steady-state value is different from one because this is not a unity-feedback system. Moreover, the inverse response is also observed, as expected.

Figure 5.66 Left: Root locus of the system of Problem 5.21. Right: Its step response.
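The gain obtained from the magnitude condition, and the resulting closed-loop poles, can be verified numerically. A sketch assuming NumPy; since the point s0 is read graphically, the results are approximate:

```python
import numpy as np

# Magnitude condition |L(s)| = 1 at the graphically read point s0 = -1.03 + 1.04j
s0 = -1.03 + 1.04j
K = abs(s0) * abs(s0 + 3) * abs((s0 + 4)**2 + 10) / (3 * abs(s0**2 - 64))
print(round(K, 4))   # about 0.3187

# Closed-loop characteristic polynomial s^4 + 11s^3 + (50-3K)s^2 + 78s + 192K
poles = np.roots([1, 11, 50 - 3*K, 78, 192*K])
print(np.sort_complex(poles))  # pairs near -1.03 +/- 1.04j and -4.47 +/- 2.91j
```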

Problem 5.22: If a branch of the root locus connects two poles, either on the real axis or not, there is necessarily an odd number of break-away points between them. The following system is such an example, where this number is three: L(s) = K(s + 7)((s + 5)² + 1)/[s(s + 3)((s + 2)² + 1)].


To obtain the break-away and/or break-in points, we set dK/ds = 0. Thus s⁶ + 34s⁵ + 390s⁴ + 2042s³ + 5199s² + 6188s + 2730 = 0, which results in {−17.4292, −5.6018 ± 0.7468j, −2.5340, −1.6836, −1.1496}, all the real ones being acceptable. See Fig. 5.67.

Figure 5.67 Root locus of Problem 5.22.
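The sextic quoted above comes from dK/ds = 0 with K = −D/N; it can be reproduced with polynomial arithmetic (a sketch assuming NumPy):

```python
import numpy as np

# Problem 5.22: L(s) = K (s+7)((s+5)^2+1) / [ s(s+3)((s+2)^2+1) ]
N = np.polymul([1, 7], [1, 10, 26])                      # (s+7)((s+5)^2+1)
D = np.polymul(np.polymul([1, 0], [1, 3]), [1, 4, 5])    # s(s+3)((s+2)^2+1)

# dK/ds = 0 with K = -D/N  <=>  D'N - DN' = 0
f = np.polysub(np.polymul(np.polyder(D), N), np.polymul(D, np.polyder(N)))
print(f)   # [1 34 390 2042 5199 6188 2730], the sextic in the text

r = np.roots(f)
real_breaks = np.sort(r.real[np.abs(r.imag) < 1e-8])
print(real_breaks)   # the four real candidates listed in the text
```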

Problem 5.23: If a branch of the root locus connects two zeros, either on the real axis or not, there is necessarily an odd number of break-in points between them. The following system is an example with this number being three: L(s) = K(s + 7)((s + 10)² + 40)/[s(s + 3)((s + 2)² + 1)].

To obtain the break-away and/or break-in points, we set dK/ds = 0. Thus s⁶ + 54s⁵ + 1012s⁴ + 7810s³ + 24,935s² + 33,320s + 14,700 = 0, which results in {−20.8466, −19.3606, −8.4861, −2.2188 ± 0.1244j, −0.8691}, all the real ones being acceptable. From left to right the real ones are, respectively, break-in, break-away, break-in, and break-away points. See Fig. 5.68. (It is good to recall the discussion after Example 5.7 with regard to referring to the second-left break point as a break-away point.)

Problem 5.24: Both of the above situations may exist for one system. For instance: L(s) = K(s + 7)((s + 10)² + 10)/[s(s + 3)((s + 2)² + 1)].

To obtain the break-away and/or break-in points, we set dK/ds = 0. Thus s⁶ + 54s⁵ + 922s⁴ + 6550s³ + 20,015s² + 26,180s + 11,550 = 0, which results in {−29.5709, −11.0271, −8.0866, −2.2855, −2.1303, −0.8997}, all of which are acceptable. See Fig. 5.69.


Figure 5.68 Root locus of Problem 5.23.

Figure 5.69 Root locus of Problem 5.24.

Problem 5.25: If a branch of the root locus connects a zero and a pole, either no or an even number of break points exist between them, for instance: L(s) = (s + 5)(s² + 2s + 2)(s² + 6s + 10)/[s(s² + 4s + 5)(s² + 8s + 17)]. It is easy to verify that the root locus is given by Fig. 5.70. Note that the heights of the complex poles and zeros need not be exactly the same. They can be a little different, e.g., the right and left ones can be at −1 ± 1.3j and −4 ± 1.1j. However, if they differ considerably, e.g., −1 ± 1.7j, then the root locus will have a different shape.


Figure 5.70 Root locus of Problem 5.25.

Question 5.9: Consider Problems 5.22–5.25. Can the multiplicity of break points happen in the complex case? Note that the answer is positive, unlike the claim of some sources. For instance, consider the system L(s) = ((s + 1)² + 4)((s + 3)² + 4)((s + 1)² + 16)((s + 3)² + 16)/[((s + 2)² + 1)((s + 2)² + 25)((s + 1)² + 9)((s + 3)² + 9)].

See also Exercise 5.44.

Problem 5.26: Draw the root contour of the system L(s) = α(s + 1)/[s(1 − βs)]. The root contour starts from the open-loop poles and ends at the open-loop zeros. The open-loop poles are the roots of s(1 − βs), i.e., s = 0 and s = 1/β. As β moves from zero to infinity, the pole 1/β moves from infinity to zero. So the positive real axis including the origin is the locus of the open-loop poles. As for the open-loop zeros, there is one at s = −1 and one at s = +∞. The root contour thus has two branches: one starting from a point on the positive real axis and ending at s = +∞, and the other starting from the origin and ending at s = −1. Note that the same conclusion can be reached if we rewrite the equation 1 + L(s) = 0 as s(1 − βs) + α(s + 1) = 0, i.e., s − βs² + α(s + 1) = 0, and then follow the usual approach for the construction of the root contour.

Problem 5.27: Draw the root contour of the system s(s + 1)(s + 2) + K₁K₂s + K₁ = 0. We start by forming System 1, which is obtained by setting K₂ = 0. Hence s(s + 1)(s + 2) + K₁ = 0, or 1 + K₁/[s(s + 1)(s + 2)] = 0. The root locus of this system is clear and we do not depict it. Now we assign a constant value to K₁ and form System 2 as (s(s + 1)(s + 2) + K₁) + K₁K₂s = 0. Therefore 1 + K₂K₁s/[s(s + 1)(s + 2) + K₁] = 0. The root contour of the original system thus starts from the root locus of System 1 and ends at the open-loop zeros of System 2. There are three such zeros: one finite zero at the origin and two infinite zeros on the asymptotes ±90 degrees.


Note that the closed-loop poles of System 2 depend on K₁. For K₁ = 0.3 the answer is given in the top left panel. For larger values, like K₁ = 0.6, there are two complex conjugate poles and two break points on the real axis. This case is shown in the top right panel. For K₁ = 1 there is an intersection point of order three on the negative real axis, as shown in the bottom left panel. Finally, for larger values of K₁ the answer is given in the bottom right panel. On the other hand, the sum of the closed-loop poles is independent of K₁. The denominator shows that Σp = −3, and thus σ = (Σp − Σz)/(n − m) = (−3 − 0)/(3 − 1) = −1.5 independently of the value of K₁. See Fig. 5.71.
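The invariance of the pole sum can be confirmed numerically from the characteristic polynomial s³ + 3s² + (2 + K₁K₂)s + K₁; a sketch assuming NumPy, with arbitrarily chosen (K₁, K₂) pairs:

```python
import numpy as np

# Problem 5.27: s(s+1)(s+2) + K1*K2*s + K1 = 0 expands to
# s^3 + 3s^2 + (2 + K1*K2)s + K1 = 0, so the pole sum is -3 for any K1, K2.
sums = []
for K1, K2 in [(0.3, 1.0), (0.6, 2.0), (1.0, 0.5), (5.0, 3.0)]:
    poles = np.roots([1, 3, 2 + K1*K2, K1])
    sums.append(poles.sum().real)
print(sums)   # each sum is (numerically) -3
```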

Figure 5.71 Root contour of Problem 5.27. Top left: K₁ = 0.3; Top right: K₁ = 0.6; Bottom left: K₁ = 1; Bottom right: K₁ > 1.

Problem 5.28: Draw the root contour of the system 1 + K(s + α)(s + 2)(s + 3)/[s(s + 12)(s − 1)²] = 0.

We start by finding System 1, which is obtained by setting α = 0. Thus 1 + Ks(s + 2)(s + 3)/[s(s + 12)(s − 1)²] = 0. The root locus of this system is shown in the left panel. Now we fix a constant value of K and form System 2 as 1 + αK(s + 2)(s + 3)/[s(s + 12)(s − 1)² + Ks(s + 2)(s + 3)] = 0. The root contour starts from the root locus of System 1 and ends at the open-loop zeros of System 2. There are four such zeros:


two finite zeros at −2, −3 and two infinite zeros on the asymptotes ±90 degrees. In this example the abscissa depends on K, because the denominator shows that the sum of the poles of System 2 is −(10 + K). For "some" values of K the root contour is given in the right panel. For values of the gain above a certain value the closed-loop poles of System 1 are all real and the root contour will be different. This case is not shown in the picture. See Fig. 5.72.

Figure 5.72 Root contour of Problem 5.28.

Problem 5.29: Draw the root contour of the system L(s) = K₁(1 − K₂s)/[s²(s + 1)]. System 1 is obtained by setting K₂ = 0 as 1 + K₁/[s²(s + 1)] = 0. The root locus of this system is given in the left panel. Now we form System 2 as 1 + K₂(−K₁s)/[s²(s + 1) + K₁] = 0, in which K₂ varies. The root contour of the original system thus starts from the root locus of System 1 and ends at the open-loop zeros of System 2. There are three such zeros: one finite zero at the origin and two infinite zeros on the asymptotes 0, +180 degrees, because the gain of the system is negative. See Fig. 5.73.

Figure 5.73 Root contour of Problem 5.29.


It should be noted that this system needs compensation for stability: it is unstable for all values of K₁, K₂. (Question: Are all possible cases included in the right panel?)

Problem 5.30: Improper systems are not realizable, but for the sake of completeness we present an example about them. Draw the root locus of the system L(s) = Ks(s + 1)²(2 − s)/(s + 3) for both positive and negative values of the gain. We note that this system is improper. By a similar argument we conclude that in such systems as well the root locus starts from the open-loop poles and ends at the open-loop zeros. Because the system is improper, some open-loop poles are infinite; more precisely, there are m − n infinite poles. This system has m − n = 4 − 1 = 3 infinite poles in addition to its finite pole at −3. Note that the system has a negative sign, i.e., L = −Ks(s + 1)²(s − 2)/(s + 3). With positive K the infinite poles are on the asymptote lines 0, ±120 degrees. With negative K the infinite poles are on the asymptote lines ±60, 180 degrees. The break points are the solutions of dK/ds = 0, where K = (s + 3)/[s(s + 1)²(s − 2)]. These points are: −0.3973, −3.8972, −1, 1.2924. The upper parts of the root loci are as given in Fig. 5.74. The first break point is used in the left panel and the others in the right panel.


Figure 5.74 Root locus of Problem 5.30. Left: K > 0; Right: K < 0.
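The four break points of Problem 5.30 can be reproduced by applying the quotient rule to K = (s + 3)/[s(s + 1)²(s − 2)]; a sketch assuming NumPy:

```python
import numpy as np

# Problem 5.30: K = (s+3) / [ s(s+1)^2(s-2) ]; break points solve dK/ds = 0,
# i.e. (quotient rule) A'B - AB' = 0 with A = s+3 and B = s(s+1)^2(s-2)
A = np.array([1.0, 3.0])
B = np.polymul(np.polymul([1.0, 0.0], [1.0, 2.0, 1.0]), [1.0, -2.0])

f = np.polysub(np.polymul(np.polyder(A), B), np.polymul(A, np.polyder(B)))
r = np.roots(f)
bp = np.sort(r.real[np.abs(r.imag) < 1e-8])
print(bp)   # compare with -3.8972, -1, -0.3973, 1.2924 from the text
```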

Remark 5.31: The sequel problems concern controller design in the context of the root locus. As we have mentioned in Section 5.5, there are more methodical ways than those that we have presented in this chapter. Those methods, however, are hardly depictive, and thus we do not engage with them. Moreover, note that our purpose is not to present the ultimate design, as some design specifications beyond the tracking objective are still to be introduced and studied in the next chapters. Thus, for instance, in Problem 5.31 a different and probably better answer may be obtained with other values for a, z, and p. We do not make a comparative study here. We only practice the controller structure (or synthesis), which is instructive and adds to your insight.

Problem 5.31: Let the system be given by P(s) = 1/[(s − 1)(s − 2)]. Propose a ramp-tracking controller for it. The controller should have the term 1/s². We also include two zeros, say at a = −0.5. Then we include the term (s + z)/(s + p) to stabilize the unstable branches of the root locus. The controller is (s + z)(s + 0.5)²/[s²(s + p)] and there should hold −[(p − 1 − 2) − (z + 0.5 + 0.5)]/(4 − 2) < 0, i.e., p − z > 4.


Figure 5.75 Problem 5.31. Left: Part of the root locus (not drawn to scale); Right: Reference tracking.
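The closed-loop pole locations quoted in the discussion of Problem 5.31 (for z = 2, p = 15) can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

# Problem 5.31 loop with z = 2, p = 15:
# L(s) = K (s+2)(s+0.5)^2 / [ s^2 (s+15)(s-1)(s-2) ]
D = np.polymul([1.0, 0, 0],
               np.polymul([1, 15], np.polymul([1, -1], [1, -2])))
N = np.polymul([1.0, 2], [1, 1, 0.25])

results = {}
for K in (150, 280):
    results[K] = np.sort_complex(np.roots(np.polyadd(D, K * N)))
print(results[280])   # compare with -3.33, -3.89 +/- j13.63, -0.434 +/- j0.142
```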

We present the simulation for z = 2, p = 15. The system is stable for K > 71.06. We simulate the system for different values of the gain. The answer with K = 280 looks acceptable and is provided in Fig. 5.75. Discussion: The reader is encouraged to do the simulations for himself/herself. With K ≤ 150 (which is the knee value of the branch, i.e., the smallest damping ratio) the transient performance is poor. With larger values the performance improves, and with K = 280 it is acceptable. With these values of the gain the closed-loop poles are: K = 150: −5.99, −2.601 ± j7.74, −0.39 ± j0.168; and K = 280: −3.33, −3.89 ± j13.63, −0.434 ± j0.142. Also note that with these values of K the rightmost closed-loop poles are almost identical. (The α angles they make with the negative real axis are 23 and 18 degrees, respectively. According to the terminology of Chapter 4 they are well damped.) But because of the contribution of the other poles the outputs are drastically different. This is what we said in Section 5.4: in actual systems the complexity of the problem may be such that it is not possible to decide the value of the gain by looking at the root locus, and simulations have to be performed. Finally, we leave it to the reader to try other syntheses like C(s) = K(s + b₁)²(s + b₂)(s + b₃)/[s²((s + a₁)² + a₂²)]. In Chapter 9 we will talk more about controller design.

Problem 5.32: Let the system be given by P(s) = 1/[(s − 1)(s − 2)]. Propose a parabola-tracking controller for it. The controller should have the term 1/s³. We also include three zeros, say at a = −0.5. Then we include the term (s + z)/(s + p) to stabilize the unstable branches of the root locus. The controller is (s + z)(s + 0.5)³/[s³(s + p)] and there should hold −[(p − 1 − 2) − (z + 0.5 × 3)]/(4 − 2) < 0, i.e., p − z > 4.5. We present the simulation for z = 2, p = 20. The system is stable for K > max{1.12, 96.30}. We simulate the system for different values of the gain.
The answer with K = 700 seems acceptable and is plotted in Fig. 5.76, bottom panel. Discussion: With gain values K ≤ 270 (referring to the highest damping ratio) the output performance is poor; this is not shown in the figure. With K = 700 the output is



Figure 5.76 Top: Part of the root locus (not drawn to scale), Bottom: Tracking performance.

acceptable, and with increasing gain the performance improves further. However, this is at the expense of a larger control input, and we may thus content ourselves with this gain value. Also note that the maximum value of the control signal is about 70 (and not 700). Why? Finally, the closed-loop poles with K = 700 are at: −2.81, −0.36, −0.49 ± j0.22, −6.42 ± j23.29. A discussion like that of Problem 5.31 can be made here as well.

Problem 5.33: Design the simplest controller C(s) so that the system P(s) = 1/(s² − 1) tracks the ramp input with zero steady-state error. With a similar argument as in the above problems the controller is proposed as, e.g., C1(s) = (s + 1)(s + 2)/[s²(s + 12)], C2(s) = (s + 1)²/[s²(s + 10)], C3(s) = (s + 0.1)(s + 2)/[s²(s + 20)], or C4(s) = (s + 2)²/[s²(s + 20)], etc. (Note that the rule-of-thumb condition p ≥ 10z is not satisfied in C1, but in C2 it is.) The worst answer is obtained with C1 and C2. The root locus with C1 is more visible than that with the other controllers; it is given in Fig. 5.77; with the other controllers it


Figure 5.77 Problem 5.33. Top left: Root locus with C1 ; Top right: Output with C4 ; Bottom left: Control signal with C4 ; Bottom right: Control signal with C1 , C2 , and K 5 300.

has the same shape. For gain values above a certain value the system is stable. The appropriate values of the controller parameters depend on the design specifications and need to be determined by simulation. The best output shape is obtained with C3, which is almost indistinguishable from the input. However, in order to be visible, the results are offered for C4. The output and the control signal are given for two values of the gain: K = 150 and K = 300. As observed, with these values the outputs are not much different, but the control signals differ considerably, especially in magnitude. A compromise should hence be made. Finally, for the purpose of illustrating the poor performance with controllers C1 and C2, the control signals are also provided in the same figure. Note that oscillations in the control signal result in oscillations in the plant output, although these are small since they are filtered out by the plant integrators. Discussion: We wrap up the solution by adding that what we have mentioned in the discussion part of the previous problems is true for this system as well. Moreover, the interested reader is encouraged to simulate all the cases for himself/herself and observe the performance of different syntheses and different


designs. Finally, we add the caveat that what we said about oscillations in the control signal and poor performance should not be understood as the latter being a direct result of the former. Indeed, as you will see in Remark 9.22 of Chapter 9, sometimes oscillations in the control signal are needed to improve the performance of the system.

Problem 5.34: It must have become clear from the lessons in the text that stabilization and tracking problems in the context of the root locus call for some ingenuity when NMP zeros are involved. This is further demonstrated in this and the rest of the problems. Propose the simplest stabilizing controller for the plant P(s) = (s − 1)/(s² + 1). The simplest controller is the proportional controller with 0 < K < 1. The root locus is offered in Fig. 5.78.

Figure 5.78 Problem 5.34.
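The range 0 < K < 1 follows from the characteristic polynomial s² + Ks + (1 − K); a quick check assuming NumPy:

```python
import numpy as np

# Problem 5.34: C = K with P(s) = (s-1)/(s^2+1) gives s^2 + K s + (1 - K) = 0
def stable(K):
    return all(p.real < 0 for p in np.roots([1, K, 1 - K]))

print([stable(K) for K in (0.1, 0.5, 0.9)])    # inside 0 < K < 1: all True
print([stable(K) for K in (-0.1, 1.1, 2.0)])   # outside: all False
```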

Problem 5.35: Propose the simplest step-tracking controller for the plant of Problem 5.34. The simplest controller has the structure C(s) = K(s + a)((s + b)² + c²)/[s(s + d)²], for instance C(s) = −(35/3)(s + 0.2)((s + 1)² + 1)/[s(s + 20)²]. The root locus is offered in Fig. 5.79. Note that the step response is necessarily poor. However, by modifying the controller the performance can be improved to some extent. The student is encouraged to try, e.g., the controller which has a triple zero at s = −0.2 and the same values for the other parameters.



Figure 5.79 Problem 5.35.

Problem 5.36: Propose the simplest stabilizing controller for the plant P(s) = (s − 1)(s − 2)/(s² + 1). The simplest controller is the proportional controller with −0.5 < K < 0. The root locus for negative gain is given in Fig. 5.80.

Figure 5.80 Root locus of Problem 5.36 with negative gain.
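The range −0.5 < K < 0 can be confirmed from the characteristic polynomial (1 + K)s² − 3Ks + (1 + 2K); a sketch assuming NumPy:

```python
import numpy as np

# Problem 5.36: C = K with P(s) = (s-1)(s-2)/(s^2+1) gives
# (1+K)s^2 - 3K s + (1+2K) = 0
def stable(K):
    return all(p.real < 0 for p in np.roots([1 + K, -3*K, 1 + 2*K]))

print([stable(K) for K in (-0.4, -0.2, -0.01)])  # inside -0.5 < K < 0: all True
print([stable(K) for K in (0.1, 0.5, -0.6)])     # outside: all False
```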

Problem 5.37: Propose the simplest stabilizing and tracking controllers for the plant P(s) = (s² + 1)/[(s − 1)(s − 2)]. The simplest stabilizing controller is K < −2 and the simplest tracking controller is K/s with K > 6. The root loci are given in Fig. 5.81, left and right panels, respectively.

Problem 5.38: Propose the simplest stabilizing controller for the plant P(s) = (s² − 1)/(s² − 4). The answer is a controller like C1(s) = K(s + 3)/[(s + 1.2)(s + 4)] or C2(s) = K(s + 3)/[(s + 1.2)(s + 8)], which for negative gain result in the root loci of the left and right panels of Fig. 5.82, respectively. (The interested reader is encouraged to use MATLAB® and find the range of K for stability.)


Figure 5.81 Root locus of Problem 5.37. Left: Stabilization; Right: Step tracking.
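Both claimed ranges of Problem 5.37 (K < −2 for C = K, and K > 6 for C = K/s) can be spot-checked from the corresponding characteristic polynomials; a sketch assuming NumPy:

```python
import numpy as np

# Problem 5.37: P(s) = (s^2+1) / [(s-1)(s-2)]
def stable_prop(K):   # C = K: (1+K)s^2 - 3s + (2+K) = 0
    return all(p.real < 0 for p in np.roots([1 + K, -3, 2 + K]))

def stable_int(K):    # C = K/s: s^3 + (K-3)s^2 + 2s + K = 0
    return all(p.real < 0 for p in np.roots([1, K - 3, 2, K]))

print([stable_prop(K) for K in (-2.5, -10, -1.5, 1)])  # True, True, False, False
print([stable_int(K) for K in (7, 20, 5, 2)])          # True, True, False, False
```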


Figure 5.82 Root locus of Problem 5.38. Left with C1 ; Right with C2 .

Problem 5.39: Propose a step-tracking controller for the plant P(s) = s(1 − s)/[(s + 1)(s + 2)]. The answer seems to be the controller C(s) = K(s − z)/s² with z > 0 and K < 0. The root locus and the step response for z = 2 and K = −0.14 are given in Fig. 5.83. However, we recall from Chapter 3 that the system is internally unstable because of a CRHP pole-zero cancellation. Thus the problem actually does not admit a solution, and that is the reason behind the word "seems" in the first sentence.

Figure 5.83 Problem 5.39. Left: Root locus; Right: Step response.


Figure 5.84 Problem 5.40. Left: Root locus; Right: Step response.

Problem 5.40: Design the simplest controller C(s) so that the system P(s) = ((s − 0.1)² + 1)/((s − 0.2)² + 4) tracks the step input with zero steady-state error. The answer is simply an integrator, by which the system will have the root locus of Fig. 5.84 and is stable for a bounded range of K, K ∈ (0.538, 15.001). However, if we simulate the system we observe that the output is oscillatory. For instance, with K = 3 the output is given in the right panel of the same figure. By increasing the gain value the oscillations increase. In Chapter 10 we shall supply the theoretical analysis and the reason for the poor transient response of the system.

Problem 5.41: Design the simplest ramp-tracking controller C(s) for the system P(s) = (s − 1)/s. The answer is a controller like C(s) = K(s + 0.1)²/[s(s + 1)] with K < 0, or a similar controller. The student is encouraged to draw the root locus and find the range of K. A higher-order controller is C(s) = K(s + 0.1)²/[s(s + 1)²] with K < 0. To have a better performance the controller parameters should be chosen carefully. See also Chapter 9 and the accompanying CD on the website of the book.

Problem 5.42: Design the simplest parabola-tracking controller C(s) for the system P(s) = (s − 1)(s − 2)/[s(s + 1)]. The answer is a controller like C(s) = K(s + 0.1)³/[s²(s + 1)] with K > 0, or a similar controller. We urge the student to analyze the problem and find out the reason, draw the root locus, and find the range of K. The simple pole at s = −1 can be multiple, of higher orders. The performance depends on the values of the controller parameters. See also Chapter 9 as well as the accompanying CD on the website of the book.
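The bounded stability range quoted in Problem 5.40 can be spot-checked from the characteristic polynomial of the integrator loop; a sketch assuming NumPy:

```python
import numpy as np

# Problem 5.40: C(s) = K/s with P(s) = ((s-0.1)^2+1)/((s-0.2)^2+4) gives
# s^3 + (K - 0.4)s^2 + (4.04 - 0.2K)s + 1.01K = 0
def stable(K):
    return all(p.real < 0 for p in np.roots([1, K - 0.4, 4.04 - 0.2*K, 1.01*K]))

print([stable(K) for K in (1, 3, 14)])      # inside (0.538, 15.001): all True
print([stable(K) for K in (0.3, 16, -1)])   # outside the bounded range: all False
```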


5.9 Exercises

Over 250 exercises are presented below in the form of collective problems. Although they have some similarities, they differ in details and nature. It is advisable to try as many of them as your time allows. When you are required to draw the root locus, it is instructive to try negative values of the gain as well.

Exercise 5.1: How can we find the position, velocity, or acceleration constants, and more generally the steady-state error for different inputs, if we are given a root locus? Discuss.

Exercise 5.2: This exercise has some parts. (1) How many branches and asymptotes does a root locus have? (2) Where are the closed-loop poles located as the gain varies from zero to infinity? Discuss. (3) How do the incoming and outgoing branches at a break point pair with each other?

Exercise 5.3: How do the poles/zeros of a system compare with its break-away/break-in points?

Exercise 5.4: How does the gain change at the intersection points of root locus branches? Explain.

Exercise 5.5: In the manner of the strategy given in the text, derive a second-order formula for the abscissa. Is it more accurate than the first-order one? Discuss.

Exercise 5.6: What happens if the gain/parameter is selected such that the closed-loop poles coincide, assuming this is possible?

Exercise 5.7: Given a system, how can the pole-zero locations change so that the centroid remains unchanged? Discuss the various possibilities; some are very useful and easy to master.

Exercise 5.8: Is it possible that a branch of the root locus crosses itself (by making a loop, as in the symbol α)?

Exercise 5.9: Is it possible that some root locus branches slide over (not intersect) each other in the same or opposite directions?

Exercise 5.10: Given a mathematical model (i.e., not necessarily a causal system) and its root locus, what is the effect of the addition of one or more poles on the root locus? How about zeros? How about the combination of some poles and zeros? Discuss.

Exercise 5.11: Where do the formulae for the angles of departure and arrival come from?

Exercise 5.12: What is the relation between the jω-axis crossings and Routh's array? What is the reason behind it?

Exercise 5.13: Consider a system with some jω-axis crossings. Do smaller and larger crossing frequencies relate to smaller and larger values of the gain, respectively?

Exercise 5.14: By direct analysis, like that given for Examples 5.1–5.4, draw the root locus of the following systems:
1. S1: 1 + K(s + 1)/(s² + s + 1) = 0
2. S2: 1 + K(s² − 2s + 2)/[(s − 1)(s − 2)] = 0,

3. S3: 1 + K(s − 1)(s − 2)/[(s + 2)(s + 3)] = 0.

Exercise 5.15: If a branch of the root locus connects a zero and a pole, either no or an even number of break points exist between them, for instance:

1. L(s) = (s + 5)(s² + 2s + 2)(s² + 6s + 10)/[s(s² + 4s + 5)(s² + 8s + 17)]
2. L(s) = Ks/[(s − 20)(s + 10)(s² − 2s + 26)]
3. L(s) = K(s + 1.5)(s + 4.5)/[s(s + 1)(s + 4.2)]
4. L(s) = K(s + 8)/[s(s + 3)(s + 5)(s + 7)(s + 15)]


5. L(s) = K(s + 2)(s + 60)/[(s − 50)(s + 4)(s + 9)]
6. L(s) = (s² + 1)²/[s(s² + 4)²].

Exercise 5.16: The break points may be complex. Some examples are:
1. L(s) = (s + 1)²/[((s + 1)² + 100)(s + 11)²]
2. L(s) = K(s² + 5)²/[(s² + 1)²(s² + 9)²]
3. L(s) = K(s² + 2s + 7)(s² − 2s + 7)/[(s² + 1)²(s² + 9)²]
4. L(s) = K/[s(s + 4)(s² + 4s + 20)]
5. L(s) = (s² + s + 2)(s² + 2s + 5)/(s² + 2s + 10)²
6. L(s) = s(s² + 100)/[(s² + 25)(s² + 400)]
7. L(s) = (s + 1)²(s² + 1)²/(s − 1)⁶.

Exercise 5.17: At the point s* on the root locus of a system the following relations hold. Draw the root locus in the vicinity of this point with straight (un-curved) lines for negative values of K. Specify the direction of movement of the poles as K varies from 0 to minus infinity. Then repeat the problem for positive values of K.
1. (dⁱ/dsⁱ)K|s=s* = 0, i = 1, 2, 3; (d⁴/ds⁴)K|s=s* ≠ 0
2. (dⁱ/dsⁱ)K|s=s* = 0, i = 1, …, 4; (d⁵/ds⁵)K|s=s* ≠ 0
3. (dⁱ/dsⁱ)K|s=s* = 0, i = 1, …, 5; (d⁶/ds⁶)K|s=s* ≠ 0.

Exercise 5.18: This exercise has several parts. (1) Can Rule 8 be satisfied for a complex point? (2) Reconsider Example 5.14. What is the root locus of its complex version, i.e., when the same pole pattern exists above and under the real axis as a conjugate set? (3) What is the root locus of the system L(s) = 1/[((s + 1)² + 1)((s + 3)² + 1)]? (4) What is the root locus of the system L(s) = ((s + 2)² + 1)/[((s + 1)² + 1)((s + 3)² + 1)]?

Exercise 5.19: Intersection points of orders three and four can be found in the following systems. For item (4), find a condition such that this occurs.
1. L(s) = (s + 2)/[s(s + 10)(s + 1)²]
2. L(s) = K(s² + 2.5s + 1)/(s³ − 1.3s² − 3.25s − 1.3)
3. L(s) = K[s(s² + 2s + 2) + 0.5]/[s²(s² + 2s + 2)]
4. L(s) = K(s + b)/[(s + a)(s² + 2s + 2)].

Exercise 5.20: Draw the root locus for the following systems and find the values of K for stability.
1. L(s) = K/[(s − α)(s² + 4s + 7)], α = 0.5, 1, 2
2. L(s) = K(s + 0.5)/[s(s − 1)(s² + 4s + α)], α = 8, 10, 12, 14, 16

Exercise 5.21: The stability of a system, or its root locus, may be discontinuous in the parameter. Some examples are as follows.
1. L(s) = K(s² + 2s + 2)/[s(s + 0.2)²]
2. L(s) = K(s² + 2s + 2)/[s((s + 0.2)² + 0.2²)]
3. L(s) = K(s² + 2s + 2)/[(s + 0.1)((s + 0.2)² + 0.2²)]
4. L(s) = K(s + 10)²/[s(s + 1)²]
5. L(s) = K(s + 10)(s + 5)(s + 0.05)(s + 0.02)/[(s + 100)(s + 1)(s + 0.2)(s + 0.01)(s + 0.001)(s − 0.0005)]

6. L(s) = K(s + 1000)(s + 500)(s + 10)(s + 5)(s + 0.05)(s + 0.02)/[(s + 10,000)²(s + 100)(s + 50)(s + 1)(s + 0.2)(s + 0.01)(s + 0.001)(s − 0.0005)]
7. L(s) = K(s² + 2s + 4)/[s(s + 5)²(s² + 1.5s + 1)]

8. L(s) = K(s + 0.1)(s + 0.2)/[(s + 0.001)(s + 0.01)²(s + 1)]
9. L(s) = K(s + 100)/[(s ± 0.01)(s + 0.1)(s + 0.5)(s + 1000)²]
10. L(s) = K(s + 50)/[(s ± 1)(s + 2)(s + 5)(s + 200)(s + 500)²]
11. L(s) = K(s⁴ + 12s³ + 54s² + 22s + 16)/(s⁵ + 11s⁴ + 46s³ + 60s² + 47s + 25)
12. L(s) = K((s − 0.1)² + 1)((s − 0.1)² + 9)/[((s − 0.2)² + 4)((s − 0.2)² + 25)(s + 1)]
13. L(s) = K((s − 0.1)² + 1)((s − 0.1)² + 9)/[((s − 0.2)² + 4)((s − 0.2)² + 25)(s − 1)]
14. L(s) = K((s + 0.1)² + 1)((s + 0.1)² + 9)(s − 0.1)/[((s + 0.1)² + 4)((s + 0.1)² + 25)(s + 1)].

Exercise 5.22: In the context of the root locus, the range of stability of a system is discontinuous in the uncertain parameter, given by K ∈ (a, b) ∪ (c, d) ∪ (e, ∞), where a < b < c < d < e. Draw a possible root locus for this system.

Exercise 5.23: Propose stabilizing and/or tracking controllers (if possible) for the following systems. Item 16 is 'very' difficult in its general formulation, but try typical systems like P(s) = (s − 1)³/s³.
1. P(s) = 1/(s − 1)³
2. P(s) = s/(s − 1)³
3. P(s) = (s + 1)/[(s − 1)(s − 3)(s − 5)]
4. P(s) = 1/[(s² + 1)(s − 1)]
5. P(s) = −1/[((s − 1)² + 1)(s − 2)]
6. P(s) = 1/[((s − 2)² + 1)(s − 1)]
7. P(s) = 1/[(s² + 1)(s² − 1)]
8. P(s) = −1/[s²(s − 1)]
9. P(s) = 1/[s³(s − 1)]
10. P(s) = (s² − 4)/(s² − 1)
11. P(s) = (s² + 4)/(s² + 1)
12. P(s) = 1/[((s − 1)² + 1)(s − 2)(s − 3)]
13. P(s) = −1/[(s − 1)(s − 2)((s − 3)² + 1)]
14. P(s) = (s + 1)/[(s − 1)(s − 3)((s − 2)² + 1)]
15. P(s) = 1/[(s − 1)((s − 2)² + 1)(s − 3)(s − 4)]
16. P(s) = (s − b₁)⋯(s − bₙ)/[(s − a₁)⋯(s − aₙ)], bₙ ≥ ⋯ ≥ b₁ ≥ 0, aₙ ≥ ⋯ ≥ a₁ ≥ 0.

Exercise 5.24: Propose stabilizing and/or tracking controllers (if possible) for these systems:
1. P(s) = 1/(s + 1)³
2. P(s) = s/[(s + 1)(s + 2)]
3. P(s) = (−s + 1)/(s² + 2s + 2)
4. P(s) = (s − 1)/[s(s + 1)]
5. P(s) = (s − 1)/[s(s + 1)²]
6. P(s) = (s − 1)/(s² + 1)
7. P(s) = (s² − 4)/(s² − 1)
8. P(s) = (−s + 1)(s + 1)/[s(s + 2)]
9. P(s) = (s − 1)(s − 2)/[s(s + 2)]
10. P(s) = (−s + 1)(s − 2)/[s(s + 2)²]


11. P(s) = (s − 2)(s − 3)/[s(s − 1)]
12. P(s) = 1/[s²(s² + 1)]
13. P(s) = (s² + 4)/[s²(s² + 1)]
14. P(s) = 1/[s((s + 1)² + 1)(s + 1)]
15. P(s) = 1/[s²((s + 1)² + 1)(s + 1)]
16. P(s) = 1/[(s + 1)(s + 2)(s + 3)(s + 4)]
17. P(s) = 1/[s²(s + 1)(s + 2)(s + 3)]
18. P(s) = 1/[s³(s + 1)(s + 2)(s + 3)]
19. P(s) = [(s − 0.2)² + 4]/[(s − 0.1)² + 1]
20. P(s) = (s² + 1)/(s² + 9)²
21. P(s) = (s² + 9)/(s² + 1)²
22. P(s) = (s² + 1)/(s² + 9)²
23. P(s) = (s² + 9)/(s² + 1)².

Exercise 5.25: Propose stabilizing and/or tracking controllers for the system P(s) = P₁(s)(s − 1)/(s − 2), in which P₁ is either:
1. P₁(s) = 1/(s + 1)
2. P₁(s) = (s + 1)/(s + 2)
3. P₁(s) = (s + 1)/(s + 2)²
4. P₁(s) = 1/[(s + 1)(s + 2)]

Exercise 5.26: Repeat Exercise 5.25 for P(s) = P₁(s)(s − 2)/(s − 1).

Exercise 5.27: Using MATLAB®, verify that the controller C(s) = K((s + 1)² + 1)((s + 3)² + 4)((s + 1)² + 25)/[s(s + 50)⁵] is a step-tracking controller for the system

PðsÞ 5 1=½ðs 2 1Þðs 2 2Þðs 2 3Þðs 2 4Þðs 2 5Þ. Find the range of K. (Note that the controller has been chosen carefully. If you try other poles and zeros for the controller, like multiple real zeros in the vicinity of the given zeros, the controller does not stabilize the system since the appropriate condition on the gain values at the j-axis crossings is not satisfied. Verify it!) Exercise 5.28: With the help of MATLABs verify that the controller CðsÞ 5

Kðs10:25Þ2 ððs12Þ2 1 36Þððs14Þ2 11Þ2 s2 ððs170Þ2 12500Þ3

is a ramp-tracking controller for the system

PðsÞ 5 1=½ðs 2 1Þðs 2 2Þðs 2 3Þðs 2 4Þðs 2 5Þðs 2 6Þ. Find the range of K. (Note that the explanation at the end of Exercise 5.27 is valid here as well.) Exercise 5.29: Draw the root locus for the given systems: 1. LðsÞ 5

1. L(s) = K(s+2)^2(s+3)^2/((s+1)^2+1)^2
2. L(s) = K(s+2)^3/((s+1)^2+1)^2
3. L(s) = K(s+2)^4/((s+1)^2+1)^2

Exercise 5.30: Sketch the root locus of the sequel systems and note how the break-away and break-in points vary as the pole-zero locations vary:
1. L(s) = K/[(s^2+1)(s^2-1)]
2. L(s) = K/[(s^2+1)(s^2-4)]
3. L(s) = K/[(s^2+4)(s^2-1)]
Exercise 5.31: Without doing much computation draw the root locus of the following systems.
1. L(s) = K(s+1)^2/s^3
2. L(s) = K(s+1)^2/s^4


Introduction to Linear Control Systems

3. L(s) = K(s+1)^3/s^3
4. L(s) = K(s+1)^3/s^4
5. L(s) = Ks^3/(s+1)^4
6. L(s) = Ks^4/(s+1)^4
7. L(s) = K(s^2-1)/s^3
8. L(s) = K(s^2-1)/s^4
9. L(s) = K(1-s^2)/s^3
10. L(s) = K(1-s^2)/s^4
11. L(s) = K(1-s^2)/s^5
12. L(s) = K(s+1)(s+2)^2/s^4
13. L(s) = K(s+1)(s+2)^3/s^4
14. L(s) = K(s+2)(s+1)^2/s^4
15. L(s) = K(s+1)^2(s+2)(s+3)/s^4
16. L(s) = K(s+1)(s+2)^2(s+3)/s^4
17. L(s) = K(s^2+4)(s^2+α)/[(s+4)(s+5)^2(s+6)^2], α = 4, 9
18. L(s) = K/[(s+2)(s+9)((s+3)^2+12)((s+8)^2+27)]
19. L(s) = K(s+2)^2(s+3)/[s(s+1)(s+5)(s+10)]
20. L(s) = K(s+2)(s+3)^2/[s(s+1)(s+5)(s+10)]
21. L(s) = K(s+2)^2(s+3)^2/[s(s+1)(s+5)(s+10)]
22. L(s) = (s^2+1)^2/[s^2(s^2+4)^2]
23. L(s) = K(s+2)(s+3)/[s(s+1)(s+4)(s+5)]
24. L(s) = Ks(s+1)(s+4)(s+5)/[(s+2)(s+3)((s+2.5)^2+α)], 0 ≤ α ≤ 1
25. L(s) = K(s+1.5)(s+4.5)/[s(s+1)(s+α)], α = 4, 4.1, 4.2, 4.3
26. L(s) = K(s-3)(s+5)/[(s-1)(s+2)]
27. L_α(s) = Ks^(8-α)/[(s-1)(s+1)(s^2+1)(s^2+2s+2)(s^2-2s+2)], α = 0, 1, ..., 8
28. L_α(s) = Ks^(8-α)/[(s^2+1)(s^2+4)(s^2+9)(s^2+16)], α = 0, 1, ..., 8
29. L_α(s) = K(s+2)^(8-α)/[(s^2+16)^2(s^2+8s+32)^2], α = 0, 1, ..., 8.

Exercise 5.32: Draw the root locus of the following systems. Note their similarities and differences, in that a swap of their poles and zeros makes the system stable or unstable.
1. L(s) = s(s^2+1)/(s^2+4)
2. L(s) = (s^2+4)/[s(s^2+1)]
3. L(s) = s(s^2+25)/[(s^2+4)(s^2+100)]
4. L(s) = s(s^2+4)/[(s^2+25)(s^2+100)]


5. L(s) = s(s^2+100)/[(s^2+4)(s^2+25)]
6. L(s) = (s-0.1)((s-0.1)^2+25)/[((s-0.1)^2+4)((s-0.1)^2+100)]

Exercise 5.33: With the help of MATLAB compare the root loci of the system

L(s) = K(s+7)((s+5)^2+α)/[s(s+3)((s+2)^2+1)]

for different values of 1 ≤ α ≤ 100. (Note that for some values of α the stability range is discontinuous in K.)
Exercise 5.34: Repeat Exercise 5.33 for L(s) = K(s+7)((s+10)^2+100)/[s(s+3)(s+25)((s+5)^2+α)] and different values of 1 ≤ α ≤ 100.
Exercise 5.35: Consider the plant P(s) = K/[s(s-a)] in the feedforward path with the controller C(s) = (s+1)/(s+2) in the feedback loop. (1) With positive values of the gain K, for what values of a is it possible to make this closed-loop system stable? (2) How about negative values?
Exercise 5.36: Consider the system P(s) = K(s+5)/[s(s^2+4s+8)] in a negative unity feedback structure. Draw the root locus of the system. Can the gain K be chosen such that the right-most poles have 0.6 ≤ ζ ≤ 0.7? Determine the largest and smallest damping ratios of the system and the corresponding gain values.
Exercise 5.37: Repeat Exercise 5.36 for P(s) = K(s+2)/[(s^2+4s+6)(s^2+6s+10)] and 0.7 ≤ ζ ≤ 0.8.
Exercise 5.38: Draw the root locus of the system L(s) = K(s+2)/[s(s^2+1)] for positive gains. Find the value of gain such that the complex poles have ζ = 0.707. Will these poles dominate the response or not?
Exercise 5.39: Consider the plant P(s) = 1/[s(s-a)], a > 0, in the forward path with the controller C(s) in the feedback path. In the context of root locus analyze the stabilization of the system. Repeat the problem for the plants P(s) = (s-1)/[s(s-2)] and P(s) = (s-2)/[s(s-1)] as well.
Exercise 5.40: For a closed-loop system represented by Y(s)/U(s) = K(s+z_1)⋯(s+z_m)/[(s+p_1)⋯(s+p_n)] show that Σ_{i=1}^{m} (z_i)^k = Σ_{j=1}^{n} (p_j)^k for k = 1, ..., N-1, where N = n-m > 1, i.e., systems with relative degree 2 or higher. The result first appeared in (Rao, 1976).
Exercise 5.41: Consider the system L(s) = K(s+z_1)⋯(s+z_m)/[(s+p_1)⋯(s+p_n)]. Show that the break-away and break-in points, denoted by s*, can alternatively be obtained from Σ_{i=1}^{m} 1/(s*+z_i) = Σ_{i=1}^{n} 1/(s*+p_i). Investigate whether for real break points the condition reduces to Σ 1/(s*+|z_i|) = Σ 1/(s*+|p_i|). Verify the rule for a simple example.
Exercise 5.42: Show that a horizontal shift in all open-loop zeros and poles of a system translates to the same shift of the root locus. Show that this is not true of vertical shifts or of horizontal shifts of different values.
Exercise 5.43: How should the pole-zero pattern be enlarged (shrunk) so that the root locus is also enlarged (shrunk) uniformly, if possible?
Exercise 5.44: Consider Example 5.7, middle panel. Can it happen for complex poles and zeros? For instance, can there identically and symmetrically exist two of such root loci about the real axis? Answer this question also for the other panels of the same figure.
Exercise 5.45: Obtain the asymptotes of the root locus through the application of Rouché's theorem.
Exercise 5.46: In a unity feedback structure, the system P(s) = (s+α)/(s-1)^2 is controlled by the controller C(s) = K(s+2)(s+3)/[s(s+12)]. Draw the root contour for positive values of α, K.
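Loci and contours like these can be spot-checked numerically: the root-locus points at a given gain are simply the roots of the closed-loop characteristic polynomial den(s) + K·num(s). The sketch below, in Python with numpy rather than the MATLAB used in the book, applies this to the loop of Exercise 5.46, L(s) = K(s+2)(s+3)(s+α)/[s(s+12)(s-1)^2]; the choice α = 1 and the sweep gains are illustrative values only, not from the book.

```python
import numpy as np

def closed_loop_poles(num, den, K):
    """Roots of den(s) + K*num(s) = 0, i.e. the root-locus points at gain K."""
    # Pad the shorter coefficient vector so the two polynomials can be added.
    n = max(len(num), len(den))
    p = np.zeros(n)
    p[n - len(den):] += np.asarray(den, dtype=float)
    p[n - len(num):] += K * np.asarray(num, dtype=float)
    return np.roots(p)

# Exercise 5.46 with the illustrative choice alpha = 1:
# L(s) = K (s+2)(s+3)(s+alpha) / [ s (s+12) (s-1)^2 ]
alpha = 1.0
num = np.polymul(np.polymul([1, 2], [1, 3]), [1, alpha])
den = np.polymul(np.polymul([1, 0], [1, 12]), np.polymul([1, -1], [1, -1]))

for K in (1.0, 50.0, 500.0):
    poles = closed_loop_poles(num, den, K)
    stable = all(p.real < 0 for p in poles)
    print(f"K = {K:6.1f}  stable: {stable}  poles: {np.round(poles, 3)}")
```

Sweeping K (and, for a root contour, α) over a fine grid and scatter-plotting the resulting roots reproduces the locus that the exercises ask to be sketched by hand.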


Exercise 5.47: Consider a closed-loop system with the plant P(s) = 1/[s(s+2)] in the feedforward path and the controller C(s) = K(s+α)/(s+10) in the feedback loop. Draw the root contour of the system. For what values of K and α is it possible to stabilize the system?
Exercise 5.48: Repeat Exercise 5.47 for P(s) = 1/[(s-1)(s+2)] and C(s) = K(s+α)/s.
Exercise 5.49: Draw the root contour of the following systems:
1. L(s) = K(1-αs)/[s^2(βs+1)]
2. L(s) = K(1-αs)/[s(βs^2+1)]
3. L(s) = K(1-αs)/[s(βs^2-1)]
4. L(s) = K(1-αs)/[s^3(βs+1)]
5. L(s) = K_1(s+1)/[s(s+2)(s-K_2)]
6. L(s) = K(1-s^2)/(α-s)
7. L(s) = K_1(s+1)/[s(s+2)(s+K_2)]
8. L(s) = K_1/[s(s+1)(s-K_2)]
9. L(s) = K_1/[s(s+1)(s+K_2)]
10. L(s) = K_1(1-K_2 s^2)/[(s+1)^2(s+2)(s+3)]
11. L(s) = K_1(1+K_2 s^2)/[(s+1)^2(s+2)(s+3)]
12. L(s) = K_1(1-K_2 s)/[s^2(s+1)]
13. L(s) = K_1(1+K_2 s)/[s^2(s+1)]
14. L(s) = K/[s(-s+T)(s+2)]
15. L(s) = K/[s^2(-s+T)(s+1)]
Exercise 5.50: Draw the root locus of the following systems:
1. L(s) = Ks(s+1)
2. L(s) = Ks(s+1)(s+2)
3. L(s) = Ks(s+1)(s^2+4)
4. L(s) = K(s^2+1)/s
5. L(s) = K(s^2+1)/(1-s)
6. L(s) = Ks(s^2+1)/(s+1)
7. L(s) = Ks(s^2+1)/(1-s)
8. L(s) = K(s^2+1)(s-1)/(s+1)
9. L(s) = K(s^2+1)(s^2-1)/s
10. L(s) = K(s^2+1)(1-s^2)/s
11. L(s) = K(s+1)(4-s^2)/[(s-1)(s+3)]
Exercise 5.51: For what values of α, β is it possible to make the sequel systems stable with positive values of K? How about negative values of K?
1. L(s) = K(s+α)(s+1)/[s(s+β)]
2. L(s) = K(s+α)(s+1)/[s(s+1)(s+β)]


3. L(s) = K(s+α)(1-s)/[s(s+1)(s+β)]
4. L(s) = K(s+2)(s+4)/[s(s+1)(s+3)(s+α)]
5. L(s) = K(s+2)(s+α)/[s^2(s-1)(s+β)]
6. L(s) = K(1-s)(αs+2)/[s^3(s-β)]

Exercise 5.52: Can the panels of Fig. 5.85 represent the root locus of some systems? If the answer is positive, by construction and/or inspection find some corresponding systems. See also Exercises 10.7 and 10.8.

Figure 5.85 Exercise 5.52. Three suggested root loci.
Exercise 5.53: In the top row of Fig. 5.86 we show the variation of the value of K versus the location of the given closed-loop poles. The figures are constructed by simple analysis. (1) Find the turning points. Discuss. (2) Complete the figure for the cases shown in the bottom row. (3) Do part (1) for part (2).

Figure 5.86 Exercise 5.53. Diagram of the value of K versus closed-loop pole location.
Exercise 5.54: Many actual systems have delay. Show that the root locus of the system P(s) = Ke^(-sT)/[s(s+p)] is as given in Fig. 5.87, where only the upper part of the locus is shown. In this figure ω₁T = tan⁻¹(p/ω₁).


Figure 5.87 Root locus of Exercise 5.54.
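The crossing frequency ω₁ in the figure satisfies the transcendental equation ω₁T = tan⁻¹(p/ω₁), which has no closed form but is monotone in ω₁ and therefore easy to solve by bisection. A minimal Python sketch; the values p = 1, T = 0.5 are illustrative only:

```python
import math

def crossing_frequency(p, T, lo=1e-9, hi=None, tol=1e-12):
    """Solve w*T = atan(p/w) for w > 0 by bisection.

    f(w) = w*T - atan(p/w) is strictly increasing, negative as w -> 0+
    and positive for large w, so a unique positive root exists.
    """
    if hi is None:
        hi = (math.pi / 2) / T + p   # f(hi) > 0 since atan(.) < pi/2
    f = lambda w: w * T - math.atan2(p, w)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

w1 = crossing_frequency(p=1.0, T=0.5)
print(w1, w1 * 0.5, math.atan(1.0 / w1))  # the last two numbers agree
```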



Exercise 5.55: The root locus of some systems has some fine features, like the outward curves shown after/before 'some' break point with an angle of 90 degrees; see Fig. 5.88. Although it may not make any difference in the whole analysis, it is interesting to find a precise explanation for it, beyond the magnitude/angle stipulations. The least question is probably whether it can happen at double (repeated) poles or zeros.

Figure 5.88 Outward curves after and before the break point in some systems.
Exercise 5.56: In various examples and problems we saw how, by intuition and innovation, we may think of an internally stabilizing controller for a given plant. Now we show pictorially that for stabilization—no matter whether strong or not—it may be necessary that unstable pole-zero cancellation takes place. In particular, for strong stabilization we may need an unstable controller-zero/plant-pole cancellation. See Fig. 5.89.
Upper part: Consider a plant with a sufficiently compact (i.e., close to each other) and complicated pattern of its NMP poles and zeros. To exacerbate the situation, let it be far in the ORHP, e.g., panels A-C given in the upper part of Fig. 5.89. Only the pertinent part, and not the rest of the dynamics, is shown. How can we 'internally' stabilize them? Common sense tells us that systems A-C are in increasing order of difficulty. Let us discuss panel C. We have to connect the unstable zero to the OLHP by a branch. We distinguish two scenarios. (1) If we wish to use a stable controller, the solution is that the controller is of the form C(s) = (s - z_1)C_1(s) where s = z_1 is near the NMP zero of the plant, and perhaps coincides with it, as shown in panel D1. (Note that the possibility of having another zero pair (s - z_2)(s - z̄_2) is in the same spirit; we have to connect the NMP zero of the plant to the OLHP.) The desired root locus, if existent, will look like the one provided in panel D1. (2) If we allow the controller to be unstable, then some poles may be stabilized as shown in panel D2. But the stabilization of the zero is as before and is not repeated here. (In either of the above scenarios the rest of the dynamics is not shown.) Additionally, in both panels D1 and D2 an appropriate condition on the gain values at the j-axis crossings must hold. We do not approve of making conjectures, but do state that the existence of such a root locus is highly improbable! We do not know of even a single example of such systems. Finally, note that with the assumption that the rest of the dynamics of the system in panel C is such that it satisfies the PIP condition of Youla et al., and thus is strongly stabilizable, panel D1 shows that the PIP Theorem is not really useful except for cases that are strongly stabilizable. System C does not seem to be so.
Middle part: In panels E and G we have some plants whose pole-zero patterns are as shown and whose exact locations are such that the plant is internally stabilizable. It is easy to construct such systems which are tracking systems as well. Panels F and H are the solutions of E and G, respectively. (Note that higher types are obvious, and only the pertinent part is shown.) We emphasize that the root locus branches which are shown by straight lines may have some curvature (indeed they do) and that other stable root loci for them are also possible; see, e.g., Problem C.7 of Appendix C. All in all, the Example shows that not only the pattern but also the exact locations of poles and zeros do


Figure 5.89 Exercise 5.56: Upper part: Some truly difficult plants to internally stabilize, Middle part: Existing internally stabilizable plants, Lower part: Other internally stabilizable plants.


affect the possibility—or at least our ability—of internal stabilization. See the .m file of this example in the accompanying CD on the webpage of the book.
Lower part: Here we also have some plants which are stabilizable with the given root loci. Examples exist. Find some!
Exercise 5.57: Reconsider Examples 5.22-5.28, in particular 5.22 and 5.23. The controller gain is considered positive. Is that necessary or not? That is, can the design objectives be satisfied also with negative values of the gain?

References
The general list of references as well as the sequel specialized references:
Barmish, R.B., Tempo, R., 1990. The robust root locus. Automatica. 26 (2), 283-292.
Byrnes, C.I., 1981. On root loci in several variables: continuity in the high gain limit. Syst. Control Lett. 1 (1), 69-73.
Byrnes, C.I., Gilliam, D.S., He, J., 1994. Root locus and boundary feedback design for a class of distributed parameter systems. SIAM J. Control Optim. 32 (5), 1364-1427.
Chen, H.-B., Chow, J.H., Kale, M.A., Minto, D.K., 1995. Simultaneous stabilization using stable system inversion. Automatica. 31 (4), 531-532.
Ching, S.-N., Kabamba, P.T., Meerkov, S.M., 2008. Saturated root locus: theory and application. IFAC Proc. 41 (2), 15148-15153.
Du, B., Lam, J., Shu, Z., 2012. Strong stabilization by output feedback controller for linear systems with delayed input. IET Control Theory Appl. 6 (10), 1329-1340.
Evans, W.R., 1948. Graphical analysis of control systems. Trans. AIEE. 67 (1), 547-551.
Evans, W.R., 1950. Control system synthesis by root locus method. Trans. AIEE. 69 (1), 66-69.
Fioravanti, A.R., Bonnet, C., Ozbay, H., Niculescu, S.-I., 2012. A numerical method for stability windows and unstable root-locus calculation for linear fractional time-delay systems. Automatica. 48 (11), 2824-2830.
Gumussoy, S., 2012. Root locus for SISO dead-time systems: a continuation based approach. Automatica. 48 (3), 480-489.
Guo, S.-J., Hung, C.-C., Tsai, J.S.H., 1994. Regional-pole placement of linear multivariable systems via an alternative root-locus technique. J. Franklin Inst. 331 (1), 23-33.
Hwang, C., Yang, S.-F., 2005. Construction of robust root loci for linear systems with ellipsoidal uncertainty of parameters. IFAC Proc. 38 (1), 7-12.
Khodaverdian, S., Lanfermann, F., Adamy, J., 2015. Root locus design for the synchronization of multi-agent systems in general directed networks. IFAC-PapersOnLine. 48 (22), 150-155.
Kurmann, S., 2012. Some remarks on equations defining coincident root loci. J. Algebra. 352 (1), 223-231.
Kwakernaak, H., Sivan, R., 1972. The maximally achievable accuracy of linear optimal regulators and linear optimal filters. IEEE Trans. Autom. Control. 17, 79-86.
Lee, H., Sturmfels, B., 2016. Duality of multiple root loci. J. Algebra. 446, 499-526.
Linnemann, A., 1988. A class of single-input single-output systems stabilizable by reduced-order controllers. Syst. Control Lett. 11 (1), 27-32.
MacDuffee, C.C., 1954. Theory of Equations. John Wiley & Sons, New York.


Miyamoto, H., Ohmori, H., Sano, A., 2000. A new design method of plug-in adaptive controller via root locus technique. In: Proceedings of the IEEE Conference on Decision and Control, 2, 1102-1103.
Nesenchuk, A.A., 2002. Parametric synthesis of qualitative robust control systems using root locus fields. IFAC Proc. 35 (1), 331-335.
Nokhbatolfoghahaee, H., Talebi, H., Menhaj, M.B., Ebenbauer, C., 2016. A new approach for minimum phase output definition. Int. J. Syst. Sci. 48 (2), 264-271.
Owens, D.H., 1983. Geometric conditions for generic structure of multivariable root-loci. Syst. Control Lett. 3 (5), 273-281.
Pal, B.C., Coonick, A.H., Cory, B.J., 2001. Linear matrix inequality versus root-locus approach for damping inter-area oscillations in power systems. Int. J. Electr. Power Energy Syst. 23 (6), 481-489.
Patil, M.D., Vyawahare, V.A., Bhole, M.K., 2014. A new and simple method to construct root locus of general fractional-order systems. ISA Trans. 53 (2), 380-390.
Rao, S.N., 1976. A relation between closed-loop pole and zero locations. Int. J. Control. 24, 147.
Saberi, A., Sannuti, P., 1989. Time-scale structure assignment in linear multivariable systems using high-gain feedback. Int. J. Control. 49, 2191-2213.
Sekara, T.B., Rapaic, M.R., 2015. A revision of root locus method with applications. J. Process Control. 34, 26-34.
Seraji, H., 1975. An approach to dynamic compensator design for pole assignment. Int. J. Control. 21 (6), 955-966.
Seraji, H., 1981. Output control in linear systems: a transfer-function approach. Int. J. Control. 33 (4), 649-676.
Smith, M.C., 1986. On minimal order stabilization of single loop plants. Syst. Control Lett. 7 (1), 39-40.
Smith, M.C., Sondergeld, K.P., 1986. On the order of stable compensators. Automatica. 22 (1), 127-129.
Tarokh, M., 1987. Linearity of the coefficients of the closed-loop characteristic polynomial. Int. J. Control. 45 (4), 1383-1385.
Tenreiro Machado, J.A., 2011. Root locus of fractional linear systems. Commun. Nonlinear Sci. Numer. Simul. 16 (10), 3855-3862.
Youla, D.C., Bongiorno, J.J., Lu, C.N., 1974. Single-loop feedback stabilization of linear multivariable dynamical plants. Automatica. 10, 159-173.

6 Nyquist plot

6.1 Introduction

In Chapter 4, Time Response, our analysis and synthesis of the control system was in the time domain. As we observed, the problem becomes complicated as the order of the system exceeds two. An alternative approach to the analysis and synthesis of control systems—especially to tackle the problem of high system order—is to work in the frequency domain. This method has some particular advantages. One is that sometimes exogenous disturbances acting on the system are expressed only (or more easily) in the frequency domain. The other advantage is that expressing the plant/controller uncertainties in the frequency domain is easier and more natural than in the time domain.¹ Frequency domain methods emerged in the 1930s in parallel with the available time-domain results (like those in Chapter 4, Time Response) and grew over the following decade. The advent of the state-space methods in the 1960s, which consider the system in the time domain, turned the attention of the control community away from the frequency domain and towards the state space. The decades of the 1960s and 1970s witnessed the flourishing of various state-space methods. These methods were apparently superior to the frequency domain methods since they could explicitly address many design specifications that were not explicitly (or easily) tractable by the frequency domain techniques. However, the in-practice failure of optimal, robust, and adaptive controllers designed by the state-space methods shifted the attention of the control community back to the frequency domain techniques. Since then researchers have re-examined both fields and significant progress has been made in both directions. Indeed, today almost all the problems of "that time" are fully resolved, but needless to say we now have new challenges! Giving a full account of the aforementioned problems goes beyond the scope of this first introductory course (many of them need graduate knowledge), let alone the challenges of the present time. Nonetheless, we do address the issues which are at the undergraduate level but which have been resolved only quite recently. In the rest of this introductory course we will study frequency domain methods. The techniques we study in this course are the frequency response, Nyquist plot, Bode diagram, and Krohn-Manger-Nichols chart. Frequency response of a system refers to the response of the system when the input is sinusoidal. As we have previously stated in Chapter 4, Time Response, despite the name, this is actually the time response of the system, and thus we have already studied it in that chapter. We shall study the other three methods in Chapters 6-8. We start with the Nyquist plot in this chapter.
Giving a full account of the aforementioned problems goes outside the scope of this first introductory course (many of them need graduate knowledge), let alone the challenges of the present time. Nonetheless, we do address the issues which are at the undergraduate level but which have been resolved even quite recently. In the rest of this introductory course we will study frequency domain methods. The techniques we study in this course are the frequency response, Nyquist plot, Bode diagram, and Krohn-Manger-Nichols chart. Frequency response of a system refers to the response of the system when the input is sinusoidal. As we have previously stated in Chapter 4, Time Response, unlike the name, this is actually the time response of the system, and thus we have already studied it in that Chapter. We shall study the other three methods in Chapters 68. We start with the Nyquist plot in this chapter. 1

¹ These advantages are better exploited in the graduate course on Robust Control. Here we study the fundamentals of the frequency domain methods.

Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00006-9 © 2017 Elsevier Inc. All rights reserved.


In general, in all frequency response approaches we apply a sinusoid to the system, change the frequency, and measure the outputs at steady state. The theory of LTI systems states—as proven in the following—that the outputs are also sinusoids with the same frequencies as those of the inputs, but possibly with different magnitudes and phase shifts. The ratio between the outputs and the inputs gives the transfer function at the respective frequencies. The transfer functions so obtained are called the sinusoidal (or frequency) transfer functions of the system and are the same as the Laplace transform of the system where s is substituted² with jω. The input can be applied if the system is stable. Otherwise, the system must first be stabilized by feedback. The question thus becomes to find the open-loop transfer function from the closed-loop one obtained. Formulation-wise this is straightforward: one has P = L/[C(1 - L)]. Thus to present the core idea of the frequency domain methods we can simply suppose that the system is stable. However, from a theoretical standpoint,³ we should also consider frequency domain methods for unstable systems. There are two reasons for this. One is that it gives further insight into the problem. The other is that we shall use these methods for the analysis and synthesis (stabilization and tracking) of unstable systems. Suppose we have obtained the transfer function. To "work" with the transfer function, the three classical frequency domain methods studied in this book are:
1. Magnitude versus phase: the so-called Nyquist plot or diagram
2. Log-magnitude versus log-scale frequency plus phase versus log-scale frequency: the so-called Bode diagram
3. Log-magnitude versus phase: the so-called Krohn-Manger-Nichols chart, plot, or diagram
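The steady-state claim above is easy to verify by direct simulation: drive a stable system with a sinusoid, discard the transient, and compare the measured output amplitude with |G(jω)|. A minimal Python sketch for the illustrative plant G(s) = 1/(s+a) with a = 2 and input frequency ω = 3 (these values, and the Euler step size, are arbitrary choices, not from the text):

```python
import math

a, w = 2.0, 3.0          # plant pole and input frequency (illustrative)
dt, t_end = 1e-4, 40.0   # small step, and long enough to reach steady state

y, t = 0.0, 0.0
amp = 0.0
while t < t_end:
    u = math.sin(w * t)
    y += dt * (-a * y + u)           # forward-Euler integration of y' = -a y + u
    t += dt
    if t > t_end - 2 * math.pi / w:  # record the peak over one final period
        amp = max(amp, abs(y))

predicted = 1.0 / math.sqrt(a * a + w * w)   # |G(jw)| for G(s) = 1/(s+a)
print(f"measured amplitude {amp:.4f}  vs  |G(jw)| = {predicted:.4f}")
```

The two numbers agree to within the discretization error, which is the "measurement" that the classical frequency-response procedure performs on hardware.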

The Nyquist plot is named after H. Nyquist of Sweden, 1889-1976, who immigrated to the US in 1907. During almost 40 years of scientific service at Bell Laboratories he obtained close to 140 American patents. Among his numerous contributions to the radio art (as it was then called) are the first quantitative explanation of thermal noise, signal transmission studies which laid the foundation for modern data transmission and information theory, the invention of the vestigial-sideband transmission system widely used in television broadcasting, and the well-known Nyquist plot for the determination of the stability of feedback systems. Also, it is noteworthy that Shannon's sampling theorem is also referred to as the Nyquist-Shannon (also the Nyquist-Shannon-Kotelnikov, or even the Whittaker-Nyquist-Kotelnikov-Shannon) theorem. Unfortunately the existing literature on the aforementioned frequency domain methods, i.e., the Nyquist, Bode, and Krohn-Manger-Nichols methods, is full of critical mistakes and shortcomings. As in the previous chapters, here our presentation has a survey nature. We mention all these false issues and rectify them.

² The reason that the input is considered a sinusoid is as follows. In many systems, particularly communication systems, inputs are sinusoids. On the other hand, considering the Fourier transform, any input signal can be represented as a summation of sinusoids. Thus, we consider a sinusoid as the input.
³ But pictorially, without applying inputs.


The common mistakes among them are the determination of stability properties of MP or NMP systems in terms of the signs of their gain margin (GM) and phase margin (PM) (which will be defined later), the GM and PM of systems which have multiple crossover frequencies (which will also be defined later), and even some mistakes and shortcomings in "drawing" the aforementioned diagrams and in the determination of the system type, etc. In this chapter we study the Nyquist method. It must be noted that, historically, the Nyquist plot was proposed first. Its shortcomings were partly rectified in the Bode diagram and the Krohn-Manger-Nichols plot methods. On the other hand, these techniques have some points that are weaker than the Nyquist plot method, as will be seen. The organization of this chapter is as follows. In Section 6.2 the Nyquist plot is introduced along with its nitty-gritties. The GM, PM, and delay margin (DM) concepts, as well as the high sensitivity region, are introduced and discussed in Section 6.3. Finally, summary, further readings, worked-out problems, and exercises follow in Sections 6.4-6.7.

6.2 Nyquist plot

Let us first mention that the Nyquist plot is exactly the same as the polar plot of the system transfer function, and thus is also known as the polar plot of the system. Here the term polar plot refers to the classical polar plot of previous centuries, where mathematicians drew the magnitude versus the phase of a function. In the Nyquist plot, sometimes also referred to as the Nyquist diagram, frequency plays the role of the parameter, i.e., each frequency determines one point on the diagram. In other words, the transfer function is evaluated along the j-axis. Since the characteristic equation has real coefficients, it suffices to consider positive frequencies only; for negative frequencies the mirror image is obtained with respect to the real axis. However, for the purpose of illustration it is instructive to draw the plot over all frequencies, i.e., positive and negative. Of course, there is another reason for this, which will be discussed in Section 6.2.7. It should be noted that the Nyquist plot is a closed curve, following the Riemann-Lebesgue lemma (Bochner and Chandrasekharan, 1949), since L(jω) has a definite limit as |ω| → ∞. The Nyquist plot was introduced in 1932 as a graphical tool for assessing the closed-loop stability of a system in a negative unity-feedback structure from its open-loop frequency response (Nyquist, 1932). The classical approach has been to apply sinusoidal inputs with different frequencies to the system and to measure the output. The magnitude versus phase plot of the transfer functions constructed from these data is called the Nyquist plot. This plot is accompanied by a criterion, called the Nyquist criterion, which is obtained with the help of the principle of the argument from complex analysis. If this criterion is satisfied by the plot then stability is assured. Due to its elegance and importance, many attempts have been made to turn this graphical, and thus inaccurate, method into an analytical one, thereby making the stability assessment a precise verification tool.
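Since the plot is just the transfer function evaluated along s = jω, its points can be generated directly from the polynomial coefficients. A minimal Python sketch for the illustrative open loop L(s) = 1/[(s+1)(s+2)] (not one of the systems discussed in the text):

```python
import numpy as np

num = [1.0]              # numerator of L(s), highest power first
den = [1.0, 3.0, 2.0]    # (s+1)(s+2) = s^2 + 3s + 2

w = np.logspace(-2, 2, 400)                               # positive frequencies
L = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)     # points of the plot

# The negative-frequency half is the mirror image about the real axis,
# i.e. the complex conjugate of L, so only positive w need be computed.
print("L near w = 0:", L[0])              # close to L(0) = 1/2
print("|L| small at high frequency:", abs(L[-1]) < 1e-3)
```

Plotting L.real against L.imag (together with the conjugate branch) gives the Nyquist plot that the criterion below is applied to.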



Figure 6.1 A typical mapping.

Although these attempts have been partially successful, precise analysis of the general case of j-axis poles and/or zeros has remained open. More precisely, no canonical rule for the precise and analytical construction of the plot is found in the literature of control engineering. What further confirms this point is the fact that the ubiquitously used software MATLAB is not able to produce the complete plot in the case of j-axis poles. Indeed, older versions of MATLAB, like its 2010 release, produced a somewhat wrong answer in the case of j-axis zeros, for which the plot does not go to infinity. Recent versions like the 2015 release have other problems, as we shall learn in this chapter. The presentation of the material resembles a survey. The rest of this section is organized as follows. We present the principle of argument, on which the Nyquist stability criterion is based, in Section 6.2.1. This is followed by the Nyquist stability criterion in Section 6.2.2. Then we review the main difficulties and subtleties of the existing results. This includes the basic procedure for drawing the Nyquist plot in Section 6.2.3, a mistake in the majority of the classical textbooks concerning the 0-frequency end of the Nyquist plot in Section 6.2.4, a result on cusp points of the plot in Section 6.2.5, and a procedure on how to handle the gain for stability verification in Section 6.2.6. Next, in Section 6.2.7 we present the case of j-axis poles/zeros and provide new contributions. We clearly elaborate the procedure with several detailed examples. The presentation is followed by several obvious but critical comments on the relation between the classical stability tools of Nyquist and Evans in Section 6.2.8. This important relation is unfortunately not clearly stated, or is even missing, in some classical texts. In Chapters 7 and 8, the Bode and Krohn-Manger-Nichols methods will be added to the comparisons made in Section 6.2.8.

6.2.1 Principle of argument

The Nyquist criterion for stability is based on the principle of argument.⁴ Thus we shall briefly introduce this principle in the following. Given Cs, a closed contour in the s-plane, the mapping of Cs under L(s) is the closed contour CL in the L-plane obtained by applying L(s) to Cs. Thus, e.g., given the points s1, s2, s3 on Cs, the points L(s1), L(s2), L(s3) are on CL; see Fig. 6.1. It is clear that CL can be traversed

⁴ Due to A. L. Cauchy.


in the same direction as Cs is traversed, or opposite to it. In the given example in Fig. 6.1 they have the same direction. The principle of argument states that if the function L(s) is rational, analytic and nonzero on Cs, and has a finite number of zeros and poles inside Cs, then N = Z - P, where N is the number of encirclements of the origin by CL, with N > 0 for clockwise (CW) and N < 0 for counterclockwise (CCW) encirclements, Z is the number of zeros of L inside Cs, and P is the number of poles of L inside Cs. It should be clarified that in the s-plane the real and imaginary axes are Re(s) and Im(s), respectively, whereas in the L-plane they are Re(L) and Im(L).
Example 6.1: Given Cs as the square constructed symmetrically around the point -1, with sides parallel to the axes and of length 2, investigate the validity of the principle of argument for L1(s) = 1/(s+1), L2(s) = s+1.
First we designate some points on the contour Cs in the CW manner. These are the points s1 through s6 in Fig. 6.2. Next we compute, for L = L1: L(s1) = 1, L(s2) = (1/√2)∠45°, L(s3) = (1/√2)∠135°, L(s4) = -1, L(s5) = (1/√2)∠225°, L(s6) = (1/√2)∠315°. The contour CL1 is depicted in the middle panel of the same figure. It is observed that there is one CCW encirclement of the origin, i.e., N = -1. On the other hand, Z = 0, P = 1, and thus we verify N = Z - P. As for L = L2, the contour CL2 is depicted in the right panel of the same figure. In this case N = 1, Z = 1, P = 0, and N = Z - P is verified.


Figure 6.2 Example 6.1. Left: s-plane; Middle: L1 -plane; Right: L2 -plane.
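The bookkeeping of Example 6.1 can be automated: traverse the square contour, map it through L, and read the number of encirclements from the accumulated phase of the image. A minimal sketch assuming numpy; the contour here is traversed CCW with winding counted CCW-positive, so the value Z - P comes out the same as in the text's CW/CW convention. The discretization density is an arbitrary choice.

```python
import numpy as np

def winding_number(points):
    """Net CCW turns of a closed curve (sequence of complex points) about 0."""
    ang = np.unwrap(np.angle(points))
    return round((ang[-1] - ang[0]) / (2 * np.pi))

def square_contour(center, half_side, n=400):
    """CCW square of side 2*half_side around `center` in the complex plane."""
    t = np.linspace(-half_side, half_side, n)
    right  = center + half_side + 1j * t
    top    = center + t[::-1] + 1j * half_side
    left   = center - half_side + 1j * t[::-1]
    bottom = center + t - 1j * half_side
    return np.concatenate([right, top, left, bottom])

Cs = square_contour(center=-1.0, half_side=1.0)   # encloses s = -1

# With a CCW contour and CCW-positive winding, the principle of argument
# gives winding = Z - P, matching the values found in Example 6.1.
N1 = winding_number(1.0 / (Cs + 1.0))   # L1 = 1/(s+1): Z = 0, P = 1 -> -1
N2 = winding_number(Cs + 1.0)           # L2 = s+1:     Z = 1, P = 0 -> +1
print(N1, N2)
```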

Example 6.2: Repeat the above example with L1(s) = 1/(s-1), L2(s) = s-1.
For L = L1 we find L(s1) = -1, L(s2) = (1/√2)∠135°, L(s3) = (1/√10)∠(π - tan⁻¹(1/3)), L(s4) = -1/3, L(s5) = (1/√10)∠(π + tan⁻¹(1/3)), L(s6) = (1/√2)∠225°. The contour CL1 is depicted in the left panel of Fig. 6.3. It is observed that N = 0. On the other hand Z = 0, P = 0, and hence N = Z - P is verified. As for L = L2, the contour CL2 is given in the right panel of the same figure. Here as well the condition N = Z - P is verified with the same details.


Introduction to Linear Control Systems


Figure 6.3 Example 6.2. Left: L1-plane; Right: L2-plane.


Figure 6.4 Nyquist contour, covering the whole ORHP.

In the ensuing section we show how Nyquist used the above result to obtain his stability criterion.

6.2.2 Nyquist stability criterion

What Nyquist did was to choose Cs as the semi-circle shown in Fig. 6.4, with infinite radius so as to cover the whole ORHP. On the other hand, he noted that the closed-loop poles are the zeros of 1 + L = 0. The application of the principle of argument thus resulted in his stability criterion as follows: Let P be the number of open-loop poles in the ORHP. Then the closed-loop system is stable iff the Nyquist plot of L encircles the point −1 in the CCW manner P times.

Proof: Let Z be the number of zeros of 1 + L in the ORHP. Thus, Z is the number of unstable closed-loop poles of the system. For stability there should hold Z = 0, and hence N = −P. We note that positive and negative values of N, respectively, mean CW and CCW encirclements of the point 0 by 1 + L. On the other hand, encirclement of the point 0 by 1 + L is equivalent to encirclement of the point −1 by L.5 The theorem is thus proven. Δ

5. Note that C1+L is the same as CL except that it is shifted to the right by 1.

Nyquist plot


In other words, in terms of N: if N > 0 (CW encirclements of the point −1) the closed-loop system is certainly unstable. If N < 0 (CCW encirclements of the point −1) the closed-loop system is stable iff it has |N| unstable open-loop poles. Also note that the theorem in particular means that if the open-loop system is stable, the Nyquist plot of the system must not encircle the point −1.

How should we count the number of encirclements of the point −1? This question has been treated in several papers through the decades, each contributing some new insight to the problem. The basic approach, which seems to be the most tangible one, is as follows (see also Further Readings). Complete the plot for both positive and negative frequencies, by drawing the negative-frequency part as the mirror image of the positive-frequency part with respect to the real axis, and then simply count the number of encirclements.

Remark 6.1: In the original statement of the principle of argument it is assumed that the function L(s) does not have any zeros/poles on the boundary Cs. Therefore in the Nyquist criterion the system L(s) must not have any (finite) zeros/poles on the j-axis. The case of such zeros/poles will be addressed later in Section 6.2.7.

In the next section we present the basic construction, or drawing, of the Nyquist plot.
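The counting recipe just described can be mechanized. The Python sketch below (an illustration of ours; the loop L(s) = 2/(s − 1) is a hypothetical example with P = 1) samples L(jω) over a large symmetric frequency range and counts encirclements of −1 from the unwrapped angle:

```python
import numpy as np

# Count encirclements of -1 directly from frequency samples; N > 0 means
# CW, as in the criterion. Valid when L has no j-axis poles or zeros.
def nyquist_N(L, w_max=1e4, n=200001):
    w = np.linspace(-w_max, w_max, n)
    vec = L(1j * w) + 1                 # vectors from the critical point -1
    ang = np.unwrap(np.angle(vec))
    return -round((ang[-1] - ang[0]) / (2 * np.pi))

# Hypothetical open loop L(s) = 2/(s - 1): P = 1 unstable open-loop pole.
L = lambda s: 2 / (s - 1)
N = nyquist_N(L)        # one CCW encirclement of -1
Z = N + 1               # Z = N + P
print(N, Z)             # -1 0 -> closed loop stable (its pole is s = -1)
```

Since N = −P here, we get Z = 0: the closed loop is stable, and its single pole is indeed at s = −1.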

6.2.3 Drawing of the Nyquist plot

To this end, there are two basic approaches one may think of. The first is the "basic elements approach" and the second is the "transfer function approach." In the former, which sounds more systematic at the outset, one finds the Nyquist plot of the basic elements of a transfer function and then tries to put them together so as to make a certain transfer function. The basic elements that appear in all transfer functions are K, jω, 1/(jω), 1 ± jωT, 1/(1 ± jωT), and [1 + (2ζ/ωn)jω + (jω/ωn)²]^(±1).6 It is easy to find the Nyquist plot of these constituting elements; however, their combination in the transfer function is formed by multiplication. Therefore if we take a polar view (which is simpler than the Cartesian view), at different frequencies their respective magnitudes must be multiplied by each other and their respective phases must be added to each other. This is difficult since frequency is the hidden parameter of the plot. At a certain frequency one basic element may be at its starting portion whereas another is at its mid range, and it is not clear where that frequency will fall in their multiplication, i.e., in the start, mid, or final portion.7 That is, it is not easy to deduce the Nyquist plot of a transfer function from those of its constituting elements. This is demonstrated in Examples 6.3 and 6.4. As such we should take the second

6. Note that any rational transfer function, e.g., (s − 1)(2s + 1)²/[s³(2s² − 3s + 1)²], is a combination of these elements.
7. As will be seen, this problem is rectified in the Bode diagram approach, which will be presented in Chapter 7. For the construction of Bode diagrams the basic elements approach is adopted.


approach. That is to say, we have to construct the Nyquist plot of the whole transfer function by computing and analyzing its magnitude and phase at different frequencies along the j-axis. Several instructive examples will be offered to expound this method. However, we start with a simple system in Example 6.5. Before presenting the examples we emphasize again that the Nyquist plot is a closed curve symmetric with respect to the real axis. For negative frequencies the curve is the mirror image of the positive-frequency part about the real axis.

Example 6.3: This example shows that the "basic elements approach" does not succeed for construction of the Nyquist plot. The constituting elements L1(s) = 1/(s + 0.4302) and L2(s) = 1/(s² + 1.57s + 2.325) make L3(s) = L1(s)L2(s) = 1/(s³ + 2s² + 3s + 1). Their Nyquist plots are provided in Fig. 6.5, left, middle, and right panels, respectively. The points A through D refer to the frequencies ω = 0.15, 0.3, 1.3, 4, respectively. As is observed, it is not easy, if possible at all, to deduce the right panel from the other two. Also note that for ω ≥ 4 the right panel is essentially at the origin whereas the other two are still far from the origin.

Figure 6.5 Example 6.3. Left: L1; Middle: L2; Right: L3.

Example 6.4: This is another example to show that the "basic elements approach" fails, especially for sophisticated systems. Consider the system L(s) = (s + 1)/[(s + 0.1)(s + 4)(s + 5)]. You have already seen in Example 6.3 how the constituting element of a pole looks, i.e., Fig. 6.5, left panel. The Nyquist plot of the constituting element of a real zero is a straight vertical line. That of the element s + 1 is shown in Fig. 6.6, left panel. In the right panel the Nyquist plot of L is given. Points A, B, and C refer to the frequencies 0.01, 0.5, 10, respectively. As is observed, it is not easy, if possible at all, to deduce the Nyquist plot of L from those of its constituting elements.


Figure 6.6 Left: Nyquist plot of s + 1; Right: Nyquist plot of L.

We should stress that the system of Example 6.4, although more complicated than that of Example 6.3, is not really a "complex" system. However, because these examples serve our purpose, we content ourselves with them. More sophisticated systems are considered throughout this chapter and are solved with the second approach. The reader is urged to ponder and observe the aforementioned problem as he or she comes to those complex systems.

Example 6.5: In this example we draw the Nyquist plot of the system L(s) = 4/(s + 1)³ by the second approach, i.e., without trying to use its constituting elements.

First we write L(jω) = 4/(jω + 1)³. Now we increase the frequency from zero to infinity and in a polar plot context we find the magnitude and phase of L. The start and end points are straightforward to find. At ω = 0: |L(j0)| = 4, ∠L(j0) = 0 deg. At ω = ∞: |L(j∞)| = 0, ∠L(j∞) = −270 deg. For other frequencies we need to expand the transfer function. The details that we seek are as follows.

L(jω) = 4/(1 + jω)³ = 4/[(1 − 3ω²) + j(3ω − ω³)] = [4(1 − 3ω²) − j4(3ω − ω³)]/[(1 − 3ω²)² + (3ω − ω³)²].

For stability we are interested in the encirclement (or otherwise) of the point −1. Thus we need to find the real-axis crossings. To this end we work with the imaginary part of L, i.e., Im L(jω) = −4(3ω − ω³)/[(1 − 3ω²)² + (3ω − ω³)²]. From Im L(jω) = 0 we find ω. These are ω = 0, ±√3, ±∞. We already know L at ω = 0, ±∞.


Now we find L(±j√3) = −1/2. On the other hand, from the formula of Im L(jω) we know that Im L(jω) < 0 for 0 < ω < √3 and Im L(jω) > 0 for ω > √3. Thus the Nyquist plot starts at the real point 4 at ω = 0, by increasing frequency goes downwards in the fourth and third quadrants, and reaches the real point −1/2 at ω = √3. By further increasing the frequency, the plot goes up in the second quadrant and finally at ω = ∞ ends at the origin with the angle −270 deg. For negative frequencies the plot is the mirror image of the positive part with respect to the real axis, and therefore the whole plot looks like Fig. 6.7. Finally note that according to the Nyquist stability criterion the system is stable.

Figure 6.7 Nyquist plot of Example 6.5.
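A quick numerical confirmation of the hand computation (ours, in Python):

```python
import numpy as np

# Example 6.5 check: L(s) = 4/(s+1)^3 starts at 4 (angle 0), crosses the
# real axis at -1/2 when w = sqrt(3) (where Im L vanishes), and tends to
# the origin at angle -270 deg as w -> infinity.
L = lambda s: 4 / (s + 1) ** 3

assert np.isclose(L(0j), 4)                       # start point
v = L(1j * np.sqrt(3))
assert np.isclose(v.real, -0.5) and abs(v.imag) < 1e-9
print("real-axis crossing:", round(v.real, 6))    # -0.5
```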

In the following we shall present several other examples to further elaborate on the Nyquist method. First, however, it is instructive to discuss some general issues, some of which are treated incorrectly in the existing literature.

6.2.4 The high- and low-frequency ends of the plot

Let L(s) = N(s)/D(s) = N(s, m)/[s^N D0(s, n0)] be the system transfer function, where m is the order of the numerator, N is the type of the system, and n = N + n0 is the order of the denominator. Then, for a minimum-phase system, Table 6.1 summarizes the high- and low-frequency end data of the Nyquist plot of the system:

Table 6.1 High- and low-frequency ends of the plot

Frequency ω = 0:
  N > 0: magnitude ∞, phase −90N deg
  N = 0: magnitude |G(j0)|, phase 0
  N < 0: magnitude 0, phase −90N deg
Frequency ω = ∞:
  n > m: magnitude 0, phase −90(n − m) deg
  n = m: magnitude |G(j∞)|, phase 0
  n < m: magnitude ∞, phase −90(n − m) deg
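As a quick numerical illustration of Table 6.1, take the hypothetical type-1 system G(s) = 10/[s(s + 2)] (our example; N = 1, n − m = 2). The plot should leave from infinite magnitude at phase −90 deg and end at the origin at phase −180 deg:

```python
import numpy as np

# End-point data of G(s) = 10/(s(s+2)): type N = 1, relative degree 2.
G = lambda s: 10 / (s * (s + 2))
for w in (1e-4, 1e4):
    v = G(1j * w)
    print(f"w = {w:g}: |G| = {abs(v):.3g}, phase = {np.degrees(np.angle(v)):.1f} deg")
```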


Figure 6.8 The classical but generally incorrect picture of the Nyquist plot versus type of the system.

It should be clarified that actually this is the case at ω → 0⁺. Fig. 6.8 is a classical result in many standard textbooks on linear control systems. In other words, it seems to be folk knowledge that Table 6.1 (which is correct) means that the low-frequency end of the plot tends to one of the axes. More precisely, it is thought that for type-1, type-2, and type-3 systems the low-frequency end of the plot tends to the negative imaginary axis, negative real axis, and positive imaginary axis, respectively. This is obviously false in general. In fact, the low-frequency end may be infinitely far away from all axes, as verified in Examples 6.6 and 6.7.

Example 6.6: Let L(s) = (s + 1)/s². Then one may expect the low-frequency end of the plot to be something like the one given in Fig. 6.8. However, for this system L(jω) = −1/ω² − j/ω, thus Re L(jω) = −(Im L(jω))², with both Re L and Im L negative, which is a parabola in the third quadrant, as depicted in Fig. 6.9. In particular this means that not only does the Nyquist plot not tend


Figure 6.9 Correct Nyquist plot of Example 6.6.

to the negative real axis (whose angle is −90 × 2 = −180 deg), but it indeed goes infinitely far from it, although its phase is −90 × 2. In fact, the Nyquist plot goes infinitely far from both of the axes. The complete Nyquist plot of this system will be depicted after reading Section 6.2.7, where the behavior of the plot at its end points is clarified; see worked-out Problem 6.6.
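The parabola claim is easy to confirm numerically (our Python check, not from the text):

```python
import numpy as np

# Example 6.6: for L(s) = (s+1)/s^2, L(jw) = -1/w^2 - j/w, hence
# Re L = -(Im L)^2: a parabola in the third quadrant that recedes from
# BOTH axes as w -> 0, despite the phase tending to -180 deg.
w = np.linspace(0.05, 5, 200)
L = (1j * w + 1) / (1j * w) ** 2
assert np.allclose(L.real, -L.imag**2)
assert np.all(L.real < 0) and np.all(L.imag < 0)
print("low-frequency sample L(j0.05) =", L[0])   # both parts large, negative
```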

Example 6.7: As a more complicated example we consider the system L(s) = (s² + 1)/[s³(s − 1)²]. Here as well one may expect the low-frequency end of the Nyquist plot to be like Fig. 6.8. In this case Re L = −2ω(1 − ω²)/[ω³(1 − ω²)² + 4ω⁵] and Im L = (1 − ω²)²/[ω³(1 − ω²)² + 4ω⁵]. While it is not straightforward to draw the plot analytically, it is easy to do so numerically, e.g., through a software package like MATLAB. Using MATLAB we find that the Nyquist plot looks like Fig. 6.10. The complete plot will

Figure 6.10 Correct Nyquist plot of Example 6.7.


be addressed in Exercise 6.10. Although the Nyquist plot has the angle −90 × 3 = −270 deg at its low-frequency end, it not only does not tend to the positive imaginary axis (whose angle is −270 deg, i.e., +90 deg) but in fact is infinitely far from both axes.
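A numerical sample (ours) makes the point concrete: near ω = 0 both parts of L(jω) are huge, so the plot is far from both axes:

```python
import numpy as np

# Example 6.7: L(s) = (s^2+1)/(s^3 (s-1)^2). Both |Re L| and |Im L| blow
# up as w -> 0+, even though the phase tends to -270 deg (the +j direction).
L = lambda s: (s**2 + 1) / (s**3 * (s - 1) ** 2)
for w in (0.1, 0.01):
    v = L(1j * w)
    print(f"w={w}: Re={v.real:.3g}, Im={v.imag:.3g}")
```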

We should also mention that more explanation on the end-point behavior of the plots is provided in Section 6.2.7.

6.2.5 Cusp points of the plot

In general, traversing along the plot continuously by increasing or decreasing the frequency, the plot does not have any cusp, i.e., a sharp point where the angle jumps; see the top left panel in Fig. 6.11. Also see Section 6.2.7 and Problem 6.3. However, the whole plot (for positive and negative frequencies) may look as if it does. Note that, as we observed in Examples 6.3 and 6.4, the point between the points B and C in the right panel of Fig. 6.6 (where the plot intersects itself) is not a cusp point. Systems with apparent cusp points are provided in Example 6.8. You will encounter many other systems with this feature later in this chapter.

Example 6.8: Consider the systems L1(s) = (s + 1)³/[(s + 0.2)(s + 5)³], L2(s) = 10⁵(s + 1)(s + 2)/[(s + 0.1)(s + 20)(s + 100)(s + 200)], and L3(s) = 10⁵(s + 1)(s + 2)/[s(s + 10)(s + 100)(s + 200)].

The Nyquist plots are provided in the upper right, lower left, and lower right panels of Fig. 6.11, respectively. In the upper right panel, by increasing the frequency from zero, the plot starts at the point A, goes downwards, and reaches the point B. It then goes upwards and reaches the point C, and finally goes downwards again and arrives at the point D. The whole plot for both positive and negative frequencies thus looks like the provided plot. The points B and C are cusp points. In the lower left panel the plot starts from A, goes downwards and reaches B, then goes upwards and reaches C, and finally goes downwards and arrives at D, tangential to the negative real axis. The points B, C, and D are cusp points. The unlettered points designated by arrows are intersection points. Finally, in the third case in the lower right panel, the plot starts at the bottom of the page, comes upwards and reaches A, then goes upwards and reaches B, and finally goes downwards and reaches C. All the lettered points (and not the unlettered ones) are cusp points.


Figure 6.11 Example 6.8 and cusp points. Upper left: Cusp points; Upper right: L1; Lower left: L2; Lower right: L3.

In the next section we discuss the case of gain determination via Nyquist stability analysis.

6.2.6 How to handle the proportional gain/uncertain parameter

Often the problem is to find the range of a single uncertain parameter in the system for stability. Equivalently, the simplest case of this is to find the range of the static gain for stability. The solution to this problem, though not as involved as the one studied in the next section, is alas not a widespread practice in all engineering textbooks and course notes. In fact, some even present it incorrectly. Hence, we offer a few words here along with an example. To solve this problem, as in the root locus context, we factor out the gain/parameter and reformulate the problem as follows. We originally have 1 + L = 0. We write it as 1 + L = 1 + KL0 = 0 and reformulate it to 1/K + L0 = 0. Hence, it is observed that for stability the point −1/K (instead of −1) should or should not be encircled.


Example 6.9: Let L(s) = K/(s + 1)³. Using the Nyquist criterion find the range of K for stability.

Through the aforementioned reformulation we get L0(s) = 1/(s + 1)³. Now the system is similar to Example 6.5 except that its gain is different. So in the present system the starting point is 1 (instead of 4) and the negative real-axis crossing is −1/8 (instead of −1/2). Because the open-loop system is stable, for stability of the closed-loop system the point −1/K should be in a region on the real axis that is not encircled by the plot. There are two regions with this feature. One is −1/K < −1/8 and the other is −1/K > 1. The first one results in 0 < K < 8 and the second one means −1 < K < 0. Thus, the range of K for stability is −1 < K < 8. Note that if −1/K lies in (−1/8, 0), then N = 2, Z = 2 + 0 = 2, i.e., two ORHP closed-loop poles. This corresponds to K > 8. On the other hand, if −1/K lies in (0, 1), then N = 1, Z = 1 + 0 = 1, i.e., one ORHP closed-loop pole. This corresponds to K < −1.

Question 6.1: How can we verify this result by the Routh test?
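Pending the Routh verification asked for in Question 6.1, the range −1 < K < 8 can be cross-checked numerically by counting the ORHP roots of the closed-loop characteristic polynomial (s + 1)³ + K = 0 (a Python sketch of ours):

```python
import numpy as np

# Example 6.9 cross-check: closed-loop poles are the roots of
# (s+1)^3 + K = s^3 + 3s^2 + 3s + (1+K) = 0.
def n_unstable(K):
    return int(np.sum(np.roots([1, 3, 3, 1 + K]).real > 0))

print(n_unstable(4))    # 0 -> stable, inside  -1 < K < 8
print(n_unstable(10))   # 2 -> two ORHP poles for K > 8, as found above
print(n_unstable(-2))   # 1 -> one ORHP pole for K < -1, as found above
```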

As we observed in Section 6.2.2, the Nyquist stability criterion does not allow for j-axis zeros and poles. This case is addressed in the next section.

6.2.7 The case of j-axis zeros and poles

Although some researchers may partly know how to analytically draw the Nyquist plot, especially for the case of j-axis poles and/or zeros, it turns out that this has not become folk knowledge worldwide, even for academicians, let alone students, as confirmed by our experience and the materials taught on this topic around the globe. The major hindrance is probably the fact that classical control engineering textbooks do not cover this topic thoroughly, nor is the ubiquitously used software MATLAB able to produce the correct plot in the case of j-axis poles and zeros. Indeed, older versions of MATLAB, like the 2010 release, produced a wrong answer even for certain systems for which the plot does not go to infinity. Newer versions, like the 2015a release, have other inconsistencies. In this section we provide details of the low- and high-frequency ends of the plot as well as its j-axis poles and zeros, and demonstrate the procedure through several detailed and instructive examples.

Now we embark on the topic. Some systems have poles and/or zeros on the j-axis, especially at the origin. In this case the Nyquist contour (Fig. 6.4) must be modified. This is done by right indentations around these poles and/or zeros along small semi-circles ρe^(jθ), the so-called ρ semi-circles, as shown in Fig. 6.12. (See also Exercise 6.19.) Note that the point jy can be at the origin. The indentation technique is a classical method in the mathematics literature; in the control literature it can be traced


Figure 6.12 Left: The modified/indented Nyquist contour; Right: Magnification about the upper ρ semi-circle.

back at least to the 1940s, without, however, providing details. This case turns out to be the main challenge for Nyquist analysis. We provide a complete treatment of it. Because the procedure we are offering is unfortunately missing in the classical texts and the research literature, the subject has turned out to be a formidable one for students. To analyze the problem, the two cases of j-axis poles and zeros are considered separately.

Case 1: j-axis pole of order m: L = L1/(s − jy)^m, finite y. On the ρ semi-circle around jy there holds s = jy + ρe^(jθ). Thus, its map becomes L(s) = [1/(ρ^m e^(jθm))] L1(jy + ρe^(jθ)). We note that |L(s)| ≈ (1/ρ^m)|L1(jy)|, and thus the ρ semi-circle is mapped onto an arc of infinite radius. The angle of this arc is mπ.

In the case of improper L, y = ±∞ are infinite poles of order m, where m is the zero-pole excess of L. The points y = ±∞ are mapped onto an arc of infinite radius. The angle of this arc is mπ. It should be noted that this result is theoretically sound, but it is practically impossible for it to happen, as all actual systems are causal and hence nonimproper. As such we present only a single example for this case, the worked-out Problem 6.10.

Case 2: j-axis zero of order m: L = (s − jy)^m L1, finite y. On the ρ semi-circle around jy there holds s = jy + ρe^(jθ). Thus, its map becomes L(s) = ρ^m e^(jθm) L1(jy + ρe^(jθ)). We observe that |L(s)| ≈ ρ^m |L1(jy)|, and therefore the ρ semi-circle is mapped onto an arc of infinitesimal radius. This arc is too small to be shown in the Nyquist plot; in the limit it is the origin itself. The angle of this arc is mπ. Hence, if m is odd the plot approaches the origin and bypasses (m = 1) or encircles (m ≥ 3) it, as if going through it, with an increase in its angle of mπ. If m is even the plot approaches the origin and encircles it (mπ)/(2π) times, as if reaching the origin and coming back tangent to the same angle line but with an increase in its angle of mπ.
In the case of strictly proper L, y = ±∞ are infinite zeros of order m, where m is the pole-zero excess of L. The points y = ±∞ are mapped onto an arc of infinitesimal radius. The angle of this arc is mπ. How this arc is traversed is the same as in the above case.

Remark 6.2: Older versions of MATLAB, like the 2010 release, compute the phase of such systems wrongly and hence produce a wrong Bode diagram for them. Bode diagrams will be introduced and studied in Chapter 7; see also


Remark 6.5. In the given analysis we clarified what happens to the plot on the ρ semi-circles, and thus we know what the Bode phase diagram should look like. At j-axis zeros the phase of the system increases by mπ. But MATLAB shows a decrease. This obvious mistake of MATLAB may be unimportant in some sense, but it causes confusion for students. Newer versions of MATLAB produce wrong plots as well, but because of another problem: inconsistent definitions of phase angles. See Example 7.2 of the next chapter.

Remark 6.3: If the system has multiple NMP poles and/or zeros, i.e., if the Nyquist contour contains some poles and/or zeros inside, then there may be a difference in the phase angles at the point where the contour traversal starts and ends. This point is typically ω = 0: ∠L0 at ω = 0⁺ at which we end traversing the contour may be different from ∠L0 at ω = 0⁺ from which we started to traverse the contour CW. Once the contour is traversed CW, each NMP pole and zero undergoes a 360 deg decrease in its angle. Therefore, the increase in the phase angle, i.e., Δ∠L0 = ∠L0(jω_end) − ∠L0(jω_start), is (#NMP poles − #NMP zeros) × 360 deg, with obvious meanings for the terms. Note that the phase angle change for stable and j-axis poles and zeros is zero. It should be obvious when the increase in the phase angle is negative, zero, or positive.

Now we present a procedure for drawing the Nyquist plot.

Procedure: Define the Nyquist contour as shown in Fig. 6.12. Define the angles ∠(s − zi) and ∠(s − pi), where s is any point on the Nyquist contour just defined, and form ∠L0 = Σ∠(s − zi) − Σ∠(s − pi). Start at any point, typically ω = 0⁺, and traverse the contour CW by increasing the frequency. Find ∠L0 by analyzing the increase or decrease of the values of ∠(s − zi) and ∠(s − pi). In particular, mind: (1) the frequencies of j-axis poles and zeros; (2) the frequencies of real-axis crossings; and (3) asymptotes, if any.
Note that on the ρ semi-circle around jy, ∠(s − jy) increases by π; counting the multiplicity m of the pole or zero at jy, the total contribution is mπ. Thus if jy is a zero, ∠L0 increases by mπ, and if jy is a pole, ∠L0 decreases by mπ. On the other hand, on the infinite-radius arc of the contour all the angles drop by 180 deg. Therefore ∠L0 grows by (#poles − #zeros) × 180 deg. Complete the contour by arriving at the start point again, being ω = 0⁺. Remember that there must hold Δ∠L0 = ∠L0(j0⁺_end) − ∠L0(j0⁺_start) = (#NMP poles − #NMP zeros) × 360 deg.

The above procedure is pictorially shown for the system L(s) = (s² + a²)/[s(s − b)(s + c)], a, b, c > 0, in Fig. 6.13. The left panel shows the Nyquist contour. The right panel shows the CCW angle definitions for a typical point s = jω where 0⁺ ≤ ω ≤ a⁻. The angles' changes on some representative portions of the Nyquist contour are also provided in the same figure. These are 0⁺ ≤ ω ≤ a⁻, a⁻ ≤ ω ≤ a⁺, the infinite arc, and −∞ ≤ ω ≤ −a⁻. The above procedure is further demonstrated in the following instructive examples with more details.


Figure 6.13 Part of the Nyquist analysis procedure for a typical system.
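The magnitude and angle claims of Cases 1 and 2 are easy to verify numerically. The sketch below (ours, not from the text) samples the ρ semi-circle for the hypothetical system L(s) = 1/[s(s + 1)], i.e., a simple pole at the origin with y = 0, m = 1, and L1(s) = 1/(s + 1):

```python
import numpy as np

# Case 1 check: the image of s = rho*e^{j theta}, theta in [-pi/2, pi/2],
# should have magnitude ~ |L1(j0)|/rho and sweep an angle of m*pi.
rho = 1e-6
theta = np.linspace(-np.pi / 2, np.pi / 2, 7)
s = rho * np.exp(1j * theta)
pts = 1 / (s * (s + 1))

print(np.min(np.abs(pts)))                 # ~1/rho = 1e6: infinite-radius arc
sweep = np.angle(pts[0]) - np.angle(pts[-1])
print(round(sweep / np.pi, 3))             # ~1 = m: the arc spans m*pi
```

Repeating this with a zero instead of a pole (Case 2) gives magnitudes of order ρ and the same mπ sweep, but traversed with increasing angle.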

Example 6.10: Draw the Nyquist plot of the system L(s) = Ks²/(s − 1)³ and analyze its stability.

Note that the system has a zero of second order at the origin and an infinite zero of first order. For the given system we form L0 = s²/(s − 1)³. First find the real-axis crossings: Im L0(jω) = 0, thus ω = 0, ±√3, ±∞. Define the CCW angles α = ∠s and θ = ∠(s − 1). Thus, ∠L0 = 2α − 3θ. In the frequency range ω > 0, α = 90: ∠L0 = 180 − 3θ. It is seen that at ω = 0⁺, θ = 180, and L0 = 0∠−360. As we increase ω, θ decreases and ∠L0 increases. For example, at ω = √3/3, θ = 150, and L0 = |L0(j√3/3)|∠−270. At ω = √3, θ = 120, and L0 = |L0(j√3)|∠−180. Finally, at ω = +∞, θ = 90, and L0 = 0∠−90. On the infinite arc of the Nyquist contour |L0| → 0 and all the angles drop by 180 deg. Thus ∠L0 increases by (#poles − #zeros) × 180 = (3 − 2) × 180 = 180, so at ω = −∞, ∠L0 = −90 + 180 = 90. That is, the infinitesimal arc to the right of the origin is obtained; see Fig. 6.14, Panel 1.


Figure 6.14 Nyquist plot of Example 6.10. Panel 1: mapping of the infinite arc of the Nyquist contour (the origin is bypassed); Panel 2: mapping of the ρ semi-circle around j0 (the origin is encircled one time); Panel 3: the complete plot.

Now we are on the negative j-axis. In the frequency range −∞ ≤ ω < 0, ∠L0 = 2α − 3θ = −180 − 3θ. As we increase the frequency in this range, ∠L0 increases from 90 at ω = −∞ to 360 at ω = 0⁻. At ω = 0⁻, L0 = 0∠360. On the ρ semi-circle around j0, |L0| = 0 but its phase changes. As α increases from −90 to 90, θ remains constant at −180. Thus, as 2α appears in ∠L0, ∠L0 increases by 2 × 180 = 360. That is, at ω = 0⁺, ∠L0 = 720. The infinitesimal circle shown in Fig. 6.14, Panel 2, is thus obtained. Note that we have a second-order zero, m = 2, and the origin has been encircled mπ/(2π) = 1 time. Indeed, because ρ tends to zero, the circle vanishes, as if the mapping


reaches and touches the origin and goes back tangent to the same angle line, but with an increase of 360 deg in its phase. The complete Nyquist plot is shown in Panel 3 of Fig. 6.14.

Note that for stability we must have 3 CCW encirclements of the point −1/K. As this takes place for no portion of the real axis, the system cannot be stabilized by K. Lastly, note that the increase in the phase angle is Δ∠L0 = ∠L0(j0⁺_end) − ∠L0(j0⁺_start) = 720 − (−360) = 3 × 360, as predicted by Remark 6.3.

Remark 6.4: Note that in this and similar problems, for the purpose of illustration the plot is not drawn to scale. In particular this is true of the infinitesimal arcs/circles.
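The conclusion of Example 6.10 that no gain stabilizes the loop can be double-checked from the closed-loop characteristic polynomial (our numerical sketch, not from the text):

```python
import numpy as np

# Example 6.10 cross-check: the closed-loop characteristic polynomial is
# (s-1)^3 + K*s^2 = s^3 + (K-3)s^2 + 3s - 1. Its constant term is -1 for
# every K, so at least one closed-loop pole is always in the ORHP:
# no gain K stabilizes, agreeing with the encirclement argument.
for K in (-10.0, -1.0, 0.5, 5.0, 100.0):
    roots = np.roots([1, K - 3, 3, -1])
    assert np.any(roots.real > 0), K
print("no stabilizing K, as the Nyquist analysis predicts")
```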

Example 6.11: Draw the Nyquist plot of the system L(s) = K(s² + 1)/(s − 1)³ and analyze its stability.

The system has finite j-axis zeros of first order and an infinite zero of first order. We first form L0 = (s² + 1)/(s − 1)³, and find the real-axis crossings: Im L0(jω) = 0, hence ω = 0, ±1, ±√3, ±∞. By convention the angles are CCW. Defining α = ∠(s + j), β = ∠(s − j), and θ = ∠(s − 1), there thus holds ∠L0 = α + β − 3θ. We increase the frequency from zero to infinity and find L0. In the frequency range 0 ≤ ω < 1, α = 90, β = 270: ∠L0 = 360 − 3θ. (Note that the CCW convention results in β = 270, not β = −90.) It is seen that at ω = 0, θ = 180, and L0 = 1∠−180. As we increase ω, θ decreases and ∠L0 increases (i.e., becomes less negative). At ω = 1, θ = 135, and L0 = 0∠−45. The lower part of the plot has been obtained so far; see Fig. 6.15, Panel 1.

On the ρ semi-circle around j1, α remains 90 deg. On the other hand, as s traverses along the upper ρ semi-circle, β increases from 270 to 450. The contribution of β is thus 180. This causes ∠L0 to increase by 180 at ω = 1, from −45 to 135. This takes place on the infinitesimal arc shown in Fig. 6.15, Panel 1. This is precisely what happens at j-axis zeros. However, because such arcs are infinitesimally small, MATLAB does not depict them. Indeed, because ρ tends to zero, the arc vanishes and the mapping touches the origin. Here, it is seen that the mapping goes through the origin. (This is in accordance with what we said before: for odd m, here m = 1, the mapping goes through the origin.)


Figure 6.15 The Nyquist plot of Example 6.11 with its details. Top left (Panel 1): mapping of the ρ semi-circle around j1, with phase marks −180, −45, 135 deg; Top right (Panel 2): mapping of the infinite arc of the Nyquist contour; Bottom left (Panel 3): mapping of the ρ semi-circle around −j1, with phase marks 585, 765, 900 deg; Bottom right (Panel 4): the complete Nyquist diagram.

As we further increase ω beyond 1, it is seen that α = 90, β = 450: ∠L0 = 540 − 3θ. As θ goes from 135 to 120, ∠L0 goes from 135 to 180. This real-axis crossing takes place at ω = √3. As we continue to increase ω, ∠L0 grows from 180 to 270, i.e., finally at ω = +∞: ∠L0 = 540 − 3 × 90 = 270. As we move from ω = +∞ to ω = −∞ on the infinite-radius arc of the Nyquist contour, on this arc |L0| → 0 and α, β, θ each drop by 180; thus ∠L0 increases by 180, from 270 to 450, and the infinitesimal arc to the right of the origin is obtained; see Fig. 6.15, Panel 2.

Now we are on the negative imaginary axis. In the range −∞ ≤ ω < −1, ∠L0 = −90 + 270 − 3θ. As we increase ω till ω = −√3, θ decreases from −90 to −120, and ∠L0 goes from 450 to 540. Here there is a real-axis crossing. As we continue, θ decreases from −120 to −135, and ∠L0 grows from 540 to 585 at ω = −1⁻. At this point L0 = 0∠585. On the ρ semi-circle around −j1, α increases from −90 to 90, and thus there is a 180 deg phase increase in L0, i.e., ∠L0 increases from 585 to 765. The infinitesimal arc of Fig. 6.15, Panel 3, around the origin is obtained. As we further increase ω in the range −1 < ω ≤ 0, ∠L0 = 90 + 270 − 3θ. As θ decreases from −135 to −180, ∠L0 grows from 765 to 900. We are at the point 1∠900.


The complete Nyquist plot is depicted in Fig. 6.15, Panel 4. To perform a stability analysis, we compute Re L0(±j√3) = −1/4. For stability we must have 3 CCW encirclements of the point −1/K. Thus, −1/4 < −1/K < 0, yielding K > 4. Finally, note that in this example 0⁻ = 0⁺, and that the increase in the phase angle is Δ∠L0 = ∠L0(j0⁺_end) − ∠L0(j0⁺_start) = 900 − (−180) = 3 × 360, as expected from Remark 6.3.
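Both the crossing value and the stability range can be verified numerically (our sketch, not from the text):

```python
import numpy as np

# Example 6.11 cross-check: Re L0(j*sqrt(3)) = -1/4, and the closed loop
# (s-1)^3 + K(s^2+1) = s^3 + (K-3)s^2 + 3s + (K-1) = 0 is stable for K > 4.
L0 = lambda s: (s**2 + 1) / (s - 1) ** 3
v = L0(1j * np.sqrt(3))
print(np.round(v.real, 6), abs(v.imag) < 1e-9)   # -0.25 True

def n_unstable(K):
    return int(np.sum(np.roots([1, K - 3, 3, K - 1]).real > 0))

print(n_unstable(5), n_unstable(3))              # 0 2: stable only above K = 4
```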

Remark 6.5: Although we have not studied Bode diagrams yet, it is instructive to have a look at the Bode diagrams of this system. In a Bode diagram the upper plot shows the logarithmic magnitude and the lower plot shows the phase. In both cases the horizontal axis is the frequency on a logarithmic scale. Older versions of MATLAB, like the 2010 release, were not able to produce the correct plot. It produced the diagram in Fig. 6.16, left panel, which is certainly wrong in our setting as it shows a phase drop at j-axis zeros.8 Note that the Bode diagram that MATLAB 2015a produces for this system has a wrong phase angle: 360 must be added to it to become correct. In both cases the magnitude diagrams are correct. (The simple explanation for the upper part is that the logarithm of a small number, approximately zero, is a large negative number like −200 or −300. Also, the aforementioned small number is the magnitude of the transfer function at the j-axis zero.)


Figure 6.16 Example 6.11. Left: Output of MATLAB 2010a; Right: Output of MATLAB 2015a.

8. However, note that in the modified Nyquist contour of Fig. 6.12, if the right indentations are replaced with left indentations then we obtain this figure. Which one is correct, and how the two compare, is what you discuss in Exercise 6.19.
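The order of magnitude of the dip at a j-axis zero is easy to check numerically. The sketch below again assumes the reconstructed loop gain L₀(s) = (s² + 1)/(s − 1)³ of Example 6.11 (our assumption) and only illustrates why the magnitude plot plunges rather than reaching −∞:

```python
import math

def L0(s):
    return (s**2 + 1) / (s - 1)**3   # assumed loop gain of Example 6.11

def db(x):
    return 20 * math.log10(x)

# Just off the zero at s = j the magnitude is tiny but nonzero, so the
# log-magnitude plunges to a large negative dB value instead of minus infinity.
assert db(abs(L0(1j * (1 + 1e-6)))) < -100   # a large negative dB value
assert db(abs(L0(2j))) > -40                 # away from the zero, an ordinary value
```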

Nyquist plot

467

Example 6.12: Draw the Nyquist plot of the system L(s) = Ks/[(s + 1)(s² + 1)]. Analyze its stability.

We remind you that MATLAB® is not able to draw the Nyquist plot of this system. The system has finite j-axis zeros and poles of first order together, and an infinite zero of second order. In the first step we form L₀ = s/[(s + 1)(s² + 1)] and find the real-axis crossings: ℑL₀(jω) = 0, hence ω/(1 − ω⁴) = 0, or ω = 0, ±∞. By convention we define the angles in the CCW manner. With α = ∠s, β = ∠(s + j), γ = ∠(s − j), and θ = ∠(s + 1), there holds ∠L₀ = α − (β + γ + θ). We increase the frequency from zero to infinity and find L₀. At ω = 0⁺: α = 90, β = 90, γ = 270, θ = 0. Therefore, L₀ = 0∠−270. In the frequency range 0⁺ ≤ ω < 1, θ increases and the other angles remain constant. Thus ∠L₀ becomes more negative. At ω = 1⁻, L₀ = ∞∠−315. On the ρ semi-circle around j1, α, β, θ remain constant but γ increases by 180 deg. As it is a pole, ∠L₀ decreases by 180. At ω = 1⁺, L₀ = ∞∠−495. As we further increase the frequency, α, β, γ remain constant and θ increases from 45 to 90. Thus finally at ω = +∞, ∠L₀ decreases to −540. At this frequency we have L₀ = 0∠−540. As we move from ω = +∞ to ω = −∞ on the infinite-radius arc of the Nyquist plot, on this arc |L₀| → 0 and all the angles drop by 180, thus ∠L₀ increases by (#poles − #zeros) × 180 = (3 − 1) × 180 = 360. Hence, the infinitesimal circle around the origin is obtained, Fig. 6.17, Panel 1. At ω = −∞, ∠L₀ = −180. Now we are on the negative imaginary axis. At ω = −∞: α = −90, β = −90, γ = 270, θ = −90. Therefore ∠L₀ = −180. As we increase ω in the range −∞ ≤ ω < −1, all the angles remain constant except θ, which changes from −90 to −45. At ω = −1⁻, L₀ = ∞∠−225. On the ρ semi-circle around −j1, all the angles remain constant except β, which increases by 180 deg and becomes 90. As it is a pole, ∠L₀ decreases by 180 deg. That is, at ω = −1⁺: L₀ = ∞∠−405. As we further increase ω in the range −1⁺ < ω ≤ 0⁻, all the angles remain constant except θ, which changes from −45 to 0. At ω = 0⁻: L₀ = 0∠−450.

Figure 6.17 Example 6.12. Left: Panel 1; Middle: Panel 2; Right: Panel 3.


Finally, on the ρ semi-circle around j0, all the angles remain unchanged except α, which increases by 180 deg. As it is a zero, ∠L₀ increases by the same amount and changes to −270. The infinitesimal arc to the right of the origin is thus obtained, see Fig. 6.17, Panel 2. Now we are at 0⁺ and L₀ = 0∠−270. Note that Δ∠L₀ = ∠L₀(j0⁺_end) − ∠L₀(j0⁺_start) = 0 = 0 × 360, as known from Remark 6.3. The whole plot is given in Panel 3 of the same figure. For stability, the point −1/K must not be encircled by the plot. Therefore −1/K < 0, or K > 0.
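The conclusion K > 0 can be confirmed algebraically. A minimal Routh–Hurwitz sketch for the closed loop 1 + Ks/[(s + 1)(s² + 1)] = 0:

```python
# Closed-loop characteristic polynomial of Example 6.12:
#   (s+1)(s^2+1) + K s = s^3 + s^2 + (1+K)s + 1.
# A cubic s^3 + a2 s^2 + a1 s + a0 is Hurwitz iff a2 > 0, a0 > 0 and a2*a1 > a0.
def stable(K):
    a2, a1, a0 = 1.0, 1.0 + K, 1.0
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

assert stable(0.5) and stable(100.0)         # any K > 0 works
assert not stable(0.0) and not stable(-0.5)  # K = 0 is the oscillatory boundary
```

At K = 0 the Routh test fails with equality, matching the j-axis poles of the open loop.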

Remark 6.6: It may be possible to derive the exact formula for the curve along which the plot tends to, e.g., the −315 deg line, but it does not really matter. On the other hand, note that the infinitesimal arcs/circles around the origin do not have any influence on the stability of the system, and thus from this standpoint they do not matter either. (Their importance lies in the fact that knowledge of them gives better insight into the problem.) However, the infinite arcs/circles do matter, as they influence the stability of the system.

Example 6.13: Draw the Nyquist plot of the system L(s) = K(s² + 1)/[s(s − 1)], and find the range of K for stability.

The system has finite j-axis zeros and poles of first order together. First we form L₀(jω) = (1 − ω²)/[jω(jω − 1)]. We note that L₀(jω) = −ω²(1 − ω²)/(ω⁴ + ω²) + jω(1 − ω²)/(ω⁴ + ω²). Thus the real-axis crossings are at ω = ±1, ±∞. On the other hand, as ω → 0⁺: ℜL₀ → −1 and ℑL₀ → +∞. By convention the angles are defined in the CCW manner. With α = ∠s, β = ∠(s + j), γ = ∠(s − j), and θ = ∠(s − 1), one has ∠L₀ = (β + γ) − (α + θ). We increase the frequency from zero to infinity and find L₀. At ω = 0⁺: α = 90, β = 90, γ = 270, θ = 180. Hence at this frequency L₀ = ∞∠90 and ℜL₀ = −1. In the frequency range 0⁺ ≤ ω < 1, θ decreases and the other angles remain unchanged. Therefore ∠L₀ increases. At ω = 1⁻, L₀ = 0∠135. On the ρ semi-circle around j1, α, β, θ remain constant but γ increases by 180 deg. As it is a zero, ∠L₀ increases by 180 deg. At ω = 1⁺, L₀ = 0∠315. There is thus an infinitesimal arc around the origin whose start and end points are L₀ = 0∠135 and L₀ = 0∠315. This arc is traversed CCW. As we further increase the frequency, α, β, γ remain constant and θ decreases from 135 to 90. Thus finally at ω = ∞, ∠L₀ increases to 360. At this frequency we have L₀ = 1∠360. As we move from ω = +∞ to ω = −∞ on the infinite-radius arc of the Nyquist plot, on this arc |L₀| = 1 and all the angles drop by 180, thus ∠L₀ increases by (#poles − #zeros) × 180 = (2 − 2) × 180 = 0; it does not change. Hence at ω = −∞ we are at L₀ = 1∠360.


Now we are on the negative imaginary axis. The plot for negative frequencies is the mirror image of the plot for positive frequencies with respect to the real axis. More precisely, as we increase the frequency and reach ω = −1⁻, in this range α = −90, β = −90, γ = 270, and θ decreases from −90 to −135. Therefore at this frequency L₀ = 0∠405. On the ρ semi-circle around −j1, all the angles remain constant except β, which increases by 180 deg and becomes 90. As it is a zero, ∠L₀ increases by 180 deg. That is, at ω = −1⁺: L₀ = 0∠585. As we further increase ω in the range −1⁺ < ω ≤ 0⁻, all the angles remain constant except θ, which changes from −135 to −180. At ω = 0⁻: L₀ = ∞∠630 and ℜL₀ = −1. Finally, on the ρ semi-circle around j0, all the angles remain unchanged except α, which increases by 180 deg. As it is a pole, ∠L₀ decreases by the same amount and changes to 450. Therefore at ω = 0⁺: L₀ = ∞∠450. The mapping is an infinite arc in the form of a straight line on the asymptote ℜL₀ = −1, along which the plot moves from L₀ = ∞∠630 upward to L₀ = ∞∠450. Note that this straight line is not depicted by MATLAB®, but it does exist. Finally, we note that Δ∠L₀ = ∠L₀(j0⁺_end) − ∠L₀(j0⁺_start) = 450 − 90 = 1 × 360, as necessitated by Remark 6.3. For stability the point −1/K must be encircled by the plot one time CCW. Therefore 0 < −1/K < 1, or K < −1. See Fig. 6.18.

Figure 6.18 Nyquist plot of Example 6.13.
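The endpoint behavior and the range K < −1 of Example 6.13 can be cross-checked numerically; the following Python sketch evaluates L₀ directly and applies the sign condition for a Hurwitz quadratic:

```python
# L0(s) = (s^2+1)/[s(s-1)] from Example 6.13.
def L0(s):
    return (s**2 + 1) / (s * (s - 1))

# Near omega = 0+ the plot hugs the asymptote Re L0 = -1 while Im L0 -> +infinity.
v = L0(1e-4j)
assert abs(v.real + 1) < 1e-3 and v.imag > 1e3

# The j-axis zeros at omega = +-1 are the real-axis crossings through the origin.
assert abs(L0(1j)) < 1e-12

# Closed loop: s(s-1) + K(s^2+1) = (1+K)s^2 - s + K.  A real quadratic is Hurwitz
# iff all three coefficients share one sign, forcing 1+K < 0 and K < 0, i.e., K < -1.
def stable(K):
    a2, a1, a0 = 1.0 + K, -1.0, K
    return (a2 > 0 and a1 > 0 and a0 > 0) or (a2 < 0 and a1 < 0 and a0 < 0)

assert stable(-2.0)
assert not stable(-0.5) and not stable(1.0)
```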


Remark 6.7: The analysis can also serve as a complement to computerized drawing of the plot, as a tool for explaining its phase and end-point behavior, especially for systems with complicated angle and/or magnitude conditions. This is illustrated in the following example.

Example 6.14: Consider the system L(s) = K × 0.002 × (s + 0.02)(s + 0.05)(s + 5)(s + 10) / [(s − 0.0005)(s + 0.001)(s + 0.01)(s + 0.2)(s + 1)(s + 100)²]. Draw the Nyquist plot and analyze its stability.

Using MATLAB® it is easy to see that the Nyquist plot is as given in Panel 1 of Fig. 6.19. It starts at 10∠(−180) and goes downward. Note that the plot is not drawn to scale. Panel 2 shows what happens at ω = ∞ on the infinite arc of the Nyquist contour: the origin is encircled by an arc of 3π, as if the plot goes through the origin. The arrival point is 0∠(−270) and the leaving point is 0∠(−270 + 3 × 180) = 0∠270, in which 3 is the pole-zero excess of the system. Finally, note that Δ∠L₀ = ∠L₀(j0⁺_end) − ∠L₀(j0⁺_start) = 180 − (−180) = 1 × 360, as required by Remark 6.3. For stability the point −1/K must be encircled one time CCW. Thus one of the following should hold: A′ = −10 < −1/K < B′, C′ < −1/K < D′, or E′ < −1/K < F′. The real-axis crossings are [−10, −0.301331886940276, −0.006921569618185, −0.000043497172684, −0.000000062739193, −0.000000001379710, 0]. From left to right these are the points A′ through G′, respectively, and the corresponding frequencies are [0, 0.0039, 0.0220, 0.4403, 6.8626, 84.9334, ∞].


Figure 6.19 Nyquist plot of Example 6.14 with its details. Left: Panel 1; Right: Panel 2.

Relation with the root locus is presented next.

6.2.8 Relation with root locus

Having presented the Nyquist analysis, we now comment on its relation with the root locus. We will include the stability methods of the next chapters, i.e., the Bode and Krohn-Manger-Nichols methods, in our analysis as we study them throughout the course. These are simple but subtle points which are unfortunately not presented correctly in the available literature. The correct relation is as follows: the j-axis crossings in the root locus are the same as the real-axis crossings in the Nyquist plot. Having studied the analysis of the previous sections, it should be fairly


easy for the reader to see that the phase of the negative real axis is one of ±180(2h + 1) and that of the positive real axis is one of ±360h. This in particular means that the phase of the negative real axis is not fixed at −180, as claimed in some texts. The importance of this point lies in the computation of the PM, as shall be introduced in the next section, and especially when the Bode diagram and Krohn-Manger-Nichols chart are used in Chapters 7 and 8 for the computation of the PM. On the other hand, regardless of being acceptable or unacceptable, positive gain margins are read off the negative real axis and negative gain margins are read off the positive real axis. Moreover, at the j-axis crossings in the root locus, Gain Value_Root Locus = −1/Real Crossing_Nyquist. Two examples follow.

Example 6.15: Consider again the system of Example 6.14. Note that we have seen the root locus of this system in worked-out Problem 5.12, but with a different gain. Because of Remark 5.12, the gain values in the current problem are [0.1, 3.318619238639335, 1.444759464170064e+02, 2.298967206013336e+04, 1.593900844276381e+07, 7.247908294828736e+08], which are the gain values of that problem divided by 0.002. As observed, these values are the reciprocals of the real-axis crossings of the Nyquist plot (in absolute value). More precisely, K_A,Root Locus = −1/A′_Nyquist, and the same holds for the other crossings B through F. Finally, we stress that the real-axis crossings in the Nyquist plot are not necessarily on the negative axis. For some systems, like L(s) = Ks(s² + 1)/(s − 1)³ (see worked-out Problem 6.7), they are on the positive axis, and for some systems, like L(s) = K/(s + 1)³, they are on both axes. It is easy to verify that the former system is stabilized by K < −4, whereas the latter is stabilized by −1 < K < 8. (Be careful not to draw the wrong conclusion, as a general rule, that in the former case the system is stabilized by negative gain and in the latter case by both positive and negative gains! The correct answer must be found by inspecting the Nyquist stability criterion. See Section 6.3.1.)

Example 6.16: Consider the system L(s) = K/(s + 1)³. In Example 6.9 we found the real-axis crossings at 1 and −1/8. The real-axis crossing at 1 corresponds to K = −1 (at the j-axis crossing in the right panel of Fig. 6.20), and the real-axis crossing at −1/8 corresponds to K = 8 (at the j-axis crossing of the left panel of Fig. 6.20). See also Appendix A3, Examples C.6 and C.7.


Figure 6.20 Root locus of Example 6.16. Left: With positive gain; Right: With negative gain.
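For L(s) = K/(s + 1)³ these relations are easy to verify numerically; the sketch below checks the two real-axis crossings and the resulting stability range −1 < K < 8 via Routh–Hurwitz:

```python
import math

def L0(s):
    return 1 / (s + 1)**3          # L(s) = K * L0(s) of Example 6.16

# Real-axis crossings: L0(0) = 1 and L0(j*sqrt(3)) = -1/8, hence the border
# gains K = -1/1 = -1 and K = -1/(-1/8) = 8 at the j-axis crossings.
assert abs(L0(0) - 1) < 1e-12
v = L0(1j * math.sqrt(3))
assert abs(v.imag) < 1e-9 and abs(v.real + 0.125) < 1e-9

# Routh-Hurwitz on (s+1)^3 + K = s^3 + 3s^2 + 3s + (1+K) confirms -1 < K < 8.
def stable(K):
    a2, a1, a0 = 3.0, 3.0, 1.0 + K
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

assert stable(0.0) and stable(7.9) and not stable(8.1) and not stable(-1.1)
```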

In the next section we introduce the concepts of the gain, phase, and delay margins.

6.3 Gain, phase, and delay margins

The traditional GM and PM concepts were probably first introduced around 1921 by the German physicist H. Barkhausen, as margins to oscillations in the context of electric circuit design (Barkhausen, 1935). About two decades later, probably independently, they were rediscovered by Bode (1945). It was at this time that they were adopted and used by the control community. In fact, Barkhausen even introduced his so-called stability criterion, which was actually a necessary (but not sufficient) condition for avoiding oscillation in linear feedback systems and was a precursor of all subsequently derived stability/instability/oscillation criteria. Unfortunately, however, the control community has not paid due tribute to his contributions. In any case, the GM, PM, and DM concepts are amongst the first tools developed by the control community for quantifying robustness against uncertainty. Whilst their usage is widespread and experts know how to interpret them, they have turned out to be somewhat problematic for students and newcomers to the field. This is partly because the traditional definitions of these concepts have several shortcomings, and partly because the educational literature does not teach them properly. In fact, some classical texts provide false guidelines for them. Additionally, the implementation of the underlying philosophy of these concepts in the ubiquitously used software MATLAB® is incomplete and in fact somewhat incorrect. So when one uses MATLAB®, one encounters answers which are not consistent with the lessons learned from some classical texts. This is exacerbated when we notice that both are wrong in certain particular cases, as we exemplify. These problems, which sometimes take the form of inconsistency between our expectations and the results computed by MATLAB®, are highlighted in the


following cases: systems, either minimum phase (MP) or nonminimum phase (NMP), whose stability domains are discontinuous in the parameter and have multiple crossover frequencies; and NMP systems in general. These are intricate issues which are not well addressed in standard texts. Indeed, some texts provide false guidelines for them, while others do not cover them at all. In this section we address these issues in full detail. To this end the conventional GM/PM/DM concepts are revisited. A redefinition of the GM is offered which better captures the underlying notion of a measure of robustness. The discussion is motivated and supported by the analysis of several instructive NMP examples. More precisely, we define two GMs for a system: one positive (in the increasing direction) and one negative (in the decreasing direction). For stable systems these are the destabilizing GMs, and for unstable ones these are the stabilizing GMs. A false theorem and a false interpretation of the GM concept for stable and unstable systems are also discussed and rectified. This is done in Section 6.3.1. As for the PM and DM, there are also some wrong results in the literature; in particular, MATLAB® provides wrong answers in certain cases. We discuss all these issues in Section 6.3.2. In Section 6.3.3 we investigate the stability/instability properties of NMP systems in terms of their GM/PM signs. This is a critical and important issue for which some sources provide false guidelines, while others do not cover it. More precisely, some standard texts and course notes claim that stable NMP systems (necessarily and sufficiently) have negative GM and PM. Some others say that Bode diagrams should be used with care in the case of NMP systems, suggesting that it is possible to decide the stability/instability of the system from its Bode diagram. In this book, through several instructive examples, it is shown that the mere signs of GM and PM cannot be indicative of the stability of NMP systems: any combination of the signs of GM and PM is possible, depending on the system. Thus, if deciding the stability properties of NMP systems from the Bode diagram is possible at all, a higher level of analysis is needed and some other factors have to be considered. What those factors are, however, is not known to us at the moment. The presentation and arguments include the cases of multiple gain crossover frequencies and multiple phase crossover frequencies as well. It has been claimed in some texts that if there are two or more crossover frequencies, the gain/phase margin is measured at the largest one. We rebut this claim with some instructive counterexamples and explain what the correct answer is.

6.3.1 The GM concept

Recall that the traditional definition of the GM is the amount by which the gain can be increased or decreased before the system changes its stability condition. As such, it is one-directional and only one GM is defined for every system. This definition is not good: it does not completely address the issue of changing the stability condition by gain variations (which is the conceptual meaning of the GM), as elaborated in the following. To motivate and highlight the discussion we present some instructive examples, especially NMP ones. It is helpful to start with a root locus analysis; we then present the same arguments in the Nyquist plot context in Section 6.3.1.1.

Example 6.17: Consider the NMP system L(s) = K × 0.002 × (s + 0.02)(s + 0.05)(s + 5)(s + 10) / [(s − 0.0005)(s + 0.001)(s + 0.01)(s + 0.2)(s + 1)(s + 100)²].

We have previously studied this system in Problem 5.12 and Example 6.14. For the sake of convenience we reproduce the root locus of the system in Fig. 6.21. It is observed that the stability range is discontinuous: it is the union of several regions, K ∈ (K_A, K_B) ∪ (K_C, K_D) ∪ (K_E, K_F), where K_A through K_F are the corresponding values of K at the j-axis crossings A through F. Note that only the upper part of the root locus is shown, and that the picture is not drawn to scale. It should be noted that if the pole at 0.0005 is replaced by −0.0005 (an MP system), the shape of the root locus remains unchanged, but of course the crossings will change. An important lesson should be learned from this system. Suppose we are in any of the stable regions. Then by either increasing or decreasing the gain beyond the borders, the system becomes unstable. That is, any point is actually associated with two GMs: one in the increasing direction of gain (GM⁺ > 0) and one in the decreasing direction of gain (GM⁻ < 0). Similarly, suppose we are in either the region (K_B, K_C) or (K_D, K_E). Then in both the increasing and the decreasing direction of gain the system becomes stable. That is to say, again any point is associated with two GMs: one in the increasing direction (GM⁺ > 0) and one in the decreasing direction (GM⁻ < 0). In (−∞, K_A) and (K_F, +∞) the system becomes stable by increasing and decreasing the gain, respectively. This argument is valid for systems with NMP zeros as well, as discussed in the next example.

Figure 6.21 The root locus of Example 6.17. The j-axis crossings are labeled A through F in increasing order of frequency.


Example 6.18: As an example, consider the above system with two NMP zeros added to it at 1 ± 20j. The general shape of the root locus remains unchanged; that is to say, the stability range is again the union of three disjoint regions, as before. The above discussion of the GM concept is thus valid.

Consequently, we can formally present the following definition.

Definition of GM: The GM is the amount by which the gain can be increased or decreased before the system becomes unstable if stable, or stable if unstable. Associated with a system, either MP or NMP, are one positive GM (GM⁺) and one negative GM (GM⁻). For stable systems these are the destabilizing GMs, whereas for unstable systems these are the stabilizing GMs. More precisely, let the number K be the current value of the gain. If it should be increased to K_h > K or decreased to K_l < K so that the stability condition of the system changes,⁹ this means that it must be multiplied by K_h/K > 1 or K_l/K < 1, respectively. The quantities K_h/K > 1 and K_l/K < 1 are the GMs of the system in plain numbers, i.e., ratios. In the control literature, it is customary and convenient (for reasons which will become clear in the next chapter) to transform these quantities to the decibel scale. Thus the positive and negative GMs are GM⁺ = 20 log(K_h/K) > 0 and GM⁻ = 20 log(K_l/K) < 0, respectively. Phrased alternatively, this definition provides a better measure of robustness for the system: in practice, perturbations in gain appear in both the increasing and the decreasing direction, not just in one. It should also be stressed that if either of these quantities (GM⁺ and GM⁻) is infinite, the corresponding objective (changing the stability condition) is not achievable. For instance, stabilizing the system of Example 6.17 when in the unstable region K ∈ (−∞, K_A) by decreasing the gain is impossible; that is, GM⁻ = −∞. Similarly, stabilizing the unstable system in the region (K_F, +∞) by increasing the gain is impossible; thus GM⁺ = ∞. The question that arises at this point is: for a given system (that is, for a given K), which of the aforementioned two GMs is more important than the other and is in some sense the "main" one? The answer is the one smaller in absolute value, because the system is more sensitive to perturbation in gain in that direction.¹⁰ More precisely, if GM⁺ < |GM⁻| (equivalently, K_h/K < K/K_l) then GM⁺ is the

Precisely speaking, K h and K l are the border values where the system is on the verge of changing its stability condition. So the gain must be multiplied by the ratios K h =K 1 ε and K l =K 2 ε, so that the stability condition changes. However, because this is clear, for the sake of brevity we may simply use the border values and utter a statement like the one in the text. 10 This statement means that, supposing we are given a value for the gain, we want to increase it to a larger value (where the stability condition of the system changes) or decrease it to a smaller value (where the stability condition of the system changes). Which one do we arrive at sooner: the larger value or the smaller value? The one that we arrive at sooner is the one which is nearer to the present value of the gain, and it is said that the system is more sensitive to perturbation in gain in that direction.


main GM, and if |GM⁻| < GM⁺ (equivalently, K/K_l < K_h/K) then GM⁻ is the main GM. Paraphrased in simple words: if the gain is nearer to the larger border then the main GM is GM⁺, and if the gain is nearer to the smaller border then the main GM is GM⁻. For instance, consider K_l = 1 and K_h = 14. Then for K = 10, GM⁺ is the main GM, and for K = 2, GM⁻ is the main GM. A numerical example is provided next.

Example 6.19: We first note that in the system of Example 6.17, K_A = 0.1, K_B = 3.31861923, K_C = 1.44475946 × 10², K_D = 2.29896720 × 10⁴, K_E = 1.59390084 × 10⁷, K_F = 7.24790829 × 10⁸. Now, suppose that K = 3 × 10⁴. The system is unstable, being in the region (K_D, K_E). If the gain is multiplied by either K_D/K < 1 or K_E/K > 1 it becomes stable.¹¹ These two quantities are the GMs in plain numbers. In decibel scale, the two stabilizing GMs associated with the system are GM⁻ = 20 log(K_D/K) < 0 and GM⁺ = 20 log(K_E/K) > 0. Because K/K_D < K_E/K (or |GM⁻| < GM⁺), GM⁻ is the main GM. Next suppose K = 10⁷, for which the system is unstable in the same region. The two stabilizing GMs are GM⁻ = 20 log(K_D/K) < 0 and GM⁺ = 20 log(K_E/K) > 0. Because K_E/K < K/K_D (or GM⁺ < |GM⁻|), GM⁺ is the main GM. The two cases considered are both unstable. Let us now discuss two stable cases. With K = 2 × 10², the system is stable in the region (K_C, K_D). The two destabilizing GMs are GM⁻ = 20 log(K_C/K) < 0 and GM⁺ = 20 log(K_D/K) > 0. Because K/K_C < K_D/K (or |GM⁻| < GM⁺), GM⁻ is the main GM. Next consider K = 10⁴, for which the system is stable in the same region. The two destabilizing GMs are GM⁻ = 20 log(K_C/K) < 0 and GM⁺ = 20 log(K_D/K) > 0. Because K_D/K < K/K_C (or GM⁺ < |GM⁻|), GM⁺ is the main GM.
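The bookkeeping above condenses into a tiny helper. The following Python sketch (function and variable names are ours, not the book's) computes both GMs, picks the main one, and also checks the geometric-mean turning point of Remark 6.9:

```python
import math

# Kl < K < Kh are the nearest border gains at which the stability condition changes.
def gm_pair(K, Kl, Kh):
    gm_minus = 20 * math.log10(Kl / K)   # < 0: decreasing direction
    gm_plus = 20 * math.log10(Kh / K)    # > 0: increasing direction
    main = "GM+" if gm_plus < abs(gm_minus) else "GM-"
    # Equivalent test: the main GM is GM+ exactly when K > sqrt(Kl*Kh).
    assert (main == "GM+") == (K > math.sqrt(Kl * Kh))
    return gm_minus, gm_plus, main

KD, KE = 2.29896720e4, 1.59390084e7      # borders used in Example 6.19
assert gm_pair(3e4, KD, KE)[2] == "GM-"  # K = 3e4: the main GM is the negative one
assert gm_pair(1e7, KD, KE)[2] == "GM+"  # K = 1e7: the main GM is the positive one
assert gm_pair(10, 1, 14)[2] == "GM+"    # the toy borders Kl = 1, Kh = 14
assert gm_pair(2, 1, 14)[2] == "GM-"
```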

In the following remark we discuss two other problems in the existing literature.

Remark 6.8: A source of confusion over the GM is the following. (1) In some standard texts it is stated that for unstable systems the GM indicates how much the gain should be decreased to make the system stable. Nevertheless, users encounter both positive and negative GMs for unstable systems when they use MATLAB®. (2) In some standard texts it is said that MP systems are stable if and only if both GM and PM are positive. Nonetheless, users sometimes come across stable MP systems with negative GM when they use MATLAB®. The explanation is that what MATLAB® does is correct. With regard to (1), what the sources say is correct only in certain

11. More precisely, the multiplier should satisfy K_C/K < x < K_D/K or K_E/K < x < K_F/K. But because this is clear, we may omit the explanation and simply use the "border value" or the "margin value", which is the gain margin in plain numbers, i.e., a ratio.


cases, namely when the main GM is GM⁻. When the main GM is GM⁺ (as with K = 10⁷ discussed above), it indicates how much the gain must be increased to make the system stable. As for (2), first note that an example of it is the system of Example 6.17 with the pole at 0.0005 replaced by −0.0005. Recall that, as noted before, this MP system has an (almost) identical root locus shape to that of Fig. 6.21. Now if we are, e.g., in the stable region K ∈ (K_C, K_D) with K/K_C < K_D/K, then GM⁻ < 0 is the main GM, which is correctly output by MATLAB®.

Remark 6.9: The turning point for the conditions K_h/K < K/K_l and K/K_l < K_h/K is √(K_l K_h). That is, if K > √(K_l K_h) the former condition is satisfied, and if K < √(K_l K_h) the latter condition is met.

Next we present a definition.

Phase crossover frequencies: The frequencies of the j-axis crossings in the root locus context, which are the frequencies of the real-axis crossings in the Nyquist context, are called the phase crossover frequencies. Note that the negative real axis has the phase ±180(2h + 1) deg and the positive real axis has the phase ±360h deg.

Now we discuss Example 6.17 in the Nyquist plot context.

Example 6.20: Reconsider Example 6.17 in the context of the Nyquist plot. The Nyquist plot of the system is depicted in Fig. 6.22. Note that the figure is not drawn to scale. In this figure, A′ through G′ denote the real parts of the plot at the designated points, i.e., they are the values of the real-axis crossings. First note that with K = 1 the absolute values of A′ through F′ are the inverses of the values K_A through K_F computed in Example 6.17, i.e., K_A,Root Locus = −1/A′_Nyquist, and likewise at the other points. Now we provide some numerical examples, the same as those used in Example 6.19. With K = 3 × 10⁴: A′ = −300000, B′ = −9.0399042019 × 10³, C′ = −2.0764702183 × 10², D′ = −1.3049337947, E′ = −1.8821747982 × 10⁻³, F′ = −4.1391252178 × 10⁻⁵, G′ = 0.¹²

Figure 6.22 Nyquist plot of Example 6.20. The real-axis crossings are labeled A′ through G′.

12. Note that in comparison to Example 6.14, where the real-axis crossings were given for K = 1, the real-axis crossings are now multiplied by the same K, here K = 3 × 10⁴. This is consistent with the relation Gain Value_Root Locus = −1/Real Crossing_Nyquist of Section 6.2.8 and Remark 5.12.


The GM in plain values is defined as 1/|X|, where X is any of A′ through F′. In dB scale, the GM is defined as 20 log(1/|X|). We notice that the system is unstable with the given K. (It has one NMP pole and is stable if the point −1 is encircled once CCW; this has not occurred between the points D′ and E′.) Now, the system becomes stable if its gain is multiplied¹³ by either 1/|D′| or 1/|E′|. In the former case the point D′ moves to the right of the point −1, and in the latter case the point E′ moves to the left of the point −1. The former decreases the gain (GM⁻ = 20 log(1/|D′|)), and the latter increases the gain (GM⁺ = 20 log(1/|E′|)). Because |D′| < |1/E′|, there holds |GM⁻| < GM⁺, and thus GM⁻ is the main GM. Next we consider K = 10⁷. With this value of the gain: A′ = −10⁸, B′ = −3.0133014006 × 10⁶, C′ = −6.9215673944 × 10⁴, D′ = −4.3497793156 × 10², E′ = −0.6273915994, F′ = −1.3797084059 × 10⁻², G′ = 0. Again the system is unstable, as the point −1 is in the same region, which is not encircled once CCW. We have GM⁻ = 20 log(1/|D′|) and GM⁺ = 20 log(1/|E′|). Because |D′| > |1/E′|, there holds |GM⁻| > GM⁺, and therefore GM⁺ is the main GM. With K = 2 × 10²: A′ = −2000, B′ = −60.2660280129, C′ = −1.3843134788, D′ = −0.8699558631 × 10⁻², E′ = −1.2547831988 × 10⁻⁵, F′ = −0.2759416811 × 10⁻⁶, G′ = 0. We observe that the system is stable with the given K. (It has one NMP pole and is stable if the point −1 is encircled once CCW; this has occurred between the points C′ and D′.) Now, the system becomes unstable if its gain is multiplied by either 1/|C′| or 1/|D′|. In the former case the point C′ moves to the right of the point −1, and in the latter case the point D′ moves to the left of the point −1. The former decreases the gain (GM⁻ = 20 log(1/|C′|)), and the latter increases the gain (GM⁺ = 20 log(1/|D′|)). Because |C′| < |1/D′|, there holds |GM⁻| < GM⁺, and thus GM⁻ is the main GM. Next we consider K = 10⁴. With this K one obtains A′ = −100000, B′ = −301.3301400645, C′ = −69.2156739443, D′ = −0.4349779316, E′ = −6.2739159941 × 10⁻⁴, F′ = −0.1379708405 × 10⁻⁴, G′ = 0. The system is again stable, as the point −1 is encircled once CCW in the same region. Again GM⁻ = 20 log(1/|C′|) and GM⁺ = 20 log(1/|D′|). Because |C′| > |1/D′|, there holds |GM⁻| > GM⁺, and hence GM⁺ is the main GM.
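These four cases can be reproduced by scaling the K = 1 crossings of Example 6.14 and comparing the two candidate margins; a small Python sketch (our own bookkeeping, not the book's code):

```python
import math

# K = 1 real-axis crossings A' ... F' as listed in Example 6.14.
base = [-10, -0.301331886940276, -0.006921569618185, -0.000043497172684,
        -0.000000062739193, -0.000000001379710]

def main_gm(K, lo, hi):
    # lo, hi index the two border crossings straddling the point -1 for this K.
    c = [K * x for x in base]
    gm_minus = 20 * math.log10(1 / abs(c[lo]))   # decreasing direction, < 0
    gm_plus = 20 * math.log10(1 / abs(c[hi]))    # increasing direction, > 0
    return "GM+" if gm_plus < abs(gm_minus) else "GM-"

# Unstable cases: borders are D' and E' (indices 3 and 4).
assert main_gm(3e4, 3, 4) == "GM-" and main_gm(1e7, 3, 4) == "GM+"
# Stable cases: borders are C' and D' (indices 2 and 3).
assert main_gm(2e2, 2, 3) == "GM-" and main_gm(1e4, 2, 3) == "GM+"
```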

Remark 6.10: In the given example the gain values at the j-axis crossings "strictly" increase with increasing frequency. Similarly, the real-axis crossings "strictly" increase (decrease in absolute value) with increasing frequency. This is not always

13. More precisely, the multiplier should satisfy 1/|C′| < x < 1/|D′| or 1/|E′| < x < 1/|F′|. As in the root locus context, because this is clear, we may omit the explanation and simply use the border values, which are the gain margins in plain numbers, i.e., ratios.


the case. That is, the change (which may be either in the increasing or decreasing direction) is not always strict. This is shown in Example 6.21. Remark 6.11: In the given example all j-axis crossings (similarly, real-axis crossings in the Nyquist plot context) result in a change in the stability condition. This is not always the case as Example 6.21 shows (see also worked-out Problem 6.12). That is, the real-axis is not always divided to regions which are every other one stable and unstable. Recall that whether the system changes its stability condition or not can be understood by looking at the whole Nyquist plot. That region is stable in which the point 21 (or 21=K) is encircled N times CCW where N is the number of NMP poles of the open-loop system. Remark 6.12: Here we have one of problems with MATLABs. If we use the command “allmargin” it outputs all the gain values (under the name “GainMargin”) at the j-axis crossings and their corresponding frequencies which are the phase crossover frequencies (under the name “GainMargin frequency”). This is certainly misleading. As for the former, note that they are the GMs in plain numbers (with the explanation that they may not be acceptable, because they may not result in a change in the stability condition) and in the Nyquist context the computed values are the negative of the inverse of the real-axis crossings. With respect to the latter, notice that not all these frequencies are acceptable, as we explained in the previous remark, and this is the reason that not all of the former gain values (GMs in plain numbers) may result in a change in the stability condition. Remark 6.13: Another problem with MATLABs is computation of the correct GM (via the command “margin”) in the case that not all the crossings are acceptable answers. It may compute the wrong answer depending on the present value of the gain. 
What MATLAB® does, according to its documentation, is to compute the GM at the crossover frequency which results in the smallest GM. This will be correct if that crossing is an acceptable one, i.e., one of those at which the stability condition of the system changes. Otherwise it will be wrong. In Example 6.23 we provide the numerics of this faulty operation of MATLAB®. The next motivating example addresses the above remarks.

Example 6.21: In this example we show the two points discussed in the above Remarks. One is that not all j-axis crossings result in a change in the stability condition. The other is that gain values at the j-axis crossings do not necessarily strictly increase (or decrease) with increasing frequency. We consider the system

L(s) = K((s+0.1)² + 1)((s+0.1)² + 9)(s − 0.1) / [((s+0.1)² + 4)((s+0.1)² + 25)(s + 1)].

The root locus of this system is given in Fig. 6.23. In Appendix C we consider this system.


Introduction to Linear Control Systems


Figure 6.23 Root locus of Example 6.21.

The frequencies of the j-axis crossings are given by: [0, 1.1160, 1.8052, 3.3430, 4.4528]. The corresponding gain values are: [110.2077, 34.9821, 1.5769, 4.5036, 0.4175]. Note that if we do not have a look at the root locus, the interpretation of the result is impossible and in fact misleading. The root locus shows that the interpretation is as follows. The system is stable if K ∈ (0, 110.2) ∩ {(0, 1.5) ∪ (35, ∞)} ∩ {(0, 0.41) ∪ (4.6, ∞)}, which is K ∈ (0, 0.41) ∪ (35, 110.2). Considering the root locus with negative gain as well, we conclude that the system is stable also for K ∈ (−1, 0). Thus the system is stable if K ∈ (−1, 0) ∪ (0, 0.41) ∪ (35, 110.2). Finally, in the above figure we name the j-axis crossings A through E in increasing order of frequency. Thus, e.g., K_A = 110.2077. As we can observe, the crossings at C and D do not change the stability condition of the system.

Next we consider the same system in the Nyquist context.

Example 6.22: Reconsider Example 6.21 in the context of the Nyquist plot. The Nyquist plot is shown in Fig. 6.24. Note that it is not drawn to scale, but serves our purpose. With K = 1 the absolute values of A′ through E′ will


be the inverse values of K_A through K_E computed in Example 6.21. The point F′ corresponds to negative gain which results in a change in the stability condition, i.e., K = −1.

Figure 6.24 Nyquist diagram of Example 6.22.

The real-axis crossings are A′ = −0.009073776973, B′ = −0.02858607108, C′ = −0.6341608111, D′ = −0.2220424878, E′ = −2.3951055115, F′ = 1. The system is stable in the regions which are not encircled by the plot. Thus −1/K ∈ (−∞, E′) ∪ (B′, A′) ∪ (F′, ∞), or equivalently K ∈ (−1, 0) ∪ (0, 0.41) ∪ (35, 110.2). It is observed that the real axis is not divided into alternately stable and unstable regions. The plot is drawn for K = 1. The system is unstable in the region (E′, C′). Crossing the points C′, D′ does not change the stability condition of the system. The GMs of the system are GM− = 20 log(1/|E′|) < 0 and GM+ = 20 log(1/|B′|) > 0. Because 1/|B′| > |E′|, GM− is the main GM of the system.
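These crossing values can be cross-checked by evaluating L(jω) directly at the phase crossover frequencies reported in Example 6.21. A minimal Python sketch (the transfer function is transcribed from that example; at a true crossover the imaginary part essentially vanishes):

```python
def L(s, K=1.0):
    # Open-loop transfer function of Examples 6.21/6.22 (transcribed from the text)
    num = K * ((s + 0.1)**2 + 1) * ((s + 0.1)**2 + 9) * (s - 0.1)
    den = ((s + 0.1)**2 + 4) * ((s + 0.1)**2 + 25) * (s + 1)
    return num / den

# Phase crossover frequencies from Example 6.21 and the expected crossings A'..E'
data = [(0.0, -0.0090738), (1.1160, -0.0285861), (1.8052, -0.6341608),
        (3.3430, -0.2220425), (4.4528, -2.3951055)]
for w, expected in data:
    v = L(1j * w)
    # The real part reproduces the crossing; the imaginary part is ~0 at a crossover
    print(round(v.real, 4), round(v.imag, 4))
```

Since the tabulated frequencies are rounded to four decimals, the recovered real parts agree with A′ through E′ only to a few decimal places, which is sufficient for the stability analysis here.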

Now we provide some numerics about Remark 6.13.

Example 6.23: This example shows that the operation of MATLAB® may be wrong (depending on the gain value) if the system has multiple crossover frequencies which do not all result in a change in the stability condition. To this end we consider Example 6.22.


As we stated above, with K = 1 we have GM− = 20 log(1/|E′|) = −7.58 < 0 (computed at ω = 4.45). However, because 20 log(1/|C′|) = 3.95 < |−7.58|, MATLAB® mistakenly outputs 3.95 (computed at ω = 1.81) as the GM of the system. (Note that in this case 20 log(1/|C′|) is the smallest in absolute value among all the crossings.) Similarly, with K = 3 one has the following. The values of A′ through F′ are all multiplied by 3. They will be A′ = −0.02722133092, B′ = −0.08575821325, C′ = −1.9024824333, D′ = −0.6661274634, E′ = −7.1853165346, F′ = 3. Because 1/|B′| > |E′|, again GM− = 20 log(1/|E′|) = −17.13 < 0 is the main GM. However, since 20 log(1/|D′|) = 3.53 < |−17.13|, MATLAB® mistakenly outputs 3.53 (computed at ω = 3.34) as the GM of the system. (Note that in this case 20 log(1/|D′|) is the smallest in absolute value among all the crossings.) However, with K = 30 the MATLAB® computations are correct. Because 1/|B′| < |E′|, the GM of the system will be GM+ = 20 log(1/|B′|) = 1.33 > 0, computed at ω = 1.12. (Note that in this case 20 log(1/|B′|) is the smallest in absolute value among all the crossings, which happens to be the correct answer.)
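The decibel values quoted in this example follow from the crossings by simple arithmetic. A quick Python check, with the crossing magnitudes taken from Examples 6.22 and 6.23:

```python
import math

def db(x):
    # gain ratio expressed in decibels
    return 20 * math.log10(x)

E, C, B = 2.3951055, 0.6341608, 0.0285861    # |E'|, |C'|, |B'| with K = 1
print(round(db(1 / E), 2))         # GM- ~ -7.59 dB (the text rounds to -7.58)
print(round(db(1 / C), 2))         # ~3.96 dB, the value MATLAB outputs with K = 1
print(round(db(1 / (30 * B)), 2))  # GM+ ~ 1.33 dB with K = 30
```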

Remark 6.14: It is possible to obtain the GMs of the system with K = K₂ from the specifications of the root locus or Nyquist plot with K = K₁. To this end we note that when the system gain is multiplied by x (that is, K → xK), the values of gain at the j-axis crossings are multiplied by 1/x. That is, the GM (in plain numbers) is multiplied by 1/x. Similarly, the real-axis crossings are multiplied by x and thus the GM (in plain numbers) is multiplied by 1/x. (You can easily observe this for the numbers provided in previous examples.) Based on the above arguments and motivating examples we can present:

6.3.1.1 Definition of GM in the Nyquist plot context
It is the amount by which gain can be increased or decreased before the Nyquist plot of the system [crosses (passes over or under) the point −1 in the polar plane and] changes its stability condition. Recalling that not all the real-axis crossings result in a change in the stability condition, and that the real-axis crossings do not necessarily strictly increase or decrease


with increasing frequency,[14] the GM concept can be graphically illustrated as in Fig. 6.25:

Figure 6.25 Illustration of real-axis crossings for GM definition.

First we study the negative real-axis crossings. They correspond to positive gains. Suppose that A is the smallest (in absolute value) of those Aᵢ that result in a change in the stability condition. Thus if the gain is multiplied by 1/|A| the stability condition of the system changes. The quantity 1/|A| < 1 is the GM in plain numbers; 20 log(1/|A|) < 0 is the GM in decibels and is GM− of the system. Similarly, suppose B is the largest (in absolute value) of those Bᵢ that result in a change in the stability condition of the system. Hence if we multiply the gain by 1/|B| the stability condition of the system changes. The quantity 1/|B| > 1 is the GM in plain numbers; 20 log(1/|B|) > 0 is the GM in decibels and is GM+ of the system. If |A| < 1/|B| then |GM−| < GM+ and GM− is the main GM. If |A| > 1/|B| then GM+ < |GM−| and GM+ is the main GM. As for the positive real-axis crossings, denoted by C, the analysis is as before except that the gain values (which should not be mistaken for GMs) are negative. We conclude this subsection with a simple example which is a cliché in the existing literature.

Example 6.24: Some systems are "simple": they have only one phase crossover frequency (apart from the start and end points). The following example investigates the GM concept for them. Consider the system L = K/(s³ + 5s² + 20s + 40). The Nyquist diagrams with K = 50 and K = 80 are shown in Fig. 6.26 in the left and right panels, respectively. Investigate the GMs. With K = 50 the real-axis crossings are at −0.8333, 0, 1.25. MATLAB® computes the GM only for positive gain and thus computes 1/|−0.8333| = 1.2. This is the GM in plain numbers. In decibels it is 20 log 1.2 = 1.58 dB. The points 0 and 1.25 result in −1/0 = −∞ and −1/1.25 = −0.8. If the multiplier is in this range again the system will be unstable. If the multiplier is between 0 and −0.8 the system will be stable.

[14] More precisely, it is for this reason that we have not connected the crossings to each other in a specific order. Also note that although we have used the letters A, B, and C in alphabetical order, this does not mean that they occur in the order of increasing frequency. We have simply chosen a label for them!


Figure 6.26 Nyquist diagram of Example 6.24.

With K = 80 the real-axis crossings are at −1.3333, 0, 2. MATLAB® computes the GM only for positive gain and thus computes 1/|−1.3333| = 0.75. This is the GM in plain numbers. In decibels it is 20 log 0.75 = −2.5 dB. The points 0 and 2 result in −1/0 = −∞ and −1/2 = −0.5. If the multiplier is in this range again the system will be unstable. If the multiplier is between 0 and −0.5 the system becomes stable.
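These GM numbers can be confirmed analytically. The closed-loop characteristic equation of 1 + K/(s³ + 5s² + 20s + 40) = 0 is s³ + 5s² + 20s + (40 + K) = 0, and for a cubic s³ + as² + bs + c the Routh–Hurwitz conditions are a, c > 0 and ab > c. A small Python sketch (the helper names are ours, not from the text):

```python
def stable_cubic(a, b, c):
    # Routh-Hurwitz conditions for s^3 + a s^2 + b s + c: a, c > 0 and a*b > c
    return a > 0 and c > 0 and a * b > c

def closed_loop_stable(K):
    # 1 + K/(s^3 + 5 s^2 + 20 s + 40) = 0  =>  s^3 + 5 s^2 + 20 s + (40 + K) = 0
    return stable_cubic(5.0, 20.0, 40.0 + K)

# Stability range: 100 > 40 + K > 0, i.e., -40 < K < 60.  With K = 50 the gain
# can still grow by 60/50 = 1.2 (the GM of 1.2 above) and shrink to -40/50 = -0.8;
# with K = 80 it must shrink below 60/80 = 0.75 to regain stability.
print(closed_loop_stable(50), closed_loop_stable(80))   # True False
```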

The PM and DM concepts are introduced in the following section.

6.3.2 The PM and DM concepts
The PM of a system is defined as the amount its phase can be increased or decreased, without changing the gain of the system, so that the stability condition of the system changes. Thus, if possible, the phase value should be read at a so-called gain crossover frequency, defined below.

Gain crossover frequency: The frequency at which the magnitude of the system transfer function equals one is called the gain crossover frequency of the system. This is the frequency at which the Nyquist plot of the system intersects the unit circle. A system may have multiple gain crossover frequencies. We will show this in several examples in the following.

It should be stressed that, as in the case of GM, the existing literature on this topic contains several mistakes and false guidelines. We will discuss them and refute the false claims. One is in the MATLAB® documentation. More precisely, in the case of multiple gain crossover frequencies it computes the PM at all of them and outputs the smallest one (in absolute value) as the answer. However, that one may not be acceptable (as in the case of GM, where sometimes the computed GMs were not acceptable).


We will demonstrate this faulty operation of MATLAB® through several examples. The other false claim, found in some textbooks, is that the PM should be computed at the largest crossover frequency. We will refute this claim by analyzing some instructive examples. It is good to keep in mind from the outset that this topic is not as straightforward as the previous one, GM, since here some visualization of the rotated/transformed Nyquist plot is needed. The topic is nevertheless interesting and extremely important. One reason for its importance to the control community is that PM translates to pure delay. That is, if the (acceptable) PM of a stable system is, say, PM₀, it means that the system can accommodate T < PM₀/ω seconds of delay (in its forward path) before becoming unstable. Similarly, if the system is unstable, then with a delay T > PM₀/ω it will become stable. The term ω is the aforementioned gain crossover frequency. In either case T is called the DM of the system. As will be shown in the examples, unfortunately MATLAB® may not compute it correctly. In fact MATLAB® does not provide the main DM but computes the DM (whether acceptable or not) corresponding to each PM (whether acceptable or not), and unfortunately even these individual computations are sometimes not all correct.

Remark 6.15: It should be clear that for an acceptable PM₀ > 0 one finds T > 0, and that they are both realizable. On the other hand, for an acceptable PM₀ < 0 one finds T < 0. While PM₀ < 0 is realizable, T < 0 is not, unless we interpret it as the product of the realization of PM₀ < 0.

Remark 6.16: Suppose that a system has two acceptable positive PMs: PM₁ > PM₂ > 0. If the corresponding frequencies are ω₁ < ω₂ then we necessarily find T₁ > T₂ and thus the main DM of the system is DM = T₂. However, if ω₁ > ω₂ then either T₁ > T₂ or T₁ < T₂ is possible and we have to compute both and then choose the smaller as the main DM of the system.
Indeed, for systems with multiple acceptable positive PMs, the smallest of the corresponding T values is the main DM of the system, and it is not known a priori which PM it corresponds to. Other mistakes in the literature concern the determination of the stability of a system from its GM and PM signs. As we shall show, it is not possible to determine the stability of a system by merely considering the sign of its computed GM and PM. We emphasize that unlike the GM, which was "easily" viewable in the root locus context, the PM is not, at least to the author's knowledge. Thus we commence directly with the Nyquist plot.

Example 6.25: Consider the system L(s) = K/(s³ + 5s² + 20s + 40). The Nyquist plots with K = 45 and K = 150 are shown in Fig. 6.27 in the left and right panels, respectively. Note that the plots are not exactly drawn to scale.


Figure 6.27 Nyquist plots of Example 6.25: left K = 45, right K = 150.

We start with a general analysis, then switch to the solution of the specific system at hand. If the crossover frequency is in the third or fourth quadrant, then the PM is read as a positive value, PM > 0. If the crossover frequency is in the second or first quadrant, then the PM is read as a negative value, PM < 0. In the first case we increase the phase of the system by PM.[15] The resultant plot must be visualized. (Do it carefully, it is intricate. Without changing the magnitude of the points, change their phase by rotating them on arcs of circles whose centers are at the origin.) If the stability condition of the system changes, then PM > 0 is the PM of the system and we denote it by[16] PM+. It is said that the system has a positive PM. The second case means we increase the phase of the system by PM < 0 (equivalently, we decrease it by |PM|).[17] Again the resultant plot must be visualized. If the stability condition of the system changes, then PM < 0 is the PM of the system and we denote it by[18] PM−. It is said that the system has a negative PM. In this example it is observed that with K = 45 we have the first case, PM = 18.167 deg > 0 at ω = 4.068 rad/s, and the change in the stability condition happens for this system: it is stable and becomes unstable. Also one finds DM = 18.167 × (π/180)/4.068 = 0.078 seconds. As for K = 150 we have the second case, PM = −31.576 deg < 0 at ω = 5.793 rad/s, and this changes the stability condition of the system: it is unstable and becomes stable. Here, although PM = −31.576 deg < 0 is realizable, we find DM = −31.576 × (π/180)/5.793 = −0.0951 seconds < 0, which is not realizable; see Remark 6.15.
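The delay-margin arithmetic used in this example is simply DM = PM × (π/180)/ω, with PM in degrees and ω the gain crossover frequency in rad/s. A one-line Python helper makes the two computations explicit:

```python
import math

def delay_margin(pm_deg, w):
    # DM = PM (converted to radians) / gain crossover frequency
    return pm_deg * math.pi / 180.0 / w

print(round(delay_margin(18.167, 4.068), 3))    # K = 45:  ~0.078 s, realizable
print(round(delay_margin(-31.576, 5.793), 4))   # K = 150: ~-0.0951 s, not realizable
```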

[15] More precisely, the phase must be increased by PM + ε. But because this is clear we simply use the border value.
[16] If the stability condition does not change then we denote it by PM+ = +∞ or PM+ = Inf.
[17] More precisely, the phase must be increased by PM − ε. However, since this is obvious we simply use the border value.
[18] Likewise, if the stability condition does not change then we denote it by PM− = −∞ or PM− = −Inf.


Example 6.26: Let the system be given by L(s) = K((s − 0.1)² + 1)/[(s − 0.3)((s − 0.2)² + 4)]. For K = 1 the Nyquist plot of the system is given in Fig. 6.28, left panel. Note that it is not exactly drawn to scale. There are two gain crossover frequencies, the first one at the point Y (at which PM− = PM < 0) and the second one at the point Z (at which PM+ = PM > 0). However, by visualizing the rotated/transformed plot we note that this unstable system does not become stable by either of these PMs. Thus the PMs are not acceptable. We denote this by PM = ±∞ (or PM = Inf) in this case, since in neither direction is it possible to change the stability condition of the system by changing its phase. The numerical values output by the command "allmargin" of MATLAB® are as follows: PhaseMargin: [−53.6146, 59.5512], PMFrequency: [1.7575, 2.4012].

Clearly, this MATLAB® result is false and misleading. As we discussed above, neither of the PMs is acceptable. For the sake of completeness we add that the same command also outputs:[19]
GainMargin: [1.2000, 15.9146, 0.5340]
GMFrequency: [0, 0.9882, 2.0132]
DelayMargin: [3.0425, 0.4328]
DMFrequency: [1.7575, 2.4012]

In this plain form these answers are also misleading and partly incorrect. The explanation is that, regardless of being acceptable or not (which must be found out by inspection), the first row gives the GMs in plain numbers (also the negative of the inverse of the real-axis crossings). The system has three NMP poles and thus is stable if the point −1 or −1/K is encircled three times CCW. This happens in the range [−1/1.2, −1/15.91], or equivalently if K ∈ (1.2, 15.91). Thus GM = GM+ = 1.2 in plain numbers, or 20 log 1.2 = 1.58 dB. For the sake of completeness we also provide the root locus of the system in Fig. 6.28, right panel. The stability range is K ∈ (1.2, +∞) ∩ (0.53, 15.91) = (1.2, 15.91). On the other hand, the third row is doubly erroneous, i.e., it is erroneous for two reasons: (1) The given values are computed wrongly. (More precisely, 3.0425 is wrong and 0.4328 is almost correct.) They should be computed as DM = T = PM/ω. Thus the row DelayMargin should be DM = [−53.61 × (π/180)/1.75, 59.55 × (π/180)/2.40] = [−0.5347, 0.4331] seconds. (2) Since the PMs are not acceptable, as previously discussed, the DMs (even the correctly computed one) are not acceptable. That is, it is not possible to change the stability condition of the system by adding (negative/positive) delay to it.

[19] Recall that the command "allmargin" outputs only the strictly negative real-axis crossings. That is, the real-axis crossings at the origin (corresponding to infinite frequency) or with positive values (corresponding to negative gain) are not computed.



Figure 6.28 Example 6.26. Left: Nyquist plot; Right: Root locus.

Remark 6.17: Here we have one of the mistakes in the existing literature. Unfortunately the computed answer is not analyzed, and it is taken for granted that it is acceptable. As such, in the literature you can find statements such as, "With [positive] delay we can stabilize unstable pole-NMP systems." This example shows that this statement is not true in general. However, for certain systems, like that of worked-out Problem 6.16, it is.

Example 6.27: Consider the system of Example 6.26, with K = 4. The Nyquist plot of the system is given in Fig. 6.29. Note that it is not exactly drawn to scale.

Figure 6.29 Nyquist plot of Example 6.27.


This system has three gain crossover frequencies. The first and third (at points X and Z) result in positive PMs (PM+) and the second (Y) results in a negative PM (PM−). It is observed that each of these PMs changes the stability condition of the system (from stable to unstable). Therefore all of them are acceptable. The numerical values are: PhaseMargin: [56.8639, −70.1603, 82.8660], PMFrequency: [0.6237, 1.3268, 4.6562].

Note that from left to right (which is in the order of increasing frequency) the PMs refer to the points X, Y, and Z. The main PM is computed at the point X (which is the smallest in absolute value). That is, PM = PM+ = 56.86 deg. The philosophy behind this choice will be discussed in the paragraph following the solution to this example. For the sake of completeness let us add that the command "allmargin" also outputs:
GainMargin: [0.3000, 3.9786, 0.1335]
GMFrequency: [0, 0.9882, 2.0132]
DelayMargin: [1.5910, 3.8126, 0.3106]
DMFrequency: [0.6237, 1.3268, 4.6562]

In comparison with the previous example, the first row is divided by the value of K, which is 4. Because 1/0.3 < 3.97, GM = GM− = 0.3 in plain numbers, or 20 log 0.3 = −10.45 dB, is the GM of the system. As for the DMs, some are computed wrongly and some correctly. Note that the correct answer is the smallest of the realizable computed DMs. At the point X it is DM = 56.86 × (π/180)/0.62 = 1.6 seconds and at the point Z it is DM = 82.86 × (π/180)/4.65 = 0.31 seconds. The answer is thus DM = 0.31 seconds. In passing, also note that the computed DM at the point Y is erroneous. It should be DM = −70.16 × (π/180)/1.32 = −0.92 seconds, regardless of the fact that it is not realizable; see also Remark 6.15.
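The selection rule just used (compute the DM for every acceptable PM, then keep the smallest realizable one) can be written out directly in Python; the numbers are the allmargin values quoted above:

```python
import math

def dm(pm_deg, w):
    # delay margin for one (PM, gain crossover frequency) pair
    return pm_deg * math.pi / 180.0 / w

# (PM in deg, crossover frequency in rad/s) at the points X, Y, Z; in this
# example all three PMs are acceptable, so only realizability filters them.
margins = [(56.8639, 0.6237), (-70.1603, 1.3268), (82.8660, 4.6562)]
dms = [dm(p, w) for p, w in margins]
main_dm = min(t for t in dms if t > 0)   # smallest realizable (positive) DM
print([round(t, 2) for t in dms], round(main_dm, 2))
```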

Similar to the case of the GM, the question arises: in the case that one finds several PMs, which PM is the main one? To answer this question we argue as in the case of GM. If we change the phase of the system we first arrive at the one provided by the point X, since this PM is the smallest in absolute value among all the PMs. In other words, it is said that the system is more sensitive to phase variations in that direction[20] (which is the increasing direction). In brief, from among all

[20] Recall that in the context of GM we also used the term "direction." In both cases it refers to either the increasing or the decreasing direction. Beware that in the graduate course Multivariable Control you will study the notion of "direction" in multivariable systems. That concept is completely different from the one we have here.


the acceptable PMs, the one which is smallest (in absolute value) is the main PM. Depending on the system it may be either positive or negative.

Example 6.28: Let the system be given by L(s) = K((s − 0.1)² + 1)/[(s + 0.1)((s − 0.2)² + 4)]. For K = 1 the Nyquist plot of the system is given in Fig. 6.30. Note that it is not exactly drawn to scale.

Figure 6.30 Nyquist plot of Example 6.28.

It is observed that the system has three gain crossover frequencies. The smallest and largest of these frequencies correspond to the points X and Z and result in positive PMs. The second one refers to the point Y and results in a negative PM. It is observed that the system is destabilized by the PM of any of the points X, Y, or Z. Because Y results in the smallest PM (in absolute value), the main PM should be computed at this point. The numerical values computed by MATLAB® are:
GainMargin: [14.7046, 0.5456]
GMFrequency: [1.0291, 1.9723]
PhaseMargin: [113.1468, −41.0729, 69.2798]
PMFrequency: [0.2194, 1.7537, 2.4055]
DelayMargin: [8.9995, 3.1741, 0.5027]
DMFrequency: [0.2194, 1.7537, 2.4055]

The correct interpretation is thus: GM = GM− = 0.55 in plain numbers, PM = PM− = −41.08 deg, and the correct DM is DM = −41.07 × (π/180)/1.75 = −0.409 seconds. On the other hand, with the positive delay corresponding to the point Z the system will become unstable: DM = 69.27 × (π/180)/2.40 = 0.5037 seconds.

For the sake of brevity, in the following example we do not reproduce the output of the command "allmargin" and confine ourselves to the exact numerical values we need.


Example 6.29: Reconsider Example 6.28 with K = 0.2. Fig. 6.31 shows the Nyquist plot of the system. Note that it is not exactly drawn to scale.

Figure 6.31 Nyquist plots of Example 6.29.

The system is unstable. It is observed that the system does not have any gain crossover frequency and thus cannot be stabilized by merely changing its phase. We denote this by PM = Inf and DM = Inf. As for its GM, it is GM+ = 2.7279. (The relevant real-axis crossing is −0.3666 = −1/2.7279.)

Example 6.30: Reconsider Example 6.28 with K = 18. Fig. 6.32 shows the Nyquist plot of the system. Note that it is not exactly drawn to scale.

Figure 6.32 Nyquist plot of Example 6.30.

It is observed that the system is unstable with a positive PM. However, it is not acceptable as it does not change the stability condition of the system from unstable to stable. Note that, as we mentioned in Remark 6.17, in the literature this example is solved wrongly. The computed solution is accepted and it is said that with (positive) delay we can stabilize "this" unstable pole-NMP system.


Example 6.31: Consider the system of Examples 6.17 and 6.20 with K = 10 and K = 30. The Nyquist plot of the system is depicted in Fig. 6.33.

Figure 6.33 Nyquist plots of Example 6.31.

In both cases the system is unstable and will become stable by PM− measured at the point Z. With K = 10 the value is PM− = −5.5313 deg at ω = 0.0066 rad/s. Hence DM = −14.6272 seconds. The relevant real-axis crossings are −1/0.3319 and −1/14.4476. The GMs in plain numbers are GM− = 0.3319 and GM+ = 14.4476. Because 1/0.3319 < 14.4476, GM− is the main GM. With K = 30 the value is PM− = −7.5376 deg at ω = 0.0066 rad/s. Hence DM = −19.9327 seconds. The relevant real-axis crossings are −1/0.1106 and −1/4.8159. (Note that these are three times those of the previous case, as K has become three times as large.) The GMs in plain numbers are one third of the previous ones, i.e., GM− = 0.1106 and GM+ = 4.8159. Because 1/0.1106 > 4.8159, GM+ is the main GM.
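This scaling is exactly Remark 6.14 in action: multiplying the gain by x multiplies the real-axis crossings by x and the plain-number GMs by 1/x. In Python, with the numbers of this example:

```python
# Remark 6.14 applied to Example 6.31: K goes from 10 to 30 (x = 3),
# so the plain-number GMs are divided by 3.
gm_minus_10, gm_plus_10 = 0.3319, 14.4476   # GMs with K = 10
x = 3.0
gm_minus_30 = gm_minus_10 / x
gm_plus_30 = gm_plus_10 / x
print(round(gm_minus_30, 4), round(gm_plus_30, 4))   # 0.1106 4.8159
```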

Example 6.32: Reconsider the system of Example 6.17 with K = 1 and K = 0.2. The Nyquist plot of the system is provided in Fig. 6.34.

Figure 6.34 Nyquist plots of Example 6.32.

In both cases the system is stable and will become unstable by PM+ measured at the point Z. With K = 1 the value is PM+ = 8.0655 deg at ω = 0.0021 rad/s.


Hence DM = 67.7670 seconds. The relevant real-axis crossings are −1/0.1000 and −1/3.3186. The GMs in plain numbers are GM− = 0.1 and GM+ = 3.3186. Because 1/0.1 > 3.3186, GM+ is the main GM. With K = 0.2 the value is PM+ = 18.0814 deg at ω = 0.0006644 rad/s. Hence DM = 474.98 seconds. The relevant real-axis crossings are −1/0.5000 and −1/16.5931. (Note that these are 0.2 times those of the previous case, as K has become 0.2 times as large.) The GMs in plain numbers are 5 times the previous ones, i.e., GM− = 0.5 and GM+ = 16.5931. Because 1/0.5 < 16.5931, GM− is the main GM.

Example 6.33: Consider the system of Examples 6.17 and 6.20 with the NMP pole replaced by its MP counterpart. That is, the system is

L(s) = K × 0.002 × (s + 0.02)(s + 0.05)(s + 5)(s + 10) / [(s + 0.0005)(s + 0.001)(s + 0.01)(s + 0.2)(s + 1)(s + 100)²].

With K = 50 and K = 25 the Nyquist plot of the system is illustrated in Fig. 6.35.

Figure 6.35 Nyquist plots of Example 6.33.

In both cases the system is unstable and will become stable by PM− measured at the point Z. With K = 50 its numerical value is PM− = −2.3845 deg at ω = 0.0136 rad/s. Therefore DM = −3.0601 seconds. On the other hand, in plain numbers GM− = 0.3216 and GM+ = 1.9732. The main GM is GM+. With K = 25 its numerical value is PM− = −1.8276 deg at ω = 0.0100 rad/s. Therefore DM = −3.1898 seconds. On the other hand, in plain numbers GM− = 0.6433 and GM+ = 3.9464. The main GM is GM−.


Example 6.34: Consider the system of Example 6.33. The Nyquist plot of the system is given in Fig. 6.36 for K = 500 and K = 2000.

Figure 6.36 Nyquist plots of Example 6.34.

The system is stable in both cases and will become unstable by PM+ measured at the point Z. With K = 500, PM+ = 16.1009 deg at ω = 0.0414 rad/s. Thus DM = 6.7855 seconds. Also, in plain numbers GM− = 0.1973 and GM+ = 46.4058. The main GM is GM−. With K = 2000, PM+ = 28.0255 deg at ω = 0.1007 rad/s. Thus DM = 4.858 seconds. Also, in plain numbers GM− = 0.0493 and GM+ = 11.6014. The main GM is GM+.

Now we discuss some problems of MATLAB® in the following remark.

Remark 6.18: We have shown that the main PM is not necessarily computed at the largest crossover frequency (thus refuting a false claim in the literature), and that some PMs are acceptable and some are not. Considering the second part of the last statement, it should be noted that unfortunately MATLAB® 2015a does not take heed of this fact. It always outputs the smallest (in absolute value) PM as the answer for the PM of the system. Below we provide another example in which this method fails. In fact it seems that there is no way but direct one-by-one verification of the PMs computed at all the crossover frequencies to decide which one is acceptable. With reliable software this can be verified numerically and algorithmically.
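Such an algorithmic check is not hard to sketch. One reliable test is to count encirclements of −1 directly: sample L(jω) on a dense frequency grid and accumulate the phase of L(jω) + 1. The Python sketch below (our own helper, not a MATLAB® facility) uses the cliché system L = K/(s³ + 5s² + 20s + 40) of Examples 6.24 and 6.25, which has no open-loop RHP poles, so the closed loop is stable exactly when the winding number is zero:

```python
import cmath

def L(s, K):
    return K / (s**3 + 5 * s**2 + 20 * s + 40)

def ccw_encirclements(K, wmax=50.0, n=100000):
    # CCW winding number of L(jw) about -1 for w in (-wmax, wmax);
    # L -> 0 at infinity, so no closing arc at infinity is needed here.
    total = 0.0
    prev = L(complex(0.0, -wmax), K) + 1
    for k in range(1, n + 1):
        w = -wmax + 2.0 * wmax * k / n
        cur = L(complex(0.0, w), K) + 1
        total += cmath.phase(cur / prev)   # incremental phase change (< pi per step)
        prev = cur
    return round(total / (2 * cmath.pi))

# P = 0 RHP open-loop poles: stable iff 0 CCW encirclements of -1.
print(ccw_encirclements(50))   # 0: closed loop stable
print(ccw_encirclements(80))   # -2: two CW encirclements, two RHP closed-loop poles
```

The same counter, run before and after a candidate gain or phase change, decides whether that margin actually changes the stability condition.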

Example 6.35: Reconsider Example 6.21 with K = 2.2. This time we need to draw a plot which is more accurate, although it is not drawn to scale either; see Fig. 6.37, left panel. Half of the unit circle is shown as the dotted curve. It is seen that the system has three gain crossover frequencies and thus three PMs. The numerical values are PM: [X: 6.6867, Y: −137.4823, Z: 4.6905] deg with the corresponding frequencies in increasing order: PM frequency: [1.7349, 2.3666, 3.6284] rad/s.


Figure 6.37 Nyquist plot of Example 6.35.

MATLAB® outputs the smallest one (in absolute value) as the answer. This corresponds to the point Z. However, it is certainly not the answer; the system remains unstable. Indeed all the PMs are unacceptable. The computed DMs are thus unacceptable as well.

Discussion: In the actual Nyquist plot (Fig. 6.37, left panel) the heights of the outer and second outer curves are about 40 and 3. That of the inner curves is below 0.2. The details of the plot produced by MATLAB® are thus invisible except under zoom and magnification. Hence we have to draw a visible plot which is not to scale (like the provided plot). In doing so we must exercise enough care, since it is possible to draw a plot which gives the same GM and PM values but a wrong conclusion about the acceptance of the PMs. For instance, the right panel of the same figure shows such a plot. From the right panel we would conclude that the PM at the point X is acceptable (!), which is wrong.

Next we pose a question.

Question 6.1: Does there exist a stable MP system with a negative PM? A possibility for the Nyquist plot of such a system is given in Fig. 6.38, in which all the arrows may also be in the opposite direction. The answer to the above question is positive. We provide two examples of it. The first one (Examples 6.36, 6.37) is a pole-MP system (but with an NMP zero) and the second one (Example 6.38) is an all-MP (poles and zeros) system. Before presenting the examples it is worth emphasizing that in the context of the Nyquist plot, where we are concerned with encirclement (or otherwise) of the point −1, MP-ness refers to poles. Thus in our context the first example in the sequel is indeed an MP system.


Figure 6.38 Nyquist plots of Question 6.1, if existent.

Example 6.36: Reconsider Example 6.21 with K ∈ (35, 110.2). In this range the system is stable. Its Nyquist plot for K = 50 is given in Fig. 6.39. It is observed that the PM is negative and is acceptable. Numerically one obtains PM: [X: −74.5036, Y: −88.9161, Z: −16.4480] with PM frequencies: [0.2111, 0.8874, 1.0595]. The point Z is obviously the answer, PM− = −16.4480 deg. On the other hand, because 50/35 < 110/50, the main GM is GM− = 35/50 in plain numbers.

Figure 6.39 Nyquist plots of Example 6.36.

Example 6.37: Reconsider Example 6.36 with K = 80. The Nyquist plot is given in Fig. 6.40. It is observed that the PM is negative and is acceptable. This time PM− = −48.8241 deg. On the other hand, since 80/34.98 > 110.2/80, the main GM is GM+ = 110.2/80 in plain numbers.


Figure 6.40 Nyquist plots of Example 6.37.

Example 6.38: Reconsider Example 6.36 with its NMP zero at s = 0.1 replaced by its MP counterpart, i.e., the MP zero at s = −0.1. Thus the system is

L(s) = K((s+0.1)² + 1)((s+0.1)² + 9)(s + 0.1) / [((s+0.1)² + 4)((s+0.1)² + 25)(s + 1)].

It is easy to verify that the stability range of the system for positive K is K ∈ {(0, 2.1879) ∪ (23.4309, ∞)} ∩ {(0, 0.6347) ∪ (3.0473, ∞)}, which is K ∈ (0, 0.6347) ∪ (23.4309, ∞). Considering the root locus with negative gain as well, we conclude that the system is stable also for K ∈ (−∞, −110.2077) ∪ (−1, 0). Thus the system is stable if K ∈ (−∞, −110.2077) ∪ (−1, 0) ∪ (0, 0.6347) ∪ (23.4309, ∞). Its Nyquist plot for K = 50 is given in Fig. 6.41. It is observed that the PM is negative and is acceptable.

Figure 6.41 Nyquist plots of Example 6.38.

(Continued)


(cont’d) In fact with this value of gain we have PM: [X: −125.2014, Y: −101.7750, Z: −27.2315] with respective PM frequencies: [X: 0.2111, Y: 0.8874, Z: 1.0595]. The system has PM⁻ = −27.2315 deg and GM⁻ = 23.4309/50 in plain numbers, or −6.58 dB.
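Since the transfer function of Example 6.38 is given explicitly, its claims can be probed numerically. The sketch below (Python with NumPy rather than the book's MATLAB workflow; the sample gains are chosen for illustration) builds the numerator and denominator polynomials, confirms the all-pole-zero-MP property, and tests closed-loop stability of the unity-feedback loop at a few gains.

```python
import numpy as np

def poly_from_factors(factors):
    """Multiply first-/second-order factors into one polynomial."""
    p = np.array([1.0])
    for f in factors:
        p = np.polymul(p, np.asarray(f, dtype=float))
    return p

# Plant of Example 6.38; note (s + 0.1)^2 + c expands to s^2 + 0.2s + (0.01 + c)
num = poly_from_factors([[1, 0.2, 1.01], [1, 0.2, 9.01], [1, 0.1]])
den = poly_from_factors([[1, 0.2, 4.01], [1, 0.2, 25.01], [1, 1]])

zeros, poles = np.roots(num), np.roots(den)
all_mp = bool(np.all(zeros.real < 0) and np.all(poles.real < 0))
print("all poles and zeros in the open LHP:", all_mp)

def closed_loop_stable(K):
    """Unity feedback: characteristic polynomial is den + K*num."""
    return bool(np.all(np.roots(np.polyadd(den, K * num)).real < 0))

for K in (0.5, 1.0, 50.0):
    print(f"K = {K:5.1f}: closed loop stable = {closed_loop_stable(K)}")
```

Per the stability intervals quoted in the example, gains such as K = 0.5 and K = 50 should be reported stable and K = 1 unstable.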

Example 6.39: Reconsider Example 6.38 with K = 0.5. The Nyquist plot is provided in Fig. 6.42. Half of the unit circle is shown as the dotted curve. The system is stable and has two phase crossover points X and Y, with PhaseMargin: [−1.2987, −170.1411] and PMFrequency: [4.3747, 6.5689], respectively. Both are acceptable. Thus X is chosen: PM⁻ = −1.2987 deg. On the other hand the system has a positive GM: GM⁺ = 0.6347/0.5 in plain numbers, or 2.07 dB.

Figure 6.42 Nyquist plots of Example 6.39.

Remark 6.19: We conclude this section by restating that we have seen in various examples that the PM (whether unique or multiple) computed by MATLAB® may not be acceptable, i.e., it may be wrong. The same is true also for the GM. The question is thus: what should we do in practice? The answer is that we can trust the command “allmargin” of MATLAB®²¹ to find all the real-axis crossings and unit-circle crossings. Thus we compute all the gain and phase margins of the system.

²¹ Used both for the system (to produce the negative real-axis crossings) and its negative (to produce the positive real-axis crossings). See the documentation of the m-file Exercise6point13 which is available on the accompanying CD. Also note that the answers you obtain might differ slightly depending on whether you use “format short” or “format long” in your computations with MATLAB®.


Then we draw the Nyquist plot of the system by the command “nyquist” of MATLAB® and analyze the computed GMs and PMs. On the other hand, a quick fix that may substitute for the aforementioned analysis is numerical verification of stability in the different regions (between the computed GMs and PMs). It would be good if this were included in future releases of MATLAB®. See also Question 10.5 of Chapter 10.
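In the same spirit, the crossings that “allmargin” reports can be located by elementary means: sample L(jω) on a grid, detect sign changes of Im L(jω) (real-axis crossings) and of |L(jω)| − 1 (unit-circle crossings), and refine each bracket by bisection. The sketch below (Python, not MATLAB) does this for the system of Example 6.41 with K = 50; the grid bounds and density are illustrative assumptions.

```python
import numpy as np

def L(w):
    # Example 6.41 with K = 50:  L(s) = K s / (s^2 + 2s + 2)^4
    s = 1j * w
    return 50.0 * s / (s**2 + 2 * s + 2) ** 4

def bisect(f, a, b, tol=1e-12):
    """Refine a bracketed sign change of f to a root."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m
        if (fa < 0.0) == (fm < 0.0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

def crossings(f, grid):
    """Frequencies where f changes sign on the grid, refined by bisection."""
    v = np.array([f(w) for w in grid])
    idx = np.where(v[:-1] * v[1:] < 0)[0]
    return [bisect(f, grid[i], grid[i + 1]) for i in idx]

grid = np.linspace(1e-3, 10.0, 20001)
real_axis = crossings(lambda w: L(w).imag, grid)           # Im L = 0
unit_circle = crossings(lambda w: abs(L(w)) - 1.0, grid)   # |L| = 1
print("real-axis crossing frequencies:", [round(w, 4) for w in real_axis])
print("unit-circle crossing frequencies:", [round(w, 4) for w in unit_circle])
```

Verifying closed-loop stability between consecutive crossings, as suggested above, then completes the analysis.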

6.3.3 Stability in terms of the GM and PM signs

It is worth paraphrasing what the above examples have shown, namely that there exist: (1) Stable MP systems with computed positive and negative PM, (2) Unstable MP systems with computed positive and negative PM, (3) Stable NMP systems with computed positive and negative PM, (4) Unstable NMP systems with computed positive and negative PM, (5) Unacceptable computed PMs. Considering also the GM, we can highlight the following Remark:
Remark 6.20: We have shown that stable MP or NMP systems may have any computed GM and PM signs. That is, all cases GM > 0, PM > 0 and GM > 0, PM < 0 and GM < 0, PM > 0 and GM < 0, PM < 0 are possible. Consequently, we have rebutted the following two theorems which are prevailing in the literature: (1) MP systems are stable iff both GM and PM are positive, (2) NMP systems are stable iff both GM and PM are negative. In fact, it is not possible to determine the stability of a system (either MP or NMP) by merely considering the signs of its computed GM and PM.
Next we pose some questions. Question 6.2: Does there exist a stable/unstable MP system with infinite PM? Does there exist a stable/unstable NMP system with infinite PM? (Note that these questions are partly answered in the cases which we have considered in the above examples.)
Remark 6.21: We have measured the PM as the angle (regardless of its positivity or negativity) that the crossover point makes with the negative real axis. The angle of the negative real axis is ±180(2h + 1). That is, it is not prefixed to −180 deg, as some texts teach. In other words, the phase angle is measured with respect to whichever of the lines ±180(2h + 1) is nearer to it. Note that this results in the measured margin lying between −180 and 180. Which line is nearer and is thus chosen depends on the phase of the system (which in turn depends on the given gain) and can best be observed in the context of the Bode diagram, which we will introduce in the next chapter.


Example 6.40: The phase angle of the system of Example 6.18 with K = 107 is measured with respect to the line −180 deg. With K = 20000 it is measured with respect to the line 180 deg. In the first case the phase of the system is −89.5, which is nearer to the line −180 from among the lines ±180(2h + 1), resulting in PM = 90.5. (Note that if we compute it with respect to the line 180 we obtain −269.5, which is outside the range (−180, 180) and is thus wrong.) In the second case the phase of the system is 166.3, which is nearer to the line 180, yielding PM = −13.7. (Here also note that if we compute it with respect to the line −180 we obtain 346.3, which is outside the range (−180, 180) and therefore is wrong.) On the other hand it is easy to verify that in both cases the system is unstable. In the first case, with PM = 90.5 there will be no change in the stability condition of the system, i.e., the computed PM is not acceptable, so PM = ∞. However, in the second case the computed PM changes the stability condition of the system, so PM = −13.7 is acceptable. See worked-out Problem 6.22.
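The measuring rule of Remark 6.21 can be written as a small routine: find the odd multiple of 180 nearest to the phase angle and subtract. The numbers below are those of Examples 6.40 and 6.41; the routine itself is an illustrative sketch, not the book's code.

```python
def phase_margin(phase_deg):
    """PM per Remark 6.21: signed distance of the phase from the nearest
    line +/-180(2h + 1); the result lies between -180 and 180 deg."""
    m = 2 * round((phase_deg / 180.0 - 1.0) / 2.0) + 1   # nearest odd integer
    return phase_deg - 180.0 * m

# Example 6.40: phase -89.5 is nearest to the -180 line, giving PM = 90.5;
# phase 166.3 is nearest to the +180 line, giving PM = -13.7.
print(round(phase_margin(-89.5), 4))
print(round(phase_margin(166.3), 4))
# Third case of Example 6.41: a phase of -500.87 is nearest to the -540 line.
print(round(phase_margin(-500.87), 4))
```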

Example 6.41: Consider the system L(s) = Ks/(s² + 2s + 2)⁴. With K = 50, PM = −100.31, which is measured with respect to the line −180 deg. With K = 100, PM = −126.83, which is measured with respect to the line 180 deg. With K = 10000, PM = 39.13, which is measured with respect to the line −540. From among the lines ±180(2h + 1) these lines are the nearest (the distance measured in absolute value) to the phase angle of the system. On the other hand it is easy to verify that the first computed PM is acceptable. In the second and third cases the computed PMs are not acceptable, so PM = ∞. See the worked-out Problem 6.23. We will pictorially observe this in the next chapter. However, the student is encouraged to find the range of the phase of the system by traversing the Nyquist contour. For the sake of brevity we skip it.

Example 6.42: The phase angle of the system L(s) = K(s + 1)/(s² + 2s + 2)⁸ is measured with respect to the lines −540, −900, and −1260 deg, depending on the value of the gain. (It is left to the student to verify whether the computed PMs are acceptable.)


Example 6.43: The phase angle of the system L(s) = K/[s²(s² + 2s + 2)⁸] is measured with respect to the lines −180, −540, −900, −1260, and −1620 deg, depending on the value of the gain. (It is left to the student to verify whether the computed PMs are acceptable.)

6.3.4 The high sensitivity region

In this section we introduce the high sensitivity region, which must be avoided by the plot. To motivate the problem we start by recalling that the GM and PM provide measures of robustness of the system in the general case. This, however, fails in certain cases. Some stable systems are pathological in the sense that the GM and PM are not good indicators of robustness for them. The reason is that the Nyquist plot looks like the solid curve given in Fig. 6.43. It is observed that with a small increase in both gain and phase the Nyquist plot moves to the dashed curve and the system becomes unstable.

Example 6.44: Consider the system L(s) = 0.35(s² + 0.25s + 0.4) / [s(s + 4)(s² + 0.05s + 0.3)]. Its gain and phase margins are GM = ∞, PM = 91.5 deg, and thus apparently the system has good stability margins. However, the Nyquist plot of the system shows a pathological shape, as depicted in Fig. 6.44, left panel. With a small increase in the GM and PM it will become unstable. More precisely, for instance, if the gain is increased as 0.35 → 0.45 then, with PM = 21.3 deg, the system will be unstable. Similarly, a small change in the parameter 0.3 → 0.24 or 0.4 → 0.5 makes the system unstable, as shown in the right panel of the same figure. (Continued)

Figure 6.43 A typical pathological Nyquist plot.


(cont’d)

Figure 6.44 A typical pathological Nyquist plot.

Figure 6.45 High sensitivity region which should be avoided.

The conclusion of the above observation is that to have satisfactory robustness margins the Nyquist plot must not enter the shaded region in Fig. 6.45. Note that this shaded region is not contradictory with the GM and PM measures and in fact complements them. The shaded region is called the region of high sensitivity.²² The explanation of the high sensitivity region is as follows. For the negative unity feedback there holds S = 1/(1 + L). On the other hand, the radius of the circle centered at the critical point and passing through a point L of the Nyquist plot is |1 + L|. Therefore if we want the sensitivity of the system to be at most S₀, the Nyquist plot must stay outside the shaded circle whose radius is 1/S₀. (A more detailed explanation will be offered in Chapter 8 under the title “S-Circles.”)

²² The advanced version of this consideration is called loop shaping, which will be studied in the graduate course on robust control and also in Chapter 9, where controller design is performed in the context of the Bode diagram.
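The distance interpretation can be verified on Example 6.44 itself: on any frequency grid, min |1 + L(jω)| is exactly the reciprocal of the peak sensitivity max |S(jω)|, and a root check on the characteristic polynomial confirms that the nominal loop is stable while the perturbation 0.3 → 0.24 destabilizes it. The Python sketch below is an illustration, not the book's computation; the frequency grid is an assumption.

```python
import numpy as np

def L(w, c=0.3):
    # Example 6.44; c = 0.3 nominally, c = 0.24 is the destabilizing perturbation
    s = 1j * w
    return 0.35 * (s**2 + 0.25 * s + 0.4) / (s * (s + 4) * (s**2 + 0.05 * s + c))

w = np.linspace(1e-3, 50.0, 200001)
dist_to_minus1 = np.abs(1 + L(w))        # distance of the plot from -1
Ms = np.max(np.abs(1.0 / (1 + L(w))))    # peak sensitivity on the same grid
print("min |1 + L| =", dist_to_minus1.min())
print("peak |S|    =", Ms)

def stable(c):
    # characteristic polynomial: s(s + 4)(s^2 + 0.05s + c) + 0.35(s^2 + 0.25s + 0.4)
    den = np.polymul(np.polymul([1.0, 0.0], [1.0, 4.0]), [1.0, 0.05, c])
    num = 0.35 * np.array([1.0, 0.25, 0.4])
    return bool(np.all(np.roots(np.polyadd(den, num)).real < 0))

print("nominal stable:", stable(0.3), "| perturbed (0.3 -> 0.24) stable:", stable(0.24))
```

By construction max |S| = 1/min |1 + L| on the grid, which is the S-circle statement read in reverse.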


Question 6.3: Given a value for |S| and its corresponding S-circle as the high sensitivity region which should be avoided, what GM and PM are guaranteed for the system?
Remark 6.22: We close this chapter by stating that it is possible to obtain the closed-loop transfer function from the (open-loop) Nyquist plot of the plant. This is discussed in Chapter 8.
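A partial answer to Question 6.3 is given by the classical bounds (quoted here without proof; they are not derived in this chapter): if the Nyquist plot avoids the S-circle of level S₀, i.e., |S| ≤ S₀ at all frequencies, then GM ≥ S₀/(S₀ − 1) and PM ≥ 2 sin⁻¹(1/(2S₀)).

```python
import math

def guaranteed_margins(S0):
    """Classical lower bounds implied by peak sensitivity at most S0."""
    gm = S0 / (S0 - 1.0)                                  # guaranteed GM, plain numbers
    pm = 2.0 * math.degrees(math.asin(1.0 / (2.0 * S0)))  # guaranteed PM, degrees
    return gm, pm

gm, pm = guaranteed_margins(2.0)
print(f"S0 = 2: GM >= {gm} (about 6 dB), PM >= {pm:.2f} deg")
```

For instance, keeping the peak sensitivity at 2 guarantees roughly a gain margin of 2 and a phase margin of about 29 deg, regardless of the particular plant.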

6.4 Summary

In this chapter we have started what is generally known as the frequency methods. These methods pivot on applying a sinusoid to the system, observing the output, and studying the transfer function thus obtained. There are basically three methods for representing the aforementioned transfer function: the Nyquist plot, the Bode diagram, and the Krohn-Manger-Nichols chart. We have focused on the Nyquist plot (also called the polar plot) in this chapter, giving a thorough, survey-style account of it. This includes rectifying the mistakes and shortcomings of the available results. More precisely, we have started with the Nyquist stability criterion, presented a canonical method for drawing the Nyquist plot even in the case of jω-axis poles and/or zeros, and analyzed the high- and low-frequency ends and cusp points of the plot. We have also studied its relation with the root locus method. Then we have introduced the concepts of GM, PM, and DM and addressed the case of multiple crossover frequencies and stability of the system. Among the various issues explored, we have learned that it is not possible to determine the stability of a system by merely considering the signs of its GM and PM, and that some computed GMs and PMs may be unacceptable. Included also are the treatment of critical lines, pathological systems, and the high sensitivity region in the context of the Nyquist plot. Learning of the topics is facilitated by numerous carefully designed examples. Similarly, many worked-out problems follow to shed more light on the subject.

6.5 Notes and further readings

1. Although it is outside the scope of this book, it is instructive to comment on the main problem in optimal and robust control problems of the 1960s. The optimal Linear Quadratic Regulator (LQR) framework was extended to the robust Linear Quadratic Gaussian (LQG) framework, where uncertainties in the form of white noise were addressed in the design. At that time it was thought that plant uncertainties are thus addressed by this method. However, plant uncertainties are not in the form of a noise signal: they never vary randomly in time, but almost always quite slowly and smoothly, or are not time varying at all. It was later shown that actual plant uncertainties not only may derobustify the LQG method but in fact can even destabilize it. This almost coincided with the advent of the H∞ control framework, which grew out of the pioneering work of G. Zames.²³
2. After the above-discussed period the attention of the control community shifted back to the frequency domain and a boom of discoveries followed, though more efficient solutions were offered in the state-space domain. In this time the robust H∞, L1, μ, Kharitonov, and variable structure branches flourished, grew rapidly, and overshadowed the robust QFT control branch. The L1 theory was pioneered by M. Dahleh. The μ theory was initiated by J. C. Doyle. (A very similar idea was initiated simultaneously by M. G. Safonov.) The Kharitonov theory is named after V. L. Kharitonov, who pioneered the field. Sliding mode control was initiated in the 1950s by S. V. Emel’yanov and co-workers. Robust QFT was pioneered by I. Horowitz in the 1960s.
3. By the mid-1980s attempts were being made to integrate the frequency-domain and time-domain analysis and synthesis tools in one so as to exploit the strengths of both fields. In many disciplines this goal was effectively achieved.
4. Among the topics for further studies are the following items. For the original work of H. Barkhausen and recent pertinent works the interested reader may consult (Barkhausen, 1935; Lindberg, 2010).
5. The inverse Nyquist array can also be found in the literature. It can be used provided the plant and the perturbed plant(s) have the same number of CRHP zeros. On the other hand, Lehtomaki's robustness test is applicable if the plant and the perturbed plant(s) have the same number of CRHP zeros and poles. In (Postlethwaite et al., 1985) it is shown that the latter is a special case of the former.
6. The Nyquist criterion and/or the inverse Nyquist criterion have also been applied to multivariable, n-dimensional, infinite-dimensional, and periodic systems. Examples of periodic systems include flapping dynamics of helicopter rotors, rolling motion of ships in waves, robot manipulators moving along periodic trajectories, electromechanical oscillation in AC generators, etc. Stability analysis of such systems has been studied in different contexts, one of which is the Nyquist's. The analysis heavily pivots on the Floquet theory (concerning the solution of periodically time-varying ordinary differential equations). Toeplitz matrix related results also appear in some other works. The interested reader can consult the pertinent classical literature. A good starting point can be: (Munro, 1972; DeCarlo et al., 1977; Logemann, 1989; Zhou and Hagiwara, 2005).
7. Various extensions of the Nyquist stability criterion are available. A framework which encapsulates them and provides even new results is proposed in (Sasane, 2010), which uses a Banach algebra setting. In (Vidyasagar et al., 1988) the notion of the winding number of a curve was introduced to the context of Nyquist plots. More precisely, if a real-axis crossing occurs from below the crossing index is negative, and if it happens from above it is positive. Using this, a theorem is presented which specifies the parts of the real axis, if any, for which the system is stable when the −1/K point lies within them. However, unlike what the title suggests, the paper is not really a simplification of what we have presented here, which is tangible and intuitive. But the interested reader can consult it as an alternative approach.

²³ Polish control theorist (1934–97) who emigrated to Canada in 1948. He made significant contributions to control theory. Apart from pioneering work on H∞ theory he is most noticeably known for the small-gain theorem, passivity theorem, and circle criterion in input–output form. It is due to add that the use of H∞ optimization in engineering dates back to 1976 (Helton, 1976).


8. When instead of one plant there is a family of plants, a question that arises is whether the envelope of the Nyquist plots can be obtained from (or is contained in) the Nyquist plots of a small number of the plants, in particular the Kharitonov ones. This question is answered in (Hollot and Tempo, 1994), the answer being in general negative, and a number of important results are obtained as byproducts.
9. Finally we mention that research on the basic theory of the Nyquist criterion and related topics such as the GM/PM is still underway, and new results are sporadically reported. Among these miscellaneous advanced results are the following: Qualitative graphical representation of Nyquist plots (Zanasi et al., 2015); Robust Nyquist array (Chen and Seborg, 2002); Theory of index and a generalized Nyquist criterion (Hsu, 1980); Utilizations in stability analysis, either applied or theoretical (Dolgin and Zeheb, 2009; Fang, 2013; Hagiwara, 2002; Huang et al., 2016; Lovisari and Jonsson, 2010; Zhou, 2013); Robustness of the GM/PM and their connections with dissipativity (Pal and Belur, 2013; Tan, 2004); Design for robust/invariant PM in fractional-order systems (Basiri and Tavazoei, 2015).
10. A nonlinear circuit, called a phase shifter, is proposed in (Dantowitz et al., 1992) for the direct measurement of the PM.
11. In the perspective of singular perturbation, a generalized GM and a singular perturbation margin are introduced in Yang et al. (2015).
12. The circle criterion and its extensions concern the stability of a class of nonlinear systems. They are closely related to the Nyquist criterion, as you will learn in the sequel of the book on state-space methods. In brief, there we would like the Nyquist plot to stay outside a “particular” circle in the plane.
13. Development of the counterparts of the results of this chapter for DAE, fractional-order, positive, constrained, multiscale, etc. systems is desirable. Note that a partial answer for singularly perturbed systems (two-scale systems) is provided in Yang et al. (2015).

6.6 Worked-out problems

In the first half of the problems we follow the procedure outlined in Section 6.2 and demonstrated in Examples 6.5–6.14. For the sake of brevity we present the analysis in a concise manner. In the second half of the problems, we follow the procedure of Section 6.3. It is instructive to read Appendix C if you have not done so yet. There you learn many things, e.g., how to find the real-axis crossings or verify the stability of a given system using MATLAB®.

Problem 6.1: Draw the Nyquist plot of the system L(s) = K(s + 2)/((s − 1)(s − 2)), and find the range of K for stability.
The first step is to form L′(jω) = (jω + 2)/((jω − 1)(jω − 2)). Now from Im L′ = ω(8 − ω²)/((2 − ω²)² + 9ω²) = 0 we find ω = 0, ±2√2, ±∞. Defining the angles CCW, we observe that at ω = 0⁺ = 0, L′(jω) = 1∠−360. As we increase the frequency, at small frequencies (ω < 2√2) there holds Im L′ > 0 and thus the plot enters the first quadrant. It then enters the second quadrant²⁴ and at ω = 2√2 has a real-axis crossing and enters the third quadrant. The real-axis crossing is at −1/3. Finally at ω = ∞ there holds L′(jω) = 0∠−90. On the infinite arc of the Nyquist contour the phase of the system increases by (#poles − #zeros) × 180 = (2 − 1) × 180 = 180 deg. Thus there will be an infinitesimal arc with this angle to the right of the origin. At ω = −∞, L′(jω) = 0∠90. In negative frequencies the phase of the system increases by 270 and at ω = 0⁻ = 0⁺ the phase of the system becomes 360. Note that Δ∠L′ = ∠L′(j0⁺_end) − ∠L′(j0⁺_start) = 360 − (−360) = 2 × 360, as known from Remark 6.3. The whole plot produced by MATLAB® is given in Fig. 6.46. Note that the infinitesimal arc at ω = ∞ (to the right of the origin) is not depicted by MATLAB®. For stability there should hold −1/3 < −1/K < 0 or K > 3.

²⁴ At ω > 2/√5, Re L′ = (4 − 5ω²)/((2 − ω²)² + 9ω²) < 0, but we do not really need to compute this.

Figure 6.46 Nyquist plot of Problem 6.1.
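The two key numbers of Problem 6.1 can be double-checked (an illustrative Python sketch, not part of the book's solution): L′(j2√2) = −1/3, and the closed-loop characteristic polynomial (s − 1)(s − 2) + K(s + 2) = s² + (K − 3)s + 2 + 2K is Hurwitz exactly for K > 3.

```python
import numpy as np

def Lp(w):
    # L'(jw) = (jw + 2) / ((jw - 1)(jw - 2))  (Problem 6.1 with K = 1)
    s = 1j * w
    return (s + 2) / ((s - 1) * (s - 2))

crossing = Lp(2 * np.sqrt(2.0))
print("L'(j 2*sqrt(2)) =", crossing)  # real part -1/3, imaginary part ~0

def stable(K):
    # roots of s^2 + (K - 3)s + (2 + 2K)
    return bool(np.all(np.roots([1.0, K - 3.0, 2.0 + 2.0 * K]).real < 0))

print([(K, stable(K)) for K in (2.0, 3.5, 10.0)])
```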

Problem 6.2: Draw the Nyquist plot of the system L(s) = K(s + 1)/(s − 1)³, and find the range of K for stability.
We form L′(jω) = (jω + 1)/(jω − 1)³. Defining the angles CCW, at ω = 0⁺ = 0, L′ = 1∠−540. Also, from Im L′ = 4ω(ω² − 1)/((3ω² − 1)² + (3ω − ω³)²) = 0 we compute that the real-axis crossings are at ω = 0, ±1, ±∞. As we increase the frequency the phase increases (becomes less negative), and the plot traverses CCW. At ω = 1 there will be a real-axis crossing at 0.5. The phase of the system is −360 deg at this point. As we further increase ω, at ω = ∞: L′ = 0∠−180. On the infinite arc of the Nyquist contour the phase of the system increases by (#poles − #zeros) × 180 = (3 − 1) × 180 = 360 deg. Thus there will be an infinitesimal circle around the origin with the angle 360 deg. At ω = −∞, L′ = 0∠180. In negative frequencies the phase of the system increases by 360 and at ω = 0⁻ = 0⁺ the phase of the system becomes 540. Note that Δ∠L′ = ∠L′(j0⁺_end) − ∠L′(j0⁺_start) = 540 − (−540) = 3 × 360, as required by Remark 6.3. Fig. 6.47, produced by MATLAB®, shows the complete plot. Note that the infinitesimal circle at ω = ∞ (around the origin) is not produced by MATLAB®. Because no part of the real axis is encircled three times CCW, the system is not stabilizable by pure gain.

Figure 6.47 Nyquist plot of Problem 6.2.
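The conclusion of Problem 6.2 also follows algebraically: the characteristic polynomial (s − 1)³ + K(s + 1) = s³ − 3s² + (3 + K)s + K − 1 has the coefficient −3 for s² regardless of K, so the roots always sum to 3 and at least one lies in the RHP. The illustrative Python sweep below confirms this numerically.

```python
import numpy as np

def closed_loop_roots(K):
    # (s - 1)^3 + K(s + 1) = s^3 - 3s^2 + (3 + K)s + (K - 1)
    return np.roots([1.0, -3.0, 3.0 + K, K - 1.0])

for K in (-100.0, -1.0, 0.5, 3.0, 100.0):
    r = closed_loop_roots(K)
    # the roots always sum to 3, so at least one lies in the RHP
    print(f"K = {K:7.1f}: max Re(root) = {max(r.real):.4f}")
```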

Problem 6.3: Draw the Nyquist plot of the system L(s) = Ks/(s² − 1), and find the range of K for stability.
We first form L′(jω) = jω/((jω)² − 1) = −jω/(ω² + 1). Defining the angles CCW, at ω = 0⁺, L′ = 0∠−90. From Im L′ = −ω/(ω² + 1) = 0 we solve for the real-axis crossings and find the respective frequencies ω = 0, ±∞. We note that the real part of L′(jω) is zero and thus the whole plot will be on the imaginary axis. In 0⁺ ≤ ω ≤ +∞ the phase of the system is −90. As we increase ω the magnitude of Im L′ increases from zero and the plot goes downwards. The maximum of |Im L′| occurs at ω = 1. At this frequency Im L′ = −0.5, and the plot is at its lowest point. As we further increase the frequency the magnitude of Im L′ decreases and the plot goes upwards. At ω = +∞ the plot has arrived back at the origin, L′ = 0∠−90. On the infinite arc of the Nyquist contour the phase of the system increases by (#poles − #zeros) × 180 = 180 deg. That is, there will be an infinitesimal arc to the right of the origin. At ω = −∞, L′ = 0∠90. In negative frequencies −∞ ≤ ω ≤ 0⁻ the phase of the system remains at 90 deg. As we increase the frequency (i.e., decrease it in absolute value), the magnitude of Im L′ increases and the plot goes upwards. Its maximum occurs at ω = −1. At this frequency Im L′ = 0.5. Above this frequency the magnitude of Im L′ decreases and the plot goes downwards. Finally at ω = 0⁻ we are back at the origin. On the ρ semi-circle at j0 (0⁻ < ω < 0⁺) the phase of the system increases by 180 deg and thus an infinitesimal arc to the left²⁵ of the origin is obtained. At ω = 0⁺ the phase of the system becomes 270. Note that Δ∠L′ = ∠L′(j0⁺_end) − ∠L′(j0⁺_start) = 270 − (−90) = 1 × 360, as expected from Remark 6.3. The plot that MATLAB® produces for this system is given in Fig. 6.48. Note that the infinitesimal arcs are not produced by MATLAB®. Because no part of the real axis is encircled one time CCW, the system is not stabilizable by pure gain. We add that the upper and lower points of the plot are not cusp points (although the figure looks sharp there) since the angle does not change at these points. Finally, we encourage the reader to try the systems L1(s) = K(s² + 1)/(s² − 1) and L2(s) = K(s² − 1)/(s² + 1).

Figure 6.48 Nyquist plot of Problem 6.3.
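Numerically (an illustrative Python sketch): the frequency response of Problem 6.3 is purely imaginary with its lowest point −0.5j at ω = 1, and the characteristic polynomial s² + Ks − 1 has root product −1 for every K, so one root is always in the RHP and no pure gain stabilizes the system.

```python
import numpy as np

w = np.linspace(0.01, 100.0, 10001)
Lvals = 1j * w / ((1j * w) ** 2 - 1)   # L'(jw) for Problem 6.3 (K = 1)
print("largest |Re L'| on the grid:", np.max(np.abs(Lvals.real)))  # plot lies on the imaginary axis
print("lowest point L'(j1) =", 1j / ((1j * 1.0) ** 2 - 1))

def stable(K):
    # characteristic polynomial s^2 + Ks - 1: constant term is -1 for every K,
    # so the root product is -1 and one root is always in the RHP
    return bool(np.all(np.roots([1.0, K, -1.0]).real < 0))

print("stabilizable by some gain in [-50, 50]?",
      any(stable(K) for K in np.linspace(-50.0, 50.0, 101)))
```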

Problem 6.4: Draw the Nyquist plot of the system L(s) = Ks/(s − 1)², and find the range of K for stability.
In the first step we form L′(jω) = jω/(jω − 1)². Next, from Im L′ = ω(1 − ω²)/((1 − ω²)² + 4ω²) = 0 we find ω = 0, ±1, ±∞. Defining the angles CCW, we observe that at ω = 0⁺ = 0, L′(jω) = 0∠−270. We note that Re L′ = −2ω²/((1 − ω²)² + 4ω²) ≤ 0 and thus the plot is in the left half plane. As we increase the frequency, at small frequencies (ω < 1) there holds Im L′ > 0 and thus the plot enters the second quadrant. At ω = 1 it has a real-axis crossing and enters the third quadrant. The real-axis crossing is at −1/2. Finally at ω = ∞ there holds L′(jω) = 0∠−90.
On the infinite arc of the Nyquist contour the phase of the system increases by (#poles − #zeros) × 180 = (2 − 1) × 180 = 180 deg. Thus there will be an infinitesimal arc with this angle to the right of the origin. At ω = −∞, L′(jω) = 0∠90. In negative frequencies the phase of the system increases by 180 deg and at ω = 0⁻ the phase of the system becomes 270 deg: L′(jω) = 0∠270. On the ρ semi-circle at j0 (0⁻ < ω < 0⁺) the phase of the system increases by 180 deg and thus an infinitesimal arc to the right of the origin is obtained. At ω = 0⁺ the phase of the system becomes 270 + 180 = 450. Note that Δ∠L′ = ∠L′(j0⁺_end) − ∠L′(j0⁺_start) = 450 − (−270) = 2 × 360, as necessitated by Remark 6.3. The plot produced by MATLAB® is given in Fig. 6.49. Note that the plot for negative frequencies is exactly the same as the plot for positive frequencies, and thus it is not separately visible. That is, the region inside the plot is encircled twice. Moreover, as usual the infinitesimal arcs are not depicted by MATLAB®. For stability there should hold −1/2 < −1/K < 0 or K > 2.

²⁵ Note that unlike in the other problems, here it is to the left and not to the right of the origin.

Figure 6.49 Nyquist plot of Problem 6.4.

Problem 6.5: Draw the Nyquist plot of the system L(s) = K(s + 2)/(s(s − 1)), and find the range of K for stability.
First we form L′(jω) = (jω + 2)/(jω(jω − 1)). Defining the angles CCW, at ω = 0⁺, L′ = ∞∠−270. We note that Re L′ = −3ω²/(ω⁴ + ω²) = −3/(ω² + 1) and Im L′ = (2ω − ω³)/(ω⁴ + ω²). Thus at ω = 0⁺, Re L′ = −3 and Im L′ = ∞. Also, from Im L′ = 0 we compute that the real-axis crossings are at the frequencies ω = 0, ±√2, ±∞. As we increase the frequency, Re L′ and Im L′ both decrease and the plot goes downwards in the second quadrant. At ω = √2 there will be a real-axis crossing at −1. Above this frequency the magnitude of the real part decreases further and at ω = ∞ the plot is at the origin with the angle −90 deg. On the infinite arc of the Nyquist contour the phase of the system increases by 180 deg. Therefore there will be an infinitesimal arc with this angle to the right of the origin. At ω = −∞, L′ = 0∠90. In negative frequencies the phase of the system increases from 90 to 270 at ω = 0⁻, and the plot will be at L′ = ∞∠270 on the line Re L′ = −3. On the ρ semi-circle at j0 the phase of the system decreases by 180 deg and the infinite arc to the left of the origin is obtained, whose angle is 180 deg. At ω = 0⁺, L′ = ∞∠90. Note that Δ∠L′ = ∠L′(j0⁺_end) − ∠L′(j0⁺_start) = 90 − (−270) = 1 × 360, as required by Remark 6.3. Fig. 6.50 shows the Nyquist plot of the system. The plot is shown with magnification around the origin, otherwise the infinitesimal arc is not visible. For stability there should hold −1 < −1/K < 0 or K > 1.

Figure 6.50 Nyquist plot of Problem 6.5, not drawn to scale.
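As a check on Problem 6.5 (an illustrative Python snippet): L′(j√2) = −1 confirms the crossing, and the characteristic polynomial s(s − 1) + K(s + 2) = s² + (K − 1)s + 2K is Hurwitz precisely for K > 1.

```python
import numpy as np

s = 1j * np.sqrt(2.0)
crossing = (s + 2) / (s * (s - 1))   # L'(j sqrt(2)) for Problem 6.5 (K = 1)
print("L'(j sqrt 2) =", crossing)    # equals -1

def stable(K):
    # roots of s^2 + (K - 1)s + 2K
    return bool(np.all(np.roots([1.0, K - 1.0, 2.0 * K]).real < 0))

print([(K, stable(K)) for K in (0.5, 1.5, 10.0)])
```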

Problem 6.6: Draw the Nyquist plot of the system L(s) = K(s + 1)/s², and find the range of K for stability.
Note that we have solved this problem with a different approach in Example 6.6. Here we draw the whole plot. Following the procedure we have learned, it is easy to verify that at ω = 0⁺ the plot starts at the lower left corner of the curve (L′ = ∞∠−180) given in Fig. 6.51 and at ω = +∞ ends at the origin (L′ = 0∠−90), the part of the figure which is in the third quadrant. The infinitesimal arc to the right of the origin is the mapping of the infinite arc of the Nyquist contour. At ω = −∞ the plot is at L′ = 0∠90. For negative frequencies the plot is in the second quadrant, and at ω = 0⁻ the plot is at L′ = ∞∠180. Finally, the mapping of the ρ semi-circle at j0 is an infinite circle, which is inevitably shown as an incomplete large circle in the figure. Note that the angle of this circle is 360 deg (not less than that) and it is traversed CW, because the phase of the system decreases on the ρ semi-circle at j0, i.e., at ω = 0⁺: L′ = ∞∠−180. Note that Δ∠L′ = ∠L′(j0⁺_end) − ∠L′(j0⁺_start) = −180 − (−180) = 0 × 360, as required by Remark 6.3. The Nyquist plot of the system is given in Fig. 6.51. For stability there should hold −1/K < 0 or K > 0.


Figure 6.51 Nyquist plot of Problem 6.6, not drawn to scale.

Problem 6.7: Draw the Nyquist plot of the system L(s) = Ks(s² + 1)/(s − 1)³, and find the range of K for stability.
For this system Im L′ = ω(1 − ω²)(3ω² − 1)/((3ω² − 1)² + (3ω − ω³)²). The real-axis crossings are thus at ω = 0, ±√3/3, ±1, ±∞. At ω = 0⁺: L′ = 0∠−90. As we increase the frequency the plot enters the fourth quadrant and at ω = √3/3 there is a real-axis crossing at 1/4. The phase of the system is zero at this frequency. As we increase the frequency the plot enters the first quadrant and at ω = 1⁻ there is another real-axis crossing, this time at the origin. The phase of the system is 45 deg at this point. On the ρ semi-circle at j1 the phase of the system increases by 180 deg and thus there will be an infinitesimal arc with this angle on the plot at this point. The phase of the arc is 45 deg at the starting point and 225 deg at the end point, i.e., the arc is traversed CCW. Above ω = 1⁺ the phase of the system increases and finally at ω = +∞ it will be 360 deg. At this frequency L′ = 1∠360. On the infinite arc of the Nyquist contour the phase of the system does not change (why?) and thus at ω = −∞ we are at the same point L′ = 1∠360. As we increase the frequency and reach ω = −1⁻, the plot goes through the first and second quadrants and reaches the origin with the angle 495 deg. On the ρ semi-circle at −j1 there is a 180 deg increase in the phase of the system and there will be an infinitesimal arc with this angle around the origin. The phase of the arc is 495 deg at the starting point and 675 deg at the end point, i.e., the arc is traversed CCW. Above ω = −1⁺ the phase of the system increases and at ω = −√3/3 it will be 720 deg. At this frequency there is a real-axis crossing at 1/4. As we further increase the frequency we arrive at ω = 0⁻, where the phase of the system is 810 deg. On the ρ semi-circle around the origin the phase of the system increases by 180 deg, and thus at ω = 0⁺ we will be at L′ = 0∠990. Note that Δ∠L′ = ∠L′(j0⁺_end) − ∠L′(j0⁺_start) = 990 − (−90) = 3 × 360, as expected from Remark 6.3. Fig. 6.52 shows the Nyquist plot of the system. For stability there should hold 0 < −1/K < 1/4 or K < −4.
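For Problem 6.7, the crossing at 1/4 and the stability condition K < −4 can be checked against the characteristic polynomial (s − 1)³ + Ks(s² + 1) = (1 + K)s³ − 3s² + (3 + K)s − 1 (an illustrative Python sketch, not part of the book's solution).

```python
import numpy as np

s = 1j / np.sqrt(3.0)                       # w = sqrt(3)/3
crossing = s * (s**2 + 1) / (s - 1) ** 3    # L'(jw) for Problem 6.7 (K = 1)
print("L'(j sqrt(3)/3) =", crossing)        # real part 1/4

def stable(K):
    # roots of (1 + K)s^3 - 3s^2 + (3 + K)s - 1
    return bool(np.all(np.roots([1.0 + K, -3.0, 3.0 + K, -1.0]).real < 0))

print([(K, stable(K)) for K in (-10.0, -5.0, -3.0, 2.0)])
```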


Figure 6.52 Nyquist plot of Problem 6.7.

Problem 6.8: Draw the Nyquist plot of the system L(s) = Ks(s^2 + 4)/(s - 1)^4, and find the range of K for stability.
In this system Im L' = ω(4 - ω^2)(ω^4 - 6ω^2 + 1)/[(ω^4 - 6ω^2 + 1)^2 + (4ω^3 - 4ω)^2]. The real-axis crossings are thus ω = 0, ±√(3 - 2√2) = ±0.4142, ±2, ±√(3 + 2√2) = ±2.4142, ±∞. At ω = 0+: L' = 0∠-270. As we increase the frequency the plot enters the second quadrant, and at ω = 0.4142 there is a real-axis crossing at -1.1550; the phase of the system is -180 deg at this frequency. As we increase the frequency the plot passes through the third and fourth quadrants, and at ω = 2- there is another real-axis crossing, this time at the origin. The phase of the system is 270 + 90 + 90 - 4 × [180 - (180/π) × tan⁻¹ 2] = -16.26 deg at this point. On the ρ semi-circle at j2 the phase of the system increases by 180 deg, and thus there will be an infinitesimal arc with this angle on the plot at this point. The phase of the arc is -16.26 deg at the starting point and 163.74 deg at the end point, i.e., the arc is traversed CCW. Above ω = 2+ the phase of the system increases in the second quadrant, and at ω = 2.4142 there is another real-axis crossing, this time at -0.0947. By increasing the frequency the plot goes to the third quadrant, and finally at ω = +∞ there is another real-axis crossing, this time at the origin with the angle 270 deg. At this frequency L' = 0∠270. On the infinite arc of the Nyquist contour the phase of the system increases by 180 deg, and therefore at ω = -∞ we are at L' = 0∠450. As we increase the frequency, the mirror image of the plot with respect to the real axis is obtained. There is a real-axis crossing at ω = -2.4142 at -0.0947. The next real-axis crossing is at ω = -2- at the origin, with an infinitesimal arc whose start and end angles are 556.26 deg and 736.26 deg, respectively. After this there will be another real-axis crossing at ω = -0.4142 at -1.1550. Finally at ω = 0- we are back at the origin with angle 990 deg. On the ρ semi-circle around the origin there is a 180 deg increase in the phase of the system, and there will be an infinitesimal arc with this angle to the right of the origin. At ω = 0+, L' = 0∠1170. Note that Δ∠L' = ∠L'(j0+)_end - ∠L'(j0+)_start = 1170 - (-270) = 4 × 360, as expected from Remark 6.3. Fig. 6.53 shows the Nyquist plot of the system. For stability there should hold -0.0947 < -1/K < 0, or K > 1/0.0947 ≈ 10.56.
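These crossing values are easy to double-check numerically. The following is a small Python sketch (ours, not from the text, which works with MATLAB) evaluating the normalized loop L'(jω) = L(jω)/K at the two nonzero finite crossing frequencies:

```python
import math

# Problem 6.8 with K = 1: L'(s) = s(s^2 + 4)/(s - 1)^4.
def L(w):
    s = 1j * w
    return s * (s**2 + 4) / (s - 1)**4

# Crossing frequencies from Im L' = 0: omega^4 - 6*omega^2 + 1 = 0.
w1 = math.sqrt(3 - 2 * math.sqrt(2))   # ~0.4142 rad/s
w2 = math.sqrt(3 + 2 * math.sqrt(2))   # ~2.4142 rad/s

for w in (w1, w2):
    v = L(w)
    print(f"w = {w:.4f}: Re = {v.real:.4f}, Im = {v.imag:.1e}")

# Stability boundary: -1/K equal to the inner crossing gives K > 1/0.0947.
print("stable for K >", 1 / abs(L(w2).real))
```

The imaginary parts come out at floating-point noise level, confirming that these are indeed real-axis crossings.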

Figure 6.53 Nyquist plot of Problem 6.8. Left: Whole plot; Right: Magnification of the stable part.

Problem 6.9: Draw the Nyquist plot of the system L = K(s^2 + 1)/[s(s^2 + 4)]. Analyze its stability.
As in Problems 6.5 and 6.6, note that MATLAB is not able to draw the Nyquist plot of this system. The system has its poles and zeros all on the j-axis: five finite zeros and poles of first order and one infinite zero of first order. Using the above procedure it is easy to verify that the Nyquist plot is given by Fig. 6.54. To summarize the procedure, we state that at different frequencies we are at the designated points as follows.

Figure 6.54 Modified Nyquist contour and Nyquist plot of Example 6.9.


A = ∞∠-90: 0+, 2+, -2+; B = 0∠-90: 1-, +∞, -1-; C = 0∠90: 1+, -∞, -1+; and D = ∞∠90: 2-, -2-, 0-. Finally, by further increasing the frequency on the ρ semi-circle, from 0-, which is at ∞∠90, we arrive at 0+, which is at ∞∠-90. Thus Δ∠L' = ∠L'(j0+)_end - ∠L'(j0+)_start = -90 - (-90) = 0 × 360, i.e., ∠L' has not changed. This is not a surprise, as there is no NMP pole or zero inside the Nyquist contour (Remark 6.3). The Nyquist plot is given in the right panel of Fig. 6.54 and is apparently traversed 3 times. As is observed, the system is stable if -1/K < 0, or K > 0. Note that we have said "apparently" because the actual contour is traversed only one time; however, pictorially different parts of the contour refer to what are apparently the same points on the plane.

Problem 6.10: The method we proposed for Nyquist plot analysis and drawing also applies to noncausal and unrealizable models. The purpose of this problem is to show the applicability of the result to such models. Draw the Nyquist plot of the mathematical model L(s) = Ks(s^2 + 4)/(s - 1)^2, and find the range of K for stability.
For this system L'(jω) =

2ω^2(ω^2 - 4)/[(1 - ω^2)^2 + 4ω^2] + jω(1 - ω^2)(4 - ω^2)/[(1 - ω^2)^2 + 4ω^2]. Thus we note that the real-axis crossings are at ω = 0, ±1, ±2. On the other hand, as ω → ∞: Re L' → 2 and Im L' → ∞. Now we start at ω = 0+, at which L' = 0∠90. By increasing the frequency the plot goes to the second quadrant, and at ω = 1 there is a real-axis crossing at -1.5. By further increasing the frequency, at ω = 2- the plot arrives at the origin, where there is another real-axis crossing. On the ρ semi-circle around j2 the phase of the system increases by 180 deg, and thus there will be an infinitesimal arc with this angle on the plot. The angle at the starting point is 270 + 90 + 90 - 2 × [180 - (180/π) × tan⁻¹ 2] = 216.87 deg. At the end point the angle is 396.87 deg. By further increasing the frequency, at ω = +∞ the plot tends to the asymptote Re L' = 2 in the first quadrant. We are at the point L' = ∞∠450 with Re L' = 2. On the infinite arc of the Nyquist plot the phase of the system drops by 180 deg. On the other hand, at this frequency |L'| = ∞ and Re L' = 2. Thus there is an infinite arc in the form of a straight line on the asymptote Re L' = 2, where the plot goes from Im L' = +∞ in the first quadrant down to Im L' = -∞ in the fourth quadrant. This line is not depicted by MATLAB, see Fig. 6.55, but exists. Thus at ω = -∞ we are at the point L' = ∞∠270 with Re L' = 2. As we increase the frequency, the mirror image of the plot is obtained. More precisely, by increasing the frequency to ω = -2- the plot reaches the origin with the angle 323.13 deg. There is an infinitesimal arc on the plot where the angle increases by 180, i.e., the angle at its end point will be 503.13 deg. By further increasing the frequency the plot is in the second quadrant and has a real-axis crossing at ω = -1 at -1.5, where it enters the third quadrant. By further increasing the frequency, at ω = 0- we reach the origin, L' = 0∠630. On the ρ semi-circle around the origin the phase of the system increases by 180, thus at ω = 0+ we are at L' = 0∠810.
We note that Δ∠L' = ∠L'(j0+)_end - ∠L'(j0+)_start = 810 - 90 = 2 × 360, as necessitated by Remark 6.3.
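The real/imaginary decomposition quoted for this system can be cross-checked against a direct evaluation of L'(jω); a short Python sketch (ours, not the book's):

```python
# Problem 6.10 with K = 1: L'(s) = s(s^2 + 4)/(s - 1)^2.
def L(w):
    s = 1j * w
    return s * (s**2 + 4) / (s - 1)**2

# Closed-form real and imaginary parts quoted in the text.
def re_im(w):
    d = (1 - w**2)**2 + 4 * w**2
    return 2 * w**2 * (w**2 - 4) / d, w * (1 - w**2) * (4 - w**2) / d

for w in (0.5, 1.0, 2.0, 10.0):
    re, im = re_im(w)
    assert abs(L(w) - complex(re, im)) < 1e-9   # formulas agree with direct evaluation

print(re_im(1.0))   # crossing at w = 1: (-1.5, 0.0)
```

At ω = 1 the imaginary part vanishes and the real part is exactly -1.5, the crossing used in the stability condition below.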

Note that this is contrary to the case of strictly proper systems where the phase increases.


Figure 6.55 Nyquist plot of Example 6.10.

As is observed, the system is stable if -1.5 < -1/K < 0, or K > 0.6667.

Problem 6.11: Can a system have both GM1 and GM2 infinite? How about PM1 and PM2?
Yes, in the case that changing the stability condition of the system by pure gain is impossible in both the increasing and decreasing directions. An example for the GM is the system 1/[s^2(s + 1)], and an example for the PM is the system of Example 6.26 or 6.29. (Question: Can both happen in one system?)

Remark 6.22: In the following problems and exercises the points A, B, C denote the real-axis crossings in the increasing order of frequency. Likewise, the points X, Y, Z denote the unit-circle crossings in the increasing order of frequency.

Problem 6.12: Can a system have some phase crossover frequencies, none of which results in an acceptable GM? How about gain crossover frequencies and PM?
Yes. As for the first part, a system which is not stabilizable by pure gain control is such a system. For instance, the system L = 100(s + 1)^2/(s - 1)^10, whose Nyquist plot for positive frequencies and root locus are provided in Fig. 6.56, has three GMs, none of which is acceptable. The system also has one PM, which is not acceptable either. The output of the command "allmargin" is copy-pasted in the following; none of the margins is acceptable.
GainMargin: [0.0132, 0.1600, 496.6755]
GMFrequency: [0.2679, 1.0000, 3.7321]
PhaseMargin: 129.3855
PMFrequency: 1.4705
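The middle entry of that allmargin listing can be verified by hand: at ω = 1 rad/s the loop is exactly real. A few lines of Python (our sketch, independent of MATLAB):

```python
# Problem 6.12: L(s) = 100(s + 1)^2/(s - 1)^10 at the phase crossover w = 1 rad/s.
def L(w):
    s = 1j * w
    return 100 * (s + 1)**2 / (s - 1)**10

v = L(1.0)
print(v)           # -6.25: a real-axis crossing
print(1 / abs(v))  # 0.16, the second GainMargin entry of the listing
```

Since (1 + j)^2 = 2j and (j - 1)^10 = -32j, the value -6.25 is exact, and its reciprocal magnitude reproduces the 0.1600 gain margin.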


Figure 6.56 Problem 6.12. Left: Nyquist plot; Right: Root locus.

Note that the above "GainMargin" row provides the GMs (regardless of being acceptable or not) in plain numbers. As for the second part, Example 6.26 is such a system. Some other systems that we have discussed or will discuss have this characteristic; identify them!

Problem 6.13: The Nyquist plot of the system L(s) = K(s + 50)^2(s + 1000)/[(s - 1)(s + 3)(s + 5)(s + 200)(s + 400)] is provided in Fig. 6.57 for positive frequencies. For K = 2 the computed PM is [5.4593] deg at ω = 2.6093 rad/s. The real-axis crossings are: [-4.1667, -0.7462, -0.0004802, -0.00005862, 0]. Comment on the stability, GM, PM, and DM of the system.
For stability the point -1 should be encircled one time CCW. Considering the plot also for negative frequencies, we observe that this is satisfied. Thus the system is stable. Because |-1/0.7462| < |-4.1667|, the main GM of the system is GM1 = |-1/0.7462| = 1.34 ≈ 2.54 db. Also, we observe that the computed PM changes the stability condition and thus it is acceptable: PM1 = 5.45 deg. Therefore DM = 5.45 × (π/180)/2.6093 = 0.0365 seconds.
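The DM arithmetic used here (and in the problems that follow) is simply the phase margin converted to radians, divided by its crossover frequency. As a Python helper (the function name is ours, not a standard routine):

```python
import math

def delay_margin(pm_deg, w_gc):
    """Delay margin in seconds from a phase margin in degrees at w_gc rad/s."""
    return math.radians(pm_deg) / w_gc

print(round(delay_margin(5.4593, 2.6093), 4))   # 0.0365
```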

Figure 6.57 Nyquist plot of Problem 6.13.

Note: In this and the rest of the problems the real-axis crossings are given "from left to right." In this problem this happens to be the increasing order of frequency (which is the alphabetical order of the points), whilst in the other problems this is not true.

Nyquist plot

517

Question 6.3: What is the behavior of the plot of the system at the start and end points?

Problem 6.14: The Nyquist plot of the system of Example 6.18 with K = 500,000 is depicted in Fig. 6.58 for positive frequencies. The computed PMs at the points X, Y, and Z are [13.1934, -136.6846, 100.8549] deg at ω = [16.3593, 25.3377, 989.5014] rad/s. From left to right the real-axis crossings are: [-2,005,012,531, -60,426,611, -1,386,827, -8795, -9.5693, -0.4729, 0, 4.7347]. Determine the stability of the system as well as its GM, PM, and DM.
The system has one NMP pole, and for stability the point -1 should be encircled once CCW. This has occurred with the current gain, and thus the system is stable. Because |-1/0.4729| < |-9.5693|, the main GM is GM1 = |-1/0.4729| = 2.1145 ≈ 6.5042 db. On the other hand, all the PMs change the stability condition of the system (from stable to unstable) and are thus acceptable. The main one is PM1 = 13.1934 deg. As for the realizable DM (recalling Remarks 6.15 and 6.16), because 13.1934/16.3593 > 100.8549/989.5014, the DM should be computed at point Z and is DM = 100.8549 × (π/180)/989.5014 = 0.0018 seconds.
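The selection rule applied here (among the acceptable positive PMs, take the one giving the smallest equivalent time delay) can be sketched in a few lines of Python; the helper name is ours:

```python
import math

def realizable_dm(pms_deg, freqs_rad):
    # Keep only the positive PMs and return the smallest equivalent time delay.
    return min(math.radians(pm) / w
               for pm, w in zip(pms_deg, freqs_rad) if pm > 0)

dm = realizable_dm([13.1934, -136.6846, 100.8549], [16.3593, 25.3377, 989.5014])
print(round(dm, 4))   # 0.0018, attained at point Z
```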

Figure 6.58 Nyquist plot of Problem 6.14, not drawn to scale.

Question 6.4: What is the behavior of the plot of the system at the start and end points?
Question 6.5: In the same region (E F) find the range of K which results in a negative GM.

Problem 6.15: The Nyquist plot of the system L(s) = K(s - 1)(s + 1)^2/[(s - 0.2)(s + 5)^3] with K = 15 is depicted in Fig. 6.59 for positive frequencies. The computed PMs are [-132.8830, 152.7957] deg at ω = [4.0003, 11.8594] rad/s. From left to right the real-axis crossings are: [0, 0.6000, 1.1777]. Discuss the stability of the system as well as its GM, PM, and DM.
For stability the point -1 should be encircled once CCW, since the system has one open-loop NMP pole. This has not taken place, and thus the closed-loop system is unstable. Considering the plot for negative frequencies as well, we observe that this does not take place anywhere, and thus the system is not stabilizable by pure gains. We denote this by GM = Inf. Moreover, none of the computed PMs causes a CCW encirclement of the critical point; thus they are not acceptable either. Likewise this is denoted by PM = Inf, and hence DM = Inf.

518

Introduction to Linear Control Systems

Figure 6.59 Nyquist plot of Problem 6.15.

Problem 6.16: The Nyquist plot of the system L(s) = K(s + 1)^3/[(s - 0.2)(s - 5)(s + 5)] with K = 2 is shown in Fig. 6.60. The computed PM is [141.4885] deg at ω = 4.6949 rad/s. The real-axis crossings are: [-0.2, 0.4, 2]. Comment on the stability of the system and find its GM, PM, and DM.
The open-loop system has two NMP poles, and thus for stability the point -1 should be encircled 2 times CCW. With the given gain this has not occurred, and thus the system is unstable. It takes place in the region (B A). Thus the gain must be multiplied by K' (i.e., K → KK') where -1/K' ∈ (-0.2, 0.4). Therefore K' ∈ (-∞, -2.5) ∪ (5, ∞). The border value of K' is the GM of the system. Because |-2.5| < |5|, the main GM is GM1 = 2.5 ≈ 7.9588 db, attained with negative gains. (Note that in this problem both GMs are positive in db, one in positive gains and one in negative gains.) The given PM changes the stability condition of the system (from unstable to stable) and is thus acceptable: PM1 = 141.49 deg. The DM of the system is DM = 141.49 × (π/180)/4.69 = 0.5265 seconds.
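The interval bookkeeping here is mechanical: each endpoint c of the stable segment for -1/K' maps to a border gain -1/c. A minimal sketch, with our variable names:

```python
# Problem 6.16: the stable segment is -1/K' in (-0.2, 0.4).
a, b = -0.2, 0.4
print("positive-gain border: K' >", -1 / a)   # 5.0
print("negative-gain border: K' <", -1 / b)   # -2.5
```

The two printed borders are exactly the endpoints of K' ∈ (-∞, -2.5) ∪ (5, ∞) derived in the text.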

Figure 6.60 Nyquist plot of Problem 6.16.

Nyquist plot

Problem 6.17: The Nyquist plot of the system L(s) = K((s + 0.1)^2 + 1)((s + 0.1)^2 + 9)(s - 0.1)/[((s - 0.1)^2 + 4)((s + 0.1)^2 + 25)(s + 1)] with K = 70 is provided in Fig. 6.61 for positive frequencies. The computed PM at the point Z is [-56.7747] deg at ω = 0.1258 rad/s. The real-axis crossings from left to right are: [-1.6062, -0.6352, 49.505, 69.9301]. Determine the stability of the system and its GM, PM, and DM.
The open-loop system has two NMP poles and thus is closed-loop stable if the point -1 is encircled CCW two times by the plot. If we complete the plot by considering also the negative frequencies, we conclude that in the region (B A) this takes place. Thus the system is stable. Because |-1/0.6352| < |-1.6062|, the main GM is GM1 = |-1/0.6352| = 1.5744 ≈ 3.94 db. Additionally, the system becomes unstable with the computed PM; that is, it is acceptable: PM1 = -56.77 deg. Hence DM = -56.77 × (π/180)/0.1258 = -7.8768 seconds.

Figure 6.61 Nyquist plot of Problem 6.17.

Problem 6.18: The Nyquist plot of the system L(s) = K((s + 0.1)^2 + 1)((s - 0.1)^2 + 9)(s + 0.1)/[((s + 0.1)^2 + 4)((s + 0.1)^2 + 25)(s + 1)] with K = 10 is depicted in Fig. 6.62 for positive frequencies. At the points X, Y, Z the measured PMs are [4.1222, 165.4939, 52.0568] deg at ω = [1.3570, 2.8526, 3.1398] rad/s. The real-axis crossings from left to right are: [-41.6667, -2.7894, -0.5412, 0.0907, 1.5708, 10]. Determine the stability of the system and its GM, PM, and DM.
Because the open-loop system is pole-MP (it has no NMP poles), its closed-loop form is stable if the point -1 is not encircled by the plot. Considering also the plot for negative frequencies, we conclude that the point -1 should be outside the whole plot. Thus with the current value of the gain the system is unstable. For stability either -1/K' < E or -1/K' > F should hold; thus K' > 0.0204 or -0.1 < K' < 0. (Note that K' is the value the current gain should be multiplied by. That is, as in Problem 6.16, the gain is changed from 10 to 10K'. And the border value of K' is the GM of the system.) The GM of the system is thus GM2 = 0.0204 ≈ -33.8074 db if we consider positive gains only, or GM2 = 0.1 ≈ -20 db in negative values of the gain. As for the PM, we note that the PM at the point Z is acceptable. Thus PM1 = 52.05 deg. Consequently, DM = 52.0568 × (π/180)/3.1398 = 0.2894 seconds.
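The db figures quoted in these problems are plain 20·log10 conversions of the border gains; checking the two values above in Python (our sketch):

```python
import math

# Problem 6.18's border gains expressed in db: GM_db = 20*log10(GM).
for gm in (0.0204, 0.1):
    print(round(20 * math.log10(gm), 4))   # -33.8074, then -20.0
```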


Figure 6.62 Nyquist plot of Problem 6.18.

Problem 6.19: The Nyquist plot of the system L(s) = K((s + 0.1)^2 + 1)((s - 0.1)^2 + 9)(s - 0.1)/[((s + 0.1)^2 + 4)((s - 0.1)^2 + 25)(s + 1)] with K = 1 is depicted in Fig. 6.63 for positive frequencies. At the points W, X, Y, Z the measured PMs are [-20.90, -126.36, 27.70, -180] deg. The frequencies are ω = [1.8887, 2.1393, 4.0310, Inf] rad/s. The real-axis crossings from left to right are: [-0.5020, -0.0308, -0.0091, 0.1198, 1, 5.2770]. Determine the stability of the system and its GM, PM, and DM.
The closed-loop system is stable if the point -1 is encircled CCW two times. Hence the system is unstable. Inspection shows that the stability condition is satisfied in the region (F E). That is, the gain K = 1 must be multiplied by K' (K → KK') where -1/K' ∈ (F E), or K' ∈ (-1, -0.1895). The GM of the system is thus -1 in plain numbers. As for the PMs, none of them changes the stability condition of the system (from unstable to stable) except the one at Y. Thus PM1 = 27.7 deg. Hence DM = 27.7 × (π/180)/4.03 = 0.12 seconds. Note that for gain values smaller than 1, like K = 0.9, point Z is to the left of the point 1 (in this case it is 0.9); then the PM at point Z (which is -175.52 deg in this case) is also acceptable. However, the main PM will be that of point Y (in this case 27.53 deg).

Figure 6.63 Nyquist plot of Problem 6.19.


Problem 6.20: The Nyquist plot of the system L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)/[(s + 1)((s - 0.2)^2 + 4)((s - 0.2)^2 + 25)] with K = 5 is given in Fig. 6.64 for positive frequencies. The computed PMs at the points W, X, Y, and Z are [-20.1638, 79.9438, -64.2226, 92.3213] deg at the frequencies ω = [1.6991, 2.3168, 3.9104, 7.6743] rad/s. The real-axis crossings are: [-8.7032, -1.6722, -0.1363, -0.0953, 0, 0.4498]. Determine the stability of the system as well as its GM, PM, and DM.
In order for the system to be stable, its plot must encircle the point -1 four times CCW. This has taken place (in the region (C B)), and thus the system is stable. Because |-1.6722| < |-1/0.1363|, GM2 = 1/1.6722 ≈ -4.4658 db is the main GM. As for the PMs, it is observed that all of them are acceptable; thus PM2 = -20.16 deg is the main PM of the system, being the smallest in absolute value. Finally, the meaningful or realizable DM is computed at either of the acceptable positive PMs (X and Z). Because 79.9438/2.3168 > 92.3213/7.6743, the DM should be computed at the point Z; thus DM = 92.3213 × (π/180)/7.6743 = 0.2100 seconds. Of course, if we consider the realization of delay in the form of a lead controller, the DM should be computed at the point W and is -0.2071 seconds.
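The "main GM" comparison in this solution (|-1.6722| versus |-1/0.1363|) amounts to picking, among the candidate gain margins -1/c for the negative crossings c, the one closest to 0 db. One way to code that rule (our formulation, not the book's):

```python
import math

def main_gm(neg_crossings):
    # Candidate gain margins -1/c for each negative real-axis crossing c;
    # the main one is the candidate closest to 0 db.
    gms = [-1 / c for c in neg_crossings]
    return min(gms, key=lambda g: abs(20 * math.log10(abs(g))))

gm = main_gm([-8.7032, -1.6722, -0.1363, -0.0953])
print(round(gm, 4), "=", round(20 * math.log10(gm), 4), "db")   # 0.598 = -4.4658 db
```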

Figure 6.64 Nyquist plot of Problem 6.20.

Problem 6.21: The Nyquist plot of the system L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)/[(s - 0.1)((s - 0.2)^2 + 4)((s - 0.2)^2 + 25)] with K = 2 is given in Fig. 6.65 for positive frequencies. The computed PMs at the points X, Y, and Z are [54.6546, -72.2029, 74.9718] deg at frequencies ω = [0.1459, 4.4287, 5.8504] rad/s. From left to right the real-axis crossings are: [-3.6563, -1.7992, -0.8977, -0.0434, -0.0393, 0]. Determine the stability of the system as well as its GM, PM, and DM.
For stability the point -1 should be encircled five times CCW. This has not happened with the given value of the gain, and thus the system is unstable. It takes place in the region (C B). Thus the gain of the system must be changed from K = 2 to KK', where K' ∈ (-1/B, -1/C). The border value of K' is the GM of the system, which is GM1 = -1/C = 1.1139 ≈ 0.9369 db. On the other hand, none of the PMs is acceptable. We denote this by PM = Inf, DM = Inf.


Figure 6.65 Nyquist plot of Problem 6.21.

Note that for K = 4 the plot is enlarged and the system has five gain crossover frequencies and thus five PMs: [70.1598, -51.8047, 50.3132, -79.1778, 82.7046] deg, at V, W, X, Y, Z in the increasing order of frequency. The system is stable and all the computed PMs are acceptable. The main one is at the point X: PM1 = 50.3132 deg.

Problem 6.22: Investigate the acceptance (or otherwise) of the computed PMs for the system of Example 6.18 with K = 10^7 and K = 20,000, as discussed in Example 6.40.
The Nyquist plots are as that of Problem 6.14 except that around the origin the plot has the following form: with K = 10^7 it is given in the left panel of Fig. 6.66, and with K = 20,000 in the right panel of the same figure. We thus conclude that in the former case the computed PM is not acceptable, but in the latter case it is.

Figure 6.66 Problem 6.22. Left: K = 10^7; Right: K = 20,000.

Question 6.6: Using the values of the real-axis crossings of the worked-out Problem 6.14, how should we find the real-axis crossings and the GM of the present system in each case?

Problem 6.23: Investigate the acceptance (or otherwise) of the computed PMs for the system of Example 6.41.

Nyquist plot

523

The Nyquist plots of the system for K = 50, K = 100, and K = 10,000 are given in the left, middle, and right panels of Fig. 6.67. The plot starts at the origin and goes into the first quadrant in the direction of the arrow. It is therefore easy to conclude that in the first case the smaller PM (at the point Z) is acceptable (but the larger one at Y is not), and that in the other cases none of the computed PMs is acceptable.

Figure 6.67 Problem 6.23. Left: K = 50; Middle: K = 100; Right: K = 10,000.

Question 6.7: With K = 1, from left to right the real-axis crossings are: [-0.0383, 0.0068, 0.0237]. What is the GM of the system in each case?

Problem 6.24: A system has approximately GM1 ≈ 23 db, GM2 ≈ -5 db, and PM ≈ 70 deg at ω ≈ 2.5 rad/s. Comment on the stability margins of the system. (Note that the system is that of Example 6.28, but suppose that you do not know it, and hence do not refer to it!)
Because GM1 > |GM2|, the main GM is GM2. The PM = PM1 is also uniquely given. Assuming its acceptance, we find DM ≈ 70 × (π/180)/2.5 = 0.4887 seconds. Note that if the bandwidth of the system were also given, it would not be informative.

6.7 Exercises

Over 200 exercises are offered in the form of various collective problems below. It is advisable to try as many as your time allows. In cases in which you must draw the Nyquist plot with the help of MATLAB, be mindful of the discussion of Example 6.35.
Exercise 6.1: Does it make sense to talk about the unit of |L|? Explain.
Exercise 6.2: Repeat Examples 6.1 and 6.2 if the Cs contour is rotated about its center by 45 deg.
Exercise 6.3: Can a Nyquist plot intersect itself? How about sliding over itself? If the answer is positive, what does that mean? Discuss.
Exercise 6.4: This question has two parts. (1) The Nyquist plots of two systems have similar low- and high-frequency shapes, but different mid-frequency shapes. What can be said about these systems? (2) Is it possible that the situation in (1) happens in the reverse direction, i.e., where the terms similar and different are interchanged? If so, what can be said about these systems?


Exercise 6.5: Design a system whose Nyquist plot has several real-axis crossings. Explain the procedure.
Exercise 6.6: Design a system whose Nyquist plot has several j-axis crossings. Explain the procedure. Do j-axis crossings in the Nyquist plot context have any relevance?
Exercise 6.7: How do the GM and PM of a system change if we increase the gain?
Exercise 6.8: Let L(s) = K/[s(s^2 + 0.1s + 1)]. Show that the Nyquist plot has a seemingly sharp corner, like a cusp point.
Exercise 6.9: Verify the low-frequency end behavior of the systems
1. L(s) = (s + 1)/[s(s - 1)]
2. L(s) = (s^2 ± 10)/[s(s + 1)]
3. L(s) = (s - 1)/[s^N(s + 1)]
4. L(s) = (s^2 ± 1)/[s^N(s + 10)]
5. L(s) = (s^2 ± 1)/[s^N(s^2 + s + 1)], N = 1, ..., 5
Exercise 6.10: Reconsider Example 6.7. Use the derivation for Re L and Im L and conclude that as ω → 0, Re L and Im L tend to infinity. Also, draw the complete Nyquist plot of the system as a closed contour.
Exercise 6.11: Draw the Nyquist plot of the following systems and find the range of K for stability.
1. L(s) = K(s + 1)/[s(s - 1)]
2. L(s) = Ks(s + 1)/(s - 1)
3. L(s) = K(s^2 + 1)/(s - 1)
4. L(s) = K(s^2 + 1)/[(s - 1)(s + 2)]
5. L(s) = K(s + 1)/[s^2(s - 1)]
6. L(s) = K/[s(s - 1)]
7. L(s) = K(s - 1)/(s^2 + 1)
8. L(s) = K(s - 2)/[s(s + 1)]
9. L(s) = Ks^2/(s - 1)^2
10. L(s) = K(s + 1)/s^2
11. L(s) = K(s + 1)/(s^2 + 1)
12. L(s) = Ks/(s^2 + 1)
13. L(s) = K(s + 2)/[s^2(s + 1)]
14. L(s) = K(s + 1)/[s^2(s + 2)]
15. L(s) = K/[s(s + 1)^2]
16. L(s) = Ks^2/(s + 1)^3
17. L(s) = K(s^2 + 4)/[s(s^2 + 1)]
18. L(s) = K(s^2 + 1)/[s(s^2 + 4)]
19. L(s) = K(s^2 + 1)/[s(s - 1)^2]
20. L(s) = Ks(s^2 + 4)/(s^2 + 1)
21. L(s) = Ks(s^2 + 1)/(s^2 + 4)
22. L(s) = Ks(s^2 + 4)/(s + 1)^2


23. L(s) = Ks(s^2 + 4)/(s - 1)^2
24. L(s) = K/[(s + 1)(s^2 + 1)]
25. L(s) = K(s + 10)(s^2 + 1)/(s - 1)^3
26. L(s) = Ks(s^2 + 4)/[(s + 2)(s - 1)^2]
27. L(s) = K(s + 2)(s + 3)/[(s - 1)(s + 1)^2]
28. L(s) = 10Ks(s^2 + 9)/[(s + 10)(s - 1)^2]
29. L(s) = Ks(s^2 + 4)/[(s + 2)(s - 1)^2]
30. L(s) = Ks(s^2 + 4)/(s^2 - 1)^2
31. L(s) = K(s^2 + 1)/[s(s - 1)^3]
32. L(s) = Ks(s^2 + 100)/[(s^2 + 25)(s^2 + 900)]
33. L(s) = Ks(s^2 + 25)/[(s^2 + 100)(s^2 + 900)]
34. L(s) = K(s^2 + 4)/[(s^2 + 1)(s^2 + 9)]
35. L(s) = K(s^2 + 1)^2/(s - 1)^4
36. L(s) = Ks(s^2 + 1)/(s^2 + 9)^2
37. L(s) = K(s^2 + 1)(s^2 + 9)/[s(s^2 + 4)(s^2 + 36)]
38. L(s) = K(s + 1)^4/[s(s - 1)^4]
39. L(s) = Ks^2(s^2 + 4)/(s - 1)
40. L(s) = Ks^2(s^2 + 1)^2/(s - 1)^6
41. L(s) = K(s + 1)^4/[s(s^2 + 1)^2]
42. L(s) = K(s + 1)^5/[s^2(s^2 + 1)^2]
43. L(s) = K(s + 1)^5/[s^3(s^2 + 1)^2]
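For hand-checking any item in this list, a crude numerical crossing finder is handy: bracket a sign change of Im L(jω) and bisect. The sketch below (ours, not the book's) runs it on item 27 with K = 1:

```python
def L(w):
    # Item 27 with K = 1: L(s) = (s + 2)(s + 3)/((s - 1)(s + 1)^2).
    s = 1j * w
    return (s + 2) * (s + 3) / ((s - 1) * (s + 1)**2)

def crossing(w_lo, w_hi, tol=1e-10):
    # Bisect a sign change of Im L between w_lo and w_hi.
    while w_hi - w_lo > tol:
        mid = 0.5 * (w_lo + w_hi)
        if L(w_lo).imag * L(mid).imag <= 0:
            w_hi = mid
        else:
            w_lo = mid
    return 0.5 * (w_lo + w_hi)

w = crossing(0.5, 2.0)
print(round(w, 6), round(L(w).real, 4))   # crossing at w = 1 with value -2.5
```

Together with L(0) = -6, this gives the two negative real-axis crossings needed for the range of K.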

Exercise 6.12: Verify the answers of the previous Exercise 6.11 by the root locus method.
Exercise 6.13: Draw the Nyquist plot of the systems discussed in the text for other values of the gain, find the real-axis crossings and all gain and phase crossover frequencies, and determine their stability, GM, PM, and DM. For the sake of brevity we mention only one case. Reconsider the system of the worked-out Problem 6.20, given by L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)/[((s - 0.2)^2 + 4)((s - 0.2)^2 + 25)(s + 1)]. The Nyquist plot for several different values of the gain, K = -55, K = -45, K = -15, and K = -10, is provided in Fig. 6.68. Do the above for this system. The data in each case are as follows; note that the real-axis crossings are from left to right.
K = -55
Real axis crossings: [-4.9480, 0, 1.0480, 1.4990, 18.3824, 96.1538]
PM: [-88.8539, -155.5927, -89.3821] deg
PM frequency: [0.9304, 1.0566, 55.3345] rad/s
K = -45
Real axis crossings: [-4.0486, 0, 0.8575, 1.2264, 15.0602, 78.1250]
PM: [-73.8973, -170.1374, -133.7611, 160.4339, -89.2491] deg
PM frequency: [0.8858, 1.0949, 2.9367, 3.0684, 45.4089] rad/s
K = -15


Figure 6.68 Nyquist plots of Exercise 6.13.

Real axis crossings: [-1.3493, 0, 0.2858, 0.4088, 5.0176, 26.1097]
PM: [-29.2709, 158.2142, -96.7622, 122.2039, -88.0219] deg
PM frequency: [0.4691, 1.3395, 2.6919, 3.3497, 16.1666] rad/s
K = -10
Real axis crossings: [-0.8995, 0, 0.1906, 0.2725, 3.3445, 17.4216]
PM: [154.9964, -94.1696, 117.9306, -87.4877] deg
PM frequency: [1.4646, 2.5637, 3.5209, 11.6538] rad/s
Exercise 6.14: With the help of MATLAB, draw the Nyquist plot of the following systems for different values of the gain, find the real-axis crossings and all gain and phase crossover frequencies, and determine their stability, GM, PM, and DM.
1. L(s) = K(s^2 + 2s + 2)/[s(s + 0.2)^2]
2. L(s) = K(s^2 + 2s + 2)/[s((s + 0.2)^2 + 0.2^2)]
3. L(s) = K(s^2 + 2s + 2)/[(s + 0.1)((s + 0.2)^2 + 0.2^2)]
4. L(s) = K(s + 10)^2/[s(s + 1)^2]
5. L(s) = K(50s + 1)(20s + 1)(s + 5)(s + 10)/[s(1000s + 1)(100s + 1)(5s + 1)(s + 1)]
6. L(s) = K(s + 10)(s + 5)(s + 0.05)(s + 0.02)/[(s + 100)(s + 1)(s + 0.2)(s + 0.01)(s + 0.001)(s - 0.0005)]
7. L(s) = K(s + 1000)(s + 500)(s + 10)(s + 5)(s + 0.05)(s + 0.02)/[(s + 10000)^2(s + 100)(s + 50)(s + 1)(s + 0.2)(s + 0.01)(s + 0.001)(s - 0.0005)]
8. L(s) = K(s^2 + 2s + 4)/[s(s + 5)^2(s^2 + 1.5s + 1)]
9. L(s) = K(50s + 1)(20s + 1)/[s(1000s + 1)(100s + 1)(5s + 1)(s + 1)]
10. L(s) = K(s + 0.1)(s + 0.2)/[(s + 0.001)(s + 0.01)^2(s + 1)]
11. L(s) = Ks(s + 1)(s + 2)/[(s + 10)(s + 100)(s + 200)]
12. L(s) = Ks(s + 1)(s + 2)/[(s + 10)(s + 100)(s + 200)^2]
13. L(s) = K(s + 100)^2/[(s ± 0.01)(s + 0.1)(s + 0.5)(s + 1000)^2]
14. L(s) = K(s^4 + s^3 + 12s^2 + 54s + 16)/(s^5 + 11s^4 + 22s^3 + 60s^2 + 47s + 25)
15. L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)/[((s - 0.2)^2 + 4)((s - 0.2)^2 + 25)(s + 1)]


16. L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)/[((s + 0.2)^2 + 4)((s - 0.2)^2 + 25)(s + 1)]
17. L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)/[((s - 0.2)^2 + 4)((s - 0.2)^2 + 25)(s - 1)]
18. L(s) = K((s + 0.1)^2 + 1)((s + 0.1)^2 + 9)(s - 0.1)/[((s + 0.1)^2 + 4)((s + 0.1)^2 + 25)(s + 1)]
19. L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)(s + 3)/[(s + 0.1)((s - 0.2)^2 + 4)((s - 0.2)^2 + 25)]
20. L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)(s + 3)/[(s - 0.1)((s + 0.2)^2 + 4)((s + 0.2)^2 + 25)]
21. L(s) = K((s - 0.1)^2 + 1)((s + 0.1)^2 + 9)(s + 3)/[(s - 0.1)((s - 0.2)^2 + 4)((s + 0.2)^2 + 25)]
22. L(s) = K(s - 0.1)((s + 0.1)^2 + 1)((s - 0.1)^2 + 9)/[(s + 1)((s - 0.2)^2 + 1)((s - 0.2)^2 + 9)]
23. L(s) = K(s - 1)((s + 0.1)^2 + 1)((s + 0.1)^2 + 9)/[(s + 0.1)((s - 0.2)^2 + 1)((s - 0.2)^2 + 9)]
24. L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)((s - 0.1)^2 + 36)/[(s - 0.1)((s - 0.2)^2 + 4)((s - 0.2)^2 + 25)((s + 0.2)^2 + 49)]
25. L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)((s - 0.1)^2 + 36)/[(s + 1)((s - 0.2)^2 + 4)((s - 0.2)^2 + 25)((s - 0.2)^2 + 49)]
26. L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)((s - 0.1)^2 + 36)/[(s - 0.1)((s - 0.2)^2 + 4)((s - 0.2)^2 + 25)((s - 0.2)^2 + 49)]
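Before drawing anything, the ω = 0 values give a free sanity check against the tabulated data: the Exercise 6.13 system with K = -15 and item 16 here with K = 2 both list L(0) among their real-axis crossings (-1.3493 and 0.1799, respectively). A quick Python check (ours):

```python
def L13(s, K):   # the Exercise 6.13 plant
    return K * ((s - 0.1)**2 + 1) * ((s - 0.1)**2 + 9) / (
        ((s - 0.2)**2 + 4) * ((s - 0.2)**2 + 25) * (s + 1))

def L16(s, K):   # item 16 above
    return K * ((s - 0.1)**2 + 1) * ((s - 0.1)**2 + 9) / (
        ((s + 0.2)**2 + 4) * ((s - 0.2)**2 + 25) * (s + 1))

print(round(L13(0, -15), 4))   # -1.3493
print(round(L16(0, 2), 4))     # 0.1799
```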

Here again, for the sake of brevity, we mention only one case. Consider the system in item (16), given by L(s) = K((s - 0.1)^2 + 1)((s - 0.1)^2 + 9)/[((s + 0.2)^2 + 4)((s - 0.2)^2 + 25)(s + 1)]. The Nyquist plot for some different values of the gain, K = 2, K = 10, K = 40, and K = 60, is provided in Fig. 6.69. Do the above for this system. The data in each case are as follows; note that the real-axis crossings are from left to right.
K = 2
Real axis crossings: [-3.2457, -0.0480, -0.0398, 0, 0.1799, 0.6795]
PM: [-44.9160, 94.3430] deg
PM frequency: [4.4411, 5.8355] rad/s
K = 10

Figure 6.69 Nyquist plots of Exercise 6.14.


Real axis crossings: [-16.2338, -0.2400, -0.1990, 0, 0.8995, 3.3979]
PM: [-59.3661, 129.9214, -42.9367, 96.5644] deg
PM frequency: [1.4646, 2.5637, 3.5209, 11.6538] rad/s
K = 40
Real axis crossings: [-64.9351, -0.9602, -0.7959, 0, 3.5984, 13.5870]
PM: [100.6833, -14.3770, 84.0477, 22.7868, 91.9766] deg
PM frequency: [0.8593, 1.1162, 2.9132, 3.0931, 40.4603] rad/s
K = 60
Real axis crossings: [-97.0874, -1.4403, -1.1939, 0, 5.3967, 20.3666]
PM: [67.2662, 18.4416, 91.3283] deg
PM frequency: [0.9527, 1.0362, 60.3067] rad/s
Exercise 6.15: The area enclosed by the Nyquist plot of a stable system is related to its Hilbert-Schmidt-Hankel norm. Derive the condition. The problem is taken from (Hanzon, 1992).
Exercise 6.16: Let G(s) be a transfer function with relative degree one and a pair of pure imaginary zeros ±jω0 of multiplicity one. Now, consider a sampled system composed of a zero-order hold, G(s), and a sampler in series. Then the sampled system will have a zero corresponding to jω0 of G(s). Such a zero is called an intrinsic zero. For the above sampled system prove that: If the Nyquist plot of G(s) approaches the origin from the left- (right-) hand side of the complex plane as ω increases to ω0, the intrinsic zeros of the sampled system corresponding to ±jω0 lie inside (outside) the unit circle for sufficiently small sampling periods. You had better come back to this problem after taking the third undergraduate course in control. The problem is adopted from (Ishitobi and Kashiwamoto, 2002).
Exercise 6.17: Repeat Example 6.44 for the system L(s) = 0.3(s^2 + 0.25s + 0.6)/[s(s + 2.5)(s^2 + 0.05s + 0.5)].
Exercise 6.18: Consider a strictly proper open-loop system L(s) with an unstable pole, where the other poles and zeros are stable. (1) Show that for stability of the closed-loop system it is necessary that L(0) < -1. (2) Is this also a sufficient condition? (3) What if the delay term e^{-sT} is also included? (4) What if an NMP zero is also included? (5) What if both a delay term and an NMP zero are included?
Exercise 6.19: In drawing the Nyquist plot, in the case of j-axis poles and zeros we made indentations to the right around these points. Can we do this to the left? Discuss.
Exercise 6.20: Consider the correct version of Fig. 6.8, which shows the low-frequency behavior of the system. Depict a figure which shows the high-frequency behavior of the system versus its type. Discuss.
Exercise 6.21: This question has several parts. (1) What is the relation between the Nyquist plots of the systems L(s) and 1/L(s)? (2) What is the relation between the Nyquist plots of the systems L(s) and s^k L(s), k > 0? (3) How about L(s) and s^{-k} L(s), k > 0? (4) How about L1(s) + L2(s) and its constituents L1(s), L2(s)? (5) How about L1(s)L2(s) and its constituents L1(s), L2(s)? (6) How about L(s) and 1/(1 + L(s))? (7) How about L(s) and (Ts + 1)L(s)? (8) How about L(s) and (Ts + 1)^{-1} L(s)? (9) How about L(s) and e^{-sT} L(s)?
Exercise 6.22: Draw the Nyquist plot of the sequel systems.
1. L(s) = e^{-sT}
2. L(s) = e^{-sT}/s
3. L(s) = e^{-sT}/(s + p)
4. L(s) = e^{-sT}/[s(s + p)]
5. L(s) = e^{-sT} s/(s + p)
6. L(s) = e^{-sT}(s + z)/[s(s + p)]
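For item 2 of Exercise 6.22, a few sample points already show what the delay does: the magnitude stays 1/ω while the phase falls by ωT on top of the integrator's -90 deg, so the plot spirals into the origin. A Python sketch with T = 1 (ours; note cmath.phase wraps angles to the principal range):

```python
import cmath
import math

T = 1.0
def L(w):
    # Exercise 6.22, item 2: L(s) = e^{-sT}/s evaluated on the positive j-axis.
    s = 1j * w
    return cmath.exp(-s * T) / s

for w in (1.0, 2.0, 4.0):
    v = L(w)
    print(f"w = {w}: |L| = {abs(v):.3f}, "
          f"phase = {math.degrees(cmath.phase(v)):.1f} deg (wrapped)")
```

Unwrapped, the phase is -90 - ωT·(180/π) deg, decreasing without bound as ω grows, which is the spiral behavior the exercise asks you to draw.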


Exercise 6.23: How can we analyze and measure the bandwidth (BW) of the system, either open-loop or closed-loop, in the Nyquist plot context? You had better come back to this exercise after studying M-circles in Chapter 8.
Exercise 6.24: Consider a classical 1-DOF control structure whose GM, PM, DM, and BW are known. Now consider its 2-DOF and 3-DOF extensions by the inclusion of a prefilter and a disturbance feedforward compensator. (1) What can be said about the system features of the 2-DOF structure? (2) How about its 3-DOF control structure? (3) What if the second degree of freedom is inserted in the feedback loop?
Exercise 6.25: Discuss computing and determining the GM, PM, DM, BW, stability, and sensitivity of a nonunity feedback structure.
Exercise 6.26: Assume that we draw the Nyquist plot of L/(1 + L) and analyze it. What is the interpretation of the result?
Exercise 6.27: How can we identify a system whose Nyquist plot is given? In particular, how can we find out its type? How about the error constants?
Exercise 6.28: A system is given by L(s) = e^{-sT} L1(s), where L1(s) is rational. Can the panels in Fig. 6.70 show the Nyquist plot of this system? Note that the solid part of the plot (i.e., the closed circle-like part) is the focus of the question; the dashed part does not matter much. If the answer is positive, provide an example. Discuss.

Figure 6.70 Exercise 6.28, the Nyquist plots of interest.

References

Barkhausen, H., 1935. Lehrbuch der Elektronen-Röhren und ihrer technischen Anwendungen, 4th ed. S. Hirzel, Leipzig.
Basiri, M.H., Tavazoei, M.S., 2015. On robust control of fractional order plants: invariant phase margin. ASME J. Comput. Nonlinear Dyn. 10 (5), 054504.
Bochner, S., Chandrasekharan, K., 1949. Fourier Transforms. Princeton University Press, Princeton, NJ.
Bode, H.W., 1945. Network Analysis and Feedback Amplifier Design. Van Nostrand, New York.
Chen, D., Seborg, D.E., 2002. Robust Nyquist array analysis based on uncertainty descriptions from system identification. Automatica 38 (3), 467-475.
Dantowitz, P., Shafai, B., Shafai, E., 1992. Phase shifter: a nonlinear circuit for direct measurement of phase margin in feedback control systems. In: Proceedings of the 35th Midwest Symposium on Circuits and Systems, Washington, DC, USA, pp. 365-368.
DeCarlo, R., Saeks, R., Murray, J., 1977. A Nyquist-like test for the stability of two-dimensional digital filters. Proc. IEEE 65 (6), 978-979.
Dolgin, Y., Zeheb, E., 2009. Finite Nyquist and finite inclusions theorems for disjoint stability regions. Syst. Control Lett. 58 (12), 804-809.

530

Introduction to Linear Control Systems

Fang, C.-C., 2013. Using Nyquist or Nyquist-like plot to predict three typical instabilities in DC-DC converters. J. Franklin Inst. 350 (10), 3293-3312.
Hagiwara, T., 2002. Nyquist stability criterion and positive-realness of sampled-data systems. Syst. Control Lett. 45 (4), 283-291.
Hanzon, B., 1992. The area enclosed by the (oriented) Nyquist diagram and the Hilbert-Schmidt-Hankel norm of a linear system. IEEE Trans. Autom. Control 37 (6), 835-839.
Helton, J.W., 1976. Operator theory and broadband matching. In: Proceedings of the 11th Annual Allerton Conference on Circuits and Systems Theory, University of Illinois at Urbana-Champaign.
Hollot, C.V., Tempo, R., 1994. On the Nyquist envelope of an interval plant family. IEEE Trans. Autom. Control 39 (2), 391-396.
Hsu, C.S., 1980. Theory of index and a generalized Nyquist criterion. Int. J. Nonlinear Mech. 15 (4), 349-354.
Huang, J., Li, Z., Yann Liaw, B., Zhang, J., 2016. Graphical analysis of electrochemical impedance spectroscopy data in Bode and Nyquist representations. J. Power Sources 309, 82-98.
Ishitobi, M., Kashiwamoto, K., 2002. Nyquist stability criterion of intrinsic zeros for continuous-time pure-imaginary zeros. Automatica 38, 2185-2187.
Lindberg, E., May 2010. The Barkhausen criterion (observation?). In: Proceedings of the 18th IEEE Workshop on Nonlinear Dynamics of Electronic Systems, Dresden, Germany, pp. 15-18.
Logemann, H., 1989. On the Nyquist criterion and robust stabilization for infinite-dimensional systems. In: Robust Control of Linear Systems and Nonlinear Control, Amsterdam.
Lovisari, E., Jonsson, U.T., 2010. A Nyquist criterion for synchronization in networks of heterogeneous linear systems. IFAC Proc. 43 (19), 103-108.
Munro, N., 1972. Multivariable systems design using the inverse Nyquist array. Comput. Aided Des. 4 (5), 222-227.
Nyquist, H., 1932. Regeneration theory. Bell Syst. Tech. J. 11 (1), 126-147.
Pal, D., Belur, M.N., 2013. Nyquist plots, finite gain/phase margins & dissipativity. Syst. Control Lett. 62 (10), 890-894.
Postlethwaite, I., Tombs, M.S., Foo, Y.K., Loh, A.P., 1985. On the relationship between Lehtomaki's robustness test and an inverse Nyquist based test. IEEE Trans. Autom. Control 30 (9), 927-928.
Sasane, A., 2010. An abstract Nyquist criterion containing old and new results. J. Math. Anal. Appl. 370, 703-715.
Tan, N., 2004. Robust phase margin, robust gain margin, and Nyquist envelope of an interval plant family. Comput. Electr. Eng. 30 (2), 153-165.
Vidyasagar, M., Bertschmann, R.K., Sallaberger, C.S., 1988. Some simplifications of the graphical Nyquist criterion. IEEE Trans. Autom. Control 33 (3), 301-305.
Yang, X., Zhu, J.J., Hodel, A.S., 2015. Singular perturbation margin and generalised gain margin for linear time-invariant systems. Int. J. Control 88 (1), 11-29.
Zanasi, R., Grossi, F., Biagiotti, L., 2015. Qualitative graphical representation of Nyquist plots. Syst. Control Lett. 83, 53-60.
Zhou, J., 2013. Contraposition 2-regularized Nyquist stability criteria for linear continuous-time periodic systems. IFAC Proc. 46 (12), 149-154.
Zhou, J., Hagiwara, T., 2005. 2-Regularized Nyquist criterion for linear continuous-time periodic systems and its implementation. SIAM J. Control Optim. 44 (2), 618-645.

7 Bode diagram

7.1 Introduction

In the context of the Nyquist plot we observed that the Nyquist plot of a system has no tangible relation with the Nyquist plots of its constituent components. Because the relation is multiplicative, it is not easy to deduce the plot of the transfer function from those of its constituting elements. If we take the logarithm of the system transfer function, however, there is a straightforward relation between the logarithm of the transfer function and the logarithms of its constituent components: the relation is addition. The plotting of frequency/sinusoidal transfer functions can thus be systematized and simplified by using logarithmic plots. This is what H. W. Bode proposed in 1938. More precisely, he proposed to draw the logarithm of the magnitude of the system transfer function versus frequency in logarithmic scale, as well as the phase of the system transfer function versus frequency in logarithmic scale. The two diagrams together, i.e., logarithmic magnitude versus logarithmic frequency plus phase versus logarithmic frequency, are called Bode diagrams. Other advantages of this approach are: (1) expanding the low-frequency range, which is of primary importance because it corresponds to the steady state (infinite time); (2) reduction of the mathematical operations of multiplication and division to addition and subtraction (as will be seen, by a trick we will use only addition); (3) reduction of the analytical work of obtaining the transfer function to simple graphical work. If the plant is stable, we simply apply a sinusoid to its input, draw its Bode diagram, and then by the method that we will explain we deduce the system transfer function from the Bode diagram. If the plant is unstable it should first be stabilized. Finding the transfer function of the plant from the transfer function of the stable closed-loop system which has the controller C as its stabilizer is then very simple: there holds CP/(1 + CP) = Q, thus P = (1/C)Q/(1 - Q).
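The plant-recovery identity at the end of this paragraph can be checked numerically. The sketch below uses a hypothetical first-order unstable plant and a static stabilizing controller (both invented for illustration, not taken from the text) and verifies P = (1/C)Q/(1 - Q) along the j-axis:

```python
# Check of the identity P = (1/C) * Q / (1 - Q), where Q = CP/(1 + CP).
# Hypothetical illustration: P(s) = 1/(s - 1) is unstable, and C(s) = 5
# stabilizes it, since 1 + CP = (s + 4)/(s - 1) has its only zero at s = -4.

def P(s):
    return 1.0 / (s - 1.0)

def C(s):
    return 5.0

def Q(s):
    """Closed-loop transfer function CP/(1 + CP), obtainable by experiment."""
    cp = C(s) * P(s)
    return cp / (1.0 + cp)

def P_recovered(s):
    """Recover the plant from the measured closed loop."""
    q = Q(s)
    return (1.0 / C(s)) * q / (1.0 - q)

# The recovered frequency response matches the true plant on the j-axis.
for w in (0.1, 1.0, 10.0):
    s = 1j * w
    assert abs(P(s) - P_recovered(s)) < 1e-12
```

In practice Q(jω) would come from a frequency-response experiment on the stabilized loop rather than from a formula.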
On the other hand, as we discussed in Chapter 6, it is instructive to learn how to draw the Bode diagram of unstable systems as well.
Question 7.1: If the system has no CRHP pole except that it is of type 1, 2, or 3, can we perform the frequency response experiment without stabilizing the system?
As in the previous Chapter 6, Nyquist Plot, there are many false guidelines in the literature regarding GM, PM, DM, and stability, especially for systems with multiple crossover frequencies and/or nonminimum phase (NMP) zeros. We discuss them all in this chapter. The organization of the rest of this chapter is as follows. In Section 7.2 we introduce the Bode diagram and present various details for first- and second-order systems. The relation between the Bode diagram and steady-state error is considered in Section 7.3. Special attention is paid to the difference between
Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00007-0 © 2017 Elsevier Inc. All rights reserved.


minimum phase (MP), NMP, delayed, and delay-free systems in Section 7.4. In Section 7.5 we discuss the system features GM, PM, and DM, with detailed discussion of the case of multiple crossover frequencies. In Section 7.6 we consider the problem of stability determination in the Bode diagram context. In Section 7.7 we address the high sensitivity region. Relations with the Nyquist plot and the root locus are studied in Section 7.8. Special numerical and analytical details are provided for second-order systems in Section 7.9. Finally, bandwidth (BW) is studied in Section 7.10. The chapter proceeds to its summary, further readings, worked-out problems, and exercises in Sections 7.11-7.14. We wrap up the introduction with a short biography of H. W. Bode (1905-82), who was of Dutch ancestry. He initiated many research avenues in control systems theory and made significant contributions to the field. Among his most noticeable contributions, apart from the Bode diagrams, are sensitivity analysis (see further readings of Chapter 1, Introduction), the Bode integral theorem, and publicizing the GM and PM concepts after, but independently of, H. Barkhausen. Many of his and his contemporaries' contributions were driven by the needs of the wartime, like missile control and the need for an automated antiaircraft/missile gun/missile system, which were of course futile. He is also the author of the book Network Analysis and Feedback Amplifier Design, 1945, which was one of the most read books of the time.

7.2 Bode diagram

Before presenting the details of the Bode diagram, in the ensuing subsections we briefly review some points.

7.2.1 Logarithm
Logarithm of real numbers is used in this course. However, it is noteworthy that the logarithm applies also to complex numbers, and the logarithm of a complex number is also a complex number: log G(jω) = log(|G(jω)|e^{jφ(ω)}) = log|G(jω)| + log e^{jφ(ω)} = log|G(jω)| + j0.434φ(ω). Note that in this formula 10 is the base of the logarithm.
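The factor 0.434 is log₁₀e = 1/ln 10. A quick check of this formula with Python's cmath (an illustrative sketch, not from the text):

```python
import cmath
import math

# log10 of a complex number G = |G| e^{j*phi}:
#   log10 G = log10|G| + j * (phi / ln 10) = log10|G| + j * 0.434 * phi
G = 3.0 * cmath.exp(1j * 0.5)        # |G| = 3, phi = 0.5 rad (arbitrary example)
logG = cmath.log10(G)

assert abs(logG.real - math.log10(3.0)) < 1e-12    # real part: log10 of magnitude
assert abs(logG.imag - 0.5 / math.log(10.0)) < 1e-12   # imag part: 0.434 * phase
assert abs(1.0 / math.log(10.0) - 0.434) < 1e-3    # the 0.434 factor itself
```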

7.2.2 Decibel Decibel is the unit used for the logarithm of the magnitude. It is abbreviated by db or dB. Note that logarithm applies to the ratio of two numbers which do not necessarily have the same units.

7.2.3 Log magnitude
The logarithm of the magnitude is stated as log magnitude or Lm. The Lm of the transfer function G(jω) is defined as Lm G(jω) = 20 log|G(jω)| db, where the base of the logarithm is 10.


7.2.4 The magnitude diagram
The horizontal axis is frequency in logarithmic scale. That is, the distance between a frequency and ten times (or one tenth of) it, e.g., between 1 and 10 or 0.1, is divided in length proportionally to: log 1 = 0, log 2 = 0.3010, log 4 = 0.6020, log 8 = 0.9030, log 10 = 1. Thus the horizontal axis looks like Fig. 7.1.

Figure 7.1 Axis of frequency in logarithmic scale.

Note that frequency zero is located at the left end (the usual -∞ point) of the above axis. The low-frequency range of the spectrum (which is of primary interest in control systems) is thus expanded, and this makes visualization of the diagram easy; this is one of the advantages of this approach. Part of the logarithmic magnitude plane versus logarithmic frequency of the Bode diagram, or simply "the magnitude diagram," looks like that in Fig. 7.2, in which the frequency ω represents a power of 10.

Figure 7.2 Part of the magnitude diagram plane of the Bode diagram.

7.2.5 Octave and decade
On the horizontal axis in the above diagram, the distance between any two frequencies whose larger one is two times the smaller one is called an octave. That is, if f2/f1 = 2 then the interval [f1, f2] is one octave. Similarly, the distance between any two frequencies whose larger one is 10 times the smaller one is called a decade. In other words, if f2/f1 = 10 then the interval [f1, f2] is one decade. Given any two frequencies f1 and f2, how many octaves are there between them? The answer is log(f2/f1)/log 2 = 3.32 log(f2/f1). Similarly, there exist log(f2/f1)/log 10 = log(f2/f1) decades between them.
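These octave and decade counts are easy to verify numerically; the following is an illustrative sketch:

```python
import math

def octaves(f1, f2):
    """Number of octaves between f1 and f2: log(f2/f1)/log 2."""
    return math.log10(f2 / f1) / math.log10(2.0)

def decades(f1, f2):
    """Number of decades between f1 and f2: log10(f2/f1)."""
    return math.log10(f2 / f1)

assert abs(octaves(1.0, 2.0) - 1.0) < 1e-12       # f2/f1 = 2  -> one octave
assert abs(decades(1.0, 10.0) - 1.0) < 1e-12      # f2/f1 = 10 -> one decade
assert abs(octaves(2.0, 20.0) - 3.32) < 0.01      # one decade is ~3.32 octaves
```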


7.2.6 Some useful figures to remember
It is good to remember that if |G| = 1, then the log magnitude of G is Lm G = 20 log|G| = 0. Noting that log 2 = 0.3 and log 10 = 1, some other useful figures to remember are given in Table 7.1.

Table 7.1 Some useful relations to remember

Ratio    Decibel
200      46
100      40
10       20
2        6
1        0
0.5      -6
0.1      -20
0.01     -40
0.005    -46

Note that if the ratio is multiplied or divided by 2 then 6 db is added to or subtracted from its log magnitude. Also, if the ratio is multiplied or divided by 10 then 20 db is added to or subtracted from its log magnitude. Moreover, Lm 1/A = -Lm A. A particular usage of these scales is the slope ±20m db/dec ≈ ±6m db/oct. In the old literature both were used interchangeably; however, in the contemporary control literature the use of octave is rather obsolete.
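The entries of Table 7.1 and the multiply-by-2 and multiply-by-10 rules can be verified with a few lines of Python (illustrative only):

```python
import math

def lm(ratio):
    """Log magnitude in decibels: 20*log10(ratio)."""
    return 20.0 * math.log10(ratio)

# Table 7.1, rounded to the nearest decibel
table = [(200, 46), (100, 40), (10, 20), (2, 6), (1, 0),
         (0.5, -6), (0.1, -20), (0.01, -40), (0.005, -46)]
for ratio, db in table:
    assert abs(lm(ratio) - db) < 0.05

# multiplying by 2 adds ~6 db, by 10 adds exactly 20 db, and Lm(1/A) = -Lm(A)
assert abs(lm(2 * 3.7) - (lm(3.7) + 6)) < 0.05
assert abs(lm(10 * 3.7) - (lm(3.7) + 20)) < 1e-9
assert abs(lm(1 / 3.7) + lm(3.7)) < 1e-9
```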

7.2.7 Relation between the transfer function and its constituting components

Now denote G(jω) = N(jω) · (1/D(jω)). Thus,

Lm G(jω) = Lm N(jω) + Lm (1/D(jω)) and ∠G(jω) = ∠N(jω) + ∠(1/D(jω)).

Therefore, we need to study the basic constituting elements of the numerator and the reciprocal of the denominator. It is noteworthy that in this way we use only addition and not subtraction, i.e., we do not write the above as Lm G(jω) = Lm N(jω) - Lm D(jω) and ∠G(jω) = ∠N(jω) - ∠D(jω), for which we need subtraction. The constituting elements of the transfer function are: K, (jω)^{±m}, (1 + jωT)^{±m}, [1 + (2ζ/ωn)jω + (1/ωn²)(jω)²]^{±m}, which are studied in the sequel.
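The additivity of Lm and phase over the factors of G(jω) can be confirmed numerically. The factors below are arbitrary examples, chosen so that no phase wrapping occurs (in general the principal values returned by `cmath.phase` may need unwrapping):

```python
import cmath
import math

def lm(z):
    """Log magnitude in decibels."""
    return 20.0 * math.log10(abs(z))

w = 2.0
N = 1.0 + 1j * w * 1.0               # numerator factor (1 + jwT), T = 1
D = (1j * w) * (1j * w + 10.0)       # denominator jw * (jw + 10)
G = N * (1.0 / D)

# Lm and phase of G are the sums over the factors N and 1/D
assert abs(lm(G) - (lm(N) + lm(1.0 / D))) < 1e-9
assert abs(cmath.phase(G) - (cmath.phase(N) + cmath.phase(1.0 / D))) < 1e-9
```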

7.2.7.1 Gain K
Bode diagrams of this component are as follows. If K > 1 then Lm K > 0, and if K < 1 then Lm K < 0. The points 180 and -180 deg represent the same point, but it is preferred to use the standard convention of the counterclockwise (CCW) direction for measuring angles. It is good to know that, unfortunately, because of lack of a general consensus on a convention (either CCW or CW), different texts present angles differently.



Figure 7.25 Illustration of the GM computation.



a j-axis crossing's acceptance (or not) has nothing to do with its frequency. That is, unlike what some references claim, it is not necessarily at the largest or smallest frequency. Before reading the sequel example it is helpful to read Appendix C to learn how the Bode diagram is used for the GM/PM computation; see Examples C.11 and C.12.
Example 7.13: Reconsider Example 6.21 as we discussed in Example 6.23 of Chapter 6, Nyquist Plot. For K = 1 and K = 30 the Bode diagrams are respectively given in the left and right panels of Fig. 7.26. As is observed, there are five phase crossover frequencies,² and thus five computed GMs in each case. As we discussed in Example 6.23, in the first case the smallest (in absolute value) GM is not the correct answer, but in the second case it happens to be. That is, with K = 1 the answer output by MATLAB® is wrong, but with K = 30 the output answer happens to be correct.

Figure 7.26 Example 7.13, Left: K = 1, Right: K = 30.

Example 7.14: Reconsider the system L(s) = K × 100(s + 1)²/(s - 1)¹⁰ of the worked-out Problem 6.12. This system is not stabilizable by static gain control and thus GM = Inf. However, MATLAB® says that by increasing its gain by GM = -15.9 db it will be stabilized; this is obviously wrong. See Fig. 7.27.

² Happening in the phase diagrams, with the first one being at the frequency zero, which is at the left end of the figure.



Figure 7.27 Example 7.14 with K = 1.

2. As we have mentioned in Chapter 6, Nyquist Plot, contrary to the claims of some texts, the critical line is not prefixed to the -180 deg line; it is whichever of the lines ±180(2h + 1) deg the phase of the system is nearer to, i.e., the computed phase margin satisfies -180 ≤ PM ≤ 180. In negative angles, if the absolute value of the phase of the system is smaller than the absolute value of ±180(2h + 1), the phase margin is positive, and otherwise it is negative. In positive angles, the relation is the opposite. That is, if the value of the phase of the system is smaller than the value of 180(2h + 1) > 0, the phase margin is negative, and otherwise it is positive. For instance, if at the gain crossover frequency the phase of the system is -525, -578, 152, or 189, then the phase margin of the system is 15, -38, -28, or 9, respectively. A typical picture illustrating the PM computation is Fig. 7.28. The figure hints at a simple paraphrase of the situation: If the plot is above the respective line the PM is positive, and if it is below it the PM is negative. (Question: Can the other three situations (Lm↓, φ↑; Lm↑, φ↑; Lm↑, φ↓) happen?)
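The nearest-critical-line rule can be mechanized. The sketch below is a hypothetical helper (not from the text): it finds the odd multiple of 180 nearest to the phase and subtracts it, and it reproduces the four numerical examples above:

```python
def phase_margin(phase_deg):
    """PM measured from the nearest of the critical lines +/-180(2h+1) deg.

    Find the odd multiple of 180 nearest to the phase and return
    phase minus that line; the sign rules quoted in the text then
    follow automatically.
    """
    h = round((phase_deg / 180.0 - 1.0) / 2.0)
    candidates = (180.0 * (2 * h - 1), 180.0 * (2 * h + 1), 180.0 * (2 * h + 3))
    line = min(candidates, key=lambda c: abs(phase_deg - c))
    return phase_deg - line

# the four numerical examples of the text
assert phase_margin(-525.0) == 15.0
assert phase_margin(-578.0) == -38.0
assert phase_margin(152.0) == -28.0
assert phase_margin(189.0) == 9.0
```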

Now we consider Examples 6.40 and 6.41 and the worked-out Problem 6.12 in the following.



Figure 7.28 Illustration of the PM computation.

Example 7.15: Reconsider the system of Example 6.40. With K = 10⁷ the PM is measured with respect to the line -180 deg. With K = 20,000 it is measured with respect to the line 180 deg. They are provided in Fig. 7.29, left and right panels, respectively.

Figure 7.29 Example 7.15, Left: K = 10⁷, Right: K = 20,000.

In the first case the phase of the system is -89.5, which is nearer to the line -180 from among the lines ±180(2h + 1), resulting in PM = 90.5. (Note that if we compute it with respect to the line 180 we obtain -269.5, which is outside the range (-180, 180) and is thus wrong.) In the second case the phase of the system is 166.3, which is nearer to the line 180, yielding PM = -13.7. (Here also note that if we compute it with respect to the line -180 we obtain 346.3, which is outside the range (-180, 180) and therefore is wrong.)


Example 7.16: Reconsider the system of Example 6.41. With K = 50, PM = -100.3131, which is measured with respect to the line -180 deg. With K = 100, PM = -126.8367, which is measured with respect to the line 180 deg. With K = 10,000, PM = 39.1398, which is measured with respect to the line -540. From among the lines ±180(2h + 1) these lines are the nearest (in absolute value) to the phase angle of the system. See Fig. 7.30.

Figure 7.30 Example 7.16, Left: K = 50, Top right: K = 100, Bottom: K = 10,000.

Example 7.17: Reconsider Example 7.14, which was the system of the worked-out Problem 6.12. The Bode diagram is depicted in Fig. 7.27. As is observed, the phase margin is computed with respect to the line -1260. This line is the nearest (in absolute value) from among the lines ±180(2h + 1) to the phase of the system.


3. As discussed in Chapter 6, Nyquist Plot, the computed phase margin may not be acceptable. Unfortunately, MATLAB® is not able to analyze the answer (and does not verify it numerically) and takes for granted that the computed answer is acceptable. In the following examples we present some systems for which the computed phase margin is or is not acceptable.

Example 7.18: As we discussed in the worked-out Problem 6.22, the computed PM of Example 7.15 for K = 10⁷ is not acceptable. That is, it is wrong. However, for K = 20,000 it is acceptable. Bode diagrams are the same as those of Example 7.15 and thus are not reproduced here.

Example 7.19: As we discussed in the worked-out Problem 6.23, the computed PM of Example 7.16 for K = 50 is acceptable, but it is not for the other cases. That is to say, those computed answers are wrong. Bode diagrams are the same as those of Example 7.16 and hence we do not reproduce them here.

Example 7.20: Reconsider Example 7.14. Its computed PM is wrong because it does not change the stability condition of the system.

7.6 Stability in the Bode diagram context

As we know from Chapter 6, Nyquist Plot, contrary to what some references claim, it is not possible to determine the stability of the system by merely considering the signs of its GM and PM. So Bode diagrams are not conclusive if we read only the signs of the GM and PM from them. A higher level of analysis is needed and other factors have to be considered. However, what those factors are is not known to us at the moment. It is noteworthy that some false guidelines for analyzing the Bode diagrams in order to determine the stability condition can be found in the existing literature. We do not present them here. On the other hand, for the sake of brevity we do not present any further examples here. The student can use the commands "bode," "margin," or "allmargin" of MATLAB® on the example systems of Chapter 6, Nyquist Plot, for further illustration of the above arguments.


In the next Section 7.7 we study the high sensitivity region in the Bode diagram context. This is followed by the relation between the Bode diagram, Nyquist plot, and root locus.

7.7 The high sensitivity region

In Section 6.3.4 we studied the high sensitivity region. In this part we introduce this concept in the Bode diagram context. The high sensitivity region for a typical system with GM > 0 and PM > 0 is shown in Fig. 7.31 as the shaded strip around the critical lines and the 0-db line. Recall that a stable system may have any sign of PM and GM. This is the reason that the strips are considered "around" (i.e., both positive and negative) the aforementioned lines.

Figure 7.31 The high sensitivity region.

The widths of these strips depend on the application and the measure the designer has in mind. It should be clarified that the ±180(2h + 1) deg critical line is not necessarily the -180 deg line.

7.8 Relation with Nyquist plot and root locus

In Chapter 6 we learned that the imaginary-axis crossings in the root locus context are the real-axis crossings in the Nyquist plot context. These points are the critical line crossings in the Bode diagram context, as exemplified in Examples 7.15-7.17. On the other hand, the unit circle crossings in the Nyquist context are the 0-db line crossings in the Bode diagram context. It should be clarified that these latter points are not easily and transparently visible in the root locus context. For other examples in this context the reader is referred to Examples 6.42 and 6.43. It is instructive to study the Bode diagrams of the standard second-order system in more detail. This is done in Section 7.9 under the title Standard second-order systems. It will be followed by a discussion of the BW in the Bode diagram context in Section 7.10.

7.9 Standard second-order systems

We have already studied the frequency response (i.e., the time response for sinusoidal inputs) and BW of the second-order system in Chapter 4, Time Response, and also its Bode diagram in this chapter. Now we discuss its GM and PM. For the sake of completeness we also summarize those findings in the following.

Let the standard open-loop second-order system be given by P(s) := ωn²/[s(s + 2ζωn)].³ Thus the closed-loop system in the negative unity-feedback structure will be M(s) := ωn²/(s² + 2ζωn s + ωn²). Hence at steady state for sinusoidal inputs s = jω one obtains

M(jω) = ωn²/[(jω)² + 2ζωn(jω) + ωn²] = 1/[(1 - ω²/ωn²) + j(2ζω/ωn)].

Therefore,

|M| = 1/√[(1 - ω²/ωn²)² + (2ζω/ωn)²] and ∠M = tan^{-1}[-(2ζω/ωn)/(1 - ω²/ωn²)].

We define the subsequent terms: Peak resonance in the magnitude diagram is denoted by Mr. It is the frequency-domain counterpart⁴ of the maximum percent overshoot Mp of the time domain. In some sense both are measures of stability, as when they are large the oscillation in the output is large. Peak resonance frequency ωr is the frequency counterpart of the peak time tp of the time domain.⁵ Bandwidth is the frequency at which |M(jω)| reduces to 0.707 of its magnitude at frequency zero. For the generic system, the higher the BW the faster the system, i.e., the smaller the rise time. Note that this results in a larger Mr, which is hostile to stability. Roll-off rate is the slope of |M(jω)| at the BW frequency. This and the BW together specify the rejection of exogenous disturbances. For our system it is -40 db/dec. For general systems, the higher the roll-off rate, i.e., the steeper the magnitude diagram at the BW frequency, the better the disturbance rejection. In other words, the system better distinguishes the main signal (which is transmitted in the BW) from the disturbance (whose frequency is outside the BW and is filtered out). However, in general this results in poor stability margins. Details of their relation are outside the scope of this first course and are a topic of interest for many researchers. Nevertheless, some more discussions are presented in Chapter 10.

Now, after derivation and manipulation, the frequency ωr at which the overshoot of |M|, if any, occurs, and the magnitude of |M| at that frequency, i.e., the maximum of |M|, are obtained as follows: ωr = ωn√(1 - 2ζ²), ζ ≤ 0.707, and Mr = 1/(2ζ√(1 - ζ²)). Also BW = ωn[1 - 2ζ² + √(4ζ⁴ - 4ζ² + 2)]^{1/2}.

³ This is the same as the transfer function which we usually denoted by T. The denotation M of this chapter and N of the next chapter originate from the mid 20th century. The denotation T for the complementary sensitivity transfer function dates back to the 1970s when the S & T studies were at issue.
⁴ Not to be mistaken as equivalent; they are different. See also Section 10.2.
⁵ The denotations Mr, ωr also date back to the mid 20th century. In Chapter 10 we shall introduce the contemporary denotations as MT = Tmax and ωT, respectively.


On the other hand, |P(jωgc)| = 1, thus ωgc = ωn[√(1 + 4ζ⁴) - 2ζ²]^{1/2}, whence we find PM = tan^{-1}(2ζ/[√(1 + 4ζ⁴) - 2ζ²]^{1/2}) ≈ 100ζ, ζ ≤ 0.6. Also, it is clear that the positive gain margin is infinite, GM = ∞, since the system is stable for all positive values of the gain. The following observations are made:

1. Mr and Mp depend only on ζ (inverse relation).
2. For ζ ≪ 1, Mr ≫ 1, whereas Mp is at most 1 (100%).
3. tr is directly proportional to ζ and inversely proportional to ωr.
4. BW is inversely proportional to ζ and directly proportional to ωn.
5. BW and tr are inversely proportional.
6. The larger the BW, the larger the Mr, the smaller the tr, and the poorer the stability.
7. For small ζ, ωr ≈ ωd.
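The closed-form expressions above are straightforward to code. The function below is an illustrative sketch that evaluates ωr, Mr, BW, ωgc, and PM for the standard second-order loop, and checks the PM ≈ 100ζ rule of thumb:

```python
import math

def second_order_features(zeta, wn=1.0):
    """Closed-form features of the standard second-order loop (sketch)."""
    f = {"GM": math.inf}                                  # stable for all K > 0
    if zeta <= 0.707:
        f["wr"] = wn * math.sqrt(1.0 - 2.0 * zeta ** 2)   # resonance frequency
        f["Mr"] = 1.0 / (2.0 * zeta * math.sqrt(1.0 - zeta ** 2))
    f["BW"] = wn * math.sqrt(1.0 - 2.0 * zeta ** 2
                             + math.sqrt(4.0 * zeta ** 4 - 4.0 * zeta ** 2 + 2.0))
    wgc = wn * math.sqrt(math.sqrt(1.0 + 4.0 * zeta ** 4) - 2.0 * zeta ** 2)
    f["wgc"] = wgc
    f["PM"] = math.degrees(math.atan2(2.0 * zeta * wn, wgc))
    return f

f = second_order_features(0.5)
assert abs(f["Mr"] - 1.1547) < 1e-3        # 1/(2*0.5*sqrt(0.75))
assert abs(f["wr"] - 0.7071) < 1e-3        # sqrt(1 - 2*0.25)
assert abs(f["BW"] - 1.2720) < 1e-3
assert abs(f["PM"] - 100 * 0.5) < 3.0      # PM ~ 100*zeta rule of thumb
```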

7.10 Bandwidth

The BW of an open-loop system is the frequency at which its Bode magnitude becomes (1/√2) × 100 = 70.7% ≈ 70% of its magnitude at frequency zero. If the zero-frequency gain of the system is 1, then the zero-frequency magnitude is 0 db and the aforementioned frequency is the frequency at which the magnitude becomes -3 db. For a system of type 1 or higher, the zero-frequency gain of the negative unity-feedback closed-loop system is 1. Therefore, the BW of such a closed-loop system is the frequency at which the magnitude becomes -3 db. How does the BW relate to the poles and zeros of the system? The development of Section 7.1 reveals that by addition of a pole the BW of the system may decrease. In fact, from Chapter 4 we know that the BW does decrease if the added pole is smaller than 10 times the largest pole of the system. However, by addition of a zero the BW of the system increases, depending on the location of the zero (recall Remark 4.17 of Chapter 4). Some examples follow.
Example 7.21: Consider the system L1(s) = 25/[(s + 5)(s + 10)]. The Bode diagram is shown as a solid curve in Fig. 7.32. If we add a zero to the system as s + 1, the Bode diagram of the resultant system L2(s) = 25(s + 1)/[(s + 5)(s + 10)] is shown as a dashed curve in the same figure. The dash-dotted curve refers to the system L3(s) = 25(10s + 1)/[(s + 5)(s + 10)]. The increase in the BW due to the effect of the zero is obvious. The original system has BW = 4.17 rad/s, the second system has BW = 69.74 rad/s, and the third system has BW = 706.18 rad/s.


Figure 7.32 Example 7.21.
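The BW figures quoted in Example 7.21 can be reproduced by a simple bisection on the magnitude function, assuming the book's -3 db definition (3 db below the zero-frequency magnitude); the following sketch checks L1 and L2:

```python
import math

def bw_minus3db(mag, w_lo=1e-6, w_hi=1e6):
    """-3 db bandwidth of a magnitude function, by bisection on a log scale.
    Assumes a single downward crossing of the -3 db level (true here)."""
    target = mag(w_lo) * 10.0 ** (-3.0 / 20.0)
    lo, hi = w_lo, w_hi
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if mag(mid) > target:
            lo = mid
        else:
            hi = mid
    return lo

L1 = lambda w: abs(25.0 / ((1j * w + 5.0) * (1j * w + 10.0)))
L2 = lambda w: abs(25.0 * (1j * w + 1.0) / ((1j * w + 5.0) * (1j * w + 10.0)))

assert abs(bw_minus3db(L1) - 4.18) < 0.05     # book: 4.17 rad/s
assert abs(bw_minus3db(L2) - 69.7) < 0.5      # book: 69.74 rad/s
```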

For the sake of completeness, let us remind you that addition of a single zero is impossible since it is not causal, unless we have a purely theoretical discussion. So the whole practical scenario is that at least one pole is also associated with that zero and we are considering the effect of the zero. (Also recall that you already know what poles have no effect on the BW.) What is the relation between the closed-loop BW, the open-loop BW, ωpc, and ωgc? The exact relation is quite complicated and depends on the system parameters. However, the general trend is that usually the open-loop BW is in between ωpc and ωgc if they are finite, and occasionally a little larger than the larger of them. (Note that in general it is not known which one is larger; both ωgc > ωpc and ωgc < ωpc are possible. To see examples of both cases, refer to the previous examples, and also to Chapter 9.) On the other hand, usually the closed-loop BW is in the order of the open-loop BW or larger than it. (More precisely, unless we have to, we usually do not increase the closed-loop BW much


beyond the open-loop BW, due to practical considerations.) If the open-loop system is of type 1 or more, then the closed-loop BW is usually between ωpc and ωgc or a little larger than the larger one. However, exceptional systems exist for which the closed-loop BW is drastically smaller than the smaller of ωpc and ωgc. Such systems will be encountered in Chapter 9. On the other hand, systems exist for which the closed-loop BW is drastically larger than the open-loop BW. This is not a surprise and is a result of the high gain of the system.
Example 7.22: Consider the system L(s) = 25/[(s + 5)(s + 10)]. The system has ωgc = ωpc = Inf. The open-loop BW is 4.1791 rad/s and the closed-loop BW is 6.7966 rad/s.
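The two BW figures of Example 7.22 can be reproduced numerically; note that the closed loop is L/(1 + L) = 25/(s² + 15s + 75). An illustrative sketch:

```python
import math

def bw_minus3db(mag):
    """-3 db bandwidth by log-scale bisection (single downward crossing)."""
    target = mag(1e-6) * 10.0 ** (-3.0 / 20.0)
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if mag(mid) > target else (lo, mid)
    return lo

open_loop = lambda w: abs(25.0 / ((1j * w + 5.0) * (1j * w + 10.0)))
# closed loop L/(1+L): 25/((s+5)(s+10) + 25) = 25/(s^2 + 15s + 75)
closed_loop = lambda w: abs(25.0 / ((1j * w) ** 2 + 15.0 * (1j * w) + 75.0))

assert abs(bw_minus3db(open_loop) - 4.1791) < 0.01
assert abs(bw_minus3db(closed_loop) - 6.7966) < 0.01
```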

Example 7.23: Consider the system L(s) = 25/[s(s + 5)(s + 10)]. With the controller C(s) = 6.2431 × (0.5077s + 1)/(0.0788s + 1) × (30s + 16.0177)/(30s + 1) the system tracks a ramp input with ess = 2% and has PM ≈ 60 deg at ωgc ≈ 5 rad/s. On the other hand, ωpc = 13.4 rad/s. The closed-loop system has BW = 9.4984 rad/s.

Example 7.24: Consider the system L(s) = 1/[s²(s + 1)(s + 10)]. With the controller C(s) = (3424s⁴ + 9990s³ + 9833s² + 3849s + 500)/(0.22s⁴ + 7s³ + 68s² + 166s + 1) the system tracks a parabolic input with ess = 2% and has PM ≈ 39 deg at ωgc ≈ 5 rad/s. On the other hand, ωpc = 8.4 rad/s. The closed-loop system has BW = 9.1 rad/s.

Example 7.25: Consider the system of Example 7.15. The open-loop BW is 4.1745e-04 rad/s regardless of the value of K. However, with K = 10⁷, the closed-loop BW is 1.9765e+04 rad/s.

Examples of the opposite case will be encountered in Chapter 9.
Remark 7.4: The following general shapes, see Fig. 7.33, are indicative of the respective BWs.⁶ In words, in the left panel the high-frequency gain is smaller than the low-frequency gain, and in the right panel it is vice versa. In the left panel the

⁶ Note that these are only some typical examples and in actual systems, especially in the mid-frequency range, the shape of the curve may be much more complicated.



Figure 7.33 Remark 7.4. Some typical Bode diagrams.

BW of the system is bounded (it cannot be infinity) unless the drop in the magnitude is less than 3 db. In the right panel, if in the mid frequencies the magnitude falls more than 3 db below its low-frequency level, the BW is bounded; otherwise it is infinity. In particular, in either case, if the system has j-axis zeros the bandwidth is certainly bounded by the frequency of that zero.
Question 7.2: In the latter case discussed above, is it possible to increase the BW of such systems?

7.11 Summary

In this chapter we have introduced Bode diagrams. They were introduced by Bode in 1938 in the context of circuit analysis and synthesis (in particular electronic amplifier and filter design) and were soon welcomed by the control community. Since then they have been an indispensable tool for the analysis and synthesis of control systems. The diagram constitutes two parts or panels: One is the magnitude diagram and the other is the phase diagram. The magnitude diagram is actually 20 times the logarithm of the magnitude of the transfer function of the system. The horizontal axis is the frequency in logarithmic scale. Thus, unlike the Nyquist plot, in which frequency is the hidden parameter of the plot, here frequency is a visible variable versus which the diagram is plotted. Plotting and working with the diagram is thus simplified: The basic element approach can be easily used instead of the transfer function approach, where the latter has to be chosen in the Nyquist context. In Chapter 9 we will conduct the controller design in the Bode diagram context. It is noteworthy that it is possible to perform the controller design in the Nyquist plot and the Krohn-Manger-Nichols chart contexts as well, but the task is much simpler in the Bode diagram framework. Its only weakness is that there is no proven stability criterion in this framework, whereas in the Nyquist framework there is the Nyquist stability criterion. We have studied the system features GM, PM, DM, BW, stability, and sensitivity, as well as the steady-state error, in this context. The inverse problem of obtaining the transfer function from the Bode diagram is also studied. Relations with the Nyquist plot and root locus have also been discussed. Various worked-out problems to follow assist deeper learning of the subject.


Introduction to Linear Control Systems

7.12 Notes and further readings

1. An interesting theorem known as Bode’s Integral Theorem, dating back to the 1940s, concerns the relation between the sensitivity function of the system and its parameters, in particular its unstable poles. For MP systems there is also a one-to-one relation between the phase and the magnitude of the system. Details are provided in Chapter 10. See also (Terman, 1943).
2. The Bode envelope of an interval plant family is considered in Nakhmani et al. (2011). The Bode diagram for convergent nonlinear systems is discussed in (Pavlov et al., 2006).
3. For the generic “control system” there often holds the relation BW × tr ≥ 1. Thus if the BW of the system is, e.g., 1 rad/s, the rise time cannot be, e.g., 0.2 second. Different sources provide different heuristics for this purpose. The general rule of thumb appears to be BW × tr ≈ 2 for many ‘actual’ systems. However, we emphasize that it does depend on the plant, and in some systems this rule may not hold. For instance, it is well known that for oscilloscope signals BW × tr ≈ 0.35-0.45, where the 10%-90% convention is used for computing the rise time.
4. There are specialized details about the sensitivity and complementary sensitivity functions (S, T) of a system, in particular with respect to their Bode diagrams. Problems 7.25 and 7.26 and Exercises 7.22-7.24 are in this direction. Chapter 10 includes some pertinent results as well. There we define T_max := max_ω |T(jω)|, and the same for S. In continuation of Question 4.18 of Chapter 4 we can now ask: for what member of an interval plant do we have T_max and S_max := max_ω |S(jω)|?
5. The Bode diagram has extensive use in filter design. Exercises 7.25-7.28 give some general information on this perspective. Some pertinent issues are also discussed in Chapter 10. For completeness we should add that advanced techniques like the H∞ and adaptive theories are also used in this field; see, e.g., (Hassibi, 2003; Hassibi et al., 2006; Naito et al., 2000; Ohno et al., 2009; Sayed, 2003). In this connection we can add that ‘sound analysis’ has many actual applications. For instance, in medicine, heart and lung conditions are assessed through computerized analysis of the sounds of the respiratory system (Aarabi et al., 2005; Moussavi, 2007).
6. Determining the open-loop transfer function from a closed-loop one should be quite easy for you. In particular, this is used in electronics and switch-mode power supplies, where Middlebrook’s method was proposed in 1975 (Middlebrook, 1975).
7. In the electronics literature on filter design the term octave is still in use, perhaps even more than the term decade.
8. Establishing a stability test, especially one that is easy to use, in the Bode diagram context is quite desirable.
9. Development of counterparts of the results of this chapter for DAE, fractional-order, positive, constrained, multiscale, etc. systems is desirable.
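As a numerical illustration of the BW × tr heuristic in item 3, the following is a minimal sketch with an arbitrary second-order prototype (not a system from the text), assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy import signal

# Closed-loop second-order prototype T(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2)
zeta, wn = 0.7, 1.0
sysT = signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])

# 10%-90% rise time from the step response
t, y = signal.step(sysT, T=np.linspace(0, 20, 20001))
tr = t[np.argmax(y >= 0.9)] - t[np.argmax(y >= 0.1)]

# Bandwidth: first frequency where |T| falls 3 db below its DC value
w = np.logspace(-2, 2, 40001)
_, mag, _ = signal.bode(sysT, w=w)
bw = w[np.argmax(mag < mag[0] - 3.0)]

print(bw * tr)  # close to 2, in line with the rule of thumb
```

Changing zeta shows how plant-dependent the product is, which is exactly the caveat of item 3.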

7.13 Worked-out problems

Problem 7.1: The Bode magnitude diagram of a system goes to positive infinity at low frequencies and is constant at high frequencies. What can be said about the system?

Bode diagram

The system has an integrator, and its step error is zero provided the closed-loop system is stable. The system is proper: it has an equal number of poles and zeros, i.e., its relative degree is zero. Note that the velocity error and acceleration error may also be zero (provided stability), depending on the slope at low frequencies. Because this information (−20, −40, or −60 db/dec) is not provided, we can count only on −20 db/dec.

Problem 7.2: The Bode magnitude diagram of a system has the slope −40 db/dec at high frequencies. What does it indicate?
Its relative degree is two: it has two more poles than zeros. Note that this does not necessarily mean the presence of two integrators. Whether it has two integrators or not, its relative degree is two.

Problem 7.3: The Bode magnitude diagram of a system goes to positive infinity at both low and high frequencies. What can be deduced from it?
The system has one or more integrators and is improper: it has more zeros than poles.

Problem 7.4: Draw the Bode diagram of the system L(s) = (s − 10)/((s − 1)(s + 100)).
Note that this system is the same as the one considered in Example 7.10 except that a finite pole and zero are interchanged. Because they are the same concerning MP-ness or NMP-ness (here they are both NMP), the Bode diagram of the new system is the same as that of the original system at both the low- and high-frequency ends. More precisely, the Bode magnitude diagrams have the same slopes (not values) at the respective ends, and the Bode phase diagrams have the same angle values at the respective ends. However, in mid frequencies they will differ. This is easily verified if we refer to Fig. 7.34, which gives the Bode diagram of the new system.
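The slope rules used in Problems 7.1-7.3 are easy to check numerically. A minimal sketch (with an arbitrary example system, not one from the text), assuming SciPy is available; the high-frequency slope is −20r db/dec for relative degree r:

```python
import numpy as np
from scipy import signal

# Hypothetical example: one zero, two poles, so relative degree r = 1
G = signal.TransferFunction([1, 1], [1, 5, 6])   # (s+1)/((s+2)(s+3))
w = np.array([1e3, 1e4])                         # two high frequencies, a decade apart
_, mag, _ = signal.bode(G, w=w)
slope = mag[1] - mag[0]                          # db per decade
print(round(slope))  # -20, i.e., -20*r db/dec
```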

Problem 7.5: Draw the Bode diagram of the system L(s) = s²/((s² + s + 2)(s + 20)).
We follow the method used for solving Example 7.1. First we rewrite the system transfer function as L(s) = (1/40)s²/(((s/√2)² + s/2 + 1)(s/20 + 1)). Starting from the frequency 0 we have the effect of the second-order zero at the origin. Thus the low-frequency region of the diagram has the slope 40 db/dec. This portion, or its extension, passes through the point 20 log(1/40) ≈ −32 db at ω = 1 rad/s. As we increase the frequency, the first corner frequency we arrive at is ω = √2, with ζ = √2/4 ≈ 0.35. Thus, with a slight overshoot at this frequency, the magnitude flattens out at 0 db/dec above it. The next corner frequency we reach is at ω = 20. Above this frequency the slope will be −20 db/dec. The diagram is the upper part of Fig. 7.35. With a similar argument we conclude that the phase diagram is the lower part of the same figure. Note that alternatively, and perhaps more easily, the phase analysis can be performed in the manner we did for Nyquist plot drawing.
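The asymptote bookkeeping above can be cross-checked numerically; a minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy import signal

# L(s) = s^2/((s^2 + s + 2)(s + 20)) of Problem 7.5
den = np.polymul([1, 1, 2], [1, 20])        # s^3 + 21 s^2 + 22 s + 40
L = signal.TransferFunction([1, 0, 0], den)

w = np.array([1e-3, 1e-2])                  # well below the first corner
_, mag, _ = signal.bode(L, w=w)

slope = mag[1] - mag[0]                     # low-frequency slope in db/dec
asym_at_1 = mag[1] + slope * 2              # extend the asymptote to w = 1 rad/s
print(round(slope), round(asym_at_1, 1))    # 40 and about -32 db
```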

Figure 7.34 Bode diagram of Problem 7.4.

Figure 7.35 Bode diagram of Problem 7.5.

Problem 7.6: Draw the Bode diagram of the system L(s) = (s + 1)/(s(s + 10)). Discuss its tracking error for different inputs, both from the transfer function and from the Bode diagram.
Via the method presented in Example 7.1 for drawing the Bode diagram, we conclude that it is given by Fig. 7.36. To compute its steady-state error, first note that the closed-loop system is stable. This is verified by the application of the Routh’s test, or the root locus method, or the command “allmargin” of MATLAB®. Hence, since the system is type 1, its step error is zero, and its ramp error is the nonzero constant (1/Kv) × 100%, where Kv = lim_{s→0} sL(s) = 0.1. If we want to derive it from the figure, from Example 7.4 we know that the intercept of the low-frequency slope of −20 db/dec (or its extension) with the 0-db magnitude line occurs at the frequency ω = Kv. Read from the figure, this frequency is ω ≈ 0.1 and thus Kv ≈ 0.1. As for the phase diagram, note that we may alternatively use the method we used for Nyquist plot drawing.
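The limit defining Kv can be approximated numerically by evaluating |sL(s)| on the imaginary axis at a frequency well below all corners; a minimal sketch, assuming NumPy:

```python
import numpy as np

# L(s) = (s + 1)/(s(s + 10)) of Problem 7.6; Kv = lim_{s->0} s L(s)
def L(s):
    return (s + 1) / (s * (s + 10))

w = 1e-4                        # a frequency well below all corners
Kv = w * abs(L(1j * w))         # |s L(s)| on s = jw approximates Kv
print(round(Kv, 3))  # 0.1; also the 0-db intercept of the -20 db/dec asymptote
```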

Figure 7.36 Bode diagram of Problem 7.6.

Problem 7.7: Draw the Bode diagram of the system L(s) = 16000(s + 1)(s + 10)/(s²(s − 0.1)(s + 100)²). Discuss its tracking error for different inputs, both from the transfer function and from the Bode diagram.
Using the method of Example 7.1 for drawing the Bode diagram, we conclude that it is given by Fig. 7.37. As for its steady-state error, first note that the closed-loop system is stable with the given gain. (The gain margins of the system, as ratios, are 0.8070 and 97.3497.) This is verified using the Routh’s test, or the root locus

Figure 7.37 Bode diagram of Problem 7.7.

method, or the command “allmargin” of MATLAB®. Thus, because the system is type 2, its step and ramp errors are zero. Its acceleration error is a nonzero constant given by (1/Ka) × 100%, where Ka = lim_{s→0} s²L(s) = 160. If we want to obtain it from the figure, from Example 7.5 we know that the intercept of the low-frequency slope of −40 db/dec (or its extension) with the 0-db magnitude line occurs at the frequency ω = √Ka. Read from the figure, this frequency is ω ≈ 12.5 and thus Ka ≈ 156.25. (Note that some inaccuracy is unavoidable in any graphical computation. One may even read off 12 or 13.)

Problem 7.8: Draw the Bode diagrams of the systems L1(s) = s² + z², L2(s) = s² − z², L3(s) = p²/(s² + p²), and L4(s) = 1/(s² − 1).
Defining CCW the angles of the vectors s − a and s − ja by α, and the angles of s + a and s + ja by β, respectively, where a represents either z or p, the phase of L1 and L2 is α + β, and the phase of L3 and L4 is −(α + β). Therefore the Bode diagrams are as given in Fig. 7.38. Note that 150 db and −150 db represent Lm of typical large and small numbers, tending to infinity and zero, respectively. The corresponding numbers are above 10^7 and below 10^−7, respectively. Note that because of an inconsistent definition of angles, MATLAB® computes the phase of L1 and L3 in a wrong way: it computes them as 360 deg less and more than the correct answers, respectively.


Figure 7.38 Bode diagram of Problem 7.8; From left to right: L1 to L4 .

Problem 7.9: Draw the Bode diagram of the system L(s) = (s + 80)/(s(s − 0.1)(s² + 100)).
The answer is given in Fig. 7.39. Note that the effect of the pole at ω = 0.1 is clear from the corner it has caused in the curve. However, the effect of the zero is not. The zero shows itself by the effect it has on the slope of the right part (high-frequency end) of the figure. In the left part the slope is −20 db/dec. Above ω = 0.1 it is −40 db/dec. Above ω = 100 it is −60 db/dec. However, because of the complex poles the slope must be −80 db/dec in the aforementioned range. Thus we conclude that there is a zero between ω = 10 and ω = 100. Its exact value cannot be deduced from the figure (if we wish to find it) unless we do experimental analysis of the phase diagram as well. On the other hand, note that due to the j-axis poles at ω = 10 there is a 180 deg drop in the phase of the system at this frequency.
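The 180 deg phase drop at the j-axis poles can be checked by evaluating L just below and just above ω = 10; a minimal sketch, assuming NumPy:

```python
import numpy as np

# L(s) = (s + 80)/(s(s - 0.1)(s^2 + 100)) of Problem 7.9
def L(s):
    return (s + 80) / (s * (s - 0.1) * (s**2 + 100))

r = L(10.1j) / L(9.9j)              # across the j-axis poles at w = 10
jump = np.degrees(abs(np.angle(r)))
print(round(jump))                  # essentially 180 deg
```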

Figure 7.39 Bode diagram of Problem 7.9.


Problem 7.10: Draw the Bode diagram of the system L(s) = s(s² + 1)/((s − 10)(s² + 1000)).
The answer is provided in Fig. 7.40. Note that the effect of the zero at the origin in the low-frequency part of the curve is obvious. Because of the complex (j-axis) zeros the slope above ω = 1 is 60 db/dec. However, the slope above ω ≈ 30 is 0, not 20. This means that there should be a pole below ω ≈ 30. Note that the presence of a pole in the range 1 < ω < 30 is not apparent if we look for a corner in the magnitude curve in this range. On the other hand, the 180 deg rise and fall in the phase diagram are at the j-axis zeros and poles, respectively.

Figure 7.40 Bode diagram of Problem 7.10.

Problem 7.11: Draw the Bode diagram of the system L(s) = (s − 1)/(s + 8)³.

The answer is given in Fig. 7.41. The slope at the low- and high-frequency ends of the plot is 0 db/dec and −40 db/dec, respectively.

Problem 7.12: Deduce the transfer function of the system whose Bode diagram is given in Fig. 7.42. Suppose the system is MP with positive gain. What is the tracking error of this system for different inputs in the standard 1-DOF structure?
The slope at the low-frequency end of the magnitude diagram is −20 db/dec, so the system has an integrator. The slope slightly drops at ω = 0.1, so there is a simple pole at this frequency. Then the slope slightly rises at the frequency ω = 1, so there should be a zero at this frequency. The slope drops again at the frequency ω = 20; thus we conclude that there is a pole at this frequency. Consequently, the transfer function is of the form L(s) = K(s + 1)/(s(s + 0.1)(s + 20)).
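The deduced model can be cross-checked numerically; a minimal sketch with the gain K = 1 (the value this problem's diagram dictates), assuming NumPy:

```python
import numpy as np

# L(s) = K(s + 1)/(s(s + 0.1)(s + 20)) as deduced in Problem 7.12
K = 1.0
den = np.polymul([1, 0], np.polymul([1, 0.1], [1, 20]))  # s(s + 0.1)(s + 20)
char = np.polyadd(den, K * np.array([1, 1]))             # 1 + L = 0
poles = np.roots(char)
print(max(poles.real) < 0)   # True: the closed loop is stable for K = 1
print(K / (0.1 * 20))        # Kv = K/2 = 0.5
```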


Figure 7.41 Bode diagram of Problem 7.11.

Figure 7.42 Bode diagram of Problem 7.12.


To compute the tracking error we should first determine whether the closed-loop system is stable. Here the closed-loop system is stable for all positive values of gain. (How about negative values? Find them out!) Thus, because the system is of type 1, the tracking error for step inputs is zero. The system tracks velocity inputs with a constant nonzero error, which is (1/Kv) × 100%. To find Kv we proceed as follows. In this system K/2 = Kv, and ω = Kv is the intercept of the low-frequency slope of −20 db/dec (or its extension) with the 0-db line. Therefore Kv ≈ 0.5 and K = 1.

Problem 7.13: Deduce the transfer function of the system whose Bode diagram is given in Fig. 7.43. Suppose the system is MP with positive gain. What is the tracking error of this system for different inputs?
The slope at the low-frequency end is 20 db/dec, so the system has one zero at the origin. At about the frequency 0.1 the curve starts to flatten out, so there should be one real pole as s + 0.1. At about frequency 2 there is a second-order rise (+40 db/dec) with an undershoot. Thus there should be a pair of complex zeros of the form (s² + 4ζs + 4). The undershoot is about 6.5 db, so ζ ≈ 0.25. At around the frequency 10 the curve starts to flatten out (without any overshoot), so there should exist a pair of repeated poles as (s + 10)². At frequency 100 there is a second-order fall (−40 db/dec) with an overshoot; hence there should be a pair of complex poles of the form (s² + 200ζs + 10,000). Because the overshoot is about 14 db, ζ ≈ 0.1. Consequently, the system has the transfer function L(s) = Ks(s² + s + 4)/((s + 0.1)(s + 10)²(s² + 20s + 10,000)). From the magnitude value at ω = 1 we conclude that K = 100.

Figure 7.43 Bode diagram of Problem 7.13.


To compute the tracking error we should first decide the stability of the closed-loop system. Here the closed-loop system is stable (for all positive values of K). Because the system does not have any integrator, it does not track any input. (Question: What can be precisely said about the error?)

Problem 7.14: Infer the transfer function of the system whose Bode diagram is given in Fig. 7.44. Suppose the system is MP with positive gain. What is the tracking error of this system for different inputs?
Starting from the low-frequency end of the diagram we conclude that the system has a double integrator. The curve flattens out at the frequency 1 with a sharp turn (not a curved one, like around frequency 10 of the previous problem) and a slight undershoot. This confirms the presence of a pair of complex zeros like s² + 2ζs + 1. From the small magnitude of the undershoot (≈1.25 db) we conclude that probably ζ ≈ 0.5. The curve starts to fall with a slight slope (which can only be −20) at a frequency of about 10. Therefore there is a pole as s + 10. At about ω = 80 the curve starts to flatten out again. Hence there should be a zero as s + 80. At around ω = 400 the curve starts to rise with a slight rate (which can only be 20 db/dec), and thus the system should have a zero as s + 400. At the frequency ω = 1000 the curve starts to fall with a first-order rate (compare with the −40 db/dec of the low-frequency range) of −20 db/dec and an overshoot. Thus necessarily there is a pair of complex poles like s² + 2000ζs + 10⁶. (Note that the slope of −20 is the result of 20 − 40 = −20.) From the overshoot magnitude we conclude that probably ζ = 0.2. As a result the

Figure 7.44 Bode diagram of Problem 7.14.


transfer function of the system is L(s) = K(s² + s + 1)(s + 80)(s + 400)/(s²(s + 10)(s² + 400s + 1,000,000)). To find K we note that the system is of type 2; thus we follow the derivation of Example 7.5. The intercept of the low-frequency end with the 0-db line is at ω ≈ 1.8, i.e., √Ka ≈ 1.8, so Ka ≈ 3.24. Thus the gain is determined as K ≈ 1000.
To compute the tracking error we should first decide the stability of the closed-loop system. Here the closed-loop system is stable (for all positive values of K. Find the exact range!). It thus follows a parabolic input with constant bounded error and lower-order inputs with zero steady-state error.
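The ζ-from-peak readings used in Problems 7.13 and 7.14 follow from the standard second-order term, whose resonant peak (for ζ < 1/√2) is 1/(2ζ√(1 − ζ²)); a complex zero pair produces a dip of the same size. A minimal numerical sketch, assuming NumPy:

```python
import numpy as np

# Peak (in db) of a standard second-order pole pair with damping ratio zeta;
# a complex zero pair gives an undershoot (dip) of the same magnitude.
def peak_db(zeta):
    return -20 * np.log10(2 * zeta * np.sqrt(1 - zeta**2))

print(round(peak_db(0.25), 1))  # 6.3 db  (zeros of Problem 7.13)
print(round(peak_db(0.1), 1))   # 14.0 db (poles of Problem 7.13)
print(round(peak_db(0.5), 2))   # 1.25 db (zeros of Problem 7.14)
```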

Problem 7.15: Deduce the transfer function of the system whose Bode diagram is given in Fig. 7.45. Suppose the system is MP with positive gain. What is the tracking error of this system for different inputs?
From the low-frequency end we conclude that the system has a double integrator. From the sharp rises and falls of the magnitude diagram we conclude the presence of j-axis zeros and poles, respectively. The smaller and larger frequencies are 1 and 10. The middle one is slightly above 3, so probably ω² = 10 and the transfer function is L(s) = K(s² + 10)/(s²(s² + 1)(s² + 100)). Note that if the zeros and one pair of the poles are of higher orders, but equal, then the high-frequency end has the same slope. So it cannot be inferred for sure whether they are repeated or simple unless we refer to the phase diagram. The gain K is easily found as in the previous example: K = 1. The closed-loop system is unstable.

Figure 7.45 Bode diagram of Problem 7.15.


Problem 7.16: Deduce the transfer function of the system whose Bode diagram is given in Fig. 7.46. Suppose the system is MP with positive gain. What is the tracking error of this system for different inputs?
The slope at the low-frequency end is −20 db/dec. The sharp fall in the magnitude is due to the j-axis zero at ω = 1. So the slope above this frequency will be 40 − 20 = 20 db/dec. On the other hand, the 0-db/dec slope at high frequencies, with the corner at ω = 10, reveals the presence of a pole at s = 10. Thus the system is L(s) = K(s² + 1)/(s(s + 10)). Finally, the low-frequency intercept dictates K = 4. It is closed-loop stable and thus tracks step inputs with zero steady-state error.

Figure 7.46 Bode diagram of Problem 7.16.

Problem 7.17: Deduce the transfer function of the system whose Bode diagram is given in Fig. 7.47. Suppose the system is MP with positive gain. What is the tracking error of this system for different inputs?
Following a similar analysis we conclude that the system is L(s) = 10s/(s² + 1). It is closed-loop stable. However, it does not have an integrator, so it does not track any input. (Question: What is the precise error analysis for different inputs?)

Problem 7.18: The Bode diagram of a system is given in Fig. 7.48. What is its steady-state error in tracking different inputs?
Following a similar analysis we find the system as L(s) = 10/(s²(s + 1)). It is closed-loop unstable.
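The instability claimed in Problem 7.18 can be verified from the characteristic polynomial of the unity-feedback loop; a minimal sketch, assuming NumPy:

```python
import numpy as np

# Problem 7.18: L(s) = 10/(s^2 (s + 1)); unity feedback gives s^3 + s^2 + 10 = 0
poles = np.roots([1, 1, 0, 10])
print(max(poles.real) > 0)  # True: an RHP pole pair, so the closed loop is unstable
```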

Figure 7.47 Bode diagram of Problem 7.17.

Figure 7.48 Bode diagram of Problem 7.18.


Problem 7.19: A system has |GM| = 4 db and PM = 45°. During the operation of the system its gain is reduced to 70% of its nominal value. Does it remain stable? What delay can this system tolerate in its forward path with its nominal gain? What happens if the gain is increased to 1.6 times its nominal gain?
By the definition of the positive and negative GM, the smaller one is the main GM, so the statement of the problem means that both GM1 and GM2 are at least 4 db. (One of them is 4 db and the other is larger.) Because −4 db ≈ 0.63 < 0.7, stability of the system is maintained. The delay margin is given by T < PM/ω_pc, which is not computable since ω_pc is not provided. As for the second part, because 4 db ≈ 1.58 < 1.6, the system will become unstable.

Problem 7.20: A stable system has approximately GM1 = 12 db, GM2 = −10 db, and PM = 60° at ω = 0.6 rad/s. The BW of the system is 0.7 rad/s. Analyze the stability margins of the system. Note that the system is that of Example 6.27, but do not refer to it!
The gain can be reduced or increased to 0.3162 and 3.9811 times its nominal value, respectively, before instability occurs. The tolerable delay margin in the forward path is T < PM/ω_pc = 60 × (π/180)/0.6 = 1.7453 s. The BW of the system is not informative regarding stability margins.

Problem 7.21: Compute the GM and PM of the system
L(s) = 50((s + 0.1)² + 1)((s + 0.1)² + 9)(s + 0.1)/(((s + 0.1)² + 4)((s + 0.1)² + 25)(s + 1)).
Using the command “margin” we find the answer as in Fig. 7.49. The system is stable (see Example 6.38), and this cannot be deduced from the Bode diagram!
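The margin arithmetic of Problems 7.19 and 7.20 is routine db-to-ratio conversion plus the delay-margin formula; a minimal sketch with the numbers of Problem 7.20, assuming NumPy:

```python
import numpy as np

def db2ratio(db):
    return 10 ** (db / 20)

PM, w_pc = 60.0, 0.6             # deg, rad/s (frequency at which the PM is read)

print(round(db2ratio(-10.0), 4)) # 0.3162: gain can shrink to this factor
print(round(db2ratio(12.0), 4))  # 3.9811: gain can grow to this factor
DM = np.deg2rad(PM) / w_pc       # tolerable delay, seconds
print(round(DM, 4))              # 1.7453
```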

Figure 7.49 Bode diagram of Problem 7.21.


Problem 7.22: Compute the GM and PM of the system of Problem 7.21 with the gain 50 substituted by 0.5. Using the command “margin” we find the answer as in Fig. 7.50. This system is also stable (see Example 6.39), and as with the previous problem this cannot be inferred from the Bode diagram.

Figure 7.50 Bode diagram of Problem 7.22.

Remark 7.5: Several other examples can be offered in this direction. Readily available are the examples and worked-out problems of Chapter 6, Nyquist Plot. We leave it to the reader to have a look at their Bode diagrams.

Problem 7.23: Compute the BW of the system of Problem 7.22. Using the command “bandwidth” on the open-loop system we find it as Inf. This is of course obvious from the Bode diagram. Indeed, the BW of the closed-loop system is also infinity. The shape of the Bode diagram of the closed-loop system is almost the same as that of the open-loop system.

Problem 7.24: Compute the BW of the closed-loop systems of the open-loop systems L1(s) = K/(s(s + 10)) and L2(s) = K(s² + 1)/(s(s + 10)).
We show that by increasing the value of the gain K the first system will have a larger BW, whereas in the second system the BW is bounded by the frequency of


the j-axis zero. For L1 some pairs of (K, BW) are (4, 0.4156), (10, 1.1069), (30, 3.9995), (50, 7.0627), (100, 12.7119), (500, 33.4933). For L2 we have (K, BW) = (4, 0.3599), (10, 0.6342), (30, 0.8579), (50, 0.9123), (100, 0.9553), (500, 0.9909).

Problem 7.25: Consider the standard 1-DOF control structure with P = 1/(s + 1) and C = 2/s. Let the reference input be r(t) = sin ωt. Is it possible that the output has a peak-to-peak magnitude larger than 2 at steady state?
The output is Y(s) = T(s)R(s) and y_ss(t) = |T(jω)| sin(ωt + ∠T(jω)). If for some frequency (or frequency range) ω there holds |T(jω)| > 1, then the answer is positive. With reference to Section 7.2.16, the Bode diagram of the transfer function has ζ = 1/(2√2) < 0.707 and thus shows an overshoot. See the upper panel of Fig. 7.10. This means that the answer is positive. In fact the frequency range is divided into {0 < ω < ω1}: |T(jω)| > 1, {ω > ω1}: |T(jω)| < 1, and {ω = 0} ∪ {ω = ω1}: |T(jω)| = 1. (See Exercise 7.22.) In Chapter 4 you learned how to find ω1 analytically; thus we leave it to you. Of course, when the system is more complicated you can simply draw the Bode magnitude diagram in MATLAB® and find the aforementioned ranges by magnification and inspection of the diagram. Alternatively, you can look into the vector MT as the first output of the command [MT,PT] = bode(T). If you wish to do it systematically, you can write a simple m-file to search MT in [MT,PT] = bode(T) and find and output the interval(s) in which |T(jω)| > 1 or |T(jω)| < 1.

Problem 7.26: Reconsider Problem 7.25. Let the output disturbance be d_o(t) = sin ωt. Is it possible that y_do,ss(t) has a peak-to-peak magnitude larger than two?
The output is Y_do(s) = S(s)D_o(s) and y_do,ss(t) = |S(jω)| sin(ωt + ∠S(jω)). If for some frequency (or frequency range) ω there holds |S(jω)| > 1, then the answer is positive.
The Bode diagram of the sensitivity function also shows an overshoot, and hence the answer is positive. In Exercise 7.24 we provide more details.

Remark 7.6: In Chapter 10 we shall talk more about the T and S functions and their properties.
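For Problems 7.25 and 7.26 the relevant functions are T = 2/(s² + s + 2) and S = s(s + 1)/(s² + s + 2); the frequency ranges where they exceed 1 can be scanned directly. A minimal sketch, assuming NumPy:

```python
import numpy as np

# P = 1/(s + 1), C = 2/s: T = PC/(1 + PC), S = 1/(1 + PC)
w = np.logspace(-2, 2, 100001)
s = 1j * w
T = 2 / (s**2 + s + 2)
S = s * (s + 1) / (s**2 + s + 2)

print(abs(T).max() > 1, abs(S).max() > 1)  # True True: both peak above 1
print(round(w[np.argmax(abs(T) < 1)], 3))  # about 1.732, i.e., w1 = sqrt(3)
```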

7.14 Exercises

Over 200 exercises are offered in the form of collective problems in the following. It is advisable to try as many of them as you find time for.

Exercise 7.1: A system and its Bode diagram are given. Now the type of the system is increased by 1. What happens to its Bode diagram?
Exercise 7.2: Is it possible to determine the stability or instability of a system from the sign of its GM and PM?
Exercise 7.3: Explain why the phase versus log frequency of an NMP system cannot be obtained (uniquely) from its log magnitude information.


Exercise 7.4: What characteristics do type 1 systems have in the Bode diagram? Repeat the problem for type 0 and type 2 systems as well.
Exercise 7.5: What characteristics must the Bode magnitude diagram possess if a velocity servo system is to have no steady-state velocity error for a constant velocity input? What is true of the corresponding Bode phase diagram characteristics?
Exercise 7.6: Find or construct a transfer function with several phase crossover frequencies, i.e., several 0-db line intersections of the Lm diagram. Discuss.
Exercise 7.7: Construct a transfer function with several gain crossover frequencies, i.e., several critical line(s) intersections of the phase diagram. Discuss.
Exercise 7.8: Consider a plant whose Nyquist plot intersects itself. What can be said about the Bode diagram of this plant?
Exercise 7.9: This exercise has three parts. (1) Draw the Bode diagram of the following systems. (2) What can be said about their steady-state errors for different inputs? (3) Explain how you should repeat Parts (1) and (2) after the inclusion of a delay whose value is assumed such that stability is maintained.

1. L(s) = 10^6 (s + 0.1)(s² + s + 4)/((s − 0.1)(s + 10)²(s² + 20s + 10,000))
2. L(s) = 10^5 (s + 0.5)(s² − s + 4)/((s − 0.1)(s + 10)²(s² + 20s + 10,000))
3. L(s) = 90(s + 0.1)(s + 10)/((s − 0.1)(s − 10)²)
4. L(s) = 100(s + 0.1)(s² − s + 4)/(s(s + 10)²(s² + 20s + 10,000))
5. L(s) = 10(s² + 1)/(s(s² + 10))
6. L(s) = 2(s + 1)/(s(s + 10))
7. L(s) = 4(s + 1)/(s(s − 1))
8. L(s) = 10(s + 1)(s + 20)/(s²(s − 0.1)(s + 40)²)
9. L(s) = 8(s + 1)²/(s²(s + 10))
10. L(s) = 6(s + 1)/(s²(s − 1))
11. L(s) = 70(s + 1)²(s + 8)/(s²(s − 1)(s − 10))
12. L(s) = 10(s + 1)²/(s³(s − 1))
13. L(s) = 70(s + 1)³/(s³(s + 2))
14. L(s) = 10(s + 1)³/(s³(s − 1))
15. L(s) = 80(s + 1)³/(s³(s + 10)(s − 1))
16. L(s) = 20(s + 1)⁴/(s³(s − 10)(s − 1))

Exercise 7.10: Find the GM, PM, DM, and BW of the systems in Exercise 7.9.
Exercise 7.11: Using MATLAB®, draw the Bode diagrams of the systems of Chapter 6, Nyquist Plot, and find their GM, PM, DM, and BW.
Exercise 7.12: What is the low-sensitivity region in the Bode diagram context? Explain.
Exercise 7.13: Find the transfer functions of the following systems; see Fig. 7.51. Suppose that the systems are MP with positive gain.


Figure 7.51 Exercise 7.13, Bode diagrams of twelve different systems.

Exercise 7.14: Plot the exact and approximate (using an order-5 Pade approximation) Bode diagrams of the pure delay term L(s) = e^(−sT). Do this also in the Nyquist plot context. In which framework do you observe more discrepancy? Why?
Exercise 7.15: Repeat Exercise 6.20 of Chapter 6, Nyquist Plot, in the Bode diagram context.
Exercise 7.16: Repeat Exercise 6.21 of Chapter 6, Nyquist Plot, in the Bode diagram context.


Figure 7.51 (Continued), Exercise 7.13.

Exercise 7.17: Repeat Exercise 6.22 of Chapter 6, Nyquist Plot in the Bode diagram context. Derive both the exact diagram and the approximate diagram using Pade approximation. Interpret the result. Exercise 7.18: Repeat Exercise 6.23 of Chapter 6, Nyquist Plot, in the Bode diagram context. Exercise 7.19: Repeat Exercise 6.24 of Chapter 6, Nyquist Plot, in the Bode diagram context.


Exercise 7.20: Repeat Exercise 6.25 of Chapter 6, Nyquist Plot, in the Bode diagram context. Exercise 7.21: Repeat Exercise 6.26 of Chapter 6, Nyquist Plot, in the Bode diagram context. Exercise 7.22: Reconsider Problems 7.25 and 7.26. The Bode magnitude diagrams of T and S are provided in Fig. 7.52. Explain the discussion of the aforementioned problems pictorially.

Figure 7.52 Exercise 7.22, Left: System of Problem 7.25, Right: System of Problem 7.26.

Exercise 7.23: Consider the standard 1-DOF control structure with P and C given below. Let r(t) = d_i(t) = d_o(t) = n(t) = sin ωt. In each case determine whether u_ss(t), y_r,ss(t), y_di,ss(t), y_do,ss(t), y_n,ss(t) can have a peak-to-peak magnitude larger than two.
1. P(s) = 2/(s + 3), C(s) = (s + 1)/s
2. P(s) = (s + 3)/s, C(s) = 1/(s + 1)
3. P(s) = (s + 1)/[s(s + 4)], C(s) = (s + 2)/s

For the sake of completeness we should clarify that questioning about y_r,ss(t) is only for practicing the idea, because the systems are not designed for sinusoidal reference tracking. (And the answer for y_n,ss(t) should be clear to you. Why?) On the other hand, questioning about u_ss(t), y_di,ss(t), and y_do,ss(t) is quite natural.

For the sake of completeness we should clarify that questioning about yr;ss ðtÞ is only for practicing the idea, because the systems are not designed for sinusoidal reference tracking. (And the answer for yn;ss ðtÞ should be clear to you. Why?) On the other hand, questioning about uss ðtÞ, ydi ;ss ðtÞ and ydo ;ss ðtÞ is quite natural. Exercise 7.24: This Exercise has several parts. 1. Consider the standard 1-DOF control system. In Chapter 4 we defined five different transfer functions Hr 5 T, Hdi : 5 PS, Hdo : 5 S, Hn 5 2 T, and Hru : 5 CS. What is the relation between them? Explain inasmuch as you can. 2. Note that because Hn 5 2 Hr we are actually concerned with four transfer functions. Is it possible that none, one, two, three, or all of these transfer functions show an overshoot? Discuss. 3. Suppose that the aforementioned transfer functions show an overshoot. (3-1) What is the relation between the respective frequencies? (3-2) Can we determine at least the order of occurrence? (3-3) Is it possible that they show an undershoot as well (i.e., both overshoot and undershoot, perhaps some)? 4. Repeat the counterparts of items (1)-(3) for the variants of 2-DOF and 3-DOF structures.


Exercise 7.25: Let us start by mentioning that, as with Exercises 7.18-7.20, this exercise also has a Nyquist counterpart. We have not presented it in Chapter 6 since it is easier to study it first in the context of the Bode diagram. You are encouraged to subsequently analyze the exercise in the Nyquist context as well. This exercise is for all readers but is particularly important for readers of electrical and electronics engineering. In the general language of electrical engineering every system can be considered as a filter. Most plants are low-pass filters, as they pass the low-frequency content of the input signal and filter out other frequencies. In certain applications we need band-pass (i.e., mid-frequency pass) and high-pass filters as well. We also have the opposites: low-stop, band-stop, and high-stop filters. On the other hand, low-band-pass, band-high-pass, and low-band-high-pass filters also exist. See Fig. 7.53, in which the horizontal axis is in logarithmic scale. It should be obvious that the provided smooth curves are typical and some filters have more complicated curves than these.

Figure 7.53 Exercise 7.25, Different kinds of filters.

1. Identify the above-mentioned filters in this figure.
2. Provide ‘examples’ of these systems and study their bandwidth.
3. Provide a ‘theory’ (even elementary and ad hoc) for the construction of such filters and their relation with each other.

For the sake of completeness we should add that filter design has a long history and is an independent course in undergraduate electrical engineering. Filters are pervasively used in audio and video applications, among others. Computer software that enhances the quality of a video signal, or changes the environment from, e.g., stage to room or concert in audio applications, performs these tasks by feeding the signal to an appropriate filter. Filters are also used to remove the ‘disturbance/noise’ (in electronics these terms are used rather interchangeably) from a signal. Such a disturbance is, e.g., the so-called ‘humming noise’ which is present in almost all live recordings. Contemporary televisions, radios, and even cassette players of the late 20th century (which were soon overshadowed by MP3 players) all have such a capability. Such filters are also called ‘equalizers’. In fact such an instrument exists and is used by all broadcasting corporations and music/film studios and companies. High-quality equalizers all have a bank of filters, which is a collection of numerous filters each localized on a particular frequency range. The corresponding speakers also have at least three speakers: (1) Woofer or Bass for low frequencies, (2) Main for mid frequencies, and (3) Tweeter or Treble for high frequencies. See Fig. 7.54.
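As a toy numerical illustration of the low-pass/high-pass distinction (first-order prototypes only; not a solution to the exercise), assuming NumPy:

```python
import numpy as np

# First-order prototypes with corner frequency wc:
# low-pass LP(s) = wc/(s + wc), high-pass HP(s) = s/(s + wc)
wc = 10.0
w = np.array([0.1, 10.0, 1000.0])  # far below, at, and far above the corner
s = 1j * w
LP = wc / (s + wc)
HP = s / (s + wc)

def mag_db(H):
    return 20 * np.log10(abs(H))

print(np.round(mag_db(LP), 1))  # passes low frequencies, -3 db at wc, attenuates high
print(np.round(mag_db(HP), 1))  # the mirror image: attenuates low, passes high
```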

Bode diagram


Figure 7.54 Exercise 7.25, Left: Equalizer; Right: Its speaker. LPF: Low-pass filter, HPF: High-pass filter.

We should add that the reason for using a woofer and a tweeter is that the main loudspeaker is not able to reproduce low- and high-frequency signals with high fidelity/quality. In passing, it is good to learn that the acronym HiFi, which is seen on electronic instruments, stands for High Fidelity. Finally, recall that an advanced course is also offered in graduate studies; see Section 1.16. Therein a duality between filtering and control problems is elaborated upon as an indispensable part of the course.

Exercise 7.26: With reference to Fig. 7.55 explain the functionality of each filter.

Figure 7.55 Exercise 7.26, Three passive filters and one active filter.

Exercise 7.27: Consider a coded signal whose actual information is concentrated near ω = 10 rad/s in its spectrum. (1) How can we filter out other frequencies? (Hint: Refer to Fig. 7.10, upper panel.) (2) Explain your answer by considering the poles of the filter and comparing the filter with another filter with the same ω_n but ζ = 0.

Exercise 7.28: Some filters have a linear phase. Propose one and explain its usage.

Exercise 7.29: Although we do not use the basic elements approach in the Nyquist plot, it is instructive to draw and learn the Nyquist plots of the basic elements. We did not present this exercise in Chapter 6 since we introduced the basic elements not in that chapter but in this one.

Exercise 7.30: Reconsider and answer (the counterpart of) Exercise 6.28 in the Bode diagram context.


Introduction to Linear Control Systems

References

Aarabi, P., Shi, G., Shanechi, M.M., Rabi, S., 2006. Phase-Based Speech Processing. World Scientific, Singapore.
Bode, H.W., 1945. Network Analysis and Feedback Amplifier Design. Van Nostrand.
Borne, P., 1992. Laplace transforms and Bode diagrams. Concise Encyclopedia Modell. Simul. 237–239.
Chen, C.F., 1967. A remark on Polonnikov's approach to generalized Bode diagrams. IEEE Trans. Aerospace Electr. Syst. 3 (1), 136–138.
Hahn, J., Edison, T., Edgar, T., 2001. A note on stability analysis using Bode plots. Chem. Eng. Educ. Summer, 208–211.
Hassibi, B., 2003. On the robustness of LMS filters. In: Haykin, S., Widrow, B. (Eds.), Least-Mean-Square Adaptive Filters. John Wiley & Sons, NY.
Hassibi, B., Erdogan, A.T., Kailath, T., 2006. MIMO linear equalization with an H∞ criterion. IEEE Trans. Signal Proc. 54 (9), 499–511.
Huang, J., Li, Z., Yann Liaw, B., Zhang, J., 2016. Graphical analysis of electrochemical impedance spectroscopy data in Bode and Nyquist representations. J. Power Sources 309, 82–98.
Middlebrook, R.D., 1975. Measurement of loop gain in feedback systems. Int. J. Electron. 38 (4), 485–512.
Morgan, K.D., Zheng, Y., Bush, H., Noehren, B., 2016. Nyquist and Bode stability criteria to assess changes in dynamic knee stability in healthy and anterior cruciate ligament reconstructed individuals during walking. J. Biomech. 49 (9), 1686–1691.
Moussavi, Z., 2007. Introduction for special issues: respiratory sound analysis. IEEE Eng. Med. Biol. Mag. 26 (1), 1–5.
Naito, T., Hidaka, K., Xin, J., Ohmori, H., Sano, A., 2000. Adaptive equalization based on internal model principle for time-varying fading channels. IEEE Symposium on Adaptive Systems for Signal Processing, Communications, and Control, Chateau Lake Louise, Alberta, Canada, pp. 363–368.
Nakhmani, A., Lichtsinder, M., Zeheb, E., 2011. Generalized Bode envelopes and generalized Nyquist theorem for analysis of uncertain systems. Int. J. Robust Nonlinear Control 21, 752–767.
Ohno, T., Sano, A., Ohmori, H., 2009. New simple filtered-x algorithm for parallel Hammerstein systems. ICCAS-SICE Int. Joint Conf., Fukuoka, Japan, pp. 4503–4507.
Pavlov, A., van de Wouw, N., Nijmeijer, H., 2006. Frequency response functions and Bode plots for nonlinear convergent systems. In: Proceedings of the IEEE Conf. Decision and Control, San Diego, CA, pp. 3765–3770.
Sayed, A.H., 2003. Fundamentals of Adaptive Filtering. John Wiley & Sons, NY.
Terman, F.E., 1943. Radio Engineers' Handbook. McGraw-Hill, NY.

8 Krohn-Manger-Nichols chart

8.1 Introduction

In Chapter 6, Nyquist Plot, we started the frequency domain methods. So far we have studied the Nyquist and Bode methods. In this chapter we introduce what is known in the literature as the Nichols chart, as the last frequency domain framework in this first undergraduate course. As we shall discuss, the method can best be called the Krohn-Manger-Nichols (KMN) chart. Traditionally the KMN method is mostly used in the context of robust Quantitative Feedback Theory (QFT¹) control. Today, the theory of robust control falls into six main branches, which you will study (to some extent, not completely!) in graduate studies. These are QFT, Lyapunov-based results, H∞ (Hp), L1 (Lp), μ, Kharitonov-type results, and the seventh branch of miscellaneous approaches like the Rouche-type results, gap metric theory, and probabilistic, convex, and random optimization techniques. The tools in the seventh branch may not be less important than the other tools, and in particular scenarios they may be even more powerful; however, for certain reasons which go outside the scope of this book they have found restricted applications; see Further Readings. We embark by mentioning that in the undergraduate texts of the period 1970–2017 the KMN method barely appears in more than a few pages, and even that was completely ignored by many instructors. On the other hand, a graduate course on QFT (which is in the context of the KMN chart) is offered by a small percentage of universities, and a small percentage of researchers are active in or know of this technique at all. To understand the reason for this situation we should take a glimpse at the history of robust control methodologies. The first of such methods was sliding-mode control, followed by the circle criterion and similar and related results. These methods have a striking² mathematical exposition and thus naturally captured the attention of researchers.
The QFT method, which was proposed after the aforementioned techniques, thus had noticeable rivals, at least with regard to attracting the attention of researchers. With the passage of time, other methods (as mentioned in the previous paragraph) appeared, roughly from the late 1970s onward, and this shifted the attention of researchers even further away from the QFT method. Speaking more precisely, we can summarize the reasons as: (1) In the 1960s, before the MATLAB® era, the method had to be practiced by hand, it was highly graphical, and it needed heavy computations; (2) The alternative approaches had (and have) a more glamorous mathematical exposition, and this probably hinted at their superiority over QFT; (3) The alternative methodologies in some respects do indeed outperform the QFT method, both in formulation and solution. As such, for the graduate course on robust control, given the restricted time discussed in Section 1.16 of Chapter 1, the preference of faculty members has been to focus on the H∞ and μ methods (and this is not totally undue). This in turn persuaded instructors to ignore the KMN method in undergraduate studies. All in all, since the 1970s the KMN method has sunk into a state of near obsolescence and oblivion. Nevertheless, the small percentage of researchers who remained faithful to it endeavored greatly and, mainly since the turn of the 21st century, good new results have appeared. See Further Readings.

We give a comprehensive treatment of the KMN chart in line with the Nyquist plot and Bode diagram of Chapters 6 and 7. The covered materials are the same. However, the volume of this chapter is smaller and we have fewer examples only because we have fully developed the basic ideas in the previous chapters. It is noteworthy that, as in the previous Chapters 6 and 7, in the literature there are many false guidelines, especially regarding the GM, PM, and stability determination in the context of the KMN chart. We comment on all these issues in this chapter. In the next chapter we shall study the design procedure in the frequency domain. It is due to stress that we can perform the dynamic controller design in all the frameworks of the Nyquist, Bode, and KMN plots. Item (1) mentioned in the previous paragraph is true also for the Bode diagram and the Nyquist plot. However, since frequency is the hidden parameter in the Nyquist and KMN plots, they are not as nice as the Bode diagram. In this undergraduate course we are not interested in computerized solutions performed with the help of a computer, but in step-by-step solutions by hand. Hence, we prefer the Bode context for the design in Chapter 9.

¹ Beware that in the research literature the acronym QFT also stands for Quantum Field Theory in physics and QuantiFERON-TB in medical sciences.
² The words striking and glamorous, which will come shortly, have a relative meaning. For mathematicians these issues are at most moderate, if not lower. However, for average engineering departments these words seem to be exact.

Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00008-2 © 2017 Elsevier Inc. All rights reserved.
We stress that if we want a computerized solution then it does not really matter much which context we choose for the problem formulation, since the contexts are transformable to each other; this will be better understood in Chapter 10, where we present a generic problem formulation. However, with regard to the computational side of the problem the choice of the context matters and is yet to be investigated in the future. Let us come back to the theme of this chapter. If the frequency response information is depicted on the Lm (log magnitude) versus phase diagram superimposed by the M- and N-contours, we have the so-called Nichols chart. About this name we must mention that the reality is that in a joint work, in Chapter 4 of (James et al., 1947), N. B. Nichols, W. P. Manger, and E. H. Krohn offered various contributions under the title "general design principles for servomechanisms." In this work the authors explicitly mention which part of the work is done by which author. The essence of what is today known as the Nichols chart was exclusively developed by Krohn. Of course he shared an idea of part of the work with Manger, as we shall explain. The chart was further used for dynamic controller design by Nichols. Attributing the method solely to Nichols is quite unfair, but this was, alas, done in the subsequent literature, and thus the contemporary new generation of the control society is unaware of the reality. It may be a matter of opinion how we should assess the degree of contribution of the aforementioned authors to the whole method. Which one is the fairest: Krohn chart,


Krohn-Manger chart, Krohn-Manger-Nichols (KMN) chart, or a permutation of it? The reader's judgment may be different from ours, but we do believe that you should not attribute it to Nichols alone! We encourage you to have a look at the original work. The KMN chart pivots on M- and N-contours. In the polar plane (i.e., where |L| versus ∠L was used in the Nyquist plot) the loci of the points where |T| and ∠T are constant are called M-circles and N-circles, respectively. When we transform the polar plane to the plane of Lm versus phase, these circles are transformed to some contours, called M-contours and N-contours, respectively. As such, we shall start with the M- and N-circles in the polar plane. (Recall that the Nyquist plot is also called the polar plot.) Yet, we start with a more general picture, in which we first introduce and interpret the |S|-constant loci, where S is the sensitivity function. We suppose throughout that the closed-loop system is a negative unity feedback. The case where the gain of the feedback loop is different from −1 (be it either static or dynamic) will be discussed in the Worked-out Problem 8.21. Hence in the following we study S-circles, M-circles, and N-circles in Sections 8.2–8.4. The M- and N-contours and the KMN chart are introduced in Sections 8.5 and 8.6, along with some details. In Section 8.7 we discuss the system features: GM, PM, DM, BW, and stability. Special attention is paid to the case of multiple crossover frequencies and NMP systems. The high sensitivity region is studied in Section 8.8. Finally, relations with the Bode diagram, Nyquist plot, and root locus are considered in Section 8.9. The summary, further readings, worked-out problems, and exercises are presented in Sections 8.10–8.13.

8.2 S-Circles

Given the negative unity-feedback structure, it is known that the sensitivity function is given by S = 1/(1 + L). If |S| is constant, then |1 + L| = 1/|S| is constant. Thus, the |S|-constant loci are circles centered at the point −1 with radii 1/|S|, the so-called S-circles. It is noteworthy that the smaller the circles, the larger the corresponding |S| values. Intuitively, this is clear and correct, because we get closer to the critical point (−1, 0), as shown in Fig. 8.1.

Figure 8.1 S-Circles.
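The relation |S| = 1/|1 + L| behind these circles can be sanity-checked numerically (a sketch using numpy; not part of the text):

```python
import numpy as np

# Every point L on a circle of radius r centered at -1 yields
# |S| = 1/|1 + L| = 1/r; here r = 0.5, i.e., the S0 = 2 circle.
r = 0.5
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
L = -1 + r * np.exp(1j * theta)
S = 1 / (1 + L)
print(np.allclose(np.abs(S), 1 / r))   # prints True
```

Halving r doubles |S|, which is exactly the "smaller circle, larger |S|" observation above.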


The interpretation is as follows:
1. If we want the sensitivity of the system to be less than S0, the Nyquist plot must be outside the S0-circle.
2. If we want the sensitivity of the system to be less than S0 at ω0, then the Nyquist plot at ω0, i.e., L(jω0), must be outside the S0-circle.
3. If the Nyquist plot is tangent to the S0-circle at ω0, the maximum sensitivity of the system is S0, occurring at ω0.
4. If the Nyquist plot intersects the S0-circle at ω1, ω2, the sensitivity of the system is S0 at ω1, ω2.

We close this section by adding that S-circles are among the classical results in the literature. Unfortunately no reference is cited for them and we do not know who first introduced them and when. (Question: What are the |S|-constant loci in the Bode diagram context?) Next we switch to M- and N-circles. We start with M-circles in the ensuing section.

8.3 M-Circles

With the previous convention, i.e., the negative unity-feedback closed-loop system, the closed-loop transfer function is L/(1 + L). Recall that this is the complementary sensitivity function, denoted by T. Now, let L = X + jY and M := |T|. The locus of constant M = |T| is then given by (M² − 1)Y² + (M² − 1)X² + 2M²X + M² = 0. It is observed that for M = 1 this equation results in X = −1/2. If M ≠ 1, the locus can be reformulated as (X + M²/(M² − 1))² + Y² = M²/(M² − 1)². That is to say, the loci of constant M are circles centered at (−M²/(M² − 1), 0) with radii |M/(M² − 1)|. These are the so-called M-circles. It should be noted that for M > 1 the M-circles are to the left of the X = −1/2 line, whereas for M < 1 they are to the right of this line; see Fig. 8.2.

Figure 8.2 M-Circles.
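A quick numerical check of the M-circle formula (a sketch using numpy; not part of the text):

```python
import numpy as np

# Points on the circle centered at (-M^2/(M^2-1), 0) with radius
# |M/(M^2-1)| should all satisfy |T| = |L/(1+L)| = M.
M = 1.3
cx, rad = -M**2 / (M**2 - 1), abs(M / (M**2 - 1))
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
L = cx + rad * np.exp(1j * theta)
T = L / (1 + L)
print(np.allclose(np.abs(T), M))   # prints True
```

Note that with M = 1.3 > 1 the whole circle indeed lies to the left of X = −1/2, as stated above.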


M-circles were introduced by W. P. Manger in 1947 in the joint paper that we discussed in the introduction. (This is the shared idea that we mentioned in Section 8.1.) As in the case of S-circles, the interpretation of M-circles is as follows:
1. If we want the complementary sensitivity of the system to be less than T0, the Nyquist plot must be outside the T0-circle.
2. If we want the complementary sensitivity of the system to be less than T0 at ω0, then the Nyquist plot at ω0, i.e., L(jω0), must be outside the T0-circle.
3. If the Nyquist plot is tangent to the T0-circle at ω0, the maximum complementary sensitivity of the system, i.e., the maximum gain of the closed-loop transfer function, is T0, occurring at ω0.
4. If the Nyquist plot intersects the T0-circle at ω1, ω2, the complementary sensitivity of the system is T0 at ω1, ω2.

Question 8.1: Because T(ω) + S(ω) = 1, ∀ω, can we conclude that if either of T or S achieves its maximum at ω0, then the other achieves its minimum at the same frequency? If not, what is the correct statement?

Remark 8.1: An important point regarding the M-circles is that if a line is drawn from the origin tangent to a certain M-circle, the sine of the angle made with the negative real axis is the reciprocal of the respective M value, sin ψ = 1/M. Moreover, the distance from the point of tangency to the j-axis is 1, i.e., the projection of the point of tangency on the negative real axis is the point −1. Remarkably, this is the case for all of the M-circles, as shown in Fig. 8.3. See also Exercise 8.11.

The above remark suggests the procedure for designing the gain so as to achieve a maximum M.

Design Procedure: When it is desired to have a certain maximum M by adjustment of the pure gain, the design procedure is as follows, pictorially depicted in Fig. 8.4.
1. The normalized polar (Nyquist) plot is drawn.
2. The angle ψ = sin⁻¹(1/M) with the negative real axis is drawn.
3. A circle tangent to both the polar plot and the above-drawn angle line, with its center on the negative real axis, is fitted. From the point of tangency to the angle ψ a perpendicular line to the negative real axis is drawn.

Figure 8.3 Characteristic of M-circles.

Figure 8.3 Characteristic of M-circles.


4. In order for this circle to be the designated M-circle, the distance of this line from the origin, OA, must be equal to 1. This happens if K is chosen as K = 1/OA (as in the case of the gain margin).

Figure 8.4 Gain adjustment using M-circles.

Example 8.1: Given L(jω) = K/[jω(1 + jω)], find K such that M = 1.4. Following the above procedure we obtain ψ = sin⁻¹(1/1.4) = 45.6° and K = 1/0.63 = 1.58. Note that the answer is inaccurate and approximate since it is the output of a depictive method.
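The graphical answer can be cross-checked numerically (a sketch, not from the text): search for the K whose closed-loop peak gain is exactly M = 1.4.

```python
import numpy as np

# Peak of |T| = |L/(1+L)| for L(jw) = K/[jw(1 + jw)].
def peak_M(K, w=np.logspace(-2, 2, 20000)):
    L = K / (1j * w * (1 + 1j * w))
    return np.max(np.abs(L / (1 + L)))

# Bisection: the peak grows monotonically with K for this system.
lo, hi = 0.25, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if peak_M(mid) < 1.4 else (lo, mid)
K = 0.5 * (lo + hi)
print(round(K, 2))   # 1.67; the graphical estimate above was 1.58
```

The exact value, from the standard second-order resonance formula, is K ≈ 1.666, which confirms the stated inaccuracy of the depictive method.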

The above design procedure is a classical result and can be found in various sources which preceded the aforementioned work of 1947. Unfortunately in the literature no reference is cited for it; we do not know who first proposed it and when. Of course its proof is straightforward and simple. N-circles are presented in the subsequent section.

8.4 N-Circles

Following the same system representation, i.e., the negative unity-feedback structure, let T := M(ω)e^{jα(ω)}. Thus M(ω)e^{jα(ω)} = (X + jY)/(1 + X + jY). If α is constant then N := tan α is constant. It can easily be shown that (X + 1/2)² + (Y − 1/(2N))² = (N² + 1)/(4N²). That is, the loci of constant α (and thus constant N) are circles centered at (−1/2, 1/(2N)) with radii √(N² + 1)/(2N). These are the so-called N-circles; see Fig. 8.5.

Figure 8.5 N-Circles.

Remark 8.2: It is easily seen that the N-circles N1 and N2, where N1 = tan α1, N2 = tan α2, and α2 − α1 = 180°, are the same.

If the M- and N-circles are superimposed on a polar plane, and a polar (Nyquist) plot, i.e., an open-loop plot, is plotted, then the magnitude and phase of the closed-loop system at each frequency can be (roughly) determined. Before the introduction of the computer software MATLAB® all the design methods were depictive. At that time this method was at issue, but not any longer. We wrap up this section by adding that N-circles were not introduced in the aforementioned work of 1947 but are ubiquitous in the subsequent literature. We do not know who first proposed them. The above developments were preparations for the KMN chart, which we can now formally start.
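The N-circle algebra above admits the same kind of numerical sanity check as the M-circles (a sketch using numpy; not part of the text):

```python
import numpy as np

# On the circle centered at (-1/2, 1/(2N)) with radius sqrt(N^2+1)/(2N),
# the phase of T = L/(1+L) satisfies tan(angle T) = N, except at the two
# degenerate points L = 0 and L = -1, which also lie on the circle.
N = np.tan(np.deg2rad(30.0))
cx, cy = -0.5, 1 / (2 * N)
rad = np.sqrt(N**2 + 1) / (2 * N)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
L = (cx + rad * np.cos(theta)) + 1j * (cy + rad * np.sin(theta))
T = L / (1 + L)
mask = np.abs(L.imag) > 1e-3              # skip the degenerate points
print(np.allclose(np.tan(np.angle(T[mask])), N))   # prints True
```

Since tan has period 180°, the same circle also carries the phase α − 180°, which is exactly the content of Remark 8.2.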

8.5 M- and N-Contours

As in the case of drawing the Nyquist diagram, the problem associated with the polar plots and the use of M- and N-circles is that a change in the gain (i.e., multiplication or division of the gain by a constant) results in a certain amount of deformation of the polar plot. There we saw that this problem was rectified by the use of the log magnitude instead of the magnitude. This idea was also used by E. H. Krohn, who suggested the Lm versus phase diagram. Krohn originally used the name ψ-contours³ instead of N-contours. However, the community then almost unanimously adopted the name N-contours, to go with M-contours, and we do so as well. Moreover, Krohn directly started from ψ-contours, i.e., he did not introduce ψ-circles, although he did introduce M-circles. Anyway, by this transformation of the vertical axis, the M- and N-circles are also transformed. The results are called the M- and N-contours, depicted in the sequel in Figs. 8.6 and 8.7. The figure repeats every 360 deg and is symmetric with respect to each of the multiples of the ±180 deg lines. It looks like Fig. 8.6. In this figure M1 < M2 < 0 < M3, and −αi is used instead of 360 − αi for simplicity of notation (they refer to the same point); the same for −βi, −γi. If these contours are superimposed on the open-loop Lm versus phase grid of the system, then the resulting plane is the Krohn chart, which may also be called the KMN chart due to the contributions of Manger and Nichols. An expansion of the [−320, 0] deg interval of the KMN chart is shown in Fig. 8.7. Note that once upon a time, before the MATLAB® era, such sheets were commercially available.

³ In the setting of Problem 8.1 we use α instead of ψ.

Figure 8.6 M- and N-contours.

8.6 KMN chart

In the KMN chart shown in Fig. 8.7, the M- and N-contours are thus the loci of constant log magnitude and phase of the closed-loop system, shown by solid curved lines, whereas the straight horizontal and vertical grid lines are the log magnitude and phase scales of the open-loop system, shown by dashed and dotted lines. More precisely, the 0-dB M-contour has a U shape and is tangential to the −90 and −270 deg lines. M-contours with positive values are inside this U-shaped region, and M-contours with negative values are outside it. The M = ∞ contour is the central point, which is the critical point of the Nyquist plot, and M = −∞ is in the lowest part of the plane. The N-contours are semi-straight lines originating from the critical point, ending as straight parallel lines at the bottom of the page. The scales on the left vertical side of the figure are the open-loop Lm values, the scales on the right vertical side represent the (closed-loop) M-contour values, the lower horizontal scales denote the open-loop phase values, and the scales on the N-contours are the respective closed-loop phase values. The derivation is explained in Problems 8.1, 8.2, and Exercise 8.1.

Figure 8.7 Expansion of part of the KMN chart plane.

If the log magnitude versus phase of an open-loop system is drawn on the KMN chart plane, it is referred to as the KMN chart/plot/curve of the system. Construction or drawing of the KMN curve by hand is semistraightforward in the KMN chart context. More precisely, the basic elements' approach is somewhat helpful in this context: at every frequency the magnitudes and phases of all the constituting elements are added to each other. However, the construction is not really straightforward, but only semistraightforward, since: (1) The basic elements do not have a "nice and handy" shape (as those in the Bode diagram context have); (2) The result of the aforementioned additions may pathologically depend on the way the frequency plays a role in the basic elements. This is well illustrated in the Worked-Out Problems 8.23–8.25. In fact, construction of the system plot is easiest in the Bode diagram context and hardest in the Nyquist plot context; the KMN chart lies in between. Anyway, because the procedure is clear (although not that nice), for the sake of brevity we leave it to the reader in Exercises 8.14 and 8.15.

From the KMN chart the same information is obtained that can be obtained from the Nyquist plot (superimposed by the M- and N-circles). More precisely, the closed-loop information is obtained from the open-loop information at the points/frequencies where the polar plot intersects or is tangent to the M- and N-contours. The difference is in the simplicity of the design procedure: multiplication is substituted by addition. Hence, if the gain is larger than one there is an upward shift, and if the gain is less than one there is a downward shift of the plot, and that is all for pure gain design. However, for dynamic compensator design the procedure is cumbersome and complicated. The reason for this is that, similarly to the Nyquist plot (recall Examples 6.3 and 6.4), in the KMN plot frequency is the hidden parameter of the curve. On the other hand, in the design procedure some particular frequencies are among the design objectives. In the rest of this chapter we use the command nichols(L) to draw the KMN plot of the system with loop gain L in the standard 1-DOF structure.
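The "multiplication becomes addition" property is easy to verify numerically (a sketch, not from the text): a pure gain K moves the KMN curve vertically by 20 log10 K dB and leaves the phase untouched.

```python
import numpy as np

# Gain design on the KMN (Nichols) chart: multiplying L by K > 0 shifts
# the Lm-versus-phase curve up by 20*log10(K) dB at every frequency,
# without changing the phase.  Illustrated on L(s) = 2/(s+1)^3.
w = np.logspace(-2, 2, 5)
L = 2 / (1 + 1j * w) ** 3
K = 1.7783                          # the gain designed in Example 8.2
Lm, ph = 20 * np.log10(np.abs(L)), np.angle(L, deg=True)
LmK, phK = 20 * np.log10(np.abs(K * L)), np.angle(K * L, deg=True)
print(np.allclose(LmK - Lm, 20 * np.log10(K)), np.allclose(phK, ph))
```

This is why pure gain design on the chart is a mere vertical translation, whereas in the polar plane the same gain change deforms the plot.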

Example 8.2: Given P(s) = 2/(s + 1)³, it is desired to design a pure gain controller such that M = 2 ≅ 6 dB. The KMN chart is depicted as a solid curve in Fig. 8.8. It is seen that it must be shifted up by 5 dB. That is, 20 log K = 5 and hence K = 1.7783. More precisely, the system KP(s) (whose gain is K2 = KK1 = 2K = 3.5566), depicted as a dashed line, has M = 2 ≅ 6 dB. It is noteworthy that this procedure, as with all depictive methods, has some inherent inaccuracy. We should clarify that the 5-dB distance can be read on any vertical line connecting the two curves. The distance is constant and can be read off by zooming such that the curves intersect the left vertical axis of the figure, as in the right panel.

Figure 8.8 Example 8.2. Left: Whole picture; Right: Its magnification.
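A numerical cross-check of this design (a sketch; the text reads the 5-dB shift off the chart instead):

```python
import numpy as np

# With K = 10^(5/20) = 1.7783, the peak of |KP/(1 + KP)| for
# P(s) = 2/(s+1)^3 should be about 2, i.e., 6 dB.
w = np.logspace(-2, 2, 200000)
K = 10 ** (5 / 20)
L = K * 2 / (1 + 1j * w) ** 3
M_peak = np.max(np.abs(L / (1 + L)))
print(round(float(M_peak), 1))   # 2.0
```

The peak occurs near ω ≈ 1.3 rad/s, where the dashed curve touches the 6-dB M-contour.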

8.7 System features: GM, PM, DM, BW, stability

In this section the system features, i.e., the GM, PM, DM, BW, and stability, are studied in the context of the KMN chart. Because we have fully studied these features in previous chapters, here we only briefly discuss these issues.

8.7.1 Gain, phase, and delay margins

To introduce these concepts in the context of the KMN chart, we first note that: (1) the gain crossover point is the point at which the open-loop plot crosses the 0-dB axis; (2) the phase crossover point is the point where the open-loop locus intersects any of the ±180(2h + 1) deg axes; (3) the critical point is the central point in the KMN chart, i.e., the (0 dB, ±180(2h + 1) deg) point. Note that we thus actually have several critical points. Therefore, in the simplest case, where there is one gain crossover frequency and one phase crossover frequency, the answer is as follows. The phase margin is the distance (in degrees) between the gain crossover point and the critical point. If the crossover point is to the right of the critical point, PM > 0; otherwise PM < 0. The gain margin is the distance (in decibels) between the phase crossover point and the critical point. If the crossover point is below the critical point, GM > 0; otherwise GM < 0. Finally, the DM is computed based on its definition, i.e., DM = PM/ωgc; see also Section 6.3.2 for further details. An important point is also discussed in the Worked-Out Problem 8.16.

A system may have several phase crossover frequencies and thus several gain margins, if they result in a change in the stability condition. Similarly, a system may have several gain crossover frequencies and thus several phase margins, if they result in a change in the stability condition. As we discussed in the previous chapters, not every phase/gain crossover frequency may result in an acceptable gain/phase margin, because it may not change the stability condition of the system. We established these results with the help of several instructive examples in the previous chapters. For the sake of brevity we do not repeat all those examples here. It will suffice to present only a few examples and worked-out problems to illustrate what the KMN chart of a complicated system looks like.


8.7.2 Stability

As in the case of Bode diagrams, there are several false guidelines in the literature for stability determination in the context of the KMN chart. To give the correct answer we recall Section 6.3.3: a stable system may have any combination of signs of the GM and PM; the same is true for unstable systems. Thus, the correct answer is that it is not possible to determine the stability of the system in the context of the KMN chart if we merely consider the signs of the GM and PM. This can easily be verified if we consider the same examples that we considered in the previous chapters. Because this is straightforward we leave it to the reader; see also Exercise 8.18.

Example 8.3: Find the PM and GM of the system of Example 8.2, i.e., P(s) = 2/(s + 1)³. The KMN chart is offered in Fig. 8.9. Visually read from the chart, they are almost 12 dB and 67 deg. Computationally read from the plot by clicking on the crossover points, more accurate answers may be obtained, depending on the accuracy of the clicking point. The best computed answer is found using the command "margin" of MATLAB®, which outputs the system features as GM = 12 dB, ωpc = 1.73 rad/s, PM = 67.7 deg, ωgc = 0.766 rad/s. Recall that frequency is the hidden parameter along the plot in the KMN chart. To find the frequency of a point we must click on that point in the chart produced by MATLAB®.

Figure 8.9 KMN chart of Example 8.3.
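For this simple plant the margins read off the chart can also be verified analytically (a sketch, not from the text; the book itself uses MATLAB's `margin`):

```python
import numpy as np

# L(jw) = 2/(1 + jw)^3: phase = -3*atan(w), |L| = 2/(1 + w^2)^(3/2).
w_pc = np.sqrt(3.0)                       # -3*atan(w_pc) = -180 deg
GM_db = -20 * np.log10(2 / (1 + w_pc**2) ** 1.5)
w_gc = np.sqrt(2 ** (2 / 3) - 1)          # |L(j w_gc)| = 1
PM_deg = 180 - 3 * np.degrees(np.arctan(w_gc))
print(round(float(GM_db), 2), round(float(w_pc), 2),
      round(float(PM_deg), 1), round(float(w_gc), 3))
# 12.04 1.73 67.6 0.766 -- matching the output of "margin"
```

The phase crossover condition 3 atan(ω) = 180° gives ωpc = √3 in closed form, which is why the computed values agree so closely with the clicked ones.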


Example 8.4: Study the following dynamical system in the KMN chart, with K = 1:

L(s) = K × 0.002 × (s + 0.02)(s + 0.05)(s + 5)(s + 10) / [(s − 0.0005)(s + 0.001)(s + 0.01)(s + 0.2)(s + 1)(s + 100)²]

With K = 1 the KMN chart is shown in Fig. 8.10. It is observed that there are six gain crossover frequencies and one phase crossover frequency. The "critical line" crossings which designate the six gain crossover frequencies are the jω-axis crossings in the root locus, the real-axis crossings in the Nyquist plot, and the critical line crossings in the Bode phase diagram (i.e., any multiple of ±180(2h + 1) deg). These are the points A–F which we specified in Example 5.12⁴ (with another value of the gain). Note that for strictly proper systems like this system (type 0) the point F is the lowest point, which represents the highest bounded frequency. The point A is the top or the highest point, which represents the lowest frequency. Because reading the points by clicking on the figure is inaccurate, we use the command "allmargin" of MATLAB® to find these points as follows.

Figure 8.10 KMN chart of Example 8.4. Left: Whole plot; Right: Magnification of part of it.

Gain margin values: [0.1000, 3.3186, 144.4759, 2.2990e+04, 1.5939e+07, 7.2479e+08]; GM frequencies: [0, 0.0039, 0.0220, 0.4403, 6.8626, 84.9334]; Phase margin: 8.0655; PM frequency: 0.0021. One of the crossings is above the critical point (GM < 0) and five of the crossings are below the critical point (GM > 0). The interpretation is thus that there is one crossing point with a smaller value of the gain (A) and there are five crossing points with larger values of the gain (B–F). The system has one PM, which is positive and is measured at the point Z. As in the case of the Bode diagram, the stability of the system cannot be inferred from the KMN chart if only the signs of the GM and PM are considered. A higher level of analysis is necessitated and other factors have to be considered. However, at the moment we do not know those factors.

⁴ Or the points A′–F′ which we specified in Example 6.14 with the same value of the gain. (Question: What is the point G′?)

Question 8.2: According to the plot shown in Fig. 8.10, in this example the low-frequency region of the curve (which is around the highest point in the figure) does not intersect the M- and N-contours. This is even more transparent in the Worked-out Problems 8.5, 8.12, 8.13, etc. Why?

Example 8.5: Study the following dynamical system in the KMN chart: L(s) = K((s − 0.1)² + 1)/[(s − 0.3)((s − 0.2)² + 4)] with K = 1. The KMN chart is offered in Fig. 8.11. The system has three GMs (one negative and two positive) marked with unlettered arrows. As in the previous example, the interpretation is that with increasing (decreasing) the gain there will be two (one) jω-axis crossings in the root locus of the system. The system also has two PMs (one positive and one negative). These are the points Y and Z designated in Example 6.26. As in the previous examples, the stability of the system cannot be deduced; to this end we have to resort to the Nyquist plot. (Question: The plot has two ends. Which end of the given plot denotes its starting point and which end denotes its end point?)

Figure 8.11 KMN chart of Example 8.5.

Krohn-Manger-Nichols chart


Remark 8.3: One of the claims of the QFT method from the outset has been to provide a "transparent" method for the inclusion of time-domain features in the frequency-domain design. In particular, with a condition like that of Section 10.1 of Chapter 10 the maximum percent overshoot is translated to an M-contour. However, the M-contour of the KMN context has its obvious counterparts in the Nyquist plot and Bode diagram contexts. (Question: What are the aforementioned counterparts?) Thus, as we said in Section 8.1, if the controller design procedure is going to be implemented by a computer algorithm there is not much difference between them (except from the numerical side of the problem). However: (1) for hand construction the Bode diagram is preferred, and (2) for analytical (as opposed to computational) stability analysis and verification the Nyquist plot is used.

8.7.3 Bandwidth

For a causal system of type one or higher (and thus tangent to the 0 dB M-contour at low frequencies in the upper part of the plane, and down in the lower part of the plane at high frequencies), the bandwidth is the smallest frequency at which its closed-loop magnitude intersects the −3 dB line. Thus for such systems we should look for the intersection of the KMN plot of the system with the −3 dB M-contour. Straightforward determination of the bandwidth for such systems seems to be the only tractable usage of the KMN chart.

Example 8.6: Find the bandwidth of the system P(s) = 2/[s(s + 1)³]. Note that the system satisfies the condition. The closed-loop BW is thus easily obtained as the frequency of the intersection of the KMN plot of the system with the −3 dB M-contour, as depicted in Fig. 8.12. Recall that frequency is the hidden parameter of the plot and is found by clicking on the point of interest. The answer is BW = ω = 1.09.
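This reading can be cross-checked numerically. The following is a plain-Python sketch (the helper name `T` is ours): since the DC gain of this type-1 loop is 1, we bisect for the frequency at which the closed-loop gain falls to 1/√2.

```python
import math

def T(w):
    """Closed-loop gain |T(jw)|, T = L/(1+L), for L(s) = 2/[s(s+1)^3]."""
    s = 1j * w
    L = 2 / (s * (s + 1) ** 3)
    return abs(L / (1 + L))

# The DC gain of this type-1 loop is 1 (0 dB), so the bandwidth is the
# frequency where |T| falls to 1/sqrt(2); bisect on a bracketing interval.
lo, hi = 0.5, 2.0   # |T(0.5)| > 1/sqrt(2) > |T(2)|
for _ in range(60):
    mid = (lo + hi) / 2
    if T(mid) > 1 / math.sqrt(2):
        lo = mid
    else:
        hi = mid
bw = (lo + hi) / 2
print(round(bw, 2))  # 1.09, matching the chart reading
```
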

Figure 8.12 Example 8.6.


Remark 8.4: The abovementioned usage of the KMN chart is only for the sake of illustrating how it was once usable for this purpose in the pre-MATLAB era, when everything had to be done by hand. Clearly, now we simply use the command "bandwidth" in MATLAB. Even in that era, the method failed for systems which did not satisfy the aforementioned condition. This is shown in the following example.

Example 8.7: Find the frequency of the intersection of the KMN plot of the system of Example 8.2, i.e., P(s) = 2/(s + 1)³, with the −3 dB M-contour. The KMN chart is provided in Fig. 8.13. By clicking on the designated point we find that the frequency is 1.34, which is different from the true bandwidth of the system, BW ≈ 1.54. Note that the plot has two intersections with the −3 dB M-contour and we chose the larger frequency. (Why?)

Figure 8.13 KMN chart of Example 8.7.

Remark 8.5: The discrepancy is slight in this system. From a theoretical standpoint, however, it can be quite noticeable, e.g., a factor of 10. This is shown in the ensuing example.


Example 8.8: For the system P = 10/(s − 9) the closed-loop BW is 1, whereas the −3 dB M-contour intersection frequency is about 14.5. For such systems the intersection should be computed with the M-contour whose value is 3 dB less than the M-contour to which the system is tangent at the low-frequency end. The KMN chart of this system is given in Fig. 8.14. The closed-loop transfer function is T = 10/(s + 1) and thus the central M-contour to which the plot is tangent at the low-frequency end has M = 20 log 10 = 20 dB, whereas the second central M-contour has M = 20 − 3 = 17 dB. The intersection frequency with this M-contour correctly gives the BW of the system as ω = 1 rad/s.

Figure 8.14 Example 8.8. Correct computation of the bandwidth.

Again, we should add that this is only for the purpose of illustration. For such systems bandwidth computation via the KMN chart is clearly not a tractable method, since the available "sheets" did not have all the M-contour values (even MATLAB does not plot all these curves) and the plot had to be drawn quite carefully. Next we discuss the high sensitivity region in the KMN chart context.
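The two readings of Example 8.8 can be reproduced numerically; a sketch under the same assumptions (the helper names are ours), using the known closed loop T(s) = 10/(s + 1). The exact −3 dB crossing comes out near 14.1, consistent with the ≈14.5 read off the plot by clicking.

```python
import math

def Tmag(w):
    """|T(jw)| for the closed loop T(s) = 10/(s+1) of Example 8.8."""
    return 10 / math.hypot(1, w)

def crossing(level, lo, hi):
    """Bisect for the frequency where |T| falls to the given absolute level."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if Tmag(mid) > level:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Naive reading: intersection with the -3 dB M-contour (|T| = 10**(-3/20)).
naive = crossing(10 ** (-3 / 20), 1.0, 100.0)
# Correct reading: 3 dB below the 20 dB contour the plot is tangent to,
# i.e. the 17 dB contour (|T| = 10**(17/20)).
correct = crossing(10 ** (17 / 20), 0.1, 10.0)
print(round(naive, 1), round(correct, 2))  # about 14.1 and 1.0
```
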

8.8 The high sensitivity region

Based on the properties of the chart and the GM/PM concept in its context, we conclude that the surrounding of the critical point is the high sensitivity region.


More explanation is as follows. Because of the relation S + T = 1, when S is large in magnitude, T is also large, and vice versa; the sign does not matter. For instance S = −9 is a large S. The surrounding of the critical point (well inside the U-shaped region bordered by the 0 dB M-contour) is where T is positive and large and S is negative and large. For instance the 20 dB M-contour corresponds to T = 10, S = −9. The nearer to the critical point, the larger the S (in absolute value). Thus, if we want the sensitivity to be less than |S| at all frequencies, the plot must stay outside the M-contour with T = |S| + 1. Similarly, outside the U-shaped region M (in dB) is negative and thus T is positive and small. In particular the lower part of the plane has T ≈ 0, S ≈ 1. In the vicinity of the 0 dB M-contour (where T = 1) S ≈ 0, i.e., the minimum of S. There is another method for discussing sensitivity in the context of the KMN chart, which is presented in Problem 8.22.

Question 8.3: What part of the plane refers to a large positive S?

Question 8.4: With regard to pictorial sensitivity analysis, which of the frequency-domain contexts of Chapters 6–8 is superior to the others?

Finally we wrap up this section by stressing that the presentation of this section was brief because we have fully expanded the ideas in the previous Chapters 6 and 7. We encourage you to do all the exercises to develop the missing materials (like Exercises 8.14 and 8.15) along with those of Chapters 6 and/or 7. We solve many more examples regarding these issues in the Worked-out Problems in Section 8.12.
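The contour-avoidance rule above amounts to one line of arithmetic; a small sketch (the function name is ours):

```python
import math

def forbidden_contour_db(s_max):
    """M-contour (in dB) the KMN plot must stay outside of so that the
    sensitivity satisfies |S| <= s_max at all frequencies (T = s_max + 1)."""
    return 20 * math.log10(s_max + 1)

# The 20 dB contour corresponds to T = 10, i.e., S = 1 - T = -9:
print(round(forbidden_contour_db(9), 1))  # 20.0
```
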

8.9 Relation with Bode diagram, Nyquist plot, and root locus

In the previous chapters we have seen that there is a relation between the Bode diagram, the Nyquist plot, and the root locus. In this part we include the KMN chart as well. To present the relation we first note that the critical lines ±180(2h + 1) deg exist in each section of the KMN chart plane, as depicted in Fig. 8.6. The critical line crossings in the KMN chart are the same as the critical line crossings in the Bode diagram, which are the (negative) real axis crossings of the Nyquist plot, or the imaginary axis crossings in the root locus. On the other hand, the 0 dB line crossings in the KMN chart are the same as the 0 dB line crossings in the Bode diagram, which are the unit circle crossings of the Nyquist plot. The aforementioned crossings are illustrated in, for example, Examples 8.4 and 8.5 and the Worked-out Problems 8.6–8.9 and 8.13–8.16. It is instructive to clarify a point. Recall that, regardless of being acceptable or unacceptable, positive GMs are read off the ±180(2h + 1) deg lines (the negative real axis in the Nyquist context) and negative GMs are read off the ±360h deg lines (the positive real axis in the Nyquist context). The lines ±360h deg all exist in the KMN chart and by construction they intersect the 0 dB line as well.
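As a small numerical illustration of these correspondences, take L(s) = 2/(s + 1)³ of Example 8.2 (a plain-Python sketch; the closed-form crossover values follow from |L| and ∠L):

```python
import math

def L(w):
    """Open-loop frequency response of L(s) = 2/(s+1)^3 (Example 8.2)."""
    return 2 / (1j * w + 1) ** 3

# Phase crossover: angle(L) = -180 deg at w = sqrt(3) (3*atan(w) = 180 deg),
# i.e., the negative real-axis crossing of the Nyquist plot and the
# critical-line crossing of the KMN chart / Bode diagram.
w180 = math.sqrt(3)
print(round(L(w180).real, 4))  # -0.25, so GM = 20*log10(1/0.25) = 12 dB

# Gain crossover: |L| = 1, i.e., the unit-circle crossing of the Nyquist
# plot and the 0 dB line crossing of the KMN chart / Bode diagram.
wc = math.sqrt(2 ** (2 / 3) - 1)  # from |L| = 2/(1+w^2)^(3/2) = 1
print(round(abs(L(wc)), 4))       # 1.0
```
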


However, by consensus we always write the M-contour values of a KMN section/plot on one of the lines ±360h deg. Hence, if there is an intersection with one of the lines ±360h deg, the values (which are the negative GMs, i.e., GMs for negative gain values) are not easily readable unless we horizontally read them off the left/right vertical axis, which is a critical line ±180(2h + 1) deg. Alternatively, we can simply shift the plot by half a section (180 deg) to the right, which is equivalent to multiplying the system by −1. Now there will be intersection(s) with the critical lines ±180(2h + 1) deg and the intersection value(s) can be readily read off. This is well demonstrated in the Worked-out Problem 8.16.

8.10 Summary

Starting from the Nyquist plot and the Bode diagram in the previous Chapters 6 and 7, in this chapter we have studied the KMN chart of the system. This chart is the log magnitude of the system versus its phase. In the 1960s it was a classical tool for the analysis and synthesis of control systems, especially robust QFT systems. Since the 1970s it has not been taken seriously or used by most researchers and universities; we have discussed the reasons. However, the situation seems to have changed since the turn of the 21st century with the publication of several important results. Our exposition of the KMN method has covered all the materials of the previous Chapters 6 and 7. Namely, we studied its construction and the system features GM, PM, DM, BW, stability, and sensitivity. Special attention has been paid to the case of multiple crossover frequencies and NMP systems. Relations with the Bode diagram, Nyquist plot, and root locus were also discussed. Numerous worked-out problems to follow further demonstrate the details of the lessons and enhance their learning. A comparison between the three frequency-domain methods of Nyquist, Bode, and KMN was also briefly presented. We have argued that (1) for manual work the Bode diagram is preferred, whereas for stability analysis the Nyquist plot is utilized, and (2) for computerized controller design these methods are the same with regard to formulation since the contexts are transformable to each other. However, from the numerical side of the problem this issue needs further consideration.

8.11 Notes and further readings

1. In Chapter 4 of (James et al., 1947) Nichols considered dynamic controller design, in particular lead design, in the context of the Krohn chart. Because frequency is the hidden parameter of the curve, manual implementation of the method is now rather obsolete.
2. Robust QFT control was originally proposed for continuous-time SISO linear uncertain systems by I. M. Horowitz in the early 1960s. Through time it was extended to MIMO systems, time-varying systems, nonlinear systems, discrete-time systems, etc., with the explanation that the underlying mathematics is not as glamorous as that of other branches of robust control theory, namely H∞ (Hp), L1 (Lp), μ, Kharitonov, variable structure, backstepping, etc. The prevailing trend in the last decades of the 20th century showed no promising future for the field of robust QFT control. However, since the turn of the 21st century many good results have appeared, both in theory and in computer algorithms like the MATLAB QFT Toolbox. If the student opts to do graduate research in this field, the following references and their bibliographies are essential: (Alavi et al., 2007; Banos, 2007; Boje, 2003; Comasolivas et al., 2012; Horowitz, 1963; Houpis et al., 2005; Hwang and Yang, 2002; Jeyasenthil and Nataraj, 2013; Khaki-Sedigh and Lucas, 2000; Labibi and Mahdi Alavi, 2016; Lee et al., 2000; Moreno et al., 2011; Nataraj, 2002; Nataraj and Makwana, 2016; Safonov and Yaniv, 2009; Shafai et al., 1993; Yang, 2009; Yaniv, 1999).
3. Despite the fact that a small percentage of researchers are active in the field, the application area of robust QFT is quite vast. For instance the interested reader can see the subsequent papers and books, in addition to the references cited above: (Ibarra et al., 2015; Kerpenko and Sepehri, 2012; Tian and Nagurka, 2012; Ahn et al., 2007).
4. With regard to the branches of Robust Control Theory, we should say that to some extent the division depends on our taste. For instance, some do not consider the L1 theory an independent branch, but we do. Moreover, at a certain step of several of them we have to solve an optimization problem. Thus from this standpoint we may divide the methods of robust control into optimization-based and nonoptimization-based methods. Also, it is worth adding that the use of LMI optimization has been continuously on the rise and today it is pervasively used; see Appendix F for more explanation.
5. Some of the subbranches of the seventh branch have the potential to grow into an independent main branch. In particular, we mention two subbranches. (5.1) Following the advances in Matrix Perturbation Theory, which were achieved by mathematicians, they have recently been utilized in robust control analysis and synthesis as well. This sounds like a promising avenue of research. The interested reader is referred to Esna Ashari and Labibi (2012), Konstantinov et al. (2003), Liu et al. (2013), and the references therein. (5.2) The classical methods are all based on a worst-case analysis and design and are thus conservative, and the nominal performance is not the highest possible. In probabilistic methods the conservatism of the result is lower and the nominal performance is higher, however at the risk that there is no "guarantee" in the usual deterministic sense for the reliability of the design. Nevertheless, in certain applications where violation of the probabilistic distribution function of the uncertainties is not hazardous, the probabilistic methods have the potential of success and should thus naturally be preferred. Two good works in this direction are reported in Kim et al. (2013) and Shen and Braatz (2016), which propose a polynomial robust design for nonlinear systems with probabilistic uncertainties. Interesting new perspectives are also reported in Mesbahi (2001a, 2001b).
6. Development of a stability test in the KMN chart context is highly desirable, especially one that is easy to use.
7. Development of the counterparts of the results of this chapter for DAE, fractional-order, positive, constrained, multiscale, etc. systems is desirable.

8.12 Worked-out problems

In certain cases MATLAB produces wrong KMN plots. This is either in phase, or in phase and magnitude. An example of the former is Problem 8.24; some other


systems are discussed in Exercise 8.38. In the following problems we leave it to the reader to investigate whether the phase angles of the MATLAB-produced plots are correct, using phase analysis as in Chapter 6.

Problem 8.1: What is the relation between M and L, the magnitudes of the closed- and open-loop transfer functions? Let L(s) = |L(s)|e^{jφ(s)} =: Le^{jφ} and M(s) = |M(s)|e^{jα(s)} =: Me^{jα}. Hence from M(s) = L(s)/(1 + L(s)) one obtains Me^{jα} = Le^{jφ}/(1 + Le^{jφ}), leading to M = L/√(L² + 1 + 2L cos φ).

Problem 8.2: The above Problem 8.1 actually gives the equation of an M-contour. In this problem we provide the details for computing and plotting the 6 dB M-contour. We note that 20 log M = 6 means that M ≈ 2. Hence we have 2 = L/√(L² + 1 + 2L cos φ). This yields L² + (8/3)L cos φ + 4/3 = 0, restricted to values of L for which −1 ≤ cos φ ≤ 1. To find the details we let cos φ take its extreme values. With cos φ = 1 we have L = −2, −2/3, which must be discarded since L is by definition nonnegative. With cos φ = −1 we have L = 2, 2/3, which are valid, and thus the equation of this M-contour is given by cos φ = −(L² + 4/3)/[(8/3)L] in the above range. This function is periodic with period 360 deg. In the vicinity of odd multiples of 180 deg it is a single closed oval-like curve, as depicted below in Fig. 8.15.

Figure 8.15 Details of the 6 dB M-contours.
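The Problem 8.1 relation and the endpoint values of Problem 8.2 can be verified numerically; a minimal sketch in plain Python (the function names are ours):

```python
import cmath
import math

def closed_loop_mag(L, phi):
    """|L/(1+L)| computed directly from the complex open-loop value."""
    Lc = L * cmath.exp(1j * phi)
    return abs(Lc / (1 + Lc))

def formula(L, phi):
    """The Problem 8.1 relation M = L/sqrt(L^2 + 1 + 2L*cos(phi))."""
    return L / math.sqrt(L ** 2 + 1 + 2 * L * math.cos(phi))

# The two expressions agree at arbitrary (L, phi) points:
for L, phi in [(0.5, 2.0), (3.0, -2.5), (1.0, 0.7)]:
    assert abs(closed_loop_mag(L, phi) - formula(L, phi)) < 1e-12

# On the -180 deg line the M = 2 (6 dB) contour is met at L = 2 and L = 2/3:
for L in (2.0, 2 / 3):
    print(round(formula(L, math.pi), 4))  # 2.0 both times
```
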

It should be stressed that the contour is not symmetric with respect to the 0 dB line. To plot it we give numerical values to L and solve for φ. For instance with L = 1 ≡ 0 dB we find cos φ = −7/8. In the vicinity of φ = −180 deg its solutions are φ = −151.05 and φ = −208.95 deg, which are the points B and A shown in the picture. With both L = 2 ≡ 6 dB and L = 2/3 ≡ −3.5 dB we find φ = −180 deg. For other values of L ∈ [2/3, 2] ≡ [−3.5, 6] dB we find the corresponding values of φ and complete the contour. We note that the minimum and maximum values of φ are slightly smaller than −208.95 deg and slightly larger than −151.05 deg. It should be clear that if the vertical axis is stretched then these horizontal oval-like M-contours will look vertical. We close this problem by encouraging the reader to provide the details of the 0 dB and 0.5 dB M-contours.

Problem 8.3: Referring to Fig. 8.7, the left vertical axis shows the Lm of the open-loop system L, whereas the right vertical axis denotes the values of the


corresponding M-contours, i.e., the values (also in Lm scale) of the closed-loop transfer function T. It is noted that as we go downwards, they get closer to each other. That is, for instance, L = −25 dB and M = −24 dB are almost the same, whereas L = −5 dB and M = −5 dB (or L = −0.5 dB and M = −0.5 dB) are not, and this discrepancy gets higher and higher as we approach the critical point. On the other hand, at the bottom of the chart (i.e., for small L) the N-contours converge to the open-loop phase lines with the same values, whereas in the other parts of the chart this is not true. Why? The answer is that this is not a surprise, and in hindsight is clear, since for small L, i.e., |L| ≪ 1 (like −25 dB ≈ 0.056 in plain numbers), T = L/(1 + L) ≈ L. Thus, their magnitudes and phases become the same.

Problem 8.4: Find the GM and PM of the system P(s) = 20(s + 10)/[(s − 1)(s − 15)]. From the root locus or Nyquist analysis we know that the system is stable. Read from the picture, they are GM− ≈ −1.9 dB and PM+ ≈ 16 deg. See Fig. 8.16 and its magnification.
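These readings can be cross-checked numerically. A sketch in plain Python (the bisection helper is ours and assumes a single sign change on the bracketing interval):

```python
import cmath
import math

def L(w):
    """Open-loop frequency response of P(s) = 20(s+10)/[(s-1)(s-15)]."""
    s = 1j * w
    return 20 * (s + 10) / ((s - 1) * (s - 15))

def bisect(f, lo, hi):
    """Simple sign-change bisection on [lo, hi]."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Phase crossover: Im L = 0 with Re L < 0 (a -180 deg line of the chart).
w180 = bisect(lambda w: L(w).imag, 1.0, 20.0)
gm_db = -20 * math.log10(abs(L(w180)))            # about -1.9 dB
# Gain crossover: |L| = 1 (the 0 dB line of the chart).
wc = bisect(lambda w: abs(L(w)) - 1, 1.0, 20.0)
pm_deg = 180 + math.degrees(cmath.phase(L(wc)))   # about 16 deg
print(round(gm_db, 1), round(pm_deg, 1))
```
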

Figure 8.16 Problem 8.4. Left: KMN chart of the system; Right: Its magnification.

Problem 8.5: Find the bandwidth of the system of Problem 8.4. The low-frequency gain of the system is not 1 and thus the −3 dB M-contour intersection does not work. In fact this frequency is 35.1, whereas the true bandwidth of the system is 37.01.

Problem 8.6: Reconsider the system of Example 8.4 with K = 500. With K = 500 the KMN chart is provided in Fig. 8.17. Now three of the crossings are above the critical point (GM < 0) and three of the crossings are below the critical point (GM > 0). The interpretation is thus that there are three crossing points with smaller values of the gain (A, B, C) and three crossing points with larger values of the gain (D, E, F).


Figure 8.17 KMN chart of Problem 8.6. Magnification of part of the plot.

Problem 8.7: Reconsider the system of Example 8.5 with K = 4. The KMN chart is given in Fig. 8.18. This system has three GMs (one positive and two negative) and three PMs (two positive and one negative). The latter are the points X, Y, Z of Example 6.27.

Figure 8.18 KMN chart of Problem 8.7.


Problem 8.8: Study the system L(s) = K((s − 0.1)² + 1)/[(s + 0.1)((s − 0.2)² + 4)] with K = 1 in the KMN chart context. The KMN chart is illustrated in Fig. 8.19. Note that the system is strictly proper, so the end point of the plot is at the bottom of the page, here asymptotic to the −90 deg line. The starting point is the rightmost point of the plot. The system has three PMs and two GMs. To see which of them are acceptable we must perform a Nyquist analysis. Note that this is the system we discussed in Example 6.28 of Chapter 6, Nyquist Plot. The reader is referred to that example.

Figure 8.19 KMN chart of Problem 8.8.

Problem 8.9: Study the given system in the KMN chart context: L(s) = K((s − 0.1)² + 1)((s − 0.1)² + 9)/[(s + 1)((s − 0.2)² + 4)((s − 0.2)² + 25)] with K = 5. The KMN chart is offered in Fig. 8.20. Here as well the end point of the plot is its lowest point and the starting point is its rightmost point. The system has four PMs and four GMs, but not all of them are acceptable, as we have discussed in the Worked-out Problem 6.20 of Chapter 6, Nyquist Plot. Reliable analysis should be made in the Nyquist plot context; this is the status quo of our knowledge in the year 2017.


Figure 8.20 KMN chart of Problem 8.9.

Problem 8.10: The KMN plot of a system goes upright to infinity in the upper part of the plane. What can be said about the system? The system has at least one pole at the origin. Thus if it is stable, which cannot be inferred from the chart (with the present knowledge; see Section 8.6), it tracks the step input with zero steady-state error. Note that the type of the system cannot be inferred from the given information, so we can definitively talk about one integrator only. (Question: Can the plot go to infinity but not "upright", i.e., not along the 0 dB M-contour?)

Question 8.3: How can we find the type and relative degree of an MP system by inspecting its KMN chart? How about NMP systems?

Problem 8.11: What can be said about the KMN chart of a system with one or more zeros at the origin? One end point of the plot is at the bottom (low part) of the plane; more precisely, it is its starting point. Note that a strictly proper system has the same feature, with the explanation that there it is the end point.

Problem 8.12: The end points of the KMN chart of a system are both finite (in the visible part of the plane), with one on the 0 dB line. What can be said about the system? The system is proper, with no pole and/or zero at the origin, and either the low- or high-frequency gain of the system is 1.


As we have said in the chapter, the plot may extend over more than one section of the plane. This is shown in Problems 8.13–8.16.

Problem 8.13: Consider the system of Example 8.4 with two NMP zeros added to it. The system is L(s) = K · 0.002(s + 0.02)(s + 0.05)(s + 5)(s + 10)((s − 1)² + 20²)/[(s − 0.0005)(s + 0.001)(s + 0.01)(s + 0.2)(s + 1)(s + 100)²], which is the same as Problems 6.18 and 6.22. For K = 500 the KMN chart is provided in Fig. 8.21. As is seen, the KMN chart extends over two sections of the plane. Note that the Bode phase diagram of the closed-loop system extends over the [−360, 360] deg interval. (Why?) The system has four negative GMs, two positive GMs, and one negative PM.

Figure 8.21 KMN chart of Problem 8.13.

Problem 8.14: This is another system of the same nature as in Problem 8.13. The system is L(s) = 100(s + 1)²/(s − 1)¹⁰. The KMN chart is provided in Fig. 8.22, left panel. As is observed, the KMN chart extends over three sections of the plane. The Bode phase diagram of this system extends over the [−1800, −720] deg interval. The Nyquist plot of this system looks like the right panel of the same figure; note that it is not drawn to scale.

Gain values: [0.0132, 0.1600, 496.6755]
GMs in decibels: [A: −37.5885, B: −15.9176, C: 53.9215]
Frequencies: [0.2679, 1.0000, 3.7321]
Phase margin: 129.3855
PM frequency: 1.4705


Figure 8.22 Problem 8.14. Left: KMN chart; Right: Nyquist plot, not drawn to scale.

Question 8.4: In the above system the plot intersects more than one critical line. Which one produces the GM of the plant? (Note that the answer is: none! Why?) Moreover, where on the chart is the high-frequency end of the KMN plot (i.e., the origin in the Nyquist plot)?

Problem 8.15: The KMN chart of Exercise 5.28 is depicted below. It has the same nature as Problems 8.13–8.14. Note that it is a stable system, but this property cannot be deduced from the KMN chart. The system is P(s) = 1/[(s − 1)(s − 2)(s − 3)(s − 4)(s − 5)(s − 6)] and the controller is C(s) = K(s + 0.25)²((s + 2)² + 36)((s + 4)² + 1)²/[s²((s + 70)² + 2500)³]

with K = 4.9 × 10¹¹.

The KMN chart of the system is given in Fig. 8.23, top row. The Nyquist plot of this system is so complicated that MATLAB is not able to draw it, and of course hand drawing is quite cumbersome. The best way to study this system is the root locus, with the help of the command "allmargin" to check the appropriate condition on the jω-axis crossings. With the normalized controller, i.e., K = 1, the jω-axis crossings are:

Gain values: [0, 4.4529e+10, 4.3650e+11, 4.7842e+11, 4.9758e+11, Inf]
Frequencies: [0, 1.1557, 4.5171, 21.8478, 32.5044, Inf]

Hence the system can be stabilized with an appropriate gain selection, e.g., K = 4.9 × 10¹¹, although the system has poor stability margins. But this is not a surprise since it is a difficult system. To find the gain margins for this value of the gain, the first row (named "Gain values") is divided by it. Thus we get: [0, 0.0909, 0.8908, 0.9764, 1.0155, Inf]. These are the gain margins in plain numbers. To compute them in decibels we find 20 log(·) and thus obtain [−Inf, −20.8287, −1.0044, −0.2074, 0.1336, Inf]. These values are the GMs read at the points A through F, respectively. Using the command "allmargin" with K = 4.9 × 10¹¹ we also find the information about the three phase margins of the system, as evident


Figure 8.23 Example 8.15. Top left: KMN chart; Top right: Its magnification. (Bottom: Root locus, not drawn to scale).

from the top right panel of the same figure. These are the 0 dB line crossings, which are not designated by any letters/arrows in that panel. The data are:

Phase margins: [17.1812, −10.5308, 1.9549]
PM frequencies: [4.7466, 17.9556, 30.4410]
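The gain-margin arithmetic described above is easy to reproduce; a plain-Python sketch (the list values are the ones quoted above):

```python
import math

# j-axis crossing gains of the root locus with the normalized controller (K = 1),
# from the "allmargin"-style data quoted above.
crossing_gains = [0.0, 4.4529e10, 4.3650e11, 4.7842e11, 4.9758e11, math.inf]
K = 4.9e11  # the gain actually applied

# Gain margins in plain numbers: crossing gains divided by the applied gain K.
plain = [g / K for g in crossing_gains]
# The same margins in decibels, 20*log10 of the ratios (0 -> -inf, inf -> inf).
db = [20 * math.log10(g) if 0 < g < math.inf else math.copysign(math.inf, g - 1.0)
      for g in plain]
print([round(g, 4) for g in plain])  # [0.0, 0.0909, 0.8908, 0.9764, 1.0155, inf]
print([round(g, 4) for g in db])
```
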

Note that because we do not have the Nyquist plot of the system, we cannot decide which of them is acceptable.

Problem 8.16: Study the system P(s) = 8(s + 1)(s + 20)/(s² − 10) in the KMN chart context. The KMN chart is given in Fig. 8.24, top left panel. The plot does not intersect any of the critical lines, so its stability condition cannot be altered by positive gain variations and GM = Inf. However, it intersects the critical line 0 deg and therefore has a GM for negative values of the gain, regardless of its acceptance or unacceptance. Using either root locus or Nyquist analysis we easily understand that it is acceptable. As for the phase margin, it is computed as PM− ≈ −138 deg, not PM ≈ 222 deg; see Section 6.3.3. To decide whether it is acceptable or not we inspect its Nyquist plot. The answer is in the negative: the system is originally unstable and does not become stable. For the sake of completeness and to get further insight into the problem, the KMN chart of −P(s) is depicted in the top right panel of the same figure. Now the system is stable and with either of the following becomes unstable: GM+ ≈ 7.51 dB ≈ 2.3749 or GM− ≈ −12.04 dB ≈ 0.25. The computed PM+ = 42 deg is not acceptable. Finally, for clarity we add that the solution means that if the original system is multiplied by a gain K′ ∈ (−2.3749, −0.25) it becomes stable. (That is, its gain


Figure 8.24 Problem 8.16.

becomes 8K′.) The second and third rows of the same Fig. 8.24 show the Nyquist plots and root loci of the systems P(s) and −P(s). Note that they are not drawn to scale.

Problem 8.17: Consider the uncertain interval system P(s) = k/[s(s + a)]. Discuss its stability in the context of robust QFT using the KMN chart. We consider the uncertain ranges of the parameters k, a as a_min < a < a_max and k_min < k < k_max. Thus the uncertainty range is depicted as follows; see Fig. 8.25, left panel.

Figure 8.25 Left: Uncertainty region; Middle: Plant template; Right: KMN plot.


In the KMN chart plane this region is transformed to the region in the middle panel of the same figure, which is called a plant "template." That is, due to uncertainty a point on the KMN plot can actually be any point in this region. Hence, the points on the KMN plot (at different frequencies) can be any points in such regions. Thus the KMN plot of the system is actually a plot in such regions, as depicted in the right panel of the same figure. Even for simple systems producing such templates is complicated, let alone for complex systems. The difficulty becomes even more noticeable if we recall that, as we have learned in this book, stability analysis is not possible with our present knowledge (i.e., the stability recipe of former times is erroneous). Adding the fact that frequency is the hidden parameter of the plot, and thus dynamic controller design is quite complicated, the above arguments will hopefully give the reader enough insight into the difficulties and shortcomings of the KMN chart framework for "manual" controller design. However, as we said before, today the procedure can be implemented algorithmically on a computer; the MATLAB QFT Toolbox does this. Nevertheless, the procedure relies on "constraint consistency verification," which is not a globally proven algorithm at present. The interested reader is referred to Jeyasenthil and Nataraj (2013).

Problem 8.18: Is it possible that a system has no tangent M-circle/contour? No, because the tangent M-circle/contour gives or indicates the maximum gain of the closed-loop system, and every system exhibits its maximum gain at a certain frequency. (However, not all systems have a resonant peak. Recall that this was defined for second-order terms with ζ < 0.707.) For instance, the maximum gain of the system G = 1/(1 + s) (in a unity feedback system, T = 1/(2 + s)) is 0.5, taking place at ω = 0. The tangent M-circle/contour of this system is M = −6 dB (= 20 log(0.5)).
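The closing numerical claim of Problem 8.18 is easy to verify; a plain-Python sketch (the helper name is ours):

```python
import math

def Tmag(w):
    """|T(jw)| for T(s) = 1/(2+s), the closed loop of G = 1/(1+s)."""
    return 1 / math.hypot(2, w)

# The maximum closed-loop gain over a frequency grid occurs at w = 0
# and equals 0.5, so the tangent M-contour is 20*log10(0.5) = -6.02 dB.
peak = max(Tmag(w / 100) for w in range(0, 10001))
print(round(peak, 3), round(20 * math.log10(peak), 1))  # 0.5 -6.0
```
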
Problem 8.19: Is it possible for the KMN chart of a system to be tangent to more than one M-contour? Yes; we have already seen instances of this situation in previous problems, if we ponder for a moment! Other situations are discussed below. Because it is a little difficult to see on the KMN chart (as MATLAB does not draw the tangent M-circles), we first demonstrate it on the Bode diagram. With the insight obtained from the Bode diagram, we then switch to the KMN chart. If this occurs, then the Bode diagram of the closed-loop system must exhibit more than one maximum. This will occur if the closed-loop transfer function can be factored as the product of two or more second-order systems, each of which has a resonant peak. An example of such a system is L(s) = 8.6/[s(s + 0.3)(s² + 0.4s + 10)]. The

closed-loop system is L/(1 + L) = 8.6/[(s² + 0.2882s + 0.9502)(s² + 0.4118s + 9.051)]. The magnification of the Bode diagram of the closed-loop system and the KMN chart of the open-loop system (giving closed-loop information) are provided in Fig. 8.26. As is observed, in the left panel the system has two maxima (one global and one local, about 11.5 dB and −1 dB) and two local minima (0 dB and about −6 dB), the points highlighted by small circles. In the right panel the system is tangent to four M-contours having these same values. Note that the 11.5 dB M-contour is not produced


Figure 8.26 Problem 8.19. Left: Bode diagram; Right: KMN chart.

by MATLAB, and that we have chosen the system parameters such that its other local maxima and minima are tangent to the default M-contours of the MATLAB-produced picture. In other words, there are infinitely many such systems.

Remark 8.6: The above phenomenon can equivalently be observed and explained in the M-circles context in the Nyquist plane. We do not do so since the MATLAB-produced figures are not transparent enough. In the ensuing two problems we discuss a slightly different situation.

Problem 8.20: Repeat the above example for the system L(s) = 1/[(s² + 0.1s + 1)(100s² + 3s + 1)]. All the discussion is as before except that the values of the maxima and minima are different. In particular the 0 dB M-contour is replaced by the −6 dB M-contour, as the low-frequency gain of the closed-loop system is 1/2. Note that this phenomenon can also be observed for higher-order systems, e.g., sixth-order systems with three local minima and three local maxima. We do not present an example since the picture is not clear.

Problem 8.21: When the closed-loop system is not a unity-feedback one, how should we use the KMN chart? Let the plant P be in the feedforward path and the controller C be in the feedback path. Then the closed-loop transfer function will be T = P/(1 + PC). To use the KMN chart we write T′ = TC = PC/(1 + PC) and draw the KMN chart of T′. Then we write and use |T| = |T′|/|C| and ∠T = ∠T′ − ∠C. Thus, Lm T = Lm T′ − Lm C. Note that C is not restricted to static (pure-gain) controllers and can be a dynamic one.

Problem 8.22: Can the KMN chart be used in the context of sensitivity? Yes. Note that we have already discussed this in Section 8.8. Now we present another standpoint. Given the negative unity-feedback system T = L/(1 + L), we have S = 1/(1 + L), which can be rewritten as S = L⁻¹/(1 + L⁻¹). Thus, the KMN


Introduction to Linear Control Systems

chart of L⁻¹ is drawn. The intersection with the M-contours gives the sensitivity at the respective frequencies. The tangent M-contour gives the maximum sensitivity. Note that in this context the low-sensitivity region is the bottom of the page, corresponding to the M = -∞ contour, where T (which plays the role of S in this context) is zero. As we get closer to the critical point, the sensitivity increases. That is, in this context, as in the case of Section 8.8, the high-sensitivity region is the vicinity of the critical point. As a numerical example consider L(s) = 30/[s(s + 2)(s + 13)]. The KMN chart of L⁻¹ is tangent to the M = 2.96 ≈ 3 dB contour. That is, the maximum sensitivity is 3 dB ≈ 1.4. (It is noted that the KMN chart of L is tangent to the M = 0.3596 dB contour. Thus, the maximum magnitude of the closed-loop transfer function is M = 0.3596 dB ≈ 1.0423. As is seen, the maxima occur at different frequencies, i.e., their sum is not equal to 1.)

Problem 8.23: Because the procedure is simple we leave it to the reader to provide the details of the basic elements approach for the construction of KMN plots. This is what you do in Exercise 8.14. Here we show the result on a simple system, namely L3(s) = 1/(s³ + 2s² + 3s + 1), with L1(s) = 1/(s + 0.4302), L2(s) = 1/(s² + 1.57s + 2.325), L3 = L1L2, of Example 6.3. Note that at each frequency we must add the respective magnitudes and phases, vertically and horizontally, respectively. This is very simple and straightforward if frequency does not play a crucial role, as is the case in some systems, like this system (Fig. 8.27). See also the next Problems 8.24 and 8.25.

Figure 8.27 Problem 8.23, Illustration of the basic elements approach for construction of the KMN plot.
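Both the non-unity-feedback identity of Problem 8.21 and the sensitivity peaks of Problem 8.22 can be verified numerically. A minimal sketch; the plant/controller pair in the first part is a hypothetical stand-in chosen for illustration, not taken from the text:

```python
import numpy as np

# Problem 8.21: T = P/(1+PC) and T' = PC/(1+PC) satisfy T = T'/C
s = 2j                          # arbitrary test frequency, omega = 2 rad/s
P = 1 / (s * (s + 1))           # hypothetical plant (feedforward path)
C = (s + 2) / (s + 10)          # hypothetical dynamic controller (feedback path)
T = P / (1 + P * C)
Tp = P * C / (1 + P * C)
assert np.isclose(T, Tp / C)    # hence |T| = |T'|/|C| and angle(T) = angle(T') - angle(C)

# Problem 8.22: peak |S| and peak |T| for L(s) = 30/[s(s+2)(s+13)]
w = np.linspace(0.05, 20, 200000)
jw = 1j * w
L = 30 / (jw * (jw + 2) * (jw + 13))
Ms = np.max(np.abs(1 / (1 + L)))   # about 1.41, i.e., about 3 dB
Mt = np.max(np.abs(L / (1 + L)))   # about 1.042, i.e., about 0.36 dB
```

The two peaks indeed occur at different frequencies, consistent with the parenthetical remark in Problem 8.22.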

Problem 8.24: In the preceding Problem 8.23 frequency does not play a crucial role. In many other cases, where it does, the basic elements approach will result in erroneous answers if used crudely. The system L(s) = (s + 1)/(s² + 1) is such a system.


Figure 8.28 Problem 8.24, Failure of crude use of the basic elements approach for construction of the KMN plot.

We define L1(s) = s + 1, L2(s) = 1/(s² + 1). The KMN plots of the basic elements and the whole system are depicted in Fig. 8.28. First note that we have hand-drawn them, since MATLAB® produces false phases and unclear magnitudes. All three plots start at a central point and go upwards.

Problem 8.25: Here we have another system in which frequency plays a crucial role. The basic elements approach will result in an erroneous answer if we do not know the respective frequencies on the plots of the basic elements. The system is L(s) = (s + 1)²/(s² + 0.1s + 2). Denote L1(s) = (s + 1)², L2(s) = 1/(s² + 0.1s + 2), L = L1L2. The answer is given in Fig. 8.29. If we do not know the frequency of the points on the curves L1, L2 in the left panel, either of the curves L in the right panels is possible. In fact, other possibilities can also be imagined. The correct answer is the rightmost panel.

Figure 8.29 Problem 8.25: Failure of crude use of the basic elements approach for construction of the KMN plot.

If we wish to brainstorm, other possibilities for L can be imagined as well! For the sake of completeness we should add that it may be possible to rule out some possibilities (for the KMN plot of L) by ‘analyzing’ the curves in the left panel, but we simply and crudely use the ‘addition rule’. These last three


examples hopefully show why the Bode diagram approach is preferred for this purpose.

8.13 Exercises

More than 200 exercises are offered in the form of collective problems in the sequel. It is advisable to try as many of them as your time allows.

Exercise 8.1: Similar to Problems 8.1 and 8.2, explain in detailed formulae how the N-contours are derived and plotted. Carry it out for α = -150 deg. Note that the desired equation you should arrive at is N = tan α = sin φ/(L + cos φ).

Exercise 8.2: Can the KMN plot of a system intersect itself, like the α symbol? If so, what can be said about the point of intersection?

Exercise 8.3: Consider a plant whose Nyquist plot intersects itself. What can be said about the KMN chart of this plant?

Exercise 8.4: The end points of a KMN chart are one finite and one in the infinite upper part of the plane. What can be said about the system?

Exercise 8.5: This problem has two parts. (1) Can the KMN plot of a system have cusp points, i.e., sharp points where the phase of the system jumps? (2) How do the KMN plot of a system and its type relate to each other?

Exercise 8.6: What can be said about the asymptotic angle values of a KMN plot?

Exercise 8.7: Can a system be tangent to no S-circle, or to more than one? If so, how? Discuss in the Nyquist and Bode contexts as well and provide examples.

Exercise 8.8: Can a system cross a certain S-circle in more than two points? If the answer is positive, exemplify.

Exercise 8.9: Repeat the above Exercise 8.8 for M-circles/contours.

Exercise 8.10: Repeat the above Exercise 8.8 for N-circles/contours.

Exercise 8.11: By direct manipulation of the formula of M-circles, prove Remark 8.1 and then justify the associated design procedure of Section 8.3.

Exercise 8.12: Can the KMN plot of a system start or end at the critical point? If so, provide an example. How about passing exactly through it?

Exercise 8.13: What is the intuitive philosophy behind the fact that M-circles to the right of the line X = -1/2 have M < 1 whereas those to the left of this line have M > 1?
Exercise 8.14: This exercise has several parts. (1) Draw and master the KMN plot of the basic elements as introduced in Chapter 7. (2) Is MATLAB® able to produce their KMN plot correctly? (3) Provide an analysis of the behavior of the plot at the high- and low-frequency ends. (4) Draw the KMN plot of the following systems. Note that the magnitudes and phases of the basic elements are added at each frequency; however, each addition results in a movement of the respective contour and thus is not as simple as in the Bode diagram context. (5) Repeat the exercise after inclusion of a delay term.

1. L = (s + 1)/(s + 10)
2. L = (s + 1)/[s(s + 10)]
3. L = (s² + 0.2s + 1)/[s(s + 10)]
4. L = (s + 1)/[s(s + 10)(s² + 0.2s + 1)]
5. P(s) = s/(s² ± 1)
6. P(s) = s/(s³ ± 1)
7. P(s) = s/(s⁴ ± 1)
8. P(s) = (s² ± 1)/s²
9. P(s) = (s³ ± 1)/s²
10. P(s) = s²/(s² ± s + 1)
11. P(s) = (s² + 1)/(s² ± s + 1)
12. P(s) = (s² ± as + 1)/(s² ∓ as + 1)
13. P(s) = s²/(s³ ± 1)
14. P(s) = s²/(s⁴ ± 1)
15. L = (s + 1)/(s + 4)
16. L = (s² + 4)/(s² + 1)
17. L = (s² + 1)²/(s + 1)⁴
18. L = (s + 1)⁴/(s² + 1)²
19. L = (s² + 1)(s² + 4)/(s + 1)⁴
20. L = (s² + 1)²(s² + 4)²/(s + 1)⁸

Exercise 8.15: Find the transfer functions of systems whose KMN plots are provided in Fig. 8.30. (Hint: The systems are L(s) = (s + 1)/(s + 10), L(s) = (s + 10)/(s + 1), L(s) = 1/[s(s + 10)], L(s) = 1/[s(s + 1)], L(s) = s(s + 1)/(s + 10), L(s) = s(s + 10)/(s + 1), L(s) = (s + 1)/[s²(s + 10)], L(s) = (s + 1)/(s² + 0.1s + 1). Find the respective plots.)

Exercise 8.16: This exercise has several parts. (1) The KMN plot of a system is a vertical line. What is that system? Is the answer unique? (2) Repeat part (1) for a vertical and seemingly half line. (3) Repeat part (1) for a U-shaped curve. (4) Repeat part (1) for an inverse-U-shaped curve. (5) Repeat part (1) for a bell-shaped curve. (6) Repeat part (1) for a closed curve.

Exercise 8.17: Without drawing the plot, how can one determine over how many sections of the plane the KMN chart of a given system extends?

Exercise 8.18: Since in the KMN chart there is a critical point (actually several critical points), because of its similarity with the Nyquist plot a tempting idea is to investigate whether the Nyquist stability criterion can be generalized to the KMN chart context. To this end the complete KMN chart over all frequencies (i.e., positive and negative) should be plotted. Verify that for some systems, like those of Examples 8.4 and 8.5 (if you interpret it correctly), the Nyquist criterion is valid with the term CCW substituted by CW in its statement. On the other hand, for some systems, like that of the worked-out Problem 8.3, it does not hold in any form.

Exercise 8.19: In the context of the KMN chart, discuss the stability, GM, PM, DM, and closed-loop BW of the systems used in the Examples, Problems, and Exercises of Chapter 6, Nyquist Plot, for different values of the gain.


Figure 8.30 Exercise 8.15: KMN plots of the systems to be identified.


Exercise 8.20: This exercise has two parts. (1) Find the BW of the following systems through the method explained in Example 8.8. (2) What is the justification for finding the BW of an unstable system?

1. P(s) = 2/(s - 1)
2. P(s) = (s + 10)/(s - 1)
3. P(s) = (s + 10)/(s - 9)
4. P(s) = (s + 10)/[(s + 1)(s - 1)]
5. P(s) = (s - 1)/(s + 2)
6. P(s) = 10s/(s + 1)

Exercise 8.21: Consider BW computation via the KMN chart for systems satisfying the low-frequency condition mentioned in the text. If there are several intersections with the -3 dB M-contour, the smallest one produces the right answer. Construct or find examples with this feature and verify the above statement. What if the aforementioned low-frequency condition is not satisfied?

Exercise 8.22: Considering the worked-out Problems 8.19 and 8.20, find or construct examples whose closed-loop Bode magnitude diagram has three maxima. Investigate their KMN charts.

Exercise 8.23: Given the following plants in the feedforward path and the pure gain K in the feedback loop of a closed-loop system, determine the value of K such that the maximum gain of the closed-loop transfer function becomes 1.2 and verify the stability of the system:

1. P(s) = 1/[s(s + 2)]
2. P(s) = (s + 1)/[s(s + 5)]
3. P(s) = 1/[s(s + 1)(s + 10)]
4. P(s) = (s + 0.1)/[s(s + 1)(s + 10)]
5. P(s) = (s - 1)/[(s + 1)(s + 2)(s + 3)]

Exercise 8.24: Given the following systems in a negative unity-feedback loop, in the context of the KMN chart, determine the value of K such that the maximum sensitivity becomes 2:

1. P(s) = K/[s(s + 1)]
2. P(s) = K(s + 2)/[s(s + 1)]
3. P(s) = K/[s(s + 2)(s + 10)]
4. P(s) = K(s + 2)/[s(s + 1)(s + 10)]
5. P(s) = K(s - 1)/[(s + 1)(s + 2)(s + 3)]

Exercise 8.25: This exercise has two parts. (1) Draw the KMN plot of the systems of Examples 6.40-6.43 of Chapter 6, Nyquist Plot, and find the GM and PM. (2) Draw and interpret the KMN plot of the system of Example 6.44.

Exercise 8.26: Given the system L(s) = [(s - z)/(s - p)]L1(s), where L1(s) is MP and stable: (1) Using the maximum modulus theorem conclude that the closed-loop system in the negative unity-feedback structure satisfies |S| ≥ |(z + p)/(z - p)|. (2) Show that, e.g.,


|S| ≤ 2 requires z > 3p or z < p/3. (3) In part (2) which solution is tractable? (4) Repeat part (3) for the general case. It is worthwhile to come back to this exercise after studying Chapter 10.

Exercise 8.27: Consider a system whose KMN plot stays outside a given S-circle. What can be said about the GM and PM of the system?

Exercise 8.28: Repeat Exercise 8.27 for a given M-circle.

Exercise 8.29: Repeat Exercise 6.20 of Chapter 6 in the context of the Nichols chart.
Exercise 8.30: Repeat Exercise 6.21 of Chapter 6 in the context of the Nichols chart.
Exercise 8.31: Repeat Exercise 6.22 of Chapter 6 in the context of the Nichols chart.
Exercise 8.32: Repeat Exercise 6.23 of Chapter 6 in the context of the Nichols chart.
Exercise 8.33: Repeat Exercise 6.24 of Chapter 6 in the context of the Nichols chart.
Exercise 8.34: Repeat Exercise 6.25 of Chapter 6 in the context of the Nichols chart.
Exercise 8.35: Repeat Exercise 6.26 of Chapter 6 in the context of the Nichols chart.

Exercise 8.36: Repeat Exercise 7.25 in the context of the KMN chart.

Exercise 8.37: Repeat Exercise 7.30 in the context of the KMN chart.

Exercise 8.38: The Bode diagrams and KMN charts of the systems

Figure 8.31 Exercise 8.38. Top row: L1; left: Bode diagram, right: KMN chart. Bottom row: L2; left: Bode diagram, right: KMN chart.

L1(s) = (s² + 1)(s² + 100)/(s - 1)⁴ and L2(s) = (s² + 1)²/(s - 1)⁴ are provided in the top and bottom rows of Fig. 8.31, respectively. MATLAB® 2015a is used. The results are all wrong. The error in the top left panel (the Bode diagram of L1) is less


serious than the others' since by an upward shift of 1080 deg it becomes correct. However, the other panels are miserably wrong. Provide the correct plots. We close this exercise by adding that L2 is what we mentioned in the discussion after Example 7.2 of Chapter 7. Several other systems which MATLAB® fails to handle can be exemplified.

References

Ahn, K.K., Chau, N.H.T., Truong, D.Q., 2007. Robust force control of a hybrid actuator using quantitative feedback theory. J. Mech. Sci. Eng. 21 (12), 2048-2058.
Alavi, S.M.M., Khaki-Sedigh, A., Labibi, B., Hayes, M.J., 2007. Improved multivariable quantitative feedback design for tracking error specifications. IET Control Theory Appl. 1 (4), 1046-1053.
Banos, A., 2007. Nonlinear quantitative feedback theory. Int. J. Robust Nonlinear Control 17 (2-3), 181-202.
Boje, E., 2003. Pre-filter design for tracking error specifications in QFT. Int. J. Robust Nonlinear Control 13 (7), 637-642.
Comasolivas, R., Escobet, T., Quevedo, J., 2012. Automatic design of robust PID controllers based on QFT specifications. IFAC Proc. 45 (3), 715-720.
Esna Ashari, A., Labibi, B., 2012. Application of matrix perturbation theory in robust control of large-scale systems. Automatica 48 (8), 1868-1873.
Horowitz, I.M., 1963. Synthesis of Feedback Systems. Academic Press, NY.
Houpis, C.H., Rasmussen, S.J., Garcia-Sanz, M., 2005. Quantitative Feedback Theory: Fundamentals and Applications, second ed. CRC Press, Florida.
Hwang, C., Yang, S.-F., 2002. QFT template generation for time-delay plants based on zero-inclusion set. Syst. Control Lett. 45 (3), 179-191.
Ibarra, L., Ponse, P., Molina, A., 2015. Robust QFT-based control of DTC-speed loop of an induction motor under different load conditions. IFAC-PapersOnLine 48 (3), 2429-2434.
James, H.M., Nichols, N.B., Phillips, R.S., 1947. Theory of Servomechanisms. McGraw-Hill, NY.
Jeyasenthil, R., Nataraj, P.S.V., 2013. Automatic loop shaping in QFT using hybrid optimization and consistency technique. IFAC Proc. 46 (32), 427-432.
Kerpenko, M., Sepehri, N., 2012. Electrohydraulic force control design of a hardware-in-the-loop load emulator using a nonlinear QFT technique. Control Eng. Pract. 20 (6), 598-609.
Khaki-Sedigh, A., Lucas, C., 2000. Optimal design of robust QFT controllers using random optimization techniques. Int. J. Syst. Sci. 31 (8), 1043-1052.
Kim, K.K.K., Shen, D.E., Nagy, Z.K., Braatz, R.D., 2013. Wiener's polynomial chaos for the analysis and control of nonlinear dynamical systems with probabilistic uncertainties. IEEE Control Syst. Mag. 33 (1), 58-67.
Konstantinov, M., Gu, D.-W., Mehrmann, V., Petkov, P., 2003. Perturbation Theory for Matrix Equations. North-Holland, Amsterdam.
Labibi, B., Alavi, S.M.M., 2016. Inversion-free decentralised quantitative feedback design of large-scale systems. Int. J. Syst. Sci. 47 (8), 1772-1782.
Lee, J.W., Cahit, Y., Steinbuch, M., 2000. On QFT tuning of multivariable µ controllers. Automatica 36 (1), 1701-1708.
Liu, W., Yu, Y., Gao, W., Li, H., 2013. New quantitative criterion to define closely spaced modes. J. Vibr. Meas. Diag. 33 (4), 578-581.
Mesbahi, M., 2001a. Robustness analysis via the running time of the interior point methods. Syst. Control Lett. 44 (5), 355-361.
Mesbahi, M., 2001b. Towards an algorithmic theory of robustness. In: Proceedings of the American Control Conference, Arlington, VA, USA, vol. 5, pp. 3397-3402.
Moreno, J.C., Guzman, J.L., Banos, A., Berenguel, M., 2011. The input amplitude saturation problem in QFT: A survey. Annu. Rev. Control 35 (1), 34-55.
Nataraj, P.S.V., 2002. Computation of QFT bounds for robust tracking specifications. Automatica 38 (2), 327-334.
Nataraj, P.S.V., Makwana, D., 2016. Automated synthesis of fixed structure QFT prefilter using piecewise linear approximation based linear programming optimization techniques. IFAC-PapersOnLine 49 (1), 349-354.
Safonov, S., Yaniv, O., 2009. New nonlinear QFT technique. IFAC Proc. 42 (6), 108-113.
Shafai, B., Jayasuriya, S., Stich, D., 1993. Robust stability of interval plants and quantitative feedback theory. In: Proceedings of the American Control Conference, San Francisco, USA, pp. 1703-1705.
Shen, D.E., Braatz, R.D., 2016. Polynomial chaos-based robust design of systems with probabilistic uncertainties. AIChE J. 62 (9), 3310-3318.
Tian, F., Nagurka, M., 2012. Robust control design of a single degree-of-freedom magnetic levitation system by quantitative feedback theory. In: Proceedings of the ASME/ISCIE 2012 International Symposium on Flexible Automation, St. Louis, MO, June 18-20, 2012.
Yang, S.-F., 2009. An improvement of QFT plant template generation for systems with affinely dependent parametric uncertainties. J. Franklin Inst. 349 (7), 663-675.
Yaniv, O., 1999. Quantitative Feedback Design of Linear and Nonlinear Control Systems. Springer, Berlin.

9 Frequency domain synthesis and design

9.1 Introduction

We embark on this chapter with some terminology that the control engineering community seems to have almost agreed upon, although informally. When the open-loop plant is given along with various performance specifications, the process of specifying the controller structure (and configuration, in sophisticated problems) is "synthesis." The process of satisfying the performance specifications through a control system is called "design," in which the controller parameters are determined. When a (closed-loop or open-loop) control system is given and it is desired to have different performance specifications, the process of modifying the control system so as to achieve the new performance specifications is called "compensation." In industry as well as in academia, however, these terms are sometimes used interchangeably. This may also have happened inadvertently throughout this book. It should be noted that in practice design specifications are rarely fixed a priori. What the designer does is carry out the design procedure for the given set of specifications. If they are met, the whole problem set and solution are acceptable. Otherwise we may have to alter the design specifications. Indeed, if the design objectives are still not met after changing the synthesis, the designer considers different control strategies (nonlinear, adaptive, etc.), weighing the higher cost they incur. Solving a complete industrial problem, even with linear control, is outside the scope of this book, which introduces students to the fundamentals of the field. Many theoretical issues have emerged or been discovered after problems have occurred in practice. We discuss some of these aspects at an introductory level in Chapter 10 and also in the Further Readings of this chapter, like the derivative backoff. In the course (Advanced) Industrial Control, which is offered to (graduate) undergraduate students of control major, these features will be reconsidered and complemented more seriously.
Then the student has more time and knowledge to practice them. Even in that course the engagement with actual problems is weak. It is not a realistic expectation to become a master industrialist after four years of college! Indeed many PhDs opt to work in industry, and even then they have to enhance their knowledge by further study. With this caveat in mind, we begin discussing the design specifications. The design specifications that we consider in this chapter are the steady-state tracking error as well as the gain margin (GM), phase margin (PM), delay margin (DM), bandwidth (BW), and a good transient response. We start by choosing a control synthesis. If the design specifications are achievable by this synthesis then the whole problem set is

Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00009-4 © 2017 Elsevier Inc. All rights reserved.


acceptable.¹ Otherwise we have to modify the controller structure, i.e., adopt another synthesis. If this does not help then we have to relent, moderate the design specifications, and accept more benign ones. It should be stressed that the relation between the transient response and the stability margins GM/PM/DM is not transparent. Sometimes by accepting a less robust design the transient performance significantly improves. As we outline in Section 9.6 and demonstrate in the worked-out problems, the process actually involves some trial and error if done manually, and optimization if done at industrial scale. We consider four kinds of controllers in the feedforward path of the unity feedback system, namely: the static controller or pure gain C(s) = K, and the dynamic controllers lead, lag, and lead-lag. A dynamic controller is a controller which is in terms of the Laplace variable s. This is covered in Section 9.2. Their simplifications as PI (proportional-integral), PD (proportional-derivative), and PID (proportional-integral-derivative) controllers are discussed in Section 9.3. To see when these controllers have to be used and what they are, we start with some motivating examples in the Bode diagram context. It should be emphasized that the essence of the design procedures which can be found in different texts is the same; they are all in the spirit of the classical results of the 1940s and 1950s (Bode, 1945; Brown and Campbell, 1948; Chestnut and Mayer, 1959; Gardner and Barnes, 1942; Hall, 1943; James et al., 1947), etc. They present controllers which provide a high DC gain to fulfill the steady-state error requirement (the lag controller), controllers which provide additional phase to increase the PM (the lead controller) and/or to stabilize the system, and controllers which fulfill both of the above requirements (the lead-lag controller). The details of the formulation may differ, which is not really noteworthy.
However, the problems that are solved in the literature are restricted to some simple and mostly stable systems. What we address in this book is wider in scope. Numerous unstable and NMP systems are considered in detail, each of which can be a benchmark for controller design. In addition, some intricate issues are discussed, like the unguaranteed implicit connection between the closed-loop bandwidth and the open-loop crossover frequencies, the effect of stabilizing zeros, etc. Unfortunately these points are missing in the available literature. Our presentation of the subject is a step-by-step instructive approach where the details are fully explained. All the arguments are supported by simulation results and depictive explanations. In Section 9.4 we discuss the controller structures in the Nyquist plot context. The effect of the controllers on the root locus is considered in Section 9.5. The design procedure is outlined in Section 9.6. Apart from the above, briefly included are some specialized design and tuning rules for PID controllers in Section 9.7, the IMC structure in Section 9.8, the Smith predictor in Section 9.9, and implementation of controllers with operational amplifiers in Section 9.10. The chapter is wrapped up by the summary, further readings, worked-out problems, and exercises in Sections 9.11-9.14.

¹ On the other hand, even if a certain synthesis is successful, different other syntheses may result in a better answer. This is shown in various worked-out problems.

9.2 Basic controllers: proportional, lead, lag, and lead-lag

The four basic controllers (pure gain, lead, lag, and lead-lag) are studied in the following motivating examples.

Motivating Example 9.1: Given P(s) = 25/[s(s + 10)(s + 15)], it is desired to design a controller so that the steady-state error to the ramp reference input becomes less than ess = 5%. We start with the simplest controller, which is the static or pure gain controller C(s) = K. There holds ess = lim_{s→0} 1/[sL(s)]. Thus 1/(25K/150) < 0.05, or K > 120. On the other hand, for stability Routh's test results in 0 < K < 150. Thus although the result is in the acceptable range, from our experience in Chapters 4 and 5 we may guess that the response is oscillatory. We verify this by looking at the simulation result given in Fig. 9.1. (The control signal is also poor; see the accompanying CD.) Hence we need to use a dynamic controller, that is, a controller which is in terms of the Laplace variable s.

Figure 9.1 Velocity response of Example 9.1.
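The numbers in Motivating Example 9.1 can be reproduced with a few lines; a sketch (NumPy) based on the closed-loop characteristic polynomial s³ + 25s² + 150s + 25K:

```python
import numpy as np

def closed_loop_poles(K):
    # 1 + 25K/[s(s + 10)(s + 15)] = 0  =>  s^3 + 25 s^2 + 150 s + 25 K = 0
    return np.roots([1, 25, 150, 25 * K])

# Ramp steady-state error: ess = 1/Kv with Kv = lim_{s->0} s L(s) = 25K/150
K = 120
ess = 150 / (25 * K)            # = 0.05, i.e., exactly 5% at K = 120

# Routh's bound 0 < K < 150
stable_149 = all(p.real < 0 for p in closed_loop_poles(149))
unstable_151 = any(p.real > 0 for p in closed_loop_poles(151))

# The dominant complex pair at K = 120 is very lightly damped,
# which explains the oscillatory response of Fig. 9.1.
pair = max(closed_loop_poles(120), key=lambda p: p.imag)
zeta = -pair.real / abs(pair)   # damping ratio of the dominant pair
```

The tiny damping ratio of the dominant pair (well under 0.1) is the quantitative face of the oscillation seen in Fig. 9.1.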

What should the structure of the controller be? To answer this question we should decide what it needs to do. In other words, we should determine the reason for this oscillatory response so that we counteract it by the controller. In previous chapters we have studied four important features of a system, namely GM, PM/DM, and BW. This system (with K = 120) has small GM


and PM.² They are GM = 1.94 dB ≈ 1.25 in plain numbers, and PM = 6.43 deg. We shall try to increase them. This is done in the ensuing developments. Another feature that we will study in the coming examples is the bandwidth of the system. Note that we should consider the bandwidth of the closed-loop system, and thus its gain (which has no effect on the open-loop bandwidth) does matter. With K = 120 it is BW = 16.43. We shall consider different cases in the sequel examples. Also recall that our Question 4.17 of Chapter 4 is unanswered.
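The bandwidth value quoted for K = 120 can be reproduced numerically; a sketch, taking the closed-loop BW as the frequency where |T| first falls below -3 dB, with T = L/(1 + L):

```python
import numpy as np

w = np.linspace(0.01, 50, 200000)
s = 1j * w
L = 25 * 120 / (s * (s + 10) * (s + 15))
mag = np.abs(L / (1 + L))       # closed-loop magnitude

# first grid frequency at which |T| drops below -3 dB
bw = w[np.argmax(mag < 10 ** (-3 / 20))]   # about 16.4 rad/s
```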

Now we have some possibilities: increasing both GM and PM, increasing GM without increasing PM, and increasing PM without increasing GM. To have a fair conclusion, all three cases must be studied separately, and at different BWs. However, because both GM and PM are decisive factors as the stability margins, we shall focus on increasing both of them. As for BW, we already know that if the closed-loop BW (whose rough approximation is the open-loop ωgc, sometimes ωpc; see also Remark 9.9) is not large enough, the output response will be poor regardless of the GM and PM values. This will be verified in simulations. As will become clear in the process, it is convenient to start with other problems in which we introduce and develop the structure of our controllers. This is done in the sequel motivating Examples 9.2-9.5. The final controller for the system of Example 9.1 is offered in Examples 9.6 and 9.7. We start with a remark.

Remark 9.1: The following example illustrates increasing the GM and PM together (see also Remark 9.3). In this example the controller does not fix the steady-state error of the system, which is the most important specification of all. Thus, in general it must be used for a system whose steady-state error is acceptable or will be fixed later,³ otherwise its usage is irrelevant. Nevertheless, for the purpose of illustrating the design philosophy it is instructive to start with this problem. Then we will use the controller in the actual context in Motivating Examples 9.3 and 9.6.

Motivating Example 9.2: Given P(s) = 25/[s(s + 10)(s + 15)], it is desired to design a controller so that PM = 60 deg at the gain crossover frequency ωgc = 10 rad/s.

² Question 9.1: Recalling from the Nyquist diagram context, can a system have a small PM but large GM or vice versa? See also Exercise 9.27.
³ As shall be seen, the steady-state error will be fixed later by another dynamic controller.


We should always try a pure gain controller as the first option. If we want to have ω = 10 rad/s as the gain crossover frequency, which means that we should increase the gain to a certain value, the Bode diagram of the system reveals that the resultant system will have the phase -169 deg at this frequency, that is, PM = 11 deg and not 60 deg. Thus we have to try a dynamic controller. What structure should the dynamic controller have? To answer this question we note that it needs to contribute some phase to the system. The first solution which comes to mind is probably s + z, z > 0. It is not acceptable as it is improper, i.e., unrealizable. A moment of thought reveals that a controller like C′(s) = (s + z)/(s + p), z < p, serves the purpose. Note that its Bode diagram looks like Fig. 9.2. The dynamics of this controller at low and high frequencies are interrelated. To further decouple them we include a pure gain in it and write it as C(s) = k′C′(s) = k′(s + z)/(s + p). The Bode diagram of C(s) is the same as that of C′(s) except that the magnitude diagram is shifted upwards by 20 log k′.


Figure 9.2 Bode diagram of the proposed controller C′(s). Left: Magnitude, Right: Phase.

How should we design its parameters? The first answer that comes to mind is probably: "The (maximum) phase of the controller depends on z and p. It is probably convenient to design the controller such that the maximum phase it contributes equals the phase deficit of the plant. This phase deficit occurs at the desired ωgc. From these two conditions we find z and p. As for the gain, we note that for realization of the desired ωgc the controller needs to contribute some gain at that frequency. By this consideration we find k′." For ease of formulation we write the controller as C(s) = k′C′(s) = k′(s + z)/(s + p) = k′(z/p)[(p/z)(1/p)s + 1]/[(1/p)s + 1] = k(αTs + 1)/(Ts + 1), α > 1, where T = 1/p, α = p/z, and k = k′z/p. Thus the controller has the structure C(s) = kCd(s) = k(αTs + 1)/(Ts + 1), α > 1. It transpires that the aforementioned design


philosophy is correct, with the further explanation that the maximum phase of the controller depends on α = p/z, and ωgc depends on α and T. The term k is the amplification needed to realize ωgc. Before demonstrating the design procedure a remark is in order.

Remark 9.2: The controller C(s) = kCd(s) = k(αTs + 1)/(Ts + 1), α > 1, is called a lead controller. The reason for this terminology is that its output leads its input in phase. Its Bode magnitude and phase diagrams are depicted in Fig. 9.3 below.


Figure 9.3 Bode diagram of the lead controller.

It is noteworthy that Cd(s) = (αTs + 1)/(Ts + 1) is also a lead controller.⁴ In fact the lead phenomenon of C(s) is because of that of Cd(s). Coming back to the solution of our problem, formulation-wise the design procedure is as follows. First we find the maximum phase of the controller. To this end we set (d/dω)∠C(jω) = 0. Thus (d/dω)(tan⁻¹ αωT - tan⁻¹ ωT) = 0, or αT/[1 + (αωT)²] - T/[1 + (ωT)²] = 0. This happens if ωT = 1/√α, and hence

φmax = max ∠C(jω) = sin⁻¹[(α - 1)/(α + 1)]. Therefore α = (1 + sin φmax)/(1 - sin φmax). Because we want the maximum phase of the controller to be at ω = ωgc, one obtains T = 1/(ω√α) = 1/(ωgc√α). To compute k, we write |C(jωgc)| = k[(α²T²ωgc² + 1)/(T²ωgc² + 1)]^(1/2) = k√α.

Thus, in order for ωgc to be the new gain crossover frequency we must have |C(jωgc)P(jωgc)| = 1 ≜ 0 dB, or equivalently, |C(jωgc)| = -|P(jωgc)| dB. Thus, k√α = -|P(jωgc)| dB ≜ 10^(-|P|/20), or k = (1/√α)10^(-|P|/20) in plain numbers.

⁴ The reason that we do not use the initial 'l' as the subscript in Cd(s) is that, as we shall shortly see, we have another controller which is called a 'lag' controller. To distinguish between the lead and lag controllers we use the end letters 'd' and 'g'.


In our problem, recall that at the desired crossover frequency ωgc = 10 the phase margin is PM = 11 deg, and thus the system has 60 - 11 = 49 deg phase deficit. Moreover, |P(jωgc)| = -40.3 dB. Thus we compute α = (1 + sin 49°)/(1 - sin 49°) = 7.1536, T = 1/(10√7.15) = 0.0374, k = 10^(40.3/20)/√7.15 = 38.70. The Bode diagram of the controlled system is shown in Fig. 9.4. It has GM = 12.6 dB at ωpc = 25.5, PM = 59.5 deg at ωgc = 10.1, and BW = 18.44. The slight difference between the obtained specifications and the desired specifications is due to imprecise reading of parameters like |P(jωgc)| from the Bode diagram, which has been done by clicking on the MATLAB® figures. We accept the answer.

Figure 9.4 The Bode diagram of the system before and after the lead controller. Dashed: Without lead controller, Solid: With lead controller.

Remark 9.3: In the previous example increasing the PM by the proposed controller necessarily results in an increase of the GM, as can easily be verified in the Bode diagram context. Is this a general rule for all systems, for instance complicated systems like those with multiple crossover frequencies studied in the previous chapters? If not, how should we modify the approach? Unfortunately, the answer to this question does not seem to be straightforward; we put it aside for future investigation. Thus we have learned a controller structure which increases the PM and, at least for certain systems, increases the GM as well. In the next example we shall apply it to Example 9.1, whose steady-state error has been fixed by the static controller


Introduction to Linear Control Systems

K = 120. As shall be seen, the lead controller destroys the already fixed steady-state error. The remedy will be offered afterwards.

Motivating Example 9.3: Given $P(s) = \frac{25\times 120}{s(s+10)(s+15)}$, whose steady-state error is $e_{ss} = 5\%$. Investigate the effect of a lead controller, in the form of increasing its GM and PM, on its response. The system has GM = 1.94 dB at $\omega_{pc} = 12.2$ rad/s and PM = 6.43 deg at $\omega_{gc} = 10.9$. Thus the controller needs to contribute $\varphi_{\max} = 53.57$ deg. Therefore $\alpha = \frac{1+\sin 53.57°}{1-\sin 53.57°} = 9.2345$, $T = 1/(10.9\sqrt{9.23}) = 0.0302$, $k = 10^{0/20}/\sqrt{9.23} = 0.3291$. (Because we do not change $\omega_{gc}$ of the system, $|P(j\omega_{gc})| = 0$ dB.) Application of the controller results in GM = 13.1 dB at $\omega_{pc} = 28.2$ rad/s, PM = 59.9 deg at $\omega_{gc} = 10.9$ rad/s, and BW = 19.71, i.e., the design objectives of increasing the GM and PM are fulfilled.

Remark 9.4: Because of the gain $k$ of the controller, which in general is different from 1, the steady-state error of the system changes. In our problem, since $k \approx 0.33 < 1$, the error increases. The simulation result is shown in Fig. 9.5, left panel. There are two solutions to this problem: (1) To increase the gain by the static controller $1/k$ so that the DC gain remains unchanged. This, however, destroys the just-increased GM and PM! (2) To design a dynamic controller which increases the gain at low frequencies (the so-called DC gain) but not at medium-high frequencies, so that the stability margins remain unchanged.

Figure 9.5 Lead-controlled system, Example 9.3. Left: $k = 0.33$, Right: $k = 1$.

It is clear that the right solution is (2). However, for the sake of illustration and comparison we also apply method (1) to this system. This fixes the steady-state error but reduces the already increased GM and PM to (Continued)


(cont’d) GM = 3.42 dB and PM = 12.6 deg. However, they are still larger than those of the original system, which were GM = 1.94 dB and PM = 6.43 deg. The simulation result is provided in Fig. 9.5, right panel. Comparing with Fig. 9.1, a significant improvement in the response is observed. The conclusion that we should draw from this example is that improving the stability margins by increasing the GM and PM in general results in a less oscillatory response.5

The above problem gives rise to the question: "How can we increase the gain of the system at low frequencies, so that we improve the steady-state error, but not at medium-high frequencies, so that we do not alter the stability properties of the system?" In other words, how can we increase the DC gain of the system? We shall answer this question in the sequel. First we present a wider scope in Example 9.4, and then in Example 9.5 we apply the developed controller to our present problem of Example 9.3.

Motivating Example 9.4: Given $P(s) = \frac{25}{s(s+10)(s+15)}$. It is desired to design a controller so that $e_{ss} = 2\%$. We first try a static controller. It should be $K > 300$, but the stability condition requires $0 < K < 150$. Hence we have to design a dynamic controller. The controller must be able to provide: (1) a high DC (i.e., zero-frequency) gain and (2) no gain at other frequencies, especially around the desired crossover frequencies, so that (ideally thinking) the stability properties of the system are not affected. To find the structure of the controller we start with simple structures. If we cannot find an answer then we consider more complicated structures. For part (1) of the requirement a controller like $1/(s+p)$, $p < 1$, or $k/(s+p)$, $k > p$, may be an acceptable choice. However, it significantly changes the root locus of the system and thus its stability properties, i.e., it does not fulfill part (2). A moment of thought reveals that a structure like $C'(s) = (s+z)/(s+p)$, $z > p$, may be an answer. It provides the low-frequency gain $z/p > 1$ while it does not provide any gain at high frequencies. They are, however, interrelated. Note that the Bode diagram looks like (Continued)

5 We do not really need to know whether the improvement in the response is due to the increase in the GM, the PM, or both. As we have said before, because both are decisive stability margins we have to increase both. See also Exercise 9.27. However, we should pose the subsequent question: Question 9.2: Is the oscillatory response in Example 9.1 due to the small PM, the small GM, or both of them? That is, does a system with a small PM but acceptable GM necessarily have an oscillatory response? How about a small GM and acceptable PM?


(cont’d) Fig. 9.6. We can simply rectify this problem by adding a pure gain to the controller. Now the controller is $C(s) = k(s+z)/(s+p)$ and looks like the right panel of the same figure. By the extra degree of freedom which is provided by the parameter $k$, the low- and high-frequency behaviors of the controller are further decoupled. The Bode diagram is the same as before save that its magnitude diagram is shifted by $20\log k$.

dB 20 log z 20 log p

0

Lagzero

Lag-pole

–90 90

Lagpole

0 0

20 log (z/p) Lag

Lag-zero

Lag

–90

0

ω

ω

Figure 9.6 Bode diagram of the proposed controller $C'(s)$. Left: Magnitude, Right: Phase.
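Returning to the static-controller dead end noted at the start of Example 9.4, the incompatibility of the two requirements on $K$ can be checked in two lines; the Routh condition for the third-order closed loop $s^3+25s^2+150s+25K$ is the standard $a_1a_2 > a_0a_3$ test:

```python
# Why a static gain cannot meet e_ss = 2% for P(s) = 25/(s(s+10)(s+15)):
# ramp error : e_ss = 1/Kv with Kv = lim_{s->0} s*K*P(s) = 25K/150, so K = 6/e_ss
# stability  : closed-loop poly s^3 + 25 s^2 + 150 s + 25K; Routh needs 25*150 > 25K
K_needed = 6 / 0.02       # gain required for e_ss = 2%
K_max = 25 * 150 / 25     # Routh-Hurwitz stability bound
```

Since 300 > 150, the two requirements cannot be met simultaneously by a pure gain, which is what forces the dynamic (lag) structure.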

How should we determine the parameters of the controller? To answer this question we notice that the controller has no idealized effect beyond the frequency of its zero. (The actual controller has a small nonzero contribution to the gain beyond that frequency.) Hence, the design procedure that comes to mind is probably this: "The zero is determined by stability considerations. It must6 be much smaller than the gain crossover frequency of the system, so as to not have any effect on it. As for its other parameters (gain and pole frequency) there is certainly some freedom in determining them." Some researchers opt to choose $z = 10p$ and design the parameter $k$ for the rest7 of the amplification needed. Some others do not prefix $z = 10p$ and divide the needed amplification between $p$ and $k$ in the design procedure. There does not appear to be much difference between the two, and they in effect do the same thing. However, the choice certainly affects the performance. For ease of formulation we rewrite the above controller as $C(s) = k(s+z)/(s+p) = kC_g(s) = k\frac{(1/p)s+(z/p)}{(1/p)s+1} = k\frac{Ts+k_g+1}{Ts+1} = k\left[1+\frac{k_g}{Ts+1}\right]$, with the obvious formulation $T = 1/p$ and $k_g = z/p-1$. Hence, (Continued)

6 Always?
7 An amplification ratio of 10 is embedded in $z = 10p$.
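The rewriting of $C(s) = k(s+z)/(s+p)$ into $k[1+k_g/(Ts+1)]$, with $T = 1/p$ and $k_g = z/p-1$, can be verified numerically; the values below are illustrative, not taken from the text:

```python
# Check that C(s) = k(s+z)/(s+p) equals k[1 + kg/(Ts+1)] with T = 1/p
# and kg = z/p - 1 (illustrative values).
k, z, p = 2.0, 1.0, 0.1
T, kg = 1 / p, z / p - 1
for s in (0.3j, 1j, 10j):            # a few test frequencies s = jw
    lhs = k * (s + z) / (s + p)
    rhs = k * (1 + kg / (T * s + 1))
    assert abs(lhs - rhs) < 1e-12
```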


(cont’d) the structure of the controller is $C(s) = kC_g(s) = k[1+k_g/(Ts+1)]$. Hence it transpires that the abovementioned design philosophy is correct, but it is convenient to implement it the other way around. As will shortly be explained, we first find $k$ and $k_g$ and then determine $T$. Before demonstrating the design procedure a remark is in order.

Remark 9.5: The controller $C(s) = kC_g(s) = k[1+k_g/(Ts+1)]$ is called a lag controller. The reason for this terminology is that its output lags its input in phase. Its Bode magnitude and phase diagrams are depicted in Fig. 9.7 below.


Figure 9.7 Bode diagram of the lag controller.

It is noteworthy that $C_g(s) = 1+k_g/(Ts+1)$ is also a lag controller8 and indeed the lag phenomenon of $C(s)$ is due to that of $C_g(s)$. Now we come back to our discussion and show the design process on our system. First we look at the Bode diagram of the system. It is given in Fig. 9.8, top left panel. The system parameters are GM = 43.5 dB (at $\omega_{pc} = 12.2$ rad/s) and PM = 88.4 deg (at $\omega_{gc} = 0.67$ rad/s). The closed-loop bandwidth is BW ≈ 0.17. Thus around $\omega_{pc} = 12.2$ the gain contribution of the controller must be below 43.5 dB (which is 150, already known to us). By any choice different from and larger than 1 the phase margin will be reduced. In this problem there is no requirement on the PM, and we show the performance of the controller for different PMs, $\omega_{gc}$'s, and BWs. (a) In the first trial we decide to keep the phase margin of PM = 88.4 deg almost the same. So we choose $k$ of $C(s) = kC_g(s)$ as $k = 1$. Now we design (Continued)

8 The reason that we do not use the subscript 'l' (as the initial of lag) instead of 'g' was explained in footnote 4.


(cont’d) $C_g(s) = [1+k_g/(Ts+1)]$ such that its DC gain is 120 and it has ideally no effect at the frequency $\omega_{gc} = 0.67$ and beyond. We note that the DC gain of $C_g(s)$ is $1+k_g$. Thus $k_g = 119$. In order for $C_g(s) = [1+k_g/(Ts+1)]$ not to have any effect on the Bode magnitude diagram around $\omega_{gc} = 0.67$, its magnitude at that frequency must be 1. But this is impossible. (Why?) Thus we accept a value near 1, such as 1.1. That is, we may approximately require $\left|\frac{k_g}{Ts+1}\right|_{\omega_{gc}} = 0.1$. This results in $T = \frac{1}{\omega_{gc}}\sqrt{\left(\frac{k_g}{0.1}\right)^2-1} \approx \frac{10k_g}{\omega_{gc}}$ (if $k_g > 1$). In our case we have $T \approx 1776$. The closed-loop system has BW ≈ 0.23. The simulation result for an input with the frequency $f = 1/8$, or $\omega = 2\pi f = \pi/4 \approx 0.78$ (in simplified analysis), is provided in Fig. 9.8. Because the input frequency is larger than the closed-loop BW, the output is inevitably as poor as the one provided in Fig. 9.8, top right panel. (Why?) Note that there will not be much difference in the output if we change the controller parameters, e.g., use 360 instead of 120. Indeed this is verified by simulation in the same panel. The outputs are not distinguishable. (Note that with a DC gain of 360 there holds $k_g = 359$ and $T \approx 5358$.) Next we try a larger BW, which is tantamount to a larger $\omega_{gc}$ and smaller PM in this system. This is done in part (b).

(b) We may decide to choose PM ≈ 50 deg as acceptable. This means that we can increase the gain by 29.5 dB, which is 30 in plain numbers. Hence in $C(s) = kC_g(s)$ we choose $k = 30$. Now the system has $\omega_{gc} = 4.39$ and BW = 7.88. Next we design $C_g(s) = [1+k_g/(Ts+1)]$. To this end we recall that the DC gain must be at least 120. For the first trial we thus test a DC gain of 120. This means that the DC gain of $C_g(s)$, which is $1+k_g$, must be 120/30. Thus $k_g = 3$ and $T \approx 6.83$. The simulation result is shown in Fig. 9.8, bottom left panel. A significant improvement in the output is observed. Next we try a DC gain of 360. One has $1+k_g = 360/30$ and thus $k_g = 11$ and $T \approx 25.05$. This time the improvement in the response is so negligible that the two are barely distinguishable.

(c) As the last trial we take PM ≈ 25 deg. This happens if the gain is increased by 36.4 dB, which is 66 in plain numbers. Therefore in $C(s) = kC_g(s)$ we choose $k = 66$. Now the system has $\omega_{gc} = 7.73$ and BW = 12.61. Next we design $C_g(s) = [1+k_g/(Ts+1)]$. For the DC gain of 120 we obtain $1+k_g = 120/66$, or $k_g = 0.82$ and $T \approx 1.05$. With a DC gain of 360 we find $1+k_g = 360/66$, or $k_g = 4.45$ and $T \approx 5.76$. The simulation result is shown in Fig. 9.8, bottom right panel. The two cases are barely distinguishable. In comparison to part (b) it is observed that the response is faster, but at the expense of some oscillations. (Continued)
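The exact $T$ from the 0.1 requirement and its $k_g \gg 1$ shortcut can be compared directly (a sketch using the part (a) numbers; the function name is hypothetical):

```python
import math

def lag_T(kg, w_gc, eps=0.1):
    """Exact T with |kg/(jwT+1)| = eps at w = w_gc."""
    return math.sqrt((kg / eps) ** 2 - 1) / w_gc

kg, w_gc = 119, 0.67                  # part (a) of the example
T_exact = lag_T(kg, w_gc)             # close to 1776
T_approx = 10 * kg / w_gc             # the kg >> 1 shortcut used in the text
ratio = abs(kg / (1j * w_gc * T_exact + 1))   # equals 0.1 by construction
```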


(cont’d) [Fig. 9.8, top left: Bode diagram with Gm = 43.5 dB (at 12.2 rad/s) and Pm = 88.4 deg (at 0.167 rad/s); remaining panels: inputs and outputs of the simulations of parts (a)-(c).]
Figure 9.8 Top left: Bode diagram of the original system, Top right: Result of part (a), Bottom left: Result of part (b), Bottom right: Result of part (c).

Observe that in all the cases the frequency of the zero of the controller is much smaller than the corresponding $\omega_{gc}$, as required by the design philosophy. Moreover, the conclusion that we should draw from this example is that a larger $\omega_{gc}$ and BW of the system result in a faster response (smaller settling time), but at the expense of some oscillations and smaller stability margins (both PM and GM). A compromise is thus called for. Choosing the best PM and crossover frequency for this system, with this control structure and reference input signal, is at the designer's discretion. One may choose PM = 40 and $\omega_{gc} = 5.64$.

Remark 9.6: It is noteworthy that the lag controller can be cast in a structure similar to that of the lead controller. More precisely, $C(s) = k\left[1+\frac{k_g}{Ts+1}\right] = \frac{k}{\alpha}\,\frac{\alpha Ts+1}{Ts+1} = K\,\frac{\alpha Ts+1}{Ts+1}$, where $\alpha = 1/(1+k_g) < 1$. Note that the difference with the lead controller is that there $\alpha > 1$ whereas here $\alpha < 1$.
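A quick numerical check of this identity (illustrative values, not from the text):

```python
# Lag k[1 + kg/(Ts+1)] recast as K(alpha*T*s+1)/(Ts+1),
# with alpha = 1/(1+kg) < 1 and K = k/alpha.
k, kg, T = 1.0, 2.04, 1.87
alpha = 1 / (1 + kg)
K = k / alpha
for s in (0.1j, 1j, 10.9j):
    lhs = k * (1 + kg / (T * s + 1))
    rhs = K * (alpha * T * s + 1) / (T * s + 1)
    assert abs(lhs - rhs) < 1e-12
```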


Remark 9.7: What bandwidth shall we choose for a given system? The answer is that it certainly depends on the controller structure (if restricted) and the frequency of the reference input signal. We will comment more on such issues in the coming chapter. There we will see that for sinusoidal reference inputs, if we have freedom, it is recommendable to take the BW at least as large as the smallest pole of the open-loop system. However, for ramp and parabolic reference inputs our Question 4.17 (with the explanations given in its preceding paragraph) of Chapter 4, Time Response, remains unanswered even in this chapter.

Remark 9.8: As the PM was not a requirement in Example 9.4, we did not check whether the controller had any effect on it or not. In case the PM is a design specification we need a redesign, or some consideration in the original design, as we shall show in the next example.

Motivating Example 9.5: Investigate the effect of the lag controller on the PM of the controlled system. To this end we use MATLAB. The following results are obtained for Motivating Example 9.4:
Part (a): first controller: GM = 43.4 dB ($\omega_{pc} = 12.2$ rad/s) and PM = 67.7 deg ($\omega_{gc} = 0.178$ rad/s); second controller: GM = 43.4 dB ($\omega_{pc} = 12.2$ rad/s) and PM = 67.7 deg ($\omega_{gc} = 0.178$ rad/s).
Part (b): first controller: GM = 13.3 dB ($\omega_{pc} = 11.8$ rad/s) and PM = 44.1 deg ($\omega_{gc} = 4.42$ rad/s); second controller: GM = 13.3 dB ($\omega_{pc} = 11.8$ rad/s) and PM = 44.1 deg ($\omega_{gc} = 4.41$ rad/s).
Part (c): first controller: GM = 5.89 dB ($\omega_{pc} = 11.4$ rad/s) and PM = 18.9 deg ($\omega_{gc} = 7.82$ rad/s); second controller: GM = 5.93 dB ($\omega_{pc} = 11.4$ rad/s) and PM = 19.1 deg ($\omega_{gc} = 7.77$ rad/s).
It is observed that in part (a) the PM is reduced by some 21 deg, and in parts (b) and (c) it is reduced by some 6 deg. In fact it turns out that for low-order systems, in most cases (where the PM is below 60 deg), the reduction in PM is about 6 deg. Therefore if the PM is a strict requirement we should redesign the controller for PM + 6 deg. Of course, to avoid the redesign we can modify the desired PM to PM + 6 deg from the outset.
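The roughly 6-deg figure can be traced to the $T \approx 10k_g/\omega_{gc}$ rule itself: with $T\omega_{gc} = 10k_g$, the lag term contributes $\arg\left(1+\frac{k_g}{1+j10k_g}\right) \approx -\tan^{-1}0.1 \approx -5.7$ deg at the crossover, nearly independently of $k_g$. A quick check (a sketch, not from the text):

```python
import cmath, math

def lag_phase_at_wgc(kg):
    """Phase (deg) of 1 + kg/(Ts+1) at w_gc when T = 10*kg/w_gc, i.e. T*w_gc = 10*kg."""
    Cg = 1 + kg / (1j * 10 * kg + 1)
    return math.degrees(cmath.phase(Cg))

# kg values from parts (b) and (c) of Example 9.4; each loss is close to -6 deg
losses = {kg: lag_phase_at_wgc(kg) for kg in (3, 11, 0.82, 4.45)}
```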

Now we come back to Example 9.3. Recall that we needed a dynamic controller to increase its DC gain without affecting its stability properties.


Motivating Example 9.6: Reconsider Example 9.3 with its controller. That is, the system is $0.3291\,\frac{9.2345\times 0.0302s+1}{0.0302s+1}\,\frac{25\times 120}{s(s+10)(s+15)}$. Propose a controller that renders $e_{ss} = 5\%$ without much affecting the stability margins of the system. We have learned that we should design a lag controller. We also know that it will decrease the PM by about 6 deg. So if PM = 60 is strict we should first redesign the lead controller for PM = 66, so that when we apply the lag controller it becomes $66-6 = 60$. Suppose that PM = 60 is not strict and we can accept PM = 54. Now we design the lag controller. Because it does not need to provide a new PM for the system, in its formulation $k[1+k_g/(Ts+1)]$ we choose $k = 1$. From Example 9.3 we recall that the lag controller must contribute $1/0.3291 = 3.0386$ as its DC gain and should act below $\omega_{gc} = 10.9$. Thus $1+k_g = 3.0386$, or $k_g = 2.04$ and $T \approx 10\times 2.04/10.9 = 1.87$. The simulation result in Fig. 9.9 illustrates an excellent response. The system has GM = 12.4 dB at $\omega_{pc} = 27.1$, PM = 53.8 deg at $\omega_{gc} = 11$, and BW = 19.95. Comparing with the system features of Example 9.3, we observe that the deteriorating effect of the lag control is some 6 deg reduction in the PM (which we agreed to accept) and a slight reduction also in the GM and $\omega_{pc}$. The BW is almost the same.

Figure 9.9 Simulation result of Example 9.6.

In the next example we shall present a combination of the lead and lag controllers. By the 6 deg consideration in the PM we will prevent the (slight) deteriorating effect of the lag controller on the stability margins.


Motivating Example 9.7: Reconsider the system of Example 9.1. Design a controller for it such that at $\omega_{gc} = 13$, PM = 50, and $e_{ss} = 2\%$. It should be clear to the reader that the design procedure is as follows. If by pure gain the phase margin PM + 6 and $\omega_{gc}$ are achievable, we take this value as the gain $k$ of the lag controller $kC_g(s)$, and then design $C_g(s)$ to fulfill the desired $e_{ss}$. If fulfilling PM + 6 and $\omega_{gc}$ by pure gain is not possible, we shall do so by a lead controller. We choose the phase as PM + 6 because in the second step the lag controller, which is designed for $e_{ss}$, will reduce the PM by about 6 deg.

Remark 9.8: The resultant controller is called a lead-lag controller. Alternatively, it may be called a lag-lead controller. Both terminologies are supported by certain philosophies. The philosophy behind the term "lead-lag" is that in the design process first the lead part is designed and then the lag part. Also note that in the time domain one first observes the effect of the lead part (on the transient response) and then, at steady state, the effect of the lag part (on the steady-state error). As for "lag-lead," in the frequency domain (as in the Bode diagram) one first (i.e., at low frequencies) observes the effect of the lag term and then (at medium-high frequencies) the effect of the lead part. We may use both terminologies without any preference. To avoid confusion between the parameters of the lag and lead controllers we use an index for them and write the resultant controller as $k_d\frac{\alpha T_d s+1}{T_d s+1}\times\left[1+\frac{k_g}{T_g s+1}\right]$. Note that because the lead part fixes the PM, the parameter $k$ of the lag controller equals 1. The Bode diagram of our lead-lag controller is shown in Fig. 9.10.

Figure 9.10 Lead-lag controller. Top: Magnitude, Bottom: Phase.

(Continued)


(cont’d) The reader is encouraged to investigate and discuss: (1) Whether both of the cases in the upper panel of Fig. 9.10 are possible. (2) Whether the value $20\log k_d$ in that panel is exact. (3) When is the intermediate frequency range shrunk so that the phase plot looks monotone in that band? See also Problem 9.19. Now we implement the design procedure. First we design the lead part of the controller, i.e., $k_d(\alpha T_d s+1)/(T_d s+1)$. At the desired $\omega_{gc} = 13$, the phase of the system is $-183$ deg and $|P| = -44.6$ dB. To have a PM of $50+6 = 56$ deg at this frequency, the phase deficit of the system is $56+3 = 59$ deg. This phase will be contributed by the lead controller. Thus $\alpha = (1+\sin 59°)/(1-\sin 59°) = 13.00$, $T_d = 1/(13\sqrt{13}) = 0.0213$, $k_d = 10^{44.6/20}/\sqrt{13} = 47.10$. The resultant system has PM = 55.6. (It should be PM = 56. The reason for this slight difference is the error in reading the parameters from the figures, which is done by clicking on the figures, and to a much lesser extent the round-off errors.) Next we design the lag part of the controller, i.e., $[1+k_g/(T_g s+1)]$. To this end we note that the steady-state error requirement with the lead-lag controller results in $\frac{1}{K_v k_d(1+k_g)} = 0.02$, where $K_v = \lim_{s\to 0} sP(s)$ of the original system. Thus $1+k_g = \frac{1}{(25/150)\times 47.1\times 0.02}$, or $k_g = 5.37$ and $T_g \approx 10\times 5.37/13 = 4.13$. The resultant system with the designed lead-lag controller has GM = 12.6 dB ($\omega_{pc} = 31.7$ rad/s), PM = 49.7 deg ($\omega_{gc} = 13.1$ rad/s), and BW = 23.04, which are well acceptable. The response is depicted in Fig. 9.11, which is excellent.

Figure 9.11 Example 9.7 controlled by a lead-lag controller.
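The lead-lag arithmetic of Example 9.7 can be reproduced in a few lines (a sketch; the dB and phase values are those read from the Bode plot in the example):

```python
import math

# Lead part: 59 deg boost at w_gc = 13 where |P| = -44.6 dB
phi = math.radians(59)
alpha = (1 + math.sin(phi)) / (1 - math.sin(phi))   # close to 13.0
Td = 1 / (13 * math.sqrt(alpha))                    # close to 0.0213
kd = 10 ** (44.6 / 20) / math.sqrt(alpha)           # close to 47.1
# Lag part: e_ss = 1/(Kv*kd*(1+kg)) = 0.02 with Kv = 25/150 for the original plant
Kv = 25 / 150
kg = 1 / (Kv * kd * 0.02) - 1                       # close to 5.37
Tg = 10 * kg / 13                                   # close to 4.13
```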


Remark 9.9: As we have said before, it is common practice in the literature to use the open-loop $\omega_{gc}$ instead of the desired closed-loop BW in the design process, since we do not know any straightforward method for consideration of the BW. This method may succeed in roughly satisfying the desired BW in many problems; however, it severely fails in many other cases. Thus it is necessary to verify the final result. An example is given shortly. Also recall that the importance of the bandwidth is that if it is not large enough the output performance will be poor, as we have observed and will observe again in simulations. (Also recall from Chapter 4, Time Response, that for sinusoidal inputs we know the direct relation between the bandwidth of the system and the bandwidth of the input signal.)

Example 9.8: The system $1000\times\frac{1}{s+2}\times\left(\frac{13.93\times 0.0268s+1}{0.0268s+1}\right)^4\times\frac{28.8s+22.5}{28.8s+1}\times\frac{25}{(s+5)(s+10)(s+15)^2}$ has $e_{ss} = 2\%$ and GM = 4.36 dB ($\omega_{pc} = 16.2$ rad/s), PM = 74.6 deg ($\omega_{gc} = 8.51$ rad/s); thus the design is seemingly acceptable, and we may expect the closed-loop bandwidth to be BW ≈ 10 or larger. However, it is BW = 3.00. Other systems with this phenomenon are encountered in the worked-out problems.

The maximum angle we may provide with a lead controller is discussed in the next remark.

Remark 9.10: The maximum phase boost that a lead controller can provide is 90 deg, which requires $\alpha = \infty$. This is certainly unacceptable. In practice the maximum value that is used is usually $\varphi_{\max} = 60$ deg, which corresponds to $\alpha = 13.93$. Thus if the required phase boost is more than 60 deg we have to use more than one controller. This is shown in Example 9.9 and many of the worked-out problems. There are two main possibilities for this objective. One is to spread the lead controllers over a frequency range. We may do so in such a way that in each step a stable system is obtained, although this is not necessary. The other is to choose all of them at the same frequency. To the best of our knowledge, at the moment there is no theory in favor of either of them, and thus we have to resort to simulation. (However, this is an interesting topic for theoretical research, of course.) What can be said for sure is that the second approach requires less work; this is what we do in the worked-out problems. In the next example, however, for the sake of illustration and comparison we follow both approaches.

Example 9.9: Reconsider the system of Example 9.1 with another integrator, i.e., $P(s) = \frac{25}{s^2(s+10)(s+15)}$. It is desired to design a controller so that the PM is (Continued)


(cont’d) at least 40 deg, the bandwidth is about $\omega = 30$ rad/s, and the steady-state error to a step reference input is zero. Let us first discuss the BW. As we have already said, there is no guaranteed relation between the BW and $\omega_{pc}$ or $\omega_{gc}$. Thus we must verify the result. We start by aiming at $\omega_{gc} = 20$. If the closed-loop BW is as desired, we are done; otherwise we naturally increase the $\omega_{gc}$. Now we start the design. The system has the phase $-233$ deg at $\omega = 6$. If we use a maximal lead controller at this frequency the system will be stabilized (but with a smaller BW than the desired one, of course). There holds $\alpha = 13.93$, $T = 1/(\omega\sqrt{\alpha}) = 0.0447$. Now we could also specify the gain of this controller, but it is simpler to relegate the gain determination to the last step, where the phase boost is completed. We will do so. Now we have a look at the Bode diagram of the resultant system. It has the phase $-236$ at $\omega = 16$. We design a maximal lead controller at this frequency: $\alpha = 13.93$, $T = 1/(\omega\sqrt{\alpha}) = 0.0167$. The resultant system has the phase $-194$ at $\omega = 20$. If we design a maximal lead controller at this frequency (after the appropriate gain determination) the system will have a PM of 46 deg at this frequency, and thus the design objective is fulfilled. We do so: $\alpha = 13.93$, $T = 1/(\omega\sqrt{\alpha}) = 0.0134$. Now we determine the gain (in one step, instead of three steps, one for each lead term). At $\omega = 20$, $|P| = -35.2$ dB. Thus the gain is specified as $10^{35.2/20} = 57.544$. The controller is thus $C_1(s) = 57.54\times\frac{13.93\times 0.0447s+1}{0.0447s+1}\times\frac{13.93\times 0.0167s+1}{0.0167s+1}\times\frac{13.93\times 0.0134s+1}{0.0134s+1}$. The system has GM = 7.12 dB, $\omega_{pc} = 34.2$, PM = 45.8 deg, $\omega_{gc} = 20.2$, and BW = 35.29. As is observed, the design objective is fulfilled.

Next we solve the problem by designing all the lead controllers at $\omega_{gc} = 20$. At this frequency the phase of the system is $-297$ deg. Thus to have a PM of at least 40 deg the lead controllers should contribute at least $117+40 = 157$ deg. If we design 3 maximal lead controllers at this frequency, i.e., contribute $3\times 60$ deg, the PM will be 63 deg. Now it is at the designer's discretion what PM to choose. For the sake of comparison with the previous design we take both options of PM = 46 deg (as before) and PM = 63 deg. For PM = 46 deg we can choose 2 maximal lead controllers and one with $\varphi_{\max} = 43$ deg. The third one corresponds to $\alpha = 5.29$, $T = 0.0217$. With these three controllers the magnitude of the system at this frequency will be $|L| = -48.9$ dB. Thus the gain is determined as $10^{48.9/20} = 278.61$. In brief, the controller is $C_2(s) = 278.61\times\frac{5.29\times 0.0217s+1}{0.0217s+1}\times\left(\frac{13.93\times 0.0134s+1}{0.0134s+1}\right)^2$, with which the system has GM = 8.2 dB, $\omega_{pc} = 39.9$, PM = 46.6 deg, $\omega_{gc} = 19.9$, and BW = 38.42. (Note that the slight differences in PM and BW are due to inaccuracies in reading the values from the Bode diagram, which is done by clicking on it.) For PM = 63 deg we design three maximal lead controllers at $\omega_{gc} = 20$. With these controllers, at this frequency $|L| = -44.7$ dB and hence the gain (Continued)


(cont’d) will be $10^{44.7/20} = 171.79$. All in all, the controller is $C_3(s) = 171.79\times\left(\frac{13.93\times 0.0134s+1}{0.0134s+1}\right)^3$, with which the system has GM = 9.96 dB, $\omega_{pc} = 49$, PM = 63.5 deg, $\omega_{gc} = 20$, and BW = 41.32. Finally, we discuss the choice we should make from among the above designs. To this end we should have a look at the system response in all the above cases. They are provided in Fig. 9.12.
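The phase bookkeeping used in this example (plant phase at a candidate frequency, then the phase after a lead term is cascaded) can be sketched as follows; the unwrap by $-360$ deg is valid here because the true phases lie below $-180$ deg:

```python
import cmath, math

def P(w):
    """Plant of the example, 25/(s^2 (s+10)(s+15)), evaluated at s = jw."""
    s = 1j * w
    return 25 / (s**2 * (s + 10) * (s + 15))

def lead(w, alpha, T):
    """Lead term (alpha*T*s+1)/(T*s+1) at s = jw."""
    s = 1j * w
    return (alpha * T * s + 1) / (T * s + 1)

def deg(x):
    """Phase in degrees, shifted by -360 (true phases here are below -180 deg)."""
    return math.degrees(cmath.phase(x)) - 360

ph20 = deg(P(20))                             # near -297 deg
ph16 = deg(P(16) * lead(16, 13.93, 0.0447))   # near -236 deg after the first lead
boost_needed = 40 - (180 + ph20)              # near 157 deg for PM >= 40 at w = 20
```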

1.2

1.2

1

1 Amplitude

Amplitude

0.8 0.6 0.4

0.6

0.2 0.5

1

1.5 Time (s)

2

2.5

0

3

Step response

1.2

0

1

1

0.8

0.8

0.6

0.4

0.2

0.2

0.5

1

1.5 Time (s)

1

2

2.5

3

1.5 Time (s)

2

2.5

3

2

2.5

3

Step response

0.6

0.4

0 0

0.5

1.2

Amplitude

Amplitude

0.8

0.4

0.2 0 0

Step response

1.4

0 0

0.5

1

1.5 Time (s)

Figure 9.12 Top left: $C_1$, Top right: $C_2$, Bottom left: $C_3$, Bottom right: $C_4$.

In the right panel of the bottom row we have taken a different approach. We have designed three maximal lead controllers at the same frequency and then have done some trial and error on the gain value. To this end we have verified the stability region as $k < 540.6$, and have experimented with values both smaller and larger than 171.79. With each value we have computed the system features and observed the output as well. For the gain value of 128 the controller $C_4$ results in the system features GM = 12.5 dB, $\omega_{pc} = 49$, PM = 79.6 deg, $\omega_{gc} = 13.1$, and BW = 30.34. The output is in the bottom right (Continued)


(cont’d) panel of Fig. 9.12. As observed, it is smoother. Probably this design would be chosen by any designer, because all of the GM, PM, and delay margins are larger. The delay margin is $T < 79.6/13.1$ (in consistent units), which is by far larger than that of the previous designs, e.g., $T < 63.5/20$ in the third design. On the other hand, if a smaller settling time is a must, then the second design should be preferred. We wrap up this example by adding that the system obviously tracks the ramp input with zero steady-state error as well. To choose the controller that results in the best transient performance we should simulate the system with all the controllers. We should also add that if the design is exclusively done for a ramp input, then obviously a better performance will be achieved. This is also left to the reader.
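The delay-margin comparison can be made concrete by converting the quoted ratios to seconds (PM in radians divided by $\omega_{gc}$):

```python
import math

# Delay margin = PM (rad) / w_gc for the fourth and third designs
dm_C4 = math.radians(79.6) / 13.1   # C4: roughly 0.106 s
dm_C3 = math.radians(63.5) / 20.0   # C3: roughly 0.055 s
```

So $C_4$ tolerates nearly twice the loop delay of $C_3$, supporting the comparison above.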

9.3 Controller simplifications: PI, PD, and PID

The simplifications of the lead, lag, and lead-lag controllers are known as the PD, PI, and PID controllers. They have been the most pervasively used controllers in industry since the 1930s. With the new advances of control theory in the 1980s and 1990s, other types of controllers (mainly robust, adaptive, and intelligent) have substituted for them in many applications, sometimes in the form of their robust/adaptive/intelligent versions. Traditionally these controllers have been designed by looking at the system as a black box, then observing its output for a given step or sinusoidal input, and finally finding the controller parameters based on the output (i.e., step response or frequency response) characteristics. Through the decades several other approaches have been developed for their design, so that today over 10 well-established approaches can be found in the pertinent literature. Indeed, several books can be found in the literature which focus on the design of these controllers. We will briefly discuss some of these approaches in Section 9.7. Here we address their design using the same philosophy of GM and PM which we used for the design of the lead, lag, and lead-lag controllers. This is done in the following, where we first introduce them.

A PID controller is given by $C(s) = k\left(1+\frac{1}{T_i s}+T_d s\right)$, where the letters P (proportional), I (integral), and D (derivative) refer to the terms $k$, $k/(T_i s)$, and $kT_d s$, respectively. To find the unknowns $k, T_i, T_d$ we rewrite the controller expression as $C(s) = k\left(1+\frac{1}{T_i s}+T_d s\right) = k'\left(1+\frac{1}{T'_i s}\right)\left(1+T'_d s\right)$, where $k = k'(1+T'_d/T'_i)$, $T_i = T'_i+T'_d$, $T_d = T'_i T'_d/(T'_i+T'_d)$. The terms $k'$, $(1+1/(T'_i s))$, and $(1+T'_d s)$ are the P, PI, and PD terms in the new formulation. The former and latter formulations are veritably called the parallel and series structures, respectively. We will design the controller in the new/series formulation, i.e., find $k', T'_i, T'_d$, and then by the given formulae find the original unknown parameters. This is done in the sequel in three separate steps.
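The series-to-parallel conversion formulas can be sketched as follows; the series values are those of Example 9.10 below, and the final assertion checks that the two forms agree at an arbitrary test frequency:

```python
def series_to_parallel(kp, Tip, Tdp):
    """Series PID k'(1 + 1/(Ti's))(1 + Td's) -> parallel k(1 + 1/(Ti s) + Td s)."""
    k = kp * (1 + Tdp / Tip)
    Ti = Tip + Tdp
    Td = Tip * Tdp / (Tip + Tdp)
    return k, Ti, Td

kp, Tip, Tdp = 101.6944, 0.7692, 0.1021   # series values of Example 9.10
k, Ti, Td = series_to_parallel(kp, Tip, Tdp)
s = 3j
assert abs(k * (1 + 1/(Ti*s) + Td*s) - kp * (1 + 1/(Tip*s)) * (1 + Tdp*s)) < 1e-9
```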


The steps of the design are as follows.
1. PD control: The PD term has a positive phase and can have high gain at high frequencies. It is thus a lead term, and as such its usage is like that of a lead term. More precisely, we design $T'_d$ such that at the desired $\omega_{gc}$, $\angle(1+jT'_d\omega_{gc})$ is the desired phase boost.
2. PI control: The PI term has a negative phase and can have high gain at low frequencies. That is, it is a lag term, and hence is used like a lag term. It is used for improving the steady-state error. However, note that it outstrips the standard lag controller, since it increases the type of the system by 1: it changes 0 to 1, 1 to 2, and so on. That is, it may make the steady-state error equal to zero. Its parameter is chosen such that it has no effect at the desired $\omega_{gc}$. Thus we choose $1/(T'_i\omega_{gc}) \ll 1$, usually 0.1. Therefore $T'_i = 10/\omega_{gc}$.
3. P control: At the final stage $k'$ is chosen such that $k'\times\left|1+\frac{1}{jT'_i\omega_{gc}}\right|\times\left|1+jT'_d\omega_{gc}\right| = 10^{-|P|/20}$, where $|P|$ is the magnitude of the plant (read from its Bode diagram) at the desired $\omega_{gc}$ in dB scale. Note that in this formula $\left|1+\frac{1}{jT'_i\omega_{gc}}\right| = \sqrt{1.01} = 1.005$ and $\left|1+jT'_d\omega_{gc}\right| = 1/\cos\varphi$, where $\varphi$ is the phase boost the PD term provides.
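The three steps can be collected into a sketch (the helper name is hypothetical; the test values are those of Example 9.10 below):

```python
import math

def pid_series_design(w_gc, P_dB, P_phase_deg, PM_deg):
    """Steps 1-3 above for the ideal series PID C(s) = k'(1 + 1/(Ti's))(1 + Td's)."""
    phi = math.radians(PM_deg - (180 + P_phase_deg))  # phase deficit for the PD term
    Tdp = math.tan(phi) / w_gc                        # step 1: angle(1+j*Td'*w_gc) = phi
    Tip = 10 / w_gc                                   # step 2: PI has ~no effect at w_gc
    kp = 10 ** (-P_dB / 20) * math.cos(phi) / math.sqrt(1.01)  # step 3
    return kp, Tip, Tdp

# Example 9.10 data: w_gc = 13, |P| = -44.6 dB, angle(P) = -183 deg, desired PM = 50
kp, Tip, Tdp = pid_series_design(13, -44.6, -183, 50)
```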

PID control: In the above steps we have designed the PID control in the P-PI-PD structure. We then easily convert it to the conventional parallel PID structure using the relations provided before.
Remark 9.11: The proposed PID controller is the ideal formulation, which is noncausal and thus unrealizable. One may take two approaches. (1) For its implementation a large pole is added to the PD term (which is otherwise noncausal) or to the whole P-PI-PD controller (which is otherwise noncausal). From Chapters 4 and 5 (Remark 5.30) we know that this is not a good approach; alas, it is practiced in the literature. (2) No PD term is designed; instead, the conventional lead term is designed, followed by the PI design (see Remark 9.12). This second approach has a firmer theoretical foundation and is preferred. We illustrate the above development in the subsequent example.
Example 9.10: Reconsider the system of Example 9.1. Design a PID controller for it such that at ω_gc = 13, PM = 50 deg, and e_ss = 2%.
First we design an ideal PID. Recall that at ω_gc = 13: |P| = -44.6 dB and the phase of P is -183 deg. So the phase deficit of the system, which should be provided by the PD term, is φ = 53 deg. Thus the angle of (1 + jT'_d ω_gc) is 53 deg, or 13 T'_d = tan 53 = 1.327, i.e., T'_d = 0.1021. Now we design the PI term: T'_i = 10/ω_gc = 0.7692. Finally k' = 10^(44.6/20) x cos 53 / 1.005 = 101.6944. The PID in the conventional formulation is thus C(s) = 115.49(1 + 1/(0.8713s) + 0.0684s). With this controller the system has GM = -Inf, ω_pc = 0, PM = 43.9 deg, ω_gc = 13.1, and BW = 21.43. We note that the PM is some 6 deg less than the desired value, and this is due to the effect of the PI term (similar to the lag term). Next we design a realizable PID. So we design a lead term instead of the PD term. At ω_gc = 13 this lead term will provide φ_max = 53 + 6 = 59 deg

Frequency domain synthesis and design

651

because the PI term will reduce the PM by some 6 deg. This term is what we have designed before in Example 9.7; it is 47.10(13 x 0.0213s + 1)/(0.0213s + 1). Now we simply design the PI term by T'_i = 10/ω_gc = 0.7692. With this controller the resultant system has GM = 12.6, ω_pc = 31.7, PM = 49.7 deg, ω_gc = 13.1, and BW = 23.04. Comparing with Example 9.7 we observe that the system features are very much the same, but here e_ss = 0. The performance of the system is shown in Fig. 9.13. Comparing with Example 9.7 we only observe an improvement in the steady-state error, which is due to the PI term. The transient phases are almost the same, which is due to the lead terms. (Finally we should add that the ideal PID results in a nicer tracking performance; see the accompanying CD.)
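The ideal-PID computations of Example 9.10 reduce to a few lines of arithmetic. A sketch (function name is ours); with the example's numbers it reproduces T'_d = 0.1021, T'_i = 0.7692, and k' close to 101.7:

```python
import math

def design_series_pid(wgc, phi_deg, P_dB):
    """Steps 1-3 of the text: given the desired gain crossover wgc
    (rad/s), the required phase boost phi (deg), and the plant
    magnitude at wgc in dB, return the series parameters
    (k', T'_i, T'_d)."""
    phi = math.radians(phi_deg)
    Td_s = math.tan(phi) / wgc           # PD: angle(1 + j T'_d wgc) = phi
    Ti_s = 10.0 / wgc                    # PI: 1/(T'_i wgc) = 0.1
    pi_mag = abs(1 + 1 / (1j * Ti_s * wgc))   # = sqrt(1.01), about 1.005
    pd_mag = abs(1 + 1j * Td_s * wgc)         # = 1/cos(phi)
    kp = 10 ** (-P_dB / 20.0) / (pi_mag * pd_mag)
    return kp, Ti_s, Td_s

kp, Ti_s, Td_s = design_series_pid(13.0, 53.0, -44.6)   # Example 9.10
```

The small differences from the book's quoted digits are only rounding.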

Figure 9.13 Example 9.10. PID in the form of Lead-PI control.

Remark 9.12: Note that in the above procedure the lead term takes the place of the P-PD term, i.e., k'(1 + T'_d s), and only the PI term, i.e., (1 + 1/(T'_i s)), is left, which is designed afterwards. A slightly different procedure can also be taken in which the lead term takes the place of the PD term only. It is as follows. We design the pole-zero of the lead control, but do not specify its gain. Thus in our example it is (13 x 0.0213s + 1)/(0.0213s + 1). Then we design the PI term as usual. Finally we determine the gain k' from k' x 1.005 x |13 x 0.0213 jω_gc + 1| / |0.0213 jω_gc + 1| = 10^(-|P|/20). If we take this approach we find k' = 46.9318. With this gain the crossover frequencies are found as before, but one finds a slight difference in the other features: GM = 12.7, PM = 49.8 deg, BW = 22.98. Because this second approach requires more computations, the first approach is usually preferred.
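The gain computation of this second approach is easily checked numerically. A sketch with the example's numbers, which reproduces k' close to 46.93:

```python
# Numerical check of the gain formula in Remark 9.12; the lead term is
# (13*0.0213 s + 1)/(0.0213 s + 1) and |P| = -44.6 dB at wgc = 13.
wgc = 13.0
alpha, T = 13.0, 0.0213
lead_mag = abs(1j * alpha * T * wgc + 1) / abs(1j * T * wgc + 1)
kp = 10 ** (44.6 / 20.0) / (1.005 * lead_mag)
```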


[Figure: Bode magnitude and phase plots; magnitude levels at 20 log k, corner frequencies 1/T_i and 1/T_d, phase between -90 and +90 deg.]
Figure 9.14 Top left: PI, Top right: PD, Bottom: PID.

Remark 9.13: We close this section by providing the Bode diagrams of the PI, PD, and PID controllers. In general the PI and PD terms are given by C(s) = k(1 + 1/(T_i s)) and C(s) = k(1 + T_d s), respectively. The Bode diagrams are thus as given in Fig. 9.14, top row. Note that the slopes of the magnitude diagrams are -20 and +20 dB/dec, respectively. It is noteworthy that the unboundedness of the magnitude diagram of the PD controller at the high-frequency end accounts for its noncausality. The same is true for the PID controller given by C(s) = k(1 + 1/(T_i s) + T_d s), depicted in the bottom row of the same figure. The reader is encouraged to investigate whether the value 20 log k in the Bode diagram of the PID controller is exact. See also Exercise 9.34.
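The stated slopes are easy to verify numerically. A small sketch (illustrative parameter values) estimating the asymptotic slopes over one decade:

```python
import math

def mag_dB(C, w):
    # magnitude of C(jw) in dB
    return 20.0 * math.log10(abs(C(1j * w)))

k, Ti, Td = 1.0, 1.0, 1.0                 # illustrative values
PI = lambda s: k * (1 + 1 / (Ti * s))
PD = lambda s: k * (1 + Td * s)

# Slopes over one decade, well below 1/Ti and well above 1/Td:
pi_slope = mag_dB(PI, 1e-3) - mag_dB(PI, 1e-4)   # about -20 dB/dec
pd_slope = mag_dB(PD, 1e4) - mag_dB(PD, 1e3)     # about +20 dB/dec
```

The unbounded growth of the PD magnitude at high frequencies is exactly the +20 dB/dec asymptote continuing forever.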

9.4 Controller structures in the Nyquist plot context

The structures of the lag, lead, and lag-lead controllers are studied in this section in the context of the Nyquist plot. In particular we also consider lag-leads with more than one lead component. Note that the pictures are not drawn to scale. The Nyquist plot of a lag controller is depicted in Fig. 9.15. Recall that the controller is given by k(1 + k_g/(Ts + 1)). Thus k(1 + k_g) is its low-frequency gain. By increasing the frequency the phase becomes negative. By further increasing the frequency the phase becomes more negative, and beyond a certain frequency it starts to increase, i.e., becomes less negative. This trend continues up to the infinite


[Figure: Nyquist plot; gain k(1 + k_g) at ω = 0, gain k at ω = ∞, phase dip of about -6 deg near ω_gc.]
Figure 9.15 Nyquist plot of the lag controller.

[Figure: Nyquist plot; gain k at ω = 0, gain kα at ω = ∞, maximum phase at ω = ω_gc.]
Figure 9.16 Nyquist plot of the lead controller.

[Figure: Nyquist plots; gain k_d(1 + k_g) at ω = 0, gain k_d α at ω = ∞, maximum phase at ω = ω_gc; both relative orderings of the two gains are shown.]
Figure 9.17 Nyquist plot of the lead-lag controller, two possibilities.

frequency, where the gain is k and the phase is zero. At the frequency ω_gc the phase is almost -6 deg in most cases, which is the phase drop caused by the lag control. The Nyquist plot of the lead controller is given in Fig. 9.16. Recall that the controller is given by k(αTs + 1)/(Ts + 1). At ω = 0 the gain of the controller is k. By increasing the frequency the phase becomes positive and increases, and at ω_gc it is at its maximum. Beyond this frequency the phase decreases so that at ω = ∞ it becomes zero, where the gain is kα. The Nyquist plot of a lead-lag controller is illustrated in Fig. 9.17. The controller is given by k_d(αT_d s + 1)/(T_d s + 1) x [1 + k_g/(T_d s + 1)]. At ω = 0 the gain of the controller is k_d(1 + k_g). By increasing the frequency the phase becomes negative and decreases (increases in absolute value). Beyond a certain frequency it increases (towards zero) and at a certain frequency the phase becomes zero. In this range the controller has acted as a lag term. Beyond this frequency the phase becomes positive and the controller behaves as a lead control. At ω_gc the phase is at its maximum. Beyond that the phase decreases and finally at ω = ∞ it reduces to zero. The gain of the controller is k_d α at this frequency. Both cases k_d α > k_d(1 + k_g) and k_d α < k_d(1 + k_g) are shown. (Question: Which one is probable, or possible at all?) The "expected" Nyquist plot of a lead-lag controller with two lead terms is provided in Fig. 9.18. See Exercise 9.28. The term "expected" refers to the point that


[Figure: Nyquist plot; gain k_d1 k_d2 (1 + k_g) at ω = 0, gain k_d1 α_1 k_d2 α_2 at ω = ∞, maximum phase at ω = ω_gc.]
Figure 9.18 Expected Nyquist plot of a lag-lead controller with two leads and some 120 deg phase contribution.

[Figure: a Nyquist plot with arrows indicating the directions in which the P-, I-, and D-terms move the plot.]
Figure 9.19 Effect of the P-, I-, and D-term on the Nyquist plot.

we may expect to have α_1 α_2 > 1 + k_g, especially if there are even more than two lead terms, i.e., α_1 ... α_n > 1 + k_g. The term 20 log k_d1 k_d2 (1 + k_g) is the zero-frequency gain. By increasing the frequency the phase will first go negative, and the controller acts as a lag term. Beyond a certain frequency the phase becomes positive, and the controller has the effect of a lead term. At the frequency ω_gc the controller makes its maximum phase contribution. The term 20 log k_d1 α_1 k_d2 α_2 is its infinite-frequency gain.
Question 9.3: What are the Nyquist plots of PI, PD, and PID controllers?
Finally, we wrap up this subsection by illustrating the effect of a P-, I-, and D-term on a typical Nyquist plot in Fig. 9.19. The effect of PI and PD controllers can thus be easily deduced by vector summation. (How?) As for PID, the situation is a little different: at low frequencies a PID acts as a PI and at high frequencies it acts as a PD.
Question 9.4: What is the effect of lead, lag, and lead-lag controllers on the Nyquist plot?
Question 9.5: What are the controllers' structures in the KMN chart context? How do they affect a given plot in this context?
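The maximum-phase point that appears on the lead plots above can be located numerically. The sketch below uses the standard lead-controller relations ω_max = 1/(T sqrt(α)) and sin(φ_max) = (α - 1)/(α + 1) (stated here as assumptions), with the lead term used in Examples 9.7 and 9.10:

```python
import cmath, math

# Peak-phase point of the lead controller C(s) = k(alpha*T*s + 1)/(T*s + 1).
k, alpha, T = 1.0, 13.0, 0.0213   # the lead of Examples 9.7/9.10

def phase_deg(w):
    C = k * (alpha * T * 1j * w + 1) / (T * 1j * w + 1)
    return math.degrees(cmath.phase(C))

w_max = 1.0 / (T * math.sqrt(alpha))                          # about 13 rad/s
phi_max = math.degrees(math.asin((alpha - 1) / (alpha + 1)))  # about 59 deg
```

Note that for this lead term the peak sits at about 13 rad/s with about 59 deg of phase, consistent with the numbers quoted in Example 9.10.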


We wrap up this section by stressing that it is possible to present the controller design procedure in the Nyquist and KMN chart contexts as well. We do not do so for the sake of brevity, and because such designs are not classical in the sense that they are not really practiced. The reason is that, because frequency is the hidden parameter, such designs are quite complicated and require heavy graphical and computational work if carried out manually; they have only theoretical value, and from the viewpoint of computation they are not attractive. The fraction of researchers working in these contexts is negligible. Recall the discussions of Chapter 8.

9.5 Effect of the controllers on the root locus

The controllers have quite different effects on the root locus of the system, as will be shown on typical systems. However, the effects are similar in that the root locus is stabilized at least for bounded regions of the gain. First we consider the lag controller. Because its pole and zero are small and near each other, its effect on the shape of the root locus is negligible except in the vicinity of its pole and zero. More precisely, the lag control does not have much effect on the root locus for large s: when |s| >> (k_g + 1)/T > 1/T, then α ≈ β, i.e., the phase contribution of the lag term at that point is negligible; see Fig. 9.20. In other words, it does not much affect "the angle condition" of the root locus, and thus, because of Remark 5.4, the lag term does not much affect the root locus for large s. However, for small s it does change the root locus. Its effect is^9 making a circle-like branch around its zero and pole, as shown for the system of Motivating Example 9.4 in Fig. 9.21. The middle and right panels refer to case (b,120) of that example. Note that the root loci are very much similar except around the pole-zero location of the lag controller. Indeed the break-away point has moved from (K, s) = (10.6, -3.92) to (10.7, -3.88) in case (a,120) and to (12, -3.58) in case (b,120). Moreover, the jω-axis crossings change from (K, ω) = (150, ±12.247) to (148.32, ±12.17) and (138.96, ±11.79) in cases (a,120) and (b,120), respectively.

Figure 9.20 Effect of the lag controller on the root locus for large s.

^9 Question 9.6: Usually or always? See the accompanying CD on the webpage of the book to find out the answer!

Figure 9.21 Effect of the lag controller on the root locus. Left: Without lag, Middle: With lag, Right: Vicinity of the pole-zero of the lag term.
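The claim that the lag term's net angle contribution vanishes for large |s| can be illustrated numerically. A sketch with made-up lag parameters (T = 1, k_g = 9, so zero at -10 and pole at -1):

```python
import cmath, math

# Net angle contribution of the lag pole-zero pair at a test point s:
# angle(s - zero) - angle(s - pole).
T, kg = 1.0, 9.0
zero, pole = -(kg + 1.0) / T, -1.0 / T

def lag_angle_deg(s):
    return math.degrees(cmath.phase(s - zero) - cmath.phase(s - pole))

near = lag_angle_deg(complex(-3.0, 10.0))      # |s| comparable to (kg+1)/T
far = lag_angle_deg(complex(-300.0, 1000.0))   # |s| >> (kg+1)/T
```

Near the pair the contribution is tens of degrees; two orders of magnitude farther out it drops below a degree, which is why the angle condition, and hence the locus, is essentially unchanged for large s.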


Question 9.7: As observed, the value of the gain at the jω-axis crossings does not change much, or in fact becomes even a little smaller. So why is it that the system can have a higher gain and thus a lower steady-state error?
Now we consider the lead controller. It has a drastic effect on the root locus. For instance, if a (maximal) lead controller (at frequency 10) is used to stabilize the system P_1(s) = 1/[(s - 1)(s - 2)], then the root locus has the shape in the top right panel of Fig. 9.22, which is stable for large values of the gain. (We have not presented the root locus of the uncontrolled plant since it is clear. Also note that in the top left panel the phase is above the critical line in a frequency range.) However, if we use 8 (maximal) lead controllers (at frequency 10) to stabilize the system P_2(s) = 1/[(s - 1)(s - 2)(s - 3)(s - 4)(s - 5)(s - 6)], then although the root locus has the shape in the bottom right panel of Fig. 9.22, which is an acceptable one, the system is not stable, since the appropriate condition on the gain values at the jω-axis crossings is not satisfied. (Here as well we have not presented the root locus of the uncontrolled plant since it is clear. Also notice that the phase is above the critical line in a frequency range; see the bottom left panel of the same figure.) It is noteworthy that increasing the number of lead controllers to 9 does not rectify the problem. However, recall

[Figure: Bode diagrams and root loci of the two controlled systems.]
Figure 9.22 Top: System C1 P1, Left: Bode diagram, Right: Root locus, Bottom: System C2 P2, Left: Bode diagram, Right: Root locus.

that we did propose a stabilizing controller for this system in Exercise 5.28 of Chapter 5, Root Locus, and discussed it in Chapter 8.

9.6 Design procedure

In general a good design always needs repetition, with several trials and errors, since from the outset we cannot be sure that the given specifications guarantee a good transient performance. This is typical of actual design problems; it was illustrated on controller C4 of Example 9.9 and is also clearly demonstrated in the worked-out problems below. On the other hand, a system is designed for a particular input frequency if the input is a sinusoid, velocity, or acceleration, in contrast to a step input, which can be regarded as non-alternating (i.e., not a square-wave signal). With other (i.e., larger) input frequencies the system is not guaranteed to have the same performance, and indeed will not have it. Thus we should assume a frequency for the given input and base the design on it. The dependence of the design on the input frequency is especially discussed in the worked-out Problems 9.6-9.10. Recall that we have already seen it before in Chapter 4, Time Response. The trial and error mentioned in the above paragraph is mostly needed for the lead design, which is used for stabilization or bandwidth increase, where some phase is added to the system. One way to simplify the trial and error is to design the pole-zero part of the lead controller(s) for a particular frequency and then do the trial and error on the gain value. In this way, in effect we do the iteration also on the frequencies, although nonexhaustively. This is carried out in most of the worked-out problems at the end of this chapter. The design procedure which has been followed in the above examples, especially Example 9.9, is formally summarized in the subsequent procedure. This procedure is followed also in the end-of-chapter worked-out problems.
Design Procedure: The design procedure consists of three steps.
Step 1: Take care of the steady-state error requirement by adding enough integrators to the plant, and decide on the synthesis you will use: whether to accompany the integrators with some stabilizing zero(s) or not.
Step 2: To address the stabilization or bandwidth increase, the phase of the system is increased to over -180 ± 2hπ (e.g., to -130 deg), via one or several lead designs (without specifying the gain of the controller). This however does not guarantee the stability of the resultant system. Indeed, as we have already seen in previous chapters (and in Fig. 9.22, bottom row), not every crossing of the critical line or of the jω-axis in the root locus changes the stability condition of the system. So we follow Step 3.
Step 3: When we have increased the phase to over -180 ± 2hπ, we check the stability of the system by the command "allmargin", which is interpreted with the help of the root locus of the system. There are two possibilities: (1) If the system is stable for a range of the gain, then we examine different values of the gain for all system features (GM, PM, DM, BW) and the output response. It is very probable that the best value of the gain is different (either more or less) from the one specified by the usual lead design (i.e., 10^(-|P|/20)/sqrt(α)). The reason for this is

Frequency domain synthesis and design

659

that there is no guaranteed relation between a prespecified set of system features (the so-called design specification) and good transient performance. (2) If the system is not stable for any value of the gain, or the system is stable but its features and/or output are not satisfactory for any value of the gain, then we go to Step 1 and choose a different synthesis and/or change the added phase (in Step 2) and/or change the (frequency of the) lead controller, and then go to Step 3.
Note that Step 1 was automatically taken in all the above Motivating Examples, but this is not the case in many actual systems, like many of the worked-out problems. On the other hand, in many problems Step 1 of the design procedure is not unique; e.g., we may use any of 1/s^3, (s + a)/s^3, (s^2 + as + b)/s^3, (s + a)^3/s^3, etc. We refer to these as different synthesis strategies. Thus in actual industrial problems the whole procedure should be iterated also over the other possible synthesis strategies. Apart from this point, given a specific synthesis strategy, the procedure should be iterated on the location of the stabilizing zeros. In practice the controller is then chosen based on the available actuators and energy consumption constraints.^10 What we do here in the worked-out problems is that for some problems we carry out the design procedure for different synthesis strategies and compare the performance of the designs with each other (Problems 9.6-9.10), and for some we settle for one synthesis and leave it to the reader to try the other possibilities. However, we usually do not iterate on the design of the stabilizing zeros. We recall from Chapter 5, Root Locus, that smaller stabilizing zeros (usually) result in smaller control signals, but at the expense of longer tails in the output (i.e., larger settling times). We "suppose" that power consumption is a stringent requirement and thus we usually choose "average" or "small" stabilizing zeros like -0.5 and -0.1.
However, sometimes -1 results in a better performance (as in Problem 9.8). We also note that the order of the steps of the design procedure can be interchanged. Finally, it should be stressed that the problems we offer, except a few, are difficult ones and are missing in the research literature. Indeed many of them can be seen as benchmarks for controller design. We complete this section with the following Remarks.
Remark 9.14: Since the end of the 1980s, another design philosophy has emerged in terms of the sensitivity and complementary sensitivity functions S and T. We will discuss the relevant issues in the coming Chapter 10. The approach reveals several facets of control systems, known as fundamental limitations, which we have not considered yet.
Remark 9.15: It is good to have a preview of the conclusions that we shall draw after the comprehensive comparisons in Problems 9.6-9.10 as well as 9.14-9.15. These conclusions, detailed in Remarks 9.22 and 9.23, state that in general it depends on the problem whether to accompany the required integrators with stabilizing zeros or not, and that when the plant is of high order it may be helpful to cancel the stable dynamics of the plant by the controller.

^10 Note that larger control signals generally mean more power consumption, at least at peaks.


Remark 9.16: In case the plant has phase lag, which is almost always the case, and/or is unstable, for sinusoidal reference tracking we need to use lead control. If the phase lag in the output is not a concern, then the lead control just needs to stabilize the system. Otherwise, if it is desired that there be no (or only negligible) phase lag in the output, i.e., that the instantaneous error be almost zero, then the controller must provide additional phase lead to compensate for the phase lag. Two comprehensive arguments are offered in Problems 9.2 and 9.12. As we shall see, we may have to use a 2-DOF control. It is also worth stressing that, as we shall learn in Chapter 10, we may wish to use an integrator in the controller even in this case. This is discussed in the final Discussion of Problem 9.2, and the consequence is that the controller then has to supply more phase lead to the system.

9.7 Specialized design and tuning rules of PID controllers

PID controllers are the most common controllers in industry. Their usage is so widespread that through time numerous methods have been proposed for their design and tuning, especially on-line tuning. Today several specialized books are available on the tuning and the nitty-gritty of this class of controllers, and research papers are still being published on the theoretical aspects of the design. Since around the beginning of the 21st century the publication of new tuning rules for SISO systems has almost come to a standstill, but topics like tradeoffs between performance and robustness as well as tuning rules for MIMO systems are still active areas. PID controllers, and in particular PI controllers, dominate process control applications. More than 95% of the controllers in process control are PID and PI controllers. The reason for this is that the dynamics of processes are often low-order and stable, possibly with a time delay. Examples of such processes are water level in connected tanks, glassware, casting, hot rolling, paper making, etc. It can be shown that PID control suffices for such systems. Because of the intricacies of derivative-term tuning, in online self-tuning applications the derivative term is sometimes set to zero and the designer settles for the lower performance of PI controllers. However, if the design is performed offline, then the designer has enough freedom to simulate different designs and choose the best one by trial and error. In a broad sense the design and tuning methods of PID controllers can be categorized as heuristic methods, analytical methods, and optimization methods, all of which we discuss below.

9.7.1 Heuristic rules
The first experimental method for tuning the parameters of a PID controller is due to J. G. Ziegler and N. B. Nichols in 1942. By experiment and trial and error they


found a relation between the controller parameters and some specific parameters which were computed from the step response or the frequency response of the plant, where it was regarded as a black box. The systems controlled by these rules often have a large overshoot and some oscillations, because the controller parameters are often larger than necessary. In the literature these rules are sometimes referred to as aggressive rules. In fact it is easy to show that for systems whose parameters belong to a certain class the Ziegler-Nichols tuning rules result in an unstable system. Since then many researchers have tried to modify the tuning rules to obtain higher performance. Among them are K. L. Chien, A. J. Hrones, and J. B. Reswick, who proposed their rules in 1953; see the respective literature. Their rules in general result in a higher performance, but again for certain systems the performance is poor and the system may even become unstable. All in all, the methods in this category are dated and of low performance, and are no longer utilized in practice. We do not provide their details.

9.7.2 Analytical rules
Examples of the analytical methods include Cohen-Coon, loop shaping, Haalman, IMC, pole placement, direct synthesis, Skogestad IMC, fractional PID design, etc. In the sequel we discuss the pole placement, direct synthesis, and Skogestad IMC rules.

9.7.2.1 Pole placement method
The first analytical method that we consider is pole placement. In this approach the characteristic equation of the system is obtained by the inclusion of the controller transfer function. Then the controller parameters are obtained such that the characteristic equation has the desired dynamics, usually at least a desirable second-order part representing the rightmost poles of the closed-loop system. Precisely speaking, we should start with the proportional controller, and if the design objectives are not achievable with it then we should try PI and PD controllers and finally the PID controller. We present the pole placement approach by a simple example in which we discuss the details only for the PI controller.
Example 9.11: Consider the plant P(s) = b_0/(s^2 + a_1 s + a_0). Design a PI controller for this system using the pole placement approach.
The controller is of the form C(s) = K + 1/(Ts). The characteristic equation is T s^3 + T a_1 s^2 + T(a_0 + b_0 K) s + b_0 = 0. Now we choose K and T such that the closed-loop poles, in particular the pair of rightmost complex poles, are at the desired locations. If by the proper choice of K and T the design objectives


like rise time and settling time are met, we settle for this controller; otherwise we try the PD and PID. Note that the satisfaction of the design objectives must be verified by simulation.
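For a fixed desired pole pair, the matching in Example 9.11 can be carried through in closed form. A sketch with hypothetical plant numbers a_1, a_0, b_0: matching the monic characteristic polynomial s^3 + a_1 s^2 + (a_0 + b_0 K)s + b_0/T against the desired (s^2 + 2 ζ ω_n s + ω_n^2)(s + p) fixes p from a_1, after which K and T follow:

```python
# Illustrative (hypothetical) plant and desired dominant pair.
a1, a0, b0 = 3.0, 2.0, 1.0
zeta, wn = 0.7, 1.0                     # desired rightmost complex pair

p = a1 - 2 * zeta * wn                  # third (real) closed-loop pole
K = (wn**2 + 2 * zeta * wn * p - a0) / b0
T = b0 / (wn**2 * p)
```

One should check that p > 0 and that the chosen pair is indeed rightmost (here the pair has real part -0.7 against a third pole at -1.6); as the text stresses, meeting the time-domain objectives must still be verified by simulation.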

The difficulty of the approach is obvious: apart from the complexity of the equations that we have to solve, which is formidable in the general setting, there is another problem with the pole placement approach. How should we know beforehand the location of the rightmost poles of the system? And how should we know that they guarantee the design objectives when the characteristic equation is of higher order? The answer (recalling Chapter 5) is simply that we do not know for sure (see also Further Readings), although the general trend would be to choose them with a large damping ratio. There are also some complicated guidelines on how to locate the rightmost poles so as to optimize a cost function. The pole placement approach is not transparent unless we supplement it with a root locus analysis. A more tangible approach is to specify the damping ratio of the rightmost poles and resort to a root locus analysis. This is shown in the next example.
Example 9.12: Consider the plant P(s) = (s + 1)/[s(s^2 + 4s + 5)]. Design a PID controller such that the rightmost poles have ζ ≈ 0.85.
To make the problem more tangible we have a look at some possible root locus shapes around the rightmost poles; see Fig. 9.23.

Figure 9.23 Part of the possible root locus.

Note that one real zero and one integrator are due to the controller. The picture suggests that we may simply set a zero of the controller as s + 1. So there are two other parameters in the controller to determine: one real zero and one real pole. Note that we are actually digressing from the pole placement approach since, as we said above, it is not a transparent method. Our knowledge of the root locus chapter suggests choosing the rest of the controller as, e.g., (s + 2)/(s + 15) or (s + 3)/(s + 15). With the PID controller C(s) = K(s + 1)(s + 2)/[s(s + 15)] the root locus is provided in Fig. 9.24, left panel. To have ζ ≈ 0.85 (corresponding to α ≈ 31.5 deg) we click on the locus and find the gain value as K ≈ 80. The step response, with an overshoot of about M_p = 6%, is acceptable; it is provided in the right panel of the same figure (Fig. 9.24).


Figure 9.24 Example 9.12. Left: Root locus, Right: Step response.

The closed-loop poles are at -2.548, -7.47 ± j4.887, and -0.756 ± j0.464. Note that if we allow for a larger overshoot, then we can use a cheaper actuator. For instance, if we accept M_p = 20%, it suffices to choose the gain of the controller as K = 22. This corresponds to ζ ≈ 0.6 and α ≈ 55 deg for the rightmost poles. The closed-loop poles are then -13.35, -2.425 ± j1.191, and -0.397 ± j0.542.
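The reported poles for K = 80 can be checked against the characteristic equation of the loop K(s + 1)^2(s + 2)/[s^2(s + 15)(s^2 + 4s + 5)]. A short numerical sketch:

```python
# Check that the reported closed-loop poles for K = 80 satisfy
#   s^2 (s + 15)(s^2 + 4s + 5) + K (s + 1)^2 (s + 2) = 0.
# The small residuals reflect the 3-decimal rounding of the poles.
K = 80.0

def char_poly(s):
    return s**2 * (s + 15) * (s**2 + 4 * s + 5) + K * (s + 1) ** 2 * (s + 2)

poles = [-2.548, complex(-7.47, 4.887), complex(-0.756, 0.464)]
residuals = [abs(char_poly(s)) / max(abs(s) ** 5, 1.0) for s in poles]
```

All normalized residuals come out small, confirming the quoted pole locations to the stated precision.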

9.7.2.2 Direct synthesis
By direct synthesis we mean that we directly compute the controller parameters based on our design objectives. The path to the fulfillment of the design objective certainly depends on the problem, but falls under the theme of direct synthesis. It can be quite difficult, but in certain cases ingenious techniques help circumvent the bottlenecks. An example follows.
Example 9.13: Consider the plant P(s) = k e^(-sθ)/[s(τs + 1)]. Design a PI controller for it such that the desired maximum phase of the open-loop system occurs at the frequency of the desired maximum closed-loop gain of the system.
Let C(s) = K(1 + 1/(Ts)) and L(s) = C(s)P(s). Denote the respective frequency, phase, and magnitude by ω, φ, M (corresponding to L). Hence we should have d|L(jω)|/dω = 0 at the sought frequency ω, arg L(jω) = φ, and L^2(M^2 - 1) + L(2M^2 cos φ) + M^2 = 0 (recalling Problem 8.1), where L denotes |L(jω)|. The three unknowns K, T, ω should be computed from these three equations. A precise closed-form solution to these equations, if possible at all, does not seem to have a pleasing representation. However, using the approximation tan^(-1) x ≈ π/2 - 1/x for x > 1 and tan^(-1) x ≈ x for x ≤ 1, direct manipulation gives T = 16(τ + θ)/(2φ + π)^2, ω = 1/[T(τ + θ)]^(1/2), and K = [Tθ/(2k)] [(T^2 ω^6 + ω^4)/(T^2 ω^2 + 1)]^(1/2).


[Figure: the plot of L with the frequency ω, phase φ, and closed-loop magnitude M annotated.]
Figure 9.25 Illustration of the design objective of Example 9.13.

We close this part by adding that the design objective of Example 9.13 may seem bizarre at first glance, but it is meaningful. We encourage the reader to analyze it further and find its justification. We also add that in the KMN chart context it looks like Fig. 9.25. You are encouraged to provide further details about the high- and low-frequency ends of the plot.

9.7.2.3 Skogestad tuning rules
The next analytical method that we consider is due to Skogestad (2003). The method is an improvement of a former result of the same author in the IMC context and has thus been termed the SIMC method, in which the letter S stands for either Skogestad or Simple. The idea is to approximate a given model with the first-order model P(s) = k e^(-sθ)/(τ_1 s + 1) or the second-order model P(s) = k e^(-sθ)/[(τ_1 s + 1)(τ_2 s + 1)]. The controller formula is C(s) = K(1 + 1/(T_i s))(1 + T_d s) and the tuning rules are

K = (1/k) τ_1/(τ_c + θ), T_i = min{τ_1, 4τ_1/(kK)} = min{τ_1, 4(τ_c + θ)}, T_d = τ_2.   (9.1)

In the above formula τ_c is the time constant of a desirable first-order dynamics. For a fast response Skogestad recommends τ_c = θ and thus

K = τ_1/(2kθ), T_i = min{τ_1, 8θ}, T_d = τ_2.   (9.2)

He shows that his rules for PI control result in Table 9.1. Note that this PID controller is noncausal, and so the derivative term should also have a pole, say (T_d s + 1)/(s T_d/N + 1) where N is large. For the second-order system, as will be shown in an example, the performance is poorer than that of the first-order system. It should be mentioned that if we are not given the plant transfer function but a black box as the plant, then the plant parameters should be obtained from, e.g., its step response. This is shown in Fig. 9.26. The initial dead zone is θ; the final value is k; the time it takes for the output to grow to 0.63k is τ; and the initial slope is k/τ. Note that the ratio τ/θ in this figure is typical and depends on the system. The method is in the class of analytic methods, not experimental methods, since it is supported by analysis and theory. The advantage of the Skogestad method is that if we are given the plant model, then the tuning rules are readily and easily

Frequency domain synthesis and design

Table 9.1 Properties of Skogestad's tuning rules

Process    k e^(−θs)/(τ1 s + 1)    k' e^(−θs)/s
K          τ1/(2kθ)                1/(2k'θ)
Ti         τ1                      8θ
GM         3.14                    2.69
PM         61.4°                   46.9°
DM         2.14                    1.59
MS         1.59                    1.7
MT         1                       1.3
ωpc·θ      1.57                    1.49
ωgc·θ      0.5                     0.5

Adopted from Skogestad 2003, with permission.

Figure 9.26 Computation of the model parameters. Adopted from Skogestad 2003, with permission.

read from the model (with a probable redesign, as will shortly be explained). They do not need much computation. On the other hand, the properties are excellent, comparable to and sometimes better than the properties of optimization-based methods that specifically optimize some design parameters. The superiority of this method over optimization-based methods may sound strange, but it is correct and justifiable, as will be explained in Remarks 9.19 and 9.20 of the next section. The paper (Skogestad, 2003) discusses several other details. For instance:
1. If we require a slow response with acceptable input disturbance rejection we should set K ≥ (magnitude of the disturbance)/(allowed output deviation). After deciding a value for K (like the minimum of the above), from (9.1) we back-calculate τc, whence we find Ti. Moreover, if the minimum of K in this formula is larger than τ1/(2kθ), then the system is not effectively controllable by PID control.
2. If the plant output or the control signal of the controller designed by (9.2) is oscillatory with large peaks (i.e., an unnecessarily fast response), and if we can accept a slower output response, then we design the controller according to item (1) above. This results in smoother output and control signals.

Note that the best value of K in item (1) should be found by trial and error. Moreover the minimum is not tight for a step disturbance since the condition is derived for a sinusoidal disturbance. For a step disturbance a smaller value may also be acceptable. The design procedure as well as the above items are best explained in an example.
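The tuning rules (9.1) and (9.2) are mechanical enough to be written down directly. A minimal sketch in Python (the function name and defaults are ours, not Skogestad's):

```python
def simc_pid(k, theta, tau1, tau2=0.0, tau_c=None):
    """SIMC rules (9.1) for the model k*e^(-theta*s)/((tau1*s+1)(tau2*s+1)).

    tau_c is the desired closed-loop time constant; tau_c = theta (the
    'fast response' choice) recovers the rules in (9.2).
    """
    if tau_c is None:
        tau_c = theta                      # Skogestad's fast-response choice
    K = tau1 / (k * (tau_c + theta))       # proportional gain
    Ti = min(tau1, 4.0 * (tau_c + theta))  # integral time
    Td = tau2                              # derivative time
    return K, Ti, Td
```

For the plant of Example 9.14 below (k = 5, θ = 0.3, τ1 = 7, τ2 = 2) this returns K ≈ 2.33, Ti = 2.4, Td = 2, matching the hand computation.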


Introduction to Linear Control Systems

Example 9.14: Design a PID controller for the plant P(s) = 5e^(−0.3s)/((7s + 1)(2s + 1)).
We have k = 5, θ = 0.3, τ1 = 7, τ2 = 2. We choose τc = θ = 0.3 and thus K = τ1/(2kθ) = 7/(2 × 5 × 0.3) = 2.33, Ti = min{τ1, 8θ} = 8θ = 2.4, and Td = τ2 = 2. We also choose N = 10. The simulation of the system is given in Fig. 9.27 by the solid line, Controller C1. Note that an input disturbance of magnitude 0.5 enters the system at the time t = 15 s. As observed, the output response

Figure 9.27 Example 9.14. Top row: input disturbance; left: output signal, right: control signal. Bottom row: output disturbance; left: output signal, right: control signal.

and the control signal are somewhat oscillatory with large peaks. So we redesign the controller according to item (1) above. We suppose that it is required that the output deviation be less than 1 for a disturbance of magnitude 0.5. (Note that this requirement is well met by the first design; the maximum output deviation is about 0.105.) Now from K ≥ 0.5/1 we choose the minimum, K = 0.5. Thus from (9.1) by back calculation we find τc = 7/2.5 − 0.3 = 2.5, Ti = min{τ1, 4(τc + θ)} = τ1 = 7, and Td = τ2 = 2. Again we choose N = 10. The simulation result is shown in the same figure by the dashed line, Controller C2. As observed, the output response is slower but oscillation free (well damped), with the benefit of a smoother and smaller control signal, but at the cost of worse input disturbance rejection. (Continued)


(cont'd)
For the sake of completeness the case of output disturbance is also simulated and provided in the same figure. The best value of K is clearly somewhere in between the above two cases and should be found by a tradeoff between the design objectives. We leave it to the reader to find the best value at their discretion, based on the desired performance they have in mind.
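The back-calculation of item (1), as used in this redesign, can be sketched as follows (a hypothetical helper; the names are ours):

```python
def simc_backcalc(k, theta, tau1, d_mag, y_max):
    """Item (1) redesign: choose the minimum proportional gain that keeps the
    output deviation below y_max for an input disturbance of magnitude d_mag,
    then back-calculate tau_c and Ti from the SIMC formula (9.1)."""
    K = d_mag / y_max                # minimum allowed gain
    tau_c = tau1 / (k * K) - theta   # invert K = tau1/(k*(tau_c + theta))
    Ti = min(tau1, 4.0 * (tau_c + theta))
    return K, tau_c, Ti
```

For Example 9.14 (k = 5, θ = 0.3, τ1 = 7, disturbance 0.5, allowed deviation 1) this reproduces K = 0.5, τc = 2.5, Ti = 7.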

Remark 9.17: The value of N affects both the output shape and the control signal. Its effect on the control signal is drastic. In the above example, by increasing the value of N the initial value of the control signal and its other peaks increase in magnitude. (Question: Is this a general rule?) This should also be settled by a tradeoff and trial and error.

Remark 9.18: In passing, note that a restatement of the above arguments is that it is possible to fine-tune the first design to obtain higher performance. For instance, if we decrease the proportional gain and increase the integral gain, then the performance improves. The student is encouraged to verify that, for instance, with K = 1, Ti = 6 the tracking performance is much better.

Example 9.15: Consider the plant P(s) = 2e^(−0.25s)/(6s + 1). Design a PI controller for it.
We simply use the tuning rules of Table 9.1 for this system. We have K = 6/(2 × 2 × 0.25) = 6 and Ti = τ = 6. The simulation result is provided in Fig. 9.28. In particular note that MT = 1 while the response is fast, which is an excellent property. Also note that input and output disturbances of amplitude 1 enter the system at t = 10 s and t = 20 s, respectively.

Figure 9.28 Example 9.15.


We close this section by adding that if the system has higher-order dynamics and/or zeros as well, then its dynamics should be approximated by the second-order dynamics which we considered in this section. While this problem can be cast as an optimization problem, as Skogestad mentions, it is helpful if we can read the parameters from the model easily and with negligible computation. Skogestad proposes a simple rule of thumb for reading the parameters of the second-order approximation from the parameters of the high-order plant model. The interested reader can refer to the same reference for the details. The SIMC method has been further analyzed and improved in (Grimholt and Skogestad, 2013). In particular, if the derivative term is increased by θ/3 and the term τc is halved, then the PID tuning rules result in near Pareto-optimal performance in terms of the Integral of the Absolute Error (IAE) criterion.
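As a sketch of the rule of thumb mentioned above (often called Skogestad's "half rule", here in its first-order form): the largest neglected time constant is split evenly between the retained time constant and the effective delay. This is our reading of the rule; consult Skogestad (2003) for the authoritative statement, including the treatment of zeros, which we ignore here.

```python
def half_rule_first_order(theta0, taus):
    """First-order approximation of a high-order lag model by the 'half rule'.

    theta0: original delay; taus: denominator time constants, largest first.
    The second-largest lag is split evenly between tau and the effective delay;
    all smaller lags are lumped into the delay.  Zeros are not handled here.
    """
    tau = taus[0] + (taus[1] / 2.0 if len(taus) > 1 else 0.0)
    theta = theta0 + (taus[1] / 2.0 if len(taus) > 1 else 0.0) + sum(taus[2:])
    return tau, theta
```

For instance, a plant with delay 0.1 and lags (3, 1, 0.2) is approximated by τ = 3.5 and θ = 0.8.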

9.7.3 Optimization-based rules

Examples of optimization methods comprise the Astrom-Hagglund method, LMI-based optimization methods, and the modern intelligent methods that use fuzzy logic, neural networks, etc. In the literal sense the optimization-based methods are also analytical; however, to emphasize that the controller parameters are computed by optimization of a cost function, they are separated as the third class. The techniques discussed in Section 9.7.2 under analytical methods do not involve optimization. Finally, note that designs for setpoint tracking and disturbance rejection are often done independently, where setpoint weighting is performed for good tracking performance.

The advantage of the experimental and optimization rules is that they regard the plant as a black box and work with its step response (or frequency response). As such, a model is not needed. As a general rule, which can be supported by theory, a system which has essentially second-order dynamics, perhaps with time delay, can be controlled by PID. Reference (Firouzbahrami and Nobakhti, 2016) addresses a more general problem in this direction. Thus, high-order systems and nonlinear systems which have, more or less, a step response like that of second-order systems are easily addressed without having any model of them.

Among the best tuning rules are the recent optimization-based ones proposed since 1990. The early experimental rules were based on trial and error. Needless to say, these methods cannot guarantee high performance for all plants; indeed they are almost always somewhat poor. On the contrary, recent optimization-based methods guarantee some level of performance, as it is explicitly considered as the design objective. They can be categorized into two main classes: (1) In the first class of optimization-based tuning rules the same parameters as in the Ziegler-Nichols-based rules are computed in the same way. Then the controller parameters which are based on them are found by optimization so as to meet some robustness/sensitivity considerations for a large class of plant models, like first-order through fourth-order models with some typical delay times. In particular a maximum sensitivity peak of Smax := max_ω |S| = 2, 1.5, 1.2, etc. is usually guaranteed by the controller. As you will study in Chapter 10, the GM and PM are in direct relation with Smax and thus at least indirectly


they are considered as two design objectives. (2) In the second class some other parameters (different from those of the Ziegler-Nichols-based rules) are computed from the step response (or frequency response) and the controller parameters are based on them. Among the best optimization-based tuning rules are those of (Astrom and Hagglund, 2006; Hast et al., 2013).

Remark 9.19: The tuning rules are important in online tuning of the controller parameters, where the step response test is performed online during normal operation of the plant and is analyzed by software, whence some specific parameters are determined on which the controller parameters are based. For offline applications, such as those which we have in this course, we compute these parameters and then use either of the aforementioned tuning rules as the starting point of a trial-and-error procedure which we perform for fine tuning. It should be noted that by fine tuning, almost every controller which is the result of the recent optimization-based rules can be improved. They are optimal for a large class of plant models and not for a specific single model. For specific models they are suboptimal and hence, by fine tuning, the performance can be enhanced. The difficulty of fine tuning is that there is no systematic approach for changing the controller parameters "towards an optimal solution"; it is often guided by the intuition, experience, and engineering insight of the designer. It is not easy, although the general trend is as given in Remark 9.21 to follow. See also Example F.2 of Appendix F.

Remark 9.20: As stated above, the controller designed by the available optimization-based rules is suboptimal for a specific plant. As such, the analytic rule of Skogestad, which is designed for a specific plant model, does outperform them in many cases, and occasionally they provide the same level of performance. Of course it has its own weaknesses, as were discussed.
In particular, for the second-order model the counterpart of Table 9.1 is provided in the same reference with acceptable features. However, it uses the ideal PID; with the actual PID the system features are different. (We encourage the reader to verify this, at least by simulation!) Recall that this is in line with what we have already said in Chapters 4 and 5: when there is an apparently unimportant larger pole, we had better consider it from the outset; its inclusion in the final step is not wise (Remark 5.30).

Remark 9.21: We close this section on PID by summarizing, in Table 9.2, the effect of independent tuning of the parameters K, Ti, Td of the PID controller C(s) = K(1 + 1/(Ti s) + Td s) on the properties of the closed-loop system.

Table 9.2 General effect of independent tuning of parameters

               tr         Mp         ts         ess           Stability
Increase K     Decrease   Increase   Increase   Decrease      Degrade
Decrease Ti    Decrease   Increase   Increase   Decrease      Degrade
Increase Td    Decrease   Decrease   Decrease   Minor change  Improve


Needless to say, the exact effect depends on the specific problem at hand, especially with regard to Td. Indeed, it is good to know that the difficulty of tuning the derivative term has encouraged many practitioners to set it to zero (or fix it at a certain value) and tune only the PI part of the controller.
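The effect of the filter constant N discussed in Remark 9.17 is easy to experiment with in a minimal discrete-time PID. The sketch below is a hypothetical implementation (backward-difference discretization, no anti-windup or bumpless transfer), not an industrial one:

```python
class PID:
    """Discrete PID K*(1 + 1/(Ti*s) + Td*s) with the derivative filtered
    through a first-order lag of time constant Td/N, as in Remark 9.17."""

    def __init__(self, K, Ti, Td, N=10.0, dt=0.01):
        self.K, self.Ti, self.Td, self.N, self.dt = K, Ti, Td, N, dt
        self.integral = 0.0
        self.d_state = 0.0   # filtered derivative of the error
        self.e_prev = 0.0

    def step(self, e):
        self.integral += e * self.dt
        if self.Td > 0:
            # d_state tracks de/dt through a lag of Td/N (backward Euler)
            tau_f = self.Td / self.N
            a = tau_f / (tau_f + self.dt)
            self.d_state = a * self.d_state + (1 - a) * (e - self.e_prev) / self.dt
        self.e_prev = e
        return self.K * (e + self.integral / self.Ti + self.Td * self.d_state)
```

Larger N makes the lag Td/N shorter, so the derivative term reacts harder to abrupt error changes, which is consistent with the larger control-signal peaks observed in the example.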

9.8 Internal model control

The IMC structure was another scheme that we learned in Chapter 1, Introduction. Special treatment is needed when the plant has an NMP zero and/or a time delay. These situations, as well as a study of the type of the system, are addressed in (Rivera et al., 1986; Morari and Zafiriou, 1989). We do not go into the details of such systems and simply present and analyze an example which is delay free and MP. In the same references it is shown that the controllers designed in this scheme for a large body of process control models are in the form of PID controllers. Tuning rules are also presented. The updated version of the aforementioned rules is Skogestad's SIMC rule, which we presented in Section 9.7.2.3.

Example 9.16: Consider the plant P(s) = 1/(s + 2). The controller C(s) = 1.5/s is designed for this system in the standard 1-DOF control structure. In the IMC structure the corresponding controller Q is Q(s) = 1.5(s + 2)/(s(s + 2) + 1.5). The result of the simulation is depicted in Fig. 9.29 by the solid curve. Three different situations are discussed in the following: (1) Due to environmental conditions etc. the plant

Figure 9.29 Example 9.16. Solid: Perfect internal model, Dashed: Plant variation, Dash-dotted: Imperfect internal model.

(Continued)


(cont'd)
changes to another plant like P' = 0.9/(s + 2.1), but the controller Q(s) and the internal model P0 = 1/(s + 2) are implemented perfectly, i.e., as computed before. (2) The internal model that we use is not implemented perfectly, e.g., it is implemented as P'' = 0.9/(s + 2.1), but the controller is implemented as computed. In the first case the transient response of the system will be different, but the task of control, which was step tracking, is accomplished; it is shown by the dashed curve. In the second case the control objective of step tracking is not achieved; it is illustrated by the dash-dotted curve (Fig. 9.29). (3) The third case that is worth considering is this: What if the original internal model that we have of the system, and on which we base the design of Q, is not perfect (but both of them are implemented as computed)? For instance P'' = 0.8/(s + 2.5) and Q'(s) = 1.5(s + 2.5)/(s(s + 2.5) + 1.5 × 0.8). In this case the response is the same as in the case of a perfect internal model! The outputs exactly match each other at all times; see Exercise 9.35 as well as the accompanying CD on the webpage of the book.

We close this section by adding that cases (1) and (3) discussed above are probable in practice. It is an advantage of the method that the control system works satisfactorily in these cases. Case (2) must be, and can be (to a great extent), avoided by precise digital implementation of the control system. Finally, note that, as we mentioned in Chapter 1, Introduction, among the advantages of the IMC method are: (1) It simply allows a parameterization of all stabilizing controllers; (2) The controllers designed for many plant models are automatically in the PID structure, and thus easily implementable and tunable; (3) Its extensions for multiinput multioutput systems as well as for some nonlinear systems are straightforward; and (4) In the case of actuator limitations there is no need for special antiwindup measures if the bounded control signal (and not the computed control signal) is also fed to the model. The interested reader is referred to the pertinent literature for further details.
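The correspondence Q = C/(1 + CP) used in Example 9.16 can be verified numerically at a few points of the complex plane:

```python
# Numeric check that the IMC controller Q = C/(1 + C*P) of Example 9.16
# equals the closed form Q(s) = 1.5(s+2)/(s(s+2)+1.5).
P = lambda s: 1.0 / (s + 2.0)          # plant of Example 9.16
C = lambda s: 1.5 / s                  # 1-DOF controller
Q = lambda s: C(s) / (1.0 + C(s) * P(s))
Q_closed = lambda s: 1.5 * (s + 2.0) / (s * (s + 2.0) + 1.5)

for s in (1j * 0.5, 1j * 4.0, 1.0 + 2j):
    assert abs(Q(s) - Q_closed(s)) < 1e-12
```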

9.9 The Smith predictor

The classical results of Sections 9.2 and 9.3 assume that the system is delay free. For delayed systems which are open-loop stable there is a specialized controller called the Smith predictor, as explained in Section 1.13. Recall that poles on the jω-axis, even simple poles, make the system unstable.


Example 9.17: Consider the system P(s) = 25e^(−0.5s)/((s + 10)(s + 15)). It is desired to design a step tracking controller so that the delay-free system has PM = 60 deg at the gain crossover frequency ωgc = 10 rad/s.
Note that the plant is the same as that of Example 9.2 except that it has no integrator and has a delay term. Thus for the system P0(s) = 25/((s + 10)(s + 15)) we can use the controller C'(s) = 38.7(7.1536 × 0.0374s + 1)/(s(0.0374s + 1)) of that example. Denote the output of the system T' = C'P0/(1 + C'P0) by z. The outputs of the systems are plotted in Fig. 9.30. As expected we observe that y(t) = z(t − 0.5).

Figure 9.30 Example 9.17.

Question 9.8: What is the relation between the system features for the actual delayed system and the delay-free system? If they are different, how should we take care of it?
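The identity behind the observation y(t) = z(t − θ), namely that the Smith predictor loop equals the delay-free closed loop followed by a pure delay, can be checked numerically. The plant, controller, and delay below are illustrative placeholders, not those of Example 9.17:

```python
import cmath

theta = 0.5
P0 = lambda s: 1.0 / (s + 1.0)                 # delay-free model (illustrative)
P = lambda s: P0(s) * cmath.exp(-theta * s)    # actual delayed plant
Cp = lambda s: 2.0 + 1.0 / s                   # controller designed for P0 (illustrative PI)

def C_smith(s):
    """Equivalent controller of the Smith predictor structure:
    C'/(1 + C'*(P0 - P0*e^(-theta*s)))."""
    return Cp(s) / (1.0 + Cp(s) * (P0(s) - P(s)))

for s in (1j * 0.3, 1j * 2.0, 0.5 + 1j):
    T = C_smith(s) * P(s) / (1.0 + C_smith(s) * P(s))   # actual closed loop
    T0 = Cp(s) * P0(s) / (1.0 + Cp(s) * P0(s))          # delay-free closed loop
    # T(s) = T0(s) * e^(-theta*s), i.e. y(t) = z(t - theta)
    assert abs(T - T0 * cmath.exp(-theta * s)) < 1e-9
```

The assertion is exactly the algebraic identity T = C'P0 e^(−θs)/(1 + C'P0), obtained by substituting the Smith controller into the standard closed-loop formula.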

9.10 Implementation with operational amplifiers

9.10.1 Proportional control—P-term
The circuit of Fig. 9.31 may be used, in which Vo/Vi = (R4/R3)(R2/R1).

9.10.2 Integral control—I-term
We may use the circuit in Fig. 9.32, where Vo/Vi = (R4/R3) · 1/(R1 C2 s).


Figure 9.31 Proportional controller.

Figure 9.32 Integral controller.

Figure 9.33 Proportional-integral controller.

9.10.3 Proportional-integral—PI-term
One can use the circuit of Fig. 9.33, in which Vo/Vi = (R4/R3)(R2/R1)(1 + 1/(R2 C2 s)).

9.10.4 Proportional-derivative—PD-term
The circuit of Fig. 9.34 is proposed for this controller, in which Vo/Vi = (R4/R3)(R2/R1)(1 + R1 C1 s).

9.10.5 Nonideal/actual derivative—D-term
One can implement the circuit of Fig. 9.35, where Vo/Vi = (R4/R3)(R2/R1) · s/(s + 1/(R2 C2)).

9.10.6 Series proportional-integral-derivative—Series PID
We can use the circuit of Fig. 9.36 to realize this controller, in which
Vo/Vi = (R4/R3)(R2/R1)(1 + 1/(R2 C2 s))(1 + R1 C1 s) = (R4/R3)(R2/R1)(1 + R1C1/(R2C2) + 1/(R2 C2 s) + R1 C1 s).
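The expansion of the series form above can be checked numerically (the component values are arbitrary placeholders):

```python
# Numerical check of the series-PID identity:
# (1 + 1/(R2*C2*s)) * (1 + R1*C1*s)
#   == 1 + R1*C1/(R2*C2) + 1/(R2*C2*s) + R1*C1*s
R1, C1, R2, C2 = 1.0e4, 1.0e-6, 2.2e4, 4.7e-7   # arbitrary component values

for s in (1j * 10.0, 1j * 1000.0, 50.0 + 200j):
    series = (1 + 1 / (R2 * C2 * s)) * (1 + R1 * C1 * s)
    parallel = 1 + R1 * C1 / (R2 * C2) + 1 / (R2 * C2 * s) + R1 * C1 * s
    assert abs(series - parallel) < 1e-9 * abs(parallel)
```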


Figure 9.34 Proportional-derivative controller.

Figure 9.35 Actual derivative controller.

Figure 9.36 Series proportional-integral-derivative.

Figure 9.37 Lead controller.

9.10.7 Lead
One may implement the circuit of Fig. 9.37 for this controller, in which Vo/Vi = (R4/R3) · ((R1 + R2)C1 s + 1)/(R1 C1 s + 1).

9.10.8 Lag
We can use the circuit of Fig. 9.38, in which Vo/Vi = (R4/R3) · (R1 C1 s + 1)/((R1 + R2)C1 s + 1).

Figure 9.38 Lag controller.

Figure 9.39 Lead or lag controller.

Figure 9.40 Lead-lag controller.

9.10.9 Lead or lag
The proposed circuit of Fig. 9.39 acts as either a lead or a lag controller depending on the parameter values. In this circuit there holds Vo/Vi = (R4/R3)(R2/R1) · (R1 C1 s + 1)/(R2 C2 s + 1). If R1C1 > R2C2 it is a lead controller, and if R2C2 > R1C1 it is a lag controller.
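The lead/lag condition can be confirmed by evaluating the phase of (R1C1 s + 1)/(R2C2 s + 1) along the jω-axis (the values below are arbitrary placeholders):

```python
import math
import cmath

def phase_deg(R1, C1, R2, C2, w):
    """Phase of (R1*C1*j*w + 1)/(R2*C2*j*w + 1) in degrees."""
    return math.degrees(cmath.phase((R1 * C1 * 1j * w + 1) / (R2 * C2 * 1j * w + 1)))

# R1*C1 > R2*C2: phase is positive at every frequency (lead);
# R1*C1 < R2*C2: phase is negative at every frequency (lag).
for w in (0.1, 1.0, 10.0, 100.0):
    assert phase_deg(10.0, 1.0, 1.0, 1.0, w) > 0   # R1C1 = 10 > R2C2 = 1 -> lead
    assert phase_deg(1.0, 1.0, 10.0, 1.0, w) < 0   # R1C1 = 1 < R2C2 = 10 -> lag
```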

9.10.10 Lead-lag
The circuit of Fig. 9.40 serves this purpose, in which Vo/Vi = [((R1 + R2)C1 s + 1)/(R1 C1 s + 1)] · [(R3 C2 s + 1)/((R3 + R4)C2 s + 1)].

Question 9.9: As you know, a noncausal transfer function is not realizable. However, here we claim a realization of the ideal PD controller. How do you justify this?

9.11 Summary

Having learned the fundamental features and specifications of a control system in previous chapters, along with the tools for analyzing them, in this chapter we have studied the basic controller synthesis and design methods in the frequency domain.


Specifically, we have studied the lead, lag, and lead-lag control structures, and their simplifications as PD, PI, and PID controllers. We have learned how these controllers affect the system features and model in the root locus, Bode diagram, and Nyquist plot contexts. Designing a controller for a plant starts with specifying some design specifications. The design specifications are often the steady-state error and some GM, PM, DM, and BW, along with a good transient time response. This process is not definitive and transparent, since there is no guaranteed relation between the first four of the aforementioned requirements and the last one. The problem gets even more complicated when the bandwidth (for reference inputs which are alternating) and the magnitude of the control signal are also among the design objectives. On the other hand, the design objectives are in general (partly) conflicting. As a result, in practice there are always some redesigns and trials and errors to find overall satisfactory specifications which are a tradeoff among all the objectives. Indeed, in modern industrial applications the process is an optimization one, as mentioned in Chapter 1, Introduction. Through numerous examples and worked-out problems we have demonstrated how the above procedure can simply be done by hand in this first undergraduate course. We have also overviewed the specialized design and tuning methods of PID controllers, which are the most common controllers in industry. Several of these methods have been discussed with examples. Finally, the internal model control, the Smith predictor, and the implementation of the controllers by operational amplifiers have also been discussed. Numerous worked-out problems at the end of the chapter boost the learning of the subjects.

9.12 Notes and further readings

1. As we have mentioned in the text, it is possible to perform the controller design (lead, lag, lead-lag, PID) also in the contexts of the Nyquist plot, KMN chart, and root locus. Some pertinent references are (Chen and Seborg, 2003; Ivezic and Petrovic, 2003; Zanasi and Cuoghi, 2012; Zanasi et al., 2011; Fang, 2010; Dincel and Soylemez, 2014).
2. Design of multilead compensators for double-integrator networks is considered in (Wan et al., 2010).
3. By the design procedure we are implicitly designing the transfer function of the system, given by T(s) = C(s)P(s)/(1 + C(s)P(s)). The transfer function is nonaffine and nonconvex in the controller. In a larger framework, recently some researchers have provided alternative problem formulations (from the standard H2 etc.) with arbitrary constraints on the controller in which the design is a convex problem. This of course needs advanced knowledge of the state-space method and matrix analysis. You are encouraged to have a look at such works as (Dvijotham et al., 2015) and its bibliography at the right time in the future.
4. The materials of Section 1.7 are extracted from (Bavafa-Toosi, 1999) and the cited references. Further results can be found in books like (Yu, 2006). Extensions of the classical PID control like the delayed PI, the MIMO PID, the nonlinear PID, the fractional PID, the PID-plus-second-derivative, and the adaptive PID have also been introduced and studied in the literature; see e.g., (Badri and Tavazoei, 2016; Boyd et al., 2016; Cai et al., 2003; Li et al., 2016; Mouayad, 2015; Porter and Khaki-Sedigh, 1989; Ramirez et al., 2016; Shariati et al., 2014; Shiota and Ohmori, 2012, 2013; Su et al., 2005; Tamura and Ohmori, 2007; Tang and Li, 2015; Tavakoli and Banookh, 2010; Tyreus and Luyben, 1992; Zarre et al., 2002).
5. Some critical issues are associated with PID controllers, namely the integrator windup, derivative kick, derivative backoff, and bumpless transfer. The derivative backoff phenomenon has been discovered and introduced in (Theorin and Hagglund, 2015). Performance limitations are studied in various articles like (Garpinger et al., 2014).
6. For modern implementation of a PID controller via a Field Programmable Gate Array (FPGA), readers of electrical engineering can consult (Chan et al., 2007).
7. The article (Liu et al., 2015) presents new results on pole placement by PID controllers. Moreover, (Nikita and Chidambaram, 2016; Jin et al., 2016) present new results on the stability of unstable time delay systems.
8. The original Smith predictor does not apply to unstable plants. A brief summary of its extensions can be found in the further readings of Chapter 1, Introduction.
9. Details of the IMC extensions can be found in the literature, like the ones introduced in the further readings of Chapter 1, Introduction.
10. Since the late 20th century, random optimization techniques like genetic algorithms have been extensively used in the design of control systems, see e.g., (Bavafa-Toosi, 2000; Bavafa-Toosi et al., 2005, 2006; Chan et al., 2002; Tavakoli and Banookh, 2010; Tempo et al., 2013). Appendix F is on genetic algorithms, which are used in Chapter 10 for the solution of some problems.
11. A follow-up to the classical stabilization and control problem is that of constrained stabilization and control, where there are constraints either on the states, control signal, sensor, or some/all of these. Most of the available results in this field use advanced state-space techniques and/or require graduate knowledge of e.g., intelligent/neural control. Some relevant references were introduced in the bibliography of Chapter 1, Introduction. See also (Esfandiari et al., 2014). In Section 10.8 of Chapter 10 we shall provide more details. Development of the counterparts of the results of this chapter for DAE, fractional-order, positive, constrained, multiscale, etc. systems is desirable.

9.13 Worked-out problems

Most actual control systems are type 1 and work with the step reference input. Recall from Example 4.24 that this is the only case in which the sensor dynamics can be included after the design, provided that it does not overrule the stability. For brevity we consider the sensor dynamics and the effect of delay only in the worked-out Problems 9.2 and 9.5, which are type 0 and 1, respectively. Moreover, we consider sinusoidal inputs in Problems 9.2 and 9.12 in a comprehensive setting. Other problems are devoted to step, ramp, and parabolic inputs, where the last is usually more intricate than the others and requires a more careful design. Finally, we should add that we usually do not fix a desired pair of GM and PM in advance; rather, we find acceptable values during the design process in a tradeoff against the transient response. In actual industrial problems, upper bounds on the magnitude and rate of the control signal are also part of the design objectives


and are guaranteed in an optimization framework. See Chapter 10 for more details. Here we only compare and interpret the control signals for a few of the problems.

Problem 9.1: In brief, explain the effect of lead, lag, and lead-lag controllers.
The lag controller is for reducing the steady-state error through increasing the dc gain, or the position, velocity, and acceleration constants. Because it increases the loop gain it suppresses the effect of disturbances. However, it may slow down the transient response. The lead controller is for stabilizing unstable systems and speeding up the transient response. It improves the stability margins and thus the transient response. It increases the bandwidth, and thus amplifies disturbances. It may also have a small effect on reducing the steady-state error (due to its gain k). The lag-lead controller combines the above features.

Problem 9.2: Consider P(s) = 25/((s + 5)(s + 10)). (1) Design a controller so that the phase margin is PM = 50 deg and the steady-state error to the step reference input is ess = 2%. (2) Include the sensor dynamics Ps = 100/(s + 100) and the delay T = 0.15 × DM s in the forward path and observe the output. (3) Repeat part (2) with the inclusion of the same delay also in the feedback path. (4) Apply the input r(t) = sin 15t to the system designed in part (1) and analyze the response. (5) Design a controller for the plant such that it tracks the input r(t) = sin 15t with the same steady-state magnitude. (6) Design a controller for the plant such that it tracks the input r(t) = sin 15t with almost zero steady-state error.

1. First we note that the Bode phase plot of the system is near the −180 deg line from among the lines ±180(2h + 1) deg, and thus PM = 50 deg means that ∠P = −130 deg. From the Bode diagram we also understand that this happens at ω = 15.9, at which |P| = −22 dB. Thus k = 10^(22/20) = 12.59. With this value of the gain the system has GM = Inf, ωpc = Inf, PM = 49.5 deg, ωgc = 16. (Note that the slight difference is due to inexact reading of the data from the Bode diagram, which is done by clicking on the figure.) Next we design kg and T. To this end we note there should hold 1/[(25/50) × 12.59 × (1 + kg)] = 0.02, or kg = 6.94, and thus T ≈ 10kg/ωgc = 4.34. The closed-loop system has GM = Inf, ωpc = Inf, PM = 43.6 deg, ωgc = 16, BW = 25.34. Thus we do a redesign for PM = 56.5 deg. This time ω = 13.5, at which |P| = −19.8 dB, and hence k = 10^(19.8/20) = 9.77. Also (25/50) × 9.77 × (1 + kg) = 50, or kg = 9.23, and T ≈ 10 × 9.23/13.5 = 6.84. The system has GM = Inf, ωpc = Inf, PM = 50.7 deg, ωgc = 13.6, BW = 21.58. See the left panel in Fig. 9.41. (Again, the reason for the slight difference is as before.)
2. Now we include the sensor dynamics Ps = 100/(s + 100) and some delay like T = 0.15 × 0.0651 ≈ 0.01 second in the forward path and observe the output in the middle panel of the same figure.
3. In the right panel the same amount of delay is considered also in the feedback path. Note that in either case if we increase the delay, e.g., to T = 0.4 × DM s, then the deterioration in the performance will be more noticeable.
4. Now we apply the input r(t) = sin 15t to the system. The output is provided in the left panel of Fig. 9.42. We observe that the output has a larger magnitude than the input. The reason for this is that the transfer function is T(s) = (1671s + 2499)/(6.84s³ + 103.6s² + 2028s + 2549). Note that the transfer function has a DC gain almost equal to 1, but
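The readings taken from the Bode diagram in the redesign can be double-checked numerically (the tolerances reflect the graphical reading):

```python
import cmath
import math

P = lambda s: 25.0 / ((s + 5.0) * (s + 10.0))

# Target: PM = 56.5 deg, i.e. a loop phase of -(180 - 56.5) = -123.5 deg at
# crossover.  The plant reaches that phase near w = 13.5 rad/s; the gain k
# then lifts |L| to 1 (0 dB) there.
w = 13.5
phase = math.degrees(cmath.phase(P(1j * w)))
assert abs(phase - (-123.5)) < 1.0           # about -123 deg, as read graphically

k = 1.0 / abs(P(1j * w))                     # required proportional gain
assert abs(20 * math.log10(k) - 19.8) < 0.2  # about +19.8 dB, i.e. k ~ 9.77
```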

Figure 9.41 Problem 9.2, Left: Part (1), Middle: Part (2), Right: Part (3).


Figure 9.42 Problem 9.2. Left panel: Item (4), Right panel: Either of its remedies.

because the reference is a sinusoid not a step it does not mean that the output has almost the same magnitude of the input. If we want to keep the controller in place we have to modify its parameters, e.g., reduce its gain. It is possible to supply theoretical analysis but as you can guess it is rather sophisticated and computationally unappealing. We resort to trial and error. We observe that if we multiply the controller by 0:85 (found by trial and error) then the objective is achieved. Alternatively, we may apply reference input weighting and use rðtÞ 5 0:875 3 sin15t as the input. Note that the latter weighting is the ratio of the input peak to the output peak in the steady-state regime of the left panel of Fig. 9.42. Finally we should add that reference weighting is an instance of the 2-DOF control. (Question: How are the system features affected in either of the above approaches?) 5. First we note that the statement of the control objective means that phase lag between the output and input is acceptable, but at steady-state they must have the same magnitude. If we want to design a controller for this purpose, we note that the open-loop system is stable and has a smaller bandwidth than desired. If we wish to use closed-loop control, we first try to satisfy it by pure gain. If not possible we use a lead control to increase the bandwidth of the system. With regard to the first approach recall from Chapter 4, Time Response, that when the system has no zeros, if the bandwidth of the system is 10 times the input frequency then the input and output almost have the same magnitude. To this end we need K 5 400 and F 5 0:98, see top row of Fig. 9.43, closed-loop bandwidth is about 155 rad/s. However, increasing the bandwidth of the system to such a large value results in a poor control signal in the transient phase. Alternatively, we may accept a smaller bandwidth, use a smaller gain along with a static prefilter in the 2-DOF structure. 
For the purpose of contrast we consider the case K = 1 and F = 10.87. See the bottom row of Fig. 9.43; the closed-loop bandwidth is about 6.8 rad/s. In the steady state the control signals have almost the same magnitude. (Question: How are the system features affected in either of the above approaches?)

Discussion: Note that by increasing the bandwidth in the first design we have automatically removed the phase lag in the output, although it is not a concern in this item. In the first design the sensitivity of the system is lower than that of the second design; the same is true of its instantaneous tracking error. However, it requires a more expensive actuator. The need for a trade-off is thus obvious.
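The Chapter 4 rule of thumb invoked above (a closed-loop bandwidth about 10 times the input frequency gives near-unity steady-state magnitude) is easy to check numerically. Below is a Python/NumPy sketch using a hypothetical first-order closed loop T(s) = 1/(s/ωb + 1), not the actual system of this problem:

```python
import numpy as np

def closed_loop_mag(wb, w):
    # |T(jw)| for the assumed first-order closed loop T(s) = 1/(s/wb + 1)
    return 1.0 / np.sqrt(1.0 + (w / wb) ** 2)

# Bandwidth 10x the input frequency: amplitude error below 0.5%
m10 = closed_loop_mag(150.0, 15.0)
# Bandwidth equal to the input frequency: about 30% attenuation
m1 = closed_loop_mag(15.0, 15.0)
print(round(m10, 4), round(m1, 4))   # 0.995 0.7071
```

With higher-order loops the numbers differ somewhat, but the trend (and hence the need for either a large gain or a prefilter) is the same.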

Frequency domain synthesis and design

Figure 9.43 Problem 9.2. Item (5), Top row: K = 400 and F = 0.98. Bottom row: K = 1 and F = 10.87.

(The student is encouraged to insert a saturation block like [−13, 13] on the control signal of the first design and observe its effect in the SIMULINK environment.) 6. Here note that the statement of the control objective means that the phase lag between the output and input should be negligible, and at steady state the instantaneous error should be almost zero. To this end we have to increase the bandwidth of the system, for which we have two possibilities: (a) increasing the BW by a 1-DOF structure inside the loop, or (b) increasing the BW by a 2-DOF structure as we had in Section 4.13.2.1. For item (a) we have two options: (a1) One way to achieve this was offered in item (5). (a2) We design a lead control to compensate for the phase lag and increase the bandwidth. We try different controllers. With the controller C(s) = K(13.93 × 0.0267s + 1)²/(0.0267s + 1)² with K = 10 and F = 1.04 we get the answer in the left column of Fig. 9.44. In the transient phase the output shows high-frequency oscillations, as given in the middle panel of the same figure. Although the oscillation looks mild, and in particular the magnitude of the fluctuations or error is negligible, because of the second-order derivative in the controller the control signal is poor, as given in the right panel of the same figure. (b) We stabilize the system in a 1-DOF structure (with L(s) = C(s)P(s)) and then design the prefilter F(s) such that |F(jωr)| = |1 + L(jωr)|/|L(jωr)| and ∠F(jωr) = −φ = −∠[L(jωr)/(1 + L(jωr))], where ωr = 15 is the frequency of the reference input. To this end we define T(s) = L(s)/(1 + L(s)). At s = jωr = j15 we have |T| = 1.143954 and

Figure 9.44 Problem 9.2, Top row (6a2), Left: Input and output, Middle: Transient response, Right: Control signal. Bottom row (6b), Left: Input and output, Middle: Output of prefilter, Right: Control signal inside the loop.


∠T = −76.247775 deg. The prefilter will have two parts, F = F2F1. We decide that F1 provides 60 deg with k1 = 1. Thus F1(s) = (13.93 × 0.017863s + 1)/(0.017863s + 1). Next we design F2 such that it provides |∠T + 60| = 16.247775 deg. Hence F2(s) = k2(1.776976 × 0.050011s + 1)/(0.050011s + 1) and k2 × √1.776976 × √13.93 × 1.143954 = 1. Therefore k2 = 0.175713. Simulation results are provided in the right column of Fig. 9.44. Discussion: With controller (a2) the closed-loop bandwidth is about 356 rad/s and the tracking error is less than that of item (5). That is why it requires a larger control input. In practice a trade-off between performance and cost is needed. Also note that it is possible to verify theoretically that at steady state the instantaneous error is almost zero, but the procedure is cumbersome. However, for part (5) with K = 400 this is simple. (Provide the details of the formulation!) With controller (b) the BW of the system is infinite. On the other hand, as you will learn in the next Chapter 10, one may wish to use an integrator in the controller so as to reject possible step input (or output) disturbances. Propose a controller for it! Finally, we should add that, as observed, in steady state the control signals are almost the same. On the other hand, in (6b) we have the best control signal in the transient phase, and in (5) we have the best output response in the transient phase. The reason is their control signals: in the transient phase, in (5) the control signal is poor (i.e., needs an expensive actuator) and in (6b) the control signal is nice (needs a cheaper actuator). A compromise may thus be needed depending on the application.
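The prefilter computation in (6b) can be verified numerically. The Python sketch below uses only the quantities quoted above (T(j15), the two sections F1 and F2, and k2); at the reference frequency the product F(jωr)T(jωr) should come out close to 1∠0:

```python
import numpy as np

wr = 15.0                                            # reference frequency (rad/s)
T = 1.143954 * np.exp(1j * np.deg2rad(-76.247775))   # T(j15) as quoted in the text

def section(k, a, tau, w):
    # First-order section k*(a*tau*s + 1)/(tau*s + 1) evaluated at s = jw
    s = 1j * w
    return k * (a * tau * s + 1) / (tau * s + 1)

F1 = section(1.0, 13.93, 0.017863, wr)               # contributes 60 deg at wr
F2 = section(0.175713, 1.776976, 0.050011, wr)       # contributes the remaining 16.25 deg
FT = F1 * F2 * T

# Magnitude ~1 and phase ~0 deg: the prefilter inverts T at the reference frequency
print(round(abs(FT), 4), round(np.rad2deg(np.angle(FT)), 3))
```

Any residual deviation is the rounding of the quoted constants; with exact values the product is exactly 1.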

Question 9.10: We observe that at steady state the control signals are almost the same. A detailed comparison of the system features in all the cases, as well as issues like control signal energy and power consumption, in a general theoretical development would be desirable. For instance, note that at steady state (at the reference frequency) there holds |CPF| = |1 + CP|. What choice of C is optimal (in an appropriate sense)? Various other issues like the effect of C on the transient response are also important. Problem 9.3: Given P(s) = 25/((s + 5)(s + 10)). Design a controller so that the closed-loop BW is at least about BW = 6 rad/s and the steady-state error to the velocity reference input is ess = 2%. We note that the controller should have the term 1/s. With this controller in place we first add a proportional controller such that the closed-loop BW is about 6. This must be done by trial and error and direct verification. As a rule of thumb, we start by choosing the gain such that BW = 6 rad/s lies between ωpc and ωgc. By choosing k = 10 the system has GM = 9.54 dB, ωpc = 7.07, PM = 32.6 deg, ωgc = 3.75, BW = 6.31. The stability margins of the system are acceptable. (If we are not satisfied we have to use a lead control to improve them.) Now we design a lag control for the ess requirement. There should hold (25/50) × 10 × (1 + kg) = 50, or kg = 9, and T ≈ 10kg/ωgc = 24. The resultant system has GM = 8.5 dB, ωpc = 6.66, PM = 26.7 deg, ωgc = 3.76, BW = 6.32. For two given inputs the input and output of this latter design are given in Fig. 9.45. Discussion: As we have stated before, a design process always needs repetition, since from the outset we cannot be sure that the given specifications warrant a good performance. This is well demonstrated in this example; we try to enhance
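The quoted margins of the first design can be reproduced numerically. A Python/NumPy sketch follows (the book itself works in MATLAB; this grid-based margin search is an approximation, not the book's code) for L(s) = 10 × (1/s) × 25/((s + 5)(s + 10)):

```python
import numpy as np

def L(w):
    # Open loop of the first design: k = 10 times integrator times plant
    s = 1j * w
    return 250.0 / (s * (s + 5) * (s + 10))

w = np.logspace(-1, 2, 200000)
Lw = L(w)
mag = np.abs(Lw)
ph = np.rad2deg(np.unwrap(np.angle(Lw)))     # phase starts near -90 deg

wpc = w[np.argmin(np.abs(ph + 180.0))]       # phase-crossover frequency
wgc = w[np.argmin(np.abs(mag - 1.0))]        # gain-crossover frequency
GM_dB = -20.0 * np.log10(np.abs(L(wpc)))
PM_deg = 180.0 + np.rad2deg(np.angle(L(wgc)))

print(round(GM_dB, 2), round(wpc, 2), round(PM_deg, 1), round(wgc, 2))
# close to the quoted GM = 9.54 dB, wpc = 7.07, PM = 32.6 deg, wgc = 3.75
```

The phase crossover can also be found in closed form here: ∠L = −180 deg requires arctan(ω/5) + arctan(ω/10) = 90 deg, i.e., ω² = 50, so ωpc = √50 ≈ 7.07 and |L(jωpc)| = 1/3, i.e., GM = 9.54 dB.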


Introduction to Linear Control Systems
Figure 9.45 Problem 9.3. Top left: First design, Top right: First design, Bottom left: Second design, Bottom right: Third design.

the result. Now in the second design we design a maximal lead controller at the frequency ω = 6, and then a lag controller. The controller is (1/s) × 6 × (13.93 × 0.0447s + 1)/(0.0447s + 1) × (26.12s + 16.67)/(26.12s + 1). The system has GM = 14.1 dB, ωpc = 17.6, PM = 61.7 deg, ωgc = 6.18, BW = 11.34. The output is given in the same figure. In the third design we design a maximal lead controller at ω = 10, and then a lag controller. The controller is (1/s) × 16.91 × (13.93 × 0.0267s + 1)/(0.0267s + 1) × (4.91s + 5.91)/(4.91s + 1). The system has GM = 11 dB, ωpc = 20.8, PM = 35.8 deg, ωgc = 10, BW = 16.92. The output is given in the same figure. Finally, note that it may be possible to improve the answer even further by more trial and error. Indeed, this is shown in the next example!
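The "maximal lead at a given frequency" recipe used in these designs follows the standard formulas α = (1 + sin φm)/(1 − sin φm) and T = 1/(ω√α). A short Python check of the ω = 6 section of the second design:

```python
import numpy as np

def max_lead(phi_deg, wm):
    # Lead section (a*T*s + 1)/(T*s + 1): peak phase phi_deg occurs at wm,
    # where the section also contributes a gain of sqrt(a)
    phi = np.deg2rad(phi_deg)
    a = (1 + np.sin(phi)) / (1 - np.sin(phi))
    T = 1.0 / (wm * np.sqrt(a))
    return a, T

a, T = max_lead(60.0, 6.0)
s = 6.0j
lead = (a * T * s + 1) / (T * s + 1)
print(round(a, 2), round(T, 4), round(np.rad2deg(np.angle(lead)), 1), round(abs(lead), 2))
# 13.93 0.0447 60.0 3.73 -- the alpha and T used in the text, 60 deg and sqrt(alpha) at w = 6
```

The gain factor √α ≈ 3.73 at the center frequency is what must be absorbed into the proportional gain when the crossover frequency is to be preserved.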

Problem 9.4: Given P(s) = 25/((s + 5)(s + 10)). Design a controller so that the closed-loop BW is at least about BW = 20 rad/s and the steady-state error to the velocity input is ess = 2%. The performance should be acceptable. This problem is like the previous one, except that this time if we increase the gain so as to have ωgc = 20 the system will be unstable. So we have to stabilize the system with lead control afterwards, or start increasing the BW by lead control from the beginning. We take the second approach. Our experience with the third design of the



Figure 9.46 Problem 9.4. Left: Tracking performance, Right: Its magnification.

previous example shows that we should add the lead control at a higher frequency. Since the phase of the system (including an integrator) is −230 deg at ω = 20, we use two maximal lead controllers at this frequency. Thus α = 13.93, T = 1/(20√13.93) = 0.0134, and k = 10^(51.4/20)/√13.93 = 99.55. That is, the lead control is 99.55 × ((13.93 × 0.0134s + 1)/(0.0134s + 1))². With this controller the steady-state error is almost 2%; it is 1/[25 × 99.55/50] × 100% = 2.01% and we may accept it. The response is shown in Fig. 9.46. As observed, the input and output are barely distinguishable. The system has GM = 5.71 dB, ωpc = 78.8, PM = 21.2 deg, ωgc = 55.3, BW = 87.42. Discussion: The obtained BW clearly shows that we could design the lead controllers at a lower frequency than ω = 20. However, if its control signal (which is not provided) is acceptable to us we can accept the design, since its tracking performance is excellent. More precisely, one lead control at the frequency 12 also renders the required bandwidth; we leave it to the reader to verify this. Pushing the bandwidth too high, as we did in this example, causes other problems (large control signal and noise amplification) which we shall discuss in the next chapter. However, we did so in order to illustrate its effect on the response. On the other hand, it is left as an easy exercise to the reader to design a controller with more or less the same tracking performance while guaranteeing a PM of 40 deg. Problem 9.5: Reconsider parts (1) through (3) of Problem 9.2 with a PI control. 1. The PI is given by k(1 + 1/(Ti s)). By the same reasoning as in Problem 9.2, in order to have PM = 50 deg there should hold ∠P = −130 deg. However, because we know that the PM will be about 6 deg less, we should design for PM = 56 deg, i.e., ∠P = −124 deg. This occurs at ω = 13.9, at which |P| = −20.1 dB. Therefore k = 10^(20.1/20) = 10.116.
With this value of the gain the system has GM = Inf, ωpc = Inf, PM = 55.5 deg, ωgc = 13.9. Now we design Ti such that ωgc = 13.9 (almost) does not change. For this we choose 1/(Ti ωgc) = 0.1, or Ti = 10/ωgc = 0.72. The system has GM = Inf, ωpc = Inf, PM = 49.7 deg, ωgc = 13.9. Finally, recall that the PI increases the type of the system by 1 and thus the steady-state error to the step input will be zero. Its response is given in the left panel of Fig. 9.47. As expected, the PI outstrips the lag control.
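The two rules used here (design for about 6 deg of extra PM, and 1/(Ti ωgc) = 0.1) can be checked directly: the PI factor 1 + 1/(Ti s) at the gain crossover has phase −arctan(0.1) and near-unity gain, so it eats roughly 6 deg of phase while leaving ωgc essentially unchanged. A one-line Python check:

```python
import numpy as np

Ti, wgc = 0.72, 13.9
pi_term = 1 + 1 / (1j * Ti * wgc)      # PI factor 1 + 1/(Ti*s) evaluated at s = j*wgc
print(round(np.rad2deg(np.angle(pi_term)), 2), round(abs(pi_term), 4))
# about -5.7 deg of phase lag and a gain of about 1.005 at the crossover
```

This is why the proportional gain is designed against ∠P = −124 deg rather than −130 deg.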

Figure 9.47 Problem 9.5: Tracking performance.


2. We do the same experiments as in Problem 9.2. That is, we include the sensor dynamics Ps = 100/(s + 100) and the delay T = 0.15 × 0.0622 = 0.00933 ≈ 0.01 second in the forward path and present the output in the middle panel. 3. In the right panel the same amount of delay is considered in the feedback path as well.

Discussion: In the third case the oscillations die out at about t = 8 s. It is interesting to note that if the gain is set at a larger value like k = 12.12 then, in the third case, even if we increase the simulation time unboundedly some oscillations will be sustained in the output. In fact the given panel refers to this case. See the accompanying CD on the webpage of the book. What is the reason? Also note that although the PI control outstrips the lag control with respect to the steady-state error, in this respect the lag control is superior. Finally, the reader is encouraged to repeat the other parts of Problem 9.2 in this problem as well. Problem 9.6: Given P(s) = 25/((s + 5)(s + 10)(s + 15)²); design a controller so that the closed-loop BW is at least BW = 15 rad/s and the steady-state error to the acceleration input is ess = 2%. We first note that the controller must have a double integrator. With the inclusion of a double integrator the system is unstable for any value of the static gain. The controller thus needs to stabilize the system as well. We offer different syntheses to this problem in this and the next Problems 9.7-9.10. Our purpose is the comparison of these different synthesis methods. The conclusion will be made after Problem 9.10 in Remark 9.22. Also note that the gain of this system is very small (for illustration) and thus very large control signals will be obtained with any synthesis we choose. A possibility is to use a double stabilizing zero along with the double integrator. Thus for instance we choose C1(s) = (s + 0.5)²/s². Now the system is stable for multiple values of gain, as can be verified either by the root locus or the Bode diagram shown in Fig. 9.48. To fulfill the BW requirement we proceed as follows. We note that GM = 66 dB, ωpc = 9.84, PM = 4.82 deg, ωgc = 0.0236. To approach the desired BW we increase the gain by 63.5 dB ≈ 1500 in plain numbers. Now the system has GM = 2.46 dB, ωpc = 9.84, PM = 15.6 deg, ωgc = 8.39.
With this value of ωgc we can expect (with the help of the lead controller which will be designed) the BW of the closed-loop system to be at least 15 rad/s. (Note that we cannot be sure about this: if the objective is fulfilled we are done, otherwise a redesign is needed.) Now we design the lead controller. Due to the poor PM of the system we take the maximum contribution of phase lead, i.e., 60 deg. Without loss of generality we may choose the future value of ωgc to be 9.84. At this frequency |P| = −2.46 dB. Now, α = (1 + sin 60)/(1 − sin 60) = 13.9282, Td = 1/(9.84√13.93) = 0.0272, kd = 10^(2.46/20)/√13.93 = 0.3557. The resultant system has GM = 7.58 dB, ωpc = 17.4, PM = 60 deg, ωgc = 9.84. Another possible synthesis is to choose C1(s) = (s + 0.5)/s² along with lead stabilization. We leave it to the reader.
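The quoted open-loop figures for C1(s)P(s) = ((s + 0.5)²/s²) × 25/((s + 5)(s + 10)(s + 15)²) can be cross-checked numerically. A Python/NumPy sketch (the search grid starts at ω = 1 on purpose, to avoid the near −180 deg phase of the double-integrator loop at very low frequency):

```python
import numpy as np

def L(w):
    # Unity-gain loop with the stabilizing double zero: C1(s)*P(s)
    s = 1j * w
    return (s + 0.5) ** 2 / s ** 2 * 25.0 / ((s + 5) * (s + 10) * (s + 15) ** 2)

w = np.logspace(0, 3, 200000)
ph = np.rad2deg(np.unwrap(np.angle(L(w))))
wpc = w[np.argmin(np.abs(ph + 180.0))]       # phase-crossover frequency
GM_dB = -20.0 * np.log10(np.abs(L(wpc)))
print(round(wpc, 2), round(GM_dB, 1))
# close to the quoted wpc = 9.84 and GM = 66 dB
```

The 66 dB of gain margin is exactly the headroom the text then spends: raising the gain by 63.5 dB (about 1500) leaves GM = 2.46 dB.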



Figure 9.48 Problem 9.6, Top left: Bode diagram of the system with C1, Top right: Resultant open-loop system, Bottom left: Resultant closed-loop system, Bottom right: Tracking performance.

Finally we design the lag part of the controller. To this end we note that the steady-state error requirement with the lead-lag controller results in Ka = [(25 × 0.5²)/(50 × 15²)] × 1500 × 0.3557 × (1 + kg) = 1/0.02 = 50, where Ka = lim s→0 s²C(s)P(s). Thus 1 + kg = 168.68, or kg = 167.68, and Tg ≈ 10 × 167.68/9.84 = 170.4. Finally, we should check the features of the resultant system. They are GM = 7.02 dB, ωpc = 16.9, PM = 53.9 deg, ωgc = 9.89, as given in the top right panel of Fig. 9.48. How about its BW, which was a design specification? Surprisingly, it is BW = 2.5! This means that the Bode magnitude diagram of the closed loop has some "belly" around this frequency. This guess is verified by looking at the Bode diagram of the closed-loop system given in the bottom left panel of Fig. 9.48. There are two solutions to this problem: (1) to slightly increase the gain of the controller, or (2) to do a redesign and trial and error. Let us try (1). A simple investigation shows that if we multiply the gain of the controller by 1.04 (which is indeed a slight increase) then the closed-loop magnitude diagram is slightly shifted upwards, but this is enough for the BW to jump to BW = 17.82. The design


objective is thus fulfilled. Note that with the new controller the system parameters change to GM = 6.68 dB, ωpc = 16.9, PM = 50.5 deg, ωgc = 10.3, which are all acceptable. Thus it is not really necessary to follow (2). However, the interested reader is encouraged to do the redesign. Before doing so, the reader is first referred to the next Problem 9.7 to see how a redesign may be done. The simulation is offered in the bottom right panel of Fig. 9.48. The controller is 1.04 × ((s + 0.5)²/s²) × 1500 × 0.3557 × (13.93 × 0.0272s + 1)/(0.0272s + 1) × (170.4s + 168.68)/(170.4s + 1). Discussion: What is the reason for the poor performance in the falling regime of the input reference? It is probably that the bandwidth of the system is not large enough, or equivalently that the frequency of the reference signal is too large for this controller. The simulation with a lower-frequency reference signal is provided in Fig. 9.49. As observed, the performance is enhanced considerably. However, as will be seen in the ensuing problems, with other syntheses we can achieve an excellent performance for the high-frequency reference signal. So the main reason for the poor performance of this system is the synthesis of the controller. How can it be precisely explained? Question 9.11: As observed, the transfer function of the (closed-loop) system is not flat in the bandwidth, and this is because of the controller. An ideal transfer function is flat in the bandwidth. One may argue that "The frequency of the input signal is 2π/8 = 0.78 (in a simplified analysis); however, considering its Fourier expansion, the main components are sinusoids at frequencies probably in the belly region ([1, 10]) which are attenuated by the transfer function. Thus the output peak is smaller than the input peak." What do you think about this argument?

Problem 9.7: Given P(s) = 25/((s + 5)(s + 10)(s + 15)²). Design another controller so that the closed-loop BW is at least BW = 15 rad/s and the steady-state error to the acceleration input is ess = 2%. We already know that the controller must have a double integrator and that it must stabilize the system as well. In this synthesis we do not use stabilizing zeros for the integrators and instead stabilize the system by adding enough phase to it via some lead controllers. Then we design a lag controller for the steady-state error. To this end we should first have a look at the Bode diagram of the system with a double integrator. It is given in Fig. 9.50. As the first trial we decide to add enough phase at the frequency 10 to stabilize the system. The phase there is about −360 deg, so the controller should contribute more than 180 deg. We use four lead controllers with maximum phase contribution at this frequency. Thus α = (1 + sin 60)/(1 − sin 60) = 13.9282 and T = 1/(10√13.93) = 0.0268. Note that we set the proportional gain of the lead controller equal to 1 at this stage. With these four sections applied the system has GM = 64.7 dB = 1717 (by inspecting the Bode diagram), or 1703 by the command

(As we shall see, stabilizing at this frequency will not result in the desired bandwidth; for the purpose of illustration, however, we present the result. In practice we have to find the right frequency by trial and error, although after gaining some experience we may be able to make an educated guess.)

Figure 9.49 Problem 9.6, Left: Tracking of a low frequency input, Middle: Control signal, Right: Control Signal for the high frequency input.


Figure 9.50 Problem 9.7, Top left: Original system with double integrator, Top right: Lead controlled system, Bottom left: Lead-lag open-loop controlled system, Bottom right: Closedloop controlled system.

"allmargin." Now we choose k = 1000. That is, the controller is 1000 × ((13.93 × 0.0268s + 1)/(0.0268s + 1))⁴. The system will have GM = 4.63 dB, ωpc = 16.5, PM = 81.5 deg, ωgc = 8.33. Now we design the lag controller. There holds 1/([25/(50 × 15²)] × 1000 × (1 + kg)) = 0.02, or kg = 21.5, and T ≈ 10kg/8.33 = 25.81. Thus the lag controller is (25.81s + 22.5)/(25.81s + 1). The controlled system has GM = 4.35 dB (ωpc = 16.2 rad/s), PM = 73.7 deg (ωgc = 8.55 rad/s), which are all acceptable; see the bottom left panel of Fig. 9.50. However, the closed-loop bandwidth is surprisingly BW = 3.10. The reason for this problem is as in the previous example; see the bottom right panel of Fig. 9.50. As in the previous example, we check whether the bandwidth can be fixed by increasing the gain. We observe that if we multiply the gain of the system by 1.2 the bandwidth increases to BW = 21.5. The new system features are GM = 2.77 dB (ωpc = 16.2 rad/s), PM = 37.1 deg (ωgc = 12.2 rad/s). We observe that, unlike Problem 9.6, here the system features are drastically deteriorated. Thus a redesign is called for. For the redesign the reader is encouraged to verify that adding further lead controllers at the same frequency or at smaller frequencies (to push up the belly of the


Figure 9.51 Problem 9.7, Left: Open-loop controlled system, Right: Closed-loop controlled system.

curve in the frequency range [2, 10]) does not rectify the problem. The reader is also encouraged to do the redesign for the frequency 15. We present the results for the frequency 20. We have 8000 × ((13.93 × 0.0134s + 1)/(0.0134s + 1))⁴ as the lead controller and (3.92s + 2.81)/(3.92s + 1) as the lag controller. The controlled system has GM = 8.4 dB (ωpc = 18.5 rad/s), PM = 45.4 deg (ωgc = 7.21 rad/s), and BW = 17.33, all quite acceptable. The Bode diagrams of the open-loop controlled system as well as the closed-loop controlled system are given in Fig. 9.51. If we want a larger PM we have to increase the number of lead controllers or change the parameters of the system. We leave it to the reader. Finally, the output response is shown in Fig. 9.52. The interested reader is encouraged to increase the phase margin of the system and observe its effect on the output response. Discussion: As observed, the lead-based controller outperforms the synthesis of Problem 9.6. This however has not come for free; now the control signal shows sharper and larger oscillations, i.e., it needs a more expensive actuator. Moreover, here as well we simulate the system with a lower-frequency reference input, Fig. 9.53. The performance improves significantly. As is seen, especially in the second case, the input and output are barely distinguishable. The control signals are very much similar to those of Problem 9.6 and hence are not shown. See also the next Problems 9.8-9.10. We close the discussion by adding that in the first design, no matter whether we use C or 1.2C as the controller, the tracking performance is poor. Hence, in the case that we have to use this controller synthesis it is wise to use C, which results in better system margins. Question 9.12: An argument similar to Question 9.11 can be made here as well. However, this time the transfer function has an upward belly, i.e., it amplifies various frequencies.
Thus one may argue that "This is the reason that the output peak is larger than the input peak." What do you think about this reasoning? Problem 9.8: Given P(s) = 25/((s + 5)(s + 10)(s + 15)²). Design a controller so that the closed-loop BW is at least BW = 15 rad/s and the steady-state error to the acceleration input is zero.
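Before taking up Problem 9.8, the final Problem 9.7 design can be cross-checked numerically. The Python/NumPy sketch below reconstructs the loop from the quoted controller (gain 8000, four maximal leads at ω = 20, lag (3.92s + 2.81)/(3.92s + 1), double integrator, plant); since the loop is conditionally stable, the gain margins are read off at the negative-real-axis crossings of L(jω):

```python
import numpy as np

def L(w):
    # Reconstructed final Problem 9.7 loop (a sketch, not the book's own files)
    s = 1j * w
    lead4 = ((13.93 * 0.0134 * s + 1) / (0.0134 * s + 1)) ** 4
    lag = (3.92 * s + 2.81) / (3.92 * s + 1)
    plant = 25.0 / (s ** 2 * (s + 5) * (s + 10) * (s + 15) ** 2)
    return 8000.0 * lead4 * lag * plant

w = np.logspace(-1, 2, 500000)
Lw = L(w)

i_gc = np.argmin(np.abs(np.abs(Lw) - 1.0))           # gain crossover
wgc_hat = w[i_gc]
PM_hat = 180.0 + np.rad2deg(np.angle(Lw[i_gc]))

# Negative-real-axis crossings: each gives a gain margin of the conditionally stable loop
im, re = np.imag(Lw), np.real(Lw)
idx = np.where((np.sign(im[:-1]) != np.sign(im[1:])) & (re[:-1] < 0))[0]
GMs = [(round(w[i], 2), round(-20.0 * np.log10(np.abs(Lw[i])), 2)) for i in idx]

print(round(wgc_hat, 2), round(PM_hat, 1), GMs)
# expect wgc and PM near the quoted 7.21 rad/s and 45.4 deg, and one crossing
# near 18.5 rad/s with roughly 8.4 dB of (upper) gain margin
```

The low-frequency crossing that also appears in the list is the lower gain margin that single-number margin readouts of conditionally stable loops omit.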


Figure 9.52 Problem 9.7, Top left: Tracking performance of the resultant system with lead control at ω = 10, Top right: Tracking performance of the resultant system with lead control at ω = 20, Bottom left: Control signal, ω = 10, Bottom right: Control signal, ω = 20.


Figure 9.53 Problem 9.7, Tracking performance, Left: ω = 10, Right: ω = 20.

To fulfill the requirement of zero acceleration error we need a triple integrator; thus the controller does not need a lag term. In this synthesis we try to stabilize the triple integrator by a triple zero which we place next to it. Then we investigate the achievable bandwidth and performance versus the location of the zeros. In our trials we place the zeros at C1 → {−1, −1, −1}, C2 → {−0.5, −0.5, −0.5},


{−0.5, −0.5 ± 0.5j}, C3 → {−0.1, −0.1, −0.1}, {−0.1, −0.1 ± 0.1j}, etc. The general observation is that by moving towards the origin the achievable bandwidth is slightly increased. Moreover, as far as the bandwidth is concerned it does not make much difference whether the zeros are real or complex. However, the stability range differs. In the real case the stability range is wider and the designer has more freedom in choosing the gain value, although in either case the tracking performance is poor. Some numerical values are as follows. With C1 the upper marginal value of gain is K ≈ 1530, for which BW = 12.53. With C2 the upper marginal value of gain is K ≈ 1870, yielding BW = 13.56. Finally, for C3 the marginal value of gain is K ≈ 2150, resulting in BW = 14.32. See Fig. 9.54.
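The quoted upper marginal gain for C1 (zeros at −1) can be cross-checked: it is the reciprocal of |L| at the high-frequency −180 deg crossing of the unity-gain loop. A Python/NumPy sketch (reconstructed, not the book's code):

```python
import numpy as np

def L0(w):
    # Unity-gain loop for C1: ((s+1)^3/s^3) * 25/((s+5)(s+10)(s+15)^2)
    s = 1j * w
    return (s + 1) ** 3 / s ** 3 * 25.0 / ((s + 5) * (s + 10) * (s + 15) ** 2)

w = np.logspace(-2, 2, 500000)
Lw = L0(w)
im, re = np.imag(Lw), np.real(Lw)
idx = np.where((np.sign(im[:-1]) != np.sign(im[1:])) & (re[:-1] < 0))[0]

wpc_hi = w[idx[-1]]                  # second (high-frequency) -180 deg crossing
Kmax = 1.0 / np.abs(Lw[idx[-1]])
print(round(wpc_hi, 2), round(Kmax))
# Kmax comes out near the quoted marginal gain K ~ 1530; the low-frequency
# crossing (idx[0]) similarly bounds the stabilizing gain range from below
```

Repeating this with the zeros of C2 and C3 reproduces the other two marginal gains quoted above.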


Figure 9.54 Problem 9.8, Tracking performance, Left: C1 with K = 1000, Right: C2 with K = 1000.

Discussion: It is observed that the location of the stabilizing zeros does not have much influence on the achievable bandwidth; the bandwidth is mainly dictated by the controller gain and the synthesis/structure. Moreover, unlike problems of relative degree one or two, where we can use high-gain control and obtain excellent performance, in this system, where we cannot use high-gain control, the performance is poor. Also note that with marginal values of the gain the output is oscillatory (not shown in the figure). Thus the desired bandwidth seems unachievable with this controller synthesis. On the other hand, the performance is poor. So regardless of the bandwidth this synthesis is not good and we should try another one. In the next Problem 9.9 we shall propose another synthesis: we stabilize the triple integrator with several lead controllers and to a large extent address the problem of poor performance. Then in Problem 9.10 we shall propose a third synthesis which outperforms that of Problem 9.9. Finally, note that as in the previous problems, if we try a lower-frequency reference input the performance improves significantly. This is shown in Fig. 9.55. The control signals are very much similar to those of Problem 9.6 and thus are not provided.

Problem 9.9: Given P(s) = 25/((s + 5)(s + 10)(s + 15)²). Design a controller so that the closed-loop BW is at least BW = 20 rad/s and the steady-state error to the acceleration input is zero.
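The open-loop phase figure used in the solution that follows (about −515 deg at ω = 20 for the plant with a triple integrator) can be obtained by accumulating the phase term by term, i.e., three integrators and then each real pole, which avoids any wrapping ambiguity. A quick Python check:

```python
import numpy as np

# Phase of P(s)/s^3 = 25/(s^3 (s+5)(s+10)(s+15)^2) at w = 20 rad/s
w = 20.0
phase = -270.0 - np.rad2deg(np.arctan(w / 5) + np.arctan(w / 10) + 2 * np.arctan(w / 15))
print(round(phase, 1))   # -515.7, matching the "approximately -515 deg" estimate
```

Since each maximal lead section contributes at most 60 deg, bringing this up past −360 deg indeed calls for about six of them.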


Figure 9.55 Problem 9.8, Control signal, Left: C1 with K = 1000, Right: C2 with K = 1000.

As in the previous example, to achieve zero acceleration error we need a triple integrator, C1 = 1/s³; that is, the controller does not need a lag term. In this synthesis we do not use any stabilizing zeros along with the integrators, and instead stabilize the system and provide the bandwidth via some lead controller design. Based on the experience gained from the previous problem we start with the frequency 20. From the Bode diagram of the system with the triple integrator we learn that the phase of the system at this frequency is approximately −515 deg, see Fig. 9.56. We should therefore use at least six maximal lead controllers at this frequency, and we do so. With the controller K((13.93 × 0.0134s + 1)/(0.0134s + 1))⁶ the range of stability versus K is approximately K ∈ (6.55 × 10³, 4.30 × 10⁴). We check the bandwidth and the response of the system for different values of K. We obtain the following pairs (K, BW): (10,000, 6.15), (15,000, 10.96), (15,100, 20.16), (25,000, 30.85), (40,000, 36.35). Thus the gain value must be at least about 15,100. With this value of gain the output is provided in Fig. 9.56. By increasing the value of K (which means a larger control signal) better tracking performance is obtained. The best answer is obtained with a gain value of about 25,000; see the same figure. With this value of gain the system has GM = 4.72 dB (ωpc = 25.3 rad/s), PM = 57.8 deg (ωgc = 11.2 rad/s). (As we may expect, for both the lower and upper marginal values the output is oscillatory. Simulations are not provided.) Discussion: As observed, with the lead controller we obtain a significantly better response. Moreover, the controller which results in a larger BW has a higher performance. However, this does not come for free; here the control signal shows some large and sharp oscillations, i.e., it needs a more expensive actuator. Here as well, if we use a lower-frequency input the performance improves significantly. Because the input and output are barely distinguishable we do not provide the figures. Problem 9.10: Using pole-zero cancellation, design another controller for Problem 9.9. This means that here we cancel some poles of the system with zeros of the controller. Because the controller should have a triple integrator, one possibility is to cancel three poles of the plant, and thus the controller will be,

Figure 9.56 Problem 9.9, Top left: System C1P, Top right: System CleadC1P (K = 1), Second and third rows: Left: K = 15,100, Right: K = 25,000.

e.g., C1(s) = (s + 5)(s + 10)(s + 15)/s³. The rest of the controller will be a stabilizing lead controller which also provides the required bandwidth. With this controller the open-loop system has the phase −304 deg at ω = 10, the phase −323 deg at ω = 20, etc. There are several possibilities for the lead design. In the first design we put three maximal lead controllers at ω = 10, i.e., K[(13.93 × 0.0268s + 1)/(0.0268s + 1)]³. The closed-loop system is stable for K ∈ (2.25, 39.30). The best performance for the output is obtained with K = 20 and is provided in Fig. 9.57, left panel. The system has GM = 5.87 dB (ωpc = 25.3 rad/s), PM = 37.4 deg (ωgc = 15.4 rad/s), and BW = 28.21. In the second design we put three maximal lead controllers at ω = 20,

Figure 9.57 Problem 9.10, Left: First design, Right: Second design.
Figure 9.58 Problem 9.10, Control signals. Left: First design, Right: Second design.

i.e., K[(13.93 × 0.0134s + 1)/(0.0134s + 1)]³. This time the closed-loop system is stable for K ∈ (23.37, 401.88). The best response is obtained with K = 150 and is given in the right panel of Fig. 9.57. The system has GM = 8.56 dB (ωpc = 41.8 rad/s), PM = 37.5 deg (ωgc = 19.5 rad/s), and BW = 39.14. Discussion: As observed, both of these designs outperform the controller of Problem 9.9. This is mainly because of the synthesis: the loop gain has a nicer and simpler shape. The control signals are provided in Fig. 9.58; the left and right panels refer to the first and second designs, respectively. As observed, the control signal of the first design is similar to, but has smaller peak magnitudes than, the case K = 25,000 of Problem 9.9, yet its performance is better; this is because of the synthesis. On the other hand the second design, which outperforms both of the previous designs, has not come for free. As is seen, its control signal (right panel) and bandwidth are both larger, so it needs a more expensive actuator. Finally, another synthesis is to cancel all the poles of the plant and try C1(s) = (s + 5)(s + 10)(s + 15)/s⁴. We leave it to the reader to complete the rest of the procedure and compare the results.
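All of these designs reuse one building block: the "maximal lead" element C(s) = (αTs + 1)/(Ts + 1) with α = 13.93, whose peak phase lead is arcsin((α − 1)/(α + 1)) ≈ 60 deg, attained at ω = 1/(T√α). A small numerical sketch (in Python here, although the book itself works in MATLAB) reproduces the 60 deg figure and the time constants 0.0134 and 0.0268 used for the designs at ω = 20 and ω = 10:

```python
import math

def max_lead(alpha):
    """Maximum phase lead (deg) of C(s) = (alpha*T*s + 1)/(T*s + 1)."""
    return math.degrees(math.asin((alpha - 1.0) / (alpha + 1.0)))

def T_for_peak(alpha, w):
    """Time constant T placing the peak lead at frequency w (rad/s)."""
    return 1.0 / (w * math.sqrt(alpha))

print(round(max_lead(13.93), 1))        # 60.0 deg per element
print(round(T_for_peak(13.93, 20), 4))  # 0.0134
print(round(T_for_peak(13.93, 10), 4))  # 0.0268
```

Stacking n such elements at the design frequency adds about n × 60 deg of phase there, which is how the lead counts in these problems are chosen. (The text writes the constant interchangeably as 0.0267 or 0.0268; both round from 1/(10√13.93).)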


Remark 9.22: The general observations that we make in the previous Problems 9.6–9.10 are that: 1. The performance does depend on the frequency of the reference input. 2. For this system the lead-stabilizing synthesis outperforms the zero-stabilizing synthesis, as it can accommodate inputs with higher frequencies. 3. Simpler transfer functions result in a nicer performance. This suggests cancelling out the stable dynamics of the plant with the controller; the more precise the cancellation, the higher the performance. 4. Controllers which result in a higher performance require more expensive actuators: their control signals change more sharply and with larger peak magnitudes. Note that such a control signal is customarily said to be 'poor'. (Note that item 4 is observed also for sinusoidal reference inputs. For instance, in Part (5) of Problem 9.2 we had a higher transient performance and a poorer control signal in that phase. This will be observed also in Part (3) of Problem 9.12. A theoretical explanation, e.g., in terms of the energy of the system, would be quite desirable. For what systems do we observe this feature more?)

Problem 9.11: Design a stabilizing controller for the plant P(s) = 10/[s(s − 1)] such that the steady-state error to ramp inputs is at most 1%. The system already has an integrator, so we only need to stabilize it. We accomplish this by lead control. The Bode diagram of the plant shows that the phase of the system is −186 deg at ω = 10. Therefore we use a maximal lead controller at this frequency, that is, C(s) = k(13.93 × 0.0268s + 1)/(0.0268s + 1). The Bode diagram specifies k = 10^(20.1/20)/√13.93 = 2.71. However, if we choose k = 3 the stability margins are improved. With k = 3 the system has GM = −20.3 dB, ωpc = 1.73, PM = 54.7 deg, ωgc = 11, BW = 18.01. As for the steady-state error there should hold 3 × 10 × (1 + kg) = 100, or kg = 2.33, and T ≈ 10kg/ωgc = 2.12. The system has GM = −16 dB, ωpc = 2.68, PM = 49.1 deg, ωgc = 11.1, BW = 18.77. The response is given in Fig. 9.59.
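The numbers above can be reproduced quickly. The sketch below (Python, with the plant read here as P(s) = 10/[s(s − 1)]) checks the plant phase at ω = 10, the crossover gain k, and the lag parameters:

```python
import cmath, math

def P(s):
    # plant of Problem 9.11 as reconstructed here: P(s) = 10 / (s (s - 1))
    return 10.0 / (s * (s - 1.0))

phase = math.degrees(cmath.phase(P(1j * 10.0)))
phase -= 360.0 if phase > 0 else 0.0   # unwrap onto the branch used in the text
print(round(phase))                    # -186

k = 10 ** (20.1 / 20) / math.sqrt(13.93)   # crossover gain at the lead peak
print(round(k, 2))                         # 2.71

kg = 100.0 / (3 * 10) - 1.0                # from 3 * 10 * (1 + kg) = 100
T = 10 * kg / 11.0                         # T ~ 10 kg / wgc with wgc = 11
print(round(kg, 2), round(T, 2))           # 2.33 2.12
```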

Figure 9.59 Problem 9.11.


Discussion: The reader is encouraged to have a look at the accompanying CD on the webpage of the book and observe the Bode diagram of this system, as it is probably different from one's expectations! Moreover, note that if we further increase the gain, the stability margins are further improved, with more improvement in the GM than in the PM. (Question: What other considerations should we have in order to find the right value of the gain?) Problem 9.12: Consider the plant P(s) = 1/[(s − 1)(s − 2)]. (1) Design a stabilizing and step-tracking controller for it. (2) Design a controller such that the system follows the sinusoidal input r(t) = sin 15t, accepting a phase lag between the input and output. (3) Design a controller such that the system follows the sinusoidal input r(t) = sin 15t with negligible instantaneous error at steady state. 1. Note that this is the problem we considered in Chapter 5, Root Locus, Example 5.22. Now we propose a controller whose root locus is slightly different and has smaller overshoot. The Bode diagram shows that the phase of the system at ω = 10 is −197 deg and the system is unstable. We use a maximal lead controller at this frequency, i.e., C(s) = (13.93 × 0.0267s + 1)/(0.0267s + 1). This stabilizes the system; note that we do not specify its gain k, in other words it is 1. The root locus is provided in Fig. 9.60. To modify the stabilizer into a step tracker we multiply the controller by k(s + 0.1)/s. Note that the small zero term is used in order to minimize the effect of the integrator on the whole root locus, which remains unchanged. Now we should simulate the system for different values of the gain. The best answer is obtained around k = 70, provided in Fig. 9.60. The system features are as follows: GM = −17.8 dB, ωpc = 3.51, PM = 44.4 deg, ωgc = 22.4, BW = 38.24, all acceptable. To know where on the root locus this value of gain refers to, note that the closed-loop system poles are −15.37 ± j22.67, −3.613, −0.0967.
Finally, we add that we have not performed an exhaustive trial and error; perhaps one can get an even better response with another set of controller parameters.
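As a check on the lead placement (a Python sketch; the book does this with MATLAB's Bode plots), the phase of P(s) = 1/[(s − 1)(s − 2)] at ω = 10 is indeed about −197 deg, so a single 60-deg maximal lead there lifts the loop phase above −180 deg:

```python
import cmath, math

def P(s):
    # plant of Problem 9.12: P(s) = 1 / ((s - 1)(s - 2))
    return 1.0 / ((s - 1.0) * (s - 2.0))

ph = math.degrees(cmath.phase(P(1j * 10.0)))
ph -= 360.0 if ph > 0 else 0.0   # unwrap onto the branch used in the text
print(round(ph))                 # -197
print(round(ph + 60))            # -137: phase at the design frequency after the lead
```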

Discussion: The root locus with the inclusion of the term k(s + 0.1)/s is the same as that of Fig. 9.60 plus a branch connecting the pole and zero of this term. It is not provided since, on the one hand, it is not visible and, on the other hand, it makes

Figure 9.60 Problem 9.12, Left: Root locus of the system, Right: Tracking performance.
Figure 9.61 Problem 9.12, item (2). Left: Input and output, Right: Control signal.

the rest of the figure unclear. Moreover, recalling from Section 9.7, in general we cannot be sure that by adding enough phase so as to cross the line ±180(2h + 1) deg the system becomes stable. So, to be on the safe side, we should verify the stability with MATLAB. However, for simple systems we may acquire enough insight to be sure about the result. 2. It is clear that we have to use lead control to stabilize the system. We use the same controller with a different gain, C(s) = K(13.93 × 0.0267s + 1)/(0.0267s + 1) with K = 31.5. See Fig. 9.61. The reader is encouraged to try other controllers as well.
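The 0.61 and 1.64 figures quoted in the following discussion can be verified directly by evaluating the closed-loop frequency response at ω = 15 with the reduced gain K = 20 (a Python sketch; the lead element of part (2) is assumed):

```python
import cmath

def P(s):
    # plant of Problem 9.12: P(s) = 1 / ((s - 1)(s - 2))
    return 1.0 / ((s - 1.0) * (s - 2.0))

def lead(s, T=0.0267, alpha=13.93):
    return (alpha * T * s + 1.0) / (T * s + 1.0)

s = 15j                      # frequency of r(t) = sin 15t
L = 20.0 * lead(s) * P(s)    # loop gain with K = 20
T_cl = L / (1.0 + L)         # closed-loop frequency response at s = j15
print(round(abs(T_cl), 2))        # 0.61: output amplitude for a unit sinusoid
print(round(1.0 / abs(T_cl), 2))  # 1.64: required static prefilter gain
```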

Discussion: It should be noted that if we use a 2-DOF control structure to reduce the gain of the controller to, e.g., K = 20, then the magnitude of the output will be about 0.61 and we have to use the static prefilter F = 1.64; the resultant control signals are almost exactly the same in both cases. Why? And how do you compare these designs? Finally, what is the reason for the large magnitude of the control signal? Does it have anything to do with the high frequency of the reference input? 3. It is clear that we have to increase the BW. So we have two options, as in Problem 9.2: (a) to increase the phase lead of the controller and then use a static prefilter if needed; (b) to stabilize the system in a 1-DOF structure and then use a dynamic prefilter, as in Section 4.13.2.1, to take care of the magnitude and phase differences. Now we try these two options. (a) We should try 2, 3, 4, etc. (maximal) leads at different frequencies and compare the outcomes with each other. We observe that with the 3-maximal-lead control C(s) = K(13.93 × 0.0267s + 1)³/(0.0267s + 1)³ with K = 20 and the static prefilter F = 1.07 in the 2-DOF structure, the objective is well fulfilled. See Fig. 9.62. It is noted that if we do not use the 2-DOF structure then we have to increase K to over 500 (otherwise the peak magnitude is smaller than 1) and the resultant control signal will be much poorer and larger than that of the 2-DOF structure in the transient phase, although in the steady state they are almost the same. And this is contrary to that of case (2). Why? (b) We stabilize the system in a 1-DOF structure (with L(s) = C(s)P(s)) and then design the prefilter F(s) such that |F(jωr)| = |1 + L(jωr)|/|L(jωr)| and ∠F(jωr) = −φ = −∠[L(jωr)/(1 + L(jωr))], where ωr = 15 is the frequency of the reference input. To this

Figure 9.62 Problem 9.12, Top row (3a). Left: Input and output, Middle: Transient response, Right: Control signal. Bottom row (3b), Left: Input and output, Middle: Output of prefilter, Right: Control signal inside the loop.


end, we use the controller of part (2) and define T(s) = L(s)/(1 + L(s)). At s = jωr = j15 we have |T| = 1.001026 and ∠T = −86.731512 deg. The prefilter will have two parts, F = F2F1. We decide that F1 provides 60 deg with k1 = 1. Thus F1(s) = (13.93 × 0.017863s + 1)/(0.017863s + 1). Next we design F2 such that it provides the remaining |∠T + 60| = 26.731512 deg. Hence F2(s) = k2(2.635109 × 0.041068s + 1)/(0.041068s + 1) with k2√(2.635109 × 13.93) × 1.001026 = 1, therefore k2 = 0.164895. Simulation results are provided in the right column of Fig. 9.62.
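The F2 numbers follow from the maximal-lead relations; the sketch below (Python) rederives the factor 2.635109 from the residual angle of about 26.73 deg, and the gain k2 ≈ 0.1649:

```python
import math

rem = abs(-86.731512 + 60.0)   # residual angle F2 must supply, ~26.73 deg
# lead ratio giving that maximum phase: alpha = (1 + sin phi) / (1 - sin phi)
alpha2 = (1 + math.sin(math.radians(rem))) / (1 - math.sin(math.radians(rem)))
print(round(alpha2, 3))        # 2.635, the factor appearing in F2

# gain fixing |F2 F1 T| = 1 at wr: k2 * sqrt(alpha2 * 13.93) * 1.001026 = 1
k2 = 1.0 / (math.sqrt(2.635109 * 13.93) * 1.001026)
print(round(k2, 4))            # 0.1649
```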

Discussion: Although the output looks smooth in (3a), its magnification demonstrates that it has some fluctuations in the vicinity of the reference input (i.e., with negligible tracking error). Although the fluctuations look mild, because of the third-order derivative action in the controller the control signal has large and sharp fluctuations in the transient mode of the response. At steady state the magnitude of the control signal is about 115. The student is encouraged to insert the saturation block u ∈ [−120, 120] on the control signal and observe its effect in the SIMULINK environment. That is, what happens if the actuator has this limited range of authority? Finally, it should be added that if we are to allow for a larger instantaneous error, then we need a smaller phase lead in the controller, and this in general results in a smoother and smaller control signal. As for (3b), in the transient phase the control signal is 'very nice' and this is exactly the reason that in this phase the output performance is poorer. However, in the steady state, performances are about the same with regard to both the control signal and the output/error. We should emphasize that the difference in the steady states is that in part (3b) we can make the error exactly zero (neglecting roundoff errors) but in (3a) it cannot be exactly zero, only almost zero. Problem 9.13: Design a step-tracking controller for the system P(s) = (s² + 4)/[s(s² − 1)]. This system is unstable and of type 1. Thus we only need to stabilize it, which is achieved by adding enough positive phase to it. In the first trial we design the lead controller at the frequency 1. We try 2 and 3 such maximal controllers. The quality of the response and the control signal drastically depend on the gain value. (But the system features do not change drastically with the gain value. Why?) We discuss two cases here. With 2 maximal lead controllers and k = 0.1 we find GM = Inf dB, PM = 22.6 deg, ωgc = 1.74, BW = 1.87. For 3 maximal lead controllers and k = 0.1 we find GM = Inf dB, PM = 73.8 deg, ωgc = 1.95, BW = 1.96. On the other hand, for 2 maximal lead controllers and k = 1 we find GM = Inf dB, PM = 18.9 deg, ωgc = 1.97, BW = 1.98. For 3 maximal lead controllers and k = 1 we find GM = Inf dB, PM = 72.7 deg, ωgc = 1.99, BW = 1.99. Some results are given in Fig. 9.63. As is observed, a tradeoff is called for. Finally, we encourage the reader to use the accompanying CD on the webpage of the book and change the gain value to observe the dependence of the output and control signal on the gain value, as well as the root locus and Bode diagram of the system, which are instrumental in deeper learning of the subject. In the second trial we design the controller at frequency 10 using 3 and 4 maximal lead controllers. We do not present the results, but encourage the reader to use the accompanying CD.

Figure 9.63 Problem 9.13, Top row: Three maximal leads and k = 0.1, Bottom row: Three maximal leads and k = 1, Left: Output, Middle: Control signal, Right: Margins.


Discussion: With small values of the gain the output is oscillatory. With larger values of the gain, although two closed-loop poles approach the jω-axis (to end at the open-loop jω-axis zeros), the output becomes less oscillatory. In the second designs, in either case (3 or 4 controllers) the stability range of the system is of the form k ∈ (kmin, ∞). Let us stress that the parenthesis means that infinity is not included. It is interesting to note that with the lower marginal value of the gain (where the system has closed-loop poles near the jω-axis) the oscillations in the output are quite large. However, with larger values of the gain (approaching its upper marginal value), although the closed-loop poles are again near the jω-axis, the output is not oscillatory! All in all, the first design with three controllers is clearly the best option. Also note that because of the jω-axis zeros at ω = 2, increasing the bandwidth beyond this frequency is not possible unless we cancel out these zeros with poles of the controller. Such a design is, however, internally unstable! Question 9.13: What is the reason that the output is not oscillatory although the system has a pair of (individually looking) highly oscillatory closed-loop poles? Problem 9.14: Design a ramp-tracking controller for the plant P(s) = 1/[(s − 1)(s − 2)(s − 3)] with a steady-state error of less than 2%. We provide two solutions to this system, in this and the next Problem 9.15. One is as follows. The controller should have the term C1(s) = 1/s. With this controller the system C1P is unstable and has the phase −394 deg at ω = 10. So, to stabilize the system, a possibility is to use four maximal lead controllers at this frequency. With this controller, i.e., (13.93 × 0.0267s + 1)⁴/(0.0267s + 1)⁴, the system is stable if the gain is in the range k ∈ (47.1, 122.5). The system is simulated with different values. The best response is obtained around k = 80.
See the documentation of the respective .m file in the accompanying CD on the webpage of the book. With this value the system has GM = 3.7 dB, ωpc = 27.5, PM = 22.8 deg, ωgc = 18.7, BW = 35.24. Now we design the lag part of the controller for the steady-state error requirement. There should hold [1/(2 × 3)] × 80 × (1 + kg) = 50, or kg = 2.75. Thus Tg ≈ 10kg/ωgc = 1.47. The resultant system has GM = 3 dB, ωpc = 26, PM = 16.8 deg, ωgc = 18.9, BW = 35.05. The result of the simulation is depicted in the left panel of Fig. 9.64. We also note that in the lead-controlled system, contrary to the usual case, we had ωpc = 27.5 > ωgc = 18.7. If we design the lag term as kg = 2.75 and Tg ≈ 10kg/27.5 = 1, then the outputs are barely distinguishable from each other (see the accompanying CD); however, the system features are slightly different. Now the resultant system has GM = 2.62 dB, ωpc = 25.2, PM = 13.8 deg, ωgc = 19.1, BW = 34.99. Finally, if we choose k = 64 the final system has GM = −2.43 dB, ωpc = 7.37, PM = 24.8 deg, ωgc = 13.6, BW = 31.56. See the right panel of the same figure. Discussion: In order to verify the stability of the system we will be misled if we use the commands "margin" or "bode" of MATLAB. Instead we should use the command "allmargin" along with the root locus of the system, so that we can interpret the output of the aforementioned command. As in the previous (and next!) problems, we observe that we have to make a tradeoff between different objectives. Remember to do this trial and error for any design you carry out!
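The lag numbers of Problem 9.14 follow directly from the error equation; a short check (Python):

```python
# [1/(2*3)] * k * (1 + kg) = 50 with the chosen lead gain k = 80
kg = 50.0 * (2 * 3) / 80.0 - 1.0
Tg = 10 * kg / 18.7            # Tg ~ 10 kg / wgc, with wgc = 18.7 rad/s
print(kg, round(Tg, 2))        # 2.75 1.47
```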


Figure 9.64 Problem 9.14, Left: k = 80, Right: k = 64.

Problem 9.15: Design another ramp-tracking controller for the system P(s) = 1/[(s − 1)(s − 2)(s − 3)] with a steady-state error of less than 2%. In the previous problem, with the controller C1(s) = 1/s we increased the relative degree of the plant from three to four. We can keep it at three if we choose, e.g., C1(s) = (s + 0.1)/s. This is done here. With this controller the system C1P is unstable and has the phase −304 deg at ω = 10. So, to stabilize the system, a possibility is to use three maximal lead controllers at this frequency. With this controller, i.e., (13.93 × 0.0267s + 1)³/(0.0267s + 1)³, the system is stable if the gain is in the range k ∈ (17.14, 102.4). The system has the best margins with about k = 40. They are GM = −7.37 dB, ωpc = 4.68, PM = 37.8 deg, ωgc = 29.6, BW = 51.79. The lag term is designed as follows. We have [1/(2 × 3)] × 0.1 × 40 × (1 + kg) = 50, or kg = 74. Thus Tg ≈ 10kg/ωgc = 740/29.6 = 25. The system has GM = 7.27 dB, ωpc = 50.7, PM = 31.8 deg, ωgc = 29.7, BW = 51.48. The result of the simulation is provided in Fig. 9.65. The reader is encouraged to try other values of the gain. Remark 9.23: As observed, we have obtained better results in Problem 9.15 than in Problem 9.14. This is in contrast to Problems 9.6–9.10. Indeed, it depends on the problem whether to use stabilizing zeros along with the integrator or not. Remark 9.24: In the above problems the relative degree of the original plant was three. In Problem 9.14 we increased it to four, while in Problem 9.15 we kept it at three. Thus, in either case the controlled system was inevitably stable only for a bounded range of the gain. This means a restricted degree of freedom. This is in contrast with Problems 9.13 and 9.12, where the relative degree was two and one, respectively, and better performance was obtained since the designer had a larger degree of freedom: the gain could tend to infinity (because the infinite branch(es) of the root locus was (were) stable).
Before carrying on, it should be mentioned that this has only a theoretical value; in practice, due to actuator limitations, high gains are not realizable.
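Similarly, the lag parameters of Problem 9.15 can be checked (Python):

```python
# [1/(2*3)] * 0.1 * 40 * (1 + kg) = 50, then Tg ~ 10 kg / wgc
kg = 50.0 * (2 * 3) / (0.1 * 40.0) - 1.0
Tg = 10 * kg / 29.6            # wgc = 29.6 rad/s from the lead design
print(kg, round(Tg))           # 74.0 25
```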


Figure 9.65 Problem 9.15 with k = 40.

Problem 9.16: Design a step-tracking controller for the system P(s) = 1/[(s − 1)(s − 2)(s − 3)(s − 4)] with a steady-state error of less than 1%. In this problem the controller does not need the term 1/s. We just need to stabilize the system to good stability margins (by adding enough phase to it) and then use a lag controller. The Bode diagram shows that the system has the phase −388 deg at the frequency 20. Hence, a possibility to stabilize the system is to use four maximal lead controllers at this frequency, i.e., (13.93 × 0.0134s + 1)⁴/(0.0134s + 1)⁴. With this controller the system is stable if the gain is in the range k ∈ (664.1, 1997.8). The best answer is probably around k = 1150, with which the system has GM = −4.77 dB, ωpc = 11.9, PM = 29.7 deg, ωgc = 33.7. Now we design the lag controller. There should hold [1/(2 × 3 × 4)] × 1150 × (1 + kg) = 100, and thus kg = 1.09 and T ≈ 10kg/ωgc = 0.32. The output is given in Fig. 9.66, left panel. The system has GM = 4.17 dB, ωpc = 53.7, PM = 23.5 deg, ωgc = 34.3, BW = 67.58. Finally, note that if the desired ess were 5% we would not need any lag controller. (Why?) The output of this case is given in the same figure, right panel. Discussion: The reader is encouraged to use the accompanying CD on the webpage of the book and observe that the same lead-lag controlled system tracks ramp and parabola inputs! What is the reason? Problem 9.17: Reconsider Problem 9.16. Design a ramp-tracking controller so that the steady-state error is less than 2%. The controller has the term 1/s. With this controller the system has the phase −478 deg at the frequency ω = 20. Hence, a possibility to stabilize the system is to use 5 or 6 maximal lead controllers at this frequency. With 5 controllers and the
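The phase figure −388 deg at ω = 20, and hence the need for four 60-deg leads, can be reproduced numerically (a Python sketch, summing the pole angles so the phase stays unwrapped):

```python
import cmath, math

# phase of P(s) = 1/((s-1)(s-2)(s-3)(s-4)) at w = 20, pole by pole
w = 20.0
ph = -sum(math.degrees(cmath.phase(1j * w - p)) for p in (1.0, 2.0, 3.0, 4.0))
print(round(ph))                      # -388

n = math.ceil((-180.0 - ph) / 60.0)   # 60-deg leads needed to clear -180 deg
print(n)                              # 4
```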


Figure 9.66 Problem 9.16, Left: With lag, Right: Without lag.
Figure 9.67 Problem 9.17, Left: With lag, Right: Without lag.

gain k = 5400 ∈ (4452, 6143), the system has GM = 1.15 dB, ωpc = 30.8, PM = 3.85 deg, ωgc = 25.9, BW = 51.61. The stability margins are poor but the response is better than the other choices. It is depicted in Fig. 9.67, left panel. With 6 controllers and the gain k = 1550 ∈ (1349, 1670), the system has GM = 0.65 dB, ωpc = 58.6, PM = 7.46 deg, ωgc = 55.2, BW = 4.37. The output is given in Fig. 9.67, right panel. As is seen, the response and the system features are all poorer. Note that the controller does not need a lag part in either case, since, e.g., [1/(2 × 3 × 4)] × 1550 > 50. Next we try to put the controllers at the frequency ω = 10. The phase of the system at this frequency is −506 deg. Thus 6 maximal lead controllers are needed. However, with this controller, although the root locus shape is acceptable, the gain condition at the jω-axis crossings is not satisfied and thus the system is unstable.
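Since the integrator contributes a further −90 deg, the −478 deg figure of Problem 9.17 follows from the −388 deg of Problem 9.16; numerically (Python):

```python
import cmath, math

# phase of C1(s)P(s) = 1/(s(s-1)(s-2)(s-3)(s-4)) at w = 20
w = 20.0
ph = -90.0 - sum(math.degrees(cmath.phase(1j * w - p)) for p in (1.0, 2.0, 3.0, 4.0))
print(round(ph))                                 # -478
print(round(ph + 5 * 60), round(ph + 6 * 60))    # -178 and -118 with 5 and 6 leads
```

With five leads the phase barely clears −180 deg at the design frequency, which is consistent with the very thin PM = 3.85 deg reported above.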

Figure 9.68 Problem 9.18, Left: Lead-controlled system, Right: Lead-lag controlled system.

of 1/s), which does not increase the relative degree of the system. Another possibility is to use (s + a)²/s², which makes the steady-state error zero. We leave it to the interested reader. We also encourage the reader to verify in the accompanying CD that the system tracks the parabola input quite excellently in the rising regime of the reference input but poorly in the falling regime. What is the reason? Finally, how do you compare this design with that of Problem 9.16 'if' used for ramp and parabola tracking? Problem 9.18: Reconsider Problem 9.17. Design a parabola-tracking controller for the system such that the steady-state error is less than 5%. We follow the alternative approach of the discussion part of the previous problem here. Thus the controller has a term like C1(s) = (s + 0.5)²/s². With this controller the system C1P is unstable and has the phase −391 deg at the frequency 20. Thus we try four maximal lead controllers at this frequency. With k = 1100 ∈ (682, 1960) the system is stable and has GM = −4.15 dB, ωpc = 12.5, PM = 29.8 deg, ωgc = 31.7, BW = 66.08. The steady-state error is less than 5%, because [1/(2 × 3 × 4)] × 1100 = 45.83 > 20. (The error is 2.68%.) Thus no lag part is needed. If we sharpen the error requirement to, e.g., 1%, then [1/(2 × 3 × 4)] × 1100 × (1 + kg) = 100, or kg = 1.18, and Tg ≈ 10kg/ωgc = 0.37. The response is given in Fig. 9.68. The system has GM = −4.04 dB, ωpc = 14.4, PM = 23.7 deg, ωgc = 32.3, BW = 65.89. Discussion: In this example ωgc > ωpc, which is contrary to the usual situation ωgc < ωpc. If we design the lag term according to Tg ≈ 10kg/ωpc, we obtain Tg = 0.94 and the system will have GM = −3.98 dB, ωpc = 13.3, PM = 27.4 deg, ωgc = 31.8, BW = 65.98. One may prefer this second design. However, the outputs are barely distinguishable from each other; hence the plot is not provided again.
In this example as well, the interested reader is encouraged to try other synthesis approaches, like the lead-stabilizing technique, C1(s) = (s + 0.5)/s², C1(s) = 1/s², C1(s) = (s + 0.5)²/s³, etc. Finally, note that as in Problems 9.6–9.8, if we decrease the input frequency, the performance in the falling regime of the input command is enhanced.


Problem 9.19: Find the frequency at which the lead-lag controller changes its role from lag to lead or vice versa. Let us first present another formulation for our lead, lag, and lead-lag controllers. Rewrite the lag controller Cg(s) = (Tgs + kg + 1)/(Tgs + 1) as Cg(s) = (s + 1/T′g)/(s + β/T′g), where Tg = T′g(kg + 1), β = 1/(kg + 1). Also, rewrite the lead controller Cd(s) = k(αTds + 1)/(Tds + 1) as Cd(s) = k′(s + 1/T′d)/(s + α/T′d), where k′ = kα, T′d = αTd. If we design the controllers such that β = 1/α (< 1), then Cd(s) = k′(s + 1/T′d)/(s + 1/(βT′d)). Thus the lead-lag controller will be C(s) = k′ · [(s + 1/T′d)/(s + 1/(βT′d))] · [(s + 1/T′g)/(s + β/T′g)]. Now, it is easy to show that the turning frequency is ω* = 1/√(T′dT′g). For frequencies less than ω* the controller has a lag effect, and for frequencies beyond ω* it has a lead effect. This frequency is the point where ∠C(jω) = 0, i.e., tan⁻¹(ωT′d) + tan⁻¹(ωT′g) − tan⁻¹(ωT′dβ) − tan⁻¹(ωT′g/β) = 0, which can easily be verified using the identity tan(a + b) = (tan a + tan b)/(1 − tan a tan b). Question 9.14: What if we do not design the controller with β = 1/α? Problem 9.20: Propose a step-tracking controller for the plant P(s) = (s − 1)/[(s + 2)(s + 3)]. We use our intuition from the root locus chapter. The controller is simply C(s) = K(s + 2)(s + 3)/[s(s + a)], where a > 0 and K < 0. (Draw the root locus!) With a = 2 the range of stability versus K is −2 < K < 0. A good response is obtained with K = −0.7. With this value of the gain the system has GM = 2.8571 (9.1185 dB) and PM = 59.47 deg. The step response is given in Fig. 9.69. Discussion: The value of a affects the speed of the response. If it is small, like a = 0.1 or a = 0.5, the system will be slow. On the other hand, note that because

Figure 9.69 Problem 9.20. Step response.

710

Introduction to Linear Control Systems

pole-zero cancellation is done for stable poles and zeros, no problem is caused if it does not take place exactly, except that the GM and PM of the system will be (slightly) different. For instance, if the controller is implemented as C(s) = −0.7(s + 2.2)(s + 2.8)/[s(s + 2.1)], then GM = 2.9365 (9.3566 dB) and PM = 60.22 deg. For this particular case the GM and PM have both improved, but of course there are infinitely many cases for which they degrade. Problem 9.21: Propose a ramp-tracking controller for the plant of Problem 9.20. Again we use our insight from the root locus. The controller is simply C(s) = K(s + 0.1)²/[s²(s + a)²], where we can choose, e.g., a = 2 or a = 4. (Have a look at the root locus; infinitely many choices are available.) With a = 2 we find −26.54 < K < 0. With K = −17 we find GM = 1.56 (3.87 dB) and PM = 89 deg. The answer is depicted in Fig. 9.70. Note that if we increase the frequency of the input, the performance necessarily degrades because the bandwidth is small, BW ≈ 0.16. Discussion: The value of the controller zeros (s = −0.1) affects the speed of the response, and thus the frequency of the reference input which is acceptable for the system. It is possible to improve the response, but the zeros cannot be chosen much larger, since otherwise the shape of the root locus will not be acceptable, i.e., the controller will not stabilize the system. The student is encouraged to try other values for the controller zeros and the frequency of the reference input to get a better feeling for the subject. Problem 9.22: Propose a ramp-tracking controller for the plant P(s) = (s − 1)(s − 2)/[(s + 2)(s + 3)]. Here as well we use our insight from the root locus. The controller is simply C(s) = K(s + 0.1)²/[s²(s + a)²], where we can choose, e.g., a = 2 or a = 4. (Again, have a look at the root locus; infinitely many choices are available.) With a = 2 we find 0 < K < 11.0678. With K = 7 we find GM = 1.58 (3.98 dB) and PM = 77.15 deg. The output is given in Fig. 9.71. The discussion of Problem 9.21 is valid here as well.
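The turning-frequency formula of Problem 9.19 is easy to confirm numerically. With illustrative values T′d = 0.5, T′g = 2 and β = 1/13.93 (these particular numbers are chosen here for the sketch, not taken from the text), the phase of the lead-lag controller vanishes exactly at ω* = 1/√(T′dT′g):

```python
import cmath, math

Td, Tg, beta = 0.5, 2.0, 1.0 / 13.93    # illustrative values with beta = 1/alpha

def C(s):
    lead = (s + 1.0 / Td) / (s + 1.0 / (beta * Td))
    lag = (s + 1.0 / Tg) / (s + beta / Tg)
    return lead * lag

w_star = 1.0 / math.sqrt(Td * Tg)       # predicted turning frequency (= 1 here)

def ang(w):
    return math.degrees(cmath.phase(C(1j * w)))

print(abs(ang(w_star)) < 1e-9)          # True: the controller is neutral at w*
print(ang(0.5) < 0, ang(2.0) > 0)       # True True: lag below w*, lead above
```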

Figure 9.70 Problem 9.21, Left: Tracking performance, Right: Control signal.
Figure 9.71 Problem 9.22. Left: Tracking performance, Right: Control signal.

Problem 9.23: Consider the plant P(s) = (s − 1)/[(s − 2)(s + 3)]. Design a step-tracking controller for the system. We have already worked in Chapter 5, Root Locus, with systems which have both an NMP pole and zero. A possible controller for this system is C(s) = 14(s − 1/14)(s + 3)/[s(s − 9)], as will be detailed in Chapter 10. With this controller the system has multiple GMs and PMs: Gain Margin: [1.1859 0.7952], GM Frequency: [0.4601 2.4622], Phase Margin: [36.7622 −8.1932 44.3833], PM Frequency: [0.0890 1.0714 10.4806]. A Nyquist analysis shows that the smallest GM and PM are the acceptable ones, as they result in a change in the stability condition, i.e., GM = GM1 = 1.18 (1.48 dB) and PM = PM2 = −8.19 deg. The bandwidth is BW = 19.72 rad/s. The step response of this system is shown in Fig. 9.72. As observed, it has a large overshoot and undershoot, and this makes the control of this system extremely difficult in practice, if possible at all. The reason for this poor performance is that the NMP zero of this system is smaller (in magnitude) than its NMP pole. Note that for effective control we not only need z > p, but actually require z ≥ 5p, as will be discussed in Chapter 10. Discussion: The student is encouraged to try his or her own controller on this system. Although the answer (both the characteristics of the step response in the time domain and the values of the stability margins in the frequency domain) depends on the controller, a satisfactory performance cannot be obtained for this system. This is a "fundamental limitation" of this system, as we shall learn in Chapter 10.

Problem 9.24: Consider the plant P(s) = (s − 10)/((s − 2)(s + 3)). Design a step tracking controller for the system.

712

Introduction to Linear Control Systems

Figure 9.72 Problem 9.23, Step response.

Figure 9.73 Problem 9.24. Step response.

A controller which results in a somewhat acceptable performance is C(s) = −9(s + 0.2)(s + 3)/(s(s + 20)). The output is given in Fig. 9.73. The system has
Gain Margin: [0.4603 1.9702]
GM Frequency: [0.7732 11.5644]
Phase Margin: 26.8667
PM Frequency: 4.3651
Both of the computed gain margins are acceptable. The main one is GM = 1.97 ≈ 5.89 dB. The bandwidth is BW = 15.39 rad/s.


Discussion: Compared with the step response of the previous problem, a significant improvement in the performance is observed. Here as well the student is encouraged to try his or her own controller to feel the difficulty of a plant which has both an NMP pole and an NMP zero.

Problem 9.25: Design a stabilizing PID controller for the system P(s) = (s² + 1)/(s² − 1).
The answer is simply a PD controller, which is a lead controller C(s) = k(s + b)/(s + a), b < a. As a numerical example, for k = 1, b = 5, a = 10 the root locus is given in Fig. 9.74, left panel. The system has
Gain Margin: 2.0000
GM Frequency: 0
Phase Margin: [−28.8322 133.6859]
PM Frequency: [0.6356 1.3675]
The system clearly has GM = 6.02 dB. MATLAB is not able to draw the Nyquist plot of the system. We leave it to the reader to investigate which PM is acceptable by analyzing the Nyquist plot which he or she draws independently. The response to initial conditions, which shows the stabilization performance of the system, is found by the command "impulse" and is shown in the right panel of the same figure. With other values of the controller parameters (increasing the pole magnitude with an appropriate gain value) the performance improves. This is left as an easy exercise to the student.

Problem 9.26: Consider the system P(s) = K e^(−sθ)/(s + τ). By direct synthesis, design a controller such that the closed-loop transfer function becomes T(s) = e^(−sθ)/(1 + τ′s). Discuss.
With the controller C(s) the loop gain is K e^(−sθ)C(s)/(s + τ) and the closed-loop transfer function becomes

K e^(−sθ)C(s) / (s + τ + K e^(−sθ)C(s)).

By equating this fraction with the desired closed-loop transfer function we easily find the controller as C(s) = (s + τ)/(K(1 + τ′s − e^(−sθ))). We note that C(s) → ∞ as s → 0, i.e., the controller performs the integral action. Moreover, for θ = 0, C(s) = (s + τ)/(Kτ′s), which is a PI controller. On the other hand, by denoting the


Figure 9.74 Problem 9.25. Left: Root locus, Right: Step response.



input and output of the controller by E(s) and U(s), we get U(s) = (1/K)(s + τ)E(s) − (τ′s − e^(−sθ))U(s). The derivative action, whether on U or E, is evaluated via df(t)/dt ≈ (f(t) − f(t − T))/T. Therefore the control at time t compensates for the effect of the action that has been taken in the time interval (t − θ, t) but whose effect has not yet appeared in the output because of the time delay.

Problem 9.27: Prove the properties of Skogestad's tuning rules given in Table 9.1.
We present the computations for the first-order model with PI control and leave it to the reader to address the case of the integrating process. With the given controller parameters there holds L(s) = C(s)P(s) = e^(−θs)/(2θs). Thus from ∠L(jω_pc) = −180° we find −π/2 − θω_pc = −π and ω_pc = π/(2θ). Therefore GM = 1/|L(jω_pc)| = π ≈ 3.14. On the other hand, from |L(jω)| = 1 we find ω_gc = 1/(2θ) and hence PM = π + ∠L(jω_gc) = π − π/2 − θω_gc = π/2 − 0.5 (rad) ≈ 61.4°. Finally the delay margin is DM = PM/ω_gc = (π − 1)θ ≈ 2.14θ. The problem is extracted from (Skogestad, 2003).
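Both the direct-synthesis identity of Problem 9.26 and the margin computations of Problem 9.27 can be replayed numerically. The Python fragment below is an illustrative check of ours (the parameter values K, τ, τ′, θ are arbitrary, not from the text); GM and PM are independent of the delay, while DM scales with it:

```python
import math, cmath

# --- Problem 9.26: direct synthesis (illustrative parameter values)
K, tau, tau_p, theta = 2.0, 3.0, 1.5, 0.4

def P(s):
    return K * cmath.exp(-s * theta) / (s + tau)

def C(s):
    # controller obtained by direct synthesis
    return (s + tau) / (K * (1 + tau_p * s - cmath.exp(-s * theta)))

def T_desired(s):
    return cmath.exp(-s * theta) / (1 + tau_p * s)

# The closed loop PC/(1 + PC) should equal T_desired at every frequency.
for w in (0.01, 0.1, 1.0, 10.0):
    s = 1j * w
    L = P(s) * C(s)
    assert abs(L / (1 + L) - T_desired(s)) < 1e-12

# --- Problem 9.27: margins of the loop gain L(s) = exp(-theta*s)/(2*theta*s)
th = 0.7  # any delay > 0 gives the same GM and PM

def L_pi(s):
    return cmath.exp(-s * th) / (2 * th * s)

w_pc = math.pi / (2 * th)                      # phase crossover
GM = 1 / abs(L_pi(1j * w_pc))                  # = pi
w_gc = 1 / (2 * th)                            # gain crossover
PM = math.pi + cmath.phase(L_pi(1j * w_gc))    # = pi/2 - 0.5 rad
DM = PM / w_gc                                 # = (pi - 1)*th
print(GM, math.degrees(PM), DM / th)
```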

9.14 Exercises

Over 400 exercises are presented in the form of some collective problems below. Although they are similar to each other in appearance, they are different in nature and details. It is worthwhile trying as many as you find time for. Also note that, as you will learn in Chapter 10, pushing the closed-loop bandwidth much beyond the open-loop bandwidth[14] usually requires excessive control input, i.e., a large control signal. Thus this is not good practice in actual applications (unless we have to do so). However, we do so in the following exercises in order to better feel the difficulty of the design.
Exercise 9.1: This question has several parts. (1) Offer a Fourier analysis for the frequency content of the ramp and parabolic inputs that we use throughout the book. (2) Repeat your analysis for similar signals like the ones offered in Fig. 9.75. (3) Referring to Figs. 9.48, 9.52, 9.54 and the like, what is the reason that the tracking performance in the falling regime of the reference input is poorer than that in the rising regime? (4) Consider an all-pole transfer function. When does the output show an overshoot? Under what condition can we avoid it? (5) What if the transfer function has zeros as well?

r(t) 0

t

0

Figure 9.75 Exercise 9.1, Some typical inputs. 14

An unstable pole is considered stable in this computation.

t


Exercise 9.2: Repeat Problems 9.6–9.9 with a pole-zero cancellation synthesis. For instance, for Problem 9.9 investigate the following. We assume the controller has the part C1(s) = (s + 5)(s + 10)/s³. With this assumption the system C1P has phase −338 deg at ω = 10 rad/s and needs 3 maximal lead controllers at this frequency for the purpose of stabilization. The stabilizing range of the gain is approximately k ∈ (44, 376). With trial and error we find the best value of the gain so as to achieve the required bandwidth of at least 15 rad/s as well as good system features. The answer is about k = 180. Then also try other synthesis choices like C1(s) = (s + 5)(s + 10)(s + 15)²/s⁴.
Exercise 9.3: Propose stabilizing proportional/lead/lag/lead-lag controllers for the following systems with good stability margins:
1. P(s) = 10/((s + 1)(s − 1))
2. P(s) = 10/((s + 1)(s + 10))
3. P(s) = −10(s + 1)/((s + 10)(s − 1))
4. P(s) = −10(s − 1)/((s + 1)(s + 10)²)
5. P(s) = 10(s + 1)/((s − 1)(s − 10))
6. P(s) = 10(s² + 1)/((s + 1)(s + 10)²)
7. P(s) = 10(s² + 1)/((s − 1)(s + 10)²)
8. P(s) = −10(s + 2)/((s − 1)(s² − 9))
9. P(s) = −10(s + 2)/((s + 1)²(s² + 1))
10. P(s) = 10(s − 2)/((s + 1)²(s² + 1))
11. P(s) = −10(s² + 1)/((s − 1)(s − 2))
12. P(s) = 10(s² + 1)/(s² − 1)
13. P(s) = −10(s² − 1)/(s² + 1)
14. P(s) = (s − 1)/(s² + 1)
15. P(s) = (s² − 25)/(s² − 1)
16. P(s) = (s² − 1)/(s² − 25)
17. P(s) = (s² − 25)/((s² − 1)(s + 2))
18. P(s) = 10(s + 2)/((s − 1)(s − 3)(s − 4))
19. P(s) = −10(s² + 1)/((s − 1)(s − 2)(s − 3))
20. P(s) = −10(s² + 4)/((s² + 1)(s − 1))
21. P(s) = 10(s² + 1)/((s² + 4)(s − 1))
22. P(s) = 10(s + 1)²/((s − 1)(s − 2)(s − 3)(s − 4))
23. P(s) = −10/((s² + 1)(s − 1)(s − 2)(s − 3))
24. P(s) = 1/((s − 1)(s − 2)(s − 3)(s − 4)(s − 5))
25. P(s) = 1/((s − 1)(s − 2)(s − 3)(s − 4)(s − 5)(s − 6)).


Exercise 9.4: This problem has two parts. (1) Repeat Exercise 9.3 aiming at BW = 15 rad/s. (2) Consider the NMP plants of Exercise 9.3. Propose controllers which guarantee GM ≥ 4 dB, PM ≥ 45 deg, if possible.
Exercise 9.5: Propose a tracking controller for the above systems so that the controlled system tracks the input r(t) = sin 15t: (1) With the steady-state magnitude 1; (2) With zero or negligible instantaneous steady-state error.
Exercise 9.6: Repeat Exercises 9.3–9.5 using PID controllers instead of lead-lag controllers.
Exercise 9.7: Propose a tracking controller for the systems of Exercise 9.3 such that the steady-state error to a step input is less than 2%, the closed-loop BW = 15 rad/s, and the system has good stability margins and transient performance.
Exercise 9.8: Propose a tracking controller for the systems of Exercise 9.3 such that the steady-state error to a step input is zero, the closed-loop BW = 15 rad/s, and the system has good stability margins and transient performance.
Exercise 9.9: Repeat Exercise 9.7 for a ramp input.
Exercise 9.10: Repeat Exercise 9.8 for a ramp input.
Exercise 9.11: Repeat Exercise 9.7 for a parabola input.
Exercise 9.12: Repeat Exercise 9.8 for a parabola input.
Exercise 9.13: Reconsider Exercises 9.3–9.12. For cases in which the transient response is poor, especially for unstable plants, propose a prefilter to improve the tracking performance. Simulate the output and compare the performance of reference tracking and disturbance rejection. (It is helpful to review the respective parts of Chapter 1, Introduction.)
Exercise 9.14: Repeat Exercises 9.3–9.12 for D-stability and control in the Ω region specified by σ = 1 and δ = 30 deg.
Exercise 9.15: In Exercises 9.3–9.10 investigate the effect of inclusion of the sensor dynamics and delay both in the forward and feedback paths.
Exercise 9.16: Propose some systems whose closed-loop bandwidth significantly differs from the open-loop crossover frequencies.
Exercise 9.17: Draw the Nyquist plot of the PI, PD, and PID controllers.
Exercise 9.18: This problem has two parts. (1) What is the effect of the lead, lag, and lead-lag controllers on a point on the Nyquist plot of a given system? (2) Repeat part (1) for the KMN plot.
Exercise 9.19: Find the frequency where the phase of a lag controller is at its minimum. Does this frequency have any relevance and importance?
Exercise 9.20: Discuss the design of the lag controller in the structure given in Remark 9.6.
Exercise 9.21: What is the effect of the controller C(s) = (as + 1)/(bs + 1), 0 < a < b, in a closed-loop system? Discuss.
Exercise 9.22: What is the effect of the controller C(s) = (s + a)/(s + b), 0 < a < b, in a closed-loop system? Discuss.
Exercise 9.23: What is the frequency of the maximum phase lead of the controller C(s) = k(s + a)/(s + b), 0 < a < b, k > 1? Compute the rate of increase and decrease of the phase before and after this frequency. Note that in certain design techniques the aforementioned rate matters.
Exercise 9.24: Solve the counterpart of Exercise 9.23 for the lag controller.
Exercise 9.25: Discuss the design of the lead-lag controller in the structure given in Worked-out Problem 9.19 with the assumption β = 1/α (< 1).
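As a numerical companion to Exercise 9.23, the phase of a lead controller can simply be scanned on a logarithmic frequency grid. The sketch below uses hypothetical values a = 2, b = 50, k = 3 (our choice, not from the text); the scan locates the peak empirically, and the reader is invited to prove where it must lie:

```python
import math

a, b, k = 2.0, 50.0, 3.0   # example lead parameters, 0 < a < b, k > 1

def phase_deg(w):
    # phase of C(jw) = k (jw + a)/(jw + b); the positive gain k adds no phase
    return math.degrees(math.atan2(w, a) - math.atan2(w, b))

# brute-force search for the peak on a fine log grid from 1e-3 to 1e3 rad/s
ws = [10 ** (e / 1000) for e in range(-3000, 3001)]
w_star = max(ws, key=phase_deg)

print(w_star, math.sqrt(a * b))  # peak sits at the geometric mean of a and b
```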


Exercise 9.26: Similar to Example 9.9, design a controller for the following systems such that the phase margin is at least 40 deg and the BW is at least 30 rad/s. Compare the designs and choose the best one.
1. P(s) = 1/(s²(s + 1)(s + 10))
2. P(s) = (s − 1)/(s²(s + 1)(s + 10))
3. P(s) = 1/(s³(s + 1)(s + 10))
4. P(s) = (s − 1)/(s³(s + 1)(s + 10))

Exercise 9.27: Propose systems with: (1) A small PM and acceptable/large GM, and (2) A small GM and acceptable/large PM. Then propose controllers for them for stabilization/tracking objectives with good system features.
Exercise 9.28: This problem has two parts. (1) Consider the two panels of Figs. 9.10 and 9.17. Are both cases possible? Discuss the properties of the system whose controller has either form. Exemplify a system for each case if existent. (2) Considering Fig. 9.18, discuss the correctness of the respective expectation.
Exercise 9.29: Find the frequency at which the ideal PID controller changes its role from PI to PD or vice versa.
Exercise 9.30: In order to increase the performance in multi-lead design, we found the proper value of the gain by trial and error on the gain value in the stabilizing range. Derive the formula for the gain computation such that the maximum phase contribution occurs at the chosen frequency. (Note that what we did in the text is correct for single lead design. Follow the same procedure for multi-lead design and take heed of Part (6b) in Problem 9.2.)
Exercise 9.31: By direct analysis propose step-tracking PID controllers for the following systems in the negative unity feedback structure:
1. P(s) = e^(−0.3s)/(s(s + 10))
2. P(s) = e^(−0.3s)(s + 1)/(s(s − 10))
3. P(s) = e^(−0.4s)(s + 1)/(s + 4)³
4. P(s) = e^(−0.2s)(s − 1)/(s(s² + s + 1))
5. P(s) = e^(−0.4s)/(s − 1)
6. P(s) = e^(−0.4s)/(s²(s + 1))
7. P(s) = e^(−0.2s)(s − 1)/(s(s + 10)).

Exercise 9.32: Nonlinear delay models for internal combustion engines are available in various references in the literature. Such a model is considered in (Bavafa-Toosi, 1999) and is given by the following equations. In these equations N is the speed (rpm) and P is the pressure (kPa), which are the outputs of the system. θ and δ are the gas and preignition angles and are the inputs to the system. Td is a step disturbance between 0 and ±1 Nm. kP and kN are uncertain coefficients in the given ranges. τ is the uncertain delay in the given range.

Ṗ = kP(ṁ_ai − ṁ_ao);  G(P) = 1 for P < 50.6625, and G(P) = 0.0197(101.325P − P²)^0.5 for P ≥ 50.6625
Ṅ = kN(T_i − T_l)
ṁ_ao = −0.0005968N − 0.1336P + 0.0005341NP + 0.000001757NP²;  m_ao = ṁ_ao(t − τ)


ṁ_ai = (1 + 0.907θ + 0.099θ²)G(P)
T_i = −39.22 + 325024 m_ao − 0.0112δ² + 0.635δ − 0.000102N²(2π/60)² + (2π/60)(0.0216 + 0.000675δ)N
T_l = (N/263.17)² + T_d
5 ≤ θ ≤ 35, 10 ≤ δ ≤ 45, 6 ≤ kP ≤ 43, 13 ≤ kN ≤ 95, 11/N ≤ τ ≤ 80/N
This plant is robustly controlled for both regulation and tracking objectives by PID control in the aforementioned reference. Propose a PID controller for this system.
Exercise 9.33: Provide further details for Remark 9.17.
Exercise 9.34: This exercise has several parts. (1) Consider the PID controller C(s) = K(1 + 1/(T_i s) + T_d s). Show that if T_i ≥ 4T_d then the controller has real zeros and can be implemented in the cascade PID structure (PI·PD) given by C(s) = K(1 + 1/(αT_i s))(α + T_d s), where α = (1 ± √(1 − 4T_d/T_i))/2 > 0. Note that this means that the transformation between the parallel and series formulations given in Section 9.3 is not bidirectional. Some questions that arise are thus: (2) Which one is more general? (3) Which one should be preferred to the other? (4) Which one is more pervasively used in industry, and why? Note that the answers to the last two parts may not be (and indeed are not) the same, due to practical limitations of implementation in the mid 20th century and advances in technology and in our understanding of the theory afterwards. The problem is adopted from (Bavafa-Toosi, 1999).
Exercise 9.35: Consider Example 9.17 and the observation we made in the third case of its solution. Prove that this is true for every system.
Exercise 9.36: This problem is about the IMC structure and has three parts. (1) Consider a plant which has NMP zeros and/or a delay term. Propose a design procedure for the controller design. (2) Provide a detailed analysis of the system type. (3) Provide a detailed analysis of the superiority of the IMC structure over the conventional 1-DOF control structure in the case of actuator limitations.
The problem is answered in (Morari and Zafiriou, 1989).
Exercise 9.37: Consider a type-1 time-delay plant. Propose a Smith-type controller for it. Note that a solution is given in (Astrom et al., 1994).
Exercise 9.38: This exercise is for readers of electrical engineering. Propose alternative and perhaps better circuit designs for the implementation of controller structures via operational amplifiers. For instance, analyze the circuits in Fig. 9.76.
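Exercises 9.32 and 9.37 both involve time-delay dynamics, and simulating a delay term such as ṁ_ao(t − τ) requires storing past values in a history buffer. Below is a minimal ring-buffer forward-Euler sketch of ours, demonstrated on the toy equation x′(t) = −x(t − τ) rather than the engine model itself (which the reader should assemble analogously):

```python
# Forward-Euler simulation of a delayed state, using a ring buffer of
# length tau/dt to hold the past trajectory.
dt, tau, t_end = 0.001, 0.1, 5.0
delay_steps = int(round(tau / dt))
buf = [1.0] * delay_steps      # history: x(t) = 1 for t <= 0
x = 1.0
steps = int(round(t_end / dt))
for i in range(steps):
    x_delayed = buf[i % delay_steps]   # oldest stored value = x(t - tau)
    buf[i % delay_steps] = x           # overwrite with the current value
    x = x + dt * (-x_delayed)          # Euler step of x' = -x(t - tau)
print(x)  # decays toward 0: this delayed feedback is stable for small tau
```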


Figure 9.76 Exercise 9.38.



Exercise 9.39: Provide examples for the cases (1) ω_gc exists but ω_pc does not, (2) ω_pc exists but ω_gc does not, (3) ω_gc > ω_pc, (4) ω_gc < ω_pc, (5) ω_gc ≫ ω_pc, (6) ω_gc ≪ ω_pc. Which class is the most difficult to control? Discuss.
Exercise 9.40: Consider Section 9.5 and the effect of a lead controller as in the lower panel of Fig. 9.22. Explain and characterize, if possible, this situation in more detail. For instance, when does/may it take place? Is there any lower/upper bound on the order and relative degree of a system for which this does/may happen? Etc.
Exercise 9.41: Referring to Problems 9.21 and 9.22, how can we have an estimate of the settling time (for such slow NMP systems)?
Exercise 9.42: Reconsider Problem 6.24 of Chapter 6. We are given a standard 1-DOF control system with its GM/PM/DM. Assume that we want to include a prefilter to shape its response to step setpoints. How can we design, if possible, the prefilter so that additionally the GM/PM/DM of the system are altered as we desire?

References

Astrom, K.J., Hagglund, T., 2006. Advanced PID Control. ISA, Elsevier, Research Triangle Park, NC.
Astrom, K.J., Hang, C.C., Lim, B.C., 1994. A new Smith predictor for controlling a process with an integrator and long dead-time. IEEE Trans. Autom. Control 39 (2), 343–345.
Badri, V., Tavazoei, M.S., 2016. Some analytical results on tuning fractional-order [proportional-integral] controllers for fractional-order systems. IEEE Trans. Contr. Syst. Tech. 24 (3), 1059–1066.
Bavafa-Toosi, Y., 1999. PID controllers: theory and design. MS Seminar, Department of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran.
Bavafa-Toosi, Y., 2000. Eigenstructure assignment by output feedback. MEng Thesis (Control), Department of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran.
Bavafa-Toosi, Y., Ohmori, H., Labibi, B., 2005. Failure-tolerant performance stabilization of the generic large-scale system by decentralized linear output feedback. ISA Trans. 44 (4), 501–513.
Bavafa-Toosi, Y., Ohmori, H., Labibi, B., 2006. A generic approach to the design of decentralized linear output-feedback controllers. Syst. Control Lett. 55, 282–292.
Bode, H.W., 1945. Network Analysis and Feedback Amplifier Design. Van Nostrand, New York.
Boyd, S., Hast, M., Astrom, K., 2016. MIMO PID tuning via iterated LMI restriction. Int. J. Robust Nonlinear Control 26, 1718–1731.
Brown, G.S., Campbell, D.P., 1948. Principles of Servomechanisms. Wiley, New York.
Cai, L., Rad, A.B., Chan, W.L., Cai, K.Y., 2003. A robust fuzzy PD controller for automatic steering control of autonomous vehicles. In: Proceedings of the IEEE Conf. Fuzzy Systems, St. Louis, MO, USA, pp. 549–554.
Chan, P.T., Rad, A.B., Tsang, K.M., 2002. Optimization of fused fuzzy systems via genetic algorithms. IEEE Trans. Ind. Electron. 49 (3), 685–692.
Chan, Y.F., Moallem, M., Wang, W., 2007. Design and implementation of modular FPGA-based PID controllers. IEEE Trans. Ind. Electron. 54 (4), 1898–1906.
Chen, D., Seborg, D.E., 2003. Design of decentralized PI control systems based on Nyquist stability analysis. J. Process Control 13 (1), 27–39.


Chestnut, H., Mayer, R.W., 1959. Servomechanisms and Regulating System Design, vol. 1, second ed. Wiley, New York.
Dincel, E., Soylemez, M.T., 2014. Guaranteed dominant pole placement with discrete-PID controllers: a modified Nyquist plot approach. IFAC Proc. 47 (3), 3122–3127.
Dvijotham, K., Todorov, E., Fazel, M., 2015. Convex structured controller design in finite horizon. IEEE Trans. Control Network Syst. 2 (1), 1–10.
Fang, B., 2010. Computation of stabilizing PID gain regions based on the inverse Nyquist plot. J. Process Control 20 (10), 1183–1187.
Firouzbahrami, M., Nobakhti, A., 2016. Reliable computation of PID gain space for general second-order time-delay systems. Int. J. Control (available online), http://dx.doi.org/10.1080/00207179.2016.1236295.
Gardner, M.F., Barnes, J.L., 1942. Transients in Linear Systems. Wiley, New York.
Garpinger, O., Hagglund, T., Astrom, K.J., 2014. Performance and robustness tradeoffs in PID control. J. Process Control 24 (5), 568–577.
Grimholt, C., Skogestad, S., December 2013. Optimal PID-control on first order plus time delay systems & verification of the SIMC rules. In: Preprints of the 10th IFAC International Symposium on Dynamics and Control of Process Systems, Mumbai, India, pp. 265–270.
Hall, A.C., 1943. The Analysis and Synthesis of Linear Servomechanisms. MIT Technology Press, Cambridge, MA.
Hast, M., Astrom, K.J., Bernhardsson, B., Boyd, S., July 2013. PID design by convex-concave optimization. In: Proceedings of the European Control Conference, Zurich, Switzerland, pp. 4460–4465.
Ivezic, D.D., Petrovic, T.B., 2003. New approach to milling circuit control—robust inverse Nyquist array design. Int. J. Miner. Proc. 70 (1–4), 171–182.
James, H.M., Nichols, N.B., Phillips, R.S., 1947. Theory of Servomechanisms. McGraw-Hill, New York.
Jin, Q.B., Liu, Q., Huang, B., 2016. New results on the robust stability of PID controllers with gain and phase margins for UFOPTD processes. ISA Trans. 61, 240–250.
Li, M., Zhou, P., Zhao, Z., Zhang, J., 2016. Two-degree-of-freedom fractional-order PID controllers design for fractional order processes with dead-time. ISA Trans. 61, 147–154.
Liu, J., Wang, H., Zhang, Y., 2015. New result on PID controller design of LTI systems via dominant eigenvalue assignment. Automatica 62, 93–97.
Morari, M., Zafiriou, E., 1989. Robust Process Control. Prentice Hall, Englewood Cliffs, NJ.
Mouayad, A.S., 2015. A novel optimal PID plus second order derivative controller for AVR system. Int. J. Eng. Sci. Technol. 18 (2), 194–206.
Nikita, S., Chidambaram, M., 2016. Tuning of PID controllers for time delay unstable systems with two unstable poles. IFAC-Pap. Online 49 (1), 801–806.
Porter, B., Khaki-Sedigh, A., 1989. Robustness of tunable digital set point tracking PID controllers for linear multivariable plants. Int. J. Control 49 (3), 777–789.
Ramirez, A., Mondie, S., Garrido, R., Sipahi, R., 2016. Design of proportional-integral-retarded (PIR) controllers for second-order LTI systems. IEEE Trans. Autom. Control 61 (6), 1688–1695.
Rivera, D.E., Morari, M., Skogestad, S., 1986. Internal model control. 4. PID controller design. Ind. Eng. Chem. Process Des. Dev. 25, 252–265.
Shariati, A., Taghirad, H.D., Fatehi, A., 2014. A neutral system approach to H∞ PD/PI controller design of processes with uncertain input delay. J. Process Control 24 (6), 144–157.


Shiota, T., Ohmori, H., 2012. Design of adaptive I-PD control systems using augmented error method. In: IFAC Proceedings Volumes (IFAC-PapersOnline), vol. 2, 53–57.
Shiota, T., Ohmori, H., 2013. Design of adaptive I-PD control systems using delta operator based on partial model matching. In: IFAC Proceedings Volumes (IFAC-PapersOnline), vol. 11, 518–522.
Skogestad, S., 2003. Simple analytic rules for model reduction and PID controller design. J. Process Control 13, 291–309.
Su, Y.X., Sun, D., Duan, B.Y., 2005. Design of an enhanced nonlinear PID controller. Mechatronics 15, 1005–1024.
Tamura, K., Ohmori, H., 2007. Auto-tuning method of expanded PID control for MIMO systems. In: IFAC Proceedings Volumes, vol. 9, 98–103.
Tang, H., Li, Y., 2015. Feedforward nonlinear PID control of a novel micromanipulator using Preisach hysteresis compensator. Rob. Comput. Integr. Manuf. 34, 124–132.
Tavakoli, S., Banookh, A., 2010. Robust PI control design using particle swarm optimization. J. Comput. Sci. Eng. 1 (1), 36–41.
Tempo, R., Dabbene, F., Calafiore, G., 2013. Randomized Algorithms for Analysis and Control of Uncertain Systems, second ed. Springer, London.
Theorin, A., Hagglund, T., 2015. Derivative backoff: the other saturation problem for PID controllers. J. Process Control 33, 155–160.
Tyreus, B.D., Luyben, W.L., 1992. Tuning PI controllers for integrator/dead time processes. Ind. Eng. Chem. Res. 31, 2628–2631.
Wan, Y., Roy, S., Saberi, A., Stoorvogel, A., 2010. The design of multi-lead-compensators for stabilization and pole placement in double-integrator networks. IEEE Trans. Autom. Control 55 (12), 2870–2875.
Yu, C.-C., 2006. Autotuning of PID Controllers: A Relay Feedback Approach. Springer, London.
Zanasi, R., Cuoghi, S., 2012. Analytical and graphical design of PID compensators on the Nyquist plane. IFAC Proc. 45 (3), 524–529.
Zanasi, R., Cuoghi, S., Ntogramatzidis, L., 2011. Analytical design of lead-lag compensators on Nyquist and Nichols planes. IFAC Proc. 44 (1), 7666–7671.
Zarre, A., Vahidian Kamyad, A., Khaki-Sedigh, A., 2002. Application of measure theory in the design of multivariable PID controllers. Int. J. Eng. Sci. 13 (1), 162–176 (in Persian).

10 Fundamental limitations

10.1 Introduction

The materials we discuss in this final chapter as the closing notes are fundamental limitations in SISO LTI systems, with which the student should be familiar at least at an introductory level. Part of them applies to MIMO systems as well. These issues are important topics which govern the interrelations between different objectives and features in a control system and represent their inherent restrictions or fundamental limitations. The importance of studying the fundamental limitations is multifold. The foremost is avoiding hazardous problems like the 1986 Chernobyl nuclear reactor explosion, or other hazards of lesser severity. Besides, we can understand what is achievable and what is not, and this helps save lots of vain endeavors to fulfill a design objective that is not possible. It also gives us a better insight into the problem, which in turn will be fruitful in various ways. Fundamental limitations are still being investigated in the research literature in the 21st century, although some of them date back to the mid-20th century. As such, the recent developments are usually in sophisticated contexts (like advanced state-space representation, graph theory, and convex optimization techniques) which cannot be presented in this first undergraduate course. We will cite some of the recent results in Further Readings at the end of the chapter, so that the interested reader can follow them at the right time. We discuss the basics of fundamental limitations in this chapter. Some are discussed with more formulation details and some with less. In either case the proofs can be found in the cited references.
These issues can be listed as: the relation between the time and frequency domain constraints, the ideal transfer function, controller design via the TS method, interpolation conditions, integral and Poisson integral constraints, constraints implied by poles and zeros, actuator and sensor limitations, delay, eigenstructure assignment, noninteractive performance, minimal eigenvalue sensitivity, robust stabilization, and positive systems. They are addressed in Sections 10.2–10.14. Full treatment of these topics in a chapter is impossible and actually needs a specialized book. The treatment of these materials is thus somewhat concise, but fully expounded with numerous examples and worked-out problems. In Section 10.15 we present a design procedure for controller design by output feedback, subject to the aforementioned limitations. The chapter closes with the summary, further readings, worked-out problems, and exercises in the remaining sections. The contribution of this chapter is multi-faceted and can be summarized as: assembling a collection of various topics which are scattered in the research literature, providing contributions in several sections, and presenting a coherent picture of the whole subject of control system design. With regard to the last item we should stress that the topics that we mention in Further Readings are essential for understanding of the whole picture. The reader will probably find some of them bizarre, but in fact those items are the points which will be problematic for you if you are not aware of them! We should also mention that the problem statement and formulation in Section 10.15 is more general than any other design procedure that can be found in the literature, including the research forums. We wrap up the introduction by stating that the limitations which are imposed by the structure of the system, like those due to the delay and its poles and zeros, are also called structural limitations.

Introduction to Linear Control Systems. DOI: http://dx.doi.org/10.1016/B978-0-12-812748-3.00010-0
© 2017 Elsevier Inc. All rights reserved.

10.2 Relation between time and frequency domain specifications

Upper bounds on Smax := max_ω |S| and Tmax := max_ω |T|[1] have been considered in the context of the KMN chart. Here we add some more notes on them.
1. Verifying the sensitivity function of different systems, we observe that for some systems[2] in the mid frequency range there is an interval where the control degrades the performance, since it increases (and does not decrease) the sensitivity magnitude. Denoting the peak value of S by Smax, for stability and performance considerations we thus require that Smax ≈ 1, i.e., not much larger than 1.
2. The relation between the GM, PM, S, and T is as follows. Notice that this can easily be proven by analyzing Fig. 6.45 of Chapter 6, Nyquist Plot.
For a given Smax: GM ≥ Smax/(Smax − 1), PM ≥ 2 sin⁻¹(1/(2Smax)) ≥ 1/Smax [rad].
For a given Tmax: GM ≥ 1 + 1/Tmax, PM ≥ 2 sin⁻¹(1/(2Tmax)) ≥ 1/Tmax [rad].
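These bounds are straightforward to evaluate numerically. The small Python helper below (function names are ours, for illustration only) encodes them directly:

```python
import math

def margins_from_Smax(S_max):
    # GM >= S_max/(S_max - 1),  PM >= 2 asin(1/(2 S_max)) >= 1/S_max [rad]
    gm = S_max / (S_max - 1)
    pm = 2 * math.asin(1 / (2 * S_max))
    return gm, pm

def margins_from_Tmax(T_max):
    # GM >= 1 + 1/T_max,  PM >= 2 asin(1/(2 T_max)) >= 1/T_max [rad]
    gm = 1 + 1 / T_max
    pm = 2 * math.asin(1 / (2 * T_max))
    return gm, pm

gm, pm = margins_from_Smax(2)
print(gm, math.degrees(pm))   # GM >= 2 and PM of about 29 degrees
```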

Example 10.1: Requiring Smax ≤ 2 translates to GM ≥ 2 and PM ≥ 29° ≈ 30°.
3. Empirical verification suggests that there are relations between time and frequency domain specifications; however, they are hard to prove in a general setting. A proven relation is between the maximum overshoot (time domain) and the maximum complementary sensitivity (frequency domain). We first note that the maximum overshoot can be alternatively assessed by the total variation, whose definition is as follows. Let the step response of the system be given by Fig. 10.1. Then we define the Total Variation (TV) of the output as TV = Σᵢ vᵢ. It is clear that the smaller the TV, the higher the performance of the system. Note that TV can be computed from TV = ∫₀^∞ |h(τ)| dτ = ||h(t)||₁, in which h(t) is the impulse response of the system. For the generic system (i.e., for almost all systems) of order n it can be numerically verified that Tmax ≤ TV ≤ (2n + 1)Tmax. This condition is not really informative and cannot be considered as a fundamental limitation; however, the idea leads us to some conditions in this perspective, as we shall shortly discuss in Question 10.1.

[1] Some others use M_S and M_T. For the sake of ease of future studies we mention both.
[2] As will be studied in Section 10.6.



Figure 10.1 Definition of Total Variation. Adopted from S. Boyd and C. Barratt, Linear Controller Design  Limits of Performance, Prentice Hall, 1991, with permission.

Example 10.2: For the standard second-order system, TV depends on ζ. It holds:

ζ      Mp (%)   TV     Tmax   Smax
≥1     0        1      1      ≤1.15
0.8    2        1.03   1      1.22
0.6    9        1.2    1.04   1.35
0.1    73       6.4    5.03   5.12
0.01   97       63     50     50

The above Example 10.2 shows that for most second-order systems (with complex as opposed to real poles) it is not possible to have a good performance in terms of TV. If ζ < 0.7, the TV will be poor. However, we know that with a controller we can adjust it. From Chapter 4 we know that this translates to a condition on σ, ζ, ωn, k (with obvious meanings, in connection with some other design specifications) and can be interpreted as a fundamental limitation. This observation gives rise to the following important question.
Question 10.1: Take a generic LTI plant. (1) What TV is achievable for the closed-loop system by LTI control in the standard 1-DOF structure? (2) Repeat the question for the standard 2-DOF structure with a prefilter. (3) Characterize or parameterize the plants for which a desirable TV like Tmax ≤ TV ≤ 1.1Tmax is achievable. (4) Take a stable transfer function whose DC gain is one and which is subject to a unit step reference input, say H = (b₁s + b₀)/(a₃s³ + a₂s² + a₁s + a₀), a₀ = b₀. (We do not wish to use any controller on it now, but may have done so before.) What is its TV? (5) Given a transfer function of fixed numerator and denominator degree, like the one mentioned above, parametrize all such transfer functions for which TV is as desired, say Tmax ≤ TV ≤ 1.1Tmax. (6) Given an interval transfer function like the one mentioned above, in which every coefficient belongs to an interval, find the range of its TV. (From the analytical point of view this question is open in its full generality. However, as we shall see in future sections, in certain restricted cases we can easily solve it. See also Exercises 10.75, 10.76.)
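TV can be estimated directly from the definition TV = ∫|h|. The Python sketch below (our code, not the book's) reproduces the ζ = 0.6 row of Example 10.2 and compares it with the closed form TV = (1 + q)/(1 − q), q = e^(−πζ/√(1 − ζ²)), which follows from summing the geometrically decaying swings of the underdamped step response:

```python
import math

def total_variation(zeta, wn=1.0, t_end=40.0, dt=0.0005):
    """TV = integral of |h(t)| for the standard second-order system with
    h(t) = (wn/sqrt(1-zeta^2)) exp(-zeta*wn*t) sin(wd*t), 0 < zeta < 1."""
    wd = wn * math.sqrt(1 - zeta ** 2)
    c = wn / math.sqrt(1 - zeta ** 2)
    n = int(round(t_end / dt))
    total, prev = 0.0, 0.0
    for i in range(1, n + 1):            # trapezoidal rule on |h|
        t = i * dt
        h = abs(c * math.exp(-zeta * wn * t) * math.sin(wd * t))
        total += 0.5 * (prev + h) * dt
        prev = h
    return total

q = math.exp(-math.pi * 0.6 / math.sqrt(1 - 0.36))
print(total_variation(0.6), (1 + q) / (1 - q))   # both about 1.21
```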


Introduction to Linear Control Systems

Several results can be proven regarding the interrelations between the time- and frequency-domain constraints. It is instructive to postpone them to Sections 10.6 and 10.7 and discuss some other topics in the next Sections 10.3–10.5, which directly use the concepts of this Section 10.2 on the one hand, and provide the preliminaries for Sections 10.6 and 10.7 on the other hand.

10.3 The ideal transfer function

In step-tracking control systems the objective of tracking translates, in general, to the ideal condition of |T| = 1 in the bandwidth and |T| = 0 outside the bandwidth. (Recall that some details were discussed in Section 4.13.2. Moreover, we shall shortly see some further details in this chapter.) Thus, because T + S = 1, the ideal transfer function and sensitivity function for low-pass systems look like Fig. 10.2. However, such sharp functions are not realizable if the order of the system (i.e., the controller) is finite, which is always the case. In fact, recall that we have already worked with many transfer functions. None of them reduces to zero so sharply (with a slope of -∞ db/dec). The slope is the relative degree of the system times -20 db/dec. The interesting questions that cross the mind are thus: what transfer functions are achievable, and what transfer functions best approximate the ideal transfer function? The answer to the first question is also tied to the given plant. Not every transfer function is achievable for every plant. We will comment more on this question in the ensuing sections. We answer the second question in this part. To answer it we should choose a criterion for approximation. If the criterion is chosen as the "maximally flat magnitude" transfer function, then Butterworth filters are the answer. Butterworth filters have the flattest magnitude among all transfer functions of a given order. They were proposed by S. Butterworth in 1930 in the context of electronic amplifier design. The original paper presents a special design method which, although correct, is not theoretically appealing. We discuss it in detail in a modern fashion. The Butterworth filter of general order n has the pole distribution of Fig. 10.3, in which φn = π/n, and its magnitude is given by

|Bn(jω)| = 1/√(1 + (ω/ωc)^(2n)).

The bandwidth is ωc (since 20·log(1/√2) ≈ -3 db) and the roll-off rate is -20n db/dec. The high-pass filters are given by 1 - Bi(s). Hence, for instance, the first three Butterworth filters of the low-pass type with DC gain one are

B1(s) = ωc/(s + ωc),  B2(s) = ωc²/(s² + √2·ωc·s + ωc²),  B3(s) = ωc³/[(s + ωc)(s² + ωc·s + ωc²)].

Figure 10.2 Ideal transfer function (T) and sensitivity function (S) Left: Magnitude in plain numbers, Right: Log magnitude.
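The -3 db bandwidth and the -20n db/dec roll-off can be verified with scipy's analog Butterworth design; a sketch of ours (the order and cutoff are arbitrary choices):

```python
import numpy as np
from scipy import signal

n, wc = 3, 1.0
# Analog (s-domain) low-pass Butterworth filter of order n, cutoff wc
b, a = signal.butter(n, wc, btype='low', analog=True)

# Gain at the cutoff: |Bn(j*wc)| = 1/sqrt(2), i.e., about -3 db
_, h = signal.freqs(b, a, worN=[wc])
print(20 * np.log10(abs(h[0])))

# Roll-off: the gain drops by about 20*n db per decade well above wc
_, h2 = signal.freqs(b, a, worN=[10 * wc, 100 * wc])
print(20 * np.log10(abs(h2[0]) / abs(h2[1])))
```

The second print gives the db drop over one decade (from 10·ωc to 100·ωc), which is close to 20n for any order n.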

Fundamental limitations



Figure 10.3 Roots of the Butterworth filter.

Example 10.3: The log magnitudes of the first three low-pass Butterworth filters for ωc = 1 are plotted in Fig. 10.4, left panel. In the right panel the magnitudes of B3(s) = 1/(s³ + 2s² + 2s + 1), B3a(s) = 1/(s³ + 5s² + 2s + 1), and B3b(s) = 1/(s³ + 2s² + 5s + 1) are compared with each other. In the left panel we observe that by increasing the order of the filter the system better approximates the ideal transfer function: it becomes flatter in the bandwidth and rolls off more sharply at the bandwidth frequency. In the right panel we observe that B3 approximates the ideal transfer function better than B3a and B3b, which are not Butterworth filters.

Figure 10.4 Example 10.3. Comparison of filters.


Example 10.4: Among the other filters which have been extensively studied and have certain applications in other fields (especially signal processing) are the Chebyshev and elliptic filters. The magnitudes of the Butterworth, Chebyshev type I, Chebyshev type II, and elliptic filters for n = 4 and ωc = 1 are plotted in Fig. 10.5. Apart from the ripples, notice that the Butterworth filter rolls off around the corner frequency more slowly than these filters.


Figure 10.5 Example 10.4, Comparison of filters. Top left: Butterworth, Top right: Chebyshev I, Bottom left: Chebyshev II, Bottom right: Elliptic

Note that the magnitudes of the Chebyshev type I, Chebyshev type II, and elliptic filters are respectively given by

G = |CnI(ω)| = 1/√(1 + ε²·Tn²(ω/ωc)),
G = |CnII(ω)| = 1/√(1 + 1/[ε²·Tn²(ωc/ω)]),
G = |En(ω)| = 1/√(1 + ε²·Rn²(ξ, ω/ωc)),

in which Tn and Rn are the Chebyshev and elliptic rational functions of order n, ε is the ripple factor, and ξ is the selectivity factor.

Example 10.5: Derive the formula for the nth-order Butterworth filter. For the pole distribution of Fig. 10.3 we have

Bn(s) = K / Π_{k=1}^{n} (s - sk), where sk = ωc·e^{j(2k+n-1)π/(2n)}

is a pole and ωc is the cut-off frequency. Note that the conjugate pairs are s1 and sn, s2 and s_{n-1}, etc. On the other hand the DC gain is K/ωc^n; if the DC gain is 1 then K = ωc^n. The polynomial Π_{k=1}^{n} ((s - sk)/ωc) is called the Butterworth polynomial. For ωc = 1 the Butterworth polynomials are normalized and the first few of them are given in Table 10.1. (Continued)


(cont'd) Note that if ωc ≠ 1, ωc appears in an obvious manner in these polynomials. For instance B5 = ωc⁵/[(s + ωc)(s² + 0.6180·ωc·s + ωc²)(s² + 1.6180·ωc·s + ωc²)].

Table 10.1 Butterworth polynomials

n    Butterworth polynomial
1    s + 1
2    s² + 1.4142s + 1
3    (s + 1)(s² + s + 1)
4    (s² + 0.7654s + 1)(s² + 1.8478s + 1)
5    (s + 1)(s² + 0.6180s + 1)(s² + 1.6180s + 1)
6    (s² + 0.5176s + 1)(s² + 1.4142s + 1)(s² + 1.9319s + 1)
7    (s + 1)(s² + 0.4450s + 1)(s² + 1.2470s + 1)(s² + 1.8019s + 1)
8    (s² + 0.3902s + 1)(s² + 1.1111s + 1)(s² + 1.6629s + 1)(s² + 1.9619s + 1)
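The entries of Table 10.1 follow directly from the pole formula sk = ωc·e^{j(2k+n-1)π/(2n)}. A quick numerical sketch of ours (helper name is our choice):

```python
import numpy as np

def butterworth_poly(n, wc=1.0):
    """Coefficients of prod_{k=1}^{n}(s - s_k) for the Butterworth poles
    s_k = wc * exp(j*(2k + n - 1)*pi/(2n)), k = 1..n."""
    k = np.arange(1, n + 1)
    poles = wc * np.exp(1j * (2 * k + n - 1) * np.pi / (2 * n))
    # conjugate pairs make the product real; drop the residual imaginary part
    return np.real(np.poly(poles))

for n in (2, 3, 4):
    print(n, np.round(butterworth_poly(n), 4))
```

Expanding the factored forms of Table 10.1 (e.g., (s + 1)(s² + s + 1) = s³ + 2s² + 2s + 1 for n = 3) reproduces these coefficient vectors.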

Example 10.6: In this example we investigate the flatness of the filter. Assuming unit DC gain and denoting the magnitude (gain) of the filter by G, one has G = |Bn| = 1/√(1 + (ω/ωc)^(2n)). Hence (for ωc = 1), dG/dω = -n·G³·ω^(2n-1) < 0 since G is positive; the gain is thus monotonically decreasing and the magnitude of the filter has no ripple. On the other hand the expansion of G is G = 1 - (1/2)ω^(2n) + (3/8)ω^(4n) - ⋯. That is, all the derivatives of the gain up to but not including the 2n-th order are zero at ω = 0, resulting in "maximal flatness". The interesting point is that if the requirement of monotonicity is restricted to the passband alone and not the stopband (allowing ripples in the stopband) then it is possible to design a filter of the same order (like the inverse Chebyshev filter) which is flatter in the passband than the maximally flat Butterworth filter.

So far we have seen that Butterworth filters best approximate the ideal transfer function and thus provide the best sensitivity properties in the passband and stopband. But how about their tracking performance? As we shall see, unfortunately their step response (except for n = 1) is poor, and it gets worse as the order of the filter increases.


Example 10.7: The step response of the Butterworth filter Bn for n = 1, 2, 4, 6 is plotted in Fig. 10.6.

Figure 10.6 Poor step response of Butterworth filters.

It is good to know that in the realm of filtering in electronics the oscillations in the step response are called ringing. Butterworth filters show considerable ringing.
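The worsening ringing can be quantified by the step-response overshoot. A sketch of ours (the helper name and horizon are our choices):

```python
import numpy as np
from scipy import signal

def butter_overshoot(n, wc=1.0):
    """Percent overshoot of the unit-step response of the
    order-n analog Butterworth low-pass filter."""
    b, a = signal.butter(n, wc, btype='low', analog=True)
    t = np.linspace(0, 80 / wc, 8000)
    _, y = signal.step((b, a), T=t)
    return 100 * (np.max(y) - 1.0)

for n in (1, 2, 4, 6):
    print(n, round(butter_overshoot(n), 1))
```

The first-order filter is overshoot-free; from n = 2 on, the overshoot grows monotonically with the order, which is the ringing visible in Fig. 10.6.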

Remark 10.1: Stated in other words, the above findings constitute a fundamental limitation in feedback systems: the sensitivity properties of closed-loop control systems and their step-tracking performance are not consistent with each other and there is the need for a trade-off. Our knowledge from previous chapters suggests that the logistic function is a good choice for the setpoint tracking performance: Ln(s) = (ωn/(s + ωn))^n. Note that the logistic function of order n has a real pole of multiplicity n. On the other hand it is possible to construct transfer functions of order n with simple (in contrast to multiple) real poles with more or less the same or even faster (but without overshoot) tracking performance. The question that arises is "How do they compare with each other?" From one standpoint the answer is that a system with simple poles has better robustness properties than a system with multiple poles (in the face of unstructured perturbations). However, for simplicity of computations in this first course we work with the logistic function, which has multiple poles, in examples like Examples 10.10 and 10.11. We summarize the above findings in the subsequent remark.


Remark 10.2: The main advantages of Butterworth filters can be summarized as follows: (1) maximally flat amplitude; (2) monotonic amplitude response both in the passband and stopband; (3) quick roll-off around the cutoff frequency, which improves with increasing order; (4) only slightly nonlinear phase response. The main disadvantage of Butterworth filters is that the step response has overshoot and is oscillatory, and these flaws worsen with increasing order of the filter. Thus in applications where step tracking is the design objective (i.e., in almost all applications) the desired transfer functions should be chosen differently—the logistic function may be adopted for simplicity of computations. On the other hand, it is worth adding that in the general formulation of an actual problem we do not specify the desired transfer function, see Section 10.15. It should be noted that with actual transfer functions the S and T plots will be drastically different from the idealized ones of Fig. 10.2, in that the rate of increase and decrease in |S| and |T| is slow, i.e., there is no sharp change in them, and they overlap over the whole frequency range, although the overlapping effect is mostly concentrated within a limited range.

Example 10.8: For the Butterworth filters of order 1 and 2 with ωc = 30, the ideal T and S plots of Fig. 10.2 are replaced by Fig. 10.7, left and right panels, respectively. To see the respective Bode magnitude diagrams see the accompanying CD on the website of the book. We observe that as ω → ∞, |T| → 0 and |S| → 1. A point worth emphasizing is that in the mid-frequency range the relation T + S = 1 cannot be interpreted from the plot, since |T| + |S| = 1 does not hold. However, in low and high frequencies some interpretations which are theoretically justified can be made.

Figure 10.7 jTj and jSj in plain numbers, Left: B1 Right: B2 .


Remark 10.3: In control systems a large overshoot in |S| is undesirable. Such a system is highly sensitive and difficult to control. Denoting Smax ≔ maxω|S|, we usually require Smax < 2, typically Smax = 1.5 or, better, smaller.³ A value of, e.g., Smax = 4⁴ is a large number, let alone 10 or 100. (Incredible!) We also note that when one of Smax and Tmax ≔ maxω|T| is large (e.g., 10) then because of the relation T + S = 1 the other one is necessarily large. (To visualize this, suppose Tmax is large and occurs at ωT. Draw T(jωT) as a large complex number. Then S(jωT) is also a large number, since it is given by S(jωT) = 1 - T(jωT). Because Smax ≥ |S(jωT)| by definition, Smax is also a large number. Recall that Tmax and Smax occur at different frequencies.) Reasonable values of Tmax are the same as those of Smax.

Remark 10.4: In open-loop control the (output) disturbance appears in the output without any amplification. An interpretation of |S| > 1 in closed-loop control is that at the respective frequency the disturbance is amplified in the output. That is, in that frequency range the feedback has a detrimental effect (in contrast to the frequency range where |S| < 1 and the disturbance is attenuated) and is worse than open-loop control.

Remark 10.5: The relation Y = TR + SD also suggests requiring |S| ≤ ε < 1 for ω ∈ [0, ωl]. That is, it is desirable to have an S like that in Fig. 10.8,⁵ left panel. Considering Remarks 10.2 and 10.3, it is also desirable to set an upper bound on Smax. However, as we shall see in Section 10.7, for systems with NMP zeros this is not achievable and in fact has the opposite nature: the requirement |S| ≤ ε < 1 for ω ∈ [0, ωl] is tantamount to setting a lower bound (as opposed to an upper bound) on Smax! And this is another fundamental limitation in feedback systems. Similarly, it is desirable to set |T| ≤ ε < 1 for ω ∈ [ωh, ∞), see Fig. 10.8,


Figure 10.8 Left: Desired S, Right: Desired T.

3. Actually there is a tradeoff between the level of sensitivity/robustness of the system and its performance as manifested in its transient response.
4. Such a system may be controlled only automatically, not by a human controller like a person who wants to ride (i.e., control) a bike or stabilize a rod on his or her palm. See the worked-out problems.
5. Note that in this figure the horizontal axis is frequency in the logarithmic scale, and that is the reason for the tail in the low-frequency range of the sensitivity. On the other hand, in Fig. 10.7 the horizontal axis is frequency in the usual scale.


right panel. But this condition causes the same problem for T if the plant has open-loop NMP poles! We wrap up this section by mentioning that the topic of sensitivity functions will shortly be reconsidered in Section 10.7, where other fundamental limitations are discussed. To this end we should first present some preliminaries known as the interpolation conditions. Because these conditions are directly geared to any controller design procedure, we shall first start with a design procedure called the TS design method, where T and S denote the complementary sensitivity function and the sensitivity function of the system.

10.4 Controller design via the TS method

Since the mid-1980s there has emerged a research line to accomplish the design procedure in terms of the T and S functions, the so-called complementary sensitivity and sensitivity functions. The method attracted the attention of the control community and was further enhanced after the introduction of the so-called interpolation conditions, which we shall introduce in the next section. It soon entered the graduate texts on the course Robust Control. Because of the importance of the results, recent efforts have focused on bringing the materials even to undergraduate courses, and this is what we are doing in this book, even in a larger framework—that of fundamental limitations, which we are discussing in this chapter. Recall that in the negative unity-feedback structure T = CP/(1 + CP) and S = 1/(1 + CP). Hence, C = T/(PS). Usually the systems are low-pass. Thus we require that lim_{s→∞} T(s) = 0 and hence lim_{s→∞} S(s) = 1. Now suppose that as s → ∞: T(s) → kT/s^NT and P(s) → kP/s^NP. That is, NT and NP are the pole–zero excess of T and P, respectively. (Note that NS = 0 and kS = 1.) Therefore, C(s) → (kT/kP)·s^(NP-NT). Thus to have a causal controller there should hold NT ≥ NP. Indeed if NT = NP then C is proper, and if NT > NP then C is strictly proper. In brief, the conditions NT ≥ NP, T(0) = 1 restrict the achievable transfer functions for both the controller and the closed-loop system. This is another fundamental limitation in feedback systems. Not every closed-loop transfer function is achievable for every system. For instance, if P(s) = 2/(s + 3) the closed-loop transfer function T(s) = (as² + bs + c)/(s² + cs + d) is not achievable by a causal controller because NP = 1, NT = 0, and NT ≥ NP is violated.

Example 10.9: Consider the plant P(s) = -2/(s + 0.01). Design a controller such that the closed-loop transfer function becomes T(s) = ω0²/(s² + √2·ω0·s + ω0²). We first note that the conditions NT ≥ NP, T(0) = 1 are both met. Now we easily compute

C = T/(PS) = T/[P(1 - T)] = ω0²(s + 0.01)/[s(s + √2·ω0)·(-2)].

Note that C is strictly proper because NT > NP. (The stability of the design will be shortly discussed.)


Example 10.10: Consider the plant P(s) = 3/[s(s + 1)(s + 10)]. Design a controller such that the closed-loop transfer function has all its poles at s = -2. The closed-loop transfer function thus has the form T(s) = N(s)/(s + 2)^a. To have NT ≥ NP = 3 we should have either a = 3, N(s) = k; a = 4, N(s) = k; a = 4, N(s) = k1·s + k2; etc. We choose (or rather try) a = 3, N(s) = k. Therefore from T(0) = 1 we find k = 2³ = 8. Now the controller is

C = T/(PS) = T/[P(1 - T)] = (8/3)(s + 1)(s + 10)/(s² + 6s + 12).

Note that C is proper because NT = NP. In the literature, stability of this design method is discussed via the interpolation conditions which follow in the next section. Of course, on the other hand, the above two designs are stable by direct verification, e.g., via Routh's test.
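The design of Example 10.10 can be verified numerically with plain polynomial arithmetic; a sketch of ours:

```python
import numpy as np

# Plant P = 3/[s(s+1)(s+10)], controller C = (8/3)(s+1)(s+10)/(s^2+6s+12)
num_P = np.array([3.0])
den_P = np.polymul([1, 0], np.polymul([1, 1], [1, 10]))
num_C = (8 / 3) * np.polymul([1, 1], [1, 10])
den_C = np.array([1.0, 6, 12])

# Closed loop T = CP/(1+CP):  den_T = den_C*den_P + num_C*num_P
num_T = np.polymul(num_C, num_P)
den_T = np.polyadd(np.polymul(den_C, den_P), num_T)

# Roots: the design poles at s = -2 (multiplicity 3) plus the cancelled
# stable factors s = -1 and s = -10 of the unreduced fraction
print(np.sort(np.roots(den_T).real))
```

After cancelling the common stable factors (s + 1)(s + 10), the reduced closed loop is exactly 8/(s + 2)³, as designed.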

10.5 Interpolation conditions

The internal stability of the 1-DOF control structure, see Fig. 10.9, was studied in Chapter 3, Stability Analysis. There we learned that the transfer functions 1/(1 + CP), C/(1 + CP), P/(1 + CP), and CP/(1 + CP) should all be stable. The following famous theorem expresses this condition in terms of the sensitivity and complementary sensitivity functions of the system.

Theorem 10.1: The system is internally stable iff T(s) [or S(s)], P⁻¹(s)T(s), and P(s)S(s) are all stable. Δ

This can be further translated to the subsequent interpolation conditions. For P⁻¹(s)T(s) to be stable, T must cancel out the CRHP zeros of P. If P has a zero of multiplicity m at s = z0, there should hold

T(z0) = dT/ds(z0) = ⋯ = d^(m-1)T/ds^(m-1)(z0) = 0, and S(z0) = 1, d^iS/ds^i(z0) = 0, i = 1, …, m - 1.

For P(s)S(s) to be stable, S must cancel out the CRHP poles of P. If P has a pole of multiplicity m at s = p0, there should hold

S(p0) = dS/ds(p0) = ⋯ = d^(m-1)S/ds^(m-1)(p0) = 0, and T(p0) = 1, d^iT/ds^i(p0) = 0, i = 1, …, m - 1.

In fact in the above theorem the loop gain can also have a delay term, i.e., it can be of the form L(s) = e^(-sT)·L0(s), where L0 has CRHP poles and/or zeros.

Figure 10.9 The 1-DOF control structure with external inputs.


The usual case is that of simple CRHP poles and zeros. In this case, S is 0 at CRHP poles and 1 at CRHP zeros, and T is 1 at CRHP poles and 0 at CRHP zeros. The interpolation conditions are important because they show how the system dynamics constrains the choices of S and T. Recall that S and T characterize the achievable performance. Thus, the interpolation conditions restrict and in some sense characterize the achievable performance for a given plant. This is another fundamental limitation in feedback systems: CRHP poles and zeros of the plant restrict the achievable performance of the system by (de)forming the shapes of T and S. This is pictorially shown in the worked-out Problem 10.1. It should be mentioned that the interpolation conditions (for p = 0) are actually met in the solution of Example 10.10 (convince yourself that we have done so!) and the system is stable. For sophisticated systems this requires some computations, as we show in the next example, in which we realize the interpolation conditions by direct construction.

Example 10.11: For P(s) = (s - 1)/[(s - 2)(s + 3)] design a proper controller C(s) such that T(0) = 1 and T(s) has all its poles as multiple poles at s = -a, a > 0. To have a proper controller we should choose NT = NP = 1. To have a minimal-order controller we should start from T1(s) = k/(s + a). If not possible, we then go to T2(s) = k(s + b)/(s + a)², T3(s) = k(s + b)(s + c)/(s + a)³, etc. The interpolation conditions are T(2) = 1, T(1) = 0 (or S(1) = 1), and also we should have T(0) = 1. It is clear that with T1(s) we cannot satisfy the above three conditions. Similarly, it is easy to show that with T2(s) we cannot satisfy the above three conditions either. (Convince yourself that these claims are correct!) However, as we shall verify in the following, with T3(s) the conditions can be satisfied. From T(1) = 0 we find b = -1 or c = -1; we choose the latter and hence T(s) = k(s + b)(s - 1)/(s + a)³. From the other conditions we find T(0) = -kb/a³ = 1 and T(2) = k(2 + b)(2 - 1)/(2 + a)³ = 1. Hence, k = a³ + 3a² + 6a + 4, b = -a³/(a³ + 3a² + 6a + 4). Now,

S = 1 - T = [s³ + (3a - k)s² + (3a² - kb + k)s + a³ + kb]/(s + a)³ = [s³ - (a³ + 3a² + 3a + 4)s² + (2a³ + 6a² + 6a + 4)s]/(s + a)³.

On the other hand S(2) = 0. Thus from the above expression for S we conclude that it should be factorizable as S(s) = s(s - 2)(s - d)/(s + a)³. By equating the numerators we find d = a³ + 3a² + 3a + 2. The controller is thus

C(s) = T/(PS) = [k(s + b)(s - 1)]·[(s - 2)(s + 3)]/[(s - 1)·s(s - 2)(s - d)] = k(s + b)(s + 3)/[s(s - d)],

where k = a³ + 3a² + 6a + 4, b = -a³/(a³ + 3a² + 6a + 4), d = a³ + 3a² + 3a + 2. For a = 1 we have k = 14, b = -1/14, d = 9, and the root locus of the system is shown in the left panel of Fig. 10.10. The system is stable. The step response is provided in the right panel of the same figure. Note that we shall study the reason for the poor transient performance of this system in Section 10.6. (Continued)



Figure 10.10 Example 10.11. Left: Root locus not drawn to scale, Right: Step response.
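As a quick check (ours, not the book's), the interpolation conditions of Example 10.11 can be verified numerically for the constructed T(s):

```python
def T(s, a=1.0):
    """T(s) = k(s+b)(s-1)/(s+a)^3 of Example 10.11."""
    k = a**3 + 3 * a**2 + 6 * a + 4
    b = -a**3 / k
    return k * (s + b) * (s - 1) / (s + a) ** 3

# T(2) = 1 at the ORHP pole, T(1) = 0 at the ORHP zero, T(0) = 1 (DC gain)
for s0 in (2.0, 1.0, 0.0):
    print(s0, T(s0))
```

The conditions hold for any a > 0, not just a = 1, since k and b were solved symbolically from them.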

Remark 10.6: Using this approach, in which we do not apply an unstable pole–zero cancellation, small perturbations in the controller and/or plant parameters still result in a stable system, although the design specification will certainly be different. This is shown in the next example.

Example 10.12: Suppose that instead of C(s) = 14(s - 1/14)(s + 3)/[s(s - 9)] the controller is implemented as C(s) = 14.5(s - 1/15)(s + 3.1)/[s(s - 9.5)]. Investigate the stability and response of the system. With the new pole–zero values the system has the root locus of the left panel of Fig. 10.11, which is stable for 11.32 < K < 17.10 and thus remains stable because K = 14.5. The step response is given in the right panel of the same figure. We will shortly learn the reason for the poor transient performance in Section 10.6.


Figure 10.11 Example 10.12, Left: Root locus not drawn to scale, Right: Step response.



The direct construction approach for the satisfaction of the interpolation conditions which we have presented in the above example has the shortcoming that we do not know a priori whether the desired transfer function structure is achievable or not. Of course a root locus analysis can help in finding an achievable one, but here the logical problem arises that if we were to use the root locus then we would not use the TS approach from the outset. All in all, the conditions are insightful, but it is not an easy task to satisfy them if we wish to perform the design by hand computations as in the above, especially if we want a tracking system for ramp and parabola inputs and/or if the plant has multiple CRHP poles/zeros. The factorization and controller parameterization approach is a more systematic way to perform this task. You will study it in graduate studies. Also, MATLAB's commands of the Robust Control Toolbox numerically solve a given problem if it admits a solution and if they are able to find an answer (recall our pertinent discussions in Chapters 4 and 5 regarding the bottlenecks of optimization problems).

10.6 Integral and Poisson integral constraints

In his 1945 book Bode proposed integral constraints on the sensitivity and complementary sensitivity transfer functions. The results were largely unappreciated till the 1960s, when they were interpreted in the design context, although some theoretical mistakes were made at that time. This brought them to the attention of the control community, and finally in the 1980s a renaissance in frequency-domain results took place. Since then many researchers have endeavored to improve the classical results of Bode, and they have succeeded to a certain extent. The old and recent results all pivot on functional analysis of analytic functions. We discuss some pertinent results. Let us start with the phase–gain relation of a minimum-phase system. Bode showed that the phase and gain are dependent on each other, and this characterizes some fundamental limitations, as will be explained.

Theorem 10.2: Let H be a stable, minimum-phase, proper transfer function with H(0) > 0. Then at any frequency ω0 the phase φ(ω0) ≔ arg H(jω0) satisfies

φ(ω0) = (1/π) ∫_{-∞}^{+∞} [d log|H(jω0·e^u)|/du] · log coth|u/2| du,

where u = log(ω/ω0). Δ

Because of the shape of the weighting W = log coth|u/2| = log|(ω + ω0)/(ω - ω0)|, see Fig. 10.12, the above relation indicates, in general, that as the distance between ω and ω0 increases, the dependence of φ(ω0) on the rate of gain decrease at the frequency ω sharply decreases. Therefore, in general a system has good stability margins if the slope of the Bode magnitude diagram at the crossover frequency is about -20 db/dec in magnitude. More precisely, suppose that the slope of the magnitude curve at ω0 is -20k db/dec. Then

φ(ω0) ≈ -(k/π) ∫_{-∞}^{+∞} log coth|u/2| du = -(k/π) × (π²/2) = -kπ/2.

Thus in order to have a good PM it is often required that the slope be in the
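The gain–phase integral can be evaluated numerically and compared with the exact phase. A sketch of ours (the choice H = 1/(s+1)², ω0 = 1, and the trapezoidal quadrature are our own assumptions):

```python
import numpy as np

# Check Theorem 10.2 for the minimum-phase H(s) = 1/(s+1)^2 at w0 = 1;
# the exact phase is arg H(j1) = -2*atan(1) = -pi/2.
w0 = 1.0
u = np.linspace(-12, 12, 120001)          # u = log(w/w0)
w = w0 * np.exp(u)

log_mag = -np.log(1 + w**2)               # log|H(jw)| for this H
slope = np.gradient(log_mag, u)           # d log|H| / du
weight = np.log(1 / np.tanh(np.abs(u) / 2 + 1e-12))  # log coth|u/2|

f = slope * weight
phase = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)) / np.pi  # trapezoid rule
print(phase, -np.pi / 2)
```

The logarithmic singularity of the weight at u = 0 is integrable, so a fine uniform grid recovers the phase to a few decimal places.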



Figure 10.12 Shape of the weighting function.

range -30 to -20 db/dec at ωgc. For instance, if the slope is -60 db/dec then most probably the system either is unstable or has poor stability margins. Of course, it is easy to construct certain systems which do not obey this general trend (Zadeh and Desoer, 1963). Moreover, in Fig. 10.8 of Remark 10.5, ωl and ωh cannot be too close. Note that if we remove the condition H(0) > 0 from the statement of Theorem 10.2, then the left-hand side should be substituted with φ(ω0) - φ(0). Next we discuss another facet of the above theorem in terms of a fundamental limitation.

Example 10.13: The above theorem can also be used to verify whether a certain desired phase margin is achievable for a plant or control system. In fact it can be applied in the same context to systems with NMP poles and zeros. For instance, suppose that H(s) = [(s - z)/(s - p)]·H1(s), where z and p are positive and H1(s) is strictly proper, stable, and MP. Then for strong stabilization we write the loop gain as

L(s) = C(s)H(s) = C(s)·[(s + z)/(s + z)]·[(s - z)/(s - p)]·[(s + p)/(s + p)]·H1(s) = [(s - z)/(s + z)]·[(s + p)/(s - p)]·L1(s),

with the obvious definition for the MP and stable part L1(s). Now from Problem 6.18 of Chapter 6 we recall that for stability of L(s) there should hold L1(0) < 0. Hence, in order to use Theorem 10.2 we write L(s) = -[(s - z)/(s + z)]·[(s + p)/(s - p)]·L2(s), with L2(0) > 0. We note that |L(jω)| = |L2(jω)| and ∠L(s) = ∠[-(s - z)/(s + z)] + ∠[(s + p)/(s - p)] + ∠L2(s). Hence,

φ(ω0) = -∠[(z + jω0)/(z - jω0)] - ∠[(jω0 - p)/(jω0 + p)] + ∠L2(jω0).

Now we use Theorem 10.2 for ∠L2(s) and simplify the first term to obtain

φ(ω0) = -2 tan⁻¹[(ω0/z + p/ω0)/(1 - p/z)] + (1/π) ∫_{-∞}^{+∞} [d log|L2(jω0·e^u)|/du] · log coth|u/2| du
       = -2 tan⁻¹[(ω0/z + p/ω0)/(1 - p/z)] + (1/π) ∫_{-∞}^{+∞} [d log|L(jω0·e^u)|/du] · log coth|u/2| du.

The second integral can be approximated as we did in the explanation of Theorem 10.2, using the desired slope at ωgc. Thus we can simply proceed. In fact we can even verify whether the desired phase margin is achievable at an arbitrary frequency ω0. To this end we need to compute the minimum of the first term above, which is attained at ω0 = √(pz). (Continued)


(cont'd) Note that, alternatively, the above approach can be used to verify for what unstable pole/zero configurations the desired phase margin and slope are achievable. For instance, if we desire Pm = 45 deg at a certain known ωgc with the slope -20 db/dec, then the last line of the above formula results in

-135 × π/180 = -2 tan⁻¹[(ωgc/z + p/ωgc)/(1 - p/z)] - π/2.

Thus the relation between z and p is determined as a fundamental limitation, since z and p cannot take arbitrary values but only ones determined by this formula.

Next we switch to closed-loop systems. Consider the 1-DOF negative unity-feedback control system with the loop gain L(s) = C(s)P(s). For this system, among the results that Bode obtained is the subsequent theorem.

Question 10.2: How can we apply the theorem to a system with more than one NMP pole and zero?

Theorem 10.3: Let the loop gain be stable with relative degree nr > 1. Then ∫₀^∞ log|S(jω)| dω = 0. Δ

Example 10.14: Theorem 10.3 means that the magnitude of the sensitivity function shows an overshoot. Given the sensitivity function of a system with nr > 1 as in Fig. 10.13, the graphical interpretation of the result is that the areas under and above the 0-db line are equal, i.e., Au = Aa. Noting that ln 1 = 0, this can be paraphrased as follows: if the designer reduces the sensitivity of a controlled system in one frequency range (that of Au), then inevitably the sensitivity is increased in another frequency range (that of Aa). That is to say, there is a conservation law on the sensitivity in a control system. Given the relation T + S = 1 and considering a low-pass system (as in this figure), the designer usually does not have much freedom in the area Au but to dig it and pile it up on Aa such that it has the best shape. We may think that we can spread it as an arbitrarily thin band over the whole frequency range (outside the system passband) and thus avoid large sensitivity values, especially a peak in a small frequency range. However, given the requirement that T ≈ 0 above a certain frequency, we must have ln|S| ≈ 0 in that range, and this means that the area Aa cannot spread over all frequencies and has to be accumulated in a certain small frequency range. A peak in |S| is thus inevitable. (Continued)



Figure 10.13 Left: A typical sensitivity function and the integral constraint.
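The area-balance interpretation can be checked numerically. A sketch of ours (the loop gain below is our own example, chosen to be stable with relative degree 2):

```python
import numpy as np

# L = 1/((s+1)(s+2)), so S = 1/(1+L) = (s^2+3s+2)/(s^2+3s+3);
# Theorem 10.3 predicts  ∫_0^∞ ln|S(jw)| dw = 0.
w = np.linspace(1e-6, 2000.0, 400001)
s = 1j * w
S = (s**2 + 3 * s + 2) / (s**2 + 3 * s + 3)
f = np.log(np.abs(S))
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))  # trapezoid rule
print(integral)   # close to zero: the areas below/above the 0-db line cancel
```

The negative low-frequency area (|S| < 1, disturbance attenuation) is balanced by the thin positive tail where |S| > 1; truncating the integral at a finite frequency leaves only a small residual of order 1/w_max.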

Remark 10.7: In connection with the S-circles introduced in Chapter 8, Krohn–Manger–Nichols Chart, consider the unit circle centered at the critical point -1 in the Nyquist plot context. The region inside this circle refers to |S| > 1 and the region outside this circle refers to |S| < 1. Thus, for systems with nr > 1, in the frequency range where the Nyquist plot is inside the aforementioned circle the disturbance is amplified, and at other frequencies it is attenuated.

The results we present in the rest of this section are of two types: the so-called integral constraints and Poisson integral constraints. An example of the first class is Theorem 10.6 and an example of the second class is Theorem 10.4, in which a weighting function appears. The reason for this naming is that the Poisson⁶ integral formula is used in the proof of the result. Let us start. In the case that the system has ORHP zeros the above theorem is modified. We write H(s) as the product of a function which is MP and analytic in the ORHP and the Blaschke products (or their inverses) defined by

Bz(s) = Π_{i=1}^{M} (s - zi)/(s + zi) and Bp(s) = Π_{i=1}^{M} (s - pi)/(s + pi).

Theorem 10.4: Consider the system L(s) which is delay-free and whose closed loop is either stable or unstable, or which has delay and whose closed-loop system is stable. If L(s) has the (simple or multiple) ORHP zero z = α + jβ, then

∫_{-∞}^{+∞} log|S(jω)| · α/(α² + (β - ω)²) dω = -π log|Bp(z)|. Δ

The term Bp(z) is defined via Bp(s) = Π_{i=1}^{N} (s - pi)/(s + pi) for the NMP poles pi, i = 1, …, N, where N is the number of such poles, and is called a Blaschke product. The weighting function in the integrand of Theorem 10.4 is α/(α² + (β - ω)²). We note that the weighting function decays to zero with increasing frequency. This is another facet of the argument of Example 10.14 that compensation in high frequencies should essentially be achieved over a limited frequency range. The area of sensitivity increase cannot be expanded arbitrarily wide (and thin). In fact the weighted length of the jω-axis is finite and equal to π. (Why?)

Named after S. D. Poisson, French mathematician and physicist (17811840) with a legacy of important results.

Fundamental limitations

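Theorem 10.4 can be verified numerically. A sketch, assuming the illustrative loop gain $L(s) = (1-s)/((s+1)(s+2))$ (our choice), which has the ORHP zero $z = 1$ (so $\alpha = 1$, $\beta = 0$), no ORHP poles (so $B_p \equiv 1$ and the right-hand side is zero), and a stable closed loop; natural logarithms are used:

```python
import numpy as np
from scipy.integrate import quad

# L(s) = (1 - s)/((s + 1)(s + 2))  =>  S(s) = (s^2 + 3s + 2)/(s^2 + 2s + 3)
def integrand(w):
    s = 1j * w
    S = (s**2 + 3*s + 2) / (s**2 + 2*s + 3)
    alpha, beta = 1.0, 0.0          # ORHP zero z = alpha + j*beta = 1
    weight = alpha / (alpha**2 + (beta - w)**2)
    return np.log(abs(S)) * weight

# Weighted (Poisson) sensitivity integral over the whole j-axis;
# with no ORHP poles, -pi*log|B_p(z)| = 0.
val, _ = quad(integrand, -np.inf, np.inf, limit=400)
print(val)  # ~ 0
```

Even though the unweighted Bode integral of this loop need not vanish, the Poisson weighting attached to the zero $z = 1$ forces the weighted areas to cancel.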

The above theorem can be used to prove the following corollary, which we present as a theorem due to its importance.

Theorem 10.5: If a design requires $|S| \le \varepsilon < 1$ in $\omega \in [0, \omega_l]$, then a lower bound is set on $S_{max}$ as
$$S_{max} \ge \left(\frac{1}{\varepsilon}\right)^{\frac{\theta}{\pi - \theta}} \left|B_p^{-1}(z)\right|^{\frac{\pi}{\pi - \theta}},$$
in which $\theta = \int_0^{\omega_l} \frac{\alpha}{\alpha^2 + (\beta - \omega)^2}\, d\omega = \tan^{-1}\left(\frac{\omega_l - \beta}{\alpha}\right) + \tan^{-1}\left(\frac{\beta}{\alpha}\right)$. Δ:

An actual numerical example is provided in the worked-out Problem 10.6. Here we provide a general example.

Example 10.15: Theorem 10.5 is a fundamental limitation in control systems. It can be used to deduce other fundamental limitations, as we explain. For the sake of simplicity, suppose that the system has one ORHP pole $p$ and one ORHP zero $z$ and that they are real, which is the case of interest in actual systems. This results in $\theta = \tan^{-1}(\omega_l/\alpha)$ in rad and $|B_p^{-1}(z)| = \left|\frac{z+p}{z-p}\right|$. On the other hand, for the cases of interest $\theta$ is a small number; in particular $\theta < \pi$ and thus $S_{max} \ge (1/\varepsilon)^{\theta/(\pi-\theta)}\,|B_p^{-1}(z)|^{\pi/(\pi-\theta)} \gg |B_p^{-1}(z)|$, provided $|B_p^{-1}(z)| > 1$. Now if $z$ and $p$ are near each other, this quantity will be large and the system will be difficult to control.[7] For instance, if $9/11 \approx 0.8 < z/p < 11/9 \approx 1.2$, then approximately $|B_p^{-1}(z)| \ge 10$ and thus approximately $S_{max} \gg 10$. Such a system is almost impossible to control. On the other hand, if we require $z/p = 5$ or $0.2$, then $S_{max} \ge (1/\varepsilon)^{\theta/(\pi-\theta)}(1.5)^{\pi/(\pi-\theta)} > 1.5$. In passing, note that from Exercise 8.26 we recall that $z/p > 5$ or $z/p < 0.2$ also results in $S_{max} \ge 1.5$.

Remark 10.8: Another important consequence of the theorem is that if the bandwidth is set near or larger than $z$, then a sensitivity increase and a large $S_{max}$ at other frequencies necessarily result. On the other hand, to effectively stabilize the system the bandwidth must be set larger than $p$. Hence, if $z > p$ (preferably $z \ge 5p$) there are no conflicting objectives; otherwise, if $z < p$, there are conflicting requirements and the plant is extremely difficult to control in practice, if possible at all.
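The bound of Theorem 10.5 is easy to evaluate numerically. A sketch with illustrative numbers (real $z$ and $p$, $\varepsilon = 0.5$, $\omega_l$ chosen by us purely for illustration):

```python
import numpy as np

def smax_lower_bound(z, p, eps, wl):
    # Theorem 10.5 with a real ORHP zero z (alpha = z, beta = 0)
    # and a single real ORHP pole p.
    theta = np.arctan(wl / z)            # weighted length of [0, wl]
    Bp_inv = abs((z + p) / (z - p))      # |B_p^{-1}(z)|
    return (1.0 / eps) ** (theta / (np.pi - theta)) \
        * Bp_inv ** (np.pi / (np.pi - theta))

# z/p close to 1: near unstable pole-zero cancellation, a huge peak is forced
near = smax_lower_bound(z=1.1, p=1.0, eps=0.5, wl=0.11)
# z/p = 5: a modest, achievable requirement
far = smax_lower_bound(z=5.0, p=1.0, eps=0.5, wl=0.5)
print(near, far)  # near is enormous, far stays below ~2
```

The numbers reproduce the dichotomy of Example 10.15: for $z/p = 1.1$ the bound exceeds 10 by a wide margin, whereas for $z/p = 5$ it stays slightly above 1.5.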
Next we consider the effect of unstable poles.

Theorem 10.6: Consider a strictly proper system with ORHP poles at $p_1, \dots, p_N$, and possibly ORHP zeros and a time delay $\tau$. Assume that the closed-loop system is stable. Then
$$\int_0^{\infty} \log|S(j\omega)| \, d\omega = \pi \sum_{i=1}^{N} \Re(p_i).$$ Δ:

In terms of Example 10.14, for a system satisfying the conditions of Theorem 10.6 the area of sensitivity amplification is larger than the area of sensitivity

[7] Another facet of the problem is that in this case an unstable pole–zero cancellation almost takes place, and this makes the system almost uncontrollable.



attenuation by the amount specified on the right-hand side. In other words, $A_a = A_u + \pi \sum_{i=1}^{N} \Re(p_i)$. Two interpretations can be made. (1) We can expect a larger peak in the sensitivity. (2) Part of the loop gain which, in the case of no ORHP poles, is used for sensitivity reduction is now used to stabilize the ORHP poles. It also shows that the bandwidth and $\omega_l$ should not be set close to each other; otherwise a large peak in $|S|$ in this frequency range is unavoidable.

Question 10.3: Can we conclude from Theorem 10.6 that for stabilizing unstable plants we need a larger (in an appropriate sense) control signal? Note that in the case of state-feedback pole assignment (which you will study in the next course) this conclusion is made directly from the pertinent formula.

Example 10.16: The Nyquist plot of a system satisfying the conditions of this theorem will enter and leave the unit circle centered at the critical point $-1$ (the $S = 1$ circle) infinitely many times; see Fig. 10.14 for the typical system $L(s) = 2e^{-sT}/(s+3)$. Because of Remark 10.7, this means that applying feedback to a delay system results in a sensitivity which alternates between amplification and attenuation infinitely many times. In other words, for an infinite number of frequencies the sinusoidal disturbance (at those frequencies) is amplified, and for an infinite number of frequencies the disturbance is attenuated.


Figure 10.14 Nyquist plot of our strictly proper delay system, not drawn to scale.
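Theorem 10.6 can also be verified numerically. A sketch, assuming the illustrative loop $L(s) = 10(s+1)/((s-1)(s+2)(s+3))$ (our choice: one ORHP pole at $p = 1$, relative degree 2, closed-loop characteristic polynomial $s^3 + 4s^2 + 11s + 4$, which is stable), so the integral should equal $\pi$:

```python
import numpy as np
from scipy.integrate import quad

# L(s) = 10(s + 1)/((s - 1)(s + 2)(s + 3)): one ORHP pole at p = 1.
def log_abs_S(w):
    s = 1j * w
    L = 10 * (s + 1) / ((s - 1) * (s + 2) * (s + 3))
    return np.log(abs(1 / (1 + L)))

# With an unstable open-loop pole, the sensitivity integral no longer
# vanishes: it equals pi * sum of the real parts of the ORHP poles.
val, _ = quad(log_abs_S, 0, np.inf, limit=400)
print(val, np.pi)  # val ~ pi
```

The net weighted area is positive, i.e., amplification dominates attenuation by exactly $\pi \Re(p)$, as the theorem predicts.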

Next we consider the complementary sensitivity function.

Theorem 10.7: Consider the system $L(s)$ which has the delay $\tau$ and whose closed-loop system is stable. If $L(s)$ has the (simple or multiple) ORHP pole $p = \gamma + j\delta$, then
$$\int_{-\infty}^{+\infty} \log|T(j\omega)| \, \frac{\gamma}{\gamma^2 + (\delta - \omega)^2} \, d\omega = -\pi \log|B_z(p)| + \pi\gamma\tau.$$

Δ: The term $B_z(p)$ is defined by $B_z(s) = \prod_{i=1}^{M} \frac{s - z_i}{s + z_i}$ for NMP zeros $z_i$, $i = 1, \dots, M$, where $M$ is the number of such zeros, and is called a Blaschke product. Similar to the case of the sensitivity function, the following theorem can be presented.



Theorem 10.8: If a design requires $|T| \le \varepsilon < 1$ in $\omega \in [\omega_h, \infty)$, then a lower bound is set on $T_{max}$ as
$$T_{max} \ge \left(\frac{1}{\varepsilon}\right)^{\frac{\pi - \theta}{\theta}} \left|B_z^{-1}(p)\right|^{\frac{\pi}{\theta}} 10^{\frac{\pi\gamma\tau}{\theta}},$$
in which $\theta = \int_0^{\omega_h} \frac{\gamma}{\gamma^2 + (\delta - \omega)^2}\, d\omega = \tan^{-1}\left(\frac{\omega_h - \delta}{\gamma}\right) + \tan^{-1}\left(\frac{\delta}{\gamma}\right)$. Δ:

Among the various implications of these theorems is a conclusion similar to Example 10.15. They also have nice implications on robustness and stability margins, which you will study in graduate courses. For instance, the inverse of $|T|$ is interpreted as a measure of robustness against unstructured multiplicative uncertainty. Thus unstable poles impose a tradeoff on the size of this measure in different frequency ranges. That is, the stability margin cannot be made large at all frequencies.

Example 10.17: What is probably the simplest general interpretation of the above results altogether? Probably it is that feedback properties at different frequencies are not independent of each other. The benefits we obtain at some frequencies by controller design must be traded off against the losses incurred at other frequencies.

Theorem 10.9: Consider the system $L(s) = C(s)P(s)$ which has the delay $\tau$ and an unstable pole $p$. Then $T_{max} \ge |e^{p\tau}|$. Δ:

Example 10.18: The application and interpretation of this theorem are straightforward. For instance, if we want $T_{max} < 1.1$, the theorem places an upper bound, as a fundamental limitation, on the product $\tau p$. Conversely, if $\tau p$ is given, then it is not possible to design a controller which renders $T_{max} < |e^{p\tau}|$.

Remark 10.9: Further results are available in the references introduced at the end of the chapter. Among these is the optimization approach of (Halpern and Evans, 2000) to the sensitivity reduction problem.

Remark 10.10: We close this section by stressing that the above theorems assume that the loop gain is analytic in the ORHP, i.e., it does not have any pole on the $j\omega$-axis. However, for disturbance rejection and tracking the system often has at least one integrator. It may also have pure imaginary poles.
In such cases the results generally remain valid via the indentation technique around the $j\omega$-axis singularities, a standard technique also used in Chapter 6 on Nyquist analysis. We close this section by reminding you that some important questions were raised in Exercise 7.24 of Chapter 7 which can be recast in the context of fundamental limitations as well.



Figure 10.15 The ideal 1-DOF control structure.

10.7 Constraints implied by poles and zeros

While the early results were obtained in the frequency domain, since around 1990 various results have also been obtained in the time domain. We present some of the available results. In the following we discuss three topics: implications of open-loop integrators, implications of MP and NMP poles and zeros, and implications of $j\omega$-axis poles and zeros. We consider the control structure of Fig. 10.15, where $d_i$ and $d_o$ represent the input and output disturbances, respectively.

10.7.1 Implications of open-loop integrators

Let the plant and controller be given by $P(s) = N_P(s)/D_P(s)$ and $C(s) = N_C(s)/D_C(s)$, where $L(s) = N_L(s)/D_L(s) = [N_C(s)N_P(s)]/[D_C(s)D_P(s)]$ is the loop gain. Denote $e(t) = r(t) - y(t)$ as usual. Then the ensuing theorems, whose proofs are very simple, can be presented.

Theorem 10.10: Assume that the loop gain has $i$ integrators, i.e., $L(s) = N_L(s)/D_L(s) = \hat{L}(s)/s^i$ and $\lim_{s\to 0} s^i L(s) = \hat{L}(0)$.

1. For a step reference input or a step output disturbance: $\lim_{t\to\infty} e(t) = 0$, $\forall i \ge 1$, and $\int_0^\infty e(t)\,dt = 0$, $\forall i \ge 2$.
2. For a positive unit ramp reference input or a negative unit ramp output disturbance: $\lim_{t\to\infty} e(t) = 1/\hat{L}(0)$ for $i = 1$; $\lim_{t\to\infty} e(t) = 0$, $\forall i \ge 2$; and $\int_0^\infty e(t)\,dt = 0$, $\forall i \ge 3$.
3. For a positive unit parabolic reference input or a negative unit parabolic output disturbance: $\lim_{t\to\infty} e(t) = \infty$ for $i = 1$; $\lim_{t\to\infty} e(t) = 2/\hat{L}(0)$ for $i = 2$; $\lim_{t\to\infty} e(t) = 0$, $\forall i \ge 3$; and $\int_0^\infty e(t)\,dt = 0$, $\forall i \ge 4$.

Δ: Theorem 10.11: Assume that the controller has $i$ integrators, i.e., $C(s) = N_C(s)/D_C(s) = \hat{C}(s)/s^i$ and $\lim_{s\to 0} s^i C(s) = \hat{C}(0)$.

i. For a step input disturbance: $\lim_{t\to\infty} e(t) = 0$, $\forall i \ge 1$, and $\int_0^\infty e(t)\,dt = 0$, $\forall i \ge 2$.
ii. For a negative unit ramp input disturbance: $\lim_{t\to\infty} e(t) = 1/\hat{C}(0)$ for $i = 1$; $\lim_{t\to\infty} e(t) = 0$, $\forall i \ge 2$; and $\int_0^\infty e(t)\,dt = 0$, $\forall i \ge 3$.
iii. For a negative unit parabolic input disturbance: $\lim_{t\to\infty} e(t) = \infty$ for $i = 1$; $\lim_{t\to\infty} e(t) = 2/\hat{C}(0)$ for $i = 2$; $\lim_{t\to\infty} e(t) = 0$, $\forall i \ge 3$; and $\int_0^\infty e(t)\,dt = 0$, $\forall i \ge 4$.

Δ: Note that Theorem 10.11 means that integration in the plant does not have any effect on the rejection of input disturbance. Also note that the proof is by direct


substitution and computation. See also the special cases that we have addressed in Example 4.3, Problem 4.6, and Exercise 4.7 of Chapter 4, Time Response. Now we present two examples. Consider the system $L(s) = C(s)P(s) = 5(s+1)^2/s^3$. We make the following systems out of this original system and simulate them: (0) $C_0(s) = 5$ and $P_0(s) = (s+1)^2/s^3$, (1) $C_1(s) = 5(s+1)/s$ and $P_1(s) = (s+1)/s^2$, and (2) $C_2(s) = 5(s+1)/s^2$ and $P_2(s) = (s+1)/s$. Note that in case $k$ the controller has $k$ integrators and the loop in all cases has 3 integrators.

Example 10.19: We simulate case (0) for input and output step and ramp disturbances and investigate Theorems 10.10 and 10.11. The answer is plotted in Fig. 10.16. We note that there is a difference between the input disturbance and the output disturbance. We also note that the loop gain has $i = 3$ integrators and thus $y$ shows an undershoot (in response to the output disturbance), as predicted by $\int_0^\infty e(t)\,dt = 0$. Finally, the student is encouraged to simulate the system for a ramp input as well, and observe the effect of the input and output disturbances.

Figure 10.16 Example 10.19. Step reference input at $t = 0$, disturbance at $t = 5$. Top left: step input disturbance; top right: step output disturbance; bottom left: ramp input disturbance; bottom right: ramp output disturbance.


Example 10.20: We simulate case (1) for input and output ramp disturbances. Because the loop gain is the same as that of Example 10.19, the responses to the output disturbance are the same and thus are not reproduced. The responses to the input disturbances are plotted in Fig. 10.17. We note that the controller has $i = 1$ integrator, and the constant error $1/5 = 0.2$ in the right panel is visible, as predicted by Theorem 10.11.

Figure 10.17 Example 10.20. Step reference input at $t = 0$, input disturbance at $t = 5$. Left: unit step input disturbance; right: unit ramp input disturbance.

It should be noted that if the system has the same integrator property but differs in its other dynamics, then the conditions on $e$ and $\int e$ will be the same but the "shape" of the response will be different. The interested reader is encouraged to verify this point by simulating the system $C_{01}(s) = 5/s$ and $P_{01}(s) = (s+1)^2/s^2$. Case (2) is simulated in the worked-out Problem 10.10.
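The bookkeeping of Theorems 10.10 and 10.11 can be spot-checked by simulation. A sketch with scipy, assuming the illustrative loop $L(s) = 2/(s(s+1))$ (our choice: $i = 1$ integrator, $\hat{L}(0) = 2$), so for a unit ramp reference the error should settle at $1/\hat{L}(0) = 0.5$:

```python
import numpy as np
from scipy.signal import lsim, TransferFunction

# S(s) = 1/(1 + L) = s(s + 1)/(s^2 + s + 2); the error is e = S * r.
S = TransferFunction([1, 1, 0], [1, 1, 2])

t = np.linspace(0, 30, 3001)
r = t                               # unit ramp reference
_, e, _ = lsim(S, U=r, T=t)
print(e[-1])  # ~ 1/L_hat(0) = 0.5
```

Adding a second integrator to the loop would drive this steady-state ramp error to zero, in agreement with part 2 of Theorem 10.10.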

10.7.2 MP and NMP poles and zeros

We start with a theorem which will be used, on the one hand, to show the interconnection between time and frequency domain results (worked-out Problems 10.11–10.14) and, on the other hand, to prove the rest of the theorems.

Theorem 10.12: Let $F(s)$ and $f(t)$ be a Laplace pair where $F(s)$ is strictly proper with the region of convergence $\Re(s) > -\alpha$. Then for any $z$ such that $\Re(z) > -\alpha$ there holds $\int_0^\infty f(t)e^{-zt}\,dt = \lim_{s\to z} F(s)$. Δ:
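Theorem 10.12 is the workhorse behind the results below; a quick numerical sanity check, assuming the illustrative pair $f(t) = e^{-t}\sin t$, $F(s) = 1/((s+1)^2 + 1)$:

```python
import numpy as np
from scipy.integrate import quad

# f(t) = exp(-t) sin(t)  <-->  F(s) = 1/((s + 1)^2 + 1), ROC Re(s) > -1
f = lambda t: np.exp(-t) * np.sin(t)
F = lambda s: 1.0 / ((s + 1.0) ** 2 + 1.0)

z = 0.5  # any z with Re(z) > -1 works
val, _ = quad(lambda t: f(t) * np.exp(-z * t), 0, np.inf)
print(val, F(z))  # the two agree
```

Evaluating the Laplace transform at a point inside the region of convergence is exactly this weighted time-domain integral; the theorems below apply it with $z$ taken at plant zeros and poles.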


Theorem 10.13: Consider a control system with closed-loop poles to the left of $-\alpha$ for some $\alpha > 0$. Also assume that the controller has at least one integrator. Then for an uncancelled plant zero at $z$ and an uncancelled plant pole at $p$ satisfying $\Re(z) > -\alpha$ and $\Re(p) > -\alpha$ there holds:

1. For a positive unit step reference input or a negative unit step output disturbance: $\int_0^\infty e(t)e^{-zt}\,dt = 1/z$ and $\int_0^\infty e(t)e^{-pt}\,dt = 0$.
2. For a positive unit step reference input with $z > 0$: $\int_0^\infty y(t)e^{-zt}\,dt = 0$.
3. For a negative unit step input disturbance: $\int_0^\infty e(t)e^{-zt}\,dt = 0$ and $\int_0^\infty e(t)e^{-pt}\,dt = 1/[pC(p)]$.

Δ: This theorem has very important interpretations. For instance, part (1) indicates that if $z < 0$ then the error must change sign, and there will be an overshoot in the output. Moreover, part (2) indicates the inverse response (with the correct explanation as provided in Section 4.11). These observations together impose an upper bound on the achievable closed-loop BW: if we want a good transient response, it is necessary that the closed-loop BW be chosen less than the smallest NMP zero of the plant. On the other hand, the theorem indicates that NMP poles impose a lower bound on the closed-loop BW that must be achieved. In other words, to have acceptable performance it is necessary that the closed-loop BW be chosen larger than the largest (in absolute value) NMP pole. Thus, in case the above two conditions are conflicting, it is not possible to get good performance from the system.

Example 10.21: To have acceptable performance, the closed-loop bandwidth of the plant $P(s) = (s-2)(s^2-1)/[s(s+3)(s+10)]$ can be at most $1\ \mathrm{rad/s}$.

Example 10.22: To have acceptable performance, the closed-loop bandwidth of the plant $P(s) = (s+2)/[s(s-3)(s+8)(s-10)]$ should be at least $10\ \mathrm{rad/s}$.

Example 10.23: To have acceptable performance, the closed-loop bandwidth of the plant $P(s) = (s+2)(s-10)(s-12)/[s(s-1)(s-3)(s+10)]$ should be in the range $3 < BW < 10$. As a rule of thumb, the BW is usually set to twice the largest NMP pole. (See also the worked-out Problem 10.3.)


Example 10.24: It is not possible to get acceptable performance from the plant $P(s) = (s+2)(s-3)(s-5)/[(s+1)(s-2)(s-4)(s-10)]$, since the NMP zeros and poles require conflicting conditions: $BW < 3$ and $BW > 10$, respectively.
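The bandwidth rules of Examples 10.21–10.24 can be mechanized in a few lines. A small helper (the function name and interface are ours, for illustration): the upper limit is the smallest NMP zero and the lower limit the largest NMP pole:

```python
def bw_range(orhp_zeros, orhp_poles):
    """Admissible closed-loop bandwidth range (lower, upper) in rad/s.

    orhp_zeros / orhp_poles: real parts of the NMP zeros and magnitudes
    of the NMP poles (the safer estimates, per the discussion below);
    None means the corresponding limit is absent.
    """
    upper = min(orhp_zeros) if orhp_zeros else None
    lower = max(orhp_poles) if orhp_poles else None
    return lower, upper

# Example 10.21: ORHP zeros of (s-2)(s^2-1) are {2, 1}; no ORHP poles
print(bw_range([2, 1], []))        # (None, 1): BW at most 1 rad/s
# Example 10.23: ORHP zeros {10, 12}, ORHP poles {1, 3}
print(bw_range([10, 12], [1, 3]))  # (3, 10): 3 < BW < 10
```

When the returned lower limit exceeds the upper limit, the requirements conflict, as in Example 10.24.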

Example 10.25: The above theorems also justify the poor transient performance of Examples 5.33 and 5.34 of Chapter 5, Root Locus, which we reproduce here for ease of reference. (Make sure you can draw the right conclusions! Moreover, try to analyze them in the context of Theorems 10.14 and 10.15, which follow next.) For the system $P_1(s) = \frac{s-2}{s-1}$ and $C_1(s) = -31 \times \frac{s+0.2}{s(s+40)}$ in a negative unity feedback structure, the step response is plotted in the left panel of Fig. 10.18. For the system $P_2(s) = \frac{s-1}{s-2}$ with $C_2(s) = 75 \times \frac{s-0.2}{s(s-60)}$, the step response is given in the right panel of the same figure.

Figure 10.18 Example 10.25, Left: System 1, Right: System 2.

It should be noted that in the case of complex poles and zeros, some researchers use the largest and smallest real parts of the NMP poles and zeros, and some their magnitudes. For NMP zeros the real part, and for NMP poles the magnitude, appear to be the safer estimates, resulting in better response. We close this discussion by emphasizing that Theorem 10.13 has other pleasing interpretations, which you will provide in Exercise 10.23. Next we present a theorem which establishes a lower bound on the magnitude of the undershoot of a system with an NMP zero.


Theorem 10.14: Consider a system (either open- or closed-loop) with unity DC gain and an NMP zero at $s = z > 0$. Assume that the settling time of the system is $t_s$, i.e., $1 - \delta \le |y(t)| \le 1 + \delta$ with $\delta \ll 1$ for all $t > t_s$. Then the undershoot magnitude of the system, $M_{us}$, is estimated as
$$M_{us} \ge \frac{1 - 2\delta}{e^{zt_s} - 1}.$$ Δ:

Example 10.26: The above result establishes the fundamental limitation that there is a tradeoff between a fast step response and a small undershoot, because if $zt_s \ll 1$ then the above formula yields $M_{us} > \frac{1}{zt_s}$.
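Theorem 10.14 can be checked against a simulation. A sketch, assuming the illustrative NMP system $T(s) = (1-s)/(s+1)^2$ (our choice: unity DC gain, zero at $z = 1$), whose step response is $y(t) = 1 - (1+2t)e^{-t}$:

```python
import numpy as np
from scipy.signal import step, TransferFunction

z, delta = 1.0, 0.02
Tsys = TransferFunction([-1, 1], [1, 2, 1])    # (1 - s)/(s + 1)^2

t = np.linspace(0, 20, 20001)
_, y = step(Tsys, T=t)

Mus = -min(y)                                  # undershoot magnitude
ts = t[np.where(abs(y - 1) > delta)[0][-1]]    # last exit from +/-delta band
bound = (1 - 2 * delta) / (np.exp(z * ts) - 1)
print(Mus, bound)  # the simulated undershoot respects the (loose) bound
```

Here the bound is far from tight; its force appears when $zt_s \ll 1$, i.e., when a fast settling time is demanded relative to the NMP zero.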

Similarly, we can offer a theorem which establishes a lower bound on the magnitude of the overshoot of a system with an MP zero.

Theorem 10.15: Consider a system (either open- or closed-loop) with unity DC gain and an MP zero at $s = -z$, $z > 0$. Assume that the system has dominant poles with real part $\sigma = -p$, $p > 0$. If $z/p \ll 1$ and $K$ is chosen such that $|1 - y(t)| < Ke^{-pt}$ for all $t > t_s$, then the magnitude of the overshoot of the system, $M_{os}$,[8] is estimated as
$$M_{os} \ge \frac{1}{e^{zt_s} - 1}\left(1 - \frac{Kz/p}{1 - z/p}\right).$$ Δ:

Example 10.27: The above result substantiates the fundamental limitation that there is a tradeoff between a fast step response and a small overshoot for the aforementioned system, because if $zt_s \ll 1$ and $Kz/p \ll 1$ then the above formula dictates $M_{os} > \frac{1}{zt_s}$.

Finally, we present a corollary of the above results, but in the form of a theorem due to its importance.

Theorem 10.16: Consider the system $L(s) = \frac{s-z}{s-p}P_1(s)$, with $z > 0$, $p > 0$, where $P_1(s)$ is MP and stable. Then the following fundamental limitations can be shown:

1. If $z > p$ then $M_{os} \ge \frac{p}{z-p}$.
2. If $p > z$ then $M_{us} \ge \frac{z}{p-z}$.
3. There hold $S_{max} \ge \left|\frac{z+p}{z-p}\right|$ and $T_{max} \ge \left|\frac{z+p}{z-p}\right|$. Δ:

[8] Note that the relation between $M_{os}$ and $M_p$ of Chapter 4 is $100M_{os} = M_p$. Recall that $M_{os}$ and $M_{us}$ are defined in Fig. 4.56 of Chapter 4.
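Part (3) of Theorem 10.16 can be checked numerically. A sketch, assuming the illustrative loop $L(s) = -3(s-2)/((s-1)(s+5))$ (our choice: $z = 2$, $p = 1$, closed-loop stable), for which the bound predicts $S_{max} \ge |(z+p)/(z-p)| = 3$:

```python
import numpy as np

# L(s) = -3(s - 2)/((s - 1)(s + 5));  1 + L = (s^2 + s + 1)/((s - 1)(s + 5))
# => S = (s - 1)(s + 5)/(s^2 + s + 1), with a stable closed loop.
w = np.logspace(-2, 2, 4000)
s = 1j * w
S = (s - 1) * (s + 5) / (s ** 2 + s + 1)
Smax = max(abs(S))

z, p = 2.0, 1.0
bound = abs((z + p) / (z - p))   # = 3
print(Smax, bound)  # Smax comfortably exceeds the bound
```

The peak here (around $\omega \approx 1\ \mathrm{rad/s}$) is well above 3; the bound says that no stabilizing controller for this NMP pole-zero pair can do better than 3.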


Recall that in the case of Theorems 10.5 and 10.8 the lower bounds of Part (3) are in general much larger. Next we recall a theorem on strong stabilizability from Section 3.12, which is actually a fundamental limitation implied by NMP poles and zeros. A system is called strongly stabilizable if it can be stabilized by a stable controller.

Theorem 10.17: Consider the plant $P(s)$ with some NMP poles and zeros (and possibly some MP ones). Then it is strongly stabilizable iff it has an even number of real ORHP poles between every pair of real CRHP zeros. Δ:

Also recall that we presented some important corollaries of this theorem in Section 3.12. We do not reproduce them here for the sake of brevity.

Example 10.28: Is the following system stabilizable by a stable controller?
$$L(s) = K\frac{(s^2 - 2s + 2)(s^2 - 9)}{(s+1)^2(s-2)(s-5)^2}.$$
The plant has its real ORHP zeros at $3$ and $\infty$. It has an even number of real ORHP poles between them, at $5, 5$; hence it is strongly stabilizable.
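The parity-interlacing test of Theorem 10.17 is mechanical, so it is easy to script. A small helper (names are ours; `math.inf` stands for the zero at infinity of a strictly proper plant):

```python
import math

def strongly_stabilizable(real_orhp_poles, real_crhp_zeros):
    """Parity interlacing property: an even number of real ORHP poles
    (counted with multiplicity) must lie between every pair of
    consecutive real CRHP zeros, infinity included."""
    zs = sorted(real_crhp_zeros)
    for lo, hi in zip(zs, zs[1:]):
        count = sum(1 for p in real_orhp_poles if lo < p < hi)
        if count % 2 != 0:
            return False
    return True

# Example 10.28: real CRHP zeros {3, inf}, real ORHP poles {2, 5, 5}
print(strongly_stabilizable([2, 5, 5], [3, math.inf]))   # True
# Dropping one pole at 5 leaves an odd count between the zeros
print(strongly_stabilizable([2, 5], [3, math.inf]))      # False
```

Note, per the closing remark of this part, that passing this test says nothing about internal stability; it only guarantees the existence of a stable stabilizing controller.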

We close this part by stressing that some of the theorems of the previous Section 10.6 on integral and Poisson integral constraints can also be seen as constraints imposed by NMP poles and zeros. We also refer the reader to item 14 of Further Readings. Finally, we remind you not to forget item 12 of Further Readings of Chapter 3 with regard to internal stability and strong stability: the theorem does not guarantee internal stability.

10.7.3 Imaginary-axis poles and zeros

An important special case of Theorem 10.13 is when the plant has $j\omega$-axis poles and zeros. It is actually a corollary, but because of its importance we present it as a theorem.

Theorem 10.18: Consider the situation of Theorem 10.13. Then there holds:

1. If the plant has a pair of simple zeros at $\pm j\omega_0$, then $\int_0^\infty e(t)\cos(\omega_0 t)\,dt = 0$ and $\int_0^\infty e(t)\sin(\omega_0 t)\,dt = 1/\omega_0$.
2. If the plant has a pair of poles at $\pm j\omega_0$, then $\int_0^\infty e(t)\cos(\omega_0 t)\,dt = 0$ and $\int_0^\infty e(t)\sin(\omega_0 t)\,dt = 0$.

Δ: An important corollary of this theorem is that as the $j\omega$-axis zero tends to the origin, $e_{max} \to \infty$, as shown below.


Example 10.29: Consider Theorem 10.18 and denote the settling time, assuming its existence, by $t_s := \inf_T \{|e(t)| = 0,\ \forall t \ge T\}$. Thus from Part (1) we have $\int_0^{t_s} e(t)\sin(\omega_0 t)\,dt = 1/\omega_0$. Assuming $t_s \le \pi/\omega_0$ and defining $e_{max}$ as the maximum value of $|e(t)|$ on the interval $(0, t_s)$, one has $\int_0^{t_s} e_{max}\sin(\omega_0 t)\,dt \ge \int_0^{t_s} e(t)\sin(\omega_0 t)\,dt = 1/\omega_0$. Therefore $e_{max}(1 - \cos\omega_0 t_s)/\omega_0 \ge 1/\omega_0$, or $e_{max} \ge 1/(1 - \cos\omega_0 t_s)$. Hence, if $\omega_0 t_s \to 0$ then $e_{max} \to \infty$, and this is a fundamental limitation in the control of such systems.

Observe that the practical implication is that such a system is in effect uncontrollable, since no system can tolerate such a large error. (The reason simply is that this actually means a large faulty output and thus disruption/destruction of the system.) On the other hand, observe that the limiting case is when the zero is at the origin. From Chapters 3 and 4 we know that the system becomes internally unstable if we try to design a step-tracking controller for it. These observations are obviously consistent with each other. A numerical example follows next.

Example 10.30: To illustrate the above result, consider a closed-loop control system given by $\frac{L}{1+L} = 8 \times \frac{100s^2 + 1}{(s+2)^3}$. The error signal, which is given by $\frac{1}{1+L}R$ for a step reference input, is provided in Fig. 10.19. As observed, $e_{max}$ is very large.

Figure 10.19 Example 10.30, The error signal.
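Part (1) of Theorem 10.18 can be checked on the system of Example 10.30, whose closed loop $T(s) = 8(100s^2+1)/(s+2)^3$ has zeros at $\pm j0.1$ ($\omega_0 = 0.1$); for a unit step reference the theorem predicts $\int_0^\infty e(t)\sin(\omega_0 t)\,dt = 1/\omega_0 = 10$:

```python
import numpy as np
from scipy.signal import step, TransferFunction

# T(s) = 8(100 s^2 + 1)/(s + 2)^3: unity DC gain, zeros at +/- j*0.1
Tsys = TransferFunction([800, 0, 8], [1, 6, 12, 8])
w0 = 0.1

t = np.linspace(0, 60, 60001)
_, y = step(Tsys, T=t)
e = 1.0 - y                          # error for a unit step reference

val = np.trapz(e * np.sin(w0 * t), t)
print(val)  # ~ 1/w0 = 10
```

Despite the violent transient (the large $e_{max}$ of Fig. 10.19), the weighted area of the error is pinned to $1/\omega_0$ by the near-axis zeros, which is exactly why shrinking $\omega_0 t_s$ forces $e_{max}$ to blow up.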


A typical actual system with near-$j\omega$-axis zeros is the rolling system, which seriously challenged engineers in the past (Goodwin et al., 1999). We close this section by stressing that the theorems of Sections 10.6 and 10.7 show correlations between the time and frequency domain constraints. For instance, Theorems 10.6 and 10.13 together reveal such a correlation. Other facets of these correlations are discussed in the worked-out Problems 10.12–10.14. Moreover, from previous chapters, e.g., Problem 7.24 of Chapter 7, we know that $j\omega$-axis zeros also put an upper bound on the achievable bandwidth, and near-$j\omega$-axis zeros in effect turn the system into a localized filter; see Exercise 7.27. They have the same (but slightly lesser) effect on the bandwidth as well. Speaking of bandwidth, let us remind you of the heuristic mentioned in item 3 of Further Readings of Chapter 7, which can be interpreted in the context of fundamental limitations, at least in a larger picture.

10.8 Actuator and sensor limitations

It is good to start with a general statement, or rather a fact, which is the observation of extensive simulation results like Problems 9.6–9.10 of Chapter 9, Frequency Domain Synthesis and Design: controllers (even of the same order) which result in a higher performance require more expensive actuators, since the control signal changes more sharply and may even have a larger magnitude. In this section we discuss actuator and sensor restrictions. Actuators have restrictions or limitations both for large and small movements, apart from issues like backlash. Sensors, on the other hand, have limitations in the precision and speed of their response. Moreover, they may saturate at an upper limit and have a lower threshold as well. These issues are discussed below.

10.8.1 Maximal actuator movement

Actuators always have maximal movement constraints in the form of saturation limits on both the amplitude and the speed of the movement. In a situation in which maximal actuator movement is a restriction, it is usually the case that the closed-loop bandwidth is much larger than the plant bandwidth. It can be shown that in this case the initial and/or maximal value of the control signal will be large.

Example 10.31: To investigate the above issue we observe the control signal of the following system. Let $P(s) = \frac{1}{s+1}$, which has $BW = 1\ \mathrm{rad/s}$. For $T_1(s) = \frac{10}{s+10}$ with $BW = 10\ \mathrm{rad/s}$ and $T_2(s) = \frac{100}{s^2 + 12s + 100}$ with $BW = 11.47\ \mathrm{rad/s}$, we have the controllers $C_1(s) = \frac{10(s+1)}{s}$ and $C_2(s) = \frac{100(s+1)}{s^2 + 12s}$. For a step input the control signals are depicted in Fig. 10.20. As is observed, in both cases we need an actuator whose maximal movement is much larger than its steady-state value.

Figure 10.20 Example 10.31.
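The control signal for $C_1$ in Example 10.31 can be reproduced directly: for a step reference, $u$ is the step response of $C_1(s)S(s) = 10(s+1)/(s+10)$, so $u(0^+) = 10$ while $u(\infty) = 1$:

```python
import numpy as np
from scipy.signal import step, TransferFunction

# U(s)/R(s) = C1/(1 + P*C1) = 10(s + 1)/(s + 10)
Usys = TransferFunction([10, 10], [1, 10])

t = np.linspace(0, 2, 2001)
_, u = step(Usys, T=t)
print(max(u), u[-1])  # initial peak ~10 versus steady state ~1
```

The tenfold ratio between the initial and steady-state control effort mirrors the tenfold increase of the closed-loop bandwidth over the plant bandwidth.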

Question 10.4: How can we quantifiably discuss the power and energy in such scenarios?

If the control signal violates the maximal actuator movement constraint, then at least the performance will be degraded. This may take different forms, e.g., a drastically slower response or a large overshoot. It may also result in instability. Theoretical analysis of the situation is a sophisticated problem, outside the scope of this course. What we do in this course (and of course always in practice) is to simulate the system for the available actuator. An example follows.

Example 10.32: In the above example suppose that the actuator can move in $[-2, 2]$. Thus we insert such a saturation block between the controller and the plant and simulate the system in SIMULINK. As observed in Fig. 10.21, which is given for $C_1$, the output performance degrades.


Figure 10.21 Example 10.32, Output with and without actuator saturation.

On the other hand, it should be noted that sometimes actuator limitation turns out to have a good effect, reducing large overshoots in the output at the expense of only a slightly slower response. That is to say, in certain cases it is beneficial. This is shown in the worked-out Problem 10.25. Also note that the solution to the above problem is known as the anti-windup technique, of which several versions are available in the literature (Zaccarian and Teel, 2011; Li and Lin, 2016; Sajjadi-Kia and Jabbari, 2013). We close this section by bringing to the reader's attention (Lin and Saberi, 1993), which addresses semi-global exponential stabilization of systems with actuator saturation and is a milestone in this direction.
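The effect of the saturation in Example 10.32 can be reproduced without SIMULINK by a simple fixed-step simulation of the loop (a forward-Euler sketch; $P(s) = 1/(s+1)$, the PI controller $C_1(s) = 10 + 10/s$, saturation at $\pm 2$):

```python
import numpy as np

def simulate(sat_limit=None, t_end=5.0, dt=1e-4):
    # x: plant state (P = 1/(s+1), y = x); q: controller integrator state
    x = q = 0.0
    ys = []
    for _ in np.arange(0.0, t_end, dt):
        e = 1.0 - x                   # unit step reference
        u = 10.0 * e + 10.0 * q       # C1(s) = 10 + 10/s
        if sat_limit is not None:
            u = np.clip(u, -sat_limit, sat_limit)
        q += dt * e                   # integrator (winds up while saturated)
        x += dt * (-x + u)            # plant dynamics
        ys.append(x)
    return np.array(ys)

y_lin = simulate(None)
y_sat = simulate(2.0)
print(y_lin[2000], y_sat[2000])  # at t = 0.2 the saturated loop lags badly
```

Without saturation the closed loop is $10/(s+10)$ and $y(0.2) \approx 0.86$; with the $\pm 2$ limit the plant can only follow $\dot{x} = -x + 2$ at first, so the response is far slower and the wound-up integrator then produces an overshoot, consistent with Fig. 10.21.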

10.8.2 Minimal actuator movement

Similarly to the maximal movement constraints, actuators always have minimal movement constraints as well. That is to say, if their desired movement is below (i.e., finer than) a certain magnitude, they either cannot meet it well, or cannot meet it at all. Apart from precision levels, this is usually tied to frictional effects; that is, the actuator sticks. When the actuator is in such a mode, the integrators of the system wind up until enough force is accumulated to overcome the frictional force. Therefore it usually manifests itself as a self-sustaining oscillation as the actuator undergoes a cycle of sticking, moving, sticking, moving, and so forth. Examples of this problem can be found in many industrial applications, including casting robots and irrigation systems.


The solution to the minimal actuator movement limitation is usually twofold. In certain applications like water flow control, a structural redesign helps: two actuators are used in tandem in such a way that the smaller one provides the fine-trim movement whereas the bigger one provides the usual other movements. This, however, does not help in certain applications like continuous mold casting, as the smaller high-precision valve will soon clog with solidified mold. In such cases adding a high-frequency low-amplitude signal, the dither signal, to the nominal input helps, as it keeps the valve in motion and thus reduces the stiction effects, but at the price of higher wear and tear of the actuator. Finally, we add that detection of valve stiction is an important topic which is considered in works like (Garcia et al., 2017).

10.8.3 Sensor precision

A sensor is a device that senses and reproduces a physical quantity. One of its fundamental limitations is precision. We have seen in Chapter 1 that classical linear control is at most as good as the measurement is. Today there are high-precision sensors for almost all applications. Because $Y = TR + SD - TN$, we conclude that in the frequency region where $|N|$ is significant, $|T|$ should be small. Thus, since the measurement noise is often of high frequency, it usually sets an upper bound on the achievable closed-loop bandwidth.

10.8.4 Sensor speed

As we have said above, today there are high-precision sensors for almost all applications. Nevertheless, these sensors do not respond instantaneously; they have dynamics of the form $P_s = T/(s+T)$ where $T \gg 1$, mostly because of their protective shield. That is, they induce a lag on the actual signal. Inclusion of the sensor dynamics in theoretical analysis and synthesis is thus necessary. We have learned this in Chapter 4, Time Response, for simple systems. For sophisticated systems a theoretical design is cumbersome and may be replaced by an optimization-based design, e.g., with the help of MATLAB. This is outside the scope of this course. We also note that a practical solution that crosses the mind is to pass the signal through a high-pass filter $(T_1^{-1}s + 1)/(T_2^{-1}s + 1)$, $T_2 > T_1$. Nonetheless, this does not eliminate the dynamics of the sensor; it just speeds up the response at the expense of amplifying high-frequency noise. Thus this thought is often discarded and a theoretical analysis and design is adopted.

Example 10.33: Consider the system $P(s) = 2/(s+1)$ and $C(s) = 1/s$ in a negative unity feedback structure. The system response without a sensor, and with a sensor with dynamics $T = 10$ and $T = 50$, are depicted below. Fig. 10.22 reveals that, as expected, a faster sensor results in a better performance; the responses are hardly distinguishable. (Question: How do their control signals compare? Do we need to make a tradeoff? See Exercise 10.34.)


Figure 10.22 Inclusion of sensor dynamics.

Further information can be found, e.g., in (Demir and Tiwari, 2014; de Fornel and Louis, 2010). We close this section by reminding the reader that the third case of fundamental limitations of sensors is saturation. The references (Lin and Hu, 2001; Saberi et al., 1996) address semi-global stabilization of systems with sensor saturation and are milestones in this direction.

10.9 Delay

Time delay is an indispensable part of many actual control systems, as we exemplified in Chapter 2, System Representation. Consider the plant $P(s) = e^{-sT}P_1(s)$. We must wait $T$ seconds until the effect of the controller appears in the output. The fundamental limitation that delay causes is an upper bound on the achievable bandwidth, $BW < 1/T$. Other special results can also be proven. For instance, if in the above system $P_1(s)$ is stable with no NMP zeros, then with the conventional Smith predictor the ideal transfer function $T(s) \approx e^{-sT}$ is achievable (why?) and the ideal sensitivity function is $S(s) = 1 - e^{-sT}$. (See also Further Readings, item 16.) In Chapter 6 we also saw that the stability condition of a system may be changed by the insertion of delay. We illustrate this here. We recall that in certain cases the computed delay margins are not acceptable, i.e., do not change the stability condition of the system, as the corresponding phase margins are not acceptable. On the other hand, in certain cases the computed DM is acceptable.


Example 10.34: We recall Example 6.27 of Chapter 6, Nyquist Plot. As we discussed, the correct delay margin is $DM = 0.3110$ s. This is shown in Fig. 10.23. We simulate the system for a bounded input (the unit step) and verify the BIBO stability of the system by observing the output. We consider three time delays: $T = 0.1$ s in the left panel, $T = 0.3$ s in the middle panel, and $T = 0.35$ s in the right panel. As is observed, with $T < DM$ the system remains stable, but its performance of course deteriorates with increasing delay.

Figure 10.23 Example 10.34: destabilizing effect of time delay.

Question 10.5: Recall Worked-out Problem 6.16 of Chapter 6, Nyquist Plot. As discussed, the system is unstable and is stabilized with the time delay DM = 0.5262 s. However, simulation shows that this time delay does not stabilize the system. Have we drawn the wrong conclusion about Problem 6.16? Or have we solved it correctly and there is another reason for this inconsistency of results?

10.10 Eigenstructure assignment by output feedback

Eigenstructure refers to the eigenvalues and eigenvectors of a system altogether. Eigenstructure assignment refers to assigning the eigenvalues and eigenvectors simultaneously, by designing the controller. In the literature the idea is attributed to S. Srinathkumar and R. P. Rhoten, who formulated and solved it for the multivariable state feedback problem in 1975. Nevertheless, prior to that it was used in the 'modal analysis' of B. Porter in his 1969 book, which was further expanded in his 1972 book co-authored with R. Crossley. Subsequently Porter authored and co-authored several articles on the topic with J.J. D'Azzo, L.R. Fletcher, T.A. Kennedy, A. Bradshaw, etc.⁹ Note that eigenstructure assignment goes beyond the usual pole assignment (or pole placement, as it is also called) problem, where only the eigenvalues are assigned by controller design. The importance of the eigenstructure assignment problem is that the eigenvectors do shape the time response of the system: while the steady-state behavior (more precisely, stability or instability) of the system is affected by its eigenvalues only, its time response (i.e., how it evolves in time) is shaped by both the eigenvalues and the eigenvectors, i.e., by the eigenstructure of the system. To explain the reason, for simplicity of exposition we assume that the closed-loop poles are real and distinct. That is, for the general MIMO system,

ẋ = Ax + Bu,  y = Cx    (10.1)

with m inputs, l outputs, and the controller u = K(r − y), K ∈ R^{m×l}, we assume that the closed-loop state matrix A_cl = A − BKC ∈ R^{n×n} has distinct eigenvalues. Thus there holds A_cl v_i = λ_i v_i and w_iᵀ A_cl = λ_i w_iᵀ, for i = 1, ..., n, where λ_i, v_i, and w_i are the real eigenvalues and the associated right and left eigenvectors, respectively. It is possible to choose the matrices V and W of right and left eigenvectors through

V = [v₁ ⋯ v_n],  W = [w₁ᵀ; ⋮; w_nᵀ] = V⁻¹    (10.2)

where V is called the modal matrix of the system. Also V⁻¹A_cl V = J = diag{λ₁, ..., λ_n} is the Jordan form of the system. Thus, as we know from linear and matrix algebra, there holds

e^{A_cl t} = I + A_cl t + (1/2!)A_cl²t² + (1/3!)A_cl³t³ + ⋯
          = VV⁻¹ + VJV⁻¹t + (1/2!)(VJV⁻¹)²t² + (1/3!)(VJV⁻¹)³t³ + ⋯
          = V(I + Jt + (1/2!)J²t² + (1/3!)J³t³ + ⋯)V⁻¹
          = Ve^{Jt}V⁻¹    (10.3)

⁹ On the other hand, we should add that the fact that there are degrees of freedom in multivariable pole assignment by state feedback was discovered earlier by others, probably first by Moore (1967).
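The factorization e^{A_cl t} = Ve^{Jt}V⁻¹ in (10.3) is easy to confirm numerically; a minimal sketch with an arbitrary diagonalizable matrix (illustrative, not from the text):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative closed-loop state matrix with distinct real eigenvalues -1, -2
Acl = np.array([[0.0, 1.0],
                [-2.0, -3.0]])

lam, V = np.linalg.eig(Acl)  # columns of the modal matrix V are right eigenvectors
t = 0.7

# e^{Acl t} via the modal decomposition V e^{Jt} V^{-1}
E_modal = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)

# Reference: the matrix exponential computed directly
assert np.allclose(E_modal, expm(Acl * t))
```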


On the other hand e^{Jt} = diag{e^{λ₁t}, ..., e^{λ_n t}} and thus e^{A_cl t} = Σ_{i=1}^{n} e^{λ_i t} v_i w_iᵀ. Therefore, denoting the jth column of BK by b̃_j, we have

x(t) = e^{A_cl t}x(t₀) + ∫_{t₀}^{t} e^{A_cl(t−τ)}BKr(τ)dτ
     = Σ_{i=1}^{n} e^{λ_i t} v_i w_iᵀ x(t₀) + Σ_{j=1}^{m} Σ_{i=1}^{n} e^{λ_i t} v_i w_iᵀ b̃_j ∫_{t₀}^{t} e^{−λ_i τ} r_j(τ)dτ    (10.4)

and

y(t) = Σ_{i=1}^{n} e^{λ_i t} Cv_i w_iᵀ x(t₀) + Σ_{j=1}^{m} Σ_{i=1}^{n} e^{λ_i t} Cv_i w_iᵀ b̃_j ∫_{t₀}^{t} e^{−λ_i τ} r_j(τ)dτ.    (10.5)

It is thus observed that the response of the system is shaped by the eigenstructure of the system. Note that the above formulation is clearly valid for the SISO system as well. In this case l = m = 1 and thus j = 1, K = k, b̃₁ = kb₁ is the only column of BK, and C is a row vector.

Remark 10.11: So far we have usually considered the control signal as U = C(s)(R − Y) where C(s) denotes a dynamic controller. But in the above development, as in the majority of the literature, we consider u = K(r − y) where K is a static controller. The reason is twofold. (1) It is simpler, and we are presenting the basics. (2) Every dynamic controller design can be cast as a static controller design with the help of a transformation. This is shown in the following theorem.

Theorem 10.19: Let the plant and controller models be given by ẋ = Ax + Bu, y = Cx + Du and ẋ_c = A_c x_c + B_c u_c, y_c = C_c x_c + D_c u_c, respectively, and let the controller be in the forward path. That is, u_c = r − y and y_c = u, which is the usual case. If the plant is strictly proper, D = 0, then the whole system is stable iff the static matrix

K′ = [−D_c  C_c; −B_c  A_c]

can be found such that the matrix

A′ + B′K′C′ := [A 0; 0 0] + [B 0; 0 I] K′ [C 0; 0 I]

is stable. Δ

In fact, in the above structure the dynamics of the whole system are described by ẋ̄ = Āx̄ + B̄r, y = C̄x̄ + D̄r, where the system state and matrices are given by

x̄ = [x; x_c],  Ā = [A − BD_cC  BC_c; −B_cC  A_c],  B̄ = [BD_c; B_c],  C̄ = [C 0],  D̄ = 0.

Thus by proper design of the controller the system matrices Ā, B̄ are found such that the system has satisfactory performance. In particular, we note that stability of the system depends solely on the matrix Ā. It is easily seen that

Ā = [A 0; 0 0] + [B 0; 0 I] [−D_c  C_c; −B_c  A_c] [C 0; 0 I].

Hence, stability of the system is equivalent to that of the matrix A′ + B′K′C′, which is a static output feedback problem. We note that most actual plants are strictly proper and therefore the above discussion is rather generic. The case of proper plants, especially nonsquare ones, is tricky and computationally complicated.
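The block identity Ā = A′ + B′K′C′ underlying Theorem 10.19 can be checked numerically with randomly generated matrices of compatible sizes; a minimal sketch (all dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, nc, m, l = 3, 2, 2, 2  # plant states, controller states, inputs, outputs

A, B, C = rng.normal(size=(n, n)), rng.normal(size=(n, m)), rng.normal(size=(l, n))
Ac, Bc = rng.normal(size=(nc, nc)), rng.normal(size=(nc, l))
Cc, Dc = rng.normal(size=(m, nc)), rng.normal(size=(m, l))

# Closed-loop state matrix of the strictly proper plant with the
# dynamic controller in the forward path (uc = r - y, yc = u):
Abar = np.block([[A - B @ Dc @ C, B @ Cc],
                 [-Bc @ C,        Ac    ]])

# The same matrix written as the static output feedback problem A' + B'K'C':
Ap = np.block([[A, np.zeros((n, nc))], [np.zeros((nc, n + nc))]])
Bp = np.block([[B, np.zeros((n, nc))],
               [np.zeros((nc, m)), np.eye(nc)]])
Cp = np.block([[C, np.zeros((l, nc))],
               [np.zeros((nc, n)), np.eye(nc)]])
Kp = np.block([[-Dc, Cc], [-Bc, Ac]])

assert np.allclose(Abar, Ap + Bp @ Kp @ Cp)
```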


Consequently we simply present our exposition for the case of static controllers. This is what we do in the sequel. Eigenstructure assignment by output feedback is more involved than that with state feedback, but there exist several necessary and sufficient conditions for it which differ in exposition but are of course identical in content. The results and methods for output feedback pole placement and eigenstructure assignment by static controllers have been surveyed in (Syrmos et al., 1997; Liu and Patton, 1998; Bavafa-Toosi, 2000). From among the available results, that of (Bavafa-Toosi, 2000) seems simpler and more appropriate for this undergraduate course, especially because of its relation to the next sections through the appearance of the matrix V, directly or indirectly; see Remark 10.18. The result is cited below in the following two sections on regulation and tracking.

10.10.1 Regulation

In the problem of regulation we can substitute r = 0. However, to keep the formulation general we do not do so. We start with a theorem which is valid for all dimensions l, m, n; however, it is natural that both l, m < n hold, as other cases are trivial. Without loss of generality we assume that B and C are full rank. Moreover, the λ_i can be repeated, but we suppose they are nondefective. Note that all these assumptions are for simplicity of notation; of course, nondefectiveness has some additional advantages, as we shall discuss in Section 10.12.

Theorem 10.20: Consider the setting of the above development where u = K(r − y). Then λ_i and v_i are assignable by K iff exactly l of the n vectors [Cv_i; p_i], for i = 1, ..., n, are linearly independent, where [v_i; p_i], for i = 1, ..., n, belong to the null space of the matrix pencils [−λ_iI + A  B], for i = 1, ..., n. Then the matrix K is given by the Moore-Penrose pseudo-inverse K = P(CV)ᵀ((CV)(CV)ᵀ)⁻¹, where P is found from

[CV; P] = [Cv₁ ⋯ Cv_n; p₁ ⋯ p_n]. Δ

The above theorem can also be stated in the dual form as follows.

Theorem 10.21: Consider the setting of the above development. Then λ_i and w_i are assignable by K iff exactly m of the n vectors [Bᵀw_i; q_i], for i = 1, ..., n, are linearly independent, where [w_i; q_i], for i = 1, ..., n, belong to the null space of the matrix pencils [−λ_iI + Aᵀ  Cᵀ], for i = 1, ..., n. Then the matrix K is given by the Moore-Penrose pseudo-inverse¹⁰ K = ((WᵀB)ᵀ(WᵀB))⁻¹(WᵀB)ᵀQᵀ, where Q is found from

[BᵀW; Q] = [Bᵀw₁ ⋯ Bᵀw_n; q₁ ⋯ q_n]. Δ

¹⁰ For more details on the pseudo-inverse or generalized inverse the reader can refer to any text on linear algebra and matrix analysis like Horn and Johnson (2012).


It is clear that for the generic system, given a fixed-order controller like a static controller, neither pole assignment nor eigenstructure assignment may be arbitrarily possible by output feedback, and this is a fundamental limitation; further explanation follows in Remarks 10.13-10.16. A simple example shows this fact.

Example 10.35: Consider the system ẋ = Ax + Bu, y = Cx, u = K(r − y), where

A = [1 2; 3 −4],  B = [1; 1],  C = Bᵀ,  r = 0,  K = k ∈ R.

That is, the objective is stabilization. Without using the above theorems, by direct computation investigate the fundamental limitations for pole placement by static output feedback.

There holds u = −Ky and ẋ = (A − BKC)x. Hence we should compute the eigenvalues of A − BKC, which are the closed-loop poles of the system, and investigate the limitations in their assignability. We have

A − BKC = [1−k  2−k; 3−k  −4−k].

Its eigenvalues are the roots of the determinant of

sI − (A − BKC) = [s−(1−k)  −(2−k); −(3−k)  s−(−4−k)].

Therefore [s − (1−k)][s + (4+k)] − (2−k)(3−k) = 0, or s² + (2k+3)s + (8k−10) = 0. It is observed that s₁ + s₂ = −(2k+3) and s₁s₂ = 8k−10, and hence s₁s₂ + 4(s₁+s₂) = −22. These equations clearly show that generic pole assignment is not possible. For instance, it is not possible to have s₁ = s₂ = −1 or s₁ = −1, s₂ = −2, but it is possible to have s₁ = −1, s₂ = −6. On the other hand, e.g., the closed-loop pole s₁ = −5 is not assignable since it is accompanied by s₂ = 2, which is unstable, and thus the system will be unstable. We close this example by stressing that here we restricted ourselves to the problem of eigenvalue assignment, which is a subset of the problem of eigenstructure assignment, and observed that there are some fundamental limitations. Obviously the situation gets worse for the original problem of eigenstructure assignment. See also Exercise 10.38.
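The k-independent constraint s₁s₂ + 4(s₁+s₂) = −22 of Example 10.35 can be confirmed numerically for any gain; a minimal sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, -4.0]])
B = np.array([[1.0], [1.0]])
C = B.T

for k in [-1.0, 0.0, 2.0, 5.0]:
    s = np.linalg.eigvals(A - B * k @ C)   # closed-loop poles for gain k
    # The poles always satisfy s1 + s2 = -(2k+3) and s1*s2 = 8k-10 ...
    assert np.isclose(s.sum().real, -(2 * k + 3))
    assert np.isclose(np.prod(s).real, 8 * k - 10)
    # ... hence the k-independent constraint s1*s2 + 4*(s1+s2) = -22:
    assert np.isclose((np.prod(s) + 4 * s.sum()).real, -22)

# e.g. k = 2 places the poles at {-1, -6}, as noted in the example
assert np.allclose(sorted(np.linalg.eigvals(A - B * 2.0 @ C).real), [-6, -1])
```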

Remark 10.12: Many of the existing necessary and sufficient conditions for multivariable pole assignment and eigenstructure assignment are obtained for the general system without any restriction on its dimensions. However, a sufficient condition for the generic (i.e., almost always) analytical (as opposed to numerical) solvability of the conditions is m + l > n, as in the case of our results. The most interesting question that one may intuitively be intrigued by is whether the condition ml > n is a sufficient condition for the problem. Note that this is less conservative than the previous condition. In fact, ml is the number of elements of K, which are the unknowns of the problem. Thus if the problem admits a solution it has a linear nature (compare with the linear system of equations Ax = b). This problem has been theoretically considered by X. A. Wang in 1996 (Wang and Konigorski, 2013) in an algebraic geometry setting using Grassmannian and central projection techniques.

764

Introduction to Linear Control Systems

In the next examples we use Theorem 10.20. We note that when r = 0 it makes no difference whether we write u = Ky or u = −Ky, since the sign difference is accounted for in the signs of the elements of K. In these examples we use the former. Finally, note that in this first undergraduate course, for the sake of simplicity, we look at the eigenstructure assignment problem from the pole assignment standpoint. More precisely, although we accomplish the eigenvalue assignment with the help of eigenvector assignment, we do not assign any desired eigenvectors; the eigenvectors are chosen merely such that the conditions of Theorem 10.20 are satisfied. In the second course, on state space methods, we carry out the eigenstructure assignment where assigning prescribed eigenvalues and eigenvectors are both design objectives.

Example 10.36: Consider the system ẋ = Ax + Bu, y = Cx, u = Ky. Find the static controller K such that the closed-loop poles are assigned at Λ = {−1, −2, −5}. The system parameters are

A = [0 1 0; 0 0 1; 0 0 0],  B = [1 0; 1 0; 1 1],  C = [1 0 0; 0 1 0].

Implementing the conditions of Theorem 10.20 we find an infinite number of solutions, some like:

1. [v₁ᵀ p₁ᵀ] = [1 0 1 | −1 0], [v₂ᵀ p₂ᵀ] = [6 1 11 | −13 −9], [v₃ᵀ p₃ᵀ] = [3 2 7 | −17 −18]
2. [v₁ᵀ p₁ᵀ] = [1 −1 1 | 0 −1], [v₂ᵀ p₂ᵀ] = [−3 2 −8 | 4 12], [v₃ᵀ p₃ᵀ] = [3 1 11 | −16 −39]
3. [v₁ᵀ p₁ᵀ] = [2 −1 2 | −1 −1], [v₂ᵀ p₂ᵀ] = [−3 0.75 −6.75 | 5.25 8.25], [v₃ᵀ p₃ᵀ] = [3 1.5 9 | −16.5 −28.5]

Corresponding to them we find

K₁ = [−1 −7; 0 −9],  K₂ = [−4 −4; −10 −9],  K₃ = [−3 −5; −5 −9].

Note that in each case exactly l = 2 vectors are linearly independent from among the vectors [Cv_i; p_i], i = 1, 2, 3.
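The first solution set of Example 10.36 can be pushed through the formula of Theorem 10.20 numerically. With the convention u = Ky used in these examples, p_i = KCv_i and the pencil condition reads (A − λ_iI)v_i + Bp_i = 0; a minimal sketch:

```python
import numpy as np

A = np.array([[0.0, 1, 0], [0, 0, 1], [0, 0, 0]])
B = np.array([[1.0, 0], [1, 0], [1, 1]])
C = np.array([[1.0, 0, 0], [0, 1, 0]])

lams = [-1.0, -2.0, -5.0]
V = np.array([[1.0, 6, 3], [0, 1, 2], [1, 11, 7]])   # columns v_i (solution set 1)
P = np.array([[-1.0, -13, -17], [0, -9, -18]])       # columns p_i

# Each [v_i; p_i] lies in the null space of [A - lam_i*I  B]
for lam, v, p in zip(lams, V.T, P.T):
    assert np.allclose((A - lam * np.eye(3)) @ v + B @ p, 0)

# K = P (CV)^T ((CV)(CV)^T)^{-1}  (Moore-Penrose right pseudo-inverse)
CV = C @ V
K = P @ CV.T @ np.linalg.inv(CV @ CV.T)
assert np.allclose(K, [[-1, -7], [0, -9]])           # K_1 of the example

# The closed-loop poles are indeed {-1, -2, -5}
poles = np.linalg.eigvals(A + B @ K @ C)
assert np.allclose(sorted(poles.real), [-5, -2, -1])
```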


Example 10.37: Consider the system ẋ = Ax + Bu, y = Cx, u = Ky. Find the static controller K such that the closed-loop poles are assigned at Λ = {−2, −3, −4, −5}. The system parameters are

A = [0 1 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0],  B = [1 0; 1 0; 0 1; 0 1],  C = [1 0 0 0; 0 1 0 0].

Implementing the design procedure we find the unique answer

K = [−37 23; 1138/24 −34].

What is noteworthy in this system is that if we want to assign a closed-loop pole at −1 we find out that it is not possible—a fundamental limitation. The reason is that the vector [Cv; p] corresponding to this pole is not expandable in terms of the corresponding vectors of the other poles.

Remark 10.13: The well-known state controllability condition rank[B AB ⋯ A^{n−1}B] = n of Kalman (1960)¹¹, which exists in other forms as well, is a necessary and sufficient condition for pole placement by state feedback. This condition is necessary but not sufficient for pole placement by output feedback. Thus state feedback controllability is another fundamental limitation in linear systems; the same is true for observability. Details will be studied in the second course on state space methods. There you will learn, e.g., that if an eigenvalue is uncontrollable it cannot be moved by feedback, nor can its associated left eigenvector be assigned differently—however, its right one can be.

Remark 10.14: In case the above problem does not admit a solution, a possibility is to find a near solution by optimization. In fact, given the state controllability of the system, the matrix K′ for pole assignment by state feedback can always be found. Thus the fundamental limitation in the output feedback problem can be restated as the limitation in decomposing the aforementioned matrix K′ (which is unique for SISO systems and nonunique for MIMO systems) as K′ = KC. When the decomposition is not possible we may aim at finding a matrix which admits a near decomposition or answer. This goes outside the scope of this book. See also Exercise 10.39.

Remark 10.15: It is instructive to clarify the following issue. State controllability depends only on the matrices A and B. This appears either in the form of Remark 10.13

¹¹ R. E. Kalman, Hungarian control theorist (1930–2016), who made significant contributions to systems and control theory in the state-space framework, mainly on filtering, controllability/observability decomposition, and realization.


or the vectors [v_i; p_i] which are obtained from [−λ_iI + A  B] in Theorem 10.20 (with obvious explanations). The connection with output feedback is through the matrix C, which appears either in the form of Remark 10.14 or in the vectors [Cv_i; p_i] of Theorem 10.20.

Remark 10.16: Lest there be a misunderstanding, it should be stressed that in the SISO case, provided that the pole assignment problem (for simple eigenvalues) admits a solution, the eigenvectors are determined uniquely and thus the eigenstructure is unique. In the MIMO case, however, it may be nonunique, and we are interested in the flexibilities of the MIMO case in order to satisfy some other design objectives. The SISO case is nevertheless also important for us because in many applications exact pole assignment is not a requirement but regional pole assignment is. Thus we search in the prescribed regions for points, i.e., eigenvalues, such that the associated eigenvectors (and thus the eigenstructure) are as desired. Finally, we should add that the cases of semisimple and degenerate eigenvalues are a little tricky and require special treatment.

We close this part by adding a theorem which encapsulates the role of all the involved parameters in one formula. Consider the more general output feedback problem of assigning the eigenstructure V and J (as the Jordan pair of the closed-loop system) without any assumption on the rank of B and C or on the eigenvalues being distinct, nondefective, or otherwise. The problem admits a solution iff

BB⁺(VJV⁻¹ − A)C⁺C = VJV⁻¹ − A = A_cl − A,

where (·)⁺ denotes the pseudo-inverse.¹² In this case the controller is given by

K = −B⁺(VJV⁻¹ − A)C⁺ + H − B⁺BHCC⁺,

in which H is an arbitrary matrix of appropriate dimension. The answer, if it exists, is unique iff B and C have full column and row rank, respectively. The answer exists for "every" A_cl and/or A iff B and C have full row and column rank, respectively. If we also require uniqueness of the answer, this means that B and C are both square and nonsingular. (In the case of state feedback we have this condition only on B.) These conditions can also be seen as fundamental limitations.
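This solvability test and the controller formula are straightforward to exercise numerically with the u = K(r − y) convention, so that A_cl = A − BKC. The matrices below are randomly generated purely for illustration, and H = 0 is taken for simplicity; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, l = 4, 2, 3
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, m))   # full column rank (generically)
C = rng.normal(size=(l, n))   # full row rank (generically)

# Build a reachable target: Acl produced by some static gain K_true
K_true = rng.normal(size=(m, l))
Acl = A - B @ K_true @ C

Bp, Cp = np.linalg.pinv(B), np.linalg.pinv(C)

# Solvability test: B B^+ (Acl - A) C^+ C = Acl - A
assert np.allclose(B @ Bp @ (Acl - A) @ Cp @ C, Acl - A)

# Controller formula with H = 0 recovers a gain achieving Acl
K = -Bp @ (Acl - A) @ Cp
assert np.allclose(A - B @ K @ C, Acl)
```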

10.10.2 Tracking

We close this chapter by presenting a method for the design of step-tracking controllers, as follows. We consider plants which do not have any zero at the origin if they are SISO, or any transmission zero at the origin if they are MIMO. Note that this assumption is by itself a fundamental limitation for the design of servo systems; otherwise we would have to violate internal stability by the controller design. Suppose we wish to design a type-1 servo system for the plant described in the above setting with m ≥ l. We define u = Kχ where χ̇ = r − y. Consequently,

¹² In the literature it is customary to use the dagger sign as the superscript, but because we cannot produce it in our typesetting software we resort to the plus sign.

[ẋ; χ̇] = [A  BK; −C  0][x; χ] + [0; I]r,
y = [C  0][x; χ].    (10.6)

If the controller K is designed such that the state matrix [A BK; −C 0] is stable, then at steady state x, χ, y will be constant, and from χ̇ = 0 we conclude that y = r. The relation of this topic to our context is that stabilization of the aforementioned state matrix is accomplished in the context of output feedback design and thus has fundamental limitations. To this end we rewrite (10.6) as

[ẋ; χ̇] = ([A  0; −C  0] + [B; 0] K [0  I]) [x; χ] + [0; I]r.    (10.7)

By defining x̂ = [x − x(∞); χ − χ(∞)], Eq. (10.7) itself can be restated as

x̂̇ = (Â + B̂KĈ)x̂,    (10.8)

with obvious definitions for Â, B̂, Ĉ.

Remark 10.17: It is easily seen that step output disturbances will be rejected by the above design. On the other hand, tracking can also be achieved by designing a static prefilter for the system, acting on the reference input r. However, with this method tracking will be violated in the presence of a disturbance. (Question: What other problems and restrictions are associated with it?)

We close this section by clarifying that what we have considered is entire eigenstructure assignment. The theorems that we have provided are insightful and have theoretical importance. However, entire eigenstructure assignment is rarely practiced, as we shall explain in Section 10.13.
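The type-1 servo construction can be illustrated on a first-order SISO plant; the plant ẋ = −x + u, y = x and the gain K = 1 are arbitrary illustrative choices, not from the text. A minimal Euler-integration sketch:

```python
import numpy as np

# Illustrative SISO plant xdot = -x + u, y = x, with integral action
# u = K*chi, chidot = r - y, i.e. augmented state matrix [A BK; -C 0].
A, B, C, K = -1.0, 1.0, 1.0, 1.0
Aaug = np.array([[A, B * K], [-C, 0.0]])
assert np.all(np.linalg.eigvals(Aaug).real < 0)   # augmented loop is stable

r, dt = 1.0, 1e-3
z = np.zeros(2)                                   # z = [x, chi]
for _ in range(int(20 / dt)):                     # simulate 20 s
    z = z + dt * (Aaug @ z + np.array([0.0, 1.0]) * r)

y = C * z[0]
assert abs(y - r) < 1e-3   # type-1 action: y settles at the step reference
```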

10.11 Noninteractive performance

Noninteractive performance means decoupling the effect of certain system parameters from certain other system parameters. The influencing parameters are often the reference inputs (also the error and control signals), disturbances, noise, and internal system modes. The influenced parameters are the error signal, control signal, system modes, and system output. This is an interesting and important topic in systems and control theory and applications. Partial treatment of this topic was presented in Section 4.13. Here we pursue the topic in more detail and in more directions.

768

Introduction to Linear Control Systems

Considering the effect of the exogenous disturbances u_dist through the additive term Du_dist in the state equations of the system, where D ∈ R^{n×q} and u = K(r − y), the closed-loop system will be

ẋ = (A − BKC)x + BKr + Du_dist,
y = Cx.    (10.9)

Based on this notation, let d_j, b_j, c_k, λ_i, v_i, w_i, v_ij, y_k, x_j, r_j, u_dist,j, and x(t₀) denote the jth column of D, the jth column of B, the kth row of C, the ith distinct real eigenvalue/mode, its associated right eigenvector, its associated left eigenvector, the jth element of the ith right eigenvector, the kth output, the jth state, the jth reference input, the jth disturbance, and the initial value, respectively. The subsequent theorems present some conditions for different kinds of decoupling in this system.

Theorem 10.22: For the unexcited system, i.e., r = 0 and u_dist = 0, the following two propositions are true:

1. A necessary and sufficient condition to decouple the state x_j from the mode λ_i is that v_ij w_iᵀ x(t₀) = 0, which reduces to v_ij = 0 if λ_i appears in the other states.
2. A necessary and sufficient condition to decouple the output y_k from the mode λ_i is that c_k v_i w_iᵀ x(t₀) = 0, which reduces to c_k v_i = 0 if λ_i appears in the other outputs. Δ

Theorem 10.23: For the undisturbed system with zero initial conditions, i.e., u_dist = 0 and x(t₀) = 0, the following four propositions are true:

1. A necessary and sufficient condition to decouple the mode λ_i from the input r_j is that w_iᵀ b̃_j = 0, where b̃_j is the jth column of BK.
2. If the inputs are linearly independent, a necessary and sufficient condition to decouple the state x_j from the mode λ_i is that v_ij w_iᵀ BK = 0, which reduces to v_ij = 0 if λ_i appears in the other states, i.e., if there exists coupling between λ_i and all of the inputs. If the inputs are linearly dependent, the above condition is a sufficient condition only.
3. If the inputs are linearly independent, a necessary and sufficient condition to decouple the output y_k from the mode λ_i is that c_k v_i w_iᵀ BK = 0, which reduces to c_k v_i = 0 if λ_i appears in the other outputs, i.e., if there exists coupling between λ_i and all of the inputs. If the inputs are linearly dependent, the above condition is a sufficient condition only.
4. To decouple the states and the outputs from the inputs, using the static precompensator F, the state equations of the closed-loop system will be ẋ = (A − BKC)x + BKFr. Let the jth column of BKF be denoted by b̄_j; then: (4-1) A necessary and sufficient condition to decouple the state x_i from the input r_j is that v_xi w_xᵀ b̄_j = 0 for x = 1, ..., n. (4-2) A necessary and sufficient condition to decouple the output y_k from the input r_j is that c_k v_x w_xᵀ b̄_j = 0 for x = 1, ..., n. Δ

Theorem 10.24: For the unexcited system with zero initial conditions, i.e., r = 0 and x(t₀) = 0, the following four propositions are true:

1. A necessary and sufficient condition to decouple the mode λ_i from the disturbance u_dist,j is that w_iᵀ d_j = 0.
2. If the disturbances are linearly independent, a necessary and sufficient condition to decouple the state x_j from the mode λ_i is that v_ij w_iᵀ D = 0, which reduces to v_ij = 0 if λ_i appears in the other states, i.e., if there exists coupling between λ_i and all of the disturbances. If the disturbances are linearly dependent, the above condition is a sufficient condition only.
3. If the disturbances are linearly independent, a necessary and sufficient condition to decouple the output y_k from the mode λ_i is that c_k v_i w_iᵀ D = 0, which reduces to c_k v_i = 0 if λ_i appears in the other outputs, i.e., if there exists coupling between λ_i and all of the disturbances. If the disturbances are linearly dependent, the above condition is a sufficient condition only.
4. To decouple the states and the outputs from the disturbances, we note that the state equations of the closed-loop system are ẋ = (A − BKC)x + Du_dist. Then: (4-1) A necessary and sufficient condition to decouple the state x_i from the disturbance u_dist,j is that v_xi w_xᵀ d_j = 0 for x = 1, ..., n. (4-2) A necessary and sufficient condition to decouple the output y_k from the disturbance u_dist,j is that c_k v_x w_xᵀ d_j = 0 for x = 1, ..., n. Δ
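Condition 1 of Theorem 10.24 is easy to verify in simulation: if w_iᵀd_j = 0, then w_iᵀe^{A_cl t}d_j = e^{λ_i t}w_iᵀd_j = 0, so the disturbance never excites the mode λ_i. A minimal sketch with an illustrative closed-loop matrix (not from the text):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative closed-loop matrix with modes -1 and -2
Acl = np.array([[-1.0, -1.0], [0.0, -2.0]])
w1 = np.array([1.0, -1.0])          # left eigenvector of the mode lam_1 = -1
assert np.allclose(w1 @ Acl, -1.0 * w1)

d = np.array([1.0, 1.0])            # disturbance column with w1^T d = 0
assert np.isclose(w1 @ d, 0)

# The mode lam_1 never appears in the disturbance response e^{Acl t} d
for t in [0.5, 1.0, 3.0]:
    assert np.isclose(w1 @ expm(Acl * t) @ d, 0)
```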

The superposition theorem should then be used to obtain the conditions for the decoupling of states from modes and outputs from modes. A system described as in the above is said to have noninteractive performance if the above decoupling conditions can be satisfied. However, it is clear that in most practical situations perfect decoupling cannot be achieved, and this is a fundamental limitation. In fact, the criterion for each of them is the minimization of the ensuing cost functions, respectively, and they should be traded off against other design objectives:

J₁,₁ = |v_ij w_iᵀ x(t₀)|    (10.11)
J₁,₂ = |c_k v_i w_iᵀ x(t₀)|    (10.12)
J₁,₃ = |v_ij|    (10.13)
J₁,₄ = |c_k v_i|    (10.14)
J₁,₅ = |w_iᵀ b̃_j|    (10.15)
J₁,₆ = Σ_{x=1}^{m} |v_ij w_iᵀ b̃_x| + Σ_{z=1}^{q} |v_ij w_iᵀ d_z|    (10.16)
J₁,₇ = Σ_{x=1}^{m} |c_k v_i w_iᵀ b̃_x| + Σ_{z=1}^{q} |c_k v_i w_iᵀ d_z|    (10.17)
J₁,₈ = Σ_{x=1}^{n} |v_xi w_xᵀ b̄_j|    (10.18)
J₁,₉ = Σ_{x=1}^{n} |c_k v_x w_xᵀ b̄_j|    (10.19)
J₁,₁₀ = |w_iᵀ d_j|    (10.20)
J₁,₁₁ = Σ_{x=1}^{n} |v_xi w_xᵀ d_j|    (10.21)
J₁,₁₂ = Σ_{x=1}^{n} |c_k v_x w_xᵀ d_j|    (10.22)

The right-hand side of each of the above cost functions J₁,ₛ, s = 1, ..., 12, is defined as a decoupling index, appropriately; e.g., w_iᵀb̃_j is the ith-mode-from-jth-input decoupling index. Besides, w_iᵀBK is defined as the ith-mode-from-inputs decoupling vector, Wd_j as the modes-from-jth-disturbance decoupling vector, Cv_i as the outputs-from-ith-mode decoupling vector, and so forth.

Remark 10.18: In passing, recall that in the statement of Theorems 10.20 and 10.21 there appeared the vectors Cv_i and Bᵀw_i. This highlights that in general perfect decoupling is not possible, since there are conflicting requirements due to a lack of degrees of freedom—in other words, there are fundamental limitations. In the following we solve two simple examples for which perfect decoupling is possible.

Example 10.38: Reconsider the system of Example 10.36. (1) Design a static output feedback controller such that the same closed-loop poles are assigned and the mode λ₁ = −1 does not appear in the second state x₂ for r = [0 0]ᵀ and x(0) = [1 2 3]ᵀ. (2) Design another controller such that the same eigenvalues are assigned and the mode λ₁ = −1 does not appear in the second output y₂ for r = [1 2]ᵀ and x(0) = [0 0 0]ᵀ.

For part (1) the decoupling index is v₁₂w₁ᵀx(0) = 0. Thus it suffices to have v₁₂ = 0. Hence it suffices to choose case (1) of Example 10.36, and therefore K₁ of that example is an answer. For part (2) the decoupling index is c₂v₁ = 0. Because c₂ = [0 1 0], this reduces to v₁₂ = 0, and thus the answer of part (1) of the problem solves this part as well. That is, altogether, with K₁, for any initial condition and any input the mode λ₁ = −1 does not appear in the second state or the second output.

Remark 10.19: This example clearly shows that in direct hand computations it is helpful if the C and/or B matrices have some special forms, like C = [I 0] where I is the identity matrix. This can be achieved by a similarity transformation whose details you will study in the second course on state space methods. For simplicity here we assume that the matrices have such a structure.


Example 10.39: Consider the system ẋ = Ax + Bu, y = Cx, u = Ky. Find the static controller K and the precompensator F such that the closed-loop poles are assigned at Λ = {−1, −2} and the first input does not appear in the first output. The system parameters are

A = [1 0; 3 −2],  B = [1 1; 2 1],  C = [1 0; 0 1].

The decoupling indices are c₁v₁w₁ᵀb̄₁ = c₁v₂w₂ᵀb̄₁ = 0. It is observed that by choosing

K = [k  −2.5; k+3  −4.5]  and  F = [a  b; a  c]

the closed-loop poles are assigned as required and the decoupling indices are satisfied. Four degrees of freedom k, a, b, c also remain to satisfy some other design objectives. (To verify the decoupling indices note that v₁ = [v₁₁; −v₁₁(2k+6)], w₁ = [w₁₁; 0], v₂ = [0; v₂₂], w₂ = [w₂₂(2k+6); w₂₂].)
We wrap up this section by adding that more advanced results are available in the literature, which you will partly learn in graduate studies. We have presented the basics here.

10.12 Minimal closed-loop pole sensitivity

The sensitivity (or robustness¹³) of a control system's behavior to parameter variations, as reflected in its model, has been a pole of attraction since the 1970s. Three types of sensitivity have been considered: performance index sensitivity, trajectory sensitivity, and eigenvalue sensitivity. Eigenvalue sensitivity is the sensitivity of the closed-loop poles of the system to its parameter variations. Its pervasive use is due to its computational simplicity. To present the topic we should state the definition of the matrix norm. Consider two real numbers a and b, and the question of how close they are to each other. To answer this question we simply compute the absolute value of their difference. Now consider two vectors or two matrices, and the same question. To be specific, consider, e.g., u = [1 2] and v₁ = [1 1.8], v₂ = [1.2 2], v₃ = [0.9 2.1], v₄ = [0.7 2.2], etc. Which vᵢ, i = 1, ..., 4, is nearest to u? Similarly consider, e.g.,

U = [1 2; 3 4]

and

V₁ = [1 2; 3 4.4],  V₂ = [1 2; 2.6 4],  V₃ = [1 2.3; 3 4.1],  V₄ = [1.2 2.1; 2.9 3.9],  etc.

Which Vᵢ, i = 1, ..., 4, is nearest to U? To answer these questions we should define a measure for closeness. This measure is called the norm—vector norm or matrix norm—and is denoted by ‖·‖. Different vector and matrix norms have been defined in the literature. In fact, a function is a norm if it satisfies some conditions. As for the matrix norm, ‖V‖ is a norm of V if it (is a real number and) satisfies:

1. ‖V‖ ≥ 0;
2. ‖V‖ = 0 iff V = 0;
3. ‖aV‖ = |a| × ‖V‖ for any complex scalar a;

¹³ The use of these terms is related to each other as "the lower the sensitivity, the higher the robustness."


4. ‖V₁ + V₂‖ ≤ ‖V₁‖ + ‖V₂‖;
5. ‖V₁V₂‖ ≤ ‖V₁‖ × ‖V₂‖.

For the vector norm, the first four of the above conditions are used. Infinitely many functions qualify as a vector or matrix norm. For instance, in R⁴ the function ‖v‖ = (3|v₂|² + 2max(|v₁|, |v₃|)² + v₄²)^{1/2} is a vector norm. From one standpoint, matrix norms can be divided into induced and noninduced norms, where induced means that the matrix norm is induced from a vector norm. Some noninduced matrix norms are the sum norm ‖V‖_sum = Σ|v_ij|, the maximum-element norm ‖V‖_max = max|v_ij|, and the Frobenius/Euclidean norm ‖V‖_F = (Σ|v_ij|²)^{1/2}. Some induced matrix norms are the 1-norm or maximum column sum ‖V‖₁ = max_j(Σ_i|v_ij|), the ∞-norm or maximum row sum ‖V‖_∞ = max_i(Σ_j|v_ij|), and the 2-norm or maximum singular value ‖V‖₂ = σ̄(V) = (λ_max(V^H V))^{1/2}, where the superscript H denotes the conjugate transpose.

Example 10.40: Consider the matrix V = [1 2; 3 −4]. The above norms are computed as ‖V‖_sum = Σ|v_ij| = 10, ‖V‖_max = max|v_ij| = 4, ‖V‖_F = (Σv_ij²)^{1/2} = √30 = 5.4772, ‖V‖₁ = max_j(Σ_i|v_ij|) = 6, ‖V‖_∞ = max_i(Σ_j|v_ij|) = 7, and ‖V‖₂ = σ̄(V) = (λ_max(V^H V))^{1/2} = 5.1167. In passing, note that for vectors the 2-norm and the Frobenius norm are the same, but for matrices in general they are not. (Question: When are they the same?)
We also define the 2-norm condition number of a matrix $V$ as $\mathrm{cond}(V) := \|V\|_2\,\|V^{-1}\|_2 = \bar\sigma(V)/\underline\sigma(V)$, where $\bar\sigma(V) := \sigma_{\max}(V) = \lambda_{\max}\!\left(\sqrt{V^H V}\right)$ and $\underline\sigma(V) := \sigma_{\min}(V) = \lambda_{\min}\!\left(\sqrt{V^H V}\right)$ are the largest and smallest singular values (denoted by $\sigma$) of $V$, which are the largest and smallest eigenvalues (denoted by $\lambda$) of $\sqrt{V^H V}$. Also note that $\bar\sigma(V) = \|V\|_2$ and $\underline\sigma(V) = 1/\|V^{-1}\|_2$.

Example 10.41: Consider the matrix $V = \begin{bmatrix} 1 & 2 \\ 3 & -4 \end{bmatrix}$. Its condition number is computed by the MATLAB command "cond(V)" and is 2.6180.
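The computations of Examples 10.40 and 10.41 can be reproduced without MATLAB; the sketch below (plain Python, used purely for illustration) obtains the 2-norm and the condition number from the eigenvalues of $V^T V$ via the quadratic formula, which suffices for a 2x2 matrix:

```python
import math

# Matrix of Examples 10.40/10.41 (listed row by row).
V = [[1.0, 2.0], [3.0, -4.0]]

entries = [v for row in V for v in row]
norm_sum = sum(abs(v) for v in entries)               # sum norm
norm_max = max(abs(v) for v in entries)               # maximum-element norm
norm_fro = math.sqrt(sum(v * v for v in entries))     # Frobenius norm
norm_1 = max(abs(V[0][j]) + abs(V[1][j]) for j in range(2))   # max column sum
norm_inf = max(sum(abs(v) for v in row) for row in V)         # max row sum

# 2-norm via the eigenvalues of V^T V (2x2, so the quadratic formula suffices).
VtV = [[V[0][0]**2 + V[1][0]**2, V[0][0]*V[0][1] + V[1][0]*V[1][1]],
       [V[0][0]*V[0][1] + V[1][0]*V[1][1], V[0][1]**2 + V[1][1]**2]]
tr, det = VtV[0][0] + VtV[1][1], VtV[0][0]*VtV[1][1] - VtV[0][1]*VtV[1][0]
lam_max = (tr + math.sqrt(tr*tr - 4*det)) / 2
lam_min = (tr - math.sqrt(tr*tr - 4*det)) / 2
norm_2 = math.sqrt(lam_max)              # largest singular value
cond_2 = math.sqrt(lam_max / lam_min)    # = ||V||_2 * ||V^-1||_2

print(norm_sum, norm_max, round(norm_fro, 4), norm_1, norm_inf,
      round(norm_2, 4), round(cond_2, 4))
```

The printed values match the ones quoted in the two examples.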

Having presented the above development we go to the next step. Consider the nondefective (i.e., completely diagonalizable) matrix $A$ and denote its eigenvalues by $\lambda_i$. Let it be perturbed by the unstructured additive term $E$ as $A + E$. Then the sensitivity of the eigenvalue $\lambda_i$ to the perturbation $E$ depends on the magnitude of its condition number, defined by $\mathrm{cond}(\lambda_i) := \|w_i\|_2 \|v_i\|_2 / |w_i^T v_i| \ge 1$, where $v_i, w_i$ are the right and left eigenvectors associated with $\lambda_i$. Moreover, $\max_i \mathrm{cond}(\lambda_i) \le \mathrm{cond}(V)$, where $V$ is the modal matrix of $A$, i.e., the matrix of the

Fundamental limitations

773

eigenvectors of $A$ (Wilkinson, 1965).¹⁴ We thus observe that independently of $E$ the eigenvalue sensitivities are affected by $V$ and thus by $A$ itself (in the sense that if $\mathrm{cond}(V)$ is relatively small then the sensitivities of the $\lambda_i$ are relatively small, but if $\mathrm{cond}(V)$ is relatively large then we must be cautious). It is easy to show that $\mathrm{cond}(V) \ge 1$ and that $\mathrm{cond}(V) = 1$ iff the matrix $A$ is normal, i.e., $AA^H = A^H A$. Inside the class of normal matrices there are three well-known subclasses, which of course do not cover the whole class. These are $A = A^H$ (Hermitian), $A^{-1} = A^H$ (unitary), and $A = -A^H$ (skew-Hermitian).

In the above exposition, if we consider the matrix $A$ as the closed-loop state matrix of the system, we observe that the eigenvalues of $A$ are the closed-loop poles of the system. The sensitivity or robustness of the closed-loop poles of the system is thus affected by their (mutual) location in the complex plane. Put in the context of fundamental limitations, a restatement of the above argument is that eigenvalue robustness considerations further restrict the achievable transfer function and thus the achievable performance. Note that: (1) This statement can be formulated in further detail, but that goes outside this first undergraduate course. (2) You may guess that the different limitations and design objectives which we have studied so far should be cast in an optimization framework. Your guess is correct, but this also goes outside the scope of this book; however, we provide a simple method for it in the ensuing Section 10.15. The available eigenvalue sensitivity results in the literature are mostly related to the state feedback problem, which you will study in the second undergraduate course on state-space methods. We offer two alternative solutions to the output feedback case, one in the following and one in the worked-out Problem 10.29.
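A toy illustration (ours, not from the book) of how $\mathrm{cond}(\lambda_i) = \|w_i\|_2\|v_i\|_2/|w_i^T v_i|$ predicts sensitivity: for the non-normal matrix $A = \begin{bmatrix} 1 & 100 \\ 0 & 2 \end{bmatrix}$ the eigenvalue condition numbers are about 100, and indeed a perturbation of size 0.01 moves an eigenvalue by roughly 0.6:

```python
import math

# Toy non-normal matrix (not from the book): A = [[1, 100], [0, 2]].
# For lambda = 1 the right/left eigenvectors are v = [1, 0] and w = [1, -100]
# (w solves A^T w = w), found by hand for this triangular A.
v = [1.0, 0.0]
w = [1.0, -100.0]
wtv = abs(w[0]*v[0] + w[1]*v[1])
cond_lam = math.hypot(*w) * math.hypot(*v) / wtv   # approx 100.005

# Perturb A to A + E with E = [[0, 0], [eps, 0]] and track the eigenvalue near 1.
eps = 0.01
tr = 3.0                     # trace of A + E
det = 1.0*2.0 - 100.0*eps    # determinant of A + E
lam_small = (tr - math.sqrt(tr*tr - 4*det)) / 2
shift = abs(lam_small - 1.0)   # the eigenvalue moves by about 0.62

print(round(cond_lam, 3), round(shift, 4))
```

For a normal matrix both condition numbers would equal 1 and such an amplification cannot occur.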
Recall that nondefective matrices have better sensitivity properties than defective ones (Wilkinson, 1965), and normal matrices (and real symmetric matrices as a special case) are necessarily nondefective.

Theorem 10.25 (The minimal sensitivity problem by output feedback): Consider the system $\dot x = Ax + Bu$, $y = Cx$, $u = -Ky$. A sufficient condition for achieving $\mathrm{cond}(V) = 1$, where $V$ is the modal matrix of $A - BKC$, is that there exists a real symmetric matrix $A_0$ such that $BB^{+}(A^T + A_0)C^{+}C = A^T + A_0$, where $(\cdot)^{+}$ denotes the pseudoinverse. In this case the controller is given by $K = -B^{+}(A^T + A_0)C^{+} + H - B^{+}BHCC^{+}$ for an arbitrary $H$. Δ

Note that in the above theorem the closed-loop state matrix becomes symmetric (indeed $A - BKC = A + A^T + A_0$), and it is inherent that we require closed-loop stability. In the sequel a simple example is presented for which the ideal minimal sensitivity is achievable.

Example 10.42: Consider the system $\dot x = Ax + Bu$, $y = Cx$, where
$$A = \begin{bmatrix} 0 & 7 & 1 \\ 0 & 3 & 1 \\ 0 & 0 & 2 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad C = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
By the feedback $u = -Ky$ where
$$K = \begin{bmatrix} 7 & 1 \\ 0 & 4.5 \end{bmatrix},$$
the closed-loop state matrix is
$$A - BKC = \begin{bmatrix} -7 & -7 & 0 \\ -7 & -11 & 0 \\ 0 & 0 & -2.5 \end{bmatrix},$$
which is symmetric and thus has $\mathrm{cond}(V) = 1$. Of course, it is also stable; the closed-loop poles are at $-16.2801$, $-2.5$, $-1.7199$.

14 Due to J. H. Wilkinson, British mathematician (1919–86), who made significant contributions to numerical analysis.
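The claims of Example 10.42 can be checked in a few lines; the sketch below (plain Python, chosen only for illustration) verifies the symmetry of $A - BKC$ and exploits its block-diagonal structure so that the quadratic formula yields the poles:

```python
import math

# Data of Example 10.42.
A = [[0, 7, 1], [0, 3, 1], [0, 0, 2]]
B = [[1, 0], [1, 0], [0, 1]]
C = [[1, 2, 0], [0, 0, 1]]
K = [[7, 1], [0, 4.5]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

BKC = matmul(matmul(B, K), C)
Acl = [[A[i][j] - BKC[i][j] for j in range(3)] for i in range(3)]

symmetric = all(abs(Acl[i][j] - Acl[j][i]) < 1e-12
                for i in range(3) for j in range(3))

# The third state is decoupled, so the poles are Acl[2][2] together with the
# eigenvalues of the upper-left 2x2 block (quadratic formula).
tr = Acl[0][0] + Acl[1][1]
det = Acl[0][0]*Acl[1][1] - Acl[0][1]*Acl[1][0]
poles = sorted([(tr - math.sqrt(tr*tr - 4*det)) / 2,
                (tr + math.sqrt(tr*tr - 4*det)) / 2,
                Acl[2][2]])
print(symmetric, [round(p, 4) for p in poles])
```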

Question 10.6: When does the theorem admit a solution? That is, provide a sufficient condition for the existence of $A_0$.

It should be clear that the problem of minimal sensitivity does not always admit a solution, especially when we consider other design objectives. Put in the fundamental limitations context, the ideal minimum eigenvalue sensitivity (i.e., the case $\mathrm{cond}(V) = 1$) may not be achievable, and indeed for the generic actual system it is not. Moreover, if we require a specific set of closed-loop poles (as opposed to the above case, where we merely require stability or stable regional pole assignment) then the eigenvectors are restricted by Theorem 10.20, and this means that we have less freedom to minimize the condition number (for SISO systems we have no freedom); see also Exercise 10.41. In fact even for the simpler case of state feedback it is well known that "placing plenty of poles is pretty preposterous" due to problems of sensitivity, controllability, observability, etc. (He et al., 1995; Mehrmann and Xu, 1998). This in particular means that the entire eigenstructure assignment has merely theoretical importance for insight into the fundamental limitations, and it is rarely practiced. What is often practiced is partial eigenstructure assignment in the form of some decoupling indices of Section 10.11 for a small portion of the poles; see also Exercise 10.69. The rest of the poles are found such that some design objectives (like minimal sensitivity, $T_{\max}$, $M_{us}$, etc.) are satisfied. Consequently, in practice we often resort to performing an optimization over $\mathrm{cond}(V)$ to minimize it subject to regional pole assignment. That is, we wish to minimize
$$J_2 = \mathrm{cond}(V). \qquad (10.23)$$

The above cost function is used in Section 10.15. We close this section by stressing that the condition number has different versions, such as the diagonally scaled one. It is utilized for integral controllability tests based on steady-state information and also for the selection of actuators and sensors using dynamic information. With regard to these applications there are also some fundamental limitations (Nett, 1990; Nett and Minto, 1989). See also (Braatz and Morari, 1994). Other measures instead of the 2-norm condition number can also be found in the literature with regard to (10.23). Because in finite dimensions norms are equivalent, they will not contradict but support each other. However, which formulation and which norm is more efficient and effective than the others is an interesting question for further investigation; see also (Esna Ashari, 2005).
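The role of $J_2$ can be seen on two toy matrices (ours, not the book's): for a symmetric (hence normal) $A$ the modal matrix can be taken orthonormal, so $\mathrm{cond}(V) = 1$, while for the non-normal $A = \begin{bmatrix} 1 & 100 \\ 0 & 2 \end{bmatrix}$ the eigenvectors are nearly parallel and $\mathrm{cond}(V) \approx 200$:

```python
import math

def cond2(V):
    # 2-norm condition number of a 2x2 matrix via the eigenvalues of V^T V.
    g11 = V[0][0]**2 + V[1][0]**2
    g22 = V[0][1]**2 + V[1][1]**2
    g12 = V[0][0]*V[0][1] + V[1][0]*V[1][1]
    tr, det = g11 + g22, g11*g22 - g12*g12
    disc = max(0.0, tr*tr - 4*det)   # guard against tiny negative round-off
    lmax = (tr + math.sqrt(disc)) / 2
    lmin = (tr - math.sqrt(disc)) / 2
    return math.sqrt(lmax / lmin)

# Normal case: A = [[1, 2], [2, 1]] is symmetric; its unit eigenvectors
# [1, 1]/sqrt(2) and [1, -1]/sqrt(2) form an orthogonal modal matrix.
s = 1 / math.sqrt(2)
V_normal = [[s, s], [s, -s]]

# Non-normal case: A = [[1, 100], [0, 2]]; unit eigenvectors [1, 0] and
# [100, 1]/sqrt(10001) are nearly parallel.
t = 1 / math.sqrt(10001)
V_nonnormal = [[1.0, 100*t], [0.0, t]]

print(round(cond2(V_normal), 6), round(cond2(V_nonnormal), 1))
```

This is consistent with the statement that $\mathrm{cond}(V) = 1$ is achievable exactly for normal closed-loop state matrices.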

10.13 Robust stabilization

An important feature in control systems is that of robustness. A system is said to be robust if its closed-loop stability is maintained in the face of perturbations in the system parameters. It is desired that stability be maintained for as large perturbations as possible. There are different methods for modeling the uncertainty, mainly: additive or multiplicative, and structured or unstructured. A complete treatment of the subject is given in the graduate course on robust control systems. Here we present some simple cases. Structured additive perturbations refers to the case that there is uncertainty or perturbation in some elements of the system in an additive manner, while unstructured additive perturbations refers to the case that there are uncertainties in all elements of the system in an unstructured additive manner. The two cases are considered separately in the ensuing sections.

10.13.1 Structured perturbations

It is instructive to start with an example.

Example 10.43: Consider the system $\dot x = Ax$ where
$$A = \begin{bmatrix} -3 & 0 & 4 \\ 1 & -2 & 0 \\ 0 & 2 & -3 \end{bmatrix}.$$
The system is stable. Additive structured perturbation refers to the case that there are uncertainties in some elements of $A$, say $A(1,2)$ and $A(3,3)$, such that $A$ is perturbed to
$$A' = \begin{bmatrix} -3 & a & 4 \\ 1 & -2 & 0 \\ 0 & 2 & -3+b \end{bmatrix}.$$
We write it as
$$A' = A + a\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} + b\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = A + aE_1 + bE_2,$$
with the obvious definition for $E_1, E_2$.

The relation of the issue to the topic of fundamental limitations is direct. The stability consideration restricts the size of the perturbations. Recalling that these perturbations may reflect purposeful (as opposed to unwanted) changes in certain system parameters, e.g., increasing the mass of an element in the system, this shows another facet of the fundamental limitations due to the stability requirement. On the other hand, of course, the perturbation may be unwanted. In either case, for instance in Example 10.43 in the case of $A' = A + aE_1$, the size of $a$ is restricted to approximately $a < 3.33$. It is desired that the system maintain its stability for as large perturbations as possible; in the above example this refers to the values of $a$ and $b$. Theorems are available which determine the size of the perturbations for which the stability of the system is maintained. We present a theorem for the case of D-stability as defined in Section 3.6. It is clear that it includes the case of OLHP BIBO stability.
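The quoted bound $a < 3.33$ can be verified with a Routh test: expanding $\det(sI - A - aE_1)$ gives $s^3 + 8s^2 + (21-a)s + (10-3a)$, whose constant term changes sign at $a = 10/3$. A quick sketch (the helper names are ours):

```python
# Routh check for Example 10.43 under A' = A + a*E1 (perturbation in A(1,2)).
# For a 3x3 matrix, det(sI - M) = s^3 + c2 s^2 + c1 s + c0 with
# c2 = -tr(M), c1 = sum of principal 2x2 minors, c0 = -det(M);
# a monic cubic is Hurwitz iff c2 > 0, c1 > 0, c0 > 0 and c2*c1 > c0.

def char_coeffs(M):
    tr = M[0][0] + M[1][1] + M[2][2]
    minors = (M[0][0]*M[1][1] - M[0][1]*M[1][0]
              + M[0][0]*M[2][2] - M[0][2]*M[2][0]
              + M[1][1]*M[2][2] - M[1][2]*M[2][1])
    det = (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
           - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
           + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
    return -tr, minors, -det          # c2, c1, c0

def hurwitz_cubic(M):
    c2, c1, c0 = char_coeffs(M)
    return c2 > 0 and c1 > 0 and c0 > 0 and c2*c1 > c0

def A_pert(a):
    return [[-3, a, 4], [1, -2, 0], [0, 2, -3]]

print(hurwitz_cubic(A_pert(3.3)), hurwitz_cubic(A_pert(3.4)))   # True False
```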


Theorem 10.26: Assume that the closed-loop system $\dot x = (A + BKC)x = A_{cl}x$ is stable. If it is perturbed as $\dot x = \left(A_{cl} + \sum_{i=1}^{k} p_i E_i\right)x$, it maintains its D-stability for the perturbations satisfying
$$|p_i| < \frac{1}{\left\|\sum_{i=1}^{k}\left(E_{i\delta}^T P + P E_{i\delta}\right)\right\|_2},$$
where $P$ is the unique symmetric positive definite solution of the Lyapunov equation $A_{cl\Omega}^T P + P A_{cl\Omega} + 2I_{2n} = 0$, $A_{cl\Omega} = \Theta(\delta) \otimes (A + BKC + \alpha I)$, and $E_{i\delta} = \Theta(\delta) \otimes E_i$. Δ

Hence to maximize the magnitude of the allowable perturbations we have to minimize the following cost function:
$$J_{3,1} = \left\|\sum_{i=1}^{k}\left(E_{i\delta}^T P + P E_{i\delta}\right)\right\|_2. \qquad (10.24)$$

As is observed, the minimum depends on $P$, which itself depends on $A_{cl\Omega}$ and thus on $K$. That is, the minimization is performed by designing the matrix $K$.

Remark 10.20: If $\alpha = 0$, $\delta = 0$ then the $\Omega$ region will be the whole OLHP. That is, the problem reduces to standard BIBO stability.

10.13.2 Unstructured perturbations

It is helpful to start with an example.

Example 10.44: Consider the system $\dot x = Ax$ of Example 10.43. The system is stable. Additive unstructured perturbation refers to the case that there are uncertainties in all elements of $A$ in an unstructured additive manner, such that $A$ is perturbed to
$$A' = \begin{bmatrix} -3+a & b & 4+c \\ 1+d & -2+e & f \\ g & 2+h & -3+i \end{bmatrix}.$$
We write it as $A' = A + E$.

The relation of this issue to the topic of fundamental limitations is also direct. The stability consideration limits the size of the perturbations. Here, in addition to purposeful changes in the system parameters, the perturbation often reflects unwanted variations in all the system parameters. The size of the allowable $E$ for which the system sustains its stability is usually measured in the 2-norm. In our example the system maintains its stability for approximately $\|E\|_2 \le 0.5055$. It is desired that the system maintain its stability for as large perturbations as possible. Theorems are available which measure the size of the allowable perturbations. As in the case of structured uncertainties, we present a theorem for D-stability which covers OLHP BIBO stability as well.

Theorem 10.27: Assume that the closed-loop system $\dot x = (A + BKC)x = A_{cl}x$ is stable. If it is perturbed as $\dot x = (A_{cl} + E)x$, it maintains its stability for the perturbations satisfying¹⁵ $\|E\|_2 \le 1/\|P\|_2$, where $P$ is the unique symmetric positive definite solution of the Lyapunov equation $A_{cl\Omega}^T P + P A_{cl\Omega} + 2I_{2n} = 0$, $A_{cl\Omega} = \Theta(\delta) \otimes (A + BKC + \alpha I)$, and $E_\delta = \Theta(\delta) \otimes E$. Δ

Therefore, to have tolerance to larger uncertainties we minimize
$$J_{3,2} = \|P\|_2 = \sigma_{\max}(P), \qquad (10.25)$$
with respect to the matrix $K$. We observe that $P$ depends on $A_{cl\Omega}$, which in turn depends on $K$; thus the optimization is performed with respect to the matrix $K$. Note that a remark similar to Remark 10.20 can be made here as well.

Remark 10.21: The Lyapunov equation is the central equation in the stability analysis and synthesis of control systems. In the sequel to this book, on state-space methods, you will learn more about it. Other important equations in systems and control theory are its extensions (the regional Lyapunov equation, projected Lyapunov equation, time-varying Lyapunov equation, etc.), the Riccati equation (and its extensions), the Bezout equation, the Hamilton-Jacobi-Bellman equation, etc. See also Problem C.12 of Appendix C for the solution of the Lyapunov equation.

Remark 10.22: We close this part by mentioning that the kind of uncertainty in a system depends on the nature of the problem. Some are better fitted to the structured framework and some to the unstructured one. This is also the case for the method of representing the uncertainty, whether additive or multiplicative, at the input or output; this is outside the scope of this book.
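The mechanics of Theorem 10.27 can be seen in the simplest setting $\alpha = 0$, $\delta = 0$ (cf. Remark 10.20) on a toy diagonal example of our own choosing (not the matrices of Example 10.44), for which the Lyapunov equation decouples entrywise:

```python
# Toy check of Theorem 10.27 with alpha = 0, delta = 0, i.e., plain OLHP
# stability. A = diag(-1, -2) is an illustrative choice, not the book's.
# For diagonal A the Lyapunov equation A^T P + P A + 2I = 0 decouples.
a1, a2 = -1.0, -2.0
p11 = -1.0 / a1          # from 2*a1*p11 + 2 = 0
p22 = -1.0 / a2          # from 2*a2*p22 + 2 = 0
norm_P = max(p11, p22)   # 2-norm of the diagonal P
bound = 1.0 / norm_P     # guaranteed tolerance: ||E||_2 <= 1/||P||_2

# Sanity check: E = diag(e, 0) with e just below the bound keeps A + E Hurwitz;
# e equal to the bound would place an eigenvalue exactly at the origin.
e = 0.99 * bound
eigs = (a1 + e, a2)
print(bound, eigs)       # bound = 1.0; eigs approx (-0.01, -2.0)
```

Here the Lyapunov bound is even tight: the slowest eigenvalue sits at $-1$, and a perturbation of 2-norm exactly 1 destroys stability.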

10.14 Special results for positive systems

By definition, positive systems are those in which all states and outputs are nonnegative if the initial conditions and inputs are nonnegative. Special treatment is needed for the analysis and synthesis of controllers for positive systems since, e.g., the Luenberger observer (which you will study in the second undergraduate course on linear control) may result in illogical answers. Positive systems are defined on cones, not on linear spaces, and their control theory is less developed in comparison to that of general LTI systems. Nevertheless, numerous specialized results are available regarding the control of such systems, which are outside the scope of this book; see the respective literature. It can be proven that the SISO LTI system $\dot x = Ax + Bu$, $y = Cx$ is positive iff $A$ is a Metzler¹⁶ matrix and $B \ge 0$, $C \ge 0$, i.e., $A_{ij} \ge 0\ \forall i \ne j$, and $B_i \ge 0$, $C_i \ge 0$.

15 Note that in the statement of this theorem we actually have the condition $\|E_\delta\|_2 \le 1/\|P\|_2$, but there generically holds $\|E\|_2 = \|E_\delta\|_2$.
16 Named after Lloyd Metzler, American economist (1913–80).
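The positivity test above is immediate to code; a minimal sketch (the numerical system is an arbitrary illustration, not taken from the book):

```python
# Positivity test for x' = Ax + Bu, y = Cx: A Metzler (off-diagonal entries
# nonnegative; the diagonal may be negative), B >= 0 and C >= 0 elementwise.
def is_metzler(A):
    n = len(A)
    return all(A[i][j] >= 0 for i in range(n) for j in range(n) if i != j)

def is_positive_system(A, B, C):
    nonneg = lambda M: all(x >= 0 for row in M for x in row)
    return is_metzler(A) and nonneg(B) and nonneg(C)

A = [[-2.0, 1.0], [0.5, -3.0]]   # Metzler
B = [[1.0], [0.0]]
C = [[0.0, 1.0]]
print(is_positive_system(A, B, C))        # True

A_bad = [[-2.0, -1.0], [0.5, -3.0]]       # negative off-diagonal entry
print(is_positive_system(A_bad, B, C))    # False
```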

778

Introduction to Linear Control Systems

Examples of actual positive systems abound. Typical models include chemical reactors, distillation columns, heat exchangers, compartmental systems, storage systems, water pollution models, population models, the blood glucose regulation system, etc. Note that in the latter system all variables, e.g., blood glucose and insulin concentration, are restricted to take positive values. About 1% of the people in the western world suffer from Type 1 diabetes, and in their bodies this system does not work properly. Due to its importance there has been extensive research toward its cure, including the development of an artificial pancreas device, and some progress has been made. Treatment of the diabetes problem is invasive, disruptive, and has side effects. As such, theoretical analysis of the problem is in demand.

It is clear that the aforementioned conditions of positivity by themselves impose a fundamental limitation in the theory of LTI systems: If a system satisfies those conditions, its state and output cannot become negative, and vice versa. Apart from this obvious fact there are fundamental limitations exclusive to positive systems; see e.g., Exercises 10.56–10.60. A special result in this domain with implications for the treatment of the Type 1 diabetes problem can be found in the literature. The result concerns the maximum and minimum output response peaks due to a disturbance for all feasible inputs. Put in simple words, a core consequence of the result is that when the disturbance pulse response peaks faster than the input pulse response, every attempt to minimize the maximum peak response to the disturbance necessarily incurs an unavoidable undershoot at a later time. For details of the theoretical development and practical implications for the treatment of Type 1 diabetes patients the reader is referred to (Goodwin et al., 2015). Finally, we mention a "negative result" about positive systems (Zappavigna et al., 2012).
As we have already learned in previous chapters, Padé approximation is an essential tool in control theory. Sometimes we use it for discretization as well, as you will study in the third course, "discrete-time control systems." The bad news is that Padé approximation is not suited to the discretization of positive systems, for two reasons: (1) Certain types of Lyapunov stability are not, in general, preserved. (2) Positivity itself may not be preserved, even if stability is. Yet the good news is that we may resort to alternative methods. Further discussion can be found in item 15 of Further Readings.
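Both caveats can be observed numerically. The sketch below (a toy Metzler matrix of our own, not from the book) compares the exact discretization $e^{Ah}$, which is elementwise nonnegative for Metzler $A$, with the Padé(1,1)/Tustin map $\left(I - \frac{h}{2}A\right)^{-1}\left(I + \frac{h}{2}A\right)$, which here produces a negative entry even though it preserves stability:

```python
# Toy Metzler matrix (an illustrative choice, not from the book).
A = [[-5.0, 1.0], [0.5, -4.0]]
h = 1.0

def mat_add(X, Y): return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]
def mat_scale(c, X): return [[c * X[i][j] for j in range(2)] for i in range(2)]
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
def inv2(X):
    d = X[0][0]*X[1][1] - X[0][1]*X[1][0]
    return [[X[1][1]/d, -X[0][1]/d], [-X[1][0]/d, X[0][0]/d]]

I = [[1.0, 0.0], [0.0, 1.0]]

# Exact discretization exp(A h) by a truncated power series (A is small enough
# for the series to converge comfortably in double precision).
expm, term = I, I
for k in range(1, 40):
    term = mat_scale(h / k, mat_mul(term, A))
    expm = mat_add(expm, term)

# Pade(1,1)/Tustin map: (I - (h/2) A)^{-1} (I + (h/2) A)
tustin = mat_mul(inv2(mat_add(I, mat_scale(-h/2, A))),
                 mat_add(I, mat_scale(h/2, A)))

min_exact = min(x for row in expm for x in row)
min_tustin = min(x for row in tustin for x in row)
print(min_exact >= 0, min_tustin < 0)   # True True: positivity kept by exp, lost by Tustin
```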

10.15 Generic design procedure

A simple general design procedure can be outlined as follows: We wish to internally stabilize the system and at the same time optimize various cost functions, subject to some constraints. More precisely, we wish to minimize some design specifications and bound some others. Determination of the minimization objectives and the constraints is problem-dependent. We provide a typical problem formulation in the following and briefly complement it in Remark 10.26. Assume that we wish to minimize some of the decoupling indices $J_{1,s}\ (s = 1,\ldots,12)$, minimize the eigenvalue sensitivities $J_2$, maximize the robustness,


and thus minimize $J_{3,j}\ (j = 1, 2)$. Moreover, in addition to these objectives we aim at minimizing the control signal magnitude and both $T_{\max} = M_T$ and $S_{\max} = M_S$ (of Sections 10.2–10.6). Therefore we can consider $J_4 = \|K\|_2$, $J_5 = M_T$, and $J_6 = M_S$. On the other hand, we desire to have an upper bound on $TV$ and lower and upper bounds on $t_r$. The design procedure can thus be rephrased as follows:

Design Procedure: Minimize the cost function
$$J = \sum_{i=1}^{12} \rho_{1,i} J_{1,i}^{\varphi_{1,i}} + \rho_2 J_2^{\varphi_2} + \sum_{j=1}^{2} \rho_{3,j} J_{3,j}^{\varphi_{3,j}} + \rho_4 J_4^{\varphi_4} + \rho_5 J_5^{\varphi_5} + \rho_6 J_6^{\varphi_6} \qquad (10.26)$$

subject to (1) the admissible eigenspectrum region $\Psi$, (2) no unstable pole-zero cancellation, (3) $TV \le UL(TV)$ and $LL(t_r) \le t_r \le UL(t_r)$, and (4) a nondefective closed-loop system, in the search space $LL(K_{rs}) \le K_{rs} \le UL(K_{rs})$. The region $\Psi$ and the bounds $LL$ and $UL$ (which stand for Lower Limit and Upper Limit, respectively) depend on the specific application. The coefficients and exponents $\rho$ and $\varphi$ are positive scalar weightings which must be determined properly to achieve overall satisfactory performance. Some remarks are in order.

Remark 10.23: As we mentioned before, the definition of the optimization problem and its constituents partly depends on the original problem and the designer's discretion. For instance, we may include $TV$ (of Section 10.2) in the cost function rather than the constraints. Choosing the coefficients and exponents is also problem-dependent. However, internal stability is problem-independent. In general, by increasing a $\rho$ its corresponding $J$ will be minimized more, and both the minimum and the obtained controller $K$ depend on the choice of $\rho$ and $\varphi$.

Remark 10.24: In an optimization problem there are different methods to handle the constraints (Fletcher, 2000; Huyer and Neumaier, 2003; Scholtes and Stoehr, 1999). A most commonly used one is the penalty method. The two most pervasively used penalty functions are the quadratic penalty function and the absolute value penalty function. It is noteworthy that if the quadratic penalty function is used, the resulting problem differs from the original one: the two coincide only as the weight of the penalty function approaches infinity. The absolute value penalty function, on the other hand, is exact: for a sufficiently large finite weight it recovers the solution of the original problem.

Remark 10.25: The aforementioned optimization problem is nonconvex. While one may aim at convexifying it (a formidable problem in itself) in order to use convex optimization methods, which themselves require graduate knowledge of the field, we can simply use a random optimization technique to solve this problem.
Genetic Algorithms (GAs) are among the most powerful random optimization techniques and are extensively used in control system design, see e.g., (Bavafa-Toosi et al., 2000, 2005, 2006; Khaki-Sedigh and Bavafa-Toosi, 2001; Khaki-Sedigh and Lucas, 2000) and Appendix F for an introduction to GAs.
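As a toy instance of the above procedure (system, weights, and search space are all illustrative choices of ours, not the book's), the sketch below uses a crude random search with an absolute-value penalty (Remark 10.24) to minimize a control-effort term subject to a regional pole constraint:

```python
import random

# Toy design problem: double integrator x' = [[0,1],[0,0]]x + [[0],[1]]u with
# state feedback u = -[k1 k2]x; closed-loop polynomial s^2 + k2 s + k1.
# Objective: minimize the effort term k1^2 + k2^2 subject to the "eigenspectrum
# region" Re(pole) <= -1, enforced by an absolute-value (exact) penalty.
random.seed(1)

def max_real_pole(k1, k2):
    disc = k2*k2 - 4*k1
    if disc >= 0:
        return (-k2 + disc**0.5) / 2
    return -k2 / 2                    # complex pair: real part is -k2/2

def cost(k1, k2, rho=100.0):
    violation = max(0.0, max_real_pole(k1, k2) + 1.0)   # want Re(pole) <= -1
    return k1*k1 + k2*k2 + rho*violation                # exact (L1) penalty

best = None
for _ in range(20000):                # crude random search standing in for a GA
    k1, k2 = random.uniform(0, 5), random.uniform(0, 5)
    Jtrial = cost(k1, k2)
    if best is None or Jtrial < best[0]:
        best = (Jtrial, k1, k2)

J, k1, k2 = best
# The true constrained optimum is (k1, k2) = (1, 2): a double pole at -1, J = 5.
print(round(J, 3), round(k1, 3), round(k2, 3))
```

As the bullet list below stresses, such a search carries no convergence guarantee; it merely finds an acceptable answer with high probability after enough trials.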


It is instructive to repeat what we have previously mentioned about optimization problems (in previous chapters) and highlight some points of Appendix F:
- For the generic problem we do not know its feasibility. The problem may have no answer and we may search in vain.
- If the optimization algorithm does not find an answer, it does not mean that the problem does not have an answer.
- If the optimization algorithm finds an answer, we do not know whether we are able to find a much better answer or not.
- The GA (or our evolutionary algorithm) must be run enough times so as to get an acceptable answer. In practice, its convergence is not guaranteed.
- If we get disappointed in searching for an answer, we have to change the weighting functions and/or loosen the constraints and/or enlarge the search space. The 'constraint consistency verification algorithms' (Discussion 6 of Section 4.13.2.1) may also be helpful.

Remark 10.26: In the case that the bounds in Section 10.13 are improved to tighter ones, the approach is not only still valid but in fact results in a better answer. Moreover, with respect to the fundamental limitation of Section 10.14, considering the physical implications and restrictions, we can aim at minimizing the undershoot magnitude. Indeed, any design objective can be addressed if we somehow include it in our optimization cost function or constraints. Some promising candidates are the $M_{us}, M_{os}$ of Section 10.7.2, the structured singular value (called $\mu$), performance indices like the ISE (Integral of Squared Error), the ITAE (Integral of Time multiplied by Absolute Error), $t_s$, $M_p$, weighted sensitivity functions, etc., controllability and observability indices/distances, $\mathrm{cond}(A)$, etc. See Further Readings, item 13.

Remark 10.27: The question of optimality of the cost function itself is an interesting and important issue, discussed in (Zadeh, 1958). To the author's knowledge the problem is yet open. A pertinent discussion with regard to short-term and long-run optimization is reported in (Liberzon, 2015). In the perspective of optimal control the arguments are obvious; however, they may be insightful for governments whose decisions and rules directly and indirectly affect the lives of individuals. An actual example follows.

Example 10.45: Consider the system $\dot x = Ax + Bu$, $y = Cx$, $u = Ky$, representing the linearized dynamics of an Extended Medium-Range Air-to-Air Technology (EMRAAT) missile (Chouaib and Pradin, 1994), where $A \in \mathbb{R}^{6\times 6}$, $B \in \mathbb{R}^{6\times 2}$, and $C \in \mathbb{R}^{4\times 6}$, with the numerical entries as given in (Chouaib and Pradin, 1994).

The constraints of the problem are the admissible eigenspectrum region $\Psi$, which is the intersection of the region $\Omega$ (specified by the D-region parameters $\sigma = 2$, $\delta = 30$) with
$$-20 \le \Re(\lambda_1) = \Re(\lambda_2) \le -10, \qquad 5 \le \Im(\lambda_1) = \Im(\lambda_2) \le 15,$$
$$-30 \le \Re(\lambda_3) = \Re(\lambda_4) \le -15, \qquad 10 \le \Im(\lambda_3) = \Im(\lambda_4) \le 19,$$
$$-160 \le \lambda_5, \lambda_6 \le -150,$$
along with the special constraints $J_{1,s} < 0.1$, where
$$J_1 = |c_1 v_1|^2 + |c_1 v_5|^2 + |c_1 v_6|^2 + |c_2 v_1|^2 + |c_3 v_3|^2 + |c_4 v_3|^2 + |c_4 v_5|^2 + |c_4 v_6|^2.$$
The search space is $\|K\|_{\max} \le 2$ and $k_{13} = 0$. Design a static output feedback controller for the satisfaction of these design objectives.

We first note that the search space means $|k_{ij}| \le 2$ and $k_{13} = 0$. As we mentioned in the text, the choice of the parameters $\rho$ and $\varphi$ in (10.26) should be done by trial and error, and the obtained answer depends on them. In our problem it turns out that $J = 10J_1 + J_2 + J_{3,2} + J_4$ results in good answers.¹⁷ We use a GA to solve this problem and find the answer
$$K = \begin{bmatrix} -1.86510330 & 0.15541937 & 0 & 0.35712345 \\ -0.16730183 & -0.18712045 & 0.01186545 & -0.31021282 \end{bmatrix},$$
which results in $J = 255.04721342$, $J_1 = 0.12652713$, $J_2 = 80.730018846$, $J_{3,2} = 171.14435401$, $J_4 = 1.90756962$. More details are as follows: $\sigma_{\max}(E) = 0.00584302$, $\lambda_1, \lambda_2 = -10.49375209 \pm j\,10.00000063$, $\lambda_3, \lambda_4 = -15.07444417 \pm j\,10.07856223$, $\lambda_5 = -150.22122981$, $\lambda_6 = -159.81837764$, $|c_1 v_1|^2 = 2.252986 \times 10^{-4}$, $|c_1 v_5|^2 = 8.871849 \times 10^{-6}$, $|c_1 v_6|^2 = 6.392659 \times 10^{-7}$, $|c_2 v_1|^2 = 0.032689$, $|c_3 v_3|^2 = 0.093277$, $|c_4 v_3|^2 = 2.836768 \times 10^{-4}$, $|c_4 v_5|^2 = 3.119501 \times 10^{-6}$, $|c_4 v_6|^2 = 3.878119 \times 10^{-5}$. This clearly shows that the answers are completely acceptable. Finally, we add that the condition $k_{13} = 0$ is in the spirit of 'sparse control'. Needless to say, it restricts the solvability of the problem, and this is a difficult problem which our GA succeeded in solving.

Remark 10.28: Recalling Theorem 10.25 of Section 10.12, we may formulate another optimization problem, where the search space is the space of symmetric real matrices $A_0$, as in (Bavafa-Toosi et al., 2006).

17 Note that in our experiments the general trend is that the coefficient of $J_1$ must be much larger, approximately in the range $10^3$ to $10^5$. However, we tried values outside this range as well, and it happened that in a certain trial with $\rho_1 = 10$ we found better answers!

10.16 Summary

In this final chapter of the book, after studying the foundations of analysis and synthesis of linear control systems in the previous chapters, we have studied some of the fundamental limitations in linear control systems. The issues are of utmost importance for two simple reasons: They give the designer a deeper insight into the interrelations between the different features and design objectives of a control system, and they determine what is achievable and what is not. The first results in this domain emerged in the 1940s out of the work of Bode on amplifier design! Through the passage of time, little by little, other topics like the inverse response of NMP systems came to the attention of the control community. Fundamental limitations for linear systems were a hot topic in the last two decades of the 20th century, and by the year 2010 they had reached a state of maturity, as evinced by a sharp decrease in new pertinent research results. The irony is that undergraduate texts do not address these topics. We hope that this text serves as a platform also for bringing these important issues to the attention of the next generation of engineers and researchers. We have presented a coherent picture of various aspects and issues in control systems design. In particular we have discussed the relation between time and frequency domain constraints, ideal transfer functions, controller design via the TS method, interpolation conditions, integral and Poisson integral constraints, constraints implied by poles and zeros, actuator and sensor limitations, and delay. Included are also eigenstructure assignment, eigenvalue sensitivity, noninteractive performance, robust stabilization, and positive systems.
Among the numerous lessons of this chapter are the following: the ideal transfer function with respect to sensitivity is not consistent with setpoint tracking requirements; in general, when we reduce sensitivity in one frequency range we inevitably increase it in another; in general, small imaginary zeros result in large control inputs; and, in general, pole placement by output feedback, the ideal minimal eigenvalue sensitivity, and noninteractive performance may not be achievable. The chapter has ended with a design procedure for output feedback control design, which is more comprehensive than the existing ones. The numerous examples and worked-out problems that follow boost the learning of the subject.

10.17 Notes and further readings

1. Section 10.2 is partly taken from (Boyd and Barratt, 1991). Section 10.3 consists of extracts and extensions of (Butterworth, 1930; Winder, 2008). Sections 10.4–10.6 are extracts and extensions of (Bode, 1945; Freudenberg and Looze, 1985). The interpolation conditions for simple poles and zeros date back to 1958, when they first appeared independently in the works of J. R. Ragazzini & G. F. Franklin, and S. C. Bigelow. For the case of multiple poles and zeros they are due to Freudenberg & Looze, who considered the delay term as well. Results of Section 10.7 are extracts and extensions of (Goodwin et al., 1999; Middleton, 1991; Middleton and Graebe, 1999; Youla et al., 1974). Sections 10.10–10.13 and 10.15 are adopted from (Bavafa-Toosi, 2000). Further results can be found in the same references and the introduced references. Among these results are the fundamental limitations on the time-domain shaping of the response to fixed inputs of (Hill and Halpern, 1993; Hill et al., 2002) and the generalization of the Poisson integrals of (Freudenberg et al., 2003). Theorems 10.26 and 10.27 are adopted from (Chouaib and Pradin, 1994). Section 10.14 is extracted from (Farina and Rinaldi, 2011; Goodwin et al., 2015; Zappavigna et al., 2012). Reference (Goodwin et al., 2015) was brought to our attention by G. C. Goodwin (Goodwin, 2016). The definition of positive systems is due to D. G. Luenberger in 1979.
2. Part of the fundamental limitations of linear systems is tied up with the uncertainties which are studied in the graduate course on robust control systems. Additionally, fundamental limitations have also been studied in other contexts such as filtering and fault detection and isolation.
3. Some fundamental limitations are associated with particular design tools. Some typical important results are due to Ito et al. (1993, 2012), Olanrewaju and Maciejowski (2017), Sun et al. (2001), and Takamatsu and Ohmori (2016).
The first relates the stability of the resulting controllers to certain properties of the weighting function and the number of NMP zeros of the plant. The second one discusses fundamental limitations of certain methods of Lyapunov function construction. The third reference studies the implications of discretization on dissipativity and the economic MPC scheme, which can be regarded as fundamental limitations. The fourth article concerns the identifiability and persistence-of-excitation requirements for classical closed-loop identification techniques; in that work the limitations are lifted. Finally, the last work solves the robust control problem with mismatched disturbance for fractional-order systems by means of a modified backstepping control. Note that at present we know mostly adaptive (backstepping and intelligent/neural) methods for handling this problem, and although there is no proof that other methods are unable to solve it, it is rather a stereotype that they are not; thus mismatched disturbance/uncertainty has turned into a practical fundamental limitation for other methods. Among other works in this class is Arefi et al. (2015), which uses an adaptive neural controller to tackle this problem.
4. In large-scale and networked systems there are some special fundamental limitations. The interested reader is referred to the respective literature that we mentioned in Chapter 1, as well as (Bamieh et al., 2012; Bavafa-Toosi, 2006, 2016; Dahleh, 2009; Eslami and Nobakhti, 2016; Meskin and Khorasani, 2011; Mesbahi and Egerstedt, 2010; Siami and Motee, 2016), etc. In the same spirit we also have the issues regarding structure and configuration design, as we pointed out in item 2 of Further Readings of Chapter 1.
5. Some of the fundamental limitations which we have discussed for LTI systems can be overcome by resorting to nonlinear controllers; see e.g., (Hunnerkens et al., 2016). However some may not.
For instance, the inverse response of an NMP system cannot be overcome by any type of control, and this can easily be proven. Further results on the fundamental limitations of NMP systems can be found in (Nokhbatolfoghahaee et al., 2016, 2017).

Introduction to Linear Control Systems

6. The counterparts of the results we presented here for continuous-time SISO LTI systems also exist for discrete-time systems, MIMO systems, and to some extent for time-varying, nonlinear, stochastic, hybrid, adaptive, constrained, singularly perturbed, delay, and fractional-order systems, etc., as well as for the case of finite-horizon control. The interested reader can consult the research forums in either direction. A few references are cited: (Aguiar et al., 2008; Chen, 1995; Guan et al., 2014; Iglesias, 2002; Jiang et al., 2014; Khorasani and Pai, 1985; Li and Hovakimyan, 2013; Mehta et al., 2008; Mesbahi, 2005; Qi et al., 2017; Qureshi and Gajic, 1992; Saberi et al., 1996, 2012; Saha et al., 2010; Shafai et al., 2013; Shafai and Sotirov, 1999; Skarda et al., 2014; Tavazoei, 2014; Yu and Mehta, 2010; Zelazo and Mesbahi, 2011). An interesting particular result is on tracking sinusoidal input references (Su et al., 2003). Interesting conditions on the decomposability of linear systems are reported in (Mesbahi and Haeri, 2015).
7. Fundamental limitations in filtering and constrained estimation are also studied in a number of works; see e.g., (Levy and Nikoukhah, 2004; Saberi et al., 2006, 2007).
8. For results on fault detection and isolation see (Baniamerian et al., 2013; Hashtrudi-Zad and Massoumnia, 1999; Massoumnia, 1986; Meskin and Khorasani, 2011; Saberi et al., 2007).
9. In the modern approach a more general model of the control system is assumed, by considering a communication link or "communication channel" between different elements in the system. Fundamental limitations have been considered for such systems as well, although the results are rather scarce. Interesting results are in (Fang et al., 2014, 2017; Liberzon, 2014; Martins and Dahleh, 2008; Rojas et al., 2008; Silva and Pulgar, 2013).
Some pertinent fundamental results can also be found in (Chakravorty and Mahajan, 2017; Devroye et al., 2008; Jalaleddini et al., 2013; Kittipiyakul et al., 2009; Musa et al., 2016; Tabbara et al., 2007).
Research on the output feedback pole assignment and eigenstructure assignment problems has almost reached a state of dormancy in the engineering forums since the late 1990s. Basic results can be found in (Seraji, 1975, 1981; Tarokh, 1985, 1992). Only sporadically have some results, although strong, since appeared; see e.g., (Askarpour and Owens, 1999; Ghane-Sasansaraee and Menhaj, 2013; Konigorski, 2012; Tsui, 2005; Verschelde and Wang, 2004; Wang and Konigorski, 2013). The reason is that in its full generality it is a sophisticated problem where advanced algebraic-geometry tools of pure mathematics are required. In mathematics forums several important results have been reported; see e.g., (Blumthaler and Oberst, 2012; Bunse-Gerstner et al., 1999; Chu et al., 1999; Mukhin et al., 2009; Sottile, 2000, 2010) and their bibliographies. Nevertheless, it is still open in certain directions despite the availability of several good results. We also add that "characterization" of the minimal sensitivity problem by output feedback is still open in its full generality, and our results address a special case. In the simpler case of state feedback the situation is almost completely solved, but even here some specific corners are still open in the sense that further tangible characterizations and estimates would be desirable; see Exercise 10.41. On the other hand, the relation with constrained control is discussed in Exercise 10.69. Finally, we add that more advanced studies will be pursued in the sequel of this book on state-space methods.
The concept of norm has much more detail, beyond what we can discuss in this book. However, some basic properties are addressed in Exercises 10.48–10.55. See also Remark C.2 of Appendix C.
It is good to know that in recent times the concept has found special applications in systems theory, where, e.g., topics like "rank minimization" have been studied in detail. The interested reader is referred to (Fazel, 2002; Recht et al., 2010; Fazel et al., 2013; Oymak et al., 2015) and the references therein.

Fundamental limitations


10. In the next course, on state-space methods, you will study more about the condition number. A matrix which has a large condition number is called ill-conditioned. Ill-conditioned matrices are difficult to invert, and working with them requires specialized techniques, like orthogonal methods, in which inversion is avoided. The direct relation with our context of control theory is that the system should not be ill-conditioned. Also note that the singular values σ(V) which are used in the definition of the condition number have special forms of representation and computation, along with many important properties which are outside the scope of this book. On the other hand, it is known that systems with large singular values have fundamental limitations in control(lability); see e.g., (McAvoy and Braatz, 2003). We close this item by adding that a theoretical analysis of regional pole placement for condition number minimization in state feedback was offered by V. Mehrmann and H. Xu in 1998, where important observations were made. The counterpart of this study in output feedback is an interesting topic for research.
11. The theory of Metzler matrices is closely related to the theory of M-, P-, and Z-matrices. Many results which can be interpreted as fundamental limitations on 'special matrices' can be found in the literature. The interested reader can refer to (Amster and Idels, 2016; Brandts and Cihangir, 2016; Berman and Plemmons, 1994; Kushel, 2015; Shafai et al., 1997; Talebi, 2017) and the pertinent literature. See also unitary, normal, Bell, Leslie, circulant, and Toeplitz matrices, as well as adjacency and Laplacian matrices of graphs (Ahmadizadeh et al., 2017; Akbari et al., 2017; Hassani Monfared, 2017). References (Dehghani et al., 2017) and (Emami and Hopke, 2017) discuss topics regarding the information in a matrix and limitations in model-free data analysis.
12. The structured singular value that we mentioned in Remark 10.26 was introduced by Doyle (1982).
It is a fundamental tool in robust control theory and constitutes an independent branch of it. Weighted sensitivity functions, which are commonly used in the context of robust control, can be included in our problem formulation as well.
13. Among the topics which we did not discuss are the implications of the effective resistance of graphs, limitations in synchronization, virtual holonomic constraints, constraints in the context of matrix means (with applications in stochastic systems), certain constraints in the context of optimality, the small gain theorem, etc., eigenvector sensitivity, and the performance of numerical algorithms used for computations in control. With respect to this last item, recall from Section 1.8 that one of the four steps in the whole solution of a problem is "computation." The central issue in almost all computational control problems is the so-called "eigenvalue problem." In computational mathematics this phrase refers to the whole problem of finding eigenvalues, eigenvectors, invariant subspaces, stability radius, etc., and is one of the main topics in the field. In particular, it is good to know that an upper bound on the accuracy of almost all numerical algorithms that are used for this task depends on cond(A). In some cases cond(A)² also appears! Some typical references where you can find interesting general/specialized results are Bollhofer (2015), Borm and Mehl (2012), Horn and Johnson (2012), Karow (2011), Kressner (2007), Liesen and Strakos (2015), Maggiore and Consolini (2013), Schmidt (2007), Seufer (2005), Shi (2004), Son et al. (2016), Steinbrecher (2015), Xia and Scardovi (2016), and Young et al. (2015).
14. An actual system with actual parameter values on which fundamental limitations are studied is the bicycle. The interested reader is referred to (Astrom et al., 2005). A variety of other systems can also be found in the literature, like the particular manipulator discussed in (Bazaei and Moallem, 2012). On the other hand, we add that NMPness is a structural property of the system, i.e., a structural fundamental limitation. The question "When does a system possess an NMP zero?" is considered in a graph-theoretic setting in (Abad Torres and Roy, 2015; Daasch et al., 2016). Further theoretical properties are available in (Nokhbatolfoghahaee et al., 2016, 2017).
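As a numerical aside to item 10, ill-conditioning is easy to exhibit with a few lines of standard-library Python (the 2x2 closed-form singular values and the example matrices below are our illustration, not material from the book):

```python
import math

def svals_2x2(a, b, c, d):
    # Closed-form singular values of [[a, b], [c, d]] via the eigenvalues
    # of A^T A (avoids any linear-algebra library; the formula is standard).
    t = a*a + b*b + c*c + d*d            # trace(A^T A)
    det = a*d - b*c                      # det(A); det(A^T A) = det(A)^2
    disc = math.sqrt(max(t*t - 4.0*det*det, 0.0))
    s1 = math.sqrt((t + disc) / 2.0)
    s2 = math.sqrt(max((t - disc) / 2.0, 0.0))
    return s1, s2

def cond_2x2(a, b, c, d):
    s1, s2 = svals_2x2(a, b, c, d)
    return float('inf') if s2 == 0.0 else s1 / s2

print(cond_2x2(1.0, 0.0, 0.0, 2.0))      # → 2.0 (well-conditioned)
print(cond_2x2(1.0, 1.0, 1.0, 1.0001))   # ~4e4: nearly singular, risky to invert
```

Inverting a matrix like the second one loses roughly a factor of cond(A) in relative accuracy, which is why the orthogonal methods mentioned in item 10, which avoid inversion, are preferred.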


15. Available results on positive systems are versatile and numerous. They concern both intrinsic/structural properties and controller analysis/design. We confine ourselves to referring the reader to (Bokharaie et al., 2011a, 2011b; Farina and Rinaldi, 2011; Feyzmahdavian et al., 2013; Guiver et al., 2016; Roszak and Davison, 2011; Santesso and Valcher, 2007; Shafai et al., 2016; Shen and Chen, 2017; Virnik, 2008). For instance, we know that positive eigenvectors of nonnegative irreducible matrices induce norms for linear positive systems, which may help estimate or control the transient dynamics. This result is further developed for the case of uncertain systems in (Guiver et al., 2016). On the other hand, the zero-pattern properties and asymptotic evolution of trajectories of positive systems are investigated in (Santesso and Valcher, 2007). Finally, optimal servomechanism design for SISO positive systems is studied in (Roszak and Davison, 2011). The paper relies on a 1976 result of Davison which itself can be regarded as a fundamental limitation in a particular setting. The result hinges on the theory of singular perturbations.
16. The chapter can be best complemented by fundamental limitations due to controller structures: the achievable performance also depends on the controller structure. To see this, recall simply that not every controller structure is able to stabilize a given plant, let alone achieve a desired performance level, e.g., a desired steady-state error. The same is true for the GM, PM, etc., as we have seen in Chapter 9 in various problems. This has been further detailed by the results of this chapter, e.g., Examples 10.13 and 10.18, Theorem 10.15, Problem 10.16, etc. In fact, in various papers the achievable performance of certain plant models with certain controller structures (like PI and PID) is characterized; see e.g., (Badri and Tavazoei, 2015; Garpinger et al., 2014). You will practice this in Exercises 10.67 and 10.68.
17. Development of the counterparts of the results of this chapter for DAE, fractional-order, positive, constrained, multiscale, etc. systems is desirable.

10.18 Worked-out problems

Problem 10.1: Compare the T and S of the logistic functions with those of the Butterworth filters.
For first-order systems they are the same. For second- and third-order systems they are compared in the sequel. To have a fair comparison the bandwidths should be the same. Recall from Chapter 4 that the bandwidths of k/(s + a)² and k/(s + a)³ are 0.64a and 0.5a, respectively. On the other hand, the Butterworth filters all have the bandwidth ω_c. In order for the logistic functions to have the same bandwidth they must be chosen as T₂(s) = (ω_c/0.64)²/[s + (ω_c/0.64)]² and T₃(s) = (ω_c/0.5)³/[s + (ω_c/0.5)]³. For ω_c = 30 the answer is plotted in Fig. 10.24, where the solid and dashed curves refer to the logistic and Butterworth filters, respectively. Note that ω_c = 30 is visible as the bandwidth in this figure. As is observed, the logistic function has a smaller sensitivity peak. We have also seen that it has a better tracking performance. Thus it looks like an excellent choice for the desired transfer function in applications; the problem, however, is that it is not always achievable!

Problem 10.2: Prove that when either |T| or |S| is small then the other one is approximately equal to one.


Figure 10.24 Problem 10.1, Left: Second order, Right: Third order.
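The second-order case of Problem 10.1 can be reproduced numerically. The following stdlib-only Python sketch (our illustration; MATLAB serves equally well) checks that both filters have the same bandwidth at ω_c = 30 and compares their sensitivity peaks:

```python
import math

wc = 30.0                      # common bandwidth, as in Problem 10.1
a = wc / 0.64                  # pole of the 2nd-order logistic function

def T_logistic(w):             # T2(s) = a^2/(s+a)^2 evaluated at s = jw
    return a**2 / (1j*w + a)**2

def T_butter(w):               # 2nd-order Butterworth: wc^2/(s^2 + sqrt(2) wc s + wc^2)
    s = 1j*w
    return wc**2 / (s**2 + math.sqrt(2.0)*wc*s + wc**2)

# Both magnitudes are close to -3 dB at w = wc, i.e., the same bandwidth:
print(abs(T_logistic(wc)), abs(T_butter(wc)))   # ~0.709 and ~0.707

# Peak sensitivities |S| = |1 - T| over a frequency grid:
ws = [0.1 * k for k in range(1, 10000)]         # 0.1 ... 999.9 rad/s
S_log_max = max(abs(1 - T_logistic(w)) for w in ws)
S_but_max = max(abs(1 - T_butter(w)) for w in ws)
print(S_log_max, S_but_max)
```

The peaks come out as about 1.155 for the logistic function versus about 1.272 for the Butterworth filter, consistent with the smaller sensitivity peak observed in the figure.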

Suppose |T| is a small number at the frequency ω. Denote T(ω) = a + jb. Because |T| = √(a² + b²) ≈ 0, a and b are both small, i.e., a, b ≈ 0. Now S(ω) = 1 − T(ω) = 1 − a − jb ≈ 1, so S ≈ 1 and |S| ≈ 1. Similarly, when |S| ≈ 0 then T ≈ 1 and |T| ≈ 1. The argument is shown pictorially in Fig. 10.25.

Figure 10.25 Problem 10.2.
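The argument can be spot-checked numerically; the particular value of T below is our own arbitrary choice:

```python
# A quick numeric check of Problem 10.2: pick a "small" complementary
# sensitivity and observe that the corresponding sensitivity is near 1.
T = 0.05 + 0.02j        # |T| ~ 0.054, i.e., small
S = 1 - T               # S + T = 1 by definition
print(abs(T), abs(S))   # |S| ~ 0.95, i.e., approximately 1
```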

Note that with a similar argument we can conclude that when either |T| or |S| is large then the other is also large (and of approximately the same magnitude).

Problem 10.3: How do the interpolation conditions conflict with and restrict the ideal transfer function and sensitivity function of a plant?
For simplicity of presentation we consider the case of simple CRHP poles and zeros. We consider the case that the system is designed for BW = 4, while it has an NMP zero at z = 2 and an NMP pole at p = 8. The "ideal" T and S of Fig. 10.2 are "simply" deformed as in Fig. 10.26. However, the effect in fact extends over the whole frequency range, like the water-bed effect. (The student is urged to exemplify a system and use MATLAB® to see exactly how they look.)

|S|

1

1

0

z

BW

p

ω

0

z

BW

Figure 10.26 Effect of CRHP poles and zeros on the ideal T and S.

p

ω


The above figure clearly demonstrates the requirement that in order to have good performance we should have z > p. In fact, because of the water-bed effect it is often required that z ≥ 5p. The BW should be chosen in between, i.e., p < BW < z, often BW = 2p, so that the ideal T and S are not much affected. We also note that Exercise 8.26 shows that the condition z ≥ 5p translates to S_max ≥ 1.5. Finally, for the sake of completeness we should add that the phrase "water-bed effect" actually exists in the literature with a similar statement and interpretation, considering the magnitude of the ORHP zeros z_i, ε, and Theorem 10.5, and is due to (Francis and Zames, 1984).

Problem 10.4: Design a stabilizing controller for the system P(s) = (s − 1)/(s − 2) using the interpolation conditions. Try different transfer functions and find an achievable one.
We first note that N_T ≥ N_P requires N_T ≥ 0, i.e., this condition does not impose any restriction on the achievable T (except its causality, which should obviously be met). If we set N_T = N_P = 0 the controller will be proper, and if we set N_T > N_P = 0 the controller will be strictly proper. We note that the simple proportional control C(s) = k, −2 < k < −1, stabilizes the system. That is, the transfer function T = k(s + a)/(s + b) is achievable. (For what choices of a and b?) But for the sake of illustration let us try T = k(s + b)(s + c)/(s + a)² for step tracking.
The interpolation conditions are T(1) = 0, T(2) = 1, T(0) = 1. From T(1) = 0 we have T = k(s + b)(s − 1)/(s + a)². From T(0) = 1 we find −kb = a². Finally, from T(2) = 1 we find k = a² + 2a + 2 and b = −a²/(a² + 2a + 2). On the other hand, S = 1 − T = [(1 − k)s² + (2a − kb + k)s + a² + kb]/(s + a)² = −(a + 1)² s(s − 2)/(s + a)². Thus the controller is found as C = T/(PS) = K(s + b)/s, where K = −(a² + 2a + 2)/(a² + 2a + 1). With this controller, e.g., for a = 1 we find b = −1/5 and K = −1.25, and the root locus is given in Fig. 10.27. The system is stable.
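The algebra of Problem 10.4 is easy to verify with a few lines of polynomial arithmetic (a stdlib-only sketch; the helper functions are ours):

```python
def polymul(p, q):
    # Multiply two polynomials given as coefficient lists, highest power first.
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def polyadd(p, q):
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [x + y for x, y in zip(p, q)]

def peval(p, s):
    v = 0.0
    for c in p:
        v = v * s + c
    return v

a = 1.0
k = a**2 + 2*a + 2                        # = 5
b = -a**2 / k                             # = -1/5
K = -(a**2 + 2*a + 2) / (a**2 + 2*a + 1)  # = -1.25

numT = polymul([k], polymul([1.0, b], [1.0, -1.0]))   # k(s+b)(s-1)
denT = polymul([1.0, a], [1.0, a])                    # (s+a)^2
for s0, want in [(1.0, 0.0), (2.0, 1.0), (0.0, 1.0)]: # interpolation conditions
    assert abs(peval(numT, s0) / peval(denT, s0) - want) < 1e-9

# Closed loop of C(s) = K(s+b)/s with P(s) = (s-1)/(s-2):
# characteristic polynomial s(s-2) + K(s+b)(s-1), expected ~ (s+1)^2.
char = polyadd([1.0, -2.0, 0.0],
               polymul([K], polymul([1.0, b], [1.0, -1.0])))
print([round(c / char[0], 9) for c in char])          # → [1.0, 2.0, 1.0]
```

The characteristic polynomial collapses to a multiple of (s + 1)², confirming the double closed-loop pole at −1, in agreement with the choice a = 1.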

Question 10.7: With the assumption T(0) = 1, S(0) = 0 we have actually designed a step-tracking controller, as is obvious from the controller integrator. If we do not make this assumption and suffice with stabilization (which is the original objective in this problem), then a controller like C(s) = k(s − 0.5)/(s − 0.1), k ≈ −1.1212, results in essentially the same root locus and a closed-loop transfer function with s₁ = s₂ ≈ −1.7247. In parallel with Section 10.4, provide the assumptions that we should make for the stabilization objective, and in particular the details for the present problem.
Question 10.8: With reference to Section 10.4, what if our system is not low-pass, but one of those discussed in Exercise 7.25 of Chapter 7?
Question 10.9: The controller of Question 10.7 may raise the thought that a controller like C(s) = k(s − 0.5)/(s + 0.1), k < 0, also stabilizes the system. However, it does not, since the break point will be on the right-hand branch. Given the system


Figure 10.27 Root locus of Problem 10.4.

L(s) = k(s + a)(s + b)/[(s + c)(s + d)], where k < 0 and the zeros are between the poles, when is the break point on the left-hand branch?

Problem 10.5: Design a step-tracking controller for the system P(s) = (s − 1)/(s − 2) using the interpolation conditions. Try a strictly proper controller.
This is what we just did in the previous Problem 10.4. For the sake of instructiveness, here we look at the problem from another angle. We know that the controller should have an integrator term; let us include it in the plant and change it to P(s) = (s − 1)/[s(s − 2)]. Now the best that we can assume is that the controller is proper, and thus we set N_T = N_P = 1. Hence we can try the controllers T₁ = k/(s + a), T₂ = k(s + b)/(s + a)², T₃ = k(s + b)(s + c)/(s + a)³, ... Note that we have said "can" because we can try other denominators for T₂, T₃. In this problem the interpolation conditions are T(1) = 0, T(2) = 1, T(0) = 1. We easily verify that with T₁ and T₂ we cannot satisfy these conditions. Thus we try T₃. Similarly to the computation for Example 10.11 we find c = −1, k = a³ + 3a² + 6a + 4, b = −a³/(a³ + 3a² + 6a + 4), and S(s) = s(s − 2)(s − d)/(s + a)³, where d = a³ + 3a² + 3a + 2. The controller is thus C(s) = T/(PS) = k(s + b)/(s − d). For a = 1 we have k = 14, b = −1/14, d = 9, and the root locus of the system is as in Fig. 10.10, and thus we do not reproduce it here. The system is stable. (Question: What is the lesson that we should get from this solution?)
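Problem 10.5's design can be spot-checked the same way; here we simply evaluate at a few test points (our choice) instead of expanding polynomials:

```python
k, b, d = 14.0, -1.0/14.0, 9.0   # Problem 10.5 with a = 1

def T(s):                        # T3(s) = k (s+b)(s-1) / (s+1)^3
    return k * (s + b) * (s - 1.0) / (s + 1.0)**3

# Interpolation conditions T(1) = 0, T(2) = 1, T(0) = 1:
print(T(1.0), round(T(2.0), 9), round(T(0.0), 9))   # → 0.0 1.0 1.0

# Characteristic polynomial s(s-2)(s-d) + k(s+b)(s-1) must equal (s+1)^3:
for s in (0.5, -3.0, 2.5, 10.0):                    # arbitrary test points
    lhs = s * (s - 2.0) * (s - d) + k * (s + b) * (s - 1.0)
    assert abs(lhs - (s + 1.0)**3) < 1e-9
print("characteristic polynomial is (s+1)^3: closed loop stable")
```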

Problem 10.6: This is an actual numerical example of fundamental limitations in feedback systems. Consider stabilizing a rod on the palm. We have modeled this system as an inverted pendulum on a cart. We found out in Appendix B that the linearized model results in the transfer function Z/F = (s² − z²)/[Ms²(s² − p²)], where Z is the position of the cart, F is the input, p² = g(m + M)/(Ml), and z² = g/l. Explain the fundamental limitations of this system.
From our findings in this chapter we know that to have acceptable performance we should have at least z > p. In fact z ≥ 5p is desirable. However, in this system always z < p, and thus we know that there is necessarily a large peak in |S|. This is inevitable and is independent of the controller that we design. Moreover, if l is small then regardless of the value of m the NMP pole p is large and the control is difficult, requiring a large bandwidth for the controller. As an example consider m = M and l = 20 cm = 0.2 m. Then p ≈ 10 and BW ≈ 20, and thus t_r is in the order of 0.05 to 0.1 s. That is, the controller must stabilize the system quite fast. To get a feeling for this issue try to stabilize a pen on your palm. Almost impossible!
For the sake of completeness we should add some explanations about stabilizing a pen on the palm, which we mentioned in the above paragraph. In this system the mass m is negligible and instead the rod l, which is the pen itself, has some weight. The modeling is the same except that we denote the mass of the pen by m, centered at its center of mass, which is almost at l/2. We also note that when m ≪ M (as in the case of a pen) then z ≈ p, and because an unstable pole-zero cancellation approximately takes place the system verges on uncontrollability and instability. This aggravates the already complicated situation. In practice it is improbable to control!
Finally, we present a numerical example for the lower bounds on S_max and T_max. For m = M and l = 50 cm = 0.5 m one finds p = √40 ≈ 6.32 and z = √20 ≈ 4.47. Now for ε = 0.1 some cases are as follows: For ω_l = 1 we find S_max ≥ 7.91, and for ω_h = 10 we find T_max ≥ 32134. For ω_l = 0.1 we find S_max ≥ 5.99, and for ω_h = 100 we find T_max ≥ 478. This system is in practice impossible to control because z is near p and smaller than it.
In passing we note that, as expected from intuition, by lowering ω_l the lower bound on S_max decreases. Similarly, by increasing ω_h the lower bound on T_max decreases. They also have an inverse relation with ε: if we increase ε, both of the aforementioned lower bounds decrease. Moreover, if we do not impose the ε restriction on |T| and |S|, the lower bounds on T_max and S_max are smaller, given by |(p + z)/(p − z)| = 5.82.
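The closing numbers of Problem 10.6 are easy to reproduce; the stdlib sketch below is our illustration, with g = 9.81 rather than the rounded g = 10 behind the book's p = √40 and z = √20:

```python
import math

# Problem 10.6 numbers for m = M and l = 0.5 m:
g, l = 9.81, 0.5
p = math.sqrt(g * 2.0 / l)      # NMP pole,  ~6.26 (book: sqrt(40) ~ 6.32)
z = math.sqrt(g / l)            # NMP zero,  ~4.43 (book: sqrt(20) ~ 4.47)

bound = abs((p + z) / (p - z))  # lower bound on S_max and T_max without
print(round(bound, 2))          # the epsilon restriction → 5.83
```

Since p/z = √(1 + m/M) = √2 for m = M, the bound equals (√2 + 1)/(√2 − 1) ≈ 5.83 regardless of g and l, matching the book's 5.82 up to rounding; only the frequency range where the peak occurs depends on the physical parameters.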

Question 10.10: If we use other types of control, like nonlinear or discontinuous control, can we lift the condition z > p, which is a fundamental limitation for good performance and small S_max?

Problem 10.7: Consider a typical plant given by P(s) = K(s − z)/(s² − p²). Discuss the effect of the parameter values in the context of fundamental limitations.

It should have become clear that we require z ≥ 5p, which relates to S_max ≥ 1.5. Otherwise control of the system will be, practically speaking, difficult, perhaps even impossible! If the plant has a second NMP zero then we require min{z₁, z₂} ≥ 5p. (Question: What implication does this have on S_max?)


Problem 10.8: Consider the 1-DOF control system in Fig. 10.9. Denote the output of the controller by U. Define H_ru(s) = U(s)/R(s) and H_di(s) = Y(s)/D_i(s) as in Exercise 1.53. Show that in addition to the interpolation conditions mentioned in Section 10.5, the following conditions should also be met: every unstable pole of the plant is a zero of H_ru, and every unstable zero of the plant is a zero of H_di.
There holds H_ru = C(s)/(1 + L(s)) and H_di = P(s)/(1 + L(s)). Thus with P(s) = N_P(s)/D_P(s) and C(s) = N_C(s)/D_C(s) we have H_ru = N_C(s)D_P(s)/[N_C(s)N_P(s) + D_C(s)D_P(s)] and H_di = N_P(s)D_C(s)/[N_C(s)N_P(s) + D_C(s)D_P(s)]. Noting that for internal stability there should be no unstable pole-zero cancellation in these transfer functions (as in S and T), the conclusion follows, i.e., there should hold H_ru(p) = 0 and H_di(z) = 0.

Problem 10.9: Given an LTI system, suppose you are required to design an LTI controller to track a step reference input and reject a ramp disturbance. What fundamental limitation do you face?
The results of Section 10.6 dictate that it is not possible to design such a controller without having an overshoot in the output (step response). Also recall that in Example 4.3 of Chapter 4, Time Response, we provided a proof for it and for similar cases; see also Problem 4.6 and Exercise 4.7.

Problem 10.10: Reconsider Examples 10.19 and 10.20. We simulate case (2) in this problem. The system is C₂(s) = 5(s + 1)/s² and P₂(s) = (s + 1)/s. The responses due to step and ramp input disturbances are plotted in Fig. 10.28.

Problem 10.11: Consider the plant P(s) = (s − z)/(s − p) with positive z and p, and the controller C(s) = N(s)/[sD(s)] in a negative unity feedback structure for step tracking. Assume that the actuator works in the range [0, U_A,max]. Show that the actuator saturates.
Using the notation of Problem 10.8 we can write U(s) = H_ru(s)R(s) = (1/s)H_ru(s). In that problem we also learned that H_ru(p) = 0. Thus, by application of Theorem 10.12 to the definition U(s) = ∫₀^∞ u(t)e^(−st) dt we obtain ∫₀^∞ u(t)e^(−pt) dt = 0. This in turn

Figure 10.28 Problem 10.10, Input disturbance at t = 5, Left: Step input disturbance, Right: Ramp input disturbance.


means that u(t) must be negative over some time interval, and thus the actuator will saturate at its lower bound.

Problem 10.12: What correlations exist between the time-domain and frequency-domain results?
In addition to the discussion of Section 10.2 and the concluding paragraph of Section 10.7, we note that the worked-out Problem 10.8 actually means that ∫₀^∞ e(t)e^(−zt) dt = 1/z and ∫₀^∞ y(t)e^(−pt) dt = 1/p, where z and p denote any of the ORHP zeros and open-loop ORHP poles of the plant. These relations along with Theorems 10.4–10.8 show other correlations between the time-domain and frequency-domain results. Recall that, as we have said before, there are several other integral-type constraints, of which we introduce some in the next two problems.

Problem 10.13: This is actually a restatement of Theorem 10.6. For an open-loop unstable plant having ORHP poles p₁, ..., p_N in a negative unity feedback structure, there holds (1/π)∫₀^∞ log|S(jω)| dω ≥ Σᵢ₌₁ᴺ pᵢ. If ORHP poles of the controller are also included in the right-hand side summation, then equality is achieved. Note that this theorem along with ∫₀^∞ y(t)e^(−pt) dt = 1/p shows another correlation between time- and frequency-domain results. We also notice that the integral means that the area of sensitivity amplification is larger than that of sensitivity reduction by the amount specified on the right-hand side; in terms of Example 10.14, A_a ≥ A_u + π Σᵢ₌₁ᴺ Re(pᵢ). Two interpretations can be made. (1) We can expect a larger peak in the sensitivity. (2) Part of the loop gain which is used for sensitivity reduction in the case of no ORHP poles is now used to stabilize the ORHP poles.

Problem 10.14: This is actually a theorem (Middleton, 1991). Consider the setting of Theorem 10.7. If the loop gain is of type at least 2, i.e., has at least two integrators, and the system has ORHP zeros z₁, ..., z_M, there holds (1/π)∫₀^∞ (1/ω²) log|T(jω)| dω ≥ τ/2 + Σᵢ₌₁ᴹ 1/zᵢ. We notice that this theorem along with ∫₀^∞ e(t)e^(−zt) dt = 1/z manifests another correlation between time- and frequency-domain results. (Question: What is the interpretation of this theorem?)

Problem 10.15: What is the interpretation of the sensitivity integrals considering the effect of feedback?
Note that |S| > 1 means that the disturbance is amplified with a gain larger than 1, while in open-loop control (no feedback) the disturbance appears identically in the output. Consequently, at frequencies where |S| > 1 feedback has a detrimental effect. That is, the closed-loop system becomes worse than the open-loop system!

Problem 10.16: What is the implication of Theorem 10.6 on the controller poles? Discuss.
Recall that the theorem is ∫₀^∞ log|S(jω)| dω = π Σᵢ₌₁ᴺ Re(pᵢ), where the right-hand side contains the ORHP poles of both the controller and the plant, i.e., it is π Σᵢ₌₁^(N_P) Re(pᵢᴾ) + π Σᵢ₌₁^(N_C) Re(pᵢᶜ), with obvious meaning for the terms. Because π Σᵢ₌₁^(N_P) Re(pᵢᴾ) is a fixed quantity, the sensitivity actually varies as π Σᵢ₌₁^(N_C) Re(pᵢᶜ) does. That is, a way to minimize the sensitivity is to minimize this latter quantity. If the plant is strongly stabilizable, i.e., stabilizable by a stable controller, then this quantity is at its minimum, which is zero. Otherwise we may seek a stabilizing controller which minimizes this quantity. In fact this approach has been proposed in (Halpern and Evans, 2000).

Problem 10.17: This problem shows another facet of the problems caused by NMP zeros. Consider the systems L₁(s) = 1/(s + 1) and L₂(s) = [(2 − s)/(2 + s)]L₁(s). Study them from the point of view of sensitivity.
The systems have the same magnitude, |L₁| = |L₂|, but different phase due to the presence of the NMP zero: ∠[(2 − s)/(2 + s)]: 0 → −180° as ω: 0 → ∞. The Nyquist plot of L₁ is outside the unit circle centered at the critical point, but due to the presence of the Blaschke product the Nyquist plot of L₂ enters that circle, see Fig. 10.29. The sensitivity of (the closed-loop system whose open loop is) L₁ is always less than 1, whereas the sensitivity of (the closed-loop system whose open loop is) L₂ is larger than one at some frequencies. In passing, note that a somewhat similar observation can be made in many other cases, e.g., L₁(s) = 1/(s + 2) and L₂(s) = [1/(s + 2)] × [(s − 1)/(s + 1)]. (Make the precise statement!)
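The integral constraint of Problems 10.13 and 10.16 can be checked numerically with stdlib complex arithmetic. The loop below is our own example, not the book's: L(s) = 5/((s − 1)(s + 2)) has one ORHP pole at p = 1, relative degree 2, and a stable closed loop (characteristic polynomial s² + s + 3), so the theorem predicts a value of π for the integral:

```python
import math

def logS(w):
    # log|S(jw)| for L(s) = 5/((s-1)(s+2)); natural log, as in the Bode integral
    s = complex(0.0, w)
    L = 5.0 / ((s - 1.0) * (s + 2.0))
    return math.log(abs(1.0 / (1.0 + L)))

def trapz(f, a, b, n):
    # Plain trapezoidal rule with n panels on [a, b]
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Fine grid near the peak region, coarser grid for the slowly decaying tail:
integral = trapz(logS, 0.0, 10.0, 20000) + trapz(logS, 10.0, 2000.0, 40000)
print(integral, math.pi)
```

The result is about 3.139; the small deficit relative to π is the truncated tail above ω = 2000 (of order 5/2000) plus the discretization error.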

Problem 10.18: Consider the system L(s) = 2e^(−s)/(s + 3) and the output disturbance d = sin 2t in a negative unity feedback. Is feedback detrimental or beneficial for this disturbance?
We note that the frequency of the disturbance (ω = 2) is inside the bandwidth of the system. However, the Nyquist plot of the system is within the circle |S| = 1, and

Figure 10.29 Problem 10.17, Nyquist plot of the systems.


thus the disturbance is amplified. Likewise, this can be verified by computing |S(j2)| = |1/(1 + L(j2))| > 1. We encourage the reader to repeat the same problem for L(s) = (s + 1)e^(−2s)/[s(s + 3)].

Problem 10.19: Consider the system L(s) = C(s)P(s) = C(s)[e^(−0.1s)/(s − 2)]P₀(s), where P₀(s) is minimum phase and delay free. It is desired to use an NMP controller for this system such that T_max < 1.15. Is it achievable?
We should have τp < log(1.15) = 0.1398. But this condition is not satisfied (τp = 0.1 × 2 = 0.2), and thus no matter what controller we may design, the design objective is not achievable.

Problem 10.20: Consider a system with an NMP zero at z = 0.1. It is desired to design a controller such that the settling time of the system becomes t_s ≤ 3 s for the 2% criterion for step tracking. What can be said about the undershoot of the system?
We have δ = 0.02, and M_us ≥ (1 − δ)/(e^(z t_s) − 1) = 2.8.
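Problems 10.19 and 10.20 reduce to one-line computations; here is a quick stdlib check (Python is our choice of illustration):

```python
import math

# Problem 10.19: delay tau = 0.1 s and ORHP pole p = 2. Achieving T_max < 1.15
# would require tau * p < ln(1.15):
tau, p = 0.1, 2.0
print(tau * p, math.log(1.15))   # 0.2 vs ~0.1398: the requirement fails

# Problem 10.20: NMP zero z = 0.1, settling time t_s <= 3 s, 2% criterion:
z, ts, delta = 0.1, 3.0, 0.02
M_us = (1.0 - delta) / (math.exp(z * ts) - 1.0)
print(round(M_us, 2))            # → 2.8, i.e., an undershoot of at least 280%
```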

Problem 10.21: Suppose that we require T_max ≤ 1.2 and S_max ≤ 1.1. What GM and PM are guaranteed for this system?
For the given S_max we have GM ≥ 11 and PM ≥ 54°. For the given T_max we get GM ≥ 1.83 and PM ≥ 49.2° ≈ 50°. Thus we are guaranteed to have GM ≥ 11 and PM ≥ 54°. We close this problem by encouraging the reader to analyze the formulae to find out under what condition on T_max, S_max the corresponding GM/PM is the larger one.

Problem 10.22: Is the following system stabilizable by a stable controller?
L(s) = −(s² − 1)(s − 3)/[(s² − 2s + 2)(s − 4)²]

The system has three real ORHP zeros, at 1, 3, and ∞. It also has an even number of real poles between every pair of them: no real poles (i.e., zero, which is even) between 1 and 3, and two real poles between 3 and ∞. So it is strongly stabilizable. (Question: What if the system also has an integrator?)

Problem 10.23: Consider a type-1 plant P(s) for which we wish to design a controller for step tracking. The plant has ORHP pole(s) and zero(s) and some stable dynamics. We consider two scenarios. (1) The designer designs the internally stabilizing controller C(s) = K[(s² + a)/(s(s − b))]C₁(s). (2) The designer designs the internally stabilizing controller C(s) = K[1/(s(s² + a))]C₂(s). Discuss the philosophy behind the controller terms and the problem setting in each scenario.
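Returning to Problem 10.22 for a moment, the parity (even-count) test used there can be sketched programmatically. The function and its input representation are ours, not the book's:

```python
# Strong stabilizability via the parity interlacing property (Youla et al., 1974):
# a plant is stabilizable by a stable controller iff the number of real ORHP
# poles between every pair of real ORHP zeros is even (infinity counts as a
# zero when the plant is strictly proper; poles are listed with multiplicity).

def strongly_stabilizable(real_orhp_zeros, real_orhp_poles):
    zs = sorted(real_orhp_zeros)
    for lo, hi in zip(zs, zs[1:]):           # consecutive zero pairs suffice
        count = sum(1 for p in real_orhp_poles if lo < p < hi)
        if count % 2:                        # odd count -> PIP violated
            return False
    return True

INF = float('inf')

# Problem 10.22: zeros at 1, 3, infinity; real ORHP pole at 4 (double):
print(strongly_stabilizable([1.0, 3.0, INF], [4.0, 4.0]))   # → True

# A counterexample (ours): a single pole between the zeros 1 and 3:
print(strongly_stabilizable([1.0, 3.0, INF], [2.0]))        # → False
```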

(1) Because the plant is already of type 1, the controller does not need any integrator for step tracking. The reason for the integrator of the controller is thus to reject an input step disturbance and/or an output ramp disturbance. The reason for using the unstable pole of the controller is (probably) to make an even number of ORHP poles between a pair of ORHP zeros of the plant, so that the rest of the controller, with stable dynamics, i.e., C₁, stabilizes the system with 'a' minimal sensitivity (recall Problem 10.16). Moreover, it is also certainly used for the purpose of internal stabilization. The reason for using the imaginary zeros of the controller is to reject a sinusoidal noise and/or a sinusoidal disturbance (of frequency ω = √a) which appears in the setpoint as an additive disturbance (the so-called setpoint disturbance). That is, in this particular problem there is a noise of the form n(t) = A_n sin(√a t), and/or the setpoint that is applied to the system is not r(t) = A_r step(t) but the perturbed signal r(t) = A_r step(t) + Â_r sin(√a t). See Section 4.13. With this controller we shall have y_dr,ss = y_n,ss = 0. (2) The reason for the integrator is the same as before. As for the imaginary poles, they are used to reject a sinusoidal disturbance, either at the input or the output of the plant. Moreover, the input disturbance can actually be d_i(t) = A_di sin(√a t) + Â_di step(t), and the output disturbance can be d_o(t) = A_do sin(√a t) + Â_do step(t) + Ā_do ramp(t).

Problem 10.24: In order to get rid of the NMP zeros of the plant, a designer proposes to cancel them out by the prefilter F in the 2-DOF control structure. Is the design acceptable?
Because F acts in open loop, i.e., there is no feedback loop around it, in order to maintain the internal stability of the system it has to be stable. The design is not acceptable; it is internally unstable.

Problem 10.25: As stated in the text, in certain cases where the designed controller is not well tuned, actuator limitation proves beneficial. We demonstrate this issue in this problem. We simulate the system P = 10/[s(s − 1)] and C(s) = 13.93(3 × 0.0268s + 1)/(0.0268s + 1) of Problem 9.11 of Chapter 9 for a sawtooth reference input.
We observe that the control signal requires very large peaks which are not usually met by any actuator, left panel of Fig. 10.30. The tracking performance is shown in the middle panel of the same figure: the output also has large undesirable overshoots. Now we suppose that the actuator can accommodate peaks of, say, [-30, 30]. Inserting this saturation block, we simulate the system again and observe a better output, right panel of the same figure! Discussion: We also note that the controller of Problem 9.11 was originally designed for another reference input, and for that input it was an excellent design. The question that arises now is what the controller should be for this reference input in order to obtain a good response (without using a saturation block, i.e., with a less authoritative actuator). It is left as an easy exercise to the reader to propose the controller.

Figure 10.30 Problem 10.25. Top: Control signal without saturation. Bottom left and right: Input reference and output without and with actuator saturation.

Problem 10.26: We investigate the sensor noise in this problem. Consider a normally distributed band-limited white noise added to the output of the sensor in Example 10.33. Simulate the system.
The simulation result is plotted in Fig. 10.31. As observed: (1) The tracking performance deteriorates. (2) The rate of oscillations in the output is much less than that of the noise, and this is because the system is a low-pass one. (3) The maximum amplitude of the error in the output is much less than the maximum amplitude of the noise. (4) As we may expect, an increase in the power of the noise results in more performance degradation.

Figure 10.31 Problem 10.26. Left: Noise power 0.001. Right: Noise power 0.02.

The noise power in the left and right panels is


0.001 and 0.02, respectively. In passing, it is good to learn that the small number 0.02 is actually a large value for a noise power!
Question 10.11: This question has some parts. (1) What is the unit of the noise power? (2) Does the response deterioration also depend on the delay-free plant/system dynamics? (3) If yes, how? (4) Exemplify a system whose observed deterioration (e.g., in the form of the magnitude of the oscillations) is smaller.
Problem 10.27: Consider the system ẋ = Ax + Bu, y = Cx, u = ky, where

A = [1 2; 3 -4], B = [1; 1], C = [1 -1],

and the control objective is stabilization. By direct computation investigate the fundamental limitations in pole assignment by static output feedback.
Solving the problem similarly to Example 10.35, we find the characteristic equation s² + 3s - 4k - 10 = 0. Thus s1 + s2 = -3 and s1 s2 = -4k - 10. The assignable closed-loop poles are therefore restricted to those satisfying s1 + s2 = -3 and -3 < si < 0. Hence, for instance, the closed-loop poles s1 = s2 = -2, or s1 = -4 with any s2, are not assignable.
Problem 10.28: Consider the system ẋ = Ax + Bu, y = Cx, u = Ky. Find the static output feedback controller such that the closed-loop poles are assigned at {-1, -2, -3, -4, -5}. The system parameters are

A = [0 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 1 1; 0 0 0 0 0],
B = [1 0 0; 0 1 0; 0 0 1; 1 0 0; 0 1 0],
C = [1 0 0 0 0; 0 1 0 0 0].

Using Theorem 10.20 as well as a GA, we find an infinite number of solutions, such as

K1 = [-50 -584; 1820/583 -10; 60/583 19], K2 = [0 -584; 120/583 34; 692/583 19], K3 = [-6 -584; -258/583 -16; -68/583 19], K4 = [10.01 -584.005; 0.6518 -25.99; -0.7684 19.01], K5 = [50.001 -584.07; 5.8662 -66.02; -2.0720 19.1].
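The computation in Problem 10.27 can be replayed numerically; a minimal sketch in pure Python (no toolboxes assumed):

```python
# Numerical check of Problem 10.27. With u = k*y the closed-loop state matrix
# is A + k*B*C; the claim is that its characteristic polynomial is
# s^2 + 3s - 4k - 10 for every k, so the pole sum is frozen at -3.

def closed_loop(k):
    # A + k*B*C for A = [[1, 2], [3, -4]], B = [[1], [1]], C = [[1, -1]]
    A = [[1, 2], [3, -4]]
    BC = [[1, -1], [1, -1]]          # the outer product B*C
    return [[A[i][j] + k * BC[i][j] for j in range(2)] for i in range(2)]

def charpoly_2x2(M):
    # det(sI - M) = s^2 - trace(M)*s + det(M); return (1, -trace, det)
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return (1, -tr, det)

for k in (-3, 0, 2.5):
    c2, c1, c0 = charpoly_2x2(closed_loop(k))
    assert (c2, c1, c0) == (1, 3, -4 * k - 10)   # s^2 + 3s - 4k - 10
# Since s1 + s2 = -3 for every k, e.g. s1 = s2 = -2 is indeed unassignable.
```

The loop confirms that only the pole product is free; the trace of A + kBC does not depend on k.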

Problem 10.29: As we have observed in Examples 10.35 and 10.37 and Problem 10.27, it is possible that the condition m + l > n or ml > n is not satisfied but pole assignability at certain desired locations is still achievable. Investigate this fact, using Theorem 10.20, to place the poles of the system ẋ = Ax + Bu, y = Cx, u = Ky at Λ = {-1, -2, -3, -4}, where

A = [0 2 0 0; 0 0 3 0; 0 0 -5 2; -2 0 0 0], B = [1 0; 1 0; 0 1; 0 1], C = [1 0 0 0].


We find [v1^T p1^T]^T = [-1 -2 -1 -2 | 5 0]^T, [v2^T p2^T]^T = [1.5 2.25 1 1.5 | -7.5 0]^T, [v3^T p3^T]^T = [-1.5 -1.5 -1 -1 | 7.5 0]^T, [v4^T p4^T]^T = [2 1 2 1 | -10 0]^T, and K = [-5 0]^T. The reader is encouraged to investigate that if a parameter in A varies, e.g., -5 → -4, then the problem does not admit a solution. (Verify the details!)
Problem 10.30: Consider the plant ẋ = Ax + Bu, y = Cx, u = Ky, where the system matrices are given below. Design a type-1 servo system for this plant such that the closed-loop poles are assigned at Λ = {-1, -2, -3, -4, -5}.

A = [0 4 -8; 0 -9 0; 0 0 -6], B = [1 0; 0 1; 0 0], C = [1 0 0; 0 1 1].

Using the approach presented in the text, we find an infinite number of solutions, such as

K1 = [0 -4.1025; 9.75 31; 0 -1.5385], K2 = [15 -56; 2 16; 13 -20], K3 = [0 65; 1 31; -4.375 -71.785].
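Going back one problem: Problem 10.29's claim, that K = [-5 0]^T places the closed-loop poles at {-1, -2, -3, -4}, can be verified exactly; a sketch in pure Python (the Faddeev-LeVerrier recursion is our choice of method, not the book's):

```python
# Exact verification of Problem 10.29 using rational arithmetic.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def charpoly(M):
    # Faddeev-LeVerrier: coefficients of det(sI - M), leading coefficient first
    n = len(M)
    coeffs = [F(1)]
    N = [[F(int(i == j)) for j in range(n)] for i in range(n)]  # identity
    for k in range(1, n + 1):
        MN = matmul(M, N)
        c = -sum(MN[i][i] for i in range(n)) / k
        coeffs.append(c)
        N = [[MN[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]
    return coeffs

A = [[0, 2, 0, 0], [0, 0, 3, 0], [0, 0, -5, 2], [-2, 0, 0, 0]]
B = [[1, 0], [1, 0], [0, 1], [0, 1]]
C = [[1, 0, 0, 0]]
K = [[-5], [0]]
BKC = matmul(matmul(B, K), C)
Acl = [[F(A[i][j] + BKC[i][j]) for j in range(4)] for i in range(4)]

# (s+1)(s+2)(s+3)(s+4) = s^4 + 10s^3 + 35s^2 + 50s + 24
assert charpoly(Acl) == [1, 10, 35, 50, 24]
```

The closed-loop characteristic polynomial indeed factors as (s+1)(s+2)(s+3)(s+4), even though ml = 2 < n = 4.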

Problem 10.31: Let an LTI MIMO plant be described by ẋ = Ax + Bu, y = Cx, u = -Ky, with B and C of full column and row rank, respectively. Let D^v be the vector representation of a matrix D obtained by concatenating its rows into a long (column) vector. Prove that if a symmetric real matrix Z can be found such that rank[B ⊗ C^T, (A^T + Z)^v] = mp, and pole assignment is accomplished in the admissible region Ψ, then the ideal minimal sensitivity problem is solved and the solution in the matrix-vector representation is given by K^v = -(B ⊗ C^T)⁺(A^T + Z)^v.
If the controller K can be designed such that the closed-loop state matrix is symmetric, then cond(V) = 1. This is possible iff a real symmetric matrix Z can be found such that -BKC = A^T + Z or, in the matrix-vector representation, -(B ⊗ C^T)K^v = (A^T + Z)^v. This last equation is solvable iff rank[B ⊗ C^T, (A^T + Z)^v] = rank(B ⊗ C^T) = rank(B)rank(C^T) = mp, in which case the solution is given by K^v = -(B ⊗ C^T)⁺(A^T + Z)^v. Note that a natural method to find Z is to use a random-number generator. However, a better and faster approach is to invoke a genetic algorithm. On the other hand, analytical methods are also available for this purpose, which we shall discuss in the sequel of this book on state-space systems.
Problem 10.32: Consider the system ẋ = Ax + Bu, y = Cx, u = Ky, where the system parameters are

A = [-0.0582 0.0651 0 -0.171; -0.303 -0.685 1.109 0; -0.0715 -0.658 -0.947 0; 0 0 1 0],
B = [0 1; -0.0541 0; 1.11 0; 0 0],
C = [5 0 0 0; 0 0 0 1; 0 0 1 0].

This is a flight control system taken from (Patel and White, 1993). A control objective is to decouple the modes in the admissible eigenvalue region. Other design objectives are minimization of the eigenvalue sensitivities and of the control signal magnitude. The constraints are the admissible eigenvalue region Ψ,

0.4 ≤ ζs ≤ 0.6, 1.5 ≤ ωs ≤ 3, 0.4 ≤ ζp ≤ 0.6, 0.05 ≤ ωp ≤ 1.3,

along with the special constraints J2 < 8.2883, J4 < 9.1599. The search space is ||K||max ≤ 2. Design the controller.
First note that the search space means -2 ≤ kij ≤ 2, i = 1, 2, j = 1, 2, 3. The decoupling objective can be restated as obtaining a modal matrix V which is as close as possible to

V = [0 0 × ×; 0 0 × ×; × × 0 0; × × 0 0],

where × refers to any value. Thus we formulate J1 as

J1 = σmax([v11 v12; v21 v22]) + σmax([v33 v34; v43 v44]).

The cost function we consider is J = J1 + J2 + J4. We use a genetic algorithm to solve this problem and find

K = [-0.0482 2.4924 1.3110; 0.0539 -5.3501 -5.3700],

which results in J = 16.5442, J1 = 1.5059, J2 = 6.9904, J4 = 8.0479, ζs = 0.5635, ωs = 1.8575, ζp = 0.5999, ωp = 0.6519. The constraints are all met.
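The vectorization step at the heart of Problem 10.31 rests on the identity (BKC)^v = (B ⊗ C^T)K^v for row-wise vectorization; a numerical spot-check (pure Python; the matrix values are arbitrary illustrative choices, not from the book):

```python
# Spot-check of the row-vectorization identity (BKC)^v = (B (x) C^T) K^v.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def rowvec(X):                       # stack the rows into one long vector
    return [x for row in X for x in row]

def kron(X, Y):                      # Kronecker product X (x) Y
    return [[X[i][j] * Y[p][q] for j in range(len(X[0])) for q in range(len(Y[0]))]
            for i in range(len(X)) for p in range(len(Y))]

B = [[1, 2], [0, 1], [3, -1]]          # 3x2 (n = 3, m = 2)
K = [[2, -1, 0], [1, 4, 2]]            # 2x3 (m = 2, p = 3)
C = [[1, 0, 2], [0, 1, 1], [2, 2, 0]]  # 3x3 (p = 3)

lhs = rowvec(matmul(matmul(B, K), C))
M = kron(B, transpose(C))              # (B (x) C^T), size 9 x 6
kv = rowvec(K)
rhs = [sum(M[i][j] * kv[j] for j in range(6)) for i in range(9)]
assert lhs == rhs
```

This is why the solvability of -BKC = A^T + Z reduces to a rank test on [B ⊗ C^T, (A^T + Z)^v].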

10.19 Exercises

More than 150 exercises are offered in the sequel. It is advisable to try as many of them as your time allows.
Exercise 10.1: Why do Butterworth filters have the pole distribution of Fig. 10.3?
Exercise 10.2: Among other filters which have been studied extensively are the comb, Bessel, Gaussian, Legendre, Linkwitz-Riley, etc., filters. How do these filters compare with the Butterworth filter? Consult the pertinent literature for their formulas.
Exercise 10.3: Which of the following is possible? Provide an example: (1) A system shows overshoots both in |S| and |T|. (2) A system does not show overshoots in |S| and |T|. (3) A system shows an overshoot in |S| but not in |T|. (4) A system shows an overshoot in |T| but not in |S|. (5) A system shows more than one overshoot in either of |S|, |T|, or both. (6) Referring to Remark 10.3, ωT < ωS, ωT > ωS, ωT = ωS.
Exercise 10.4: How is the relation S + T = 1 interpreted considering the plots of |S| and |T|? Using MATLAB, superimpose the plots of |S| and |T| (both in plain numbers and in decibels) of the systems you exemplify in Exercise 10.3 and interpret the result. This shows what a typical actual |S| and |T| look like contrasted with the ideal ones of Fig. 10.2.
Exercise 10.5: Plot the |S| and |T| of the systems of Examples 10.19 and 10.20 and Problem 10.10, and interpret them.
Exercise 10.6: Propose a step-, ramp-, and parabola-tracking controller for the following systems using the interpolation conditions. Note that a root locus analysis may help in finding out what transfer function is achievable. For instance, for system (1) a simple analysis shows that with the controllers Cr(s) = k(s + b)/s and Cp(s) = k(s + b)²/s² the transfer functions Tr(s) = k(s + b)/(s + a)² and Tp(s) = k′(s + b)²/[(s + a)²(s + c)] are achievable. Note that the answers are not unique.
1. P(s) = 1/s
2. P(s) = 1/(s - 1)
3. P(s) = (s + 1)/s
4. P(s) = (s - 1)/(s - 2)
Exercise 10.7: Propose a ramp-tracking controller for the system P(s) = (s - 2)/(s - 1) so that the transfer function becomes the logistic function, if possible. Note that if the problem admits a solution then the root locus looks like Fig. 10.32, not drawn to scale. For instance, a "near" controller which results in a "near" root locus and transfer function is C(s) =

K(s + 0.1)²/[s²(s + 35)] with K < 0.
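The hint in Exercise 10.6 for system (1) can be verified by polynomial arithmetic; a minimal sketch (pure Python; the particular choice k = 2a, b = a/2 is ours, one of many):

```python
# Check of the Exercise 10.6 hint for P(s) = 1/s. With Cr(s) = k(s+b)/s the
# loop is L = k(s+b)/s^2, so T = k(s+b)/(s^2 + ks + kb) and
# S = s^2/(s^2 + ks + kb). Choosing k = 2a, b = a/2 gives the claimed
# Tr(s) = k(s+b)/(s+a)^2, and S = s^2/(s+a)^2 has a double zero at s = 0,
# hence ramp tracking. Coefficients are listed highest power first.

def polymul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

a = 3.0
k, b = 2 * a, a / 2
closed_loop_den = [1, k, k * b]                      # s^2 + ks + kb
assert closed_loop_den == polymul([1, a], [1, a])    # equals (s + a)^2
```

With the denominator pinned to (s + a)², both the interpolation conditions and the internal stability of the loop follow directly.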


Figure 10.32 Proposed root locus of Exercise 10.7.
Exercise 10.8: Investigate whether a ramp-tracking controller can be designed for the system P(s) = (s - 1)/(s - 2) so that the desired transfer function becomes the logistic one. Note that the desired root locus looks like Fig. 10.33 and the controller is C(s) = K(s + b)(s + c)(s + d)/[s²(s + p)] with K > 0. Discuss.

Figure 10.33 Desired root locus of Exercise 10.8.
Exercise 10.9: Propose a parabola-tracking controller for the system P(s) = (s - 1)/(s² + 4). Note that the problem admits a solution, e.g., with C(s) = K(s + z)²/s³ with appropriate values of z and K < 0. What does the root locus of the system look like?
Exercise 10.10: Repeat Problem 10.9 for ramp inputs.
Exercise 10.11: Given the plants P1 = (s - 1)/[(s + 1)(s - 5)] and P2 = (s - 5)/[(s + 1)(s - 1)] and the design objectives |S| < 0.3 for ω ∈ [0, 1] and |T| < 0.2 for ω ∈ [8, ∞), find estimates for Smax and Tmax of the systems and interpret them.
Exercise 10.12: Consider Examples 10.11, 10.12, and 10.25. Compute Smax and Tmax for them.
Exercise 10.13: Compare the systems P1,1 = 1/[s(s + 2)], P1,2 = (s - 1)/[s(s + 1)(s + 2)] and P2,1 = (s - 1)/[s(s + 2)], P2,2 = [(2s + 1)/(s(s + 2))] × [(s - 1)/(s + 1)] from the sensitivity point of view.


Exercise 10.14: Write down the frequency-domain and time-domain constraints for the control system design, error signal, control signal, and output signal of the following systems in a negative unity feedback structure. Assume that the controller has one integrator. Repeat the problem when the delay term e⁻ˢ is also included.
1. P(s) = 1/s
2. P(s) = 1/(s - 1)
3. P(s) = (s + 1)/[s(s - 1)]
4. P(s) = (s - 1)/(s - 2)
5. P(s) = (s - 1)/[s(s - 2)]
6. P(s) = s/(s - 1)
7. P(s) = s(s - 1)/[(s + 1)(s - 2)]
8. P(s) = (s² + 1)/[s(s - 1)]
9. P(s) = (s² + 1)/[s²(s² + 4)]
10. P(s) = (s - 1)/[s²(s² + 1)(s - 2)]
11. P(s) = (s² - s + 1)²/[s(s² + 1)²(s - 2)³]
Exercise 10.15: Prove the results of Section 10.7.1 by the techniques of Chapter 4, Time Response. Also prove Theorem 10.12 by direct computation, and then prove the results of Sections 10.7.2-10.7.3 by its utilization.
Exercise 10.16: Consider both positive and negative values for z and p in Theorem 10.13 and interpret the different parts of the theorem.
Exercise 10.17: Consider Theorem 10.13 and its implication on the closed-loop bandwidth that can be and/or must be achieved. What implications does it have on the actuators and sensors?
Exercise 10.18: Consider the systems P1(s) = (s - 2)/(s - 1), P2(s) = (s - 1)/(s - 2), P3(s) = (s + 1)/(s - 1), P4(s) = (s - 1)/(s + 1), P5(s) = (s + 2)/(s + 1), P6(s) = (s + 1)/(s + 2). Propose controllers such that: (1) All the closed-loop poles are located at s = -3. (2) All the closed-loop poles are located at s = -0.5. In each case observe the step response of the system and analyze it in the context of Theorem 10.13.
Exercise 10.19: Consider the system L(s) = C(s)P(s) = 10(s + 1)³/s⁴ in a negative unity feedback structure. Decompose it as C0(s) = 10 and P0(s) = (s + 1)³/s⁴; C1(s) = 10(s + 1)/s and P1(s) = (s + 1)²/s³; C2(s) = 10(s + 1)²/s² and P2(s) = (s + 1)/s²; C3(s) = 10(s + 1)³/s³ and P3(s) = 1/s. Without doing simulations, hand-draw the response of the systems due to step, ramp, and parabola input and output disturbances as precisely as possible.
Exercise 10.20: Reconsider the pair C1, P1 of Exercise 10.19. Suppose a ramp input disturbance and a parabola output disturbance are applied at t = 5 and t = 10 s, respectively. Hand-draw the output of the system.
Exercise 10.21: Repeat Exercise 10.20 for the pair C1, P1 of Example 10.20.


Exercise 10.22: Extend the statement of Theorem 10.13 to the case of multiple poles and zeros.
Exercise 10.23: Provide other interpretations of Theorem 10.13.
Exercise 10.24: What is the implication of Example 10.30 on the control signal?
Exercise 10.25: Reconsider the worked-out Problem 9.6 of Chapter 9, Frequency Domain Synthesis and Design, in particular the bandwidth of the system. Rephrase the observation in the context of the results of Section 10.7.
Exercise 10.26: Extend the results of Section 10.7 to the actual control structure where the sensor dynamics is also considered.
Exercise 10.27: Extend the results of Section 10.7 to the actual 2-DOF control structure.
Exercise 10.28: By simulation, investigate the effect of the inclusion of a sensor (with and without noise) in the worked-out problems of Chapter 9, Frequency Domain Synthesis and Design. Try different sensor speeds.
Exercise 10.29: A designer has designed a controller that, he says, internally stabilizes a system, and it has a pair of pure imaginary zeros. What can be the reason for the aforementioned zeros of the controller? How about pure imaginary poles? Discuss.
Exercise 10.30: The type-1 plants P1, ..., P4 with NMP pole(s) and zero(s) are required to be controlled for step tracking. The designer designs the following controllers for each of them. For each case discuss the philosophy behind the controller terms and the problem setting.
1. C1(s) = C(s) · 1/[s(s - 2)]
2. C2(s) = C(s) · 1/[s²(s + 1)]
3. C3(s) = C(s) · (s - 1)/[s(s - 2)]
4. C4(s) = C(s) · (s² + 1)/[s(s - 2)]
Exercise 10.31: Are the following plants strongly stabilizable?
1. L(s) = (s - 1)/(s - 2)²
2. L(s) = (s - 1)/[s(s - 2)²]
3. L(s) = (s - 1)(s - 3)/[(s - 4)(s² - 4s + 5)]
4. L(s) = (s² - 1)(s - 2)/[(s - 4)²(s² - 6s + 10)]
5. L(s) = (s² - 2s + 2)(s - 2)/(s² - 9)²
6. L(s) = (s² - 2s + 2)(s - 2)/[(s² - 9)²(s - 4)]
7. L(s) = (s² - 2s + 2)(s - 2)/[s(s² - 9)²]
8. L(s) = (s² - 2s + 2)(s - 2)/(s² - 9)³
9. L(s) = (s² - 2s + 2)(s - 4)/(s² - 9)²
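Exercise 10.31 turns on the parity interlacing property (p.i.p.): a plant is strongly stabilizable iff the number of its real ORHP poles lying between every pair of its real ORHP zeros (counting a zero at +∞ for strictly proper plants) is even. A sketch of a checker (pure Python; the function name and the second, failing example are ours, not the book's):

```python
# Parity-interlacing-property check on lists of real ORHP zeros and poles.

def strongly_stabilizable(real_rhp_zeros, real_rhp_poles):
    zs = sorted(real_rhp_zeros)            # include float('inf') if appropriate
    for lo, hi in zip(zs, zs[1:]):
        count = sum(1 for p in real_rhp_poles if lo < p < hi)
        if count % 2:                       # odd number of poles in between
            return False
    return True

# Case 1 of the exercise, L = (s-1)/(s-2)^2: zeros {1, inf}, poles {2, 2}.
assert strongly_stabilizable([1, float('inf')], [2, 2]) is True
# A classic failing example (not from the list), L = (s-1)/(s(s-2)):
# exactly one ORHP pole (s = 2) lies between the zeros at 1 and infinity.
assert strongly_stabilizable([1, float('inf')], [2]) is False
```

Checking consecutive zero pairs suffices, since the pole count between any pair of zeros is the sum of the counts over the consecutive subintervals.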


Exercise 10.32: Investigate the response of Example 10.32 and Problem 10.25 with the actuator authority in the ranges [0, 1] and [0, 30], respectively.
Exercise 10.33: In Chapter 6, Nyquist Plot, we computed the delay margin of many stable systems which become unstable by delay. Similarly, we computed the delay margin of some unstable systems which become stable with delay. Using the SIMULINK environment, investigate and observe this phenomenon for them.
Exercise 10.34: This problem has two parts. (1) Using MATLAB, observe the effect of the sensor dynamics on the control signal of some typical systems, like that of Example 10.33, and theoretically supplement your observation. What is the message? (2) Using the SIMULINK environment, observe that the frequency of the sensor noise does affect the performance of a linear system. Note that this observation actually has a theoretical explanation which you may study in graduate studies.
Exercise 10.35: This exercise has some parts. (1) Derive the state-space formulation of a control system for the case that a dynamic controller acts on the reference input as a prefilter, in addition to the forward-path controller, i.e., a 2-DOF control system. (2) Repeat the problem for the 1-DOF structure with the controller in the feedback path. (3) What if the plant is proper? Note that this part needs some advanced knowledge of matrix theory. Try it!
Exercise 10.36: This exercise concerns the problem of quadruple partitioned matrix eigenvalue assignment. More precisely, consider the matrix A = [A1 A2; A3 A4], where A1 ∈ R^(p×r), A2 ∈ R^(p×s), A3 ∈ R^(q×r), A4 ∈ R^(q×s), and A ∈ R^(n×n). Using the approach of Section 10.10.2, discuss the stabilization and eigenvalue assignment of the matrix A via either of its partitions Ai, i = 1, ..., 4. The problem is adopted from (Bavafa-Toosi, 2000).
Exercise 10.37: How should we address the design of type-2 and type-3 tracking systems in Section 10.10.2?
Exercise 10.38: Reconsider Example 10.35. Investigate that the condition of Theorem 10.20 is not satisfied for Λ1 = {-1, -1} or Λ2 = {-1, -2}, but for Λ3 = {-1, -6} it is satisfied.
Exercise 10.39: Reconsider Example 10.35 and Remark 10.14. Investigate that for pole assignment by state feedback for Λ1 = {-1, -1}, Λ2 = {-1, -2}, and Λ3 = {-1, -6} we should have K′1 = [3.25 -4.25], K′2 = [3 -3], and K′3 = [2 2]. Only in the third case is the decomposition K′ = KC possible.
Exercise 10.40: Consider the system ẋ = Ax + Bu, y = Cx as given in the following six cases, and the static linear controller u = Ky. (1) With the help of Theorem 10.20, design the controller such that the closed-loop poles of the system are at Λa = {-1, -2, -3} or Λb = {-1, -1, -1}. (2) With the help of optimization, design the controller such that the closed-loop system is stable and the condition number is minimized.

1. A = [0 4 1; 0 -3 1; 0 0 2], B = [0; 0; 1], C1 = [1 -1 1], C2 = [1 0 0; 0 1 1]
2. A = [1 2 -1; 0 3 1; 0 0 -2], B = [1; 0; 0], C1 = [1 -1 1], C2 = [1 0 0; 0 1 1]
3. A = [1 2 0; 0 -4 3; 2 0 5], B = [1 0; 1 0; 1 1], C1 = [1 -1 1], C2 = [1 0 0; 0 1 1]


Exercise 10.41: Consider Sections 10.10 and 10.12. Establish a lower bound on the achievable cond(V), depending on A, B, C, and Λ = {λ1, ..., λn}, where Λ denotes the set of desired eigenvalues and V is the modal matrix of the closed-loop system. Note that the case of state feedback is addressed on pages 1137-1140 of (Kautsky et al., 1985).
Exercise 10.42: Consider the system ẋ = Ax + Bu, y = Cx, u = Ky, whose parameters are given below. With the help of Theorem 10.20 or 10.21, investigate that the static output feedback controller which assigns the closed-loop poles at Λ = {-0.5, -2, -2.5, -3, -3.5, -4} is uniquely given as follows.

A = [0 1 0 0 0 0; 0 0 1 0 0 0; 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1; 0 0 0 0 0 0],
B = [1 0 0; 1 0 0; 0 1 0; 0 1 0; 0 0 1; 0 0 1],
C = [1 0 0 0 0 0; 0 1 0 0 0 0],
K = [-3.25 -12.25; -1474/180 -93; -13439/180 -290.75].
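The claim of Exercise 10.42 can be verified by forming A + BKC and computing its characteristic polynomial exactly; a sketch (pure Python; it confirms that the six closed-loop poles are {-0.5, -2, -2.5, -3, -3.5, -4}):

```python
# Exact verification of Exercise 10.42 with rational arithmetic.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def charpoly(M):
    # Faddeev-LeVerrier: coefficients of det(sI - M), leading coefficient first
    n = len(M)
    coeffs = [F(1)]
    N = [[F(int(i == j)) for j in range(n)] for i in range(n)]  # identity
    for k in range(1, n + 1):
        MN = matmul(M, N)
        c = -sum(MN[i][i] for i in range(n)) / k
        coeffs.append(c)
        N = [[MN[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]
    return coeffs

A = [[0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0]]
B = [[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]]
C = [[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]]
K = [[F(-13, 4), F(-49, 4)],
     [F(-1474, 180), F(-93)],
     [F(-13439, 180), F(-1163, 4)]]

BKC = matmul(matmul(B, K), C)
Acl = [[F(A[i][j]) + BKC[i][j] for j in range(6)] for i in range(6)]

# (s + 1/2)(s + 2)(s + 5/2)(s + 3)(s + 7/2)(s + 4)
expected = [F(1), F(31, 2), F(385, 4), F(2425, 8), F(4007, 8), F(1583, 4), F(105)]
assert charpoly(Acl) == expected
```

Note that, for this six-state system, the assigned spectrum must contain six values; the characteristic polynomial above factors exactly into the six real poles listed in the exercise.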

Exercise 10.43: Reconsider Example 10.36. In twelve separate designs, find the output controller K and the precompensator F such that the twelve decoupling indices of Section 10.12 are satisfied separately and independently of each other.
Exercise 10.44: Repeat Exercise 10.43 for the systems of Exercise 10.40. Note that in some cases the problem does not admit a solution. See also Exercise 10.45.
Exercise 10.45: This exercise has two parts. (1) For the systems of Exercise 10.40, with the help of optimization, design the controller u = Ky such that the closed-loop system is stable and either of the individual decoupling indices introduced in Section 10.12 is minimized. (2) Establish a theoretical bound on the achievable decoupling index for either of the indices we defined in the text.
Exercise 10.46: Extend the theorems of Sections 10.10 and 10.11 to the case of multiple eigenvalues.
Exercise 10.47: Design the controller u = Ky such that the closed-loop systems of Exercise 10.40 have minimum eigenvalue sensitivity to the structured additive perturbations given by
1. E = [0 0 0; 0 0 0.4; 0 0 0]
2. E = [-1.2 0 0; 0 0 0.3; 0 0 0]
3. E = [0 0 0; 0.3 -0.5 0.1; 0 0 0]

Exercise 10.48: Some norms may be interpreted as "length." On the other hand, some norms come from an inner product ⟨·,·⟩ by the formula ||x|| = √⟨x, x⟩ and some do not. Provide some examples and explanations.
Exercise 10.49: This problem has some parts. (1) In R^(n×n), is the trace a norm? (2) In R^(n×n), is max_i |λi| a norm? (3) In R^(n×n), is min_i |λi| a norm? (4) In R^(m×n), is the rank a norm? (5) In R^(m×n), is σmax a norm?
Exercise 10.50: In infinite-dimensional spaces and different metrics and spaces the norm has special properties and needs special treatment. These are outside the scope of this book. For our simple case of C^n, by definition the p-norm of the vector x is ||x||_p = (Σ_{i=1}^n |xi|^p)^(1/p). Show that ||x + y||_p ≤ ||x||_p + ||y||_p, the so-called Minkowski^18 inequality. Moreover, if p > q > 0, then ||x||_p ≤ ||x||_q ≤ n^(1/q - 1/p) ||x||_p. In particular, ||x||_2 ≤ ||x||_1 ≤ √n ||x||_2, ||x||_∞ ≤ ||x||_2 ≤ √n ||x||_∞, and ||x||_∞ ≤ ||x||_1 ≤ n ||x||_∞.
Exercise 10.51: For the matrix X the induced p-norm is defined as ||X||_p := max_{u≠0} ||Xu||_p/||u||_p. Show that the 1, 2, and infinity norms are as defined and specified in the text.
Exercise 10.52: The Schatten p-norm^19 of a matrix is defined as ||X||_Sp := (Σ_{i=1}^{min{m,n}} σi^p)^(1/p). Show that it is a norm. It is good to know that for p = 1 it is called the nuclear norm and is denoted by ||X||_*. It plays a pivotal role, e.g., in matrix norm minimization, matrix completion, and sparse data analysis.
Exercise 10.53: Let r := r(X) denote the rank of X ∈ R^(m×n) and U1, U2 be unitary. Prove the following.
1. ||X||_2² ≤ ||X||_1 ||X||_∞
2. ||X||_2 = ||U1 X U2||_2
3. (1/√m) ||X||_1 ≤ ||X||_2 ≤ √n ||X||_1
4. (1/√n) ||X||_∞ ≤ ||X||_2 ≤ √m ||X||_∞
5. ||X||_max ≤ ||X||_2 ≤ √(mn) ||X||_max
6. ||X||_2 ≤ ||X||_F ≤ √r ||X||_2
7. ||X||_F² = Σi σi² = trace(X*X)
8. ||X||_F = ||U1 X U2||_F
9. ||X||_F ≤ ||X||_* ≤ √r ||X||_F
10. max{||X||_F, ||X||_1, ||X||_∞, ||X||_max} ≤ ||X||_sum
11. ρ(X) := max_i |λi| ≤ ||X||_i
12. ρ(X) ≤ ρ(|X|)
13. σmin(X) ≤ |λi(X)| ≤ σmax(X)
14. max{σmax(X)σmin(Y), σmin(X)σmax(Y)} ≤ σmax(XY) ≤ σmax(X)σmax(Y)
15. σmin(X)σmin(Y) ≤ σmin(XY)
16. σi(X) - σmax(Y) ≤ σi(X + Y) ≤ σi(X) + σmax(Y)
17. |σmax(X) - σmax(Y)| ≤ σmax(X + Y) ≤ σmax(X) + σmax(Y)
18. σmin(X) - σmax(Y) ≤ σmin(X + Y) ≤ σmin(X) + σmax(Y)
19. r(X) = r(XY) = r(YX), Y full rank
20. r(X) + r(Y) - m ≤ r(XY) ≤ min{r(X), r(Y)}, X ∈ R^(l×m), Y ∈ R^(m×n)
Note that inequalities (6) and (9) are a fortiori valid if r is substituted with either of m or n. Also note that ρ(X) is called the spectral radius of X, and in (11) ||X||_i is any induced norm. Finally, ρ(|X|) is called the Perron^20 root or the Perron-Frobenius eigenvalue of X.
Exercise 10.54: The L_{p,q} norm (for positive integers p and q) of a matrix X is defined as ||X||_{p,q} := [Σ_{j=1}^n (Σ_{i=1}^m |xij|^p)^{q/p}]^(1/q). Show that it is a norm. Note that for p = q = 2 it

18

Named after Hermann Minkowski (1864-1909), a prominent Lithuanian-born mathematician who moved to Germany in 1872. He is most noticeably known for his work on relativity: he showed that Albert Einstein's theory of relativity can be geometrically interpreted as the theory of four-dimensional spacetime, the so-called Minkowski spacetime. Among his other works are pioneering contributions to the development of the geometry of numbers. M-matrices are also named after him.
19 Named after Robert Schatten (1911-77), a Polish mathematician who emigrated to the US in the 1930s.
20 Named after Oskar Perron (1880-1975), a German mathematician.


reduces to the Frobenius norm.^21 The L_{2,1} norm has found applications, e.g., in robust data analysis and sparse coding theory.
Exercise 10.55: For the matrix A ∈ R^(2×2) let s := λ1 + λ2 and p := λ1 λ2 be the sum and product of the eigenvalues. Show that σmin(A) = √((s - √(s² - 4p))/2) and σmax(A) = √((s + √(s² - 4p))/2).
Exercise 10.56: Consider the matrix A all of whose entries are positive, aij > 0. (1) Prove that it has a unique dominant eigenvalue which is equal to its spectral radius. (2) Prove that its associated right eigenvector is positive. (3) What can be said about its left eigenvector?
Exercise 10.57: Prove the positivity condition of SISO LTI systems as given in the text. Is it valid also for MIMO systems?
Exercise 10.58: Consider a stable positive SISO LTI system. (1) Prove that its largest pole, which is called the Frobenius eigenvalue of the system, is real and unique, i.e., of geometric and algebraic multiplicity one. (2) Prove that its zeros are smaller than its largest pole.
Exercise 10.59: Consider the positive LTI system ẋ = Ax + Bu, y = Cx. Denote by ci and ri the sums of the elements of the ith column and row of A, respectively. (1) Prove that the Frobenius eigenvalue of A, denoted by λF, satisfies max{min_i ci, min_i ri} ≤ λF ≤ min{max_i ci, max_i ri}. (2) Prove that the associated right eigenvector, called the Frobenius eigenvector vF, is nonnegative, vF ≥ 0, and can be strictly positive, vF > 0, and that this is true only for this eigenvector and not for the other eigenvectors.
Exercise 10.60: Consider the fundamental limitation condition of Exercise 3.14, which is for general LTI systems. Prove that for the positive LTI system ẋ = Ax + Bu, y = Cx that condition reduces to: if ∃i: aii > 0 then the system is unstable. That is, a sufficient condition for instability is positivity of a diagonal element. Note that an equivalent statement is that a necessary condition for stability is negativity of all the diagonal elements.
Exercise 10.61: Derive the conditions for positivity of the circuits given in Fig. 10.34.
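Several of the vector-norm facts of Exercises 10.50 and 10.53 can be spot-checked numerically; a sketch (pure Python; the sample vectors are arbitrary):

```python
# Spot-checks of the Minkowski inequality and the p-norm equivalence bounds.
import math

def norm(x, p):
    if p == math.inf:
        return max(abs(v) for v in x)
    return sum(abs(v) ** p for v in x) ** (1 / p)

x, y = [3.0, -4.0, 1.0], [-1.0, 2.0, 2.0]
n = len(x)

# Minkowski inequality ||x + y||_p <= ||x||_p + ||y||_p for several p
for p in (1, 2, 3, math.inf):
    s = [a + b for a, b in zip(x, y)]
    assert norm(s, p) <= norm(x, p) + norm(y, p) + 1e-12

# Equivalence bounds of Exercise 10.50
assert norm(x, 2) <= norm(x, 1) <= math.sqrt(n) * norm(x, 2) + 1e-12
assert norm(x, math.inf) <= norm(x, 2) <= math.sqrt(n) * norm(x, math.inf) + 1e-12
assert norm(x, math.inf) <= norm(x, 1) <= n * norm(x, math.inf) + 1e-12
```

Such quick numerical experiments are a useful sanity net before attempting the formal proofs the exercises ask for.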

Figure 10.34 Circuits of Exercise 10.61.
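The row/column-sum bounds of Exercise 10.59(1) can be illustrated numerically; a sketch (pure Python; the positive matrix and the use of power iteration for the dominant eigenvalue are our choices for illustration):

```python
# Frobenius (Perron) eigenvalue of a positive matrix: bracketing by the
# row- and column-sum bounds of Exercise 10.59.

A = [[1.0, 2.0, 0.5], [0.3, 1.0, 1.0], [2.0, 0.1, 1.0]]
n = len(A)

v = [1.0] * n
for _ in range(200):                     # power iteration
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    m = max(w)
    v = [x / m for x in w]
lam = m                                  # ~ the Frobenius eigenvalue

rows = [sum(A[i][j] for j in range(n)) for i in range(n)]
cols = [sum(A[i][j] for i in range(n)) for j in range(n)]
assert max(min(rows), min(cols)) - 1e-6 <= lam <= min(max(rows), max(cols)) + 1e-6
assert all(x > 0 for x in v)             # the Frobenius eigenvector is positive
```

For this matrix the bounds bracket λF between 2.5 and 3.3, and the limiting eigenvector is strictly positive, as part (2) of the exercise asserts.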

Exercise 10.62: Consider the control structures of Section 4.12 in which the feedback gain is different from 1. Develop the fundamental limitations for those structures.
Exercise 10.63: Consider the theorems of Section 10.13. How do they compare with the Kharitonov theorem of Chapter 3, Stability Analysis?
Exercise 10.64: Consider the DDR system of Chapter 2, System Representation. Analyze the fundamental limitations of this system.
Exercise 10.65: The exact dynamic equations of some airplanes are quite complicated. They may exhibit the three undesirable features of NMP zero, NMP pole, and delay in a

It is good to know that the Frobenius norm is also known as the Hilbert-Schmidt norm in the context of operators in mathematics. David Hilbert (1862-1943), a German mathematician, and Erhard Schmidt (1876-1959), a Baltic German mathematician, are two of the most influential mathematicians of all time.


certain flight condition (e.g., depending on the speed or angle of attack) and some of these three undesirable features in other conditions. However, intrinsically unstable airplanes show higher performance. (1) The reader of mechanical engineering is encouraged to provide such a comprehensive model along with a precise analysis. (2) Choose a typical model from the respective literature and perform a fundamental limitations analysis on it. Some typical references are (Nagatani, 2013; Siepenkotter and Alles, 2005; Sri Namachchivaya and Ariaratnam, 1986; Yang et al., 2012). Also recall that a VTOL airplane model is offered in Exercise 2.14 of Chapter 2, System Representation.
Exercise 10.66: Various systems with NMP pole/zero are mentioned in item 15 of the further readings of Chapter 2, System Representation. Perform a fundamental limitations analysis on them.
Exercise 10.67: One may think that with the help of lead control it is possible to nullify the effect of delay and thus that the ideal transfer function T(s) ≈ 1 (instead of T(s) ≈ e^(-sT), referring to Section 10.9) is achievable, at least in certain control structures. Discuss the correctness of this thought.
Exercise 10.68: Consider the following systems. (1) Characterize the achievable performance in as much detail as possible with the simplest step-tracking controllers for them. (2) Repeat the problem when a delay term e^(-Ts) is also included. (3) Propose more sophisticated controllers so as to improve the performance and characterize the achievable performance. (4) In particular, consider z = 1, p = 2, T = 0.2, a = 3 and z = 2, p = 1, T = 0.2, a = 3 and compare the results for each plant.
1. P(s) = (s + z)/(s + p)
2. P(s) = (s + z)/(s - p)
3. P(s) = (s - z)/(s + p)
4. P(s) = (s - z)/(s - p)
5. P(s) = (s - z)/[(s - p)(s + a)]
Exercise 10.69: Consider formulae (10.4) and (10.5) of Section 10.10. A similar formula can be written for any other signal around the loop in any control structure. Such a formula shows the role of the eigenstructure in the time values of that signal. In particular, this can be used to establish results in "constrained control" problems where there are constraints on some signals, like the error signal, control signal, output signal, and states. The case of "partial" eigenstructure assignment by state feedback and constraints on the control signal is discussed in (Benzaouia and Hmamed, 1993; Maarouf, 2016, 2017) and the references therein via the Sylvester equation. (1) Extend the results of the aforementioned references to the case of constraints on other signals. (2) Extend the results to the output feedback case.
Exercise 10.70: Consider the results of this chapter. Express them in a graph-theoretic setting, especially Sections 10.5, 10.10, 10.11, and 10.14.
Exercise 10.71: Consider the standard 1-DOF control structure and the associated notation. What is the relation between σmax(CS) and σmin(P(0))? Discuss.
Exercise 10.72: Consider Example 10.13. How can we extend the approach to multiple NMP poles and/or zeros?
Exercise 10.73: Consider a control system with an LTI plant affected by an input disturbance, an output disturbance, and noise. Assume that all the wanted/unwanted inputs are standard signals, i.e., step, ramp, parabola, sinusoid. (1) Can you think of an LTI control structure to reject all three unwanted inputs and track the reference input? (2) Does an internally unstable design help? (3) What if we include delay in the forward path? (4) What if we include delay in the feedback path? (5) Does a noncausal controller help?
Exercise 10.74: This problem is about delay systems and has some parts. (1) Consider a control system whose loop gain includes a delay term in its numerator as an additive


term. Explain the situation and the fundamental limitations it causes. (2) Repeat part (1) when the delay is in the denominator.
Exercise 10.75: Obviously we cannot have TV < Tmax by way of an output like the one provided in Fig. 10.35. But does that help in reducing the TV?

Figure 10.35 A possible step response for the analysis of the TV.
Exercise 10.76: Certain systems necessarily show an overshoot and undershoot, with the explanation that the overshoot may be pushed below the final value as in Fig. 10.35. Explain for what systems it may or may not be possible.
Exercise 10.77: This exercise has several parts. (1) In three typical systems the controller has the Bode magnitude diagram as shown in Fig. 10.36. Explain the situations as much as you can and the philosophy behind the shape of the curve at the designated frequency regions. (2) Provide typical values for the point on the vertical axis. (3) Explain the general shape of the magnitude of a controller in the standard 1-DOF structure. Discuss versus the type of the plant and different frequency regions. (4) Repeat Part (3) when the controller is used in the feedback path. (5) Repeat Part (3) when we have a 2-DOF or 3-DOF control structure.

Figure 10.36 Some typical controller magnitude diagrams.

Exercise 10.78: Given a typical system like L(s) = 1/(s + 1) or L(s) = 1/(s − 1). (1) Design a controller such that the closed-loop system becomes a band-pass filter/system (for a typical frequency or frequency band). (2) Repeat Part (1) for a high-pass, band-stop, high-stop, low-band-pass, and band-high-pass filter/system. (3) How does the controller magnitude diagram look in each case? (4) Repeat and discuss Part (3) when the original plant is arbitrary.
Exercise 10.79: What is the importance of phase in a filter/system? Discuss in as much detail as you can.
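As context for Exercise 10.78 (this is not a solution), note that with L(s) = 1/(s + 1) in the standard 1-DOF unity-feedback loop, the complementary sensitivity is T(s) = L/(1 + L) = 1/(s + 2), i.e., the nominal closed loop is a low-pass filter; the exercise asks for controllers reshaping this into band-pass, high-pass, etc. A quick numeric sketch (sample frequencies are arbitrary):

```python
# Context sketch for Exercise 10.78: closed-loop frequency response magnitude
# |T(jw)| for the uncompensated unity-feedback loop with L(s) = 1/(s + 1).

def T(s):
    L = 1.0 / (s + 1.0)
    return L / (1.0 + L)          # T(s) = 1/(s + 2)

# magnitude at a low, a mid, and a high frequency (rad/s)
mags = [abs(T(complex(0.0, w))) for w in (0.01, 1.0, 100.0)]
print(mags)  # magnitude falls off with frequency: low-pass behavior
```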

Fundamental limitations


References
Abad Torres, J., Roy, S., 2015. Graph-theoretic analysis of network input–output processes: zero structure and its implications on remote feedback control. Automatica. 61 (11), 73–79.
Aguiar, A.P., Hespanha, J.P., Kokotovic, P.V., 2008. Performance limitations in reference tracking and path following for nonlinear systems. Automatica. 43 (3), 598–610.
Ahmadizadeh, S., Shames, I., Martin, S., Nesic, D., 2017. On eigenvalues of Laplacian matrix for a class of directed signed graphs. Linear Algebra Its Appl. 523, 281–306.
Akbari, S., Ghodrati, A.H., Hosseinzadeh, M.A., 2017. Imprimitivity index of the adjacency matrix of digraphs. Linear Algebra Its Appl. 517, 1–10.
Amster, P., Idels, L., 2016. New applications of M-matrix methods to stability of high-order linear delayed equations. Appl. Math. Lett. 54, 1–6.
Arefi, M.M., Jahed-Motlagh, M.R., Karimi, H.R., 2015. Adaptive neural stabilizing controller for a class of mismatched uncertain nonlinear systems by state and output feedback. IEEE Trans. Cybern. 45 (8), 1587–1596.
Askarpour, S., Owens, T.J., 1999. Eigenstructure assignment by output feedback: the case of common open- and closed-loop characteristic vectors. IEE Proc. Control Theory Appl. 146, 37–40.
Astrom, K.J., Klein, R.E., Lennartsson, A., 2005. Bicycle dynamics and control. IEEE Control Syst. Mag. 25 (4), 26–47.
Badri, V., Tavazoei, M.S., 2015. Achievable performance region for a fractional order proportional and derivative (FOPD) motion controller. IEEE Trans. Ind. Electron. 62 (11), 7171–7180.
Bamieh, B., Jovanovic, M., Mitra, P., 2012. Coherence in large-scale networks: dimension-dependent limitations of local feedback. IEEE Trans. Autom. Control. 57 (9), 2235–2249.
Baniamerian, A., Meskin, N., Khorasani, K., 2013. Geometric fault detection and isolation of two-dimensional (2D) systems. In: Proceedings of the American Control Conference. Washington, DC, pp. 3541–3548.
Bavafa-Toosi, Y., 2000. Eigenstructure Assignment by Output Feedback, MEng Thesis (Control).
Department of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran. Bavafa-Toosi, Y., 2006. Decentralized Adaptive Control of Large-Scale Systems, PhD Dissertation (Systems and Control). School of Integrated design Engineering, Keio University, Yokohama, Japan. Bavafa-Toosi, Y., 2016. On the theory of flexible neural networks  Part I: A survey paper. Int. J. Syst. Sci. (available online). Bavafa-Toosi, Y., Ohmori, H., Labibi, B., 2005. Failure-tolerant performance stabilization of the generic large-scale system by decentralized linear output feedback. ISA Trans. 44 (4), 501513. Bavafa-Toosi, Y., Ohmori, H., Labibi, B., 2006. A generic approach to the design of decentralized linear output-feedback controllers. Syst. Control Lett. 55, 282292. Bazaei, A., Moallem, M., 2012. A fundamental limitation on the control of end-point force in a very flexible single-link arm. J. Vibr. Control. 18 (8), 12071220. Benzaouia, A., Hmamed, A., 1993. Regulator problem for linear continuous-time systems with nonsymmetrical constrained control. IEEE Trans. Autom. Control. 38, 15561560. Berman, A., Plemmons, R., 1994. Nonnegative Matrices in the Mathematical Sciences. SIAM, Philadelphia. Bigelow, S.C., 1958. The design of analog computer compensated control systems. AIEE Trans. 77, 409415.


Blumthaler, I., Oberst, U., 2012. Design, parameterization, and pole placement of stabilizing output feedback compensators via injective cogenerator quotient signal modules. Linear Algebra Appl. 436 (5), 9631000. Bode, H.W., 1945. Network Analysis and Feedback Amplifier Design. Van Nostrand. Bokharaie, V.S., Mason, O., Verwoerd, M., 2011a. Correction to D-stability and delay-independent stability of homogeneous cooperative systems. IEEE Trans. Autom. Control. 56 (6), 1489. Bokharaie, V.S., Mason, O., Verwoerd, M., 2011b. Stability and positivity of equilibria for subhomogeneous cooperative systems. Nonlinear Anal. 74, 64166426. Bollhofer, M., 2015. Algebraic preconditioning approaches and their applications. In: Benner, P., et al., (Eds.), Numerical Algebra, Matrix Theory, Differential-Algebraic Equations and Control Theory. Springer, Berlin, pp. 257295. Borm, S., Mehl, C., 2012. Numerical Methods for Eigenvalue Problems. De Gruyter, Berlin. Boyd, S., Barratt, C., 1991. Linear Controller Design—Limits of Performance. Prentice Hall, Englewood Cliffs, NJ. Braatz, R.D., Morari, M., 1994. Minimizing the euclidean condition number. SIAM J. Control Optim. 32, 17631768. Brandts, J., Cihangir, A., 2016. Geometric aspects of the symmetric inverse M-matrix problem. Linear Algebra Appl. 506, 3381. Bunse-Gerstner, A., Byers, R., Mehrmann, V., Nichols, N.K., 1999. Feedback design for regularizing descriptor systems. Linear Algebra Its Appl. 299 (1-3), 119151. Butterworth, S., 1930. On the theory of filter amplifiers. Wireless Engineer (also called Experimental Wireless and the Radio Engineer). 7 (85), 536541. Chakravorty, J., Mahajan, A., 2017. Fundamental limits of remote estimation of autoregressive Markov processes under communication constraints. IEEE Trans. Autom. Control. 62 (3), 11091125. Chen, J., 1995. Sensitivity integral relations and design trade-offs in linear multivariable feedback systems. IEEE Trans. Autom. Control. 40 (10), 17001716. 
Chouaib, I., Pradin, B., 1994. Performance robustness with eigenspace assignment. In: Proceedings of the IEE Int. Conf. on Control, pp. 11641169. Chu, D., Mehrmann, V., Nichols, N.K., 1999. Minimum norm regularization of descriptor systems by output feedback. Linear Algebra Appl. 296, 3977. Daasch, A., Schultalbers, M., Svaricek, F., 2016. Structural non-minimum phase systems. In: Proceedings of the American Control Conference, pp. 37583763. Dahleh, M.A., 2009. Fundamental limitations of networked decision systems. In: Proceedings of the Asian Control Conference, pp. 89. de Fornel, B., Louis, J.-P., 2010. Electrical Actuators: Applications and Performance. WileyISTE. Dehghani, M., Kian, M., Seo, Y., 2017. Matrix power means and the information monotonicity. Linear Algebra Its Appl. 521, 5769. Demir, M.M., Tiwari, A., 2014. Advanced Sensor and Detection Materials. Wiley-Scrivener. Devroye, N., Vu, M., Tarokh, V., 2008. Cognitive radio networks: Highlights of information theoretic limits, models, and design. IEEE Sig. Process. Mag. 26 (6), 1223. Doyle, J.C., 1982. Analysis of feedback systems with structured uncertainties. IEE Proc. D. 129 (6), 242250. Emami, F., Hopke, P.K., 2017. Effect of adding variables on rotational ambiguity in positive matrix factorization solutions. Chemomet. Intell. Lab. Syst. 162, 198202. Esfandiari, K., Abdollahi, F., Talebi, H., 2014. Adaptive control of uncertain nonaffine nonlinear systems with input saturation using neural networks. IEEE Trans. Neural Netw. Learn. Syst. 26 (10), 23112322.


Eslami, M., Nobakhti, A., 2016. Integrity of LTI time-delay systems. IEEE Trans. Autom. Control. 61 (2), 562–567.
Esna Ashari, A., 2005. Robust eigenstructure assignment: new approach. In: Proceedings of the 16th IFAC Triennial World Congress. Prague. pp. 136–141.
Fang, S., Chen, J., Ishii, H., 2017. Design constraints and limits of networked feedback in disturbance attenuation: an information-theoretic analysis. Automatica. 79 (5), 65–77.
Fang, S., Ishii, H., Chen, J., 2014. Control over additive white Gaussian noise channels: Bode-type integrals, channel blurredness, negentropy rate, and beyond. IFAC Proc. 47 (3), 3770–3775.
Farina, L., Rinaldi, S., 2011. Positive Linear Systems: Theory and Applications. John Wiley & Sons, New York.
Fazel, M., 2002. Matrix Rank Minimization with Applications. Department of Electrical Engineering, Stanford University.
Fazel, M., Pong, T.K., Sun, D., Tseng, P., 2013. Hankel matrix rank minimization with applications in system identification and realization. SIAM J. Matrix Anal. Appl. 34 (3), 946–977.
Feyzmahdavian, H.R., Charalambous, T., Johansson, M., 2013. On the rate of convergence of continuous-time linear positive systems with heterogeneous time-varying delays. In: Proceedings of the European Control Conference, pp. 3372–3377.
Fletcher, R., 2000. Practical Methods of Optimization. 2nd ed. John Wiley & Sons.
Francis, B.A., Zames, G., 1984. On H∞-optimal sensitivity theory for SISO feedback systems. IEEE Trans. Autom. Control. 29 (1), 9–16.
Freudenberg, J.S., Looze, D.P., 1985. Right half plane poles and zeros and design tradeoffs in feedback systems. IEEE Trans. Autom. Control. 30 (6), 555–565.
Freudenberg, J.S., Looze, D.P., 1988. Frequency Domain Properties of Scalar and Multivariable Feedback Systems. Springer-Verlag, New York.
Freudenberg, J.S., Hollot, C.V., Middleton, R.H., Toochinda, V., 2003. Fundamental design limitations of the general control configuration. IEEE Trans. Autom. Control. 48 (8), 1355–1370.
Garcia, O.P., Zakharov, A., Jämsä-Jounela, S.-L., 2017. Data and reliability characterization strategy for automatic detection of valve stiction in control loops. IEEE Trans. Control Syst. Tech. 25 (3), 769–781.
Garpinger, O., Hagglund, T., Astrom, K.J., 2014. Performance and robustness tradeoffs in PID control. J. Process Control. 24 (5), 568–577.
Ghane-Sasansaraee, H., Menhaj, M.B., 2013. Eigenstructure-based analysis for nonlinear autonomous systems. IMA J. Math. Control Inform. 3 (3), 1–23.
Goodwin, G.C., 2016. Personal communications with Y. Bavafa-Toosi.
Goodwin, G.C., Woodyatt, A., Middleton, R.H., Shim, J., 1999. Fundamental limitations due to jω-axis zeros in SISO systems. Automatica. 35, 857–863.
Goodwin, G.C., Medioli, A.M., Carrasco, D.S., King, B.R., Fu, Y., 2015. A fundamental control limitation for linear positive systems with application to type 1 diabetes treatment. Automatica. 55, 73–77.
Guan, Z.-H., Wang, B., Ding, L., 2014. Modified tracking performance limitations of unstable linear SIMO feedback control systems. Automatica. 50 (1), 262–267.
Guiver, C., Hodgson, D., Townley, S., 2016. A note on the eigenvectors of perturbed matrices with applications to linear positive systems. Linear Algebra Appl. 509, 143–167.
Halpern, M.E., Evans, R.J., 2000. Minimization of the Bode sensitivity integral. Syst. Control Lett. 40 (3), 191–195.
Hashtrudi-Zad, S., Massoumnia, M.-A., 1999. Generic solvability of the failure detection and identification problem. Automatica. 35 (5), 887–893.


Hassani Monfared, K., 2017. Existence of a not necessarily symmetric matrix with given distinct eigenvalues and graph. Linear Algeb. Its Appl. 527, 1–11.
He, C., Laub, A.J., Mehrmann, V., 1995. Placing plenty of poles is pretty preposterous. Preprint 95-17, DFG-Forschergruppe Scientific Parallel Computing, Fak. f. Mathematik, TU-Chemnitz, FRG.
Hill, R.D., Halpern, M., 1993. Minimum overshoot design for SISO discrete-time systems. IEEE Trans. Autom. Control. 38 (1), 155–158.
Hill, R.D., Eberhard, A.C., Wenczel, R.B., Halpern, M.E., 2002. Fundamental limitations on the time-domain shaping of response to a fixed input. IEEE Trans. Autom. Control. 47 (7), 1078–1090.
Horowitz, I.M., 1963. Synthesis of Feedback Systems. Academic Press, New York.
Horn, R.A., Johnson, C.R., 2012. Matrix Analysis. 2nd ed. Cambridge University Press.
Hunnekens, B.G.B., van de Wouw, N., Nesic, D., 2016. Overcoming a fundamental time-domain performance limitation by nonlinear control. Automatica. 67, 277–281.
Huyer, W., Neumaier, A., 2003. A new exact penalty function. SIAM J. Optim. 13 (4), 1141–1158.
Iglesias, P., 2002. Logarithmic integrals and system dynamics: an analogue of Bode's sensitivity integral for continuous-time, time-varying systems. Linear Algebra Appl. 343–344, 451–471.
Ito, H., Dashkovskiy, S., Wirth, F., 2012. Capability and limitation of max- and sum-type construction of Lyapunov functions for networks of iISS systems. Automatica. 48 (6), 1197–1204.
Ito, H., Ohmori, H., Sano, A., 1993. Design of stable controllers attaining low H∞ weighted sensitivity. IEEE Trans. Autom. Control. 38 (3), 485–488.
Jalaleddini, K., Moezzi, K., Aghdam, A.G., Alasti, M., Tarokh, V., 2013. Rate assignment in wireless networks: Stability analysis and controller design. IEEE Trans. Control Syst. Technol. 21 (2), 521–529.
Jiang, X.-W., Guan, Z.-H., Yuan, F.-S., Zhang, S.-H., 2014. Performance limitations in the tracking and regulation problem for discrete-time systems. ISA Trans. 53 (2), 251–257.
Kalman, R.E., 1960. On the general theory of control systems. In: Proceedings of the First International Congress of Automatic Control (of IFAC), Moscow.
Karow, M., 2011. Structured pseudospectra for small perturbations. SIAM J. Matrix Anal. Appl. 32 (4), 1383–1398.
Kautsky, J., Nichols, N.K., Van Dooren, P., 1985. Robust pole assignment in linear state feedback. Int. J. Control. 41 (5), 1129–1155.
Khaki-Sedigh, A., Bavafa-Toosi, Y., 2001. Design of static linear multivariable output feedback controllers using random optimization techniques. J. Intelligent Fuzzy Syst. 10 (3, 4), 185–195.
Khaki-Sedigh, A., Lucas, C., 2000. Optimal design of robust QFT controllers using random optimization techniques. Int. J. Syst. Sci. 31 (8), 1043–1052.
Khorasani, K., Pai, M.A., 1985. Stability of singularly perturbed systems and parasitics bounds for robust adaptive control. In: Proceedings of the American Control Conference. Boston, MA. pp. 405–407.
Kittipiyakul, S., Elia, P., Javidi, T., 2009. High-SNR analysis of outage-limited communications with bursty and delay-limited information. IEEE Trans. Inform. Theory. 55 (2), 746–763.
Konigorski, U., 2012. Pole placement by parametric output feedback. Syst. Control Lett. 61 (2), 292–297.
Kressner, D., 2007. Numerical Mathematics. 2nd ed. Springer, New York.
Kushel, O., 2015. Matrices with totally positive powers and their generalization. Operators and Matrices. 9 (4), 943–964.


Levy, B.C., Nikoukhah, R., 2004. Robust least-squares estimation with a relative entropy constraint. IEEE Trans. Inform. Theory. 50 (1), 89104. Li, D., Hovakimyan, N., 2013. Bode-like integral for stochastic switched systems in the presence of limited information. Automatica. 49 (1), 18. Li, Y., Lin, Z., 2016. A switching anti-windup design based on partitioning of the input space. Syst. Control Lett. 88, 3946. Liberzon, D., 2014. Finite data-rate feedback stabilization of switched and hybrid linear systems. Automatica. 50, 409420. Liberzon, D., 2015. Leaving a lasting mark. IEEE Control Systems Magazine. 35 (2), 1920. Liesen, J., Strakos, Z., 2015. Krylov Subspace Methods: Principles and Analysis. Oxford University Press, Oxford. Lin, Z., Hu, T., 2001. Semi-global stabilization of linear systems subject to output saturation. Syst. Control Lett. 43 (3), 211217. Lin, Z., Saberi, A., 1993. Semi-global exponential stabilization of linear systems subject to input saturation via linear feedbacks. Syst. Control Lett. 21 (3), 225239. Liu, G.P., Patton, R.J., 1998. Eigenstructure Assignment for Control System Design. John Wiley, New York. Maarouf, H., 2016. Controllability and nonsingular solutions of Sylvester equations. Electr. J. Linear Algebra. 31, 721739. Maarouf, H., 2017. The resolution of the equation XA 1 XBX 5 HX and the pole placement problem: A general approach. Automatica. 79, 162166. Maggiore, M., Consolini, L., 2013. Virtual holonomic constraints for EulerLagrange systems. IEEE Trans. Automatic Control. 58 (4), 10011008. Martins, N.C., Dahleh, M.A., 2008. Feedback control in the presence of noisy channels: Bode-like fundamental limitations of performance. IEEE Trans. Autom. Control. 53 (7), 16041615. Massoumnia, M.-A., 1986. A geometric approach to the synthesis of failure detection filters. IEEE Trans. Autom. Control. 31 (9), 839846. McAvoy, T.J., Braatz, R.D., 2003. Controllability limitations for processes with large singular values. Ind. Eng. Chem. Res. 
42, 6155–6165.
Mehrmann, V., Xu, H., 1998. Choosing poles so that the single-input pole placement problem is well-conditioned. SIAM J. Matrix Anal. Appl. 19, 664–681.
Mehta, P.G., Vaidya, U., Banaszuk, A., 2008. Markov chains, entropy, and fundamental limitations in nonlinear stabilization. IEEE Trans. Autom. Control. 53 (3), 784–791.
Mesbahi, M., 2005. On state-dependent dynamic graphs and their controllability properties. IEEE Trans. Autom. Control. 50 (3), 387–392.
Mesbahi, M., Egerstedt, M., 2010. Graph Theoretic Methods in Multiagent Networks. Princeton University Press, Princeton, NJ.
Mesbahi, A., Haeri, M., 2015. Conditions on decomposing linear systems with more than one matrix to block triangular or diagonal form. IEEE Trans. Autom. Control. 60 (1), 233–239.
Meskin, N., Khorasani, K., 2011. Fault Detection and Isolation: Multi-Vehicle Unmanned Systems. Springer, New York.
Middleton, R.H., 1991. Trade-offs in linear control systems design. Automatica. 27 (2), 281–292.
Middleton, R.H., Graebe, S.F., 1999. Slow stable open-loop poles: to cancel or not to cancel. Automatica. 35 (5), 877–886.
Moore, B.C., 1976. On the flexibility offered by state feedback in multivariable systems beyond closed-loop eigenvalue assignment. IEEE Trans. Autom. Control. 21 (6), 689–692.
Morari, M., Zafiriou, E., 1989. Robust Process Control. Prentice Hall, Englewood Cliffs.


Mukhin, E., Tarasov, V., Varchenko, A., 2009. Schubert calculus and representations of the general linear group. J. Am. Math. Soc. 22 (4), 909–940.
Musa, A., Biagioni, J., Eriksson, J., 2016. Trading off accuracy, timeliness, and uplink usage in online GPS tracking. IEEE Trans. Mobile Computing. 15 (8), 2124–2136.
Nagatani, T., 2013. Nonlinear-map model for the control of an airplane schedule. Phys. A: Stat. Mech. Appl. 392 (24), 6545–6553.
Nett, C.N., 1990. Decentralized control system design for a variable-cycle gas turbine engine. In: Proceedings of the American Control Conference.
Nett, C.N., Minto, K.D., 1989. A quantitative approach to the selection and partitioning of measurements and manipulations for the control of complex systems. Proc. Am. Control Conf.
Nokhbatolfoghahaee, H., Menhaj, M.B., Talebi, H., 2016. Weakly and strongly non-minimum phase systems: properties and limitations. Int. J. Control. 89 (2), 306–321.
Nokhbatolfoghahaee, H., Menhaj, M.B., Talebi, H., Khoshnam Tehrani, M., 2017. Performance limitations of non-minimum phase affine nonlinear systems. IEEE Trans. Autom. Control, in press.
Olanrewaju, O.I., Maciejowski, J.M., 2017. Implications of discretization on dissipativity and economic model predictive control. J. Process Control. 49 (1), 1–8.
Oymak, S., Jalali, A., Fazel, M., Eldar, Y., Hassibi, B., 2015. Simultaneously structured models with applications to sparse and low-rank matrices. IEEE Trans. Inf. Theory. 61 (5), 2886–2908.
Patel, Y., White, B.A., 1993. Flight control design using polynomial eigenstructure assignment. In: Proceedings of the 27th American Control Conference, San Francisco, CA. pp. 1390–1394.
Porter, B., 1969. Synthesis of Dynamical Systems. Barnes and Noble, New York.
Porter, B., Crossley, R., 1972. Modal Control: Theory and Applications. Taylor and Francis, London.
Qi, T., Zhu, J., Chen, J., 2017. Fundamental limits on uncertain delays: when is a delay system stabilizable by LTI controllers? IEEE Trans. Autom. Control.
62 (3), 1314–1329.
Qureshi, M.T., Gajic, Z., 1992. A new version of the Chang transformation. IEEE Trans. Autom. Control. 37 (6), 800–801.
Ragazzini, J.R., Franklin, G.F., 1958. Sampled-Data Control Systems. McGraw-Hill, New York.
Recht, B., Fazel, M., Parrilo, P., 2010. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52 (3), 471–501.
Rojas, A.J., Braslavsky, J.H., Middleton, R.H., 2008. Fundamental limitations in control over a communication channel. Automatica. 44 (12), 3147–3151.
Roszak, B., Davison, E.J., 2011. An optimal servomechanism controller for SISO positive systems. IFAC Proc. Vol. 44 (1), 7529–7534.
Saberi, A., Lin, Z., Teel, A.R., 1996. Control of linear systems subject to input saturation. IEEE Trans. Autom. Control. 41 (3), 368–378.
Saberi, A., Stoorvogel, A.A., Sannuti, P., 2006. Connections between H2 optimal filters and unknown input observers – performance limitations of H2 optimal filtering. Int. J. Control. 79 (2), 171–183.
Saberi, A., Stoorvogel, A.A., Sannuti, P., 2007. Filtering Theory with Applications to Fault Detection and Isolation. Birkhauser, New York.
Saberi, A., Stoorvogel, A.A., Sannuti, P., 2007. Analysis, design, and performance limitations of H∞ optimal filtering in the presence of an additional input with known frequency. Int. J. Robust Nonlinear Control. 17 (16), 1474–1488.
Saberi, A., Stoorvogel, A., Sannuti, P., 2012. Internal and External Stabilization of Linear Systems with Constraints. Birkhauser, Boston.


Saha, S., Das, S., Ghosh, R., Goswami, B., Balasubramanian, R., Chandra, A.K., Das, S., Gupta, A., 2010. Fractional order phase shaper design with Bode's integral for iso-damped control system. ISA Trans. 49 (2), 196–206.
Sajjadi-Kia, S., Jabbari, F., 2013. Controllers for linear systems with bounded actuators: slab scheduling and anti-windup. Automatica. 49 (3), 762–769.
Santesso, P., Valcher, M.E., 2007. On the zero pattern properties and asymptotic behavior of continuous-time positive system trajectories. Linear Algebra Appl. 425, 283–302.
Schmidt, M., 2007. Systematic Discretization of Input/Output Maps and Other Contributions to the Control of Distributed Parameter Systems, PhD Dissertation. Department of Mathematics, TU Berlin.
Scholtes, S., Stoehr, M., 1999. Exact penalization of mathematical programs with equilibrium constraints. SIAM J. Control Optim. 37 (2), 617–652.
Seraji, H., 1975. An approach to dynamic compensator design for pole assignment. Int. J. Control. 21 (6), 955–966.
Seraji, H., 1981. Output control in linear systems: a transfer-function approach. Int. J. Control. 33 (4), 649–676.
Seufer, I., 2005. Generalized Inverses of Differential-Algebraic Equations and their Discretization, PhD Dissertation. Department of Mathematics, TU Berlin.
Shafai, B., Chen, J., Kothandaraman, M., 1997. Explicit formulas for stability radii of nonnegative and Metzlerian matrices. IEEE Trans. Autom. Control. 42 (2), 265–270.
Shafai, B., Ghadami, R., Oghabee, A., 2013. Constrained stabilization with maximum stability radius for linear continuous-time systems. In: Proceedings of the 52nd IEEE Conf. Decision and Control, Florence, Italy. pp. 3415–3420.
Shafai, B., Oghabee, A., Nazari, S., 2016. Robust fault detection for positive systems. In: Proceedings of the 55th IEEE Conf. Decision and Control, Las Vegas, USA. pp. 6470–6476.
Shafai, B., Sotirov, G., 1999. Uniqueness of solution in FLP under parameter perturbations. Fuzzy Sets Syst. 34 (2), 179–186.
Shen, J., Chen, S., 2017.
Stability and L∞-gain analysis for a class of nonlinear positive systems with mixed delays. Int. J. Robust Nonlinear Control. 27 (1), 39–49.
Shi, C., 2004. Linear Differential-Algebraic Equations of Higher-Order and the Regularity or Singularity of Matrix Polynomials, PhD Dissertation. Department of Mathematics, TU Berlin.
Siami, M., Motee, N., 2016. Fundamental limits and tradeoffs on disturbance propagation in linear dynamical networks. IEEE Trans. Autom. Control. 61, 4055–4063.
Siepenkotter, N., Alles, W., 2005. Stability analysis of the nonlinear dynamics of flexible aircraft. Aerospace Sci. Technol. 9 (2), 135–141.
Silva, E.I., Pulgar, S.A., 2013. Performance limitation for single-input LTI plants controlled over SNR constrained channels with feedback. Automatica. 49 (2), 540–547.
Skarda, R., Cech, M., Schlegel, M., 2014. Bode-like control loop performance index evaluated for a class of fractional-order processes. IFAC Proc. 47 (3), 10622–10627.
Son, N.H., Kien, B.T., Rosch, A., 2016. Second-order optimality conditions for boundary control problems with mixed pointwise constraints. SIAM J. Optim. 26 (3), 1912–1943.
Sottile, F., 2000. Real Schubert calculus: polynomial systems and a conjecture of Shapiro and Shapiro. Exp. Math. 9 (2), 161–182.
Sottile, F., 2010. Frontiers of reality in Schubert calculus. Bull. Am. Math. Soc. 47 (1), 31–71.
Sri Namachchivaya, N., Ariaratnam, S.T., 1986. Non-linear stability analysis of aircraft at high angles of attack. Int. J. Non-Linear Mech. 21 (3), 217–228.
Srinathkumar, S., Rhoten, R.P., 1975. Eigenvalue/eigenvector assignment for multivariable systems. Electr. Lett. 11 (6), 124–125.


Steinbrecher, A., 2015. Regularization of quasi-linear differential-algebraic equations. Proc. IFAC Conf. Math. Modell. 48 (1), 300–305.
Su, W., Qiu, L., Chen, J., 2003. Fundamental performance limitations in tracking sinusoidal signals. IEEE Trans. Autom. Control. 48 (8), 1371–1380.
Sun, L., Ohmori, H., Sano, A., 2001. Output intersampling approach to direct closed-loop identification. IEEE Trans. Autom. Control. 46 (12), 1936–1941.
Syrmos, V.L., Abdallah, C.T., Dorato, P., Grigoriadis, K., 1997. Static output feedback – a survey. Automatica. 33 (2), 125–137.
Tabbara, M., Nesic, D., Teel, A.R., 2007. Stability of wireless and wireline control systems. IEEE Trans. Autom. Control. 52 (9), 1615–1630.
Takamatsu, T., Ohmori, H., 2016. Sliding mode controller design based on backstepping technique for fractional order system. SICE J. Control Meas. Syst. Integ. 9 (4), 151–157.
Talebi, G., 2017. On the Taylor sequence spaces and upper boundedness of Hausdorff matrices and Norlund matrices. Indagationes Math. 38 (3), 629–636.
Tarokh, M., 1985. Fixed modes in multivariable systems using constrained controllers. Automatica. 21 (4), 495–497.
Tarokh, M., 1992. Measures for controllability, observability, and fixed modes. IEEE Trans. Autom. Control. 37 (8), 1268–1273.
Tavazoei, M.S., 2014. Algebraic conditions for monotonicity of magnitude-frequency responses in all-pole fractional order systems. IET Control Theor. Appl. 8 (12), 1091–1095.
Tsui, C.C., 2005. Six-dimensional expansion of output feedback design for eigenstructure assignment. J. Franklin Inst. 342 (7), 892–901.
Verschelde, J., Wang, Y., 2004. Computing dynamic output feedback laws. IEEE Trans. Autom. Control. 49 (8), 1393–1397.
Virnik, E., 2008. Stability analysis of positive descriptor systems. Linear Algebra Appl. 429 (10), 2649–2659.
Wang, X.A., Konigorski, U., 2013. On linear solutions of the output feedback pole assignment problem. IEEE Trans. Autom. Control. 58 (9), 2354–2359.
Wilkinson, J.H., 1965.
The Algebraic Eigenvalue Problem. Oxford University Press, Oxford.
Winder, S., 2008. Analog Circuits, Chapter 6. Newnes/Elsevier, Oxford.
Xia, T., Scardovi, L., 2016. Output-feedback synchronizability of linear time-invariant systems. Syst. Control Lett. 94, 152–158.
Yang, W., Hammoudi, M.N., Hermann, G., Lowenberg, M., Chen, X., 2012. Two-state dynamic gain scheduling control applied to an F16 aircraft model. Int. J. Non-Linear Mech. 47 (10), 1116–1123.
Youla, D.C., Bongiorno, J.J., Lu, C.N., 1974. Single-loop feedback stabilization of linear multivariable dynamical plants. Automatica. 10, 159–173.
Young, G.F., Scardovi, L., Leonard, N.E., 2015. A new notion of effective resistance for directed graphs – Part II: Computing resistances. IEEE Trans. Autom. Control. 60 (7), 1737–1753.
Yu, S., Mehta, P.G., 2010. Bode-like fundamental performance limitations in control of nonlinear systems. IEEE Trans. Autom. Control. 55 (6), 1390–1405.
Zaccarian, L., Teel, A.R., 2011. Modern Anti-Windup Synthesis: Control Augmentation for Actuator Saturation. Princeton University Press, Princeton.
Zadeh, L.A., 1958. What is optimal? (editorial). IRE Trans. Inf. Theory IT. 4, 3.
Zadeh, L.A., Desoer, C.A., 1963. Linear Systems: A State-Space Approach. McGraw-Hill, New York.
Zappavigna, A., Colaneri, P., Kirkland, S., Shorten, R., 2012. Essentially negative news about positive systems. Linear Algebra Appl. 436, 3425–3442.
Zelazo, D., Mesbahi, M., 2011. Edge agreement: graph-theoretic performance bounds and passivity analysis. IEEE Trans. Autom. Control. 56 (3), 544–555.

Appendix A: Laplace transform and differential equations

A.1 Introduction

To embark on this topic we assume that the reader is familiar with the following notions: complex variables, complex functions, analytic functions, singular points (poles) and zeros of functions, etc. If this is not the case, the reader is urged to have a look at the pertinent mathematics literature. In the introduction of this appendix we study the underlying philosophy of the Laplace transform in the context of differential and integral equations (IEs). In Sections A.2 and A.3 we study the Laplace transform in some detail, and in Section A.4 we comment on the existence and uniqueness of solutions to differential equations.
What are IEs, and how are we to solve them? Examples of IEs are the Volterra IE φ(x) = f(x) + ∫_0^x K(x,t)φ(t)dt, the Fredholm IE φ(x) = f(x) + λ∫_a^b K(x,t)φ(t)dt, the Abel IE, integro-differential equations, nonlinear IEs, the Wiener–Hopf IE, fractional IEs, etc. IEs arise, e.g., when we want to empty the water of a dam in an optimal way, especially if there is more than one outlet. If some dams are connected to each other, then we have a set of IEs. Another situation in which we come up with IEs is selling goods at big stores with more than one cashier. There are some major methods for solving IEs: (1) resolvent kernel, (2) successive approximation, (3) variational iteration, (4) numerical solution, etc. In the following, without going into the details of these methods, we solve two simple IEs for the sake of a surface familiarity with the subject.
Example A.1: Solve the Fredholm IE φ(x) = 2 + λ∫_{−2}^{1} xt φ(t)dt.
We note that ∫_{−2}^{1} t φ(t)dt is a constant, which we denote by A. Thus φ(x) = 2 + λxA. Therefore A = ∫_{−2}^{1} t(2 + λtA)dt, from which we get A = −3/(1 − 3λ) and φ(x) = 2 − 3λx/(1 − 3λ). (Question: What happens if λ = 1/3?)
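The closed-form solution of Example A.1 can be sanity-checked numerically; note the sign: substituting A = −3/(1 − 3λ) into φ(x) = 2 + λxA gives φ(x) = 2 − 3λx/(1 − 3λ). The sketch below (plain Python with a trapezoidal quadrature; the sample values of λ and x are arbitrary) substitutes this φ back into the Fredholm equation:

```python
# Numeric sanity check of Example A.1 (a sketch, not from the book): verify
#   phi(x) = 2 + lam * integral_{-2}^{1} x*t*phi(t) dt
# for phi(x) = 2 - 3*lam*x / (1 - 3*lam).

def phi(x, lam):
    return 2.0 - 3.0 * lam * x / (1.0 - 3.0 * lam)

def integral(f, a, b, n=20_000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

lam, x = 0.2, 1.7                    # arbitrary sample values (lam != 1/3)
lhs = phi(x, lam)
rhs = 2.0 + lam * integral(lambda t: x * t * phi(t, lam), -2.0, 1.0)
print(lhs, rhs)                      # the two sides agree up to quadrature error
```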


Example A.2: Solve the Volterra IE φ(x) = 2x + 3λx ∫_0^x φ(t)dt.
We write the equation as φ(x)/x = 2 + 3λ ∫_0^x φ(t)dt. Denote h(x) = φ(x)/x. Thus dh(x)/dx = 3λφ(x) = 3λx h(x), or dh(x)/h(x) = 3λx dx. This results in ln h(x) = (3/2)λx² + c, i.e., h(x) = d e^{1.5λx²} or φ(x) = dx e^{1.5λx²}. One method to deduce the unknown constant d is via the integral limit analysis. We know that in the original IE, as x → 0 there holds φ(x) → 2x + O(x³), whereas the obtained solution gives φ(x) = dx + O(x³). Hence d = 2. We close this example by emphasizing that this method of solution is not applicable to every given IE and is presented only as a simple example.
Coming back to our discussion of IEs, based on f(x) there are two types of IEs: the type-1 IE, where f(x) = 0, and the type-2 IE, where f(x) ≠ 0. The type-1 IE is a linear transform or operator, i.e., T[αf(x) + βg(x)] = αT[f(x)] + βT[g(x)]. K is called the kernel, according to which the IE takes different general forms. For instance, if K(x,t) = e^{−st} the IE reduces to the Laplace transform,¹ if K(x,t) = e^{−jωt} it gives the Fourier transform, and if K(x,t) = J_n(x) it becomes the Bessel functions. As stated above, the Laplace transform is a special case of IEs in which a = 0, b = ∞, K(s,x) = e^{−sx}. Thus L[f(t)] := F(s) := ∫_0^∞ e^{−st} f(t)dt. In this book we use the Laplace transform to solve differential equations which are originally in the time domain. Apart from this we also employ it to obtain the transfer function from the state-space equations. The Laplace transform F(s) exists if the integral converges, and this depends on f(t).
Example A.3: The function f(t) = 2e^{3t} has the transform F(s) = 2 ∫_0^∞ e^{−(s−3)t} dt. The integral exists if σ > 3, where s = σ + jω, and equals F(s) = 2/(s − 3).
In Example A.3, σ > 3 is called the region of convergence of the integral. The simple conditions that we usually consider for f(t) are that it is piecewise continuous and that ∃ 0 ≤ σ < ∞ : e^{−σt}|f(t)| → 0 as t → ∞. Note that: (1) This does not necessitate that f(t) → 0 as t → ∞. (2) A function like f(t) = e^{t²}, t ≥ 0, does not have a Laplace transform, but the function f(t) = e^{t²}, t ∈ I; f(t) = 0, t ∉ I, where I is a bounded interval in ℝ, does have a Laplace transform.
(3) Piece-wise continuity is not necessary, as the function f ðtÞ 5 t21=2 shows. (4) As s ! N we have FðsÞ ! 0, and thus functions like s; sins; es ; ::: are not the Laplace transform 1

Named after Pierre-Simon, marquis de Laplace, French astronomer and mathematician (1749–1827), who made significant contributions to different branches of mathematics, some finding direct applications in various fields of science like mechanics, physics, and astronomy. He was one of the most influential mathematicians of all time.

Appendix A

821

of any function. The reader is referred to the references for more detail, where measures, distributions, probabilities, etc., are also discussed. Finally, we stress that we use the one-sided Laplace transform, i.e., we take the starting time as $t = 0$, not $t = -\infty$.
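The transform and region of convergence of Example A.3 can be cross-checked with a computer algebra system; the following sketch uses SymPy (the choice of tool is ours, not the book's):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Laplace transform of f(t) = 2*exp(3*t); SymPy also returns the
# abscissa of convergence and an auxiliary convergence condition.
F, a, cond = sp.laplace_transform(2*sp.exp(3*t), t, s)

print(F)   # 2/(s - 3)
print(a)   # 3  ->  region of convergence sigma > 3
```

The returned abscissa `a = 3` is exactly the $\sigma > 3$ region of convergence derived above.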

A.2 Basic properties and pairs

Basic properties of the Laplace transform, whose proofs can be found in a variety of references, are as follows:

P1. Linearity
P2. $\mathcal{L}[f^{(n)}(t)] = s^n F(s) - s^{n-1} f(0) - s^{n-2} f^{(1)}(0) - \cdots - f^{(n-1)}(0)$
P3. $\mathcal{L}\left[\int \cdots \int f(\tau)\,(dt)^n\right] = F(s)/s^n$
P4. $\mathcal{L}[(-1)^n t^n f(t)] = F^{(n)}(s)$, in the simplest case; see Section A.3
P5. $\mathcal{L}[e^{-at} f(t)] = F(s+a)$
P6. $\mathcal{L}[f(t-a)u(t-a)] = e^{-as} F(s)$ for $a \ge 0$, where $u(t)$ is the Heaviside function
P7. $\lim_{t\to 0} f(t) = \lim_{s\to\infty} sF(s)$, if the time limit exists
P8. $\lim_{t\to\infty} f(t) = \lim_{s\to 0} sF(s)$, if $sF(s)$ is analytic in the CRHP
P9. $\mathcal{L}\left[\int_0^t f_1(\tau) f_2(t-\tau)\,d\tau\right] = F_1(s) F_2(s)$
P10. $\mathcal{L}[f_1(t) f_2(t)] = \frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty} F_1(\eta) F_2(s-\eta)\,d\eta$

It is noted that property P4 is found in the given form in the literature. However, it is correct only in the simplest case, as will be clarified in the last part of this appendix. At this point we suffice to say that differentiation and integration in the time domain and the frequency domain are not as simple as they look at first glance: the given property P4 is wrong in general! On the other hand, the properties P7 and P8 are called the initial value theorem and the final value theorem, respectively. It is worth mentioning that the final value theorem does not apply to the sinusoid, since the closed right half plane (CRHP) analyticity condition is not met. This restrictive condition does not exist for the initial value theorem.

Example A.4: The input and output of two systems are related by the differential equations (1) $\dddot{y} + 1.5\ddot{y} + 2\dot{y} + y - \ddot{u} + 2u = 0$ and (2) $\ddot{y} + 2\dot{y} + y - u(t-1)\,\mathrm{step}(t-1) = 0$. Find the transfer functions of the systems. The answers are given by (1) $G = (s^2 - 2)/(s^3 + 1.5s^2 + 2s + 1)$ and (2) $G = e^{-s}/(s^2 + 2s + 1)$. (Obviously we have considered that the functions and their derivatives, as appropriate, are all at rest at time zero.)
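The initial and final value theorems can be illustrated symbolically. The example signal below, $f(t) = 1 - e^{-2t}$, is our own choice; here $sF(s) = 2/(s+2)$ is analytic in the CRHP, so both theorems apply:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = 1 - sp.exp(-2*t)                   # example signal (our choice)
F = sp.laplace_transform(f, t, s)[0]   # F(s) = 2/(s*(s + 2))

# initial value theorem: lim_{s->oo} s F(s) = f(0+) = 0
assert sp.limit(s*F, s, sp.oo) == 0

# final value theorem: s F(s) = 2/(s+2) is analytic in the CRHP,
# so lim_{s->0} s F(s) = lim_{t->oo} f(t) = 1
assert sp.limit(s*F, s, 0) == 1
```

Replacing $f$ with $\sin t$ makes $sF(s) = s^2/(s^2+1)$, whose poles on the imaginary axis violate the CRHP condition, so the final value theorem gives a meaningless answer there.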


Example A.5: It is easy to see that the solution of the system $\ddot{y}(t) + a^2 y(t) = u(t)$, $y(0) = \dot{y}(0) = 0$, is given by $y(t) = \frac{1}{a}\int_0^t u(\tau)\sin a(t-\tau)\,d\tau$, which is obtained from the transfer function $Y(s)/U(s) = 1/(s^2 + a^2)$.

A.2.1 Inverse Laplace transform

The question here is how to find the function in the time domain whose Laplace transform is given. This is called the inverse Laplace transform, for which different methods exist. The main and most straightforward approach is partial fraction expansion. The MATLAB® command [r,p,k] = residue(num,den) can be used instead of manual handling, but note that the current implementation is ill-conditioned and the command may not work correctly.

Example A.6: Find the time-domain representation of $F(s) = \frac{2s+12}{s^2+2s+5}$. We note that $F(s) = 5\,\frac{2}{(s+1)^2+2^2} + 2\,\frac{s+1}{(s+1)^2+2^2}$ and thus $f(t) = 5e^{-t}\sin 2t + 2e^{-t}\cos 2t$, $t \ge 0$.
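The partial fraction step of Example A.6 can be reproduced numerically; the sketch below uses SciPy's residue (our choice of tool) and checks the resulting complex-exponential form against the closed-form answer at sample time points:

```python
import numpy as np
from scipy.signal import residue

# F(s) = (2s + 12) / (s^2 + 2s + 5)
r, p, k = residue([2, 12], [1, 2, 5])   # poles p at -1 +/- 2j

t = np.linspace(0.1, 5.0, 50)
# f(t) = sum_i r_i * exp(p_i * t); imaginary parts cancel in conjugate pairs
f_pf = sum(ri*np.exp(pi*t) for ri, pi in zip(r, p)).real

# closed-form answer from Example A.6
f_ref = 5*np.exp(-t)*np.sin(2*t) + 2*np.exp(-t)*np.cos(2*t)
assert np.allclose(f_pf, f_ref)
```

The conjugate residue pair $1 \mp 2.5j$ at $-1 \pm 2j$ is exactly the $(5\sin 2t + 2\cos 2t)e^{-t}$ combination obtained by hand.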

A.2.2 Table of some Laplace transform pairs

Using the definition of the Laplace transform and its inverse, i.e., $F(s) = \int_0^\infty f(t)e^{-st}\,dt$ and $f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(s)e^{st}\,d\omega$ (with $s = \sigma + j\omega$), together with the aforementioned properties, it is easy to verify the following Table A.1 of Laplace transform pairs. In the following collective examples over 25 different cases are exemplified.

Table A.1 Table of some Laplace transform pairs ($f(t) \leftrightarrow F(s)$)

$\delta(t) \leftrightarrow 1$
$\mathrm{step}(t) \leftrightarrow 1/s$
$\mathrm{ramp}(t) \leftrightarrow 1/s^2$
$t^{n-1}/(n-1)! \leftrightarrow 1/s^n$
$e^{-at} \leftrightarrow 1/(s+a)$
$t^{n-1}e^{-at}/(n-1)! \leftrightarrow 1/(s+a)^n,\ n \in \mathbb{N}$
$\sin\omega t \leftrightarrow \omega/(s^2+\omega^2)$
$\cos\omega t \leftrightarrow s/(s^2+\omega^2)$
$\sinh\omega t \leftrightarrow \omega/(s^2-\omega^2)$
$\cosh\omega t \leftrightarrow s/(s^2-\omega^2)$


Example A.7: Using partial fraction expansion and Table A.1, it is easy to verify the following pairs:
$(e^{-at}-e^{-bt})/(b-a) \leftrightarrow 1/[(s+a)(s+b)]$
$(ae^{-at}-be^{-bt})/(a-b) \leftrightarrow s/[(s+a)(s+b)]$
$(1-e^{-at})/a \leftrightarrow 1/[s(s+a)]$
$[1+(be^{-at}-ae^{-bt})/(a-b)]/(ab) \leftrightarrow 1/[s(s+a)(s+b)]$
$(1-e^{-at}-ate^{-at})/a^2 \leftrightarrow 1/[s(s+a)^2]$
$(at-1+e^{-at})/a^2 \leftrightarrow 1/[s^2(s+a)]$
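The first of these pairs can be verified mechanically; the SymPy sketch below uses the illustrative values $a = 2$, $b = 5$ (our choice):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# first pair of Example A.7 with a = 2, b = 5 (illustrative values)
F = 1/((s + 2)*(s + 5))
f = sp.inverse_laplace_transform(F, s, t)

target = (sp.exp(-2*t) - sp.exp(-5*t))/(5 - 2)   # (e^{-at}-e^{-bt})/(b-a)
assert sp.simplify(f - target) == 0
```

Declaring $t$ positive lets SymPy drop the Heaviside factor it would otherwise attach, so the comparison against the table entry is direct.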

Example A.8: Using the properties of the Laplace transform and Table A.1, it is easy to verify the following pairs, which were used in Example A.6:
$e^{-at}\sin\omega t \leftrightarrow \omega/((s+a)^2+\omega^2)$ and $e^{-at}\cos\omega t \leftrightarrow (s+a)/((s+a)^2+\omega^2)$.

Example A.9: In fact, using partial fraction expansion, the properties of the Laplace transform, and Table A.1, it is easy to expand the table as follows.

Table A.1 Continued

$\sin^2\omega t \leftrightarrow 2\omega^2/[s(s^2+4\omega^2)]$
$\cos^2\omega t \leftrightarrow (s^2+2\omega^2)/[s(s^2+4\omega^2)]$
$\sinh^2\omega t \leftrightarrow 2\omega^2/[s(s^2-4\omega^2)]$
$\cosh^2\omega t \leftrightarrow (s^2-2\omega^2)/[s(s^2-4\omega^2)]$
$(1-\cos\omega t)/\omega^2 \leftrightarrow 1/[s(s^2+\omega^2)]$
$(\omega t-\sin\omega t)/\omega^3 \leftrightarrow 1/[s^2(s^2+\omega^2)]$
$t\sin\omega t \leftrightarrow 2s\omega/(s^2+\omega^2)^2$
$t\cos\omega t \leftrightarrow (s^2-\omega^2)/(s^2+\omega^2)^2$
$(\sin\omega t+\omega t\cos\omega t)/(2\omega) \leftrightarrow s^2/(s^2+\omega^2)^2$
$t\sinh\omega t \leftrightarrow 2s\omega/(s^2-\omega^2)^2$
$t\cosh\omega t \leftrightarrow (s^2+\omega^2)/(s^2-\omega^2)^2$
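Any row of the table can be machine-checked. The sketch below verifies the pair $t\sin\omega t \leftrightarrow 2s\omega/(s^2+\omega^2)^2$ with SymPy (our own cross-check, not part of the book):

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)

# Laplace transform of t*sin(omega*t)
F = sp.laplace_transform(t*sp.sin(w*t), t, s)[0]
assert sp.simplify(F - 2*s*w/(s**2 + w**2)**2) == 0
```

This is also an instance of property P4 with $n = 1$ applied to $\sin\omega t \leftrightarrow \omega/(s^2+\omega^2)$.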


Example A.10: By the use of the Laplace transform properties and/or direct computation, we can show that Table A.1 can be expanded in the following way. Further pairs can also be included; we suffice with those given.

Table A.1 Continued

$\sin\omega t\,\sinh\omega t \leftrightarrow 2\omega^2 s/(s^4+4\omega^4)$
$\sin\omega t\,\cosh\omega t \leftrightarrow \omega(s^2+2\omega^2)/(s^4+4\omega^4)$
$\cos\omega t\,\sinh\omega t \leftrightarrow \omega(s^2-2\omega^2)/(s^4+4\omega^4)$
$\cos\omega t\,\cosh\omega t \leftrightarrow s^3/(s^4+4\omega^4)$
$(e^{bt}-e^{at})/t \leftrightarrow \ln\frac{s-a}{s-b}$
$2(1-\cosh\omega t)/t \leftrightarrow \ln\frac{s^2-\omega^2}{s^2}$
$(\sin\omega t)/t \leftrightarrow \tan^{-1}\frac{\omega}{s}$
$(\sin at\,\cos bt)/t \leftrightarrow \frac{1}{2}\tan^{-1}\frac{a+b}{s}+\frac{1}{2}\tan^{-1}\frac{a-b}{s}$
$1/\sqrt{t} \leftrightarrow \sqrt{\pi/s}$
$\frac{1}{\sqrt{\pi t}}\,e^{-a^2/(4t)} \leftrightarrow \frac{1}{\sqrt{s}}\,e^{-a\sqrt{s}}$
$\frac{a}{2\sqrt{\pi t^3}}\,e^{-a^2/(4t)} \leftrightarrow e^{-a\sqrt{s}}$

We close this section by adding that the Laplace transform is being studied in various modern contexts, such as fractional calculus.

A.3 Differentiation and integration in time domain and frequency domain

As stated in property P4, the differentiation formulas in the time domain and frequency domain are unfortunately presented in a wrong way in parts of the literature. Indeed, this is the case even for the same property of the Fourier transform. As you recall from the course Signals and Systems, the Fourier transform² is pervasively used in electrical engineering especially (in addition to mathematics), as the Laplace transform is. Both transforms are thus of fundamental importance. We take this chance to present the correct form of these properties. We give the answer for the Fourier transform in the following; it is left as an easy exercise to the reader to derive the parallel properties for the Laplace transform.

In many references in the literature the Fourier transform of $x(t) = u(t)$, the step or Heaviside function $u(t) = \begin{cases} 1 & t \ge 0 \\ 0 & t < 0 \end{cases}$, is given by $X(\omega) = \frac{1}{j\omega} + \pi\delta(\omega)$ along with

Named after Jean-Baptiste Joseph Fourier, French mathematician and physicist (1768–1830).


a proof. In many other references this proof is replaced by another one. Unfortunately, the answer and both of its proofs are wrong. It is instructive to first reproduce these proofs and explain the reason for their wrongness. Then we offer the correct answer.

We start by citing the differentiation and integration formulae of a Fourier transform pair as given in the literature. Let $x(t) \leftrightarrow X(j\omega)$ be a Fourier transform pair, that is, $x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\omega)e^{j\omega t}\,d\omega$ and $X(j\omega) = \int_{-\infty}^{\infty} x(t)e^{-j\omega t}\,dt$, assuming the standard assumptions on $x(t)$ for the existence of $X(\omega)$, or vice versa. Then one can prove the following theorems:

Differentiation formula in time domain:
$$\frac{d}{dt}x(t) \leftrightarrow j\omega X(j\omega). \qquad (A.1)$$

Integration formula in time domain:
$$\int_{-\infty}^{t} x(\tau)\,d\tau \leftrightarrow \frac{1}{j\omega}X(j\omega) + \pi X(0)\delta(\omega). \qquad (A.2)$$

Differentiation formula in frequency domain:
$$(-jt)x(t) \leftrightarrow \frac{d}{d\omega}X(j\omega). \qquad (A.3)$$

Integration formula in frequency domain:
$$\frac{-1}{jt}x(t) + \pi x(0)\delta(t) \leftrightarrow \int_{-\infty}^{\omega} X(j\eta)\,d\eta. \qquad (A.4)$$

The literature refers to $\pi X(0)\delta(\omega)$ and $\pi x(0)\delta(t)$ as the DC or constant terms of the integrals.

Now we cite an explicit derivation for the Fourier transform of $x(t) = u(t)$. In part of the literature $u(t)$ is decomposed as $x(t) = x_e(t) + x_o(t)$, where $x_e(t) = 1/2$ and $x_o(t) = \begin{cases} 1/2 & t \ge 0 \\ -1/2 & t < 0 \end{cases}$. Then they proceed as follows. Clearly $X_e(\omega) = \pi\delta(\omega)$. As for $X_o(\omega)$, it is argued that the time derivative of $x_o(t)$ is $\delta(t)$; therefore $\frac{d}{dt}x_o(t) \leftrightarrow 1$. On the other hand, from the differentiation formula there holds $\frac{d}{dt}x_o(t) \leftrightarrow j\omega X_o(j\omega)$, where $x_o(t) \leftrightarrow X_o(j\omega)$, and thus $X_o(j\omega) = \frac{1}{j\omega}$. Altogether $X(j\omega) = X_e(j\omega) + X_o(j\omega) = \pi\delta(\omega) + \frac{1}{j\omega}$. This result is wrong because the differentiation formula in the bare form (A.1) is erroneous; see Section A.3.2.

In many other references in the literature the integration formula is used as follows. One has $\delta(t) \leftrightarrow \Delta(j\omega) = 1$, which using the integration formula results in $x(t) = \int_{-\infty}^{t}\delta(\tau)\,d\tau \leftrightarrow X(j\omega) = \frac{1}{j\omega}\Delta(j\omega) + \pi\Delta(0)\delta(\omega) = \frac{1}{j\omega} + \pi\delta(\omega)$. This answer is also wrong because the integration formula (A.2) is erroneous. It should be mentioned that there is another problem with this proof: using the integration formula to derive the Fourier transform of $u(t)$ is logically invalid, because


the former itself is obtained from the latter, as will be explained in Section A.3.3. The reverse derivation is thus invalid: it is a circular argument.

In the above-cited references it is also stated that, similar to the time-domain derivations of the issues discussed above, their frequency-domain duals can be obtained. More precisely, the inverse Fourier transform of $V(j\omega) = \begin{cases} 1 & \omega \ge 0 \\ 0 & \omega < 0 \end{cases}$ has been computed as $v(t) = \pi\delta(t) - \frac{1}{jt}$, and the derivative and integral formulae are given as in (A.3) and (A.4). These results are also all wrong; see Section A.3.4.

In the next Section A.3.1 we present the correct answer for the Fourier transform of $u(t)$. This is followed by the correct forms of the time-domain derivative and integral formulae in the ensuing Sections A.3.2 and A.3.3, respectively. The inverse Fourier transform of the step function and the frequency-domain derivative and integral formulae are presented in Section A.3.4. Finally, in Section A.3.5 some applications of these tools which have resulted in erroneous conclusions are cited and corrected.

A.3.1 Fourier transform of the Heaviside function

The correct answer is simply as follows: $u(t) = \lim_{a\to 0} f(t)$, where $a > 0$ and $f(t) = \begin{cases} e^{-at} & t \ge 0 \\ 0 & t < 0 \end{cases}$. Hence, $U(j\omega) = \lim_{a\to 0} F(j\omega) = \lim_{a\to 0}\int_0^\infty e^{-at}e^{-j\omega t}\,dt = \lim_{a\to 0}\frac{1}{a+j\omega} = \frac{1}{j\omega}$, obtained directly or indirectly. By indirectly we mean that if the limit is taken separately on the real and imaginary parts of $\frac{1}{a+j\omega}$, the same answer is obtained. Thus,
$$u(t) \leftrightarrow \frac{1}{j\omega}. \qquad (A.5)$$

A.3.2 Differentiation formula in time domain

Let $x(t) \leftrightarrow X(j\omega)$. It is desired to find the Fourier transform of $x'(t) = \frac{d}{dt}x(t)$. There holds $x'(t) \leftrightarrow \int_{-\infty}^{\infty} x'(t)e^{-j\omega t}\,dt$. Using integration by parts, the right-hand side is written as $\left[x(t)e^{-j\omega t}\right]_{-\infty}^{\infty} + j\omega\int_{-\infty}^{\infty} x(t)e^{-j\omega t}\,dt$. Now, if there holds
$$x(t) \to 0 \text{ as } |t| \to \infty, \qquad (A.6)$$
the first term will be zero because $e^{-j\omega t}$ is bounded (in fact its magnitude is one). The second term is clearly $j\omega X(j\omega)$. In other words, the differentiation formula as given in (A.1) is valid for signals satisfying (A.6). This condition is not satisfied by $x_o(t)$ [given in the paragraph after (A.4)] and thus the formula is not applicable there. However, this is not the whole story. To complete the argument we must add that we have implicitly assumed that the signal is continuous (but $x_o(t)$ is not). If the signal has a discontinuity, say at $t_1$, and yet satisfies (A.6), one has
$$\int_{-\infty}^{\infty} x'(t)e^{-j\omega t}\,dt = \int_{-\infty}^{t_{1-}} x'(t)e^{-j\omega t}\,dt + \int_{t_{1+}}^{\infty} x'(t)e^{-j\omega t}\,dt = \left[x(t)e^{-j\omega t}\right]_{-\infty}^{t_{1-}} + j\omega\int_{-\infty}^{t_{1-}} x(t)e^{-j\omega t}\,dt + \left[x(t)e^{-j\omega t}\right]_{t_{1+}}^{\infty} + j\omega\int_{t_{1+}}^{\infty} x(t)e^{-j\omega t}\,dt,$$
in which $t_{1-}$ and $t_{1+}$ have obvious meanings. This itself is equivalent to
$$x'(t) \leftrightarrow \left[x(t_{1-}) - x(t_{1+})\right]e^{-j\omega t_1} + j\omega X(j\omega). \qquad (A.7)$$

In case there are several discontinuities $t_i\ (i = 1,\ldots,n)$, the answer will be $\sum_{i=1}^n \left[x(t_{i-}) - x(t_{i+})\right]e^{-j\omega t_i} + j\omega X(j\omega)$. In passing, it is useful to add that by the same procedure the Fourier transform of higher-order derivatives is computed as
$$\frac{d^k}{dt^k}x(t) \leftrightarrow \sum_{l=1}^{k}\left[x^{(k-l)}(t_{1-}) - x^{(k-l)}(t_{1+})\right](j\omega)^{l-1}e^{-j\omega t_1} + (j\omega)^k X(j\omega),$$
and in case of several discontinuities $t_i\ (i = 1,\ldots,n)$,
$$\frac{d^k}{dt^k}x(t) \leftrightarrow \sum_{i=1}^{n}\sum_{l=1}^{k}\left[x^{(k-l)}(t_{i-}) - x^{(k-l)}(t_{i+})\right](j\omega)^{l-1}e^{-j\omega t_i} + (j\omega)^k X(j\omega). \qquad (A.8)$$
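The corrected formula (A.7) can be sanity-checked numerically. Take $x(t) = e^{-t}u(t)$, which satisfies (A.6) and has a single discontinuity at $t_1 = 0$ with $x(0_-) - x(0_+) = -1$; away from the jump its classical derivative is $-e^{-t}u(t)$. (The example signal and the use of SciPy are our own choices.)

```python
import numpy as np
from scipy.integrate import quad

w = 1.7   # a sample frequency

# Fourier transform of the classical derivative x'(t) = -exp(-t), t > 0:
# -exp(-t) e^{-jwt} has real part -exp(-t)cos(wt), imag part +exp(-t)sin(wt)
re = quad(lambda t: -np.exp(-t)*np.cos(w*t), 0, np.inf)[0]
im = quad(lambda t: np.exp(-t)*np.sin(w*t), 0, np.inf)[0]
lhs = re + 1j*im

X = 1/(1 + 1j*w)              # X(jw) of x(t) = exp(-t) u(t)
jump = (0 - 1)*1.0            # [x(0-) - x(0+)] e^{-j w t1}, t1 = 0
rhs = jump + 1j*w*X           # right-hand side of (A.7)

assert abs(lhs - rhs) < 1e-6
```

Dropping the jump term reproduces the error discussed above: $j\omega X(j\omega)$ alone is the transform of the distributional derivative $\delta(t) - e^{-t}u(t)$, not of the classical one.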

A.3.3 Integration formula in time domain

Let $x(t) \leftrightarrow X(j\omega)$. It is desired to compute the Fourier transform of $\int_{-\infty}^{t} x(\tau)\,d\tau$. To find the answer, we first note that $\int_{-\infty}^{t} x(\tau)\,d\tau = \int_{-\infty}^{\infty} x(\tau)u(t-\tau)\,d\tau = x(t) * u(t)$, where $u(t)$ is the step function. Now we can simply find the Fourier transform. It is a standard result (with a correct proof) in the aforementioned references, known as the time-domain convolution formula, that $x(t) * u(t) \leftrightarrow X(j\omega)U(j\omega)$. Thus we have found the answer once we know the Fourier transform of $u(t)$, denoted by $U(j\omega)$. Note that this is what we previously stated in the second paragraph following (A.4): the derivation of the integration formula pivots on the derivation of $U(j\omega)$. The reverse is thus invalid; otherwise it forms a circular argument. Coming back to our discussion, from the result of Section A.3.1 we have $U(j\omega) = \frac{1}{j\omega}$ and thus
$$\int_{-\infty}^{t} x(\tau)\,d\tau \leftrightarrow \frac{1}{j\omega}X(j\omega). \qquad (A.9)$$

It is observed that the integral has no DC or constant term.

A.3.4 Frequency domain formulae

Similar to the developments presented so far, one can derive the inverse Fourier transform of the Heaviside function, as well as the differentiation and integration formulae in the frequency domain, in a correct way. For the sake of brevity we do not present the details and suffice with the final forms.

We start with the inverse Fourier transform of the step function. The correct answer is simply as follows. Let $V(j\omega) = \begin{cases} 1 & \omega \ge 0 \\ 0 & \omega < 0 \end{cases}$, so that $V(j\omega) = \lim_{a\to 0} G(j\omega)$, where $a > 0$ and $G(j\omega) = \begin{cases} e^{-a\omega} & \omega \ge 0 \\ 0 & \omega < 0 \end{cases}$. Hence, $v(t) = \lim_{a\to 0} g(t) = \lim_{a\to 0}\frac{1}{2\pi}\int_0^{\infty} e^{-a\omega}e^{j\omega t}\,d\omega = \lim_{a\to 0}\frac{1}{2\pi}\cdot\frac{1}{a-jt} = \frac{-1}{2\pi jt}$. In brief,
$$v(t) = \frac{-1}{2\pi jt} \leftrightarrow V(j\omega). \qquad (A.10)$$

Now we present the derivative and integral formulae. Let $x(t) \leftrightarrow X(j\omega)$ be a Fourier pair, where $X(j\omega)$ is continuous and
$$X(j\omega) \to 0 \text{ as } |\omega| \to \infty. \qquad (A.11)$$

Then we can present the following results, whose proofs go along the lines of their time-domain counterparts in Sections A.3.2 and A.3.3.

Differentiation formula in frequency domain:
$$(-jt)x(t) \leftrightarrow \frac{d}{d\omega}X(j\omega). \qquad (A.12)$$

In case there is a discontinuity at $\omega_1$, there holds
$$\frac{1}{2\pi}\left[X(j\omega_{1-}) - X(j\omega_{1+})\right]e^{j\omega_1 t} + (-jt)x(t) \leftrightarrow \frac{d}{d\omega}X(j\omega). \qquad (A.13)$$

And in case of several discontinuities $\omega_i\ (i = 1,\ldots,n)$ and higher derivatives,
$$\frac{1}{2\pi}\sum_{i=1}^{n}\sum_{l=1}^{k}\left[X^{(k-l)}(j\omega_{i-}) - X^{(k-l)}(j\omega_{i+})\right](-jt)^{l-1}e^{j\omega_i t} + (-jt)^k x(t) \leftrightarrow \frac{d^k}{d\omega^k}X(j\omega). \qquad (A.14)$$

Integration formula in frequency domain:
$$\frac{-1}{jt}x(t) \leftrightarrow \int_{-\infty}^{\omega} X(j\eta)\,d\eta. \qquad (A.15)$$

Note that for the derivation of (A.15) we have used the frequency-domain convolution formula $2\pi\, x(t)v(t) \leftrightarrow X(j\omega) * V(j\omega)$, which is a standard result; see the aforementioned references.

A.3.5 Some consequences

The aforementioned results can be used to obtain a number of other results. More precisely, they can be directly applied to obtain the Fourier transform of functions of the form $\int_{-\infty}^{t} x(\tau)\,d\tau$, $x(t)u(t)$, or $x(t)u(t+T)$, where $X(\omega)$ is known. Likewise, they may be directly employed to compute the inverse Fourier transform of $\int_{-\infty}^{\omega} X(j\eta)\,d\eta$, $X(j\omega)V(j\omega)$, or $X(j\omega)V(j\omega + j\Omega)$, in which $x(t)$ is known.

For instance, they have been invoked to obtain results like the Fourier transform of $u(t)\cos(\omega_0 t)$. In the literature the answer has been computed as $\frac{\pi}{2}\left[\delta(\omega-\omega_0) + \delta(\omega+\omega_0)\right] + \frac{j\omega}{\omega_0^2 - \omega^2}$, which is certainly wrong. The correct answer is $\frac{j\omega}{\omega_0^2 - \omega^2}$. Similarly, the Fourier transform of $u(t)\sin(\omega_0 t)$ has been computed in a wrong way; the correct answer is $\frac{\omega_0}{\omega_0^2 - \omega^2}$.

In the case of windowed functions like $x(t)\left[u(t+T_1) - u(t-T_2)\right]$ or $X(j\omega)\left[V(j\omega + j\Omega_1) - V(j\omega - j\Omega_2)\right]$, the answers obtained by employing (A.2) and (A.4) are luckily correct, since the impulse terms cancel each other out. In the broad literature many other examples have been solved by applying the aforementioned results; they should all be corrected accordingly.

A.4 Existence and uniqueness of solutions to differential equations

The existence and uniqueness of solutions to differential equations are important and in general difficult questions to answer, especially the latter. It is easy to provide differential equations which do not admit any solution. On the other hand, some differential equations admit an infinite number of solutions. Some relevant theorems are presented below.

Theorem A.1: Consider the differential equation $\dot{x}(t) = f(x,t)$. Let $R$ denote the region $a \le t \le b$, $c \le x \le d$, and let $(x_0, t_0)$ be a point in its interior. If $f(x,t)$ and $\frac{\partial}{\partial x}f(x,t)$ are continuous on $R$, then there exists an interval $I_0: (t_0 - T,\, t_0 + T)$ contained in $R$, and a unique solution $x(t)$ defined on $I_0$. Δ

It should be noted that the theorem is a sufficient (and not necessary) condition and does not provide any indication about the size of the interval $I_0$. Thus, the uniqueness property is usually understood in a local sense. However, for certain systems we can show that $I_0$ is the whole space and thus the solution is globally unique. A simple example is the system $\dot{x}(t) = 2x(t)$, $x(1) = -1$. Of course we know that there are some classes of differential equations which have a unique global solution, like $\dot{x}(t) = f(x,t) = p(t)x + q(t)$, $x(t_0) = x_0$.

A particular but important local theorem is discussed next.

Theorem A.2: Consider the system $a_n(t)\frac{d^n x}{dt^n} + a_{n-1}(t)\frac{d^{n-1} x}{dt^{n-1}} + \cdots + a_1(t)\frac{dx}{dt} + a_0(t)x = g(t)$ with $x(t_0) = x_0$, $\dot{x}(t_0) = x_1, \ldots, x^{(n-1)}(t_0) = x_{n-1}$. If the coefficients $a_i(t)$ are all continuous on an interval $I$, $a_n(t) \neq 0$ on $I$, and $t_0 \in I$, then the given system has a unique solution on $I$. Δ

The subsequent theorem is a global result.


Theorem A.3: Consider the retarded functional differential equation $\dot{x} = f(t, x(t-\tau))$, $x(t) = \varphi(t)$, $t \in [t_0 - \tau,\, t_0]$, $\varphi(t) \in C([t_0 - \tau,\, t_0], \mathbb{R}^n)$, where $f: D \subseteq \mathbb{R} \times \mathbb{R}^{n\times 1} \to \mathbb{R}^{n\times 1}$ is continuous, $C$ is the class of continuous functions as described, $D$ is open, and $f(t,x)$ is Lipschitz in $x$ on every compact set in $D$. Then for every $(t_0, x) \in D$ the system has a unique solution passing through $(t_0, x)$. Δ

In particular, the LTI system considered in this book has a unique solution, as derived in Remark 2.13 of Chapter 2.

Theorem A.4: Continuity of $f$ in $x$ is sufficient for the existence, but not the uniqueness, of a solution of $\dot{x} = f(t,x)$. Δ

Finally, we add that the above Theorems A.1–A.3 are for initial value problems (IVPs). Some actual problems are modeled as IVPs. On the other hand, some are modeled as boundary value problems (BVPs), in which values at two points are specified, e.g., $x(1) = -1$, $x(10) = 2$ or $x(1) = -1$, $\dot{x}(10) = 2$. For results on BVPs, as well as further results on IVPs, the reader is referred to the pertinent literature.

References

For further reading the reader is referred to the following references:

Alencar, M.S., da Rocha, V.C., 2005. Communication Systems. Springer, Berlin.
Boulet, B., 2006. Fundamentals of Signals and Systems. Charles River Media, Boston.
Collins, P.K., 2006. Differential and Integral Equations. Oxford University Press, NY.
Doetsch, G., 1974. Introduction to the Theory and Application of the Laplace Transformation. Springer, Berlin (translated from the German edition by W. Nader).
Haykin, S., Van Veen, B., 2003. Signals and Systems, second ed. John Wiley and Sons, NY.
Haykin, S., Moher, M., 2005. Modern Wireless Communications. Prentice Hall, NJ.
Hsu, H.P., 1995. Schaum's Outlines: Signals and Systems. McGraw-Hill, NY.
https://en.wikipedia.org/wiki/Fourier_transform
http://mathworld.wolfram.com/FourierTransform.html
Karris, S.T., 2012. Signals and Systems with MATLAB Computing and Simulink Modeling, fifth ed. Orchard Publications, Fremont.
Lee, E.A., Varaiya, P., 2002. Structure and Interpretation of Signals and Systems. Addison Wesley, NY.
MATLAB Robust Control Toolbox, The MathWorks, Inc., 2016.
Oppenheim, A.V., Willsky, A.S., Nawab, S.H., 1997. Signals and Systems, second ed. Prentice Hall, NJ.
Widder, D.V., 2010. The Laplace Transform. Dover Publications, NY.
Zwillinger, D., 1997. Handbook of Differential Equations, third ed. Academic Press, NY.

Appendix B: Introduction to dynamics

B.1 Introduction

As mentioned in the text, the modeling of different systems can be based on the fundamental laws governing those systems. Presenting the fundamental laws of all the various types of systems is outside the scope of this book. Instead, we have seen in Section 2.3, in a case-by-case analysis, how this can be performed, e.g., for biological or economic systems. Electrical and mechanical systems are probably the most tangible kinds of systems for all readers, since they have made some acquaintance with them in secondary school. This is true for chemical systems as well, but to a lesser extent, since there the analysis of the degrees of freedom is rather technical and complicated for the general reader who is not educated in chemical engineering. In this appendix we briefly review the principal laws of electrical, mechanical, and chemical systems. Our primary focus throughout the book is on these three classes of systems.

B.1.1 Electrical systems

The basic electrical components are the resistor, capacitor, and inductor. Let $v$ and $i$ represent the voltage and current of these elements in the way shown in Fig. B.1. The linear versions of these components have the following models, or in other words obey the following rules ($R$: resistance, $C$: capacitance, $L$: inductance):

Resistor:   $v = Ri$              and   $i = (1/R)v$
Capacitor:  $v = (1/C)\int i\,dt$   and   $i = C\,dv/dt$
Inductor:   $v = L\,di/dt$         and   $i = (1/L)\int v\,dt$

Consider a network with electrical components. Then: (1) The sum of all voltages around a closed loop is zero. (2) The sum of all the currents flowing into a node equals the sum of all the currents flowing out of it.
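These element laws compose directly into simulation code. The sketch below (our own minimal illustration, with illustrative component values) integrates a series RC circuit driven by a unit step with forward Euler, combining the loop (voltage) law with the capacitor law $i = C\,dv/dt$:

```python
import math

# Series RC circuit driven by a unit step: V = R*i + v_c (loop law),
# i = C * dv_c/dt (capacitor law). Values are illustrative.
R, C, V = 1e3, 1e-6, 1.0     # 1 kOhm, 1 uF  ->  tau = R*C = 1 ms
dt = 1e-6                    # 1 us Euler step
vc = 0.0
for _ in range(5000):        # simulate 5 ms = 5 time constants
    i = (V - vc) / R         # loop law gives the current
    vc += dt * i / C         # forward-Euler update of the capacitor law

# analytic solution: vc(t) = V*(1 - exp(-t/(R*C)))
assert abs(vc - V*(1 - math.exp(-5))) < 1e-3
```

The same two-line update pattern extends to the networks of Section B.3 once loop currents are chosen as states.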


Figure B.1 Schematic representation of electrical components. Left: resistor, middle: capacitor, right: inductor.

B.1.2 Mechanical systems

For mechanical systems we first review the most basic rules and elements below. Let $F$ be the force, $x$ the translational position, $v$ the translational velocity, $a$ the translational acceleration, and $M$ the mass. Then we have:

Translational motion:  $\sum F = Ma = M\dot{v} = M\ddot{x}$
Translational spring:  $F = kx$ (linear relation)
Friction:              $F = f(v)$
  Viscous friction:    $F = bv = b\dot{x}$ (dashpot/linear relation)
  Static friction:     (beyond our scope)
  Coulomb friction:    (beyond our scope)

It is noted that the dashpot is the device shown in Fig. B.2. Let $\theta$ be the angular position, $\omega$ the angular velocity, $\alpha$ the angular acceleration, and $J$ the inertia. Then one has:

Rotational motion:  $\sum T = J\alpha = J\dot{\omega} = J\ddot{\theta}$
Torsional spring:   $T = K\theta$ (linear relation)
Friction:           $T = f(\omega)$
  Viscous friction: $T = b\omega = b\dot{\theta}$ (dashpot/linear relation)
  Static friction:  (beyond our scope)
  Coulomb friction: (beyond our scope)

As for energies we have:

Kinetic energy:   translational motion $W_k = \frac{1}{2}Mv^2$;  rotational motion $W_k = \frac{1}{2}J\omega^2$
Potential energy: translational spring $W_p = \frac{1}{2}kx^2$;  torsional spring $W_p = \frac{1}{2}k\theta^2$

Figure B.2 Dashpot. Left: construction (a piston in a fluid-filled cylinder), right: schematic representation.


Figure B.3 Schematic representation of a gear (input side: $N_1$, $T_i$, $T_1$, $J_1$, $b_1$; load side: $N_2$, $T_2$, $J_2$, $b_2$, load torque $T_L$).

A gear has the representation and model given in Fig. B.3.

Gear sides: $\dfrac{N_1}{N_2} = \dfrac{r_1}{r_2} = \dfrac{T_1}{T_2} = \dfrac{\theta_2}{\theta_1} = \dfrac{\omega_2}{\omega_1}$

Input and load torques ($i$: input, $L$: load; 1: input side, 2: load side):
$$T_2 = J_2\ddot{\theta}_2 + b_2\dot{\theta}_2 + T_L$$
$$T_i = J_1\ddot{\theta}_1 + b_1\dot{\theta}_1 + T_1 = J_1\ddot{\theta}_1 + b_1\dot{\theta}_1 + \frac{N_1}{N_2}T_2$$

It is possible to draw analogies between electrical and mechanical, chemical, hydraulic, thermal, etc., systems. The two main frameworks for similarity between electrical and mechanical systems are the voltage-force and current-force analogies, given in Section B.2 for the sake of completeness. For other systems this gets technical and is thus not addressed here. Examples of these equivalences are provided in Chapter 2, System Representation.

B.1.3 Chemical systems

For a chemical system the principal laws are the conservation laws, as follows:

Mass: ⟨rate of mass accumulation⟩ = ⟨rate of mass in⟩ − ⟨rate of mass out⟩

Component i: ⟨rate of component i accumulation⟩ = ⟨rate of component i in⟩ − ⟨rate of component i out⟩ + ⟨rate of component i produced⟩

Energy: ⟨rate of energy accumulation⟩ = ⟨rate of energy in by convection⟩ − ⟨rate of energy out by convection⟩ + ⟨net rate of heat addition to the system from the surroundings⟩ + ⟨net rate of work performed on the system by the surroundings⟩

On the other hand, the total energy of a thermodynamic system $U_T$ is the sum of the internal energy $U_I$, the kinetic energy $U_{KE}$, and the potential energy $U_{PE}$, i.e., $U_T = U_I + U_{KE} + U_{PE}$.

B.2 Equivalent systems

A comparative study shows that there are two possible ways of establishing equivalences between electrical and mechanical systems. Precisely speaking, in one of the equivalence frameworks voltage plays the role of the integral of force; thus this is the voltage-integral-of-force equivalence, which for brevity is called the voltage-force equivalence. In the other framework, current plays the role of the integral of force; hence it should be called the current-integral-of-force equivalence, but for the sake of brevity it is termed the current-force equivalence. These systems are elaborated in the following Tables B.1 and B.2.

The tables are classical and the derivations are straightforward. For instance, note that there holds $v = L\dot{i},\ Ri,\ (1/C)\int i$, and on the other hand $f = M\ddot{x},\ b\dot{x},\ kx$, which shows the analogy "voltage versus integral of force" (whence voltage-integral-of-force is the correct way of naming it).

Question B.1: What are the equivalents of the amplifier, transformer, lever, gear, and pulley?

The general approach for writing down the laws governing the dynamics of mechanical systems is Lagrange's method. The Lagrangian equation in generalized coordinates is
$$\frac{d}{dt}\frac{\partial KE}{\partial \dot{q}_i} - \frac{\partial KE}{\partial q_i} + \frac{\partial PE}{\partial q_i} + \frac{\partial DE}{\partial \dot{q}_i} = Q_i,$$
where $KE$, $PE$, and $DE$ represent the kinetic, potential, and dissipative energies. If the system is conservative, i.e., without dissipation, it reduces to¹
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0,$$
where $L = KE - PE$ is called the Lagrangian.

Table B.1 Voltage-force analogy between electrical and mechanical systems

Mechanical systems                          | Electrical systems
D'Alembert's principle                      | Kirchhoff's voltage law
Degree of freedom                           | Loop
Force applied                               | Switch closed
Force F (torque T)                          | Voltage v
Mass M (moment of inertia J)                | Inductance L
Displacement x (angular displacement θ)     | Charge q
Velocity ẋ (angular velocity ω)             | Loop current i
Damping (viscous-friction coefficient) b    | Resistance R
Spring constant k                           | Reciprocal of capacitance 1/C
Coupling elements                           | Elements common to two loops

This is due to J. L. Lagrange, Italian mathematician (1736–1813). The case of non-conservative systems (the previous equation) is due to both J. L. Lagrange and L. Euler, Swiss mathematician (1707–83), and in the literature is referred to as either the Euler-Lagrange equation, the Lagrange equation of the second kind, or sometimes in short the Lagrange equation.


Table B.2 Current-force analogy between electrical and mechanical systems

Mechanical systems                          | Electrical systems
D'Alembert's principle                      | Kirchhoff's current law
Degree of freedom                           | Node
Force applied                               | Switch closed
Force F (torque T)                          | Current i
Mass M (moment of inertia J)                | Capacitance C
Displacement x (angular displacement θ)     | Flux linkage φ = ∫v dt
Velocity ẋ (angular velocity ω)             | Node voltage v
Damping (viscous-friction coefficient) b    | Conductance 1/R
Spring constant k                           | Reciprocal of inductance 1/L
Coupling elements                           | Elements common to two nodes

B.3 Worked-out problems

The usage of the above lessons is demonstrated on some basic problems below.

Problem B.1: Consider the electrical network of Fig. B.4. Find a model for this system.

We define the currents $i_1, i_2, i_3$ as shown in the figure. Thus we can simply write the voltage laws around the loops and obtain:
$$L_1\frac{di_1}{dt} + \frac{1}{C_1}\int i_1\,dt + \frac{1}{C_2}\int(i_1 - i_2)\,dt + \frac{1}{C_3}\int(i_1 - i_3)\,dt = v(t)$$
$$L_2\frac{di_2}{dt} + R(i_2 - i_3) + \frac{1}{C_2}\int(i_2 - i_1)\,dt = 0$$
$$R(i_3 - i_2) + \frac{1}{C_3}\int(i_3 - i_1)\,dt = 0$$

Figure B.4 Problem B.1: a typical electrical network (source $v$, inductors $L_1$, $L_2$, capacitors $C_1$, $C_2$, $C_3$, resistor $R$, loop currents $i_1$, $i_2$, $i_3$).


We define $x_1 = i_1$, $x_2 = i_2$, $x_3 = i_3$, and differentiate the above relations. Denoting $u = \dot{v}$, the dynamics of the system are given by
$$L_1\ddot{x}_1 + \frac{1}{C_1}x_1 + \frac{1}{C_2}(x_1 - x_2) + \frac{1}{C_3}(x_1 - x_3) = u(t)$$
$$L_2\ddot{x}_2 + R(\dot{x}_2 - \dot{x}_3) + \frac{1}{C_2}(x_2 - x_1) = 0$$
$$R(\dot{x}_3 - \dot{x}_2) + \frac{1}{C_3}(x_3 - x_1) = 0$$

Defining $x = [x_1\ x_2\ x_3]^T$ as the state of the system and $u$ as its input, we observe that in compact form these equations read $\mathbf{L}\ddot{x} + \mathbf{R}\dot{x} + \mathbf{C}x = Bu$, where
$$\mathbf{L} = \begin{bmatrix} L_1 & 0 & 0 \\ 0 & L_2 & 0 \\ 0 & 0 & 0 \end{bmatrix},\quad \mathbf{R} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & R & -R \\ 0 & -R & R \end{bmatrix},\quad \mathbf{C} = \begin{bmatrix} \sum_{i=1}^{3} 1/C_i & -1/C_2 & -1/C_3 \\ -1/C_2 & 1/C_2 & 0 \\ -1/C_3 & 0 & 1/C_3 \end{bmatrix},\quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.$$

The output may be defined as any voltage or current in the network. For instance, if it is defined as the voltage across the resistor, with the negative and positive sides on the left and right respectively, i.e., $y = R(i_2 - i_3)$, then simply $y = [0\ R\ {-R}]\,x$.

The mechanical equivalent of the system is provided in Fig. B.5. The governing equations are:
$$m_1\ddot{x}_1 + k_1 x_1 + k_2(x_1 - x_2) + k_3(x_1 - x_3) = f(t)$$
$$m_2\ddot{x}_2 + b(\dot{x}_2 - \dot{x}_3) + k_2(x_2 - x_1) = 0$$
$$b(\dot{x}_3 - \dot{x}_2) + k_3(x_3 - x_1) = 0$$

We notice that the correct naming is thus voltage-integral-of-force equivalence, as we have mentioned before. We close this problem by encouraging the reader to place another resistor in the third loop (carrying the current $i_3$) and repeat the same problem.
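The compact matrix form can be machine-verified by expanding $\mathbf{L}\ddot{x} + \mathbf{R}\dot{x} + \mathbf{C}x = Bu$ back into the three differentiated loop equations (a SymPy sketch; the symbol names are ours):

```python
import sympy as sp

t = sp.symbols('t')
L1, L2, R, C1, C2, C3 = sp.symbols('L1 L2 R C1 C2 C3', positive=True)
x1, x2, x3, u = (sp.Function(n)(t) for n in ('x1', 'x2', 'x3', 'u'))

x = sp.Matrix([x1, x2, x3])
Lm = sp.diag(L1, L2, 0)
Rm = sp.Matrix([[0, 0, 0], [0, R, -R], [0, -R, R]])
Cm = sp.Matrix([[1/C1 + 1/C2 + 1/C3, -1/C2, -1/C3],
                [-1/C2, 1/C2, 0],
                [-1/C3, 0, 1/C3]])
B = sp.Matrix([1, 0, 0])

residual = Lm*x.diff(t, 2) + Rm*x.diff(t) + Cm*x - B*u

# the three differentiated loop equations, written out by hand
eqs = [L1*x1.diff(t, 2) + x1/C1 + (x1 - x2)/C2 + (x1 - x3)/C3 - u,
       L2*x2.diff(t, 2) + R*(x2.diff(t) - x3.diff(t)) + (x2 - x1)/C2,
       R*(x3.diff(t) - x2.diff(t)) + (x3 - x1)/C3]

assert all(sp.simplify(residual[k] - eqs[k]) == 0 for k in range(3))
```

Note that $\mathbf{L}$ is singular (the third loop has no inductor), so the third equation is first-order; eliminating $x_3$ is one way to obtain a standard state-space model.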

Figure B.5 Problem B.1: mechanical equivalent (masses $m_1$, $m_2$, springs $k_1$, $k_2$, $k_3$, damper $b$, force $f$, displacements $x_1$, $x_2$, $x_3$).


Figure B.6 Problem B.2: an inverted pendulum (mass $m$, length $l$, angle $\theta$) mounted on a cart (mass $M$, position $z$, force $f$).

Problem B.2: The ideal inverted pendulum problem is a conservative system; see Fig. B.6. In this system the control objective is $\theta = 0$ and $z = 0$, where $z$ denotes the coordinate of a specific point on the one-dimensional platform with respect to the center (origin). Note that this is a simple, one-dimensional version of shuttle/rocket balancing or of balancing a beam on the palm. We use Lagrange's method:
$$KE = \frac{1}{2}M\dot{z}^2 + \frac{1}{2}m\left[\left(\frac{d}{dt}(z + l\sin\theta)\right)^2 + \left(\frac{d}{dt}(l\cos\theta)\right)^2\right]$$
$$PE = V_0 + mgl\cos\theta$$

$L = KE - PE$, and the system is assumed conservative. Hence, from $\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = Q_i$, with $q_1 = z$, $q_2 = \theta$, $Q_1 = f$, we get:
$$\frac{d}{dt}\left[M\dot{z} + m(\dot{z} + l\dot{\theta}\cos\theta)\right] = f,$$
$$\frac{d}{dt}\left[m(l^2\dot{\theta} + \dot{z}l\cos\theta)\right] + ml\dot{z}\dot{\theta}\sin\theta - mgl\sin\theta = 0,$$
yielding
$$(M+m)\ddot{z} + (ml\cos\theta)\ddot{\theta} = ml\dot{\theta}^2\sin\theta + f,$$
$$(ml\cos\theta)\ddot{z} + (ml^2)\ddot{\theta} = mgl\sin\theta.$$

Therefore $\mathcal{M}\ddot{q} = F$, where
$$\mathcal{M} = \begin{bmatrix} M+m & ml\cos\theta \\ ml\cos\theta & ml^2 \end{bmatrix},\quad \ddot{q} = \begin{bmatrix} \ddot{z} \\ \ddot{\theta} \end{bmatrix},\quad F = \begin{bmatrix} f + ml\dot{\theta}^2\sin\theta \\ mgl\sin\theta \end{bmatrix};$$
$\mathcal{M}$ is the so-called mass matrix. It is proven that the mass matrix is symmetric, positive definite, and thus invertible. Hence $\ddot{q} = \mathcal{M}^{-1}F$. Before doing the

838

Appendix B

calculations, the following simplification is done: the second row is divided by $ml$ to obtain $\bar{\mathcal{M}}\ddot{q} = \bar{F}$, in which
$$\bar{\mathcal{M}} = \begin{bmatrix} M+m & ml\cos\theta \\ \cos\theta & l \end{bmatrix},\quad \bar{F} = \begin{bmatrix} f + ml\dot{\theta}^2\sin\theta \\ g\sin\theta \end{bmatrix}.$$
Thus,
$$\begin{bmatrix} \ddot{z} \\ \ddot{\theta} \end{bmatrix} = \frac{1}{(M+m)l - ml\cos^2\theta}\begin{bmatrix} l & -ml\cos\theta \\ -\cos\theta & M+m \end{bmatrix}\begin{bmatrix} f + ml\dot{\theta}^2\sin\theta \\ g\sin\theta \end{bmatrix} = \frac{1}{l(M + m\sin^2\theta)}\begin{bmatrix} l(f + ml\dot{\theta}^2\sin\theta) - mgl\sin\theta\cos\theta \\ -\cos\theta(f + ml\dot{\theta}^2\sin\theta) + g(M+m)\sin\theta \end{bmatrix}.$$

Then, defining $x = [x_1\ x_2\ x_3\ x_4]^T = [z\ \dot{z}\ \theta\ \dot{\theta}]^T$ as the state vector, the dynamics of this system can be modeled by
$$\dot{x} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} = \begin{bmatrix} x_2(t) \\ f_2(x,f,t) \\ x_4(t) \\ f_4(x,f,t) \end{bmatrix},\quad y = [0\ 0\ 1\ 0]\,x,$$

in which,

f2 = (1/Δ) [l(f + ml x4² sin x3) − mgl sin x3 cos x3],
f4 = (1/Δ) [−cos x3 (f + ml x4² sin x3) + g(M + m) sin x3],
Δ = l(M + m sin² x3).

The linearized model around the equilibrium point x̄ = [0 0 0 0]ᵀ is given by Δẋ = AΔx + BΔu, and thus ẋ = Ax + Bu since x̄ = 0, ū = 0, where u = f and,

A = [0, 1, 0, 0;
     0, 0, −mg/M, 0;
     0, 0, 0, 1;
     0, 0, g(M + m)/(lM), 0],   B = [0; 1/M; 0; −1/(lM)].

As for the output, y = Cx + Du. If we define y = θ as the output, then C = [0 0 1 0], D = 0. If we define y = z as the output then


C = [1 0 0 0], D = 0. The transfer functions are computed from T(s) = C(sI − A)⁻¹B + D and are given by

Θ(s)/F(s) = −1/(Ml(s² − p²))  and  Z(s)/F(s) = (s² − z²)/(Ms²(s² − p²)),

where p² = g(m + M)/(Ml) and z² = g/l. We leave it to the reader to propose the electrical equivalent of this system.

Question B.2: In the text we have introduced the concept of functional controllability and said that the number of outputs should not exceed the number of inputs. This system has one input. Apparently we have defined two outputs for it. Why? Can we control both together?

Remark B.1: The inverted pendulum system is a simplified model of stabilizing a space shuttle leaving the earth on a ray perpendicular to the earth's surface.

Remark B.2: A counterpart for this system is stabilization of a rod on the palm by horizontal movement of the palm. The definition of y = z means that we stabilize the rod by looking at the hand. What if we look at the top end of the rod? Define y = z + l sin θ and find the transfer function Y(s)/F(s).

Problem B.3: A double mass-spring-damper configuration is given in Fig. B.7; it is a dissipative system. We use Lagrange's method.

KE = (1/2) m1 ż1² + (1/2) m2 ż2²
PE = (1/2) k1 (z1 − z2)² + (1/2) k2 z3²
DE = (1/2) b (ż2 − ż3)²

Hence, we have d/dt (∂L/∂q̇i) − ∂L/∂qi + ∂DE/∂q̇i = Qi with q1 = z1, q2 = z2, q3 = z3, Q1 = f. We leave it to the reader to complete the solution.
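Returning to Problem B.2, the nonzero entries of the linearized matrices A and B are partial derivatives of f2 and f4 at the equilibrium, so they can be cross-checked by finite differences. Below is a minimal Python sketch; the parameter values M = 2, m = 0.5, l = 1, g = 9.81 are illustrative assumptions, not taken from the book.

```python
import math

# Hypothetical sample parameters: cart mass M, pendulum mass m,
# pendulum length l, gravity g.
M, m, l, g = 2.0, 0.5, 1.0, 9.81

def f2(x3, x4, f):
    delta = l * (M + m * math.sin(x3) ** 2)
    return (l * (f + m * l * x4 ** 2 * math.sin(x3))
            - m * g * l * math.sin(x3) * math.cos(x3)) / delta

def f4(x3, x4, f):
    delta = l * (M + m * math.sin(x3) ** 2)
    return (-math.cos(x3) * (f + m * l * x4 ** 2 * math.sin(x3))
            + g * (M + m) * math.sin(x3)) / delta

eps = 1e-6
# Jacobian entries at the equilibrium x = 0, f = 0 by central differences.
a23 = (f2(eps, 0, 0) - f2(-eps, 0, 0)) / (2 * eps)   # should equal -m*g/M
a43 = (f4(eps, 0, 0) - f4(-eps, 0, 0)) / (2 * eps)   # should equal g*(M+m)/(l*M)
b2  = (f2(0, 0, eps) - f2(0, 0, -eps)) / (2 * eps)   # should equal 1/M
b4  = (f4(0, 0, eps) - f4(0, 0, -eps)) / (2 * eps)   # should equal -1/(l*M)

print(a23, -m * g / M)             # both close to -2.4525
print(a43, g * (M + m) / (l * M))  # both close to 12.2625
```

The agreement of the finite-difference values with the closed-form entries of A and B confirms the linearization.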

Problem B.4: The blending system is a typical chemical system. Find a model for the blending tank of Fig. B.8. In this system component i (i = 1, 2) has the composition zi and flows into the tank at the mass flow rate qi. The composition z flows out of the tank at the mass flow rate q.

Figure B.7 Problem B.3, A dissipative mass-spring-dashpot system.


Figure B.8 Problem B.4, A blending tank.

The mass conservation law results in d(ρV)/dt = q1 + q2 − q, where V is the volume of the tank and ρ is the density. The component conservation law yields d(ρVz)/dt = q1z1 + q2z2 − qz. Assuming constant ρ, the above equations result in ρ dV/dt = q1 + q2 − q and ρV dz/dt + ρz dV/dt = q1z1 + q2z2 − qz. Hence,

dz/dt = (q1/(ρV))(z1 − z) + (q2/(ρV))(z2 − z).

Defining x = [x1 x2]ᵀ as the state of the system where x1 = V, x2 = z, the state equations can easily be written. The input to the system is u = [q1 z1 q2 z2 q]ᵀ and the output of the system is y = z. Also note that at steady state there holds q̄1 + q̄2 − q̄ = 0 and q̄1z̄1 + q̄2z̄2 − q̄z̄ = 0.
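The steady-state relations of the blending tank can be checked with a simple Euler integration of the two balance equations. The following Python sketch uses hypothetical numbers (density, flows, and compositions are illustrative, not from the book); with q = q1 + q2 the volume stays constant and z converges to (q1 z1 + q2 z2)/q.

```python
# Minimal Euler simulation of the blending tank balances.
rho = 1000.0          # density (kg/m^3), illustrative
V, z = 2.0, 0.0       # initial volume (m^3) and outlet composition
q1, z1 = 3.0, 0.8     # inlet stream 1: mass flow (kg/s), composition
q2, z2 = 1.0, 0.4     # inlet stream 2
q = q1 + q2           # outlet flow chosen so the volume stays constant

dt = 1.0
for _ in range(5000):
    dV = (q1 + q2 - q) / rho                          # identically zero here
    dz = (q1 * (z1 - z) + q2 * (z2 - z)) / (rho * V)
    V += dt * dV
    z += dt * dz

print(round(z, 3))   # approaches the steady state (q1*z1 + q2*z2)/q = 0.7
```

The slow convergence (time constant ρV/(q1 + q2)) illustrates why the volume x1 = V and composition x2 = z are natural states of the tank.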

Question B.3: Which of the input components can be assumed constant?

We close this appendix by recommending (Hemami, 1982, 2002; Hemami and Weimer, 1973; Hemami and Wyman, 2005; Hemami and Utkin, 2015; Olfati-Saber, 2002) to readers from mechanical engineering; these are some fundamental results. It is worth mentioning that the latter work is in the spirit of 'structural methods', as we mentioned in Section 1.16.1.2, and is essential for researchers in the respective field.

References

Busch-Vishniac, I.J., 1999. Electromechanical Sensors and Actuators. Springer Science & Business Media, NY.
Hand, L.N., Finch, J.D., 2008. Analytical Mechanics. Cambridge University Press, Cambridge.
Hemami, H., 1982. Some aspects of Euler-Newton equations of motion. Arch. Appl. Mech. 52 (3), 167-176.
Hemami, H., 2002. A general framework for rigid body dynamics, stability, and control. J. Dyn. Syst. Meas. Control. 124 (2), 241-251.
Hemami, H., Weimer, F.C., Koozekanani, S.H., 1973. Some aspects of the inverted pendulum problem for modeling of locomotion systems. IEEE Trans. Autom. Control. 18 (6), 58-61.
Hemami, H., Wyman, B.F., 2005. Rigid body dynamics, constraints, and inverses. J. Appl. Mech. 74 (1), 47-56.
Hemami, H., Utkin, V.I., 2015. Constrained rigid body stability and control. Int. J. Robust Nonlinear Control. 25 (11), 1601-1622.
Johnston, E.R., Mazurek, D., Self, B., Cornwell, P., Beer, F., 2015. Vector Mechanics for Engineers: Statics and Dynamics. McGraw-Hill, NY.
Li, S., Xin, F., Li, L., 2017. Reaction Engineering. Butterworth-Heinemann (in press).
Nilsson, J.W., Riedel, S., 2014. Electric Circuits. 10th ed. International Edition, Prentice Hall, Lebanon (Indiana, US).
Olfati-Saber, R., 2002. Normal forms for underactuated mechanical systems with symmetry. IEEE Trans. Autom. Control. 47 (2), 305-308.
Towler, G., Sinnott, R.K., 2013. Chemical Engineering Design: Principles, Practice and Economics of Plant and Process Design. 2nd ed. Butterworth-Heinemann.

Appendix C: Introduction to MATLAB®

C.1 Introduction

MATLAB®, or better, MatLab, stands for Matrix Laboratory¹ and is a programming environment. It was first developed in the late 1970s by Professor Cleve Moler at the University of New Mexico in the US. He designed it to ease students' access to LINPACK² and EISPACK³ without needing to learn FORTRAN. It was soon welcomed by other universities. When Moler made a visit to Stanford University in 1983, Jack Little, an engineer, was exposed to it and realized its further potential. He joined Moler and Steve Bangert, and together they rewrote MATLAB® in C and founded MathWorks in 1984. Over the decades the software has undergone considerable development and is now pervasively used in many fields of Electrical Engineering and in part in Mathematics and Statistics. It has many advantages over classical programming languages like FORTRAN, Pascal, C, etc.⁴ Let us simply begin. In these programming languages, when we want to use a variable we should first define it, e.g., as a matrix if it is going to be a matrix. Then we can use it. However, in MatLab definition and use of variables are done simultaneously. Moreover, suppose we want to perform an operation on a variable, for instance finding the minimum element of a vector. In programming languages we should write a program for it: the first two elements are compared and the smaller is chosen. Then this value is compared with the third element and again the smaller is chosen, and so forth. However, in MatLab for almost all classical operations there are specifically written commands which perform those operations. Some of these commands are minimum, maximum, sinusoid, absolute value, matrix multiplication and inversion, solving differential equations, etc. Moreover, it has many specialized toolboxes, like the Control System Toolbox, in which we are most interested for this undergraduate course. Other toolboxes include: Optimization, Robust Control Toolbox, Signal Processing, Fuzzy Systems,

¹ Not to be mistaken for MATHLAB, which stands for Mathematical Laboratory.
² A software package for performing numerical linear algebra.
³ A software package for numerical computation of eigenvalues and eigenvectors of a matrix.
⁴ Another powerful software package is SCILAB, or rather SciLab, which stands for Scientific Laboratory. It is being developed by Scilab Enterprises with the support of INRIA (French National Institute for Research in Computer Science and Control), France.


etc., with which you will work in graduate courses. These toolboxes comprise commands designed for specialized tasks. For example, the Control System Toolbox offers commands for computing the closed-loop system, computing the step response, drawing the root locus, drawing the Nyquist plot, finding a balanced realization, etc. MatLab also provides a simulation environment called Simulink, standing for Simulation Link. It provides a powerful environment with many facilities for simulating a system. We can connect different components of a system (input source, feedback elements, controller, plant, relays, etc.), specify their parameters, simulate the system, and find the output (or any signal in the system) over the designated time. After learning MATLAB® as we present in Section C.2 of this brief, you can easily learn Simulink, which is presented in Section C.3.

C.2 MATLAB®

In the following we first list the most widely used characters, operators, and commands. Then we present simple examples. Next we go to writing .m files: programs and functions. By simple examples we show you how to accomplish these tasks. This is followed by the list of commands of the Control System Toolbox. Some general MatLab characters, operators, and commands/functions are listed below.

+    ./    size    rand    plot(x)      for      stem      chirp     find
-    .\    length  randn   plot(x,y)    end      inv       rectpuls  num2str
*    .'    min     sin     plot(x,y,s)  while    disp      sinc      rat
^    >=    max     exp     xlabel       break    what      diric     sprintf
/    <=    sort    tan     ylabel       switch   which     gauspuls  clf
\    ~=    diag    abs     zoom         if       who       pulstran  input
'    ==    zeros   pi      hold         else     lookfor   tripuls   format
>    ||    ones    i       save         elseif   square    speye     help
.^   &     eye     angle   load         clear    sawtooth  magic     ...

To learn how the above should be used you can simply put the command help before them. For instance, typing help inv shows you how to use inv, which does matrix inversion. Now you should run MatLab by clicking on its icon on your desktop and start the following.

Example C.1: This is a multipart example by which we can get started. Typing the command A = [1 -2 3;-4 5 6;7 8 -9] in the command line (starting with >>) results in:

A =
     1    -2     3
    -4     5     6
     7     8    -9

which means A is defined as the aforementioned matrix and can be used thereafter. If we continue and type the command A(2,1) in the command line the result is:

ans =
    -4

which is the element in the second row and first column of A. If we type the command B = [2 -1 3]' in the command line it results in:

B =
     2
    -1
     3

Note that "'" is actually the conjugate transpose. The raw transpose is ".'". Now if we type the command A*B in the command line we get the product AB:

ans =
    13
     5
   -21

By the command size(ans) in the command line we have the size of the answer, i.e., the variable ans, as:

ans =
     3     1

It means 3 rows and 1 column. Also note that ans is the last answer. Now it is [3 1]. Before this command it was [13 5 -21]'. Typing the command [m,n] = min(ans) in the command line results in:

m =
     1
n =
     2

The first argument m = 1 is the value of the minimum. The second argument n = 2 means the minimum is at the second element, i.e., ans(2) is the minimum. Typing the command t = 0:0.5:4 in the command line outputs the following:

t =
         0    0.5000    1.0000    1.5000    2.0000    2.5000    3.0000    3.5000    4.0000

The command A^2 computes the second power of A:

ans =
    30    12   -36
    18    81   -36
   -88   -46   150

The command A.^2 computes the point-wise second power of A:

ans =
     1     4     9
    16    25    36
    49    64    81

The command save test1.mat saves all the variables of the current workspace in the binary file test1.mat in the current directory. Even after restarting the computer, these variables can be brought back into memory and reused by the command load test1.mat. Typing the command [a,b] = rat(0.9846) outputs a rational fraction approximation of 0.9846; a/b is approximately 0.9846.

a =
   959
b =
   974

The commands t = 0:0.01:5; u = rem(t,1) >= 0.5; plot(t,u) make and plot the square signal u on the given time duration. Typing the command disp('This is a MatLab brief.') displays the string This is a MatLab brief. Typing the command n = input('n = ') stops any ongoing process (like running a script file) and asks for the variable n (scalar, vector, matrix) to be specified/inputted by the user. It then resumes.

C.2.1 How to write an M-file

There are two types of M-files: scripts (or programs) and functions. Below we give a simple introduction to them.

C.2.1.1 Script file A script file contains and performs (usually long) sequences of commands to produce some result. In this sense, it is like a program in programming languages. A script file works globally on the workspace and its variables remain in the memory.


Example C.2: Open a "New Script" file. The way to do so depends on the version of MatLab you use, but in all versions it is done through the options in the top toolbar. Then write your program in the untitled opened window. For example, the following is a program in which the operator == means logical equality:

a = 1; b = 2;
if a == b
    c = 3
else
    c = 4
end

Then, click on Save As <file name>.m. For example, name it "ExampleCpoint2"; the suffix "m" will automatically be added to it. Now, if your file is in the current directory (which it will automatically be), you can run it. Before this let's try the command dir. The result is:

.    ..    ExampleCpoint2.m

which means that the m-file ExampleCpoint2 is in the current directory. Now if you type run ExampleCpoint2 or simply ExampleCpoint2, you get the answer:

c =
     4

C.2.1.2 Function file

A function file is like a built-in MatLab function. It starts with a statement containing the syntax definition for the function. Its variables are local to the function and do not remain in the memory.

Example C.3: In this example we define and add on disk a function called stat.m. Previously you opened a "New Script" file. This time open a "New Function" file. Then write the subsequent program in it.

function [mean,stdev] = stat(x)
n = length(x);
mean = sum(x)/n;
stdev = sqrt(sum((x - mean).^2)/n);
display('First argument is the mean value.')
display('Second argument is the standard deviation.')
end


This function defines a new function called stat which computes the mean and standard deviation of a vector. The variables within the body of the function are all local variables. Then, click on Save As <file name>.m and name it "stat" or another name. Because it is informative to name it "stat", we do so although it is not consistent with the naming in other examples. (We do not name it "ExampleCpoint3".) The suffix "m" will automatically be added to it. Now, if your file is in the current directory (which it will automatically be), then, for instance, if you type t = [1 .2 0 -.3 1 5 0]; [a,b] = stat(t) the answer will be:

First argument is the mean value.
Second argument is the standard deviation.
a =
    0.9857
b =
    1.7041

Note that t will be the input to the function stat, and a (= mean(t)) and b (= stdev(t)) will be the outputs of the function. A function file is, and indeed is used like, a built-in MatLab function. To illustrate this note that, for instance, we used the function stat.m in the same manner that we use a built-in function, say min.m: after defining the input vector t, we simply write [a,b] = min(t) or [a,b] = stat(t).
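For comparison, a Python counterpart of stat.m is sketched below; it returns the mean and the population standard deviation (division by n, as in the MATLAB function above).

```python
import math

def stat(x):
    """Mean and population standard deviation, mirroring stat.m."""
    n = len(x)
    mean = sum(x) / n
    stdev = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return mean, stdev

a, b = stat([1, .2, 0, -.3, 1, 5, 0])
print(round(a, 4), round(b, 4))   # 0.9857 1.7041
```

The printed values agree with the MATLAB output above.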

Example C.4: Suppose we want to do some optimization. Then we can start by typing lookfor optimization (or constraint, minimum, etc., keywords in the same spirit). Dozens of commands are listed relating to this keyword. Included are:

optimparfor    - Minimizing an Expensive Optimization Problem Using Parallel Computing Toolbox (TM)
ga             - Constrained optimization using genetic algorithm
simulannealbnd - Bound constrained optimization using simulated annealing
fseminf        - Solves semi-infinite constrained optimization problems

If you type lookfor constraint dozens of commands are listed related to this keyword. Included are:

fmincon         - Finds a constrained minimum of a function of several variables
fminunc         - Finds a local minimum of a function of several variables
fnmin           - Minimum of a function (in a given interval)
pidtune_demopad - PID Tuning with Actuator Constraints

Alternatively, we may use the Function Browser option on the toolbar. Again, dozens of commands are listed. Of course we may also use the Optimization Toolbox of MatLab, where more details and examples are provided. Now, let us find the minimum-norm solution ||x|| of Ax = b where A = [1 2] and b = 1. We type fmincon(@(x) x(1)^2 + x(2)^2,[1 0]',[],[],[1 2],1) and get the answer:

ans =
    0.2000
    0.4000

If we want the solution that minimizes ||Ax-b|| where A = [1 2;3 4;5 6] and b = [7 8 9]', then we simply type fmincon(@(x) norm(A*x-b),[.1 0]',[],[],A,b) which outputs:

ans =
   -6.0000
    6.5000

Finally, suppose we are interested in the nonnegative solution that minimizes ||Ax-b|| where A and b are as above. In this case we type lsqnonneg([1 2;3 4;5 6],[7 8 9]') which results in:

ans =
         0
    1.7857
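The first two answers above can be checked by hand: the minimum-norm solution of Ax = b is x = A'(AA')⁻¹b, and the least-squares minimizer of ||Ax − b|| solves the normal equations A'Ax = A'b (here a 2-by-2 system solved by Cramer's rule). A small Python sketch:

```python
# Minimum-norm solution of [1 2] x = 1: x = A'(AA')^{-1} b.
aa = 1 * 1 + 2 * 2                 # A A' = 5 (a scalar here)
x = [1 / aa, 2 / aa]
print(x)                            # [0.2, 0.4]

# Least squares for A = [[1,2],[3,4],[5,6]], b = [7,8,9]:
# solve the normal equations A'A x = A'b.
A = [[1, 2], [3, 4], [5, 6]]
b = [7, 8, 9]
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x1 = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det   # Cramer's rule
x2 = (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det
print(x1, x2)                       # -6.0 6.5
```

Both hand computations agree with the MATLAB outputs above.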

In the following we reproduce the organization of the "MATLAB® Functions by Category" listing of the Control System Toolbox, but cite only some of the commands.

C.2.2 MATLAB® functions by category—control system toolbox

For the sake of brevity we cite only a few of the existing commands.

C.2.2.1 LTI models

Function name   Description
drss            Generate random discrete state-space model
dss             Create descriptor state-space model
filt            Specify discrete transfer functions in DSP format
rss             Generate random continuous state-space model


C.2.2.2 Model characteristics

Function name   Description
class           Display model type ('tf', 'zpk', 'ss', or 'frd')
hasdelay        Test true if LTI model has any type of delay
isempty         Test true for empty LTI models
issiso          Test true for SISO models

C.2.2.3 Model conversions

Function name   Description
d2c             Convert from discrete- to continuous-time models
delay2z         Convert delays in discrete-time models or FRD models
pade            Compute the Padé approximation of delays
ss              Convert to a state-space model

C.2.2.4 Model order reduction

Function name   Description
balreal         Calculate an I/O balanced realization
minreal         Calculate minimal realization or eliminate pole/zero pairs
modred          Delete states in I/O balanced realizations
sminreal        Calculate structured model reduction

C.2.2.5 State-space realizations

Function name   Description
canon           Canonical state-space realization
ctrb            Controllability matrix
gram            Controllability and observability gramians
obsv            Observability matrix

C.2.2.6 Model dynamics

Function name   Description
norm            Calculate norms of LTI models (H_2 and L_∞)
pole, eig       Calculate the poles of an LTI model
rlocus          Calculate and plot root locus
roots           Calculate roots of polynomials


C.2.2.7 Model interconnections

Function name   Description
connect         Connect the subsystems of a block-diagonal model according to an interconnection scheme of your choice
feedback        Calculate the feedback connection of models
parallel        Create a generalized parallel connection
series          Create a generalized series connection

C.2.2.8 Time responses

Function name   Description
gensig          Generate an input signal
impulse         Calculate and plot impulse response
initial         Calculate and plot initial condition response
lsim            Simulate response of LTI model to arbitrary inputs
ltiview         Open the LTI Viewer for linear response analysis

C.2.2.9 Time delays

Function name   Description
delay2z         Convert delays in discrete-time models or FRD models
pade            Compute the Padé approximation of delays
totaldelay      Provide the aggregate delay for an LTI model

C.2.2.10 Frequency response

Function name   Description
bode            Calculate and plot Bode response
linspace        Create a vector of evenly spaced frequencies
nichols         Calculate Nichols plot
nyquist         Calculate Nyquist plots

C.2.2.11 Pole placement

Function name   Description
acker           Calculate SISO pole placement design
place           Calculate MIMO pole placement design
estim           Form state estimator given estimator gain
reg             Form output-feedback compensator given state-feedback and estimator gains


C.2.2.12 LQG design

Function name   Description
lqr             Calculate the LQ-optimal gain for continuous models
lqry            Calculate the LQ-optimal gain with output weighting
lqrd            Calculate the discrete LQ gain for continuous models
kalman          Calculate the Kalman estimator

C.2.2.13 Equation solvers

Function name   Description
care            Solve continuous-time algebraic Riccati equations
dare            Solve discrete-time algebraic Riccati equations
lyap            Solve continuous-time Lyapunov equations
dlyap           Solve discrete-time Lyapunov equations

C.2.2.14 Graphical user interfaces for control system analysis and design

Function name   Description
ltiview         Open the LTI Viewer for linear response analysis
sisotool        Open the SISO Design GUI

More examples are provided at the end of the Appendix.

C.3 Simulink

Simulink is an environment which provides the different building blocks of an actual control system. This includes "sources" for making the input signals, "feedback elements" for addition/subtraction of different signals, "system blocks" for providing the amplifier, actuator, controller, plant, sensor, etc., dynamics either in the state-space or transfer-function frameworks, in both discrete- and continuous-time domains, and a lot more. We can do virtually anything in this environment. For instance, there are blocks for applying a threshold on a signal (switch), a shift on a signal, a transport delay, multiplying some signals, multiplexing, demultiplexing, rounding a signal (i.e., the floor function), limiting a signal's magnitude or rate (relay, saturation, rate limiter), dead zone, backlash, friction, computing the reciprocal of a signal, finding the square root of a signal, computing the derivative/integral of a signal, displaying a signal (by oscilloscope, briefly called scope), etc. And when there is no built-in element for the specific functionality which we have in mind, we can simply realize it by writing a function file or MATLAB® function for it and calling it in the SIMULINK environment. The functionality of SIMULINK is so versatile that by no means can it be summarized in this appendix. On the other hand, exploring


and exploiting its full functionality is outside the scope of this undergraduate course. Consequently we confine ourselves to some simple representative examples. Before presenting the examples we should mention that to write a SIMULINK file we open a "simulink file," bring the required blocks into it, and connect them as we wish. Then we specify the simulation time or duration and save it; it is automatically saved with a ".slx" suffix. Then we simply run it! In the first example below we introduce some of the most commonly used blocks. Two more examples are provided in Section C.4, Worked-Out Problems.

Example C.5: In this example we introduce some of the commonly used blocks in SIMULINK. They are given in Fig. C.1.

Figure C.1 Some commonly used blocks in the SIMULINK environment.

In the first column from the left we see the multiplexer, demultiplexer, vector concatenator, and matrix concatenator. The number of inputs or outputs is specified by the user in the specific application and is not necessarily "two" as shown in the icons. In the second column from the left there are some signal generators, namely a chirp signal, a band-limited white noise, a source (containing sinusoid, square, sawtooth, and random with adjustable frequency), and a repeating sequence stair. In the third column from the left we present a Coulomb & viscous friction, a backlash, a rate limiter, and a relay, all with adjustable parameters. In the fourth column from the left we see a transfer function in the pole-zero pattern, a transfer function in the polynomial pattern, a state-space model, and a PID controller, whose parameters are all specified by the user. In the fourth column from the right there are a transport delay, a variable time delay, a second-order integrator, and a derivative. In the third column from the right there are a summation or subtraction element, a minimum finder, a divider, and a nonzero element finder. In the second column from the right we see some math operators, namely an exponential function, a complex to real-imaginary separator, a magnitude-angle to complex merger, and a general MATLAB® function block. Finally, in the right column we show some sink blocks, all used for the purpose of displaying a signal in a specific way. They are a workspace, an XY graph, a file, and a scope.


C.4 Worked-out problems

The use of MATLAB® and SIMULINK is further demonstrated on some basic examples below. In Problem C.1 we introduce various commands. Over 15 different cases are exemplified.

Problem C.1: Let a system be described by the open-loop gain L = 3(s² + 2s + 2)/((s + 0.1)(s + 0.2)²).

Then the following commands perform the explained tasks:

rlocus(3*[1 2 2],conv([1 0.1],conv([1 0.2],[1 0.2]))) draws the root locus.
margin(3*[1 2 2],conv([1 0.1],conv([1 0.2],[1 0.2]))) plots the Bode diagram, showing the GM, PM, and crossover frequencies.
[A,B,C,D] = tf2ss(3*[1 2 2],poly([-0.1 -0.2 -0.2])) finds a state-space representation.
L = 3*tf([1 2 2],conv([1 0.1],conv([1 0.2],[1 0.2]))) defines L as this system in transfer function form.
bandwidth(L) computes the bandwidth of the system.
sys = feedback(L,1) computes the closed-loop system in negative unity feedback.
sys = L/(1 + L) finds the closed-loop system in negative unity feedback.
S = 1/(1 + L) finds the sensitivity function given by the same formula.
pzmap(A,B,C,D) plots the poles and zeros on the s-plane.
K = acker(A,B,[-1 -1 -6]) returns the state-feedback matrix for pole placement at [-1, -1, -6] according to Ackermann's formula.
obsv(A,C) returns the observability matrix.
nyquist(L) draws the Nyquist plot of L which represents this system.
nichols(L); grid draws the Nichols chart of L on grid.
[A,B,C,D] = ssdata(L) extracts state-space data of L.
zpk(L) decomposes L according to pole-zero-gain data.
stepinfo(L) computes the step-response parameters, namely the rise time, peak time, settling time, overshoot, etc.


balred(L,2) returns a balanced reduction of order 2.
zL = c2d(L,0.01) computes the discrete-time version of L with the sampling time of 0.01 seconds.
zpk(zL) decomposes zL according to pole-zero-gain data.

Suppose we want to know how the command "lsim" works. In the command line we type "help lsim" and get the explanation, which we do not reproduce here. The command 'lsim' is used to compute/simulate any signal in a control system, such as the prefilter output, error, plant output (to be added to output disturbance), sensor output, etc. Regardless of the situation in which we use the command 'lsim', for ease of reference we always denote its input and output by u and Y, respectively. The input time variable and the output (simulated/computed signal) time variable are denoted by t and T, respectively. The command is used like [Y,T] = lsim(sys,u,t); In this command sys denotes the respective transfer function whose input and output are denoted by u and Y. For notational simplicity inside the program we usually use notations like sys, sysnew, or sys1 instead of the precise respective notation like Hru which we discussed in Exercise 1.53. For instance, if we actually want to compute the output of Hru (in our notational convention in the book, u or the control signal), inside the program we use u (instead of r, which is the reference input) and Y (instead of u, which is the control signal). Problems C.2-C.6 are about the use of the command "lsim." The other problems are regarding various other topics, as we explain in their statements.

Problem C.2: To simulate and plot the response of a SISO system given by the transfer function P(s) = 150/((s + 5)(s + 10)) to the input u(t) = sin(8t) during 4 seconds, we can use the following program. See Fig. C.2.

close all;
n = 150; d = conv([1 5],[1 10]);
sys = tf(n,d);
t = 0:0.01:4;
u = sin(8*t);
[Y,T] = lsim(sys,u,t);
plot(T,Y,t,u,'--');
legend('Output','Input')
xlabel('Time (sec)')
ylabel('Input & Output')


Figure C.2 Problem C.2.

Problem C.3: To simulate and plot the response and control signal of a negative unity-feedback SISO system whose controller and plant are given by the transfer functions C(s) = 9.77(6.84s + 10.23)/(6.84s + 1) and P(s) = 25/((s + 5)(s + 10)) to the step input

r(t) = 1 for t ≥ 0, and 0 for t < 0,

we can use the following program. See Fig. C.3.

close all;
sysp = tf(25,conv([1 5],[1 10]));
sysc = 9.77*tf([6.84 10.23],[6.84 1]);
sysol = series(sysc,sysp);
syscl = feedback(sysol,1);
step(syscl);
xlabel('Time'); ylabel('Step Response');
figure(2);
sysnew = feedback(sysc,sysp);
step(sysnew);
title(' ');
xlabel('Time'); ylabel('Control Signal');


Figure C.3 Problem C.3. Left: Output, Right: Control signal.

Problem C.4: To simulate and plot the response and control signal of a negative unity-feedback SISO system whose controller and plant are given by the transfer functions C(s) = (0.3733s + 1)/(0.0268s + 1) × (2.12s + 3.33)/(2.12s + 1) and P(s) = 10/(s(s - 1)) to an alternating ramp input, we can use a program similar to that of Problem C.5. See Fig. C.4.

Problem C.5: To simulate and plot the response and control signal of a negative unity-feedback SISO system whose controller and plant are given by the transfer functions C(s) = 300(s + 0.5)²(s + 2)/(s³(s + 20)) and P(s) = 1/(s - 1) to the alternating parabolic input

u(t) = 0.5t²          for 0 ≤ t ≤ 2,
u(t) = 0.5(t - 4)²    for 2 ≤ t ≤ 4,
u(t) = -0.5(t - 4)²   for 4 ≤ t ≤ 6,
u(t) = -0.5(t - 8)²   for 6 ≤ t ≤ 8,

we can use the following program. See Fig. C.5.

close all;
sysp = tf(1,[1 -1]);
sysc = 300*tf(poly([-0.5 -0.5 -2]),poly([0 0 0 -20]));
sysol = series(sysc,sysp);
syscl = feedback(sysol,1);
u1 = [0:0.01:2]; u2 = [2:-0.01:0]; u3 = [0:-0.01:-2]; u4 = [-2:0.01:0];
u = [0.5*u1.^2 0.5*u2.^2 -0.5*u3.^2 -0.5*u4.^2];
t = 0:0.01:8.03;
[Y,T] = lsim(syscl,u,t);
plot(T,Y,t,u,'--');
legend('Output','Input')
xlabel('Time (sec)')
ylabel('Input & Output')
figure(2);
sysnew = feedback(sysc,sysp);
[Y,T] = lsim(sysnew,u,t);
plot(T,Y);
xlabel('Time (sec)');
ylabel('Control Signal');


Figure C.5 Problem C.5. Left: Input & Output, Right: Control signal.

Remark C.1: It would be very helpful if MATLAB®'s developers wrote built-in commands that perform the simulation for ramp and parabolic inputs as the command "step" does for step inputs.

Problem C.6: To find the range of stability versus gain of the system

L = K((s + 0.1)² + 1)((s + 0.1)² + 9)(s - 0.1) / (((s + 0.1)² + 4)((s + 0.1)² + 25)(s + 1))

we use the command "allmargin."

L = tf(conv([1 -.1],conv([1 .2 1.01],[1 .2 9.01])),conv([1 1],conv([1 .2 4.01],[1 .2 25.01])));
allmargin(L)

ans =
    GainMargin: [110.2077 34.9821 1.5769 4.5036 0.4175]
    GMFrequency: [0 1.1160 1.8052 3.3430 4.4528]

The interpretation is as follows. The "GMFrequency" refers to the frequencies of the jω-axis crossings in increasing order of frequency. The corresponding gain values are provided by "GainMargin." We have a look at the root locus of the system. It is given in Fig. C.6. Thus the interpretation of the answer is as follows. The system is stable if K ∈ {(0 110.2)} ∩ {(0 1.5) ∪ (35 ∞)} ∩ {(0 0.4) ∪ (4.6 ∞)}, which is K ∈ {(0 0.4) ∪ (35 110.2)}. Considering the root locus with negative gain as well, we conclude that the system is stable also for K ∈ (-1 0). To this end we use the command allmargin(-L) to find the gain of "1" (which is thus interpreted as -1) and the respective frequency. Thus the system is stable if K ∈ {(-1 0) ∪ (0 0.4) ∪ (35 110.2)}.

Problem C.7: To find the range of stability versus gain of the system

L = K((s - 0.1)² + 1)((s - 0.1)² + 9) / (((s - 0.2)² + 4)((s - 0.2)² + 25)(s - 1))

we use the command "allmargin."


Figure C.6 Root locus of Problem C.6.
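The stability intervals quoted in Problem C.6 can be cross-checked without MATLAB by a Routh-Hurwitz sign test on the closed-loop characteristic polynomial den(s) + K·num(s). The Python sketch below is minimal and assumes no first-column entry of the Routh array vanishes (true for the gains tested); it confirms stability for K = 0.1 (inside (0, 0.4)) and instability for K = 200 (above 110.2).

```python
def conv(a, b):
    """Polynomial multiplication (same role as MATLAB's conv)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def routh_stable(c):
    """True iff polynomial c (highest power first, c[0] > 0) is Hurwitz."""
    rows = [list(c[0::2]), list(c[1::2])]
    n = len(c) - 1
    for _ in range(n - 1):
        top, bot = rows[-2], rows[-1]
        if bot[0] == 0:
            return False          # degenerate row: not strictly stable
        width = max(len(top), len(bot))
        top = top + [0.0] * (width + 1 - len(top))
        bot = bot + [0.0] * (width + 1 - len(bot))
        rows.append([(bot[0] * top[i + 1] - top[0] * bot[i + 1]) / bot[0]
                     for i in range(width)])
    return all(r[0] > 0 for r in rows)   # no first-column sign change

num = conv([1, -0.1], conv([1, 0.2, 1.01], [1, 0.2, 9.01]))
den = conv([1, 1], conv([1, 0.2, 4.01], [1, 0.2, 25.01]))

for K in (0.1, 200):
    charpoly = [d + K * n for d, n in zip(den, num)]
    print(K, routh_stable(charpoly))   # expected: True for 0.1, False for 200
```

Both verdicts agree with the allmargin/root-locus interpretation: for small K the closed-loop poles stay near the stable open-loop poles, while for large K one pole approaches the right-half-plane zero at s = 0.1.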

L 5 tf(conv([1 2.2 1.01],[1 2.2 9.01]),conv([1 21],conv([1 2.4 4.04], [1 2.4 25.04]))); allmargin(L) ans 5 GainMargin: [11.1165 52.8785 2.7267 49.1588 0.5641] GMFrequency: [0 0.9213 2.0895 2.9569 5.0420]

To be able to interpret the answer and find the range of stability we should have a look at the root locus of the system. It is provided in Fig. C.7. T T Hence the system is stable if KAfð11:12 52:8Þ ð2:8 49:1Þ ð0:57 52:8g 5 ð2:8 49:1Þ. Here the system is unstable for any negative gain. Problem C.8: To find the GM (Gain Margin) and PM (Phase Margin) of a system we use the command “margin.” For instance the commands sys 5 tf(4,poly([ 2 1 21 21])) margin(sys);

provide the GM and PM of the system PðsÞ 5 4=ðs11Þ3 . They are written above the panel of Fig. C.8 with the respective frequencies: GM 5 6:02 db; ωpc 5 1:73 rad=seconds; PM 5 27:1 deg; ωgc 5 1:23 rad=seconds. Note that the GM and PM values are designated by two “solid” vertical lines in continuation of two dotted lines which show the respective frequencies. Finally we add that if we


Figure C.7 Root locus of Problem C.7.

Figure C.8 Margins of Problem C.8.


use the command [a,b,c,d] = margin(sys), then the output will be a = 2.0003, which is the GM in plain numbers, b = 27.1424, which is the PM in degrees, and c = 1.7322 and d = 1.2328, which are ωpc and ωgc, respectively.
Problem C.9: As we have mentioned in the text, MATLAB may compute the GM and PM wrongly. With this caveat in mind, if we want to use MATLAB to compute the GM and PM we can simply use the command “margin.” If we want all the margins of the system and their respective frequencies we use the command “allmargin.” For instance, if we use the following commands

P = tf(conv([1 -.2 1.01],[1 -.2 9.01]),conv([1 -1],conv([1 -.4 4.04],[1 -.4 25.04])));
margin(P)

then the picture of Fig. C.9 is produced. The GM, PM, and their respective frequencies are written above the figure panel. Because for this system the GM is computed at the frequency zero, which is outside the panel (i.e., beyond the left end of the horizontal axis), only one pair of solid and dotted lines appears, designating the PM. (Question: Using the analysis method of Chapter 6, Nyquist Plot, is the GM computed correctly in this example? How about the PM?)

Figure C.9 Margins of Problem C.9.


Figure C.10 Root locus of Problem C.10.

Problem C.10: To compute the open- and closed-loop bandwidth of the system P(s) = 40/(s+1)^3 we use the following commands. The output is offered in Fig. C.10.

close all;
sys = tf(40,poly([-1 -1 -1]));
bode(sys); grid;
title('Bode Diagram of the Open-Loop System');
display('Open-loop bandwidth is:');
bandwidth(sys)
syscl = feedback(sys,1);
figure(2); bode(syscl); grid;
title('Bode Diagram of the Closed-Loop System');
display('Closed-loop bandwidth is:');
bandwidth(syscl)

The program outputs the following data:

Open-loop bandwidth is:
ans = 0.5088
Closed-loop bandwidth is:
ans = 4.1101
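The reported open-loop value can be checked by hand. MATLAB's bandwidth command returns the first frequency at which the gain drops 3 dB below its DC value, so for the open loop the following sketch applies (the numbers below are our own check, not program output):

```latex
\[
|P(j\omega)| = \frac{40}{(1+\omega^2)^{3/2}}, \qquad |P(j0)| = 40 .
\]
The bandwidth $\omega_b$ solves $|P(j\omega_b)| = 40/\sqrt{2}$:
\[
(1+\omega_b^2)^{3/2} = \sqrt{2}
\;\Rightarrow\;
\omega_b = \sqrt{2^{1/3}-1} \approx 0.51 ,
\]
in agreement with the reported 0.5088. For the closed loop,
\[
T(s) = \frac{P(s)}{1+P(s)} = \frac{40}{(s+1)^3 + 40},
\]
whose bandwidth ($\approx 4.11$) is much larger; unity feedback trades the large DC gain for a wider band.
```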

The concepts of norm, condition number, and singular values are defined in Section 10.12. A basic example is presented below.


Problem C.11: Consider the vector and matrix x = [1 0 2 -3]^T and X = [1 0 -2; -1 -4 3]. Then the command “norm” is used in the following manner. norm(x,1) outputs 6. norm(x,2) outputs 3.7417. norm(x,inf) outputs 3. norm(x,'fro') outputs the Frobenius norm as 3.7417. norm(X,1) outputs 5. norm(X,2) outputs 5.3028. norm(X,inf) outputs 8. norm(X,'fro') outputs the Frobenius norm as 5.5678. cond(X) outputs 3.1244. svd(X) outputs the singular values of X as 5.3028 and 1.6972, and we note that cond(X) = σmax(X)/σmin(X) = 5.3028/1.6972 = 3.1244. As a hand check, norm(x,1) = 1 + 0 + 2 + |-3| = 6; norm(X,1) is the maximum absolute column sum, max{2, 4, 5} = 5; and norm(X,inf) is the maximum absolute row sum, max{3, 8} = 8.
Remark C.2: In metric geometry the “Hamming norm” of a vector is defined as the number of nonzero elements of the vector, which is the “Hamming distance” of the vector from zero.5 It should be noted that, precisely speaking, it is not a norm, but with slight abuse of terminology the term norm is used for it. It has special importance in statistics, signal processing, information theory, and communications. It would be useful if it were included as a command in future releases of MATLAB. Of course we have the block “Find” in SIMULINK. We should add that in the engineering literature the phrases ‘zero norm’ and ‘zero “norm”’ are also used for it with some abuse of terminology, suggesting the L0 norm (in parallel with the well-established Lp norms). On the other hand, the term zero norm is also used in the F-space of sequences and the space of measurable functions, but even there there is some slight abuse of terminology since it does not satisfy the absolute homogeneity (i.e., absolute scalability) property; the Hamming norm is not an F-norm.
Problem C.12: Consider the system dx/dt = Ax and the Lyapunov equation A^T P + PA + Q = 0, where A and Q are given below. Solve the equation for the symmetric positive definite solution P both by direct computation and by MATLAB.

A = [-3 0 4; 1 -2 0; 0 2 -3] and Q = [2 0 0; 0 2 0; 0 0 2].

Defining the symmetric matrix P = [a b c; b d e; c e f], by direct substitution in the given equation we arrive at

5 Named after R. W. Hamming, American mathematician (1915–98), whose works had many implications for computer science and telecommunications. In recognition of his contributions, the IEEE R. W. Hamming Medal was established in 1986 and is bestowed on a yearly basis for outstanding achievements in information sciences, systems, and technology.


-6a + 2b + 2 = 0
-5b + 2c + d = 0
-6c + 4a + e = 0
-4d + 4e + 2 = 0
-5e + 4b + 2f = 0
8c - 6f + 2 = 0.

This is rewritten as

[-6  2  0  0  0  0] [a]   [-2]
[ 0 -5  2  1  0  0] [b]   [ 0]
[ 4  0 -6  0  1  0] [c] = [ 0]
[ 0  0  0 -4  4  0] [d]   [-2]
[ 0  4  0  0 -5  2] [e]   [ 0]
[ 0  0  8  0  0 -6] [f]   [-2]

It has the form My = z with the obvious definition of the terms and has the solution y = M^(-1) z; see also Remark C.3. Thus the answer is [a b c d e f]^T = [0.4696 0.4089 0.4278 1.1886 0.6886 0.9038]^T, from which the matrix P can be formed. On the other hand, if we want to use MATLAB we note that the command “lyap(A,Q)” solves the Lyapunov equation AP + PA^T + Q = 0. Thus we apply it to A^T as lyap(A',Q) and get the same answer for P.
Remark C.3: In passing, recall from linear algebra that MATLAB does not find the solution of My = z by inversion as y = M^(-1) z, but by Gaussian elimination. However, we said this in order to highlight the nature of the problem as a simple invertible linear system.
Problem C.13: Suppose we want to solve the nonlinear equations

dx1/dt = 3.5 sign(x2) u1
dx2/dt = -floor(x1) - 10 x1^2 x2 + 10 x2 u2^2,

x(0) = [1 -1]^T, u(t) = [2 1]^T. We use the command “ode45” as follows. First we create a function PrC13 as follows; see also Example C.3.

function dx = PrC13(t,x)
dx = zeros(2,1);
dx(1) = 7*sign(x(2));
dx(2) = -floor(x(1)) - 10*x(1)^2*x(2) + 10*x(2);

Then we use the commands

[T,X] = ode45(@PrC13,[0 5],[1 -1]);
plot(T,X(:,1),'-',T,X(:,2),'--');
legend('x1','x2')
xlabel('Time (sec)');


ylabel('x1 & x2');

Figure C.11 Simulation result of Problem C.13.

and get Fig. C.11. Note that [0 5] is the simulation duration and [1 -1] is the initial condition.
Problem C.14: As a more complicated example consider the nonlinear equation dx/dt + v(t)x^2(t) - u(t)sin(sqrt(|x|)) - x = 0. The non-constant coefficient v(t) and the non-constant input u(t) are generated as specified below. The initial condition is x(2) = 1. It is desired to find x(t) over t ∈ [2, 8].

vt = linspace(2,10,20);
v = 3*cos(vt + 0.15) + vt;
ut = linspace(1,11,20);
u = ut.^3 - sqrt(ut) - 4;

Note that the command “linspace(X,Y,N)” generates N linearly equally spaced points between X and Y. We solve the system numerically by simulating it over the interval t ∈ [2, 8]. To this end we first write the function PrC14 as follows.

function dx = PrC14(t,x,ut,u,vt,v)
u = interp1(ut,u,t);
v = interp1(vt,v,t);
dx = -v*x^2 + u*sin(sqrt(abs(x))) + x;

The function “Yi = interp1(X,Y,Xi)” interpolates to find Yi, the values of the underlying function Y = f(X) at the query points Xi. Now we use the commands


TSpan = [2 8]; IC = 1;
[T,X] = ode45(@(t,x) PrC14(t,x,ut,u,vt,v),TSpan,IC);
plot(T,X);
xlabel('Time (sec)');
ylabel('x(t)');

and get Fig. C.12. We have used IC and TSpan to denote the initial condition and the simulation duration, respectively. Also note that the initial time in TSpan (i.e., 2) must be at least as large as the maximum of the first elements of vt (i.e., 2) and ut (i.e., 1). On the other hand, the final time of TSpan (i.e., 8) can be at most as large as the minimum of the last elements of vt (i.e., 10) and ut (i.e., 11). In both cases the reason simply is that outside this domain the equation is not defined.
Problem C.15: In this example we simulate a typical plant given by 1/[s(s + 1)] in a negative unity feedback with saturation and time delay. The upper limit of the saturation is 0.8 and its lower limit is -0.5. The delay time in the forward path is 0.1 second. The SIMULINK file is provided in Fig. C.13. The reference input, plant input, and plant output signals are depicted in Fig. C.14. The explanation is as follows. “Scope” is for viewing a signal. Thus, if after running the program we click on a “scope,” we see the corresponding signal. “Workspace” is for saving a signal as a vector/variable so that we can depict it in

Figure C.12 Problem C.14.


Figure C.13 Problem C.15, Typical plant in a negative unity feedback with saturation and delay.

Figure C.14 Different signals in the system of Problem C.15.

MATLAB using the command “plot.” This is what we have done, along with the command “hold on,” to create the plots in one figure. Note that the name of the variable of a Workspace block is what appears on its icon. Therefore we have, e.g., used the command “plot(Output_Signal).”


Figure C.15 System of Problem C.16.

Figure C.16 Reference input and output signals of Problem C.16.

Problem C.16: Consider the system

dx1/dt = x1 + 2x2 + u
dx2/dt = 3x1 + 4x2 + 2u,

y = sat(5x1(t - 0.3) - sqrt(x1 x2)),

where

sat(x) = 1 for x > 1, sat(x) = x for -1 < x < 1, and sat(x) = -1 for x < -1,

in a negative unity feedback structure driven by a sinusoid r, u = r - y. It is desired to simulate the system and find its output. The SIMULINK file we design for this system is shown in Fig. C.15. In the block State-Space the parameters are as follows:

A = [1 2; 3 4], B = [1; 2], C = [1 0; 0 1], D = [0; 0].

In this way the state of the system appears at the output of this block, i.e., y = x = [x1 x2]^T is the output of this block. Now, we first pass this signal through a demultiplexer to separate its components; the first component is x1 and the second is x2. Then we simply apply the required operations on them and make the actual output of the system given by y = sat(5x1(t - 0.3) - sqrt(x1 x2)). The input and output signals of the system are plotted in Fig. C.16.

References
MATLAB, 2016: mathworks.com/products/matlab.
SCILAB, 2016: www.scilab.org.

Appendix D: Treatise on stability concepts and tests

D.1 Introduction

Stability is the kernel of dynamical systems’ operation. Today, researchers in almost all academic faculties deal with dynamical systems and thus their stability, as these are geared to at least some of their courses. The topic can be quite advanced if approached from a purely mathematical point of view, where differential equations/inclusions and dynamical systems are studied in different contexts: general topological spaces, measure spaces, manifolds, mappings, etc. In this perspective notions like Poisson and Lagrange stability are encountered, whose study needs the introduction and study of some other mathematical objects. On the other hand, if the system at hand is described by an ordinary differential equation in R^n or at most a Banach space, the stability concept becomes more approachable for researchers in all faculties. It turns out that even in this restricted context there is not any survey paper on stability notions and tools. Insofar as our expertise allows us, in this Appendix we survey the existing literature on the topic of stability, for both notions and tools, including the advanced ones, and enlist the most pervasively used ones. Then we elaborate on items which are in our opinion less known to engineering academicians. These include Lipschitz, Poisson, and Lagrange stability, as well as finite- and fixed-time stability. While the second and third concepts are more than half a century old (in the mathematics world, of course), the others are rather new. Fixed-time stability was first encountered in 2008 and has been rigorously studied since 2012. Rigorous investigation of finite-time stability is also rather new, although the notion itself dates back to the mid-20th century. It turns out that the concept of Lipschitz stability, which is about 30 years old in the mathematical field, has not yet appeared in the engineering field although it is a useful notion, as will be seen.
While the Appendix may be used by all undergraduate students of mathematics, in other fields it is appropriate not for undergraduate students but for graduate students and researchers. The organization of the rest of this Appendix is as follows. In Section D.2 we review the literature on stability tools and concepts. Sections D.3 and D.4 are devoted to introductions to Lipschitz stability, and to Lagrange, Poisson, and Lyapunov stability, respectively. Section D.5 presents a brief on finite- and fixed-time stability. The results are then extended to decentralized stabilization of large-scale systems. We confine ourselves to citing a few references in the bibliography for the issues which are less known. Further references can easily be found using the specialized search engines of scientific publishers like IEEE, Elsevier, etc. The Appendix is


of interest to all academicians and researchers who are dealing with the concept of stability. It can be used as a reference, offering an almost exhaustive collection of the stability notions and tools that appear in the literature up to the year 2017.

D.2 A survey on stability concepts and tools

The notions and tools for studying stability depend on the dynamical system being linear, nonlinear, continuous, discrete, discontinuous, deterministic, stochastic, etc. With this in mind the appearance of the term stability in the literature may be categorized, from one standpoint, into the two branches of deterministic and stochastic systems. However, for a reason that will become clear later, we present a third class as well. We also add that in either case some of the notions ramify into strong and weak types.

D.2.1 Deterministic systems
For a deterministic system the main traditional notions of stability are as follows: (1) Lyapunov stability, (2) bounded-input bounded-output (BIBO) stability, (3) asymptotic stability, (4) exponential stability, (5) uniform stability, (6) absolute stability, (7) global stability, (8) total stability, (9) relative stability, (10) conditional stability, and (11) D stability. Combinations like uniform global asymptotic stability also exist. More advanced stability notions can be enlisted as: (1) hyperstability, (2) superstability, (3) (non)quadratic stability, (4) Lagrange stability, (5) Poisson stability, (6) bounded-input bounded-state (BIBS) stability, (7) input-to-state stability (ISS), (8) integral stability, (9) eventual stability, (10) extreme stability, (11) L (Lp, lp) stability, (12) output-input stability, (13) Lipschitz stability, (14) set stability, (15) finite-time stability, (16) fixed-time stability, and (17) input/output stability over finite alphabets. Generalizations and variants of these notions are already available, e.g., output-to-state stability (OSS), input-to-output stability (IOS), input/output-to-state stability (IOSS), input-output practical stability (IOpS), exponential ISS (exp-ISS), input-to-state dynamical stability (ISDS), and integral ISS (iISS). Also note that L∞ stability is the same as BIBO stability. For integral stability, eventual stability, extreme stability, set stability, ISS-type stability, and output-input stability see (Lakshmikantham and Leela, 1969; El-Hawwary and Maggiore, 2010; Sontag, 2011; Liberzon et al., 2002). For stability of systems over finite alphabets see (Balochian et al., 2011a, 2011b; Tarraf et al., 2008).
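For orientation, a compact (and standard) set of definitions for a few of the notions just listed, for the system ẋ = f(x,t) with f(0,t) = 0, is sketched below; these are textbook forms added by us, not quotations from the cited references:

```latex
\begin{align*}
&\text{(Lyapunov) stable:} && \forall \varepsilon>0\;\exists \delta>0:\; |x(t_0)|<\delta \;\Rightarrow\; |x(t)|<\varepsilon \;\;\forall t\ge t_0,\\
&\text{asymptotically stable:} && \text{stable, and } |x(t_0)|<\delta \;\Rightarrow\; x(t)\to 0 \text{ as } t\to\infty,\\
&\text{exponentially stable:} && |x(t)| \le k\,e^{-\lambda (t-t_0)}|x(t_0)| \quad \text{for some } k,\lambda>0,\\
&\text{ISS (with input } u\text{):} && |x(t)| \le \beta\big(|x(t_0)|,\,t-t_0\big) + \gamma\big(\|u\|_\infty\big),
\end{align*}
where $\beta$ is a class-$\mathcal{KL}$ function and $\gamma$ is a class-$\mathcal{K}$ function.
```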
The most famous stability criteria/tests are: (1) Hermite’s test, (2) Hurwitz’s test, (3) Routh’s test, (4) Lienard-Chipart’s test, (5) Barkhausen’s test, (6) Nyquist’s test, (7) Bode’s test, (8) Nichols’ test, (9) the root locus method, (10) Lyapunov’s method, (11) Filippov’s method, (12) the direct (or computerized) test, (13) the Jury test, (14) the Schur-Cohn test, (15) the Tsypkin test, (16) the Lev-Ari/Bistritz/Kailath test, and (17) the Middleton-Goodwin test. These methods, some dating back to over a century ago, have been extensively studied in the literature. Several variants and extensions of a


number of them are already available, e.g., the Nyquist criterion for multidimensional systems, the Nyquist criterion for uncertain systems, the Nyquist-Barkhausen criterion for linear lumped-parameter systems, and the inverse Nyquist criterion. More advanced stability tests/criteria include: (1) the Hermite-Biehler test, (2) the generalized Nyquist plot, (3) the generalized Bode diagram, (4) phase-plane analysis, (5) describing function analysis, (6) the small-gain theorem, (7) the circle criterion, (8) the off-axis circle criterion, (9) the positive real theorem, (10) the Popov criterion, (11) the Mikhailov criterion, (12) theorems or converse theorems of Barbashin, Krasovskii, LaSalle, Matrosov, Kurzweil, Massera, Razumikhin, Yoshizawa, Milnor, Hale, Artstein, Narendra-Annaswamy, Aeyels-Peuteman, Lin-Sontag-Wang, Agrachev-Liberzon, Pyatnitskiy-Rapoport, Loria-Panteley-Popovic-Teel, Amato-Ambrosino-Ariola-Calabrese, Balochian-Khaki Sedigh-Zare, Balochian-Khaki Sedigh-Haeri, Karafyllis, etc., (13) fixed-point theorems, (14) linearization-type methods, (15) algebraic techniques and Lie-algebraic conditions, (16) particular methods like Halanay-type techniques, (17) the center manifold method, (18) perturbation methods, (19) vibrational methods (adding oscillatory inputs), (20) comparison methods, (21) contraction methods, (22) invariance-type methods, (23) transformation methods, (24) Rantzer and Prajna-Parrilo-Rantzer methods, (25) dissipativity- and passivity-based methods, (26) backstepping methods, (27) Lehtomaki’s robustness test, (28) the μ test, (29) Kharitonov-type tests, (30) Rouche-type tests, (31) computational (in particular LMI, min-max, sum-of-squares, randomized, etc.) tests, (32) Floquet’s theorem, (33) Bloch’s theorem, etc. Several extensions and variants of these tests can be found in the literature.
For instance, some generalizations and variants of the small-gain theorem are as follows: the local small-gain theorem, nonlinear small-gain theorem, mean-square small-gain theorem, ISS small-gain theorem, IOS small-gain theorem, extended scaled small-gain theorem, etc. In the above lists some stability tests/tools are primarily for uncertain systems. In this respect there are many other specialized results as robustness tools; see e.g., (El-Sakkary, 1985; Konstantinov et al., 2003; Krasnova et al., 2011; Xu et al., 1993, 1995; Zhou and Doyle, 1998) and their bibliographies, as well as item 5 of Further Readings of Chapter 8. As for tests which are originally not for uncertain systems, extensions of many of them to uncertain systems are already present in the literature; for instance, robust root locus analysis, and the generalized Nyquist plot and Bode diagram. We also mention that it is possible to further structure the above lists by revealing the overlaps, connections, and implications which do exist between some classes. The details are outside the scope of this book. It should also be noted that in some specific contexts the word stability can also be found, i.e., it has been defined in some specific applications as well, like Lyapunov-Volterra stability (Liao and Wang, 2012); partial variable stability (Kao et al., 2014); connective stability (Siljak, 2012); semistability, quasistability, sign stability, sign semistability (Jeffries et al., 1987); bistability and multistability (Gedeon and Sontag, 2007); diagonal stability, etc. Moreover, in the literature on discontinuous systems, solutions to discontinuous differential equations are often understood in the sense of Filippov (1989), as in Section D.4 of this Appendix,


and the phrase “stable in the sense of Filippov solutions” is also seen. Thus, for example, the phrase connective stability in the sense of Filippov is encountered in the literature as well (Ruan, 1991). Other results on stability of discontinuous systems include stability in the sense of Hermes solutions, stability in the sense of Krasovskii solutions, stability in the sense of “sample and hold” (Clarke et al., 1997), stability over finite alphabets (Tarraf et al., 2008), etc. In (Hajek, 1979) it is shown that for a large class of discontinuous differential equations Krasovskii solutions coincide with Filippov solutions. See also (Hermes, 1967; Deimling, 1992). Finally, we add that, precisely speaking, (Balochian et al., 2011a, 2011b) considers stabilization with a finite number of control rules. In the literal meaning it can be considered as stabilization over finite alphabets, as categorized before. We stress that, as we mentioned before, it is possible to provide a finer picture of the field, but it is too technical and cannot be addressed in an appendix; it needs a book!

D.2.2 Stochastic systems
For the stability of stochastic systems there exist at least three times as many notions as there are for the stability of deterministic systems. The reason is that in the stochastic context there are at least three basic types of convergence: convergence in probability, convergence in the mean, and convergence in the almost sure sense. During the initial phase of the development of stochastic (control) theory there was some confusion and controversy over the stability notions, their usefulness, and their implications on each other. These issues have been addressed to a great extent in the classical texts of (Arnold, 1974; Khas’minskii, 1980; Liu, 2006), etc. The main concepts of stability, in our opinion, are stability in probability, stability in the pth moment, and stability in the almost sure sense, where the variants (1)–(11) of the deterministic case of these concepts also exist, as well as (12) the polynomial version. Stochastic counterparts of the aforementioned advanced topics for deterministic systems, in principle, all make sense and thus can be developed. In fact this has been done for almost all these concepts except a few, including the fixed-time and Poisson stabilities, which have not yet been addressed. More advanced concepts of stability include: (1) N-Sigma stability, (2) incremental stability, (3) L stability, (4) noise-to-state stability, (5) evolutionary stability, (6) nonlinear stability, (7) polynomial stability, (8) stochastic stability, and (9) stochastic structural stability. The notion of stochastic stability was first defined for a Gaussian spin glass model in 1998 (Aizenman and Contucci, 1998). It may be extended to more general stochastic models, as follows: A Gaussian (spin glass) model is stochastically stable if the deformed quenched state and the original one coincide in the thermodynamic limit.
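For reference, the three basic convergence modes behind these notions can be written compactly for a state x(t) converging to the origin; the forms below are standard and are added here by us for orientation:

```latex
\begin{align*}
&\text{in probability:} && \lim_{t\to\infty} \Pr\{\,|x(t)|>\varepsilon\,\} = 0 \quad \forall \varepsilon>0,\\
&\text{in the $p$th moment:} && \lim_{t\to\infty} \mathbb{E}\,|x(t)|^{p} = 0 \qquad (p=2:\ \text{mean square}),\\
&\text{almost surely:} && \Pr\Big\{ \lim_{t\to\infty} x(t) = 0 \Big\} = 1 .
\end{align*}
```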
Stochastic structural stability has been introduced in the fields of statistical mechanics and astrophysics (Farrell and Ioannou, 2003). It addresses turbulent jet dynamics as a two-way interaction between the mean flow and its consistent field of turbulent eddies. For stochastic systems the most famous stability tools are: (1) the Lyapunov method, (2) Khas’minskii criteria, (3) Kushner criteria, (4) Foster-type criteria,


(5) Razumikhin-type theorems, (6) Rouche-type criteria, (7) small-gain theorems, and (8) computational (in particular LMI) tests. For more specialized results the interested reader is encouraged to consult the pertinent literature. It should also be mentioned that, similar to the case of deterministic systems, here the (generalized) Filippov notion exists (Hafayed, 2010). Thus, for instance, solutions to discontinuous differential equations are understood in the sense of Filippov, and the phrase “stable in the sense of Filippov solutions” is also encountered. The same is true for its other applications, like Volterra and partial variable stability, as well.

D.2.3 Miscellaneous
In the mathematics literature the term stability is encountered in other contexts as well. These contexts may themselves be categorized into deterministic and stochastic, so that this third category may not stand independently and may actually be included in both of the above-mentioned categories. However, because the details get too technical and go outside the scope of this book, we present it as a third class due to its mathematical nature, which is far from the engineering fields, although some of these notions are starting to appear in the engineering literature as well.1 Included in this category is the following: According to some, in numerical analysis an iterative algorithm is called stable if the error tends to zero or remains bounded as the number of iterations increases, so that the algorithm converges to the vicinity of a solution. On the other hand, some authors distinguish between stability (and well-posedness) and convergence. Polynomial stability and exponential stability are among the concepts one encounters in this setting. Also, a numerical algorithm is called strongly backward stable if it can perform its desired task for a nearby problem as well. Other examples are stability of subspaces (Asgari, 2011), stability of graphs (Siljak, 2008), stability of sets2 (Weiner and Szonyi, 2014), stability of manifolds (Deniz et al., 2008), stability of maps (Hashemi et al., 2017; Rimanyi et al., 2015; Yu et al., 2017) in algebraic geometry, and stability of operators (Bass and Ren, 2013) in operator theory. It should also be noted that in the field of mathematics a matrix is called D stable if it remains stable when multiplied by any positive diagonal matrix. This property has been investigated for many classes of matrices like M-matrices (Bierkens and Ran, 2014; Kushel, 2016).
On the other hand, as you recall from Chapter 3, Stability Analysis, in the field of systems and control a matrix is said to be D stable if its eigenvalues are in a D-shaped region (perhaps open from the left) in the OLHP.
Remark D.1: It is due to add that from another standpoint a system may be either continuous, discrete, or hybrid. The aforementioned stability ‘concepts’ are valid for both continuous- and discrete-time systems, but they are mostly developed for continuous-time systems. As for ‘tools/tests’, except for some like the Schur-Cohn,

1 We should also add that in this category sometimes the definition and usage of the word “stability” is different from that in the previous categories.
2 Note that it is different from what we had in Section D.2.1.


Middleton-Goodwin, etc., which are for discrete-time systems, all the above-mentioned methods are for continuous-time systems. All in all, the theory of discrete-time systems is less developed than that of continuous-time systems. The same is also true for hybrid systems, which are not well developed. In the sequel of the book on state-space methods we shall learn many more details.
Remark D.2: We should also add that from another standpoint a system may be either small-scale or large-scale. In Section 1.16.1.1 and Appendix F we provide some explanation about these terms. Our concern here is that applying a stabilization method to a large-scale system is not, simply said, easy and efficient. The same is true for applying a stability test. In the sequel of the book we shall provide more details. Here we suffice to say that, e.g., finding the roots (eigenvalues) of a matrix of dimension 1000 is not, simply said, reliably possible. A common solution to this difficulty is to apply the stability methods and tests to ‘subsystems’ of the system under some conditions such that the stability of the original system is maintained. The explanation is that the dimension of a large-scale system is, say, n = 1000. We divide it into, e.g., N = 200 subsystems of average dimension 5 and work with the subsystems. An example of this method appears in Section D.5.1; it is called decentralized control. The stability theory of large-scale systems is less developed than that of common small-scale systems. In this survey we confine our exposition to topics which are less known to engineering academicians but are important for them as emerging fields. In our opinion these topics are the Lipschitz, Lagrange, and Poisson stability of the mathematics world, and finite- and fixed-time stability. They are presented in the ensuing Sections D.3–D.5, respectively. Section D.5 offers new contributions as well: decentralized finite- and fixed-time stability of large-scale systems are introduced and addressed briefly.
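The subsystem idea of Remark D.2 can be sketched as follows; the notation is generic and ours, not a specific result from the literature:

```latex
\text{Write the large-scale system as $N$ interconnected subsystems}
\[
\dot{x}_i = f_i(x_i) + g_i(x_1,\dots,x_N), \qquad i=1,\dots,N,
\]
where $f_i$ describes the isolated subsystem and $g_i$ collects the interconnections. One then proves stability of each isolated subsystem $\dot{x}_i=f_i(x_i)$ (a low-dimensional test) and imposes a smallness or structure condition on the $g_i$ (e.g., via vector Lyapunov functions together with an M-matrix test) under which the stability of the overall system is maintained, while each controller $u_i=k_i(x_i)$ uses only local states (decentralized control).
```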

D.3 Lipschitz stability

The notion of (uniform) Lipschitz stability was first introduced in 1986, as follows (Dannan and Elaydi, 1986, 1989). Consider the system ẋ = f(x,t) with standard assumptions. The system is said to be uniformly Lipschitz stable if there exist M > 0 and δ > 0 such that |x(t; x0, t0)| ≤ M|x0| whenever |x0| ≤ δ, t ≥ t0 ≥ 0. If |x0| ≤ δ is replaced by |x0| < ∞ then it is called globally uniformly Lipschitz stable. Moreover, the system is said to be asymptotically stable in variation if ∫ from t0 to t of |φ(t,s)| ds ≤ M, where φ(t,t0) is the fundamental matrix of ẏ = f_x(t,0)y, f_x = ∂f/∂x, and φ(t0,t0) = I. It should be noted that for linear systems this notion of stability coincides with uniform stability, but it is an independent property in nonlinear systems. Uniform Lipschitz stability lies somewhere between the concept of uniform stability on the one side, and the concepts of asymptotic stability in variation and uniform stability in variation on the other side. It neither implies asymptotic


Figure D.1 Implications among different notions of stability. Adopted from F.M. Dannan and S. Elaydi, Lipschitz stability of nonlinear systems of differential equations, J. Math. Anal. Appl., vol. 113, pp. 562–577, 1986, with permission.

stability nor is implied by it. For nonlinear systems the implications among the different notions of stability are shown in the diagram of Fig. D.1, adopted from (Dannan and Elaydi, 1986). In this figure the acronyms spell out as follows. ESL: exponentially stable in the large; EAS: exponentially asymptotically stable; ES: exponentially stable; UAS: uniformly asymptotically stable; AS: asymptotically stable; GESV: globally exponentially stable in variation; EASV: exponentially asymptotically stable in variation; ASV: asymptotically stable in variation; ULS: uniformly Lipschitz stable; US: uniformly stable; S: stable (in the sense of Lyapunov); GULS: globally uniformly Lipschitz stable; GULSV: globally uniformly Lipschitz stable in variation; GUSV: globally uniformly stable in variation; USV: uniformly stable in variation; ULSV: uniformly Lipschitz stable in variation. For further details the interested reader is referred to (Dannan and Elaydi, 1986). It seems that in the engineering field of systems and control this notion has not appeared yet. It is currently under investigation how it will enhance the existing literature. In particular its extension to large-scale systems is among our ongoing research. Apart from differential equations, the notion of Lipschitz stability exists in other contexts like matrix theory, where, e.g., the notion of Lipschitz stability of an invariant subspace can be found in the literature (Gracia and Velasco, 2011) as well.
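As a minimal illustration (our own, not taken from Dannan and Elaydi), consider the scalar systems below:

```latex
\[
\dot{x} = -x \;\Rightarrow\; x(t;x_0,t_0) = e^{-(t-t_0)}x_0,
\qquad |x(t;x_0,t_0)| \le 1\cdot|x_0| \;\; \forall t\ge t_0 ,
\]
so this system is (globally) uniformly Lipschitz stable with $M=1$ and arbitrary $\delta$. Conversely, $\dot{x}=0$ is also uniformly Lipschitz stable (again with $M=1$) but not asymptotically stable, showing that the notion does not force convergence of the trajectory.
```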

D.4 Lagrange, Poisson, and Lyapunov stability

Lagrange and Poisson stability notions are among the advanced concepts studied in differential equations and dynamical systems in math departments. They have started to appear in cutting-edge research in control theory, e.g., in the stability


analysis of discrete and jump dynamical systems, mainly since a decade ago; see e.g., (Filippov, 1989; Bonotto and Federson, 2009; Li and Jian, 2015) and their bibliographies. It is not possible to give a self-contained introduction to these concepts, as many other technical terms would have to be introduced first. Nevertheless, for the sake of familiarity of the nonmathematical control community, we briefly define these concepts in the following. Further details can be found in the related literature (Oxtoby, 1971; Engelking, 1989), and the like. Prior to this let us introduce the definitions of some classical terms.
Precompactness: A space X is precompact if for every entourage U there is a finite set F ⊂ X such that X is covered by U[F]. Thus, for X to be precompact it is sufficient that some completion of it is compact, and it is necessary that all completions of it be compact. This is analogous to the Heine-Borel theorem for metric spaces, which states that a metric space is compact iff it is complete and totally bounded. (On the other hand, the Heine-Borel theorem states that subsets of R^n are compact iff they are closed and bounded.)
Recurrence: A point x of a dynamical system f^t in a metric space S is recurrent if for every ε > 0 there exists T > 0 such that all points of the trajectory f^t x are contained in an ε-neighborhood of any arc of time length T of this trajectory.
Minimality: A minimal set in a Riemannian space is a generalization of a minimal surface. A minimal set is a k-dimensional closed subset X0 in a Riemannian space M^n, n > k, such that for some subset Z of k-dimensional Hausdorff measure zero the set X0 \ Z is a differentiable k-dimensional minimal surface. A minimal set in a topological dynamical system is a nonempty closed invariant subset of the phase space of the system which does not have proper closed invariant subsets. And now the main definitions of this Subappendix follow.
Lagrange stability: Given a metric space S, a point x of a trajectory of a dynamical system is Lagrange stable if the trajectory is contained in a precompact set. The concept was introduced by Poincaré in 1892. It should be noted that this is the basic definition; more recent definitions apply to nonmetrizable and noncompact spaces as well. A related theorem is due to Birkhoff, stating that if S is complete, then the closure of a positively or negatively Lagrange-stable trajectory contains at least one compact minimal set. Every point of a compact minimal set is a recurrent point.

Poisson stability: Given a topological space S, a point x of a trajectory f^t x of the dynamical system f^t is Poisson stable if it is an α- and ω-limit point of the trajectory f^t x, i.e., if there are sequences t_k → ∞ and τ_k → −∞ such that lim_{k→∞} f^{t_k} x = lim_{k→∞} f^{τ_k} x = x. The concept was introduced by Poincaré in 1898. A related theorem is Hopf's recurrence theorem for dynamical systems given on a space of infinite measure: If a dynamical system is given on an arbitrary domain in R^n and Lebesgue measure is an invariant measure of the system, then every point, with the exception of the points of a certain set of measure zero, is either Poisson stable or divergent.

(Jules Henri Poincaré (1854–1912) was a French mathematician, physicist, and engineer. He contributed to all fields of mathematics and was one of the most influential mathematicians of all time; he is considered the last mathematician who excelled in all its branches. His contributions are too many to list. In particular, his celebrated conjecture remained open for about a century before it was finally settled in the early 21st century.)

Lyapunov stability: Given a differential equation on a normed space, in simple words, if the solutions that start out near an equilibrium stay near it for all time, the equilibrium is called Lyapunov stable. The theory has two main tools: Lyapunov's first method and Lyapunov's second method. We should mention that it is also possible to define this notion for mappings, topological spaces, manifolds, infinite-dimensional spaces, etc., and indeed these notions do exist in the mathematical literature. We also add that Lyapunov functions can have various forms, which we address in the sequel of the book on state-space methods. In general these three concepts do not imply each other and are rather disparate. However, if the underlying space is complete (which has a precise mathematical definition) then their implications on each other are straightforward. The interested reader is referred to the introduced references for further detail.

Remark D.3: In the simplest case of S = R^n, Lagrange stability is equivalent to boundedness of the trajectory.

Remark D.4: The term stability has also appeared in technical phrases like "orbital stability" and "multistability" (like bistability). The former is related to (Lyapunov) stability of a trajectory under certain circumstances; for details the interested reader is referred to the corresponding math literature. The latter is used when the system has more than one equilibrium point.
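For the linear case ẋ = Ax, Lyapunov's second method reduces to a concrete computation: pick any Q ≻ 0, solve the Lyapunov equation AᵀP + PA = −Q, and check that P ≻ 0. The following is a minimal numerical sketch in Python; the matrix A and the helper name are illustrative, not taken from the text.

```python
import numpy as np

def lyapunov_psd_check(A, Q=None):
    """Solve A^T P + P A = -Q by Kronecker vectorization and report whether
    P is positive definite (for Q > 0 this holds iff A is Hurwitz)."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    I = np.eye(n)
    # vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P), column-major vec
    K = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(K, (-Q).flatten(order="F")).reshape((n, n), order="F")
    return P, bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2 (Hurwitz)
P, is_stable = lyapunov_psd_check(A)
print(is_stable)   # True
```

For larger problems one would use a dedicated solver such as SciPy's `scipy.linalg.solve_continuous_lyapunov` rather than the O(n^6) Kronecker solve shown here.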

D.5 Finite-time and fixed-time stability

In this section we briefly introduce the notions of finite-time and fixed-time stabilization and then extend the latter to large-scale systems. The notion of finite-time stability dates back to 1953 (Kamenkov, 1953). It became known to the western world through the 1963 English book of W. Hahn (Hahn, 1963); however, its rigorous treatment has come in the 21st century, see e.g., (Bhat and Bernstein, 2000; Ghasemi et al., 2014; Mahmoudi et al., 2008; Polyakov et al., 2015). The concept of fixed-time stability was first introduced in 2008 (Andrieu et al., 2008); some pertinent results are in (Polyakov, 2012; Polyakov et al., 2015). From the name (and the precise definition, of course) it is clear that finite-time stability is a necessary condition for fixed-time stability. Finite-time stability is closely connected to homogeneity of the system, which provides a powerful framework for studying it. The general definitions are quite intuitive, as follows. Consider the system

ẋ = f(x, t),   x(0) = x_0,    (D.1)

where the right-hand side can be discontinuous; the solutions are thus defined in the sense of Filippov. Without loss of generality we assume that x = 0 is the equilibrium state of the system. We have the following definitions, extracted from (Polyakov et al., 2015).

Finite-time stable: The equilibrium point x = 0 of the system (D.1) is said to be globally finite-time stable if it is globally asymptotically stable and any solution x(t, x_0) of (D.1) reaches the equilibrium at some finite time, i.e., x(t, x_0) = 0 for all t > T(x_0), where T is the so-called settling-time function of the system.

Fixed-time stable: The equilibrium point x = 0 of the system (D.1) is said to be globally fixed-time stable if it is globally finite-time stable and the settling-time function T(x_0) is bounded by some positive number T_max, i.e., T(x_0) < T_max for all x_0.

In the sequel we extend the results of (Polyakov, 2012) to fixed-time decentralized stabilization of large-scale systems.
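These two definitions can be illustrated on classical scalar examples: ẋ = −sign(x)√|x| is finite-time stable with T(x_0) = 2√|x_0|, which grows without bound in x_0, while ẋ = −sign(x)(√|x| + |x|^{3/2}) is fixed-time stable with T(x_0) = 2 arctan(√|x_0|) < π. A rough Euler simulation in Python; the example systems and tolerances are our illustration, not from (Polyakov et al., 2015):

```python
import numpy as np

def settle_time(f, x0, dt=1e-4, t_end=30.0, tol=1e-6):
    """Forward-Euler estimate of the settling time of the scalar system xdot = f(x)."""
    x, t = float(x0), 0.0
    while t < t_end and abs(x) > tol:
        x += dt * f(x)
        t += dt
    return t

# Finite-time stable: settling time T(x0) = 2*sqrt(|x0|) is finite but unbounded.
finite = lambda x: -np.sign(x) * np.sqrt(abs(x))
# Fixed-time stable: T(x0) = 2*arctan(sqrt(|x0|)) < pi uniformly in x0.
fixed = lambda x: -np.sign(x) * (np.sqrt(abs(x)) + abs(x) ** 1.5)

for x0 in (1.0, 4.0, 100.0):
    print(x0, settle_time(finite, x0), settle_time(fixed, x0))
```

Note how the finite-time settling time scales with the initial condition (about 2, 4, and 20 time units), while the fixed-time one stays below π even for x_0 = 100.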

D.5.1 Fixed-time decentralized stability of large-scale systems

Let us begin by linearizing the system (D.1) as

ẋ = Ax + Bu + v,    (D.2)

where v denotes the higher-order terms. In other words, v represents any uncertainty in the form of disturbances, parameter variations, or nonlinearities. Since the controller to be presented is adaptive, as in the decentralized adaptive control literature we assume that the system satisfies the matching condition, i.e., A = BH and v = Bf. We assume (H1): rank[B AB ⋯ A^{n−1}B] = n, and (H2): rank[B AB] = n. Note that many actual systems, like mechanical systems with the maximal number of inputs, satisfy this condition. We also mention a slight mistake in the formulation of the Brunovsky form in (Polyakov, 2012); because the correct form is available in the literature we do not present it here. Now we present "a reformulation in philosophy," as opposed to a formulation-wise modification (i.e., changes in structure or parameters), which enables the controller to accommodate unknown systems as well. Let the unknown (possibly time-varying) system matrices be denoted by A_u, B_u. We decompose them as A_u = A + A_r, B_u = B + B_r, where the constant matrices A, B are chosen arbitrarily subject to assumption (H1)/(H2), and the unknown (possibly time-varying) remainders A_r x, B_r u are encapsulated in the lumped perturbation v. (Clearly, v will satisfy the matching condition.) Hence, the problem is put in the framework we develop for known systems from the outset.

Remark D.5: The above reformulation has a nice interpretation and is worth paraphrasing. In the same framework both known and unknown systems are accommodated without any modification of the controller structure or even its parameters. This renders the advocated methodology a universal approach. Moreover, for both known and unknown systems, the system can be nonlinear, time-varying, perturbed, and/or disturbed. All the designer needs to do is to choose some constant matrices satisfying (H1)/(H2) and to pursue the proposed synthesis approach of the next section. Needless to say, in general, the closer these parameters are to their true values, the smaller v is, and hence the smaller and smoother the control action and the transient response.
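The rank conditions (H1) and (H2) are easy to check numerically. A small sketch follows; the double-integrator matrices are our illustration, and `ctrb` is a hypothetical helper, not a routine from the text.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B] used in condition (H1)."""
    n = A.shape[0]
    blocks, M = [], B
    for _ in range(n):
        blocks.append(M)
        M = A @ M
    return np.hstack(blocks)

# Double integrator with a single force input (position/velocity states):
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]
H1 = np.linalg.matrix_rank(ctrb(A, B)) == n             # (H1): controllability
H2 = np.linalg.matrix_rank(np.hstack([B, A @ B])) == n  # (H2): stronger condition
print(H1, H2)  # True True
```

For n = 2 the two conditions coincide; for larger n, (H2) is strictly stronger and in general requires more inputs, as in mechanical systems actuated in every degree of freedom.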

D.5.1.1 Large-scale system description

Consider a large-scale system consisting of N interconnected subsystems, the ith of which is represented by

ẋ_i = A_i x_i + B_i u_i + v_i + Σ_{j≠i}^{N} A_{ij} x_j(t − τ_{ij}),   x_i = ϕ_i, t ∈ [−T, 0],    (D.3)

in which x_i(t) ∈ R^{n_i}, u_i ∈ R^{m_i}, and v_i(x_i, u_i, x_j, t) ∈ R^{n_i} are the state, input, and uncertain nonlinear time-varying perturbation of the ith subsystem. More precisely, v_i denotes any bounded unknown time-varying perturbation in the form of model uncertainty (ΔA_i x_i, ΔB_i u_i, Σ_{j≠i} ΔA_{ij} x_j), external disturbance, or nonlinearity. The scalar function 0 ≤ τ_{ij}(t) ≤ T < ∞ represents the unknown, nonnegative, continuous, bounded, time-varying delay, where T is an unknown positive constant. The continuous function ϕ_i(t) ∈ R^{n_i} denotes the arbitrary initial condition. Moreover, all the matrices A_i, B_i, A_{ij} are known, of appropriate dimensions, with obvious meanings. The terms Σ_{j≠i} A_{ij} x_j denote the interconnections with the other subsystems. We present the ensuing development for known systems, because by the argument of the preceding subsection we can consider unknown systems in the same framework without any modifications. The following assumptions are made:

(H′1) Assumption (H1) holds for all the subsystems.

(H′2) Assumption (H2) holds for all the subsystems.

(H′3) Matching condition: As in decentralized adaptive control and centralized fixed-time control (Polyakov, 2012) we assume that the system satisfies the matching condition, i.e., A_{ij} = B_i H_{ij} and v_i = B_i f_i, and thus ẋ_i = A_i x_i + B_i(u_i + d_i), where d_i = f_i + Σ_{j≠i} H_{ij} x_j(t − τ_{ij}).

(H′4) Boundedness of the lumped uncertainty v_i and of all the states of all subsystems; thus d_i is bounded. Note that this assumption is realistic and does not impose any limitation on the class of actual systems considered.

Remark D.6: To consider unknown systems in the same framework, the aforementioned "reformulation philosophy" is applied to all the subsystems. Consequently, all the designer has to do is to adopt some constant matrices satisfying (H′1), (H′2) as the system parameters A_i, B_i, A_{ij}, and to follow the design approach that we present in the sequel.
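The matching condition in (H′3) can likewise be verified numerically for given data: A_{ij} = B_i H_{ij} has a solution H_{ij} exactly when every column of A_{ij} lies in the column space of B_i. A sketch with illustrative matrices of our own choosing, not from the text:

```python
import numpy as np

def satisfies_matching(B, M, tol=1e-9):
    """True iff M = B @ H for some H, i.e. every column of M lies in the
    column space of B; the minimum-norm choice is H = pinv(B) @ M."""
    H = np.linalg.pinv(B) @ M
    return bool(np.allclose(B @ H, M, atol=tol))

B1 = np.array([[0.0], [1.0]])               # single-input second-order subsystem
A12 = np.array([[0.0, 0.0], [2.0, -1.0]])   # interconnection acting through B1
A13 = np.array([[1.0, 0.0], [0.0, 0.0]])    # violates the matching condition
print(satisfies_matching(B1, A12), satisfies_matching(B1, A13))  # True False
```

When the check succeeds, H = pinv(B) @ M recovers a matrix H_{ij} that can be lumped into d_i as in (H′3).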
Now we formally define the problems we address in this section.

Decentralized fixed-time attractivity/stability: Design the decentralized controllers u_i of the subsystems such that fixed-time attractivity/stabilization of the large-scale system (D.3) is achieved.

Before we discuss the controller design, an important observation is in order. With this reformulation we have encapsulated the effect of the interaction terms in d_i; that is, the index j in (D.3) does not appear in the formulation of subsystem i. Hence we omit the subscript i as well and simply work with the system ẋ = Ax + B(u + d), which represents any of the subsystems. All the formulations of (Polyakov, 2012) can thus be used henceforth, with the reminder that we have enhanced its method by extending it to unknown systems as well. The only thing that remains to be taken care of is the existence of a bounded upper limit of d_i (or simply d): ||d_i||_∞ ≤ d_0. Because x (including the states entering d) is bounded, d and d_0 are also bounded. However, since d_0 is unknown, for the moment we have to choose a sufficiently large value for it.

Theorem D.1: For the SISO case, if each subcontroller is designed according to theorem 1 of (Polyakov, 2012), then a neighborhood of the origin is globally fixed-time attractive for the overall system (D.3). Δ

Theorem D.2: For the SISO case, if each subcontroller is designed according to theorem 2 of (Polyakov, 2012), then the origin is globally fixed-time stable for the overall system (D.3). Δ

Theorem D.3: For the MIMO case, if each subcontroller is designed according to theorem 3 of (Polyakov, 2012), then a neighborhood of the origin is globally fixed-time attractive for the overall system (D.3). Δ

Theorem D.4: For the MIMO case, if each subcontroller is designed according to theorem 4 of (Polyakov, 2012), then the origin is globally fixed-time stable for the overall system (D.3). Δ

We close this part by adding that the choice of d_0 is an interesting topic for which further theoretical investigation is desirable. Other extensions, such as to stochastic systems, hybrid systems, etc., are also interesting.

D.6 Summary

In this Appendix we have given a rich glossary of stability concepts and tests. Those items which are less known to engineering academicians and students have been treated further, although at an introductory level. An extension of the fixed-time stability concept to decentralized stabilization of large-scale systems is also included. For further specialized details the reader should consult the respective literature. The reference (Siljak, 2008) was brought to our attention by D.D. Siljak. A comprehensive treatment of several concepts of Section D.2, along with specialized cutting-edge results, is offered in a whole chapter in the sequel of this book on state-space methods.

References

Aizenman, M., Contucci, P., 1998. On the stability of the quenched state in mean field spin glass models. J. Stat. Phys. 92 (5/6), 765–783.
Andrieu, V., Praly, L., Astolfi, A., 2008. Homogeneous approximation, recursive observer design, and output feedback. SIAM J. Control Optim. 47 (4), 1814–1850.
Arnold, L., 1974. Stochastic Differential Equations: Theory and Applications. Wiley, New York.
Asgari, M.S., 2011. On the stability of fusion frames of subspaces. Acta Math. Sci. 31 (4), 1633–1642.
Balochian, S., Khaki-Sedigh, A., Haeri, M., 2011a. Stabilization of fractional order systems using a finite number of state feedback laws. Nonlinear Dyn. 66, 141–152.
Balochian, S., Khaki-Sedigh, A., Zare, A., 2011b. Variable structure control of linear time invariant fractional order systems using a finite number of state feedback laws. Commun. Nonlinear Sci. Numer. Simul. 16, 1433–1442.
Bass, R.F., Ren, H., 2013. Meyers inequality and strong stability for stable-like operators. J. Funct. Anal. 265 (1), 28–48.
Bhat, S., Bernstein, D., 2000. Finite time stability of continuous autonomous systems. SIAM J. Control Optim. 38 (3), 751–766.
Bierkens, Y., Ran, A., 2014. A singular M-matrix perturbed by a nonnegative rank one matrix has positive principal minors; is it D-stable? Linear Algebra Appl. 457, 191–208.
Bonotto, E.M., Federson, M., 2009. Poisson stability and impulsive semi-dynamical systems. Nonlinear Anal. Theory Methods Appl. 71 (12), 6148–6156.
Clarke, F.H., Ledyaev, Y.S., Sontag, E.D., Subbotin, A.I., 1997. Asymptotic stabilization implies feedback stabilization. IEEE Trans. Automatic Control. 42, 1394–1407.
Dannan, F.M., Elaydi, S., 1986. Lipschitz stability of nonlinear systems of differential equations. J. Math. Anal. Appl. 113, 562–577.
Dannan, F.M., Elaydi, S., 1989. Lipschitz stability of nonlinear systems of differential equations. II. Lyapunov functions. J. Math. Anal. Appl. 143, 517–529.
Deimling, K., 1992. Multivalued Differential Equations. de Gruyter, Berlin.
Deniz, A., Kocak, S., Ratiu, A., 2008. A notion of robustness and stability of manifolds. J. Math. Anal. Appl. 342 (1), 524–533.
El-Hawwary, M.I., Maggiore, M., 2010. Reduction principles and the stabilization of closed sets for passive systems. IEEE Trans. Automatic Control. 55 (4), 982–987.
El-Sakkary, A.K., 1985. The gap metric: robustness and stabilization of feedback systems. IEEE Trans. Automatic Control. 30, 240–247.

Engelking, R., 1989. General Topology, second ed. (Translated from Polish) Sigma Series in Pure Mathematics, 6. Heldermann Verlag, Berlin.
Farrell, B.F., Ioannou, P.J., 2003. Structural stability of turbulent jets. J. Atmos. Sci. 60, 2101–2118.
Filippov, A.F., 1989. Differential Equations with Discontinuous Righthand Sides. (Translated from Russian) Kluwer, Dordrecht.
Ghasemi, M., Nersesov, S.G., Clayton, G., 2014. Finite-time tracking using sliding mode control. J. Franklin Inst. 351 (5), 2966–2990.
Gracia, J.-M., Velasco, F.E., 2011. Lipschitz stability of controller invariant subspaces. Linear Algebra Appl. 434, 1137–1162.
Gedeon, T., Sontag, E.D., 2007. Oscillations in multi-stable monotone systems with slowly varying feedback. J. Differ. Equ. 239, 273–295.
Hafayed, M., 2010. Filippov approach in stochastic maximum principle without differentiability assumptions. Electron. J. Differ. Equ. 97, 1–13.
Hahn, W., 1963. Theory and Applications of Lyapunov's Direct Method, first ed. Prentice-Hall, NJ.
Hajek, O., 1979. Discontinuous differential equations, I, II. J. Differ. Equ. 32, 149–170, 171–185.
Hashemi, A., Schweinfurter, M., Seiler, W.M., 2017. Deterministic genericity for polynomial ideals. J. Symbolic Comput. (available online).
Hermes, H., 1967. Discontinuous vector fields and feedback control. In: Differential Equations and Dynamical Systems (Proc. Int. Symp., Mayaguez, P.R., 1965). Academic Press.
Jeffries, C., Klee, V., van den Driessche, P., 1987. Qualitative stability of linear systems. Linear Algebra Appl. 78, 1–48.
Kamenkov, G., 1953. On stability of motion over a finite interval of time [in Russian]. J. Appl. Math. Mech. 17, 529–540.
Kao, Y., Wang, C., Zha, F., Cao, H., 2014. Stability in means of partial variable for stochastic reaction-diffusion systems with Markovian switching. J. Franklin Inst. 351 (1), 500–512.
Khas'minskii, R.Z., 1980. Stochastic Stability of Differential Equations. Sijthoff & Noordhoff, Netherlands.
Konstantinov, M., Gu, D.-W., Mehrmann, V., Petkov, P., 2003. Perturbation Theory for Matrix Equations. North-Holland, Amsterdam.
Krasnova, S.A., Sirotina, T.G., Utkin, V.A., 2011. A structural approach to robust control. Autom. Remote Control. 72 (8), 1639–1666.
Kushel, O., 2016. On a criterion of D-stability for P-matrices. Special Matrices. 4 (1), 181–188.
Lakshmikantham, V., Leela, S., 1969. Differential and Integral Equations: Theory and Applications. Academic Press, NY.
Li, L., Jian, J., 2015. Exponential convergence and Lagrange stability for impulsive Cohen–Grossberg neural networks with time-varying delays. J. Comput. Appl. Math. 227, 23–35.
Liao, S., Wang, J., 2012. Global stability analysis of epidemiological models based on Volterra–Lyapunov stable matrices. Chaos Solitons Fractals. 45 (7), 966–977.
Liberzon, D., Morse, A.S., Sontag, E.D., 2002. Output-input stability and minimum-phase nonlinear systems. IEEE Trans. Automatic Control. 47 (3), 422–436.
Liu, K., 2006. Stability of Infinite Dimensional Stochastic Equations with Applications. Chapman & Hall/CRC, London.
Mahmoudi, A., Momeni, A., Aghdam, A.G., Gohari, P., 2008. Switching between finite-time observers. Eur. J. Control. 14 (4), 297–307.

Oxtoby, J.C., 1971. Measure and Category. Springer, Berlin.
Polyakov, A., 2012. Nonlinear feedback design for fixed-time stabilization of linear control systems. IEEE Trans. Automatic Control. 57 (8), 2106–2110.
Polyakov, A., Efimov, D., Perruquetti, W., 2015. Finite-time and fixed-time stabilization: Implicit Lyapunov function approach. Automatica. 51, 332–340.
Rimanyi, R., Tarasov, V., Varchenko, A., 2015. Trigonometric weight functions as K-theoretic stable envelope maps for the cotangent bundle of a flag variety. J. Geometry Phys. 94, 81–119.
Ruan, S., 1991. Connective stability of discontinuous large-scale systems. J. Math. Anal. Appl. 160, 480–484.
Siljak, D.D., 2012. Decentralized Control of Complex Systems. Dover Publications, NY.
Siljak, D.D., 2008. Dynamic graphs. Nonlinear Anal. Hybrid Syst. 2, 544–567.
Sontag, E.D., 2011. Input-to-state stability. In: Levine, W.S. (Ed.), The Control Handbook, second ed. Control Systems Advanced Methods. CRC Press, Boca Raton, pp. 44.1–44.23 (1011–1033).
Tarraf, D., Megretski, A., Dahleh, M.A., 2008. A framework for robust stability of systems over finite alphabets. IEEE Trans. Autom. Control. 53 (5), 1133–1146.
Weiner, Z., Szonyi, T., 2014. On the stability of sets of even type. Adv. Math. 267, 381–394.
Xu, S.J., Darouach, M., Schaefers, J., 1993. Expansion of det(A + B) and robustness analysis of uncertain state space systems. IEEE Trans. Autom. Control. 38 (11), 1671–1675.
Xu, S.J., Darouach, M., Schaefers, J., 1995. Expansion of det(A + B + C) and robustness analysis of discrete-time state-space systems. IEEE Trans. Autom. Control. 40 (5), 936–942.
Yu, Z., Zhang, Z., Liu, C., 2017. Stability of ε-isometric embeddings into Banach spaces of continuous functions. J. Math. Anal. Appl. 453 (2), 805–820.
Zhou, K., Doyle, J.C., 1998. Essentials of Robust Control. Prentice Hall, NJ.

Appendix E: Treatise on Routh's stability test

E.1 Introduction

A concise survey and road map on the topic of stability was offered in Appendix D. Here we focus on Routh's test (Routh, 1877), which was a hot topic of research from the 1950s to the 1970s. We list the most important of its numerous applications in Section E.2. Several of them are new, reported since the 1980s or even in the 21st century. Due to space limitations we cite only a few pertinent references, often the first (i.e., oldest) works; recent works can be found via the specialized search engines of the IEEE, Elsevier, etc. The topics discussed in item 1 of Section E.2 may be somewhat far from engineering disciplines and are more on the mathematical side; the other items, however, have stemmed from, or found application in, engineering fields. Furthermore, in Section E.3 we mention the most famous algorithms for handling the case of j-axis zeros. It is interesting that this seemingly simple problem is not yet fully resolved; thus we do not present any details there, since the results are too technical and still inconclusive in full generality.
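The test itself is mechanical enough to sketch in a few lines. The following Python helper (our illustration, covering only the regular case in which no zero appears in the first column; the pathological cases are the subject of Section E.3) builds the Routh array and counts the sign changes in its first column, which equals the number of open right-half-plane roots.

```python
def routh_sign_changes(coeffs):
    """Sign changes in the first column of the Routh array of
    p(s) = c0*s^n + c1*s^(n-1) + ... + cn; in the regular case this
    equals the number of open right-half-plane roots."""
    n = len(coeffs) - 1
    width = n // 2 + 1
    pad = lambda r: r + [0.0] * (width - len(r))
    table = [pad([float(c) for c in coeffs[0::2]]),
             pad([float(c) for c in coeffs[1::2]])]
    for _ in range(n - 1):
        pprev, prev = table[-2], table[-1]
        if prev[0] == 0:                     # pathological case: see Section E.3
            raise ValueError("zero in the first column")
        table.append([(prev[0] * pprev[j + 1] - pprev[0] * prev[j + 1]) / prev[0]
                      for j in range(width - 1)] + [0.0])
    first = [row[0] for row in table]
    return sum(1 for a, b in zip(first, first[1:]) if a * b < 0)

print(routh_sign_changes([1, 2, 3, 4]))  # 0: Hurwitz stable
print(routh_sign_changes([1, 1, 2, 8]))  # 2: two RHP roots
```

The second example illustrates the classical necessary condition failing: for s^3 + s^2 + 2s + 8 we have a1*a2 = 2 < a0*a3 = 8, and the array confirms exactly two unstable roots.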

E.2 Applications of the Routh array

1. It was originally developed for real polynomials but has since been extended to complex ones as well. A geometrical interpretation of it has also been given. While different efficient algorithms for its implementation have been proposed, a characterization of stable tests for its conditions, with connections to total positivity, has also been developed. Moreover, it has rich connections with some areas of analysis and algebra, where the discussed topics include: the amplitude-phase interpretation of stability, Cauchy indices, the principle of the argument, the Hermite–Biehler theorem, rational lossless functions, Bezoutians, nonnegativity of polynomials and polynomial matrices, total positivity of the Hurwitz matrix, Euclid's algorithm, orthogonal polynomials, and the greatest common divisor of polynomials. See Asner (1970), Dimitrov and Pena (2002), Barrio (1999), Fryer (1959), Gantmacher (1959), Genin (1996), Hurwitz (1895), Kemperman (1982), Krein and Naimark (1981), Lepschy et al. (1988), Pena (2004), Postnikov (1981), Rivero (2015), Sigal (1990), VanVleck (1899) and their bibliographies.
2. It has been employed to find an upper bound for the abscissa of stability of continuous-time systems. Thus, without directly finding the poles of the system, an upper bound for the largest of them can easily be found (Henrici, 1970).

3. A unified treatment of the implementation of the Routh–Hurwitz conditions in the continuous-time case and the Schur–Cohn conditions in the discrete-time case has been offered by the Fujiwara theorems. The Fujiwara matrix in each case satisfies a Lyapunov-type equation. The elementary proofs of the Fujiwara results establish a link between two apparently disparate approaches to the root-separation problem: the solution through matrix equations, and the classical method of solution via quadratic forms (Datta, 1978). On the other hand, in addition to the conventional bilinear transformation for the use of the Routh–Hurwitz test in the discrete-time setting, a biquadratic transformation is discussed in (Jalili-Kharaajoo and Araabi, 2005).
4. It has been used to compute Sturm's fractions and continued fraction expansions of ratios of two polynomials, which themselves are used in the analysis and representation of the input impedance of resistor-capacitor (RC) ladder networks, feedback control systems, etc., and for the inverse problem, i.e., obtaining the ratio of two polynomials from a given continued fraction expansion. In the bigger context, this can be used to obtain reduced-order models for dynamical systems, a topic known as model/order reduction; see also item 15. See e.g., Chen and Shieh (1969), Kumar (1980), Wall (1948) and the references therein.
5. It has been utilized for finding the greatest common divisor and polynomial factorization. This has applications in both mathematics and control theory. A particular application of this result is in the pathological case of the Routh array in which some j-axis zeros exist. The interested reader is referred to e.g., Fryer (1959), Mastascusa et al. (1971), Porter (1968), VanVleck (1899) and their bibliographies.
6. The equivalence of the Routh algorithm (which yields a necessary and sufficient condition for stability) and Lyapunov's second method (which yields a sufficient condition) can be proven in certain cases. Note that this establishes the necessity of the Lyapunov-based results in those cases. To this end a finite Stieltjes continued fraction expansion is used along with the Routh canonical flow graph of the array. For further details consult e.g., Raju (1967).
7. A symmetric form of the Routh array has been developed, the reason being that checking the positivity of the minors of symmetric matrices is simpler than for nonsymmetric matrices. This has led researchers to develop a symmetric matrix, resulting in the so-called symmetric form of the Routh array. It is noteworthy that this matrix has some advantages on the side. For instance, it has been employed to construct a Lyapunov function for the analysis of the stability of the system, which itself gives an alternative, direct proof of the sufficiency of the Routh–Hurwitz stability test; see also item 5. The following references and their bibliographies can be seen: Fu and Dasgupta (2000), Ralston (1962), Parks (1963).
8. A direct connection between Hermite's and Routh's stability tests for real polynomials has been established (Chew, 1974; Porter, 1968).
9. It has been invoked to generate the Lyapunov function for the stability analysis of nonlinear time-varying systems through an energy metric algorithm. For more information see e.g., Wall (1970) and the references therein. Its generalization for nonlinear systems is discussed in Schultz (1963); see Sreedhar and Rao (1968) for linear time-varying systems.
10. The coefficients of the Routh tabular form have specific uses. For instance, rows three and four, among others, can be used for the calculation of some quadratic integrals serving as the cost function of some optimization problems. In this context the Routh and Schwarz canonical forms play pivotal roles. Some relevant works are Chen and Chu (1966), Csaki (1967), Parks (1966); see also (Ismail and Kim, 1983).

11. It has been modified and applied to determine the number of (positive/negative) real roots of a polynomial. Multiplicities of such roots can also be inferred (Hostetter, 1975; Lepschi, 1962; Siljak, 1970a).
12. Its modified form of item 11 has been used to establish aperiodicity of dynamical systems (Fuller, 1955; Meerov, 1945).
13. Its modified form of item 11 has been utilized to obtain conditions for nonnegativity of polynomials and polynomial matrices. This has applications in both mathematics and control theory (Siljak, 1970a).
14. It has been employed, using the results of item 13, to derive (algebraic) conditions for absolute stability, optimality, passivity, strict positive realness, and inner convex approximation of the stability domain of dynamical systems, as well as conditions for the stability of two-dimensional and multivariable systems. See e.g., Artemchuk et al. (2015, 2016), Siljak (1970b, 1975), Xie and Fu (1994), and their bibliographies for more references.
15. It has been applied in model/order reduction. Application of the Routh array in this domain was proposed to overcome the stability and inaccuracy problems encountered in Padé-type approximation. Continued fractions, item 4, are an indispensable part of this application. The interested reader is urged to consult Hutton and Friedland (1975), Shamash (1980), Singh (1981), Lamba and Bandyopadhyay (1986), and their bibliographies.
16. It has been used to test the stability of segments of polynomials and multidimensional systems, e.g., filters and matrix polynomials. See e.g., the following articles and the references therein: Argoun (1986), Bialas (1985), Galindo (2015), Hwang and Yang (2001), Saydy et al. (1990), Tits (2002). A Routh-type test for verifying the stability of families of polynomials with linear uncertainty has also been developed (Tjaferis and Hollot, 1989; Shingare and Bandyopadhyay, 2007). It has also been used for robust pole assignment of interval polynomials (Juang, 2015).
17. A relation between the vanishing of some minors (resulting in j-axis zeros, the problematic case of the algorithm) and Hopf bifurcations of higher order has been pointed out in Yang (2002).
18. The Routh–Hurwitz (and the Schur–Cohn) problems have been solved by the application of the Hankel matrix of Markov parameters (Datta, 1979).
19. It has been utilized in realization theory; see e.g., Margalio and Langholz (2000).
20. It has found some other special applications as well. For instance, it has been shown that it can be used to find the number of zeros of a polynomial lying inside and outside the unit circle. Moreover, a bilinear Routh test for the stability of discrete-time systems has also been proposed (Barnett, 1974; Bistritz, 1982; Bistritz, 1984).
21. It has been extended and utilized to test stability and zero distribution of fractional order differential equations, point delayed systems, spherical triangles, and differential-difference equations in Ahmed et al. (2006), Chapellat et al. (1990), De la Sen (2007), Sharma and Klamkin (1996), Thowsen (1981), Verriest and Michiels (2006).
22. It must also be noted that several different proofs of this theorem, including some elementary ones, can be found in Chapellat et al. (1990), Anagnost and Desoer (1991), Meinsma (1995), Ho et al. (1998), Holtz (2003).
23. Finally, it has been used in many specific applications; see e.g., Al-Azzawi (2012), El-Marhomy and Abdel-Sattar (2004), Fano et al. (2011).

We close this Section by adding that in 1977 a centennial survey on Routh's test appeared (Barnett and Siljak, 1977). Our list includes and greatly expands that survey; this is only natural, of course, after 40 years! Also recall that several new and pertinent results are cited in the Further Readings and Exercises of Chapter 3, which we do not repeat here. On the other hand, we are aware of some literature (books and articles) in other languages which we cannot read.

E.3 The case of imaginary-axis zeros

It is interesting that this apparently simple problem has not yet been fully resolved. Apart from what we presented in Section 3.4, there are numerous results and algorithms for handling this situation in further detail, each with its own strengths and weaknesses. The most famous of these algorithms are due to Y. Shamash, K. J. Khatwani, K. S. Yeung, M. Benidir and B. Picinbono, S. S. Tsai and J. S. H. Chen, etc. Because these algorithms are not yet conclusive with certainty, we do not present any of them here. The interested reader is referred to the literature, including the introduced references.

References

Ahmed, E., El-Sayed, A.M.A., El-Saka, H.A.A., 2006. On some Routh–Hurwitz conditions for fractional order differential equations and their applications in Lorenz, Rossler, Chua, and Chen systems. Phys. Lett. A. 358, 1–4.
Al-Azzawi, S.F., 2012. Stability and bifurcation of pan chaotic system by using Routh–Hurwitz and Gardan methods. Appl. Math. Comput. 219 (3), 1144–1152.
Anagnost, J.J., Desoer, C.A., 1991. An elementary proof of the Routh–Hurwitz stability criterion. Circuits Syst. Signal Process. 10 (1), 101–114.
Argoun, M.B., 1986. On sufficient conditions for the stability of interval matrices. Int. J. Control. 44 (5), 1245–1250.
Artemchuk, I., Nurges, U., Belikov, J., Kaparin, V., 2015. Stable cones of polynomials via Routh rays. In: Proc. 20th Int. Conf. Process Control, pp. 255–260.
Artemchuk, I., Nurges, U., Belikov, J., 2016. Robust pole assignment via Routh rays of polynomials. In: Proc. American Control Conference, pp. 7031–7036.
Asner, B.A., 1970. On the total nonnegativity of the Hurwitz matrix. SIAM J. Appl. Math. 18, 407–414.
Barnett, S., 1974. Application of the Routh array to stability of discrete-time linear systems. Int. J. Control. 19, 47–55.
Barnett, S., Siljak, D.D., 1977. Routh's algorithm: A centennial survey. SIAM Review. 19 (3), 472–489.
Barrio, R., 1999. On the A-stability of Runge–Kutta collocation methods based on orthogonal polynomials. SIAM J. Numer. Anal. 36 (4), 1291–1303.
Bialas, S., 1985. A necessary and sufficient condition for the stability of convex combinations of polynomials and matrices. Bull. Pol. Acad. Technol. Sci. 33 (9,10), 473–480.
Bistritz, Y., 1982. A direct Routh stability method for discrete system modelling. Syst. Control Lett. 2 (2), 83–87.
Bistritz, Y., 1984. Direct bilinear Routh stability criteria for discrete systems. Syst. Control Lett. 4 (5), 265–271.

Appendix E

891

Chapellat, H., Mansour, M., Bhattacharyya, S.P., 1990. Elementary proofs of some classical stability criteria. IEEE Trans. Educ. 33 (3), 232–239.
Chen, C.F., Chu, H., 1966. A matrix for evaluating Schwarz's form. IEEE Trans. Autom. Control, 303–305.
Chen, C.-F., Shieh, L.-S., 1969. Continued fraction inversion by Routh's algorithm. IEEE Trans. Circuits Syst. 16 (2), 197–202.
Chew, C.F., 1974. A new formulation of the Hermite criterion. Int. J. Control 19, 757–764.
Csaki, F., Lehoczky, J., 1967. Comment on 'A remark on Routh's array'. IEEE Trans. Autom. Control 12 (4), 462–463.
Datta, B.N., 1978. On the Routh–Hurwitz–Fujiwara and the Schur–Cohn–Fujiwara theorems for the root separation problem. Linear Algebra Appl. 22, 235–245.
Datta, B.N., 1979. Application of Hankel matrices of Markov parameters to the solutions of the Routh–Hurwitz and the Schur–Cohn problems. J. Math. Anal. Appl. 68 (1), 276–290.
De la Sen, M., 2007. Stability criteria for linear time-invariant systems with point delays based on one-dimensional Routh–Hurwitz tests. Appl. Math. Comput. 187, 1199–1207.
Dimitrov, D.K., Pena, J.M., 2002. Almost strict total positivity and a class of Hurwitz polynomials. J. Approx. Theory 132 (2), 212–223.
El-Marhomy, A., Abdel-Sattar, N.E., 2004. Stability analysis of rotor-bearing systems via Routh–Hurwitz criterion. Appl. Energy 77 (3), 287–308.
Fano, W.G., Boggi, S., Razzitte, A.C., 2011. Magnetic susceptibility of MnZn and NiZn soft ferrites using Laplace transform and the Routh–Hurwitz criterion. J. Magn. Magn. Mater. 323 (12), 1708–1711.
Fryer, W.D., 1959. Applications of Routh's algorithm to network theory problems. IEEE Trans. Circuit Theory 6, 144–149.
Fu, M., Dasgupta, S., 2000. Parametric Lyapunov functions for uncertain systems: the multiplier approach. In: Ghaoui, L.E., Niculescu, S.L. (Eds.), Advances in Linear Matrix Inequality Methods in Control. SIAM, Philadelphia.
Fuller, A.T., 1955. Conditions for aperiodicity in linear systems. Br. J. Appl. Phys. 6, 195–198 and 450–451.
Galindo, R., 2015. Stabilisation of matrix polynomials. Int. J. Control 88 (10), 1925–1932.
Gantmacher, F.R., 1959. The Theory of Matrices, vol. 2. Chelsea, New York.
Genin, Y.V., 1996. Euclid algorithm, orthogonal polynomials, and generalized Routh–Hurwitz algorithm. Linear Algebra Appl. 246, 131–158.
Henrici, P., 1970. Upper bounds for the abscissa of stability of a stable polynomial. SIAM J. Numer. Anal. 7, 538–544.
Ho, M.T., Datta, A., Bhattacharyya, S.P., 1998. An elementary derivation of the Routh–Hurwitz criterion. IEEE Trans. Autom. Control 43 (3), 405–409.
Holtz, O., 2003. Hermite–Biehler, Routh–Hurwitz, and total positivity. Linear Algebra Appl. 372, 105–110.
Hostetter, G.H., 1975. Using the Routh–Hurwitz test to determine the numbers and multiplicities of real roots of polynomials. IEEE Trans. Circuits Syst., 697–698.
Hurwitz, A., 1895. Über die Bedingungen, unter welchen eine Gleichung nur Wurzeln mit negativen reellen Teilen besitzt. Mathematische Annalen 46, 273–284.
Hutton, M.F., Friedland, B., 1975. Routh approximations for reducing order of linear time-invariant systems. IEEE Trans. Autom. Control 20, 320–337.
Hwang, C., Yang, S.-F., 2001. The use of Routh array for testing the Hurwitz property of a segment of polynomial. Automatica 37 (2), 291–296.


Ismail, M., Kim, H.K., 1983. Elements of the first column of Routh's array. J. Franklin Inst. 316 (5), 381–384.
Jalili-Kharaajoo, M., Araabi, B.N., 2005. The Schur stability via the Hurwitz stability analysis using a biquadratic transformation. Automatica 41 (1), 173–176.
Juang, B.-Y., 2015. Robust pole assignment for interval polynomial based on positive interval Routh-table: positive property of Routh-table of binomial coefficients. Int. J. Syst. Sci. 38 (7), 833–842.
Kemperman, J.H.B., 1982. A Hurwitz matrix is totally positive. SIAM J. Math. Anal. 13, 331–341.
Krein, M.G., Naimark, M.A., 1981. The method of symmetric and Hermitian forms in the theory of the separation of the roots of algebraic equations. Linear Multilinear Algebra 10, 265–308.
Kumar, A., 1980. Continued fraction expansion and inversion by 'rearranged' Routh tables. Int. J. Control 31 (4), 627–635.
Lamba, S.S., Bandyopadhyay, B., 1986. An improvement on Routh approximation technique. IEEE Trans. Autom. Control 31 (11), 1047.
Lepschi, A., 1962. A new criterion for evaluating the number of complex roots of an algebraic equation. IRE Trans. 7, 1981.
Lepschy, A., Mian, G.A., Viaro, U., 1988. A geometrical interpretation of the Routh test. J. Franklin Inst. 32 (6), 695–703.
Liang, S., Wang, S.-G., Wang, Y., 2017. Routh-type table test for zero distribution of polynomials with commensurate fractional and integer degrees. J. Franklin Inst. 354 (1), 83–104.
Margalio, M., Langholz, G., 2000. The Routh–Hurwitz array and realization of characteristic polynomials. IEEE Trans. Autom. Control 45, 2424–2427.
Mastascusa, E.J., Rave, W.C., Turner, B.M., 1971. Polynomial factorization using the Routh criterion. Proc. IEEE, 1358–1359.
Meerov, M.V., 1945. Criteria for aperiodic systems. Izv. Akad. Nauk SSSR, OTN-12, 1169–1178.
Meinsma, G., 1995. Elementary proof of the Routh–Hurwitz test. Syst. Control Lett. 25 (4), 237–242.
Parks, P.C., 1966. Further comments on 'Obtaining system performance measures from Routh's algorithm'. Proc. IEEE 54 (1), 93–95.
Parks, P.C., 1963. A further comment on 'A symmetric matrix formulation of the Hurwitz–Routh stability criterion'. IEEE Trans. Autom. Control 8 (3), 270–271.
Pena, J.M., 2004. Characterization and stable tests for the Routh–Hurwitz conditions and for total positivity. Linear Algebra Appl. 393, 319–332.
Porter, B., 1968. Stability Criteria for Linear Dynamic Systems. Academic Press, New York.
Postnikov, M.M., 1981. Stable Polynomials. Nauka, Moscow.
Raju, G.V.S.S., 1967. The Routh canonical form. IEEE Trans. Autom. Control 12 (4), 463–464.
Ralston, A., 1962. A symmetric matrix formulation of the Hurwitz–Routh stability criterion. IRE Trans. Autom. Control 7, 50–51.
Rivero, A.E.C., 2015. On matrix Hurwitz type polynomials and their interrelations to Stieltjes positive definite sequences and orthogonal matrix polynomials. Linear Algebra Appl. 476, 56–84.
Routh, E.J., 1877. A Treatise on the Stability of a Given State of Motion, Particularly Steady Motion. MacMillan, New York.


Saydy, L., Tits, A.L., Abed, E.H., 1990. Guardian maps and the generalized stability of parameterized families of matrices and polynomials. Math. Control Signals Syst. 3, 345–371.
Schultz, D.G., 1963. A discussion of generalized Routh–Hurwitz conditions for nonlinear systems. Trans. AIEE, Part II: Appl. Ind. 81 (6), 377–382.
Shamash, Y., 1980. Failure of the Routh–Hurwitz method of reduction. IEEE Trans. Autom. Control 25 (2), 313–314.
Sharma, A., Klamkin, M.S., 1996. Extension of Routh's theorem to spherical triangles. SIAM Rev. 27 (3), 446–447.
Shingare, P., Bandyopadhyay, B., 2007. Accurate computation of interval Routh table for robust stability analysis of interval systems. IFAC Proc. 40 (9), 407–411.
Sigal, R., 1990. Algorithms for the Routh–Hurwitz stability test. Math. Comput. Model. 13 (8), 69–77.
Siljak, D.D., 1970a. Nonnegative polynomials: a criterion. Proc. IEEE 58, 1370–1371.
Siljak, D.D., 1970b. Algebraic criterion for absolute stability, optimality and passivity of dynamic systems. Proc. IEEE 117, 2033–2036.
Siljak, D.D., 1975. Stability criteria for two-variable polynomials. IEEE Trans. Circuits Syst. 22, 185–189.
Singh, V., 1981. Improved stable approximants using the Routh array. IEEE Trans. Autom. Control 26 (2), 581–583.
Sreedhar, N., Rao, S.N., 1968. Stability of linear time-varying systems. J. Control 7 (6), 591–594.
Thowsen, A., 1981. The Routh–Hurwitz method for stability determination of linear differential-difference systems. Int. J. Control 33 (5), 991–995.
Tits, A.L., 2002. Comment on 'The use of Routh array for testing the Hurwitz property of a segment of polynomials'. Automatica 38, 559–560.
Djaferis, T.E., Hollot, C.V., 1989. A Routh-like test for the stability of families of polynomials with linear uncertainty. Syst. Control Lett. 13 (1), 23–29.
VanVleck, E.B., 1899–1900. On the determination of a series of Sturm's functions by the calculation of a single determinant. Ann. Math. 1, 1–13.
Verriest, E.I., Michiels, W., 2006. Inverse Routh table construction and stability of delay equations. Syst. Control Lett. 55 (9), 711–718.
Wall, H.S., 1948. Analytic Theory of Continued Fractions. Van Nostrand, Princeton, NJ.
Wall, E., 1970. A modification of the energy metric algorithm to include the Routh–Hurwitz criteria. IEEE Trans. Autom. Control 15 (3), 373–374.
Xie, L., Fu, M., 1994. Finite test of robust strict positive realness. In: Proceedings of the American Control Conference, pp. 649–653.
Yang, X., 2002. Generalized form of Hurwitz–Routh criterion and Hopf bifurcation of higher order. Appl. Math. Lett. 15, 615–621.

Appendix F: Genetic algorithm: A global optimization technique

F.1 Introduction

Many problems in mathematics and other branches of science require finding the optimum, i.e., the maximum or minimum, of a function. Because maximization of a function can be seen as minimization of its negative, we can consider the minimization problem only. The problem is more complicated when we have a set of functions as the cost function, subject to a set of constraints, and the problem is nonconvex. A nonconvex problem is one in which the cost function and/or the constraints are not convex. The problem falls into the field of optimization, which is an independent graduate course in control engineering as well as in mathematics. The field is so vast that there are several specialized journals which present the pertinent research results and applications. It is certainly impossible to give a full account of it in this short appendix; however, we try to present the basic idea as concisely as possible. In the ensuing parts we present the basic ideas of convex optimization, nonconvex optimization, convexification, and global optimization techniques, with an emphasis on genetic algorithms (GAs).1

F.2 Convex optimization

We present the basic idea with a simple example. Consider minimization of the function f(x) shown in Fig. F.1 over x ∈ R. The problem is strictly convex and has a unique global solution. There are efficient numerical methods to find the answer; in MATLAB there are different commands for this purpose, like "fmincon," "fminunc," etc., see Appendix C for more detail. A typical scalar convex function is the quadratic form. Quadratic forms exist in vector and matrix domains as well. In fact, convex problems have many special known (but nonexhaustive) forms in scalar, vector, and matrix domains. Following the advances in the field of convex optimization (both in basic theory and numerical algorithms) made by the mathematics community in the 1980s, a powerful tool for handling convex linear matrix problems has been publicized since 1994 in the engineering community, partly by the book of Boyd et al. (1994) on Linear Matrix Inequalities (LMIs), which was the first book on the subject. It should be stressed that the roots of the idea can be found in the works of many other researchers from some decades earlier, as is testified in the same book.

Figure F.1 A typical convex function.

Figure F.2 A typical nonconvex function.

1. It is good to know that the word "algorithm" is derived from "Al-Kharazmi," the name of the Iranian scientist Mohammad ibn Musa Al-Kharazmi, who made significant contributions to mathematics. He was the first mathematician to introduce methodical procedures into mathematics, in his book "Algabr val Moghabeleh"; such procedures are what we today refer to as algorithms. In particular, he is considered the father of algebra. The word algebra is derived from the word "gabr" in the title of that book.
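For concreteness, the situation of Fig. F.1 can be reproduced in a few lines of code. The following sketch is in Python rather than the MATLAB commands just mentioned, and the quadratic f(x) = (x − 3)² + 1 is purely illustrative, standing in for the f(x) of the figure. Because the problem is strictly convex, gradient descent reaches the unique global minimum x* = 3 from any starting point (for a suitable step size):

```python
# Gradient descent on the strictly convex function f(x) = (x - 3)^2 + 1.
# Convexity guarantees convergence to the unique global minimizer x* = 3
# from any initialization.

def f(x):
    return (x - 3.0) ** 2 + 1.0

def df(x):
    # Derivative of f, used as the descent direction.
    return 2.0 * (x - 3.0)

def gradient_descent(x0, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        x -= step * df(x)
    return x

if __name__ == "__main__":
    for x0 in (-10.0, 0.0, 25.0):              # very different initializations
        print(round(gradient_descent(x0), 6))  # all converge near 3.0
```

This "any start works" property is exactly what is lost in the nonconvex case of the next section.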

F.3 Nonconvex optimization

A typical nonconvex optimization problem is finding the minimum of the function f(x) in Fig. F.2. This function has two minima, one local and one global. You can easily imagine a function with more local maxima and minima, where the problem gets more complicated. Finding a local minimum of a given function is not difficult; finding the global minimum, however, is inherently difficult. Nevertheless, there are numerous methods, which can be categorized as: (1) deterministic methods, like inner and outer approximation, cutting plane methods, branch and bound methods, algebraic geometry methods, etc.; (2) stochastic methods, like simulated annealing, Monte Carlo sampling, parallel tempering, etc.; (3) heuristic and meta-heuristic methods, like evolutionary algorithms, differential evolution, swarm optimization, memetic algorithms, intelligent algorithms, etc.; (4) response surface methods, like self-organization and Bayesian optimization. In general, the abovementioned methods are not able to find the exact global minimum, but they can find an answer which, roughly speaking, is better than some other answers. In the sequel we will focus on GAs, which are a special form of the evolutionary algorithms of the third category above. Before this, however, in Section F.4 we comment on a method called convexification, which yields the global answer if implementable.
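The trap described above is easy to demonstrate. The following Python sketch uses an illustrative double-well function in the spirit of Fig. F.2 (the particular f is an assumption for the demo, not taken from the text); plain gradient descent started in the wrong basin stops at the local minimum and never sees the lower, global one:

```python
# A nonconvex function with two wells, as in Fig. F.2:
# f(x) = (x^2 - 1)^2 + 0.3*x has a local minimum near x = +0.96
# and a lower, global minimum near x = -1.04.

def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x

def df(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def descend(x0, step=0.01, iters=5000):
    # Plain gradient descent: converges to the minimum of whichever
    # basin the starting point x0 lies in.
    x = x0
    for _ in range(iters):
        x -= step * df(x)
    return x

if __name__ == "__main__":
    x_right = descend(2.0)     # starts in the right well: stuck at the LOCAL minimum
    x_left = descend(-2.0)     # starts in the left well: reaches the GLOBAL minimum
    print(round(x_right, 3), round(x_left, 3))
    print(f(x_left) < f(x_right))   # True: the left well is lower
```

A multistart strategy (rerunning `descend` from many random points and keeping the best) is the simplest member of the stochastic category listed above.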


Figure F.3 A typical convexification.

F.4 Convexification

Given the function of Fig. F.2, "suppose" we can convexify it by a function g(x), shown as the solid curve (or a similar curve) in Fig. F.3. Then we can rather simply and effectively find its global minimum by dedicated interior-point methods; this holds for every convex problem, especially if it is also linear. The difficulty of this approach is that convexification of a general nonconvex function is a formidable task, almost impossible in general, in the sense that its difficulty is comparable (in the literal meaning of the word) with that of the original problem. For general complex problems, with the present knowledge of the scientific community, we often have to resort to the global optimization methods, also known as global search methods, which we outlined in Section F.3. However, the future cannot be predicted for sure. (For further details also see the sequel of the book on state-space methods.) From among these methods, in the next section we focus on GAs. It has been shown that for complex problems, including nonconvex and combinatorial ones, the GA is a powerful method and generally outperforms the classical techniques; see, e.g., Homaifar et al. (1992).

F.5 Genetic algorithms

In their modern form, GAs were publicized mainly by John Holland's 1975 book Adaptation in Natural and Artificial Systems. The idea, however, has its roots in the work of other researchers of some decades earlier.2 Soon it found such popularity that in 1997 the IEEE Transactions on Evolutionary Computation was inaugurated. In recent times many researchers have made significant contributions to the general field and its applications; see Tempo et al. (2013) and its bibliography. GAs are stochastic search methods based on the mechanism of natural selection and genetics. They start with a random set of solutions, called the population. A population is thus a set of solutions for the problem, where each solution is called a chromosome. A chromosome is a string of symbols, usually but not necessarily binary numbers. The chromosomes (which are the solutions to the problem) evolve in successive iterations, called generations. In each generation, chromosomes are evaluated based on a fitness measure, and for the production of the new generation, new chromosomes, called offspring, are generated. New chromosomes are made either by merging two chromosomes via the "crossover" operator or by modifying a chromosome via the "mutation" operator. The new generation is formed by selecting from among parents and offspring, and rejecting the rest of the chromosomes, so that the size of the generation remains constant. Chromosomes which have a higher fitness measure have a higher probability of selection. After some generations the algorithm will probably converge to the best chromosome, or solution. GAs have many advantages, such as:

1. The user does not need to have much knowledge of the optimization field, which is a sophisticated branch of mathematics.
2. They are well suited to complex nonlinear and nonconvex problems in mixed search spaces.
3. They are well suited to the accommodation of different heuristics at the user's discretion.
4. They are well suited to coding instead of using the actual variable, and thus are generally applicable to any kind of problem.
5. They are well suited to parallel processing, as they search in a set of points.

2. A short history is as follows: In 1950 Alan Turing proposed a learning machine which somehow parallels the principles of evolution. In 1954 Nils Aall Barricelli pioneered computer simulation of evolution processes. In 1957 Alex Fraser worked on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. In the 1960s Hans-Joachim Bremermann adopted a population of solutions to optimization problems, going through recombination, mutation, and selection. Key contributions were also made by Lawrence J. Fogel, Ingo Rechenberg, and Hans-Paul Schwefel in the 1960s and early 1970s. For more information the interested reader is referred to the pertinent literature, like the introduced references.
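The cycle just described (random population, crossover, mutation, fitness-based selection with constant population size) can be sketched compactly. The following toy example is in Python rather than the MATLAB used later in this appendix, and all of its numbers (population size, probabilities, the cost function) are illustrative assumptions, not taken from the text:

```python
import random

def fitness(ch):
    # Cost to be minimized: squared distance of the chromosome from (2, -1).
    return (ch[0] - 2.0) ** 2 + (ch[1] + 1.0) ** 2

def crossover(p1, p2):
    # Linear (arithmetic) crossover: offspring lie on the segment between parents.
    r = random.random()
    return ([a + r * (b - a) for a, b in zip(p1, p2)],
            [b + r * (a - b) for a, b in zip(p1, p2)])

def mutate(ch, gen, maxgen, lo=-10.0, hi=10.0, b=1.65):
    # Dynamic (nonuniform) mutation: perturbations shrink as gen -> maxgen,
    # giving coarse search early and fine-tuning late.
    g = list(ch)
    i = random.randrange(len(g))
    shrink = random.random() * (1.0 - gen / maxgen) ** b
    if random.random() < 0.5:
        g[i] += (hi - g[i]) * shrink
    else:
        g[i] -= (g[i] - lo) * shrink
    return g

def ga(popsize=40, maxgen=300, pc=0.3, pm=0.1, seed=0):
    random.seed(seed)
    pop = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(popsize)]
    for gen in range(1, maxgen + 1):
        offspring = []
        for i in range(0, popsize - 1, 2):
            if random.random() < pc:
                offspring.extend(crossover(pop[i], pop[i + 1]))
        offspring += [mutate(ch, gen, maxgen) for ch in pop
                      if random.random() < pm]
        # Selection: keep the fittest popsize chromosomes of parents + offspring,
        # so the generation size stays constant.
        pop = sorted(pop + offspring, key=fitness)[:popsize]
    return pop[0]

if __name__ == "__main__":
    best = ga()
    print([round(v, 2) for v in best])   # a point near [2.0, -1.0]
```

Here selection is elitist truncation for brevity; fitness-proportional (roulette-wheel) selection, as in Fig. F.4, is a common alternative.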

The following flowchart shows the general procedure of almost all GAs (Fig. F.4). GAs have also been included in MATLAB via the command "ga." However, it is instructive to show the inner workings of a GA by providing a sample program. For the sake of brevity of programming, we show a real-coded or floating-point (i.e., no coding) GA in the following example, which is adopted from Bavafa-Toosi (2000) as part of a more advanced program; see also Chuang et al. (2016). There are different choices for the mutation and crossover operators; they depend on the application. The crossover operator is usually chosen to be linear so as to search through the whole solution space. The mutation operator is often chosen to be dynamic in order to yield fine-tuning capabilities aimed at high precision. Typical values of their probabilities are 0.3 and 0.1, respectively. We emphasize that there is a rich literature on the choice of the aforementioned operators whose coverage is outside the scope of this appendix.

Figure F.4 Basic flowchart of most GAs: solutions are encoded as binary strings, offspring are produced by crossover and mutation, fitness is computed, and roulette-wheel selection forms the next generation, which is then decoded back into solutions.
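The encoding-level operations of Fig. F.4 can be traced in code. The following Python fragment shows one-point crossover and single-bit mutation on binary chromosomes of the kind shown in the flowchart (the cut position and bit strings are illustrative):

```python
# One-point crossover and bit-flip mutation on binary chromosomes.

def crossover(p1, p2, cut):
    # Swap the tails of the two parent strings after position `cut`.
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(ch, pos):
    # Flip the single bit at position `pos`.
    flipped = '1' if ch[pos] == '0' else '0'
    return ch[:pos] + flipped + ch[pos + 1:]

if __name__ == "__main__":
    c1, c2 = crossover("1100010101", "1010111001", cut=6)
    print(c1, c2)                    # 1100011001 1010110101
    print(mutate("0011101011", 5))   # 0011111011
```

The example in the sequel works directly on real numbers instead, so no encoding or decoding step is needed.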


Remark F.1: It should be emphasized that in modern usage GAs have many more nitty-gritty details, which are outside the scope of this book.

Remark F.2: There are numerous results on the convergence properties (including premature convergence) of global optimization techniques, and GAs in particular, which are valid for "infinite time"; see the introduced references, e.g., Angelova and Pencheva (2011) and Cruz and Pereira (2013), for recent results. On the other hand, for the sake of completeness we should add that there are a number of studies on the expected runtime of different variants of GAs. These include super-polynomial, polynomial, exponential, and linear time lower bounds, and some upper bounds in special cases; see, e.g., Gutjahr (2012), Jansen and Zarges (2014), Lassig and Sudholt (2014), Oliveto and Witt (2014) and the references therein. The statement of these results is outside the scope of this book; however, in words, probably the simplest statement is that GAs work in probability, in the sense that several different runs of the program may not all result in a satisfactory answer. Moreover, the answers of different runs differ from each other, and usually over a longer period of time we get a better answer.

Remark F.3: There are several practical methods for terminating a GA. Two of these "stopping criteria" are as follows: (1) specifying the maximum generation/iteration; (2) specifying a desired minimum for the cost function. While the second method looks elegant, its problem is that it may not terminate in bounded time, like several hours (or days, depending on the problem), and thus the first approach is often adopted. Moreover, note that in either case, for the generic complicated problem, the process almost surely means that we accept an answer which may or may not be near the global minimum; it may be a far-off local minimum. In Example F.1 in the sequel we do so. In this example the GA takes a few seconds to terminate. Depending on the complexity of the problem, its size, and the maximum generation size, it may take a much longer time, e.g., several minutes or more.

Remark F.4: Our objective in this appendix is only to introduce GAs and show how the core algorithm works. This is particularly true for the GA presented in Example F.1 in the sequel. It is possible to improve the details of the program so as to increase both its precision and efficiency. This is outside the scope of this book.

Example F.1: The following program accomplishes the pole placement problem for the system ẋ = Ax + Bu, y = Cx, u = Gy, whose parameters are given below. Using a GA we find an output feedback controller such that the poles are assigned at {−1, −2, −5}.

A = [0 1 0; 0 0 1; 0 0 0],  B = [1 0; 1 0; 1 1],  C = [1 0 0; 0 1 0].


The simple but general program that we have written for this problem is given below. It is general in that with obvious changes it can be used for other problems as well. The population size is 170, the generation size (number of iterations) is 300, and the crossover and mutation probabilities are 0.3 and 0.1, respectively.

%%%%%% --------------------- start --------------------- %%%%%%
clear all
close all
I = 1i;                      % imaginary unit
A = [0 1 0; 0 0 1; 0 0 0];
B = [1 1 1; 0 0 1]';
C = [eye(2) zeros(2,1)];
Ld = [-1 -2 -5];             % desired poles
popsize = 170;
n = max(size(B));            % number of states
m = min(size(B));            % number of inputs
l = min(size(C));            % number of outputs
maxgen = 300;
Pc = 0.3;                    % crossover probability
Pm = 0.1;                    % mutation probability
N = 0;
NN = 0;
b = 1.65;
Gmax = 10;
Gmin = -10;
%lambda = .4;
rr = rand-.5;
minJ = 1e10;
sigma = 0;
ro1 = 1;
%ro2 = 0; ro3 = 0; ro4 = 1; ro5 = 1; ro6 = 1; ro7 = 1; ro8 = 1; ro9 = 1;
%p1 = 1; p2 = 1; p3 = 1; p4 = 1; p5 = 1; p6 = 1; p7 = 1; p8 = 1; p9 = 1;
%
reansLd = real(Ld);
imansLd = imag(Ld);
[sortreLd,ordreLd] = sort(reansLd);
for u = 1:max(size(Ld))
    sortimLd(u) = imansLd(ordreLd(u));
end
Ld = sortreLd + I*sortimLd;
%
% 1 - Generate the first population randomly in real numbers
%
for i = 1:m*l
    for j = 1:popsize
        G(j,i) = 20*(rand-.5);
    end
end
%
while N < maxgen
    N = N + 1;
    %
    % 2 - Arithmetic crossover
    %
    a = 0;
    for i = 1:popsize
        r(i) = rand;
        if r(i) < Pc
            a = a + 1;
            ii(a) = i;
        end
    end
    if rem(a,2) == 0
        for i = 1:a
            if rem(i,2) == 1
                rr = rand-.5;
                GG(i,:)   = G(ii(i),:)   + rr*G(ii(i+1),:);   %(1-lambda)
                GG(i+1,:) = G(ii(i+1),:) - rr*G(ii(i),:);     %(1-lambda)
            end
        end
    else
        for i = 1:a-1
            if rem(i,2) == 1
                rr = rand-.5;
                GG(i,:)   = G(ii(i),:)   + rr*G(ii(i+1),:);   %(1-lambda)
                GG(i+1,:) = G(ii(i+1),:) - rr*G(ii(i),:);     %(1-lambda)
            end
        end
        GG(a,:) = G(ii(a),:);
    end
    %
    % 3 - Nonuniform mutation
    %
    aa = 0;
    for i = 1:m*l*popsize
        r(i) = rand;
        if r(i) < Pm
            aa = aa + 1;
            quot = ceil(i/(m*l));        % chromosome (row) index
            remi = rem(i,m*l);           % gene (column) index
            if remi == 0
                remi = m*l;
            end
            if floor(rand*2) == 1
                G(quot,remi) = G(quot,remi) + (Gmax-G(quot,remi))*rand*(1-N/maxgen)^b;
            else
                G(quot,remi) = G(quot,remi) - (G(quot,remi)-Gmin)*rand*(1-N/maxgen)^b;
            end
            GG(a+aa,:) = G(quot,:);
        end
    end
    %
    % 4 - Top popsize selection
    %
    GGG = [G;GG];
    x = size(GGG);
    y = x(1);
    clear Lambda eval z;
    for z = 1:y                                 %%%%% start loop
        clear bb i GGGG;
        bb = 0;
        i = 0;
        for i = 1:m*l
            if rem(i,l) == 0
                bb = bb + 1;
                GGGG(bb,:) = GGG(z,i-l+1:i);    % reshape chromosome into G
            end
        end
        eigans = eig(A + B*GGGG*C,'nobalance')';
        reans = real(eigans);
        imans = imag(eigans);
        [sortre,ordre] = sort(reans);
        for u = 1:max(size(eigans))
            sortim(u) = imans(ordre(u));
        end
        Lambda(z,:) = sortre + I*sortim;
        if sortre(n) < sigma
            Bestsortre = sortre;
            sigma = sortre(n);
            Bestchrom = z;
            %bait(N) = 1;
        %else
            %bait(N) = 0;
            %bait = 1;
        end
        eval1(z) = sum((Lambda(z,:)-Ld).*conj(Lambda(z,:)-Ld));
        eval(z) = ro1*eval1(z);
        % + ro2*eval2(z)^p2 + ro3*eval3(z)^p3 + ro4*eval4(z)^p4 + ro5*eval5(z)^p5
        % + ro6*eval6(z)^p6 + ro7*eval7(z)^p7 + ro8*eval8(z)^p8 + ro9*eval9(z)^p9;
        if z == 1
            GGGGG = GGGG;
        else
            GGGGG = [GGGGG;GGGG];
        end
    end                                         %%%%% end loop
    [EEval,En] = min(eval);                     %%%%%% eval is always positive
    Eval(N) = EEval;
    AnsG = GGGGG((En-1)*m+1:m*En,:);            %%%%%% Ans = best at this iteration
    Anseval1 = eval1(En);
    [evalsort,esort] = sort(abs(eval));
    for i = 1:popsize         %%%%% find the best chromosomes in THIS iteration
        G(i,:) = GGG(esort(i),:);
    end
    [V,D] = eig(A + B*AnsG*C,'nobalance');
    if Eval(N) < minJ                           %%%%%% Best = best at this run
        minJ = Eval(N);              % Minimum of the cost function
        Besteval1 = Anseval1;        % Last cost function
        BestN = N;                   % Best iteration
        BestG = AnsG;                % Best controller/answer
        BestV = V;                   % Best modal matrix
        BestL = sort(diag(D));       % Best assigned eigenvalues
    end
end
%%%%%%%%%%%%%%%%%%%%%%%%%% end of this run %%%%%%%%%%%%%%%%%%%%%%%%%%
Lasteval1 = Anseval1;                           %%%%%% Last cost function
[beste,iterat] = min(Eval)
BestN                                           %%%%%% iterat
LastG = AnsG
LastL = sort(diag(D))                           %%%%% D = eig(A + B*AnsG*C)
LastV = V
BestG
BestL                                           %%%%% L = eig(A + B*BestG*C)
BestV
condBV = cond(BestV)
condLV = cond(LastV)
normBG = norm(BestG)
normLG = norm(LastG)
testBestN = BestN-iterat
BestJ = ro1*Besteval1
LastJ = ro1*Lasteval1
%%%%%%%% save the results and plot the cost function minimization %%%%%%%
Leval1 = Lasteval1;
Beval1 = Besteval1;
save Eval; save Beval1; save Leval1; save BestG; save LastG; save LastJ
save BestJ; save BestL; save LastL; save BestN; save iterat
plot(Eval)
xlabel('Iteration');
ylabel('Cost Function');
%%%%%% ----------------------- end ----------------------- %%%%%%

The program outputs the following results. The best answer is obtained at iteration number 288, where the cost function is minimized to 1.1211e-05.

beste = 1.1211e-05
iterat = 288
BestN = 288

LastG =
   -2.9851   -5.0062
   -4.9345   -8.9573

LastL =
   -4.9995
   -1.9923
   -0.9995

LastV =
   -0.3134   -0.6675    0.4059
   -0.1576    0.3309   -0.1006
   -0.9365   -0.6670    0.9084

BestG =
   -3.0038   -4.9975
   -5.0137   -8.9977

BestL =
   -5.0021
   -2.0013
   -0.9978

BestV =
   -0.3122   -0.6674    0.4037
   -0.1561    0.3349   -0.1012
   -0.9371   -0.6652    0.9093

condBV = 20.6801
condLV = 20.6786
normBG = 11.8350
normLG = 11.7697
testBestN = 0
BestJ = 1.1211e-05
LastJ = 5.9810e-05

Note that the naming in the above output is such that, e.g., BestV and condBV represent the modal matrix and its condition number corresponding to the best iteration, where the best iteration is the one which results in the minimum cost function. Thus, condBV is not necessarily the minimum among all the condition numbers. For instance, as observed above, we have condBV > condLV, where condLV corresponds to the last iteration. The same is true for normBG and normLG. See Fig. F.5 to observe how the cost function is minimized over successive generations.

Figure F.5 Minimization of the cost function.


Discussion: As we stated in Remark F.3, a GA works in probability, in the sense that several different runs of the program may not all result in a satisfactory answer. Moreover, the answers of different runs differ from each other. For instance, the results of three other runs of the program are shown in Fig. F.6. Note that in the first and third cases the obtained minima are not acceptable. In practice the program must be run enough times to obtain a satisfactory answer.

Figure F.6 Three typical runs of the program.
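The reported answer can also be verified independently of MATLAB. The following Python sketch (pure standard library; the matrices are transcribed from this example) forms the closed loop A + B·BestG·C and compares its characteristic polynomial with the desired (s + 1)(s + 2)(s + 5) = s³ + 8s² + 17s + 10:

```python
# Check the GA's answer: the characteristic polynomial of A + B*BestG*C
# should be close to s^3 + 8 s^2 + 17 s + 10.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def charpoly3(M):
    # Coefficients (c2, c1, c0) of s^3 + c2 s^2 + c1 s + c0 for a 3x3 matrix M,
    # via trace, sum of principal 2x2 minors, and determinant.
    tr = M[0][0] + M[1][1] + M[2][2]
    minors = (M[0][0]*M[1][1] - M[0][1]*M[1][0]
              + M[0][0]*M[2][2] - M[0][2]*M[2][0]
              + M[1][1]*M[2][2] - M[1][2]*M[2][1])
    det = (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
           - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
           + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
    return (-tr, minors, -det)

A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
B = [[1, 0], [1, 0], [1, 1]]
C = [[1, 0, 0], [0, 1, 0]]
BestG = [[-3.0038, -4.9975], [-5.0137, -8.9977]]

Acl = matadd(A, matmul(matmul(B, BestG), C))
c2, c1, c0 = charpoly3(Acl)
print([round(c, 2) for c in (c2, c1, c0)])   # close to [8, 17, 10]
```

The small residuals mirror the nonzero final cost BestJ: the GA assigns the poles only approximately.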

Example F.2: One may wonder why we do not do a direct search for the answer, the so-called brute-force search. What is the reason? The reason is that it is possible only for truly small problems. The above problem is not small! Note that the precision of the answer (in the case of the above example, the assigned eigenvalues) in general depends on the number


of decimal digits of the optimization variable (in the case of the above example, the elements of the feedback matrix G). The search space of the above example is 6-dimensional (the number of elements of the matrix G). Recall that the search space in each dimension is [-10, 10], as specified in the program. If we wish to search this space with 5 decimal digits, each dimension is divided into 20 × 10^5 points, and thus the whole space has (20 × 10^5)^6 = 64 × 10^36 points. This is the number of iterations that the program must run. On the other hand, at each iteration the cost function (here, the eigenvalues) must be computed. This in general takes a considerable number of floating-point operations (flops). Even if we suppose it is only 1 flop, we still have 64 × 10^36 flops. If we use a computer which is capable of carrying out 10^12 flops per second, which is well beyond the average home computer in the year 2017, this takes 64 × 10^24 seconds. But a year is only about 10^7.5 seconds! Hence it takes about 64 × 10^16.5 years! This is certainly impossible even for offline computations (where we allow, depending on the importance of the problem, from some days up to some months/years), let alone online computations in control systems. We should add that this problem is out of reach even for a supercomputer. However, if we decrease the precision, e.g., to 1 or 2 decimal digits of the search space (which in general means that the answer will be very inexact), or in general if our problem is truly small, then brute-force search may be possible, especially for offline computations. Yet, for online computations it is in general unacceptable, since the actual time is more than the time that we considered above: we have ignored the time of the cost function evaluation and the I/O complexity of the algorithm; see Section 1.16.3.
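The counting argument above is easy to reproduce. A Python back-of-the-envelope (the 365.25-day year here replaces the text's 10^7.5-second approximation, which changes nothing material):

```python
# Back-of-the-envelope count for the brute-force search of Example F.2.

points_per_dim = 20 * 10**5            # [-10, 10] gridded with 5 decimal digits
dims = 6                               # elements of the feedback matrix G
total = points_per_dim ** dims         # candidate points = iterations needed

flops_per_second = 10**12              # optimistic machine speed
seconds = total / flops_per_second     # assuming only 1 flop per candidate
years = seconds / (365.25 * 24 * 3600)

print(f"{total:.1e} candidates, roughly {years:.1e} years")
```

The result is on the order of 10^18 years, matching the 64 × 10^16.5 figure in the text.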
When a problem is numerically intractable, either because of time or because of memory usage, we refer to the situation as "the curse of dimensionality." This problem is usually associated with large-scale systems whose dimension is above, say, several thousand (depending on the application; see Remark F.5). However, as we saw above, it may also happen for systems whose dimension (here, the number of states) is low. We close this example by adding that the problem is better felt if we have a dynamic controller, say a three-term controller in each element of the feedback matrix. Then the dimension of the search space will be 3 × 6 = 18, which highly aggravates the already complicated situation. In passing, note that if we use a decentralized controller, then the search space is reduced, which helps make the problem computationally tractable, of course if feasible. We shall learn more about scientific/intelligent computations in the second part of the book on state-space methods.

Remark F.5: As you might have guessed, the decision of whether a system is small/medium/large, etc. depends on the particular application and the associated numerical solvers and hardware. A typical example that you will encounter in the


sequel of the book on state-space methods is SISO state-feedback pole assignment. As you will learn there, as the size increases the problem gets ill-conditioned, and around n = 15 (for typical originally unstable plants) the problem gets so ill-conditioned that the present pole assignment algorithms (up to the year 2017) fail. On the other hand, to further explain the connection with numerical algorithms, recall that among the factors that we should consider are the number of operations, the type of operations, and the communication cost of the algorithm. As an example, consider two algorithms that work in, e.g., O(n^3) and O(n^2), are more or less the same in other respects, and require 200 iterations to terminate. If we use a 10^12-flops processor, ignoring other times, the dimension n = 10^7 requires about 200 × 10^(7×3−12−7.5) ≈ 6 × 10^3 years with the first algorithm, and we consider it an instance of the curse of dimensionality; it is certainly a large-scale system. However, the same dimension with the second algorithm requires 200 × 10^(7×2−12−7.5) ≈ 6 × 10^(−4) years (a few hours), which is acceptable for offline computations, and thus we do not consider it as the curse of dimensionality.

Remark F.6: Considering the literal meaning of the word tractability, its generally accepted definition as the existence of a polynomial-time algorithm can be misleading, and in fact incorrect, in some cases even of truly small size. There are problems with polynomial-time algorithms for their solution which are out of reach even for offline computation with future generations of supercomputers. This topic is further detailed in the sequel of the book on state-space methods, which features a whole chapter on the computational side of systems and control problems.
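The two run-time estimates in Remark F.5 can be checked mechanically, using the same round numbers as the text:

```python
# The O(n^3) vs O(n^2) estimates of Remark F.5.

n = 10**7                  # system dimension
iterations = 200           # iterations until termination
speed = 10**12             # processor speed in flops per second
year = 10**7.5             # seconds per year, as approximated in the text

years_n3 = iterations * n**3 / speed / year   # first algorithm
years_n2 = iterations * n**2 / speed / year   # second algorithm

print(f"O(n^3): {years_n3:.1e} years, O(n^2): {years_n2:.1e} years")
```

The cubic algorithm needs thousands of years; the quadratic one finishes in a fraction of a day.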
Remark F.7: Modern versions of evolutionary algorithms (including modern GAs) use a blend of different techniques inside them: (1) mathematical techniques (LMI-based optimization, gradient-based routes, sum-of-squares programming, kernel-based rules, etc.), and (2) intelligent techniques (fuzzy operators, neural networks, etc.). Apart from general-purpose algorithms, dedicated algorithms for specific problems have also been developed. These results undoubtedly outperform the basic GA exemplified in this appendix, and we refer the interested reader to the pertinent literature.

Remark F.8: Speaking of computational load, we should add that there are various methods for reducing the computational load of evolutionary algorithms as well. This particularly refers to the fitness function evaluation; see Davarynejad et al. (2007), among others.
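As a toy illustration of the idea behind Remark F.8 (our own sketch, not the granulation method of Davarynejad et al.), simply caching fitness values already avoids re-evaluating chromosomes that survive across generations:

```python
# Minimal GA-style loop with a memoized fitness function. The "OneMax"
# fitness, population sizes, and mutation rate are illustrative choices.

import random
from functools import lru_cache

random.seed(0)

@lru_cache(maxsize=None)
def fitness(chromosome):
    # Toy fitness: number of ones in the bit tuple ("OneMax" benchmark).
    return sum(chromosome)

def mutate(chromosome, rate=0.1):
    # Flip each gene independently with the given probability.
    return tuple(1 - g if random.random() < rate else g for g in chromosome)

population = [tuple(random.randint(0, 1) for _ in range(16)) for _ in range(20)]
for _ in range(30):
    # Crude elitist loop (no crossover): keep the best half, mutate copies.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(p) for p in parents]

best = max(population, key=fitness)
print("best fitness:", fitness(best),
      "cached re-evaluations avoided:", fitness.cache_info().hits)
```

Since elitist survivors reappear unchanged in every generation, the cache absorbs a large share of the fitness calls; the literature cited above goes much further than this.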

References

For further details the interested reader is referred to the following works.

Alipouri, Y., Poshtan, J., 2013. A modification to classical evolutionary programming by shifting strategy parameters. Applied Intelligence 38 (2), 175–192.
Alipouri, Y., Poshtan, J., 2014. Global minimum routing in evolutionary programming using fuzzy logic. Inform. Sci. 292, 162–174.
Angelova, M., Pencheva, T., 2011. Tuning genetic algorithm parameters to improve convergence time. Int. J. Chem. Eng., vol. 2011. Available from: http://dx.doi.org/10.1155/2011/646917.
Augusto, O.B., Rabeou, S., Depince, Ph., Bennis, F., 2006. Multi-objective genetic algorithms: a way to improve the convergence rate. Eng. Appl. Artif. Intell. 19 (5), 501–510.
Bavafa-Toosi, Y., 2000. Eigenstructure Assignment by Output Feedback. MEng Thesis (Control), Department of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran.
Bertsekas, D., 2015. Convex Optimization Algorithms. Athena Scientific, Nashua.
Boyd, S., El Ghaoui, L., Feron, E., Balakrishnan, V., 1994. Linear Matrix Inequalities in Systems and Control Theory. SIAM, Philadelphia.
Chong, E.K.P., Zak, S.H., 2008. An Introduction to Optimization, fourth ed. Wiley, New York.
Chuang, Y.-C., Chen, C.-T., Hwang, C., 2016. A simple and efficient real-coded genetic algorithm for constrained optimization. Appl. Soft Comput. 38, 87–105.
Cruz, J.A.R., Pereira, A.G.C., 2013. The elitist non-homogeneous genetic algorithm: almost sure convergence. Stat. Probab. Lett. 38 (10), 2179–2185.
Davarynejad, M., Akbarzadeh-T, M.R., Pariz, N., 2007. A novel general framework for evolutionary optimization: adaptive fuzzy fitness granulation. In: Proceedings of the IEEE Congress on Evolutionary Computation, Singapore, pp. 951–956.
Fletcher, R., 2000. Practical Methods of Optimization. Wiley, New York.
Gutjahr, W.J., 2012. Runtime analysis of an evolutionary algorithm for stochastic multi-objective combinatorial optimization. Evol. Comput. 20 (3), 395–421.
Homaifar, A., Turner, J., Ali, S., 1992. The N-queens problem and genetic algorithms. In: Proceedings of the IEEE Southeast Conference, vol. 1, Birmingham, AL, USA, pp. 262–267.
Jansen, T., Zarges, C., 2014. Performance analysis of randomised search heuristics operating with a fixed budget. Theor. Comput. Sci. 545, 39–58.
Kim, B.M., Kim, Y.B., 1997. A study on the convergence of genetic algorithms. Comput. Ind. Eng. 33 (3–4), 581–588.
Lassig, J., Sudholt, D., 2014. General upper bounds on the runtime of parallel evolutionary algorithms. Evol. Comput. 22 (3), 405–437.
Mitchell, M., 1996. An Introduction to Genetic Algorithms. The MIT Press, Boston.
Mostaghim, S., Teich, J., 2003. The role of dominance in multi-objective particle swarm optimization methods. In: Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, pp. 1764–1771.
Mostaghim, S., Teich, J., 2004. Covering Pareto-optimal fronts by subswarms in multi-objective particle swarm optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2, pp. 1404–1411.
Naghib, E., Nobakhti, A., 2016. Entropic differential evolution – DE. In: Proceedings of the American Control Conference, Boston, MA, USA, pp. 2440–2446.
Oliveto, P.S., Witt, C., 2014. On the runtime analysis of the simple genetic algorithm. Theor. Comput. Sci. 545, 2–19.
Sharapov, R.R., Lapshin, A.V., 2006. Convergence of genetic algorithms. Pattern Recognit. Image Anal. 16 (3), 392–397.
Sivanandam, S.N., Deepa, S.N., 2008. Introduction to Genetic Algorithms. Springer, Berlin.
Struwe, M., 2008. Variational Methods: Applications to Nonlinear Partial Differential Equations and Hamiltonian Systems. Springer, Berlin.
Tempo, R., Dabbene, F., Calafiore, G., 2013. Randomized Algorithms for Analysis and Control of Uncertain Systems, second ed. Springer, Berlin.

Appendix G: Sample exams

G.1 Sample Midterm Exam (ME) (4 h, closed book/notes)

ME.1. (1) In the 2-DOF version of the standard 1-DOF control architecture, how should we design the prefilter? In your answer include an analysis of the frequency content of the reference input. (2) Does it have any effect on the controller input and controller output signals? (3) How about the error signal? Discuss.
ME.2. Consider a dynamical system with an LTI controller for following a sinusoidal reference input. Can we design the controller such that the steady-state phase shift between the input and output is zero? Discuss. (5%)
ME.3. A nonlinear system is given by ẋ_1 = −2x_2, ẋ_2 = x_1 u, with x(0) = [−1, 2]^T. (10%) (1) Find the linearized model around u = 2 and discuss its stability. (2) Find the exact solution of the system for this input.
ME.4. Consider Fig. G.1 below. (10%) (1) Find the transfer function between the input f and the position of the mass m_2. Is it strictly proper? (2) Draw its electrical equivalent.
ME.5. Consider a system whose SFG is given in Fig. G.2. (10%) (1) Find its transfer function. (2) Draw its block diagram equivalent as well.
ME.6. An open-loop control system is given by C(s) = k/(s + a) and P(s) = (s + 1)/[s(s − 1)]. Find the range of the controller parameters for its closed-loop stability. (10%)
ME.7. Consider the system in Fig. G.3. (25%) (1) Suppose that C(s) = 1. Find the response of the system for a step input. (2) With this controller find its rise time, peak time, maximum percent overshoot, settling time, and bandwidth.

Figure G.1 Problem ME.4 (masses m1 and m2, springs k1 and k2, damper b1, input force f).


Figure G.2 Problem ME.5.

Figure G.3 Problem ME.7.

(3) With this controller, in what frequency range are sinusoidal inputs amplified and in what frequency range are they attenuated? (4) Find the controller C(s) = k such that the input r(t) = sin 0.5t appears in the output with the same magnitude at steady state. (5) Synthesize the simplest controller C(s) such that the system tracks the ramp input with zero steady-state error. (6) With the controller of part (5), consider the delays T = 0.2 s and T = 0.1 s in the forward and outer feedback paths, respectively, as well as the sensor dynamics P_S(s) = 50/(s + 50) over the outer loop. Find the steady-state error of the system.
ME.8. This problem has two parts. (20%) (1) Without doing much computation, draw the root locus of the system L(s) = k(1 − s^2)/s^5 for both positive and negative values of the gain, in separate figures. (2) Propose a synthesis (or controller structure) for step-tracking of the plant P(s) = (s^2 − 4)/(s^4 − 1). Discuss.
(ME.Bonus) Propose a ramp-tracking controller for the system P(s) = 1/(s − 1) with the sensor dynamics P_S(s) = 100/(s + 100). (10%)

G.2 Sample Endterm Exam (EE) (4 h, closed book/notes)

EE.1. Comment on the choice of the closed-loop BW of the following system. (2.5%) L = k(s − 1)(s − a)/(s^2 − 5)


EE.2. What is the implication of NMP poles and zeros of a system loop gain on the sensitivity of the closed-loop system? Discuss the plant and controller separately. (5%)
EE.3. Consider the standard 1-DOF control structure. The plant P(s) is of type 1 and has NMP pole(s) and zero(s). A designer designs an internally stabilizing controller for it in the form of C(s) = [(s^2 + a)/(s(s − b))] C_1(s), a, b > 0. (17.5%) (1) Discuss the philosophy behind the controller terms and the problem setting. (2) Let the reference input, input disturbance, and output disturbance be r(t) = 2 step(t) + sin ct, c ≠ a; d_i(t) = −step(t); d_o(t) = ramp(t) + 3 sin at, respectively. In the standard notation e = r − y, u_p = u + d_i, y = y_p + d_o, find e_ss(t), u_ss(t), y_p,ss(t). The subscript ss denotes the steady state. (3) Let r(t) = step(t), d_i(t) = ramp(t). Find y_ss(t). (4) In part (3) let the actuator signal be restricted to positive values in [0, U_A,max]. Does the actuator saturate? (5) What can you say about the step response of the system with regard to the inverse response phenomenon, the amount of possible undershoot and overshoot, settling time, etc.? (6) Is there any fundamental limitation on the bandwidth of this system? (7) Let r(t) = sin ωt. Is it possible or impossible that at steady state the output has a peak-to-peak magnitude larger than 2? Explain the reason.
EE.4. This problem has four parts. Supply the reason in the case of a negative answer, and provide an example in the case of a positive answer. (10%) (1) Can the Nichols chart of a system start or end at the critical point? (2) Can the critical point be on the curve at a mid frequency? (3) What about forming a closed loop with the critical point on it? (4) What about forming a closed loop with the critical point either inside or outside of it?
EE.5. The Bode diagram of a system is given below in Fig. G.4. (15%) (1) Find a possible transfer function for it. (2) Is the answer unique or not? Why? (3) Is your closed-loop system stable? (4) Comment on its steady-state error for different inputs.
EE.6. The Nyquist plot of the system L = K((s − 0.1)^2 + 1)((s − 0.1)^2 + 9) / [(1 − s)((s + 0.2)^2 + 4)((s − 0.2)^2 + 25)] for K = 5 is provided in Fig. G.5. The computed PMs at the points W, X, Y, and Z are [37.8504, −76.9086, 102.9172, −96.1210] deg at ω = [1.6991, 2.3168, 3.9104, 7.6743] rad/s. From left to right the real-axis crossings are [−1.8896, 0, 0.0928, 0.1192, 0.4498, 8.9606]. (15%) (1) Determine the stability, stability range, GM, PM, and DM of the system. (2) Divide the stability range into regions in which the GM is positive or negative only.
EE.7. Sketch the Nyquist plot of the system L(s) = K(s^2 + 1)/[s(s − 1)(s − 2)] and find the range of K for stability. (10%)


Figure G.4 Problem EE.5.

Figure G.5 Problem EE.6, not drawn to scale.

EE.8. The Bode diagram of the system G = 25/[s(s − 5)(s + 10)] is given in Fig. G.6. (25%) (1) Design a lead-lag controller such that the system has PM = π/3 and a steady-state error of at most 2% in tracking a reference ramp input. (2) Design a PID controller for the same performance specification.
(EE.Bonus) Discuss the inclusion of output disturbance, sensor dynamics, and prefilter in the Smith predictor design. (10%)


Figure G.6 Problem EE.8.


Index Note: Page numbers followed by “f ” and “t” refer to figures and tables, respectively. A Abscissa, 357 358, 401, 404, 406 407, 420 421 Acceleration constant, 259, 261, 678 error, 259, 260t, 544t, 569 570, 693 694 input, 259, 299 303, 321 322, 546 547, 692, 694 Accuracy of computational algorithms, 600, 784 Active suspension system, 183f Actuator placement, 72 selection, 12 Aerospace applications, 3 Aircraft, 4 Algebraic geometry, 46, 303 304, 762 763, 875 876, 896 Amplifier, 10, 12, 16, 43, 56, 100 101, 121 123, 672 675, 852 853 Analytic function, 739 Angle of arrival, 360 362 condition, 356, 363, 404, 655 of departure, 360 361, 371 372, 403 406, 416 Anti-windup technique, 755 Asymptotic stability, 872, 876 877

Biomimetics, 50 Birkhoff, G.D., theorem minimal set, 878 Black, H.S. feedback structure, 5 7 Block diagram, 85, 123 128, 131 132, 134 135 Bode, H.W. diagram, 5 7, 257, 273 274, 445, 447, 460 461, 466, 473, 503, 531 547, 553 554, 559 560, 600 603, 606 607, 630, 688 689, 702, 873 differential sensitivity and limitations, 72 gain and phase margins, 554 559 Bounded input bounded output (BIBO), 201 206 Branch of root locus, 416 418 Break point break-away point, 380, 401, 414 415, 655 657 break-in point, 359 360 Brute-force search optimization, 907 Bumpless transfer, 89, 677 Business system, 13 Butterworth, S filter, 271, 728 735, 729f, 732f, 785

B Backstepping, 39 40, 607 608, 873 Bandwidth, 19, 911 Barkhausen, H. gain and phase margins, 599 oscillation and stability, 641 Behavioral approach, 36 37, 174, 316 Biochemical engineering, 50, 107 108 Biological system, 47 48, 50, 110, 119 120, 244 Biomedicinal engineering, 45, 50

C Cancellation of poles and zeros, 204 205, 232, 238, 240, 291, 331, 376 377, 394, 695, 715, 738, 778, 789, 794 Certainty equivalence, 42 Characteristic equation, 141, 201 202, 206 207, 215 216, 220 221, 233, 237, 296, 326 327, 351, 355, 358, 363, 366 367, 372 373, 401, 416, 447, 661 663, 796


Chemostat, 187 Chromosome, 897 898 Closed-loop system, 8, 123, 207, 238, 265, 283, 450 451, 520, 561, 574, 576, 591, 594 596, 605, 618, 661 662, 767 Cohen-Coon tuning rules, 661 Compensation, 629 Computational/numerical algorithm communication cost, 47 48, 73, 908 909 efficiency, 39, 47 48 I/O complexity, 47 48, 908 memory, 47 48 operation number/type, 47 48 speed, 47 48 Condition number, 771 774, 784, 863 864, 906 Constant magnitude loci, 596 Constant phase locus, 596 Control adaptive, 36 40, 44 45, 89 aircraft, 4 amplifier, 36 antiwindup and, 33 automotive, 36 behavioral approach and, 36 37 bifurcation, 36 37 biological systems, 44 biomedicinal systems, 36 chaos, 37 39 constrained, 36 39, 73, 783, 806 continuous-time, 37 39, 41t derivative, 12, 630 descriptor systems, 36 37 deterministic, 13 14 discrete-time, 5 7, 36 37, 123 economical systems, 36 estimation and, 36 37 fault tolerant, 36 37, 41t filtering and, 5 7, 36 37, 39 40 finite-time, 5 7, 41t, 42 fixed-time, 42, 189 flow, 9, 9f, 36 fractional-order, 36 37, 47 48 fragility of, 47 48 fuzzy, 36 39, 49 game theory and, 36 37, 41t hybrid, 36 39, 41t identification and, 5 7, 36 37, 39 40


industrial, 31, 36 37, 100 101, 121, 123, 373 374 infinite-dimensional systems, 36 37 infinite-time, 41t, 42 integral, 46 intelligent, 35 39 internal model (IMC), 4, 33, 245f, 670 671, 676 large-scale systems, 37 42 learning, 36 39 liquid level, 64, 104 managerial systems, 36 missile, 4, 36, 89, 532 model-based, 40 42, 41t model-free, 35 36, 41t, 73 model predictive (MPC), 36 37, 42, 189 multivariable, 36 37, 39 40, 41t, 44 45, 189 network, 36 39 neural network and, 36 42, 41t nonlinear, 36 40, 41t, 89, 174 optimal, 5 7, 12, 14, 36 39, 41t, 44 45, 445 optimization and, 12 13, 36 37 pneumatic, 86, 115 116 power electronics, 36 power systems, 37 39 process, 36 37, 89, 271, 660, 670 671 proportional, 12, 247, 631 649, 672 quantum, 41t, 47 48 reconfigurable, 41t repetitive, 54 robotics, 36 37 robust, 14, 33 34, 36 40, 44 45, 89, 122, 215, 224, 240 241, 300 303, 503 504, 589, 607, 735, 774 satellite, 119 120 ship, 36 singular perturbation and, 36 37, 44 45 social system, 115 116 softcomputing and, 36 37 sparse, 39 40, 45 stochastic, 13 14, 36 40, 41t, 46 48 structural, 36 time-delay system, 36 37 time-varying system, 36 37 traffic, 36 37 vibration, 36


Controllability condition (state feedback) Kalman, R.E., 764 integral, 774 Convergence, 164, 874 876, 899 Convex control problem formulation, 45, 53, 676 optimization, 725, 779, 895 896 Convexification, 895 897, 897f Corner frequency, 536 537, 540 541, 553 554, 567, 730 Critical damping, 271, 333, 335, 353 Crossover operator, 898 909 Curse of dimensionality numerical computation, 908 909 Cut-off frequency, 730, 733 Cutoff frequency, 733 Cybernetics, 5 7, 45 46 D Damped natural frequency, 265 266, 328 333 Damping ratio, 217 218, 262, 265, 267, 271, 324 325, 328 333, 423 424, 662 663 Dashpot, 182 183, 832, 832f Dead time, 32f, 400 Deadzone, 14, 32, 118, 305, 664 Decade Bode diagram, 472 473 Decibel, 483 484, 532 Decoupling index, 769 770 Degree of freedom, 44 45, 637 638, 705, 834t, 835t Delay time, 116 117, 269 272, 329, 668 670, 867 Denominator, 28, 61, 99 100, 238, 263, 304 306, 420 421, 454 455, 534 Derivative backoff, 677 kick, 677 Describing function analysis, 873 Descriptor systems, 36 37, 135 136, 174, 316 Design, 11 13, 629, 658 660, 778 781 Desoer, C.A., and Chan, W.S. internal stability, 230


Deterministic system, 37 39, 872 875 Differential drive robot, 119 120, 140, 185, 185f Differential algebraic equation, 42, 49 50, 135 136 equation, 5 7, 32, 92, 96, 819 820, 829 830, 871, 877 879 geometry, 46 inclusion, 871 sensitivity, 72 Discrete-time control systems Barker, R.H., 5 7 Hurewicz, W., 5 7 Jury, E.I., 5 7 Lawden, D.F., 5 7 Linvill, W.K., 5 7 Ragazzini, J.R., 5 7 Tsypkin, Y.Z., 5 7 Zadeh, L.A., 5 7 Dissipativity, 42, 505, 782 Disturbance input, 27, 31, 54, 58, 62 64, 335 338, 666f output, 27, 54, 58, 62 63, 335 336, 666f setpoint, 70 71, 794 DNA/RNA engineering, 45 DOF (degree of freedom) 1-DOF, 28, 29f, 41t, 222f 2-DOF, 27 32, 28f, 29f, 41t, 911 3-DOF, 32, 41t, 44 45 Dominant poles, 271, 351, 366 367, 393, 401, 751 Doyle, J.C. modern representation, 33 35 structured singular value, 231, 779 E Efficiency, 39, 47 48, 833 Eigenstructure assignment, 725, 759 766 Eigenvalue Frobenius, 805 Perron-Frobenius, 804 problem, 165 166, 784 Eigenvector Frobenius, 805 Perron-Frobenius, 805 Equation difference, 174


Equation (Continued) differential, 5 7, 32, 92, 96, 819 820, 829 830, 871, 877 879 differential algebraic (DAE), 42, 49 50, 135 136 integral (IE), 46, 159, 819 Abel, 819 820 fractional, 819 820 Fredholm, 819 820 integro-differential, 819 820 nonlinear, 819 820 numerical solution, 819 820 Resolvant kernel, 819 820 successive approximation, 819 820 variational approximation, 819 820 Volterra, 819 820 Wiener-Hopf, 819 820 ordinary differential (ODE), 94, 174, 871 partial differential (PDE), 46, 94, 165 166, 174 regularization, 49 50 solvability, 762 763 sylvester, 806 uniqueness of answer, 765 Equilibrium, 91, 111, 114 115, 201 202, 879 Estimation, 36 40, 41t Euler, L., 128 129, 164 165 Evans, W.R. root locus, 5 7, 354 355 Evolutionary algorithm, 14, 39, 896 897 Expansive systems, 41t Exponential function, 142, 203 204, 853 F Feedback element, 8, 28, 31, 56, 70 71, 100 101, 121 122, 127, 167, 245, 258, 844, 852 853 mechanism, 5 9, 56 Feedforward, 73, 170 171, 300, 619, 625, 630 Filippov method, 872 873 Filter(ing) Butterworth, 271, 728 735, 732f, 785 Chebyshev, 730 731, 730f elliptic, 730, 730f Final value theorem, 258 262, 319, 821 822


Finite-horizon control, 189, 783 Finite-state system, 42 First-order system, 263 265, 274 276, 664, 785 Floquet theory, 504 Force-current analogy, 835t Force-voltage analogy, 834t Fourier transform, 21, 820, 824 829 Fractional-order control, 44, 47 48 system, 36 37, 229, 400, 783 Francis, B.A. and Wonham, W.M. internal model principle, 54 Francis, B.A. and Zames, G. water-bed effect, 787 Frequency response, 257, 273 274, 279, 445 446, 553 554, 561, 649, 660 661, 668, 851 Fundamental limitations, 317, 659, 711, 723, 740 741, 757 759, 765, 773 777, 781 782 Bavafa-Toosi, Y., 732, 761, 779, 781 783 Bigelow, S. C., 782 Bode, H. W., 739 742, 781 782 Freudenberg, J. S. and Looze, D. P., 782 Goodwin, G. C., et al., 777, 782 Middleton, R., et al., 782 Youla, D. C., et al., 782 G Gain margin (GM), 215, 446 447, 569 570, 599, 601, 615 616, 712 Game theory, 36 37, 41t, 44 45 Garcia, C.E., and Morari, M., 33f, 73 Gear, 3, 106, 146, 175, 833f Generalized controller, 68 model, 33 35, 62 63, 135 136 plant, 34 35, 63 64 Genetic algorithm, 677, 779, 797 798, 895, 897 909 H Habilitation, 44 45 Hamming norm, 864 Hermite, C test, 872 873 Hermite Biehler, 230, 249, 873, 887


Heuristic methods, 660, 896 High-order system, 271, 285 286 Humanoid robot, 42, 47 48 Hurwitz, A determinants, 214 Hydraulic, 85 86, 100 101, 105 106, 115 116, 833 I Identification, 5 7, 36 40, 106, 161, 263 Impulse response, 95, 238 239, 263 264, 266 267 Inertia, 185 Infinite-dimensional systems, 36 37, 174 Information theory, 45 46, 51, 446, 864 Initial value theorem, 821 822 Input node, 130 of system, 174 Instability, 19, 44 45, 220, 472 473, 754 755, 759, 789 Intelligent computation, 36 37, 39, 47 48, 908 control, 35 39 Internal model control (IMC), 4, 33, 33f, 245f, 670 671, 676 Internal model principle (IMP), 42, 54 Interpolation conditions, 725, 735 739, 782, 787 788 Inventory system, 37 39 Inverse Laplace transform, 96 99, 203 204, 232, 264 268, 273, 304 306, 330, 822 of plant, 294 polar plot, 447 response, 257 258, 263, 293 295, 294f, 295f, 319, 323, 416, 749 750, 781, 783 Inverted pendulum, 12 13, 95, 119 120, 150, 181, 789 J James, H.M., and Weiss, R.P. BIBO stability, 202 203 Jordan form, 160 172, 759 760 K Kharitonov, V.L. theorem, 44 45, 202, 220 221, 228 230, 504


Kirchhoff’s law, 86 KMN (Krohn-Manger-Nichols) chart/plot, 5 7, 445, 589, 742 Krohn, E.H. (KMN), 5 7, 445, 589, 742 L Lag compensator, 660 Lag-lead compensator, 630 649, 675, 681 683 Lagrange, J.L. Euler-Lagrange equation, 834 Lagrange equation, 834 Laplace transform, 32, 95 99, 102, 123 124, 144 145, 265, 304 306, 817 Lead compensator, 676 Lead-lag compensator, 135 136 Lienard, A., and Chipart, M.H. test, 202, 206, 214 215, 214t, 872 873 Limit analysis, 820 Linear matrix inequality (LMI), 218, 668, 874 875, 895 896 Linear system, 88, 762 763, 802, 865 Linear time invariant (LTI) system, 13 14, 88 Linear time varying (LTV) system, 5 7 Linearization, 4 5, 51, 89 94, 135, 153 154, 174 Log magnitude, 446, 532, 534, 590 591, 595 596, 607, 729 Logarithmic scale, 466, 533, 533f, 565 Loop gain, 21, 28 29, 58, 130, 219 220, 240, 262, 678, 697, 713 714, 736, 740 744, 746 748, 791, 794, 913 Lyapunov, A.M. equation, 39, 775 776, 864 865 function, 45, 49, 782, 879, 888 stability, 5 7, 201 203, 228 229, 242, 777, 871 872, 877 879 M M circle, 593 594, 618 Magnitude condition, 356 357, 366 Manger, W.P. (KMN), 5 7, 445, 470 471, 503, 565, 589 Mapping theorem, 742 Mason, G formula, 132 134 Mathematical model, 12, 37 39, 104, 114 115, 155, 163 164, 187, 514


Matrix Bell, 784 Circulant, 784 Leslie, 784 M-, 230 Metzler, 230, 777, 784 P-, 230 (Skew-)Hermitian, 244 (Skew-)symmetric, 149, 151 152, 772 773, 781, 797 798, 888 Toeplitz, 504, 784 Z-, 230 Maximum overshoot, 269, 289, 378, 726 Measure infinite, 878 879 invariant, 878 879 Lebesgue, 878 879 Mechanical system, 16, 102 103, 181, 831 Memory numerical algorithm, 47 48 Minimal realization, 850 Minimum norm solution, 849 Minimum phase (MP) system, 454 455, 472 473, 531 532, 547 554, 739 Minimum global, 114 115, 896 897, 899 local, 848 849, 896, 899 Mixed node, 130 Mixed sensitivity minimization, 778, 798 Multi-body systems, 49 50 Multiple input multiple output (MIMO), 15, 26, 33, 69, 72, 94, 98 100, 607 608, 660, 725, 759, 765 766, 783 Multiscale systems, 44, 49 50 Mutation operator, 898 909 N N circle, 591 592, 594 596, 595f Neural networks, 14, 36 40, 41t, 44 45, 668 Newton’s law, 86 Nichols, N.B. (KMN), 5 7, 445, 589, 742 Node, 129 130, 163 Noise additive, 70 72, 794 multiplicative, 71


Nonlinear equation, 140, 865 866 system, 4 5 Non-minimum phase (NMP) system, 118 120, 178, 294 295, 319, 352, 446 447, 472 473, 488, 499, 547, 553 554, 591, 607, 630, 781, 783 Non-touching loop, 130, 179 Norm Frobenius, 174, 771, 804 805, 864 hamming norm, 0-, 864 induced, 771, 804 non-induced, 771 Schatten p-, 804 NP-hard numerical algorithm, 73 Numerator, 28, 61, 95 96, 99 100, 238, 288, 358, 398 399, 454 455, 534, 544 547, 678 680, 737 738 Nyquist, H criterion, 447 449, 451, 459, 504 505, 623, 872 873 plot, 445 503 O Observability, 204 205, 764, 773 774, 779 Octave Bode diagram, 533 Open-loop system, 22 23, 33, 60, 99, 207, 224, 283, 352, 451, 459, 479, 518 519, 562, 580, 596, 609 610, 618, 633 635, 642, 663, 695 697, 791 792 Operational amplifier, 121 122, 630, 672 676 Optimization algebraic geometry methods, 896 Bayesian, 896 convex, 725, 779, 895 896 differential evolution, 896 evolutionary algorithm, 896 genetic algorithm, 897 909 heuristic methods, 660, 896 memetic algorithms, 896 meta-heuristic methods, 896 Monte Carlo, 896 response surface methods, 896 self-organization, 896


simulated annealing, 896 swarm, 896 Output equation, 87 88, 102, 104, 111, 123 124, 146 147 node, 130, 130f of system, 43 Overshoot, 30, 61 62, 135 136, 238 239, 261 262, 268 269, 271, 290 292, 295, 318, 324 325, 328 333, 378, 384 385, 662 663, 733, 741 742, 755 P Pade, H.E. approximation, 117 118, 174, 777 Parallelization input output (for zero assignment), 54 Parity-interlacing property (PIP), 224 225 Partial fraction expansion, 264 265, 822 823 Passive systems, 73 Path, 130 Peak time, 269 Performance index, 770 ISE, 779 ITAE, 779 specifications, 13, 629 Perturbation additive, 215, 774 multiplicative, 774 structured, 216 217, 774 775 unstructured, 216 217, 775 776 Phase lag, 118 119, 660, 680 683 lead, 660, 702 lock loop (PLL), 245 margin (PM), 5 7, 599 of system, 460 461, 500, 507 510, 556 PID (P, PD, PI, P-PI-PD), 12, 630, 649 652, 660 670 Polar plot, 447, 503 Pole, 99 100 Pole/Eigenstructure assignment Abdelaziz, 39 Ackermann, 39 Alexandridis-Paraskevopoulos, 39


Askarpour-Owens, 39 Barnett, 39 Bass-Gura, 39 Calvetti-Lewis-Reichel, 39 Davison-Smith, 39 Duan, 39 Esna Ashari, 39 Fahmy-O’Reilly, 39 Gourishanker-Ramar, 39 Ichikawa, 39 Kautsky-Nichols-VanDooren, 39 Khaki Sedigh-Bavafa Toosi, 39 40, 43 44, 51 Klein-Moore, 39 Kwon-Youn, 39 Maki-VanDeVegte, 39 Mayne-Murdoch, 39 Mehrmann-Xu, 39 Minimis-Paige, 39 Munro, 39 Porter, 39 Seraji, 39 Shafai-Bhattacharyya, 39 Srinathkumar-Rhoten, 39 Syrmos-Lewis, 39 Tarokh, 39 Varga, 39 Wang, 39 Wonham, 39 Popov criterion, 873 Position constant, 259, 545 error, 259 Positive systems, 776 777, 785 D.G. Luenberger, 782 Positive-real theorem, 873 Power electronics, 36 Predictive feedback control, 42 model control, 36 37, 42, 55 Prefilter, 29, 61 Principle of Argument, 448 450 of superposition, 45, 768 769 Proportional derivative (PD), 673, 674f integral (PI), 673, 673f integral derivative (PID), 673, 674f


Q Quantitative feedback theory (QFT), 589 Horowitz, I.M., 504, 607 608 R Ramp input, 306 307, 395, 631 Rank of matrix, 98 Realization, 98, 633, 850 Reference input, 7 8, 794 795 Regulation, 4, 761 765 Relative degree, 96, 366, 377 Residue, 822 Resonant peak frequency, 618 peak magnitude, 697 Rise time, 269, 328 333 Robotics, 36 37 Root contour, 351, 372 373 locus, 5 7, 323f, 324 325, 351 399, 560, 606 607, 655 658 Routh, E.J., and Hurwitz, A., 202, 206 dynamic, 231 test, 208 213, 887 S Saturation, 553 554, 680 683, 795f Schwarz form, 888 Scientific computing, 39 Second-order system, 257 258, 264 273, 276 279, 561 562 Sensitivity, 736 737 eigenvalue, 770, 772 773, 781 782 eigenvector, 784 integral, 791 Poisson integral, 725 726, 739 745, 752, 781 782 Sensor placement, 72 selection, 12, 774 Servomechanism, 257 Hazen, H.L., 5 7 Setpoint, 85, 794 Settling time, 270f, 271f, 880 Signal flow graph (SFG), 5 7, 128 134 Singular perturbation, 36 37, 785 Sink node, 130 Sinusoidal transfer function, 531


Sliding mode control (SMC), 504 Small-gain theorem, 39 40, 873 Smith, O.J.M. predictor, 32, 73 Softcomputing, 36 37, 39 Software real-time, 36 37 verification, 54 55 Source node, 130 Space Banach, 174, 189, 871 compact, 878 function, 5 7 hardy, 189 Hausdorff, 878 Hilbert, 189 indefinite metric, 44, 189 measure, 44, 49 50, 135, 871 nonintegrable, 44 Riemannian, 878 Sobolev, 189 topological, 135, 871, 878 879 Spectral radius, 804 Speed numerical algorithm, 40 42, 784 Spring-mass-dashpot system, 102f, 144f, 145f Stability absolute, 872 Aeyels-Peuteman theorem, 873 Agrachev-Liberzon theorem, 873 Algebraic methods for, 135 136 in the almost sure sense, 874 Amato-Ambrosino-Ariola-Calabrese theorem, 873 Artstein, Z., theorem, 873 asymptotic, 225, 872, 876 877 backstepping methods, 873 backward, 47 48, 875 876 Balochian-Khaki Sedigh-Haeri theorem, 873 Balochian-Khaki Sedigh-Zare theorem, 873 Barbashin, E.A. theorem, 873 Barkhausen, H., test, 872 873 Bavafa-Toosi, Y., theorem, 225, 882, 882, 882, 882 BIBO, 201 206


Bloch’s theorem, 873 Bode’s test, 206, 872 873 center manifold method, 873 circle criterion, 873 comparison theorems, 873 computational tests, 873 conditional, 872 connective, 873 874 contraction methods, 873 converse theorems, 873 D-, 202, 217 218, 241 decentralized, 871 872, 880 882 describing function analysis, 873 direct (or computerized) test, 206, 872 873 dissipativity-based methods, 42, 782 eventual, 872 evolutionary, 874 exponential, 872, 875 876 extreme, 872 Filippov (in the sense of), 872 873 finite-time, 871, 879 882 fixed-point theorems, 873 fixed-time, 879 882 foster-type criteria, 874 875 global, 872 of graphs, 875 876 Halanay-type theorems, 873 Hale, J.K., theorem, 873 Hermes (in the sense of), 873 874 Hermite-Biehler test, 873 Hermite’s test, 872 873 Hurwitz’s test, 213 214, 351, 872 873 hyper, 872 incremental, 874 integral, 872 internal, 221 223 invariance-type methods, 873 ISS, 39 40, 872 Exp-ISS, 872 iISS, 872 IOpS, 872 IOS, 872 IOSS, 872 ISDS, 872 OSS, 872 Jury test, 872 873 Karafyllis, I. theorem, 873 Kharitonov-type tests, 873


Khas’minskii criteria, 874 875 Krasovskii, N.N. in the sense of, 873 874 theorem, 873 Kurzweil, J., theorem, 873 Kushner criteria, 874 875 L (Lp,lp), 872 Lagrange, 872, 877 879 LaSalle, J.P., theorem, 873 Lehtomaki’s robustness test, 873 Lev-Ari/Bistritz/Kailath test, 872 873 Lienard-Chipart’s test, 206, 214, 872 873 Lin-Sontag-Wang theorems, 873 linearization-type methods, 873 Lipschitz, 871 872, 876 877 LMI tests, 873 875 Loria-Panteley-Popovic-Teel theorem, 873 Lyapunov, 201 202, 872, 877 879 Lyapunov-Volterra, 873 874 of manifolds, 875 876 of maps, 875 876 Massera, J.L., theorem, 873 Matrosov, V.M. theorem, 873 Middleton-Goodwin test, 872 873 Mikhailov criterion, 873 Milnor, F., theorem, 873 min-max tests, 873 µ test, 873 multi-, 873 874 Narendra-Annaswamy theorem, 873 Nichols’ test, 872 873 Noise-to-state, 874 Nonlinear, 874 N-Sigma, 874 Nyquist’s test, 206, 872 873 off-axis circle criterion, 873 of operators, 875 876 orbital, 879 output-input, 872 over finite alphabets, 872 partial variable, 873 874 passivity-based methods, 873 perturbation methods, 873 phase-plane analysis, 873 Poisson, 874, 876 879 polynomial, 874 876 Popov criterion, 873 positive real theorem, 873


Stability (Continued) Pranja-Parrilo-Rantzer method, 873 in probability, 874 in the pth moment, 874 Pyatnitskiy-Rapoport theorem, 873 quadratic, 872 quasi, 873 874 randomized tests, 873 Rantzer, A. theorem, 873 Razumikhin-type theorems, 874 875 relative, 215 217, 872 root locus method, 872 873 Rouche-type tests, 873 Routh’s test, 872 873 sample and hold ( in the sense of), 873 874 Schur-Cohn test, 872 873 semi, 873 874 of sets, 872, 875 876 sign, 873 874 sign semi, 873 874 small-gain theorem, 873 extended scalar, 873 IOS, 873 ISS, 873 local, 873 mean-square, 873 nonlinear, 873 stochastic, 874 stochastic structural, 874 of subspaces, 875 876 strong, 224 225 sum-of-squares tests, 873 super, 872 Tarraf-Megretski-Dahleh, theorems, 872 874 total, 872 transformation methods for, 873 Tsypkin test, 872 873 uniform, 872, 876 877 in variation, 876 877 vibrational methods for, 873 weak, 872 Yoshizawa, T. theorem, 873 Stabilization, 224 225, 392 397, 774 776 Stable system, 4 5, 285 286, 473, 501 502 State equations, 87 88, 164, 767


-space equations, 123, 820 variable, 87 88, 135 Steady-state analysis, 355 error, 259 263, 378, 544 547 response, 274 276, 279 Step input, 22, 22f, 25f response, 62, 267 273 Stochastic systems, 37 39, 874 875 Structure preservation, 73 Synthesis, 35 36, 629 630, 659, 663 664 System theory, 45 46 type, 258 259 T Taylor series, 89, 174 Thermal system, 104 105 Third-order system, 288 Time delay, 100 101, 660, 757 759, 851 invariant system, 89 response, 257 306 series, 42, 174 varying system, 36 37, 73, 176 Topology, 114 115, 128 129 Torque, 62 63 Total variation, 726, 727f Tracking, 4, 14, 22, 238 239, 257, 544, 728, 766 Tractability numerical, 40 42 Transfer function, 32, 132, 161, 204 205, 286, 446, 447, 451 453, 531, 561, 728 735, 781 782 Transient response, 257, 749 750 Transmittance, 132 134 Transport lag, 852 853 Transpose, 771 TS design method, 735 Tuning rules, PID analytical rules, 661 668 Skogestad, S., 661, 664 668 experimental rules, 668 670 optimization rules, 668 Astrom, K.J., Hagglund, T., 668


U Undamped natural frequency, 265, 272, 329 Undershoot, 268 269 Uniform stability, 876 877 Universal, 880 881 V Van der Pole equation, 141 Velocity constant, 259 error, 259 input, 260 Virtual design, 49 50 W Water-bed effect, 786 787 Watt’s governor, 5 7 Wavelet, 49 Weighting function, 95, 108f Wiener, N cybernetics, 5 7 filter, 5 7 Wilkinson, J.H. sensitivity, 772 773 Y Youla-Bongiorno-Lu strong stabilization, 230 231 Youla Kucera parametrization, 231


Z Zadeh, L.A. discrete-time control, 5 7 extended linearity, 503 504 filtering, 5 7 fuzzy control, 12 fuzzy mathematics, 49 fuzzy systems, 49 general linear system theory, 86, 94 95 generalized Fourier transform, 824, 826 827 generalized Norton & Thevinin, 54 identification, 5 7, 86, 161 intelligent computation, 39 intelligent control, 35 39 linear time-varying systems, 5 7 state (concept), 94 95 system theory, 54 55, 174 thinking machine, 54 transfer function (ZTF), 161 Zames, G HN, 503 504 input-output circle criterion, 503 504 passivity theorem, 503 504 small-gain theorem, 503 504 Zero, 99 100, 535