Sensitivity of automatic control systems


English. Pages: 435 [444]. Year: 2000.


© 2000 by CRC Press LLC

Visit the CRC Press Web site at www.crcpress.com

Preface

The idea of publishing this book in English came to the authors while reviewing the generalizing monograph Theory of Sensitivity in Dynamic Systems by Prof. M. Eslami (Springer-Verlag, 1994), which deals mostly with the sensitivity of linear systems. That monograph covers a wide scope of sensitivity theory problems and contains a bibliography of more than 2500 titles on sensitivity theory and related topics. Nevertheless, the authors were surprised to find that it omits many topics of sensitivity theory that are important from both theoretical and applied viewpoints. These problems are considered in the present monograph.

By sensitivity of control systems one usually means the dependence of their properties on parameter variations. The aggregate of principles and methods related to sensitivity investigation forms sensitivity theory. Sensitivity problems have, in one form or another, been touched on in the theory of errors in computational mathematics and computer science, in the theory of electrical and electronic networks, and in disturbance theory in classical mechanics, among others. Sensitivity theory became an independent scientific branch of cybernetics and control theory in the sixties. This was connected, in major part, with the rapid development of adaptive (self-tuning) systems constructed for effective operation under parametric disturbances. Lately, sensitivity theory methods have been widely used for solving various theoretical and applied problems, viz. analysis and synthesis, identification, adjustment, monitoring, testing, tolerance distribution, and so on. Step by step, the methods of sensitivity theory have become a universal tool for investigating control systems. This has stimulated many works on applying methods of sensitivity theory to systems of various nature (technical, biological, social, economic, and so on).
Nevertheless, a general basis for solving sensitivity problems for control systems has not been exposed systematically. Only a relatively small number of works consider the general theoretical and mathematical foundations of sensitivity investigation and elucidate the qualitative and quantitative connections between sensitivity theory and the classical branches of mathematics and control theory. Many techniques of sensitivity investigation used in applications have not received due justification. Moreover, many results obtained by the Russian scientific school appear to be totally unknown to English-speaking readers. We refer specifically to the general concept of mathematical substantiation of sensitivity theory.

In this book, the problem of sensitivity investigation is considered as a stability problem in an augmented state space that contains, in addition to the initial state variables, the changing parameters of the system. This concept makes it possible to apply profound results obtained by the Russian mathematical school on the basis of Lyapunov's methods to problems of sensitivity theory. For sensitivity investigation of systems given by block-diagrams, the theory of parameter-dependent generalized functions is used. All this, taken together, makes it possible to obtain constructive solutions for a number of problems that are only mentioned briefly in the monograph by Prof. M. Eslami or are not considered there at all. The following problems belong to this group:

1. Sensitivity investigation of nonlinear non-stationary systems of general form, including discontinuous systems
2. Investigation of sensitivity with respect to initial conditions and parameters of exogenous disturbances
3. Investigation of sensitivity with respect to singular parameters, for which sensitivity investigation based on the first approximation is not correct
4. Sensitivity investigation for boundary-value problems; sensitivity of autonomous and non-autonomous oscillating processes
5. Estimation of a norm of additional motion over finite and infinite time intervals on the basis of Lyapunov functions
6. A detailed investigation of sensitivity invariants
7. Sensitivity investigation for mathematical programming and calculus of variations problems
8. Statement and solution of a number of applied direct and inverse problems of sensitivity theory

The goal of the present book is to compensate, in a sense, for the aforementioned deficiency, and to expose, as rigorously and completely as possible, the foundations of sensitivity theory as an independent and clear-cut branch of control theory that has, at the same time, organic relations with numerous adjacent disciplines. Such a concept of the book did not allow us to present many specific but often fairly attractive investigation methods. Nevertheless, the authors hope that the book will contribute to strengthening the general basis for such investigations and for solving a number of new problems.

In this book, the problems of sensitivity theory of real systems and their mathematical models are formulated from general positions, and connections between sensitivity and the choice of technical parameters of a control system are established. Much attention is paid to theoretical substantiation of sensitivity investigation methods for finite-dimensional continuous-time systems. For such systems, relations between sensitivity problems and classical stability problems are established, rigorous mathematical conditions are given for applicability of the first approximation, and some critical cases are considered where the use of the first approximation gives qualitatively wrong results. For the finite-dimensional case, a general theory of sensitivity investigation of boundary-value problems is presented. Elements of this theory are employed for sensitivity analysis of solutions of nonlinear programming and variational calculus problems, as well as for analysis of the sensitivity of oscillating processes. Sensitivity investigation methods for discontinuous systems, including those given by operator models, are considered from general positions. It is shown that, in the latter case, the theory of parameter-dependent generalized functions provides a natural method for solving sensitivity problems. Much attention is paid to substantiation and generalization of sensitivity investigation methods for non-time characteristics of control systems, viz. transfer functions, frequency responses, zeros and poles of transfer functions, eigenvalues and eigenvectors of system matrices, and integral estimates. Methods for obtaining indirect characteristics of sensitivity functions are also given.

As in network theory, sensitivity invariants are introduced, i.e., special functional dependencies between sensitivity functions that facilitate their determination and investigation. Methods and techniques are given for derivation of linear sensitivity invariants for time-domain, frequency-domain, and other characteristics of control systems. A generalization of some applications of sensitivity theory is presented from the viewpoint of direct and inverse problems.

The present monograph is addressed to scientists and engineers specializing in theoretical investigation, design, testing, adjustment, and operation of various types of control systems and their units. It can also be useful for graduate and post-graduate students working in corresponding fields. The methods of sensitivity investigation presented in the book may prove quite useful in various applications other than control problems, e.g., in stability theory, oscillation theory, motion dynamics, mathematical economics, and many others.

Generally speaking, it should be noted that the primary sections of the monograph by Prof. M. Eslami and the present book actually do not intersect, so the two works complement each other. The authors hope that their book will contribute to the English-language literature on sensitivity theory and will provide an instrument for solving new theoretical and applied problems. The bibliography given in the book does not pretend to be complete and contains mostly monographs, major textbooks, proceedings of conferences and symposia on sensitivity theory, as well as works immediately used in the text.

Chapters 1–4 were written by E. N. Rosenwasser, and Chapters 5–8 by R. M. Yusupov. The general idea and concept of the monograph belong to both authors. The material of Section 6.5 was written together with Yu. V. Popov, and Section 8.1.6 together with V. V. Drozhin. The authors are grateful to Prof. P. D. Krut'ko for his useful comments on improving the monograph. The authors are particularly grateful to Dr. K. Polyakov, who translated the book into English, and to post-graduate student Yu. Putintseva for technical preparation of the manuscript. Special thanks are due to Prof. M. Eslami, who contributed much to the appearance of the book in English. We are also grateful to Mrs. Nora Konopka of CRC Press for excellent organization of the publishing process despite the large geographic distance between Russia and the USA.

Y. Rosenwasser and R. Yusupov
20th March 1999


Contents

1 Parametric Models
  1.1 State Variables and Control Systems Parameters
    1.1.1 General Principles of Control
    1.1.2 Directed Action Elements
    1.1.3 State Variables and Parameters
    1.1.4 Sensitivity and Problem of Technical Parameters Selection
  1.2 Parametric Models of Control Systems
    1.2.1 Mathematical Model of Control System
    1.2.2 Parametric System Model
    1.2.3 Determining Sets of Parameters
    1.2.4 Problem of Sensitivity of Parametric Model
  1.3 Sensitivity Functions and Applications
    1.3.1 Sensitivity Functions
    1.3.2 Main and Additional Motions
    1.3.3 Analysis of First Approximation
    1.3.4 Statement of Optimization Problems with Sensitivity Requirements

2 Finite-Dimensional Continuous Systems
  2.1 Finite-Dimensional Continuous Systems Depending on a Parameter
    2.1.1 Mathematical Description
    2.1.2 Parametric Dependence of Solutions on Finite Time Intervals
    2.1.3 Calculation of Derivatives by Parameters
    2.1.4 Parametric Models and General Sensitivity Equations
    2.1.5 Sensitivity Equations of Higher Orders
    2.1.6 Multiparameter Case
    2.1.7 Analytical Representation of Single-Parameter Family of Solutions
    2.1.8 Equations of Additional Motion
    2.1.9 Estimation of First Approximation Error
  2.2 Second Lyapunov's Method in Sensitivity Theory
    2.2.1 Norms of Finite-Dimensional Vectors and Matrices
    2.2.2 Functions of Constant and Definite Sign
    2.2.3 Time-Dependent Functions of Constant and Definite Sign
    2.2.4 Lyapunov's Principle
    2.2.5 Norm of Additional Motion
    2.2.6 Parametric Stability
    2.2.7 General Investigation Method
    2.2.8 Sensitivity of Linear System
  2.3 Sensitivity on Infinite Time Intervals
    2.3.1 Statement of the Problem
    2.3.2 Auxiliary Theorem
    2.3.3 Sufficient Conditions of Applicability of First Approximation
    2.3.4 Classification of Special Cases
  2.4 Sensitivity of Self-Oscillating Systems in Time Domain
    2.4.1 Self-Oscillating Modes of Nonlinear Systems
    2.4.2 Linear Differential Equations with Periodic Coefficients
    2.4.3 General Properties of Sensitivity Equations
    2.4.4 Sensitivity Functions Variation over Self-Oscillation Period
    2.4.5 Sensitivity Functions for Periodicity Characteristics
    2.4.6 Practical Method for Calculating Sensitivity Functions
    2.4.7 Application to Van der Pol Equation
  2.5 Sensitivity of Non-Autonomous Systems
    2.5.1 Linear Oscillatory Systems
    2.5.2 Sensitivity of Linear Oscillatory System
    2.5.3 Sensitivity of Nonlinear Oscillatory System
  2.6 Sensitivity of Solutions of Boundary-Value Problems
    2.6.1 Boundary-Value Problems Depending on Parameter
    2.6.2 Sensitivity Investigation for Boundary-Value Problems
    2.6.3 Implicit Functions Theorems
    2.6.4 Sensitivity Functions of Solution of Boundary-Value Problems
    2.6.5 Sensitivity of Non-Autonomous Oscillatory System
    2.6.6 Sensitivity of Self-Oscillatory System
    2.6.7 Boundary Conditions for Sensitivity Functions

3 Finite-Dimensional Discontinuous Systems
  3.1 Sensitivity Equations for Finite-Dimensional Discontinuous Systems
    3.1.1 Time-Domain Description
    3.1.2 Time-Domain Description of Relay Systems
    3.1.3 Parametric Model and Sensitivity Function of Discontinuous System
    3.1.4 General Sensitivity Equations for Discontinuous Systems
    3.1.5 Case of Continuous Solutions
  3.2 Sensitivity Equations for Relay Systems
    3.2.1 General Equations of Relay Systems
    3.2.2 Sensitivity Equations for Systems with Ideal Relay
    3.2.3 Systems with Logical Elements
    3.2.4 Relay System with Variable Delay
    3.2.5 Relay Extremal System
    3.2.6 System with Pulse-Frequency Modulation of First Kind
  3.3 Sensitivity Equations for Pulse and Relay-Pulse Systems
    3.3.1 Pulse and Relay-Pulse Operators
    3.3.2 Sensitivity Equations of Pulse-Amplitude Systems
    3.3.3 Sensitivity of Pulse-Amplitude Systems with Respect to Sampling Period
    3.3.4 Sensitivity Equations of Systems with Pulse-Width Modulation

4 Discontinuous Systems Given by Operator Models
  4.1 Operator Parametric Models of Control Systems
    4.1.1 Operator Models of Control Systems
    4.1.2 Operator of Directed Action Element
    4.1.3 Families of Operators
    4.1.4 Parametric Properties of Operators
    4.1.5 Parametric Families of Linear Operators
    4.1.6 Transfer Functions and Frequency Responses of Linear Operators
    4.1.7 Parametric Operator Model of System
  4.2 Operator Models of Discontinuous Systems
    4.2.1 Generalized Functions
    4.2.2 Differentiation of Generalized Functions
    4.2.3 Multiplication of Generalized Functions
    4.2.4 Operator Equation of Open-Loop Linear System
    4.2.5 Operator Equation of Closed-Loop Linear System
  4.3 Sensitivity of Operator Models
    4.3.1 Generalized Functions Depending on Parameter
    4.3.2 Generalized Differentiation by Parameter
    4.3.3 Sensitivity Equations
    4.3.4 Sensitivity Equations for Multivariable Systems
    4.3.5 Higher-Order Sensitivity Equations
  4.4 Sensitivity Equations for Relay and Pulse Systems
    4.4.1 Single-Loop Relay Systems
    4.4.2 Pulse-Amplitude Systems
    4.4.3 Pulse-Width Systems
    4.4.4 Pulse-Frequency Systems

5 Non-Time Characteristics
  5.1 Sensitivity of Transfer Function and Frequency Responses of Linear Systems
    5.1.1 Sensitivity of Transfer Function
    5.1.2 Sensitivity of Frequency Responses
    5.1.3 Relations between Sensitivity Functions of Frequency Characteristics
    5.1.4 Universal Algorithm for Determination of Sensitivity Functions for Frequency Characteristics
    5.1.5 Sensitivity Functions for Frequency Characteristics of Minimal-Phase Systems
    5.1.6 Relations between Sensitivity Functions of Time and Frequency Characteristics
    5.1.7 Relations between Sensitivity Functions of Open-Loop and Closed-Loop Systems
    5.1.8 Sensitivity of Frequency-Domain Quality Indices
  5.2 Sensitivity of Poles and Zeros
    5.2.1 General Case
    5.2.2 Sensitivity of the Roots of a Polynomial
    5.2.3 Sensitivity of Poles and Zeros for Open-Loop and Closed-Loop Systems
    5.2.4 Relations between Sensitivity of Transfer Function and that of Poles and Zeros
  5.3 Sensitivity of Eigenvalues and Eigenvectors of Linear Time-Invariant Systems
    5.3.1 Eigenvalues and Eigenvectors of Matrices
    5.3.2 Sensitivity of Eigenvalues
    5.3.3 Sensitivity of Real and Imaginary Parts of Complex Eigenvalues
    5.3.4 Sensitivity of Eigenvectors
    5.3.5 Sensitivity Coefficients and Vectors of Higher Orders
    5.3.6 Sensitivity of Trace and Determinant of Matrix
  5.4 Sensitivity of Integral Quality Indices
    5.4.1 Integral Estimates
    5.4.2 Sensitivity of Integral Estimate I0
    5.4.3 Sensitivity of Quadratic Estimates. Transformation of Differential Equations
    5.4.4 Sensitivity of Quadratic Estimates. Laplace Transform Method
    5.4.5 Sensitivity Coefficients of Integral Estimates for Discontinuous Control Systems
  5.5 Indirect Characteristics of Sensitivity Functions
    5.5.1 Preliminaries
    5.5.2 Precision Indices
    5.5.3 Integral Estimates of Sensitivity Functions
    5.5.4 Envelope of Sensitivity Function

6 Sensitivity Invariants
  6.1 Sensitivity Invariants of Time Characteristics
    6.1.1 Sensitivity Invariants
    6.1.2 Existence of Sensitivity Invariants
    6.1.3 Sensitivity Invariants of Single-Input–Single-Output Systems
    6.1.4 Sensitivity Invariants of SISO Nonlinear Systems
    6.1.5 Sensitivity Invariants of Multivariable Systems
    6.1.6 Sensitivity Invariants of Weight Function
  6.2 Root and Transfer Function Sensitivity Invariants
    6.2.1 Root Sensitivity Invariants
    6.2.2 Sensitivity Invariants of Transfer Functions
  6.3 Sensitivity Invariants of Frequency Responses
    6.3.1 First Form of Sensitivity Invariants of Frequency Responses
    6.3.2 Second Form of Sensitivity Invariants of Frequency Responses
    6.3.3 Relations between Sensitivity Invariants of Time and Frequency Characteristics
  6.4 Sensitivity Invariants of Integral Estimates
    6.4.1 First Form of Sensitivity Invariants
    6.4.2 Second Form of Sensitivity Invariants
  6.5 Sensitivity Invariants for Gyroscopic Systems
    6.5.1 Motion Equations and Transfer Functions
    6.5.2 Sensitivity Invariants of Amplitude Frequency Response
    6.5.3 Sensitivity Invariants of Integral Estimates
    6.5.4 Sensitivity Invariants of Damping Coefficient

7 Sensitivity of Mathematical Programming Problems
  7.1 Sensitivity of Linear Programming Problems
    7.1.1 Actuality of Investigation of Optimal Control Sensitivity
    7.1.2 Linear Programming
    7.1.3 Qualitative Geometric Sensitivity Analysis
    7.1.4 Quantitative Sensitivity Analysis
  7.2 Sensitivity of Optimal Solution to Nonlinear Programming Problems
    7.2.1 Unconstrained Nonlinear Programming
    7.2.2 Nonlinear Programming with Equality Constraints
    7.2.3 Sensitivity Coefficients in Economic Problems
    7.2.4 Nonlinear Programming with Weak Equality Constraints
    7.2.5 Sensitivity of Convex Programming Problems
  7.3 Sensitivity of Simplest Variational Problems
    7.3.1 Simplest Variational Problems
    7.3.2 Existence Conditions for Sensitivity Function
    7.3.3 Sensitivity Equations
  7.4 Sensitivity of Variational Problems
    7.4.1 Variational Problem with Movable Bounds
    7.4.2 Existence Conditions for Sensitivity Functions
    7.4.3 Sensitivity Equations
    7.4.4 Case Study
    7.4.5 Variational Problem with Corner Points
  7.5 Sensitivity of Conditional Extremum Problems
    7.5.1 Variational Problems on Conditional Extremum
    7.5.2 Lagrange Problem
    7.5.3 Variational Problem with Differential Constraints
    7.5.4 Sensitivity of Isoperimetric Problem

8 Applied Sensitivity Problems
  8.1 Direct and Inverse Problems of Sensitivity Theory
    8.1.1 Classification of Basic Applied Sensitivity Problems
    8.1.2 Direct Problems
    8.1.3 Inverse Problems and their Incorrectness
    8.1.4 Solution Methods for Inverse Problems
    8.1.5 Methods for Improving Stability of Inverse Problems
    8.1.6 Investigation of Convergence of Iterative Process
  8.2 Identification of Dynamic Systems
    8.2.1 Definition of Identification
    8.2.2 Basic Algorithm of Parametric Identification Using Sensitivity Functions
    8.2.3 Identifiability and Observability
  8.3 Distribution of Parameters Tolerance
    8.3.1 Preliminaries
    8.3.2 Tolerance Calculation Problem
    8.3.3 Initial Mathematical Models
    8.3.4 Tolerance Calculation by Equal Influence Principle
    8.3.5 On Tolerance Distribution with Account for Economic Factors
    8.3.6 On Requirements for Measuring Equipment
  8.4 Synthesis of Insensitive Systems
    8.4.1 Quality Indices and Constraints
    8.4.2 Problems of Insensitive Systems Design
    8.4.3 On Design of Systems with Bounded Sensitivity
    8.4.4 On Design of Optimal Insensitive Systems
  8.5 Numerical Solution of Sensitivity Equations
    8.5.1 General Structure of Numerical Integration Error
    8.5.2 Formula for Integration of Sensitivity Equations
    8.5.3 Estimates of Solution Errors
    8.5.4 Estimates of Integration Error for a First-Order System
    8.5.5 Results of Numerical Calculation

Chapter 1
Parametric Models of Control Systems and Statement of the Problem in Sensitivity Theory

1.1 State Variables and Control Systems Parameters

1.1.1 General Principles of Control

The notion of control always implies a plant to which the control is applied, and a target that is to be attained using this control. In practice, the control process is realized by means of specially formed exogenous actions applied to the plant. These actions are intended to change some properties of the latter. This idea is illustrated in Figure 1.1, where O is a multivariable plant, Y is a vector of generalized coordinates that define the properties of the plant, and U is a vector of control actions.

Figure 1.1 Block-diagram of control process

Denote by MY the set of values of Y for which the properties (state) of the plant satisfy the requirements, and by RY the set of values of Y that can really take place during plant operation. Then, the target of the control can be defined by the relation

RY ⊂ MY.    (1.1)

If equation (1.1) is satisfied for the plant without any interference from outside, the control process becomes superfluous. Control is necessary if the plant is affected by factors (we will call them disturbances) that lead to violation of (1.1)¹. In practice, the control action U is realized by a system called a controller, which is, in the general case, functionally connected with the state of the plant and the disturbances applied to it. As a result, we obtain a more detailed control scheme (Figure 1.2), where P is a controller, and Vo and Vp are disturbances applied to the plant and controller, respectively.

Figure 1.2 General block-diagram of control

According to Figure 1.2, the controller P is to provide equation (1.1) in the presence of disturbances Vo applied to the plant and disturbances Vp infringing on controller actions. The block-diagram shown in Figure 1.2 is the most general and, therefore, the least substantial. The schemes of actual control systems, and still more their realizations, are highly diversified and are determined by a number of factors. The main factors are: the physical nature of the plant; the choice of the vector Y, which gives the state of the plant; the set MY of admissible states of the plant; the availability of information about variations of plant properties during the control process; the availability of information about the vectors Vo and Vp, etc. If the information about the plant and the corresponding disturbances is sufficiently complete, we can, as a rule, employ the classic block-diagram of combined control shown in Figure 1.3, where NVo, NVp, and NY are sensors of the corresponding values, and U is an amplifier-transducer of controller signals.

¹ Hereinafter such a notation is applied to equations.

Figure 1.3 Block-diagram of combined control

In such systems, sufficient information about the plant and the vectors Vo and Vp allows us, as a rule, to choose a fixed controller that provides admissible performance of the system as a whole. Other situations arise when there is no sufficient information about the plant and the vectors Vo and Vp, or the information can vary substantially during system operation. In this case, it can hardly be supposed that a controller with a fixed structure will successfully cope with its functions in all possible situations. Therefore, it is necessary to change controller properties during system operation, depending on the available information about the plant and the disturbances. Then, the generalized block-diagram takes the form shown in Figure 1.4, where Io, IVo and IVp are identifiers of the properties of the plant and of the vectors Vo and Vp, respectively.

1.1.2 Directed Action Elements

Figure 1.4 Block-diagram of control system

The block-diagrams shown above illustrate general principles of control and bear a generalized character. In practice, any control system can be represented in the form of a detailed block-diagram, which is determined by the system elements and their physical nature, as well as by their interaction. A widely used method of structural representation of control systems consists in splitting the system into directed action elements that interact on the basis of three basic kinds of connections: series, parallel, and feedback. By a directed action element we usually mean a physical plant having a vector input X and a vector output Y of arbitrary nature. It is assumed that the input X affects the output Y in some sense, but there is no reverse influence of Y on X. The relation between Y and X can be written in the symbolic form

Y = L(X, V),    (1.2)

where it is additionally assumed that the output Y is affected by some disturbances acting on the element. The block-diagram corresponding to (1.2) is shown in Figure 1.5.

Figure 1.5 Directed action element

Assume that the system under consideration can be represented as an assembly of directed action elements

Yi = Li(Xi, Vi),  i = 1, ..., n,    (1.3)

connected by some links. These links can, in the general case, also be represented as directed action elements defined by the symbolic relations

Xi = Si(Y1, ..., Yn, Ṽi),  i = 1, ..., s,    (1.4)

where Ṽi are disturbances acting on the links. Equations (1.3) and (1.4), taken in the aggregate, reflect the performance interrelations of the separate elements of the system and simultaneously determine a structural representation of the system. Thus, an arbitrary structure constructed from directed action elements and directed links generates a set of symbolic equations (1.3) and (1.4). Conversely, any set of equations of the form (1.3)–(1.4) can be associated with a control system of a definite structure. In some cases, Li and Si themselves vary during operation. This circumstance can be taken into account by augmenting (1.3)–(1.4) with additional equations.

Consider, as an example, the general system of symbolic equations describing the performance of the adaptive control system shown in Figure 1.4. According to Figure 1.4, for the plant we have

Y = L0(U, V0),    (1.5)

which indicates a link between the state vector of the plant, the control action, and the disturbances acting on the plant. In the general case, for the controller we have

U = Lp(Ȳ, V̄0, V̄p, Vp),  Lp = Lp(L̄0, V̄0, V̄p).    (1.6)

These equations determine the control action applied to the plant. The argument L̄0 in the second equation in (1.6) indicates that controller performance should, in the general case, change when plant performance changes. Therefore, the second equation characterizes the possibility of controller adaptation. Moreover, the equations

L̄0 = L̄0(L0),  V̄0 = V̄0(V0, L0),  V̄p = V̄p(Vp)    (1.7)

describe the identification processes for the plant and the disturbances applied to the system. Note that the material given in this section does not claim to be rigorous and complete, and has a descriptive character.
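How a structure of the form (1.3)–(1.4) can be evaluated in practice is sketched below. This is a hypothetical illustration, not from the book: the plant and controller maps, the gains 2.0 and 5.0, and the relaxation constant 0.1 are all arbitrary choices made here for the example.

```python
# Hypothetical sketch: a closed loop assembled from two directed action
# elements -- a plant L0 and a controller Lp -- whose interconnection
# equations are solved by relaxed fixed-point iteration.

def plant(u, v):
    """L0 in (1.5): a static plant with gain 2 plus an additive disturbance."""
    return 2.0 * u + v

def controller(y, r):
    """Lp in (1.6): a proportional controller driving y toward the target r."""
    return 5.0 * (r - y)

def closed_loop(r, v, iters=100):
    """Iterate Y = L0(U, V), U = Lp(Y, r) until the loop equations balance."""
    y = 0.0
    for _ in range(iters):
        u = controller(y, r)
        y = y + 0.1 * (plant(u, v) - y)   # relaxation keeps the iteration stable
    return y

print(closed_loop(r=1.0, v=0.0))   # converges to 10/11, about 0.9091
```

The same pattern extends to any interconnection (1.3)–(1.4): each element is a function, each link a composition, and the loop equations are solved jointly.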


1.1.3 State Variables and Parameters

Assume that the system under investigation is split into directed action elements according to some principle, and its structure (1.3)–(1.4) has been fixed. Then the values Xi and Yi, taken in the aggregate, will be called the state variables of the system associated with the structure defined by (1.3)–(1.4). The variables Vi and Ṽi characterize exogenous disturbances applied to the system. It should be emphasized that the same system can be described by various sets of state variables, depending on the specification of the structural representation (1.3)–(1.4). As a rule, a relatively small number of controllable and measurable values that give basic information on system performance are chosen as state variables. Moreover, usually only those exogenous disturbances that affect the system substantially are taken into account.

On the other hand, any set of state variables and exogenous disturbances, however wide, does not completely describe all properties of the system. This is explained by the fact that the properties and performance of actual systems depend on many extra factors (often uncontrollable), which specify the features of any individual exemplar of the system and its operating conditions. Various factors of this kind will hereinafter be called parameters. The whole set of parameters that determine the properties of a technical system can usually be divided into the following two groups: 1) technical parameters, and 2) parameters of environment and operating conditions. By technical parameters we shall mean values that determine the difference between individual exemplars of the system under the same operating conditions. By environmental parameters we mean values that determine the difference between the operating conditions of individual exemplars of the system. For example, the image quality of two TV sets of the same type that work in the same room and use the same power supply will differ due to different technical parameters. But two practically identical TV sets can show different image quality under various operating conditions, for instance, for different temperature or air humidity in the room.

In terms of the symbolic description given by (1.5)–(1.6), technical parameters are various factors affecting the properties of the relations L0 and Lp for individual exemplars of the system. Variations of the operating-condition parameters can manifest themselves in changing properties of the disturbances Vo and Vp and in variation of initial conditions. The technical parameters of a system include the sizes of various elements and the precision of their manufacture, the physical properties of materials, as well as some other values determining its functional implementation. The parameters of operating conditions depend on the physical nature of the system elements and can be diversified. In principle, parameters can assume constant values for each realization of the system or can be variable. Hereinafter in the book we consider mostly the first case.


1.1.4 Sensitivity and Problem of Technical Parameters Selection

Denote by αi , βi , γi and δi vectors of parameters characterizing system properties. Then, equations (1.3)–(1.4) can be represented in a more detailed form Yi = Li [Xi , αi , Vi (γi )],

Xi = Si [Y1 , ..., Yn , βi , V˜i (δi )].

(1.8)

If relations Li and Si are fixed, properties of the system are determined by the choice of technical parameters αi and βi and parameters of operating conditions γi and δi . Denote by αT the set of all technical parameters of the system (1.8), and by αE the set of all parameters of operating conditions. Moreover, combine all state variables Yi and Xi into a single vector Y . Then, under other equal conditions, finally we obtain Y = Y (αT , αE ),

(1.9)

where Y is the set of state variables of the system. Thus, variation of parameters αT and αE causes variation of state variables that reflect essential system properties. The ability of a system to change its properties under variation of parameters αT and αE is called (parametric) sensitivity. As follows from (1.9), the conditions of normal operation for the system (1.1) can be represented in the form RY (αT , αE ) ⊂ MY ,

(1.10)

Therefore, to assure system workability, technical parameters must be chosen taking into account parameters of operation conditions. Let possible operation conditions (value of the vector αE ) be defined by αE ∈ ME ,

(1.11)

where ME is a given set. Then, the traditional problem statement of the choice of technical parameters αT reduces to obtaining vectors αT such that RY (αT , αE ) ⊂ MY

© 2000 by CRC Press LLC

for any αE ∈ ME .

(1.12)

The set of values of technical parameters for which conditions (1.12) hold will be called workability area and denoted by Mp . Thus, if the condition αT ∈ Mp

(1.13)

holds, this means that conditions (1.12) hold as well. Let α1T and α2T be two fixed values of the technical parameters vector satisfying the workability condition (1.13). From the viewpoint of satisfying conditions (1.12), we may equivalently adopt either α1T or α2T as design values. Nevertheless, once the sensitivity of the system with respect to the technical parameters is taken into account, this conclusion can prove erroneous. For various technological and physical reasons we will, in fact, always have

αT = α1T + ∆α1,   αT = α2T + ∆α2,

where the vectors ∆α1 and ∆α2 characterize inevitable parameter variations and belong to some sets N1 and N2. This means that choosing αT = α1T or αT = α2T as the initial (nominal) value (Figure 1.6), we actually obtain areas Q1(α1T) and Q2(α2T) in parameter space that can be far from equivalent.

Figure 1.6 Workability area and parameters variation

Figure 1.6 demonstrates that the choice αT = α1T ensures the workability conditions despite inevitable parameter variations, while the choice αT = α2T does not. Therefore, a realistic statement of the problem of choosing the technical parameters must take the parametric sensitivity of the system into account. It should be emphasized that the above reasoning does not assume that the variations ∆α1 and ∆α2 are small, and so it bears a general meaning.
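The reasoning above can be sketched numerically. In the following Python fragment (an illustration only, not part of the original exposition) the workability area Mp is modeled as a box in a two-dimensional parameter space, and the variation sets N1 and N2 as boxes of half-width r around the nominal points; all numerical values are invented for the example.

```python
from itertools import product

def in_workability_area(alpha, box):
    """True if the parameter vector alpha lies inside the box Mp."""
    return all(lo <= a <= hi for a, (lo, hi) in zip(alpha, box))

def stays_workable(alpha_nom, box, r):
    """Check that every perturbed vector alpha_nom + d_alpha with
    |d_alpha_i| <= r stays in Mp; for box sets it suffices to test
    the extreme corners of the variation set."""
    for signs in product((-1.0, 1.0), repeat=len(alpha_nom)):
        corner = [a + s * r for a, s in zip(alpha_nom, signs)]
        if not in_workability_area(corner, box):
            return False
    return True

Mp = [(0.0, 10.0), (0.0, 5.0)]   # workability box for two parameters
alpha1 = [5.0, 2.5]              # nominal choice well inside Mp
alpha2 = [9.9, 4.9]              # also satisfies (1.13), but near the boundary
```

Both nominal vectors satisfy the workability condition (1.13), yet only alpha1 tolerates variations of half-width 0.5, which is exactly the situation depicted in Figure 1.6.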

1.2 Parametric Models of Control Systems

1.2.1 Mathematical Model of a Control System

One of the most important problems arising in the design of a control system consists in predicting its probable properties. The main method of such prediction is the use of physical and mathematical models. Hereinafter, unless otherwise specified, by models of systems and elements we mean mathematical ones. The mathematical model of a directed action element (1.3) has the form

YiM = LiM(XiM, ViM),   (1.14)

where LiM is a computational algorithm or formula, and XiM and ViM are its arguments. Moreover, it is assumed that the argument XiM reproduces, in a sense, the input Xi of the element, while ViM and YiM reflect the exogenous disturbances Vi and the output Yi, respectively. Thus, the model of a directed action element includes a model of the relation Li from (1.3), models of the input and output values Xi and Yi, respectively, and a model of the disturbances Vi. In a similar way, the actual connections (1.4) can be associated with the mathematical models

XiM = SiM(Y1M, ..., YnM, ṼiM).   (1.15)

Relations (1.14) and (1.15), taken in the aggregate, specify a mathematical model of the initial system, which reproduces, in a sense, the structure of the system. Moreover, the set of the values XiM and YiM simulates the corresponding state variables.

1.2.2 Parametric System Model

Evidently, a sufficiently complete mathematical model of an element (or a system) must reflect its basic properties, including possible dependence on parameters. Formally, the dependence of the properties of a directed action element (1.3)–(1.4) on some independent parameters can be taken into account by introducing auxiliary variables into relations (1.14)–(1.15). Then, similarly to (1.8), instead of (1.14)–(1.15) we have

YiM = LiM[XiM, αiM, ViM(γiM)],   XiM = SiM[Y1M, ..., YnM, βiM, ṼiM(δiM)],   (1.16)

where αiM, βiM, γiM and δiM are vectors of auxiliary variables (parameters). It is assumed that the variables αiM, βiM, γiM and δiM are independent of LiM, SiM, XiM, YiM, ViM and ṼiM. Relations (1.16) will be called a parametric model of the system (1.3)–(1.4). As will be shown later, in practice there is considerable freedom in constructing parametric models. In special cases, model parameters may include characteristics of exogenous disturbances and operating conditions, as well as various variables that have no direct physical meaning but are included in the parametric model (1.16), for example, coefficients of the corresponding differential equations. Therefore, the term “parameter” has a different meaning for the initial actual system and for its model. For an actual system, as a rule, a parameter is a value that has a definite physical sense and reflects its objective properties. On the other hand, by a parameter of a mathematical model we can mean any independent variable included in its parametric model (1.16). This more general notion of parameter makes it possible to consider a wider spectrum of problems, because it allows investigating the influence on the system of many diversified factors regardless of their physical nature. Nevertheless, model parameters are usually chosen, when possible, in some correspondence with the actual parameters and operating conditions of the system under investigation. Hereinafter we consider, in fact, only problems connected with investigating the sensitivity of mathematical models. Therefore, the subscript “M” on the corresponding arguments is omitted in (1.16) and similar relations.

1.2.3 Determining Sets of Parameters

Combining all the parameters included in (1.16) into a single vector α, equations (1.16) can be written in the form

Yi = Li(Xi, α),   Xk = Sk(Y1, ..., Yn, α),   i = 1, ..., n,   k = 1, ..., s.   (1.17)

Thus, any parametric model is a system of equations that determines a chosen set of state variables and depends on a set of independent variables (parameters). In principle, depending on the nature of the problem at hand, some parameters may be fixed while others are considered variable. By a determining set of parameters for a given problem we will mean a chosen finite set of parameters of the associated parametric model that are considered as independent variables. Let, for example, the motion of a real pendulum be described by the following mathematical model:

J ÿ + 2n ẏ + k² y = q1 x1(t) + q2 x2(t),   (1.18)

where y is the angle of deviation of the pendulum from the vertical; J, n and k are constants (the moment of inertia, the damping coefficient, and the hanger rigidity); q1 and q2 are constants; and x1(t) and x2(t) are given functions. In the model (1.18), the value y is a state variable, and J, n, k, q1 and q2 are parameters. In the general case, the motion of the pendulum for t ≥ 0 can be described on the basis of the model (1.18) by the following relation:

y(t) = y(t, y0, ẏ0, J, n, k, q1, q2),   (1.19)

where y0 and ẏ0 are the initial deviation and the initial angular velocity, which are also model parameters. Assume that the initial conditions and the constructive parameters are considered constant, and let us investigate the influence of various exogenous disturbances belonging to a given class. Then the determining set of parameters consists of q1 and q2, and the dependence

y = y(t, q1, q2)   (1.20)

will be of interest. If the constructive parameters and the exogenous disturbances can be considered fixed, then the initial conditions will be determining, so that

y = y(t, y0, ẏ0).   (1.21)
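The pendulum model (1.18) can be used to illustrate the dependence (1.20) numerically. In the Python fragment below (all constants are illustrative, and a crude fixed-step Euler scheme stands in for a proper integrator), the initial conditions and constructive parameters are fixed, so that q1 and q2 form the determining set.

```python
def pendulum_response(q1, q2, J=1.0, n=0.1, k=2.0,
                      y0=0.0, dy0=0.0, T=1.0, h=1e-3):
    """Euler integration of J*y'' + 2*n*y' + k**2*y = q1*x1(t) + q2*x2(t)
    with x1(t) = 1 and x2(t) = t; returns y(T) = y(T, q1, q2), cf. (1.20)."""
    y, dy, t = y0, dy0, 0.0
    while t < T:
        ddy = (q1 * 1.0 + q2 * t - 2.0 * n * dy - k * k * y) / J
        y, dy, t = y + h * dy, dy + h * ddy, t + h
    return y
```

With the initial conditions and constructive parameters held constant, the returned value depends only on the determining parameters q1 and q2, i.e., it realizes the dependence (1.20) at the instant t = T.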

In the general case, the choice of a determining set of parameters corresponds, from the mathematical viewpoint, to the choice of some family of nonlinearly similar mathematical models. From the physical viewpoint, it corresponds to the choice of a family of actual systems with similar physical properties and operating conditions. For example, choosing some set of constructive parameters as determining corresponds to some set of systems with similar constructive implementation. If the determining set of parameters characterizes the influence of disturbances, we consider a set of identical systems functioning under different operating conditions. In principle, a successful choice of a determining set of parameters makes it possible to describe mathematically and investigate diversified properties of actual physical systems. Let us define an important property of a determining set of parameters, called completeness.

DEFINITION 1.1 A determining set of parameters α is called complete if it uniquely determines all state variables of the model under investigation.

For a system defined by (1.17), this means that there are the following unique dependencies:

Yi = Yi(t, α),   Xk = Xk(t, α).   (1.22)

For example, for the pendulum equation, the set of parameters appearing in the right side of (1.19) is complete. If some of these parameters are fixed and assume constant values, the set of all remaining parameters is also complete. It should be noted that a complete set of parameters is not unique. For example, if we pass to a new set of parameters in (1.17) using a one-to-one functional relation, the new set of parameters obtained in this way is, evidently, also complete. In practice, the question of completeness of a chosen set of parameters is not always simple and is connected with the existence and uniqueness of solutions of the equations constituting the mathematical model.

1.2.4 Problem of Sensitivity of a Parametric Model

Let Equation (1.17) determine a chosen parametric model of the system, and let α be a complete determining set of parameters. Assume that Mα is the set of admissible values of the parameters in α. Denote by MY the set of all laws of variation of the vector consisting of the variables Yi and Xi that satisfy admissible operating conditions. Then, by analogy with (1.10), the condition of a proper choice of the parameter vector can be represented in the form

RY(t, α) ∈ MY,   α ∈ Mα.   (1.23)

It should be noted that in this section we consider the sensitivity problem for a parametric model of the system. Evidently, it is implicitly assumed that the mathematical model corresponds to the actual system. It is necessary to bear in mind that model parameters are considered here independently of their physical meaning. Therefore, variation of model parameters can be associated either with variation of the technical parameters of the initial system or with variation of the operating conditions. It is convenient to represent the conditions (1.23) of normal operation of the system in another form.


Let J = (J1, ..., Jm) be a set of quality indices, which are functionals defined over the set of state variables. If MJ is the set of admissible values of the vector J with components J1, ..., Jm, instead of (1.23) we can write

J(α) ∈ MJ,   α ∈ Mα.   (1.24)

Relation (1.24) includes the case when the value of one of the indices, say J1, is required to be maximal, so that

J1 = J1max.   (1.25)

The simplest problem connected with system analysis reduces to finding at least one vector ᾱ for which (1.23) or (1.24) holds. Taking the system parameters equal to ᾱ, we could expect the system to operate satisfactorily under the corresponding design conditions, provided that the correspondence between the system and its parametric model is sufficiently good. Nevertheless, this by no means ensures that the system will perform satisfactorily under real operating conditions, because in practice the set of parameters α differs from the design parameters for various physical and technological reasons, and this leads to a deviation of the operating conditions from the nominal ones. This property of a real system manifests itself in changes of the model state variables Yi and Xi under parameter variations. This property of a model will be called sensitivity with respect to variations of the chosen determining group of parameters. If the problem is stated correctly, model sensitivity simulates, in a sense, the sensitivity of the initial system with respect to variation of technical and operational parameters. In most cases, it is required that the system (and, accordingly, the model) have low sensitivity (insensitivity) with respect to parameter variations. From the practical viewpoint, if there is the necessary correspondence between the system and the model, this reduces to the requirement that conditions (1.23) or (1.24) hold not only for the design set of parameters α = ᾱ, but for all (or almost all) sets of parameters that can occur. Analysis of system sensitivity on the basis of its model reduces precisely to establishing this fact. In principle, requirements on system sensitivity must be taken into account as early as the design stage. The system structure and element operators should be chosen in such a way that conditions (1.23) or (1.24) hold over the whole range of possible parameter variations. This approach leads to the problem of system design taking into account requirements on its sensitivity (insensitivity). It can easily be seen that the sensitivity problem formulated above is ideologically close to the stability problem, when possible system parameter variations are considered as disturbing factors. As will be shown below, there is a definite formal interconnection between the sensitivity and stability problems. Nevertheless, from the practical viewpoint, sensitivity problems are much more diversified and have richer subject matter than routine stability problems. Sensitivity investigation makes it possible to trace the interdependence between quality indices and the most diversified circumstances connected with peculiarities of system design, implementation, and operation.

1.3 Sensitivity Functions and Applications

1.3.1 Sensitivity Functions

The simplest (from the ideological viewpoint) method of sensitivity analysis consists in numerical investigation of the system parametric model over the whole range of variation of the determining set of parameters. In practice, as a rule, this approach proves inexpedient or impossible due to the huge amount of required computations and obtained results. The main investigation method of sensitivity theory consists in using so-called sensitivity functions. Let us introduce the corresponding definitions. Let α1, ..., αm be the set of parameters constituting a complete set α. Moreover, let the state variables Yi (i = 1, ..., n) and the quality indices J1, ..., Js be single-valued functions of the parameters α1, ..., αm, i.e.,

Yi(t, α) = Yi(t, α1, ..., αm),   i = 1(1)n,   (1.26)

and

Ji(α) = Ji(α1, ..., αm),   i = 1(1)s.   (1.27)

The whole set of state variables is denoted by Yi (i = 1, ..., n) for simplicity.

DEFINITION 1.2 The partial derivatives

∂Yi(t, α)/∂αk,   ∂Ji(α)/∂αk   (1.28)

are called first-order sensitivity functions of the values Yi and Ji with respect to the corresponding parameters.
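When the dependence (1.26) is available, analytically or by simulation, a first-order sensitivity function (1.28) can be estimated by a finite difference in the parameter. A minimal Python sketch, using an illustrative model with a known exact answer:

```python
import math

def sensitivity_fd(Y, t, alpha, eps=1e-6):
    """Central-difference estimate of the first-order sensitivity
    function dY(t, alpha)/d alpha."""
    return (Y(t, alpha + eps) - Y(t, alpha - eps)) / (2.0 * eps)

# Illustrative model: Y(t, a) = exp(-a*t), whose exact sensitivity
# function is dY/da = -t*exp(-a*t).
Y = lambda t, a: math.exp(-a * t)
u_est = sensitivity_fd(Y, 2.0, 0.5)
u_exact = -2.0 * math.exp(-1.0)
```

The central difference agrees with the analytic derivative to within the discretization error of order eps².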

© 2000 by CRC Press LLC

Thus, first-order sensitivity functions are derivatives of various state variables and quality indices with respect to the parameters of the determining group. Note that in the automatic control literature first-order sensitivity functions are often called simply “sensitivity functions.” We also use this terminology.

DEFINITION 1.3 The k-th order partial derivatives of the values Yi and Ji with respect to the arguments α1, ..., αm,

∂^k Yi / (∂α1^{k1} ... ∂αm^{km}),   ∂^k Ji / (∂α1^{k1} ... ∂αm^{km}),   k1 + ... + km = k,   (1.29)

will be called k-th order sensitivity functions with respect to the corresponding parameter combinations.

Obviously, sensitivity functions of the state variables Yi(t, α) depend on t and the parameters α1, ..., αm, while sensitivity functions of the quality indices depend only on the parameters α1, ..., αm. It is assumed that the derivatives in (1.28) and (1.29) exist. Definition 1.3 also assumes that the sensitivity functions are independent of the order of differentiation with respect to the parameters. For this to be true, it suffices to assume [114] that in a neighborhood O of the point α = (α1, ..., αm) all possible derivatives of the (k−1)-th order exist, as well as the mixed derivatives of the k-th order, and that all of these derivatives are continuous. As will be shown in the next chapters, sensitivity functions of various orders are solutions of special equations that can be obtained directly from a known parametric model of a system. These equations are called sensitivity equations. The aggregate of the initial mathematical model and the auxiliary relations determining the sensitivity functions is usually called the sensitivity model of the system under consideration.
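As an anticipation of the sensitivity equations mentioned above, consider the scalar example dy/dt = −αy, y(0) = 1. Differentiating the equation with respect to α gives the sensitivity equation du/dt = −αu − y, u(0) = 0, for u = ∂y/∂α, and the pair can be integrated together. The Python sketch below uses a simple Euler scheme (step size and parameter value are illustrative):

```python
import math

def sensitivity_model(alpha, T=1.0, h=1e-4):
    """Integrate dy/dt = -alpha*y, y(0) = 1 together with its
    sensitivity equation du/dt = -alpha*u - y, u(0) = 0 (Euler scheme)."""
    y, u, t = 1.0, 0.0, 0.0
    while t < T:
        dy = -alpha * y          # original model
        du = -alpha * u - y      # sensitivity equation
        y, u, t = y + h * dy, u + h * du, t + h
    return y, u

y_T, u_T = sensitivity_model(0.7)
y_exact = math.exp(-0.7)          # analytic solution at T = 1
u_exact = -1.0 * math.exp(-0.7)   # analytic sensitivity -T*exp(-alpha*T)
```

The integrated pair (y, u) is precisely a sensitivity model in the sense defined above: the original equation plus the auxiliary relation determining the sensitivity function.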

1.3.2 Main and Additional Motions

Let ᾱ1, ..., ᾱm be a fixed set of parameters. This set ᾱ will be called the main or basic set. The chosen set ᾱ generates a set of state variables

Ȳi = Yi(t, ᾱ),   (1.30)

which will be called the main or basic motion. The basic motion is associated with the basic quality indices

J̄i = Ji(ᾱ).   (1.31)

Varying the values of the parameters,

αi = ᾱi + µi,   (1.32)

we obtain a new motion

Yi = Yi(t, ᾱ1 + µ1, ..., ᾱm + µm) = Yi(t, ᾱ + µ),   (1.33)

which is associated with new values of the quality indices

Ji = Ji(ᾱ1 + µ1, ..., ᾱm + µm) = Ji(ᾱ + µ).   (1.34)

DEFINITION 1.4 The vector ∆Yi defined by the relation

∆Yi = Yi(t, ᾱ + µ) − Yi(t, ᾱ)   (1.35)

will be called the additional motion due to variation of the parameters α1, ..., αm.

The additional motion ∆Yi and the corresponding increment of the quality indices

∆Ji = Ji(ᾱ + µ) − Ji(ᾱ)   (1.36)

characterize the variation of those system properties that are of interest to an investigator, caused by variation of the corresponding parameters. Therefore, investigation of additional motions and of their relation to the properties of the initial system is, in fact, the primary problem of sensitivity analysis. Let us establish relations between sensitivity functions and additional motion. With this aim in view, we use the properties of higher-order differentials of functions of several variables [114]. Let y = Y(x1, ..., xr) be a function of the variables x1, ..., xr having continuous partial derivatives of the k-th order with respect to all sets of arguments. Then the following expressions are called k-th order differentials:

d^(k) Y = ( (∂/∂x1) dx1 + ... + (∂/∂xr) dxr )^k Y,   (1.37)

where ∂/∂xi denotes differentiation with respect to the corresponding variable. To expand (1.37), we formally raise the expression inside the brackets to the k-th power and convolute the products of the operators according to the rule

(∂/∂xi)(∂/∂xj) = ∂²/(∂xi ∂xj),   (1.38)

and then apply the resulting operator to the function Y. As follows from (1.37) and (1.38), the k-th differential is a form of the k-th power with respect to the values dx1, ..., dxr. Assuming that all sensitivity functions up to the k-th order exist and are continuous at the point ᾱ, we can write Taylor's formula with remainder term:

∆Y(t, α) = Y(t, ᾱ + µ) − Y(t, ᾱ)
= dY(t, ᾱ) + (1/2!) d^(2)Y(t, ᾱ) + ... + (1/n!) d^(n)Y(t, ᾱ) + (1/(n+1)!) d^(n+1)Y(t, ᾱ + θµ).   (1.39)

Notice that the variable t enters (1.39) as a constant parameter. Ignoring the remainder term, we obtain the following approximate expression for the additional motion:

∆^(k) Y(t, α) = Σ_{i=1}^{k} (1/i!) d^(i) Y(t, α)|_{α=ᾱ}.   (1.40)

It will be called the k-th approximation of the additional motion or, equivalently, the approximation to within terms of the (k+1)-th order of smallness. The value

∆_k Y = (1/k!) d^(k) Y   (1.41)

will be called the k-th order correction of the additional motion. As follows from (1.37) and (1.41), the k-th order correction is given by

∆_k Y = (1/k!) ( (∂/∂α1) µ1 + ... + (∂/∂αm) µm )^k Y(t, α)|_{α=ᾱ}.   (1.42)

Hence, the k-th order correction is a linear form of the k-th order sensitivity functions calculated for the basic parameter values, and a form of the k-th power with respect to the parameter increments µ1, ..., µm. Note that if the differentiability conditions hold, we can write relations similar to (1.39) and (1.40) for the corresponding increments of the quality indices. By Taylor's formula we have

∆J(α) = J(ᾱ + µ) − J(ᾱ)
= dJ(ᾱ) + (1/2!) d^(2)J(ᾱ) + ... + (1/n!) d^(n)J(ᾱ) + (1/(n+1)!) d^(n+1)J(ᾱ + θµ).   (1.43)

Then, the k-th approximation takes the form

∆^(k) J(α) = Σ_{i=1}^{k} (1/i!) d^(i) J(α)|_{α=ᾱ}.   (1.44)

1.3.3 Analysis of First Approximation

According to (1.37) and (1.40), the first approximation of the additional motion can be written in the form

∆^(1) Y(t, α) = U1(t, ᾱ) µ1 + ... + Um(t, ᾱ) µm,   (1.45)

where

Ui = ∂Y(t, α)/∂αi |_{α=ᾱ},   i = 1(1)m,   (1.46)

are the corresponding sensitivity functions. As follows from (1.45), the first approximation is linear with respect to the parameter increments and the sensitivity functions; therefore it is very convenient for analysis and is often used as an approximate representation of the additional motion:

∆Y(t, α) ≈ ∆^(1) Y(t, α).   (1.47)
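The accuracy of the first approximation (1.47) is easy to check on a simple example. In the Python fragment below the “motion” is an explicit function of two parameters (chosen purely for illustration), so the sensitivity functions (1.46) are available in closed form:

```python
def first_approximation(base, mu):
    """Delta^(1) Y from (1.45) for Y(a1, a2) = a1**2 * a2, whose
    sensitivity functions are U1 = 2*a1*a2 and U2 = a1**2."""
    a1, a2 = base
    U1, U2 = 2.0 * a1 * a2, a1 * a1
    return U1 * mu[0] + U2 * mu[1]

def exact_increment(base, mu):
    """Exact additional motion Delta Y from (1.35)."""
    a1, a2 = base
    return (a1 + mu[0]) ** 2 * (a2 + mu[1]) - a1 ** 2 * a2

base, mu = (1.0, 2.0), (0.01, -0.02)
approx = first_approximation(base, mu)
exact = exact_increment(base, mu)
```

For these small increments the first approximation differs from the exact additional motion only by terms of second order of smallness, as expected.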

Evidently, the correctness of (1.47) must be substantiated in every special case by theoretical or experimental results. Yet another important property of the first approximation, which is also instrumental in applied investigations, is its invariance with respect to functional transformations of the parameter space. Let

αi = αi(β1, ..., βq),   i = 1(1)m,   (1.48)

where β1, ..., βq are new parameters constituting a vector β and, generally speaking, q ≠ m. The right sides of (1.48) are assumed to be continuously differentiable functions of all their arguments. Then, the first approximation (1.45) preserves its form if by µ1, ..., µm we mean the differentials (increments)

µi = Σ_{j=1}^{q} (∂αi/∂βj) νj,   i = 1(1)m,   (1.49)

where ν1, ..., νq are the increments of the parameters β1, ..., βq.
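The transformation rule (1.49) is the ordinary chain rule, which the following Python fragment verifies on an invented mapping α(β):

```python
def alpha_of_beta(b1, b2):
    """Invented transformation (1.48): alpha1 = b1*b2, alpha2 = b1 + b2."""
    return (b1 * b2, b1 + b2)

def mu_from_nu(b1, b2, nu1, nu2):
    """Relation (1.49): mu_i = sum_j (d alpha_i / d beta_j) * nu_j,
    with the Jacobian of alpha_of_beta written out by hand."""
    mu1 = b2 * nu1 + b1 * nu2      # d a1/d b1 = b2,  d a1/d b2 = b1
    mu2 = 1.0 * nu1 + 1.0 * nu2    # d a2/d b1 = 1,   d a2/d b2 = 1
    return mu1, mu2

b, nu = (2.0, 3.0), (1e-4, -2e-4)
mu = mu_from_nu(b[0], b[1], nu[0], nu[1])
a0 = alpha_of_beta(b[0], b[1])
a1 = alpha_of_beta(b[0] + nu[0], b[1] + nu[1])
actual = (a1[0] - a0[0], a1[1] - a0[1])   # true increments of alpha
```

The increments computed from (1.49) match the true increments of α up to terms of second order in ν, which is exactly what the invariance of the first approximation requires.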

Thus, having a set of sensitivity functions with respect to a complete set of parameters α and using (1.45) and (1.49), it is possible to investigate the influence on the additional motion of arbitrary parameters connected with α by (1.48). Therefore, it is expedient to choose the set α with the minimal number of elements, because in this case the number of required sensitivity functions (1.46) will be minimal. Notice that higher-order approximations, as distinct from the first approximation, are invariant with respect to the transformation (1.48) if and only if this transformation is linear. If the approximate equality (1.47) holds with sufficient precision, we can fairly simply analyze various properties of the additional motion using (1.47). For example, estimating (1.47) by a norm, we obtain

|∆^(1) Y(t, α)| ≤ Σ_{i=1}^{m} |Ui(t, ᾱ)| |µi|,   (1.50)

which gives an approximate estimate of the norm of the additional motion for any time instant. Let now µ1, ..., µm be dependent random values with mathematical expectations Mi and correlation matrix K = [kij], i, j = 1, ..., m, where kii = di are the variances of the corresponding parameters. Then, the mathematical expectation M[∆Y] of the additional motion is approximately equal to

M[∆Y] ≈ M[∆^(1) Y] = Σ_{i=1}^{m} Ui(t, ᾱ) Mi.   (1.51)

The correlation matrix of the first approximation can also be easily calculated. With this aim in view, subtract (1.51) from (1.45):

∆^(1) Y⁰(t, α) = Σ_{i=1}^{m} Ui(t, ᾱ) µi⁰ = Z,   (1.52)

where the zero superscript denotes a centered component. Transposing (1.52), we obtain

Zᵀ(t) = Σ_{i=1}^{m} Uiᵀ(t, ᾱ) µi⁰,   (1.53)

where the superscript “T” denotes the transpose operator.

Taking the product of (1.52) and (1.53), we obtain

Z(t1) Zᵀ(t2) = Σ_{i=1}^{m} Σ_{j=1}^{m} Ui(t1, ᾱ) Ujᵀ(t2, ᾱ) µi⁰ µj⁰ = Σ_{i=1}^{m} Σ_{j=1}^{m} Uij(t1, t2, ᾱ) µi⁰ µj⁰,   (1.54)

where

Uij(t1, t2, ᾱ) = Ui(t1, ᾱ) Ujᵀ(t2, ᾱ).   (1.55)

Averaging (1.54), we obtain

KZ(t1, t2, ᾱ) = Σ_{i=1}^{m} Σ_{j=1}^{m} Uij(t1, t2, ᾱ) kij,   (1.56)

where KZ(t1, t2, ᾱ) is the correlation matrix of the first approximation, which approximately determines the correlation matrix of the additional motion. In the special case when the random parameters µ1, ..., µm are independent, formula (1.56) simplifies:

KZ(t1, t2, ᾱ) = Σ_{i=1}^{m} Uii(t1, t2, ᾱ) di,   (1.57)

where di = kii is the variance of the parameter αi. Denoting the trace of the matrix KZ(t, t, ᾱ) by DZ(t, ᾱ), i.e.,

DZ(t, ᾱ) = tr KZ(t, t, ᾱ),   (1.58)

from (1.56) we obtain

DZ(t, ᾱ) = tr Σ_{i=1}^{m} Σ_{j=1}^{m} Uij(t, t, ᾱ) kij.   (1.59)

Here DZ(t, ᾱ) denotes the sum of the variances of the components of the vector ∆^(1) Y⁰(t, α), which approximate the variances of the corresponding components of the vector of the additional motion.
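Formulas (1.51) and (1.57) can be checked by direct Monte Carlo simulation. The Python sketch below treats a scalar output with invented sensitivity functions Ui and independent Gaussian parameter increments; all numbers are illustrative:

```python
import random

# Scalar first approximation Z = sum(U_i * mu_i) with independent
# increments mu_i ~ N(M_i, d_i).
U = [2.0, -1.0, 0.5]     # sensitivity functions at the base point
M = [0.1, 0.0, -0.2]     # means of the parameter increments
d = [0.04, 0.01, 0.09]   # variances of the parameter increments

mean_pred = sum(u * m for u, m in zip(U, M))      # formula (1.51)
var_pred = sum(u * u * s for u, s in zip(U, d))   # formula (1.57)

random.seed(0)
samples = []
for _ in range(200_000):
    mu = [random.gauss(m, s ** 0.5) for m, s in zip(M, d)]
    samples.append(sum(u * x for u, x in zip(U, mu)))

mean_mc = sum(samples) / len(samples)
var_mc = sum((z - mean_mc) ** 2 for z in samples) / len(samples)
```

The sample mean and variance of the simulated first approximation agree with the predictions of (1.51) and (1.57) to within the usual Monte Carlo error.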

1.3.4 Statement of Optimization Problems with Sensitivity Requirements

In the problems formulated above, it was implicitly assumed that the area Mα of possible values of the parameter vector α is known. Nevertheless, in many cases such an assumption is not substantiated, and the investigator must find a set Rα of values of some parameters that ensure the conditions of normal operation (1.23) or (1.24). Solution of such a problem may precede optimization, when it is required to choose a vector α0 from the set Rα such that condition (1.25) holds. Nevertheless, in many practical cases, spending a great deal of time and resources on solving the optimization problem proves inexpedient, because the obtained optimal system is too sensitive to parameter variations and, therefore, is practically unworkable. A method to overcome this difficulty consists in a statement of the optimization problem that takes sensitivity requirements into account. Consider a general statement of the problem. According to (1.26) and (1.28), a classical optimization problem can be formulated as follows: it is required to find a vector α0 ∈ Rα, where Rα is a given set of parameters, such that

J1(α0) ≥ J1(α),   α ∈ Rα,   Y(t, α0) ∈ MY.   (1.60)

Solution of this problem ensures optimality of the system only for α = α0. In reality we will always have

α = α0 + ∆α,   (1.61)

where ∆α is a parameter fluctuation belonging to a region ∆Rα, which in the general case depends on α0. Then, taking into account the necessity of satisfying (1.23), we must augment conditions (1.60) by

Y(t, α0 + ∆α) ∈ MY   for   ∆α ∈ ∆Rα.   (1.62)

Instead of (1.62) we may also write

Y(t, α0) + ∆Y(t, α) ∈ MY   for   ∆α ∈ ∆Rα,   (1.63)

where ∆Y(t, α) = Y(t, α0 + ∆α) − Y(t, α0) is the relevant additional motion. Conditions (1.60) and (1.63) determine the statement of the optimization problem with restrictions imposed on the additional motion. In the framework of the approach exposed above, this problem can be formulated in terms of sensitivity functions. Indeed, approximating the additional motion by an approximation of the form (1.40), instead of (1.63) we obtain restrictions on the sensitivity functions:

Y(t, α0) + ∆^(k) Y(t, α) ∈ MY   for   ∆α ∈ ∆Rα.   (1.64)

Adding the corresponding sensitivity equations to these relations, we formally obtain a classical optimization problem in the augmented space of state variables and their sensitivity functions. Note that similar situations are often characteristic of various problems of adaptation and learning. Indeed, as was shown by Tsypkin [118], these problems can in most cases be reduced to the problem of parametric optimization under uncertainty, when the system automatically optimizes a functional of the form

J(α) = ∫ Q(Y, α) P(Y) dY,

where Q(Y, α) is a functional defined over the set of state variables, which depends on the parameter vector α, and P(Y) is a generally unknown distribution density. In the general case, even successful organization of the adaptation process, i.e., automatic adjustment of the parameters α, does not ensure satisfactory properties of the system if the requirements (1.63) are not met. Therefore, strictly speaking, such restrictions must be taken into account in the adaptation procedure itself. Note that the solution of the synthesis problem accounting for restrictions on additional motion appears to be cumbersome and is connected with many difficulties. Nevertheless, sensitivity analysis of an optimal system is an obligatory design stage.
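A brute-force caricature of the statement (1.60) together with the restriction (1.64) may clarify its content. In the Python fragment below both the quality index and the sensitivity model are invented; the point is only that the sensitivity restriction moves the chosen parameter away from the classical optimum:

```python
def J1(a):
    """Invented quality index with unconstrained optimum at a = 3."""
    return 5.0 - (a - 3.0) ** 2

def sensitivity(a):
    """Invented model of the sensitivity magnitude |dY/da| at the point a."""
    return a * a

def best_parameter(grid, u_max=None):
    """Classical optimum (u_max=None), or the optimum under the
    sensitivity restriction |U(a)| <= u_max, cf. (1.60) and (1.64)."""
    feasible = [a for a in grid
                if u_max is None or abs(sensitivity(a)) <= u_max]
    return max(feasible, key=J1)

grid = [i * 0.01 for i in range(501)]           # a in [0, 5]
a_classical = best_parameter(grid)              # unconstrained optimum
a_restricted = best_parameter(grid, u_max=5.0)  # pushed below sqrt(5)
```

The restricted choice sacrifices part of the quality index in exchange for bounded sensitivity, which is exactly the trade-off the statement (1.60)+(1.64) formalizes.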


Chapter 2

Sensitivity of Finite-Dimensional Continuous Systems

2.1 Finite-Dimensional Continuous Systems Depending on a Parameter

2.1.1 Mathematical Description

The most general mathematical description of the class of systems under consideration is given by systems of ordinary differential equations in the normal Cauchy form:

dy1/dt = f1(y1, ..., yn, t),
.....................
dyn/dt = fn(y1, ..., yn, t),   (2.1)

where y1, ..., yn are state variables, f1, ..., fn are nonlinear (in the general case) functions, and t is the argument, hereinafter called time. Introducing the column vectors Y and F with components y1, ..., yn and f1, ..., fn, respectively, the system (2.1) can be written as the vector equation

dY/dt = F(Y, t).   (2.2)

In order to describe, at least approximately, a real physical process, (2.2) should satisfy some requirements caused by the necessary similarity between the physical process and its mathematical model. First of all, (2.2) should satisfy the requirement of existence and uniqueness of the solution for given initial conditions. Sufficient conditions for existence and uniqueness of the solution are given by the following theorem.

THEOREM 2.1 (EXISTENCE THEOREM [77]) Assume that in some open area Γ of the space of the n+1 variables y1, ..., yn, t the right sides of (2.1) are defined and continuous, together with the partial derivatives

∂fi(y1, ..., yn, t)/∂yj,   i, j = 1, ..., n.   (2.3)

Then, for any point y10, ..., yn0, t0 in the area Γ there exists a unique solution with the initial conditions yi(t0) = yi0, i = 1, ..., n, defined over some interval of the argument t that includes the point t0.

Hereinafter we consider the systems of Equations (2.1) and (2.2) defined for all t ≥ t0, where t0 is a constant. Physically, this means that the mathematical model under investigation describes the initial system on any interval of its existence. Nevertheless, it should be noted that the existence theorem does not guarantee that the interval ts < t < te on which the solution is defined is unilaterally or bilaterally infinite. Therefore, in order to be sure that the corresponding solutions are defined for all t ≥ t0, it is necessary to impose some additional restrictions on Equation (2.1) [77, 101]. Further on, it is always assumed that these requirements are satisfied, that Equations (2.1) and (2.2) under consideration satisfy the conditions of existence and uniqueness, and that the solution can be continued for t → ∞.

2.1.2 Parametric Dependence of Solutions on Finite Time Intervals

According to the general principles described in Chapter 1, the presence of various parameters changing the properties of the initial system can be described mathematically by introducing free variables (parameters) into the equations of its mathematical model. Therefore, we can write a parametric model of a finite-dimensional continuous system in the following expanded form:

dyi/dt = fi(y1, ..., yn, t, α1, ..., αm),   i = 1, ..., n.   (2.4)

Introducing the parameter vector α, similarly to (2.2) we can write the vector equation

dY/dt = F(Y, t, α).   (2.5)

Further, we implicitly assume that for any α ∈ Rα, where Rα is the region of possible variation of the vector α, Equations (2.4) satisfy the conditions of existence and uniqueness of the solution, and that this solution can be continued for t → ∞.
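The dependence of solutions of (2.5) on α can be probed numerically. The Python sketch below integrates the scalar example dy/dt = −αy, y(0) = 1 (Euler scheme, illustrative values) for nearby parameter values and measures the maximal deviation of the solutions over a finite interval:

```python
def solve(alpha, T=2.0, h=1e-3):
    """Euler integration of dy/dt = -alpha*y, y(0) = 1 on [0, T];
    returns the discretized trajectory."""
    y, t, path = 1.0, 0.0, []
    while t < T:
        path.append(y)
        y, t = y - h * alpha * y, t + h
    return path

def max_deviation(alpha, delta):
    """Maximal pointwise distance between the solutions for alpha and
    alpha + delta over the finite interval [0, T]."""
    ya, yb = solve(alpha), solve(alpha + delta)
    return max(abs(p - q) for p, q in zip(ya, yb))

dev_big = max_deviation(1.0, 0.1)     # parameter perturbation 0.1
dev_small = max_deviation(1.0, 0.01)  # ten times smaller perturbation
```

Shrinking the parameter perturbation shrinks the maximal deviation of the solution on the whole interval, which is the behavior guaranteed by the continuity theorems of this section.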


One of the main tasks of sensitivity theory is the investigation of the properties of the solutions of Equations (2.5) under variations of α. Some of these properties are established by general theorems of the theory of differential equations. Below we give, without proofs, some basic theorems needed later [77].

THEOREM 2.2 (CONTINUITY WITH RESPECT TO PARAMETERS) Let, in an open area Γ̃ of the n+m+1 variables y1, ..., yn, α1, ..., αm, t, the right sides of Equations (2.4) be continuous together with the partial derivatives

∂fi(Y, t, α)/∂yj.   (2.6)

Let also

Y = Y(t, α)   (2.7)

be solutions of (2.5) in Γ̃ satisfying the initial condition

Y(t0, α) = Y0,   (2.8)

where t0, Y0 and α0 belong to Γ̃. Then, if the solution Y = Y(t, α0) is defined on the interval t1 ≤ t ≤ t2, there is a positive number ρ such that for |α − α0| < ρ the solution (2.7) is defined on the same interval t1 ≤ t ≤ t2, and the function Y = Y(t, α) is continuous with respect to t and α for |α − α0| < ρ and t1 ≤ t ≤ t2. Here and in what follows, the notation | · | denotes a norm.

THEOREM 2.3 (DIFFERENTIABILITY WITH RESPECT TO PARAMETERS) If the right sides of (2.4) have continuous partial derivatives in the area Γ̃ with respect to the variables y1, ..., yn, α1, ..., αm up to order s (inclusive), and the solution (2.7) satisfies condition (2.8), then for |α − α0| < ρ and t1 ≤ t ≤ t2 (where ρ is a sufficiently small positive number) this solution has continuous partial derivatives with respect to α1, ..., αm up to order s (inclusive).

It should be noted that in these theorems the initial moment t0 and the initial condition Y(t0) are assumed invariant. On the basis of these theorems we can consider the dependence of the solutions of (2.4) on the initial data, which enter as parameters into the general parametric model of the finite-dimensional system (2.5). In the general case, the solution of (2.2) depends on the initial moment t0 and the initial


value Y0 = Y(t0), so that it can be written as

    Y = Y(t, t0, Y0).    (2.9)

Moreover, the following identity holds:

    Y(t0, t0, Y0) ≡ Y0.    (2.10)

Let

    Y = Z + Y0,    t = τ + t0.    (2.11)

Considering (2.11) as a change of variables, from (2.5) we easily obtain

    dZ/dτ = F(Z + Y0, τ + t0, α) = G(Z, τ, Y0, α).    (2.12)

The right side of (2.12) explicitly depends on the parameters Y0 and t0, while the initial condition Z(0) = 0 does not. Therefore, we can apply the above theorems on continuity and differentiability with respect to parameters to (2.12). Returning to the variables Y and t and using some additional relations [77], it is possible to prove the following theorem.

THEOREM 2.4  CONTINUITY AND DIFFERENTIABILITY WITH RESPECT TO INITIAL DATA
Let, in an open area Γ of the variables y1, ..., yn, t, the right sides of (2.4) be continuous together with the partial derivatives (2.3). If t0, Y0 is an arbitrary point of the area Γ, and the general solution (2.9) satisfying the initial condition (2.10) is defined on the interval t1 ≤ t ≤ t2, then there is a constant ρ such that for

    t1 ≤ t ≤ t2,    |τ − t0| ≤ ρ,    |Z0 − Y0| ≤ ρ

the solution Y(t, τ, Z0) is continuous with respect to t, τ, and Z0 and has continuous partial derivatives with respect to the components z1, ..., zn of the vector Z0 and with respect to the parameter τ. Moreover, there exist continuous partial derivatives

    ∂²Y(t, τ, Z0)/∂τ∂zi,

independent of the order of differentiation. Hereinafter, when necessary, the conditions of the above theorems are assumed to be valid.


It is noteworthy that the theorems formulated in this section relate to a finite time interval. This circumstance restricts their applicability, because it may happen that the time interval appearing in a theorem is shorter than the interval over which the performance of the system is investigated. To remove this restriction, one needs theorems on continuous dependence on parameters over infinite time intervals. This problem is analyzed in the following sections.

2.1.3  Calculation of Derivatives by Parameters

If the conditions of differentiability with respect to parameters formulated above hold, then on the basis of (2.4) one can obtain a system of differential equations determining the derivatives of the solution with respect to the parameters. This fact is formulated below as a theorem.

THEOREM 2.5
Let the conditions of the theorem on differentiability with respect to parameters hold, and let t0 and Y0 be independent of α. Then, the derivatives of the solutions with respect to the parameters are defined by the differential equations

    d/dt (∂ys/∂αj) = Σ_{i=1}^{n} (∂fs/∂yi)(∂yi/∂αj) + ∂fs/∂αj,    s = 1, ..., n,  j = 1, ..., m,    (2.13)

with initial conditions

    ∂ys/∂αj (t = t0) = 0,    s = 1, ..., n,  j = 1, ..., m.    (2.14)

It can be easily seen that Equations (2.13) are obtained from (2.4) by formal differentiation with respect to αj. From this theorem we can derive equations for determining the derivatives with respect to the initial conditions y10, ..., yn0. These derivatives satisfy the equations

    d/dt (∂ys/∂y0j) = Σ_{i=1}^{n} (∂fs/∂yi)(∂yi/∂y0j),    s = 1, ..., n,  j = 1, ..., n,    (2.15)

which are obtained from (2.13) by excluding the second term in the right side. Equations (2.15) are often called equations in variations. The question


about calculation of initial conditions will be considered in the next section from a more general viewpoint. Equations (2.13) will be called sensitivity equations with respect to parameter αj .
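The construction above can be sketched numerically: augment the state with u = ∂y/∂α and integrate the original equation jointly with its sensitivity equation (2.13). The model below is a hypothetical scalar example dy/dt = −αy, for which ∂f/∂y = −α and ∂f/∂α = −y, so the sensitivity equation is du/dt = −αu − y with u(t0) = 0 by (2.14); the exact sensitivity is ∂y/∂α = −t·exp(−αt).

```python
import math

def rhs(state, t, alpha):
    # Augmented system: original equation plus the sensitivity
    # equation (2.13) for the hypothetical model dy/dt = -alpha*y.
    y, u = state
    dy = -alpha * y                 # f(y, t, alpha)
    du = -alpha * u - y             # (df/dy)*u + df/dalpha
    return (dy, du)

def rk4(rhs, state, t0, t1, h, alpha):
    t = t0
    while t < t1 - 1e-12:
        k1 = rhs(state, t, alpha)
        k2 = rhs(tuple(s + 0.5*h*k for s, k in zip(state, k1)), t + 0.5*h, alpha)
        k3 = rhs(tuple(s + 0.5*h*k for s, k in zip(state, k2)), t + 0.5*h, alpha)
        k4 = rhs(tuple(s + h*k for s, k in zip(state, k3)), t + h, alpha)
        state = tuple(s + h*(a + 2*b + 2*c + d)/6
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        t += h
    return state

# u(t0) = 0 by (2.14), since t0 and y0 do not depend on alpha.
y1, u1 = rk4(rhs, (1.0, 0.0), 0.0, 1.0, 0.001, alpha=0.5)
# Exact sensitivity of y(t) = exp(-alpha*t) is dy/dalpha = -t*exp(-alpha*t).
print(u1, -1.0 * math.exp(-0.5))
```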

2.1.4  Parametric Models and General Sensitivity Equations

The general solution (2.9) of Equation (2.5) depends on the n + m + 1 scalar parameters t0, y01, ..., y0n, α1, ..., αm, which form a complete group of parameters (cf. Section 2.3 of Chapter 1), because they determine a unique solution of (2.5). In the general case, an arbitrary family of parameters µ1, ..., µk is complete if there are unique relations

    t0 = t0(µ1, ..., µk),    Y0 = Y0(µ1, ..., µk),    α = α(µ1, ..., µk).    (2.16)

In this case, the k parameters µ1, ..., µk uniquely specify the initial conditions and the vector α(µ), i.e., they uniquely specify the corresponding solution Y = Y(t, µ). Therefore, hereinafter we shall say that (2.16) specifies a k-parametric family of solutions of (2.5). Let us note that the last relation in (2.16),

    α = α(µ),    (2.17)

defines a change of variables in the space of parameters. Substituting (2.17) into (2.5), we obtain

    dY/dt = F(Y, t, α(µ)) = F̃(Y, t, µ).    (2.18)

Moreover, with (2.16) and (2.17) the initial conditions can be represented in the form

    t0 = t0(µ),    Y0 = Y0(µ).    (2.19)

In the special case when there exists only one scalar parameter µ = α, so that (2.5) has the form

    dY/dt = F(Y, t, α),    (2.20)

the parameter α constitutes a complete group and, therefore, determines a single-parameter family of solutions, if the initial conditions depend uniquely on α:

    t0 = t0(α),    Y0 = Y0(α).    (2.21)


Now let us formulate a general theorem on differentiability with respect to a parameter, which is a direct consequence of the general theorems given in the previous section.

THEOREM 2.6
If the functions (2.21) are continuously differentiable with respect to α, then the solution Y(t, α) satisfying the condition

    Y(t0(α), α) = Y0(α)    (2.22)

is continuously differentiable with respect to α on any closed interval of t for which the solution Y(t, α) belongs to the area Γ̃ in which the right side of (2.20) is continuously differentiable with respect to Y and α. The derivative

    U(t, α) = ∂Y(t, α)/∂α    (2.23)

is determined by the equation

    dU/dt = (∂F/∂Y)|_{Y=Y(t,α)} U + (∂F/∂α)|_{Y=Y(t,α)}    (2.24)

and the initial conditions

    U(t0) = dY0/dα − Ẏ0 (dt0/dα) = dY0/dα − F(Y0(α), t0(α), α) (dt0/dα).    (2.25)

In (2.24), the derivative ∂F/∂Y of the vector F with respect to the vector Y is a square matrix:

    ∂F/∂Y = ‖qik‖ = ‖∂fi/∂yk‖,    i, k = 1, ..., n.    (2.26)

PROOF  With due account for (2.21), Equation (2.20) can be reduced to the integral equation

    Y(t, α) = ∫_{t0(α)}^{t} F(Y(t, α), t, α) dt + Y0(α).    (2.27)

If the conditions of the theorem hold, Equation (2.27) can be differentiated with respect to the parameter α, and the interchange of differentiation with respect to α and integration in the right side is valid [101]. As a result, differentiating with respect to α, we obtain

    U(t, α) = ∂Y(t, α)/∂α = ∫_{t0(α)}^{t} [(∂F/∂Y) U + ∂F/∂α] dt + dY0/dα − Ẏ0(t0, α) (dt0/dα).    (2.28)

Differentiating (2.28) with respect to t, we obtain (2.24). Moreover, setting t = t0(α) in (2.28), we directly obtain (2.25).

Let us make some important remarks on the above theorem.

REMARK 2.1  As follows from (2.24), the form of the equations determining the function U(t, α) is independent of the initial conditions (2.25), i.e., it is independent of the choice of a family of solutions and is defined exclusively by the dependence of the right side of (2.20) on α. Therefore, it is natural to call (2.24) the vector sensitivity equation of the system with respect to the parameter α. The sensitivity functions of various single-parameter families of solutions with respect to α satisfy the same equation (2.24) with different initial conditions (2.25).

REMARK 2.2  As follows from (2.24), the sensitivity equations with respect to the parameter α are linear with respect to the corresponding sensitivity functions. These equations are, in the general case, nonhomogeneous, and become homogeneous only when

    (∂F/∂α)|_{Y=Y(t,α)} = O,    (2.29)

i.e., when the right side of (2.20) does not depend on the parameter α on the family of solutions under consideration.

REMARK 2.3  If the initial system depends on α and one more parameter β, instead of (2.20) we have

    dY/dt = F(Y, t, α, β).    (2.30)

If the corresponding differentiability conditions hold, we can write, similarly to (2.24), the sensitivity equation with respect to the parameter β:

    dUβ/dt = (∂F/∂Y)|_{Y=Y(t,α)} Uβ + ∂F/∂β,    (2.31)

where Uβ is the sensitivity function with respect to the parameter β. Comparing (2.24) and (2.31) shows that these equations differ only by their free terms. Their common homogeneous part is described by the equation

    dU/dt = (∂F/∂Y)|_{Y=Y(t,α)} U,    (2.32)

which coincides with the equation in variations (2.15).

REMARK 2.4  Taking the initial moment t0 as a parameter, i.e., t0 = α, the initial conditions can be written as

    t0 = α,    Y0 = const.

Since the right side of (2.20) is independent of α, the sensitivity equation coincides with the equation in variations (2.32), and the initial conditions take the form

    U(t0) = −F(Y0, t0).    (2.33)

REMARK 2.5  If t0 = const and the values α = y0i, i = 1, ..., n, are taken as parameters, the sensitivity equation coincides with the equation in variations (2.15), and the initial conditions take the form

    um(t0) = 0,  m ≠ i,    ui(t0) = 1.    (2.34)

REMARK 2.6  If the motion of the system under investigation is given by the n-th order equation

    y^(n) = f(t, y, ẏ, ..., y^(n−1), α)    (2.35)

with initial conditions

    y(t0, α) = y0(α), ..., y^(n−1)(t0, α) = y0^(n−1)(α),    t0 = t0(α),    (2.36)

the question of differentiability of the solutions with respect to the parameter and the calculation of the corresponding derivatives is solved directly on the basis of the theorems given above. Indeed, setting

    y = y1,  ẏ = y2, ..., y^(n−1) = yn,

from (2.35) we obtain the equations

    dy1/dt = y2,
    dy2/dt = y3,
    ...,
    dyn/dt = f(t, y1, ..., yn, α),    (2.37)

and the initial conditions reduce to (2.21). Therefore, the following theorem holds.

THEOREM 2.7
If the functions t0(α), y0^(i)(α), i = 0, ..., n − 1, are continuously differentiable with respect to α, then the solution y(t, α) of (2.35) satisfying the initial conditions (2.36) is continuously differentiable with respect to α on any closed interval of t in the area Γ̃ where the function f(t, y, ..., y^(n−1), α) is continuously differentiable with respect to y, ..., y^(n−1), α. The derivatives with respect to time ẏ(t, α), ..., y^(n−1)(t, α) are continuously differentiable with respect to α on the same interval of t.

Using (2.24) and (2.25), it is easy to find that the derivative

    u(t, α) = ∂y(t, α)/∂α    (2.38)

must satisfy the equation

    u^(n) = Σ_{s=0}^{n−1} (∂f/∂y^(s))|_{y^(i)=y^(i)(t,α)} u^(s) + (∂f/∂α)|_{y^(i)=y^(i)(t,α)}    (2.39)

and the initial conditions

    u0^(s) = dy0^(s)/dα − y0^(s+1) (dt0/dα),    s = 0, ..., n − 2,
    u0^(n−1) = dy0^(n−1)/dα − f(t0, y0, ..., y0^(n−1), α) (dt0/dα).    (2.40)

The sensitivity equation (2.39) is obtained by formal differentiation of the initial equation (2.35) with respect to the parameter.
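The reduction (2.37) and the sensitivity equation (2.39) can be sketched on a hypothetical second-order example ÿ = −αy with y(0) = 1, ẏ(0) = 0 independent of α, so that u(0) = u̇(0) = 0 by (2.40). Here ∂f/∂y = −α, ∂f/∂ẏ = 0, ∂f/∂α = −y, giving ü = −αu − y; the exact solution y = cos(√α t) has ∂y/∂α = −t·sin(√α t)/(2√α).

```python
import math

def rhs(s, t, alpha):
    # (2.37)-style reduction of y'' = -alpha*y (a hypothetical instance of
    # (2.35)) together with its sensitivity equation (2.39):
    # u'' = (df/dy)*u + df/dalpha = -alpha*u - y.
    y1, y2, u1, u2 = s
    return (y2, -alpha * y1, u2, -alpha * u1 - y1)

def rk4(rhs, s, t0, t1, h, alpha):
    t = t0
    while t < t1 - 1e-12:
        k1 = rhs(s, t, alpha)
        k2 = rhs(tuple(x + 0.5*h*k for x, k in zip(s, k1)), t + 0.5*h, alpha)
        k3 = rhs(tuple(x + 0.5*h*k for x, k in zip(s, k2)), t + 0.5*h, alpha)
        k4 = rhs(tuple(x + h*k for x, k in zip(s, k3)), t + h, alpha)
        s = tuple(x + h*(a + 2*b + 2*c + d)/6
                  for x, a, b, c, d in zip(s, k1, k2, k3, k4))
        t += h
    return s

# y(0) = 1, y'(0) = 0 independent of alpha, so u(0) = u'(0) = 0 by (2.40).
y1, y2, u1, u2 = rk4(rhs, (1.0, 0.0, 0.0, 0.0), 0.0, 1.0, 0.001, alpha=1.0)
# Exact: y = cos(sqrt(alpha)*t), dy/dalpha = -t*sin(sqrt(alpha)*t)/(2*sqrt(alpha)).
print(u1, -math.sin(1.0) / 2.0)
```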

2.1.5  Sensitivity Equations of Higher Orders

Using the reasoning of the previous section, we can establish conditions for the existence of higher-order sensitivity functions and find the equations that determine them. Consider, as an example, the sensitivity function

    U^(2)(t, α) = ∂²Y(t, α)/∂α².    (2.41)

Let the initial conditions (2.21) define a family of solutions Y(t, α) of Equation (2.20). Then, as was proved above, the first-order sensitivity functions (2.23) define, for various α, a single-parameter family of solutions of the sensitivity equation (2.24) given by the initial conditions (2.25). Moreover, the existence of second-order sensitivity functions for the initial equation (2.20) appears to be equivalent to the existence of first-order sensitivity functions for (2.24). Applying Theorem 2.7 to (2.24), we obtain that if the right side of (2.24) is continuously differentiable with respect to Y and α, and the functions t0(α) and U0(α) are continuously differentiable with respect to α, then the function (2.41) does exist and is continuous on the corresponding intervals of t. It is the solution of the differential equation obtained by differentiation of (2.24) with respect to α:

    dU^(2)/dt = [(∂F/∂Y) U^(2) + (∂²F/∂Y²)(U)² + 2(∂²F/∂Y∂α) U + ∂²F/∂α²]|_{Y=Y(t,α)},    (2.42)

where the second term in the right side is an aggregate of terms representing second-order derivatives with respect to the state variables. To find the initial conditions we use (2.25). Obviously,

    U0^(2) = dU0/dα − U̇0 (dt0/dα).    (2.43)

By (2.25) we have

    dU0/dα = d²Y0/dα² − (dẎ0/dα)(dt0/dα) − Ẏ0 (d²t0/dα²).    (2.44)

Since

    dẎ0/dα = (∂F0/∂Y0)(dY0/dα) + (∂F0/∂t0)(dt0/dα) + ∂F0/∂α
            = [(∂F0/∂Y0) Ẏ0 + ∂F0/∂t0](dt0/dα) + (∂F0/∂Y0) U0 + ∂F0/∂α = Ÿ0 (dt0/dα) + U̇0,
    F0 = F(Y0, t0, α),    (2.45)

equation (2.43) can be written in the form

    U0^(2) = d²Y0/dα² − Ẏ0 (d²t0/dα²) − Ÿ0 (dt0/dα)² − 2U̇0 (dt0/dα).    (2.46)

Then, using Theorem 2.7, we arrive at the conclusion that the sensitivity function (2.41) exists, in the general case, if:

1. the right sides of the scalar equations in (2.20) have continuous partial derivatives with respect to Y and α up to the second order;

2. they have continuous second-order partial derivatives with respect to Y, α, and t, where differentiation with respect to t is performed only once;

3. the functions (2.21) have continuous second-order derivatives with respect to α.

Let us note that if the starting point t0 is fixed, the above relations are greatly simplified. Indeed, if t0 = const, equations (2.43) and (2.46) yield

    U0^(2) = dU0/dα = d²Y0/dα².    (2.47)

Thus, for t0 = const the above sufficient conditions become weaker and reduce to the following:

1. the right side of (2.20) must be twice continuously differentiable with respect to Y and α;


2. the function Y0(α) must be twice continuously differentiable with respect to α.

It is easy to verify that this reasoning can be directly extended to sensitivity functions of arbitrary order. The general result is formulated below as a theorem.

THEOREM 2.8
Continuous n-th order sensitivity functions

    U^(n)(t, α) = ∂ⁿY(t, α)/∂αⁿ    (2.48)

of a single-parameter family of solutions defined by the initial conditions (2.21) exist if

1. the right sides of the scalar equations forming the vector equation (2.20) have continuous partial derivatives with respect to Y, α, and t up to n-th order, where differentiation with respect to t is performed at most n − 1 times, and

2. the initial conditions (2.21) are also n times differentiable with respect to α.

Under these conditions, the associated sensitivity equations can be obtained by differentiating the initial equation (2.20) with respect to the parameter α n times, and the corresponding initial conditions are derived by successive employment of (2.25). If the starting point t0 is independent of the parameter α, the existence conditions for the sensitivity functions are simplified, because the condition of differentiability of the vector F(Y, t, α) with respect to t becomes redundant. The sensitivity equation of the n-th order can be written in the following general form:

    dU^(n)/dt = dⁿF(Y, t, α)/dαⁿ,    (2.49)

where d/dα is the so-called operator of the complete partial derivative with respect to α. Let us note some general properties of Equations (2.49).

1. As follows from (2.49), the general form of the sensitivity equations is independent of the initial conditions and is determined only by the dependence of the right side of (2.20) on α. Therefore, Equation (2.49) can be called the sensitivity equation of the n-th order with respect to the parameter α.

2. The sensitivity equation of the n-th order is linear with respect to the sensitivity function U^(n)(t, α) and can be represented in the form

    dU^(n)/dt = (∂F/∂Y)|_{Y=Y(t,α)} U^(n) + Rn(Y, U, U^(2), ..., U^(n−1), t, α).    (2.50)

3. The homogeneous part of the sensitivity equations is the same for any order and coincides with the equations in variations (2.15).
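The second-order construction can be sketched on the same hypothetical model dy/dt = −αy. Differentiating the first-order sensitivity equation once more in α (cf. (2.42)), with ∂²f/∂y² = 0, ∂²f/∂y∂α = −1, and ∂²f/∂α² = 0, gives du₂/dt = −αu₂ − 2u; the exact value is ∂²y/∂α² = t²·exp(−αt).

```python
import math

def rhs(s, a):
    # Hypothetical model dy/dt = -a*y with first- and second-order
    # sensitivity equations appended (cf. (2.24) and (2.42)):
    #   du/dt  = -a*u  - y
    #   du2/dt = -a*u2 - 2*u
    y, u, u2 = s
    return (-a * y, -a * u - y, -a * u2 - 2 * u)

def rk4(rhs, s, t1, h, a):
    t = 0.0
    while t < t1 - 1e-12:
        k1 = rhs(s, a)
        k2 = rhs(tuple(x + 0.5*h*k for x, k in zip(s, k1)), a)
        k3 = rhs(tuple(x + 0.5*h*k for x, k in zip(s, k2)), a)
        k4 = rhs(tuple(x + h*k for x, k in zip(s, k3)), a)
        s = tuple(x + h*(p + 2*q + 2*r + w)/6
                  for x, p, q, r, w in zip(s, k1, k2, k3, k4))
        t += h
    return s

# t0 and y0 independent of alpha, so u(0) = u2(0) = 0 (cf. (2.47)).
y, u, u2 = rk4(rhs, (1.0, 0.0, 0.0), 1.0, 0.001, a=0.5)
# Exact: d2/da2 exp(-a*t) = t^2 * exp(-a*t), i.e. exp(-0.5) at t = 1.
print(u2, math.exp(-0.5))
```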

2.1.6  Multiparameter Case

If the equations of the system under investigation depend on a vector parameter α = (α1, ..., αm) and have the form (2.5), it is natural to consider all possible sensitivity functions of the n-th order:

    U^(l1,...,lm)(t, α) = ∂ⁿY(t, α)/(∂α1^{l1} ... ∂αm^{lm}),    l1 + l2 + ... + lm = n.    (2.51)

Sufficient conditions for the existence of continuous sensitivity functions (2.51) can be derived on the basis of the general theorems given above. Using Theorem 2.7 several times, we arrive at the following conclusion.

PROPOSITION 2.1
Let the family of solutions under consideration be given by Equations (2.5) and the initial conditions

    t0 = t0(α),    Y0 = Y0(α),    (2.52)

where α is a vector parameter. Then, continuous sensitivity functions (2.51) exist and are independent of the order of differentiation with respect to the various parameters if the following conditions hold:

• the right side of (2.5) admits n continuous partial derivatives with respect to the variables Y and α, and n continuous partial derivatives with respect to Y, α, and t, where differentiation with respect to t is performed at most n − 1 times;

• the functions (2.52) have continuous n-th order partial derivatives with respect to all parameters.


The corresponding sensitivity equations can be obtained directly by differentiating the initial equation with respect to the parameters, while the initial conditions are derived by successive usage of (2.25). It is noteworthy that in special cases, for instance, when the initial conditions are independent of some parameters, the above sufficient conditions can be weakened. A general principle for obtaining sufficient conditions of existence of the corresponding functions can be formulated as follows.

PROPOSITION 2.2
Let the initial equation (2.5) and initial conditions (2.52) be given. Then, sufficient conditions for the existence of the sensitivity functions (2.51) can be derived in the following way. First, differentiate Equation (2.5) successively with respect to the parameters the desired number of times (it is assumed that this is possible). As a result, we obtain an equation of the form

    dU^(l1,...,lm)/dt = R(U^(l1,...,lm), ..., U, Y, t, α),    (2.53)

which is linear with respect to each sensitivity function U^(l1,...,lm). Moreover, formally employing Relation (2.25), we obtain the relation

    U^(l1,...,lm)(t0) = G(Y0, t0, α).    (2.54)

Then, if the right sides of (2.53) and (2.54) are continuous, the associated sensitivity function exists and is independent of the order of differentiation with respect to the various parameters. Equation (2.53) is the desired sensitivity equation, while Relation (2.54) gives the necessary initial conditions.

Example 2.1
Consider the scalar equation

    dy/dt = f(y, t, β),    t0 = t0(α),    y0 = y0(β).    (2.55)

It is required to find sufficient conditions for the existence of the sensitivity function uαβ = ∂²y(t, α, β)/∂α∂β. Differentiating the initial equation formally with respect to α, we obtain

    duα/dt = (∂f/∂y) uα,    uα = ∂y(t, α, β)/∂α.    (2.56)

Using (2.25), we derive the initial conditions

    uα(t0) = −f(y0, t0, β) (dt0/dα).    (2.57)

Then, differentiating (2.56) formally with respect to β yields

    duαβ/dt = (∂f/∂y) uαβ + (∂²f/∂y²) uα uβ + (∂²f/∂y∂β) uα,    (2.58)

where the function uβ = ∂y/∂β satisfies the equation

    duβ/dt = (∂f/∂y) uβ + ∂f/∂β    (2.59)

with initial conditions

    uβ(t0) = dy0/dβ.    (2.60)

Using (2.25) and (2.57), we obtain

    uαβ(t0) = duα(t0)/dβ = −[(∂f(y0, t0, β)/∂y0)(dy0/dβ) + ∂f(y0, t0, β)/∂β] (dt0/dα).    (2.61)

As follows from the above reasoning, if the derivatives

    ∂²f/∂y²,    ∂²f/∂y∂β

are continuous, and so are the functions

    ∂f/∂β,    ∂y0/∂β,    ∂t0/∂α,

then the function uαβ exists and is independent of the order of differentiation with respect to α and β. The corresponding sensitivity equation has the form (2.58), and the initial conditions are given by (2.61).

For investigation of multiparameter problems it is often convenient to employ vector differentiation. Such an approach is especially handy when considering first-order sensitivity functions. If Equation (2.5) is given and


conditions for the existence of first-order sensitivity functions hold, the following sensitivity matrix can be introduced:

    U(t, α) = ∂Y/∂α = ‖∂yi(t, α)/∂αk‖,    i = 1, ..., n,  k = 1, ..., m.

This matrix is the solution of the sensitivity equation similar to (2.24):

    dU/dt = (∂F/∂Y)|_{Y=Y(t,α)} U + (∂F/∂α)|_{Y=Y(t,α)}    (2.62)

with initial conditions

    U(t0) = dY0(α)/dα − Ẏ0 (dt0/dα),    (2.63)

where the matrices ∂F/∂α and dY0/dα and the vector dt0/dα are determined by the relations

    ∂F/∂α = ‖∂fi/∂αk‖,    dY0/dα = ‖∂y0i/∂αk‖,    dt0ᵀ/dα = ‖dt0/dα1, ..., dt0/dαm‖,
    i = 1, ..., n,  k = 1, ..., m.

In the special case when the components of the initial condition vector Y0 are taken as parameters, from (2.62) and (2.63) we obtain the homogeneous equation in variations

    dU/dt = (∂F/∂Y) U    (2.64)

with initial conditions of the form

    U(t0) = E,    (2.65)

where E denotes the identity matrix of the corresponding dimension.
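For a linear system dY/dt = AY the matrix equation in variations (2.64) with U(t0) = E (2.65) has the closed-form solution U(t) = exp(At), which a numerical sketch can verify. The sketch below uses a hypothetical 2×2 rotation matrix A, for which exp(At) is known exactly.

```python
import math

A = [[0.0, 1.0], [-1.0, 0.0]]   # hypothetical linear system dY/dt = A*Y

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(U, V, c):
    # elementwise U + c*V
    return [[U[i][j] + c * V[i][j] for j in range(2)] for i in range(2)]

def rk4_variations(t1, h):
    # Integrates the equation in variations (2.64) with U(t0) = E (2.65);
    # for a linear system dF/dY = A, so dU/dt = A*U and U(t) = exp(A*t).
    U = [[1.0, 0.0], [0.0, 1.0]]   # identity matrix E
    t = 0.0
    while t < t1 - 1e-12:
        k1 = matmul(A, U)
        k2 = matmul(A, add(U, k1, 0.5 * h))
        k3 = matmul(A, add(U, k2, 0.5 * h))
        k4 = matmul(A, add(U, k3, h))
        U = add(U, add(add(k1, k4, 1.0), add(k2, k3, 1.0), 2.0), h / 6.0)
        t += h
    return U

U = rk4_variations(1.0, 0.001)
# For this A, exactly U(t) = [[cos t, sin t], [-sin t, cos t]].
print(U[0][0], U[0][1])
```

Each column of U(t) is the sensitivity of the solution to one component of the initial condition vector, consistent with Remark 2.5.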

2.1.7  Analytical Representation of a Single-Parameter Family of Solutions

In this section we discuss the problem of expanding parametric families of solutions into power series in the parameter values, which was considered in Chapter 1 in a general form.


First, consider the case of a single-parameter family of solutions Y(t, α) defined by Equation (2.20) and initial conditions (2.21). Assume that the conditions of existence and continuity of the sensitivity functions up to the (n + 1)-th order inclusive hold in an area Γ̃ containing the interval Jα of parameter values α1 ≤ α ≤ α2. Let α0 be a parameter value belonging to the interval Jα, so that α1 ≤ α0 ≤ α2. Then, using Lagrange's formula of differential calculus, for all µ such that α1 ≤ α0 + µ ≤ α2 and the corresponding t we can write

    Y(t, α0 + µ) = Y(t, α0) + U(t, α0)µ + ... + (1/n!) U^(n)(t, α0)µⁿ
                   + (1/(n+1)!) U^(n+1)(t, α0 + ϑµ)µ^(n+1),    0 < ϑ < 1.    (2.66)

According to (2.49), the sensitivity functions U^(i) in (2.66) are determined by the differential equations

    dU^(i)/dt = d^i F(Y, t, α)/dα^i    (2.67)

with initial conditions

    U^(i)(t0) = dU^(i−1)(t0)/dα − U̇^(i−1)(t0) (dt0/dα).    (2.68)

Ignoring the remainder term in (2.66), we obtain an approximation of the n-th order:

    Y(t, α0 + µ) ≈ Y(t, α0) + U(t, α0)µ + ... + (1/n!) U^(n)(t, α0)µⁿ.    (2.69)

According to Section 3.2 of Chapter 1, the difference

    ∆Y(t, µ) = Y(t, α0 + µ) − Y(t, α0)    (2.70)

represents the additional motion caused by the variation of the parameter α. From (2.69) we have the n-th approximation for the additional motion:

    ∆^(n)Y(t, µ) ≈ U(t, α0)µ + ... + (1/n!) U^(n)(t, α0)µⁿ.    (2.71)

Similar expansions can be written for the multiparameter case as well. If all sensitivity functions up to the (n + 1)-th order exist and are continuous, we can write

    ∆Y(t, µ) = dY(t, α0) + (1/2!) d²Y(t, α0) + ... + (1/n!) dⁿY(t, α0)
               + (1/(n+1)!) d^(n+1)Y(t, α0 + ϑµ),    (2.72)

where α0 is the nominal (base) value of the parameter vector α, µ is the vector of parameter perturbations, and the argument t is considered as a parameter. Ignoring the remainder term, we obtain the n-th approximation for the additional motion:

    ∆^(n)Y(t, µ) ≈ dY(t, α0) + (1/2!) d²Y(t, α0) + ... + (1/n!) dⁿY(t, α0).    (2.73)

Formulas (2.72) and (2.73) have already been given for the general case in Chapter 1. Nevertheless, for finite-dimensional systems we have presented sufficient conditions for the existence of the sensitivity functions and general equations for their determination. Moreover, the relation between the error of the n-th approximation and the sensitivity functions of (n + 1)-th order has been established. For example, if the sufficient conditions for the existence of continuous second-order sensitivity functions hold, then for sufficiently small |µ|, where | · | denotes a vector norm, the first approximation has the form

    ∆Y(t, µ) ≈ dY(t, α0) = Σ_{i=1}^{m} Ui(t, α0)µi,    (2.74)

which is very useful for investigation of the general properties of additional motion.

2.1.8  Equations of Additional Motion

Let Y(t, α) be a single-parameter family of solutions of Equation (2.20), corresponding to the initial conditions (2.21) on a closed interval Jα: α1 ≤ α ≤ α2. Let α = α0 be some fixed base value of the parameter from the interval Jα, and let Y(t, α0) be the corresponding solution. If the parameter value differs from the base one, we obtain the solution

    Y(t, µ) = Y(t, α0 + µ),    (2.75)

satisfying the initial conditions

    Y(t0, µ) = Y(t0, α0 + µ).    (2.76)

Then, as follows from the aforesaid, the difference

    Z(t, µ) = Y(t, α0 + µ) − Y(t, α0) = ∆Y(t, µ)    (2.77)

represents the additional motion caused by the variation of the parameter α with respect to the nominal (base) value α0. Since for α = α0 + µ we have

    dY(t, α0 + µ)/dt = F(Y(t, α0 + µ), t, α0 + µ),    (2.78)

subtracting (2.20) from (2.78), we obtain

    dZ(t, µ)/dt = F̃(Z, t, µ),    (2.79)

where

    F̃(Z, t, µ) = F(Z + Y(t, α0), t, α0 + µ) − F(Y(t, α0), t, α0).    (2.80)
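A minimal sketch of (2.79)-(2.80), again on the hypothetical model dy/dt = −αy: here F̃(Z, t, µ) = −(α0 + µ)(Z + y(t, α0)) + α0·y(t, α0), and integrating it from Z(t0) = 0 must reproduce the direct difference (2.77).

```python
import math

alpha0, mu = 0.5, 0.1

def y_base(t):
    # base solution Y(t, alpha0) of the hypothetical model dy/dt = -alpha*y
    return math.exp(-alpha0 * t)

def F_tilde(z, t):
    # (2.80): F(Z + Y(t, a0), t, a0 + mu) - F(Y(t, a0), t, a0)
    return -(alpha0 + mu) * (z + y_base(t)) + alpha0 * y_base(t)

def rk4(f, z, t0, t1, h):
    t = t0
    while t < t1 - 1e-12:
        k1 = f(z, t)
        k2 = f(z + 0.5*h*k1, t + 0.5*h)
        k3 = f(z + 0.5*h*k2, t + 0.5*h)
        k4 = f(z + h*k3, t + h)
        z += h*(k1 + 2*k2 + 2*k3 + k4)/6
        t += h
    return z

z = rk4(F_tilde, 0.0, 0.0, 1.0, 0.001)                  # initial conditions as in (2.83)
delta = math.exp(-(alpha0 + mu)) - math.exp(-alpha0)    # direct difference (2.77)
print(z, delta)
```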

Equation (2.79) can be considered over an arbitrary time interval on which both the functions Y(t, α0) and Y(t, α0 + µ) are defined. Notice that Equation (2.79) has the same form independently of the form of the initial conditions (2.21). Therefore, it is natural to call it the equation of additional motion with respect to the parameter α, independently of the form of the specific family of solutions Y(t, α). Let us formulate some general properties of the equation of additional motion (2.79).

1. Equation (2.79) can be formally derived from (2.20) by the change of variables

    Z(t, µ) = Y(t, α0 + µ) − Y(t, α0),    α = α0 + µ.    (2.81)

2. By construction, the equation of additional motion has the trivial solution

    Z = 0,    µ = 0,    (2.82)

corresponding to the base solution Y(t, α0).

3. A definite family of solutions Y(t, α) is associated with a single-parameter family of additional motions, which are solutions of Equation (2.79) and satisfy the initial conditions

    Z(t0, µ) = Y(t0, α0 + µ) − Y(t0, α0),    t0 = t0(α0 + µ) = t0(µ).    (2.83)

4. If Γ̃ is an area in the space of the variables Y, t, and α in which Equation (2.20) is defined, the corresponding domain of the additional motion equation is defined by the change of variables (2.81). In this case, the parameter µ lies inside the interval Jα, so that

    α1 − α0 ≤ µ ≤ α2 − α0,    (2.84)

including the point µ = 0.

In a similar way, we can derive equations determining the deviation of the additional motion from a chosen approximation. For simplicity, let us consider the first approximation. Let in (2.79)

    Z(t, µ) = U(t, α0)µ + V(t, µ),    (2.85)

where

    V(t, µ) = ∆Y(t, µ) − ∆^(1)Y(t, µ).    (2.86)

According to (2.80), the right side of (2.79) can be represented in the form

    F̃(Z, t, µ) = (∂F/∂Y)|_{Y=Y(t,α0), α=α0} Z + (∂F/∂α)|_{Y=Y(t,α0), α=α0} µ + G(Z, t, µ).    (2.87)

Considering (2.85) as a change of variables and taking into account (2.87), we obtain

    dV/dt = G(V + U(t, α0)µ, t, µ).    (2.88)

2.1.9  Estimation of First Approximation Error

Using sensitivity equations, it is possible, generally speaking, to obtain estimates of the error of representing the additional motion as a power series in the parameter value. Consider the single-parameter case. We restrict ourselves to investigation of the first approximation error, which is the most important from the practical viewpoint.


According to (2.74), the first approximation for the additional motion is given by

    Z^(1)(t, µ) = U(t, α0)µ.    (2.89)

As before, denote the additional motion by

    ∆Y(t, µ) = Y(t, α0 + µ) − Y(t, α0).    (2.90)

Then, by (2.66) we find the error between the precise and approximate formulas for the additional motion as

    Z(t, µ) − Z^(1)(t, µ) = (1/2!) U^(2)(t, α0 + ϑµ)µ²,    0 < ϑ < 1.    (2.91)

Estimating both sides of (2.91) with a norm, it is easy to obtain

    |Z(t, µ) − Z^(1)(t, µ)| ≤ (1/2!) max_ϑ |U^(2)(t, α0 + ϑµ)| µ²,    0 ≤ ϑ ≤ 1.    (2.92)

To estimate the value

    ∆^(1) = (1/2) max_ϑ |U^(2)(t, α0 + ϑµ)| µ²,    0 ≤ ϑ ≤ 1,    (2.93)

we use the sensitivity equations. Let us describe a procedure for such an estimation. Let Equation (2.20),

    dY/dt = F(Y, t, α),

and the initial conditions (2.21),

    t0 = t0(α),    Y0 = Y0(α),

determine the family of solutions Y(t, α) in a closed set Γ̃ of the variables Y, t, and α. Assume that in this area there exist continuous sensitivity functions of the first and second order, U(t, α) and U^(2)(t, α), respectively, satisfying the equations

    dU/dt = (∂F/∂Y) U + ∂F/∂α,    U0 = U(t0) = dY0/dα − Ẏ0 (dt0/dα),    (2.94)

    dU^(2)/dt = (∂F/∂Y) U^(2) + R2(Y, U, t, α),    U^(2)(t0) = dU0/dα − U̇0 (dt0/dα) = U0^(2).    (2.95)

Represent (2.94) as an integral equation:

    U = ∫_{t0}^{t} [(∂F/∂Y) U + ∂F/∂α] dt + U0.    (2.96)

Estimating by a norm, we have

    |U| ≤ ∫_{t̄0}^{t} (a|U| + b) dt + c,    (2.97)

where

    a = max_Γ̃ |∂F/∂Y|,    b = max_Γ̃ |∂F/∂α|,    t̄0 = min_{α∈Γ̃} t0(α),    c = max_{α∈Γ̃} |U0|.    (2.98)

As is known [5, 42], from inequality (2.97) it follows that

    |U| ≤ (c + b/a) e^{a(t−t̄0)} − b/a,    t, α ∈ Γ̃.    (2.99)
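The Gronwall-type bound (2.99) can be checked numerically on a hypothetical setting where the exact sensitivity function is known. For dy/dt = −αy with y0 = 1 on 0 ≤ t ≤ 1 and 0.4 ≤ α ≤ 0.6, the constants (2.98) become a = 0.6, b = 1, c = 0, and the exact sensitivity is u = −t·exp(−αt).

```python
import math

# Hypothetical set: dy/dt = -alpha*y, y0 = 1, with 0 <= t <= 1,
# 0.4 <= alpha <= 0.6, 0 <= y <= 1.
a = 0.6          # a = max |dF/dY| = max alpha
b = 1.0          # b = max |dF/dalpha| = max |y| <= y0
c = 0.0          # c = |U0| = 0, since the initial data do not depend on alpha

def bound(t):
    # Gronwall-type estimate (2.99): |U| <= (c + b/a) e^{a t} - b/a
    return (c + b / a) * math.exp(a * t) - b / a

def u_exact(t, alpha):
    # exact sensitivity of y = exp(-alpha*t)
    return -t * math.exp(-alpha * t)

for alpha in (0.4, 0.5, 0.6):
    for k in range(11):
        t = k / 10
        assert abs(u_exact(t, alpha)) <= bound(t)   # the estimate holds
print("estimate (2.99) verified on the grid")
```

The bound is conservative, as expected of a worst-case estimate over the whole set Γ̃: at t = 1 it gives about 1.37 while the true sensitivity never exceeds 0.68 in magnitude.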

Relation (2.99) gives an estimate of the first-order sensitivity function that is uniform over the set Γ̃. In a similar way, transforming the differential equation (2.95) into an integral one, we obtain

    U^(2) = ∫_{t0}^{t} [(∂F/∂Y) U^(2) + R2] dt + U0^(2).    (2.100)

Then, as before, we derive the integral inequality

    |U^(2)| ≤ ∫_{t̄0}^{t} (a|U^(2)| + b1) dt + c1,    (2.101)

where b1 is an upper estimate of the norm of the function R2, obtained with the help of (2.99),

    c1 = max_{α∈Γ̃} |U0^(2)|,    (2.102)

and the values a and t̄0 are the same as in (2.97).


From (2.101) follows the desired estimate of the second-order sensitivity function:

    |U^(2)| ≤ (c1 + b1/a) e^{a(t−t̄0)} − b1/a,    t, α ∈ Γ̃.    (2.103)

Let us return to Equation (2.93). Considering it in the closed area Γ̃ and using (2.103), we obtain

    ∆^(1) ≤ (1/2) max_{α0+µ, t ∈ Γ̃} [(c1 + b1/a) e^{a(t−t̄0)} − b1/a] µ².    (2.104)

Similarly, for a fixed set Γ̃, we can derive an error estimate for an approximation of arbitrary order. For this purpose Equation (2.72) can be used, which yields

    |Z(t, µ) − Z^(n)(t, µ)| ≤ (1/(n+1)!) max_ϑ |U^(n+1)(t, α0 + ϑµ)| µ^(n+1).    (2.105)

The value of the right side of (2.105) can be estimated on the basis of the sensitivity equations using the recurrent technique described above. In principle, this estimation method can be used without essential modifications in multiparameter problems. Another estimation technique, based on Lyapunov's method, is presented in the next section.
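The µ^(n+1) scaling of the remainder in (2.105) can be checked on the hypothetical family y(t, α) = exp(−αt), whose k-th sensitivity function is U^(k) = (−t)^k·exp(−αt): halving µ should shrink the error of the n-th approximation (2.69) by roughly 2^(n+1).

```python
import math

# Hypothetical family y(t, alpha) = exp(-alpha*t); its k-th sensitivity
# function is U^(k) = (-t)^k * exp(-alpha*t), so the n-th approximation
# (2.69) and the remainder bound (2.105) can be checked directly.
t, alpha0, n = 1.0, 0.5, 2

def approx(mu):
    return sum((-t) ** k * math.exp(-alpha0 * t) * mu ** k / math.factorial(k)
               for k in range(n + 1))

def error(mu):
    return abs(math.exp(-(alpha0 + mu) * t) - approx(mu))

# Halving mu should shrink the error by about 2^(n+1) = 8,
# in line with the mu^(n+1) remainder in (2.105).
ratio = error(0.02) / error(0.01)
print(ratio)
```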

2.2  Second Lyapunov's Method in Sensitivity Theory

2.2.1  Norms of Finite-Dimensional Vectors and Matrices

DEFINITION 2.1  A norm of a column vector Y with real components y1, ..., yn is a number |Y| satisfying the following conditions:

• |Y| > 0 for Y ≠ O,  |O| = 0,    (2.106)

where O is the zero column vector;

• |cY| = |c| |Y|,    (2.107)

where c is a scalar multiplier;

• |Y1 + Y2| ≤ |Y1| + |Y2|.    (2.108)

Relation (2.108) is called the triangle inequality. The inequality

    |Y| ≤ r,    r = const,    (2.109)

defines an n-dimensional ball Γr of radius r in n-dimensional space. As follows from Axioms (2.106)-(2.108), the "ball" is a convex centrally symmetric body that contains the vector −Y together with Y, and the vector cY1 + (1 − c)Y2 (where 0 ≤ c ≤ 1) together with Y1 and Y2. It can be shown [111] that, conversely, any centrally symmetric convex body V generates a norm |Y|V defined by the relation

    |Y|V = inf t,    (1/t)Y ∈ V.    (2.110)

The surface

    |Y| = 1    (2.111)

is called the n-dimensional unit sphere. The geometric shape of the unit sphere varies with the choice of the initial norm | · |. For example, taking

    |Y| = |Y|1 = max_i |yi|,    (2.112)

we find that the set of vectors satisfying Condition (2.111) forms the surface of the unit cube

    −1 ≤ y1 ≤ 1, ..., −1 ≤ yn ≤ 1.    (2.113)

Such a norm is called cubic. If we take

    |Y| = |Y|3 = √(Σ_{i=1}^{n} yi²),    (2.114)

Equation (2.111) defines a unit sphere; therefore, the norm (2.114) is called spherical. For different r, the equations

    |Y| = r    (2.115)

define a set of spheres Sr embedded into each other, which fill the whole space.

DEFINITION 2.2  The operator norm of a finite-dimensional square matrix

    A = ‖aik‖,    i, k = 1, ..., n,    (2.116)

associated with a given vector norm | · | is a number |A| such that | A |= max | AY |

(2.117)

|Y |=1

It can be shown that for the vector norm (2.112) the associated operator norm is defined by the relation | A |1 = max i

n 

| aik | .

(2.118)

k=1

For the spherical norm (2.114) the operator norm of a matrix A is given by | A |3 =



λ,

(2.119)

where λ is the maximal eigenvalue of the matrix AT A, where “T ” denotes the transpose of a matrix. Hereinafter by a “norm of matrix” we will implicitly mean some operator norm. An arbitrary matrix norm satisfy the conditions | A |> 0

where

A = O,

| O |= 0

(2.120)

where O is a zero matrix and | cA |=| c | A |,

c = const,

(2.121)

| A + B |≤| A | + | B |,

(2.122)

| AB |≤| A || B | .

(2.123)

Norms of finite-dimensional vectors and matrices possess an important equivalency property. Let | · |1 and | · |2 be arbitrary norm of vector. Then, there are positive constants µ1 and µ2 independent of Y such that µ1 | Y |1 ≤| Y2 |≤ µ2 | Y |1 .

(2.124)

A similar relation holds for two arbitrary norms of finite-dimensional matrix.
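As an illustration, the defining axioms (2.106)–(2.108), the operator norms (2.118) and (2.119), and the equivalence property (2.124) can be checked numerically. The following Python/NumPy sketch is illustrative only; all function names are ours:

```python
import numpy as np

# Cubic norm (2.112) and spherical norm (2.114) of a vector.
def cubic_norm(y):
    return np.max(np.abs(y))

def spherical_norm(y):
    return np.sqrt(np.sum(y**2))

# Associated operator norms: (2.118) is the maximal row sum,
# (2.119) is the square root of the largest eigenvalue of A^T A.
def op_norm_cubic(A):
    return np.max(np.sum(np.abs(A), axis=1))

def op_norm_spherical(A):
    return np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
Y1, Y2 = rng.standard_normal(4), rng.standard_normal(4)

# Axioms (2.106)-(2.108): positivity, homogeneity, triangle inequality.
for norm in (cubic_norm, spherical_norm):
    assert norm(Y1) > 0 and norm(0 * Y1) == 0
    assert np.isclose(norm(-2.5 * Y1), 2.5 * norm(Y1))
    assert norm(Y1 + Y2) <= norm(Y1) + norm(Y2) + 1e-12

# Submultiplicativity (2.123) of the operator norms.
assert op_norm_cubic(A @ B) <= op_norm_cubic(A) * op_norm_cubic(B) + 1e-12
assert op_norm_spherical(A @ B) <= op_norm_spherical(A) * op_norm_spherical(B) + 1e-12

# Equivalence (2.124): |Y|_1 <= |Y|_3 <= sqrt(n) |Y|_1 for this pair of norms.
n = len(Y1)
assert cubic_norm(Y1) <= spherical_norm(Y1) <= np.sqrt(n) * cubic_norm(Y1)
```

For the pair (2.112), (2.114) the equivalence constants in (2.124) can be taken as µ1 = 1 and µ2 = √n, which is what the last assertion checks.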


2.2.2

Functions of Constant and Definite Sign

DEFINITION 2.3 A single-valued continuous function v(Y) of the variables y1, ..., yn is called a function of constant sign in a simply connected area Γ of the space Y if the following conditions hold:

    v(O) = 0,  v(Y) ≥ 0,  Y ∈ Γ.    (2.125)

DEFINITION 2.4 A function v(Y) is called positive definite in an area Γ if

    v(O) = 0,  v(Y) > 0 for Y ≠ O,  Y ∈ Γ.    (2.126)

DEFINITION 2.5 A function v(Y) is called negative definite in an area Γ if the function −v(Y) is positive definite.

Below we consider some general properties of functions of definite sign that are needed further.

PROPOSITION 2.3 Let a continuous and single-valued function v(Y) be positive definite in a closed area Γ. Let a number l be defined by the relation

    l = min_{Y ∈ S_Γ} v(Y) = l(Γ),    (2.127)

where S_Γ is the boundary of Γ. Then, the equation

    v(Y) = c = const < l(Γ)    (2.128)

determines a family of closed surfaces embedded into each other, which belong to Γ and contain the origin. Moreover, for c1 < c2 < l(Γ) the surface v(Y) = c1 is inside the surface v(Y) = c2.

PROOF Draw an arbitrary continuous curve L from the origin to the boundary S_Γ. Along this curve, the function v(Y) takes values in the interval from 0 to l̄ ≥ l(Γ). Therefore, at some point of the curve L the function v(Y) takes any given value c < l(Γ), i.e., the curve L intersects all the surfaces (2.128), and all these surfaces are closed. Since the function v(Y) is single-valued, the surfaces (2.128) do not intersect. Moreover, if c1 < c2 < l(Γ), the value v = c1 is encountered along the curve L earlier than v = c2, i.e., the surface v(Y) = c1 is inside the surface v(Y) = c2.

PROPOSITION 2.4 Let a continuous and single-valued function v(Y) be positive definite inside a sphere Sr (2.115). Let a number l(r) be given by the relation

    l(r) = min_{|Y|=r} v(Y).    (2.129)

Then, the equation

    v(Y) = c = const,    (2.130)

where c < l(r), defines a set of closed surfaces embedded into each other, which are inside Sr and cover the origin. Moreover, for c1 < c2 < l(r) the surface v(Y) = c1 is inside the surface v(Y) = c2.

PROOF The proof of this proposition follows from Proposition 2.3 if we take the area Γ in the form of the ball Γr. For c = 0 the surface (2.130) degenerates into the origin.

PROPOSITION 2.5 Let a surface w(Y) = ρ be closed and contain the origin, and let a number η(ρ) be defined by

    η(ρ) = min_{w=ρ} |Y|,    (2.131)

where |·| is a vector norm. Then, the sphere |Y| = r < η(ρ) is inside the surface w(Y) = ρ.

PROOF This proposition also follows from Proposition 2.3, because any norm |Y| is a positive definite function of its arguments.

DEFINITION 2.6 A function v(Y) is called infinitely large if for any positive number a there is r(a) such that outside the sphere |Y| = r(a) we have v(Y) > a.

In [6] it is shown that for an infinitely large positive definite function v(Y) the surfaces (2.130) fill the whole space and are closed for all c. Indeed, taking a sufficiently large r in (2.129), we can ensure the inequality c < l(r), which proves the claim. From the aforesaid, it follows that there is a definite link between norms and positive definite functions. Any norm is an infinitely large positive definite function. The inverse proposition is, generally speaking, false. An infinitely large positive definite function v(Y) defines a norm in the space of variables y1, ..., yn if its level surfaces (2.130) are convex and centrally symmetric.

2.2.3

Time-Dependent Functions of Constant and Definite Sign

DEFINITION 2.7 A continuous and single-valued function v(Y, t), where t is a scalar parameter, is called positive definite on the interval It (t1 ≤ t ≤ t2) in an area Γ of the space Y if

    v(O, t) = 0,  v(Y, t) ≥ v(Y),  Y ∈ Γ,  t ∈ It,    (2.132)

where v(Y) is a positive definite function in Γ independent of t.

Time-dependent positive definite functions admit a geometric description similar to that given in the previous section. Let Sr be a sphere with radius r in the area Γ, and let a number l(r) be defined by (2.129). Consider the equation

    v(Y, t) = c < l(r).    (2.133)

For various t's, Equation (2.133) defines a closed surface in the space of variables Y covering the origin, which is deformed with a change of t. Let us show that for all t ∈ It the surface (2.133) is inside the surface v(Y) = c and, therefore, inside the sphere Sr. Indeed, if v(Ȳ) = c, then due to (2.132) v(Ȳ, t) ≥ c and the point Ȳ is outside the surface v(Y, t) = c or on it. Thus, from the condition

    v(Ȳ, t) ≤ c < l(r),  t ∈ It,    (2.134)

it follows that the point Ȳ is inside the closed area of the space of variables Y bounded by the surface v(Y) = c.

DEFINITION 2.8 If, instead of (2.132), the weaker conditions hold:

    v(O, t) = 0,  v(Y, t) ≥ 0,  Y ∈ Γ,  t ∈ It,    (2.135)

the function v(Y, t) is called a positive function of constant sign.

DEFINITION 2.9 A function v(Y, t) is called negative definite (or a negative function of constant sign) if the function −v(Y, t) is positive definite (or a positive function of constant sign, respectively).

2.2.4

Lyapunov’s Principle

Let us have a system of differential equations

    dyi/dt = fi(y1, ..., yn, t),  i = 1, ..., n,    (2.136)

where the right sides are continuous with respect to all arguments and satisfy conditions of existence and uniqueness of solution inside a sphere Sr on the interval It: t1 ≤ t ≤ t2. It is assumed that Equation (2.136) has the trivial solution

    yi = 0,  i = 1, ..., n,    (2.137)

for which it is necessary and sufficient that

    fi(0, ..., 0, t) = 0,  i = 1, ..., n.    (2.138)

Let v(Y, t) = v(y1, ..., yn, t) be a positive definite function for Y ∈ Sr and t ∈ It that is continuously differentiable with respect to all arguments. As is known, the function

    v̇(Y, t) = dv/dt = ∂v/∂t + Σ_{i=1}^{n} (∂v/∂yi) fi(Y, t)    (2.139)

is called the complete derivative of the function v(Y, t) along Equation (2.136). If we rewrite (2.136) in vector form as

    dY/dt = F(Y, t),    (2.140)

the complete derivative can be represented in a more compact form:

    v̇ = dv/dt = ∂v/∂t + (∂v/∂Y · F),    (2.141)

where (·) denotes the scalar product. Analyzing the properties of the functions v(Y, t) and v̇(Y, t) often makes it possible to obtain important information about the properties of solutions of (2.140) without integrating the latter. For the present book, the following proposition, called Lyapunov's principle, is very important.

PROPOSITION 2.6 [LYAPUNOV'S PRINCIPLE] Let, for Y ∈ Γr and t ∈ It, a function v(Y, t) be positive definite, and let its derivative satisfy the relations

    v̇(Y, t) ≤ 0,  v̇(0, t) = 0,    (2.142)

i.e., the function v̇(Y, t) is a negative function of constant sign. Let also v(Y) be the function appearing in (2.132), and

    l(r) = min_{|Y|=r} v(Y).    (2.143)

Then, for any solution Y(t) of Equation (2.140) such that Y(t0) = Y0, Y0 ∈ Γr, t0 ∈ It, from the condition

    v(Y0, t0) = c < l(r)    (2.144)

follows the inequality

    v(Y(t), t) ≤ c,  t0 ≤ t ≤ t2,    (2.145)

and for all t0 ≤ t ≤ t2 the solution Y(t) is inside the bounded area v(Y) ≤ c.

The function v(Y, t) will hereinafter be called a Lyapunov function. The above proposition, given without proof, is the basis of numerous investigations in stability theory and adjacent fields [5, 53, 62]. It is noteworthy that Relation (2.145) is a specific estimate of the norm of solutions, because it implies that the solution Y(t) for all t0 ≤ t ≤ t2 stays inside a bounded area in the sphere Sr. If condition (2.132) holds for all t ≥ t0, Relation (2.145) holds also for all t ≥ t0. In this case, the trivial solution (2.137) is Lyapunov stable [5, 62], i.e., for any ε > 0 there is a number η(ε) > 0 such that from the condition |Y0| < η(ε) it follows that |Y(t)| < ε for all t ≥ t0. Given a number ε (ε < r), the number η(ε) can be found in the following way.

1. Given ε, find

    l(ε) = min_{|Y|=ε} v(Y).    (2.146)

2. With l(ε), find

    η(ε) = min_{v(Y,t0)=l(ε)} |Y|.    (2.147)

The values l(ε) and η(ε) depend on the choice of the norm |·|. Geometric constructions associated with (2.146) and (2.147) are given in Figure 2.1.

Figure 2.1 Geometric interpretation

It can easily be seen that the relation |Y(t)| < ε actually follows already from the condition v(Y0, t0) < l(ε).
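The two-step procedure (2.146)–(2.147) can be sketched numerically for a time-invariant quadratic Lyapunov function v(Y) = YᵀHY and the spherical norm, for which both minima have closed forms; the matrix H below is an arbitrary illustrative choice:

```python
import numpy as np

# Time-invariant quadratic Lyapunov function v(Y) = Y^T H Y with H > 0.
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
lam_min, lam_max = np.linalg.eigvalsh(H)  # eigenvalues in ascending order

def v(Y):
    return Y @ H @ Y

eps = 0.3

# Step 1 (2.146): l(eps) = min v(Y) over the sphere |Y| = eps.
# For a quadratic form this minimum equals eps^2 * lambda_min(H).
l_eps = eps**2 * lam_min

# Step 2 (2.147): eta(eps) = min |Y| over the level set v(Y) = l(eps),
# which for a quadratic form equals sqrt(l(eps) / lambda_max(H)).
eta_eps = np.sqrt(l_eps / lam_max)

# Cross-check the step-1 minimum by brute force over the sphere.
theta = np.linspace(0.0, 2 * np.pi, 20001)
circle = eps * np.stack([np.cos(theta), np.sin(theta)], axis=1)
vals = np.einsum("ni,ij,nj->n", circle, H, circle)
assert np.isclose(vals.min(), l_eps, rtol=1e-3)

# Any start with |Y0| < eta(eps) has v(Y0) < l(eps); hence along any
# trajectory with non-increasing v we keep |Y(t)| < eps.
Y0 = 0.99 * eta_eps * np.array([np.cos(1.0), np.sin(1.0)])
assert v(Y0) < l_eps
```

Note that η(ε) = ε√(λmin/λmax) < ε here, in agreement with the geometric picture of Figure 2.1: the level set v = l(ε) touches the sphere |Y| = ε from inside.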

2.2.5

Norm of Additional Motion

We will demonstrate the possibility of applying Lyapunov's principle to the evaluation of the norm of additional motion. Let the equation of additional motion have the form (2.79):

    dZ/dt = F̃(Z, t, µ),    (2.148)

which has the trivial solution Z = 0, µ = 0 by construction.

THEOREM 2.9 Let a function ṽ(Z, µ, t) satisfy the following conditions for all µ ∈ Iµ: −µ̃ ≤ µ ≤ µ̃, µ̃ > 0, and t ∈ It inside the ball Γr of the space Z:

    ṽ(Z, µ, t) ≥ v(Z),  ṽ(O, 0, t) = 0,    (2.149)

where v(Z) is a positive definite function, and

    v̇̃ = ∂ṽ/∂t + (∂ṽ/∂Z · F̃) ≤ 0.    (2.150)

Then, if the number l(r) is chosen according to (2.129), from

    Z0 ∈ Γr,  t0 ∈ It,  µ ∈ Iµ,  ṽ(Z0, µ, t0) = c < l(r)    (2.151)

follows the inequality

    v(Z) ≤ ṽ(Z, µ, t) ≤ c,  t ∈ It.    (2.152)

PROOF Consider the function

    ω(Z, µ, t) = ṽ(Z, µ, t) + aµ²/2,  a = const > 0.    (2.153)

As follows from (2.149), the function (2.153) is positive definite in a cylindrical area Γ̃ of the (n+1)-dimensional space of variables (Z, µ) given by

    Z ∈ Γr,  −µ̃ ≤ µ ≤ µ̃.    (2.154)

The area (2.154) can be considered as a "unit sphere" in the space of the n+1 variables (Z, µ), where the norm of the (n+1)-dimensional vector (Z, µ) is defined by

    |(Z, µ)| = max { |Z|/r, |µ|/µ̃ },    (2.155)

where |Z| is a chosen norm of the vector Z in the n-dimensional space of variables z1, ..., zn. Write Equation (2.148) as a system of (n+1)-th order

    dµ/dt = 0,  dZ/dt = F̃(Z, µ, t)    (2.156)

and calculate the derivative dω/dt of the function (2.153) along (2.156). Obviously,

    dω/dt = v̇̃.    (2.157)

As follows from (2.150), the derivative (2.157) is negative with constant sign. Then, from Lyapunov's principle it follows that if the number l̃ is chosen according to

    l̃ = min_{|(Z,µ)|=1} [ v(Z) + aµ²/2 ],    (2.158)

due to (2.144) and (2.145) we obtain

    v(Z) ≤ ṽ(Z(t), µ, t) ≤ ṽ(Z0, µ, t0)    (2.159)

if

    ṽ(Z0, µ, t0) ≤ l̃.    (2.160)

Notice that the number a > 0 in (2.153) can be taken as small as necessary. Therefore, for a → 0 condition (2.160) is replaced by a weaker one:

    ṽ(Z0, µ, t0) < l(r),    (2.161)

where l(r) = min_{|Z|=r} v(Z).

It is noteworthy that the above theorem does not depend on a specific form of Equation (2.148).

2.2.6

Parametric Stability

If the conditions of Theorem 2.9 hold for all t ≥ t0, then for the system (2.156) there is a function of constant sign of the n+1 variables, (2.153), whose derivative is a negative function of constant sign. Hence, the trivial solution Z = 0, µ = 0 of the system (2.156) is Lyapunov stable, i.e., if |(Z, µ)| is any norm of the (n+1)-dimensional vector (Z, µ), for any ε > 0 there is a number η(ε) > 0 such that the condition |(Z0, µ)| < η(ε) yields |(Z(t), µ)| < ε for all t ≥ t0. If this property holds, the initial equation (2.148) will be called parametrically stable. Thus, the solution Z = 0, µ = 0 is parametrically stable by definition if the trivial solution of the system (2.156) is Lyapunov stable.

Using the equivalence of norms of finite-dimensional vectors, it can easily be proven that parametric stability is independent of the choice of the norm |(Z, µ)|: if parametric stability takes place for one norm |(Z, µ)|₁, the same is valid for any other norm |(Z, µ)|₂.

Let us note that parametric stability of Equation (2.148) implies Lyapunov stability of its trivial solution Z = 0 for µ = 0, i.e., Lyapunov stability of the zero solution of the equation

    Ż = F̃(Z, 0, t).    (2.162)

The inverse proposition is, in the general case, false. Indeed, let a norm |(Z, µ)| be defined by (2.155). Then, if parametric stability takes place in an area (2.154), from the condition |(Z0, µ)| < η(ε) it follows that |(Z(t), µ)| < ε for all t ≥ t0. Hence, for Equation (2.148) with µ = 0, from |Z0| < rη(ε) it follows that |Z(t)| < rε for all t ≥ t0, i.e., Lyapunov stability takes place. Let us give an example demonstrating that parametric stability of Equation (2.148) does not follow even from asymptotic stability of the trivial solution of (2.162).

Example 2.2 Let the equation of additional motion (2.148) have the form

    dz/dt = µz + az³,  t0 > 0,  a = const < 0.    (2.163)

It can easily be verified by direct calculation that for µ = 0 we have asymptotic Lyapunov stability. On the other hand, for any µ > 0 we have instability. Indeed, for µ > 0 the linear approximation dz̃/dt = µz̃ is unstable; then, by Lyapunov's theorem on stability in the first approximation, the trivial solution of Equation (2.163) is unstable.

Note that from the physical viewpoint the notion of parametric stability, as applied to Equation (2.148), is much more meaningful than Lyapunov stability, i.e., stability with respect to initial data. According to Chapter 1, variation of the parameter µ in (2.148) can be caused by variation of design parameters and operating conditions, or by exogenous disturbances. Therefore, it is natural that to ensure parametric stability it is necessary to meet much stronger restrictions than to provide Lyapunov stability for µ = 0.
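A rough numerical check of Example 2.2 can be obtained by integrating (2.163) directly; the sketch below (step size and time horizon are ad hoc choices) shows the decay for µ = 0 and the loss of stability of z = 0 for µ > 0:

```python
import numpy as np

A = -1.0  # a < 0 in (2.163): the cubic term is stabilizing

def rhs(z, mu):
    # Equation (2.163): dz/dt = mu*z + a*z^3
    return mu * z + A * z**3

def integrate(z0, mu, t_end=200.0, dt=1e-2):
    # Classical 4th-order Runge-Kutta march (autonomous scalar equation).
    z = z0
    for _ in range(int(t_end / dt)):
        k1 = rhs(z, mu)
        k2 = rhs(z + 0.5 * dt * k1, mu)
        k3 = rhs(z + 0.5 * dt * k2, mu)
        k4 = rhs(z + dt * k3, mu)
        z += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return z

# mu = 0: dz/dt = a*z^3 with a < 0 -- asymptotic stability, z decays to 0.
assert abs(integrate(0.5, mu=0.0, t_end=400.0)) < 0.05

# mu > 0: z = 0 repels; a tiny z0 is driven away from zero toward the
# nonzero equilibrium sqrt(mu/|a|), however small z0 is.
z0, mu = 1e-3, 0.05
z_final = integrate(z0, mu)
assert abs(z_final) > 100 * z0
print(abs(z_final), np.sqrt(mu / abs(A)))  # both approximately 0.2236
```

For µ = 0 the decay is only algebraic (z(t) = z0/√(1 + 2z0²t) for a = −1), which is why a long horizon is needed; for µ > 0 the trajectory leaves any small neighborhood of z = 0, confirming the instability claimed in the text.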

2.2.7

General Investigation Method

As was shown by examples, Theorem 2.9 in many cases does not provide constructive results for estimation of the norm of additional motions and investigation of parametric stability. In the present section we derive another, more general, theorem, which allows one to extend the class of problems under consideration.


THEOREM 2.10 Let there exist for Equation (2.148) a function ṽ(Z, µ, t) that satisfies the following conditions for all t ∈ It, µ ∈ Iµ inside a ball Γr in the space Z:

1. ṽ(Z, µ, t) ≥ v(Z),  ṽ(0, µ, t) = 0,    (2.164)

where v(Z) is a positive definite function.

2. If the number l(r) is defined by (2.129), for any c < l(r) there exists µ̄(c) > 0 such that

    v̇̃ |ṽ=c = [ ∂ṽ/∂t + (∂ṽ/∂Z · F̃) ] |ṽ=c < 0.

…and for any solution Y(t, µ) of Equation (2.172) with v(Y0) = c and |µ| < µ̃(c) we have

    v(Y(t)) ≤ v(Y0) = c.    (2.197)

The above calculations are greatly simplified if B = 0. In this case, this approach, using Lyapunov's method, gives an estimate of the influence of a disturbance on a linear system.

Example 2.3 Consider the simplest first-order equation

    dy/dt = −ay + µby + µf(t),  |f(t)| ≤ h,    (2.198)

where a > 0, b, and h are constants. Let

    v(y) = (1/2)y².    (2.199)

The derivative with due account for Equation (2.198) has the form

    dv/dt = −ay² + µby² + µf(t)y.    (2.200)

If v = c, from (2.200) we obtain

    dv/dt |v=c ≤ √(2c) |µ| (b√(2c) + h) − 2ac.    (2.201)

If b√(2c) + h > 0, we can take

    µ̄(c) = a√(2c) / (h + b√(2c)),    (2.202)

and for y0² < 2c, b√(2c) + h > 0, |µ| < µ̄(c),

    y²(µ, t) < 2c.    (2.203)

If b√(2c) + h ≤ 0, Relation (2.203) holds for any µ.
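The estimate of Example 2.3 can be verified by direct simulation; in the sketch below f(t) = h sin t is one admissible disturbance satisfying |f(t)| ≤ h, and all numeric values are illustrative:

```python
import numpy as np

a, b, h = 1.0, 0.5, 1.0       # a > 0; f(t) = h*sin(t) satisfies |f| <= h
c = 0.5                       # level of v = y^2/2; the bound is y^2 < 2c

# (2.202): admissible parameter range guaranteeing y^2(t) < 2c.
mu_bar = a * np.sqrt(2 * c) / (h + b * np.sqrt(2 * c))

def simulate(y0, mu, t_end=50.0, dt=1e-3):
    # Forward-Euler march of dy/dt = -a y + mu b y + mu f(t),
    # tracking the largest value of y^2 along the trajectory.
    y, t, y_max2 = y0, 0.0, y0**2
    for _ in range(int(t_end / dt)):
        y += dt * (-a * y + mu * b * y + mu * h * np.sin(t))
        t += dt
        y_max2 = max(y_max2, y**2)
    return y_max2

y0 = 0.9 * np.sqrt(2 * c)     # y0^2 < 2c
mu = 0.9 * mu_bar             # |mu| < mu_bar(c)

# The Lyapunov estimate guarantees the trajectory never leaves y^2 < 2c.
assert simulate(y0, mu) < 2 * c
```

With these numbers µ̄(c) = 2/3, and the simulated trajectory indeed stays strictly inside the level set v = c, as (2.203) predicts.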

2.3

Sensitivity on Infinite Time Intervals

2.3.1

Statement of the Problem

Using the classical theorems given in Section 2.1, one can justify the validity of the approximate representation of additional motion via sensitivity functions (2.73) and (2.74) on sufficiently small finite time intervals. Though it is possible, in principle, to estimate such time intervals, the procedure is very difficult in practice. Therefore, for applied problems, it is useful to derive conditions under which Formulas (2.73) and (2.74) remain valid for all t ≥ t0, where t0 is a constant. If this is the case, it is not necessary to estimate the scope of the corresponding time intervals. Below we present a solution of this problem based on Theorem 2.10 [89].

Let the vector equation of additional motion have the form (2.79):

    dZ/dt = F̃(Z, t, µ),    (2.204)

where the initial solution Y = Y(t, α) is associated with the trivial solution of Equation (2.204)

    Z = 0,  µ = 0.    (2.205)

The initial family of solutions Y(t, α) is associated with a family Z(t, µ) of additional motions with initial conditions (2.83)

    Z(t0, µ) = Y(t0, α0 + µ) − Y(t0, α0),  t0 = t0(α0 + µ) = t0(µ).    (2.206)

By construction, the family of solutions (2.206) includes the trivial solution (2.205); therefore, we must have Z(t0, 0) = 0.

Let

    U(t) = U(t, α0) = ∂Y(t, α)/∂α |α=α0    (2.207)

be the sensitivity function of the family of solutions under consideration. A widely employed applied method of investigating additional motion consists in the approximate substitution of additional motion by the first approximation

    Z(t, µ) ≈ U(t, α0)µ.    (2.208)

As follows from Section 2.1.7, under conditions of existence of the second-order sensitivity functions, for a sufficiently small |µ| Relation (2.208) holds for any fixed t with an arbitrary precision. Indeed, let

    Z(t, µ) = U(t)µ + V(t, µ).    (2.209)

As follows from (2.66), under conditions of existence of the second-order sensitivity functions we have

    V(t, µ) = (1/2) U^(2)(t, α0 + ϑµ)µ²,  0 < ϑ < 1.    (2.210)

Therefore, for any fixed t,

    lim_{µ→0} |V(t, µ)| = 0.    (2.211)

Nevertheless, validity of the approximate equality (2.208) for any fixed t does not guarantee its applicability for all t ≥ t0, because it may happen that the admissible value of |µ| tends to zero as t increases. In order that the representation (2.208) be valid for all t ≥ t0, it is sufficient that Relation (2.211) hold uniformly with respect to t for all t ≥ t0, i.e., that for any ε > 0 there exist µ(ε) > 0 such that

    |V(t, µ)| < ε  for  |µ| < µ(ε),  t0 ≤ t < ∞.    (2.212)

If conditions (2.212) hold, from (2.209) we have, uniformly with respect to t,

    |Z(t, µ) − U(t)µ| < ε  while  |µ| < µ(ε),  t ≥ t0.    (2.213)

Thus, the problem of finding sufficient conditions of applicability of the approximate representation on an infinite time interval reduces to obtaining sufficient conditions for the validity of (2.212) and (2.213). As will be shown below, the solution of this problem is connected with parametric stability of some auxiliary system of equations.

The above problem can be generalized onto the case of an approximate n-th order representation of additional motions. Assume that there exist continuous sensitivity functions up to the (n+1)-th order inclusive. Let, with account for (2.66) and (2.71),

    Z(t, µ) − ∆^(n)Y(t, µ) = V^(n)(t, µ).    (2.214)

The approximate representation

    Z(t, µ) ≈ ∆^(n)Y(t, µ),  t ≥ t0,    (2.215)

holds if for any ε > 0 there is µn(ε) > 0 such that |V^(n)(t, µ)| < ε for |µ| < µn(ε), t ≥ t0.

2.3.2

…for any ε > 0 there is η(ε) > 0 such that from the condition |Z0, µ| < η(ε) follows |Z(t), µ| < ε.

PROOF The existence of the trivial solution Z = 0, µ = 0 follows immediately from (2.217), because from (2.221) we have G(Z, t, 0) = 0. Then, we prove parametric stability, i.e., Lyapunov stability of the trivial solution Z = 0, µ = 0. As follows from conditions (2.218) and (2.220) [5, 62], there exists a quadratic form

    v(Z, t) = Zᵀ H(t) Z    (2.222)

(where H(t) is a symmetric time-dependent square matrix) such that

    dv/dt = ∂v(Z, t)/∂t + (∂v(Z, t)/∂Z · A(t)Z) = −ZᵀZ = −|Z|²,    (2.223)

where

    |Z| = √(ZᵀZ).    (2.224)

Moreover, there are constants 0 < k₁ < k₂ such that

    k₁ ZᵀZ ≤ v(Z, t) ≤ k₂ ZᵀZ,  t ≥ t0.    (2.225)

Conditions (2.225) mean that the quadratic form v(Z, t) is positive definite and bounded with respect to t for t ≥ t0. As follows from (2.225),

    c/k₂ ≤ ZᵀZ = |Z|² ≤ c/k₁  for  v(Z, t) = c.    (2.226)

Then, we calculate the derivative dv/dt of the positive definite function v(Z, t) along the system (2.217). Obviously,

    dv/dt = ∂v/∂t + (∂v/∂Z · A(t)Z) + (∂v/∂Z · G(Z, t, µ)).    (2.227)

With due account for (2.223) we obtain

    dv/dt = −ZᵀZ + (∂v/∂Z · G(Z, t, µ)).    (2.228)

Estimating the second term by a norm yields

    dv/dt ≤ −ZᵀZ + |∂v/∂Z| |G(Z, t, µ)|.    (2.229)

As follows from (2.225), the matrix H(t) is bounded for all t ≥ t0; therefore

    |∂v/∂Z| ≤ k|Z|,  k = const,  t ≥ t0.    (2.230)

Using (2.230) and (2.221), from (2.229) we obtain

    dv/dt ≤ −ZᵀZ + k d₃ |µ|^ν₂ (|Z| + d₄)|Z|.    (2.231)

Now let v(Z, t) = c. Then, from (2.226) it follows that

    −ZᵀZ ≤ −c/k₂,  |Z| ≤ (c/k₁)^{1/2},    (2.232)

and (2.231) yields

    dv/dt |v=c ≤ −c/k₂ + k d₃ |µ|^ν₂ (c/k₁)^{1/2} [ (c/k₁)^{1/2} + d₄ ].    (2.233)

Let

    µ̄(c) = { c [ k k₂ d₃ (c/k₁)^{1/2} ( (c/k₁)^{1/2} + d₄ ) ]⁻¹ }^{1/ν₂}.    (2.234)

Obviously, µ̄(c) > 0, and dv/dt |v=c < 0 for |µ| < µ̄(c).

…for any ε > 0 there is µ(ε) > 0 such that for |µ| < µ(ε) we have

    |Z(t, µ) − U(t)µ| ≤ |µ|ε,  t ≥ t0.    (2.245)

PROOF First, we note that from the conditions (2.241)–(2.243) there follows the boundedness of all solutions of the sensitivity equation (2.239) for all t ≥ t0 [62]. Thus,

    |U(t)| ≤ d = const,  t ≥ t0.    (2.246)

Now make the substitution

    Z̃(t, µ) = (1/µ)(Z(t, µ) − U(t)µ)    (2.247)

in (2.236). As a result, we obtain an equation with respect to Z̃:

    dZ̃/dt = A(t)Z̃ + K(Z̃, t, µ),    (2.248)

where

    K(Z̃, t, µ) = G(µ(Z̃ + U(t)), t, µ) µ⁻¹.    (2.249)

Taking a definite norm |Z| in the n-dimensional space and defining a norm in the space (Z, µ) as

    |Z, µ| = |Z| + |µ|,    (2.250)

from the conditions (2.244), (2.246), and (2.249) we obtain

    |K(Z̃, t, µ)| ≤ d₄ |µ|^ν₂ (|Z̃| + d + 1).    (2.251)

Then, (2.251) yields

    K(Z̃, t, 0) = 0,    (2.252)

and Equation (2.248) has the trivial solution Z̃ = 0, µ = 0. Let us show that this solution is parametrically stable. Indeed, as follows from (2.241), (2.242), and (2.251), in the given case all the conditions of Theorem 2.11 hold. Thus, for any ε > 0 there is η(ε) > 0 such that if

    |Z̃0, µ| < η(ε),    (2.253)

then

    |Z̃(t), µ| < ε,  t ≥ t0.    (2.254)

Using (2.247) and (2.250), we find that if

    (1/|µ|)|Z0 − U0µ| + |µ| < η(ε),    (2.255)

then

    (1/|µ|)|Z(t) − U(t)µ| + |µ| < ε,  t ≥ t0,    (2.256)

and, obviously,

    |Z(t) − U(t)µ| < |µ|ε.    (2.257)

…there is µ(η(ε)) > 0 such that Relation (2.255) holds for all |µ| < µ(η(ε)). For such µ we have (2.257).

Let us analyze the sufficient conditions of applicability of the first approximation given by this theorem. Conditions (2.241) and (2.243) mean uniform boundedness of the matrix A(t) and the vector B(t) appearing in the sensitivity equation (2.239) with respect to t for all t ≥ t0. It is important that, due to (2.237), the matrix A(t) is defined only by the base solution Y(t, α0), while the vector B(t) depends on the choice of the parameter α in the initial equations. Therefore, including the base solution Y(t, α0) in various single-parameter families of solutions, we will obtain the same matrix A(t) but, generally speaking, different vectors B(t). Hence, for the same base motion the condition (2.243) can be true or false, depending on the choice of parameters.

As is known, the condition (2.242) imposed on the Cauchy matrix ensures asymptotic Lyapunov stability of the base solution for µ = 0 (α = α0). Since the matrix H(t, τ) is uniquely determined by the matrix A(t), the fulfillment of the condition (2.242) depends only on the base solution and is independent of the choice of the single-parameter family of solutions in which the base solution is included. It is known that if the matrix A(t) in (2.239) is constant,

    H(t, τ) = e^{A(t−τ)}    (2.259)

and the condition (2.242) reduces to the condition Re λρ < 0, ρ = 1, ..., n, where λρ are the roots of the characteristic equation

    det(A − λE) = 0.    (2.260)

Condition (2.244) means that the nonlinear terms in the equations of additional motion are bounded with respect to t and small with respect to µ, Z. Note that the conditions (2.241)–(2.243) can, in principle, be checked immediately by the sensitivity equation. To check the validity of (2.244) it is necessary to analyze the nonlinear terms in the equations of additional motion.

The above theorem can also be used to justify the applicability of the n-th approximation Z^(n)(t, µ) (2.71) on large time intervals. With this aim in view, introduce the variable

    Z̃^(n)(t, µ) = (1/µⁿ)(Z(t, µ) − Z^(n)(t, µ)).    (2.261)

Then, we obtain an equation of the form (2.248). The fulfillment of the conditions of the theorem will, in this case, guarantee applicability of the corresponding approximation.
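A minimal numerical illustration of the regular case, for the model equation dz/dt = −z − z³ + µ (our choice, not from the text): the stable constant matrix A = −1, the bounded term B = 1, and the higher-order term −z³ make the first approximation Z ≈ U(t)µ uniformly valid for all t ≥ t0, with a sup-error that shrinks rapidly as µ decreases:

```python
import numpy as np

# Additional motion: dz/dt = -z - z^3 + mu, z(0) = 0 (base solution y = 0).
# Sensitivity equation: du/dt = -u + 1, u(0) = 0, so u(t) = 1 - exp(-t).
# Conditions of the type (2.241)-(2.244) hold here: A = -1 is stable,
# B = 1 is bounded, and -z^3 is of higher order in z.
def sup_error(mu, t_end=40.0, dt=1e-3):
    # Forward-Euler march, tracking sup_t |z(t, mu) - u(t) * mu|.
    z, t, worst = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        z += dt * (-z - z**3 + mu)
        t += dt
        u = 1.0 - np.exp(-t)
        worst = max(worst, abs(z - u * mu))
    return worst

e1, e2 = sup_error(0.1), sup_error(0.05)

# Uniform validity: the sup-error is tiny compared to mu, and it shrinks
# roughly like mu^3 here, since the neglected term is -z^3.
assert e1 < 2e-3
assert e1 / e2 > 6.0
print(e1, e2)
```

Halving µ reduces the worst-case discrepancy by nearly a factor of eight, consistent with the neglected cubic term; in a singular case such as Example 2.4 below no such uniform bound exists.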

2.3.4

Classification of Special Cases

The case when all the conditions of Theorem 2.12 hold will be called nonsingular or regular. Various cases when these conditions are violated will be called singular. Using the analysis of the theorem given in Section 2.3.3, singular cases can be divided into two types. The first type includes problems for which one of the conditions (2.241)– (2.243) is violated. These conditions are characterized by the fact that their validity can be checked immediately by the sensitivity equations (2.239). Special cases of the second type are connected with violation of the condition (2.244) and depend on the form of the terms in the right side of equations of additional motion having order higher than one with respect to Z, µ.


It can be shown by examples that Relation (2.208) can become invalid if any of the conditions (2.241)–(2.243) is violated. For instance, this may happen if the conditions (2.241)–(2.242) hold while (2.243) and (2.244) do not. In this case, the base solution is Lyapunov stable, but the approximate Formula (2.208) is invalid on sufficiently large time intervals. Parameters for which the conditions (2.241)–(2.242) hold, but (2.243)–(2.244) are false, will be called singular. Let us show by examples the possibility of existence of singular parameters, when the base motion is asymptotically stable but Formula (2.208) does not give a uniform approximation on the whole interval t ≥ t0.

Example 2.4 Let the equation of additional motion have the form

    dz/dt + z = sin µt,  t ≥ 0.    (2.262)

The base solution is the trivial one:

    z = 0,  µ = 0.    (2.263)

Consider the family of periodic solutions of Equation (2.262)

    zp(t, µ) = (1/(µ² + 1)) sin µt − (µ/(µ² + 1)) cos µt,    (2.264)

which transforms into the trivial one for µ = 0. The sensitivity equation (2.239) in this case has the form

    du/dt + u = t,    (2.265)

so that

    A(t) = −1,  B(t) = t.    (2.266)

Moreover, it is easy to see that

    H(t, τ) = e^{−(t−τ)}.    (2.267)

Thus, in the case under consideration, the conditions (2.241) and (2.242) hold, but (2.243) does not. Condition (2.244) is also violated. Let us show that Formula (2.208) in this case does not give an approximation of additional motion on sufficiently large time intervals. Indeed, from (2.264) it follows that

    u(t) = ∂zp(t, µ)/∂µ |µ=0 = t − 1,    (2.268)

and the value u(t)µ is not bounded. Hence, the difference

    zp(t, µ) − u(t)µ    (2.269)

is not bounded, while all additional motions (2.264) are bounded for t ≥ t0.
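This failure of the first approximation is easy to reproduce numerically; the sketch below compares the bounded periodic solution (2.264) with the unbounded first approximation u(t)µ = (t − 1)µ from (2.268):

```python
import numpy as np

mu = 0.01
t = np.linspace(0.0, 400.0, 4001)

# Periodic additional motion (2.264) -- bounded for all t.
z_p = (np.sin(mu * t) - mu * np.cos(mu * t)) / (mu**2 + 1)

# First approximation u(t)*mu with u(t) = t - 1 from (2.268) -- unbounded.
approx = (t - 1.0) * mu

err = np.abs(z_p - approx)

# Good agreement on a short interval, complete failure for t of order 1/mu.
assert err[t <= 5.0].max() < 1e-3
assert err[-1] > 1.0
print(err[-1])
```

For t ≪ 1/µ both expressions behave like µ(t − 1) and the error is of higher order in µ, but once t becomes comparable with 1/µ the linear approximation diverges while the true motion stays bounded, which is exactly why µ (that is, µ2 in Example 2.6 below) is a singular parameter.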

Example 2.5 Analyzing in the same way the equation

    dz/dt + z = µ sin t,    (2.270)

we obtain the sensitivity equation

    du/dt + u = sin t.    (2.271)

It is easy to verify that the conditions of Theorem 2.12 hold, and the value u(t)µ gives the precise expression of the family of periodic solutions

    zp(t, µ) = (µ/2)(sin t − cos t).    (2.272)

Example 2.6 Consider the following equation depending on two parameters:

    dz/dt + z = µ1 sin µ2 t.    (2.273)

Fixing the parameter µ1, we obtain the situation considered in Example 2.4. Therefore, it is possible to claim that the parameter µ2 in Equation (2.273) is singular.


2.4

Sensitivity of Self-Oscillating Systems in Time Domain

2.4.1

Self-Oscillating Modes of Nonlinear Systems

Let us have an autonomous nonlinear system

    dY/dt = F(Y, α),    (2.274)

where Y and F are vectors, and α is a scalar parameter. Assume that the vector F is continuously differentiable with respect to all arguments. In many problems corresponding to real physical phenomena, Equation (2.274) has a family of isolated periodic solutions Yp(t, α) depending on α:

    Yp(t + T(α), α) = Yp(t, α),    (2.275)

where T = T(α) is the period of self-oscillation depending on the parameter. As is known, such solutions are called self-oscillatory. If the family of solutions (2.275) exists, it is associated with unknown initial conditions

    Yp(0, α) = Y0(α).    (2.276)

Let α = α0 be a parameter value (α1 ≤ α ≤ α2) that is associated with a specific self-oscillating mode Yp0(t) with period T0 = T(α0), so that

    Yp0(t + T0) = Yp0(t).    (2.277)

It is very important for applications to investigate the dependence of self-oscillating modes on the parameter α. In the present section we consider the possibilities of analyzing these relations on the basis of sensitivity theory. Such an approach must properly account for a number of essential peculiarities that need to be investigated specially. Let us enumerate some of them.

1. The problem of determining a self-oscillating mode on the basis of Equations (2.274) and the periodicity condition (2.275) is a boundary-value problem. In any special case it can have a different number of solutions or, as a special case, no solutions at all. Therefore, hereinafter we consider a specific isolated family of self-oscillating modes, assuming that it does exist.

2. Under the above assumptions, we postulate the existence of a single-parameter family of self-oscillating modes defined by the initial conditions (2.276). Nevertheless, it is incorrect, in the general case, to assume that the function Y0(α) is differentiable with respect to α; hence, it is incorrect to assume that there exist sensitivity functions even of the first order. In the present paragraph, however, it will be assumed that the initial conditions Y0(α) and the period of self-oscillation T(α) are continuously differentiable functions of the parameter. For many applied problems such an assumption is justified. Some general ideas on this topic will be presented below in the paragraph dealing with sensitivity of boundary-value problems.

3. If the vector Y0(α) is differentiable with respect to α, there is a sensitivity function

    Up(t, α) = ∂Yp(t, α)/∂α.    (2.278)

For α = α0 the corresponding sensitivity equation has the form

    dU/dt = A(t)U + B(t),    (2.279)

where

    A(t) = ∂F/∂Y |Y=Yp(t,α0),  B(t) = ∂F/∂α |Y=Yp(t,α0).    (2.280)

Since the vector Yp(t, α0) is periodic by construction, the matrix A(t) and the vector B(t) are also periodic with period T0. Thus, the sensitivity equation of a self-oscillating mode is a linear nonhomogeneous vector equation with a periodic coefficient matrix and a periodic nonhomogeneous term.

For further investigation of the properties of the problem under consideration we need some results from the theory of linear differential equations with periodic coefficients, which are given in the next paragraph.

2.4.2 Linear Differential Equations with Periodic Coefficients

Let us have a homogeneous equation
$$\frac{dU}{dt} = A(t)U, \qquad A(t) = A(t+T). \tag{2.281}$$

Denote by H(t) the square matrix satisfying Equation (2.281) and the initial condition H(0) = E, where E is the identity matrix of corresponding dimensions. It can be shown [42, 132] that the matrix H(t) can be written in the form
$$H(t) = D(t)e^{Nt}, \tag{2.282}$$
where the matrix D(t) is nonsingular and satisfies the conditions
$$D(t) = D(t+T), \qquad D(0) = E, \tag{2.283}$$
and the constant matrix N is given by
$$N = \frac{1}{T}\ln H(T) = \frac{1}{T}\ln M. \tag{2.284}$$
The matrix
$$M = H(T) = e^{NT} \tag{2.285}$$
is called the monodromy matrix. From (2.283) and (2.284) we have
$$H(t+T) = D(t+T)e^{N(t+T)} = H(t)e^{NT} = H(t)M. \tag{2.286}$$
From (2.282) we can derive a general expression for the Cauchy matrix of Equation (2.281):
$$H(t,\tau) = H(t)H^{-1}(\tau) = D(t)e^{N(t-\tau)}D^{-1}(\tau). \tag{2.287}$$

By the change of variable
$$U = D(t)V \tag{2.288}$$
Equation (2.281) transforms into the following equation with constant coefficients:
$$\frac{dV}{dt} = NV. \tag{2.289}$$
Using the variable V in the nonhomogeneous equation (2.279), we obtain
$$\frac{dV}{dt} = NV + \tilde B(t), \tag{2.290}$$
where
$$\tilde B(t) = D^{-1}(t)B(t) \tag{2.291}$$
is a periodic function. Consider the equation
$$\det(M - \rho E) = 0, \tag{2.292}$$

the roots of which, ρ1 , . . . , ρn , are called the multiplicators of Equation (2.281). The values
$$\lambda_k = \frac{1}{T}\ln\rho_k, \qquad k = 1,\dots,n, \tag{2.293}$$
are the characteristic indices of Equation (2.281). As is known [132], if ρm is a multiplicator, Equation (2.281) has a particular solution of the form
$$U(t) = e^{\lambda_m t}\tilde U(t), \qquad \tilde U(t) = \tilde U(t+T), \tag{2.294}$$
and, conversely, if Equation (2.281) has a particular solution of the form (2.294), the value
$$\rho_m = e^{\lambda_m T} \tag{2.295}$$
is a multiplicator. Hence, Equation (2.281) has a periodic solution
$$U(t) = \tilde U(t) = \tilde U(t+T) \tag{2.296}$$
if and only if at least one multiplicator is equal to 1. Consider the characteristic equation for (2.289):
$$\det(N - \lambda E) = 0. \tag{2.297}$$
As follows from (2.284), the roots λ1 , . . . , λn of Equation (2.297) are connected with the multiplicators by the relations
$$\lambda_m = \frac{1}{T}\ln\rho_m,$$
i.e., the roots of Equation (2.297) are the characteristic indices of Equation (2.281). Therefore, if Equation (2.281) has a periodic solution (2.296), at least one of the roots of the characteristic equation (2.297) is equal to $ki\omega = k\,\frac{2\pi i}{T}$, where k is an integer.


It can be shown that, for all solutions of Equation (2.281) to tend exponentially to zero as t → ∞, it is necessary and sufficient that the roots of Equation (2.297) satisfy the condition
$$\operatorname{Re}\lambda_i < 0, \qquad i = 1,\dots,n. \tag{2.298}$$
According to (2.295), this is equivalent to the requirement that all multiplicators lie inside the unit circle, i.e.,
$$|\rho_i| < 1, \qquad i = 1,\dots,n. \tag{2.299}$$
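The monodromy matrix (2.285) and the multiplicators (2.292) are easy to obtain numerically by integrating the matrix equation (2.281) over one period. The sketch below is not from the book; it uses the Mathieu equation u'' + (a − 2q cos 2t)u = 0 as a stand-in periodic system with arbitrarily chosen parameter values, and checks the product of the multiplicators against det M (here det M = 1, since tr A ≡ 0) as well as the relation H(t + T) = H(t)M from (2.286).

```python
import math
import cmath

A_PAR, Q = 2.5, 0.1          # hypothetical Mathieu parameters (stable region)
T = math.pi                  # period of the coefficients: A(t + pi) = A(t)

def A(t):
    # Mathieu equation u'' + (a - 2q cos 2t)u = 0 written as U' = A(t)U
    return ((0.0, 1.0), (-(A_PAR - 2.0*Q*math.cos(2.0*t)), 0.0))

def mul(X, Y):
    # 2x2 matrix product
    return tuple(tuple(sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def axpy(H, c, K):
    # H + c*K for 2x2 matrices
    return tuple(tuple(H[i][j] + c*K[i][j] for j in range(2)) for i in range(2))

def fundamental(t_end, n):
    # RK4 for the matrix equation dH/dt = A(t)H, H(0) = E  (cf. (2.281))
    H, h, t = ((1.0, 0.0), (0.0, 1.0)), t_end/n, 0.0
    for _ in range(n):
        k1 = mul(A(t), H)
        k2 = mul(A(t + h/2), axpy(H, h/2, k1))
        k3 = mul(A(t + h/2), axpy(H, h/2, k2))
        k4 = mul(A(t + h), axpy(H, h, k3))
        H = tuple(tuple(H[i][j] + h/6*(k1[i][j] + 2*k2[i][j] + 2*k3[i][j] + k4[i][j])
                        for j in range(2)) for i in range(2))
        t += h
    return H

M = fundamental(T, 20000)                      # monodromy matrix (2.285)
det_M = M[0][0]*M[1][1] - M[0][1]*M[1][0]      # = 1 by Liouville, since tr A = 0
tr_M = M[0][0] + M[1][1]
# multiplicators: roots of det(M - rho*E) = 0, Equation (2.292)
disc = cmath.sqrt(tr_M*tr_M - 4.0*det_M)
rho1, rho2 = (tr_M + disc)/2.0, (tr_M - disc)/2.0
# property (2.286), H(t + T) = H(t)M, checked at t = T: H(2T) = M*M
err = max(abs(x - y) for r1, r2 in zip(fundamental(2.0*T, 40000), mul(M, M))
          for x, y in zip(r1, r2))
print(det_M, abs(rho1), abs(rho2), err)
```

For these parameter values the system is stable, so both multiplicators lie on the unit circle (a complex-conjugate pair with ρ1 ρ2 = det M = 1), consistent with (2.299) being satisfied non-strictly in the conservative case.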

2.4.3 General Properties of Sensitivity Equations

Let us show that the homogeneous part of the sensitivity equations, i.e., the equation in variations of the self-oscillating mode,
$$\frac{dU}{dt} = A(t)U,$$
has a periodic solution. Indeed, for α = α0 and Y (t) = Yp (t, α0 ), from (2.274) we identically have
$$\frac{dY_p(t,\alpha_0)}{dt} \equiv F(Y_p(t,\alpha_0), \alpha_0). \tag{2.300}$$
Differentiating the identity (2.300) with respect to t, we obtain
$$\frac{dX}{dt} = \left.\frac{\partial F}{\partial Y}\right|_{Y=Y_p(t,\alpha_0)} X = A(t)X, \tag{2.301}$$
where
$$X = \frac{dY_p(t,\alpha_0)}{dt}. \tag{2.302}$$

A comparison of (2.302) and (2.279) demonstrates that the equation in variations (2.281) has a periodic solution (2.302), associated with a multiplicator equal to 1. Hereinafter, we assume that all multiplicators of the equation in variations (2.281), except for the one equal to 1, are inside the unit circle. Then, by the known theorem of Andronov and Vitt [42, 62], the self-oscillating mode corresponding to α = α0 is Lyapunov stable (though not asymptotically). In this case, the fundamental matrix H(t) of the equation in variations (2.281) can be represented in the block form
$$H(t) = \left[\,\dot Y_p(t,\alpha_0)\;\;G(t)\,\right] P, \tag{2.303}$$

where G(t) = ‖gik (t)‖, i = 1, . . . , n, k = 2, . . . , n, is a matrix with elements satisfying the estimate
$$|g_{ik}(t)| < a e^{-bt}, \qquad t \ge 0, \tag{2.304}$$
where a and b are positive constants independent of i and k, and P is a nonsingular constant square matrix. Using (2.303), the matrix H(t) can be represented in the form
$$H(t) = H_1(t) + H_2(t), \tag{2.305}$$
where H1 and H2 are block matrices given by
$$H_1(t) = \left[\,\dot Y_p(t,\alpha_0)\;\;O_{n,n-1}\,\right]P, \qquad H_2(t) = \left[\,O_{n,1}\;\;G(t)\,\right]P, \tag{2.306}$$
where O_{i,k} denotes the zero matrix having i rows and k columns. Obviously, by construction we have
$$\frac{dH_i}{dt} = A(t)H_i, \qquad i = 1,2, \tag{2.307}$$
and, moreover,
$$H_1(t) = H_1(t+T), \qquad |H_2(t)| < d e^{-bt}, \tag{2.308}$$
where d is a positive constant and T = T0 = T (α0 ).

PROPOSITION 2.7
The sensitivity equations (2.279) have, in the general case, an unbounded solution of the form
$$U = R(t)t + S(t), \tag{2.309}$$
where
$$R(t) = R(t+2T_0), \qquad S(t) = S(t+2T_0) \tag{2.310}$$

are periodic vectors.

PROOF Under the above assumptions regarding the multiplicators of Equation (2.281), there is a nonsingular transformation [63]
$$U(t) = Q(t)Z(t) \tag{2.311}$$
with a periodic matrix Q(t) of period 2T0 that transforms the sensitivity equation (2.279) into the system of equations
$$\frac{dz_1}{dt} = q_1(t), \qquad \frac{dz_i}{dt} = \sum_{s=2}^{n} p_{is} z_s + p_i z_1 + q_i(t), \quad i = 2,\dots,n, \tag{2.312}$$
where the functions qj (t), j = 1, . . . , n, are periodic with respect to t, and pi and pis , (i = 2, . . . , n, s = 2, . . . , n) are constants. Moreover, the characteristic equation of the second group of equations in (2.312) has all roots with negative real parts. Let
$$q_1(t) = q_1^0 + \tilde q_1(t), \qquad q_1^0 = \frac{1}{2T_0}\int_0^{2T_0} q_1(t)\,dt. \tag{2.313}$$
Hereinafter we assume that q10 ≠ 0. Then, from the first equation in (2.312) we obtain a particular solution
$$z_1(t) = q_1^0\, t + \int_0^{t} \tilde q_1(\tau)\,d\tau. \tag{2.314}$$
The second term in (2.314) is periodic with respect to t. Substituting (2.314) into the remaining equations in (2.312), it can easily be seen that these equations have a solution of the form (2.309) with constant R and S. Returning to the initial variables, we prove the claim.

REMARK 2.7 As will be shown below, the vectors R(t) and S(t) appearing in (2.309) actually have period T0 .

Since the sensitivity equations (2.279) have a particular solution of the form (2.309), their general solution can be represented as
$$U(t) = H(t)\tilde U + R(t)t + S(t), \tag{2.315}$$
where Ũ is an arbitrary constant vector. Since for t = 0 we have
$$U(0) = \tilde U + S(0), \tag{2.316}$$
the general solution (2.315) can be expressed in terms of initial data as
$$U(t, U_0) = H(t)(U_0 - S(0)) + R(t)t + S(t). \tag{2.317}$$

From (2.317), (2.310), and (2.305) it follows that if R(t) ≢ 0, all the solutions of the sensitivity equations are unbounded. Hence, for the problem at hand the first approximation (2.208) does not give a correct result on sufficiently large time intervals. Indeed, for α sufficiently close to α0 , additional motions associated with self-oscillating modes are bounded and cannot be approximated by expressions of the form (2.315) on large time intervals. Note that in the given case the condition (2.242) of Theorem 2.12 restricting the Cauchy matrix H(t, τ) does not hold. Indeed, if the condition (2.242) held for τ = 0, for t ≥ 0 we would obtain
$$|H(t)| \le d_2 e^{-\nu_1 t}. \tag{2.318}$$
But, as follows from (2.305)–(2.308), the condition (2.318) cannot be satisfied in the given case. Thus, investigating the sensitivity of self-oscillating modes, we always have to deal with a special case in which the condition (2.242) of Theorem 2.12 is violated. Nevertheless, we will show that, despite this fact, the first-order sensitivity functions give important information on the properties of the system. Let us specify an explicit form of the general solution (2.317). With this aim in view, we substitute (2.317) into (2.279) and, taking into account the properties of H(t, τ), obtain
$$\frac{dR(t)}{dt}\,t + R(t) + \frac{dS(t)}{dt} = A(t)R(t)t + A(t)S(t) + B(t). \tag{2.319}$$
Then, comparing the coefficients of t, we find the following equation:
$$\frac{dR}{dt} = A(t)R, \tag{2.320}$$
i.e., the vector R(t) is a 2T-periodic solution of Equation (2.281). Hence,
$$R(t) = k\dot Y_p(t,\alpha_0), \qquad R(t) = R(t+T), \tag{2.321}$$
where k is a constant, because Equation (2.281) has no other bounded solutions.


Thus, from (2.321) and (2.317) we have
$$U(t, U_0) = H(t)(U_0 - S(0)) + kt\dot Y_p(t,\alpha_0) + S(t). \tag{2.322}$$

2.4.4 Sensitivity Function Variation over the Self-Oscillation Period

Let
$$\Delta U(t, U_0) = U(t+T, U_0) - U(t, U_0) \tag{2.323}$$
be the variance of an arbitrary solution U(t, U0 ) of Equation (2.279) over the period of self-oscillation. Let us note a number of important properties of the variance ∆U(t, U0 ) and the general solution (2.322).

PROPOSITION 2.8
For any U0 the function ∆U(t, U0 ) is a solution of the equation in variations (2.281).

PROOF Indeed, substituting t + T for t in (2.279), we obtain
$$\frac{dU(t+T)}{dt} = A(t)U(t+T) + B(t). \tag{2.324}$$
Subtracting (2.279) from (2.324) yields
$$\frac{d\,\Delta U}{dt} = A(t)\,\Delta U.$$

PROPOSITION 2.9
In fact, the vector S(t) in (2.322) has period T, i.e., S(t) = S(t + T).

PROOF Let us calculate the variance ∆U(t, U0 ) using (2.322):
$$\Delta U(t,U_0) = [H(t+T) - H(t)](U_0 - S(0)) + kT\dot Y_p(t,\alpha_0) + [S(t+T) - S(t)]. \tag{2.325}$$
The left side and the first two terms on the right side satisfy Equation (2.281); therefore, we also have
$$\frac{d[S(t+T) - S(t)]}{dt} = A(t)[S(t+T) - S(t)]. \tag{2.326}$$
Thus, the 2T-periodic function S(t + T) − S(t) is a solution of the homogeneous equation (2.281). Hence, as in (2.321), we obtain
$$S(t+T) - S(t) = l\dot Y_p(t,\alpha_0) = l\dot Y_{p0}(t), \qquad l = \text{const}. \tag{2.327}$$
Substituting t + T for t in (2.327) yields
$$S(t) - S(t+T) = l\dot Y_{p0}(t). \tag{2.328}$$
From (2.327) and (2.328) it follows that l = 0.

Next, let us calculate the constant k in (2.322). With this aim in view, consider the identity
$$Y_p(t + T(\alpha), \alpha) = Y_p(t,\alpha). \tag{2.329}$$
Differentiating with respect to α, from (2.329) we obtain for α = α0
$$\Delta U_p(t) = U_p(t+T_0) - U_p(t) = -\dot Y_{p0}(t)\frac{dT}{d\alpha}, \tag{2.330}$$
where Up (t) is the sensitivity function of the self-oscillating mode and ∆Up (t) is its variance over the period. As follows from (2.330), the variance of the sensitivity function of the self-oscillating mode is periodic with period T = T0 = T (α0 ). On the other hand, from (2.325) we have, bearing in mind that S(t + T) = S(t),
$$\Delta U(t,U_0) = [H(t+T) - H(t)](U_0 - S(0)) + kT\dot Y_{p0}(t). \tag{2.331}$$
From (2.331) it follows that the variance ∆U(t, U0 ) is periodic only if
$$U_0 = U_{p0} = S(0). \tag{2.332}$$
In this case
$$\Delta U(t,U_0) = \Delta U_p(t) = kT\dot Y_{p0}(t). \tag{2.333}$$
Comparing (2.333) with (2.330), we find
$$k = -\frac{1}{T}\frac{dT}{d\alpha}. \tag{2.334}$$

Returning to Relation (2.322) and using (2.334), we find that the general solution of the sensitivity equation has the form
$$U(t,U_0) = H(t)(U_0 - S(0)) - \dot Y_{p0}(t)\,\frac{t}{T}\frac{dT}{d\alpha} + S(t), \tag{2.335}$$
and the sensitivity function of the self-oscillating mode is
$$U_p(t) = -\dot Y_{p0}(t)\,\frac{t}{T}\frac{dT}{d\alpha} + S(t). \tag{2.336}$$
Accordingly, the variance of an arbitrary solution of the sensitivity equation can be transformed, by means of (2.331) and (2.305), to the form
$$\Delta U(t,U_0) = [H_2(t+T) - H_2(t)](U_0 - S(0)) - \dot Y_{p0}(t)\frac{dT}{d\alpha}. \tag{2.337}$$
Hence, for U0 = S(0) we obtain (2.330). Using the above relations, we will derive the differential equation determining S(t). Substituting (2.335) into the sensitivity equation (2.279) and accounting for the fact that H(t) is a solution of the homogeneous equation, we obtain
$$\frac{dS}{dt} = A(t)S + B(t) + \dot Y_{p0}(t)\,\frac{1}{T}\frac{dT}{d\alpha}, \qquad S(t+T) = S(t). \tag{2.338}$$
Let us also find the initial conditions determining the vector S(t). As follows from (2.336), for the sensitivity function of the self-oscillating mode we have
$$S(0) = U_p(0) = U_{p0} = \left.\frac{\partial Y_p(t,\alpha)}{\partial\alpha}\right|_{t=0,\ \alpha=\alpha_0}. \tag{2.339}$$

2.4.5 Sensitivity Functions for Periodicity Characteristics

Using the results of the preceding paragraphs, it is possible to propose a technique for determining the sensitivity functions of the main characteristics of self-oscillations. First, consider the problem of determining the sensitivity function for the period of self-oscillations, i.e., calculating the derivative dT/dα. With this aim in view, we note that (2.308) and (2.337) yield
$$\left|\Delta U(t,U_0) + \dot Y_{p0}(t)\frac{dT}{d\alpha}\right| < d\left(1 + e^{-bT}\right)e^{-bt}. \tag{2.340}$$
Therefore, independently of U0 , we have
$$\lim_{t\to\infty}\left|\Delta U(t,U_0) + \dot Y_{p0}(t)\frac{dT}{d\alpha}\right| = 0. \tag{2.341}$$
As a special case, for each component of the vector relation (2.341) we have
$$\lim_{t\to\infty}\left|\Delta u_i(t,U_0) + \dot y_{pi}(t)\frac{dT}{d\alpha}\right| = 0. \tag{2.342}$$
Let tk , k = 0, 1, 2, . . ., be any infinitely increasing sequence of observation moments such that ẏpi (tk ) ≠ 0. Then, for sufficiently large k we have the approximate equality
$$\Delta u_i(t_k,U_0) \approx -\dot y_{pi}(t_k)\frac{dT}{d\alpha}. \tag{2.343}$$
Hence,
$$\frac{dT}{d\alpha} \approx -\frac{\Delta u_i(t_k,U_0)}{\dot y_{pi}(t_k)}. \tag{2.344}$$
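The estimate (2.344) can be sanity-checked on a family for which everything is known in closed form: y'' + α²y = 0 with y(0) = 1, ẏ(0) = 0, so that Yp = (cos αt, −α sin αt) and T(α) = 2π/α. This example (a sketch of ours, with an arbitrary sample value of α) is only Lyapunov stable, not orbitally asymptotically stable, so it exercises the exact identity (2.330) rather than the limit (2.341).

```python
import math

ALPHA = 1.3   # sample parameter value (ours)

def rhs(s):
    # y'' + ALPHA^2 y = 0 together with its sensitivity system wrt ALPHA:
    # differentiating the oscillator in ALPHA gives u'' + ALPHA^2 u = -2*ALPHA*y
    y1, y2, u1, u2 = s
    return (y2, -ALPHA*ALPHA*y1, u2, -ALPHA*ALPHA*u1 - 2.0*ALPHA*y1)

def rk4(s, t_span, n):
    # fixed-step RK4 over an interval of length t_span
    h = t_span/n
    for _ in range(n):
        k1 = rhs(s)
        k2 = rhs(tuple(a + h/2*b for a, b in zip(s, k1)))
        k3 = rhs(tuple(a + h/2*b for a, b in zip(s, k2)))
        k4 = rhs(tuple(a + h*b for a, b in zip(s, k3)))
        s = tuple(a + h/6*(p + 2*q + 2*r + w)
                  for a, p, q, r, w in zip(s, k1, k2, k3, k4))
    return s

T = 2.0*math.pi/ALPHA                 # exact period of the family
t_star = T/4.0                        # observation moment with dy/dt != 0
s = rk4((1.0, 0.0, 0.0, 0.0), t_star, 4000)
u1_before, y2_star = s[2], s[1]
s = rk4(s, T, 16000)                  # advance one full period
delta_u1 = s[2] - u1_before           # variance of u1 over the period
dT_est = -delta_u1/y2_star            # the estimate (2.344)
dT_exact = -2.0*math.pi/ALPHA**2      # since T = 2*pi/ALPHA
print(dT_est, dT_exact)
```

The secular structure (2.309) is also visible here: the sensitivity function itself, u1 = −t sin αt, grows linearly in t, while its variance over the period stays bounded and recovers dT/dα.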

The approximate equality tends to a strict one as tk → ∞. Next, consider the problem of determining the sensitivity function for the amplitude of self-oscillation. We will call the value
$$A_i(\alpha) = \max_{0\le t\le T}|y_{pi}(t,\alpha)| \tag{2.345}$$
the amplitude of the i-th phase coordinate. Let tim (α) = ti (α) + mT (α), where m = 0, 1, 2, . . . and 0 ≤ ti (α) < T (α), be a sequence of argument values for which
$$A_i(\alpha) = |y_{pi}(t_{im}(\alpha), \alpha)|. \tag{2.346}$$
Below we give a method of evaluating the value dAi /dα, which, according to (2.346), coincides with the derivative
$$\frac{d}{d\alpha}\,y_{pi}(t_{im}(\alpha),\alpha) \tag{2.347}$$
or differs from the latter only in sign. Since ypi (t, α) is a smooth function of its arguments,
$$\frac{dy_{pi}(t_{im}(\alpha),\alpha)}{dt} = 0. \tag{2.348}$$

Using (2.348), we obtain
$$\frac{d}{d\alpha}\,y_{pi}(t_{im}(\alpha),\alpha) = \frac{\partial}{\partial\alpha}\,y_{pi}(t_{im},\alpha) = u_{pi}(t_{im},\alpha), \tag{2.349}$$
i.e., the values of the sensitivity function upi (t, α) at the moments t = tim (α) coincide with the sensitivity function for the amplitude of self-oscillation up to the sign. Transforming the vector formula (2.336) to coordinate form, we obtain
$$u_{pi}(t) = -\dot y_{pi}(t)\,\frac{t}{T}\frac{dT}{d\alpha} + s_i(t). \tag{2.350}$$
With due account for (2.348) and (2.349), we obtain
$$\frac{d}{d\alpha}\,y_{pi}(t_{im}(\alpha),\alpha) = s_i(t_{im}(\alpha)), \tag{2.351}$$
and, according to (2.345),
$$\frac{dA_i}{d\alpha} = \operatorname{sign} y_{pi}(t_{im}(\alpha),\alpha)\,u_{pi}(t_{im}(\alpha)) = \operatorname{sign} y_{pi}(t_{im}(\alpha),\alpha)\,s_i(t_{im}(\alpha)). \tag{2.352}$$

Let us show that the value upi (tim , α) can be expressed in terms of the values of the variance of the sensitivity functions ∆upi (t, α). With this aim in view, we note that, according to (2.305) and (2.306),
$$H(t)(U_0 - S(0)) = \left(\left[\,\dot Y_p(t,\alpha_0)\;\;O_{n,n-1}\,\right] + \left[\,O_{n,1}\;\;G(t)\,\right]\right)P(U_0 - S(0)). \tag{2.353}$$
Assuming that
$$P = \|p_{ik}\|, \quad i,k = 1,\dots,n; \qquad U_0 - S(0) = \|u_{0i} - s_{0i}\|, \quad i = 1,\dots,n, \tag{2.354}$$
and using (2.306), we can easily verify that
$$H_1(t)(U_0 - S(0)) = \dot Y_p(t,\alpha_0)f_0, \tag{2.355}$$
where
$$f_0 = \sum_{i=1}^{n} p_{1i}(u_{0i} - s_{0i}) \tag{2.356}$$
is a scalar constant depending on the initial conditions. Taking (2.356) into account, we can write Relation (2.335) in the form
$$U(t,U_0) = H_2(t)(U_0 - S(0)) + \dot Y_p(t)\left(f_0 - \frac{t}{T}\frac{dT}{d\alpha}\right) + S(t). \tag{2.357}$$

Using (2.348), (2.351), and (2.352), for t = tim (α) we obtain for the i-th component of the vector equation (2.357)
$$u_i(t_i(\alpha) + mT, U_0) = [H_2(t_i(\alpha)+mT)(U_0 - S(0))]_i + \frac{dA_i}{d\alpha}\operatorname{sign} y_{pi}(t_i(\alpha)). \tag{2.358}$$
From (2.358) and (2.308) it follows that there exists the limit
$$\lim_{m\to\infty} u_i(t_i(\alpha)+mT, U_0) = \frac{dA_i}{d\alpha}\operatorname{sign} y_{pi}(t_i(\alpha)). \tag{2.359}$$
Indeed, since
$$\Delta U(t_i(\alpha)+mT, U_0) = U(t_i(\alpha)+(m+1)T, U_0) - U(t_i(\alpha)+mT, U_0), \tag{2.360}$$
we have
$$u_i(t_i(\alpha)+mT, U_0) = u_i(t_i(\alpha), U_0) + \sum_{s=0}^{m-1}\Delta u_i(t_i(\alpha)+sT, U_0). \tag{2.361}$$
Equations (2.361) and (2.359) give the desired relation
$$\frac{dA_i}{d\alpha}\operatorname{sign} y_{pi}(t_i) = u_i(t_i) + \lim_{m\to\infty}\sum_{s=0}^{m-1}\Delta u_i(t_i+sT, U_0), \tag{2.362}$$
where ti = ti (α). From (2.308) and (2.337) it follows that the series on the right side of (2.362) converges absolutely, its terms tending to zero exponentially.


2.4.6 Practical Method for Calculating Sensitivity Functions

On the basis of the relations given above, it is possible to develop a practical method of calculating sensitivity functions for a self-oscillating mode. Consider the initial system of equations
$$\dot Y = F(Y,\alpha), \qquad Y_0(\alpha) = Y_{p0}(\alpha), \qquad Y(t) = Y(t+T) \tag{2.363}$$
for a fixed parameter value α = α0 , where Yp0 (α) represents the initial conditions of the self-oscillating mode and T is the corresponding period of self-oscillation. Simultaneously, consider the system of equations
$$\frac{dV}{dt} = A(t)V + k(t)B(t), \qquad V_0 = 0, \tag{2.364}$$
where A(t) and B(t) are the same as in the sensitivity equations (2.279), and the function k(t) is defined by
$$k(t) = \begin{cases} 1, & 0 \le t < T, \\ 0, & t > T. \end{cases} \tag{2.365}$$

For 0 ≤ t < T Equation (2.364) coincides with the sensitivity equation, while for t > T it coincides with the corresponding homogeneous system. The solution of Equation (2.364) with the chosen initial conditions can be presented on the interval 0 ≤ t < T in the following explicit form:
$$V(t) = \int_0^t H(t)H^{-1}(\tau)B(\tau)\,d\tau, \qquad 0 \le t < T. \tag{2.366}$$
For t = T we have
$$V(T) = \int_0^T H(T)H^{-1}(\tau)B(\tau)\,d\tau. \tag{2.367}$$
Solving the homogeneous equation (2.281) with the initial conditions (2.367), we obtain a solution of Equation (2.364) for t ≥ T as
$$\bar V(t) = H(t,T)\int_0^T H(T)H^{-1}(\tau)B(\tau)\,d\tau. \tag{2.368}$$
According to (2.287),
$$H(t,T) = H(t)H^{-1}(T). \tag{2.369}$$
Hence, Equation (2.368) yields
$$\bar V(t) = H(t)\int_0^T H^{-1}(\tau)B(\tau)\,d\tau. \tag{2.370}$$
Therefore,
$$\bar V(0) = \int_0^T H^{-1}(\tau)B(\tau)\,d\tau. \tag{2.371}$$

On the other hand, consider the particular solution (2.366) of the sensitivity equations for all t ≥ 0. Its variance over the period of oscillation is given by
$$\Delta V(t) = V(t+T) - V(t) = H(t)\int_0^T H(T)H^{-1}(\tau)B(\tau)\,d\tau. \tag{2.372}$$
Due to the results of Section 2.4.4, the function (2.372) is a solution of the homogeneous equation (2.281). Then, setting t = 0 and comparing with (2.370), we have
$$\Delta V(0) = H(T)\int_0^T H^{-1}(\tau)B(\tau)\,d\tau = \bar V(T), \tag{2.373}$$
i.e., ∆V(t) = V̄(t + T). Thus, the solution V̄(t) of Equation (2.364) for t ≥ T reproduces the variance of the sensitivity function (2.366) over the period of self-oscillation. Hence, due to (2.340), as t increases, the function (2.372) tends exponentially to the periodic function
$$-\dot Y_p(t)\frac{dT}{d\alpha}. \tag{2.374}$$
Therefore, calculating the function V̄(t) for sufficiently large t > T (where it can be considered as a periodic function with a given accuracy), in coordinate form we obtain
$$v_i(t) \approx -\dot y_{pi}(t)\frac{dT}{d\alpha}, \tag{2.375}$$
i.e.,
$$\frac{dT}{d\alpha} \approx -\frac{v_i(t)}{\dot y_{pi}(t)}. \tag{2.376}$$
Moreover, fixing, for each i, the moments tim = ti + mT when the component ypi of the self-oscillating mode has the maximal absolute value, and using (2.362), for sufficiently large m we have the approximation
$$\frac{dA_i}{d\alpha}\operatorname{sign} y_{pi}(t_i) \approx v_i(t_i) + \sum_{s=1}^{m} v_i(t_i + sT). \tag{2.377}$$
Since the series converges exponentially, in practice it is possible to use a small number of terms in (2.377).

REMARK 2.8 For the integration of (2.364) we can choose arbitrary initial conditions V(0) = V0 . Let Ṽ(t) be the corresponding solution. Then, to obtain the variance ∆Ṽ(t), Equation (2.364) is to be solved for t ≥ T with the initial conditions
$$\Delta\tilde V(T) = \tilde V(T) - V_0. \tag{2.378}$$
Therefore, the initial conditions taken in (2.364) are the most convenient ones, because there is no need to recalculate initial conditions at t = T.

REMARK 2.9 Using this technique, there is no need to calculate precisely the initial conditions of the self-oscillating mode Y0 (α) = Yp0 (α). Choosing arbitrary initial conditions (sufficiently near to Yp0 (α)), Equation (2.363) should be solved until the solution becomes periodic with a given accuracy. Then, Equation (2.363) is augmented by (2.364), and the above procedure is repeated.

2.4.7 Application to the Van der Pol Equation

The well-known van der Pol equation [63, 103] can be written in the form of a system of two equations as
$$\dot y_1 = y_2, \qquad \dot y_2 = -\mu(y_1^2 - 1)y_2 - y_1. \tag{2.379}$$
Differentiating with respect to µ, we obtain the sensitivity equations
$$\dot u_1 = u_2, \qquad \dot u_2 = -\mu\left[(y_1^2-1)u_2 + 2y_1 y_2 u_1\right] - u_1 - (y_1^2 - 1)y_2. \tag{2.380}$$
The auxiliary system of equations (2.364) for this case has the form
$$\dot v_1 = v_2, \qquad \dot v_2 = -\mu\left[(y_1^2-1)v_2 + 2y_1 y_2 v_1\right] - v_1 - k(t)(y_1^2 - 1)y_2, \tag{2.381}$$

where the function k(t) is defined by (2.365). As was demonstrated by simulation [14], for µ = 1 Equation (2.379) has a periodic solution with initial conditions y10 ≈ 2.01, y20 = −0.903 · 10⁻³ and period T = 6.66 sec. Integrating (2.379) and (2.381) together with initial conditions y10 = 2.01, y20 = −0.9 · 10⁻³, v10 = 0, and v20 = 0, we determine simultaneously u1 (t1 ) and the sum $\sum_{k=0}^{m-1}\Delta u_1(t_1+kT, 0)$. This integration process continues until the solutions of (2.379) and (2.381) become periodic. Then, using Formulas (2.376) and (2.377), we obtain dT/dµ = 0.84 and dA1 /dµ = 0.0104. In a similar way we obtained dT/dµ and dA1 /dµ for µ = 2, 3, 4.

Figure 2.2 Curves T and dT/dµ versus µ

Figure 2.3 Curves A1 and dA1 /dµ versus µ

Figures 2.2 and 2.3 demonstrate the curves T, dT/dµ, A1 , and dA1 /dµ versus µ. The phenomenon of increasing sensitivity functions is illustrated by Figure 2.4. Figure 2.5 demonstrates the variances ∆u1 (t, U0 = 0) and ∆u1 (t, U0 = 1). These curves show that, in fact, after the time interval t = T the function ∆u1 (t, U0 ) becomes periodic independently of the initial conditions. Moreover, it can easily be seen from the same figure that after the interval t = T
$$\Delta u_1(t, U_0) \approx -\frac{dT}{d\mu}\dot y_1 = -\frac{dT}{d\mu}y_2.$$
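A minimal sketch of this procedure (our own implementation, assuming the standard van der Pol form of (2.379)–(2.381) as reconstructed above): integrate (2.379) to measure the period T, then integrate the gated system (2.381) with V(0) = 0 and read dT/dµ off the periodic regime of v via (2.376), cross-checking against a finite-difference estimate of the period derivative.

```python
import math

def rhs(t, s, mu, t_gate):
    # van der Pol system (2.379) plus the auxiliary system (2.381);
    # k(t) from (2.365): the forcing term is on only for 0 <= t < t_gate = T
    y1, y2, v1, v2 = s
    k = 1.0 if t < t_gate else 0.0
    return (y2,
            -mu*(y1*y1 - 1.0)*y2 - y1,
            v2,
            -mu*((y1*y1 - 1.0)*v2 + 2.0*y1*y2*v1) - v1 - k*(y1*y1 - 1.0)*y2)

def integrate(mu, t_end, h=0.002, t_gate=float("inf")):
    # fixed-step RK4 from the near-limit-cycle initial point used in the text
    t, s = 0.0, (2.01, -0.0009, 0.0, 0.0)
    out = [(t, s)]
    for _ in range(int(round(t_end/h))):
        k1 = rhs(t, s, mu, t_gate)
        k2 = rhs(t + h/2, tuple(a + h/2*b for a, b in zip(s, k1)), mu, t_gate)
        k3 = rhs(t + h/2, tuple(a + h/2*b for a, b in zip(s, k2)), mu, t_gate)
        k4 = rhs(t + h, tuple(a + h*b for a, b in zip(s, k3)), mu, t_gate)
        s = tuple(a + h/6*(p + 2*q + 2*r + w)
                  for a, p, q, r, w in zip(s, k1, k2, k3, k4))
        t += h
        out.append((t, s))
    return out

def period(traj, t_min=40.0):
    # period from successive maxima of y1 (downward zero crossings of y2)
    ev = [t0 + (t1 - t0)*s0[1]/(s0[1] - s1[1])
          for (t0, s0), (t1, s1) in zip(traj, traj[1:])
          if t0 > t_min and s0[1] > 0.0 >= s1[1]]
    return (ev[-1] - ev[0])/(len(ev) - 1)

T = period(integrate(1.0, 60.0))            # period at mu = 1 (about 6.66)

traj = integrate(1.0, 8.0*T, t_gate=T)      # gated run, V(0) = 0
num = den = 0.0
for t, (y1, y2, v1, v2) in traj:
    if 6.0*T <= t <= 7.0*T:                 # v1 = -y2*dT/dmu here, cf. (2.374)
        num += v1*y2
        den += y2*y2
dT_dmu = -num/den                           # least-squares form of (2.376)

# finite-difference cross-check of dT/dmu
dT_dmu_fd = (period(integrate(1.05, 60.0)) - period(integrate(0.95, 60.0)))/0.1
print(T, dT_dmu, dT_dmu_fd)
```

Both estimates should land in the neighborhood of the dT/dµ ≈ 0.84 reported above; the least-squares form avoids dividing by ẏ₁ near its zeros.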

2.5 Sensitivity of Non-Autonomous Systems

2.5.1 Linear Oscillatory Systems

Consider vector equations of the form
$$\frac{dY}{dt} = A(\tau)Y + B(\tau), \qquad \tau = \omega t, \tag{2.382}$$
where ω is the frequency, considered as a parameter. The matrix A(τ) = A(ωt) and the vector B(τ) = B(ωt) are assumed to be continuously differentiable and periodic with respect to τ. Hereinafter, without loss of generality, we assume that
$$A(\tau + 2\pi) = A(\tau), \qquad B(\tau + 2\pi) = B(\tau). \tag{2.383}$$

Figure 2.4 Sensitivity function variation

Let also the homogeneous system
$$\frac{dY}{dt} = A(\tau)Y \tag{2.384}$$
be asymptotically stable in the considered frequency interval ω1 ≤ ω ≤ ω2 , so that
$$\left|H(t)H^{-1}(\nu)\right| < c e^{-\lambda(t-\nu)}, \qquad 0 \le \nu \le t, \tag{2.385}$$
with constants c, λ > 0. Therefore, all additional motions of the system (2.382) in the frequency range under consideration are bounded. Hence, in this case, the approximation
$$\Delta Y_p(t) \approx U_p(t)\,\Delta\omega, \tag{2.402}$$
where Up (t) is the sensitivity function of the steady-state mode, is not correct on sufficiently large time intervals. Thus, in this problem we encounter a special case in which the first-order sensitivity functions cannot be used for the approximation of additional motion over sufficiently large time intervals. This corresponds to the general Theorem 2.12, because in this case the condition (2.243) is obviously violated. Nevertheless, as will be shown below, the bounded functions U1 (t) and U2 (t) can be used for solving sensitivity problems. A simulation block diagram that makes it possible to obtain the functions (2.399) without considering infinite processes is given in Figure 2.6.


Figure 2.6 Simulation block diagram

Our main goal is now to obtain the sensitivity function for the amplitude of forced oscillation (2.386). As in the previous paragraph, by amplitude Ai of the component ypi of forced oscillation we mean the maximal absolute deviation from zero over a period. Substituting t + T = t + 2π/ω for t in (2.386), with due account for the periodicity of these functions, we obtain
$$\frac{dU(t+T)}{dt} = A(\tau)U(t+T) + (t+T)A_1(\tau)Y_p(t) + (t+T)B_1(\tau). \tag{2.403}$$
Subtracting (2.396) from (2.403), we find
$$\frac{d\,\Delta U}{dt} = A(\tau)\,\Delta U + T\left[A_1(\tau)Y_p(t) + B_1(\tau)\right], \tag{2.404}$$
where
$$\Delta U = U(t+T) - U(t). \tag{2.405}$$

On the other hand, since we have the identity
$$Y_p(t,\omega) = Y_p(t+T,\omega), \tag{2.406}$$
differentiating with respect to ω we obtain
$$\frac{\partial Y_p(t,\omega)}{\partial\omega} = \frac{\partial Y_p(t+T,\omega)}{\partial\omega} + \dot Y_p(t)\frac{dT}{d\omega}, \tag{2.407}$$
or, equivalently,
$$\Delta U_p(t) = U_p(t+T) - U_p(t) = -\dot Y_p(t)\frac{dT}{d\omega} = \dot Y_p(t)\frac{2\pi}{\omega^2}. \tag{2.408}$$

Hence, the variation of the sensitivity function Up (t) over a period is a periodic function. On the other hand, for any solution of the sensitivity equation, Equation (2.398) yields
$$\Delta U(t) = U(t+T) - U(t) = [H(t+T) - H(t)](U_0 - U_{10}) + TU_2(t) = H(t)(M - E)(U_0 - U_{10}) + TU_2(t), \tag{2.409}$$
where M is the monodromy matrix (2.285). As follows from (2.409), the function ∆U(t) is periodic only for U0 = U10 . In this case
$$\Delta U_p(t) = TU_2(t). \tag{2.410}$$
Comparing (2.410) with (2.408), we find
$$U_2(t) = \frac{1}{\omega}\dot Y_p(t). \tag{2.411}$$

Assuming U0 = Up0 = U10 in (2.398) and taking account of (2.411), we obtain
$$U_p(t) = U_1(t) + \frac{t}{\omega}\dot Y_p(t). \tag{2.412}$$
Let now Ai be defined as in (2.345). Then, similarly to (2.349), we have
$$\frac{dA_i}{d\omega} = \operatorname{sign} y_{pi}(t_i)\,u_{pi}(t_i), \tag{2.413}$$
where the time instant ti is defined by the condition Ai = |ypi (ti )|. Since ẏpi (ti ) = 0, from (2.413) and (2.412) it follows that
$$\frac{dA_i}{d\omega} = \operatorname{sign} y_{pi}(t_i)\,u_{1i}(t_i). \tag{2.414}$$
From the above relations follows a simple technique for constructing the value dAi /dω. Consider the following system of vector equations with arbitrary initial conditions:
$$\frac{dY}{dt} = A(\tau)Y + B(\tau), \qquad \frac{d\tilde U_1}{dt} = A(\tau)\tilde U_1 - \frac{1}{\omega}\frac{dY}{dt}. \tag{2.415}$$

The first equation in (2.415) coincides with the initial equation (2.382), while the second one is obtained from the first equation in (2.390) by substituting the right side of (2.411) for U2 . After the periodic process Y = Yp (t), Ũ1 = U1p (t) is established in the system (2.415), the values u1i (ti ) will, according to (2.414), determine the sensitivity function of the steady-state mode amplitude up to the sign.

Example 2.7
Let us check the validity of (2.414) on a simple example. Let
$$\frac{dy}{dt} = -ay + \sin\omega t, \qquad a > 0. \tag{2.416}$$
The periodic solution has the form
$$y_p(t) = \frac{a}{\omega^2 + a^2}\sin\omega t - \frac{\omega}{\omega^2 + a^2}\cos\omega t. \tag{2.417}$$
Differentiating with respect to t, we find
$$\dot y_p(t) = \frac{a\omega}{\omega^2 + a^2}\cos\omega t + \frac{\omega^2}{\omega^2 + a^2}\sin\omega t. \tag{2.418}$$

Differentiating (2.417) with respect to ω, we obtain the sensitivity function
$$u_p(t) = \frac{\partial y_p(t,\omega)}{\partial\omega} = u_1(t) + tu_2(t), \tag{2.419}$$
where
$$u_1(t) = \frac{d}{d\omega}\!\left(\frac{a}{\omega^2+a^2}\right)\sin\omega t - \frac{d}{d\omega}\!\left(\frac{\omega}{\omega^2+a^2}\right)\cos\omega t, \qquad
u_2(t) = \frac{a}{\omega^2+a^2}\cos\omega t + \frac{\omega}{\omega^2+a^2}\sin\omega t = \frac{1}{\omega}\dot y_p(t). \tag{2.420}$$
At the extremal points ti of the function (2.417) we have ẏp (ti ) = 0; therefore,
$$u_p(t_i) = u_1(t_i) \tag{2.421}$$
and
$$\frac{dA}{d\omega} = \operatorname{sign} y_p(t_i)\,u_1(t_i). \tag{2.422}$$
It remains to show that the function u1 (t) is a solution of the second equation in (2.415). This can be proved by simple calculation.
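The closing check of Example 2.7 can also be carried out numerically. The snippet below (ours, with arbitrary sample values of a and ω) evaluates u1 from (2.420) at the extremal point of (2.417) and compares it with the closed-form amplitude derivative, since here A(ω) = (ω² + a²)^(−1/2).

```python
import math

a, w = 0.7, 1.3                       # sample values (ours), a > 0
s2 = w*w + a*a

# extremal point of (2.417) with yp > 0: solves a*cos(wt) + w*sin(wt) = 0
t_i = (math.pi - math.atan(a/w))/w
yp_ti = (a*math.sin(w*t_i) - w*math.cos(w*t_i))/s2      # equals the amplitude
# u1 from (2.420): omega-derivatives of the two coefficients in (2.417)
d_sin = -2.0*a*w/s2**2                # d/dw [a/(w^2 + a^2)]
d_cos = (a*a - w*w)/s2**2             # d/dw [w/(w^2 + a^2)]
u1_ti = d_sin*math.sin(w*t_i) - d_cos*math.cos(w*t_i)
# closed-form amplitude A(w) = (w^2 + a^2)^(-1/2) and its derivative
A = s2**-0.5
dA_dw = -w/s2**1.5
print(yp_ti, u1_ti, dA_dw)
```

With yp (ti ) > 0 the sign factor in (2.422) is +1, so u1 (ti ) should reproduce dA/dω exactly.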


2.5.3 Sensitivity of Nonlinear Oscillatory System

The above approach can be used in the general case of a nonlinear non-autonomous oscillatory system. Consider the vector equation
$$\frac{dY}{dt} = F(Y,\tau), \qquad \tau = \omega t, \tag{2.423}$$
where F is a continuously differentiable function of the arguments Y, τ such that
$$F(Y,\tau) = F(Y,\tau + 2\pi). \tag{2.424}$$
Assume that Equation (2.423) has a family of periodic steady-state solutions
$$Y_p(t,\omega) = Y_p\!\left(t + \frac{2\pi}{\omega},\,\omega\right) \tag{2.425}$$

in some interval of frequencies ω1 ≤ ω ≤ ω2 . Assuming that the solution (2.425) is differentiable with respect to ω, from (2.423) we can obtain the sensitivity equation of the steady-state mode. Differentiating (2.423) with respect to ω gives the sensitivity equation
$$\frac{dU}{dt} = A(t)U + tB_1(t), \tag{2.426}$$
where
$$A(t) = \left.\frac{\partial F}{\partial Y}\right|_{Y=Y_p(t),\ \tau=\omega t}, \qquad B_1(t) = \left.\frac{\partial F}{\partial\tau}\right|_{Y=Y_p(t),\ \tau=\omega t}. \tag{2.427}$$
By construction, A(t) = A(t + T) and B1 (t) = B1 (t + T). Hereinafter we assume that in the chosen frequency range the condition (2.385) holds, so that the solution (2.425) is asymptotically stable in the sense of Lyapunov for all ω1 ≤ ω ≤ ω2 . As follows from (2.399), a general solution of the sensitivity equation (2.426) has the form
$$U(t) = H(t)(U_0 - U_{10}) + U_1(t) + tU_2(t), \tag{2.428}$$

where U1 (t) and U2 (t) are periodic solutions of the following system of equations:
$$\frac{dU_1}{dt} = A(t)U_1 - U_2, \qquad \frac{dU_2}{dt} = A(t)U_2 + B_1(t). \tag{2.429}$$
Nevertheless, in the given case we can greatly simplify the solution, because it appears that
$$U_2(t) = \frac{1}{\omega}\dot Y_p(t) = \frac{1}{\omega}F(Y_p(t),\omega t). \tag{2.430}$$
This formula is proved by a step-by-step repetition of the transformations used in the derivation of Relation (2.411). Moreover, similarly to (2.414),
$$\frac{dA_i}{d\omega} = \operatorname{sign} y_{pi}(t_i)\,u_{1i}(t_i), \tag{2.431}$$
where ti (0 ≤ ti < T) is the moment when the function ypi (t) takes its maximal absolute value. Therefore, to calculate the sensitivity functions of the amplitudes of the steady-state mode components, it suffices to obtain the function U1 (t). For this purpose we can use the first equation in (2.429), which, with account for (2.430), appears as
$$\frac{dU_1}{dt} = A(t)U_1 - \frac{1}{\omega}F(Y_p(t),\omega t). \tag{2.432}$$

A practical method of calculating the function U1 can be described as follows. Consider the system of equations
$$\frac{dY}{dt} = F(Y,\omega t), \qquad \frac{d\tilde U_1}{dt} = A(t)\tilde U_1 - \frac{1}{\omega}F(Y,\omega t). \tag{2.433}$$
If we choose initial conditions sufficiently near to Yp (0), then, due to asymptotic stability, the solution Y (t) will tend to Yp (t) and, correspondingly, the right side of the second equation in (2.433) will converge to the right side of (2.432). Therefore, the desired signal will establish itself at the output of the block diagram shown in Figure 2.7.

Figure 2.7 Determination of U1

Example 2.8
As an example, we consider Duffing's equation
$$\ddot y + \alpha\dot y + \beta y^3 = N\cos\omega t, \tag{2.434}$$
which is equivalent to the following system of equations:
$$\frac{dy_1}{dt} = y_2, \qquad \frac{dy_2}{dt} = -\beta y_1^3 - \alpha y_2 + N\cos\omega t. \tag{2.435}$$

Differentiating (2.435) with respect to ω yields the sensitivity equations
$$\frac{du_1}{dt} = u_2, \qquad \frac{du_2}{dt} = -3\beta y_1^2 u_1 - \alpha u_2 - tN\sin\omega t, \tag{2.436}$$
while the second equation in (2.433) can be written in expanded form as
$$\frac{dv_1}{dt} = v_2 - \frac{1}{\omega}y_2, \qquad \frac{dv_2}{dt} = -3\beta y_1^2 v_1 - \alpha v_2 + \frac{1}{\omega}\left(\beta y_1^3 + \alpha y_2 - N\cos\omega t\right), \tag{2.437}$$
where the notation is changed in order to avoid confusion. Returning to the variable y and referring to (2.437) and (2.435), we can rewrite the system of equations (2.433) in the form
$$\ddot y + \alpha\dot y + \beta y^3 = N\cos\omega t, \qquad \ddot v + \alpha\dot v + 3\beta y^2 v = -\frac{\alpha}{\omega}\dot y - \frac{2}{\omega}\ddot y. \tag{2.438}$$
The steady-state periodic mode v = vp (t) makes it possible to calculate the sensitivity function for the amplitude of the periodic solution, dA/dω. If for t = ti the value yp (t) has the maximal absolute value, the value vp (ti ) gives the desired sensitivity function up to the sign.
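A numerical sketch of Example 2.8 (our own, with hypothetical parameter values chosen so that the response stays on the small, unique branch of the Duffing resonance curve): integrate the pair (2.435) and (2.437) to the steady state, read dA/dω as sign(y(ti)) · v1(ti) per (2.431), and cross-check against a finite-difference estimate of the amplitude derivative.

```python
import math

ALPHA, BETA, N = 0.8, 1.0, 0.2   # hypothetical parameters: small, unique response

def rhs(t, s, w):
    # Duffing system (2.435) together with the auxiliary system (2.437)
    y1, y2, v1, v2 = s
    c = N*math.cos(w*t)
    return (y2,
            -BETA*y1**3 - ALPHA*y2 + c,
            v2 - y2/w,
            -3.0*BETA*y1*y1*v1 - ALPHA*v2 + (BETA*y1**3 + ALPHA*y2 - c)/w)

def run(w, t_end=80.0, h=0.005):
    # fixed-step RK4 from rest; the transient decays long before t_end
    t, s, out = 0.0, (0.0, 0.0, 0.0, 0.0), []
    for _ in range(int(round(t_end/h))):
        k1 = rhs(t, s, w)
        k2 = rhs(t + h/2, tuple(a + h/2*b for a, b in zip(s, k1)), w)
        k3 = rhs(t + h/2, tuple(a + h/2*b for a, b in zip(s, k2)), w)
        k4 = rhs(t + h, tuple(a + h*b for a, b in zip(s, k3)), w)
        s = tuple(a + h/6*(p + 2*q + 2*r + x)
                  for a, p, q, r, x in zip(s, k1, k2, k3, k4))
        t += h
        out.append((t, s))
    return out

def last_period(traj, w):
    # samples over the final forcing period 2*pi/w
    return [ts for ts in traj if ts[0] >= traj[-1][0] - 2.0*math.pi/w]

W = 1.5
tail = last_period(run(W), W)
t_i, s_i = max(tail, key=lambda ts: abs(ts[1][0]))       # moment of max |y|
dA_sens = math.copysign(1.0, s_i[0])*s_i[2]              # sign(y(t_i))*v1(t_i)

d = 0.02                                                 # finite-difference check
amp = lambda w: max(abs(s[0]) for _, s in last_period(run(w), w))
dA_fd = (amp(W + d) - amp(W - d))/(2.0*d)
print(dA_sens, dA_fd)
```

Above the resonance region the amplitude decreases with ω, so both estimates should come out negative and close to each other.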

2.6 Sensitivity of Solutions of Boundary-Value Problems

2.6.1 Boundary-Value Problems Depending on Parameter

The methods of constructing sensitivity functions developed in the preceding paragraphs require that the initial conditions (2.21) be known, i.e., in fact we dealt with the sensitivity of Cauchy's problem. In that case, for continuous first-order sensitivity functions of a single-parameter family of solutions of Equation (2.20) to exist, it is sufficient that the vector F(Y, t, α) be continuously differentiable with respect to Y, α, and that the initial conditions (2.21) be continuously differentiable with respect to α. The existence conditions for higher-order sensitivity functions were formulated on the basis of the same proposition. Nevertheless, in many applied problems the initial conditions of the family of solutions under consideration are not given and, on the contrary, are unknowns to be found. Such a situation arises, for example, in various boundary-value problems. In such cases, the question of the existence of the corresponding family of solutions and, all the more so, of the differentiability of the initial conditions with respect to the parameter needs to be investigated specially. Let us give examples of some important general problems in control theory that can be reduced to boundary-value problems for systems of ordinary differential equations.

1. Determination of non-autonomous periodic oscillations. Let us have a system of equations of the form (2.20),
$$\frac{dY}{dt} = F(Y,t,\alpha), \tag{2.439}$$
and let F(Y, t, α) = F(Y, t + T, α). The problem of determining periodic solutions with period T reduces to determining solutions Y = Yp (t) for which the following boundary conditions hold:
$$Y_p(0) = Y_p(T). \tag{2.440}$$

If in some interval of parameter values α1 ≤ α ≤ α2 there is a desired periodic solution satisfying the initial conditions Yp(0) = Yp0(α), the vector Yp0(α) is nevertheless not always differentiable with respect to α [63].

2. Determination of periodic oscillations of autonomous systems. Let us have an autonomous system

    dY/dt = F(Y, α),    (2.441)

and it is required to find its periodic modes Yp(t) = Yp(t + T) (self-oscillations). In this case, the boundary conditions (2.440) hold as well. Nevertheless, as distinct from the previous case, the period T of the desired periodic solutions is an unknown value depending on the parameter, i.e., T = T(α). Then, as is known [63, 103], one of the components of the initial conditions vector can be assumed known. Therefore, based on Equation (2.441) and the boundary conditions (2.440) it is required to find n values, viz. n − 1 components of the vector Yp0 and the period T. Let us, for example, have the following second-order equation:

    ÿ + f(y, ẏ, α) = 0,    (2.442)

which is equivalent to the system of equations

    dy1/dt = y2,
    dy2/dt = −f(y1, y2, α).    (2.443)

Let y1p(t) = y1p(t + T) and y2p(t) = y2p(t + T) be periodic solutions of Equations (2.443). Since the function yp(t) = y1p(t) has equal values for t = 0 and t = T, there is a moment t̃ (0 ≤ t̃ < T) when y2p(t̃) = ẏp(t̃) = 0. Therefore, the desired initial conditions can be sought in the form

    y1(0) = y10,    y2(0) = 0.    (2.444)

Now let y1(t, y10, y20, α) and y2(t, y10, y20, α) be a general solution of the system of equations (2.443) such that

    y1(0, y10, y20, α) = y10,    y2(0, y10, y20, α) = y20.    (2.445)

Then, taking account of (2.444), the periodicity conditions (2.440) of the desired solution can be written in the form

    y1(T, y10, 0, α) − y10 = 0,    y2(T, y10, 0, α) = 0.    (2.446)

In principle, the desired values y10(α) and T(α) can be obtained from (2.446).

3. Boundary-value problems of various types arise in the calculus of variations and in optimal control problems. For example, for the problem of minimizing the simplest functional

    J = ∫₀^{T(α)} Φ(y, ẏ, t, α) dt,    (2.447)

depending on a parameter, the desired extremals are determined by Euler's equation

    d/dt (∂Φ/∂ẏ) − ∂Φ/∂y = 0,    (2.448)

satisfying the boundary conditions

    y(0) = y1(α),    y(T(α)) = y2(α),    (2.449)

where y1(α) and y2(α) are given functions. In problems with free endpoints, the functions y1(α), y2(α), and T(α) can be unknown.

Generalizing the above examples, let us formulate a general scheme of the two-point boundary-value problem. It is assumed that there are a vector equation (2.439) and boundary conditions of the form

    G(Y0, Y1, t0, t1, α) = G(Y(t0), Y(t1), t0, t1, α) = 0,    (2.450)

where G is a vector functional, and t0 = t0(α) and t1 = t1(α) are functions of α that are, in the general case, unknown and determined during the solution of the boundary-value problem. Moreover, we will assume that the total number of scalar equations defined by the boundary conditions (2.450) is equal to the number of unknowns of the boundary-value problem. If the boundary conditions relate the values of the desired solution at more than two points, such a boundary-value problem is called multipoint. For instance, the boundary conditions for a three-point problem have, similarly to (2.450), the form

    G(Y(t0), Y(t1), Y(t2), t0, t1, t2, α) = 0.

    (2.451)

2.6.2 Sensitivity Investigation for Boundary-Value Problems

If Equations (2.439) and boundary conditions (2.450) define a single-parameter family of solutions Yk(t) = Yk(t, α) in an interval of parameter values α1 ≤ α ≤ α2, the problem of calculating the following sensitivity function arises naturally:

    U = Uk = ∂Yk(t, α)/∂α.    (2.452)

In the general case, the question of the existence of a family of solutions Yk(t, α) and, all the more so, of the existence of the sensitivity functions (2.452) is very difficult. On the other hand, as follows from Theorem 2.1, a continuous sensitivity function (2.452) does exist if the vector F(Y, t, α) is continuous with respect to all its arguments and continuously differentiable with respect to Y and α, and if there exist continuous derivatives dYk(t0)/dα and dt0/dα. Hereinafter we assume that the vector F(Y, t, α) satisfies these requirements. Then, the existence conditions for the sensitivity function Uk reduce to conditions of existence and differentiability of the vector of initial conditions Yk0(α). In the special case when the starting point is fixed, the sensitivity function Uk exists and is differentiable if the vector of initial conditions Yk0(α) is continuously differentiable.

Now we show that a boundary-value problem can, in principle, be reduced to the problem of solving a nonlinear system of equations determining the initial conditions vector. For concreteness, at first we assume that there is a boundary-value problem (2.439), (2.450), where the values t0 and t1 are fixed. Let

    Y = Y(t, Y0, t0, α)    (2.453)

be a general solution of Equation (2.439) such that

    Y(t0, Y0, t0, α) = Y0.    (2.454)

Due to the above assumptions on the properties of the vector F(Y, t, α), the right side of (2.453) is continuously differentiable with respect to all arguments. As follows from (2.453),

    Y(t1) = Y(t1, Y0, t0, α).    (2.455)

Substituting (2.455) into the boundary conditions (2.450), we obtain

    G(Y0, Y(t1, Y0, t0, α), t0, t1, α) = 0.    (2.456)

Thus, if the vector Y0(α) defines a solution of the boundary-value problem (2.439), (2.450), it is a solution of the vector equation (2.456). Conversely, if a vector Y0(α) is a solution of Equation (2.456), the family

    Y(t, α) = Y(t, Y0(α), t0, α)    (2.457)

gives a single-parameter family of solutions of the boundary-value problem. Hence, the boundary-value problem at hand is equivalent to the problem of solving the vector equation (2.456). If the moments t0(α) and t1(α) are unknown, Equation (2.456) still holds, but it must have a special structure such that it is possible to find, together with t0(α) and t1(α), all unknown components of the vector Y0(α). Assume that for α = α0, α1 ≤ α0 ≤ α2, Equation (2.456) has a solution Y0(α0), t0(α0), i.e., the initial boundary-value problem has a solution

    Yk(t, α0) = Y(t, Y0(α0), t0(α0), α0).    (2.458)

Then, for the sensitivity function (2.452) to exist it is sufficient that Equation (2.456) be solvable in a locality of the point α = α0 and that the solution Y0(α), t0(α) be differentiable at α = α0. Sufficient conditions for the existence of such a solution and a method for its determination can be obtained using the theory of implicit functions.
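The reduction of the boundary-value problem to the vector equation (2.456) underlies the numerical shooting method: the unknown initial conditions are found by solving (2.456) iteratively. A minimal sketch in Python, under the illustrative assumption of the test problem y″ = −y, y(0) = 0, y(π/2) = 1 (function names are ours, not from the text):

```python
import math

def rk4(f, y, t, h):
    # one classical Runge-Kutta step for the system dy/dt = f(y, t)
    k1 = f(y, t)
    k2 = f([yi + 0.5*h*ki for yi, ki in zip(y, k1)], t + 0.5*h)
    k3 = f([yi + 0.5*h*ki for yi, ki in zip(y, k2)], t + 0.5*h)
    k4 = f([yi + h*ki for yi, ki in zip(y, k3)], t + h)
    return [yi + h*(a + 2*b + 2*c + d)/6
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate(f, y0, t0, t1, n=400):
    # the general solution Y(t1, Y0, t0) obtained numerically, cf. (2.453)
    y, t, h = list(y0), t0, (t1 - t0)/n
    for _ in range(n):
        y = rk4(f, y, t, h)
        t += h
    return y

def shoot(f, bc_residual, s0, ds=1e-6, iters=20):
    # Newton iteration on the scalar equation G(s) = 0, cf. (2.456);
    # s is the unknown initial slope, G the boundary-condition residual
    s = s0
    for _ in range(iters):
        g = bc_residual(f, s)
        dg = (bc_residual(f, s + ds) - g)/ds   # finite-difference Jacobian
        s -= g/dg
    return s

# example: y'' = -y with y(0) = 0, y(pi/2) = 1; the unknown is y'(0)
f = lambda y, t: [y[1], -y[0]]
residual = lambda f, s: integrate(f, [0.0, s], 0.0, math.pi/2)[0] - 1.0
slope = shoot(f, residual, s0=0.5)
```

Since the exact solution is y = sin t, the iteration should return a slope close to 1.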

2.6.3 Implicit Function Theorems

In this paragraph we consider the simplest properties of implicit functions needed later [18, 114].

THEOREM 2.13
Let us have a system of equations

    fi(α, y1, . . . , yn) = 0,    i = 1, . . . , n,    (2.459)

where the fi are continuous real functions of real arguments, and let the left sides of the equations vanish at the point K(α0, y10, . . . , yn0). Then, if in a locality R of the point K the functions fi have partial derivatives with respect to y1, . . . , yn that are continuous at the point K, and the functional determinant

    J = det [ ∂fi/∂yk ],    i, k = 1, . . . , n,    (2.460)

is not zero at the point K, then the system of equations (2.459) has in a locality of the point K a unique continuous solution

    yi = φi(α),    i = 1, . . . , n,    (2.461)

satisfying the condition

    φi(α0) = yi0,    i = 1, . . . , n.    (2.462)

THEOREM 2.14
If, in addition to the conditions of Theorem 2.13, the left sides of Equations (2.459) are continuously differentiable with respect to y1, . . . , yn, α, then the functions φi(α) are continuously differentiable in a locality of the point α = α0.

Now we present a practical method for calculating the derivatives dφi(α)/dα, provided that they do exist. With this aim in view, we differentiate Equations (2.459) with respect to α at α = α0:

    Σ_{k=1}^{n} (∂fi/∂yk)(dφk/dα) = −∂fi/∂α,    i = 1, . . . , n.    (2.463)

Solving Equations (2.463) with respect to dφk/dα, we obtain

    dφi/dα |α=α0 = −Ji/J,    (2.464)

where J is the Jacobian (2.460), and the determinants Ji are obtained from J by replacing the i-th column with the column of derivatives ∂f/∂α:

    Ji = det [ ∂f/∂y1  · · ·  ∂f/∂y_{i−1}  ∂f/∂α  ∂f/∂y_{i+1}  · · ·  ∂f/∂yn ].    (2.465)

Let us note also that the fact that the Jacobian (2.460) is zero does not mean that the solution (2.461) does not exist or does not possess the required properties. A detailed investigation of the special case J = 0 is given in [18].
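In computations it is usually more convenient to obtain the derivatives dφi/dα directly from the linear system (2.463) than through the determinant ratios (2.464), (2.465). A minimal numerical sketch, where the two-equation system is an illustrative assumption of ours:

```python
import numpy as np

# illustrative system (2.459): f1 = y1^2 + y2 - alpha, f2 = y1 - y2;
# at alpha0 = 2 it has the solution y1 = y2 = 1
def f(alpha, y):
    return np.array([y[0]**2 + y[1] - alpha, y[0] - y[1]])

alpha0, y0 = 2.0, np.array([1.0, 1.0])

# Jacobian J = [df_i/dy_k] at the point K, cf. (2.460)
J = np.array([[2*y0[0],  1.0],
              [1.0,     -1.0]])
df_dalpha = np.array([-1.0, 0.0])      # column [df_i/dalpha]

assert abs(np.linalg.det(J)) > 1e-12   # hypothesis of Theorem 2.13

# solve (2.463): J * (dphi/dalpha) = -df/dalpha
dphi = np.linalg.solve(J, -df_dalpha)
```

Eliminating y2 gives y1² + y1 = α, so dy1/dα = 1/(2y1 + 1) = 1/3 at the point considered, and both components of `dphi` should equal 1/3.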


2.6.4 Sensitivity Functions of Solutions of Boundary-Value Problems

Let us apply the implicit function theorems to investigating Equation (2.456) and assume, for concreteness, that the values t0 and t1 are given and it is required to find all components of the vector Y0. This calculation technique can be used without modifications in the general case as well. Assume that the vector G = (g1, . . . , gn) appearing in the boundary conditions (2.450) is continuously differentiable with respect to all its arguments. To calculate the Jacobi matrix

    I = [ ∂gi/∂yk0 ],    i, k = 1, . . . , n,    (2.466)

where g1, . . . , gn and y10, . . . , yn0 are the components of the vectors G and Y0, respectively, we employ the rules of vector differentiation. Differentiating the left side of (2.456) with respect to the vector Y0, we have

    I = ∂G(Y0, Y(t1, Y0, t0, α), α)/∂Y0 + [∂G(Y0, Y(t1, Y0, t0, α), α)/∂Y1] · ∂Y(t1, Y0, t0, α)/∂Y0.    (2.467)

But, as follows from (2.64) and (2.65), the matrix

    H(t, t0) = ∂Y(t, Y0, t0, α)/∂Y0    (2.468)

is a solution of the equation in variations

    dH(t, t0)/dt = A(t)H(t, t0)    (2.469)

satisfying the initial condition H(t0, t0) = E, i.e., the matrix H(t, t0) is the Cauchy matrix for the equation in variations. From (2.468) we have

    ∂Y(t1, Y0, t0, α)/∂Y0 = H(t1, t0)    (2.470)

and, correspondingly, the Jacobian of the problem under consideration has the form

    J(α) = det I = det [ ∂G/∂Y0 + (∂G/∂Y1)H(t1, t0) ].    (2.471)

Then, using the implicit function theorems, we can formulate the following proposition.

THEOREM 2.15
Assume that the right side of Equation (2.439) and the vector G appearing in (2.450) are continuously differentiable with respect to Y0, Y1, α. Assume also that for α = α0 the associated boundary-value problem has a solution Y = Yk(t, α0). Then, if the following condition holds:

    J(α0) ≠ 0,    (2.472)

then in a locality of α0 there is a single-parameter family Y(t, α) of solutions of the boundary-value problem, and the initial vector Yk(t0, α) is continuously differentiable with respect to α.

Assuming that the conditions of Theorem 2.15 hold, let us calculate the derivative dYk(t0, α)/dα. With this aim in view, we consider, according to Section 2.6.2, the equation

    G(Y0(α), Y(t1, Y0, t0, α), α) = 0.    (2.473)

The complete derivative with respect to α is given by

    (∂G/∂Y0)(dY0/dα) + (∂G/∂Y1)[(∂Y/∂Y0)(dY0/dα) + ∂Y/∂α] + ∂G/∂α = 0,    (2.474)

or

    J(α)(dY0/dα) = −(∂G/∂Y1)(∂Y/∂α) − ∂G/∂α.    (2.475)

Hence,

    dYk(t0, α)/dα = −J⁻¹[(∂G/∂Y1)(∂Y/∂α) + ∂G/∂α] |α=α0.    (2.476)

Let us determine the vector

    Ū = ∂Y(t1, Y0(α), t0, α)/∂α |α=α0    (2.477)

appearing in (2.476). Since for the differentiation in (2.477) the values t0, t1, and Y0 are assumed to be fixed, the function Ū is a solution of the sensitivity equation with zero initial conditions. If the sensitivity equation has the form

    dU/dt = A(t)U + B(t),    (2.478)

where the matrix A(t) and the vector B(t) are determined for α = α0, Y = Yk(t, α0), its solution for zero initial conditions has the form

    U(t, α) = ∫_{t0}^{t} H(t, τ)B(τ)dτ,    U(t0, α) = 0.    (2.479)

Therefore,

    Ū = ∫_{t0}^{t1} H(t1, τ)B(τ)dτ.    (2.480)

Substituting (2.480) into (2.476), we finally obtain

    dYk(t0, α)/dα = −J⁻¹ [ (∂G/∂Y1) ∫_{t0}^{t1} H(t1, τ)B(τ)dτ + ∂G/∂α ] |α=α0.    (2.481)

2.6.5 Sensitivity of Non-Autonomous Oscillatory System

In this section we apply the above results to the sensitivity investigation of non-autonomous periodic oscillations defined by the periodic equation (2.439) and boundary conditions (2.440), assuming at first that the period T is independent of the parameter. Taking t0 = 0, we can write the boundary conditions in the form (2.456) as

    Y0 − Y(T, Y0, 0, α) = 0.    (2.482)

To calculate the Jacobi matrix we employ (2.467) and (2.470). In the given case,

    ∂G/∂Y0 = E,    ∂G/∂Y1 = −E.    (2.483)

Moreover,

    H(t1, t0) = H(T, 0) = H(T) = M,    (2.484)

where M is the monodromy matrix (2.285). Therefore, the Jacobi matrix is

    I = E − M.    (2.485)

Condition (2.472) takes the form

    det(E − M(α0)) ≠ 0.    (2.486)

Thus, we have proved the following theorem.

THEOREM 2.16
If for α = α0 the periodic system (2.439) has a solution with period T, the right side of (2.439) is continuously differentiable with respect to Y, α, and the condition (2.486) holds, then there is a single-parameter family of periodic solutions Y = Yp(t, α), and the vector of initial conditions Y0 = Yp(0, α) is continuously differentiable with respect to α for α = α0.

Let us note that, according to Section 2.4.2, the condition (2.486) means that the equation in variations (2.281) should have no solutions with period T.

Next, using (2.481), we obtain an explicit expression for the derivative dY0/dα. Referring to (2.483)–(2.485) and using the fact that ∂G/∂α = 0, we have

    dY0/dα = (E − M)⁻¹ ∫₀^T H(T)H⁻¹(τ)B(τ)dτ |α=α0 = [(E − M)⁻¹ − E] ∫₀^T H⁻¹(τ)B(τ)dτ |α=α0.    (2.487)

By writing the solution of the sensitivity equation (2.478) with the initial conditions (2.487), we obtain the sensitivity function of the forced oscillation:

    Up(t) = H(t)(dY0/dα) + ∫₀^t H(t, τ)B(τ)dτ.    (2.488)
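Formula (2.487) is easy to check numerically on a scalar example. The forced system dy/dt = −αy + cos t below, for which H(t) = e^{−αt}, M = e^{−αT} and B(t) = ∂F/∂α = −yp(t), is an illustrative assumption of ours, not an example from the text:

```python
import numpy as np

# illustrative scalar periodic system dy/dt = -alpha*y + cos t, T = 2*pi;
# its T-periodic solution is yp(t) = (alpha*cos t + sin t)/(1 + alpha^2)
alpha, T, n = 0.5, 2*np.pi, 20000
tau = np.linspace(0.0, T, n + 1)
yp = (alpha*np.cos(tau) + np.sin(tau))/(1 + alpha**2)

# for this system H(t) = exp(-alpha*t), so M = H(T); B(t) = -yp(t)
M = np.exp(-alpha*T)
g = np.exp(-alpha*(T - tau))*(-yp)     # integrand H(T)H^-1(tau)B(tau)

# trapezoidal quadrature of the integral in (2.487)
integral = (g[:-1] + g[1:]).sum()/2*(tau[1] - tau[0])
dy0_dalpha = integral/(1 - M)          # (E - M)^-1 is a scalar division here

# exact reference: y0(alpha) = yp(0) = alpha/(1 + alpha^2)
exact = (1 - alpha**2)/(1 + alpha**2)**2
```

Differentiating y0(α) = α/(1 + α²) directly gives (1 − α²)/(1 + α²)², which the quadrature of (2.487) should reproduce.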

For the function (2.488) it can easily be verified that Up(0) = Up(T).

Now let the period T be a known function of the parameter, i.e., T = T(α). For example, if α = ω is the frequency of the periodic excitation, then T = 2π/ω. In this case, instead of (2.482), we have

    Y0 − Y(T(α), Y0, 0, α) = 0.    (2.489)

Since the function T = T(α) is assumed to be known, the Jacobi matrix of Equation (2.489) coincides with (2.485), and the condition (2.486) remains valid. Hence, the assumptions made in Section 2.5.3 ensure the existence of the sensitivity functions of the additional mode, and this serves as a justification for the method presented in Section 2.5. If (2.486) holds, differentiating (2.489) with respect to α we have

    dY0/dα − (∂Y/∂Y0)(dY0/dα) − ∂Y/∂α − Ẏ(T(α))(dT/dα) = 0.    (2.490)

If for α = α0 the equation has a solution Y0(α0) = Yp0, so that Ẏp(0) = F(Yp0, 0, α0), then from (2.490) we obtain, similarly to (2.487),

    dYp0/dα = (E − M(α))⁻¹ [ M(α) ∫₀^T H⁻¹(τ)B(τ)dτ + Ẏp0 (dT/dα) ] |α=α0.    (2.491)

2.6.6 Sensitivity of Self-Oscillatory System

As was noted above, the problem of determining self-oscillatory modes is an example of a boundary-value problem in which one of the endpoints, t1 = T(α), is unknown. In this case, the boundary conditions (2.440) hold as well, but, as was said before, one of the components of the vector Y0, for instance y0n, can be assumed known, while the value T is unknown. For example, let

    y0i = ȳi,    i = 1, . . . , n − 1,    y0n = a = const,    T = ȳn.    (2.492)

Let Equations (2.489) for α = α0 have a solution

    ȳi = ȳi0,    i = 1, . . . , n − 1,    ȳn0 = T0 = T(α0),    (2.493)

and, correspondingly, let Equation (2.441) have a periodic solution Yp(t) = Yp(t + T0). Let us calculate the Jacobian (2.471) of Equations (2.489) at the point (2.493). Rewrite Equations (2.489) in the scalar form

    gi = ȳi − yi(ȳn, ȳ1, . . . , ȳn−1, a, 0, α) = 0,    i = 1, . . . , n − 1,
    gn = a − yn(ȳn, ȳ1, . . . , ȳn−1, a, 0, α) = 0.    (2.494)

Hence, taking account of (2.493) and (2.484), we have

    ∂gi/∂ȳk |α=α0 = δik − mik(T0),    i, k = 1, . . . , n − 1,    (2.495)

where δik is the Kronecker delta and mik(T0) are the components of the monodromy matrix M(T0) = M(α0). Moreover,

    ∂gi/∂ȳn |α=α0 = −ẏpi(T0) = −ẏpi(0),    i = 1, . . . , n.    (2.496)

From the last equation in (2.494) we have

    ∂gn/∂ȳi = −mni(T0),    i = 1, . . . , n − 1.    (2.497)

Thus, the Jacobi matrix has the form

    Ia(α0) = [ 1 − m11(T0)    · · ·   −m1,n−1(T0)       −ẏp1(0)
               · · ·                  · · ·              · · ·
               −mn−1,1(T0)    · · ·   1 − mn−1,n−1(T0)  −ẏp,n−1(0)
               −mn,1(T0)      · · ·   −mn,n−1(T0)       −ẏpn(0) ].    (2.498)

As a result, we can formulate the following proposition.

THEOREM 2.17
If for α = α0 Equations (2.494) have a solution (2.493) and the determinant of the Jacobi matrix (2.498) is not zero, i.e.,

    det Ia(α0) ≠ 0,    (2.499)

then there exists a single-parameter family of solutions Y = Yp(t, α), and, moreover, the initial conditions Yp(0) = Yp(0, α) and the period T = T(α) are continuously differentiable with respect to α.


Let us obtain explicit formulas for calculating the corresponding derivatives, provided that all the conditions of Theorem 2.17 hold. Differentiating (2.494) with respect to α, we have

    dȳi/dα = Σ_{s=1}^{n−1} mis(T0)(dȳs/dα) + ẏpi(0)(dȳn/dα) + ∂yi/∂α |α=α0,    i = 1, . . . , n − 1,
    0 = Σ_{s=1}^{n−1} mns(T0)(dȳs/dα) + ẏpn(0)(dȳn/dα) + ∂yn/∂α |α=α0.    (2.500)

Introducing the vector of unknowns

    X̄0 = (dȳ1/dα, . . . , dȳn/dα),

we can write Equations (2.500) in the vector form

    Ia(α0)X̄0 = Ū0,    (2.501)

where Ia(α0) is the Jacobi matrix (2.498), and the vector Ū0 is given, taking account of (2.480), by

    Ū0 = ∫₀^T H(T)H⁻¹(τ)B(τ)dτ = M(α0) ∫₀^T H⁻¹(τ)B(τ)dτ.    (2.502)

As follows from (2.501),

    X̄0 = Ia⁻¹(α0)Ū0.    (2.503)

Let us note that y0n = a = const due to the statement of the problem. Then, the initial conditions for the sensitivity function of the self-oscillatory mode have the form

    ui0 = dȳi/dα,    i = 1, . . . , n − 1,    un0 = 0,    (2.504)

where dȳi/dα, i = 1, . . . , n − 1, are the first n − 1 components of the vector (2.503). The last component of this vector gives the value dT/dα |α=α0.

It should be taken into account that the condition (2.486) is never satisfied for a self-oscillatory mode, because the corresponding equation in variations (2.281) has a periodic solution and, therefore, the characteristic equation (2.292) has a root ρ = 1. Nevertheless, as follows from the results of [49], if all the remaining roots of the characteristic equation (2.292) differ from 1, the condition (2.499) holds and, hence, the conditions of Theorem 2.17 hold. Therefore, the assumptions made in Section 2.4.3 ensure the existence of the corresponding sensitivity functions.
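The fact that the equation in variations of a self-oscillatory mode always has the multiplier ρ = 1 (so that (2.486) fails) is easy to confirm numerically. The planar system below, with the circular limit cycle x = cos t, y = sin t of period T = 2π, is an illustrative assumption of ours:

```python
import numpy as np

# system: dx/dt = x(1 - x^2 - y^2) - y, dy/dt = y(1 - x^2 - y^2) + x;
# it has the periodic solution x = cos t, y = sin t with T = 2*pi
def A(t):
    # Jacobian of the right side evaluated on the periodic solution
    c, s = np.cos(t), np.sin(t)
    return np.array([[-2*c*c,     -2*c*s - 1],
                     [-2*c*s + 1, -2*s*s]])

# integrate the equation in variations dH/dt = A(t)H, H(0) = E, cf. (2.469)
T, n = 2*np.pi, 4000
h = T/n
H, t = np.eye(2), 0.0
for _ in range(n):                  # classical RK4 for the matrix ODE
    k1 = A(t) @ H
    k2 = A(t + h/2) @ (H + h/2*k1)
    k3 = A(t + h/2) @ (H + h/2*k2)
    k4 = A(t + h) @ (H + h*k3)
    H = H + h*(k1 + 2*k2 + 2*k3 + k4)/6
    t += h

M = H                               # monodromy matrix M = H(T)
rho = np.linalg.eigvals(M)          # Floquet multipliers
```

One multiplier should be (numerically) equal to 1, so det(E − M) is practically zero, while the other multiplier equals exp(∫₀^T tr A dt) = e^{−4π}.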

2.6.7 Boundary Conditions for Sensitivity Functions

Let us demonstrate that the problem of determining the sensitivity function for the boundary-value problem (2.439), (2.450) can be reduced to a boundary-value problem for the sensitivity equation

    dU/dt = A(t)U + B(t).    (2.505)

Indeed, using the general solution (2.453), we can represent the boundary conditions (2.450) in the form

    G(Y(t0), Y(t1, Y0, t0, α), t1, t0, α) = 0.    (2.506)

Assuming, for simplicity, that t0 = const and t1 = const, and differentiating (2.506) with respect to α (this is possible if the condition (2.472) holds), we obtain

    (∂G/∂Y0)(dY0/dα) + (∂G/∂Y1)(dY1/dα) + ∂G/∂α = 0,    α = α0.    (2.507)

But

    dY0/dα = U(t0) = U0,    dY1/dα = dY(t1, α)/dα = U(t1) = U1.    (2.508)

From (2.507) and (2.508) we find

    (∂G/∂Y0)|α=α0 U0 + (∂G/∂Y1)|α=α0 U1 + (∂G/∂α)|α=α0 = 0.    (2.509)


From (2.505) we have

    U1 = H(t1, t0)U0 + ∫_{t0}^{t1} H(t1, τ)B(τ)dτ.    (2.510)

Substituting (2.510) into (2.509) yields

    IU0 = K,    (2.511)

where the matrix I is defined by (2.467) and the vector K is given by

    K = −∂G/∂α − (∂G/∂Y1) ∫_{t0}^{t1} H(t1, τ)B(τ)dτ.    (2.512)

Due to (2.472), the matrix I is not singular, and, therefore, Equation (2.511) yields

    U0 = I⁻¹K.    (2.513)

Thus, a general expression for the sensitivity function of the boundary-value problem has the form

    U(t) = H(t, t0)I⁻¹K + ∫_{t0}^{t} H(t, τ)B(τ)dτ,    t0 ≤ t ≤ t1.    (2.514)

If the sensitivity function (2.514) has been obtained in some way, the solution of the initial boundary-value problem for sufficiently small |µ| has the form

    Yk(t, α0 + µ) ≈ Yk(t, α0) + U(t)µ.    (2.515)
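The key point of this subsection — that the first-order sensitivity function itself satisfies the linear boundary-value problem (2.505), (2.509) — can be exploited directly. For the illustrative problem y″ = α with y(0) = y(1) = 0 (an assumption of ours, not an example from the text), the solution is Yk = α t(t − 1)/2, so U = ∂Yk/∂α = t(t − 1)/2 must solve U″ = 1, U(0) = U(1) = 0; a finite-difference check:

```python
import numpy as np

# sensitivity BVP: U'' = 1 (here A = 0 and B = dF/dalpha = 1), U(0) = U(1) = 0
n = 200
t = np.linspace(0.0, 1.0, n + 1)
h = t[1] - t[0]

# standard second-difference matrix for U'' on the interior nodes
main = -2.0*np.ones(n - 1)
off = np.ones(n - 2)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))/h**2

U = np.zeros(n + 1)                   # boundary values U(0) = U(1) = 0
U[1:-1] = np.linalg.solve(D2, np.ones(n - 1))   # right side B(t) = 1

exact = t*(t - 1)/2                   # the analytic sensitivity function
```

Since the exact sensitivity function is a quadratic, the second-difference scheme reproduces it up to rounding error.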


Chapter 3

Sensitivity of Finite-Dimensional Discontinuous Systems

3.1 Sensitivity Equations for Finite-Dimensional Discontinuous Systems

3.1.1 Time-Domain Description

We will consider systems described by different vector differential equations on different time intervals:

    dY/dt = Fi(Y, t),    ti−1 < t < ti,    (3.1)

where Y is a vector of unknowns and Fi is a vector of nonlinear functions. Hereinafter, all vectors Fi are assumed to be continuously differentiable with respect to all arguments. The moments ti, when one equation (3.1) is changed for another, will be called the switching moments. It is also assumed that the vector Y may be discontinuous at the switching moment ti, so that

    Yi+ = Φi(Yi−, ti),    (3.2)

where Yi+ = Y(ti + 0) and Yi− = Y(ti − 0) are the values of the vector Y immediately after and immediately before the break, respectively, and Φi is a vector of nonlinear functions that are continuously differentiable with respect to all arguments. Relation (3.2) will be called the break condition. In the simplest case, when the solution remains continuous at the switching moments, the break condition transforms into the following continuity condition:

    Yi+ = Yi− = Yi.    (3.3)

The switching moments ti can, in the general case, be found from some conditions of the form

    fi(Yi−, ti) = 0,    (3.4)

which will be called switching conditions. The scalar functions fi in (3.4) are assumed to be continuously differentiable with respect to all arguments.

Having a complete system of equations (3.1)–(3.4) that describes a discontinuous system in the time domain, it is possible to construct solutions associated with various initial data. Let us have an initial condition

    Y0 = Y0+ = Y(t0 + 0).    (3.5)

Then, integrating Equations (3.1) successively, taking into account the break and switching conditions, we can, at least in principle, obtain a discontinuous solution

    Y = Y(t, Y0+),    Y(t0 + 0, Y0+) = Y0+.    (3.6)

Assume that this process of continuing the solution is possible up to a moment te > t0. Let also i, j, k, . . . be the sequence of numbers of the equations (3.1) that were integrated while continuing the solution. This sequence will be called the type of motion on the interval (t0, te). So a type of motion can be symbolically written in the form

    {i, j, k, . . . , t0, te}.    (3.7)

Thus, the mathematical description of a discontinuous system in the time domain used in this chapter includes the motion equations (3.1) on the intervals between the switching moments, the switching conditions (3.4), and the break conditions (3.2). We note that the mathematical description of real discontinuous systems includes, in addition to the above equations, some additional relations defining the order of changing one equation (3.1) for another. These conditions are obtained by preliminary analysis of a mathematical model of the discontinuous system under consideration. Therefore, the description of a discontinuous system given above as an initial one is, in fact, the result of fairly complex preliminary analysis.

3.1.2 Time-Domain Description of Relay Systems

As an illustration, we consider the simplest relay system [13, 113] described by the equations

    dY/dt = AY + Hf(σ),    σ = J^T Y,    (3.8)

where A is a constant matrix, H and J are constant column vectors, the superscript T denotes transposition, and f(σ) is the nonlinear function shown in Figure 3.1, having the following analytical representation:

    f(σ) = kp sign σ = { kp, if σ > 0; −kp, if σ < 0.    (3.9)

Figure 3.1. Relay characteristic.

Hereinafter we consider continuous solutions of Equations (3.8) and (3.9), for which the conditions (3.3) hold at any switching moment. The equation

    g(Y) = J^T Y = σ = 0    (3.10)

defines a plane containing the origin in the phase space of the variables Y. The plane (3.10) divides the phase space into two areas: N+, where σ > 0, and N−, where σ < 0. As follows from (3.8) and (3.9), in the areas N+ and N− we have, respectively, the following systems of equations:

    dY/dt = AY + Hkp,    Y ∈ N+,    (3.11)

    dY/dt = AY − Hkp,    Y ∈ N−.    (3.12)

Equations (3.11) and (3.12) describe motion along the right and left branches of the nonlinear characteristic (3.9), respectively. Hereinafter, the motion of the system described by Equations (3.11) and (3.12) will be called normal. Nevertheless, for a complete description of all possible solutions of the relay system (3.8) in the time domain it is not sufficient to fix Equations (3.11) and (3.12) and the switching condition (3.10). Let Y0 = Y(t0), and let the initial value of the variable σ be such that

    σ0 = J^T Y0 < 0.    (3.13)

Then, integrating (3.12), we obtain [13, 86]

    Y(t) = e^{A(t−t0)} Y0 − A⁻¹(e^{A(t−t0)} − E)Hkp,    (3.14)

where e^{At} is the matrix exponential, and

    σ(t) = J^T e^{A(t−t0)} Y0 − J^T A⁻¹(e^{A(t−t0)} − E)Hkp.    (3.15)

The motion of the system (3.8) is described by relations (3.14) and (3.15) until the corresponding trajectory intersects the switching surface (3.10). The switching moment t1 can be found from the equation

    σ(t1) = J^T e^{A(t1−t0)} Y0 − J^T A⁻¹(e^{A(t1−t0)} − E)Hkp = 0.    (3.16)

Assume that

    σ̇(t1 − 0) > 0.    (3.17)

From (3.12) it follows that

    σ̇(t1) = J^T A Y(t1) − J^T Hkp,    (3.18)

and Relation (3.17) can be written in the form

    J^T A Y(t1) − ρkp > 0,    (3.19)

where the constant ρ is given by

    ρ = J^T H.    (3.20)

Having found the moment t1 from Equation (3.16), we obtain

    Y(t1) = e^{A(t1−t0)} Y0 − A⁻¹(e^{A(t1−t0)} − E)Hkp.    (3.21)

Taking the value (3.21) as the initial one, we can continue the solution over the interval t1 < t < t2. Nevertheless, there are several possibilities that must be analyzed separately.

1. Assume that for t > t1 the motion continues in the area σ > 0. Then, solving Equation (3.11) with the initial condition (3.21), we obtain

    Y(t) = e^{A(t−t1)} Y(t1) + A⁻¹(e^{A(t−t1)} − E)Hkp,    t > t1,    (3.22)

and this holds up to the next switching moment. Nevertheless, such an assumption (and, therefore, Formula (3.22)) is not always correct. Indeed, in deriving (3.22) we assumed that σ(t) > 0 immediately after the switching moment t1, where Relation (3.16) holds. For this to be true it is necessary that, in addition to (3.17), the following inequality hold:

    σ̇(t1 + 0) > 0,    (3.23)

which can, with reference to (3.11) and (3.21), be written in the form

    J^T A Y(t1) + ρkp > 0.    (3.24)

If ρ ≥ 0, Inequality (3.24) follows from (3.19), and Formula (3.22) holds. If ρ < 0, then for (3.24) to hold it is necessary that

    |J^T A Y(t1)| > |ρkp|.    (3.25)

2. Assume that ρ < 0 and the condition (3.25) is not satisfied. Then,

    σ̇(t1 + 0) < 0,    (3.26)

and motion in the area N+ (along the right branch of the nonlinear characteristic) is impossible. Therefore, we cannot use (3.22) if (3.26) holds. In the case of (3.26) it is assumed [13, 109, 120] that after the switching moment t1 the system performs sliding motion along the switching plane (3.10). In this case, the system motion is described by the equation

    dY/dt = Ac Y,    (3.27)

where

    Ac = A − (1/ρ) H J^T A.    (3.28)

Taking the vector Y(t1) from (3.21) as the initial condition for the sliding motion, we find the solution of the sliding mode equation:

    Y(t) = e^{Ac(t−t1)} Y(t1),    (3.29)

which lies in the switching plane. The sliding mode exists up to a moment t2 when one of the following equalities holds:

    J^T A Y(t2) = ±ρkp,    (3.30)

after which the motion will continue in the area N+ if

    J^T A Y(t2) = −ρkp,    (3.31)

and in N− in the opposite case, and we can use Equations (3.11) and (3.12) to continue the solution. As was shown by a more detailed investigation [13], the transition from the sliding mode to the area N+ or N− takes place if

    J^T A² Y(t2) + J^T A H > 0,    (3.32)

or

    J^T A² Y(t2) − J^T A H < 0,    (3.33)

respectively. If both conditions (3.32) and (3.33) are violated, the system continues its sliding motion. Repeating the above reasoning, we can, generally speaking, continue the construction of the solution over any finite time interval. In the general case, the solution will consist of segments of normal and sliding motion.

Thus, for the time-domain description of the relay system (3.8) it is necessary to have three systems of differential equations, (3.11), (3.12), and (3.27); the switching condition (3.10); the relations (3.30), which can be considered as conditions of switching from the sliding mode to a normal one; the conditions (3.32) and (3.33); and, perhaps, some other additional conditions [13]. In principle, the obtained system of equations makes it possible to continue any solution given at t = t0 over the infinite time interval t > t0.

Consider the question of determining various types of motion for the system (3.8). Assign the numbers 1, 2, and 3 to Equations (3.11), (3.12), and (3.27), respectively. Then the notation of a type of motion

    {2, 3, t0, ∞}


means that from the moment t0 to a moment t1 the motion is described by Equation (3.12), after which the system operates in a sliding mode described by (3.27) and remains in this mode for all t > t1. The motion type {2, 1, 3, t0, ∞} means that the system enters a sliding mode after two intervals of normal motion. As was shown by Yu. Neimark, the relay system (3.8) can have motions of arbitrarily complex types.
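In practice, the switching moments of Equations (3.11)–(3.12) are located by event detection during integration. A minimal sketch for the illustrative relay system ÿ = −kp sign y (a double integrator with relay feedback and σ = y; an assumption of ours, not an example from the text). Starting at y = 1, ẏ = 0 in the area σ > 0 gives y(t) = 1 − t²/2, so the first switching moment is t1 = √2:

```python
import math

def f(state, s):
    # double integrator with relay feedback: y'' = -s, s = kp*sign(sigma)
    y, yd = state
    return (yd, -s)

def rk4_step(state, s, h):
    def add(a, b, c): return (a[0] + c*b[0], a[1] + c*b[1])
    k1 = f(state, s)
    k2 = f(add(state, k1, h/2), s)
    k3 = f(add(state, k2, h/2), s)
    k4 = f(add(state, k3, h), s)
    return (state[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            state[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

def first_switch(state, kp=1.0, h=1e-3, tmax=5.0):
    """Integrate until sigma = y changes sign; return (t1, state at t1)."""
    t = 0.0
    s = kp if state[0] > 0 else -kp      # frozen branch within a region
    while t < tmax:
        new = rk4_step(state, s, h)
        if new[0]*state[0] < 0:          # crossing inside this step:
            lo, hi = 0.0, h              # bisect on the step length
            for _ in range(60):
                mid = 0.5*(lo + hi)
                if rk4_step(state, s, mid)[0]*state[0] < 0:
                    hi = mid
                else:
                    lo = mid
            return t + hi, rk4_step(state, s, hi)
        state, t = new, t + h
    raise RuntimeError("no switching detected")

t1, st = first_switch((1.0, 0.0))
```

The detected moment should agree with t1 = √2 ≈ 1.4142, with velocity ẏ(t1) = −√2 at the crossing.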

3.1.3 Parametric Model and Sensitivity Function of a Discontinuous System

To define the parametric model of a discontinuous system we will assume that the initial equations (3.1), the switching conditions (3.4), and the break conditions (3.2) depend on a scalar parameter α, i.e., the equations of motion have the form

    dY/dt = Fi(Y, t, α),    ti−1 < t < ti.    (3.34)

The switching conditions depend on the parameter:

    fi(Yi−, ti, α) = 0,    (3.35)

and the break conditions can be written in the form

    Yi+ = Φi(Yi−, ti, α).    (3.36)

Assume that the initial conditions are given:

    t0 = t0(α),    Y+(t0) = Y+(t0, α),    (3.37)

and that the system of additional relations determining the system in the time domain is such that for any α from some set it is possible to find a solution on an interval t0 ≤ t < t̄(α). In general, the switching moments ti(α) appearing in (3.34) will be functions of α. As a result, we can construct a set of discontinuous solutions

    Y = Y(t, α),    t0(α) ≤ t < t̄(α).    (3.38)

DEFINITION 3.1 The set of solutions (3.38) forms a single-parameter family on the interval J(α): t0(α) ≤ t < t̄(α) for all α from the interval α1 ≤ α ≤ α2 if all the solutions (3.38) have the same type, i.e., if they are described by the same sequence of Equations (3.34).

DEFINITION 3.2 We will call the derivative

    U(t, α) = ∂Y(t, α)/∂α    (3.39)

the sensitivity function of the single-parameter family of solutions (3.38), where the operator ∂/∂α denotes the so-called ordinary derivative of the discontinuous function, which is equal to the ordinary derivative at the points of differentiability of Y(t, α) and is not defined at the points of discontinuity (breaks).

3.1.4

General Sensitivity Equations for Discontinuous Systems

Let (3.38) be a single-parameter family of solutions of the discontinuous system (3.34)–(3.36). Then, for any α from the interval α1 ≤ α ≤ α2 and initial condition (3.37) there is a moment t1 (α) > t0 (α) such that dY = F1 (Y, t, α), dt

t0 (α) ≤ t < t1 (α).

(3.40)

If the functions t0(α) and Y0(α) are continuously differentiable with respect to α, and the function F1(Y, t, α) is continuous with respect to all arguments and continuously differentiable with respect to Y and α, then the initial conditions (3.37) and Equation (3.40) define a single-parameter solution on the interval t0 ≤ t < t1, for which the existence conditions for the first-order sensitivity function formulated in Section 2.1 hold. Therefore, on the time interval under consideration the derivative (3.39) exists in the classical sense and is given by

dU/dt = A1(t)U + B1(t),  t0(α) ≤ t < t1(α),  (3.41)

where

A1(t) = ∂F1(Y, t, α)/∂Y |Y=Y(t,α),  B1(t) = ∂F1(Y, t, α)/∂α |Y=Y(t,α),  (3.42)

with initial conditions

U0 = dY0/dα − Ẏ0 dt0/dα.  (3.43)
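On a smooth interval with no switchings, Equations (3.40)–(3.43) can be integrated jointly in a few lines. The sketch below is our own illustrative example, not one from the book: for the scalar system dy/dt = −αy with y(0) = 1 (so U0 = 0), the sensitivity equation (3.41) becomes dU/dt = −αU − y, and the exact sensitivity is ∂y/∂α = −t·e^(−αt).

```python
import math

def rk4_step(f, x, t, h):
    # One classical Runge-Kutta step for x' = f(t, x), x a tuple (y, U).
    k1 = f(t, x)
    k2 = f(t + h/2, tuple(xi + h/2*ki for xi, ki in zip(x, k1)))
    k3 = f(t + h/2, tuple(xi + h/2*ki for xi, ki in zip(x, k2)))
    k4 = f(t + h, tuple(xi + h*ki for xi, ki in zip(x, k3)))
    return tuple(xi + h/6*(a + 2*b + 2*c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

def joint_rhs(alpha):
    # dy/dt = F = -alpha*y;  dU/dt = (dF/dy) U + dF/dalpha = -alpha*U - y
    return lambda t, x: (-alpha * x[0], -alpha * x[1] - x[0])

def integrate(alpha, t_end=1.0, n=200):
    x, t, h = (1.0, 0.0), 0.0, t_end / n   # U0 = 0 since Y0, t0 do not depend on alpha
    for _ in range(n):
        x = rk4_step(joint_rhs(alpha), x, t, h)
        t += h
    return x  # (y(t_end), U(t_end))

y, U = integrate(alpha=0.7)
print(y, U)  # compare with exp(-0.7) and -exp(-0.7)
```

The computed U agrees with the analytic derivative −t·e^(−αt) at t = 1, which is the classical-sense existence claimed for (3.41) on intervals between switching moments.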

Below we will show that, under some conditions, on any interval ti−1(α) < t < ti(α) where

dY/dt = Fi(Y, t, α),  (3.44)

the derivative exists in the classical sense and is given by

dU/dt = ∂Fi/∂Y |Y=Y(t,α) U + ∂Fi/∂α |Y=Y(t,α).  (3.45)

In practice, to construct the sensitivity function (3.39) it is necessary to obtain the relations for the initial conditions associated with each equation in (3.45). The final result can be formulated as a theorem.

THEOREM 3.1 Let the functions fj and Φi in Equations (3.34)–(3.36) be continuously differentiable with respect to all arguments, and the functions Fi be continuous with respect to all arguments and continuously differentiable with respect to Y and α. Let also, for a chosen α0 (α1 ≤ α0 ≤ α2), the following conditions hold:

(∂fi⁻/∂Y) Fi⁻ + ∂fi⁻/∂t ≠ 0,  (3.46)

where the superscript − means that the corresponding value is calculated at the moment t = ti − 0. Then, the derivative (3.39) exists for all t ≠ ti(α0) and satisfies the equations

dU/dt = (∂Fi/∂Y) U + ∂Fi/∂α,  ti−1(α0) < t < ti(α0),  (3.47)

with initial conditions (3.43)

U0 = dY0/dα − Ẏ0 dt0/dα.

Moreover, the transition from one equation in (3.47) to another is performed by means of the break conditions

∆Ui = U(ti + 0) − U(ti − 0) = Ui⁺ − Ui⁻ = d∆Yi/dα − ∆Fi dti/dα
  = [−∆Fi + (∂Φi⁻/∂Y − E) Fi⁻ + ∂Φi⁻/∂t] dti/dα + (∂Φi⁻/∂Y − E) Ui⁻ + ∂Φi⁻/∂α,  (3.48)

where

∆Fi = Fi+1(Yi⁺, ti, α) − Fi(Yi⁻, ti, α) = Fi⁺ − Fi⁻,  ∆Yi = Yi⁺ − Yi⁻,  (3.49)

dti/dα = − [(∂fi⁻/∂Y) Ui⁻ + ∂fi⁻/∂α] / [(∂fi⁻/∂Y) Fi⁻ + ∂fi⁻/∂t].  (3.50)
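The break relations (3.48)–(3.50) are purely algebraic once the one-sided quantities at a switching moment are known, so they can be wrapped as small helper functions. A minimal numpy sketch (function names and test data are ours, not from the book; ∂fi⁻/∂Y is passed as a 1-D array acting as a row vector):

```python
import numpy as np

def dti_dalpha(dfdY, dfdt, dfda, F_minus, U_minus):
    # Formula (3.50): dti/da = -((df/dY) U- + df/da) / ((df/dY) F- + df/dt)
    return -(dfdY @ U_minus + dfda) / (dfdY @ F_minus + dfdt)

def delta_U(dPhidY, dPhidt, dPhida, F_minus, F_plus, U_minus, dti_da):
    # Formula (3.48): jump of the sensitivity function at a switching moment
    n = len(F_minus)
    E = np.eye(n)
    dF = F_plus - F_minus
    bracket = -dF + (dPhidY - E) @ F_minus + dPhidt
    return bracket * dti_da + (dPhidY - E) @ U_minus + dPhida
```

A convenient self-check: in the continuous case (∂Φ/∂Y = E, ∂Φ/∂t = ∂Φ/∂α = 0), delta_U reduces to ∆Ui = −∆Fi dti/dα, which is Formula (3.71) below in the text.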

PROOF As was shown above, on the interval t0 < t < t1, where t1(α) is given by the equation

f1(Y⁻(t1), t1, α) = 0,  (3.51)

the sensitivity function U(t, α) exists and is a solution of Equation (3.41) with initial conditions (3.43). Let Y = Y¹(t, t0, Y0, α) be a general solution of Equation (3.40). Due to the given assumptions, the function Y¹(t, t0, Y0, α) is continuously differentiable with respect to all arguments. Then, Equation (3.51), which determines the switching moments, takes the form

f1(Y¹(t1, t0, Y0, α), t1, α) = 0.  (3.52)

This equation determines the switching moment t1 as an implicit function of α. Due to the above assumptions of differentiability, the left side of (3.52) has continuous partial derivatives with respect to t1 and α. Therefore, by virtue of Theorem 2.14, if

∂f1(Y¹(t1, t0, Y0, α), t1, α)/∂t1 |α=α0 ≠ 0,  (3.53)

a unique function t1 = t1(α) exists and is continuously differentiable in a neighborhood of the point α = α0. Differentiating the left side of (3.52) with respect to t1, from (3.53) we obtain

[(∂f1/∂Y)(dY/dt) + ∂f1/∂t] |α=α0, t1=t1−0 ≠ 0,  (3.54)

which coincides with the condition (3.46) for i = 1. Thus, if the condition (3.46) holds for i = 1, the function t1 = t1(α) is continuously differentiable with respect to α. Then, the function

Y1⁻ = Y(t1 − 0) = Y¹(t1(α), t0(α), Y0(α), α)  (3.55)

is also continuously differentiable with respect to α in a neighborhood of the point α0. On the time interval t > t1(α) the solution can be continued by means of the equation

dY/dt = F2(Y, t, α),  t > t1(α),  (3.56)

and initial conditions

t1 = t1(α),  Y⁺(t1) = Φ1(Y1⁻, t1, α).  (3.57)

Owing to the assumed differentiability of the function Φ1 appearing in (3.57), the function Y1⁺ = Y⁺(t1) is continuously differentiable with respect to α for α = α0. Therefore, on the interval up to the next switching moment t2 there exists a sensitivity function U(t, α) satisfying the equation

dU/dt = (∂F2/∂Y) U + ∂F2/∂α  (3.58)

with initial conditions

U1⁺ = dY1⁺/dα − F2(Y1⁺, t1, α) dt1/dα.  (3.59)

Let us expand Relation (3.59) in detail. First of all, by (3.57) we have

dY1⁺/dα = (∂Φ1⁻/∂Y) dY1⁻/dα + (∂Φ1⁻/∂t) dt1/dα + ∂Φ1⁻/∂α.  (3.60)

But

Y1⁻(α) = Y¹(t1(α), t0(α), Y0(α), α),  (3.61)

and, differentiating with respect to α, we obtain

dY1⁻/dα = F1(Y1⁻, t1, α) dt1/dα + U1⁻(t1, α).  (3.62)

Differentiating (3.52) with respect to α (it has been proved that this is possible), with account for (3.62) we obtain

(∂f1⁻/∂Y)[F1⁻ dt1/dα + U1⁻] + (∂f1⁻/∂t) dt1/dα + ∂f1⁻/∂α = 0.  (3.63)

With account for (3.53), from (3.63) we have

dt1/dα = − [(∂f1⁻/∂Y) U1⁻ + ∂f1⁻/∂α] / [(∂f1⁻/∂Y) F1⁻ + ∂f1⁻/∂t].  (3.64)

Substituting (3.60), (3.62), and (3.64) into (3.59), we obtain

U1⁺ = [−∆F1 + (∂Φ1⁻/∂Y − E) F1⁻ + ∂Φ1⁻/∂t] dt1/dα + (∂Φ1⁻/∂Y) U1⁻ + ∂Φ1⁻/∂α,  (3.65)

which is equivalent to (3.48). Thus, the claim of the theorem is proved for i = 1. Obviously, if (3.46) holds, the above reasoning is valid for any switching moment.

Consider some properties of the obtained relations.

1. Substituting (3.50) into (3.48), after routine transformations we obtain

Ui⁺ = Pi Ui⁻ + Qi,  (3.66)

where Pi and Qi are, respectively, a matrix and a column vector defined by

Pi = −(1/ḟi⁻) [−∆Fi + (∂Φi⁻/∂Y − E) Fi⁻ + ∂Φi⁻/∂t] (∂fi⁻/∂Y)ᵀ + ∂Φi⁻/∂Y,
Qi = −(1/ḟi⁻) [−∆Fi + (∂Φi⁻/∂Y − E) Fi⁻ + ∂Φi⁻/∂t] ∂fi⁻/∂α + ∂Φi⁻/∂α,  (3.67)

where ḟi⁻ = (∂fi⁻/∂Y) Fi⁻ + ∂fi⁻/∂t.

2. In principle, all the above relations remain valid if successive equations in (3.1) have different orders. In this case the derivatives ∂Φi⁻/∂Yi are rectangular rather than square matrices. The matrices Pi in (3.67) will also be rectangular.

3. Relations (3.48), which determine the breaks of the sensitivity function at the switching moments, are linear with respect to the sensitivity functions.

4. In some special cases, Relations (3.67) and the break conditions (3.48) can be simplified. If the switching moments ti are independent of the parameter, we have dti/dα = 0 and

∆Ui = (∂Φi⁻/∂Y − E) Ui⁻ + ∂Φi⁻/∂α.  (3.68)

Therefore, in (3.67) we obtain

Pi = ∂Φi⁻/∂Y,  Qi = ∂Φi⁻/∂α.  (3.69)

3.1.5 Case of Continuous Solutions

The above equations are simplified if we consider continuous solutions of the sequence of equations (3.1). Indeed, in this case, instead of the general break conditions (3.2) we have (3.3), whence

∂Φi⁻/∂Y = E,  ∂Φi±/∂α = 0,  ∂Φi±/∂t = 0.  (3.70)

Then, Equation (3.48) yields

∆Ui = −∆Fi dti/dα,  (3.71)

where ∆Fi is the break of the right side of (3.1) at the switching moment ti. As follows from (3.70) and (3.67), in the continuous case

Pi = E + (1/ḟi⁻) ∆Fi (∂fi/∂Y)ᵀ,  Qi = (∂fi/∂α / ḟi⁻) ∆Fi.  (3.72)

Let us demonstrate that, to determine the derivative dti/dα in the continuous case, we can use, in addition to (3.50), one more relation that appears to be useful in a number of applied problems. Suppose that at the moment t1, when Equation (3.40) is changed for (3.56),

Y⁻(t1) = Y⁺(t1) = Y1.  (3.73)

At the same time, due to the motion equations, we obtain

dY1⁻/dt = F1(Y1, t1, α),  dY1⁺/dt = F2(Y1, t1, α).  (3.74)

Consider the switching condition (3.4). In the given case it can, due to (3.73), be represented in the form

f1(Y⁺(t1, α), t1, α) = f1(Y⁻(t1, α), t1, α) = 0.  (3.75)

Calculating the total derivative of f1 with respect to t1, we find

df1/dt1 = (∂f1/∂Y) dY1⁺/dt1 + ∂f1/∂t1 = (∂f1/∂Y) dY1⁻/dt1 + ∂f1/∂t1.  (3.76)

Using (3.76), we obtain the following two expressions for the derivative:

dt1/dα = − [(∂f1/∂Y) U1⁻ + ∂f1/∂α] / [(∂f1/∂Y) F1⁻ + ∂f1/∂t]
       = − [(∂f1/∂Y) U1⁺ + ∂f1/∂α] / [(∂f1/∂Y) F1⁺ + ∂f1/∂t].  (3.77)

Using the second formula in (3.77), we obtain, analogously to (3.72),

Ui⁻ = Ri Ui⁺ + Si,  (3.78)

where

Ri = E − (1/ḟi⁺) ∆Fi (∂fi/∂Y)ᵀ,  Si = −(∂fi/∂α / ḟi⁺) ∆Fi.  (3.79)

Let us formulate some general conclusions that follow from the above relations.

1. As follows from (3.71), ∆Ui = 0, i.e., the sensitivity function remains continuous at the switching moment ti, if

∆Fi = 0,  (3.80)

i.e., the right side of (3.1) is continuous at t = ti, or if

dti/dα = 0,  (3.81)

i.e., the switching moment is independent of the parameter.

2. Consider general properties of the matrices Pi and Ri appearing in (3.72) and (3.78). We note that if J and H are column vectors, then

det(E + HJᵀ) = 1 + JᵀH.  (3.82)

Moreover, if 1 + JᵀH ≠ 0, we have

(E + HJᵀ)⁻¹ = E − HJᵀ/(1 + JᵀH).  (3.83)
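Identities (3.82) and (3.83) — the rank-one determinant and inversion formulas — are easy to confirm numerically as well. A short sketch with arbitrary fixed vectors (chosen so that 1 + JᵀH = 1.1 ≠ 0):

```python
import numpy as np

H = np.array([[1.0], [2.0], [-1.0], [0.5]])   # column vector
J = np.array([[0.3], [-0.2], [0.4], [1.2]])

E = np.eye(4)
s = float(J.T @ H)            # J^T H = 0.1 for these numbers
M = E + H @ J.T

# (3.82): det(E + H J^T) = 1 + J^T H
assert np.isclose(np.linalg.det(M), 1.0 + s)

# (3.83): rank-one inverse, valid since 1 + J^T H != 0
M_inv = E - (H @ J.T) / (1.0 + s)
assert np.allclose(M @ M_inv, E)
print("det =", round(np.linalg.det(M), 12))
```

The same two formulas are what make the determinants of Pi and Ri in (3.84)–(3.85) below computable in closed form.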

Formulas (3.82) and (3.83) can be verified by direct calculation. From (3.82) and (3.72) it follows that

det Pi = 1 + (1/ḟi⁻)(∂fi/∂Y)ᵀ∆Fi = 1 + (1/ḟi⁻)(ḟi⁺ − ḟi⁻) = ḟi⁺/ḟi⁻.  (3.84)

Similarly, it can be shown that

det Ri = ḟi⁻/ḟi⁺.  (3.85)

Therefore, if ḟi⁻ = 0, the matrix Ri is singular and the break conditions (3.78) cannot be immediately solved for the vector Ui⁺. Moreover, from (3.72), (3.79), and (3.83) it follows that for ḟi⁺ ≠ 0 and ḟi⁻ ≠ 0 we have

Pi = Ri⁻¹.  (3.86)

3. Let us derive an important additional relation for the case of (3.3). Writing relations similar to (3.59) and (3.62) for any i, we have

dYi⁺/dα = Ui⁺ + Fi⁺ dti/dα,  dYi⁻/dα = Ui⁻ + Fi⁻ dti/dα.  (3.87)

For the case of continuous solutions,

Yi⁺ = Yi⁻ = Yi(ti, α),  (3.88)

and Relations (3.87) yield

dY(ti(α), α)/dα = Ui⁻ + Fi⁻ dti/dα = Ui⁺ + Fi⁺ dti/dα.  (3.89)

3.2 Sensitivity Equations for Relay Systems

3.2.1 General Equations of Relay Systems

Control systems often include a large number of nonlinear transducers called relay elements. Let us give a general mathematical description of various relay elements as a class of special nonlinear discrete operators [87, 120]. In the most general case, an arbitrary nonlinear element with one scalar input and one scalar output can be considered as a nonlinear operator transforming a set of input signals x(t) into a set of piecewise-continuous functions y(t). It is assumed that for any input signal x(t) of the considered class we can define a countable sequence of switching moments

ti = ti[x(t)],  (3.90)

and a countable numerical sequence

ki = ki[x(t)].  (3.91)

Then, the output signal is defined by the relations

y(t) = ki,  ti−1 < t < ti.  (3.92)

The switching conditions (3.90) together with (3.91) and (3.92) define a generalized discrete element. Hereinafter, for the aggregate of the relations (3.90)–(3.92), we will use the notation

y = Ld[x].  (3.93)

DEFINITION 3.3 A discrete element described by the relations (3.93) will be called a relay element if all switching moments depend on the form of the input signal.

In most applied problems, the switching conditions are given in implicit form. For example, for an ideal relay with the characteristic shown in Figure 3.1, we have

ki = kp sign x(t),  ti−1 < t < ti,  (3.94)

and the switching moments are defined by the relations

x(ti) = 0.  (3.95)
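Relations (3.94)–(3.95) can be mimicked numerically: the relay output is ±kp according to the sign of the input, and the switching moments are the roots of x(ti) = 0. A minimal sketch (the function names, kp value, and the sine test input are our own illustrative choices):

```python
import math

def ideal_relay(x, kp=1.0):
    # (3.94): output is +kp or -kp according to the sign of the input
    return kp if x > 0 else (-kp if x < 0 else 0.0)

def switching_moments(x, t0, t1, n=100000):
    # (3.95): switching moments t_i are the roots of x(t) = 0, located
    # here by scanning for sign changes on a grid, then bisection.
    ts, h = [], (t1 - t0) / n
    for k in range(n):
        a, b = t0 + k * h, t0 + (k + 1) * h
        if x(a) == 0.0 or x(a) * x(b) < 0:
            lo, hi = a, b
            for _ in range(60):           # bisection refinement
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if x(lo) * x(mid) > 0 else (lo, mid)
            ts.append(0.5 * (lo + hi))
    return ts

t = switching_moments(math.sin, 0.1, 7.0)
print(t)  # zeros of sin on (0.1, 7.0): near pi and 2*pi
```

This illustrates the defining property of a relay element: the moments ti move whenever the input waveform x(t) changes.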

Figure 3.2 Generalized symmetric relay characteristic

In applications, relay elements with the generalized symmetric characteristic shown in Figure 3.2 are widely used. The coefficient kr characterizes the output signal value, σ0 defines the dead zone, and µ is called the return coefficient. Figure 3.3 demonstrates characteristics of relay elements that can be obtained from the generalized one for various values of the return coefficient µ. Another widespread type of relay characteristic is one with the variable dead zone, shown in Figure 3.4. In this case, the output signal takes two


Figure 3.3 Relay characteristics for various µ

Figure 3.4 Relay characteristics with variable dead zone

values:

y(t) = ±kp,  ti−1 < t < ti,  (3.96)

and the switching moments are defined by some functionals of the input signal

ti = ti[x(t)].  (3.97)

A particular case of such relays are those with the characteristic shown in Figure 3.5, used in relay extremal systems. In some control systems, relay elements with non-symmetric characteristics are used. Examples of such characteristics are shown in Figure 3.6.

Figure 3.5 Relay characteristics used in extremal systems

Figure 3.6 Non-symmetric relay characteristics

Characteristics of relay elements depend on a definite set of parameters. For instance, the parameters of a generalized element with the characteristic shown in Figure 3.2 are the values kr, σ0, and µ. The parameters of a relay with the characteristic shown in Figure 3.3 are kr and σ. In general, to characterize the dependence of relay element characteristics on a generalized parameter α we will use the notation

y = Lr[x, α].  (3.98)

Now consider equations of relay systems. Hereinafter we investigate systems that differ from linear ones by the presence of a single relay element. For a wide class of problems, such systems can be described by equations of direct control [61, 86]:

dY/dt = A Y + H Lr[σ],  σ = JᵀY,  (3.99)

where A is a constant matrix, and H and J are constant column vectors. If Equations (3.99) depend also on a scalar parameter α, instead of (3.99) we will have, in general,

dY/dt = A(α) Y + H(α) Lr[σ, α],  σ = Jᵀ(α) Y.  (3.100)

Taking into account that the signal z = Lr(σ, α) is a piecewise constant function and assuming

z = qi = const,  ti−1 < t < ti,  (3.101)

from (3.99) we obtain a sequence of equations

dY/dt = A Y + H qi,  σ = JᵀY,  (3.102)

which will be called equations of normal motion. As follows from the example dealing with the ideal relay system, in general the sequence of equations of normal motion does not specify completely all possible motions in the system (3.99), because modes with sliding motion are also possible. Therefore, the sequence of equations of normal motion should be augmented by equations of sliding modes. In general, these equations can be obtained in the following way [109, 113]. Let us have a surface S in the phase space of the variables y1, ..., yn defined by

f(y) = f(y1, ..., yn) = 0,  (3.103)

dividing the space into the areas N+ where f > 0 and N− where f < 0. In the areas N+ and N− we have two systems of differential equations, respectively:

dY/dt = F⁺(Y, t),  Y ∈ N+,
dY/dt = F⁻(Y, t),  Y ∈ N−.  (3.104)

Each of them describes normal motion in the corresponding area. If the phase trajectories of Equations (3.104) at some points of the surface S are directed toward each other, the system performs sliding motion given by the equations [109, 113]

dYc/dt = λF⁺ + (1 − λ)F⁻,  0 ≤ λ ≤ 1,  (3.105)

where

λ = grad f · F⁻ / (grad f · [F⁻ − F⁺]).  (3.106)

Thus, the equations of the sliding mode are uniquely determined by the equations of normal motion before and after switching, and by the form of the switching surface. For relay systems, the equations of normal motion have the form (3.102). Let us have a sliding mode on a segment of the surface (plane)

f = JᵀY = σi = const.  (3.107)

Moreover, assume that on one side of the surface (3.107) we have the equation

dY/dt = A Y + Hq1 = F⁺,  (3.108)

and on the other side

dY/dt = A Y + Hq2 = F⁻.  (3.109)

In the given case we have

grad f = Jᵀ,  grad f · F⁻ = JᵀA Y + ρq2,  ρ = JᵀH,  (3.110)

and substituting (3.108)–(3.110) into (3.105) gives the equation of motion in the sliding mode

dYc/dt = (A − (1/ρ) HJᵀA) Yc,  (3.111)

which coincides with (3.27).
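The passage from (3.105)–(3.106) to the closed form (3.111) can be checked numerically: for f = JᵀY, the convex combination λF⁺ + (1 − λ)F⁻ with λ from (3.106) must equal (A − (1/ρ)HJᵀA)Y at any point of the switching plane. A sketch with arbitrary data of our own (not an example from the book):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
H = np.array([1.0, 3.0])
J = np.array([1.0, 0.5])
q1, q2 = 1.0, -1.0
rho = J @ H                      # rho = J^T H, Eq. (3.110)

Y = np.array([1.0, -2.0])        # a point on the plane J^T Y = const
Fp = A @ Y + H * q1              # F+ from (3.108)
Fm = A @ Y + H * q2              # F- from (3.109)

# (3.106) with grad f = J for f = J^T Y
lam = (J @ Fm) / (J @ (Fm - Fp))
Yc_dot = lam * Fp + (1 - lam) * Fm            # Filippov combination (3.105)

# (3.111): closed-form sliding dynamics
Yc_dot_closed = (A - np.outer(H, J) @ A / rho) @ Y
print(Yc_dot, Yc_dot_closed)    # the two vectors coincide
```

For these numbers λ = 0.9 ∈ [0, 1] and JᵀF⁺, JᵀF⁻ have opposite signs, so the algebraic identity is exercised in a regime where a sliding combination actually exists.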

3.2.2 Sensitivity Equations for Systems with Ideal Relay

Consider the system of equations (3.100), assuming that Lr is the characteristic of an ideal relay (3.9). Then, as follows from the above, in the area σ > 0 we have Equation (3.11), in σ < 0 Equation (3.12), and the sliding motion in the plane σ = 0 is described by Equation (3.111). For a normal transition at the moment ti(α) from the half-plane σ < 0 into the half-plane σ > 0 (with σ̇(ti ± 0) > 0) we have

F⁺ = A Y + Hkp,  F⁻ = A Y − Hkp,  (3.112)

and, therefore, according to (3.71), the break of the sensitivity function is given by the equation

∆Ui = −2Hkp dti/dα.  (3.113)

For a reverse transition, we have

∆Ui = 2Hkp dti/dα.  (3.114)

Let, for instance, the matrix A and vector H in Equations (3.11) and (3.12) depend on α. Then, before the break, we have the sensitivity equation

dU/dt = AU + (dA/dα) Y − (dH/dα) kp,  t ≤ ti − 0,  (3.115)

while after the break,

dU/dt = AU + (dA/dα) Y + (dH/dα) kp,  t ≥ ti + 0.  (3.116)

The switching moment is defined by the equation

f = σ(ti) = JᵀY(ti) = 0.  (3.117)

Moreover, since ∂f/∂Y = J in the given case, Formulas (3.77) yield

dti/dα = −(1/σ̇(ti − 0)) JᵀUi⁻ = −(1/σ̇(ti + 0)) JᵀUi⁺.  (3.118)

Using (3.113) and the first relation in (3.118), we have

Ui⁺ = Pi Ui⁻,  (3.119)

where

Pi = E + (2kp/σ̇(ti − 0)) HJᵀ,  ρ = JᵀH,  (3.120)

and

σ̇(ti − 0) = JᵀẎ(ti − 0) = JᵀA Y(ti) − ρkp.  (3.121)
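The structure of (3.119)–(3.121) is easy to exercise numerically: building Pi from one-sided data at a switching moment, its determinant must equal σ̇(ti + 0)/σ̇(ti − 0), which anticipates (3.124) below. A sketch with arbitrary numbers of our own, chosen so that σ̇(ti − 0) > 0:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.4]])
H = np.array([0.5, 2.0])
J = np.array([1.0, 0.2])
kp = 0.8
Y = np.array([-0.6, 3.0])        # state at the switching moment

rho = J @ H                       # rho = J^T H, Eq. (3.120)
sdot_m = J @ A @ Y - rho * kp     # sigma_dot(ti - 0), Eq. (3.121)
sdot_p = J @ A @ Y + rho * kp     # sigma_dot(ti + 0)

Pi = np.eye(2) + (2 * kp / sdot_m) * np.outer(H, J)
print(np.linalg.det(Pi), sdot_p / sdot_m)   # the two values agree
```

Because σ̇(ti + 0) − σ̇(ti − 0) = 2ρkp, the rank-one determinant formula (3.82) gives det Pi = 1 + 2ρkp/σ̇(ti − 0) = σ̇(ti + 0)/σ̇(ti − 0), which the printout confirms.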

Using the second relation in (3.118), instead of (3.119) we obtain

Ui⁻ = Ri Ui⁺,  (3.122)

where

Ri = E − (2kp/σ̇(ti + 0)) HJᵀ.  (3.123)

As follows from (3.84), (3.85), (3.120), and (3.123),

det Pi = σ̇(ti + 0)/σ̇(ti − 0),  det Ri = σ̇(ti − 0)/σ̇(ti + 0),  (3.124)

which can be immediately verified. Thus, if σ̇(ti − 0) = 0, i.e.,

JᵀA Y(ti) = ρkp,  (3.125)

the matrix Pi makes no sense. Nevertheless, Relations (3.122) may hold, but they cannot be used for calculating Ui⁺, because det Ri = 0. Similarly, if we consider a normal transition from the half-plane σ > 0 into the half-plane σ < 0, we have the sequence of equations (3.11) and (3.12). Then, with account for (3.114), we obtain

Ui⁺ = (E − (2kp/σ̇(ti − 0)) HJᵀ) Ui⁻,
Ui⁻ = (E + (2kp/σ̇(ti + 0)) HJᵀ) Ui⁺.  (3.126)

Relations (3.119), (3.122), and (3.126) can, using the signs of σ̇(ti − 0) and σ̇(ti + 0), be combined in the form

Ui⁺ = (E + (2kp/|σ̇(ti − 0)|) HJᵀ) Ui⁻,
Ui⁻ = (E − (2kp/|σ̇(ti + 0)|) HJᵀ) Ui⁺.  (3.127)

Now assume that there is a switch from a normal motion in the half-plane σ < 0 to a sliding mode at the moment ti(α). Then, before the moment ti we have

dY/dt = A Y − Hkp = F⁻,  t ≤ ti − 0,  (3.128)

and in the sliding mode

dY/dt = (A − (1/ρ) HJᵀA) Y = F⁺,  t ≥ ti + 0.  (3.129)

The associated sensitivity equations have the form

dU/dt = AU + (dA/dα) Y − (dH/dα) kp  (3.130)

and

dU/dt = (A − (1/ρ) HJᵀA) U + [dA/dα − d/dα((1/ρ) HJᵀA)] Y.  (3.131)

Let us find the relations between the values of the vector U before and after the switching moment. In this case,

∆Fi = Fi⁺ − Fi⁻ = H(kp − (1/ρ) JᵀA Y(ti)) = H(kp − (σ̇(ti − 0) + ρkp)/ρ) = −(σ̇(ti − 0)/ρ) H.  (3.132)

Therefore, due to (3.71),

∆Ui = (σ̇(ti − 0)/ρ) H dti/dα.  (3.133)

Assuming that σ̇(ti − 0) ≠ 0, for the calculation of dti/dα we can use the first relation in (3.118), which yields, together with (3.133),

∆Ui = −(1/ρ) HJᵀUi⁻.  (3.134)

This relation can be represented in the form (3.66), where

Pi = E − (1/ρ) HJᵀ,  Qi = 0.  (3.135)

The transition from the half-plane σ > 0 to a sliding mode can be analyzed in a similar way.

Next, consider the transition from the sliding mode to normal motion. Let the equation of the sliding mode (3.129) hold before the moment ti(α). At the moment ti(α), defined by the conditions (3.31) and (3.32),

JᵀA Y(ti) + ρ = 0,  JᵀA²Y(ti) + ρ > 0,  (3.136)

there is a transition to normal motion in the half-plane σ > 0, i.e., to the equation

dY/dt = A Y + Hkp = F⁺.  (3.137)

The sensitivity equations before and after switching are obtained from (3.129) and (3.137) by differentiating with respect to α:

dU/dt = (A − (1/ρ) HJᵀA) U + [dA/dα − d/dα((1/ρ) HJᵀA)] Y,  t ≤ ti − 0,

dU/dt = AU + (dA/dα) Y + (dH/dα) kp,  t ≥ ti + 0.  (3.138)

Next we consider the break conditions. In the case at hand, with account for (3.136),

∆Fi = Hkp + (1/ρ) HJᵀA Y(ti) = H(kp − 1).  (3.139)

The derivative dti/dα can be obtained using the first relation from (3.77) and the switching condition (3.136):

dti/dα = − [JᵀA Ui⁻ + Jᵀ(dA/dα) Yi + Jᵀ(dH/dα)] / (JᵀA²Yi + ρ).  (3.140)

Using the general formula (3.66), we have

Ui⁺ = Pi Ui⁻ + Qi,  (3.141)

where

Pi = E + ν⁻¹(kp − 1) HJᵀA,
Qi = ν⁻¹(kp − 1) HJᵀ((dA/dα) Yi + dH/dα),
ν = JᵀA²Yi + ρ.  (3.142)

3.2.3 Systems with Logical Elements

Logical elements are described by a special class of relay operators with several inputs and several outputs, each of which can assume a number of discrete values. In the present section we discuss sensitivity of a system


with the simplest logical unit, with the scheme given in [74, 78]. The equations of system motion have the form

dY/dt = AY + H L[σ1, σ2],  σi = JiᵀY,  i = 1, 2,  (3.143)

where A is a constant matrix, H and Ji are constant vectors, and L is the characteristic of the logical unit. The logical control law z = L(σ1, σ2) can be visually presented on the plane σ1, σ2 (Figure 3.7).

Figure 3.7 Logical unit characteristic

The equation of the logical unit can be defined analytically in the form

z = 1 if (σ1, σ2) ∈ S1,  z = −1 if (σ1, σ2) ∈ S2,  z = 0 if (σ1, σ2) ∈ S3,  (3.144)

where S1, S2, and S3 are the regions marked in Figure 3.7. The characteristic (3.144) can be described in a shorter form using basic logical functions [74]. Consider the main types of motions in the system at hand. First of all, in the regions Si we have normal motions described by the equations

dY/dt = A Y + H,  (σ1, σ2) ∈ S1,  (3.145)

dY/dt = A Y − H,  (σ1, σ2) ∈ S2,  (3.146)

dY/dt = A Y,  (σ1, σ2) ∈ S3.  (3.147)
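A piecewise law of the form (3.144) is straightforward to encode once the regions are fixed. Since Figure 3.7 is not reproduced here, the partition below is a hypothetical stand-in for S1–S3 (two half-planes split by thresholds ±b on σ2), meant only to show the shape of such a characteristic:

```python
def logical_unit(sigma1, sigma2, b=1.0):
    # Hypothetical region layout standing in for S1, S2, S3 of (3.144):
    #   S1: sigma2 < -b  ->  z = +1   (drives the +H equation (3.145))
    #   S2: sigma2 >  b  ->  z = -1   (drives the -H equation (3.146))
    #   S3: otherwise    ->  z =  0   (drives the free motion (3.147))
    if sigma2 < -b:
        return 1
    if sigma2 > b:
        return -1
    return 0

print(logical_unit(0.0, -2.0), logical_unit(0.0, 2.0), logical_unit(0.0, 0.5))
# 1 -1 0
```

Whatever the actual geometry of S1–S3, the output z selects which of the normal-motion equations (3.145)–(3.147) is active at each instant.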

The switching moments at which one of Equations (3.145)–(3.147) is changed for another will be called normal. Assume that before the moment ti we had Equation (3.147), and after it Equation (3.145). Assume also that the transition from the region S3 into the region S1 happens at the point M (see Figure 3.7), and that σ̇2(ti − 0) > 0 and σ̇2(ti + 0) > 0. We will find the sensitivity equation for this case. For A = A(α), H = H(α), J2 = J2(α), and b = b(α) the equations of motion have the form

dY/dt = A(α) Y,  t ≤ ti − 0,
dY/dt = A(α) Y + H(α),  t ≥ ti + 0.  (3.148)

The sensitivity equations on the intervals of normal motion can be obtained by differentiating (3.148) with respect to α in the form

dU/dt = AU + (dA/dα) Y,  t ≤ ti − 0,
dU/dt = AU + (dA/dα) Y + dH/dα,  t ≥ ti + 0.  (3.149)

Next, we calculate the breaks of the sensitivity function. In the case under consideration,

Fi⁻ = AYi,  Fi⁺ = A Yi + H,  ∆Fi = H,  (3.150)

while the switching condition has the form

σ2(ti) = J2ᵀYi = −b.  (3.151)

Using (3.66) and (3.72), and assuming that σ̇2(ti − 0) = J2ᵀA Yi > 0, we obtain

Ui⁺ = Pi Ui⁻ + Qi,  (3.152)

where

Pi = E + (1/σ̇2(ti − 0)) HJ2ᵀ,  Qi = (1/σ̇2(ti − 0)) H ((dJ2ᵀ/dα) Yi + db/dα).  (3.153)

Consider the possibility of a transition at the point M to a sliding mode in the plane (3.151). For such a mode to appear, the following conditions must be satisfied:

σ̇2(ti − 0) = J2ᵀA Yi > 0,  σ̇2(ti + 0) = J2ᵀA Yi + ρ2 < 0,  ρ2 = J2ᵀH.  (3.154)

Assume that the conditions (3.154) are satisfied and the sliding mode takes place. Using (3.147), (3.145), and (3.111), it can be easily seen that in this case the equation of the sliding mode has the form

dY/dt = (A − (1/ρ2) HJ2ᵀA) Y.  (3.155)

The sensitivity equation in the sliding mode is given by

dU/dt = (A − (1/ρ2) HJ2ᵀA) U + d/dα(A − (1/ρ2) HJ2ᵀA) Y.  (3.156)

To calculate the breaks of the sensitivity function we will use (3.71), bearing in mind that

Fi⁺ = A Yi − (1/ρ2) HJ2ᵀA Yi,  Fi⁻ = A Yi,  (3.157)

while the switching condition still has the form (3.151). After some transformations, we obtain Formula (3.152), where

Pi = E − (1/ρ2) HJ2ᵀ,  Qi = −(1/ρ2) H ((dJ2ᵀ/dα) Yi + db/dα).  (3.158)

3.2.4 Relay System with Variable Delay

Consider sensitivity equations for a system including, in addition to a relay with a dead zone, a pure delay element [87, 104], where the delay changes at the switching moments. The system equations have the form

dY/dt = A Y + H L[σ, τ1, τ2],  σ = JᵀY,  (3.159)

where A is a constant matrix, H and J are constant vectors, and L(σ, τ1, τ2) is the characteristic of the complex nonlinear element formed by a series connection of an element with variable delay (EVD) and a relay element (RE) (see Figure 3.8).

Figure 3.8 Relay element with variable delay

Let us assume that the characteristic of the relay element has the form shown in Figure 3.3b, and that the EVD transforms the input signal according to the rule

z = L̃[u, τ1, τ2] = u(t − τ1) for u = ±kp,  u(t − τ2) for u = 0,  (3.160)

where τ1 and τ2 are nonnegative constants. We will consider continuous solutions of the system (3.159). Let σ(t) be the input signal of the relay element, which is a continuous function of t. Denote by ti the switching moments defined by the conditions

σ(ti) = JᵀY(ti) = ±σ0.  (3.161)

We call the interval I: t1 < t < t2 an interval of the first type, I1, if for t ∈ I we have |σ(t)| < σ0, and an interval of the second type, I2, if for t ∈ I we have |σ(t)| > σ0. Then, Equation (3.160) yields

z(t) = u(t − τ1) for t ∈ I2,  u(t − τ2) for t ∈ I1.  (3.162)

Hence, the function z(t) has, generally speaking, breaks at the moments

t1i = ti + τ1,  t2i = ti + τ2,  (3.163)

depending on the values of τ1, τ2 and the duration of the pulses incoming from the relay. Therefore, the EVD changes the number and duration of the pulses acting on it. In general, at the switching moments (3.163) there can be a change between any combination of the values kr, −kr, 0. For example, in the situation shown in Figure 3.9 we have a switch from kr to 0 at the moment t = 0, while in Figure 3.10 this moment corresponds to a switch from kr to −kr.

Figure 3.9 Switch from kr to 0

Figure 3.10 Switch from kr to −kr

Thus, if we do not take into account sliding modes in the switching planes (3.161), an arbitrary motion of the system under investigation can be described by a sequence of differential equations of the form

dY/dt = AY ± Hkp,  dY/dt = AY.  (3.164)

It should be emphasized that any equation in (3.164) can, in principle, be followed by either of the two remaining ones. This circumstance is caused by the unit of variable delay. Now let us construct the sensitivity equations. At first, we assume that the matrix A and the vectors H and J depend on the parameter α, while the values τ1 and τ2 are constants. Then, between the switching moments we will have the following equations obtained from (3.164) by differentiating with respect to α:

dU/dt = AU + (dA/dα) Y ± d/dα(Hkp),
dU/dt = AU + (dA/dα) Y.  (3.165)

For the switching moments ti of the relay element we have Equation (3.161). Since τ1 and τ2 are assumed to be independent of the parameter,

dt1i/dα = dt2i/dα = dti/dα.  (3.166)

Moreover, as before, we have

dti/dα = − [JᵀUi⁻ + (dJᵀ/dα) Yi ± dσ0/dα] / σ̇(ti − 0).  (3.167)

Using Equations (3.164) before and after the switching moments and Formulas (3.71) and (3.167), it is easy to construct the corresponding conditions for the breaks of the sensitivity function. Let us take the value α = τ1 as the parameter, assuming that A, H, J, kr, and σ0 are independent of α. In this case, the sensitivity equations (3.165) reduce to the single equation

dU/dt = A U.  (3.168)

The derivatives of the switching moments are given by

dt1i/dα = dti/dα + 1,  dt2i/dα = dti/dα.  (3.169)

Moreover, in this case

dti/dα = −(1/σ̇(ti − 0)) JᵀUi⁻.  (3.170)

3.2.5 Relay Extremal System

Consider the following system of equations:

dY1/dt = A1 Y1 + H1 φ(σ2),  σ2 = J2ᵀY2,
dY2/dt = A2 Y2 + H2 Lr[σ1],  σ1 = J1ᵀY1,  (3.171)

where A1 and A2 are constant square matrices, Hi and Ji (i = 1, 2) are constant vectors of corresponding dimensions, and φ(σ2) is an even, continuously differentiable function of the argument σ2 having a maximum at σ2 = 0. In applications it is often assumed [66] that

φ(σ2) = −kσ2²,  k = const > 0.  (3.172)

Hereinafter we will use (3.172) for simplicity. Moreover, Lr[σ1] in Equations (3.171) is the relay characteristic shown in Figure 3.4. The value σ0 determining the switching moments can be defined in various ways depending on the method of forming the relay switching law. Thus, if it is assumed that there is a fixed bilateral quantization grid, in Figure 3.4 we can take σ0 = 0, and switching of the relay will happen for

σ1(ti) = J1ᵀY1(ti) = σ0,  σ̇1(ti) < 0.  (3.173)

If the quantization grid is connected to σ1(t), we have

σ0 = σ1max − χ,  (3.174)

where σ1max is the previous maximal value of the coordinate σ1, and χ is the fixed quantization step. Relation (3.174) can be considered as a switching condition defining the switching moment ti:

σ1(ti) = σ1(tm) − χ,  (3.175)

where tm is the moment when the extremum is reached, so that σ̇1(tm) = 0. Let us show that the system of equations (3.171) is equivalent to the equations of a single-loop nonlinear extremal control system. Setting d/dt = p, from the equations in (3.171) we have

(pE − A1)Y1 = H1φ(σ2),  (pE − A2)Y2 = H2L[σ1].  (3.176)

Hence,

Y1 = (pE − A1)⁻¹H1φ(σ2),  Y2 = (pE − A2)⁻¹H2L[σ1].  (3.177)

Multiplying the first equation by J1ᵀ and the second one by J2ᵀ, we obtain

σ1 = ω1(p)φ(σ2),  σ2 = ω2(p)L[σ1],  (3.178)

where

ωi(p) = Jiᵀ(pE − Ai)⁻¹Hi,  i = 1, 2,  (3.179)

which corresponds to the general equations given in [66]. If in this case each of the matrices A1 and A2 has one zero eigenvalue, we have

ωi(p) = (1/p) ω̃i(p),  i = 1, 2,  (3.180)

and Equations (3.171) reduce to the ones considered in [66]. Now let us investigate the technique of constructing sensitivity equations for the system (3.171) for normal modes, when the second equation of (3.171) reduces to the sequence of equations

dY2/dt = A2 Y2 ± H2 kp.  (3.181)

Then, assuming that all constant matrices and vectors appearing in (3.171) depend on the parameter α, we have

dU1/dt = A1 U1 − 2kσ2 H1J2ᵀ U2 + (dA1/dα) Y1 − 2kσ2 H1 (dJ2ᵀ/dα) Y2 − kσ2² (dH1/dα),

dU2/dt = A2 U2 + (dA2/dα) Y2 ± d/dα(H2 kp),  (3.182)

where

U1 = ∂Y1(t, α)/∂α,  U2 = ∂Y2(t, α)/∂α.  (3.183)

Next, we construct the break conditions for the sensitivity functions. Note that the right side of the first vector equation (3.171) is continuous. Then, from the general formula (3.71) it follows that the sensitivity function U1 is continuous.


Now consider the break conditions for the sensitivity function U2. Let us have a switch of the relay from minus to plus. Then,

Ẏ2(ti − 0) = A2 Y2(ti) − H2kp,  Ẏ2(ti + 0) = A2 Y2(ti) + H2kp,

i.e.,

∆F2i = Ẏ2(ti + 0) − Ẏ2(ti − 0) = 2H2kp.  (3.184)

The derivative dti/dα is calculated differently for the switching laws (3.173) and (3.175). For the case of (3.173), we have, with reference to (3.77),

dti/dα = − [J1ᵀU1i⁻ + (dJ1ᵀ/dα) Y1i − dσ0/dα] / σ̇1(ti).  (3.185)

To construct dti/dα for the case of (3.175), we write this switching condition in the form

J1ᵀY1(tm, α) − J1ᵀY1(ti, α) = χ.

With account for the continuity of U1 and σ̇1, differentiating with respect to α yields

−σ̇1(ti) dti/dα − J1ᵀU1(ti) + J1ᵀU1(tm) + (dJ1ᵀ/dα)(Y1(tm, α) − Y1(ti, α)) = 0.

Hence,

dti/dα = (1/σ̇1(ti)) [J1ᵀU1(tm) − J1ᵀU1(ti) + (dJ1ᵀ/dα) Y1(tm, α) − (dJ1ᵀ/dα) Y1(ti, α)].  (3.186)

From (3.184) and (3.186) it is easy to find the break conditions for the vector U2 in the form (3.66).

3.2.6 System with Pulse-Frequency Modulation of the First Kind

In the present section we consider the system of equations (3.99), where Lr[σ] is the characteristic of an element performing pulse-frequency modulation. We restrict our discussion to bipolar modulation of the first kind [56]. In this case, the function z(t) can be expressed in the form

z = L[σ] = sign σ(ti) for |σ(ti)| > ∆,  z = 0 for |σ(ti)| ≤ ∆,  ti < t < ti+1,  (3.187)

and the duration of discreteness interval is given by ti+1 − ti = f [σ(ti )],

(3.188)

where $f(\sigma)$ is a positive even function such that $\lim_{\sigma\to\infty} f(\sigma) = 0$. Hereinafter the function $f(\sigma)$ will be assumed to be differentiable. It is noteworthy that, according to the above definition, the system under consideration belongs to the class of relay systems, because all switching moments change when the form of the input signal changes. There is some contradiction with the adopted terminology. In [120] it was also noted that systems with pulse-frequency modulation are similar to relay systems. Depending on the values of the controlling function, all possible motions of the system under consideration are described by the following system of equations derived from (3.99) and (3.187):

$$\frac{dY}{dt} = AY \pm H, \qquad \frac{dY}{dt} = AY. \tag{3.189}$$
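As a numerical illustration, the switching structure (3.187)–(3.189) can be simulated directly. The sketch below uses a scalar plant with negative feedback and the modulation law $f(\sigma) = T_0/(1 + k|\sigma|)$; the plant, the gains, and the particular choice of $f$ are assumptions made for the example, not quantities fixed by the text.

```python
import math

# Minimal sketch of bipolar pulse-frequency modulation of the first kind,
# Eqs. (3.187)-(3.188): a scalar plant dy/dt = -a*y - h*z with negative
# feedback (an illustrative choice), where z is held constant on each
# discreteness interval and the interval length is f(sigma) = T0/(1+k*|sigma|),
# a positive even function vanishing as |sigma| -> infinity, as required.
def simulate_pfm(y0, a=1.0, h=2.0, delta=0.1, T0=1.0, k=5.0, n_pulses=50):
    y, t, times = y0, 0.0, []
    for _ in range(n_pulses):
        sigma = y                                  # sigma = J^T Y with J = 1
        z = math.copysign(1.0, sigma) if abs(sigma) > delta else 0.0
        dt = T0 / (1.0 + k * abs(sigma))           # Eq. (3.188)
        # exact solution of dy/dt = -a*y - h*z over one interval
        y = y * math.exp(-a * dt) - (h * z / a) * (1.0 - math.exp(-a * dt))
        t += dt
        times.append(t)
    return y, times
```

Between pulses the motion obeys one of the linear equations (3.189); as the trajectory settles into the dead zone $|\sigma| < \Delta$, the discreteness intervals grow toward $T_0$.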

The sensitivity equations between the switching moments can be obtained, as before, by formal differentiation of Equations (3.189) with respect to the parameter. Therefore, we proceed at once to construct the break conditions for the sensitivity functions. Note that in this case Equations (3.189) can follow one after another in an arbitrary order. Let, for instance,

$$\dot Y(t_i - 0) = AY(t_i) - H = F_i^-, \qquad \dot Y(t_i + 0) = AY(t_i) = F_i^+, \tag{3.190}$$

i.e.,

$$\Delta F_i = F_i^+ - F_i^- = H. \tag{3.191}$$

The expression for the derivative $dt_i/d\alpha$ can be obtained by direct differentiation of (3.188). Obviously, we have

$$\frac{dt_{i+1}}{d\alpha} = \frac{dt_i}{d\alpha} + \frac{df_i}{d\sigma}\,\frac{d\sigma(t_i)}{d\alpha}. \tag{3.192}$$

But, according to (3.178),

$$\frac{d\sigma(t_i)}{d\alpha} = \frac{d}{d\alpha}\,J^T Y(t_i, \alpha) = J^T\left[\dot Y(t_i - 0)\,\frac{dt_i}{d\alpha} + U(t_i - 0)\right] + \frac{dJ^T}{d\alpha}\,Y(t_i). \tag{3.193}$$

Therefore, Equation (3.192) yields

$$\frac{dt_{i+1}}{d\alpha} = \frac{df_i}{d\sigma}\,J^T U_i^- + \left(1 + \frac{df_i}{d\sigma}\,\dot\sigma_i^-\right)\frac{dt_i}{d\alpha} + \frac{df_i}{d\sigma}\,\frac{dJ^T}{d\alpha}\,Y_i. \tag{3.194}$$
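The recurrence can be verified numerically by finite differences. The sketch below re-simulates a hypothetical scalar pulse-frequency system of the kind in (3.187)–(3.188), with an assumed modulation law $f(\sigma) = T_0/(1+k|\sigma|)$ and assumed plant parameters, for perturbed parameter values, and checks that the finite-difference derivatives of the switching moments satisfy the recurrent relation (3.192).

```python
import math

# Finite-difference check of the recurrent relation (3.192):
#   dt_{i+1}/d(alpha) = dt_i/d(alpha) + f'(sigma_i) * d(sigma_i)/d(alpha),
# for a hypothetical scalar plant dy/dt = -alpha*y - h*z with interval law
# f(sigma) = T0/(1 + K*|sigma|).  All numeric values are illustrative.
T0, K, H, DELTA = 1.0, 5.0, 0.5, 0.05

def switching_data(alpha, y0=1.0, n=8):
    # returns the switching moments t_i and sampled values sigma_i = sigma(t_i)
    y, t = y0, 0.0
    ts, sigmas = [0.0], [y0]
    for _ in range(n):
        z = math.copysign(1.0, y) if abs(y) > DELTA else 0.0
        dt = T0 / (1.0 + K * abs(y))               # f(sigma), Eq. (3.188)
        y = y * math.exp(-alpha * dt) - (H * z / alpha) * (1.0 - math.exp(-alpha * dt))
        t += dt
        ts.append(t)
        sigmas.append(y)
    return ts, sigmas

def f_prime(sigma):
    # derivative of f(sigma) = T0/(1 + K*|sigma|) for sigma != 0
    return -T0 * K * math.copysign(1.0, sigma) / (1.0 + K * abs(sigma)) ** 2
```

Central differences in $\alpha$ give $dt_i/d\alpha$ and $d\sigma(t_i)/d\alpha$; substituting them into (3.192) reproduces $dt_{i+1}/d\alpha$ to within the finite-difference error.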

Let us note that in this case we have a recurrent relation (3.194) instead of an explicit expression for the derivative $dt_i/d\alpha$. Nevertheless, this does not make calculations more difficult, because, having the initial conditions

$$t_0 = t_0(\alpha), \qquad Y(t_0) = Y_0(\alpha),$$

we can obtain the derivative $dt_0/d\alpha$ and then, by (3.194), all other derivatives of interest. It should be noted that the above reasoning remains valid if we additionally assume that the switching function $f$ also depends on the parameter, i.e.,

$$f = f(\sigma, \alpha). \tag{3.195}$$

In this case, instead of (3.192) we have

$$\frac{dt_{i+1}}{d\alpha} = \frac{dt_i}{d\alpha} + \frac{df_i}{d\sigma}\,\frac{d\sigma(t_i)}{d\alpha} + \frac{\partial f_i}{\partial\alpha}. \tag{3.196}$$

3.3 Sensitivity Equations for Pulse and Relay-Pulse Systems

3.3.1 Pulse and Relay-Pulse Operators

DEFINITION 3.4 The discrete operator (3.93) given by (3.90) and (3.91) will be called pulse if the sampling moments $t_i$ are independent of the input signal.


DEFINITION 3.5 If some of the switching moments $t_i$ depend on the form of the input signal while others do not, such an operator will be called relay-pulse.

Hereinafter, for simplicity, we use the term “element” instead of “operator,” though, strictly speaking, they are not equivalent. Consider the properties of pulse operators in detail. For simplicity, we investigate only the single-input-single-output (SISO) case. According to the above definition, a generalized pulse element is defined by the relations

$$y = k_i[x(t)], \qquad t_i < t < t_{i+1}, \tag{3.197}$$

where the sampling moments $t_i$ are given numbers, and $k_i[x]$ are some functionals depending, in general, on the form of the input signal on the interval $t_0 \le t \le t_i$.

DEFINITION 3.6 If all the switching moments $t_i$ are equidistant, i.e.,

$$t_{i+1} - t_i = T = \text{const}, \tag{3.198}$$

the pulse element (3.197) is called periodic. Hereinafter we consider, unless specified otherwise, only periodic pulse elements. In many cases, the form of the output signals of the pulse element is not rectangular. This can be taken into account if we consider, instead of (3.197), the following relations:

$$y = m(t)\,k_i[x(t)], \qquad t_i < t < t_{i+1}, \tag{3.199}$$

where $m(t) = m(t + T)$ is a given periodic function called the modulated function. If all the functionals $k_i[x]$ are linear, i.e.,

$$k_i[a_1 x_1 + a_2 x_2] = a_1 k_i[x_1] + a_2 k_i[x_2], \tag{3.200}$$

where $a_1$ and $a_2$ are arbitrary constants, the pulse operator is called linear. The simplest type of linear pulse operator is a linear pulse-amplitude element, for which

$$m(t) = 1, \qquad k_i[x(t)] = k\,x(t_i), \qquad k = \text{const}, \tag{3.201}$$

i.e., the output signal is given by

$$y(t) = k\,x(t_i), \qquad t_i < t < t_i + T. \tag{3.202}$$

Another example of a generalized pulse element is a linear hold device. For instance, the equation of an $s$-order extrapolator has the form

$$y(t) = \sum_{k=0}^{s} \alpha_{nk}[x]\,m_k(t), \qquad nT < t < (n+1)T, \tag{3.203}$$

where

$$m_k(t) = m_k(t + T), \qquad k = 0, \ldots, s, \tag{3.204}$$

are functions with period $T$, and the coefficients $\alpha_{nk}[x]$ are given by a difference equation of the form

$$\alpha_{nk} = \sum_{\nu=0}^{l} \beta_{k\nu}\,x[(n-\nu)T] + \sum_{\mu=1}^{r} \gamma_{k\mu}\,\alpha_{n-\mu,k}, \tag{3.205}$$

where $\beta_{k\nu}$ and $\gamma_{k\mu}$ are constants independent of the input signal. In the special case when

$$m_k(t) = t^k, \qquad 0 \le t \le T, \tag{3.206}$$

Equations (3.203) define a polynomial extrapolation law. If $m_k(t)$ are trigonometric functions, we have a trigonometric extrapolation, etc. An extrapolator (3.203)–(3.205) performs, in the general case, not only extrapolation but also smoothing of measured data. This is due to the feedback, whose effectiveness is determined by the coefficients $\gamma_{k\mu}$. If the feedback is absent, i.e., all $\gamma_{k\mu}$ are zero, we obtain the simpler equations of an extrapolator without feedback:

$$\alpha_{nk}[x] = \sum_{\nu=0}^{l} \beta_{k\nu}\,x[(n-\nu)T], \qquad nT < t < (n+1)T. \tag{3.207}$$
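As a concrete instance of (3.203), (3.206), and (3.207), the sketch below implements a first-order ($s = 1$) polynomial extrapolator without feedback; the particular coefficients $\beta_{k\nu}$, which fit a line through the two most recent samples, are an illustrative assumption.

```python
# First-order (s = 1) polynomial extrapolator without feedback, a special
# case of Eqs. (3.203), (3.206), (3.207): on nT < t < (n+1)T the output is
#   y(t) = a_n0 + a_n1 * (t - nT),
# with coefficients taken as linear combinations of past samples,
#   a_n0 = x(nT),  a_n1 = (x(nT) - x((n-1)T)) / T
# (i.e., beta_00 = 1, beta_10 = 1/T, beta_11 = -1/T; all gamma_k_mu = 0).
def first_order_hold(samples, n, t, T=1.0):
    a0 = samples[n]
    a1 = (samples[n] - samples[n - 1]) / T
    return a0 + a1 * (t - n * T)       # m_k(t) = (t - nT)**k, Eq. (3.206)
```

For a linearly growing input the extrapolation is exact; for a constant input the device degenerates into a zero-order hold.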

The simplest nonlinear pulse element is a nonlinear pulse-amplitude element:

$$y = f[x(nT)], \qquad nT < t < (n+1)T, \tag{3.208}$$

where $f(x)$ is a given function.

Let us also give examples of relay-pulse elements. Consider pulse-width modulation

$$y(t) = \begin{cases} k_p\,\operatorname{sign} x(nT), & nT < t < nT + t_{1n},\\ 0, & nT + t_{1n} < t < (n+1)T, \end{cases} \tag{3.209}$$

with

$$0 \le t_{1n} = g[x(nT)] \le T, \tag{3.210}$$

where $g[x]$ is a given function. For an arbitrary input signal, the output signal will have the form of a step function assuming the values 0 and $\pm k_p$. Moreover, two sequences of switching moments can be separated out: $t_n = nT$ and $\bar t_n = nT + t_{1n}$. The first is independent of the form of the input signal, while the second does depend on it. Therefore, by the adopted classification, a pulse-width modulating element is relay-pulse. Thus, the terminology used here differs from the usual one. Hereinafter we shall consider, for concreteness, a system of equations of the form

$$\frac{dY}{dt} = AY + H L_d[\sigma], \qquad \sigma = J^T Y, \tag{3.211}$$

where $A$ is a constant matrix, $H$ and $J$ are constant vectors, and $L_d[\sigma]$ is a discrete (pulse or relay-pulse) element.
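The pulse-width element (3.209)–(3.210) can be sketched directly. The width law $g(x) = T\min(1, |x|)$ used below is only a hypothetical choice; the text requires nothing beyond $0 \le g[x] \le T$.

```python
# Sketch of the pulse-width modulating element, Eqs. (3.209)-(3.210).
# The width law g(x) = T * min(1, abs(x)) is a hypothetical example.
def pwm_output(x_sample, t, n, T=1.0, kp=1.0):
    """Output at time t inside (nT, (n+1)T), for the sample x_sample = x(nT)."""
    t1n = T * min(1.0, abs(x_sample))          # pulse duration g[x(nT)]
    if n * T < t < n * T + t1n:
        sign = 1.0 if x_sample > 0 else (-1.0 if x_sample < 0 else 0.0)
        return kp * sign
    return 0.0
```

The switching moments $nT$ are fixed, while $nT + t_{1n}$ move with the input, which is exactly the relay-pulse behavior described above.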

3.3.2 Sensitivity Equations of Pulse-Amplitude Systems

As a starting point, we consider the system of equations (3.211), where $L_d[\sigma]$ is a nonlinear extrapolator without feedback, so that on the intervals between the switching moments of the pulse element we have the following sequence of equations:

$$\frac{dY}{dt} = AY + H f[\tilde\sigma_s], \qquad sT < t < (s+1)T, \tag{3.212}$$

where $f(\tilde\sigma)$ is a single-valued, continuously differentiable nonlinear function and

$$\tilde\sigma_s = \sum_{k=0}^{q} \alpha_{sk}\,m_k(t), \qquad \alpha_{sk} = \sum_{\nu=0}^{l} \beta_{k\nu}\,\sigma[(s-\nu)T]. \tag{3.213}$$

To determine the motion of the system (3.212)–(3.213) for $t \ge 0$ it is necessary to define the values $\sigma(t)$ for $t = iT$ $(i = 0, -1, \ldots, -l)$. Denote

$$\tilde\sigma_s = \tilde\sigma(sT). \tag{3.214}$$


Let us also have a vector of initial conditions $Y(0) = Y_0$ such that $J^T Y_0 = \sigma_0$. Then, on the interval $0 < t < T$ we have, according to (3.212) and (3.213), the equation

$$\frac{dY}{dt} = AY + H f(\tilde\sigma_0), \tag{3.215}$$

where

$$\tilde\sigma_0 = \sum_{\nu=0}^{l} \sum_{k=0}^{q} \beta_{k\nu}\,\tilde\sigma[-\nu T]\,m_k(t). \tag{3.216}$$

Solving the linear equation (3.215) with the given initial conditions, we obtain

$$Y(t) = e^{At} Y_0 + \int_0^t e^{A(t-\tau)} H f(\tilde\sigma_0(\tau))\,d\tau. \tag{3.217}$$

Hence, for $t = T$ we have

$$Y(T) = e^{AT}\left[Y_0 + \int_0^T e^{-A\tau} H f(\tilde\sigma_0(\tau))\,d\tau\right] \tag{3.218}$$

and, correspondingly,

$$\sigma(T) = J^T e^{AT}\left[Y_0 + \int_0^T e^{-A\tau} H f(\tilde\sigma_0(\tau))\,d\tau\right]. \tag{3.219}$$

It can be easily seen that Relations (3.218) and (3.219), together with the values $\sigma_s$ for $s = -l+1, \ldots, 1$, completely define the equations of the motion for $T < t < 2T$ and the corresponding initial conditions. Continuing this process of constructing the solution, we find that the values $\sigma_{-l}, \ldots, \sigma_{-1}$ and the initial conditions $Y_0$ completely define the motion of the system under consideration. Then, consider the construction of the sensitivity equations assuming that $A = A(\alpha)$, $H = H(\alpha)$, $J = J(\alpha)$, $Y_0 = Y_0(\alpha)$, $f = f(\sigma, \alpha)$, $\beta_{k\nu} = \beta_{k\nu}(\alpha)$, $\sigma_{-l}, \ldots, \sigma_{-1}$ are known continuously differentiable functions of the parameter. Moreover, we will assume that the modulated functions $m_k(t) = m_k(t, \alpha)$ are continuously differentiable with respect to $\alpha$ for $0 < t < T$. In this case, the sampling period $T$ is assumed to be constant. It can be easily shown, using the above process of continuing the solution, that under the given assumptions the right sides of all the equations in (3.113) are continuously differentiable with respect to $\alpha$. Therefore, on the intervals


between the switching moments, the sensitivity equation is given by

$$\frac{dU}{dt} = AU + \frac{dA}{d\alpha}\,Y + \frac{d}{d\alpha}\bigl(H\,f[\tilde\sigma_s(t,\alpha),\alpha]\bigr). \tag{3.220}$$

Moreover,

$$\frac{d}{d\alpha}\,f[\tilde\sigma_s(t,\alpha),\alpha] = \frac{\partial}{\partial\alpha}\,f[\tilde\sigma_s(t,\alpha),\alpha] + \frac{\partial}{\partial\tilde\sigma_s}\,f[\tilde\sigma_s(t,\alpha),\alpha]\,\frac{d\tilde\sigma_s}{d\alpha}, \tag{3.221}$$

and

$$\frac{d\tilde\sigma_s}{d\alpha} = \frac{d}{d\alpha}\left[\sum_{\nu=0}^{l}\sum_{k=0}^{q} \beta_{k\nu}(\alpha)\,\sigma_{s-\nu}(\alpha)\,m_k(t,\alpha)\right]. \tag{3.222}$$

In practical cases, Relations (3.220)–(3.222) can be substantially simplified. Next, we consider the break conditions for the sensitivity functions. Using the general formula (3.71) and the fact that, by assumption, the switching moments are independent of the parameter, we arrive at the following conclusion: Under the given assumptions the sensitivity functions of the linear pulse system are continuous with respect to t, despite the fact that the right side of (3.212) has discontinuities at the switching moments. This proposition remains valid also for the case when the functions mk (t) are continuously differentiable on the interval 0 ≤ t ≤ T , provided that the corresponding break moments are independent of the parameter.

3.3.3 Sensitivity of Pulse-Amplitude Systems with Respect to Sampling Period

In the present paragraph we consider some characteristic features of sensitivity investigation with respect to the parameter $T$ characterizing periodicity of the pulse sequence. For simplicity, we consider systems with the simplest pulse-amplitude element (3.202), so that the motion equations have the form

$$\frac{dY}{dt} = AY + kH\sigma(nT), \qquad \sigma = J^T Y, \qquad nT < t < (n+1)T. \tag{3.223}$$

If the vector of initial conditions is $Y(0) = Y_0$, for $0 < t < T$ we have

$$Y(t) = e^{At} Y_0 + k\left[\int_0^t e^{A(t-\tau)} H\,d\tau\right]\sigma_0. \tag{3.224}$$

For $t = T$ we obtain

$$Y(T) = P Y_0, \tag{3.225}$$

where

$$P = e^{AT} + kA^{-1}\left(e^{AT} - E\right)HJ^T. \tag{3.226}$$

From (3.225) it follows that

$$Y(nT) = P^n Y_0. \tag{3.227}$$
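For a scalar plant ($n = 1$) the matrix (3.226) and the sampled motion (3.227) reduce to ordinary numbers, which makes the stability condition discussed below, and the limit (3.244), easy to check numerically; the parameter values in the sketch are illustrative assumptions.

```python
import math

# Scalar (n = 1) version of Eqs. (3.226)-(3.227): with A = a, H = h, J = j,
#   P = e^{aT} + (k/a) * (e^{aT} - 1) * h * j,    Y(nT) = P**n * Y0.
# The values of a, k, h, j are assumptions chosen for the example.
def transition_factor(T, a=-1.0, k=0.5, h=1.0, j=-1.0):
    eAT = math.exp(a * T)
    return eAT + (k / a) * (eAT - 1.0) * h * j

def is_stable(T):
    # scalar form of the root condition |lambda| < 1
    return abs(transition_factor(T)) < 1.0
```

Because $P$ depends continuously on $T$, stability at some $T = T^*$ persists for nearby sampling periods; and when $|P| < 1$ the product $i\,P^i$ still tends to zero, which is the scalar content of (3.244).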

It is known [13, 117] that if the roots of the characteristic equation

$$\det(\lambda E - P) = 0 \tag{3.228}$$

are inside the unit circle, the system (3.223) is asymptotically stable and all its solutions tend to zero (in norm) as $t \to \infty$. The condition of asymptotic stability can be written in analytical form as

$$|\lambda_\rho| < 1, \qquad \rho = 1, \ldots, n, \tag{3.229}$$

where $\lambda_\rho$, $\rho = 1, \ldots, n$, are the roots of Equation (3.228). Let us expand the characteristic equation (3.228) in detail. With account for (3.226), we have

$$\det(\lambda E - P) = \det\bigl(\lambda E - e^{AT} - kA^{-1}(e^{AT} - E)HJ^T\bigr) = 0. \tag{3.230}$$

As follows from (3.230), the coefficients of the characteristic equation are continuous functions of the parameter $T$. Since the asymptotic stability conditions (3.229) are expressed by strict inequalities with respect to the coefficients of the characteristic equation, we can conclude that if the conditions (3.229) hold for some $T = T^*$, they hold also for $T = T^* + \Delta T$ if $|\Delta T|$ is sufficiently small. Thus, denoting by $Y = Y(t, Y_0, T)$ the general solution of Equation (3.223) and assuming that (3.229) holds for $T = T^*$, for sufficiently small $|\Delta T|$ the additional motions

$$\Delta Y = Y(t, Y_0, T^* + \Delta T) - Y(t, Y_0, T^*) \tag{3.231}$$

are uniformly bounded and tend to zero as $t \to \infty$. Next, we investigate sensitivity with respect to the parameter $T$, assuming that the matrix $A$ and the vectors $H$ and $J$ are independent of $\alpha = T$. Differentiation of Equations (3.223) with respect to $T$ yields

$$\frac{dU}{dt} = AU + kHJ^T \frac{d}{dT}\,Y(nT, T), \tag{3.232}$$

where, according to (3.89),

$$\frac{d}{dT}\,Y(nT, T) = U(nT + 0) + n\dot Y(nT + 0) = U(nT - 0) + n\dot Y(nT - 0). \tag{3.233}$$

Then, consider the breaks of the sensitivity function $U$. In this case, the switching moments $t_i$ $(i = 0, 1, \ldots)$ are equal to

$$t_i = iT. \tag{3.234}$$

Therefore,

$$\frac{dt_i}{dT} = i. \tag{3.235}$$

At the break instant $t = t_i$ we have

$$\dot Y_i^- = AY_i + kHJ^T Y_{i-1}, \qquad \dot Y_i^+ = AY_i + kHJ^T Y_i. \tag{3.236}$$

Hence,

$$\Delta F_i = \dot Y_i^+ - \dot Y_i^- = kHJ^T(Y_i - Y_{i-1}) = kH(\sigma_i - \sigma_{i-1}). \tag{3.237}$$

Then, using (3.71) and (3.235), we obtain

$$U_i^+ = U_i^- + ikHJ^T(Y_{i-1} - Y_i). \tag{3.238}$$

Referring to (3.227), we find the expanded form

$$U_i^+ = U_i^- + ikHJ^T(E - P)P^{i-1} Y_0. \tag{3.239}$$

Using the above relations, we can easily construct a general expression for the sensitivity function. Similarly to (3.227), from (3.232) and (3.233) we find

$$U[(n+1)T - 0] = P\,U(nT + 0) + nQ\,\dot Y(nT + 0), \tag{3.240}$$

where $P$ is the matrix given by (3.226), and

$$Q = kA^{-1}(e^{AT} - E)HJ^T. \tag{3.241}$$

Bearing in mind that

$$\dot Y(nT + 0) = (A + kHJ^T)Y(nT) = (A + kHJ^T)P^n Y_0, \tag{3.242}$$

and using the break conditions (3.239), we obtain the recurrent relation

$$U_{i+1}^+ = P U_i^+ + \left[(i+1)kHJ^T(E - P) + ikA^{-1}\bigl(e^{AT} - E\bigr)HJ^T\bigl(A + kHJ^T\bigr)\right]P^i Y_0, \tag{3.243}$$

which yields explicit expressions for the values $U_i^+$ and $U_i^-$. Let us note that the sequences $U_i^+$ and $U_i^-$ tend to a common limit, despite the presence of an infinitely increasing multiplier $i$ in (3.239). This is explained by the fact that under the condition (3.229) the elements of the matrix $P^i$ tend to zero as $i \to \infty$ as a geometric progression, so that we have

$$\lim_{i\to\infty} i P^i = 0. \tag{3.244}$$

3.3.4 Sensitivity Equations of Systems with Pulse-Width Modulation

Consider the equations

$$\frac{dY}{dt} = AY + Hf(\sigma, t), \qquad \sigma = J^T Y, \tag{3.245}$$

where the controlling function is given by

$$z = f(\sigma, t) = \begin{cases} \operatorname{sign}\sigma(iT) & \text{for } iT < t < iT + t_{1i},\\ 0 & \text{for } iT + t_{1i} < t < (i+1)T. \end{cases} \tag{3.246}$$

Here $T$ is a fixed sampling period, while the pulse duration $t_{1i}$ is a function of the value $\sigma_i = \sigma(iT)$: $t_{1i} = g(\sigma_i)$, where $g(\sigma_i)$ is a differentiable function satisfying the conditions (3.210). According to the adopted classification, systems with pulse-width modulation are relay-pulse ones, because we have both switching caused by the pulse element and switching of a relay type at the moments $iT + t_{1i}$. The latter depend on the properties of the input signal $\sigma(t)$. According to (3.245) and (3.246), between the switching moments we have

$$\frac{dY}{dt} = AY + H\operatorname{sign}\sigma_i, \qquad \sigma_i = J^T Y_i, \qquad iT < t < iT + t_{1i}, \tag{3.247}$$

$$\frac{dY}{dt} = AY, \qquad iT + t_{1i} < t < (i+1)T. \tag{3.248}$$

Let us assume that solutions of Equations (3.247) and (3.248) are sewn together according to the continuity conditions. Then, we have two sequences of switching moments:

$$t_i = iT, \qquad \bar t_i = iT + t_{1i}. \tag{3.249}$$

At the moments $t_i$ a transition from Equation (3.248) to Equation (3.247) takes place, so that

$$\Delta F(t_i) = \Delta F_i = H \operatorname{sign}\sigma_i. \tag{3.250}$$

Analogously, at the moments $\bar t_i = iT + t_{1i}$ we have

$$\Delta F(\bar t_i) = \Delta \bar F_i = -H \operatorname{sign}\sigma_i. \tag{3.251}$$

Assume that $A$, $H$, and $J$ depend on the parameter $\alpha$, while $T$ is independent of $\alpha$. Then, $dt_i/d\alpha = 0$ and the vector of sensitivity functions remains continuous at the moments $t_i$. Hence, the vector

$$Y_n = Y(nT) = Y(nT, \alpha) \tag{3.252}$$

is continuously differentiable with respect to $\alpha$. Therefore,

$$\frac{d\bar t_i}{d\alpha} = \frac{dg_i}{d\sigma}\cdot\frac{d\sigma_i}{d\alpha} = \frac{dg_i}{d\sigma}\left(\frac{dJ^T}{d\alpha}\,Y_i + J^T U_i\right). \tag{3.253}$$

Hence, at the moments $\bar t_i$ the break of the sensitivity function vector is equal to

$$U(\bar t_i + 0) - U(\bar t_i - 0) = \operatorname{sign}\sigma_i\,\frac{dg_i}{d\sigma}\,H\left(\frac{dJ^T}{d\alpha}\,Y_i + J^T U_i\right). \tag{3.254}$$

Moreover, the corresponding sensitivity equations are obtained by differentiating (3.247) and (3.248) with respect to the parameter. As a result, we obtain

$$\frac{dU}{dt} = AU + \frac{dA}{d\alpha}\,Y + \frac{dH}{d\alpha}\,\operatorname{sign}\sigma_i, \qquad iT < t < iT + t_{1i}, \tag{3.255}$$

$$\frac{dU}{dt} = AU + \frac{dA}{d\alpha}\,Y, \qquad iT + t_{1i} < t < (i+1)T. \tag{3.256}$$

Then, take the value of the sampling period as the parameter and assume that $A$, $H$, and $J$ are independent of $T$. In this case, both sequences of switching moments (3.249) depend on the parameter. Since

$$\frac{dt_i}{dT} = i, \tag{3.257}$$

at the moments $t_i = iT$ the break of the sensitivity function vector is given by

$$U_i^+ - U_i^- = U(t_i + 0) - U(t_i - 0) = -i \operatorname{sign}\sigma_i\,H. \tag{3.258}$$

Let us find the derivatives of the switching moments $\bar t_i$ with respect to the parameter $T$. In the case at hand, the duration of the pulse $t_{1i}$ depends on the parameter via the value $\sigma_i = \sigma(iT, T)$. Similarly to (3.233), we have, with reference to (3.89),

$$\frac{d}{dT}\,Y(iT, T) = U(iT + 0) + i\dot Y(iT + 0) = U(iT - 0) + i\dot Y(iT - 0). \tag{3.259}$$

Hence,

$$\frac{d}{dT}\,\sigma(iT, T) = J^T U_i^- + iJ^T \dot Y_i^-. \tag{3.260}$$

Since

$$\frac{d\bar t_i}{dT} = i + \frac{dg_i}{d\sigma}\,\frac{d\sigma_i}{dT}, \tag{3.261}$$

using (3.249), (3.260), and (3.261), we obtain

$$U(\bar t_i + 0) - U(\bar t_i - 0) = \operatorname{sign}\sigma_i\,H\left[i + \frac{dg_i}{d\sigma}\left(J^T U_i^- + iJ^T \dot Y_i^-\right)\right]. \tag{3.262}$$

The sensitivity equations obtained from (3.247) and (3.248) by differentiating with respect to $T$ are especially simple:

$$\frac{dU}{dt} = AU. \tag{3.263}$$

Chapter 4

Sensitivity of Discontinuous Systems Given by Operator Models

4.1 Operator Parametric Models of Control Systems

4.1.1 Operator Models of Control Systems

A widespread method of mathematical description of control systems consists in presenting a block-diagram including an aggregate of linear elements and nonlinear transformers of directed action. Stationary elements are given by their transfer functions or frequency responses. Such a method of control system description is called its operator model. Due to their generality and physical background, operator models and investigation methods based on them are widely used in automatic control theory [69, 70, 87]. Nevertheless, the use of operator models for systems with discontinuous and discrete characteristics calls for serious mathematical justification, without which investigation methods that seem obvious at first glance may produce erroneous results. In the present chapter we present a method of investigating the sensitivity of discontinuous control systems directly on the basis of their operator models. With this aim in view, we give a general description of a broad class of operator models, including systems that contain nonstationary linear elements. It is shown that for systems containing elements with discontinuous and discrete characteristics, operator models are to be considered as equations in generalized functions.

4.1.2 Operator of Directed Action Element

If, for a fixed ViM , there is a unique relation between inputs and outputs of a model of a directed action element, its mathematical model is an operator, i.e., a rule (mapping) LiM that makes it possible to determine YiM for given XiM and ViM . Properties of this operator are defined by the


mapping $L_{iM}$ and the sets $M_X$, $M_Y$, and $M_V$ of input and output signals and disturbances acting on the element. In automatic control problems we encounter, most of all, input signals $X$ and disturbances $V$ given as vector functions of time. Hereinafter we consider only such functions. The output signals are usually either time-dependent functions or constants. In the latter case the associated operators are called functionals. In practice, the operator $L_{iM}$ can be given in various forms: either in explicit form allowing calculation of $Y_{iM}$ directly for given $X_{iM}$ and $V_{iM}$, or in implicit form defined as an aggregate of conditions allowing, in principle, determination of $Y_{iM}$ for given $X_{iM}$ and $V_{iM}$. Hereinafter the subscript $M$ is omitted for brevity. Consider some typical examples of operators and functionals encountered in control problems.

Example 4.1 Let $M_X$ be the set of scalar inputs having continuous derivatives up to the $m$-th inclusive, and $M_V$ the set of scalar continuous exogenous disturbances $v$. Then, the equation

$$y = a_1 x^{(m)} + a_2 x^{(m-1)} + \ldots + a_m x + v, \tag{4.1}$$

where ai (i = 1, . . . , m) are constants, defines, in an explicit form, an operator mapping the sets MX and MV to the set of continuous outputs MY .
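An explicit operator of this kind is straightforward to evaluate on a grid. The sketch below takes a two-term instance in the spirit of (4.1), $y = a_1 x' + a_2 x + v$, with the derivative replaced by a central difference; the particular coefficients are illustrative assumptions.

```python
# Grid approximation of an explicit differential operator of the form (4.1):
#   y = a1 * x' + a2 * x + v,
# with x' replaced by a central difference.  The coefficients a1, a2 are
# arbitrary illustrative constants.
def apply_operator(x, v, dt, a1=2.0, a2=3.0):
    y = []
    for i in range(1, len(x) - 1):
        x_dot = (x[i + 1] - x[i - 1]) / (2.0 * dt)   # central difference
        y.append(a1 * x_dot + a2 * x[i] + v[i])
    return y
```

The rule maps the admissible input set $M_X$ (together with the disturbance) to a set of outputs, which is exactly the operator viewpoint of the example.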

Example 4.2 Let $y(t)$ be the transient process in a stable system under the unit step, and let

$$\sigma = \frac{y_{\max} - y(\infty)}{y(\infty)} \tag{4.2}$$

be the overshoot. Equation (4.2) defines an operator (in the given case, a functional). The time interval within which the initial deviation $y(0)$ decreases $k$ times is also a functional, denoted $t_k(x)$.

Example 4.3 Let us have the vector differential equation

$$\frac{dY}{dt} = f(Y, X, V). \tag{4.3}$$

If the conditions of existence and uniqueness of the solution of Cauchy's problem hold, for a given initial condition $Y(0) = Y_0$ we have the general solution of the form

$$Y(t) = Y(t, Y_0, X, V), \qquad Y(t_0, Y_0, X, V) \equiv Y_0. \tag{4.4}$$

For a fixed $Y_0$, Equation (4.4) defines the corresponding operator $L$. For given initial conditions, Equation (4.3) defines the operator (4.4) in implicit form. It is known that only in some special cases can we construct an explicit expression for the operator (4.4).

Example 4.4 Various quality criteria of control systems are also functionals, for instance, an expression of the form

$$y = \iint_Q f(x, v)\,P_{xv}\,dx\,dv, \tag{4.5}$$

where $f(x, v)$ is a given function, $P_{xv}$ is a joint probability density, and $Q$ is a region in the space of the variables $x$, $v$.
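The overshoot functional of Example 4.2 can be evaluated numerically on a sampled transient. The test process below, the unit-step response of an underdamped second-order link with damping $\zeta = 0.5$, is an illustrative assumption; its exact overshoot is $e^{-\pi\zeta/\sqrt{1-\zeta^2}} \approx 0.163$.

```python
import math

# The overshoot functional (4.2), sigma = (y_max - y(inf)) / y(inf),
# evaluated on a sampled transient process.
def overshoot(y, y_inf):
    return (max(y) - y_inf) / y_inf

# Illustrative transient: unit-step response of a second-order link with
# damping zeta and natural frequency wn (an assumption for the example).
def step_response(t, zeta=0.5, wn=1.0):
    wd = wn * math.sqrt(1.0 - zeta ** 2)
    phi = math.acos(zeta)
    return 1.0 - math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta ** 2) * math.sin(wd * t + phi)
```

`overshoot` is a functional in the sense of the text: it maps a whole process $y(t)$ to a single number.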

4.1.3 Families of Operators

In most cases, it is impossible to describe the operation of real elements and systems by unique relations of the form (4.1). Most often, the mathematical description of an element determines a unique relation between the input and output only under some additional conditions.

Example 4.5 Let the input and output be related by Equation (4.3), but the initial conditions are not given. In this case, the input and output will be uniquely connected only if some additional conditions are given, viz., initial ($Y(0) = Y_0$) or boundary ones. If the initial conditions are not fixed, Equation (4.4) defines a family (set) of operators depending on a finite number of scalar parameters, namely, the components of the vector $Y_0$. To describe many elements and systems, we must use more complex families of operators.


Example 4.6 Consider the following simplest equation with a delayed argument:

$$\frac{dy}{dt} = ky(t-1) + x, \qquad t \ge 0, \quad k = \text{const}. \tag{4.6}$$

If the initial conditions are given by

$$y(t) = \varphi(t), \qquad -1 \le t \le 0, \tag{4.7}$$

where $\varphi(t)$ is a given continuous function, Equation (4.6) has a unique solution. Using the step method [125], we have

$$y(t) = y_1(t) = k\int_0^t \varphi(\tau - 1)\,d\tau + \int_0^t x(\tau)\,d\tau + \varphi(0), \qquad 0 \le t \le 1,$$

$$y(t) = y_2(t) = k\int_1^t y_1(\tau - 1)\,d\tau + \int_1^t x(\tau)\,d\tau + y_1(1), \qquad 1 \le t \le 2, \tag{4.8}$$

and so on. As a result, for a given initial function there is a unique relation (operator)

$$y = y(x, \varphi(t)). \tag{4.9}$$

Nevertheless, if the function φ(t) is not specified, Relation (4.9) defines a family of operators depending on the set of functions φ(t) that can occur in the problem at hand.
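The step-method construction of Example 4.6 can be mirrored by a simple Euler integration that stores the trajectory and looks up the delayed value. The initial function $\varphi \equiv 1$, the input $x \equiv 0$, and $k = -0.5$ are illustrative assumptions; for them the step method gives $y(1) = 1 + k$ and $y(2) = 1 + 2k + k^2/2$.

```python
# Euler integration of the delay equation (4.6), dy/dt = k*y(t-1) + x(t),
# with history y(t) = phi(t) on [-1, 0].  For phi ≡ 1 and x ≡ 0 the step
# method gives y(t) = 1 + k*t on [0, 1] and
# y(t) = 1 + k + k*(t-1) + k**2 * (t-1)**2 / 2 on [1, 2].
def solve_delay(k=-0.5, t_end=2.0, dt=1e-3, phi=lambda t: 1.0, x=lambda t: 0.0):
    n = round(t_end / dt)
    ys = [phi(0.0)]
    for i in range(n):
        t = i * dt
        t_lag = t - 1.0
        y_lag = phi(t_lag) if t_lag <= 0.0 else ys[round(t_lag / dt)]
        ys.append(ys[-1] + dt * (k * y_lag + x(t)))
    return ys
```

Changing $\varphi$ changes the operator: exactly the dependence on the initial function that the text emphasizes.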

4.1.4 Parametric Properties of Operators

DEFINITION 4.1 A mathematical model of an element (system) is described by a parametric family of operators if it can be represented in the form

$$Y = L(X, \alpha), \tag{4.10}$$

where α is a finite set of free (independent of L, Y , and V ) variables. Moreover, it is assumed that for fixed values of parameters α we have a unique relation between Y and X. In a number of problems, parametric


properties of operators appear in a natural way. For example, as follows from (4.4), mathematical description of an element by a system of differential equations is equivalent to defining a parametric family of operators. Nevertheless, such a situation is far from universal. For instance, Equation (4.9) does not, in the general case, define a parametric family of operators, because it depends on an arbitrary function $\varphi(t)$. At the same time, we can separate out various parametric families from the set of operators (4.9). For this purpose we can consider initial functions $\varphi(t)$ belonging to a finite parametric family

$$\varphi_\alpha(t) = \varphi(t, \alpha), \qquad -1 \le t \le 0. \tag{4.11}$$

Then, from (4.11) we have

$$y = y(x, \varphi(t, \alpha)) = y(x, \alpha). \tag{4.12}$$

For a fixed parametric family of functions $\varphi_\alpha(t)$, Equation (4.12) defines a parametric family of operators. We should bear in mind that the same initial equation of an element may generate different families of operators. For instance, fixing various components of the vector $Y_0$ in (4.4), we will obtain different families of operators. Analogously, Equation (4.6) generates an infinite set of operators of the form (4.12). Separating out parametric families of operators generated by an equation, we actually separate out a parametric family of particular solutions of this equation.

4.1.5 Parametric Families of Linear Operators

DEFINITION 4.2 An operator $Y = L(X)$ is called linear if for any admissible inputs $X_1$ and $X_2$ and constants $a_1$ and $a_2$ we have

$$L(a_1 X_1 + a_2 X_2) = a_1 L(X_1) + a_2 L(X_2). \tag{4.13}$$

Equation (4.13) is a mathematical expression of the superposition principle for a linear operator. If the input $X = X(t)$ and output $Y = Y(t)$ are vector functions of time, in most practical cases the analytical representation of the linear operator


has the form

$$Y(t) = L(X) = \int_{-\infty}^{\infty} H(t, \tau)X(\tau)\,d\tau = \int_{-\infty}^{\infty} H(t, t-\tau)X(t-\tau)\,d\tau. \tag{4.14}$$

DEFINITION 4.3 The matrix $H(t, \tau)$ is called the Green function (matrix).

Obviously, the Green matrix must be such that the improper integral in (4.14) converges for any input vector $X(t)$ admissible in the specific problem under consideration. For a number of problems, we can consider matrix inputs $X(t)$ of corresponding dimensions in (4.14). Take

$$X(\tau) = E\delta(s - \tau), \tag{4.15}$$

where $E$ is the identity matrix of the corresponding dimension, and $\delta(t)$ is Dirac's delta function. Then, from (4.14) we obtain the output matrix

$$Y = H(t, s) = L[E\delta(t - s)]. \tag{4.16}$$

From (4.16) it follows that the element $h_{ik}(t, \tau)$ of the Green matrix determines the $i$-th output of the operator for the case when a $\delta$-function acts upon the $k$-th input while the remaining inputs are zero.

DEFINITION 4.4 If the Green function $H(t, \tau)$ satisfies the condition

$$H(t, \tau) = 0 \quad \text{for } t < \tau, \tag{4.17}$$

the operator (4.14) is called causal. The corresponding Green function is called the weight function. For a causal operator, Equations (4.14) take the form

$$Y(t) = L(X) = \int_{-\infty}^{t} H(t, \tau)X(\tau)\,d\tau = \int_{0}^{\infty} H(t, t-\tau)X(t-\tau)\,d\tau. \tag{4.18}$$

An operator L(X) is not causal if (4.17) is not true. It should be noted that this terminology is fairly conventional. It can be shown that many processes


that are really implemented in practice are mathematically described by operators that are not causal [88].
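The causal convolution (4.18) can be discretized directly. In the sketch below the weight function $H(t, \tau) = e^{-(t-\tau)}$ and a unit step applied at $\tau = 0$ (so the lower limit of integration becomes 0) are illustrative assumptions; for them the exact output is $y(t) = 1 - e^{-t}$.

```python
import math

# Midpoint-rule approximation of the causal operator (4.18),
#   y(t) = integral over tau <= t of H(t, tau) * x(tau) d tau,
# for the illustrative weight H(t, tau) = exp(-(t - tau)) and an input that
# is a unit step at tau = 0 (so the integral runs from 0 to t).
def causal_output(t, n=20000,
                  weight=lambda t, tau: math.exp(-(t - tau)),
                  x=lambda tau: 1.0):
    d = t / n
    return sum(weight(t, (i + 0.5) * d) * x((i + 0.5) * d) * d for i in range(n))
```

Because the weight vanishes for $\tau > t$, the output at time $t$ depends only on past values of the input, which is the content of the causality condition (4.17).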

DEFINITION 4.5 A linear operator is called stationary if

$$H(t, \tau) = H(t - \tau). \tag{4.19}$$

Otherwise the operator is called nonstationary. For a stationary operator the general formulas (4.14) take the form

$$Y(t) = L(X) = \int_{-\infty}^{\infty} H(t - \tau)X(\tau)\,d\tau = \int_{-\infty}^{\infty} H(\tau)X(t - \tau)\,d\tau. \tag{4.20}$$

For a causal stationary operator, Equations (4.18) and (4.19) yield

$$H(\tau) = 0 \quad \text{for } \tau < 0. \tag{4.21}$$

For a causal stationary operator, with account for (4.21), Equations (4.20) take the form

$$Y(t) = \int_{-\infty}^{t} H(t - \tau)X(\tau)\,d\tau = \int_{0}^{\infty} H(\tau)X(t - \tau)\,d\tau. \tag{4.22}$$

If the Green function and the input signal are given by parametric descriptions of the form $H = H(t, \tau, \alpha)$ and $X = X(t, \alpha)$, respectively, where $\alpha$ denotes the set of parameters on which $H$ and $X$ depend, we have the following parametric family of outputs:

$$Y(t, \alpha) = \int_{-\infty}^{\infty} H(t, \tau, \alpha)X(\tau, \alpha)\,d\tau. \tag{4.23}$$

For (4.18) and (4.20) we have, respectively,

$$Y(t, \alpha) = \int_{-\infty}^{t} H(t, \tau, \alpha)X(\tau, \alpha)\,d\tau = \int_{0}^{\infty} H(t, t-\tau, \alpha)X(t-\tau, \alpha)\,d\tau, \tag{4.24}$$

$$Y(t, \alpha) = \int_{-\infty}^{t} H(t - \tau, \alpha)X(\tau, \alpha)\,d\tau = \int_{0}^{\infty} H(\tau, \alpha)X(t-\tau, \alpha)\,d\tau. \tag{4.25}$$

4.1.6 Transfer Functions and Frequency Responses of Linear Operators

A widespread method of describing linear stationary operators consists in using transfer functions and frequency responses. In the present paragraph we develop a general approach for linear nonstationary operators [87, 88].

DEFINITION 4.6 The function

$$W(\lambda, t) = e^{-\lambda t} L\bigl(Ee^{\lambda t}\bigr), \tag{4.26}$$

where $\lambda$ is a complex variable and $E$ is the identity matrix, is called the transfer function (matrix) of the linear operator (4.14). It is assumed that Relation (4.26) is considered in a region $\Gamma_\lambda$ of values of the argument $\lambda$ where Equation (4.26) has meaning, i.e., where the improper integral

$$W(\lambda, t) = \int_{-\infty}^{\infty} H(t, \tau)e^{-\lambda(t-\tau)}\,d\tau, \tag{4.27}$$

obtained from (4.14) and (4.26), converges. After a change of variable, from (4.27) we obtain

$$W(\lambda, t) = \int_{-\infty}^{\infty} H(t, t-\tau)e^{-\lambda\tau}\,d\tau = \int_{-\infty}^{\infty} G(t, \tau)e^{-\lambda\tau}\,d\tau, \tag{4.28}$$

where

$$G(t, \tau) = H(t, t - \tau). \tag{4.29}$$

Hence, for a fixed $t$, the transfer function is defined as the bilateral Laplace transform of the function $G(t, \tau)$. As follows from general properties of the bilateral Laplace transformation [19, 88], the region of convergence of the integral (4.27), provided that it exists, is a vertical strip

$$S_\lambda: \quad q_1(t) < \operatorname{Re}\lambda < q_2(t), \tag{4.30}$$

inside which the transfer function is analytic with respect to $\lambda$. Thus, by the transfer function of the operator (4.14) we mean a complex-valued function defined by (4.27) and (4.28) in the strip (4.30). According to (4.17), for a causal operator the second integral in (4.28) yields

$$W(\lambda, t) = \int_{0}^{\infty} G(t, \tau)e^{-\lambda\tau}\,d\tau, \tag{4.31}$$

which is, for a fixed $t$, the ordinary (right-sided) Laplace transform. The region of convergence of the integral (4.31) (provided that it exists) is a half-plane

$$\operatorname{Re}\lambda > q_1(t). \tag{4.32}$$

Thus, the transfer function of a causal operator defined by (4.31) is analytic with respect to $\lambda$ in the half-plane (4.32). Now we assume that the strip (4.30) (or the half-plane (4.32)) contains the imaginary axis, i.e.,

$$q_1(t) < 0 < q_2(t) \tag{4.33}$$

or, respectively,

$$q_1(t) < 0. \tag{4.34}$$

In the case (4.33) the integral (4.27) has meaning for $\lambda = i\omega$, where $\omega$ is any real value, and the following expression is defined:

$$W(i\omega, t) = \int_{-\infty}^{\infty} H(t, \tau)e^{-i\omega(t-\tau)}\,d\tau = \int_{-\infty}^{\infty} G(t, \tau)e^{-i\omega\tau}\,d\tau. \tag{4.35}$$

Analogously, for (4.31) under the condition (4.34) the following formula has meaning:

$$W(i\omega, t) = \int_{0}^{\infty} G(t, \tau)e^{-i\omega\tau}\,d\tau. \tag{4.36}$$

DEFINITION 4.7 The functions (4.35) and (4.36) will be called the frequency responses of the operator.

It should be emphasized that if (4.33) and (4.34) do not hold, the corresponding frequency responses of the linear operators might, generally speaking, not exist. The above relations are greatly simplified for stationary operators, because from (4.20) and (4.27) we have

$$W(\lambda, t) = \int_{-\infty}^{\infty} H(\tau)e^{-\lambda\tau}\,d\tau = W(\lambda), \tag{4.37}$$

i.e., the transfer function of a stationary operator is independent of $t$. Moreover, the strip of convergence $S_\lambda$ (if it exists) is independent of $t$ and has the form

$$q_1 < \operatorname{Re}\lambda < q_2. \tag{4.38}$$

If the operator is causal, from (4.37) we obtain

$$W(\lambda) = \int_{0}^{\infty} H(\tau)e^{-\lambda\tau}\,d\tau, \tag{4.39}$$

with $\operatorname{Re}\lambda > q_1$. If the strip (4.38) (or the half-plane $\operatorname{Re}\lambda > q_1$) contains the imaginary axis, from (4.39) we obtain the following expression for the frequency response, which is also independent of $t$:

$$W(i\omega) = \int_{0}^{\infty} H(\tau)e^{-i\omega\tau}\,d\tau. \tag{4.40}$$
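The stationary formulas (4.39)–(4.40) are easy to check numerically. The weight function $H(\tau) = e^{-a\tau}$, $a > 0$, is an illustrative assumption; for it $q_1 = -a < 0$, the strip contains the imaginary axis, and the exact frequency response is $W(i\omega) = 1/(a + i\omega)$.

```python
import cmath

# Numerical evaluation of the frequency response (4.40) for the illustrative
# causal stationary weight function H(tau) = exp(-a*tau), a > 0, for which
# the exact value is W(i*omega) = 1/(a + i*omega).
def freq_response(omega, a=1.0, T=40.0, n=100_000):
    d = T / n
    total = 0j
    for i in range(n):
        tau = (i + 0.5) * d                    # midpoint rule
        total += cmath.exp(-(a + 1j * omega) * tau) * d
    return total
```

Truncating the integral at $T = 40$ is harmless here because the weight decays exponentially; the quadrature then reproduces $1/(a + i\omega)$ to high accuracy.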

If a linear operator (4.14) is given by a parametric family of Green


functions $H(t, \tau, \alpha)$, the formula

$$W(\lambda, t, \alpha) = \int_{-\infty}^{\infty} G(t, \tau, \alpha)e^{-\lambda\tau}\,d\tau \tag{4.41}$$

determines a parametric family of transfer functions. The strip of convergence of (4.41) depends on $t$ and $\alpha$. For the stationary case we similarly obtain a family of transfer functions $W(\lambda, \alpha)$. In a similar way we can construct the parametric families of frequency responses $W(t, i\omega, \alpha)$ and $W(i\omega, \alpha)$. Let us note that transfer functions and frequency responses uniquely define the associated Green function and, therefore, the initial operator (4.14). Indeed, using the general inversion formula for the bilateral Laplace transformation [19], from (4.28) we obtain

$$\frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty} W(\lambda, t)e^{\lambda\tau}\,d\lambda = G(t, \tau) = H(t, t - \tau), \qquad c \in S_\lambda. \tag{4.42}$$

It should be emphasized that it is necessary to specify a strip of convergence for relations using the bilateral Laplace transform, because the same inversion formula (4.42) leads to different originals in different strips of convergence. Accordingly, if the existence conditions hold for a frequency response, using the inverse Fourier transformation yields

$$\frac{1}{2\pi}\int_{-\infty}^{\infty} W(t, i\omega)e^{i\omega\tau}\,d\omega = H(t, t - \tau). \tag{4.43}$$

Having a parametric family of transfer functions, analogously to (4.42) we have

$$\frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty} W(\lambda, t, \alpha)e^{\lambda\tau}\,d\lambda = H(t, t - \tau, \alpha), \qquad c \in S_\lambda(\alpha), \tag{4.44}$$

where Sλ (α) is a strip (or half-plane) of convergence that depends on α in the general case. Hereinafter we will use special forms of representation of linear operators. Let relations between Y and X be defined by a linear operator (4.14) having

© 2000 by CRC Press LLC

a transfer function W (t, λ) in the strip Sλ . Then, we will write Y = W (λ, t)X,

Re λ ∈ Sλ .

(4.45)

Further, in many cases the strip of convergence will not be specified. Nevertheless, one should bear in mind that a transfer function is defined, generally speaking, only in a corresponding strip of convergence.

DEFINITION 4.8 Relation (4.45) will be called the operator equation of a linear element.

The operator equation of an element uniquely defines the corresponding operator. Indeed, having a transfer function, by Formula (4.42) we can find the corresponding Green function H(t, τ) and the operator (4.14). From (4.26) it also follows that if X = Ee^{λt}, Re λ ∈ S_λ, then Y = W(t, λ)Ee^{λt}. Moreover, using the above relations, it is easy to verify that if

$$ X = e^{\nu t}X_1, \qquad Y = e^{\nu t}Y_1, \qquad (4.46) $$

then the values X_1 and Y_1 are related by the operator equation

$$ Y_1 = W(\lambda+\nu,t)X_1, \qquad (4.47) $$

provided that the convergence conditions hold. If there is a parametric family of operators, the corresponding operator equation has the form

$$ Y = W(\lambda,t,\alpha)X. \qquad (4.48) $$

4.1.7 Parametric Operator Model of System

Using the notions introduced above, we can give a formal definition of an operator model of a system.

DEFINITION 4.9 We will say that a system is given by an operator model if it is represented in the form of a block-diagram containing linear and nonlinear elements, where the linear elements are defined by equations of the form

$$ Y_i = W_i(\lambda,t)X_i, \qquad i = 1,\dots,s. \qquad (4.49) $$

For brevity, we did not mention the strip of convergence in (4.49). Nevertheless, we have to bear in mind that an operator model including relations of the form (4.49) has meaning only if all the transfer functions W_i(t, λ) have a common strip of convergence. The nonlinear elements included in the operator model are described by relations of the form

$$ X_i = L_i(Y_1,\dots,Y_s), \qquad i = 1,\dots,m, \qquad (4.50) $$

where L_i are nonlinear (possibly discrete) operators. Thus, according to the above definitions, by an operator model of a system we hereinafter mean the aggregate of relations (4.49) and (4.50). If the transfer functions W_i(t, λ) depend on a scalar parameter α, instead of (4.49) and (4.50) we will have the following system of equations:

$$ Y_i = W_i(\lambda,t,\alpha)X_i, \qquad X_i = L_i(Y_1,\dots,Y_s,\alpha). \qquad (4.51) $$

DEFINITION 4.10 The system of equations (4.51) is called a parametric operator model of the system.

Let us discuss some ideas connected with rigorous investigation of systems given by (4.49) and (4.50). Their main feature is that in many cases they require going beyond the limits of the classical theory of functions and the basic operations with them. Below we consider some examples.

1. According to (4.16), the notion of the Green function of a linear operator is connected with investigation of the element response to input signals in the form of Dirac delta functions. But it is known [22, 124] that the delta function is not a function in the classical sense of this term, and the main formula (4.14) has, strictly speaking, no meaning for the input (4.15) if the integral in (4.14) is understood in the traditional sense. Thus, the very definition of Green functions calls for a generalization of the class of input signals and of the integration operation.

2. Consider a linear stationary element given by the operator equation

$$ y = \omega(\lambda)x, \qquad (4.52) $$

where

$$ \omega(\lambda) = \frac{c\lambda}{\lambda^2 - (a+b)\lambda + ab} \qquad (4.53) $$

is the transfer function, and a, b, and c are constants. For the element (4.52) we can calculate and realize physically the response to an input signal of the form

$$ x(t) = \begin{cases} 0, & t < 0, \\ 1, & t > 0. \end{cases} \qquad (4.54) $$

On the other hand, write (4.52) in the form of a differential equation as

$$ \frac{d^2y}{dt^2} - (a+b)\frac{dy}{dt} + ab\,y = c\,\frac{dx}{dt}. \qquad (4.55) $$

For the input (4.54) this equation has no meaning, because the function (4.54) is not differentiable. It is known [2, 85, 87] that this contradiction can be resolved only if differentiation is understood in a generalized sense.

3. Similar difficulties arise in sensitivity analysis. Let, for instance, a = a(α), b = b(α), and c = c(α) in Equation (4.53) be functions of a scalar parameter α. Then, from the viewpoint of classical analysis it is hardly possible to define what is meant by a sensitivity function and how it can be calculated directly from Equation (4.52).

Below we show that the above and analogous problems are naturally overcome if we understand the operator models (4.49) and (4.50) as equations in generalized functions [2, 85, 87]. In the next paragraph we present minimal information on generalized functions and operations with them. After that, we present a method of sensitivity investigation based on the use of generalized functions.

4.2 Operator Models of Discontinuous Systems

4.2.1 Generalized Functions

DEFINITION 4.11 A generalized function (after Schwartz) is a linear continuous functional F(φ) defined over the set of real functions φ(t) such that each function from the set is differentiable an infinite number of times and is finite, i.e., vanishes outside some bounded interval on the t-axis.

DEFINITION 4.12 Functions φ(t) satisfying the above conditions are called basic (test) functions.

Due to linearity of the functional F we have

$$ F(a_1\varphi_1 + a_2\varphi_2) = a_1F(\varphi_1) + a_2F(\varphi_2), \qquad a_i = \text{const}, \; i = 1,2. \qquad (4.56) $$

Continuity of the functional F(φ) is understood in the following sense: if as i → ∞ the sequence of functions φ_i(t) goes to zero uniformly together with their derivatives of any order and is zero outside a bounded region, then

$$ \lim_{i\to\infty} F(\varphi_i) = 0. \qquad (4.57) $$

Hereinafter we denote the action of the functional F on a basic function φ by ⟨F, φ⟩. If f(t) is an ordinary function integrable on any finite interval, we can define a linear continuous functional F acting according to the formula

$$ \langle F,\varphi\rangle = \int_{-\infty}^{\infty} f(t)\varphi(t)\,dt. \qquad (4.58) $$

Identifying the function f(t) with the functional F, we find that the set of generalized functions contains, in particular, all functions integrable on any finite interval. An important example of a generalized function that cannot be reduced to an ordinary one is the delta function. The delta function is a functional acting according to the formula

$$ \langle \delta,\varphi\rangle = \varphi(0). \qquad (4.59) $$

The notation (4.58) is convenient to use even when F does not reduce to an ordinary function; in this notation, Formula (4.59) takes the following "widely known" form

$$ \int_{-\infty}^{\infty} \delta(t)\varphi(t)\,dt = \varphi(0). \qquad (4.60) $$
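Although the delta function is not an ordinary function, (4.60) can be illustrated numerically by replacing δ(t) with a narrow Gaussian spike (a standard mollifier); as the width shrinks, the pairing approaches φ(0). The concrete test function below is our own choice:

```python
import numpy as np

eps = 1e-2                                   # width of the Gaussian spike approximating delta(t)
t = np.linspace(-10.0, 10.0, 400001)
delta_eps = np.exp(-t**2/(2*eps**2)) / (eps*np.sqrt(2*np.pi))
phi = np.exp(-t**2) * np.cos(t)              # smooth, rapidly decaying test function, phi(0) = 1
f = delta_eps * phi
# trapezoidal approximation of the pairing (4.60); it approaches phi(0) as eps -> 0
val = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
```

The residual error is of order eps^2, consistent with a Taylor expansion of φ around the origin.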

Generalized functions always allow summation and multiplication by a number. Another important operation is the shift F(t − t_0), defined by the relation

$$ \langle F(t-t_0),\varphi\rangle = \langle F,\varphi(t+t_0)\rangle. \qquad (4.61) $$

For the delta function we have, according to (4.61),

$$ \langle \delta(t-t_0),\varphi\rangle = \langle \delta,\varphi(t+t_0)\rangle = \varphi(t_0). \qquad (4.62) $$

Using the integral form, we obtain

$$ \int_{-\infty}^{\infty} \delta(t-t_0)\varphi(t)\,dt = \varphi(t_0). \qquad (4.63) $$

4.2.2 Differentiation of Generalized Functions

If F is a generalized function, its derivative DF/dt in the sense of the theory of generalized functions is given by

$$ \left\langle \frac{DF}{dt},\varphi\right\rangle = -\left\langle F,\frac{d\varphi}{dt}\right\rangle. \qquad (4.64) $$

Higher-order derivatives D^nF/dt^n are defined by the relation

$$ \left\langle \frac{D^nF}{dt^n},\varphi\right\rangle = (-1)^n\left\langle F,\frac{d^n\varphi}{dt^n}\right\rangle. \qquad (4.65) $$

Since the functions φ(t) are, by assumption, infinitely differentiable and finite together with all derivatives, Formula (4.65) has meaning for any n, i.e., a generalized function is differentiable (in the above sense) an infinite number of times. According to (4.64), for the Dirac delta function we obtain

$$ \langle \dot\delta,\varphi\rangle = -\left\langle \delta,\frac{d\varphi}{dt}\right\rangle = -\dot\varphi(0). \qquad (4.66) $$

In integral form this is equivalent to

$$ \int_{-\infty}^{\infty} \dot\delta(t)\varphi(t)\,dt = -\dot\varphi(0). \qquad (4.67) $$

Using (4.65) and (4.61), we can obtain

$$ \int_{-\infty}^{\infty} \delta^{(n)}(t-t_0)\varphi(t)\,dt = (-1)^n\varphi^{(n)}(t_0). \qquad (4.68) $$
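Formula (4.68) can likewise be illustrated numerically, here for n = 2, by replacing δ(t − t_0) with a Gaussian mollifier differentiated twice analytically. The test function and the value of t_0 below are our own assumed example:

```python
import numpy as np

eps, t0 = 1e-2, 0.4
t = np.linspace(t0 - 1.0, t0 + 1.0, 400001)
s = t - t0
gauss = np.exp(-s**2/(2*eps**2)) / (eps*np.sqrt(2*np.pi))
d2delta = (s**2/eps**4 - 1.0/eps**2) * gauss        # exact second derivative of the mollifier
phi = np.exp(-t**2)                                  # smooth test function
f = d2delta * phi
val = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))    # approximates <delta''(t - t0), phi>
exact = (4*t0**2 - 2.0) * np.exp(-t0**2)             # phi''(t0) = (-1)^2 phi''(t0), as (4.68) predicts
```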

If f(t) is a piecewise smooth function that is integrable on any finite interval and has discontinuities of the first kind at t = t_i (i = 0, 1, ...), then it defines a generalized function by (4.58) and, therefore, is differentiable in the sense of the theory of generalized functions. It can be shown [18, 112] that

$$ \frac{Df}{dt} = \frac{df}{dt} + \sum_i \Delta f_i\,\delta(t-t_i), \qquad (4.69) $$

where df/dt = ḟ denotes the ordinary derivative of the discontinuous function, which equals the classical derivative at the points of differentiability of f(t) and is not defined at the points t = t_i. The value

$$ \Delta f_i = f(t_i+0) - f(t_i-0) \qquad (4.70) $$

denotes the break of the function f(t) at t = t_i. Using (4.69), the ordinary derivative can be expressed in terms of the generalized one:

$$ \frac{df}{dt} = \frac{Df}{dt} - \sum_i \Delta f_i\,\delta(t-t_i). \qquad (4.71) $$
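Formula (4.69) can be checked numerically through the definition (4.64): for a unit step f(t) = 1(t − t_0) the ordinary derivative vanishes off the break, so the pairing −⟨f, φ̇⟩ must reduce to the jump term Δf·φ(t_0). The step location and test function below are our own assumed example:

```python
import numpy as np

t0 = 1.0                                     # single break point, jump Delta f = 1
t = np.linspace(-20.0, 20.0, 800001)
f = (t > t0).astype(float)                   # unit step: piecewise smooth, df/dt = 0 off the break
phi = np.exp(-(t - 0.5)**2)                  # smooth, rapidly decaying test function
dphi = -2.0*(t - 0.5)*phi                    # its classical derivative
g = f * dphi
# <Df/dt, phi> = -<f, dphi/dt> by definition (4.64)
lhs = -np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))
# formula (4.69): the ordinary-derivative part is zero, only the jump term remains
rhs = 1.0 * np.exp(-(t0 - 0.5)**2)           # Delta f * phi(t0)
```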

Using Formula (4.69) once again, we obtain

$$ \frac{D^2f}{dt^2} = \frac{d^2f}{dt^2} + \sum_i \Delta\dot f_i\,\delta(t-t_i) + \sum_i \Delta f_i\,\dot\delta(t-t_i), \qquad (4.72) $$

where d²f/dt² is the ordinary derivative of the function ḟ = df/dt,

$$ \Delta\dot f_i = \dot f(t_i+0) - \dot f(t_i-0), \qquad (4.73) $$

and t_i (i = 0, 1, ...) are the points of discontinuity of ḟ(t). Similarly we can find the n-th order generalized derivative:

$$ \frac{D^nf}{dt^n} = \frac{d^nf}{dt^n} + \sum_i \Delta f_i^{(n-1)}\,\delta\bigl(t-t_i^{(n-1)}\bigr) + \dots + \sum_i \Delta f_i\,\delta^{(n-1)}(t-t_i), \qquad (4.74) $$

where

$$ \Delta f_i^{(s)} = f^{(s)}\bigl(t_i^{(s)}+0\bigr) - f^{(s)}\bigl(t_i^{(s)}-0\bigr) \qquad (4.75) $$

are the discontinuities of the ordinary derivative f^{(s)} = d^sf/dt^s at its points of discontinuity t_i^{(s)}. From (4.74) we can express the ordinary derivatives d^sf/dt^s of a discontinuous function in terms of the generalized ones. For instance, for s = 2 we have

$$ \frac{d^2f}{dt^2} = \frac{D^2f}{dt^2} - \sum_i \Delta\dot f_i\,\delta(t-t_i) - \sum_i \Delta f_i\,\dot\delta(t-t_i). \qquad (4.76) $$

4.2.3 Multiplication of Generalized Functions

Multiplication of generalized functions differs substantially from multiplication of ordinary functions and is not always possible. For example, the products δ(t)δ(t) and δ(t)f(t), where f(t) is an ordinary function discontinuous at t = 0, have no meaning in the framework of the theory of generalized functions. Nevertheless, it is possible to separate out some cases when multiplication is possible.

1. Let F be a generalized function, a(t) be an ordinary integrable function, and φ(t) be a basic function. The product a(t)F is a generalized function defined as

$$ \langle aF,\varphi\rangle = \langle F,a\varphi\rangle, \qquad (4.77) $$

where it is assumed that the right side is defined for any basic function φ. Let, for instance, F = δ(t) and a(t) be continuous at t = 0. Then,

$$ \langle a(t)\delta,\varphi\rangle = \langle \delta,a(t)\varphi(t)\rangle = a(0)\varphi(0), \qquad (4.78) $$

i.e.,

$$ a(t)\delta = a(0)\delta. \qquad (4.79) $$

Analogously, it can be shown that

$$ a(t)\delta(t-t_0) = a(t_0)\delta(t-t_0), \qquad (4.80) $$

provided that a(t) is continuous at t = t_0.

If F = δ̇(t), we formally have

$$ \langle a(t)\dot\delta,\varphi\rangle = \langle \dot\delta,a\varphi\rangle = -\bigl(a\varphi\bigr)^{\boldsymbol\cdot}\big|_{t=0} = -\dot a(0)\varphi(0) - a(0)\dot\varphi(0) = \langle a(0)\dot\delta - \dot a(0)\delta,\varphi\rangle. \qquad (4.81) $$

The last equation shows that

$$ a(t)\dot\delta = a(0)\dot\delta - \dot a(0)\delta; \qquad (4.82) $$

this formula has meaning and is valid if the function a(t) is continuous together with its derivative at the point t = 0. In a similar way we can prove the following formula:

$$ a(t)\delta^{(m)}(t-t_0) = a(t_0)\delta^{(m)}(t-t_0) - m\dot a(t_0)\delta^{(m-1)}(t-t_0) + \frac{m(m-1)}{2}\ddot a(t_0)\delta^{(m-2)}(t-t_0) + \dots + (-1)^m a^{(m)}(t_0)\delta(t-t_0), \qquad (4.83) $$

provided that the function a(t) is sufficiently differentiable at t = t_0. Formula (4.83) can be written in short form as

$$ a(t)\delta^{(m)}(t-\tau) = (-1)^m\frac{d^m}{d\tau^m}\bigl[a(\tau)\delta(t-\tau)\bigr]. \qquad (4.84) $$

2. Another important case when multiplication is possible occurs if both the functions are ordinary integrable functions and their product is also integrable.
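Formula (4.82) can be illustrated numerically by approximating δ̇(t) with the derivative of a Gaussian mollifier: the pairing ⟨a(t)δ̇, φ⟩ should approach −ȧ(0)φ(0) − a(0)φ̇(0), which is what the right side of (4.82) gives when paired with φ. The functions a and φ below are our own assumed example:

```python
import numpy as np

eps = 1e-2
t = np.linspace(-1.0, 1.0, 400001)
# exact first derivative of the Gaussian mollifier approximating delta(t)
ddelta = -t/eps**2 * np.exp(-t**2/(2*eps**2)) / (eps*np.sqrt(2*np.pi))
a = 2.0 + np.sin(t)                      # smooth multiplier: a(0) = 2, a'(0) = 1
phi = np.exp(-t**2) * np.cos(2*t)        # test function: phi(0) = 1, phi'(0) = 0
g = a * ddelta * phi
lhs = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))   # <a * ddelta, phi>
# right side of (4.82): <a(0)*ddelta - a'(0)*delta, phi> = -a(0)*phi'(0) - a'(0)*phi(0)
rhs = -2.0*0.0 - 1.0*1.0
```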

4.2.4 Operator Equation of Open-Loop Linear System

In many cases linear control systems are described as an aggregate of linear elements given by operator equations of the form

$$ \omega_1(\lambda,t)y = \omega_2(\lambda,t)x, \qquad (4.85) $$

where ω_i(λ, t) (i = 1, 2) are transfer functions of some linear operators (see Section 1.6.1). For example, if y and x are related by the linear differential equation

$$ y^{(n)} + a_1(t)y^{(n-1)} + \dots + a_n(t)y = b_1(t)x^{(n-1)} + \dots + b_n(t)x, \qquad (4.86) $$

then

$$ \omega_1(\lambda,t) = \lambda^n + a_1(t)\lambda^{n-1} + \dots + a_n(t), \qquad \omega_2(\lambda,t) = b_1(t)\lambda^{n-1} + \dots + b_n(t). \qquad (4.87) $$

If the input signal does not have the necessary number of derivatives in the classical sense, to use the mathematical model (4.86) we have to interpret Equation (4.85) as follows:

1. The input and output signals x and y, respectively, are generalized functions.

2. The symbol λ in the transfer functions ω_1(λ, t) and ω_2(λ, t) is considered an operator of generalized differentiation, i.e.,

$$ \lambda = \frac{D}{dt}. \qquad (4.88) $$

For these assumptions to be justified, we have to restrict the form of the input and output signals and of the transfer functions ω_1(λ, t) and ω_2(λ, t). Let us analyze this problem for Equation (4.86).


Assume that y and x are ordinary discontinuous functions. Then, interpreting the operator λ according to (4.88) and using (4.74), we obtain

$$ \lambda^s y = \frac{d^sy}{dt^s} + \sum_{\rho=0}^{s-1}\sum_i \Delta y_i^{(\rho)}\,\delta^{(s-\rho-1)}(t-t_i), \qquad \lambda^s x = \frac{d^sx}{dt^s} + \sum_{\rho=0}^{s-1}\sum_i \Delta x_i^{(\rho)}\,\delta^{(s-\rho-1)}(t-t_i), \qquad (4.89) $$

where t_i are the points of discontinuity of the functions y and x and of their derivatives, and Δy_i^{(ρ)} and Δx_i^{(ρ)} are the associated breaks. Substituting (4.89) into (4.86), we formally obtain

$$ \sum_{m=0}^{n} a_m(t)\left[\frac{d^{n-m}y}{dt^{n-m}} + \sum_{\rho=0}^{n-m-1}\sum_i \Delta y_i^{(\rho)}\,\delta^{(n-m-\rho-1)}(t-t_i)\right] = \sum_{m=1}^{n} b_m(t)\left[\frac{d^{n-m}x}{dt^{n-m}} + \sum_{\rho=0}^{n-m-1}\sum_i \Delta x_i^{(\rho)}\,\delta^{(n-m-\rho-1)}(t-t_i)\right], \qquad (4.90) $$

where a_0(t) ≡ 1.

In order that the left and right sides of this relation make sense from the viewpoint of the theory of generalized functions, it is necessary that all the products a_m(t)δ^{(n−m−ρ−1)}(t − t_i) and b_m(t)δ^{(n−m−ρ−1)}(t − t_i) make sense. For this to be true, it is necessary to impose restrictions on the discontinuities of the coefficients a_m(t) and b_m(t) and their derivatives. If all the specified products are defined, using the general formula (4.84) we can transform (4.90) to the form

$$ \omega_1\!\left(\frac{d}{dt},t\right)y + \sum_{s=0}^{n-1}\sum_i q_{si}\,\delta^{(s)}(t-t_i) = \omega_2\!\left(\frac{d}{dt},t\right)x + \sum_{s=0}^{n-1}\sum_i r_{si}\,\delta^{(s)}(t-t_i), \qquad (4.91) $$

where d/dt is the operator of ordinary differentiation, and q_{si} and r_{si} are constants depending on the coefficients of the initial equation. Equating the coefficients of the same generalized functions on the left and right sides of (4.91), between the points of discontinuity of the functions y and x and their respective derivatives we have

$$ \omega_1\!\left(\frac{d}{dt},t\right)y = \omega_2\!\left(\frac{d}{dt},t\right)x, \qquad (4.92) $$

or, equivalently,

$$ \frac{d^ny}{dt^n} + a_1(t)\frac{d^{n-1}y}{dt^{n-1}} + \dots + a_n(t)y = b_1(t)\frac{d^{n-1}x}{dt^{n-1}} + \dots + b_n(t)x. \qquad (4.93) $$

Moreover, the following linear relations must hold:

$$ q_{si} = r_{si}, \quad s = 0,\dots,n-2, \qquad q_{n-1,i} = 0, \qquad (4.94) $$

which relate the breaks of the functions y and x and of their derivatives. Thus, interpreting Equation (4.86) from the viewpoint of the theory of generalized functions is equivalent to specifying the linear differential equation (4.93) together with the conditions (4.94), which will be called the break conditions. We note that if y and x in (4.86) are continuous together with the required number of derivatives, Equation (4.93) coincides with (4.86), i.e., if the input x(t) is sufficiently smooth, Equations (4.85) and (4.92) coincide.

For illustration we consider the following equation:

$$ \lambda^2y + a_1(t)\lambda y + a_2(t)y = b_0(t)\lambda^2x + b_1(t)\lambda x + b_2(t)x, \qquad (4.95) $$

assuming that the function x(t) is continuous together with its first and second derivatives everywhere except for the moment t = t_1. Let us find conditions under which Equation (4.95) has an ordinary discontinuous function y(t) as a solution. According to (4.89), we have

$$ \lambda y = \frac{Dy}{dt} = \frac{dy}{dt} + \Delta y\,\delta(t-t_1), \qquad \lambda^2y = \frac{D^2y}{dt^2} = \frac{d^2y}{dt^2} + \Delta\dot y\,\delta(t-t_1) + \Delta y\,\dot\delta(t-t_1), \qquad (4.96) $$

$$ \lambda x = \frac{Dx}{dt} = \frac{dx}{dt} + \Delta x\,\delta(t-t_1), \qquad \lambda^2x = \frac{D^2x}{dt^2} = \frac{d^2x}{dt^2} + \Delta\dot x\,\delta(t-t_1) + \Delta x\,\dot\delta(t-t_1). \qquad (4.97) $$

Substituting these relations into (4.95) and regrouping the terms, we obtain

$$ \frac{d^2y}{dt^2} + a_1(t)\frac{dy}{dt} + a_2(t)y + \bigl[\Delta\dot y + a_1(t)\Delta y\bigr]\delta(t-t_1) + \Delta y\,\dot\delta(t-t_1) = b_0(t)\frac{d^2x}{dt^2} + b_1(t)\frac{dx}{dt} + b_2(t)x + b_0(t)\Delta x\,\dot\delta(t-t_1) + \bigl[b_0(t)\Delta\dot x + b_1(t)\Delta x\bigr]\delta(t-t_1). \qquad (4.98) $$

Assuming that the coefficients a_1(t) and b_1(t) are continuous at t = t_1, and the coefficient b_0(t) is continuous at t = t_1 together with its first derivative, by (4.80) and (4.83) we have

$$ a_1(t)\delta(t-t_1) = a_1(t_1)\delta(t-t_1), \qquad b_i(t)\delta(t-t_1) = b_i(t_1)\delta(t-t_1) \quad (i = 0,1), \qquad b_0(t)\dot\delta(t-t_1) = b_0(t_1)\dot\delta(t-t_1) - \dot b_0(t_1)\delta(t-t_1). \qquad (4.99) $$

Substituting (4.99) into (4.98) and equating the coefficients of δ(t − t_1) and δ̇(t − t_1), we find that the following equation holds outside the break moment:

$$ \frac{d^2y}{dt^2} + a_1(t)\frac{dy}{dt} + a_2(t)y = b_0(t)\frac{d^2x}{dt^2} + b_1(t)\frac{dx}{dt} + b_2(t)x, \qquad (4.100) $$

and, moreover, at the moment t_1 the break conditions hold:

$$ \Delta y = b_0(t_1)\Delta x, \qquad \Delta\dot y + a_1(t_1)\Delta y = b_0(t_1)\Delta\dot x + \bigl[b_1(t_1) - \dot b_0(t_1)\bigr]\Delta x. \qquad (4.101) $$

As follows from (4.101), for b_0(t_1) = 0 we have Δy = 0, i.e., the solution y(t) is continuous.

4.2.5 Operator Equation of Closed-Loop Linear System

In the previous paragraph the breaks of the output signal y(t) and its derivatives were completely defined by the properties of the input signal x(t). Considering closed-loop systems, we have to analyze the input and the output together, because the properties of the signal x(t) depend on y(t). Consider Equation (4.86) assuming additionally that

$$ x = f(y), \qquad (4.102) $$

where f(y) is a discontinuous nonlinear function. In this case, the break (switching) moments t_i are defined by the properties of the nonlinear element. As before, between the switching moments we have the equations

$$ \omega_1\!\left(\frac{d}{dt},t\right)y = \omega_2\!\left(\frac{d}{dt},t\right)x, \qquad x = f(y). \qquad (4.103) $$


Moreover, Equations (4.94) hold, as well as the relations following from (4.102):

$$ \Delta x_i = f[y(t_i+0)] - f[y(t_i-0)], \qquad \Delta\dot x_i = \frac{df[y]}{dt}\bigg|_{t=t_i+0} - \frac{df[y]}{dt}\bigg|_{t=t_i-0}, \qquad (4.104) $$

and so on. Considering together the break conditions (4.94) and the relations (4.104), we can continue the desired solution in time. Thus, the system of equations in generalized functions (4.86) and (4.102) is equivalent to a sequence of ordinary differential equations (4.103) together with break conditions of the form (4.94) and (4.104).

As an example, we consider the system of equations (4.95), (4.102), assuming that y and x are ordinary discontinuous functions. In this case, between the switching moments we have

$$ \frac{d^2y}{dt^2} + a_1(t)\frac{dy}{dt} + a_2(t)y = b_0(t)\frac{d^2f}{dt^2} + b_1(t)\frac{df}{dt} + b_2(t)f, \qquad (4.105) $$

while the break conditions (4.101) together with (4.104) yield

$$ y(t_i+0) - b_0(t_i)f[y(t_i+0)] = y(t_i-0) - b_0(t_i)f[y(t_i-0)], $$

$$ \dot y(t_i+0) + a_1(t_i)y(t_i+0) - b_0(t_i)\frac{df}{dt}(t_i+0) - \bigl[b_1(t_i) - \dot b_0(t_i)\bigr]f[y(t_i+0)] = \dot y(t_i-0) + a_1(t_i)y(t_i-0) - b_0(t_i)\frac{df}{dt}(t_i-0) - \bigl[b_1(t_i) - \dot b_0(t_i)\bigr]f[y(t_i-0)]. \qquad (4.106) $$

In principle, Relations (4.106) make it possible to calculate y(t_i+0) and ẏ(t_i+0) from the known y(t_i−0) and ẏ(t_i−0).

4.3 Sensitivity of Operator Models

4.3.1 Generalized Functions Depending on Parameter

Let a generalized function F depend on a scalar parameter α ranging over a region M_α. In this case we write F = F(t, α). Let φ be a basic function. Then

$$ \langle F(t,\alpha),\varphi\rangle = q_\varphi(\alpha), \qquad (4.107) $$

where qφ (α) is an ordinary function.

4.3.2 Generalized Differentiation by Parameter

DEFINITION 4.13 A generalized function F(t, α) is called differentiable in the region M_α if all the functions q_φ(α) are differentiable with respect to α in this region.

DEFINITION 4.14 The generalized function F_1(t, α) acting as

$$ \langle F_1,\varphi\rangle = \frac{dq_\varphi(\alpha)}{d\alpha} \qquad (4.108) $$

is called the derivative of F(t, α) with respect to the parameter α and is denoted by

$$ F_1 = \frac{DF(t,\alpha)}{\partial\alpha}. \qquad (4.109) $$

Thus, the equality

$$ \left\langle \frac{DF(t,\alpha)}{\partial\alpha},\varphi\right\rangle = \frac{dq_\varphi(\alpha)}{d\alpha} \qquad (4.110) $$

can be considered as the definition of the derivative with respect to the parameter. It can be proven [22, 124] that if a generalized function F(t, α) is differentiable with respect to the parameter in a region M_α, it has derivatives of all orders in this region. In this case, all derivatives of F with respect to t also have derivatives of all orders with respect to α. Moreover, the operations of generalized differentiation with respect to t and α are commutative. As a special case, we always have

$$ \frac{D}{\partial\alpha}\frac{D}{\partial t}F(t,\alpha) = \frac{D}{\partial t}\frac{D}{\partial\alpha}F(t,\alpha). \qquad (4.111) $$

Using Relation (4.110), we can obtain a formula for the generalized derivative with respect to the parameter of a discontinuous function [87]. Let a discontinuous function f(t, α) be given by

$$ f(t,\alpha) = f_i(t,\alpha), \qquad t_i(\alpha) < t < t_{i+1}(\alpha), \qquad (4.112) $$

where the functions f_i(t, α) and t_i(α) are differentiable with respect to α. Then, it can be shown [87] that

$$ \frac{Df}{\partial\alpha} = \frac{\partial f}{\partial\alpha} - \sum_i \Delta f_i\,\delta(t-t_i)\,\frac{dt_i}{d\alpha}, \qquad (4.113) $$

where ∂f/∂α is the ordinary derivative with respect to the parameter, given by

$$ \frac{\partial f}{\partial\alpha} = \frac{\partial f_i}{\partial\alpha}, \qquad t_i(\alpha) < t < t_{i+1}(\alpha), \qquad (4.114) $$

and

$$ \Delta f_i = f_i(t_i,\alpha) - f_{i-1}(t_i,\alpha) \qquad (4.115) $$

are the corresponding breaks of the function f(t, α).

Generalized derivatives of higher orders of the function (4.112) with respect to the parameter can be obtained successively by differentiating (4.113). To perform this, we need a rule for differentiating the function δ(t − t_i(α)). Using (4.113), it can be shown [87] that

$$ \frac{D\delta(t-t_i(\alpha))}{\partial\alpha} = -\dot\delta(t-t_i)\,\frac{dt_i}{d\alpha}, \qquad (4.116) $$

and, in the general case,

$$ \frac{D\delta^{(s)}(t-t_i(\alpha))}{\partial\alpha} = -\delta^{(s+1)}(t-t_i)\,\frac{dt_i}{d\alpha}, \qquad (4.117) $$

which has the same form as the usual differentiation rule for composite functions. Using (4.116) and (4.113), we obtain

$$ \frac{D^2f}{\partial\alpha^2} = \frac{D}{\partial\alpha}\left[\frac{\partial f}{\partial\alpha} - \sum_i \Delta f_i\,\delta(t-t_i)\frac{dt_i}{d\alpha}\right] = \frac{\partial^2f}{\partial\alpha^2} - \sum_i \Delta f'_i\,\delta(t-t'_i)\frac{dt'_i}{d\alpha} + \sum_i\left[\Delta f_i\,\dot\delta(t-t_i)\left(\frac{dt_i}{d\alpha}\right)^2 - \delta(t-t_i)\frac{d}{d\alpha}\!\left(\Delta f_i\frac{dt_i}{d\alpha}\right)\right], \qquad (4.118) $$

where t'_i are the points of discontinuity of ∂f/∂α, and

$$ \Delta f'_i = \frac{\partial f_i}{\partial\alpha}(t_i,\alpha) - \frac{\partial f_{i-1}}{\partial\alpha}(t_i,\alpha). \qquad (4.119) $$

Expressions for derivatives of higher orders can be found in a similar way.

Now we consider differentiation of a product of generalized functions with respect to a parameter. Let a(t, α) be a function differentiable with respect to t and α the required number of times, and let F(t, α) be a generalized function depending on a parameter. In this case the product aF is defined, according to (4.77), by

$$ \langle a(t,\alpha)F(t,\alpha),\varphi\rangle = \langle F(t,\alpha),a(t,\alpha)\varphi\rangle = q_{a\varphi}(\alpha). \qquad (4.120) $$

Using (4.120), it can easily be shown that, analogously to (4.84),

$$ a(t,\alpha)\,\delta^{(s)}(t-t_i(\alpha)) = (-1)^s\frac{d^s}{d\tau^s}\bigl[a(\tau,\alpha)\delta(t-\tau)\bigr]\Big|_{\tau=t_i(\alpha)}. \qquad (4.121) $$

By virtue of (4.110), the generalized derivative of the product with respect to the parameter,

$$ \frac{D}{\partial\alpha}\bigl[a(t,\alpha)F(t,\alpha)\bigr] = F_1, \qquad (4.122) $$

is given by the relation

$$ \langle F_1,\varphi\rangle = \frac{dq_{a\varphi}(\alpha)}{d\alpha}. \qquad (4.123) $$

It can be shown [87] that in this case the following formula of differentiation of a product holds:

$$ \frac{D}{\partial\alpha}\bigl[a(t,\alpha)F(t,\alpha)\bigr] = \frac{\partial a(t,\alpha)}{\partial\alpha}F(t,\alpha) + a(t,\alpha)\frac{DF(t,\alpha)}{\partial\alpha}, \qquad (4.124) $$

which is similar to the classical one. From (4.124) it follows that

$$ \frac{D}{\partial\alpha}\bigl[a(t,\alpha)\operatorname{sign}(t-t(\alpha))\bigr] = \frac{\partial a(t,\alpha)}{\partial\alpha}\operatorname{sign}(t-t(\alpha)) - 2a(t(\alpha),\alpha)\,\delta(t-t(\alpha))\,\frac{dt(\alpha)}{d\alpha}. \qquad (4.125) $$

If f_1(t, α) and f_2(t, α) are ordinary piecewise continuous functions, Formula (4.124) in the general case does not hold, and the product must be differentiated directly by (4.113).
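Formula (4.113) can be checked numerically: pair f(t, α) with a fixed test function and differentiate the pairing with respect to α. For a unit step whose break point is t_1(α) = α (our own assumed example), ∂f/∂α = 0 and Δf_1 = 1, so the derivative of the pairing must equal −φ(α):

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 400001)
phi = np.exp(-t**2)                      # fixed test function

def pairing(alpha):
    """<f(., alpha), phi> for the moving step f(t, alpha) = 1(t - alpha)."""
    g = (t > alpha).astype(float) * phi
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))

alpha0, h = 0.3, 1e-2
num = (pairing(alpha0 + h) - pairing(alpha0 - h)) / (2*h)
# (4.113): Df/d_alpha = 0 - 1*delta(t - alpha)*(dt_1/d_alpha), and dt_1/d_alpha = 1,
# so <Df/d_alpha, phi> = -phi(alpha0)
exact = -np.exp(-alpha0**2)
```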

4.3.3 Sensitivity Equations

The use of generalized differentiation with respect to a parameter yields an effective and simple method of constructing sensitivity equations. Let us demonstrate it by the example of a single-loop nonlinear system described by the equations

$$ \lambda^ny + a_1(t,\alpha)\lambda^{n-1}y + \dots + a_n(t,\alpha)y = b_0(t,\alpha)\lambda^{n-1}x + \dots + b_n(t,\alpha)x, \qquad x = f(y,\alpha). \qquad (4.126) $$

Assuming λ = D/dt, we consider (4.126) as an equation in generalized functions:

$$ \omega_1(\lambda,t,\alpha)y = \omega_2(\lambda,t,\alpha)x, \qquad x = f(y,\alpha). \qquad (4.127) $$

DEFINITION 4.15 Let x = f(y, α) be a discontinuous function, and let y be an ordinary, possibly discontinuous, function. Then the sensitivity functions u_y and u_x are defined as the ordinary derivatives

$$ u_y = \frac{\partial y(t,\alpha)}{\partial\alpha}, \qquad u_x = \frac{\partial x(t,\alpha)}{\partial\alpha}. \qquad (4.128) $$

A general method of constructing sensitivity equations can be described as follows.

1. Both sides of Equation (4.127) are differentiated with respect to the parameter. In this operation we employ the commutativity of differentiation with respect to t and α.

2. As a result, we obtain a differential equation with respect to the generalized derivatives Dx/∂α and Dy/∂α. In any special case these equations must be derived using the above features of multiplication and differentiation of generalized functions.

3. Using the relations between ordinary and generalized derivatives,

$$ u_x = \frac{Dx(t,\alpha)}{\partial\alpha} + \sum_i \Delta x(t_i)\,\delta(t-t_i)\,\frac{dt_i}{d\alpha}, \qquad u_y = \frac{Dy(t,\alpha)}{\partial\alpha} + \sum_i \Delta y(t_i)\,\delta(t-t_i)\,\frac{dt_i}{d\alpha}, \qquad (4.129) $$

we find an equation in generalized functions with respect to the desired sensitivity functions (4.128).

For instance, if all the coefficients a_i(t, α) and b_i(t, α) in (4.126) are sufficiently smooth, for generalized differentiation with respect to α we may use the rule (4.124).


As a result, we obtain

$$ \omega_1(\lambda,t,\alpha)\frac{Dy}{\partial\alpha} + \frac{\partial\omega_1(\lambda,t,\alpha)}{\partial\alpha}y = \omega_2(\lambda,t,\alpha)\frac{Dx}{\partial\alpha} + \frac{\partial\omega_2(\lambda,t,\alpha)}{\partial\alpha}x. \qquad (4.130) $$

Substituting (4.129) into this equation, we obtain the sensitivity equation

$$ \omega_1u_y - \omega_1\sum_i \Delta y_i\,\delta(t-t_i)\frac{dt_i}{d\alpha} + \frac{\partial\omega_1}{\partial\alpha}y = \omega_2u_x - \omega_2\sum_i \Delta x_i\,\delta(t-t_i)\frac{dt_i}{d\alpha} + \frac{\partial\omega_2}{\partial\alpha}x. \qquad (4.131) $$

Transforming Equation (4.131), we note that

$$ \omega_1(\lambda,t,\alpha)\sum_i \Delta y_i\,\delta(t-t_i)\frac{dt_i}{d\alpha} = \sum_k\sum_i a_k(t,\alpha)\,\Delta y_i\,\frac{dt_i}{d\alpha}\,\delta^{(n-k)}(t-t_i) = \sum_i \Delta y_i\,\frac{dt_i}{d\alpha}\sum_k a_k(t,\alpha)\,\delta^{(n-k)}(t-t_i). \qquad (4.132) $$

Using Formula (4.132), we find

$$ \omega_1(\lambda,t,\alpha)\sum_i \Delta y_i\,\delta(t-t_i)\frac{dt_i}{d\alpha} = \sum_i\sum_{\rho=0}^{n} \mu_{\rho i}\,\delta^{(\rho)}(t-t_i), \qquad (4.133) $$

where μ_{ρi} are constants. Similarly, we can obtain

$$ \omega_2(\lambda,t,\alpha)\sum_i \Delta x_i\,\delta(t-t_i)\frac{dt_i}{d\alpha} = \sum_i\sum_{\rho=0}^{n-1} \nu_{\rho i}\,\delta^{(\rho)}(t-t_i), \qquad (4.134) $$

where ν_{ρi} are constants.

Then let us define, as usual, the following generalized derivatives:

$$ \frac{D^su_y}{dt^s} = \frac{d^su_y}{dt^s} + \sum_{\rho=0}^{s-1}\sum_i \Delta u_{yi}^{(\rho)}\,\delta^{(s-\rho-1)}(t-t_i), \qquad \frac{D^su_x}{dt^s} = \frac{d^s}{dt^s}\frac{df}{d\alpha} + \sum_{\rho=0}^{s-1}\sum_i \Delta\!\left(\frac{df}{d\alpha}\right)_i^{(\rho)}\delta^{(s-\rho-1)}(t-t_i), \qquad (4.135) $$

and substitute Relations (4.133)-(4.135) into the sensitivity equation. As a result, we find ordinary differential equations determining the function u_y on the intervals between the switching moments, together with the corresponding break conditions for u_y. We will not present the transformations in the general case because of their awkwardness, but in practical cases these calculations usually are not cumbersome.

As an example of applying the developed method, we construct the sensitivity equations for the system (4.95), (4.102), assuming that the coefficients a_i and b_i depend on the parameter α, while the nonlinear function does not:

$$ x = f_i(y), \qquad t_i < t < t_{i+1}, $$

so that the parameter α does not appear in the function f_i explicitly. Using generalized differentiation, from (4.95) we have

$$ \lambda^2\frac{Dy}{\partial\alpha} + a_1\lambda\frac{Dy}{\partial\alpha} + a_2\frac{Dy}{\partial\alpha} + \frac{\partial a_1}{\partial\alpha}\lambda y + \frac{\partial a_2}{\partial\alpha}y = b_0\lambda^2\frac{Dx}{\partial\alpha} + b_1\lambda\frac{Dx}{\partial\alpha} + b_2\frac{Dx}{\partial\alpha} + \frac{\partial b_0}{\partial\alpha}\lambda^2x + \frac{\partial b_1}{\partial\alpha}\lambda x + \frac{\partial b_2}{\partial\alpha}x. \qquad (4.136) $$

Moreover,

$$ \frac{Dy}{\partial\alpha} = u_y - \sum_i \Delta y_i\,\delta(t-t_i)\frac{dt_i}{d\alpha}, $$

$$ \lambda\frac{Dy}{\partial\alpha} = \frac{D}{\partial t}\frac{Dy}{\partial\alpha} = \frac{du_y}{dt} + \sum_i \Delta u_{yi}\,\delta(t-t_i) - \sum_i \Delta y_i\,\dot\delta(t-t_i)\frac{dt_i}{d\alpha}, \qquad (4.137) $$

$$ \lambda^2\frac{Dy}{\partial\alpha} = \frac{D^2}{\partial t^2}\frac{Dy}{\partial\alpha} = \frac{d^2u_y}{dt^2} + \sum_i \Delta\dot u_{yi}\,\delta(t-t_i) + \sum_i \Delta u_{yi}\,\dot\delta(t-t_i) - \sum_i \Delta y_i\,\ddot\delta(t-t_i)\frac{dt_i}{d\alpha}. $$

Similarly,

$$ \frac{Dx}{\partial\alpha} = g - \sum_i \Delta f_i\,\delta(t-t_i)\frac{dt_i}{d\alpha}, $$

$$ \lambda\frac{Dx}{\partial\alpha} = \frac{dg}{dt} + \sum_i \Delta g_i\,\delta(t-t_i) - \sum_i \Delta f_i\,\dot\delta(t-t_i)\frac{dt_i}{d\alpha}, \qquad (4.138) $$

$$ \lambda^2\frac{Dx}{\partial\alpha} = \frac{d^2g}{dt^2} + \sum_i \Delta\dot g_i\,\delta(t-t_i) + \sum_i \Delta g_i\,\dot\delta(t-t_i) - \sum_i \Delta f_i\,\ddot\delta(t-t_i)\frac{dt_i}{d\alpha}, $$

where

$$ g = \frac{\partial f_i}{\partial\alpha} = \frac{\partial f_i}{\partial y}\,u_y, \qquad t_i < t < t_{i+1}. \qquad (4.139) $$

Substitute (4.135), (4.137), and (4.139) into (4.136). Note that, according to (4.83), for any function a(t) that is differentiable a sufficient number of times we have

$$ a(t)\ddot\delta(t-t_i) = a(t_i)\ddot\delta(t-t_i) - 2\dot a(t_i)\dot\delta(t-t_i) + \ddot a(t_i)\delta(t-t_i). \qquad (4.140) $$

Then, after transition to ordinary differential equations, we obtain

$$ \frac{d^2u_y}{dt^2} + a_1\frac{du_y}{dt} + a_2u_y + \frac{\partial a_1}{\partial\alpha}\frac{dy}{dt} + \frac{\partial a_2}{\partial\alpha}y = b_0\frac{d^2g}{dt^2} + b_1\frac{dg}{dt} + b_2g + \frac{\partial b_0}{\partial\alpha}\frac{d^2f}{dt^2} + \frac{\partial b_1}{\partial\alpha}\frac{df}{dt} + \frac{\partial b_2}{\partial\alpha}f, \qquad (4.141) $$

and the break conditions for the sensitivity function and its derivative:

$$ \Delta u_{yi} = b_0(t_i)\Delta g_i + 2\dot b_0(t_i)\Delta f_i\frac{dt_i}{d\alpha} + \frac{\partial b_0}{\partial\alpha}(t_i)\Delta f_i + a_1(t_i)\Delta y_i\frac{dt_i}{d\alpha} - b_1(t_i)\Delta f_i\frac{dt_i}{d\alpha}, $$

$$ \Delta\dot u_{yi} + a_1(t_i)\Delta u_{yi} + \dot a_1(t_i)\Delta y_i\frac{dt_i}{d\alpha} + a_2(t_i)\Delta y_i\frac{dt_i}{d\alpha} + \frac{\partial a_1}{\partial\alpha}(t_i)\Delta y_i = b_0(t_i)\Delta\dot g_i - \dot b_0(t_i)\Delta g_i - \ddot b_0(t_i)\Delta f_i\frac{dt_i}{d\alpha} + b_1(t_i)\Delta g_i + \dot b_1(t_i)\Delta f_i\frac{dt_i}{d\alpha} - b_2(t_i)\Delta f_i\frac{dt_i}{d\alpha} - \left[\frac{\partial^2b_0}{\partial\alpha\,\partial t}(t_i)\Delta f_i + \frac{\partial b_0}{\partial\alpha}(t_i)\Delta\dot f_i + \frac{\partial b_1}{\partial\alpha}(t_i)\Delta f_i\right]. \qquad (4.142) $$

Let us note that there is another method of constructing the sensitivity function that is equivalent to the proposed one. With this aim in view, Equation (4.131) should be rewritten in the form

$$ \omega_1u_y - \omega_2\frac{\partial x}{\partial\alpha} = q(t). \qquad (4.143) $$

The right side of this equation is unknown, and the problem reduces to solving a linear nonstationary differential equation. In this case, the presence of discontinuities in the nonlinear function gives rise to corresponding delta functions and their derivatives on the right side.

Now consider, by an example, a special case of constructing sensitivity equations when the coefficients of the initial differential equation are not continuous and differentiable the required number of times. Let us have the equation

$$ \lambda^2y + a_1(t)\lambda y + a_2y = \lambda x, \qquad (4.144) $$

where

$$ a_1(t) = \operatorname{sign}(t-\tau), \qquad a_2 = \text{const}, \qquad x = y^2\operatorname{sign}(t-\tau). \qquad (4.145) $$

We will construct the sensitivity equations assuming that the value τ is a parameter. First of all, we notice that in this case y is an ordinary continuous function, and by (4.74) we have

$$ \lambda y = \frac{dy}{dt}, \qquad \lambda^2y = \frac{d^2y}{dt^2} + \Delta\dot y(\tau)\,\delta(t-\tau), \qquad \lambda x = 2y\frac{dy}{dt}\operatorname{sign}(t-\tau) + 2y^2(\tau)\,\delta(t-\tau). \qquad (4.146) $$

Then, using the break conditions (4.94), we find that the initial equation (4.144) is equivalent to

$$ \frac{d^2y}{dt^2} - \frac{dy}{dt} + a_2y = -2y\frac{dy}{dt}, \qquad t < \tau, $$

$$ \frac{d^2y}{dt^2} + \frac{dy}{dt} + a_2y = 2y\frac{dy}{dt}, \qquad t > \tau, \qquad (4.147) $$

and the break conditions have the form

$$ y(\tau+0) = y(\tau-0), \qquad \dot y(\tau+0) = \dot y(\tau-0) + 2y^2(\tau). \qquad (4.148) $$

Then we proceed to the immediate construction of the sensitivity equations. Applying generalized differentiation of (4.144) with respect to τ, we have

$$ \lambda^2\frac{Dy}{\partial\tau} + \frac{D}{\partial\tau}\left[\operatorname{sign}(t-\tau)\frac{dy}{dt}\right] + a_2\frac{Dy}{\partial\tau} = \lambda\frac{Dx}{\partial\tau}. \qquad (4.149) $$

In this case,

$$ \frac{Dy}{\partial\tau} = \frac{\partial y}{\partial\tau} = u_y, $$

$$ \frac{Dx}{\partial\tau} = \frac{D}{\partial\tau}\bigl[y^2\operatorname{sign}(t-\tau)\bigr] = 2yu_y\operatorname{sign}(t-\tau) - 2y^2(\tau)\,\delta(t-\tau), \qquad (4.150) $$

$$ \lambda\frac{Dx}{\partial\tau} = \frac{D}{\partial t}\frac{Dx}{\partial\tau} = 2\dot yu_y\operatorname{sign}(t-\tau) + 2y\frac{du_y}{dt}\operatorname{sign}(t-\tau) + 2y(\tau)\bigl[u_y(\tau+0) + u_y(\tau-0)\bigr]\delta(t-\tau) - 2y^2(\tau)\,\dot\delta(t-\tau). $$

Moreover,

$$ \frac{D}{\partial\tau}\left[\operatorname{sign}(t-\tau)\frac{dy}{dt}\right] = \frac{\partial}{\partial\tau}\!\left(\frac{dy}{dt}\right)\operatorname{sign}(t-\tau) - \left[\frac{dy}{dt}(\tau+0) + \frac{dy}{dt}(\tau-0)\right]\delta(t-\tau) = \frac{du_y}{dt}\operatorname{sign}(t-\tau) - \bigl[\dot y(\tau+0) + \dot y(\tau-0)\bigr]\delta(t-\tau). \qquad (4.151) $$

With due account of (4.150) and (4.151), the sensitivity equation takes the form

$$ \lambda^2u_y + \operatorname{sign}(t-\tau)\frac{du_y}{dt} - \bigl[\dot y(\tau+0) + \dot y(\tau-0)\bigr]\delta(t-\tau) + a_2u_y = 2y\frac{du_y}{dt}\operatorname{sign}(t-\tau) + 2\dot yu_y\operatorname{sign}(t-\tau) + 2y(\tau)\bigl[u_y(\tau+0) + u_y(\tau-0)\bigr]\delta(t-\tau) - 2y^2(\tau)\,\dot\delta(t-\tau). \qquad (4.152) $$

In this case u_y is an ordinary discontinuous function. Therefore,

$$ \lambda u_y = \frac{du_y}{dt} + \Delta u_y\,\delta(t-\tau), \qquad \lambda^2u_y = \frac{d^2u_y}{dt^2} + \Delta\dot u_y\,\delta(t-\tau) + \Delta u_y\,\dot\delta(t-\tau). \qquad (4.153) $$

Substituting (4.153) into (4.152), we obtain an equation for u_y between the breaks:

$$ \frac{d^2u_y}{dt^2} + \operatorname{sign}(t-\tau)\frac{du_y}{dt} + a_2u_y = 2y\operatorname{sign}(t-\tau)\frac{du_y}{dt} + 2\dot yu_y\operatorname{sign}(t-\tau), \qquad (4.154) $$

and the break conditions at the moment t = τ:

$$ \Delta u_y = u_y(\tau+0) - u_y(\tau-0) = -2y^2(\tau), \qquad \Delta\dot u_y - \bigl[\dot y(\tau+0) + \dot y(\tau-0)\bigr] = 2y(\tau)\bigl[u_y(\tau+0) + u_y(\tau-0)\bigr]. \qquad (4.155) $$
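Between switching moments the sensitivity function u_y = ∂y/∂α of (4.128) can always be cross-checked against finite differences. A minimal smooth sketch (the first-order test equation below is our own choice, not from the text): differentiating the Euler recursion for dy/dt = −αy + 1 with respect to α gives the discrete sensitivity recursion, whose result should agree with (y(α+δ) − y(α−δ))/2δ:

```python
# Assumed smooth test system: dy/dt = -alpha*y + 1, y(0) = 0.
# Differentiating with respect to alpha gives the sensitivity equation
#     du/dt = -alpha*u - y,  u(0) = 0,  where u = dy/dalpha.
def solve(alpha, T=2.0, n=20000):
    h = T/n
    y = u = 0.0
    for _ in range(n):
        # explicit Euler for the system and its sensitivity equation together
        y, u = y + h*(-alpha*y + 1.0), u + h*(-alpha*u - y)
    return y, u

alpha0, d = 1.5, 1e-4
_, u_sens = solve(alpha0)
u_fd = (solve(alpha0 + d)[0] - solve(alpha0 - d)[0]) / (2*d)   # finite-difference estimate
```

The agreement is tight because the sensitivity recursion is the exact derivative of the Euler map; only the O(δ²) truncation of the central difference remains.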

4.3.4 Sensitivity Equations for Multivariable Systems

Using the relations of the preceding paragraphs, a standard procedure for constructing sensitivity equations can be proposed for multivariable systems containing discontinuous nonlinearities. First, let the equations of the system have the form

$$ W_1(\lambda,\alpha)Y = W_2(\lambda,\alpha)F(Y,\alpha), \qquad (4.156) $$

where W_i(λ, α) (i = 1, 2) are polynomial matrices in the generalized differentiation operator λ = D/dt, Y is the vector of unknowns, F is a discontinuous vector of nonlinear functions, and α is a scalar parameter. In this case the operators W_i(λ, α) commute with the operator of generalized differentiation D/∂α. Therefore, differentiating (4.156) with respect to α yields

$$ W_1(\lambda,\alpha)\frac{DY}{\partial\alpha} + \left[\frac{\partial}{\partial\alpha}W_1(\lambda,\alpha)\right]Y = W_2(\lambda,\alpha)\frac{DF}{\partial\alpha} + \left[\frac{\partial}{\partial\alpha}W_2(\lambda,\alpha)\right]F, \qquad (4.157) $$

where

$$ \frac{DY}{\partial\alpha} = U - \sum_i \Delta Y_i\,\delta(t-t_i)\,\frac{dt_i}{d\alpha}, \qquad \frac{DF}{\partial\alpha} = \frac{dF}{d\alpha} - \sum_i \Delta F_i\,\delta(t-t_i)\,\frac{dt_i}{d\alpha}, \qquad (4.158) $$

and

$$ U = \frac{\partial Y(t,\alpha)}{\partial\alpha}, \qquad \frac{dF}{d\alpha} = \frac{\partial F}{\partial Y}U + \frac{\partial F}{\partial\alpha}. \qquad (4.159) $$

The derivatives and switching moments appearing in (4.158) can be obtained, in the general case, by the rules of differentiation of implicit functions. For example, if the break conditions have the form

$$ g_i[Y(t_i-0),t_i,\alpha] = 0, \qquad (4.160) $$

then by (3.50) we have

$$ \frac{dt_i}{d\alpha} = -\frac{\dfrac{\partial g_i}{\partial Y}U_i^- + \dfrac{\partial g_i}{\partial\alpha}}{\dfrac{\partial g_i}{\partial Y}\dot Y_i^- + \dfrac{\partial g_i}{\partial t}}. \qquad (4.161) $$

As in the scalar case, the initial equations and the sensitivity equations can be reduced to a sequence of ordinary differential equations linked with one another by the break conditions. Let us have

W_1(\lambda,\alpha) = \sum_{k=0}^{n} A_k\lambda^{n-k}, \qquad W_2(\lambda,\alpha) = \sum_{k=1}^{n} B_k\lambda^{n-k} \qquad (4.162)

in (4.156). Then, using the same reasoning as in the scalar case, we obtain

W_1\!\left(\frac{d}{dt},\alpha\right)Y = W_2\!\left(\frac{d}{dt},\alpha\right)F. \qquad (4.163)

During the transition from one equation (4.163) to another we have the break conditions

A_0\,\Delta Y_k = 0, \qquad A_0\,\Delta\dot Y_k + A_1\,\Delta Y_k = B_1\,\Delta F_k, \;\ldots, \qquad (4.164)

where $\Delta Y_k$ and $\Delta F_k$ are the breaks of the vectors $Y$ and $F$ at the switching moments $t_k$. Analogously, on the basis of (4.157), we can obtain ordinary equations and break conditions for the sensitivity functions.

To illustrate the above theory, let us obtain the sensitivity equation for a system given in the normal Cauchy form. Let

\frac{dY}{dt} = F(Y,t,\alpha), \qquad F(Y,t,\alpha) = F_i(Y,t,\alpha), \qquad t_i < t < t_{i+1}. \qquad (4.165)

In this case, at the moments $t = t_i$ the solutions of Equations (4.165) are connected by means of the conditions (3.2):

Y(t_i+0) = Y_i^+ = \Phi[Y(t_i-0), t_i(\alpha), \alpha] = \Phi(Y_i^-, t_i, \alpha). \qquad (4.166)

The vectors $F_i$ and $\Phi_i$ are assumed to be continuously differentiable the required number of times with respect to all arguments. From (4.166) we have that the break of the vector $Y$ at the moment $t = t_i$ is equal to

\Delta Y_i = Y_i^+ - Y_i^- = \Phi(Y_i^-, t_i, \alpha) - Y_i^-. \qquad (4.167)

Hereinafter, for the values of any function $L(t_i \pm 0)$ we will use the notation $L_i^\pm$ and $\Delta L_i = L_i^+ - L_i^-$. Let us find the differential equation for the generalized derivative $DY/dt$ for the solution of (4.165) and (4.166). As follows from the aforesaid,

\frac{DY}{dt} = \frac{dY}{dt} + \sum_i \Delta Y_i\,\delta(t-t_i). \qquad (4.168)

Then, using (4.165) and (4.167), we find

\frac{DY}{dt} = F(Y,t,\alpha) + \sum_i \left[\Phi(Y_i^-, t_i, \alpha) - Y_i^-\right]\delta(t-t_i). \qquad (4.169)

Generalized differentiation with respect to $\alpha$ yields

\frac{D}{dt}\frac{DY}{\partial\alpha} = \frac{D}{\partial\alpha}F(Y,t,\alpha) + \sum_i \frac{d}{d\alpha}\left[\Phi_i(Y_i^-,t_i,\alpha) - Y_i^-\right]\delta(t-t_i) - \sum_i \Delta Y_i\frac{dt_i}{d\alpha}\,\dot\delta(t-t_i), \qquad (4.170)

where

\frac{DY}{\partial\alpha} = U - \sum_i \Delta Y_i\frac{dt_i}{d\alpha}\,\delta(t-t_i). \qquad (4.171)

Hence,

\frac{D}{dt}\frac{DY}{\partial\alpha} = \frac{dU}{dt} + \sum_i \Delta U_i\,\delta(t-t_i) - \sum_i \Delta Y_i\frac{dt_i}{d\alpha}\,\dot\delta(t-t_i). \qquad (4.172)

Moreover,

\frac{DF}{\partial\alpha} = \frac{dF}{\partial\alpha} - \sum_i \Delta F_i\frac{dt_i}{d\alpha}\,\delta(t-t_i), \qquad (4.173)

where $dF/\partial\alpha$ is the total partial derivative given by

\frac{dF}{\partial\alpha} = \frac{\partial F_i}{\partial Y}U + \frac{\partial F_i}{\partial\alpha}, \qquad t_i < t < t_{i+1}. \qquad (4.174)

Substituting (4.172) in the left side of (4.170), we have

\frac{dU}{dt} + \sum_i \Delta U_i\,\delta(t-t_i) = \frac{dF}{\partial\alpha} + \sum_i \left[\frac{d}{d\alpha}\Delta Y_i - \Delta F_i\frac{dt_i}{d\alpha}\right]\delta(t-t_i). \qquad (4.175)

Hence,

\frac{dU}{dt} = \frac{dF}{\partial\alpha} = \frac{\partial F_i}{\partial Y}U + \frac{\partial F_i}{\partial\alpha}, \qquad t_i < t < t_{i+1}, \qquad (4.176)

and, moreover,

\Delta U_i = -\Delta F_i\frac{dt_i}{d\alpha} + \frac{d}{d\alpha}\left[\Phi_i(Y_i^-,t_i,\alpha) - Y_i^-\right] = -\Delta F_i\frac{dt_i}{d\alpha} + \frac{d}{d\alpha}\Delta Y_i. \qquad (4.177)

Taking into account that

\frac{d}{d\alpha}\Phi_i(Y_i^-,t_i,\alpha) = \frac{\partial\Phi_i^-}{\partial Y}\frac{dY_i^-}{d\alpha} + \frac{\partial\Phi_i^-}{\partial t}\frac{dt_i}{d\alpha} + \frac{\partial\Phi_i^-}{\partial\alpha} \qquad (4.178)

and

\frac{dY_i^-}{d\alpha} = \frac{d}{d\alpha}Y^-(t_i,\alpha) = U_i^- + \dot Y_i^-\frac{dt_i}{d\alpha},

we can rewrite (4.177) in the form

\Delta U_i = -\Delta F_i\frac{dt_i}{d\alpha} + \left(\frac{\partial\Phi_i^-}{\partial Y} - E\right)\left(U_i^- + \dot Y_i^-\frac{dt_i}{d\alpha}\right) + \frac{\partial\Phi_i^-}{\partial t}\frac{dt_i}{d\alpha} + \frac{\partial\Phi_i^-}{\partial\alpha}, \qquad (4.179)

which agrees with the formulas obtained in another way in Chapter 3. In the special case when the solutions of the initial system are continuous and $\Delta Y_i = 0$, Equation (4.179) yields

\Delta U_i = -\Delta F_i\frac{dt_i}{d\alpha}. \qquad (4.180)
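Between switching moments the sensitivity equation (4.176) is the classical variational equation, which can simply be integrated alongside the state. A minimal numerical sketch for a smooth (switch-free) scalar case; the right-hand side $f(y,t,\alpha) = -\alpha y + \sin t$ and all numerical values are illustrative assumptions, not taken from the text:

```python
import numpy as np

def rhs(z, t, a):
    # Joint state z = (y, u); by (4.176) the sensitivity obeys
    # du/dt = (df/dy) u + df/dalpha = -a*u - y for f = -a*y + sin(t)
    y, u = z
    return np.array([-a * y + np.sin(t), -a * u - y])

def integrate(a, z0=(1.0, 0.0), T=2.0, n=4000):
    """Classical RK4 integration of the joint (state, sensitivity) system."""
    z, h = np.array(z0, float), T / n
    for k in range(n):
        t = k * h
        k1 = rhs(z, t, a)
        k2 = rhs(z + 0.5 * h * k1, t + 0.5 * h, a)
        k3 = rhs(z + 0.5 * h * k2, t + 0.5 * h, a)
        k4 = rhs(z + h * k3, t + h, a)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

a0, eps = 0.8, 1e-5
yT, uT = integrate(a0)
# A central difference of the trajectory itself must agree with u(T)
fd = (integrate(a0 + eps)[0] - integrate(a0 - eps)[0]) / (2 * eps)
print(uT, fd)
```

The agreement of the two numbers is the standard sanity check for any implementation of sensitivity equations.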

Then, let us consider systems with nonstationary linear elements. In this case the matrices $A_k$ and $B_k$ appearing in (4.162) depend on time, i.e.,

A_k = A_k(t,\alpha), \qquad B_k = B_k(t,\alpha). \qquad (4.181)

For the construction of the sensitivity equation it is necessary to consider two possibilities. If all coefficients $A_i(t,\alpha)$ and $B_i(t,\alpha)$ are sufficiently smooth, the products

A_i(t,\alpha)\frac{D^s Y}{dt^s}, \qquad B_i(t,\alpha)\frac{D^s Y}{dt^s} \qquad (4.182)

are differentiable with respect to $\alpha$ by the classical formulas of differentiation:

\frac{D}{\partial\alpha}\left[A_i(t,\alpha)\frac{D^s Y}{dt^s}\right] = \frac{\partial A_i}{\partial\alpha}\frac{D^s Y}{dt^s} + A_i(t,\alpha)\frac{D^s}{dt^s}\frac{DY}{\partial\alpha}. \qquad (4.183)

For further calculation, the terms $A_i(t,\alpha)\delta^{(s)}(t-t_i)$ should be expanded according to (4.69) and (4.70). Another situation takes place when the coefficients $A_i(t,\alpha)$ and $B_i(t,\alpha)$ are discontinuous. In that case, the theory of generalized functions is applicable only if the delta-function and its derivatives in (4.182) are multiplied by functions having the required number of derivatives. Nevertheless, products of ordinary functions having breaks at the same points are admissible, as was demonstrated in the scalar case. We will not present detailed calculations, because they are clear from the examples given above.

4.3.5  Higher-Order Sensitivity Equations

DEFINITION 4.16  The sensitivity function of a vector function $Y(t,\alpha)$ is defined as the following ordinary derivative:

U = \frac{\partial Y}{\partial\alpha} = \frac{DY}{\partial\alpha} + \sum_i \Delta Y_i\frac{dt_i}{d\alpha}\,\delta(t-t_i), \qquad (4.184)

where $t = t_i$ are the points of discontinuity of the function $Y(t,\alpha)$.

DEFINITION 4.17  Sensitivity functions of higher orders of the function $Y(t,\alpha)$ are defined as the ordinary higher derivatives

U^{(s)} = \frac{\partial^s Y}{\partial\alpha^s}, \qquad s \ge 1. \qquad (4.185)

Relations between sensitivity functions of higher orders and corresponding generalized derivatives can be established immediately. For instance,


applying generalized differentiation with respect to $\alpha$ to (4.184), we have

\frac{DU}{\partial\alpha} = \frac{D^2Y}{\partial\alpha^2} + \sum_i \frac{d}{d\alpha}\left[\Delta Y_i\frac{dt_i}{d\alpha}\right]\delta(t-t_i) - \sum_i \Delta Y_i\left(\frac{dt_i}{d\alpha}\right)^2\dot\delta(t-t_i). \qquad (4.186)

Then, using the fact that

\frac{DU}{\partial\alpha} = \frac{\partial U}{\partial\alpha} - \sum_i \Delta U_i\frac{dt_i}{d\alpha}\,\delta(t-t_i), \qquad \frac{\partial U}{\partial\alpha} = U^{(2)}, \qquad (4.187)

we find

U^{(2)} = \frac{D^2Y}{\partial\alpha^2} + \sum_i \frac{d}{d\alpha}\left[\Delta Y_i\frac{dt_i}{d\alpha}\right]\delta(t-t_i) + \sum_i \Delta U_i\frac{dt_i}{d\alpha}\,\delta(t-t_i) - \sum_i \Delta Y_i\left(\frac{dt_i}{d\alpha}\right)^2\dot\delta(t-t_i). \qquad (4.188)

In some problems it is required to find differential equations for sensitivity functions of higher orders. In principle, this problem causes no difficulties. Indeed, from (4.185) it follows that

U^{(s)} = \frac{\partial U^{(s-1)}}{\partial\alpha}. \qquad (4.189)

Therefore, the sensitivity function of the $s$-th order is the first-order sensitivity function for $U^{(s-1)}$. Having the differential equation for the sensitivity function $U^{(s-1)}$ and constructing the associated sensitivity equations, we obtain the required sensitivity equations of higher orders. To illustrate this idea, we construct the second-order sensitivity equations for the system given by (4.165) and (4.166). Notice that the sensitivity equation (4.176) can be combined with the break conditions (4.177) in the form

\frac{DU}{dt} = \frac{dF}{\partial\alpha} + \sum_i \Delta U_i\,\delta(t-t_i). \qquad (4.190)

In principle, Equation (4.190) does not differ from (4.169).

Using generalized differentiation with respect to $\alpha$, we have

\frac{D}{dt}\frac{DU}{d\alpha} = \frac{D}{\partial\alpha}\left(\frac{dF}{\partial\alpha}\right) + \sum_i \frac{d\Delta U_i}{d\alpha}\,\delta(t-t_i) - \sum_i \Delta U_i\frac{dt_i}{d\alpha}\,\dot\delta(t-t_i). \qquad (4.191)

Moreover, similarly to (4.172),

\frac{D}{dt}\frac{DU}{\partial\alpha} = \frac{DU^{(2)}}{dt} - \sum_i \Delta U_i\frac{dt_i}{d\alpha}\,\dot\delta(t-t_i). \qquad (4.192)

Since

\frac{D}{\partial\alpha}\left(\frac{dF}{\partial\alpha}\right) = \frac{d^2F}{\partial\alpha^2} - \sum_i \Delta\!\left(\frac{dF}{\partial\alpha}\right)_i\frac{dt_i}{d\alpha}\,\delta(t-t_i), \qquad (4.193)

using the same reasoning as for the derivation of the relations (4.176)–(4.178), from (4.191) we obtain an equation for the second-order sensitivity function

\frac{dU^{(2)}}{dt} = \frac{d^2F}{\partial\alpha^2}, \qquad (4.194)

and, moreover,

\Delta U_i^{(2)} = -\Delta\!\left(\frac{dF}{\partial\alpha}\right)_i\frac{dt_i}{d\alpha} + \frac{d}{d\alpha}\Delta U_i, \qquad (4.195)

where $\Delta U_i$ are determined by Formula (4.177). In a similar way we can find initial conditions for sensitivity functions of higher orders. For example, using Formula (2.25), we find, for the case when the sensitivity equations for the system (4.165)–(4.166) have the form (4.176)–(4.177),

U_0^{(2)} = \frac{dU_0}{d\alpha} - \dot U_0\frac{dt_0}{d\alpha} = \frac{dU_0}{d\alpha} - \left.\frac{dF}{\partial\alpha}\right|_{t=t_0}\frac{dt_0}{d\alpha}. \qquad (4.196)

Moreover, by (2.25) we have

U_0 = \frac{dY_0}{d\alpha} - F(Y_0,t_0,\alpha)\frac{dt_0}{d\alpha} \qquad (4.197)

and

\left.\frac{dF}{\partial\alpha}\right|_{t=t_0} = \left.\frac{\partial F}{\partial Y}\right|_{t=t_0+0}U_0 + \left.\frac{\partial F}{\partial\alpha}\right|_{t=t_0+0}. \qquad (4.198)

In a similar way, the above relations are immediately generalized to operator equations of general form.
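In the smooth (switch-free) scalar case, bootstrapping as above gives the second-order sensitivity equation explicitly: differentiating the first-order variational equation once more with respect to $\alpha$ yields $dU^{(2)}/dt = F_{YY}U^2 + 2F_{Y\alpha}U + F_{\alpha\alpha} + F_Y U^{(2)}$. A numerical sketch with an assumed right-hand side $F = -\alpha y + \sin t$ (all values illustrative), checked against a second central difference:

```python
import numpy as np

# For F = -a*y + sin(t):
#   du/dt  = -a*u  - y     (first-order sensitivity, cf. (4.176))
#   du2/dt = -a*u2 - 2*u   (second order: F_YY = F_aa = 0, F_Ya = -1)
def rhs(z, t, a):
    y, u, u2 = z
    return np.array([-a * y + np.sin(t), -a * u - y, -a * u2 - 2 * u])

def solve(a, T=2.0, n=4000):
    z, h = np.array([1.0, 0.0, 0.0]), T / n
    for k in range(n):
        t = k * h
        k1 = rhs(z, t, a); k2 = rhs(z + 0.5*h*k1, t + 0.5*h, a)
        k3 = rhs(z + 0.5*h*k2, t + 0.5*h, a); k4 = rhs(z + h*k3, t + h, a)
        z = z + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return z

a0, h = 0.9, 1e-3
yT, u, u2 = solve(a0)
# Second central difference of y(T, alpha) approximates U^(2)
fd2 = (solve(a0 + h)[0] - 2 * solve(a0)[0] + solve(a0 - h)[0]) / h**2
print(u2, fd2)
```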

4.4  Sensitivity Equations for Relay and Pulse Systems

4.4.1  Single-Loop Relay Systems

In the present paragraph the general theory developed above is applied to constructing sensitivity equations for a linear system with a relay element. Assume, for concreteness, that the linear part of the system is stationary and its transfer function is real rational. Then, the equation of system motion has the form

\omega_1(\lambda)y = \omega_2(\lambda)x, \qquad x = f(y), \qquad (4.199)

where

\omega_1(\lambda) = \sum_{k=0}^{n} a_k\lambda^{n-k}, \qquad \omega_2(\lambda) = \sum_{k=1}^{n} b_k\lambda^{n-k},

and $f(y)$ is the nonlinear characteristic shown in Figure 3.3a. Using the break conditions (4.164), we can find that Equations (4.199) are equivalent to a sequence of ordinary equations

\omega_1\!\left(\frac{d}{dt}\right)y = \pm b_n k_p, \qquad (4.200)

and $y$ and its derivatives are calculated from the break conditions

a_0\,\Delta y_i = 0, \qquad a_0\,\Delta\dot y_i = \pm 2k_p b_1, \;\ldots \qquad (4.201)

Hence, for $a_0 = 1$ we have

\Delta y_i = 0, \qquad \Delta\dot y_i = \pm 2k_p b_1, \qquad \Delta\ddot y_i = \pm 2k_p(b_2 - a_1 b_1), \;\ldots \qquad (4.202)

where it is assumed that the switching is normal. Now we proceed to constructing the sensitivity equation. First, we assume that the coefficients ai and bi depend on the parameter α, while the nonlinear characteristic does not.

Applying generalized differentiation with respect to $\alpha$ to (4.199), we have

\omega_1(\lambda)\frac{Dy}{\partial\alpha} + \frac{\partial\omega_1}{\partial\alpha}y = \omega_2(\lambda)\frac{Dx}{\partial\alpha} + \frac{\partial\omega_2}{\partial\alpha}x. \qquad (4.203)

Then, due to the break conditions (4.202),

\frac{Dy}{\partial\alpha} = \frac{\partial y}{\partial\alpha} = u, \qquad (4.204)

and, moreover,

\frac{Dx}{\partial\alpha} = \frac{\partial x}{\partial\alpha} - \sum_i \Delta f_i\frac{dt_i}{d\alpha}\,\delta(t-t_i). \qquad (4.205)

The switching moments $t_i(\alpha)$ are determined from the conditions

y(t_i,\alpha) = \pm\sigma_0, \qquad (4.206)

where $\Delta f_i = 2k_p$ in the case of sign "+" and $\Delta f_i = -2k_p$ otherwise. Differentiating (4.206) with respect to $\alpha$, we obtain

\left.\frac{dy}{dt}\right|_{t=t_i\pm 0}\frac{dt_i}{d\alpha} + u(t_i\pm 0) = 0. \qquad (4.207)

Then, from the equality

y(t_i+0,\alpha) = y(t_i-0,\alpha) \qquad (4.208)

it follows, after differentiating with respect to $\alpha$, that

u(t_i+0) + \dot y(t_i+0)\frac{dt_i}{d\alpha} = u(t_i-0) + \dot y(t_i-0)\frac{dt_i}{d\alpha}. \qquad (4.209)

Therefore,

\frac{dt_i}{d\alpha} = -\frac{u(t_i-0)}{\dot y(t_i-0)} = -\frac{u(t_i+0)}{\dot y(t_i+0)}. \qquad (4.210)
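Formula (4.210) can be checked numerically on any trajectory with a known parametric form. In the sketch below the trajectory $y(t,\alpha) = \alpha e^{-t}$ and the threshold $\sigma_0$ are illustrative assumptions: the crossing time of $y = \sigma_0$ is $t_1 = \ln(\alpha/\sigma_0)$, so $dt_1/d\alpha = 1/\alpha$, which is exactly $-u(t_1)/\dot y(t_1)$:

```python
import math

# Assumed trajectory y(t, alpha) = alpha * exp(-t) crossing the level sigma0
alpha, sigma0 = 2.0, 0.5

def t_cross(a):
    # y(t1) = sigma0  =>  t1 = ln(a / sigma0)
    return math.log(a / sigma0)

t1 = t_cross(alpha)
u = math.exp(-t1)               # u = dy/dalpha evaluated at t1
ydot = -alpha * math.exp(-t1)   # dy/dt evaluated at t1
dt1_formula = -u / ydot         # formula (4.210)

eps = 1e-6
dt1_fd = (t_cross(alpha + eps) - t_cross(alpha - eps)) / (2 * eps)
print(dt1_formula, dt1_fd)      # both should equal 1/alpha = 0.5
```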

Since the function $x = f(y)$ is constant between the switching moments, we have $\partial x/\partial\alpha = 0$. Let us also note that, for the two signs in the right side of (4.206), we have $\dot y(t_i-0) > 0$ for plus and $\dot y(t_i-0) < 0$ for minus, respectively.


Using the above formulas, from (4.205) we obtain

\frac{Dx}{\partial\alpha} = 2k_p\sum_i \frac{u(t_i-0)}{|\dot y(t_i-0)|}\,\delta(t-t_i). \qquad (4.211)

Substituting this equation into (4.203) and using (4.204), we find the sensitivity equation

\omega_1 u + \frac{\partial\omega_1}{\partial\alpha}y = \omega_2\left[2k_p\sum_i \frac{u(t_i-0)}{|\dot y(t_i-0)|}\,\delta(t-t_i)\right] + \frac{\partial\omega_2}{\partial\alpha}x. \qquad (4.212)

The sensitivity function can be found from Equation (4.212) in two ways. The first method consists in the transition from (4.212) to a sequence of ordinary differential equations combined with corresponding break conditions. Using the general break conditions (4.164), for the case under consideration we find

\Delta u_i = -b_1\Delta f_i\frac{dt_i}{d\alpha} = -\frac{2b_1 k_p}{|\dot y(t_i-0)|}\,u(t_i-0),

\Delta\dot u_i + a_1\Delta u_i + \frac{da_1}{d\alpha}\Delta\dot y_i = 2k_p\left[(-1)^i\frac{db_1}{d\alpha} - \frac{b_2}{|\dot y(t_i-0)|}\,u(t_i-0)\right], \qquad (4.213)

and so on. Moreover, in the intervals between the switching moments we have

\omega_1\!\left(\frac{d}{dt}\right)u + \frac{\partial\omega_1}{\partial\alpha}\!\left(\frac{d}{dt}\right)y = \frac{db_n}{d\alpha}k_p, \qquad \omega_1\!\left(\frac{d}{dt}\right)u + \frac{\partial\omega_1}{\partial\alpha}\!\left(\frac{d}{dt}\right)y = -\frac{db_n}{d\alpha}k_p \qquad (4.214)

for motion on the upper and lower branches of the relay characteristic, respectively. Solving the break conditions (4.213) yields

u_i^+ = \left(1 - \frac{2b_1 k_p}{|\dot y(t_i-0)|}\right)u_i^-,

\dot u_i^+ = \dot u_i^- + a_1\frac{2b_1 k_p}{|\dot y(t_i-0)|}\,u_i^- + 2k_p\left[(-1)^i\frac{db_1}{d\alpha} - \frac{b_2}{|\dot y(t_i-0)|}\,u_i^-\right], \qquad (4.215)

and so on.

Solving successively Equations (4.214) and "sewing" them with the help of the conditions (4.215), we obtain the sensitivity function and its derivatives. But the procedure described above is fairly awkward. Moreover, in practical problems we often need only the sensitivity function itself, not its derivatives. In such cases it is more convenient to use the second calculation technique described in Section 3.3. With this aim in view, we rewrite (4.212) in the form

\omega_1(\lambda)u = q(t) + \omega_2(\lambda)\,2k_p\sum_i \frac{u(t_i-0)}{|\dot y(t_i-0)|}\,\delta(t-t_i), \qquad (4.216)

where

q(t) = \frac{\partial\omega_2}{\partial\alpha}x - \frac{\partial\omega_1}{\partial\alpha}y \qquad (4.217)

is a known generalized function. If $\tilde u(t)$ is a fixed solution of the equation

\omega_1(\lambda)\tilde u = q(t) \qquad (4.218)

(it is, according to the given assumptions, an ordinary function), then the solution of (4.216) can be represented in the form

u(t) = \tilde u(t) + 2k_p\sum_i \frac{u_i^-}{|\dot y_i^-|}\,g(t-t_i), \qquad (4.219)

where $g(t)$ is the weight function of the linear stationary system with the transfer function

\omega(\lambda) = \frac{\omega_2(\lambda)}{\omega_1(\lambda)}. \qquad (4.220)

It is known [70] that if the transfer function (4.220) can be expanded into partial fractions as

\omega(\lambda) = \sum_i\sum_m \frac{c_{im}}{(\lambda-\lambda_i)^m}, \qquad (4.221)

then

g(t) = \sum_i\sum_m \frac{c_{im}\,e^{\lambda_i t}\,t^{m-1}}{(m-1)!}. \qquad (4.222)
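The expansion (4.221)–(4.222) is easy to exercise on a concrete transfer function. For the assumed example $\omega(\lambda) = 1/((\lambda+1)(\lambda+2))$ the partial fractions are $1/(\lambda+1) - 1/(\lambda+2)$, so $g(t) = e^{-t} - e^{-2t}$; the sketch compares this series with the impulse response obtained by direct integration of $\ddot y + 3\dot y + 2y = 0$, $y(0) = 0$, $\dot y(0) = 1$:

```python
import math

# Assumed example: omega(lambda) = 1 / ((lambda+1)(lambda+2)),
# partial fractions 1/(lambda+1) - 1/(lambda+2), hence by (4.222):
def g_series(t):
    return math.exp(-t) - math.exp(-2 * t)

def g_ode(t_end, n=2000):
    """Impulse response by RK4 for y'' + 3y' + 2y = 0, y(0)=0, y'(0)=1."""
    def f(y, v):
        return v, -3 * v - 2 * y
    y, v, h = 0.0, 1.0, t_end / n
    for _ in range(n):
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + 0.5*h*k1y, v + 0.5*h*k1v)
        k3y, k3v = f(y + 0.5*h*k2y, v + 0.5*h*k2v)
        k4y, k4v = f(y + h*k3y, v + h*k3v)
        y += (h/6) * (k1y + 2*k2y + 2*k3y + k4y)
        v += (h/6) * (k1v + 2*k2v + 2*k3v + k4v)
    return y

for t in (0.5, 1.0, 2.0):
    print(t, g_series(t), g_ode(t))
```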

Expression (4.219) makes it possible to construct the sensitivity function directly in the time domain. Indeed, let $t_0$ be the starting point and $t_1$ the first switching moment. Then, from (4.219) it follows that

u(t) = \tilde u(t), \qquad t_0 < t \le t_1-0. \qquad (4.223)

Hence,

u_1^- = u(t_1-0) = \tilde u(t_1-0). \qquad (4.224)

Then, according to (4.219), we have

u(t) = \tilde u(t) + 2k_p\frac{\tilde u(t_1-0)}{|\dot y(t_1-0)|}\,g(t-t_1), \qquad t_1 < t \le t_2-0. \qquad (4.225)

For $t = t_2-0$ Equation (4.225) yields

u_2^- = \tilde u_2^- + 2k_p\frac{\tilde u_1^-}{|\dot y_1^-|}\,g(t_2-t_1) \qquad (4.226)

and, therefore,

u(t) = \tilde u(t) + 2k_p\frac{\tilde u_1^-}{|\dot y_1^-|}\,g(t-t_1) + \frac{2k_p}{|\dot y_2^-|}\left[\tilde u_2^- + 2k_p\frac{\tilde u_1^-}{|\dot y_1^-|}\,g(t_2-t_1)\right]g(t-t_2) \qquad (4.227)

for $t_2 < t \le t_3-0$, and so on. This method is superior to the first one in the computational aspect.

Now we consider determination of the function $\tilde u(t)$. Assume that $t_0$ does not coincide with any of the switching moments. Then, the initial conditions $\tilde u(t_0)$ will simultaneously be initial conditions for the desired solution $u(t)$, because the second term in the right side of Formula (4.219) affects the result only for $t \ge t_1+0$. Therefore, the function $\tilde u(t)$ is a solution of Equation (4.218) with the initial conditions obtained for the sensitivity function using the reasoning of Chapter 2. As a special case, if the initial conditions are independent of the parameter, $\tilde u(t)$ is a solution of Equation (4.218) with zero initial conditions.

Let us consider sensitivity of the system with respect to the nonlinearity parameters $k_p$ and $\sigma_0$, assuming that the coefficients $a_i$ and $b_i$ are independent of these parameters. Applying generalized differentiation to (4.199) with respect to $k_p$, we have

\omega_1(\lambda)u = \omega_2(\lambda)\frac{Dx}{\partial k_p}. \qquad (4.228)

Next, we construct the generalized derivative $Dx/\partial k_p$, using the main relation (4.205). For concreteness, we will assume that $x > 0$ at the moment $t_0$.


Then, obviously,

x(t) = (-1)^i k_p, \qquad t_i < t < t_{i+1}. \qquad (4.229)

Hence,

\frac{\partial x}{\partial k_p} = r(t) = (-1)^i, \qquad t_i < t < t_{i+1}. \qquad (4.230)

Since the parameter $k_p$ does not enter the condition of the switching surface, the value $dt_i/dk_p$ is, as before, given by Relation (4.210). Repeating the above calculations, we find

\frac{Dx}{\partial k_p} = r(t) + 2k_p\sum_i \frac{u_i^-}{|\dot y_i^-|}\,\delta(t-t_i). \qquad (4.231)

Assume that the initial conditions are independent of the parameter on the family of solutions under consideration. Then, the sensitivity function can be calculated by (4.219), where $\tilde u(t)$ is the solution of the equation

\omega_1(\lambda)u = \omega_2(\lambda)r(t) \qquad (4.232)

with zero initial conditions. As is known [70], $\tilde u(t)$ is defined by

\tilde u(t) = \int_{t_0}^{t} g(t-\tau)\,r(\tau)\,d\tau, \qquad (4.233)

where $g(t)$ is the weight function (4.222). Substituting the explicit formula for $r(t)$, we obtain

\tilde u(t) = \sum_{i=0}^{n-1}(-1)^i\int_{t_i}^{t_{i+1}} g(t-\tau)\,d\tau + (-1)^n\int_{t_n}^{t} g(t-\tau)\,d\tau, \qquad t_n < t < t_{n+1}. \qquad (4.234)

Using (4.234) and (4.219), as well as the recursive procedure described above, we can construct the required sensitivity function.

Next, let us illustrate the method of constructing the sensitivity equation with respect to the parameter $\sigma_0$. Using generalized differentiation with


respect to $\sigma_0$ in (4.199), we obtain, similarly to (4.228),

\omega_1(\lambda)u = \omega_2(\lambda)\frac{Dx}{\partial\sigma_0}. \qquad (4.235)

Since the parameter $\sigma_0$ is not included in the nonlinear function explicitly, we have

\frac{Dx}{\partial\sigma_0} = -\sum_i \Delta x_i\frac{dt_i}{d\sigma_0}\,\delta(t-t_i). \qquad (4.236)

Then, considering the break conditions (4.206) as an implicit equation determining the function $t_i(\sigma_0)$, we find

\frac{dt_i}{d\sigma_0} = -\frac{u_i^- \pm 1}{\dot y_i^-} = -\frac{u_i^+ \pm 1}{\dot y_i^+}, \qquad (4.237)

and Equation (4.235) takes the form

\omega_1(\lambda)u = \omega_2(\lambda)\,2k_p\sum_i \frac{u_i^- + (-1)^i}{\dot y_i^-}\,\delta(t-t_i). \qquad (4.238)

In this case, $q(t) = 0$ (see (4.217)), and, assuming that the initial conditions are independent of $\sigma_0$, we have $\tilde u(t) = 0$, which immediately yields

u = 2k_p\sum_i \frac{u_i^- + (-1)^i}{\dot y_i^-}\,g(t-t_i). \qquad (4.239)

As before, it is assumed for concreteness that the first switching after $t_0$ occurs from the upper branch of the relay characteristic to the lower one. In particular, from (4.239) it follows that

u(t) = 0, \qquad t_0 < t < t_1. \qquad (4.240)

From the physical viewpoint this result is clear. Indeed, since the initial conditions are assumed to be independent of $\sigma_0$, it is obvious that variation of this parameter can affect the solution only after the first switching moment $t_1$. Let us construct, for the case at hand, the equation for the second-order sensitivity function.


Using generalized differentiation with respect to $\sigma_0$ in (4.235), we have

\omega_1(\lambda)\frac{Du}{\partial\sigma_0} = \omega_2(\lambda)\frac{D^2x}{\partial\sigma_0^2}. \qquad (4.241)

Moreover,

\frac{Du}{\partial\sigma_0} = u^{(2)} - \sum_i \Delta u_i\frac{dt_i}{d\sigma_0}\,\delta(t-t_i), \qquad \frac{D^2x}{\partial\sigma_0^2} = -\sum_i \Delta x_i\frac{d^2t_i}{d\sigma_0^2}\,\delta(t-t_i) + \sum_i \Delta x_i\left(\frac{dt_i}{d\sigma_0}\right)^2\dot\delta(t-t_i), \qquad (4.242)

and it remains only to calculate the second derivative $d^2t_i/d\sigma_0^2$. With this aim in view, we differentiate Equation (4.206) twice with respect to $\sigma_0$ and solve the obtained equation for $d^2t_i/d\sigma_0^2$. As a result, we find

\frac{d^2t_i}{d\sigma_0^2} = -\frac{u_i^{(2)-} + 2\dot u_i^-\dfrac{dt_i}{d\sigma_0} + \ddot y_i^-\left(\dfrac{dt_i}{d\sigma_0}\right)^2}{\dot y_i^-}. \qquad (4.243)

For a practical solution of Equation (4.241) any of the approaches given above can be applied. We can either transform it into a system of ordinary equations connected by the corresponding break conditions, or, rewriting (4.241) in the form

\omega_1(\lambda)u^{(2)} = \omega_1(\lambda)\sum_i \Delta u_i\frac{dt_i}{d\sigma_0}\,\delta(t-t_i) + \omega_2(\lambda)\frac{D^2x}{\partial\sigma_0^2}, \qquad (4.244)

solve it directly for $u^{(2)}(t)$.

4.4.2  Pulse-Amplitude Systems

In the present paragraph we obtain equations and methods of calculation of sensitivity functions for single-loop systems consisting of a stationary linear part and a pulse-amplitude element. The equation of the system has the form

\omega_1(\lambda,\alpha)y = \omega_2(\lambda,\alpha)x, \qquad x = f(y,t), \qquad (4.245)

where $\omega_1$ and $\omega_2$ are the same functions as in the previous section, and $f(y,t)$ is the characteristic of the sampling element:

x = f(y,t) = \begin{cases} y(nT), & nT < t < nT+t_1,\\ 0, & nT+t_1 < t < (n+1)T.\end{cases} \qquad (4.246)

First, we assume that the sampling period $T$ is independent of the parameter. Using the break conditions (4.164), it is easy to find that in this case the solution $y(t)$ is continuous; therefore,

\frac{Dy}{dt} = \frac{dy}{dt}, \qquad \frac{Dy}{\partial\alpha} = \frac{\partial y}{\partial\alpha} = u. \qquad (4.247)

From the continuity of $y(t)$ it follows that the function (4.246) has discontinuities only at the moments

t_i = iT, \qquad t_i' = t_i + t_1, \qquad (4.248)

which are independent of the parameter $\alpha$. Therefore, according to (4.205), in the given case we have

\frac{Dx}{\partial\alpha} = \frac{\partial x}{\partial\alpha}. \qquad (4.249)

Using generalized differentiation with respect to $\alpha$ in (4.245) and taking (4.247) and (4.249) into account, we obtain

\omega_1(\lambda,\alpha)u = \omega_2(\lambda,\alpha)\frac{\partial x}{\partial\alpha} - \frac{\partial\omega_1}{\partial\alpha}y + \frac{\partial\omega_2}{\partial\alpha}x. \qquad (4.250)

Let us calculate the ordinary derivative $\partial x/\partial\alpha$. With this aim in view, we notice that the sampling period is independent of the parameter; therefore

\frac{\partial y(nT,\alpha)}{\partial\alpha} = u(nT). \qquad (4.251)

Hence,

\frac{\partial x}{\partial\alpha} = \begin{cases} u(nT), & nT < t < nT+t_1,\\ 0, & nT+t_1 < t < (n+1)T.\end{cases} \qquad (4.252)

Equation (4.252) does not, in principle, differ from (4.246). Thus, in the given case, the sensitivity equation (4.250) is the equation of a pulse system similar to the initial one, but acted on by the exogenous disturbance

q(t) = -\frac{\partial\omega_1}{\partial\alpha}y + \frac{\partial\omega_2}{\partial\alpha}x. \qquad (4.253)

Since $x$ is a piecewise continuous function, we have

\frac{\partial\omega_2}{\partial\alpha}x = \sum_{i=1}^{n}\frac{db_i}{d\alpha}\lambda^{n-i}x. \qquad (4.254)

Due to the above results,

\lambda x = \sum_i\left[\Delta x(t_i)\,\delta(t-t_i) + \Delta x(t_i')\,\delta(t-t_i')\right], \qquad \lambda^2 x = \sum_i\left[\Delta x(t_i)\,\dot\delta(t-t_i) + \Delta x(t_i')\,\dot\delta(t-t_i')\right], \quad \ldots \qquad (4.255)

where

\Delta x(t_i) = y(t_i) = y(iT), \qquad \Delta x(t_i') = -y(t_i) = -y(iT). \qquad (4.256)
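The observation that (4.250)–(4.252) form a pulse system of the same structure can be verified in simulation: the sensitivity function is propagated through the same sample-and-hold element as the state and then compared against a finite difference. The first-order linear part and all numerical values below are illustrative assumptions:

```python
# Assumed first-order linear part dy/dt = -a*y + x with pulse-amplitude sampling:
# x = y(nT) on [nT, nT + t1), x = 0 on [nT + t1, (n+1)T)
T, t1, n_periods, substeps = 1.0, 0.4, 5, 4000

def simulate(a, y0=1.0):
    y, u = y0, 0.0          # u = dy/da, cf. (4.250)-(4.252)
    h = T / substeps
    for n in range(n_periods):
        yn, un = y, u       # sampled values y(nT) and u(nT)
        for k in range(substeps):
            t_loc = k * h
            x = yn if t_loc < t1 else 0.0
            xu = un if t_loc < t1 else 0.0  # sensitivity passes through the same sampler
            # Euler step for the state and for the sensitivity equation
            y, u = y + h * (-a * y + x), u + h * (-a * u - y + xu)
    return y, u

a0, eps = 0.7, 1e-5
yT, uT = simulate(a0)
fd = (simulate(a0 + eps)[0] - simulate(a0 - eps)[0]) / (2 * eps)
print(uT, fd)
```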

As in the previous case, actual determination of the sensitivity function from Equation (4.250) can be performed in two ways: either by a transition to a sequence of ordinary equations connected by the corresponding break conditions, or by direct solution of an equation of the form (4.236) with a known function $q(t)$. In this case, using the break equations, it is easy to show that the sensitivity function is continuous.

Then, consider the sampling period $T$ as a parameter and assume that the polynomials $\omega_i(\lambda)$ are independent of $T$. In this case the switching moments $t_i$ and $t_i'$ depend on the parameter, because

\frac{dt_i}{dT} = i, \qquad \frac{dt_i'}{dT} = i. \qquad (4.257)

We have (4.247), as before, but, as distinct from (4.249),

\frac{Dx}{\partial T} = \frac{\partial x}{\partial T} - \sum_i \Delta x(t_i)\,i\,\delta(t-t_i) - \sum_i \Delta x(t_i')\,i\,\delta(t-t_i'). \qquad (4.258)

Therefore, generalized differentiation of (4.245) with respect to $T$ yields

\omega_1(\lambda)u = \omega_2(\lambda)\left[\frac{\partial x}{\partial T} - \sum_i \Delta x(t_i)\,i\,\delta(t-t_i) - \sum_i \Delta x(t_i')\,i\,\delta(t-t_i')\right]. \qquad (4.259)

Let us calculate the ordinary derivative $\partial x/\partial T$. For this purpose, we note that the function $y(nT)$ depends on $T$ in two ways: as a function of the current moment $t_n = nT$ and as a function of the parameter $T$ on all preceding stages of motion. Therefore, the value $y(nT) = y(nT,T)$ must actually appear in (4.246), and differentiation yields

\frac{dy(nT,T)}{dT} = n\frac{dy}{dt}(nT+0) + u(nT+0) = n\frac{dy}{dt}(nT-0) + u(nT-0). \qquad (4.260)

Using the last equality, we find

\frac{\partial x}{\partial T} = \begin{cases} n\dfrac{dy}{dt}(nT-0) + u(nT-0), & nT < t < nT+t_1,\\ 0, & nT+t_1 < t < (n+1)T.\end{cases} \qquad (4.261)

Let us show a possible way of actual construction of the sensitivity function by Equation (4.259). Let the function $h(t,t_n,t_n')$ be a solution of the equation

\omega_1(\lambda)y = \omega_2(\lambda)x \qquad (4.262)

with zero initial conditions for the case when

x(t) = \begin{cases} 1 & \text{for } t_n < t < t_n',\\ 0 & \text{for } t < t_n,\; t > t_n'.\end{cases} \qquad (4.263)

Obviously,

h(t,t_n,t_n') = \begin{cases} 0 & \text{for } t < t_n,\\[2pt] \displaystyle\int_{t_n}^{t} g(t-\tau)\,d\tau & \text{for } t_n < t < t_n',\\[2pt] \displaystyle\int_{t_n}^{t_n'} g(t-\tau)\,d\tau & \text{for } t > t_n',\end{cases} \qquad (4.264)

where $g(t)$ is the weight function associated with the transfer function (4.220). Obviously, taking into account (4.256) and (4.261), a solution of Equation (4.259) for zero initial conditions corresponding to the moment $t_0$ has the form

u(t) = \sum_i h(t,t_i,t_i')\left(u_i^- + i\,\dot y_i^-\right) - \sum_i i\,y_i\left[g(t-t_i) - g(t-t_i')\right]. \qquad (4.265)


As a specific case, for $0 < t_0 < T$ we have

u(t) = 0, \qquad t_0 < t < T.

Using the above reasoning, it is easy to obtain sensitivity equations with respect to the parameter $t_1$, which characterizes the pulse duration. In this case,

\frac{dt_i}{dt_1} = 0, \qquad \frac{dt_i'}{dt_1} = 1, \qquad (4.266)

and we have, instead of (4.258),

\frac{Dx}{\partial t_1} = \frac{\partial x}{\partial t_1} - \sum_i \Delta x(t_i')\,\delta(t-t_i').

Moreover, since the sampling period $T$ is assumed to be independent of $t_1$, we also have

\frac{\partial x}{\partial t_1} = \begin{cases} u(nT) & \text{for } nT < t < nT+t_1,\\ 0 & \text{for } nT+t_1 < t < (n+1)T,\end{cases} \qquad (4.267)

so that the sensitivity equation takes the form

\omega_1(\lambda)u = \omega_2(\lambda)\left[\frac{\partial x}{\partial t_1} + \sum_i y(iT)\,\delta(t-t_i')\right]. \qquad (4.268)

Using the same technique as for the derivation of Formula (4.265), from (4.268) we can obtain the following explicit expression for the sensitivity function:

u(t) = \sum_i h(t,t_i,t_i')\,u_i + \sum_i g(t-t_i')\,y_i, \qquad (4.269)

where

y_i = y(iT), \qquad u_i = u(iT). \qquad (4.270)

4.4.3  Pulse-Width Systems

Consider a system described by Equation (4.245), where the controlling function $x = f(y,t)$ has the form

f(y,t) = \begin{cases} \mathrm{sign}\,y(nT) & \text{for } nT < t < nT+t_{1n},\\ 0 & \text{for } nT+t_{1n} < t < (n+1)T,\end{cases} \qquad (4.271)

where $T$ is a fixed sampling period, and the pulse duration $t_{1n} \le T$ is a function of the value $y_n = y(nT)$, so that

t_{1n} = \psi(y_n), \qquad (4.272)

where $\psi(y)$ is a smooth function. According to the assumptions made above, the function $y(t)$ is continuous; therefore Equation (4.247) holds. The controlling function $f(y,t)$ is piecewise constant and the height of the pulses is independent of the parameter. Therefore, for any $\alpha$ we have

\frac{Dx}{\partial\alpha} = -\sum_i \Delta x_i\frac{dt_i}{d\alpha}\,\delta(t-t_i), \qquad (4.273)

where $t_i$ are the corresponding switching moments. In this case, as before, we have two sequences of switching moments:

t_i = iT, \qquad t_i' = iT + t_{1i}, \qquad (4.274)

and, accordingly,

\Delta x_i = \Delta x(t_i) = \mathrm{sign}\,y_n, \qquad \Delta x_i' = \Delta x(t_i') = -\mathrm{sign}\,y_n. \qquad (4.275)

Therefore, the general sensitivity equation of a pulse-width system can be obtained from (4.245) by generalized differentiation with respect to $\alpha$:

\omega_1(\lambda,\alpha)u = -\omega_2(\lambda,\alpha)\sum_i \mathrm{sign}\,y_i\left[\frac{dt_i}{d\alpha}\,\delta(t-t_i) - \frac{dt_i'}{d\alpha}\,\delta(t-t_i')\right] - \frac{\partial\omega_1}{\partial\alpha}y + \frac{\partial\omega_2}{\partial\alpha}x. \qquad (4.276)

Equation (4.276) gives a number of special cases important in applications. Assume that the sampling period is independent of the parameter. Then,

\frac{dt_i}{d\alpha} = 0, \qquad (4.277)

and the sensitivity function remains continuous at the moments $t_i = iT$, while the function $y(iT) = y(iT,\alpha) = y_i$ is continuously differentiable with respect to $\alpha$. Therefore,

\frac{dt_{1i}}{d\alpha} = \frac{d\psi(y_i)}{dy_i}\cdot\frac{\partial y(iT,\alpha)}{\partial\alpha} = \frac{d\psi}{dy}(t_i)\,u(t_i). \qquad (4.278)

As follows from (4.276),

\omega_1(\lambda,\alpha)u = \omega_2(\lambda,\alpha)\sum_i \mathrm{sign}\,y_i\,\frac{d\psi}{dy}(t_i)\,u(t_i)\,\delta(t-t_i') + \frac{\partial\omega_2}{\partial\alpha}x - \frac{\partial\omega_1}{\partial\alpha}y. \qquad (4.279)

Now let us construct the sensitivity equation with respect to the sampling period $T$, assuming that all the remaining quantities in the initial equation are independent of $T$. Then, as distinct from (4.277), we have

\frac{dt_i}{dT} = i. \qquad (4.280)

For calculation of the derivative

\frac{dt_i'}{dT} = \frac{d}{dT}\left[iT + \psi(y_i)\right] \qquad (4.281)

we must take into account the double dependence $y_i = y(iT,T)$ on the parameter $T$. Obviously,

\frac{dt_i'}{dT} = i + \frac{d\psi(y_i)}{dy_i}\cdot\frac{dy_i}{dT},

and, similarly to the previous result,

\frac{dy_i}{dT} = i\dot y_i^- + u_i^- = i\dot y_i^+ + u_i^+. \qquad (4.282)

Therefore, the sensitivity equation takes the form

\omega_1(\lambda)u = -\omega_2(\lambda)\sum_i i\,\mathrm{sign}\,y_i\,\delta(t-t_i) + \omega_2(\lambda)\sum_i\left[i + \frac{d\psi}{dy}(t_i)\left(i\dot y_i^- + u_i^-\right)\right]\mathrm{sign}\,y_i\,\delta(t-t_i'). \qquad (4.283)

For zero initial conditions, the solution of the sensitivity equation can be expressed in terms of the weight function (4.222) as

u(t) = -\sum_i i\,\mathrm{sign}\,y_i\,g(t-t_i) + \sum_i\left[i + \frac{d\psi}{dy}(t_i)\left(i\dot y_i^- + u_i^-\right)\right]\mathrm{sign}\,y_i\,g(t-t_i'). \qquad (4.284)

From this equation the desired solution can be obtained recursively.

4.4.4  Pulse-Frequency Systems

Sensitivity investigation for pulse-frequency systems is connected with a number of characteristic features mentioned in Chapter 3. Ignoring, for simplicity, the dead zone of the sampling element (3.187), we can describe a wide class of pulse-frequency modulators by relations of the form

x = f(y,t) = \mathrm{sign}\,y_i, \qquad (4.285)

where the pulse duration is defined by

t_{i+1} - t_i = T_i = \psi(y_i), \qquad (4.286)

where $\psi(y)$ is a bounded, even, positive function that will be assumed to be continuously differentiable. Let $t_0$ be a starting moment that does not coincide with the switching moments $t_i$ $(i = 1, 2, \ldots)$. Then, as for the derivation of Equation (4.276), we obtain the sensitivity equation of a pulse-frequency system in the form

\omega_1(\lambda,\alpha)u = -\omega_2(\lambda,\alpha)\sum_i \Delta x_i\frac{dt_i}{d\alpha}\,\delta(t-t_i) + \frac{\partial\omega_2}{\partial\alpha}x - \frac{\partial\omega_1}{\partial\alpha}y, \qquad (4.287)

where $\Delta x_i = 2\,\mathrm{sign}\,y_i$ for $\mathrm{sign}\,y_i = -\mathrm{sign}\,y_{i-1}$, and $\Delta x_i = 0$ for $\mathrm{sign}\,y_i = \mathrm{sign}\,y_{i-1}$. Therefore, only the pulses corresponding to sign changes of the controlling function actually enter (4.287). According to (4.286), the switching moments are given by the recurrent relation

t_i = t_{i-1} + \psi(y_{i-1}). \qquad (4.288)

Consider the most general problem when, together with $\omega_1$ and $\omega_2$, the function $\psi$ also depends on the parameter. Note that the function $y(t)$ is continuous. Therefore, as before, at the switching moments the function $y_i = y[t_i(\alpha),\alpha]$ is differentiable with respect to $\alpha$, disregarding the fact that the sensitivity function can be discontinuous. As follows from the aforesaid,

\frac{dy_i}{d\alpha} = \dot y_i^-\frac{dt_i}{d\alpha} + u_i^- = \dot y_i^+\frac{dt_i}{d\alpha} + u_i^+. \qquad (4.289)

Differentiating (4.288) with respect to $\alpha$, we find

\frac{dt_i}{d\alpha} = \frac{dt_{i-1}}{d\alpha} + \frac{\partial\psi}{\partial y}(t_i)\frac{dy_{i-1}}{d\alpha} + \frac{\partial\psi}{\partial\alpha}(t_i), \qquad (4.290)

where any expression obtained from (4.289) can be used in place of $dy_{i-1}/d\alpha$. Let us note that, as distinct from the previous problems, where we had finite formulas determining the derivatives of the switching moments, in this case we obtain only the recurrent relations (4.290). Nevertheless, using (4.290) and the sensitivity equation (4.287), for known initial conditions we can successively find all values of interest. Let, for instance,

\frac{\partial\omega_1}{\partial\alpha} = \frac{\partial\omega_2}{\partial\alpha} = 0. \qquad (4.291)

From (4.287) we have

\omega_1(\lambda)u = -\omega_2(\lambda)\sum_i \Delta x_i\frac{dt_i}{d\alpha}\,\delta(t-t_i). \qquad (4.292)

Assume also that the initial conditions for the sensitivity equation (4.292) are zero, and the moment $t_0$ is independent of the parameter. In this case

u(t) = 0, \qquad t_0 \le t \le t_1-0, \qquad (4.293)

where $t_1$ is the first switching moment. Then, for $\mathrm{sign}\,y_1 = -\mathrm{sign}\,y_0$ we have

u = -2\,\mathrm{sign}\,y_1\,\frac{dt_1}{d\alpha}\,g(t-t_1), \qquad t_1+0 \le t \le t_2-0. \qquad (4.294)

The switching moment $t_1(\alpha)$ is obtained from the equation

t_1 = t_0 + \psi(y_0,\alpha). \qquad (4.295)

Hence,

\frac{dt_1}{d\alpha} = \frac{\partial\psi(y_0,\alpha)}{\partial\alpha}. \qquad (4.296)

This uniquely determines the solution $u(t)$ on the interval $t_1+0 \le t \le t_2-0$. After that, the calculation process can be continued, because we can find the derivative $dt_2/d\alpha$ using the known $dt_1/d\alpha$ and the sensitivity equation, and so on.
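The recurrences (4.288) and (4.290) can be exercised on their own. The sketch below uses an assumed explicit trajectory model and modulation law purely to check the chain rule against a finite difference (simplified so that $y_i$ depends only on $t_i$ and $\alpha$ directly, making $u_i = 1$):

```python
import math

# Assumed explicit trajectory y(t, a) = sin(t) + a, so dy/da = 1 and dy/dt = cos(t);
# assumed modulation law psi(y, a) = 1 + 0.2*a + 0.1*y**2
def y(t, a):
    return math.sin(t) + a

def psi(yv, a):
    return 1.0 + 0.2 * a + 0.1 * yv * yv

def moment(a, n=4):
    # Switching moment t_n by the recurrence (4.288): t_i = t_{i-1} + psi(y_{i-1})
    t = 0.0
    for _ in range(n):
        t = t + psi(y(t, a), a)
    return t

def dmoment(a, n=4):
    # dt_n/da by the recurrence (4.290), using (4.289) for dy_{i-1}/da
    t, dt = 0.0, 0.0                      # t0 is independent of the parameter
    for _ in range(n):
        yv = y(t, a)
        dy_da = math.cos(t) * dt + 1.0    # ydot * dt/da + u, cf. (4.289)
        dt = dt + 0.2 * yv * dy_da + 0.2  # psi_y * dy/da + psi_alpha
        t = t + psi(yv, a)
    return dt

a0, eps = 0.3, 1e-6
dt_rec = dmoment(a0)
dt_fd = (moment(a0 + eps) - moment(a0 - eps)) / (2 * eps)
print(dt_rec, dt_fd)
```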


Chapter 5

Sensitivity of Non-Time Characteristics of Control Systems

5.1  Sensitivity of Transfer Function and Frequency Responses of Linear Systems

5.1.1  Sensitivity of Transfer Function

DEFINITION 5.1  For a function $z(t)$ in a real variable $t$, the following function in the complex variable $s$:

Z(s) = L[z(t)] = \int_0^\infty z(t)\,e^{-st}\,dt \qquad (5.1)

is called the Laplace image (transform).

It is assumed that the convergence conditions formulated in Section 4.1.6 hold for the integral (5.1). Let us have a parametric family of solutions $z(t,\alpha)$, and let for any $\alpha\in[\alpha_1,\alpha_2]$ there exist the transform

Z(s,\alpha) = \int_0^\infty z(t,\alpha)\,e^{-st}\,dt.

THEOREM 5.1 [114]  Assume that the function $\tilde z(t,\alpha) = z(t,\alpha)e^{-st}$ exists and is continuous with respect to $t$ for $t \ge 0$ and $\alpha\in[\alpha_1,\alpha_2]$. Moreover, let its derivative $\partial\tilde z(t,\alpha)/\partial\alpha$ be, for the specified arguments, continuous with respect to both arguments. Assume that the integral (5.1) converges for all $\alpha\in[\alpha_1,\alpha_2]$, and that the integral

\int_0^\infty \frac{\partial\tilde z(t,\alpha)}{\partial\alpha}\,dt

converges uniformly with respect to $\alpha$ in the same interval. Then, for any $\alpha\in[\alpha_1,\alpha_2]$ we have

\frac{\partial Z(s,\alpha)}{\partial\alpha} = \int_0^\infty \frac{\partial\tilde z(t,\alpha)}{\partial\alpha}\,dt. \qquad (5.2)

From Formula (5.2) we have

\frac{\partial Z(s,\alpha)}{\partial\alpha} = \int_0^\infty \frac{\partial z(t,\alpha)}{\partial\alpha}\,e^{-st}\,dt,

or

\frac{\partial Z}{\partial\alpha} = L\!\left[\frac{\partial z(t,\alpha)}{\partial\alpha}\right].
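The commutation property is easy to check on a concrete family. For the assumed family $z(t,\alpha) = e^{-\alpha t}$ we have $Z(s,\alpha) = 1/(s+\alpha)$, so $\partial Z/\partial\alpha = -1/(s+\alpha)^2$, while $L[\partial z/\partial\alpha] = L[-t\,e^{-\alpha t}]$ gives the same value; the sketch verifies this with numerical quadrature:

```python
import math

# Assumed family z(t, alpha) = exp(-alpha*t):
#   Z(s, alpha) = 1/(s + alpha),  dZ/dalpha = -1/(s + alpha)^2
s, alpha = 2.0, 0.5

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3.0

# L[dz/dalpha] = integral of (-t e^{-alpha t}) e^{-s t} dt, truncated at t = 40
lhs = simpson(lambda t: -t * math.exp(-(alpha + s) * t), 0.0, 40.0)
rhs = -1.0 / (s + alpha) ** 2
print(lhs, rhs)   # both close to -0.16
```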

As follows from Theorem 5.1, the Laplace transformation with respect to the variable $t$ commutes with differentiation with respect to the parameter $\alpha$. It is known that the transfer function $\omega(s)$ of a causal single-input single-output linear system is the image of the weight function $h(t)$, so that

\omega(s) = \int_0^\infty h(t)\,e^{-st}\,dt.

The weight function $h(t)$ is defined as the response of the system to the unit delta-function acting on its input. To the weight function there corresponds the sensitivity function $u(t) = \partial h(t,\alpha)/\partial\alpha$. By Theorem 5.1, we can write

L\!\left[\frac{\partial h(t,\alpha)}{\partial\alpha}\right] = \frac{\partial}{\partial\alpha}\omega(s).

DEFINITION 5.2  The derivative $\partial\omega(s)/\partial\alpha$ will be called the sensitivity function of the transfer function. Denote

S_\omega^\alpha(s) = \frac{\partial\omega(s)}{\partial\alpha}.

We notice that $S_\omega^\alpha(s)$ is a function in the complex argument $s$. For system investigation, the half-logarithmic sensitivity function

\frac{\partial\omega(s,\alpha)}{\partial\ln\alpha} = S_\omega^\alpha(s)\,\alpha

and the logarithmic one

\frac{\partial\ln\omega(s,\alpha)}{\partial\ln\alpha} = S_\omega^\alpha(s)\,\frac{\alpha}{\omega(s)}

are often used. As an example, we consider the transfer function of an oscillatory unit given by

\omega(s) = \frac{k}{T^2s^2 + 2\xi Ts + 1}.

For this case we have

\frac{\partial\omega(s)}{\partial T} = -\frac{2\omega(s)(Ts+\xi)s}{T^2s^2+2\xi Ts+1}, \qquad \frac{\partial\omega(s)}{\partial\xi} = -\frac{2Ts\,\omega(s)}{T^2s^2+2\xi Ts+1}.
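These two derivatives can be verified pointwise in $s$ by finite differences; the parameter values below are illustrative assumptions:

```python
# Numerical check of the oscillatory-unit sensitivity formulas (illustrative values)
k, T, xi = 1.0, 0.5, 0.3
s = 1.0 + 2.0j

def w(T, xi):
    return k / (T * T * s * s + 2 * xi * T * s + 1)

D = T * T * s * s + 2 * xi * T * s + 1
dw_dT_formula = -2 * w(T, xi) * (T * s + xi) * s / D
dw_dxi_formula = -2 * T * s * w(T, xi) / D

eps = 1e-7
dw_dT_fd = (w(T + eps, xi) - w(T - eps, xi)) / (2 * eps)
dw_dxi_fd = (w(T, xi + eps) - w(T, xi - eps)) / (2 * eps)
print(abs(dw_dT_formula - dw_dT_fd), abs(dw_dxi_formula - dw_dxi_fd))
```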

Let $\omega_0(s)$ be the transfer function of the open-loop system, and $\omega(s)$ the transfer function of the closed-loop system:

\omega(s) = \frac{\omega_0(s)}{1+\omega_0(s)}.

Then,

\frac{\partial\omega(s)}{\partial\alpha} = \frac{\partial\omega_0(s)/\partial\alpha}{[1+\omega_0(s)]^2}, \qquad (5.3)

\frac{\partial\ln\omega(s)}{\partial\alpha} = \frac{\partial\ln\omega_0(s)/\partial\alpha}{1+\omega_0(s)}. \qquad (5.4)
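Relation (5.3) is likewise a one-line check: with an assumed open-loop transfer function $\omega_0(s,\alpha) = \alpha/(s(s+1))$, the closed-loop derivative must equal $(\partial\omega_0/\partial\alpha)/(1+\omega_0)^2$:

```python
# Check of (5.3) for an assumed open-loop transfer function w0 = alpha / (s*(s+1))
s, alpha = 0.5 + 1.0j, 2.0

def w0(a):
    return a / (s * (s + 1))

def w_closed(a):
    return w0(a) / (1 + w0(a))

dw0_da = 1 / (s * (s + 1))               # exact derivative of w0 in alpha
lhs = (w_closed(alpha + 1e-7) - w_closed(alpha - 1e-7)) / 2e-7
rhs = dw0_da / (1 + w0(alpha)) ** 2      # right side of (5.3)
print(abs(lhs - rhs))
```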

Assume that

\omega(s) = \frac{\omega_0(s)}{1+\omega_{fb}(s)\,\omega_0(s)},

where $\omega_{fb}(s)$ is the feedback transfer function, which is independent of the parameter $\alpha$. Then,

\frac{\partial\omega(s)}{\partial\alpha} = \frac{\partial\omega_0(s)/\partial\alpha}{[1+\omega_{fb}(s)\,\omega_0(s)]^2}, \qquad (5.5)

\frac{\partial\ln\omega(s)}{\partial\alpha} = \frac{\partial\ln\omega_0(s)/\partial\alpha}{1+\omega_{fb}(s)\,\omega_0(s)}. \qquad (5.6)

Equations (5.5)–(5.6) establish the relations between sensitivity functions of closed-loop and open-loop systems.

5.1.2  Sensitivity of Frequency Responses

DEFINITION 5.3  Substitute $j\omega$ for $s$ in the transfer function $w(s)$. The function thus obtained is called the complex frequency response of the system.

In polar coordinates the function $w(j\omega)$ has the form

w(j\omega) = A(\omega)\,e^{j\phi(\omega)}, \qquad (5.7)

where $A(\omega) = |w(j\omega)|$ is the amplitude frequency response, and $\phi(\omega) = \arg w(j\omega)$ is the phase frequency response. Moreover, the complex function $w(j\omega)$ can be written in the form

w(j\omega) = P(\omega) + jQ(\omega), \qquad (5.8)

where $P(\omega) = \mathrm{Re}\,w(j\omega)$ and $Q(\omega) = \mathrm{Im}\,w(j\omega)$.

DEFINITION 5.4  The function $P(\omega)$ is called the real frequency response, and the function $Q(\omega)$ the imaginary frequency response.

The functions $A(\omega)$, $\phi(\omega)$, $P(\omega)$, and $Q(\omega)$ are connected by the following relations:

P(\omega) = A(\omega)\cos\phi(\omega), \qquad Q(\omega) = A(\omega)\sin\phi(\omega),
A(\omega) = \sqrt{P^2(\omega)+Q^2(\omega)}, \qquad \phi(\omega) = \arctan\frac{Q(\omega)}{P(\omega)}. \qquad (5.9)

If the transfer function of a system has the form

w(s) = \frac{\sum_{i=0}^{g} b_i s^{g-i}}{\sum_{i=0}^{n} a_i s^{n-i}},

the following relations hold:  A(ω) =

a2 (ω) + b2 (ω) , c2 (ω) + d2 (ω)

φ(ω) = arctan

b(ω)c(ω) − a(ω)d(ω) , a(ω)c(ω) + b(ω)d(ω)

(5.10)

(5.11)

P (ω) =

a(ω)c(ω) + b(ω)d(ω) , c2 (ω) + d2 (ω)

(5.12)

Q(ω) =

b(ω)c(ω) − a(ω)d(ω) , c2 (ω) + d2 (ω)

(5.13)

where a(ω) = b(ω) = c(ω) = d(ω) =

 i  i  i 

(−1)i bg−2i ω 2i , (−1)i bg−2i−1 ω 2i+1 , (−1)i an−2i ω 2i ,

(5.14)

(−1)i an−2i−1 ω 2i+1 .

i
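The decomposition (5.14) simply splits the numerator and denominator polynomials, evaluated at $s = j\omega$, into their real and imaginary parts. The following sketch (the polynomial coefficients are arbitrary illustrative values, not from the book) verifies (5.10)–(5.13) against direct complex arithmetic.

```python
# Numerical check of (5.10)-(5.14): A, P, Q assembled from a(w), b(w), c(w),
# d(w) must match the direct complex evaluation of w(j*omega).
def re_im_at_jw(coeffs, omega):
    # real/imag parts of sum_i coeffs[i]*(j*omega)^(m-i); these are exactly
    # the pairs (a, b) or (c, d) of (5.14)
    m = len(coeffs) - 1
    val = sum(ci * (1j*omega)**(m - i) for i, ci in enumerate(coeffs))
    return val.real, val.imag

num = [2.0, 3.0, 1.0]            # numerator  b0*s^2 + b1*s + b2
den = [1.0, 0.8, 2.5, 1.0]       # denominator a0*s^3 + ... + a3
omega = 1.3

a, b = re_im_at_jw(num, omega)
c, d = re_im_at_jw(den, omega)
A = ((a*a + b*b) / (c*c + d*d)) ** 0.5      # (5.10)
P = (a*c + b*d) / (c*c + d*d)               # (5.12)
Q = (b*c - a*d) / (c*c + d*d)               # (5.13)

w = (sum(x*(1j*omega)**(2 - i) for i, x in enumerate(num)) /
     sum(x*(1j*omega)**(3 - i) for i, x in enumerate(den)))
assert abs(A - abs(w)) < 1e-12
assert abs(P - w.real) < 1e-12 and abs(Q - w.imag) < 1e-12
```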

DEFINITION 5.5 If all poles and zeros of a system are located in the left half of the complex plane, such a system is called minimal-phase.

For minimal-phase systems there are one-to-one relations between the functions $P(\omega)$ and $Q(\omega)$, as well as between $A(\omega)$ and $\phi(\omega)$ [100]:

$$P(\omega) = -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{Q(\tau)}{\tau-\omega}\,d\tau, \qquad (5.15)$$

$$Q(\omega) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{P(\tau)}{\tau-\omega}\,d\tau, \qquad (5.16)$$

$$\ln A(\omega) = -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\phi(\tau)}{\tau-\omega}\,d\tau, \qquad (5.17)$$

$$\phi(\omega) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\ln A(\tau)}{\tau-\omega}\,d\tau. \qquad (5.18)$$

For stable systems the complex frequency response can be obtained via the weight function by the formula

$$w(j\omega) = \int_0^\infty h(\tau)e^{-j\omega\tau}\,d\tau. \qquad (5.19)$$

Moreover, the weight function is connected with the other frequency responses. As a special case, Equation (5.19) yields

$$P(\omega) = \int_0^\infty h(\tau)\cos\omega\tau\,d\tau, \qquad Q(\omega) = -\int_0^\infty h(\tau)\sin\omega\tau\,d\tau. \qquad (5.20)$$

Denote the frequency responses of the open-loop system by $A_0(\omega)$, $\phi_0(\omega)$, $P_0(\omega)$, and $Q_0(\omega)$. Then, it can be shown that for a closed-loop system with transfer function

$$w(s) = \frac{w_0(s)}{1+w_0(s)} \qquad (5.21)$$

the following relations hold:

$$A(\omega) = \sqrt{\frac{P_0^2(\omega)+Q_0^2(\omega)}{[1+P_0(\omega)]^2+Q_0^2(\omega)}}, \qquad \phi(\omega) = \arctan\frac{Q_0(\omega)}{P_0(\omega)[1+P_0(\omega)]+Q_0^2(\omega)},$$

$$A(\omega) = \frac{A_0(\omega)}{\sqrt{A_0^2(\omega)+1+2A_0(\omega)\cos\phi_0(\omega)}}, \qquad \phi(\omega) = \arctan\frac{\sin\phi_0(\omega)}{A_0(\omega)+\cos\phi_0(\omega)},$$

$$P(\omega) = \frac{P_0(\omega)[1+P_0(\omega)]+Q_0^2(\omega)}{[1+P_0(\omega)]^2+Q_0^2(\omega)}, \qquad Q(\omega) = \frac{Q_0(\omega)}{[1+P_0(\omega)]^2+Q_0^2(\omega)}. \qquad (5.22)$$
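The rational forms in (5.22) follow from multiplying $w_0/(1+w_0)$ by the conjugate of the denominator. A minimal numerical sketch (the sample value of $w_0(j\omega)$ is an arbitrary assumption):

```python
# Check the closed-loop relations (5.22): build P, Q, A of w0/(1 + w0) from
# the open-loop P0, Q0 and compare with direct complex arithmetic.
w0 = 0.6 - 1.1j                      # an arbitrary open-loop value w0(j*omega)
P0, Q0 = w0.real, w0.imag

den = (1 + P0)**2 + Q0**2
P = (P0*(1 + P0) + Q0**2) / den
Q = Q0 / den
A = ((P0**2 + Q0**2) / den) ** 0.5

w = w0 / (1 + w0)                    # closed-loop frequency response
assert abs(P - w.real) < 1e-12
assert abs(Q - w.imag) < 1e-12
assert abs(A - abs(w)) < 1e-12
```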


In many cases the coefficients of the numerator and denominator of the transfer function depend linearly on the variable parameter. Then, such a transfer function can be represented in the form of a linear fractional function as

$$w(s) = \frac{b_1(s)+\alpha b_2(s)}{a_1(s)+\alpha a_2(s)}, \qquad (5.23)$$

where the polynomials $a_1(s)$, $a_2(s)$, $b_1(s)$, and $b_2(s)$ are independent of $\alpha$. For a closed-loop system with transfer function (5.21) and

$$w_0(s) = \frac{b_{01}(s)+\alpha b_{02}(s)}{a_{01}(s)+\alpha a_{02}(s)}, \qquad (5.24)$$

we have

$$w(s) = \frac{b_{01}(s)+\alpha b_{02}(s)}{\tilde a_1(s)+\alpha\tilde a_2(s)}, \qquad (5.25)$$

where

$$\tilde a_1(s) = b_{01}(s)+a_{01}(s), \qquad \tilde a_2(s) = b_{02}(s)+a_{02}(s).$$

Among the frequency-domain quality indices of linear systems with constant coefficients we should also mention the oscillation index, the phase stability margin, the equivalent bandwidth of the closed-loop system, and so on.

DEFINITION 5.6 The oscillation index of a system is the ratio of the maximal value of the amplitude frequency response of the closed-loop system to its value at the frequency $\omega = 0$:

$$M = \frac{A(\omega_{\max})}{A(0)}, \qquad (5.26)$$

where $\omega_{\max}$ is the frequency for which $A(\omega)$ reaches its maximum. For a system with transfer function (5.21) we have $A(0) = 1$, and

$$M = \left|\frac{w_0(j\omega_{\max})}{1+w_0(j\omega_{\max})}\right| \qquad (5.27)$$

or

$$M = \sqrt{\frac{P_0^2(\omega_{\max})+Q_0^2(\omega_{\max})}{[1+P_0(\omega_{\max})]^2+Q_0^2(\omega_{\max})}}. \qquad (5.28)$$
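A small numerical sketch of Definition 5.6 (the plant $w_0(s) = k/[s(Ts+1)]$ and the frequency grid are illustrative assumptions; for this astatic plant $A(0) = 1$ indeed holds, so $M$ reduces to the resonance peak of $A(\omega)$):

```python
# Oscillation index (5.26) for the closed loop w = w0/(1 + w0)
# with the illustrative plant w0(s) = k/(s*(T*s + 1)).
def A_closed(omega, k=2.0, T=0.5):
    s = 1j*omega
    w0 = k / (s*(T*s + 1))
    return abs(w0 / (1 + w0))

omegas = [0.01*i for i in range(1, 2000)]
M = max(A_closed(w) for w in omegas)
assert M >= 1.0                      # the peak cannot lie below A(0) = 1
assert A_closed(0.01) < M + 1e-12
```

For these parameter values the closed loop is the standard second-order system $4/(s^2+2s+4)$, and the computed peak agrees with the textbook value $M = 1/(2\zeta\sqrt{1-\zeta^2})$ for $\zeta = 0.5$.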

DEFINITION 5.7 The phase stability margin is the value $\mu$ given by

$$\mu = 180^\circ+\psi_1, \qquad (5.29)$$

where $\psi_1$ is the argument of the open-loop frequency response $w_0(j\omega)$ at the frequency for which $|w_0(j\omega)| = A_0(\omega) = 1$.

DEFINITION 5.8 The equivalent bandwidth $\omega_e$ of the closed-loop system is defined by the formula

$$\omega_e = \int_0^\infty A^2(\omega)\,d\omega.$$

For the system with transfer function (5.21) we have

$$\omega_e = \int_0^\infty\frac{P_0^2(\omega)+Q_0^2(\omega)}{[1+P_0(\omega)]^2+Q_0^2(\omega)}\,d\omega.$$

5.1.3 Relations between Sensitivity Functions of Frequency Characteristics

As was shown in Section 5.1.1, if the sensitivity function of a time-domain characteristic exists, for instance that of the weight function of a linear system, then in many cases the sensitivity function of the transfer function exists as well. In control theory the frequency response $w(j\omega)$ is formally obtained from the transfer function by setting $s = j\omega$. Obviously, in this case (5.19) holds, and the sensitivity function of the frequency response $\partial w(j\omega)/\partial\alpha$ exists and is given by

$$\frac{\partial w(j\omega)}{\partial\alpha} = \int_0^\infty\frac{\partial h(t)}{\partial\alpha}e^{-j\omega t}\,dt.$$

The real and imaginary frequency responses are connected, in their turn, with the weight function $h(t)$ by the integral relations (5.20). Let the weight function $h(t)$ satisfy the conditions for differentiability under the integral sign [114], i.e., $h(t,\alpha)$ is continuous in the domain $0\le t<\infty$, $\alpha_1<\alpha<\alpha_2$, and the sensitivity function $\partial h(t)/\partial\alpha$ exists and is continuous with respect to $t$ and $\alpha$. Then the sensitivity functions $\partial P(\omega)/\partial\alpha$ and $\partial Q(\omega)/\partial\alpha$ exist and are given by

$$\frac{\partial P(\omega)}{\partial\alpha} = \int_0^\infty\frac{\partial h(t,\alpha)}{\partial\alpha}\cos\omega t\,dt, \qquad \frac{\partial Q(\omega)}{\partial\alpha} = -\int_0^\infty\frac{\partial h(t,\alpha)}{\partial\alpha}\sin\omega t\,dt.$$

The other frequency characteristics and indices considered in the previous paragraph are functionally connected with the real and imaginary frequency responses by relations of the forms (5.9), (5.22), (5.28) and (5.29). Since the right sides of these relations have derivatives with respect to $P$ and $Q$, all the corresponding sensitivity functions and coefficients exist. This fact makes it possible to differentiate the frequency responses under consideration with respect to the parameter $\alpha$. Differentiating the polar representation (5.7) with respect to $\alpha$, we obtain

$$\frac{\partial w(j\omega)}{\partial\alpha} = \left[\frac{\partial A(\omega)}{\partial\alpha}+jA(\omega)\frac{\partial\phi(\omega)}{\partial\alpha}\right]e^{j\phi(\omega)} = w(j\omega)\left[\frac{\partial\ln A(\omega)}{\partial\alpha}+j\frac{\partial\phi(\omega)}{\partial\alpha}\right]$$

or

$$\frac{\partial\ln w(j\omega)}{\partial\alpha} = \frac{\partial\ln A(\omega)}{\partial\alpha}+j\frac{\partial\phi(\omega)}{\partial\alpha}. \qquad (5.30)$$

The above equations define the relations between the sensitivity function of the complex phase-amplitude frequency response and the sensitivity functions of the amplitude and phase responses. The sensitivity function $\partial w(j\omega)/\partial\alpha$ can be represented in polar coordinates as

$$\frac{\partial w(j\omega)}{\partial\alpha} = H(\omega)e^{j\psi(\omega)},$$

where

$$H(\omega) = \sqrt{\left(\frac{\partial A(\omega)}{\partial\alpha}\right)^2+A^2(\omega)\left(\frac{\partial\phi(\omega)}{\partial\alpha}\right)^2},$$

$$\psi(\omega) = \arctan\frac{\dfrac{\partial A(\omega)}{\partial\alpha}\sin\phi(\omega)+A(\omega)\dfrac{\partial\phi(\omega)}{\partial\alpha}\cos\phi(\omega)}{\dfrac{\partial A(\omega)}{\partial\alpha}\cos\phi(\omega)-A(\omega)\dfrac{\partial\phi(\omega)}{\partial\alpha}\sin\phi(\omega)},$$


and in the form of a complex-valued function

$$\frac{\partial w(j\omega)}{\partial\alpha} = C(\omega)+jD(\omega),$$

where

$$C(\omega) = \frac{\partial A(\omega)}{\partial\alpha}\cos\phi(\omega)-A(\omega)\frac{\partial\phi(\omega)}{\partial\alpha}\sin\phi(\omega), \qquad D(\omega) = \frac{\partial A(\omega)}{\partial\alpha}\sin\phi(\omega)+A(\omega)\frac{\partial\phi(\omega)}{\partial\alpha}\cos\phi(\omega).$$

Notice that if $\partial\phi(\omega)/\partial\alpha = 0$, we have

$$H(\omega) = \frac{\partial A(\omega)}{\partial\alpha}, \qquad \psi(\omega) = \phi(\omega), \qquad \frac{\partial w(j\omega)}{\partial\alpha} = \frac{\partial A(\omega)}{\partial\alpha}e^{j\phi(\omega)}.$$

For the case $\partial A(\omega)/\partial\alpha = 0$, which is, generally speaking, degenerate, it is easy to show that

$$H(\omega) = A(\omega)\frac{\partial\phi(\omega)}{\partial\alpha}, \qquad \psi(\omega) = \phi(\omega)-\frac{\pi}{2}, \qquad \frac{\partial w(j\omega)}{\partial\alpha} = A(\omega)\frac{\partial\phi(\omega)}{\partial\alpha}e^{j[\phi(\omega)-\pi/2]}.$$

In the case of transfer function (5.23) and the corresponding frequency response, for the sensitivity functions we have

$$\frac{\partial w(j\omega)}{\partial\alpha} = \frac{a_1(j\omega)b_2(j\omega)-a_2(j\omega)b_1(j\omega)}{[a_1(j\omega)+\alpha a_2(j\omega)]^2},$$

$$\frac{\partial\ln w(j\omega)}{\partial\alpha} = \frac{a_1(j\omega)b_2(j\omega)-a_2(j\omega)b_1(j\omega)}{[a_1(j\omega)+\alpha a_2(j\omega)][b_1(j\omega)+\alpha b_2(j\omega)]}.$$

Differentiating Equation (5.8), we obtain the following obvious relation:

$$\frac{\partial w(j\omega)}{\partial\alpha} = \frac{\partial P(\omega)}{\partial\alpha}+j\frac{\partial Q(\omega)}{\partial\alpha},$$

which relates the sensitivity of the phase-amplitude frequency response to the sensitivity of the real and imaginary frequency responses.
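The closed-form sensitivity of the linear-fractional transfer function (5.23) can be confirmed by a finite-difference test; in the sketch below the four polynomials are arbitrary illustrative choices.

```python
# Check the sensitivity of (5.23): dw/d alpha = (a1*b2 - a2*b1)/(a1 + alpha*a2)^2,
# evaluated at s = j*omega.
def polyval(c, s):
    # c = [c0, c1, ...] for c0*s^m + c1*s^(m-1) + ...
    return sum(ci * s**(len(c) - 1 - i) for i, ci in enumerate(c))

b1, b2 = [1.0, 2.0], [0.5]              # illustrative polynomials
a1, a2 = [1.0, 0.3, 1.0], [1.0, 0.0]

def w(alpha, s):
    return ((polyval(b1, s) + alpha*polyval(b2, s)) /
            (polyval(a1, s) + alpha*polyval(a2, s)))

s, alpha, h = 1j*0.8, 0.7, 1e-6
analytic = ((polyval(a1, s)*polyval(b2, s) - polyval(a2, s)*polyval(b1, s)) /
            (polyval(a1, s) + alpha*polyval(a2, s))**2)
numeric = (w(alpha + h, s) - w(alpha - h, s)) / (2*h)
assert abs(analytic - numeric) < 1e-6
```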


Analogously, from (5.9) we can find the following relations:

$$\frac{\partial P(\omega)}{\partial\alpha} = \frac{\partial A(\omega)}{\partial\alpha}\cos\phi(\omega)-A(\omega)\frac{\partial\phi(\omega)}{\partial\alpha}\sin\phi(\omega), \qquad \frac{\partial\ln P(\omega)}{\partial\alpha} = \frac{\partial\ln A(\omega)}{\partial\alpha}-\frac{\partial\phi(\omega)}{\partial\alpha}\tan\phi(\omega);$$

$$\frac{\partial Q(\omega)}{\partial\alpha} = \frac{\partial A(\omega)}{\partial\alpha}\sin\phi(\omega)+A(\omega)\frac{\partial\phi(\omega)}{\partial\alpha}\cos\phi(\omega), \qquad \frac{\partial\ln Q(\omega)}{\partial\alpha} = \frac{\partial\ln A(\omega)}{\partial\alpha}+\frac{\partial\phi(\omega)}{\partial\alpha}\cot\phi(\omega);$$

$$\frac{\partial A(\omega)}{\partial\alpha} = \frac{\partial P(\omega)}{\partial\alpha}\cos\phi(\omega)+\frac{\partial Q(\omega)}{\partial\alpha}\sin\phi(\omega), \qquad \frac{\partial\phi(\omega)}{\partial\alpha} = \left[\frac{\partial\ln Q(\omega)}{\partial\alpha}-\frac{\partial\ln P(\omega)}{\partial\alpha}\right]\cos\phi(\omega)\sin\phi(\omega). \qquad (5.31)$$

5.1.4 Universal Algorithm for Determination of Sensitivity Functions for Frequency Characteristics

Any of the aforementioned frequency responses can be considered a composite function with respect to the parameter $\alpha$, i.e.,

$$f(\omega) = f[\omega, a_0(\alpha), a_1(\alpha), \ldots, a_n(\alpha), b_0(\alpha), b_1(\alpha), \ldots, b_g(\alpha)],$$

where $f(\omega)$ is a particular frequency response, and $a_i(\alpha)$ and $b_i(\alpha)$ are the coefficients of the transfer function. Then, according to the rule of differentiation of composite functions, for the desired sensitivity function we have

$$\frac{\partial f}{\partial\alpha} = \sum_{i=0}^{n}\frac{\partial f}{\partial a_i}\frac{da_i}{d\alpha}+\sum_{i=0}^{g}\frac{\partial f}{\partial b_i}\frac{db_i}{d\alpha}. \qquad (5.32)$$

In this formula the form of the terms $\partial f/\partial a_i$ and $\partial f/\partial b_i$ is defined only by the structure of the transfer function (i.e., by the values $n$ and $g$). Expressions for these functions can be constructed in advance in the form of general universal formulas or in the form of tables for various values of $n$ and $g$ [69].

Consider, for example, the amplitude frequency response $A(\omega)$ given by (5.10). Since

$$\frac{\partial A^2(\omega)}{\partial\alpha} = 2A(\omega)\frac{\partial A(\omega)}{\partial\alpha}$$

and

$$\frac{\partial A(\omega)}{\partial\alpha} = \frac{1}{2A(\omega)}\frac{\partial A^2(\omega)}{\partial\alpha}, \qquad (5.33)$$

we first obtain formulas for $\partial A^2(\omega)/\partial\alpha$, and then find the desired values by (5.33). Using this approach, we introduce a new composite function that greatly simplifies calculation of the sensitivity function. This new function has the form

$$A(\omega) = \tilde f\{A^2[\omega, a_i, b_j]\}.$$

As a result, the desired sensitivity function can be found by the formula

$$\frac{\partial A(\omega)}{\partial\alpha} = \frac{\partial\tilde f}{\partial A^2}\left[\sum_{i=0}^{n}\frac{\partial A^2(\omega)}{\partial a_i}\frac{da_i}{d\alpha}+\sum_{i=0}^{g}\frac{\partial A^2(\omega)}{\partial b_i}\frac{db_i}{d\alpha}\right].$$

In the given case,

$$A(\omega) = \sqrt{A^2(\omega)}, \qquad \frac{\partial\tilde f}{\partial A^2} = \frac{1}{2A}.$$

To find the values $\partial A^2(\omega)/\partial a_i$ and $\partial A^2(\omega)/\partial b_j$ we use the formula

$$A^2(\omega) = \frac{a^2(\omega)+b^2(\omega)}{c^2(\omega)+d^2(\omega)}.$$

Differentiating this function with respect to $a_i$ and $b_j$ yields

$$\frac{\partial A^2(\omega)}{\partial a_k} = (-1)^{\frac{n-k+2}{2}}\,\frac{2c(\omega)\omega^{n-k}A^2(\omega)}{c^2(\omega)+d^2(\omega)}, \qquad n-k = 2i, \quad i = 0, 1, \ldots, \qquad (5.34)$$

$$\frac{\partial A^2(\omega)}{\partial a_k} = (-1)^{\frac{n-k+1}{2}}\,\frac{2d(\omega)\omega^{n-k}A^2(\omega)}{c^2(\omega)+d^2(\omega)}, \qquad n-k = 2i+1, \quad i = 0, 1, \ldots, \qquad (5.35)$$

$$\frac{\partial A^2(\omega)}{\partial b_k} = (-1)^{\frac{g-k}{2}}\,\frac{2a(\omega)\omega^{g-k}}{c^2(\omega)+d^2(\omega)}, \qquad g-k = 2i, \quad i = 0, 1, \ldots, \qquad (5.36)$$

$$\frac{\partial A^2(\omega)}{\partial b_k} = (-1)^{\frac{g-k-1}{2}}\,\frac{2b(\omega)\omega^{g-k}}{c^2(\omega)+d^2(\omega)}, \qquad g-k = 2i+1, \quad i = 0, 1, \ldots \qquad (5.37)$$

Multiplying the values $\partial A^2(\omega)/\partial a_i$ and $\partial A^2(\omega)/\partial b_j$ just obtained by $1/2A(\omega)$, we find the coefficients $\partial A(\omega)/\partial a_i$ and $\partial A(\omega)/\partial b_j$, respectively. The multipliers $da_i/d\alpha$ and $db_j/d\alpha$ are determined by the dependence of the transfer function on the parameter and must be found separately for each concrete system. Thus, the sensitivity function $\partial A(\omega)/\partial\alpha$ can be found in three stages.

1. First, by (5.34)–(5.37), the values $\partial A^2(\omega)/\partial a_i$ and $\partial A^2(\omega)/\partial b_j$ are obtained.

2. Then, using (5.33) and the known amplitude frequency response $A(\omega)$, we calculate $\partial A(\omega)/\partial a_i$ and $\partial A(\omega)/\partial b_j$.

3. Finally, the desired sensitivity functions are obtained after evaluation of $da_i/d\alpha$ and $db_j/d\alpha$.

To find the sensitivity function $\partial\phi(\omega)/\partial\alpha$ we employ the same technique. First we find the corresponding derivatives of the argument $z$ of the arctangent in (5.11):

$$z(\omega) = \frac{b(\omega)c(\omega)-a(\omega)d(\omega)}{a(\omega)c(\omega)+b(\omega)d(\omega)}.$$

Then, by the formula

$$\frac{\partial\phi(\omega)}{\partial\alpha} = \frac{1}{1+z^2(\omega)}\left[\sum_{i=0}^{n}\frac{\partial z(\omega)}{\partial a_i}\frac{da_i}{d\alpha}+\sum_{i=0}^{g}\frac{\partial z(\omega)}{\partial b_i}\frac{db_i}{d\alpha}\right]$$

we obtain the desired sensitivity function. The values $\partial z(\omega)/\partial a_i$ and $\partial z(\omega)/\partial b_j$ are determined by formulas similar to (5.34)–(5.37). The sensitivities of the real $P(\omega)$ and imaginary $Q(\omega)$ frequency responses are found by the formulas

$$\frac{\partial P(\omega)}{\partial\alpha} = \sum_{i=0}^{n}\frac{\partial P(\omega)}{\partial a_i}\frac{da_i}{d\alpha}+\sum_{i=0}^{g}\frac{\partial P(\omega)}{\partial b_i}\frac{db_i}{d\alpha}, \qquad \frac{\partial Q(\omega)}{\partial\alpha} = \sum_{i=0}^{n}\frac{\partial Q(\omega)}{\partial a_i}\frac{da_i}{d\alpha}+\sum_{i=0}^{g}\frac{\partial Q(\omega)}{\partial b_i}\frac{db_i}{d\alpha}.$$

For the cofactors $\partial P(\omega)/\partial a_i$, $\partial P(\omega)/\partial b_j$, $\partial Q(\omega)/\partial a_i$, and $\partial Q(\omega)/\partial b_j$ universal formulas can also be obtained for different $g$ and $n$.


If we have the transfer function (5.23), Equation (5.32) for the sensitivity function gets simplified. Let the orders of the polynomials $b_2(s)$ and $a_2(s)$ in (5.23) be equal to $g_2$ and $n_2$, respectively. Then the first sum on the right side of (5.32) includes $n_2$ or $n_2+1$ terms, while the second sum contains $g_2$ or $g_2+1$ terms. Moreover, the following relation holds:

$$\frac{\partial f}{\partial\ln\alpha} = \sum_i^N\frac{\partial f}{\partial\ln a_i}+\sum_i^M\frac{\partial f}{\partial\ln b_i},$$

where $N$ and $M$ are the numbers of terms in the corresponding sums.

As an example, we consider a system with transfer function

$$w(s) = \frac{1+T_1s}{1+T_2s}.$$

For this transfer function we have

$$a(\omega) = b_1 = 1, \quad b(\omega) = b_0\omega = T_1\omega, \quad c(\omega) = a_1 = 1, \quad d(\omega) = a_0\omega = T_2\omega,$$

$$A(\omega) = \sqrt{\frac{1+T_1^2\omega^2}{1+T_2^2\omega^2}}, \quad \phi(\omega) = \arctan\frac{(T_1-T_2)\omega}{1+T_1T_2\omega^2}, \quad P(\omega) = \frac{1+T_1T_2\omega^2}{1+T_2^2\omega^2}, \quad Q(\omega) = \frac{(T_1-T_2)\omega}{1+T_2^2\omega^2}, \quad z(\omega) = \frac{(T_1-T_2)\omega}{1+T_1T_2\omega^2}.$$

To find $\partial A^2(\omega)/\partial T_1 = \partial A^2(\omega)/\partial b_0$ we employ Formula (5.37), because $g = 1$, $k = 0$, and $g-k = 1$. Then,

$$\frac{\partial A^2(\omega)}{\partial T_1} = \frac{2T_1\omega^2}{1+T_2^2\omega^2}.$$

Since

$$\frac{\partial A(\omega)}{\partial T_1} = \frac{\partial A(\omega)}{\partial b_0} = \frac{1}{2A(\omega)}\frac{\partial A^2(\omega)}{\partial T_1},$$

we have

$$\frac{\partial A(\omega)}{\partial T_1} = \frac{T_1\omega^2}{\sqrt{(1+T_1^2\omega^2)(1+T_2^2\omega^2)}}.$$

The value $\partial A^2(\omega)/\partial T_2 = \partial A^2(\omega)/\partial a_0$ can be found by (5.35), because $n-k = 1$:

$$\frac{\partial A^2(\omega)}{\partial T_2} = -\frac{2T_2\omega^2A^2(\omega)}{1+T_2^2\omega^2}.$$

For $\partial A(\omega)/\partial T_2$ we have

$$\frac{\partial A(\omega)}{\partial T_2} = \frac{1}{2A(\omega)}\frac{\partial A^2(\omega)}{\partial T_2} = -\frac{\omega^2T_2A(\omega)}{1+T_2^2\omega^2}.$$

Using the corresponding formulas for the phase, real, and imaginary frequency responses, we obtain

$$\frac{\partial z(\omega)}{\partial T_1} = \frac{\omega}{P(\omega)(1+T_1T_2\omega^2)}, \qquad \frac{\partial z(\omega)}{\partial T_2} = -\frac{\omega A^2(\omega)}{P(\omega)(1+T_1T_2\omega^2)},$$

$$\frac{\partial P(\omega)}{\partial T_1} = \frac{\omega^2T_2}{1+T_2^2\omega^2}, \qquad \frac{\partial Q(\omega)}{\partial T_1} = \frac{\omega}{1+T_2^2\omega^2},$$

$$\frac{\partial P(\omega)}{\partial T_2} = \frac{\omega[Q(\omega)-\omega T_2P(\omega)]}{1+T_2^2\omega^2}, \qquad \frac{\partial Q(\omega)}{\partial T_2} = -\frac{\omega[P(\omega)+\omega T_2Q(\omega)]}{1+T_2^2\omega^2}.$$
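The amplitude sensitivities of this worked example can be verified directly. The following sketch (function names and the test point are our own choices) compares the closed-form expressions with central finite differences.

```python
# Finite-difference check of the worked example: dA/dT1 and dA/dT2 for
# w(s) = (1 + T1*s)/(1 + T2*s).
def A(omega, T1, T2):
    return ((1 + (T1*omega)**2) / (1 + (T2*omega)**2)) ** 0.5

def dA_dT1(omega, T1, T2):
    return T1*omega**2 / ((1 + (T1*omega)**2) * (1 + (T2*omega)**2)) ** 0.5

def dA_dT2(omega, T1, T2):
    return -omega**2 * T2 * A(omega, T1, T2) / (1 + (T2*omega)**2)

omega, T1, T2, h = 2.0, 0.7, 0.3, 1e-6
fd1 = (A(omega, T1 + h, T2) - A(omega, T1 - h, T2)) / (2*h)
fd2 = (A(omega, T1, T2 + h) - A(omega, T1, T2 - h)) / (2*h)
assert abs(dA_dT1(omega, T1, T2) - fd1) < 1e-6
assert abs(dA_dT2(omega, T1, T2) - fd2) < 1e-6
```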

5.1.5 Sensitivity Functions for Frequency Characteristics of Minimal-Phase Systems

As was already noted, in this case there are one-to-one relations between some frequency responses, defined by (5.15)–(5.18). Assuming that the conditions of Theorem 5.1 hold, we differentiate both sides of (5.15)–(5.18) with respect to the parameter $\alpha$. As a result, we obtain the following formulas defining the relations between the sensitivity functions of the amplitude, phase, real, and imaginary frequency characteristics:

$$\frac{\partial P(\omega)}{\partial\alpha} = -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\partial Q(\tau)/\partial\alpha}{\tau-\omega}\,d\tau, \qquad \frac{\partial Q(\omega)}{\partial\alpha} = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\partial P(\tau)/\partial\alpha}{\tau-\omega}\,d\tau,$$

$$\frac{\partial A(\omega)}{\partial\alpha} = -\frac{A(\omega)}{\pi}\int_{-\infty}^{\infty}\frac{\partial\phi(\tau)/\partial\alpha}{\tau-\omega}\,d\tau, \qquad \frac{\partial\phi(\omega)}{\partial\alpha} = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\partial\ln A(\tau)/\partial\alpha}{\tau-\omega}\,d\tau. \qquad (5.38)$$

5.1.6 Relations between Sensitivity Functions of Time and Frequency Characteristics

Let us differentiate, with respect to the parameter $\alpha$, Equation (5.19), which connects the complex frequency response $w(j\omega)$ and the weight function $h(t)$:

$$\frac{\partial w(j\omega)}{\partial\alpha} = \int_0^\infty\frac{\partial h(\tau)}{\partial\alpha}e^{-j\omega\tau}\,d\tau. \qquad (5.39)$$

Using the above relation, from a known sensitivity function of the weight function we can find the sensitivity function of the frequency response. Differentiating Equations (5.20), we find the following formulas, which define the relations between the sensitivity function of the weight function and those of the real and imaginary frequency characteristics:

$$\frac{\partial P(\omega)}{\partial\alpha} = \int_0^\infty\frac{\partial h(\tau)}{\partial\alpha}\cos\omega\tau\,d\tau, \qquad \frac{\partial Q(\omega)}{\partial\alpha} = -\int_0^\infty\frac{\partial h(\tau)}{\partial\alpha}\sin\omega\tau\,d\tau.$$

Now let us use the sensitivity function $\partial h(t)/\partial\alpha$ together with the dependencies

$$A^2(\omega) = \left[\int_0^\infty h(t)\cos\omega t\,dt\right]^2+\left[\int_0^\infty h(t)\sin\omega t\,dt\right]^2, \qquad \phi(\omega) = \arctan\frac{-\int_0^\infty h(t)\sin\omega t\,dt}{\int_0^\infty h(t)\cos\omega t\,dt}.$$

By differentiation with respect to the parameter $\alpha$ we can then obtain the sensitivity functions of the amplitude and phase frequency responses.
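Relation (5.39) can be illustrated for a plant whose weight function is known in closed form. The sketch below uses the first-order lag $w(s) = 1/(1+Ts)$, for which $h(t) = (1/T)e^{-t/T}$; the numerical quadrature scheme and step sizes are our own choices.

```python
# Numerical illustration of (5.39) for w(s) = 1/(1 + T*s):
# h(t) = (1/T)*exp(-t/T), so dh/dT = exp(-t/T)*(t - T)/T**3, and its Fourier
# transform over [0, inf) should equal dw/dT = -j*omega/(1 + j*omega*T)**2.
import math

T, omega = 0.8, 1.5
dt, t_max = 1e-3, 20*T
total = 0.0 + 0.0j
t = 0.0
while t < t_max:                       # composite midpoint rule
    tm = t + dt/2
    dh = math.exp(-tm/T)*(tm - T)/T**3
    total += dh*complex(math.cos(omega*tm), -math.sin(omega*tm))*dt
    t += dt

analytic = -1j*omega/(1 + 1j*omega*T)**2
assert abs(total - analytic) < 1e-4
```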

5.1.7 Relations between Sensitivity Functions of Open-Loop and Closed-Loop Systems

Relations between the frequency responses of closed-loop and open-loop systems are given by Formulas (5.22). Assume that we know the sensitivity functions of the open-loop system frequency responses. Under these conditions, let us obtain equations for the corresponding sensitivity functions of the closed-loop system. With this aim in view, we differentiate (5.22) with respect to the parameter $\alpha$. As a result, we have

$$\frac{\partial A(\omega)}{\partial\alpha} = \frac{\left[P_0(\omega)(1+P_0(\omega))-Q_0^2(\omega)\right]\dfrac{\partial P_0(\omega)}{\partial\alpha}+Q_1(\omega)\dfrac{\partial Q_0(\omega)}{\partial\alpha}}{A(\omega)\left\{[1+P_0(\omega)]^2+Q_0^2(\omega)\right\}^2},$$

$$\frac{\partial\phi(\omega)}{\partial\alpha} = \frac{\left[P_0(\omega)(1+P_0(\omega))-Q_0^2(\omega)\right]\dfrac{\partial Q_0(\omega)}{\partial\alpha}-Q_1(\omega)\dfrac{\partial P_0(\omega)}{\partial\alpha}}{\left\{P_0(\omega)[1+P_0(\omega)]+Q_0^2(\omega)\right\}^2+Q_0^2(\omega)},$$

$$\frac{\partial A(\omega)}{\partial\alpha} = \frac{\left[1+A_0(\omega)\cos\phi_0(\omega)\right]\dfrac{\partial A_0(\omega)}{\partial\alpha}+A_0^2(\omega)\sin\phi_0(\omega)\dfrac{\partial\phi_0(\omega)}{\partial\alpha}}{\left[A_0^2(\omega)+1+2A_0(\omega)\cos\phi_0(\omega)\right]^{3/2}},$$

and so on, where $Q_1(\omega) = Q_0(\omega)[1+2P_0(\omega)]$.
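The first of these differentiated relations can be checked numerically; in the sketch below the dependence $w_0 = \alpha g_0$ at a fixed frequency is an illustrative assumption.

```python
# Check the first relation of Section 5.1.7: dA/d alpha for the closed loop,
# built from P0, Q0 and their derivatives, versus a finite difference.
g0 = 0.4 - 0.9j                      # arbitrary fixed open-loop value / alpha

def parts(alpha):
    w0 = alpha*g0
    return w0.real, w0.imag

def A_closed(alpha):
    w0 = alpha*g0
    return abs(w0 / (1 + w0))

alpha, h = 1.3, 1e-6
P0, Q0 = parts(alpha)
dP0 = (parts(alpha + h)[0] - parts(alpha - h)[0]) / (2*h)
dQ0 = (parts(alpha + h)[1] - parts(alpha - h)[1]) / (2*h)

Q1 = Q0*(1 + 2*P0)
den = (1 + P0)**2 + Q0**2
analytic = ((P0*(1 + P0) - Q0**2)*dP0 + Q1*dQ0) / (A_closed(alpha)*den**2)
numeric = (A_closed(alpha + h) - A_closed(alpha - h)) / (2*h)
assert abs(analytic - numeric) < 1e-6
```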

5.1.8 Sensitivity of Frequency-Domain Quality Indices

To determine the sensitivity coefficient of the oscillation index, let us differentiate (5.26) with respect to the parameter $\alpha$. Taking into account that the value $\omega_{\max}$ also depends on $\alpha$, we find

$$\frac{\partial M}{\partial\alpha} = \frac{1}{A^2(0)}\left\{\left[\frac{\partial A(\omega_{\max})}{\partial\omega_{\max}}\frac{\partial\omega_{\max}}{\partial\alpha}+\frac{\partial A(\omega_{\max})}{\partial\alpha}\right]A(0)-A(\omega_{\max})\frac{\partial A(0)}{\partial\alpha}\right\}.$$

Due to the extremal property of $A(\omega)$ at $\omega_{\max}$, we have

$$\frac{\partial A(\omega_{\max})}{\partial\omega_{\max}} = 0.$$

Therefore,

$$\frac{\partial M}{\partial\alpha} = \frac{1}{A^2(0)}\left[A(0)\frac{\partial A(\omega_{\max})}{\partial\alpha}-A(\omega_{\max})\frac{\partial A(0)}{\partial\alpha}\right]. \qquad (5.40)$$

Similarly, using Formulas (5.27) and (5.28) we can find the relations between the sensitivity coefficient of the oscillation index and the sensitivity functions of the open-loop frequency responses. On the basis of (5.29), the sensitivity of the phase stability margin is estimated by the coefficient

$$\frac{\partial\mu}{\partial\alpha} = \frac{\partial\psi_1}{\partial\alpha}. \qquad (5.41)$$

For the sensitivity coefficient of the equivalent bandwidth $\omega_e$ we have

$$\frac{\partial\omega_e}{\partial\alpha} = 2\int_0^\infty A(\omega)\frac{\partial A(\omega)}{\partial\alpha}\,d\omega. \qquad (5.42)$$

To find the right sides of (5.40)–(5.42), the corresponding formulas for determination of the sensitivity functions of the frequency responses can be used.

5.2 Sensitivity of Poles and Zeros

5.2.1 General Case

DEFINITION 5.9 Zeros and poles of the transfer function

$$w(s) = \frac{b_0s^g+b_1s^{g-1}+\ldots+b_g}{a_0s^n+a_1s^{n-1}+\ldots+a_n} \qquad (5.43)$$

are the roots of its numerator and denominator, respectively.

Consider the equation

$$D(s,\alpha) = \sum_{i=0}^{n}c_i(\alpha)s^{n-i} = 0. \qquad (5.44)$$

Assume that the continuous derivatives $dc_i/d\alpha$ exist in a neighborhood of the point $\alpha = \alpha_0$. Moreover, let Equation (5.44) have, for $\alpha = \alpha_0$, $n$ different roots $p_1, \ldots, p_n$. Then, from the general properties of algebraic equations it follows that the roots $p_1, \ldots, p_n$ are continuously differentiable functions of the parameter $\alpha$. Moreover, by the theorem on differentiability of implicit functions [114], the following relation holds for the sensitivity function (coefficient) of a root $p_i$, the root sensitivity:

$$\frac{dp_i}{d\alpha} = -\left.\frac{\partial D(s,\alpha)/\partial\alpha}{\partial D(s,\alpha)/\partial s}\right|_{s=p_i,\,\alpha=\alpha_0} = -\left.\frac{\displaystyle\sum_{j=0}^{n}\frac{\partial D}{\partial c_j}\frac{dc_j}{d\alpha}}{\partial D/\partial s}\right|_{s=p_i,\,\alpha=\alpha_0}. \qquad (5.45)$$

Since

$$\frac{\partial D}{\partial c_k} = p_i^{n-k}, \quad k = 1, \ldots, n, \qquad \frac{\partial D}{\partial p_i} = \sum_{k=1}^{n-1}kc_{n-k}p_i^{k-1}+c_0np_i^{n-1},$$

we have

$$\frac{dp_i}{d\alpha} = -\left[\sum_{k=1}^{n-1}kc_{n-k}p_i^{k-1}+c_0np_i^{n-1}\right]^{-1}\sum_{k=1}^{n}\frac{dc_k}{d\alpha}p_i^{n-k}. \qquad (5.46)$$

Formula (5.46) makes it possible to find root sensitivity coefficients for real as well as complex roots. As an example, we consider the sensitivity coefficients of the roots of the following characteristic equation of an oscillatory unit:

$$D(s) = s^2+2\xi\omega_0s+\omega_0^2 = 0.$$

The roots of this equation are

$$p_{1,2} = -\xi\omega_0\pm j\omega_0\sqrt{1-\xi^2}.$$

Hence,

$$\frac{\partial p_{1,2}}{\partial\omega_0} = -\xi\pm j\sqrt{1-\xi^2}, \qquad \frac{\partial p_{1,2}}{\partial\xi} = -\omega_0\mp j\frac{\omega_0\xi}{\sqrt{1-\xi^2}}.$$

To employ Formula (5.46), we preliminarily find

$$\frac{da_2}{d\omega_0} = 2\omega_0, \qquad \frac{da_1}{d\omega_0} = 2\xi, \qquad \frac{da_2}{d\xi} = 0, \qquad \frac{da_1}{d\xi} = 2\omega_0,$$

and, finally,

$$\frac{\partial p_{1,2}}{\partial\omega_0} = -\frac{2\xi p_{1,2}+2\omega_0}{2\xi\omega_0+2p_{1,2}} = -\xi\pm j\sqrt{1-\xi^2}, \qquad \frac{\partial p_{1,2}}{\partial\xi} = -\frac{p_{1,2}}{\pm j\sqrt{1-\xi^2}} = -\omega_0\mp j\frac{\omega_0\xi}{\sqrt{1-\xi^2}}.$$
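The closed-form root sensitivities of the oscillatory unit can be checked against a finite difference of the explicit root formula (parameter values below are arbitrary):

```python
# Finite-difference check of the root sensitivities of s^2 + 2*xi*w0*s + w0^2
# (upper root p1): dp/dw0 = -xi + j*sqrt(1 - xi^2),
#                  dp/dxi = -w0 - j*w0*xi/sqrt(1 - xi^2).
import math

def p1(w0, xi):
    return -xi*w0 + 1j*w0*math.sqrt(1 - xi*xi)

w0, xi, h = 2.0, 0.3, 1e-6
r = math.sqrt(1 - xi*xi)

dp_dw0 = (p1(w0 + h, xi) - p1(w0 - h, xi)) / (2*h)
dp_dxi = (p1(w0, xi + h) - p1(w0, xi - h)) / (2*h)
assert abs(dp_dw0 - (-xi + 1j*r)) < 1e-6
assert abs(dp_dxi - (-w0 - 1j*w0*xi/r)) < 1e-6
```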

5.2.2 Sensitivity of the Roots of a Polynomial

In many cases the numerator and denominator polynomials of the transfer function depend linearly on a variable parameter. Then,

$$D(s) = D_1(s)+\alpha D_2(s),$$

where $D_1(s)$ and $D_2(s)$ are polynomials independent of the parameter $\alpha$. As in the previous paragraph, we assume that the equation $D(s) = 0$ has no multiple roots. Then, for a root $p_i$ we have

$$D_1(p_i)+\alpha D_2(p_i) = 0,$$

and Formula (5.45) yields

$$\frac{dp_i}{d\alpha} = -\frac{D_2(p_i)}{\dfrac{\partial D_1(p_i)}{\partial p_i}+\alpha\dfrac{\partial D_2(p_i)}{\partial p_i}}$$

or

$$\frac{dp_i}{d\alpha} = -\left.\frac{D_2(p_i)}{\partial D(s)/\partial s}\right|_{s=p_i}. \qquad (5.47)$$

For the oscillatory unit considered above we can write, with respect to the parameter $\xi$,

$$D(s) = D_1(s)+\xi D_2(s), \qquad D_1(s) = s^2+\omega_0^2, \qquad D_2(s) = 2\omega_0s.$$

As a result,

$$\frac{\partial p_i}{\partial\xi} = -\frac{\omega_0p_i}{p_i+\xi\omega_0}.$$
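Formula (5.47) is convenient for machine computation. A sketch with NumPy (the cubic polynomial below is an arbitrary illustrative choice, and the root ordering relies on the roots being well separated):

```python
# Check (5.47), dp/d alpha = -D2(p)/D'(p), for D(s) = D1(s) + alpha*D2(s)
# with D1 = s^3 + 2 s^2 + 2 s + 1 and D2 = s.
import numpy as np

D1 = np.array([1.0, 2.0, 2.0, 1.0])
D2 = np.array([1.0, 0.0])                  # the polynomial s

def roots(alpha):
    D = D1.copy()
    D[-2:] = D[-2:] + alpha*D2             # add alpha*s to the coefficients
    return np.sort_complex(np.roots(D))

alpha, h = 0.5, 1e-6
p = roots(alpha)
dp_numeric = (roots(alpha + h) - roots(alpha - h)) / (2*h)

D = D1.copy(); D[-2:] += alpha*D2
dD = np.polyder(D)
dp_analytic = -np.polyval(D2, p) / np.polyval(dD, p)
assert np.allclose(dp_numeric, dp_analytic, atol=1e-5)
```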

5.2.3 Sensitivity of Poles and Zeros for Open-Loop and Closed-Loop Systems

Let the transfer function $w(s)$ of a closed-loop system be defined in terms of the transfer function $w_0(s)$ of the corresponding open-loop system as

$$w(s) = \frac{w_0(s)}{1+w_0(s)}.$$

The transfer function $w_0(s)$ is given by

$$w_0(s) = k\,\frac{\prod_{i=1}^{g}(s-q_i)}{\prod_{i=1}^{n}(s-p_i)}.$$

Then,

$$w(s) = \frac{k\prod_{i=1}^{g}(s-q_i)}{\prod_{i=1}^{n}(s-p_i)+k\prod_{i=1}^{g}(s-q_i)}.$$

Simultaneously,

$$w(s) = \frac{k\prod_{i=1}^{g}(s-q_i)}{\prod_{i=1}^{n}(s-z_i)}, \qquad (5.48)$$

where $z_i$ $(i = 1, \ldots, n)$ are the poles of the closed-loop transfer function. From Equation (5.48) it follows that the transfer functions of the closed-loop and open-loop systems have the same zeros. Obviously,

$$\prod_{i=1}^{n}(z_j-p_i)+k\prod_{i=1}^{g}(z_j-q_i) = 0.$$

Then, it is easy to show that

$$\frac{\partial z_j}{\partial\alpha} = \left[\sum_{i=1}^{n}\frac{1}{z_j-p_i}\frac{\partial p_i}{\partial\alpha}+w_0(z_j)\left(\sum_{i=1}^{g}\frac{1}{z_j-q_i}\frac{\partial q_i}{\partial\alpha}-\frac{\partial\ln k}{\partial\alpha}\right)\right]\left[\sum_{i=1}^{n}\frac{1}{z_j-p_i}+w_0(z_j)\sum_{i=1}^{g}\frac{1}{z_j-q_i}\right]^{-1}. \qquad (5.49)$$

Example 5.1
Let

$$w_0(s) = \frac{s+b}{s+a}.$$

Then,

$$w(s) = \frac{s+b}{2s+a+b} = \frac{1}{2}\,\frac{s+b}{s+\frac{a+b}{2}}, \qquad p_1 = -a, \quad q_1 = -b, \quad z_1 = -\frac{a+b}{2}.$$

Obviously,

$$\frac{\partial p_1}{\partial a} = -1, \qquad \frac{\partial z_1}{\partial a} = -\frac{1}{2}.$$

To find $\partial z_1/\partial a$ we use Formula (5.49). Here $\partial q_1/\partial a = 0$, $\partial\ln k/\partial a = 0$, and $w_0(z_1) = -1$, so that

$$\frac{\partial z_1}{\partial a} = \frac{\dfrac{1}{z_1-p_1}\dfrac{\partial p_1}{\partial a}}{\dfrac{1}{z_1-p_1}+w_0(z_1)\dfrac{1}{z_1-q_1}} = \frac{-\dfrac{2}{a-b}}{\dfrac{2}{a-b}+\dfrac{2}{a-b}} = -\frac{1}{2}.$$

5.2.4 Relations between Sensitivity of Transfer Function and that of Poles and Zeros

Consider the transfer function

$$w(s) = k\,\frac{\prod_{i=1}^{m}(s-q_i)}{\prod_{i=1}^{n}(s-p_i)}. \qquad (5.50)$$

Differentiating with respect to $\alpha$, we obtain

$$\frac{\partial w(s)}{\partial\alpha} = \frac{\partial k}{\partial\alpha}\,\frac{\prod_{i=1}^{m}(s-q_i)}{\prod_{i=1}^{n}(s-p_i)}-k\,\frac{\displaystyle\sum_{i=1}^{m}\frac{\partial q_i}{\partial\alpha}\prod_{j\ne i}(s-q_j)}{\prod_{i=1}^{n}(s-p_i)}+k\,\frac{\displaystyle\prod_{i=1}^{m}(s-q_i)\sum_{i=1}^{n}\frac{\partial p_i}{\partial\alpha}\prod_{j\ne i}(s-p_j)}{\left[\prod_{i=1}^{n}(s-p_i)\right]^2}$$

or, with account for (5.50),

$$\frac{\partial w(s)}{\partial\alpha} = \left[\frac{\partial\ln k}{\partial\alpha}-\sum_{i=1}^{m}\frac{1}{s-q_i}\frac{\partial q_i}{\partial\alpha}+\sum_{i=1}^{n}\frac{1}{s-p_i}\frac{\partial p_i}{\partial\alpha}\right]w(s).$$

Finally, we find

$$\frac{\partial\ln w(s)}{\partial\alpha} = \frac{\partial\ln k}{\partial\alpha}-\sum_{i=1}^{m}\frac{1}{s-q_i}\frac{\partial q_i}{\partial\alpha}+\sum_{i=1}^{n}\frac{1}{s-p_i}\frac{\partial p_i}{\partial\alpha}. \qquad (5.51)$$

Evidently, for $s = j\omega$ Formula (5.51) defines the relations between the sensitivity functions of the frequency responses and the sensitivity coefficients of the poles and zeros of transfer function (5.50) of a linear system with constant coefficients.
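Relation (5.51) is straightforward to verify numerically; the parameterization $k = \alpha$, $q_1 = -\alpha$, $p_1 = -2\alpha$ in the sketch below is an arbitrary illustrative choice.

```python
# Check (5.51) on w(s) = k*(s - q)/(s - p) with k = alpha, q = -alpha,
# p = -2*alpha, so dq/da = -1 and dp/da = -2.
import cmath

def w(alpha, s):
    return alpha*(s + alpha)/(s + 2*alpha)

s, alpha, h = 0.7 + 0.4j, 1.2, 1e-6
numeric = (cmath.log(w(alpha + h, s)) - cmath.log(w(alpha - h, s))) / (2*h)
# (5.51): d ln k/da - (dq/da)/(s - q) + (dp/da)/(s - p)
analytic = 1/alpha - (-1)/(s + alpha) + (-2)/(s + 2*alpha)
assert abs(numeric - analytic) < 1e-6
```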

5.3 Sensitivity of Eigenvalues and Eigenvectors of Linear Time-Invariant Systems

5.3.1 Eigenvalues and Eigenvectors of Matrices

Let $A = \|a_{ij}\|$, $i, j = 1, \ldots, n$, be a constant square matrix with real elements.

DEFINITION 5.10 The matrix $\lambda E-A$, where $\lambda$ is an independent variable, is called the characteristic matrix. Its determinant

$$\Delta(\lambda) = \det(\lambda E-A) = \lambda^n+a_1\lambda^{n-1}+\ldots+a_n \qquad (5.52)$$

is called the characteristic polynomial of the matrix $A$, and the roots of the characteristic polynomial are called eigenvalues. Each eigenvalue $\lambda_i$ is associated with an eigenvector $X_i$ satisfying the equality

$$AX_i = \lambda_iX_i. \qquad (5.53)$$

It is known that each eigenvector is determined up to a constant factor. Therefore, in specific problems it is necessary to perform normalization. Obviously, the eigenvalues of the transposed matrix $A^T$ coincide with the eigenvalues of the matrix $A$. Nevertheless, the eigenvectors are, in the general case, different. Denote by $Y_i$ the eigenvectors of the matrix $A^T$, so that

$$A^TY_i = \lambda_iY_i. \qquad (5.54)$$

The eigenvectors of the initial and transposed matrices are orthogonal, i.e.,

$$(X_i, Y_j) = 0, \qquad i\ne j. \qquad (5.55)$$

For an integer $k$, the eigenvalues of the matrix $A^k$ are equal to $\lambda_i^k$ $(i = 1, \ldots, n)$.

DEFINITION 5.11 The sum of the diagonal elements $\sum_{i=1}^{n}a_{ii}$ of a matrix $A$ is called the trace of the matrix $A$ and is denoted by $\operatorname{tr}A$.

DEFINITION 5.12 A matrix $A$ is called similar to a matrix $B$ if there is a nonsingular matrix $H$ such that $A = H^{-1}BH$.

Similar matrices have equal characteristic polynomials. Hence follows the equality of the corresponding eigenvalues, determinants, and traces, because

$$\operatorname{tr}A = \sum_{i=1}^{n}\lambda_i = -a_1, \qquad \det A = (-1)^na_n = \prod_{i=1}^{n}\lambda_i. \qquad (5.56)$$

5.3.2 Sensitivity of Eigenvalues

Let a matrix $A$ have distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ and corresponding eigenvectors $X_1, \ldots, X_n$. Consider the matrix

$$A(\Delta\alpha) = A+\Delta\alpha B,$$

where $B$ is an arbitrary real matrix of the same dimensions, and $\Delta\alpha$ is a small parameter ranging over an interval $(-\varepsilon, \varepsilon)$. The eigenvalues of the new matrix $A+\Delta\alpha B$ are denoted by $\lambda_i(\Delta\alpha)$, and the corresponding eigenvectors by $X_i(\Delta\alpha)$ $(i = 1, \ldots, n)$. From the theory of perturbations of linear operators and matrices it is known [18, 110, 111] that the values $\lambda_i(\Delta\alpha)$ and $X_i(\Delta\alpha)$ $(i = 1, \ldots, n)$ are continuously differentiable functions of the parameter $\Delta\alpha$. Moreover, $A(0) = A$, $\lambda_i(0) = \lambda_i$, and $X_i(0) = X_i$. In this case, the following power series converge:

$$\lambda_i(\Delta\alpha) = \lambda_i+\sum_{j=1}^{\infty}\frac{1}{j!}\left.\frac{\partial^j\lambda_i(\Delta\alpha)}{\partial\Delta\alpha^j}\right|_{\Delta\alpha=0}\Delta\alpha^j, \qquad (5.57)$$

$$X_i(\Delta\alpha) = X_i+\sum_{j=1}^{\infty}\frac{1}{j!}\left.\frac{\partial^jX_i(\Delta\alpha)}{\partial\Delta\alpha^j}\right|_{\Delta\alpha=0}\Delta\alpha^j. \qquad (5.58)$$

DEFINITION 5.13 The values $\beta_{ij}$ and $\gamma_{ij}$ given by

$$\beta_{ij} = \left.\frac{\partial^j\lambda_i(\Delta\alpha)}{\partial\Delta\alpha^j}\right|_{\Delta\alpha=0}, \qquad \gamma_{ij} = \left.\frac{\partial^jX_i(\Delta\alpha)}{\partial\Delta\alpha^j}\right|_{\Delta\alpha=0}$$

will be called the sensitivity coefficient and the sensitivity vector of the $j$-th order, respectively.

For the eigenvalue (5.57) and eigenvector (5.58) we have

$$A(\Delta\alpha)X_i(\Delta\alpha) = \lambda_i(\Delta\alpha)X_i(\Delta\alpha).$$

Differentiating the last equation with respect to the parameter $\Delta\alpha$ yields

$$\frac{\partial A(\Delta\alpha)}{\partial\Delta\alpha}X_i(\Delta\alpha)+A(\Delta\alpha)\frac{\partial X_i(\Delta\alpha)}{\partial\Delta\alpha} = \frac{\partial\lambda_i(\Delta\alpha)}{\partial\Delta\alpha}X_i(\Delta\alpha)+\lambda_i(\Delta\alpha)\frac{\partial X_i(\Delta\alpha)}{\partial\Delta\alpha}. \qquad (5.59)$$

Setting $\Delta\alpha = 0$, let us transform the obtained sensitivity equation by taking the scalar product with the eigenvector $Y_i$ of the matrix $A^T$, so that

$$(BX_i, Y_i)+\left(\frac{\partial X_i}{\partial\Delta\alpha},\,A^TY_i-\lambda_iY_i\right) = \frac{\partial\lambda_i}{\partial\Delta\alpha}(X_i, Y_i).$$

As a result, using (5.54), we obtain

$$\beta_{i1} = \frac{(BX_i, Y_i)}{(X_i, Y_i)}. \qquad (5.60)$$
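Formula (5.60) can be checked with NumPy; note that the scalar products here are bilinear (no complex conjugation), and the random matrices below are illustrative assumptions.

```python
# Check (5.60): beta_i1 = (B X_i, Y_i)/(X_i, Y_i), against a finite
# difference of the eigenvalues of A + da*B (distinct eigenvalues assumed).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lam, X = np.linalg.eig(A)
lamT, Y = np.linalg.eig(A.T)
Y = Y[:, [np.argmin(abs(lamT - l)) for l in lam]]   # align Y_i with lambda_i

i, h = 0, 1e-7
beta = (Y[:, i] @ (B @ X[:, i])) / (Y[:, i] @ X[:, i])

def eig_near(da):
    vals = np.linalg.eigvals(A + da*B)
    return vals[np.argmin(abs(vals - lam[i]))]      # track the same eigenvalue

numeric = (eig_near(h) - eig_near(-h)) / (2*h)
assert abs(beta - numeric) < 1e-5
```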

To determine the first-order sensitivity coefficient of an eigenvalue by Formula (5.60) we need the derivative $B$ of the matrix $A(\Delta\alpha)$ with respect to the parameter $\Delta\alpha$, as well as the corresponding eigenvectors $X_i$ and $Y_i$. All these values are determined for $\Delta\alpha = 0$.

Let us consider some special cases of Formula (5.60). Assume that a single element of the matrix $A$, say $a_{ks}$, is variable. Then $\Delta\alpha = \Delta a_{ks}$, and the matrix $B$ has the only nonzero element $b_{ks} = 1$. Then,

$$\beta_{i1} = \frac{\partial\lambda_i}{\partial\Delta a_{ks}} = \frac{x_{is}y_{ik}}{\sum_{j=1}^{n}x_{ij}y_{ij}}, \qquad (5.61)$$

where $x_{ij}$ and $y_{ij}$ are the corresponding components of the vectors $X_i$ and $Y_i$. For a diagonal element $a_{kk}$ of the matrix we have

$$\beta_{i1} = \frac{\partial\lambda_i}{\partial\Delta a_{kk}} = \frac{x_{ik}y_{ik}}{\sum_{j=1}^{n}x_{ij}y_{ij}}. \qquad (5.62)$$

If the matrix $A$ is symmetric, the initial and transposed matrices coincide; therefore,

$$\frac{\partial\lambda_i}{\partial\Delta\alpha} = \frac{(BX_i, X_i)}{(X_i, X_i)}, \qquad (5.63)$$

$$\frac{\partial\lambda_i}{\partial\Delta a_{ks}} = \frac{x_{is}x_{ik}}{\sum_{j=1}^{n}x_{ij}^2}, \qquad (5.64)$$

$$\frac{\partial\lambda_i}{\partial\Delta a_{kk}} = \frac{x_{ik}^2}{\sum_{j=1}^{n}x_{ij}^2}. \qquad (5.65)$$

If the eigenvectors of the matrices $A$ and $A^T$ are orthonormalized, i.e.,

$$(X_i, Y_j) = \delta_{ij}, \qquad (5.66)$$

where $\delta_{ij}$ is the Kronecker symbol, the formulas for the sensitivity coefficients of the eigenvalues get simplified, so that their denominators are equal to unity.

Let $C = A^k$, and let the matrix $A$ have eigenvalues $\lambda_1, \ldots, \lambda_n$. Denote the eigenvalues of the matrix $C$ by $\lambda_i^{(c)}$. Then, as was noted in Section 5.3.1,

$$\lambda_i^{(c)} = \lambda_i^k.$$

Moreover,

$$\frac{\partial\lambda_i^{(c)}}{\partial\alpha} = k\lambda_i^{k-1}\frac{\partial\lambda_i}{\partial\alpha} \qquad (5.67)$$

or

$$\frac{\partial\ln\lambda_i^{(c)}}{\partial\alpha} = k\,\frac{\partial\ln\lambda_i}{\partial\alpha}. \qquad (5.68)$$

5.3.3 Sensitivity of Real and Imaginary Parts of Complex Eigenvalues

In the preceding paragraph we presented general expressions for the first-order sensitivity coefficients. These formulas are applicable for the investigation of real as well as complex eigenvalues. Now we derive formulas allowing us to evaluate the sensitivity of the real and imaginary parts of a complex eigenvalue separately. With this aim in view, we consider the following pair of complex-conjugate eigenvalues:

$$\lambda_k = \mu_k+j\nu_k, \qquad \lambda_{k+1} = \mu_k-j\nu_k. \qquad (5.69)$$

They are associated with the complex-conjugate eigenvectors

$$X_k = G_k+jH_k, \qquad X_{k+1} = G_k-jH_k. \qquad (5.70)$$

The eigenvalues $\lambda_k$ and $\lambda_{k+1}$ and the eigenvectors $X_k$ and $X_{k+1}$ satisfy the equations

$$AX_k = \lambda_kX_k, \qquad AX_{k+1} = \lambda_{k+1}X_{k+1}.$$

Therefore,

$$AG_k = \mu_kG_k-\nu_kH_k, \qquad AH_k = \mu_kH_k+\nu_kG_k. \qquad (5.71)$$

The corresponding eigenvectors of the transposed matrix $A^T$ will be denoted by

$$Y_k = Q_k+jV_k, \qquad Y_{k+1} = Q_k-jV_k.$$

In analogy with (5.54), for the matrix $A^T$ we have

$$A^TQ_k = \mu_kQ_k-\nu_kV_k, \qquad A^TV_k = \mu_kV_k+\nu_kQ_k. \qquad (5.72)$$

Let the matrix $A$ and, correspondingly, all the components of Equations (5.71) be variable, i.e., depend on a parameter $\Delta\alpha$. Let us differentiate Equations (5.71) with respect to this parameter. For convenience, hereinafter we denote partial derivatives with respect to the parameter by the index $\alpha$, for instance,

$$A_\alpha = \frac{\partial A(\Delta\alpha)}{\partial\Delta\alpha}, \qquad \mu_{k\alpha} = \frac{\partial\mu_k(\Delta\alpha)}{\partial\Delta\alpha}.$$

Then, after differentiation, we obtain the following sensitivity equations:

$$A_\alpha G_k+AG_{k\alpha} = \mu_{k\alpha}G_k+\mu_kG_{k\alpha}-\nu_{k\alpha}H_k-\nu_kH_{k\alpha},$$
$$A_\alpha H_k+AH_{k\alpha} = \mu_{k\alpha}H_k+\mu_kH_{k\alpha}+\nu_{k\alpha}G_k+\nu_kG_{k\alpha}. \qquad (5.73)$$

Taking the scalar product of each of the relations in (5.73) with the vectors $Q_k$ and $V_k$, for $\Delta\alpha = 0$ we find

$$(A_\alpha G_k, Q_k)+(AG_{k\alpha}, Q_k) = \mu_{k\alpha}(G_k, Q_k)+\mu_k(G_{k\alpha}, Q_k)-\nu_{k\alpha}(H_k, Q_k)-\nu_k(H_{k\alpha}, Q_k),$$
$$(A_\alpha G_k, V_k)+(AG_{k\alpha}, V_k) = \mu_{k\alpha}(G_k, V_k)+\mu_k(G_{k\alpha}, V_k)-\nu_{k\alpha}(H_k, V_k)-\nu_k(H_{k\alpha}, V_k),$$
$$(A_\alpha H_k, Q_k)+(AH_{k\alpha}, Q_k) = \mu_{k\alpha}(H_k, Q_k)+\mu_k(H_{k\alpha}, Q_k)+\nu_{k\alpha}(G_k, Q_k)+\nu_k(G_{k\alpha}, Q_k),$$
$$(A_\alpha H_k, V_k)+(AH_{k\alpha}, V_k) = \mu_{k\alpha}(H_k, V_k)+\mu_k(H_{k\alpha}, V_k)+\nu_{k\alpha}(G_k, V_k)+\nu_k(G_{k\alpha}, V_k). \qquad (5.74)$$

In this system, let us subtract term-wise the last equation from the first one; using $(AG_{k\alpha}, Q_k) = (G_{k\alpha}, A^TQ_k)$, $(AH_{k\alpha}, V_k) = (H_{k\alpha}, A^TV_k)$ and (5.72), we obtain

$$[(G_k, Q_k)-(H_k, V_k)]\mu_{k\alpha}-[(H_k, Q_k)+(G_k, V_k)]\nu_{k\alpha} = (A_\alpha G_k, Q_k)-(A_\alpha H_k, V_k). \qquad (5.75)$$

Summation and further transformation of the second and third equations yield

$$[(G_k, V_k)+(H_k, Q_k)]\mu_{k\alpha}+[(G_k, Q_k)-(H_k, V_k)]\nu_{k\alpha} = (A_\alpha G_k, V_k)+(A_\alpha H_k, Q_k). \qquad (5.76)$$

Equations (5.75) and (5.76) represent a system of linear nonhomogeneous equations with respect to the sensitivity coefficients $\mu_{k\alpha}$ and $\nu_{k\alpha}$. The determinant of this system is given by

$$\Delta = [(G_k, Q_k)-(H_k, V_k)]^2+[(G_k, V_k)+(H_k, Q_k)]^2$$

and is the square of the absolute value of the scalar product

$$(X_k, Y_k) = (G_k+jH_k,\,Q_k+jV_k).$$

For this reason, $\Delta\ne 0$. For normalized eigenvectors we have $\Delta = 1$.

5.3.4 Sensitivity of Eigenvectors

Consider Equation (5.59). For $\Delta\alpha = 0$ we take the term-wise scalar product of this equation with a vector $Y_g$ such that $g\ne i$:

$$(A\gamma_{i1}, Y_g)+(A_\alpha X_i, Y_g) = \lambda_i(\gamma_{i1}, Y_g)+\beta_{i1}(X_i, Y_g).$$

The last equation can be transformed to the form

$$(A_\alpha X_i, Y_g)+(\gamma_{i1},\,A^TY_g-\lambda_iY_g) = \beta_{i1}(X_i, Y_g)$$

or

$$(A_\alpha X_i, Y_g)+(\gamma_{i1}, Y_g)(\lambda_g-\lambda_i) = \beta_{i1}(X_i, Y_g).$$

Hence,

$$(\gamma_{i1}, Y_g) = \frac{(A_\alpha X_i, Y_g)-\beta_{i1}(X_i, Y_g)}{\lambda_i-\lambda_g}, \qquad \lambda_i\ne\lambda_g. \qquad (5.77)$$

Since $(X_i, Y_g) = 0$ for $i\ne g$, we obtain

$$(\gamma_{i1}, Y_g) = \frac{(A_\alpha X_i, Y_g)}{\lambda_i-\lambda_g}, \qquad i\ne g. \qquad (5.78)$$

Equation (5.78) defines the relations between the coordinates of the sensitivity vector $\gamma_{i1}$ and the elements of the vectors $X_i$ and $Y_g$, the elements of the matrix $A_\alpha$, and the eigenvalues $\lambda_i$, $\lambda_g$, $i\ne g$. This equation can be written in the following coordinate form:

$$\sum_{j=1}^{n}y_{gj}\gamma_{i1j} = d_{ig}, \qquad i\ne g, \qquad (5.79)$$

where

$$d_{ig} = \frac{(A_\alpha X_i, Y_g)}{\lambda_i-\lambda_g}.$$

To determine the components $\gamma_{i1j}$, $j = 1, \ldots, n$, we need $n$ such equations. For different $g$ $(g = 1, \ldots, n)$ such that $g\ne i$ we can construct $n-1$ of them. The remaining equation can be obtained as follows. Introduce a basis consisting of the eigenvectors $X_1, \ldots, X_n$ in the space under consideration (Euclidean $n$-dimensional, real or complex). Then,

$$\gamma_{i1} = \sum_{j=1}^{n}\gamma_{i1j}X_j. \qquad (5.80)$$

Introduce a normalization condition for the vector $X_i(\Delta\alpha)$:

$$(X_i(\Delta\alpha), X_i(\Delta\alpha)) = 1.$$

Hence, $(\gamma_{i1}, X_i) = 0$ or, with account for the orthogonality of the eigenvectors and (5.80), $\gamma_{i1i} = 0$. For the case when the eigenvectors of the matrix $A$ are used as an orthogonal basis, the remaining components can be found using the relations

$$\gamma_{i1g} = (\gamma_{i1}, Y_g) = \left(\sum_{j=1}^{n}\gamma_{i1j}X_j,\,Y_g\right).$$

Then, with account for (5.78), we finally obtain

$$\gamma_{i1g} = \frac{(A_\alpha X_i, Y_g)}{\lambda_i-\lambda_g}, \qquad i\ne g, \qquad \gamma_{i1i} = 0. \qquad (5.81)$$

Sensitivity Coefficients and Vectors of Higher Orders

For k ≥ 2, the values β_{ik} and γ_{ik} can be obtained by corresponding differentiation of the left and right sides of the sensitivity equations for coefficients and vectors of lower orders. Consider the sensitivity equation (5.59). Differentiating term-wise with respect to the parameter ∆α, we have

A_{αα} X_i + 2 A_α X_{iα} + A X_{iαα} = λ_{iαα} X_i + 2 λ_{iα} X_{iα} + λ_i X_{iαα},  (5.82)

where

A_{αα} = ∂²A/∂∆α²,   X_{iαα} = ∂²X_i/∂∆α².

Scalar multiplication of the terms of this equation by the vector Y_i yields

(A_{αα} X_i, Y_i) + 2(X_{iα}, A_α^T Y_i − λ_{iα} Y_i) + (X_{iαα}, A^T Y_i − λ_i Y_i) = λ_{iαα} (X_i, Y_i).

Hence, since A^T Y_i = λ_i Y_i, the sensitivity coefficient of the second order is given by

λ_{iαα} = β_{i2} = [(A_{αα} X_i, Y_i) + 2(X_{iα}, A_α^T Y_i − λ_{iα} Y_i)] / (X_i, Y_i).  (5.83)

For the case when the matrix A depends linearly on the variable parameter ∆α, we have A_{αα} = 0 and

β_{i2} = 2 (X_{iα}, A_α^T Y_i − λ_{iα} Y_i) / (X_i, Y_i).  (5.84)

Then, we find the scalar product of the left and right sides of the sensitivity equation (5.82) with the vector Y_g, g ≠ i:

(A_{αα} X_i, Y_g) + 2(X_{iα}, A_α^T Y_g − λ_{iα} Y_g) + (X_{iαα}, A^T Y_g − λ_i Y_g) = λ_{iαα} (X_i, Y_g).  (5.85)

Take into account the fact that A^T Y_g = λ_g Y_g. Then, Equation (5.85) yields

(X_{iαα}, Y_g) = [(A_{αα} X_i, Y_g) + 2(X_{iα}, A_α^T Y_g − λ_{iα} Y_g) − λ_{iαα} (X_i, Y_g)] / (λ_i − λ_g).

If A(∆α) = A + ∆α B, then A_{αα} = 0 and, since (X_i, Y_g) = 0 for i ≠ g,

(X_{iαα}, Y_g) = 2(X_{iα}, A_α^T Y_g − λ_{iα} Y_g) / (λ_i − λ_g),   i ≠ g.  (5.86)

In a similar way we can obtain formulas for sensitivity coefficients and vectors of any order.

5.3.6  Sensitivity of Trace and Determinant of Matrix

Consider the sensitivity of the trace of a matrix A:

∂tr A/∂α = Σ_{i=1}^{n} ∂a_{ii}/∂α.

Or, with account for (5.56),

∂tr A/∂α = Σ_{i=1}^{n} ∂λ_i/∂α.

The determinant of the matrix A is given by

det A = Π_{i=1}^{n} λ_i.

The sensitivity coefficient of the determinant is given by

∂det A/∂α = Σ_{i=1}^{n} (∂λ_i/∂α) Π_{j=1, j≠i}^{n} λ_j = det A · Σ_{i=1}^{n} ∂ln λ_i/∂α  (5.87)

or

∂ln det A/∂α = Σ_{i=1}^{n} ∂ln λ_i/∂α.  (5.88)
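These relations are easy to verify numerically. The sketch below uses an illustrative symmetric matrix family A(α) = A_0 + αB (an assumption made here so that the eigenvalues are real, distinct, and returned in a stable order):

```python
import numpy as np

# A(alpha) = A0 + alpha*B, symmetric, so eigenvalues are real and (here)
# distinct; np.linalg.eigvalsh returns them sorted, which keeps the
# finite-difference pairing of eigenvalues stable.
A0 = np.array([[2.0, 0.3], [0.3, 5.0]])
B = np.array([[1.0, 0.2], [0.2, -1.0]])
A = lambda a: A0 + a * B

a, eps = 0.1, 1e-6
lam = np.linalg.eigvalsh(A(a))
dlam = (np.linalg.eigvalsh(A(a + eps)) - np.linalg.eigvalsh(A(a - eps))) / (2 * eps)

# d(tr A)/d(alpha) = sum_i d(lambda_i)/d(alpha); here d(tr A)/d(alpha) = tr B
dtr = np.trace(B)

# (5.87): d(det A)/d(alpha) = sum_i (d(lambda_i)/d(alpha)) * prod_{j != i} lambda_j
ddet_fd = (np.linalg.det(A(a + eps)) - np.linalg.det(A(a - eps))) / (2 * eps)
ddet_eig = sum(dlam[i] * np.prod(np.delete(lam, i)) for i in range(len(lam)))
```

Both finite-difference values agree with the eigenvalue-based expressions to numerical precision.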

5.4  Sensitivity of Integral Quality Indices

5.4.1  Integral Estimates

For indirect evaluation of the quality of transients, various integrals of the system phase coordinates, their derivatives, and combinations of them are widely used in automatic control theory. For a system described by the equation

Σ_{i=0}^{n} a_i y^{(n−i)}(t) = Σ_{i=0}^{g} b_i x^{(g−i)}(t),  (5.89)

integral estimates that can be represented in the following generalized form are commonly used:

I = ∫_0^∞ f(y, ẏ, . . . , y^{(n)}, t) dt.

For different types of integrands there are well-known integral estimates

I_0 = ∫_0^∞ y(t) dt,   I_1 = ∫_0^∞ |y(t)| dt,   I_2^{(i)} = ∫_0^∞ [y^{(i)}(t)]² dt,   i = 0, . . . , n,

and so on.

For a stable system described by the equation Ẏ = A Y, generalized estimates of the form

I_V = ∫_0^∞ Y^T V Y dt

are usually used, where V is a constant matrix. Using the integral estimation method, the above integrals are chosen so that they can be expressed in the simplest way via the system parameters.
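The section does not spell out how I_V is expressed via system parameters; one standard route, sketched here as an assumption rather than the book's own derivation, is the Lyapunov equation A^T P + P A = −V, which gives I_V = Y(0)^T P Y(0) for a stable A:

```python
import numpy as np

def integral_estimate(A, V, y0):
    """I_V = integral of Y^T V Y dt for Ydot = A Y, Y(0) = y0 (A stable).

    Solves the Lyapunov equation A^T P + P A = -V, then I_V = y0^T P y0.
    """
    n = A.shape[0]
    # Row-major vectorization: vec(A^T P + P A) = (kron(A.T, I) + kron(I, A.T)) vec(P)
    M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
    P = np.linalg.solve(M, -V.reshape(-1)).reshape(n, n)
    return y0 @ P @ y0

A = np.array([[0.0, 1.0], [-1.0, -1.0]])   # an illustrative stable matrix
V = np.eye(2)
y0 = np.array([1.0, 0.0])
I_V = integral_estimate(A, V, y0)
```

For this example the exact value is I_V = 3/2, which the solver reproduces.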

5.4.2  Sensitivity of Integral Estimate I_0

For evaluation of the sensitivity of the integral estimate I_0 we have

∂I_0/∂α = ∂/∂α ∫_0^∞ y(t) dt = ∫_0^∞ u(t) dt,

where

u(t) = ∂y(t)/∂α.

The Laplace transform of the sensitivity function u(t) is equal to

L[u(t)] = ∫_0^∞ e^{−st} u(t) dt.

Hence,

∂I_0/∂α = L[u(t)]|_{s=0}.

To find the image L[u(t)], we can use the formula linking the image of the sensitivity function with the transfer functions of the initial system and the sensitivity model. To obtain ∂I_0/∂α we can also use the sensitivity equation. Consider, for example, a system described by the equations

Σ_{i=0}^{n} a_i y^{(n−i)} = 0,   y^{(i)}(0) = y_0^{(i)},   i = 0, . . . , n − 1.  (5.90)

From (5.90) we obtain the sensitivity equation in the form

Σ_{i=0}^{n} a_i u^{(n−i)} = − Σ_{i=0}^{n} (∂a_i/∂α) y^{(n−i)},   u^{(i)}(0) = 0,   i = 0, . . . , n − 1.  (5.91)

Integrating both sides of Equation (5.91) from 0 to ∞, we find

Σ_{i=0}^{n} a_i ∫_0^∞ u^{(n−i)} dt = − Σ_{i=0}^{n} (∂a_i/∂α) ∫_0^∞ y^{(n−i)} dt.

Since the initial system is stable, we have y^{(i)}(t) → 0 and u^{(i)}(t) → 0 as t → ∞, so that for i < n the integrals of the derivatives reduce to initial values: ∫_0^∞ y^{(n−i)} dt = −y^{(n−i−1)}(0) and ∫_0^∞ u^{(n−i)} dt = 0. Therefore,

a_n ∫_0^∞ u(t) dt = Σ_{i=0}^{n−1} (∂a_i/∂α) y^{(n−i−1)}(0) − (∂a_n/∂α) ∫_0^∞ y(t) dt.

Hence,

∂I_0/∂α = (1/a_n) Σ_{i=0}^{n−1} (∂a_i/∂α) y^{(n−i−1)}(0) − (∂ln a_n/∂α) I_0  (5.92)

or

∂ln I_0/∂α = (1/(a_n I_0)) Σ_{i=0}^{n−1} (∂a_i/∂α) y^{(n−i−1)}(0) − ∂ln a_n/∂α.  (5.93)
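Formula (5.92) can be checked on the simplest first-order case a_0 ẏ + a_1 y = 0, for which the closed form I_0 = a_0 y_0 / a_1 is elementary (the sketch below uses illustrative numbers):

```python
# Check of (5.92) on a0*y' + a1*y = 0, y(0) = y0,
# whose integral estimate has the closed form I0 = a0*y0/a1.
def I0(a0, a1, y0):
    return a0 * y0 / a1

a0, a1, y0, eps = 2.0, 3.0, 1.0, 1e-7

# alpha = a0: (5.92) predicts dI0/da0 = (1/a1)*y0 (the a_n term drops out)
fd_a0 = (I0(a0 + eps, a1, y0) - I0(a0 - eps, a1, y0)) / (2 * eps)
pred_a0 = y0 / a1

# alpha = a1 = a_n: (5.92) predicts dI0/da1 = -(1/a1)*I0
fd_a1 = (I0(a0, a1 + eps, y0) - I0(a0, a1 - eps, y0)) / (2 * eps)
pred_a1 = -I0(a0, a1, y0) / a1
```

Both finite differences match the predictions of (5.92).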

5.4.3  Sensitivity of Quadratic Estimates. Transformation of Differential Equations

The sensitivity coefficients of quadratic estimates,

∂I_2^{(i)}/∂α = 2 ∫_0^∞ y^{(i)} u^{(i)} dt,

can be found in many ways. Let us illustrate one of these methods by an example of a system described by the following second-order equation:

a_0 y″ + a_1 y′ + a_2 y = 0,   y(0) = y_0,   y′(0) = y_0′.  (5.94)

The sensitivity equation has the form

a_0 u″ + a_1 u′ + a_2 u = −(∂a_0/∂α) y″ − (∂a_1/∂α) y′ − (∂a_2/∂α) y,   u(0) = u′(0) = 0.  (5.95)

Multiplying Equation (5.94) successively by u and u′, we find

a_0 y″u + a_1 y′u + a_2 yu = 0,  (5.96)
a_0 y″u′ + a_1 y′u′ + a_2 yu′ = 0.  (5.97)

Similarly, multiply Equation (5.95) by y and y′:

a_0 u″y + a_1 u′y + a_2 uy = − Σ_{i=0}^{2} (∂a_i/∂α) y^{(2−i)} y,  (5.98)
a_0 u″y′ + a_1 u′y′ + a_2 uy′ = − Σ_{i=0}^{2} (∂a_i/∂α) y^{(2−i)} y′.  (5.99)

Then, we add (5.96) to (5.98) and (5.97) to (5.99). Thus,

a_0 (y″u + u″y) + a_1 (y′u + u′y) + 2a_2 yu = − Σ_{i=0}^{2} (∂a_i/∂α) y^{(2−i)} y,
a_0 (y″u′ + u″y′) + 2a_1 y′u′ + a_2 (yu′ + uy′) = − Σ_{i=0}^{2} (∂a_i/∂α) y^{(2−i)} y′.

Integrating both sides from 0 to ∞, we obtain

a_0 ∫_0^∞ (y″u + u″y) dt + a_1 ∫_0^∞ (y′u + u′y) dt + 2a_2 ∫_0^∞ yu dt = − Σ_{i=0}^{2} (∂a_i/∂α) ∫_0^∞ y^{(2−i)} y dt,

a_0 ∫_0^∞ (y″u′ + u″y′) dt + 2a_1 ∫_0^∞ y′u′ dt + a_2 ∫_0^∞ (yu′ + uy′) dt = − Σ_{i=0}^{2} (∂a_i/∂α) ∫_0^∞ y^{(2−i)} y′ dt.

For the individual integrals we have

∫_0^∞ (y″u + u″y) dt = −2 ∫_0^∞ y′u′ dt = −(∂/∂α) ∫_0^∞ ẏ² dt = −∂I_2^{(1)}/∂α,

∫_0^∞ (y′u + u′y) dt = 0,   ∫_0^∞ yy′ dt = −½ y_0²,

∫_0^∞ y″y dt = −y_0 y_0′ − I_2^{(1)},   ∫_0^∞ (y″u′ + u″y′) dt = 0,

∫_0^∞ (yu′ + uy′) dt = 0,   ∫_0^∞ y″y′ dt = −½ (y_0′)².

As a result, we obtain the following system of linear algebraic equations with respect to the sensitivity coefficients:

−a_0 ∂I_2^{(1)}/∂α + a_2 ∂I_2/∂α = (∂a_0/∂α)(y_0 y_0′ + I_2^{(1)}) + ½ (∂a_1/∂α) y_0² − (∂a_2/∂α) I_2,

a_1 ∂I_2^{(1)}/∂α = ½ (∂a_0/∂α)(y_0′)² − (∂a_1/∂α) I_2^{(1)} + ½ (∂a_2/∂α) y_0².

Hence,

∂I_2^{(1)}/∂α = (1/(2a_1)) [ (∂a_0/∂α) ẏ_0² + (∂a_2/∂α) y_0² − 2 (∂a_1/∂α) I_2^{(1)} ],  (5.100)

∂I_2/∂α = (a_0/(2a_1 a_2)) [ (∂a_0/∂α) ẏ_0² + (∂a_2/∂α) y_0² − 2 (∂a_1/∂α) I_2^{(1)} ]
       + (1/a_2) [ (∂a_0/∂α)(y_0 ẏ_0 + I_2^{(1)}) + ½ (∂a_1/∂α) y_0² − (∂a_2/∂α) I_2 ].  (5.101)

Let a_0 = T², a_1 = 2ξT, and a_2 = 1, i.e., we consider an oscillatory unit. Then,

∂I_2^{(1)}/∂ξ = −(1/ξ) I_2^{(1)}   or   ∂I_2^{(1)}/∂ln ξ = −I_2^{(1)},

∂I_2/∂ξ = −T² I_2^{(1)}/ξ + T y_0² = T² ∂I_2^{(1)}/∂ξ + T y_0²,

∂I_2^{(1)}/∂T = ẏ_0²/(2ξ) − (1/T) I_2^{(1)},

∂I_2/∂T = T ( T ẏ_0²/(2ξ) + I_2^{(1)} + 2 y_0 ẏ_0 ) + ξ y_0².
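These oscillatory-unit formulas admit a quick numerical cross-check. The sketch below evaluates I_2^{(1)} = ∫ ẏ² dt through a Lyapunov equation (an auxiliary technique assumed here, not introduced in this section) and compares finite differences with the formulas above:

```python
import numpy as np

# I2^(1) = integral of ydot^2 for T^2 y'' + 2 xi T y' + y = 0,
# computed from the Lyapunov equation A^T P + P A = -Q with Q = diag(0, 1).
def I2_1(T, xi, y0, dy0):
    A = np.array([[0.0, 1.0], [-1.0 / T**2, -2.0 * xi / T]])
    Q = np.diag([0.0, 1.0])                       # integrand ydot^2
    M = np.kron(A.T, np.eye(2)) + np.kron(np.eye(2), A.T)
    P = np.linalg.solve(M, -Q.reshape(-1)).reshape(2, 2)
    x0 = np.array([y0, dy0])
    return x0 @ P @ x0

T, xi, y0, dy0, eps = 1.0, 0.5, 1.0, 0.0, 1e-6
I = I2_1(T, xi, y0, dy0)
dI_dxi = (I2_1(T, xi + eps, y0, dy0) - I2_1(T, xi - eps, y0, dy0)) / (2 * eps)
dI_dT = (I2_1(T + eps, xi, y0, dy0) - I2_1(T - eps, xi, y0, dy0)) / (2 * eps)
```

The finite differences reproduce ∂I_2^{(1)}/∂ξ = −I_2^{(1)}/ξ and ∂I_2^{(1)}/∂T = ẏ_0²/(2ξ) − I_2^{(1)}/T for the chosen values.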

Obviously, if the dependence of the cost function I_2^{(i)} on the parameter α is given in the form of a composite function

I_2^{(i)} = f[a_j(α), b_j(α)],

the coefficient ∂I_2^{(i)}/∂α can be found by direct differentiation of the function f:

∂I_2^{(i)}/∂α = Σ_{k=0}^{n} (∂f/∂a_k)(∂a_k/∂α) + Σ_{k=0}^{g} (∂f/∂b_k)(∂b_k/∂α).

5.4.4  Sensitivity of Quadratic Estimates. Laplace Transform Method

The sensitivity coefficient of the integral estimate

I_2^{(i)} = ∫_0^∞ [y^{(i)}]² dt

is proportional to the integral

∫_0^∞ y^{(i)} u^{(i)} dt.

The Laplace image of the product y^{(i)}(t) u^{(i)}(t) has the form

L[y^{(i)}(t) u^{(i)}(t)] = ∫_0^∞ y^{(i)}(t) u^{(i)}(t) e^{−st} dt.

Hence,

∫_0^∞ y^{(i)}(t) u^{(i)}(t) dt = L[y^{(i)}(t) u^{(i)}(t)]|_{s=0}.  (5.102)

In Laplace transformation theory the following theorems have been proved [29].

THEOREM 5.2
If functions f_1(t) and f_2(t) have Laplace images F_1(s) and F_2(s), respectively, and F_1(s) = a_1(s)/b_1(s) is a rational algebraic function having only q simple poles p_1, . . . , p_q, the following equality holds:

L[f_1(t) f_2(t)] = Σ_{k=1}^{q} [a_1(p_k)/b_1′(p_k)] F_2(s − p_k),  (5.103)

where

b_1′(p_k) = db_1(s)/ds |_{s=p_k}.

THEOREM 5.3
Let functions f_1(t) and f_2(t) have Laplace images F_1(s) and F_2(s), respectively, and let F_1(s) be a rational algebraic function having n different poles p_1, . . . , p_n with multiplicities m_1, . . . , m_n, respectively, so that Σ_{i=1}^{n} m_i = q. Then,

L[f_1(t) f_2(t)] = Σ_{k=1}^{n} Σ_{j=1}^{m_k} [(−1)^{m_k−j}/(m_k − j)!] R_{kj} [d^{m_k−j} F_2(s)/ds^{m_k−j}] evaluated at s − p_k,  (5.104)

where

R_{kj} = [1/(j − 1)!] d^{j−1}/ds^{j−1} [(s − p_k)^{m_k} F_1(s)] |_{s=p_k}.

Let us use the results of the above theorems for calculating sensitivity coefficients of quadratic integral estimates. Denote the image of a function y^{(i)}(t) by Y_i(s) = a_i(s)/b_i(s), and that of the function u^{(i)}(t) by U_i(s). Let the rational algebraic function a_i(s)/b_i(s) have only q simple poles. Then,

∫_0^∞ y^{(i)}(t) u^{(i)} dt = [ Σ_{j=1}^{q} (a_i(p_j)/b_i′(p_j)) U_i(s − p_j) ] |_{s=0}.  (5.105)

In the general case, when Y_i(s) may have multiple poles, according to (5.104) we have

∫_0^∞ y^{(i)}(t) u^{(i)} dt = [ Σ_{k=1}^{n} Σ_{j=1}^{m_k} [(−1)^{m_k−j}/(m_k − j)!] R_{kj} [d^{m_k−j} U_i(s)/ds^{m_k−j}] evaluated at s − p_k ] |_{s=0},  (5.106)

where

R_{kj} = [1/(j − 1)!] d^{j−1}/ds^{j−1} [(s − p_k)^{m_k} Y_i(s)] |_{s=p_k}.

As an example, we consider the system

T² ÿ + 2ξT ẏ + y = 0,   y(0) = y_0,   ẏ(0) = ẏ_0.

For simplicity, let T = 1, y_0 = 0, and ẏ_0 = 1. Then,

Y(s) = 1/(s² + 2ξs + 1),   U(s) = ∂Y(s)/∂ξ = −2s/(s² + 2ξs + 1)²,   b′(s) = 2(s + ξ),

and, by (5.105),

∂I_2/∂ξ = 2 [ U(−p_1)/b′(p_1) + U(−p_2)/b′(p_2) ]
        = 2 [ p_1/((p_1 + ξ)(p_1² − 2ξp_1 + 1)²) + p_2/((p_2 + ξ)(p_2² − 2ξp_2 + 1)²) ],

where p_{1,2} = −ξ ± √(ξ² − 1). Substituting the values p_1 and p_2, we obtain

∂I_2/∂ξ = −1/(4ξ²).

The integral estimate I_V can be written in the following scalar form:



I_V = Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_0^∞ a_{ij} y_i y_j dt.

The corresponding sensitivity coefficient is defined by

∂I_V/∂α = Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_0^∞ [ (∂a_{ij}/∂α) y_i y_j + a_{ij} (u_i y_j + y_i u_j) ] dt.

If the coefficients a_{ij} are independent of α, we have

∂I_V/∂α = Σ_{i=1}^{n} Σ_{j=1}^{n} a_{ij} ∫_0^∞ (u_i y_j + y_i u_j) dt.

As before, determination of the sensitivity coefficient reduces to calculation of the integrals

∫_0^∞ u_i y_j dt = L[u_i(t) y_j(t)]|_{s=0}.
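The result ∂I_2/∂ξ = −1/(4ξ²) from the example above can be confirmed directly. The sketch uses the closed-form solution y(t) = e^{−ξt} sin(ωt)/ω, ω = √(1 − ξ²), for T = 1, y_0 = 0, ẏ_0 = 1 (a standard fact, not written out in the text):

```python
import numpy as np

# I2(xi) = integral of y(t)^2 dt, evaluated by the trapezoid rule on a fine grid.
def I2(xi, t_end=200.0, n=400_001):
    t = np.linspace(0.0, t_end, n)
    w = np.sqrt(1.0 - xi**2)
    y = np.exp(-xi * t) * np.sin(w * t) / w
    f = y**2
    dt = t[1] - t[0]
    return dt * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)

xi, eps = 0.5, 1e-4
dI = (I2(xi + eps) - I2(xi - eps)) / (2 * eps)
```

For ξ = 0.5 one obtains I_2 ≈ 1/(4ξ) = 0.5 and dI ≈ −1/(4ξ²) = −1.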


5.4.5  Sensitivity Coefficients of Integral Estimates for Discontinuous Control Systems

Consider the generalized integral estimate of the form

I = ∫_{t_0(α)}^{∞} Q(Y, α) dt,

where Q(Y, α) is a function of the variables Y and α having continuous derivatives ∂Q/∂Y and ∂Q/∂α on the intervals (t_{i−1}, t_i), (t_g, ∞), (i = 1, . . . , g). Let Y(t) be a solution of a discontinuous system. Then, we may write

I = Σ_{i=1}^{g} ∫_{t_{i−1}(α)}^{t_i(α)} Q(Y, α) dt + ∫_{t_g(α)}^{∞} Q(Y, α) dt,

where t_i(α) are the switching moments. Then, for the sensitivity coefficient we have

∂I/∂α = Σ_{i=1}^{g} ∫_{t_{i−1}}^{t_i} ( (∂Q/∂Y) U + ∂Q/∂α ) dt + ∫_{t_g}^{∞} ( (∂Q/∂Y) U + ∂Q/∂α ) dt
      + Σ_{i=1}^{g} [ Q(t_i, Y_i^−) − Q(t_i, Y_i^+) ] dt_i/dα − Q(t_0, Y_0^+) dt_0/dα,  (5.107)

where U = ∂Y/∂α, and Y_i^− and Y_i^+ are the values of the solution immediately before and after the switching moment t_i.

If the solution of the initial system is continuous, i.e., Y_i^− = Y_i^+, we have

∂I/∂α = Σ_{i=1}^{g} ∫_{t_{i−1}}^{t_i} ( (∂Q/∂Y) U + ∂Q/∂α ) dt + ∫_{t_g}^{∞} ( (∂Q/∂Y) U + ∂Q/∂α ) dt − Q(t_0, Y_0) dt_0/dα.  (5.108)

5.5  Indirect Characteristics of Sensitivity Functions

5.5.1  Preliminaries

For the solution of a number of analysis and design problems for control systems with account for low sensitivity, it is necessary to compare various sensitivity functions. In the general case, such a comparison appears to be cumbersome. In this connection, it is required to introduce some indirect indices reflecting various properties of sensitivity functions and making it possible to perform convenient comparative analysis of these functions. This can be done in analogy with the indices used for evaluating the quality of transients. The large number of quality indices used in automatic control theory can be divided into the following four groups: indices of precision, stability margin, speed of response, and composite indices. Many of them, especially precision indices, can be used for estimating the properties of sensitivity functions.

5.5.2  Precision Indices

For evaluating the precision of control systems, the error values in various typical operating conditions are used. In general, the error with respect to the steady-state (forced) process in an asymptotically stable system is defined by [121]

z^∞(t) = x(t) − y^∞(t),  (5.109)

where x(t) is an input action and y^∞(t) is the steady-state process, so that

y^∞(t) = ∫_{−∞}^{t} h(t − τ) x(τ) dτ = ∫_{0}^{∞} h(τ) x(t − τ) dτ.

It is assumed that the system is to reproduce a signal x(t) that is independent of α. With due account for variations ∆α of the parameter α, for the steady-state process y^∞(t) we have

y^∞(t, α_0 + ∆α) = ∫_0^∞ [h(τ, α_0) + ∆h(τ)] x(t − τ) dτ = y^∞(t, α_0) + ∆y^∞(t).

For small ∆α,

∆y^∞(t) = u^∞(t) ∆α,   ∆y^∞(t) = [ ∫_0^∞ µ(τ) x(t − τ) dτ ] ∆α,

where µ(t) = ∂h(t)/∂α is the sensitivity function of the weight function. As a result, for the sensitivity function of the steady-state process we obtain

u^∞(t) = ∫_0^∞ µ(τ) x(t − τ) dτ.  (5.110)

Accordingly, for the sensitivity function of the error with respect to the steady-state process we have

∂z^∞(t)/∂α = −u^∞(t).

Let there exist an expansion of the function x(t − τ) into a Taylor series in a neighborhood of the point t, i.e.,

x(t − τ) = Σ_{i=0}^{∞} (−1)^i (τ^i/i!) x^{(i)}(t).  (5.111)

Substituting (5.111) into (5.110), we find

u^∞(t) = Σ_{i=0}^{∞} d_i x^{(i)}(t)/i!,

where

d_i = (−1)^i ∫_0^∞ µ(τ) τ^i dτ

is the i-th order moment of the sensitivity function µ(t). Obviously, the sensitivity function ∂z^∞(t)/∂α is described by the expression

∂z^∞(t)/∂α = − Σ_{i=0}^{∞} d_i x^{(i)}(t)/i!.

As is known [8, 121], the steady-state error z^∞(t) is given by

z^∞(t) = Σ_{i=0}^{∞} c_i x^{(i)}(t)/i!,

where c_i, i = 0, 1, . . . are the error coefficients. In this case we find that −d_i = ∂c_i/∂α are the sensitivity functions (coefficients) of the error coefficients. As a result,

∂c_i/∂α = (−1)^{i+1} ∫_0^∞ µ(τ) τ^i dτ.

The sensitivity coefficients ∂c_i/∂α can also be calculated by the sensitivity function u(s) of the transfer function w(s):

u(s) = ∂w(s)/∂α = ∫_0^∞ µ(τ) e^{−sτ} dτ.

It can be easily shown that

∂c_i/∂α = − d^i u(s)/ds^i |_{s=0},   i = 0, 1, . . .

Consider some examples. Let the transfer function of a system have the form

w(s) = b(s)/a(s) = Σ_{i=0}^{m} b_i s^{m−i} / Σ_{i=0}^{n} a_i s^{n−i}.

Then, for a constant input signal x(t) = x_0 that is independent of α, we have

y^∞ = (b_m/a_n) x_0

and

u^∞(t) = ∂/∂α [ (b_m/a_n) x_0 ] = (b_m/a_n) x_0 [ ∂ln b_m/∂α − ∂ln a_n/∂α ].

Obviously, if the coefficients b_m and a_n are independent of the parameter α, we have u^∞(t) = 0. Similar results can be obtained if we consider the image of the sensitivity function

u(s) = [∂w(s)/∂α] x(s).

If the transfer function of the system has the form

w(s) = w_0(s)/(1 + w_1(s)),

where w_1(s) is the transfer function of the open-loop system, then

y^∞(t) = [w_0(0)/(1 + k)] x_0,

where k = w_1(0) is the total gain of the open-loop system. Then,

u^∞(t) = [1/(1 + k)] [ ∂w_0(0)/∂α − w_0(0) ∂ln(1 + k)/∂α ] x_0.

If the gain k is independent of the parameter α, we have

u^∞(t) = [1/(1 + k)] [∂w_0(0)/∂α] x_0.

From the last equations it follows that the sensitivity of the steady-state value of the variable y(t) decreases as the open-loop system gain increases.
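This conclusion can be illustrated with a short numeric sketch (the numbers below are illustrative, not from the text):

```python
# Steady-state sensitivity u_inf = [1/(1+k)] * (dw0(0)/dalpha) * x0
# for k independent of alpha: it shrinks as the open-loop gain k grows.
def u_inf(k, dw0_dalpha=1.0, x0=1.0):
    return dw0_dalpha * x0 / (1.0 + k)

vals = [u_inf(k) for k in (0.0, 9.0, 99.0)]
```

Each tenfold increase of 1 + k reduces the steady-state sensitivity tenfold.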

5.5.3  Integral Estimates of Sensitivity Functions

These estimates belong to composite indices. In analogy with quality indices for transients, we can introduce integral estimates of sensitivity functions of the form

J = ∫_0^∞ φ(u, u̇, . . . , u^{(n)}, t) dt.

Consider determination of some of these integral estimates in detail. Since

∂I_0/∂α = ∂/∂α ∫_0^∞ y(t) dt = ∫_0^∞ u(t) dt,

we have

J_0 = ∫_0^∞ u(t) dt = ∂I_0/∂α.

Methods of determination of the coefficient ∂I_0/∂α were presented in Section 5.4. Below we demonstrate a technique for determining the estimates

J_2^{(i)} = ∫_0^∞ [u^{(i)}(t)]² dt

by an example of a second-order system:

a_0 ÿ + a_1 ẏ + a_2 y = 0,   y^{(i)}(0) = y_0^{(i)}  (i = 0, 1),  (5.112)

a_0 ü + a_1 u̇ + a_2 u = − Σ_{i=0}^{2} (∂a_i/∂α) y^{(2−i)},   u(0) = u̇(0) = 0.  (5.113)

Multiplying Equation (5.113) successively by u, u̇, ü and integrating these expressions term-wise from 0 to ∞, we obtain

a_0 ∫_0^∞ üu dt + a_1 ∫_0^∞ u̇u dt + a_2 ∫_0^∞ u² dt = − Σ_{i=0}^{2} (∂a_i/∂α) ∫_0^∞ y^{(2−i)} u dt,

a_0 ∫_0^∞ üu̇ dt + a_1 ∫_0^∞ u̇² dt + a_2 ∫_0^∞ uu̇ dt = − Σ_{i=0}^{2} (∂a_i/∂α) ∫_0^∞ y^{(2−i)} u̇ dt,

a_0 ∫_0^∞ ü² dt + a_1 ∫_0^∞ üu̇ dt + a_2 ∫_0^∞ uü dt = − Σ_{i=0}^{2} (∂a_i/∂α) ∫_0^∞ y^{(2−i)} ü dt.

For the individual integrals we find

∫_0^∞ uü dt = −J_2^{(1)},   ∫_0^∞ uu̇ dt = 0,   ∫_0^∞ u̇ü dt = 0.

With due account for the above relations, we obtain the following algebraic system:

−a_0 J_2^{(1)} + a_2 J_2 = − Σ_{i=0}^{2} (∂a_i/∂α) ∫_0^∞ y^{(2−i)} u dt,  (5.114)

a_1 J_2^{(1)} = − Σ_{i=0}^{2} (∂a_i/∂α) ∫_0^∞ y^{(2−i)} u̇ dt,  (5.115)

a_0 J_2^{(2)} − a_2 J_2^{(1)} = − Σ_{i=0}^{2} (∂a_i/∂α) ∫_0^∞ y^{(2−i)} ü dt,  (5.116)

which yields the desired estimates J_2, J_2^{(1)}, and J_2^{(2)}. The integrals in the right sides are obtained by (5.105) and (5.106). The determinant of the system (5.114)–(5.116), ∆ = a_0 a_1 a_2, is not zero.

Consider a system described by the following two first-order equations:

ẏ_1 = a_{11} y_1 + a_{12} y_2,   ẏ_2 = a_{21} y_1 + a_{22} y_2.

They correspond to the sensitivity equations

u̇_1 = a_{11} u_1 + a_{12} u_2 + f_1,   u̇_2 = a_{21} u_1 + a_{22} u_2 + f_2,

where

f_i = (∂a_{i1}/∂α) y_1 + (∂a_{i2}/∂α) y_2,   i = 1, 2.

Multiplying both of them by u_j (j = 1, 2) and integrating the result from 0 to ∞, we obtain

∫_0^∞ u̇_1 u_1 dt = a_{11} ∫_0^∞ u_1² dt + a_{12} ∫_0^∞ u_1 u_2 dt + ∫_0^∞ f_1 u_1 dt,  (5.117)

∫_0^∞ u̇_2 u_1 dt = a_{21} ∫_0^∞ u_1² dt + a_{22} ∫_0^∞ u_1 u_2 dt + ∫_0^∞ f_2 u_1 dt,  (5.118)

∫_0^∞ u̇_1 u_2 dt = a_{11} ∫_0^∞ u_1 u_2 dt + a_{12} ∫_0^∞ u_2² dt + ∫_0^∞ f_1 u_2 dt,  (5.119)

∫_0^∞ u̇_2 u_2 dt = a_{21} ∫_0^∞ u_1 u_2 dt + a_{22} ∫_0^∞ u_2² dt + ∫_0^∞ f_2 u_2 dt.  (5.120)

In this system,

∫_0^∞ u_i u̇_i dt = 0,   i = 1, 2.

Moreover,

∫_0^∞ u̇_1 u_2 dt = [u_1 u_2]_0^∞ − ∫_0^∞ u_1 u̇_2 dt.

Hence,

∫_0^∞ u̇_1 u_2 dt + ∫_0^∞ u_1 u̇_2 dt = 0.  (5.121)

The algebraic equations (5.117)–(5.121) contain five unknowns:

∫_0^∞ u_1² dt,   ∫_0^∞ u_2² dt,   ∫_0^∞ u_1 u_2 dt,   ∫_0^∞ u̇_1 u_2 dt,   ∫_0^∞ u_1 u̇_2 dt.

The determinant of the system is ∆ = 2(a_{12} a_{21} − a_{11} a_{22}). Since the system is stable, the determinant is not zero. It can be easily shown that the techniques for determining integral estimates of sensitivity functions presented above can be generalized to linear systems of any order. To find the estimate J_2, we can also employ the Rayleigh formula [100]:

∫_0^∞ [y(t)]² dt = (1/2π) ∫_{−∞}^{∞} |ψ_y(jω)|² dω,

where ψ_y(jω) is the frequency spectrum of y(t). As regards the sensitivity function u(t), we have

∫_0^∞ u²(t) dt = (1/2π) ∫_{−∞}^{∞} |ψ_u(jω)|² dω.  (5.122)
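A quick numerical check of this identity (a sketch; the signal y(t) = e^{−t} with spectrum 1/(1 + jω) is an illustrative choice, not from the text):

```python
import numpy as np

# Both sides should equal the integral of e^{-2t} over [0, inf), i.e. 1/2.
t = np.linspace(0.0, 40.0, 400_001)
f = np.exp(-2.0 * t)                       # y(t)^2
dt = t[1] - t[0]
time_side = dt * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)

w = np.linspace(-1000.0, 1000.0, 2_000_001)
g = 1.0 / (1.0 + w**2)                     # |psi_y(jw)|^2
dw = w[1] - w[0]
freq_side = (dw * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)) / (2.0 * np.pi)
```

The frequency-side value is slightly below 1/2 only because of the finite truncation of the ω axis.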

5.5.4  Envelope of Sensitivity Function

For many systems the sensitivity functions are oscillatory processes. This fact makes comparison of various sensitivity functions much more difficult. Envelopes of sensitivity functions have some advantages in this respect [107]. Many processes in real systems are described by the following equation of an oscillatory unit:

T² ÿ + 2Tξ ẏ + y = kx.

The weight function of such a system is given by

h(t) = [k/(T √(1 − ξ²))] e^{−ξt/T} sin((t/T) √(1 − ξ²)).

The sensitivity functions of the weight function with respect to the parameters T and ξ have the form

u_T(t) = ∂h(t)/∂ln T = [k/(T² √(1 − ξ²))] e^{−ξt/T} [ (ξt − T) sin φ − t √(1 − ξ²) cos φ ],

u_ξ(t) = ∂h(t)/∂ln ξ = kψ e^{−ξt/T} [ (ξ/(1 − ξ²) − t/T) sin φ − tψ cos φ ],

where

φ = (t/T) √(1 − ξ²),   ψ = ξ/(T √(1 − ξ²)).

The envelope of an oscillatory sensitivity function is the amplitude of its oscillations. Denote the envelopes of the sensitivity functions u_T(t) and u_ξ(t) by A_T(t) and A_ξ(t), respectively. Then,

A_T(t) = [k/(T² √(1 − ξ²))] e^{−ξt/T} √( (ξt − T)² + t² (1 − ξ²) ),  (5.123)

A_ξ(t) = [kξ/(T √(1 − ξ²))] e^{−ξt/T} √( (ξ/(1 − ξ²) − t/T)² + t²ξ²/(T² (1 − ξ²)) ).  (5.124)

Let us obtain the extremal points of the envelopes A_T(t) and A_ξ(t). With this aim in view, we differentiate these functions with respect to the variable t and equate the derivatives to zero. For small values of T² and ξ², we find that the maximum points of both envelopes almost coincide with one another and are equal to

t_max ≈ T/ξ.

In this case, the maximal values of the envelopes are given by

A_T(t_max) = [k/(Tξ)] e^{−1},   A_ξ(t_max) = (k/T) e^{−1},

i.e.,

A_T(t_max)/A_ξ(t_max) = 1/ξ > 1.
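These asymptotic relations can be checked numerically from (5.123) and (5.124) (a sketch with illustrative values and small ξ):

```python
import numpy as np

k, T, xi = 1.0, 1.0, 0.05
w = np.sqrt(1.0 - xi**2)
t = np.linspace(0.0, 100.0, 1_000_001)

# Envelopes (5.123) and (5.124)
A_T = k / (T**2 * w) * np.exp(-xi * t / T) * np.sqrt((xi * t - T)**2 + t**2 * (1 - xi**2))
A_xi = (k * xi / (T * w) * np.exp(-xi * t / T)
        * np.sqrt((xi / (1 - xi**2) - t / T)**2 + t**2 * xi**2 / (T**2 * (1 - xi**2))))

t_max = t[np.argmax(A_T)]          # should be close to T/xi = 20
ratio = A_T.max() / A_xi.max()     # should be close to 1/xi = 20
```

Both the location of the maximum and the ratio of the peak values agree with t_max ≈ T/ξ and A_T/A_ξ ≈ 1/ξ.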

Chapter 6

Sensitivity Invariants of Control Systems

6.1  Sensitivity Invariants of Time Characteristics

6.1.1  Sensitivity Invariants

The form of the mathematical model of the system under consideration determines restrictions on and interconnections between sensitivity functions. These restrictions and connections may have the form of equalities or inequalities and be holonomic or non-holonomic.

DEFINITION 6.1  Connections given by equations that do not include time derivatives of sensitivity functions, i.e., that have the form of algebraic relations, are called holonomic. Otherwise, connections are called non-holonomic.

Differential sensitivity equations are natural non-holonomic restrictions imposed on sensitivity functions. Moreover, there are holonomic restrictions imposed on sensitivity functions of some types. Such connections were first found by C. Belove [7] for electronic RLC-networks. It was found that some sums of sensitivity functions of network system functions (transfer functions, conductivities) are constant values and do not change if the networks are equivalently transformed. Such sums are called sensitivity invariants. In [7], properties of homogeneous functions were used to obtain sensitivity invariants.

DEFINITION 6.2  A function f(x_1, . . . , x_g) is called homogeneous of the v-th order if for an arbitrary µ the following condition holds:

f(µx_1, . . . , µx_g) = µ^v f(x_1, . . . , x_g).  (6.1)

Let us differentiate (6.1) with respect to µ:

(∂f/∂µx_1)(∂µx_1/∂µ) + (∂f/∂µx_2)(∂µx_2/∂µ) + . . . + (∂f/∂µx_g)(∂µx_g/∂µ) = vµ^{v−1} f(x_1, . . . , x_g).

Dividing the last expression by f and letting µ = 1, we obtain

Σ_{i=1}^{g} ∂ln f/∂ln x_i = v = const.  (6.2)

Many characteristics of electronic networks are homogeneous functions with respect to the parameters. As a result, if f is a network characteristic (transfer function, conductivity, and so on) and x_1, . . . , x_g are parameters, from (6.2) it follows that the sum of the logarithmic sensitivity functions over all network parameters is a constant value, i.e., an invariant.

DEFINITION 6.3  In the general case, by sensitivity invariant we mean a functional (algebraic) dependence between sensitivity functions that does not contain state variables and parameters of the initial system.

If the system under consideration is characterized by sensitivity functions u_1(q), u_2(q), . . . , u_m(q), where q is an independent variable (time, frequency, Laplace transform variable, and so on), we denote a sensitivity invariant in the form Ω(u_1, u_2, . . . , u_m, q) = 0. Hereinafter we will distinguish linear and nonlinear invariants.

DEFINITION 6.4  An invariant described by the relation

Σ_{i=1}^{m} γ_i u_i(q) = const   or   Σ_{i=1}^{m} γ_i u_i(q) = φ(q),

where φ(q) is a function depending only on q and γ_i are weight coefficients, is called linear.

DEFINITION 6.5  Invariants described by nonlinear functions Ω are called nonlinear.

In this book we consider systems described by ordinary differential equations or finite algebraic relations. For them the following three types of invariants can exist:

Ω_{1m}(∂y/∂α_1, ∂y/∂α_2, . . . , ∂y/∂α_m, q) = 0,  (6.3)

Ω_{n1}(∂y_1/∂α, ∂y_2/∂α, . . . , ∂y_n/∂α, q) = 0,  (6.4)

Ω_{nm}(∂y_1/∂α_1, ∂y_2/∂α_2, . . . , ∂y_n/∂α_m, q) = 0.  (6.5)

The first dependence contains the sensitivity functions of a single system variable with respect to m parameters, the second one includes the sensitivity functions of n variables with respect to a single parameter, while the third one describes the sensitivity functions of n variables with respect to m parameters. It is noteworthy that, depending on the initial problem and the form of the arguments in (6.3)–(6.5), ordinary as well as semi-logarithmic and logarithmic sensitivity functions can be arguments. Below in this chapter we consider mainly invariants with respect to semi-logarithmic sensitivity functions. The presence of sensitivity invariants adds some specifics to many investigation problems for dynamic systems, including automatic control systems. Thus, sensitivity invariants can restrict independent variation of sensitivity functions in the design of optimal insensitive systems. At the same time, in many cases the use of invariants can simplify sensitivity analysis of control systems.

6.1.2  Existence of Sensitivity Invariants

First, we consider the dynamic system of the general form

ẏ_i = f_i(y_1, . . . , y_n, α_1, . . . , α_m, t),   i = 1, . . . , n,

associated with the sensitivity models

u̇_ij = Σ_{k=1}^{n} (∂f_i/∂y_k) u_kj + ∂f_i/∂ln α_j,   i = 1, . . . , n,   j = 1, . . . , m.  (6.6)

Let there be an invariant:

Ω_{nm}(t, u_{11}, . . . , u_{nm}) = 0.  (6.7)

Then, the following identity holds:

dΩ_{nm}/dt = Σ_{i,j} (∂Ω_{nm}/∂u_ij)(du_ij/dt) + ∂Ω_{nm}/∂t = 0.

Substitute the expression for u̇_ij from (6.6) into the last equation:

Σ_{i,j} (∂Ω_{nm}/∂u_ij) [ Σ_{k=1}^{n} (∂f_i/∂y_k) u_kj + ∂f_i/∂ln α_j ] + ∂Ω_{nm}/∂t = 0.  (6.8)

The obtained relation is a necessary condition for the existence of the invariant (6.7). From (6.8) we can obtain existence conditions for the invariants (6.3) for n = 1 and m > 1:

Σ_{j=1}^{m} (∂Ω_{1m}/∂u_j) [ (∂f/∂y) u_j + ∂f/∂ln α_j ] + ∂Ω_{1m}/∂t = 0  (6.9)

and existence conditions for the invariants (6.4) for n > 1 and m = 1:

Σ_{i=1}^{n} (∂Ω_{n1}/∂u_i) [ Σ_{j=1}^{n} (∂f_i/∂y_j) u_j + ∂f_i/∂ln α ] + ∂Ω_{n1}/∂t = 0.  (6.10)

Next, let a multivariable and multiparameter system be described by the algebraic relations

f_i(y_1, . . . , y_n, α_1, . . . , α_m) = 0,   i = 1, . . . , n,

associated with sensitivity equations of the form

Σ_{k=1}^{n} (∂f_i/∂y_k) u_kj + ∂f_i/∂ln α_j = 0,   j = 1, . . . , m,   i = 1, . . . , n.

Consider the invariant

Ω_{nm}(u_{11}, u_{12}, . . . , u_{nm}) = 0   or   Ω_{nm}(U_1, U_2, . . . , U_m) = 0,  (6.11)

where

U_j^T = [u_1j, u_2j, . . . , u_nj],   j = 1, . . . , m,

are sensitivity vectors. The vector sensitivity function U_j is determined from the system

A U_j = B_j,   j = 1, . . . , m,

where A = ||∂f_i/∂y_j|| is the Jacobi matrix, and

B_j^T = −[∂f_1/∂ln α_j, . . . , ∂f_n/∂ln α_j].

Then, if the matrix A is nonsingular, i.e., the Jacobian det A is not zero, the sensitivity vector is equal to

U_j = A^{−1} B_j.  (6.12)

Substituting (6.12) into (6.11), we obtain the following existence condition for the invariant (6.11):

Ω_{nm}(A^{−1} B_1, A^{−1} B_2, . . . , A^{−1} B_m) = 0.

For a single variable and m parameters this condition assumes the form

Ω_{1m}(z_1, z_2, . . . , z_m) = 0,

where

z_j = −(∂f/∂ln α_j) / (∂f/∂y).

For n output variables and a single parameter we have

Ω_{n1}(β_1, β_2, . . . , β_n) = 0,

where

β_i = [D(f_1, . . . , f_n)/D(y_1, . . . , y_{i−1}, α, y_{i+1}, . . . , y_n)] : [D(f_1, . . . , f_n)/D(y_1, y_2, . . . , y_n)].

The above necessary conditions do not provide a universal technique for the construction of sensitivity invariants. It is quite possible that, similarly to the construction of Lyapunov functions for nonlinear systems, the construction of sensitivity invariants calls for special investigation in each specific case. As investigations have shown, the most general existence condition for sensitivity invariants is parametric redundancy in the model of the initial system. As was noted in Section 1.2, in some cases we can transform a complete set of parameters to another one using a functional transformation.

DEFINITION 6.6  If the number of parameters q of the new set is fewer than the number of parameters m of the first one, we have parametric redundancy.

DEFINITION 6.7  The value p = m − q is called the order of parametric redundancy.

It is believed that the number of independent sensitivity invariants is equal to p. Currently, using mostly sufficient conditions, linear invariants of linear systems have been obtained, and sufficient conditions for the existence of linear invariants of nonlinear systems have been formulated. Some of these results are presented below. Since later we deal only with linear invariants, the word “linear” will be omitted for brevity.

6.1.3  Sensitivity Invariants of Single-Input–Single-Output Systems

Consider an automatic single-input–single-output (SISO) control system described by the equation

Σ_{i=0}^{n} a_i y^{(n−i)} = Σ_{i=0}^{g} b_i x^{(g−i)},   y^{(i)}(0) = y_0^{(i)},   i = 0, . . . , n − 1,   n ≥ g.  (6.13)

We assume that the coefficients of this equation depend on parameters α_1, . . . , α_m and are differentiable with respect to them. Then, the following sensitivity equations hold:

Σ_{i=0}^{n} a_i u_j^{(n−i)} = Σ_{i=0}^{g} (∂b_i/∂ln α_j) x^{(g−i)} − Σ_{i=0}^{n} (∂a_i/∂ln α_j) y^{(n−i)},

u_j^{(k)}(0) = 0,   k = 0, . . . , n − 1,   j = 1, . . . , m,  (6.14)

where

u_j(t) = ∂y(t)/∂ln α_j.

Let us take the sum of the sensitivity equations (6.14) over all j:

Σ_{i=0}^{n} a_i Σ_{j=1}^{m} u_j^{(n−i)} = Σ_{i=0}^{g} x^{(g−i)} Σ_{j=1}^{m} ∂b_i/∂ln α_j − Σ_{i=0}^{n} y^{(n−i)} Σ_{j=1}^{m} ∂a_i/∂ln α_j.  (6.15)

Introduce the notation

ψ(t) = Σ_{j=1}^{m} u_j(t).

Assume that the coefficients a_i and b_j are considered as parameters and denoted by α_1, . . . , α_m. Then, Equation (6.15) transforms to the form

Σ_{i=0}^{n} a_i ψ^{(n−i)} = Σ_{i=0}^{g} b_i x^{(g−i)} − Σ_{i=0}^{n} a_i y^{(n−i)},   ψ^{(j)}(0) = 0,   j = 0, . . . , n − 1,

and, due to (6.13),

Σ_{i=0}^{n} a_i ψ^{(n−i)} = 0,   ψ^{(j)}(0) = 0,   j = 0, . . . , n − 1.  (6.16)

The last equation is a homogeneous linear differential equation with zero initial conditions with respect to ψ(t). It is known that the solution of such an equation is identically zero, i.e.,

ψ(t) = Σ_{i=0}^{n} ∂y/∂ln a_i + Σ_{i=0}^{g} ∂y/∂ln b_i = 0.  (6.17)

The above sum is a sensitivity invariant of the transient y(t) with respect to the coefficients of the differential equation of the system. It is noteworthy that the model (6.13) is parametrically redundant, and the order of redundancy is 1. Indeed, a complete set of parameters for Equation (6.13) with fixed initial conditions incorporates the coefficients a_i (i = 0, . . . , n) and b_j (j = 0, . . . , g), i.e., n + g + 2 parameters altogether. Using a simple transformation (division of both sides of the equation by any nonzero coefficient, for example, a_0), we can obtain a model with a complete defining set of n + g + 1 parameters. In many systems, for instance, in RLC-networks, the coefficients a_i and b_j are polylinear functions of the primary parameters α_1, . . . , α_m. Moreover, the following relations hold:

Σ_{j=1}^{m} α_j ∂a_i/∂α_j = q a_i,   i = 0, . . . , n,

Σ_{j=1}^{m} α_j ∂b_i/∂α_j = q b_i,   i = 0, . . . , g,  (6.18)

where q is the number of factors in the terms appearing in the expressions for a_i and b_i. It can be easily shown that the invariant (6.17) can, with due account for (6.18), be transformed to the form

Σ_{i=0}^{n} Σ_{j=1}^{m} (∂y/∂a_i)(∂a_i/∂α_j) α_j + Σ_{i=0}^{g} Σ_{j=1}^{m} (∂y/∂b_i)(∂b_i/∂α_j) α_j = 0.

Hence, the sensitivity invariant with respect to the parameters α_1, . . . , α_m is given by

Σ_{j=1}^{m} ∂y(t)/∂ln α_j = 0.  (6.19)

Example 6.1
For a system with the transfer function w(s) = b_0/(a_0 s + a_1) the weight function is equal to

h(t) = (b_0/a_0) e^{−(a_1/a_0) t}.  (6.20)

For this function, the invariant (6.17) takes the form

∂h(t)/∂ln a_0 + ∂h(t)/∂ln a_1 + ∂h(t)/∂ln b_0 = 0,

because, due to (6.20),

∂h(t)/∂ln a_0 = h(t) [(a_1/a_0) t − 1],   ∂h(t)/∂ln b_0 = h(t),   ∂h(t)/∂ln a_1 = −h(t) (a_1/a_0) t.

The transient of this system is given by

k(t) = (b_0/a_1) (1 − e^{−(a_1/a_0) t}).

The sensitivity invariant of the transient is

∂k(t)/∂ln a_0 + ∂k(t)/∂ln a_1 + ∂k(t)/∂ln b_0 = 0,

where

∂k(t)/∂ln a_0 = −(b_0 t/a_0) e^{−(a_1/a_0) t},   ∂k(t)/∂ln b_0 = k(t),

∂k(t)/∂ln a_1 = −(b_0/a_1) (1 − e^{−(a_1/a_0) t}) + (b_0 t/a_0) e^{−(a_1/a_0) t}.
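The invariant of Example 6.1 is easy to confirm numerically (a sketch with illustrative parameter values):

```python
import math

# Check of the invariant (6.17) for h(t) = (b0/a0) * exp(-(a1/a0) * t):
# the logarithmic sensitivities over a0, a1, b0 must sum to zero.
def h(a0, a1, b0, t):
    return b0 / a0 * math.exp(-a1 / a0 * t)

a0, a1, b0, t, eps = 2.0, 3.0, 4.0, 1.3, 1e-6

def log_sens(p_index):
    """Finite-difference estimate of dh/d(ln p) for parameter p_index."""
    hi_p = [a0, a1, b0]; hi_p[p_index] *= 1 + eps
    lo_p = [a0, a1, b0]; lo_p[p_index] *= 1 - eps
    return (h(*hi_p, t) - h(*lo_p, t)) / (2 * eps)

total = sum(log_sens(i) for i in range(3))
```

The sum of the three logarithmic sensitivities vanishes to numerical precision.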

Example 6.2
Consider a system described by the equation

a_0 ẏ + a_1 y = b_0 x,   x(t) = e^{λt},   y(0) = 0.

The transient in this system has the form

y(t) = [b_0/(a_1 + λa_0)] (e^{λt} − e^{−(a_1/a_0) t}).

The sensitivity functions are given by

∂y/∂ln a_0 = −[λa_0/(a_1 + λa_0)] y − [a_1 b_0 t/(a_0 (a_1 + λa_0))] e^{−(a_1/a_0) t},

∂y/∂ln a_1 = −[a_1/(a_1 + λa_0)] y + [a_1 b_0 t/(a_0 (a_1 + λa_0))] e^{−(a_1/a_0) t},

∂y/∂ln b_0 = y.

It can be easily seen that

∂y/∂ln a_0 + ∂y/∂ln a_1 + ∂y/∂ln b_0 = 0.

6.1.4 Sensitivity Invariants of SISO Nonlinear Systems

Let the system under consideration be described by the differential equation

$$\dot{y} = f(y, t, \alpha_1, \dots, \alpha_m), \qquad y(0) = y_0,$$

which gives the following sensitivity equations:

$$\dot{u}_i - \frac{\partial f}{\partial y}\, u_i = \frac{\partial f}{\partial \ln \alpha_i}, \qquad u_i(0) = 0, \qquad i = 1, \dots, m.$$

Let us add the sensitivity equations with corresponding weight coefficients. Using the notation

$$\psi(t) = \sum_{i=1}^{m} \gamma_i u_i(t),$$

we obtain

$$\dot{\psi} - \frac{\partial f}{\partial y}\, \psi = \sum_{i=1}^{m} \gamma_i \frac{\partial f}{\partial \ln \alpha_i}, \qquad \psi(0) = 0. \tag{6.21}$$

Relation (6.21) is a non-homogeneous ordinary differential equation with zero initial conditions that is linear with respect to the function $\psi(t)$. As is known from the theory of differential equations, for the solution $\psi(t)$ to be identically zero it is sufficient that

$$\sum_{i=1}^{m} \gamma_i \frac{\partial f}{\partial \ln \alpha_i} = 0. \tag{6.22}$$

Therefore, we have the invariant

$$\Omega_m^1 = \psi(t) = \sum_{i=1}^{m} \gamma_i \frac{\partial y}{\partial \ln \alpha_i} = 0. \tag{6.23}$$
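A quick numerical illustration of (6.22)–(6.23), assuming the hypothetical nonlinear plant $\dot y = -\alpha_1 \alpha_2 y^3$ (not from the text): here $\partial f/\partial \ln \alpha_1 = \partial f/\partial \ln \alpha_2 = -\alpha_1 \alpha_2 y^3$, so the weights $\gamma_1 = 1$, $\gamma_2 = -1$ satisfy (6.22), and the invariant predicts $u_1(t) - u_2(t) \equiv 0$. The sketch integrates the plant together with both sensitivity equations by the Euler method and cross-checks $u_1$ against a finite difference:

```python
import math

def simulate(a1, a2, T=1.0, n=20000):
    """Euler integration of y' = -a1*a2*y**3 (hypothetical plant), y(0) = 1,
    together with the sensitivity equations u_i' = (df/dy)*u_i + df/dln(a_i)."""
    dt = T / n
    y, u1, u2 = 1.0, 0.0, 0.0
    for _ in range(n):
        f = -a1 * a2 * y**3
        dfdy = -3.0 * a1 * a2 * y**2
        dflna = f                  # df/dln(a1) = df/dln(a2) = -a1*a2*y**3
        u1 += dt * (dfdy * u1 + dflna)
        u2 += dt * (dfdy * u2 + dflna)
        y += dt * f
    return y, u1, u2

y, u1, u2 = simulate(2.0, 0.5)
psi = 1.0 * u1 - 1.0 * u2          # gamma = (1, -1) satisfies (6.22)

# finite-difference check: u1 should equal dy(T)/dln(a1)
eps = 1e-5
fd = (simulate(2.0 * math.exp(eps), 0.5)[0]
      - simulate(2.0 * math.exp(-eps), 0.5)[0]) / (2 * eps)
print(psi, u1, fd)
```

The invariant $\psi \equiv 0$ here reflects the fact that $f$ depends on $\alpha_1$ and $\alpha_2$ only through their product.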

Let us show that the invariant (6.23) satisfies the condition (6.9). Indeed, substituting the left side of (6.23) into (6.9), we obtain

$$\sum_{j=1}^{m} \gamma_j \left( \frac{\partial f}{\partial y}\, u_j + \frac{\partial f}{\partial \ln \alpha_j} \right) = \frac{\partial f}{\partial y} \sum_{j=1}^{m} \gamma_j u_j + \sum_{j=1}^{m} \gamma_j \frac{\partial f}{\partial \ln \alpha_j}.$$

Then, with due account for (6.22) and (6.23), we find

$$\sum_{j=1}^{m} \gamma_j \left( \frac{\partial f}{\partial y}\, u_j + \frac{\partial f}{\partial \ln \alpha_j} \right) = 0.$$

6.1.5 Sensitivity Invariants of Multivariable Systems

Consider the vector equation

$$\dot{Y} = F(Y, \alpha_1, \dots, \alpha_m, t), \qquad Y(0) = Y_0, \tag{6.24}$$

which leads to the following sensitivity equations:

$$\dot{U}_i = \frac{\partial F}{\partial Y}\, U_i + \frac{\partial F}{\partial \ln \alpha_i}, \qquad U_i(0) = 0, \tag{6.25}$$

where

$$U_i = \frac{\partial Y}{\partial \ln \alpha_i}, \qquad i = 1, \dots, m, \qquad \frac{\partial F}{\partial Y} = \left\| \frac{\partial f_i}{\partial y_j} \right\|.$$

Summation of Equations (6.25) yields

$$\sum_{i=1}^{m} \gamma_i \dot{U}_i = \frac{\partial F}{\partial Y} \sum_{i=1}^{m} \gamma_i U_i + \sum_{i=1}^{m} \gamma_i \frac{\partial F}{\partial \ln \alpha_i}, \qquad \sum_{i=1}^{m} \gamma_i U_i(0) = 0, \tag{6.26}$$

or

$$\dot{\Psi} - \frac{\partial F}{\partial Y}\, \Psi = \sum_{i=1}^{m} \gamma_i \frac{\partial F}{\partial \ln \alpha_i}, \qquad \Psi(0) = 0, \tag{6.27}$$

where

$$\Psi = \sum_{i=1}^{m} \gamma_i U_i = \begin{pmatrix} \psi_1 \\ \vdots \\ \psi_n \end{pmatrix}, \qquad \psi_k = \sum_{i=1}^{m} \gamma_i u_{ki}, \qquad k = 1, \dots, n.$$

Introduce the notions of vector and scalar invariants for the system (6.24).

DEFINITION 6.8 An invariant of the form

$$\sum_{i=1}^{m} \gamma_i U_i = 0 \tag{6.28}$$

will be called a vector invariant.

DEFINITION 6.9 The invariant for a component of the vector $\Psi$ is called scalar and has the form

$$\sum_{i=1}^{m} \gamma_i u_{ki} = 0.$$

As follows from (6.26), a sufficient condition for existence of the vector invariant reduces to the requirement that the right side of the differential equation with respect to $\Psi$ be identically zero:

$$\sum_{i=1}^{m} \gamma_i \frac{\partial F}{\partial \ln \alpha_i} = 0. \tag{6.29}$$

The necessary existence condition (6.8) for the sensitivity invariant (6.28) can be written in the following vector form:

$$\sum_{i=1}^{m} \frac{\partial \Omega_m^n}{\partial U_i} \left( \frac{\partial F}{\partial Y}\, U_i + \frac{\partial F}{\partial \alpha_i} \right) = 0.$$

Then, it can easily be shown that the linear invariant (6.28) satisfies this condition. Represent Equation (6.26) in the form

$$\dot{\Psi} - A(t)\, \Psi = B(t), \qquad \Psi(0) = 0,$$

where

$$A(t) = \frac{\partial F}{\partial Y}, \qquad B(t) = \sum_{i=1}^{m} \gamma_i \frac{\partial F}{\partial \ln \alpha_i}.$$

Thus, we obtain a system of differential equations with variable coefficients. The solution of this equation is defined by the formula

$$\Psi(t) = \int_0^t \Phi(t)\,\Phi^{-1}(\tau)\, B(\tau)\, d\tau, \tag{6.30}$$

where $\Phi(t)$ is the transition matrix of the system. From the solution (6.30) it follows that the equality

$$G(t) = \int_0^t \Phi(t)\,\Phi^{-1}(\tau)\, B(\tau)\, d\tau = 0, \qquad t > 0, \tag{6.31}$$

is a necessary existence condition for the vector invariant. As a special case, Equation (6.29) follows from this result. For the existence of the scalar invariant $\psi_k(t)$ it is necessary that the corresponding component of the vector $G(t)$ be identically zero:

$$G_k(t) = 0. \tag{6.32}$$

An expanded expression for the left side of the condition (6.31) can be found either directly from (6.31) or with the help of special methods of invariance investigation for systems with variable parameters [57], for instance, using the method of equating operators. Let the initial system (6.24) be linear with constant coefficients, so that

$$\dot{Y} = A\, Y + R(t), \qquad Y(0) = Y_0,$$

where $A = \|a_{ij}\|$. Then, we have

$$\dot{\Psi} - A\, \Psi = \sum_{i=1}^{m} \gamma_i \frac{\partial A}{\partial \ln \alpha_i}\, Y$$

or, in operator form,

$$(sE - A)\, \Psi = \sum_{i=1}^{m} \gamma_i \frac{\partial A}{\partial \ln \alpha_i}\, Y. \tag{6.33}$$

Then, the existence condition for the invariant

$$\psi_j = \sum_{i=1}^{m} \gamma_i u_{ji} = 0$$

reduces to formal equality of the following determinant to zero:

$$\Delta_j = \begin{vmatrix} s - a_{11} & -a_{12} & \dots & -a_{1,j-1} & b_1 & -a_{1,j+1} & \dots & -a_{1n} \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ -a_{n1} & -a_{n2} & \dots & -a_{n,j-1} & b_n & -a_{n,j+1} & \dots & s - a_{nn} \end{vmatrix} = 0, \tag{6.34}$$

where

$$b_g = \sum_{i=1}^{m} \sum_{k=1}^{n} \gamma_i \frac{\partial a_{gk}}{\partial \ln \alpha_i}\, y_k, \qquad g = 1, 2, \dots, n.$$

Let the parameters $\alpha_1, \dots, \alpha_m$ be the elements of the matrix $A$ and $\gamma_i = 1$ ($i = 1, \dots, m$), where $m = n^2$. Then, it is easy to verify that

$$\sum_{i=1}^{m} \gamma_i \frac{\partial A}{\partial \ln \alpha_i} = A$$

and

$$\sum_{i=1}^{m} \sum_{k=1}^{n} \gamma_i \frac{\partial a_{qk}}{\partial \ln \alpha_i}\, y_k = \sum_{k=1}^{n} a_{qk}\, y_k.$$

Therefore, the condition (6.34) takes the form

$$\Delta_j = \begin{vmatrix} s - a_{11} & -a_{12} & \dots & -a_{1,j-1} & d_1 & -a_{1,j+1} & \dots & -a_{1n} \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ -a_{n1} & -a_{n2} & \dots & -a_{n,j-1} & d_n & -a_{n,j+1} & \dots & s - a_{nn} \end{vmatrix} = 0, \tag{6.35}$$

where

$$d_g = \sum_{k=1}^{n} a_{gk}\, y_k, \qquad g = 1, 2, \dots, n.$$

It is noteworthy that the causality conditions are often very strict for the equalities (6.34) and (6.35). Consider the following linear system with constant coefficients: $\dot{Y} = A\,Y$, $Y(0) = Y_0$, where $A$ is a matrix of simple structure. Then, there exists a decomposed system $\dot{y}_i = \lambda_i y_i$ ($i = 1, \dots, n$), which is associated with the sensitivity equations

$$\dot{u}_{ik} = \lambda_i u_{ik} + \frac{\partial \lambda_i}{\partial \ln \alpha_k}\, y_i, \qquad i = 1, \dots, n, \quad k = 1, \dots, m,$$

where $u_{ik}(0) = 0$. After summation over all $k$'s, we obtain

$$\dot{\psi}_i = \lambda_i \psi_i + y_i \sum_{k=1}^{m} \frac{\partial \lambda_i}{\partial \ln \alpha_k}, \qquad \psi_i(0) = 0, \quad i = 1, \dots, n, \tag{6.36}$$

where

$$\psi_i = \sum_{k=1}^{m} u_{ik} = \sum_{k=1}^{m} \frac{\partial y_i}{\partial \ln \alpha_k}.$$

Equation (6.36) has a specific feature in that it combines two invariants, namely, the invariant of root sensitivity and that of sensitivity of time characteristics. From this equation it follows that if

$$\sum_{k=1}^{m} \frac{\partial \lambda_i}{\partial \ln \alpha_k} = 0,$$

i.e., if there is an invariant of root sensitivity, then

$$\sum_{k=1}^{m} \frac{\partial y_i}{\partial \ln \alpha_k} = 0,$$

i.e., there is a sensitivity invariant for transients.

6.1.6 Sensitivity Invariants of Weight Function

Now consider the sensitivity of the weight function for the system described by Equation (6.13). Assume that its transfer function is free of multiple poles. Then, the weight function is given by [112]

$$h(t) = \sum_{i=1}^{n} c_i e^{p_i t}, \tag{6.37}$$

where

$$c_i = k\, \frac{\displaystyle\prod_{j=1}^{g} (p_i - q_j)}{\displaystyle\prod_{\substack{j=1 \\ j \neq i}}^{n} (p_i - p_j)}, \qquad k = \frac{b_0}{a_0},$$

and $p_i$ and $q_i$ denote the poles and zeros of the transfer function, respectively. Differentiating Equation (6.37) with respect to the parameter $\alpha_k$, we obtain

$$\frac{\partial h(t)}{\partial \ln \alpha_k} = \sum_{i=1}^{n} c_i e^{p_i t} \left( \frac{\partial \ln c_i}{\partial \ln \alpha_k} + t\, \frac{\partial p_i}{\partial \ln \alpha_k} \right), \qquad k = 1, \dots, m.$$

With due account of the expression for $c_i$, after summation over all $k$'s we obtain

$$\sum_{k=1}^{m} \frac{\partial h(t)}{\partial \ln \alpha_k} = \sum_{i=1}^{n} c_i e^{p_i t} \Biggl[ t \sum_{k=1}^{m} \frac{\partial p_i}{\partial \ln \alpha_k} + \sum_{k=1}^{m} \frac{\partial \ln k}{\partial \ln \alpha_k} + \sum_{j=1}^{g} \frac{1}{p_i - q_j} \left( \sum_{k=1}^{m} \frac{\partial p_i}{\partial \ln \alpha_k} - \sum_{k=1}^{m} \frac{\partial q_j}{\partial \ln \alpha_k} \right) - \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{1}{p_i - p_j} \left( \sum_{k=1}^{m} \frac{\partial p_i}{\partial \ln \alpha_k} - \sum_{k=1}^{m} \frac{\partial p_j}{\partial \ln \alpha_k} \right) \Biggr]. \tag{6.38}$$

Let us analyze the sums $\sum_{k=1}^{m}$. First, we consider the case when the coefficients $a_i$ and $b_j$ of the transfer function are taken as the parameters. Then,

$$\sum_{k=1}^{m} \frac{\partial p_i}{\partial \ln \alpha_k} = \sum_{j=0}^{n} \frac{\partial p_i}{\partial \ln a_j} + \sum_{j=0}^{g} \frac{\partial p_i}{\partial \ln b_j}, \qquad \sum_{k=1}^{m} \frac{\partial q_i}{\partial \ln \alpha_k} = \sum_{j=0}^{n} \frac{\partial q_i}{\partial \ln a_j} + \sum_{j=0}^{g} \frac{\partial q_i}{\partial \ln b_j}.$$

It is easily seen that

$$\frac{\partial p_i}{\partial \ln a_j} = -\frac{a_j\, p_i^{n-j}}{\left.\dfrac{\partial a(s)}{\partial s}\right|_{s=p_i}}, \qquad \frac{\partial q_i}{\partial \ln b_j} = -\frac{b_j\, q_i^{g-j}}{\left.\dfrac{\partial b(s)}{\partial s}\right|_{s=q_i}}.$$

Therefore, there are the following root sensitivity invariants:

$$\sum_{j=0}^{n} \frac{\partial p_i}{\partial \ln a_j} = 0, \qquad \sum_{j=0}^{g} \frac{\partial q_i}{\partial \ln b_j} = 0.$$

As a result, Equation (6.38) takes the form

$$\sum_{j=0}^{n} \frac{\partial h(t)}{\partial \ln a_j} + \sum_{j=0}^{g} \frac{\partial h(t)}{\partial \ln b_j} = \sum_{i=1}^{n} c_i e^{p_i t} \left( \sum_{j=0}^{n} \frac{\partial \ln k}{\partial \ln a_j} + \sum_{j=0}^{g} \frac{\partial \ln k}{\partial \ln b_j} \right) = h(t) \left( \frac{\partial \ln k}{\partial \ln a_0} + \frac{\partial \ln k}{\partial \ln b_0} \right) = 0.$$

Hence, the sensitivity invariant of the weight function with respect to the coefficients of the transfer function is given by

$$\sum_{j=0}^{n} \frac{\partial h(t)}{\partial \ln a_j} + \sum_{j=0}^{g} \frac{\partial h(t)}{\partial \ln b_j} = 0.$$

If (6.18) holds, there is also the invariant

$$\sum_{j=1}^{m} \frac{\partial h(t)}{\partial \ln \alpha_j} = 0.$$
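The root sensitivity invariant $\sum_{j} \partial p_i / \partial \ln a_j = 0$ used above is easy to confirm numerically: recompute each pole after scaling one denominator coefficient $a_j$ by $e^{\pm\varepsilon}$ and sum the central differences over $j$. A sketch for an arbitrary quadratic denominator (values chosen for illustration only):

```python
import numpy as np

a = np.array([2.0, 7.0, 3.0])          # denominator a0*s^2 + a1*s + a2, roots -0.5 and -3
p0 = np.sort(np.roots(a))              # reference poles

def pole_sens_sum(a, eps=1e-7):
    """Sum over j of dp_i/dln(a_j) for every pole p_i, by central differences."""
    total = np.zeros_like(p0)
    for j in range(len(a)):
        up, dn = a.copy(), a.copy()
        up[j] *= np.exp(eps)
        dn[j] *= np.exp(-eps)
        total += (np.sort(np.roots(up)) - np.sort(np.roots(dn))) / (2 * eps)
    return total

print(pole_sens_sum(a))   # ≈ [0, 0]: one vanishing sum per pole
```

Sorting the roots keeps the poles matched across the perturbed evaluations, which is valid here because the roots are real and well separated.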

6.2 Root and Transfer Function Sensitivity Invariants

6.2.1 Root Sensitivity Invariants

Consider the polynomial

$$D(s) = \sum_{i=0}^{n} a_i s^{n-i}, \qquad a_0 \neq 0.$$

Its roots $p_1, \dots, p_n$ are zeros or poles of the system transfer function, and depend on the parameters $\alpha_1, \dots, \alpha_m$, so that $p_i = p_i(\alpha_1, \dots, \alpha_m)$, $i = 1, \dots, n$. The sum of the roots of the polynomial $D(s)$ is equal to

$$\sum_{i=1}^{n} p_i = -\frac{a_1}{a_0}. \tag{6.39}$$

Let us differentiate the last equation with respect to a parameter $\alpha_k$:

$$\sum_{i=1}^{n} \frac{\partial p_i}{\partial \alpha_k} = -\frac{\dfrac{\partial a_1}{\partial \alpha_k}\, a_0 - a_1\, \dfrac{\partial a_0}{\partial \alpha_k}}{a_0^2}. \tag{6.40}$$

Assume that the coefficients $a_0$ and $a_1$, or at least their ratio $a_1/a_0$, are independent of the parameter $\alpha_k$, so that

$$\frac{\partial a_1}{\partial \alpha_k} = 0, \qquad \frac{\partial a_0}{\partial \alpha_k} = 0 \qquad \left(\text{or} \quad \frac{\partial}{\partial \alpha_k}\, \frac{a_1}{a_0} = 0\right).$$

Then,

$$\sum_{i=1}^{n} \frac{\partial p_i}{\partial \alpha_k} = 0 \qquad \text{or} \qquad \sum_{i=1}^{n} \frac{\partial p_i}{\partial \ln \alpha_k} = 0. \tag{6.41}$$

Thus, the sum of the sensitivity coefficients of the roots of the polynomial $D(s)$ with respect to the parameter $\alpha_k$ is zero, i.e., it is an invariant. Let the coefficients $a_0, a_1, \dots, a_n$ be polylinear functions of the parameters $\alpha_1, \dots, \alpha_m$. Then, the following relations of the form (6.18) hold:

$$\sum_{k=1}^{m} \alpha_k \frac{\partial a_i}{\partial \alpha_k} = q\, a_i, \qquad i = 0, \dots, n. \tag{6.42}$$

These relations are, in fact, sufficient conditions for homogeneity of the function $D(s)$. Multiplying both sides of Equation (6.40) by $\alpha_k$ and taking the sum over all $k$'s, we obtain

$$\sum_{k=1}^{m} \sum_{i=1}^{n} \frac{\partial p_i}{\partial \ln \alpha_k} = -\frac{1}{a_0^2} \left( a_0 \sum_{k=1}^{m} \frac{\partial a_1}{\partial \ln \alpha_k} - a_1 \sum_{k=1}^{m} \frac{\partial a_0}{\partial \ln \alpha_k} \right).$$

With due account for (6.42), from the last relation we find

$$\sum_{k=1}^{m} \sum_{i=1}^{n} \frac{\partial p_i}{\partial \ln \alpha_k} = -\frac{q}{a_0^2} \left( a_0 a_1 - a_1 a_0 \right) = 0.$$

Thus, we have the invariant

$$\sum_{k=1}^{m} \sum_{i=1}^{n} \frac{\partial p_i}{\partial \ln \alpha_k} = 0. \tag{6.43}$$

Consider the sum

$$\sum_{k=1}^{m} \frac{\partial p_i}{\partial \ln \alpha_k}$$

for the case of pairwise different roots. Differentiating the identity $D(p_i) = 0$ with respect to the parameter $\alpha_k$, we obtain

$$\frac{\partial D}{\partial p_i}\, \frac{\partial p_i}{\partial \alpha_k} + \sum_{j=0}^{n} \frac{\partial D}{\partial a_j}\, \frac{\partial a_j}{\partial \alpha_k} = 0.$$

Hence,

$$\frac{\partial p_i}{\partial \ln \alpha_k} = -\frac{\displaystyle\sum_{j=0}^{n} p_i^{n-j}\, \frac{\partial a_j}{\partial \ln \alpha_k}}{\dfrac{\partial D}{\partial p_i}}. \tag{6.44}$$

Then,

$$\sum_{k=1}^{m} \frac{\partial p_i}{\partial \ln \alpha_k} = -\frac{\displaystyle\sum_{j=0}^{n} p_i^{n-j} \sum_{k=1}^{m} \frac{\partial a_j}{\partial \ln \alpha_k}}{\dfrac{\partial D}{\partial p_i}},$$

or, with account for (6.42),

$$\sum_{k=1}^{m} \frac{\partial p_i}{\partial \ln \alpha_k} = -\frac{q \displaystyle\sum_{j=0}^{n} p_i^{n-j}\, a_j}{\dfrac{\partial D}{\partial p_i}} = -\frac{q\, D(p_i)}{\dfrac{\partial D}{\partial p_i}} = 0.$$

As a result, we have the invariant

$$\sum_{k=1}^{m} \frac{\partial p_i}{\partial \ln \alpha_k} = 0. \tag{6.45}$$

Comparing Equations (6.43) and (6.45), we find that the invariant (6.43) follows from the latter. Next, let us consider the product of the roots

$$P = \prod_{i=1}^{n} p_i.$$

It is known that

$$\prod_{i=1}^{n} p_i = (-1)^n a_n.$$

Then,

$$\frac{\partial P}{\partial \alpha} = \sum_{i=1}^{n} \frac{\partial p_i}{\partial \alpha} \prod_{\substack{j=1 \\ j \neq i}}^{n} p_j = (-1)^n \frac{\partial a_n}{\partial \alpha}$$

or

$$\frac{\partial \ln P}{\partial \alpha} = \sum_{i=1}^{n} \frac{\partial \ln p_i}{\partial \alpha} = \frac{\partial \ln a_n}{\partial \alpha}.$$

Thus, we have the invariant

$$\sum_{i=1}^{n} \frac{\partial \ln p_i}{\partial \ln \alpha} = \frac{\partial \ln a_n}{\partial \ln \alpha}. \tag{6.46}$$

Finally, we notice that the conditions (6.42) yield yet another invariant:

$$\sum_{k=1}^{m} \frac{\partial \ln a_i}{\partial \ln \alpha_k} = q, \qquad i = 0, \dots, n.$$

Indeed,

$$\sum_{k=1}^{m} \frac{\partial \ln a_i}{\partial \ln \alpha_k} = \frac{1}{a_i} \sum_{k=1}^{m} \alpha_k \frac{\partial a_i}{\partial \alpha_k} = q = \text{const}.$$
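A numerical spot-check of (6.46), assuming an arbitrary monic test cubic whose roots depend on a single parameter $\alpha$ (roots $\alpha$, $2\alpha$, $3$, so that $\sum_i \partial \ln p_i / \partial \ln \alpha = 2$ and $a_n = -6\alpha^2$):

```python
import numpy as np

def coeffs(alpha):
    # monic D(s) with roots alpha, 2*alpha, 3 (hypothetical test polynomial)
    return np.poly([alpha, 2.0 * alpha, 3.0])

eps, alpha = 1e-6, 1.0
r_up = np.sort(np.roots(coeffs(alpha * np.exp(eps))).real)
r_dn = np.sort(np.roots(coeffs(alpha * np.exp(-eps))).real)
lhs = np.sum(np.log(r_up) - np.log(r_dn)) / (2 * eps)        # sum_i dln(p_i)/dln(alpha)
an_up = coeffs(alpha * np.exp(eps))[-1]
an_dn = coeffs(alpha * np.exp(-eps))[-1]
rhs = (np.log(abs(an_up)) - np.log(abs(an_dn))) / (2 * eps)  # dln(a_n)/dln(alpha)
print(lhs, rhs)   # both ≈ 2
```

The fixed root at $s = 3$ contributes nothing, while each of the two parameter-proportional roots contributes $1$, matching the logarithmic sensitivity of $a_n \propto \alpha^2$.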

6.2.2 Sensitivity Invariants of Transfer Functions

Consider the transfer function

$$w(s) = \frac{b(s)}{a(s)} = \frac{\displaystyle\sum_{i=0}^{g} b_i s^{g-i}}{\displaystyle\sum_{i=0}^{n} a_i s^{n-i}}, \qquad g \leq n, \tag{6.47}$$

whose coefficients $a_i$ and $b_j$ depend on the parameters $\alpha_1, \dots, \alpha_m$. Below we show that if the conditions (6.18) hold, there is the invariant

$$\sum_{k=1}^{m} \frac{\partial \ln w(s)}{\partial \ln \alpha_k} = \text{const}. \tag{6.48}$$

Indeed,

$$\frac{\partial \ln w(s)}{\partial \ln \alpha_k} = \frac{\alpha_k}{w(s)} \left( \sum_{i=0}^{n} \frac{\partial w(s)}{\partial a_i} \frac{\partial a_i}{\partial \alpha_k} + \sum_{i=0}^{g} \frac{\partial w(s)}{\partial b_i} \frac{\partial b_i}{\partial \alpha_k} \right) = \frac{\alpha_k}{b(s)} \sum_{i=0}^{g} s^{g-i} \frac{\partial b_i}{\partial \alpha_k} - \frac{\alpha_k}{a(s)} \sum_{i=0}^{n} s^{n-i} \frac{\partial a_i}{\partial \alpha_k}.$$

Therefore,

$$\sum_{k=1}^{m} \frac{\partial \ln w(s)}{\partial \ln \alpha_k} = \sum_{i=0}^{g} \frac{s^{g-i}}{b(s)} \sum_{k=1}^{m} \alpha_k \frac{\partial b_i}{\partial \alpha_k} - \sum_{i=0}^{n} \frac{s^{n-i}}{a(s)} \sum_{k=1}^{m} \alpha_k \frac{\partial a_i}{\partial \alpha_k}.$$

Using (6.18) for the coefficients $a_i$ and $b_j$, we find

$$\sum_{k=1}^{m} \frac{\partial \ln w(s)}{\partial \ln \alpha_k} = \sum_{i=0}^{g} \frac{s^{g-i} b_i}{b(s)}\, q - \sum_{i=0}^{n} \frac{s^{n-i} a_i}{a(s)}\, q = q - q = 0.$$

To obtain this invariant, we can also employ the following relation between the sensitivity of the transfer function and that of its zeros and poles, given in Section 5.2:

$$\frac{\partial \ln w(s)}{\partial \alpha_j} = \frac{\partial \ln k}{\partial \alpha_j} - \sum_{i=1}^{g} \frac{1}{s - z_i}\, \frac{\partial z_i}{\partial \alpha_j} + \sum_{i=1}^{n} \frac{1}{s - p_i}\, \frac{\partial p_i}{\partial \alpha_j}.$$

Multiplying both sides of this equation by $\alpha_j$ and summing over $j$, we obtain

$$\sum_{j=1}^{m} \frac{\partial \ln w(s)}{\partial \ln \alpha_j} = \sum_{j=1}^{m} \frac{\partial \ln k}{\partial \ln \alpha_j} - \sum_{i=1}^{g} \frac{1}{s - z_i} \sum_{j=1}^{m} \alpha_j \frac{\partial z_i}{\partial \alpha_j} + \sum_{i=1}^{n} \frac{1}{s - p_i} \sum_{j=1}^{m} \alpha_j \frac{\partial p_i}{\partial \alpha_j}.$$

Hence, with account for the invariants (6.45),

$$\sum_{j=1}^{m} \frac{\partial \ln w(s)}{\partial \ln \alpha_j} = \sum_{j=1}^{m} \frac{\partial \ln k}{\partial \ln \alpha_j} = \text{const}. \tag{6.49}$$

Example 6.3

Let the transfer function $w(s)$ have the form

$$w(s) = \frac{1}{1 + Ts}, \tag{6.50}$$

where $T = RC$. Represent it in the form

$$w(s) = \frac{h}{h + Cs},$$

where $h = 1/R$ is the conductivity. Consider the sum

$$Z(s) = \frac{\partial \ln w(s)}{\partial \ln C} + \frac{\partial \ln w(s)}{\partial \ln h}.$$

Since

$$\frac{\partial \ln w(s)}{\partial \ln C} = -\frac{Cs}{h + Cs}, \qquad \frac{\partial \ln w(s)}{\partial \ln h} = \frac{Cs}{h + Cs},$$

we have $Z(s) = 0$.

Example 6.4

Let us determine the sensitivity invariant for the transfer function of the network shown in Figure 6.1. For the transfer function we have

$$w(s) = \frac{1 + T_1 s}{1 + T_2 s},$$

where $T_1 = R_1 C$ and $T_2 = (R_1 + R_2) C$. Transformation to the form of a function with polylinear coefficients yields

$$w(s) = \frac{h_1 h_2 + C h_2 s}{h_1 h_2 + C (h_1 + h_2) s}.$$

Figure 6.1 Simplest network

It can easily be shown that

$$\frac{\partial \ln w(s)}{\partial \ln C} = -\frac{h_1^2 C s}{(h_1 + Cs)\, l(s)}, \qquad \frac{\partial \ln w(s)}{\partial \ln h_1} = -\frac{C^2 h_1 s^2}{(h_1 + Cs)\, l(s)}, \qquad \frac{\partial \ln w(s)}{\partial \ln h_2} = \frac{C h_1 s}{l(s)},$$

where $l(s) = h_1 h_2 + C (h_1 + h_2) s$. As a result,

$$\frac{\partial \ln w(s)}{\partial \ln C} + \frac{\partial \ln w(s)}{\partial \ln h_1} + \frac{\partial \ln w(s)}{\partial \ln h_2} = 0.$$

Example 6.5

Consider the active network shown in Figure 6.2 with transfer function

$$w(s) = \frac{k}{1 + [C_1 (R_1 + R_2) - (k - 1) C_2 R_2]\, s + R_1 R_2 C_1 C_2 s^2} = \frac{h_1 h_2 k}{h_1 h_2 + [C_1 (h_1 + h_2) - (k - 1) C_2 h_1]\, s + C_1 C_2 s^2},$$

where $h_1 = 1/R_1$ and $h_2 = 1/R_2$.

Figure 6.2 Active network

The transfer function contains a dimensionless parameter $k$. To obtain the invariant (6.49), we need not include the sensitivity with respect to the parameter $k$ in the sums [32, 87, 98]. Thus, we obtain

$$\frac{\partial \ln w(s)}{\partial \ln h_1} = \frac{C_1 (h_2 + C_2 s)\, s}{l(s)}, \qquad \frac{\partial \ln w(s)}{\partial \ln h_2} = \frac{[C_1 - (k - 1) C_2]\, h_1 s + C_1 C_2 s^2}{l(s)},$$

$$\frac{\partial \ln w(s)}{\partial \ln C_1} = -\frac{C_1 [(h_1 + h_2)\, s + C_2 s^2]}{l(s)}, \qquad \frac{\partial \ln w(s)}{\partial \ln C_2} = -\frac{C_2 [(1 - k)\, h_1 s + C_1 s^2]}{l(s)},$$

where $l(s) = h_1 h_2 + [C_1 (h_1 + h_2) - (k - 1) C_2 h_1]\, s + C_1 C_2 s^2$, and

$$\frac{\partial \ln w(s)}{\partial \ln h_1} + \frac{\partial \ln w(s)}{\partial \ln h_2} + \frac{\partial \ln w(s)}{\partial \ln C_1} + \frac{\partial \ln w(s)}{\partial \ln C_2} = 0.$$
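Invariants like those of Examples 6.3–6.5 can be spot-checked numerically at any fixed complex frequency. A minimal sketch for the first-order network of Example 6.3 (parameter values arbitrary):

```python
import cmath
import math

def lnw(s, h, C):
    # ln w(s) for w(s) = h/(h + C*s), Example 6.3
    return cmath.log(h / (h + C * s))

s, h, C, eps = 2j, 0.5, 3.0, 1e-6
d_h = (lnw(s, h * math.exp(eps), C) - lnw(s, h * math.exp(-eps), C)) / (2 * eps)
d_C = (lnw(s, h, C * math.exp(eps)) - lnw(s, h, C * math.exp(-eps))) / (2 * eps)
Z = d_h + d_C
print(Z)   # ≈ 0 + 0j
```

Because the invariant holds identically in $s$, any evaluation point away from the poles works equally well.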

6.3 Sensitivity Invariants of Frequency Responses

6.3.1 First Form of Sensitivity Invariants of Frequency Responses

Let us represent a frequency transfer function $w(j\omega)$ in the form

$$w(j\omega) = \frac{a(\omega) + j b(\omega)}{c(\omega) + j d(\omega)}, \tag{6.51}$$

where

$$\begin{aligned} a(\omega) &= b_g - b_{g-2}\,\omega^2 + b_{g-4}\,\omega^4 - \dots, \\ b(\omega) &= b_{g-1}\,\omega - b_{g-3}\,\omega^3 + b_{g-5}\,\omega^5 - \dots, \\ c(\omega) &= a_n - a_{n-2}\,\omega^2 + a_{n-4}\,\omega^4 - \dots, \\ d(\omega) &= a_{n-1}\,\omega - a_{n-3}\,\omega^3 + a_{n-5}\,\omega^5 - \dots. \end{aligned} \tag{6.52}$$

Then, it can easily be shown that

$$w(\nu a_0, \nu a_1, \dots, \nu a_n, \nu b_0, \nu b_1, \dots, \nu b_g, j\omega) = w(a_0, a_1, \dots, a_n, b_0, b_1, \dots, b_g, j\omega), \tag{6.53}$$

where $\nu$ is an arbitrary nonzero number. Note that relations of the form (6.53) hold also for other frequency responses, namely, the amplitude $A(\omega)$, phase $\phi(\omega)$, real $R(\omega)$, and imaginary $Q(\omega)$ ones. Differentiating (6.53) with respect to $\nu$ yields

$$\sum_{i=0}^{n} \frac{\partial w(j\omega)}{\partial (\nu a_i)}\, \frac{\partial (\nu a_i)}{\partial \nu} + \sum_{i=0}^{g} \frac{\partial w(j\omega)}{\partial (\nu b_i)}\, \frac{\partial (\nu b_i)}{\partial \nu} = 0.$$

Setting $\nu = 1$ in the last equation, we find

$$\sum_{i=0}^{n} \frac{\partial w(j\omega)}{\partial a_i}\, a_i + \sum_{i=0}^{g} \frac{\partial w(j\omega)}{\partial b_i}\, b_i = 0 \tag{6.54}$$

or

$$\sum_{i=0}^{n} \frac{\partial w(j\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial w(j\omega)}{\partial \ln b_i} = 0. \tag{6.55}$$

The last relation can be considered as a sensitivity invariant of the frequency transfer function with respect to the coefficients of the transfer function. This invariant exists for the transfer functions of all linear systems with constant parameters. Consider a system with the frequency transfer function

$$w(j\omega) = \frac{b_1 + j\omega b_0}{a_0 (j\omega)^2 + j\omega a_1 + a_2}. \tag{6.56}$$

Then, we have

$$\frac{\partial w(j\omega)}{\partial \ln b_1} = \frac{b_1}{l(j\omega)}, \qquad \frac{\partial w(j\omega)}{\partial \ln b_0} = \frac{j\omega b_0}{l(j\omega)},$$

$$\frac{\partial w(j\omega)}{\partial \ln a_2} = -\frac{(b_1 + b_0 j\omega)\, a_2}{l^2(j\omega)}, \qquad \frac{\partial w(j\omega)}{\partial \ln a_1} = -\frac{(b_1 + b_0 j\omega)\, j\omega a_1}{l^2(j\omega)}, \qquad \frac{\partial w(j\omega)}{\partial \ln a_0} = -\frac{(b_1 + b_0 j\omega)\, (j\omega)^2 a_0}{l^2(j\omega)},$$

where $l(j\omega) = a_0 (j\omega)^2 + a_1 j\omega + a_2$. It can easily be shown that

$$\frac{\partial w(j\omega)}{\partial \ln b_0} + \frac{\partial w(j\omega)}{\partial \ln b_1} + \frac{\partial w(j\omega)}{\partial \ln a_0} + \frac{\partial w(j\omega)}{\partial \ln a_1} + \frac{\partial w(j\omega)}{\partial \ln a_2} = 0. \tag{6.57}$$

The invariant (6.54) can be written for logarithmic sensitivity functions in the form

$$\sum_{i=0}^{n} \frac{\partial \ln w(j\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial \ln w(j\omega)}{\partial \ln b_i} = 0. \tag{6.58}$$

Let the conditions (6.18) hold for the coefficients of the transfer function. With account for (6.18), we transform the equality (6.54) to the form

$$\sum_{i=0}^{n} \frac{\partial w(j\omega)}{\partial a_i}\, q a_i + \sum_{i=0}^{g} \frac{\partial w(j\omega)}{\partial b_i}\, q b_i = 0, \tag{6.59}$$

or

$$\sum_{i=0}^{n} \sum_{k=1}^{m} \frac{\partial w(j\omega)}{\partial a_i} \frac{\partial a_i}{\partial \alpha_k}\, \alpha_k + \sum_{i=0}^{g} \sum_{k=1}^{m} \frac{\partial w(j\omega)}{\partial b_i} \frac{\partial b_i}{\partial \alpha_k}\, \alpha_k = 0.$$

Hence,

$$\sum_{k=1}^{m} \frac{\partial w(j\omega)}{\partial \ln \alpha_k} = 0. \tag{6.60}$$

Thus, if the conditions (6.53) and (6.18) hold, the sum of the semi-logarithmic sensitivity functions of the frequency transfer function of a linear system with constant parameters is zero. It is noteworthy that the invariants obtained above can also be derived from the invariant of the transfer function (6.48) by the substitution $s = j\omega$.
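The invariant (6.57) is easy to confirm numerically for the system (6.56), again with central differences in the logarithms of the coefficients (numeric values arbitrary):

```python
import math

def w(om, a0, a1, a2, b0, b1):
    # frequency transfer function (6.56)
    s = 1j * om
    return (b1 + s * b0) / (a0 * s**2 + a1 * s + a2)

coeffs = dict(a0=1.0, a1=0.7, a2=2.0, b0=0.4, b1=1.5)
om, eps, total = 1.3, 1e-6, 0.0
for name in coeffs:
    up, dn = dict(coeffs), dict(coeffs)
    up[name] *= math.exp(eps)
    dn[name] *= math.exp(-eps)
    total += (w(om, **up) - w(om, **dn)) / (2 * eps)
print(abs(total))   # ≈ 0
```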

6.3.2 Second Form of Sensitivity Invariants of Frequency Responses

Using the following method, we can find yet another sensitivity invariant. Consider, formally, the frequency $\omega$ as an additional parameter, and assume that it is decreased $\nu$ times. To preserve the values $a(\omega)$, $b(\omega)$, $c(\omega)$, and $d(\omega)$, it is necessary to increase the coefficients $a_i$ ($i = 0, \dots, n$) and $b_j$ ($j = 0, \dots, g$) by $\nu^{n-i}$ and $\nu^{g-j}$ times, respectively. Thus,

$$\begin{aligned} a(\omega) &= b_g - b_{g-2}\,\nu^2 \left(\frac{\omega}{\nu}\right)^2 + b_{g-4}\,\nu^4 \left(\frac{\omega}{\nu}\right)^4 - \dots, \\ b(\omega) &= b_{g-1}\,\nu\, \frac{\omega}{\nu} - b_{g-3}\,\nu^3 \left(\frac{\omega}{\nu}\right)^3 + \dots, \\ c(\omega) &= a_n - a_{n-2}\,\nu^2 \left(\frac{\omega}{\nu}\right)^2 + \dots, \\ d(\omega) &= a_{n-1}\,\nu\, \frac{\omega}{\nu} - a_{n-3}\,\nu^3 \left(\frac{\omega}{\nu}\right)^3 + \dots. \end{aligned}$$

Then,

$$w\!\left(a_0 \nu^n, a_1 \nu^{n-1}, \dots, a_n, b_0 \nu^g, b_1 \nu^{g-1}, \dots, b_g, \frac{j\omega}{\nu}\right) = w(a_0, a_1, \dots, a_n, b_0, b_1, \dots, b_g, j\omega). \tag{6.61}$$

Obviously, Equation (6.61) also holds for the amplitude, phase, real, and imaginary frequency responses. Let us differentiate (6.61) with respect to $\nu$:

$$\sum_{i=0}^{n} \frac{\partial w(j\omega)}{\partial (\nu^{n-i} a_i)}\, (n-i)\,\nu^{n-i-1} a_i + \sum_{i=0}^{g} \frac{\partial w(j\omega)}{\partial (\nu^{g-i} b_i)}\, (g-i)\,\nu^{g-i-1} b_i = \frac{\partial w(j\omega)}{\partial \left(\dfrac{\omega}{\nu}\right)}\, \frac{\omega}{\nu^2}.$$

Setting $\nu = 1$ in the last equation, we find

$$\sum_{i=0}^{n} (n-i)\, \frac{\partial w(j\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} (g-i)\, \frac{\partial w(j\omega)}{\partial \ln b_i} = \omega\, \frac{\partial w(j\omega)}{\partial \omega}. \tag{6.62}$$

Thus, we have obtained yet another sensitivity invariant with respect to the coefficients of the frequency transfer function. The invariants (6.62) differ from those given in Section 6.3.1 by two features. First, they are weighted sums of the sensitivity functions rather than ordinary sums. Second, these sums are not constants, but functions of the variable $\omega$.

Example 6.6

Let us find the invariant for the transfer function (6.56). For this example, Equation (6.62) takes the form

$$2\,\frac{\partial w(j\omega)}{\partial \ln a_0} + \frac{\partial w(j\omega)}{\partial \ln a_1} + \frac{\partial w(j\omega)}{\partial \ln b_0} = \omega\,\frac{\partial w(j\omega)}{\partial \omega}. \tag{6.63}$$

The terms on the left side were derived in the preceding example. For the right side we have

$$\omega\,\frac{\partial w(j\omega)}{\partial \omega} = \frac{j\left(b_0 a_2 - b_1 a_1 + a_0 b_0 \omega^2\right) + 2 a_0 b_1 \omega}{\left(-a_0 \omega^2 + a_2 + j a_1 \omega\right)^2}\,\omega.$$

Substituting the values of the corresponding terms, we find that the equality (6.63) holds. Dividing the left and right sides of Equation (6.62) by $w(j\omega)$, we obtain the invariant for logarithmic sensitivity functions:

$$\sum_{i=0}^{n} (n-i)\, \frac{\partial \ln w(j\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} (g-i)\, \frac{\partial \ln w(j\omega)}{\partial \ln b_i} = \frac{\partial \ln w(j\omega)}{\partial \ln \omega}. \tag{6.64}$$

Next, we assume that

$$\sum_{k=1}^{m} \alpha_k \frac{\partial a_i}{\partial \alpha_k} = (n-i)\, a_i, \quad i = 0, \dots, n, \qquad \sum_{k=1}^{m} \alpha_k \frac{\partial b_i}{\partial \alpha_k} = (g-i)\, b_i, \quad i = 0, \dots, g. \tag{6.65}$$

Then,

$$\sum_{i=0}^{n} \sum_{k=1}^{m} \frac{\partial w(j\omega)}{\partial a_i} \frac{\partial a_i}{\partial \alpha_k}\, \alpha_k + \sum_{i=0}^{g} \sum_{k=1}^{m} \frac{\partial w(j\omega)}{\partial b_i} \frac{\partial b_i}{\partial \alpha_k}\, \alpha_k = \omega\, \frac{\partial w(j\omega)}{\partial \omega}$$

or

$$\sum_{k=1}^{m} \frac{\partial w(j\omega)}{\partial \ln \alpha_k} = \frac{\partial w(j\omega)}{\partial \ln \omega}. \tag{6.66}$$

Example 6.7

Consider the system with transfer function (6.50). It is easily seen that the conditions (6.65) hold for the coefficients of the transfer function. Moreover, the invariant (6.62) is given by

$$C\,\frac{\partial w(j\omega)}{\partial C} = \omega\,\frac{\partial w(j\omega)}{\partial \omega}, \tag{6.67}$$

because $n = 1$ and the terms with the coefficients for $i = n$ and $i = g$ are zero in the sums (6.62). In the invariant (6.67) we have

$$C\,\frac{\partial w(j\omega)}{\partial C} = -\frac{j h \omega C}{(h + j C \omega)^2}, \qquad \omega\,\frac{\partial w(j\omega)}{\partial \omega} = -\frac{j h \omega C}{(h + j C \omega)^2}.$$

The invariants obtained above also hold for the other frequency responses: amplitude, phase, real, and imaginary, because the relations (6.53) and (6.61) hold for them. Obviously, linear combinations of invariants can generate new invariants. Thus, the sum of the invariants (6.55) and (6.62) yields the invariant

$$\sum_{i=0}^{n} (n+1-i)\, \frac{\partial w(j\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} (g+1-i)\, \frac{\partial w(j\omega)}{\partial \ln b_i} = \frac{\partial w(j\omega)}{\partial \ln \omega}, \tag{6.68}$$

and their difference gives the invariant

$$\sum_{i=0}^{n} (n-1-i)\, \frac{\partial w(j\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} (g-1-i)\, \frac{\partial w(j\omega)}{\partial \ln b_i} = \frac{\partial w(j\omega)}{\partial \ln \omega}. \tag{6.69}$$

In general,

$$\sum_{i=0}^{n} (n-i+q)\, \frac{\partial w(j\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} (g-i+q)\, \frac{\partial w(j\omega)}{\partial \ln b_i} = \frac{\partial w(j\omega)}{\partial \ln \omega}, \tag{6.70}$$

where $q = 0, \pm 1, \pm 2, \dots$. For (6.56), the invariant (6.68) has the form

$$3\,\frac{\partial w(j\omega)}{\partial \ln a_0} + 2\,\frac{\partial w(j\omega)}{\partial \ln a_1} + \frac{\partial w(j\omega)}{\partial \ln a_2} + 2\,\frac{\partial w(j\omega)}{\partial \ln b_0} + \frac{\partial w(j\omega)}{\partial \ln b_1} = \frac{\partial w(j\omega)}{\partial \ln \omega}.$$

Using the above expressions for the summands, this can easily be proved. For the same example, the invariant (6.69) has the form

$$\frac{\partial w(j\omega)}{\partial \ln a_0} - \frac{\partial w(j\omega)}{\partial \ln a_2} - \frac{\partial w(j\omega)}{\partial \ln b_1} = \frac{\partial w(j\omega)}{\partial \ln \omega}.$$


Finally, we will show that the invariant (6.62) splits into two partial invariants. Consider the sensitivity functions of the frequency characteristic $w(j\omega)$ with respect to the coefficients of the transfer function:

$$\frac{\partial w(j\omega)}{\partial \ln a_i} = \frac{\partial}{\partial \ln a_i}\, \frac{b(j\omega)}{a(j\omega)} = -\frac{b(j\omega)}{a^2(j\omega)}\, \frac{\partial a(j\omega)}{\partial \ln a_i} = -\frac{w(j\omega)}{a(j\omega)}\, \frac{\partial a(j\omega)}{\partial \ln a_i}, \qquad i = 0, \dots, n, \tag{6.71}$$

$$\frac{\partial w(j\omega)}{\partial \ln b_i} = \frac{1}{a(j\omega)}\, \frac{\partial b(j\omega)}{\partial \ln b_i}, \qquad i = 0, \dots, g. \tag{6.72}$$

The condition (6.61) holds for the functions $b(j\omega)$ and $a(j\omega)$ separately. Then, there are the following sensitivity invariants:

$$\sum_{i=0}^{n} (n-i)\, \frac{\partial a(j\omega)}{\partial \ln a_i} = \frac{\partial a(j\omega)}{\partial \ln \omega}, \tag{6.73}$$

$$\sum_{i=0}^{g} (g-i)\, \frac{\partial b(j\omega)}{\partial \ln b_i} = \frac{\partial b(j\omega)}{\partial \ln \omega}. \tag{6.74}$$

Let us multiply both sides of Equations (6.73) and (6.74) by the factors $-w(j\omega)/a(j\omega)$ and $1/a(j\omega)$, respectively. Then, according to (6.71) and (6.72), we obtain

$$\sum_{i=0}^{n} (n-i)\, \frac{\partial w(j\omega)}{\partial \ln a_i} = -\frac{\partial \ln a(j\omega)}{\partial \ln \omega}\, w(j\omega), \qquad \sum_{i=0}^{g} (g-i)\, \frac{\partial w(j\omega)}{\partial \ln b_i} = \frac{\partial \ln b(j\omega)}{\partial \ln \omega}\, w(j\omega). \tag{6.75}$$

It is easy to see that the sum of the last two invariants is exactly equal to the invariant (6.62).
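Invariant (6.62) itself can be verified numerically by comparing the weighted coefficient sensitivities with a finite difference in $\ln \omega$, again for the system (6.56) with arbitrary values:

```python
import math

def w(om, a0, a1, a2, b0, b1):
    # frequency transfer function (6.56)
    s = 1j * om
    return (b1 + s * b0) / (a0 * s**2 + a1 * s + a2)

c = dict(a0=1.0, a1=0.7, a2=2.0, b0=0.4, b1=1.5)
wts = dict(a0=2, a1=1, a2=0, b0=1, b1=0)   # the (n - i) and (g - i) weights, n = 2, g = 1
om, eps = 1.3, 1e-6

lhs = 0.0
for name, wt in wts.items():
    up, dn = dict(c), dict(c)
    up[name] *= math.exp(eps)
    dn[name] *= math.exp(-eps)
    lhs += wt * (w(om, **up) - w(om, **dn)) / (2 * eps)
rhs = (w(om * math.exp(eps), **c) - w(om * math.exp(-eps), **c)) / (2 * eps)  # dw/dln(omega)
print(abs(lhs - rhs))   # ≈ 0
```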

6.3.3 Relations between Sensitivity Invariants of Time and Frequency Characteristics

For a linear system with transfer function $w(s)$, input signal $x(t)$, and zero initial conditions, the image of the output signal $y(t)$ is equal to $y(s) = w(s)\,x(s)$. In the time domain, the output signal is given by

$$y(t) = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} w(j\omega)\, x(j\omega)\, e^{j\omega t}\, d(j\omega).$$

Assuming that integration with respect to the variable $j\omega$ and differentiation by the parameter $\alpha_k$ are commutative operations, and that $x(t)$ is independent of $\alpha_k$, we have

$$\frac{\partial y(t)}{\partial \alpha_k}\, \alpha_k = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} \frac{\partial w(j\omega)}{\partial \alpha_k}\, \alpha_k\, x(j\omega)\, e^{j\omega t}\, d(j\omega).$$

If the parameters are the coefficients of the transfer function,

$$\frac{\partial y(t)}{\partial a_i}\, a_i = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} \frac{\partial w(j\omega)}{\partial a_i}\, a_i\, x(j\omega)\, e^{j\omega t}\, d(j\omega), \qquad \frac{\partial y(t)}{\partial b_i}\, b_i = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} \frac{\partial w(j\omega)}{\partial b_i}\, b_i\, x(j\omega)\, e^{j\omega t}\, d(j\omega). \tag{6.76}$$

Hence,

$$\sum_{i=0}^{n} \frac{\partial y(t)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial y(t)}{\partial \ln b_i} = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} x(j\omega)\, e^{j\omega t} \left( \sum_{i=0}^{n} \frac{\partial w(j\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial w(j\omega)}{\partial \ln b_i} \right) d(j\omega)$$

and, due to (6.55),

$$\sum_{i=0}^{n} \frac{\partial y(t)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial y(t)}{\partial \ln b_i} = 0. \tag{6.77}$$

If the conditions (6.18) hold, instead of (6.77) we have the following invariant with respect to the parameters $\alpha_1, \dots, \alpha_m$:

$$\sum_{i=1}^{m} \frac{\partial y(t)}{\partial \ln \alpha_i} = 0. \tag{6.78}$$

Multiplying the equations in (6.76) by $(n-i)$ and $(g-i)$, respectively, and summing them up, we can obtain the following sensitivity invariant for the transient:

$$\sum_{i=0}^{n} (n-i)\, \frac{\partial y(t)}{\partial \ln a_i} + \sum_{i=0}^{g} (g-i)\, \frac{\partial y(t)}{\partial \ln b_i} = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} x(j\omega)\, e^{j\omega t}\, \frac{\partial w(j\omega)}{\partial \ln \omega}\, d(j\omega), \tag{6.79}$$

which, under the conditions (6.65), takes the form

$$\sum_{i=1}^{m} \frac{\partial y(t)}{\partial \ln \alpha_i} = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} x(j\omega)\, e^{j\omega t}\, \frac{\partial w(j\omega)}{\partial \ln \omega}\, d(j\omega).$$

It is known that there are analytic relations between the time-domain and frequency-domain characteristics of linear systems. Thus, the weight function can be expressed in terms of the frequency responses [100]:

$$h(t) = \frac{2}{\pi} \int_{0}^{\infty} P(\omega) \cos \omega t\, d\omega, \qquad h(t) = -\frac{2}{\pi} \int_{0}^{\infty} Q(\omega) \sin \omega t\, d\omega.$$

The transient response is given by

$$k(t) = \frac{2}{\pi} \int_{0}^{\infty} \frac{P(\omega)}{\omega}\, \sin \omega t\, d\omega,$$

and so on. Let us employ these relations to find sensitivity invariants of the time-domain characteristics of linear systems. Consider, as a special case, the first of the above relations. Differentiation with respect to a parameter $\alpha_k$ yields

$$\frac{\partial h(t)}{\partial \ln \alpha_k} = \frac{2}{\pi} \int_{0}^{\infty} \frac{\partial P(\omega)}{\partial \ln \alpha_k}\, \cos \omega t\, d\omega.$$

Consider the case when the coefficients of the transfer function are taken as the parameters. Then,

$$\frac{\partial h(t)}{\partial \ln a_i} = \frac{2}{\pi} \int_{0}^{\infty} \frac{\partial P(\omega)}{\partial \ln a_i}\, \cos \omega t\, d\omega, \qquad \frac{\partial h(t)}{\partial \ln b_i} = \frac{2}{\pi} \int_{0}^{\infty} \frac{\partial P(\omega)}{\partial \ln b_i}\, \cos \omega t\, d\omega. \tag{6.80}$$

Summing these expressions up, we obtain

$$\sum_{i=0}^{n} \frac{\partial h(t)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial h(t)}{\partial \ln b_i} = \frac{2}{\pi} \int_{0}^{\infty} \cos \omega t \left[ \sum_{i=0}^{n} \frac{\partial P(\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial P(\omega)}{\partial \ln b_i} \right] d\omega.$$

As was shown in Section 6.3.1, the term in the square brackets is a sensitivity invariant for the frequency response $P(\omega)$ and is identically zero. Therefore,

$$\sum_{i=0}^{n} \frac{\partial h(t)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial h(t)}{\partial \ln b_i} = 0. \tag{6.81}$$

Analogously, we obtain the sensitivity invariant of the transient response:

$$\sum_{i=0}^{n} \frac{\partial k(t)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial k(t)}{\partial \ln b_i} = 0. \tag{6.82}$$

Under the conditions (6.18), from the invariants (6.81) and (6.82) with respect to the coefficients of the transfer function we can find the invariants with respect to the parameters $\alpha_j$ ($j = 1, \dots, m$):

$$\sum_{j=1}^{m} \frac{\partial h(t)}{\partial \ln \alpha_j} = 0, \qquad \sum_{j=1}^{m} \frac{\partial k(t)}{\partial \ln \alpha_j} = 0. \tag{6.83}$$

Next, we multiply the equations in (6.80) by $(n-i)$ and $(g-i)$, respectively. After summation and routine transformations, we obtain yet another sensitivity invariant for the weight function:

$$\sum_{i=0}^{n} (n-i)\, \frac{\partial h(t)}{\partial \ln a_i} + \sum_{i=0}^{g} (g-i)\, \frac{\partial h(t)}{\partial \ln b_i} = \frac{2}{\pi} \int_{0}^{\infty} \frac{\partial P(\omega)}{\partial \ln \omega}\, \cos \omega t\, d\omega.$$

Similarly, for the transient response $k(t)$ we have

$$\sum_{i=0}^{n} (n-i)\, \frac{\partial k(t)}{\partial \ln a_i} + \sum_{i=0}^{g} (g-i)\, \frac{\partial k(t)}{\partial \ln b_i} = \frac{2}{\pi} \int_{0}^{\infty} \frac{\partial P(\omega)}{\partial \ln \omega}\, \frac{\sin \omega t}{\omega}\, d\omega.$$

Under the conditions (6.65), we find

$$\sum_{i=1}^{m} \frac{\partial h(t)}{\partial \ln \alpha_i} = \frac{2}{\pi} \int_{0}^{\infty} \frac{\partial P(\omega)}{\partial \ln \omega}\, \cos \omega t\, d\omega, \qquad \sum_{i=1}^{m} \frac{\partial k(t)}{\partial \ln \alpha_i} = \frac{2}{\pi} \int_{0}^{\infty} \frac{\partial P(\omega)}{\partial \ln \omega}\, \frac{\sin \omega t}{\omega}\, d\omega.$$
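The underlying relation $h(t) = (2/\pi)\int_0^{\infty} P(\omega) \cos \omega t\, d\omega$ is itself easy to check numerically. A sketch for the first-order system $w(s) = 1/(1+Ts)$, whose real frequency response is $P(\omega) = 1/(1+T^2\omega^2)$ and whose weight function is $h(t) = (1/T)\,e^{-t/T}$ (values arbitrary; simple truncated trapezoidal quadrature):

```python
import math

T, t = 0.8, 0.5
P = lambda om: 1.0 / (1.0 + (T * om) ** 2)   # real frequency response of 1/(1 + T*s)

# trapezoidal quadrature of (2/pi) * Int_0^W P(om)*cos(om*t) d om
W, N = 400.0, 400000
dw = W / N
acc = 0.5 * (P(0.0) + P(W) * math.cos(W * t))
for i in range(1, N):
    om = i * dw
    acc += P(om) * math.cos(om * t)
h_num = 2.0 / math.pi * acc * dw
h_exact = math.exp(-t / T) / T
print(h_num, h_exact)   # close agreement
```

The truncation at $\omega = W$ is justified because $P(\omega)$ decays as $1/\omega^2$ and the oscillating tail largely cancels.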

6.4 Sensitivity Invariants of Integral Estimates

6.4.1 First Form of Sensitivity Invariants

Consider the integral estimate

$$I = \int_{0}^{\infty} h^2(t)\, dt = \frac{1}{\pi} \int_{0}^{\infty} A^2(\omega)\, d\omega, \tag{6.84}$$

where $A(\omega)$ is the amplitude frequency response, and $h(t)$ is the weight function. Let us evaluate the sensitivity of the integral estimate (6.84) with respect to deviation of a parameter $\alpha_i$:

$$\frac{\partial I}{\partial \ln \alpha_i} = \alpha_i\, \frac{\partial}{\partial \alpha_i} \left( \frac{1}{\pi} \int_{0}^{\infty} A^2(\omega)\, d\omega \right).$$

Then, assuming that differentiation with respect to the parameter $\alpha_i$ and integration with respect to the variable $\omega$ are commutative operations, we obtain

$$\frac{\partial I}{\partial \ln \alpha_i} = \frac{2}{\pi} \int_{0}^{\infty} A(\omega)\, \frac{\partial A(\omega)}{\partial \ln \alpha_i}\, d\omega. \tag{6.85}$$

Let the coefficients $a_i$ and $b_i$ be considered as the parameters $\alpha_1, \dots, \alpha_m$. Then,

$$\frac{\partial I}{\partial \ln a_i} = \frac{2}{\pi} \int_{0}^{\infty} A(\omega)\, \frac{\partial A(\omega)}{\partial \ln a_i}\, d\omega, \qquad \frac{\partial I}{\partial \ln b_i} = \frac{2}{\pi} \int_{0}^{\infty} A(\omega)\, \frac{\partial A(\omega)}{\partial \ln b_i}\, d\omega. \tag{6.86}$$

Summing the last equalities, we have

$$\sum_{i=0}^{n} \frac{\partial I}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial I}{\partial \ln b_i} = \frac{2}{\pi} \int_{0}^{\infty} A(\omega) \left[ \sum_{i=0}^{n} \frac{\partial A(\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial A(\omega)}{\partial \ln b_i} \right] d\omega.$$

According to the results of the previous section, the term in the square brackets is the sensitivity invariant of the amplitude frequency response and is equal to zero. Therefore, we obtain the following sensitivity invariant for the integral estimate (6.84):

$$\sum_{i=0}^{n} \frac{\partial I}{\partial \ln a_i} + \sum_{i=0}^{g} \frac{\partial I}{\partial \ln b_i} = 0. \tag{6.87}$$

As an example, let us find this sensitivity invariant for a system with transfer function

$$w(s) = \frac{k}{Ts + 1},$$

where $b_0 = k$, $a_0 = T$, and $a_1 = 1$. The amplitude frequency response has the form

$$A(\omega) = \frac{k}{\sqrt{1 + T^2 \omega^2}},$$

and the integral characteristic is equal to

$$I = \frac{1}{\pi} \int_{0}^{\infty} A^2(\omega)\, d\omega = \frac{k^2}{2T} = \frac{b_0^2}{2 a_0 a_1}.$$

Hence,

$$\frac{\partial I}{\partial \ln a_0} = -\frac{b_0^2}{2 a_0 a_1}, \qquad \frac{\partial I}{\partial \ln a_1} = -\frac{b_0^2}{2 a_0 a_1}, \qquad \frac{\partial I}{\partial \ln b_0} = \frac{b_0^2}{a_0 a_1},$$

so that

$$\frac{\partial I}{\partial \ln a_0} + \frac{\partial I}{\partial \ln a_1} + \frac{\partial I}{\partial \ln b_0} = 0.$$

Under the conditions (6.18), the invariant (6.87) takes the form

$$\sum_{i=1}^{m} \frac{\partial I}{\partial \ln \alpha_i} = 0. \tag{6.88}$$

For instance, for the network of Example 6.3 we have

$$A(\omega) = \frac{h}{\sqrt{h^2 + C^2 \omega^2}}, \qquad I = \frac{h}{2C}.$$

Since

$$\frac{\partial I}{\partial \ln h} = \frac{h}{2C}, \qquad \frac{\partial I}{\partial \ln C} = -\frac{h}{2C},$$

we obtain

$$\frac{\partial I}{\partial \ln h} + \frac{\partial I}{\partial \ln C} = 0.$$

6.4.2 Second Form of Sensitivity Invariants

Let us multiply the relations in (6.86) by $(n-i)$ and $(g-i)$, respectively. After summation we obtain

$$\sum_{i=0}^{n} (n-i)\, \frac{\partial I}{\partial \ln a_i} + \sum_{i=0}^{g} (g-i)\, \frac{\partial I}{\partial \ln b_i} = \frac{2}{\pi} \int_{0}^{\infty} A(\omega) \left[ \sum_{i=0}^{n} (n-i)\, \frac{\partial A(\omega)}{\partial \ln a_i} + \sum_{i=0}^{g} (g-i)\, \frac{\partial A(\omega)}{\partial \ln b_i} \right] d\omega.$$

Hence, using the results of the previous section, we find the invariant

$$\sum_{i=0}^{n} (n-i)\, \frac{\partial I}{\partial \ln a_i} + \sum_{i=0}^{g} (g-i)\, \frac{\partial I}{\partial \ln b_i} = \frac{2}{\pi} \int_{0}^{\infty} A(\omega)\, \frac{\partial A(\omega)}{\partial \omega}\, \omega\, d\omega. \tag{6.89}$$

For the example considered in this section, the last expression takes the form

$$\frac{\partial I}{\partial \ln a_0} = \frac{2}{\pi} \int_{0}^{\infty} A(\omega)\, \frac{\partial A(\omega)}{\partial \ln \omega}\, d\omega = \frac{1}{\pi} \int_{0}^{\infty} \frac{\partial A^2(\omega)}{\partial \ln \omega}\, d\omega. \tag{6.90}$$

Indeed,

$$\frac{\partial I}{\partial \ln a_0} = -\frac{b_0^2}{2 a_0 a_1} = -\frac{k^2}{2T}, \qquad \frac{1}{\pi} \int_{0}^{\infty} \frac{\partial A^2(\omega)}{\partial \ln \omega}\, d\omega = -\frac{1}{\pi} \int_{0}^{\infty} \frac{2 k^2 T^2 \omega^2}{(T^2 \omega^2 + 1)^2}\, d\omega = -\frac{b_0^2}{2 a_0 a_1}.$$

Hence, we obtain (6.90). Assume that the conditions (6.65) hold for the coefficients of the transfer function. Then, there is the following sensitivity invariant of the integral estimate:

$$\sum_{i=1}^{m} \frac{\partial I}{\partial \ln \alpha_i} = \frac{1}{\pi} \int_{0}^{\infty} \frac{\partial A^2(\omega)}{\partial \ln \omega}\, d\omega.$$

For instance, for

$$A(\omega) = \frac{h}{\sqrt{h^2 + C^2 \omega^2}}$$

this invariant takes the form

$$\frac{\partial I}{\partial \ln C} = \frac{1}{\pi} \int_{0}^{\infty} \frac{\partial A^2(\omega)}{\partial \ln \omega}\, d\omega,$$

where

$$\frac{\partial I}{\partial \ln C} = -\frac{h}{2C}, \qquad \frac{1}{\pi} \int_{0}^{\infty} \frac{\partial A^2(\omega)}{\partial \ln \omega}\, d\omega = -\frac{2 C^2 h^2}{\pi} \int_{0}^{\infty} \frac{\omega^2}{(C^2 \omega^2 + h^2)^2}\, d\omega = -\frac{h}{2C}.$$

6.5 Sensitivity Invariants for Gyroscopic Systems

6.5.1 Motion Equations and Transfer Functions

In this section we illustrate the possibilities of sensitivity invariants by an example of investigation of the fundamental and forced motion of a three-axis (three-dimensional) gyroscopic stabilizer. A three-dimensional stabilizer is included in most modern gyroscopic stabilization and inertial navigation systems. The system of differential equations describing the motion of a three-dimensional gyrostabilizer is fairly cumbersome. For simplicity, in the applied theory of gyroscopic systems, weak connections in a three-dimensional gyrostabilizer are often ignored. Such being the case, only the influence of rotation angles around the suspension axes and precession angles of the hydrounits is taken into account. In this case, investigation of the fundamental motion of a three-axis gyrostabilizer reduces to investigation of the scalar stabilization channels, i.e., single-axis stabilizers, with account for significant interconnections between the channels. The linearized equations of motion of a single-axis gyrostabilizer have the form [80]

$$\begin{aligned} (J_p s^2 + n s)\,\rho &= H \omega_{x1} + M_{\rho d} + M_{\rho c}, \\ (J s + d)\,\omega_{x1} &= M_{\alpha H} - H s \rho - K_\alpha W_\alpha(s)\,\rho + (d + m s)\,\omega_{x0} + M_{\alpha f}\, \mathrm{sign}\,\omega_{x0}, \end{aligned} \tag{6.91}$$

where $\rho$ is the precession angle of the hydrounit, $\omega_{x1}$ is the angular velocity of the platform, $\omega_{x0}$ is the foundation angular velocity, $\alpha$ is the angle of rotation of the platform with respect to the foundation, $J_p$ is the moment of inertia of the gyroscope with respect to the precession axis, $J$ is the moment of inertia of the platform with moveable elements with respect to the axis $x_1$, $H$ is the kinetic moment of the gyroscope, $n$ is the specific damping torque with respect to the precession axis, $d$ is the specific damping torque of the unloading motor reduced to the stabilization axis, $m$ is the coefficient that characterizes the inertial disturbing torque acting around the stabilizer axis while running-in the rotor of the unloading unit with a reduction gear, $M_{\alpha H}$ and $M_{\alpha f}$ are the torques of exogenous forces and "dry" friction with respect to the stabilization axis, $M_{\rho d}$ and $M_{\rho c}$ are the disturbing and controlling torques with respect to the stabilization axis, $K_\alpha W_\alpha(s)$ is the transfer function of the unloading channel, and $s$ is the Laplace operator. Equation (6.91) can be written in terms of the coordinates $\omega_{x1}$ and $\rho$ to be controlled:

$$\begin{aligned} W_J(s)\,\omega_{x1} &= W_g(s)\,\Delta\omega - W_u(s)\,\rho + W_d(s)\,\omega_{x0} + W_d^{*}(\omega_{x0}), \\ \rho &= W_m(s)\,\Delta\omega, \qquad \omega_y = \frac{M_{\rho c}}{H}, \qquad \Delta\omega = \omega_{x1} - \omega_y, \end{aligned} \tag{6.92}$$

where $W_J(s) = J s + d$ is the transfer function of the platform,

$$W_g(s) = \frac{H^2}{J_p s + n}$$

is the transfer function with respect to the gyroscopic torque,

$$W_m(s) = \frac{H}{J_p s^2 + n s}$$

is the measurement transfer function, $W_u(s) = K_\alpha W_\alpha(s)$ is the transfer function of the unloading channel, $W_d(s) = m s + d$ is the disturbance transfer function, $\Delta\omega$ is the stabilization error for the platform angular velocity, and $W_d^{*}(\omega_{x0}) = M_{\alpha f}\, \mathrm{sign}\,\omega_{x0}(t)$ is the torque of dry friction with respect to the stabilization axis. Fundamental motions of the gyrostabilizer are investigated on the basis of Equation (6.92) under the condition $\omega_y = \omega_{x0} = 0$. Then, from (6.92) we obtain

$$W_J(s)\,\omega_{x1} = -W_g(s)\,\omega_{x1} - W_u(s)\,\rho, \qquad \rho = W_m(s)\,\omega_{x1}. \tag{6.93}$$

From (6.93) we can easily determine the transfer function of the open-loop (with respect to the unloading torque) gyrostabilizer

$$W_0(s) = \frac{W_m(s)\, W_u(s)}{W_J(s) + W_g(s)} \tag{6.94}$$

and the closed-loop one

$$\Phi(s) = \frac{W_m(s)\, W_u(s)}{W_J(s) + W_g(s) + W_m(s)\, W_u(s)}. \tag{6.95}$$
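Note that (6.95) is just the unit-feedback closure of (6.94), $\Phi(s) = W_0(s)/(1 + W_0(s))$. A quick numerical confirmation using the component transfer functions of (6.92), with all numeric values chosen arbitrarily and the unloading channel taken as a static gain for simplicity:

```python
# component transfer functions of the single-axis gyrostabilizer (arbitrary values)
J, d, Jp, n, H, Ku = 2.0, 0.3, 0.05, 0.1, 1.5, 4.0
s = 1j * 2.0                      # evaluate at s = 2j

WJ = J * s + d                    # platform
Wg = H**2 / (Jp * s + n)          # gyroscopic torque
Wm = H / (Jp * s**2 + n * s)      # measurement
Wu = Ku                           # unloading channel Ka*Wa(s), assumed static here

W0 = Wm * Wu / (WJ + Wg)                      # open loop, (6.94)
Phi = Wm * Wu / (WJ + Wg + Wm * Wu)           # closed loop, (6.95)
print(abs(Phi - W0 / (1 + W0)))               # ≈ 0
```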

The transfer function (6.94) can be reduced to the form

$$W_0(s) = \frac{K_\alpha W_\alpha(s)}{H s\, (1 + 2\xi T_0 s + T_0^2 s^2)}, \tag{6.96}$$

where $\omega_0 = 1/T_0$ is the nutation frequency, and $\xi$ is the relative damping coefficient of nutational oscillations. Depending on the form of the functions $W_m(s)$ and $W_u(s)$, there are the following types of gyrostabilizers.

1. Power stabilizer ($n = 0$):

$$W_m(s) = \frac{H}{J_p s^2}, \qquad W_g(s) = \frac{H^2}{J_p s}, \qquad \xi = \frac{d}{2H} \sqrt{\frac{J_p}{J}}.$$

As a rule, ξ ≤ 0.01, because d H and Jm J due to constructive reasons. The gyrostabilizer has a clearly oscillatory transient, i.e., belongs to the class of low-damped systems. 2. Stabilizer with integrating gyroscope: H Wm (s) = Jp s2 + ns

H2 Wg (s) = , Jp s + n

n ξ= 2H



Jp . J

For a floated gyroscope, ξ > 0.3 . . . 0.5 (often ξ = 1), i.e., the torque of viscous friction acting with respect to precession angle damps nutational motion fairly well. For a two-stage gyroscope with large kinetic torque H and forced damping, we have ξ ≤ 0.1, i.e., the gyrostabilizer falls into the class of low-damped systems. 3. Indicator stabilizer. In such stabilizers, gyroscopic torque with respect to stabilization axis may be ignored in comparison with other ones. Therefore, we can assume Wg (s) = 0. Moreover, the transfer function can have various forms depending on the type of sensor. As sensors, the following gyroscopes are used: integrating, differentiating, astatic, doubly integrating. Using the transfer function Φ(s), we can determine forced motion with respect to the precession and stabilization axis as functions from the exogenous torque Mα H and foundation angular velocity ωx0 . The form of the forced motion transfer function is determined by the type of gyrostabilizer. For a power gyrostabilizer with two stages, the corresponding transfer function has the form ωx1 (s) Jp s2 + ns 1 = Φ(s) = , MαH (s) HWu (s) kc (s) ωx1 (s) (ms2 + ds)(Jp s + n) = Φ(s) = L(s). ωx0 (s) HWu (s)

(6.97)

The most important characteristics of stabilizer forced motion are the angular stabilization rigidity kc (s) and the oscillation damping index L(s). The angular stabilization rigidity determines stabilization errors under the influence of various forces and torques kc (s) = sD(s), (6.98) D(s) = WJ (s) + Wg (s) + Wm (s)Wu (s).

© 2000 by CRC Press LLC

The oscillation damping index L(s) characterizes the quality of stabilization under oscillation motion of the foundation. It is equal to the ratio of the amplitude of forced angular oscillation of the platform φx1 to the amplitude of platform oscillation with respect to the stabilization axis φx0 . For a harmonic oscillation motion of the foundation, the oscillation damping index is given by L(s) = Φ(s)

ms2 + ds + Mαf (c0 ) (Jp s + n(c0 )), HWu (s)

(6.99)

where Mαf (c0 ) =

0 4 Mαf · π c0 fo

is the coefficient of harmonic linearization of the dry friction torque with respect to the stabilization axis, 0 n(c0 ) = n + 0, 1Mρf ·

1 4 kα · π d c0 · fk

is the equivalent damping torque with respect to the precession axis, c0 is the amplitude of harmonic oscillation motion, fo is the oscillation fre0 0 quency, and Mαf and Mρf are the break-away torques with respect to the stabilization and precession axes, respectively.

6.5.2  Sensitivity Invariants of Amplitude Frequency Response

In power stabilizers, non-minimum-phase units like phase-inverters are used [80], i.e.,

Wu(s) = kα(1 − To s)/(1 + To s).   (6.100)

Then, the transfer function (6.94) of the open-loop gyrostabilizer with two-stage gyroscopes is given by

W0(s) = kα(1 − To s)/[Hs(1 + 2ξT0 s + T0²s²)(1 + To s)].   (6.101)

The amplitude frequency response is obtained from the equation

A0²(ω) = kα²/(H²[4ξ²T0²ω² + (1 − T0²ω²)²]).   (6.102)

Sensitivity of amplitude frequency responses can be conveniently estimated by logarithmic sensitivity functions of the form

S^{A0}_{αk}(ω) = ∂ln A0(ω)/∂ln αk = (αk/(2A0²(ω)))·(∂A0²(ω)/∂αk).   (6.103)

Differentiating Formula (6.102) according to (6.103), we find the sensitivity functions for all parameters appearing in the transfer function (6.101) (here D(ω) = 4ξ²T0²ω² + (1 − T0²ω²)²):

a) S^{A0}_{kα}(ω) = 1,  S^{A0}_{To}(ω) = 0,
b) S^{A0}_{Jp}(ω) = T0²ω²(1 − T0²ω²)/D(ω),
c) S^{A0}_{n}(ω) = S^{A0}_{ξ}(ω) = −4ξ²T0²ω²/D(ω),
d) S^{A0}_{J}(ω) = T0²ω²(1 − T0²ω² − 4ξ²)/D(ω),
e) S^{A0}_{H}(ω) = −1 − 2T0²ω²(1 − T0²ω² − 4ξ²)/D(ω),
f) S^{A0}_{ω0}(ω) = T0²ω²(4ξ² + 2T0²ω² − 2)/D(ω).   (6.104)

The process of evaluating the sensitivity functions (6.104) of the low-damped oscillatory system of a power gyrostabilizer is numerically unstable in a locality of the nutation frequency ω0. Numerical stability can be enhanced by using sensitivity invariants. The sensitivity invariants for the relative functions (6.104) are given by the sums

a) S^{A0}_{kα}(ω) + S^{A0}_{H}(ω) + S^{A0}_{J}(ω) + S^{A0}_{Jp}(ω) + S^{A0}_{n}(ω) = 0,
b) S^{A0}_{n}(ω) + S^{A0}_{Jp}(ω) − S^{A0}_{J}(ω) = 0,
c) S^{A0}_{H}(ω) + 2S^{A0}_{J}(ω) + S^{A0}_{kα}(ω) = 0,
d) S^{A0}_{ω0}(ω) + S^{A0}_{ξ}(ω) + 2S^{A0}_{Jp}(ω) = 0,
e) S^{A0}_{n}(ω) − S^{A0}_{ξ}(ω) = 0.   (6.105)

Equations (6.105) can easily be proved by appropriate summation of the functions (6.104). The sensitivity invariants make it possible to simplify the procedure of sensitivity function calculation. Indeed, as follows from relations b)–e) in (6.105), the remaining sensitivity functions can be uniquely determined in terms of two (base) ones. Hence, to calculate the six sensitivity functions of the response A0(ω) of the gyrostabilizer, denoted by

S^{A0}_{H}(ω), S^{A0}_{J}(ω), S^{A0}_{Jp}(ω), S^{A0}_{n}(ω), S^{A0}_{ω0}(ω), S^{A0}_{ξ}(ω),

it is sufficient to find only two base ones, for example S^{A0}_{n}(ω) and S^{A0}_{Jp}(ω), while the remaining ones can easily be found from the invariant relations (6.105). To enhance numerical stability, the sensitivity functions having the simplest form are to be chosen as the base ones. The calculation procedure for determining the sensitivity functions of the amplitude frequency response of the open-loop gyrostabilizer is performed in the following order. First, the base functions S^{A0}_{n}(ω) and S^{A0}_{Jp}(ω) are calculated by relations b) and c) of (6.104). Then, the remaining sensitivity functions are found from the following equalities obtained from (6.105):

a) S^{A0}_{J}(ω) = S^{A0}_{n}(ω) + S^{A0}_{Jp}(ω),
b) S^{A0}_{H}(ω) = −1 − 2S^{A0}_{J}(ω),
c) S^{A0}_{ω0}(ω) = −2S^{A0}_{Jp}(ω) − S^{A0}_{n}(ω).   (6.106)
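As an illustration (not part of the original text), the base-function route (6.106) can be checked numerically against the direct formulas (6.104) and the cross-invariants (6.108) below; the values of T0 and ξ are arbitrary assumptions for a low-damped power stabilizer:

```python
import numpy as np

# Assumed illustrative parameters of a low-damped power gyrostabilizer.
T0, xi = 0.02, 0.01          # nutation time constant and damping (xi <= 0.01)

def sensitivities(w):
    """Sensitivity functions of A0(w): base ones by (6.104) b), c),
    the rest by the invariant relations (6.106)."""
    D = 4*xi**2*T0**2*w**2 + (1 - T0**2*w**2)**2
    S_Jp = T0**2*w**2*(1 - T0**2*w**2) / D          # base, relation b)
    S_n  = -4*xi**2*T0**2*w**2 / D                  # base, relation c)
    S_J  = S_n + S_Jp                               # (6.106) a)
    S_H  = -1 - 2*S_J                               # (6.106) b)
    S_w0 = -2*S_Jp - S_n                            # (6.106) c)
    return S_Jp, S_n, S_J, S_H, S_w0

w = np.linspace(1.0, 100.0, 500)                    # frequency grid, rad/s
S_Jp, S_n, S_J, S_H, S_w0 = sensitivities(w)
D = 4*xi**2*T0**2*w**2 + (1 - T0**2*w**2)**2

# The invariant route reproduces the direct formulas d) and f) of (6.104) ...
assert np.allclose(S_J,  T0**2*w**2*(1 - T0**2*w**2 - 4*xi**2) / D)
assert np.allclose(S_w0, T0**2*w**2*(4*xi**2 + 2*T0**2*w**2 - 2) / D)
# ... and invariant a) of (6.105) holds, with S_ka = 1:
assert np.allclose(1 + S_H + S_J + S_Jp + S_n, 0)

# Cross-invariants at the nutation frequency w0 = 1/T0 (see (6.108) below):
assert np.allclose(sensitivities(np.array(1/T0)), (0.0, -1.0, -1.0, 1.0, 1.0))
```

The base functions have the simplest form of the six, which is why the text recommends them as the starting point for numerical work near ω0.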

For the response Ac(ω) = |Φ(jω)| of the closed-loop gyrostabilizer the sensitivity invariants are determined by relations similar to (6.105). Nevertheless, the base functions are given by the following more complex equations:

S^{Ac}_{Jp}(ω) = (T0²To ω⁴/|N(jω)|²)·[d(ω)/(ωTo) − c(ω)],
S^{Ac}_{n}(ω) = (2ξT0ω²/|N(jω)|²)·[ωTo d(ω) + c(ω)],
S^{Ac}_{kα}(ω) = 1 + (kα/(H|N(jω)|²))·[ωTo d(ω) − c(ω)],   (6.107)

where

Φ(jω) = R(jω)/N(jω) = (a(ω) + jb(ω))/(c(ω) + jd(ω)).

Sensitivity functions of the gyrostabilizer frequency responses A0(ω) and Ac(ω) are connected not only by the invariant relations (6.105) having the form of sums, but also by sensitivity cross-invariants. The latter connection shows itself in the fact that at the nutation frequency the sensitivity functions have the form

S^{A0}_{Jp}(ω0) = S^{Ac}_{Jp}(ω0c) = 0,
S^{A0}_{kα}(ω0) = S^{A0}_{H}(ω0) = S^{A0}_{ω0}(ω0) = 1,
S^{A0}_{J}(ω0) = S^{A0}_{n}(ω0) = S^{A0}_{ξ}(ω0) = −1,   (6.108)
S^{Ac}_{kα}(ω0c) = S^{Ac}_{H}(ω0c) = S^{Ac}_{ω0c}(ω0c) = q0,
S^{Ac}_{J}(ω0c) = S^{Ac}_{n}(ω0c) = S^{Ac}_{ξc}(ω0c) = −q0,

where q0 is a constant and ω0c is the nutation frequency of the closed-loop system. For the frequencies ω0 ± ξω0 and ω0c ± ξcω0c the sensitivity functions S_H(ω), S_ω0(ω), S_Jp(ω), and S_J(ω) reach their maximal values.

6.5.3  Sensitivity Invariants of Integral Estimates

It is convenient to evaluate the sensitivity of the characteristic polynomial D(s) and of the angular stabilization rigidity kc(s) by means of sensitivity coefficients of an integral index of stabilization quality. The use of integral estimates makes it possible to estimate the quality of stabilization without solving differential equations. Integral estimates are expressed in terms of the coefficients of the characteristic polynomial D(s) according to the rules known in automatic control theory [8]. For a gyrostabilizer with transfer function (6.101), the integral quadratic estimate of the transient process for a unit pulse input has the form

I = [a3(a1 − 1)² + a0(a3a2 − a4a1)] / (2[a1(a3a2 − a4a1) − a3²a0]),   (6.109)

where

a4 = (JJp/H²)To,  a3 = JJp/H² + (nJ/H²)To,  a2 = To + nJ/H²,
a1 = 1 − (kα/H)To,  a0 = kα/H.

Then, we estimate the sensitivity of the integral index (6.109) using the expression

∂I/∂ln αk = αk ∂I/∂αk = αk Σ_{i=0}^{4} (∂I/∂ai)(∂ai/∂αk).   (6.110)

For a system with the transfer function (6.101) we have, according to Formula (6.110),

a) kα ∂I/∂kα = a0 ∂I/∂a0 − a0To ∂I/∂a1,
b) H ∂I/∂H = −a0 ∂I/∂a0 + a0To ∂I/∂a1 − 4ξT0 ∂I/∂a2 − 2a3 ∂I/∂a3 − 2a4 ∂I/∂a4,
c) J ∂I/∂J = 2ξT0 ∂I/∂a2 + a3 ∂I/∂a3 + a4 ∂I/∂a4,
d) Jp ∂I/∂Jp = T0² ∂I/∂a3 + a4 ∂I/∂a4,
e) n ∂I/∂n = 2ξT0 ∂I/∂a2 + 2ξT0To ∂I/∂a3,
f) To ∂I/∂To = −a0To ∂I/∂a1 + To ∂I/∂a2 + 2ξT0To ∂I/∂a3 + a4 ∂I/∂a4.   (6.111)

For a multivariable gyrostabilizer, calculation of the sensitivity coefficients is a difficult problem. The procedure can be simplified by the use of sensitivity invariants. It appears that the invariant relations for the sensitivity coefficients of the integral estimate (6.109) and for the sensitivity functions of the amplitude frequency response (6.105) have similar forms. For the integral estimate (6.109), the sensitivity invariants have the form

a) kα ∂I/∂kα + H ∂I/∂H + J ∂I/∂J + Jp ∂I/∂Jp + n ∂I/∂n = 0,
b) n ∂I/∂n + Jp ∂I/∂Jp − J ∂I/∂J = 0,
c) H ∂I/∂H + 2J ∂I/∂J + kα ∂I/∂kα = 0.   (6.112)

Relations (6.112) can easily be proven by summing the sensitivity coefficients (6.111) in an appropriate way.
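The invariants (6.112) can also be checked numerically at the level of the coefficients a0, …, a4: since (6.110) expresses every sensitivity coefficient through the ∂ai/∂αk, it suffices that the weighted coefficient gradients cancel, and then (6.112) holds for any integral estimate I(a0, …, a4). The sketch below is illustrative only; the parameter values are assumptions:

```python
import numpy as np

def coeffs(ka, H, J, Jp, n, To):
    """Characteristic-polynomial coefficients a0..a4 used in (6.109)."""
    return np.array([ka/H,                      # a0
                     1 - ka*To/H,               # a1
                     To + n*J/H**2,             # a2
                     J*Jp/H**2 + n*J*To/H**2,   # a3
                     J*Jp*To/H**2])             # a4

p = dict(ka=4.0, H=10.0, J=2.0, Jp=0.05, n=0.3, To=0.1)   # assumed values

def g(name, h=1e-7):
    """Weighted gradient alpha_k * d(a)/d(alpha_k), by a forward difference."""
    q = dict(p); q[name] *= (1 + h)
    return (coeffs(**q) - coeffs(**p)) / h

# Coefficient-level form of the invariants (6.112) a)-c):
assert np.allclose(g('ka') + g('H') + g('J') + g('Jp') + g('n'), 0, atol=1e-6)
assert np.allclose(g('n') + g('Jp') - g('J'), 0, atol=1e-6)
assert np.allclose(g('H') + 2*g('J') + g('ka'), 0, atol=1e-6)
```

By the chain rule (6.110), the same three linear combinations of the sensitivity coefficients (6.111) then vanish for any differentiable estimate built from a0, …, a4.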

6.5.4  Sensitivity Invariants of Damping Coefficient

The index of damping of the gyrostabilizer foundation oscillations L(ω) can be represented in the form of the amplitude frequency response of the function L(s) (see (6.99)). For a gyrostabilizer with the transfer function (6.101), the square of the damping index L²(ω) is given by

L²(ω) = Ac²(ω)·[(Jp²ω² + n²(c0))/(kα²H²)]·[(Mαf(c0) − mω²)² + d²ω²].   (6.113)

Since the frequencies of foundation pumping are two to three times less than the nutation frequency, in Equation (6.113) we can assume Ac(ω) = 1 for investigation of the low-frequency part. Then,

L1²(ω) = [(Jp²ω² + n²(c0))/(kα²H²)]·[(Mαf(c0) − mω²)² + d²ω²].   (6.114)

For a constant pumping amplitude c0 we can write the following sensitivity invariants of the foundation damping index (6.114):

a) S^{L1}_{kα}(ω) + S^{L1}_{H}(ω) + S^{L1}_{Jp}(ω) + S^{L1}_{n}(ω) + S^{L1}_{Mαf}(ω) + S^{L1}_{m}(ω) + S^{L1}_{d}(ω) = 0,
b) S^{L1}_{kα}(ω) = S^{L1}_{H}(ω) = −1,
c) S^{L1}_{n}(ω) + S^{L1}_{Jp}(ω) = 1,
d) S^{L1}_{d}(ω) + S^{L1}_{Mαf}(ω) + S^{L1}_{m}(ω) = 1.   (6.115)

For the determination of all sensitivity functions of the damping index it is necessary to find only three base ones:

S^{L1}_{n}(ω) = S^{L1}_{Mρf}(ω) = n²(c0)/(Jp²ω² + n²(c0)),
S^{L1}_{d}(ω) = d²ω²/[(Mαf − mω²)² + d²ω²],
S^{L1}_{Mαf}(ω) = Mαf(Mαf − mω²)/[(Mαf − mω²)² + d²ω²].   (6.116)

The remaining sensitivity functions are defined, according to the invariant relations (6.115), by the equalities

S^{L1}_{Jp}(ω) = 1 − S^{L1}_{n}(ω),
S^{L1}_{m}(ω) = 1 − S^{L1}_{d}(ω) − S^{L1}_{Mαf}(ω).   (6.117)
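A quick numerical sanity check of (6.115)–(6.117) by finite differences of (6.114) (a sketch, not from the original text; the parameter values are arbitrary, with n and Maf standing for the linearized quantities n(c0) and Mαf(c0)):

```python
import numpy as np

p = dict(ka=2.0, H=5.0, Jp=0.3, n=1.5, Maf=0.8, m=0.1, d=0.6)  # assumed
w = 3.0                          # a low "rolling" frequency, rad/s

def L1_sq(q, w):
    """Squared low-frequency damping index, Formula (6.114)."""
    return ((q['Jp']**2*w**2 + q['n']**2) / (q['ka']**2*q['H']**2)
            * ((q['Maf'] - q['m']*w**2)**2 + q['d']**2*w**2))

def S(name, w, h=1e-7):
    """Logarithmic sensitivity S^L1 = (alpha/(2*L1^2)) * dL1^2/dalpha."""
    q = dict(p); q[name] *= (1 + h)
    return (L1_sq(q, w) - L1_sq(p, w)) / (h * 2 * L1_sq(p, w))

# Invariants (6.115) b)-d):
assert abs(S('ka', w) + 1) < 1e-5 and abs(S('H', w) + 1) < 1e-5
assert abs(S('n', w) + S('Jp', w) - 1) < 1e-5
assert abs(S('d', w) + S('Maf', w) + S('m', w) - 1) < 1e-5
# Base function S_n from (6.116), recovered by the finite difference:
assert abs(S('n', w) - p['n']**2/(p['Jp']**2*w**2 + p['n']**2)) < 1e-5
```

The invariants hold because each bracketed factor of (6.114) is homogeneous of degree two in its own parameter group, so only the base functions need careful evaluation.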

For a three-axes gyroscope, nine damping coefficients are usually required, and the use of the sensitivity invariants greatly decreases the amount of calculations. Sensitivity functions of the foundation damping index make it possible to rank the parameters of the gyrostabilizer by the degree of their influence on the quality of stabilization and to test stabilizer workability on a rolling foundation under extremal testing and operating conditions. The parameter ranking is performed by the following rule: a parameter αi is more important than a parameter αj at the rolling frequency fk if

|S^L_{αi}(fk)| > |S^L_{αj}(fk)|,

or, with account for the deviations ∆αi of the parameter values, by the rule

|S^L_{αi}(fk)∆αi| > |S^L_{αj}(fk)∆αj|.

The gyrostabilizer remains workable under extremal conditions if

L(ωk) + Σ_{i=1}^{m} ∆αi S^L_{αi}(ωk) L(ωk) ≤ L0(ωk),

where L0(ωk) is a given restriction imposed on L(ωk). If the sensitivity functions of the damping index are calculated by Formula (6.113), the expressions become much more involved. In this case the sensitivity invariants make it possible to simplify the formulas. Thus, for the index (6.113) we can write

L²(ω) = Ac²(ω)·L1²(ω).   (6.118)

The sensitivity function of the index L(ω) with respect to a parameter αi is determined by

S^L_{αi}(ω) = (αi/(2L²(ω)))·(∂L²(ω)/∂αi).   (6.119)

Substituting (6.118) into (6.119) and performing the differentiation, we obtain

S^L_{αi}(ω) = S^{Ac}_{αi}(ω) + S^{L1}_{αi}(ω).   (6.120)

Equation (6.120) demonstrates the possibility of using sensitivity invariants to simplify the investigation of complex characteristics. Thus, having the sensitivity invariants for the terms S^{Ac}_{αi}(ω) and S^{L1}_{αi}(ω), we can find sensitivity invariants for the more complex characteristic L(ω) (see (6.113)). With account for (6.105), (6.115) and (6.120), the sensitivity invariants for the damping coefficient of the form (6.113) are given by

a) S^{L}_{kα}(ω) + S^{L}_{H}(ω) + S^{L}_{Jp}(ω) + S^{L}_{n}(ω) + S^{Ac}_{J}(ω) = −1,
b) S^{L}_{n}(ω) + S^{L}_{Jp}(ω) − S^{Ac}_{J}(ω) = 1,
d) S^{L}_{H}(ω) + S^{L}_{kα}(ω) + 2S^{Ac}_{J}(ω) = −2.   (6.121)

The sensitivity invariants (6.121) make it possible to develop a simplified process for determining the sensitivity functions of the fairly complex damping index of foundation oscillation (6.113) according to the aforesaid rules. This is especially important for the investigation of such a complex system as the three-axes gyroscope.


Chapter 7

Sensitivity of Mathematical Programming and Variational Calculus Problems

7.1  Sensitivity of Linear Programming Problems

7.1.1  Actuality of Sensitivity Investigation of Optimal Control

Classical optimization theory is based on the assumption that there is sufficient information about the mathematical model of the plant to be optimized. Nevertheless, mathematical models differ from real plants for many reasons. Moreover, the control itself will differ from the calculated one, because the elements and units realizing it are not ideal. All these factors lead to violation of the optimality conditions and to losses in any specific case of controlling a plant. For this reason, it is very important to estimate the sensitivity of optimal control with respect to variations of the parameters describing the plant and control system, which can be considered as initial data in optimal control design. Sensitivity analysis is especially relevant in operations research and systems engineering for designing complex schemes, where even small uncertainties in the initial data and assumptions may lead to great material losses. Therefore, sensitivity analysis is currently a mandatory stage of large system investigations [17, 48, 82]. In this chapter we discuss methods of sensitivity investigation for solutions of mathematical programming and variational calculus problems based on the use of sensitivity functions.

7.1.2  Linear Programming

Mathematical programming problems in which the goal function (functional I) is linear and the set on which the extremum of the function is sought is given by a system of linear equalities and inequalities fall into the class of linear


programming (LP) problems. The generic problem of linear programming can be formulated as follows:

Linear programming problem. It is required to find a solution x10, …, xn0 among all nonnegative solutions xj ≥ 0 (j = 1, …, n) satisfying the inequalities

Σ_{j=1}^{n} aij xj ≤ bi,  i = 1, …, m,

such that a linear function of these variables,

I = c1x1 + c2x2 + … + cnxn,

reaches its maximal value. The generic problem can be written in the following vector-matrix form:

max{I(X) = C^T X},  AX ≤ B,  X ≥ 0,   (7.1)

where X^T = [x1, …, xn], C^T = [c1, …, cn], B^T = [b1, …, bm], A = [aij], i = 1, …, m, j = 1, …, n. It is known that any linear programming problem can be reduced to the above form. If, for instance, it is required to find the minimum of a cost function I(X), the generic problem can be obtained by using the new goal function I1(X) = −I(X), i.e., after multiplying the initial function by −1. The linear programming problem can be written in the following canonical form:

max{I(X) = C^T X},  AX = B,  X ≥ 0.   (7.2)

Any generic problem can be reduced to the canonical form by introducing additional variables. For any linear programming problem there is a dual problem. For example, the dual problem for (7.2) is formulated as follows:

min{J(Y) = B^T Y},  A^T Y ≥ C,  Y ≥ 0.   (7.3)

In the theory of linear programming it is shown that if X0 is the optimal solution of the direct problem and Y0 is the optimal solution of the associated dual problem, then

C^T X0 = B^T Y0.   (7.4)

In linear programming, initial data are determined by components of the vectors C and B and those of the matrix A. First, consider a geometric interpretation of the influence of initial data variations onto an optimal solution. Consider the case of two variables x1 and x2 .

7.1.3  Qualitative Geometric Sensitivity Analysis

Figure 7.1  Geometric interpretation of the linear programming problem

Consider Figure 7.1, where the polygon ABDEF is the domain of admissible solutions and the line MN corresponds to the goal function (efficiency index). Assume that the optimal solution (for instance, the one maximizing the efficiency index) is located at the vertex B. From Figure 7.1 it is clear that the vertex B corresponds not only to the optimal solution defined by this particular line MN. The specific location of the line MN is defined by the vector C. Indeed, the vector C perpendicular to the line MN is the directing vector (gradient) of the goal function I = C^T X (see Figure 7.1). The vectors C1 and C2 are the directing vectors of the lines AB and BD, respectively. The optimal solution determined by the vertex B remains unchanged for variations of the vector C between C1 and C2. Denoting the angles of the vectors C, C1, and C2 by γ, γ1, and γ2, respectively, the solution at the vertex B remains optimal when γ ∈ (γ1, γ2). If γ = γ1 (or γ = γ2) the line of the goal function will be parallel to the


polygon side AB (or BD, respectively). In this case, the LP problem has an infinite number of solutions. Further variation of the vector C leads again to the case of a single optimal solution, though at another vertex, A or D, respectively. The new optimal solution also remains valid over a range of variation of the vector C. In a similar way, we can trace the influence of variation of the vector B on the optimal solution. Variation of the coefficient bi leads to a parallel shift of the hyperplane

Σ_{j=1}^{n} aij xj = bi

with respect to the origin. The hyperplane inclination remains the same. In the two-dimensional case shown in Figure 7.1, variation of the coefficients bi (i = 1, …, m) causes a parallel shift of the lines AB, BD, DE, and AF. Moreover, in some range of variation of the coefficients bi (i = 1, …, m) the optimal solution is determined by the intersection of the same sides of the polygon of admissible solutions. It can be shown geometrically that this solution remains optimal as long as it is admissible. This result follows from the main theorem of linear programming [30, 34]. For a geometric interpretation of the influence of variations of the vector B on the optimal solution we can also consider the dual problem with respect to the initial one. It is hardly possible to find a vivid representation of the influence of variation of the components of the matrix A on the optimal solution. Considering Figure 7.1, we can only propose that local variations of the coefficients aij included in the equations of the lines AF, EF, and DE do not change the optimal solution. These coefficients are not components of the basic matrix associated with the optimal plan at the vertex B. Three areas of admissible solutions determined by the position of the line AB are shown in Figure 7.2. In the first case, shown in Figure 7.2a, the optimal vertex is B; in the second one (see Figure 7.2b) the optimum is reached at A1; while in the third one there is no optimal solution, because the admissible set is empty due to incompatible restrictions.

7.1.4  Quantitative Sensitivity Analysis

Considering the initial data as parameters of a linear programming problem, we can study the influence of their variations on the properties of the optimal solution using the methods developed in the theory of parametric programming [25, 30]. Local properties of a solution of a linear programming problem can be estimated by sensitivity coefficients.


Figure 7.2 Areas of admissible solutions

THEOREM 7.1 [33, 64, 72]
Let, for the nominal values of the initial data A = A0, C = C0, B = B0, there exist unique solutions X0 and Y0 of the initial (7.2) and dual (7.3) problems of linear programming. Then, in a small locality of the values A0, B0, and C0 the following representation holds:

Imax(A0 + ∆A, B0 + ∆B, C0 + ∆C) = Imax(A0, B0, C0) + X0^T∆C + Y0^T∆B − Y0^T∆A X0 + o(∆A, ∆B, ∆C).   (7.5)

PROOF  Let us represent the optimal value of the goal function in a locality of A0, B0, and C0 in the form

Imax(A0 + ∆A, B0 + ∆B, C0 + ∆C) = Imax(A0, B0, C0) + (∂I/∂C)0 ∆C + (∂I/∂B)0 ∆B + δ(∆A),

where (∂I/∂C)0 and (∂I/∂B)0

are row sensitivity vectors with respect to variations of the vectors C and B, respectively, and δ(∆A) is the increment of the goal function owing to


variation of the elements of the matrix A. To determine the sensitivity vector ∂I/∂C it suffices to differentiate the goal function directly, so that

(∂I/∂C)0 = (∂(C^T X)/∂C)0 = X0^T.   (7.6)


Thus, the sensitivity vector with respect to variations of the components of the vector C coincides with the optimal solution of the initial linear programming problem at the point (A0, B0, C0). Let the components of the vector C depend on a parameter α, i.e., ci = ci(α). Then, the sensitivity of the value I0 with respect to the parameter variation is given by the formula

dI0/dα = Σ_{i=1}^{n} (∂I0/∂ci)(dci/dα).

To obtain the sensitivity vector (∂I/∂B)0, we consider the corresponding dual problem and use the condition (7.4), C^T X0 = B^T Y0. Hence,



(∂I/∂B)0 = (∂(B^T Y0)/∂B)0 = Y0^T,   (7.7)


i.e., the sensitivity vector of the optimal value of the goal function with respect to variations of the elements of the vector B coincides with the optimal solution of the dual problem. Assume that we are interested in the sensitivity of the value I0 with respect to variations of a parameter α on which the components of the vector B depend, i.e., bi = bi(α). Obviously,

∂I0/∂α = Σ_{i=1}^{m} (∂I0/∂bi)(dbi/dα).

To evaluate the sensitivity of the optimal value of the goal function with respect to variations of the elements of the matrix A, we write the restrictions of the initial problem in the canonical form

(A0 + ∆A)(X0 + ∆X) = B0 + ∆B


or, ignoring second-order terms,

A0 X = B0 + ∆B + ∆B1,  where  ∆B1 = −∆A X0.   (7.8)

As a result, we have reduced the variation ∆A to an equivalent, up to second-order terms, variation of the vector B. Then,

Imax(A0 + ∆A, B0 + ∆B, C0 + ∆C) ≈ Imax(A0, B0 + ∆B + ∆B1, C0 + ∆C)
= I0 + (∂I0/∂C)∆C + (∂I0/∂B)(∆B + ∆B1)
= I0 + X0^T∆C + Y0^T∆B + Y0^T∆B1,

or, with due account for (7.8),

∆I0 = X0^T∆C + Y0^T∆B − Y0^T∆A X0.   (7.9)

From the latter relations it follows that δ(∆A) ≈ −Y0^T∆A X0 or, in coordinate form,

δ(∆A) = −Σ_{i=1}^{m} Σ_{j=1}^{n} x0j y0i ∆aij.

Then, for the sensitivity coefficients we find

uij = (∂I/∂aij)0 = −x0j y0i.   (7.10)


From the last equation it is evident that the sensitivity coefficient uks is zero provided that any of the elements x0s or y0k is zero. This conclusion means that the element aks is a coefficient at non-basis (free) variables in initial and dual problems. In fact, this proves the hypothesis regarding the influence of variations of the elements of the matrix A on the solution of a


linear programming problem proposed in the previous paragraph. Let X1 be the vector of basis variables and X2 be the vector of free variables. Denote X^T = [X1^T X2^T]. Then, the equation of restrictions can be represented in the form

A1X1 + A2X2 = B,

where A1 and A2 are the blocks of the block matrix A = [A1 A2]. Since det A1 ≠ 0, we have

X1 = A1⁻¹B − A1⁻¹A2X2.

Transformation of the goal function yields

I = C^T X = C1^T X1 + C2^T X2 = C1^T(A1⁻¹B − A1⁻¹A2X2) + C2^T X2
  = C1^T A1⁻¹B − (C1^T A1⁻¹A2 − C2^T)X2.

According to the simplex method of solving linear programming problems, we have X2 = 0; therefore,

I = C1^T A1⁻¹B.

Obviously, the sensitivity coefficients of this function with respect to variations of the elements of the block A2 are equal to zero. All the above reasoning relates to the case when there is a unique solution to the linear programming problem under consideration. In the special case in which the level line of the goal function is parallel to a side of the area of admissible solutions, the goal function is not differentiable with respect to the parameters. Then, the sensitivity coefficients are determined, according to a theorem given in [33], by the formulas

∂I0/∂cj⁺ = max_{x∈DX} xj = xj max,
∂I0/∂bi⁺ = min_{y∈DY} yi = yi min,


∂I0/∂cj⁻ = min_{x∈DX} xj = xj min,
∂I0/∂bi⁻ = max_{y∈DY} yi = yi max,   (7.11)

∂I0/∂aij⁺ = −xj min·yi max, if yi max ≥ 0;  −xj max·yi max, if yi max < 0,
∂I0/∂aij⁻ = −xj max·yi min, if yi min ≥ 0;  −xj min·yi min, if yi min < 0.   (7.12)

Here, by ∂I0/∂cj⁺, ∂I0/∂bi⁺, and ∂I0/∂aij⁺ we denote the right partial derivatives, and by ∂I0/∂cj⁻, ∂I0/∂bi⁻, and ∂I0/∂aij⁻ the left partial derivatives of the goal function with respect to the corresponding arguments. According to the conditions of the above theorem, DX and DY must be non-empty bounded sets. Finally, we consider a simple linear programming problem as an example.

Example 7.1
Find the maximum of the function I = cx1 + x2 under the restrictions

x1 + x2 ≤ 3,  x1 + x2 ≥ 1,  x1 ≥ 0,  x2 ≥ 0,

and investigate its sensitivity with respect to the parameter c. The area of admissible solutions is shown in Figure 7.3a. The level line of the goal function is characterized by the directing vector C with coefficients (c, 1). The "rotation" of the goal function line as the parameter c varies in the interval (−∞, +∞) is shown in Figure 7.3b. Figure 7.3c demonstrates the dependence of the optimal value of the function on the parameter c; in this figure tan β = 3. In the interval (−∞, 1) the optimal solution is determined by the coordinates of the vertex D = (0, 3) of the polygon of admissible solutions, and I0 = 3.


Figure 7.3 Areas of admissible solutions


For c = 1, the level line of the goal function is parallel to the line AD, i.e., we have the special case. For c > 1, the optimal solution is determined by the coordinates of the vertex A = (3, 0), i.e., I0 = 3c. The sensitivity coefficient Uc = ∂I0/∂c exists for all values of c except c = 1, and is determined by the following values:

∂I0/∂c = x1 max = 3  for c > 1,
∂I0/∂c = x1 min = 0  for c < 1.

Figures 7.3d and 7.3f show the dependence of the optimal values of x1 and x2 on the parameter α = c. It is known [64] that the solution of an arbitrary finite matrix game can be reduced to the solution of a linear programming problem. Such being the case, the above approach can be useful for investigating the sensitivity of matrix games.
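Example 7.1 can be cross-checked numerically. The sketch below (an illustration, not part of the original text) uses scipy.optimize.linprog, which minimizes, so the goal function is negated; it confirms that dI0/dc equals the optimal x1 on both sides of c = 1:

```python
from scipy.optimize import linprog

def I_opt(c):
    """Optimal value and point of: max c*x1 + x2, 1 <= x1 + x2 <= 3, x >= 0."""
    res = linprog([-c, -1.0],
                  A_ub=[[1, 1], [-1, -1]],    # x1 + x2 <= 3, -(x1 + x2) <= -1
                  b_ub=[3, -1],
                  bounds=[(0, None), (0, None)],
                  method="highs")
    return -res.fun, res.x

for c in (2.0, 0.5):                          # one value on each side of c = 1
    I0, x = I_opt(c)
    dI_dc = (I_opt(c + 1e-6)[0] - I0) / 1e-6  # finite-difference sensitivity
    assert abs(dI_dc - x[0]) < 1e-4           # dI0/dc = x1 at the optimum, (7.6)

assert abs(I_opt(2.0)[0] - 6.0) < 1e-9        # vertex A = (3, 0): I0 = 3c
assert abs(I_opt(0.5)[0] - 3.0) < 1e-9        # vertex D = (0, 3): I0 = 3
```

At c = 1 exactly, the optimum is non-unique and only the one-sided formulas (7.11) apply, which is why the check stays away from that point.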

7.2  Sensitivity of Optimal Solution to Nonlinear Programming Problems

7.2.1  Unconstrained Nonlinear Programming

Let a goal function I = I(X) be defined in n-dimensional Euclidean space and be differentiable at a point X0. Then, an extremum exists at the point X0 only if all first-order partial derivatives are zero:

ψi(X) = ∂I/∂xi = 0,  i = 1, …, n.   (7.13)

Moreover, assume that the goal function depends on a non-controllable parameter α so that I = I(X, α). Obviously, the solution X0 is a function of this parameter X0 = X0 (α). It is required to find the sensitivity coefficients dx0i /dα. Let functions ψi (X, α) satisfy the conditions of the theorem on differentiability of implicit function, i.e., the functions ψ1 , . . . , ψn are defined and


continuous in a locality of the point (X0, α). Moreover, in this area there exist continuous partial derivatives

∂ψi/∂xj,  ∂ψi/∂α,  i, j = 1, …, n,

and the Jacobian

J = D(ψ1, …, ψn)/D(x1, …, xn)

is nonzero. Then, by the theorem on differentiability of implicit functions [114], there exist continuous sensitivity coefficients given by the formulas

dxi/dα = − [D(ψ1, …, ψn)/D(x1, …, xi−1, α, xi+1, …, xn)] / [D(ψ1, …, ψn)/D(x1, …, xn)],  i = 1, …, n.   (7.14)
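Formula (7.14) is Cramer's rule applied to the linear system J·(dx/dα) = −∂ψ/∂α. A minimal numerical sketch (the quadratic goal function below is an assumed toy example, not from the text):

```python
import numpy as np

a = 1.0                                   # the parameter alpha

# Toy goal function I(x, a) = x1^2 + a*x1*x2 + x2^2 - x1 - x2;
# its stationarity conditions psi = grad_x I = 0 are linear in x.
J = np.array([[2.0, a], [a, 2.0]])        # Jacobian d(psi)/dx
x0 = np.linalg.solve(J, np.array([1.0, 1.0]))   # extremum: x1 = x2 = 1/(2+a)

# Matrix form of (7.14): dx/da = -J^{-1} * d(psi)/da, with d(psi)/da = (x2, x1).
dx_da = -np.linalg.solve(J, np.array([x0[1], x0[0]]))

# The analytic extremum x = 1/(2+a) gives dx/da = -1/(2+a)^2:
assert np.allclose(dx_da, [-1/(2+a)**2, -1/(2+a)**2])

# Finite-difference cross-check by re-solving the stationarity conditions:
h = 1e-6
Jh = np.array([[2.0, a+h], [a+h, 2.0]])
xh = np.linalg.solve(Jh, np.array([1.0, 1.0]))
assert np.allclose((xh - x0)/h, dx_da, atol=1e-4)
```

Solving the linear system once is numerically preferable to evaluating the n + 1 determinants of (7.14) explicitly when n is large.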

7.2.2  Nonlinear Programming with Equality Constraints

Consider the following problem. Find an extremum of the goal function I(X) under the constraints

fi(X) = 0,  i = 1, …, m < n.   (7.15)

It is assumed that the functions I(X) and fi(X) are twice differentiable. To solve this problem, Lagrange multipliers λ1, …, λm and the following Lagrange function are introduced:

L(X, λ1, …, λm) = I(X) − Σ_{j=1}^{m} λj fj(X).

Then, the necessary extremum conditions take the form

∂L/∂xi = 0,  ∂L/∂λj = 0,  i = 1, …, n,  j = 1, …, m,

or

∂I/∂xi − Σ_{j=1}^{m} λj ∂fj/∂xi = 0,  i = 1, …, n,
fj(X) = 0,  j = 1, …, m.   (7.16)

Let the goal function I(X) and the functions fj (j = 1, …, m) depend on a parameter α. Moreover, let the derivative ∂I/∂α exist. Obviously, the values x1, …, xn and λ1, …, λm satisfying (7.16) are functions of the parameter α. It is required to find the sensitivity coefficients ∂xi/∂α and ∂λj/∂α. Introduce the notation

∂L/∂xi = ψi,  i = 1, …, n,
∂L/∂λj = ψn+j,  j = 1, …, m.

Assume that for α = α0 the following Jacobian is nonzero:

J = D(ψ1, …, ψn+m)/D(x1, …, xn, λ1, …, λm).

Then, according to the aforesaid theorem on differentiability of implicit functions, the desired coefficients ∂xi/∂α and ∂λj/∂α for α = α0 are given by

∂xi/∂α = − [D(ψ1, …, ψn+m)/D(x1, …, xi−1, α, xi+1, …, xn, λ1, …, λm)] / J,
∂λj/∂α = − [D(ψ1, …, ψn+m)/D(x1, …, xn, λ1, …, λj−1, α, λj+1, …, λm)] / J.   (7.17)

It can easily be seen that the values (7.17) are solutions of the following linear algebraic equations, obtained from (7.16) by differentiation with respect to α:

∂φi/∂α + Σ_{j=1}^{n} (∂φi/∂xj)(∂xj/∂α) − Σ_{j=1}^{m} [ (∂λj/∂α)ψji + λj ∂ψji/∂α + λj Σ_{s=1}^{n} (∂ψji/∂xs)(∂xs/∂α) ] = 0,  i = 1, …, n,

∂fj/∂α + Σ_{i=1}^{n} (∂fj/∂xi)(∂xi/∂α) = 0,  j = 1, …, m,

where φi = ∂I/∂xi and ψji = ∂fj/∂xi.

If only the goal function depends on the parameter α, the system for determining the sensitivity coefficients takes the form

∂φi/∂α + Σ_{j=1}^{n} (∂φi/∂xj)(∂xj/∂α) − Σ_{j=1}^{m} [ (∂λj/∂α)ψji + λj Σ_{s=1}^{n} (∂ψji/∂xs)(∂xs/∂α) ] = 0,  i = 1, …, n,

Σ_{i=1}^{n} ψji (∂xi/∂α) = 0,  j = 1, …, m.

Let only the functions fi(X) (i = 1, …, m) depend on the parameter α. Then,

∂I/∂α = Σ_{i=1}^{n} (∂I/∂xi)(∂xi/∂α),   (7.18)

Σ_{i=1}^{n} (∂fj/∂xi)(∂xi/∂α) + ∂fj/∂α = 0,  j = 1, …, m.   (7.19)

Multiplying (7.19) by λj and adding (7.18) after summation over all j, we obtain   m n m   ∂I ∂fi   ∂I ∂fj  ∂xi =− + . λi − λj ∂α ∂α ∂xi j=1 ∂xi ∂α i=1 i=1 Since by (7.16) the term in the brackets is zero in the last equation, we have m  ∂I ∂fi =− λi ∂α ∂α i=1 Thus, in the case when only the constraints (7.15) depend on the parameter α, the sensitivity of the optimal value of the goal function is determined by optimal values of Lagrange coefficients and derivatives of the functions fi with respect to the parameter α for α = α0 . In some problems, for example in economic ones, the constraints have the form fi (X) = bi ,

i = 1, . . . , m,

(7.20)

where bi are resource expenses. Let the coefficient bs play the role of parameter α. Then, Equation (7.20) yields ∂I = λs , s = 1, . . . , m, (7.21) ∂bs
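The identity (7.21) is easy to check numerically. The sketch below uses a toy problem assumed purely for illustration (it is not from the text): minimize $x_1^2 + x_2^2$ subject to $x_1 + x_2 = b$, whose optimal value is $I^*(b) = b^2/2$ with multiplier $\lambda = b$. A finite difference of $I^*$ reproduces the multiplier:

```python
import numpy as np

# Assumed toy problem: min x1^2 + x2^2  subject to  x1 + x2 = b.
# Stationarity of L = I - lam*(x1 + x2 - b) gives x1 = x2 = lam/2 and lam = b.
def solve(b):
    lam = b
    x = np.array([lam / 2, lam / 2])
    return x, lam

def I_opt(b):
    x, _ = solve(b)
    return np.sum(x**2)

b, h = 3.0, 1e-6
_, lam = solve(b)
dI_db = (I_opt(b + h) - I_opt(b - h)) / (2 * h)   # finite-difference dI*/db
print(lam, dI_db)   # both approximately 3.0, confirming dI/db = lambda
```

The agreement is exact up to floating-point error, as (7.21) predicts.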


i.e., the sensitivity coefficient of the optimal value of the goal function with respect to the parameter $b_s$ is equal to the corresponding optimal Lagrange multiplier. In economic problems the value $I$ is interpreted as income or price. Then, the sensitivity coefficient, i.e., the Lagrange multiplier $\lambda_i$, characterizes the variation of the maximal income when the $i$-th resource increases by 1. Equation (7.21) is a generalization of Equation (7.7), obtained for the linear programming problem with the help of the dual variables $y_i$. Note that the Lagrange multipliers can also be considered as dual variables forming the basis of a dual problem for the problem (7.15). Assume that we are interested in the sensitivity of $I$ with respect to variations of a parameter $\alpha$ on which the coefficients $b_s$ depend:

$$b_s = b_s(\alpha), \quad s = 1, \ldots, m.$$

Obviously,

$$\frac{dI}{d\alpha} = \sum_{i=1}^{m}\frac{\partial I}{\partial b_i}\frac{db_i}{d\alpha}.$$

Then, with account for (7.21), we have

$$\frac{dI}{d\alpha} = \sum_{i=1}^{m}\lambda_i\frac{db_i}{d\alpha}.$$

In the given case, the sensitivity coefficient of the maximal value of the goal function is the sum of the Lagrange multipliers with weight coefficients equal to the derivatives of the right-hand sides of the constraints (7.20) with respect to the parameter $\alpha$.

Example 7.2
Consider the following two-dimensional problem:

$$\min_{x_1, x_2} I(x_1, x_2) \quad \text{for} \quad f(x_1, x_2) = 0.$$

For this problem,

$$L = I(x_1, x_2) - \lambda f(x_1, x_2)$$


and the necessary minimum conditions have the form

$$\psi_1 = \frac{\partial L}{\partial x_1} = \frac{\partial I}{\partial x_1} - \lambda\frac{\partial f}{\partial x_1} = 0,$$

$$\psi_2 = \frac{\partial L}{\partial x_2} = \frac{\partial I}{\partial x_2} - \lambda\frac{\partial f}{\partial x_2} = 0,$$

$$\psi_3 = \frac{\partial L}{\partial \lambda} = -f(x_1, x_2) = 0.$$

For the sensitivity coefficients we have the following expressions:

$$\frac{\partial x_1}{\partial\alpha} = -\frac{J_1}{J}, \qquad \frac{\partial x_2}{\partial\alpha} = -\frac{J_2}{J}, \qquad \frac{\partial\lambda}{\partial\alpha} = -\frac{J_3}{J},$$

where

$$J = \frac{D(\psi_1, \psi_2, \psi_3)}{D(x_1, x_2, \lambda)}, \quad J_1 = \frac{D(\psi_1, \psi_2, \psi_3)}{D(\alpha, x_2, \lambda)}, \quad J_2 = \frac{D(\psi_1, \psi_2, \psi_3)}{D(x_1, \alpha, \lambda)}, \quad J_3 = \frac{D(\psi_1, \psi_2, \psi_3)}{D(x_1, x_2, \alpha)}.$$

As a numerical example, we consider the problem $\min(\alpha x_1^2 + x_2^2)$ where $f(x_1, x_2) = x_1 - 5 = 0$. In this problem $L = \alpha x_1^2 + x_2^2 - \lambda(x_1 - 5)$, and the necessary minimum conditions have the form

$$\psi_1 = \frac{\partial L}{\partial x_1} = 2\alpha x_1 - \lambda = 0, \qquad \psi_2 = \frac{\partial L}{\partial x_2} = 2x_2 = 0, \qquad \psi_3 = \frac{\partial L}{\partial \lambda} = -x_1 + 5 = 0.$$

Then, the coordinates of the optimal point are

$$x_1 = 5, \quad x_2 = 0, \quad \lambda = 10\alpha. \quad (7.22)$$

The Jacobians $J$, $J_1$, $J_2$, and $J_3$ are given by

$$J = \begin{vmatrix} 2\alpha & 0 & -1 \\ 0 & 2 & 0 \\ -1 & 0 & 0 \end{vmatrix} = -2, \qquad
J_1 = \begin{vmatrix} 10 & 0 & -1 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{vmatrix} = 0,$$

$$J_2 = \begin{vmatrix} 2\alpha & 10 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{vmatrix} = 0, \qquad
J_3 = \begin{vmatrix} 2\alpha & 0 & 10 \\ 0 & 2 & 0 \\ -1 & 0 & 0 \end{vmatrix} = 20.$$

As a result, we find

$$\frac{\partial x_1}{\partial\alpha} = 0, \qquad \frac{\partial x_2}{\partial\alpha} = 0, \qquad \frac{\partial\lambda}{\partial\alpha} = 10.$$

Obviously, in this illustrative example the desired sensitivity coefficients can also be found by direct differentiation of the coordinates (7.22) of the point of minimum, which are functions of the parameter $\alpha$.
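The same numbers can be reproduced by solving the underlying linear sensitivity system directly, as in (7.17): differentiate $\psi = 0$ with respect to $\alpha$ and solve $J\,s = -\partial\psi/\partial\alpha$. A minimal NumPy sketch (the parameter value $\alpha = 0.7$ is an arbitrary assumption):

```python
import numpy as np

a = 0.7  # assumed test value of the parameter alpha

# Stationarity system for L = a*x1^2 + x2^2 - lam*(x1 - 5):
# psi1 = 2*a*x1 - lam, psi2 = 2*x2, psi3 = -(x1 - 5)
x1, x2, lam = 5.0, 0.0, 10 * a          # optimal point (7.22)

# Jacobian D(psi)/D(x1, x2, lam) and the partial derivative of psi w.r.t. alpha
J = np.array([[2 * a, 0.0, -1.0],
              [0.0,   2.0,  0.0],
              [-1.0,  0.0,  0.0]])
dpsi_da = np.array([2 * x1, 0.0, 0.0])

# The sensitivities solve J @ s = -dpsi_da (the linear system behind (7.17))
s = np.linalg.solve(J, -dpsi_da)
print(s)   # approximately [0, 0, 10]: dx1/da = dx2/da = 0, dlam/da = 10
```

This agrees with direct differentiation of (7.22).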

7.2.3 Sensitivity Coefficients in Economic Problems

Consider the use of sensitivity coefficients in the non-classical demand theory [58] connected with the description of consumer behavior under variation of product prices in a competitive market. Let the consumer be described by a twice continuously differentiable effectiveness function (goal function) $I(X)$, where $x_1, \ldots, x_n$ are the volumes of the corresponding products. The price of a unit of the $i$-th product is $c_i$. It is required to minimize the expenses for which a given effectiveness level $I_z$ can be reached. Thus, we have the following nonlinear programming problem with an equality constraint:

$$\min \sum_{i=1}^{n} c_i x_i, \qquad I(X) = I_z.$$

The Lagrange function for this problem has the form

$$L(X, \lambda) = \sum_{i=1}^{n} c_i x_i - \lambda\left(I(X) - I_z\right).$$

Using this formula, we can obtain the following necessary extremum conditions:

$$c_i - \lambda\frac{\partial I}{\partial x_i} = 0, \quad i = 1, \ldots, n, \qquad I(X) = I_z. \quad (7.23)$$

Assume that it is required to find the sensitivity of the optimal product volumes with respect to variations of one price, say $c_n$, for an unchanged value of $I_z$. Differentiating (7.23) with respect to $c_n$, we obtain

$$\lambda\sum_{j=1}^{n}\frac{\partial^2 I}{\partial x_i\,\partial x_j}\frac{\partial x_j}{\partial c_n} + \frac{\partial I}{\partial x_i}\frac{\partial\lambda}{\partial c_n} = \begin{cases} 0, & \text{if } i \ne n, \\ 1, & \text{if } i = n, \end{cases}$$

$$\sum_{j=1}^{n}\frac{\partial I}{\partial x_j}\frac{\partial x_j}{\partial c_n} = 0.$$

Dividing the first $n$ equations of the last relations by $\lambda$, we write

$$RZ = \Gamma, \quad (7.24)$$

where

$$R = \begin{bmatrix} 0 & \dfrac{\partial I}{\partial x_1} & \cdots & \dfrac{\partial I}{\partial x_n} \\[4pt] \dfrac{\partial I}{\partial x_1} & \dfrac{\partial^2 I}{\partial x_1^2} & \cdots & \dfrac{\partial^2 I}{\partial x_1\,\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial I}{\partial x_n} & \dfrac{\partial^2 I}{\partial x_n\,\partial x_1} & \cdots & \dfrac{\partial^2 I}{\partial x_n^2} \end{bmatrix},$$

$$Z^T = \left[\frac{1}{\lambda}\frac{\partial\lambda}{\partial c_n}, \; \frac{\partial x_1}{\partial c_n}, \; \ldots, \; \frac{\partial x_n}{\partial c_n}\right], \qquad \Gamma^T = \left[0, \; 0, \; \ldots, \; 0, \; \frac{1}{\lambda}\right].$$

The system (7.24) makes it possible to determine any sensitivity coefficient $\partial x_i/\partial c_n$ ($i = 1, \ldots, n$). It can easily be seen that the coefficient $\partial x_n/\partial c_n$, for instance, can be found by the formula

$$\frac{\partial x_n}{\partial c_n} = \frac{1}{\lambda}\,\frac{R_{nn}}{\det R}, \quad (7.25)$$

where $R_{nn}$ is the cofactor of the element $\partial^2 I/\partial x_n^2$ of the matrix $R$.

For a minimum we have the following sufficient condition:

$$\frac{R_{nn}}{\det R} < 0.$$

Hence, using the fact that $\lambda > 0$, we obtain

$$\frac{\partial x_n}{\partial c_n} < 0.$$

The negative sign of the sensitivity coefficient $\partial x_n/\partial c_n$ shows that the consumption volume of the $n$-th product decreases as the price $c_n$ increases, provided that the efficiency level $I_z$ remains the same.

Now we consider the case when the consumer's income is fixed. Then, mathematically the problem can be formulated as follows:

$$\max_X I(X), \qquad \sum_{i=1}^{n} c_i x_i = c_z,$$

where $c_z$ is the fixed income. The Lagrange function for this problem has the form

$$L(X, \lambda) = I(X) - \lambda\left(\sum_{i=1}^{n} c_i x_i - c_z\right),$$

and the necessary conditions are

$$\frac{\partial I}{\partial x_i} - \lambda c_i = 0, \quad i = 1, \ldots, n, \qquad \sum_{i=1}^{n} c_i x_i - c_z = 0. \quad (7.26)$$

Consider, as in the previous example, the influence of fluctuations of the price $c_n$ on the optimal consumer's decision. Differentiation of (7.26) with respect to $c_n$ yields the corresponding sensitivity equations

$$\sum_{j=1}^{n}\frac{\partial^2 I}{\partial x_i\,\partial x_j}\frac{\partial x_j}{\partial c_n} - c_i\frac{\partial\lambda}{\partial c_n} = \begin{cases} 0, & \text{if } i \ne n, \\ \lambda, & \text{if } i = n, \end{cases} \qquad i = 1, \ldots, n,$$

$$\sum_{i=1}^{n} c_i\frac{\partial x_i}{\partial c_n} + x_n = 0.$$

After routine transformations we can write

$$RZ_1 = P, \quad (7.27)$$

where

$$P^T = \left[-\lambda x_n, \; 0, \; \ldots, \; 0, \; \lambda\right], \qquad Z_1^T = \left[-\frac{1}{\lambda}\frac{\partial\lambda}{\partial c_n}, \; \frac{\partial x_1}{\partial c_n}, \; \ldots, \; \frac{\partial x_n}{\partial c_n}\right].$$

From Equation (7.27) we obtain

$$\frac{\partial x_i}{\partial c_n} = \frac{1}{\det R}\left(-\lambda x_n R_{1,i+1} + \lambda R_{n,i+1}\right), \quad (7.28)$$

where $R_{1,i+1}$ and $R_{n,i+1}$ are the cofactors of the elements $(1, i+1)$ and $(n, i+1)$ of the matrix $R$, respectively.

Then, we investigate the influence of fluctuations of $c_z$ on the optimal consumer's decision (the values of the variables $x_i$). With this aim in view, let us differentiate the necessary conditions (7.26) with respect to the parameter $c_z$, obtaining the following sensitivity equations:

$$\sum_{j=1}^{n}\frac{\partial^2 I}{\partial x_i\,\partial x_j}\frac{\partial x_j}{\partial c_z} - c_i\frac{\partial\lambda}{\partial c_z} = 0, \quad i = 1, \ldots, n, \qquad \sum_{i=1}^{n} c_i\frac{\partial x_i}{\partial c_z} = 1,$$

or, in vector-matrix form,

$$RZ_2 = B, \quad (7.29)$$

where $B^T = [\lambda, \; 0, \; \ldots, \; 0]$ and $Z_2$ is defined analogously to $Z_1$ with $c_n$ replaced by $c_z$. Therefore,

$$\frac{\partial x_i}{\partial c_z} = \lambda\,\frac{R_{1,i+1}}{\det R}. \quad (7.30)$$

Comparing (7.28) and (7.30), we obtain

$$\frac{\partial x_i}{\partial c_n} = -x_n\frac{\partial x_i}{\partial c_z} + \lambda\,\frac{R_{n,i+1}}{\det R}. \quad (7.31)$$

In economics this equation is called the Slutsky equation [58]; it determines the influence of a non-compensated price fluctuation on the demand for each product. If $i = n$, the second term on the right side of (7.31) is proportional to the sensitivity coefficient $\partial x_n/\partial c_n$ obtained in the previous example (see (7.25)). Therefore, for $i = n$ we can write

$$\left(\frac{\partial x_n}{\partial c_n}\right)_{c=c_z} = -x_n\left(\frac{\partial x_n}{\partial c_z}\right)_{c_n=c_{nz}} + k\left(\frac{\partial x_n}{\partial c_n}\right)_{I=I_z}. \quad (7.32)$$

The indices $c = c_z$, $c_n = c_{nz}$, and $I = I_z$ show under what condition (fixed income, price, or efficiency level) the corresponding sensitivity coefficients are calculated.
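The decomposition (7.31)/(7.32) can be illustrated numerically. The sketch below is an assumed example (not from the text): for a Cobb-Douglas goal function $I = x_1^a x_2^{1-a}$, both the fixed-income (Marshallian) and fixed-utility (Hicksian) demands have closed forms, and finite differences confirm that the uncompensated price sensitivity equals the compensated one minus $x_1$ times the income sensitivity:

```python
import numpy as np

a, c1, c2, cz = 0.4, 2.0, 3.0, 120.0   # assumed weight, prices, income

# Fixed-income (Marshallian) demand for I(x) = x1^a * x2^(1-a)
def x1_m(c1, cz):
    return a * cz / c1

# Fixed-utility (Hicksian) demand at the utility level reached at (c1, c2, cz)
Iz = x1_m(c1, cz)**a * ((1 - a) * cz / c2)**(1 - a)
def x1_h(c1):
    # expenditure-minimizing x1 on the indifference level Iz
    return Iz * (a * c2 / ((1 - a) * c1))**(1 - a)

h = 1e-5
dx1m_dc1 = (x1_m(c1 + h, cz) - x1_m(c1 - h, cz)) / (2 * h)   # uncompensated
dx1m_dcz = (x1_m(c1, cz + h) - x1_m(c1, cz - h)) / (2 * h)   # income effect
dx1h_dc1 = (x1_h(c1 + h) - x1_h(c1 - h)) / (2 * h)           # compensated

# Slutsky decomposition: uncompensated = compensated - x1 * income effect
lhs = dx1m_dc1
rhs = dx1h_dc1 - x1_m(c1, cz) * dx1m_dcz
print(abs(lhs - rhs))   # near zero: the decomposition holds
```

Here the compensated term plays the role of the second summand in (7.31), computed at a fixed efficiency level.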

7.2.4 Nonlinear Programming with Weak Equality Constraints

A typical nonlinear programming problem with weak equality constraints can be formulated as follows: find an extremum (maximum or minimum) of a goal function $I(X)$ under the constraints

$$F(X) \le 0. \quad (7.33)$$

Depending on the specific statement of the problem, some or all components of the vector $X$ may also have to satisfy a non-negativity condition. It is generally known that nonlinear programming problems posed in this way have no universal solution algorithm (like the simplex method for linear programming problems). There are only special algorithms, determined by the form of the goal function and constraints, for the simplest types of nonlinear programming problems. Below, we consider approaches to sensitivity investigation for some of these problems.

A. Minimization or maximization of a goal function under the constraints

$$F(X) = 0, \qquad X \ge 0.$$

It is assumed that the functions $I(X)$ and $F(X)$ are twice differentiable. Assume that the goal function reaches its extremum at a point $X_0$ inside the boundary of the region of admissible solutions. If the solution is


determined by an internal point, it should satisfy the necessary conditions obtained with the help of the Lagrange function [116]:

$$L(X, \Lambda) = I(X) - \Lambda^T F(X).$$

Therefore, at the first stage, all stationary points in the nonnegative orthant must be found and analyzed. Then, we investigate the boundary of this orthant. With this aim in view, we first equate one of the variables to zero and solve the problem with the remaining $n-1$ variables; it is necessary to solve $n$ such problems. Then, we equate each pair of variables to zero and solve the problem with the remaining $n-2$ variables; there are $n!/(2!(n-2)!)$ such problems. Continuing in this way, the number of variables is reduced to $m$. The extremum is determined by considering the whole aggregate of solutions obtained in this very cumbersome procedure. As a result, the optimal solution is determined by a quite specific model. It is assumed that the structure of this model remains unchanged under small fluctuations of the parameters of interest. Then, the sensitivity equations can be obtained directly from this model on the basis of the results obtained above.

B. Minimization or maximization of a goal function under the constraints

$$f_i(X) \le b_i, \quad i = 1, \ldots, m_1,$$

$$f_j(X) \ge b_j, \quad j = m_1+1, \ldots, m_1+m_2,$$

$$f_s(X) = c_s, \quad s = m_1+m_2+1, \ldots, m.$$

This problem can easily be reduced to the previous one by adding $m_1 + m_2$ nonnegative slack variables $x_{qi}$:

$$f_i(X) + x_{qi} = b_i, \quad i = 1, \ldots, m_1,$$

$$f_j(X) - x_{qj} = b_j, \quad j = m_1+1, \ldots, m_1+m_2,$$

$$f_s(X) = c_s, \quad s = m_1+m_2+1, \ldots, m,$$

$$x_{qi} \ge 0, \quad i = 1, \ldots, m_1+m_2.$$

To find an extremum inside the nonnegative orthant of the space of the variables $x_{qi}$, the following Lagrange function can be used:

$$L(X, X_q, \Lambda) = I(X) - \sum_{i=1}^{m_1}\lambda_i\left[f_i(X) + x_{qi} - b_i\right] - \sum_{j=m_1+1}^{m_1+m_2}\lambda_j\left[f_j(X) - x_{qj} - b_j\right] - \sum_{s=m_1+m_2+1}^{m}\lambda_s\left[f_s(X) - c_s\right]. \quad (7.34)$$

The necessary extremum conditions obtained with the use of this function contain equalities of the form

$$\frac{\partial L}{\partial x_{qi}} = -\lambda_i = 0, \quad i = 1, \ldots, m_1, \qquad \frac{\partial L}{\partial x_{qj}} = \lambda_j = 0, \quad j = m_1+1, \ldots, m_1+m_2. \quad (7.35)$$

Hence, if $x_{qi} > 0$ at the extremal point, the corresponding multiplier vanishes and the corresponding constraint may be ignored. This is especially important for investigating the sensitivity of the optimal solution of the initial problem. Thus, under the conditions (7.35), the algorithm of sensitivity investigation is completely reduced to the case of an extremal problem with equality constraints. If some of the $x_{qi}$ ($i = 1, \ldots, m_1+m_2$) are zero at the point of extremum, the corresponding Lagrange multipliers may differ from zero. In the general case, the sensitivity equations are constructed for the model associated with the optimal solution. Moreover, it is assumed that under small fluctuations of the parameters in question, the structure of the model and, correspondingly, of the optimal solution, remain unchanged.
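The role of the slack variables can be seen on a small example assumed for illustration (not from the text): minimize $(x_1-1)^2 + (x_2-2)^2$ subject to $x_1 + x_2 \le b$. When the slack is positive the multiplier vanishes and the constraint can be dropped; when the slack is zero, the equality-constrained model applies:

```python
import numpy as np

# Assumed illustration: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 <= b,
# rewritten with a slack variable: x1 + x2 + xq = b, xq >= 0.
def solve(b):
    x = np.array([1.0, 2.0])          # unconstrained minimizer
    if x.sum() <= b:                  # slack xq > 0  ->  multiplier is zero
        return x, 0.0
    # slack xq = 0: solve the equality-constrained problem x1 + x2 = b
    shift = (b - 3.0) / 2.0           # project (1, 2) onto the line x1 + x2 = b
    x = np.array([1.0 + shift, 2.0 + shift])
    lam = 3.0 - b                     # multiplier from 2*(x1 - 1) + lam = 0
    return x, lam

print(solve(4.0))   # constraint inactive: x = (1, 2), multiplier 0
print(solve(2.0))   # constraint active:   x = (0.5, 1.5), multiplier 1
```

The two branches of `solve` correspond exactly to the two cases discussed above: which branch is optimal fixes the "model", and sensitivity equations are then written for that model.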

7.2.5 Sensitivity of Convex Programming Problems

In the most general form, the problem of sensitivity analysis for convex programming problems is solved by the theorem on marginal values [33, 34].

DEFINITION 7.1 In linear programming and game theory, the rates of variation of the goal function and of the value of a matrix game with respect to parameter fluctuations are called marginal values.

In fact, the marginal values are the corresponding sensitivity coefficients. It appears that the first investigation of marginal values in matrix games and linear programming problems was performed in [64]. Later, those results were corrected and generalized to convex programming problems in [33, 34]. Consider the following convex programming problem:

$$\max_X I(X, \alpha), \quad (7.36)$$

$$f_i(X, \alpha) \ge 0, \quad i = 1, \ldots, m, \quad (7.37)$$

$$X \in D_X, \quad \alpha = \alpha_0 + \Delta\alpha, \quad (7.38)$$

depending on a parameter $\Delta\alpha \in [0, \varepsilon]$.

Let $I_0(\Delta\alpha)$ be the maximal value of the goal function under the conditions (7.37) and (7.38). Introduce the Lagrange function of the above direct problem

$$L(X, \Lambda, \Delta\alpha) = I(X, \Delta\alpha) - \sum_{i=1}^{m}\lambda_i f_i(X, \Delta\alpha) \quad (7.39)$$

and formulate the dual problem as

$$\min_{\Lambda \in D_\Lambda}\,\max_{X \in D_X} L(X, \Lambda, \Delta\alpha). \quad (7.40)$$

Then, the theorem on marginal values of the initial problem can be formulated as follows [34].

THEOREM 7.2 Let the direct and dual problems be solvable for $\Delta\alpha = 0$, and let the sets of their solutions, i.e., the domains $D_X(0)$ and $D_\Lambda(0)$, be bounded. Assume also that the functions

$$I(X, \Delta\alpha) = f_0, \qquad f_i(X, \Delta\alpha), \quad i = 1, \ldots, m,$$

are differentiable at the point $\Delta\alpha = 0$ for any $X$ in a neighborhood $D'_X$ of the direct problem solution domain $D_X(0)$. Moreover, let the derivatives be such that

$$\frac{f_i(X, \Delta\alpha) - f_i(X, 0)}{\Delta\alpha} \to \frac{\partial f_i(X, 0)}{\partial\Delta\alpha} \quad \text{for } \Delta\alpha \to +0$$

uniformly with respect to $X \in D'_X$. Then, the function $I_0(\Delta\alpha)$ has the right derivative at the point $\Delta\alpha = 0$ such that

$$\frac{\partial I_0}{\partial\Delta\alpha^+} = \max_{X \in D_X}\,\min_{\Lambda \in D_\Lambda}\frac{\partial L(X, \Lambda, \Delta\alpha)}{\partial\Delta\alpha} = \min_{\Lambda \in D_\Lambda}\,\max_{X \in D_X}\frac{\partial L(X, \Lambda, \Delta\alpha)}{\partial\Delta\alpha},$$

where

$$\left(\frac{\partial L(X, \Lambda, \alpha)}{\partial\alpha}\right)_0 = \left(\frac{\partial I(X, \alpha)}{\partial\alpha}\right)_{\alpha=\alpha_0} - \sum_{i=1}^{m}\lambda_i\left(\frac{\partial f_i(X, \alpha)}{\partial\alpha}\right)_{\alpha=\alpha_0}. \quad (7.41)$$

Obviously, if only the goal function depends on the parameter $\Delta\alpha$, we have

$$\frac{\partial I_0}{\partial\Delta\alpha^+} = \max_X\left(\frac{\partial I(X, \alpha)}{\partial\alpha}\right)_{\alpha=\alpha_0}. \quad (7.42)$$

If $\Delta\alpha$ affects only the functions $f_i$ ($i = 1, \ldots, m$), then

$$\frac{\partial I_0}{\partial\Delta\alpha^+} = \max_X\,\min_\Lambda\left(-\sum_{i=1}^{m}\lambda_i\frac{\partial f_i}{\partial\alpha}\right)_{\alpha=\alpha_0}. \quad (7.43)$$

Finally, if only $f_j$ depends on $\Delta\alpha$, we obtain

$$\frac{\partial I_0}{\partial\Delta\alpha^+} = \max_X\,\min_{\lambda_j}\left(-\lambda_j\frac{\partial f_j}{\partial\alpha}\right)_{\alpha=\alpha_0}. \quad (7.44)$$

In the case when the sets $D_X$ and $D_\Lambda$ contain a unique optimal solution, we have, correspondingly,

$$\frac{\partial I_0}{\partial\Delta\alpha^+} = \left(\frac{\partial I(X, \alpha)}{\partial\alpha}\right)_{\alpha=\alpha_0}, \qquad
\frac{\partial I_0}{\partial\Delta\alpha^+} = -\sum_{i=1}^{m}\lambda_i\left.\frac{\partial f_i}{\partial\alpha}\right|_{\alpha=\alpha_0}, \qquad
\frac{\partial I_0}{\partial\Delta\alpha^+} = -\lambda_j\left.\frac{\partial f_j}{\partial\alpha}\right|_{\alpha=\alpha_0}.$$

The formulas for the left and right derivatives can easily be found by applying the procedure of calculating the directional derivative [39, 43] to convex programming problems.

THEOREM 7.3 Consider the function

$$\varphi(X) = \max_Y f(X, Y).$$

Assume that the function $f(X, Y)$ is continuous with respect to its variables together with $\partial f(X, Y)/\partial X$. Then, the function $\varphi(X)$ has a derivative $\partial\varphi/\partial S$ at any point $X$ in any direction determined by the corresponding unit vector $S = (s_1, \ldots, s_n)$. Moreover,

$$\frac{\partial\varphi(X)}{\partial S} = \max_Y \sum_{i=1}^{n}\frac{\partial f}{\partial x_i}\,s_i.$$
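Theorem 7.3 can be checked numerically on an assumed scalar example (not from the text): for $f(x, y) = xy - y^2$, the inner maximizer is $y^*(x) = x/2$, so the theorem predicts $d\varphi/dx = \partial f/\partial x$ at $y = y^*$, i.e., $x/2$:

```python
import numpy as np

# phi(x) = max_y f(x, y) with f = x*y - y^2 (assumed example).
# The maximizer is y*(x) = x/2, so Theorem 7.3 gives dphi/dx = y*(x).
ygrid = np.linspace(-5, 5, 200001)

def phi(x):
    return np.max(x * ygrid - ygrid**2)

x0, h = 1.2, 1e-5
fd = (phi(x0 + h) - phi(x0 - h)) / (2 * h)   # numerical derivative of phi
print(fd, x0 / 2)                             # both approximately 0.6
```

Here the maximum over $Y$ is approximated on a fine grid; the finite-difference slope of $\varphi$ matches $\partial f/\partial x$ evaluated at the maximizer, as the theorem states.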

It is noteworthy that Theorem 7.3 covers practically all the mathematical programming problems considered above. As a special case, from this theorem we can deduce the theorem on marginal values and the corresponding formulas for marginal values in linear programming problems. To obtain this result, it suffices to substitute the Lagrange function of a linear programming problem into the conditions of Theorem 7.3:

$$L(X, Y, \Delta\alpha) = \sum_{i=1}^{n} c_i(\Delta\alpha)\,x_i + \sum_{j=1}^{m} b_j(\Delta\alpha)\,y_j - \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij}(\Delta\alpha)\,x_j\,y_i.$$

If the derivatives on the left and on the right of the point $\Delta\alpha = 0$ coincide, Theorem 7.3 makes it possible to obtain the basic relations for sensitivity coefficients of the form (7.6), (7.7), and so on.
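For linear programming, the marginal values are exactly the dual variables, which modern solvers report directly. A hedged sketch using SciPy's `linprog` with the HiGHS backend (the `ineqlin.marginals` result field is assumed to be available in the installed SciPy version); the LP itself is an assumed textbook-style example:

```python
import numpy as np
from scipy.optimize import linprog

# Assumed LP (in min form): min c^T x  s.t.  A x <= b,  x >= 0
c = np.array([-3.0, -5.0])                      # i.e., maximize 3*x1 + 5*x2
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

res = linprog(c, A_ub=A, b_ub=b, method="highs")
marg = res.ineqlin.marginals                    # dual values from the solver

# Finite-difference marginal value with respect to the third resource b_3
h = 1e-6
b2 = b.copy(); b2[2] += h
res2 = linprog(c, A_ub=A, b_ub=b2, method="highs")
fd = (res2.fun - res.fun) / h

print(marg[2], fd)   # the two agree: the dual is the sensitivity of the optimum
```

At a nondegenerate optimum the reported dual coincides with the one-sided derivative of the optimal value, which is the content of the marginal-value theorem for LP.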

7.3 Sensitivity of Simplest Variational Problems

7.3.1 Simplest Variational Problems

Let $F(t, y, \dot y)$ be a function having continuous partial derivatives with respect to all arguments up to the second order inclusive. It is required to find a function $y(t)$ having a continuous derivative and satisfying the conditions

$$y(t_0) = a, \qquad y(t_1) = b, \quad (7.45)$$

such that it ensures a weak extremum of the functional

$$I(y) = \int_{t_0}^{t_1} F(t, y, \dot y)\,dt. \quad (7.46)$$

It is known that a solution to this problem must satisfy the following Euler equation

$$F_y - \frac{d}{dt}F_{\dot y} = 0 \quad (7.47)$$

with the boundary conditions (7.45).

DEFINITION 7.2 Integral curves of the Euler equation are called extremals.

Assume that the functional (7.46) and the boundary conditions (7.45) depend

on a parameter $\alpha \in [\alpha_1, \alpha_2]$:

$$I(y, \alpha) = \int_{t_0(\alpha)}^{t_1(\alpha)} F(t, y, \dot y, \alpha)\,dt. \quad (7.48)$$

It is also assumed that all requirements for existence of a solution of the simplest variational problem with the functional (7.46) hold for all values of $\alpha \in [\alpha_1, \alpha_2]$. Then, we may introduce a single-parameter family of solutions of the variational problems

$$y(t, \alpha) = y[t, a(\alpha), b(\alpha), t_0(\alpha), t_1(\alpha), \alpha]. \quad (7.49)$$

In the present section we consider techniques for constructing sensitivity equations for determination of the derivative

$$u(t) = \frac{\partial y(t)}{\partial\alpha},$$

called the sensitivity function of the simplest variational problem.

7.3.2 Existence Conditions for Sensitivity Function

To derive these conditions, we employ the general approach to sensitivity investigation for boundary-value problems developed in Section 2.6. The Euler equation (7.47) can be written in the form

$$F_{\dot y\dot y}\,\ddot y + \dot y\,F_{y\dot y} + F_{\dot y t} - F_y = 0 \quad (7.50)$$

or in the following vector form:

$$\dot Y = \Phi(Y, t, \alpha), \quad (7.51)$$

where

$$y = y_1, \quad \dot y = y_2, \quad Y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad \Phi = \begin{bmatrix} y_2 \\[4pt] \dfrac{F_y - F_{\dot y t} - F_{y\dot y}\,y_2}{F_{\dot y\dot y}} \end{bmatrix},$$

assuming that $F_{\dot y\dot y} \ne 0$ for $t \in [t_0, t_1]$ and $\alpha = \alpha_0$.



Assume that for $\alpha = \alpha_0$ there is a solution of Equations (7.50) and (7.51) with the corresponding boundary conditions. Let the vector function $\Phi(y, \dot y, t, \alpha)$ be continuously differentiable with respect to all arguments, and let the functions $t_0(\alpha)$ and $t_1(\alpha)$ be differentiable with respect to $\alpha$. Let

$$Y(t) = Y(t, t_0, Y_0, \alpha) \quad (7.52)$$

be a general solution of Equation (7.51) such that

$$Y(t_0, t_0, Y_0, \alpha) = Y_0. \quad (7.53)$$

Moreover,

$$Y(t_1) = Y(t_1, t_0, Y_0, \alpha). \quad (7.54)$$

Then, the boundary conditions (7.45) can be reduced to the form

$$g_1(Y_0, t_0, \alpha) = y_0 - a = 0, \qquad g_2(Y_0, t_1, \alpha) = y(t_1, Y_0, \alpha) - b = 0. \quad (7.55)$$

Consider the Jacobian

$$J = \frac{D(g_1, g_2)}{D(y_0, \dot y_0)} = \begin{vmatrix} 1 & 0 \\[4pt] \dfrac{\partial y(t_1)}{\partial y_0} & \dfrac{\partial y(t_1)}{\partial \dot y_0} \end{vmatrix} = \frac{\partial y(t_1)}{\partial \dot y_0},$$

i.e., the Jacobian $J$ is equal to the value at $t = t_1$ of the sensitivity function of the variable $y(t)$ with respect to the initial condition $\dot y_0$. According to Theorem 2.17 formulated in Section 2.6, if $J \ne 0$, the vector $Y_0(\alpha)$ is continuous and continuously differentiable with respect to the parameter $\alpha$. And this is sufficient for existence and continuity of the sensitivity function $\partial y(t)/\partial\alpha$ on the interval $[t_0, t_1]$. Denote

$$z(t) = \frac{\partial y(t)}{\partial \dot y_0}.$$

Differentiating Equation (7.47) with respect to the parameter $\dot y_0$, we obtain

$$F_{yy}\,z + F_{y\dot y}\,\dot z - \frac{d}{dt}\left(F_{y\dot y}\,z + F_{\dot y\dot y}\,\dot z\right) = 0. \quad (7.56)$$

Obviously, the initial conditions for this equation have the form

$$z(t_0) = 0, \qquad \dot z(t_0) = 1. \quad (7.57)$$

The linear homogeneous second-order equation (7.56) coincides with the Jacobi equation used for investigating sufficient conditions of extremum in the calculus of variations. In particular, if there exists a solution of the Jacobi equation that becomes zero for $t = t_0$ and is nonzero at all other points of the interval $(t_0, t_1)$, then the Jacobi condition holds, and the arc of the extremal $ab$ can be included in a central field of extremals. As distinct from the Jacobi condition, the condition $z(t_1) \ne 0$ is weaker: for existence and continuity of the sensitivity function $u(t)$ it is sufficient that the sensitivity function $z(t)$ be nonzero at $t = t_1$.
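This condition is easy to verify in concrete cases. For instance, for $F = y^2 + \tau^2\dot y^2$ (the integrand of Example 7.3 below, with assumed numerical values of $\tau$, $t_0$, $t_1$), the Jacobi equation (7.56) reduces to $\tau^2\ddot z = z$ with $z(t_0) = 0$, $\dot z(t_0) = 1$, whose solution $z = \tau\sinh((t - t_0)/\tau)$ is nonzero for all $t > t_0$. A small RK4 sketch confirms this:

```python
import numpy as np

tau, t0, t1 = 0.5, 0.0, 1.0   # assumed numerical values

# Jacobi equation for F = y^2 + tau^2 * ydot^2:  tau^2 * z'' = z,
# with z(t0) = 0, z'(t0) = 1.  Integrate by classical RK4.
def rhs(state):
    z, zd = state
    return np.array([zd, z / tau**2])

state = np.array([0.0, 1.0])
n = 2000
h = (t1 - t0) / n
for _ in range(n):
    k1 = rhs(state); k2 = rhs(state + h/2*k1)
    k3 = rhs(state + h/2*k2); k4 = rhs(state + h*k3)
    state += h/6 * (k1 + 2*k2 + 2*k3 + k4)

z1 = state[0]
print(z1, tau * np.sinh((t1 - t0) / tau))   # both approx. 1.813; z(t1) != 0
```

Since $z(t_1) \ne 0$, the sensitivity function of this problem exists and is continuous on $[t_0, t_1]$.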

7.3.3 Sensitivity Equations

Assume that all the conditions of existence and continuity of the sensitivity function considered in the previous paragraph hold. Then, the problem of constructing the sensitivity function for the above variational problem can be reduced to a boundary-value problem for the sensitivity equation obtained by direct differentiation of the Euler equation, in the form (7.47) or (7.50), with respect to the parameter $\alpha$. Differentiating (7.47), we obtain the equation

$$\frac{d}{dt}\left(F_{\dot y\dot y}\,\dot u + F_{y\dot y}\,u + F_{\dot y\alpha}\right) - F_{y\dot y}\,\dot u - F_{yy}\,u = F_{y\alpha}. \quad (7.58)$$

Equation (7.58) is a linear non-homogeneous differential equation of the second order. Its form coincides with that of the non-homogeneous Jacobi equation. Hereinafter we shall call this relation the Euler sensitivity equation. Equation (7.58) can be rewritten in an explicit form with respect to the sensitivity function $u(t)$:

$$F_{\dot y\dot y}\,\ddot u + \left(\frac{d}{dt}F_{\dot y\dot y}\right)\dot u + \left(\frac{d}{dt}F_{y\dot y} - F_{yy}\right)u - F_{\alpha y} + \frac{d}{dt}F_{\dot y\alpha} = 0. \quad (7.59)$$

Represent the boundary conditions for the initial problem in the form

$$G[Y(t_0, \alpha), Y(t_1, \alpha), \alpha] = 0, \quad (7.60)$$

where

$$g_1 = y(t_0, \alpha) - a, \qquad g_2 = y(t_1, \alpha) - b, \qquad G^T = (g_1, g_2).$$

Differentiating (7.60) with respect to $\alpha$, we obtain

$$\frac{\partial G}{\partial Y_0}\frac{dY_0}{d\alpha} + \frac{\partial G}{\partial Y_1}\frac{dY_1}{d\alpha} + \frac{\partial G}{\partial\alpha} = 0$$

or

$$\frac{dy_0}{d\alpha} = \frac{da}{d\alpha}, \qquad \frac{dy_1}{d\alpha} = \frac{db}{d\alpha}.$$

But

$$\frac{dy(t_0)}{d\alpha} = u(t_0) + \dot y_0\frac{dt_0}{d\alpha}, \qquad \frac{dy(t_1)}{d\alpha} = u(t_1) + \dot y_1\frac{dt_1}{d\alpha}.$$

Then, we obtain the following boundary conditions for Equation (7.59):

$$u(t_0) = \frac{da}{d\alpha} - \dot y_0\frac{dt_0}{d\alpha}, \qquad u(t_1) = \frac{db}{d\alpha} - \dot y_1\frac{dt_1}{d\alpha}. \quad (7.61)$$

The above approach can be extended to generalizations of the simplest variational problem. Thus, if the functional $I$ depends on several functions $y_1, \ldots, y_n$, we have a system of differential sensitivity equations

$$\sum_{j=1}^{n} F_{\dot y_i\dot y_j}\,\ddot u_j + \sum_{j=1}^{n}\left(\frac{d}{dt}F_{\dot y_i\dot y_j}\right)\dot u_j + \sum_{j=1}^{n}\Phi_{i y_j}\,u_j + \Phi_{i\alpha} = 0, \quad i = 1, \ldots, n, \quad (7.62)$$

where

$$u_i = \frac{dy_i}{d\alpha}, \qquad \Phi_i = \frac{d}{dt}F_{\dot y_i} - F_{y_i}.$$

From Equation (7.58) we can easily derive sensitivity equations for the following typical special cases of the Euler equation.

A. The function $F$ is independent of $\dot y$. The Euler equation has the form

$$F_y(t, y, \alpha) = 0.$$

The curve described by this equation passes through the boundary points $(t_0, a)$ and $(t_1, b)$ only in exceptional cases. For this equation, the sensitivity equation is given by

$$F_{yy}\,u + F_{y\alpha} = 0.$$

B. The function $F$ is independent of $y$. The Euler equation has the form

$$\frac{d}{dt}F_{\dot y} = 0,$$

which is associated with the following sensitivity equation:

$$F_{\dot y\dot y}\,\ddot u + \left(\frac{d}{dt}F_{\dot y\dot y}\right)\dot u + \frac{d}{dt}F_{\dot y\alpha} = 0.$$

C. The function $F$ depends only on $\dot y$. The Euler equation $F_{\dot y\dot y}\,\ddot y = 0$ corresponds to the sensitivity equation

$$F_{\dot y\dot y}\,\ddot u + \left(\frac{\partial}{\partial\alpha}F_{\dot y\dot y}\right)\ddot y = 0.$$

Example 7.3
Given the functional

$$I(y) = \int_{t_0}^{t_1}\left(y^2 + \tau^2\dot y^2\right)dt,$$

construct the sensitivity equation for the function $u(t) = \partial y(t)/\partial\tau$. For the given functional we have

$$F = y^2 + \tau^2\dot y^2, \quad F_{\dot y\dot y} = 2\tau^2, \quad \frac{d}{dt}F_{\dot y\dot y} = 0, \quad \Phi = 2\tau^2\ddot y - 2y, \quad \Phi_y = -2, \quad \Phi_\alpha = \Phi_\tau = 4\tau\ddot y.$$

According to (7.59), the sensitivity equation takes the form

$$\tau^2\ddot u - u = -2\tau\ddot y. \quad (7.63)$$

(7.63)

Assume that the boundary conditions of the initial variational problem are independent of the parameter τ . Then, the boundary conditions of the sensitivity equation (7.63) are zero. Example 7.4 Find a minimum of the functional 2  I(y) =

1 + y˙ 2 dt, t

y1 = y(1) = 0,

y2 = y(2) = 1.

1

A solution of this problem is given by the equation (y − c1 )2 + t2 =

© 2000 by CRC Press LLC

© 2000 by CRC Press LLC

1 , c22

c1 = 2,

1 c2 = √ , 5

i.e., (y − 2)2 + t2 = 5. Let the left boundary condition depend on the parameter α so that y1 (α) = α. To determine c1 and c2 with such a problem statement, we construct the following system of equations: 1 , c22

(1 − c1 )2 + 4 =

4 − α2 , 2(1 − α)

1−α . c2 = √ 5 − 6α

(α − c1 )2 + 1 =

1 , c22

Hence, c1 =

and we obtain the solution  y(t, α) = −

5 − 6α 2 . − t2 + (1 − α)2 1−α

The sensitivity function is determined by direct differentiation of the solution with respect to α: u(t) =

 dy  2 = −√ + 2. dα α=0 5 − t2

Then, we use the sensitivity equation, which for this example has the form

$$\frac{d}{dt}\left(F_{\dot y\dot y}\,\dot u\right) = 0, \quad \text{or} \quad F_{\dot y\dot y}\,\dot u = c_1,$$

with the boundary conditions

$$u(t_0 = 1) = 1, \qquad u(t_1 = 2) = 0.$$

Since

$$F_{\dot y\dot y} = \frac{(5 - t^2)^{3/2}}{5^{3/2}\,t},$$

we have

$$\dot u = \frac{5^{3/2}\,c_1\,t}{(5 - t^2)^{3/2}}.$$

Hence,

$$u(t) = \frac{5\sqrt 5\,c_1}{\sqrt{5 - t^2}} + \tilde c_2.$$

Using the boundary conditions, we find

$$c_1 = -\frac{2}{5\sqrt 5}, \qquad \tilde c_2 = 2,$$

and

$$u(t) = -\frac{2}{\sqrt{5 - t^2}} + 2.$$
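Both routes give the same $u(t)$, and the result can also be checked numerically: the boundary equations determine $c_1$ and the circle radius as functions of $\alpha$, so $u(t)$ is a plain finite difference of the resulting family (NumPy sketch of the computation just described):

```python
import numpy as np

def y(t, a):
    # Extremal of Example 7.4 through (1, a) and (2, 1): the circle arc
    # (y - c1)^2 + t^2 = r^2, with c1 and r fixed by the boundary conditions.
    c1 = (4 - a**2) / (2 * (1 - a))
    r2 = (1 - c1)**2 + 4
    return c1 - np.sqrt(r2 - t**2)

t = np.linspace(1.0, 2.0, 11)
h = 1e-6
u_fd = (y(t, h) - y(t, -h)) / (2 * h)        # central finite difference in alpha
u_exact = 2 - 2 / np.sqrt(5 - t**2)          # sensitivity function found above

print(np.max(np.abs(u_fd - u_exact)))        # tiny: the two computations agree
```

Note that at $t = 2$ both give $u = 0$, consistent with the fixed right boundary condition.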

7.4 Sensitivity of Variational Problems

7.4.1 Variational Problem with Movable Bounds

The simplest problem of this kind can be formulated as follows.

Variational problem with movable bounds. Given a point $(t_0, y_0)$ and a curve $y = H(t)$, find a curve passing through the point $(t_0, y_0)$ and intersecting the curve $H(t)$ at a nonzero angle that ensures a weak extremum of the functional

$$I(y) = \int_{t_0}^{t_1} F(t, y, \dot y)\,dt.$$

The integral is taken along this curve from the point $(t_0, y_0)$ to the point of its intersection with the curve $H(t)$. It is known that a solution of this problem satisfies the Euler equation, the conditions

$$y(t_0) = a, \qquad y(t_1) = H(t_1), \quad (7.64)$$

and the transversality condition

$$\left.\left[F + F_{\dot y}\left(H_t - \dot y\right)\right]\right|_{t=t_1} = 0. \quad (7.65)$$

7.4.2 Existence Conditions for Sensitivity Functions

Let all the conditions set for the simplest variational problem hold. Moreover, we assume that the function $H(t, \alpha)$ has a continuous second derivative with respect to $t$ and a mixed derivative with respect to $t$ and $\alpha$. To find existence conditions, we employ the approach developed in Section 2.6. With this aim in view, rewrite Equations (7.64) and (7.65) in the form

$$g_1(y_0, t_0, \alpha) = 0, \qquad g_2(y_0, \dot y_0, t_0, t_1, \alpha) = 0, \qquad f(y_0, \dot y_0, t_0, t_1, \alpha) = 0,$$

where

$$g_1 = y_0 - a, \qquad g_2 = y(t_1, t_0, y_0, \dot y_0) - H(t_1),$$

$$f = F[y(t_1, y_0, \dot y_0, \alpha), \dot y(t_1, y_0, \dot y_0, \alpha), t_1, \alpha] + F_{\dot y}[y(t_1, y_0, \dot y_0, \alpha), \dot y(t_1, y_0, \dot y_0, \alpha), t_1, \alpha]\left(H_t(t_1) - \dot y(t_1, y_0, \dot y_0, \alpha)\right).$$

Consider the Jacobian (since $\partial g_1/\partial y_0 = 1$ and $\partial g_1/\partial\dot y_0 = \partial g_1/\partial t_1 = 0$, it reduces to a second-order determinant):

$$J = \frac{D(g_1, g_2, f)}{D(y_0, \dot y_0, t_1)} = \begin{vmatrix} \dfrac{\partial g_2}{\partial \dot y_0} & \dfrac{\partial g_2}{\partial t_1} \\[6pt] \dfrac{\partial f}{\partial \dot y_0} & \dfrac{\partial f}{\partial t_1} \end{vmatrix}.$$

If $J \ne 0$, the values of $y_0$, $\dot y_0$, and $t_1$ depend continuously on the parameter $\alpha$. As was shown above, this is sufficient for the existence of a continuous sensitivity function $dy(t)/d\alpha$.

7.4.3 Sensitivity Equations

It can be shown that, under the conditions of existence of the sensitivity functions, the latter are solutions of the Euler sensitivity equation (7.58) under the conditions

$$u(t_0) = -\dot y(t_0)\frac{dt_0}{d\alpha} + \frac{dy_0}{d\alpha}, \quad (7.66)$$

$$\left.\left[\left(F_t + H_t F_y + F_{\dot y} H_{tt}\right)\frac{dt_1}{d\alpha} + \left(H_t - \dot y\right)\left(F_{\dot y\dot y}\,\dot u + F_{y\dot y}\,u + F_{\dot y\alpha}\right) + F_y u + F_{\dot y} H_{t\alpha} + F_\alpha\right]\right|_{t=t_1} = 0, \quad (7.67)$$

where

$$\frac{dt_1}{d\alpha} = -\frac{u(t_1) - H_\alpha(t_1)}{\dot y(t_1) - H_t(t_1)}. \quad (7.68)$$

Indeed, the differential sensitivity equation can be obtained using the technique presented in Section 7.3, provided that the conditions of existence and continuity of the sensitivity function hold. The boundary condition (7.66) was also derived above. To obtain the conditions (7.67) and (7.68), we differentiate Equations (7.64) and (7.65) with respect to $\alpha$; this is justified by the aforesaid conditions. After differentiation we have

$$\left.\left[F_t\frac{dt_1}{d\alpha} + F_y\frac{dy_1}{d\alpha} + F_{\dot y}\frac{d\dot y_1}{d\alpha} + F_\alpha + \left(H_t - \dot y\right)\left(F_{\dot y\dot y}\frac{d\dot y_1}{d\alpha} + F_{\dot y y}\frac{dy_1}{d\alpha} + F_{\dot y t}\frac{dt_1}{d\alpha} + F_{\dot y\alpha}\right) + F_{\dot y}\left(H_{tt}\frac{dt_1}{d\alpha} + H_{t\alpha} - \frac{d\dot y_1}{d\alpha}\right)\right]\right|_{t=t_1} = 0, \quad (7.69)$$

$$\left.\frac{dy_1}{d\alpha} = H_t\frac{dt_1}{d\alpha} + H_\alpha\right|_{t=t_1}, \quad (7.70)$$

where $y_1 = y(t_1)$ and $\dot y_1 = \dot y(t_1)$. Next, we find the values $dy_1/d\alpha$ and $d\dot y_1/d\alpha$. We have

$$y(t) = \int_{t_0(\alpha)}^{t} \dot y\,dt + y_0, \qquad \dot y(t) = \int_{t_0(\alpha)}^{t} \varphi(t)\,dt + \dot y_0, \quad (7.71)$$

where

$$\varphi(t) = \frac{F_y - F_{t\dot y} - F_{y\dot y}\,\dot y}{F_{\dot y\dot y}}, \qquad F_{\dot y\dot y} \ne 0.$$

From (7.71) it follows that

$$u(t) = -\dot y(t_0)\frac{dt_0}{d\alpha} + \frac{dy_0}{d\alpha} + \int_{t_0}^{t}\frac{d\dot y}{d\alpha}\,dt, \qquad
\dot u(t) = -\varphi(t_0)\frac{dt_0}{d\alpha} + \frac{d\dot y_0}{d\alpha} + \int_{t_0}^{t}\frac{d\varphi}{d\alpha}\,dt.$$

Hence, for $t = t_1$ we have

$$u(t_1) = -\dot y(t_0)\frac{dt_0}{d\alpha} + \frac{dy_0}{d\alpha} + \int_{t_0}^{t_1}\frac{d\dot y}{d\alpha}\,dt, \qquad
\dot u(t_1) = -\varphi(t_0)\frac{dt_0}{d\alpha} + \frac{d\dot y_0}{d\alpha} + \int_{t_0}^{t_1}\frac{d\varphi}{d\alpha}\,dt. \quad (7.72)$$

Differentiating Equation (7.71) with respect to $\alpha$ for $t = t_1$, we obtain

$$\frac{dy_1}{d\alpha} = \dot y(t_1)\frac{dt_1}{d\alpha} - \dot y(t_0)\frac{dt_0}{d\alpha} + \int_{t_0}^{t_1}\frac{d\dot y}{d\alpha}\,dt + \frac{dy_0}{d\alpha}, \qquad
\frac{d\dot y_1}{d\alpha} = \varphi(t_1)\frac{dt_1}{d\alpha} - \varphi(t_0)\frac{dt_0}{d\alpha} + \int_{t_0}^{t_1}\frac{d\varphi}{d\alpha}\,dt + \frac{d\dot y_0}{d\alpha}. \quad (7.73)$$

Relations (7.72) and (7.73) yield

$$\frac{dy_1}{d\alpha} = \dot y(t_1)\frac{dt_1}{d\alpha} + u(t_1), \qquad \frac{d\dot y_1}{d\alpha} = \varphi(t_1)\frac{dt_1}{d\alpha} + \dot u(t_1). \quad (7.74)$$

Substituting (7.74) into (7.69) and (7.70), after simple transformations we find the conditions (7.67) and (7.68). Note that Equation (7.68) coincides with Formula (3.77) for the derivative of the switching moment with respect to the parameter in the sensitivity equations of discontinuous systems.

7.4.4 Case Study

Consider the functional of Example 7.4 for the case when $y(t_0) = 1$ and the right end of the extremal can move along the curve $y(t) = t + \alpha$. First, we determine the sensitivity function by direct differentiation of the solution $y(t, \alpha)$ of the variational problem with respect to the parameter $\alpha$. With this aim in view, we find $y(t, \alpha)$, using the problem solution in the form

$$y(t) = c_1 + \sqrt{\frac{1}{c_2^2} - t^2}. \quad (7.75)$$

The transversality condition has the form

$$\left.\frac{1 + \dot y}{t_1\sqrt{1 + \dot y_1^2}}\right|_{t=t_1} = 0.$$

From (7.75) we find

$$\dot y = -t\left(\frac{1}{c_2^2} - t^2\right)^{-1/2}.$$

Then, the transversality condition can be written as

$$\sqrt{1 - c_2^2 t_1^2} = c_2 t_1. \quad (7.76)$$

At the moment $t_0 = 0$ we have

$$c_2^2\left(1 - c_1\right)^2 = 1. \quad (7.77)$$

Moreover, from the condition $H(t_1) = y(t_1)$ we obtain

$$c_1 + \sqrt{\frac{1}{c_2^2} - t_1^2} = t_1 + \alpha. \quad (7.78)$$

Thus, for the determination of the three variables $c_1$, $c_2$, and $t_1$ we may use the three algebraic equations (7.76), (7.77), and (7.78). As a result, we find

$$c_1 = \alpha, \qquad c_2 = (1 - \alpha)^{-1}, \qquad t_1 = \frac{1 - \alpha}{\sqrt 2}.$$

Therefore,

$$y(t, \alpha) = \alpha + \sqrt{(1 - \alpha)^2 - t^2}.$$

Then, the sensitivity function appears as

$$u(t) = 1 - \frac{1}{\sqrt{1 - t^2}}. \quad (7.79)$$

Next we use the sensitivity model (7.58), (7.66), (7.67), and (7.68) to find $u(t)$. As was shown in Section 7.3, the sensitivity equation has the form

$$F_{\dot y\dot y}\,\dot u = c,$$

where

$$F_{\dot y\dot y} = t^{-1}\left(1 + \dot y^2\right)^{-3/2},$$

or, with due account for the expression for $\dot y$ at $\alpha = 0$,

$$F_{\dot y\dot y} = t^{-1}\left(1 - t^2\right)^{3/2}.$$

Then,

$$\dot u = \frac{c\,t}{\left(1 - t^2\right)^{3/2}}.$$

Hence,

$$u(t) = \frac{\tilde c_2}{\sqrt{1 - t^2}} + c_3.$$

To find the arbitrary constants $\tilde c_2$ and $c_3$, we use the conditions (7.66) and (7.67). From the first of them it follows that $u(0) = \tilde c_2 + c_3 = 0$, or

$$\tilde c_2 = -c_3. \quad (7.80)$$

To construct the second equation, we find

$$F_t = -t^{-2}\sqrt{1 + \dot y^2}, \quad F_{\dot y\dot y} = t^{-1}\left(1 + \dot y^2\right)^{-3/2}, \quad F_y = 0, \quad F_{y\dot y} = 0, \quad F_{\dot y\alpha} = 0, \quad F_\alpha = 0,$$

$$H_t = 1, \quad H_{tt} = 0, \quad H_{t\alpha} = 0.$$

Then, (7.67) takes the form

$$\left.\left[-\frac{\sqrt{1 + \dot y^2}}{t^2}\frac{dt_1}{d\alpha} + \left(1 - \dot y\right)\frac{\dot u}{t\left(1 + \dot y^2\right)^{3/2}}\right]\right|_{t=t_1} = 0, \quad (7.81)$$

where

$$\frac{dt_1}{d\alpha} = -\frac{u(t_1) - 1}{\dot y(t_1) - 1}.$$

For $\alpha = \alpha_0$ we have

$$\dot y(t_1) = -1, \qquad t_1 = \frac{1}{\sqrt 2}, \qquad \frac{dt_1}{d\alpha} = \frac{u(t_1) - 1}{2}.$$

Moreover,

$$u(t_1) = \tilde c_2\left(\sqrt 2 - 1\right), \qquad \dot u(t_1) = 2\tilde c_2.$$

Using the last relations, from (7.81) we obtain $\tilde c_2 = -1$ and, hence, $c_3 = 1$. Thus,

$$u(t) = -\frac{1}{\sqrt{1 - t^2}} + 1.$$
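As with Example 7.4, the closed-form family $y(t, \alpha) = \alpha + \sqrt{(1-\alpha)^2 - t^2}$ found above lets us confirm this result by a finite difference (NumPy sketch):

```python
import numpy as np

def y(t, a):
    # Extremal family of the movable-boundary problem: c1 = a, c2 = 1/(1 - a)
    return a + np.sqrt((1 - a)**2 - t**2)

t = np.linspace(0.0, 0.6, 7)      # stay inside [0, t1), t1 = (1 - a)/sqrt(2)
h = 1e-6
u_fd = (y(t, h) - y(t, -h)) / (2 * h)   # central finite difference in alpha
u_exact = 1 - 1 / np.sqrt(1 - t**2)     # sensitivity function found above

print(np.max(np.abs(u_fd - u_exact)))   # tiny: both routes agree
```

At $t = 0$ both give $u = 0$, consistent with the fixed left boundary condition $y(0) = 1$.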

7.4.5 Variational Problem with Corner Points

It is required to find, among all continuous functions $y(t)$ satisfying the boundary conditions

$$y(t_0) = y_0, \qquad y(t_1) = y_1, \quad (7.82)$$

a function that ensures a weak extremum of the functional

$$I(y) = \int_{t_0}^{t_1} F(t, y, \dot y)\,dt.$$

It is assumed that admissible curves $y(t)$ may have a break at a point with abscissa $t^*$ such that $t_0 < t^* < t_1$. Obviously, the function $F_{\dot y\dot y}(t)$ may become zero at the breakpoint. It is known that each of the two arcs of the broken extremal satisfies the Euler equation. For the arcs $y_1(t)$ and $y_2(t)$ we can write

$$y_1(t) = y_1(t, c_1, c_2), \qquad y_2(t) = y_2(t, t^*, c_3, c_4).$$

To determine the constants $c_1$, $c_2$, $c_3$, $c_4$, and $t^*$, it is necessary to have three more relations in addition to the boundary conditions. At the breakpoint the following Weierstrass-Erdmann conditions hold:

$$\left.\left(F - \dot y F_{\dot y}\right)\right|_{t=t^*-0} = \left.\left(F - \dot y F_{\dot y}\right)\right|_{t=t^*+0}, \qquad \left.F_{\dot y}\right|_{t=t^*-0} = \left.F_{\dot y}\right|_{t=t^*+0}. \quad (7.83)$$

The continuity condition for the extremal has the form

$$y(t^* - 0) = y(t^* + 0). \quad (7.84)$$

Let the previous problem be parametric, so that

$$F = F(t, y, \dot y, \alpha), \quad y_0 = y_0(\alpha), \quad t_0 = t_0(\alpha), \quad y_1 = y_1(\alpha), \quad t_1 = t_1(\alpha),$$

and let all these functions be continuously differentiable with respect to $\alpha$. To obtain existence conditions for the sensitivity function, we consider the relations

$$f_1(y_{20}, \dot y_{20}, t^*, t_1) = 0, \qquad f_2(y_{10}, \dot y_{10}, t_0, y_{20}, \dot y_{20}, t^*) = 0,$$

$$f_3(y_{10}, \dot y_{10}, t_0, y_{20}, \dot y_{20}, t^*) = 0, \qquad f_4(y_{10}, \dot y_{10}, t_0, y_{20}, \dot y_{20}, t^*) = 0,$$

where

$$f_1 = y(t_1) - b, \qquad f_2 = \left.\left(F - \dot y F_{\dot y}\right)\right|_{t=t^*-0} - \left.\left(F - \dot y F_{\dot y}\right)\right|_{t=t^*+0},$$

$$f_3 = \left.F_{\dot y}\right|_{t^*-0} - \left.F_{\dot y}\right|_{t^*+0}, \qquad f_4 = y_1(t^* - 0) - y_{20}.$$

Consider the Jacobian

$$J = \frac{D(f_1, f_2, f_3, f_4)}{D(\dot y_{10}, y_{20}, \dot y_{20}, t^*)}.$$

If this Jacobian is nonzero for $\alpha = \alpha_0$, the values $\dot y_{10}$, $y_{20}$, $\dot y_{20}$, and $t^*$ are continuously differentiable functions of the parameter $\alpha$ in a neighborhood of the point $\alpha = \alpha_0$. The latter condition is sufficient for the existence of the sensitivity functions

$$u_1(t) = \frac{dy_1(t)}{d\alpha}, \qquad u_2(t) = \frac{dy_2(t)}{d\alpha}.$$

To find sensitivity models, we consider two cases. First, assume that $F_{\dot y\dot y} = 0$ at the breakpoint. Construct the discontinuous system

    $\dfrac{d\tilde y_1}{dt} = \tilde y_2, \qquad \dfrac{d\tilde y_2}{dt} = \dfrac{F_y - F_{\dot y t} - F_{\dot y y}\tilde y_2}{F_{\dot y\dot y}} = \varphi(\tilde y_1, \tilde y_2, t),$  (7.85)

or, in vector form,

    $\dfrac{d\tilde Y}{dt} = \Phi(\tilde Y, t).$  (7.86)

Obviously, at the point $t^*$ the function $\tilde y_2(t)$ has a break of the first kind, so that

    $\Delta\tilde y_2(t^*) = \tilde y_2(t^*+0) - \tilde y_2(t^*-0) = \tilde y_2^+ - \tilde y_2^-.$

Assume that there are left and right derivatives of the function at this point, defined by the difference $\Delta\varphi(t^*) = \varphi^+ - \varphi^-$, where $\varphi^+ = \varphi(t^*+0)$, $\varphi^- = \varphi(t^*-0)$. The switching moment $t^*$ is found from the condition

    $g(y, \dot y, t^*, \alpha) = F_{\dot y\dot y}(y, \dot y, t^*, \alpha) = 0.$  (7.87)

The continuity condition for the extremal has the form

    $y(t^*+0) = y(t^*-0).$  (7.88)

Then, according to Theorem 3.1 and the results of Section 7.3, we obtain the following sensitivity model of the above variational problem with a corner point. The segments of the extremal to the left and to the right of the point $t^*$ are denoted by $y_1(t)$ and $y_2(t)$, respectively, and the corresponding sensitivity functions by $u_1(t)$ and $u_2(t)$. These sensitivity functions are solutions of equations of the form (7.58) for $t_0 \le t \le t^*$ and $t^* \le t \le t_1$ with the boundary conditions

    $u_1(t_0) = -\dot y(t_0)\dfrac{dt_0}{d\alpha} + \dfrac{dy_0}{d\alpha}, \qquad u_2(t_1) = -\dot y(t_1)\dfrac{dt_1}{d\alpha} + \dfrac{dy_1}{d\alpha}.$  (7.89)

The break condition for the sensitivity function is given by

    $\Delta u = -\Delta\tilde y_2(t^*)\dfrac{dt^*}{d\alpha},$  (7.90)

where, differentiating the switching condition (7.87) on either side of the break,

    $\dfrac{dt^*}{d\alpha} = -\dfrac{F_{\dot y\dot y\alpha}^- + (F_{\dot y\dot y Y}^-)^T U^-}{F_{\dot y\dot y t}^- + (F_{\dot y\dot y Y}^-)^T \Phi^-} = -\dfrac{F_{\dot y\dot y\alpha}^+ + (F_{\dot y\dot y Y}^+)^T U^+}{F_{\dot y\dot y t}^+ + (F_{\dot y\dot y Y}^+)^T \Phi^+}.$  (7.91)

The solutions $u_1(t) = u_1(t, c_1, c_2)$ and $u_2(t) = u_2(t, c_3, c_4)$ depend on four unknown constants. To find them, we can use the four conditions (7.89)–(7.91).

Next, let us consider the second case, when the function $F_{\dot y\dot y}(t)$ is nonzero at the breakpoint. Then the sensitivity functions $u_1(t)$ and $u_2(t)$ satisfy equations of the form (7.58) on the intervals $t_0 \le t \le t^*$ and $t^* \le t \le t_1$ with the boundary conditions (7.89). Additional relations for determining the constants $c_1$, $c_2$, $c_3$, and $c_4$ can be obtained by differentiating the Erdmann-Weierstrass conditions (7.83) and the continuity condition (7.84) with respect to the parameter $\alpha$. After differentiation of these conditions we obtain

    $\Bigl[F_{\dot y y}\dfrac{dy}{d\alpha} + F_{\dot y\dot y}\dfrac{d\dot y}{d\alpha} + F_{\dot y t}\dfrac{dt^*}{d\alpha} + F_{\dot y\alpha}\Bigr]\Big|_{t^*-0}^{t^*+0} = 0,$
    $\Bigl[F_\alpha + F_y\dfrac{dy}{d\alpha} + F_t\dfrac{dt^*}{d\alpha} - \dot y\Bigl(F_{\dot y y}\dfrac{dy}{d\alpha} + F_{\dot y\dot y}\dfrac{d\dot y}{d\alpha} + F_{\dot y t}\dfrac{dt^*}{d\alpha} + F_{\dot y\alpha}\Bigr)\Bigr]\Big|_{t^*-0}^{t^*+0} = 0,$  (7.92)
    $\dfrac{dy_1}{d\alpha}\Big|_{t^*-0} = \dfrac{dy_2}{d\alpha}\Big|_{t^*+0}.$

Represent the solutions $y_1(t)$, $\dot y_1(t)$, $y_2(t)$, and $\dot y_2(t)$ in the form

    $y_1(t) = \int_{t_0}^{t} \dot y_1(s)\, ds + y(t_0), \qquad \dot y_1(t) = \int_{t_0}^{t} \varphi_1(s)\, ds + \dot y_1(t_0),$
    $y_2(t) = \int_{t^*+0}^{t} \dot y_2(s)\, ds + y_2(t^*+0), \qquad \dot y_2(t) = \int_{t^*+0}^{t} \varphi_2(s)\, ds + \dot y_2(t^*+0).$

Hence,

    $\dfrac{dy_1(t^*-0)}{d\alpha} = \dot y_1(t^*-0)\dfrac{dt^*}{d\alpha} + u(t^*-0),$  (7.93)
    $\dfrac{d\dot y_1(t^*-0)}{d\alpha} = \varphi_1(t^*-0)\dfrac{dt^*}{d\alpha} + \dot u(t^*-0),$  (7.94)
    $\dfrac{dy_2(t^*+0)}{d\alpha} = \dot y_2(t^*+0)\dfrac{dt^*}{d\alpha} + u(t^*+0),$  (7.95)
    $\dfrac{d\dot y_2(t^*+0)}{d\alpha} = \varphi_2(t^*+0)\dfrac{dt^*}{d\alpha} + \dot u(t^*+0).$  (7.96)

Substituting (7.93)–(7.96) into (7.92) and using the fact that

    $\varphi(t) = \dfrac{F_y - F_{\dot y t} - F_{\dot y y}\dot y}{F_{\dot y\dot y}},$

we finally obtain

    $\Bigl[F_{\dot y y}u + F_{\dot y\dot y}\dot u + F_y\dfrac{dt^*}{d\alpha} + F_{\dot y\alpha}\Bigr]\Big|_{t^*-0}^{t^*+0} = 0,$
    $\Bigl[F_y u + F_t\dfrac{dt^*}{d\alpha} + F_\alpha - \dot y\bigl(F_{\dot y y}u + F_{\dot y\dot y}\dot u + F_{\dot y\alpha}\bigr)\Bigr]\Big|_{t^*-0}^{t^*+0} = 0,$  (7.97)
    $\Bigl[\dot y\dfrac{dt^*}{d\alpha} + u\Bigr]\Big|_{t^*-0}^{t^*+0} = 0.$

If $F_{\dot y\dot y}(t) \ne 0$ for $t \in (t_0, t_1)$, the derivative $dt^*/d\alpha$ is also unknown. It is determined together with the constants $c_1$, $c_2$, $c_3$, and $c_4$ from the system of five equations (7.89) and (7.97).

Example 7.5

Find the sensitivity function of the solution of the following variational problem with respect to a parameter $\alpha$:

    $I(y) = \int_0^2 (\dot y^4 - 6\dot y^2)\, dt, \qquad y(0) = \alpha, \quad y(2) = 0.$

For this problem, extremals are given by

    $y_1(t) = c_1 t + c_2, \quad 0 \le t < t^*, \qquad y_2(t) = c_3 t + c_4, \quad t^* \le t \le 2.$  (7.98)

With account for the boundary conditions, they take the form

    $y_1(t) = c_1 t + \alpha, \qquad y_2(t) = c_3(t - 2).$  (7.99)

The Erdmann-Weierstrass conditions and the continuity condition have the form

    $\bigl[4\dot y^3 - 12\dot y\bigr]\Big|_{t^*-0}^{t^*+0} = 0, \quad \bigl[-3\dot y^4 + 6\dot y^2\bigr]\Big|_{t^*-0}^{t^*+0} = 0, \quad y_1(t^*-0) = y_2(t^*+0),$

or, with account for (7.99),

    $c_3^3 - 3c_3 - c_1^3 + 3c_1 = 0, \quad -c_3^4 + 2c_3^2 + c_1^4 - 2c_1^2 = 0, \quad c_1 t^* + \alpha = c_3(t^* - 2).$

Hence,

    $c_1 = \sqrt 3, \quad c_3 = -\sqrt 3, \quad c_2 = \alpha, \quad c_4 = -2c_3 = 2\sqrt 3, \quad t^* = 1 - \dfrac{\alpha}{2\sqrt 3}.$

As a result, we have

    $y_1(t) = \sqrt 3\, t + \alpha, \qquad y_2(t) = -\sqrt 3\, t + 2\sqrt 3.$

Direct differentiation then yields the following sensitivity function:

    $u(t) = \begin{cases} 1 & \text{for } 0 \le t < t^* = 1, \\ 0 & \text{for } t^* \le t \le 2. \end{cases}$

The sensitivity function has a break at the point $t^*$: $\Delta u(t^*) = -1$.

Let us employ the sensitivity model consisting of the differential sensitivity equations and the conditions (7.89) and (7.97). For the given problem, the sensitivity equation has the form $F_{\dot y\dot y}\dot u = c$. Hence, with account for the fact that $F_{\dot y\dot y} = \mathrm{const}$ on each arc, we have $u(t) = \tilde c_1 t + \tilde c_2$. Let us write

    $u_1(t) = c_1 t + c_2, \quad 0 \le t < t^*, \qquad u_2(t) = c_3 t + c_4, \quad t^* \le t \le 2.$

From the conditions (7.89) we have $u_1(0) = 1$, $u_2(2) = 0$. Equation (7.97) yields

    $\bigl[\dot y_2^2(t^*+0) - 1\bigr]\dot u_2(t^*+0) = \bigl[\dot y_1^2(t^*-0) - 1\bigr]\dot u_1(t^*-0),$
    $\dot y_2(t^*+0)\bigl[\dot y_2^2(t^*+0) - 1\bigr]\dot u_2(t^*+0) = \dot y_1(t^*-0)\bigl[\dot y_1^2(t^*-0) - 1\bigr]\dot u_1(t^*-0),$
    $\dot y_2(t^*+0)\dfrac{dt^*}{d\alpha} + u_2(t^*+0) = \dot y_1(t^*-0)\dfrac{dt^*}{d\alpha} + u_1(t^*-0).$

As a result, we find $c_1 = c_3 = c_4 = 0$, $c_2 = 1$. Hence,

    $u_1(t) = 1, \qquad u_2(t) = 0.$
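The piecewise result above can also be checked numerically by differencing the closed-form extremal with respect to $\alpha$. The sketch below is our own illustration (not from the book), built from the formulas $y_1(t) = \sqrt 3\,t + \alpha$, $y_2(t) = -\sqrt 3\,t + 2\sqrt 3$, and $t^*(\alpha) = 1 - \alpha/(2\sqrt 3)$:

```python
import math

def extremal(t, alpha):
    # Broken extremal of Example 7.5; the corner is at t* = 1 - alpha/(2*sqrt(3)).
    t_star = 1.0 - alpha / (2.0 * math.sqrt(3.0))
    if t < t_star:
        return math.sqrt(3.0) * t + alpha               # y1(t)
    return -math.sqrt(3.0) * t + 2.0 * math.sqrt(3.0)   # y2(t), independent of alpha

# Finite-difference sensitivity u(t) = dy/d(alpha) at alpha_0 = 0
h = 1e-6
u = lambda t: (extremal(t, h) - extremal(t, 0.0)) / h

print(u(0.5), u(1.5))   # 1.0 before the corner t* = 1, 0.0 after it
```

Away from the corner the finite difference reproduces the piecewise-constant sensitivity function exactly; at $t^*$ itself the difference quotient picks up the jump $\Delta u(t^*) = -1$.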


7.5 Sensitivity of Conditional Extremum Problems

7.5.1 Variational Problems on Conditional Extremum

In these problems it is required to find an extremum of a functional $I$ under some constraints imposed on the functions on which the functional depends. Usually, constraints of the following three types are considered:

1) $f_i(t, y_1, \dots, y_n) = 0, \quad i = 1, \dots, m < n;$  (7.100)
2) $f_i(t, y_1, \dots, y_n, \dot y_1, \dots, \dot y_n) = 0;$  (7.101)
3) $\int_{t_0}^{t_1} G_i(t, y_1, \dots, y_n, \dot y_1, \dots, \dot y_n)\, dt = c_i.$  (7.102)

Constraints of the first type yield a Lagrange problem, while constraints of the third type lead to an isoperimetric problem.

7.5.2 Lagrange Problem

First, we consider the simplest case, when it is required to find an extremum of the functional

    $I = \int_{t_0}^{t_1} F(t, y_1, y_2, \dot y_1, \dot y_2, \alpha)\, dt$  (7.103)

over all curves

    $y_1 = y_1(t), \quad y_2 = y_2(t)$  (7.104)

belonging to a surface

    $f(y_1, y_2, t) = 0.$  (7.105)

It is known that the solution of the problem is determined, under some conditions, by the equations

    $F_{y_1} + \lambda f_{y_1} - \dfrac{d}{dt}F_{\dot y_1} = 0, \quad F_{y_2} + \lambda f_{y_2} - \dfrac{d}{dt}F_{\dot y_2} = 0, \quad f(y_1, y_2, t) = 0,$
    $y_1(t_0) = y_{10} = a_1, \quad y_1(t_1) = y_{11} = b_1, \quad y_2(t_0) = y_{20} = a_2, \quad y_2(t_1) = y_{21} = b_2,$  (7.106)

where $\lambda$ is a Lagrange multiplier. As a result, the values $y_1(t)$ and $y_2(t)$ are functions of the boundary values $y_{10}$, $y_{11}$, $y_{20}$, $y_{21}$ and the Lagrange multiplier $\lambda$. Let us rewrite the boundary conditions in the form

    $g_1 = y_{10} - a_1 = 0, \quad g_2 = y_{20} - a_2 = 0,$
    $g_3 = y_1(t_1, y_{10}, \dot y_{10}, y_{20}, \dot y_{20}, \lambda) - b_1 = 0,$  (7.107)
    $g_4 = y_2(t_1, y_{10}, \dot y_{10}, y_{20}, \dot y_{20}, \lambda) - b_2 = 0,$
    $f\bigl[y_1(t, y_{10}, \dot y_{10}, y_{20}, \dot y_{20}, \lambda),\, y_2(t, y_{10}, \dot y_{10}, y_{20}, \dot y_{20}, \lambda),\, t\bigr] = 0.$

Then, the existence condition for the sensitivity functions $dy_1/d\alpha$, $dy_2/d\alpha$, and $d\lambda/d\alpha$ reduces to the requirement that the following Jacobian be nonzero:

    $J = \dfrac{D(g_1, g_2, g_3, g_4, f)}{D(y_{10}, y_{20}, \dot y_{10}, \dot y_{20}, \lambda)}.$  (7.108)

Due to (7.107), it has the form

    $J = \begin{vmatrix} \partial g_3/\partial\dot y_{10} & \partial g_3/\partial\dot y_{20} & \partial g_3/\partial\lambda \\ \partial g_4/\partial\dot y_{10} & \partial g_4/\partial\dot y_{20} & \partial g_4/\partial\lambda \\ \partial f/\partial\dot y_{10} & \partial f/\partial\dot y_{20} & \partial f/\partial\lambda \end{vmatrix}.$

In general, when the constraints are described by (7.100), the existence condition for sensitivity functions of the Lagrange problem has the form

    $\dfrac{D(g_1, g_2, \dots, g_{2n}, f_1, \dots, f_m)}{D(y_{10}, y_{20}, \dots, y_{n0}, \dot y_{10}, \dots, \dot y_{n0}, \lambda_1, \dots, \lambda_m)} \ne 0.$  (7.109)

If the existence conditions hold for the sensitivity functions, the sensitivity equations can be found by direct differentiation of equations of the form (7.58) with respect to the parameter $\alpha$.

7.5.3 Variational Problem with Differential Constraints

In this case, the existence conditions for sensitivity functions are similar to those of the previous paragraph. Indeed, assume that it is required to find an extremum of the functional (7.103) under the condition

    $f(y_1, y_2, \dot y_1, \dot y_2, t) = 0$  (7.110)

and the boundary conditions (7.106). It is known that the functions $y_1(t)$ and $y_2(t)$ realizing a conditional extremum of the functional and the multiplier $\lambda$ must satisfy the equations

    $L_{y_i} - \dfrac{d}{dt}L_{\dot y_i} = 0, \quad i = 1, 2, \qquad f(y_1, y_2, \dot y_1, \dot y_2, t) = 0,$  (7.111)

where $L = F + \lambda(t)f$. Obviously,

    $y_i(t) = y_i(y_{10}, y_{20}, \dot y_{10}, \dot y_{20}, \lambda, t), \quad i = 1, 2.$

Then, we can construct the following system:

    $g_1 = y_{10} - a_1 = 0, \quad g_2 = y_{20} - a_2 = 0,$
    $g_3 = y_1(y_{10}, y_{20}, \dot y_{10}, \dot y_{20}, \lambda(t_1), t_1) - b_1 = 0,$  (7.112)
    $g_4 = y_2(y_{10}, y_{20}, \dot y_{10}, \dot y_{20}, \lambda(t_1), t_1) - b_2 = 0,$
    $f(y_1, y_2, \dot y_1, \dot y_2, t) = 0.$

Hence, we obtain a condition of the form (7.109) with $m = 1$ and $n = 2$. If the existence conditions for the sensitivity functions hold, the sensitivity equations can be obtained by direct differentiation of Equations (7.58) with respect to the parameter $\alpha$.


7.5.4 Sensitivity of Isoperimetric Problem

Consider the simplest isoperimetric problem with the functional

    $I = \int_{t_0}^{t_1} F(y, \dot y, t, \alpha)\, dt$  (7.113)

and the constraint

    $\int_{t_0}^{t_1} D(y, \dot y, t, \alpha)\, dt = S.$  (7.114)

It is known that the isoperimetric problem can easily be reduced to the previous variational problem with a differential constraint by introducing a new variable

    $z(t) = \int_{t_0}^{t} D(y, \dot y, s, \alpha)\, ds$

and passing to the differential equation

    $\dot z = D(y, \dot y, t, \alpha), \qquad z(t_0) = 0, \quad z(t_1) = S.$  (7.115)

In this case, the existence conditions for the sensitivity functions $\partial y/\partial\alpha$ and $\partial\lambda/\partial\alpha$ are checked using the technique developed in the previous paragraph. In the given case the Euler equation takes the form

    $F_y - \dfrac{d}{dt}F_{\dot y} + \lambda\Bigl(D_y - \dfrac{d}{dt}D_{\dot y}\Bigr) = 0.$

As a result, we obtain

    $(F_{\dot y\dot y} + \lambda D_{\dot y\dot y})\ddot u_\alpha + \dfrac{d}{dt}(F_{\dot y\dot y} + \lambda D_{\dot y\dot y})\dot u_\alpha + (Q_y + \lambda H_y)u_\alpha = -\Bigl(Q_\alpha + \dfrac{\partial\lambda}{\partial\alpha}H\Bigr),$

where

    $Q = -F_y + \dfrac{d}{dt}F_{\dot y}, \qquad H = -D_y + \dfrac{d}{dt}D_{\dot y}.$

The solution of the sensitivity equation is determined by the relation

    $u_\alpha(t) = \dfrac{\partial y(t)}{\partial\alpha} = u_\alpha\Bigl(t, c_1, c_2, \dfrac{\partial\lambda}{\partial\alpha}\Bigr),$

i.e., it contains three unknown constants. To find them we must have three equations. Two of them are derived from the boundary conditions. The third one can be obtained by differentiating the isoperimetric condition with respect to the parameter $\alpha$:

    $\int_{t_0}^{t_1} \bigl[D_y u_\alpha + D_{\dot y}\dot u_\alpha + D_\alpha\bigr]\, dt + D\bigl[y(t_1), \dot y(t_1), t_1\bigr]\dfrac{dt_1}{d\alpha} - D\bigl[y(t_0), \dot y(t_0), t_0\bigr]\dfrac{dt_0}{d\alpha} = 0.$

Chapter 8

Applied Sensitivity Problems

8.1 Direct and Inverse Problems of Sensitivity Theory

8.1.1 Classification of Basic Applied Sensitivity Problems

The modern stage of development of theoretical and practical methods of control system design and implementation calls for methods that make it possible to account for parametric uncertainties (parameter variations). In fact, there is a fairly wide class of problems where the use of the apparatus of sensitivity theory is necessary and advantageous. The main problems of this class are the following:

1. precision and stability analysis for parametrically perturbed systems
2. design of insensitive systems
3. identification
4. optimization problems
5. tolerance distribution
6. adjustment, testing, and monitoring of technical systems as well as their units

Even a superficial analysis of the above applied problems of sensitivity theory shows that sensitivity functions $U$, additional motion $\Delta Y$, and variations of the corresponding parameters $\Delta\alpha$ are necessary elements of any problem belonging to this class. The triple of elements $(U, \Delta Y, \Delta\alpha)$ is used in each problem. In most applied problems of sensitivity theory it is required to find and analyze either the additional motion $\Delta Y$ or the parameter variations $\Delta\alpha$. In both cases the sensitivity functions are assumed to be known (they are found using the sensitivity model for the known initial system). In some problems, estimation of additional motion is combined (alternated) with obtaining the parameter variations. Using the above approach, most applied problems of sensitivity theory can be classified into the following three groups:

1. direct problems of sensitivity theory, when it is required to investigate the additional motion $\Delta Y$ on the basis of known sensitivity functions $U$ and the values (or characteristics) of the parameter variations $\Delta\alpha$
2. inverse problems of sensitivity theory, when the parametric influence $\Delta\alpha$ is to be estimated using given sensitivity functions $U$ and additional motion $\Delta Y$
3. mixed problems that include elements of both direct and inverse problems

8.1.2 Direct Problems

The solution of direct problems is connected, as a rule, with analysis of additional motion. A general technique of such analysis is given in Section 1.3, where general ideas about investigation of the first approximation for additional motion are presented for deterministic and stochastic cases. More detailed information on methods of estimating additional motion of various classes of dynamic systems by means of sensitivity functions can be found in [67, 94].

8.1.3 Inverse Problems and their Incorrectness

In general, the connection between additional motion ∆Y and parameters vector variation ∆α can be described by the following operator equation ∆Y = Γ∆α,

(8.1)

which is a basic equation for solving direct problems of sensitivity theory. Formally, inverse problems are associated with the relation ∆α = Γ−1 ∆Y,

(8.2)

where Γ−1 is the inverse operator for Γ. The use of Formula (8.2) is difficult, and sometimes even impossible, for applied problems due to incorrectness of inverse problems. Mathematical formulation of a correctly posed problem of solving Equation (8.1) can be given as follows [105].


DEFINITION 8.1 A problem of obtaining the solution $\Delta\alpha$ from a space $\Omega_\alpha$ by initial data $\Delta Y$ from a space $Q_Y$ is called correctly posed if the following conditions hold:

1. for any element $\Delta Y$ there is a solution $\Delta\alpha \in \Omega_\alpha$
2. the solution is determined uniquely
3. the problem is stable

DEFINITION 8.2 A problem of obtaining a solution $\Delta\alpha = \psi(\Delta Y)$ is called stable if for any $\epsilon > 0$ there is a number $\delta(\epsilon) > 0$ such that the inequality $\rho_Y(\Delta Y_1, \Delta Y_2) \le \delta(\epsilon)$ implies $\rho_\alpha(\Delta\alpha_1, \Delta\alpha_2) \le \epsilon$, where

    $\Delta\alpha_1 = \psi(\Delta Y_1), \quad \Delta\alpha_2 = \psi(\Delta Y_2), \quad \Delta Y_1, \Delta Y_2 \in \Omega_Y, \quad \Delta\alpha_1, \Delta\alpha_2 \in \Omega_\alpha,$

and $\rho_\alpha$, $\rho_Y$ are the corresponding distances in the normed spaces $Q_\alpha$ and $Q_Y$. In practice, if the stability conditions hold, small errors in the initial data $\Delta Y$ cause small errors in the solution $\Delta\alpha$.

DEFINITION 8.3 Problems for which at least one of the above conditions does not hold are called incorrectly posed.

Using these definitions, let us consider various reasons for incorrectness of inverse problems in sensitivity theory. For an inverse problem, the initial data are the components of the vector of additional motion $\Delta Y$. In real conditions they are obtained experimentally and, therefore, with inevitable errors. Such being the case, for some values of the vector $\Delta Y$ Equation (8.1) may not have a solution with respect to $\Delta\alpha$ (in the given space $\Omega_\alpha$). In this case the first condition of problem correctness is violated. If Equation (8.1) is nonlinear with respect to the parameter variations $\Delta\alpha$, there can be many associated solutions, so that the second condition is violated. Finally, even if a solution exists and is unique, it may not be stable. This manifests itself in the fact that small errors in the initial data (in the vector $\Delta Y$) cause significant errors in the solution $\Delta\alpha$. The reason for such “error amplification” is that the inverse operator $\Gamma^{-1}$ may be discontinuous. Thus, the third correctness condition is violated.
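The loss of stability in the third condition is easy to exhibit numerically. In the sketch below (our own illustration, not the book's), the operator $\Gamma$ is a nearly singular $2\times 2$ matrix, as happens when two sensitivity vectors are almost parallel; an error of order $10^{-4}$ in $\Delta Y$ is amplified by about six orders of magnitude in $\Delta\alpha = \Gamma^{-1}\Delta Y$:

```python
import numpy as np

# Nearly singular operator Gamma, as arises when two sensitivity
# vectors are almost linearly dependent.
Gamma = np.array([[1.0, 1.0],
                  [1.0, 1.0 + 1e-6]])

dY = np.array([1.0, 1.0])            # "measured" additional motion
noise = np.array([0.0, 1e-4])        # small measurement error

da = np.linalg.solve(Gamma, dY)
da_noisy = np.linalg.solve(Gamma, dY + noise)

amplification = np.linalg.norm(da_noisy - da) / np.linalg.norm(noise)
print(amplification)                 # order 1e6: tiny data errors destroy the solution
```

The amplification factor is governed by the condition number of $\Gamma$, which motivates the stabilization methods discussed in Section 8.1.5.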


8.1.4 Solution Methods for Inverse Problems

In practice, much effort is directed toward developing methods and algorithms for determining approximate solutions of incorrectly posed problems that are stable with respect to small variations of the initial data. A method of solving incorrect problems called solution by inspection [105] is the most widely used in engineering practice. As applied to the problem (8.1), the method can be described as follows. It is assumed that for an arbitrary element $\Delta\beta$ from an area $\Omega_\alpha$ of admissible solutions we can find the value $f(\Delta\beta) = \Gamma\Delta\beta$, i.e., a direct sensitivity problem is solved. Then, we take as an approximate solution an element $\Delta\tilde\beta \in \Omega_\alpha$ such that the distance between $f(\Delta\tilde\beta)$ and $\Delta Y$ is minimal, i.e.,

    $J = \rho\bigl[\Delta Y, f(\Delta\tilde\beta)\bigr] = \min_{\Delta\beta\in\Omega_\alpha} \rho\bigl[\Delta Y, f(\Delta\beta)\bigr].$  (8.3)

In real problems, the functional (8.3) is formed in the following way. Consider the difference $Z = Y(\alpha) - \tilde Y(\beta)$, where $Y(\alpha)$ is the output signal of the plant (system), $\tilde Y(\beta)$ is the output signal of the plant model, $\alpha = \alpha_0 + \Delta\alpha$ is the unknown vector of plant parameters, and $\beta$ is the model parameter vector. Represent $\tilde Y(\beta)$ in the form $\tilde Y(\beta) = \tilde Y(\beta_0) + \Delta\tilde Y(\Delta\beta)$. Then,

    $Z = Y(\alpha) - \tilde Y(\beta_0) - \Delta\tilde Y(\Delta\beta) = \Delta Y - \Delta\tilde Y(\Delta\beta) = \Delta Y - f(\Delta\beta),$

where $\Delta Y = Y(\alpha) - \tilde Y(\beta_0)$. As a result, the following value can be employed as a distance:

    $J = \rho\bigl[f(\Delta\beta), \Delta Y\bigr] = \bigl(\Delta Y - f(\Delta\beta),\ \Delta Y - f(\Delta\beta)\bigr).$  (8.4)

In general, minimization of the functional (8.4) requires numerical methods. Such being the case, we encounter a fairly difficult problem of finding the extremum of a function of many variables. The problem can be


simplified if we approximate the value $\tilde Y(\beta)$ by sensitivity functions, for example, in a neighborhood of the point $\beta_0$. Moreover, for the linear approximation

    $\tilde Y(\beta_0 + \Delta\beta) = \tilde Y(\beta_0) + U\Delta\beta$  (8.5)

and a corresponding choice of the functional $J$ we can find an analytical solution of the inverse problem at each step. Let the functional $J$ have the form

    $J = \int_{t_0}^{\tau} (Y - \tilde Y)^T D (Y - \tilde Y)\, dt$  (8.6)

or

    $J = \sum_{i=1}^{N} \bigl[Y(t_i) - \tilde Y(t_i)\bigr]^T D \bigl[Y(t_i) - \tilde Y(t_i)\bigr],$  (8.7)

where $D$ is a weight (usually diagonal) matrix; as a special case, $D = E$. Substituting (8.5) into (8.6) and (8.7) yields

    $J = \int_{t_0}^{\tau} (\Delta Y - U\Delta\beta)^T D (\Delta Y - U\Delta\beta)\, dt,$  (8.8)

    $J = \sum_{i=1}^{N} \bigl[\Delta Y(t_i) - U(t_i)\Delta\beta\bigr]^T D \bigl[\Delta Y(t_i) - U(t_i)\Delta\beta\bigr].$  (8.9)

The vector $\Delta\beta$ can be found using the necessary condition for an extremum of these functionals. Consider, for example, the functional (8.8). It can be rewritten in the form

    $J = \int_{t_0}^{\tau} \bigl[\Delta Y^T D\Delta Y - 2\Delta\beta^T U^T D\Delta Y + \Delta\beta^T U^T DU\Delta\beta\bigr]\, dt.$  (8.10)

To find the necessary minimum condition for the functional (8.8), we differentiate (8.10) with respect to the vector $\Delta\beta$:

    $\dfrac{dJ}{d\Delta\beta} = 2\int_{t_0}^{\tau} \bigl[U^T DU\Delta\beta - U^T D\Delta Y\bigr]\, dt.$

As a result, we find the following necessary extremum condition:

    $C\Delta\beta = P,$  (8.11)

where

    $C = \int_{t_0}^{\tau} U^T DU\, dt, \qquad P = \int_{t_0}^{\tau} U^T D\Delta Y\, dt.$

The columns of the matrix $U$ are the sensitivity vectors

    $U_i = \dfrac{\partial\tilde Y}{\partial\beta_i}, \quad i = 1, \dots, m.$

The matrix $C$ has the elements

    $c_{ij} = \int_{t_0}^{\tau} U_i^T DU_j\, dt, \quad i, j = 1, \dots, m,$

which will be considered as scalar products of the vector functions $U_i$ and $U_j$ and denoted by $c_{ij} = (U_i, U_j)$. Then the matrix $C$ takes the form $C = [(U_i, U_j)]$. The matrix $C$ in the linear algebraic equation (8.11) is the Gram matrix of the system of vector functions $U_1, \dots, U_m$. Since $\det C$ is the Gramian (Gram determinant), the matrix $C$ is nondegenerate if and only if the sensitivity vectors form a linearly independent system on the interval $(t_0, \tau)$. Thus, for a unique solution of the linearized inverse sensitivity problem it is necessary that the vectors $U_1, \dots, U_m$ be linearly independent on the observation interval $(t_0, \tau)$. In this case,

    $\Delta\beta = C^{-1}P.$

Note that the considered procedure of solving inverse problems of sensitivity theory by inspection is multistep; above we described a single step. In the general case, at each step we find the value $\Delta\beta_k = C_k^{-1}P_k$ and obtain the vector $\beta_k = \beta_{k-1} + \gamma_k\Delta\beta_k$. This vector makes it possible to find the matrix $C_{k+1}$, the vector $P_{k+1}$, and the variation $\Delta\beta_{k+1} = C_{k+1}^{-1}P_{k+1}$. As in many iterative algorithms, the parameter $\gamma_k$ is used to improve convergence. The process is repeated up to a step $N$ at which the following “stop condition” holds: $\|\Delta\beta_s\| \le \delta$, $s \ge N$. As a result, the estimate of the vector $\beta$ is

    $\beta = \beta_0 + \sum_{i=1}^{N} \Delta\beta_i,$

and the estimate of the desired variations $\Delta\alpha$ is

    $\Delta\alpha = \beta - \alpha_0 = \beta_0 + \sum_{i=1}^{N} \Delta\beta_i - \alpha_0.$

Usually $\beta_0 = \alpha_0$; then

    $\Delta\alpha = \sum_{i=1}^{N} \Delta\beta_i.$
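As a concrete illustration of the scheme $\Delta\beta_k = C_k^{-1}P_k$, $\beta_k = \beta_{k-1} + \gamma_k\Delta\beta_k$, the sketch below is our own; the model $\tilde y(t) = k(1 - e^{-t/T})$ and all names in it are assumptions chosen for the example, not taken from the book. It iterates to the plant parameters from noiseless data with $D = E$:

```python
import numpy as np

def model(t, k, T):
    # Assumed example model: step response of a first-order lag.
    return k * (1.0 - np.exp(-t / T))

def sensitivities(t, k, T):
    # Analytic sensitivity functions u_k = dy/dk and u_T = dy/dT.
    u_k = 1.0 - np.exp(-t / T)
    u_T = -k * t / T**2 * np.exp(-t / T)
    return np.column_stack([u_k, u_T])

t = np.linspace(0.1, 10.0, 100)
alpha_true = np.array([5.0, 1.0])     # "plant" parameters (k, T), unknown in practice
y_plant = model(t, *alpha_true)

beta = np.array([4.0, 1.5])           # initial model parameters beta_0
gamma = 1.0                           # step parameter gamma_k
for _ in range(30):
    U = sensitivities(t, *beta)       # N x m matrix whose columns are the U_i
    dY = y_plant - model(t, *beta)    # additional motion Delta Y_k
    C = U.T @ U                       # Gram matrix C_k (weight D = E)
    P = U.T @ dY                      # right-hand side P_k
    dbeta = np.linalg.solve(C, P)     # Delta beta_k = C_k^{-1} P_k
    beta = beta + gamma * dbeta
    if np.linalg.norm(dbeta) < 1e-12: # stop condition |Delta beta_s| <= delta
        break

print(beta)   # converges to the plant parameters (5.0, 1.0)
```

Because the sensitivity vectors are linearly independent over the observation interval, $C_k$ stays nonsingular and the iteration converges, in agreement with the convergence argument that follows.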

Let us show that the condition of linear independence of the sensitivity vectors at each iteration step is sufficient for the whole iterative process to converge. For the $(k+1)$-th iteration the functional $J$ can be written in the form

    $J_{k+1} = \int_{t_0}^{\tau} \bigl[Y(\alpha) - \tilde Y(\beta_{k+1})\bigr]^T D \bigl[Y(\alpha) - \tilde Y(\beta_{k+1})\bigr]\, dt = \int_{t_0}^{\tau} \bigl[\Delta Y_k - U_k\Delta\beta_{k+1}\bigr]^T D \bigl[\Delta Y_k - U_k\Delta\beta_{k+1}\bigr]\, dt,$

or

    $J_{k+1} = J_k - 2\int_{t_0}^{\tau} \Delta\beta_{k+1}^T U_k^T D\Delta Y_k\, dt + \int_{t_0}^{\tau} \Delta\beta_{k+1}^T U_k^T DU_k\Delta\beta_{k+1}\, dt.$  (8.12)

The estimate at the $(k+1)$-th step is given by

    $\Delta\beta_{k+1} = \Bigl[\int_{t_0}^{\tau} U_k^T DU_k\, dt\Bigr]^{-1}\Bigl[\int_{t_0}^{\tau} U_k^T D\Delta Y_k\, dt\Bigr].$

Substituting this expression into (8.12), we obtain

    $J_{k+1} = J_k - P_k^T C_k^{-1}P_k,$  (8.13)

where

    $P_k = \int_{t_0}^{\tau} U_k^T D\Delta Y_k\, dt, \qquad C_k = \int_{t_0}^{\tau} U_k^T DU_k\, dt.$

If the matrix $C_k$ is nonsingular, $C_k^{-1}$ is a symmetric positive definite matrix. Then from (8.13) it follows that

    $J_0 > J_1 > J_2 > \dots > J_k > J_{k+1} > \dots,$  (8.14)

i.e., the estimates of the vector $\Delta\beta$ improve, according to the chosen quality index $J$, at each iteration step.

8.1.5 Methods for Improving Stability of Inverse Problems

A combined use of solution by inspection and the linearization method for solving inverse problems leads to solving a sequence of linear algebraic systems of the form

    $AY = X,$  (8.15)

where $A$ is the Gram matrix with determinant $\det A = \Gamma(U_1, U_2, \dots, U_m)$. The matrix $A$ is symmetric. If $\det A \ne 0$, it can be transformed to diagonal form by an appropriate orthogonal transformation, and the system (8.15) takes the decomposed form

    $\lambda_i\tilde y_i = \tilde x_i, \quad i = 1, \dots, m,$  (8.16)

where $\lambda_i$ are the eigenvalues of the matrix $A$.

Then,

    $\tilde y_i = \dfrac{1}{\lambda_i}\tilde x_i, \quad i = 1, \dots, m.$

If the matrix $A$ is singular, i.e., $\det A = 0$, then $m - s$ of its eigenvalues, where $s$ is the rank of the matrix, are equal to zero. Such being the case, the inverse operator of the system (8.16) loses continuity, i.e., the initial problem (8.15) becomes incorrect, and the solution procedure will be unstable. But nonsingularity of the matrix $A$ alone is not sufficient for obtaining a solution with the desired accuracy, because accuracy depends on its condition number [110, 111]; during numerical calculation on a computer, a nonsingular system can, in fact, appear to be singular. In applied mathematics there are a number of methods for improving the stability of solutions of singular and badly conditioned algebraic systems [105, 111]. Below we present two methods for improving solution stability for an inverse problem of sensitivity theory, based on the peculiar features of the matrix $A$ formed from sensitivity functions.

The idea of the first method boils down to the following. As follows from the preceding material, sensitivity functions depend on the input signals acting on the plant or system. In solving an inverse problem, there is freedom in choosing these input signals; this is true, for instance, for problems of active identification, diagnostics, and controller adjustment. Therefore, it is possible to control, via the input signals, the linear independence of the sensitivity vectors $U_1, \dots, U_m$ and, through these vectors, the Gramian $\det A$. Let us demonstrate this idea by the example of a linear static plant

    $y(t) = \sum_{i=1}^{m} \alpha_i x_i(t).$

The model of the plant is described by

    $\tilde y(t) = \sum_{i=1}^{m} \beta_i x_i(t).$

For this model, the sensitivity functions are given by

    $u_i(t) = \dfrac{\partial\tilde y(t)}{\partial\beta_i} = x_i(t).$

Then,

    $\Delta\tilde y(t) = \sum_{i=1}^{m} \Delta\alpha_i x_i(t), \qquad J = \int_{t_0}^{\tau} \Bigl[\Delta y(t) - \sum_{i=1}^{m} \Delta\alpha_i x_i(t)\Bigr]^2 dt.$

Hence, the necessary condition for an extremum of the functional $J$ can be represented in the form

    $\sum_{i=1}^{m} \Delta\alpha_i \int_{t_0}^{\tau} x_i(t)x_s(t)\, dt = \int_{t_0}^{\tau} \Delta y(t)x_s(t)\, dt, \quad s = 1, \dots, m.$

The matrix $C$ for this system is equal to $C = [(x_i, x_j)]$, $i, j = 1, \dots, m$, where

    $(x_i, x_j) = \int_{t_0}^{\tau} x_i(t)x_j(t)\, dt.$

In this problem the sensitivity functions coincide with the input signals. Let us choose these signals to be pairwise orthogonal, so that

    $\int_{t_0}^{\tau} x_i(t)x_j(t)\, dt = g_i\delta_{ij},$

where $\delta_{ij}$ is the Kronecker symbol. Then

    $C = \mathrm{diag}(g_1, g_2, \dots, g_m)$

and

    $\det C = \prod_{i=1}^{m} g_i \ne 0.$
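For instance, with harmonic inputs of different integer frequencies over a full period, the Gram matrix is diagonal. A brief numerical sketch of ours (the specific signals are assumptions chosen for illustration):

```python
import numpy as np

# Inputs x_i(t) = sin(i*t), i = 1..3, are pairwise orthogonal on [0, 2*pi],
# so the Gram matrix with entries (x_i, x_j) = int x_i x_j dt is diagonal.
t = np.linspace(0.0, 2.0 * np.pi, 20001)
X = np.stack([np.sin((i + 1) * t) for i in range(3)])   # m = 3 input signals

dt = t[1] - t[0]
C = X @ X.T * dt            # Riemann-sum approximation of the Gram matrix

print(np.round(C, 3))       # ≈ diag(pi, pi, pi)
```

The diagonal entries are $g_i = \pi$ and the off-diagonal scalar products vanish, so $\det C = \pi^3 \ne 0$ regardless of how many parameters are estimated.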

Analogous techniques have been investigated in detail in the theory of experiment planning.

The second method of improving stability of a solution for inverse problems of sensitivity theory is connected with the following property of the Gramian [28]:

    $\Gamma(U_1, U_2, \dots, U_m) \le \prod_{i=1}^{m} \Gamma(U_i).$  (8.17)

In (8.17), which is called the Hadamard inequality, equality holds if and only if the sensitivity vectors $U_1, \dots, U_m$ are pairwise orthogonal. Using the properties of the Gramian, we can rewrite the Hadamard inequality in the form

    $\Gamma(U_1, \dots, U_m) \le \prod_{i=1}^{m} (U_i, U_i) = \tilde\Delta.$  (8.18)

Obviously,

    $\tilde\Delta \ne 0.$  (8.19)

In [105] it was proposed to replace the initial equation by a correct equation that is close, in a sense, to the initial one. To solve Equation (8.15), it was proposed to use, instead of the operator $A^{-1}$, which is the inverse of $A$, an operator $\tilde A^{-1}$ defined as follows. It is known that

    $A^{-1} = \dfrac{B}{\det A} = \dfrac{B}{\Gamma(U_1, \dots, U_m)},$  (8.20)

where $B$ is the matrix consisting of the corresponding cofactors of the elements of the matrix $A$, so that $b_{ij} = A_{ji}$. Introduce the operator

    $\tilde A^{-1} = \dfrac{B}{\prod_{i=1}^{m} (U_i, U_i)} = \dfrac{B}{\tilde\Delta}$  (8.21)

and consider some of its properties. The elements of the matrix $B$ are bounded. Therefore, the operator $\tilde A^{-1}$ is continuous, because $\tilde\Delta > 0$. If the vectors $U_1, \dots, U_m$ are pairwise orthogonal, the operators $\tilde A^{-1}$ and $A^{-1}$ coincide. Indeed, in this case

    $\Gamma(U_1, \dots, U_m) = \prod_{i=1}^{m} \Gamma(U_i) = \prod_{i=1}^{m} (U_i, U_i)$

and from (8.20) and (8.21) we have $A^{-1} = \tilde A^{-1}$.

The solution of the system (8.15) defined by the operator $A^{-1}$ is given by

    $Y = A^{-1}X = \dfrac{B}{\det A}X,$

while the pseudo-solution of (8.15) defined by the operator $\tilde A^{-1}$ is

    $\tilde Y = \tilde A^{-1}X = \dfrac{B}{\tilde\Delta}X.$

Comparing $Y$ and $\tilde Y$, we find that $\|\tilde Y\| \le \|Y\|$; equality holds only if the sensitivity vectors are pairwise orthogonal. The deviation of $\tilde Y$ from $Y$ can be estimated as

    $\|\tilde Y - Y\| = \Bigl\|\dfrac{B}{\tilde\Delta}X - \dfrac{B}{\det A}X\Bigr\| = \Bigl|\dfrac{\det A}{\tilde\Delta} - 1\Bigr|\,\|A^{-1}X\| \le \Bigl|\dfrac{\det A}{\tilde\Delta} - 1\Bigr|\,\|A^{-1}\|\,\|X\|.$

Since the matrix $A$ is symmetric and positive definite, we have

    $\|A^{-1}\| = \dfrac{1}{\lambda_{\min}},$

where $\lambda_{\min}$ is the least eigenvalue of the matrix $A$, which characterizes its conditioning. Thus,

    $\|\tilde Y - Y\| \le \Bigl|\dfrac{\det A}{\tilde\Delta} - 1\Bigr|\,\dfrac{1}{\lambda_{\min}}\,\|X\|.$

If the sensitivity vectors are pairwise orthogonal, we have $\det A = \tilde\Delta$, $\lambda_{\min} > 0$, and $\|\tilde Y - Y\| = 0$, as was demonstrated above. If the matrix $A$ is singular, we have $\lambda_{\min} = 0$ and the deviation of $\tilde Y$ from $Y$ is infinitely large.

It can easily be shown that the solution $\tilde Y$ is less sensitive to variation of the right-hand side of Equation (8.15) than the solution $Y$. The sensitivity of these solutions will be estimated by the matrices

    $U = \dfrac{\partial Y}{\partial X}, \qquad \tilde U = \dfrac{\partial\tilde Y}{\partial X}.$

It is easy to see that

    $U = A^{-1}, \quad \tilde U = \tilde A^{-1}, \quad \|U\| = \|A^{-1}\| = \dfrac{1}{\lambda_{\min}}, \quad \|\tilde U\| = \|\tilde A^{-1}\| = \dfrac{\det A}{\tilde\Delta}\,\|A^{-1}\| = \dfrac{\rho}{\lambda_{\min}},$

where $\rho = \det A/\tilde\Delta$. Hence,

    $\|\tilde U\| \le \|U\|.$

Thus, the solution $\tilde Y$ is more stable with respect to deviations of the initial data than the solution $Y$.

Consider how the convergence condition of the solution of the inverse problem changes as a result of the transition to the pseudo-solution $\tilde Y$. With this aim in view, we write the functional (8.6) in the form

    $J_{k+1} = J_k - 2\Delta\beta_{k+1}^T P_k + \Delta\beta_{k+1}^T C_k\Delta\beta_{k+1}.$  (8.22)

The pseudo-solution is found by

    $\Delta\beta_{k+1} = \dfrac{B_k}{\tilde\Delta_k}P_k.$

Substituting it into (8.22), we obtain

    $J_{k+1} = J_k - 2P_k^T\dfrac{B_k^T}{\tilde\Delta_k}P_k + P_k^T\dfrac{B_k^T}{\tilde\Delta_k}C_k\dfrac{B_k}{\tilde\Delta_k}P_k = J_k - 2\rho_k P_k^T C_k^{-1}P_k + \rho_k^2 P_k^T C_k^{-1}P_k,$

where

    $\rho_k = \dfrac{\det C_k}{\tilde\Delta_k} \le 1.$

From analysis of the last equation it follows that convergence of the iterative process is preserved, though the rate of convergence may decrease.
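The relations above can be checked numerically. The following sketch (our own illustration; all names are ours) builds $A$ from two strongly correlated sensitivity vectors and compares the solution $Y = (B/\det A)X$ with the pseudo-solution $\tilde Y = (B/\tilde\Delta)X$:

```python
import numpy as np

rng = np.random.default_rng(0)
u1 = rng.normal(size=50)
u2 = u1 + 0.1 * rng.normal(size=50)        # strongly correlated with u1
U = np.column_stack([u1, u2])

A = U.T @ U                                # Gram matrix, entries (U_i, U_j)
det_A = np.linalg.det(A)
delta_tilde = A[0, 0] * A[1, 1]            # Hadamard bound (U1,U1)(U2,U2)
B = det_A * np.linalg.inv(A)               # cofactor matrix B, so A^{-1} = B/det A

X = np.array([1.0, -1.0])
Y = B @ X / det_A                          # exact solution A^{-1} X
Y_tilde = B @ X / delta_tilde              # pseudo-solution

print(det_A <= delta_tilde)                          # Hadamard inequality
print(np.linalg.norm(Y_tilde) <= np.linalg.norm(Y))  # ||Y~|| <= ||Y||
```

Since $\tilde Y = \rho Y$ with $\rho = \det A/\tilde\Delta \le 1$, the pseudo-solution is a damped version of the exact one, which is exactly what trades accuracy for stability.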

8.1.6 Investigation of Convergence of the Iterative Process

Consider, as a general case, a dynamic system with additional motion $\Delta Y(t)$ in the form of an $n$-dimensional function of time and an $m$-dimensional vector $\Delta\alpha$ of variations of the parameters $\alpha$. Let $\Delta Y(t)$ be defined by Equation (8.1), and let a functional $J$ of the form (8.7) be used. According to the above iterative process, the estimate of the vector $\beta$ of model parameters at the $(k+1)$-th step is given by

    $\beta_{k+1} = \beta_k + \gamma(\beta_k)\dfrac{B_k}{\det C_k}P_k,$  (8.23)

where $B_k$ is the matrix of cofactors of the matrix $C_k$,

    $C_k = \sum_{i=1}^{N} U_k^T(t_i)D(t_i)U_k(t_i), \qquad P_k = \sum_{i=1}^{N} U_k^T(t_i)D(t_i)\Delta Y_k(t_i),$

or, using the pseudo-solution,

    $\beta_{k+1} = \beta_k + \gamma(\beta_k)\dfrac{B_k}{\tilde\Delta_k}P_k.$  (8.24)

Let us investigate the rate of convergence of the iterative process (8.24). Expanding the vector function $Y(t, \beta)$ into a Taylor series in a neighborhood of the point $\beta_k$, we find

    $Y(t, \beta) = Y(t, \beta_k) + U_k\Delta\beta_k + \dfrac{1}{2}K_k,$

where

    $K_k = \Delta\beta_k^T \dfrac{\partial^2 Y}{\partial\beta^2}\Big|_{\beta=\beta_k} \Delta\beta_k, \qquad F_{ki} = \dfrac{\partial^2 Y}{\partial\beta^2}\Big|_{\beta=\beta_k,\, t=t_i}$

is a three-dimensional matrix. Substituting this formula into (8.24), we obtain

    $\beta_{k+1} = \beta_k + \gamma(\beta_k)\dfrac{1}{\tilde\Delta_k}B_k\sum_{i=1}^{N} U_k^T(t_i)D(t_i)\Bigl[U_k(t_i)\Delta\beta_k + \dfrac{1}{2}K_k\Bigr] = \beta_k + \gamma(\beta_k)\dfrac{1}{\tilde\Delta_k}\Bigl[B_k C_k\Delta\beta_k + \dfrac{1}{2}B_k\sum_{i=1}^{N} U_k^T(t_i)D(t_i)K_k\Bigr].$

Hence, using the formula $C_k^{-1} = B_k/\det C_k$, we obtain

    $\beta_{k+1} = \beta_k + \gamma(\beta_k)\dfrac{\det C_k}{\tilde\Delta_k}\Delta\beta_k + \gamma(\beta_k)\dfrac{1}{2}\dfrac{B_k}{\tilde\Delta_k}\sum_{i=1}^{N} U_k^T(t_i)D(t_i)K_k,$

or

    $\beta_{k+1} - \beta = \Bigl[1 - \gamma(\beta_k)\dfrac{\det C_k}{\tilde\Delta_k}\Bigr](\beta_k - \beta) + \gamma(\beta_k)\dfrac{1}{2}\dfrac{B_k}{\tilde\Delta_k}\sum_{i=1}^{N} U_k^T(t_i)D(t_i)K_k.$

As a distance $\rho(\beta_{k+1}, \beta)$ we may use, for instance, the Euclidean vector norm. In this case

    $\|\beta_{k+1} - \beta\| \le \Bigl|1 - \gamma(\beta_k)\dfrac{\det C_k}{\tilde\Delta_k}\Bigr|\,\|\beta_k - \beta\| + \gamma(\beta_k)\dfrac{\sqrt n}{2}\,\dfrac{\|B_k\|}{\tilde\Delta_k}\sum_{i=1}^{N}\bigl[\|U_k(t_i)\|\,\|D(t_i)\|\,\|F_{ki}\|\bigr]\,\|\beta_k - \beta\|^2.$  (8.25)

Equation (8.25) makes it possible to estimate the rate of local convergence of the iterative process (8.24). As a specific case, from (8.25) it follows that in the general case convergence is linear. The condition of linear convergence is defined by the relation

    $\Bigl|1 - \gamma(\beta_k)\dfrac{\det C_k}{\tilde\Delta_k}\Bigr| + \dfrac{\sqrt n}{2}\,\gamma(\beta_k)\dfrac{\|B_k\|}{\tilde\Delta_k}\sum_{i=1}^{N}\bigl[\|U_k(t_i)\|\,\|D(t_i)\|\,\|F_{ki}\|\bigr]\,\delta_\beta < 1,$  (8.26)

where $\delta_\beta$ is an infinitely small neighborhood of the point $\beta$ that includes the variation $\Delta\beta_k$. For $\gamma(\beta_k)$ such that $0 < \gamma(\beta_k) < 1$, the condition of linear convergence of the process (8.24) is completely determined by the convergence condition of the iterative process (8.23):

    $\dfrac{\det C_k}{\|B_k\|} > \dfrac{\sqrt n}{2}\sum_{i=1}^{N} \|U_k(t_i)\|\,\|D(t_i)\|\,\|F_{ki}\|\,\delta_\beta.$  (8.27)

To ensure a higher rate of convergence, it is required to satisfy the condition

    $\gamma(\beta_k) \to \dfrac{\tilde\Delta_k}{\det C_k} \quad \text{as } k \to \infty,$

which makes the contraction factor in (8.25) vanish. Quadratic convergence is reached only if $\gamma(\beta_k) = \tilde\Delta_k/\det C_k$ and is determined by the condition (8.27) with $\delta_\beta = 1$.


Example 8.1

Consider a dynamic system described by the first-order differential equation

    $T\dot y(t) + y(t) = kx(t),$

where $x(t)$ and $y(t)$ are the input and output signals of the system, respectively. It is required to investigate the rate of convergence of the process of estimating the parameters $T$ and $k$ using sensitivity functions. The sensitivity functions $u_T(t)$ and $u_k(t)$ are obtained by integration of the following system of differential equations:

    $\dot y = \dfrac{1}{T}\bigl[kx(t) - y(t)\bigr], \quad \dot u_T = \dfrac{1}{T}\bigl[-\dot y(t) - u_T(t)\bigr], \quad \dot u_k = \dfrac{1}{T}\bigl[x(t) - u_k(t)\bigr]$

on the interval $[t_0, \tau]$ with given initial conditions $u_T(t_0) = u_{T0}$, $u_k(t_0) = u_{k0}$, $y(t_0) = y_0$.

The system of normal equations obtained during minimization of the quadratic functional J has the form

$$\begin{pmatrix} c_{11} & c_{12}\\ c_{21} & c_{22} \end{pmatrix}\begin{pmatrix} \Delta_T\\ \Delta_k \end{pmatrix} = \begin{pmatrix} p_1\\ p_2 \end{pmatrix},$$

where

$$c_{11} = \int_{t_0}^{\tau} u_T^2(t)\,dt, \qquad c_{22} = \int_{t_0}^{\tau} u_k^2(t)\,dt, \qquad c_{12} = c_{21} = \int_{t_0}^{\tau} u_k(t)u_T(t)\,dt,$$

$$p_1 = \int_{t_0}^{\tau} u_T(t)\Delta y(t)\,dt, \qquad p_2 = \int_{t_0}^{\tau} u_k(t)\Delta y(t)\,dt.$$

It is assumed that there is an initial approximation k₀ and T₀ of the parameters k and T, respectively. Then, the estimates of the parameters k and T on the (i+1)-th step are given by

$$k_{i+1} = k_i + \gamma\,\frac{c_{11}p_2 - c_{21}p_1}{c_{11}c_{22} - c_{21}c_{12}}, \qquad T_{i+1} = T_i + \gamma\,\frac{c_{11}p_1 - c_{12}p_2}{c_{11}c_{22} - c_{21}c_{12}}, \tag{8.28}$$

or, using a pseudo-solution,

$$k_{i+1} = k_i + \gamma\,\frac{c_{11}p_2 - c_{21}p_1}{c_{11}c_{22}}, \qquad T_{i+1} = T_i + \gamma\,\frac{c_{11}p_1 - c_{12}p_2}{c_{11}c_{22}}. \tag{8.29}$$

The convergence condition of these iterative processes for 0 < γ < 1 can be written in the form

$$\det C > \frac{1}{2}\,\|B\|\sum_{i=1}^{N}\|U(t_i)\|\,\|F_i\|\,\|\delta\beta\|,$$

where

$$B = \begin{pmatrix} c_{22} & c_{21}\\ c_{12} & c_{11} \end{pmatrix}, \qquad U^T(t_i) = [\,u_T(t_i)\;\; u_k(t_i)\,], \qquad F_i = \begin{pmatrix} \dfrac{\partial^2 y(t_i)}{\partial T^2} & \dfrac{\partial^2 y(t_i)}{\partial T\,\partial k}\\[2mm] \dfrac{\partial^2 y(t_i)}{\partial T\,\partial k} & \dfrac{\partial^2 y(t_i)}{\partial k^2} \end{pmatrix},$$

$$\delta\beta = \begin{cases} 1 & \text{for solution (8.28)},\\ \Delta\beta_1 & \text{for solution (8.29)}. \end{cases}$$

Tables 8.1 and 8.2 demonstrate the results of numerical solution of the systems (8.28) and (8.29) for the following initial data:

$$k = 5, \quad T = 1, \quad k_0 = 5.5, \quad T_0 = 0.8, \quad y(t_0) = u_T(t_0) = u_k(t_0) = 0, \quad x(t) = x_0 + t, \quad x_0 = 5, \quad N = 200.$$

The system of differential equations was solved by the Runge-Kutta method on the interval [0.1, 20] with integration step 0.1. After that, the input and output variables of the model were corrupted by random values with zero mathematical expectation and root-mean-square deviation σ = 0.1.
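The estimation experiment of Example 8.1 can be reproduced in a few lines. The sketch below is ours, not the authors' code: it uses Euler integration on a fixed grid, γ = 1, omits the measurement noise of the text, and solves the normal equations by Cramer's rule.

```python
# Sketch of the parameter-estimation experiment of Example 8.1
# (noise-free variant; all function and variable names are ours).
def simulate(k, T, h=0.1, n=200):
    """Euler integration of the model and its sensitivity functions."""
    y = uT = uk = 0.0
    ys, uTs, uks = [], [], []
    for i in range(n):
        x = 5.0 + i * h                      # input x(t) = x0 + t, x0 = 5
        ydot = (k * x - y) / T               # T*y' + y = k*x
        uT_dot = (-ydot - uT) / T            # sensitivity w.r.t. T
        uk_dot = (x - uk) / T                # sensitivity w.r.t. k
        y, uT, uk = y + h * ydot, uT + h * uT_dot, uk + h * uk_dot
        ys.append(y); uTs.append(uT); uks.append(uk)
    return ys, uTs, uks

y_plant, _, _ = simulate(5.0, 1.0)           # "plant" data: k = 5, T = 1
k_est, T_est = 5.5, 0.8                      # initial approximations
for _ in range(8):
    ym, uT, uk = simulate(k_est, T_est)
    dy = [a - b for a, b in zip(y_plant, ym)]
    c11 = sum(a * a for a in uT); c22 = sum(a * a for a in uk)
    c12 = sum(a * b for a, b in zip(uT, uk))
    p1 = sum(a * b for a, b in zip(uT, dy))
    p2 = sum(a * b for a, b in zip(uk, dy))
    det = c11 * c22 - c12 * c12              # identifiability: det C != 0
    T_est += (c22 * p1 - c12 * p2) / det     # Cramer solution of C*dbeta = p
    k_est += (c11 * p2 - c12 * p1) / det
print(round(k_est, 5), round(T_est, 5))
```

Because the "plant" data are generated by the same discrete scheme, the residual vanishes at the true parameters and the iteration converges to (5.0, 1.0).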


Table 8.1 Results of pseudo-solution

Iteration    K          T
1            5.02773    0.94442
2            4.98678    0.98346
3            4.98234    0.98880
4            4.98181    0.98957
5            4.98175    0.98966

Table 8.2 Results of direct solution of the system

Iteration    K          T
1            4.97778    0.95969
2            4.98164    0.98933
3            4.98174    0.98966
4            4.98174    0.98967

8.2 Identification of Dynamic Systems

8.2.1 Definition of Identification

The identification problem is the most characteristic applied inverse problem of sensitivity theory.

DEFINITION 8.4 By identification in control theory we mean a procedure of constructing mathematical models of plants (systems) on the basis of observable (measurable) information about input and output signals.

An operator A transforming the input signal X(t) into the output signal Y(t) is an exhaustive plant characteristic. Using identification, we obtain not the operator A itself, but its estimate A₀, which describes the plant model. Obviously, the best model is the one whose operator is, in a sense, close to the plant operator. In real conditions, the proximity is evaluated by a functional J depending on the output signals of the plant and model.

In many cases, the identification process can be reduced to the following scheme. There is a plant with measurable input and output signals denoted by X(t) and Y(t), respectively. A plant model is also given over a specified class of operators. The signal X(t) is applied to the inputs of the plant and model. The output signal Ỹ(t) of the model is algebraically compared with the plant output signal Y(t). The difference Ỹ(t) − Y(t) is used for constructing the proximity functional J. The desired estimate A₀ of the model operator is obtained by minimizing this proximity functional (identification cost function). A general block-diagram of the above identification process is shown in Figure 8.1. In actual conditions, the output and sometimes even the input signal of the plant are distorted by additive disturbances.

Figure 8.1 Identification process

8.2.2 Basic Algorithm of Parametric Identification Using Sensitivity Functions

In the most general case, the identification problem formulated above can be reduced to a nonlinear programming problem that is usually solved using numerical methods. Many of these methods employ various gradient algorithms. As was noted many times, the components of the gradient vector are, in fact, sensitivity functions. Therefore, to find the components of the gradient, we can use sensitivity equations. Methods of constructing and solving these equations have been developed in sensitivity theory. This circumstance may well enhance the speed and precision of gradient algorithms.

Consider a general scheme of parametric identification of a system described by the equation

$$\dot Y = F(Y, \alpha), \qquad Y(t_0) = Y_0. \tag{8.30}$$

The model equation has the form

$$\dot{\tilde Y} = F(\tilde Y, \beta), \qquad \tilde Y(t_0) = Y_0, \tag{8.31}$$

where β is an analog of the vector α of unknown system parameters.


The vector function Ỹ(β₀ + Δβ, t) can be approximated by

$$\tilde Y(\beta_0 + \Delta\beta, t) = \tilde Y(\beta_0, t) + U(t)\Delta\beta, \tag{8.32}$$

where

$$U(t) = \left.\frac{\partial \tilde Y(\beta, t)}{\partial\beta}\right|_{\Delta\beta=0}$$

is the sensitivity matrix, i.e., the solution of the sensitivity equation

$$\dot U = \frac{\partial F(\tilde Y, \beta)}{\partial \tilde Y}\,U + \frac{\partial F(\tilde Y, \beta)}{\partial\beta}, \tag{8.33}$$

where β = β₀ and U(t₀) = 0.

Assume that the quality index of identification has the form of the functional (8.6) with D = E. Then, to determine the variation Δβ we obtain the following algebraic equation

$$C\Delta\beta = P, \tag{8.34}$$

where

$$C = \int_{t_0}^{\tau} U^T(t)U(t)\,dt$$

is the identification matrix, and

$$P = \int_{t_0}^{\tau} U^T(t)\Delta Y(t)\,dt, \qquad \Delta Y(t) = Y(t, \alpha) - \tilde Y(t, \beta).$$

Here Y(t, α) is the plant output signal, and Ỹ(t, β) is the solution of the model equation (8.31) for β = β₀ (the model output signal). If C is a nonsingular matrix, we have

$$\Delta\beta = C^{-1}P.$$

The inequality

$$\det C \ne 0 \tag{8.35}$$

is called the identifiability condition. An estimate of the vector α (or that of its variation Δα provided that the nominal vector α₀ is known) is performed


in N steps (see Section 1.4):

$$\alpha = \beta_0 + \sum_{i=1}^{N}\Delta\beta_i.$$

Above, in fact, we described a single iteration step, i.e., Δβ = Δβ₁. Having obtained Δβ₁, we determine the base value of the vector β for the second step in the form β₁ = β₀ + Δβ₁, and so on. In general, the estimates Δβᵢ are random values. To obtain these estimates, linear identification equations and least-squares methods are used on each iteration step. Therefore, it is possible to perform their statistical analysis.
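A single iteration of the scheme, with the integrals in C and P replaced by sums over sample times, can be sketched as follows. The function name and the synthetic sensitivity samples are our illustrative assumptions.

```python
import numpy as np

def identification_step(U_samples, dY_samples, dt):
    """One step of (8.34): solve C*dbeta = P with integrals as sums.
    U_samples:  list of (n x m) sensitivity matrices U(t_i),
    dY_samples: list of n-vectors dY(t_i) = Y(t_i) - Y_model(t_i)."""
    C = sum(U.T @ U for U in U_samples) * dt           # identification matrix
    P = sum(U.T @ dY for U, dY in zip(U_samples, dY_samples)) * dt
    if abs(np.linalg.det(C)) < 1e-12:                  # identifiability (8.35)
        raise ValueError("identifiability condition det C != 0 violated")
    return np.linalg.solve(C, P)

# Synthetic check: scalar output, two parameters, hypothetical sensitivities
ts = np.linspace(0.0, 1.0, 50)
U_samples = [np.array([[1.0, t]]) for t in ts]
true_dbeta = np.array([0.3, -0.2])
dY_samples = [U @ true_dbeta for U in U_samples]       # exactly linear data
dbeta = identification_step(U_samples, dY_samples, ts[1] - ts[0])
print(dbeta)
```

Because the synthetic residuals are exactly linear in the parameters here, the recovered variation equals the one used to generate the data.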

8.2.3 Identifiability and Observability

Consider the dynamic system

$$\dot Y = F(Y, t, \alpha), \qquad Z = H(t)Y, \qquad Y(t_0) = Y_0, \tag{8.36}$$

where Z is an s-dimensional vector of observable (measurable) coordinates, s ≤ n, and H(t) is the observation matrix of dimensions s × n. For the given system with D = E the identification cost function has the form

$$J = \int_{t_0}^{\tau}(\Delta Z - G\Delta\beta)^T(\Delta Z - G\Delta\beta)\,dt, \tag{8.37}$$

where ΔZ = Z − HỸ and G = HU.

The necessary minimum condition for this functional is given by

$$\tilde C\Delta\beta = \tilde P, \tag{8.38}$$

where

$$\tilde C = \int_{t_0}^{\tau} G^T G\,dt, \qquad \tilde P = \int_{t_0}^{\tau} G^T\Delta Z\,dt.$$

Moreover, the identifiability condition is given by the inequality

$$\det\tilde C = \det\int_{t_0}^{\tau} U^T H^T H U\,dt \ne 0. \tag{8.39}$$

Obviously, to meet this requirement it suffices to ensure that the columns (or rows) of the matrix G = HU be linearly independent.

Let us write the sensitivity equation (8.33) in the form

$$\dot U = A(t)U + R(t), \qquad U(t_0) = 0, \tag{8.40}$$

where

$$A(t) = \frac{\partial F}{\partial\tilde Y}, \qquad R(t) = \frac{\partial F}{\partial\beta}.$$

Moreover,

$$\Delta Z(t) = H(t)U(t)\Delta\beta. \tag{8.41}$$

Multiplying Equation (8.40) term-wise on the right by Δβ and using the notation

$$\Omega(t) = U(t)\Delta\beta, \tag{8.42}$$

we write

$$\dot\Omega = A(t)\Omega + R(t)\Delta\beta, \qquad \Omega(t_0) = 0, \qquad \Delta Z = H(t)\Omega. \tag{8.43}$$

Equations (8.43) describe a linear nonstationary system with the vector of phase coordinates Ω(t) of dimension n and observation vector Z(t) of dimension s ≤ n. Assume that the system is observable, i.e., it is possible to reconstruct the vector Ω(t) from a known vector Z(t). Then, obviously, the variation Δβ can, according to (8.42), be found from the equation Ω = UΔβ. This can be done, for instance, by the least-squares method or by pseudo-inverting the matrix U.

Consider the observability condition for the dynamic system (8.43). With this aim in view, we introduce the transition matrix (Cauchy matrix) Φ(t) satisfying the condition

$$\dot\Phi = A(t)\Phi, \qquad \Phi(0) = E. \tag{8.44}$$

As is known, a linear nonstationary system with transition matrix Φ(t) of dimensions n × n and observation matrix H(t) of dimensions s × n is called completely observable on the interval (0, τ) if the matrix

$$M(\tau) = \int_{t_0}^{\tau}\Phi^T(t)H^T(t)H(t)\Phi(t)\,dt \tag{8.45}$$

is nonsingular. Let us note that the structure of the matrix M(τ) is identical to the structure of the matrix C̃ in Relation (8.38).

Assume that the above technique employing sensitivity functions is applied to determine unknown initial conditions Y₀. In this case the sensitivity matrix

$$U(t) = \frac{\partial\tilde Y}{\partial\tilde Y_0}$$

satisfies the differential equation

$$\dot U = \frac{\partial F}{\partial\tilde Y}\,U, \qquad U(0) = E. \tag{8.46}$$

Comparing (8.44) and (8.46), we obtain that Φ(t) = U(t). Then, the conditions of identifiability (8.35) and observability coincide.
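The Gramian test (8.45) can be checked numerically for a simple system with a known transition matrix. The example below is ours: a double integrator, A = [[0, 1], [0, 0]], whose Cauchy matrix is Φ(t) = [[1, t], [0, 1]]; measuring position gives a nonsingular Gramian, measuring only velocity does not.

```python
import numpy as np

# Observability Gramian M(tau) of (8.45) for the double integrator
# (illustrative choice; phi(t) is its closed-form transition matrix).
def phi(t):
    return np.array([[1.0, t], [0.0, 1.0]])

def gramian(H, tau, n=2000):
    dt = tau / n
    M = np.zeros((2, 2))
    for i in range(n):
        t = (i + 0.5) * dt                    # midpoint quadrature
        P = phi(t)
        M += P.T @ H.T @ H @ P * dt
    return M

M_pos = gramian(np.array([[1.0, 0.0]]), 1.0)  # position measured
M_vel = gramian(np.array([[0.0, 1.0]]), 1.0)  # only velocity measured
print(np.linalg.det(M_pos), np.linalg.det(M_vel))  # ~1/12 and 0
```

The singular velocity-only Gramian reflects the fact that the initial position cannot be reconstructed from velocity measurements alone, so the corresponding identification problem (8.38) is also degenerate.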

8.3 Distribution of Parameter Tolerances

8.3.1 Preliminaries

System design results in determining nominal (assumed) values of its parameters that satisfy a given quality index and workability criterion. As a special case, nominal parameter values can be found by determining an extremum of the quality index. Actual values of system parameters are usually different from the nominal ones. In general, variations of parameter values from their nominal values influence the quality of system operation. Let operation quality index be characterized by a value I that is a function of parameters α1 , . . . , αm . During system design the nominal parameter values are obtained. They are associated with the nominal value of the quality index I0 = I(α10 , . . . , αm0 ).


Real systems have, due to the above reasons, other values of the parameters that differ from the nominal ones by Δα₁, ..., Δα_m, respectively. Then,

$$I = I(\alpha_{10} + \Delta\alpha_1, \ldots, \alpha_{m0} + \Delta\alpha_m). \tag{8.47}$$

Hence, the quality index gets an increment

$$\Delta I = I(\alpha_{10} + \Delta\alpha_1, \ldots, \alpha_{m0} + \Delta\alpha_m) - I_0. \tag{8.48}$$

Owing to inevitable parameter perturbations with respect to their nominal values and, as a consequence, the variation ΔI, at the design stage it is required to determine limits of variation of the quality index I or those of its increment ΔI. Thus, a system is considered to be operable (see Figure 1.16) if

$$I(\alpha_1, \ldots, \alpha_m) \in M_w \quad\text{or}\quad \Delta I(\Delta\alpha_1, \ldots, \Delta\alpha_m) \in \Delta M_w, \tag{8.49}$$

where M_w and ΔM_w are admissible sets (regions) of the values of the quality index and its variation. The sets M_w and ΔM_w characterize tolerances for the quality index and its variation. In general, by tolerances we mean preliminarily specified limits within which the values of the parameters and the quality criterion determining the operation quality of the system (plant, unit, device) must be kept. Quantitative characteristics of a tolerance are the upper x_max and lower x_min limit deviations from a nominal value, the tolerance field Δ_x, the coordinate of its mean value x̃, and the half of the tolerance field δ_x. These values are related by

$$\Delta_x = x_{\max} - x_{\min}, \quad \delta_x = \frac{\Delta_x}{2}, \quad \tilde x = \frac{1}{2}(x_{\max} + x_{\min}), \quad x_{\max} = \tilde x + \delta_x, \quad x_{\min} = \tilde x - \delta_x. \tag{8.50}$$

In technical literature the notion of tolerance is usually equivalent to the notion of the tolerance field Δ_x, which is, as distinct from the other tolerance characteristics, a strictly positive value. The other values may be negative or zero.

8.3.2 Tolerance Calculation Problem

Tolerance calculation is a mandatory stage in the design of all types of devices (including radio-electronic, electromechanical, precise mechanical, and so on). The goal of this stage is to coordinate the tolerance Δ_I on the quality criterion I of the system (unit, device) with the tolerances Δ_{αᵢ} on the parameters of its elements. In this case it is important to distinguish between the direct and inverse problems of determining tolerances.

In the direct problem it is required to find, from given tolerances of the parameters α₁, ..., α_m, the tolerance Δ_I within which the value of the quality criterion I will be kept. The problem reduces to determining an operator (transformation) L mapping the set (tolerance) Δ_α from the parameter space (α₁, ..., α_m) onto the set (tolerance) Δ_I in the space of the quality criterion I:

$$\Delta_I = L\{\Delta_\alpha\}. \tag{8.51}$$

In the inverse problem, the value of the tolerance of the quality index is given. It is required to find tolerances Δ_α for the system parameters that ensure that the quality index remains inside the limits of Δ_I. Mathematically, the problem reduces to determining an operator (transformation) Q mapping the set Δ_I onto the set Δ_α:

$$\Delta_\alpha = Q\{\Delta_I\}. \tag{8.52}$$

In fact, the direct problem reduces to analysis of system precision under parameter variation. If an analytical relation (8.47) between the quality criterion I and the parameters α₁, ..., α_m is known, the problem is readily solved by known methods in both deterministic and stochastic conditions. Tolerance distribution is a very difficult problem. Currently, there are no general methods for its solution. Some partial approaches to tolerance setting are given in [4, 32, 45, 71]. The most widespread are methods based on the use of sensitivity functions and coefficients.

8.3.3 Initial Mathematical Models

When tolerances are calculated using methods of sensitivity theory, it is assumed that deviations of the parameter values from the nominal ones are small and that the quality index depends linearly on the parameter variations inside the tolerance field. Then, the variation (8.48) of the quality index I can be represented in the form

$$\Delta I = \sum_{i=1}^{m} u_i\,\Delta\alpha_i, \tag{8.53}$$

where

$$u_i = \left.\frac{\partial I}{\partial\alpha_i}\right|_{\Delta\alpha=0}$$

is the sensitivity coefficient.

The value Δαᵢ in (8.53) is the "current" variation given by Δαᵢ = αᵢ − αᵢ₀. Depending on the relation between αᵢ and αᵢ₀, the value Δαᵢ can be positive as well as negative. The signs of the sensitivity coefficients u₁, ..., u_m are known in advance. For the worst combination of parameter variations, the variation of the quality index I is bounded by

$$\Delta\tilde I = \sum_{i=1}^{m} |u_i|\,\Delta_i, \tag{8.54}$$

where Δᵢ = Δ_{αᵢ} is the tolerance field for the parameter αᵢ. Obviously, in this case Equation (8.54) directly determines the tolerance field for the quality index Δ_I, i.e.,

$$\Delta_I = \sum_{i=1}^{m} |u_i|\,\Delta_i.$$

It can be easily shown that a similar formula can be obtained for the half of the tolerance field:

$$\delta_I = \sum_{i=1}^{m} |u_i|\,\delta_i. \tag{8.55}$$
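The worst-case rule (8.54) is a one-line computation. The sensitivity coefficients and tolerance fields below are illustrative numbers of our choosing (a quality index I = α₁α₂ at the nominal point (2, 3), so u₁ = 3 and u₂ = 2).

```python
# Worst-case tolerance field of the quality index, formula (8.54);
# the numeric example is an illustrative assumption.
def worst_case_tolerance(u, delta):
    """Sum |u_i| * Delta_i over all parameters."""
    return sum(abs(ui) * di for ui, di in zip(u, delta))

u = [3.0, 2.0]        # sensitivity coefficients u_i
delta = [0.1, 0.2]    # parameter tolerance fields Delta_i
print(worst_case_tolerance(u, delta))   # ~0.7
```

The same function applied to the half-fields δᵢ yields the half-field δ_I of (8.55).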

Assume that the parameter variations Δα₁, ..., Δα_m are stochastic values. For calculating tolerances it is often sufficient to determine only the mathematical expectation and variance of the variation ΔI. Let K_Δα be the correlation matrix of the vector Δα = (Δα₁, ..., Δα_m)ᵀ, so that

$$K_{\Delta\alpha} = \begin{pmatrix} \sigma_1^2 & K_{12} & \ldots & K_{1m}\\ K_{21} & \sigma_2^2 & \ldots & K_{2m}\\ \ldots & \ldots & \ldots & \ldots\\ K_{m1} & K_{m2} & \ldots & \sigma_m^2 \end{pmatrix}.$$

Then, the mathematical expectation M[ΔI] and variance σ²[ΔI] of the variation ΔI are given by, respectively,

$$M[\Delta I] = \sum_{i=1}^{m} u_i\,M[\Delta\alpha_i], \qquad \sigma^2[\Delta I] = \sum_{i=1}^{m} u_i^2\sigma_i^2 + 2\sum_{i<q} u_i u_q K_{iq}.$$

3. in the general case, m > q + n.

If the sensitivity matrix U(t) exists and the system (8.93)–(8.94) is controllable, the control action V(t) in the problem of optimal insensitive system design can be found, as was already noted, by methods developed in the theory of optimal control.
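The mean and variance formulas for ΔI amount to the quadratic form uᵀKu; a short check with illustrative numbers (function name and values are ours):

```python
import numpy as np

# Mean and variance of Delta I from sensitivity coefficients u and the
# correlation matrix K of the parameter variations.
def delta_I_statistics(u, mean_dalpha, K):
    u = np.asarray(u)
    m = u @ np.asarray(mean_dalpha)   # M[dI] = sum u_i * M[dalpha_i]
    var = u @ np.asarray(K) @ u       # u^T K u = sum u_i^2 s_i^2 + 2 sum u_i u_q K_iq
    return m, var

u = [3.0, 2.0]
K = [[0.01, 0.002],
     [0.002, 0.04]]
m, var = delta_I_statistics(u, [0.0, 0.0], K)
print(m, var)   # ~0.0 0.274
```

Writing the variance as uᵀKu makes the cross-correlation terms explicit: with K₁₂ = 0.002 they contribute 2·3·2·0.002 = 0.024 here.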

8.5 Numerical Solution of Sensitivity Equations

8.5.1 General Structure of Numerical Integration Error

General methods of calculation of sensitivity functions of dynamic systems with lumped parameters were considered in [67, 94]. Therein, an analysis of computer integration methods for sensitivity equations was presented, possibilities of similar models were elucidated, and experimental techniques of determining sensitivity functions were proposed. In the present section we present additional information on integration and evaluation of the error in the obtained sensitivity functions.

Among computer integration methods applied to sensitivity equations, the most popular are various methods of direct parallel integration of the initial system and the sensitivity equations. This means that the following system of equations is integrated:

$$\dot Y = F(Y, t), \qquad Y(0) = Y_0, \qquad \dot U = \Phi(U, Y, t), \qquad U(0) = U_0, \tag{8.95}$$

where

$$\Phi = \frac{\partial F}{\partial Y}\,U + \frac{\partial F}{\partial\alpha}.$$

For simplicity, we assume that the right side of the first equation of the system (8.95) is continuous with respect to t and has continuous second-order partial derivatives with respect to y and the parameter α. Let also y₀ be independent of α. A peculiar feature of the system (8.95) is that it is semi-decomposed, i.e., the right side of the first equation does not depend on the solution of the second one. Moreover, the right side of the second equation is formed, in fact, by linearization of the right side of the first equation. These circumstances lead to the conclusion that the problem of numerical integration of sensitivity equations is specific, mostly with respect to the character of error growth in the integration process.

As was shown above, the equation of the initial system is integrated independently of the sensitivity equation. Since numerical integration is approximate, as a result of integration of an equation

$$\dot y = f(y, t), \qquad y(0) = y_0,$$

we obtain ỹ(t) = y(t) + Δy(t), where Δy(t) is the numerical integration error caused by transmitted, computational, and methodical errors. The solution ỹ(t) is substituted into the sensitivity equation

$$\dot u = \varphi(u, \tilde y, t). \tag{8.96}$$

As a result, even for exact integration of this equation, there arises an error Δu(t) such that ũ(t) = u(t) + Δu(t).


In the first approximation, this error is determined by the differential equation

$$\Delta\dot u = \frac{\partial\varphi}{\partial u}\,\Delta u + \frac{\partial\varphi}{\partial y}\,\Delta y, \qquad \Delta u(0) = 0,$$

or

$$\Delta\dot u = \frac{\partial f}{\partial y}\,\Delta u + \left(\frac{\partial^2 f}{\partial y^2}\,u + \frac{\partial^2 f}{\partial\alpha\,\partial y}\right)\Delta y, \qquad \Delta u(0) = 0. \tag{8.97}$$

From Equation (8.97) it is evident that the value of the error Δu(t) is determined by the second-order derivatives of the right side of the initial equation. But Equation (8.96) is, in its turn, integrated approximately using numerical methods. As a result, we obtain the following estimate of the sensitivity function:

$$u^*(t) = \tilde u(t) + \Delta\tilde u(t)$$

or

$$u^*(t) = u(t) + \Delta u(t) + \Delta\tilde u(t) = u(t) + \delta u(t), \qquad \delta u(t) = \Delta u(t) + \Delta\tilde u(t).$$

8.5.2 Formula for Integration of Sensitivity Equations

Assume that the scheme (8.95) is integrated by Euler's method with a step h. Let us show that in this case the approximate estimates of the sensitivity function are given by the formula

$$u_n = h\sum_{j=0}^{n-1}\frac{\partial f_j}{\partial\alpha}\prod_{i=j+1}^{n-1}\left(1 + h\,\frac{\partial f_i}{\partial y}\right), \tag{8.98}$$

where

$$\frac{\partial f_i}{\partial y} = \left.\frac{\partial f}{\partial y}\right|_{y=y_i,\; t=t_i}.$$

This can be proved by induction. By Euler's method,

$$u_n = u_{n-1} + h\varphi(u_{n-1}, y_{n-1}, t_{n-1}).$$

For n = 1 we have

$$u_1 = u_0 + h\varphi(u_0, y_0, t_0) = h\,\frac{\partial f_0}{\partial\alpha},$$

and, for n = 2,

$$u_2 = u_1 + h\varphi(u_1, y_1, t_1) = h\,\frac{\partial f_0}{\partial\alpha}\left(1 + h\,\frac{\partial f_1}{\partial y}\right) + h\,\frac{\partial f_1}{\partial\alpha}.$$

Assume that Equation (8.98) holds for u_{n−1}, so that

$$u_{n-1} = h\sum_{j=0}^{n-2}\frac{\partial f_j}{\partial\alpha}\prod_{i=j+1}^{n-2}\left(1 + h\,\frac{\partial f_i}{\partial y}\right).$$

Then,

$$u_n = u_{n-1} + h\varphi(u_{n-1}, y_{n-1}, t_{n-1}) = u_{n-1}\left(1 + h\,\frac{\partial f_{n-1}}{\partial y}\right) + h\,\frac{\partial f_{n-1}}{\partial\alpha} = h\sum_{j=0}^{n-1}\frac{\partial f_j}{\partial\alpha}\prod_{i=j+1}^{n-1}\left(1 + h\,\frac{\partial f_i}{\partial y}\right).$$

8.5.3 Estimates of Solution Errors

It is known that the following estimate is true for the solution y(t) [41]:

$$|y(t_n) - y_n| \le \frac{hM}{2N}\left[(1+hN)^n - 1\right], \tag{8.99}$$

where y(t_n) is the precise solution for t = t_n, y_n is the approximate value found by Euler's method, and the constants M and N satisfy the inequalities

$$|f(t, y_1) - f(t, y_2)| \le N|y_1 - y_2|, \qquad \left|\frac{df}{dt}\right| = \left|\frac{\partial f}{\partial y}\,f + \frac{\partial f}{\partial t}\right| \le M,$$

where y₁ and y₂ are arbitrary values of y(t) from the solution domain.

Let us find an estimate similar to (8.99) for the sensitivity function u(t). Consider the difference φ(t, u₁) − φ(t, u₂). First, we analyze the case when the value of y(t) in the function φ(t, y, u) is accurately known. Then,

$$|\varphi(t, u_1, y) - \varphi(t, u_2, y)| = \left|\frac{\partial f}{\partial y}\,(u_1 - u_2)\right| = \left|\frac{\partial f}{\partial y}\right||u_1 - u_2| \le N|u_1 - u_2|$$


and

$$\left|\frac{d\varphi}{dt}\right| = \left|\frac{\partial^2 f}{\partial y^2}\,fu + \frac{\partial^2 f}{\partial y\,\partial\alpha}\,f + \frac{\partial^2 f}{\partial t\,\partial y}\,u + \frac{\partial^2 f}{\partial t\,\partial\alpha} + \frac{\partial f}{\partial y}\left(\frac{\partial f}{\partial y}\,u + \frac{\partial f}{\partial\alpha}\right)\right| \le P.$$

In this case, the error estimate for integration of sensitivity equations is given by the formula

$$|u(t_n) - u_n| \le \frac{hP}{2N}\left[(1+hN)^n - 1\right]. \tag{8.100}$$

Next, let us estimate the difference φ(t, u₁) − φ(t, u₂) taking into account the error in determining y(t):

$$|\varphi(u_1, y+\Delta y, t) - \varphi(u_2, y+\Delta y, t)| = \left|\left(\frac{\partial f}{\partial y} + \frac{\partial^2 f}{\partial y^2}\,\Delta y\right)(u_1 - u_2)\right| \le \left|\frac{\partial f}{\partial y}\right||u_1-u_2| + \left|\frac{\partial^2 f}{\partial y^2}\right||\Delta y||u_1-u_2| \le N|u_1-u_2| + S\max_t|\Delta y|\,|u_1-u_2|,$$

where S is a constant such that

$$\left|\frac{\partial^2 f}{\partial y^2}\right| < S.$$

Let

$$\max|\Delta y| = \frac{hM}{2N}\left[(1+hN)^n - 1\right].$$

Then,

$$|\varphi(u_1, \tilde y, t) - \varphi(u_2, \tilde y, t)| \le N|u_1-u_2| + S\,\frac{hM}{2N}\left[(1+hN)^n - 1\right]|u_1-u_2| = Q|u_1-u_2|,$$

where

$$Q = N + S\,\frac{hM}{2N}\left[(1+hN)^n - 1\right].$$

Obviously, Q > N. As a result, the error of sensitivity function integration can be estimated by the formula

$$|u(t_n) - u_n| \le \frac{hP}{2Q}\left[(1+hQ)^n - 1\right]. \tag{8.101}$$

The estimate (8.101) is fairly rough and has mostly theoretical meaning. Results more acceptable for practice can be obtained from the following reasoning. Using Euler's method, approximate values of the solution of the system (8.95) are successively determined by the formulas

$$y_{i+1} = y_i + hf(y_i, t_i), \qquad u_{i+1} = u_i + h\varphi(u_i, y_i, t_i), \qquad i = 0, 1, 2, \ldots$$

In the first integration step we have

$$\tilde y_1 = y_0 + hf(y_0, t_0), \qquad \tilde u_1 = u_0 + h\varphi(u_0, y_0, t_0) = h\,\frac{\partial f_0}{\partial\alpha},$$

where

$$\frac{\partial f_0}{\partial\alpha} = \left.\frac{\partial f}{\partial\alpha}\right|_{t_0, y_0}, \qquad \tilde y_1 = y_1 + \Delta y_1, \qquad \tilde u_1 = u_1 + \Delta u_1,$$

and Δy₁ and Δu₁ are methodical integration errors. In the second step,

$$\tilde u_2 = \tilde u_1 + h\varphi(\tilde u_1, \tilde y_1, t_1) = \left(1 + h\,\frac{\partial\tilde f_1}{\partial y}\right)\tilde u_1 + h\,\frac{\partial\tilde f_1}{\partial\alpha}.$$

The errors in determining y₁ and u₁ lead to errors in calculating ∂f₁/∂y and ∂f₁/∂α and the whole right side. As a result, there appears an additional error δu₂ in determining u₂. It can be estimated from

$$\tilde u_2 = \left(1 + h\,\frac{\partial f_1}{\partial y} + h\,\frac{\partial^2 f_1}{\partial y^2}\,\Delta y_1\right)(u_1 + \Delta u_1) + h\,\frac{\partial f_1}{\partial\alpha} + h\,\frac{\partial^2 f_1}{\partial\alpha\,\partial y}\,\Delta y_1.$$

Hence,

$$\tilde u_2 \approx \left(1 + h\,\frac{\partial f_1}{\partial y}\right)u_1 + h\,\frac{\partial f_1}{\partial\alpha} + h\,\frac{\partial^2 f_1}{\partial y^2}\,u_1\Delta y_1 + \left(1 + h\,\frac{\partial f_1}{\partial y}\right)\Delta u_1 + h\,\frac{\partial^2 f_1}{\partial\alpha\,\partial y}\,\Delta y_1.$$

Therefore,

$$\delta u_2 = h\,\frac{\partial^2 f_1}{\partial y^2}\,u_1\Delta y_1 + \left(1 + h\,\frac{\partial f_1}{\partial y}\right)\Delta u_1 + h\,\frac{\partial^2 f_1}{\partial\alpha\,\partial y}\,\Delta y_1.$$

In general, it can be shown that

$$\tilde u_{i+1} = \tilde u_i + h\varphi(\tilde u_i, \tilde y_i, t_i) = u_i + h\varphi(u_i, y_i, t_i) + \delta u_{i+1},$$

where

$$\delta u_{i+1} = \left(1 + h\,\frac{\partial f_i}{\partial y}\right)\Delta u_i + h\,\Delta y_i\left(\frac{\partial^2 f_i}{\partial y^2}\,u_i + \frac{\partial^2 f_i}{\partial\alpha\,\partial y}\right). \tag{8.102}$$

Note that

$$\delta y_{i+1} = \left(1 + h\,\frac{\partial f_i}{\partial y}\right)\Delta y_i.$$

Thus, the second-order derivatives of the function f(y, t, α) play a significant role in forming a part of the error δu_{i+1}. Notice that Relation (8.102) is a discrete analog (difference equation) of the differential equation (8.97) for the error Δu(t).

8.5.4 Estimates of Integration Error for a First-Order System

As follows from the above, integration errors for sensitivity functions may well significantly exceed integration errors for the initial systems. This circumstance can be visually demonstrated for linear stationary systems. Consider the simplest system

$$\dot y = \alpha y, \qquad y(0) = y_0. \tag{8.103}$$

The sensitivity equation with respect to the parameter α has the form

$$\dot u = \alpha u + y, \qquad u(0) = 0. \tag{8.104}$$

Eliminating the variable y from (8.104), we obtain

$$\ddot u - 2\alpha\dot u + \alpha^2 u = 0, \qquad u(0) = 0, \qquad \dot u(0) = y_0. \tag{8.105}$$

The characteristic equation for (8.105) has the form s² − 2αs + α² = 0, i.e., there is a multiple root. In the case of a multiple root the solution of the sensitivity equation (8.105) contains a term proportional to t:

$$u(t) = c_1 e^{\alpha t} + c_2 t e^{\alpha t} = y_0 t e^{\alpha t},$$

because c₁ = 0 and c₂ = y₀. On the other hand, the solution of Equation (8.103) has the form

$$y(t) = y_0 e^{\alpha t}.$$

Euler's method gives the following approximate solutions:

$$u_n = n h y_0(1+\alpha h)^{n-1}, \qquad y_n = y_0(1+\alpha h)^n.$$

Then, with t_n = nh, the errors are given by

$$u(t_n) - u_n = y_0 t_n\left[e^{\alpha t_n} - (1+\alpha h)^{n-1}\right], \qquad y(t_n) - y_n = y_0\left[e^{\alpha t_n} - (1+\alpha h)^n\right].$$

It is seen that the integration error of the sensitivity equation grows proportionally to t.

8.5.5 Results of Numerical Calculation

To justify the results of the previous paragraph, we performed numerical integration of the sensitivity equations using the methods of Euler and Runge-Kutta with different integration steps h. The initial system was described by the equation

$$\dot y = f(y, t) = -\frac{1}{T}\,y \tag{8.106}$$

with initial condition y(0) = y₀. Equation (8.106) corresponds to the accurate solution

$$y(t) = y_0 e^{-t/T}. \tag{8.107}$$

The sensitivity equation with respect to the parameter T and the accurate solution of this equation have the form, respectively,

$$T\dot u + u = -\dot y(t) = \frac{y_0}{T}\,e^{-t/T}, \qquad u(0) = 0, \tag{8.108}$$

$$u(t) = \frac{y_0 t}{T^2}\,e^{-t/T}. \tag{8.109}$$

For integration we assumed y(0) = 1, T = 1 sec.

Using the simplest Euler's method

$$y_{i+1} = y_i + hf(t_i, y_i), \qquad t_i = t_0 + ih,$$

and the Runge-Kutta method of the fourth order

$$y_{i+1} = y_i + \frac{1}{6}(b_1 + 2b_2 + 2b_3 + b_4),$$

where

$$b_1 = f(y_i, t_i)h, \qquad b_2 = f\!\left(y_i + \frac{b_1}{2},\; t_i + \frac{h}{2}\right)h, \qquad b_3 = f\!\left(y_i + \frac{b_2}{2},\; t_i + \frac{h}{2}\right)h, \qquad b_4 = f(y_i + b_3,\; t_i + h)h,$$

we determined the following values:

1. the numerical solution ỹ(t) of the initial equation
2. the numerical solution ũ₁(t) of the sensitivity equation for accurate y(t)
3. the numerical solution ũ₂(t) of the sensitivity equation for ỹ(t)
4. the absolute errors Δũ₁(t) = ũ₁(t) − u(t) and Δũ₂(t) = ũ₂(t) − u(t) in calculating the sensitivity functions
5. the relative errors δũ₁(t) = Δũ₁(t)/u(t) and δũ₂(t) = Δũ₂(t)/u(t) in calculating the sensitivity functions

The results of computation are given in Figures 8.2–8.9. The first four figures present the curves obtained by Euler's integration method, while the last four demonstrate the results of applying the Runge-Kutta algorithm. For comparison, the curves of the accurate solutions are also shown in Figures 8.2, 8.3, 8.6, and 8.7. All the curves in Figure 8.6 practically coincide. The curves ũ₁(t) and ũ₂(t), Δũ₁(t) and Δũ₂(t), δũ₁(t) and δũ₂(t) shown in Figures 8.7–8.9 also converge. Integration by both methods was performed for h = 0.05 sec, 0.1 sec, and 0.2 sec. The total integration interval was 5 sec. Analysis of the above curves shows an evident tendency of integration errors for sensitivity functions to increase as the integration step h increases. Advantages of the Runge-Kutta method in comparison with Euler's method are also evident. For Euler's method, the precision of calculating the sensitivity function ũ₁(t) is higher than that of ũ₂(t).
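A compact reproduction sketch of this experiment follows: Euler and fourth-order Runge-Kutta integration of the coupled pair (8.106), (8.108) with h = 0.1 on [0, 5], compared against the exact sensitivity (8.109). Function names and code structure are ours, not the book's.

```python
import math

T, y0, h, t_end = 1.0, 1.0, 0.1, 5.0

def f(y, t):                 # y' = -y/T, Equation (8.106)
    return -y / T

def g(u, y, t):              # T*u' + u = y/T  =>  u' = (y/T - u)/T
    return (y / T - u) / T

def euler_step(y, u, t):
    return y + h * f(y, t), u + h * g(u, y, t)

def rk4_step(y, u, t):
    k1y, k1u = f(y, t), g(u, y, t)
    k2y, k2u = f(y + h*k1y/2, t + h/2), g(u + h*k1u/2, y + h*k1y/2, t + h/2)
    k3y, k3u = f(y + h*k2y/2, t + h/2), g(u + h*k2u/2, y + h*k2y/2, t + h/2)
    k4y, k4u = f(y + h*k3y, t + h),    g(u + h*k3u, y + h*k3y, t + h)
    return (y + h * (k1y + 2*k2y + 2*k3y + k4y) / 6,
            u + h * (k1u + 2*k2u + 2*k3u + k4u) / 6)

errors = {}
for step in (euler_step, rk4_step):
    y, u, t = y0, 0.0, 0.0
    for _ in range(round(t_end / h)):
        y, u = step(y, u, t)
        t += h
    u_exact = y0 * t / T**2 * math.exp(-t / T)   # Equation (8.109)
    errors[step.__name__] = abs(u - u_exact)
print(errors)
```

Running the sketch shows the Runge-Kutta error several orders of magnitude below the Euler error at t = 5 s, in line with the conclusions drawn from the figures.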


Figure 8.2 y(t) and ỹ(t) by Euler's method

Figure 8.3 u(t) and ũ(t) by Euler's method

Figure 8.4 Δũ(t) by Euler's method

Figure 8.5 δũ(t) by Euler's method

Figure 8.6 y(t) and ỹ(t) by the Runge-Kutta method

Figure 8.7 u(t) and ũ(t) by the Runge-Kutta method

Figure 8.8 Δũ(t) by the Runge-Kutta method

Figure 8.9 δũ(t) by the Runge-Kutta method

Bibliography

[1] Abramov, O., Zdor, V., and Suponya, A., Tolerances and Nominals of Control Systems [in Russian], Moscow: Nauka, 1976.
[2] Aizerman, M., and Gantmakher, F., On determination of periodic modes in nonlinear dynamic system with piecewise linear characteristic, PMM, 5, 1956.
[3] Alekseev, O., and Gaev, S., Choice of optimal tolerances for device elements, Trans. Acad. Sc. USSR, Technical Cybern., n. 4, 1968.
[4] Baranov, G., On the choice of tolerances ensuring unit precision and minimal cost, in Trans. Machine Inst., Russ. Acad. Sc., v. 11, 1956.
[5] Barbashin, E., Introduction in Stability Theory [in Russian], Moscow: Nauka, 1967.
[6] Barbashin, E., Lyapunov Functions [in Russian], Moscow: Nauka, 1970.
[7] Belove, C., Sensitivity sums of homogeneous functions, IEEE Trans., CT-11, 1964.
[8] Besekerskii, V., and Popov, E., Theory of Automatic Control Systems [in Russian], Moscow: Nauka, 1975.
[9] Beloshapkin, V., Sensitivity of Meier-Boltz Optimal Control Problem, in Trans. of All-Russia Seminar, Vladivostok, Acad. Sc. USSR, 1975.
[10] Beloshapkin, V., and Yusupov, R., Sensitivity of Boltz Variational Problem, in Problems of Cybernetics, v. 23, Sensitivity Theory and its Application, Moscow: Svyaz, 1977.
[11] Blinov, I., Gaskarov, D., and Mozgalevskii, A., Automatic Monitoring of Control Systems, Moscow: Energia, 1968.
[12] Bode, H., Network Analysis and Feedback Amplifier Design, Princeton, NJ: Van Nostrand, 1945.
[13] Bromberg, P., Matrix Methods in Theory of Relay and Pulse Control, Moscow: Nauka, 1967.
[14] Bure, E., and Rosenwasser, Y., On investigation of sensitivity of self-oscillating systems, Automat. Remote Contr., 7, 1974.


[15] Bykhovsky, M., Foundations of Dynamic Accuracy of Electrical and Mechanical Networks [in Russian], Moscow: Acad. Sc. USSR, 1958.
[16] Vavilov, A., Frequency Methods for Design of Nonlinear Systems [in Russian], Moscow: Energia, 1970.
[17] Vainberg, M., and Trenogin, V., Branching Theory for Solutions of Nonlinear Equations [in Russian], Moscow: Nauka, 1969.
[18] Vagner, G., Foundations of Operations Research [in Russian], Moscow: Mir, 1972.
[19] Van der Paul, B., and Bremer, H., Operational Calculus Based on Bilateral Laplace Transformation [in Russian], Moscow: IL, 1952.
[20] Varshney, R., and Perkins, V., Sensitivity of Sampled-Data Systems to Sampling Period, in Proc. of the 31th IFAC Symposium, Italy, 1973.
[21] Vishik, M., and Lyusternik, L., Solution of Some Disturbance Problems for Matrices and Self-Adjoint and Non-Self-Adjoint Differential Equations, UMN, v. XV-3, 1960.
[22] Vladimirov, V., Generalized Functions in Mathematical Physics [in Russian], Moscow: Nauka, 1976.
[23] Yusupov, R., Ed., Problems of Cybernetics. Sensitivity Theory and Its Application [in Russian], Moscow: Svyaz, 1977.
[24] Voronov, A., Foundation of Automatic Control Theory [in Russian], Moscow: Energia, 1966.
[25] Gabasov, R., and Kirillova, F., Methods of Linear Programming [in Russian], Minsk: BGU, 1977.
[26] Gabasov, R., and Kirillova, F., Methods of Optimization [in Russian], Minsk: BGU, 1975.
[27] Galperin, I., Introduction in Theory of Generalized Functions [in Russian], Moscow: IL, 1954.
[28] Gantmakher, F., Theory of Matrices [in Russian], Moscow: Nauka, 1966.
[29] Gardner, M., and Berns, J., Transients in Linear Systems [in Russian], Moscow: Gostekhizdat, 1951.
[30] Gass, S., Linear Programming (Methods and Applications) [in Russian], Moscow: Fizmatgiz, 1961.


[31] Gelfand, I., and Fomin, S., Calculus of Variations [in Russian], Moscow: Fizmatgiz, 1961.
[32] Geher, K., Theory of Sensitivity and Tolerance of Electronic Networks [in Russian], Moscow: Sov. Radio, 1973.
[33] Golshtein, E., and Yudin, D., New Directions in Linear Programming [in Russian], Moscow: Sov. Radio, 1966.
[34] Golshtein, E., Duality Theory in Mathematical Programming and Its Application [in Russian], Moscow: Nauka, 1971.
[35] Horowitz, A., Design of Feedback Systems [in Russian], Moscow: Sov. Radio, 1970.
[36] Gorodetskii, V., Zakharin, F., Ponomarev, V., and Yusupov, R., Direct and inverse problems of sensitivity theory, Trans. Acad. Sc. USSR, Techn. Cybernetics, 5, 1971.
[37] Gorodetskii, V., and Yusupov, R., Successive optimization method in identification problems, Trans. Acad. Sc. USSR, Techn. Cybernetics, 3, 1972.
[38] Gumovskii, I., Sensitivity analysis and Lyapunov stability, in Proc. of Int. Symposium on Sensitivity of Control Systems, Moscow: Nauka, 1966.
[39] Danskin, G., Maximin Theory [in Russian], Moscow: Sov. Radio, 1970.
[40] De Baker, W., Break conditions for sensitivity functions, in Proc. of Int. Symposium on Sensitivity of Control Systems, Moscow: Nauka, 1966.
[41] Demidovitch, B., Maron, I., and Shuvalova, E., Numerical Analysis Methods [in Russian], Moscow: Fizmatgiz, 1962.
[42] Demidovitch, B., Lectures on Mathematical Stability Theory [in Russian], Moscow: Nauka, 1967.
[43] Demyanov, V., and Malozemov, V., Introduction to Minimax [in Russian], Moscow: Nauka, 1972.
[44] Dynamics and Accuracy of Thermomechanical Systems [in Russian], Tula: TPI, 1972.
[45] Evlanov, L., Monitoring of Dynamic Systems [in Russian], Moscow: Nauka, 1972.


[46] Ermachenko, A., and Yusupov, R., Application of sensitivity functions to design of linear multivariable systems, Trans. Acad. Sc. USSR, Techn. Cybernetics, 2, 1976.
[47] Zoutendijk, G., Methods of Feasible Directions [in Russian], Moscow: IL, 1963.
[48] Quade, E., Analysis of Complex Systems [in Russian], Moscow: Sov. Radio, 1969.
[49] Coddington, E., and Levinson, N., Theory of Ordinary Differential Equations [in Russian], Moscow: IL, 1958.
[50] Kozlov, Yu., and Yusupov, R., Preset Self-Adjusting Systems [in Russian], Moscow: Nauka, 1969.
[51] Kokotovic, P., Method of sensitivity points in investigation and optimization of control systems, Autom. Remote Contr., 12, 1976.
[52] Kostyuk, V., Preset Gradient Self-Adjusting Systems [in Russian], Kiev: Tekhnika, 1969.
[53] Krasovsky, N., Some Problems of Stability Theory [in Russian], Moscow: Gostekhizdat, 1950.
[54] Krut’ko, P., Solution of identification problems using sensitivity theory, Trans. Acad. Sc. USSR, Techn. Cybernetics, 6, 1969.
[55] Kuzmin, P., Stability under parametric disturbances, PMM, I, 1957.
[56] Kuntsevitch, V., and Chehovoi, Yu., Nonlinear Control Systems with Pulse Frequency and Width Modulation [in Russian], Kiev: Tekhnika, 1970.
[57] Kuhtenko, A., Invariance Problem in Automatic Control [in Russian], Kiev: Tekhnika, 1970.
[58] Lancaster, K., Mathematical Economics [in Russian], Moscow: Sov. Radio, 1972.
[59] Lanne, A., Optimal Design of Linear Electrical Systems [in Russian], Moscow: Svyaz, 1969.
[60] Lancaster, P., Theory of Matrices [in Russian], Moscow: Nauka, 1978.
[61] Lur’e, A., Some Nonlinear Problems of Automatic Control Theory [in Russian], Moscow: Gostekhizdat, 1951.
[62] Malkin, I., Theory of Motion Stability [in Russian], Moscow: Gostekhizdat, 1952.
[63] Malkin, I., Some Problems of the Theory of Nonlinear Oscillations [in Russian], Moscow: Gostekhizdat, 1956.


[64] Mills, H., Marginal values of matrix games and linear programming problems, in Linear Inequalities and Related Problems [in Russian], Moscow: IL, 1959.
[65] Moiseev, N., Elements of Optimal Systems Theory [in Russian], Moscow: Nauka, 1975.
[66] Morosanov, I., Relay Extremal Systems [in Russian], Moscow: Nauka, 1964.
[67] Rosenwasser, Y., and Yusupov, R., Eds., Methods of Sensitivity Theory in Automatic Control [in Russian], Moscow: Energia, 1971.
[69] Ponomarev, V., and Litvinov, A., Eds., Foundations of Automatic Control and Regulation [in Russian], Moscow: Vysshaya Shkola, 1974.
[70] Pugachev, V., Ed., Foundations of Automatic Control [in Russian], Moscow: Fizmatgiz, 1963.
[71] Popov, E., and Lakota, N., Eds., Design of Tracking Systems [in Russian], Moscow: Mashinostroenie, 1978.
[72] Pervozvanskii, A., Mathematical Models in Production Control [in Russian], Moscow: Nauka, 1975.
[73] Petrov, B., and Krut’ko, P., Application of sensitivity theory to problems of automatic control, Trans. Acad. Sc. USSR, Techn. Cybernetics, 2, 1970.
[74] Petrov, B., and Starikova, M., Investigation of self-oscillations in automatic systems with logical units, Trans. Acad. Sc. USSR, Energetics and Automatics, 3, 1961.
[75] Petrov, B., and Rutkovskii, V., Double invariance of automatic control systems, Rep. Russ. Acad. Sc., 4, 1965.
[76] Plotnikov, V., Asymptotic Methods in Optimal Control Problems [in Russian], Odessa: OGU, 1976.
[77] Pontryagin, L., Ordinary Differential Equations [in Russian], Moscow: Fizmatgiz, 1961.
[78] Popov, E., On investigation of self-oscillating systems with logical units, Trans. Acad. Sc. USSR, Energetics and Automatics, 5, 1962.
[79] Popov, E., Theory of Linear Control Systems [in Russian], Moscow: Nauka, 1978.
[80] Pelpor, D., Ed., Design of Gyroscopic Systems [in Russian], Moscow: Vysshaya Shkola, 1977.


[81] Propoi, A., Sensitivity of optimal solutions to parameter variations, in Preprints of the 2nd IFAC Symposium “System Sensitivity and Adaptivity,” Dubrovnik, 1968.
[82] Radvik, B., Military Planning and System Analysis [in Russian], Moscow: MO USSR, 1972.
[83] Roberts, G., Special problems of construction of sensitivity models, in Proc. of Int. Symposium on Sensitivity of Automatic Control Systems, Moscow: Nauka, 1968.
[84] Rosenwasser, Y., General sensitivity equations for discontinuous systems, Autom. Remote Contr., 3, 1967.
[85] Rosenwasser, Y., On construction of sensitivity equations for discontinuous systems given by operator equations, Autom. Remote Contr., 5, 1969.
[86] Rosenwasser, Y., Oscillations of Nonlinear Systems: Method of Integral Equations [in Russian], Moscow: Nauka, 1969.
[87] Rosenwasser, Y., Periodically Nonstationary Control Systems [in Russian], Moscow: Nauka, 1973.
[88] Rosenwasser, Y., Lyapunov Indices in Linear Control Systems Theory [in Russian], Moscow: Nauka, 1977.
[89] Rosenwasser, Y., Sufficient conditions of applicability of the first approximation in problems of sensitivity theory, Autom. Remote Contr., 11, 1978.
[90] Rosenwasser, Y., On sensitivity investigation for autonomous oscillatory systems with respect to excitation frequency, Autom. Remote Contr., 3, 1980.
[91] Rosenwasser, Y., and Yusupov, R., Sensitivity equations for sampled-data control systems, Autom. Remote Contr., 4, 1969.
[92] Rosenwasser, Y., and Yusupov, R., Sensitivity models of nonlinear discontinuous automatic control systems, in Proc. 2nd All-Union Conference on Theory and Methods of Mathematical Simulation, Moscow: Nauka, 1969.
[93] Rosenwasser, Y., and Yusupov, R., General sensitivity equations and parametric invariance estimation for control systems, in Proc. All-Union Conf. on Invariance Theory, Moscow: Nauka, 1969.


[94] Rosenwasser, Y., and Yusupov, R., Sensitivity of Automatic Control Systems [in Russian], Moscow: Energia, 1969.
[95] Ruban, A., Identification of Nonlinear Dynamic Plants on the Basis of Sensitivity Algorithms [in Russian], Tomsk: Tomsk Univ., 1975.
[96] Sedov, L., Similarity and Dimensional Methods in Mechanics [in Russian], Moscow: Nauka, 1977.
[97] Sigorskii, V., and Petrenko, A., Algorithms for Analysis of Electronic Circuits [in Russian], Kiev: Tekhnika, 1970.
[98] Lanne, A., Ed., Synthesis of Active RC-Networks [in Russian], Moscow: Svyaz, 1975.
[99] Solodov, A., and Petrov, F., Linear Control Systems with Variable Parameters [in Russian], Moscow: Nauka, 1971.
[100] Solodovnikov, V., Foundations of Automatic Control Theory [in Russian], Moscow: Mashgiz, 1953.
[101] Stepanov, V., A Course in Differential Equations [in Russian], Moscow: Gostekhizdat, 1953.
[102] Stojic, M., and Siljak, D., Sensitivity of self-oscillations in nonlinear control systems, in Proc. Int. Symposium on Sensitivity of Automatic Control Systems, Moscow: Nauka, 1968.
[103] Stoker, J., Nonlinear Oscillations in Mechanical and Electrical Systems [in Russian], Moscow: IL, 1953.
[104] Teverovskii, V., On periodic mode of a relay system with variable delay, Autom. Remote Contr., 1, 1966.
[105] Tikhonov, A., and Arsenin, V., Solution Methods for Ill-Posed Problems [in Russian], Moscow: Nauka, 1974.
[106] Tomovich, R., Sensitivity Analysis of Dynamic Systems, New York: McGraw-Hill, 1963.
[107] Tomovich, R., and Vukobratovich, M., General Sensitivity Theory [in Russian], Moscow: Sov. Radio, 1972.
[108] Truxal, J., Design of Automatic Control Systems [in Russian], Moscow: Mashinostroenie, 1964.
[109] Utkin, V., Sliding Modes and Their Application in Systems with Variable Structure [in Russian], Moscow: Nauka, 1974.


[110] Wilkinson, J., The Algebraic Eigenvalue Problem [in Russian], Moscow: Nauka, 1970.
[111] Faddeev, D., and Faddeeva, V., Computational Methods of Linear Algebra [in Russian], Moscow: Fizmatgiz, 1960.
[112] Feldbaum, A., Foundations of Optimal Automatic Control Theory [in Russian], Moscow: Fizmatgiz, 1963.
[113] Filippov, A., Differential equations with discontinuous right-hand side, Math. Sbornik [in Russian], 1, 1960.
[114] Fikhtengoltz, G., A Course in Differential and Integral Calculus [in Russian], Moscow: Gostekhizdat, 1948.
[115] Fomin, A., Borisov, V., and Chermoshenskii, V., Tolerances in Radioelectronic Units [in Russian], Moscow: Sov. Radio, 1973.
[116] Hadley, G., Nonlinear and Dynamic Programming [in Russian], Moscow: Mir, 1967.
[117] Tsypkin, Ya., Theory of Linear Pulse Systems [in Russian], Moscow: Fizmatgiz, 1963.
[118] Tsypkin, Ya., Adaptation and Learning in Automatic Systems [in Russian], Moscow: Nauka, 1968.
[119] Tsypkin, Ya., and Popkov, Yu., Theory of Nonlinear Pulse Systems [in Russian], Moscow: Nauka, 1963.
[120] Tsypkin, Ya., Relay Automatic Systems [in Russian], Moscow: Nauka, 1974.
[121] Tsypkin, Ya., Foundations of Automatic Systems Theory [in Russian], Moscow: Nauka, 1977.
[122] Chang, S., Synthesis of Optimal Control Systems [in Russian], Moscow: Mashinostroenie, 1964.
[123] Tsypkin, Ya., Sensitivity of automatic control systems, in Proc. of the 1st Intern. Symposium on Sensitivity of Automatic Control Systems, Moscow: Nauka, 1968.
[124] Schwartz, L., Mathematical Methods in Physical Sciences [in Russian], Moscow: Mir, 1965.
[125] Elsgolts, L., and Norkin, S., Introduction to the Theory of Differential Equations with Deviating Arguments [in Russian], Moscow: Nauka, 1971.


[126] Yusupov, R., Utilization of redundancy for design of control systems with account for sensitivity of dynamic characteristics, in Proc. All-Union Symposium on Redundancy in Information Systems, Moscow: Nauka, 1969.
[127] Yusupov, R., and Zakharin, F., Parametric optimization of stochastic systems, Trans. Acad. Sc. USSR, Techn. Cybernetics, 4, 1971.
[128] Yusupov, R., and Ostov, Yu., Solution to problems of disturbance observation and identification by the inverse sensitivity method, in Problems of Cybernetics: Adaptive Systems, Moscow: Acad. Sc. USSR, 1974.
[129] Yusupov, R., and Sidorov, V., Some methods of computing sensitivity functions and their comparison, Izv. VUZ, Engineering, 7, 1970.
[130] Yusupov, R., and Chervo, V., A method of measuring sensitivity functions of amplitude and phase characteristics for linear systems, Patent of Russia N 219668, Invention Bulletin, 19, 1968.
[131] Yusupov, R., and Zakharin, F., Methods of sensitivity theory in identification problems for controlled plants, in Theory and Application of Adaptive Systems, Alma-Ata, 1971.
[132] Yakubovitch, V., and Starzhinskii, V., Linear Differential Equations with Periodic Coefficients and Their Applications [in Russian], Moscow: Nauka, 1972.
[133] Yanushevskii, R., Theory of Linear Optimal Multivariable Control Systems [in Russian], Moscow: Nauka, 1973.
