Systems Control Theory 9783110574951, 9783110574944

The book provides an up-to-date overview of modern control methods based on system models.




Xiangjie Liu Systems Control Theory

Also of Interest Fractional-Order Control Systems D. Xue, 2017 ISBN 978-3-11-049999-5, e-ISBN (PDF) 978-3-11-049797-7, e-ISBN (EPUB) 978-3-11-049719-9

Signals and Systems, Volumes 1+2 W. Zhang, 2017 ISBN 978-3-11-054409-1

Control Engineering J. Sun, 2018 ISBN 978-3-11-057326-8, e-ISBN 978-3-11-057327-5, e-ISBN (EPUB) 978-3-11-057336-7

Basic Process Engineering Control P. Agachi, M. Cristea, 2014 ISBN 978-3-11-028981-7, e-ISBN (PDF) 978-3-11-028982-4, e-ISBN (EPUB) 978-3-11-037701-9

Advanced Process Engineering Control P. Agachi, M. Cristea, 2016 ISBN 978-3-11-030662-0, e-ISBN (PDF) 978-3-11-030663-7, e-ISBN (EPUB) 978-3-11-038816-9

Xiangjie Liu

Systems Control Theory |

Author Prof. Xiangjie Liu China Science Publishing & Media Ltd. 16 Donghuangchenggen North Street 100717 Beijing China [email protected]

ISBN 978-3-11-057494-4 e-ISBN (PDF) 978-3-11-057495-1 e-ISBN (EPUB) 978-3-11-057527-9 Library of Congress Control Number: 2018014891 Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de. © 2018 Walter de Gruyter GmbH, Berlin/Boston; Science Press Typesetting: le-tex publishing services GmbH, Leipzig Printing and binding: CPI books GmbH, Leck Cover image: enot-poloskun/iStock/Getty Images www.degruyter.com

Preface

"Modern Control Theory" is a fundamental and compulsory course for students who major in automation. The course takes place in the sixth semester. Together with "Automatic Control Theory", taught in the fifth semester, it forms the core theoretical basis of automation. By this stage, the juniors already have an elementary knowledge of linear algebra, the Laplace transform and differential equations, and have laid a solid foundation in feedback control theory. This is therefore an opportune moment for students to begin learning modern control theory, which is widely used in many aspects of modern control engineering.

This book is the result of teaching an undergraduate course over the years. The overall contents of the book can be described as follows. Chapter 1 introduces system modeling with the state space representation. Chapter 2 gives an overview of the linear transformation of the state vector. Chapter 3 presents the solution of the state space equations. Chapter 4 covers two types of stability for linear systems, namely I/O stability and state-related stability. Chapter 5 focuses on controllability and observability, together with system decomposition and minimal realizations. Chapter 6 presents state feedback and observers. The reader's understanding is developed further by experimenting with MATLAB commands to build simulations of their own control applications. Several benchmark problems on power generation modeling and control have also been incorporated into this book.

It is with gratitude that we acknowledge the continued support of the National Natural Science Foundation of China (61673171, 61273144, 60974051) and the Beijing Higher Education and Teaching Reformation project (GJJG201409). We owe thanks to many colleagues and students. They frequently asked questions, pointed out problems, and thereby forced us to improve our work.

https://doi.org/10.1515/9783110574951-201

Contents

Preface
1 System Model
1.1 Introduction
1.2 Models of Systems
1.2.1 Differential Equation
1.2.2 Transfer Function
1.2.3 The State Space Model
1.3 Transition From One Mathematical Model to Another
1.3.1 From Differential Equation to Transfer Function for SISO Systems
1.3.2 From Transfer Function to Differential Equation for SISO Systems
1.3.3 From G(s) to g(t) and Vice Versa
1.3.4 From State Equations to Transfer Function Matrix
1.3.5 From Transfer Function Matrix to State Equations for SISO Systems
1.4 Summary
Appendix: Three Power Generation Models
Exercise
2 Linear Transformation of State Vector
2.1 Linear Algebra
2.2 Transform to Diagonal Form and Jordan Form
Exercise
3 Solution of State Space Model
3.1 Introduction
3.2 Solution of LTI State Equations
3.3 State Transfer Matrix
3.3.1 Properties
3.3.2 Calculating the State Transition Matrix
3.4 Discretization
3.5 Solution of Discrete Time Equation
3.6 Summary
Exercise
4 Stability Analysis
4.1 Introduction
4.2 Definition
4.3 Stability Criteria
4.3.1 Lyapunov's Second Method
4.3.2 State Dynamics Stability Criteria for Continuous Linear Systems
4.3.3 State Dynamics Stability Criteria for Discrete Systems
4.4 Summary
Exercise
5 Controllability and Observability
5.1 Introduction
5.2 Definition
5.2.1 Controllability
5.2.2 Observability
5.3 Criteria
5.3.1 Controllable Criteria
5.3.2 Controllable Examples
5.3.3 Observable Criteria
5.3.4 Observable Examples
5.4 Duality System
5.4.1 Definition
5.4.2 Properties of Duality Systems
5.5 Canonical Form
5.5.1 Controllability Canonical Form of Single Input Systems
5.5.2 Observability Canonical Form of Single Output Systems
5.5.3 Examples
5.5.4 Observability and Controllability Canonical Form of Multiple Input Multiple Output Systems
5.6 System Decomposition
5.6.1 Controllability Decomposition
5.6.2 Observability Decomposition
5.6.3 Controllability and Observability Decomposition
5.6.4 Minimum Realization
5.7 Summary
Exercise
6 State Feedback and Observer
6.1 Introduction
6.2 Linear Feedback
6.2.1 State Feedback
6.2.2 Output Feedback
6.2.3 Feedback From Output to ẋ
6.3 Pole Assignment
6.3.1 Sufficient and Necessary Condition for Arbitrary Pole Assignment
6.3.2 Methods to Assign the Poles of a System
6.3.3 Examples
6.4 State Estimator
6.4.1 Introduction
6.4.2 State Estimator
6.5 State Feedback Based on State Estimator
6.6 Summary
Appendix: State Feedback and Observer for Main Steam Temperature Control in Power Plant Steam Boiler Generation System
Exercise
Bibliography
Index

1 System Model

1.1 Introduction

In control theory research, the system model should be set up first. In this chapter, different kinds of models are discussed and some examples are given to show the reader how to set up a model of a system. The relationships between these model types are also described.

1.2 Models of Systems

A mathematical expression that appropriately relates the physical system quantities to the system components is called the mathematical model of a system. There are, basically, two types of system descriptions. One is the external description, called the input-output description. The other is the internal one, called the state space description. In the former, a system in operation involves the following three elements: the system's input (or excitation), the system itself, and the system's output (or response). This description only reveals the causal relationship between the external variables (the input and the output) without characterizing the internal structure. In the latter, the description is a class of mathematical models based on the analysis of the internal structure of the system; it is the typical approach of modern control theory. Correspondingly, there are two types of mathematical models of the system, which facilitate the system analysis (it is well known that, in order to analyze a system, the mathematical model must be available). The input-output description of the system will be introduced in Sections 1.2.1 and 1.2.2, in the differential equation and the transfer function form. The state space model will be presented in Section 1.2.3.

1.2.1 Differential Equation

The differential equation is the fundamental mathematical model of a system. This description includes all the linearly independent equations of a system, as well as the appropriate initial conditions. The differential equation method is demonstrated by the following examples.

Example 1.1. Consider the network shown in Figure 1.1, where R, C, and L stand for the resistance, the capacitance and the inductance of the circuit respectively. Derive the network's differential equation mathematical model.

https://doi.org/10.1515/9783110574951-001

Fig. 1.1: RLC network (a series RLC circuit driven by a voltage source u(t), with inductor current i_L(t) and capacitor voltage u_C(t)).

Solution. Applying Kirchhoff's voltage law,

L \frac{di}{dt} + \frac{1}{C} \int_0^t i \, dt + Ri = v(t) .   (1.1)

The above integro-differential equation constitutes a mathematical description of the network. This model is a second order differential equation. Two appropriate initial conditions should be given to complete the description. The inductor's current i_L(t) and the capacitor's voltage v_C(t) at the instant that the switch closes (at t = 0) are adopted as initial conditions: i_L(0) = I_0, v_C(0) = V_0, where I_0 and V_0 are given constants. The integro-differential equation and the two initial conditions thus constitute a complete description of the network shown in Figure 1.1.

Example 1.2. Consider the network shown in Figure 1.2, where R, C and L stand for the resistance, the capacitance and the inductance of the circuit respectively. We assign the current of the inductance L_x (x = 1, 2) as i_x (x = 1, 2), and the voltage of the capacitance C_x (x = 1, 2) as v_x (x = 1, 2). Derive the network's differential equation mathematical model.

Solution. The differential equation method for describing this network is based on three differential equations, which arise by applying Kirchhoff's voltage law. These three loop equations are:

-L_1 \frac{di_1}{dt} + u_{C1} + R_1 i_3 = 0 ,
-u_{C1} + L_2 \frac{di_2}{dt} + R_2 i_4 = 0 ,
L_2 \frac{di_2}{dt} - L_1 \frac{di_1}{dt} - u_{C2} = 0 .   (1.2a)


Fig. 1.2: A three-loop network (a current source i feeds three loops l1, l2, l3 containing R_1, R_2, L_1, L_2, C_1 and C_2; the branch currents are i_1, i_2, i_3, i_4 and the capacitor voltages are u_{C1} and u_{C2}).

By applying Kirchhoff's current law, we have:

i + i_3 + i_1 - C_2 \frac{du_{C2}}{dt} = 0 ,
C_1 \frac{du_{C1}}{dt} + i_1 + i_2 = 0 ,
C_2 \frac{du_{C2}}{dt} + i_2 - i_4 = 0 .   (1.2b)

The initial conditions are v_{C1}(0) = v_{C1,0}, v_{C2}(0) = v_{C2,0} and i_{L1}(0) = i_{L1,0}, i_{L2}(0) = i_{L2,0}.

Example 1.3. Consider the mechanical system shown in Figure 1.3, where y, K, m and B are the position of the mass, the spring's constant, the mass, and the friction coefficient respectively. Derive the system's differential equation mathematical model.

Solution. By using d'Alembert's law of forces, the following differential equation is obtained:

m \frac{d^2 y}{dt^2} + B \frac{dy}{dt} + Ky = f(t) .   (1.3)

The initial conditions of the above equation are the distance y(t) and the velocity v(t) = dy/dt at the instant t = 0, i.e., at the instant when the external force f(t) is applied. Therefore, the initial conditions are:

y(0) = Y_0   and   v(0) = \left[ \frac{dy}{dt} \right]_{t=0} = V_0 ,

where Y0 and V0 are given constants. The differential equations and the two initial conditions constitute the complete description of the mechanical system shown in Figure 1.3. Remark 1.2.1. A differential equation is a description in the time domain, which can be applied to many categories of systems, such as linear and nonlinear systems, time invariant and time variant systems with lumped and distributed parameters, and zero and nonzero initial conditions, among many others.


Fig. 1.3: A spring and a mass (mass m attached to a spring K and a damper B, displaced by y(t) under the external force f(t)).
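Equation (1.3) can also be explored numerically. The following MATLAB fragment is a minimal simulation sketch of (1.3) for a unit step force; the parameter values m = 1, B = 0.5, K = 2 and the zero initial conditions are assumed here purely for illustration and are not taken from the text.

m = 1; B = 0.5; K = 2;                             % assumed illustrative parameters
f = @(t) 1;                                        % unit step external force
rhs = @(t,x) [x(2); (f(t) - B*x(2) - K*x(1))/m];   % states: x1 = y, x2 = dy/dt
[t,x] = ode45(rhs, [0 20], [0; 0]);                % zero initial conditions Y0 = V0 = 0
plot(t, x(:,1)), xlabel('t'), ylabel('y(t)')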

1.2.2 Transfer Function

In contrast to the differential equation, which is a description in the time domain, the transfer function model is a description in the Laplace domain and holds only for a restricted category of systems, i.e., for linear time invariant (LTI) systems with zero initial conditions. The transfer function is designated by G(s) and is defined as follows.

Definition. The transfer function G(s) of a linear, time invariant system with zero initial conditions is the ratio of the Laplace transform of the output y(t) to the Laplace transform of the input u(t), i.e.,

G(s) = \frac{\mathcal{L}\{y(t)\}}{\mathcal{L}\{u(t)\}} = \frac{Y(s)}{U(s)} .   (1.4)

The introductory examples used in Section 1.2.1 will also be used for the derivation of their transfer functions.

Example 1.4. Consider the network shown in Figure 1.1. Derive the transfer function G(s) = I(s)/V(s).

Solution. Figure 1.1, in the Laplace domain and with zero initial conditions I_0 and V_0, is shown in Figure 1.4. From Kirchhoff's voltage law,

LsI(s) + RI(s) + \frac{1}{Cs} I(s) = V(s) .   (1.5)

The transfer function is:

G(s) = \frac{I(s)}{V(s)} = \frac{I(s)}{\left[ Ls + R + \frac{1}{Cs} \right] I(s)} = \frac{Cs}{LCs^2 + RCs + 1} .   (1.6)

Fig. 1.4: RLC circuit in the Laplace domain (the source V(s) drives the impedances Ls, R and 1/(Cs), with loop current I(s)).

When the voltage V_R(s) across the resistor is chosen as the output, the transfer function becomes:

G(s) = \frac{V_R(s)}{V(s)} = \frac{RI(s)}{V(s)} = \frac{RCs}{LCs^2 + RCs + 1} .   (1.7)

Example 1.5. Consider the electrical network shown in Figure 1.2. Determine the transfer function G(s) = I_2(s)/V(s).

Solution. This network in the Laplace domain, with zero initial conditions, is shown in Figure 1.5. The equations for the two loops can be expressed as:

\left[ R_1 + \frac{1}{Cs} \right] I_1(s) - \frac{1}{Cs} I_2(s) = V(s) ,   (1.8a)
-\frac{1}{Cs} I_1(s) + \left[ R_2 + Ls + \frac{1}{Cs} \right] I_2(s) = 0 .   (1.8b)

Fig. 1.5: A two-loop network (V(s) drives loop current I_1(s) through R_1 and the shared branch 1/(Cs); the second loop, with current I_2(s), contains R_2 and Ls).

Equation (1.8b) yields:

I_1(s) = \left[ LCs^2 + R_2 Cs + 1 \right] I_2(s) .   (1.9)

Substituting equation (1.9) into equation (1.8a), we get:

(R_1 Cs + 1)(LCs^2 + R_2 Cs + 1) I_2(s) - I_2(s) = Cs V(s) .   (1.10)

Hence,

G(s) = \frac{I_2(s)}{V(s)} = \frac{Cs}{(R_1 Cs + 1)(LCs^2 + R_2 Cs + 1) - 1} = \frac{1}{R_1 LCs^2 + (R_1 R_2 C + L)s + R_1 + R_2} .   (1.11)


Example 1.6. Consider the mechanical system shown in Figure 1.3. Determine the transfer function G(s) = Y(s)/F(s). Solution. This system, in the Laplace domain and with zero initial conditions, is shown in Figure 1.6.

Fig. 1.6: A spring and a mass in the Laplace domain (the force F(s) acts on the impedance elements ms^2, Bs and K, producing the displacement Y(s)).

Using d'Alembert's law of forces, we get:

ms^2 Y(s) + Bs Y(s) + K Y(s) = F(s) .   (1.12)

The transfer function is:

G(s) = \frac{Y(s)}{F(s)} = \frac{1}{ms^2 + Bs + K} .   (1.13)

Remark 1.2.2. In the above examples, it can be seen that the transfer function G(s) is the ratio of two polynomials in the Laplace domain. In general, G(s) has the following form:

G(s) = \frac{\beta_m s^m + \beta_{m-1} s^{m-1} + \cdots + \beta_1 s + \beta_0}{s^n + \alpha_{n-1} s^{n-1} + \cdots + \alpha_1 s + \alpha_0} = K \frac{\prod_{i=1}^{m} (s + z_i)}{\prod_{i=1}^{n} (s + p_i)} ,   (1.14)

where -p_i (i = 1, 2, . . . , n) are the roots of the denominator, which are called the poles of G(s), and -z_i are the roots of the numerator, which are called the zeros of G(s). Poles and zeros (particularly the poles) play a significant role in the behavior of a system.
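As a quick illustration of poles and zeros, the MATLAB Control System Toolbox can factor a transfer function such as (1.7) numerically. The sketch below is not part of the original text; the component values R = 1, L = 0.5 and C = 0.1 are assumed only for illustration.

R = 1; L = 0.5; C = 0.1;                 % assumed illustrative values
G = tf([R*C 0], [L*C R*C 1]);            % G(s) = RCs/(LCs^2 + RCs + 1), cf. (1.7)
p = pole(G)                              % poles: roots of the denominator
z = zero(G)                              % zeros: roots of the numerator (here s = 0)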

1.2.3 The State Space Model

The state space model is a description in the time domain, which may be applied to a very wide category of systems, such as linear and nonlinear systems, time invariant and time variant systems, systems with nonzero initial conditions, etc. The state of a system refers to the past, present, and future of the system. From a mathematical point of view, the state of a system is expressed by its state variables. Usually, a system is described by a finite number of state variables, which are designated by x_1(t), x_2(t), . . . , x_n(t) and are defined as follows.

1.2.3.1 Definition

The state variables x_1(t), x_2(t), . . . , x_n(t) of a system are defined as a (minimum) number of variables such that, if we know the following, the determination of the system's states for t > t_0 is guaranteed:
(1) their values at a certain instant t_0
(2) the input of the system for t ≥ t_0
(3) the mathematical model, which relates the inputs, the state variables, and the system itself

Consider a system with multiple inputs and multiple outputs (MIMO), as shown in Figure 1.7.

Fig. 1.7: System with multiple inputs and multiple outputs (MIMO): inputs u_1(t), . . . , u_m(t), outputs y_1(t), . . . , y_p(t), and state variables x_1(t), . . . , x_n(t).

The input vector is designated by u(t) and has the form:

u(t) = \begin{bmatrix} u_1(t) \\ u_2(t) \\ \vdots \\ u_m(t) \end{bmatrix} ,   (1.15)

where m is the number of inputs. The output vector is designated by y(t) and has the form:

y(t) = \begin{bmatrix} y_1(t) \\ y_2(t) \\ \vdots \\ y_p(t) \end{bmatrix} ,   (1.16)

where p is the number of outputs. The state vector x(t) has the form:

x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix} ,   (1.17)

where n is the number of state variables. The state equations are a set of n first order differential equations, which relate the input vector u(t) to the state vector x(t) and have the form:

\dot{x}(t) = f[x(t), u(t)] ,   (1.18)

where f(⋅) is a column with n elements. The function f(⋅), in general, is a complex nonlinear function of x(t) and u(t). Note that equation (1.18) is a set of dynamic equations. The output vector y(t) of the system is related to the input vector u(t) and the state vector x(t) as follows:

y(t) = g[x(t), u(t)] ,   (1.19)

where g(⋅) is a column with p elements. Relation (1.19) is called the output equation. The function g(⋅) is generally a complex nonlinear function of x(t) and u(t). Note that equation (1.19) is a set of algebraic (nondynamic) equations. The initial conditions of the state space equation (1.18) are the values of the elements of the state vector x(t) for t = t_0, denoted as:

x(t_0) = x_0 = \begin{bmatrix} x_1(t_0) \\ x_2(t_0) \\ \vdots \\ x_n(t_0) \end{bmatrix} .   (1.20)

The state space equation (1.18), the output equation (1.19), and the initial conditions (1.20), i.e., the following equations, constitute the description of a dynamic system in the state space:

\dot{x}(t) = f[x(t), u(t)] ,   (1.21a)
y(t) = g[x(t), u(t)] ,   (1.21b)
x(t_0) = x_0 .   (1.21c)

Since the dynamic state equation (1.21a) plays a dominant role in equations (1.21), all three equations in (1.21) will be called, for simplicity, the state equations.
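As a small illustration of the form (1.21), added here and not taken from the original text, consider a pendulum of length l driven by a normalized input u, with x_1 the angle and x_2 the angular velocity. Its state space description is

\dot{x}_1 = x_2 , \qquad \dot{x}_2 = -\frac{g}{l} \sin x_1 + u , \qquad y = x_1 ,

so that f[x, u] = [x_2, -(g/l)\sin x_1 + u]^T is nonlinear in the state, while g[x, u] = x_1 is a simple algebraic output map.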


When the system is a linear stationary one, the state space model is:

\dot{x}(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t) ,   (1.22)

where A is the system matrix, B is the input (control) matrix, C is the output matrix, and D is the direct transfer matrix. The state equations (1.21) are, in the field of automatic control, the modern method of system description. Thus, the state space model relates the following four elements: the input, the system, the state variables, and the output. In contrast, the differential equations and the transfer function relate three elements: the input, the system and the output, wherein the input is related to the output via the system directly (i.e., without giving information about the state of the system). It is exactly for this reason that these two models are called input-output models.

1.2.3.2 The Construction of the State Space Model

There are three ways to set up the state space model, which can be based on:
(1) the transformation of the block diagram
(2) first principles modeling
(3) the input-output model

Each way will be described in detail and examples will be given.

The Transformation of the Block Diagram

Example 1.7. The system block diagram is shown in Figure 1.8 (a). u is the input and y is the output. Try to deduce the state space equations.

Fig. 1.8: System block diagram: (a) the overall block diagram; (b) the structure of each block.


Solution. The structure of each block is shown in Figure 1.8 (b); thus the relevant equations can be derived as follows. The state equations:

\dot{x}_1 = \frac{K_3}{T_3} x_2
\dot{x}_2 = -\frac{1}{T_2} x_2 + \frac{K_2}{T_2} x_3   (1.23)
\dot{x}_3 = -\frac{1}{T_1} x_3 - \frac{K_1 K_4}{T_1} x_1 + \frac{K_1}{T_1} u .

The output equation: y = x_1. The above equations can be rewritten in vector-matrix form:

\dot{x} = \begin{bmatrix} 0 & \frac{K_3}{T_3} & 0 \\ 0 & -\frac{1}{T_2} & \frac{K_2}{T_2} \\ -\frac{K_1 K_4}{T_1} & 0 & -\frac{1}{T_1} \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ \frac{K_1}{T_1} \end{bmatrix} u   (1.24)

y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x .

a

b

L1 (current source)

uC1 +

i3

+ i

R1



Fig. 1.9: A three-loop network.

i2

l1

C1

l3 c

L2

i4

l2

R2

1.2 Models of Systems |

11

Solution. The differential equations method for describing this network is based on the three differential equations, which arise by applying Kirchhoff ’s voltage law. These three-loop equations are: −L1

di1 + u C1 + R1 i3 = 0 , dt

−u C1 + L2

di2 + R2 i4 = 0 , dt

di2 di1 − L1 − u C2 = 0 . dt dt By applying Kirchhoff ’s current law, we get: L2

i + i3 + i1 − C 2

Define:

du C2 =0, dt

C1

du C1 + i1 + i2 = 0 , dt

C2

du C2 + i2 − i4 = 0 . dt

u C1 = x1 ,

u C2 = x2 ,

i1 = x3 ,

i2 = x4 .

The system’s state space equation can be expressed as: 0 ẋ 1 0 ẋ 2 ( )=( 1 ẋ 3 L1 ẋ 4 1 ( L2

(

0

− C11

− C2 (R11 +R2 )

R1 C 2 (R 1 +R 2 ) R1 R2 − L1 (R 1 +R 2 ) R1 R2 − L2 (R 1 +R 2 )

− L1 (RR11+R2) − L2 (RR12+R2)

y1 uC 1 ) = ( 1) = ( y2 u C2 0

0 1

0 0

− C11

0 x1 R 1 − C2 (RR12+R2 ) )(x2 )+( C2 (R1 +R2 ) ) i , R1 R2 R1 R2 x3 − L1 (R − L1 (R 1 +R 2 ) 1 +R 2 ) x4 R1 R2 R1 R2 − − L2 (R L 2 (R 1 +R 2 ) 1 +R 2 ) ) (1.25)

x1 x2 0 )( ) . x3 0 x4

Example 1.9. Consider the mechanical system shown in Figure 1.10, where y, K, m and B are the position of the mass, the spring’s constant, the mass, and the friction coefficient respectively. In the role of external forces f , derive the system’s state space representation where y1 , y2 are the outputs. Solution. Choose the position y1 , y2 and the velocity v1 , v2 of the mass M1 , M2 as the state variables: x2 = y2 , x1 = y1 , x3 = v1 =

dy1 , dt

x4 = v2 =

dy2 . dt

12 | 1 System Model

K1

K2 f M2

M1

B1

B2 v1

v2

y1

y2

Fig. 1.10: The mass-spring-damper system.

By using Newton’s laws of motion, for M1 we get: M1

dv1 dy1 dy2 dy1 = K2 (y2 − y1 ) + B2 ( − ) − K1 y1 − B1 . dt dt dt dt

M2

dv2 dy2 dy1 = f − K2 (y2 − y1 ) − B2 ( − ) . dt dt dt

For M2 :

With u = f , we can get: ẋ 1 = x3 ẋ 2 = x4 ẋ 3 = − ẋ 4 =

1 K2 1 B2 (K1 + K2 )x1 + x2 − (B1 + B2 )x3 + x4 M1 M1 M1 M1

K2 K2 B2 B2 x1 − x2 + x3 − x4 . M2 M2 M2 M2

The above equations can be expressed in a compact form: 0 ẋ 1 0 ẋ 2 ( ) = (− 1 (K + K ) 1 2 ẋ 3 M1 K2 ẋ 4 M2

0 0 K2 M1 K2 −M 2

1 0 − M11 (B1 + B2 ) B2 M2

0 1

0 x1 0 x2 B2 ) ( ) + ( )f . 0 x3 M1 1 x4 − B2 M2 M2

(1.26) The output equation is:

y1 1 ( )=( y2 0

0 1

0 0

x1 x2 0 )( ) . x3 0 x4

1.2 Models of Systems

| 13

Example 1.10. Consider a cart with an inverted pendulum hinged on top of it, as shown in Figure 1.11. For simplicity, the cart and the pendulum are assumed to move in only one plane, while the friction, the mass of the stick, and the gust of wind are disregarded. The problem is to maintain the pendulum at the vertical position.

 mg l

H u

V

M

y

Fig. 1.11: The inverted pendulum system.

Suppose H and V are, respectively, the horizontal and vertical forces exerted by the cart on the pendulum as shown. The application of Newton’s law to the linear movements yields: d2 y =u−H, dt2 d2 H = m 2 (y + l sin θ) = m ÿ + ml θ̈ cos θ − ml(θ)̇ 2 sin θ , dt d2 mg − V = m 2 (l cos θ) = ml[−θ̈ sin θ − (θ)̇ 2 cos θ] . dt M

The application of Newton’s law to the rotational movement of the pendulum around the hinge yields: mgl sin θ = ml θ̈ ⋅ l + m yl̈ cos θ , sin θ = θ ,

cos θ = 1 ,

mg = V , M ÿ = u − m ÿ − ml θ̈ ,

gθ = l θ̈ + ÿ ,

which imply: M ÿ = u − mgθ , Ml θ̈ = (M + m)gθ − u .

14 | 1 System Model

Define: x2 = ẏ ,

x1 = y ,

x3 = θ ,

x4 = θ̇ .

Then the state space model can be derived as: 0 ẋ 1 [ẋ ] [0 [ 2] [ [ ]=[ [ẋ 3 ] [0 [ẋ 4 ] [0

1 0 0 0

y = [1

0

0 −mg M

0 (M+m)g Ml

0

0 0] ] ] 1] 0]

x1 0 [x ] [ 1 ] [ 2] [ M ] [ ]+[ ]u , [x3 ] [ 0 ] −1 [x4 ] [ Ml ]

(1.27)

0] x .

Example 1.11. Consider a separately excited DC motor system, as shown in Figure 1.12. In the diagram, R and L stand for the resistance and inductance of the armature loop respectively. J is the inertia of the rotating part, and B is the viscous friction coefficient. Develop the state space equations when the armature voltage u is chosen as the control variable.

R

L



i u

J

M B

Fig. 1.12: The system of a separately excited DC motor.

Since the inductance L and the rotating inertia J are energy storage elements, their corresponding physical variables, e.g., the current i and the rotating angular speed ω, are independent of each other. They could be chosen as state variables: x1 = i , x2 = ω , then as:

dx1 di dx2 dω = , = . dt dt dt dt Using the circuit equations of the armature circuit, L

di + Ri + e = u . dt

According to the dynamics equations, we have: J

dω + Bω = Ka i . dt

1.2 Models of Systems

| 15

According to the electromagnetic induction relationship, we get: e = Kb ω , where e is the back electromotive force and Ka , Kb are the torque constant and back electromotive force respectively. According to the three equations above, the model of the system may be rewritten as: di R Kb 1 =− i+− ω+ u, dt L L L dω Ka B = i− ω. dt J J By putting x1 = i, x2 = ω into the above equations, we get: (

−R ẋ 1 ) = ( KaL ẋ 2 J

− KLb − BJ

)

(

1 x1 ) + (L)u . x2 0

If the angle speed ω is chosen as the output, it leaves: y = x2 = (0

1) (

x1 ) x2

If the angle θ is chosen as the output, then the above two state variables are not enough to represent the dynamics of the system, and another state variable, x3 , should be introduced: x3 = θ . So

ẋ 3 = θ̇ = x2 .

The state equation is: − RL ẋ 1 (ẋ 2 ) = ( KJa ẋ 3 0

− KLb

0

1 x1 L 0) ( x 2 ) + ( 0 ) u . x3 0 0

− BJ 1

(1.28)

The output equation is: y = x3 = (0

0

x1 1) (x2 ) . x3

The Input-Output Model When a system is created by the input-output model, i.e., transfer function or differential equation, the state space model can be derived with the input-output model. This process is called realization. Based on different types of the input-output model, different algorithms can be adopted for realization. The detailed procedure can be found in Section 1.3.5.

16 | 1 System Model

1.3 Transition From One Mathematical Model to Another As we know, every mathematical model has advantages and disadvantages. To use the advantages of all mathematical models, one must have the flexibility of transition from one model to another. This issue of transition is, obviously, of great practical and theoretical importance. In the following, we present some transition methods.

1.3.1 From Differential Equation to Transfer Function for Single-input-single-output Systems Case 1. The Righthand Side of the Differential Equation Does Not Involve Derivatives Consider a single-input-single-output (SISO) system described by the following differential equation: y(n) + α n−1 y(n−1) + ⋅ ⋅ ⋅ + α 1 y(1) + α 0 y = β 0 u , (1.29) where all the system’s initial conditions are assumed to be zero, i.e., y(k) (0) = 0, for k = 1, 2, . . . , n − 1. Applying the Laplace transform to equation (1.29) can result in: s n Y(s) + α n−1 s n−1 Y(s) + ⋅ ⋅ ⋅ + α 1 sY(s) + α 0 Y(s) = β 0 U(s) . Hence, the transfer function is given by: G(s) =

β0 Y(s) . = n n−1 U(s) s + α n−1 s + ⋅ ⋅ ⋅ + α1 s + α0

(1.30)

Case 2. The Righthand Side of the Differential Equation Involves Derivatives Consider a SISO system described by the differential equation: y(n) + α n−1 y(n−1) + ⋅ ⋅ ⋅ + α 1 y(1) + α 0 y = β m u (m) + ⋅ ⋅ ⋅ + β 1 u (1) + β 0 u ,

(1.31)

where m < n and all initial conditions are assumed to be zero, i.e., y(k) (0) = 0, for k = 0, 1, . . . , n − 1. We can determine the transfer equation (1.31) as follows: suppose z(t) is the solution of equation (1.29), with β 0 = 1. When the superposition principle is used, the solution y(t) of equation (1.31) will be obtained: y(t) = β m z(m) + β m−1 z(m−1) + ⋅ ⋅ ⋅ + β 1 z(1) + β 0 z .

(1.32)

By applying the Laplace transformation to equation (1.32), we obtain: Y(s) = β m s m Z(s) + β m−1 s m−1 Z(s) + ⋅ ⋅ ⋅ + β 1 sZ(s) + β 0 Z(s) .

(1.33)

Here, we have set z(k) (0) = 0, for k = 0, 1, . . . , n − 1. When β 0 = 1, the solution of equation (1.29) is z(t) = y(t). Here, it is assumed that all initial conditions of y(t),

1.3 Transition From One Mathematical Model to Another | 17

hence, of z(t), are zero. In equation (1.30), when β 0 = 1, we have: Z(s) = [

1 ] U(s) . s n + α n−1 s n−1 + ⋅ ⋅ ⋅ + α 1 s + α 0

(1.34)

By substituting equation (1.34) into (1.33), the transfer function G(s) of the differential equation (1.31) is obtained as: G(s) =

Y(s) β m s m + β m−1 s m−1 + ⋅ ⋅ ⋅ + β 1 s + β 0 . = U(s) s n + α n−1 s n−1 + ⋅ ⋅ ⋅ + α 1 s + α 0

(1.35)

Remark 1.3.1. The transfer function G(s), given by equation (1.35), can be easily derived from equation (1.31) if we set s k in place of the kth derivative and replace y(t) and u(t) with Y(s) and U(s), respectively. That is, we can derive equation (1.35) by replacing y(k) (t) with s k Y(s), and u (k) (t) with s k U(s) in equation (1.31).

1.3.2 From Transfer Function to Differential Equation for SISO Systems Suppose a SISO system is described by equation (1.35). Then, working backwards using the method given in Remark 1.3.1, the differential equation (1.31) can be constructed by substituting s k with the kth derivative and Y(s) and U(s) with y(t) and u(t), respectively.

1.3.3 From G(s) to g(t) and Vice Versa The matrices G(s) and g(t) are related through the Laplace transform: L{g(t)} = G(s)

or

g(t) = L−1 {G(s)} .

(1.36)

1.3.4 From State Equations to Transfer Function Matrix Consider a system described by the following state equations: ̇ = Ax(t) + Bu(t) x(t) y(t) = Cx(t) + Du(t) x(t0 ) = x(0) = x0 . Take the Laplace transformation to both sides of equation (1.37): sx(s) − x(0) = Ax(s) + Bu(s) y(s) = Cx(s) + Du(s) x(s) = (sI − A)−1 x(0) + (sI − A)−1 Bu(s) y(s) = C(sI − A)−1 x(0) + C(sI − A)−1 Bu(s) + Du(s) .

(1.37)

18 | 1 System Model

For zero initial condition, we have: y(s) = [C(sI − A)−1 B + D] u(s) . Then the system’s transfer function matrix G(s) is given by the relation: G(s) = C(sI − A)−1 B + D .

(1.38)

Example 1.12. Derive the transfer function of the following state space equation. 0 0 ẋ = ( 0 0 1 y=( 0

0 0 0 0 0 1

0 0 1 1 )x +( 1 0 0 −1

1 0 −1 0 0 0

1 1 )u , 0 −2

0 )x . 0

Solution. 0 0 A=( 0 0

0 0 0 0

1 0 −1 0

0 1 ) , 0 −1

0 1 B=( 1 0

(sI − A)−1

1 1 ) , 0 −2

1 s

0

1 s(s+1)

0 =( 0 0

1 s

0

0 0

1 s+1

1 C=( 0

0

0 1 s(s+1) )

0

1 s+1

The result can be obtained, according to (1.38): 1

G(s) = [ s(s+1) 1 s

1 s ] s−1 s(s+1)

.

MATLAB can be adopted to compute this equation. Type: a=[0 0 1 0;0 0 0 1;0 0 -1 0;0 0 0 -1]; b=[0 1;1 1;1 0;0 -2]; c=[1 0 0 0;0 1 0 0]; d=[0 0;0 0]; [N1,d1]=ss2tf(a,b,c,d,1) [N2,d2]=ss2tf(a,b,c,d,2) which yields

0 1

.

0 0

0 ) , 0

D=0,

1.3 Transition From One Mathematical Model to Another | 19

N1 = 0 0 d1 =

0 1.0000 1

N2 = 0 0 d2 =

2

1.0000 2.0000 1

1.0000 1.0000 1

2

0

0.0000 0

1.0000 -1.0000

0 0.0000

0

2.0000 0.0000 1

1.0000 1.0000

0

0

Thus, the transfer matrix is: s 2 +s

s 4 +2s 3 +s 2 G(s) = [ s 3 +2s 2 +s

[ s4 +2s3 +s2

s 3 +2s 2 +s s 4 +2s 3 +s 2 ] s 3 −s s 4 +2s 3 +s 2 ]

.

Simplifying yields: 1

G(s) = [ s(s+1) 1 s

1 s ] s−1 s(s+1)

.

Here, [N1,d1]=ss2tf(a,b,c,d,1) computes the transfer matrix from the first input to all outputs, e.g., the first column of G(s). N1 is the numerator coefficient of the first column of G(s), d1 is the denominator coefficient of the first column of G(s). In a similar way, [N2,d2]=ss2tf(a,b,c,d,2) computes the transfer matrix from the second input to all outputs.
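Conversely, a state space realization can be recovered from the transfer function data with tf2ss. The fragment below is a sketch added for illustration (it is not in the original text); it reuses the N1 and d1 computed above, i.e., the first column of G(s).

[A1,B1,C1,D1] = tf2ss(N1, d1)     % one realization of the first column of G(s)
% A1, B1, C1, D1 generally differ from the original a, b(:,1), c, d,
% but they describe the same input-output behavior, which can be
% confirmed by converting back:
[N1check, d1check] = ss2tf(A1, B1, C1, D1, 1)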

1.3.5 From Transfer Function Matrix to State Equations for SISO Systems

The transition from G(s) to state equations is the well known problem of state space realization. This is, in general, a difficult problem and has been (and still remains) a topic of study. In the following, we will present some introductory results regarding this problem. Imagine a system described by a scalar transfer function of the form:

G(s) = \frac{Y(s)}{U(s)} = \frac{\beta_{n-1} s^{n-1} + \beta_{n-2} s^{n-2} + \cdots + \beta_1 s + \beta_0}{s^n + \alpha_{n-1} s^{n-1} + \cdots + \alpha_1 s + \alpha_0} ,   (1.39)

or, equivalently, by the differential equation: y(n) + α n−1 y(n−1) + ⋅ ⋅ ⋅ + α 1 y(1) + α 0 y = β n u (n) + β n−1 u (n−1) ⋅ ⋅ ⋅ + β 1 u (1) + β 0 u . (1.40) Equation (1.40) can be expressed in the form of two equations as follows: z(n) + α n−1 z(n−1) + ⋅ ⋅ ⋅ + α 1 z(1) + α 0 z = u , y(t) = β n−1 z

(n−1)

+ ⋅ ⋅ ⋅ + β1 z

(1)

+ β0 z .

(1.41) (1.42)


Suppose z(t) is the solution of equation (1.41). Then, the solution of equation (1.40) will be given by equation (1.42). The state variables x1 , x2 , . . . , x n are defined as follows: x1 (t) = z(t) , (1)

x2 (t) = z(1) (t) = x1 (t) , (1)

x3 (t) = z(2) (t) = x2 (t) ,

(1.43)

.. . (1)

x n (t) = z(n−1) (t) = x n−1 (t) . Substituting equation (1.42) into equation (1.41) can result in: ẋ n (t) = −α n−1 x n (t) − ⋅ ⋅ ⋅ − α 1 x2 (t) − α 0 x1 (t) + u(t) .

(1.44)

Also, substituting equations (1.43) into equation (1.42) can result in: y(t) = β n−1 x n (t) + β n−2 x n−1 (t) + ⋅ ⋅ ⋅ + β 1 x2 (t) + β 0 x1 (t) .

(1.45)

Equations (1.43) to (1.45) can be expressed in a matrix form: ̇ = Ax(t) + bu(t) x(t)

(1.46)

y(t) = cT x(t) , where xT = (x1 , x2 , . . . , x n ) and: 0 0 .. . 0 [−α 0

[ [ [ [ A=[ [ [ [

1 0 .. . 0 −α 1

0 [ ] [0] [ ] [ ] b = [ ... ] , [ ] [ ] [0] [1] β0 [ ] [ β1 ] [ ] [ ] c = [ ... ] . [ ] [β ] [ n−2 ] [β n−1 ]

0 1 .. . 0 −α 2

0 0 .. . 0 −α 3

... ... ... ...

0 ] 0 ] ] .. ] , . ] ] ] 1 ] −α n−1 ]

(1.47)


Hence, equations (1.46) constitute the state equations’ description of the transfer function (1.39). Due to the special form of matrix A and vector b, we say that the state equations (1.46) are in phase canonical form, while the state variables are called phase variables. Phase variables are, in general, state variables, which are defined according to equations (1.43); i.e., every state variable is the derivative of the previous one. In particular, the special form of matrix A and vector b is characterized as follows: If the first column and the last row in matrix A are deleted, then a (n − 1) × (n − 1) unit matrix is revealed. Also, the elements of the last row of A are the coefficients of the differential equation (1.40), placed in reverse order and all with negative signs. The vector b has all its elements equal to zero, except for the nth element, which is equal to one. Example 1.13. The differential equation mathematical model of a system is given as: y⃛ + 6ÿ + 41ẏ + 7y = 6u . Try to derive the state equation and the output equation. ̇ ̈ as the state variables: Solution. We choose y/6, y/6, y/6 x1 = Then,

y , 6

x2 =

ẏ , 6

x3 =

ÿ . 6

ẏ = x2 , 6 ÿ ẋ 2 = = x3 , 6 y⃛ x3 = = −7x1 − 41x2 − 6x3 + u . 6

ẋ 1 =

The state equation is: ẋ 1 0 (ẋ 2 ) = ( 0 −7 ẋ 3

1 0 −41

0 x1 0 1 ) (x2 ) + (0) u . −6 1 x3

The output equation is: y = 6x1 = (6

0

x1 0) (x2 ) . x3

(1.48)
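The realization obtained in Example 1.13 can be checked numerically; the following MATLAB sketch (added here, not part of the original text) converts the constructed matrices back to a transfer function, which should reproduce G(s) = 6/(s^3 + 6s^2 + 41s + 7).

a = [0 1 0; 0 0 1; -7 -41 -6];
b = [0; 0; 1];
c = [6 0 0];
d = 0;
[num, den] = ss2tf(a, b, c, d)    % expect num = [0 0 0 6], den = [1 6 41 7]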


1.4 Summary

Three kinds of models and their relationships were introduced in this chapter. Each type of model has its merits and shortcomings. The transfer function is widely used in classical control theory, which mainly studies SISO linear systems in the Laplace domain. The differential equation is used in the time domain, and the state space equation is commonly used in modern control theory, in which MIMO systems are studied in the time domain. The following chapters on modern control theory, which study the properties of a system and system synthesis, are mainly based on the state space description.

Appendix: Three Power Generation Models

Case 1) Thermal Power Generation

The fundamental dynamics of a 160 MW drum type boiler-turbine-generator (BTG) plant can be represented by a third order coupled nonlinear MIMO model over a wide operating range. Typically, the coordinated control governs the dominant behavior of the power unit through the power and steam pressure control loops, as shown in Figure 1.13.


Deaerator

Fig. 1.13: Coordinated control scheme.

The inputs are positions of valve actuators that control the mass flow rates of fuel (u 1 in pu), steam to the turbine (u 2 in pu), and feedwater to the drum (u 3 in pu). The outputs are electric power (E in MW), drum steam pressure (P in kg/cm2 ), and drum

Appendix: Three Power Generation Models |

23

water level deviation (L in mm). The state variables are electric power (E), drum steam pressure (P), and fluid (steam water) density (ρ f ). The state equations are: dP = 0.9u 1 − 0.0018u 2 P9/8 − 0.15u 3 , dt dE = ((0.73u 2 − 0.16)P9/8 − E)/10 , dt dρ f = (141u 3 − (1.1u 2 − 0.19)P)/85 . dt

(1.49)

The drum water level output is calculated using the following algebraic equations: qe = (0.85u 2 − 0.14)P − 45.59u 1 − 2.51u 3 − 2.09 , α s = (1/ρ f − 0.0015)/(1/(0.8P − 25.6) − 0.0015) ,

(1.50)

L = 50(0.13ρ f + 60α s + 0.11qe − 65.5) , where α s is the steam quality and qe is the evaporation rate (kg/s). Define x1 , x2 and x3 as the drum steam pressure (kg/cm2 ), the electric power (MW), and the steam water fluid density in the drum, respectively. The output y3 is the drum water level (cm) calculated using two algebraic calculations, α cs and qe , which are the steam quality and the evaporation rates (kg/s).The input u 1 , u 2, and u 3 are normalized positions of valve actuators that control the mass flow rates of fuel, steam to the turbine, and feedwater to the drum, respectively. Then the state space equation becomes: 9/8 ẋ 1 = −0.0018u 2 x1 + 0.9u 1 − 0.15u 3 9/8 ẋ 2 = (0.073u 2 − 0.016) x1 − 0.1x2

ẋ 3 = [141u 3 − (1.1u 2 − 0.19) x1 ] /85 y1 = x1 y2 = x2

(1.51)

y3 = 0.05 (0.13073x3 + 100α cs + qe /9 − 67.975) α cs =

(1 − 0.001538x3 ) (0.8x1 − 25.6) x3 (1.0394 − 0.0012304x1 )

qe = (0.845u 2 − 0.147) x1 + 45.59u 1 − 2.514u 3 − 2.096 . Case 2) Nuclear Power Generation The water level model developed by E. Irving is a fourth-order transfer function representation, with its power level dependent parameters listed in Table 1: Y(s) =
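The state equations in (1.51) are easy to simulate. The following MATLAB fragment is a minimal simulation sketch of the drum-boiler dynamics; the constant input values and the initial state used below are assumed here purely for illustration and are not specified in the text.

u = [0.34; 0.69; 0.43];                       % assumed constant valve positions
f = @(t,x) [ -0.0018*u(2)*x(1)^(9/8) + 0.9*u(1) - 0.15*u(3);
             (0.073*u(2) - 0.016)*x(1)^(9/8) - 0.1*x(2);
             (141*u(3) - (1.1*u(2) - 0.19)*x(1))/85 ];
x0 = [108; 66.65; 428];                       % assumed initial operating point
[t,x] = ode45(f, [0 1000], x0);
plot(t, x(:,1)), xlabel('t (s)'), ylabel('drum pressure x_1')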

G1 G2 G3 s Qw (s) , (Qw (s)− Qv (s))− (Qw (s)− Qv (s))+ −2 2 −2 2 s 1 + τ2 s τ1 + 4π T + 2τ −2 1 s+s (1.52)

24 | 1 System Model

Tab. 1: Parameters of the Steam Generator Model With Respect to Operating Power. P (% power) q v (kg/s) G1 G2 G3 τ1 τ2 T

5 57.4 0.058 9.63 0.181 41.9 48.4 119.6

15 180.8 0.058 4.46 0.226 26.3 21.5 60.5

30 381.7 0.058 1.83 0.310 43.4 4.5 17.7

50 660 0.058 1.05 0.215 34.8 3.6 14.2

100 1435 0.058 0.47 0.105 28.6 3.4 11.7

where Y(s), Qw (s) and Qv (s) represent the water level, the feed water flow rate and the steam flow rate respectively. τ 1 , τ 2 and T are the damping time constants and the oscillation period. Suppose that: G1 Y1 (s) = (Qw (s) − Qv (s)) , s G2 Y 2 (s) = − (Qw (s) − Qv (s)) , (1.53) 1 + τ2 s G3 s Y 3 (s) = −2 Qw (s) . 2 −2 2 τ1 + 4π T + 2τ−1 1 s+s Define the state variables as shown in the Figure 1.14.

(a)

(b)

(c) Fig. 1.14: (a) Y1 (s); (b) Y2 (s); (c) Y3 (s).

Appendix: Three Power Generation Models |

25

Then, ẋ 1 (t) = G1 (Qw (t) − Qv (t)) ẋ 2 (t) = −τ −1 2 x 2 (t) −

G2 (Qw (t) − Qv (t)) τ2

(1.54)

ẋ 3 (t) = −2τ−1 1 x 3 (t) + x 4 (t) + G 3 Qw (t) 2 −2 ẋ 4 (t) = −(τ −2 1 + 4π T )x 3 (t) .

The control variable u = Qw , the disturbance d = Qv , and the water level output y = x1 + x2 + x3 , and equations (1.54) can be rearranged in the following state space equations: { ẋ c (t) = A(θ)xc (t) + B(θ)u c (t) + W(θ)dc (t) (1.55) { yc (t) = Cxc (t) , { where x1 [x ] [ 2] xc = [ ] , [x3 ] [x4 ] 0 [0 [ A(θ) = [ [0 [0 C = [1

0 a22 0 0 1

a22 = −τ−1 2 ,

0 0 a33 a43

1

0 0 ] ] ] , a34 ] 0 ]

b1 [b ] [ 2] B(θ) = [ ] , [b3 ] [0]

d1 [d ] [ 2] W(θ) = [ ] , [0] [0]

0]

a33 = −2τ−1 1 ,

b 1 = G1 ,

b 2 = −G2 τ−1 2 ,

d1 = −G1 ,

d2 =

a34 = 1 ,

2 −2 a43 = −(τ−1 1 + 4π T )

b 3 = G3

G2 . τ2

Case 3) Wind Power Generation The doubly fed induction generators (DFIGs) have been widely used in the modern wind energy systems, due to the advantages of variable speed operation and four quadrant active and reactive power capabilities. In DFIG, the stator is directly connected to the power grid, while the rotor is connected to the grid by a bidirectional converter, as shown in Figure 1.15. This converter controls active and the reactive power between the stator and ac supply, or a standalone grid. The equivalent circuit of a DFIG in an arbitrary reference frame rotating at synchronous angular speed ω1 is shown in Figure 1.16.

26 | 1 System Model Network Supply

Transformer Wind Turbine

Gearbox

DFIG

Converter

Fig. 1.15: The doubly fed induction generator (DFIG) system.

Fig. 1.16: The equivalent circuit of a DFIG.

The DFIG model in the synchronous reference frame is given by: us

= Rs is + dφs /dt + jω1 φs ,

u r = Rr ir + dφr /dt + j (ω1 − ωr ) φr ,

(1.56) (1.57)

where the relationship between fluxes and currents is: φs = Ls is + Lm ir ,

(1.58)

φr = Lm is + Lr ir ,

(1.59)

Appendix: Three Power Generation Models |

27

and generator active and reactive power are: 3 (u sd isd + u sq isq ) , 2 3 Q = (u sq isd + u sd isq ) . 2 P=

(1.60) (1.61)

The subscripts s and r represent the stator and rotor parameters respectively ω1 represents the synchronous speed and ωr represents rotor angular speed. Rs and Rr represent stator and the rotor windings per phase electrical resistance and Ls , Lr and Lm represent the proper and the mutual inductances of the stator and rotor windings. u represents voltage vector. The DFIG power control aims independent stator active P and reactive power Q control by means of a rotor current regulation. For this purpose, P and Q are represented as functions of each individual rotor current. We use stator flux oriented control, that decouples the dq axis, which means: φsd = φs , φsq = 0. Thus, (1.58) becomes: φs Lm − ird , Ls Ls Lm =− irq . Ls

isd =

(1.62)

isq

(1.63)

Similarly, using stator flux orientation, the stator voltage becomes u sd = 0 and u sq = u s . Hence, the active (1.60) and reactive (1.61) power can be calculated by using (1.62) and (1.63): 3 Lm P = − us irq , 2 Ls 3 φs Lm − ird ) u s . Q= ( 2 Ls Ls

(1.64) (1.65)

Thus, rotor currents will reflect on stator current and on stator active and reactive power, respectively. Consequently, this principle can be used on stator active and reactive power control of the DFIG. The DFIG power control is realized by the rotor currents control using (1.64) and (1.65). Using (1.62) and (1.63), the rotor voltage (1.57), in the synchronous referential frame, becomes: dir Lm +j ωsl φs , dt Ls

(1.66)

dir −Rr 1 Lm ir − jωsl ir + ur − j ωsl φs , = dt σLr σLr σLr Ls

(1.67)

Rr dir =− ird + ωsl irq + dt σLr dir Rr irq + = −ωsl ird − dt σLr

(1.68)

u r = (Rr + jσLr ωsl ) ir + σLr then:

1 u rd σLr 1 ωsl Lm u rq − φs . σLr σLr Ls

28 | 1 System Model

(1.68) can be expressed as: di rd dt ] di rq [ dt ]

[

=[

Rr − σL r −ωsl

1 ird [ ]=[ irq 0

1 ωsl ird σL r Rr ] [ ] + [ − σLr irq 0

0

u rd ] 1 ][ u rq σL r

1 +[ 0

0 0 ][ ] 1 − ωσLslrLLms φs

(1.69)

0 ird ][ ] , 1 irq

where ωsl = ω1 − ωr

and

σ =1−

L2m . Ls Lr

Assuming that: x = [x1 , x2 ]T = [ird , irq ]T ,

̇ , irq ̇ ]T , ẋ = [ẋ 1 , ẋ 2 ]T = [ird

T

y = [y1 , y2 ]T = [ird , irq ] ,

u = [u 1 , u 2 ]T = [u rd , u rq ] ,

T

(1.69) can be expressed in the standard space state form: ẋ = Ax + Bu + Gω

(1.70)

y = Cx , where:

− Rr A = [ σLr −ωsl 1 C=[ 0

ωsl Rr ] , − σL r

1

B = [ σLr 0

0

1 ] σL r

,

0 Gω = [ ωsl Lm ] . − σLr Ls φs

0 ] , 1

Exercise 1.1. Consider the network shown in Figure 1.17. Derive the network’s differential equation mathematical model.

R1

R2 u1

uC1

u2 uC2

Fig. 1.17: Circuit diagram.

Exercise

| 29

1.2. Consider the mechanical system shown in Figure 1.18, where x1 = u C1 , x2 = u C2 , K, B, M are the spring’s constant, the friction coefficient and the mass, respectively. Derive the state space equations.

K

M1 B2

B1

M2 f (t)

Fig. 1.18: Mechanical system.

1.3. Consider the network shown in Figure 1.17. Derive the network’s state space equations. 1.4. Consider the network shown in Figure 1.19. Derive the network’s transfer function U C (s) . U(s)

R u (t)

L i (t) C

uC (t)

Fig. 1.19: Network.

1.5. Consider the network shown in Figure 1.19. Derive the state space equations.

30 | 1 System Model

1.6. Consider the network shown in Figure 1.20. Derive the network’s transfer function U L (s) , U1 (s)

R1

C2

+ i1 u1

U L (s) . U2 (s)

+

i3

i2 C1

u2

L

uL 

 R2

Fig. 1.20: Network.

1.7. Consider the network shown in Figure 1.20. Derive the network’s state space equations. 1.8. Transition from state equations to transfer function. (1)

ẋ 1 0 [ ̇ ] [ [ x 2 ] = [1 [ẋ 3 ] [0 y = [0

0 −2 1 1

0 6 x1 ][ ] [ ] 0 ][x2 ] + [0] u , −3][x3 ] [0]

(2)

x1 [ ] −2] [x2 ] . [x3 ]

[

−5 ẋ 1 ]=[ 1 ẋ 2

y = [1

1.9. Transition from transfer function to state equations. (1)

g(s) =

(2)

g(s) =

s3

s3 + s + 1 . + 6s2 + 11s + 6

s2 + 2s + 3 . s3 + 2s2 + 3s + 1

1.10. Transition from differential equation to transfer function. (1)

2y⃛ + 3ẏ = ü − u .

(2)

y⃛ + 2ÿ + 3ẏ + 5y = 5u⃛ + 7u .

1.11. Transition from transfer function to differential equation. g(s) =

Y(s) 160s + 720 = 3 . U(s) s + 16s2 + 194s + 640

0 x1 2 ][ ] + [ ] u , −1 x2 0

− 12 ] [

x1 ] . x2

2 Linear Transformation of State Vector

2.1 Linear Algebra

This chapter reviews a number of concepts and results in linear algebra that are essential for the linear transformation of the state vector. As we saw in the preceding chapter, all parameters that arise in the real world are real numbers. Therefore, we deal only with real numbers throughout this chapter, except when specifically stated otherwise. Suppose A, B, C, and D are, respectively, n × m, m × r, l × n, and r × p dimensional real matrices. Let a_i be the ith column of A, and b_j the jth row of B. Then we have:

AB = [a1

CA = C [a1

a2

a2

b1 [b ] [ 2] ] BD = [ [ .. ] , [ . ] [b m ]

...

...

b1 [b ] [ 2] ] am ] [ [ .. ] = a1 b 1 + a2 b 2 + ⋅ ⋅ ⋅ + a m b m , [ . ] [b m ] a m ] = [Ca1

Ca2

...

b1 D [b D] [ 2 ] ] D=[ [ .. ] . [ . ] [b m D]

Ca m ] ,

(2.1)

(2.2)

(2.3)

These identities can easily be verified. Note that a i b i is a n × r matrix; it is the product of a n × 1 column vector and a 1 × r row vector. The product b i a i is not defined unless n = r; it becomes a scalar if n = r. Consider a n-dimensional real linear space, denoted by R n . Every vector in R n has n real numbers such as: x1 [x ] [ 2] ] x=[ [ .. ] . [.] [x n ] To save space, we write it as x = [x1 x2 . . . x n ]T , where the superscript denotes the transpose. The set of vectors {x1 , x2 , . . . , x m } in R n is said to be linearly dependent if there exists set of real numbers α 1 , α 2 , . . . , α m , which are not all zero, such that: α1 x1 + α2 x2 + ⋅ ⋅ ⋅ + α m x m = 0 .

(2.4)

If the only set of α_i that makes (2.4) hold is α_1 = 0, α_2 = 0, . . . , α_m = 0, then the set of vectors {x_1, x_2, . . . , x_m} is said to be linearly independent.

https://doi.org/10.1515/9783110574951-002
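In practice, linear independence of a finite set of vectors is easily checked numerically. The following MATLAB fragment is an added illustration (not part of the original text); the sample vectors are arbitrary.

X = [1 0 1; 2 1 0; 0 1 1];            % columns of X are the vectors x1, x2, x3
if rank(X) == size(X, 2)
    disp('the columns of X are linearly independent')
else
    disp('the columns of X are linearly dependent')
end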


If the set of vectors in (2.4) is linearly dependent, then there exists at least one α_i, say α_1, that is nonzero. Then (2.4) implies:

x_1 = -\frac{1}{\alpha_1} \left[ \alpha_2 x_2 + \alpha_3 x_3 + \cdots + \alpha_m x_m \right] = \beta_2 x_2 + \beta_3 x_3 + \cdots + \beta_m x_m ,

where β i = −α i /α 1 . Such an expression is called a linear combination. The dimension of a linear space can be defined as the maximum number of linearly independent vectors in the space. Thus, in R n , we can find n linearly independent vectors. Basis and Representation A set of linearly independent vectors in R n is called a basis if every vector in R n can be expressed as a unique linear combination of the set. In R n , any set of n linearly independent vectors can be used as a basis. Let {q1 , q2 , . . . , q n } be such a set. Then every vector x can be expressed uniquely as: x = α 1 q1 + α 2 q2 + ⋅ ⋅ ⋅ + α n q n .

(2.5)

Define the n × n square matrix: Q = [q1

q2

...

qn ] .

(2.6)

Then (2.5) can be written as: α1 [α ] [ 2] ] x = Q[ [ .. ] = Qx . [ .] [α n ]

(2.7)

We call x = [α1 α 2 . . . α n ]T the representation of the vector x, with respect to the basis {q1 , q2 , . . . , q n }. We will associate every R n with the following orthonormal basis: 1 [0] [ ] [ ] [0] [ ] i1 = [ . ] , [ .. ] [ ] [ ] [0] [0]

0 [1] [ ] [ ] [0] [ ] i2 = [ . ] , [ .. ] [ ] [ ] [0] [0]

... ,

i n−1

0 [0] [ ] [ ] [0] [ ] = [.] , [ .. ] [ ] [ ] [1] [0]

0 [0] [ ] [ ] [0] [ ] in = [ . ] . [ .. ] [ ] [ ] [0] [1]

(2.8)

2.1 Linear Algebra

| 33

With respect to this basis, we have: x1 x1 [x ] [x ] [ 2] [ 2] ] [ ] x=[ [ .. ] = x1 i1 + x2 i2 + ⋅ ⋅ ⋅ + x n i n = I n [ .. ] , [.] [.] x [x n ] [ n] where I n is the n × n unit matrix. In other words, the representation of any vector x with respect to the orthonormal basis in (2.8) equals itself. Norms of Vectors The concept of norm is a generalization of length or magnitude. Any real valued function of x, denoted by ‖x‖, can be defined as a norm if it has the following properties: (i) ‖x‖ ≥ 0 for every x and ‖x‖ = 0 if, and only if, x = 0 (ii) ‖αx‖ = |α|‖x‖, for any real α (iii) ‖x1 + x2 ‖ ≤ ‖x1 ‖ + ‖x2 ‖ for every x1 and x2 Orthonormalization A vector x is said to be normalized if its Eucildeam norm is 1 or xT x = 1. Note that xT x is scalar and xxT is n × n. Two vectors, x1 and x2 , are said to be orthogonal if xT1 x2 = xT2 x1 = 0. A set of vectors x i , i = 1, 2, . . . , m, are said to be orthonormal if: {0 if i ≠ j xTi x j = { 1 if i = j . { Given a set of linearly independent vectors e1 , e2 , . . . , e m , we can obtain an orthonormal set using the procedure that follows: u1 = e1 u2 = e2 −

q1 = u 1 /‖u 1 ‖ (qT1 e2 )q1

.. .

q2 = u 2 /‖u 2 ‖ .. .

m−1

u m = e m − ∑ (qTk e m )q k

q m = u m /‖u m ‖ .

k=1

The first equation normalizes the vector e1 to have norm 1. The vector (qT1 e2 )q1 is the projection of the vector e2 along q1 . Its subtraction from e2 yields the vertical part u 2 . It is then normalized to 1. Using this procedure, we can obtain an orthonormal set. This is called the Schmidt orthonormalization procedure.
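The procedure translates directly into MATLAB. The function below is a minimal sketch added for illustration (it is not from the original text) and assumes the columns of E are linearly independent.

function Q = schmidt_orthonormalize(E)
% Schmidt (Gram-Schmidt) orthonormalization of the columns of E,
% following the procedure described above.
    [n, m] = size(E);
    Q = zeros(n, m);
    for k = 1:m
        u = E(:,k);
        for j = 1:k-1
            u = u - (Q(:,j)'*E(:,k))*Q(:,j);   % subtract projections along q_j
        end
        Q(:,k) = u/norm(u);                    % normalize to unit length
    end
end

For example, Q = schmidt_orthonormalize([e1 e2 e3]) returns orthonormal columns, which can be verified by checking that Q'*Q is (numerically) the identity matrix.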

34 | 2 Linear Transformation of State Vector Let A = [a1 a2 . . . a m ] be an n × m matrix with m ≤ n. If all columns of A or {a i , i = 1, 2, . . . , m} are orthonormal, then: aT1 [ aT ] [ 2] ] AT A = [ [ .. ] [a1 [ . ] T [a m ]

a2

1 [0 [ am ] = [ [ .. [. [0

...

0 1 .. . 0

... ... .. . ...

0 0] ] .. ] ] = Im , .] 1]

where I m is the unit matrix of order m. Note that, in general, AAT ≠ I n . Similarity Transformation Consider a n × n matrix A. It maps R n into itself. If we associate R n with the orthonormal basis {i1 , i2 , . . . , i n } in (2.8), then the ith column of A is the representation of Ai i , with respect to the orthonormal basis. Now, if we select a different set of basis {q1 , q2 , . . . , q n }, then the matrix A has a different representation A. It turns out that the ith column of A is the representation of Aq i , with respect to the basis {q1 , q2 , . . . , q n }. This is illustrated by the following example. Example 2.1. Consider the matrix: 3 [ A = [−2 [4

2 1 3

−1 ] 0] . 1]

(2.9)

If b = [0 0 1]T , we have: −1 [ ] Ab = [ 0 ] , [1]

−4 [ ] A2 b = A(Ab) = [ 2 ] , [−3]

−5 [ ] A3 b = A(A2 b) = [ 10 ] . [−13]

It can be verified that the following relation holds: A3 b = 17b − 15Ab + 5A2 b . A2 b

(2.10)

Because the three vectors b, Ab, and are linearly independent, they can be used as a basis. We now compute the representation of A with respect to this basis. It is clear that: 0 [ ] A(b) = [b Ab A2 b] [1] [0] 0 [ ] [1]

A(Ab) = [b

Ab

[ ] A2 b] 0

A(A2 b) = [b

Ab

17 [ ] A2 b] [−15] , [ 5 ]

2.1 Linear Algebra

| 35

where the last equation is obtained from (2.10). Thus, the representation of A, with respect to the basis {b, Ab, A2 b}, is: 0 [ A = [1 [0

0 0 1

17 ] −15] . 5 ]

(2.11)

The preceding discussion can be extended to general cases. For example, suppose that A is a n × n matrix. If there exists a n × 1 vector b such that the n vectors b, Ab, . . . , A n−1 b are linearly independent, and if A n = β 1 b + β 2 Ab + ⋅ ⋅ ⋅ + β n A n−1 b , the representation of A with respect to the basis {b, Ab, . . . , A n−1 b} is: 0 [1 [ [ [0 [ A = [. [ .. [ [ [0 [0

0 0 1 .. . 0 0

... ... ... ... ...

0 0 0 .. . 0 1

β1 β2 ] ] ] β3 ] ] . .. ] . ] ] ] β n−1 ] βn ]

This matrix is said to be in a companion form. Consider the equation: Ax = y . The square matrix A maps x in q n }, the equation becomes:

Rn

(2.12)

(2.13)

Rn .

With respect to the basis {q1 , q2 , . . . ,

Ax = y ,

(2.14)

into y in

where x and y are the representations of x and y, with respect to the basis {q1 , q2 , . . . , q n }. As discussed in (2.7), they are related by: x = Qx ,

y = Qy

with Q = [q1

q2

...

qn ]

(2.15)

to be a n × n nonsingular matrix. Substituting these into (2.13) yields: AQ x = Qy

or

Q−1 AQ x = y .

(2.16)

A = QAQ−1 .

(2.17)

Comparing this with (2.14) yields: A = Q−1 AQ

or


This is called the similarity transformation and A and A are said to be similar. We write (2.17) as: AQ = QA , or as: A [q1

q2

...

q n ] = [Aq1

Aq2

...

Aq n ] = [q1

q2

...

qn ] A .

This shows that the ith column of A is indeed the representation of Aq i , with respect to the basis {q1 , q2 , . . . , q n }.

2.2 Transform to Diagonal Form and Jordan Form

A square matrix A has different representations with respect to different sets of basis vectors. In this section, we introduce a basis such that the representation becomes diagonal or block diagonal. A real or complex number λ is called an eigenvalue of the n × n real matrix A if there exists a nonzero vector x such that Ax = λx; such a vector x is called a (right) eigenvector of A associated with the eigenvalue λ. In order to find the eigenvalues of A, we write Ax = λx = λIx as:

(A - λI) x = 0 ,   (2.18)

where I is the unit matrix of order n. This is a homogeneous equation. If the matrix (A − λI) is nonsingular, then the only solution of (2.18) is x = 0. Thus, in order for (2.18) to have a nonzero solution x, the matrix (A − λI) must be singular, i.e., have zero determinant. We define:

∆(λ) = det(λI − A) .

It is a monic polynomial of degree n with real coefficients and is called the characteristic polynomial of A. A polynomial is called monic if its leading coefficient is 1. If λ is a root of the characteristic polynomial, then the determinant of (A − λI) is 0 and (2.18) has at least one nonzero solution. Thus, every root of ∆(λ) is an eigenvalue of A. Because ∆(λ) has degree n, the n × n matrix A has n eigenvalues (not necessarily all distinct). We mention that the matrices

0 0 1 0

0 0 0 1

−α4 −α 3 ] ] ] −α 2 ] −α 1 ]

−α 1 [ 1 [ [ [ 0 [ 0

−α 2 0 1 0

−α 3 0 0 1

−α 4 0 ] ] ] 0 ] 0 ]

2.2 Transform to Diagonal Form and Jordan Form

|

37

and their transposes 0 [ 0 [ [ [ 0 [−α4

1 0 0 −α 3

0 1 0 −α 2

−α 1 [−α [ 2 [ [−α 3 [−α 4

0 0 ] ] ] 1 ] −α 1 ]

1 0 0 0

0 1 0 0

0 0] ] ] 1] 0]

have the following characteristic polynomial: ∆(λ) = λ4 + α 1 λ3 + α 2 λ2 + α 3 λ + α 4 . These matrices can easily be formed from the coefficients of ∆(λ) and are called companion form matrices. The companion form matrices will arise repeatedly later. The matrix in (2.12) is such a form. Eigenvalues of A Are All Distinct Suppose λ i , i = 1, 2, . . . , n, are the eigenvalues of A and that all are distinct. Let q i be an eigenvector of A associated with λ i ; that is, Aq i = λ i q i . The set of eigenvectors {q1 , q2 , . . . , q n } would then be linearly independent and can be used as a basis. Let ̂ A be the representation of A with respect to this basis. Then the first column of ̂ A is the representation of Aq1 = λ1 q1 with respect to {q1 , q2 , . . . , q n }. From

Aq1 = λ1 q1 = [q1

q2

...

λ1 [ ] [0] [ ] [ ] qn ] [ 0 ] , [.] [.] [.] [0]

we conclude that the first column of ̂ A is [λ1 0 . . . 0]T . The second column of ̂ A is the representation of Aq2 = λ2 q2 with respect to {q1 , q2 , . . . , q n }. That is, [0 λ2 0 . . . 0]T . Proceeding forward, we can establish: λ1 [ [0 [ [ ̂ A = [0 [. [. [. [0

0 λ2 0 .. . 0

0 0 λ3 .. . 0

... ... ... ...

0 ] 0] ] 0] ] . .. ] ] . ] λN ]

(2.19)

This is a diagonal matrix. Thus, we conclude that every matrix with distinct eigenvalues has a diagonal matrix representation by using its eigenvectors as a basis. Different orderings of eigenvectors will yield different diagonal matrices for the same A.

If we define:

Q = [q₁ q₂ . . . qₙ] ,   (2.20)

then the matrix Â equals:

Â = Q⁻¹AQ ,   (2.21)

as derived in (2.17). Computing (2.21) by hand is not simple because of the need to compute the inverse of Q. However, if we know Â, then we can verify it by checking QÂ = AQ.

Example 2.2. Consider the matrix:

A = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 2 \\ 0 & 1 & 1 \end{bmatrix} .

Its characteristic polynomial is:

∆(λ) = det (λI − A) = det \begin{bmatrix} \lambda & 0 & 0 \\ -1 & \lambda & -2 \\ 0 & -1 & \lambda-1 \end{bmatrix} = λ [λ (λ − 1) − 2] = (λ − 2)(λ + 1)λ .

Thus, the eigenvalues of A are 2, −1, and 0. The eigenvector associated with λ = 2 is any nonzero solution of

(A − 2I)q₁ = \begin{bmatrix} -2 & 0 & 0 \\ 1 & -2 & 2 \\ 0 & 1 & -1 \end{bmatrix} q₁ = 0 .

As such, q₁ = [0 1 1]ᵀ is an eigenvector associated with λ = 2. Note that the eigenvector is not unique; [0 α α]ᵀ for any nonzero real α can also be chosen as an eigenvector. The eigenvector associated with λ = −1 is any nonzero solution of

(A − (−1)I)q₂ = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 2 \\ 0 & 1 & 2 \end{bmatrix} q₂ = 0 ,

which yields q₂ = [0 −2 1]ᵀ. Similarly, the eigenvector associated with λ = 0 can be computed as q₃ = [2 1 −1]ᵀ. Therefore, the representation of A with respect to {q₁, q₂, q₃} is:

\hat{A} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix} .   (2.22)

It is a diagonal matrix with eigenvalues on the diagonal. This matrix can also be obtained by computing Â = Q⁻¹AQ with

Q = [q₁ q₂ q₃] = \begin{bmatrix} 0 & 0 & 2 \\ 1 & -2 & 1 \\ 1 & 1 & -1 \end{bmatrix} .

However, it is simpler to verify QÂ = AQ, or:

\begin{bmatrix} 0 & 0 & 2 \\ 1 & -2 & 1 \\ 1 & 1 & -1 \end{bmatrix}
\begin{bmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix}
=
\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 2 \\ 0 & 1 & 1 \end{bmatrix}
\begin{bmatrix} 0 & 0 & 2 \\ 1 & -2 & 1 \\ 1 & 1 & -1 \end{bmatrix} .
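Because inverting Q by hand is tedious, the diagonalization in Example 2.2 can also be checked numerically. The following sketch simply reuses the matrices of the example; the eig call is shown for comparison only:

```matlab
% Numerical check of Example 2.2
A = [0 0 0; 1 0 2; 0 1 1];
Q = [0 0 2; 1 -2 1; 1 1 -1];   % eigenvectors q1, q2, q3 as columns
A_hat = Q\(A*Q)                % should reproduce diag(2, -1, 0) as in (2.22)
[V, D] = eig(A)                % MATLAB's eigenvectors differ only by scaling
```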

Eigenvalues of A Are Not All Distinct

An eigenvalue with multiplicity 2 or higher is called a repeated eigenvalue. In contrast, an eigenvalue with multiplicity 1 is called a simple eigenvalue. If A has only simple eigenvalues, it always has a diagonal form representation. If A has repeated eigenvalues, it may not have a diagonal form representation. However, it always has a block diagonal and triangular form representation, as we will discuss next.

Consider an n × n matrix A with eigenvalue λ of multiplicity n. In other words, A has only one distinct eigenvalue. To simplify the discussion, we assume n = 4. Suppose the matrix (A − λI) has rank n − 1 = 3 or, equivalently, nullity 1. Then the equation (A − λI)q = 0 has only one independent solution, so A has only one eigenvector associated with λ. We need n − 1 = 3 more linearly independent vectors to form a basis for R⁴. The three vectors q₂, q₃, q₄ will be chosen to have the properties (A − λI)²q₂ = 0, (A − λI)³q₃ = 0, and (A − λI)⁴q₄ = 0.

A vector v is called a generalized eigenvector of grade n if

(A − λI)ⁿ v = 0   and   (A − λI)ⁿ⁻¹ v ≠ 0 .

If n = 1, these reduce to (A − λI)v = 0 and v ≠ 0, so v is an ordinary eigenvector. For n = 4, we define:

v₄ = v
v₃ = (A − λI) v₄ = (A − λI) v
v₂ = (A − λI) v₃ = (A − λI)² v
v₁ = (A − λI) v₂ = (A − λI)³ v .

They are called a chain of generalized eigenvectors of length n = 4 and have the properties (A − λI)v₁ = 0, (A − λI)²v₂ = 0, (A − λI)³v₃ = 0, and (A − λI)⁴v₄ = 0. These vectors, as generated, are automatically linearly independent and can be used as a basis. From these equations, we can readily obtain:

Av₁ = λv₁
Av₂ = v₁ + λv₂
Av₃ = v₂ + λv₃
Av₄ = v₃ + λv₄ .

Then the representation of A with respect to the basis {v₁, v₂, v₃, v₄} is:

J = \begin{bmatrix} \lambda & 1 & 0 & 0 \\ 0 & \lambda & 1 & 0 \\ 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & \lambda \end{bmatrix} .   (2.23)

We verify this for the first and last columns. The first column of J is the representation of Av₁ = λv₁ with respect to {v₁, v₂, v₃, v₄}, which is [λ 0 0 0]ᵀ. The last column of J is the representation of Av₄ = v₃ + λv₄ with respect to {v₁, v₂, v₃, v₄}, which is [0 0 1 λ]ᵀ. This verifies the representation in (2.23). The matrix J has the eigenvalue on the diagonal and 1 on the superdiagonal. If we reverse the order of the basis, the 1s in (2.23) will appear on the subdiagonal. The matrix is called a Jordan block of order n = 4.

If (A − λI) has rank n − 2 or, equivalently, nullity 2, then the equation (A − λI)q = 0 has two linearly independent solutions. Thus, A has two linearly independent eigenvectors and we need (n − 2) generalized eigenvectors. In this case, two chains of generalized eigenvectors exist: {v₁, v₂, . . . , v_k} and {u₁, u₂, . . . , u_l} with k + l = n. If v₁ and u₁ are linearly independent, then the set of n vectors {v₁, . . . , v_k, u₁, . . . , u_l} is linearly independent and can be used as a basis. With respect to this basis, the representation of A is a block diagonal matrix of the form:

Â = diag{J₁, J₂} ,

where J₁ and J₂ are, respectively, Jordan blocks of order k and l.

Now we discuss a specific example. Consider a 5 × 5 matrix A with repeated eigenvalue λ₁ of multiplicity 4 and simple eigenvalue λ₂. Suppose that a nonsingular matrix Q exists, such that Â = Q⁻¹AQ

assumes one of the following forms:

\hat{A}_1 = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 & 0 \\ 0 & 0 & \lambda_1 & 1 & 0 \\ 0 & 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & 0 & \lambda_2 \end{bmatrix} ,\quad
\hat{A}_2 = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & 0 & \lambda_2 \end{bmatrix} ,\quad
\hat{A}_3 = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 0 & 0 & 0 \\ 0 & 0 & \lambda_1 & 1 & 0 \\ 0 & 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & 0 & \lambda_2 \end{bmatrix} ,

\hat{A}_4 = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 0 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & 0 & \lambda_2 \end{bmatrix} ,\quad
\hat{A}_5 = \begin{bmatrix} \lambda_1 & 0 & 0 & 0 & 0 \\ 0 & \lambda_1 & 0 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & 0 & \lambda_2 \end{bmatrix} .   (2.24)

The first matrix occurs when the nullity of (A − λ₁I) is 1. If the nullity is 2, then Â has two Jordan blocks associated with λ₁; it may assume the form in Â₂ or Â₃. If (A − λ₁I) has nullity 3, then Â has three Jordan blocks associated with λ₁, as shown in Â₄. Certainly, the positions of the Jordan blocks can be changed by changing the order of the basis. If the nullity is 4, then Â is a diagonal matrix, as shown in Â₅. All these matrices are triangular and block diagonal with Jordan blocks on the diagonal; they are said to be in Jordan form. A diagonal matrix is a degenerate Jordan form; its Jordan blocks all have order 1.

Jordan form matrices are triangular and block diagonal and can be used to establish many general properties of matrices. For example, because det(CD) = det C det D and det Q det Q⁻¹ = det I = 1, from A = QÂQ⁻¹ we have:

det A = det Q det Â det Q⁻¹ = det Â .

The determinant of Â is the product of all diagonal entries or, equivalently, of all eigenvalues of A. Thus, we have:

det A = product of all eigenvalues of A ,

which implies that A is nonsingular if, and only if, all of its eigenvalues are nonzero.

We discuss a useful property of Jordan blocks to conclude this section. Consider the Jordan block in (2.23) with order 4. Then we have:

(J - \lambda I) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} ,\quad
(J - \lambda I)^2 = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} ,\quad
(J - \lambda I)^3 = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} ,

and (J − λI)ᵏ = 0 for k ≥ 4. Such a matrix is called nilpotent.

Example 2.3. Consider the matrix:

A = \begin{bmatrix} 0 & 1 & -1 \\ -6 & -11 & 6 \\ -6 & -11 & 5 \end{bmatrix} .

Its characteristic polynomial is:

∆(λ) = det (λI − A) = det \begin{bmatrix} \lambda & -1 & 1 \\ 6 & \lambda+11 & -6 \\ 6 & 11 & \lambda-5 \end{bmatrix} = (λ + 1)(λ + 2)(λ + 3) .

Thus, the eigenvalues of A are −1, −2, and −3, and the associated eigenvectors are

q₁ = [1 0 1]ᵀ ,   q₂ = [1 2 4]ᵀ ,   q₃ = [1 6 9]ᵀ .

We can obtain the diagonal matrix by computing Â = Q⁻¹AQ, with:

Q = [q₁ q₂ q₃] = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 6 \\ 1 & 4 & 9 \end{bmatrix} .   (2.25)

Then we can get:

Q^{-1} = \frac{1}{2} \begin{bmatrix} 6 & 5 & -4 \\ -6 & -8 & 6 \\ 2 & 3 & -2 \end{bmatrix} .

We have the diagonal form:

\hat{A} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix} ,\qquad
\hat{B} = Q^{-1}B = \begin{bmatrix} -2 \\ 3 \\ -1 \end{bmatrix} ,\qquad
\hat{C} = CQ = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} .

The result in this example can easily be obtained using MATLAB. Typing:

a=[0 1 -1;-6 -11 6;-6 -11 5];
[q,d]=eig(a)

yields

q =
    0.7071   -0.2182   -0.0921
    0.0000   -0.4364   -0.5523
    0.7071   -0.8729   -0.8285

d =
   -1.0000         0         0
         0   -2.0000         0
         0         0   -3.0000

where d is the diagonal matrix. The matrix q is different from Q, but their corresponding columns differ only by a constant factor. This is due to the nonuniqueness of eigenvectors and to the fact that MATLAB normalizes every column of q to have norm 1.

Example 2.4. Transform the following state space representation to Jordan form:

\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & 3 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u ,\qquad
y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x .

Solution. First, we get the eigenvalues of A:

|λI − A| = det \begin{bmatrix} \lambda & -1 & 0 \\ 0 & \lambda & -1 \\ -2 & -3 & \lambda \end{bmatrix} = 0 ,

that is, λ³ − 3λ − 2 = 0. We get λ₁,₂ = −1 and λ₃ = 2.

The eigenvector Q₁ corresponding to λ₁ = −1 is a nonzero solution of AQ₁ = −Q₁:

Q₁ = \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} .

The generalized eigenvector Q₂ corresponding to λ₂ = −1 satisfies λ₁Q₂ − AQ₂ = −Q₁, i.e., (A + I)Q₂ = Q₁:

Q₂ = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} .

Last, the eigenvector Q₃ corresponding to λ₃ = 2 satisfies λ₃Q₃ = AQ₃:

Q₃ = \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} ,\qquad
T = [Q₁ Q₂ Q₃] = \begin{bmatrix} 1 & 1 & 1 \\ -1 & 0 & 2 \\ 1 & -1 & 4 \end{bmatrix} .

Its inverse can be calculated as:

T^{-1} = \frac{1}{9} \begin{bmatrix} 2 & -5 & 2 \\ 6 & 3 & -3 \\ 1 & 2 & 1 \end{bmatrix} .

Then we can calculate the matrices we want:

J = T^{-1}AT = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \end{bmatrix} ,\qquad
T^{-1}B = \begin{bmatrix} 2/9 \\ -1/3 \\ 1/9 \end{bmatrix} ,\qquad
CT = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} .


Using MATLAB, typing:

A=[0 1 0;0 0 1;2 3 0];
[v,J] = jordan(A)

yields

v =
    0.1111    0.6667    0.8889
    0.2222   -0.6667   -0.2222
    0.4444    0.6667   -0.4444

J =
     2     0     0
     0    -1     1
     0     0    -1

jordan(A) computes the Jordan canonical (normal) form of the matrix A. The columns of v are the (generalized) eigenvectors, and J is the Jordan canonical form.

Exercise

2.1. Judge whether the following vectors are linearly dependent or not.

(1) \begin{bmatrix} -1 \\ 3 \\ 1 \end{bmatrix} ,\ \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} ,\ \begin{bmatrix} 1 \\ 4 \\ 1 \end{bmatrix} .
(2) \begin{bmatrix} 2 \\ 3 \\ 0 \end{bmatrix} ,\ \begin{bmatrix} -1 \\ 4 \\ 0 \end{bmatrix} ,\ \begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix} .

2.2. Find the characteristic polynomials of the following matrices.

(1) \begin{bmatrix} \lambda_1 & 1 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 \\ 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & \lambda_2 \end{bmatrix} .
(2) \begin{bmatrix} \lambda_1 & 1 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 \\ 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & \lambda_1 \end{bmatrix} .
(3) \begin{bmatrix} \lambda_1 & 1 & 0 & 0 \\ 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & \lambda_1 \end{bmatrix} .
(4) \begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 \\ 0 & 0 & 0 & \lambda_1 \end{bmatrix} .

2.3. Find eigenvectors of the following matrices.

(1) A = \begin{bmatrix} -2 & 1 \\ -1 & -2 \end{bmatrix} .
(2) A = \begin{bmatrix} 0 & 1 \\ -6 & -5 \end{bmatrix} .
(3) A = \begin{bmatrix} 0 & 1 & 0 \\ 3 & 0 & 2 \\ -12 & -7 & -6 \end{bmatrix} .
(4) A = \begin{bmatrix} 1 & 2 & -1 \\ -1 & 0 & -1 \\ 4 & 4 & 5 \end{bmatrix} .

2.4. Find Jordan form representations of the following matrices:

A_1 = \begin{bmatrix} 1 & 4 & 10 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} ,\quad
A_2 = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -2 & -4 & -3 \end{bmatrix} ,\quad
A_3 = \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix} ,\quad
A_4 = \begin{bmatrix} 0 & 4 & 3 \\ 0 & 20 & 16 \\ 0 & -25 & -20 \end{bmatrix} .

2.5. Find Jordan form representations of the following state space equations:

(1)
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\begin{bmatrix} -2 & 1 \\ 1 & -2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} +
\begin{bmatrix} 0 \\ 1 \end{bmatrix} u ,\qquad
y = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} .

(2)
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} =
\begin{bmatrix} 4 & 1 & -2 \\ 1 & 0 & 2 \\ 1 & -1 & 3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} +
\begin{bmatrix} 3 & 1 \\ 2 & 7 \\ 5 & 3 \end{bmatrix} u ,\qquad
y = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} .
3 Solution of State Space Model 3.1 Introduction In Chapter 2, it was shown that linear systems can be described by state space equations. This chapter will discuss the solution of the state space equation for LTI systems. Different methods to solve the state transition matrix for both continuous systems and discrete systems are discussed in detail.

3.2 Solution of LTI State Equations Consider the LTI state space equation: ̇ = Ax(t) + Bu(t) , x(t)

(3.1)

y(t) = Cx(t) + Du(t) ,

(3.2)

where A and B are, respectively, n × n and n × p constant matrices. The problem is to find the solution excited by the initial state x(t)|_{t=0} = x(0) and the input u(t). The solution hinges on the exponential function of A. In particular, the property

d/dt e^{At} = A e^{At} = e^{At} A   (3.3)

is necessary to develop the solution. Rewrite the equation as:

ẋ(t) − Ax(t) = Bu(t) .

(3.4)

Premultiplying e^{−At} on both sides of (3.4) yields:

e^{−At}[ẋ(t) − Ax(t)] = e^{−At}Bu(t) ,

(3.5)

which implies that

d/dt (e^{−At}x(t)) = e^{−At}Bu(t) .

Its integration from 0 to t yields:

e^{−Aτ}x(τ) |_{τ=0}^{τ=t} = ∫₀ᵗ e^{−Aτ}Bu(τ) dτ .

Thus, we have:

e^{−At}x(t) − e⁰x(0) = ∫₀ᵗ e^{−Aτ}Bu(τ) dτ .   (3.6)
48 | 3 Solution of State Space Model Because the inverse of e−At is e At and e0 = I, equation (3.6) implies: t At

x(t) = e x(0) + ∫ e A(t−τ) Bu(τ)dτ ,

(3.7)

0

or, equivalently: t

x(t) = Φ(t)x(0) + ∫ Φ(t − τ)Bu(τ)dτ ,

(3.8)

0

where Φ(t) = e^{At} is called the state transition matrix. Equation (3.7), or (3.8), is the solution of (3.1). To verify that (3.7) is the solution of (3.1), it is necessary to show that (3.7) satisfies (3.1) and the initial condition x(t) = x(0) at t = 0. Indeed, at t = 0, (3.7) reduces to

x(0) = e^{A⋅0}x(0) = e⁰x(0) = Ix(0) = x(0) .

Thus, (3.6) satisfies the initial condition. The equation t

t

t0

t0

∂ ∂ 󵄨 ∫ f(t, τ)dτ = ∫ ( f(t, τ)) dτ + f(t, τ)󵄨󵄨󵄨 τ=t ∂t ∂t

(3.9)

is needed to show that (3.7) satisfies (3.1). Differentiating (3.7) and using (3.9) obtains: t

̇ = x(t)

d [ At e x(0) + ∫ e A(t−τ) Bu(τ)dτ] dt 0 ] [ t

󵄨 = Ae x(0) + ∫ Ae A(t−τ) Bu(τ)dτ + e A(t−τ) Bu(τ)󵄨󵄨󵄨τ=t At

0 t

= A (e At x(0) + ∫ e A(t−τ) Bu(τ)dτ) + e A⋅0 Bu(t) . 0

Substituting (3.7) into the above equation results in: ̇ = Ax(t) + Bu(t) . x(t) Thus, (3.7) meets (3.1) and the initial condition x(0), and is the solution of (3.1). Substituting (3.7) into (3.2) yields the solution of (3.2) as: t

y(t) = Ce At x(0) + C ∫ e A(t−τ) Bu(τ)dτ + Du(t) . 0

(3.10)

3.2 Solution of LTI State Equations |

49

This solution and (3.7) are computed directly in the time domain. It is also convenient to compute the solutions by using the Laplace transform. Applying the Laplace transform to (3.1) yields: sX(s) − x(0) = AX(s) + BU(s) (sI − A)X(s) = x(0) + BU(s) . Premultiplying (sI − A)−1 to both sides of the above equation yields: x(s) = (sI − A)−1 x(0) + (sI − A)−1 BU(s) . Notice that

(3.11)

(sI − A)−1 = ℓ[Φ(t)] U(s) = ℓ[u(t)] .

From the property of Laplace transform function, the second term of the right side of (3.11) can be expressed as: t

(sI − A) BU(s) = ℓ [∫ Φ(t − τ)Bu(τ)] dτ . [0 ] −1

(3.12)

Substituting (3.12) into (3.11) and applying the inverse Laplace transform yields: t

x(t) = Φ(t)x(0) + ∫ Φ(t − τ)Bu(τ)dτ

(3.13)

0

Equation (3.13) is just the time domain solution of (3.1). Example 3.1. Find the solution of the following system excited by the unit step function: 0 1 0 ẋ = [ ]x +[ ]u . −2 −3 1 Solution. 2e−t − e−2t Φ(t) = e At = (sI − A)−1 = [ −2e−t + 2e−2t

e−t − e−2t ] . −e−t + 2e−2t

Substituting B = [0 1]T and u(t) = 1(t) into equation (3.8) yields: 2e−t − e−2t x(t) = [ −2e−t + 2e−2t

t

x1 (0) e−(t−τ) − e−2(t−τ) e−t − e−2t ][ ] dτ ] + ∫ [ −(t−τ) −t −2t x2 (0) −e + 2e −e + 2e−2(t−τ) 0

(2e−t − e−2t )x1 (0) + (e−t − e−2t )x2 (0)

1 − e−t + 12 e−2t ] + [ 2 −t ] =[ −t −2t −t −2t e − e−2t (−2e + 2e )x1 (0) + (−e + 2e )x2 (0) 1 + [2x1 (0) + x2 (0) − 1] e −t − [x1 (0) + x2 (0) − 12 ] e−2t ] . = [2 − [2x1 (0) + x2 (0) − 1] e−t − [2x1 (0) + 2x2 (0) − 1] e−2t

50 | 3 Solution of State Space Model If the initial condition is zero, i.e., x(0) = 0, then the response of the system depends only on the excitation of the control action: [

1 − e−t + 12 e−2t x1 (t) ] . ] = [ 2 −t x2 (t) e − e−2t

In the following, the solution (3.8) of the state space equation is given for three special control signals (assuming, for (2) and (3), that A is nonsingular).
(1) The impulse response: when u(t) = Kδ(t) and x(0⁻) = x₀,
    x(t) = e^{At}x₀ + e^{At}BK ,
where K is a constant vector with the same dimension as u(t).
(2) The step response: when u(t) = K · 1(t) and x(0⁻) = x₀,
    x(t) = e^{At}x₀ + A⁻¹(e^{At} − I)BK .
(3) The ramp response: when u(t) = Kt · 1(t) and x(0⁻) = x₀,
    x(t) = e^{At}x₀ + [A⁻²(e^{At} − I) − A⁻¹t]BK .
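The zero-state step response obtained in Example 3.1 can be cross-checked numerically. The sketch below is only an illustration; the time grid and the use of lsim are choices made here, not part of the text:

```matlab
% Cross-check of the zero-state unit step response of Example 3.1
A = [0 1; -2 -3];  B = [0; 1];  C = eye(2);  D = zeros(2,1);
t = (0:0.01:5).';                         % arbitrary time grid
u = ones(numel(t),1);                     % unit step input
x_num = lsim(ss(A,B,C,D), u, t);          % numerical state response, x(0) = 0
x1 = 0.5 - exp(-t) + 0.5*exp(-2*t);       % analytic x1(t) from the example
x2 = exp(-t) - exp(-2*t);                 % analytic x2(t) from the example
max(abs(x_num - [x1 x2]))                 % should be close to zero
```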

3.3 State Transfer Matrix 3.3.1 Properties Consider the LTI state space equation: ̇ = Ax(t) + Bu(t) , x(t)

(3.14)

where x(t) ∈ R n , u(t) ∈ R r , A ∈ R n×n , B ∈ R n×r . The solution of the system is: x(t) = e At x0

or

x(t) = e A(t−t 0) x0 ,

which reflects a vector transition relation from the initial state vector x0 to the state x(t) of any t > 0, or t > t0 , where e At is called a transfer matrix. It is not a constant matrix – it is a n × n time varying function matrix because the elements of the matrix are general functions of t. This means that it makes the state vector change constantly in the state space, so Φ(t) = e At is also called state transition matrix. Φ(t) = e At is the transition matrix from x(0) to x(t), while Φ(t − t0 ) = e A(t−t 0) is the transition ̇ = Ax(t) + Bu(t) can also be matrix from x(t0 ) to x(t). Therefore, the solution of x(t) expressed as: x(t) = Φ(t)x(0) , or as: x(t) = Φ(t − t0 )x(t0 ) .

Fig. 3.1: State transition trajectory.

Its geometric meaning, taking two-dimensional state vector for example, can be represented as in Figure 3.1. From Figure 3.1, we know x(0) = [x10 x20 ]T when t = 0. If we consider this as an initial condition and Φ(t1 ) is known, when t = t1 , the state will be: x(t1 ) = [

x11 ] = Φ(t1 )x(0) . x21

(3.15)

If Φ(t2 ) is known, when t = t2 , the state will be: x(t2 ) = [

x12 ] = Φ(t2 )x(0) . x22

(3.16)

That is to say, the state x(0) will transfer to the state x(t1 ) or x(t2 ) according to Φ(t1 ) or Φ(t2 ). If we take t = t1 as initial time, the state x(t1 ) is the initial state and the state transited from t1 to t2 will be: x(t2 ) = Φ(t2 − t1 )x(t1 ) .

(3.17)

Substituting x(t1 ) of equation (3.15) into the above equation can result in: x(t2 ) = Φ(t2 − t1 )Φ(t1 )x(0) .

(3.18)

Equation (3.18) shows the transformation of the state x from x(0) to x(t1 ), and then to x(t2 ). Comparing equation (3.16) and (3.18), it is clear that: Φ(t2 − t1 )Φ(t1 ) = Φ(t2 ) , or that: e A(t 2−t 1 ) e At 1 = e At 2 . Such a relation is called the combination property.

(3.19)

52 | 3 Solution of State Space Model From the above, for any given initial state vector, x(t0 ) can be transited to x(t) at any t using state transition matrix. In other words, matrix differential equations can be solved in an arbitrary time period. This is another advantage of state space representation on a dynamic system. Property 1 {

Φ(t)Φ(τ) = Φ(t + τ)   or   e^{At} e^{Aτ} = e^{A(t+τ)}

(3.20)

This is a combination property, which means a combination of transition from −τ to 0 and transition from 0 to t. That is to say, Φ(t − 0)Φ [0 − (−τ)] = Φ [t − (−τ)] = Φ(t + τ) . Property 2 {

Φ(t − t) = I e A(t−t) = I

(3.21)

This property means that when the state vector transit from time instant t to t, the state vector is invariable. Property 3 {

[Φ(t)]−1 = Φ(−t) or

[e At ]−1 = e−At

(3.22)

This property shows that the inverse of the transition matrix means the reversion of time. If x(t) is known, we can have x(t0 ) at time t while t0 < t. Property 4 For the transition matrix: { { or {

̇ Φ(t) = AΦ(t) = Φ(t)A d At e = Ae At = e At ⋅ A dt

(3.23)

This property shows that Φ(t), i.e., e^{At}, commutes with the matrix A.

Property 5
For n × n square matrices A and B, e^{At}e^{Bt} = e^{(A+B)t} holds if and only if AB = BA; if AB ≠ BA, then e^{At}e^{Bt} ≠ e^{(A+B)t}. This property shows that, unless A and B commute, the product of their matrix exponential functions is not equal to the matrix exponential of the sum of A and B.
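For any particular A these properties are easy to confirm numerically. The following sketch (with an arbitrarily chosen A and arbitrary time instants) checks Properties 1 and 3 using MATLAB's expm:

```matlab
% Numerical check of Properties 1 and 3 of the state transition matrix
A  = [0 1; -2 -3];                              % arbitrary example matrix
t1 = 0.7;  t2 = 1.3;                            % arbitrary time instants
P1 = expm(A*t1)*expm(A*t2) - expm(A*(t1+t2));   % Property 1: should be ~0
P3 = inv(expm(A*t1)) - expm(-A*t1);             % Property 3: should be ~0
disp([norm(P1) norm(P3)])
```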

3.3 State Transfer Matrix

| 53

Below are some special matrix exponential functions. (1) If A is diagonal matrix, [ [ A=Λ=[ [ [

0

λ1 λ2 ..

.

[0 then e At

[ [ = Φ(t) = [ [ [

] ] ] , ] ]

λn ]

e λ1 t

0 e λ2 t ..

.

] ] ] . ] ]

(3.24)

eλn t ]

[ 0

(2) If A can be diagonalized through nonsingular transformation, T −1 AT = Λ , then e At

[ [ = Φ(t) = T [ [ [

e λ1 t

0 e λ2 t ..

.

] ] −1 ]T . ] ]

(3.25)

eλn t ]

[ 0 (3) If A is Jordan matrix, λ [ [ [ [ A=J=[ [ [ [ [ [0

1 λ

1 [ [0 [ [ e Jt = Φ(t) = e λt [ [ [ [0 [0

t 1

then

1 λ

1 λ

1 2 2! t

0 0

t

0 0

1 λ

... ... .. . ... ...

0 ] ] ] ] ] , ] ] ] 1] λ]

(3.26)

1 n−1 (n−1)! t 1 n−2 ] ] (n−2)! t ]

t 1

] ]. ] ] ]

(3.27)

]

(4) If σ A=[ −ω

ω ] , σ

then cos ωt e At = Φ(t) = [ − sin ωt

sin ωt ]. cos ωt

(3.28)

54 | 3 Solution of State Space Model

3.3.2 Calculating the State Transition Matrix For calculating the matrix e At , many methods have been proposed. Here are the four most popular ones. Method 1: Expansion of e At This method takes place entirely in the time domain and is based on the expansion of e At in a power series. Namely, it is based on the definition of e At or Φ(t): e At = I + At +

∞ 1 A2 t2 A3 t3 + + ⋅ ⋅ ⋅ = ∑ Ak tk . 2! 3! k! k=0

0 Example 3.2. Compute the matrix e At , where A = [ −2

(3.29)

1 ]. −3

Solution. e

At

1 =[ 0

0 0 ]+[ 1 −2

1 0 ]t+[ −3 −2

1 − t2 + t3 + . . . =[ −2t + 3t2 − 73 t3 + . . . [

2

3

t2 1 0 ] +[ −3 2! −2

t3 1 ] +... −3 3!

t − 32 t2 − 76 t3 + . . . 1 − 3t + 72 t2 − 52 t3 + . . .

] ]

Method 2: Diagonal Form This method takes place entirely in the time domain and is based on the diagonalization of the matrix A. Indeed, if the eigenvalues of matrix A are distinct, then A can be transformed to a diagonal matrix Λ via the transformation matrix T as follows: Λ = T −1 AT. Matrix Φ(t), under the transformation T, becomes: Φ(t) = e At = Te Λt T −1 . Since:

e

Λt

[ [ =[ [ [

e λ1 t

0 e λ2 t ..

.

[ [ Λ=[ [ [

eλn t ]

[ 0 it follows that Φ(t) = e At

] ] ] , ] ]

[ [ = T[ [ [

0 λ2 ..

.

[0

e λ1 t

[ 0

λ1

λn ] 0

e λ2 t ..

.

] ] ] , ] ]

] ] −1 ]T . ] ]

eλn t ]

3.3 State Transfer Matrix

Example 3.3. A = [ Solution.

0 −2

|

55

1 ] is known. Now compute the matrix e At . −3

󵄨󵄨 󵄨λ |λI − A| = 󵄨󵄨󵄨󵄨 󵄨󵄨2

󵄨 −1 󵄨󵄨󵄨 󵄨󵄨 = λ2 + 3λ + 2 = (λ + 1) (λ + 2) = 0 , λ + 3󵄨󵄨󵄨

so, λ1 = −1, λ2 = −2. We can, therefore, have the corresponding transfer matrix according to equation (3.25). 1 2 1 1 2 ] . T=[ ] and T −1 = [ −1 −1 −2 −2 Therefore, 1 e−t ][ −2 0

2 e At = [ −2

2e−t − e−2t =[ −2e−t + 2e−2t

0

1

][ e−2t −1

1 2

] −1

e−t − e−2t ] . −e−t + 2e−2t

If the matrix A has repeated eigenvalues, then A can be transformed to a Jordan matrix Λ via the transformation matrix T: J = T −1 AT , e At = Te Jt T −1 . 0 [ Example 3.4. Compute the matrix e At , where A = [0 [2 Solution.

󵄨󵄨 󵄨󵄨 λ 󵄨󵄨 |λI − A| = 󵄨󵄨󵄨 0 󵄨󵄨 󵄨󵄨−2 󵄨

−1 λ 5

0 ] 1]. 4]

󵄨 0 󵄨󵄨󵄨 󵄨󵄨 −1 󵄨󵄨󵄨 = (λ − 1)2 (λ − 2) = 0 , 󵄨󵄨 λ − 4󵄨󵄨󵄨

so, λ1 = λ2 = 1 ,

λ3 = 2 .

According to (3.26), 1 [ J = [0 [0 et [ e = [0 [0 Jt

1 0 −5

1 1 0 te t et 0

0 ] 0] , 2] 0 ] 0] . e2t ]

56 | 3 Solution of State Space Model

Since

−1 0 1

1 [ T = [1 [1

1 ] 2] ; 4]

T

−1

−2 [ = [−2 [1

5 3 −2

−2 ] −1] . 1]

Then,

e

At

1 [ = [1 [1 et [ t = [e t [e

−1 0 1

1 et ][ 2] [ 0 4] [ 0

te t − e t te t t te + e t

te t et 0

0 −2 ][ 0 ] [−2 e2t ] [ 1

−2 e2t ][ 2t 2e ] [−2 4e2t ] [ 1

−2te t + e2t [ = [ 2(e2t − te t − e t ) t t 2t [−2te − 4e + 4e

5 3 −2

−2 ] −1] 1]

5 3 −2

−2 ] −1] 1]

3te t + 2e t − e2t 3te t + 5e t − 4e2t 3te t + 8e t − 8e2t

−te t − e t + e2t ] −te t − 2e t + 2e2t ] −te t − 3e t + 4e2t ]

Method 3: The Inverse Laplace Transformation The inverse Laplace transformation method can be expressed as: e At = Φ(t) = ℓ−1 {(sI − A)−1 } .

(3.30)

Proof. Consider the differential equation: ̇ = Ax(t) , x(t) with the initial state x(0) = x0 . Apply the Laplace transform on both sides of the equation: sX(s) − x(0) = AX(s) . In other words: (sI − A)X(s) = x(0) = x 0 , and, therefore: X(s) = (sI − A)−1 x0 . Apply the inverse Laplace transform on both sides to get the solution of the differential equation: x(t) = ℓ−1 {(sI − A)−1 } x0 . Comparing the above equation with equation (3.15) can result in: e At = Φ(t) = ℓ−1 {(sI − A)−1 } .

3.3 State Transfer Matrix

0 Example 3.5. Compute the matrix e At , where A = [ −2

| 57

1 ]. −3

Solution. s sI − A = [ 2 (sI − A)−1 =

−1 ] s+3

1 1 s+3 adj(sI − A) = [ |sI − A| (s + 1)(s + 2) −2 s+3 (s+1)(s+2)

=[ −2 [ (s+1)(s+2) 2 1 − s+2 = [ s+1 2 −2 s+1 + s+2

1 ] s

1 (s+1)(s+2) ] s (s+1)(s+2) ] 1 s+1 −1 s+1

− +

1 s+2 ] 2 s+2

Therefore, 2e−t − e−2t e At = ℓ−1 {(sI − A)−1 } = [ −2e−t + 2e−2t

e−t − e−2t ] . −e−t + 2e−2t

Method 4: Cayley–Hamilton Theorem (1) A square matrix A satisfies the characteristic equation of itself according to the Cayley–Hamilton theorem: f(A) = A n + a n−1 A n−1 + ⋅ ⋅ ⋅ + a1 A + a0 I = 0 , thus, A n = −a n−1 A n−1 − a n−2 A n−2 − ⋅ ⋅ ⋅ − a1 A − a0 I , which is the linear combination of A n−1 , A n−2 , . . . , A, I. In the same way, A n+1 = A ⋅ A n = −a n−1 A n − (a n−2 A n−1 + a n−3 A n−2 + ⋅ ⋅ ⋅ + a1 A2 + a0 A) = −a n−1 (−a n−1 A n−1 − a n−2 A n−2 − ⋅ ⋅ ⋅ − a1 A − a0 I) − (a n−2 A n−1 + a n−3 A n−2 + ⋅ ⋅ ⋅ + a1 A2 + a0 A) = (a2n−1 − a n−2 ) A n−1 + (a n−1 a n−2 − a n−3 ) A n−2 + . . . + (a n−1 a1 − a0 ) A + a n−1 a0 I . That is to say, A n , A n+1 . . . can all be expressed with A n−1 , A n−2 , . . . , A, I. (2) In the definition equation (3.29) of e At , we can eliminate the terms of A with power equal to and above n, applying the method in (1). In other words: e At = I + At +

1 2 2 1 1 1 A t + ⋅⋅⋅ + A n−1 t n−1 + A n t n + A n+1 t n+1 + . . . 2! (n − 1)! n! (n + 1)!

= a n−1 (t)A n−1 + a n−2 (t)A n−2 + ⋅ ⋅ ⋅ + a1 (t)A + a0 (t)I .

(3.31)

58 | 3 Solution of State Space Model

Example 3.6. Compute a i (t) in the expression of e At , where A = [

0 −2

1 ]. −3

Solution. The characteristic equation of A: 󵄨󵄨 󵄨λ |λI − A| = 󵄨󵄨󵄨󵄨 󵄨󵄨2

󵄨 −1 󵄨󵄨󵄨 󵄨󵄨 = λ2 + 3λ + 2 = 0 . λ + 3󵄨󵄨󵄨

According to the Cayley–Hamilton theorem, A2 + 3A + 2I = 0 . Thus: A2 = −3A − 2I . While

A3 = A ⋅ A2 = A(−3A − 2I) = −3A2 − 2A = −3(−3A − 2I) − 2A = 7A − 6I , A4 = A ⋅ A2 = 7A2 + 6A = 7(−3A − 2I) + 6A = −15A − 14I . ...

Substituting the equations above into the following equation, we can eliminate the terms of A with power equal to and above two: 1 2 2 1 3 3 1 4 4 A t + A t + A t +... 2! 3! 4! 14 4 3 2 7 3 15 4 t + . . . ) A + (1 − t2 + t3 − t +...)I = (t − t + t − 2! 3! 4! 4! = a1 (t)A + a0 (t)I .

e At = I + At +

Therefore,

3 2 7 3 15 4 t + t − t +... 2! 3! 4! 14 4 t +... . a0 (t) = 1 − t2 + t3 − 4!

a1 (t) = t −

(3) When the eigenvalues of A are all distinct, we have: a0 (t) 1 [ a (t) ] [ ] [1 [ 1 [ . ]=[ [ . ] [ [ . ] 1 [a n−1 (t)] [

λ1 λ2 λn

λ21 λ22 ... λ2n

... ... ... ...

λ1n−1 λ2n−1 ] ] ] ] n−1 λn ]

−1

e λ1 t [e λ2 t ] [ ] [ . ] . [ . ] [ . ] λ t [e n ]

(3.32)

3.3 State Transfer Matrix

|

59

Proof. Matrix A satisfies the characteristic equation of itself and, therefore, eigenvalues λ and A are exchangeable. Thus, λ satisfies equation (3.31): a0 (t) + a1 (t)λ1 + ⋅ ⋅ ⋅ + a n−1 (t)λ1n−1 = e λ 1 t } } } } a0 (t) + a1 (t)λ2 + ⋅ ⋅ ⋅ + a n−1 (t)λ2n−1 = e λ 2 t } } } } .. } } } . } } } n−1 λn t a0 (t) + a1 (t)λ n + ⋅ ⋅ ⋅ + a n−1 (t)λ n = e } Solve the above equation for [a0 (t) a1 (t) . . . a n−1 (t)]T , and we can obtain (3.32). When eigenvalues of A are all λ1 , we have: 0 [ a0 (t) 0 [ ] [ [ a1 (t) ] [ .. [ ] [ . [ .. ] [ [ . ]=[ ] [ [ 0 [ [a (t)] [ [ n−2 ] [ [0 [a n−1 (t)] [1

0 0 .. . 0

0 0 .. . 1

... ...

0 1

...

1



...

(n − 1)λ1n−2

λ1

λ21

...

λ1n−1

1 ] (n − 1)λ1 ] ] .. ] ] . ] (n−1)(n−2) n−3 ] λ1 ] 2! ] (n − 1)λ1n−2 ] ] n−1 λ1 ]

−1

1 n−1 e λ 1 t (n−1)! t [ 1 n−2 λ 1 t ] [ (n−2)! t e ] [ ] [ ] .. [ ] . [ ] . [ 1 2 λ1 t ] [ 2! t e ] [ ] [ ] te λ 1 t

[

e λ1 t

] (3.33)

Proof. a0 (t) + a1 (t)λ1 + a2 (t)λ21 ⋅ ⋅ ⋅ + a n−1 (t)λ1n−1 = e λ 1 t . Differentiating both sides of the equation, we have: a1 (t) + 2a2 (t)λ1 ⋅ ⋅ ⋅ + (n − 1)a n−1 (t)λ1n−2 = te λ 1 t . Differentiating both sides of the equation again, we obtain: 2a2 (t) + 6a3 (t) + ⋅ ⋅ ⋅ + (n − 1)(n − 2)a n−1 (t)λ1n−3 = t2 e λ 1 t . Repeat the former step and, finally, we have: (n − 1)!a n−1 (t) = t n−1 e λ 1 t . The equation (3.33) can be reached from the above n equations, by solving a i (t). 0 Example 3.7. Compute e At , where A = [ −2

1 ]. −3

Solution. We know the eigenvalues of the matrix A is λ1 = −1, λ2 = −2. According to equation (3.33), we have: [

a0 1 ]=[ a1 1 =[

2 1

−1

λ1 ] λ2

[

e λ1 t 1 ]=[ 1 e λ2 t

−1 ] −2

−1

[

e−t ] e−2t

2e−t − e−2t −1 e−t ] . ] [ −2t ] = [ −t −1 e e − e−2t

60 | 3 Solution of State Space Model

Therefore, e At = a0 (t)I + a1 (t)A = (2e−t − e−2t ) [

1 0

2e−t − e−2t =[ −2e−t + 2e−2t

0 0 ] + (e−t − e−2t ) [ 1 −2

1 ] −3

e−t − e−2t ] . −e−t + 2e−2t

0 [ Example 3.8. Compute the matrix e At , where A = [0 [2

1 0 −5

0 ] 1]. 4]

Solution. The eigenvalues of the matrix A are λ1 = λ2 = 1, λ3 = 2. The part of λ1 = λ2 = 1 can be calculated based on equation (3.33), while the part of λ3 = 2 can be calculated based on equation (3.32): a0 0 [ ] [ [ a 1 ] = [1 [ a 2 ] [1 0 [ = [1 [1

1 λ1 λ3 1 1 2

2λ1 ] λ21 ] λ23 ] −1

2 ] 1] 4]

−1

te λ 1 t [ λ1 t ] [e ] λ3 t [e ]

te t −2 [ t] [ = [e ] [3 2t [e ] [−1

1 te t ][ t ] −2] [ e ] . 1 ] [e2t ]

0 2 −1

Therefore, e

At

1 [ = (−2te + e ) [0 [0 t

2t

0 1 0

0 0 ] t t 2t [ 0] + (3te + 2e − 2e ) [0 1] [2

0 [ + (−te t − e t + e2t ) [2 [8 −2te t + e2t [ = [ 2(e2t − te t − e t ) t t 2t [−2te − 4e + 4e

0 −5 −18

1 0 −5

0 ] 1] 4]

1 ] 4] 11]

3te t + 2e t − 2e2t 3te t + 5e t − 4e2t 3te t + 8e t − 8e2t

−te t − e t + e2t ] −te t − 2e t + 2e2t ] . −te t − 3e t + 4e2t ]

Example 3.9. Consider the state equation: −2 ̇x = [ [1 [0 y = [1

0 0 −2 −1

0 1 ] [ ] 1 ] x + [0] u −2] [1] 0] x .

Suppose the input is a step function of various magnitudes. First we use MATLAB to find its unit step response. We type:

a=[-2 0 0;1 0 1;0 -2 -2];
b=[1;0;1];
c=[0 -2 -2];
d=0;
[y,x,t]=step(a,b,c,d);
plot(t,y,t,x)

The system response is shown in Figure 3.2.

Fig. 3.2: State variables of system step response.

3.4 Discretization Consider the continuous time state space equation: ̇ = Ax(t) + Bu(t) . x(t)

(3.34)

If this continuous time equation is to be computed on a digital computer, it must be discretized, because: x(t + T) − x(t) ẋ = lim . T→0 T When the sample period T is quite small, about 1/10 of the minimum time constant of the system, (3.34) can be approximated as: x[(k + 1)T] = (TA + 1)x(kT) + TBu(kT) .

(3.35)

62 | 3 Solution of State Space Model

Proof. According to the definition of derivative, we have: ̇ 0 ) = lim x(t

∆t→0

x(t0 + ∆t) − x(t0 ) . ∆t

(3.36)

If we compute x(t) and y(t) from t0 = kT to t = (k + 1)T for k = 0, 1, . . . , then (3.36) becomes: x[(k + 1)T] − x(kT) x[(k + 1)T] − x(kT) ̇ x(kT) = lim ≈ . (3.37) T→0 T T Substituting (3.37) into (3.34) yields: x[(k + 1)T] − x(kT) = Ax(kT) + Bu(kT) , T which can be rearranged as (3.35), thus providing proof. This is a discrete time state space equation and can easily be computed on a digital computer. This discretization is the easiest to carry out but yields the least accurate results. The following is an alternative discretization method. If an input u(t) is generated by a digital computer followed by a digital to analog converter, then u(t) will be piecewise constant. This situation often arises in computer control of control systems. Let u(t) = u(kT) = constant

kT ≤ t < (k + 1)T

for

for k = 0, 1, 2, . . . . This input changes value only at discrete time instants. For this input, the solution of the state equation in (3.34) still equals (3.7). Computing (3.7) at t = kT and t = (k + 1)T yields: (k+1)T

x[(k + 1)T] = e AT x(kT) +

∫ e A[(k+1)T−τ] Bdτu(kT) .

(3.38)

kT

Let t = (k +1)T − τ, then dτ = −dt. So the lower integral τ = kT becomes t = T; and the higher integral τ = (k+)T becomes t = 0. Thus, equation (3.38) can be simplified as: T

x[(k + 1)T] = e

AT

x(kT) + ∫ e At dtBu(kT) , 0

which equals T

x(k + 1) = e

AT

x(k) + ∫ e At dtBu(k) .

(3.39)

0

This is a discrete time state space equation. Note that there is no approximation involved in this derivation. It is the exact solution of (3.34) at t = kT, if the input is piecewise constant.
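For a specific A, B and sampling period T, the matrices in (3.39) can be evaluated numerically. The sketch below is one possible way to do this; the data and the quadrature-based evaluation of the integral are illustrative choices only:

```matlab
% Exact discretization of x_dot = A x + B u for a piecewise-constant input
A = [0 1; 0 -2];  B = [0; 1];  T = 0.1;        % example data
G = expm(A*T);                                 % G(T) = e^{AT}
H = integral(@(t) expm(A*t)*B, 0, T, ...       % H(T) = (int_0^T e^{At} dt) B
             'ArrayValued', true);
% The MATLAB call used later in the text, [ad,bd]=c2d(a,b,T), should give
% the same G and H.
```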

3.4 Discretization

| 63

Rewrite (3.39) as: x(k + 1) = G(T)x(k) + H(T)u(k) , where

(3.40)

T

G(T) = e

AT

H(T) = ∫ e At dt ⋅ B .

,

(3.41)

0

The MATLAB function [ad,bd]=c2d(a,b,T) transforms the continuous state equation in (3.34) into the discrete time state equation in (3.41). Example 3.10. Try to discretize the following state equation: ẋ = [

0 0

1 0 ] x(t) + [ ] u(t) . −2 1

Solution. Step 1. Equation (3.41) yields the exact result: s e At = ℓ−1 [(sI − A)−1 ] = ℓ−1 {[ 0

−1

−1 1 ] }=[ s+2 0

− e−2T ) ] . e−2T

1 2 (1

Thus, we have: − e−2T ) ] e−2T

1 2 (1

1 G(T) = [ 0 T

T

H = ∫ e dt ⋅ B = ∫ [ At

0

=[

1 0

0 1 2

T 0

− e−2t ) 0 ] dt ⋅ [ ] −2t e 1

1 2 (1

(T + 12 e−2T − 12 ) 0 ][ ] 1 − 12 e−2T + 12

−2T

1 (T + e 2 −1 ) ] . = [21 −2T ) 2 (1 − e

According to equation (3.40), the discretized state equation is: x(k + 1) = [

1 0

−2T

1 (T + e 2 −1 ) − e−2T ) ] u(k) . ] x(k) + [ 2 1 −2T e (1 − e−2T )

1 2 (1

2

Step 2. Equation (3.35) yields the approximate result: TA + I = [

0 0

T 1 ]+[ −2T 0

0 1 ]=[ 1 0

T ] 1 − 2T

0 H = TB = [ ] . T According to equation (3.45), the discretized state equation is: 1 x[(k + 1)T] = [ 0

T 0 ] x(kT) + [ ] u(kT) , 1 − 2T T

64 | 3 Solution of State Space Model

which can be rewritten as: 1 x(k) = [ 0

T 0 ] x(k) + [ ] u(k) . 1 − 2T T

Then we use MATLAB to finish this task. We type: a=[0 1;0 -2]; b=[0;1]; T=0.1; [ad,bd]=c2d(a,b,T) Yield ad = 1.0000 0

0.0906 0.8187

bd = 0.0047 0.0906 We can get: x(k) = [

1 0

0.09 0.0047 ] x(k) + [ ] u(k) . 0.8 0.09

3.5 Solution of Discrete Time Equation Method 1: Recursive Method Consider the discrete time state space equation: x(k + 1) = G(T)x(k) + H(T)u(k) 󵄨 x(k)󵄨󵄨󵄨 k=0 = x(0) .

(3.42)

The solution of the first order matrix differential equation is: k−1

x(k) = G k x(0) + ∑ G k−j−1 Hu(j) .

(3.43a)

j=0

Or: k−1

x(k) = G k x(0) + ∑ G j Hu(k − j − 1) ,

(3.43b)

j=0

which equals: x(k) = G k x(0) + G k−1 Hu(0) + G k−2 Hu(1) + ⋅ ⋅ ⋅ + GHu(k − 2) + Hu(k − 1) .

(3.43c)

3.5 Solution of Discrete Time Equation

| 65

Proof. Solve the matrix differential equation (3.42) by iterative method: For i = 0 ,

x(1) = Gx(0) + Hu(0) .

For i = 1 ,

x(2) = Gx(1) + Hu(1) = G2 x(0) + GHu(1) + Hu(1) .

For i = 2 ,

x(3) = Gx(2) + Hu(2) = G3 x(0) + G2 Hu(0) + GHu(1) + Hu(2) .

.. . For i = k − 1 ,

x(k) = Gx(k − 1) + Hu(k − 1) = G k x(0) + G k−1 Hu(0) + . . . + GHu(k − 2) + Hu(k − 1) .

The last general formula is just equation (3.43c). Equation (3.43c) can be expressed in matrix form as: G H x(1) ] [ 2] [ [ [ GH [x(2)] [G ] ] [ ] [ 2 [ [ [x(3)] [G3 ] ] = [ ] x(0) [ G H [ [ . [ . ] [ . ] [ .. [ .. ] [ .. ] ] [ ] [ [ k k−1 x(k) G ] [ ] [G H [

0 H GH .. . G k−2 H

0 0 H .. .

... ... ...

G k−3 H

...

0 ] 0] ] 0] ] . .. ] .] ] H]

(3.43d)
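The recursion in (3.42)–(3.43) translates directly into a loop. The sketch below uses the data of Example 3.11 further on, purely as an illustration:

```matlab
% Iterative solution of x(k+1) = G x(k) + H u(k)
G = [0 1; -0.16 -1];  H = [1; 1];     % example data (as in Example 3.11)
x = [1; -1];                          % initial state x(0)
N = 10;                               % number of steps (arbitrary)
X = zeros(2, N+1);  X(:,1) = x;
for k = 1:N
    u = 1;                            % unit step input u(k) = 1
    x = G*x + H*u;
    X(:,k+1) = x;
end
disp(X(:,1:4))                        % x(0), x(1), x(2), x(3)
```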

Solution (3.43) is derived from the initial time instant k = 0. If we start from the time k = h, the corresponding initial state is x(h). Then the solution becomes: k−1

x(k) = G k−h x(0) + ∑ G k−j−1 Hu(j) ,

(3.44a)

j=h

or: k−1

x(k) = G k−h x(0) + ∑ G j Hu(k − j − 1).

(3.44b)

j=h

Obviously, the solution of the discrete time state space equation is similar to that of the continuous time state space equation. It consists of two parts of responses; the response excited by the initial state and the response excited by the input signal. Furthermore, the solution of the discrete time state space equation is a discrete track in state space. Besides, in the response excited by the input, the state x(k) is only related with the sample values of input before time instant k. Similarly, we define Φ(k) = G k

or

Φ(k − h) = G k−h

(3.45)

as the state transition matrix of the discrete time system. Obviously, Φ(k + 1) = GΦ(k) ;

Φ(0) = I

(3.46)

66 | 3 Solution of State Space Model

and the following properties hold: Φ(k − h) = Φ(k − h1 )Φ(h1 − h)

k > h1 ≥ h

for

(3.47)

−1

Φ (k) = Φ(−k) .

(3.48)

Using the state transition matrix Φ(k), the solution (3.43) can be expressed as: k−1

x(k) = Φ(k)x(0) + ∑ Φ(k − j − 1)Hu(j) ,

(3.49a)

j=0

or as: k−1

x(k) = Φ(k)x(0) + ∑ Φ(k − j − 1)Hu(j).

(3.49b)

j=0

Thus, equation (3.44) can be written as: k−1

x(k) = Φ(k − h)x(h) + ∑ Φ(k − j − 1)Hu(j)

(3.50a)

j=h k−1

x(k) = Φ(k − h)x(0) + ∑ Φ(j)Hu(k − j − 1) .

(3.50b)

j=h

Example 3.11. The state equation of a discrete time system is: x(k + 1) = Gx(k) + Hu(k) 0 G=[ −0.16

1 ] , −1

1 H=[ ] , 1

with initial state x(0) = [1 − 1]T and control action u(k) = 1. Try to solve Φ(k), x(k). Solution. As defined. 0 Φ(k) = G = [ −0.16 k

k

1 ] . −1

For simplicity, we transform the original equation into Jordan canonical form, i.e., transform G into diagonal form. Let x(k) = T ̃x (k). Thus, the original equation becomes: ̃x(k + 1) = T −1 GT ̃x (k) + T −1 Hu(k) . Again, let T −1 GT = Λ ;

̃ Φ(k) = (T −1 GT)k = Λ k ,

so k−1

−1 ̃ ̃x (0) + ∑ Φ(j)T ̃ ̃x(k) = Φ(k) Hu(k − j − 1) j=0

󵄨󵄨 󵄨 −1 󵄨󵄨󵄨 󵄨 λ 󵄨󵄨 = (λ + 0.2)(λ + 0.8) = 0 |λI − G| = 󵄨󵄨󵄨󵄨 󵄨󵄨0.16 λ + 1󵄨󵄨󵄨 λ1 = −0.2 ; λ2 = −0.8 .

(3.51)

3.5 Solution of Discrete Time Equation

| 67

Therefore, −0.2 Λ=[ 0

0 ]; −0.8

k

(−0.2)k 0 ] =[ 0 −0.8

−0.2 ̃ Φ(k) =[ 0

0 ] , (−0.8)k

thus T=[

1 −0.2

4 3

1 ] , −0.8

5 3

T −1 = [ −1 [ 3

− 53

] . ]

Now, it is easy to derive 1 −1 ̃ =[ Φ(k) = T Φ(k)T −0.2 =

4

1 (−0.2)k ][ 0 −0.8

4(−0.2)k − (−0.8)k 1 [ 3 −0.8 [(−0.2)k − (−0.8)k ]

5

0 3 3 ] ][ (−0.8)k − 31 − 35 ] [ k k 5 [(−0.2) − (−0.8) ] ] . −(−0.2)k + 4(−0.8)k

Now compute x̃(k) according to equation (3.51). The first term on the right side is: (−0.2)k −1 ̃ ̃x (0) = Φ(k)T ̃ x(0) = [ Φ(k) 0

4

0 3 ][ (−0.8)k −1 [ 3

5 3

k ] [ 1 ] = 1 [−(−0.2) ] . 3 4(−0.8)k −1 − 35 ]

The second term on the right side is: k−1

k−1

4

−1 ̃ ̃ [ 3 ∑ Φ(j)T Hu(k − j − 1) = ∑ Φ(j) −1 j=0 j=0 [ 3 k−1

= ∑[ j=0

=[

(−0.2)j 0

5 3

] [1] [1] 1

− 35 ]

k−1 0 3 3(−0.2)j ] [ ∑[ ] ] = j −2(−0.8)j (−0.8) −2 j=0

3 [1 + (−0.2) + (−0.2)2 + ⋅ ⋅ ⋅ + (−0.2)k−1 ]

] −2 [1 + (−0.8) + (−0.8)2 + ⋅ ⋅ ⋅ + (−0.8)k−1 ] 3[1−(−0.2)k ] 1.2

] . =[ −2[1−(−0.8)k ] ] [ 1.8 Thus: 1 [1 − (−0.2)k ] 1 −(−0.2)k ] [ 0.4 ] + [ 1 3 4(−0.8)k [1 − (−0.8)k ] − 0.9 ] [ k + 5 (−0.2) − 17 6 2] . =[ 22 k − 10 (−0.8) 9 9 ] [

x̃(k) =

68 | 3 Solution of State Space Model

Therefore: 17 5 k 1 [− 6 (−0.2) + 2 ] ] 22 −0.8 (−0.8)k − 10 9 ] [ 9 17 22 25 k k − 6 (−0.2) + 9 (−0.8) + 18 ] . =[ 7 3.4 (−0.2)k − 17.6 (−0.8)k + 18 9 6 [ ]

1 x(k) = T ̃x (k) = [ −0.2

Method 2: z Transform Method For the LTI discrete system state equation, we can find its solution by using the z transform method. Consider the discrete time state space equation: x(k + 1) = G(T)x(k) + H(T)u(k) . Applying z transform to the above equation yields: zx(z) − zx(0) = Gx(z) + Hu(z) , or: (zI − G)x(z) = zx(0) + Hu(z) . Thus, x(z) = (zI − G)−1 zx(0) + (zI − G)−1 Hu(z) . Take inverse z transform: x(k) = ℓ−1 [(zI − G)−1 zx(0)] + ℓ−1 [(zI − G)−1 Hu(z)] .

(3.52)

Comparing (3.43) with (3.52) yields: G k x(0) = ℓ−1 [(zI − G)−1 zx(0)]

(3.53)

k−1

∑ G k−j−1 Hu(j) = ℓ−1 [(zI − G)−1 Hu(z)] .

(3.54)

j=0

Using the solution of the continuous state equation, we get: t

x(t) = Φ(t − kT)x(kT) + ∫ Φ(t − τ)Bu(kT)dτ . kT

Suppose that t = (k + ∆)T (0 ≤ ∆ ≤ 1) – the above equation becomes: ∆T

x [(k + ∆)T] = Φ(∆T)x(kT) + ∫ Φ(∆T − τ)dτBu(kT) . 0

(3.55)

| 69

3.5 Solution of Discrete Time Equation

Comparing (3.43) with (3.52) yields: G k = Φ(k) = ℓ−1 [(zI − G)−1 z]

(3.56)

k−1

∑ G k−j−1 Hu(j) = ℓ−1 [(zI − G)−1 Hu(z)] .

(3.57)

j=0

Proof. First we compute the z transform of G k : ∞

ℓ[G k ] = ∑ G k z−k = I + Gz−1 + G2 z−2 + . . . .

(3.58)

k=0

Then we premultiply Gz−1 to both sides of (3.58): Gz−1 ℓ[G k ] = Gz−1 + G2 z−12 + G3 z−3 + . . . .

(3.59)

Subtract (3.58) from (3.59): (I − Gz−1 )ℓ[G k ] = I , because

−1

ℓ[G k ] = (I − Gz−1 )

= (zI − G)−1 z .

(3.60)

Take z inverse transform of equation (3.60), and we can get equation (3.56). Next we use convolution formula to prove equation (3.57): k−1

ℓ [ ∑ G k−j−1 Hu(j)] = ℓ[G k−1 ]Hℓ[u(k)] [ j=0 ] = ℓ[G k ]z−1 Hℓ[u(k)] = (zI − G)−1 Hu(z) . Take z inverse transform of the above equation, then equation (3.57) is derived. k−1

∑ G k−j−1 Hu(j) = ℓ−1 [(zI − G)−1 Hu(z)] .

j=0

Example 3.12. Consider the state equation in Example 3.11; try to find Φ(k) and x(k) using the z transform method. Solution. As u(k) = 1, we have: u(z) =

z . z−1

According to equation (3.56), Φ(k) = ℓ−1 [(zI − G)−1 z] z = ℓ−1 {[ 0.16

−1

−1 ] z+1

z} = ℓ−1 {

z z+1 [ (z + 0.2)(z + 0.8) −0.16

4 −1 −5 5 + z+0.8 { z z+0.2 z+0.2 + z+0.8 ]} = ℓ−1 { [ } −1 3 −0.8 + 0.8 + 4 { [ z+0.2 z+0.8 z+0.2 z+0.8 ]} 1 4(−0.2)k − (−0.8)k 5(−0.2)k − 5(−0.8)k = [ ] . k k 3 −0.8(−0.2) + (−0.8) −(−0.2)k + 4(−0.8)k

1 ]} z

70 | 3 Solution of State Space Model

Now compute: z2

z

z z−1 z−1 zx(0) + Hu(z) = [ ] + [ ] = [ 2 ] . z −z +2z −z [ z−1 ] [ z−1 ] Thus, x(z) = (zI − G)−1 [zx(0) + Hu(z)] (z2 +2)z

−(17/6)z (25/18)z + (22/9)z z+2 z+0.8 + z−1 ] (3.4/6)z (−17.6/9)z (7/18)z + + z+0.8 z−1 ] [ z+2

(z+2)(z+0.8)(z−1) ] =[ =[ (−z2 +1.84z)z

[ (z+2)(z+0.8)(z−1) ]

.

hence, 22 25 k k − 17 6 (−0.2) + 9 (−0.8) + 18 ] . x(k) = ℓ−1 [x(z)] = [ 7 3.4 (−0.2)k − 17.6 (−0.8)k + 18 6 9 ] [

3.6 Summary The solution to both the continuous time and discrete time state space equation has been studied in this chapter. The state transfer matrix is a very important parameter matrix, which plays a big role in the solution of a state space equation. Different algorithms are discussed to obtain the state transfer matrix.

Exercise 3.1. Consider the matrix A: 0 A = (0 2

1 0 −5

0 1) . 4

Use the Laplace transform to find e−At . 3.2. Use three different methods to find e−At . (1)

0 A=( 4

−1 ) . 0

(2)

A=(

1 4

1 ) . 1

3.3. Examine the following matrix to see whether they meet the conditions of state transition matrix. If they do, try to find out the corresponding matrix A. (1)

1 Φ(t) = (0 0

0 sin t − cos t

(2)

1 Φ(t) = ( 0

1 2 (1

0 cos t ) . sin t

− e−2t ) ) . e−2t

Exercise

(3)

2e−t − e−2t Φ(t) = ( −t e − e−2t

(4)

Φ(t) = (

1 −t 2 (e

2e−t − 2e−2t ) . 2e−t − e−2t

− e−3t )

(−e−t + e3t )

− 41 (e−t + 2e3t ) ) . 1 −t 3t ) (e + e 2

3.4. Solve the state space model: 1 ẋ = ( 0

0 1 )x+( )u 0 1

y = (1

0) x .

The initial state is x(0) = (1 1)T . The input u(t) is a unit step response. 3.5. Calculate Φ(t, 0) and Φ−1 (t, 0). (1)

2t A=( 0

0 ) . 1

(2)

0 A = ( −t −e

e −t ) . 0

3.6. The discrete time system is listed below. Try to calculate x(k). 1

[

x1 (k + 1) 3 ]=( 1 x2 (k + 1) 9

x1 (0) = −1 ,

1 9 1 3

)[

1 x1 (k) ]+( x2 (k) 0

0 u 1 (k) )[ ] 1 u 2 (k)

x2 (0) = 4 .

u 1 (k) is sampled from a ramp function t and u 2 (k) is sampled from e−t .

| 71

4 Stability Analysis 4.1 Introduction Stability is an important property for a system because, only when a system is stable can it finish the target task. In this chapter, a group of conceptions of stability in the sense of Lyapunov are given at the beginning, which are somewhat different from the definitions of stability given in classical control theory. Following that, the theorems to decide whether a system is stable or not are introduced.

4.2 Definition The response of linear systems can always be decomposed as the zero state response and the zero input response. The stabilities of these two responses are commonly studied separately. The bounded input bounded output (BIBO) stability is for the zero state response, while marginal and asymptotic stabilities are for the zero input response. Definition 4.1 (External Stability). An input u(t) is said to be bounded if u(t) does not grow to positive infinity or negative infinity. Equivalently, constants β 1 and β 2 exist, and u(t) ≤ β 1 < ∞ holds for all t ≥ 0 . (4.1) A system is said to be BIBO stable if every bounded input excites a bounded output. For example: (4.2) y(t) ≤ β 2 < ∞ holds for all t ≥ 0 . This stability is defined for the zero state response. Conclusion 4.1 (BIBO Stability of Linear Time Variant System). Consider a continuous linear time variant (LTV) system with p inputs, m outputs and zero initial condition. If we define [t0 , ∞] as the time domain, the system is BIBO stable at time t0 if, and only if, there exists a limited positive number β, which satisfies the following relationship: t

󵄨 󵄨 ∫ 󵄨󵄨󵄨h ij (t, τ)󵄨󵄨󵄨 dτ ≤ β < ∞ ,

(4.3)

t0

where h ij (t, τ) ,

i = 1, 2, . . . , m ,

j = 1, 2, . . . , p

are elements of the impulse response matrix H(t, τ) at any t (t ∈ [t0 , ∞]).


(4.4)

4.2 Definition

| 73

Proof. The proof is divided into two parts. Step 1: The system is a SISO system. That is, it is p = m = 1. First, if h ij (t, τ) is absolutely integrable, then every bounded input excites a bounded output. Suppose u(t) is an arbitrary input with u(t) ≤ β 1 < ∞ for all t ≥ 0. This will lead to the output being bounded as follows: 󵄨󵄨 󵄨󵄨 t t 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 |y(t)| = 󵄨󵄨󵄨∫ h(t, τ)u(τ)dτ󵄨󵄨󵄨 ≤ ∫ |h(t, τ)| |u(τ)| dτ 󵄨󵄨 󵄨󵄨 󵄨󵄨 t 0 󵄨󵄨t 0 t

≤ β 1 ∫ |h(t, τ)| dτ ≤ β 1 β = β 2 < ∞

(4.5)

t0

Second, it can be seen that if h ij (t, τ) is not absolutely integrable, the system is not BIBO stable. If h ij (t, τ) is not absolutely integrable, for any absolutely large N, there exists a t1 ∈ [t0 , ∞] such that: t1

∫ |h(t1 , τ)| dτ ≥ N . t0

Let us use the following example to demonstrate: +1 , h(t1 , t) > 0 { { { u(t) = sgn h(t1 , t) = {0 , h(t1 , t) = 0 { { {−1 , h(t1 , t) < 0 . It is very clear that u is bounded. The output excited by the input is: t1

t1

y(t1 ) = ∫ h(t, τ)u(τ)dτ = ∫ |h(t1 , τ)| dτ = ∞ . t0

t0

Because y(t1 ) can be absolutely large, we conclude that a similar bounded input can excite an unbounded output, which is contrary to the definition external stability. Therefore, the assumption does not hold, and we have: t

󵄨 󵄨 ∫ 󵄨󵄨󵄨h ij (t, τ)󵄨󵄨󵄨 dτ ≤ β < ∞ ,

∀t ∈ [t0 , ∞] .

t0

Step 2: The system is a MIMO system. Note that any element y i (t) of output y(t) is 󵄨󵄨 t 󵄨󵄨 󵄨󵄨󵄨 󵄨󵄨󵄨 󵄨 |y i (t)| = 󵄨󵄨󵄨∫ [h i1 (t, τ)u 1 (τ) + ⋅ ⋅ ⋅ + h ip (t, τ)u p (τ)] dτ󵄨󵄨󵄨󵄨 , 󵄨󵄨 󵄨󵄨 󵄨󵄨t 0 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 t 󵄨󵄨 t 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 󵄨󵄨 󵄨󵄨 ≤ 󵄨󵄨󵄨∫ h i1 (t, τ)u 1 (τ)dτ󵄨󵄨󵄨 + ⋅ ⋅ ⋅ + 󵄨󵄨󵄨∫ h ip (t, τ)u p (τ)dτ󵄨󵄨󵄨󵄨 , i = 1, 2, . . . , m . 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨t 0 󵄨󵄨t 0

74 | 4 Stability Analysis

The sum of a finite number of bounded functions remains bounded. Therefore, we can have the conclusion based on the condition of SISO. Proof has been provided. Conclusion 4.2 (BIBO Stability of LTI System). If we define the initial time as t0 = 0 in a continuous LTI system with p inputs, m outputs and zero initial condition, the system is BIBO stable. However, this would only apply if, and only if, there exists a limited positive number of β, which satisfies the following relationship: ∞

󵄨 󵄨 ∫ 󵄨󵄨󵄨h ij (t)󵄨󵄨󵄨 dt ≤ β < ∞ , 0

where h ij (t), i = 1, 2, . . . , m, j = 1, 2, . . . p are elements of the impulse response matrix H(t). Conclusion 4.3 (BIBO Stability of LTI System). If we define the initial time as t0 = 0 in a continuous time LTI system with p inputs, m outputs and zero initial condition, the system with proper rational transfer function matrix G(s) is BIBO stable. However, this would only apply if, and only if, every pole of G(s) has a negative real part or every pole of G(s) lies in the left half s-plane. Proof. The characteristic polynomial of G(s) is α G (s). The pole of G(s) is s l (l = 1, 2, . . . , m), which are the roots of α G (s) = 0. Therefore, any rational fraction of G(s) is g ij (s) (i = 1, 2, . . . , q, j = 1, 2, . . . p). Its expansion contains the partial fractions: βl , l = 1, 2, . . . , m , α l r = 1, 2, . . . , σ l , (s − s l )α lr where β l is zero or a nonzero constant, and the pole s l with multiplicity σ l . Thus, the inverse Laplace transform of g ij (s) is: ρ lr t α lr −1 e s l t ,

l = 1, 2, . . . , m .

If β l /(s − s l )α1lr = β l , the corresponding inverse Laplace transform is the impulse function δ. Therefore, the element h ij (t) of the impulse response matrix H(t), derived from the inverse Laplace transform of element transfer function g ij (s), is the sum of the finite terms as ρ l r t α lr −1 e slt . It may contain the function δ. It is straightforward to verify that every ρ l r t α lr −1 e slt (∀i = 1, 2, . . . , q, ∀j = 1, 2, . . . , p) is absolutely integrable if and only if the pole s l (l = 1, 2, . . . , m) has a negative real part, i.e., h ij (t) (∀i = 1, 2, . . . , q, ∀j = 1, 2, . . . , p) is absolutely integrable. Therefore, according to Conclusion 4.2, the system is BIBO stable. The BIBO stability is defined for the zero state response. Now we study the stability of the zero input response, or the response of: ̇ = Ax(t) , x(t) which is excited by nonzero initial state x0 . Clearly, the solution is: x(t) = e At x0 .

4.2 Definition

| 75

̇ Definition 4.2 (Internal Stability). The zero input response of equation x(t) = Ax(t) is marginally stable, or stable in the sense of Lyapunov, if every finite internal state x0 excites a bounded response. It is asymptotically stable if every finite initial state excites a bounded response, which also approaches 0 as t → ∞. Conclusion 4.4 (Internal Stability of LTV System). The zero input response of equation ̇ = Ax(t) is internal stable or marginally stable if every finite internal state x0 excites x(t) a bounded state transition matrix ϕ(t, t0 ), which also approaches 0 as t → ∞. Proof. If x(t0 ) = x0 at time t0 , the zero input response is: x0u = ϕ(t, t0 )x0 , ∀t ∈ [t0 , ∞] . It is straightforward to verify that x0u is bounded, only in instances where ϕ(t, t0 ) is bounded and limt→∞ x0u (t) = 0, and when limt→∞ ϕ(t, t0 ) = 0. Conclusion 4.5 (Internal Stability of LTI System). The zero input response of equation ̇ x(t) = Ax(t) + Bu with the initial state x(0) = x0 is internal stable or marginal stable only in instances where limt→∞ e At = 0. Proof. For a LTI system, the state transfer matrix ϕ(t) = e At and e At is bounded for any t > 0. Following this, we can obtain Conclusion 4.5 from Conclusion 4.4. Conclusion 4.6 (Internal Stability of LTI System). The zero input response of equation ̇ = Ax(t) + Bu(t) with the initial state x(0) = x0 is internal stable or marginal stable x(t) in instances where every eigenvalue λ i (A) (i = 1, 2, . . . , n) has a negative real part. In other words: Re {λ i (A)} < 0 , i = 1, 2, . . . , n . Conclusion 4.7 (The Relationship Between Internal Stability and External Stability). Consider the continuous LTI system: ẋ = Ax + Bu ,

x(0) = x0 ,

t≥0,

y = Cx + Du , where x is an n-dimensional state vector, u is a p-dimensional input vector, and y is a m-dimensional output vector. If the above system is internal stable or marginal stable, it must be BIBO stable or external stable. Proof. For the above LTI system, from the analysis of the system dynamics we know that the impulse response matrix H(t) is: H(t) = Ce At B + Dδ(t) . From Conclusion 4.5 we know that if the system is internal stable, e At is bounded and limx→∞ e At = 0. With the above contents, we can get all elements of the impulse re-

76 | 4 Stability Analysis sponse matrix H(t), where h ij (t) (i = 1, 2, . . . , m, j = 1, 2, . . . , p) satisfies the following relationship: ∞

󵄨 󵄨 ∫ 󵄨󵄨󵄨h ij (t)󵄨󵄨󵄨 dt ≤ β < ∞ . 0

The system is BIBO stable according to Conclusion 4.2. Conclusion 4.8 (The Relationship Between External Stability and Internal Stability). Consider the continuous LTI system: ẋ = Ax + Bu ,

x(0) = x0 ,

t≥0,

y = Cx + Du . BIBO stability or external stability cannot guarantee internal stability or marginal stability. Proof. When some poles and zeros are the same, the order of transfer function for a system is lower than that of state space description, i.e., the number of poles is less than the number of eigenvalues. The system is BIBO stable. In other words, every pole of G(s) has a negative real part and cannot guarantee that the eigenvalues of the system have negative real parts. Therefore, BIBO stability cannot guarantee the internal stability of the system. Conclusion 4.9 (The Equivalence Between External Stability and Internal Stability). Consider the continuous LTI system: ẋ = Ax + Bu ,

x(0) = x0 ,

t≥0,

y = Cx + Du . Without the zero pole cancelation, the system is internal stable if, and only if, the system is external stable. Proof. From Conclusion 4.7, we know that internal stability means external stability of the system. If the system has no zero pole cancelation, external stability means internal stability of the system according to the proof of Conclusion 4.8. Therefore, external stability is equivalent to internal stability of a system if the system has no zero pole cancelation. Definition 4.3 (Autonomous System). A dynamic system without external or input excitation is defined as an autonomous system. Generally, the state equation of a continuous nonlinear time variant (NTV) autonomous system can be described as follows: ẋ = f(x, t) ,

x(t0 ) = x0 ,

t ∈ [t0 , ∞]

(4.6)

where x is n-dimensional state vector, f(x, t) is n-dimensional vector function. For a continuous nonlinear time invariant (NTI) system, state equation can be written as ẋ = f(x).

4.2 Definition

| 77

For a continuous LTV system, the vector function f(x, t) of equation (4.6) can be further described as a linear vector function of state x. The state equation of autonomous system can be rewritten as ẋ = A(t)x ,

x(t0 ) = x0 ,

t ∈ [t0 , ∞]

(4.7)

And the state equation of a continuous LTI autonomous can be written as ẋ = Ax. Definition 4.4 (Equilibrium State). For a continuous NTV system, the equilibrium state of the autonomous system (4.7) is xe , which satisfies the following equation: ẋ e = f(x, t) = 0 ,

∀t ∈ [t0 , ∞] .

(4.8)

Below are some notes about the equilibrium state. (a) Intuitive meaning of the equilibrium state: Equilibrium state xe is a class of state which always satisfies ẋ e = 0. (b) The form of the equilibrium state: The equilibrium state xe can be solved from equation (4.8). For a two-dimensional autonomous system, the form of xe can be points or a line in the state space. (c) Nonuniqueness: The equilibrium state xe of an autonomous system is not always unique. For a continuous LTI system, the equilibrium state xe is the solution of Axe = 0. If the matrix A is nonsingular, we have a unique solution of xe = 0. If the matrix A is singular, the solution is not unique. (d) Zero equilibrium state: For the autonomous systems (4.6) or (4.7), xe = 0 must be an equilibrium state for the system. (e) Isolated equilibrium states: The isolated equilibrium states are in the form of isolated equilibrium point in the state space. An important feature of the isolated equilibrium states is that they can be transferred to state space origin by moving coordinates. (f) Agreement on the equilibrium states: In the direct method of Lyapunov, the stability analysis is mainly aimed at the equilibrium states. Therefore, we always set the state space origin as the equilibrium states, i.e., xe = 0 in the following sections of the stability analysis. Definition 4.5 (Disturbed Dynamics). The disturbed dynamics of a dynamic system is a class of state dynamics caused by the initial state x0 . In nature, the disturbed dynamics is the state response of zero input. We call it disturbed dynamics because a nonzero initial state x0 will be regarded as a state disturbance relative to the zero equilibrium state xe = 0 in stability analysis. Usually, for a more clear description of the relationship of time and causality in the disturbed dynamics, we further represent the disturbance dynamics in the following form: x0u (t) = ϕ(t; x0 , t0 ) , t ∈ [t0 , ∞] ,

78 | 4 Stability Analysis where ϕ is a vector function. When t = t0 , the vector function of the disturbed dynamics satisfies ϕ(t0 ; x0 , t0 ) = x0 . In the sense of geometry, the disturbed dynamics ϕ(t; x0 , t0 ) presents a trajectory from the initial state x0 in the state space. We can constitute a trajectory cluster of disturbed dynamics ϕ(t; x0 , t0 ) according to different initial states. Definition 4.6 (the Stability in the Sense of Lyapunov). The isolated equilibrium state xe = 0 of the autonomous system is considered to be stable in the sense of Lyapunov at the time instant t if, for any real number ε > 0, there exists a corresponding real number δ(ε, t0 ) > 0. When: ‖x0 − xe ‖ ≤ δ (ε, t0 ) ,

(4.9)

the disturbed dynamics ϕ(t; x0 , t0 ) from the initial x0 satisfies the following inequality: 󵄩 󵄩󵄩 (4.10) 󵄩󵄩ϕ(t; x0 , t0 ) − xe 󵄩󵄩󵄩 ≤ ε , ∀t ≥ t0 . Listed below are notes regarding the stability in the sense of Lyapunov. (a) Geometric significance of stability There is direct geometric significance about the stability in the sense of Lyapunov. Hence, inequality (4.10) can be considered as a supersphere in the state space, whose core is xe and the radius is ε. Its field can be represented with S(ε). Inequality (4.9) can be considered as a supersphere, whose core is xe and the radius is δ(ε, t0 ) in the state space. Its field can be represented with S(δ), which is a function of ε and t0 . The geometric explanation of stability in the sense of Lyapunov is that the dynamics trajectories ϕ(t; x0 , t0 ) starting from any initial state within the field S(δ) will never exceed the boundary H(ε) of the filed S(ε), as shown in Figure 4.1.

Fig. 4.1: The stability in the sense of Lyapunov.


(b) Uniform stability in the sense of Lyapunov: If, in the above definition, a real number δ(ε) > 0 exists that does not depend on the initial time t0, i.e., ‖x0 − xe‖ ≤ δ(ε) implies ‖ϕ(t; x0, t0) − xe‖ ≤ ε for all t ≥ t0, then the equilibrium state xe is called uniformly stable in the sense of Lyapunov. In general, for time variant systems, uniform stability is of more practical significance than stability: it means that if the system is stable in the sense of Lyapunov at one initial time t0, it is stable in the sense of Lyapunov at every admissible initial time t0.
(c) The stability properties of time invariant systems: For a time invariant system, whether it is linear or nonlinear, time continuous or time discrete, stability in the sense of Lyapunov is equivalent to uniform stability. In other words, if the equilibrium state xe of a time invariant system is stable in the sense of Lyapunov, it is also uniformly stable in the sense of Lyapunov.
(d) The nature of stability in the sense of Lyapunov: The definition shows that stability in the sense of Lyapunov only guarantees the boundedness of the disturbed dynamics; it does not guarantee convergence to the equilibrium state. Therefore, stability in the sense of Lyapunov does not necessarily mean stability in the sense used in industrial processes.
Definition 4.7 (the Asymptotic Stability). The isolated equilibrium state xe = 0 of the autonomous system is said to be asymptotically stable if the following conditions hold:
(i) xe = 0 is stable in the sense of Lyapunov at time t0.
(ii) For the real number δ(ε, t0) > 0 and any real number μ > 0, there exists a corresponding real number T(μ, δ, t0) > 0 such that the disturbed dynamics ϕ(t; x0, t0), starting from any initial state x0 satisfying inequality (4.9), satisfies:
‖ϕ(t; x0, t0) − xe‖ ≤ μ ,  ∀t ≥ t0 + T(μ, δ, t0) .  (4.11)

We have provided the following points based on the definition of asymptotic stability.
(a) Geometric significance of asymptotic stability: Take a two-dimensional system for example. The geometric meaning of asymptotic stability is shown in Figures 4.2 and 4.3. The response from an initial state x0 in the sphere S(δ) does not leave the sphere S(ε) (as shown in Figure 4.2) and eventually enters and remains in the sphere S(μ) as time goes on (as shown in Figure 4.3).


Fig. 4.2: The asymptotic stability.

Fig. 4.3: The asymptotic stability.

(b) The equivalent definition of asymptotic stability: According to the definition of asymptotic stability, if we let μ → 0, then T(μ, δ, t0) → ∞. Therefore, an equivalent definition of asymptotic stability can be introduced, which reflects the asymptotic character of the stable process in a more intuitive form. The isolated equilibrium state xe = 0 of an autonomous system (4.7) is asymptotically stable at time t0 when two conditions hold: the disturbed dynamics ϕ(t; x0, t0) starting from any initial state x0 ∈ S(δ) is bounded relative to the equilibrium state xe = 0 for all t ∈ [t0, ∞); and the disturbed dynamics converges to the equilibrium state xe = 0, that is, limt→∞ ϕ(t; x0, t0) = 0, ∀x0 ∈ S(δ).


(c) Uniform asymptotic stability: In the definition of asymptotic stability, if δ(ε) does not depend on t0 and the other conditions hold, the equilibrium state xe is uniformly asymptotically stable. As before, for time variant systems, uniform asymptotic stability is more meaningful than asymptotic stability.
(d) The properties of asymptotic stability for time invariant systems: For time invariant systems, whether the system is linear or nonlinear, time continuous or time discrete, asymptotic stability is equivalent to uniform asymptotic stability of the equilibrium state xe.
(e) Large scale and small scale asymptotic stability: Small scale asymptotic stability is also known as local asymptotic stability. It is defined as follows:
there exists a supersphere S(δ) around xe = 0 such that xe is asymptotically stable for all initial states 0 ≠ x0 ∈ S(δ) ,  (4.12)
where S(δ) is the attraction domain, i.e., all the states within S(δ) are attracted to the equilibrium state xe. Large scale asymptotic stability is also known as global asymptotic stability. It is defined as:
xe = 0 is asymptotically stable for all initial states 0 ≠ x0 ∈ R^n .  (4.13)

(f) The necessary condition of large scale asymptotic stability: From definition (4.13), a necessary condition for the equilibrium state xe = 0 to be large scale asymptotically stable is that there is no other asymptotically stable equilibrium state in the state space R^n.
(g) The properties of asymptotic stability for linear systems: For linear systems, whether time invariant or time variant, time continuous or time discrete, if the equilibrium state xe = 0 is asymptotically stable, it is large scale asymptotically stable.
(h) Asymptotic stability in the sense of Lyapunov is equivalent to stability in the engineering sense.
Definition 4.8 (the Instability). The isolated equilibrium state xe = 0 of the autonomous system is said to be unstable if there exists a real number ε > 0 (which may be arbitrarily large) for which no corresponding real number δ(ε, t0) > 0 exists such that every disturbed dynamics ϕ(t; x0, t0), starting from an initial state x0 with ‖x0 − xe‖ ≤ δ(ε, t0), satisfies the inequality:
‖ϕ(t; x0, t0) − xe‖ ≤ ε ,  ∀t ≥ t0 .


Take a two-dimensional system for example. The geometric meaning of instability is shown in Figure 4.4. If the equilibrium state xe = 0 is unstable, then no matter how small S(δ) is chosen, there exists a nonzero point x0* ∈ S(δ) whose disturbed dynamics trajectory eventually leaves the region S(ε). In essence, instability in the sense of Lyapunov corresponds to divergent behavior in the engineering sense.

Fig. 4.4: Instability.

4.3 Stability Criteria
4.3.1 Lyapunov's Second Method
Lyapunov's second method is based on an intuitive physical idea: the dynamic process of a system is accompanied by changes of energy. If the change rate of the system energy remains negative, i.e., the energy decreases monotonically, the disturbed dynamics of the system will eventually return to the equilibrium state. Based on this fact, we present the following stability criteria.
Large Scale Asymptotic Stability Theorem
Consider an NTV autonomous system described by:
ẋ = f(x, t) ,  t ∈ [t0, ∞) ,  (4.14)

where x ∈ R n×1 and f(0, t) = 0 for all t ∈ [t0 , ∞), which means the origin of the state space is an isolated equilibrium state.


Theorem 4.1. The origin of (4.14) is large scale uniformly asymptotically stable if a scalar function V(x, t) exists that satisfies V(0, t) = 0, has continuous first order partial derivatives with respect to x and t, and fulfills the following conditions for all nonzero states in the state space R^n:
(i) V(x, t) is positive definite and bounded, i.e., two continuous nondecreasing scalar functions α(‖x‖) and β(‖x‖) (α(0) = 0, β(0) = 0) exist such that
β(‖x‖) ≥ V(x, t) ≥ α(‖x‖) > 0 ,  for all x ≠ 0 and t ∈ [t0, ∞) .  (4.15)
(ii) The derivative of V(x, t) with respect to t, V̇(x, t), is negative definite and bounded, i.e., a continuous nondecreasing scalar function γ(‖x‖) (γ(0) = 0) exists such that
V̇(x, t) ≤ −γ(‖x‖) < 0 ,  for all x ≠ 0 and t ∈ [t0, ∞) .  (4.16)

(iii) α(‖x‖) → ∞ when ‖x‖ → ∞; equivalently, V(x, t) → ∞.
Proof. Step 1: Prove that the origin equilibrium xe = 0 is uniformly stable.
From the above term (i), we know that β(‖x‖) is continuous nondecreasing and β(0) = 0. Thus, for any real number ε > 0, a real number δ(ε) > 0 must exist so that β(δ) ≤ α(ε). Besides, as V̇(x, t) is negative definite, we have:
V(ϕ(t; x0, t0), t) − V(x0, t0) = ∫_{t0}^{t} V̇(ϕ(τ; x0, t0), τ) dτ ≤ 0 .  (4.17)

Then for any initial time t0 and any nonzero initial state x0 with ‖x0‖ ≤ δ(ε), we have
α(ε) ≥ β(δ) ≥ V(x0, t0) ≥ V(ϕ(t; x0, t0), t) ≥ α(‖ϕ(t; x0, t0)‖) ,  for any t ∈ [t0, ∞) .  (4.18)
As α(‖x‖) is continuous nondecreasing and α(0) = 0, we can use the above equation to deduce that, for any initial time t0 and any nonzero initial state x0 with ‖x0‖ ≤ δ(ε), we have:
‖ϕ(t; x0, t0)‖ ≤ ε ,  ∀t ≥ t0 .  (4.19)
Therefore, for any real number ε > 0, we can find a δ(ε) > 0 which is independent of the initial time t0 and which makes the response ϕ(t; x0, t0), excited by any initial time t0 and any nonzero initial state x0 with ‖x0‖ ≤ δ(ε), satisfy equation (4.19). According to the definition, the origin equilibrium xe = 0 is uniformly stable.
Step 2: Prove that for any initial time t0, the dynamics ϕ(t; x0, t0) excited by any nonzero state x0 with ‖x0‖ ≤ δ(ε) converges to the origin equilibrium state xe = 0.
First, for any real number μ > 0 and the real number δ(ε) > 0 deduced above, we construct a real number T(μ, δ) > 0. Suppose the initial time t0 is arbitrary and the nonzero x0 satisfies ‖x0‖ ≤ δ(ε). Without loss of generality, we assume 0 < μ ≤ ‖x0‖.

Then, as V(x, t) is bounded, for the given μ > 0 we can find a corresponding real number v(μ) > 0 which makes β(v) ≤ α(μ). Besides, γ(‖x‖) is continuous and nondecreasing. Let ρ(μ, δ) be the minimum value of γ(‖x‖) in the interval v(μ) ≤ ‖x‖ ≤ ε. We can take:
T(μ, δ) = β(δ) / ρ(μ, δ) .  (4.20)
In accordance with this principle, for any given real number μ > 0, we can construct a corresponding T(μ, δ) which is independent of the initial time t0. Furthermore, we prove that ‖ϕ(t2; x0, t0)‖ = v(μ) for some time t2 with t0 ≤ t2 ≤ t0 + T(μ, δ). Suppose that t1 = t0 + T(μ, δ), and suppose that ‖ϕ(t; x0, t0)‖ > v(μ) for all t in the interval t0 ≤ t ≤ t1. Then, using equation (4.20) and the negative definite property of V̇(x, t), we can deduce that:
0 < α(v) ≤ V(ϕ(t1; x0, t0), t1) ≤ V(x0, t0) − (t1 − t0)ρ(μ, δ) ≤ β(δ) − T(μ, δ)ρ(μ, δ) = β(δ) − β(δ) = 0 .  (4.21)

Obviously, the above equation is a contradictory result, so the hypothesis does not hold. This means that a time t2 in the interval t0 ≤ t ≤ t1 must exist such that ‖ϕ(t2; x0, t0)‖ = v(μ).
Finally, we deduce that ‖ϕ(t; x0, t0)‖ ≤ μ for all t ≥ t0 + T(μ, δ). Considering ‖ϕ(t2; x0, t0)‖ = v(μ) and using the boundedness of V(x, t) and the negative definite property of V̇(x, t), for all t ≥ t2 we have:
α(‖ϕ(t; x0, t0)‖) ≤ V(ϕ(t; x0, t0), t) ≤ V(ϕ(t2; x0, t0), t2) ≤ β(v) ≤ α(μ) .  (4.22)

Thus, based on the fact that α(‖x‖) is a continuous nondecreasing function, we can deduce from equation (4.22) that for all t ≥ t2:
‖ϕ(t; x0, t0)‖ ≤ μ .  (4.23)
Besides, from t0 + T(μ, δ) ≥ t2 we know that (4.23) holds for all t ≥ t0 + T(μ, δ), and T → ∞ when μ → 0. As proven above, for any initial time t0, the dynamics excited by any nonzero initial state x0 with ‖x0‖ ≤ δ(ε) converges to the origin equilibrium state xe = 0 when t → ∞.
Step 3: Prove that for any nonzero initial state x0 in the state space R^n, the disturbed dynamics ϕ(t; x0, t0) is uniformly bounded.
As α(‖x‖) → ∞ when ‖x‖ → ∞, for any arbitrarily large real number δ > 0 there must exist a finite real number ε(δ) > 0 which makes β(δ) < α(ε). Using the boundedness of V(x, t) and the negative definite property of V̇(x, t), we know that for all t ∈ [t0, ∞) and any nonzero x0 ∈ R^n:
α(ε) > β(δ) ≥ V(x0, t0) ≥ V(ϕ(t; x0, t0), t) ≥ α(‖ϕ(t; x0, t0)‖) .  (4.24)


Therefore, considering that α(‖x‖) is a continuous nondecreasing function, we have:
‖ϕ(t; x0, t0)‖ ≤ ε(δ) ,  ∀t ≥ t0 ,  ∀x0 ∈ R^n ,  (4.25)
where ε(δ) is independent of the initial time t0. This indicates that, for any nonzero initial state x0 ∈ R^n, ϕ(t; x0, t0) is uniformly bounded. The proof is complete.
Notes on Theorem 4.1:
(a) Physical implication: In a physical sense, the positive definite bounded scalar function V(x, t) can be regarded as a kind of "generalized energy", and V̇(x, t) as the change rate of this generalized energy. Theorem 4.1 reflects the intuitive fact that, if the energy of the system is bounded and its change rate is negative definite, the system energy eventually decreases to zero; correspondingly, the dynamics of the system is bounded and eventually converges to the origin equilibrium.
(b) Lyapunov function: In Theorem 4.1, V(x, t) is not necessarily the physical energy; its meaning and form vary with the physical nature of the system. In the theory of system stability, a function V(x, t) that qualifies for the theorem is called a Lyapunov function. To judge the asymptotic stability of the system, we construct a Lyapunov function V(x, t) for it.
(c) The selection of the Lyapunov function: For a comparatively simple system, we usually select a quadratic function of the state x as the Lyapunov function. If this function does not satisfy the theorem, we can try a more complicated one. For a complex system, the construction of a Lyapunov function is difficult, and the Lyapunov function is frequently selected by trial and error.
(d) The sufficiency of the criterion: Theorem 4.1 is a sufficient, but not necessary, condition for the large scale uniform asymptotic stability of the system (4.14). The limitation is that, if we cannot find a Lyapunov function V(x, t) which meets the theorem, we cannot conclude whether the system is stable or not.
(e) The principles in using the criterion: Considering the sufficiency of Theorem 4.1, we first check whether the system is large scale asymptotically stable. If not, we check the small scale asymptotic stability of the system. If the result is again negative, we check whether the system is stable in the sense of Lyapunov, until the stability is determined. This principle is helpful but does not always work.
Now, we discuss the continuous NTI system, for which the state equation is:
ẋ = f(x) ,  t ≥ 0 ,  (4.26)

where x ∈ R^n and f(0) = 0 for all t ∈ [0, ∞), i.e., the state space origin x = 0 is an isolated equilibrium state of the system. We obtain the corresponding conclusion for the time invariant case directly from Theorem 4.1, since a time invariant system is a special case. Moreover, the requirement is largely simplified in its form for time invariant cases.
Theorem 4.2. For a continuous NTI autonomous system (4.26), if a scalar function V(x) with V(0) = 0 exists which has continuous first order partial derivatives with respect to x and fulfills the following conditions for all nonzero states in the state space R^n:
(i) V(x) is positive definite
(ii) V̇(x) = dV(x)/dt is negative definite
(iii) V(x) → ∞ when ‖x‖ → ∞
then the origin equilibrium state of (4.26) is large scale asymptotically stable.
Example 4.1. Consider a continuous NTI autonomous system:
ẋ1 = x2 − x1(x1² + x2²)
ẋ2 = −x1 − x2(x1² + x2²) .
Discuss the stability of the system.
Solution. Obviously, [x1, x2]^T = [0, 0]^T is the equilibrium state. First, we select a quadratic function of the state x as the Lyapunov function V(x):
V(x) = x1² + x2² .
It is obvious that V(x) is positive definite. Next, by calculating V̇(x), we get:
V̇(x) = (∂V/∂x1)ẋ1 + (∂V/∂x2)ẋ2 = 2x1[x2 − x1(x1² + x2²)] + 2x2[−x1 − x2(x1² + x2²)] = −2(x1² + x2²)² .
It is easy to see that V̇(x) is negative definite. Lastly, when ‖x‖ = √(x1² + x2²) → ∞, we get:
V(x) = ‖x‖² = x1² + x2² → ∞ .
According to Theorem 4.2, the origin equilibrium state x = 0 of the system is large scale asymptotically stable.
The major difficulty in constructing a Lyapunov function is that V̇(x) should be negative definite, which is a quite conservative condition. Next, we will give a relaxed stability criterion for a continuous NTI system.
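Before moving on to the relaxed criterion, the V̇(x) computation in Example 4.1 can also be verified symbolically in MATLAB. The following is only a minimal sketch; it assumes the Symbolic Math Toolbox is available, and the variable names are illustrative:
% Symbolic check of the Lyapunov derivative in Example 4.1
syms x1 x2 real
f = [ x2 - x1*(x1^2 + x2^2);          % system right-hand side
     -x1 - x2*(x1^2 + x2^2)];
V = x1^2 + x2^2;                      % candidate Lyapunov function
Vdot = simplify(jacobian(V, [x1 x2])*f)
% returns -2*(x1^2 + x2^2)^2, i.e., a negative definite derivative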


Theorem 4.3. For a continuous NTI autonomous system (4.26), suppose a scalar function V(x) with V(0) = 0 exists. If V(x) has continuous first order partial derivatives with respect to x and fulfills the following conditions for all nonzero states in the state space R^n:
(i) V(x) is positive definite
(ii) V̇(x) = dV(x)/dt is seminegative definite
(iii) V̇(ϕ(t; x0, 0)) is not identically equal to zero for any nonzero x0 ∈ R^n
(iv) V(x) → ∞ when ‖x‖ → ∞
then the origin equilibrium state of (4.26) is large scale asymptotically stable.
Example 4.2. Consider a continuous NTI autonomous system:
ẋ1 = x2
ẋ2 = −x1 − (1 + x2)² x2 .
Discuss the stability of the system.
Solution. Obviously, [x1, x2]^T = [0, 0]^T is the only equilibrium state. First, we select a quadratic function of the state x as the Lyapunov function V(x):
V(x) = x1² + x2² .
V(x) is positive definite. Second, by computation, we get:
V̇(x) = (∂V/∂x1)ẋ1 + (∂V/∂x2)ẋ2 = 2x1 x2 + 2x2[−x1 − (1 + x2)² x2] = −2x2²(1 + x2)² .
We can see that there are two cases which make V̇(x) = 0:
case 1: x1 is arbitrary and x2 = 0;
case 2: x1 is arbitrary and x2 = −1.
Except for these two cases, we have V̇(x) < 0 when x ≠ 0. Thus, V̇(x) is seminegative definite. Now we check whether V̇(ϕ(t; x0, 0)) is identically equal to zero or not. The problem comes down to judging whether the above two cases can be disturbed responses of the system.
For case 1, ϕ(t; x0, 0) = [x1(t), 0]^T. From x2(t) ≡ 0 we can deduce ẋ2(t) = 0. Substituting this into the system equation yields:
ẋ1(t) = x2(t) = 0 ,
0 = ẋ2(t) = −(1 + x2(t))² x2(t) − x1(t) = −x1(t) .

Therefore, ϕ(t; x0, 0) = [x1(t), 0]^T is not a solution of the system's disturbed dynamics except for the origin (x1 = 0, x2 = 0).
For case 2, ϕ(t; x0, 0) = [x1(t), −1]^T. From x2(t) ≡ −1 we can deduce ẋ2(t) = 0. Substituting this into the system equation yields:
ẋ1(t) = x2(t) = −1 ,
0 = ẋ2(t) = −(1 + x2(t))² x2(t) − x1(t) = −x1(t) .
Obviously, this is a contradictory result. Hence, ϕ(t; x0, 0) = [x1(t), −1]^T is not a solution of the system. Therefore, item (iii) in Theorem 4.3 is satisfied.
Lastly, when ‖x‖ = √(x1² + x2²) → ∞, we get:
V(x) = ‖x‖² = x1² + x2² → ∞ .
According to Theorem 4.3, the origin equilibrium state x = 0 of the system is large scale asymptotically stable. Note that the Lyapunov function we chose for this system does not qualify for Theorem 4.2 but meets Theorem 4.3.
Small Scale Asymptotic Stability Theorem
In the application of Lyapunov's second method, when a system is not large scale asymptotically stable, we turn to small scale asymptotic stability. This section presents some basic theorems on small scale asymptotic stability.
For continuous NTV systems, we have the following conclusion.
Theorem 4.4. For a continuous NTV autonomous system (4.14), suppose a scalar function V(x, t) (V(0, t) = 0) exists. If V(x, t) has continuous first order partial derivatives with respect to x and t in an attraction region Ω around the state space origin, and meets the following requirements for all nonzero states x ∈ Ω and all t ∈ [t0, ∞):
(i) V(x, t) is positive definite and bounded
(ii) V̇(x, t) = dV(x, t)/dt is negative definite and bounded
then the origin equilibrium state x = 0 of the system is uniformly asymptotically stable in the region Ω.
For continuous NTI systems, we have the following two conclusions.
Theorem 4.5. For a continuous NTI autonomous system (4.26), if a scalar function V(x) (V(0) = 0) exists which has continuous first order partial derivatives with respect to x in an attraction region Ω around the state space origin, and meets the following conditions


for all nonzero states x ∈ Ω and all t ∈ [t0, ∞):
(i) V(x) is positive definite
(ii) V̇(x) = dV(x)/dt is negative definite
then the origin equilibrium state x = 0 of the system is asymptotically stable in the region Ω.
Theorem 4.6. For a continuous NTI autonomous system (4.26), if a scalar function V(x) (V(0) = 0) exists which has continuous first order partial derivatives with respect to x in an attraction region Ω around the state space origin, and meets the following criteria for all nonzero states x ∈ Ω and all t ∈ [t0, ∞):
(i) V(x) is positive definite
(ii) V̇(x) = dV(x)/dt is seminegative definite
(iii) V̇(ϕ(t; x0, 0)) is not identically equal to zero for any nonzero state x0 ∈ Ω
then the origin equilibrium state x = 0 is asymptotically stable in the region Ω.
Theorem for Stability in the Sense of Lyapunov
Similarly, when a system is not small scale asymptotically stable, we turn to judging the stability in the sense of Lyapunov. In this section, we provide some rules to determine the stability in the sense of Lyapunov.
Theorem 4.7. For a continuous NTV autonomous system (4.14), if a scalar function V(x, t) (V(0, t) = 0) exists which has continuous first order partial derivatives with respect to x and t in an attraction region Ω around the state space origin, and meets the following conditions for all nonzero states x ∈ Ω and all t ∈ [t0, ∞):
(i) V(x, t) is positive definite and bounded
(ii) V̇(x, t) = dV(x, t)/dt is seminegative definite and bounded
then the origin equilibrium state x = 0 of the system is stable in the sense of Lyapunov in the region Ω.
For continuous NTI systems, we have the following conclusion.
Theorem 4.8. For a continuous NTI autonomous system (4.26), if a scalar function V(x) with V(0) = 0 exists which has continuous first order partial derivatives with respect to x in an attraction region Ω around the state space origin, and satisfies the following conditions for all nonzero states x ∈ Ω and all t ∈ [t0, ∞):
(i) V(x) is positive definite
(ii) V̇(x) = dV(x)/dt is seminegative definite
then the origin equilibrium state x = 0 of the system is stable in the sense of Lyapunov in the region Ω.


Theorem for Instability
For a continuous NTV system, the criterion for instability is presented as follows.
Theorem 4.9. For a continuous NTV autonomous system (4.14), if a scalar function V(x, t) (V(0, t) = 0) exists which has continuous first order partial derivatives with respect to x and t in an attraction region Ω around the state space origin, and meets the following conditions for all nonzero states x ∈ Ω and all t ∈ [t0, ∞):
(i) V(x, t) is positive definite and bounded
(ii) V̇(x, t) = dV(x, t)/dt is positive definite and bounded
then the origin equilibrium state x = 0 of the system is unstable.
For continuous NTI systems, we have the following criterion.
Theorem 4.10. For a continuous NTI autonomous system (4.26), if a scalar function V(x) (V(0) = 0) exists which has continuous first order partial derivatives with respect to x in an attraction region Ω around the state space origin, and meets the following conditions for all nonzero states x ∈ Ω and all t ∈ [t0, ∞):
(i) V(x) is positive definite
(ii) V̇(x) = dV(x)/dt is positive definite
then the origin equilibrium state x = 0 of the system is unstable.
Note: From the above two conclusions, we can see that the system is unstable when V̇(x, t) or V̇(x) has the same sign as V(x, t) or V(x). In this case, the disturbed dynamics trajectories of the system diverge to infinity.

4.3.2 State Dynamics Stability Criteria for Continuous Linear Systems
This section discusses the stability of continuous linear systems. Based on the concepts and results of the second Lyapunov method, for both LTI and LTV systems we first discuss the stability of the disturbed dynamics and then present the corresponding stability criteria.
Stability Criteria for LTI Systems
Consider a continuous LTI system. The autonomous state equation is:
ẋ = Ax ,  x(0) = x0 ,  t ≥ 0 ,  (4.27)

where x ∈ R n , and the origin of the state space x = 0 is an equilibrium state of the system. Next, we present the stability criteria for LTI systems based on eigenvalues.
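In MATLAB this kind of eigenvalue inspection is done directly with eig. The following minimal sketch only illustrates the type of check that the theorems below formalize; the matrix A is an arbitrary illustrative example:
% Eigenvalue-based stability check for x_dot = A*x
A = [-1 1; 2 -3];                     % illustrative system matrix
lambda = eig(A);
if all(real(lambda) < 0)
    disp('All eigenvalues in the open left half-plane: asymptotically stable.')
elseif all(real(lambda) <= 0)
    disp('No eigenvalue in the open right half-plane: check the imaginary-axis eigenvalues.')
else
    disp('An eigenvalue has positive real part: unstable.')
end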


Theorem 4.11. For a continuous LTI system (4.27), the original equilibrium state x = 0 is stable in the sense of Lyapunov if, and only if, all the eigenvalues of matrix A have nonpositive real parts, i.e., zero or negative real parts, and the eigenvalue, whose real part is zero, is distinct. Proof. The proof is divided into two steps. Step 1: Prove that the system is stable if ‖e At ‖ ≤ β < ∞. From the autonomous dynamics equation of the LTI system, we can obtain the disturbed dynamics of states: ϕ(t; x0 , 0) = x0u (t) = e At x0 .

(4.28)

The equilibrium state is xe = 0. We notice that xe = e^{At} xe; thus the disturbed dynamics relative to the equilibrium state xe = 0 is:
ϕ(t; x0, 0) − xe = e^{At}(x0 − xe) ,  ∀t ≥ 0 .  (4.29)

This indicates that, if ‖e^{At}‖ ≤ β < ∞, then for any real number ε a real number δ(ε) = ε/β exists, which is independent of the initial time and makes the disturbed dynamics from any nonzero initial state with ‖x0 − xe‖ ≤ δ(ε) (x0 ∈ R^n) satisfy the following inequality:
‖ϕ(t; x0, 0) − xe‖ ≤ ‖e^{At}‖ · ‖x0 − xe‖ ≤ β · ε/β = ε ,  ∀t ≥ 0 .  (4.30)

As defined, the system is stable in the sense of Lyapunov. This proves the proposition.
Step 2: Prove the conclusion of Theorem 4.11. Introduce the linear nonsingular transformation x̂ = Q⁻¹x so that Â = Q⁻¹AQ is in Jordan canonical form. Then:
‖e^{Ât}‖ ≤ ‖Q⁻¹‖ ‖e^{At}‖ ‖Q‖ ,  ‖e^{At}‖ ≤ ‖Q‖ ‖e^{Ât}‖ ‖Q⁻¹‖ .  (4.31)
This indicates that the boundedness of ‖e^{At}‖ is equivalent to the boundedness of ‖e^{Ât}‖. From the Jordan canonical form, we know that each element of e^{Ât} is a combination of the following items:
t^{βi−1} e^{(αi + jωi)t} ,  λi(Â) = λi(A) = αi + jωi ,  i = 1, 2, . . . , μ ,  βi = 1, 2, . . . , σi ,  (4.32)
where λ(·) denotes an eigenvalue of the corresponding matrix, and σi means that λi is a σi-fold eigenvalue. When αi < 0, the corresponding items are bounded on the interval [0, ∞) for any finite positive integer βi. When αi = 0, the corresponding items are bounded on [0, ∞) only for βi = 1. Furthermore, the boundedness of the elements of e^{Ât} implies the boundedness of ‖e^{Ât}‖. This indicates that ‖e^{Ât}‖, i.e., ‖e^{At}‖, is bounded if, and only if, all the eigenvalues of matrix A have zero or negative real parts and the eigenvalues whose real parts are zero are distinct. Using the proposition given in the first part, this condition is necessary and sufficient for stability in the sense of Lyapunov. The proof is complete.

Theorem 4.12. For a continuous LTI system (4.27), the origin equilibrium state x = 0 is asymptotically stable if, and only if, all the eigenvalues of matrix A have negative real parts.
Proof. From Theorem 4.11, the equilibrium state x = 0 is stable in the sense of Lyapunov only if all the eigenvalues of matrix A have zero or negative real parts, and the eigenvalues whose real parts are zero are distinct. Furthermore, from equations (4.28), (4.31) and (4.32), we know that:
lim_{t→∞} ϕ(t; x0, 0) = lim_{t→∞} e^{At} x0 = 0  ⇔  lim_{t→∞} ‖e^{At}‖ = 0
⇔  lim_{t→∞} t^{βi−1} e^{(αi + jωi)t} = 0 ,  i = 1, 2, . . . , μ ,  βi = 1, 2, . . . , σi
⇔  the eigenvalues of A all have negative real parts.

As defined, the system is asymptotically stable. The proof is complete.
Note: We can see that asymptotic stability equals the internal stability illustrated beforehand.
Furthermore, based on the second Lyapunov method, we can provide the Lyapunov stability criterion for LTI systems.
Theorem 4.13. For an n-dimensional continuous LTI system (4.27), the origin equilibrium state xe = 0 is asymptotically stable if, and only if, for any given n × n positive definite symmetric matrix Q, the Lyapunov equation A^T P + PA = −Q has a unique n × n positive definite symmetric matrix solution P.
Proof. First we prove the sufficiency. Given the n × n positive definite matrix P, we want to prove the asymptotic stability of xe = 0. For this, we select the Lyapunov function V(x) = x^T Px. As P = P^T > 0, V(x) is positive definite. Furthermore, we have:
V̇(x) = ẋ^T Px + x^T Pẋ = (Ax)^T Px + x^T P(Ax) = x^T(A^T P + PA)x = −x^T Qx .  (4.33)
Moreover, from Q = Q^T > 0 we know that V̇(x) is negative definite. According to the large scale asymptotic stability theorems, xe = 0 is asymptotically stable. Sufficiency has been proven.
Then we prove the necessity. Given the asymptotic stability of xe = 0, we want to prove that a unique positive definite symmetric solution P exists. For this, we use the matrix equation:
Ẋ = A^T X + XA ,  X(0) = Q ,  t ≥ 0 .  (4.34)
The solution matrix X is:
X(t) = e^{A^T t} Q e^{At} ,  t ≥ 0 .  (4.35)


The integration of (4.34) from t = 0 to t → ∞ is:
X(∞) − X(0) = A^T (∫_0^∞ X(t)dt) + (∫_0^∞ X(t)dt) A .  (4.36)

Hence, P = ∫0 X(t)dt is the solution of the Lyapunov equation. The fact is, X(t) is ∞ unique and X(∞) = 0, P = ∫0 X(t)dt is unique. While: ∞ T

P = ∫ [e

AT t

At

T

∞ T

Qe ] dt = ∫ e A t Qe At dt = P .

0

(4.38)

0



So P = ∫0 X(t)dt is symmetry. Again, for any nonzero x0 ∈ R n , we have: ∞

xT0 Px0 = ∫ (e At x0 )T Q(e At x0 )dt ,

(4.39)

0

where the positive definite matrix Q = N T N is nonsingular. From equation (4.39), we can further deduce: ∞

xT0 Px0

= ∫ (e At x0 )T N T N(e At x0 )dt 0 ∞

󵄩 󵄩2 = ∫ 󵄩󵄩󵄩󵄩 Ne At x0 󵄩󵄩󵄩󵄩 dt > 0 .

(4.40)

0

Therefore, P is unique and positive definite. Necessity has been proven and proof, overall, has been provided. Example 4.3. Consider the stability of the following continuous LTI system: ẋ = [

−1 2

1 ]x . −3

Solution. For simplicity, we select Q = I2 . Furthermore, from the Lyapunov equation: −1 AT P + PA = [ 1 we can deduce:

2 p1 ][ −3 p3

p1 p3 ]+[ p2 p3

p3 −1 ][ 2 p2

−2p1 + 0p2 + 4p3 = −1 0p1 − 6p2 + 2p3 = −1 p1 + 2p2 − 4p3 = 0 .

1 −1 ]=[ −3 0

0 ] = −Q , −1

94 | 4 Stability Analysis

Using algebraic equation solving methods, we get: p1 −2 [ ] [ [p2 ] = [ 0 [p3 ] [ 1

0 −6 2

−1

4 ] 2] −4]

− 54 −1 [ [ ] [ 1 [−1] = [− 8 [ 0 ] [− 38

− 12 − 14 − 14

− 32

7

4 −1 ] [ ] [3] [ ] . = − 14 ] −1 [ ] ] [8] 3 [0 ] 5 −4] [8]

Therefore, solution of the Lyapunov equation is: 7 4

P=[ 5 [8

5 8] 3 8]

>0.

P is positive definite, so the system is asymptotically stable. MATLAB can be adopted for the solution of the above question. >> A={[-1 1;2 -3]}; >> Q={[1 0;0 1]}; >> P=lyap(A',Q) P = 1.7500 0.6250 0.6250 0.3750 The function “posdef” can be used to judge whether a matrix is positive definite or not. The format of the MATLAB function is: [key,sdet]=posdef(P) The codes are shown as follows: function [key,sdet]=posdef(P) [nr,nc]=size(P); sdet=[]; for i=1:nr sdet=[sdet,det(P(1:i,1:i))]; end key=1; if any(sdet 0 exists, which makes the following equation valid: 󵄩󵄩 󵄩 󵄩󵄩ϕ(t, t0 )󵄩󵄩󵄩 ≤ β(t0 ) < ∞ ,

∀t ≥ t0 .

(4.46)

Furthermore, if there exists independent real numbers β > 0 for all t0 , the original equilibrium state xe = 0 is stable in the sense of Lyapunov. Theorem 4.16. For a continuous LTV system (4.45), ϕ(t, t0 ) is the state transfer matrix of the system. Then the original equilibrium state xe = 0 is asymptotically stable at time t0 , only if a real number β(t0 ) > 0 exists, which qualifies the following two items: 󵄩 󵄩󵄩 󵄩󵄩 ϕ(t, t0 )󵄩󵄩󵄩 ≤ β(t0 ) < ∞, ∀t ≥ t0 󵄩 󵄩 lim 󵄩󵄩󵄩 ϕ(t, t0 )󵄩󵄩󵄩 = 0 .

(4.47)

t→∞

Furthermore, the original equilibrium state xe = 0 is uniformly and asymptotically stable, only if independent real numbers β 1 > 0 and β 2 > 0 exist for all t0 ∈ [0, ∞] that qualify the following equation: 󵄩 󵄩󵄩 󵄩󵄩ϕ(t, t0 )󵄩󵄩󵄩 ≤ β 1 e−β2 (t−t 0) .

(4.48)

Proof. First we prove the sufficiency. Given equation (4.47), we need to prove that xe = 0 is uniformly and asymptotically stable. From (4.48), and using the disturbed dynamics equation, we have: 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩󵄩 󵄩󵄩ϕ(t; x0 , t0 )󵄩󵄩󵄩 = 󵄩󵄩󵄩 ϕ(t, t0 )x0 󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩 ϕ(t, t0 )󵄩󵄩󵄩 ‖x0 ‖ ≤ β 1 ‖x0 ‖ e−β2 (t−t 0) .

(4.49)

This indicates that the disturbed dynamics ϕ(t; x0 , t0 ) is bounded for all t ≥ t0 . For all t0 ∈ [0, ∞) we have ‖ϕ(t; x0 , t0 )‖ → 0 when t → ∞. Thus, xe = 0 is uniformly and asymptotically stable. Sufficiency has been proven. Now we will prove the necessity. Given that xe = 0 is uniformly and asymptotically stable, we need to prove equation (4.47). As xe = 0 is uniformly and asymptotically stable, xe = 0 is stable in the sense of Lyapunov, i.e., there exists a real number β 3 > 0 which satisfies: 󵄩󵄩 󵄩 (4.50) 󵄩󵄩ϕ(t, t0 )󵄩󵄩󵄩 ≤ β 3 , ∀t0 ∈ [0, ∞] , ∀t ≥ t0 . Furthermore, for a fixed real number δ > 0 and any given real μ > 0, there exists a real number T > 0 that satisfies the following equation for all initial states x0 and all t0 ∈ [0, ∞): 󵄩󵄩 󵄩󵄩 󵄩󵄩 󵄩󵄩 (4.51) 󵄩󵄩 ϕ(t0 + T; x0 , t0 )󵄩󵄩 = 󵄩󵄩 ϕ(t0 + T; t0 , x0 )󵄩󵄩 ≤ μ . Randomly select a x0 to satisfy: ‖x0 ‖ = δ

and

󵄩 󵄩 󵄩 󵄩󵄩 󵄩󵄩 ϕ(t0 + T, t0 )x0 󵄩󵄩󵄩 = 󵄩󵄩󵄩ϕ(t0 + T, t0 )󵄩󵄩󵄩 ⋅ ‖x0 ‖ .

(4.52)

4.3 Stability Criteria

|

97

Then, by selecting μ = δ/2 from equations (4.51) and (4.52), we can further deduce that: 󵄩 1 󵄩󵄩 (4.53) 󵄩󵄩ϕ(t0 + T, t0 )󵄩󵄩󵄩 ≤ , ∀t0 ∈ [0, ∞) . 2 Hence, using equations (4.50) and (4.53) we can get: 󵄩󵄩 󵄩 󵄩󵄩 ϕ(t, t0 )󵄩󵄩󵄩 ≤ β 3 , ∀t ∈ [t0 , t0 + T) , 󵄩󵄩 󵄩 󵄩 󵄩 󵄩󵄩 ϕ(t, t0 )󵄩󵄩󵄩 = 󵄩󵄩󵄩 ϕ(t, t0 + T)ϕ(t0 + T, t0 )󵄩󵄩󵄩 , 󵄩󵄩 󵄩 β3 󵄩 , ∀t ∈ [t0 + T, t0 + 2T) , ≤ 󵄩󵄩󵄩 ϕ(t, t0 + T)󵄩󵄩󵄩 󵄩󵄩󵄩ϕ(t0 + T, t0 )󵄩󵄩󵄩 ≤ 2 󵄩 󵄩 󵄩󵄩 󵄩󵄩 󵄩 β3 󵄩󵄩 󵄩󵄩 ϕ(t, t0 )󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩 ϕ(t, t0 + 2T)󵄩󵄩󵄩 󵄩󵄩󵄩ϕ(t0 + 2T, t0 + T)󵄩󵄩󵄩 󵄩󵄩󵄩 ϕ(t0 + T), t0 󵄩󵄩󵄩 ≤ 2 , 2 ∀t ∈ [t0 + 2T, t0 + 3T) , .. . 󵄩󵄩 󵄩 β3 󵄩󵄩 ϕ(t, t0 )󵄩󵄩󵄩 ≤ m , 2

∀t ∈ [t0 + mT, t0 + (m + 1)T) .

Again we construct an exponential function β 1 e−β2 (t−t 0) , which makes the following equation valid: [β 1 e−β2 (t−t 0) ]t=t

0 +mT

=

β3 , 2m−1

m = 1, 2 ,

... .

(4.54)

Furthermore, we can get: 1 m β 1 (e−β2 T )m = 2β 3 ( ) . 2

(4.55)

We can see that, by selecting β 1 = 2β 3 and an adequate β 2 , we can make e−β2 T = 1/2 hold. Thus, we proved that real numbers β 1 > 0 and β 2 > 0 exist to validate equation (4.48). Necessity has been proven and proof, overall, has been provided. Theorem 4.17. For a continuous LTV system (4.45), suppose xe = 0 is the unique equilibrium state of the system. The elements of n×n dimensional matrix A(t) are segmented continuous uniform and bounded real function, and the original equilibrium state xe = 0 is uniformly and asymptotically stable if two real numbers, β 1 > 0 and β 2 > 0, exist when 0 < β 1 I ≤ Q(t) ≤ β 2 I holds. The n × n solution matrix P(t) of the Lyapunov equation: ̇ = P(t)A(t) + AT (t)P(t) + Q(t) , − P(t)

∀t ≥ t0 ,

(4.56)

is real symmetry, uniformly bounded and uniformly positive definite. Equivalently, two real numbers exist, α 1 > 0 and α 2 > 0, making 0 < α 1 I ≤ P(t) ≤ α 2 I, ∀t ≥ t0 .

98 | 4 Stability Analysis

4.3.3 State Dynamics Stability Criteria for Discrete Systems Lyapunov Stability Theorem for Discrete NTI Systems Consider a discrete NTI system. The autonomous equation is: x(k + 1) = f(x(k)) ,

x(0) = x0 ,

k = 0, 1, 2, . . . ,

(4.57)

where x ∈ R n×n , f(0) = 0, i.e., the origin of the state space x = 0 is an equilibrium state. Next, we will present some Lyapunov stability theorems for discrete NTI systems. Theorem 4.18. For a discrete NTI system (4.57), if a scalar function V(x(k)) exists for discrete state x(k) which meets the following items for any x(k) ∈ R n : (i) V(x(k)) is positive definite (ii) if ∆V(x(k)) = V(x(k + 1)) − V(x(k)), ∆V(x(k)) is negative (iii) V(x(k)) → ∞ when x(k) → ∞ then the original equilibrium state x = 0 is large scale asymptotically stable. Note: The conservative property of (ii) may result in the failure of judgment for many systems. Thus, we can release this condition as follows. Theorem 4.19. For a discrete NTI system (4.57), if a scalar function V(x(k)) exists for discrete state x(k), which meets the following items for any x(k) ∈ R n : (i) V(x(k)) is positive definite (ii) if ∆V(x(k)) = V(x(k + 1)) − V(x(k)), ∆V(x(k)) is seminegative (iii) ∆V(x(k)) is not identically zero for any free dynamics started from any nonzero initial state x(0) ∈ R n , i.e., equation (4.57) (iv) V(x(k)) → ∞ when x(k) → ∞ then the original equilibrium state x = 0 is large scale asymptotically stable. Based on the above stability theorems, we can easily deduce a more intuitive and convenient stability criterion for discrete systems. Theorem 4.20. For a discrete NTI system (4.57), suppose f (0)=0, x = 0 is an equilibrium state of the system. If f(x(k)) is convergent, i.e., for x(k) ≠ 0, we have: ‖f(x(k))‖ < ‖x(k)‖ . Then the original equilibrium state x = 0 is large scale asymptotically stable. Proof. For a given discrete system, we select the Lyapunov function: V(x(k)) = ‖x(k)‖ . Obviously, V(x(k)) is positive definite. Furthermore, we can deduce that: ∆V(x(k)) = V(x(k + 1)) − V(x(k)) = ‖x(k + 1)‖ − ‖x(k)‖ = ‖f(x(k))‖ − ‖x(k)‖ .

(4.58)

4.3 Stability Criteria |

99

From equation (4.58), we can see that ∆V(x(k)) is negative, and V(x(k)) → ∞ when x(k) → ∞. According to Theorem 4.18, the original equilibrium state x = 0 is large scale asymptotically stable. Proof has been provided. Stability Criteria for Discrete LTI Systems Consider a discrete NTI system. The autonomous equation is: x(k + 1) = Gx(k) ,

k = 0, 1, 2, . . . ,

x(0) = x0 ,

(4.59)

where x ∈ R n×n , and the solution state xe of Gxe = 0 is an equilibrium state. If the matrix G is singular, there are nonzero equilibrium states besides xe = 0. While, if the matrix G is nonsingular, there is only one equilibrium state xe = 0. Next, we will give the corresponding equilibrium state stability criteria for LTI systems. Theorem 4.21. For a discrete LTI autonomous system (4.59), the original equilibrium state xe = 0 is stable in the sense of Lyapunov, only if all the amplitudes of the eigenvalues of G: λ i (G) (i = 1, 2, . . . , n) are equal to or less than 1, and the eigenvalue whose amplitude is 1 is the single root of the polynomial of G. Theorem 4.22. For a discrete LTI autonomous system (4.59), the origin equilibriums state xe = 0 is asymptotically stable, only if all the amplitudes of the eigenvalues of G: λ i (G) (i = 1, 2, . . . , n) are less than 1. Theorem 4.23. For an n-dimensional discrete LTI autonomous system (4.59), the origin equilibriums state xe = 0 is asymptotically stable. That is, all the amplitudes of the eigenvalues of G: λ i (G) (i = 1, 2, . . . , n) are less than 1, but only if any given n × n dimensional positive definite symmetry matrix Q the discrete Lyapunov function GT PG − P = −Q

(4.60)

has unique n × n dimensional positive definite symmetry solution matrix P. Theorem 4.24. For an n-dimensional discrete LTI autonomous system (4.59), the original equilibrium state xe = 0 is exponentially stable with the index of σ > 0. That is, the eigenvalues of G satisfy |λ i (G)| < σ ,

0≤σ≤1,

i = 1, 2, . . . , n ,

(4.61)

but only if, for any given n × n dimensional positive definite symmetry matrix Q, the expanded discrete Lyapunov function (1/σ)2 GT PG − P = −Q has unique n × n dimensional positive definite symmetry solution matrix P.

(4.62)


Example 4.4. The state equation of a discrete linear system is:
x(k + 1) = [λ1 0; 0 λ2] x(k) .
Try to determine the condition for the asymptotic stability of the equilibrium state.
Solution. According to G^T PG − P = −I, we have:
[λ1 0; 0 λ2][p11 p12; p12 p22][λ1 0; 0 λ2] − [p11 p12; p12 p22] = [−1 0; 0 −1] .
Then:
p11(1 − λ1²) = 1 ;  p12(1 − λ1 λ2) = 0 ;  p22(1 − λ2²) = 1 .
So
P = [1/(1 − λ1²) 0; 0 1/(1 − λ2²)] .
The condition for asymptotic stability of the equilibrium state is:
|λ1| < 1 and |λ2| < 1 .
MATLAB can be adopted for the solution of the above question:
P = dlyap(G', Q);
If P is positive definite, the system is asymptotically stable.
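As a concrete illustration of this dlyap-based test, the following minimal sketch uses an arbitrary numerical matrix G (the numbers are assumptions for illustration only, not taken from the example above):
% Discrete Lyapunov test: solve G'*P*G - P = -Q and check P > 0
G = [0.5 1; 0 -0.8];                  % illustrative matrix with |eig(G)| < 1
Q = eye(2);
P = dlyap(G', Q);                     % dlyap(A,Q) solves A*X*A' - X + Q = 0
all(eig(P) > 0)                       % true: P positive definite, hence asymptotically stable
abs(eig(G))                           % cross-check: both magnitudes are below 1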

4.4 Summary
Stability is a fundamental property of a system. In this chapter, the definitions of stability in the sense of Lyapunov were given for equilibrium states. Stability criteria were stated and proven for different classes of systems, and examples were given to show how to use the criteria.

Exercise
4.1. Determine whether the following functions are positive definite or not.
(1) V(x) = 2x1² + 3x2² + x3² − 2x1x2 + 2x1x3
(2) V(x) = (1/2)[(x1 + x2)² + 2x1² + x2²]
(3) V(x) = x1² + x3² − 2x1x2 + x2x3
(4) V(x) = x1² + 3x2² + 11x3² − 2x1x2 + 4x2x3 + 2x1x3


4.2. Given a continuous time NTI system, try to analyze the stability of its equilibrium state:
ẋ1 = x2
ẋ2 = −x1² x2 − x1 .
4.3. Consider a continuous time NTI system:
ẋ1 = x2
ẋ2 = −x1 − x2(1 + x2)² .
Try to determine the stability of the origin equilibrium state xe = 0.
4.4. Consider a continuous time LTI system:
ẋ = [0 1; −1 −1] x .

Try to determine the stability of its equilibrium state.
4.5. Consider a continuous time NTI system:
ẋ1 = x2
ẋ2 = −(1 − |x1|)x2 − x1 .
Try to analyze the stability of its equilibrium state.
4.6. Given the state space equation:
ẋ = [1 1; −1 1] x ,

try to determine the stability of the origin equilibrium state xe = 0.
4.7. Given the state space equation:
ẋ = [0 1; −2 −3] x ,

try to determine the stability of the equilibrium point.
4.8. Consider a continuous time linear time varying system:
ẋ = [0 1; −1/(t + 1) −10] x ,  t ≥ 0 .
Try to determine whether the origin equilibrium state xe = 0 is large scale asymptotically stable. (Hint: take V(x, t) = (1/2)[x1² + (1 + t)x2²].)


4.9. Try to analyze the BIBO stability and the asymptotic stability of the equilibrium state xe = 0 for the following two systems:
(1) ẋ = [0 6; 1 −1] x + [−2; 1] u ,  y = [0 1] x ,
(2) ẋ = [0 1 0; 0 0 1; 250 0 −5] x + [0; 0; 10] u ,  y = [−25 5 0] x .

4.10. Consider a linear discrete time system:
x(k + 1) = [λ1 0; 0 λ2] x(k) .
Try to determine the asymptotic stability condition for the equilibrium state.
4.11. Given a discrete time LTI system:
x(k + 1) = [1 4 0; −3 −2 −3; 2 0 0] x(k) .

Use two methods to determine whether the system is asymptotically stable.

5 Controllability and Observability
5.1 Introduction
This chapter introduces the concepts of controllability and observability. Controllability deals with whether or not the state of a state space equation can be controlled from the input, and observability deals with whether or not the initial state can be observed from the output. These concepts can be illustrated using the networks shown in Figure 5.1. In Figure 5.1 (a), the network has two state variables. Suppose that xi is the voltage across the capacitor with capacitance Ci, for i = 1, 2. The input u is a voltage source. From the network it can be seen that, when C1 = C2 and R1 = R2, we always have x1 = x2. The input u cannot drive x1 and x2 to arbitrary independent values, i.e., the system is uncontrollable. In Figure 5.1 (b), when the initial values of x1 and x2 are equal, x1(t) = x2(t) for all t, and the output cannot reflect the individual values of x1(t) and x2(t), so the system is unobservable. These concepts are essential in describing the internal structure of linear systems. They are also needed in studying control and filtering problems. In this chapter, we will discuss continuous time LTI state equations.

Fig. 5.1: (a) Network; (b) Network.


5.2 Definition
5.2.1 Controllability
Consider the n-dimensional p-input state equation:
ẋ = Ax + Bu ,  (5.1)
where A and B are, respectively, n × n and n × p real constant matrices. Because the output does not play any role in controllability, we will disregard the output equation in this study.
Definition 5.1. The state equation (5.1) or the pair (A, B) is said to be controllable if, for any initial state x(0) = x0 and any final state x1, there exists an input that transfers x0 to x1 in a finite time. Otherwise, (5.1) or (A, B) is said to be uncontrollable.
This definition requires only that the input be capable of moving any state in the state space to any other state in finite time (what trajectory the state should take is not specified). Furthermore, there is no constraint imposed on the input; its magnitude can be as large as desired. Consider the n-dimensional p-input q-output state equation:
ẋ = Ax + Bu
y = Cx + Du ,  (5.2)

where A, B, C and D are, respectively, n × n, n × p, q × n and q × p constant matrices.
Example 5.1. Consider the network shown in Figure 5.2 (a). Its state variable x is the voltage across the capacitor. If x(0) = 0, then x(t) = 0 for all t ≥ 0, no matter what input is applied. This is due to the symmetry of the network. Furthermore, the input has no effect on the voltage across the capacitor. Thus the system or, more precisely, the state equation that describes the system, is not controllable.
Next, we consider the network shown in Figure 5.2 (b). It has two state variables: x1 and x2. The input can transfer x1 or x2 to any value, but it cannot transfer x1 and x2 to arbitrary values independently. For example, if x1(0) = x2(0) = 0, then no matter what input is applied, x1(t) always equals x2(t) for all t ≥ 0. Thus, the equation that describes the network is not controllable.
Fig. 5.2: Uncontrollable networks.
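Although the formal tests only appear in Section 5.3, the conclusion for network (b) can already be checked numerically. The sketch below is only an illustration: it assumes that both branches are modeled with R = 1 Ω and C = 1 F, so that each capacitor voltage obeys ẋi = −xi + u; this simple state model is an assumption made for the example, not derived in the text:
% Controllability check for the symmetric RC network of Figure 5.2 (b)
A = [-1 0; 0 -1];                     % two identical, decoupled RC branches
B = [1; 1];                           % both driven by the same voltage source
Qc = ctrb(A, B);                      % controllability matrix [B, A*B]
rank(Qc)                              % returns 1 < 2, so the pair (A, B) is uncontrollable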

5.2.2 Observability Definition 5.2. The state equation (5.2) is said to be observable if, for any unknown initial state x(0), there exists a finite t1 > 0 such that the knowledge of the input u and the output y over [0, t1 ] suffices to uniquely determine the initial state x(0). Otherwise, the equation is said to be unobservable. Example 5.2. Consider the network shown in Figure 5.3. If the input is zero, no matter what the initial voltage across the capacitor is, the output is identically zero because of the symmetry of the four resistors. We know the input and output (both are identically zero), but we cannot uniquely determine the initial state. Thus the network or, more precisely, the state equation that describes the network is not observable.


Fig. 5.3: Unobservable network.

The response of (5.2) excited by the initial state x(0) and the input u(t) is:
y(t) = Ce^{At} x(0) + C ∫_0^t e^{A(t−τ)} B u(τ) dτ + D u(t) .  (5.3)
In the study of observability, the output y and the input u are assumed to be known; the initial state x(0) is the only unknown. Thus, we can write (5.3) as:
Ce^{At} x(0) = ȳ(t) ,  (5.4)

where
ȳ(t) = y(t) − C ∫_0^t e^{A(t−τ)} B u(τ) dτ − D u(t)
is a known function. Thus, the observability problem reduces to solving x(0) from (5.4). If u ≡ 0, then ȳ(t) reduces to the zero input response Ce^{At} x(0). Therefore, Definition 5.2 can be modified as follows: Equation (5.2) is observable if, and only if, the initial state x(0) can be determined uniquely from its zero input response over a finite time interval.
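This statement can be illustrated numerically: sampling the zero input response at a few time instants turns (5.4) into a set of linear equations in x(0), which can then be solved in the least squares sense. The following minimal MATLAB sketch uses an illustrative observable pair (A, C); all numbers are assumptions for illustration only:
% Recover x(0) from samples of the zero input response y(t) = C*expm(A*t)*x0
A = [0 1; -2 -3];  C = [1 0];         % illustrative observable pair
x0 = [1; -1];                         % "unknown" initial state to be recovered
M = [];  y = [];
for t = 0:0.2:1                       % sampling instants
    M = [M; C*expm(A*t)];             % stack the rows C*e^{A*t_k}
    y = [y; C*expm(A*t)*x0];          % corresponding output samples
end
x0_hat = M \ y                        % least squares solution; equals x0 here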

5.3 Criteria 5.3.1 Controllable Criteria Consider the continuous LTI system – the state equation is expressed as: ẋ = Ax + Bu ,

t≥0,

x(0) = x0 ,

(5.5)

where x ∈ R n ; u ∈ R r ; A n×n , B n×r . Theorem 5.1 (Controllability Gram Matrix Criteria). System (5.5) is controllable only if the n × n matrix t1

Wc [0, t1 ] ≜ ∫ e−At BBT e−A t dt T

(5.6)

0

is nonsingular for any t1 > 0. Proof. First we show that, if Wc [0, t1 ] is nonsingular, then (5.5) is controllable. For any nonzero state x0 , the response of (5.5) at time t1 is derived as: t1

x(t1 ) = e

At 1

x0 + ∫ e A(t 1−t) Bu(t)dt 0 t1

=e

At 1

T x0 − [e At 1 ∫ e−At BBT e A t dt] Wc−1 [0, t1 ]x0

0 [ ] = e At 1 x0 − e At 1 Wc [0, t1 ]Wc−1 [0, t1 ]x0

= e At 1 x0 − e At 1 x0 = 0 ,

∀x0 ∈ R n .

(5.7)

This shows that all nonzero states in R n are controllable. As defined, the system is completely controllable. We show the converse by contradiction. Suppose Wc [0, t1 ] is singular, and that there exists a nonzero state x 0 , such that xT0 Wc [0, t1 ]x0 = 0. This

107

5.3 Criteria |

would leave us with: t1

0=

= ∫ xT0 e−At BBT e−A t x0 dt T

xT0 Wc [0, t1 ]x0

0 t1

T

= ∫ [BT e−A t x0 ] [BT e−A t x0 ] dt T

T

0 t1

󵄩󵄩 󵄩󵄩2 T = ∫ 󵄩󵄩󵄩BT e−A t x0 󵄩󵄩󵄩 dt , 󵄩 󵄩

(5.8)

0

which implies that: BT e−A t x0 = 0 , T

∀t ∈ [0, t1 ] .

(5.9)

If (5.5) is controllable, an input exists that ensures: t1

0 = x(t1 ) = e At 1 x 0 + ∫ e At 1 e−At Bu(t)dt ,

(5.10)

0

thus t1

x0 = − ∫ e−At Bu(t)dt 0 T

t1

t1

󵄩󵄩 󵄩󵄩2 T 󵄩󵄩x 0 󵄩󵄩 = x 0 x0 = [− ∫ e−At Bu(t)dt] x 0 = − ∫ u T (t)[BT e−At x0 ]dt . 0 [ 0 ]

(5.11)

From equation (5.9), (5.11) can be derived as: 󵄩󵄩 󵄩󵄩2 󵄩󵄩x 0 󵄩󵄩 = 0



x0 = 0 ,

(5.12)

which contradicts x0 ≠ 0. Thus Wc [0, t1 ] is nonsingular. Proof has been provided. Theorem 5.2 (Controllability Rank Criteria). The system (5.5) is controllable only if the n × np controllability matrix Qc = [B

AB

A2 B

...

A n−1 B]

(5.13)

has rank n (full row rank). Proof. First we show that, if rank Qc = n, then (5.5) is controllable. Suppose the system is not completely controllable. Using the Gram Matrix Criteria, we can get that the Gram matrix: t1

Wc [0, t1 ] ≜ ∫ e−At BBT e−A t dt , T

0

∀t1 > 0

(5.14)

108 | 5 Controllability and Observability

is singular, which means that a nonzero state α exists, such that: t1

0 = α Wc [0, t1 ]α = ∫ α T e−At BBT e−A t dt T

T

0 t1 T

= ∫ [α T e−At B] [α T e−At B] dt .

(5.15)

0

Therefore, we have: αT e−At B = 0 ,

∀t ∈ [0, t1 ] .

(5.16)

Compute the n − 1 order derivative of the above equation and let t = 0, and we have: αT B = 0 ,

α T AB = 0 ,

αT A2 B = 0 ,

... ,

α T A n−1 B = 0 ,

which equals: α T [B

AB

A2 B

...

A n−1 B] = αT Qc = 0 .

(5.17)

Because α ≠ 0, we know that all the rows of Qc are linearly independent and equivalently, rank Qc < n, which contradicts the hypothesis that rank Qc = n. So the system is controllable. We can also prove the converse by contradiction. Suppose that rank Qc < n, and that there exists a nonzero state α, such that: α T Qc = α T [B

AB

A2 B

...

A n−1 B] = 0 ,

which implies: αT A i B = 0 ,

i = 0, 1, . . . , n − 1 .

(5.18)

Thus, for any t1 > 0, we have: ±

Ai ti B=0, i!

∀t ∈ [0, t1 ] ,

i = 0, 1, 2, . . . ,

or: 0 = α T [I − At +

1 2 2 1 3 3 A t − A t + . . . ] B = α T e−At B , 2! 3!

∀t ∈ [0, t1 ] .

(5.19)

Hence, we can get: t1

0 = α ∫ e−At BBT e−A t αdt = α T Wc [0, t1 ]α , T

T

(5.20)

0

which indicates that the Gram matrix Wc [0, t1 ] is singular, and that the system is not totally controllable. This contradicts the hypothesis that the system is controllable. Proof has been provided.

5.3 Criteria

| 109

Theorem 5.3 (Controllability PBH Criteria). The system (5.5) is controllable only if: rank[sI − A, B] = n ,

∀s ∈ ℓ ,

(5.21)

i = 1, 2, . . . , n ,

(5.22)

or if: rank[λ i I − A, B] = n ,

where ℓ is plural field and λ i (i = 1, 2, . . . , n) is the eigenvalue. Proof. First we prove that, if (5.5) is controllable, then equation (5.21) and (5.22) are correct. Suppose for a certain eigenvalue λ i , rank[λ i I − A, B] < n exists, which implies that all the rows of Qc are linearly independent. Hence, there must exist a nonzero n-dimensional constant vector α, such that: α T [λ i I − A, B] = 0 .

(5.23)

That is, αT A = λ i α T , α T B = 0. Furthermore, α T B = 0, α T AB = λ i α T B = 0, . . . , α T A n−1 B = 0, which equals: α T [B

AB

...

A n−1 B] = aT Qc = 0 .

(5.24)

Because α ≠ 0, we have rank Qc < n. From the rank criteria, we know that the system is completely controllable and, therefore, the hypothesis is not supported. Besides, for all s in plural field ℓ except the eigenvalues λ i , we have rank[sI − A, B] = n, so (5.22) equals (5.21). Conversely, we suppose the system is not completely controllable, and that there must exist a linear nonsingular transformation which transforms (A, B) into the following form: Ac A 12 A = PAP−1 = [ ] 0 Ac (5.25) Bc B = PB = [ ] , 0 where (A c ∈ R n×n , Bc ∈ R n×p ) and (A c ∈ R(n−h)×(n−h) , Bc ∈ R(n−h)×p ), respectively, denote the controllable part and uncontrollable part after being decomposed. And the eigenvalue of A and A c have the following relationship: λ i = an eigenvalue of Ac = an eigenvalue of A Define q c ∈ ℓ1×(n−h) is one characteristic vector of λ i We can construct a nonzero n-dimensional row vector: qT = [0, q Tc ] P ⋅ P−1 [ qT A = [0, q Tc ] P ⋅ P−1 [

Bc ]=0 0 Ac 0

A12 ]P Ac

= [0, q Tc Ac ] P = [0, λ i q Tc ] P = λ i [0, q Tc ] P = λ i qT .

110 | 5 Controllability and Observability This shows that an n-dimensional row vector qT = 0 exists, such that: qT [λ i I − A, B] = 0 .

(5.26)

Equivalently, a λ i ∈ ℓ exists, such that: rank [λ i I − A, B] < n .

(5.27)

Obviously, this contradicts “rank[sI − A, B] = n, ∀s ∈ ℓ ”, thus the hypothesis is not established and (5.5) is controllable. Proof has been provided. Theorem 5.4 (Controllability and the Jordan Canonical Form Criteria I). Consider system (5.5). Suppose that the n eigenvalues λ1 , λ2 , . . . , λ n are pairwise differently, and that the system is controllable, but only if the Jordan canonical of (5.5) is: [ [ ẋ = [ [ [

λ1 λ2 ..

.

] ] ] x + Bu , ] ]

(5.28)

λn ]

[

where B does not contain zero row vector. This means each row vector of B satisfies: b i ≠ 0 ,

i = 1, 2, . . . , n .

(5.29)

Proof. For the Jordan canonical (5.28), we construct the PBH Criteria matrix: [ [ [sI − A, B] = [ [ [

s − λi

b1 b2 ] ] .. ] ] . . ]

s − λ2 ..

. s − λn

[

(5.30)

bn]

From the unit structure which makes up the matrix, we have s = λ i , i ∈ [1, 2, . . . , n], rank[sI − A, B] = n, but only if “b i ≠ 0, ∀i ∈ [1, 2, . . . , n]”. Proof has been provided. Theorem 5.5 (Controllability and the Jordan Canonical Form Criteria II). Consider system (5.5). Suppose the n eigenvalues are λ1 (σ 1 layers, α 1 layers), λ2 (σ 2 layers, α 2 layers), . . . , λ l (σ l layers, α l layers), and σ 1 + σ 2 + ⋅ ⋅ ⋅ + σ l = n, λ i ≠ λ j , ∀i ≠ j, i, j = 1, 2, . . . , l. Assume that the following Jordan canonical form is derived from linear nonsingular transformation of state equation (5.5): ̂ , x̂̇ = ̂ A ̂x + Bu

(5.31)

where: [ [ ̂ A =[ [ n×n [ [

J1 J2 ..

.

] ] ] , ] ] Jl]

̂1 B ] [B [ ̂2] ̂ [ B =[ . ] ] , n×p [ .. ] ̂l ] [B

(5.32)

5.3 Criteria

Ji

σ i× σ i

[ [ = i[ [ [

J i1

] ] ] , ] ]

J i2 ..

.

J iα i ]

[

J ik

r ik× r ik

[ [ [ [ = i[ [ [ [ [

λi

1 λi

1 .. .

..

.

..

.

] ] ] ] ] , ] ] ] 1] λi ]

[

| 111

̂ i1 B ] [B [ ̂ i2 ] ̂i = [ . ] , B [ . ] σ i ×p [ . ] ̂ iα ] [B i

(5.33)

̂ 1ik b ] [b ̂ [ 2ik ] ̂ ik = [ . ] , B [ . ] r ik ×p [ . ] ̂ rik ] [b

(5.34)

̂ i1 , B ̂ i2 , . . . , B ̂ iα are all pairwise linear inIf for i = 1, 2, . . . , l, the last row vector of B i dependent, which means that: ̂ ri1 b ] [b ̂ [ ri2 ] ] rank [ [ .. ] = α i , [ . ] ̂ riα ] [b

∀i = 1, 2, . . . , l .

(5.35)

i

then the system (5.31) is controllable.

Proof. For simplicity, suppose that

$$\hat{A} = \operatorname{blkdiag}\!\left(\begin{bmatrix}\lambda_1 & 1 & 0\\ 0 & \lambda_1 & 1\\ 0 & 0 & \lambda_1\end{bmatrix},\ \begin{bmatrix}\lambda_1 & 1\\ 0 & \lambda_1\end{bmatrix},\ \begin{bmatrix}\lambda_2 & 1\\ 0 & \lambda_2\end{bmatrix}\right), \qquad \hat{B} = \begin{bmatrix} \hat{b}_{111} \\ \hat{b}_{211} \\ \hat{b}_{r11} \\ \hat{b}_{112} \\ \hat{b}_{r12} \\ \hat{b}_{121} \\ \hat{b}_{r21} \end{bmatrix}, \tag{5.36}$$

where λ1 ≠ λ2; that is, λ1 has two Jordan blocks of orders 3 and 2, λ2 has one block of order 2, and n = 7. For this Jordan canonical form we construct the PBH criterion matrix

$$[sI - \hat{A},\ \hat{B}] = \begin{bmatrix} s-\lambda_1 & -1 & & & & & & \hat{b}_{111}\\ & s-\lambda_1 & -1 & & & & & \hat{b}_{211}\\ & & s-\lambda_1 & & & & & \hat{b}_{r11}\\ & & & s-\lambda_1 & -1 & & & \hat{b}_{112}\\ & & & & s-\lambda_1 & & & \hat{b}_{r12}\\ & & & & & s-\lambda_2 & -1 & \hat{b}_{121}\\ & & & & & & s-\lambda_2 & \hat{b}_{r21} \end{bmatrix} . \tag{5.37}$$

Applying the rank criterion at s = λ1 yields

$$[\lambda_1 I - \hat{A},\ \hat{B}] = \begin{bmatrix} 0 & -1 & & & & & & \hat{b}_{111}\\ & 0 & -1 & & & & & \hat{b}_{211}\\ & & 0 & & & & & \hat{b}_{r11}\\ & & & 0 & -1 & & & \hat{b}_{112}\\ & & & & 0 & & & \hat{b}_{r12}\\ & & & & & \lambda_1-\lambda_2 & -1 & \hat{b}_{121}\\ & & & & & & \lambda_1-\lambda_2 & \hat{b}_{r21} \end{bmatrix}, \tag{5.38}$$

where λ1 − λ2 ≠ 0. Obviously, [λ1I − Â, B̂] has full row rank, i.e. rank[λ1I − Â, B̂] = n = 7, if and only if

$$\operatorname{rank}\begin{bmatrix} \hat{b}_{r11} \\ \hat{b}_{r12} \end{bmatrix} = \alpha_1 = 2 . \tag{5.39}$$

Similarly, for s = λ2, [λ2I − Â, B̂] has full row rank, rank[λ2I − Â, B̂] = n = 7, if and only if

$$\operatorname{rank}\ \hat{b}_{r21} = \alpha_2 = 1 . \tag{5.40}$$

Therefore, condition (5.35) is proven.

5.3.2 Controllable Examples

Example 5.3. Consider the controllability of the following continuous time LTI system:

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}4 & 0\\ 0 & -5\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}1\\ 2\end{bmatrix}u , \qquad n = 2 .$$

Solution. The controllability matrix is:

$$Q_c = [B \;\; AB] = \begin{bmatrix}1 & 4\\ 2 & -10\end{bmatrix} .$$

Obviously, rank Qc = 2 = n. According to the rank criterion, the system is controllable.

Example 5.4. Consider the controllability of the following continuous time LTI system:

$$\dot{x} = \begin{bmatrix}-1 & -4 & -2\\ 0 & 6 & -1\\ 1 & 7 & -1\end{bmatrix}x + \begin{bmatrix}2 & 0\\ 0 & 1\\ 1 & 1\end{bmatrix}u , \qquad n = 3 .$$

Solution. The controllability matrix is:

$$Q_c = [B \;\; AB \;\; A^2B] = \begin{bmatrix}2 & 0 & -4 & * & * & *\\ 0 & 1 & -1 & * & * & *\\ 1 & 1 & 1 & * & * & *\end{bmatrix} .$$

From the first three columns of Qc we can see that

$$\det\begin{bmatrix}2 & 0 & -4\\ 0 & 1 & -1\\ 1 & 1 & 1\end{bmatrix} \neq 0 , \qquad \text{so} \quad \operatorname{rank} Q_c = 3 = n .$$

Thus, there is no need to compute the last three columns of Qc. According to the rank criterion, the system is controllable.

Example 5.5. Consider the

controllability of the following continuous time LTI sys1 0 0 0

0 −1 0 5

0 0 [1 0] ] [ ]x +[ [0 1] 0] [−2

1 0] ] ]u , 1] 0]

n =4.

Solution. First, compute the matrix: −1 s 0 0

s [0 [ [sI − A, B] = [ [0 [0

0 1 s −5

0 0 −1 s

0 1 0 −2

1 0] ] ] . 1] 0]

The eigenvalues of A are calculated as: λ1 = λ2 = 0 ,

λ 3 = √5 ,

λ4 = −√5 .

Next, we check the rank of [sI − A, B] for each eigenvalue. For s = λ1 = λ2 = 0, we have: 0 [0 [ rank[sI − A, B] = rank [ [0 [0 −1 [0 [ = rank [ [0 [0

−1 0 0 0 0 1 0 −5

0 1 0 −5 0 0 −1 0

0 0 −1 0

0 1 0 −2

1 0] ] ] , 1] 0]

0 −1] ] ]=4=n. 0] −2]

114 | 5 Controllability and Observability For s = λ3 = √5, we have: √5 [0 [ rank[sI − A, B] = rank [ [0 [0

−1 √5 0 0

0 1 √5 −5

0 0 −1 √5

√5 [0 [ = rank [ [0 [0

−1 √5 0 0

0 1 0 −2

1 0] ] ]=4=n. 1] 0]

0 1 0 −2

1 0] ] ] . 1] 0]

For s = λ4 = −√5, we have: −√5 [ 0 [ rank[sI − A, B] = rank [ [ 0 [ 0

−1 −√5 0 0

0 1 −√5 −5

−√5 [ 0 [ = rank [ [ 0 [ 0

−1 −√5 0 0

0 1 0 −2

0 0 −1 −√5

0 1 0 −2

1 0] ] ] , 1] 0]

1 0] ] ]=4=n. 1] 0]

This shows that the given system qualifies the PBH Criteria, and it is controllable. Example 5.6. Considering a continuous time LTI system with pairwise different eigenvalues. Suppose that the Jordan canonical sate equation is: ẋ 1 −7 [ ̇ ] [ [x2 ] = [ 0 [ẋ 3 ] [ 0

0 −2 0

0 x1 0 ][ ] [ 0] [x2 ] + [4 1] [x3 ] [0

2 ] u1 0] [ ] . u2 1]

Solution. We can see directly that the matrix B does not contain zero row vectors. According to Jordan canonical form criteria I, the system is controllable. Example 5.7. Considering a continuous time LTI system with duplicate eigenvalues. Suppose that the Jordan canonical state equation is: −2 [0 [ [ [ ̇̂x = [ [ [ [ [ [ [ [

1 −2 −2 −2 3 0

1 3

0 [1 ] [ ] [ ] [0 ] [ ] ] ̂x + [0 [ ] [0 ] [ ] [ ] [1 ] 3] [0

0 0 4 0 0 1 4

0 0] ] ] 0] ] 7] ]u . 0] ] ] 0] 1]

5.3 Criteria |

115

Considering the last rows of Jordan blocks for λ1 = −2 and λ2 = 3, we find the corrê and construct the following two matrices: sponding rows in B ̂ r11 b 1 [̂ ] [ [b r12 ] = [0 ̂ [b r13 ] [0

0 4 0

0 ] 0] , 7]

̂ r21 b 1 [̂ ] = [ 0 b r22

1 4

0 ] . 1

Hence, we can see that both of them have full ranks. According to criteria II of the Jordan canonical form, the system is controllable.

5.3.3 Observable Criteria Consider the continuous time LTI system: ẋ = Ax + Bu

(5.41)

y = Cx + Du ,

where A, B, C, and D are respectively n × n, n × p, q × n, and q × p constant matrices. Theorem 5.6 (Gram Matrix Criteria). The state equation (5.41) is observable only if the n × n matrix t1

T

Wo [0, t1 ] ≜ ∫ e A t CT Ce At dt

(5.42)

0

is nonsingular for any t1 > 0. T

Proof. We premultiply (5.4) by e A t CT , and then integrate it over [0, t1 ] to yield: t1

t1

(∫ e

AT t T

T

At

C Ce dt) x(0) = ∫ e A t CT y(t)dt .

0

(5.43)

0

If Wo [0, t1 ] is nonsingular, then: t1

x(0) = Wo−1 [0, t1 ] ∫ e A t CT y(t)dt . T

(5.44)

0

This yields a unique x(0). It also shows that if Wo [0, t1 ] (for any t1 > 0) and is nonsingular, then (5.41) is observable. Next, we show that, if Wo [0, t1 ] is singular or, equivalently, positive semidefinite for all t1 , then (5.41) is not observable. If Wo [0, t1 ] is positive semidefinite, there exists an n × 1 nonzero constant vector v such that: t1

t1 T AT t T

T

v Wo [0, t1 ]v = ∫ v e 0

At

C Ce vdt = ∫ ‖Ce At v‖2 dt = 0 , 0

116 | 5 Controllability and Observability

which implies that: Ce At v ≡ 0 ,

(5.45)

for all t in [0, t1 ]. If u ≡ 0, then x1 (0) = v ≠ 0 and x2 (0) = 0 both yield the same y(t) = Ce At x i (0) ≡ 0 . Two different initial states yield the same zeroinput response. Therefore, we cannot uniquely determine x(0). Thus (5.41) is not observable. This completes the proof for Theorem 5.1. Theorem 5.7 (Theorem of Duality). The pair (A, B) is controllable only if the pair (AT , BT ) is observable. Proof. The pair (A, B) is controllable only if t1 T

Wc [0, t1 ] = ∫ e At BBT e A t dt 0

is nonsingular for any t1 . The pair (AT , BT ) is observable only if, by replacing A with T t AT and C with BT in (5.42), Wo [0, t1 ] = ∫0 1 e At BBT e A t dt is nonsingular for any t. Theorem 5.8 (Rank Criteria). The state equation (5.41) is observable only if the nq × n observability matrix C [ CA ] ] [ ] Qo = [ [ .. ] , [ . ] n−1 [CA ]

or

QTo = [CT

AT CT

...

(AT )n−1 CT ] ,

has rank n. Theorem 5.9 (Observable PBH Rank Criteria). The state equation (5.41) is observable only if: λi I − A rank [ ] = n , and i = 1, 2, . . . , n , C at the eigenvalue, λ, of A. Theorem 5.10 (Observable PBH Characteristic Vector Criteria). The state equation (5.41) is observable only if an orthogonal nonzero right characteristic vector for all rows of matrix C in matrix A does not exist. Equivalently, the only right characteristic vector at every eigenvalue λ, of A, that can satisfy the following equations: Aα = λ i α , is α = 0.

Cα = 0 ,

5.3 Criteria

| 117

Theorem 5.11 (Observable Jordan Canonical Form Criteria). Assuming λ i ≠ λ j , ∀i ≠ j, the eigenvalues of system (5.41) are λ i (σ i layers, α i layers) with i varying from 1 to l, and (σ 1 + σ 2 + . . . σ l ) = n. The Jordan canonical form of the system is obtained by linearly nonsingular transformation: ̂ẋ = ̂ A ̂x ̂ ̂x , y=C where: [ [ ̂ A =[ [ (n×n) [

J1

] ] ] , ] ]

J2 ..

.

J i1

] ] ] , ] ]

J i2 ..

̂

̂ i1 = [C

̂ i2 C

̂

= [̂c1ik

̂c2ik

...

̂ l] , C

.

Ci (q×σ i )

...

̂ iα ] , C i

J iα i ]

[ [ [ [ [ J ik = [ [ (r ik ×r ik ) [ [ [

̂2 C

Jl]

[ [ [ Ji = [ [ (σ i ×σ i ) [

̂ = [C ̂1 C

(q×n)

λi

1 λi

1 .. .

..

.

..

.

[

] ] ] ] ] , ] ] ] 1] λi ]

C ik (q×r ik )

...

̂c rik ] .

̂ i1 , C ̂ i2 , . . . , C ̂ iα are linearly independent. That For i = 1, 2, . . . , l, the first columns of C i is: rank [̂c1i1 ̂c1i2 . . . ̂c1iα i ] = α i , ∀i = 1, 2, . . . , l .

5.3.4 Observable Examples Example 5.8. Is the state equation: −1 [ ẋ = [ 0 [1 0 y=[ 1 observable?

−4 6 7 2 1

−2 ] −1] x , −1]

1 ]x 0

n=3,


Solution. 0 [ [1 [ C [1 [ ] rank Qo = rank [ CA ] = rank [ [∗ [ 2 CA [ [ ] [∗ [∗

2 1 19 ∗ ∗ ∗

1 ] 0] ] −3] ]=3=n. ∗] ] ] ∗] ∗]

Obviously, we know that the matrix Qo has full rank from the first three rows. Therefore, the system is completely observable from rank criteria. Example 5.9. Is the state equation: 0 [0 [ ẋ = [ [0 [0

1 0 0 0

0 −1 0 5

0 1

1 0

0 1

y=[

0 0] ] ]x , 1] 0]

n=4,

−2 ]x 0

observable? Solution. First we compute the eigenvalues of A: 󵄨󵄨 λ 󵄨󵄨 󵄨󵄨 󵄨󵄨0 |λI − A| = 󵄨󵄨󵄨 󵄨󵄨0 󵄨󵄨 󵄨󵄨󵄨0

−1 λ 0 0

0 󵄨󵄨󵄨󵄨 󵄨 0 󵄨󵄨󵄨󵄨 󵄨=0, −1󵄨󵄨󵄨󵄨 󵄨 λ 󵄨󵄨󵄨

0 1 λ −5

λ 3 = √5 ,

λ1 = λ2 = 0 ,

λ 3 = −√5 .

According to observable PBH rank criteria, we compute:

λI − A ] C λ=0

0 [ [0 [ [0 = rank [ [0 [ [ [0 [1

λI − A ] C λ=√5

[ [ [ [ = rank [ [ [ [ [

rank [

rank [

√5 0 0 0 0 [1

−1 0 0 0 1 0 −1 √5 0 0 1 0

0 1 0 −5 0 1

0 ] 0] ] −1] ]=4=n, 0] ] ] −2] 0]

0 1 √5 −5 0 1

0 ] 0] ] −1 ] ]=4=n, √5] ] ] −2 ] 0]


rank [

λI − A ] C λ=−√5

−√5 0 0 0 0 [ 1

−1 −√5 0 0 1 0

[ [ [ [ = rank [ [ [ [ [

0 1 −√5 −5 0 1


0 ] 0 ] ] −1 ] ]=4=n, −√5] ] ] −2 ] 0 ]

which satisfy the observable PBH rank criteria. Therefore, the system is completely observable. Example 5.10. Consider a system with pairwise different eigenvalues, and suppose its Jordan canonical form is: ẋ 1 −7 [ ̇ ] [ [x2 ] = [ 0 [ẋ 3 ] [ 0 0 y=[ 2

0 −2 0 4 0

0 x1 ][ ] 0] [x2 ] , 1] [x3 ]

0 ]x . 1

Examine the observability of the system. Solution. We know the matrix C does not consist of a column whose elements are all zero. According to the observable Jordan canonical form criteria, we know that the system is observable. Example 5.11. Consider an LTI system with duplicate eigenvalues. Suppose that the Jordan canonical form is as follows: −2 [0 [ [ [ [ ̂ẋ = [ [ [ [ [ [

1 −2 −2 −2 3 0

1 3

3]

[ 1 [ y = [0 [0

] ] ] ] ] ] ̂x , ] ] ] ] ]

0 0 0

0 4 0

0 0 7

1 1 0

0 0 0

0 ] 4] ̂x . 1]

120 | 5 Controllability and Observability Solution. Consider the first column of two Jordan blocks for λ = −2 and λ = 3. Find the ̂ and construct the following two matrices: corresponding columns from the matrix C [̂c111

̂c112

1 ̂c113 ] = [ [0 [0

0 4 0

0 ] 0] , 7]

[̂c121

1 ̂c122 ] = [ [1 [0

0 ] 4] . 1]

The columns of each of the two matrices are linearly independent, i.e. both matrices have full column rank. According to the observable Jordan canonical form criterion, the system is observable.
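As with controllability, these observability tests can be confirmed numerically. A minimal MATLAB sketch (Control System Toolbox assumed; the variable names are illustrative only), applied to the system of Example 5.8:

% Observability check for the system of Example 5.8
A = [-1 -4 -2; 0 6 -1; 1 7 -1];
C = [0 2 1; 1 1 0];
n = size(A,1);
Qo = obsv(A,C);                    % stacks C; C*A; ...; C*A^(n-1)
rank(Qo)                           % = 3 = n, so the system is observable
% PBH form of the same test
lambda = eig(A);
for k = 1:n
    rank([lambda(k)*eye(n)-A; C])  % each value equals 3 = n
end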

5.4 Duality System 5.4.1 Definition There are two systems. The system Σ1 is shown as follows: ẋ 1 = A1 x1 + B1 u 1 y1 = C1 x1 . The other system, Σ2 , is:

ẋ 2 = A2 x2 + B2 u 2 y2 = C2 x2 .

If the following conditions are satisfied, the system Σ1 and Σ2 are duality systems. A2 = AT1 ,

B2 = CT1 ,

C2 = BT1 ,

(5.46)

where x1, x2 are n-dimensional state vectors; u1, u2 are r-dimensional and m-dimensional control vectors, respectively; y1, y2 are m-dimensional and r-dimensional output vectors, respectively; A1, A2 are n × n system matrices; B1, B2 are n × r and n × m control matrices, respectively; and C1, C2 are m × n and r × n output matrices, respectively. Obviously, the system Σ1 is an nth-order system with r inputs and m outputs, while its duality system Σ2 is an nth-order system with m inputs and r outputs. The configurations of the duality systems Σ1 and Σ2 are shown in Figure 5.4.



Fig. 5.4: Configuration diagrams of duality systems: (a) system Σ1; (b) its duality system Σ2.

5.4.2 Properties of Duality Systems

Conclusion 5.1. Whether a system is a continuous time system or a discrete time system, if the system Σ1 is linear, its duality system Σ2 is also linear. Similarly, if the system Σ1 is time varying or time invariant, its duality system Σ2 is likewise time varying or time invariant.

Conclusion 5.2. The transfer function matrices of duality systems are the transposes of each other.

Proof. As shown in Figure 5.4 (a), the transfer function matrix W1(s) of the system Σ1 is an m × r matrix: W1(s) = C1(sI − A1)⁻¹B1. As shown in Figure 5.4 (b), the transfer function matrix W2(s) of the system Σ2 is an r × m matrix: W2(s) = C2(sI − A2)⁻¹B2 = B1ᵀ(sI − A1ᵀ)⁻¹C1ᵀ

= BT1 [(sI − A1 )−1 ] CT1 . Obviously, [W2 (s)]T = C1 (sI − A1 )−1 B1 = W1 (s) .

In the same way, the input–state transfer matrix (sI − A1)⁻¹B1 of the system Σ1 and the state–output transfer matrix C2(sI − A2)⁻¹ of the system Σ2 are the transposes of each other, and the state–output transfer matrix C1(sI − A1)⁻¹ of the system Σ1 and the input–state transfer matrix (sI − A2)⁻¹B2 of the system Σ2 are the transposes of each other. In addition, the characteristic equations of duality systems are the same; that is, |sI − A2| = |sI − A1ᵀ| = |sI − A1|.

Conclusion 5.3. Let the system Σ1 = (A1, B1, C1) and the system Σ2 = (A2, B2, C2) be duality systems. The controllability of system Σ1 is equivalent to the observability of system Σ2; similarly, the observability of system Σ1 is equivalent to the controllability of system Σ2. In other words, if the system Σ1 is controllable or observable, the system Σ2 is correspondingly observable or controllable.

Proof. The controllability matrix of system Σ2 is: Q2c = [B2

A2 B2

...

A2n−1 B2 ] ,

Suppose its rank equals n, i.e. the system Σ2 is controllable. Substituting equation (5.46) into the above expression results in: Q2c = [C1ᵀ A1ᵀC1ᵀ . . . (A1ᵀ)ⁿ⁻¹C1ᵀ] = Q1oᵀ, where Q1o is the observability matrix of system Σ1. This indicates that the rank of the observability matrix of system Σ1 is n; therefore, system Σ1 is observable. In the same way, we know: Q2oᵀ = [C2ᵀ

AT2 CT2

...

(AT2 )n−1 CT2 ]

= [B1

A1 B1

...

A1n−1 B1 ] = Q1c .

If Q2o has full rank, the system Σ2 is observable, and Q1c also has full rank. Therefore, the system Σ1 is controllable.
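Conclusion 5.3 can also be checked numerically, since the controllability matrix of Σ1 and the observability matrix of its dual Σ2 are transposes of each other. In the short MATLAB sketch below, the matrices A1, B1, C1 are placeholders standing in for any test system, not data taken from the text:

% Duality check: controllability of (A1,B1) vs observability of (A2,C2)
A1 = [0 1; -2 -3];  B1 = [0; 1];  C1 = [1 0];   % placeholder system
A2 = A1.';  B2 = C1.';  C2 = B1.';              % dual system, eq. (5.46)
Q1c = ctrb(A1,B1);         % controllability matrix of Sigma_1
Q2o = obsv(A2,C2);         % observability matrix of Sigma_2
norm(Q2o - Q1c.')          % = 0: Q2o equals Q1c transposed, so the ranks coincide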

5.5 Canonical Form This section discusses the controllability canonical forms and observability canonical forms of state equations.

5.5.1 Controllability Canonical Form of Single Input Systems Consider the n-dimensional time invariant system: ẋ = Ax + Bu y = Cx .

5.5 Canonical Form


If the system is controllable, we have: rank [B

AB

...

A n−1 B] = n .

Hence, there are at least n linearly independent n-dimensional column vectors in the controllability matrix. We select n linearly independent vectors from the nr column vectors and make some linear transformations. Then we can get a certain controllability canonical form, whose columns are still linear independent. For single input single output systems, there is only one group of linearly independent vectors, so the controllability canonical form is unique. However, for multiple input multiple output systems, there are various choices of n linearly independent vectors, so the controllability canonical form is not unique. Obviously, the canonical forms exist if, and only if, the system is controllable. Controllability Canonical Form I If the LTI single input system: ẋ = Ax + bu

(5.47)

y = Cx is controllable, then there exists a linear nonsingular transformation: x = Tc1 x ,

Tc1 = [A n−1 b

A n−2 b

...

1 [ [α n−1 [ [ .. b] [ [ . [ [ [ α2 [ α1

1 .. α3 α2

. ..

...

. α n−1

0 ] ] ] ] ] , ] ] ] ] 1]

(5.48)

which can transfer the state space equation into: ẋ = Ax + bu

(5.49)

y = Cx , where:

−1 A = Tc1 ATc1

0 .. . 0 [−α0

[ [ =[ [ [

C = CTc1 = [β 0

β1

1 .. −α 1 ...

.

...

] ] ] , ] 1 ] −α n−1 ]

0 [0] ] [ −1 ] b = Tc1 b=[ [ .. ] , [.] [1]

β n−1 ] .

(5.50)

(5.51)

Equation (5.49) is called the controllability canonical form I, where α i (i = 0, 1, . . . , n − 1) are the coefficients of the following polynomial: |λI − A| = λ n + α n−1 λ n−1 + ⋅ ⋅ ⋅ + α 1 λ + α0 .

124 | 5 Controllability and Observability β i (i = 0, 1, . . . , n − 1) are the results of CTc1 : β 0 = C(A n−1 b + α n−1 A n−2 b + ⋅ ⋅ ⋅ + α 1 b)} } } } } .. } } . } } } } β n−2 = C(Ab + α n−1 b) } } } β n−1 = Cb }

(5.52)

Proof. Suppose the system is controllable; the n × 1 vectors b, Ab, . . . , A n−1 b are linearly independent, and the new vectors e1 , e2 , . . . , e n in the following combination are also linearly independent: e1 = A n−1 b + α n−1 A n−2 b + α n−2 A n−3 b + ⋅ ⋅ ⋅ + α 1 b } } } } } e2 = A n−2 b + α n−1 A n−3 b + ⋅ ⋅ ⋅ + α 2 b } } } } } .. } . } } } } } } e n−1 = Ab + α n−1 b } } } en = b , }

(5.53)

where α i (i = 0, 1, . . . , n − 1) are the coefficients of the polynomial. Thus, the transformation matrix Tc1 are composed of e1 , e2 , . . . , e n : Tc1 = [e1

e2

...

en ] .

(5.54)

−1 As A = Tc1 ATc1 , we have:

Tc1 A = ATc1 = A [e1

e2

...

e n ] = [Ae1

Ae2

...

Ae n ] .

Substituting equation (5.53) into the above equation yields: Ae1 = A(A n−1 b + α n−1 A n−2 b + ⋅ ⋅ ⋅ + α 1 b) = (A n b + α n−1 A n−1 b + ⋅ ⋅ ⋅ + α 1 Ab + α 0 b) − α 0 b = −α 0 b = −α 0 e n Ae2 = A(A n−2 b + α n−1 A n−3 b + ⋅ ⋅ ⋅ + α 2 b) = (A n−1 b + α n−1 A n−2 b + ⋅ ⋅ ⋅ + α 2 Ab + α 1 b) − α 1 b = e1 − α1 e n .. . Ae n−1 = A(Ab + α n−1 b) = (A2 b + α n−1 Ab + α n−2 b) − α n−2 b = e n−2 − α n−2 e n Ae n = Ab = (Ab + α n−1 b) − α n−1 b = e n−1 − α n−1 e n .

(5.55)



Then we substitute Ae1 , Ae2 , . . . , Ae n into (5.55): Tc1 A = [Ae1

= [e1

Ae2

e2

...

...

Ae n ] = [−α 0 e n 0 .. . 0 [−α 0

(e1 − α 1 e n )

1

[ [ en] [ [ [

.. −α 1

...

(e n−1 − α n−1 e n )]

] ] ] . ] 1 ] −α n−1 ]

.

...

−1 Furthermore, we deduce b. From b = Tc1 b, we have Tc1 b = b. Substituting b = e n yields: 0 [0] [ ] ] Tc1 b = e n = [e1 e2 . . . e n ] [ [ .. ] . [.] [1]

Therefore, 0 [0 ] [ ] ] b=[ [ .. ] . [.] [1 ] Finally, we deduce C. From C = CTc1 , we get: C = CTc1 = C [e1

e2

...

en ] .

Substituting (5.53) into the above equation yields: C = C [A n−1 b + α n−1 A n−2 b + α n−2 A n−3 b + ⋅ ⋅ ⋅ + α 1 b = [β 0 where:

β1

...

...

Ab + α n−1 b

b]

β n−1 ] ,

β 0 = C(A n−1 b + α n−1 A n−2 b + α n−2 A n−3 b + ⋅ ⋅ ⋅ + α 1 b) .. . β n−2 = C(Ab + α n−1 b) β n−1 = Cb .

We can derive the transfer function easily from the controllability canonical form I: W(s) = C(sI − A)−1 b =

β n−1 s n−1 + β n−2 s n−2 + ⋅ ⋅ ⋅ + β 1 s + β 0 . s n + α n−1 s n−1 + ⋅ ⋅ ⋅ + α 1 s + α 0

(5.56)

From (5.56), we can see that the coefficients of the denominator polynomial are the negative value of the elements of the last row of A, and that the coefficients of the numerator polynomial are the elements of C. Hence, we can write out A, b, C, directly from the coefficients of the denominator polynomial and numerator polynomial of the system transfer function.
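This observation translates directly into code: the canonical-form matrices of (5.50)–(5.51) can be filled in from the α_i and β_i of (5.56). The MATLAB sketch below is only an illustration; the coefficient values are those of Example 5.12 further below, used here as placeholders:

% Controllability canonical form I written down from the coefficients of (5.56)
alpha = [2 -9 0];          % [alpha0 alpha1 alpha2]  (values of Example 5.12)
beta  = [3  2 1];          % [beta0  beta1  beta2 ]  (values of Example 5.12)
n = numel(alpha);
Abar = [zeros(n-1,1) eye(n-1); -alpha];   % companion matrix, last row = -alpha_i
bbar = [zeros(n-1,1); 1];
Cbar = beta;
tf(ss(Abar,bbar,Cbar,0))   % returns (s^2+2s+3)/(s^3-9s+2), i.e. W(s) of (5.56)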


Controllability Canonical Form II If the LTI single input system, ẋ = Ax + bu

(5.57)

y = Cx , is controllable, then there exists a linear nonsingular transformation: x = Tc2 x = [b

Ab

A n−1 b] x ,

...

(5.58)

which will transfer the state space equation into: ẋ = Ax + bu

(5.59)

y = Cx . Here:

−1 A = Tc2 ATc2

0 [ [1 [ [ = [0 [. [. [. [0

C = CTc2 = [β 0

0 0 1 .. . 0 β1

... ... ...

0 0 0

...

0 1

−α0 ] −α1 ] ] −α2 ] ] , .. ] ] . ] −α n−1 ]

1 [0] ] [ −1 ] b = Tc2 b=[ [ .. ] . [.] [0]

β n−1 ]

...

(5.60)

(5.61)

Equation (5.59) is called the controllability canonical form II, where α 0 , α 1 , . . . , α n−1 are the coefficients of the following polynomial: |λI − A| = λ n + α n−1 λ n−1 + ⋅ ⋅ ⋅ + α 1 λ + α 0 . β 0 , β 1 , . . . , β n−1 are the results of CTc2 : β 0 = Cb

} } } } } } } } .. } } } . } } } n−1 = CA b }

β 1 = CAb

β n−1

(5.62)

Proof. As the system is controllable, the controllability matrix Qc = [b

Ab

...

A n−1 b]

is nonsingular. Let: x = Tc2 x where Tc2 = [b Ab . . . after transformation is:

A n−1 b]. Thus the state equation and output equation

−1 −1 ẋ = Ax + bu = Tc2 ATc2 x + Tc2 bu

y = Cx = CTc2 x .



First, we deduce A: ATc2 = A [b

Ab

A n−1 b] = [Ab

...

A2 b

...

A n b] .

(5.63)

According to the Cayley–Hamilton theorem, we have: A n = −α n−1 A n−1 − α n−2 A n−2 − ⋅ ⋅ ⋅ − α 1 A − α0 . Substituting the above equation into (5.63) yields: ATc2 = [Ab

A2 b

...

(−α n−1 A n−1 − α n−2 A n−2 − ⋅ ⋅ ⋅ − α 0 )b] .

(5.64)

Now, rewrite (5.64) into matrix form:

ATc2 = [b

Ab

0 [ [1 [ [ A n−1 b] [0 [. [. [. [0

...

0 0 1 .. . 0

... ... ...

0 0 0

...

0 1

−α 0 ] −α 1 ] ] −α 2 ] ] , .. ] ] . ] −α n−1 ]

thus arriving at:

ATc2

0 [ [1 [ [ = Tc2 [0 [. [ .. [ [0

0 0 1 .. . 0

... ... ...

0 0 0

...

0 1

−α0 ] −α1 ] ] −α2 ] ] . .. ] . ] ] −α n−1 ]

−1 : Then multiply the above equation by Tc2

−1 A = Tc2 ATc2

0 [ [1 [ [ = [0 [. [ .. [ [0

−1 As b = Tc2 b, equally b = Tc2 b = [b

0 0 1 .. . 0

Ab

... ... ...

0 0 0

...

0 1

−α0 ] −α1 ] ] −α2 ] ] . .. ] . ] ] −α n−1 ]

A n−1 b]. Obviously, in order to guar-

...

antee (5.65), b should satisfy: 1 [0 ] [ ] ] b=[ [ .. ] , [.] [0 ] C = CTc2 = [Cb

CAb

C = CTc2 = [β 0

β1

(5.65)

...

CA n−1 b] .

That is: ...

β n−1 ] .


5.5.2 Observability Canonical Form of Single Output Systems Similar to the condition for controllability canonical forms, the system has observability canonical form only when it is observable. That is: C [ CA ] ] [ ] rank [ [ .. ] = n . [ . ] n−1 [CA ] The state space equation has two types of observability canonical forms: observability canonical form I and observability canonical form II. Observability Canonical Form I If the LTI single output system ẋ = Ax + bu

(5.66)

y = Cx is controllable, then there exists a linear nonsingular transformation: x = To1 ̃x ,

(5.67)

which will transfer the state space equation (5.65) into: ̃ x̃̇ = ̃ A ̃x + bu ̃ ̃x . y=C

(5.68)

Define the inverse of the transformation matrix To1 :

−1 To1

C [ CA ] ] [ ] =N=[ [ .. ] . [ . ] n−1 [CA ]

(5.69)

Here:

−1 ̃ ATo1 A = To1

0 0 0 .. . −α [ 0

[ [ [ [ =[ [ [ [

̃ = CTo1 = [1 C

0

...

1 0 0 .. . −α 1 0] .

0 1 0 .. . −α 2

... ... ... ...

0 ] 0 ] ] 0 ] ] , .. ] . ] ] −α n−1 ]

β0 β1 ] ] .. ] ] , (5.70) . ] [β n−1 ]

[ [ ̃ = T −1 b = [ b o1 [ [

(5.71)

Equation (5.71) is called the observability canonical form I, where α i (i = 0, 1, . . . , n − 1) are the coefficients of the polynomial A.



Observability Canonical Form II If the LTI single output system: ẋ = Ax + bu

(5.72)

y = Cx is controllable, then there exists a linear nonsingular transformation: x = To2 ̃x , where:

To2

1 [ [0 [ [ = [ ... [ [ [0 [0

α n−1 1 .. . 0 0

... ... ... ...

α2 α3 .. . 1 0

(5.73) CA n−1 α1 ] [ n−2 ] α 2 ] [CA ] ][ ] .. ] [ .. ] . [ . ] . ] ][ ] ][ ] α n−1 ] [ CA ] 1 ][ C ]

(5.74)

The state space equation after transformation is: ̃ x̃̇ = ̃ A ̃x + bu ̃ x̃ , y=C

(5.75)

where:

−1 ̃ AT02 A = To2

0 [ [1 [ [ =[ [0 [. [. [. [0

̃ = CTo1 = [0 C

0

0 0

... ...

1 .. . 0

0

0 0 .. .

...

0 1

...

−α0 ] −α1 ] ] ] −α 2 ] ] , .. ] ] . ] −α n−1 ]

β0 β1 ] ] .. ] ] , . ] [β n−1 ]

[ [ ̃ = T −1 b = [ b o2 [ [

1] .

(5.76)

(5.77)

Equation (5.75) is called the observability canonical form II, where α i (i = 0, 1, . . . , n − 1) are the coefficients of the polynomial A, and β i (i = 0, 1, . . . , n − 1) are the −1 results of To2 b and is shown in (5.51). The observability canonical form I is dual to the controllability canonical form II. And the observability canonical form II is dual to the controllability canonical form I. Proof. We prove it by the theory of duality. First we construct the duality system of Σ = (A, b, C): A∗ = AT b∗ = CT C∗ = bT .

130 | 5 Controllability and Observability This gives us the controllability canonical form II of Σ∗ = (A∗ , b ∗ , C∗ ). The observability canonical form I of Σ is just the controllability canonical form II of Σ∗ . For example: T ̃ A = A∗ = A

̃ = b∗ = CT b ̃ = C∗ = bT , C where: A, b, C

the coefficient matrices of the controllability canonical form II of Σ ;

̃ C ̃ ̃ A, b, ∗



the coefficient matrices of the observability canonical form I of Σ ; ∗

A , b , C the coefficient matrices of the controllability canonical form II of Σ∗ .

Therefore the transformation above can be deduced directly by the theory of duality. The same is true for observability canonical form I. Besides, we can also derive the transfer function directly from the controllability canonical form II: W(s) =

β n−1 s n−1 + β n−2 s n−2 + ⋅ ⋅ ⋅ + β 0 , + α n−1 s n−1 + α n−2 s n−2 ⋅ ⋅ ⋅ + α 0

sn

(5.78)

Where the coefficients of the denominator polynomial are the negative value of the elements of the last column of ̃ A, and the coefficients of the numerator polynomial ̃ are the elements of b.

5.5.3 Examples Example 5.12. Try to transform the following state space equation into controllability canonical form I: 1 2 0 2 [ [ ] ] ẋ = [3 −1 1] x + [1] u , [0 2 0] [1] y = [0

0

1] x .

Solution. First we judge the controllability of the system: Qc = [b

Ab

2 [ A2 b] = [1 [1

4 6 2

rank Qc = 3 so the system is controllable. Then we compute the characteristic polynomial: |λI − A| = λ3 − 9λ + 2 .

16 ] 8] . 12]



Thus, α 2 = 0, α 1 = −9, α 0 = 2. From equations (5.49) and (5.50), we have: 0 [ A=[ 0 [−α 0 C = C [A2 b

= [0

0

1 0 −α 1

Ab

0 0 ] [ 1 ]=[0 −α 2 ] [−2

1 0 9

1 [ b] [α 2 [α1

0 ] 0] , 1]

16 [ 1] [ 8 [12

4 6 2

0 1 α2

2 1 ][ 1] [ 0 1] [−9

0 ] 1] , 0]

0 1 0

0 ] 0] = [3 1]

2

1] .

So the controllability canonical form I of the system is: 0 [ ẋ = [ 0 [−2 y = [3

1 0 9 2

0 0 [ ] ] 1] x + [0] u , 0] [1] 1] x .

From equation (5.56) we can write out the transfer function of the system as: W(s) =

β 2 s2 + β 1 s + β 0 s2 + 2s + 3 = . s3 + α 2 s2 + α 1 s + α 0 s3 − 9s + 2

Example 5.13. Try to transform the state space equation of Example 5.11 into controllability canonical form II. Solution. As shown in Example 5.11, we know α 2 = 0, α1 = −9, α0 = 2. From (5.60) to (5.61), we have: 0 [ A = [1 [0

0 0 1

−α0 0 ] [ −α 1 ] = [1 −α 2 ] [0

0 0 1

−2 ] 9] , 0]

1 [ ] b = [0] , [0] C = [Cb

CA2 b] = [1

CAb

2

12] .

Thus, the controllability canonical form II of the system is: 0 ̇x = [ [1 [0

0 0 1

−2 1 ] [ ] 9 ] x + [0] u , 0] [0]

y = [1

2

12] x .

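The transformations of Examples 5.12 and 5.13 can be verified numerically. The following MATLAB sketch is only an illustration (the variable names are ours; ctrb requires the Control System Toolbox):

% Verify the canonical forms of Examples 5.12 and 5.13
A = [1 2 0; 3 -1 1; 0 2 0];  b = [2; 1; 1];  C = [0 0 1];
a = poly(A);                        % [1 0 -9 2]: alpha2 = 0, alpha1 = -9, alpha0 = 2
% canonical form I, transformation (5.48)
Tc1 = [A^2*b A*b b]*[1 0 0; a(2) 1 0; a(3) a(2) 1];
Abar1 = Tc1\A*Tc1, bbar1 = Tc1\b, Cbar1 = C*Tc1    % reproduces Example 5.12
% canonical form II, transformation (5.58): Tc2 is the controllability matrix
Tc2 = ctrb(A,b);
Abar2 = Tc2\A*Tc2, bbar2 = Tc2\b, Cbar2 = C*Tc2    % reproduces Example 5.13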

Example 5.14. Try to transform the state space equation of Example 5.11 into observability canonical forms. Solution. Step 1: Find the observability canonical form I of the system. First we compute the observability matrix Qo : C 0 ] [ [ Qo = [ CA ] = [0 2 [CA ] [6

0 2 −2

1 ] 0] . 2]

rank Qo = 3, so the system can be transformed into observability canonical forms. From equations (5.70) and (5.71), we have: 0 [ ̃ A=[0 [−2

1 0 9

0 ] 1] , 0]

1 ] ̃=[ b [2] , [12]

̃ = [1 C

0

0] .

Thus, the observability canonical form I of the system is: 0 [ x̃̇ = [ 0 [−2 y = [1

1 0 9 0

0 1 ] [ ] 1] ̃x + [ 2 ] u , 0] [12] 0] x̃ .

Step 2: Find the observability canonical form II of the system. From equations (5.76) and (5.77), we have: 0 [ ̃ A = [1 [0

−2 ] 9] , 0]

0 0 1

3 ] ̃=[ b [2] , [1]

̃ = [0 C

0

1] .

Thus, the observability canonical form II of the system is: 0 [ x̃̇ = [1 [0

0 0 1

−2 3 ] [ ] 9 ] ̃x + [2] u , 0] [1]

y = [0

0

1] ̃x .

5.5.4 Observability and Controllability Canonical Form of Multiple Input Multiple Output Systems Consider the following transfer function: G(s) =

B n−1 s n−1 + ⋅ ⋅ ⋅ + B1 s + B0 + Bn , s n + a n−1 s n−1 + ⋅ ⋅ ⋅ + a0

(5.79)



where B i is m × r dimensional matrix. The controllable canonical form is: Or [ [ [ Or [ . ẋ = [ [ .. [ [ [ Or [−a0 I r y = [B0

Ir ..

Or ] [ ] ] [ O] ] [ r] ] . ] ]x +[ [ .. ] u ] [ ] ] [O ] ] [ r] ] −a n−1 I r ] [ Ir ]

. ..

. Ir

B1

(5.80)

B n−1 ] x + B n u ,

...

where O r and I r are r × r dimensional zero matrix and unit matrix, while r stands for dimension of input vector and n is the order of the denominator polynomial. Example 5.15. The system transfer function matrix is given below. Try to give a controllable canonical form of the system: s+2

W(s) = [ s+1 s s+1

[ =

1 s+3 ] s+1 s+2

=[

s2 + 5s + 6 −(s2 + 5s + 6)

1 s+1 1 − s+1

1 s+3 ]+ 1 − s+2

s2 + 3s + 2 ] −(s2 + 4s + 3)

(s + 1)(s + 2)(s + 3) [

=

1 −1

5 1 ] s2 + [ −5 −1 s3

1 [ 1

+ 6s2

0 ] 1

1 +[ 1

3 6 ]s+[ −4 −6

2 ] −3

+ 11s + 6

0 ] 1

+[

1 1

0 ] . 1

Solution. For the system shown with the above model, we have n = 3, m = r = 2 and: a0 = 6 , 1 D=[ 1

0 ] , 1

B0 = [

6 −6

a1 = 11 ,

2 ] , −3

a2 = 6 ,

5 B1 = [ −5

3 ] , −4

1 B2 = [ −1

So we get the matrix of the controllable canonical form as follows: 0 [ [0 [ [0 A=[ [0 [ [ [−6 [0

0 0 0 0 0 −6

1 0 0 0 −11 0

6 −6

2 −3

5 −5

C=[

0 1 0 0 0 −11 3 −4

1 −1

0 0 1 0 −6 0

0 ] 0] ] 0] ] , 1] ] ] 0] −6]

1 ] . −1

0 [ [0 [ [0 B=[ [0 [ [ [1 [0

0 ] 0] ] 0] ] 0] ] ] 0] 1]

1 ] . −1

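The block-companion construction (5.80) is mechanical, so it can be assembled in a few lines of code. The MATLAB sketch below is an illustration only, using the data of Example 5.15; kron is used to build the block rows:

% Controllable canonical form (5.80) assembled for the W(s) of Example 5.15
r = 2;  n = 3;                               % number of inputs, denominator order
a  = [6 11 6];                               % [a0 a1 a2]
B0 = [6 2; -6 -3];  B1 = [5 3; -5 -4];  B2 = [1 1; -1 -1];
Bn = [1 0; 1 1];                             % constant (feedthrough) term
Ir = eye(r);  Or = zeros(r);
A = [zeros(r*(n-1),r)  kron(eye(n-1),Ir); -kron(a,Ir)];
B = [Or; Or; Ir];
C = [B0 B1 B2];
D = Bn;
tf(ss(A,B,C,D))                              % reproduces the 2x2 W(s) above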

The observable canonical form is: −α 0 I m B0 ] −α 1 I m ] [ B ] ] 1 ] [ −α 2 I m ] ]u ]x +[ . ] [ ] .. [ .. ] ] . ] [B n−1 ] −α n−1 I m ]

0m [ [ Im [ [ ẋ = [0m [ . [ .. [ [0m

0m 0m Im .. . 0m

... ... ... ... ...

0m 0m 0m .. . 0m

y = [0m

0m

...

Im ] x + Bn u ,

0m 0m 0m .. . Im

(5.81)

where 0m and I m are m × m dimensional zero matrix and unit matrix, while m stands for dimension of output vector, and n is the order of the denominator polynomial.

5.6 System Decomposition 5.6.1 Controllability Decomposition Suppose the LTI system ẋ = Ax + Bu

(5.82)

y = Cx is partly controllable. The controllability matrix is: Qc = [B

AB

...

A n−1 B] ,

and: rank Qc = n1 < n . Therefore, there exists a nonsingular transformation of: x = Rc ̂x ,

(5.83)

Which can transfer the state equation into the following form: ̂ x̂̇ = ̂ A ̂x + Bu ̂ ̂x , y=C

(5.84)

where: ̂x = [

̂ A=

̂x1 ] ̂x2

R−1 c ARc

n1 , n − n1

=

[

0

̂ A12 ] ̂ A22

n1

n − n1

̂ A11

n1 n − n1 ,

(5.85)


̂1 B ̂ = R−1 B ] c B = [ 0 ̂ ̂ = CRc = [ C1 C n1

n1 , n − n1



(5.86)

̂2 ] C . n − n1

(5.87)

From the above equations, we know that after the system is transformed into equation (5.84), the state space description of the system is decomposed into controllable ̂1u + ̂ and uncontrollable parts. The n1 -dimensional subspace ̂ẋ 1 = ̂ A11 x̂1 + B A12 ̂x2 is ̇ ̂ controllable, while the (n − n1 )-dimensional subspace ̂x2 = A22 ̂x2 is uncontrollable. The state decomposition is shown in Figure 5.5. Because ̂x2 cannot be detected from u, ̂x2 only has uncontrollable free response. Obviously, if we neglect the (n − n1 )-dimensional subsystem, we can obtain a controllable system with less dimension.

Fig. 5.5: Controllability decomposition of system.

We form the nonsingular transfer matrix: Rc = [R1

R2

...

R n1

...

Rn ] ,

(5.88)

where the first n1 columns are any linearly independent columns of controllability matrix Qc , and the remaining columns can be chosen as long as Rc is nonsingular.


Example 5.16. Are the state equations 0 [ ẋ = [1 [0

0 0 1

−1 1 ] [ ] −3] x + [1] u −3] [0]

y = [0

1

−2] x

controllable? If not, please write the controllability decomposition of the system. Solution. The controllability matrix is: Qc = [b

Ab

1 [ A2 b] = [1 [0

0 1 1

−1 ] −3] , −2]

and rank Qc = 2 < n. Therefore, the system is partly controllable. We form the nonsingular transfer matrix as equation (5.88). 1 [ ] R1 = b = [1] , [0]

0 [ ] R2 = Ab = [1] , [1]

0 [ ] R3 = [0] , [1]

Thus: 1 [ R c = [1 [0

0 1 1

0 ] 0] , 1]

where R3 is chosen arbitrarily as long as Rc is nonsingular. After the transformation, the new state equation is: x + R−1 x̂̇ = R−1 c ARc ̂ c bu 1 [ = [1 [0

0 1 1

0 [ = [1 [0

−1 −2 0

0 ] 0] 1]

y = CRc ̂x = [1

−1

0 [ [1 [0

0 0 1

−1 1 ][ −3] [1 −3] [0

0 1 1

0 1 ] [ 0] ̂x + [1 1] [0

−1 1 ]̂ [ ] −2 ] x + [ 0 ] u , −1 ] [0] −1

−2] ̂x .

We choose: 1 [ ] R3 = [0] , [1]

1 [ R c = [1 [0

0 1 1

1 ] 0] . 1]

0 1 1

0 ] 0] 1]

−1

1 [ ] [1] u [0]


Thus:

−1 −2 0

0 ̂ẋ = [ [1 [0


1 0 ]̂ [ ] −2 ] x + [ 0 ] u , −1 ] [0]

y = CRc ̂x = [1

−2] ̂x .

−1

5.6.2 Observability Decomposition Suppose the LTI system ẋ = Ax + Bu

(5.89)

y = Cx is partly observable. The observability matrix is: C [ CA ] ] [ ] Qo = [ [ .. ] , [ . ] n−1 [CA ] and: rank Qo = n1 < n . Therefore, there exists a nonsingular transformation: x = Ro ̃x ,

(5.90)

which can transfer the state equation into the following form: ̃ x̃̇ = ̃ A x̃ + Bu ̃ ̃x , y=C

(5.91)

where: ̃x = [

x̃1 ] x̃2

n1 , n − n1

̃ A = R−1 o ARo =

̃ = R−1 B o B = [

[

̃ A11 ̃ A21

0 ] ̃ A22

n1

n − n1

̃1 B ] ̃2 B

̃ ̃ = CRo = [ C1 C n1

n1 n − n1

0] . n − n1

n1 n − n1 ,

,

(5.92)

(5.93)

(5.94)


From the above equations, we know that when the system is transferred into equation (5.91), the state space description of the system is decomposed into observable and unobservable parts. The n1 -dimensional subspace ̃1u x̃̇ 1 = ̃ A11 ̃x1 + B ̃ 1 ̃x1 y=C is observable, while the (n − n1 )-dimensional subspace ̃2u ̃ẋ 2 = ̃ A21 ̃x1 + ̃ A22 ̃x2 + B is unobservable. The state decomposition is shown in Figure 5.6. Obviously, if we do not consider the (n− n1 )-dimensional unobservable subsystem, we can obtain a n1 -dimensional observable system.

Fig. 5.6: Observability decomposition of a system.

We form the nonsingular transfer matrix:

R−1 o

R󸀠1 [ R󸀠 ] [ 2] [ . ] [ . ] [ . ] ] =[ [R󸀠 ] , [ n1 ] [ . ] [ . ] [ . ] 󸀠 [ Rn ]

(5.95)

where the first n1 rows are any linearly independent rows of observability matrix Qo , and the remaining rows can be chosen as long as R−1 o is nonsingular.
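In practice the transformation matrices Rc and Ro need not be built by hand: the Control System Toolbox functions ctrbf and obsvf compute controllability and observability staircase decompositions directly (note that they place the uncontrollable, respectively unobservable, part in the upper-left block, i.e. the ordering is the reverse of (5.85) and (5.92)). A sketch, using the system that appears in Examples 5.16 and 5.17:

% Staircase decompositions via ctrbf/obsvf (Control System Toolbox)
A = [0 0 -1; 1 0 -3; 0 1 -3];  B = [1; 1; 0];  C = [0 1 -2];
[Ac,Bc,Cc,Tc,kc] = ctrbf(A,B,C);   % controllability decomposition
sum(kc)                            % = 2: dimension of the controllable part
[Ao,Bo,Co,To,ko] = obsvf(A,B,C);   % observability decomposition
sum(ko)                            % = 2: dimension of the observable part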



Example 5.17. Are the state equations 0 [ ẋ = [1 [0

0 0 1

−1 1 ] [ ] −3] x + [1] u −3] [0]

y = [0

1

−2] x

observable? If not, please write the observability decomposition of the system. Solution. The observability matrix is: C 0 ] [ [ Qo = [ CA ] = [ 1 2 [CA ] [−2

1 −2 3

−2 ] 3] , −4]

and rank Qo = 2 < n. Therefore, the system is partly observable. We form the nonsingular transfer matrix as equation (5.95). R󸀠1 = C = [0

1

−2] ,

R󸀠2 = CA = [1

−2

3] ,

thus, 0 [ R−1 = 1 [ o [0

1 −2 0

−2 ] 3] 1]

and 2 [ R o = [1 [0

1 0 0

1 ] 2] 1]

where R3 is chosen arbitrarily as long as R−1 o is nonsingular. After the transformation, the new state equation is: ̃ + R−1 x̃̇ = R−1 o ARo x o bu 0 [ = [ −1 [ 1

−1 −2 0

y = CRo ̃x = [ 1

0 1 ] [ ] 0 ] ̃x + [ −1 ] u , −1 ] [ 0 ] 0

0 ] ̃x .

R󸀠3 = [0

0

1] ,


5.6.3 Controllability and Observability Decomposition Suppose the LTI system ẋ = Ax + Bu

(5.96)

y = Cx

is partly controllable and observable. Therefore, there exists a nonsingular transformation: x = Rx , (5.97) which can transfer the state equation into the following form: ẋ = Ax + Bv

(5.98)

y = Cx . Here: A11 [A [ 21 A = R AR = [ [ 0 [ 0

0 A22 0 0

−1

A13 A23 A33 A43

0 A24 ] ] ] , 0 ] A44 ]

B1 [B ] [ 2] B=R B=[ ] , [0] [0] −1

C = CR = [C1

0

(5.99)

(5.100)

C3

0] .

(5.101)

From the configuration of A, B, C, we know that the n-dimensional state space is divided into four subspaces according to the controllability and observability of the system. Equation (5.98) can be rewritten as follows: ẋ co A11 [ẋ ] [A [ co ] [ 21 [ ]=[ [ẋ co ] [ 0 [ẋ co ] [ 0 y = [C1

0 A22 0 0

0

C3

A13 A23 A33 A43

xco B1 0 ] [ ] [ A24 ] [xco ] [B2 ] ] ][ ] + [ ]u 0 ] [xco ] [ 0 ] A44 ] [xco ] [ 0 ]

xco [x ] [ ] 0] [ co ] , [xco ] [xco ]

and the subsystem (A11 , B1 , C1 ) is controllable and observable. The block diagram of equation (5.98) is shown in Figure 5.7.

(5.102)



4

Fig. 5.7: The block diagram of equation (5.98).

From Figure 5.7, we know the condition for transfer of the information between the four subsystems. 1. The steps of transposition Step 1. The transfer matrix xc x = Rc [ ] xc

(5.103)

will transform system Σ = (A, B, C) into: xc ẋ c −1 [ ] = R−1 c ARc [ ] + Rc Bu ẋ c xc =[

A1 0

A2 xc B ][ ]+[ ]u , 0 A4 xc

xc y = CRc [ ] = [C 1 xc

(5.104)

xc C2 ] [ ] , xc

where xc is the controllable state, xc is the uncontrollable state, and Rc is formed according to equation (5.88). Step 2. The transfer matrix xc = Ro2 [xco xco ]T will transform the uncontrollable subsystem Σc = (A4 , 0, C2 ) into: [

xco A33 ẋ co ] = R−1 ]=[ o2 A 4 Ro2 [ A43 ẋ co xco y2 = C2 Ro2 [

xco ] = [C3 xco

0] [

x 0 ] [ co ] , A44 xco xco ] , xco


where xco is an uncontrollable but observable state, xco is an uncontrollable and unobservable state, and Ro2 is formed according to equation (5.95) for system Σc = (A4 , 0, C2 ). Step 3. The transformation xc = Ro1 [xco xco ]T will transform the controllable subsystem Σc = (A 1 , B, C 1 ) according to observability. According to equation (5.104), we can obtain: ẋ c = A 1 xc + A 2 xc + Bu . Substitute the state transfer equations into the above equation: Ro1 [

ẋ co xco x ] = A1 Ro1 [ ] + A2 Ro2 [ co ] + Bu . ẋ co xco xco

Multiply the above equation by R−1 o1 , and we will have: [

ẋ co xco x ] = R−1 A1 Ro1 [ ] + R−1 A2 Ro2 [ co ] + R−1 o1 o1 o1 Bu ẋ co xco xco =[

A11 A21

y = CRo1 [

0 xco A13 ][ ] + [ A22 xco A23 xco ] = [C1 xco

0] [

0 x B1 ] [ co ] + [ ] u , A24 xco B2

xco ] , xco

where xco is controllable and observable state, xco is controllable but observable state, and Ro1 is formed according to equation (5.95) for system Σc = (A 1 , B, C 1 ). After the above three transformations, we can have the decomposition description as follows, according to controllability and observability: ẋ co A11 [ẋ ] [A [ co ] [ 21 [ ]=[ [ẋ co ] [ 0 [ẋ co ] [ 0 y = [C1

0 A22 0 0

0

A13 A23 A33 A43

C3

xco B1 0 ] [ ] [ A24 ] [xco ] [B2 ] ] ][ ] + [ ]u 0 ] [xco ] [ 0 ] A44 ] [xco ] [ 0 ]

xco [x ] [ ] 0] [ co ] . [xco ] [xco ]

Example 5.18. The LTI system 0 [ ẋ = [1 [0

0 0 1

−1 1 ] [ ] −3] x + [1] u −3] [0]

y = [0

1

−3] x

is partly controllable and observable. Give the controllability and observability decomposition.



Solution. 1 [ R c = [1 [0

0 1 1

0 ] 0] . 1]

After transformation, we have: 0 ẋ c [ [ ] = [1 ẋ c [1

−1 −2 0

−1 1 ] xc [ ] −2 ] [ ] + [ 0 ] u , xc −1 ] [0]

y = [1

−1

2] [

xc ] . xc

From the above, we know that the uncontrollable subspace xc is one-dimensional. Obviously, the subspace is observable. Therefore, there is no need to decompose the subspace. The controllable subsystem Σc is: 0 ẋ c = [ 1

−1 −1 1 ] xc + [ ] xc + [ ] u , −2 −2 0

y1 = [1

−1] xc .

Then, we decompose the subsystem Σc according to observability. We form the nonsingular matrix according to equation (5.95): 1 R−1 o =[ 0

−1 ] , 1

which transfers the system Σc into: [

ẋ co 1 ]=[ ̇xco 0

−1 0 ][ 1 1

+[

1 0

−1 1 ][ −2 0

−1

−1 ] 1

−1 −1 1 ] [ ] xc + [ 1 0 −2

[

xco ] xco

−1

−1 ] 1

1 [ ]u . 0

Equivalently: [

ẋ co −1 ]=[ ̇xco 1 y1 = [1

0 xco 1 1 ] [ ] + [ ] xc + [ ] u , −1 xco −2 0 1 −1] [ 0

−1 ] 1

−1

[

xco ] = [1 xco

0] [

xco ] . xco


With the above two transformations, we have the decomposition description: ẋ co −1 [ ̇ ] [ [xco ] = [ 1 [ẋ co ] [ 0 y = [1

0 −1 0 0

1 xco 1 ][ ] [ ] −2] [xco ] + [0] u , −1] [xco ] [0]

xco [ ] −2] [xco ] . [xco ]

2. Alternative method of decomposition First, we can transfer the system into Jordan canonical form, then examine the controllability and observability of all state variables according to controllability and observability criteria. Finally, we form the corresponding subsystems with those state variables. For example, the given Jordan canonical form of system Σ = (A, B, C) is: −4 ẋ 1 [ ̇ ] [ [x2 ] [ 0 [ ] [ [ẋ 3 ] [ [ ]=[ [ẋ ] [ [ 4] [ [ ] [ [ẋ 5 ] [ [ẋ 6 ] [

y1 3 [ ]=[ 1 y2

1 −4 3 0

1 3 −1 0

1 4

0 0

5 2

0 0

1 x1 ][ ] [ ] [ x 2 ] [5 ][ ] [ ] [ x 3 ] [4 ][ ] + [ ] [ x ] [0 ] [ 4] [ ][ ] [ 1 ] [ x 5 ] [1 −1 ] [x6 ] [0

3 ] 7] ] 3] ] [u 1 ] , 0] ] u2 ] 6] 0]

x1 [ ] [x2 ] [ ] 0 [ x3 ] ] . ][ [ 0 [x4 ] ] [ ] [x5 ] [x6 ]

According to the controllability and observability criteria of Jordan canonical forms, we know that x1 , x2 are controllable and observable variables, x3 , x5 are controllable but unobservable variables, x4 are uncontrollable and observable variables and x6 are uncontrollable and unobservable variables. Thus: x1 x3 xco = [ ] , xco = [ ] , xco = x4 , xco = x6 . x2 x5




Rearrange according to this order, and you can get: −4 ẋ co [ ] [ 0 [ ] [ [ ] [ [ ẋ co ] [ 0 [ ]=[ [ ] [ 0 [ ] [ [ ] [ [ ẋ co ] [ 0 [ ẋ co ] [ 0

1 −4 0 0 0 0

0 0 3 0 0 0

0 0 0 −1 0 0

0 0 1 0 3 0

0 1 xco [ ] [5 0 ] ][ ] [ ][ ] [ [ xco ] [ 4 0 ] ][ ]+[ [ ] [1 1 ] ][ ] [ ][ ] [ 0 ] [ xco ] [ 0 −1 ] [ xco ] [ 0

3 7] ] ] 3] ] [u 1 ] , 6] ] u2 ] 0] 0]

̃ A m = P−1 A m P .

5.6.4 Minimum Realization Definition of Realization Given a transfer matrix W(s), if a state space equation Σ exists, such as: ẋ = Ax + Bu y = Cx + Du , which has W(s) as its transfer matrix, such as: C(sI − A)−1 B + D = W(s) , then W(s) is realizable and Σ is one of the realizations of W(s). It is noticeable that not every transfer matrix W(s) is realizable. W(s) is physically realizable if it meets the following criteria: (i) If all the coefficients of the numerator and denominator polynomials of each element W ik (s) (i = 1, 2, . . . , m; k = 1, 2, . . . , r) are real constants. (ii) If W ik (s) is a real rational fraction of s, i.e., the order of the numerator polynomial is no more than that of the denominator polynomial. When all the elements of W(s) are strictly real rational fractions, the realization of W(s) has the form of (A, B, C). Apart from this, the realization has the form of (A, B, C, D) and D = lim s→0 W(s). Minimum Realization If a transfer function is realizable, then it has an infinite number of realizations, although not necessarily of the same dimension. From the engineering point of view, it is significant to find the class of minimal dimensional realizations of the system. 1. Definition of minimum realization Consider one realization of the transfer function W(s), Σ: ẋ = Ax + Bu y = Cx .


If any other realization

̃ x̃̇ = ̃ A ̃x + Bu ̃ ̃x y=C

has more dimensions than Σ, then Σ is the minimum realization of the system. Because the transfer matrix can only reflect the dynamic behaviors of the controllable and observable subsystem, removing the uncontrollable or unobservable states will not change the transfer matrix of the system. Thus, the state space expression with uncontrollable or unobservable states cannot be the minimum realization. As stated above, we have the following methods to verify the minimum realization. 2. Steps to find minimum realization Theorem 5.12. The realization of transfer matrix W(s), Σ: ẋ = Ax + Bu y = Cx , is the minimum realization if, and only if, Σ(A, B, C) is controllable and observable. According to this theorem, we can find the minimum realization of any transfer matrix W(s) whose elements are all strictly real rational fractions. Usually, we can obtain the minimum realization as follows: (1) For a given transfer matrix W(s), first we select one realization Σ(A, B, C). More often, we choose the controllability canonical or observability canonical for the sake of convenience. (2) For the Σ(A, B, C) chosen above, we find its controllable and observable part ̃1, C ̃ 1 ). So this part is just the minimum realization of the system. (̃ A1 , B Example 5.19. Try to find the minimum realization of the transfer matrix: 1 W(s) = [ (s+1)(s+2)

1 (s+2)(s+3) ]

.

Solution. W(s) is a strictly real rational fraction of s. Rewrite it in the descending order of s in the following form: s+3 W(s) = [ (s+1)(s+2)(s+3)

s+1 (s+1)(s+2)(s+3) ]

1 [(s + 3) (s + 1)] (s + 1)(s + 2)(s + 3) 1 = 3 {[1 1] s + [3 1]} . s + 6s2 + 11s + 6 =



From equation (5.56), we know that: α0 = 6 , β 0 = [3

α1 = 11 ,

α2 = 6

β 1 = [1

1] ,

β 2 = [0

1] ,

0] .

The dimension of the output vector is m = 1, and the dimension of the input vector is r = 2. First we adopt the controllability canonical form realization: 0m [ A0 = [ I m [0 m

−α 0 I m 0 ] [ −α 1 I m ] = [1 −α 2 I m ] [0

0m 0m 0m

β0 3 [ ] [ B0 = [β 1 ] = [1 [β 2 ] [0 C0 = [0m

−6 ] −11] −6 ] .

1 ] 1] 0] I m ] = [0

0m

0 0 1

0

1]

Then we check whether the realization Σ = (A0 , B0 , C0 ) is controllable or not: Qc = [B0

A20 B0 ]

A0 B0

3 [ = [1 [0

1 1 0

0 3 1

0 1 1

−6 −11 −3

−6 ] −11] , −5 ]

rank Qc = 3 = n . Therefore, Σ = (A0 , B0 , C0 ) is controllable and observable, so it is the minimum realization. Example 5.20. Try to find the minimum realization of the transfer matrix: s+2 s+1

W(s) = [ s [ s+1

1 s+3 ] s+1 s+2 ]

Solution. First, we simplify W(s) into the form of strictly real rational function and write its controllability canonical form (or observability canonical form). After computing this, the controllability canonical form of the system is: 0 [ [0 [ [0 A=[ [0 [ [ [−6 [0

0 0 0 0 0 −6

1 0 0 0 −11 0

6 C=[ −6

2 −3

5 −5

0 1 0 0 0 −4 3 −4

0 0 1 0 −6 0 1 −1

0 ] 0] ] 0] ] , 1] ] ] 0] −6] 1 ] , −1

0 [ [0 [ [0 B=[ [0 [ [ [1 [0

0 ] 0] ] 0] ] , 0] ] ] 0] 1]

1 D=[ 1

0 ] . 1


We then examine whether the states realized by the controllability canonical form are observable or not: 6 [ [−6 [ C −6 [ ] [ Qo = [ CA ] = [ [6 [ 2 [CA ] [ [6 [−6

2 −3 −6 6 18 −12

5 −5 −5 5 5 −5

3 −4 −9 8 27 −16

1 −1 −1 1 1 −1

1 ] −1] ] −3] ] . 2] ] ] 9] −4]

As rank Qo = 3 < n = 6, the controllability canonical form is not the minimum realization. Thus, we decompose the structure according to the observability. Now we construct the transfer matrix R−1 o and decompose the system according to the observability:

R−1 o

6 [ [ −6 [ [ −6 =[ [ 1 [ [ [ 0 [ 0

2 −3 −6 0 1 0

5 −5 −5 0 0 1

3 −4 −9 0 0 0

1 −1 −1 0 0 0

1 ] −1 ] ] −3 ] ] . 0 ] ] ] 0 ] 0 ]

0 0 0 0

1 0 0 0 −6

0 1 0 −1 0

0 0 ] ] ] 1 ] ] . 0 ] ] −5 ] ]

0

1

0 ]

Thus: 0 [ 0 [ [ [ 0 Ro = [ [ −1 [ [ 3 [ 2 [

0 0 0 −1 0

5 2

1 2 − 12

3

So: 0 [ 3 [−2 [ [ [ −3 −1 ̂ A = Ro ARo = [ [ 0 [ [ −1 [ [

3 2

1 [ −1 [ [ [ −1 ̂ = R−1 [ B = B o [ 0 [ [ [ 0 [ 0

0

1

0

0

−2

− 12

0

0

0 0 −1

−4 0 0

0 0 0

0 0 −1

0

−2

−6

0

1 −1 ] ] ] ̂ −3 ] ] = [B1 ] , ] 0 0 ] ] 0 ] 0 ]

0

] 0 ] ] ] ̂ A11 0 ] ] = [̂ A21 1 ] ] 0 ] ] −5 ]

0 ] , ̂ A22


̂ = CRo = [ 1 C 0

0 1

0 0

0 0

0 0

0 ̂1 ] = [C 0



0] ,

̂1, C ̂ 1 ) is a controllable and observable subsystem. After examination, Σ = (̂ A11 , B Thus, the minimum realization of W(s) is: 0 [ 3 ̂ Am = A11 = [ [− 2 [ −3 ̂ 1 = [1 Cm = C 0

0 −2

1

−4 ]

0 0 1

1 ̂1 = [ Bm = B [−1 [−1

] − 21 ] ] ,

0 ] , 0

1 D=[ 1

1 ] −1] , −3] 0 ] . 1

After computing the transfer function according to the above Am , Bm , Cm , D, we can check the result: 1 Cm (sI − Am )−1 Bm + D = [ 0

0 1

s+2 s+1

s 0 [ ] [3 0 [2 [3 1 s+3 ]

=[ s [ s+1

0 s+2 0

−1

−1

] 1 ] 2 ] s + 4]

1 [ [−1 [−1

1 1 ] −1] + [ 1 −3]

0 ] 1

.

s+1 s+2 ]

We can also write out the realization of controllability canonical form Σ = (A0 , B0 , C0 ): 0 [ [0 [ [1 A0 = [ [0 [ [ [0 [0

0 0 0 1 0 0

0 0 0 0 1 0

0 0 0 0 0 1

−6 0 −11 0 −6 0

0 C0 = [ 0

0 0

0 0

0 0

1 0

0 ] −6 ] ] 0 ] ] , −11] ] ] 0 ] −6 ]

6 [ [−6 [ [5 B0 = [ [−5 [ [ [1 [−1

2 ] −3] ] 3] ] , −4] ] ] 1] −1]

0 ] . 1

Then we decompose Σ = (A0 , B0 , C0 ) by controllability, then choose the transform matrix Rc according to equation (5.88): 6 [ [−6 [ [5 Rc = [ [−5 [ [ [1 [−1

2 −3 3 −4 1 −1

−6 6 −9 8 −3 2

1 0 0 0 0 0

0 1 0 0 0 0

0 ] 0] ] 1] ] . 0] ] ] 0] 0]


Now we have:

R−1 c

0 [ [0 [ [0 =[ [1 [ [ [0 [0

0 0 0 0 1 0

0 0 0 0 0 1

−1 1 0 4 −3 2

0 −2 −1 −2 0 −3

4 ] −7 ] ] −1 ] ] . −16] ] ] 9 ] 8 ]

Thus: 1 [0 [ [ ̃ [0 A12 ]=[ [0 ̃ A22 [ [ [0 [0

̃ A11 ̃ A = R−1 c A 0 Rc = [ 0

1 [ [0 [ ̃ [0 B1 ̃ = R−1 [ B = B = ] [ 0 c [0 0 [ [ [0 [0 ̃ = C 0 Rc = [C ̃1 C

0] = [

1 −1

0 0 1 0 0 0

0 −6 −5 0 0 0

0 0 0 0 0 1

−1 1 0 4 −3 2

0 −2 ] ] ] −1 ] ] , −2 ] ] ] 0 ] −3 ]

0 ] 1] ] 0] ] , 0] ] ] 0] 0] 1 −1

−3 2

0 0

0 0

0 ] . 0

̃ 1 ) is a controllable and observable subsystem, so the minimum real̃1, C Σ = (̃ A11 , B ization of W(s) is: 1 [ ̃ ̃ Am = A11 = [0 [0 ̃m = C ̃1 = [ 1 C −1

0 0 1

0 ] −6] , −5]

1 −1

1 [ ̃ ̃ B m = B 1 = [0 [0

0 ] 1] , 0]

1 1

0 ] . 1

−3 ] , 2

D=[

From the above calculation, we can see that if a transfer function is realizable, then it has an infinite number of realizations, although not necessarily of the same dimeñ m ) are the minĩm, C sion. However, we can prove that if Σ(Am , Bm , Cm ) and Σ(̃ Am , B mum realizations of the same transfer matrix W(s), then a state transformation x = P ̃x must exist, such that: ̃ Am = P−1 Am P ,

̃ m = P−1 Bm , B

̃ m = Cm P . C

We can see that the minimum realizations of the same transfer matrix are equivalent in algebra.



Example 5.21. Try to find the minimum realization of the transfer matrix using MATLAB: 4s−10 2s+1

W(s) = [ 1 [ (2s+1)(s+2)

3 s+2

] .

s+1 (s+2)2 ]

Solution. The controllability canonical form of the system is: −4.5 0 1 0 0 [ 0

−6 C=[ 0

−6 0 0 0 1 0

0 −4.5 0 1 0 0

[ [ [ [ A=[ [ [ [ [

3 1

−24 0.5

7.5 1.5

0 −6 0 0 0 1

−2 0 0 0 0 0

−24 1

0 ] −2] ] 0] ] , 0] ] ] 0] 0]

3 ] , 0.5

1 [ [0 [ [0 B=[ [0 [ [ [0 [0

0 ] 1] ] 0] ] , 0] ] ] 0] 0]

2 D=[ 0

0 ] . 0

This six-dimensional realization is clearly not minimal realizations. It can be reduced to minimal realizations by calling the MATLAB function minreal. We type: a=[-4.5 0 -6 0 -2 0;0 -4.5 0 -6 0 -2;1 0 0 0 0 0;0 1 0 0 0 0; 0 0 1 0 0 0;0 0 0 1 0 0]; b=[1 0;0 1;0 0;0 0;0 0;0 0]; c=[-6 3 -24 7.5 -24 3;0 1 0.5 1.5 1 0.5]; d=[2 0;0 0];; [am,bm,cm,dm]=minreal(a,b,c,d) Yield am = -1.3387 2.5335 -0.0007

0.2185 -1.1599 -0.0002

-1.6003 4.8338 -2.0014

bm = -0.2666 0.2513 -0.0001

0.2026 -0.6119 0.3483

cm = 32.7210 0.8143 dm = 2 5

0 0

10.8346 -0.8632

8.6137 1.8281


Thus, the minimum realization is expressed as: −1.3387 ̇ =[ x(t) [ 2.5335 [−0.0007

0.2185 −1.1599 −0.0002

−1.6003 −0.2666 ] [ 4.8338 ] x + [ 0.2513 −2.0014] [−0.0001

32.7210 −0.8143

10.8346 −0.8632

8.6137 2 ]x +[ 1.8281 0

y(t) = [

0.2026 ] −0.6119] u 0.3483 ]

0 ]u . 0

5.7 Summary The controllability and observability are both important properties of a system, and the definitions and criteria are described separately. The duality system is an important conception; duality systems have a series of interesting properties. The decomposition of a system is analyzed based on controllability and observability, and the minimum realization can be obtained accordingly.

Exercise 5.1. Check the controllability of the following systems: 1 ẋ 1 ]=[ 1 ẋ 2

1 x1 0 ][ ] + [ ]u 0 x2 1

(1)

[

(2)

ẋ 1 0 [ ̇ ] [ [x2 ] = [ 0 [ẋ 3 ] [−2

1 0 −4

0 1 x1 ][ ] [ 1 ] [x2 ] + [ 0 −3] [x3 ] [−1

0 ] u1 1] [ ] u2 1]

(3)

ẋ 1 −3 [ ̇ ] [ [x2 ] = [ 0 [ẋ 3 ] [ 0

1 −3 0

0 x1 1 ][ ] [ 0 ] [ x 2 ] + [0 −1] [x3 ] [2

−1 ] u1 0 ][ ] u2 0]

(4)

ẋ 1 λ1 [ẋ ] [ 0 [ 2] [ [ ]=[ [ẋ 3 ] [ 0 [ẋ 4 ] [ 0

1 λ1 0 0

(5)

ẋ 1 0 [ ̇ ] [ [ x 2 ] = [0 [ẋ 3 ] [0

4 20 −25

0 0 λ1 0

0 0 x1 [ ] [ ] 0] ] [ x 2 ] [1 ] ][ ] + [ ]u 0 ] [ x 3 ] [1 ] λ 1 ] [ x 4 ] [1 ]

3 −1 x1 ][ ] [ ] 16 ] [x2 ] + [ 3 ] u −20] [x3 ] [ 0 ]



5.2. Check the observability of the following systems: ẋ 1 1 ]=[ ẋ 2 1

1 x1 ][ ] , 0 x2

(1)

[

(2)

ẋ 1 0 [ ̇ ] [ [x2 ] = [ 0 [ẋ 3 ] [−2

(3)

0 ẋ 1 [ ̇ ] [ [ x 2 ] = [0 [ẋ 3 ] [0

4 20 −25

(4)

ẋ 1 2 [ ̇ ] [ [ x 2 ] = [0 [ẋ 3 ] [0

1 2 0

(5)

ẋ 1 −4 [ ̇ ] [ [x2 ] = [ 0 [ẋ 3 ] [ 0

1 0 −4

y = [1

0 x1 ][ ] 1 ] [x2 ] , −3] [x3 ] 3 x1 ][ ] 16 ] [x2 ] , −20] [x3 ]

0 x1 ][ ] 0 ] [x2 ] , −3] [x3 ] 0 −4 0

0 x1 ][ ] 0] [x2 ] , 1] [x3 ]

0 y1 [ ]=[ y2 1

1] [

x1 ] x2

x1 −1 [ ] ] [x2 ] 1 [x3 ] x1 [ ] 3 0] [x2 ] [x3 ]

1 2

y = [−1

y = [0

1

x1 [ ] 1] [x2 ] [x3 ]

y = [1

1

x1 [ ] 4] [x2 ] [x3 ]

5.3. Is it possible to find a set of p and q, such that the state equation [

ẋ 1 1 ]=[ ẋ 2 1 y = [q

12 x1 p ][ ] + [ ]u x2 0 −1 1] [

x1 ] x2

is not controllable/observable? 5.4. Try to prove that the system, ẋ 1 20 [ ̇ ] [ [x2 ] = [ 4 [ẋ 3 ] [12

−1 16 0

0 x1 a ][ ] [ ] 0 ] [x2 ] + [b] u , 18] [x3 ] [ c ]

is not controllable, no matter what a, b and c are. 5.5. Try to transform the following state space equation into controllability canonical form I. 1 −2 1 ẋ = [ ]x +[ ]u 3 4 1 5.6. Try to transform the following state space equation into controllability canonical form I. −1 −2 −2 2 [ ] [ ] ẋ = [ 0 −1 1 ] x + [0] u 0 1] [1 [1]


5.7. Try to transform the following state space equation into observability canonical form II. −1 −2 −2 2 [ ] [ ] ẋ = [ 0 −1 1 ] x + [0] u , 0 1] [1 [1] y = [1

1

0] x

5.8. Try to transform the following state space equation into observability canonical form II. 1 −1 2 ẋ = [ ]x +[ ]u , 1 1 1 y = [−1

1] x

5.9. Is the state equation −1 ẋ = [ 0

1 1 ]x +[ ]u 0 1

controllable? If not, please give the controllability decomposition of the system. 5.10. Is the state equation 1 ̇ =[ x(t) [0 [1

2 1 −4

0 −1 [ ] ] 0 ] x(t) + [0] u(t) 3] [1]

y(t) = [1

−1

1] x(t)

controllable? If not, please give the controllability decomposition of the system. 5.11. Is the state equation 1 ̇ =[ x(t) [0 [1

2 1 −4

−1 0 ] [ ] x(t) + 0] [0] u(t) 3] [1]

y(t) = [1

−1

1] x(t)

observable? If not, please give the observability decomposition of the system. 5.12. Try to find the minimum realization of the transfer matrix: s+1 s+2 ] . W(s) = [ s+3 (s+2)(s+4) [ ]



5.13. The state equation of the inverted pendulum was developed in Example 1.10. Suppose, for a given pendulum, the equation becomes: 0 [0 [ ẋ = [ [0 [0

1 0 0 0

0 −1 0 5

y = [1

0

0

0 0 ] ] [ 0] [1] ]x +[ ]u [0] . 1] 0] [−2] 0] x

If x3 = θ deviates from zero slightly, can we find a control u to push it back to zero? Why?

6 State Feedback and Observer 6.1 Introduction Generally, the control theory can be divided into two parts; system analysis and system synthesis. In previous chapters, the solution of state space equations, the stability analysis and the controllability and observability of a control system were introduced. In this chapter, the system synthesis will be discussed. The controller is designed through feedback and the performance of a system is improved by pole assignment. The state estimator or state observer is designed to generate an estimation of the state.

6.2 Linear Feedback Feedback is the most popular way to improve the performance of a system. Three kinds of linear feedback will be discussed in this chapter; the state feedback, the output feedback, and feedback from output y to x.̇ The algorithms are also described in detail.

6.2.1 State Feedback Consider an n-dimensional LTI system: ẋ = Ax + Bu

(6.1)

y = Cx + Du ,

where x ∈ R n×1 ; u ∈ R p×1 ; y ∈ R m×1 , A ∈ R n×n , B ∈ R n×p , C ∈ R m×n , D ∈ R m×p . Assume D = 0 to simplify the discussion. In state feedback, the input u is given by: n

u = Kx + v = v + [k 1

k2

...

kn ] x = v + ∑ ki xi ,

(6.2)

i=1

where v ∈ R p×1 is the reference input and K ∈ R p×n is the feedback gain matrix. As shown in Figure 6.1, each feedback gain k i is a real constant. This is called the constant gain negative state feedback, or state feedback for simplicity. Substituting (6.2) into (6.1) yields the closed loop state space equation: ẋ = (A + BK)x + Bv y = Cx .

(6.3)

The closed loop transfer function is: W k (s) = C[sI − (A + BK)]−1 B . https://doi.org/10.1515/9783110574951-006

(6.4)
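The closed loop system (6.3) and its transfer function (6.4) are easy to form and inspect numerically. In the MATLAB sketch below the model and the gain K are arbitrary placeholders, not data from the text:

% Closed loop system under state feedback u = Kx + v, eqs. (6.3)-(6.4)
A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];   % placeholder open loop model
K = [-4 -2];                                 % placeholder feedback gain
Acl = A + B*K;                               % closed loop system matrix
eig(Acl)                                     % closed loop poles
tf(ss(Acl,B,C,0))                            % W_k(s) of eq. (6.4)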



Fig. 6.1: State feedback.

6.2.2 Output Feedback Output feedback is another linear feedback law using the output vector y, as shown in Figure 6.2.

Fig. 6.2: Output feedback.

The control system is: ẋ = Ax + Bu y = Cx + Du .

(6.5)

The input u is given by: u = Hy + v ,

(6.6)

is the output feedback gain matrix. where H ∈ If D = 0, the closed loop state space equation is: R p×m

ẋ = (A + BHC)x + Bv y = Cx .

(6.7)


The closed loop transfer function is: W H (s) = C[sI − (A + BHC)]−1 B .

(6.8)

From equation (6.4), HC in output feedback plays the same role as K in state feedback. Since m < n, the number of free parameters in H is much smaller than in K. Only when C = I and HC = K is output feedback fully equivalent to state feedback. Therefore, the effect of output feedback without a compensator is clearly not as good as that of state feedback. However, output feedback has a unique advantage: it is easy to implement in practice.

6.2.3 Feedback From Output to ẋ This linear feedback from system output y to state vector ẋ is widely used in state estimator. This kind of feedback configuration is shown in Figure 6.3.

Fig. 6.3: Feedback from output y to x.̇

The control system is: ẋ = Ax + Bu y = Cx + Du .

(6.9)

Considering the feedback gain matrix G, G ∈ R n×m from the output y to the derivative of the state vector x,̇ the closed loop system is: ẋ = Ax + Gy + Bu y = Cx + Du .

(6.10)

Substituting y into ẋ yields: ẋ = (A + GC)x + (B + GD)u y = Cx + Du .

(6.11)


If D = 0, then:

ẋ = (A + GC)x + Bu


(6.12)

y = Cx . The closed loop transfer function is: W G (s) = C[sI − (A + GC)]−1 B .

(6.13)

As equation (6.13) shows, the change of matrix G will affect the eigenvalue of the closed loop system, and thus affect the characteristic of the system.

6.3 Pole Assignment 6.3.1 Sufficient and Necessary Condition for Arbitrary Pole Assignment Theorem 6.1. There exists a state feedback matrix K which can assign eigenvalues of the matrix A + BK of the closed loop system to arbitrary place of the state space, but only if the state vector of the open loop system is controllable. That is, if: rank Qc = n , where Qc = [B

AB

A2 B

...

A n−1 B] .

(6.14)

Proof (Only for Sufficiency). If the state vector of the open loop system is absolutely controllable, the following equation can be obtained with state feedback: det [λI − (A + BK)] = f ∗ (λ) ,

(6.15)

where f ∗ (λ) is the desired characteristic polynomial. Wo (s) = C(sI − A)−1 B , n

f ∗ (λ) = ∏(λ − λ∗i ) = λ n + a∗n−1 λ n−1 + ⋅ ⋅ ⋅ + a∗1 λ + a∗0 ,

(6.16)

i=1

where λ∗i (i = 1, 2, . . . , n) are desired closed loop poles. Step 1. If Σo = (A, B, C) is absolutely controllable, the following nonsingular transform exists: x = TcI x , where TcI is transfer matrix of controllable canonical form I.

160 | 6 State Feedback and Observer

Transform Σo into the controllable canonical form I. ẋ = Ax + Bu ,

(6.17)

y = Cx , where 0 0 .. . 0 [−a0

1 0 .. . 0 −a1

[ [ [ [ −1 A = TcI ATcI = [ [ [ [

0 1 .. . 0 −a2

... ...

0 ] 0 ] ] .. ] , . ] ] ] 1 ] −a n−1 ]

... ...

0 [.] [ .. ] −1 ] B = TcI B=[ [ ] , [0] [1] C = CTcI = [b 0

b1

...

b n−1 ] .

The transfer function of the controllable system Σo is: Wo (s) = C(sI − A)−1 B =

b n−1 s n−1 + b n−2 s n−2 + ⋅ ⋅ ⋅ + b 1 s + b 0 . s n + a n−1 s n−1 + ⋅ ⋅ ⋅ + a1 s + a0

(6.18)

Step 2. Consider the following state feedback gain matrix: K = [k 0

k1

...

k n−1 ] .

(6.19)

Then we can obtain the closed loop state space description to x: ẋ = (A + BK)x + Bu ,

(6.20)

y = Cx . Here, 0 [ 0 [ [ [ .. A + BK = [ . [ [ 0 [ [−(a0 − k 0 )

1 0 .. . 0 −(a1 − k 1 )

0 1 .. . 0 ...

... ... ... ...

0 ] 0 ] ] ] .. ] . . ] ] 1 ] −(a n−1 − k n−1 )]

Closed loop characteristic polynomial: 󵄨 󵄨 f(λ) = 󵄨󵄨󵄨󵄨λI − (A + BK)󵄨󵄨󵄨󵄨 = λ n + (a n−1 − k n−1 )λ n−1 + ⋅ ⋅ ⋅ + (a1 − k 1 )λ + (a0 − k 0 ) .

(6.21)

6.3 Pole Assignment

| 161

Closed loop transfer function: W k (s) = C [sI − (A + BK)] =

b n−1

s n−1

−1

B

+ b n−2 s n−2 + ⋅ ⋅ ⋅ + b 1 s + b 0

s n + (a n−1 − k n−1 )s n−1 + ⋅ ⋅ ⋅ + (a1 − k1 )s + (a0 − k 0 )

.

(6.22)

Step 3. To accord with the desired poles, the following equation has to be satisfied: f(λ) = f ∗ (λ) . The coefficients of the feedback matrix can be obtained by equaling the coefficients of λ with the same order in both sides of the above equation: k i = a i − a∗i .

(6.23)

Thus, K = [a0 − a∗0

a1 − a∗1

...

a n−1 − a∗n−1 ] .

Step 4. Transform K corresponding with x into K corresponding with x by using the following equation: −1 K = KTcI . (6.24) −1 x. This is due to u = v + Kx = v + KTcI

Theorem 6.2. For the case of pole assignment via output feedback, wherein u = Hy + v, a theorem similar to Theorem 6.1 has not yet been proven. The determination of the output feedback matrix H is, in general, a very difficult task. A method for determining the matrix H, which is closely related to the method of determining the matrix K presented earlier, is based on the equation: K = HC . (6.25) This method starts with the determination of the matrix K and, in the sequel, the matrix H is determined by using equation (6.25). It is fairly easy to determine the matrix H from equation (6.25) since this equation is linear in H. Note that equation (6.25) is only a sufficient condition. That is, if equation (6.25) does not have a solution for H, it does not follow that pole assignment by output feedback is impossible. Theorem 6.3. Even for the absolutely controllable SISO system Σo = (A, b, c), arbitrary pole assignment via output feedback cannot be guaranteed. Proof. The closed loop transfer function of the SISO feedback system, Σ h = [(A + bhc), b, c], is: Wo (s) W h (s) = c [sI − (A + bhc)]−1 b = , (6.26) 1 + hWo (s) where Wo (s) = c(sI − A)−1 b is the transfer function of the controllable system.

162 | 6 State Feedback and Observer

From the closed loop characteristic polynomial, we can obtain the closed loop root locus equation: hWo (s) = −1 . (6.27) When Wo (s) is fixed beforehand, we can obtain a series of root locus with the reference variable h varying from 0 to ∞. Obviously, no matter what h is, the desired pole which is not contained in the root locus cannot be assigned. Output linear feedback has an important drawback: arbitrary pole assignment is not realizable. To overcome the drawback, we always introduce additional regulatory network to affect the root locus by increasing open loop poles and zeros. Therefore, the regulated root locus consists of the desired pole. Theorem 6.4. For an absolutely controllable SISO system Σo = (A, b, c), the following is a sufficient and necessary condition of arbitrary pole assignment by the output feedback with dynamic compensator. (i) Σo is absolutely observable (ii) the order of dynamic compensator is n = 1

6.3.2 Methods to Assign the Poles of a System 1. Pole assignment via state feedback Consider an LTI system ̇ = Ax(t) + Bu(t) x(t)

(6.28)

where we assume that all states are accessible and known. To this system, we apply a linear state feedback control law of the following form: u(t) = −Kx(t) .

(6.29)

Then the closed loop system is given by the homogeneous equation: ̇ = (A − BK)x(t) . x(t)

(6.30)

It is remarked that the feedback law u(t) = −Kx(t) is used rather than the feedback law u(t) = Kx(t). This different chosen sign is utilized to facilitate the observer design problem. Here, the design problem is to find the appropriate controller matrix K so as to improve the performance of the closed loop system (6.30). One method to improve the performance of (6.30) is pole assignment. The pole assignment method consists of finding a particular matrix K, such that the poles of the closed loop system (6.30) are set to desirable preassigned values. Using this method, the behavior of the open loop system may be improved significantly. For example, the method can stabilize an

6.3 Pole Assignment

| 163

unstable system, increase or decrease the speed of response, widen or narrow the system’s bandwidth, and increase or decrease the steady state error, etc. For these reasons, improving the system performance via the pole assignment method is widely used in practice. The pole assignment, or eigenvalue assignment, problem can be defined as follows: suppose λ1 , λ2 , . . . , λ n are the eigenvalues of the matrix A of the open loop system (6.30) and λ∗1 , λ∗2 , . . . , λ∗n are the desired eigenvalues of matrix A − BK of the closed loop system (6.30), where all complex eigenvalues appear in complex conjugate pairs. Denote f(λ) and f ∗ (λ) as the characteristic polynomial and the desired characteristic polynomial to find a matrix K so that equation (6.32) is satisfied. n

f(λ) = ∏(λ − λ i ) = |λI − A| = λ n + a n−1 λ n−1 + ⋅ ⋅ ⋅ + a1 λ + a0 .

(6.31)

i=1 n

f ∗ (λ) = ∏(λ − λ∗i ) = |λI − A + BK| = λ n + a∗n−1 λ n−1 + ⋅ ⋅ ⋅ + a∗1 λ + a∗0 .

(6.32)

i=1

The pole assignment problem has attracted considerable attention for many years. The first significant results were established by Wonham in the late 1960s and are given by Theorem 6.1 in Section 6.3.1. According to Theorem 6.1, in cases that the open loop system (6.30) is not controllable, at least one eigenvalue of the matrix A remains invariant under the state feedback law (6.31). In such cases, in order to assign all eigenvalues, one must search for an appropriate dynamic controller wherein the feedback law (6.31) may involve, not only proportion, but also derivative, integral, and other terms. Dynamic controllers have the disadvantage in that they increase the order of the system. Now, consider the case that the system (A, B) is controllable; a fact which guarantees that there exists a K which satisfies the pole assignment problem. Next, we will deal with the problem of determining a feedback matrix K. For simplicity, we will first study the case of single input systems, in which the matrix B reduces to a vector b and the matrix K reduces to a row vector k. Equation (6.32) then becomes: n

f ∗ (λ) = ∏(λ − λ∗i ) = |λI − A + bk| = λ n + a∗n−1 λ n−1 + ⋅ ⋅ ⋅ + a∗1 λ + a∗0 .

(6.33)

i=1

It has been remarked that the solution of equation (6.33) for k is unique. Several methods have been proposed for determining k. We present three well known methods.

Method 1: The Base–Gura Formula One of the most popular pole assignment methods gives the following simple solution: −1 k = −KTcI .

(6.34)

164 | 6 State Feedback and Observer

Where K is defined in equation (6.23) and

TcI =

[A n−1 b,

...

1 [ [a [ n−1 b] [ . [ . [ .

Ab,

[ a1

..

.

..

. ...

..

.

a n−1

] ] ] ] . ] ]

(6.35)

1]

Method 2: The Phase Canonical Formula Consider the special case that the system under control is described in its phase variable canonical form, i.e., A and b have the special forms A∗ and b ∗ , where: 0 0 0 .. . 0 ̂0 [− a

[ [ [ [ [ ∗ A =[ [ [ [ [

1 0 0 .. . 0 ̂1 −a

0 1 0 .. . 0 ̂2 −a

... ... ... ... ...

0 0 ] ] ] 0 ] ] , .. ] . ] ] ] 1 ] ̂ n−1 ] −a

0 [0] [ ] [ ] [0] [ ] ∗ b = [.] . [ .. ] [ ] [ ] [0] [1]

(6.36)

One of the most popular pole assignment methods gives the following simple solution: k T = [W T QTc ]−1 (a∗ − a) .

(6.37)

Where Qc is defined in equation (6.14) and 1 [0 [ W =[ [ .. [. [0

a n−1 1 .. . 0

... ...

a1 a2 ] ] .. ] ] , .] 1]

...

a∗n−1 [a∗ ] [ n−2 ] ] a∗ = [ [ .. ] , [ . ] ∗ [ a0 ]

a n−1 [a ] [ n−2 ] ] a=[ [ .. ] . [ . ] [ a0 ]

(6.38)

Then, it can be easily shown that: Q∗c = [b ∗

A∗ b∗

A∗2 b ∗

A∗n−1 b ∗ ] .

...

The product W T Q∗T c reduces to the simple form:

W T Q∗T c

0 [0 [ = ̃I = [ [ .. [. [1

0 0 .. . 0

... ... ...

0 1 .. . 0

1 0] ] .. ] ] . .] 0]

(6.39)

In this case, the vector k∗T in expression (6.37) reduces to k ∗T = ̃I(a∗ − a), i.e., it reduces to the following form:

k ∗T

a∗0 − a0 [ a∗ − a ] 1 ] [ 1 ] . = ̃I(a∗ − a) = [ .. ] [ ] [ . ∗ a − a n−1 ] [ n−1

(6.40)

6.3 Pole Assignment

| 165

It is evident that expression (6.40) is extremely simple to apply, provided that the matrix A and the vector b of the system under control are in the phase variable canonical form (6.38).

Method 3: The Ackermann’s Formula Another approach for computing k has been proposed by Ackermann, which leads to the following expression: ∗ k = eT Q−1 (6.41) c f (A) . The matrix Qc is given in equation (6.14), wherein the variable s is replaced by the matrix A, i.e., f ∗ (A) = A n + a∗n−1 A n−1 + ⋅ ⋅ ⋅ + a∗1 A + a∗0 I . (6.42) In general cases of multi-input systems, the determination of the matrix K is somewhat complicated. A simple approach to the problem is to assume that K has the following form: K = qpT , (6.43) where q and p are n-dimensional vectors. Then, the matrix A − BK becomes: A − BK = A − BqpT = A − βpT ,

where

β = Bq .

(6.44)

Therefore, assuming that K has the form of equation (6.43), the multi-input system is reduced to a single input system, which has been studied previously. In other words, the solution for the vector p is equation (6.37) or equation (6.41), and differs only in Qc , which takes the following form: that the matrix Qc is now the matrix ̃ ̃ Qc = [β



A2 β

...

A n−1 β] ,

K = HC .

(6.45)

The vector β = Bq involves arbitrary parameters, which are the elements of the arbitrary vector q. These arbitrary parameters can have any value, provided that rank ̃ Qc = n. 2. Pole assignment via output feedback Consider a LTI system: ̇ = Ax(t) + Bu(t) x(t) y(t) = Cx(t) ,

(6.46)

where we assume that all states are accessible and known. We apply the following linear state feedback control law to the above system: u(t) = −Hy(t) + v .

(6.47)

Then the closed loop system is given by the homogeneous equation: ̇ = (A − BHC)x(t) . x(t)

(6.48)

166 | 6 State Feedback and Observer

The pole assignment, or eigenvalue assignment, problem can be defined as follows: denote λ1 , λ2 , . . . , λ n as the eigenvalues of the matrix A of the open loop system (6.46) and λ∗1 , λ∗2 , . . . , λ∗n as the desired eigenvalues of matrix A − BHC of the closed loop system (5.3)–(5.35), where all complex eigenvalues appear in complex conjugate pairs. In the case of pole assignment via output feedback, wherein u = −Hy + v, Theorem 6.2 has been proven. According to Theorem 6.2, we can obtain the matrix H by K = HC. This method starts with the determination of the matrix K, and in the following content, the matrix H is determined by using equation (6.25). It is fairly easy to determine the matrix H from equation (6.25) since this equation is linear in H. Note that equation (6.25) is only a sufficient condition, i.e., if equation (6.25) does not have a solution for H, it does not follow that pole assignment by output feedback is impossible.

6.3.3 Examples Example 6.1. Consider a system in the form (6.28), where: 0 A=[ −1

1 ] 0

0 b=[ ] . 1

and

Find a vector k to make the closed loop system eigenvalues become λ∗1 = −1 and λ∗2 = −1.5. Solution. We have: f(λ) = |λI − A| = λ2 + 1 and

f ∗ (λ) = (λ − λ∗1 )(λ − λ∗2 ) = λ2 + 2.5λ + 1.5 .

Method 1 Here, we use equation (6.35) and (6.23): TcI = [Ab

1 b] [ 0

0 1 ]=[ 1 0

0 1 ][ 1 0

0 1 ]=[ 1 0

0 ] , 1

And: K = [a0 − a∗0

a1 − a∗1 ] = [1 − 1.5

Therefore: 1 −1 TcI =[ 0

−1

0 ] 1

0 − 2.5] = [−0.5

1 =[ 0

−2.5] .

0 ] . 1

Hence: −1 k = −KTcI = − [−0.5

1 −2.5] [ 0

0 ] = [0.5 1

2.5] .

6.3 Pole Assignment

| 167

Method 2 Since the system is in phase variable canonical form, the vector k can readily be determined by equation (6.40), as follows: k T = k ∗T = [

a∗0 − a0 1.5 − 1 0.5 ]=[ ]=[ ] . 2.5 − 0 2.5 a∗1 − a1

Method 3 Here, we apply equation (6.41). We have: f ∗ (A) = A2 + a∗1 A + a∗0 I = A2 + 2.5A + 1.5I 2

0 =[ −1

0 1 ] + 2.5 [ −1 0

−1 =[ 0

0 0 ]+[ −1 −2.5

Q−1 c = [b

Ab]

−1

0 =[ 1

1 1 ] + 1.5 [ 0 0

0 ] 1

2.5 1.5 ]+[ 0 0

1 ] 0

−1

=[

0 1

0 0.5 ]=[ 1.5 −2.5

2.5 ] 0.5

1 ] . 0

Therefore: k = eT S−1 f ∗ (A) = [0

1] [

0 1

1 0.5 ][ 0 −2.5

2.5 ] = [0.5 0.5

2.5] .

Clearly, the resulting three controller vectors derived by the three methods are identical. This is due to the fact that, for a single input system, k is unique. Example 6.2. Consider a system in the form (6.28), where: 0 [ A = [0 [1

1 0 0

0 ] 1] 0]

and

0 [ ] b = [0] . [1]

Find a vector k to make the eigenvalues of the closed loop system into λ∗1 = −1, λ∗2 = −2, and λ∗3 = −2. Solution. We have: f(λ) = |λI − A| = λ3 − 1 , And: f ∗ (λ) = (λ − λ∗1 )(λ − λ∗2 )(λ − λ∗3 ) = λ3 + 5λ2 + 8λ + 4 . Method 1 Here we use equations (6.37) and (6.23): TcI = [A2 b

Ab

1 [ b] [0 [0

0 1 0

0 1 ] [ 0] = [0 1] [0

0 1 0

0 1 ][ 0] [0 1] [0

0 1 0

0 1 ] [ 0] = [0 1] [0

0 1 0

0 ] 0] , 1]

168 | 6 State Feedback and Observer

And: K = [a0 − a∗0

a1 − a∗1

a2 − a∗2 ] = [−1 − 4

Therefore: −1 TcI

Hence:

1 [ = [0 [0

−1 = − [−5 k = −KTcI

0 1 0

−8

0 ] 0] 1]

−1

1 [ = [0 [0

1 [ −5] [0 [0

0 − 5] = [−5

0−8

0 1 0

0 1 0

−8

−5] .

0 ] 0] . 1]

0 ] 0] = [5 1]

8

5] .

Method 2 Since the system is in phase variable canonical form, the vector k can readily be determined by equation (6.40), as follows: T

k =k

∗T

5 4 − (−1) a∗0 − a0 ] [ ] [ ∗ ] [ = [a1 − a1 ] = [ 8 − 0 ] = [8] . ∗ [a2 − a2 ] [ 5 − 0 ] [5]

Method 3 Here, we apply equation (6.41). We have: f ∗ (A) = A3 + a∗2 A2 + a∗1 A + a∗0 I = A3 + 5A2 + 8A + 4I 0 [ = [0 [1

1 0 0

1 [ = [0 [0

0 1 0

5 [ = [5 [8

8 5 5

3

2

0 1 0 0 1 0 0 1 0 [ [ ] ] [ ] 1] + 5 [0 0 1] + 8 [0 0 1] + 4 [0 1 0] [1 0 0] [1 0 0] [0 0 0 0 0 5 0 8 0 4 0 0 ] [ ] [ ] [ ] 0] + [5 0 0] + [0 0 8] + [0 4 0] 1] [0 5 0] [8 0 0] [0 0 4] 5 ] 8] . 5]

0 ] 0] 1]

Therefore: k = eT S−1 f ∗ (A) = [0

0

0 [ 1] [0 [1

0 1 0

1 5 ][ 0] [5 0] [8

8 5 5

5 ] 8] = [5 5]

8

5] .

Example 6.3. Consider the following system with transfer function as: W(s) =

10 . s(s + 1)(s + 2)

Try to find a state feedback controller to make the closed loop poles become −2 and −1 ± j1.

6.3 Pole Assignment |

169

Solution. Since the system is controllable and observable, the poles can be assigned arbitrarily. Choose the following controllable canonical form: 0 [ ẋ = [0 [0

1 0 −2

y = [10

0

0 0 ] [ ] 1 ] x + [0 ] u , −3] [1 ] 0] x .

With state feedback, the closed loop characteristic polynomial is: f(λ) = det[λI − (A + bK)] = λ3 + (3 − k 2 )λ2 + (2 − k 1 )λ − k 0 . The desired closed loop characteristic polynomial is: f ∗ (λ) = (λ + 2)(λ + 1 − j)(λ + 1 + j) = λ3 + 4λ2 + 6λ + 4 . Compare relative parameters in the above two functions, and we have: k 0 = −4 ,

k 1 = −4 ,

k 2 = −1 .

Thus: K = [−4

−4

−1] .

The closed loop transfer function is: G(s) =

s3

10 . + 6s + 4

+ 4s2

The block diagram of the closed loop system is shown in Figure 6.4.

Fig. 6.4: Block diagram of the closed loop system.

170 | 6 State Feedback and Observer

Example 6.4. The state space model of a system is: 0 [ ẋ = [0 [0

1 −1 0

y = [1

0

0 0 ] [ ] 1 ] x + [0] u −2] [1] 0] x .

Try to find a state feedback controller to make the closed loop poles become −2 and −1 ± j1. Solution. To determine the controllability of the system: M = [b

0 [ A2 b] = [0 [1

Ab

0 1 −2

1 ] −3] . −4]

|M| ≠ 0, so the system is controllable, and the closed loop poles of the system can be arbitrary assigned. Transform the above state space model into the controllable canonical form. The characteristic function is: |sI − A| = s3 + 3s2 + 2s . So we choose: 2 [ I = [3 [1

3 1 0

1 ] 0] . 0]

Then: Tc1 Suppose k̂ = [k̂0 be expressed as:

1 [ = MI = [0 [0

̂k 1

0 1 1

0 ] 0] , 1]

−1 Tc1

1 [ = [0 [0

0 1 −1

0 ] 0] . 1]

k̂2 ] and that the closed loop characteristic polynomials can

󵄨 ̂ ̂k)󵄨󵄨󵄨󵄨 = 󵄨󵄨󵄨󵄨λI − (T −1 AT + T −1 b ̂k)󵄨󵄨󵄨󵄨 . f(λ) = 󵄨󵄨󵄨󵄨λI − (̂ A+b 󵄨 󵄨 󵄨 They can also be expressed as: f ∗ (λ) = (λ + 2)(λ + 1 − j)(λ + 1 + j) = λ3 + 4λ2 + 6λ + 4 . To achieve the desired closed loop poles, we have f ∗ (λ) = f(λ). They can also be expressed as: ̂ −1 , k = kT c1

so

k = [−4

−4

1 [ 1] [0 [0

0 1 −1

0 ] 0] = [−4 1]

−3

The block diagram of the closed loop system is shown in Figure 6.5.

−1] .

6.4 State Estimator

| 171

Fig. 6.5: Block diagram of the closed loop system.

Example 6.5. Consider a plant described by: 0 [ ẋ = [0 [0

1 0 −2

0 0 ] [ ] 1 ] x + [0 ] u . −3] [1 ]

Let us introduce state feedback u = r − [k 1 k 2 k 3 ]x to place the three eigenvalues at −2, −1 ± j. Figure out how to solve it using MATLAB. The MATLAB function place computes state feedback gains for eigenvalue placement or assignment. For example, we can type: a=[0 1 0;0 0 1;0 -2 -3];b=[0;0;1]; p=[-2,-1+j,-1-j]; k=place(a,b,p) yield k = 4.0000

4.0000

1.0000

This is the matrix [k1 k2 k3 ] = [4 4 1].

6.4 State Estimator 6.4.1 Introduction In the preceding sections, we introduce state feedback under the implicit assumption that all state variables are available for feedback. This assumption may not hold in practice, either because the state variables are not accessible for direct connection or because sensors or transducers are not available. In this case, in order to apply state

172 | 6 State Feedback and Observer

feedback, we must design a device, called a state estimator or state observer, so that the output of the device will generate an estimation of the state.

6.4.2 State Estimator 1. Full Dimensional State Estimator Consider the LTI system Σ0 = (A, B, C): ẋ = Ax + Bu y = Cx ,

(6.49)

where A, B, and C are given and the input u(t) and the output y(t) are available to us. The problem is to estimate x from u and y with the knowledge of A, B, and C. If we know A and B, we can duplicate the original system as: x̂̇ = A ̂x + Bu ,

(6.50)

which is shown in Figure 6.6. The duplication will be called an open loop estimator. Now, if (6.49) and (6.50) have the same initial state, then for any input, we have ̂x (t) = x(t) for all t ≥ 0. Therefore, the remaining question is how to find the initial state of (6.49) and then set the initial state of (6.50) to that state. If (6.49) is observable, its initial state x(0) can be computed from u and y over any time interval, say, [0, t1 ]. We can then compute the state at t2 and set ̂x(t2 ) = x(t2 ). Then we have ̂x(t) = x(t) for all t ≥ t2 . Thus, if (6.49) is observable, an open loop estimator can be used to generate the state vector.

Fig. 6.6: Block diagram of open loop state estimator.

6.4 State Estimator

| 173

There are, however, two disadvantages, in using an open loop estimator. First, the initial state must be computed and set each time we use the estimator, which is very inconvenient. Secondly, and more seriously, if the matrix A has eigenvalues with positive real part, then even for a very small difference between x(t0 ) and ̂x(t0 ) for some t0 , which may be caused by disturbance or imperfect estimation of the initial state, the difference between x(t) and ̂x (t) will grow with time. Therefore, the open loop estimator is, in general, not satisfactory. We see from Figure 6.6 that, even though the input and output of (6.49) are available, we use only the input to drive the estimator. Now we shall modify the estimator in Figure 6.6 to the one in Figure 6.7, in which the output y(t) = Cx(t) of (6.49) is compared with C ̂x(t). Their difference, passing through an n × 1 constant gain vector G, is used as a correcting term. If the difference is zero, no correction is needed. If the difference is nonzero and if the gain G is properly designed, the difference will drive the estimated state to the actual state. Such an estimator is called a closed loop or an asymptotic estimator or, simply, an estimator. The open loop estimator is now modified as: ̂ẋ = A ̂x + Bu + G(y − ̂y) = A ̂x + Bu + Gy − GC ̂x ,

(6.51)

which is shown in Figure 6.7. Now, (6.51) can be written as: ̂ẋ = (A − GC)̂x + Bu + Gy ,

(6.52)

and is shown in Figure 6.8. It has two inputs, u and y, and its output yields an estimated state ̂x.

Fig. 6.7: Block diagram of closed loop state estimator I.

174 | 6 State Feedback and Observer

Fig. 6.8: Block diagram of closed loop state estimator Π.

Let us define: e = x − ̂x . It is the error between the actual state and estimated state. Differentiating e and then substituting (6.49) and (6.51) into it, we obtain: ė = ẋ − ̂ẋ = Ax + Bu − (A − GC)̂x − G(Cx) − Bu = Ax − (A − GC)̂x − GCx = (A − GC)(x − ̂x) , or ė = (A − GC)e .

(6.53)

This equation governs the estimation error. If all eigenvalues of (A − GC) can be assigned arbitrarily, then we can control the rate for e to approach zero or, equivalently, for the estimated state to approach the actual state. For example, if all eigenvalues of A − GC have negative real parts smaller than −σ, then the entire of e will approach zero at rates faster than e −σt . Therefore, even if there is a large error between ̂x(t0 ) and x(t0 ) at the initial time t0 , the estimated state will approach the actual state rapidly. Thus, there is no need to compute the initial state of the original state equation. In conclusion, if all eigenvalues of (A − GC) are properly assigned, a closed loop estimator is much more desirable than an open loop estimator. Theorem 6.5. Consider the pair (A, C). All eigenvalues of (A − GC) can be assigned arbitrarily by selecting a real constant vector G if, and only if, (A, C) is observable.

6.4 State Estimator

| 175

Example 6.6. Consider the state equation: 1 ẋ = [ 0

0 1 ]x +[ ]u , 0 1

y = [2

−1] x .

Try to find the state estimator, so that the desired closed loop eigenvalues can be −10, −10. Solution. Step 1: Examine the system’s observability. We have: N=[

C 2 ]=[ CA 2

−1 ] . 0

Since rank N = 2, there exists a full dimensional state estimator. Step 2: Transfer the system to observability criterion form Π. The characteristic polynomial of the system is: det [λI − A] = det [

λ−1 0

0 ] = λ2 − λ . λ

We have: a1 = −1 , and: T −1 = LN = [

a1 1

a0 = 0 ,

L=[

1 2 ][ 0 2

−1 0 ]=[ 0 2

−1 1

1 −1 ]=[ 0 1 1 ] , −1

1 ] , 0 1

T = [2 1

1 2] .

0

Thus: 0 ẋ = T −1 AT x + T −1 bu = [ 1

0 1 ]x +[ ]u , 1 1

y = CT x . Step 3: Introducing feedback matrix G = [g 1 g 2 ]T . The characteristic polynomial of the estimator yields: 󵄨 󵄨 󵄨󵄨 λ 󵄨 f(λ) = 󵄨󵄨󵄨󵄨λI − (A − GC)󵄨󵄨󵄨󵄨 = 󵄨󵄨󵄨󵄨 󵄨󵄨−1

󵄨󵄨 g1 󵄨󵄨 󵄨󵄨 = λ2 − (1 − g 2 )λ + g 1 . λ − (1 − g2 )󵄨󵄨󵄨

Step 4. The desired characteristic polynomial is: f ∗ (λ) = (λ + 10)2 = λ2 + 20λ + 100 . Step 5. Comparing the corresponding coefficient of f(λ) and f ∗ (λ), we have: g 1 = 100 ,

g 2 = 21 ,

and

G=[

100 ] . 21

176 | 6 State Feedback and Observer

Step 6. Transforming to the state of x yields: 1

G = TG = [ 2 1

1 2 ] [100]

0

21

=[

60.5 ] . 100

Step 7. The proposed estimator is: x̂̇ = (A − Gc)̂x + bu + Gy −120 =[ −200

60.5 1 60.5 ] ̂x + [ ] u + [ ]y , 100 1 100

Or: 1 x̂̇ = A ̂x + bu + G(y − ̂y) = [ 0

0 1 60.5 ] x̂ + [ ] u + [ ] (y − ̂y) . 0 1 100

Example 6.7. Consider the transfer function of a controlled system: W0 (s) =

1 . s(s + 6)

Find a vector k such that the closed loop system has eigenvalues λ∗1 = −1 and λ∗2 = −1.5 by state feedback, and design a full dimensional state estimator that can realize the above feedback. Solution. Step 1. From the transfer function above, we know that the system is controllable and observable. Therefore, the state feedback matrix and estimator can be designed independently due to the separation principle. Step 2: Design the state feedback matrix K. For a convenient estimator design, we use the observable canonical form Π of the system directly: 0 0 1 ẋ = ( )x +( )u 1 −6 0 y = (0

1) x .

Step 3. We have: f(λ) = |λI − A| = λ2 + 6λ , and: f ∗ (λ) = (λ − λ∗1 )(λ − λ∗2 ) = λ2 + 8λ + 52 Here we use equation (6.37) and (6.23): TcI = [Ab

1 b] [ 6

0 0 ]=[ 1 1

1 1 ][ 0 6

0 6 ]=[ 1 1

1 ] , 0

and: K = [a0 − a∗0

a1 − a∗1 ] = [0 − 52

6 − 8] = [−52

−2] .

6.4 State Estimator

Therefore: −1 TcI =[

6 1

−1

1 ] 0

0 =[ 1

| 177

1 ] . −6

Hence: −1 k = −KTcI = − [−52

0 −2] [ 1

1 ] = [2 −6

40] .

Step 4: Design the full dimensional estimator. Suppose G = [g1 g2 ]T , then: 0 A − GC = ( 1

0 g1 ) − ( ) (0 g2 −6

0 1) = ( 1

−g1 ) , −6 − g2

and: det[λI − (A − GC)] = det (

λ −1

g1 ) = λ2 + (6 + g2 )λ + g1 . λ + 6 + g2

Comparing with: f ∗ (λ) = (λ − λ∗1 )(λ − λ∗2 ) = λ2 + 20λ + 100 we can obtain: 100 G=( ). 4 The full dimensional estimator equation is: ̂ẋ = (A − GC)̂x + Gy + bu 0 =( 1

−100 100 1 ) x̂ + ( )y +( )u . −20 4 0

Example 6.8. The state space model of a system is: { { { ̇ [0 1] X + [2] u { {X = 3 4 4 [ ] [ ] { { { { {y = [ 0 1] X . { Try to construct a two-dimensional state estimator with poles to be −4 and −6. Find out the model of the stator estimator and plot the diagram of the system with state estimator. Solution. The desired characteristic equation is: (s + 4)(s + 6) = s2 + 10s + 24 = 0 ,

󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 C 󵄨 󵄨0 |N| = 󵄨󵄨󵄨󵄨 󵄨󵄨󵄨󵄨 = 󵄨󵄨󵄨󵄨 󵄨󵄨CA 󵄨󵄨 󵄨󵄨3

󵄨 1󵄨󵄨󵄨 󵄨󵄨 = −3 ≠ 0 , 4󵄨󵄨󵄨

rank N = 2 .

The system is observable and the state estimator can be constructed as: |sI − A + GC| = s2 + (g2 − 4)s + 3g1 − 3 = 0 ,

g1 = 9 ,

g2 = 14 .

178 | 6 State Feedback and Observer

The state estimator is: 0 Ẋ g = (A − GC)X g + Bu + Gy = [ 3

−8 2 9 ] Xg + [ ] u + [ ] y . −10 4 14

The diagram of the system is shown in Figure 6.9.

Fig. 6.9: Block diagram of the system.

Example 6.9. Consider a plant described by the following state equation: 0 ̇x = [ [ 0 [1.24

1 0 0.3965

y = [1

0] x .

0

0 0 ] [ ] 1 ]x +[ 0 ]u −3.145] [1.244]

Try to design a state estimator to place the eigenvalues at −5 ± j5√3, −10 by MATLAB. For the example, we type: a=[0 1 0;0 0 1;1.244 0.3965 -3.145]; b=[0;0;1.244]; c=[1 0 0]; v=[-5+j*5*sqrt(3) -5-j*5*sqrt(3) -10]; l=(acker(a',c',v))'

6.4 State Estimator

|

179

yield l = 16.8550 147.3875 544.3932 Then we can get: 16.855 [ ] L = [147.3875] . [544.3932] So the state estimator is: ẋ = (Λ − LC)x + Bu + Ly 0 1 { {[ = {[ 0 0 { 1.244 0.3965 {[ 16.855 [ ] + [147.3875] y [544.3932] −16.855 [ = [−147.3875 [−544.3932

0 16.855 ] [ ] 1 ] − [147.3875] [1 −3.145] [544.3932]

1 0 0.3965

0

0 } } [ ] 0]} x + [ 0 ] u } } [1.244]

0 0 16.855 ] [ ] [ ] 1 ] x + [ 0 ] u + [147.3875] y . −3.145] [1.244] [544.3932]

2. Reduced Dimensional State Estimator The estimator presented above, usually called a full dimensional estimator, has the same dimension with the controlled system. Actually, the output vector y is always measurable. We can derive a part of state variables directly from y, thus reducing the dimension of the estimator. Consider an observable system, assume the rank of the output matrix C is m, then m dimension state variables can be acquired by the output y. The other n − m dimension state variables can be acquired by a (n − m) dimensional state estimator. This estimator with the output equation can then be used to estimate all n state variables. It also has a lesser dimension than the system (6.49) and is called a reduced dimensional estimator. The controllable system is: ẋ = Ax + Bu (6.54) y = Cx .

180 | 6 State Feedback and Observer With rank C = m, the pair (A, C) is observable. The design consists of two steps. (1) Decompose the state to x 1 and x 2 . m dimension x2 can be derived from y while n − m dimension x1 are to be observed. (2) Construct the (n − m) dimensional state estimator. Suppose x = Tx: A = T −1 AT = [ B = T −1 B = [ C = CT =

A 11

A12

A 21

A22

B1 B2

n−m

m

m

A12 ] , A22

,

,

m I]

A11 A21

m

n−m

]

[0

A = T −1 AT = [

n−m

]

, B=[

B1 ] , B2

C = CT = [0

I] .

The transform matrix T is: T

−1

=[

C0 C

n−m

]

m

−1

,

C0 T=[ ] C

,

where C0 is a (n − m) × n matrix to guarantee that T is nonsingular. −1

CT = C [

C0 ] C

= [0

I] .

The state space equation can be written as: ẋ 1 A11 [ ̇ ]=[ A21 x2 y = [0

A12 x 1 B1 ][ ]+[ ]u A22 x 2 B2

(6.55)

x1 I] [ ] = x2 . x2

As the system (6.54) is observable, (6.55) is also observable. From (6.55), we can see x 2 can be directly detected from y, and x 1 can be obtained from the estimator. The decomposed system structure is shown in Figure 6.10. The subsystem Σ 1 = (A 11 , A12 , B1 , 0) is to be reconfigured. Following the strategy of full dimension state estimator, we can duplicate Σ1 from (6.55) as: ẋ 1 = A11 x 1 + A12 x2 + B1 u = A11 x1 + M ,

(6.56)

M = A 12 x 2 + B1 u .

(6.57)

where Let Z = A21 x2 , then Z = ẋ 2 − A22 x2 − B 2 u.

6.4 State Estimator

| 181

Fig. 6.10: Structure of decomposed system.

Consider M and Z as the input and output of Σ 1 , and A21 as the output matrix. Because of the observable pair (A, C), the pair (A 11 , A21 ) for Σ1 is also observable, thus the subsystem Σ 1 can be estimated. Consulting equation (6.52), the estimator can be written as: ̇ x̂1 = (A11 − GA 21 )x1 + M + GZ . (6.58) Similarly, the eigenvalues of (A 11 − GA 21 ) can be assigned at desired positions by choosing a (n − m) × m dimensional matrix G. Substituting (6.57) into (6.58) yields: ̇ x̂1 = (A11 − GA 21 )̂x1 + (A12 − GA 22 )y + (B 1 − GB 2 )u + G ẏ . Considering the difficulty of implementation of y,̇ we introduce a new variable of: ̂ = ̂x 1 − Gy . w So the estimator equation can be described as: ̂̇ = (A 11 − GA21 )̂x 1 + (A 12 − GA 22 )y + (B 1 − GB 2 )u , w ̂x1 = w ̂ + Gy . Hence, all n state variables x̂ can be constructed as: ̂ + Gy I ̂ x̂1 G w ] = [ ]w x̂ = [̂ ] = [ +[ ]y . 0 I y x2 ̂ Then transform ̂x to ̂x , and we have x = T x. The whole structure of the estimator is shown in Figure 6.11.

(6.59)

182 | 6 State Feedback and Observer

u

x2 , y

  (A,B,C)

 G

  A12GA22

  B1GB2

+

1 s

+

^ w

T

+

x^

^  x1

  A11GA21

Fig. 6.11: Block diagram of reduced dimensional estimator.

From equation (6.57), we can see that x 2 = y, so there are no estimated errors of these m dimension state variables. Subtracting (6.59) from (6.56), we can obtain the estimated error equation: ̇ ė 1 = ẋ 1 − x̂1 = A 11 x 1 + A12 y + B1 u − (A11 − GA21 )x 1 − (A12 − GA22 )y − (B1 − GB 2 )u − G ẏ . Considering A21 x 1 = ẏ − A22 y − B 2 u, the above equation can be simplified as: ė 1 = (A 11 − GA 21 )(x 1 − ̂x1 ) = (A11 − GA 21 )e1 ,

(6.60)

where e1 is the error between x and x̂1 . As the subsystem Σ1 is observable, the eigenvalues of (A11 − GA 21 ) can be assigned at desired positions by choosing G, thus guaranteeing that the error e1 can approach zero at the desired rate. Example 6.10. Consider the system: 4 [ ẋ = [−11 [ 13

4 −12 14

y = [1

1] x .

1

4 1 ] [ ] −12] x + [−1] u 13 ] [0]

Find a reduced dimensional state estimator with the poles −3, −4. Solution. Examine the system’s observability. There exists a reduced dimensional state estimator rank c = 1.

6.4 State Estimator

| 183

Construct the transform matrix T: T

−1

1 [ = [0 [1

0 1 1

0 ] 0] , 1]

1 [ T=[0 [−1

0 4 ][ 0] [−11 1] [ 13

4 −12 14

4 1 ][ −12] [ 0 13 ] [−1

0 1 −1

0 ] 0] . 1]

Suppose: 1 [ A = T −1 AT = [0 [1

0 1 1

0 1 −1

0 0 ] [ 0 ] = [1 1 ] [1

0 0 1

4 ] −12] , 5 ]

we have: 1 [ b = T −1 b = [0 [1 c = cT = [1

1

0 1 1

0 1 1 ][ ] [ ] 0] [−1] = [−1] , 1] [ 0 ] [ 0 ]

1 [ 1] [ 0 [−1

0 1 −1

0 ] 0] = [0 1]

0

1] .

Since x 3 can be provided directly by y, a second dimensional state estimator is needed. Step 1. Introducing feedback matrix G = [g 1 g 2 ]T ; the characteristic polynomial of the estimator yields: 󵄨 g 0 0 󵄨 󵄨 󵄨󵄨 λ 0 f(λ) = 󵄨󵄨󵄨󵄨λI − (A11 − GA 21 )󵄨󵄨󵄨󵄨 = 󵄨󵄨󵄨󵄨[ ]−[ ] − [ 1 ] [1 g2 1 0 󵄨󵄨 0 λ 󵄨󵄨 󵄨󵄨 g 1 󵄨󵄨 󵄨 λ + g1 󵄨󵄨 = λ2 + (g 1 + g 2 )λ + g 1 . = 󵄨󵄨󵄨󵄨 󵄨󵄨−1 + g2 λ + g2 󵄨󵄨󵄨

󵄨󵄨 󵄨 1]󵄨󵄨󵄨󵄨 󵄨󵄨

Step 2. The desired characteristic polynomial is: f ∗ (λ) = (λ + 3)(λ + 4) = λ2 + 7λ + 12 . Step 3. Comparing the corresponding coefficient of f(λ) and f ∗ (λ), we have: g 1 = 12, g 2 = −5 ,

and

12 G=[ ] . −5

Step 4. From equation (6.59), we obtain the estimator equation: ̂̇ = [−12 w 6

−12 ̂ −56 1 ] x1 + [ ]y +[ ]u 5 13 −1

̂ + [12] y . x̂ 1 = w −5

184 | 6 State Feedback and Observer

The estimation of the state after linear transformation is: 1 ̂ + Gy x̂1 w [ ] = [0 x̂ = [̂ ] = [ y x3 [0

0 12 w1 + 12y ] w1 [ ] [ ] 1] [ ] + [−5] y = [ w 2 − 5y ] . w2 0] y [1] [ ]

Step 5. To get the state estimation of the original system, transform x̂ as follows: 1 ̂x = T ̂x = [ [0 [−1

0 1 −1

0 w1 + 12y w 1 + 12y ][ ] [ ] 0] [ w 2 − 5y ] = [ w 2 − 5y ] . 1] [ y ] [−w1 − w 2 − 6y]

Example 6.11. The state space model of a system is { { ̇ [0 2] X + [1] u { {X = 3 1 3 ] [ ] [ { { { { { y = [0 1] X . Try to construct a one dimensional state estimator with pole to be −5, and plot the diagram of the system. Solution. The system model is an observable canonical, so the system is observable, the state estimator can be constructed and the pole can be assigned arbitrary y = x2 . Only x1 needs to be constructed: 󵄨󵄨 󵄨 󵄨󵄨sI − ̂ A11 + g ̂ A21 󵄨󵄨󵄨󵄨 = s − 0 + g = s + 5 = 0 , 󵄨

g = 5;

Ẇ = ẇ = (A11 − GA21 )W + [(A12 − GA22 ) + (A11 − GA21 )G]Y + (B1 − GB2 )U = −5w − 38y − 14u , x1g = ̂x1g = W + GY = w + 5y ,

x2g = ̂x2g = y .

The state estimator is: ẇ = −5w − 38y − 14u { { { x = w + 5y { { 1g { {x2g = y . The diagram of the system with state estimator is shown in Figure 6.12.

6.5 State Feedback Based on State Estimator

185

|

Fig. 6.12: Diagram of the system.

6.5 State Feedback Based on State Estimator Figure 6.13 is a state feedback system based on full dimensional state estimator. Consider the controllable and observable controlled system Σ0 = (A, B, C): ẋ = Ax + Bu y = Cx The state estimator Σ G :

}

̂ẋ = (A − GC)̂x + Gy + Bu ̂y = C ̂x

(6.61)

}

(6.62)

The state feedback law is: u = −K ̂x + v .

(6.63)

By substituting equation (6.63) into equation (6.61) and equation (6.62), you can obtain the state space description of the total closed loop system. ẋ = Ax − BK ̂x + Bv } } } ̇̂x = GCx + (A − GC − BK)̂x + Bv } } } y = Cx }

(6.64)

186 | 6 State Feedback and Observer

Fig. 6.13: State feedback system based on full dimensional state estimator.

Equation (6.64) can be written in the following matrix form: ẋ A ( ̇) = ( ̂x GC y = (C

−BK x B } ) ( ) + ( ) v} } ̂x A − GC − BK B } } } } x } } } 0) ( ) ̂x }

(6.65)

This is a closed loop system with dimension of 2n. Define the state error as x̃ = x − x̂ and introduce the following equivalent transformation: x I 0 x x (6.66) ( )=( )( ) = ( ) . ̃x ̂x I −I x − ̂x Suppose the transfer matrix is: T=( Then: T

−1

I =( I

0 ) −I

I I

0 ) . −I

−1

=(

I I

(6.67)

0 )=T. −I

(6.68)

6.5 State Feedback Based on State Estimator

|

187

With linear transformation, the system turns into (A1 , B1 , C 1 ): A1 = T −1 A1 T = ( B1 = T −1 B1 = ( C1 = C1 T = (C

I I

I I

0 A )( −I GC

−BK I )( A − GC − BK I

0 A − BK )=( −I 0

0 B B )( ) = ( ) −I B 0 0) (

I I

0 ) = (C −I

BK ) A − GC (6.69)

0) .

Linear transformation does not change the poles of system, therefore: det [λI − A1 ] = det (

λI − (A − BK) 0

−BK ) λI − (A − GC)

= det [λI − (A − BK)] det [λI − (A − GC)]

(6.70)

= det [λI − (A − BK)] det [λI − (A − GC)] . The results are very interesting, because they illustrate the fact that the characteristic polynomial of the closed loop state feedback system based on state estimator equals the product of the characteristic polynomial of the matrix (A − BK), and that of the matrix (A − GC). The poles of the closed loop system equals the sum of the poles of direct state feedback (A − BK) and that of state estimator (A − GC). Indeed, if system (A, B) is controllable, then the matrix K of the state feedback law (6.63) can be chosen so that the poles of the closed loop system Σ0 = (A, B, C) have any desired arbitrary values. The same applies to equation (6.62) where, if the system (A, C) is observable, the matrix G of estimator can be chosen so as to force the error to go rapidly to zero. This property, where the two design problems (the estimator and the matrix K of the closed loop system) can be handled independently, is called the separation principle. This principle is clearly a very important design feature, since it reduces a very difficult design task to two separate simpler design problem. Consider the pole assignment and the estimator design problem. The pole assignment problem is called the control problem and it is rather a simple control design tool for improving the closed loop system performance. The estimator design problem is called estimator problem, since it produces a good estimate of x(t) in cases where x(t) is not measurable. The solution of the estimator design problem reduces to that of solving a pole assignment problem. In cases where an estimate of x(t) is used in the control problem, one faces the problem of simultaneously solving the estimation and the control problem. At first sight this appears to be a formidable task. However, thanks to the separation theorem, the solution of the combined problem of estimation and control breaks down to separately solving the estimation and the control problem. The solution of the combined problem of estimation and control requires twice of the pole assignment.

188 | 6 State Feedback and Observer

6.6 Summary Three types of feedback are introduced in this chapter. They can be used to improve the performance of a system. The precondition and algorithm of every feedback are discussed in detail. The pole assignment can be realized with some feedback. The desired poles come from the request for the performance of a system. The state estimator can be designed when a system is observable to realize the state estimation so as to fulfill the state feedback to optimize the system performance.

Appendix: State Feedback and Observer for Main Steam Temperature Control in Power Plant Steam Boiler Generation System The superheater is an important part of the steam generation process in the boiler turbine system, where steam is superheated before entering the turbine that drives the generator. The objective is to control the superheated steam temperature by controlling the flow of spray water using the spray water valves. As can be seen in Figure 6.14, a two stage water sprayer is used to control the superheated temperature. The steam generated from the boiler drum passes through the low temperature superheater before it enters the radiant type platen superheater. Water is sprayed onto the steam to control the superheated steam temperature in both the low and high temperature superheaters. Proper control of the superheated steam temperature is extremely important to ensure the overall efficiency and safety of the power plant. Therefore, the superheated steam temperature is to be controlled by adjusting the flow of spray water. 1

to turbine 1

2

2

combustion gas flow

5

feedwater

3

4

low-temperature platen high-temperature superheater superheater superheater

Fig. 6.14: Boiler and superheater steam generation process.

Appendix: State Feedback and Observer for Power Plant Steam Temperature Control

|

189

The typical mathematic model of the superheated steam temperature control process is a sixth order transfer function as follows: G0 (s) =

θ(s) 1.589 × 2.45 , = W(s) (1 + 14s)2 (1 + 15.8s)4

where θ and W represent the superheater steam temperature and the water flow rate of spray superheating, respectively. Then the transfer function can be transformed to the controllable canonical form: ̇ = Ax(t) + Bu(t) x(t) y(t) = Cx(t) + Du(t) . where: −0.396 −0.0653 −0.0057 −2.8355e−6 −7.4664e−6 −8.1868e−8 ] 1 0 0 0 0 0 ] ] ] 0 1 0 0 0 0 ] , ] 0 0 1 0 0 0 ] ] 0 0 0 1 0 0 ] 0 0 0 0 1 0 ] [

[ [ [ [ A=[ [ [ [ [

1 [ ] [0] [ ] [0] ] B=[ [0] , [ ] [ ] [0] [0]

C = [0

0

0

0

0

0.3187e−6] .

Here, we need to find a state feedback controller u = r − [k 1 k 2 k 3 k 4 k 5 k 6 ]x to make the closed loop poles at [−0.1 − 0.1 − 0.1 − 0.1 − 0.1 − 0.1]. The block diagram of the system is shown in Figure 6.15.

Setpoint of temperature Controller

Attemperator

State feedback

Superheater

State observer

Fig. 6.15: The steam temperature control system with state feedback and state observer.

Steam temperature

190 | 6 State Feedback and Observer

The system is controllable according to the matrix A. With state feedback, the closed loop characteristic polynomial is: f(λ) = det[λI − (A − BK)] = λ6 + (0.396 + k 1 )λ5 + (0.0653 + k 2 )λ4 + (0.0057 + k 3 )λ3 + (2.8355e−6 + k 4 )λ2 + (7.4664e−6 + k 5 )λ + 8.1868e−8 + k 6 . The desired closed loop characteristic polynomial is: f ∗ (λ) = (λ + 0.1)6 = λ6 + 0.6λ5 + 0.15λ4 + 0.02λ3 + 0.0015λ2 + 0.00006λ + 0.000001 . Compare relative parameters in the above two functions, and we have: k1 = 0.204 ,

k 2 = 0.0847 ,

k 3 = 0.0143 ,

k 4 = 1.4972e−3 ,

k5 = 5.2534e−5 ,

k 6 = 9.1813e−7 .

Thus: K = [0.204

0.0847

0.0143

1.4972e−3

5.2534e−5

9.1813e−7] .

The observable canonical form of the system is: ̇ = Ax(t) + Bu(t) x(t) y(t) = Cx(t) + Du(t) , where: 0 [ [1 [ [0 A=[ [0 [ [ [0 [0

0 0 1 0 0 0

0 0 0 1 0 0

0 0 0 0 1 0

0.3187e−6 ] 0 ] ] ] 0 ] , ] 0 ] ] 0 ] 0 ] [

[ [ [ [ B=[ [ [ [ [

0 0 0 0 0 1

−8.1868e−8 ] −7.4664e−6] ] −2.8355e−6] ] , −0.0057 ] ] ] −0.0653 ] −0.396 ]

C = [0

0

0

0

0

1] .

Next, we could design a full dimensional state estimator with poles to be [−0.25 −0.25 −0.25 − 0.25 − 0.25 − 0.25].

Appendix: State Feedback and Observer for Power Plant Steam Temperature Control

|

191

Design the full dimensional estimator. Suppose G = [g1 g2 g3 g4 g5 g6 ]T , then: 0 [ [1 [ [0 A − GC = [ [0 [ [ [0 [0

0 0 1 0 0 0

0 0 0 1 0 0

0 0 0 0 1 0

0 0 0 0 0 1

g1 −8.1868e−8 ] [ ] −7.4664e−6] [g2 ] ] [ ] [ ] −2.8355e−6] ] − [g3 ] [0 [ ] −0.0057 ] ] [g4 ] ] [ ] −0.0653 ] [g5 ] −0.396 ] [g6 ]

0 [ [1 [ [0 =[ [0 [ [ [0 [0

0 0 1 0 0 0

0 0 0 1 0 0

0 0 0 0 1 0

0 0 0 0 0 1

−8.1868e−8 − g1 ] −7.4664e−6 − g2 ] ] −2.8355e−6 − g3 ] ] , −0.0057 − g4 ] ] ] −0.0653 − g5 ] −0.396 − g6 ]

0

0

0

0

1]

and det [λI − (A − GC)] = λ6 + (0.396 + g6 )λ5 + (0.0653 + g5 )λ4 + (0.0057 + g4 )λ3 + (2.8355e−6 + g3 )λ2 + (7.4664e−6 + g2 )λ + 8.1868e−8 + g1 . Comparing with: f ∗ (λ) = (λ + 0.25)6 = λ6 + 1.5λ5 + 0.9375λ4 + 0.3125λ3 + 0.0586λ2 + 0.00586λ + 0.000244 , we can obtain: 0.0002 [ ] [0.0059] [ ] [0.0583] ] G=[ [0.3068] . [ ] [ ] [0.8722] [ 1.104 ] The full dimensional estimator equation is: x̂̇ = (A − GC) ̂x + Gy + Bu 0 [ [1 [ [0 =[ [0 [ [ [0 [0

0 0 1 0 0 0

0 0 0 1 0 0

0 0 0 0 1 0

0 0 0 0 0 1

0.3187e−6 0.0002 −0.000244 ] [ ] [ ] 0 −0.00586 ] ] [ [0.0059] ] [ ] [ ] ] [ [0.0583] 0 −0.0586 ] ]u . ]y +[ ] ̂x + [ ] [ [0.3068] 0 −0.3125 ] ] [ ] [ ] ] [ ] [ ] 0 −0.9375 ] ] [ [0.8722] 0 1.104 −1.5 ] ] [ ] [

192 | 6 State Feedback and Observer

Exercise 6.1. Determine whether the following systems can realize arbitrary pole assignment with state feedback: (1)

1 ẋ = [ 3

2 1 ]x +[ ]u 1 0

(2)

4 ẋ = [ 0

2 1 ]x +[ ]u −2 0

(3)

1 [ ẋ = [0 [0

0 −2 0

(4)

0 [0 [ ẋ = [ [0 [−2

0 1 [ ] 1 ] x + [0 −2] [0

1 0 0 −4

0 1 0 −3

0 ] 1] u 0]

0 0 ] [ 0] [0 ]x+[ [0 1] −5] [1

0 0 1 0

0 1] ] ]u 0] 0]

6.2. Consider a single input continuous time LTI system: 1 ẋ = [ 3

2 1 ]x +[ ]u . 1 0

Try to find a state feedback matrix k, which makes the closed loop eigenvalues λ∗1 = −2 + j, λ∗2 = −2 − j. 6.3. Given the transfer function of a SISO continuous time LTI system: G(s) =

1 , s(s + 4)(s + 8)

try to find a state feedback matrix k, which makes the closed loop eigenvalues λ∗1 = −2, λ∗2 = −4, λ∗3 = −7. 6.4. Given a single input LTI system: 0 [ ẋ = [1 [0

0 −6 1

0 1 ] [ ] 0 ] x + [0] u , −12] [0]

Try to find a state feedback matrix u = −Kx, which makes the closed loop eigenvalues λ∗1 = −2, λ∗2 = −1 + j, λ∗3 = −1 − j. 6.5. Consider a continuous time LTI system: ẋ = [

1 0

1 0 ]x +[ ]u 1 1

y=[

2 0

0 ]x . 1

Exercise

| 193

Try to find an output feedback matrix f , which makes the closed loop eigenvalues become λ∗1 = −2, λ∗2 = −4. 6.6. Consider the following fourth order system: 2 [0 [ ẋ = [ [0 [0

1 2 0 0

0 0 −2 0

0 0 [1] 0] ] [ ] ]x +[ ]u . [1] 0] −2] [1]

Determine state feedback matrix to place the closed loop system poles at: (1)

λ∗1 = −2 ,

λ∗2 = −2 ,

λ∗3 = −2 ,

λ∗3 = −2

(2)

λ∗1 = −3 ,

λ∗2 = −3 ,

λ∗3 = −3 ,

λ∗3 = −2

(3)

λ∗1 = −3 ,

λ∗2 = −4 ,

λ∗3 = −3 ,

λ∗3 = −3

6.7. Consider a continuous time LTI system: 1 ̇x = [ [0 [0

1 1 0

0 0 ] [ 0] x + [1 2] [0

0 ] 0 ]u . −1]

Try to find the state feedback matrix to place the closed loop eigenvalues at λ∗1 = −2, λ∗2 = −1 + j2, λ∗3 = −1 − j2. 6.8. Given the transfer function of a SISO continuous time LTI system: g0 (s) =

(s + 2)(s + 3) , (s + 1)(s − 2)(s + 4)

try to determine if there exists a state feedback matrix k, which can make the closed loop transfer function as: s+3 g(s) = . (s + 2)(s + 4) If it does, find a state feedback matrix k. 6.9. Design a full dimensional state estimator with eigenvalues to at −r, −2r (r > 0) for the state equation below: 0 ẋ = [ 0

1 0 ]x +[ ]u 0 1

y = [1

0] x .

6.10. Design a reduced dimensional state estimator with eigenvalues to as −4 and −5 for the state equation below: 0 [ ẋ = [0 [0

1 0 0

0 0 ] [ ] 1] x + [0] u 0] [1]

y = [1

0

0] x .

Bibliography [1] [2] [3] [4] [5] [6] [7]

C. T. Chen. Linear System Theory and Design. Oxford University Press. 1999. (Third Edition). J. J. D’Azzo. Linear Control System Analysis and Design. McGraw–Hill, 1995. J. J. D’Azzo. Linear Control System Analysis and Design With MATLAB. Marcel Dekker, 2003, 1995. K. Ogata. Modern Control Engineering: Fourth Edition. Prentice–Hall. Inc. 2006. B. Liu. Modern Control Theory. Second Edition. China Machine Press, 1992 (only available in Chinese). P. N. Paraskevoppulos. Modern Control Engineering. Marcel Dekker. Inc 2002. D. Z. Zheng. Linear System Theory. Second Edition. Tsinghua University Press, 2002 (only available in Chinese).

https://doi.org/10.1515/9783110574951-007

Index algebraic (nondynamic) equations 8 arbitrary pole assignment 162

norms of vectors 33 nuclear power generation 23

basis and representation 32 BIBO (bounded input bounded output) 72

observability canonical form 128, 129 observable Jordan canonical form criteria 117 observable PBH characteristic vector criteria 116 observable PBH rank criteria 116, 118 orthonormalization 33 output feedback 157

Cayley–Hamilton theorem 57 controllability canonical form 123, 126 controllability Gram matrix criteria 106 controllability Jordan canonical form criteria 110 controllability PBH criteria 109 controllability rank criteria 107 diagonal form 36 differential equation 1 discrete time equation 64 doubly fed induction generators (DFIGs) 25 duality system 120 dynamic equations 8 first principal modeling 10 full dimensional state estimator 172 Gram matrix criteria 115 input-output model 15 Jordan form 36 Laplace 4 large scale asymptotic stability theorem 82 linear algebra 31 linear time invariant (LTI) 103 linear time invariant (LTI) state space equation 47 LTI state equations 47 Lyapunov stability theorem for discrete nonlinear time invariant (NTI) systems 98 mathematical model of a system 1 minimum realization 145 multiple inputs and multiple outputs (MIMO) 7

https://doi.org/10.1515/9783110574951-008

pole assignment 162 rank criteria 116, 118 realization 145 reduced dimensional state estimator 179 similarity transformation 34 SISO systems 16 small scale asymptotically stable theorem 88 stability criteria for discrete linear time invariant (LTI) Systems 99 stability criteria for linear time invariant (LTI) systems 90 stability criteria for linear time variant (LTV) Systems 95 state equations 8 state estimator 172 state feedback 156 state observer 172 state space model 6 state transfer matrix 50 state transition matrix 52 theorem for instability 90 theorem for stability in the sense of Lyapunov 89 theorem of duality 116 thermal power generation 22 transfer function model 4 transform of the block diagram 9 wind power generation 25