Solutions to Linear Matrix Equations and Their Applications

This book addresses both the basic and applied aspects of the finite iterative algorithm, CGLS iterative algorithm, and


English · 196 pages · 2023


Table of contents:
Contents
1. Introduction
2. Finite Iterative Algorithm to Coupled Transpose Matrix Equations
3. Finite Iterative Algorithm to Coupled Operator Matrix Equations with Sub-Matrix Constrained
4. MCGLS Iterative Algorithm to Linear Conjugate Matrix Equation
5. MCGLS Iterative Algorithm to Linear Conjugate Transpose Matrix Equation
6. MCGLS Iterative Algorithm to Coupled Linear Operator Systems
7. Explicit Solutions to the Matrix Equation X – AXB = CY + R
8. Explicit Solutions to the Nonhomogeneous Yakubovich-Transpose Matrix Equation
9. Explicit Solutions to the Matrix Equations XB – AX = CY and XB – AX̂ = CY
10. Explicit Solutions to Linear Transpose Matrix Equation
References


Current Natural Sciences

Caiqin SONG

Solutions to Linear Matrix Equations and Their Applications

Printed in France

EDP Sciences – ISBN (print): 978-2-7598-3102-9 – ISBN (ebook): 978-2-7598-3103-6
DOI: 10.1051/978-2-7598-3102-9

All rights relative to translation, adaptation and reproduction by any means whatsoever are reserved, worldwide. In accordance with the terms of paragraphs 2 and 3 of Article 41 of the French Act dated March 11, 1957, “copies or reproductions reserved strictly for private use and not intended for collective use” and, on the other hand, analyses and short quotations for example or illustrative purposes, are allowed. Otherwise, “any representation or reproduction – whether in full or in part – without the consent of the author or of his successors or assigns, is unlawful” (Article 40, paragraph 1). Any representation or reproduction, by any means whatsoever, will therefore be deemed an infringement of copyright punishable under Articles 425 and following of the French Penal Code.

The printed edition is not for sale in Chinese mainland. Customers in Chinese mainland please order the print book from Science Press. ISBN of the China edition: Science Press 978-7-03-072138-9

© Science Press, EDP Sciences, 2023

About the Author

Caiqin Song is an associate professor and Master's supervisor. She received her PhD from East China Normal University in June 2012. Her main research directions are the explicit solution of linear matrix equations, iterative algorithms for linear matrix equations, the explicit solution of quaternion matrix equations, and the semi-tensor product of matrices. She has published more than 30 academic papers in international academic journals. She has led one National Natural Science Foundation of China grant (No. 11501246), one first-class grant and one second-class grant from the China Postdoctoral Science Foundation (Nos. 2017M610243 and 2013M541900), and one Shandong Provincial Natural Science Foundation grant (No. ZR2020MA052).

Preface

Solving linear matrix equations has experienced decades of development and has become a relatively mature scientific field. The study of linear matrix equations focuses on two important elements: one is the sufficient condition, or the necessary and sufficient condition, for a linear matrix equation to have a solution; the other is the explicit solution expression or the iterative algorithm. The topic of linear matrix equations includes the solvability conditions of linear matrix equations (mainly sufficient conditions and necessary and sufficient conditions); explicit methods for linear matrix equations; iterative algorithms for linear matrix equations, together with comparisons to existing algorithms in time and accuracy; and applications of linear matrix equations in the field of control. Knowledge of linear matrix equations is becoming increasingly necessary: it has been applied in many engineering branches, such as robust control, observer design, and attitude control of spacecraft.

This book is written for a wide audience and has three main purposes. Firstly, for researchers who want to study linear matrix equations but lack background knowledge, this book can be used as a primer. Secondly, this book provides control models for researchers who already have important and good research results in the field of linear matrix equations. Thirdly, this book can serve as a textbook or reference book for graduate students in related fields. To achieve these purposes, this book provides two types of iterative algorithms for some linear matrix equations, which are used to find the numerical solutions and least squares solutions of several types of linear matrix equations, and it provides explicit methods for several types of linear matrix equations.
This book has several characteristics. Firstly, it contains a number of recent research results, giving a finite iterative algorithm for coupled operator matrix equations with sub-matrix constraints and a least squares iterative algorithm for coupled operator matrix equations. Secondly, the control models and the applications of the algorithms to linear matrix equations are given. The iterative


algorithms and explicit solutions provided in this book are very useful for further research on control problems. The book is not only easy to understand, but also provides a large number of control models and numerical examples to illustrate the superiority and practicality of the algorithms.

Caiqin SONG
July 29, 2021

Notations

C: the set of all complex numbers
R: the set of all real numbers
R^{m×n}: the set of all m × n real matrices
C^{m×n}: the set of all m × n complex matrices
Ā: the conjugate of matrix A
A^T: the transpose of matrix A
A^H: the conjugate transpose of matrix A
A^{-1}: the inverse of matrix A
I_n: the n × n identity matrix
0_{m×n}: the m × n zero matrix
det(A): the determinant of matrix A
r(A): the rank of matrix A
tr(A): the trace of matrix A
ρ(A): the spectral radius of matrix A
λ(A): the set of eigenvalues of matrix A
‖x‖_2: the Euclidean length of vector x
‖A‖_2: the spectral norm of matrix A
‖A‖_F: the Frobenius norm of matrix A
A ≥ B: A and B have the same size and satisfy a_{ij} ≥ b_{ij}
∪: set union
∩: set intersection
∅: the empty set
⇔: equivalence
a_i: the ith column of A
vec(A) = (a_1^T, a_2^T, …, a_n^T)^T: the vec operator of matrix A
⟨A, B⟩: the inner product of matrices A and B
L_C^{m×n, p×q}: the set of linear operators from C^{m×n} onto C^{p×q}
I: the identity operator on C^{m×n}
I[1, N]: the integer set from 1 to N

Contents

About the Author . . . III
Preface . . . V
Notations . . . VII

CHAPTER 1 Introduction . . . 1
1.1 The First Part: Finite Iterative Algorithm to Linear Matrix Equation . . . 1
1.2 The Second Part: MCGLS Iterative Algorithm to Linear Matrix Equation . . . 3
1.3 The Third Part: Explicit Solutions to Linear Matrix Equation . . . 5

CHAPTER 2 Finite Iterative Algorithm to Coupled Transpose Matrix Equations . . . 11
2.1 Finite Iterative Algorithm and Convergence Analysis . . . 12
2.2 Numerical Example . . . 24
2.3 Control Application . . . 27
2.4 Conclusions . . . 28

CHAPTER 3 Finite Iterative Algorithm to Coupled Operator Matrix Equations with Sub-Matrix Constrained . . . 29
3.1 Iterative Method for Solving Problem 3.1 . . . 30
3.2 Iterative Method for Solving Problem 3.2 . . . 41
3.3 Numerical Example . . . 42
3.4 Control Application . . . 53
3.5 Conclusions . . . 54

CHAPTER 4 MCGLS Iterative Algorithm to Linear Conjugate Matrix Equation . . . 55
4.1 MCGLS Iterative Algorithm and Convergence Analysis . . . 57
4.2 Numerical Example . . . 66
4.3 Control Application . . . 73
4.4 Conclusions . . . 74

CHAPTER 5 MCGLS Iterative Algorithm to Linear Conjugate Transpose Matrix Equation . . . 75
5.1 MCGLS Iterative Algorithm and Convergence Analysis . . . 78
5.2 Numerical Example . . . 90
5.3 Conclusions . . . 95

CHAPTER 6 MCGLS Iterative Algorithm to Coupled Linear Operator Systems . . . 97
6.1 Some Useful Lemmas . . . 98
6.2 MCGLS Iterative Algorithm and Convergence Analysis . . . 99
6.3 Numerical Examples . . . 107
6.4 Conclusions . . . 113

CHAPTER 7 Explicit Solutions to the Matrix Equation X − AXB = CY + R . . . 115
7.1 Solutions to the Real Matrix Equation X − AXB = CY + R . . . 115
7.2 Parametric Pole Assignment for Descriptor Linear Systems by P-D Feedback . . . 124
7.3 Conclusions . . . 128

CHAPTER 8 Explicit Solutions to the Nonhomogeneous Yakubovich-Transpose Matrix Equation . . . 131
8.1 The First Approach . . . 132
8.2 The Second Approach . . . 140
8.3 Illustrative Example . . . 142
8.4 Conclusions . . . 143

CHAPTER 9 Explicit Solutions to the Matrix Equations XB − AX = CY and XB − AX̂ = CY . . . 145
9.1 Real Matrix Equation XB − AX = CY . . . 145
9.2 Quaternion-j-Conjugate Matrix Equation XB − AX̂ = CY . . . 148
9.2.1 Real Representation of a Quaternion Matrix . . . 148
9.2.2 Solutions to the Quaternion j-Conjugate Matrix Equation XB − AX̂ = CY . . . 150
9.3 Conclusions . . . 155

CHAPTER 10 Explicit Solutions to Linear Transpose Matrix Equation . . . 157
10.1 Solutions to the Sylvester Transpose Matrix Equation . . . 157
10.1.1 The First Case: A or B is Nonsingular . . . 158
10.1.2 The Second Case: A and B are Nonsingular . . . 162
10.2 Solutions to the Generalized Sylvester Transpose Matrix Equation . . . 165
10.3 Algorithms for Solving Two Transpose Equations and Numerical Example . . . 168
10.4 Application in Control Theory . . . 174
10.5 Conclusions . . . 175

References . . . 177

Chapter 1

Introduction

This book comprises three parts concerning the solutions to linear matrix equations: finite iterative algorithms for linear matrix equations, MCGLS iterative algorithms for linear matrix equations, and explicit solutions to linear matrix equations.

1.1 The First Part: Finite Iterative Algorithm to Linear Matrix Equation

In chapter 2, we consider a finite iterative algorithm for the coupled Sylvester-transpose matrix equations. It is well known that Sylvester and Lyapunov matrix equations are very important, since they play a fundamental role in various fields of engineering, particularly in control systems. The numerical solutions of Sylvester and Lyapunov matrix equations have been addressed in a large body of literature. Zhou and Duan[1–3] established the solutions of several generalized Sylvester matrix equations. Zhou et al.[4] proposed a gradient-based iterative algorithm for solving the general coupled Sylvester matrix equations with weighted least squares solutions. In[5], general parametric solutions to a family of generalized Sylvester matrix equations arising in linear system theory were presented by using the so-called generalized Sylvester mapping, which has some elegant properties. With the help of the Kronecker map, Wu et al.[6] introduced a complete, general, and explicit solution to the Yakubovich matrix equation V − AVF = BW, with F in arbitrary form. Also using the Kronecker map, an explicit solution for the matrix equation XF − AX = C was established[7]. Moreover, Ding and Chen presented hierarchical gradient iterative (HGI) algorithms for general matrix equations[10,11] and hierarchical least squares iterative (HLSI) algorithms for generalized coupled Sylvester matrix equations and general coupled matrix equations[12,13]. The HGI algorithms[10,11] and HLSI algorithms[10,13,14] for solving general (coupled) matrix equations are innovative and computationally efficient numerical methods, proposed on the basis of the hierarchical identification principle[12,15], which regards the unknown matrix as the system


parameter matrix to be identified. In[16], Dehghan and Hajarian proposed an iterative algorithm for solving the second-order Sylvester matrix equation

EVF^2 − AVF − CV = BW.  (1.1.1)
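As a concrete illustration of the Kronecker-map viewpoint used in the works cited above, the equation XF − AX = C can be vectorized via vec(AXB) = (B^T ⊗ A) vec(X), which yields (F^T ⊗ I − I ⊗ A) vec(X) = vec(C). The sketch below solves a small made-up instance this way (an illustration only, not the explicit solution of [7]):

```python
import numpy as np

# Solve XF - AX = C via vectorization:
# vec(XF) = (F^T kron I) vec(X) and vec(AX) = (I kron A) vec(X), so
# (F^T kron I - I kron A) vec(X) = vec(C).
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
F = rng.standard_normal((n, n))
X_true = rng.standard_normal((n, n))
C = X_true @ F - A @ X_true          # right-hand side built for consistency

I = np.eye(n)
M = np.kron(F.T, I) - np.kron(I, A)
x, *_ = np.linalg.lstsq(M, C.flatten(order="F"), rcond=None)  # column-major vec
X = x.reshape((n, n), order="F")

print(np.linalg.norm(X @ F - A @ X - C))  # residual, ~ 0
```

The linear system is uniquely solvable exactly when A and F share no eigenvalues; for random data this holds generically.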

Recently, Dehghan and Hajarian[17–20] introduced several iterative methods to solve (coupled) matrix equations, such as (coupled) Sylvester matrix equations over reflexive, anti-reflexive, and generalized bisymmetric matrices.

In chapter 3, we consider the finite iterative algorithm for the coupled operator matrix equations with sub-matrix constraints. Chen[59] put forward applications of the generalized centrosymmetric matrix in three major areas: the altitude estimation of a level network, an electric network, and the structural analysis of trusses. Coupled matrix equations have numerous applications in stability theory, control theory, perturbation analysis, signal processing, and image restoration. For example, in the stability analysis of linear jump systems with Markovian transitions, the continuous-time coupled Lyapunov matrix equations

A_i^T P_i + P_i A_i + Q_i + Σ_{j=1}^{N} p_{ij} P_j = 0,  Q_i > 0,  i ∈ I[1, N],  (1.1.2)

and the discrete-time coupled Lyapunov matrix equations

P_i = A_i^T ( Σ_{j=1}^{N} p_{ij} P_j ) A_i + Q_i,  Q_i > 0,  i ∈ I[1, N],  (1.1.3)

are often encountered[58,65,138], where P_i > 0, i ∈ I[1, N], are the matrices to be determined. The coupled Sylvester matrix equations arise in computing error bounds for eigenvalues and eigenspaces of the generalized eigenvalue problem[33]

S − λT = [A C; 0 B] − λ[D F; 0 E],  (1.1.4)

in computing deflating subspaces of the same problem, and in computing certain decompositions of the transfer matrix arising in control theory. Many papers have studied such matrix equations[8,18,35,36,38,40,43–46,51,52,56,66,76,100,126]. But with the development of economy, science, and technology, many researchers now pay close attention to transpose matrix equations and conjugate matrix equations. The Sylvester transpose matrix equation can be used to solve many control problems, such as pole assignment[60], eigenstructure assignment[61,166–168], and robust fault detection[62]. Recently, in[64], a minimal residual method was reported for solving CT Sylvester matrix equations and DT Sylvester matrix equations. In[16,63,67,169], finite iterative algorithms were constructed for solving the second-order Sylvester matrix equation EVF^2 − AVF − CV = BW. In[47,48], Wang investigated the centrosymmetric solution to the system of matrix equations A_1 X = C_1, A_3 X B_3 = C_3. Moreover, some necessary and sufficient conditions, parameterizations of the general solution, and the maximal and minimal ranks of the general solution to the mixed Sylvester matrix equations and coupled Sylvester matrix equations have been investigated[68–71].
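To make the coupled equations (1.1.2) concrete, one simple scheme (an illustrative fixed-point iteration, not the book's algorithm; the mode matrices `A1`, `A2`, weights `Q`, and transition rates `p` below are made up) solves each mode's Lyapunov equation with the coupling terms frozen at the previous iterates:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Two-mode jump system: A_i^T P_i + P_i A_i + Q_i + sum_j p_ij P_j = 0.
A = [np.array([[-3.0, 1.0], [0.0, -4.0]]),
     np.array([[-5.0, 0.0], [1.0, -3.0]])]     # stable mode matrices (assumed)
Q = [np.eye(2), np.eye(2)]
p = np.array([[-0.5, 0.5], [0.4, -0.4]])        # Markov transition-rate matrix

P = [np.zeros((2, 2)), np.zeros((2, 2))]
for _ in range(200):
    # Freeze the coupling sum at the old iterates, then solve
    # A_i^T P + P A_i = -(Q_i + sum_j p_ij P_j_old) for each mode.
    P = [solve_continuous_lyapunov(
            A[i].T, -(Q[i] + sum(p[i, j] * P[j] for j in range(2))))
         for i in range(2)]

for i in range(2):
    res = A[i].T @ P[i] + P[i] @ A[i] + Q[i] + sum(p[i, j] * P[j] for j in range(2))
    print(np.linalg.norm(res))  # should shrink to ~ 0 at convergence
```

With strongly stable A_i and moderate coupling rates the map is a contraction, so the iterates converge to the solution of (1.1.2).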


Generalized Sylvester matrix equations, quaternion matrix equations, and operator matrix equations have also been studied[72–75]. By applying the so-called generalized Kronecker map, which has some elegant properties, Wu et al.[77] discussed the closed-form solution to the generalized Sylvester-conjugate matrix equation. Zhou and Duan[80] proposed a gradient-based iterative algorithm for solving general coupled Sylvester matrix equations arising in linear systems theory. By using the so-called generalized Sylvester mapping, right coprime factorization, and the Bezout identity associated with a certain polynomial matrix, Zhou et al.[81] investigated the problem of parameterizing all solutions to the polynomial Diophantine matrix equation and the generalized Sylvester matrix equation; it is shown that the solutions can be parameterized as soon as two pairs of polynomial matrices satisfying the right coprime factorization and the Bezout identity are obtained. Based on the conjugate gradient searching principle and its dual form, Li et al.[21] presented two algorithms for solving the minimal norm least squares solution of the general linear equation Σ_{i=1}^{r} A_i X B_i + Σ_{j=1}^{s} C_j X^T D_j = E. The first algorithm minimizes the spectral radius of the iteration matrix, and an explicit expression for the optimal step size is obtained; the second minimizes the square sum of the F-norms of the error matrices produced by the algorithm. The solvability, existence of a unique solution, closed-form solution, and numerical solution of the matrix equation X = A f(X) B + C with f(X) = X^T, f(X) = X̄, and f(X) = X^H, where X is the unknown matrix, were investigated[80]. In[22], the following matrix equation

Σ_{i=1}^{r} A_i X B_i + Σ_{j=1}^{s} C_j X^T D_j = E,  (1.1.5)

was considered, where A_i, B_i, C_j, D_j (i = 1, …, r; j = 1, …, s) and E are known constant matrices of appropriate dimensions and X is the matrix to be determined. In[67], the solution of the generalized Sylvester-transpose matrix equation AXB + CX^T D = E was obtained by an iterative algorithm. The Moore–Penrose generalized inverse was used in[24] to study the matrix equation AX + X^T B = C, and explicit solutions to this matrix equation were given.
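The dependence on both X and X^T in AXB + CX^T D = E can be handled with the commutation matrix K_n, which satisfies vec(X^T) = K_n vec(X). The sketch below (a dense direct solver on random made-up data, purely illustrative, not the algorithms of [21, 22, 67]) solves a small instance:

```python
import numpy as np

def commutation_matrix(n):
    # K maps vec(X) to vec(X^T) for an n-by-n X (column-major vec).
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[i * n + j, j * n + i] = 1.0
    return K

rng = np.random.default_rng(1)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
X_true = rng.standard_normal((n, n))
E = A @ X_true @ B + C @ X_true.T @ D     # consistent right-hand side

# vec(AXB) = (B^T kron A) vec(X);  vec(C X^T D) = (D^T kron C) K vec(X).
K = commutation_matrix(n)
M = np.kron(B.T, A) + np.kron(D.T, C) @ K
x, *_ = np.linalg.lstsq(M, E.flatten(order="F"), rcond=None)
X = x.reshape((n, n), order="F")

print(np.linalg.norm(A @ X @ B + C @ X.T @ D - E))  # residual, ~ 0
```

This direct approach costs O(n^6) and is only practical for small n, which is why the iterative algorithms discussed in this book matter.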

1.2 The Second Part: MCGLS Iterative Algorithm to Linear Matrix Equation

In chapter 4, we present the MCGLS iterative algorithm for the linear conjugate matrix equation. Many problems in mathematics, physics, and engineering lead to solving linear matrix equations. For instance, a digital filter can be characterized by the state variable equations

x(n + 1) = A x(n) + b u(n),  (1.2.1)


and

y(n) = c x(n) + d u(n).  (1.2.2)

The solutions K and W of the Lyapunov matrix equations K ¼ AKAT þ bbT ;

ð1:2:3Þ

W ¼ AT WA þ cT c;

ð1:2:4Þ

and

can analyzed the quantization noise generated by a digital filter[83]. The Lyapunov and (generalized) Sylvester matrix equations appear in robust control[84], neural network[85] and singular system control[86]. It can be shown that certain control problem, such as pole/eigenstructure assignment and observe design[5,87] of (1.2.1), are closely related with the following generalized Sylvester matrix equation p X k¼1

Ak XB k þ

q X

C j YD j ¼ E:

ð1:2:5Þ

j¼1
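Equation (1.2.3) is a discrete Lyapunov (Stein) equation, which standard library solvers handle directly; the sketch below (a made-up stable toy filter, purely illustrative) computes K with SciPy and checks it against (1.2.3):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# A stable (spectral radius < 1) state matrix and input vector of a toy filter.
A = np.array([[0.5, 0.2],
              [0.0, 0.3]])
b = np.array([[1.0],
              [0.5]])

# solve_discrete_lyapunov(a, q) returns K satisfying K = a K a^H + q.
K = solve_discrete_lyapunov(A, b @ b.T)

print(np.linalg.norm(K - (A @ K @ A.T + b @ b.T)))  # residual, ~ 0
```

K is the controllability Gramian of the filter; its diagonal entries bound the noise power at each state.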

So far, many techniques have been implemented for obtaining solutions of various linear matrix equations[23,37,39,41,42,44,48–50,68,69]. In[37], the least squares solution of the generalized Sylvester matrix equation

AXB + CYD = E,  (1.2.6)

was studied for symmetric arrowhead matrices with the least norm. By applying the hierarchical identification principle, Ding and Chen introduced gradient-based iterative algorithms to solve (coupled) generalized Sylvester matrix equations and nonlinear systems[11,13,14,91,92]. Recently, in[89,90,93,94], some efficient extended conjugate gradient methods were proposed for finding the least squares solutions of several linear matrix equations. Very recently, the matrix forms of the CGS, BiCOR, CORS, Bi-CGSTAB, and finite iterative methods were investigated for solving various linear matrix equations[78,95–98,101–104]. Based on the ideas of[37,41,42,77,78,92–98,101–104], this chapter focuses on an MCGLS algorithm for computing the least squares solution of the linear conjugate matrix equation.

In chapter 5, based on the theory of chapter 4, we present the MCGLS iterative algorithm for solving the generalized Sylvester conjugate transpose matrix equation. This equation is related to linear systems whose state evolution depends on both the state and its conjugate. Several reasons can be found to study this class of linear systems; for instance, they are naturally encountered in linear dynamical quantum systems theory and in real-valued linear systems of lower dimensions. Therefore, this kind of matrix equation has many applications in fundamental problems such as state response, controllability, observability, stability, pole assignment, linear quadratic regulation, and state observer design[3,84,105–112]. So far, many techniques have been implemented for obtaining solutions to various linear matrix equations[113–119].


Recently, in[11,29,90,93], some efficient extended conjugate gradient methods were proposed for finding the least squares solutions of several linear matrix equations. Based on the ideas of[11,48–50,68,69,77,78,89,90,93–98], this chapter is concerned with the least squares solution of the generalized Sylvester conjugate transpose matrix equation.

In chapter 6, based on chapters 4 and 5, we consider the least squares solutions of coupled linear operator systems by using the MCGLS iterative algorithm. Least squares problems have wide applications in particle physics and geology, digital image and signal processing, the inverse Sturm–Liouville problem, inverse problems of vibration theory, and control theory[105–109]. As is well known, matrix equations often arise in control theory, system theory, and stability analysis[83,117–119], and much attention has been paid to solving them, so various methods have been established. The hierarchical identification principle was presented by Ding and Chen[10,11,13,14]. In order to solve coupled matrix equations, gradient-based iterative algorithms were established by combining the hierarchical identification principle with the gradient iterative method for simple matrix equations[22,120,121]. For more references, one can refer to[4,44,48–50,67–69,88,115–119,122–124]. To improve on previous results, Zhou developed a weighted least squares iteration for solving the coupled matrix equation in[4]. Another important method for the least squares problem of matrix equations is the CGLS method, which originates from the conjugate gradient method and has become increasingly popular. A reason to present the CGLS iterative method is that the residual and the relative error of our iterative algorithm are smaller than those of the LSQR-M iterative algorithm[90,98,125].
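The flavor of a CGLS-type iteration on a matrix equation can be sketched for the single least squares problem min_X ‖AXB − E‖_F, using the operator L(X) = AXB and its adjoint L*(R) = A^T R B^T. This is a textbook CGLS written in matrix form with made-up data, not the book's MCGLS algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, q = 5, 3, 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
E = A @ rng.standard_normal((n, p)) @ B   # consistent right-hand side

# CGLS on the normal equations of L(X) = A X B, adjoint L*(R) = A^T R B^T.
X = np.zeros((n, p))
R = E - A @ X @ B                 # residual in the data space
S = A.T @ R @ B.T                 # L*(R), the gradient-related matrix
P = S.copy()
gamma = np.sum(S * S)
for _ in range(100):
    Q = A @ P @ B
    alpha = gamma / np.sum(Q * Q)
    X += alpha * P
    R -= alpha * Q
    S = A.T @ R @ B.T
    gamma_new = np.sum(S * S)
    P = S + (gamma_new / gamma) * P
    gamma = gamma_new
    if gamma < 1e-28:             # L*(R) ~ 0: normal equations satisfied
        break

print(np.linalg.norm(A @ X @ B - E))  # ~ 0 for a consistent equation
```

Only products with A, B and their transposes are needed, which is what makes this family of methods attractive for large structured equations.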

1.3 The Third Part: Explicit Solutions to Linear Matrix Equation

In chapter 7, we consider the explicit solution to the nonhomogeneous Yakubovich matrix equation. This equation plays an important role in output regulator design for time-invariant linear systems[141] and in eigenstructure assignment for second-order linear systems[130,131]. Obviously, the well-known Yakubovich matrix equation

X − AXB = CY,  (1.3.1)

and the generalized Sylvester matrix equation

AX − EXF = CY,  (1.3.2)

are special cases of the nonhomogeneous Yakubovich matrix equation. If the coefficient matrices satisfy certain relations, the generalized Sylvester matrix equation can be transformed into the Yakubovich matrix equation, and the solutions of the two matrix equations are equivalent. As is well known,


the generalized discrete Sylvester matrix equation is closely related to many problems in conventional linear control system theory, such as pole/eigenstructure assignment design, Luenberger-type observer design, and robust fault detection[132,136,137]. Finding the complete parametric solutions to the matrix equation is of great importance, because many design problems, such as robustness in control system design, rely on the associated design freedom. Due to the above-mentioned applications, the generalized Sylvester matrix equations (1.3.1) and (1.3.2) have been studied by many authors. When the coefficient matrix F is in Jordan form, Duan[133] proposed a complete parametric solution to the generalized Sylvester matrix equation (1.3.2); however, this solution is not in a direct, explicit form but in a recursive form. Meanwhile, when the matrix F is in Jordan form and the matrix triple (E, A, B) is assumed to be R-controllable, Duan[53] proposed a complete and explicit solution by applying the right coprime factorization of the input-state transfer function (sE − A)^{-1} B. For the generalized discrete Sylvester matrix equation (1.3.1), an explicit, analytical, and complete solution was obtained in[6,47]. In[147], the proposed solution is in a very neat form and can be obtained immediately as soon as a series of matrices D_i, i = 0, 1, …, ω, is computed. In[6], the explicit solution is obtained by using the Kronecker map. The dual form of the matrix equation (1.3.1), which is called the generalized Sylvester matrix equation, has also been well studied[134,144,149]. For instance, in[144], an analytical and restriction-free solution is established when the coefficient matrix F is in Jordan form; to obtain this solution, one needs to compute a matrix inversion, carry out an orthogonal transformation, and solve a series of systems of linear equations. When the matrix F is in Jordan form, two solutions of the matrix equation in[134] are proposed.
One solution is in an iterative form; the other is in an explicit parametric form. To obtain the second, explicit parametric solution, one needs to carry out a right coprime factorization of the polynomial matrix (sI − A)^{-1} B when the eigenvalues of the Jordan matrix F are undetermined, or a series of singular value decompositions when the eigenvalues of F are known. Moreover, some necessary and sufficient conditions, parameterizations of the general solution, and the maximal and minimal ranks of the general solution to the mixed Sylvester matrix equations and coupled Sylvester matrix equations have been investigated[68–71]. The matrix equations studied in the present chapter include many linear matrix equations as special cases, such as the discrete Sylvester matrix equation, the generalized Sylvester matrix equation, and the discrete Lyapunov matrix equation[6,134,144]. The idea of the present chapter comes from the references[6,53,133,134,139,142,145–150].

In chapter 8, we consider the explicit solutions to the nonhomogeneous Yakubovich-transpose matrix equation. The matrix equations AX − XB = C and X − AXB = C play important roles in stability theory and control theory[132,140,143,151,152,165,170,171]. When A = B^T (the transpose of B), these matrix equations are the well-known Lyapunov matrix equation and Stein matrix equation. Many control problems, for example pole assignment[153–155] and eigenstructure assignment[54], are closely related to the second-order nonhomogeneous generalized Sylvester matrix equation MXF^2 + DXF + KX = BY + R, where (X, Y) is a pair of


matrices to be determined and the other matrices are known. When R = 0, it becomes the second-order homogeneous matrix equation MXF^2 + DXF + KX = BY. Other matrix equations, such as the coupled Sylvester matrix equations and the Riccati equation, have also found numerous applications in control theory; for more related introductions, see[72,156–161] and the references therein. Furthermore, another kind of transpose matrix equation, for example AX + X^T B = C, AX − X^T A^T = B, and AXB + CX^T D = E, has received much attention in the literature over the past several decades[21,22,24,120,162–164]. In these references, the necessary and sufficient conditions for the existence of solutions, the solvability, and closed-form solutions of the above transpose matrix equations are obtained by using the SVD, the GSVD, the Moore–Penrose generalized inverse method, and iterative algorithms.

In chapter 9, we consider the explicit solution to the generalized Sylvester matrix equation. In the field of linear algebra, there is another type of similarity, called consimilarity: for two complex square matrices A and B, if there exists a nonsingular complex matrix P such that A = P^{-1} B̄ P[171], then A and B are said to be consimilar. Consimilarity of complex matrices amounts to studying an antilinear operator under different bases of a complex vector space, and the consimilarity theory of complex matrices plays an important role in the research of modern quantum theory[173]. For this reason, the complex matrix equations X − AXB = CY and AX − XB = CY have been respectively extended to the complex conjugate matrix equations X − AX̄B = CY and AX̄ − XB = CY by applying consimilarity. In the quaternion field, the consimilarity of quaternion matrices is also defined[174]. Similar to the linear matrix equation AX − XB = CY, the quaternion matrix equation XB − AX = CY has been generalized to the quaternion j-conjugate matrix equation XB − AX̂ = CY by applying consimilarity over the quaternion field.

Traditionally, quaternion algebra[175,176] has been extensively used in computer graphics[177] and aerospace problems[178], due to its compact notation, moderate computational requirements, and avoidance of the singularities associated with 3 × 3 rotation matrices[179]. Consequently, many researchers pay close attention to quaternion matrix equation problems. For example, Ning and Wang[180] studied the maximal and minimal ranks of a 2 × 2 partitioned quaternion matrix. Huang[181] established some necessary and sufficient conditions for the existence of a unique solution to the quaternion matrix equation Σ_{i=0}^{k} A^i X B_i = E. By applying the complex representation of the quaternion matrix, the Moore–Penrose generalized inverse, and the Kronecker product of matrices, the least squares solution with the least norm of the quaternion j-conjugate matrix equation X − AX̂B = C (where X̂ denotes the j-conjugate of the quaternion matrix X) has been discussed[182]. Some necessary and sufficient conditions for the existence of solutions to the quaternion matrix equations AXB + CYD = E, (AX, XC) = (B, D), and AXB = C have been proposed, and the solutions derived, in the references[48,49,183]. Song et al.[160,184] have studied the explicit solutions to the quaternion j-conjugate matrix equations X − AX̂B = C and XF − AX̂ = CY, where the coefficient matrix A is in block diagonal form over the complex field.
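As a numerical illustration of handling a conjugate matrix equation (not the explicit methods of this book), the complex equation X − A X̄ B = C is real-linear in X, so it can be assembled into a real linear system by applying the map to a basis of real and imaginary parts; all matrices below are made up:

```python
import numpy as np

def L(X, A, B):
    # The real-linear (but not complex-linear) map X -> X - A conj(X) B.
    return X - A @ X.conj() @ B

rng = np.random.default_rng(3)
n = 2
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X_true = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = L(X_true, A, B)

# Build the real 2n^2 x 2n^2 matrix of L over the basis {E_k, i E_k}.
cols = []
for k in range(n * n):
    E = np.zeros(n * n); E[k] = 1.0
    E = E.reshape((n, n), order="F")
    for basis in (E, 1j * E):
        Y = L(basis, A, B)
        cols.append(np.concatenate([Y.real.flatten("F"), Y.imag.flatten("F")]))
M = np.column_stack(cols)
rhs = np.concatenate([C.real.flatten("F"), C.imag.flatten("F")])

z, *_ = np.linalg.lstsq(M, rhs, rcond=None)
# Reassemble X from the interleaved (real, imag) basis coefficients.
X = sum((z[2 * k] + 1j * z[2 * k + 1]) * np.eye(n * n)[k].reshape((n, n), order="F")
        for k in range(n * n))

print(np.linalg.norm(L(X, A, B) - C))  # residual, ~ 0
```

This "realification" doubles the problem size, which is the same trade-off behind the real-representation techniques used for the quaternion j-conjugate equations in chapter 9.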

Solutions to Linear Matrix Equations and Their Applications

In recent years, some other linear and nonlinear matrix equations have also attracted attention. By using the so-called generalized Sylvester mapping, right coprime factorization, and the Bezout identity associated with a certain polynomial matrix, Zhou et al. [81] investigated the problem of parameterizing all solutions to the polynomial Diophantine matrix equation and the generalized Sylvester matrix equation; it is shown that the solutions can be parameterized as soon as two pairs of polynomial matrices satisfy the right coprime factorization. The solvability, existence of a unique solution, closed-form solution, and numerical solution of the matrix equation X = A f(X) B + C with f(X) = X^T, f(X) = X̄ and f(X) = X^H, where X is the unknown matrix, are investigated in [80]. A complete, general, and explicit solution [81] to the generalized Sylvester matrix equation AX − XF = BY, with the coefficient matrix F in companion form, is proposed. By introducing and studying a matrix operator on complex matrices, some sufficient conditions and necessary conditions for the existence of positive definite solutions of the matrix equation X + A^H X⁻¹ A = I [185] are also proposed. In addition, Wu et al. [6, 7] constructed explicit expressions for the solutions to the matrix equations AX − X̄B = C and X − AX̄B = CY by applying the Kronecker map; it is shown that a unique solution exists if and only if AĀ and B̄B have no common eigenvalues.

In chapter 10, we consider the explicit solutions to the equations

    AX + X^T B = C    (1.3.3)

and

    AX + X^T B = CY,    (1.3.4)

where A, B, C ∈ R^{n×n} are given n × n real matrices and X, Y ∈ R^{n×n} are unknown n × n real matrices to be determined. These are named the Sylvester-transpose matrix equation and the generalized Sylvester-transpose matrix equation, respectively. If X = X^T, equations (1.3.3) and (1.3.4) reduce to the well-known Sylvester (or generalized Sylvester) matrix equation, which has many important applications in control theory; therefore, many researchers have focused on the solutions to these two equations, see [30–32] for details. Because (1.3.3) and (1.3.4) are obtained by changing the second unknown matrix of the (generalized) Sylvester matrix equation, they are called the (generalized) Sylvester-transpose matrix equations. The traditional method for solving equation (1.3.3) is to transform it into the equivalent form 𝒜x = c, where 𝒜 = I_n ⊗ A + (B^T ⊗ I_n)P(n, n), c = vec(C) and x = vec(X); the unique solution to (1.3.3) is then x = 𝒜⁻¹c [55]. Recently, many methods have been constructed for calculating explicit solutions to (1.3.3); they are briefly classified as follows. A method applying matrix decomposition was given in [189]. A method using the GSVD, the CCD, and a projection theorem was presented to discuss the least squares solution to the transpose equation A^T X B + B^T X^T A = D [162]. A method using a generalized inverse was proposed to investigate the explicit solution to equation (1.3.3) [163]. Also, some solvability conditions were provided for the equations


A^H X + X^H A = B and A^H X + X^H A = 0, and the expressions of the solutions to these two matrix equations were given [159, 190]. Necessary and sufficient conditions for the existence of a unique solution to the generalized H-Sylvester matrix equation were presented in terms of the Kronecker canonical form of the matrix pencil A + λB [191]. The solvability of the Sylvester matrix equation was settled by De Souza and Bhattacharyya [31], who demonstrated that the solution exists if and only if A and B are similar. For a more detailed introduction, see [30, 157, 163, 190–194] and the references therein. In addition, many works have been devoted to computing numerical solutions to the Sylvester-transpose equation; here we are not concerned with numerical solutions to linear transpose matrix equations, and we refer the reader to [23, 120] and the references therein. Besides the topics above on linear matrix equations, some other topics have also received attention, for instance, the solutions to the (generalized) Sylvester matrix equation over the real field, the complex field, and the quaternion field; see [1, 21, 22, 50, 145, 149, 195–201] for details. However, explicit solutions to the matrix equations (1.3.3) and (1.3.4) have not yet been given in the literature. The difficulty in solving equations (1.3.3) and (1.3.4) is that the unknown matrix appears in two forms, X and X^T, so the form of the unknown is not uniform. Inspired by [30, 50, 139, 186, 198, 199], in chapter 10 we adopt the Kronecker map and the Sylvester sum to study explicit solutions to equations (1.3.3) and (1.3.4). We propose necessary and sufficient solvability conditions for these two equations. Moreover, we offer three forms of solutions to equations (1.3.3) and (1.3.4), one of which recovers Jameson's theorem.
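The traditional vectorization route for (1.3.3) described above is easy to exercise numerically: with the commutation (vec-permutation) matrix P(n, n), the map X ↦ AX + XᵀB becomes an ordinary linear system 𝒜x = c. A minimal sketch (Python with NumPy; the small random A, B, C and the helper name `commutation` are illustrative choices, not from the book):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3

def commutation(m, n):
    # P(m, n) vec(X) = vec(X^T) for X in R^{m x n}, built from
    # elementary matrices E_ij as P = sum_ij kron(E_ij, E_ij^T).
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            E = np.zeros((m, n)); E[i, j] = 1.0
            P += np.kron(E, E.T)
    return P

A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X_true = rng.standard_normal((n, n))
C = A @ X_true + X_true.T @ B              # consistent right-hand side

# Vectorized coefficient matrix of X -> A X + X^T B.
M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n)) @ commutation(n, n)
x = np.linalg.solve(M, C.flatten(order='F'))
X = x.reshape((n, n), order='F')

assert np.allclose(A @ X + X.T @ B, C)     # X solves (1.3.3)
```

For generic data the matrix M is nonsingular, so the recovered X is the unique solution; for structured A, B it may be singular, which is exactly why the explicit solution theory of chapter 10 is of interest.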
We also establish the corresponding algorithms for the matrix equations above; in particular, two algorithms are proposed for solving equation (1.3.3) when A or B is nonsingular. As an application of the presented algorithms, we give a design for time-varying linear systems in control theory. In short, this book aims to build a working platform for readers, provide a benchmark, and serve as a cornerstone for the further study, application, and development of matrix equations. Owing to the author's limited knowledge, omissions are inevitable; readers and experts are welcome to offer their comments and suggestions.

Chapter 2
Finite Iterative Algorithm to Coupled Transpose Matrix Equations

In this chapter, we consider the following two problems for the coupled Sylvester-transpose matrix equations.

Problem 2.1. Given matrices A_{ig} ∈ R^{m_i×l_g}, B_{ig} ∈ R^{n_g×p_i}, C_{ig} ∈ R^{m_i×n_g}, D_{ig} ∈ R^{l_g×p_i}, and F_i ∈ R^{m_i×p_i}, i ∈ I[1, N], g ∈ I[1, p], find the matrix group (X_1, X_2, …, X_p) with X_g ∈ R^{l_g×n_g}, g ∈ I[1, p], such that

    ∑_{g=1}^{p} (A_{ig} X_g B_{ig} + C_{ig} X_g^T D_{ig}) = F_i,   i = 1, 2, …, N.    (2.0.1)

Problem 2.2. Let problem 2.1 be consistent, and denote its solution group set by S_r. For a given matrix group (X̂_1, X̂_2, …, X̂_p) with X̂_g ∈ R^{l_g×n_g}, g ∈ I[1, p], find (X̃_1, X̃_2, …, X̃_p) ∈ S_r with X̃_g ∈ R^{l_g×n_g}, g ∈ I[1, p], such that

    ∑_{j=1}^{p} ‖X̃_j − X̂_j‖² = min_{(X_1,…,X_p) ∈ S_r} ∑_{j=1}^{p} ‖X_j − X̂_j‖².    (2.0.2)
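In vectorized form, a nearness problem of this type is the familiar minimum-distance problem over an affine solution set: shift by the given point, compute a minimum-norm solution, and shift back. A minimal sketch (Python with NumPy), using a small random consistent linear system as a stand-in for the vectorized coupled matrix equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small consistent underdetermined system A x = b (many solutions),
# standing in for the vectorized form of (2.0.1). Illustrative only.
A = rng.standard_normal((3, 5))
x_true = rng.standard_normal(5)
b = A @ x_true

x_hat = rng.standard_normal(5)       # the given point to stay close to

# min ||x - x_hat|| over {x : A x = b} is equivalent to the minimum-norm
# solution y of A y = b - A x_hat, followed by x~ = x_hat + y
# (np.linalg.lstsq returns the minimum-norm solution).
y, *_ = np.linalg.lstsq(A, b - A @ x_hat, rcond=None)
x_tilde = x_hat + y

assert np.allclose(A @ x_tilde, b)   # x~ solves the system
```

Since y is the minimum-norm element of the solution set of the shifted system, x̃ is the solution closest to x̂; the same two-step reduction is exactly what the proof of theorem 2.1 uses for problem 2.2.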

In [21, 22], the following linear matrix equation was considered:

    ∑_{i=1}^{r} A_i X B_i + ∑_{j=1}^{s} C_j X^T D_j = E,    (2.0.3)

where A_i, B_i, C_j, D_j (i = 1, …, r; j = 1, …, s) and E are known constant matrices of appropriate dimensions and X is the matrix to be determined. In [23], the special case of (2.0.3) given by the matrix equation AXB + CX^T D = E was treated by an iterative algorithm. A still more special case of (2.0.3), the matrix equation AX + X^T B = C, was investigated by Piao et al. [24], where the Moore–Penrose generalized inverse was used to find explicit solutions to this matrix equation. Compared with the above references, in this chapter we investigate a finite iterative algorithm for the coupled matrix equations (2.1.1).

DOI: 10.1051/978-2-7598-3102-9.c002 © Science Press, EDP Sciences, 2023

2.1 Finite Iterative Algorithm and Convergence Analysis

In this section, we consider the following coupled Sylvester-transpose matrix equations

    ∑_{g=1}^{p} (A_{ig} X_g B_{ig} + C_{ig} X_g^T D_{ig}) = F_i,   i ∈ I[1, N],    (2.1.1)

where A_{ig} ∈ R^{m_i×l_g}, B_{ig} ∈ R^{n_g×p_i}, C_{ig} ∈ R^{m_i×n_g}, D_{ig} ∈ R^{l_g×p_i} and F_i ∈ R^{m_i×p_i}, i ∈ I[1, N], g ∈ I[1, p], are the given matrices, and X_g ∈ R^{l_g×n_g}, g ∈ I[1, p], are the matrices to be determined. When the matrix equations are consistent, we propose the following iterative algorithm to solve them.

Algorithm 2.1
Step 1. Input matrices A_{ig}, B_{ig}, C_{ig}, D_{ig} and F_i, i ∈ I[1, N], g ∈ I[1, p], and arbitrary initial matrices X_g(1) ∈ R^{l_g×n_g}, g ∈ I[1, p]; set k := 1 and calculate

    R_i(1) = F_i − ∑_{g=1}^{p} [A_{ig} X_g(1) B_{ig} + C_{ig} X_g^T(1) D_{ig}],   i ∈ I[1, N],
    P_g(1) = ∑_{i=1}^{N} [A_{ig}^T R_i(1) B_{ig}^T + D_{ig} R_i^T(1) C_{ig}],   g ∈ I[1, p].

Step 2. If R_i(k) = 0 for all i ∈ I[1, N], then stop; else set k := k + 1.
Step 3. Calculate

    X_g(k+1) = X_g(k) + (∑_{x=1}^{N} ‖R_x(k)‖² / ∑_{s=1}^{p} ‖P_s(k)‖²) P_g(k),   g ∈ I[1, p],

    R_i(k+1) = F_i − ∑_{g=1}^{p} [A_{ig} X_g(k+1) B_{ig} + C_{ig} X_g^T(k+1) D_{ig}]
             = R_i(k) − (∑_{x=1}^{N} ‖R_x(k)‖² / ∑_{s=1}^{p} ‖P_s(k)‖²) ∑_{g=1}^{p} [A_{ig} P_g(k) B_{ig} + C_{ig} P_g^T(k) D_{ig}],   i ∈ I[1, N],

    P_g(k+1) = ∑_{i=1}^{N} [A_{ig}^T R_i(k+1) B_{ig}^T + D_{ig} R_i^T(k+1) C_{ig}] + (∑_{x=1}^{N} ‖R_x(k+1)‖² / ∑_{x=1}^{N} ‖R_x(k)‖²) P_g(k),   g ∈ I[1, p].

Step 4. Go to step 2.

Now we introduce some properties of the algorithm above.

Lemma 2.1. Suppose that the sequences P_g(k), g ∈ I[1, p], and R_i(k), i ∈ I[1, N], are generated by algorithm 2.1; then the following relation holds:

    ∑_{i=1}^{N} tr[R_i^T(k+1) R_i(j)]
      = ∑_{i=1}^{N} tr[R_i^T(k) R_i(j)] − (∑_{x=1}^{N} ‖R_x(k)‖² / ∑_{s=1}^{p} ‖P_s(k)‖²) ∑_{g=1}^{p} tr[P_g^T(k) P_g(j)]
        + (∑_{x=1}^{N} ‖R_x(k)‖² ∑_{x=1}^{N} ‖R_x(j)‖² / (∑_{s=1}^{p} ‖P_s(k)‖² ∑_{x=1}^{N} ‖R_x(j−1)‖²)) ∑_{g=1}^{p} tr[P_g^T(k) P_g(j−1)].

Proof. Because all matrices in algorithm 2.1 are real, we can write

    ∑_{i=1}^{N} tr[R_i^T(k+1) R_i(j)]
      = ∑_{i=1}^{N} tr[(R_i(k) − α(k) ∑_{g=1}^{p} (A_{ig} P_g(k) B_{ig} + C_{ig} P_g^T(k) D_{ig}))^T R_i(j)]
      = ∑_{i=1}^{N} tr[R_i^T(k) R_i(j)] − α(k) ∑_{i=1}^{N} tr[∑_{g=1}^{p} (B_{ig}^T P_g^T(k) A_{ig}^T + D_{ig}^T P_g(k) C_{ig}^T) R_i(j)],

where, for brevity, α(k) = ∑_{x=1}^{N} ‖R_x(k)‖² / ∑_{s=1}^{p} ‖P_s(k)‖². In addition, by applying some properties of the trace of a matrix one has

    ∑_{i=1}^{N} tr[∑_{g=1}^{p} (B_{ig}^T P_g^T(k) A_{ig}^T + D_{ig}^T P_g(k) C_{ig}^T) R_i(j)]
      = ∑_{i=1}^{N} tr[∑_{g=1}^{p} (P_g^T(k) A_{ig}^T R_i(j) B_{ig}^T + P_g(k) C_{ig}^T R_i(j) D_{ig}^T)].    (2.1.2)

By changing the order of this double sum, one has from algorithm 2.1

    ∑_{g=1}^{p} tr[∑_{i=1}^{N} (P_g^T(k) A_{ig}^T R_i(j) B_{ig}^T + P_g(k) C_{ig}^T R_i(j) D_{ig}^T)]
      = ∑_{g=1}^{p} tr[P_g^T(k) ∑_{i=1}^{N} A_{ig}^T R_i(j) B_{ig}^T + P_g(k) ∑_{i=1}^{N} C_{ig}^T R_i(j) D_{ig}^T]
      = ∑_{g=1}^{p} tr[P_g^T(k) ∑_{i=1}^{N} A_{ig}^T R_i(j) B_{ig}^T + P_g^T(k) ∑_{i=1}^{N} D_{ig} R_i^T(j) C_{ig}]
      = ∑_{g=1}^{p} tr[P_g^T(k) ∑_{i=1}^{N} (A_{ig}^T R_i(j) B_{ig}^T + D_{ig} R_i^T(j) C_{ig})]
      = ∑_{g=1}^{p} tr[P_g^T(k) (P_g(j) − (∑_{x=1}^{N} ‖R_x(j)‖² / ∑_{x=1}^{N} ‖R_x(j−1)‖²) P_g(j−1))]
      = ∑_{g=1}^{p} tr[P_g^T(k) P_g(j)] − (∑_{x=1}^{N} ‖R_x(j)‖² / ∑_{x=1}^{N} ‖R_x(j−1)‖²) ∑_{g=1}^{p} tr[P_g^T(k) P_g(j−1)].

Combining the preceding two relations gives the conclusion of this lemma. □
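The regrouping step in the proof relies only on the trace identities tr(UV) = tr(VU) and tr(U) = tr(Uᵀ). A quick numerical check of the identity for a single (i, g) term, with randomly chosen matrices (Python with NumPy; the concrete dimension values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensions as in (2.1.1) for one (i, g) pair: A is m x l, B is n x p,
# C is m x n, D is l x p, P (search direction) is l x n, R (residual) is m x p.
m, l, n, p = 3, 4, 5, 2
A = rng.standard_normal((m, l))
B = rng.standard_normal((n, p))
C = rng.standard_normal((m, n))
D = rng.standard_normal((l, p))
P = rng.standard_normal((l, n))
R = rng.standard_normal((m, p))

# Left side: the term produced by expanding the residual update.
lhs = np.trace((B.T @ P.T @ A.T + D.T @ P @ C.T) @ R)
# Right side: the regrouped form tr[P^T (A^T R B^T + D R^T C)] used to
# recognise the search-direction recursion.
rhs = np.trace(P.T @ (A.T @ R @ B.T + D @ R.T @ C))

assert np.isclose(lhs, rhs)
```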

Lemma 2.2. Suppose that the sequences {P_g(k)}, g ∈ I[1, p], and {R_i(k)}, i ∈ I[1, N], are generated by algorithm 2.1. If there exists an integer u ≥ 1 such that ∑_{i=1}^{N} ‖R_i(k)‖² ≠ 0 for all k = 1, 2, …, u, then

    ∑_{i=1}^{N} tr[R_i^T(k) R_i(j)] = 0,   ∑_{g=1}^{p} tr[P_g^T(j) P_g(k)] = 0,    (2.1.3)

for all j, k = 1, …, u, j ≠ k.

Proof. It is known that

    tr[R_i^T(k) R_i(j)] = tr[R_i^T(j) R_i(k)],   tr[P_g^T(k) P_g(j)] = tr[P_g^T(j) P_g(k)],

so we only need to show that

    ∑_{i=1}^{N} tr[R_i^T(k) R_i(j)] = 0,   ∑_{g=1}^{p} tr[P_g^T(k) P_g(j)] = 0,    (2.1.4)

for 1 ≤ j < k ≤ u.


We apply induction to prove conclusion (2.1.4), in two steps.

Step 1. For j = 1, by using lemma 2.1 we obtain

    ∑_{i=1}^{N} tr[R_i^T(2) R_i(1)]
      = ∑_{i=1}^{N} tr[R_i^T(1) R_i(1)] − (∑_{x=1}^{N} ‖R_x(1)‖² / ∑_{s=1}^{p} ‖P_s(1)‖²) ∑_{g=1}^{p} tr[P_g^T(1) P_g(1)]
      = ∑_{i=1}^{N} ‖R_i(1)‖² − (∑_{x=1}^{N} ‖R_x(1)‖² / ∑_{s=1}^{p} ‖P_s(1)‖²) ∑_{g=1}^{p} ‖P_g(1)‖² = 0.

And from algorithm 2.1 one also has

    ∑_{g=1}^{p} tr[P_g^T(2) P_g(1)]
      = ∑_{g=1}^{p} tr[(∑_{i=1}^{N} (A_{ig}^T R_i(2) B_{ig}^T + D_{ig} R_i^T(2) C_{ig}))^T P_g(1)] + (∑_{x=1}^{N} ‖R_x(2)‖² / ∑_{x=1}^{N} ‖R_x(1)‖²) ∑_{g=1}^{p} tr[P_g^T(1) P_g(1)]
      = ∑_{g=1}^{p} ∑_{i=1}^{N} tr[R_i^T(2)(A_{ig} P_g(1) B_{ig} + C_{ig} P_g^T(1) D_{ig})] + (∑_{x=1}^{N} ‖R_x(2)‖² / ∑_{x=1}^{N} ‖R_x(1)‖²) ∑_{g=1}^{p} ‖P_g(1)‖²
      = (∑_{s=1}^{p} ‖P_s(1)‖² / ∑_{x=1}^{N} ‖R_x(1)‖²) ∑_{i=1}^{N} tr[R_i^T(2)(R_i(1) − R_i(2))] + (∑_{x=1}^{N} ‖R_x(2)‖² / ∑_{x=1}^{N} ‖R_x(1)‖²) ∑_{g=1}^{p} ‖P_g(1)‖²
      = −(∑_{s=1}^{p} ‖P_s(1)‖² / ∑_{x=1}^{N} ‖R_x(1)‖²) ∑_{i=1}^{N} ‖R_i(2)‖² + (∑_{x=1}^{N} ‖R_x(2)‖² / ∑_{x=1}^{N} ‖R_x(1)‖²) ∑_{g=1}^{p} ‖P_g(1)‖² = 0,

where the last step uses ∑_{i=1}^{N} tr[R_i^T(2) R_i(1)] = 0 from the first computation.

Step 2. In this step, it is assumed that

    ∑_{i=1}^{N} tr[R_i^T(v) R_i(j)] = 0,   ∑_{g=1}^{p} tr[P_g^T(v) P_g(j)] = 0,   j = 1, 2, …, v − 1.    (2.1.5)

Now we prove

    ∑_{i=1}^{N} tr[R_i^T(v+1) R_i(j)] = 0,   ∑_{g=1}^{p} tr[P_g^T(v+1) P_g(j)] = 0,    (2.1.6)

for j = 1, 2, …, v. By applying lemma 2.1 and step 1, for j = 1, 2, …, v − 1 it follows from the induction assumption (2.1.5) that

    ∑_{i=1}^{N} tr[R_i^T(v+1) R_i(j)]
      = ∑_{i=1}^{N} tr[R_i^T(v) R_i(j)] − (∑_{x=1}^{N} ‖R_x(v)‖² / ∑_{s=1}^{p} ‖P_s(v)‖²) ∑_{g=1}^{p} tr[P_g^T(v) P_g(j)]
        + (∑_{x=1}^{N} ‖R_x(v)‖² ∑_{x=1}^{N} ‖R_x(j)‖² / (∑_{s=1}^{p} ‖P_s(v)‖² ∑_{x=1}^{N} ‖R_x(j−1)‖²)) ∑_{g=1}^{p} tr[P_g^T(v) P_g(j−1)] = 0.

In addition, it can be obtained from lemma 2.1 and the induction assumption (2.1.5) that

    ∑_{i=1}^{N} tr[R_i^T(v+1) R_i(v)]
      = ∑_{i=1}^{N} ‖R_i(v)‖² − (∑_{x=1}^{N} ‖R_x(v)‖² / ∑_{s=1}^{p} ‖P_s(v)‖²) ∑_{g=1}^{p} ‖P_g(v)‖²
        + ((∑_{x=1}^{N} ‖R_x(v)‖²)² / (∑_{s=1}^{p} ‖P_s(v)‖² ∑_{x=1}^{N} ‖R_x(v−1)‖²)) ∑_{g=1}^{p} tr[P_g^T(v) P_g(v−1)]
      = ((∑_{x=1}^{N} ‖R_x(v)‖²)² / (∑_{s=1}^{p} ‖P_s(v)‖² ∑_{x=1}^{N} ‖R_x(v−1)‖²)) ∑_{g=1}^{p} tr[P_g^T(v) P_g(v−1)] = 0.


This shows that the first relation in (2.1.6) holds for j = 1, 2, …, v. From algorithm 2.1 it can also be obtained that

    ∑_{g=1}^{p} tr[P_g^T(v+1) P_g(j)]
      = ∑_{g=1}^{p} tr[(∑_{i=1}^{N} (A_{ig}^T R_i(v+1) B_{ig}^T + D_{ig} R_i^T(v+1) C_{ig}))^T P_g(j)]
        + (∑_{x=1}^{N} ‖R_x(v+1)‖² / ∑_{x=1}^{N} ‖R_x(v)‖²) ∑_{g=1}^{p} tr[P_g^T(v) P_g(j)]
      = ∑_{g=1}^{p} ∑_{i=1}^{N} tr[R_i^T(v+1)(A_{ig} P_g(j) B_{ig} + C_{ig} P_g^T(j) D_{ig})]
        + (∑_{x=1}^{N} ‖R_x(v+1)‖² / ∑_{x=1}^{N} ‖R_x(v)‖²) ∑_{g=1}^{p} tr[P_g^T(v) P_g(j)]
      = (∑_{s=1}^{p} ‖P_s(j)‖² / ∑_{x=1}^{N} ‖R_x(j)‖²) ∑_{i=1}^{N} tr[R_i^T(v+1)(R_i(j) − R_i(j+1))]
        + (∑_{x=1}^{N} ‖R_x(v+1)‖² / ∑_{x=1}^{N} ‖R_x(v)‖²) ∑_{g=1}^{p} tr[P_g^T(v) P_g(j)].

From this relation, together with the induction assumption (2.1.5) and the first relation in (2.1.6) already proved, for j = 1, 2, …, v − 1 we obtain

    ∑_{g=1}^{p} tr[P_g^T(v+1) P_g(j)] = 0.

For j = v, from the above relation we have

    ∑_{g=1}^{p} tr[P_g^T(v+1) P_g(v)]
      = (∑_{s=1}^{p} ‖P_s(v)‖² / ∑_{x=1}^{N} ‖R_x(v)‖²) (∑_{i=1}^{N} tr[R_i^T(v+1) R_i(v)] − ∑_{i=1}^{N} tr[R_i^T(v+1) R_i(v+1)])
        + (∑_{x=1}^{N} ‖R_x(v+1)‖² / ∑_{x=1}^{N} ‖R_x(v)‖²) ∑_{g=1}^{p} tr[P_g^T(v) P_g(v)]
      = −(∑_{s=1}^{p} ‖P_s(v)‖² / ∑_{x=1}^{N} ‖R_x(v)‖²) ∑_{i=1}^{N} ‖R_i(v+1)‖²
        + (∑_{x=1}^{N} ‖R_x(v+1)‖² / ∑_{x=1}^{N} ‖R_x(v)‖²) ∑_{g=1}^{p} ‖P_g(v)‖² = 0.

Thus the second relation in (2.1.6) holds for j = 1, 2, …, v. From steps 1 and 2, the conclusion is immediately obtained by the principle of induction. □

Lemma 2.3. Assume that the coupled Sylvester-transpose matrix equations (2.1.1) are consistent and let (X_1^*, X_2^*, …, X_p^*) be a solution group. Then, for any initial matrices X_g(1), g ∈ I[1, p], the sequences {X_g(k)}, {P_g(k)}, g ∈ I[1, p], and {R_i(k)}, i ∈ I[1, N], generated by algorithm 2.1 satisfy

    ∑_{g=1}^{p} tr[P_g^T(k)(X_g^* − X_g(k))] = ∑_{i=1}^{N} ‖R_i(k)‖²,   k = 1, 2, ….    (2.1.7)

Proof. We prove the conclusion by induction. For k = 1, we have

    ∑_{g=1}^{p} tr[P_g^T(1)(X_g^* − X_g(1))]
      = ∑_{g=1}^{p} tr[(∑_{i=1}^{N} (B_{ig} R_i^T(1) A_{ig} + C_{ig}^T R_i(1) D_{ig}^T))(X_g^* − X_g(1))]
      = ∑_{g=1}^{p} ∑_{i=1}^{N} tr[R_i^T(1)(A_{ig}(X_g^* − X_g(1)) B_{ig} + C_{ig}(X_g^* − X_g(1))^T D_{ig})]
      = ∑_{i=1}^{N} tr[R_i^T(1)(F_i − ∑_{g=1}^{p} (A_{ig} X_g(1) B_{ig} + C_{ig} X_g^T(1) D_{ig}))]
      = ∑_{i=1}^{N} tr[R_i^T(1) R_i(1)] = ∑_{i=1}^{N} ‖R_i(1)‖²,

where the third equality uses ∑_{g=1}^{p} (A_{ig} X_g^* B_{ig} + C_{ig}(X_g^*)^T D_{ig}) = F_i.

Now assume that the conclusion (2.1.7) holds for k = n, i.e.

    ∑_{g=1}^{p} tr[P_g^T(n)(X_g^* − X_g(n))] = ∑_{i=1}^{N} ‖R_i(n)‖².

It follows from algorithm 2.1 that

    ∑_{g=1}^{p} tr[P_g^T(n+1)(X_g^* − X_g(n+1))]
      = ∑_{g=1}^{p} tr[(∑_{i=1}^{N} (B_{ig} R_i^T(n+1) A_{ig} + C_{ig}^T R_i(n+1) D_{ig}^T))(X_g^* − X_g(n+1))]
        + (∑_{x=1}^{N} ‖R_x(n+1)‖² / ∑_{x=1}^{N} ‖R_x(n)‖²) ∑_{g=1}^{p} tr[P_g^T(n)(X_g^* − X_g(n+1))].

In addition, proceeding as in the case k = 1 we have

    ∑_{g=1}^{p} tr[(∑_{i=1}^{N} (B_{ig} R_i^T(n+1) A_{ig} + C_{ig}^T R_i(n+1) D_{ig}^T))(X_g^* − X_g(n+1))]
      = ∑_{i=1}^{N} tr[R_i^T(n+1)(F_i − ∑_{g=1}^{p} (A_{ig} X_g(n+1) B_{ig} + C_{ig} X_g^T(n+1) D_{ig}))]
      = ∑_{i=1}^{N} ‖R_i(n+1)‖².

By the induction assumption and algorithm 2.1 we can also obtain

    ∑_{g=1}^{p} tr[P_g^T(n)(X_g^* − X_g(n+1))]
      = ∑_{g=1}^{p} tr[P_g^T(n)(X_g^* − X_g(n))] − (∑_{x=1}^{N} ‖R_x(n)‖² / ∑_{s=1}^{p} ‖P_s(n)‖²) ∑_{g=1}^{p} tr[P_g^T(n) P_g(n)]
      = ∑_{i=1}^{N} ‖R_i(n)‖² − ∑_{x=1}^{N} ‖R_x(n)‖² = 0.

Thus we have

    ∑_{g=1}^{p} tr[P_g^T(n+1)(X_g^* − X_g(n+1))] = ∑_{i=1}^{N} ‖R_i(n+1)‖²,

which shows that (2.1.7) holds for k = n + 1. This completes the proof. □
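For a single equation (N = p = 1), the identity (2.1.7) at k = 1 is easy to check numerically: build a consistent A X B + C XᵀD = F, form R(1) and P(1) as in algorithm 2.1, and compare both sides. A small sketch (Python with NumPy; random data and a zero initial guess, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# One equation A X B + C X^T D = F with a known solution X_star:
# X is l x n, so A is m x l, B is n x p, C is m x n, D is l x p.
m, l, n, p = 3, 4, 4, 3
A = rng.standard_normal((m, l))
B = rng.standard_normal((n, p))
C = rng.standard_normal((m, n))
D = rng.standard_normal((l, p))
X_star = rng.standard_normal((l, n))
F = A @ X_star @ B + C @ X_star.T @ D      # consistent by construction

X1 = np.zeros((l, n))                       # initial guess X(1) = 0
R1 = F - (A @ X1 @ B + C @ X1.T @ D)        # residual R(1)
P1 = A.T @ R1 @ B.T + D @ R1.T @ C          # direction P(1)

lhs = np.trace(P1.T @ (X_star - X1))        # left side of (2.1.7), k = 1
rhs = np.linalg.norm(R1, 'fro') ** 2        # right side of (2.1.7), k = 1
assert np.isclose(lhs, rhs)
```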

Lemma 2.4 [23]. Let P(m, n) ∈ R^{mn×mn} be the square matrix partitioned into m × n sub-matrices such that the (i, j)-th sub-matrix has a 1 in its (j, i)-th position and zeros elsewhere, i.e.

    P(m, n) = ∑_{i=1}^{m} ∑_{j=1}^{n} E_{ij} ⊗ E_{ij}^T,    (2.1.8)

where E_{ij} = e_i e_j^T is called an elementary matrix, e_i and e_j being columns of I_m and I_n of orders m × 1 and n × 1, respectively. Using this definition, we have

    P(m, n) vec(X^T) = vec(X),   P(m, n) P(n, m) = I_{mn},
    P^{-1}(m, n) = P^T(m, n) = P(n, m).
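The properties of lemma 2.4 are easy to verify numerically by building P(m, n) directly from its definition (2.1.8). A minimal sketch (Python with NumPy; `vec` denotes column-stacking, and the dimension values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def vec(M):
    # Column-stacking vec operator.
    return M.flatten(order='F')

def P(m, n):
    # P(m, n) = sum_{i,j} kron(E_ij, E_ij^T) with E_ij = e_i e_j^T (m x n),
    # exactly as in (2.1.8).
    out = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            E = np.zeros((m, n))
            E[i, j] = 1.0
            out += np.kron(E, E.T)
    return out

m, n = 2, 3
X = rng.standard_normal((n, m))                       # X is n x m, X^T is m x n

assert np.allclose(P(m, n) @ vec(X.T), vec(X))        # P(m,n) vec(X^T) = vec(X)
assert np.allclose(P(m, n) @ P(n, m), np.eye(m * n))  # P(m,n) P(n,m) = I_mn
assert np.allclose(P(m, n).T, P(n, m))                # P^T(m,n) = P(n,m)
```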

Lemma 2.5 [28]. Suppose that the consistent system of linear equations Ax = b has a solution x* ∈ R(A^H); then x* is the unique least norm solution of the system of linear equations.

Theorem 2.1. Assume that the coupled Sylvester-transpose matrix equations (2.1.1) are consistent. Then, for any initial matrices (X_1(1), X_2(1), …, X_p(1)), the sequences X_i(k), i ∈ I[1, p], generated by algorithm 2.1 converge to a solution group within finitely many iterative steps in the absence of roundoff errors. Furthermore, if we take the initial matrices

    X_i(1) = ∑_{g=1}^{N} (A_{gi}^T E_g B_{gi}^T + D_{gi} E_g^T C_{gi}),   i = 1, 2, …, p,    (2.1.9)

where E_g, g = 1, 2, …, N, are arbitrary matrices, or especially X_i(1) = 0, i = 1, 2, …, p, then the solution group (X_1^*, X_2^*, …, X_p^*) generated by algorithm 2.1 is the least Frobenius norm solution group of the coupled matrix equations (2.1.1).

Proof. Firstly, in the space R^{m_1×p_1} × R^{m_2×p_2} × ⋯ × R^{m_N×p_N} we define an inner product as

    ⟨(A_1, A_2, …, A_N), (B_1, B_2, …, B_N)⟩ = ∑_{i=1}^{N} tr(B_i^T A_i),

for (A_1, A_2, …, A_N), (B_1, B_2, …, B_N) ∈ R^{m_1×p_1} × R^{m_2×p_2} × ⋯ × R^{m_N×p_N}. If ∑_{i=1}^{N} ‖R_i(k)‖² ≠ 0 for k = 1, 2, …, q, where q = ∑_{i=1}^{N} m_i p_i, then from the previous lemmas we get ∑_{j=1}^{p} ‖P_j(k)‖² ≠ 0, and we can compute R_i(q+1) and (X_1(q+1), X_2(q+1), …, X_p(q+1)). From lemma 2.2 it is not difficult to get

    ∑_{i=1}^{N} tr(R_i^T(q+1) R_i(j)) = 0,   j = 1, 2, …, q,

and

    ∑_{i=1}^{N} tr(R_i^T(m) R_i(j)) = 0,   m, j = 1, 2, …, q,  m ≠ j.

Then (R_1(k), R_2(k), …, R_N(k)), k = 1, 2, …, q, is an orthogonal set of nonzero elements of the inner product space defined above, whose dimension is q; hence (R_1(q+1), R_2(q+1), …, R_N(q+1)), being orthogonal to all of them, must vanish. It follows from (R_1(q+1), …, R_N(q+1)) = 0 that (X_1(q+1), X_2(q+1), …, X_p(q+1)) is a solution group of problem 2.1. Therefore, when problem 2.1 is consistent, a solution group of problem 2.1 can be obtained within finitely many iterative steps. □


Now let E_g be arbitrary matrices for g = 1, 2, …, N. Using vec(AXB) = (B^T ⊗ A) vec(X) and lemma 2.4, we can write

    [vec(∑_{g=1}^{N} (A_{g1}^T E_g B_{g1}^T + D_{g1} E_g^T C_{g1})); vec(∑_{g=1}^{N} (A_{g2}^T E_g B_{g2}^T + D_{g2} E_g^T C_{g2})); … ; vec(∑_{g=1}^{N} (A_{gp}^T E_g B_{gp}^T + D_{gp} E_g^T C_{gp}))]
      = W^T [vec(E_1); vec(E_2); …; vec(E_N)] ∈ R(W^T),

where W is the N × p block matrix with blocks

    W_{gs} = B_{gs}^T ⊗ A_{gs} + P^T(m_g, p_g)(C_{gs} ⊗ D_{gs}^T),   g = 1, …, N,  s = 1, …, p,

and R(·) denotes the column space.

If we consider the initial matrices

    X_i(1) = ∑_{g=1}^{N} (A_{gi}^T E_g B_{gi}^T + D_{gi} E_g^T C_{gi}),   i ∈ I[1, p],

where E_g, g = 1, 2, …, N, are arbitrary matrices, then all X_i(k), i = 1, 2, …, p, generated by algorithm 2.1 satisfy

    [vec(X_1(k)); vec(X_2(k)); …; vec(X_p(k))] ∈ R(W^T),

with W the block matrix introduced above. Hence, with the initial matrix group (X_1(1), X_2(1), …, X_p(1)) chosen in this way,

according to lemma 2.5, the solution group (X_1^*, X_2^*, …, X_p^*) obtained by algorithm 2.1 is the least Frobenius norm solution group.

The proof for problem 2.2 is given as follows. When the matrix equations (2.1.1) are consistent, the solution set S_r of the coupled matrix equations (2.1.1) is non-empty. For a given matrix group (X̂_1, X̂_2, …, X̂_p) with X̂_j ∈ R^{l_j×n_j}, j = 1, 2, …, p, it is not difficult to see that

    ∑_{g=1}^{p} (A_{ig} X_g B_{ig} + C_{ig} X_g^T D_{ig}) = F_i,   i = 1, 2, …, N,

    ⟺   ∑_{g=1}^{p} (A_{ig}(X_g − X̂_g) B_{ig} + C_{ig}(X_g − X̂_g)^T D_{ig}) = F_i − ∑_{g=1}^{p} (A_{ig} X̂_g B_{ig} + C_{ig} X̂_g^T D_{ig}),   i = 1, 2, …, N.

Define X̄_g = X_g − X̂_g and F̄_i = F_i − ∑_{g=1}^{p} (A_{ig} X̂_g B_{ig} + C_{ig} X̂_g^T D_{ig}) for i = 1, 2, …, N. Then the matrix nearness problem 2.2 is equivalent to first finding the least Frobenius norm solution group (X̄_1^*, X̄_2^*, …, X̄_p^*) of the new coupled Sylvester-transpose matrix equations

    ∑_{g=1}^{p} (A_{ig} X̄_g B_{ig} + C_{ig} X̄_g^T D_{ig}) = F̄_i,   i = 1, 2, …, N,

which can be obtained by using algorithm 2.1 with the initial matrices

    X̄_i(1) = ∑_{g=1}^{N} (A_{gi}^T E_g B_{gi}^T + D_{gi} E_g^T C_{gi}),   i ∈ I[1, p],

where E_g are arbitrary matrices, or especially X̄_i(1) = 0 for i = 1, 2, …, p. The matrix solution group of the matrix nearness problem 2.2 can then be represented as (X̃_1, X̃_2, …, X̃_p) with

    X̃_g = X̄_g^* + X̂_g,   g = 1, 2, …, p.

Thus, the proof of problem 2.2 is finished. □

Numerical Example

In this section, we report a numerical example to test the proposed iterative method. Example 2.1. Consider the following coupled Sylvester-transpose matrix equations  A11 XB 11 þ C 11 X T D 11 þ A12 YB 12 þ C 12 Y T D 12 ¼ E 1 ; ð2:2:1Þ A21 XB 21 þ C 21 X T D 21 þ C 22 Y T D22 ¼ E 2 ; with the following 0 1 2 B 2 2 A11 ¼ B @ 2 1 1 2 0

A12

1 B2 ¼B @3 2 0

B 21

4 B4 ¼B @2 4

parametric matrices 1 0 1 1 1 B5 3 1C C; B ¼ B 2 2 A 11 @ 1 5 1 2

2 2 1 6

4 2 2 3

2 6 1 5

1 0 6 1 B2 1C C; A ¼ B 6 A 21 @ 1 2 1

1 2 5 1

3 1 3 2

2 2 3 2

1 0 2 0 B0 1C C; C ¼ B 1 A 11 @ 6 0 2

2 5 1 2

1 4 1 2

1 0 3 21 B 2 1C C; C ¼ B 4 A 21 @ 2 1 11 1 0 1 1 B 5 4C C; D ¼ B 1 A 21 @ 1 0 2

1 2 3 1 1 3 7 4

1 0 1 2 B 2 1C C; D ¼ B 2 A 11 @ 1 1 2

2 2 13 32

11 3 2 5

1 1 1C C; 2A 1

22 2 1 6

1 12 53 4 1 C C; 1 4 A 2 1

2 2 1 6

1 11 3 4 1 C C; 0 0 A 2 8

Finite Iterative Algorithm to Coupled Transpose Matrix Equations 0

B 12

2 B4 ¼B @4 1

1 3 2 2

0

1 0 1 0 B0 6C C; C ¼ B 1 A 22 @ 4 0 0

3 2 3 3

1889 B 543 E1 ¼ B @ 344 387

1 0 1 3 B 21 1C C; D ¼ B 1 A 22 @ 1 1 2

1 0 1384 1269 360 B 1201 513 507 C C; E ¼ B 671 143 A 2 @ 527 698 354 304

359 396 600 535 0

C 12

2 0 11 21 0 2 1 1

13 B 2 ¼B @ 2 1

2 14 12 0 1 2 2 6

1 0 61 1 B 5 1 C C; D ¼ B 0 A 12 @ 10 1 2

656 1903 1539 2197

25

12 11 23 1 2 4 11 3

138 671 555 474

12 11 2 4 1 1 6 2

1 2 2C C; 1A 2

1 340 284 C C; 1953 A 3999

1 13 1 C C: 4 A 1

It can be verified that the generalized coupled matrix equations (2.2.1) are consistent and have a unique solution pair (X*, Y*) as follows:

1 B 2  X ¼B @ 2 1

2 2 2 2

2 2 3 2

1 0 1 1 1 B1 2 2C  C; Y ¼ B @1 1 2A 4 1 1

1 1 0 2

1 0 1C C: 2A 1

If we let the initial matrix pair (X(1), Y(1)) = 0 and apply algorithm 2.1, we obtain the approximate solutions

1:0000 B 1:9999 Xð109Þ ¼ B @ 2:0000 0:9999 0

1:0000 B 1:0000 Y ð109Þ ¼ B @ 0:9999 0:9999

2:0000 2:0000 1:9999 1:9999 0:9999 2:0000 1:0000 1:0000

2:0000 1:9999 3:0000 2:0000 0:9999 0:9999 0:0000 2:0000

1 1:0000 1:9999 C C; 2:0000 A 4:0000 1 0:0000 0:9999 C C; 2:0000 A 1:0000

with corresponding residual ‖R(109)‖ = 4.2982 × 10⁻¹¹. The obtained results are presented in figure 2.2.1, where

    e_k = ‖(X(k), Y(k)) − (X*, Y*)‖ / ‖(X*, Y*)‖,   r_k = ‖R(k)‖.


FIG. 2.2.1 – The relative error of solution and the residual for example 2.1.
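The computations above can be reproduced with a direct transcription of algorithm 2.1. A sketch (Python with NumPy) on a small random consistent instance of (2.1.1) with N = 2 equations and p = 2 unknowns, not the specific matrices of example 2.1, using zero initial guesses (the least-norm branch of theorem 2.1):

```python
import numpy as np

rng = np.random.default_rng(8)

# Random consistent data: A[i][g], B[i][g], C[i][g], D[i][g] are 3 x 3.
N, p = 2, 2
A = [[rng.standard_normal((3, 3)) for _ in range(p)] for _ in range(N)]
B = [[rng.standard_normal((3, 3)) for _ in range(p)] for _ in range(N)]
C = [[rng.standard_normal((3, 3)) for _ in range(p)] for _ in range(N)]
D = [[rng.standard_normal((3, 3)) for _ in range(p)] for _ in range(N)]
X_true = [rng.standard_normal((3, 3)) for _ in range(p)]
F = [sum(A[i][g] @ X_true[g] @ B[i][g] + C[i][g] @ X_true[g].T @ D[i][g]
         for g in range(p)) for i in range(N)]

def residuals(X):
    # R_i = F_i - sum_g (A_ig X_g B_ig + C_ig X_g^T D_ig)
    return [F[i] - sum(A[i][g] @ X[g] @ B[i][g] + C[i][g] @ X[g].T @ D[i][g]
                       for g in range(p)) for i in range(N)]

def directions(R):
    # P_g = sum_i (A_ig^T R_i B_ig^T + D_ig R_i^T C_ig)
    return [sum(A[i][g].T @ R[i] @ B[i][g].T + D[i][g] @ R[i].T @ C[i][g]
                for i in range(N)) for g in range(p)]

X = [np.zeros((3, 3)) for _ in range(p)]
R = residuals(X)
P = directions(R)
res0 = sum(np.linalg.norm(Ri) ** 2 for Ri in R)
for _ in range(200):
    rr = sum(np.linalg.norm(Ri) ** 2 for Ri in R)
    if rr <= 1e-14 * res0:
        break
    alpha = rr / sum(np.linalg.norm(Pg) ** 2 for Pg in P)
    X = [X[g] + alpha * P[g] for g in range(p)]
    R = residuals(X)
    beta = sum(np.linalg.norm(Ri) ** 2 for Ri in R) / rr
    new_dir = directions(R)
    P = [new_dir[g] + beta * P[g] for g in range(p)]

assert sum(np.linalg.norm(Ri) ** 2 for Ri in R) <= 1e-8 * res0
```

In exact arithmetic the theory guarantees termination in at most q + 1 steps; numerically the residual drops to roundoff level after a comparable number of iterations.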

Now let

0

1 B0 b ¼B X @1 1

1 0 7 1

1 0 1 2

1 0 0 1 B1 1C b ¼B C; Y @1 1A 8 1

0 1 0 0

2 2 1 0

1 0 0C C; 0A 2

By algorithm 2.1 for the generalized coupled Sylvester matrix equation (2.2.1), with the initial matrix pair ðXð1Þ; Y ð1ÞÞ ¼ 0, we have the least Frobenius norm solution of the generalized coupled Sylvester matrix equation (2.2.1) by the following form: 0 1 2:0000 0:9999 1:0000 1:0000 B 2:0000  1:9999 1:9999 1:0000 C C X ¼ Xð107Þ ¼ B @ 0:9999 4:9999 1:9999 1:0000 A; 0:0000 1:0000 0:0000 4:0000 0

0:0000 B 0:0000  Y ¼ Y ð107ÞB @ 0:0000 0:0000

1:0000 0:9999 0:9999 0:9999

1:0000 0:9999 0:9999 2:0000

with corresponding residual ‖R(107)‖ = 7.8716 × 10⁻¹¹.

1 0:0000 0:9999 C C; 1:9999 A 1:0000


FIG. 2.2.2 – The relative error of least Frobenius norm solution and the residual for example 2.1.

The obtained results are presented in figure 2.2.2. Therefore, the solutions of the matrix equations nearness problem are 0 1 1:0000 1:9999 2:0000 1:0000 B C e ¼ X þ X b ¼ B 2:0000 1:9999 1:9999 2:0000 C; X @ 1:9999 2:0001 2:9999 2:0000 A 1:0000 2:0000 2:0000 4:0000 0

1:0000 B 1:0000  e ¼Y þY b ¼B Y @ 1:0000 1:0000


1:0000 1:9999 0:9999 0:9999

1:0000 1:0001 0:0001 2:0000

1 0:0000 0:9999 C C: 1:9999 A 1:0000

2.3 Control Application

A number of control problems are related to the SCA (state covariance assignment) problem [25], for example, controllability/observability Gramian assignment [26] and a certain class of H∞ control problems [27]. From a mathematical viewpoint, we point out that both the continuous-time and the discrete-time SCA problems can be reduced to solving a symmetric matrix equation. Let us expand the continuous-time Lyapunov equation

    A_K X + X A_K^T + B_K B_K^T = 0


as

    (AX + XA^T + B_1 B_1^T) + B_2 K(C_2 X + D_{21} B_1^T) + (C_2 X + D_{21} B_1^T)^T K^T B_2^T + B_2 K(D_{21} D_{21}^T) K^T B_2^T = 0.

We also expand the discrete-time Lyapunov equation

    A_K X A_K^T − X + B_K B_K^T = 0

as

    (A X A^T − X + B_1 B_1^T) + B_2 K(C_2 X A^T + D_{21} B_1^T) + (C_2 X A^T + D_{21} B_1^T)^T K^T B_2^T + B_2 K(C_2 X C_2^T + D_{21} D_{21}^T) K^T B_2^T = 0.

It is easy to see that the continuous-time and discrete-time SCA problems are essentially the same from the viewpoint of mathematics. We also consider the relationship between the matrix equation and some important special cases as a control problem. For example, let matrices B ∈ R^{n×m}, Q ∈ R^{n×n}, R ∈ R^{p×p} and S ∈ R^{p×n} be given, where Q = Q^T, R = R^T ⪰ 0 and rank R = r. We focus on a symmetric matrix equation of quadratic type [26]:

    Q + BKS + (BKS)^T + BKRK^T B^T = 0.    (2.3.1)

Thus, the solution of the symmetric matrix equation above is equivalent to the SCA problem with proper definitions of B, Q, R, and S. When R = 0, the special case of the matrix equation above is reduced to the Kalman–Yakubovich-transpose matrix equation X − AX^T B = C.
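The continuous-time expansion above is a purely algebraic identity, valid for any symmetric X once the closed-loop matrices A_K = A + B₂KC₂ and B_K = B₁ + B₂KD₂₁ are substituted; it can be checked numerically. A minimal sketch (Python with NumPy; the dimensions and random data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, pm, qm = 4, 2, 3, 2   # assumed sizes: state n, and input/output widths

# Closed-loop data: A_K = A + B2 K C2, B_K = B1 + B2 K D21.
A   = rng.standard_normal((n, n))
B1  = rng.standard_normal((n, qm))
B2  = rng.standard_normal((n, m))
C2  = rng.standard_normal((pm, n))
D21 = rng.standard_normal((pm, qm))
K   = rng.standard_normal((m, pm))
X = rng.standard_normal((n, n)); X = X + X.T   # covariance candidate: symmetric

AK = A + B2 @ K @ C2
BK = B1 + B2 @ K @ D21

# Left: closed-loop Lyapunov expression; right: the expanded form in the text.
lhs = AK @ X + X @ AK.T + BK @ BK.T
rhs = (A @ X + X @ A.T + B1 @ B1.T) \
    + B2 @ K @ (C2 @ X + D21 @ B1.T) \
    + (C2 @ X + D21 @ B1.T).T @ K.T @ B2.T \
    + B2 @ K @ (D21 @ D21.T) @ K.T @ B2.T
assert np.allclose(lhs, rhs)
```

Note that the identity relies on X = Xᵀ (the term XC₂ᵀKᵀB₂ᵀ is rewritten as (C₂X)ᵀKᵀB₂ᵀ), which holds automatically for a state covariance.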

2.4 Conclusions

In this chapter, we have constructed an iterative algorithm for solving the generalized coupled Sylvester-transpose matrix equations. By this algorithm, the solvability of the generalized coupled Sylvester-transpose matrix equations can be derived automatically. Also when the considered coupled matrix equations are consistent, for any initial matrix group, a generalized solution group can be obtained in finite iterative steps in the absence of roundoff errors. Furthermore, we have solved problem 2.2 using the proposed algorithm 2.1. Finally, a numerical example is presented to support the theoretical results of this chapter.

Chapter 3
Finite Iterative Algorithm to Coupled Operator Matrix Equations with Sub-Matrix Constrained

In this chapter, we consider the following linear matrix equations

    [∑_{i=1}^{p} A_{1i}(X_i), ∑_{i=1}^{p} A_{2i}(X_i), …, ∑_{i=1}^{p} A_{pi}(X_i)] = [M_1, M_2, …, M_p],    (3.0.1)

where A_{si} ∈ LC^{m_i×n_i, p_s×q_s} and M_s ∈ C^{p_s×q_s}, i, s = 1, 2, …, p. The coupled operator matrix equations (3.0.1) include the generalized coupled Sylvester matrix equations and the generalized coupled Lyapunov matrix equations as special cases. In addition, all the constraint solutions in [17, 19, 20, 23, 34–46] can be summarized as the following cases:

(1) Symmetric (skew-symmetric) constraint: X = X^T (X = −X^T);
(2) Centrosymmetric (centroskew-symmetric) constraint: X = S_n X S_n (X = −S_n X S_n);
(3) Reflexive (anti-reflexive) constraint: X = PXP (X = −PXP);
(4) Generalized reflexive (anti-reflexive) constraint: X = PXQ (X = −PXQ);
(5) P-orthogonal-symmetric (P-orthogonal-anti-symmetric) constraint: (PX)^T = PX ((PX)^T = −PX),

where P^T = P = P^{-1} ≠ I_n and Q^T = Q = Q^{-1} ≠ I_n.

It should be remarked that all the above common constraint solutions of matrix equations are special cases of the constraint

    X = U(X),    (3.0.2)

where U ∈ LC^{m×n} is a self-conjugate involution operator, i.e., U* = U and U² = I. From an operator point of view, in this chapter we construct a finite iterative algorithm to solve the coupled operator matrix equations (3.0.1) by extending the idea of the conjugate gradient method. On the whole, we mainly consider the following two problems.

DOI: 10.1051/978-2-7598-3102-9.c003 © Science Press, EDP Sciences, 2023

Solutions to Linear Matrix Equations and Their Applications


Problem 3.1. Let $S$ denote the set of constrained matrices defined by (3.0.2). For given $A_{ij} \in LC^{m_j \times n_j, p_i \times q_i}$ and $M_i \in C^{p_i \times q_i}$, $i, j = 1, 2, \ldots, p$, find $X_j \in S$, $j = 1, 2, \ldots, p$, such that
$$\Big[\sum_{i=1}^{p} A_{1i}(X_i),\ \sum_{i=1}^{p} A_{2i}(X_i),\ \ldots,\ \sum_{i=1}^{p} A_{pi}(X_i)\Big] = [M_1, M_2, \ldots, M_p]. \qquad (3.0.3)$$

Problem 3.2. Let $\bar S$ denote the solution set of problem 3.1. For given $\widehat X_j \in C^{m_j \times n_j}$, $j = 1, 2, \ldots, p$, find $\bar X_j \in \bar S$, $j = 1, 2, \ldots, p$, such that
$$\sum_{j=1}^{p} \|\bar X_j - \widehat X_j\|^2 = \min_{(X_1, \ldots, X_p) \in \bar S} \sum_{j=1}^{p} \|X_j - \widehat X_j\|^2. \qquad (3.0.4)$$

In the following, we define the inner product as $\langle X, Y \rangle = \mathrm{tr}(Y^H X)$ for all $X, Y \in C^{m \times n}$. If $\langle X, Y \rangle = 0$, the two matrices $X$ and $Y$ are said to be orthogonal. In addition, let $LC^{m \times n, p \times q}$ denote the set of linear operators from $C^{m \times n}$ to $C^{p \times q}$. In particular, when $p = m$ and $q = n$, $LC^{m \times n, p \times q}$ is denoted by $LC^{m \times n}$. For $A \in LC^{m \times n, p \times q}$, its conjugate operator $A^{*}$ is the linear operator satisfying
$$\langle A(X), Y \rangle = \langle X, A^{*}(Y) \rangle \quad \text{for all } X \in C^{m \times n},\ Y \in C^{p \times q}.$$
For example, if $A$ is defined as $X \mapsto AXB$, then $A^{*}: Y \mapsto A^H Y B^H$.
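The adjoint formula for the example operator $X \mapsto AXB$ can be verified numerically; the following NumPy sketch (with illustrative dimensions of our choosing) checks $\langle AXB, Y \rangle = \langle X, A^H Y B^H \rangle$ on random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, q = 3, 4, 5, 6
cplx = lambda r, c: rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))
A, B = cplx(p, m), cplx(n, q)
X, Y = cplx(m, n), cplx(p, q)

inner = lambda U, V: np.trace(V.conj().T @ U)        # <U, V> = tr(V^H U)

lhs = inner(A @ X @ B, Y)                            # <A(X), Y> with A: X -> A X B
rhs = inner(X, A.conj().T @ Y @ B.conj().T)          # <X, A*(Y)> with A*: Y -> A^H Y B^H
assert np.isclose(lhs, rhs)
```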

3.1 Iterative Method for Solving Problem 3.1

In this section, in order to solve problem 3.1, we first propose an iterative algorithm. Next, some basic properties of the algorithm are presented. Finally, finding the least Frobenius norm solution of problem 3.1 is considered. In the sequel, the least norm solution always means the least Frobenius norm solution.

Algorithm 3.1

Step 1. Input $A_{ij} \in LC^{m_j \times n_j, p_i \times q_i}$, $M_i \in C^{p_i \times q_i}$, and arbitrary $X_j(1) \in S$, $i, j = 1, 2, \ldots, p$;
Step 2. Compute
$$R(1) = \mathrm{diag}\Big(M_1 - \sum_{i=1}^{p} A_{1i}(X_i(1)),\ M_2 - \sum_{i=1}^{p} A_{2i}(X_i(1)),\ \ldots,\ M_p - \sum_{i=1}^{p} A_{pi}(X_i(1))\Big) = \mathrm{diag}(R_1(1), R_2(1), \ldots, R_p(1)),$$
$$P_i(1) = \frac{1}{2} \sum_{s=1}^{p} \big[A_{si}^{*}(R_s(1)) + U A_{si}^{*}(R_s(1))\big], \quad i = 1, 2, \ldots, p; \qquad k := 1;$$


Step 3. If $R(k) = 0$, then stop: $(X_1(k), X_2(k), \ldots, X_p(k))$ is a constrained solution group. Else, if $R(k) \neq 0$ but $P_m(k) = 0$ for all $m = 1, 2, \ldots, p$, then stop: the coupled matrix equations (3.0.1) are not consistent over the constrained solution groups. Otherwise, set $k := k + 1$;
Step 4. Compute
$$X_i(k) = X_i(k-1) + \frac{\|R(k-1)\|_F^2}{\sum_{t=1}^{p} \|P_t(k-1)\|_F^2}\, P_i(k-1), \quad i = 1, 2, \ldots, p,$$
$$\begin{aligned} R(k) &= \mathrm{diag}\Big(M_1 - \sum_{i=1}^{p} A_{1i}(X_i(k)),\ \ldots,\ M_p - \sum_{i=1}^{p} A_{pi}(X_i(k))\Big) \\ &= R(k-1) - \frac{\|R(k-1)\|_F^2}{\sum_{t=1}^{p} \|P_t(k-1)\|_F^2}\, \mathrm{diag}\Big(\sum_{i=1}^{p} A_{1i}(P_i(k-1)),\ \ldots,\ \sum_{i=1}^{p} A_{pi}(P_i(k-1))\Big) \\ &= \mathrm{diag}(R_1(k), R_2(k), \ldots, R_p(k)), \end{aligned}$$
$$Z_i(k) = \frac{1}{2} \sum_{s=1}^{p} \big[A_{si}^{*}(R_s(k)) + U A_{si}^{*}(R_s(k))\big],$$
$$P_i(k) = Z_i(k) + \frac{\|R(k)\|_F^2}{\|R(k-1)\|_F^2}\, P_i(k-1), \quad i = 1, 2, \ldots, p;$$
Step 5. Go to step 3.

Some basic properties of algorithm 3.1 are listed in the following lemmas.

Lemma 3.1. The sequences $\{X_i(k)\}$ and $\{P_i(k)\}$, $i = 1, 2, \ldots, p$, generated by algorithm 3.1 are contained in the constraint set $S$.

Proof. We prove the conclusion by induction. For $k = 1$ and $i = 1, 2, \ldots, p$, by $U^2 = I$, the linearity of $U$ and algorithm 3.1, we have
$$U(P_i(1)) = U\Big(\frac{1}{2} \sum_{s=1}^{p} \big[A_{si}^{*}(R_s(1)) + U A_{si}^{*}(R_s(1))\big]\Big) = \frac{1}{2} \sum_{s=1}^{p} \big[U A_{si}^{*}(R_s(1)) + A_{si}^{*}(R_s(1))\big] = P_i(1),$$
i.e., $P_i(1) \in S$. For $k = 2$, since $X_i(1) \in S$, $i = 1, 2, \ldots, p$, by algorithm 3.1 we have
$$U(X_i(2)) = U(X_i(1)) + \frac{\|R(1)\|_F^2}{\sum_{t=1}^{p} \|P_t(1)\|_F^2}\, U(P_i(1)) = X_i(1) + \frac{\|R(1)\|_F^2}{\sum_{t=1}^{p} \|P_t(1)\|_F^2}\, P_i(1) = X_i(2),$$
i.e., $X_i(2) \in S$, $i = 1, 2, \ldots, p$. Moreover,
$$U(P_i(2)) = U\Big(\frac{1}{2} \sum_{s=1}^{p} \big[A_{si}^{*}(R_s(2)) + U A_{si}^{*}(R_s(2))\big] + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2}\, P_i(1)\Big) = \frac{1}{2} \sum_{s=1}^{p} \big[U A_{si}^{*}(R_s(2)) + A_{si}^{*}(R_s(2))\big] + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2}\, P_i(1) = P_i(2),$$
i.e., $P_i(2) \in S$, $i = 1, 2, \ldots, p$. Now assume that the conclusion holds for $k = m$ $(m > 2)$, i.e., $X_i(m), P_i(m) \in S$, $i = 1, 2, \ldots, p$. Then
$$U(X_i(m+1)) = U(X_i(m)) + \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2}\, U(P_i(m)) = X_i(m) + \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2}\, P_i(m) = X_i(m+1),$$
and
$$U(P_i(m+1)) = U\Big(\frac{1}{2} \sum_{s=1}^{p} \big[A_{si}^{*}(R_s(m+1)) + U A_{si}^{*}(R_s(m+1))\big] + \frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2}\, P_i(m)\Big) = \frac{1}{2} \sum_{s=1}^{p} \big[U A_{si}^{*}(R_s(m+1)) + A_{si}^{*}(R_s(m+1))\big] + \frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2}\, P_i(m) = P_i(m+1).$$
Therefore, $X_i(m+1), P_i(m+1) \in S$, $i = 1, 2, \ldots, p$. Thus, by the principle of induction we draw the conclusion. □
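For concreteness, the iteration of algorithm 3.1 can be sketched in NumPy for the special case $A_{si}(X) = A_{si} X B_{si}$ with the symmetric constraint $U(X) = X^T$ (the setting of example 3.1 below); this is an illustrative translation, not the book's MATLAB code, and the function name and zero initial guess are our choices:

```python
import numpy as np

def algorithm_3_1(A, B, M, tol=1e-10, maxit=2000):
    """CG-style sketch of algorithm 3.1 for sum_i A[s][i] @ X[i] @ B[s][i] = M[s]
    over symmetric X[i] (constraint operator U(X) = X^T), zero initial guess."""
    p = len(M)
    n = A[0][0].shape[1]
    sym = lambda Z: (Z + Z.T) / 2                     # Z -> (Z + U(Z)) / 2
    X = [np.zeros((n, n)) for _ in range(p)]
    # residuals R_s = M_s - sum_i A_si(X_i)
    R = [M[s] - sum(A[s][i] @ X[i] @ B[s][i] for i in range(p)) for s in range(p)]
    # directions P_i = (1/2) sum_s [A*_si(R_s) + U(A*_si(R_s))], with A*_si: R -> A^T R B^T
    P = [sym(sum(A[s][i].T @ R[s] @ B[s][i].T for s in range(p))) for i in range(p)]
    r2 = sum(np.linalg.norm(Rs) ** 2 for Rs in R)
    for _ in range(maxit):
        p2 = sum(np.linalg.norm(Pi) ** 2 for Pi in P)
        if np.sqrt(r2) < tol or p2 == 0.0:            # step 3 stopping tests
            break
        alpha = r2 / p2
        X = [X[i] + alpha * P[i] for i in range(p)]
        R = [R[s] - alpha * sum(A[s][i] @ P[i] @ B[s][i] for i in range(p))
             for s in range(p)]
        r2_new = sum(np.linalg.norm(Rs) ** 2 for Rs in R)
        Z = [sym(sum(A[s][i].T @ R[s] @ B[s][i].T for s in range(p))) for i in range(p)]
        P = [Z[i] + (r2_new / r2) * P[i] for i in range(p)]
        r2 = r2_new
    return X, np.sqrt(r2)

# usage: a small consistent pair of coupled equations with a symmetric solution
rng = np.random.default_rng(7)
n, p = 4, 2
A = [[rng.standard_normal((n, n)) for _ in range(p)] for _ in range(p)]
B = [[rng.standard_normal((n, n)) for _ in range(p)] for _ in range(p)]
Xtrue = [(W + W.T) / 2 for W in (rng.standard_normal((n, n)) for _ in range(p))]
M = [sum(A[s][i] @ Xtrue[i] @ B[s][i] for i in range(p)) for s in range(p)]
Xsol, res = algorithm_3_1(A, B, M)
```

By lemma 3.1 every iterate stays symmetric, so no explicit projection of $X_i(k)$ is needed beyond the symmetrized directions.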

Lemma 3.2. Suppose that $X^{*} = [X_1^{*}, X_2^{*}, \ldots, X_p^{*}]$ is a solution of problem 3.1, and let $X(k) = [X_1(k), X_2(k), \ldots, X_p(k)]$. Then the sequences $\{X_i(k)\}$, $\{R(k)\}$ and $\{P_i(k)\}$ generated by algorithm 3.1 satisfy
$$\sum_{i=1}^{p} \langle X_i^{*} - X_i(k), P_i(k) \rangle = \|R(k)\|^2, \quad k = 1, 2, \ldots. \qquad (3.1.1)$$

Proof. We prove the conclusion by induction. For $k = 1$, it follows from $X_i^{*}, X_i(1) \in S$ that $X_i^{*} - X_i(1) \in S$. Then, by algorithm 3.1, we have
$$\begin{aligned} \sum_{i=1}^{p} \langle X_i^{*} - X_i(1), P_i(1) \rangle &= \sum_{i=1}^{p} \Big\langle X_i^{*} - X_i(1),\ \frac{1}{2} \sum_{s=1}^{p} \big[A_{si}^{*}(R_s(1)) + U A_{si}^{*}(R_s(1))\big] \Big\rangle \\ &= \frac{1}{2} \sum_{i=1}^{p} \sum_{s=1}^{p} \big[\langle X_i^{*} - X_i(1), A_{si}^{*}(R_s(1)) \rangle + \langle X_i^{*} - X_i(1), U A_{si}^{*}(R_s(1)) \rangle\big] \\ &= \frac{1}{2} \sum_{i=1}^{p} \sum_{s=1}^{p} \langle A_{si}(X_i^{*} - X_i(1)) + A_{si} U(X_i^{*} - X_i(1)), R_s(1) \rangle \\ &= \sum_{i=1}^{p} \sum_{s=1}^{p} \langle A_{si}(X_i^{*} - X_i(1)), R_s(1) \rangle \\ &= \sum_{s=1}^{p} \Big\langle M_s - \sum_{i=1}^{p} A_{si}(X_i(1)), R_s(1) \Big\rangle = \sum_{s=1}^{p} \langle R_s(1), R_s(1) \rangle = \sum_{s=1}^{p} \|R_s(1)\|^2 = \|R(1)\|^2, \end{aligned}$$
where the fourth equality uses $U(X_i^{*} - X_i(1)) = X_i^{*} - X_i(1)$.


For $k = 2$, by algorithm 3.1 and lemma 3.1, we have
$$\begin{aligned} \sum_{i=1}^{p} \langle X_i^{*} - X_i(2), P_i(2) \rangle &= \sum_{i=1}^{p} \Big\langle X_i^{*} - X_i(2),\ \frac{1}{2} \sum_{s=1}^{p} \big[A_{si}^{*}(R_s(2)) + U A_{si}^{*}(R_s(2))\big] \Big\rangle + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2} \sum_{i=1}^{p} \langle X_i^{*} - X_i(2), P_i(1) \rangle \\ &= \sum_{i=1}^{p} \sum_{s=1}^{p} \langle A_{si}(X_i^{*} - X_i(2)), R_s(2) \rangle + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2} \sum_{i=1}^{p} \Big\langle X_i^{*} - X_i(1) - \frac{\|R(1)\|_F^2}{\sum_{t=1}^{p} \|P_t(1)\|_F^2}\, P_i(1),\ P_i(1) \Big\rangle \\ &= \sum_{s=1}^{p} \Big\langle M_s - \sum_{i=1}^{p} A_{si}(X_i(2)), R_s(2) \Big\rangle + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2} \Big(\|R(1)\|^2 - \frac{\|R(1)\|_F^2}{\sum_{t=1}^{p} \|P_t(1)\|_F^2} \sum_{i=1}^{p} \|P_i(1)\|^2\Big) \\ &= \sum_{s=1}^{p} \|R_s(2)\|_F^2 = \|R(2)\|_F^2, \end{aligned}$$
where the second equality is obtained exactly as in the case $k = 1$, and the third uses the case $k = 1$.

Assume that the conclusion holds for $k = m$ $(m > 2)$, i.e., $\sum_{i=1}^{p} \langle X_i^{*} - X_i(m), P_i(m) \rangle = \|R(m)\|^2$. Then, for $k = m + 1$, we have
$$\begin{aligned} \sum_{i=1}^{p} \langle X_i^{*} - X_i(m+1), P_i(m+1) \rangle &= \sum_{i=1}^{p} \Big\langle X_i^{*} - X_i(m+1),\ \frac{1}{2} \sum_{s=1}^{p} \big[A_{si}^{*}(R_s(m+1)) + U A_{si}^{*}(R_s(m+1))\big] \Big\rangle \\ &\quad + \frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2} \sum_{i=1}^{p} \langle X_i^{*} - X_i(m+1), P_i(m) \rangle \\ &= \sum_{i=1}^{p} \sum_{s=1}^{p} \langle A_{si}(X_i^{*} - X_i(m+1)), R_s(m+1) \rangle \\ &\quad + \frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2} \sum_{i=1}^{p} \Big\langle X_i^{*} - X_i(m) - \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2}\, P_i(m),\ P_i(m) \Big\rangle \\ &= \sum_{s=1}^{p} \Big\langle M_s - \sum_{i=1}^{p} A_{si}(X_i(m+1)), R_s(m+1) \Big\rangle \\ &\quad + \frac{\|R(m+1)\|_F^2}{\|R(m)\|_F^2} \Big(\|R(m)\|^2 - \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2} \sum_{i=1}^{p} \|P_i(m)\|^2\Big) \\ &= \sum_{s=1}^{p} \|R_s(m+1)\|_F^2 = \|R(m+1)\|_F^2. \end{aligned}$$
Thus, we complete the proof by the principle of induction. □

Remark 1. From lemma 3.2 we can conclude that, if problem 3.1 is consistent, then $\|R(k)\|^2 \neq 0$ implies that the matrices $P_i(k)$, $i = 1, 2, \ldots, p$, are not all zero. Conversely, if there exists a positive integer $m$ such that $\|R(m)\|^2 \neq 0$ but $P_i(m) = 0$ for all $i = 1, 2, \ldots, p$, then problem 3.1 must be inconsistent. Hence, the solvability of problem 3.1 can be determined by algorithm 3.1 in the absence of roundoff errors.

Lemma 3.3. The sequences $\{R_i(k)\}$, $\{P_i(k)\}$ and $\{Z_i(k)\}$, $i = 1, 2, \ldots, p$, generated by algorithm 3.1 satisfy
$$\sum_{i=1}^{p} \langle R_i(m+1), R_i(n) \rangle = \sum_{i=1}^{p} \langle R_i(m), R_i(n) \rangle - \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2} \sum_{i=1}^{p} \langle P_i(m), Z_i(n) \rangle. \qquad (3.1.2)$$

Proof. According to algorithm 3.1, lemma 3.1 and the relation $U^{*} = U$, we have
$$\begin{aligned} \sum_{i=1}^{p} \langle R_i(m+1), R_i(n) \rangle &= \sum_{i=1}^{p} \Big\langle R_i(m) - \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2} \sum_{t=1}^{p} A_{it}(P_t(m)),\ R_i(n) \Big\rangle \\ &= \sum_{i=1}^{p} \langle R_i(m), R_i(n) \rangle - \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2} \sum_{i=1}^{p} \sum_{t=1}^{p} \langle A_{it}(P_t(m)), R_i(n) \rangle \\ &= \sum_{i=1}^{p} \langle R_i(m), R_i(n) \rangle - \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2} \sum_{i=1}^{p} \sum_{t=1}^{p} \langle P_t(m), A_{it}^{*}(R_i(n)) \rangle \\ &= \sum_{i=1}^{p} \langle R_i(m), R_i(n) \rangle - \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2} \sum_{i=1}^{p} \sum_{t=1}^{p} \Big\langle \frac{1}{2}\big(P_t(m) + U P_t(m)\big),\ A_{it}^{*}(R_i(n)) \Big\rangle \\ &= \sum_{i=1}^{p} \langle R_i(m), R_i(n) \rangle - \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2} \sum_{t=1}^{p} \Big\langle P_t(m),\ \frac{1}{2} \sum_{i=1}^{p} \big[A_{it}^{*}(R_i(n)) + U A_{it}^{*}(R_i(n))\big] \Big\rangle \\ &= \sum_{i=1}^{p} \langle R_i(m), R_i(n) \rangle - \frac{\|R(m)\|_F^2}{\sum_{t=1}^{p} \|P_t(m)\|_F^2} \sum_{t=1}^{p} \langle P_t(m), Z_t(n) \rangle, \end{aligned}$$
where the fourth equality uses $P_t(m) \in S$. This completes the proof. □

Lemma 3.4. Suppose that problem 3.1 is consistent and that $\|R(m)\|^2 \neq 0$ for all $m = 1, 2, \ldots, k$. Then the sequences $\{X_i(m)\}$, $\{R_i(m)\}$ and $\{P_i(m)\}$ generated by algorithm 3.1 satisfy
$$\sum_{i=1}^{p} \langle R_i(m), R_i(n) \rangle = 0 \quad \text{and} \quad \sum_{i=1}^{p} \langle P_i(m), P_i(n) \rangle = 0, \qquad (3.1.3)$$
for $m, n = 1, 2, \ldots, k$, $m \neq n$.

Proof. For arbitrary matrices $A$ and $B$, the inner product satisfies $\langle A, B \rangle = \overline{\langle B, A \rangle}$. Thus, we only need to prove the conclusion for all $1 \leq n < m \leq k$.

In the case of $m = 2$, $n = 1$, by lemma 3.3 and $Z_i(1) = P_i(1)$ we have
$$\sum_{i=1}^{p} \langle R_i(2), R_i(1) \rangle = \sum_{i=1}^{p} \|R_i(1)\|^2 - \frac{\|R(1)\|^2}{\sum_{t=1}^{p} \|P_t(1)\|^2} \sum_{i=1}^{p} \langle P_i(1), P_i(1) \rangle = \|R(1)\|^2 - \|R(1)\|^2 = 0,$$
and
$$\sum_{i=1}^{p} \langle P_i(2), P_i(1) \rangle = \sum_{i=1}^{p} \langle Z_i(2), P_i(1) \rangle + \frac{\|R(2)\|^2}{\|R(1)\|^2} \sum_{i=1}^{p} \|P_i(1)\|^2.$$
Applying lemma 3.3 with the indices $m = 1$, $n = 2$, together with $\sum_{i=1}^{p} \langle R_i(2), R_i(1) \rangle = 0$, gives
$$\sum_{i=1}^{p} \langle P_i(1), Z_i(2) \rangle = \frac{\sum_{t=1}^{p} \|P_t(1)\|^2}{\|R(1)\|^2} \Big(\sum_{i=1}^{p} \langle R_i(1), R_i(2) \rangle - \sum_{i=1}^{p} \|R_i(2)\|^2\Big) = -\frac{\sum_{t=1}^{p} \|P_t(1)\|^2}{\|R(1)\|^2}\, \|R(2)\|^2,$$
so that
$$\sum_{i=1}^{p} \langle P_i(2), P_i(1) \rangle = -\frac{\|R(2)\|^2}{\|R(1)\|^2} \sum_{t=1}^{p} \|P_t(1)\|^2 + \frac{\|R(2)\|^2}{\|R(1)\|^2} \sum_{i=1}^{p} \|P_i(1)\|^2 = 0.$$


Now assume that $\sum_{i=1}^{p} \langle R_i(m), R_i(n) \rangle = 0$ and $\sum_{i=1}^{p} \langle P_i(m), P_i(n) \rangle = 0$ hold for all $1 \leq n < m \leq k$; we prove that $\sum_{i=1}^{p} \langle R_i(m+1), R_i(n) \rangle = 0$ and $\sum_{i=1}^{p} \langle P_i(m+1), P_i(n) \rangle = 0$ hold for all $1 \leq n < m + 1 \leq k$. By the hypothesis and lemma 3.3, in the case of $n < m$ we obtain
$$\begin{aligned} \sum_{i=1}^{p} \langle R_i(m+1), R_i(n) \rangle &= \sum_{i=1}^{p} \langle R_i(m), R_i(n) \rangle - \frac{\|R(m)\|^2}{\sum_{t=1}^{p} \|P_t(m)\|^2} \sum_{i=1}^{p} \langle P_i(m), Z_i(n) \rangle \\ &= -\frac{\|R(m)\|^2}{\sum_{t=1}^{p} \|P_t(m)\|^2} \sum_{i=1}^{p} \Big\langle P_i(m),\ P_i(n) - \frac{\|R(n)\|^2}{\|R(n-1)\|^2}\, P_i(n-1) \Big\rangle = 0, \end{aligned}$$
where $Z_i(n) = P_i(n) - \frac{\|R(n)\|^2}{\|R(n-1)\|^2} P_i(n-1)$ for $n \geq 2$, and for $n = 1$ one uses $Z_i(1) = P_i(1)$ directly. Moreover,
$$\sum_{i=1}^{p} \langle P_i(m+1), P_i(n) \rangle = \sum_{i=1}^{p} \langle Z_i(m+1), P_i(n) \rangle + \frac{\|R(m+1)\|^2}{\|R(m)\|^2} \sum_{i=1}^{p} \langle P_i(m), P_i(n) \rangle = \sum_{i=1}^{p} \langle Z_i(m+1), P_i(n) \rangle,$$
and applying lemma 3.3 with the indices $n$ and $m+1$, together with the orthogonality of the residuals just proved,
$$\sum_{i=1}^{p} \langle P_i(n), Z_i(m+1) \rangle = \frac{\sum_{t=1}^{p} \|P_t(n)\|^2}{\|R(n)\|^2} \Big(\sum_{i=1}^{p} \langle R_i(n), R_i(m+1) \rangle - \sum_{i=1}^{p} \langle R_i(n+1), R_i(m+1) \rangle\Big) = 0.$$
In the case of $n = m$, we also have
$$\begin{aligned} \sum_{i=1}^{p} \langle R_i(m+1), R_i(m) \rangle &= \sum_{i=1}^{p} \|R_i(m)\|^2 - \frac{\|R(m)\|^2}{\sum_{t=1}^{p} \|P_t(m)\|^2} \sum_{i=1}^{p} \Big\langle P_i(m),\ P_i(m) - \frac{\|R(m)\|^2}{\|R(m-1)\|^2}\, P_i(m-1) \Big\rangle \\ &= \|R(m)\|^2 - \|R(m)\|^2 + \frac{\|R(m)\|^2}{\sum_{t=1}^{p} \|P_t(m)\|^2} \cdot \frac{\|R(m)\|^2}{\|R(m-1)\|^2} \sum_{i=1}^{p} \langle P_i(m), P_i(m-1) \rangle = 0, \end{aligned}$$
and
$$\begin{aligned} \sum_{i=1}^{p} \langle P_i(m+1), P_i(m) \rangle &= \sum_{i=1}^{p} \langle Z_i(m+1), P_i(m) \rangle + \frac{\|R(m+1)\|^2}{\|R(m)\|^2} \sum_{i=1}^{p} \|P_i(m)\|^2 \\ &= \frac{\sum_{t=1}^{p} \|P_t(m)\|^2}{\|R(m)\|^2} \Big(\sum_{i=1}^{p} \langle R_i(m), R_i(m+1) \rangle - \sum_{i=1}^{p} \langle R_i(m+1), R_i(m+1) \rangle\Big) + \frac{\|R(m+1)\|^2}{\|R(m)\|^2} \sum_{i=1}^{p} \|P_i(m)\|^2 \\ &= -\frac{\sum_{t=1}^{p} \|P_t(m)\|^2}{\|R(m)\|^2}\, \|R(m+1)\|^2 + \frac{\|R(m+1)\|^2}{\|R(m)\|^2} \sum_{i=1}^{p} \|P_i(m)\|^2 = 0. \end{aligned}$$
Therefore, by the principle of induction, the proof is completed. □
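The orthogonality relations of lemma 3.4 can be observed numerically. The following NumPy sketch (our illustration, not the book's code) runs the iteration on a single consistent equation $AXB = M$ with the symmetric constraint and checks that the stored residuals and directions are pairwise orthogonal:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
W = rng.standard_normal((n, n))
Xtrue = (W + W.T) / 2
M = A @ Xtrue @ B                       # consistent by construction

sym = lambda Z: (Z + Z.T) / 2
X = np.zeros((n, n))
R = M - A @ X @ B
P = sym(A.T @ R @ B.T)
Rs, Ps = [R], [P]                       # store iterates for the orthogonality check
for _ in range(5):
    alpha = np.linalg.norm(R) ** 2 / np.linalg.norm(P) ** 2
    X = X + alpha * P
    Rnew = R - alpha * (A @ P @ B)
    beta = np.linalg.norm(Rnew) ** 2 / np.linalg.norm(R) ** 2
    P = sym(A.T @ Rnew @ B.T) + beta * P
    R = Rnew
    Rs.append(R)
    Ps.append(P)

# lemma 3.4: <R(m), R(n)> = 0 and <P(m), P(n)> = 0 for m != n
for m in range(len(Rs)):
    for k in range(m):
        tolR = 1e-8 * np.linalg.norm(Rs[m]) * np.linalg.norm(Rs[k]) + 1e-12
        tolP = 1e-8 * np.linalg.norm(Ps[m]) * np.linalg.norm(Ps[k]) + 1e-12
        assert abs(np.trace(Rs[k].T @ Rs[m])) < tolR
        assert abs(np.trace(Ps[k].T @ Ps[m])) < tolP
```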

Remark 2. Lemma 3.4 implies that when problem 3.1 is consistent, the directions $P_i(1), P_i(2), \ldots$, $i = 1, 2, \ldots, p$, are orthogonal to each other in the finite-dimensional matrix space. Hence, there exists a positive integer $t$, not exceeding the dimension of that space, such that $P_i(t) = 0$, $i = 1, 2, \ldots, p$. According to lemma 3.2, we then have $\|R(t)\|^2 = 0$. Thus, the iteration terminates within finitely many steps in the absence of roundoff errors.

In the following, we solve the least norm solution of problem 3.1. The following lemmas are needed for our derivation.

Lemma 3.5 ([28]). Suppose that the consistent system of linear equations $Ax = b$ has a solution $x^{*} \in R(A^H)$. Then $x^{*}$ is the unique least norm solution of the system of linear equations.

Lemma 3.6. For $A \in LC^{m \times n, p \times q}$, there exists a unique matrix $M \in C^{pq \times mn}$ such that $\mathrm{vec}(A(X)) = M\, \mathrm{vec}(X)$ for all $X \in C^{m \times n}$.

According to lemma 3.6 and the definition of the conjugate operator, one can easily obtain the following corollary.

Corollary 3.1. Let $A$ and $M$ be the same as those in lemma 3.6, and let $A^{*}$ be the conjugate operator of $A$. Then $\mathrm{vec}(A^{*}(Y)) = M^H \mathrm{vec}(Y)$ for all $Y \in C^{p \times q}$.

Theorem 3.1. Suppose problem 3.1 is consistent. If the initial matrices are chosen as $X_j(1) = \frac{1}{2} \sum_{i=1}^{p} \big[A_{ij}^{*}(H_i) + U A_{ij}^{*}(H_i)\big]$, $j = 1, 2, \ldots, p$, where the $H_i$ are arbitrary matrices, or more especially $X_j(1) = 0$, $j = 1, 2, \ldots, p$, then the solution group $[X_1^{*}, X_2^{*}, \ldots, X_p^{*}]$ generated by algorithm 3.1 is the least Frobenius norm constrained solution group of the coupled operator matrix equations (3.0.1).


Proof. Now we consider the following coupled operator matrix equations
$$\begin{cases} \displaystyle\sum_{i=1}^{p} A_{1i}(X_i) = E_1, \quad \ldots, \quad \sum_{i=1}^{p} A_{pi}(X_i) = E_p, \\[2mm] \displaystyle\sum_{i=1}^{p} A_{1i}(U(X_i)) = E_1, \quad \ldots, \quad \sum_{i=1}^{p} A_{pi}(U(X_i)) = E_p, \end{cases} \qquad (3.1.4)$$
where $E_s = M_s$, $s = 1, 2, \ldots, p$. If problem 3.1 has a solution $[X_1, X_2, \ldots, X_p]$, then $[X_1, X_2, \ldots, X_p]$ must be a solution of the coupled operator matrix equations (3.1.4). Conversely, if (3.1.4) has a solution $[X_1, X_2, \ldots, X_p]$ and we let $\widetilde X_j = \frac{X_j + U(X_j)}{2}$, $j = 1, 2, \ldots, p$, then it is easy to verify that $[\widetilde X_1, \widetilde X_2, \ldots, \widetilde X_p]$ is a solution of problem 3.1. Therefore, the solvability of problem 3.1 is equivalent to that of the coupled operator matrix equations (3.1.4).

If we choose the initial matrices $X_j(1) = \frac{1}{2} \sum_{i=1}^{p} \big[A_{ij}^{*}(H_i) + U A_{ij}^{*}(H_i)\big]$, $j = 1, 2, \ldots, p$, where each $H_i$ is an arbitrary matrix, then by algorithm 3.1 and remark 2 we obtain a solution $[X_1^{*}, X_2^{*}, \ldots, X_p^{*}]$ of problem 3.1 within finitely many iteration steps, and this solution can be written in the form $X_j^{*} = \frac{1}{2} \sum_{i=1}^{p} \big[A_{ij}^{*}(Y_i) + U A_{ij}^{*}(Y_i)\big]$, $j = 1, 2, \ldots, p$, for some matrices $Y_i$. Since the solution set of problem 3.1 is a subset of that of (3.1.4), it suffices to prove that $[X_1^{*}, \ldots, X_p^{*}]$ is the least norm solution group of the coupled matrix equations (3.1.4); this then implies that $[X_1^{*}, \ldots, X_p^{*}]$ is also the least norm constrained solution group of problem 3.1.

By lemma 3.6, let $M_s$ and $N$ be the matrices such that
$$\mathrm{vec}\Big(\sum_{i=1}^{p} A_{si}(X_i)\Big) = M_s \begin{pmatrix} \mathrm{vec}(X_1) \\ \vdots \\ \mathrm{vec}(X_p) \end{pmatrix}, \qquad \begin{pmatrix} \mathrm{vec}(U(X_1)) \\ \vdots \\ \mathrm{vec}(U(X_p)) \end{pmatrix} = N \begin{pmatrix} \mathrm{vec}(X_1) \\ \vdots \\ \mathrm{vec}(X_p) \end{pmatrix},$$
for all $X_j \in C^{m_j \times n_j}$, $j = 1, 2, \ldots, p$. Then the coupled matrix equations (3.1.4) are equivalent to the linear system
$$\begin{pmatrix} M_1 \\ \vdots \\ M_p \\ M_1 N \\ \vdots \\ M_p N \end{pmatrix} \begin{pmatrix} \mathrm{vec}(X_1) \\ \vdots \\ \mathrm{vec}(X_p) \end{pmatrix} = \begin{pmatrix} \mathrm{vec}(E_1) \\ \vdots \\ \mathrm{vec}(E_p) \\ \mathrm{vec}(E_1) \\ \vdots \\ \mathrm{vec}(E_p) \end{pmatrix}.$$
Note that, since $N^H = N$,
$$\begin{pmatrix} \mathrm{vec}(X_1^{*}) \\ \vdots \\ \mathrm{vec}(X_p^{*}) \end{pmatrix} = \frac{1}{2} \sum_{s=1}^{p} \big[M_s^H \mathrm{vec}(Y_s) + N^H M_s^H \mathrm{vec}(Y_s)\big] \in R\big(M_1^H, \ldots, M_p^H, (M_1 N)^H, \ldots, (M_p N)^H\big).$$

Then it follows from lemma 3.5 that $[X_1^{*}, \ldots, X_p^{*}]$ is the least norm solution group of the coupled operator matrix equations (3.1.4). This solution group is also the least norm constrained solution group of problem 3.1. □
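The vec-based argument of theorem 3.1 can be checked numerically: for the system (3.1.4) in stacked vec form, the minimum-norm solution returned by the pseudoinverse automatically satisfies the constraint. The following NumPy sketch (a small hypothetical instance of ours, with the symmetric constraint $U(X) = X^T$ so that $N$ is the commutation matrix) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 3, 2
vec = lambda Z: Z.flatten(order="F")      # column-stacking vec

# commutation matrix N1 with N1 @ vec(X) = vec(X^T)
N1 = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        N1[i + n * j, j + n * i] = 1.0
N = np.kron(np.eye(p), N1)                # acts blockwise on (vec X_1; vec X_2)

# a consistent instance with a symmetric exact solution
A = [[rng.standard_normal((n, n)) for _ in range(p)] for _ in range(p)]
B = [[rng.standard_normal((n, n)) for _ in range(p)] for _ in range(p)]
Xt = [(W + W.T) / 2 for W in (rng.standard_normal((n, n)) for _ in range(p))]
Ms = [sum(A[s][i] @ Xt[i] @ B[s][i] for i in range(p)) for s in range(p)]

# vec form: sum_i A_si X_i B_si  <->  M_s @ (vec X_1; vec X_2), M_s built from Kroneckers
Mrows = [np.hstack([np.kron(B[s][i].T, A[s][i]) for i in range(p)]) for s in range(p)]
K = np.vstack(Mrows + [Mr @ N for Mr in Mrows])          # the system (3.1.4)
rhs = np.concatenate([vec(Ms[s]) for s in range(p)] * 2)

x = np.linalg.pinv(K) @ rhs               # least Frobenius norm solution
Xsol = [x[i * n * n:(i + 1) * n * n].reshape((n, n), order="F") for i in range(p)]
for Xi in Xsol:
    assert np.allclose(Xi, Xi.T, atol=1e-8)   # min-norm solution is symmetric
```

The symmetry follows because the solution set of the augmented system is invariant under $x \mapsto Nx$, and the minimum-norm solution is unique with $\|Nx\| = \|x\|$.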

3.2 Iterative Method for Solving Problem 3.2

In this section, we consider an iterative method for solving the nearness problem 3.2. Let $\bar S$ denote the solution set of problem 3.1. For given matrices $\widehat X_j \in C^{m_j \times n_j}$, $j = 1, 2, \ldots, p$, and an arbitrary $X_j \in \bar S$, we have
$$\|X_j - \widehat X_j\|^2 = \Big\| X_j - \frac{\widehat X_j + U(\widehat X_j)}{2} - \frac{\widehat X_j - U(\widehat X_j)}{2} \Big\|^2 = \Big\| X_j - \frac{\widehat X_j + U(\widehat X_j)}{2} \Big\|^2 + \Big\| \frac{\widehat X_j - U(\widehat X_j)}{2} \Big\|^2 - 2 \Big\langle X_j - \frac{\widehat X_j + U(\widehat X_j)}{2},\ \frac{\widehat X_j - U(\widehat X_j)}{2} \Big\rangle. \qquad (3.2.1)$$
Note that
$$\Big\langle X_j - \frac{\widehat X_j + U(\widehat X_j)}{2},\ \frac{\widehat X_j - U(\widehat X_j)}{2} \Big\rangle = \Big\langle U\Big(X_j - \frac{\widehat X_j + U(\widehat X_j)}{2}\Big),\ U\Big(\frac{\widehat X_j - U(\widehat X_j)}{2}\Big) \Big\rangle = \Big\langle X_j - \frac{\widehat X_j + U(\widehat X_j)}{2},\ \frac{U(\widehat X_j) - \widehat X_j}{2} \Big\rangle = -\Big\langle X_j - \frac{\widehat X_j + U(\widehat X_j)}{2},\ \frac{\widehat X_j - U(\widehat X_j)}{2} \Big\rangle.$$
This implies $\big\langle X_j - \frac{\widehat X_j + U(\widehat X_j)}{2},\ \frac{\widehat X_j - U(\widehat X_j)}{2} \big\rangle = 0$. Therefore, the relation (3.2.1) reduces to
$$\|X_j - \widehat X_j\|^2 = \Big\| X_j - \frac{\widehat X_j + U(\widehat X_j)}{2} \Big\|^2 + \Big\| \frac{\widehat X_j - U(\widehat X_j)}{2} \Big\|^2.$$
Hence, $\min_{X_j \in \bar S} \|X_j - \widehat X_j\|$ is equivalent to $\min_{X_j \in \bar S} \big\| X_j - \frac{\widehat X_j + U(\widehat X_j)}{2} \big\|$, where $X_j - \frac{\widehat X_j + U(\widehat X_j)}{2} \in S$.

Now denote $\widetilde X_j = X_j - \frac{\widehat X_j + U(\widehat X_j)}{2}$ and $\widetilde E_s = M_s - \sum_{i=1}^{p} A_{si}\Big(\frac{\widehat X_i + U(\widehat X_i)}{2}\Big)$, $s = 1, 2, \ldots, p$. Then $[\widetilde X_1^{*}, \ldots, \widetilde X_p^{*}]$ is a constrained solution group of the coupled operator matrix equations
$$\Big[\sum_{i=1}^{p} A_{1i}(\widetilde X_i),\ \sum_{i=1}^{p} A_{2i}(\widetilde X_i),\ \ldots,\ \sum_{i=1}^{p} A_{pi}(\widetilde X_i)\Big] = [\widetilde E_1, \widetilde E_2, \ldots, \widetilde E_p], \quad \widetilde X_i \in S,$$
for $i = 1, 2, \ldots, p$. Thus, problem 3.2 is equivalent to finding the least norm constrained solution group of the above coupled operator matrix equations.

Based on the analysis above, once the unique least norm solution group $[\widetilde X_1^{*}, \ldots, \widetilde X_p^{*}]$ of the above coupled operator matrix equations is obtained by applying algorithm 3.1, the unique solution group $[\bar X_1, \ldots, \bar X_p]$ of problem 3.2 is given by $\bar X_j = \widetilde X_j^{*} + \frac{\widehat X_j + U(\widehat X_j)}{2}$, $j = 1, 2, \ldots, p$.
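The orthogonal splitting behind (3.2.1) is easy to verify numerically. The following NumPy sketch (ours, for the symmetric constraint $U(X) = X^T$) checks that, for any symmetric $X$, the distance to a general $\widehat X$ decomposes into the distance to the symmetric part of $\widehat X$ plus the norm of its skew part:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5
sym = lambda Z: (Z + Z.T) / 2          # (Xhat + U(Xhat)) / 2 for U(X) = X^T
Xhat = rng.standard_normal((n, n))     # the given (unconstrained) target matrix
X = sym(rng.standard_normal((n, n)))   # any matrix in the constraint set S

lhs = np.linalg.norm(X - Xhat) ** 2
rhs = np.linalg.norm(X - sym(Xhat)) ** 2 + np.linalg.norm((Xhat - Xhat.T) / 2) ** 2
assert np.isclose(lhs, rhs)
```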

3.3 Numerical Example

In this section, we give three numerical examples to illustrate the efficiency of algorithm 3.1. All the tests are performed by MATLAB 7.0 with machine precision around $10^{-16}$.

Example 3.1. Find the least norm symmetric solution of the following matrix equations
$$\begin{cases} A_{11} X_1 B_{11} + A_{12} X_2 B_{12} = M_1, \\ A_{21} X_1 B_{21} + A_{22} X_2 B_{22} = M_2, \end{cases} \qquad (3.3.1)$$
where A11 = rand(5,5), B11 = rand(5,5), A12 = rand(5,5), B12 = rand(5,5), A21 = rand(5,5), B21 = rand(5,5), A22 = rand(5,5), B22 = rand(5,5). If we choose X = rand(5,5) and Y = randi([1,4],[5,5]), then we set
X1 = tril(X,-1) + triu(X',0), X2 = tril(Y,-1) + triu(Y',0),
M1 = A11*X1*B11 + A12*X2*B12, M2 = A21*X1*B21 + A22*X2*B22.
It can be verified that the generalized coupled matrix equations (3.3.1) are consistent and have a unique symmetric solution pair $(X_1^{*}, X_2^{*})$. In fact, this solution pair is also the unique minimal norm least squares solution of the matrix equations (3.3.1).

If we take the initial matrices $(X_1(1), X_2(1)) = (\mathrm{zeros}(5,5), \mathrm{zeros}(5,5))$, $(\mathrm{ones}(5,5), \mathrm{ones}(5,5))$ and $(I_5, I_5)$, respectively, then by applying algorithm 3.1 we obtain the symmetric solutions. The residual $r(k)$ and the relative error $e(k)$ can be found in figures 3.3.1 and 3.3.2, where
$$e_k = \frac{\|(X_1(k), X_2(k)) - (X_1^{*}, X_2^{*})\|}{\|(X_1^{*}, X_2^{*})\|}, \qquad r_k = \|R(k)\|.$$

FIG. 3.3.1 – The residual for example 3.1.

FIG. 3.3.2 – The relative error of solution for example 3.1.


Now let Y1 = rand(5,5) and Y2 = randi([2,3],[5,5]), and set
$\widehat X_1$ = tril(Y1,-1) + triu(Y1',0), $\widehat X_2$ = tril(Y2,-1) + triu(Y2',0).

With the initial matrix pairs $(X_1(1), X_2(1)) = (\mathrm{zeros}(5,5), \mathrm{zeros}(5,5))$, $(\mathrm{ones}(5,5), \mathrm{ones}(5,5))$ and $(I_5, I_5)$, algorithm 3.1 yields the least Frobenius norm solutions $(\widetilde X_1^{*}, \widetilde X_2^{*})$ of the generalized coupled Sylvester matrix equations (3.3.1). The obtained results are presented in figures 3.3.3 and 3.3.4. Therefore, the solutions of the nearness problem for the coupled matrix equations (3.3.1) are $\widetilde X_1 = \widetilde X_1^{*} + \widehat X_1$ and $\widetilde X_2 = \widetilde X_2^{*} + \widehat X_2$.

Remark 3. Figures 3.3.1 and 3.3.2 show that the residual and the relative error of the solutions of the coupled matrix equations (3.3.1) decrease as the number of iterations grows, for each of the initial matrix pairs $(\mathrm{zeros}(5,5), \mathrm{zeros}(5,5))$, $(\mathrm{ones}(5,5), \mathrm{ones}(5,5))$ and $(I_5, I_5)$. The iterative solution converges smoothly to the exact solution, and the presented algorithm converges for any initial matrices with random coefficient matrices. This demonstrates that the proposed iterative algorithm is efficient.

FIG. 3.3.3 – The residual of least norm solution for example 3.1.


FIG. 3.3.4 – The relative error of least norm solution for example 3.1.

Remark 4. Figures 3.3.3 and 3.3.4 show that the least norm solutions can be derived by using algorithm 3.1 for any given initial matrices. Hence, if a group of matrices is given, we can obtain the optimal approximation solutions. Therefore, the presented algorithm can solve the nearness problem of coupled matrix equations with random coefficient matrices.

Example 3.2. Find the least norm symmetric solution of the following matrix equations
$$\begin{cases} A_{11} X_1 B_{11} + C_{11} X_1^T D_{11} + A_{12} X_2 B_{12} + C_{12} X_2^T D_{12} = M_1, \\ A_{21} X_1 B_{21} + C_{21} X_1^T D_{21} + C_{22} X_2^T D_{22} = M_2, \end{cases} \qquad (3.3.2)$$
where A11 = tril(rand(40,40),1), C11 = rand(40,40), B11 = rand(40,40), D11 = rand(40,40), A12 = rand(40,40), C12 = rand(40,40), B12 = rand(40,40), D12 = rand(40,40), A21 = diag(1+diag(rand(40))), C21 = diag(0.5+diag(rand(40))), B21 = rand(40,40), D21 = rand(40,40), C22 = rand(40,40), D22 = rand(40,40). If we choose X = randi([1,3],[40,40]) and Y = randi([2,4],[40,40]), and set
X1 = tril(X,-1) + triu(X',0), X2 = tril(Y,-1) + triu(Y',0),
then M1 = A11*X1*B11 + C11*X1'*D11 + A12*X2*B12 + C12*X2'*D12 and M2 = A21*X1*B21 + C21*X1'*D21 + C22*X2'*D22. It can be verified that the generalized coupled matrix equations (3.3.2) are consistent and have a unique symmetric solution pair $(X_1^{*}, X_2^{*})$, which is also the unique minimal norm least squares solution of the matrix equations (3.3.2). If we take the initial matrices $(X_1(1), X_2(1)) = (I_{40}, I_{40})$, then by applying algorithm 3.1 we obtain the solutions; $r(k)$ and $e(k)$ can be found in


figure 3.3.5. Moreover, if we take the initial matrices $(X_1(1), X_2(1)) = (2 + 0.1 \cdot \mathrm{ones}(40,40),\ 2 + 0.1 \cdot \mathrm{ones}(40,40))$, then by applying algorithm 3.1 we can also obtain the solutions; $r(k)$ and $e(k)$ can be found in figure 3.3.6, where
$$e_k = \frac{\|(X_1(k), X_2(k)) - (X_1^{*}, X_2^{*})\|}{\|(X_1^{*}, X_2^{*})\|}, \qquad r_k = \|R(k)\|.$$

FIG. 3.3.5 – r(k) and e(k) for example 3.2: (X_1(1), X_2(1)) = (I_40, I_40).

FIG. 3.3.6 – r(k) and e(k) for example 3.2: (X_1(1), X_2(1)) = (2 + 0.1·ones(40,40), 2 + 0.1·ones(40,40)).


Now let Y1 = randi([0,1],[40,40]) and Y2 = randi([2,3],[40,40]), and set
$\widehat X_1$ = tril(Y1,-1) + triu(Y1',0), $\widehat X_2$ = tril(Y2,-1) + triu(Y2',0).
With the initial matrix pair $(X_1(1), X_2(1)) = (\mathrm{ones}(40,40), \mathrm{ones}(40,40))$, algorithm 3.1 yields the least Frobenius norm solutions $(\widetilde X_1^{*}, \widetilde X_2^{*})$ of the generalized coupled Sylvester matrix equations (3.3.2); the obtained results are presented in figure 3.3.7. Moreover, if we let Y1 = randi([0,1],[40,40]) and Y2 = randi([0,1],[40,40]) and define $\widehat X_1$, $\widehat X_2$ in the same way, then with the initial matrix pair $(X_1(1), X_2(1)) = (I_{40}, I_{40})$ we obtain the least Frobenius norm solutions $(\widetilde X_1^{*}, \widetilde X_2^{*})$ of the generalized coupled Sylvester matrix equations (3.3.2); the results are presented in figure 3.3.8. Therefore, the solutions of the nearness problem for the coupled matrix equations (3.3.2) are $\widetilde X_1 = \widetilde X_1^{*} + \widehat X_1$ and $\widetilde X_2 = \widetilde X_2^{*} + \widehat X_2$.

Remark 5. If we take a group of random coefficient matrices of larger dimension, figures 3.3.5 and 3.3.6 show that the presented algorithm is still convergent for any initial matrices. Figures 3.3.7 and 3.3.8 show that the least norm

FIG. 3.3.7 – r(k) and e(k) of least Frobenius norm solutions for example 3.2: (X_1(1), X_2(1)) = (ones(40,40), ones(40,40)).


FIG. 3.3.8 – r(k) and e(k) of least Frobenius norm solutions for example 3.2: (X_1(1), X_2(1)) = (I_40, I_40).

solutions can be obtained by using algorithm 3.1 for higher-dimension random coefficient matrices. Hence, if a group of higher-dimension matrices is given, the optimal approximation solutions can be derived. Therefore, the presented algorithm can solve the nearness problem of coupled matrix equations with higher-dimension random coefficient matrices. This further confirms the effectiveness of the proposed iterative algorithm.

Example 3.3. Consider the centrosymmetric solutions of the coupled Sylvester-transpose matrix equations
$$\begin{cases} A_{11} X B_{11} + C_{11} X^T D_{11} + A_{12} Y B_{12} + C_{12} Y^T D_{12} = E_1, \\ A_{21} X B_{21} + C_{21} X^T D_{21} + C_{22} Y^T D_{22} = E_2, \end{cases} \qquad (3.3.3)$$
with given $5 \times 5$ integer parametric matrices $A_{11}$, $B_{11}$, $C_{11}$, $D_{11}$, $A_{12}$, $B_{12}$, $C_{12}$, $D_{12}$, $A_{21}$, $B_{21}$, $C_{21}$, $D_{21}$, $C_{22}$, $D_{22}$ and right-hand sides $E_1$, $E_2$.


It can be verified that the generalized coupled matrix equations (3.3.3) are consistent over generalized centrosymmetric matrices and have a generalized centrosymmetric solution pair $(X^{*}, Y^{*})$, with $X^{*} \in CSR_{Q_1}^{5 \times 5}$ and $Y^{*} \in CSR_{Q_2}^{5 \times 5}$ for certain symmetric orthogonal matrices $Q_1, Q_2 \in SOR^{5 \times 5}$.

Taking the initial matrices $(X(1), Y(1)) = (\mathrm{ones}(5,5), \mathrm{ones}(5,5))$ and applying algorithm 3.1, we obtain after 50 iterations approximations $X(50)$ and $Y(50)$ whose entries agree with the exact solution pair $(X^{*}, Y^{*})$ to four decimal places, with corresponding residual $\|R(50)\| = 9.1981 \times 10^{-12}$. The obtained results are presented in figure 3.3.9, where
$$e_k = \frac{\|(X(k), Y(k)) - (X^{*}, Y^{*})\|}{\|(X^{*}, Y^{*})\|}, \qquad r_k = \|R(k)\|.$$


FIG. 3.3.9 – The relative error of solution and the residual for example 3.3.

Now let

0

2 B2 B b ¼ B0 X B @2 2

2 2 0 2 2

0 0 2 0 0

2 2 0 2 2

1 0 2 2 B0 2C C B b B 0C C; Y ¼ B 0 A @2 2 0 2

0 2 2 0 2

0 2 2 0 2

2 0 0 2 0

1 0 2C C 2C C: 0A 2

Choosing the initial generalized centrosymmetric matrix pair ðXð1Þ; Y ð1ÞÞ ¼ ðonesð5; 5Þ; onesð5; 5ÞÞ, by algorithm 3.1 we have the least Frobenius norm generalized centrosymmetric solutions of the generalized coupled Sylvester matrix equations (3.3.3) with the following form 1 1:0000 0:9999 0:0000 0:0000 1:0000 B 0:9999 2:9999 0:0000 0:0000 1:0000 C C B  C B X ¼ Xð50Þ ¼ B 0:0000 0:0000 1:0000 0:0000 0:0000 C; C B @ 0:0000 0:0000 0:0000 1:9999 1:0000 A 1:0000 1:0000 0:0000 1:0000 1:0000 1 0 0:9999 0:0000 0:0000 0:0000 0:0000 B 0:0000 2:9999 0:0000 0:0000 0:9999 C C B  C B Y ¼ Y ð50ÞB 0:0000 0:0000 0:9999 0:0000 1:0000 C; C B @ 0:0000 0:0000 0:0000 1:9999 0:0000 A 0:0000 0:9999 1:0000 0:0000 0:9999 0

52

Solutions to Linear Matrix Equations and Their Applications

with corresponding residual kRð50Þk ¼ 5:1001  1012 : The obtained results are presented in figure coupled Sylvester-transpose matrix equations 0 1:0000 1:0001 B 1:0001 0:9999 B e ¼ X þ X b ¼ B 0:0000 0:0000 X B @ 2:0000 2:0000 1:0000 1:0000 and

0

1:0001 0:0000 B 0:0000 0:9999 B e ¼ Y þ Y b ¼ B 0:0000 2:0000 Y B @ 2:0000 0:0000 0:0000 1:0001

3.3.10. Therefore, the solutions of the nearness problem are 1 0:0000 2:0000 1:0000 0:0000 2:0000 1:0001 C C 1:0000 0:0000 0:0000 C C; 0:0000 0:0001 1:0000 A 0:0000 1:0000 1:0000

0:0000 2:0000 1:0001 0:0000 1:0000

2:0000 0:0000 0:0000 0:0001 0:0000

1 0:0000 1:0001 C C 1:0000 C C: 0:0000 A 1:0001

Remark 6. From figures 3.3.1–3.3.10, it can be concluded that the presented algorithm 3.1 can compute many kinds of constrained solutions, such as the symmetric solution, the centrosymmetric solution, and the reflexive solution. It is also shown that algorithm 3.1 can compute solutions and least norm solutions of coupled matrix

FIG. 3.3.10 – The relative error of the least Frobenius norm generalized centro-symmetric solution and the residual for example 3.4.


equations with random coefficient matrices, higher-dimensional random coefficient matrices, and specific real coefficient matrices. Therefore, it can solve the nearness problem of coupled matrix equations, including coupled transpose matrix equations, general coupled matrix equations, and general matrix equations with unknowns X and Y. This further demonstrates the effectiveness of our proposed iterative algorithm.

3.4 Control Application

Example 3.4. Consider a linear system described by
$$\begin{cases}\dot x=Ax+Bu+Pw,\\ \dot w=Fw,\\ e=Cx+Qw,\end{cases}$$
where $A\in\mathbb R^{n\times n}$, $B\in\mathbb R^{n\times r}$, $F\in\mathbb R^{p\times p}$, $P\in\mathbb R^{n\times p}$, $C\in\mathbb R^{m\times n}$ and $Q\in\mathbb R^{m\times p}$ are constant matrices, and $x\in\mathbb R^{n}$, $u\in\mathbb R^{r}$ and $e\in\mathbb R^{m}$ are the state, the control input and the measurable error output, respectively. The symbol $w\in\mathbb R^{p}$ is the exogenous input, which includes "reference signals to be tracked" and/or "disturbances to be rejected". We assume that $(A,B)$ is controllable and that $F$ is critically stable. If the full-information feedback $u=-Kx+Lw$ is applied to the system, the closed-loop system becomes
$$\begin{cases}\dot x=(A-BK)x+(P+BL)w,\\ \dot w=Fw,\\ e=Cx+Qw.\end{cases}$$
The aim of the output regulation problem is to find two matrices $K$ and $L$ such that the matrix $A-BK$ is stable and $\lim_{t\to\infty}e(t)=\lim_{t\to\infty}(Cx(t)+Qw(t))=0$, where $(x(0),w(0))\in\mathbb R^{n}\times\mathbb R^{p}$ is arbitrary. It has been shown in [34] that such a problem is solvable if and only if there exist two matrices $X$ and $Y$ such that
$$AX-XF=BY+R,\qquad CX+Q=0.$$
By theorem 3.1, we can obtain the solution to the matrix equations above.
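For a concrete feel, the two regulator equations above can also be solved directly by vectorization, stacking $\mathrm{vec}(AX-XF-BY)=\mathrm{vec}(R)$ and $\mathrm{vec}(CX)=-\mathrm{vec}(Q)$ into one least squares system. The following NumPy sketch is only an illustration under made-up dimensions and random data — it is not the book's algorithm 3.1:

```python
import numpy as np

# Made-up sizes/data; we build a consistent problem from a known pair (X0, Y0).
rng = np.random.default_rng(0)
n, r, p, m = 4, 2, 3, 2
A = rng.standard_normal((n, n)); B = rng.standard_normal((n, r))
F = rng.standard_normal((p, p)); C = rng.standard_normal((m, n))
X0 = rng.standard_normal((n, p)); Y0 = rng.standard_normal((r, p))
R = A @ X0 - X0 @ F - B @ Y0   # makes AX - XF = BY + R consistent
Q = -C @ X0                    # makes CX + Q = 0 consistent

# vec(AX) = (I_p kron A) vec X and vec(XF) = (F^T kron I_n) vec X (column-major vec).
Ip, In = np.eye(p), np.eye(n)
top = np.hstack([np.kron(Ip, A) - np.kron(F.T, In), -np.kron(Ip, B)])
bot = np.hstack([np.kron(Ip, C), np.zeros((m * p, r * p))])
rhs = np.concatenate([R.ravel(order="F"), -Q.ravel(order="F")])
z, *_ = np.linalg.lstsq(np.vstack([top, bot]), rhs, rcond=None)
X = z[: n * p].reshape((n, p), order="F")
Y = z[n * p:].reshape((r, p), order="F")
```

The recovered pair $(X,Y)$ satisfies both equations up to roundoff; for large dimensions an iterative method such as algorithm 3.1 avoids forming the Kronecker system explicitly.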


3.5 Conclusions

In this chapter, in order to compute all kinds of constrained solutions of coupled matrix equations, we construct an iterative algorithm. The presented iterative method can obtain the solutions and least norm solutions of the coupled matrix equations with random coefficient matrices, higher-dimensional random coefficient matrices, and known specific real coefficient matrices. In addition, if a group of matrices is given, we can easily compute the associated optimal approximation solutions. Moreover, the numerical examples show that the solutions of the coupled matrix equations can be obtained for any initial matrices in the absence of roundoff errors. This fully demonstrates the effectiveness of the presented algorithm.

Chapter 4
MCGLS Iterative Algorithm to Linear Conjugate Matrix Equation

In this chapter, we study the generalized Sylvester-conjugate matrix equation
$$\sum_{i=1}^{p}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j=E,\qquad(4.0.1)$$

with known matrices $A_i\in\mathbb C^{p\times m}$, $B_i\in\mathbb C^{n\times q}$, $C_j\in\mathbb C^{p\times m}$, $D_j\in\mathbb C^{n\times q}$, $E\in\mathbb C^{p\times q}$ and unknown matrix $X\in\mathbb C^{m\times n}$, for $i=1,2,\dots,p$ and $j=1,2,\dots,t$. It is obvious that this class of matrix equations includes various linear matrix equations, such as the Lyapunov, Sylvester, and generalized Sylvester matrix equations, as special cases. We mainly consider the following two problems.

Problem 4.1. For given $A_i,C_j\in\mathbb C^{p\times m}$, $B_i,D_j\in\mathbb C^{n\times q}$, $E\in\mathbb C^{p\times q}$, $i=1,2,\dots,p$ and $j=1,2,\dots,t$, find $X^{*}\in\mathbb C^{m\times n}$ such that
$$\Bigl\|\sum_{i=1}^{p}A_iX^{*}B_i+\sum_{j=1}^{t}C_j\overline{X^{*}}D_j-E\Bigr\|=\min_{X\in\mathbb C^{m\times n}}\Bigl\|\sum_{i=1}^{p}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j-E\Bigr\|.\qquad(4.0.2)$$

Problem 4.2. Let $G_r$ denote the solution set of problem 4.1. For given $X_0\in\mathbb C^{m\times n}$, find the matrix $\check X\in G_r$ such that
$$\|\check X-X_0\|=\min_{X\in G_r}\|X-X_0\|.\qquad(4.0.3)$$

For two matrices $X$ and $Y$ in $\mathbb C^{m\times n}$, we define the real inner product $\langle X,Y\rangle=\mathrm{Re}[\mathrm{tr}(Y^HX)]$. The associated norm is the well-known Frobenius norm. For matrices $R$, $A$, $B$ and $X$ of appropriate dimensions, a well-known property of this inner product is $\langle R,AXB\rangle=\langle A^HRB^H,X\rangle$. Two matrices $X$ and $Y$ are said to be
DOI: 10.1051/978-2-7598-3102-9.c004 © Science Press, EDP Sciences, 2023


orthogonal if $\langle X,Y\rangle=0$. First of all, we introduce three lemmas that will be used in what follows. The proof of lemma 4.2 is similar to that of reference [98].

Lemma 4.1. [99] Let $U$ be an inner product space, $V$ a subspace of $U$, and $V^{\perp}$ the orthogonal complement subspace of $V$. For a given $u\in U$, if there exists $v_0\in V$ such that $\|u-v_0\|\le\|u-v\|$ holds for any $v\in V$, then $v_0$ is unique, and $v_0\in V$ is the unique minimization vector in $V$ if and only if $(u-v_0)\perp V$, i.e. $(u-v_0)\in V^{\perp}$.

Lemma 4.2. Assume that $\hat R$ is the residual of equation (4.0.1) corresponding to the matrix $\hat X\in\mathbb C^{m\times n}$, that is, $\hat R=E-\sum_{i=1}^{p}A_i\hat XB_i-\sum_{j=1}^{t}C_j\overline{\hat X}D_j$. Then the matrix $\hat X$ is the least squares solution of equation (4.0.1) if
$$\sum_{i=1}^{p}A_i^H\hat RB_i^H+\sum_{j=1}^{t}\overline{C_j^H\hat RD_j^H}=0.\qquad(4.0.4)$$

Proof. We first define the linear subspace $W=\bigl\{F\mid F=\sum_{i=1}^{p}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j\bigr\}$ with $A_i,C_j\in\mathbb C^{p\times m}$, $B_i,D_j\in\mathbb C^{n\times q}$, $F\in\mathbb C^{p\times q}$ and $X\in\mathbb C^{m\times n}$, $i=1,2,\dots,p$, $j=1,2,\dots,t$. Now if we let $\hat F=\sum_{i=1}^{p}A_i\hat XB_i+\sum_{j=1}^{t}C_j\overline{\hat X}D_j$, then $\hat F\in W$. Therefore, for any $F\in W$, we have
$$\begin{aligned}\langle E-\hat F,F\rangle&=\Bigl\langle E-\sum_{i=1}^{p}A_i\hat XB_i-\sum_{j=1}^{t}C_j\overline{\hat X}D_j,\ \sum_{i=1}^{p}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j\Bigr\rangle\\&=\Bigl\langle\hat R,\ \sum_{i=1}^{p}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j\Bigr\rangle\\&=\Bigl\langle\sum_{i=1}^{p}A_i^H\hat RB_i^H,\ X\Bigr\rangle+\Bigl\langle\sum_{j=1}^{t}\overline{C_j^H\hat RD_j^H},\ X\Bigr\rangle\\&=\Bigl\langle\sum_{i=1}^{p}A_i^H\hat RB_i^H+\sum_{j=1}^{t}\overline{C_j^H\hat RD_j^H},\ X\Bigr\rangle.\end{aligned}$$
Hence, if we let $\sum_{i=1}^{p}A_i^H\hat RB_i^H+\sum_{j=1}^{t}\overline{C_j^H\hat RD_j^H}=0$, the above equations show that $\langle E-\hat F,F\rangle=0$. It follows from lemma 4.1 that $(E-\hat F)\in W^{\perp}$, so the matrix $\hat X$ is the least squares solution of equation (4.0.1). □

Lemma 4.3. If $\hat X$ is a solution of problem 4.1, then any solution $\tilde X$ of problem 4.1 can be expressed as $\hat X+Z$, where the matrix $Z\in\mathbb C^{m\times n}$ satisfies

Ai ZB i þ

t X j¼1

C j Z Dj ¼ 0:

ð4:0:5Þ

MCGLS Iterative Algorithm to Linear Conjugate Matrix Equation


Proof. For any solution $\tilde X$ of problem 4.1, if we define the matrix $Z=\tilde X-\hat X$, then $\tilde X=\hat X+Z$. We now show that equation (4.0.5) holds. By applying lemma 4.2, we can obtain
$$\begin{aligned}\Bigl\|\sum_{i=1}^{p}A_i\hat XB_i+\sum_{j=1}^{t}C_j\overline{\hat X}D_j-E\Bigr\|^2&=\Bigl\|\sum_{i=1}^{p}A_i\tilde XB_i+\sum_{j=1}^{t}C_j\overline{\tilde X}D_j-E\Bigr\|^2\\&=\Bigl\|\sum_{i=1}^{p}A_i(\hat X+Z)B_i+\sum_{j=1}^{t}C_j\overline{(\hat X+Z)}D_j-E\Bigr\|^2\\&=\Bigl\|\sum_{i=1}^{p}A_iZB_i+\sum_{j=1}^{t}C_j\overline ZD_j-\hat R\Bigr\|^2\\&=\Bigl\|\sum_{i=1}^{p}A_iZB_i+\sum_{j=1}^{t}C_j\overline ZD_j\Bigr\|^2+\|\hat R\|^2-2\Bigl\langle\sum_{i=1}^{p}A_iZB_i+\sum_{j=1}^{t}C_j\overline ZD_j,\ \hat R\Bigr\rangle\\&=\Bigl\|\sum_{i=1}^{p}A_iZB_i+\sum_{j=1}^{t}C_j\overline ZD_j\Bigr\|^2+\|\hat R\|^2-2\Bigl\langle Z,\ \sum_{i=1}^{p}A_i^H\hat RB_i^H+\sum_{j=1}^{t}\overline{C_j^H\hat RD_j^H}\Bigr\rangle\\&=\Bigl\|\sum_{i=1}^{p}A_iZB_i+\sum_{j=1}^{t}C_j\overline ZD_j\Bigr\|^2+\|\hat R\|^2.\end{aligned}$$
This shows that equation (4.0.5) holds. □

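The adjoint identity that drives lemmas 4.2 and 4.3, $\langle R,AXB+C\overline XD\rangle=\langle A^HRB^H+\overline{C^HRD^H},X\rangle$, can be checked numerically. A minimal NumPy sketch with made-up random data (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, m, n = 3, 4, 2, 5  # arbitrary illustrative sizes
cx = lambda r_, c_: rng.standard_normal((r_, c_)) + 1j * rng.standard_normal((r_, c_))
A, B, C, D = cx(p, m), cx(n, q), cx(p, m), cx(n, q)
R, X = cx(p, q), cx(m, n)

# real inner product <U, V> = Re tr(V^H U); it is symmetric in U and V
inner = lambda U, V: float(np.real(np.trace(V.conj().T @ U)))

lhs = inner(R, A @ X @ B + C @ np.conj(X) @ D)
rhs = inner(A.conj().T @ R @ B.conj().T
            + np.conj(C.conj().T @ R @ D.conj().T), X)
gap = abs(lhs - rhs)
```

The two values agree to machine precision for any choice of matrices, which is exactly the fact used to move the residual onto the unknown in the proofs above.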
4.1 MCGLS Iterative Algorithm and Convergence Analysis

The CGLS method is a powerful method for solving the large sparse least squares problem $\min_{x\in\mathbb C^{n}}\|b-Ax\|$. In this section, we first propose an iterative algorithm by generalizing the CGLS method to solve problem 4.1 and then present some basic properties of the algorithm. We also consider finding the least Frobenius norm solution to problem 4.1.


Algorithm 4.1. Let $x(1)\in\mathbb R^{n}$ be an initial vector, and set $r(1)=b-Ax(1)$, $s(1)=A^{T}r(1)$, $p(1)=s(1)$, $c(1)=\|s(1)\|^2$. For $k=1,2,3,\dots$, repeat the following:
$$q(k)=Ap(k),\qquad d(k)=c(k)/\|q(k)\|^2,$$
$$x(k+1)=x(k)+d(k)p(k),\qquad r(k+1)=r(k)-d(k)q(k),$$
$$s(k+1)=A^{T}r(k+1),\qquad c(k+1)=\|s(k+1)\|^2,$$
$$\lambda(k)=c(k+1)/c(k),\qquad p(k+1)=s(k+1)+\lambda(k)p(k).$$

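The vector CGLS iteration above can be written in a few lines; the following Python/NumPy translation is an illustrative sketch (the book's own experiments use MATLAB) that follows the update formulas step by step:

```python
import numpy as np

def cgls(A, b, x=None, tol=1e-12, maxit=1000):
    """CGLS: conjugate gradient on the normal equations of min ||b - A x||."""
    x = np.zeros(A.shape[1]) if x is None else x.astype(float)
    r = b - A @ x                 # r(1) = b - A x(1)
    s = A.T @ r                   # s(1) = A^T r(1)
    p = s.copy()                  # p(1) = s(1)
    c = np.dot(s, s)              # c(1) = ||s(1)||^2
    for _ in range(maxit):
        if np.sqrt(c) < tol:      # ||s(k)|| = 0: x(k) solves the LS problem
            break
        q = A @ p                 # q(k) = A p(k)
        d = c / np.dot(q, q)      # d(k) = c(k)/||q(k)||^2
        x = x + d * p
        r = r - d * q
        s = A.T @ r
        c_new = np.dot(s, s)
        p = s + (c_new / c) * p   # p(k+1) = s(k+1) + lambda(k) p(k)
        c = c_new
    return x
```

In exact arithmetic the iterate reaches the least squares solution in at most $n$ steps, which mirrors the finite-termination results proved below for the matrix version.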
Based on the above CGLS method, we propose the following matrix form of the CGLS iterative algorithm (CGLS-M) to solve the least squares problem of the generalized Sylvester-conjugate matrix equation (4.0.1).

Algorithm 4.2.
Step 1. Input matrices $A_i\in\mathbb C^{p\times m}$, $B_i\in\mathbb C^{n\times q}$, $C_j\in\mathbb C^{p\times m}$, $D_j\in\mathbb C^{n\times q}$, $E\in\mathbb C^{p\times q}$ and $X(1)\in\mathbb C^{m\times n}$ for $i=1,2,\dots,p$, $j=1,2,\dots,t$;
Step 2. Compute
$$R(1)=E-\sum_{i=1}^{p}A_iX(1)B_i-\sum_{j=1}^{t}C_j\overline{X(1)}D_j,$$
$$S(1)=\sum_{i=1}^{p}A_i^HR(1)B_i^H+\sum_{j=1}^{t}\overline{C_j^HR(1)D_j^H},$$
$$P(1)=S(1),\qquad c(1)=\|S(1)\|^2;$$
For $k=1,2,3,\dots$ repeat the following:
Step 3. If $\|R(k)\|=0$, then stop: $X(k)$ is a solution of equation (4.0.1); else if $\|R(k)\|\ne0$ but $\|S(k)\|=0$, then stop: $X(k)$ is a solution of problem 4.1; else set $k:=k+1$;
Step 4. Compute
$$Q(k)=\sum_{i=1}^{p}A_iP(k)B_i+\sum_{j=1}^{t}C_j\overline{P(k)}D_j,$$
$$d(k)=c(k)/\|Q(k)\|^2,\qquad X(k+1)=X(k)+d(k)P(k),\qquad R(k+1)=R(k)-d(k)Q(k),$$
$$S(k+1)=\sum_{i=1}^{p}A_i^HR(k+1)B_i^H+\sum_{j=1}^{t}\overline{C_j^HR(k+1)D_j^H}=S(k)-d(k)\Bigl(\sum_{i=1}^{p}A_i^HQ(k)B_i^H+\sum_{j=1}^{t}\overline{C_j^HQ(k)D_j^H}\Bigr),$$
$$c(k+1)=\|S(k+1)\|^2,\qquad\lambda(k)=c(k+1)/c(k),\qquad P(k+1)=S(k+1)+\lambda(k)P(k);$$
Step 5. Go to step 3.

Some basic properties of algorithm 4.1 are listed in the following lemmas.

Lemma 4.4. For the sequences $S(k)$, $P(k)$ and $Q(k)$ generated by algorithm 4.1, if there exists a positive integer $r$ such that $\|S(u)\|\ne0$ and $\|Q(u)\|\ne0$ for all $u=1,2,\dots,r$, then the following statements hold for $u,w=1,2,\dots,r$ with $u\ne w$:
(1) $\langle S(u),S(w)\rangle=0$; (2) $\langle Q(u),Q(w)\rangle=0$; (3) $\langle P(u),S(w)\rangle=0$.

Proof. We prove the conclusion by induction. Since the real inner product is commutative, it suffices to prove the three statements for $1\le u<w\le r$.

For brevity, write
$$\mathcal A(X)=\sum_{i=1}^{p}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j,\qquad \mathcal A^{*}(R)=\sum_{i=1}^{p}A_i^HRB_i^H+\sum_{j=1}^{t}\overline{C_j^HRD_j^H},$$
so that $\langle R,\mathcal A(X)\rangle=\langle\mathcal A^{*}(R),X\rangle$, and, by algorithm 4.1, $Q(k)=\mathcal A(P(k))$, $S(k)=\mathcal A^{*}(R(k))$, $S(k+1)=S(k)-d(k)\mathcal A^{*}(Q(k))$ and $R(k)-R(k+1)=d(k)Q(k)$.

Step 1. For $u=1$, $w=2$, we have
$$\langle S(1),S(2)\rangle=\langle S(1),S(1)-d(1)\mathcal A^{*}(Q(1))\rangle=\|S(1)\|^2-d(1)\langle\mathcal A(S(1)),Q(1)\rangle=\|S(1)\|^2-d(1)\langle Q(1),Q(1)\rangle=0,$$
since $P(1)=S(1)$ and $d(1)=\|S(1)\|^2/\|Q(1)\|^2$. Similarly,
$$\begin{aligned}\langle Q(1),Q(2)\rangle&=\langle Q(1),\mathcal A(S(2)+\lambda(1)P(1))\rangle=\lambda(1)\|Q(1)\|^2+\langle Q(1),\mathcal A(S(2))\rangle\\&=\lambda(1)\|Q(1)\|^2+\frac{1}{d(1)}\langle\mathcal A^{*}(R(1)-R(2)),S(2)\rangle=\lambda(1)\|Q(1)\|^2+\frac{1}{d(1)}\langle S(1)-S(2),S(2)\rangle\\&=\lambda(1)\|Q(1)\|^2-\frac{\|S(2)\|^2}{d(1)}=0,\end{aligned}$$
because $\lambda(1)=\|S(2)\|^2/\|S(1)\|^2$ and $d(1)=\|S(1)\|^2/\|Q(1)\|^2$. Moreover, $\langle P(1),S(2)\rangle=\langle S(1),S(2)\rangle=0$.

Step 2. Assume that $\langle S(u),S(w)\rangle=0$, $\langle Q(u),Q(w)\rangle=0$ and $\langle P(u),S(w)\rangle=0$ hold for $u<w$. Then, for $u<w$, using $S(u)=P(u)-\lambda(u-1)P(u-1)$,
$$\langle S(u),S(w+1)\rangle=\langle S(u),S(w)\rangle-d(w)\langle\mathcal A(S(u)),Q(w)\rangle=-d(w)\langle Q(u)-\lambda(u-1)Q(u-1),Q(w)\rangle=0,$$
$$\langle P(u),S(w+1)\rangle=\langle P(u),S(w)\rangle-d(w)\langle\mathcal A(P(u)),Q(w)\rangle=-d(w)\langle Q(u),Q(w)\rangle=0,$$
$$\langle Q(u),Q(w+1)\rangle=\langle Q(u),\mathcal A(S(w+1))+\lambda(w)Q(w)\rangle=\frac{1}{d(u)}\langle\mathcal A^{*}(R(u)-R(u+1)),S(w+1)\rangle=\frac{1}{d(u)}\langle S(u)-S(u+1),S(w+1)\rangle=0.$$
For $u=w$,
$$\langle S(w),S(w+1)\rangle=\|S(w)\|^2-d(w)\langle\mathcal A(S(w)),Q(w)\rangle=\|S(w)\|^2-d(w)\langle Q(w)-\lambda(w-1)Q(w-1),Q(w)\rangle=\|S(w)\|^2-d(w)\|Q(w)\|^2=0,$$
$$\langle Q(w),Q(w+1)\rangle=\lambda(w)\|Q(w)\|^2+\frac{1}{d(w)}\langle S(w)-S(w+1),S(w+1)\rangle=\lambda(w)\|Q(w)\|^2-\frac{\|S(w+1)\|^2}{d(w)}=0,$$
and, expanding $P(w)=S(w)+\lambda(w-1)S(w-1)+\lambda(w-1)\lambda(w-2)S(w-2)+\cdots+\lambda(w-1)\cdots\lambda(1)S(1)$,
$$\langle P(w),S(w+1)\rangle=\langle P(w),S(w)\rangle-d(w)\langle\mathcal A(P(w)),Q(w)\rangle=\|S(w)\|^2-d(w)\|Q(w)\|^2=0.$$
By the principle of induction, the conclusion follows. □

Lemma 4.5. If there exists a positive integer $l$ such that $d(l)=0$ or $d(l)=\infty$ in algorithm 4.1, then $X(l)$ is a solution of problem 4.1.

Proof. If $d(l)=0$, then $\|S(l)\|^2=0$. If $d(l)=\infty$, then $\|Q(l)\|^2=0$, and it follows that
$$\begin{aligned}\|S(l)\|^2&=\langle S(l)+\lambda(l-1)S(l-1)+\lambda(l-1)\lambda(l-2)S(l-2)+\cdots+\lambda(l-1)\cdots\lambda(1)S(1),\ S(l)\rangle\\&=\langle P(l),S(l)\rangle=\Bigl\langle P(l),\ \sum_{i=1}^{p}A_i^HR(l)B_i^H+\sum_{j=1}^{t}\overline{C_j^HR(l)D_j^H}\Bigr\rangle\\&=\Bigl\langle\sum_{i=1}^{p}A_iP(l)B_i+\sum_{j=1}^{t}C_j\overline{P(l)}D_j,\ R(l)\Bigr\rangle=\langle Q(l),R(l)\rangle=0.\end{aligned}$$
Therefore, for $d(l)=0$ or $d(l)=\infty$ we have
$$S(l)=\sum_{i=1}^{p}A_i^HR(l)B_i^H+\sum_{j=1}^{t}\overline{C_j^HR(l)D_j^H}=0,$$
so by lemma 4.2 we conclude that $X(l)$ is a solution of problem 4.1. □

Theorem 4.1. If the matrix equation (4.0.1) is consistent, then, for any initial matrix $X(1)$, a solution $X^{*}$ of problem 4.1 can be obtained by algorithm 4.1 within a finite number of iterations in the absence of roundoff errors.

Proof. By lemma 4.4, the set $S(i)$, $i=1,2,3,\dots,mn$, is an orthogonal basis of the real inner product space $\mathbb C^{m\times n}$ with dimension $mn$. Therefore $\|S(mn+1)\|=0$, which shows that $X(mn+1)$ is a solution of problem 4.1 in the absence of roundoff errors. □

Theorem 4.2. Suppose that the initial matrix is $X(1)=\sum_{i=1}^{p}A_i^HF(1)B_i^H+\sum_{j=1}^{t}\overline{C_j^HF(1)D_j^H}$, where $F(1)\in\mathbb C^{m\times n}$ is an arbitrary matrix (in particular $X(1)=0$); then the solution $X^{*}$ generated by algorithm 4.1 is the least norm solution of problem 4.1.


Proof. Let the initial matrix be $X(1)=\sum_{i=1}^{p}A_i^HF(1)B_i^H+\sum_{j=1}^{t}\overline{C_j^HF(1)D_j^H}$. It follows from algorithm 4.1 that the generated matrix $X(k)$ can be expressed as $X(k)=\sum_{i=1}^{p}A_i^HF(k)B_i^H+\sum_{j=1}^{t}\overline{C_j^HF(k)D_j^H}$ for certain matrices $F(k)\in\mathbb C^{m\times n}$, $k=2,3,\dots$. This implies that there exists a matrix $F^{*}\in\mathbb C^{m\times n}$ such that $X^{*}=\sum_{i=1}^{p}A_i^HF^{*}B_i^H+\sum_{j=1}^{t}\overline{C_j^HF^{*}D_j^H}$. Now let $\tilde X^{*}$ be an arbitrary solution of problem 4.1. By lemma 4.3, there exists a matrix $Z^{*}\in\mathbb C^{m\times n}$ such that $\tilde X^{*}=X^{*}+Z^{*}$ and $\sum_{i=1}^{p}A_iZ^{*}B_i+\sum_{j=1}^{t}C_j\overline{Z^{*}}D_j=0$. Thus, one can obtain
$$\langle X^{*},Z^{*}\rangle=\Bigl\langle\sum_{i=1}^{p}A_i^HF^{*}B_i^H+\sum_{j=1}^{t}\overline{C_j^HF^{*}D_j^H},\ Z^{*}\Bigr\rangle=\Bigl\langle F^{*},\ \sum_{i=1}^{p}A_iZ^{*}B_i+\sum_{j=1}^{t}C_j\overline{Z^{*}}D_j\Bigr\rangle=0.$$
By applying this relation, one can obtain
$$\|\tilde X^{*}\|^2=\|X^{*}+Z^{*}\|^2=\|X^{*}\|^2+\|Z^{*}\|^2+2\langle X^{*},Z^{*}\rangle=\|X^{*}\|^2+\|Z^{*}\|^2\ge\|X^{*}\|^2.$$
This shows that the solution $X^{*}$ is the least Frobenius norm solution of problem 4.1. □

Similar to [98], the minimization property of the proposed algorithm is stated as follows; this property shows that algorithm 4.1 converges smoothly.

Theorem 4.3. For any initial matrix $X(1)\in\mathbb C^{m\times n}$,
$$\Bigl\|\sum_{i=1}^{p}A_iX(k+1)B_i+\sum_{j=1}^{t}C_j\overline{X(k+1)}D_j-E\Bigr\|^2=\min_{X\in\psi_k}\Bigl\|\sum_{i=1}^{p}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j-E\Bigr\|^2,$$
where $X(k+1)$ is generated by algorithm 4.1 at the $(k+1)$-th iteration and $\psi_k$ is the affine subspace
$$\psi_k=X(1)+\mathrm{span}(P(1),P(2),\dots,P(k)).$$

Proof. For any matrix $X\in\psi_k$, there exist numbers $\alpha_1,\alpha_2,\dots,\alpha_k$ such that

$$X=X(1)+\sum_{l=1}^{k}\alpha_lP(l).$$

Now we define the continuous and differentiable function $f$ of the variables $\alpha_1,\alpha_2,\dots,\alpha_k$ as


$$f(\alpha_1,\alpha_2,\dots,\alpha_k)=\Bigl\|\sum_{i=1}^{p}A_i\Bigl(X(1)+\sum_{l=1}^{k}\alpha_lP(l)\Bigr)B_i+\sum_{j=1}^{t}C_j\overline{\Bigl(X(1)+\sum_{l=1}^{k}\alpha_lP(l)\Bigr)}D_j-E\Bigr\|^2.$$
By lemma 4.4 one can obtain
$$f(\alpha_1,\alpha_2,\dots,\alpha_k)=\Bigl\|\sum_{i=1}^{p}A_iX(1)B_i+\sum_{j=1}^{t}C_j\overline{X(1)}D_j-E+\sum_{l=1}^{k}\alpha_l\Bigl[\sum_{i=1}^{p}A_iP(l)B_i+\sum_{j=1}^{t}C_j\overline{P(l)}D_j\Bigr]\Bigr\|^2=\|R(1)\|^2+\sum_{l=1}^{k}\bigl(\alpha_l^2\|Q(l)\|^2-2\alpha_l\langle Q(l),R(1)\rangle\bigr).$$
Now we consider the problem of minimizing the function $f(\alpha_1,\alpha_2,\dots,\alpha_k)$. It is obvious that
$$\min_{\alpha_1,\dots,\alpha_k}f(\alpha_1,\alpha_2,\dots,\alpha_k)=\min_{X\in\psi_k}\Bigl\|\sum_{i=1}^{p}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j-E\Bigr\|^2.$$
For this function, the minimum occurs when
$$\frac{\partial f}{\partial\alpha_l}=0\quad\text{for }l=1,2,\dots,k,$$
which gives
$$\alpha_l=\frac{\langle Q(l),R(1)\rangle}{\|Q(l)\|^2}.$$
From algorithm 4.1, one can obtain
$$R(1)=R(l)+d(l-1)Q(l-1)+d(l-2)Q(l-2)+\cdots+d(1)Q(1).$$
Therefore, it follows from lemma 4.4 that
$$\alpha_l=\frac{\langle Q(l),R(l)\rangle}{\|Q(l)\|^2}=\frac{\bigl\langle P(l),\ \sum_{i=1}^{p}A_i^HR(l)B_i^H+\sum_{j=1}^{t}\overline{C_j^HR(l)D_j^H}\bigr\rangle}{\|Q(l)\|^2}=\frac{\langle P(l),S(l)\rangle}{\|Q(l)\|^2}=\frac{\langle S(l)+\lambda(l-1)P(l-1),S(l)\rangle}{\|Q(l)\|^2}=\frac{\|S(l)\|^2}{\|Q(l)\|^2}=d(l).$$
Thus, we complete the proof. □




By theorem 4.3, the solution generated by algorithm 4.1 at the $(k+1)$-th iteration minimizes the residual norm in the affine subspace $\psi_k$ for any initial matrix. One also has
$$\Bigl\|\sum_{i=1}^{p}A_iX(k+1)B_i+\sum_{j=1}^{t}C_j\overline{X(k+1)}D_j-E\Bigr\|^2\le\Bigl\|\sum_{i=1}^{p}A_iX(k)B_i+\sum_{j=1}^{t}C_j\overline{X(k)}D_j-E\Bigr\|^2,$$
which shows that the sequence of residual norms $\|R(1)\|,\|R(2)\|,\dots$ is monotonically decreasing. This descent property shows that algorithm 4.1 possesses fast and smooth convergence.

Now we turn to problem 4.2. For a given matrix $X_0$, by problem 4.1 we get
$$\min_{X\in\mathbb C^{m\times n}}\Bigl\|\sum_{i=1}^{p}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j-E\Bigr\|=\min_{X\in\mathbb C^{m\times n}}\Bigl\|\sum_{i=1}^{p}A_i(X-X_0)B_i+\sum_{j=1}^{t}C_j\overline{(X-X_0)}D_j-\Bigl(E-\sum_{i=1}^{p}A_iX_0B_i-\sum_{j=1}^{t}C_j\overline{X_0}D_j\Bigr)\Bigr\|.$$
If we let $E_1=E-\sum_{i=1}^{p}A_iX_0B_i-\sum_{j=1}^{t}C_j\overline{X_0}D_j$ and $X_1=X-X_0$, then problem 4.2 is equivalent to finding the least Frobenius norm solution $X_1^{*}$ of
$$\min_{X_1\in\mathbb C^{m\times n}}\Bigl\|\sum_{i=1}^{p}A_iX_1B_i+\sum_{j=1}^{t}C_j\overline{X_1}D_j-E_1\Bigr\|,$$
which can be computed by algorithm 4.1 with the initial matrix $X_1(1)=\sum_{i=1}^{p}A_i^HFB_i^H+\sum_{j=1}^{t}\overline{C_j^HFD_j^H}$, where $F\in\mathbb C^{m\times n}$ is an arbitrary matrix (in particular $X_1(1)=0$). The solution of problem 4.2, satisfying $\|\check X-X_0\|=\min_{X\in G_r}\|X-X_0\|$, is then
$$\check X=X_1^{*}+X_0.$$
In other words, the solution of problem 4.2 is obtained from the solution of problem 4.1.

4.2 Numerical Example

In this section, we give some numerical examples to illustrate the efficiency of algorithm 4.1. All the tests are performed in MATLAB with machine precision around $10^{-16}$.


Example 4.1. Find the least Frobenius norm solution of the following generalized Sylvester-conjugate matrix equation
$$AXB+C\overline XD=M,\qquad(4.2.1)$$
where

0

1þi B 2þi A¼B @ 5 þ 6i 1 0

2 B 4 þ 6i B B¼@ 0 2i 0

i B 11 C ¼B @ 18 9 þ 8i 0

9 2 þ 3i 11  9i 11

4660  5011i B 14900  1787i B M ¼@ 12135  12509i 10031 þ 8092i

15 12 21 i

1i 0 2 þ 3i 12

ð4:2:1Þ

6i 12 11  2i 0

1 4 þ 3i 10 C C; i A 9i

4 þ 6i 12 15 12

1 9 þ 8i 9 C C; 18 A 11

3  2i 11 12 9i

1 0 2i 9 þ 2i B 4i 11 C C; D ¼ B @ 23 i A 8i 23

6322  1536i 13144  1058i 12910  5709i 12415 þ 3909i

i 19 26 þ i 0

2161 þ 1789i 5872 þ 5983i 4671 þ 6089i 3793 þ 9126i

i 0 9i 6

1 6 þ 8i 11 C C; 9 A 8i

1 6035 þ 1940i 12675 þ 8798i C C: 11126 þ 4083i A 5328 þ 12279i

It can be verified that the generalized Sylvester-conjugate matrix equation (4.2.1) is consistent and has the solution
$$X=\begin{pmatrix}4+3i&2+i&11&6\\9+6i&11+2i&8+9i&9\\9i&4&2i&11i\\23&4&7&12\end{pmatrix}.$$
If we let the initial matrix $X(1)=0$ and apply algorithm 4.1, we obtain the solution with corresponding residual $r(126)=9.8915\times10^{-16}$ and relative error $e(126)=1.3293\times10^{-14}$. The obtained results are presented in figures 4.2.1 and 4.2.2, where
$$r_k=\log_{10}\|M-AX(k)B-C\overline{X(k)}D\|$$


and
$$\mathrm{err}(k)=\frac{\|X(k)-X\|}{\|X\|}.$$

If we let the initial matrices be Xð1Þ ¼ 0; Xð1Þ ¼ I 4 ; Xð1Þ ¼ onesð4; 4Þ; respectively, applying algorithm 4.1, the residual and the relative error of the solution are presented in figures 4.2.3 and 4.2.4.

FIG. 4.2.1 – The residual of solution for example 4.1.

FIG. 4.2.2 – The relative error of solution for example 4.1.


FIG. 4.2.3 – The residual of solution for different initial values for example 4.1.

FIG. 4.2.4 – The relative error of solution for different initial values for example 4.1.

Remark 1. From figures 4.2.1 and 4.2.2, the residual and the relative error of the solution to matrix equation (4.2.1) decrease as the iteration number increases, which shows that the iterative solution converges to the exact solution. Moreover, as the iteration number increases further, the residual and the relative error of the LSQR-M iterative algorithm no longer vary. Finally, the residual and the relative error of the proposed iterative algorithm are smaller than those of the LSQR-M

Solutions to Linear Matrix Equations and Their Applications

70

iterative algorithm. This demonstrates that our proposed method (the MCGLS, or CGLS-M, iterative algorithm) is efficient.

Remark 2. From figures 4.2.3 and 4.2.4, for the different initial values $X(1)=0$, $X(1)=I_4$ and $X(1)=\mathrm{ones}(4,4)$, the iterative solutions of matrix equation (4.2.1) all converge to its exact solution.

Example 4.2. Find the least Frobenius norm solution of the following matrix equation
$$AX+\overline XD=M,\qquad(4.2.2)$$
where

0

9 þ 2i B 4i B B 23 D¼B B 23 B @ 112 11 0

1 þ 2i B 2 þ 11i B B 5 þ 6i A¼B B 1 B @ 23 34 0

i 19 26 þ i 0 123 þ i 2 11  3i 0 2 þ 3i 12 43 89

1349 þ 1310i B 2951 þ 2863i B B 574 þ 3690i M ¼B B 802 þ 1004i B @ 14159 þ 2417i 24695  1540i

i 0 9i 6 119 3i

6 þ 8i 11 9 8i 21 4 þ 28i

12 þ 6i 12 11  2i 0 21 45i

829  172i 4222 þ 22i 4159  959i 1885  105i 16561 þ 1277i 29322 þ 289i

ð4:2:2Þ 1 23 3 þ 4i C C 23 C C; 18 þ 9i C C 3i A 788

12 3i 12 2 29 þ 9i 89 þ 9i

4 þ 23i 10 i 9i 99i 2 þ 3i

11 34 þ 5i 34 13 þ 45i 45i 4 þ 7i

604 þ 229i 4948 þ 311i 6374  1083i 2110 þ 827i 16974 þ 3140i 25160 þ 2003i

8166 þ 846i 7743  2175i 14398  1699i 10150 þ 6537i 6763 þ 9222i 12010  18641i

1 2 11 C C 45 C C; 3 C C 4 þ 9i A 4 1547 þ 2029i 5997 þ 782i 5778 þ 166i 2515 þ 7660i 2939 þ 6494i 14052  275i

1 53917 þ 3054i 2060  23329i C C 10406  3223i C C: 62506 þ 2197i C C 11737 þ 10689i A 4965  182152i

It can be verified that the generalized Sylvester-conjugate matrix equation (4.2.2) is consistent and has the solution

MCGLS Iterative Algorithm to Linear Conjugate Matrix Equation 0

4 þ 3i B 9 þ 6i B B 9i X ¼B B 23 B @ 2 þ 78i 56i

2þi 11 þ 2i 4 4 2 þ 9i 79

11 8 þ 9i 2i 7 17 118

6 9 11i 12i 119 19

71

1 67 34i C C 12 þ 18i C C: 78 C C 12 A 237i

3i 23 3 þ 10i 12 þ 3i 127 191

If we let the initial matrix Xð1Þ ¼ 0, applying algorithm 4.1, we obtain the solution, which is 0

4:0000 þ 3:0000i B 8:9999 þ 6:0000i B B 8:9999i Xð587Þ ¼ B B 22:9999 B @ 2:0000 þ 77:9999i 56:0000i 6:0000 9:0000 11:0000i 12:0000i 118:9999 19:0000

2:0000 þ 0:99999i 10:9999 þ 2:0000i 3:9999 3:9999 2:0000 þ 8:9999i 79:0000

2:9999i 23:0000 3:0000 þ 1:0000i 12:0000 þ 3:0000i 126:9999 191:0000

11:0000 7:9999 þ 9:0000i 1:9999i 6:9999 17:0000 117:9999

1 67:0000 C 33:9999i C 12:0000 þ 18:0000i C C; C 78:0000 C A 11:9999 237:0000

with corresponding residual $r(587)=6.9270\times10^{-16}$ and relative error $e(587)=7.9381\times10^{-14}$. The obtained results are presented in figures 4.2.5 and 4.2.6, where
$$r_k=\log_{10}\|M-AX(k)-\overline{X(k)}D\|\qquad\text{and}\qquad\mathrm{err}(k)=\frac{\|X(k)-X\|}{\|X\|}.$$

If we let the initial matrices be Xð1Þ ¼ 0; Xð1Þ ¼ I 4 ; Xð1Þ ¼ onesð4; 4Þ; respectively, applying algorithm 4.1, the residual and the relative error of solution are presented in figures 4.2.7 and 4.2.8.


FIG. 4.2.5 – The residual of solution for example 4.2.

FIG. 4.2.6 – The relative error of solution for example 4.2. Remark 3. Figures 4.2.5 and 4.2.6, showed that the iterative solution converges to the exact solution. Moreover, with the addition of the iterative number, the residual and the relative error of the iterative algorithm are smaller than those of the LSQR-M iterative algorithm. It is proved that our proposed method (CGLS-M iterative algorithm) is efficient and is a good algorithm. From figures 4.2.7 and 4.2.8, for the different initial value Xð1Þ ¼ 0; Xð1Þ ¼ I 4 ; Xð1Þ ¼ onesð4; 4Þ; respectively, the iterative solution to the matrix equation (4.2.2) all converges to its exact solution.


FIG. 4.2.7 – The residual of solution for different initial values for example 4.2.

FIG. 4.2.8 – The relative error of solution for different initial values for example 4.2.

4.3 Control Application

In linear systems, the problem of eigenstructure assignment is to find a control law such that the system matrix of the closed-loop system has the desired eigenvalues and multiplicities, and simultaneously to determine the corresponding eigenvectors and generalized eigenvectors.


Now we consider the following discrete-time anti-linear system
$$x(t+1)=A\overline{x(t)}+Bu(t),\qquad(4.3.1)$$
where $x(t)\in\mathbb C^{n}$ is the state and $u(t)\in\mathbb C^{r}$ is the input, and $A\in\mathbb C^{n\times n}$ and $B\in\mathbb C^{n\times r}$ are the system matrix and the input matrix, respectively. If the state feedback
$$u(t)=K\overline{x(t)},\qquad(4.3.2)$$
is applied to the system (4.3.1), the closed-loop system is obtained as
$$x(t+1)=(A+BK)\overline{x(t)}.\qquad(4.3.3)$$
Given a prescribed matrix $F\in\mathbb C^{n\times n}$, find a state feedback controller of the form (4.3.2) such that the system matrix $(A+BK)$ of the closed-loop system (4.3.3) is consimilar to the matrix $F$, and simultaneously determine a nonsingular matrix $X\in\mathbb C^{n\times n}$ satisfying
$$X^{-1}(A+BK)\overline X=F.\qquad(4.3.4)$$
Due to the non-singularity of $X$, equation (4.3.4) can be equivalently written as
$$A\overline X+BK\overline X=XF.\qquad(4.3.5)$$
Let $K\overline X=W$; then equation (4.3.5) can be transformed into the equation
$$A\overline X+BW=XF,\qquad(4.3.6)$$
which is the so-called Sylvester-conjugate matrix equation. By algorithm 4.1, we can obtain the solution of equation (4.3.6).

4.4 Conclusions

In this chapter, the MCGLS iterative algorithm is constructed to compute the least Frobenius norm solution of the generalized Sylvester-conjugate matrix equation (4.0.1). By this algorithm, for any initial matrix $X(1)$, a solution $X^{*}$ can be obtained in finitely many iteration steps in the absence of roundoff errors, and the least Frobenius norm solution can be obtained by choosing a special initial matrix. In addition, by use of this iterative method, the optimal approximation solution $\check X$ to a given matrix $X_0$ can be derived by first finding the least Frobenius norm solution of a new corresponding matrix equation. The numerical examples verify the obtained theory and show that the proposed iterative algorithm is more efficient than the existing LSQR iterative algorithm [90,93,94].

Chapter 5
MCGLS Iterative Algorithm to Linear Conjugate Transpose Matrix Equation

The aim of this chapter is to present an iterative algorithm for solving the generalized Sylvester conjugate transpose matrix equation. Many problems in mathematics, physics, and engineering lead to linear matrix equations. For example, a digital filter can be characterized by the state variable equations
$$x(n+1)=Ax(n)+bu(n),\qquad(5.0.1)$$
and
$$y(n)=cx(n)+du(n).\qquad(5.0.2)$$
The solutions $K$ and $W$ of the Lyapunov matrix equations
$$K=AKA^{T}+bb^{T},\qquad(5.0.3)$$
and
$$W=A^{T}WA+c^{T}c,\qquad(5.0.4)$$
can be used to analyze the quantization noise generated by a digital filter [3]. It can be shown that certain control problems, such as pole/eigenstructure assignment and observer design for (5.0.1), are closely related to the following generalized Sylvester matrix equation
$$\sum_{k=1}^{p}A_kXB_k+\sum_{j=1}^{q}C_jYD_j=E.\qquad(5.0.5)$$

The generalized Sylvester conjugate transpose matrix equation
$$\sum_{i=1}^{r}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j+\sum_{s=1}^{k}G_sX^{T}H_s+\sum_{l=1}^{u}M_lX^{H}N_l=E,\qquad(5.0.6)$$

DOI: 10.1051/978-2-7598-3102-9.c005 © Science Press, EDP Sciences, 2023


with known matrices $A_i\in\mathbb C^{p\times m}$, $B_i\in\mathbb C^{n\times q}$, $C_j\in\mathbb C^{p\times m}$, $D_j\in\mathbb C^{n\times q}$, $G_s\in\mathbb C^{p\times n}$, $H_s\in\mathbb C^{m\times q}$, $M_l\in\mathbb C^{p\times n}$, $N_l\in\mathbb C^{m\times q}$, $E\in\mathbb C^{p\times q}$ and unknown matrix $X\in\mathbb C^{m\times n}$, for $i=1,\dots,r$, $j=1,\dots,t$, $s=1,\dots,k$ and $l=1,\dots,u$, is discussed in this chapter; it is a generalization of the matrix equation (5.0.5). Obviously, this class of matrix equations includes various linear matrix equations, such as the Lyapunov, Sylvester, and generalized Sylvester matrix equations, as special cases. This chapter finds the least squares solution of the generalized Sylvester-conjugate transpose matrix equation (5.0.6) by generalizing the CGLS iterative algorithm. In detail, we mainly consider the following two problems.

Problem 5.1. For given matrices as above, find $X^{*}\in\mathbb C^{m\times n}$ such that
$$\Bigl\|\sum_{i=1}^{r}A_iX^{*}B_i+\sum_{j=1}^{t}C_j\overline{X^{*}}D_j+\sum_{s=1}^{k}G_s(X^{*})^{T}H_s+\sum_{l=1}^{u}M_l(X^{*})^{H}N_l-E\Bigr\|=\min_{X\in\mathbb C^{m\times n}}\Bigl\|\sum_{i=1}^{r}A_iXB_i+\sum_{j=1}^{t}C_j\overline XD_j+\sum_{s=1}^{k}G_sX^{T}H_s+\sum_{l=1}^{u}M_lX^{H}N_l-E\Bigr\|.\qquad(5.0.7)$$

Problem 5.2. Let $G_r$ denote the solution set of problem 5.1. For given $X_0\in\mathbb C^{m\times n}$, find the matrix $\check X\in G_r$ such that
$$\|\check X-X_0\|=\min_{X\in G_r}\|X-X_0\|.\qquad(5.0.8)$$

First, let $I_n$ and $S_n$ denote the $n\times n$ identity matrix and reverse identity matrix, respectively. For two matrices $X$ and $Y$ in $\mathbb C^{m\times n}$, we define the real inner product $\langle X,Y\rangle=\mathrm{Re}[\mathrm{tr}(Y^HX)]$; the associated norm is the well-known Frobenius norm. For matrices $R$, $A$, $B$ and $X$ of appropriate dimensions, a well-known property of this inner product is $\langle R,AXB\rangle=\langle A^HRB^H,X\rangle$. Two matrices $X$ and $Y$ are said to be orthogonal if $\langle X,Y\rangle=0$. First of all, we present two lemmas; the proof of lemma 5.2 follows the line of reference [98].

Lemma 5.1. [99] Let $U$ be an inner product space, $V$ a subspace of $U$, and $V^{\perp}$ the orthogonal complement subspace of $V$. Given $w\in U$, if there exists $v_0\in V$ such that $\|w-v_0\|\le\|w-v\|$ holds for any $v\in V$, then $v_0$ is unique, and $v_0\in V$ is the unique minimization vector in $V$ if and only if $(w-v_0)\perp V$, i.e. $(w-v_0)\in V^{\perp}$.

Lemma 5.2. Assume that $\hat R$ is the residual of equation (5.0.6) corresponding to the matrix $\hat X\in\mathbb C^{m\times n}$, that is, $\hat R=E-\sum_{i=1}^{r}A_i\hat XB_i-\sum_{j=1}^{t}C_j\overline{\hat X}D_j-\sum_{s=1}^{k}G_s\hat X^{T}H_s-\sum_{l=1}^{u}M_l\hat X^{H}N_l$. Then the matrix $\hat X$ is the least squares solution of equation (5.0.6) if

Firstly, let $I_n$ and $S_n$ denote the $n \times n$ unit matrix and reverse unit matrix, respectively. For two matrices $X$ and $Y$ in $\mathbb{C}^{m\times n}$, we define the real inner product as $\langle X, Y\rangle = \mathrm{Re}[\mathrm{tr}(Y^H X)]$; the associated norm is the well-known Frobenius norm. For matrices $R$, $A$, $B$ and $X$ of appropriate dimensions, a well-known property of this inner product is $\langle R, AXB\rangle = \langle A^H R B^H, X\rangle$; analogously, $\langle R, C\overline{X}D\rangle = \langle C^T \overline{R}\, D^T, X\rangle$, $\langle R, G X^T H\rangle = \langle \overline{H} R^T \overline{G}, X\rangle$ and $\langle R, M X^H N\rangle = \langle N R^H M, X\rangle$. Two matrices $X$ and $Y$ are said to be orthogonal if $\langle X, Y\rangle = 0$.

First of all, we present two lemmas; the proof of lemma 5.2 follows the same lines as in reference [98].

Lemma 5.1. [99] Let $U$ be an inner product space, $V$ a subspace of $U$, and $V^{\perp}$ the orthogonal complement subspace of $V$. Given $w \in U$, a vector $v_0 \in V$ satisfies $\|w - v_0\| \le \|w - v\|$ for all $v \in V$ if and only if $(w - v_0) \perp V$, i.e. $(w - v_0) \in V^{\perp}$; in that case $v_0$ is the unique minimizing vector in $V$.

Lemma 5.2. Assume that $\widehat{R}$ is the residual of equation (5.0.6) corresponding to the matrix $\widehat{X} \in \mathbb{C}^{m\times n}$, that is,
\[
\widehat{R} = E - \sum_{i=1}^{r} A_i \widehat{X} B_i - \sum_{j=1}^{t} C_j \overline{\widehat{X}} D_j - \sum_{s=1}^{k} G_s \widehat{X}^T H_s - \sum_{l=1}^{u} M_l \widehat{X}^H N_l.
\]
Then the matrix $\widehat{X}$ is a least squares solution of equation (5.0.6) if
\[
\sum_{i=1}^{r} A_i^H \widehat{R} B_i^H + \sum_{j=1}^{t} C_j^T \overline{\widehat{R}}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s \widehat{R}^T \overline{G}_s + \sum_{l=1}^{u} N_l \widehat{R}^H M_l = 0. \qquad (5.0.9)
\]

Proof. We first define the linear subspace
\[
W = \Bigl\{ F \,\Big|\, F = \sum_{i=1}^{r} A_i X B_i + \sum_{j=1}^{t} C_j \overline{X} D_j + \sum_{s=1}^{k} G_s X^T H_s + \sum_{l=1}^{u} M_l X^H N_l,\ X \in \mathbb{C}^{m\times n} \Bigr\} \subseteq \mathbb{C}^{p\times q}.
\]
Next, if we let
\[
\widehat{F} = \sum_{i=1}^{r} A_i \widehat{X} B_i + \sum_{j=1}^{t} C_j \overline{\widehat{X}} D_j + \sum_{s=1}^{k} G_s \widehat{X}^T H_s + \sum_{l=1}^{u} M_l \widehat{X}^H N_l,
\]
then $\widehat{F} \in W$ holds. Therefore, for any $F \in W$, generated by some $X \in \mathbb{C}^{m\times n}$, the properties of the inner product give
\[
\langle E - \widehat{F}, F \rangle
= \Bigl\langle \widehat{R},\ \sum_{i=1}^{r} A_i X B_i + \sum_{j=1}^{t} C_j \overline{X} D_j + \sum_{s=1}^{k} G_s X^T H_s + \sum_{l=1}^{u} M_l X^H N_l \Bigr\rangle
= \Bigl\langle \sum_{i=1}^{r} A_i^H \widehat{R} B_i^H + \sum_{j=1}^{t} C_j^T \overline{\widehat{R}}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s \widehat{R}^T \overline{G}_s + \sum_{l=1}^{u} N_l \widehat{R}^H M_l,\ X \Bigr\rangle.
\]
In this case, if we suppose that (5.0.9) holds, then by the above equations it is apparent that $\langle E - \widehat{F}, F\rangle = 0$ for every $F \in W$. By lemma 5.1, we have $(E - \widehat{F}) \in W^{\perp}$. Thus, the matrix $\widehat{X}$ is a least squares solution of equation (5.0.6). □
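The inner product identities invoked in this proof are easy to check numerically. The following sketch is a NumPy illustration of ours, not taken from the book; the matrix dimensions are arbitrary. It verifies the four adjoint identities used for the terms $A_iXB_i$, $C_j\overline{X}D_j$, $G_sX^TH_s$ and $M_lX^HN_l$ under the real inner product $\langle X, Y\rangle = \mathrm{Re}[\mathrm{tr}(Y^H X)]$.

```python
import numpy as np

rng = np.random.default_rng(0)

def cmat(m, n):
    # Random complex matrix.
    return rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

def ip(X, Y):
    # Real inner product <X, Y> = Re[tr(Y^H X)].
    return np.real(np.trace(Y.conj().T @ X))

p, m, n, q = 3, 4, 5, 2
R = cmat(p, q)
X = cmat(m, n)
A, B = cmat(p, m), cmat(n, q)   # term A X B
C, D = cmat(p, m), cmat(n, q)   # term C conj(X) D
G, H = cmat(p, n), cmat(m, q)   # term G X^T H
M, N = cmat(p, n), cmat(m, q)   # term M X^H N

# <R, A X B> = <A^H R B^H, X>
assert np.isclose(ip(A @ X @ B, R), ip(X, A.conj().T @ R @ B.conj().T))
# <R, C conj(X) D> = <C^T conj(R) D^T, X>
assert np.isclose(ip(C @ X.conj() @ D, R), ip(X, C.T @ R.conj() @ D.T))
# <R, G X^T H> = <conj(H) R^T conj(G), X>
assert np.isclose(ip(G @ X.T @ H, R), ip(X, H.conj() @ R.T @ G.conj()))
# <R, M X^H N> = <N R^H M, X>
assert np.isclose(ip(M @ X.conj().T @ N, R), ip(X, N @ R.conj().T @ M))
print("all adjoint identities hold")
```

These identities are exactly what turns the residual condition (5.0.9) into the orthogonality $(E - \widehat{F}) \perp W$ above.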

Lemma 5.3. Let $\widehat{X}$ be a solution of problem 5.1. Then any solution $\widetilde{X}$ of problem 5.1 can be expressed as $\widetilde{X} = \widehat{X} + Z$, where the matrix $Z \in \mathbb{C}^{m\times n}$ satisfies
\[
\sum_{i=1}^{r} A_i Z B_i + \sum_{j=1}^{t} C_j \overline{Z} D_j + \sum_{s=1}^{k} G_s Z^T H_s + \sum_{l=1}^{u} M_l Z^H N_l = 0. \qquad (5.0.10)
\]

Proof. For any solution $\widetilde{X}$ of problem 5.1, define the matrix $Z = \widetilde{X} - \widehat{X}$, so that $\widetilde{X} = \widehat{X} + Z$ holds. Now we show that equation (5.0.10) holds. By applying lemma 5.2, we can obtain
\[
\Bigl\| \sum_{i=1}^{r} A_i \widehat{X} B_i + \sum_{j=1}^{t} C_j \overline{\widehat{X}} D_j + \sum_{s=1}^{k} G_s \widehat{X}^T H_s + \sum_{l=1}^{u} M_l \widehat{X}^H N_l - E \Bigr\|^2
= \Bigl\| \sum_{i=1}^{r} A_i \widetilde{X} B_i + \sum_{j=1}^{t} C_j \overline{\widetilde{X}} D_j + \sum_{s=1}^{k} G_s \widetilde{X}^T H_s + \sum_{l=1}^{u} M_l \widetilde{X}^H N_l - E \Bigr\|^2
\]
\[
= \Bigl\| \sum_{i=1}^{r} A_i Z B_i + \sum_{j=1}^{t} C_j \overline{Z} D_j + \sum_{s=1}^{k} G_s Z^T H_s + \sum_{l=1}^{u} M_l Z^H N_l - \widehat{R} \Bigr\|^2
\]
\[
= \Bigl\| \sum_{i=1}^{r} A_i Z B_i + \sum_{j=1}^{t} C_j \overline{Z} D_j + \sum_{s=1}^{k} G_s Z^T H_s + \sum_{l=1}^{u} M_l Z^H N_l \Bigr\|^2 + \|\widehat{R}\|^2
- 2\Bigl\langle Z,\ \sum_{i=1}^{r} A_i^H \widehat{R} B_i^H + \sum_{j=1}^{t} C_j^T \overline{\widehat{R}}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s \widehat{R}^T \overline{G}_s + \sum_{l=1}^{u} N_l \widehat{R}^H M_l \Bigr\rangle
\]
\[
= \Bigl\| \sum_{i=1}^{r} A_i Z B_i + \sum_{j=1}^{t} C_j \overline{Z} D_j + \sum_{s=1}^{k} G_s Z^T H_s + \sum_{l=1}^{u} M_l Z^H N_l \Bigr\|^2 + \|\widehat{R}\|^2,
\]
where the last equality uses (5.0.9). Since the first expression equals $\|\widehat{R}\|^2$, the squared norm of the $Z$-terms must vanish. Therefore, equation (5.0.10) holds. □

5.1 MCGLS Iterative Algorithm and Convergence Analysis

In this section, firstly, an iterative algorithm is proposed in order to solve problem 5.1, and then some basic properties of the algorithm are given. Next, finding the least Frobenius norm solution of problem 5.2 is discussed. As is well known, the CGLS method is a powerful method for obtaining the solution of the large sparse least squares problem $\min_{x \in \mathbb{C}^n} \|b - Ax\|$. It is stated as follows.

Algorithm CGLS. [29] Choose an initial guess $x(1) \in \mathbb{R}^n$ and set
\[
r(1) = b - A x(1), \qquad s(1) = A^T r(1), \qquad p(1) = s(1), \qquad \gamma(1) = \|s(1)\|^2.
\]
For $k = 1, 2, 3, \ldots$ repeat the following:
\[
q(k) = A p(k), \qquad \delta(k) = \gamma(k)/\|q(k)\|^2, \qquad x(k+1) = x(k) + \delta(k) p(k), \qquad r(k+1) = r(k) - \delta(k) q(k),
\]
\[
s(k+1) = A^T r(k+1), \qquad \gamma(k+1) = \|s(k+1)\|^2, \qquad \lambda(k) = \gamma(k+1)/\gamma(k), \qquad p(k+1) = s(k+1) + \lambda(k) p(k).
\]
For solving the least squares problem of the generalized Sylvester-conjugate transpose matrix equation (5.0.6), based on the above CGLS method in vector form, the following matrix form of the CGLS iterative algorithm (MCGLS or CGLS-M) is presented.
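The vector CGLS iteration can be written in a few lines of code. The sketch below is a NumPy illustration of ours (the function name, sizes and convergence check are not from the book); it solves $\min_x \|b - Ax\|$ and compares the result against a direct least squares solve.

```python
import numpy as np

def cgls(A, b, iters=50):
    """Classical CGLS for min ||b - A x||_2 (implicitly solves A^T A x = A^T b)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x            # r(1) = b - A x(1)
    s = A.T @ r              # s(1) = A^T r(1)
    p = s.copy()             # p(1) = s(1)
    gamma = s @ s            # gamma(1) = ||s(1)||^2
    for _ in range(iters):
        if gamma < 1e-28:    # normal-equations residual vanished
            break
        q = A @ p                    # q(k) = A p(k)
        delta = gamma / (q @ q)      # delta(k) = gamma(k) / ||q(k)||^2
        x = x + delta * p
        r = r - delta * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p   # lambda(k) = gamma(k+1) / gamma(k)
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
x = cgls(A, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref, atol=1e-8)
```

In exact arithmetic the iteration terminates in at most $n$ steps, which is the property that algorithm 5.1 inherits in matrix form.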

Algorithm 5.1
Step 1. Input matrices $A_i \in \mathbb{C}^{p\times m}$, $B_i \in \mathbb{C}^{n\times q}$, $C_j \in \mathbb{C}^{p\times m}$, $D_j \in \mathbb{C}^{n\times q}$, $G_s \in \mathbb{C}^{p\times n}$, $H_s \in \mathbb{C}^{m\times q}$, $M_l \in \mathbb{C}^{p\times n}$, $N_l \in \mathbb{C}^{m\times q}$, $E \in \mathbb{C}^{p\times q}$, $i = 1, \ldots, r$, $j = 1, \ldots, t$, $s = 1, \ldots, k$, $l = 1, \ldots, u$, and an initial matrix $X(1) \in \mathbb{C}^{m\times n}$;
Step 2. Compute
\[
R(1) = E - \sum_{i=1}^{r} A_i X(1) B_i - \sum_{j=1}^{t} C_j \overline{X(1)} D_j - \sum_{s=1}^{k} G_s X(1)^T H_s - \sum_{l=1}^{u} M_l X(1)^H N_l,
\]
\[
S(1) = \sum_{i=1}^{r} A_i^H R(1) B_i^H + \sum_{j=1}^{t} C_j^T \overline{R(1)}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s R(1)^T \overline{G}_s + \sum_{l=1}^{u} N_l R(1)^H M_l,
\]
\[
P(1) = S(1), \qquad \gamma(1) = \|S(1)\|^2;
\]
For $k = 1, 2, 3, \ldots$ repeat the following:
Step 3. If $\|R(k)\| = 0$, then stop: $X(k)$ is a solution of equation (5.0.6); else if $\|R(k)\| \ne 0$ but $\|S(k)\| = 0$, then stop: $X(k)$ is a solution of problem 5.1; else set $k := k + 1$;
Step 4. Compute
\[
Q(k) = \sum_{i=1}^{r} A_i P(k) B_i + \sum_{j=1}^{t} C_j \overline{P(k)} D_j + \sum_{s=1}^{k} G_s P(k)^T H_s + \sum_{l=1}^{u} M_l P(k)^H N_l,
\]
\[
\delta(k) = \gamma(k)/\|Q(k)\|^2, \qquad X(k+1) = X(k) + \delta(k) P(k), \qquad R(k+1) = R(k) - \delta(k) Q(k),
\]
\[
S(k+1) = \sum_{i=1}^{r} A_i^H R(k+1) B_i^H + \sum_{j=1}^{t} C_j^T \overline{R(k+1)}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s R(k+1)^T \overline{G}_s + \sum_{l=1}^{u} N_l R(k+1)^H M_l
\]
\[
= S(k) - \delta(k) \Bigl( \sum_{i=1}^{r} A_i^H Q(k) B_i^H + \sum_{j=1}^{t} C_j^T \overline{Q(k)}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s Q(k)^T \overline{G}_s + \sum_{l=1}^{u} N_l Q(k)^H M_l \Bigr),
\]

\[
\gamma(k+1) = \|S(k+1)\|^2, \qquad \lambda(k) = \gamma(k+1)/\gamma(k), \qquad P(k+1) = S(k+1) + \lambda(k) P(k);
\]
Step 5. Go to step 3.

Some basic properties of algorithm 5.1 are listed in the following lemmas. For brevity, write
\[
\mathcal{L}(X) = \sum_{i=1}^{r} A_i X B_i + \sum_{j=1}^{t} C_j \overline{X} D_j + \sum_{s=1}^{k} G_s X^T H_s + \sum_{l=1}^{u} M_l X^H N_l
\]
for the left-hand side of (5.0.6) and
\[
\mathcal{L}^*(R) = \sum_{i=1}^{r} A_i^H R B_i^H + \sum_{j=1}^{t} C_j^T \overline{R}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s R^T \overline{G}_s + \sum_{l=1}^{u} N_l R^H M_l
\]
for its adjoint with respect to the real inner product, so that $\langle R, \mathcal{L}(X)\rangle = \langle \mathcal{L}^*(R), X\rangle$, and, in algorithm 5.1, $S(k) = \mathcal{L}^*(R(k))$ and $Q(k) = \mathcal{L}(P(k))$.

Lemma 5.4. For the sequences $S(k)$, $P(k)$ and $Q(k)$ generated by algorithm 5.1, if there exists a positive number $r$ such that $\|S(u)\| \ne 0$ and $\|Q(u)\| \ne 0$ for all $u = 1, 2, \ldots, r$, then the following statements hold for $u, w = 1, 2, \ldots, r$ and $u \ne w$:
\[
(1)\ \langle S(u), S(w)\rangle = 0; \qquad (2)\ \langle Q(u), Q(w)\rangle = 0; \qquad (3)\ \langle P(u), S(w)\rangle = 0.
\]

Proof. We prove the conclusion by induction. Because the real inner product is commutative, it is enough to prove the three statements for $1 \le u < w \le r$.

Step 1. For $u = 1$, $w = 2$, by applying algorithm 5.1 we obtain
\[
\langle S(1), S(2)\rangle = \langle S(1), S(1) - \delta(1)\,\mathcal{L}^*(Q(1))\rangle = \|S(1)\|^2 - \delta(1)\langle \mathcal{L}(S(1)), Q(1)\rangle = \|S(1)\|^2 - \delta(1)\langle Q(1), Q(1)\rangle = 0,
\]
since $P(1) = S(1)$ and $\delta(1) = \|S(1)\|^2/\|Q(1)\|^2$. Moreover, $R(2) = R(1) - \delta(1) Q(1)$ implies $\mathcal{L}^*(Q(1)) = -\frac{1}{\delta(1)}(S(2) - S(1))$, so that
\[
\langle Q(1), Q(2)\rangle = \langle Q(1), \mathcal{L}(S(2) + \lambda(1) P(1))\rangle = \lambda(1)\|Q(1)\|^2 + \langle \mathcal{L}^*(Q(1)), S(2)\rangle = \lambda(1)\|Q(1)\|^2 - \frac{1}{\delta(1)}\langle S(2) - S(1), S(2)\rangle = 0,
\]
because $\lambda(1) = \|S(2)\|^2/\|S(1)\|^2$; and
\[
\langle P(1), S(2)\rangle = \langle S(1), S(2)\rangle = 0.
\]

Step 2. Now for $u < w < r$ we assume that
\[
\langle S(u), S(w)\rangle = 0, \qquad \langle Q(u), Q(w)\rangle = 0, \qquad \langle P(u), S(w)\rangle = 0.
\]
Then, using $S(u) = P(u) - \lambda(u-1) P(u-1)$, we can write
\[
\langle S(u), S(w+1)\rangle = \langle S(u), S(w)\rangle - \delta(w)\langle \mathcal{L}(S(u)), Q(w)\rangle = -\delta(w)\langle Q(u) - \lambda(u-1) Q(u-1), Q(w)\rangle = 0,
\]
\[
\langle P(u), S(w+1)\rangle = \langle P(u), S(w)\rangle - \delta(w)\langle \mathcal{L}(P(u)), Q(w)\rangle = -\delta(w)\langle Q(u), Q(w)\rangle = 0,
\]
and, since $\mathcal{L}^*(Q(u)) = -\frac{1}{\delta(u)}(S(u+1) - S(u))$,
\[
\langle Q(u), Q(w+1)\rangle = \langle Q(u), \mathcal{L}(S(w+1))\rangle + \lambda(w)\langle Q(u), Q(w)\rangle = -\frac{1}{\delta(u)}\langle S(u+1) - S(u), S(w+1)\rangle = 0.
\]
For $u = w$, we have
\[
\langle S(w), S(w+1)\rangle = \|S(w)\|^2 - \delta(w)\langle \mathcal{L}(S(w)), Q(w)\rangle = \|S(w)\|^2 - \delta(w)\langle Q(w) - \lambda(w-1) Q(w-1), Q(w)\rangle = \|S(w)\|^2 - \delta(w)\|Q(w)\|^2 = 0,
\]
\[
\langle Q(w), Q(w+1)\rangle = \lambda(w)\|Q(w)\|^2 - \frac{1}{\delta(w)}\langle S(w+1) - S(w), S(w+1)\rangle = \lambda(w)\|Q(w)\|^2 - \frac{\|S(w+1)\|^2}{\delta(w)} = 0,
\]
\[
\langle P(w), S(w+1)\rangle = \langle P(w), S(w)\rangle - \delta(w)\langle \mathcal{L}(P(w)), Q(w)\rangle = \langle S(w) + \lambda(w-1) P(w-1), S(w)\rangle - \delta(w)\|Q(w)\|^2 = \|S(w)\|^2 - \delta(w)\|Q(w)\|^2 = 0.
\]
By the principle of induction, we draw the conclusion. □
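Algorithm 5.1 is straightforward to prototype. The following sketch is our own NumPy illustration, not from the book: it assumes the single-term case $r = t = k = u = 1$, builds a consistent test problem from a known matrix, and checks that the residual of equation (5.0.6) is driven essentially to zero. All names are ours.

```python
import numpy as np

rng = np.random.default_rng(2)
cmat = lambda m, n: rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

m = n = p = q = 3
A, B = cmat(p, m), cmat(n, q)
C, D = cmat(p, m), cmat(n, q)
G, H = cmat(p, n), cmat(m, q)
M, N = cmat(p, n), cmat(m, q)

def L(X):
    # L(X) = A X B + C conj(X) D + G X^T H + M X^H N
    return A @ X @ B + C @ X.conj() @ D + G @ X.T @ H + M @ X.conj().T @ N

def Lstar(R):
    # Adjoint of L under the real inner product <X, Y> = Re[tr(Y^H X)].
    return (A.conj().T @ R @ B.conj().T + C.T @ R.conj() @ D.T
            + H.conj() @ R.T @ G.conj() + N @ R.conj().T @ M)

ip = lambda X, Y: np.real(np.trace(Y.conj().T @ X))

X_true = cmat(m, n)
E = L(X_true)                          # consistent right-hand side

X = np.zeros((m, n), dtype=complex)    # X(1) = 0 gives the least-norm solution
R = E - L(X)
S = Lstar(R)
P = S.copy()
gamma = ip(S, S)
for _ in range(500):
    if gamma < 1e-24:
        break
    Q = L(P)
    delta = gamma / ip(Q, Q)
    X = X + delta * P
    R = R - delta * Q
    S = Lstar(R)
    gamma_new = ip(S, S)
    P = S + (gamma_new / gamma) * P
    gamma = gamma_new

assert np.linalg.norm(L(X) - E) / np.linalg.norm(E) < 1e-6
```

Note that the operator $\mathcal{L}$ is only $\mathbb{R}$-linear because of the conjugations, which is exactly why the real inner product (all step sizes $\delta(k)$, $\lambda(k)$ real) is used.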

Lemma 5.5. If there exists a positive number $l$ such that $\delta(l) = 0$ or $\delta(l) = \infty$ in algorithm 5.1, then $X(l)$ is a solution of problem 5.1.

Proof. If $\delta(l) = 0$, we have $\|S(l)\|^2 = 0$. If $\delta(l) = \infty$, we have $\|Q(l)\|^2 = 0$; in this case, by lemma 5.4,
\[
\|S(l)\|^2 = \langle S(l) + \lambda(l-1) S(l-1) + \lambda(l-1)\lambda(l-2) S(l-2) + \cdots + \lambda(l-1)\cdots\lambda(1) S(1),\ S(l)\rangle = \langle P(l), S(l)\rangle
\]
\[
= \Bigl\langle P(l),\ \sum_{i=1}^{r} A_i^H R(l) B_i^H + \sum_{j=1}^{t} C_j^T \overline{R(l)}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s R(l)^T \overline{G}_s + \sum_{l'=1}^{u} N_{l'} R(l)^H M_{l'} \Bigr\rangle = \langle Q(l), R(l)\rangle = 0.
\]
Therefore, for $\delta(l) = 0$ or $\delta(l) = \infty$ we have
\[
S(l) = \sum_{i=1}^{r} A_i^H R(l) B_i^H + \sum_{j=1}^{t} C_j^T \overline{R(l)}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s R(l)^T \overline{G}_s + \sum_{l'=1}^{u} N_{l'} R(l)^H M_{l'} = 0.
\]
So by lemma 5.2 we can conclude that $X(l)$ is a solution of problem 5.1. □

Theorem 5.1. If the matrix equation (5.0.6) is consistent, then, for any arbitrary initial matrix $X(1)$, the solution $X^*$ of problem 5.1 can be obtained by using algorithm 5.1 within finitely many iteration steps in the absence of roundoff errors.

Proof. By lemma 5.4, the set $\{S(i),\ i = 1, 2, 3, \ldots, mn\}$ is an orthogonal set in the real inner product space $\mathbb{C}^{m\times n}$ of dimension $mn$. Therefore, we must have $\|S(mn+1)\| = 0$, which shows that $X(mn+1)$ is the solution of problem 5.1 in the absence of roundoff errors. □

Theorem 5.2. Suppose that the initial matrix is
\[
X(1) = \sum_{i=1}^{r} A_i^H F(1) B_i^H + \sum_{j=1}^{t} C_j^T \overline{F(1)}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s F(1)^T \overline{G}_s + \sum_{l=1}^{u} N_l F(1)^H M_l,
\]
where $F(1) \in \mathbb{C}^{p\times q}$ is an arbitrary matrix, or especially $X(1) = 0$. Then the solution $X^*$ generated by algorithm 5.1 is the least norm solution of problem 5.1.

Proof. With such an initial matrix, it follows from algorithm 5.1 that every generated matrix can be expressed as
\[
X(k) = \sum_{i=1}^{r} A_i^H F(k) B_i^H + \sum_{j=1}^{t} C_j^T \overline{F(k)}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s F(k)^T \overline{G}_s + \sum_{l=1}^{u} N_l F(k)^H M_l
\]
for certain matrices $F(k) \in \mathbb{C}^{p\times q}$, $k = 2, 3, \ldots$. This implies that there exists a matrix $F^* \in \mathbb{C}^{p\times q}$ such that
\[
X^* = \sum_{i=1}^{r} A_i^H F^* B_i^H + \sum_{j=1}^{t} C_j^T \overline{F^*}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s (F^*)^T \overline{G}_s + \sum_{l=1}^{u} N_l (F^*)^H M_l.
\]
Now let $\widetilde{X}^*$ be an arbitrary solution of problem 5.1. By using lemma 5.3, there exists a matrix $Z^* \in \mathbb{C}^{m\times n}$ such that $\widetilde{X}^* = X^* + Z^*$ and
\[
\sum_{i=1}^{r} A_i Z^* B_i + \sum_{j=1}^{t} C_j \overline{Z^*} D_j + \sum_{s=1}^{k} G_s (Z^*)^T H_s + \sum_{l=1}^{u} M_l (Z^*)^H N_l = 0.
\]
Thus, one can obtain
\[
\langle X^*, Z^*\rangle = \Bigl\langle \sum_{i=1}^{r} A_i^H F^* B_i^H + \sum_{j=1}^{t} C_j^T \overline{F^*}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s (F^*)^T \overline{G}_s + \sum_{l=1}^{u} N_l (F^*)^H M_l,\ Z^* \Bigr\rangle
= \Bigl\langle F^*,\ \sum_{i=1}^{r} A_i Z^* B_i + \sum_{j=1}^{t} C_j \overline{Z^*} D_j + \sum_{s=1}^{k} G_s (Z^*)^T H_s + \sum_{l=1}^{u} M_l (Z^*)^H N_l \Bigr\rangle = 0.
\]

By applying the relation above, one can obtain
\[
\|\widetilde{X}^*\|^2 = \|X^* + Z^*\|^2 = \|X^*\|^2 + \|Z^*\|^2 + 2\langle X^*, Z^*\rangle = \|X^*\|^2 + \|Z^*\|^2 \ge \|X^*\|^2.
\]
This shows that the solution $X^*$ is the least Frobenius norm solution of problem 5.1. □

Similarly to [98], the minimization property of the proposed algorithm is stated as follows. This property shows that algorithm 5.1 converges smoothly.

Theorem 5.3. For any initial matrix $X(1) \in \mathbb{C}^{m\times n}$, we have
\[
\Bigl\| \sum_{i=1}^{r} A_i X(k+1) B_i + \sum_{j=1}^{t} C_j \overline{X(k+1)} D_j + \sum_{s=1}^{k} G_s X(k+1)^T H_s + \sum_{l=1}^{u} M_l X(k+1)^H N_l - E \Bigr\|^2
= \min_{X \in w_k} \Bigl\| \sum_{i=1}^{r} A_i X B_i + \sum_{j=1}^{t} C_j \overline{X} D_j + \sum_{s=1}^{k} G_s X^T H_s + \sum_{l=1}^{u} M_l X^H N_l - E \Bigr\|^2, \qquad (5.1.1)
\]
where $X(k+1)$ is generated by algorithm 5.1 at the $(k+1)$-th iteration and $w_k$ denotes the affine subspace
\[
w_k = X(1) + \mathrm{span}(P(1), P(2), \ldots, P(k)). \qquad (5.1.2)
\]

Proof. For any matrix $X \in w_k$, it follows from equation (5.1.2) that there exist numbers $\alpha_1, \alpha_2, \ldots, \alpha_k$ such that
\[
X = X(1) + \sum_{l=1}^{k} \alpha_l P(l).
\]
Writing $\mathcal{L}(X)$ for the left-hand side of (5.0.6) and $\mathcal{L}^*$ for its adjoint with respect to the real inner product (so that $S(l) = \mathcal{L}^*(R(l))$ and $Q(l) = \mathcal{L}(P(l))$ in algorithm 5.1), we define the continuous and differentiable function
\[
f(\alpha_1, \alpha_2, \ldots, \alpha_k) = \Bigl\| \mathcal{L}\Bigl(X(1) + \sum_{l=1}^{k} \alpha_l P(l)\Bigr) - E \Bigr\|^2.
\]
Due to lemma 5.4, the matrices $Q(l)$ are mutually orthogonal, so one can obtain
\[
f(\alpha_1, \alpha_2, \ldots, \alpha_k) = \Bigl\| \mathcal{L}(X(1)) - E + \sum_{l=1}^{k} \alpha_l Q(l) \Bigr\|^2 = \|R(1)\|^2 + \sum_{l=1}^{k} \bigl( \alpha_l^2 \|Q(l)\|^2 - 2 \alpha_l \langle Q(l), R(1)\rangle \bigr).
\]
Now we consider the problem of minimizing the function $f(\alpha_1, \alpha_2, \ldots, \alpha_k)$. It is obvious that
\[
\min_{\alpha_1, \ldots, \alpha_k} f(\alpha_1, \alpha_2, \ldots, \alpha_k) = \min_{X \in w_k} \|\mathcal{L}(X) - E\|^2.
\]
The minimum occurs when
\[
\frac{\partial f}{\partial \alpha_l} = 0 \quad \text{for } l = 1, 2, \ldots, k,
\]
and consequently
\[
\alpha_l = \frac{\langle Q(l), R(1)\rangle}{\|Q(l)\|^2}.
\]
From algorithm 5.1, one can obtain
\[
R(1) = R(l) + \delta(l-1) Q(l-1) + \delta(l-2) Q(l-2) + \cdots + \delta(1) Q(1).
\]
Therefore, it follows from lemma 5.4 that
\[
\alpha_l = \frac{\langle Q(l), R(l)\rangle}{\|Q(l)\|^2} = \frac{\langle P(l), S(l)\rangle}{\|Q(l)\|^2} = \frac{\langle S(l) + \lambda(l-1) P(l-1), S(l)\rangle}{\|Q(l)\|^2} = \frac{\|S(l)\|^2}{\|Q(l)\|^2} = \delta(l),
\]
where the second equality uses $\langle Q(l), R(l)\rangle = \langle \mathcal{L}(P(l)), R(l)\rangle = \langle P(l), \mathcal{L}^*(R(l))\rangle = \langle P(l), S(l)\rangle$. Since $X(k+1) = X(1) + \sum_{l=1}^{k} \delta(l) P(l)$, the iterate $X(k+1)$ attains the minimum. By this means, we complete the proof. □

By theorem 5.3, the solution generated by algorithm 5.1 at the $(k+1)$-th iteration minimizes the residual norm in the affine subspace $w_k$, for any initial matrix. Since $X(k) \in w_k$, one also has
\[
\Bigl\| \sum_{i=1}^{r} A_i X(k+1) B_i + \sum_{j=1}^{t} C_j \overline{X(k+1)} D_j + \sum_{s=1}^{k} G_s X(k+1)^T H_s + \sum_{l=1}^{u} M_l X(k+1)^H N_l - E \Bigr\|
\le \Bigl\| \sum_{i=1}^{r} A_i X(k) B_i + \sum_{j=1}^{t} C_j \overline{X(k)} D_j + \sum_{s=1}^{k} G_s X(k)^T H_s + \sum_{l=1}^{u} M_l X(k)^H N_l - E \Bigr\|, \qquad (5.1.3)
\]
which shows that the sequence of residual norms $\|R(1)\|, \|R(2)\|, \ldots$ is monotonically decreasing. This descent property shows that algorithm 5.1 converges fast and smoothly.

Now we solve problem 5.2. For a given matrix $X_0$, we have
\[
\min_{X \in \mathbb{C}^{m\times n}} \Bigl\| \sum_{i=1}^{r} A_i X B_i + \sum_{j=1}^{t} C_j \overline{X} D_j + \sum_{s=1}^{k} G_s X^T H_s + \sum_{l=1}^{u} M_l X^H N_l - E \Bigr\|
= \min_{X \in \mathbb{C}^{m\times n}} \Bigl\| \sum_{i=1}^{r} A_i (X - X_0) B_i + \sum_{j=1}^{t} C_j \overline{(X - X_0)} D_j + \sum_{s=1}^{k} G_s (X - X_0)^T H_s + \sum_{l=1}^{u} M_l (X - X_0)^H N_l - E_1 \Bigr\|,
\]
where
\[
E_1 = E - \sum_{i=1}^{r} A_i X_0 B_i - \sum_{j=1}^{t} C_j \overline{X}_0 D_j - \sum_{s=1}^{k} G_s X_0^T H_s - \sum_{l=1}^{u} M_l X_0^H N_l.
\]
If we let $X_1 = X - X_0$, then problem 5.2 is equivalent to finding the least Frobenius norm solution $X_1^*$ of
\[
\min_{X_1 \in \mathbb{C}^{m\times n}} \Bigl\| \sum_{i=1}^{r} A_i X_1 B_i + \sum_{j=1}^{t} C_j \overline{X}_1 D_j + \sum_{s=1}^{k} G_s X_1^T H_s + \sum_{l=1}^{u} M_l X_1^H N_l - E_1 \Bigr\|,
\]
which can be computed by using algorithm 5.1 with the initial matrix
\[
X_1(1) = \sum_{i=1}^{r} A_i^H F B_i^H + \sum_{j=1}^{t} C_j^T \overline{F}\, D_j^T + \sum_{s=1}^{k} \overline{H}_s F^T \overline{G}_s + \sum_{l=1}^{u} N_l F^H M_l,
\]
where $F \in \mathbb{C}^{p\times q}$ is an arbitrary matrix, or especially $X_1(1) = 0$. The solution of problem 5.2, characterized by $\|\widehat{X} - X_0\| = \min_{X \in G_r} \|X - X_0\|$, can then be stated as
\[
\widehat{X} = X_1^* + X_0.
\]
This shows that problem 5.2 can be solved on the basis of problem 5.1; in other words, the solution of problem 5.2 is obtained from the solution of problem 5.1.
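The shift-and-solve reduction above is easy to exercise on a toy case. The sketch below is our own NumPy illustration: instead of algorithm 5.1 it solves a single-term equation $AX B = E$ by vectorization, $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$, using a minimum-norm least squares solve for the correction $X_1$, and then recovers $\widehat{X} = X_1^* + X_0$. All names are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
m = n = 3
A = rng.standard_normal((2, m))   # deliberately "wide" problem: a 2x2
B = rng.standard_normal((n, 2))   # right-hand side leaves freedom in X
X_sol = rng.standard_normal((m, n))
E = A @ X_sol @ B                 # consistent equation A X B = E
X0 = rng.standard_normal((m, n))  # matrix to be approximated

# Shift: E1 = E - A X0 B, then find the least-norm X1 with A X1 B = E1.
E1 = E - A @ X0 @ B
K = np.kron(B.T, A)               # vec(A X B) = (B^T kron A) vec(X), column-major vec
x1 = np.linalg.lstsq(K, E1.flatten(order="F"), rcond=None)[0]  # minimum-norm solution
X1 = x1.reshape((m, n), order="F")
X_hat = X1 + X0                   # optimal approximation to X0 within the solution set

assert np.allclose(A @ X_hat @ B, E, atol=1e-8)  # X_hat solves the equation
# X_hat is at least as close to X0 as the solution we started from:
assert np.linalg.norm(X_hat - X0) <= np.linalg.norm(X_sol - X0) + 1e-8
```

Minimizing $\|X_1\|_F$ over the shifted solution set is exactly minimizing $\|X - X_0\|_F$ over the original one, which is what the two assertions check.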

5.2 Numerical Example

In this section, in order to illustrate the efficiency of algorithm 5.1, two numerical examples are given. All the tests are performed in MATLAB with machine precision around $10^{-16}$.

Example 5.1. Find the least Frobenius norm solution of the matrix equation
\[
A X B + C \overline{X} D + E X^H F + G X^T H = M, \qquad (5.2.1)
\]
where $A$, $B$, $C$, $D = G$, $E$, $F$ and $H$ are given $4 \times 4$ complex coefficient matrices and the right-hand side is
\[
M = \begin{pmatrix}
5176+1755i & 10683+3023i & 3277+5921i & 10579+4432i \\
30638+224i & 20539-885i & 21270-454i & 23668+8249i \\
49038-1619i & 33221-13413i & 39726-10567i & 34533+1371i \\
15249+15652i & 21330+7577i & 5458+8704i & 14865+13306i
\end{pmatrix}.
\]
It can be verified that the generalized Sylvester-conjugate transpose matrix equation (5.2.1) is consistent and has the solution
\[
X = \begin{pmatrix}
4+3i & 2+i & 11 & 6 \\
9+6i & 11+2i & 8+9i & 9 \\
9i & 4 & 2i & 11i \\
23 & 4 & 7 & 12
\end{pmatrix}.
\]

If we let the initial matrix $X(1) = 0$, by using algorithm 5.1 we obtain the solution
\[
X(133) = \begin{pmatrix}
4.0000+3.0000i & 2.0000+1.0000i & 10.9999 & 5.9999 \\
8.9999+6.0000i & 11.0000+2.0000i & 8.0000+9.0000i & 9.0000 \\
8.9999i & 4.0000 & 1.9999i & 11.0000i \\
22.9999 & 3.9999 & 7.0000 & 12.0000
\end{pmatrix},
\]
with corresponding residual $r(133) = 4.7116 \times 10^{-16}$ and relative error $e(133) = 2.6266 \times 10^{-15}$. The obtained results are presented in figures 5.2.1 and 5.2.2, where
\[
r_k = \log_{10} \|M - A X(k) B - C \overline{X(k)} D - E X(k)^H F - G X(k)^T H\|, \qquad \mathrm{err}(k) = \frac{\|X(k) - X\|}{\|X\|}.
\]
If we let the initial matrix $X(1) = 0$, $X(1) = I_4$ or $X(1) = \mathrm{ones}(4,4)$, respectively, then, according to algorithm 5.1, the residual and the relative error of the solution are presented in figure 5.2.3.

Remark 1. From figures 5.2.1 and 5.2.2, the residual and the relative error of the solution of matrix equation (5.2.1) decrease as the iteration number increases, which shows that the iterative solution converges to the exact solution.


FIG. 5.2.1 – The residual of solution for example 5.1.

FIG. 5.2.2 – The relative error of solution for example 5.1.

Moreover, as the iteration number increases, the residual and the relative error of the LSQR-M iterative algorithm hardly vary, and the residual and the relative error of the proposed iterative algorithm are smaller than those of the LSQR-M iterative algorithm. This shows that our proposed method (the MCGLS or CGLS-M iterative algorithm) is efficient.


FIG. 5.2.3 – The relative error of solution for different initial values for example 5.1.

Remark 2. From figure 5.2.3, for the different initial values $X(1) = 0$, $X(1) = I_4$ and $X(1) = \mathrm{ones}(4,4)$, respectively, the iterative solution of the matrix equation (5.2.1) always converges to the exact solution of the matrix equation (5.2.1).

Example 5.2. Find the least Frobenius norm solution of the matrix equation
\[
A X B + C \overline{X} D + E X^H F + G X^T H = M, \qquad (5.2.2)
\]
where
\[
A = \mathrm{rand}(6,7) + i \cdot \mathrm{rand}(6,7), \qquad B = \mathrm{rand}(5,4) + i \cdot \mathrm{rand}(5,4),
\]
\[
C = \mathrm{rand}(6,7) + i \cdot \mathrm{rand}(6,7), \qquad D = \mathrm{rand}(5,4) + i \cdot \mathrm{rand}(5,4),
\]
\[
E = \mathrm{rand}(6,5) + i \cdot \mathrm{rand}(6,5), \qquad F = \mathrm{rand}(7,4) + i \cdot \mathrm{rand}(7,4),
\]
\[
G = \mathrm{rand}(6,5) + i \cdot \mathrm{rand}(6,5), \qquad H = \mathrm{rand}(7,4) + i \cdot \mathrm{rand}(7,4),
\]
and
\[
M = \begin{pmatrix}
1.1212+36.8478i & 0.2336+32.7105i & 0.4078+32.5045i & 5.5394+29.9953i \\
5.7270+32.8779i & 8.4829+28.7701i & 4.6524+30.6806i & 1.1100+27.3557i \\
1.1063+37.1070i & 2.1256+35.1041i & 1.4087+34.2908i & 5.4257+30.3026i \\
1.2607+24.4888i & 4.0543+21.8070i & 0.8941+24.5297i & 3.5789+20.1358i \\
2.9581+34.7771i & 4.3860+31.0165i & 1.5855+33.7016i & 1.2014+28.1810i \\
6.5203+25.7637i & 7.0275+22.1274i & 5.4346+24.7957i & 0.5099+20.4696i
\end{pmatrix}.
\]
It can be verified that the generalized Sylvester-conjugate transpose matrix equation (5.2.2) is consistent and has the solution
\[
X = \begin{pmatrix}
0.6273+0.6164i & 0.6173+0.9883i & 0.0669+0.5118i & 0.8044+0.5278i & 0.8990+0.8865i \\
0.0216+0.9397i & 0.5754+0.7668i & 0.9394+0.0826i & 0.9861+0.4116i & 0.6259+0.4547i \\
0.9106+0.3545i & 0.5301+0.3367i & 0.0182+0.7196i & 0.0299+0.6026i & 0.1379+0.4134i \\
0.8006+0.4106i & 0.2751+0.6624i & 0.6838+0.9962i & 0.5357+0.7505i & 0.2178+0.2177i \\
0.7458+0.9843i & 0.2486+0.2442i & 0.7837+0.3545i & 0.0870+0.5835i & 0.1821+0.1257i \\
0.8131+0.9456i & 0.4516+0.2955i & 0.5341+0.9713i & 0.8021+0.5518i & 0.0418+0.3089i \\
0.3833+0.6766i & 0.2277+0.6802i & 0.8854+0.3464i & 0.9891+0.5836i & 0.1069+0.7261i
\end{pmatrix}.
\]

If we let the initial matrix $X(1) = 0$, then, according to algorithm 5.1, we obtain the solution with corresponding residual $r(111) = 5.4178 \times 10^{-16}$. The obtained results are presented in figure 5.2.4, where
\[
r_k = \log_{10} \|M - A X(k) B - C \overline{X(k)} D - E X(k)^H F - G X(k)^T H\|.
\]

FIG. 5.2.4 – The residual of solution for example 5.2.


Remark 3. Figure 5.2.4 shows that the iterative solution converges to the exact solution even for random coefficient matrices, and that the residual of the iterative algorithm decreases as the iteration number increases. This confirms that our proposed CGLS-M iterative algorithm is feasible and effective.

5.3 Conclusions

In order to obtain the least Frobenius norm solution of the generalized Sylvester-conjugate transpose matrix equation (5.0.6), we constructed the CGLS-M iterative algorithm in this chapter by extending the classical CGLS iterative algorithm. With this newly presented iterative algorithm, for any initial matrix $X(1)$, a solution $X^*$ can be obtained within finitely many iteration steps in the absence of roundoff errors; what is more, the least Frobenius norm solution can be obtained by a suitable choice of the initial matrix. Besides, by use of this iterative method, if a matrix $X_0$ is given, then the optimal approximation solution $\widehat{X}$ can be derived by first finding the least Frobenius norm solution of a new corresponding matrix equation. Furthermore, two numerical examples were given to check the obtained theory; it is worth noting that the coefficient matrices of the second numerical example are random matrices. The experiments show that the proposed iterative algorithm is more efficient than the existing LSQR iterative algorithm [90, 93, 94].

Chapter 6
MCGLS Iterative Algorithm to Coupled Linear Operator Systems

In this chapter, we investigate the least squares problem of the following coupled linear operator matrix equations
\[
\begin{cases}
L_{11}(X) + L_{12}(Y) = E_1, \\
L_{21}(X) + L_{22}(Y) = E_2,
\end{cases} \qquad (6.0.1)
\]
in which the $L_{ij}$, $i, j = 1, 2$, denote real linear operators and $X, Y \in \mathbb{C}^{m\times n}$, $E_i \in \mathbb{C}^{p\times q}$, $i = 1, 2$. Obviously, the matrix equations (6.0.1) include the general matrix equations
\[
\sum_{m=1}^{r} A_{im} X B_{im} + \sum_{j=1}^{t} C_{ij} \overline{X} D_{ij} + \sum_{s=1}^{k} G_{is} X^H H_{is} + \sum_{h=1}^{g} G_{ih} Y^T H_{ih} + \sum_{l=1}^{u} M_{il} Y N_{il} = E_i, \qquad i = 1, 2,
\]
as a special case. So this chapter aims to propose the least squares iterative solution of coupled linear operator matrix equations and to generalize the results of the paper [90]. In this chapter, we mainly consider the following two problems.

Problem 6.1. For given $E_i \in \mathbb{C}^{p\times q}$, $i = 1, 2$, find $X^*, Y^* \in \mathbb{C}^{m\times n}$ such that
\[
\left\| \begin{pmatrix} L_{11}(X^*) + L_{12}(Y^*) - E_1 \\ L_{21}(X^*) + L_{22}(Y^*) - E_2 \end{pmatrix} \right\| = \min. \qquad (6.0.2)
\]

Problem 6.2. Let $G_r$ denote the solution set of problem 6.1. For given $X_0, Y_0 \in \mathbb{C}^{m\times n}$, find the matrices $\widehat{X}, \widehat{Y} \in G_r$ such that
\[
\|\widehat{X} - X_0\| = \min_{X \in G_r} \|X - X_0\|, \qquad \|\widehat{Y} - Y_0\| = \min_{Y \in G_r} \|Y - Y_0\|. \qquad (6.0.3)
\]

DOI: 10.1051/978-2-7598-3102-9.c006 © Science Press, EDP Sciences, 2023


6.1 Some Useful Lemmas

Two matrices $X$ and $Y$ are said to be orthogonal if $\langle X, Y\rangle = 0$. For a linear operator $L$ and matrices $B, X$ of appropriate dimensions, a well-known property of the inner product is $\langle L(X), B\rangle = \langle X, L^*(B)\rangle$, in which the symbol $L^*$ denotes the adjoint (conjugate) operator of $L$.

Lemma 6.1. [99] Let $U$ be an inner product space, $V$ a subspace of $U$, and $V^{\perp}$ the orthogonal complement subspace of $V$. Given $w \in U$, a vector $v_0 \in V$ satisfies $\|w - v_0\| \le \|w - v\|$ for all $v \in V$ if and only if $(w - v_0) \perp V$, i.e. $(w - v_0) \in V^{\perp}$; in that case $v_0$ is the unique minimizing vector in $V$.

Lemma 6.2. Let $\widehat{R}_i = E_i - L_{i1}(\widehat{X}) - L_{i2}(\widehat{Y})$, $i = 1, 2$, be the residuals of equations (6.0.1) corresponding to the matrices $\widehat{X}, \widehat{Y} \in \mathbb{C}^{m\times n}$. Then the matrices $\widehat{X}$ and $\widehat{Y}$ are least squares solutions of equations (6.0.1) if
\[
L_{i1}^*(\widehat{R}_i) = 0, \qquad L_{i2}^*(\widehat{R}_i) = 0, \qquad i = 1, 2. \qquad (6.1.1)
\]

Proof. Firstly, the linear subspaces can be defined as $W_i = \{F_i \mid F_i = L_{i1}(X) + L_{i2}(Y),\ X, Y \in \mathbb{C}^{m\times n}\} \subseteq \mathbb{C}^{p\times q}$, $i = 1, 2$. Next, if we suppose that $\widehat{F}_i = L_{i1}(\widehat{X}) + L_{i2}(\widehat{Y})$, $i = 1, 2$, then we have $\widehat{F}_i \in W_i$. Thus, for any matrices $F_i \in W_i$,
\[
\langle E_i - \widehat{F}_i, F_i\rangle = \langle E_i - L_{i1}(\widehat{X}) - L_{i2}(\widehat{Y}),\ L_{i1}(X) + L_{i2}(Y)\rangle = \langle \widehat{R}_i, L_{i1}(X)\rangle + \langle \widehat{R}_i, L_{i2}(Y)\rangle = \langle L_{i1}^*(\widehat{R}_i), X\rangle + \langle L_{i2}^*(\widehat{R}_i), Y\rangle.
\]
Therefore, if $L_{i1}^*(\widehat{R}_i) = L_{i2}^*(\widehat{R}_i) = 0$, $i = 1, 2$, then we have $\langle E_i - \widehat{F}_i, F_i\rangle = 0$ for all $F_i \in W_i$. According to lemma 6.1, we get $(E_i - \widehat{F}_i) \in W_i^{\perp}$, $i = 1, 2$, so the pair $(\widehat{X}, \widehat{Y})$ is a least squares solution of equations (6.0.1). □

Lemma 6.3. Suppose that $(\widehat{X}, \widehat{Y})$ is a solution of problem 6.1. Then any solution $(\widetilde{X}, \widetilde{Y})$ of problem 6.1 can be characterized as $(\widehat{X} + Z, \widehat{Y} + Z)$, in which the matrix $Z \in \mathbb{C}^{m\times n}$ satisfies
\[
L_{i1}(Z) + L_{i2}(Z) = 0, \qquad i = 1, 2. \qquad (6.1.2)
\]

Proof. For any solution $(\widetilde{X}, \widetilde{Y})$ of problem 6.1, if the matrix $Z$ is defined as $Z = \widetilde{X} - \widehat{X}$, then we have $\widetilde{X} = \widehat{X} + Z$. From lemma 6.2, we have
\[
\|L_{i1}(\widehat{X}) + L_{i2}(\widehat{Y}) - E_i\|^2 = \|L_{i1}(\widetilde{X}) + L_{i2}(\widetilde{Y}) - E_i\|^2 = \|L_{i1}(\widehat{X} + Z) + L_{i2}(\widehat{Y} + Z) - E_i\|^2 = \|L_{i1}(Z) + L_{i2}(Z) - \widehat{R}_i\|^2
\]
\[
= \|L_{i1}(Z) + L_{i2}(Z)\|^2 + \|\widehat{R}_i\|^2 - 2\langle L_{i1}(Z) + L_{i2}(Z), \widehat{R}_i\rangle
= \|L_{i1}(Z) + L_{i2}(Z)\|^2 + \|\widehat{R}_i\|^2 - 2\langle Z, L_{i1}^*(\widehat{R}_i) + L_{i2}^*(\widehat{R}_i)\rangle = \|L_{i1}(Z) + L_{i2}(Z)\|^2 + \|\widehat{R}_i\|^2 = \|\widehat{R}_i\|^2.
\]
Hence $L_{i1}(Z) + L_{i2}(Z) = 0$, $i = 1, 2$. Thus, we complete the proof of the conclusion. □

6.2 MCGLS Iterative Algorithm and Convergence Analysis

In this section, we first present a CGLS-M iterative algorithm, then we give some basic properties of the presented CGLS-M (MCGLS) iterative algorithm. At the end of this section, we discuss how to find the least Frobenius norm solution of problem 6.1.

Algorithm 6.1
Step 1. Input complex matrices $E_i \in \mathbb{C}^{p\times q}$, $i = 1, 2$, and initial matrices $X(1), Y(1) \in \mathbb{C}^{m\times n}$;
Step 2. Compute
\[
R_i(1) = E_i - L_{i1}(X(1)) - L_{i2}(Y(1)), \qquad T_i(1) = L_{i1}^*(R_i(1)) + L_{i2}^*(R_i(1)), \qquad i = 1, 2,
\]
\[
P_i(1) = T_i(1), \quad i = 1, 2, \qquad \gamma(1) = \|T_1(1)\|^2 + \|T_2(1)\|^2;
\]
Step 3. For $k = 1, 2, 3, \ldots$ repeat the following: if $\|R_1(k)\| + \|R_2(k)\| = 0$, then stop: $X(k), Y(k)$ are solutions of equations (6.0.1); else if $\|R_1(k)\| + \|R_2(k)\| \ne 0$ but $\|T_1(k)\| + \|T_2(k)\| = 0$, then stop: $X(k), Y(k)$ are solutions of problem 6.1; else set $k := k + 1$;
Step 4. Compute
\[
Q_i(k) = L_{i1}(P_i(k)) + L_{i2}(P_i(k)), \quad i = 1, 2, \qquad \delta(k) = \gamma(k)/\bigl(\|Q_1(k)\|^2 + \|Q_2(k)\|^2\bigr),
\]
\[
X(k+1) = X(k) + \tfrac{1}{2}\delta(k)\bigl(P_1(k) + P_2(k)\bigr), \qquad Y(k+1) = Y(k) + \tfrac{1}{2}\delta(k)\bigl(P_1(k) + P_2(k)\bigr),
\]
\[
R_i(k+1) = R_i(k) - \delta(k) Q_i(k), \qquad T_i(k+1) = L_{i1}^*(R_i(k+1)) + L_{i2}^*(R_i(k+1)) = T_i(k) - \delta(k)\bigl(L_{i1}^*(Q_i(k)) + L_{i2}^*(Q_i(k))\bigr), \quad i = 1, 2,
\]

cðk þ 1Þ ¼ kT 1 ðk þ 1Þk2 þ kT 2 ðk þ 1Þk2 ; kðkÞ ¼ cðk þ 1Þ=cðkÞ; P i ðk þ 1Þ ¼ T i ðk þ 1Þ þ kðkÞP i ðkÞ; i ¼ 1; 2; Step 5. Go to step 3. In the following, we give some basic properties of algorithm 6.1. Lemma 6.4. If there exists a positive number r, such that kT i ðuÞk 6¼ 0 and kQ i ðuÞk 6¼ 0; i ¼ 1; 2; 8u ¼ 1; 2; . . .; r, then the the sequences T i ðkÞ, P i ðkÞ and Q i ðkÞ; i ¼ 1; 2 generated by algorithm 6.1 satisfy the following statement, for u; w ¼ 1; 2; . . .; r and u 6¼ w. 2 X hT i ðuÞ; T i ðwÞi ¼ 0;

2 X hQ i ðuÞ; Q i ðwÞi ¼ 0;

2 X hP i ðuÞ; T i ðwÞi ¼ 0: ð6:2:1Þ

i¼1

i¼1

i¼1

Proof. Step 1: The conclusion will be proved by induction. Because of commutation of the real inner product, it is enough to prove three statements for 1  u\w  r: For u ¼ 1; w ¼ 2, from algorithm 6.1, we have the following relation 2 X hT i ð1Þ; T i ð2Þi i¼1

¼

2 X

hT i ð1Þ; T i ð1Þ  dð1ÞðLi1 ðQ i ð1ÞÞ þ Li2 ðQ i ð1ÞÞÞi

i¼1

¼

2 X i¼1

¼

2 X i¼1

¼

2 X i¼1

kT i ð1Þk2  dð1Þ

2 X hLi1 ðT i ð1ÞÞ þ Li2 ðT i ð1ÞÞ; Q i ð1Þi i¼1

2 X kT i ð1Þk2  dð1Þ hLi1 ðP i ð1ÞÞ þ Li2 ðP i ð1ÞÞ; P i ð1Þi i¼1 2 X kT i ð1Þk2  dð1Þ hQ i ð1Þ; Q i ð1Þi ¼ 0; i¼1

MCGLS Iterative Algorithm to Coupled Linear Operator Systems 2 X hQ i ð1Þ; Q i ð2Þi i¼1

¼ ¼

2 X

hQ i ð1Þ; Li1 ðP i ð2ÞÞ þ Li2 ðP i ð2ÞÞi

i¼1 2 D X

Q i ð1Þ; Li1 ðT i ð2Þ þ

i¼1

þ Li2 ðT i ð2Þ þ ¼ ¼

2 X i¼1 2 X

2 X

2 X

kð1ÞP i ð1ÞÞ

i¼1

kð1ÞP i ð1ÞÞ

i¼1

kð1ÞkQ i ð1Þk2 þ

2 X

E

hQ i ð1Þ; Li1 ðT i ð2ÞÞ þ Li2 ðT i ð2ÞÞi

i¼1

kð1ÞkQ i ð1Þk2 

i¼1

2 1 X hL ðRi ð2Þ  Ri ð1ÞÞ dð1Þ i¼1 i1

þ Li2 ðRi ð2Þ  Ri ð1ÞÞ; T i ð2Þi 2 2 X 1 X ¼ kð1ÞkQ i ð1Þk2  hT i ð2Þ  T i ð1Þ; T i ð2Þi ¼ 0; dð1Þ i¼1 i¼1 and 2 X

hP i ð1Þ; T i ð2Þi ¼

i¼1

2 X hT i ð1Þ; T i ð2Þi ¼ 0: i¼1

Step 2: For u < w < r, assume that

∑_{i=1}^{2} ⟨T_i(u), T_i(w)⟩ = 0,  ∑_{i=1}^{2} ⟨Q_i(u), Q_i(w)⟩ = 0,  ∑_{i=1}^{2} ⟨P_i(u), T_i(w)⟩ = 0.

Then we have

∑_{i=1}^{2} ⟨T_i(u), T_i(w+1)⟩
= ∑_{i=1}^{2} ⟨T_i(u), T_i(w) − δ(w)(L_{i1}(Q_i(w)) + L_{i2}(Q_i(w)))⟩
= −δ(w) ∑_{i=1}^{2} ⟨T_i(u), L_{i1}(Q_i(w)) + L_{i2}(Q_i(w))⟩
= −δ(w) ∑_{i=1}^{2} ⟨L_{i1}(T_i(u)) + L_{i2}(T_i(u)), Q_i(w)⟩
= −δ(w) ∑_{i=1}^{2} ⟨L_{i1}(P_i(u) − λ(u−1)P_i(u−1)) + L_{i2}(P_i(u) − λ(u−1)P_i(u−1)), Q_i(w)⟩
= −δ(w) ∑_{i=1}^{2} ⟨Q_i(u) − λ(u−1)Q_i(u−1), Q_i(w)⟩ = 0,

∑_{i=1}^{2} ⟨P_i(u), T_i(w+1)⟩
= ∑_{i=1}^{2} ⟨P_i(u), T_i(w) − δ(w)(L_{i1}(Q_i(w)) + L_{i2}(Q_i(w)))⟩
= −δ(w) ∑_{i=1}^{2} ⟨P_i(u), L_{i1}(Q_i(w)) + L_{i2}(Q_i(w))⟩
= −δ(w) ∑_{i=1}^{2} ⟨L_{i1}(P_i(u)) + L_{i2}(P_i(u)), Q_i(w)⟩
= −δ(w) ∑_{i=1}^{2} ⟨Q_i(u), Q_i(w)⟩ = 0,

and

∑_{i=1}^{2} ⟨Q_i(u), Q_i(w+1)⟩
= ∑_{i=1}^{2} ⟨Q_i(u), L_{i1}(P_i(w+1)) + L_{i2}(P_i(w+1))⟩
= ∑_{i=1}^{2} ⟨Q_i(u), L_{i1}(T_i(w+1) + λ(w)P_i(w)) + L_{i2}(T_i(w+1) + λ(w)P_i(w))⟩
= ∑_{i=1}^{2} ⟨Q_i(u), L_{i1}(T_i(w+1)) + L_{i2}(T_i(w+1)) + λ(w)Q_i(w)⟩
= −(1/δ(u)) ∑_{i=1}^{2} ⟨R_i(u+1) − R_i(u), L_{i1}(T_i(w+1)) + L_{i2}(T_i(w+1))⟩
= −(1/δ(u)) ∑_{i=1}^{2} ⟨L_{i1}(R_i(u+1) − R_i(u)) + L_{i2}(R_i(u+1) − R_i(u)), T_i(w+1)⟩
= −(1/δ(u)) ∑_{i=1}^{2} ⟨T_i(u+1) − T_i(u), T_i(w+1)⟩ = 0.

For u = w, we also obtain the following relation

∑_{i=1}^{2} ⟨T_i(w), T_i(w+1)⟩
= ∑_{i=1}^{2} ⟨T_i(w), T_i(w) − δ(w)(L_{i1}(Q_i(w)) + L_{i2}(Q_i(w)))⟩
= ∑_{i=1}^{2} ‖T_i(w)‖² − δ(w) ∑_{i=1}^{2} ⟨T_i(w), L_{i1}(Q_i(w)) + L_{i2}(Q_i(w))⟩
= ∑_{i=1}^{2} ‖T_i(w)‖² − δ(w) ∑_{i=1}^{2} ⟨L_{i1}(T_i(w)) + L_{i2}(T_i(w)), Q_i(w)⟩
= ∑_{i=1}^{2} ‖T_i(w)‖² − δ(w) ∑_{i=1}^{2} ⟨L_{i1}(P_i(w) − λ(w−1)P_i(w−1)) + L_{i2}(P_i(w) − λ(w−1)P_i(w−1)), Q_i(w)⟩
= ∑_{i=1}^{2} ‖T_i(w)‖² − δ(w) ∑_{i=1}^{2} ⟨Q_i(w) − λ(w−1)Q_i(w−1), Q_i(w)⟩ = 0,

∑_{i=1}^{2} ⟨Q_i(w), Q_i(w+1)⟩
= ∑_{i=1}^{2} ⟨Q_i(w), L_{i1}(P_i(w+1)) + L_{i2}(P_i(w+1))⟩
= ∑_{i=1}^{2} ⟨Q_i(w), L_{i1}(T_i(w+1) + λ(w)P_i(w)) + L_{i2}(T_i(w+1) + λ(w)P_i(w))⟩
= ∑_{i=1}^{2} ⟨Q_i(w), L_{i1}(T_i(w+1)) + L_{i2}(T_i(w+1)) + λ(w)Q_i(w)⟩
= ∑_{i=1}^{2} λ(w)‖Q_i(w)‖² − (1/δ(w)) ∑_{i=1}^{2} ⟨R_i(w+1) − R_i(w), L_{i1}(T_i(w+1)) + L_{i2}(T_i(w+1))⟩
= ∑_{i=1}^{2} λ(w)‖Q_i(w)‖² − (1/δ(w)) ∑_{i=1}^{2} ⟨L_{i1}(R_i(w+1) − R_i(w)) + L_{i2}(R_i(w+1) − R_i(w)), T_i(w+1)⟩
= ∑_{i=1}^{2} λ(w)‖Q_i(w)‖² − (1/δ(w)) ∑_{i=1}^{2} ⟨T_i(w+1) − T_i(w), T_i(w+1)⟩
= ∑_{i=1}^{2} λ(w)‖Q_i(w)‖² − (1/δ(w)) ∑_{i=1}^{2} ‖T_i(w+1)‖² = 0,

and

∑_{i=1}^{2} ⟨P_i(w), T_i(w+1)⟩
= ∑_{i=1}^{2} ⟨P_i(w), T_i(w) − δ(w)(L_{i1}(Q_i(w)) + L_{i2}(Q_i(w)))⟩
= ∑_{i=1}^{2} ⟨T_i(w) + λ(w−1)T_i(w−1) + λ(w−1)λ(w−2)T_i(w−2) + ⋯ + λ(w−1)⋯λ(1)T_i(1), T_i(w)⟩
  − δ(w) ∑_{i=1}^{2} ⟨P_i(w), L_{i1}(Q_i(w)) + L_{i2}(Q_i(w))⟩
= ∑_{i=1}^{2} ‖T_i(w)‖² − δ(w) ∑_{i=1}^{2} ⟨L_{i1}(P_i(w)) + L_{i2}(P_i(w)), Q_i(w)⟩ = 0.

Thus, we complete the proof of the conclusion. □
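The loop structure of algorithm 6.1 can be sketched in code. The following is a minimal CGLS-type sketch in Python (assuming NumPy), specialized to a single hypothetical operator L(X) = AXB with adjoint L*(R) = AᴴRBᴴ; in the general setting of this chapter the coupled operators L_{i1}, L_{i2} and their adjoints would take the place of these stand-ins:

```python
import numpy as np

def cgls_axb(A, B, M, iters=500, tol=1e-12):
    """CGLS-type iteration for min_X ||A @ X @ B - M||_F.
    Mirrors the recurrences of algorithm 6.1: step length delta(k),
    lambda(k) = c(k+1)/c(k), and P(k+1) = T(k+1) + lambda(k) P(k)."""
    L = lambda X: A @ X @ B                       # linear operator L(X) = AXB
    Ls = lambda R: A.conj().T @ R @ B.conj().T    # its adjoint L*(R)
    X = np.zeros((A.shape[1], B.shape[0]), dtype=complex)
    R = M - L(X)           # residual R(k)
    T = Ls(R)              # gradient T(k) = L*(R(k))
    P = T.copy()           # search direction P(k)
    c = np.linalg.norm(T) ** 2
    for _ in range(iters):
        Q = L(P)                                  # Q(k) = L(P(k))
        delta = c / np.linalg.norm(Q) ** 2        # delta(k)
        X = X + delta * P
        R = R - delta * Q
        T = Ls(R)
        c_next = np.linalg.norm(T) ** 2
        if np.sqrt(c_next) < tol:                 # T(k+1) ~ 0: stop
            break
        lam = c_next / c                          # lambda(k) = c(k+1)/c(k)
        P = T + lam * P
        c = c_next
    return X
```

For a consistent equation AXB = M, the iterates converge to a least squares solution, and from the zero initial guess to the least Frobenius norm one, in exact arithmetic in finitely many steps.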

Lemma 6.5. In algorithm 6.1, assume that there exists a positive number l such that δ(l) = 0 or δ(l) = ∞. Then (X(l), Y(l)) is a solution of problem 6.1.

Proof. If δ(l) = 0, then ∑_{i=1}^{2} ‖T_i(l)‖² = 0. If δ(l) = ∞, then ∑_{i=1}^{2} ‖Q_i(l)‖² = 0, and hence

∑_{i=1}^{2} ‖T_i(l)‖²
= ∑_{i=1}^{2} ⟨T_i(l) + λ(l−1)T_i(l−1) + λ(l−1)λ(l−2)T_i(l−2) + ⋯ + λ(l−1)⋯λ(1)T_i(1), T_i(l)⟩
= ∑_{i=1}^{2} ⟨P_i(l), T_i(l)⟩ = ∑_{i=1}^{2} ⟨P_i(l), L_{i1}(R_i(l)) + L_{i2}(R_i(l))⟩ = 0.

In this way, for δ(l) = 0 or δ(l) = ∞ we have

T_i(l) = L_{i1}(R_1(l)) + L_{i2}(R_2(l)) = 0,  i = 1, 2.

It then follows from lemma 6.2 that (X(l), Y(l)) is a solution of problem 6.1. Thus, we complete the proof of this lemma. □

Theorem 6.1. Suppose that the coupled matrix equations (6.0.1) are consistent. Then, for any initial matrix pair (X(1), Y(1)), the solution (X*, Y*) of problem 6.1 can be obtained by algorithm 6.1.

Proof. By lemma 6.4, the sequence (T_1(j), T_2(j)), j = 1, 2, 3, …, ∑_{i=1}^{2} 2m_i n_i, generated by algorithm 6.1 forms an orthogonal basis of the real inner product space C^{m_1×n_1} × C^{m_2×n_2}, whose dimension is ∑_{i=1}^{2} 2m_i n_i. Under this circumstance, T_i(∑_{i=1}^{2} 2m_i n_i + 1) = 0, and we conclude that (X(∑_{i=1}^{2} 2m_i n_i + 1), Y(∑_{i=1}^{2} 2m_i n_i + 1)) is a solution of problem 6.1 in the absence of roundoff errors. □

Theorem 6.2. If we choose the initial matrices X(1) = L_{11}(F(1)) + L_{21}(F(1)) and Y(1) = L_{12}(F(1)) + L_{22}(F(1)), in which F(1) ∈ C^{m×n} is an arbitrary complex matrix, or in particular X(1) = 0, Y(1) = 0, then the solution (X*, Y*) generated by algorithm 6.1 is the least norm solution of problem 6.1.

Proof. If we choose the initial matrices X(1) = L_{11}(F(1)) + L_{21}(F(1)) and Y(1) = L_{12}(F(1)) + L_{22}(F(1)), then the matrix sequences X(k), Y(k) generated by algorithm 6.1 can be stated as X(k) = L_{11}(F(k)) + L_{21}(F(k)) and Y(k) = L_{12}(F(k)) + L_{22}(F(k)) for certain complex matrices F(k) ∈ C^{m×n}, k = 2, 3, …. Therefore, there must exist a complex matrix F* ∈ C^{m×n} such that X* = L_{11}(F*) + L_{21}(F*) and Y* = L_{12}(F*) + L_{22}(F*). Now let (X̃*, Ỹ*) be an arbitrary solution of problem 6.1. Then, according to lemma 6.3, there must exist a complex matrix


Z* ∈ C^{m×n} such that X̃* = X* + Z*, Ỹ* = Y* + Z*, with L_{11}(Z*) + L_{21}(Z*) = 0 and L_{12}(Z*) + L_{22}(Z*) = 0. So we have the relations

⟨X*, Z*⟩ = ⟨L_{11}(F*) + L_{21}(F*), Z*⟩ = ⟨F*, L_{11}(Z*) + L_{21}(Z*)⟩ = 0,
⟨Y*, Z*⟩ = ⟨L_{12}(F*) + L_{22}(F*), Z*⟩ = ⟨F*, L_{12}(Z*) + L_{22}(Z*)⟩ = 0.

Thus, we have

‖X̃*‖² = ‖X* + Z*‖² = ‖X*‖² + ‖Z*‖² + 2⟨X*, Z*⟩ = ‖X*‖² + ‖Z*‖² ≥ ‖X*‖²,
‖Ỹ*‖² = ‖Y* + Z*‖² = ‖Y*‖² + ‖Z*‖² + 2⟨Y*, Z*⟩ = ‖Y*‖² + ‖Z*‖² ≥ ‖Y*‖².

It follows that (X*, Y*) is the least Frobenius norm solution of problem 6.1. So we complete the proof of the conclusion. □

Next, we state the minimization property of the proposed algorithm 6.1. This property shows that algorithm 6.1 converges smoothly.

Theorem 6.3. Suppose that (X(1), Y(1)) ∈ C^{m×n} are any initial matrices. Then we have

∑_{i=1}^{2} ‖L_{i1}(X(k+1)) + L_{i2}(Y(k+1)) − E_i‖ = min_{(X,Y) ∈ w_k} ∑_{i=1}^{2} ‖L_{i1}(X) + L_{i2}(Y) − E_i‖,  (6.2.2)

in which (X(k+1), Y(k+1)) is generated by algorithm 6.1 at the (k+1)-th iteration and w_k is the affine subspace

w_k = (X(1), Y(1)) + span((P_1(1), P_2(1)), (P_1(2), P_2(2)), …, (P_1(k), P_2(k))).  (6.2.3)

Proof. According to equation (6.2.3), for any (X, Y) ∈ w_k there must exist numbers a_1, a_2, …, a_k such that

X = X(1) + ∑_{l=1}^{k} a_l P_1(l),  Y = Y(1) + ∑_{l=1}^{k} a_l P_2(l).

Now define a function f of the variables a_1, a_2, …, a_k as

f(a_1, a_2, …, a_k) = ∑_{i=1}^{2} ‖L_{i1}(X(1) + ∑_{l=1}^{k} a_l P_1(l)) + L_{i2}(Y(1) + ∑_{l=1}^{k} a_l P_2(l)) − E_i‖²,


in which the function f is continuous and differentiable. From lemma 6.4 we have

f(a_1, a_2, …, a_k)
= ∑_{i=1}^{2} ‖L_{i1}(X(1)) + L_{i2}(Y(1)) − E_i + ∑_{l=1}^{k} a_l (L_{i1}(P_1(l)) + L_{i2}(P_2(l)))‖²
= ∑_{i=1}^{2} ‖R_i(1)‖² + ∑_{i=1}^{2} ∑_{l=1}^{k} a_l² ‖Q_i(l)‖² − 2 ∑_{i=1}^{2} ∑_{l=1}^{k} a_l ⟨Q_i(l), R_i(1)⟩.

If we now consider the problem of minimizing the function f(a_1, a_2, …, a_k), we obtain

min_{a_l} f(a_1, a_2, …, a_k) = min_{(X,Y) ∈ w_k} ∑_{i=1}^{2} ‖L_{i1}(X) + L_{i2}(Y) − E_i‖²,  l = 1, 2, …, k.

The minimum occurs when

∂f(a_1, a_2, …, a_k)/∂a_l = 0  for l = 1, 2, …, k.

Consequently, we get

a_l = ∑_{i=1}^{2} ⟨Q_i(l), R_i(1)⟩ / ∑_{i=1}^{2} ‖Q_i(l)‖².

From algorithm 6.1, we have R_i(1) = R_i(l) + δ(l−1)Q_i(l−1) + δ(l−2)Q_i(l−2) + ⋯ + δ(1)Q_i(1). Therefore, according to lemma 6.4,

a_l = ∑_{i=1}^{2} ⟨Q_i(l), R_i(l)⟩ / ∑_{i=1}^{2} ‖Q_i(l)‖²
= ∑_{i=1}^{2} ⟨P_i(l), L_{i1}(R_i(l)) + L_{i2}(R_i(l))⟩ / ∑_{i=1}^{2} ‖Q_i(l)‖²
= ∑_{i=1}^{2} ⟨P_i(l), T_i(l)⟩ / ∑_{i=1}^{2} ‖Q_i(l)‖²
= ∑_{i=1}^{2} ⟨T_i(l) + λ(l−1)P_i(l−1), T_i(l)⟩ / ∑_{i=1}^{2} ‖Q_i(l)‖²
= ∑_{i=1}^{2} ‖T_i(l)‖² / ∑_{i=1}^{2} ‖Q_i(l)‖² = δ(l).

By this means, we complete the proof. □

By theorem 6.3, for any initial matrices, the residual norm of the iterates generated by algorithm 6.1 is minimized at the (k+1)-th iteration in the affine subspace w_k. So we have

∑_{i=1}^{2} ‖L_{i1}(X(k+1)) + L_{i2}(Y(k+1)) − E_i‖² ≤ ∑_{i=1}^{2} ‖L_{i1}(X(k)) + L_{i2}(Y(k)) − E_i‖².  (6.2.4)

This indicates that the residual norms ∑_{i=1}^{2} ‖R_i(1)‖, ∑_{i=1}^{2} ‖R_i(2)‖, … are monotonically decreasing, which shows that algorithm 6.1 converges fast and smoothly.

Now problem 6.2 will be considered. For given matrices X_0 and Y_0, from problem 6.1 we have

min_{X,Y ∈ C^{m×n}} ∑_{i=1}^{2} ‖L_{i1}(X) + L_{i2}(Y) − E_i‖
= min_{X,Y ∈ C^{m×n}} ∑_{i=1}^{2} ‖L_{i1}(X − X_0) + L_{i2}(Y − Y_0) − (E_i − L_{i1}(X_0) − L_{i2}(Y_0))‖.  (6.2.5)

If we let Ē_i = E_i − L_{i1}(X_0) − L_{i2}(Y_0), X̄ = X − X_0 and Ȳ = Y − Y_0, then problem 6.2 is equivalent to finding the least Frobenius norm solution (X̄*, Ȳ*) of

min_{X̄,Ȳ ∈ C^{m×n}} ∑_{i=1}^{2} ‖L_{i1}(X̄) + L_{i2}(Ȳ) − Ē_i‖,  (6.2.6)

with the initial matrices X̄(1) = L_{11}(F̄) + L_{21}(F̄) and Ȳ(1) = L_{12}(F̄) + L_{22}(F̄), in which F̄ ∈ C^{m×n} is an arbitrary matrix, or in particular X̄(1) = 0, Ȳ(1) = 0. Therefore, we can state the solutions X̃, Ỹ of problem 6.2, which satisfy ‖X̃ − X_0‖ = min_{X ∈ G_r} ‖X − X_0‖ and ‖Ỹ − Y_0‖ = min_{Y ∈ G_r} ‖Y − Y_0‖, as

X̃ = X̄* + X_0,  (6.2.7)

and

Ỹ = Ȳ* + Y_0.  (6.2.8)

Thus, it is concluded that the solution of problem 6.2 can be obtained through the solution of problem 6.1.
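The reduction (6.2.5)–(6.2.8) can be illustrated in code. The following Python sketch (assuming NumPy, and using a single hypothetical model operator L(X) = AXB in place of the coupled operators) shifts the data by X_0, computes the least norm solution of the shifted problem via a vectorized pseudoinverse, and shifts back:

```python
import numpy as np

def closest_solution(A, B, E, X0):
    """Problem 6.2 for the model equation A X B = E: among all solutions X,
    return the one minimizing ||X - X0||_F, via the shift Xbar = X - X0
    (cf. (6.2.5)-(6.2.8)) and a least-norm solve for Xbar."""
    Ebar = E - A @ X0 @ B                  # shifted right-hand side
    # vec(A Xbar B) = (B^T kron A) vec(Xbar), with column-major vec
    K = np.kron(B.T, A)
    xbar = np.linalg.pinv(K) @ Ebar.flatten(order="F")   # least norm Xbar*
    Xbar = xbar.reshape(X0.shape, order="F")
    return Xbar + X0                       # Xtilde = Xbar* + X0

# a consistent, under-determined instance: many solutions exist
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))
Xtrue = rng.standard_normal((3, 3))
E = A @ Xtrue @ B
X0 = rng.standard_normal((3, 3))
Xhat = closest_solution(A, B, E, X0)
```

Here `Xhat` solves A X B = E and is at least as close to X_0 as any other solution; in particular ‖Xhat − X0‖ ≤ ‖Xtrue − X0‖.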

6.3

Numerical Examples

In this section, we give two numerical examples to test the proposed MCGLS iterative algorithm. The first example shows that the proposed iterative algorithm performs better than the existing LSQR iterative algorithm. The second example shows

that our presented iterative algorithm is also effective for any random coefficient matrices.

Example 6.1. Solve the least Frobenius norm solution of the following coupled matrix equations

A_1 X B_1 + C_1 X D_1 + E_1 X^H F_1 = M_1,
A_2 X B_2 + C_2 X^T D_2 = M_2,  (6.3.1)

in which the coefficient matrices A_1, C_1, E_1, A_2, C_2, the matrices B_1, D_1, F_1, B_2, D_2, and the right-hand sides M_1, M_2 are given complex matrices (their numerical entries are omitted here).

By some computation, we can obtain that the coupled matrix equations (6.3.1) are consistent and have the solution

X = [3i, 21+i, 0; 6i, 9, 8−9i; 11−i, 4, 2−9i; 2−3i, 4, 12].

If the initial matrix is chosen as X(1) = ones(4, 3), then applying algorithm 6.1 the iterative least squares solution is obtained as

X(243) = [2.9999i, 21.0000+0.9999i, 0.0000; 5.9999i, 9.0000, 8.0000−8.9999i; 11.0000−0.9999i, 3.9999, 1.9999−9.0000i; 1.9999−2.9999i, 3.9999, 11.9999].

The residual and the relative error for the coupled matrix equations (6.3.1) are shown in figures 6.3.1 and 6.3.2, in which

r_k = log_10 √(‖M_1 − G‖² + ‖M_2 − H‖²),

where G = A_1 X(k) B_1 + C_1 X(k) D_1 + E_1 X(k)^H F_1 and H = A_2 X(k) B_2 + C_2 X(k)^T D_2, and

err(k) = ‖X(k) − X‖ / ‖X‖.

Remark 1. It follows from figures 6.3.1 and 6.3.2 that the residual and the relative error of the solution to the coupled matrix equations (6.3.1) decrease as the iteration number increases; hence the iterative solution of the coupled matrix equations converges to the exact solution. Furthermore, as the iteration number increases, the residual and the relative error of the LSQR-M iterative algorithm for the coupled matrix equations remain nearly unchanged, while the residual and the relative error of our iterative algorithm are smaller than those of the LSQR-M iterative algorithm. Therefore, our proposed MCGLS iterative algorithm is effective.

Example 6.2. Solve the least Frobenius norm solution of the following coupled matrix equations

A_1 X B_1 + C_1 X D_1 + E_1 Y F_1 = M_1,
A_2 X B_2 + C_2 X D_2 + E_2 Y F_2 = M_2,  (6.3.2)


FIG. 6.3.1 – The residual of solution for example 6.1.

FIG. 6.3.2 – The relative error of solution for example 6.1.

where

A_1 = rand(6, 7) + i · rand(6, 7),

B_1 = rand(5, 4) + i · rand(5, 4),
C_1 = rand(6, 7) + i · rand(6, 7),
D_1 = rand(5, 4) + i · rand(5, 4),
A_2 = rand(6, 7) + i · rand(6, 7),
B_2 = rand(5, 4) + i · rand(5, 4),
C_2 = rand(6, 7) + i · rand(6, 7),
D_2 = rand(5, 4) + i · rand(5, 4),
E_2 = rand(6, 7) + i · rand(6, 7),
F_2 = rand(5, 4) + i · rand(5, 4),
E_1 = rand(6, 7) + i · rand(6, 7),
F_1 = rand(5, 4) + i · rand(5, 4),

and the right-hand sides M_1, M_2 ∈ C^{6×4} are the given complex matrices (their numerical entries are omitted here).

By some computation, we can obtain that the coupled matrix equations (6.3.2) are consistent and have the solution


pair (X, Y) ∈ C^{7×5} × C^{7×5} (the numerical entries of X and Y are omitted here).

According to algorithm 6.1, if we let the initial matrices X(1) = ones(7, 5), Y(1) = ones(7, 5), the iterative solution converges to the exact solution, as shown in figure 6.3.3. The corresponding residual is r(324) = 9.5931 × 10^{−16}, where

r_k = log_10 √(‖M_1 − M'_1‖² + ‖M_2 − M'_2‖²),

with

M'_1 = A_1 X(k) B_1 + C_1 X(k) D_1 + E_1 Y(k) F_1,
M'_2 = A_2 X(k) B_2 + C_2 X(k) D_2 + E_2 Y(k) F_2.

FIG. 6.3.3 – The residual of solution for example 6.2.

Remark 2. From figure 6.3.3, for any random coefficient matrices, the residual becomes smaller and smaller as the iteration number increases, and the iterative solution converges to the exact solution. This also shows that our proposed MCGLS iterative algorithm is effective.

6.4

Conclusions

In this chapter, the MCGLS iterative algorithm is constructed for solving the least Frobenius norm solution of coupled matrix equations. By the proposed iterative algorithm, on the one hand, for any initial matrix pair (X(1), Y(1)) the least Frobenius norm solution can be obtained. On the other hand, if a matrix pair (X_0, Y_0) is given, the optimal approximation solution (X̃, Ỹ) can be derived by first finding the least Frobenius norm solution of a new corresponding set of coupled matrix equations. It is worth noting that our proposed iterative algorithm is suitable for a family of coupled matrix equations whose coefficient matrices are random matrices. In addition, the numerical examples show that our proposed MCGLS iterative algorithm performs better than the existing LSQR iterative algorithm.

Chapter 7

Explicit Solutions to the Matrix Equation X − AXB = CY + R

In this chapter, we are concerned with the explicit solutions to the nonhomogeneous Yakubovich matrix equation

X − AXB = CY + R,  (7.0.1)

where A ∈ R^{n×n}, B ∈ R^{p×p}, C ∈ R^{n×r} and R ∈ R^{n×p} are given real matrices, and X ∈ R^{n×p} and Y ∈ R^{r×p} are the matrices to be determined. Obviously, the well-known Yakubovich matrix equation (7.0.2) and the generalized Sylvester matrix equation (7.0.3),

X − AXB = CY,  (7.0.2)

and

AX − EXF = CY,  (7.0.3)

are special cases of the nonhomogeneous Yakubovich matrix equation.

7.1

Solutions to the Real Matrix Equation X − AXB = CY + R

In this section, we provide the closed-form solutions of the nonhomogeneous Yakubovich matrix equation (7.0.1) by Smith normal form reduction and polynomial matrices. In order to obtain the result, we need the following lemma.

Lemma 7.1. Given matrices A ∈ R^{n×n}, B ∈ R^{p×p} and C ∈ R^{n×r}, suppose that {s | det(I_n − sA) = 0} ∩ λ(B) = ∅. Then there exist polynomial matrices L(s) ∈ R^{n×n}[s] and H(s) ∈ R^{r×n}[s], and a polynomial D(s) ∈ R[s], such that

(I_n − sA)L(s) − CH(s) = D(s)I_n,  (7.1.1)

in which D(s) ≠ 0 for any s ∈ λ(B).

DOI: 10.1051/978-2-7598-3102-9.c007
© Science Press, EDP Sciences, 2023


Proof. Let P(s) ∈ R^{n×n}[s] and Q(s) ∈ R^{(n+r)×(n+r)}[s] be two unimodular polynomial matrices. We transform [I_n − sA  −C] into its Smith normal form

P(s)[I_n − sA  −C]Q(s) = [diag_{i=1}^{n} d_i(s)  0],  (7.1.2)

where d_{i−1}(s) | d_i(s) ∈ R[s], i ∈ I[2, n]. Since the relation {s | det(I_n − sA) = 0} ∩ λ(B) = ∅ holds, we have d_i(c) ≠ 0, i ∈ I[1, n], for any c ∈ λ(B). Partition Q(s) into [Q_1(s)  Q_2(s)] with Q_1(s) ∈ R^{(n+r)×n}[s]. It follows from (7.1.2) that

P(s)[I_n − sA  −C]Q_1(s) = diag_{i=1}^{n} d_i(s)
⇔ P(s)[I_n − sA  −C]Q_1(s) diag_{i=1}^{n} (d_n(s)/d_i(s)) = I_n d_n(s)
⇔ [I_n − sA  −C] Q_1(s) diag_{i=1}^{n} (d_n(s)/d_i(s)) P(s) = I_n d_n(s).

Thus, one can choose D(s) = d_n(s) and

[L(s); H(s)] = Q_1(s) diag_{i=1}^{n} (d_n(s)/d_i(s)) P(s)

to satisfy the relation (7.1.1). □

In the following, we are going to provide a closed-form solution of the nonhomogeneous Yakubovich matrix equation (7.0.1) by the above lemma. Theorem 7.1. Given matrices A 2 Rnn , B 2 Rpp , C 2 Rnr and R 2 Rnp , supP P nn i ½s and H ðsÞ ¼ ti¼0 H i s i 2 pose that polynomial matrices LðsÞ ¼ t1 i¼0 Li s 2 R P Rrn ½s, and polynomial DðsÞ ¼ ti¼0 Di si 2 R½s satisfy (7.1.1) T with DðsÞ 6¼ 0, for any s 2 kðBÞ. Further, let the relation fsjdetðI n  sAÞ ¼ 0g kðBÞ ¼ / hold, and f ðI n ;AÞ ðsÞ ¼ detðI n  sAÞ ¼ an s n þ    þ a1 s þ a0 ; a0 ¼ 1; adjðI n  sAÞ ¼ Rn1 s n1 þ    þ R1 s þ R0 : Then all the solutions to the nonhomogeneous Yakubovich matrix equation (7.0.1) can be stated as 8 n1 tP 1 P > > Ri CZB i þ Li R½DðBÞ1 B i ;

> : Y ¼ Zf ðI n ;AÞ ðBÞ þ H i R½DðBÞ1 B i : i¼0

where Z 2 R

rp

is an arbitrary chosen free parameter matrix.

Proof. We first show that the matrices X and Y given in (7.1.3) are solutions of the matrix equation (7.0.1). A direct calculation gives

Explicit Solutions to the Matrix Equation X  AXB ¼ CY þ R

117

X  AXB n 1 t1 n 1 t1 X X X X Ri CZB i þ Li R½DðBÞ1 B i  A Ri CZB i B  A Li R½DðBÞ1 B i B ¼ i¼0

¼

n 1 X

i¼0 n 1 X

Ri CZB i  A

i¼0

¼ R0 CZ þ þ

t1 X

n 1 X

Ri CZB i þ 1 þ

i¼0

t1 X

i¼0

i¼0 t1 X

Li R½DðBÞ1 B i  A

i¼0

Li R½DðBÞ1 B i þ 1

i¼0

ðRi  ARi1 ÞCZB i  ARn1 CZB n þ L0 R½DðBÞ1

i¼1

ðLi  ALi1 ÞR½DðBÞ1 B i  ALt1 R½DðBÞ1 B t :

i¼1

ð7:1:4Þ According to ðI n  sAÞadjðI n  sAÞ ¼ I n detðI n  sAÞ, it is easy to obtain 8 < R0 ¼ a0 I n ; R  ARi1 ¼ ai I n ; i 2 I ½1; n  1; ð7:1:5Þ : i ARn1 ¼ an I n : So we have n1 n P P ðRi  ARi1 ÞCZB i  ARn1 CZB n ¼ CZ ai B i ¼ CZf ðI n ;AÞ ðBÞ: R0 CZ þ i¼1

i¼0

ð7:1:6Þ Moreover, comparing the power of s in (7.1.1) yields 8 < L0 ¼ CH 0 þ D0 I n ; L  ALi1 ¼ CH i þ Di I n ; i 2 I ½1; t  1; : i ALt1 ¼ CH t þ Dt I n :

ð7:1:7Þ

With these relations, we can derive L0 R½DðBÞ1 þ

t1 X ðLi  ALi1 ÞR½DðBÞ1 B i  ALt1 R½DðBÞ1 B t i¼1

¼ ðCH 0 þ D0 I n ÞR½DðBÞ1 þ

t1 X ðCH i þ Di I n ÞR½DðBÞ1 B i i¼1

þ ðCH t þ Dt I n ÞR½DðBÞ1 B t t t X X ¼ CH i R½DðBÞ1 B i þ Di R½DðBÞ1 B i ¼

i¼0 t X

i¼0

CH i R½DðBÞ1 B i þ R½DðBÞ1

i¼0

¼

t X

¼

i¼0

Di B i

i¼0

CH i R½DðBÞ1 B i þ R½DðBÞ1 DðBÞ

i¼0

t X

t X

CH i R½DðBÞ1 B i þ R:

ð7:1:8Þ


Substituting (7.1.6) and (7.1.8) into (7.1.4), we have

X − AXB = CZ f_{(I_n,A)}(B) + ∑_{i=0}^{t} CH_i R [D(B)]^{−1} B^i + R
= C (Z f_{(I_n,A)}(B) + ∑_{i=0}^{t} H_i R [D(B)]^{−1} B^i) + R = CY + R.

Therefore, the matrices X and Y given in (7.1.3) satisfy the matrix equation (7.0.1).

Next, the completeness of solution (7.1.3) will be proved. Relation (7.1.3) involves the free parameter matrix Z, which is an r × p matrix, so solution (7.1.3) has rp parameters. Applying vec(·) to both sides of the expression for (X, Y) in (7.1.3), we have

[vec(X); vec(Y)] = W [vec(Z); vec(R)],

in which

W = [M_11  M_12; M_21  M_22],  M_11 = ∑_{i=0}^{n−1} [(B^i)^T ⊗ (R_i C)],
M_12 = ∑_{i=0}^{t−1} [((D(B))^{−1} B^i)^T ⊗ L_i],  M_21 = [f_{(I_n,A)}(B)]^T ⊗ I_r,
M_22 = ∑_{i=0}^{t} [((D(B))^{−1} B^i)^T ⊗ H_i].

Denote the set

R̃ = {(X, Y) | [vec(X); vec(Y)] = W [vec(Z); vec(R)], Z ∈ R^{r×p}};

according to the above part of the proof, R̃ ⊆ R, where R denotes the solution set of matrix equation (7.0.1). Obviously, R̃ = R if and only if dim(R̃) = rp. Since dim(Z) = rp, we have dim(R̃) = rp if and only if rank(W) = rp, and rank(W) = rp if M_21 is nonsingular; M_21 is nonsingular if and only if f_{(I_n,A)}(B) is a nonsingular matrix. With f_{(I_n,A)}(s) = det(I_n − sA) = a_n s^n + ⋯ + a_1 s + a_0, a_0 = 1, and adj(I_n − sA) = R_{n−1} s^{n−1} + ⋯ + R_1 s + R_0, the condition {s | det(I_n − sA) = 0} ∩ λ(B) = ∅ implies that the matrix f_{(I_n,A)}(B) = a_n B^n + ⋯ + a_1 B + a_0 I_p is nonsingular. Therefore, the mapping Z → (X, Y) defined by (7.1.3) is injective, which shows the completeness of solution (7.1.3). Thus the proof has been completed. □

119

Remark 1. In[142], in order to obtain the solution, it demands the matrices Di ; i ¼ 1; 2; . . .; w: satisfying the unknown matrices X ¼ BD 0 þ ABD 1 þ    þ Aw1 BD w1 þ Aw BD w and XY ¼ R. In order to make the matrices Di ; i ¼ 1; 2; . . .; w: easier, it expect that the matrices X and Y have low dimension. In this case, ðX; Y Þ is a pair of full rank factorization of R. But in the present chapter, it has no this limit. From T theorem 7.1, it is easy to know that under the condition of fsjdetðI n  sAÞ ¼ 0g kðBÞ ¼ / all the solutions of the matrix equation (7.0.2) can be explicitly given by 8 n1 P < X¼ Ri CZB i ; ð7:1:9Þ i¼0 : Y ¼ Zf ðI n ;AÞ ðBÞ: Moreover, by theorem 7.1 it is obvious that a special solution of the matrix equation (7.0.1) is 8 t1 P > > Li R½DðBÞ1 B i ;

> :Y ¼ H i R½DðBÞ1 B i : i¼0

Regarding the solution of ai ; i 2 I ½0; n, and Ri ; i 2 I ½0; n  1, the so-called generalized Leverrier algorithm[139] can be stated as the following relation: 8 < Ri ¼ ARi1 þ ai I n ; i 2 I ½1; n  1; R0 ¼ I ; i 2 I ½1; n: ð7:1:10Þ : trace ðARi1 Þ ai ¼ : i For relative lower-order cases, the Smith norm form reduction can be realized by applying elementary polynomial matrix transformation. For high-order cases, the Maple function “smith” can be readily employed. Similarly to the reference[145], we have the following equivalent form of the presented solution (7.1.3). For this solution, a lemma is necessary. The idea of the proof of this lemma is motivated by[145,148]. Lemma 7.2. Suppose Li ; i 2 I ½0; t  1, and H i ; Di ; i 2 I ½0; t, be the coefficients of LðsÞ, H ðsÞ and DðsÞ, respectively. Then the relation (7.1.1) holds if and only if H i ; Di ; i 2 I ½0; t, satisfy t X i¼0

ðAÞti CH i ¼ 

t X

Di ðAÞti ;

ð7:1:11Þ

i¼0

and Li ; i 2 I ½0; t  1, satisfy ½L0 L1    Lt1  ¼ Ctr t ðA; C ÞS t ðH ðsÞÞ þ Ctr t ðA; I n ÞS t ðDðsÞI n Þ:

ð7:1:12Þ

Solutions to Linear Matrix Equations and Their Applications

120

Proof. By theorem 7.1 we can know that Li ; i 2 I ½0; t  1, and H i ; Di ; i 2 I ½0; t satisfy (7.1.1) if and only if they satisfy (7.1.7). Therefore, we only need to show that the relation (7.1.7) is equivalent to the relations (7.1.11) and (7.1.12). From the recursive relation (7.1.7), a direct calculation gives 8 L0 ¼ CH 0 þ D0 I n ; > > > > < L1 ¼ CH 1 þ ACH 0 þ D1 I n þ AD0 I n ; ð7:1:13Þ  > > > Lt1 ¼ CH t1 þ ACH t2 þ    þ At1 CH 0 > : þ Dt1 I n þ Dt2 A þ    þ D0 At1 : and 0 ¼ CH t þ ACH t1 þ    þ At CH 0 þ Dt I þ Dt1 A þ    þ D0 At :

ð7:1:14Þ

The relation (7.1.14) is exactly (7.1.11). Rewriting (7.1.14) in a matrix form one can derive (7.1.12). In this case, it is shown that (7.1.7) implies (7.1.11) and (7.1.12). In addition, it follows from the relation (7.1.12) that ALt1 ¼ ACH t1 þ A2 CH t2 þ    þ At CH 0 þ Dt1 A þ Dt2 A2    þ D0 At : Combining this relation with (7.1.11) we can obtain the last expression of (7.1.7). For i 2 I ½1; t  1, according to (7.1.12) one has ALi1 þ CH i þ Di I n i1 X

¼A ½Aij1 CH j þ Dj Aij1  þ CH i þ Di I n j¼0

¼

i1 X

½Aij CH j þ Dj Aij  þ CH i þ Di I n

j¼0 i X ¼ ½Aði þ 1Þj1 CH j þ Dj Aði þ 1Þj1  j¼0

¼ Li : This is just the second expression of (7.1.7). The first expression of (7.1.7) is obvious. Thus the relation (7.1.11) and (7.1.12) imply the relation (7.1.7). So the proof is completed. □ In the following, the presented closed-form solution to the matrix equation (7.0.1) is in an extremely neat form represented by the controllability matrix associated with the matrices A and C , the observability matrix associated with B and the free parameter matrix Z , and the symmetric operator matrix. This property may propose convenience and advantages to the further analysis of the matrix equation (7.0.1). nn pp nr Theorem 7.2. Given matrices A 2 R and R 2 Rnp , T , B2R , C 2R suppose that fsjdetðI n  sAÞ ¼ 0g kðBÞ ¼ /, and the polynomial matrix

Explicit Solutions to the Matrix Equation X  AXB ¼ CY þ R

121

P P H ðsÞ ¼ ti¼1 H i s i 2 Rrn ½s and polynomial DðsÞ ¼ ti¼1 Di s i 2 R½s satisfy (7.1.11) and DðsÞ 6¼ 0, for any s 2 kðBÞ. Then all the solutions to the matrix equation (7.0.1) can be stated as 8 X ¼ Ctr n ðA; C ÞS n ðf ðI n ;AÞ ðsÞI r ÞObs n ðB; Z Þ > >

> > > > þ Ctr t ðA; C ÞS t ðH ðsÞI r ÞObs t B; R½DðBÞ1 > <

1 ð7:1:15Þ þ Ctr ðA; I ÞS ðDðsÞI ÞObs B; R½DðBÞ ; t n t n t > > > > t P > > > H i R½DðBÞ1 B i ; :Y ¼ Zf ðI n ;AÞ ðBÞ þ i¼0

where Z is an arbitrary chosen real parameter matrix. Proof. According to the relation (7.1.10), we can derive 8 R0 ¼ I n ; > > > > < R1 ¼ a1 I n þ A; R2 ¼ a2 I n þ a1 A þ A2 ; > > >... > : Rn1 ¼ an1 I n þ an2 A þ    þ An1 : This relation can be compactly expressed as Ri ¼

i X

ak Aik ; a0 ¼ 1; i ¼ 1; 2; . . .; n  1:

k¼0

Therefore, one can obtain n 1 X

Ri CZB i ¼

i¼0

n 1 X i X

ak Aik CZB i :

i¼0 k¼0

With these relations, one easily has n 1 X i¼0

Ri CZB i ¼ CtrðA; C ÞS n ðf ðI n ;AÞ ðsÞI r ÞObs n ðB; Z Þ;

ð7:1:16Þ

It follows from lemma 7.2 that the relation (7.1.1) holds if and only if the relations (7.1.11) and (7.1.12) hold. So we have t1 X

Li R½DðBÞ1 B i

i¼0

¼ ½L0 L1    Lt1 Obs t ðB; R½DðBÞ1 Þ ¼ ½Ctr t ðA; C ÞS t ðH ðsÞÞ þ Ctr t ðA; I n ÞS t ðDðsÞÞI n   Obs t ðB; R½DðBÞ1 Þ:

ð7:1:17Þ

122

Solutions to Linear Matrix Equations and Their Applications

Substituting (7.1.16) and (7.1.17) into the expression (7.1.3) gives (7.1.15). The proof is completed. □

Example 7.1. In order to show the efficiency of theorem 7.2, we give an example in the form of (7.0.2). We use the parameter matrices

A = [9 7 2; 1 1 0; 7 1 1],  B = [1 4 3; 9 6 2; 1 0 1],  C = [4 2 4; 4 11 10; 4 2 1],

and one can check that {s | det(I − sA) = 0} ∩ λ(B) = ∅. By simple computations, it is easy to obtain

f_{(I_3,A)}(s) = det(I − sA) = 6s³ − 4s² − 3s + 1,

together with the 9 × 9 symmetric operator matrix S_3(f_{(I_3,A)}(s)I_3), built from the coefficients of f_{(I_3,A)}(s), and the 3 × 9 controllability matrix Ctr_3(A, C) = [C  AC  A²C] (their numerical entries are omitted here).

1 1:2808 1:8309 3:0644 B 4:0899 5:9867 10:1498 C C B B 3:8419 5:7190 9:7834 C C B B 0:9027 2:2950 4:5257 C C B C Obs 3 ðB; Z Þ ¼ B B 3:0786 7:4407 14:5360 C: B 3:0300 7:0635 13:7130 C C B B 1:0587 2:9131 3:5466 C C B @ 3:4920 9:4151 11:9690 A 3:4004 8:8951 11:7060 0 1 1:2808 1:8309 3:0644 Choose Z ¼ @ 4:0899 5:9867 10:1498 A; so from theorem 7.2 we have 3:8419 5:7190 9:7834 0 1 0 1 1 4 9 52:4426 32:9016 12:9672 X ¼ @ 2 7 0 A; Y ¼ @ 156:4645 99:4153 38:6940 A: 1 1 2 136:6339 87:8962 33:0765 0

Explicit Solutions to the Matrix Equation X  AXB ¼ CY þ R

123

When the matrix triple (A, I_n, C) is R-controllable, that is,

rank[I_n − sA  −C] = n  for any s ∈ R,

the approach of theorem 7.2 can be readily employed to solve the matrix equation (7.0.1) without use of the Smith normal form reduction. This is summarized in the following corollary.

Corollary 7.1. Given matrices A ∈ R^{n×n}, B ∈ R^{p×p}, C ∈ R^{n×r} and R ∈ R^{n×p}, suppose that {s | det(I_n − sA) = 0} ∩ λ(B) = ∅ and that (A, I_n, C) is R-controllable. Let the polynomial matrix H(s) = ∑_{i=0}^{t} H_i s^i ∈ R^{r×n}[s] satisfy

∑_{i=0}^{t} A^{t−i} C H_i = −A^t.  (7.1.18)

Then all the solutions to the matrix equation (7.0.1) can be characterized by

X = Ctr_n(A, C) S_n(f_{(I_n,A)}(s)I_r) Obs_n(B, Z) + [Ctr_t(A, C) S_t(H(s)) + Ctr_t(A, I_n)] Obs_t(B, R),
Y = Z f_{(I_n,A)}(B) + ∑_{i=0}^{t} H_i R B^i,  (7.1.19)

in which Z is an arbitrarily chosen parameter matrix.

According to theorem 7.2, when (A, I_n, C) is R-controllable, this corollary is derived by choosing the polynomial D(s) to be 1. In this case, the relations [D(B)]^{−1} = I_n and S_t(D(s)I_n) = I hold, and the matrix equation (7.1.11) becomes the matrix equation (7.1.18). If we denote H[t] = [H_t^T H_{t−1}^T ⋯ H_0^T]^T, this equation can be rewritten as

Ctr_{t+1}(A, C) H[t] = −A^t.  (7.1.20)

If the number t is sufficiently large, one has

rank[Ctr_{t+1}(A, C)] = rank[Ctr_{t+1}(A, C)  −A^t].  (7.1.21)

Under this condition, the matrices H_i, i ∈ I[0, t], can be obtained from (7.1.20) by using numerically reliable approaches such as QR decomposition and SVD decomposition. Furthermore, the relation S_t(D(s)I_n) = 0 holds when the polynomial D(s) is selected to be s^t. Similarly to the above argument, it is easy to derive the following corollary.

Corollary 7.2. Let A ∈ R^{n×n}, B ∈ R^{p×p} and C ∈ R^{n×r}, and suppose that {s | det(I_n − sA) = 0} ∩ λ(B) = ∅ and that 0 is not included in the set λ(B). Let the polynomial matrix H(s) = ∑_{i=0}^{t} H_i s^i ∈ R^{r×n}[s] satisfy

∑_{i=0}^{t} A^{t−i} C H_i = −I_n.  (7.1.22)


then all the solutions to the matrix equation (7.0.1) can be stated as

X = Ctr_t(A, C) S_n(f_{(I_n,A)}(s) I_r) Obs_n(B, Z) + Ctr_t(A, C) S_t(H(s)) Obs_n(B, RB^t),
Y = Z f_{(I_n,A)}(B) + ∑_{i=0}^{t} H_i R B^i,  (7.1.23)

where Z is an arbitrarily chosen parameter matrix.

7.2 Parametric Pole Assignment for Descriptor Linear Systems by P-D Feedback

Example 7.2. As an application of the proposed solution, we consider the following linear system:

E ẋ = Ax + Bu,  (7.2.1)

where A, E ∈ R^{n×n} and B ∈ R^{n×r} are known coefficient matrices. If the state feedback controller

u = −K_p x − K_d ẋ,  (7.2.2)

is applied to the above system (7.2.1), the closed-loop system becomes

(E + BK_d) ẋ = (A − BK_p) x.  (7.2.3)

Thus, the closed-loop system (7.2.3) is regular if and only if

det(E + BK_d) ≠ 0.  (7.2.4)

In this case, if the closed-loop system is supposed to be regular, the closed-loop system (7.2.3) can be rewritten as

ẋ = A_c x,  A_c = (E + BK_d)^{−1}(A − BK_p).  (7.2.5)

Given a set of desired closed-loop poles s_i, i = 1, 2, …, n, the parametric pole assignment problem is to determine the matrices K_d and K_p such that the closed-loop system matrix (E + BK_d)^{−1}(A − BK_p) has the desired eigenvalues and eigenstructure, i.e., to determine the matrices K_d and K_p such that

K_p = (Y − K_d X F) X^{−1},  (7.2.6)

and

σ(A_c) = σ((E + BK_d)^{−1}(A − BK_p)) = {s_i, i = 1, 2, …, n}.  (7.2.7)

Note that (7.2.7) is satisfied if and only if there exist a nonsingular matrix X and a matrix F satisfying σ(A_c) = σ(F) such that

X^{−1}[(E + BK_d)^{−1}(A − BK_p)]X = F,


which is equivalent to the generalized Sylvester matrix equation (7.0.3) by setting Y = K_p X + K_d X F. Therefore, the pivotal step in solving the parametric pole assignment problem is to solve the generalized Sylvester matrix equation (7.0.3). The following lemma shows that the generalized Sylvester matrix equation (7.0.3) is an equivalent form of the generalized discrete Sylvester matrix equation (7.0.2).

Lemma 7.3. [81] The generalized Sylvester matrix equation (7.0.3), where A, E ∈ R^{n×n}, B ∈ R^{n×r} and F ∈ R^{p×p} are known and the matrix pair (E, A) is regular, is equivalent to the generalized discrete Sylvester matrix equation (7.0.2) with

M = (cE − A)^{−1}E,  N = cI − F,  T = (cE − A)^{−1}B,

where c is an arbitrary scalar such that (cE − A) is nonsingular.

Lemma 7.4. [81] There exists a matrix K_d such that det(E + BK_d) ≠ 0 holds if and only if rank[E  B] = n. Further, let rank[E  B] = n hold and let U ∈ R^{n×n} be a nonsingular matrix such that

UE = [E_1; E_2],  UB = [P_b; 0],

where P_b ∈ R^{r×r} is nonsingular. Then all the matrices K_d satisfying det(E + BK_d) ≠ 0 can be characterized as K_d = P_b^{−1}(P − E_1), with P ∈ R^{r×n} of full row rank satisfying Im(E_2^T) ⊕ Im(P^T) = R^n.

Generally speaking, if the desired eigenvalue set contains complex numbers, the coefficient matrix F is a complex matrix. Thus the existing results involve complex arithmetic, for example in [127]. However, complex arithmetic can be avoided if the results of this chapter are used. To illustrate our approach to solving the generalized discrete Sylvester matrix equation X − AXB = CY and its application to parametric pole assignment for descriptor linear systems, we consider in the following a system in the form of (7.0.3) with system matrices E, A ∈ R^{6×6} and B ∈ R^{6×2} borrowed from [129,135].


It is obvious that such a system is C-controllable. If we choose c = 1, then according to lemma 7.3 we can get the matrices M = (cE − A)^{−1}E ∈ R^{6×6} and T = (cE − A)^{−1}B ∈ R^{6×2}. The inverse characteristic polynomial of M is

b(s, ω) = b(s, 3) = det(sM − I) = s³ + 1.

Therefore, we have d(s, 6) = 0s⁶ + 0s⁵ + 0s⁴ + s³ + 1 and

S₂(I₂, A) =
[ I₂  0   0   I₂  0   0
  0   I₂  0   0   I₂  0
  0   0   I₂  0   0   I₂
  0   0   0   I₂  0   0
  0   0   0   0   I₂  0
  0   0   0   0   0   I₂ ],

where each entry denotes a 2 × 2 block (0 standing for 0·I₂).

We now consider a P-D feedback law in the form of (7.2.2). We select the closed-loop eigenvalue set as

σ(A_c) = σ(F) = {−1, −1, −2, −2, −1 ± i}.

Thus, the matrix F can be chosen as

F =
[ −1  0   0   0   0   0
   0  −1  0   0   0   0
   0   0  −2  0   0   0
   0   0   0  −2  0   0
   0   0   0   0   0   1
   0   0   0   0  −2  −2 ],
N = cI₆ − F.

Under this condition, according to theorem 7.2, a complete parametric solution to the generalized discrete Sylvester matrix equation (7.0.2) can be expressed as

X = Ctr₆(M, T) S₂(I₂, A) Obs₆(Z, N),
Y = Z − ZN³,

where Z ∈ R^{2×6} is an arbitrary parametric matrix. In order to guarantee that X is nonsingular, we need

(z_{25}² − 2z_{26}z_{25} + 2z_{26}²)(z_{22}z_{11} − z_{21}z_{12})(z_{14}z_{23} + z_{13}z_{24}) ≠ 0.


In addition, the matrices U, E₁, E₂ and P_b defined in lemma 7.4 can be computed for this example; in particular,

E₁ = [ 1 0 0 0 0 0; 0 0 0 0 0 0 ],  P_b = [ 1 0; 1 1 ].

Therefore, by lemma 7.4 all the derivative gain matrices K_d can be parameterized as

K_d = P_b^{−1}(P − E₁)
    = [ p_{11} − 1           p_{12}            p_{13}            p_{14}            p_{15}            p_{16}
        p_{21} + 1 − p_{11}  p_{22} − p_{12}   p_{23} − p_{13}   p_{24} − p_{14}   p_{25} − p_{15}   p_{26} − p_{16} ],

where P = (p_{ij}) ∈ R^{2×6} is a parameter matrix satisfying p_{26}p_{14} − p_{16}p_{24} ≠ 0. According to (7.2.5), by specially choosing the parameter matrices Z and P, particular gain matrices K_d and K_p are then obtained from K_d = P_b^{−1}(P − E₁) and K_p = (Y − K_d X F)X^{−1}.
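The pole-assignment mechanics above can be illustrated with a small random experiment (a hedged sketch: the matrices below are arbitrary placeholders, not those of example 7.2). If AX − EXF = BY holds with X nonsingular, then K_p = (Y − K_d X F)X^{−1} gives (A − BK_p)X = (E + BK_d)XF, so the closed-loop matrix A_c is similar to F:

```python
import numpy as np

# Hedged sketch of the P-D pole-assignment mechanics (random placeholder data):
# if A X - E X F = B Y with X nonsingular, then K_p = (Y - K_d X F) X^{-1}
# yields (A - B K_p) X = (E + B K_d) X F, so the closed-loop matrix
# A_c = (E + B K_d)^{-1} (A - B K_p) is similar to F.
rng = np.random.default_rng(2)
n, r = 4, 2
E, B = rng.normal(size=(n, n)), rng.normal(size=(n, r))
X, F = rng.normal(size=(n, n)), rng.normal(size=(n, n))
Y, Kd = rng.normal(size=(r, n)), rng.normal(size=(r, n))

A = (E @ X @ F + B @ Y) @ np.linalg.inv(X)     # forces A X - E X F = B Y
Kp = (Y - Kd @ X @ F) @ np.linalg.inv(X)

assert np.allclose((A - B @ Kp) @ X, (E + B @ Kd) @ X @ F)
Ac = np.linalg.solve(E + B @ Kd, A - B @ Kp)
# A_c = X F X^{-1}, so A_c and F share a characteristic polynomial
assert np.allclose(np.poly(Ac), np.poly(F))
```

This shows how, for any admissible K_d, the proportional gain K_p is recovered from a solution pair (X, Y) of the generalized Sylvester equation so that A_c inherits the spectrum of F.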

Example 7.3. Consider the following linear system:

E ẋ = Ax + Bu,  (7.2.8)

where A, E ∈ R^{n×n} and B ∈ R^{n×r} are known coefficient matrices. If the state feedback controller

u = −K_p x − K_d ẋ,  (7.2.9)

is applied to the above system (7.2.8), the closed-loop system becomes

(E + BK_d) ẋ = (A − BK_p) x.  (7.2.10)

The eigenstructure assignment problem is to determine the matrix K_p such that the closed-loop system matrix A − BK_p has the desired eigenvalues and structure, i.e., to determine the matrix K_p such that

A − BK_p = EXFX^{−1},  (7.2.11)

where F is the desired Jordan form of the closed-loop system and X is the corresponding eigenvector matrix. If we choose K_p X + K_d X F = Y, equation (7.2.11) is equivalent to the generalized Sylvester matrix equation AX − EXF = BY. By lemma 7.3, the generalized Sylvester matrix equation AX − EXF = BY is equivalent to the Yakubovich matrix equation X − MXN = TY. By theorem 7.1, we can obtain the solution to the Yakubovich matrix equation X − MXN = TY.

If we choose

A = [ 2 4 3; 1 1 0; 1 0 1 ],  E = [ 4 3 2; 9 8 7; 4 3 2 ],  F = [ 1 2 3; 0 1 2; 4 0 1 ],  B = [ 9 7 6; 4 3 2; 0 1 0 ],

we can check that {s | det(I − sA) = 0} ∩ λ(B) = ∅. If we choose c = 1, by lemma 7.3 we have

M = [ 1.4400 1.2800 1.1200; 1.4200 1.5400 1.6600; 1.0600 1.2200 1.3800 ],
N = [ 0 −2 −3; 0 0 −2; −4 0 0 ],
T = [ 1.8800 1.2000 1.3600; 3.3400 2.6000 2.4800; 0.6200 0.8000 0.6400 ].

By simple computations, it is easy to obtain

f_{(I₃,A)}(s) = det(I − sM) = 0.3s² − 1.28s + 1,
R₀ = I₃,
R₁ = [ 0.16 1.28 1.12; 1.42 2.82 1.66; 1.06 1.22 0.10 ],

with R₂ obtained in the same way. Choose

Z = [ 106.91 38.1915 106.3752; 70.0416 29.1167 71.1143; 230.1276 82.4103 228.1911 ],

so from theorem 7.1 we have

X = [ 3 2 4; 1 6 3; 4 2 1 ],
Y = [ 189.3333 110.6667 243.3333; 130.0000 76.0000 159.0000; 410.1667 236.3333 520.6667 ].

7.3 Conclusions

In this chapter, the solutions to the nonhomogeneous Yakubovich matrix equation in the real field have been presented by means of the Smith normal form reduction and polynomial matrices. The proposed solutions provide all the degrees of freedom, which are represented by an arbitrarily chosen parameter matrix Z. The coefficient matrices are not restricted to be in any canonical form. The matrix B appears explicitly in the solutions and thus can be unknown a priori. From the discussion in this chapter, one can observe that the solutions to the nonhomogeneous Yakubovich matrix equation are crucial as the theoretical basis for the development of many kinds of other matrix equations, and they deserve further investigation in the future. In addition, as a theoretical generalization of the well-known generalized Sylvester matrix equation, this equation may be helpful for future control applications.

Chapter 8
Explicit Solutions to the Nonhomogeneous Yakubovich-Transpose Matrix Equation

The well-known Kalman-Yakubovich-transpose matrix equation

X − AX^T B = C,  (8.0.1)

and the general discrete Lyapunov-transpose matrix equation

X − B^T X^T B = CC^T,  (8.0.2)

are closely related to many problems in conventional linear control system theory, such as pole/eigenstructure assignment design, Luenberger-type observer design, robust fault detection, and so on. Moreover, the generalized discrete Yakubovich-transpose matrix equation

X − AX^T B = CY,  (8.0.3)

where A ∈ R^{n×p}, B ∈ R^{n×p} and C ∈ R^{n×r} are the known matrices and X ∈ R^{n×p}, Y ∈ R^{r×p} are the unknown matrices, has important applications in dealing with complicated linear systems, such as large-scale systems with interconnections, linear systems with certain partitioned structures or extended models, and second-order or higher-order linear systems. As a generalization of the above transpose matrix equations, the following nonhomogeneous Yakubovich-transpose matrix equation

X − AX^T B = CY + R,  (8.0.4)

is considered in this chapter, where A ∈ R^{n×p}, B, R ∈ R^{n×p} and C ∈ R^{n×r} are the given matrices, and X ∈ R^{n×p}, Y ∈ R^{r×p} are the unknown matrices. In the present chapter, two methods for obtaining the solution to the nonhomogeneous Yakubovich-transpose matrix equation are given. Furthermore, an equivalent form of the solution is proposed. One of the solutions is established with the controllability matrix, the observability matrix, and a symmetric operator matrix.

DOI: 10.1051/978-2-7598-3102-9.c008 © Science Press, EDP Sciences, 2023

8.1 The First Approach

To obtain the result, we need the following lemma, established with the aid of the Smith normal form reduction.

Lemma 8.1. Given matrices A ∈ R^{n×p}, B ∈ R^{n×p} and C ∈ R^{n×r}, suppose that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅. Then there exist polynomial matrices L(s) ∈ R^{n×n}[s], H(s) ∈ R^{r×n}[s], and a polynomial D(s) ∈ R[s] such that

(I − sAB^T)L(s) − CH(s) = D(s)I_n,  (8.1.1)

where D(c) ≠ 0 for any c ∈ λ(B^T A).

Proof. Let P(s) ∈ R^{n×n}[s] and Q(s) ∈ R^{(n+r)×(n+r)}[s] be two unimodular polynomial matrices that transform [I_n − sAB^T  −C] into its Smith normal form, that is,

P(s)[I_n − sAB^T  −C]Q(s) = [diag_{i=1}^{n} d_i(s)  0],  (8.1.2)

with d_{i−1}(s) | d_i(s) ∈ R[s], i ∈ I[2, n]. Since {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅, we have d_i(c) ≠ 0, i ∈ I[1, n], for any c ∈ λ(B^T A). Partition Q(s) into [Q_1(s)  Q_2(s)] with Q_1(s) ∈ R^{(n+r)×n}[s]. It follows from (8.1.2) that

P(s)[I_n − sAB^T  −C]Q_1(s) = diag_{i=1}^{n} d_i(s)
⟺ P(s)[I_n − sAB^T  −C]Q_1(s) diag_{i=1}^{n} (d_n(s)/d_i(s)) = I_n d_n(s)
⟺ [I_n − sAB^T  −C](Q_1(s) diag_{i=1}^{n} (d_n(s)/d_i(s)) P(s)) = I_n d_n(s).

Thus, one can choose D(s) = d_n(s) and

[L(s); H(s)] = Q_1(s) diag_{i=1}^{n} (d_n(s)/d_i(s)) P(s)

to satisfy the relation (8.1.1). □

With this lemma, we are now in a position to provide a closed-form solution to the nonhomogeneous Yakubovich-transpose matrix equation (8.0.4).

Theorem 8.1. Let A ∈ R^{n×p}, B ∈ R^{n×p} and C ∈ R^{n×r}, suppose that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅, and let the polynomial matrices L(s) = ∑_{i=0}^{t−1} L_i s^i ∈ R^{n×n}[s] and H(s) = ∑_{i=0}^{t} H_i s^i ∈ R^{r×n}[s], and the polynomial D(s) = ∑_{i=0}^{t} D_i s^i ∈ R[s], satisfy (8.1.1) with D(c) ≠ 0 for any c ∈ λ(B^T A). Further, let

f_{(I,AB^T)}(s) = det(I − sAB^T) = a_n s^n + ⋯ + a_1 s + a_0, a_0 = 1,

and

adj(I − sAB^T) = R_{n−1}s^{n−1} + ⋯ + R_1 s + R_0.

Then all the solutions to the nonhomogeneous Yakubovich-transpose matrix equation (8.0.4) can be characterized by

X = ∑_{i=0}^{n−1} R_i CZ(A^T B)^i + ∑_{i=0}^{n−1} A(B^T A)^i (R_i CZ)^T B + ∑_{i=0}^{t−1} L_i R[D(A^T B)]^{−1}(A^T B)^i + ∑_{i=0}^{t−1} A(B^T A)^i (L_i R[D(A^T B)]^{−1})^T B,
Y = Z f_{(I,AB^T)}(A^T B) + ∑_{i=0}^{t} H_i R[D(A^T B)]^{−1}(A^T B)^i.  (8.1.3)

Proof. We first show that the matrices X and Y given in (8.1.3) are solutions of the matrix equation (8.0.4). A direct calculation gives

X − AX^T B
= ∑_{i=0}^{n−1} R_i CZ(A^T B)^i + ∑_{i=0}^{n−1} A(B^T A)^i (R_i CZ)^T B + ∑_{i=0}^{t−1} L_i R[D(A^T B)]^{−1}(A^T B)^i + ∑_{i=0}^{t−1} A(B^T A)^i (L_i R[D(A^T B)]^{−1})^T B
  − ∑_{i=0}^{n−1} A(B^T A)^i (R_i CZ)^T B − ∑_{i=0}^{n−1} AB^T (R_i CZ)(A^T B)^i A^T B − ∑_{i=0}^{t−1} A(B^T A)^i (L_i R[D(A^T B)]^{−1})^T B − ∑_{i=0}^{t−1} AB^T (L_i R[D(A^T B)]^{−1})(A^T B)^i A^T B
= ∑_{i=0}^{n−1} R_i CZ(A^T B)^i − ∑_{i=0}^{n−1} AB^T R_i CZ(A^T B)^{i+1} + ∑_{i=0}^{t−1} L_i R[D(A^T B)]^{−1}(A^T B)^i − ∑_{i=0}^{t−1} AB^T L_i R[D(A^T B)]^{−1}(A^T B)^{i+1}
= R_0 CZ + ∑_{i=1}^{n−1} (R_i − AB^T R_{i−1})CZ(A^T B)^i − AB^T R_{n−1}CZ(A^T B)^n
  + L_0 R[D(A^T B)]^{−1} + ∑_{i=1}^{t−1} (L_i − AB^T L_{i−1})R[D(A^T B)]^{−1}(A^T B)^i − AB^T L_{t−1}R[D(A^T B)]^{−1}(A^T B)^t.  (8.1.4)


By the relation (I − sAB^T)adj(I − sAB^T) = I det(I − sAB^T), it is easily derived that

R_0 = a_0 I,
R_i − AB^T R_{i−1} = a_i I, i ∈ I[1, n−1],  (8.1.5)
−AB^T R_{n−1} = a_n I.

So we have

R_0 CZ + ∑_{i=1}^{n−1} (R_i − AB^T R_{i−1})CZ(A^T B)^i − AB^T R_{n−1}CZ(A^T B)^n = CZ ∑_{i=0}^{n} a_i(A^T B)^i = CZ f_{(I,AB^T)}(A^T B).  (8.1.6)

In addition, comparing the powers of s in (8.1.1) yields

L_0 = CH_0 + D_0 I,
L_i − AB^T L_{i−1} = CH_i + D_i I, i ∈ I[1, t−1],  (8.1.7)
−AB^T L_{t−1} = CH_t + D_t I.

With these relations, we can easily obtain

L_0 R[D(A^T B)]^{−1} + ∑_{i=1}^{t−1} (L_i − AB^T L_{i−1})R[D(A^T B)]^{−1}(A^T B)^i − AB^T L_{t−1}R[D(A^T B)]^{−1}(A^T B)^t
= (CH_0 + D_0 I)R[D(A^T B)]^{−1} + ∑_{i=1}^{t−1} (CH_i + D_i I)R[D(A^T B)]^{−1}(A^T B)^i + (CH_t + D_t I)R[D(A^T B)]^{−1}(A^T B)^t
= ∑_{i=0}^{t} CH_i R[D(A^T B)]^{−1}(A^T B)^i + ∑_{i=0}^{t} D_i R[D(A^T B)]^{−1}(A^T B)^i
= ∑_{i=0}^{t} CH_i R[D(A^T B)]^{−1}(A^T B)^i + R[D(A^T B)]^{−1} ∑_{i=0}^{t} D_i(A^T B)^i
= ∑_{i=0}^{t} CH_i R[D(A^T B)]^{−1}(A^T B)^i + R[D(A^T B)]^{−1}D(A^T B)
= ∑_{i=0}^{t} CH_i R[D(A^T B)]^{−1}(A^T B)^i + R.  (8.1.8)


Substituting (8.1.6) and (8.1.8) into (8.1.4) yields

X − AX^T B = CZ f_{(I,AB^T)}(A^T B) + ∑_{i=0}^{t} CH_i R[D(A^T B)]^{−1}(A^T B)^i + R
= C(Z f_{(I,AB^T)}(A^T B) + ∑_{i=0}^{t} H_i R[D(A^T B)]^{−1}(A^T B)^i) + R
= CY + R.

Therefore, the matrices X and Y given in (8.1.3) satisfy the matrix equation (8.0.4).

Next, the completeness of the solution (8.1.3) is shown. First of all, from the relation (8.1.3) we can see that the solution (8.1.3) has rp parameters, represented by the elements of the free matrix Z. Because {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅, f_{(I,AB^T)}(A^T B) is nonsingular. So the mapping Z → (X, Y) defined by (8.1.3) is injective. The proof is thus completed. □

Regarding the computation of a_i, i ∈ I[0, n], and R_i, i ∈ I[0, n−1], the so-called generalized Leverrier algorithm [139] can be stated as the following relation:

R_i = AB^T R_{i−1} + a_i I, i ∈ I[1, n−1], R_0 = I,
a_i = −trace(AB^T R_{i−1})/i, i ∈ I[1, n].  (8.1.9)

For relatively low-order cases, the Smith normal form reduction can be realized by using elementary polynomial matrix transformations. For high-order cases, the Maple function "smith" can be readily employed.

From theorem 8.1, it is easy to see that under the condition {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅ all the solutions of the matrix equation (8.0.3) can be explicitly given by

X = ∑_{i=0}^{n−1} R_i CZ(A^T B)^i + ∑_{i=0}^{n−1} A(B^T A)^i (R_i CZ)^T B,
Y = Z f_{(I,AB^T)}(A^T B).  (8.1.10)

Moreover, from theorem 8.1 it is obvious that a special solution of the matrix equation (8.0.4) is

X = ∑_{i=0}^{t−1} L_i R[D(A^T B)]^{−1}(A^T B)^i + ∑_{i=0}^{t−1} A(B^T A)^i (L_i R[D(A^T B)]^{−1})^T B,
Y = ∑_{i=0}^{t} H_i R[D(A^T B)]^{−1}(A^T B)^i.
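The recursion (8.1.9) and the homogeneous solution (8.1.10) are easy to check numerically. The following sketch (Python with NumPy; sizes and data are random placeholders) implements the generalized Leverrier recursion and verifies that the pair (8.1.10) indeed satisfies X − AX^T B = CY:

```python
import numpy as np

# Generalized Leverrier recursion (8.1.9) for
#   f(s) = det(I - s M) = sum_i a_i s^i,  adj(I - s M) = sum_i R_i s^i.
def leverrier(M):
    n = M.shape[0]
    a, R = [1.0], [np.eye(n)]
    for i in range(1, n + 1):
        ai = -np.trace(M @ R[-1]) / i        # a_i = -trace(M R_{i-1}) / i
        a.append(ai)
        if i < n:
            R.append(M @ R[-1] + ai * np.eye(n))
    return a, R

rng = np.random.default_rng(3)
n, p, r = 4, 3, 2
A, B = rng.normal(size=(n, p)), rng.normal(size=(n, p))
C, Z = rng.normal(size=(n, r)), rng.normal(size=(r, p))

a, R = leverrier(A @ B.T)
N = A.T @ B
fN = sum(ai * np.linalg.matrix_power(N, i) for i, ai in enumerate(a))
P = sum(R[i] @ C @ Z @ np.linalg.matrix_power(N, i) for i in range(n))
X = P + A @ P.T @ B                          # first line of (8.1.10)
Y = Z @ fN                                   # second line of (8.1.10)
assert np.allclose(X - A @ X.T @ B, C @ Y)   # residual vanishes
```

The identity holds for arbitrary data because it only uses the adjugate relations (8.1.5); the spectral condition of theorem 8.1 is needed for nonsingularity of f_{(I,AB^T)}(A^T B) and completeness, not for this check.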


Similar to reference [145], theorem 8.1 admits the following equivalent form. First we give the following lemma; the idea of its proof is motivated by [148].

Lemma 8.2. Let L_i, i ∈ I[0, t−1], and H_i, D_i, i ∈ I[0, t], be the coefficients of L(s), H(s) and D(s), respectively. Then they satisfy (8.1.1) if and only if H_i, D_i, i ∈ I[0, t], satisfy

∑_{i=0}^{t} (AB^T)^{t−i}CH_i = −∑_{i=0}^{t} D_i(AB^T)^{t−i},  (8.1.11)

and L_i, i ∈ I[0, t−1], satisfy

[L_0  L_1  ⋯  L_{t−1}] = Ctr_t(AB^T, C)S_t(H(s)) + Ctr_t(AB^T, I_n)S_t(D(s)I_n).  (8.1.12)

Proof. From the proof of theorem 8.1 we can see that L_i, i ∈ I[0, t−1], and H_i, D_i, i ∈ I[0, t], satisfy (8.1.1) if and only if they satisfy (8.1.7). So we need to show that the relation (8.1.7) is equivalent to the relations (8.1.11) and (8.1.12). From the recursive relation (8.1.7), a direct calculation gives

L_0 = CH_0 + D_0 I,
L_1 = CH_1 + AB^T CH_0 + D_1 I + D_0 AB^T,
⋯
L_{t−1} = CH_{t−1} + AB^T CH_{t−2} + ⋯ + (AB^T)^{t−1}CH_0 + D_{t−1}I + D_{t−2}AB^T + ⋯ + D_0(AB^T)^{t−1},  (8.1.13)

and

0 = CH_t + AB^T CH_{t−1} + ⋯ + (AB^T)^t CH_0 + D_t I + D_{t−1}AB^T + ⋯ + D_0(AB^T)^t.  (8.1.14)

The relation (8.1.14) is exactly (8.1.11). Rewriting (8.1.13) in matrix form gives (8.1.12). So it is shown that (8.1.7) implies (8.1.11) and (8.1.12).

On the other hand, it follows from the relation (8.1.12) that

AB^T L_{t−1} = AB^T CH_{t−1} + (AB^T)²CH_{t−2} + ⋯ + (AB^T)^t CH_0 + D_{t−1}AB^T + ⋯ + D_0(AB^T)^t.

Combining this relation with (8.1.11), we can obtain the last expression of (8.1.7). For i ∈ I[1, t−1], from (8.1.12) one has

AB^T L_{i−1} + CH_i + D_i I
= AB^T ∑_{j=0}^{i−1} [(AB^T)^{i−j−1}CH_j + D_j(AB^T)^{i−j−1}] + CH_i + D_i I
= ∑_{j=0}^{i−1} [(AB^T)^{i−j}CH_j + D_j(AB^T)^{i−j}] + CH_i + D_i I
= ∑_{j=0}^{i} [(AB^T)^{i−j}CH_j + D_j(AB^T)^{i−j}]
= L_i.


This is the second expression of (8.1.7). The first expression of (8.1.7) is obvious. Thus the relations (8.1.11) and (8.1.12) imply the relation (8.1.7). The proof is completed. □

In the following theorem, the solution to the matrix equation (8.0.4) is stated in terms of the controllability matrix associated with the matrices AB^T and C, and the observability matrix associated with A^T B and the free parameter matrix. This property may bring convenience and advantages to the further analysis of the matrix equation (8.0.4).

Theorem 8.2. Let the matrices A ∈ R^{n×p}, B ∈ R^{n×p} and C ∈ R^{n×r} be given, suppose that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅, and let the polynomial matrix H(s) = ∑_{i=0}^{t} H_i s^i ∈ R^{r×n}[s] and the polynomial D(s) = ∑_{i=0}^{t} D_i s^i ∈ R[s] satisfy (8.1.1) with D(c) ≠ 0 for any c ∈ λ(B^T A). Then all the solutions to the matrix equation (8.0.4) can be stated as

X = Ctr(AB^T, C)S_n(f_{(I,AB^T)}(s)I_r)Obs_n(A^T B, Z)
  + A[Ctr(AB^T, C)S_n(f_{(I,AB^T)}(s)I_r)Obs_n(A^T B, Z)]^T B
  + [Ctr(AB^T, C)S_t(H(s)) + Ctr(AB^T, I_n)S_t(D(s)I_n)]Obs_t(A^T B, R[D(A^T B)]^{−1})
  + A{[Ctr(AB^T, C)S_t(H(s)) + Ctr_t(AB^T, I_n)S_t(D(s)I_n)]Obs_t(A^T B, R[D(A^T B)]^{−1})}^T B,
Y = Z f_{(I,AB^T)}(A^T B) + ∑_{i=0}^{t} H_i R[D(A^T B)]^{−1}(A^T B)^i,  (8.1.15)

where Z is an arbitrarily chosen parameter matrix.

Proof. By the relation (8.1.9), it is easily obtained that

R_0 = I_n,
R_1 = a_1 I_n + AB^T,
R_2 = a_2 I_n + a_1 AB^T + (AB^T)²,
⋯
R_{n−1} = a_{n−1}I_n + a_{n−2}AB^T + ⋯ + (AB^T)^{n−1}.

This relation can be compactly expressed as

R_i = ∑_{k=0}^{i} a_k(AB^T)^{i−k}, a_0 = 1, i = 1, 2, …, n−1.

Therefore, it is easily obtained that

∑_{i=0}^{n−1} R_i CZ(A^T B)^i = ∑_{i=0}^{n−1} ∑_{k=0}^{i} a_k(AB^T)^{i−k}CZ(A^T B)^i.


So we have

∑_{i=0}^{n−1} R_i CZ(A^T B)^i = Ctr(AB^T, C)S_n(f_{(I,AB^T)}(s)I_r)Obs_n(A^T B, Z),  (8.1.16)

and

∑_{i=0}^{n−1} (B^T A)^i(R_i CZ)^T B = [∑_{i=0}^{n−1} R_i CZ(A^T B)^i]^T B = [Ctr(AB^T, C)S_n(f_{(I,AB^T)}(s)I_r)Obs_n(A^T B, Z)]^T B.  (8.1.17)

It follows from lemma 8.2 that the relation (8.1.1) holds if and only if the relations (8.1.11) and (8.1.12) hold. So we have

∑_{i=0}^{t−1} L_i R[D(A^T B)]^{−1}(A^T B)^i = [L_0  L_1  ⋯  L_{t−1}]Obs_t(A^T B, R[D(A^T B)]^{−1})
= [Ctr(AB^T, C)S_t(H(s)) + Ctr(AB^T, I_n)S_t(D(s)I_n)]Obs_t(A^T B, R[D(A^T B)]^{−1}).  (8.1.18)

Similarly, one has

∑_{i=0}^{t−1} (B^T A)^i(L_i R[D(A^T B)]^{−1})^T B = [∑_{i=0}^{t−1} L_i R[D(A^T B)]^{−1}(A^T B)^i]^T B
= {[Ctr(AB^T, C)S_t(H(s)) + Ctr_t(AB^T, I_n)S_t(D(s)I_n)]Obs_t(A^T B, R[D(A^T B)]^{−1})}^T B.  (8.1.19)

Substituting (8.1.16)–(8.1.19) into the expression (8.1.3) gives (8.1.15). The proof is completed. □

When the matrix triple (AB^T, I, C) is R-controllable, that is, rank[I − sAB^T  C] = n for any s ∈ R, the approach of theorem 8.2 can be readily employed to solve the matrix equation (8.0.4) without use of the Smith normal form reduction. This is summarized as the following corollary.

Corollary 8.1. Let the matrices A ∈ R^{n×p}, B ∈ R^{n×p} and C ∈ R^{n×r} be given, suppose that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅ and (AB^T, I, C) is R-controllable. Let the polynomial matrix H(s) = ∑_{i=0}^{t} H_i s^i ∈ R^{r×n}[s] satisfy

∑_{i=0}^{t} (AB^T)^{t−i}CH_i = −(AB^T)^t.  (8.1.20)


Then all the solutions to the matrix equation (8.0.4) can be characterized by

X = Ctr(AB^T, C)S_n(f_{(I,AB^T)}(s)I_r)Obs_n(A^T B, Z)
  + A[Ctr(AB^T, C)S_n(f_{(I,AB^T)}(s)I_r)Obs_n(A^T B, Z)]^T B
  + [Ctr(AB^T, C)S_t(H(s)) + Ctr(AB^T, I_n)]Obs_t(A^T B, R)
  + A{[Ctr(AB^T, C)S_t(H(s)) + Ctr_t(AB^T, I_n)]Obs_t(A^T B, R)}^T B,
Y = Z f_{(I,AB^T)}(A^T B) + ∑_{i=0}^{t} H_i R(A^T B)^i,

where Z is an arbitrarily chosen parameter matrix.

When (AB^T, I, C) is R-controllable, the polynomial D(s) can be chosen to be 1. In this case, [D(A^T B)]^{−1} = I_n and S_t(D(s)I_n) = I, and the matrix equation (8.1.11) becomes the matrix equation (8.1.20). Denoting H^{[t]} = [H_t^T  H_{t−1}^T  ⋯  H_0^T]^T, this equation can be rewritten as

Ctr_{t+1}(AB^T, C)H^{[t]} = −(AB^T)^t.  (8.1.21)

If the number t is sufficiently large, one has

rank[Ctr_{t+1}(AB^T, C)] = rank[Ctr_{t+1}(AB^T, C)  (AB^T)^t].

In this case, the matrices H_i, i ∈ I[1, t], can be obtained from (8.1.21) by using numerically reliable approaches such as the QR decomposition and the SVD. Furthermore, when the polynomial D(s) is selected to be s^t, then S_t(D(s)I_n) = 0. Similar to the above arguments, the following corollary can be easily obtained.

Corollary 8.2. Let A ∈ R^{n×p}, B ∈ R^{n×p} and C ∈ R^{n×r} be given, suppose that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅ and (AB^T, I, C) is R-controllable. If the polynomial matrix H(s) = ∑_{i=0}^{t} H_i s^i ∈ R^{r×n}[s] satisfies

∑_{i=0}^{t} (AB^T)^{t−i}CH_i = −I,

then all the solutions to the matrix equation (8.0.4) can be stated as

X = Ctr(AB^T, C)S_n(f_{(I,AB^T)}(s)I_r)Obs_n(A^T B, Z)
  + A[Ctr(AB^T, C)S_n(f_{(I,AB^T)}(s)I_r)Obs_n(A^T B, Z)]^T B
  + Ctr(AB^T, C)S_t(H(s))Obs_t(A^T B, R(A^T B)^t)
  + A[Ctr(AB^T, C)S_t(H(s))Obs_t(A^T B, R(A^T B)^t)]^T B,
Y = Z f_{(I,AB^T)}(A^T B) + ∑_{i=0}^{t} H_i R(A^T B)^i,

where Z is an arbitrarily chosen parameter matrix.
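The least-squares route mentioned above can be sketched as follows (random placeholder data; choosing r = n with a generically invertible C keeps the block system consistent in this sketch, which need not hold in general):

```python
import numpy as np

# Hedged sketch: solve the block linear system of (8.1.21),
#   Ctr_{t+1}(AB^T, C) H^{[t]} = -(AB^T)^t,
# for H^{[t]} = [H_t^T ... H_0^T]^T by SVD-based least squares.
rng = np.random.default_rng(4)
n, p, t = 4, 3, 3
r = n                                      # r = n with invertible C keeps
A, B = rng.normal(size=(n, p)), rng.normal(size=(n, p))
C = rng.normal(size=(n, r))                # the system consistent here
M = A @ B.T

# controllability-type matrix [C, MC, ..., M^t C]
K = np.hstack([np.linalg.matrix_power(M, i) @ C for i in range(t + 1)])
rhs = -np.linalg.matrix_power(M, t)
H, *_ = np.linalg.lstsq(K, rhs, rcond=None)   # SVD-based solve
assert np.allclose(K @ H, rhs)                # rank condition satisfied here
H_blocks = [H[i * r:(i + 1) * r] for i in range(t + 1)]   # H_t, ..., H_0
```

In practice one would first check the rank condition rank[K] = rank[K  rhs]; `lstsq` then returns the minimum-norm coefficient blocks H_i.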


8.2 The Second Approach

In this section, we propose an alternative approach to solving the nonhomogeneous Yakubovich-transpose matrix equation (8.0.4). Before proceeding, we need the following lemma on the matrix equation

X_0 − AX_0^T B = R,  (8.2.1)

where A ∈ R^{n×p}, B ∈ R^{n×p} and R ∈ R^{n×p} are known matrices, and X_0 is the matrix to be determined.

Lemma 8.3. Let the matrices A ∈ R^{n×p}, B ∈ R^{n×p} and R ∈ R^{n×p} be given, suppose that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅, and let

f_{(I,AB^T)}(s) = det(I − sAB^T) = a_n s^n + ⋯ + a_1 s + a_0, a_0 = 1,

and

adj(I − sAB^T) = ∑_{i=0}^{n−1} R_i s^i.  (8.2.2)

Then the unique solution to the matrix equation (8.2.1) can be characterized by

X_0 = [∑_{i=0}^{n−1} R_i R(A^T B)^i + ∑_{i=0}^{n−1} R_i A R^T B(A^T B)^i][f_{(I,AB^T)}(A^T B)]^{−1}.  (8.2.3)

ð8:2:3Þ

Proof. According to the theorem 8.2 in the reference[80], we have n 1 X i hX X0 ¼ ak ðAB T Þik RðAT BÞi i¼0 k¼0

þ

n 1 X i X

ak ðAB T Þik ART BðAT BÞi

i¼0 k¼0

i1 ih f ðI ;AB T Þ ðAT BÞ :

By the relation (8.1.9), we have Ri ¼

i X

ak ðAB T Þik ; a0 ¼ 1; i ¼ 1; 2; . . .; n  1:

k¼0

So we can obtain the following conclusion " # n 1 n 1 h i1 X X i i X0 ¼ Ri RðAT BÞ þ Ri ART BðAT BÞ  f ðI ;AB T Þ ðAT BÞ : i¼0

i¼0

Thus, the proof has been completed.




Suppose that a solution to the nonhomogeneous Yakubovich-transpose matrix equation (8.0.4) can be expressed as X = T + X̃, Y = Ỹ, with (X̃, Ỹ) being a solution to the homogeneous matrix equation (8.0.3). Then one has

(T + X̃) − A(T + X̃)^T B − CỸ − R = X̃ − AX̃^T B + T − AT^T B − CỸ − R = T − AT^T B − R.

It follows from this relation that X = T + X̃, Y = Ỹ is a solution to (8.0.4) if X_0 = T is a solution to (8.2.1). Combining this fact with the solution (8.1.10) to the matrix equation (8.0.3), we now have the following conclusion on the solution to the nonhomogeneous Yakubovich-transpose matrix equation (8.0.4). Different from theorems 8.1 and 8.2, one does not need to solve the polynomial matrices H(s) and L(s) and the polynomial D(s) when applying theorem 8.3 to obtain general solutions to the matrix equation (8.0.4).

Theorem 8.3. Given matrices A ∈ R^{n×p}, B ∈ R^{n×p} and C ∈ R^{n×r}, suppose that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅, and that (8.2.2) holds. Then all solutions to the nonhomogeneous Yakubovich-transpose matrix equation (8.0.4) can be expressed as

X = ∑_{i=0}^{n−1} R_i CZ(A^T B)^i + ∑_{i=0}^{n−1} A(B^T A)^i(R_i CZ)^T B + [∑_{i=0}^{n−1} R_i R(A^T B)^i + ∑_{i=0}^{n−1} R_i A R^T B(A^T B)^i][f_{(I,AB^T)}(A^T B)]^{−1},
Y = Z f_{(I,AB^T)}(A^T B),

where Z ∈ R^{r×p} is an arbitrarily chosen free parameter matrix.

In addition, it is easily seen from theorem 8.3 that

X = [∑_{i=0}^{n−1} R_i R(A^T B)^i + ∑_{i=0}^{n−1} R_i A R^T B(A^T B)^i][f_{(I,AB^T)}(A^T B)]^{−1},
Y = 0,

is also a special solution to the matrix equation (8.0.4). Similar to the derivation in section 8.1, we can also obtain some equivalent forms of the solution in theorem 8.3.

Corollary 8.3. Let the matrices A ∈ R^{n×p}, B ∈ R^{n×p} and C ∈ R^{n×r} be given, suppose that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅, and that (8.2.2) holds. Then all solutions to the nonhomogeneous Yakubovich-transpose matrix equation (8.0.4) can be characterized by

X = ∑_{i=0}^{n−1} R_i[CZ + R[f_{(I,AB^T)}(A^T B)]^{−1}](A^T B)^i + ∑_{i=0}^{n−1} {(AB^T)^i A(R_i CZ)^T + R_i A[(AB^T)^i R[f_{(I,AB^T)}(A^T B)]^{−1}]^T}B,
Y = Z f_{(I,AB^T)}(A^T B),

where Z ∈ R^{r×p} is an arbitrarily chosen free parameter matrix.
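The superposition argument above is easy to exercise numerically: build a particular solution of (8.2.1) by defining R from an arbitrary X₀, add a homogeneous pair constructed via (8.1.10), and check the nonhomogeneous equation. A sketch with random placeholder data (the helper implements the recursion (8.1.9)):

```python
import numpy as np

# Particular solution + homogeneous solution = solution of (8.0.4).
def leverrier(M):
    n = M.shape[0]
    a, R = [1.0], [np.eye(n)]
    for i in range(1, n + 1):
        ai = -np.trace(M @ R[-1]) / i        # recursion (8.1.9)
        a.append(ai)
        if i < n:
            R.append(M @ R[-1] + ai * np.eye(n))
    return a, R

rng = np.random.default_rng(6)
n, p, r = 4, 3, 2
A, B = rng.normal(size=(n, p)), rng.normal(size=(n, p))
C, Z = rng.normal(size=(n, r)), rng.normal(size=(r, p))
X0 = rng.normal(size=(n, p))
R_mat = X0 - A @ X0.T @ B                 # X0 solves (8.2.1) by construction

a, Radj = leverrier(A @ B.T)
N = A.T @ B
fN = sum(ai * np.linalg.matrix_power(N, i) for i, ai in enumerate(a))
P = sum(Radj[i] @ C @ Z @ np.linalg.matrix_power(N, i) for i in range(n))
Xh, Yh = P + A @ P.T @ B, Z @ fN          # homogeneous pair from (8.1.10)

X, Y = X0 + Xh, Yh
assert np.allclose(X - A @ X.T @ B, C @ Y + R_mat)
```

By linearity of the map X ↦ X − AX^T B, the sum of any particular solution of (8.2.1) and any homogeneous pair solves (8.0.4), which is exactly the decomposition used in theorem 8.3.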


Corollary 8.4. Let the matrices A ∈ R^{n×p}, B ∈ R^{n×p} and C ∈ R^{n×r} be given, suppose that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅, and that (8.2.2) holds. Then all solutions to the nonhomogeneous Yakubovich-transpose matrix equation (8.0.4) can be characterized by

X = A{Ctr(AB^T, [C  R])S_n(f_{(I,AB^T)}(s)I_{r+p})Obs_n(B^T A, [ZB^T; R[f_{(I,AB^T)}(A^T B)]^{−1}B])}^T
  + Ctr(AB^T, [C  R])S_n(f_{(I,AB^T)}(s)I_{r+p})Obs_n(A^T B, [Z; R[f_{(I,AB^T)}(A^T B)]^{−1}]),
Y = Z f_{(I,AB^T)}(A^T B),

where Z ∈ R^{r×p} is an arbitrarily chosen free parameter matrix.

8.3 Illustrative Example

In this section, we give an example to obtain the solution of the nonhomogeneous Yakubovich-transpose matrix equation in the form of (8.0.4).

Example 8.1. Consider the nonhomogeneous Yakubovich-transpose matrix equation in the form of (8.0.4) with the following parameters:

A = [ 2 4 3; 1 1 0; 0 1 1 ],  R = [ 30 22 4; 1 1 6; 1 1 3 ],
B = [ 1 1 1; 2 0 0; 0 1 1 ],  C = [ 30.5530 13.7765 6.9924; 7.3409 0.3371 1.8106; 1.1667 0.4167 0.8333 ].

It is easy to check that {s | det(I − sAB^T) = 0} ∩ λ(B^T A) = ∅. By simple computations, we have

det(I − sAB^T) = s³ − 3s² + 3s + 1,

f_{(I,AB^T)}(A^T B) = [ 18 6 30; 72 24 108; 78 24 114 ],

[f_{(I,AB^T)}(A^T B)]^{−1} = [ 0.3333 0.0833 0.6667; 0.5000 0.1667 0.5000; 0.3333 0.0833 0 ].


In addition, we have

R_0 = [ 1 0 0; 0 1 0; 0 0 1 ],

with R_1 and R_2 obtained from the recursion (8.1.9). Choosing the free parameter matrix Z, by theorem 8.3 we have

X = [ 3.0000 12.0000 3.0000; 1.0000 9.0000 13.0000; 5.0000 5.0000 1.0000 ],
Y = [ 84.0000 30.0000 132.0000; 348.0000 108.0000 516.0000; 294.0000 96.0000 438.0000 ].

8.4 Conclusions

In this chapter, we have proposed parametric solutions to the nonhomogeneous Yakubovich-transpose matrix equation. These solutions provide all the degrees of freedom, which are represented by an arbitrarily chosen parameter matrix Z. The coefficient matrices are not restricted to be in any canonical form. The matrix B appears explicitly in the solutions and thus can be unknown a priori. Therefore, the matrix B can also be viewed as a free design parameter in some problems related to the matrix equation (8.0.4). Meanwhile, an equivalent form of the solutions to the nonhomogeneous Yakubovich-transpose matrix equation has been expressed in terms of the controllability matrix associated with A, B and C, and the observability matrix associated with A, B and the free parameter matrix Z. Such a feature may bring greater convenience and advantages to some problems related to the nonhomogeneous Yakubovich-transpose matrix equation. Moreover, by specializing the obtained solutions, some expressions can also be given for the solutions to the normal transpose matrix equations. From the discussion in this chapter, one can observe that the solutions to the nonhomogeneous Yakubovich-transpose matrix equation are crucial as the theoretical basis for the development of many kinds of other matrix equations, and they deserve further investigation in the future. In addition, as a theoretical generalization of the well-known Kalman-Yakubovich-transpose matrix equation, it may be helpful for future control applications.

Chapter 9
Explicit Solutions to the Matrix Equations XB − AX = CY and XB − AX̂ = CY

In this chapter, we are concerned with the explicit solution to the generalized Sylvester matrix equation in the real field and the generalized quaternion j-conjugate matrix equation over the quaternion field. In order to obtain the explicit solution of the generalized Sylvester conjugate matrix equation, we present a polynomial matrix algorithm. It is also shown that the explicit solution to the considered matrix equation is expressed in terms of the coefficient matrices A, B, C, the free parameter matrix Z, and the coefficient matrices of the polynomial matrix adj(sI_{2n} − AÂ). Moreover, we study the generalized Sylvester quaternion j-conjugate matrix equation XB − AX̂ = CY based on the ideas of references [6,149,160,184,187,188] and provide the closed-form solutions to this quaternion j-conjugate matrix equation by applying the real representation of a quaternion matrix. Compared to the complex representation method [184], the real representation method does not require the coefficient matrix A to be in any special form. As a special case of the generalized Sylvester quaternion j-conjugate matrix equation, the Sylvester quaternion j-conjugate matrix equation XB − AX̂ = C is also investigated. The explicit solutions to this kind of matrix equation have been established.

9.1 Real Matrix Equation XB − AX = CY

We consider the following generalized Sylvester matrix equation in the real field:

XB − AX = CY.  (9.1.1)

DOI: 10.1051/978-2-7598-3102-9.c009 © Science Press, EDP Sciences, 2023

Theorem 9.1. Let A ∈ R^{n×n}, B ∈ R^{p×p} and C ∈ R^{n×r} be given real matrices. Suppose λ(A) ∩ λ(B) = ∅ and that the relations

  f_A(s) = det(sI − A) = a_n s^n + ⋯ + a_1 s + a_0,  a_n = 1,

and

  adj(sI − A) = R_{n−1} s^{n−1} + ⋯ + R_1 s + R_0

hold. Then all the solutions to the generalized Sylvester matrix equation (9.1.1) can be established as

  X = Σ_{i=0}^{n−1} R_i C Z B^i,  Y = Z f_A(B),   (9.1.2)

where Z ∈ R^{r×p} is an arbitrary matrix over the real field.

Proof. First of all, we prove that the matrices X and Y given in (9.1.2) are solutions to the matrix equation (9.1.1). By direct calculation we obtain

  XB − AX = Σ_{i=0}^{n−1} R_i C Z B^{i+1} − A Σ_{i=0}^{n−1} R_i C Z B^i
          = R_{n−1} C Z B^n + Σ_{i=1}^{n−1} (R_{i−1} − A R_i) C Z B^i − A R_0 C Z.   (9.1.3)

According to the relation (sI − A) adj(sI − A) = I det(sI − A), it is easy to derive

  −A R_0 = a_0 I,  R_{i−1} − A R_i = a_i I (i = 1, 2, …, n − 1),  R_{n−1} = a_n I.   (9.1.4)

So one has

  R_{n−1} C Z B^n + Σ_{i=1}^{n−1} (R_{i−1} − A R_i) C Z B^i − A R_0 C Z = C Z Σ_{i=0}^{n} a_i B^i = C Z f_A(B) = CY.

Therefore, the matrices X and Y given in (9.1.2) satisfy the matrix equation (9.1.1).

Next, we prove the completeness of the solution (9.1.2). It follows from theorem 6 of [31] that there are rp degrees of freedom in the solution of the matrix equation (9.1.1), while the solution (9.1.2) has exactly rp parameters, represented by the elements of the free matrix Z. So we only need to show that all the parameters in the matrix Z contribute to the solution. For this, it suffices to show that the mapping Z → (X, Y) defined by (9.1.2) is injective, which is true since f_A(B) is nonsingular under the condition λ(A) ∩ λ(B) = ∅. Thus, the proof is complete. □

Remark 1. This result is the same as corollary 2 in [150]. In [150], the authors presented the solution to the generalized Sylvester matrix equation (9.1.1) in terms of the Krylov matrix, a symmetric operator matrix and the generalized observability matrix, by applying matrix polynomials and the linear solution space; corollary 2 in [150] is an equivalent form of the solution presented here. However, here we establish the solution by applying the characteristic polynomial of the coefficient matrix A and the coefficient matrices of the adjoint matrix adj(sI − A).

In reference [186], we can find the following well-known generalized Faddeev-Leverrier algorithm:

  R_{n−k} = R_{n−k+1} A + a_{n−k+1} I_n,  R_n = 0,  k = 1, 2, …, n,
  a_{n−k} = −trace(R_{n−k} A)/k,  a_n = 1,  k = 1, 2, …, n,   (9.1.5)

where a_i (i = 0, 1, …, n − 1) are the coefficients of the characteristic polynomial of the matrix A, and R_i (i = 0, 1, …, n − 1) are the coefficient matrices of the adjoint matrix adj(sI_n − A).

Theorem 9.2. Let A ∈ R^{n×n}, B ∈ R^{p×p} and C ∈ R^{n×r} be given matrices, and suppose

  f_A(s) = det(sI − A) = a_n s^n + ⋯ + a_1 s + a_0,  a_n = 1.

Then the matrices X and Y given by (9.1.2) have the following equivalent form:

  X = Σ_{i=0}^{n−1} Σ_{k=i+1}^{n} a_k A^{k−i−1} C Z B^i,  Y = Z f_A(B).   (9.1.6)

Proof. Due to the relation (9.1.5), it is easy to obtain

  R_0 = a_1 I + a_2 A + ⋯ + a_{n−1} A^{n−2} + A^{n−1},
  ⋮
  R_{n−2} = a_{n−1} I + A,
  R_{n−1} = I.

These relations can be compactly expressed as

  R_i = Σ_{k=i+1}^{n} a_k A^{k−i−1},  a_n = 1,  i = 0, 1, …, n − 1.

Substituting this into the expression for X in (9.1.2) and reordering the sum, we have

  X = Σ_{i=0}^{n−1} R_i C Z B^i = Σ_{i=0}^{n−1} (Σ_{k=i+1}^{n} a_k A^{k−i−1}) C Z B^i = Σ_{i=0}^{n−1} Σ_{k=i+1}^{n} a_k A^{k−i−1} C Z B^i.

Combining this with theorem 9.1 proves the conclusion. □

Remark 2. The above conclusion is essentially the same as theorem 9.2 in [149]. In [149], the authors provided the solution to the generalized Sylvester matrix equation (9.1.1) in the column-vector form of the unknown matrices X and Y by means of matrix polynomials and the Faddeev-Leverrier algorithm. Here, we present the equivalent form of theorem 9.1 by using the well-known generalized Faddeev-Leverrier algorithm.
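The recursion (9.1.5) and the solution formula (9.1.2) are easy to check numerically. The following sketch (an illustration added here, with small hand-picked matrices; it is not code from the book) implements the generalized Faddeev-Leverrier algorithm and verifies that the resulting X and Y satisfy XB − AX = CY:

```python
import numpy as np

def faddeev_leverrier(A):
    """Coefficients a_k of f_A(s) = det(sI - A) and matrices R_i of adj(sI - A),
    computed by the recursion (9.1.5)."""
    n = A.shape[0]
    R = [None] * n            # R[i] = coefficient of s^i in adj(sI - A)
    a = [0.0] * (n + 1)       # f_A(s) = sum_k a[k] s^k, with a[n] = 1
    a[n] = 1.0
    Rk = np.zeros((n, n))     # R_n = 0
    for k in range(1, n + 1):
        Rk = Rk @ A + a[n - k + 1] * np.eye(n)
        R[n - k] = Rk
        a[n - k] = -np.trace(Rk @ A) / k
    return a, R

A = np.array([[2.0, 1.0], [0.0, 3.0]])        # lambda(A) = {2, 3}
B = np.array([[0.0, 1.0], [-1.0, 0.0]])       # lambda(B) = {i, -i}, disjoint from lambda(A)
C = np.array([[1.0], [1.0]])                  # C in R^{2x1}
Z = np.array([[1.0, -2.0]])                   # free parameter Z in R^{1x2}
a, R = faddeev_leverrier(A)
n = A.shape[0]
X = sum(R[i] @ C @ Z @ np.linalg.matrix_power(B, i) for i in range(n))
Y = Z @ sum(a[k] * np.linalg.matrix_power(B, k) for k in range(n + 1))
assert np.allclose(X @ B - A @ X, C @ Y)      # (X, Y) solves (9.1.1)
```

For this A, one can check by hand that f_A(s) = s² − 5s + 6 and adj(sI − A) = Is + (A − 5I), which the recursion reproduces.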


9.2 Quaternion j-Conjugate Matrix Equation XB − AX̂ = CY

9.2.1 Real Representation of a Quaternion Matrix

For any quaternion matrix A = A_1 + A_2 i + A_3 j + A_4 k ∈ Q^{m×n}, A_l ∈ R^{m×n} (l = 1, 2, 3, 4), the real representation matrix of A is defined as

  A_r = [ A_1   A_2  −A_3   A_4
          A_2   A_1  −A_4  −A_3
          A_3  −A_4   A_1   A_2
          A_4   A_3   A_2  −A_1 ] ∈ R^{4m×4n}.   (9.2.1)

For an m × n quaternion matrix A, we write A_r^t = (A_r)^t. In addition, let P_t, Q_t, S_t and R_t denote the 4t × 4t signed block-permutation matrices built from the blocks ±I_t and 0 (I_t being the t × t identity matrix); here P_t = diag(I_t, −I_t, I_t, −I_t), and Q_t, S_t, R_t carry the block structures associated with the quaternion units i, k and j, respectively (see [188] for the explicit arrays). Then P_t, Q_t, S_t, R_t are unitary matrices.

From the expression (9.2.1) we can see that r : A → A_r is a mapping from a quaternion matrix A to its real representation matrix A_r. Conversely, for a real matrix A ∈ R^{4n×4n} of this form, the inverse mapping r^{−1} can be stated as r^{−1} : A r^{−1} = B, in which B ∈ Q^{n×n} is a quaternion matrix. We observe that the mapping r is an injective homomorphism. The real representation matrix has the following properties, which are given in [188].

Proposition 9.1. Let A, B ∈ Q^{m×n}, C ∈ Q^{n×s}, a ∈ R. Then

(1) (A + B)_r = A_r + B_r, (aA)_r = aA_r, (AC)_r = A_r P_n C_r = A_r (Ĉ)_r P_s;
(2) A = B ⟺ A_r = B_r;
(3) Q_m^{−1} A_r Q_n = −A_r, R_m^{−1} A_r R_n = A_r, S_m^{−1} A_r S_n = −A_r, P_m^{−1} A_r P_n = (Â)_r;
(4) the quaternion matrix A is nonsingular if and only if A_r is nonsingular, and A is a unitary matrix if and only if A_r is an orthogonal matrix;
(5) if A ∈ Q^{m×m}, then A_r^{2k} = ((AÂ)^k)_r P_m;
(6) if A ∈ Q^{m×m}, B ∈ Q^{n×n}, C ∈ Q^{m×n} and k + l is even, then

  A_r^k C_r B_r^l = ((AÂ)^s (AĈB)(B̂B)^t)_r,  k = 2s + 1, l = 2t + 1;
  A_r^k C_r B_r^l = ((AÂ)^s C (B̂B)^t)_r,  k = 2s, l = 2t.

Proposition 9.2. If λ is a characteristic value of A_r, then so are λ̄ and −λ.

For any A ∈ Q^{m×m}, let the characteristic polynomial of the real representation matrix A_r be f_{A_r}(λ) = det(λI_{4m} − A_r) = Σ_{k=0}^{2m} a_{2k} λ^{2k}, and define h_{A_r}(λ) = λ^{4m} f_{A_r}(λ^{−1}) = Σ_{k=0}^{2m} a_{2k} λ^{2(2m−k)}. By propositions 9.1 and 9.2 we have the following proposition 9.3.

Proposition 9.3. Let A ∈ Q^{m×m}, B ∈ Q^{n×n}. Then

(1) f_{A_r}(λ) is a real polynomial, and f_{A_r}(λ) = Σ_{k=0}^{2m} a_{2k} λ^{2k};
(2) h_{A_r}(λ) is a real polynomial, and h_{A_r}(λ) = Σ_{k=0}^{2m} a_{2k} λ^{2(2m−k)};
(3) h_{A_r}(B_r) = (g_{A_r}(BB̂))_r P_n and f_{A_r}(B_r) = (p_{A_r}(BB̂))_r P_n, in which g_{A_r}(λ) = Σ_{k=0}^{2m} a_{2k} λ^{2m−k} and p_{A_r}(λ) = Σ_{k=0}^{2m} a_{2k} λ^k are real polynomials.

Proof. By proposition 9.2, we easily know that every a_k is a real number and that a_{2k+1} = 0. For any k, by proposition 9.1, we have B_r^{2k} = ((BB̂)^k)_r P_n. Thus, we can obtain the result (3). □

Furthermore, we have the following conclusion on the relation between a quaternion matrix and its quaternion j-conjugate.

Proposition 9.4. Let A, B ∈ Q^{m×m}. Then

(1) A = B ⟺ Â = B̂;
(2) Â = B ⟺ A = B̂;
(3) (Â)̂ = A;
(4) (AB)̂ = Â B̂.

Proof. By the definition of the quaternion j-conjugate matrix Â, items (1), (2) and (3) can be proved. Now we prove item (4). Let A = A_1 + A_2 i + A_3 j + A_4 k and B = B_1 + B_2 i + B_3 j + B_4 k. By calculation we have

  AB = (A_1B_1 − A_2B_2 − A_3B_3 − A_4B_4) + (A_1B_2 + A_2B_1 + A_3B_4 − A_4B_3)i
     + (A_1B_3 + A_3B_1 − A_2B_4 + A_4B_2)j + (A_4B_1 + A_1B_4 + A_2B_3 − A_3B_2)k,

  (AB)̂ = (A_1B_1 − A_2B_2 − A_3B_3 − A_4B_4) − (A_1B_2 + A_2B_1 + A_3B_4 − A_4B_3)i
       + (A_1B_3 + A_3B_1 − A_2B_4 + A_4B_2)j − (A_4B_1 + A_1B_4 + A_2B_3 − A_3B_2)k,

and

  Â B̂ = (A_1B_1 − A_2B_2 − A_3B_3 − A_4B_4) + (−A_1B_2 − A_2B_1 − A_3B_4 + A_4B_3)i
      + (A_1B_3 + A_3B_1 − A_2B_4 + A_4B_2)j + (−A_4B_1 − A_1B_4 − A_2B_3 + A_3B_2)k.


Hence we have (AB)̂ = Â B̂. □

In the real field, if for two square matrices there exists a real number λ satisfying det(A + λB) ≠ 0, then the real matrix pair (A, B) is called a real regular matrix pair. Similarly, we have the following definition of a quaternion regular matrix pair.

Definition 9.1. Let A, B ∈ Q^{m×m}. If there exists λ ∈ R such that det(A_r + λB_r) ≠ 0, then the quaternion matrix pair (A, B) is called a quaternion regular matrix pair.

According to definition 9.1, it is easy to obtain the following lemma.

Lemma 9.1. A quaternion matrix pair (A, B) is a quaternion regular matrix pair if and only if its real representation matrix pair (A_r, B_r) is a real regular matrix pair.
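Proposition 9.4(4) is easy to check numerically. The sketch below (an illustration added here; it assumes the componentwise j-conjugate q̂ = q_1 − q_2 i + q_3 j − q_4 k used in this chapter) implements the Hamilton product from the proof above and verifies that the j-conjugate is multiplicative:

```python
# Quaternions are stored as 4-tuples (q1, q2, q3, q4) = q1 + q2 i + q3 j + q4 k.
def qmul(p, q):
    """Hamilton product, matching the componentwise expansion in proposition 9.4."""
    a1, a2, a3, a4 = p
    b1, b2, b3, b4 = q
    return (a1*b1 - a2*b2 - a3*b3 - a4*b4,
            a1*b2 + a2*b1 + a3*b4 - a4*b3,
            a1*b3 + a3*b1 - a2*b4 + a4*b2,
            a1*b4 + a4*b1 + a2*b3 - a3*b2)

def jconj(q):
    """j-conjugate: flips the signs of the i- and k-components."""
    a1, a2, a3, a4 = q
    return (a1, -a2, a3, -a4)

assert qmul((0, 1, 0, 0), (0, 0, 1, 0)) == (0, 0, 0, 1)   # i * j = k
p, q = (1, 2, -1, 3), (2, 1, 0, -2)                        # arbitrary integer quaternions
assert jconj(qmul(p, q)) == qmul(jconj(p), jconj(q))       # (pq)^ = p^ q^
```

Integer components are used so that both sides are computed exactly.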

9.2.2 Solutions to the Quaternion j-Conjugate Matrix Equation XB − AX̂ = CY

In this subsection, we discuss the solution of the following quaternion j-conjugate matrix equation:

  XB − AX̂ = CY,   (9.2.2)

in which A ∈ Q^{n×n}, B ∈ Q^{p×p} and C ∈ Q^{n×r} are known quaternion matrices, and X ∈ Q^{n×p} and Y ∈ Q^{r×p} are unknown quaternion matrices. Firstly, we define the real representation equation of the generalized Sylvester quaternion j-conjugate matrix equation (9.2.2) by

  V(B̂)_r − A_r V = C_r W.   (9.2.3)

Due to propositions 9.1 and 9.4, equation (9.2.2) is equivalent to the real matrix equation

  (XB − AX̂)_r = X_r P_p B_r − A_r X_r P_p = (CY)_r.

Post-multiplying both sides of this equation by P_p, the matrix equation (9.2.2) can be converted into

  X_r (B̂)_r − A_r X_r = C_r (Ŷ)_r.

In this case, we have the following conclusion.

Proposition 9.5. Let A ∈ Q^{n×n}, B ∈ Q^{p×p} and C ∈ Q^{n×r} be given quaternion matrices. Then the quaternion matrix equation (9.2.2) has a solution (X, Y) if and only if the real representation matrix equation (9.2.3) has the solution (V, W) = (X_r, (Ŷ)_r).

Theorem 9.3. Let A ∈ Q^{n×n}, B ∈ Q^{p×p} and C ∈ Q^{n×r} be given quaternion matrices. Then the quaternion matrix equation (9.2.2) has a solution (X, Y) if and only if the real representation matrix equation (9.2.3) has a solution (V, W). Furthermore, if (V, W) is a solution to (9.2.3), then the following quaternion matrices are solutions to (9.2.2):

  X = (1/16)[I_n  iI_n  jI_n  kI_n](V − Q_n^{−1}VQ_p + R_n^{−1}VR_p − S_n^{−1}VS_p)[I_p; iI_p; jI_p; kI_p],
  Y = (1/16)[I_r  iI_r  jI_r  kI_r](W − Q_r^{−1}WQ_p + R_r^{−1}WR_p − S_r^{−1}WS_p)[I_p; iI_p; jI_p; kI_p].   (9.2.4)

Proof. Suppose (V, W) is a real solution of the matrix equation (9.2.3). Using (3) of proposition 9.1 (R_m^{−1}A_rR_n = A_r and the analogous relations for (B̂)_r and C_r), conjugation of (9.2.3) by the unitary matrices R gives

  (R_n^{−1}VR_p)(B̂)_r − A_r(R_n^{−1}VR_p) = C_r(R_r^{−1}WR_p),

so (R_n^{−1}VR_p, R_r^{−1}WR_p) is also a real solution of (9.2.3). In the same way, using Q_m^{−1}A_rQ_n = −A_r and S_m^{−1}A_rS_n = −A_r (the sign changes on the two sides of the equation cancel), one verifies that (Q_n^{−1}VQ_p, Q_r^{−1}WQ_p) and (S_n^{−1}VS_p, S_r^{−1}WS_p) are real solutions of (9.2.3) as well. In this case, the conclusion can be obtained along similar lines to the proof of theorem 4.2 in [187]. □

Theorem 9.4. Given quaternion matrices A ∈ Q^{n×n}, B ∈ Q^{p×p} and C ∈ Q^{n×r}, suppose

  f_{A_r}(s) = det(sI_{4n} − A_r) = Σ_{k=0}^{2n} a_{2k} s^{2k},  and  p_{A_r}(s) = Σ_{k=0}^{2n} a_{2k} s^k.

Then the quaternion matrices X ∈ Q^{n×p}, Y ∈ Q^{r×p} solving (9.2.2) can be established as

  X = Σ_{k=1}^{2n} a_{2k} [Σ_{s=0}^{k−1} (AÂ)^{k−s−1} A Ĉ Z (BB̂)^s + Σ_{s=0}^{k−1} (AÂ)^{k−s−1} C Ẑ B̂ (BB̂)^s],
  Y = Ẑ p_{A_r}(B̂B),   (9.2.10)

in which Z ∈ Q^{r×p} is an arbitrary quaternion matrix.

Proof. If the generalized Sylvester quaternion j-conjugate matrix equation (9.2.2) has a solution (X, Y), then the real representation equation (9.2.3) has the solution (V, W) = (X_r, (Ŷ)_r) with the free parameter Z_r. By theorems 9.2 and 9.3,

  X_r = Σ_{j=0}^{4n−1} Σ_{k=j+1}^{4n} a_k A_r^{k−j−1} C_r Z_r ((B̂)_r)^j.

Since a_{2k+1} = 0, only the even coefficients survive; splitting the sum over j into its even parts j = 2s and odd parts j = 2s + 1 and converting each term back to a quaternion matrix by propositions 9.1 and 9.3 yields the expression for X in (9.2.10). In addition, according to proposition 9.3, f_{A_r}(s) is a real polynomial and f_{A_r}((B̂)_r) = (p_{A_r}(B̂B))_r P_p. Thus, by proposition 9.1,

  (Ŷ)_r = Z_r f_{A_r}((B̂)_r) = (Z p_{A_r}(BB̂))_r,

and therefore, from proposition 9.4, Ŷ = Z p_{A_r}(BB̂), that is, Y = Ẑ p_{A_r}(B̂B). Thus, we complete the proof of the conclusion. □

In the following, based on the main result proposed above, we consider the following Sylvester quaternion j-conjugate matrix equation:

  XB − AX̂ = C.   (9.2.11)

Corollary 9.1. Given quaternion matrices A ∈ Q^{n×n}, B ∈ Q^{p×p} and C ∈ Q^{n×p}, let

  f_{A_r}(s) = det(sI_{4n} − A_r) = Σ_{k=0}^{2n} a_{2k} s^{2k},  and  p_{A_r}(s) = Σ_{k=0}^{2n} a_{2k} s^k.

(1) If X is a solution of the equation (9.2.11), then

  X p_{A_r}(BB̂) = Σ_{k=1}^{2n} a_{2k} [Σ_{s=0}^{k−1} (AÂ)^s C B̂ (BB̂)^{k−s−1} + Σ_{s=0}^{k−1} (AÂ)^s A Ĉ (BB̂)^{k−s−1}].   (9.2.12)

(2) If (A, B) is a regular quaternion matrix pair, then the quaternion matrix equation (9.2.11) has the unique solution

  X = Σ_{k=1}^{2n} a_{2k} [Σ_{s=0}^{k−1} (AÂ)^s C B̂ (BB̂)^{k−s−1} + Σ_{s=0}^{k−1} (AÂ)^s A Ĉ (BB̂)^{k−s−1}] [p_{A_r}(BB̂)]^{−1}.   (9.2.13)

Proof. If X is a solution of the equation (9.2.11), then V = X_r is a solution of the real matrix equation X_r(B̂)_r − A_rX_r = C_rP_p. By theorem 3 in [30] and proposition 9.1, we have

  X_r f_{A_r}((B̂)_r) = Σ_{k=1}^{4n} Σ_{j=0}^{k−1} a_k A_r^j C_r P_p ((B̂)_r)^{k−j−1}.

From proposition 9.3, f_{A_r}(s) is a real polynomial and f_{A_r}((B̂)_r) = (p_{A_r}(B̂B))_r P_p, so that the left-hand side equals [X p_{A_r}(BB̂)]_r. On the right-hand side only the even coefficients a_{2k} survive; splitting the inner sum according to the parity of j (j = 2s and j = 2s + 1) and converting each term back to a quaternion matrix by propositions 9.1 and 9.4 gives

  [X p_{A_r}(BB̂)]_r = [Σ_{k=1}^{2n} a_{2k} (Σ_{s=0}^{k−1} (AÂ)^s C B̂ (BB̂)^{k−s−1} + Σ_{s=0}^{k−1} (AÂ)^s A Ĉ (BB̂)^{k−s−1})]_r,

which proves (9.2.12). If, in addition, (A, B) is a regular quaternion matrix pair, then p_{A_r}(BB̂) is nonsingular, so (9.2.11) has a unique solution and (9.2.13) holds. Thus, the conclusion has been proved. □


In the following, we give an example of computing the solution of the Sylvester quaternion j-conjugate matrix equation XB − AX̂ = C.

Example 9.1. Consider the explicit solution of the Sylvester quaternion matrix equation XB − AX̂ = C with the following parameters:

  A = [1  2i; j  k],  B = [3+j  2−k; 4i  1],
  C = [12+4i−6k  −4−4i−10k; 2−4i+7j−13k  1−7i+9j+4k].

According to the definition of the real representation of a quaternion matrix, the matrix A_r ∈ R^{8×8} is formed from the blocks A_1, A_2, A_3, A_4 of A. It is easy to check that the quaternion matrix pair (A, B) is regular. By some simple computation, we have

  f_{A_r}(s) = s^8 − 4s^6 + 6s^4 − 4s^2 + 1,  p_{A_r}(s) = s^4 − 4s^3 + 6s^2 − 4s + 1,

and hence a_8 = 1, a_6 = −4, a_4 = 6, a_2 = −4, a_0 = 1. It follows from corollary 9.1 that the solution of the Sylvester quaternion matrix equation is

  X = [2  1−2i; 3j+k  4j].

9.3 Conclusions

In this chapter, the real generalized Sylvester matrix equation XB − AX = CY and the quaternion j-conjugate matrix equation XB − AX̂ = CY have been investigated. We have provided a new solution to the real matrix equation XB − AX = CY through a direct method. Based on the solution of this real matrix equation and the real representation of a quaternion matrix, we have established explicit solutions of the quaternion matrix equation XB − AX̂ = CY. Compared with the existing results [184], no restriction is placed on the coefficient matrix A, whereas in [184] the coefficient matrix A is restricted to be a block-diagonal complex matrix. In this sense, the results in this chapter extend those of [184]. As a special case of the generalized Sylvester quaternion j-conjugate matrix equation, the Sylvester quaternion j-conjugate matrix equation XB − AX̂ = C has also been considered in this chapter, and explicit solutions to this kind of matrix equation have been proposed.

Chapter 10 Explicit Solutions to Linear Transpose Matrix Equation

In this chapter, we consider the explicit solutions to the equations

  AX + XᵀB = C,   (10.0.1)

and

  AX + XᵀB = CY,   (10.0.2)

where A, B, C ∈ R^{n×n} are given n × n real matrices and X, Y ∈ R^{n×n} are unknown n × n real matrices to be determined. They are named the Sylvester transpose matrix equation and the generalized Sylvester transpose matrix equation, respectively. If X = Xᵀ, the equations (10.0.1) and (10.0.2) reduce to the well-known Sylvester (or generalized Sylvester) matrix equation, which has many important applications in control theory. For calculating the solutions to (10.0.1) and (10.0.2), three algorithms are provided in this chapter, and the design of a time-varying linear system in control is given as an application.

10.1 Solutions to the Sylvester Transpose Matrix Equation

In this section, the explicit solutions to (10.0.1) are considered and three forms of solutions are constructed. One of the solutions is of the form of the well-known Jameson theorem; in this sense, we generalize Jameson's theorem from the Sylvester matrix equation to the Sylvester transpose matrix equation. Firstly, some useful definitions and lemmas are summarized below.

Definition 10.1 (Sylvester sum [6]). Let T(s) = Σ_{i=0}^{t} T_i s^i ∈ C^{m×q}[s], F ∈ C^{p×p} and Z ∈ C^{q×p}. The matrix sum Syl(T(s), F, Z) = Σ_{i=0}^{t} T_i Z F^i is called the Sylvester sum.

DOI: 10.1051/978-2-7598-3102-9.c010 © Science Press, EDP Sciences, 2023

Definition 10.2 (Kronecker map [6]). Consider the polynomial matrix T(s) = Σ_{i=0}^{t} T_i s^i ∈ C^{m×q}[s]. For a fixed F ∈ C^{p×p}, the map Φ[T(s)] = Σ_{i=0}^{t} (Fᵀ)^i ⊗ T_i is called the Φ-Kronecker map.

Lemma 10.1 [6]. Let X(s) ∈ C^{q×r}[s] and Y(s) ∈ C^{r×m}[s]. Then Φ[X(s)Y(s)] = Φ[X(s)]Φ[Y(s)].

Lemma 10.2 [6]. Let T(s) ∈ C^{q×q}[s]. Then Φ[adj(T(s))]Φ[T(s)] = Φ[I_q det T(s)].
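Before turning to the closed-form solutions, note that (10.0.1) is simply a linear system in the entries of X. The sketch below (an added illustration, not an algorithm from the book) solves it through the standard vec identities vec(AX) = (I ⊗ A)vec(X) and vec(XᵀB) = (Bᵀ ⊗ I)K vec(X), where K is the commutation matrix with K vec(X) = vec(Xᵀ) and vec stacks columns:

```python
import numpy as np

def solve_transpose_sylvester(A, B, C):
    """Solve A X + X^T B = C by vectorization (column-major vec)."""
    n = A.shape[0]
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j + i * n, i + j * n] = 1.0          # K vec(X) = vec(X^T)
    M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n)) @ K
    x = np.linalg.solve(M, C.ravel(order="F"))     # column-major vec(C)
    return x.reshape((n, n), order="F")

A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])             # eigenvalues of B^T A^{-1} are 1/2, 1/3
X0 = np.array([[1.0, 2.0], [3.0, 4.0]])
C = A @ X0 + X0.T @ B
X = solve_transpose_sylvester(A, B, C)
assert np.allclose(X, X0)                          # unique solution recovered
```

For this pair (A, B), the products of the eigenvalues of BᵀA⁻¹ never equal 1, so the linear system is nonsingular and the solution is unique (cf. theorem 10.1 below).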

10.1.1 The First Case: A or B is Nonsingular

Define

  f_{(I_n,BᵀA^{−1})}(s) := det(I_n − sBᵀA^{−1}) = a_n s^n + ⋯ + a_1 s + a_0,  a_0 = 1,   (10.1.1)

and

  adj(I_n − sBᵀA^{−1}) := ω_{n−1} s^{n−1} + ⋯ + ω_1 s + ω_0.   (10.1.2)

Theorem 10.1. Let A be nonsingular and let λ_1, λ_2, …, λ_n be the eigenvalues of BᵀA^{−1}. Then:

(1) Suppose that X is a solution to equation (10.0.1); then

  X f_{(I_n,BᵀA^{−1})}((BᵀA^{−1})ᵀ) = A^{−1} Σ_{i=0}^{n−1} ω_i (C − (BᵀA^{−1}C)ᵀ)((BᵀA^{−1})ᵀ)^i.   (10.1.3)

(2) If λ_iλ_j ≠ 1 for all i, j, then equation (10.0.1) has the unique solution

  X = A^{−1} [Σ_{i=0}^{n−1} ω_i (C − (BᵀA^{−1}C)ᵀ)((BᵀA^{−1})ᵀ)^i] f_{(I_n,BᵀA^{−1})}^{−1}((BᵀA^{−1})ᵀ).   (10.1.4)

Proof. From lemma 10.1, equation (10.0.1) is equivalent to

  Syl(I_n − sBᵀA^{−1}, (BᵀA^{−1})ᵀ, AX) = C − (BᵀA^{−1}C)ᵀ.

By definitions (10.1.1) and (10.1.2), the above relation can be rewritten as

  Φ[I_n − sBᵀA^{−1}] vec(AX) = vec(C − (BᵀA^{−1}C)ᵀ).

From lemma 10.2, we have

  Φ[I_n det(I_n − sBᵀA^{−1})] vec(AX) = Φ[adj(I_n − sBᵀA^{−1})] vec(C − (BᵀA^{−1}C)ᵀ).

So we get

  Syl(I_n det(I_n − sBᵀA^{−1}), (BᵀA^{−1})ᵀ, AX) = Syl(adj(I_n − sBᵀA^{−1}), (BᵀA^{−1})ᵀ, C − (BᵀA^{−1}C)ᵀ).

By simple computation, one can obtain

  Syl(I_n det(I_n − sBᵀA^{−1}), (BᵀA^{−1})ᵀ, AX) = AX f_{(I_n,BᵀA^{−1})}((BᵀA^{−1})ᵀ),

and

  Syl(adj(I_n − sBᵀA^{−1}), (BᵀA^{−1})ᵀ, C − (BᵀA^{−1}C)ᵀ) = Σ_{i=0}^{n−1} ω_i (C − (BᵀA^{−1}C)ᵀ)((BᵀA^{−1})ᵀ)^i.

Thus, we get (10.1.3). If λ_iλ_j ≠ 1 for all i, j, then f_{(I_n,BᵀA^{−1})}((BᵀA^{−1})ᵀ) is nonsingular. Therefore, equation (10.0.1) has a unique solution and (10.1.4) holds. □

Now, we summarize the well-known generalized Leverrier algorithm [186] as follows:

  ω_k = ω_{k−1} BᵀA^{−1} + a_k I_n,  ω_0 = I_n,  k = 1, 2, …, n − 1,
  a_k = −trace(ω_{k−1} BᵀA^{−1})/k,  a_0 = 1,  k = 1, 2, …, n,   (10.1.5)

where a_i (i = 0, 1, …, n) are the coefficients of the polynomial (10.1.1), and ω_i (i = 0, 1, …, n − 1) are the coefficient matrices of adj(I_n − sBᵀA^{−1}). Applying (10.1.5), we obtain the following results.

Theorem 10.2. Let A be nonsingular and let λ_1, λ_2, …, λ_n be the eigenvalues of BᵀA^{−1}.

(1) Suppose that X is a solution of the equation (10.0.1); then

  X f_{(I_n,BᵀA^{−1})}((BᵀA^{−1})ᵀ) = A^{−1} Σ_{j=0}^{n−1} Σ_{k=0}^{j} a_k (BᵀA^{−1})^{j−k} (C − (BᵀA^{−1}C)ᵀ)((BᵀA^{−1})ᵀ)^j.   (10.1.6)

(2) If λ_iλ_j ≠ 1 for all i, j, then the unique solution to (10.0.1) is stated as

  X = A^{−1} [Σ_{j=0}^{n−1} Σ_{k=0}^{j} a_k (BᵀA^{−1})^{j−k} (C − (BᵀA^{−1}C)ᵀ)((BᵀA^{−1})ᵀ)^j] f_{(I_n,BᵀA^{−1})}^{−1}((BᵀA^{−1})ᵀ).   (10.1.7)


Proof. From (10.1.5), we get

  ω_0 = I_n,
  ω_1 = a_1 I_n + BᵀA^{−1},
  ω_2 = a_2 I_n + a_1 BᵀA^{−1} + (BᵀA^{−1})²,
  ⋮
  ω_{n−1} = a_{n−1} I_n + a_{n−2} BᵀA^{−1} + ⋯ + (BᵀA^{−1})^{n−1}.

The above formulas can be rewritten as

  ω_j = Σ_{k=0}^{j} a_k (BᵀA^{−1})^{j−k},  j = 0, 1, …, n − 1.

Therefore, we have

  Σ_{j=0}^{n−1} ω_j (C − (BᵀA^{−1}C)ᵀ)((BᵀA^{−1})ᵀ)^j = Σ_{j=0}^{n−1} Σ_{k=0}^{j} a_k (BᵀA^{−1})^{j−k}(C − (BᵀA^{−1}C)ᵀ)((BᵀA^{−1})ᵀ)^j.

Combining this relation with theorem 10.1 proves (10.1.6). If λ_iλ_j ≠ 1 for all i, j, then f_{(I_n,BᵀA^{−1})}((BᵀA^{−1})ᵀ) is nonsingular. Thus, (10.0.1) has a unique solution and the desired result follows. □

Remark 1. This theorem is Jameson's theorem for the Sylvester transpose matrix equation (10.0.1); it has the same form as theorem 7 in [9].

Theorem 10.3. Suppose that A is nonsingular, let λ_1, λ_2, …, λ_n be the eigenvalues of BᵀA^{−1}, and let C − (BᵀA^{−1}C)ᵀ = C_1C_2 with C_1 ∈ R^{n×r}, C_2 ∈ R^{r×n}, where r = rank(C − (BᵀA^{−1}C)ᵀ).

(1) Let X be a solution to (10.0.1); then

  X f_{(I_n,BᵀA^{−1})}((BᵀA^{−1})ᵀ) = A^{−1} Q_c(BᵀA^{−1}, C_1, n) S_r(I, BᵀA^{−1}) Q_o((BᵀA^{−1})ᵀ, C_2, n).   (10.1.8)

(2) If λ_iλ_j ≠ 1 for all i, j, then the unique solution to (10.0.1) can be expressed as

  X = A^{−1} [Q_c(BᵀA^{−1}, C_1, n) S_r(I, BᵀA^{−1}) Q_o((BᵀA^{−1})ᵀ, C_2, n)] f_{(I_n,BᵀA^{−1})}^{−1}((BᵀA^{−1})ᵀ).   (10.1.9)

Proof. According to (10.1.5), we have

  [ω_0C_1  ω_1C_1  ⋯  ω_{n−1}C_1] = Q_c(BᵀA^{−1}, C_1, n) S_r(I, BᵀA^{−1}).

Thus, it is easy to obtain

  Σ_{i=0}^{n−1} ω_i (C − (BᵀA^{−1}C)ᵀ)((BᵀA^{−1})ᵀ)^i = [ω_0C_1  ω_1C_1  ⋯  ω_{n−1}C_1] Q_o((BᵀA^{−1})ᵀ, C_2, n)
  = Q_c(BᵀA^{−1}, C_1, n) S_r(I, BᵀA^{−1}) Q_o((BᵀA^{−1})ᵀ, C_2, n).

Combining this relation with theorem 10.1, (10.1.8) can be proved. If λ_iλ_j ≠ 1 for all i, j, then f_{(I_n,BᵀA^{−1})}((BᵀA^{−1})ᵀ) is nonsingular. Therefore, (10.0.1) has a unique solution and (10.1.9) holds. □

Remark 2. Corollaries 1 and 2 in [24] provide solvability conditions for (10.0.1) that involve the nonsingularity of matrices such as Aᵀ + C, Aᵀ − C or Aᵀ + B under a symmetry constraint on C, together with an additional algebraic condition involving a matrix P_21 built from Aᵀ − B and Aᵀ + B. Compared with corollaries 1 and 2 in [24], the condition for the existence of a solution to (10.0.1) in theorem 10.3 only requires λμ ≠ 1 for any λ, μ ∈ σ(BᵀA^{−1}); this condition is easy to check in Matlab.

Suppose now that B is nonsingular. Taking the transpose of the equation AX + XᵀB = C, we have BᵀX + XᵀAᵀ = Cᵀ. Note that the eigenvalues of A(Bᵀ)^{−1} are the reciprocals of those of BᵀA^{−1}; hence λμ ≠ 1 for any λ, μ ∈ σ(BᵀA^{−1}) is equivalent to λμ ≠ 1 for any λ, μ ∈ σ(A(Bᵀ)^{−1}). Thus, we have the following corollaries, whose proofs are similar to those of the above theorems and are omitted here.

Corollary 10.1. Suppose that B is nonsingular, and let

  g_{(I_n,A(Bᵀ)^{−1})}(s) := det(I_n − sA(Bᵀ)^{−1}) = b_n s^n + ⋯ + b_1 s + b_0,  b_0 = 1,   (10.1.10)

and

  adj(I_n − sA(Bᵀ)^{−1}) := U_{n−1} s^{n−1} + ⋯ + U_1 s + U_0.   (10.1.11)

(1) If the equation (10.0.1) has a solution X, then the solution satisfies

  X g_{(I_n,A(Bᵀ)^{−1})}(B^{−1}Aᵀ) = (Bᵀ)^{−1} Σ_{i=0}^{n−1} U_i (Cᵀ − CB^{−1}Aᵀ)(B^{−1}Aᵀ)^i.   (10.1.12)


(2) If λμ ≠ 1 for any λ, μ ∈ σ(A(Bᵀ)^{−1}), then the unique solution to (10.0.1) can be characterized by

  X = (Bᵀ)^{−1} [Σ_{i=0}^{n−1} U_i (Cᵀ − CB^{−1}Aᵀ)(B^{−1}Aᵀ)^i] g_{(I_n,A(Bᵀ)^{−1})}^{−1}(B^{−1}Aᵀ).   (10.1.13)

Corollary 10.2. Suppose that B is nonsingular and let μ_1, μ_2, …, μ_n be the eigenvalues of A(Bᵀ)^{−1}.

(1) Let X be a solution to (10.0.1); then

  X g_{(I_n,A(Bᵀ)^{−1})}(B^{−1}Aᵀ) = (Bᵀ)^{−1} Σ_{j=0}^{n−1} Σ_{k=0}^{j} b_k (A(Bᵀ)^{−1})^{j−k}(Cᵀ − CB^{−1}Aᵀ)(B^{−1}Aᵀ)^j.   (10.1.14)

(2) If μ_iμ_j ≠ 1 for all i, j, then the unique solution to (10.0.1) can be characterized by

  X = (Bᵀ)^{−1} [Σ_{j=0}^{n−1} Σ_{k=0}^{j} b_k (A(Bᵀ)^{−1})^{j−k}(Cᵀ − CB^{−1}Aᵀ)(B^{−1}Aᵀ)^j] g_{(I_n,A(Bᵀ)^{−1})}^{−1}(B^{−1}Aᵀ).   (10.1.15)

Corollary 10.3. Suppose that B is nonsingular and Cᵀ − CB^{−1}Aᵀ = C_1C_2 with C_1 ∈ R^{n×r}, C_2 ∈ R^{r×n}, where r = rank(Cᵀ − CB^{−1}Aᵀ).

(1) Let X be a solution to (10.0.1); then

  X g_{(I_n,A(Bᵀ)^{−1})}(B^{−1}Aᵀ) = (Bᵀ)^{−1} Q_c(A(Bᵀ)^{−1}, C_1, n) S_r(I, A(Bᵀ)^{−1}) Q_o(B^{−1}Aᵀ, C_2, n).   (10.1.16)

(2) If λμ ≠ 1 for any λ, μ ∈ σ(A(Bᵀ)^{−1}), then the unique solution to (10.0.1) is

  X = (Bᵀ)^{−1} Q_c(A(Bᵀ)^{−1}, C_1, n) S_r(I, A(Bᵀ)^{−1}) Q_o(B^{−1}Aᵀ, C_2, n) g_{(I_n,A(Bᵀ)^{−1})}^{−1}(B^{−1}Aᵀ).   (10.1.17)
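The closed form of theorems 10.1 and 10.2 is straightforward to implement. The sketch below (an added illustration with small hand-picked matrices, following the notation reconstructed above) builds the coefficients a_k and matrices ω_k by the generalized Leverrier recursion (10.1.5), evaluates the formula (10.1.4), and checks the result against the original equation:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
C = np.array([[1.0, -1.0], [0.0, 2.0]])
n = A.shape[0]

G = B.T @ np.linalg.inv(A)                 # B^T A^{-1}; eigenvalues 1/2, 1/3
w = [np.eye(n)]                            # omega_0 = I_n
a = [1.0]                                  # a_0 = 1
for k in range(1, n + 1):                  # recursion (10.1.5)
    a.append(-np.trace(w[-1] @ G) / k)
    if k < n:
        w.append(w[-1] @ G + a[k] * np.eye(n))
E = C - (G @ C).T                          # C - (B^T A^{-1} C)^T
S = sum(w[i] @ E @ np.linalg.matrix_power(G.T, i) for i in range(n))
fGT = sum(a[k] * np.linalg.matrix_power(G.T, k) for k in range(n + 1))
X = np.linalg.inv(A) @ S @ np.linalg.inv(fGT)   # formula (10.1.4)
assert np.allclose(A @ X + X.T @ B, C)
```

Since the eigenvalue products of BᵀA⁻¹ here are 1/4, 1/6 and 1/9, the uniqueness condition λ_iλ_j ≠ 1 holds and f evaluated at (BᵀA⁻¹)ᵀ is invertible.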

10.1.2 The Second Case: A and B are Nonsingular

Theorem 10.4. Let A and B be nonsingular,

  h_{BᵀA^{−1}}(s) := det(sI_n − BᵀA^{−1}) = c_n s^n + ⋯ + c_1 s + c_0,  c_n = 1,   (10.1.18)

and

  adj(sI_n − BᵀA^{−1}) := D_{n−1} s^{n−1} + ⋯ + D_1 s + D_0.   (10.1.19)

(1) Suppose that X is a solution to the equation (10.0.1); then

  X h_{BᵀA^{−1}}(B^{−1}Aᵀ) = A^{−1} Σ_{i=0}^{n−1} D_i (CB^{−1}Aᵀ − Cᵀ)(B^{−1}Aᵀ)^i.   (10.1.20)

(2) If σ(BᵀA^{−1}) ∩ σ(B^{−1}Aᵀ) = ∅, then the equation (10.0.1) has the unique solution

  X = A^{−1} [Σ_{i=0}^{n−1} D_i (CB^{−1}Aᵀ − Cᵀ)(B^{−1}Aᵀ)^i] [h_{BᵀA^{−1}}(B^{−1}Aᵀ)]^{−1}.   (10.1.21)

Proof. According to definition (10.1.18), the equation (10.0.1) can be rewritten as

  Syl(sI_n − BᵀA^{−1}, B^{−1}Aᵀ, AX) = CB^{−1}Aᵀ − Cᵀ.

By definitions (10.1.18) and (10.1.19), the above relation is equivalent to

  Φ[sI_n − BᵀA^{−1}] vec(AX) = vec(CB^{−1}Aᵀ − Cᵀ).

It follows from lemma 10.2 that

  Φ[I_n det(sI_n − BᵀA^{−1})] vec(AX) = Φ[adj(sI_n − BᵀA^{−1})] vec(CB^{−1}Aᵀ − Cᵀ).

So we get

  Syl(I_n det(sI_n − BᵀA^{−1}), B^{−1}Aᵀ, AX) = Syl(adj(sI_n − BᵀA^{−1}), B^{−1}Aᵀ, CB^{−1}Aᵀ − Cᵀ).

From definition (10.1.18) and some simple computations, we have

  Syl(I_n det(sI_n − BᵀA^{−1}), B^{−1}Aᵀ, AX) = AX h_{BᵀA^{−1}}(B^{−1}Aᵀ),

and

  Syl(adj(sI_n − BᵀA^{−1}), B^{−1}Aᵀ, CB^{−1}Aᵀ − Cᵀ) = Σ_{i=0}^{n−1} D_i (CB^{−1}Aᵀ − Cᵀ)(B^{−1}Aᵀ)^i.

Thus, we get (10.1.20). Due to σ(BᵀA^{−1}) ∩ σ(B^{−1}Aᵀ) = ∅, h_{BᵀA^{−1}}(B^{−1}Aᵀ) is nonsingular and the equation (10.0.1) has a unique solution, so (10.1.21) holds. □

Next, the well-known Faddeev-Leverrier algorithm [186] is shown as follows:

  D_{n−k} = D_{n−k+1} BᵀA^{−1} + c_{n−k+1} I_n,  D_n = 0,  k = 1, 2, …, n,
  c_{n−k} = −trace(D_{n−k} BᵀA^{−1})/k,  c_n = 1,  k = 1, 2, …, n.   (10.1.22)

Based on this Faddeev-Leverrier algorithm, we have the following equivalent forms of theorem 10.4. Their proofs are similar to those of theorems 10.2 and 10.3 and are omitted here.

Theorem 10.5. Suppose that A and B are nonsingular.

(1) If X is a solution of the equation (10.0.1), then

  X h_{BᵀA^{−1}}(B^{−1}Aᵀ) = A^{−1} Σ_{k=1}^{n} Σ_{j=0}^{k−1} c_k (BᵀA^{−1})^j (CB^{−1}Aᵀ − Cᵀ)(B^{−1}Aᵀ)^{k−j−1}.   (10.1.23)

(2) If σ(BᵀA^{−1}) ∩ σ(B^{−1}Aᵀ) = ∅, then the equation (10.0.1) has the unique solution

  X = A^{−1} [Σ_{k=1}^{n} Σ_{j=0}^{k−1} c_k (BᵀA^{−1})^j (CB^{−1}Aᵀ − Cᵀ)(B^{−1}Aᵀ)^{k−j−1}] h_{BᵀA^{−1}}^{−1}(B^{−1}Aᵀ).   (10.1.24)

Remark 3. Different from theorem 10.4, this theorem only needs the coefficients of the characteristic polynomial of BᵀA^{−1} in order to obtain the solution to (10.0.1). Besides the Faddeev-Leverrier algorithm, there exist other numerically reliable algorithms that can also be applied.

Theorem 10.6. Suppose that A and B are nonsingular, and let CB^{−1}Aᵀ − Cᵀ = C̄_1C̄_2 with C̄_1 ∈ R^{n×r}, C̄_2 ∈ R^{r×n}, where r = rank(CB^{−1}Aᵀ − Cᵀ).

(1) If X is a solution to the equation (10.0.1), then the solution can be expressed as

  X h_{BᵀA^{−1}}(B^{−1}Aᵀ) = A^{−1} Q_c(BᵀA^{−1}, C̄_1, n) S_r(BᵀA^{−1}) Q_o(B^{−1}Aᵀ, C̄_2, n).   (10.1.25)

(2) If σ(BᵀA^{−1}) ∩ σ(B^{−1}Aᵀ) = ∅, then the equation (10.0.1) has the unique solution

  X = A^{−1} Q_c(BᵀA^{−1}, C̄_1, n) S_r(BᵀA^{−1}) Q_o(B^{−1}Aᵀ, C̄_2, n) h_{BᵀA^{−1}}^{−1}(B^{−1}Aᵀ).   (10.1.26)

10.2 Solutions to the Generalized Sylvester Transpose Matrix Equation

In this section, we discuss the parametric solutions to the equation (10.0.2). Firstly, we define

  f_{(I_n,A^{−1}Bᵀ)}(s) := det(I_n + sA^{−1}Bᵀ) = d_n s^n + ⋯ + d_1 s + d_0,  d_0 = 1,   (10.2.1)

and

  adj(I_n + sA^{−1}Bᵀ) := M_{n−1} s^{n−1} + ⋯ + M_1 s + M_0.   (10.2.2)

Theorem 10.7. If A is nonsingular and {s | det(I_n + sA^{−1}Bᵀ) = 0} ∩ σ(−(A^{−1})ᵀB) = ∅ holds, then all the solutions to the equation (10.0.2) are given by

  X = Σ_{i=0}^{n−1} M_i A^{−1}CZ(−(A^{−1})ᵀB)^i + A^{−1} [Σ_{i=0}^{n−1} (−BᵀA^{−1})^i (M_i A^{−1}CZ)ᵀ](−B),
  Y = Z f_{(I_n,A^{−1}Bᵀ)}(−(A^{−1})ᵀB).   (10.2.3)

Proof. First, we prove that $X$ and $Y$ given in (10.2.3) are solutions to (10.0.2). Substituting (10.2.3) into the left-hand side and noting that the two cross terms cancel, a direct calculation gives
\[ AX + X^TB = \sum_{i=0}^{n-1} AM_iA^{-1}CZ\big((A^{-1})^TB\big)^i + \sum_{i=0}^{n-1} B^TM_iA^{-1}CZ\big((A^{-1})^TB\big)^{i+1} \]
\[ = AM_0A^{-1}CZ + \sum_{i=1}^{n-1}\big(AM_i + B^TM_{i-1}\big)A^{-1}CZ\big((A^{-1})^TB\big)^i + B^TM_{n-1}A^{-1}CZ\big((A^{-1})^TB\big)^n. \]
Note that $(I_n + sA^{-1}B^T)\operatorname{adj}(I_n + sA^{-1}B^T) = I_n\det(I_n + sA^{-1}B^T)$; comparing coefficients of $s$ gives
\[ \begin{cases} M_0 = d_0 I_n, \\ M_i + A^{-1}B^T M_{i-1} = d_i I_n, & i \in I[1, n-1], \\ A^{-1}B^T M_{n-1} = d_n I_n. \end{cases} \]
Thus, one gets
\[ AM_0A^{-1}CZ + \sum_{i=1}^{n-1}(AM_i + B^TM_{i-1})A^{-1}CZ\big((A^{-1})^TB\big)^i + B^TM_{n-1}A^{-1}CZ\big((A^{-1})^TB\big)^n = CZ\sum_{i=0}^{n} d_i\big((A^{-1})^TB\big)^i = CZf_{(I_n,A^{-1}B^T)}\big((A^{-1})^TB\big) = CY. \]

Therefore, $X$ and $Y$ in (10.2.3) satisfy (10.0.2). Next, we prove the completeness of (10.2.3). Firstly, (10.2.3) contains $rp$ free parameters, namely the elements of the parameter matrix $Z$. Secondly, because $\{s \mid \det(I_n + sA^{-1}B^T) = 0\} \cap \sigma(B^TA^{-1}) = \emptyset$ holds, $f_{(I_n,A^{-1}B^T)}((A^{-1})^TB)$ is nonsingular, which shows that the mapping $Z \to (X, Y)$ given in (10.2.3) is injective. So all the solutions to (10.0.2) can be characterized by (10.2.3). □

In the following, we give two equivalent forms of (10.2.3). One of them is stated in terms of the controllability matrix associated with $A^{-1}B^T$ and $A^{-1}C$, a symmetric operator matrix, and the observability matrix associated with $(A^{-1})^TB$ and $Z$; this form may be convenient for further analysis of control problems.

Theorem 10.8. If $A$ is nonsingular and $\{s \mid \det(I_n + sA^{-1}B^T) = 0\} \cap \sigma((A^{-1})^TB) = \emptyset$ holds, then all the solutions to the equation (10.0.2) can be characterized by
\[ \begin{cases} X = \sum\limits_{i=0}^{n-1}\sum\limits_{k=0}^{i} d_k\big({-A^{-1}B^T}\big)^{i-k}A^{-1}CZ\big((A^{-1})^TB\big)^i + A^{-1}\Big[\sum\limits_{i=0}^{n-1}\sum\limits_{k=0}^{i} d_k\big({-A^{-1}B^T}\big)^{i-k}A^{-1}CZ\big((A^{-1})^TB\big)^i\Big]^T(-B), \\ Y = Zf_{(I_n,A^{-1}B^T)}\big((A^{-1})^TB\big), \end{cases} \tag{10.2.4} \]
or

\[ \begin{cases} X = Q_c(A^{-1}B^T, A^{-1}C, n)\, S_r(I, A^{-1}B^T)\, Q_o\big((A^{-1})^TB, Z, n\big) + A^{-1}\big[Q_c(A^{-1}B^T, A^{-1}C, n)\, S_r(I, A^{-1}B^T)\, Q_o\big((A^{-1})^TB, Z, n\big)\big]^T(-B), \\ Y = Zf_{(I_n,A^{-1}B^T)}\big((A^{-1})^TB\big), \end{cases} \tag{10.2.5} \]
where $Z \in \mathbb{R}^{n\times n}$ is an arbitrarily chosen parameter matrix.


Proof. From (10.1.5), one derives
\[ \begin{cases} M_0 = I_n, \\ M_i = d_i I_n - A^{-1}B^T M_{i-1}, & i \in I[1, n-1], \\ d_i = \dfrac{\operatorname{trace}(A^{-1}B^T M_{i-1})}{i}, & i \in I[1, n]. \end{cases} \]
This relation can be expanded as
\[ M_0 = I_n, \quad M_1 = d_1 I_n - A^{-1}B^T, \quad M_2 = d_2 I_n - d_1(A^{-1}B^T) + (A^{-1}B^T)^2, \quad \ldots, \quad M_{n-1} = d_{n-1}I_n - d_{n-2}(A^{-1}B^T) + \cdots + (-A^{-1}B^T)^{n-1}, \]
which can be rewritten compactly as
\[ M_i = \sum_{k=0}^{i} d_k\big({-A^{-1}B^T}\big)^{i-k}, \quad d_0 = 1, \quad i = 1, 2, \ldots, n-1. \]
Thus, by (10.2.3) we get
\[ \sum_{i=0}^{n-1} M_iA^{-1}CZ\big((A^{-1})^TB\big)^i = \sum_{i=0}^{n-1}\sum_{k=0}^{i} d_k\big({-A^{-1}B^T}\big)^{i-k}A^{-1}CZ\big((A^{-1})^TB\big)^i \]
and
\[ \sum_{i=0}^{n-1}(B^TA^{-1})^i\big(M_iA^{-1}CZ\big)^T = \Big[\sum_{i=0}^{n-1}\sum_{k=0}^{i} d_k\big({-A^{-1}B^T}\big)^{i-k}A^{-1}CZ\big((A^{-1})^TB\big)^i\Big]^T. \]
Moreover, from (10.2.3) we have
\[ \sum_{i=0}^{n-1} M_iA^{-1}CZ\big((A^{-1})^TB\big)^i = Q_c(A^{-1}B^T, A^{-1}C, n)\, S_r(I, A^{-1}B^T)\, Q_o\big((A^{-1})^TB, Z, n\big) \]
and
\[ A^{-1}\sum_{i=0}^{n-1}(B^TA^{-1})^i\big(M_iA^{-1}CZ\big)^T(-B) = A^{-1}\big[Q_c(A^{-1}B^T, A^{-1}C, n)\, S_r(I, A^{-1}B^T)\, Q_o\big((A^{-1})^TB, Z, n\big)\big]^T(-B). \]
Therefore, the desired result follows. □
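The recursion above determines the adjugate coefficients $M_i$ from the $d_i$ through the identity $(I_n + sM)\operatorname{adj}(I_n + sM) = \det(I_n + sM)I_n$. That identity is easy to spot-check numerically; the sketch below is ours (with a random $M$ standing in for $A^{-1}B^T$, and the $d_i$ obtained as elementary symmetric functions of the eigenvalues via `np.poly`):

```python
import numpy as np

np.random.seed(0)
n = 4
M = np.random.rand(n, n)          # stands in for A^{-1} B^T

# d_i: coefficients of f(s) = det(I + sM) = d_0 + d_1 s + ... + d_n s^n.
# Since det(I + sM) = prod(1 + s*lam_i), d_i is the i-th elementary
# symmetric function of the eigenvalues, i.e. d_i = (-1)^i * np.poly(M)[i].
p = np.poly(M)
d = [(-1) ** i * p[i] for i in range(n + 1)]

# Adjugate coefficients: M_0 = I and M_i = d_i I - M @ M_{i-1}.
Mi = [np.eye(n)]
for i in range(1, n):
    Mi.append(d[i] * np.eye(n) - M @ Mi[-1])

# Spot-check (I + sM) * adj(I + sM) = f(s) I at an arbitrary s.
s = 0.37
adj = sum(Mi[i] * s ** i for i in range(n))
lhs = (np.eye(n) + s * M) @ adj
f_s = sum(d[i] * s ** i for i in range(n + 1))
assert np.allclose(lhs, f_s * np.eye(n))
```

The top coefficient relation $A^{-1}B^TM_{n-1} = d_nI_n$ used in the proof of theorem 10.7 also follows, since the recursion reproduces the true adjugate coefficients.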

When $B$ is nonsingular, we have the following results. Since the proofs follow the same lines as those of theorems 10.7 and 10.8, they are omitted.

Theorem 10.9. Suppose that $B$ is nonsingular and $\{s \mid \det(I_n + sA(B^{-1})^T) = 0\} \cap \sigma(A^TB^{-1}) = \emptyset$ holds. Let
\[ f_{(I_n,A(B^{-1})^T)}(s) := \det\big(I_n + sA(B^{-1})^T\big) = x_n s^n + \cdots + x_1 s + x_0, \quad x_0 = 1, \tag{10.2.6} \]
and
\[ \operatorname{adj}\big(I_n + sA(B^{-1})^T\big) := N_{n-1}s^{n-1} + \cdots + N_1 s + N_0. \tag{10.2.7} \]
Then all the solutions to the equation (10.0.2) can be expressed as
\[ \begin{cases} X = \Big[\sum\limits_{i=0}^{n-1} N_iCZ(A^TB^{-1})^i\Big]^T + \sum\limits_{i=0}^{n-1}(B^{-1})^T\big(N_iCZ\big)(A^TB^{-1})^i(-A)^T, \\ Y = Zf_{(I_n,A(B^{-1})^T)}(A^TB^{-1})B, \end{cases} \tag{10.2.8} \]
where $Z \in \mathbb{R}^{n\times n}$ is an arbitrarily chosen parameter matrix.

Theorem 10.10. Suppose that $B$ is nonsingular and $\{s \mid \det(I_n + sA(B^{-1})^T) = 0\} \cap \sigma(A^TB^{-1}) = \emptyset$ holds. Then all the solutions to the equation (10.0.2) can be stated as
\[ \begin{cases} X = \sum\limits_{i=0}^{n-1}\Big[\sum\limits_{k=0}^{i} x_k\big({-A(B^{-1})^T}\big)^{i-k}CZ(A^TB^{-1})^i\Big]^T + \sum\limits_{i=0}^{n-1}\sum\limits_{k=0}^{i} x_k(B^{-1})^T\big({-A(B^{-1})^T}\big)^{i-k}CZ(A^TB^{-1})^i(-A)^T, \\ Y = Zf_{(I_n,A(B^{-1})^T)}(A^TB^{-1})B, \end{cases} \tag{10.2.9} \]
or
\[ \begin{cases} X = \big[Q_c(A(B^{-1})^T, C, n)\, S_r(I, A(B^{-1})^T)\, Q_o(A^TB^{-1}, Z, n)\big]^T + (B^{-1})^T\big[Q_c(A(B^{-1})^T, C, n)\, S_r(I, A(B^{-1})^T)\, Q_o(A^TB^{-1}, Z, n)\big](-A)^T, \\ Y = Zf_{(I_n,A(B^{-1})^T)}(A^TB^{-1})B, \end{cases} \tag{10.2.10} \]
where $Z \in \mathbb{R}^{n\times n}$ is an arbitrarily chosen parameter matrix.

10.3 Algorithms for Solving Two Transpose Equations and Numerical Examples

The discussion in sections 10.1 and 10.2 shows that the solutions to (10.0.1) and (10.0.2) can be constructed from the results presented there. In this section, three algorithms are proposed for computing the solutions to (10.0.1) and (10.0.2).
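Before running any of these algorithms it is worth verifying the spectral condition for a unique solution; Example 10.1 below does this by checking that $\lambda_i\lambda_j \neq 1$ over the spectrum of $B^TA^{-1}$. That pre-check is easy to automate; a small numpy sketch (the function name and tolerance are ours):

```python
import numpy as np

def unique_solution_precheck(A, B, tol=1e-10):
    """Spectral condition behind algorithm 10.1: AX + X^T B = C has a
    unique solution when lam_i * lam_j != 1 for all eigenvalues lam of
    B^T A^{-1} (A assumed nonsingular)."""
    lam = np.linalg.eigvals(B.T @ np.linalg.inv(A))
    return bool(np.abs(np.outer(lam, lam) - 1.0).min() > tol)

print(unique_solution_precheck(np.eye(2), 0.5 * np.eye(2)))  # True
print(unique_solution_precheck(np.eye(2), np.eye(2)))        # False
```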


Algorithm 10.1 (for equation (10.0.1); case 1: $A$ is nonsingular).
- Step 1. Find the decomposition of the matrix $C - (B^TA^{-1}C)^T$, obtain $C_1$ and $C_2$, then compute $Q_c(B^TA^{-1}, C_1, n)$ and $Q_o((B^TA^{-1})^T, C_2, n)$.
- Step 2. Compute $A^{-1}$, $B^TA^{-1}$ and $f_{B^TA^{-1}}(s)$, find $a_1, a_2, \ldots, a_n$, then calculate $S_r(I, B^TA^{-1})$ and $f_{(I_n,B^TA^{-1})}((B^TA^{-1})^T)$.
- Step 3. Find the solution to the equation (10.0.1) by the formula (10.1.8); for any $\lambda, \mu \in \sigma(B^TA^{-1})$, if $\lambda\mu \neq 1$, then compute $f_{(I_n,B^TA^{-1})}^{-1}((B^TA^{-1})^T)$ and obtain the unique solution of the equation (10.0.1) by the formula (10.1.9).

Algorithm 10.2 (for equation (10.0.1); case 2: $A$ and $B$ are nonsingular).
- Step 1. Find the decomposition of the matrix $CB^{-1}A^T - C^T$, obtain $\bar{C}_1$ and $\bar{C}_2$, then compute $Q_c(B^TA^{-1}, \bar{C}_1, n)$ and $Q_o(B^{-1}A^T, \bar{C}_2, n)$.
- Step 2. Compute $B^{-1}$, $B^TA^{-1}$ and $h_{B^TA^{-1}}(s)$, find $c_1, c_2, \ldots, c_n$, then calculate $g_{B^TA^{-1}}(B^{-1}A^T)$ and $S_r(B^TA^{-1})$.
- Step 3. Find the solution to the equation (10.0.1) by the formula (10.1.24); if $\sigma(B^TA^{-1}) \cap \sigma(B^{-1}A^T) = \emptyset$, compute $h_{B^TA^{-1}}^{-1}(B^{-1}A^T)$, then calculate the unique solution of the equation (10.0.1) by the formula (10.1.26).

Remark 4. If $f_{B^TA^{-1}}(s) := \det(sI_n - B^TA^{-1}) = \sum_{i=0}^n a_i s^i$, then $f_{(I_n,B^TA^{-1})}(s) := \det(I_n - sB^TA^{-1}) = \sum_{i=0}^n a_i s^{n-i}$; that is, the order of the coefficients $a_0, a_1, \ldots, a_n$ in $f_{B^TA^{-1}}(s)$ is the reverse of that in $f_{(I_n,B^TA^{-1})}(s)$.

Remark 5. If $B$ is nonsingular, there is an algorithm similar to algorithm 10.1, which is omitted here. If $A$ is nonsingular, the solution to (10.0.1) can be obtained by algorithm 10.1 or algorithm 10.2; similarly, if $B$ is nonsingular, the solution to (10.0.1) can also be obtained by two analogous algorithms.

Example 10.1. We consider the explicit solution to the Sylvester transpose matrix equation in the form of (10.0.1) with the following parameters:
\[ A = \begin{bmatrix} 3 & 11 & 1\\ 2 & 10 & 8\\ 1 & 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 4 & 0\\ 1 & 3 & 6\\ 2 & 3 & 7 \end{bmatrix}, \quad C = \begin{bmatrix} 19 & 19 & 24\\ 16 & 78 & 12\\ 13 & 22 & 33 \end{bmatrix}. \]

The first method: by algorithm 10.1. By simple computation, the eigenvalues of the matrix $B^TA^{-1}$ are as follows:
\[ \lambda_1 = 0.1468, \quad \lambda_2 = 1.2951 + 1.7362i, \quad \lambda_3 = 1.2951 - 1.7362i. \]
It is easy to check that $\lambda_i\lambda_j \neq 1$ for $i, j = 1, 2, 3$, so the equation (10.0.1) has a unique solution. From algorithm 10.1, by some computations, we have

\[ Q_o\big((B^TA^{-1})^T, C_2, 3\big) = 10^{3}\times\begin{bmatrix}
0.0616 & 0.1953 & 0.0275\\
0.0772 & 0.1558 & 0.1045\\
0.0313 & 0.0282 & 0.0553\\
0.0541 & 0.0904 & 0.2142\\
0.0346 & 0.4590 & 0.3641\\
0.0394 & 0.2537 & 0.1559\\
0.2376 & 1.0221 & 0.4373\\
0.5390 & 1.7938 & 0.4641\\
0.2563 & 0.7784 & 0.1455
\end{bmatrix}, \]
\[ f_{(I_3,B^TA^{-1})}(s) = 0.6887s^3 + 4.3113s^2 + 2.4434s + 1, \]
\[ C - (B^TA^{-1}C)^T = C_1C_2, \quad C_1 = \begin{bmatrix} 1 & 1 & 0\\ 0 & 2 & 4\\ 4 & 5 & 6 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 72.2812 & 107.7931 & 48.0212\\ 36.6366 & 24.4032 & 3.8939\\ 38.8859 & 25.5013 & 62.1857 \end{bmatrix}, \]
\[ S_3(I, B^TA^{-1}) = \begin{bmatrix} I_3 & 2.4434I_3 & 4.3113I_3\\ 0_3 & I_3 & 2.4434I_3\\ 0_3 & 0_3 & I_3 \end{bmatrix}, \]
and
\[ Q_c(B^TA^{-1}, C_1, 3) = \begin{bmatrix}
1.0000 & 1.0000 & 0 & 4.3019 & 19.1912 & 4.7170 & 14.5867 & 4.6226 & 24.7896\\
0 & 2.0000 & 4.0000 & 18.5660 & 51.2004 & 27.0943 & 36.3265 & 22.7925 & 70.7682\\
4.0000 & 5.0000 & 6.0000 & 7.0000 & 3.9811 & 14.0000 & 0.4151 & 10.0000 & 9.7358
\end{bmatrix}. \]
Thus, by theorem 10.3, the unique solution to the equation (10.0.1) is
\[ X = \begin{bmatrix} 0.9999 & 2.0007 & 3.0005\\ 2.0000 & 0.0001 & 6.0001\\ 3.0000 & 5.9999 & 0.0001 \end{bmatrix}. \]


The second method: by algorithm 10.2. Similarly, the eigenvalues of the matrix $B^TA^{-1}$ can be obtained by simple computation:
\[ \gamma_1 = 0.1468, \quad \gamma_2 = 1.2951 + 1.7362i, \quad \gamma_3 = 1.2951 - 1.7362i. \]
Since $\gamma_i\gamma_j \neq 1$ for $i, j = 1, 2, 3$, equation (10.0.1) has a unique solution. By algorithm 10.2, some computations give
\[ CB^{-1}A^T - C^T = \bar{C}_1\bar{C}_2, \qquad g_{B^TA^{-1}}(s) = s^3 + 2.4434s^2 + 4.3113s - 0.6887, \]
\[ Q_o(B^{-1}A^T, \bar{C}_2, 3) = \begin{bmatrix}
27.8192 & 94.7192 & 6.1628\\
9.3914 & 46.5595 & 2.4884\\
8.2260 & 12.6490 & 21.9535\\
41.8461 & 10.1242 & 21.4734\\
49.0233 & 42.0930 & 12.7442\\
10.3864 & 44.8060 & 4.9685\\
229.5391 & 330.9403 & 21.2978\\
296.5836 & 431.9032 & 33.9439\\
130.3930 & 171.9269 & 19.7306
\end{bmatrix}, \]
\[ \bar{C}_1 = \begin{bmatrix} 1 & 2 & 3\\ 0 & 4 & 1\\ 4 & 5 & 2 \end{bmatrix}, \quad \bar{C}_2 = \begin{bmatrix} 27.8192 & 94.7192 & 6.1628\\ 9.3914 & 46.5595 & 2.4884\\ 8.2260 & 12.6490 & 21.9535 \end{bmatrix}, \]
\[ S_3(B^TA^{-1}) = \begin{bmatrix} 4.3113I_3 & 2.4434I_3 & I_3\\ 2.4434I_3 & I_3 & 0_3\\ I_3 & 0_3 & 0_3 \end{bmatrix}, \]
and
\[ Q_c(B^TA^{-1}, \bar{C}_1, 3) = \begin{bmatrix}
1.0000 & 2.0000 & 3.0000 & 4.3019 & 4.1509 & 3.0660 & 14.5867 & 18.9066 & 4.4538\\
0 & 4.0000 & 1.0000 & 18.5660 & 22.2830 & 9.3113 & 36.3265 & 51.1255 & 4.1192\\
4.0000 & 5.0000 & 2.0000 & 7.0000 & 10.0000 & 0.5000 & 0.4151 & 4.7075 & 6.7217
\end{bmatrix}. \]


Thus, by theorem 10.6, the unique solution to equation (10.0.1) is
\[ X = \begin{bmatrix} 0.9999 & 2.0001 & 3.0004\\ 2.0000 & 0.0001 & 6.0000\\ 2.9999 & 5.9999 & 0.0000 \end{bmatrix}. \]
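Results such as the one above can be cross-checked without the finite-series machinery by vectorizing $AX + X^TB = C$ into an ordinary $n^2 \times n^2$ linear system. This is not the book's algorithm, only an independent dense check we add for illustration (the function name is ours):

```python
import numpy as np

def solve_ax_xtb(A, B, C):
    """Solve AX + X^T B = C by vectorization: with column-major vec,
    vec(AX) = (I kron A) vec(X) and vec(X^T B) = (B^T kron I) K vec(X),
    where K is the commutation matrix, K vec(X) = vec(X^T)."""
    n = A.shape[0]
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[i + j * n, j + i * n] = 1.0   # vec(X^T)[i+j*n] = X[j,i]
    M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n)) @ K
    x = np.linalg.solve(M, C.flatten(order="F"))
    return x.reshape((n, n), order="F")

np.random.seed(1)
n = 3
A = np.random.rand(n, n) + n * np.eye(n)   # diagonally dominant, nonsingular
B = np.random.rand(n, n)
C = np.random.rand(n, n)
X = solve_ax_xtb(A, B, C)
print(np.linalg.norm(A @ X + X.T @ B - C))  # residual, essentially zero
```

The dense system is $O(n^6)$ to solve, so this is only practical for small $n$; the finite-series formulas of this chapter avoid forming it.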

Algorithm 10.3 (for equation (10.0.2); case: $A$ is nonsingular).
- Step 1. Compute $A^{-1}$, $A^{-1}B^T$ and $A^{-1}C$, then calculate $Q_c(A^{-1}B^T, A^{-1}C, n)$.
- Step 2. Compute $f_{A^{-1}B^T}(s)$, find $d_1, d_2, \ldots, d_n$, then calculate $f_{(I_n,A^{-1}B^T)}((A^{-1})^TB)$ and $S_r(I, A^{-1}B^T)$.
- Step 3. Choose a parameter matrix $Z$, then compute $Q_o((A^{-1})^TB, Z, n)$.
- Step 4. Find a solution to the equation (10.0.2) by the formula (10.2.5).

Remark 6. If $B$ is nonsingular, an algorithm similar to algorithm 10.3 applies and is omitted here.

Example 10.2. In this example, we consider the parametric solutions to equation (10.0.2) with the following parametric matrices:
\[ A = \begin{bmatrix} 4 & 11 & 10 & 2\\ 21 & 2 & 80 & 0\\ 1 & 10 & 11 & 1\\ 2 & 0 & 2 & 1 \end{bmatrix}, \quad
C = \begin{bmatrix} 1 & 9 & 12 & 23\\ 6 & 4 & 3 & 12\\ 3 & 7 & 12 & 3\\ 2 & 2 & 3 & 2 \end{bmatrix}, \quad
B = \begin{bmatrix} 21 & 42 & 30 & 10\\ 4 & 12 & 6 & 2\\ 0 & 23 & 17 & 2\\ 11 & 21 & 23 & 1 \end{bmatrix}. \]

By some computations, we get
\[ f_{(I_4,B^TA^{-1})}(s) = 1.8048s^4 + 6.7717s^3 + 6.0306s^2 + 2.8223s + 1. \]
Choose $Z$ as
\[ Z = \begin{bmatrix}
111.2117 & 794.7075 & 233.5488 & 188.2771\\
13.8692 & 149.9002 & 40.5172 & 33.7128\\
2.1329 & 5.5192 & 2.4184 & 1.6934\\
1.6508 & 10.8792 & 3.9105 & 2.6594
\end{bmatrix}. \]


Thus, we get
\[ Q_c(A^{-1}B^T, A^{-1}C, 4) = \begin{bmatrix}
0.1776 & 0.4605 & 0.6664 & 1.4451 & 3.4495 & 23.7628 & 2.2801 & 0.3487\\
0.2278 & 0.4081 & 0.5057 & 2.7668 & 1.0575 & 5.6111 & 0.6707 & 0.3064\\
0.1280 & 0.0622 & 0.3162 & 0.7060 & 0.9770 & 2.7953 & 0.5987 & 0.0380\\
0.2539 & 1.1523 & 1.5943 & 5.1792 & 3.1384 & 2.9494 & 0.4940 & 0.6225
\end{bmatrix} \]
\[ \left. \begin{bmatrix}
10.9121 & 2.2126 & 2.1196 & 5.9847 & 4.7172 & 11.4997 & 0.4709 & 15.9334\\
0.4947 & 0.9304 & 0.2802 & 1.9951 & 6.4886 & 33.2165 & 3.3105 & 15.8537\\
2.7906 & 0.1772 & 0.2143 & 1.9370 & 2.7520 & 8.2816 & 1.2676 & 1.2043\\
10.8548 & 0.2810 & 0.3245 & 8.1124 & 16.4958 & 57.9738 & 9.2827 & 17.0626
\end{bmatrix} \right] \quad \text{(columns 9 to 16)}, \]
and
\[ Q_o\big((A^{-1})^TB, Z, 4\big) = \begin{bmatrix}
111.2117 & 794.7075 & 233.5488 & 188.2771\\
13.8692 & 149.9002 & 40.5172 & 33.7128\\
2.1329 & 5.5192 & 2.4184 & 1.6934\\
1.6508 & 10.8792 & 3.9105 & 2.6594\\
23.4868 & 283.6510 & 73.5257 & 65.3330\\
5.4046 & 57.8579 & 10.6996 & 10.6265\\
2.1652 & 0.5315 & 1.0392 & 0.7826\\
0.1456 & 1.3795 & 1.1643 & 0.9756\\
0.5917 & 106.0737 & 11.5955 & 18.9204\\
15.4207 & 25.8637 & 1.5306 & 1.5146\\
3.7945 & 1.1908 & 0.5386 & 0.5077\\
3.0130 & 5.3683 & 1.5835 & 0.0638\\
19.2898 & 101.3619 & 6.4952 & 4.6183\\
24.7008 & 18.7178 & 6.5999 & 2.4232\\
5.2777 & 1.8355 & 1.8841 & 0.7852\\
9.0960 & 0.1991 & 0.6903 & 0.2245
\end{bmatrix}. \]
Thus, by theorem 10.8, the parametric solutions to equation (10.0.2) are
\[ X = \begin{bmatrix} 1 & 20 & 3 & 2\\ 3 & 5 & 6 & 8\\ 0 & 2 & 6 & 7\\ 8 & 9 & 3 & 2 \end{bmatrix} \]
and
\[ Y = \begin{bmatrix}
0.7857 & 187.2857 & 96.7143 & 89.5000\\
34.1853 & 48.5108 & 27.6114 & 20.4951\\
11.7356 & 4.7634 & 1.4671 & 0.2488\\
32.2445 & 14.1288 & 10.9599 & 2.4803
\end{bmatrix}. \]
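Any candidate pair $(X, Y)$ produced by algorithm 10.3 can be validated directly by its residual in (10.0.2). A small sketch of that check, ours and run on synthetic data (with $C = I$, any $X$ pairs exactly with $Y = AX + X^TB$):

```python
import numpy as np

def residual(A, B, C, X, Y):
    """Frobenius residual of the generalized equation AX + X^T B = C Y."""
    return np.linalg.norm(A @ X + X.T @ B - C @ Y)

np.random.seed(2)
n = 4
A, B, X = (np.random.rand(n, n) for _ in range(3))
Y = A @ X + X.T @ B            # consistent by construction when C = I
print(residual(A, B, np.eye(n), X, Y))  # essentially zero
```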

10.4 Application in Control Theory

As an application of the proposed solution to (10.0.1), in this section we consider the continuous zeroing dynamics (CZD) design of a time-varying linear system. The design formula, expressed in matrix form, is defined as
\[ \dot{L}(t) = -k\,\mathrm{JMP}(L(t)), \tag{10.4.1} \]
where $\mathrm{JMP}(\cdot): \mathbb{R}^{m\times n} \to \mathbb{R}^{m\times n}$ denotes an array of jmp functions:
\[ \mathrm{JMP}(L(t)) = \begin{bmatrix} \mathrm{jmp}(l_{11}(t)) & \mathrm{jmp}(l_{12}(t)) & \cdots & \mathrm{jmp}(l_{1n}(t))\\ \mathrm{jmp}(l_{21}(t)) & \mathrm{jmp}(l_{22}(t)) & \cdots & \mathrm{jmp}(l_{2n}(t))\\ \vdots & \vdots & \ddots & \vdots\\ \mathrm{jmp}(l_{m1}(t)) & \mathrm{jmp}(l_{m2}(t)) & \cdots & \mathrm{jmp}(l_{mn}(t)) \end{bmatrix}, \tag{10.4.2} \]
where $\mathrm{jmp}(l_{ij}(t)) = 1$ if $l_{ij}(t) > 0$ and $\mathrm{jmp}(l_{ij}(t)) = 0$ if $l_{ij}(t) \leq 0$. The time derivative is
\[ \dot{L}(t) = \dot{A}(t)X(t) + A(t)\dot{X}(t) + \dot{X}^T(t)B(t) + X^T(t)\dot{B}(t). \tag{10.4.3} \]
Substituting (10.4.1) into (10.4.3) and rearranging, one has the following time-varying linear system:
\[ A(t)\dot{X}(t) + \dot{X}^T(t)B(t) = C(t), \tag{10.4.4} \]
where $A(t), B(t), C(t) \in \mathbb{R}^{n\times n}$ are smoothly time-varying matrices and $X(t) \in \mathbb{R}^{n\times n}$ is the unknown time-varying matrix to be determined. Besides, $t \in [0, t_f] \subseteq [0, +\infty)$, where $t_f$ denotes the final time. Thus, at each instant the equation (10.4.4) is of the form $AX + X^TB = C$. We choose the following parametric matrices: $A = \mathrm{triu}(\mathrm{rand}(10,10), 1) + \mathrm{diag}(1 + \mathrm{diag}(\mathrm{rand}(10)))$,

\[ C = \begin{bmatrix}
12.0308 & 2.3090 & 2.1117 & 1.9960 & 7.4386 & 3.4554 & 2.4897 & 5.3519 & 4.1122 & 7.8176\\
7.4456 & 7.7085 & 5.5906 & 2.7438 & 7.0838 & 6.5278 & 5.5624 & 5.6862 & 4.1144 & 8.9310\\
3.3343 & 6.9273 & 8.8902 & 3.5619 & 1.1289 & 5.0187 & 5.8117 & 3.1962 & 4.9426 & 7.1911\\
13.8946 & 3.3946 & 4.6981 & 15.0807 & 6.2074 & 4.7645 & 5.5780 & 5.5443 & 4.5838 & 4.7910\\
5.7489 & 6.1090 & 4.4524 & 5.6138 & 8.3613 & 7.7872 & 1.9622 & 3.9063 & 6.6722 & 5.1343\\
3.8058 & 3.4464 & 6.0933 & 4.2581 & 4.6963 & 11.0492 & 5.5616 & 2.6473 & 7.4494 & 1.7236\\
1.9270 & 4.8700 & 7.0041 & 3.2479 & 2.0214 & 2.1400 & 12.2184 & 9.0972 & 5.3937 & 3.9903\\
4.5679 & 3.0501 & 8.7568 & 5.7667 & 3.7945 & 2.8991 & 5.5384 & 10.8941 & 7.6649 & 2.5836\\
3.4261 & 6.8190 & 3.8442 & 6.3470 & 5.2390 & 2.3227 & 3.3837 & 5.7356 & 14.7296 & 1.9510\\
4.5089 & 3.2828 & 4.2765 & 5.4472 & 3.3497 & 1.9376 & 2.6750 & 6.7963 & 2.3251 & 10.0195
\end{bmatrix}, \]
and $B = \mathrm{tril}(\mathrm{rand}(10,10), -1)$.

It is easy to check that $\eta\gamma \neq 1$ for any $\eta, \gamma \in \sigma(B^TA^{-1})$ and that $A$ is nonsingular. Therefore, the equation $AX + X^TB = C$ has a unique solution. It follows from algorithm 10.1 that the unique solution to $AX + X^TB = C$ is

\[ X = \begin{bmatrix}
6.0000 & 0.1544 & 0.5975 & 0.1249 & 0.7593 & 0.1636 & 0.8092 & 0.0205 & 0.7519 & 0.3174\\
0.0000 & 6.0000 & 0.3353 & 0.0244 & 0.7406 & 0.6660 & 0.7486 & 0.9237 & 0.2287 & 0.8145\\
0.0000 & 0.0000 & 6.0000 & 0.2902 & 0.7437 & 0.8944 & 0.1202 & 0.6537 & 0.0642 & 0.7891\\
0.0000 & 0.0000 & 0.0000 & 6.0000 & 0.1059 & 0.5166 & 0.5250 & 0.9326 & 0.7673 & 0.8523\\
0.0000 & 0.0000 & 0.0000 & 0.0001 & 6.0000 & 0.7027 & 0.3258 & 0.1635 & 0.6712 & 0.5056\\
0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 6.0000 & 0.5464 & 0.9211 & 0.7152 & 0.6357\\
0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0001 & 0.0000 & 6.0000 & 0.7947 & 0.6421 & 0.9509\\
0.0000 & 0.0000 & 0.0000 & 0.0001 & 0.0000 & 0.0000 & 0.0000 & 6.0000 & 0.4190 & 0.4440\\
0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 6.0000 & 0.0600\\
0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 6.0000
\end{bmatrix}. \]
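The jmp array (10.4.2) and the zeroing dynamics (10.4.1) are straightforward to simulate; the sketch below is ours and takes one explicit-Euler step (the gain `k` and step size `h` are illustrative placeholders, and the minus sign follows the reconstructed (10.4.1)):

```python
import numpy as np

def jmp_array(L):
    """Elementwise jmp: 1 where l_ij(t) > 0, else 0, as in eq. (10.4.2)."""
    return np.where(L > 0, 1.0, 0.0)

def czd_euler_step(L, k=10.0, h=1e-3):
    """One explicit-Euler step of dL/dt = -k * JMP(L), eq. (10.4.1)."""
    return L - h * k * jmp_array(L)

L = np.array([[0.5, -1.0], [0.0, 2.0]])
print(jmp_array(L))       # [[1. 0.] [0. 1.]]
print(czd_euler_step(L))  # positive entries are driven down, others held
```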

10.5 Conclusions

In this chapter, we have studied the explicit solutions to the equations $AX + X^TB = C$ and $AX + X^TB = CY$, called the Sylvester transpose matrix equation and the generalized Sylvester transpose matrix equation, respectively. The work of this chapter makes the following contributions.
- Three forms of solutions to the Sylvester transpose matrix equation are provided; one of them is the well-known Jameson theorem. Two algorithms are proposed for computing the solutions to this equation.
- Compared with the necessary and sufficient conditions for the existence of solutions to the Sylvester transpose matrix equation in [163], the condition in this chapter only requires checking the eigenvalues of $B^TA^{-1}$. Moreover, no restriction is placed on the coefficient matrices, while [163] requires the coefficient matrix to satisfy $C = C^T$.
- As an application of the proposed algorithm for the equation $AX + X^TB = C$, the continuous zeroing dynamics (CZD) design of a time-varying linear system is given.

- Parametric solutions to the generalized Sylvester transpose matrix equation have been constructed, and an algorithm for obtaining these parametric solutions has been proposed.
- The idea of this chapter also applies to other matrix equations with similar structures, e.g., the Stein transpose matrix equation $X + AX^TB = C$ and the generalized Stein transpose matrix equation $X + AX^TB = CY$; only small changes to the coefficients multiplying $X^T$ are needed.

References

[1] Zhou B., Duan G.R. (2006) A new solution to the generalized Sylvester matrix equation AV - EVF = BW, Syst. Control Lett. 55, 193.
[2] Zhou B., Duan G.R. (2007) Solutions to generalized Sylvester matrix equation by Schur decomposition, Int. J. Syst. Sci. 38, 369.
[3] Zhou B., Yan Z.B. (2008) Solutions to right coprime factorizations and generalized Sylvester matrix equations, Trans. Inst. Measure. Control 30, 397.
[4] Zhou B., Li Z.Y., Duan G.R., Wang Y. (2009) Weighted least squares solutions to general coupled Sylvester matrix equations, J. Comput. Appl. Math. 224, 759.
[5] Zhou B., Duan G.R. (2008) On the generalized Sylvester mapping and matrix equations, Syst. Control Lett. 57, 200.
[6] Wu A.G., Fu Y.M., Duan G.R. (2008) On solutions of matrix equations V - AVF = BW and V - AV̄F = BW, Math. Comput. Model. 47, 1181.
[7] Wu A.G., Duan G.R., Yu H.H. (2006) On solutions of matrix equations XF - AX = C and XF - AX̄ = C, Appl. Math. Comput. 183, 932.
[8] Zhour Z.A., Kilicman A. (2007) Some new connections between matrix products for partitioned and non-partitioned matrices, Comput. Math. Appl. 54, 763.
[9] Jameson A. (1968) Solution of the equation AX - XB = C by inversion of an M x M or N x N matrix, SIAM J. Appl. Math. 16, 1020.
[10] Ding F., Liu P.X., Ding J. (2008) Iterative solutions of the generalized Sylvester matrix equation by using the hierarchical identification principle, Appl. Math. Comput. 197, 41.
[11] Ding F., Chen T. (2005) Gradient based iterative algorithms for solving a class of matrix equations, IEEE Trans. Autom. Control 50, 1216.
[12] Ding F., Chen T. (2005) Hierarchical gradient-based identification of multivariable discrete-time systems, Autom. 41, 315.
[13] Ding F., Chen T. (2005) Iterative least squares solutions of coupled Sylvester matrix equations, Syst. Control Lett. 54, 95.
[14] Ding F., Chen T. (2006) On iterative solutions of general coupled matrix equations, SIAM J. Control Optim. 44, 2269.
[15] Ding F., Chen T. (2005) Hierarchical least squares identification methods for multivariable systems, IEEE Trans. Autom. Control 50, 397.
[16] Dehghan M., Hajarian M. (2009) Efficient iterative method for solving the second-order Sylvester matrix equation EVF^2 - AVF - CV = BW, IET Control Theory Appl. 3, 1401.
[17] Dehghan M., Hajarian M. (2008) An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation, Appl. Math. Comput. 202, 571.
[18] Dehghan M., Hajarian M. (2008) An iterative algorithm for solving a pair of matrix equations AYB = E, CYD = F over generalized centro-symmetric matrices, Comput. Math. Appl. 56, 3246.


[19] Dehghan M., Hajarian M. (2009) Finite iterative algorithms for the reflexive and anti-reflexive solutions of the matrix equation A1X1B1 + A2X2B2 = C, Math. Comput. Model. 49, 1937.
[20] Dehghan M., Hajarian M. (2010) The general coupled matrix equations over generalized bisymmetric matrices, Linear Algebra Appl. 432, 1531.
[21] Li Z.Y., Wang Y., Zhou B., Duan G.R. (2010) Least squares solution with the minimum-norm to general matrix equations via iteration, Appl. Math. Comput. 215, 3547.
[22] Xie L., Ding J., Ding F. (2009) Gradient based iterative solutions for general linear matrix equations, Comput. Math. Appl. 58, 1441.
[23] Wang M.H., Cheng X.H., Wei M.S. (2007) Iterative algorithms for solving the matrix equation AXB + CX^T D = E, Appl. Math. Comput. 187, 622.
[24] Piao F., Zhang Q., Wang Z. (2007) The solution to matrix equation AX + X^T C = B, J. Franklin Inst. 344, 1056.
[25] Fujioka H., Hara S. (1994) State covariance assignment problem with measurement noise: A unified approach based on a symmetric matrix equation, Linear Algebra Appl. 203-204, 579.
[26] Yasuda K., Skelton R.E. (1990) Assigning controllability and observability Gramians in feedback control, J. Guidance Control 14, 878.
[27] Skelton R.E., Iwasaki T. (1993) Lyapunov and covariance controllers, Int. J. Control 57, 519.
[28] Golub G., Nash S., Van Loan C. (1979) A Hessenberg-Schur method for the problem AX + XB = C, IEEE Trans. Autom. Control 24, 909.
[29] Jameson A., Kreindler E. (1973) Inverse problem of linear optimal control, SIAM J. Control 11, 1.
[30] Wang Q.W., Zhang F.Z. (2008) The reflexive re-nonnegative definite solution to a quaternion matrix equation, Electron. J. Linear Algebra 17, 88.
[31] Desouza E., Bhattacharyya S.P. (1981) Controllability, observability and the solution of AX - XB = C, Linear Algebra Appl. 39, 167.
[32] Ma E.C. (1966) A finite series solution of the matrix equation AX - XB = C, SIAM J. Appl. Math. 14, 490.
[33] Kågström B., Westin L. (1989) Generalized Schur methods with condition estimators for solving the generalized Sylvester equation, IEEE Trans. Autom. Control 34, 745.
[34] Huang G.X., Yin F., Guo K. (2008) An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB = C, J. Comput. Appl. Math. 212, 231.
[35] Dehghan M., Hajarian M. (2010) An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices, Appl. Math. Model. 34, 639.
[36] Peng Z.H., Hu X.Y., Zhang L. (2007) The bisymmetric solutions for the matrix equation A1X1B1 + A2X2B2 + ... + AlXlBl = C and its optimal approximation, Linear Algebra Appl. 426, 583.
[37] Li F.L., Hu X.Y., Zhang L. (2008) The generalized anti-reflexive solutions for a class of matrix equations BX = C, XD = E, Comput. Appl. Math. 27, 31.
[38] Mitra S.K. (1973) Common solutions to a pair of linear matrix equations A1XB1 = C1, A2XB2 = C2, Proc. Cambridge Philos. Soc. 74, 213.
[39] Sheng X.P., Chen G.L. (2010) An iterative method for the symmetric and skew symmetric solutions of a linear matrix equation AXB + CYD = E, J. Comput. Appl. Math. 233, 3030.
[40] Peng Y.X., Hu X.Y., Zhang L. (2006) An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations A1XB1 = C1, A2XB2 = C2, Appl. Math. Comput. 183, 1127.
[41] Yuan Y.X., Dai H. (2008) Generalized reflexive solutions of the matrix equation AXB = D and associated optimal approximation problem, Comput. Math. Appl. 56, 1643.
[42] Zhang J.C., Zhou S.Z., Hu X.Y. (2009) The (P,Q) generalized reflexive and anti-reflexive solutions of the matrix equation AX = B, Appl. Math. Comput. 209, 254.
[43] Peng Y.X., Hu X.Y., Zhang L. (2007) The reflexive and anti-reflexive solutions of the matrix equation A^H XB = C, J. Comput. Appl. Math. 200, 749.


[44] Sheng X.P., Chen G.L. (2007) A finite iterative method for solving a pair of linear matrix equations (AXB, CXD) = (E, F), Appl. Math. Comput. 189, 1350.
[45] Dehghan M., Hajarian M. (2010) Matrix equations over (R,S)-symmetric and (R,S)-skew symmetric matrices, Comput. Math. Appl. 59, 3583.
[46] Peng X.Y., Hu X.Y., Zhang L. (2006) The orthogonal-symmetric or orthogonal-anti-symmetric least-squares solutions of the matrix equation, Chinese J. Eng. Math. 23, 1048.
[47] Wang Q.W., Sun J.H., Li S.Z. (2002) Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra, Linear Algebra Appl. 353, 169.
[48] Wang Q.W. (2005) Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations, Comput. Math. Appl. 49, 641.
[49] Wang Q.W. (2005) The general solution to a system of real quaternion matrix equations, Comput. Math. Appl. 49, 665.
[50] Wang Q.W., Li C.K. (2009) Ranks and the least-norm of the general solution to a system of quaternion matrix equations, Linear Algebra Appl. 430, 1626.
[51] Bai Z.Z., Huang Y.M., Ng M.K. (2007) On preconditioned iterative methods for Burgers equations, SIAM J. Sci. Comput. 29, 415.
[52] Bai Z.Z., Ng M.K. (2003) Preconditioners for nonsymmetric block Toeplitz-like-plus-diagonal linear systems, Numer. Math. 96, 197.
[53] Duan G.R. (1996) On the solution to the Sylvester matrix equation AV + BW = EVF, IEEE Trans. Autom. Control 41, 612.
[54] Duan G.R. (2003) Two parametric approaches for eigenstructure assignment in second-order linear systems, J. Control Theory Appl. 1, 59.
[55] Horn R.A., Johnson C.R. (1991) Topics in Matrix Analysis, Cambridge University Press, New York.
[56] Chen J.L., Chen X.H. (2001) Special Matrices, Tsinghua University Press, Beijing.
[57] Bao L., Lin Y., Wei Y. (2007) A new projection method for solving large Sylvester equations, Appl. Numer. Math. 57, 521.
[58] Borno I. (1995) Parallel computation of the solution of coupled algebraic Lyapunov equations, Autom. 31, 1345.
[59] Chen H.C. (1998) Generalized reflexive matrices: special properties and applications, SIAM J. Matrix Anal. Appl. 19, 140.
[60] Dai L. (1989) Singular Control Systems, Springer-Verlag, Berlin.
[61] Fletcher L.R., Kuatsky J., Nichols N.K. (1986) Eigenstructure assignment in descriptor systems, IEEE Trans. Autom. Control 31, 1138.
[62] Frank P.M. (1990) Fault diagnosis in dynamic systems using analytical and knowledge-based redundancy: a survey and some new results, Autom. 26, 459.
[63] Liang M.L., You C.H., Dai L.F. (2007) An efficient algorithm for the generalized centro-symmetric solution of matrix equation AXB = C, Numer. Algorithms 4, 173.
[64] Lin Y. (2006) Minimal residual methods augmented with eigenvectors for solving Sylvester equations and generalized Sylvester equations, Appl. Math. Comput. 181, 487.
[65] Mariton M. (1990) Jump Linear Systems in Automatic Control, Marcel Dekker, New York.
[66] Robbé M., Sadkane M. (2008) Use of near-breakdowns in the block Arnoldi method for solving large Sylvester equations, Appl. Numer. Math. 58, 486.
[67] Wang M.H., Cheng X.H., Wei M.S. (2007) Iterative algorithms for solving the matrix equation AXB + CX^T D = E, Appl. Math. Comput. 187, 622.
[68] Wang Q.W., He Z.H. (2013) Solvability conditions and general solution for mixed Sylvester equations, Autom. 49, 2713.
[69] Wang Q.W., He Z.H. (2014) Systems of coupled generalized Sylvester matrix equations, Autom. 50, 2840.
[70] Wang Q.W., He Z.H. (2013) A system of matrix equations and its applications, Sci. China Math. 56, 1795.
[71] Wang Q.W., He Z.H. (2012) Some matrix equations with applications, Linear Multilinear Algebra 60, 1327.


[72] Wang Q.W., van der Woude J.W., Chang H.X. (2009) A system of real quaternion matrix equations with applications, Linear Algebra Appl. 431, 2291.
[73] Wang Q.W. (2005) A system of four matrix equations over von Neumann regular rings and its applications, Acta Math. Sci. 21, 323.
[74] Wang Q.W., Wu Z.C. (2010) Common Hermitian solutions to some operator equations on Hilbert C*-modules, Linear Algebra Appl. 432, 3159.
[75] Wang Q.W., Song G.J., Lin C.Y. (2007) Extreme ranks of the solution to a consistent system of linear quaternion matrix equations with an application, Appl. Math. Comput. 189, 1517.
[76] Wu A.G., Li B., Zhang Y., Duan G.R. (2011) Finite iterative solutions to coupled Sylvester-conjugate matrix equations, Appl. Math. Model. 35, 1065.
[77] Wu A.G., Zhang E.Z., Liu F.C. (2012) On closed-form solutions to the generalized Sylvester-conjugate matrix equation, Appl. Math. Comput. 218, 9730.
[78] Wu A.G., Lv L.L., Duan G.R., Liu W.Q. (2011) Parametric solutions to Sylvester-conjugate matrix equations, Comput. Math. Appl. 62, 3317.
[79] Xie D.X., Hu X.Y., Sheng Y.P. (2006) The solvability conditions for the inverse eigenproblems of symmetric and generalized centro-symmetric matrices and their approximations, Linear Algebra Appl. 418, 142.
[80] Zhou B., Lam J., Duan G.R. (2011) Toward solution of matrix equation X = Af(X)B + C, Linear Algebra Appl. 435, 1370.
[81] Zhou B., Yan Z.B., Duan G.R. (2010) Unified parametrization for the solutions to the polynomial Diophantine matrix equation and the generalized Sylvester matrix equation, Int. J. Control Autom. Syst. 8, 29.
[82] Zhou B., Hu X.Y., Zhang L. (2003) The solvability conditions for the inverse eigenproblem of generalized centro-symmetric matrices, Linear Algebra Appl. 364, 147.
[83] Gerheim A. (1984) Numerical solution of the Lyapunov equation for narrow-band digital filters, IEEE Trans. Circuits Syst. 31, 991.
[84] Varga A. (2000) Robust pole assignment via Sylvester equation based state feedback parametrization, Proceedings of the 2000 IEEE International Symposium on Computer-Aided Control System Design, Alaska, USA.
[85] Zhang Y.N., Jiang D.C., Wang J. (2002) A recurrent neural network for solving Sylvester equation with time-varying coefficients, IEEE Trans. Neural Netw. 13, 1053.
[86] Shahzad A., Jones B.L., Kerrigan E.C., Constantinides G.A. (2011) An efficient algorithm for the solution of a coupled Sylvester equation appearing in descriptor systems, Autom. 47, 244.
[87] Duan G.R. (2005) Parametric eigenstructure assignment in high-order linear systems, Int. J. Control Autom. Syst. 3, 419.
[88] Li H., Gao Z., Zhao D. (2014) Least squares solutions of the matrix equation AXB + CYD = E with the least norm for symmetric arrowhead matrices, Appl. Math. Comput. 226, 719.
[89] Hajarian M. (2014) Developing the CGLS algorithm for the least squares solutions of the general coupled matrix equations, Math. Methods Appl. Sci. 37, 2782.
[90] Peng Z. (2013) The reflexive least squares solutions of the matrix equation A1X1B1 + A2X2B2 + ... + AlXlBl = C with a submatrix constraint, Numer. Algorithms 64, 455.
[91] Ding F., Wang Y., Ding J. (2015) Recursive least squares parameter identification algorithms for systems with colored noise using the filtering technique and the auxiliary model, Digit. Signal Process. 37, 100.
[92] Ding F., Wang X., Chen Q., Xiao Y. (2016) Recursive least squares parameter estimation for a class of output nonlinear systems based on the model decomposition, Circuits Syst. Signal Process. 35, 3323.
[93] Peng Z., Xin H. (2013) The reflexive least squares solutions of the general coupled matrix equations with a submatrix constraint, Appl. Math. Comput. 225, 425.
[94] Peng Z. (2015) The (R,S)-symmetric least squares solutions of the general coupled matrix equations, Linear Multilinear Algebra 63, 1086.
[95] Hajarian M. (2015) Developing BiCOR and CORS methods for coupled Sylvester-transpose and periodic Sylvester matrix equations, Appl. Math. Model. 39, 6073.



[96] Hajarian M. (2013) Matrix iterative methods for solving the Sylvester-transpose and periodic Sylvester matrix equations, J. Franklin Inst. 350, 3328.
[97] Hajarian M. (2014) Matrix form of the CGS method for solving general coupled matrix equations, Appl. Math. Lett. 34, 37.
[98] Hajarian M. (2016) Extending the CGLS algorithm for least squares solutions of the generalized Sylvester-transpose matrix equations, J. Franklin Inst. 353, 1168.
[99] Wang R.C. (2003) Functional Analysis and Optimization Theory, Beijing University of Aeronautics and Astronautics Press, Beijing.
[100] Björck Å. (1996) Numerical Methods for Least Squares Problems, SIAM, Philadelphia.
[101] Dmytryshyn A., Kågström B. (2015) Coupled Sylvester-type matrix equations and block diagonalization, SIAM J. Matrix Anal. Appl. 38, 580.
[102] Wu A.G., Lv L.L., Hou M.Z. (2011) Finite iterative algorithms for extended Sylvester-conjugate matrix equations, Math. Comput. Model. 54, 2363.
[103] Wu A.G., Lv L.L., Li B., Zhang Y., Duan G.R. (2011) Finite iterative solutions to coupled Sylvester-conjugate matrix equations, Appl. Math. Model. 35, 1065.
[104] Wu A.G., Sun Y., Feng G. (2010) Closed-form solution to the non-homogeneous generalised Sylvester matrix equation, IET Control Theory Appl. 10, 1914.
[105] Yang S.D., Zheng A.Y. (2017) Complete weight enumerators of a class of linear codes, Discrete Math. 340, 729.
[106] Mao A.M., Yang L.J., Qian A.X., Luan S.X. (2017) Existence and concentration of solutions of Schrödinger-Poisson system, Appl. Math. Lett. 68, 8.
[107] Sun W.W., Peng L.H. (2014) Observer-based robust adaptive control for uncertain stochastic Hamiltonian systems with state and input delays, Nonlinear Anal. Model. Control 19, 626.
[108] Xu Y.M., Wang L.B. (2015) Breakdown of classical solutions to Cauchy problem for inhomogeneous quasilinear hyperbolic systems, Indian J. Pure Appl. Math. 46, 1.
[109] Liu B.H., Qu B., Zheng N. (2014) A successive projection algorithm for solving the multiple-sets split feasibility problem, Numer. Funct. Anal. Optim. 35, 1459.
[110] Chen H.B., Wang Y.J., Wang G. (2014) Strong convergence of extragradient method for generalized variational inequalities in Hilbert space, J. Inequal. Appl. 2014, 1.
[111] Feng Q.H., Meng F.W. (2017) Traveling wave solutions for fractional partial differential equations arising in mathematical physics by an improved fractional Jacobi elliptic equation method, Math. Methods Appl. Sci. 40, 3676.
[112] Gao L.J., Wang D.D., Wang G. (2015) Further results on exponential stability for impulsive switched nonlinear time-delay systems with delayed impulse effects, Appl. Math. Comput. 268, 186.
[113] Zheng X.X., Shang Y.D., Peng X.M. (2017) Orbital stability of periodic traveling wave solutions to the generalized Zakharov equations, Acta Math. Sci. 37, 998.
[114] Lin X.L., Zhao Z.Q. (2016) Iterative technique for a third-order differential equation with three-point nonlinear boundary value conditions, Electron. J. Qual. Theory Differ. Equ. 12, 1.
[115] Guan Y.L., Zhao Z.Q., Lin X.L. (2016) On the existence of positive solutions and negative solutions of singular fractional differential equations via global bifurcation techniques, Boundary Value Probl. 141, 1.
[116] Li F.S., Gao Q.Y. (2016) Blow-up of solution for a nonlinear Petrovsky type equation with memory, Appl. Math. Comput. 274, 383.
[117] Wang Y., Caccetta L., Zhou G. (2016) Convergence analysis of a block improvement method for polynomial optimization over unit spheres, Numer. Linear Algebra Appl. 22, 1059.
[118] Liu A.J., Chen G.L. (2014) On the Hermitian positive definite solutions of nonlinear matrix equation X^s + sum_{i=1}^m A_i^* X^{-t_i} A_i = Q, Appl. Math. Comput. 243, 950.
[119] Liu Q.H., Liu A.J. (2014) Block SOR methods for the solution of indefinite least squares problems, Calcolo 51, 367.
[120] Xie L., Liu Y.J., Yang H.Z. (2010) Gradient based and least squares based iterative algorithms for matrix equations AXB + CX^T D = F, Appl. Math. Comput. 217, 2191.

[121] Ding J., Liu Y.J., Ding F. (2010) Iterative solutions to matrix equations of the form A_i X B_i = F_i, Comput. Math. Appl. 59, 3500.
[122] Zhang H.M., Ding F. (2014) A property of the eigenvalues of the symmetric positive definite matrix and the iterative algorithm for coupled Sylvester matrix equations, J. Franklin Inst. 351, 340.
[123] Zhang H.M. (2015) Reduced-rank gradient-based algorithms for generalized coupled Sylvester matrix equations and its applications, Comput. Math. Appl. 70, 2049.
[124] Zhang H.M., Ding F. (2016) Iterative algorithms for X + A^T X^{-1} A = I by using the hierarchical identification principle, J. Franklin Inst. 353, 1132.
[125] Li S.K., Huang T.Z. (2012) LSQR iterative method for generalized coupled Sylvester matrix equations, Appl. Math. Modell. 36, 3545.
[126] Zhou B., Lam J., Duan G.R. (2008) Convergence of gradient-based iterative solution of the coupled Markovian jump Lyapunov equations, Comput. Math. Appl. 12, 3070.
[127] Bradshaw A., Porter B. (1975) Design of linear multivariable continuous-time tracking systems incorporating feedforward and feedback controllers, Int. J. Syst. Sci. 6, 233.
[128] Chen J., Patton R., Zhang H. (1996) Design of unknown input observers and robust fault detection filters, Int. J. Control 63, 85.
[129] Chen H.C., Chang F.R. (1993) Chained eigenstructure assignment for constraint ratio proportional and derivative (CRPD) control law in controllable singular systems, Syst. Control Lett. 21, 405.
[130] Duan G.R., Liu G.P. (1999) Complete parametric approach for eigenstructure assignment in a class of second-order linear systems, Autom. 38, 725.
[131] Duan G.R. (2004) On solution to the matrix equation AV + BW = EVJ + R, Appl. Math. Lett. 17, 1197.
[132] Duan G.R., Patton R.J. (2001) Robust fault detection using Luenberger-type unknown input observers: a parametric approach, Int. J. Syst. Sci. 32, 533.
[133] Duan G.R. (1992) Solution to matrix equation AV + BW = EVF and eigenstructure assignment for descriptor systems, Autom. 28, 639.
[134] Duan G.R. (1993) Solutions to matrix equation AV + BW = VF and their application to eigenstructure assignment in linear systems, IEEE Trans. Autom. Control 38, 276.
[135] Duan G.R., Patton R.J. (1997) Eigenstructure assignment in descriptor systems via proportional derivative feedback, Int. J. Control 68, 1147.
[136] Fahmy M.M., O'Reilly J. (1983) Eigenstructure assignment in linear multivariable systems: a parametric solution, IEEE Trans. Autom. Control 28, 990.
[137] Kwon B.H., Youn M.J. (1987) Eigenvalue-generalized eigenvector assignment by output feedback, IEEE Trans. Autom. Control 32, 417.
[138] Luenberger D.G. (1971) An introduction to observers, IEEE Trans. Autom. Control 16, 596.
[139] Lewis F.L. (1986) Further remarks on the Cayley-Hamilton theorem and Leverrier's method for the matrix pencil (sI - A), IEEE Trans. Autom. Control 31, 869.
[140] Park J., Rizzoni G. (1994) An eigenstructure assignment algorithm for the design of fault detection filters, IEEE Trans. Autom. Control 39, 1521.
[141] Saberi A., Stoorvogel A.A., Sannuti P. (1997) Control of linear systems with regulation and input constraints, in series of Communications and Control Engineering, Springer-Verlag, New York.
[142] Song C.Q., Feng J.E. (2016) On solutions to the matrix equations XB - AX = CY and XB - AX̂ = CY, J. Franklin Inst. 353, 1075.
[143] Tsui C.C. (1988) New approach to robust observer design, Int. J. Control 47, 745.
[144] Tsui C.C. (1987) A complete analytical solution to the matrix equation TA - FT = LC and its applications, IEEE Trans. Autom. Control 32, 742.
[145] Wu A.G., Feng G., Hu J.Q., Duan G.R. (2009) Closed-form solutions to the nonhomogeneous Yakubovich-conjugate matrix equations, Appl. Math. Comput. 214, 442.
[146] Wu A.G., Wang H.Q., Duan G.R. (2009) On matrix equations X - AXF = C and X - AX̄F = C, J. Comput. Appl. Math. 214, 442.
[147] Zhou B., Duan G.R. (2009) Parametric solutions to the generalized discrete Sylvester matrix equation MXN - X = TY and their applications, IMA J. Math. Control Inf. 26, 59.

[148] Zhou B., Duan G.R. (2006) An explicit solution to polynomial matrix right coprime factorization with applications in eigenstructure assignment, J. Control Appl. 2, 447.
[149] Zhou B., Duan G.R. (2005) An explicit solution to the matrix equation AX - XF = BY, Linear Algebra Appl. 402, 345.
[150] Zhou B., Duan G.R. (2007) Parametric solutions to the generalized Sylvester matrix equation AX - XF = BY and the regulator equation AX - XF = BY + R, Asian J. Control 9, 475.
[151] Barnett S., Storey C. (1970) Matrix methods in stability theory, Nelson, London.
[152] Barnett S. (1971) Matrices in control theory with applications to linear programming, Van Nostrand Reinhold, New York.
[153] Rincon F. (1992) Feedback stabilization of second-order models, Ph.D. dissertation, Northern Illinois University, DeKalb, Illinois, USA.
[154] Kim Y., Kim H.S. (1999) Eigenstructure assignment algorithm for mechanical second-order systems, J. Guidance Control Dyn. 22, 729.
[155] Chu E.K., Datta B.N. (1996) Numerically robust pole assignment for second-order systems, Int. J. Control 4, 1113.
[156] Feng J., Lam J., Wei Y. (2009) Spectral properties of sums of certain Kronecker products, Linear Algebra Appl. 431, 1691.
[157] Li C.K., Rodman L., Tsing N.K. (1992) Linear operators preserving certain equivalence relations originating in system theory, Linear Algebra Appl. 161, 165.
[158] Miller D.F. (1988) The iterative solution of the matrix equation XA + BX + C = 0, Linear Algebra Appl. 105, 131.
[159] Terán F.D., Dopico F.M. (2011) The solution of the equation XA + AX^T = 0 and its application to the theory of orbits, Linear Algebra Appl. 434, 44.
[160] Song C.Q., Chen G.L. (2011) On solutions of matrix equations XF - AX = C and XF - AX̃ = C over quaternion field, J. Appl. Math. Comput. 37, 57.
[161] Song C.Q., Chen G.L., Liu Q.B. (2012) Explicit solutions to the quaternion matrix equations X - AXF = C and X - AX̃F = C, Int. J. Comput. Math. 89, 890.
[162] Liao A.P., Lei Y., Hu X. (2007) Least-squares solution with the minimum-norm for the matrix equation A^T XB + B^T X^T A = D and its applications, Acta Mathematica Sinica English Series 23, 269.
[163] Piao F.X., Zhang Q., Wang Z. (2007) The solution to matrix equation AX + X^T C = B, J. Franklin Inst. 344, 1056.
[164] Song C.Q., Chen G.L., Zhao L.L. (2011) Iterative solutions to coupled Sylvester-transpose matrix equations, Appl. Math. Modell. 35, 4675.
[165] Gavin K.R., Bhattacharyya S.P. (1983) Robust and well-conditioned eigenstructure assignment via Sylvester's equation, Proc. Amer. Control Conf. 4, 205.
[166] Kwon B.H., Youn M.J. (1987) Eigenvalue-generalized eigenvector assignment by output feedback, IEEE Trans. Autom. Control AC-32, 417.
[167] Luenberger D.G. (1964) Observing the state of a linear system, IEEE Trans. Mil. Electron. MIL-8, 74.
[168] Chen J., Patton R., Zhang H. (1996) Design of unknown input observers and robust fault detection filters, Int. J. Control 63, 85.
[169] Song C.Q., Wang X.D., Feng J.E., Zhao J.L. (2014) Parametric solutions to the generalized discrete Yakubovich-transpose matrix equation, Asian J. Control 16, 1.
[170] Duan G.R. (1998) Eigenstructure assignment and response analysis in descriptor linear systems with state feedback control, Int. J. Control 69, 663.
[171] Castelan E.B., da Silva V.G. (2005) On the solutions of a Sylvester equation appearing in descriptor linear systems control theory, Syst. Control Lett. 54, 109.
[172] Bevis J.H., Hall F.J., Hartwig R.E. (1988) The matrix equation AX̄ - XB = C and its special cases, SIAM J. Matrix Anal. Appl. 9, 348.
[173] Jiang T.S., Cheng X.H., Chen L. (2006) An algebraic relation between consimilarity and similarity of complex matrices and its applications, J. Phys. A: Math. Gen. 39, 9215.

[174] Huang L.P. (2001) Consimilarity of quaternion matrices and complex matrices, Linear Algebra Appl. 331, 21.
[175] Miron S., Le Bihan N., Mars J. (2006) Quaternion-MUSIC for vector-sensor array processing, IEEE Trans. Signal Process. 54, 1218.
[176] Ward J.P. (1997) Quaternions and Cayley numbers: Algebra and applications, Kluwer, Dordrecht, The Netherlands.
[177] Hanson A.J. (2005) Visualizing quaternions, Morgan Kaufmann, San Mateo, CA.
[178] Fortuna L., Muscato G., Xibilia M. (2001) A comparison between HMLP and HRBF for attitude control, IEEE Trans. Neural Netw. 12, 318.
[179] Kuipers J.B. (2002) Quaternions and rotation sequences: A primer with applications to orbits, aerospace and virtual reality, Princeton Univ. Press, Princeton, NJ.
[180] Ning Q., Wang Q.W. (2008) Ranks of a partitioned quaternion matrix under a pair of matrix equations, Southeast Asian Bull. Math. 32, 741.
[181] Huang L.P. (1998) The quaternion matrix equation ∑ A_i X B_i = E, Acta Mathematica Sinica 14, 91.
[182] Yuan S.F., Liao A.P. (2011) Least squares solution of the quaternion matrix equation X - AX̂B = C with the least norm, Linear and Multilinear Algebra 59, 985.
[183] Wang Q.W., Zhang H.S., Yu S.W. (2008) On solutions to the quaternion matrix equation AXB + CYD = E, Electron. J. Linear Algebra 17, 343.
[184] Song C.Q., Chen G.L., Wang X.D. (2012) On solutions of quaternion matrix equations XF - AX = BY and XF - AX̃ = BY, Acta Mathematica Scientia 32, 1967.
[185] Zhou B., Cai G.B., Lam J. (2013) Positive definite solutions of the nonlinear matrix equation X + A^H X^{-1} A = I, Appl. Math. Comput. 219, 7377.
[186] Hanzon B., Peeters R.M. (1996) A Faddeev sequence method for solving Lyapunov and Sylvester equations, Linear Algebra Appl. 241-243, 401.
[187] Jiang T.S., Wei M.S. (2003) On solutions of the matrix equations X - AXB = C and X - AX̄B = C, Linear Algebra Appl. 367, 225.
[188] Jiang T.S., Wei M.S. (2005) On a solution of the quaternion matrix equation X - AX̃B = C and its application, Acta Mathematica Sinica English Ser. 21, 483.
[189] Braden H. (1998) The equations A^T X ± X^T A = B, SIAM J. Matrix Anal. Appl. 20, 295.
[190] Terán F.D., Dopico F.M., Guillery N., Montealegre D., Reyes N. (2013) The solution of the equation AX + X^H B = 0, Linear Algebra Appl. 438, 2817.
[191] Terán F.D., Dopico F.M. (2011) The equation XA + AX^H = 0 and the dimension of congruence orbits, Electron. J. Linear Algebra 22, 448.
[192] Terán F.D., Dopico F.M. (2011) Consistency and efficient solution of the Sylvester equation for H-congruence, Electron. J. Linear Algebra 22, 849.
[193] Lancaster P., Rozsa P. (1983) On the matrix equation AX + X^H A^H = C, SIAM J. Algebraic Discrete Methods 4, 432.
[194] Terán F.D., Dopico F.M. (2011) Consistency and efficient solution of the Sylvester equation for H-congruence: AX + X^H B = C, Electron. J. Linear Algebra 22, 849.
[195] He Z.H., Wang Q.W., Zhang Y. (2018) A system of quaternary coupled Sylvester-type real quaternion matrix equations, Autom. 87, 25.
[196] He Z.H. (2019) Pure PSVD approach to Sylvester-type quaternion matrix equations, Electron. J. Linear Algebra 35, 266.
[197] He Z.H., Wang M., Liu X. (2020) On the general solutions to some systems of quaternion matrix equations, RACSAM 114, 95.
[198] He Z.H., Liu J.Z., Tam T.Y. (2017) The general φ-Hermitian solution to mixed pairs of quaternion matrix Sylvester equations, Electron. J. Linear Algebra 32, 475.
[199] He Z.H. (2019) The general solution to a system of coupled Sylvester-type quaternion tensor equations involving η-Hermicity, Bull. Iran. Math. Soc. 45, 1407.
[200] Wang Q.W., Van der Woude J.W., Yu S.W. (2011) An equivalence canonical form of a matrix triplet over an arbitrary division ring with applications, Sci. Chin. Math. 5, 907.
[201] Wang Q.W., Yu S.W., Zhang Q. (2009) The real solution to a system of quaternion matrix equations with applications, Commun. Algebra 37, 2060.