Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain 3031221818, 9783031221811

This book provides robust analysis and synthesis tools for Markovian jump systems in the finite-time domain with specifi


English Pages 211 [212] Year 2023


Table of contents :
Preface
Acknowledgements
Contents
1 Introduction
1.1 Markovian Jump Systems (MJSs)
1.1.1 Nonlinear MJSs
1.1.2 Switching MJSs
1.1.3 Non-homogeneous MJSs
1.2 Finite-Time Stability and Control
1.2.1 FTS for Deterministic Systems
1.2.2 FTS for Stochastic MJSs
1.3 Outline
References
2 Finite-Time Stability and Stabilization for Discrete-Time Markovian Jump Systems
2.1 Introduction
2.2 Preliminaries and Problem Formulation
2.3 Stochastic Finite-Time Stabilization for Linear MJSs
2.4 Stochastic Finite-Time Stabilization for Nonlinear MJSs
2.5 Simulation Analysis
2.6 Conclusion
References
3 Finite-Time Stability and Stabilization for Switching Markovian Jump Systems
3.1 Introduction
3.2 Preliminaries and Problem Formulation
3.3 Stochastic Finite-Time H∞ Control
3.4 Observer-Based Finite-Time H∞ Control
3.5 Simulation Analysis
3.6 Conclusion
References
4 Finite-Time Stability and Stabilization for Non-homogeneous Markovian Jump Systems
4.1 Introduction
4.2 Preliminaries and Problem Formulation
4.3 Stochastic Finite-Time Stabilization
4.4 Stochastic Finite-Time H∞ Control
4.5 Observer-Based Finite-Time Control
4.6 Simulation Analysis
4.7 Conclusion
References
5 Asynchronous Finite-Time Passive Control for Discrete-Time Markovian Jump Systems
5.1 Introduction
5.2 Finite-Time Passive Control
5.3 Asynchronous Finite-Time Passive Control
5.4 Simulation Analysis
5.5 Conclusions
References
6 Finite-Time Sliding Mode Control for Discrete-Time Markovian Jump Systems
6.1 Introduction
6.2 Finite-Time Sliding Mode Control
6.3 Asynchronous Finite-Time Sliding Mode Control
6.4 Simulation Analysis
6.5 Conclusion
References
7 Finite-Frequency Control with Finite-Time Performance for Markovian Jump Systems
7.1 Introduction
7.2 Finite-Time Stabilization with Finite-Frequency Performance
7.3 Finite-Time Multiple-Frequency Control Based on Derandomization
7.4 Simulation Analysis
7.5 Conclusion
References
8 Stochastic Finite-Time Consensualization for Markovian Jump Networks with Disturbances
8.1 Introduction
8.2 Preliminaries and Problem Formulation
8.3 Finite-Time Consensualization with State Feedback
8.4 Finite-Time Consensualization with Output Feedback
8.5 Simulation Analysis
8.6 Conclusion
References
9 Higher-Order Moment Finite-Time Stabilization for Discrete-Time Markovian Jump Systems
9.1 Introduction
9.2 Preliminaries and Problem Formulation
9.3 Higher-Order Moment Stabilization in the Finite-Time Domain
9.4 Higher-Order Moment Finite-Time Stabilization with Finite-Frequency Performance
9.5 Simulation Analysis
9.6 Conclusion
References
10 Model Predictive Control for Markovian Jump Systems in the Finite-Time Domain
10.1 Introduction
10.2 Stochastic Finite-Time MPC for MJSs
10.3 Stochastic Finite-Time MPC for Semi-MJSs
10.4 Simulation Analysis
10.5 Conclusion
References
11 Conclusion
Index


Lecture Notes in Control and Information Sciences 492

Xiaoli Luan Shuping He Fei Liu

Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain

Lecture Notes in Control and Information Sciences Volume 492

Series Editors
Frank Allgöwer, Institute for Systems Theory and Automatic Control, Universität Stuttgart, Stuttgart, Germany
Manfred Morari, Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA

Advisory Editors
P. Fleming, University of Sheffield, UK
P. Kokotovic, University of California, Santa Barbara, CA, USA
A. B. Kurzhanski, Moscow State University, Moscow, Russia
H. Kwakernaak, University of Twente, Enschede, The Netherlands
A. Rantzer, Lund Institute of Technology, Lund, Sweden
J. N. Tsitsiklis, MIT, Cambridge, MA, USA

This series reports new developments in the fields of control and information sciences—quickly, informally and at a high level. The type of material considered for publication includes:

1. Preliminary drafts of monographs and advanced textbooks
2. Lectures on a new field, or presenting a new angle on a classical field
3. Research reports
4. Reports of meetings, provided they are (a) of exceptional interest and (b) devoted to a specific topic. The timeliness of subject material is very important.

Indexed by EI-Compendex, SCOPUS, Ulrich’s, MathSciNet, Current Index to Statistics, Current Mathematical Publications, Mathematical Reviews, IngentaConnect, MetaPress and Springerlink.

Xiaoli Luan · Shuping He · Fei Liu

Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain

Xiaoli Luan Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education) Institute of Automation Jiangnan University Wuxi, Jiangsu, China

Shuping He Key Laboratory of Intelligent Computing and Signal Processing (Ministry of Education) School of Electrical Engineering and Automation Anhui University Hefei, Anhui, China

Fei Liu Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education) Institute of Automation Jiangnan University Wuxi, Jiangsu, China

ISSN 0170-8643 ISSN 1610-7411 (electronic)
Lecture Notes in Control and Information Sciences
ISBN 978-3-031-22181-1 ISBN 978-3-031-22182-8 (eBook)
https://doi.org/10.1007/978-3-031-22182-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

In modern industry there are many hybrid systems involving both continuous state evolution and discrete event-driven dynamics, such as biochemical systems, communication networks, aerospace systems, manufacturing processes, and economic systems. These systems often encounter component failures, changes in the external environment, and changes in subsystem correlations, which cause random jumping or switching of the system structure and parameters. That is, the switching between modes is random but may conform to certain statistical laws; if it conforms to Markovian characteristics, the system is called a stochastic Markovian jump system (MJS). The dynamic behavior of an MJS consists of two parts: the discrete mode, described by a Markovian chain taking values in a finite integer set, and the continuously evolving state, characterized by a differential (or difference) equation for each mode. In this sense, MJSs belong to the category of hybrid systems, and their particularity lies in that the discrete events and continuous variables can be expressed together by a stochastic differential or difference equation. This makes it possible to apply the state-space methods of modern control theory to the study of MJSs.

On the other hand, control theory has long focused on the steady-state characteristics of systems over an infinite time domain. For most engineering systems, however, the transient characteristics over a finite-time interval are of more practical concern. On the one hand, an asymptotically stable system does not necessarily exhibit good transient behavior; it may even undergo violent oscillations and thus fail to meet production requirements. On the other hand, many practical production processes, such as biochemical reaction systems and economic systems, run only for a short time, and people are mainly interested in their transient performance over a given time domain.
Therefore, this book introduces finite-time control theory into stochastic discrete-time MJSs. It considers the transient characteristics of discrete-time MJSs over a finite-time interval, establishes their stability, boundedness, robustness, and other performance indices in a given time domain, and ensures that the state trajectory of the system remains within a certain range of the equilibrium point. In this way, the engineering conservatism of the asymptotic stability of conventional control theory is reduced along the time dimension.


This book aims at developing less conservative analysis and design methodologies for discrete-time MJSs via finite-time control theory. It can be used by final-year undergraduates, postgraduates, and academic researchers. Prerequisite knowledge includes linear algebra, linear system theory, matrix theory, stochastic systems, etc. It should be regarded as an advanced book.

Wuxi, Jiangsu, China

Xiaoli Luan [email protected]

Hefei, Anhui, China

Shuping He [email protected]

Wuxi, Jiangsu, China

Fei Liu [email protected]

Acknowledgements

The authors would like to express their sincere appreciation to those who directly participated in various aspects of the research leading to this book. Special thanks go to Prof. Pedro Albertos from the Universidad Politécnica de Valencia in Spain, Prof. Peng Shi from the University of Adelaide in Australia, Prof. Shuping He from Anhui University in China, and Profs. Fei Liu, Jiwei Wen, and Shunyi Zhao from Jiangnan University in China for their helpful suggestions, valuable comments, and great support. The authors also thank the many colleagues and students who contributed technical support and assistance throughout this research. In particular, we would like to acknowledge the contributions of Wei Xue, Haiying Wan, Peng He, Ziheng Zhou, Chang'an Han, Chengcheng Ren, Xiang Zhang, and Shuang Gao. Finally, we are incredibly grateful to our families for their never-ending encouragement and support.

This book was supported in part by the National Natural Science Foundation of China (Nos. 61991402, 61991400, 61833007, 62073154, 62073001), the Scientific Research Cooperation and High-level Personnel Training Programs with New Zealand (No. 1252011004200040), the University Synergy Innovation Program of Anhui Province (No. GXXT-2021-010), the Anhui Provincial Key Research and Development Project (No. 2022i01020013), and the Anhui University Quality Engineering Project (Nos. 2022i01020013, 2020jyxm0102, 021jxtd017).



Chapter 1

Introduction

Abstract Due to the engineering practicability of finite-time theory, a great number of results on finite-time analysis and synthesis have been achieved, especially for Markovian jump systems (MJSs). The MJS is a special kind of hybrid system, the particularity of which lies in that, although it belongs to the class of hybrid systems, its discrete-event dynamics are random processes that follow Markovian chains. Owing to their strong engineering background and practical significance, MJSs can describe a large class of industrial processes. This chapter mainly introduces the background and research status of MJSs (covering linear and nonlinear MJSs, switching MJSs, and non-homogeneous MJSs) and of finite-time performance for deterministic systems and stochastic MJSs.

1.1 Markovian Jump Systems (MJSs)

The Markovian jump system (MJS) was first proposed by Krasovskii in 1961 [1]. It was initially regarded as a special stochastic system and did not attract much attention. With the development of hybrid system theory, researchers found that the MJS is in fact a special kind of hybrid system, and it has since attracted extensive attention [2, 3]. An MJS assumes that the system dynamics switch within a set of known subsystem models, also called modes, and that the switching law between modes obeys a finite-state Markovian process. The particularity of the MJS lies in that, although it belongs to the class of hybrid systems, its discrete-event dynamics are random processes that follow statistical laws. Thanks to the development of stochastic process theory, the dynamics of an MJS can be written as a stochastic differential equation or stochastic difference equation, so its analysis and synthesis can be studied with methods similar to those for continuous-variable dynamic systems.

The MJS was proposed with a strong engineering background and practical significance. The model has proved to be an effective way to describe industrial processes, which are often subjected to random disturbances from internal component failures and changes in the external environment. Common examples include biochemical systems, power systems, flexible manufacturing systems, aerospace systems, communication networks, robotics, energy systems, economic systems, etc. Although Krasovskii proposed the specific model of the MJS, he used it only as an example of mathematical analysis without a practical application background. It was Sworder who first applied it to actual control problems [4]. In 1969, he discussed the optimal control problem of linear systems with Markovian jump parameters from the perspective of the stochastic maximum principle. In 1971, Wonham formulated the dynamic programming problem of stochastic control systems and successfully applied it to the optimal control of linear MJSs [5]. Subsequently, issues such as stochastic stability, controllability, observability, robust control and filtering, and fault detection for linear MJSs have been solved [6–11].
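As a concrete illustration of the structure just described — a Markovian chain selecting the active mode, and a per-mode difference equation driving the continuous state — the following sketch simulates a two-mode discrete-time MJS. The matrices and transition probabilities are hypothetical, chosen only for illustration.

```python
import random

# Hypothetical two-mode discrete-time MJS (numbers are illustrative only):
#   x(k+1) = A[r(k)] x(k),  with mode r(k) a Markov chain whose TP matrix PI
#   satisfies PI[i][j] = Pr{ r(k+1) = j | r(k) = i }.
A = {0: [[0.8, 0.1], [0.0, 0.7]],   # mode 0 dynamics
     1: [[1.1, 0.2], [0.1, 0.9]]}   # mode 1 dynamics
PI = [[0.9, 0.1],
      [0.3, 0.7]]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def simulate(x0, r0, steps, seed=0):
    """Propagate the state through `steps` mode-dependent updates."""
    rng = random.Random(seed)                      # fixed seed for reproducibility
    x, r = list(x0), r0
    traj = [(r, list(x))]
    for _ in range(steps):
        x = mat_vec(A[r], x)                       # continuous state: per-mode difference equation
        r = 0 if rng.random() < PI[r][0] else 1    # discrete event: Markovian mode jump
        traj.append((r, list(x)))
    return traj

traj = simulate([1.0, -0.5], 0, 20)
```

Replacing the two open-loop matrices with mode-dependent closed-loop matrices of the form A[i] + B[i]K[i] turns the same loop into a quick numerical check of a candidate mode-dependent state-feedback design.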

1.1.1 Nonlinear MJSs

Compared with linear MJSs, research on nonlinear MJSs has progressed slowly [12–14]. This is mainly caused by the complexity of real MJSs, the complex behavior of nonlinearities, and the limitations of existing control theories and algorithms. For linear MJSs, the control design can be transformed into solving a corresponding Riccati equation or linear matrix inequality. For nonlinear MJSs, however, it is impossible to design a general controller guaranteeing both performance and stability, so controlling MJSs with nonlinearities remains a difficult problem in the control field. Although some scholars paid attention to this kind of system at an early stage, the development of the related theory is still slow. Aliyu and Boukas tried to use the Hamilton–Jacobi equation to give sufficient conditions for stochastic stability of nonlinear MJSs [15]. Unfortunately, it is very difficult to obtain a global solution of the Hamilton–Jacobi equation by numerical or analytical methods, because the underlying mathematical difficulties remain unresolved. Therefore, many scholars have turned to nonlinear approximation methods, mainly fuzzy and neural network technologies, to solve the control problem of nonlinear MJSs [16–18].

The Takagi–Sugeno (T–S) fuzzy model is one of the effective methods to deal with MJSs with nonlinearities. Through IF-THEN fuzzy rules, it provides a local linear description or approximate representation of nonlinear MJSs. For the resulting T–S fuzzy MJSs, the hot issues focus on quantized feedback control, robust control, dissipative control, asynchronous dissipative control, asynchronous sliding mode control, adaptive synchronous control, robust filtering, asynchronous filtering, etc. [19–25]. As system complexity grows, the requirements on the safety and reliability of the controlled system keep increasing; the works [26–28] investigated fault detection and fault diagnosis of T–S fuzzy MJSs based on observer and filter design.

Another effective technique for nonlinear MJSs is neural networks. One typical alternative is to use neural networks to linearize the nonlinearities of MJSs. Through linear difference inclusions under the state-space representation, the optimal control, robust control, output feedback control, scheduling control, H∞ filtering, and robust fault detection have been addressed in [29–35]. The sliding mode control and event-triggered fault detection were then investigated in [36, 37] by employing a multilayer neural network to substitute for the nonlinearities. Combined with the backstepping scheme, adaptive tracking control for nonlinear MJSs has been examined in [38, 39]. By adopting neural networks to realize online adaptive dynamic programming, optimal control, optimal tracking control, and online learning and control have been addressed in [40–42].
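The T–S blending idea can be sketched in a few lines: local linear rules are combined through normalized membership functions. The membership shapes and local gains below are hypothetical, not taken from the cited works.

```python
# Minimal T-S fuzzy blending sketch (hypothetical memberships and local gains):
# two IF-THEN rules give local linear models x+ = a1*x and x+ = a2*x,
# blended by normalized memberships satisfying h1(x) + h2(x) = 1.
def memberships(x, lo=-2.0, hi=2.0):
    """Linear (triangular-style) normalized memberships on [lo, hi]."""
    h1 = min(max((hi - x) / (hi - lo), 0.0), 1.0)
    return h1, 1.0 - h1

a1, a2 = 0.5, 1.2    # local linear dynamics of the two rules

def ts_step(x):
    h1, h2 = memberships(x)
    return (h1 * a1 + h2 * a2) * x   # convex blend of the local models

x = 1.0
for _ in range(5):
    x = ts_step(x)   # approximate nonlinear update via the fuzzy blend
```

In a T–S fuzzy MJS, the local gains would additionally depend on the Markovian mode r(k), giving one rule set per mode.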

1.1.2 Switching MJSs

For a general linear switching system, the switching rules between subsystems are deterministic, and each subsystem can be described by linear differential (or difference) equations. For a single switching subsystem, however, component failures and sudden environmental disturbances often occur, causing the system structure and parameters to jump; such a subsystem is more suitably described by an MJS. Take a voltage conversion circuit as an example: the required voltage is obtained by switching between different gear positions, and at a given gear position the system may undergo random jumps due to the failure of electronic components. It is not appropriate to model such a system as a simple switching system, because it contains both deterministic switching signals and randomly jumping modes. These more complex systems are called switching MJSs, and their model was first proposed by Bolzern [43]. In [43], mean square stability was resolved via the time evolution of the second-order moment of the state, subject to constraints on the dwell time between switching instants. Exponential almost sure stability for switching signals satisfying an average dwell time restriction was then investigated by Bolzern in [44], where a trade-off was found between the average dwell time and the ratio of the residence times. Similarly, almost sure stability for linear switching MJSs in continuous time was addressed in [45, 46] by applying the Lyapunov function approach. On the basis of stability analysis, exponential l2 − l∞ control, H∞ control, resilient dynamic output feedback control, mean square stabilization, and almost sure stabilization have been studied [47–51]. In recent years, the analysis and synthesis results for switching MJSs have been extended to positive systems [52, 53], in which the state variables take only nonnegative values.
The development of switching MJSs enriches the research field of hybrid systems and provides a more general system modeling method. When random jumping is not considered, the system reduces to a general switching system; when the switching rules are not considered, each single subsystem is a general MJS. The coupling of switching signals and jumping modes brings great challenges to analysis and synthesis, and many tough problems remain to be solved.
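The coexistence of a deterministic switching signal and a random jumping mode described above can be sketched as follows; the dwell time, subsystem count, and TP matrices are hypothetical.

```python
import random

# Sketch of a switching MJS mode process (illustrative data): a deterministic
# switching signal sigma(k) with fixed dwell time selects the active subsystem,
# and within each subsystem the mode r(k) jumps by that subsystem's TP matrix.
PI = {0: [[0.95, 0.05], [0.20, 0.80]],   # TP matrix of subsystem 0
      1: [[0.60, 0.40], [0.50, 0.50]]}   # TP matrix of subsystem 1

def switching_signal(k, dwell=5):
    """Deterministic periodic switching: change subsystem every `dwell` steps."""
    return (k // dwell) % 2

def simulate_modes(steps, r0=0, seed=1):
    rng = random.Random(seed)
    r, modes = r0, [r0]
    for k in range(steps):
        row = PI[switching_signal(k)][r]        # active subsystem's TP row
        r = 0 if rng.random() < row[0] else 1   # Markovian jump within subsystem
        modes.append(r)
    return modes

modes = simulate_modes(20)
```

The dwell time enforced by `switching_signal` is exactly the quantity that the dwell-time and average-dwell-time stability conditions of [43, 44] constrain.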


1.1.3 Non-homogeneous MJSs

All the aforementioned results on MJSs are limited to systems with fixed jumping transition probabilities (TPs). In engineering practice, the TP matrix often changes with time; that is, non-homogeneous Markovian processes are ubiquitous, and their theory has therefore become a research hotspot [54, 55]. In 2011, Aberkane explored non-homogeneous discrete-time MJSs and proposed conclusions on controller design [56], describing the time-varying TP matrix as a polytope with fixed vertices. In the same way, robust control, model predictive control, output feedback control, and H∞ filtering for non-homogeneous MJSs have been proposed in [57–61]. Although these results cover control, filtering, and stability analysis, they all describe the time-varying TPs as a polytope with fixed vertices: they account for the fact that the TPs change, but not for how they change. Among the existing results, two ways of modeling the change of the TPs have been considered. The first is periodic non-homogeneous MJSs, in which the TPs change cyclically with a given period, and the system parameters of each mode also change cyclically. The second is non-homogeneous MJSs governed by a high-order Markovian chain, which is introduced to express the change of the TPs. For periodic non-homogeneous MJSs, observability and detectability, l2 − l∞ control, H∞ filtering, and strictly dissipative filtering have been presented in [62–65]. For non-homogeneous MJSs following high-order Markovian chains, Zhang discussed the particularity of piecewise homogeneous MJSs and dealt with the H∞ estimation problem [66].
In that work, a high-order Markovian chain is used to indicate that the change of the TP matrix between segments is itself a random jump governed by probabilities, which opened a new way of thinking for later studies on time-varying TPs. In 2012, Wu discussed the stability of piecewise homogeneous MJSs with time delays [67]. The works [68, 69] proposed H∞ control and filtering for non-homogeneous MJSs with TPs following a Gaussian distribution.
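The polytopic description of a time-varying TP matrix can be sketched as a convex combination of fixed vertex matrices; the two vertices and the time scheduling of alpha(k) below are hypothetical.

```python
# Polytopic time-varying TP matrix (illustrative vertices):
#   PI(k) = alpha(k) * P1 + (1 - alpha(k)) * P2,  alpha(k) in [0, 1].
P1 = [[0.9, 0.1], [0.4, 0.6]]
P2 = [[0.5, 0.5], [0.2, 0.8]]

def tp_at(k, period=10):
    """Convex combination of the vertices; alpha(k) sweeps the polytope over time."""
    alpha = (k % period) / float(period - 1)
    return [[alpha * P1[i][j] + (1.0 - alpha) * P2[i][j] for j in range(2)]
            for i in range(2)]

# Every PI(k) remains a stochastic matrix: each row is a convex combination
# of the vertex rows, so it still sums to 1.
for k in range(20):
    assert all(abs(sum(row) - 1.0) < 1e-12 for row in tp_at(k))
```

This is why vertex-based synthesis works: conditions checked only at P1 and P2 hold for every PI(k) in the polytope, at the cost of ignoring how alpha(k) actually evolves.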

1.2 Finite-Time Stability and Control

Modern control theory covers a wide field and involves many methods, but stability analysis is the core and basis of almost all of them, especially Lyapunov stability and asymptotic stability. Lyapunov stability, as a sufficient condition, is simple and intuitive, but it focuses on system behavior over an infinite time domain, which inevitably brings conservatism from the perspective of engineering practice. For most engineering systems, the transient performance within a given time interval is of more practical concern. On the one hand, an asymptotically stable system does not necessarily exhibit good transient characteristics; it may even undergo violent oscillations and thus fail to meet production requirements. On the other hand, many practical production processes, such as biochemical reaction systems and economic systems, run only for a short time, and people are mainly interested in their transient performance over a given time domain.

To study the transient performance of systems, Kamenkov first proposed the concept of finite-time stability (FTS) in the Russian journal PMM in 1953 [70]. Similar articles soon followed in the same journal [71]; the early articles on FTS were mostly written by Russian authors and dealt with linear as well as nonlinear systems. In 1961, articles on the FTS of linear time-varying systems appeared, such as "Short-time stability in linear time-varying systems" by Dorato [72]. The idea of short-time stability is essentially the same as FTS, but the term FTS later became the more common one. Also in 1961, LaSalle and Lefschetz wrote "Stability by Liapunov's Direct Method with Applications," in which the concept of "practical stability" was proposed [73]. Both concepts require boundedness over a finite-time domain, but the lengths of the time intervals studied differ slightly. In 1965, Weiss and Infante made an in-depth study of the FTS analysis of nonlinear systems and introduced the concepts of quasi-contractive stability and convergence stability over a finite-time interval [74]. Shortly thereafter, Weiss and Infante further studied the FTS of nonlinear systems with perturbations, which led to the new concept of finite-time bounded-input bounded-output (BIBO) stability [75]; this concept evolved into what is now known as "finite-time bounded stability." In 1969, Michel and Wu extended FTS from continuous-time to discrete-time systems on the basis of many existing results [76].

In the decade from 1965 to 1975, a large number of articles on FTS appeared, but all of them were limited to stability analysis and did not give control design methods [77–79]. In 1969, Garrard studied finite-time control methods for nonlinear systems [80]. In 1972, Van Mellaert and Dorato extended finite-time control to stochastic systems [81]. During this period, San Filippo and Dorato studied robust control design for linear systems based on linear quadratic theory and FTS, and applied the results to aircraft control problems [82], while Grujic applied the concept of FTS to the controller design of adaptive systems [83]. However, the design techniques proposed between 1969 and 1976 all required complex calculations. In practice, a system never operates under absolutely ideal conditions and is often affected by external disturbances and other factors during operation. To better address stability under external disturbances, the Italian control scholar Amato introduced the notions of "finite-time stability" and "finite-time boundedness," thereby explicitly accounting for external disturbances acting on the system. In view of the importance of FTS in practical applications, more and more researchers have devoted themselves to finite-time control problems in recent years [84–88]. FTS is a stability concept, distinct from asymptotic stability, for studying the transient performance of a system. The so-called FTS is to give a bound of the initial conditions of the system whose state norm does not
In the decade from 1965 to 1975, a large number of articles on FTS appeared, but all of them were limited to stability analysis and did not give control design methods [77–79]. In 1969, Garrard studied a finite-time control method for nonlinear systems [80]. In 1972, Van Mellaert and Dorato extended finite-time control to stochastic systems [81]. During this period, San Filippo and Dorato studied robust control design for linear systems based on linear quadratic theory and FTS, and applied the results to an aircraft control problem [82]. Grujic applied the concept of FTS to the controller design of adaptive systems [83]. The design techniques proposed between 1969 and 1976 all required complex calculations. In practice, there is no absolutely ideal operating condition, and a system is often affected by external disturbances and other factors during operation. In order to better address stability under external disturbances, the Italian control scholar Amato formulated "finite-time stability" and "finite-time boundedness" in a way that explicitly accounts for external disturbances acting on the system. In view of the importance of FTS in practical applications, more and more researchers have devoted themselves to finite-time control problems in recent years [84–88]. FTS is a concept of stability, different from asymptotic stability, for studying the transient performance of a system. The so-called FTS gives a bound on the initial conditions of the system such that the state norm does not exceed a certain threshold value over a given finite-time interval. FTS has three elements: a time interval, a bound on the initial conditions, and a bound on the system states. Therefore, to judge whether a system is finite-time stable, a time interval, a bound on the initial conditions, and a bound on the system state should first be specified according to the requirements, and then one checks whether the system state stays within the pre-given bound over this time interval. Accordingly, FTS and asymptotic stability can be distinguished by three factors: (1) FTS examines the performance of the system over a specific time interval, while asymptotic stability examines the performance over an infinite time interval. (2) FTS applies to initial conditions within a given bound, while asymptotic stability applies to arbitrary initial conditions. (3) FTS requires that the system state trajectory stay within predefined bounds, while asymptotic stability requires that the system state converge asymptotically (no specific bounds are imposed on the state trajectory). Thus, these two kinds of stability are independent of each other. A system may be FTS, yet beyond the given time interval its state may diverge, so that the system is not asymptotically stable. A system may be asymptotically stable, yet its state may exceed the given region during a certain period, so that the system does not satisfy the requirements of FTS. In general, asymptotic stability concerns the asymptotic convergence of the system over the infinite-time domain, and FTS concerns the transient performance of the system over a specific time interval.
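The independence of the two notions can be illustrated numerically. The sketch below simulates a discrete-time system that is unstable (spectral radius greater than one) and yet finite-time stable with respect to a triple (c1, c2, N); the matrix A and the bounds are hypothetical choices, with the weighting matrix R taken as the identity.

```python
import numpy as np

# An unstable system that is nonetheless finite-time stable over a short
# horizon; A and the bounds c1 < c2 are hypothetical choices, R = I.
A = np.array([[1.05, 0.1],
              [0.0, 1.02]])
c1, c2, N = 1.0, 8.0, 10  # bound on x0'x0, bound on xk'xk, time horizon

def finite_time_stable(A, c1, c2, N, trials=500):
    """Return True if every sampled trajectory with ||x0||^2 = c1 keeps
    ||x_k||^2 < c2 for k = 1..N (for linear systems the worst case lies
    on the boundary of the initial set, so we sample only there)."""
    rng = np.random.default_rng(1)
    for _ in range(trials):
        x = rng.normal(size=2)
        x *= np.sqrt(c1) / np.linalg.norm(x)
        for _ in range(N):
            x = A @ x
            if x @ x >= c2:
                return False
    return True

print(finite_time_stable(A, c1, c2, N))                # True: FTS w.r.t. (c1, c2, N)
print(bool(np.max(np.abs(np.linalg.eigvals(A))) < 1))  # False: not asymptotically stable
```

Beyond the horizon N the state keeps growing, so the same system fails asymptotic stability, matching distinction (1)–(3) above.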

1.2.1 FTS for Deterministic Systems

In recent years, with the development of linear matrix inequality (LMI) theory, problems related to FTS have been revisited. In 1997, Dorato presented a robust finite-time controller design for linear systems at the 36th IEEE CDC [89]. In that paper, a state feedback control law for the finite-time stabilization of linear systems was presented for the first time by employing LMIs, and LMI theory was thereby introduced into FTS analysis and controller design for linear systems. Subsequently, Amato also presented a series of FTS (or finite-time boundedness) and finite-time controller design methods for uncertain linear continuous-time systems based on LMI conditions [90, 91]. In 2005, Amato extended these FTS and finite-time control results for linear continuous-time systems to linear discrete-time systems [92] and addressed the design conditions for finite-time stabilizing state feedback and output feedback controllers, respectively [93]. In subsequent studies, Amato further extended the FTS results to more general systems, and at the same time other scholars also began to study FTS problems [94–100]. The traditional asymptotic
stability requires the corresponding Lyapunov energy function to decrease strictly, but the solutions mentioned above relax this requirement by allowing the function to increase within a certain range, thus transforming the FTS problem into a series of LMI feasibility problems. Therefore, the FTS analysis and synthesis conditions given in these works are easy to verify, and the difference between FTS and traditional asymptotic stability can be seen clearly. Differential linear matrix inequalities (DLMIs) are another standard tool for analyzing the FTS problem [101]. Based on DLMIs, the design of a finite-time bounded dynamic output feedback controller for time-varying linear systems was studied [102]. In 2011, Amato studied the FTS problem of impulsive dynamical linear systems and the robust FTS problem of impulsive dynamical linear systems with norm-bounded uncertainty [103–105]. In 2013, Amato gave necessary and sufficient conditions for the FTS of impulsive dynamical linear systems [106]. Compared with LMIs, DLMI-based methods are better suited to linear time-varying systems and less conservative, but they are computationally complex and difficult to generalize to other types of complex systems. In addition, DLMI-based analysis can also be used to discuss the input–output finite-time stability of linear systems [107–109]. The above results all concern the FTS of linear systems. For the FTS of nonlinear systems, the following two approaches are generally used: (1) Directly use tools from nonlinear systems theory. This approach imposes no restriction on the nonlinearity of the system, so it is universal, but the results are difficult to compute. For example, some early works on the FTS of nonlinear systems adopted this approach [74, 110–113].
In 2004, Mastellone studied the FTS of nonlinear discrete-time stochastic systems utilizing upper bounds on the exit probability and correlation functions, and further gave a design method for a finite-time stabilizing controller [114]. In 2009, Yang carried out FTS analysis and synthesis for nonlinear stochastic systems with impulses based on a Lyapunov function-like method [115]. (2) Use methods similar to those for the linear systems above. This approach requires special restrictions on the nonlinearity of the system, and the results are generally expressed as the feasibility of LMIs (or DLMIs) [116–120]. For example, the works [121–123] studied the robust finite-time control problem for a class of nonlinear systems with norm-bounded parameter uncertainties and external disturbances, where the nonlinearities are first approximated by a multilayer feedback neural network model. Elbsat [124] studied the finite-time state feedback control problem for a class of discrete-time nonlinear systems with conic-type nonlinearities and external disturbance inputs. A robust and resilient linear state feedback controller is designed based on LMI techniques to ensure that the closed-loop system is finite-time stable for all nonlinearities within an uncertain hypersphere centered at the origin, all admissible external disturbances, and all controller gain perturbations within a set boundary.
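The LMI-type conditions mentioned above relax the strict decrease of the Lyapunov function V(x) = x'Px to V(x_{k+1}) ≤ γV(x_k) with γ ≥ 1, so that V(x_k) ≤ γ^N λmax(P) c1 on the horizon. A minimal numerical sketch of such a certificate check (hypothetical A, P, γ and bounds; R = I; not a specific condition from the cited papers) is:

```python
import numpy as np

def check_fts_certificate(A, P, gamma, c1, c2, N):
    """Numerically verify a Lyapunov-like FTS certificate (R = I):
    (i)  A' P A - gamma P < 0, so V(x) = x'Px grows at most by gamma per step;
    (ii) (lmax(P)/lmin(P)) * gamma**N * c1 < c2, propagating the bound over N steps."""
    eig_P = np.linalg.eigvalsh(P)          # eigenvalues in ascending order
    if eig_P[0] <= 0 or gamma < 1:
        return False
    growth = np.linalg.eigvalsh(A.T @ P @ A - gamma * P)[-1] < 0
    bound = (eig_P[-1] / eig_P[0]) * gamma**N * c1 < c2
    return bool(growth and bound)

# Hypothetical data: unstable A, identity certificate P, gamma above ||A||^2.
A = np.array([[1.05, 0.1],
              [0.0, 1.02]])
print(check_fts_certificate(A, np.eye(2), gamma=1.2, c1=1.0, c2=8.0, N=10))  # True
```

In the literature, P (and a state feedback gain) would be decision variables of an LMI solver; here the certificate is simply supplied and verified, which is enough to show why a γ > 1 relaxation turns FTS into a feasibility question.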

1.2.2 FTS for Stochastic MJSs

With the development of MJS theory, finite-time analysis and synthesis problems have been widely studied for MJSs. The existing results fall mainly into three categories. The first considers the various complex situations of the system, such as time-delays, uncertainty, time variation, external disturbances, and nonlinearities, and studies the finite-time analysis and synthesis of MJSs under these complex situations [125–129]. In the study of delayed MJSs, two kinds of sufficient conditions are of interest. One kind is independent of the size of the delay and is called delay-independent; for systems with small delays, such conditions are strongly conservative. Therefore, the other kind of FTS condition, which involves the size of the time-delay, namely the delay-dependent condition, has attracted widespread attention. Since delay-dependent conditions regulate the system better than delay-independent ones and are less conservative, scholars pay more attention to delay-dependent FTS analysis and synthesis, using techniques such as model transformation, delay partitioning, parameterized model transformation, and free-weighting matrices [130–134]. Meanwhile, due to various uncertainties, robust control theory is used to adjust the finite-time performance of MJSs. One direction is robust FTS for system analysis; the other is finite-time controller design for regulating system performance so that the closed-loop system is robustly finite-time stabilized over the given time interval. Common research tools include the Riccati equation, linear matrix inequalities, and robust H∞ control [134–139]. The advantage of H∞ control is that the H∞ norm of the transfer function describes the maximum gain from the input energy to the output; by solving an optimization problem, the influence of disturbances with finite power spectrum can be minimized.
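For stochastic MJSs, the FTS requirement is stated on the expectation E[x_k' R x_k]. The following Monte Carlo sketch estimates that quantity for a two-mode jump system; the mode matrices, TP matrix, and bounds are hypothetical values chosen for illustration (R = I).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-mode jump system: mode 0 is unstable, mode 1 is stable.
A = [np.array([[1.1, 0.0], [0.2, 0.8]]),
     np.array([[0.5, 0.1], [0.0, 0.9]])]
Pi = np.array([[0.6, 0.4],      # transition probability matrix
               [0.3, 0.7]])

def mean_energy(c1, N, trials=2000):
    """Monte Carlo estimate of E[x_k' x_k], k = 0..N, with ||x0||^2 = c1, r0 = 0."""
    energy = np.zeros(N + 1)
    for _ in range(trials):
        x = rng.normal(size=2)
        x *= np.sqrt(c1) / np.linalg.norm(x)
        r = 0
        energy[0] += x @ x
        for k in range(1, N + 1):
            x = A[r] @ x                 # evolve with the current mode,
            r = rng.choice(2, p=Pi[r])   # then jump according to Pi
            energy[k] += x @ x
    return energy / trials

# Stochastic FTS w.r.t. (c1, c2, N) asks E[x_k' x_k] < c2 over the horizon.
c1, c2, N = 1.0, 5.0, 10
est = mean_energy(c1, N)
print(bool(est.max() < c2))
```

Individual sample paths may leave the bound (for instance when the chain lingers in the unstable mode), while the expectation stays inside it, which is exactly the distinction between deterministic and stochastic FTS.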
Of course, there are many other FTS analysis and synthesis results for complex MJSs, such as 2D MJSs, singular MJSs, nonlinear MJSs, positive MJSs, neutral MJSs, switching MJSs, and distributed-parameter MJSs [140–145]. The second category of FTS results for MJSs concerns the change of the TPs. Initially, the relevant studies were based on time-invariant TPs, and all the elements of the TP matrix were assumed to be known in advance. However, in practical engineering applications, it is not easy to obtain all the elements of the TP matrix accurately. Therefore, some scholars have studied the finite-time performance of MJSs with partially known TPs [146, 147]. In [148, 149], the TPs are considered unknown but bounded, convex polytopes or boundedness conditions are used to describe the change of the TPs, and robust FTS analysis and synthesis methods for such systems are studied. Since the TPs often change with time, non-homogeneous Markovian processes generally exist in practical engineering systems; as a result, FTS analysis and synthesis for non-homogeneous MJSs have also been extensively investigated [150, 151]. Since the jumping time of MJSs follows an exponential distribution, the TP matrix of MJSs is time-invariant, which limits the application of MJSs. Compared with MJSs, semi-MJSs are characterized by a fixed TP matrix and a dwell-time probability density function matrix. Because the restriction of
the probability distribution function is relaxed, semi-MJSs have broader applicability. Therefore, it is of great theoretical value and practical significance to study the FTS analysis and synthesis of semi-MJSs. By the method of supplementary variables and model transformation, asynchronous event-triggered sliding mode control, event-triggered guaranteed cost control, memory sampled-data control, observer-based sliding mode control, and H∞ filtering have been addressed for semi-MJSs over a finite-time interval [152–156]. The third category of results combines finite-time performance with other control strategies. For example, finite-time sliding mode control methods for MJSs were presented in [157–160]. As a high-performance robust control strategy, sliding mode control has the advantages of insensitivity to parameter perturbations, good transient performance, fast response, and strong robustness. It is a typical variable structure control: the designed sliding mode controller drives the state trajectory of the closed-loop system onto the designed sliding surface, and once the trajectory reaches the sliding surface and remains in sliding motion, it is insensitive to matched perturbations. Therefore, as a common design method, sliding mode control is applied to the finite-time performance study of MJSs.
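The reaching-phase idea can be sketched in a few lines. The example below uses a single-mode system with hypothetical A, B, and G and a classical Gao-type discrete reaching law; it is an illustration of the mechanism, not the jump-system controller designed in later chapters.

```python
import numpy as np

# Single-mode discrete-time sliding mode sketch (hypothetical A, B, G).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
G = np.array([[1.0, 1.0]])          # sliding function s_k = G x_k

def smc_step(x, q=0.2, eps=0.01):
    """One step of the reaching law s_{k+1} = (1 - q) s_k - eps * sign(s_k)."""
    s = (G @ x)[0]
    s_next = (1 - q) * s - eps * np.sign(s)
    u = (s_next - (G @ A @ x)[0]) / (G @ B)[0, 0]   # solve G(Ax + Bu) = s_next
    return A @ x + B[:, 0] * u

x = np.array([2.0, -1.0])           # initial sliding value s_0 = 1
for _ in range(40):
    x = smc_step(x)
print(bool(abs((G @ x)[0]) < 0.05))  # trajectory confined to a quasi-sliding band
```

In discrete time the trajectory does not stay exactly on the surface but chatters inside a quasi-sliding band of width determined by eps, which is why the transient performance during both the reaching and the sliding phases must be analyzed.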

1.3 Outline

In order to help readers understand the structure of the book, the main research content is shown in Fig. 1.1. The outline of the book is as follows.

Fig. 1.1 The main research content

This chapter introduces the research background, motivations, and research problems for the finite-time analysis and synthesis of MJSs, including FTS for typical kinds of MJSs (linear and nonlinear MJSs, switching MJSs, and non-homogeneous MJSs) and FTS for MJSs combined with other control strategies (sliding mode control, dissipative control, and non-periodic triggered control).

Chapter 2 investigates stochastic FTS, stochastic finite-time boundedness, and stochastic finite-time stabilization for discrete-time linear and nonlinear MJSs by relaxing the strict decrease of the Lyapunov energy function. For linear MJSs, the finite-time control design can be transformed into the solution of a corresponding Riccati equation or linear matrix inequality. However, for nonlinear MJSs, it is impossible to design a general controller satisfying the transient performance requirements. To deal with the nonlinearities of MJSs, a neural network is utilized to approximate the nonlinearities by linear difference inclusions. The designed controller keeps the state trajectories of the systems within the pre-specified bounds over the given time interval, rather than making them converge asymptotically to the equilibrium point, despite the approximation error and external disturbance.

Chapter 3 extends the results on stochastic FTS, stochastic finite-time boundedness, and stochastic finite-time stabilization to switching MJSs with time-delay. The coupling of switching signals and jumping modes brings great challenges to the finite-time analysis and synthesis of the system. To analyze the transient performance of switching MJSs, the finite-time boundedness, finite-time H∞ stabilization, and observer-based finite-time H∞ control for a class of stochastic jumping systems governed by deterministic switching signals are investigated in this chapter. Considering the effect of the average dwell time on the finite-time performance, some results on stochastic finite-time boundedness and stochastic finite-time stabilization with an H∞ disturbance attenuation level are given. The relationship among three kinds of time scales, namely time-delay, average dwell time, and the finite-time interval, is derived by means of the average dwell-time constraint.

Chapter 4 addresses the stochastic finite-time stabilization, stochastic finite-time H∞ control, and observer-based state feedback finite-time control problems for non-homogeneous MJSs by considering the random variation of the TPs. Different from the results presented in Chaps. 2 and 3, this chapter focuses on the random change of the TPs. A Gaussian transition probability density function (PDF) is utilized to describe the random time-varying property of the TPs, and the variance of the Gaussian PDF quantifies their uncertainty. Then, a variation-dependent controller is devised to guarantee the corresponding finite-time stabilization with an H∞ disturbance attenuation level for discrete-time MJSs with randomly time-varying transition probabilities.

Chapter 5 focuses on stochastic finite-time performance analysis and synthesis for MJSs from the perspective of internal stability and energy relationships. Passivity represents the energy-change attribute of the system: if the controller can make the system energy function decay at the desired rate, the control goal can be achieved. Therefore, this chapter studies finite-time passive control for MJSs. Firstly, a finite-time passive controller is proposed to guarantee that the
closed-loop system is finite-time bounded and meets the desired passive performance requirement under ideal conditions. Then, considering the more practical situation in which the controller's mode is not synchronized with the system mode, an asynchronous finite-time passive controller is designed for the more general hidden Markovian jump systems.

Chapter 6 combines finite-time performance with sliding mode control to achieve better performance indices for discrete-time MJSs. As a high-performance robust control strategy, sliding mode control has the advantages of insensitivity to parameter perturbations, good transient performance, fast response, and strong robustness. Therefore, this chapter focuses on the finite-time sliding mode control problem for MJSs with uncertainties. Firstly, the sliding mode function and sliding mode controller are designed such that the closed-loop discrete-time MJSs are stochastically finite-time stabilizable and fulfill a given H∞ performance index. Moreover, an appropriate asynchronous sliding mode controller is constructed, and conditions on the coefficient parameters are given and proved so that the closed-loop discrete-time MJSs can be driven onto the sliding surface. The transient performance of the discrete-time MJSs during the reaching and sliding phases is also investigated.

Chapters 2–6 consider the transient performance of MJSs over the entire frequency range, which leads to over-design and conservatism. To reduce the conservatism of controller design for MJSs from the perspectives of both the time domain and the frequency domain, Chap. 7 presents finite-time multiple-frequency control for MJSs by introducing frequency information into the controller design; the multiple-frequency control with finite-time performance is analyzed in both the time domain and the frequency domain.
Moreover, in order to overcome the effect of stochastic jumping among different modes on system performance, a derandomization method is introduced into the controller design by transforming the original stochastic multimodal systems into deterministic single-mode ones.

Chapter 8 concerns not only the transient behavior of MJSs in the finite-time domain but also the consistent state behavior of each subsystem. Therefore, a finite-time consensus protocol design approach for network-connected systems with random Markovian jump topologies, communication delays, and external disturbances is analyzed in this chapter. By relaxing the condition that the disagreement dynamics converge asymptotically to zero, the finite-time consensus protocol ensures that the disagreement dynamics of the interconnected networks are confined within the prescribed bound over the fixed time interval. By taking advantage of certain features of the Laplacian matrix in real Jordan form, a new model transformation method is proposed, which makes the designed control protocol more general.

Chapter 9 proposes higher-order moment stabilization in the finite-time domain for MJSs to guarantee that not only the mean and variance of the states remain within the desired range over the fixed time interval, but also the higher-order moments of the states are limited to the given bound. Firstly, the derandomization method is utilized to transform the multimode stochastic jumping systems into single-mode deterministic systems. Then, with the help of the cumulant generating function in
statistical theory, the higher-order moment components of the states are obtained by first-order Taylor expansion. Compared with existing control methods, higher-order moment stabilization improves the control effect by taking the higher-order moment information of the states into consideration.

Chapter 10 adopts model predictive control to optimize the finite-time performance of MJSs. Firstly, by means of online rolling optimization, the minimum energy consumption is realized and the required transient performance is satisfied simultaneously, under the assumption that the jumping time of MJSs follows an exponential distribution. Then, the proposed results are extended to semi-MJSs. The finite-time performance under the model predictive control scheme is analyzed in the situation where the TP matrix at each time depends on the history of the elapsed switching sequences. Compared with MJSs, semi-MJSs are characterized by a fixed TP matrix and a dwell-time probability density function matrix. Because the restriction on the probability distribution function is relaxed, finite-time model predictive control for semi-MJSs has broader applicability.

Chapter 11 sums up the results of the book and discusses possible research directions for future work.

References

1. Krasovskii, N.M., Lidskii, E.A.: Analytical design of controllers in systems with random attributes. Automat. Rem. Control 22, 1021–1025 (1961)
2. Ji, Y., Chizeck, H.J.: Controllability, stability and continuous-time Markovian jump linear quadratic control. IEEE Trans. Autom. Control 35(7), 777–788 (1990)
3. Florentin, J.J.: Optimal control of continuous-time Markovian stochastic systems. J. Electron. Control 10(6), 473–488 (1961)
4. Sworder, D.: Feedback control of a class of linear systems with jump parameters. IEEE Trans. Autom. Control 14(1), 9–14 (1969)
5. Wonham, W.M.: Random differential equations in control theory. Probab. Methods Appl. Math. 2, 131–212 (1971)
6. Feng, X., Loparo, K.A., Ji, Y., Chizeck, H.J.: Stochastic stability properties of jump linear systems. IEEE Trans. Autom. Control 37, 38–53 (1992)
7. Karan, M., Shi, P., Kaya, Y.: Transition probability bounds for the stochastic stability robustness of continuous and discrete-time Markovian jump linear systems. Automatica 42, 2159–2168 (2006)
8. Mariton, M.: On controllability of linear systems with stochastic jump parameters. IEEE Trans. Autom. Control 31(7), 680–683 (1986)
9. Shi, P., Boukas, E.K., Agarwal, R.: Robust control for Markovian jumping discrete-time systems. Int. J. Syst. Sci. 30(8), 787–797 (1999)
10. Mariton, M.: Robust jump linear quadratic control: a mode stabilizing solution. IEEE Trans. Autom. Control 30(11), 1145–1147 (1985)
11. Shi, P., Boukas, E.K., Agarwal, R.: Kalman filtering for continuous-time uncertain systems with Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
12. Sathananthan, S., Keel, L.H.: Optimal practical stabilization and controllability of systems with Markovian jumps. Nonlinear Anal. 54(6), 1011–1027 (2003)
13. He, S.P., Liu, F.: Exponential passive filtering for a class of nonlinear jump systems. J. Syst. Eng. Electron. 20(4), 829–837 (2009)
14. Yao, X.M., Guo, L.: Composite anti-disturbance control for Markovian jump nonlinear systems via disturbance observer. Automatica 49(8), 2538–2545 (2013)
15. Aliyu, M.D.S., Boukas, E.K.: H∞ control for Markovian jump nonlinear systems. In: Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, FL, USA, vol. 1, pp. 766–771 (1998)
16. Liu, Y., Wang, Z., Liang, J., Liu, X.: Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays. IEEE Trans. Neural Netw. 20(7), 1102–1116 (2009)
17. Zhang, Y., Xu, S., Zou, Y., Lu, J.: Delay-dependent robust stabilization for uncertain discrete-time fuzzy Markovian jump systems with mode-dependent time delays. Fuzzy Sets Syst. 164(1), 66–81 (2011)
18. Balasubramaniam, P., Lakshmanan, S.: Delay-range dependent stability criteria for neural networks with Markovian jumping parameters. Nonlinear Anal. Hybrid Syst. 3(4), 749–756 (2009)
19. Zhang, M., Shi, P., Ma, L.H., Cai, J.P., Su, H.Y.: Quantized feedback control of fuzzy Markovian jump systems. IEEE Trans. Cybern. 49(9), 3375–3384 (2019)
20. Wang, J.W., Wu, H.N., Guo, L.: Robust H∞ fuzzy control for uncertain nonlinear Markovian jump systems with time-varying delay. Fuzzy Sets Syst. 212, 41–61 (2013)
21. Sheng, L., Gao, M., Zhang, W.H.: Dissipative control for Markovian jump non-linear stochastic systems based on T-S fuzzy model. Int. J. Syst. Sci. 45(5), 1213–1224 (2014)
22. Wu, Z.G., Dong, S.L., Su, H.Y., Li, C.D.: Asynchronous dissipative control for fuzzy Markovian jump systems. IEEE Trans. Cybern. 48(8), 2426–2436 (2018)
23. Song, J., Niu, Y.G., Zou, Y.Y.: Asynchronous sliding mode control of Markovian jump systems with time-varying delays and partly accessible mode detection probabilities. Automatica 93, 33–41 (2018)
24. Tong, D.B., Zhu, Q.Y., Zhou, W.N.: Adaptive synchronization for stochastic T-S fuzzy neural networks with time-delay and Markovian jumping parameters.
Neurocomputing 17(14), 91–97 (2013)
25. Tao, J., Lu, R.Q., Su, H.Y., Shi, P., Wu, Z.G.: Asynchronous filtering of nonlinear Markovian jump systems with randomly occurred quantization via T-S fuzzy models. IEEE Trans. Fuzzy Syst. 26(4), 1866–1877 (2018)
26. He, S.P., Liu, F.: Fuzzy model-based fault detection for Markovian jump systems. Int. J. Robust Nonlinear Control 19(11), 1248–1266 (2009)
27. He, S.P., Liu, F.: Filtering-based robust fault detection of fuzzy jump systems. Fuzzy Sets Syst. 185(1), 95–110 (2011)
28. Cheng, P., Wang, J.C., He, S.P., Luan, X.L., Liu, F.: Observer-based asynchronous fault detection for conic-type nonlinear jumping systems and its application to separately excited DC motor. IEEE Trans. Circ. Syst.-I 67(3), 951–962 (2020)
29. Luan, X.L., Liu, F., Shi, P.: Neural network based stochastic optimal control for nonlinear Markovian jump systems. Int. J. Innov. Comput. Inf. Control 6(8), 3715–3728 (2010)
30. Luan, X.L., Liu, F.: Design of performance robustness for uncertain nonlinear time-delay systems via neural network. J. Syst. Eng. Electron. 18(4), 852–858 (2007)
31. Luan, X.L., Liu, F., Shi, P.: Passive output feedback control for non-linear systems with time delays. Proc. Inst. Mech. Eng. Part I-J Syst. Control Eng. 223(16), 737–743 (2009)
32. Yin, Y., Shi, P., Liu, F.: H∞ scheduling control on stochastic neutral systems subject to actuator nonlinearity. Int. J. Syst. Sci. 44(7), 1301–1311 (2013)
33. Luan, X.L., Liu, F., Shi, P.: H∞ filtering for nonlinear systems via neural networks. J. Frankl. Inst. 347, 1035–1046 (2010)
34. Luan, X.L., Liu, F.: Neural network-based H∞ filtering for nonlinear systems with time-delays. J. Syst. Eng. Electron. 19(1), 141–147 (2008)
35. Luan, X.L., He, S.P., Liu, F.: Neural network-based robust fault detection for nonlinear jump systems. Chaos Soliton Fract. 42(2), 760–766 (2009)
36.
Tong, D.B., Xu, C., Chen, Q.Y., Zhou, W.N., Xu, Y.H.: Sliding mode control for nonlinear stochastic systems with Markovian jumping parameters and mode-dependent time-varying delays. Nonlinear Dyn. 100, 1343–1358 (2020)
37. Liu, Q.D., Long, Y., Park, J.H., Li, T.S.: Neural network-based event-triggered fault detection for nonlinear Markovian jump system with frequency specifications. Nonlinear Dyn. 103, 2671–2687 (2021)
38. Chang, R., Fang, Y.M., Li, J.X., Liu, L.: Neural-network-based adaptive tracking control for Markovian jump nonlinear systems with unmodeled dynamics. Neurocomputing 179, 44–53 (2016)
39. Wang, Z., Yuan, J.P., Pan, Y.P., Che, D.J.: Adaptive neural control for high order Markovian jump nonlinear systems with unmodeled dynamics and dead zone inputs. Neurocomputing 247, 62–72 (2017)
40. Zhong, X.N., He, H.B., Zhang, H.G., Wang, Z.S.: Optimal control for unknown discrete-time nonlinear Markovian jump systems using adaptive dynamic programming. IEEE Trans. Neural Netw. Learn. Syst. 25(12), 2141–2155 (2014)
41. Zhong, X.N., He, H.B., Zhang, H.G., Wang, Z.S.: A neural network based online learning and control approach for Markovian jump systems. Neurocomputing 149(3), 116–123 (2015)
42. Jiang, H., Zhang, H.G., Luo, Y.H., Wang, J.Y.: Optimal tracking control for completely unknown nonlinear discrete-time Markovian jump systems using data-based reinforcement learning method. Neurocomputing 194(19), 176–182 (2016)
43. Bolzern, P., Colaneri, P., Nicolao, G.D.: Markovian jump linear systems with switching transition rates: mean square stability with dwell-time. Automatica 46, 1081–1088 (2010)
44. Bolzern, P., Colaneri, P., Nicolao, G.D.: Almost sure stability of Markovian jump linear systems with deterministic switching. IEEE Trans. Autom. Control 58(1), 209–213 (2013)
45. Song, Y., Yang, J., Yang, T.C.: Almost sure stability of switching Markovian jump linear systems. IEEE Trans. Autom. Control 61(9), 2638–2643 (2015)
46. Cong, S.: A result on almost sure stability of linear continuous-time Markovian switching systems. IEEE Trans. Autom. Control 63(7), 2226–2233 (2018)
47.
Hou, L.L., Zong, G.D., Zheng, W.X.: Exponential l2 − l∞ control for discrete-time switching Markovian jump linear systems. Circ. Syst. Signal Process. 32, 2745–2759 (2013)
48. Chen, L.J., Leng, Y., Guo, A.F.: H∞ control of a class of discrete-time Markovian jump linear systems with piecewise-constant TPs subject to average dwell time switching. J. Frankl. Inst. 349(6), 1989–2003 (2012)
49. Wang, J.M., Ma, S.P.: Resilient dynamic output feedback control for discrete-time descriptor switching Markovian jump systems and its applications. Nonlinear Dyn. 93, 2233–2247 (2018)
50. Qu, H.B., Hu, J., Song, Y., Yang, T.H.: Mean square stabilization of discrete-time switching Markovian jump linear systems. Optim. Control Appl. Methods 40(1), 141–151 (2019)
51. Wang, G.L., Xu, L.: Almost sure stability and stabilization of Markovian jump systems with stochastic switching. IEEE Trans. Autom. Control (2021). https://doi.org/10.1109/TAC.2021.3069705
52. Lian, J., Liu, J., Zhuang, Y.: Mean stability of positive Markovian jump linear systems with homogeneous and switching transition probabilities. IEEE Trans. Circ. Syst.-II 62(8), 801–805 (2015)
53. Bolzern, P., Colaneri, P., Nicolao, G.D.: Stabilization via switching of positive Markovian jump linear systems. In: Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA (2014)
54. Aberkane, S.: Bounded real lemma for nonhomogeneous Markovian jump linear systems. IEEE Trans. Autom. Control 58(3), 797–801 (2013)
55. Yin, Y.Y., Shi, P., Liu, F., Lim, C.C.: Robust control for nonhomogeneous Markovian jump processes: an application to DC motor device. J. Frankl. Inst. 351(6), 3322–3338 (2014)
56. Aberkane, S.: Stochastic stabilization of a class of nonhomogeneous Markovian jump linear systems. Syst. Control Lett. 60(3), 156–160 (2011)
57. Liu, Y.Q., Yin, Y.Y., Liu, F., Teo, K.L.: Constrained MPC design of nonlinear Markovian jump system with nonhomogeneous process. Nonlinear Anal. Hybrid Syst.
17, 1–9 (2015) 58. Liu, Y.Q., Liu, F., Toe, K.L.: Output feedback control of nonhomogeneous Markovian jump system with unit-energy disturbance. Circ. Syst. Signal Process. 33(9), 2793–2806 (2014)

References

15

59. Ding, Y.C., Liu, H., Shi, K.B.: H∞ state-feedback controller design for continuous-time nonhomogeneous Markovian jump systems. Optimal Control Appl. Methods 20(1), 133–144 (2016) 60. Yin, Y.Y., Shi, P., Liu, F., Toe, K.L.: Filtering for discrete-time non-homogeneous Markovian jump systems with uncertainties. Inf. Sci. 259, 118–127 (2014) 61. Yin, Y., Shi, P., Liu, F., Teo, K.L.: Fuzzy model-based robust H∞ filtering for a class of nonlinear nonhomogeneous Markov jump systems. Signal Process 93(9), 2381–2391 (2013) 62. Hou, T., Ma, H.J., Zhang, W.H.: Spectral tests for observability and detectability of periodic Markovian jump systems with nonhomogeneous Markovian chain. Automatica 63, 175–181 (2016) 63. Hou, T., Ma, H.J.: Stochastic H2 /H∞ control of discrete-time periodic Markovian jump systems with detectability. In: Proceedings of the 54th Annual Conference of the Society of Instrument and Control Engineers of Japan, Hangzhou, China, pp. 530–535 (2015) 64. Tao, J., Su, H., Lu, R., Wu, Z.G.: Dissipativity-based filtering of nonlinear periodic Markovian jump systems: the discrete-time case. Neurocomputing 171, 807–814 (2016) 65. Aberkane, S., Dragan, V.: H∞ filtering of periodic Markovian jump systems: application to filtering with communication constraints. Automatica 48(12), 3151–3156 (2012) 66. Zhang, L.X.: H∞ estimation for discrete-time piecewise homogeneous Markovian jump linear systems. Automatica 45(11), 2570–2576 (2009) 67. Wu, Z.G., Ju, H.P., Su, H., Chu, J.: Stochastic stability analysis of piecewise homogeneous Markovian jump neural networks with mixed time-delays. J. Frankl. Inst. 349(6), 2136–2150 (2012) 68. Luan, X.L., Shunyi, Z., Shi, P., Liu, F.: H∞ filtering for discrete-time Markovian jump systems with unknown transition probabilities. Int. J. Adapt. Control Signal Process 28(2), 138–148 (2014) 69. Luan, X.L., Shunyi, Z., Liu, F.: H∞ control for discrete-time Markovian jump systems with uncertain transition probabilities. IEEE Trans. Autom. 
Control. 58(6), 1566–1572 (2013) 70. Kamenkov, G.: On stability of motion over a finite interval of time. J. Appl. Math. Mech. 17, 529–540 (1953) 71. Lebedev, A.: On stability of motion during a given interval of time. J. Appl. Math. Mech. 18, 139–148 (1954) 72. Dorato, P.: Short-Time Stability in Linear Time-Varying Systems. Polytechnic Institute of Brooklyn Publishing, Brooklyn, New York (1961) 73. Liasalle, J., Lefechetz, S.: Stability by Lyapunov’s Direct Methods: With Applications. Academic Press Publishing, New York (1961) 74. Weiss, L., Infante, E.F.: On the stability of systems defined over a finite time interval. Natl. Acad. Sci. 54(1), 44–48 (1965) 75. Weiss, L., Infante, E.: Finite time stability under perturbing forces and on product spaces. IEEE Trans. Autom. Control 12(1), 54–59 (1967) 76. Michel, A.N., Wu, S.H.: Stability of discrete systems over a finite interval of time. Int. J. Control 9(6), 679–693 (1969) 77. Weiss, L.: On uniform and nonuniform finite-time stability. IEEE Trans. Autom. Control 14(3), 313–314 (1969) 78. Bhat, S.P., Bernstein, D.S.: Finite-time stability of continuous autonomous systems. SIAM J. Control Opim. 38(3), 751–766 (2000) 79. Chen, W., Jiao, L.C.: Finite-time stability theorem of stochastic nonlinear systems. Automatica 46(12), 2105–2108 (2010) 80. Garrard, W.L., McClamroch, N.H., Clark, L.G.: An approach to suboptimal feedback control of nonlinear systems. Int. J. Control 5(5), 425–435 (1967) 81. Van Mellaert, L., Dorato, P.: Nurmerical solution of an optimal control problem with a probability vriterion. IEEE Trans. Autom. Control 17(4), 543–546 (1972) 82. San Filippo, F.A., Dorato, P.: Short-time prarmeter optimization with flight control application. Automatica 10(4), 425–430 (1974)

16

1 Introduction

83. Gmjic, W.L.: Finite time stability in control system synthesis. In: Proceedings of the 4th IFAC Congress, Warsaw, Poland, pp. 21–31 (1969) 84. Haimo, V.T.: Finite-time control and optimization. SIAM J Control Opim. 24(4), 760–770 (1986) 85. Liu, L., Sun, J.: Finite-time stabilization of linear systems via impulsive control. Int. J. Control 8(6), 905–909 (2008) 86. Germain, G., Sophie, T., Jacques, B.: Finite-time stabilization of linear time-varying continuous systems. IEEE Trans. Autom. Control 4(2), 364–369 (2009) 87. Moulay, E., Perruquetti, W.: Finite time stability and stabilization of a class of continuous systems. J. Math. Anal. Appl. 323(2), 1430–1443 (2006) 88. Abdallah, C.T., Amato, F., Ariola, M.: Statistical learning methods in linear algebra and control problems: the examples of finite-time control of uncertain linear systems. Linear Algebra Appl. 351, 11–26 (2002) 89. Dorato, P., Famularo, D.: Robust finite-time stability design via linear matrix inequalities. In: Proceedings of the 36th IEEE Conference on Desicion and Control, San Diego, pp. 1305–1306 (1997) 90. Amato, F., Ariola, M., Dorato, P.: Robust finite-time stabilization of linear systems depending on parametric uncertainties. In: Proceedings of the 37th IEEE Conference on Decision and Control, Tampa, Florida, pp. 1207–1208 (1998) 91. Amato, F., Ariola, M., Dorato, P.: Finite-time control of linear systems subject to parametric uncertainties and disturbances. Automatica 37(9), 1459–1463 (2001) 92. Amato, F., Ariola, M.: Finite-time control of discrete-time linear system. IEEE Trans. Autom. Control 50(5), 724–729 (2005) 93. Amato, F., Ariola, M., Cosentino, C.: Finite-time stabilization via dynamic output feedback. Automatica 42(2), 337–342 (2006) 94. Hong, Y.G., Huang, J., Yu, Y.: On an output feedback finite-time stabilization problem. IEEE Trans. Autom. Control 46(2), 305–309 (2001) 95. 
Yu, S., Yu, X., Shirinzadeh, B.: Continuous finite-time control for robotic manipulators with terminal sliding mode. Automatica 41(11), 1957–1964 (2005) 96. Huang, X., Lin, W., Yang, B.: Global finite-time stabilization of a class of uncertain nonlinear systems. Automatica 41(5), 881–888 (2005) 97. Feng, J.E., Wu, Z., Sun, J.B.: Finite-time control of linear singular systems with parametric uncertainties and disturbances. Acta Automatica Sinica 31(4), 634–637 (2005) 98. Moulay, E., Dambrine, M., Yeganefax, N.: Finite time stability and stabilization of time-delay systems. Syst. Control Lett. 57(7), 561–566 (2008) 99. Zuo, Z., Li, H., Wang, Y.: New criterion for finite-time stability of linear discrete-time systems with time-varying delay. J. Frankl. Inst. 350(9), 2745–2756 (2013) 100. Stojanovic, S.B., Debeljkovic, D.L., Antic, D.S.: Robust finite-lime stability and stabilization of linear uncertain time-delay systems. Asian J. Control 15(5), 1548–1554 (2013) 101. Amato, F., Ariola, M., Cosentino, C.: Finite-time control of discrete-time linear systems: analysis and design conditions. Automatica 46(5), 919–924 (2010) 102. Amato, F., Ariola, M., Cosentino, C.: Necessary and sufficient conditions for finite-time stability of linear systems. In: Proceedings of the 2003 American Control Conference, Denver, Colorado, pp. 4452–4456 (2003) 103. Amato, F., Ariola, M., Cosentino, C.: Finite-time stability of linear time-varying systems: analysis and controller design. IEEE Trans. Autom. Control 55(4), 1003–1008 (2009) 104. Amato, F., Ambrosino, R., Ariola, M.: Robust finite-time stability of impulsive dynamical linear systems subject to norm-bounded uncertainties. Int. J. Robust Nonlinear Control 21(10), 1080–1092 (2011) 105. Amato, F., Ariola, M., Cosentino, C.: Finite-time stabilization of impulsive dynamical linear systems. Nonlinear Anal. Hybrid Syst. 5(1), 89–101 (2011) 106. 
Amato, F., Tommasig, D., Pironti, A.: Necessary and sufficient conditions for finite-time stability of impulsive dynamical linear systems. Automatica 49(8), 2546–2550 (2013)

References

17

107. Amato, F., Ambrosino, R., Cosentino, C.: Input-output finite time stabilization of linear systems. Automatica 46(9), 1558–1562 (2010) 108. Amato, F., Ambrosino, R., Cosentino, C.: Input-output finite-time stability of linear systems. In: Proceedings of the 17th Mediterranean Conference on Control and Automation, Makedonia, Palace, Thessaloniki, Greece, pp. 342–346 (2009) 109. Amato, F., Carannante, G., De Tommasi, G.: Input-output finite-time stabilization of a class of hybrid systems via static output feedback. Int. J. Control 84(6), 1055–1066 (2011) 110. Weiss, L.: Converse theorems for finite time stability. SIAM J. Appl. Math. 16(6), 1319–1324 (1968) 111. Ryan, E.P.: Finite-time stabilization of uncertain nonlinear planar systems. Dyn. Control 1(1), 83–94 (1991) 112. Hong, Y.G., Wang, J., Cheng, D.: Adaptive finite-time control of nonlinear systems with parametric uncertainty. IEEE Trans. Autom. Control 51(5), 858–862 (2006) 113. Nersesov, S.G., Nataxaj, C., Avis, J.M.: Design of finite time stabilizing controller for nonlinear dynamical systems. Int. J. Robust Nonlinear Control 19(8), 900–918 (2009) 114. Mastellone, S., Dorato, P., Abdallah, C.T.: Finite-time stability of discrete-time nonlinear systems: analysis and design. In: Proceedings of the 43rd IEEE Conference on Decision and Control, Atlantis, Paradise Island, Bahamas, pp. 2572–2577 (2004) 115. Yang, Y., Li, J., Chen, G.: Finite-time stability and stabilization of nonlinear stochastic hybrid systems. J. Math. Anal. Appl. 356(1), 338–345 (2009) 116. Chen, F., Xu, S., Zou, Y.: Finite-time boundedness and stabilization for a class of non-linear quadratic time-delay systems with disturbances. IET Control Theor. Appl. 7(13), 1683–1688 (2013) 117. Yin, J., Khoo, S., Man, Z.: Finite-time stability and instability of stochastic nonlinear systems. Automatica 47(12), 2671–2677 (2011) 118. Khoo, S., Yin, J.L., Man, Z.H.: Finite-time stabilization of stochastic nonlinear systems in strict-feedback form. 
Automatica 49(5), 1403–1410 (2013) 119. Amato, F., Cosentesto, C., Merola, A.: Sufficient conditions for finite-time stability and stabilization of nonlinear quadratic systems. IEEE Trans. Autom. Control 55(2), 430–434 (2010) 120. He, S., Liu, F.: Finite-time H∞ fuzzy control of nonlinear jump systems with time delays via dynamic observer-based state feedback. IEEE Trans. Fuzzy. Syst. 20(4), 605–614 (2012) 121. Luan, X.L., Liu, F., Shi, P.: Robust finite-time H∞ control for nonlinear jump systems via neural networks. Circ. Syst. Signal Process 29(3), 481–498 (2010) 122. Luan, X.L., Liu, F., Shi, P.: Neural-network-based finite-time H∞ control for extended Markovian jump nonlinear systems. Int. J. Adapt. Control Signal Process 24(7), 554–567 (2010) 123. Luan, X.L., Liu, F., Shi, P.: Finite-time filtering for nonlinear stochastic systems with partially known transition jump rates. IET Control Theor. Appl. 4(5), 735–745 (2010) 124. Elbsat, M.N., Yaz, E.E.: Robust and resilient finite-time bounded control of discrete-time uncertain nonlinear systems. Automatica 49(7), 2292–2296 (2013) 125. Zhang, Y., Shi, P., Nguang, S.K.: Robust finite-time fuzzy H∞ control for uncertain time-delay systems with stochastic jumps. J. Frankl. Inst. 351(8), 4211–4229 (2014) 126. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time H∞ control with average dwell-time constraint for time-delay Markovian jump systems governed by deterministic switches. IET Control Theor. Appl. 8(11), 968–977 (2014) 127. Chen, C., Gao, Y., Zhu, S.: Finite-time dissipative control for stochastic interval systems with time-delay and Markovian switching. Appl. Math. Comput. 310, 169–181 (2017) 128. Yan, Z., Zhang, W., Zhang, G.: Finite-time stability and stabilization of It oˆ stochastic systems with Markovian switching: mode-dependent parameters approach. IEEE Trans. Autom. Control 60(9), 2428–2433 (2015) 129. 
Lyu, X.X., Ai, Q.L., Yan, Z.G., He, S.P., Luan, X.L., Liu, F.: Finite-time asynchronous resilient observer design of a class of non-linear switched systems with time-delays and uncertainties. IET Control. Theor. Appl. 14(7), 952–963 (2020) 130. Nie, R., He, S.P., Luan, X.L.: Finite-time stabilization for a class of time-delayed Markovian jump systems with conic nonlinearities. IET Control Theor. Appl. 13(9), 1279–1283 (2019)

18

1 Introduction

131. Yan, Z., Song, Y., Park, J.H.: Finite-time stability and stabilization for stochastic Markov jump systems with mode-dependent time delays. ISA Trans. 68, 141–149 (2017) 132. Wen, J., Nguang, S.K., Shi, P.: Finite-time stabilization of Markovian jump delay systems–a switching control approach. Int. J. Robust Nonlinear Control 7(2), 298–318 (2016) 133. Chen, Y., Liu, Q., Lu, R., Xue, A.: Finite-time control of switched stochastic delayed systems. Neurocomputing 191, 374–379 (2016) 134. Ma, Y., Jia, X., Zhang, Q.: Robust observer-based finite-time H∞ control for discrete-time singular Markovian jumping system with time delay and actuator saturation. Nonlinear Anal. Hybrid. Syst. 28, 1–22 (2018) 135. Shen, H., Li, F., Yan, H.C., Karimi, H.R., Lam, H.K.: Finite-time event-triggered H∞ control for T-S fuzzy Markovian jump systems. IEEE Trans. Fuzzy Syst. 26(5), 3122–3135 (2018) 136. Luan, X.L., Min, Y., Ding, Z.T., Liu, F.: Stochastic given-time H∞ consensus over Markovian jump networks with disturbance constraint. Trans. Inst. Meas. Control 39(8), 1253–1261 (2017) 137. Cheng, J., Zhu, H., Zhong, S.M., Zeng, Y., Dong, X.C.: Finite-time H∞ control for a class of Markovian jump systems with mode-dependent time-varying delays via new Lyapunov functionals. ISA Trans. 52(6), 768–774 (2013) 138. Song, X.N., Wang, M., Ahn, C.K., Song, S.: Finite-time H∞ asynchronous control for nonlinear Markovian jump distributed parameter systems via quantized fuzzy output-feedback approach. IEEE Trans. Cybern. 50(9), 4098–4109 (2020) 139. Ma, Y.C., Jia, X.R., Zhang, Q.L.: Robust finite-time non-fragile memory H∞ control for discrete-time singular Markovian jump systems subject to actuator saturation. J. Frankl. Inst. 354(18), 8256–8282 (2017) 140. Cheng, P., He, S.P., Luan, X.L., Liu, F.: Finite-region asynchronous H∞ control for 2D Markovian jump systems. Automatica (2021). https://doi.org/10.1016/j.automatica.2021.109590 141. 
Ren, H.L., Zong, G.D., Karimi, H.R.: Asynchronous finite-time filtering of Markovian jump nonlinear systems and its applications. IEEE Trans. Syst. Man Cybern. Syst. 51(3), 1725–1734 (2019) 142. Li, S.Y., Ma, Y.: Finite-time dissipative control for singular Markovian jump systems via quantizing approach. Nonlinear Anal. Hybrid Syst. 27, 323–340 (2018) 143. Ren, C.C., He, S.P., Luan, X.L., Liu, F., Karimi, H.R.: Finite-time l2 -gain asynchronous control for continuous-time positive hidden Markovian jump systems via T-S fuzzy model approach. IEEE Trans. Cybern. 51(1), 77–87 (2021) 144. Yan, H.C., Tian, Y.X., Li, H.Y., Zhang, H., Li, Z.C.: Input-output finite-time mean square stabilization of nonlinear semi-Markovian jump systems. Automatica 104, 82–89 (2021) 145. Ju, Y.Y., Cheng, G.F., Ding, Z.S.: Stochastic H∞ finite-time control for linear neutral semiMarkovian jump systems under event-triggering scheme. J. Frankl. Inst. 358(2), 1529–1552 (2021) 146. Ren, H.L., Zong, G.D.: Robust input-output finite-time filtering for uncertain Markovian jump nonlinear systems with partially known transition probabilities. Int. J. Adapt. Control Signal. Process. 31(10), 1437–1455 (2017) 147. Zong, G.D., Yang, D., Hou, L.L., Wang, Q.Z.: Robust finite-time H∞ control for Markovian jump systems with partially known transition probabilities. J. Frankl. Inst. 350(6), 1562–1578 (2013) 148. Cheng, J., Park, J.H., Liu, Y.J., Liu, Z.J., Tang, L.M.: Finite-time H∞ fuzzy control of nonlinear Markovian jump delayed systems with partly uncertain transition descriptions. Fuzzy Sets Syst. 314, 99–115 (2017) 149. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time stabilization of switching Markovian jump systems with uncertain transition rates. Circ. Syst. Signal Process 34(12), 3741–3756 (2015) 150. Chen, F., Luan, X.L., Liu, F.: Observer based finite-time stabilization for discrete-time Markovian jump systems with Gaussian transition probabilities. Circ. Syst. 
Signal Process 33(10), 3019–3035 (2014) 151. Luan, X.L., Shi, P., Liu, F.: Finite-time stabilization for Markovian jump systems with Gaussian transition probabilities. IET Control Theor. Appl. 7(2), 298–304 (2013)

References

19

152. Wang, J., Ru, T.T., Xia, J.W., Shen, H., Sreeram, V.: Asynchronous event-triggered sliding mode control for semi-Markovian jump systems within a finite-time interval. IEEE Trans. Circuits Syst.-I 68(1), 458–468 (2021) 153. Zong, G.D., Ren, H.L.: Guaranteed cost finite-time control for semi-Markovian jump systems with event-triggered scheme and quantization input. Int. J. Robust. Nonlinear Control 29(15), 5251–5273 (2019) 154. Chen, J., Zhang, D., Qi, W.H., Cao, J.D., Shi, K.B.: Finite-time stabilization of T-S fuzzy semi-Markovian switching systems: a coupling memory sampled-data control approach. J. Frankl. Inst. 357(16), 11265–11280 (2020) 155. Wang, J.M., Ma, S.P., Zhang, C.H.: Finite-time H∞ filtering for nonlinear continuous-time singular semi-Markovian jump systems. Asian J. Control 21(2), 1017–1027 (2019) 156. Qi, W.H., Zong, G.D., Karimi, H.R.: Finite-time observer-based sliding mode control for quantized semi-Markovian switching systems with application. IEEE Trans. Ind. Electron 16(2), 1259–1271 (2020) 157. Song, J., Niu, Y.G., Zou, Y.Y.: A parameter-dependent sliding mode approach for finite-time bounded control of uncertain stochastic systems with randomly varying actuator faults and its application to a parallel active suspension system. IEEE Trans. Ind. Electron 65(10), 2455– 2461 (2018) 158. Cao, Z.R., Niu, Y.G., Zhao, H.J.: Finite-time sliding mode control of Markovian jump systems subject to actuator faults. Int. J. Control Autom. Syst. 16, 2282–2289 (2018) 159. Li, F.B., Du, C.L., Yang, C.H., Wu, L.G., Gui, W.H.: Finite-time asynchronous sliding mode control for Markovian jump systems. Automatica (2021). https://doi.org/10.1016/j. automatica.2019.108503 160. Ren, C.C., He, S.P.: Sliding mode control for a class of nonlinear positive Markovian jump systems with uncertainties in a finite-time interval. Int. J. Control Autom. Syst. 17(7), 1634– 1641 (2019)

Chapter 2

Finite-Time Stability and Stabilization for Discrete-Time Markovian Jump Systems

Abstract To relax the requirement of asymptotic stability for discrete-time Markovian jump systems (MJSs), this chapter investigates finite-time stability, finite-time boundedness, and finite-time stabilization for discrete-time linear and nonlinear MJSs. To deal with the nonlinear part of the MJSs, a neural network is utilized to approximate the nonlinearities by linear difference inclusions. The designed controller keeps the state trajectories of the systems within the pre-specified bounds over the given time interval, rather than forcing asymptotic convergence to the equilibrium point, despite the approximation error and external disturbance.

2.1 Introduction

In practical engineering applications, more attention has been paid to a system's transient behavior over a restricted time than to its steady-state performance in the infinite-time domain. To decrease the conservativeness of controller design, the finite-time stability theory was proposed by Dorato in 1961. Considering the impact of exogenous disturbances on the system, finite-time boundedness was further explored. Since then, a large number of results on finite-time stability, finite-time boundedness, and finite-time stabilization for linear deterministic systems have been reported [1–3]. Furthermore, by considering the influence of the transition probability on the control performance, stochastic finite-time stability, stochastic finite-time boundedness, and stochastic finite-time stabilization for stochastic Markovian jump systems (MJSs) have also been extensively investigated [4–6]. On the other hand, nonlinearities are a common feature of practical plants, and guaranteeing the transient performance of nonlinear MJSs over a finite-time span is a challenging issue. Assuming that the nonlinear terms satisfy Lipschitz conditions, the asynchronous finite-time filtering, finite-time dissipative filtering, and asynchronous output finite-time control problems have been solved [7–9]. Using the Takagi–Sugeno fuzzy model to represent nonlinear MJSs, asynchronous finite-time control, finite-time H∞ control, finite-time H∞ filtering, etc., have been addressed in [10–12]. In addition to the above methods of dealing with nonlinearities of MJSs, neural networks have also proved to be efficient tools for analyzing the transient performance of nonlinear MJSs [13, 14].

The primary purpose of this chapter is to investigate the FTS and finite-time stabilization problems for discrete-time linear and nonlinear MJSs. For nonlinear MJSs with time-delays and external disturbances, neural networks are utilized to represent the nonlinear terms through linear difference inclusions (LDIs) under a state-space representation. Mode-dependent finite-time controllers are designed to make the linear and nonlinear MJSs stochastic finite-time stabilizable. By constructing an appropriate stochastic Lyapunov function, sufficient conditions are derived in the form of linear matrix inequalities (LMIs).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain, Lecture Notes in Control and Information Sciences 492, https://doi.org/10.1007/978-3-031-22182-8_2

2.2 Preliminaries and Problem Formulation

Consider the following discrete-time linear MJS with uncertainties:

  x(k+1) = [A(r_k) + ΔA(r_k)] x(k) + [B_u(r_k) + ΔB_u(r_k)] u(k) + B_w(r_k) w(k),
  x(k) = x_0, r_k = r_0, k = 0,   (2.1)

where k ∈ {1, …, N}, N ∈ 𝒩, and 𝒩 is the set of positive integers. x(k) ∈ R^n is the vector of state variables, u(k) ∈ R^m is the controlled input, and w(k) ∈ l_2^q[0, +∞) is the exogenous disturbance with bounded energy E{Σ_{k=0}^{N} w^T(k) w(k)} < d^2. x_0 and r_0 are the initial state and mode, respectively, and r_k is a discrete-time, discrete-state Markovian chain taking values in M = {1, 2, …, i, …, M} with transition probabilities

  π_ij = Pr(r_k = j | r_{k−1} = i),   (2.2)

where π_ij is the transition probability from mode i to mode j, satisfying

  π_ij ≥ 0,  Σ_{j=1}^{M} π_ij = 1,  ∀i ∈ M.

For each possible value of r_k = i, it has

  A(r_k) = A_i, ΔA(r_k) = ΔA_i, B_u(r_k) = B_ui, ΔB_u(r_k) = ΔB_ui, B_w(r_k) = B_wi.   (2.3)

ΔA_i and ΔB_ui are the time-varying but norm-bounded uncertainties that satisfy

  [ΔA_i  ΔB_ui] = M_i F_i(k) [N_1i  N_2i].   (2.4)
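The mode process (2.2) can be simulated directly once a row-stochastic transition matrix is fixed. A minimal sketch (the two-mode matrix below is hypothetical illustrative data, not taken from the book):

```python
import numpy as np

# Hypothetical two-mode transition probability matrix; rows must satisfy
# pi_ij >= 0 and sum_j pi_ij = 1, as required by (2.2).
PI = np.array([[0.7, 0.3],
               [0.4, 0.6]])

def sample_markov_chain(PI, r0, N, rng):
    """Sample r_0, r_1, ..., r_N of the discrete-time Markov chain with matrix PI."""
    modes = [r0]
    for _ in range(N):
        # Draw the next mode from the row of the current mode.
        modes.append(int(rng.choice(len(PI), p=PI[modes[-1]])))
    return modes

rng = np.random.default_rng(0)
assert np.allclose(PI.sum(axis=1), 1.0)          # row-stochastic check
path = sample_markov_chain(PI, r0=0, N=1000, rng=rng)
freq0 = path.count(0) / len(path)                # empirical occupancy of mode 1
```

For this matrix the stationary distribution solves π = πΠ, giving (4/7, 3/7), so the empirical frequency of mode 0 over a long path should be close to 4/7.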


M_i, N_1i, and N_2i are known mode-dependent matrices with suitable dimensions, and F_i(k) is a time-varying unknown matrix function with Lebesgue-measurable elements satisfying F_i^T(k) F_i(k) ≤ I.

Concerning the uncertain linear MJS (2.1), the following state feedback controller is constructed:

  u(k) = K_i x(k),   (2.5)

where K_i ∈ R^{m×n} is the state feedback gain to be designed. Then the resulting closed-loop MJS is

  x(k+1) = (Ā_i + ΔĀ_i) x(k) + B_wi w(k),  x(k) = x_0, r_k = r_0, k = 0,   (2.6)

where Ā_i = A_i + B_ui K_i and ΔĀ_i = ΔA_i + ΔB_ui K_i. This chapter aims to investigate the finite-time control problem for the uncertain discrete-time linear MJS (2.1). By choosing a proper Lyapunov function, the main results will be presented in the form of LMIs. The following definitions formalize the general idea of the stochastic finite-time control problem for discrete-time MJSs.

Definition 2.1 (Stochastic FTB) For a given time constant N > 0 and positive scalars c_1, c_2, the uncertain discrete-time linear MJS (2.1) (setting u(k) = 0) is said to be stochastic finite-time bounded (FTB) with respect to (c_1 c_2 N R d), where c_1 < c_2 and R > 0, if

  E{x^T(0) R x(0)} ≤ c_1 ⇒ E{x^T(k) R x(k)} < c_2, ∀k ∈ {1, 2, …, N}.   (2.7)

Remark 2.1 In fact, stochastic finite-time stability in the presence of external disturbance leads to the concept of stochastic finite-time boundedness. Conversely, letting w(k) = 0, the concept in Definition 2.1 reduces to stochastic FTS. In other words, the uncertain discrete-time linear MJS (2.1) (setting w(k) = 0, u(k) = 0) is said to be stochastic finite-time stable (FTS) with respect to (c_1 c_2 N R) if Eq. (2.7) holds. Obviously, stochastic finite-time boundedness implies stochastic finite-time stability, but the converse may not hold.

Remark 2.2 Both stochastic finite-time boundedness and stochastic finite-time stability are open-loop concepts, which belong to the analysis of the open-loop MJS with u(k) = 0. If the designed controller of the form (2.5) renders the closed-loop system (2.6) stochastic FTB, the system is said to be stochastic finite-time stabilizable, which gives the concept of stochastic finite-time stabilization.

Remark 2.3 Note that Lyapunov asymptotic stability and stochastic finite-time stability are different concepts. Lyapunov asymptotic stability is well known to the control community, whereas an MJS is stochastic FTS if its state stays inside the desired bounds during a fixed time interval. Therefore, an MJS that is stochastic FTS may not be Lyapunov asymptotically stable. Conversely, a Lyapunov asymptotically stable MJS may fail to be stochastic FTS if its state exceeds the given bounds during the transient response.
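Definition 2.1 can be probed by Monte Carlo simulation: average x^T(k) R x(k) over sample paths and check that the bound c_2 is never exceeded on the horizon. A minimal sketch with hypothetical two-mode data (the matrices A, Bw, PI and the scalars below are illustrative, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stable two-mode data: x(k+1) = A_i x(k) + Bw_i w(k).
A  = [np.array([[0.5, 0.1], [0.0, 0.4]]),
      np.array([[0.3, -0.2], [0.1, 0.5]])]
Bw = [np.array([[0.1], [0.0]]),
      np.array([[0.0], [0.1]])]
PI = np.array([[0.8, 0.2], [0.3, 0.7]])
R  = np.eye(2)
c1, c2, N, d = 1.0, 2.0, 10, 0.5

def worst_weighted_norm(x0, r0):
    """One sample path; returns the max of x^T R x over k = 1..N."""
    x, r, worst = x0, r0, 0.0
    for _ in range(N):
        # Small disturbance so the total energy stays below d^2.
        w = rng.uniform(-d / np.sqrt(N + 1), d / np.sqrt(N + 1), size=(1,))
        x = A[r] @ x + Bw[r] @ w
        worst = max(worst, float(x @ R @ x))
        r = int(rng.choice(2, p=PI[r]))
    return worst

x0 = np.array([1.0, 0.0])                 # x0^T R x0 = c1
runs = [worst_weighted_norm(x0, 0) for _ in range(200)]
mean_worst = float(np.mean(runs))         # Monte Carlo estimate of E{x^T R x}
```

For this contractive example the averaged trajectories stay well inside c_2, consistent with (2.7); a simulation like this only falsifies FTB, it does not prove it.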


2.3 Stochastic Finite-Time Stabilization for Linear MJSs

This subsection first considers the stochastic finite-time stabilization problem for the uncertain discrete-time linear MJS (2.1). Before presenting the main results, the following lemmas will be helpful.

Lemma 2.1 [15] Assume that H, L, and S are real matrices with appropriate dimensions. Then for any positive scalar θ > 0 and any matrix U with U^T U ≤ I,

  H + L U S + S^T U^T L^T ≤ H + θ L L^T + θ^{−1} S^T S.   (2.8)
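Lemma 2.1 can be spot-checked numerically: the gap between the right- and left-hand sides of (2.8) must be positive semidefinite for any admissible U. A minimal sketch with random matrices (illustrative data only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random instance: check H + L U S + S^T U^T L^T <= H + theta*L L^T + theta^-1 S^T S
# in the semidefinite sense, for any U with U^T U <= I, as in Lemma 2.1.
n, theta = 4, 0.7
H = rng.standard_normal((n, n)); H = H + H.T     # any symmetric H
L = rng.standard_normal((n, n))
S = rng.standard_normal((n, n))
U = rng.standard_normal((n, n))
U = U / np.linalg.norm(U, 2)                     # scale so sigma_max(U) = 1, hence U^T U <= I

lhs = H + L @ U @ S + S.T @ U.T @ L.T
rhs = H + theta * (L @ L.T) + (S.T @ S) / theta
gap_eigs = np.linalg.eigvalsh(rhs - lhs)         # should all be >= 0 (up to rounding)
```

The gap equals (θ^{1/2}L^T − θ^{−1/2}US)^T(θ^{1/2}L^T − θ^{−1/2}US) plus θ^{−1}S^T(I − U^TU)S, which explains why every eigenvalue is nonnegative.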

Lemma 2.2 (Schur complement lemma) For a given symmetric matrix

  S = [ S_11  S_12
        S_12^T  S_22 ]  with S_11 ∈ R^{r×r},

the following statements are equivalent:

(a) S < 0;
(b) S_11 < 0, S_22 − S_12^T S_11^{−1} S_12 < 0;
(c) S_22 < 0, S_11 − S_12 S_22^{−1} S_12^T < 0.

Now, we focus our attention on presenting a solution to the stochastic finite-time controller design, which is derived through the following theorem.

Theorem 2.1 For a given scalar α ≥ 1, the uncertain discrete-time linear MJS (2.1) (setting u(k) = 0) is stochastic FTB with respect to (c_1 c_2 N R d) if there exist symmetric positive-definite matrices P_i ∈ R^{n×n}, i ∈ M, and Q ∈ R^{q×q} such that

  ⎡ Σ_{j=1}^{M} π_ij (A_i + ΔA_i)^T P_j (A_i + ΔA_i) − α P_i    Σ_{j=1}^{M} π_ij (A_i + ΔA_i)^T P_j B_wi ⎤
  ⎣ *                                                           Σ_{j=1}^{M} π_ij B_wi^T P_j B_wi − Q    ⎦ < 0.

  V(k) > λ_1 x^T(k) R x(k).   (2.32)
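The equivalence in Lemma 2.2 is what lets block conditions such as the one in Theorem 2.1 be reduced to smaller definiteness tests. A minimal numerical illustration (the block data below are hypothetical):

```python
import numpy as np

# Hypothetical symmetric block matrix S = [[S11, S12], [S12^T, S22]];
# Lemma 2.2 says S < 0 iff S11 < 0 and S22 - S12^T S11^{-1} S12 < 0.
S11 = -2.0 * np.eye(3)
S22 = -2.0 * np.eye(2)
S12 = 0.3 * np.ones((3, 2))
S = np.block([[S11, S12], [S12.T, S22]])

def is_neg_def(M):
    """Negative definiteness via the largest eigenvalue of a symmetric matrix."""
    return bool(np.linalg.eigvalsh(M).max() < 0)

full_test  = is_neg_def(S)                                        # statement (a)
schur_test = is_neg_def(S11) and is_neg_def(                      # statement (b)
    S22 - S12.T @ np.linalg.inv(S11) @ S12)
```

Both tests agree (here both are true), matching statements (a) and (b) of the lemma.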

According to conditions (2.31)–(2.32), one has

  x^T(k) R x(k) < (1 + α)^N (c_1^2 λ_2 + c_1^2 λ_5 + d^2 λ_3 + c_2^2 ρ_i^2 λ_4) / λ_1.   (2.33)

Condition (2.28) implies that for k ∈ {1, 2, …, N} the state trajectories do not exceed the upper bound c_2, i.e., E{x^T(k) R x(k)} < c_2. This completes the proof. □

Theorem 2.3 For given scalars α ≥ 0, h > 0, and ρ_i > 0, the closed-loop system (2.25) is stochastic finite-time stabilizable via the state feedback controller (2.5) with respect to (c_1 c_2 N R d) if there exist matrices X_i = X_i^T > 0, Y_i, H = H^T > 0, Q = Q^T > 0, and G = G^T > 0 such that

  ⎡ −(1+α)X_i   N_1i^T         0           0           X_i ⎤
  ⎢ N_1i        −M_5i + N_5i   M_3i        M_4i        0   ⎥
  ⎢ 0           M_3i^T         −(1+α)Q     0           0   ⎥ < 0.
  ⎢ 0           M_4i^T         0           −(1+α)G     0   ⎥
  ⎣ X_i         0              0           0           −H  ⎦

Definition 3.1 For given scalars 0 < c_1 < c_2 and R > 0, the closed-loop system (3.4) is said to be stochastic finite-time stabilizable with respect to (c_1 c_2 N R d) if

  x^T(k_1) R x(k_1) ≤ c_1^2 ⇒ x^T(k_2) R x(k_2) < c_2^2,  k_1 ∈ {−h, …, 0}, k_2 ∈ {1, 2, …, N}.   (3.5)

Definition 3.2 For given scalars 0 < c_1 < c_2, R > 0, and γ > 0, the closed-loop system (3.4) is said to be stochastic finite-time H∞ stabilizable with respect to (c_1 c_2 N R d γ) if the system (3.4) is stochastic finite-time stabilizable under the state feedback controller (3.3) and, under the zero-initial condition, the output z(k) satisfies

  E{Σ_{k=0}^{N} z^T(k) z(k)} < γ^2 E{Σ_{k=0}^{N} w^T(k) w(k)}.   (3.6)
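The finite-horizon gain bound (3.6) can be estimated empirically by accumulating the two energies along a simulated trajectory. A minimal scalar sketch (system data A, Bw, C and γ are hypothetical; for this example the l2-gain is 0.2/(1 − 0.5) = 0.4, so the energy ratio must stay below 0.16):

```python
import numpy as np

# Empirical check of the H-infinity bound (3.6) on one disturbance realization:
# sum z^T z should stay below gamma^2 * sum w^T w under zero initial condition.
rng = np.random.default_rng(4)
A, Bw, C = np.array([[0.5]]), np.array([[0.2]]), np.array([[1.0]])
gamma, N = 1.0, 50

x = np.zeros(1)                       # zero-initial condition, as in Definition 3.2
num = den = 0.0
for _ in range(N + 1):
    w = rng.standard_normal(1)
    z = C @ x                         # z(k) depends only on past disturbances
    num += float(z @ z)
    den += float(w @ w)
    x = A[0] * x + Bw[0] * w          # scalar state update

ratio = num / den                     # empirical estimate of the energy gain
```

One realization only falsifies (3.6); verifying it for all disturbances is what the LMI conditions of this section provide.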

Definition 3.3 For the switching signal σ_k and sampling times k > k_0, let L_a(k_0, k) denote the number of switchings of σ_k during the interval [k_0, k). If, for given scalars L_0 ≥ 0 and τ_a > 0, it holds that L_a(k_0, k) ≤ L_0 + (k − k_0)/τ_a, then τ_a and L_0 are referred to as the average dwell time and the chatter bound, respectively. As is common in the literature, L_0 = 0 is chosen to simplify the controller design.

Lemma 3.1 [12] For a symmetric positive-definite matrix M and a matrix N of compatible dimensions, the following condition is met:

  −N M^{−1} N^T ≤ M − N^T − N.   (3.7)
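The average dwell-time bound in Definition 3.3 is straightforward to check on a given switching signal: count the switches and compare with L_0 + (k − k_0)/τ_a. A minimal sketch on a hypothetical signal:

```python
# Hypothetical switching signal on k = 0..19 with three switches.
sigma = [0] * 5 + [1] * 4 + [0] * 6 + [2] * 5

# L_a(k0, k): number of switching instants of sigma in [k0, k).
switches = sum(1 for k in range(1, len(sigma)) if sigma[k] != sigma[k - 1])

L0, tau_a = 0, 4.0               # chatter bound L_0 = 0, as chosen in the text
k0, k = 0, len(sigma)
satisfied = switches <= L0 + (k - k0) / tau_a   # dwell-time bound of Definition 3.3
```

Here three switches occur over 20 samples, so the signal admits any average dwell time up to 20/3 ≈ 6.7; the bound with τ_a = 4 is satisfied.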

3 Finite-Time Stability and Stabilization for Switching . . .


3.3 Stochastic Finite-Time H∞ Control

In this section, sufficient conditions will be presented such that the free system (3.1) is stochastic FTB and the closed-loop system (3.4) is stochastic finite-time H∞ stabilizable.

Proposition 3.1 For given scalars δ ≥ 1, h > 0, and μ > 1, the system (3.4) is stochastic finite-time stabilizable with respect to (c_1 c_2 N R d) if there exist positive-definite matrices P_{α,i} > 0, P_{β,i} > 0, G_{α,i} > 0, i ∈ M, α, β ∈ S, and Q such that the following inequalities hold:

  ⎡ Ā_{α,i}^T P̄_{α,i} Ā_{α,i} − μP_{α,i} + Q   Ā_{α,i}^T P̄_{α,i} A_{dα,i}          Ā_{α,i}^T P̄_{α,i} B_{wα,i}          ⎤
  ⎢ *                                           A_{dα,i}^T P̄_{α,i} A_{dα,i} − Q     A_{dα,i}^T P̄_{α,i} B_{wα,i}         ⎥ < 0,   (3.8)
  ⎣ *                                           *                                   B_{wα,i}^T P̄_{α,i} B_{wα,i} − G_{α,i} ⎦

  τ_a > τ_a^* = N ln δ / (ln Λ_1 − ln Λ_2),   (3.11)

where

  P̄_{α,i} = Σ_{j=1}^{M} π_ij^α P_{α,j},  P̃_{α,i} = R^{−1/2} P_{α,i} R^{−1/2},  Q̃ = R^{−1/2} Q R^{−1/2},

  Λ_1 = c_2^2 min_{i∈M, α∈S} λ_min(P̃_{α,i}),

  Λ_2 = μ^N [ max_{i∈M, α∈S} λ_max(P̃_{α,i}) c_1^2 + λ_max(Q̃) h c_1^2 + max_{i∈M, α∈S} λ_max(G_{α,i}) d^2 ].
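Once the Lyapunov matrices are fixed, the minimum admissible average dwell time from (3.11) is a one-line computation. A minimal sketch with hypothetical numbers standing in for Λ_1 and Λ_2 (the formula requires Λ_1 > Λ_2):

```python
import math

# Hypothetical values plugged into (3.11): tau_a* = N ln(delta) / (ln L1 - ln L2).
N, delta = 10, 1.5
Lambda1 = 8.0   # stands for c2^2 * min lambda_min(P~)
Lambda2 = 2.0   # stands for mu^N * (max lambda_max(P~) c1^2 + lambda_max(Q~) h c1^2 + max lambda_max(G) d^2)

assert Lambda1 > Lambda2          # otherwise (3.11) gives no admissible dwell time
tau_a_star = N * math.log(delta) / (math.log(Lambda1) - math.log(Lambda2))
```

Any switching signal with average dwell time τ_a > τ_a^* (about 2.92 samples here) then satisfies the proposition; a larger δ or a larger Λ_2 (e.g., more disturbance energy) lengthens the required dwell time, in line with Remark 3.2.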

Proof Choose the following Lyapunov function:

  V_{α,i}(k) = x^T(k) P_{α,i} x(k) + Σ_{f=k−h}^{k−1} x^T(f) Q x(f).   (3.12)


Simple calculation shows that

  E{V_{α,j}(k+1)} − V_{α,i}(k)
  = x^T(k+1) P̄_{α,i} x(k+1) − x^T(k) P_{α,i} x(k) + x^T(k) Q x(k) − x^T(k−h) Q x(k−h)
  = x^T(k) [Ā_{α,i}^T P̄_{α,i} Ā_{α,i} − P_{α,i} + Q] x(k) + 2 x^T(k) Ā_{α,i}^T P̄_{α,i} A_{dα,i} x(k−h)
    + 2 x^T(k) Ā_{α,i}^T P̄_{α,i} B_{wα,i} w(k) + x^T(k−h) [A_{dα,i}^T P̄_{α,i} A_{dα,i} − Q] x(k−h)
    + 2 x^T(k−h) A_{dα,i}^T P̄_{α,i} B_{wα,i} w(k) + w^T(k) B_{wα,i}^T P̄_{α,i} B_{wα,i} w(k)
  = ζ^T(k) Ω_{α,i} ζ(k),   (3.13)

where ζ^T(k) = [x^T(k)  x^T(k−h)  w^T(k)] and

  Ω_{α,i} = ⎡ Ā_{α,i}^T P̄_{α,i} Ā_{α,i} − P_{α,i} + Q   Ā_{α,i}^T P̄_{α,i} A_{dα,i}         Ā_{α,i}^T P̄_{α,i} B_{wα,i}  ⎤
            ⎢ *                                          −Q + A_{dα,i}^T P̄_{α,i} A_{dα,i}   A_{dα,i}^T P̄_{α,i} B_{wα,i} ⎥ .
            ⎣ *                                          *                                  B_{wα,i}^T P̄_{α,i} B_{wα,i} ⎦

Combining formulas (3.8) and (3.13) with μ > 1, one has

  E{V_{α,j}(k+1)} < μ x^T(k) P_{α,i} x(k) + w^T(k) G_{α,i} w(k) + μ Σ_{f=k−h}^{k−1} x^T(f) Q x(f)
                 = μ V_{α,i}(k) + w^T(k) G_{α,i} w(k).   (3.14)

From the above inequality (3.14), one has

  E{V_{α,j}(k+1)} < μ V_{α,i}(k) + max_{i∈M, α∈S} λ_max(G_{α,i}) w^T(k) w(k).   (3.15)

Let k_l, k_{l−1}, k_{l−2}, … be the switching instants. Then, within the same switching mode, formula (3.15) gives

  V(r_{k_l}, σ_k, k) < μ V(r_{k_l}, σ_{k−1}, k−1) + max_{i∈M, α∈S} λ_max(G_{α,i}) w^T(k−1) w(k−1)
                    < μ^{k−k_l} V(r_{k_l}, σ_{k_l}, k_l) + max_{i∈M, α∈S} λ_max(G_{α,i}) Σ_{θ=k_l}^{k−1} μ^{k−θ−1} w^T(θ) w(θ).   (3.16)
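The step from the one-step bound (3.15) to the summed bound (3.16) is a standard geometric iteration, which can be verified on a scalar surrogate (μ, g, and the disturbance sequence below are hypothetical):

```python
# Iterate a scalar surrogate of (3.15), V(k+1) = mu*V(k) + g*w(k)^2, and check it
# matches the closed form of (3.16):
#   V(k) = mu^k V(0) + g * sum_theta mu^(k - theta - 1) w(theta)^2.
mu, g = 1.2, 0.5
w = [0.3, -0.1, 0.2, 0.0, 0.4]        # hypothetical disturbance samples
V0 = 1.0

V = V0
for wk in w:                          # forward recursion
    V = mu * V + g * wk * wk

k = len(w)
closed_form = mu ** k * V0 + g * sum(mu ** (k - theta - 1) * w[theta] ** 2
                                     for theta in range(k))
```

Taking the recursion with "<" instead of "=" gives exactly the inequality chain used in (3.16).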


According to the inequalities (3.9) and (3.12), it yields

  V(r_{k_l}, σ_{k_l}, k_l) = x^T(k_l) P̄(r_{k_l}, σ_{k_l}, k_l) x(k_l) + Σ_{f=k_l−h}^{k_l−1} x^T(f) Q x(f)
                          < δ x^T(k_l) P̄(r_{k_{l−1}}, σ_{k_l}, k_l) x(k_l) + Σ_{f=k_l−h}^{k_l−1} x^T(f) Q x(f).

According to condition (3.15), for the different switching modes, one has

  V(r_{k_{l−1}}, σ_{k_l}, k_l) = x^T(k_l) P̄(r_{k_{l−1}}, k_l) x(k_l) + Σ_{f=k_l−h}^{k_l−1} x^T(f) Q x(f)
                              < μ^{k_l−k_{l−1}} V(r_{k_{l−1}}, σ_{k_{l−1}}, k_{l−1}) + max_{i∈M, α∈S} λ_max(G_{α,i}) Σ_{θ=k_{l−1}}^{k_l−1} μ^{k_l−θ−1} w^T(θ) w(θ).   (3.17)

The above two formulas lead to

  V(r_{k_l}, σ_{k_l}, k_l) < δ x^T(k_l) P̄(r_{k_{l−1}}, σ_{k_l}, k_l) x(k_l) + Σ_{f=k_l−h}^{k_l−1} x^T(f) Q x(f)
    = δ [ V(r_{k_{l−1}}, σ_{k_l}, k_l) − Σ_{f=k_l−h}^{k_l−1} x^T(f) Q x(f) ] + Σ_{f=k_l−h}^{k_l−1} x^T(f) Q x(f)
    = δ V(r_{k_{l−1}}, σ_{k_l}, k_l) + (1 − δ) Σ_{f=k_l−h}^{k_l−1} x^T(f) Q x(f)
    < δ μ^{k_l−k_{l−1}} V(r_{k_{l−1}}, σ_{k_{l−1}}, k_{l−1}) + δ max_{i∈M, α∈S} λ_max(G_{α,i}) Σ_{θ=k_{l−1}}^{k_l−1} μ^{k_l−θ−1} w^T(θ) w(θ).   (3.18)

Substituting the inequality (3.18) into formula (3.17) with μ > 1 and δ ≥ 1, it yields

  V(r_{k_l}, σ_k, k) < μ^{k−k_l} V(r_{k_l}, σ_{k_l}, k_l) + max_{i∈M, α∈S} λ_max(G_{α,i}) Σ_{θ=k_l}^{k−1} μ^{k−θ−1} w^T(θ) w(θ)
    < δ μ^{k−k_{l−1}} V(r_{k_{l−1}}, σ_{k_{l−1}}, k_{l−1}) + δ max_{i∈M, α∈S} λ_max(G_{α,i}) Σ_{θ=k_{l−1}}^{k_l−1} μ^{k−θ−1} w^T(θ) w(θ)
      + max_{i∈M, α∈S} λ_max(G_{α,i}) Σ_{θ=k_l}^{k−1} μ^{k−θ−1} w^T(θ) w(θ).

Repeating this recursion over all L_a(k_0, k) switching instants in [k_0, k) and using L_a(k_0, k) ≤ (k − k_0)/τ_a bounds V(r_k, σ_k, k) in terms of δ^{N/τ_a}, μ^N, and the initial and disturbance terms. From condition (3.11), δ^{N/τ_a} Λ_2 < Λ_1, which means x^T(k) R x(k) < c_2^2.

This completes the proof. □

Remark 3.1 It should be noticed that the derived conditions for the discrete-time switching MJS (3.1) depend on both the stochastic Markovian chain $r_k$ and the deterministic switching signal $\sigma_k$. The influence of the jumping and switching signals on the Lyapunov function recursion from instant $k_{l-1}$ to $k_l$ is shown in (3.18). By applying the recursive expression (3.18) repeatedly, the essential relationship between the Lyapunov function values at $k$ and $k_0$ is obtained, which constitutes the foundation for guaranteeing the stochastic finite-time stabilization of the closed-loop system (3.4).

Remark 3.2 From the derived average dwell-time condition (3.11), it can be seen that a larger time delay leads to a larger minimum average dwell time $\tau_a^*$. Therefore, to compensate for the destabilizing effect of the time delay on system (3.1), the switching among different modes should be slower; in other words, the system should stay in each mode a little longer on average.

Based on the derived stochastic FTB conditions in Proposition 3.1, the next goal is to obtain sufficient conditions for stochastic finite-time H∞ controller design.

Proposition 3.2 For given scalars δ ≥ 1 and μ > 1, the closed-loop system (3.4) is stochastic finite-time stabilizable with H∞ disturbance rejection performance concerning (c₁ c₂ N R d γ), if there are positive-definite matrices $P_{\alpha,i} > 0$, $P_{\beta,i} > 0$ and $Q$ such that the following inequalities hold:
$$\Xi_{\alpha,i} = \begin{bmatrix}
\Theta_{11}+\bar C_{\alpha,i}^T\bar C_{\alpha,i} & \mu\bar A_{\alpha,i}^T\bar P_{\alpha,i}A_{d\alpha,i}+\bar C_{\alpha,i}^TC_{d\alpha,i} & \mu\bar A_{\alpha,i}^T\bar P_{\alpha,i}B_{w\alpha,i}+\bar C_{\alpha,i}^TD_{w\alpha,i}\\
* & \Theta_{12}+C_{d\alpha,i}^TC_{d\alpha,i} & \mu A_{d\alpha,i}^T\bar P_{\alpha,i}B_{w\alpha,i}+C_{d\alpha,i}^TD_{w\alpha,i}\\
* & * & \Theta_{13}+D_{w\alpha,i}^TD_{w\alpha,i}
\end{bmatrix} < 0\tag{3.23}$$
$$\bar P_{\alpha,i} \le \delta\bar P_{\beta,i}\tag{3.24}$$

3.3 Stochastic Finite-Time H∞ Control

47

$$\mu^N\gamma^2d^2 < c_2^2\min_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\min}(\tilde P_{\alpha,i})\tag{3.25}$$
with average dwell time satisfying
$$\tau_a > \frac{N\ln\delta}{\ln\big[c_2^2\min_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\min}(\tilde P_{\alpha,i})\big]-\ln\big[\mu^N\gamma^2d^2\big]} = \tau_a^*\tag{3.26}$$
where
$$\Theta_{11} = \mu\bar A_{\alpha,i}^T\bar P_{\alpha,i}\bar A_{\alpha,i}-\mu P_{\alpha,i}+\mu Q,\quad \Theta_{12} = -\mu Q+\mu A_{d\alpha,i}^T\bar P_{\alpha,i}A_{d\alpha,i},\quad \Theta_{13} = -\gamma^2I+\mu B_{w\alpha,i}^T\bar P_{\alpha,i}B_{w\alpha,i}.$$

Proof Inequality (3.23) can be rewritten as
$$\begin{bmatrix}\Theta_{11} & \mu\bar A_{\alpha,i}^T\bar P_{\alpha,i}A_{d\alpha,i} & \mu\bar A_{\alpha,i}^T\bar P_{\alpha,i}B_{w\alpha,i}\\ * & \Theta_{12} & \mu A_{d\alpha,i}^T\bar P_{\alpha,i}B_{w\alpha,i}\\ * & * & \Theta_{13}\end{bmatrix} + \begin{bmatrix}\bar C_{\alpha,i}^T\\ C_{d\alpha,i}^T\\ D_{w\alpha,i}^T\end{bmatrix}\begin{bmatrix}\bar C_{\alpha,i} & C_{d\alpha,i} & D_{w\alpha,i}\end{bmatrix} < 0.$$
Note that
$$\begin{bmatrix}\bar C_{\alpha,i}^T\\ C_{d\alpha,i}^T\\ D_{w\alpha,i}^T\end{bmatrix}\begin{bmatrix}\bar C_{\alpha,i} & C_{d\alpha,i} & D_{w\alpha,i}\end{bmatrix} \ge 0.$$
Then we can get
$$\begin{bmatrix}\Theta_{11} & \mu\bar A_{\alpha,i}^T\bar P_{\alpha,i}A_{d\alpha,i} & \mu\bar A_{\alpha,i}^T\bar P_{\alpha,i}B_{w\alpha,i}\\ * & \Theta_{12} & \mu A_{d\alpha,i}^T\bar P_{\alpha,i}B_{w\alpha,i}\\ * & * & \Theta_{13}\end{bmatrix} < 0.$$
The above inequality implies
$$\mu\mathbb E\{V_{\alpha,j}(k+1)\} < \mu V_{\alpha,i}(k)+\gamma^2w^T(k)w(k).$$
Based on the condition μ ≥ 1, it has
$$\mathbb E\{V_{\alpha,j}(k+1)\} < \mu V_{\alpha,i}(k)+\gamma^2w^T(k)w(k).\tag{3.27}$$
Following the same lines as the proof of Proposition 3.1, and under the zero initial condition $V(r_{k_0},\sigma_{k_0},k_0)=0$ (i.e., $c_1=0$), we have

3 Finite-Time Stability and Stabilization for Switching . . .

48

$$x^T(k)Rx(k) < c_2^2.$$
This completes the proof.

Theorem 3.1 For given scalars δ ≥ 1 and μ > 1, the closed-loop system (3.4) is stochastic finite-time stabilizable with H∞ performance concerning (c₁ c₂ N R d γ), if there exist positive-definite matrices $X_{\alpha,i}>0$, matrices $Y_{\alpha,i}$, $i\in\mathcal M$, $\alpha\in\mathcal S$, and $H>0$ such that
$$\begin{bmatrix}
-\mu X_{\alpha,i} & 0 & \tilde C_{\alpha,i}^T & \tilde L_{1,i}^T & X_{\alpha,i}\\
* & -\gamma^2 I & D_{w\alpha,i}^T & L_{3,i}^T & 0\\
* & * & -I + \mu^{-1}C_{d\alpha,i}HC_{d\alpha,i}^T & \mu^{-1}C_{d\alpha,i}HL_{2,i}^T & 0\\
* & * & * & -\mu^{-1}\mathcal X_{\alpha,i} + \mu^{-1}L_{2,i}HL_{2,i}^T & 0\\
* & * & * & * & -\mu^{-1}H
\end{bmatrix} < 0\tag{3.29}$$
$$\begin{bmatrix}
\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\beta,j} - 2\delta X_{\alpha,i} & \sqrt{\pi_{i1}^{\alpha}}X_{\alpha,i} & \cdots & \sqrt{\pi_{iM}^{\alpha}}X_{\alpha,i}\\
* & -X_{\alpha,1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -X_{\alpha,M}
\end{bmatrix} \le 0\tag{3.30}$$
$$\lambda\mu^N\gamma^2 d^2 < c_2^2\tag{3.31}$$
(3.31)

R 1/2 X α,i R 1/2 < λI

(3.32)

with average dwell time satisfying N ln δ = τa∗ ln c22 μ−N − ln λγ 2 d 2

τa > where

(3.33)

T C˜ α,i = (Cα,i X α,i + Du α,i Yα,i )T ,

L˜ T1,i =



   α T α ˜T πi1 Aα,i πi2 A˜ α,i · · · πiαM A˜ Tα,i ,

A˜ Tα,i = (Aα,i X α,i + Bu α,i Yα,i )T . A finite-time H∞ stabilizing controller satisfying the γ -disturbance rejection level −1 . can be built as K α,i = Yα,i X α,i Proof By Schur complement lemma, inequality (3.23) is equal to ⎡

$$\begin{bmatrix}
-\mu P_{\alpha,i}+\mu Q & 0 & 0 & L_{1,i}^T & \bar C_{\alpha,i}^T\\
* & -\mu Q & 0 & L_{2,i}^T & C_{d\alpha,i}^T\\
* & * & -\gamma^2I & L_{3,i}^T & D_{w\alpha,i}^T\\
* & * & * & -\mu^{-1}\mathcal P_{\alpha}^{-1} & 0\\
* & * & * & * & -I
\end{bmatrix} < 0,$$
where
$$L_{1,i}^T = \big[\sqrt{\pi_{i1}^{\alpha}}\bar A_{\alpha,i}^T\;\ \sqrt{\pi_{i2}^{\alpha}}\bar A_{\alpha,i}^T\;\cdots\;\sqrt{\pi_{iM}^{\alpha}}\bar A_{\alpha,i}^T\big],\quad
L_{2,i}^T = \big[\sqrt{\pi_{i1}^{\alpha}}A_{d\alpha,i}^T\;\cdots\;\sqrt{\pi_{iM}^{\alpha}}A_{d\alpha,i}^T\big],$$
$$L_{3,i}^T = \big[\sqrt{\pi_{i1}^{\alpha}}B_{w\alpha,i}^T\;\cdots\;\sqrt{\pi_{iM}^{\alpha}}B_{w\alpha,i}^T\big],\quad
\mathcal P_{\alpha} = \mathrm{diag}\{P_{\alpha,1},P_{\alpha,2},\ldots,P_{\alpha,M}\}.$$

Rearranging the rows and columns of the above condition leads to the following inequality:
$$\begin{bmatrix}
-\mu P_{\alpha,i}+\mu Q & 0 & \bar C_{\alpha,i}^T & L_{1,i}^T & 0\\
* & -\gamma^2I & D_{w\alpha,i}^T & L_{3,i}^T & 0\\
* & * & -I & 0 & C_{d\alpha,i}\\
* & * & * & -\mu^{-1}\mathcal P_{\alpha}^{-1} & L_{2,i}\\
* & * & * & * & -\mu Q
\end{bmatrix} < 0.$$
Using the Schur complement lemma on the above inequality again, one has
$$\begin{bmatrix}
-\mu P_{\alpha,i} & 0 & \bar C_{\alpha,i}^T & L_{1,i}^T & I\\
* & -\gamma^2I & D_{w\alpha,i}^T & L_{3,i}^T & 0\\
* & * & -I+\mu^{-1}C_{d\alpha,i}Q^{-1}C_{d\alpha,i}^T & \mu^{-1}C_{d\alpha,i}Q^{-1}L_{2,i}^T & 0\\
* & * & * & -\mu^{-1}\mathcal P_{\alpha}^{-1}+\mu^{-1}L_{2,i}Q^{-1}L_{2,i}^T & 0\\
* & * & * & * & -\mu^{-1}Q^{-1}
\end{bmatrix} < 0.$$
Implementing a congruence transformation to the above inequality by $\mathrm{diag}\{P_{\alpha,i}^{-1}, I, I, I, I\}$ and letting $X_{\alpha,i} = P_{\alpha,i}^{-1}$, $Y_{\alpha,i} = K_{\alpha,i}X_{\alpha,i}$, $H = Q^{-1}$, the LMI (3.29) can be derived. By using the Schur complement lemma, inequality (3.24) can be rewritten as



$$\begin{bmatrix}
-\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}P_{\beta,j} & \sqrt{\pi_{i1}^{\alpha}}I & \cdots & \sqrt{\pi_{iM}^{\alpha}}I\\
* & -P_{\alpha,1}^{-1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -P_{\alpha,M}^{-1}
\end{bmatrix} \le 0.\tag{3.34}$$

−1 ∗ −Pα,M

−1 Implementing a congruence to condition (3.34) by diag{Pα,i , I, . . . , I }, it leads to the following inequality:



M  α  β ⎢−μ j=1 πi j X α,i Pβ, j X α,i πi1 X α,i ⎢ ⎢ ∗ −X α,1 ⎢ ⎢ ⎣ ∗ ∗ ∗ ∗

⎤  α πi M X α,i ⎥ ⎥ ⎥ ··· 0 ⎥ ≤ 0. ⎥ .. .. ⎦ . . ∗ −X α,M ···

(3.35)

Based on Lemma 3.1, one has $-X_{\alpha,i}P_{\beta,j}X_{\alpha,i} \le X_{\beta,j} - 2X_{\alpha,i}$. Then, noting that $\sum_{j=1}^{M}\pi_{ij}^{\beta}=1$,
$$-\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\alpha,i}P_{\beta,j}X_{\alpha,i} \le \delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\beta,j} - 2\delta X_{\alpha,i}.\tag{3.36}$$
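The matrix bound from Lemma 3.1 follows from expanding $(X_{\alpha,i}-P_{\beta,j}^{-1})P_{\beta,j}(X_{\alpha,i}-P_{\beta,j}^{-1}) \succeq 0$, which gives exactly $P_{\beta,j}^{-1}-2X_{\alpha,i}+X_{\alpha,i}P_{\beta,j}X_{\alpha,i} \succeq 0$. A minimal numerical check, with arbitrary positive-definite test matrices standing in for $P_{\beta,j}$ and $X_{\alpha,i}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
G = rng.standard_normal((n, n)); P = G @ G.T + n * np.eye(n)   # stands in for P_{beta,j} > 0
H = rng.standard_normal((n, n)); X = H @ H.T + n * np.eye(n)   # stands in for X_{alpha,i} > 0

# Lemma 3.1: -X P X <= P^{-1} - 2X, i.e. P^{-1} - 2X + X P X >= 0,
# because (X - P^{-1}) P (X - P^{-1}) expands to exactly this matrix.
D = np.linalg.inv(P) - 2 * X + X @ P @ X
assert np.linalg.eigvalsh(D).min() >= -1e-8
```

Since $X_{\beta,j} = P_{\beta,j}^{-1}$ in the proof, this is precisely the inequality used to linearize (3.35).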


Formulas (3.35) and (3.36) lead to the linear matrix inequality (3.30) in Theorem 3.1. On the other hand, consider the following conditions:
$$\min_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\min}(\tilde P_{\alpha,i}) = \frac{1}{\max_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\max}(\tilde X_{\alpha,i})},\qquad \tilde X_{\alpha,i} = \tilde P_{\alpha,i}^{-1} = R^{1/2}X_{\alpha,i}R^{1/2}.$$
Inequality (3.25) then follows from
$$\mu^N\gamma^2d^2\max_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\max}(\tilde X_{\alpha,i}) < c_2^2.\tag{3.37}$$
Making the assumption $\max_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\max}(\tilde X_{\alpha,i}) < \lambda$, it implies $R^{1/2}X_{\alpha,i}R^{1/2} < \lambda I$. Then, inequality (3.37) is guaranteed by LMI (3.31) in Theorem 3.1. Meanwhile, condition (3.28) follows that:
$$x^T(k)Rx(k) < \delta^{N/\tau_a}\mu^N\gamma^2d^2\max_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\max}(\tilde X_{\alpha,i}) < \lambda\delta^{N/\tau_a}\mu^N\gamma^2d^2.$$
Conditions (3.31) and (3.33) guarantee that
$$\frac{c_2^2\mu^{-N}}{\lambda\gamma^2d^2} > 1,\qquad \delta^{N/\tau_a} < \frac{c_2^2\mu^{-N}}{\lambda\gamma^2d^2}.$$
Therefore, it has $x^T(k)Rx(k) < \lambda\delta^{N/\tau_a}\mu^N\gamma^2d^2 < c_2^2$. Thus the proof is completed.
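The Schur complement equivalence used repeatedly in these proofs — a symmetric block matrix is negative definite if and only if one diagonal block and the corresponding Schur complement are — is easy to sanity-check numerically. The sketch below (illustrative random values only, not the system data of this chapter) forces the Schur complement to be $-I$ and confirms the assembled block matrix is negative definite:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
B = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))
C = -(G @ G.T + np.eye(n))                  # (2,2) block, negative definite
A = B @ np.linalg.inv(C) @ B.T - np.eye(n)  # forces Schur complement A - B C^{-1} B^T = -I

M = np.block([[A, B], [B.T, C]])
# Schur complement lemma: M < 0  iff  C < 0 and A - B C^{-1} B^T < 0.
assert np.linalg.eigvalsh(M).max() < 0
```

This is the same mechanism that converts the quadratic condition (3.23) into the linear matrix inequalities of Theorem 3.1.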

3.4 Observer-Based Finite-Time H∞ Control

In this subsection, the following observer-based feedback controller will be designed to guarantee the stochastic finite-time H∞ disturbance rejection performance of the closed-loop system (3.4):
$$\begin{cases}
\bar x_{k+1} = A_{\alpha,i}\bar x_k + A_{d\alpha,i}\bar x_{k-d} + B_{u\alpha,i}u_k + H_{\alpha,i}(y_k - \bar y_k)\\
\bar y_k = E_{\alpha,i}\bar x_k + E_{d\alpha,i}\bar x_{k-d}\\
u_k = K_{\alpha,i}\bar x_k\\
\bar x_f = \eta_f,\quad f\in\{-d,\ldots,0\},\quad r(0) = r_0
\end{cases}\tag{3.38}$$
where $\bar x_k$ and $\bar y_k$ are the state and output estimates, and $K_{\alpha,i}$ and $H_{\alpha,i}$ are the controller and observer gains to be designed simultaneously.


Letting $e_k = x_k - \bar x_k$ and $\tilde x_k = [x_k^T\;\ e_k^T]^T$, the corresponding error closed-loop system follows as:
$$\begin{cases}
\tilde x_{k+1} = \tilde A_{\alpha,i}\tilde x_k + \tilde A_{d\alpha,i}\tilde x_{k-h} + \tilde B_{w\alpha,i}w_k\\
z_k = \tilde C_{\alpha,i}\tilde x_k + \tilde C_{d\alpha,i}\tilde x_{k-h} + D_{w\alpha,i}w_k\\
\tilde x_f = [\varphi_f^T\;\ (\varphi_f - \eta_f)^T]^T,\quad f\in\{-h,\ldots,0\},\quad r(0)=r_0
\end{cases}\tag{3.39}$$
where
$$\tilde A_{\alpha,i} = \begin{bmatrix}A_{\alpha,i}+B_{u\alpha,i}K_{\alpha,i} & -B_{u\alpha,i}K_{\alpha,i}\\ 0 & A_{\alpha,i}-H_{\alpha,i}E_{\alpha,i}\end{bmatrix},\quad
\tilde A_{d\alpha,i} = \begin{bmatrix}A_{d\alpha,i} & 0\\ 0 & A_{d\alpha,i}-H_{\alpha,i}E_{d\alpha,i}\end{bmatrix},$$
$$\tilde B_{w\alpha,i} = \begin{bmatrix}B_{wi}\\ B_{wi}\end{bmatrix},\quad
\tilde C_{\alpha,i} = [C_{\alpha,i}+D_{u\alpha,i}K_{\alpha,i}\;\ -D_{u\alpha,i}K_{\alpha,i}],\quad
\tilde C_{d\alpha,i} = [C_{d\alpha,i}\;\ 0].$$
The control problem to be dealt with in this subsection is to find suitable controller and observer gains $K_{\alpha,i}$ and $H_{\alpha,i}$ such that the error closed-loop system (3.39) is stochastic finite-time stabilizable with H∞ performance concerning (c₁ c₂ N R d γ). The following Proposition 3.3 gives sufficient conditions under which the error closed-loop system (3.39) is finite-time stabilizable with H∞ performance; it will be utilized in the solution of the controller and observer gains.

Proposition 3.3 For given scalars δ ≥ 1, h > 0, and μ > 1, the error closed-loop system (3.39) is finite-time stabilizable with H∞ performance concerning (c₁ c₂ N R d γ), if there exist positive-definite matrices $P_{\alpha,i} > 0$, $P_{\beta,i} > 0$ and $Q$ such that the following inequalities hold:
$$\begin{bmatrix}
\tilde\Theta_{11}+\tilde C_{\alpha,i}^T\tilde C_{\alpha,i} & \mu\tilde A_{\alpha,i}^T\bar P_{\alpha,i}\tilde A_{d\alpha,i}+\tilde C_{\alpha,i}^T\tilde C_{d\alpha,i} & \mu\tilde A_{\alpha,i}^T\bar P_{\alpha,i}\tilde B_{w\alpha,i}+\tilde C_{\alpha,i}^TD_{w\alpha,i}\\
* & \tilde\Theta_{12}+\tilde C_{d\alpha,i}^T\tilde C_{d\alpha,i} & \mu\tilde A_{d\alpha,i}^T\bar P_{\alpha,i}\tilde B_{w\alpha,i}+\tilde C_{d\alpha,i}^TD_{w\alpha,i}\\
* & * & \tilde\Theta_{13}+D_{w\alpha,i}^TD_{w\alpha,i}
\end{bmatrix} < 0\tag{3.40}$$
$$\bar P_{\alpha,i} \le \delta\bar P_{\beta,i}\tag{3.41}$$
$$\Phi_1 < \Phi_2\tag{3.42}$$
with the average dwell time satisfying
$$\tau_a > \frac{N\ln\delta}{\ln\Phi_2-\ln\Phi_1} = \tau_a^*\tag{3.43}$$
where $\bar P_{\alpha,i} = \sum_{j=1}^{M}\pi_{ij}^{\alpha}P_{\alpha,j}$, $\tilde P_{\alpha,i} = R^{-1/2}P_{\alpha,i}R^{-1/2}$,


$$\tilde\Theta_{11} = \mu\tilde A_{\alpha,i}^T\bar P_{\alpha,i}\tilde A_{\alpha,i} - \mu P_{\alpha,i} + \mu Q,\quad \tilde\Theta_{12} = -\mu Q + \mu\tilde A_{d\alpha,i}^T\bar P_{\alpha,i}\tilde A_{d\alpha,i},\quad \tilde\Theta_{13} = -\gamma^2I + \mu\tilde B_{w\alpha,i}^T\bar P_{\alpha,i}\tilde B_{w\alpha,i},$$
$$\Phi_1 = \mu^N\Big[\gamma^2d^2 + \max_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\max}(\tilde P_{\alpha,i})c_1 + \lambda_{\max}(\tilde Q)hc_1\Big],\qquad \Phi_2 = c_2^2\min_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\min}(\tilde P_{\alpha,i}).$$

Proof Choose the following Lyapunov function:
$$V_{\alpha,i}(k) = \tilde x^T(k)P_{\alpha,i}\tilde x(k) + \sum_{f=k-h}^{k-1}\tilde x^T(f)Q\tilde x(f).\tag{3.44}$$
Inequality (3.40) can be rewritten as
$$\begin{bmatrix}\tilde\Theta_{11} & \mu\tilde A_{\alpha,i}^T\bar P_{\alpha,i}\tilde A_{d\alpha,i} & \mu\tilde A_{\alpha,i}^T\bar P_{\alpha,i}\tilde B_{w\alpha,i}\\ * & \tilde\Theta_{12} & \mu\tilde A_{d\alpha,i}^T\bar P_{\alpha,i}\tilde B_{w\alpha,i}\\ * & * & \tilde\Theta_{13}\end{bmatrix} + \begin{bmatrix}\tilde C_{\alpha,i}^T\\ \tilde C_{d\alpha,i}^T\\ D_{w\alpha,i}^T\end{bmatrix}\begin{bmatrix}\tilde C_{\alpha,i} & \tilde C_{d\alpha,i} & D_{w\alpha,i}\end{bmatrix} < 0.$$
Because
$$\begin{bmatrix}\tilde C_{\alpha,i}^T\\ \tilde C_{d\alpha,i}^T\\ D_{w\alpha,i}^T\end{bmatrix}\begin{bmatrix}\tilde C_{\alpha,i} & \tilde C_{d\alpha,i} & D_{w\alpha,i}\end{bmatrix} \ge 0,$$
it is easy to get
$$\begin{bmatrix}\tilde\Theta_{11} & \mu\tilde A_{\alpha,i}^T\bar P_{\alpha,i}\tilde A_{d\alpha,i} & \mu\tilde A_{\alpha,i}^T\bar P_{\alpha,i}\tilde B_{w\alpha,i}\\ * & \tilde\Theta_{12} & \mu\tilde A_{d\alpha,i}^T\bar P_{\alpha,i}\tilde B_{w\alpha,i}\\ * & * & \tilde\Theta_{13}\end{bmatrix} < 0\tag{3.45}$$
which implies
$$\mu\mathbb E\{V_{\alpha,j}(k+1)\} < \mu V_{\alpha,i}(k)+\gamma^2w^T(k)w(k).$$
With the fact that μ ≥ 1, the following inequality holds:
$$\mathbb E\{V_{\alpha,j}(k+1)\} < \mu V_{\alpha,i}(k)+\gamma^2w^T(k)w(k).\tag{3.46}$$


Assuming that $k_l, k_{l-1}, k_{l-2},\ldots$ are the switching instants, then in the same mode without switching, formula (3.46) leads to
$$V(r_{k_l},\sigma_k,k) < \mu V(r_{k_l},\sigma_{k-1},k-1)+\gamma^2w^T(k-1)w(k-1) < \mu^{k-k_l}V(r_{k_l},\sigma_{k_l},k_l)+\sum_{\theta=k_l}^{k-1}\mu^{k-\theta-1}\gamma^2w^T(\theta)w(\theta).\tag{3.47}$$
At the switching instants, following the same arguments as in (3.17)–(3.18), one has
$$V(r_{k_l},\sigma_{k_l},k_l) < \delta\mu^{k_l-k_{l-1}}V(r_{k_{l-1}},\sigma_{k_{l-1}},k_{l-1})+\delta\sum_{\theta=k_{l-1}}^{k_l-1}\mu^{k_l-\theta-1}\gamma^2w^T(\theta)w(\theta).\tag{3.48}$$
With μ > 1, δ ≥ 1 and substituting Eq. (3.48) into Eq. (3.47), it has


$$\begin{aligned}
\mathbb E\{V(r_{k_l},\sigma_k,k)\} &< \mu^{k-k_l}V(r_{k_l},\sigma_{k_l},k_l) + \sum_{\theta=k_l}^{k-1}\mu^{k-\theta-1}\gamma^2w^T(\theta)w(\theta)\\
&< \delta\mu^{k-k_{l-1}}V(r_{k_{l-1}},\sigma_{k_{l-1}},k_{l-1}) + \delta\sum_{\theta=k_{l-1}}^{k_l-1}\mu^{k-\theta-1}\gamma^2w^T(\theta)w(\theta) + \sum_{\theta=k_l}^{k-1}\mu^{k-\theta-1}\gamma^2w^T(\theta)w(\theta)\\
&< \cdots < \delta^{N/\tau_a}\mu^{k-k_0}V(r_{k_0},\sigma_{k_0},k_0) + \delta^{N/\tau_a}\mu^N\gamma^2d^2.
\end{aligned}\tag{3.49}$$
Moreover,
$$V(r_k,\sigma_k,k) \ge \tilde x^T(k)P_{\alpha,i}\tilde x(k) \ge \min_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\min}(\tilde P_{\alpha,i})\,\tilde x^T(k)R\tilde x(k).\tag{3.50}$$

Under the zero initial condition $V(r_{k_0},\sigma_{k_0},k_0) = 0$, and from Eqs. (3.49) and (3.50), we can obtain
$$x^T(k)Rx(k) < c_2^2,$$
which completes the proof.

Theorem 3.2 The error closed-loop system (3.39) is stochastic finite-time stabilizable with H∞ performance via the observer-based controller (3.38) concerning given (c₁ c₂ N R d γ) with δ ≥ 1 and μ > 1, if there are positive-definite matrices $P_{\alpha,i} > 0$, $X_{\alpha,i} > 0$, matrices $Y_{\alpha,i}$ and $Q$ such that
$$\begin{bmatrix}
-\mu P_{\alpha,i}+\mu Q & 0 & 0 & L_{1,i}^T & \tilde C_{\alpha,i}^T\\
* & -\mu Q & 0 & L_{2,i}^T & \tilde C_{d\alpha,i}^T\\
* & * & -\gamma^2I & L_{3,i}^T & D_{w\alpha,i}^T\\
* & * & * & -\mu^{-1}\mathcal X_{\alpha,i} & 0\\
* & * & * & * & -I
\end{bmatrix} < 0\tag{3.51}$$
$$P_{\alpha,i}X_{\alpha,i} = I\tag{3.52}$$

$$\begin{bmatrix}
\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\beta,j}-2\delta X_{\alpha,i} & \sqrt{\pi_{i1}^{\alpha}}X_{\alpha,i} & \cdots & \sqrt{\pi_{iM}^{\alpha}}X_{\alpha,i}\\
* & -X_{\alpha,1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -X_{\alpha,M}
\end{bmatrix} \le 0\tag{3.53}$$
$$\mu^N\gamma^2d^2 < c_2^2\lambda\tag{3.54}$$
$$R^{1/2}X_{\alpha,i}R^{1/2} < \lambda I\tag{3.55}$$
with average dwell time satisfying
$$\tau_a > \frac{N\ln\delta}{\ln\big[c_2^2\lambda\big]-\ln\big[\mu^N\gamma^2d^2\big]} = \tau_a^*\tag{3.56}$$
where
$$L_{1,i}^T = \big[\sqrt{\pi_{i1}^{\alpha}}\tilde A_{\alpha,i}^T\;\ \sqrt{\pi_{i2}^{\alpha}}\tilde A_{\alpha,i}^T\;\cdots\;\sqrt{\pi_{iM}^{\alpha}}\tilde A_{\alpha,i}^T\big],\quad
L_{2,i}^T = \big[\sqrt{\pi_{i1}^{\alpha}}\tilde A_{d\alpha,i}^T\;\cdots\;\sqrt{\pi_{iM}^{\alpha}}\tilde A_{d\alpha,i}^T\big],$$
$$L_{3,i}^T = \big[\sqrt{\pi_{i1}^{\alpha}}\tilde B_{w\alpha,i}^T\;\cdots\;\sqrt{\pi_{iM}^{\alpha}}\tilde B_{w\alpha,i}^T\big],\quad
\mathcal X_{\alpha,i} = \mathrm{diag}\{X_{\alpha,1}, X_{\alpha,2},\ldots,X_{\alpha,M}\},$$
$$\tilde A_{\alpha,i} = \Lambda_1+\Lambda_2K_{\alpha,i}\Lambda_3+\Lambda_4H_{\alpha,i}\Lambda_5,\quad \tilde B_{w\alpha,i} = \Lambda_6B_{w\alpha,i},\quad \tilde A_{d\alpha,i} = \Lambda_7+\Lambda_8H_{\alpha,i}\Lambda_9,$$
$$\tilde C_{\alpha,i} = [C_{\alpha,i}+D_{u\alpha,i}K_{\alpha,i}\;\ 0] = \Lambda_{10}+D_{u\alpha,i}K_{\alpha,i}\Lambda_{11},\quad \tilde C_{d\alpha,i} = [C_{d\alpha,i}\;\ 0_{m\times n}] = \Lambda_{12},$$
$$\Lambda_1 = \begin{bmatrix}A_{\alpha,i}&0_{n\times n}\\0_{n\times n}&A_{\alpha,i}\end{bmatrix},\quad \Lambda_2 = \begin{bmatrix}B_{u\alpha,i}\\0_{n\times m}\end{bmatrix},\quad \Lambda_3 = [I_{n\times n}\;\ -I_{n\times n}],\quad \Lambda_4 = \begin{bmatrix}0_{n\times n}\\-I_{n\times n}\end{bmatrix},$$
$$\Lambda_5 = [0_{q\times n}\;\ E_{\alpha,i}],\quad \Lambda_6 = \begin{bmatrix}I_{n\times n}\\I_{n\times n}\end{bmatrix},\quad \Lambda_7 = \begin{bmatrix}A_{d\alpha,i}&0_{n\times n}\\0_{n\times n}&A_{d\alpha,i}\end{bmatrix},\quad \Lambda_8 = \begin{bmatrix}0_{n\times n}\\-I_{n\times n}\end{bmatrix},\quad \Lambda_9 = [0_{q\times n}\;\ E_{d\alpha,i}],$$


$$\Lambda_{10} = [C_{\alpha,i}\;\ 0_{m\times n}],\quad \Lambda_{11} = [I_{n\times n}\;\ 0_{n\times n}],\quad \Lambda_{12} = [C_{d\alpha,i}\;\ 0_{m\times n}].$$
Proof By the Schur complement lemma, inequality (3.41) is equivalent to
$$\begin{bmatrix}
-\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}P_{\beta,j} & \sqrt{\pi_{i1}^{\alpha}}I & \cdots & \sqrt{\pi_{iM}^{\alpha}}I\\
* & -P_{\alpha,1}^{-1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -P_{\alpha,M}^{-1}
\end{bmatrix} \le 0.$$
Implementing a congruence transformation to the above inequality by $\mathrm{diag}\{P_{\alpha,i}^{-1}, I,\ldots,I\}$, it leads to
$$\begin{bmatrix}
-\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\alpha,i}P_{\beta,j}X_{\alpha,i} & \sqrt{\pi_{i1}^{\alpha}}X_{\alpha,i} & \cdots & \sqrt{\pi_{iM}^{\alpha}}X_{\alpha,i}\\
* & -X_{\alpha,1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -X_{\alpha,M}
\end{bmatrix} \le 0.\tag{3.57}$$
Based on Lemma 3.1, it follows that $-X_{\alpha,i}P_{\beta,j}X_{\alpha,i} \le X_{\beta,j}-2X_{\alpha,i}$. Then, since $\sum_{j=1}^{M}\pi_{ij}^{\beta}=1$,
$$-\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\alpha,i}P_{\beta,j}X_{\alpha,i} \le \delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\beta,j}-2\delta X_{\alpha,i}.$$
Inequality (3.57) can thus be rewritten as
$$\begin{bmatrix}
\delta\sum_{j=1}^{M}\pi_{ij}^{\beta}X_{\beta,j}-2\delta X_{\alpha,i} & \sqrt{\pi_{i1}^{\alpha}}X_{\alpha,i} & \cdots & \sqrt{\pi_{iM}^{\alpha}}X_{\alpha,i}\\
* & -X_{\alpha,1} & \cdots & 0\\
* & * & \ddots & \vdots\\
* & * & * & -X_{\alpha,M}
\end{bmatrix} \le 0,$$
which is the linear matrix inequality (3.53) in Theorem 3.2. Similarly, with the Schur complement lemma, inequality (3.40) can be rewritten as
$$\begin{bmatrix}
-\mu P_{\alpha,i}+\mu Q & 0 & 0 & L_{1,i}^T & \tilde C_{\alpha,i}^T\\
* & -\mu Q & 0 & L_{2,i}^T & \tilde C_{d\alpha,i}^T\\
* & * & -\gamma^2I & L_{3,i}^T & D_{w\alpha,i}^T\\
* & * & * & -\mu^{-1}\mathcal P_{\alpha}^{-1} & 0\\
* & * & * & * & -I
\end{bmatrix} < 0\tag{3.58}$$

where $\mathcal P_{\alpha} = \mathrm{diag}\{P_{\alpha,1},P_{\alpha,2},\ldots,P_{\alpha,M}\}$. With some simple mathematical arrangements, we have
$$\tilde A_{\alpha,i} = \begin{bmatrix}A_{\alpha,i}+B_{u\alpha,i}K_{\alpha,i} & -B_{u\alpha,i}K_{\alpha,i}\\ 0 & A_{\alpha,i}-H_{\alpha,i}E_{\alpha,i}\end{bmatrix}
= \begin{bmatrix}A_{\alpha,i}&0\\0&A_{\alpha,i}\end{bmatrix}+\begin{bmatrix}B_{u\alpha,i}\\0\end{bmatrix}K_{\alpha,i}\big[I\;\ -I\big]+\begin{bmatrix}0\\-I\end{bmatrix}H_{\alpha,i}\big[0\;\ E_{\alpha,i}\big]
= \Lambda_1+\Lambda_2K_{\alpha,i}\Lambda_3+\Lambda_4H_{\alpha,i}\Lambda_5\tag{3.59}$$
$$\tilde B_{w\alpha,i} = \begin{bmatrix}B_{wi}\\B_{wi}\end{bmatrix} = \begin{bmatrix}I\\I\end{bmatrix}B_{wi} = \Lambda_6B_{wi}\tag{3.60}$$
$$\tilde A_{d\alpha,i} = \begin{bmatrix}A_{d\alpha,i}&0\\0&A_{d\alpha,i}-H_{\alpha,i}E_{d\alpha,i}\end{bmatrix}
= \begin{bmatrix}A_{d\alpha,i}&0\\0&A_{d\alpha,i}\end{bmatrix}+\begin{bmatrix}0\\-I\end{bmatrix}H_{\alpha,i}\big[0\;\ E_{d\alpha,i}\big]
= \Lambda_7+\Lambda_8H_{\alpha,i}\Lambda_9\tag{3.61}$$
$$\tilde C_{\alpha,i} = [C_{\alpha,i}+D_{u\alpha,i}K_{\alpha,i}\;\ 0] = [C_{\alpha,i}\;\ 0]+D_{u\alpha,i}K_{\alpha,i}[I\;\ 0] = \Lambda_{10}+D_{u\alpha,i}K_{\alpha,i}\Lambda_{11}\tag{3.62}$$
$$\tilde C_{d\alpha,i} = [C_{d\alpha,i}\;\ 0] = \Lambda_{12}.\tag{3.63}$$

Substituting Eqs. (3.59)–(3.63) into Eq. (3.58), and denoting $X_{\alpha,i} = P_{\alpha,i}^{-1}$, the LMIs (3.51) and (3.52) in Theorem 3.2 can be obtained. On the other hand, considering the following conditions:
$$\min_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\min}(\tilde P_{\alpha,i}) = \frac{1}{\max_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\max}(\tilde X_{\alpha,i})},\qquad \tilde X_{\alpha,i} = \tilde P_{\alpha,i}^{-1} = R^{1/2}X_{\alpha,i}R^{1/2}.$$
Assuming $\max_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\max}(\tilde X_{\alpha,i}) < \lambda$, which means $\min_{i\in\mathcal M,\alpha\in\mathcal S}\lambda_{\min}(\tilde P_{\alpha,i}) > 1/\lambda$, we can get condition (3.55). Moreover, defining $c_1 = 0$, conditions (3.54) and (3.56) are equivalent to conditions (3.42) and (3.43) in Proposition 3.3, and we can also get $x^T(k)Rx(k) < c_2^2$. Thus the proof is completed.




Remark 3.3 It should be noted that the derived conditions in Theorem 3.2 are not strict linear matrix inequalities because of the coupling between different matrix variables. Therefore, the non-convex feasibility problem in Theorem 3.2 is transformed into the following optimization problem involving LMI conditions:
$$\text{Minimize } \operatorname{trace}(P_{\alpha,i}X_{\alpha,i}) \quad \text{subject to (3.51), (3.53)–(3.55)},\ P_{\alpha,i}>0,\ X_{\alpha,i}>0,\ \text{and } \begin{bmatrix}P_{\alpha,i}&I\\ I&X_{\alpha,i}\end{bmatrix} > 0.\tag{3.64}$$
Then, for given (c₁ c₂ N R d γ) and scalars δ ≥ 1 and μ ≥ 1, the matrices $K_{\alpha,i}$ and $H_{\alpha,i}$ can be solved with the following algorithm [13]:

Algorithm 3.1
(1) Find an initial feasible solution $(P_{\alpha,i}^0, X_{\alpha,i}^0, Q^0)$ satisfying conditions (3.51), (3.53)–(3.55) and (3.64). Let k = 0.
(2) Solve the following LMI optimization problem: Minimize $\operatorname{trace}(P_{\alpha,i}^kX_{\alpha,i} + X_{\alpha,i}^kP_{\alpha,i})$ subject to (3.51), (3.53)–(3.55) and (3.64). Substitute the acquired matrices $(P_{\alpha,i}^k, X_{\alpha,i}^k, Q^k)$ into Eqs. (3.51) and (3.55).
(3) If condition (3.64) is satisfied with $|\operatorname{trace}(P_{\alpha,i}X_{\alpha,i}) - n| < \zeta$ for some sufficiently small scalar ζ > 0, output the feasible solution $(P_{\alpha,i}, X_{\alpha,i}, Q) = (P_{\alpha,i}^k, X_{\alpha,i}^k, Q^k)$ and stop.
(4) If k > N, give up and stop. Otherwise set k = k + 1, $(P_{\alpha,i}^{k+1}, X_{\alpha,i}^{k+1}, Q^{k+1}) = (P_{\alpha,i}^k, X_{\alpha,i}^k, Q^k)$, and go to step (2).

3.5 Simulation Analysis

Two examples are given in this subsection to illustrate the efficacy of the proposed finite-time H∞ controller design approach for stochastic MJSs supervised by deterministic switching. The first example shows that, for a system that is not finite-time stable, the closed-loop system is finite-time stabilizable with the designed controller. The second example is adapted from a typical economic system to demonstrate the practical applicability of the theoretical results.


Example 3.1 Consider the discrete-time switching MJS (3.1) with M = 2, S = 3, and the following parameters:
MJS 1:
$$A_{1,1} = \begin{bmatrix}1&-0.4\\2&0.81\end{bmatrix},\quad A_{1,2} = \begin{bmatrix}0&-0.26\\0.9&1.13\end{bmatrix},\quad A_{1,3} = \begin{bmatrix}0.2&-1.1\\0.2&0.4\end{bmatrix},$$
$$A_{d1,1} = \begin{bmatrix}-0.2&0.1\\0.2&0.15\end{bmatrix},\quad A_{d1,2} = \begin{bmatrix}-0.5&0\\0.3&-0.5\end{bmatrix},\quad A_{d1,3} = \begin{bmatrix}-0.3&0.2\\0.4&0.5\end{bmatrix},$$
$$B_{u1,1} = [1\;\ 1]^T,\quad B_{u1,2} = [1\;\ 1]^T,\quad B_{u1,3} = [2\;\ -1]^T,$$
$$B_{w1,1} = [-0.4\;\ 0.3]^T,\quad B_{w1,2} = [0.2\;\ 0.26]^T,\quad B_{w1,3} = [0.5\;\ -0.3]^T,$$
$$C_{1,1} = [0.5\;\ 0.4],\quad C_{1,2} = [0.1\;\ 0.3],\quad C_{1,3} = [0.4\;\ 0.3],$$
$$C_{d1,1} = [0.1\;\ -0.2],\quad C_{d1,2} = [-0.3\;\ 0.6],\quad C_{d1,3} = [0.07\;\ 0.4],$$
$$D_{u1,1} = 0.4,\ D_{u1,2} = 0.5,\ D_{u1,3} = 0.6,\qquad D_{w1,1} = 0.2,\ D_{w1,2} = 0.3,\ D_{w1,3} = 1.1.$$
The transition probability matrix is assumed to be known in advance as:
$$\Pi_1 = \begin{bmatrix}0.3&0.6&0.1\\0.2&0.5&0.3\\0.2&0.2&0.6\end{bmatrix}.$$
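Each row of a transition probability matrix such as Π₁ is a probability distribution, and the stochastic jumping signal $r_k$ is generated by sampling the row of the current mode at every step. A minimal sketch (modes indexed from 0 rather than 1; the matrix Π₂ of MJS 2 below is handled identically):

```python
import numpy as np

# Transition probability matrix Pi_1 of MJS 1 (Example 3.1).
Pi1 = np.array([[0.3, 0.6, 0.1],
                [0.2, 0.5, 0.3],
                [0.2, 0.2, 0.6]])

def sample_modes(Pi, r0, N, rng):
    """Sample a jumping-mode path r_0, ..., r_N (modes indexed from 0)."""
    r = [r0]
    for _ in range(N):
        r.append(int(rng.choice(len(Pi), p=Pi[r[-1]])))
    return r

rng = np.random.default_rng(0)
modes = sample_modes(Pi1, 0, 10, rng)      # horizon N = 10, initial mode r0 = 1 (index 0 here)
assert np.allclose(Pi1.sum(axis=1), 1.0)   # each row sums to one
assert len(modes) == 11 and all(0 <= m < 3 for m in modes)
```

Under the top-level deterministic switching signal, the active chain (Π₁ or Π₂) simply changes at the switching instants.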

MJS 2:
$$A_{2,1} = \begin{bmatrix}1&-0.05\\0.4&-0.72\end{bmatrix},\quad A_{2,2} = \begin{bmatrix}0.8&0.8\\0.6&1\end{bmatrix},\quad A_{2,3} = \begin{bmatrix}-0.3&0.6\\0.4&0.34\end{bmatrix},$$
$$A_{d2,1} = \begin{bmatrix}0.2&0\\0&0.5\end{bmatrix},\quad A_{d2,2} = \begin{bmatrix}0.8&-0.24\\-0.7&-0.32\end{bmatrix},\quad A_{d2,3} = \begin{bmatrix}0.6&0.4\\0.2&-0.3\end{bmatrix},$$
$$B_{u2,j} = B_{u1,j},\quad B_{w2,j} = B_{w1,j},\quad j = 1,2,3,$$
$$C_{2,1} = [0.2\;\ 0.1],\quad C_{2,2} = [0.3\;\ 0.4],\quad C_{2,3} = [-0.1\;\ 0.2],$$
$$C_{d2,1} = [0.03\;\ -0.05],\quad C_{d2,2} = [0.1\;\ 0.2],\quad C_{d2,3} = [-0.3\;\ 0.5],$$
$$D_{u2,j} = D_{u1,j},\quad j = 1,2,3,$$


Fig. 3.1 Jumping modes and switching signals

$$D_{w2,j} = D_{w1,j},\quad j = 1,2,3.$$
The transition probability matrix is assumed as follows:
$$\Pi_2 = \begin{bmatrix}0.5&0.2&0.3\\0.7&0.1&0.2\\0.2&0.6&0.2\end{bmatrix}.$$
Letting $c_2^2 = 2$, $R = I_2$, $h = 1$, $N = 10$, $d^2 = 0.5$, $\mu = 1.01$, $\gamma = 2.3$ and $\delta = 1.8$, and solving the LMIs in Theorem 3.1 with the MATLAB LMI toolbox, the state feedback controller gains are calculated as follows:
$$K_{1,1} = [-1.5722\;\ -0.2230],\quad K_{1,2} = [-0.6270\;\ -0.7086],\quad K_{1,3} = [-0.0939\;\ 0.5256],$$
$$K_{2,1} = [-0.6244\;\ 0.4674],\quad K_{2,2} = [-0.7615\;\ 0.8487],\quad K_{2,3} = [0.1251\;\ -0.3600],$$

with λ = 0.1998. The minimum average dwell time is obtained as $\tau_a^* = 3.0541$, so we take the average dwell time $\tau_a = 3.3333$. To show the efficacy of the calculated controller, we present the state trajectories of the free system (3.1) and of the closed-loop system (3.4) with the controller in the form of (3.3). The initial state, mode and disturbance signal are set as $x_0 = [0.2\;\ 0.3]^T$, $r_0 = 1$ and $w(k) = 0.6e^{-k}$, respectively. Figure 3.1 shows the stochastic jumping modes and the switching signals. The state trajectories of the free and controlled systems are displayed in Figs. 3.2 and 3.3. It can be observed that the state trajectory of the free system exceeds the given bound $c_2^2$; hence, the original free system is not stochastically FTB. With the designed controller (3.3), however, the state trajectory of the closed-loop system is restricted within the desired bound during the given time interval.
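A rough plausibility check of the reported gains can be made by propagating the closed-loop dynamics in a single fixed mode. The sketch below fixes (α, i) = (1, 1), assumes the state feedback law u(k) = K₁,₁x(k) (matching the closed-loop matrix $A + B_uK$ that appears in the error system (3.39)), and verifies that $x^T(k)Rx(k)$ stays below $c_2^2 = 2$; the full experiment would additionally sample the jumping and switching signals:

```python
import numpy as np

# Mode (1,1) data of Example 3.1.
A  = np.array([[1.0, -0.4], [2.0, 0.81]])
Ad = np.array([[-0.2, 0.1], [0.2, 0.15]])
Bu = np.array([[1.0], [1.0]])
Bw = np.array([[-0.4], [0.3]])
K  = np.array([[-1.5722, -0.2230]])        # computed gain K_{1,1}

Abar = A + Bu @ K                          # closed-loop matrix, assuming u(k) = K x(k)
x = [np.array([0.2, 0.3])] * 2             # x(-1) = x(0) = x0, delay h = 1
for k in range(10):                        # horizon N = 10
    w = 0.6 * np.exp(-k)                   # disturbance w(k) = 0.6 e^{-k}
    x.append(Abar @ x[-1] + Ad @ x[-2] + (Bw * w).ravel())

norms = [float(v @ v) for v in x]          # x^T R x with R = I_2
assert max(norms) < 2.0                    # trajectory stays inside the bound c_2^2 = 2
```

In this single-mode check the closed-loop matrix is strongly contractive, consistent with the bounded trajectory shown in Fig. 3.3.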


Fig. 3.2 Trajectory of the free system

Fig. 3.3 Trajectory of the closed-loop system

Example 3.2 The second example considers an economic system adapted from [14]. There are three operation modes representing different financial situations: normal, boom and slump. The Markovian chain governs the stochastic transitions among the three modes. Owing to variations in the domestic and international economic environment, macroeconomic regulation by the government is necessary. Government intervention changes the economic model, and can therefore be viewed as a top-level supervisor. The detailed parameters of the economic system are as follows:


MJS 1:
$$A_{1,1} = \begin{bmatrix}0&1\\-2.6&3.3\end{bmatrix},\quad A_{1,2} = \begin{bmatrix}0&1\\-4.4&4.6\end{bmatrix},\quad A_{1,3} = \begin{bmatrix}0&1\\5.4&-5.3\end{bmatrix},$$
$$B_{u1,1} = B_{u1,2} = B_{u1,3} = [0\;\ 1]^T,$$
$$B_{w1,1} = [0.3\;\ 0.24]^T,\quad B_{w1,2} = [-0.15\;\ -0.3]^T,\quad B_{w1,3} = [0.3\;\ 0.45]^T,$$
$$C_{1,1} = \begin{bmatrix}1.5477&-1.0976\\-1.0976&1.9145\\0&0\end{bmatrix},\quad C_{1,2} = \begin{bmatrix}3.1212&-0.5082\\-0.5082&2.7824\\0&0\end{bmatrix},\quad C_{1,3} = \begin{bmatrix}1.8385&-1.2728\\-1.2728&1.6971\\0&0\end{bmatrix},$$
$$D_{u1,1} = [0\;\ 0\;\ 1.6125]^T,\quad D_{u1,2} = [0\;\ 0\;\ 1.0794]^T,\quad D_{u1,3} = [0\;\ 0\;\ 1.0540]^T,$$
$$D_{w1,1} = [0.18\;\ 0.3\;\ 0.36]^T,\quad D_{w1,2} = [-0.27\;\ 0.3\;\ 0.18]^T,\quad D_{w1,3} = [0.3\;\ 0.12\;\ 0.3]^T.$$
The transition probabilities among the three modes are given as follows:
$$\Pi_1 = \begin{bmatrix}0.55&0.23&0.22\\0.36&0.35&0.29\\0.32&0.16&0.52\end{bmatrix}.$$
MJS 2:
$$A_{2,1} = \begin{bmatrix}0&1\\-2.4&3.1\end{bmatrix},\quad A_{2,2} = \begin{bmatrix}0&1\\-4.2&4.4\end{bmatrix},\quad A_{2,3} = \begin{bmatrix}0&1\\5.2&-5.1\end{bmatrix},\qquad B_{u2,j} = B_{u1,j},\ j = 1,2,3.$$
The other parameters of the system are the same as for MJS 1, and the transition probability matrix is
$$\Pi_2 = \begin{bmatrix}0.79&0.11&0.1\\0.27&0.53&0.2\\0.23&0.07&0.7\end{bmatrix}.$$
Letting $c_2^2 = 2$, $R = I_2$, $h = 0$, $N = 20$, $d^2 = 0.5$, $\mu = 1.02$, $\gamma = 2.1$ and $\delta = 1.5$, and solving the inequalities in Theorem 3.1, the controller gains are calculated as follows:
$$K_{1,1} = [2.2920\;\ -2.5385],\quad K_{1,2} = [4.4520\;\ -4.1158],\quad K_{1,3} = [-5.5034\;\ 5.6540],$$
$$K_{2,1} = [2.2190\;\ -2.4395],\quad K_{2,2} = [4.1318\;\ -3.8865],\quad K_{2,3} = [-5.1032\;\ 5.3619],$$


Fig. 3.4 Jumping modes and switching signals

Fig. 3.5 State trajectory of the free system

with λ = 0.1498 and $\tau_a^* = 3.8653$. Here we choose the average dwell time $\tau_a = 4$. The initial state, mode and external disturbance are taken as $x_0 = [0\;\ 0]^T$, $r_0 = 1$ and $w(k) = 0.5e^{-k}$, respectively. The following figures show the jumping modes and switching signals and the state trajectories of the free and closed-loop economic systems (Figs. 3.4, 3.5, and 3.6). From Fig. 3.6 we can see that the economic situation is kept within the desired bound by the designed controller.


Fig. 3.6 State trajectory of the closed-loop system

3.6 Conclusion

The finite-time boundedness, finite-time H∞ stabilization, and observer-based finite-time H∞ control problems for a class of stochastic discrete-time jumping systems governed by deterministic switching signals have been investigated in this chapter. With the help of the average dwell time method, and by allowing the stochastic Lyapunov energy function to increase at the switching instants, state feedback and observer-based finite-time H∞ controllers are designed such that the corresponding closed-loop system is finite-time stabilizable with a prescribed H∞ disturbance rejection level under average constraints on the dwell time between switching instants. The next chapter extends the finite-time controller design to discrete-time MJSs with non-homogeneous transition probabilities.

References

1. Feng, X., Loparo, K.A., Ji, Y., Chizeck, H.J.: Stochastic stability properties of jump linear systems. IEEE Trans. Autom. Control 37, 38–53 (1992)
2. Shi, P., Boukas, E.K., Agarwal, R.: Kalman filtering for continuous-time uncertain systems with Markovian jumping parameters. IEEE Trans. Autom. Control 44(8), 1592–1597 (1999)
3. Boukas, E.K.: Stochastic Switching Systems: Analysis and Design. Birkhauser Publishing, Berlin (2005)
4. Zhai, G.S., Hu, B., Yasuda, K., Michel, A.N.: Stability analysis of switched systems with stable and unstable subsystems: an average dwell time approach. Int. J. Syst. Sci. 32(8), 1055–1061 (2001)
5. Shi, P., Xia, Y., Liu, G., Rees, D.: On designing of sliding mode control for stochastic jump systems. IEEE Trans. Autom. Control 51(1), 97–103 (2006)
6. Luan, X.L., Zhao, S., Liu, F.: H∞ control for discrete-time Markovian jump systems with uncertain transition probabilities. IEEE Trans. Autom. Control 58(6), 1566–1572 (2013)


7. Bolzern, P., Colaneri, P., Nicolao, G.D.: Markovian jump linear systems with switching transition rates: mean square stability with dwell-time. Automatica 46, 1081–1088 (2010)
8. Hou, L.L., Zong, G.D., Zheng, W.X.: Exponential l₂-l∞ control for discrete-time switching Markovian jump linear systems. Circ. Syst. Signal Process 32(6), 2745–2759 (2013)
9. Bolzern, P., Colaneri, P., Nicolao, G.D.: Almost sure stability of Markovian jump linear systems with deterministic switching. IEEE Trans. Autom. Control 58(1), 209–213 (2013)
10. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time H∞ control with average dwell-time constraint for time-delay Markovian jump systems governed by deterministic switches. IET Control Theor. Appl. 8(11), 968–977 (2014)
11. Luan, X.L., Zhao, C.Z., Liu, F.: Finite-time stabilization of switching Markovian jump systems with uncertain transition rates. Circ. Syst. Signal Process 34(12), 3741–3756 (2015)
12. Yin, Y., Shi, P., Liu, F., Teo, K.L.: Observer-based H∞ control on nonhomogeneous discrete-time Markov jump systems. J. Dyn. Syst. Meas. Control 135(4), 1–8 (2013)
13. He, Y., Wu, M., Liu, G.P., She, J.H.: Output feedback stabilization for a discrete-time system with a time-varying delay. IEEE Trans. Autom. Control 53(10), 2372–2377 (2008)
14. Costa, O., Assumpção, E.O., Boukas, E.K., Marques, R.P.: Constrained quadratic state feedback control of discrete-time Markovian jump linear systems. Automatica 35(4), 617–626 (1999)

Chapter 4

Finite-Time Stability and Stabilization for Non-homogeneous Markovian Jump Systems

Abstract Considering the practical case that the transition probabilities jumping among different modes are randomly time-varying, the finite-time stabilization, finite-time H∞ control and observer-based state feedback finite-time control problems for discrete-time Markovian jump systems with non-homogeneous transition probabilities are investigated in this chapter. A Gaussian transition probability density function is utilized to describe the random time-varying property of the transition probabilities. Then, a variation-dependent controller is devised to guarantee finite-time stabilization of the corresponding closed-loop systems under random time-varying transition probabilities.

4.1 Introduction

Markovian jump systems (MJSs) are a set of dynamic systems with random jumps among finite subsystems. As essential system parameters, the jump transition probabilities (TPs) determine which mode the system is in at the current moment. Under the hypothesis that the TPs are accurately known in advance, many problems for this kind of MJSs with homogeneous TPs have been well studied [1–3]. However, the assumption that the TPs are exactly known may lead to instability or deterioration of system performance, so more practical MJSs with uncertain TPs have been investigated. Similar to the uncertainties in the system matrices, one frequently used form of uncertainty is the polytopic description, where the TP matrix is supposed to lie in a convex polytope with known vertices [4–6]. The other type is specified in an element-wise style: the components of the TP matrix are estimated in practice, and error bounds are provided in the meantime. Robust methodologies can then be employed to tackle the norm-bounded or polytopic uncertainties assumed in the TPs [6, 7]. Considering the more practical case that some components of the TP matrix are difficult to obtain, MJSs with partially unknown TPs have been considered in [8, 9]. Different from the uncertain TPs considered in [4–7], the notion of partially unknown TPs does not require any information about the unknown components.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
X. Luan et al., Robust Control for Discrete-Time Markovian Jump Systems in the Finite-Time Domain, Lecture Notes in Control and Information Sciences 492, https://doi.org/10.1007/978-3-031-22182-8_4

However, it is essential to point out that the more elements of the TP matrix are unknown, the more conservative the controller or filter design becomes. In the extreme circumstance that all the elements of the TP matrix are unavailable, the MJSs become equivalent to switching systems.

In this chapter, the random time-varying TPs are considered from the stochastic viewpoint. The Gaussian probability density function (PDF) is employed to describe the probability for the TPs to take a provided constant value. In this way, the random time-varying TPs can be characterized by a Gaussian PDF, whose variance quantizes the uncertainty of the TPs. Then the finite-time stabilization, finite-time H∞ control and observer-based state feedback finite-time control problems are presented to deal with the transient performance analysis of discrete-time non-homogeneous MJSs.

4.2 Preliminaries and Problem Formulation

Consider the discrete-time MJS with the same structure as in the preceding chapters:
$$\begin{cases}x(k+1) = A(r_k)x(k) + B_u(r_k)u(k) + B_w(r_k)w(k)\\ x(k) = x_0,\ r_k = r_0,\ k = 0\end{cases}\tag{4.1}$$
where the state variable, the control input, and the exogenous disturbance are the same as those defined in the preceding chapters. The system matrices $A(r_k)$, $B_u(r_k)$ and $B_w(r_k)$ are denoted as $A_i$, $B_{ui}$, and $B_{wi}$, respectively. $r_k$ is a time-varying Markovian chain taking values in $\mathcal M = \{1,2,\ldots,M\}$ with transition probabilities $\pi_{ij}^{(\xi_k)} = \Pr(r_k = j\,|\,r_{k-1} = i, k)$, where $\pi_{ij}^{(\xi_k)}$ is the transition probability from mode i to mode j satisfying $\pi_{ij}^{(\xi_k)} \ge 0$, $\sum_{j=1}^{M}\pi_{ij}^{(\xi_k)} = 1$, $\forall i,j\in\mathcal M$.

In this chapter, the random time-varying TPs are characterized by a Gaussian stochastic process $\{\xi_k, k\in\mathbb N\}$. The pruned Gaussian PDF of the random variable $\pi_{ij}^{(\xi_k)}$ can be denoted as follows:
$$p\big(\pi_{ij}^{(\xi_k)}\big) = \frac{\dfrac{1}{\sqrt{\sigma_{ij}}}\,f\Big(\dfrac{\pi_{ij}^{(\xi_k)}-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)}{F\Big(\dfrac{1-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)-F\Big(\dfrac{0-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)}\tag{4.2}$$
where $f(\cdot)$ is the PDF of the standard normal distribution, $F(\cdot)$ is the cumulative distribution function of $f(\cdot)$, and $\mu_{ij}$ and $\sigma_{ij}$ are the means and variances of the Gaussian PDFs, respectively. Therefore, the matrix of transition probabilities can be expressed as:



Fig. 4.1 Gaussian PDF for (0.5, 0.2), (0.5, 0.1) and (0.5, 0.05)

$$N = \begin{bmatrix}
n(\mu_{11},\sigma_{11}) & n(\mu_{12},\sigma_{12}) & \cdots & n(\mu_{1M},\sigma_{1M})\\
n(\mu_{21},\sigma_{21}) & n(\mu_{22},\sigma_{22}) & \cdots & n(\mu_{2M},\sigma_{2M})\\
\vdots & \vdots & \ddots & \vdots\\
n(\mu_{M1},\sigma_{M1}) & n(\mu_{M2},\sigma_{M2}) & \cdots & n(\mu_{MM},\sigma_{MM})
\end{bmatrix}\tag{4.3}$$

where $n(\mu_{ij},\sigma_{ij}) = p(\pi_{ij}^{(\xi_k)})$ denotes the pruned Gaussian PDF of $\pi_{ij}^{(\xi_k)}$. To demonstrate explicitly how the pruned Gaussian PDF represents the probability for a TP to take a provided constant value, the distribution functions for different variances with the same mean are displayed in Fig. 4.1. It can be clearly seen that larger variances yield flatter graphs, with much of the probability mass far from the mean of the Gaussian PDF, whereas smaller variances yield sharper graphs, with most of the probability mass close to the mean. Based on the above analysis, the expectation of the random variable $\pi_{ij}^{(\xi_k)}$ can be expressed as follows:


$$\hat\pi_{ij}^{(\xi_k)} = \mathbb E\big[\pi_{ij}^{(\xi_k)}\big] = \int_0^1 \pi_{ij}^{(\xi_k)}\,p\big(\pi_{ij}^{(\xi_k)}\big)\,d\pi_{ij}^{(\xi_k)} = \mu_{ij} + \frac{f\Big(\dfrac{0-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)-f\Big(\dfrac{1-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)}{F\Big(\dfrac{1-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)-F\Big(\dfrac{0-\mu_{ij}}{\sqrt{\sigma_{ij}}}\Big)}\sqrt{\sigma_{ij}}.\tag{4.4}$$

As a consequence, the expected TP matrix can be denoted as follows:

$$
\hat\Pi = \begin{bmatrix}
\mathrm{E}(\pi_{11}^{(\xi_k)}) & \mathrm{E}(\pi_{12}^{(\xi_k)}) & \cdots & \mathrm{E}(\pi_{1M}^{(\xi_k)}) \\
\mathrm{E}(\pi_{21}^{(\xi_k)}) & \mathrm{E}(\pi_{22}^{(\xi_k)}) & \cdots & \mathrm{E}(\pi_{2M}^{(\xi_k)}) \\
\vdots & \vdots & \ddots & \vdots \\
\mathrm{E}(\pi_{M1}^{(\xi_k)}) & \mathrm{E}(\pi_{M2}^{(\xi_k)}) & \cdots & \mathrm{E}(\pi_{MM}^{(\xi_k)})
\end{bmatrix}
\tag{4.5}
$$

with

$$
\sum_{j=1}^{M} \mathrm{E}\left(\pi_{ij}^{(\xi_k)}\right) = 1, \quad
\mathrm{E}\left(\pi_{ij}^{(\xi_k)}\right) \ge 0, \quad 1 \le i, j \le M.
$$
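As a numerical cross-check of (4.2) and (4.4), the closed-form truncated-Gaussian mean can be evaluated with the standard library alone. The helper names below are our own; `var` plays the role of $\sigma_{ij}$:

```python
import math

def f(x):
    """Standard normal PDF."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def F(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_tp(mu, var):
    """Expected TP per Eq. (4.4): mean of Gaussian(mu, var) truncated to [0, 1]."""
    s = math.sqrt(var)
    a, b = (0.0 - mu) / s, (1.0 - mu) / s
    return mu + (f(a) - f(b)) / (F(b) - F(a)) * s

print(expected_tp(0.5, 0.2))   # symmetric truncation keeps the mean at 0.5
print(expected_tp(0.9, 0.2))   # an off-centre mean is pulled toward the interior
```

For $\mu = 0.5$ the truncation interval $[0, 1]$ is symmetric about the mean, so the expectation is unchanged; this matches the $(0.5, \sigma)$ cases plotted in Fig. 4.1. Note also that the row-sum condition below Eq. (4.5) constrains which collections of $(\mu_{ij}, \sigma_{ij})$ pairs are admissible.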

4.3 Stochastic Finite-Time Stabilization

Design the following state feedback controller for the discrete-time MJS (4.1):

$$
u(k) = -K_{i,\pi_{ij}^{(\xi_k)}} x(k)
\tag{4.6}
$$

where $K_{i,\pi_{ij}^{(\xi_k)}}$ are the controller gains to be calculated. Substituting controller (4.6) into system (4.1) leads to the following closed-loop system:

$$
x(k+1) = \bar A_{i,\pi_{ij}^{(\xi_k)}} x(k) + B_{wi} w(k)
\tag{4.7}
$$

where $\bar A_{i,\pi_{ij}^{(\xi_k)}} = A_i - B_{ui} K_{i,\pi_{ij}^{(\xi_k)}}$.
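For illustration, the closed-loop recursion (4.7) can be simulated directly. The two-mode example below is only a sketch: the matrices, gains, and TPs are hypothetical placeholders (not the gains designed in this chapter), and the mode process is driven by a fixed expected TP matrix rather than the random TPs of (4.2):

```python
import math
import random

random.seed(0)

# Hypothetical two-mode example; all numbers are illustrative only.
A  = {1: [[1.1, 0.2], [0.0, 0.9]], 2: [[0.8, -0.1], [0.3, 1.0]]}
Bu = {1: [[1.0], [0.5]],           2: [[0.4], [1.0]]}
Bw = {1: [[0.1], [0.1]],           2: [[0.1], [0.0]]}
K  = {1: [[0.6, 0.3]],             2: [[0.2, 0.7]]}   # state-feedback gains, Eq. (4.6)

# Expected TP matrix in the spirit of (4.5), used here to drive the mode process.
TP = {1: [0.7, 0.3], 2: [0.4, 0.6]}

def step(mode, x, w):
    """One step of the closed loop (4.7): x+ = (A_i - B_ui K_i) x + B_wi w."""
    u = -sum(K[mode][0][j] * x[j] for j in range(2))          # u(k) = -K_i x(k)
    return [sum(A[mode][r][j] * x[j] for j in range(2))
            + Bu[mode][r][0] * u + Bw[mode][r][0] * w
            for r in range(2)]

def next_mode(mode):
    """Sample the next mode from the current row of the TP matrix."""
    return 1 if random.random() < TP[mode][0] else 2

x, mode = [1.0, -1.0], 1
for k in range(30):
    x = step(mode, x, 0.05 * math.sin(k))   # bounded disturbance w(k)
    mode = next_mode(mode)
print([round(v, 4) for v in x])
```

With these (illustrative) gains, both closed-loop mode matrices are contractive, so the trajectory stays bounded over the horizon despite the random switching and the disturbance.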

Firstly, the following proposition is presented to develop the main results.

Proposition 4.1 For a given scalar $\alpha \ge 1$, the closed-loop MJS (4.7) is stochastically finite-time stabilizable with respect to $(c_1, c_2, N, R, d)$ if there exist symmetric positive-definite matrices $P_{i,\pi_{ij}^{(\xi_k)}}$ and $Q$ such that

$$
\begin{bmatrix}
\bar A_{i,\pi_{ij}^{(\xi_k)}}^{T} \bar P_{j,\pi_{ij}^{(\xi_k)}} \bar A_{i,\pi_{ij}^{(\xi_k)}} - \alpha P_{i,\pi_{ij}^{(\xi_k)}} &
\bar A_{i,\pi_{ij}^{(\xi_k)}}^{T} \bar P_{j,\pi_{ij}^{(\xi_k)}} B_{wi} \\
* & B_{wi}^{T} \bar P_{j,\pi_{ij}^{(\xi_k)}} B_{wi} - \alpha Q
\end{bmatrix} < 0
$$

…$> 0$, and $Q_h > 0$, and matrices $V_l$, $V_h$, and $\bar W$ satisfying the following conditions:

$$
\begin{bmatrix}
-P_l & Q_l + \bar W R_l & 0 & 0 \\
* & P_l - 2\cos\vartheta_l\, Q_l - \mathrm{He}\!\left(A \bar W R_l + B_u \bar K R_l\right) & B_w V_l & -C \bar W R_l \\
\vdots & & &
\end{bmatrix} < 0
$$

…$> 0$ and matrices $Y_i$ satisfying the following conditions:

$$
\begin{bmatrix}
\mathrm{diag}\{-\alpha X_i\} & 0 & L_{1i}^{T} \\
\vdots & &
\end{bmatrix}
$$

…

$$
V(k,p) > \lambda_{\min}(P_t)\, S^{T}(k,p) R S(k,p).
\tag{9.26}
$$

Similarly,

$$
V(0,p) = S^{T}(0,p)\, \tilde P\, S(0,p)
       = S^{T}(0,p)\, R^{1/2} P_t R^{1/2}\, S(0,p)
       < \lambda_{\max}(P_t)\, S^{T}(0,p) R S(0,p).
\tag{9.27}
$$

According to the definition of finite-time boundedness, when $S^{T}(0,p) R S(0,p) < c_1$, combining inequalities (9.25), (9.26), and (9.27) yields the following inequality:

$$
\lambda_{\min}(P_t)\, S^{T}(k,p) R S(k,p) < (\alpha + 1)^{N} \lambda_{\max}(P_t)\, c_1 + \gamma^{2} d^{2}.
\tag{9.28}
$$

If condition (9.17) holds, it follows that $S^{T}(k,p) R S(k,p) < c_2$.
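The chain of bounds culminating in (9.28) can be checked numerically: dividing the right-hand side of (9.28) by $\lambda_{\min}(P_t)$ gives an upper bound on $S^{T}(k,p) R S(k,p)$, which must stay below $c_2$. In the sketch below, all scalars ($\lambda_{\min}$, $\lambda_{\max}$, $c_1$, $c_2$, $\alpha$, $N$, $\gamma$, $d$) are illustrative values, not quantities from this chapter:

```python
# Illustrative check of the bound chain (9.26)-(9.28).
lam_min, lam_max = 0.8, 1.5      # extreme eigenvalues of P_t (illustrative)
c1, c2 = 1.0, 20.0               # initial / terminal bounds (illustrative)
alpha, N = 0.05, 10              # decay rate and finite-time horizon
gamma, d = 0.5, 2.0              # attenuation level and disturbance bound

# Right-hand side of (9.28), then the implied bound on S^T R S.
rhs = (1 + alpha) ** N * lam_max * c1 + gamma ** 2 * d ** 2
bound_on_StRS = rhs / lam_min
print(bound_on_StRS < c2)   # True for these numbers
```

If the printed check were `False`, the chosen scalars would violate the feasibility condition and finite-time boundedness could not be concluded from this argument.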
…$> 0$ and $Q_h > 0$, and matrices $\tilde X$, $V_l$, and $V_h$ satisfying the following conditions:

$$
\begin{bmatrix}
Q_l + \tilde X R_l & 0 & \cdots & 0 \\
* & P_l - 2\cos\vartheta_l\, Q_l - \mathrm{He}\!\left(A_{Mp} \tilde X R_l + B_{uMp} \tilde Y R_l\right) & B_{Mp} V_l & -C_{Mp} \tilde X R_l \\
\vdots & & &
\end{bmatrix}
$$