Author Costa, Oswaldo Luiz do Valle.

Title Continuous-time Markov jump linear systems / Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov
Published Berlin ; London : Springer, 2013

Description 1 online resource
Series Probability and its applications
Probability and its applications (Springer-Verlag)
Contents Introduction -- A Few Tools and Notations -- Mean-Square Stability -- Quadratic Optimal Control with Complete Observations -- H₂ Optimal Control with Complete Observations -- Quadratic and H₂ Optimal Control with Partial Observations -- Best Linear Filter with Unknown (x(t), θ(t)) -- H∞ Control -- Design Techniques -- Some Numerical Examples
Machine generated contents note: 1. Introduction -- 1.1. Markov Jump Linear Systems -- 1.2. Some Applications of MJLS -- 1.3. Prerequisites and General Remarks -- 1.4. Overview of the Chapters -- 1.5. Historical Remarks -- 2. A Few Tools and Notations -- 2.1. Outline of the Chapter -- 2.2. Some Basic Notation and Definitions -- 2.3. Semigroup Operators and Infinitesimal Generator -- 2.4. Fundamental Theorem for Differential Equations -- 2.5. Continuous-Time Markov Chains -- 2.6. Space of Sequences of N Matrices -- 2.7. Auxiliary Results -- 2.8. Linear Matrix Inequalities -- 3. Mean-Square Stability -- 3.1. Outline of the Chapter -- 3.2. Models and Problem Statement -- 3.3. Main Operators and Auxiliary Results -- 3.4. Mean-Square Stability for the Homogeneous Case -- 3.4.1. MSS, StS, and the Spectrum of an Augmented Matrix -- 3.4.2. Coupled Lyapunov Equations -- 3.4.3. Summary -- 3.5. Lʳ₂(Ω, F, P) and Jump Diffusion Cases -- 3.5.1. Lʳ₂(Ω, F, P) Disturbance Case -- 3.5.2. Jump Diffusion Case -- 3.5.3. Summary -- 3.6. Mean-Square Stabilizability and Detectability -- 3.6.1. Definitions and LMIs Conditions -- 3.6.2. Mean-Square Stabilizability with θ(t) Partially Known -- 3.6.3. Dynamic Output Mean-Square Stabilizability -- 3.7. Historical Remarks -- 4. Quadratic Optimal Control with Complete Observations -- 4.1. Outline of the Chapter -- 4.2. Notation and Problem Formulation -- 4.3. Dynkin's Formula -- 4.4. Finite-Horizon Optimal Control Problem -- 4.5. Infinite-Horizon Optimal Control Problem -- 4.6. Historical Remarks -- 5. H₂ Optimal Control with Complete Observations -- 5.1. Outline of the Chapter -- 5.2. Robust and Quadratic Mean-Square Stabilizability -- 5.3. Controllability, Observability Gramians, and the H₂-Norm -- 5.4. H₂ Control via Convex Analysis -- 5.4.1. Preliminaries -- 5.4.2. Π Exactly Known -- 5.4.3. Π Not Exactly Known -- 5.5. Convex Approach and the CARE -- 5.6. Historical Remarks -- 6. Quadratic and H₂ Optimal Control with Partial Observations -- 6.1. Outline of the Chapter -- 6.2. Finite-Horizon Quadratic Optimal Control with Partial Observations -- 6.2.1. Problem Statement -- 6.2.2. Filtering Problem -- 6.2.3. Separation Principle for MJLS -- 6.3. H₂ Control Problem with Partial Observations -- 6.3.1. Problem Statement -- 6.3.2. Filtering H₂ Problem -- 6.3.3. Separation Principle -- 6.3.4. LMIs Approach for the H₂ Control Problem -- 6.4. Historical Remarks -- 7. Best Linear Filter with Unknown (x(t), θ(t)) -- 7.1. Outline of the Chapter -- 7.2. Preliminaries -- 7.3. Problem Formulation for the Finite-Horizon Case -- 7.4. Main Result for the Finite-Horizon Case -- 7.5. Stationary Solution for the Algebraic Riccati Equation -- 7.6. Stationary Filter -- 7.6.1. Auxiliary Results and Problem Formulation -- 7.6.2. Solution for the Stationary Filtering Problem via the ARE -- 7.7. Historical Remarks -- 8. H∞ Control -- 8.1. Outline of the Chapter -- 8.2. Description of the Problem -- 8.3. Bounded Real Lemma -- 8.3.1. Problem Formulation and Main Result -- 8.3.2. Proof of Proposition 8.3 and Lemma 8.4 -- 8.4. H∞ Control Problem -- 8.5. Static State Feedback -- 8.6. Dynamic Output Feedback -- 8.6.1. Main Results -- 8.6.2. Analysis of Dynamic Controllers -- 8.6.3. Synthesis of Dynamic Controllers -- 8.6.4. H∞ Analysis and Synthesis Algorithms -- 8.7. Historical Remarks -- 9. Design Techniques -- 9.1. Outline of the Chapter -- 9.2. Stability Radius -- 9.3. Robustness Margin for Π -- 9.4. Robust Control -- 9.4.1. Preliminary Results -- 9.4.2. Robust H₂ Control -- 9.4.3. Equalized Case -- 9.4.4. Robust Mixed H₂/H∞ Control -- 9.5. Robust Linear Filtering Problem via an LMIs Formulation -- 9.5.1. LMIs Formulation -- 9.5.2. Robust Filter -- 9.5.3. ARE Approximations for the LMIs Problem -- 9.6. Historical Remarks -- 10. Some Numerical Examples -- 10.1. Outline of the Chapter -- 10.2. Example on Economics -- 10.3. Coupled Electrical Machines -- 10.3.1. Problem Statement -- 10.3.2. Mean-Square Stabilization -- 10.3.3. H₂ Control -- 10.3.4. Stability Radius Analysis -- 10.3.5. Synthesis of Robust Controllers -- 10.4. Robust Control of an Underactuated Robotic Arm -- 10.5. Example of a Stationary Filter -- 10.6. Historical Remarks -- Appendix A Coupled Differential and Algebraic Riccati Equations -- A.1. Outline of the Appendix -- A.2. Coupled Differential Riccati Equations -- A.3. Maximal Solution -- A.4. Stabilizing Solution -- A.5. Filtering Coupled Algebraic Riccati Equations -- A.6. Asymptotic Convergence -- A.7. Filtering Differential and Algebraic Riccati Equation for Unknown θ(t) -- Appendix B Adjoint Operator and Some Auxiliary Results -- B.1. Outline of the Appendix -- B.2. Preliminaries -- B.3. Main Results
Summary The importance of introducing mathematical models that account for possible sudden changes in the dynamical behavior of high-integrity or safety-critical systems is now widely recognized. Such systems can be found in aircraft control, nuclear power stations, robotic manipulator systems, integrated communication networks and large-scale flexible structures for space stations, and are inherently vulnerable to abrupt changes in their structures caused by component or interconnection failures. In this regard, a particularly interesting class of models is the so-called Markov jump linear systems (MJLS), which have been used in numerous applications including robotics, economics and wireless communication. Combining probability and operator theory, the present volume provides a unified and rigorous treatment of recent results in the control theory of continuous-time MJLS. This unique approach is of great interest to experts working in the field of linear systems with Markovian jump parameters or in stochastic control. The volume focuses on one of the few cases of stochastic control problems with an actual explicit solution and offers material well suited to coursework, introducing students to an interesting and active research area. The book is addressed to researchers working in control and signal processing engineering. Prerequisites include a solid background in classical linear control theory, basic familiarity with continuous-time Markov chains and probability theory, and some elementary knowledge of operator theory.
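For orientation, a minimal sketch of the state-space form commonly associated with continuous-time MJLS; the symbols x, u, A, B, N and the rate matrix Π are illustrative notation and not necessarily the book's own:
\[
\dot{x}(t) = A_{\theta(t)}\, x(t) + B_{\theta(t)}\, u(t), \qquad x(0) = x_0,\ \ \theta(0) = \theta_0,
\]
where \(\theta(t)\) is a continuous-time Markov chain taking values in \(\{1,\dots,N\}\) with transition rate matrix \(\Pi = [\pi_{ij}]\); abrupt structural changes are modeled by the jumps of \(\theta(t)\) switching the system matrices.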
Bibliography Includes bibliographical references and index
Notes English
Print version record
In Springer eBooks
Subject Stochastic control theory.
Linear systems.
Markov processes.
Distribution (Probability theory)
Markov Chains
distribution (statistics-related concept)
SCIENCE -- System Theory.
Sistemas lineales
Distribución (Teoría de probabilidades)
Control estocástico, Teoría de
Markov, Procesos de
Linear systems
Markov processes
Stochastic control theory
Markov-Sprungprozess
Lineares System
Stochastische Kontrolltheorie
Form Electronic book
Author Fragoso, M. D. (Marcelo Dutra)
Todorov, Marcos G. (Marcos Garcia)
ISBN 9783642341007
3642341004
9781283944960
1283944960
9783642341014
3642341012
9783642431128
3642431127