Description |
1 online resource (xxx, 594 pages) |
Series |
Advances in industrial control |
|
Contents |
History of Adaptive Dynamic Programming -- Part I: Continuous-Time Systems -- Optimal Control of Continuous-Time Affine Nonlinear Systems -- Optimal Control of Nonaffine Continuous-Time Systems -- Robust and Guaranteed Cost Control of Continuous-Time Nonlinear Systems -- Decentralized Stabilization and Control of Nonlinear Interconnected Systems -- Online Synchronous Optimal Learning Algorithms for Multiplayer Nonzero-Sum Games with Unknown Dynamics -- Part II: Discrete-Time Systems -- Value Iteration Adaptive Dynamic Programming for Discrete-Time Nonlinear Systems -- Finite Approximation Error-Based Value Iteration for Adaptive Dynamic Programming -- Policy Iteration for Optimal Control of Discrete-Time Nonlinear Systems -- Generalized Policy Iteration Adaptive Dynamic Programming for Discrete-Time Nonlinear Systems -- Error-Bound Analysis of Adaptive Dynamic Programming Algorithms for Solving Undiscounted Optimal Control Problems -- Part III: Applications -- Adaptive Dynamic Programming for Renewable Energy Scheduling and Battery Management in Smart Homes -- Adaptive Dynamic Programming for Optimal Tracking Control of a Coal Gasification Process -- Data-Driven Neuro-Optimal Temperature Control of Water-Gas Shift Reaction |
Summary |
This book covers the most recent developments in adaptive dynamic programming (ADP). The text begins with a thorough background review of ADP, ensuring that readers are sufficiently familiar with the fundamentals. In the core of the book, the authors address first discrete- and then continuous-time systems. Coverage of discrete-time systems starts with a more general form of value iteration to demonstrate its convergence, optimality, and stability, supported by complete and thorough theoretical analysis. A more realistic form of value iteration is then studied, in which value-function approximations are assumed to have finite errors. Adaptive Dynamic Programming also details another avenue of the ADP approach: policy iteration. Both basic and generalized forms of policy-iteration-based ADP are studied with complete theoretical analysis in terms of convergence, optimality, stability, and error bounds. Among continuous-time systems, the control of affine and nonaffine nonlinear systems is studied using the ADP approach, which is then extended to other branches of control theory, including decentralized control, robust and guaranteed cost control, and game theory. In the last part of the book the real-world significance of ADP theory is presented, focusing on three application examples developed from the authors' work: renewable energy scheduling for smart power grids, coal gasification processes, and water-gas shift reactions.
Bibliography |
Includes bibliographical references and index |
Notes |
Online resource; title from PDF title page (SpringerLink, viewed January 19, 2017) |
Subject |
Dynamic programming.
|
|
Control theory.
|
|
Artificial intelligence.
|
|
Nonlinear science.
|
|
Automatic control engineering.
|
|
MATHEMATICS -- Applied.
|
|
MATHEMATICS -- Probability & Statistics -- General.
|
Form |
Electronic book
|
Author |
Wei, Qinglai, author
|
|
Wang, Ding, author
|
|
Yang, Xiong, author
|
|
Li, Hongliang, author
|
ISBN |
9783319508153 |
|
3319508156 |
|