Title: An Introduction to Optimal Control Theory: The Dynamic Programming Approach
Author: Onesimo Hernandez-Lerma, Leonardo R. Laura-Guarachi, Saul Mendoza-Palacios
Publisher: Springer
Series: Texts in Applied Mathematics
Year: 2023
Pages: 279
Language: English
Format: pdf (true), epub
Size: 19.0 MB

This book introduces optimal control problems for large families of deterministic and stochastic systems with discrete or continuous time parameter. These families include most of the systems studied in many disciplines, including Economics, Engineering, Operations Research, and Management Science, among many others.

The main objective is to give a concise, systematic, and reasonably self-contained presentation of some key topics in optimal control theory. To this end, most of the analyses are based on the dynamic programming (DP) technique. This technique is applicable to almost all control problems that appear in theory and applications. These include, for instance, finite- and infinite-horizon control problems in which the underlying dynamic system follows either a deterministic or stochastic difference or differential equation. In the infinite-horizon case, the book also uses DP to study undiscounted problems, such as the ergodic or long-run average cost.

After a general introduction to control problems, the book is divided into four parts, each devoted to a different class of dynamical systems: control of discrete-time deterministic systems, discrete-time stochastic systems, ordinary differential equations, and finally a general continuous-time Markov control process (MCP) with applications to stochastic differential equations.

In a few words, in an optimal control problem (OCP) we are given a dynamical system that is “controllable” in the sense that its behavior depends on some parameters or components that we can choose within certain ranges. These components are called control actions. When we look at these control actions throughout the whole period of time in which the system is functioning, they form control policies or strategies. On the other hand, we are also given an objective function or performance index that somehow measures the system’s response to each control policy. The OCP is then to find a control policy that optimizes the given objective function.
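The backward-induction idea behind the DP approach can be sketched on a toy deterministic OCP. Everything below is illustrative and not taken from the book: the dynamics x' = x + u, the quadratic stage cost, the finite grids, and the horizon T are all hypothetical choices made just to show the Bellman recursion at work.

```python
# Minimal finite-horizon dynamic programming sketch (illustrative only).
# System: x_{t+1} = x_t + u_t, clipped to a small state grid.
# Objective: minimize the total cost sum_t (x_t^2 + u_t^2) over T steps.
states = range(-3, 4)     # hypothetical finite state grid
actions = range(-2, 3)    # hypothetical finite action set (control actions)
T = 5                     # finite horizon

def step(x, u):
    # Deterministic dynamics, clipped so the next state stays on the grid.
    return max(-3, min(3, x + u))

def cost(x, u):
    # Stage cost: the performance index measuring the system's response.
    return x * x + u * u

# V[t][x] = optimal cost-to-go from state x at time t (the DP value function).
V = {T: {x: 0 for x in states}}   # zero terminal cost
policy = {}                       # policy[t][x] = optimal action (the control policy)
for t in range(T - 1, -1, -1):
    V[t] = {}
    policy[t] = {}
    for x in states:
        # Bellman equation: V_t(x) = min_u [ c(x, u) + V_{t+1}(f(x, u)) ]
        best_u = min(actions, key=lambda u: cost(x, u) + V[t + 1][step(x, u)])
        policy[t][x] = best_u
        V[t][x] = cost(x, best_u) + V[t + 1][step(x, best_u)]

print(V[0][2], policy[0][2])   # prints: 7 -1 (steer from x=2 toward the origin)
```

Solving backward in time yields both the value function and an optimal feedback policy simultaneously, which is exactly the structure the DP technique exploits in the deterministic and stochastic settings the book covers.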

The first and second parts should be accessible to undergraduate students with some knowledge of elementary calculus, linear algebra, and some concepts from probability theory (random variables, expectations, and so forth), whereas the third and fourth parts would be appropriate for advanced undergraduates or graduate students who have a working knowledge of mathematical analysis (derivatives, integrals, ...) and stochastic processes.

Download An Introduction to Optimal Control Theory: The Dynamic Programming Approach















Posted by: Ingvar16, 25-02-2023, 17:32
 
MirKnig.Su ©2021