Title: Variational Bayesian Learning Theory
Authors: Shinichi Nakajima, Kazuho Watanabe, Masashi Sugiyama
Publisher: Cambridge University Press
Year: 2019
Pages: 561
Language: English
Format: PDF (true)
Size: 10.1 MB

Variational Bayesian learning is one of the most popular methods in machine learning. Designed for researchers and graduate students in machine learning, this book summarizes recent developments in the non-asymptotic and asymptotic theory of variational Bayesian learning and suggests how this theory can be applied in practice. The authors begin by developing a basic framework with a focus on conjugacy, which enables the reader to derive tractable algorithms. They then summarize non-asymptotic theory, which, although limited in application to bilinear models, precisely describes the behavior of the variational Bayesian solution and reveals its sparsity-inducing mechanism. Finally, they summarize asymptotic theory, which reveals phase transition phenomena depending on the prior setting, thus providing suggestions on how to set hyperparameters for particular purposes. Detailed derivations allow readers to follow along without prior knowledge of the mathematical techniques specific to Bayesian learning.

Bayesian learning is a statistical inference method that provides estimators and other quantities computed from the posterior distribution—the conditional distribution of unknown variables given observed variables. Compared with point estimation methods such as maximum likelihood (ML) estimation and maximum a posteriori (MAP) learning, Bayesian learning has the following advantages:

- It is theoretically optimal.
- It provides uncertainty information.
- It performs model selection and hyperparameter estimation within a single framework.
- It is less prone to overfitting.
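
As a minimal illustration of these points (a toy sketch, not taken from the book), the conjugate Beta-Bernoulli pair yields a closed-form posterior from which both a point estimate and a credible interval follow; all names and values here are illustrative assumptions:

```python
import numpy as np
from scipy.stats import beta

# Toy coin-flip data: 1 = heads, 0 = tails.
flips = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

# Beta(alpha0, beta0) is conjugate to the Bernoulli likelihood,
# so the posterior is Beta(alpha0 + heads, beta0 + tails) in closed form.
alpha0, beta0 = 1.0, 1.0                      # uniform prior
heads, tails = flips.sum(), len(flips) - flips.sum()
alpha_n, beta_n = alpha0 + heads, beta0 + tails

post_mean = alpha_n / (alpha_n + beta_n)      # Bayesian point summary
mle = flips.mean()                            # maximum-likelihood estimate, for comparison
lo, hi = beta.ppf([0.025, 0.975], alpha_n, beta_n)  # 95% credible interval

print(f"posterior mean: {post_mean:.3f}  (MLE: {mle:.3f})")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```

The credible interval is exactly the uncertainty information a bare point estimate lacks, and conjugacy is what makes the posterior update a two-line computation.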

On the other hand, Bayesian learning has a critical drawback: computing the posterior distribution is computationally hard in many practical models. This is because Bayesian learning requires expectation operations, i.e., integral computations, which cannot be performed analytically except in simple cases. Accordingly, various approximation methods, both deterministic and sampling-based, have been proposed. Variational Bayesian (VB) learning is one of the most popular deterministic approximations to Bayesian learning.
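
As a sketch of how VB works in practice, here is the classic mean-field example for a univariate Gaussian with unknown mean and precision (this is standard textbook material, not code from the book; hyperparameter values and variable names are illustrative assumptions). Coordinate ascent alternates between the two factors q(mu) and q(tau), each update being closed-form thanks to conjugacy:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)   # synthetic data, true precision = 1/1.5**2
N, xbar = len(x), x.mean()

# Illustrative hyperparameters for the Gaussian-Gamma prior:
# mu | tau ~ N(mu0, (lam0 * tau)^-1),  tau ~ Gamma(a0, b0)
mu0, lam0, a0, b0 = 0.0, 1.0, 1e-3, 1e-3

E_tau = 1.0                                    # initial guess for E_q[tau]
for _ in range(50):                            # coordinate-ascent (CAVI) loop
    # Update q(mu) = N(mu_N, 1/lam_N), holding q(tau) fixed.
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # Update q(tau) = Gamma(a_N, b_N), holding q(mu) fixed.
    a_N = a0 + (N + 1) / 2
    E_resid = np.sum((x - mu_N) ** 2) + N / lam_N      # E_q[sum_n (x_n - mu)^2]
    b_N = b0 + 0.5 * (E_resid + lam0 * ((mu_N - mu0) ** 2 + 1.0 / lam_N))
    E_tau = a_N / b_N

print(f"E_q[mu]  ~= {mu_N:.3f}   (sample mean {xbar:.3f})")
print(f"E_q[tau] ~= {E_tau:.3f}  (true precision {1 / 1.5**2:.3f})")
```

The same alternating closed-form structure, scaled up, underlies the matrix factorization and mixture models whose VB solutions the book analyzes.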

'This book presents a very thorough and useful explanation of classical (pre-deep-learning) mean field variational Bayes. It covers basic algorithms, detailed derivations for various models (e.g. matrix factorization, GLMs, GMMs, HMMs), and advanced theory, including results on sparsity of the VB estimator, and asymptotic properties (generalization bounds).' Kevin Murphy, Research Scientist, Google Brain

Download Variational Bayesian Learning Theory
Posted by Ingvar16, 7-01-2020, 04:44