Practical patterns for scaling machine learning from your laptop to a distributed cluster.
In Distributed Machine Learning Patterns you will learn how to:
- Apply distributed systems patterns to build scalable and reliable machine learning projects
- Construct machine learning pipelines with data ingestion, distributed training, model serving, and more
- Automate machine learning tasks with Kubernetes, TensorFlow, Kubeflow, and Argo Workflows
- Make trade-offs between different patterns and approaches
- Manage and monitor machine learning workloads at scale
Distributed Machine Learning Patterns teaches you how to scale machine learning models from your laptop to large distributed clusters. In it, you’ll learn how to apply established distributed systems patterns to machine learning projects, and explore new ML-specific patterns as well. Firmly rooted in the real world, this book demonstrates how to apply patterns using examples based on TensorFlow, Kubernetes, Kubeflow, and Argo Workflows. Real-world scenarios, hands-on projects, and clear, practical DevOps techniques let you easily launch, manage, and monitor cloud-native distributed machine learning pipelines.
About the technology: Scaling up models from standalone devices to large distributed clusters is one of the biggest challenges faced by modern machine learning practitioners. Distributed machine learning systems allow developers to handle extremely large datasets across multiple clusters, take advantage of automation tools, and benefit from hardware acceleration. In this book, Kubeflow co-chair Yuan Tang shares patterns, techniques, and experience gained from years spent building and managing cutting-edge distributed machine learning infrastructure.
About the book: Distributed Machine Learning Patterns is filled with practical patterns for running machine learning systems on distributed Kubernetes clusters in the cloud. Each pattern is designed to help solve common challenges faced when building distributed machine learning systems, including supporting distributed model training, handling unexpected failures, and serving models under dynamic traffic. Real-world scenarios provide clear examples of how to apply each pattern, alongside the potential trade-offs of each approach. Once you’ve mastered these cutting-edge techniques, you’ll put them all into practice and finish up by building a comprehensive distributed machine learning system.
In recent years, machine learning has made tremendous progress, yet large-scale machine learning remains challenging. Take model training as an example. Given the variety of machine learning frameworks such as TensorFlow, PyTorch, and XGBoost, it’s not easy to automate the process of training machine learning models on distributed Kubernetes clusters. Different models require different distributed training strategies, such as parameter servers or collective communication strategies that exploit the network structure. In a real-world machine learning system, many other essential components, such as data ingestion, model serving, and workflow orchestration, must be designed carefully to make the system scalable, efficient, and portable. Machine learning researchers with little or no DevOps experience cannot easily launch and manage distributed training tasks.
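To make the contrast between the two strategies concrete, here is a framework-free toy sketch (plain Python, not the book's code or any real framework's API). Both strategies compute the same averaged gradient; what differs in practice is the communication pattern: a parameter-server setup routes everything through central servers, while a ring all-reduce has workers exchange chunks with their neighbors so no central server is needed.

```python
# Toy sketch of the two gradient-aggregation strategies mentioned above.
# Gradients are plain lists of floats; in a real system they would be
# tensors living on separate hosts.

def parameter_server_aggregate(worker_grads):
    """Parameter-server style: workers send gradients to a central
    server, which averages them and broadcasts the result back."""
    n = len(worker_grads)
    averaged = [sum(g[i] for g in worker_grads) / n
                for i in range(len(worker_grads[0]))]
    return [list(averaged) for _ in range(n)]  # broadcast to every worker

def all_reduce_aggregate(worker_grads):
    """Collective-communication style (e.g. ring all-reduce): workers
    exchange gradient chunks peer-to-peer until each holds the average.
    Here we model only the end result, not the ring exchanges."""
    n = len(worker_grads)
    total = [sum(g[i] for g in worker_grads)
             for i in range(len(worker_grads[0]))]
    return [[t / n for t in total] for _ in range(n)]

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # three workers
print(parameter_server_aggregate(grads)[0])   # [3.0, 4.0]
print(all_reduce_aggregate(grads)[0])         # [3.0, 4.0]
```

Since the numerical result is identical, choosing between the two is a systems trade-off (bandwidth, fault tolerance, cluster topology), which is exactly the kind of decision the book's patterns address.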
This book also includes a hands-on project that builds an end-to-end distributed machine learning system incorporating many of the patterns covered in the book. We will use several state-of-the-art technologies to implement the system, including Kubernetes, Kubeflow, TensorFlow, and Argo Workflows. These technologies are popular choices when building a distributed machine learning system from scratch in a cloud-native way, making the result highly scalable and portable.
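The overall shape of such an end-to-end pipeline can be sketched in a few lines of framework-free Python. This is purely illustrative: the function names below are hypothetical, not the book's API, and in the real project each stage would run as a containerized step of an Argo Workflows DAG on Kubernetes rather than as an in-process function call.

```python
# Illustrative pipeline shape: data ingestion -> distributed training
# -> model serving. Each stage is a stand-in for a real pipeline step.

def ingest():
    """Stand-in for a data-ingestion step: produce (feature, label) pairs."""
    return [(x, x % 2) for x in range(10)]

def train(dataset, num_workers=2):
    """Stand-in for distributed training: shard the data across workers,
    compute a per-shard statistic, then merge the partial results."""
    shards = [dataset[i::num_workers] for i in range(num_workers)]
    # Each worker's "model" is just the mean label of its shard.
    partial = [sum(label for _, label in s) / len(s) for s in shards]
    return sum(partial) / num_workers  # merged model parameter

def serve(model):
    """Stand-in for a model-serving step: turn the learned value
    into a prediction."""
    return 1 if model >= 0.5 else 0

model = train(ingest())
print(serve(model))  # 1
```

The point of the sketch is the data flow between stages, which stays the same whether the stages are local functions or distributed workloads; the patterns in the book are about making each stage scalable and fault-tolerant once it becomes distributed.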
About the reader: For data analysts, data scientists, and software engineers who know the basics of machine learning algorithms and running machine learning in production. Readers should be familiar with the basics of Bash, Python, and Docker.