Streaming video
Author Wilkins, Hollin, on-screen presenter

Title Deploying machine learning models as microservices using Docker : a REST-based architecture for serving ML model outputs at scale / with Hollin Wilkins & Jason Slepicka
Published [Place of publication not identified] : O'Reilly, 2017

Description 1 online resource (1 streaming video file (24 min., 30 sec.))
Summary "Modern applications running in the cloud often rely on REST-based microservices architectures by using Docker containers. Docker enables your applications to communicate between one another and to compose and scale various components. Data scientists use these techniques to efficiently scale their machine learning models to production applications. This video teaches you how to deploy machine learning models behind a REST API, to serve low latency requests from applications, without using a Spark cluster. In the process, you'll learn how to export models trained in SparkML; how to work with Docker, a convenient way to build, deploy, and ship application code for microservices; and how a model scoring service should support single on-demand predictions and bulk predictions. Learners should have basic familiarity with the following: Scala or Python; Hadoop, Spark, or Pandas; SBT or Maven; cloud platforms like Amazon Web Services; Bash, Docker, and REST."--Resource description page
Notes Title from title screen (Safari, viewed January 15, 2018)
Release date from resource description page (Safari, viewed January 15, 2018)
Performer Presenter, Hollin Wilkins
Subject Machine learning.
Application software -- Development.
Form Streaming video
Author Slepicka, Jason, author
Other Titles REST-based architecture for serving ML model outputs at scale