Paige Liu's Posts

Inside the Docker image built by Azure Machine Learning service

Azure Machine Learning (ML) service is a cloud-based environment that makes it easier for you to develop, train, test, deploy, and manage machine learning models. A model can be deployed as a web service that runs on Azure Container Instances, Azure Kubernetes Service, FPGAs, or as an IoT module that runs on Azure IoT Edge devices. In all these cases, the model, its dependencies, and its associated files are encapsulated in a Docker image which exposes a web service endpoint that receives scoring requests and returns inference results.

Using the Azure Machine Learning Python SDK, you don’t have to worry about how to create a web service that calls your model, or how to build a Docker image from a Dockerfile. Instead, you can create the Docker image as follows:

image_config = ContainerImage.image_configuration(
    execution_script = "", #this file contains init and run functions that you implement
    runtime = "python",
    conda_file = "myenv.yml" #this file contains the conda environment that the model depends on
)
image = ContainerImage.create(
    name = "myimage",
    models = [model], #this is the trained model object
    image_config = image_config,
    workspace = ws
)

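The execution script referenced above must implement two functions: `init()`, which Azure ML calls once when the container starts (typically to load the model), and `run()`, which it calls for every scoring request. Below is a minimal sketch of such a script; the model name and the stand-in prediction logic are hypothetical placeholders, not the real SDK calls you would ship:

```python
import json
import numpy as np

# score.py -- a minimal sketch of an Azure ML execution script.
# In a real deployment, init() would locate and load the registered model, e.g.:
#   from azureml.core.model import Model
#   model_path = Model.get_model_path("mymodel")  # "mymodel" is a hypothetical name
#   model = joblib.load(model_path)

def init():
    """Called once when the container starts; load the model here."""
    global model
    model = None  # placeholder so this sketch is self-contained

def run(raw_data):
    """Called per scoring request; raw_data is the JSON request body."""
    try:
        data = np.array(json.loads(raw_data)["data"])
        # A real script would call: result = model.predict(data)
        result = data.sum(axis=1)  # stand-in for model.predict(data)
        return json.dumps({"result": result.tolist()})
    except Exception as e:
        return json.dumps({"error": str(e)})
```

The web service layer inside the image deserializes the HTTP request body and passes it to `run()`, so the script itself never deals with HTTP directly.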
You can read up on how to create the model, the execution script, and the conda environment file in the Azure ML documentation. What’s not in the documentation, however, is how everything is put together in the image to make the scoring web service work. Understanding how this works can help you troubleshoot or customize your code and your deployment.
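For reference, the `conda_file` passed to `image_configuration` is an ordinary conda environment file. A sketch of what `myenv.yml` might contain (the package list here is illustrative; yours depends on what your model actually imports):

```yaml
# myenv.yml -- example conda environment file (package versions illustrative)
name: project_environment
dependencies:
  - python=3.6
  - pip:
      - azureml-defaults   # required for the scoring web service
      - scikit-learn
      - numpy
```

Azure ML resolves this environment inside the image when it is built, so build failures often trace back to unresolvable package pins in this file.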

This is what’s inside the Docker image:

[Image: layout of the Docker image built by Azure Machine Learning]