AWS Open Source Blog

Build, train, and deploy Amazon Lookout for Vision models using the Python SDK

Amazon Lookout for Vision is a new machine learning (ML) service that spots defects and anomalies in visual representations using computer vision (CV). It was made available in Preview at AWS re:Invent 2020 and became generally available in February 2021.

This service lets manufacturing companies increase quality and reduce operational costs by quickly identifying differences in images of objects at scale. For example, Amazon Lookout for Vision can be used to identify missing components in products, damage to vehicles or structures, irregularities in production lines, minuscule defects in silicon wafers, and other similar problems.

Amazon Lookout for Vision uses ML to see and understand images from any camera as a person would, but with an even higher degree of accuracy and at a much larger scale. Lookout for Vision allows customers to reduce the need for costly and inconsistent manual inspection, while improving quality control, defect and damage assessment, and compliance. In minutes, you can begin using Lookout for Vision to automate inspection of images and objects — with no machine learning expertise required.

At AWS, we take our mission to put machine learning in the hands of every developer seriously. For this reason, we want to present another add-on to Lookout for Vision: an open source Python SDK that allows developers and data scientists to build, train, and deploy Lookout for Vision models in a way similar to what they are used to with Amazon SageMaker. We also added helper functions to the open source SDK that help you check your images for compliance, resize them, and generate manifest files automatically. You can use the open source SDK anywhere a Python 3 runtime environment is available.

In this blog post, we will provide a step-by-step guide for using the Lookout for Vision open source Python SDK from within a SageMaker notebook.

Set SageMaker IAM permissions

To use the open source Lookout for Vision SDK from a SageMaker notebook, you first need to grant the SageMaker notebook the necessary permissions for calling Lookout for Vision APIs. We assume that you have already created an Amazon SageMaker notebook instance. Refer to the corresponding Amazon SageMaker notebook instance documentation for more information. The instance is automatically associated with an execution role. To find the role that is attached to your instance, select the instance name in the SageMaker console, like this:

On the next screen, scroll down to find the IAM role attached to your instance under the Permissions and encryption section. Click on the IAM role to open the IAM console:

[Screenshot: the IAM role ARN link on the notebook instance details page]

We will attach an inline policy to our SageMaker IAM role. Once you have selected the above role and a new tab has opened, select Add inline policy on the right-hand side of the screen:

[Screenshot: the Add inline policy link on the right of the IAM console]

In the top tabs, select JSON and use the following snippet as your template. We used a wildcard action (lookoutvision:*) for the service for demo purposes; you can fine-tune the actions and resources based on your needs:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lookoutvision:*"
            ],
            "Resource": "*"
        }
    ]
}

From here, select Review policy; then, on the next screen, give the policy a name and select Create policy. Now you are all set to use Amazon Lookout for Vision in your SageMaker notebook environment.

Getting started with the open source SDK

The SDK is built to help you on your Lookout for Vision journey and introduces further functionality that supports working with the service, for example, creating and pushing a manifest file or checking an image for compliance with the service limits.

Before we continue with the setup, we’ll provide a high-level description of what the service expects and how it works. The service expects images with dimensions between 64×64 and 4096×4096 pixels. It further expects at least 20 images of good objects and 10 images of bad objects if you use only a training set.

If you also use a validation dataset, which lets you compare the performance of your trained models, then Lookout for Vision expects you to upload at least 10 normal images for each of the training and validation sets, as well as 10 anomalous images for the validation set.

Although you can get started with 30 images, we recommend that you add more than the minimum number of labeled images based on your use case. Also, use the console to provide feedback and iterate on model creation.

If you are developing on your local computer or any instance other than the SageMaker environment we are using, you can copy the code below. Alternatively, you can refer to an example notebook, which contains the same code snippets and lets you get started even faster.

Start by setting the general variables:

# Training & Inference
input_bucket = "YOUR_S3_BUCKET_FOR_TRAINING"
project_name = "YOUR_PROJECT_NAME"
model_version = "1" # leave this as "1" if you start right at the beginning
# Inference
output_bucket = "YOUR_S3_BUCKET_FOR_INFERENCE" # can be the same as input_bucket
input_prefix = "YOUR_KEY_TO_FILES_TO_PREDICT/" # used in batch_predict
output_prefix = "YOUR_KEY_TO_SAVE_FILES_AFTER_PREDICTION/" # used in batch_predict

These variables can be split into two groups: training and inference. Some are only needed if you just want to build, train, and deploy the model. If you want to run inference, you have several options to choose from, such as .predict() or .batch_predict(). We will use these variables as follows:

  • input_bucket: The S3 bucket that contains your images for training a model.
  • project_name: The unique name of the Amazon Lookout for Vision project.
  • model_version: The model version you want to deploy. (Note that when starting fresh, 1 is the default.)
  • output_bucket: A bucket in which your model and inference results are stored (can be the same as input_bucket).
  • input_prefix: If you run inference out of Amazon Simple Storage Service (Amazon S3), this is the key of the image(s) you want to predict.
  • output_prefix: This is the Amazon S3 key to which your prediction(s) will be saved.

Lastly, you need to install the SDK. You can do this via pip install. Use:

# Install the SDK using pip
!pip install lookoutvision

in your Jupyter notebook, or run the same command without the exclamation point in your terminal. You are now all set to get started building a model.

Build an Amazon Lookout for Vision model

In this section, we’ll walk through the process of building a model. Before we start using the SDK, please ensure that you have two folders set up in the same directory you are developing in:

  • good: Contains all the images of the class you consider to be normal.
  • bad: Contains all the images of the class you consider to be anomalous.
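As a quick sanity check against the minimum image counts mentioned earlier, you can count the files yourself. The following is a plain-Python sketch that is independent of the SDK:

import os

# Minimum counts for a training-only setup, as described above
minimums = {"good": 20, "bad": 10}
for folder, minimum in minimums.items():
    images = [f for f in os.listdir(folder)
              if f.lower().endswith((".jpeg", ".jpg", ".png"))]
    print("{}: {} images (need at least {})".format(folder, len(images), minimum))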

The SDK will first help you check whether all images have the correct size for Lookout for Vision and whether they all have the same shape. If they don’t, the SDK will help you rescale them to the optimal shape. The SDK also looks specifically for folder names called good and bad with images in them.

Create a project with the SDK

First, let’s import all the modules we’ll use throughout the process:

# Import all the libraries needed to get started:
from lookoutvision.image import Image
from lookoutvision.manifest import Manifest
from lookoutvision.metrics import Metrics
from lookoutvision.lookoutvision import LookoutForVision

This is how these four modules can help you:

  • Image to interact with your local images
  • Manifest to generate and push manifest files
  • Metrics to view and compare model metrics
  • LookoutForVision as the main class to interact with the service

Next, we want to create all the classes so that we can start working with the images and finally train a model.

img = Image()
mft = Manifest(
    bucket=input_bucket,
    s3_path="{}/".format(project_name),
    datasets=["training", "validation"])
# This class creation will tell you if the project already exists
l4v = LookoutForVision(project_name=project_name)
# >>> Project 'YOUR_PROJECT_NAME' does not exist yet...
met = Metrics(project_name=project_name)

Note: We used two datasets in the Manifest class, namely training and validation. We do this because we want Lookout for Vision to use both a training and a validation set during training. You could also choose to use training only, in which case you would pass in ["training"] only, as shown below.
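For the training-only case, the same call would look like this:

# Training-only variant: pass a single dataset name
mft = Manifest(
    bucket=input_bucket,
    s3_path="{}/".format(project_name),
    datasets=["training"])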

In the Lookout for Vision service, you need to create projects to manage your models and uploaded datasets. Let’s create our first Lookout for Vision project with the SDK. This can simply be done by using the LookoutForVision class object and running the .create_project() method.
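For example:

# Create the project in Amazon Lookout for Vision
p = l4v.create_project()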

Working with images

With an existing project, you can now look at your images using the SDK. The Image() class has three main methods you can make use of:

  • check_image_sizes() will loop through all images in your good/ and bad/ folders and check the image sizes, counting how many images comply with the Amazon Lookout for Vision service. An example output could be: {'good': {'no_of_images': 211, 'compliant_images': 211, 'compliant': True}, 'bad': {'no_of_images': 91, 'compliant_images': 91, 'compliant': True}}. This would tell you that all your images are compliant with the service.
  • check_image_shape() works similarly to the above method, but this time it checks whether all your images have the same shape, because a computer vision model expects this to be true in order to work. The output of this method will also give you an optimal shape for all images in case they are not already equal. The JSON key is min_image_shape.
  • rescale() will help you rescale the images in case they don’t have the same shape yet. It does this automatically based on the check_image_shape() method, which is used internally. If you don’t want to overwrite existing images, you can pass in a prefix argument (e.g., "rescaled_"), which will create a new folder for each image type with the prefix and store the new images there.

Your notebook or code could then look similar to the following sketch.

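Passing the prefix argument to the check methods here is an assumption that mirrors the rescale() behavior described above:

# Check whether all images comply with the service size limits
print(img.check_image_sizes())
# Check whether all images share the same shape; if they don't, the
# output contains the optimal shape under the min_image_shape key
print(img.check_image_shape())
# Rescale the images to the optimal shape without overwriting the originals
img.rescale(prefix="rescaled_")
# Check sizes and shapes for the rescaled images one more time
print(img.check_image_sizes(prefix="rescaled_"))
print(img.check_image_shape(prefix="rescaled_"))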

Here we checked sizes and shapes for the rescaled images one more time.

With all compliant images, we are now ready to upload them to Amazon S3. Amazon Lookout for Vision expects a specific structure in your S3 bucket in order to pick up training vs. validation and normal vs. anomaly. Our SDK will handle this for you in an automated fashion. The only thing you need to run is:

img.upload_from_local(
    bucket=input_bucket,
    s3_path="{}/".format(project_name),
    train_and_test=True,
    test_split=0.2,
    prefix="rescaled_")

Here we are uploading the rescaled images, using a training and test (validation) set, and requesting an 80:20 split between these datasets.

Build and push the manifest file

Before we can let Lookout for Vision know where to find the images, we need a manifest file. This is also something the SDK will handle for you. Run:

mft_resp = mft.push_manifests()

The output of this method, namely mft_resp, can now be used to create your Lookout for Vision datasets, because it contains all meta-information needed:

{
    'training': {
        'bucket': 'YOUR_S3_BUCKET',
        'key': 'training.manifest',
        'location': 's3://YOUR_S3_BUCKET/training.manifest'
    },
    'validation': {
        'bucket': 'YOUR_S3_BUCKET',
        'key': 'validation.manifest',
        'location': 's3://YOUR_S3_BUCKET/validation.manifest'
    }
}

Create the Amazon Lookout for Vision dataset

The last step needed before training the model is to create the Lookout for Vision datasets within the service. This can be accomplished by running the l4v.create_datasets() method. With this, you are all set and can start training the model.

dsets = l4v.create_datasets(mft_resp, wait=True)

Train the model

Training a model using Amazon Lookout for Vision is simple once the datasets are created. There is only one command you need to run:

l4v.fit(
    output_bucket=output_bucket,
    model_prefix="mymodelprefix_",
    wait=True)

If you wait for the training to finish in your environment, the output will look similar to this:

[Screenshot: example output of a model training job]

You can also train your model and set wait=False, then check progress in the Lookout for Vision console.

Deploy the model

We use the same methodology many of our customers are already used to when using Amazon SageMaker to build, train, and deploy models. Usually, deployment can be done with a single function call. We follow the same methodology here:

l4v.deploy(
    model_version=model_version,
    wait=True)

This call will also take some time to finish. Once the model is hosted, you should see output that looks like this:

[Screenshot: example output of a hosted model]

Inference using the open source Python SDK

Inference is where the model is actually used and where it delivers the desired business value. For this reason, we implemented two functions, namely .predict() and .batch_predict(). A useful property of these functions is that they work not only with local images but also with images stored in Amazon S3.

Thus, you can either run predictions on local images, or you can run a batch prediction within Amazon S3 and store the results as JSON objects directly in S3. With this approach, you could then, for example, use an AWS Glue crawler and Amazon Athena to create a database with tables of your metrics. This database could then be used for reporting, decision-making, and many other things.

Batch prediction

For batch prediction, where your images are in Amazon S3, please provide the following information as input to the function:

  • model_version: Either provide your model version or, by default, it will use model version 1.
  • input_bucket: The input bucket that contains the images you want to classify as normal/anomalous.
  • input_prefix: Folder name/key name (if applicable) for the S3 path where the input images are. If you use this, make sure that you put a forward slash (/) at the end, as in the example.
  • output_bucket: The output bucket where your prediction results will be stored as JSON files. Note that each output file will be named image_name.json.
  • output_prefix: Folder name/key name (if applicable) for the S3 path where the predicted files will be stored. If you use this, make sure that you put a forward slash (/) at the end, as in the example.
  • content_type: "image/jpeg".

l4v.batch_predict(
    model_version=model_version,
    input_bucket=input_bucket,
    input_prefix=input_prefix,
    output_bucket=output_bucket,
    output_prefix=output_prefix,
    content_type="image/jpeg")

For batch prediction where your images are stored on a local path, provide the following information as input to the function:

  • model_version: Either provide your model version or, by default, it will use model version 1.
  • local_path: The local path where the input images you want to classify as normal/anomalous are stored.
  • content_type: "image/jpeg".

prediction = l4v._batch_predict_local(
    local_path="your/local/path",
    model_version=model_version,
    content_type="image/jpeg")

Single image prediction

We implemented a similar concept with the standard .predict() function call. You can either run:

l4v.predict(local_file="your/local/bad/file.jpeg")

for local images or:

l4v.predict(
    bucket=input_bucket,
    key='my/key/to/the/file.jpeg')

for images located in S3.
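The exact return format depends on the SDK version. As a sketch, assuming the SDK passes through the service's DetectAnomalies result, you could inspect a prediction like this:

# Run a prediction and inspect the result; the exact keys are an
# assumption based on the DetectAnomalies API response format
result = l4v.predict(local_file="your/local/bad/file.jpeg")
print(result)
# e.g., {'Source': {...}, 'IsAnomalous': True, 'Confidence': 0.98}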

Summary

In this blog post, we showed how to build, train, and deploy your first Amazon Lookout for Vision model using the open source Python SDK. You can now begin using the SDK to train more models within a project by using, for example, the .update_datasets() method from the LookoutForVision class. It lets you update your dataset within Amazon Lookout for Vision and then train a new version of your model using the .fit() method.
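A retraining pass could then look like the following sketch; the arguments to .update_datasets() are assumptions that mirror the .create_datasets() call shown earlier:

# Sketch of a retraining flow; argument names are assumptions
l4v.update_datasets(mft_resp, wait=True)
l4v.fit(output_bucket=output_bucket, model_prefix="mymodelprefix_", wait=True)
l4v.deploy(model_version="2", wait=True)  # host the new model version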

Multiple model versions can be used alongside each other. The Python SDK simplifies the existing Boto3 SDK by offering familiar methods to machine learning practitioners like .fit() and .deploy(), and provides additional functionality to streamline your end-to-end workflow with Amazon Lookout for Vision.

Stop the model after you are done

Finally, please stop the model via stop_model() if you are not using it anymore. If you don't provide a model version, it will stop model version 1 by default.

l4v.stop_model()

You can also specify the particular model version to stop.

new_model_version = "2"
l4v.stop_model(model_version=new_model_version)


Michael Wallner

Michael Wallner is a Senior Consultant, Data & AI, with AWS Professional Services and is passionate about enabling customers on their journey to become data-driven and AWSome in the AWS cloud. On top of that, he likes thinking big with customers to innovate and invent new ideas for them.

Bandana Das

Bandana Das is a Senior Data Architect at Amazon Web Services and specializes in data and analytics. She builds event-driven data architectures to support customers in data management and data-driven decision-making. She is also passionate about enabling customers on their data management journey to the cloud.

Shreyas Subramanian

Shreyas Subramanian is a Principal Data Scientist who helps customers solve their business challenges using machine learning on the AWS platform. Shreyas has a background in large-scale optimization and machine learning, and in the use of machine learning and reinforcement learning to accelerate optimization tasks.

Weizhou "Wei" Sun