Beginner’s Guide to Deploying an Image Deep Learning Model Using AWS Serverless Architecture, WSL2 and Windows 10

After this walk-through, you will be able to deploy an image deep learning model using AWS serverless architecture. The material is based on a course I took (linked in the references); the steps below show how to deploy the model using WSL2 on Windows 10.

Create a new Conda Environment

conda create -n keras-serverless python=3.6 pylint rope jupyter

Activate Environment

conda activate keras-serverless

Install Packages

python -m pip install tensorflow==1.12.0 keras==2.2.4 boto3 pillow

Train a model using Jupyter Notebook

keras-build-model

The above notebook downloads a ResNet50 model using Keras Applications. Keras Applications are deep learning models that are made available alongside pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning. Weights are downloaded automatically when instantiating a model.
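Keras caches the downloaded weights under ~/.keras/models by default (the KERAS_HOME environment variable overrides the base directory). A quick stdlib-only sketch of where to look:

```python
import os

# Default Keras cache location; the KERAS_HOME environment variable overrides it.
keras_home = os.environ.get("KERAS_HOME", os.path.join(os.path.expanduser("~"), ".keras"))
models_dir = os.path.join(keras_home, "models")
print(models_dir)  # e.g. /home/user_name/.keras/models inside WSL 2
```

This is the directory you will copy the model file from in the next step.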

Access Keras Model from WSL2 (Windows 10)

The model files live on the WSL 2 virtual disk, which uses a Linux filesystem (ext4) that Windows cannot read natively; WSL 2 exposes it to Windows through the \\wsl$ network path shown below.

\\wsl$\Debian\home\user_name\.keras\models

To view hidden files and folders in Windows 10:

  • Open File Explorer from the taskbar.
  • Select View > Options > Change folder and search options.
  • Select the View tab and, under Advanced settings, select Show hidden files, folders, and drives, then select OK.

Access the WSL2/Debian Drive from File Explorer

With WSL2/Debian installed, the local Windows C drive is automatically mounted inside Debian (at /mnt/c). To browse the WSL2 filesystem from Windows, follow these steps:

  1. Open File Explorer.
  2. Type \\wsl$ in the address bar and press Enter.
  3. The installed Linux distributions appear; open one to browse its file system.

Upon instantiation, the models will be built according to the image data format set in your Keras configuration file at ~/.keras/keras.json.
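A typical keras.json carries the defaults shown below. This sketch parses an inline sample (not your real file) with the stdlib, to show what image_data_format looks like:

```python
import json

# Sample contents of ~/.keras/keras.json (these values are the common defaults).
sample = '''
{
    "image_data_format": "channels_last",
    "backend": "tensorflow",
    "epsilon": 1e-07,
    "floatx": "float32"
}
'''
config = json.loads(sample)
print(config["image_data_format"])  # channels_last
```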

Create Two S3 Buckets

Upload the image and the model to separate buckets; the bucket names must match the environment variables defined later in serverless.yml:

keras_example

You will need AWS credentials (an IAM user with programmatic access keys) configured on your machine.

Delete the temporary folder after testing that the code works.

Create Project and Edit Handler File

sls create --template aws-python3 --name cnn-serverless

Install a plugin for Python requirements

This will automatically add the plugin to your project’s package.json and the plugins section of its serverless.yml. The plugin will now bundle your python dependencies specified in your requirements.txt or Pipfile when you run sls deploy. We also install a particular version of the plugin.

sls plugin install -n serverless-python-requirements@4.2.4

handler.py

print('container start')
try:
  import unzip_requirements
except ImportError:
  pass
print('unzip end')

import json
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image
import numpy as np
import keras_applications
import boto3
import os
import tempfile
print('import end')

keras_applications.imagenet_utils.CLASS_INDEX = json.load(open('imagenet_class_index.json'))

MODEL_BUCKET_NAME = os.environ['MODEL_BUCKET_NAME']
MODEL_KEY_NAME = os.environ['MODEL_KEY_NAME']
TEMP_DIR = '/tmp' 
MODEL_PATH = os.path.join(TEMP_DIR, MODEL_KEY_NAME)
UPLOAD_BUCKET_NAME = os.environ['UPLOAD_BUCKET_NAME']

print('downloading model...')
# Credentials come from the Lambda execution role (or your local AWS profile when testing locally)
s3 = boto3.resource('s3')
s3.Bucket(MODEL_BUCKET_NAME).download_file(MODEL_KEY_NAME, MODEL_PATH)

print('loading model...')
model = ResNet50(weights=MODEL_PATH)
print('model loaded\n')


def classify(event, context):
    body = {}

    params = event['queryStringParameters']
    if params is not None and 'imageKey' in params:
        image_key = params['imageKey']

        # download image
        print('Downloading image...')
        tmp_image_file = tempfile.NamedTemporaryFile(dir=TEMP_DIR)
        img_object = s3.Bucket(UPLOAD_BUCKET_NAME).Object(image_key)
        img_object.download_fileobj(tmp_image_file)
        print('Image downloaded to', tmp_image_file.name)

        #  load and preprocess the image
        img = image.load_img(tmp_image_file.name, target_size=(224, 224))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0) 
        x = preprocess_input(x)
        tmp_image_file.close()

        # predict image classes and decode predictions
        predictions = model.predict(x)
        decoded_predictions = decode_predictions(predictions, top=3)[0]
        predictions_list = []
        for pred in decoded_predictions:
            predictions_list.append({'label': pred[1].replace('_', ' ').capitalize(), 'probability': float(pred[2])})

        body['message'] = 'OK'
        body['predictions'] = predictions_list

    response = {
        "statusCode": 200,
        "body": json.dumps(body),
        "headers": {
            "Access-Control-Allow-Origin": "*",
            "Content-Type": "application/json"
        }
    }

    return response
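The prediction-formatting loop inside classify can be exercised on its own with a fake decode_predictions-style result (tuples of class id, label, score), using only the stdlib:

```python
# Fake output in the shape decode_predictions returns: (class_id, label, score).
decoded = [("n02504458", "African_elephant", 0.83),
           ("n01871265", "tusker", 0.12),
           ("n02504013", "Indian_elephant", 0.04)]

predictions_list = [
    {"label": label.replace("_", " ").capitalize(), "probability": float(score)}
    for _, label, score in decoded
]
print(predictions_list[0])  # {'label': 'African elephant', 'probability': 0.83}
```

The underscores in ImageNet labels are replaced with spaces and the score is cast to a plain float so the list serializes cleanly with json.dumps.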

serverless.yml

service: resnet50 # NOTE: update this with your service name

provider:
  name: aws
  runtime: python3.6
  stage: dev
  region: us-east-1

# you can add statements to the Lambda function's IAM Role here
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "*"

# you can define service wide environment variables here
  environment:
    MODEL_BUCKET_NAME: cnn-serverless-model
    MODEL_KEY_NAME: resnet50_weights_tf_dim_ordering_tf_kernels.h5
    UPLOAD_BUCKET_NAME: cnn-image-uploads

# you can add packaging information here
package:
#  include:
#    - include-me.py
#    - include-me-dir/**
  exclude:
    - node_modules/**
    - .vscode/**
    - __pycache__/**
    - .ipynb_checkpoints/**
    - "*.ipynb"

functions:
  resnet50-classify:
    handler: handler.classify
    memorySize: 2048
    timeout: 30
    events:
      - http:
          path: classify
          method: get
          request:
            parameters:
              querystrings:
                imageKey: true

custom:
  pythonRequirements:
    dockerizePip: true
    slim: true
    zip: true
    noDeploy: []
    useDownloadCache: true
    useStaticCache: true
    slimPatterns:
      - "**/tensorboard*"
      - "**/markdown*"             
      - "**/werkzeug*"             
      - "**/grpc*"             
      - "**/tensorflow/contrib*"
      - "**/tensorflow/include*"

plugins:
  - serverless-python-requirements

event.json

{
    "queryStringParameters": {
        "imageKey": "elephant.jpg"
    }
}
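The classify handler reads imageKey from the event with a None-safe guard; a minimal stdlib sketch using the same event:

```python
import json

# The same shape API Gateway (or event.json) delivers to the handler.
event = json.loads('{"queryStringParameters": {"imageKey": "elephant.jpg"}}')

params = event["queryStringParameters"]
image_key = None
if params is not None and "imageKey" in params:
    image_key = params["imageKey"]
print(image_key)  # elephant.jpg
```

The None check matters because API Gateway sends queryStringParameters as null when no query string is present.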

Test function locally using Serverless

sls invoke local --function resnet50-classify --path event.json

Create an AWS deployment package without deploying it

The sls package command packages your entire infrastructure into the .serverless directory by default and makes it ready for deployment. You can specify another packaging directory by passing the --package option.

sls package

Unzip the .requirements.zip file and extract the following package to the main folder of the serverless project:

  • tensorflow

Then delete the leftover extracted folder and the .requirements.zip archive. This keeps the deployment package and model size within AWS Lambda's limits.
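The unzip-and-prune step can be sketched with the stdlib zipfile module. The archive below is a stand-in built in a temporary directory, not the real .requirements.zip:

```python
import os
import tempfile
import zipfile

work = tempfile.mkdtemp()
project = os.path.join(work, "project")
os.makedirs(project)

# Stand-in for .requirements.zip, with a fake tensorflow package inside.
zip_path = os.path.join(project, ".requirements.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("tensorflow/__init__.py", "")
    zf.writestr("h5py/__init__.py", "")

# Extract only the tensorflow package into the project root, then delete
# the archive so the deployment package stays within Lambda's size limits.
with zipfile.ZipFile(zip_path) as zf:
    for name in zf.namelist():
        if name.startswith("tensorflow/"):
            zf.extract(name, project)
os.remove(zip_path)
print(sorted(os.listdir(project)))  # ['tensorflow']
```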

Deploy Model to AWS

Comment out (or delete) the tensorflow line in the requirements.txt file, since the package now ships in the project folder:

absl-py==0.7.1
astor==0.8.0
certifi==2019.3.9
gast==0.2.2
grpcio==1.21.1
h5py==2.9.0
Keras==2.2.4
Keras-Applications==1.0.7
Keras-Preprocessing==1.0.9
Markdown==3.1.1
numpy==1.16.3
protobuf==3.7.1
PyYAML==5.1
scipy==1.3.0
six==1.12.0
tensorboard==1.12.2
#tensorflow==1.12.0
termcolor==1.1.0
Werkzeug==0.15.4
Then deploy:

sls deploy

If the deployment fails with an error on the first attempt, just run the above command again.

Test the deployed function using Serverless

sls invoke --function resnet50-classify --path event.json --log

Configure IAM to allow image uploads through AWS Cognito

Create a policy using the following text and attach it to the unauthenticated role of your Cognito identity pool.

IAM Policy for Unauth ID Pool:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:*"
         ],
         "Resource": [
            "arn:aws:s3:::BUCKET_NAME/*"
         ]
      }
   ]
}
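Substituting your own bucket name into the policy can be scripted; a small stdlib sketch (the bucket name here is a placeholder):

```python
import json

bucket_name = "cnn-image-uploads"  # placeholder; use your upload bucket name

# Build the unauth-role policy with the bucket ARN filled in.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:*"],
        "Resource": ["arn:aws:s3:::{}/*".format(bucket_name)],
    }],
}
print(json.dumps(policy, indent=3))
```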


S3 CORS config (in JSON):

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST",
            "GET",
            "PUT",
            "DELETE",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]

References

  • https://ubuntu.com/blog/getting-started-with-cuda-on-ubuntu-on-wsl-2
  • https://www.serverless.com/blog/using-tensorflow-serverless-framework-deep-learning-image-recognition
  • https://aws.amazon.com/blogs/machine-learning/how-to-deploy-deep-learning-models-with-aws-lambda-and-tensorflow/
  • https://www.udemy.com/course/deploy-serverless-machine-learning-models-to-aws-lambda/
