Deploy your first Flask Restful API ML Model to AWS Elastic Beanstalk

Tolulade Ademisoye
8 min read · Mar 25, 2023

MLOps Foundation Series

When I started my career as a data scientist back in 2018/2019, I wasn’t deploying models into production. I worked on projects involving machine learning models; however, there were other constraints that hindered deployment. Today, the story is different, and I am excited to say that I have been able to overcome some of the challenges and deploy models into production.

Join Semis today to network in AI & Bigtech

Some of the factors that have hindered several data scientists or machine learning engineers from deploying models into production are the lack of a single platform and cost. However, today these limitations are changing due to new emerging technologies and simplifications.

Please note that there are several ways to deploy your model, such as serverless machine learning, containerizing, model APIs, and hosted cloud platforms like Heroku, AWS, GCP, among others.

Why should you deploy your model?

It’s important to note that no model gains business value sitting on your PC; it needs to be interacted with by business stakeholders or team members.


In this piece, I’ll teach you how to deploy your first model into production. For this write-up, I am assuming that you already have the following:

  1. A Flask RESTful API script (application.py or app.py)
  2. The model stored in pickle or joblib format
  3. An AWS account
  4. Docker installed on your local PC

The focus for this write-up is deploying your already-built machine learning model with Flask, Docker, and AWS Elastic Beanstalk.


Setting up Environment

To start with, we need a Python IDE where we can write our scripts. I recommend PyCharm, a user-friendly Python IDE that helps you set up your virtual environment.

Assuming you have downloaded your machine learning model from Jupyter or Colab notebook, let’s call it “model.joblib” (saved with joblib). Create a folder for your project on your PC; let’s call ours “flaskmodelfolder”. This folder is different from the folder your IDE creates for the Flask project.
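As a quick aside, serialising and reloading the model is what makes this workflow possible. Below is a minimal, hedged sketch of the idea using the standard-library pickle module; TinyModel and model.pkl are placeholders for illustration, not files from this project, and a real project would call joblib.dump/joblib.load with model.joblib instead.

```python
# Hedged sketch: serialising and reloading a model object with pickle.
# TinyModel is a stand-in for your trained estimator (placeholder, not
# part of the original project); the pattern is identical with joblib.
import pickle

class TinyModel:
    """Placeholder 'model' with the same predict() interface as scikit-learn."""
    def predict(self, rows):
        return [sum(row) for row in rows]

model = TinyModel()

# Save the model to disk...
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...and load it back, as the Flask app will do at startup
with open("model.pkl", "rb") as f:
    loaded_model = pickle.load(f)

print(loaded_model.predict([[1.0, 2.0, 3.0]]))  # → [6.0]
```

joblib is generally preferred for models that hold large NumPy arrays, but the save/load pattern is the same.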

Create a Flask Restful API (application.py or app.py) for your model (this write-up doesn’t cover this) using PyCharm. The Flask app is required for our model deployment; it will help communicate with the web interface when eventually deployed.
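For orientation, here is a hedged sketch of what such an application.py might look like. The /predict route, the JSON payload shape, and StubModel are assumptions made for the example; your real script would load model.joblib and define its own routes.

```python
# Minimal sketch of application.py (assumptions: the /predict route and
# payload shape are illustrative; swap StubModel for joblib.load("model.joblib"))
from flask import Flask, request, jsonify

application = Flask(__name__)

class StubModel:
    """Placeholder for the trained model loaded from disk."""
    def predict(self, rows):
        return [sum(row) for row in rows]

model = StubModel()

@application.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)        # e.g. {"features": [[1.0, 2.0]]}
    predictions = model.predict(payload["features"])
    return jsonify({"predictions": list(predictions)})

if __name__ == '__main__':
    application.run(host='0.0.0.0', port=5000, debug=False)
```

The sketch uses a plain Flask route for brevity; a Flask-RESTful version would wrap the same logic in a Resource class registered via api.add_resource.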


Now, copy and paste your model (model.joblib) and Flask app (application.py) into the folder we created (flaskmodelfolder). Also, place the project dataset you used for the machine learning training (e.g., “ml_dataset.csv”) in the flaskmodelfolder.

NB: There are two folders in this project: the first one we created called “flaskmodelfolder” (for our Docker image), while the second one was automatically created by our IDE for the Flask app. Ensure the files are updated in the “flaskmodelfolder”.

Application Server

You’ll need an application server for your Flask app in a production environment. I’ll show you how to set this up.

We’ll run our Flask app behind a WSGI server. I’m going to use Gunicorn, a commonly used Python WSGI HTTP server.

Install Gunicorn using pip (pip install gunicorn or pip3 install gunicorn), or any other method that works for you, in the original Flask project that contains the app.py or application.py file. In PyCharm, simply add gunicorn to the requirements.txt file and PyCharm will prompt you to install it.

Go back to the project for the flask app in your IDE, create a file named wsgi.py. Place this inside the empty wsgi.py file;

from app import app

if __name__ == '__main__':
    app.run()

Note: Change app to application everywhere in the wsgi.py file if your Flask app is named application.py.

This setup will enable the flask app to run on the WSGI server.

Wait! Go back to your Flask app (the app.py or application.py file) in PyCharm and edit the bottom section like this; you may use any other convenient port.


if __name__ == '__main__':
    # make the app accessible externally:
    # host 0.0.0.0 accepts requests from outside your PC
    application.run(host='0.0.0.0', port=5000, debug=False)

Run the Flask App with Gunicorn

Let’s test and run our app with Gunicorn. Open a terminal/cmd window and navigate to the directory that contains your wsgi.py or app.py file, or simply use the terminal for this project in your IDE. Then run the following command:

#please note to use either app or application depending on your flask app name

gunicorn wsgi:app #run on command prompt to start flask app on WSGI HTTP server
#gunicorn wsgi:application

#more options you could try
gunicorn wsgi:app --workers 4 --bind 0.0.0.0:5000 --log-level=debug
#gunicorn wsgi:application --workers 4 --bind 0.0.0.0:5000 --log-level=debug
#gunicorn app:app --bind 0.0.0.0:5000

You should get output similar to this;

[2023-03-09 12:00:00 +0000] [12345] [INFO] Starting gunicorn 20.1.0
[2023-03-09 12:00:00 +0000] [12345] [INFO] Listening at: http://0.0.0.0:5000 (12345)
[2023-03-09 12:00:00 +0000] [12345] [INFO] Using worker: sync
[2023-03-09 12:00:00 +0000] [12346] [INFO] Booting worker with pid: 12346

#This output indicates that the Gunicorn server is running
# and listening for incoming connections on port 5000.

You can stop the app (Ctrl+C in the terminal).

Setting Up Docker & Docker Image

Docker images are the basis of containers. We need Docker for our Flask model API to work with AWS. There are alternatives to using Docker; however, this project focuses on it.

First, download Docker Desktop onto your PC and follow the installation instructions carefully. Also, sign up for Docker Hub, the online platform for the image registry. Take note of your Docker Hub username.

Launch Docker Desktop on your PC; it needs to be running for the steps that follow.

Create an empty file named Dockerfile, with no extension (use PyCharm or another editor).

Place the Dockerfile in the project folder we created earlier (different from the folder your IDE created for the Flask project). Using an editor, insert:

FROM python:3.8

# set a directory for the app
WORKDIR /flaskmodelfolder

# copy all the files to the container
COPY . .

# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt

# define the port number the container should expose
EXPOSE 5000

# run the command app or application
CMD ["gunicorn", "application:application", "--bind", "0.0.0.0:5000"]

This Dockerfile contains the instructions for building the environment, including installing the packages from the requirements.txt file used in the project. It’s required for the Docker image and specifies how the deployed application runs.


Requirements File

This file contains all the packages/libraries you used during the Flask application development. To get this file, simply go to PyCharm settings; under your project folder you should see Python Interpreter and a list of the installed packages. Copy the packages with their pinned versions only (not “latest”);

PyCharm Python Interpreter packages

Alternatively, you may run this freeze command in the terminal to write your packages automatically into the requirements.txt file in the PyCharm project folder (the grep filter is Unix-specific; on Windows cmd, a plain pip freeze > requirements.txt works too).

pip freeze | grep -v "pkg-resources" > requirements.txt

Eventually, it should look like this or close to it, depending on your PC and the packages used;

Flask==2.2.3
Flask-Cors==3.0.10
Flask-RESTful==0.3.9
Jinja2==3.1.2
MarkupSafe==2.1.2
Werkzeug==2.2.3
aniso8601==9.0.1
certifi==2022.12.7
charset-normalizer==3.0.1
click==8.1.3
colorama==0.4.6
gunicorn==20.1.0
idna==3.4
itsdangerous==2.1.2
joblib==1.2.0
numpy==1.24.2
pandas==1.5.3
pip==21.1.2

etc

So far, we have the following set up;

Flask app

Application server (wsgi.py)

Docker desktop & hub

Requirements.txt file

Dockerfile (with no extension like txt)

Build Docker Image

To build our Docker image, we need two files;

  1. Dockerfile — includes instructions for creating the environment, installing dependencies, and running the application.
  2. The requirements file — contains all of the Python packages needed for our app

The flaskmodelfolder contains: the application file, the wsgi.py file, the model file, the dataset used, the requirements.txt file, the Dockerfile, and the Dockerrun.aws.json file (we’ll create this below).

In your terminal/cmd, navigate to the location on your PC that contains flaskmodelfolder, then run the following commands, one after the other.

#to build our docker image
#<your-dockerhub-username> is your Docker Hub username, e.g. buchi or david
#model_api is the name given to my image; you can use a different name here
#the trailing dot tells Docker to look for the Dockerfile in the current directory
docker build -t <your-dockerhub-username>/model_api .

#after building, run this command
#lookman is the Docker Hub username
#model_api is the image name
docker run -p 5000:5000 lookman/model_api

Push Desktop Image to Docker Hub

You’ll need to push the image created on your desktop Docker to Docker Hub online, so we can use it with AWS. Run docker login first if you haven’t authenticated yet.

#run this command to push
docker push lookman/model_api

AWS Cloud Deployment

So far, we have built our Docker image and pushed it to Docker Hub online. We can now integrate with AWS Elastic Beanstalk.

Our application requires a web server; thankfully, Elastic Beanstalk automatically creates one for us using Nginx.


To connect our Docker image on Docker Hub to AWS Elastic Beanstalk, we need to create a file called Dockerrun.aws.json. Use an editor to create this and place it in the flaskmodelfolder. Edit the Name value to your Docker Hub username and image name, e.g. lookman/model_api:latest.

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "lookman/model_api:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}

Launch AWS and go to Elastic Beanstalk;

AWS Elastic Beanstalk
  1. Click on Create application
  2. Fill in the application name
  3. You can leave the application tags empty
  4. Platform: select Docker
  5. Platform branch: Docker running on 64bit Amazon Linux 2
  6. Platform version: use the recommended version
  7. Under Application code, select Upload your code
  8. Upload the Dockerrun.aws.json file and create the application
  9. After the environment builds, confirm its health is OK. Your application will have a dedicated URL for the model API, which can be consumed by a web application or any other service you intend.
Creating an Application on Elastic Beanstalk
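Once the environment is healthy, the model API can be consumed over HTTP from any client. Here is a standard-library-only sketch of such a client; the base URL and the /predict endpoint are placeholders for illustration, since Elastic Beanstalk assigns your real environment URL and your app defines its own routes.

```python
# Hedged sketch of a client for the deployed model API.
# The /predict endpoint and JSON shape are illustrative assumptions.
import json
from urllib import request as urlrequest

def call_model_api(base_url, features, timeout=10):
    """POST a feature matrix to the model's /predict endpoint and return the parsed JSON reply."""
    body = json.dumps({"features": features}).encode("utf-8")
    req = urlrequest.Request(
        base_url.rstrip("/") + "/predict",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlrequest.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage against your Elastic Beanstalk URL (placeholder):
# preds = call_model_api("http://your-env.region.elasticbeanstalk.com", [[1.0, 2.0]])
```

Using only the standard library keeps the client dependency-free; in a larger project you might prefer the requests library for the same call.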

Congratulations! You may also buy me a coffee to support my work.


Connect with me on Github, Twitter & Linkedin.

I’m glad you got this far, all the best!


Tolulade Ademisoye

i build enterprise AI & data for the world at Reispar Technologies