End to End Machine Learning project implementation (Part 3)


Create a prediction pipeline using a Flask web app, and deploy the project to the AWS cloud using a CI/CD pipeline

Introduction

This part is a continuation of Part 2, where we covered Data Ingestion, Data Transformation, Model Training, Model Evaluation, and Model Hyperparameter Tuning. In this part, we will develop a Flask web application and deploy the project to the AWS cloud using a CI/CD pipeline.

Some familiarity with Flask is needed before building the application. Also, create a templates folder in the main project structure.

Create 2 files inside the templates folder: home.html and index.html.

These 2 files are responsible for the frontend design of the application.

Create predict_pipeline.py inside the pipeline folder, which sits inside the src folder.
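A minimal sketch of what predict_pipeline.py might contain, assuming the trained model and preprocessor were saved as artifacts/model.pkl and artifacts/preprocessor.pkl in Part 2; the CustomData class and the load_object helper below are illustrative names based on the project structure from the earlier parts, not necessarily the exact code in the repository:

```python
# src/pipeline/predict_pipeline.py -- a sketch; artifact paths, CustomData,
# and the load_object helper are assumptions, not the exact repository code.
import os
import sys
import pandas as pd

from src.exception import CustomException  # custom exception from Part 1 (assumed)
from src.utils import load_object          # assumed helper that unpickles an object


class PredictPipeline:
    def predict(self, features: pd.DataFrame):
        try:
            # Load the trained model and preprocessor saved during training (Part 2)
            model_path = os.path.join("artifacts", "model.pkl")
            preprocessor_path = os.path.join("artifacts", "preprocessor.pkl")
            model = load_object(file_path=model_path)
            preprocessor = load_object(file_path=preprocessor_path)

            # Apply the same transformations used at training time, then predict
            data_scaled = preprocessor.transform(features)
            return model.predict(data_scaled)
        except Exception as e:
            raise CustomException(e, sys)


class CustomData:
    """Collects the raw form inputs and turns them into a DataFrame whose
    column names match the features the preprocessor was fitted on."""

    def __init__(self, **kwargs):
        self.data = kwargs

    def get_data_as_data_frame(self) -> pd.DataFrame:
        try:
            return pd.DataFrame([self.data])
        except Exception as e:
            raise CustomException(e, sys)
```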

app.py then wires this prediction pipeline to the Flask routes; let us look at app.py and predict_pipeline.py together in a bit more detail.
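A condensed sketch of app.py, assuming the /predictdata route renders home.html and that the form field names in home.html match the feature names the preprocessor was trained on (both are assumptions about the original code):

```python
# app.py -- a condensed sketch; the form handling assumes home.html defines
# inputs whose names match the training feature columns.
from flask import Flask, request, render_template

from src.pipeline.predict_pipeline import CustomData, PredictPipeline

app = Flask(__name__)


@app.route("/")
def index():
    # Landing page
    return render_template("index.html")


@app.route("/predictdata", methods=["GET", "POST"])
def predict_datapoint():
    if request.method == "GET":
        # Show the empty input form
        return render_template("home.html")

    # Collect the submitted form values (they arrive as strings and may need
    # casting to numeric types) and run them through the prediction pipeline
    data = CustomData(**request.form.to_dict())
    pred_df = data.get_data_as_data_frame()
    results = PredictPipeline().predict(pred_df)
    return render_template("home.html", results=results[0])


if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
```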

Run app.py from the terminal with python app.py (inside the conda environment, of course).

Open a new browser window and enter http://127.0.0.1:5000/ in the address bar.

Now append /predictdata (the route defined in app.py) to http://127.0.0.1:5000/, so the URL becomes http://127.0.0.1:5000/predictdata.

The prediction form should now appear in the browser.

First, create the folder .ebextensions and create the file python.config inside it. These files are needed to deploy the project on Amazon Web Services (AWS) Elastic Beanstalk. Then create the Python file application.py: copy the contents of app.py and paste them into application.py. Then commit the changes to GitHub using the steps from the previous parts.

It is better to use application.py instead of app.py, because Elastic Beanstalk expects the WSGI entry point to be named application by default.
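A sketch of how the top of application.py differs from app.py (only the name of the Flask object changes; the alias line is optional):

```python
# application.py -- same contents as app.py, but the Flask instance is named
# "application", which Elastic Beanstalk's WSGI server looks for by default.
from flask import Flask

application = Flask(__name__)
app = application  # optional alias so the existing @app.route decorators still work
```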

The contents of python.config
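The original post shows this file as an image; a typical python.config for a Flask project on the Elastic Beanstalk Python platform looks roughly like the snippet below. Treat the exact WSGIPath value as an assumption: it points at the application callable inside application.py and may differ across platform versions.

```yaml
# .ebextensions/python.config -- a sketch; adjust WSGIPath to your own entry point
option_settings:
  "aws:elasticbeanstalk:container:python":
    WSGIPath: application:application
```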

We are deploying the application with an AWS Elastic Beanstalk environment that uses the Python container.

For AWS deployment, we need an AWS account.

Go to the AWS console search bar and search for Elastic Beanstalk.

We will use AWS CodePipeline to integrate the GitHub repository we updated earlier with AWS Elastic Beanstalk.

Then search for CodePipeline, AWS's continuous-delivery service, in the AWS console.

Then click on Create pipeline

In the Add source stage, choose GitHub (Version 1) as the source provider so the pipeline integrates with your GitHub repository for this project.

Next, skip the Add build stage, since this project does not need a separate build step.

In the Add deploy stage, choose AWS Elastic Beanstalk as the deploy provider and fill in the application name and environment name created earlier.

The CodePipeline then integrates with the Elastic Beanstalk application we created, and the final deployment is carried out.

The main objective of this blog was to understand the steps for deploying the project with Flask and on AWS using a CI/CD pipeline. This is the first time I have used AWS to deploy an end-to-end Machine Learning or Deep Learning project.

I hope you like this end-to-end series of learning and showcasing. I will keep doing end-to-end deployments this year and hope to build valuable projects.

GitHub Repository of the project

This marks the end of the blog.

Stay tuned for the next part, follow me here, and say hi.

Twitter

