Deploy Machine Learning Web Applications with Streamlit

As a data scientist or machine learning engineer, it is not enough to leave your machine learning models in your notebooks. You will want your end-users to seamlessly interact with your model in a creative and easy way.

Streamlit is a Python-based library that allows the creation and deployment of machine learning web applications. It is also fast and flexible, turning application development time from days into hours.

Using Flask or Django, or turning models into Application Programming Interfaces (APIs), are alternatives; the caveat is that they require software engineering skills.

In this article, we will learn how to:

  1. Export your machine learning model as a pickle file.
  2. Create a Streamlit interface to interact with our pickle file.
  3. Get model results from the Streamlit interface.
  4. Deploy the Streamlit app.

Pre-Requisites

  • This article assumes you have knowledge of building machine learning models.
  • Download a Python IDE for building the Streamlit app. For this project, I’ll use PyCharm.

You can find the code for this article in the GitHub repo.

Let’s get started!

Step 1 — Export Pickle file

  • Export your trained machine learning model as a pickle file. If you don’t have one and you wish to use the same notebook I am using, you can download it from this github link
import pickle
pickle_out = open("emp-model.pkl","wb")
pickle.dump(model, pickle_out)
pickle_out.close()
  • We are saving our pickle file as emp-model.pkl.
  • pickle.dump() then serializes our trained model and writes it to the file.
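As a quick sanity check, you can load the file back and confirm the round trip works. This is just a sketch: it uses a stand-in dictionary in place of a real trained model, since any Python object is pickled the same way.

```python
import pickle

# stand-in for a trained model object; a real model pickles the same way
model = {"name": "emp-model", "version": 1}

with open("emp-model.pkl", "wb") as pickle_out:
    pickle.dump(model, pickle_out)  # serialize and write to disk

with open("emp-model.pkl", "rb") as pickle_in:
    loaded = pickle.load(pickle_in)  # read it back

print(loaded == model)  # True: the object survives the round trip
```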

Step 2 — Install Libraries and IDE

  • Open your favorite IDE, create a new project, and create a Python file. The name of my file is testml.py
  • Install the libraries you need.
pip install streamlit
pip install pandas
pip install numpy

If you get the error below while installing Streamlit, try running this command first: python -m pip install -U cffi pip setuptools

Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/

Step 3 — Load pickle file and dataset

Place your pickle file inside the project folder. In this case, I have my dataset in the folder as well, since I want to display a sample of it on the app’s home page.

Step 4 — Create the app

Step 4.1 — Import libraries

import streamlit as st
import pandas as pd
import numpy as np
import pickle

We want to create two pages:

  1. Home — This will serve as our landing page.
  2. Predict_Churn — This is where the actual prediction happens.

To do this in Streamlit, we can use the sidebar and selectbox features:

app_mode = st.sidebar.selectbox('Select Page',['Home','Predict_Churn'])

This gives us the following view

Step 4.2 — Define how the home page will look when someone clicks on the “Home” tab.

if app_mode == 'Home':
    st.title('Employee Prediction')
    st.markdown('Dataset :')
    df = pd.read_csv('emp_analytics.csv')  # read our dataset
    st.write(df.head())

Remember we imported streamlit under the alias st, so anywhere we use st, we are calling Streamlit.


if app_mode == 'Home':  # if someone selects the Home tab
    st.title('Employee Prediction')  # display text in title formatting
    st.markdown('Dataset :')  # display a string formatted as Markdown
    df = pd.read_csv('emp_analytics.csv')  # read our dataset
    st.write(df.head())  # display the first rows of our dataset

Run the app using the command

streamlit run testml.py

(Replace testml.py with the name of your Python file.)

Open any of the URLs that are printed to view your app so far. The app should now look as below

Next up is our prediction page, where our model will be triggered.

Step 4.3 — Prediction Page

elif app_mode == 'Predict_Churn':  # specify our inputs
    st.subheader('Fill in employee details to get prediction ')
    st.sidebar.header("Other details :")
    prop = {'salary_low': 1, 'salary_high': 2, 'salary_medium': 3}
    satisfaction_level = st.number_input("satisfaction_level", min_value=0.0, max_value=1.0)
    average_montly_hours = st.number_input("average_montly_hours")
    promotion_last_5year = st.number_input("promotion_last_5year")
    salary = st.sidebar.radio("Select Salary ", tuple(prop.keys()))

    salary_low, salary_medium, salary_high = 0, 0, 0
    if salary == 'salary_high':
        salary_high = 1
    elif salary == 'salary_low':
        salary_low = 1
    else:
        salary_medium = 1

    subdata = {
        'satisfaction_level': satisfaction_level,
        'average_montly_hours': average_montly_hours,
        'promotion_last_5year': promotion_last_5year,
        'salary': [salary_low, salary_medium, salary_high],
    }

    features = [satisfaction_level, average_montly_hours, promotion_last_5year,
                subdata['salary'][0], subdata['salary'][1], subdata['salary'][2]]

    results = np.array(features).reshape(1, -1)

The code above directs the user to the prediction page when they select the “Predict_Churn” tab.

Most machine learning algorithms cannot handle categorical variables directly, so I had created dummies for the salary column.

To present this intuitively in Streamlit, we:

  • Create a sidebar radio containing the three salary levels, each mapped to a binary dummy variable.
salary = st.sidebar.radio("Select Salary ",tuple(prop.keys()))

prop.keys() references the dictionary we set earlier, so the radio displays its three keys as options. The radio returns the selected key as a string; for instance, if someone selects ‘salary_low’, salary will hold the string 'salary_low'.

prop = {'salary_low': 1, 'salary_high': 2, 'salary_medium': 3}

If someone selects one of the salary levels, e.g. salary_high, its dummy variable is set to 1 and the others (salary_low, salary_medium) stay 0.

salary_low, salary_medium, salary_high = 0, 0, 0
if salary == 'salary_high':
    salary_high = 1
elif salary == 'salary_low':
    salary_low = 1
else:
    salary_medium = 1
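An equivalent, more compact way to build the three dummy flags (just a sketch, not part of the original app) is to compare the selected key against each salary level directly:

```python
def salary_dummies(salary):
    """Map the radio selection to (salary_low, salary_medium, salary_high) flags."""
    levels = ('salary_low', 'salary_medium', 'salary_high')
    return tuple(int(salary == level) for level in levels)

salary_low, salary_medium, salary_high = salary_dummies('salary_high')
print(salary_low, salary_medium, salary_high)  # 0 0 1
```

This avoids the if/elif chain and guarantees exactly one flag is set.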

We then collect our variables in a dictionary and flatten them into the features list.

subdata = {
    'satisfaction_level': satisfaction_level,
    'average_montly_hours': average_montly_hours,
    'promotion_last_5year': promotion_last_5year,
    'salary': [salary_low, salary_medium, salary_high],
}

features = [satisfaction_level, average_montly_hours, promotion_last_5year,
            subdata['salary'][0], subdata['salary'][1], subdata['salary'][2]]
results = np.array(features).reshape(1, -1)
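The reshape(1, -1) turns the flat feature list into a single-row 2-D array, which is the shape scikit-learn style models expect for a single prediction. With illustrative values (not real employee data):

```python
import numpy as np

# illustrative input values for the six features
features = [0.45, 160, 0, 1, 0, 0]
results = np.array(features).reshape(1, -1)

print(results.shape)  # (1, 6): one sample, six features
```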

We finally create a predict button and run our app:

if st.button("Predict"):
    picklefile = open("emp-model.pkl", "rb")
    model = pickle.load(picklefile)

    prediction = model.predict(results)
    if prediction[0] == 0:
        st.success('Employee will not churn')
    elif prediction[0] == 1:
        st.error('Employee will churn')

Then run the app again:

streamlit run testml.py

The predict button loads our pickle file and triggers the model. The prediction can be either 1 or 0; we substitute these with messages that are easy to understand from a user’s perspective: the employee will either churn or not churn.

Great, we now have our machine learning web app running, and it’s much easier for end users to interact with our model. The remaining bit is a simple deployment process to the Streamlit servers.

Note: This is not the only way; you can also deploy your Streamlit app on other platforms such as AWS, Heroku, etc.

Step 5 — Create a repository on GitHub and push your files there

Step 5.1 — Export requirements

Export the list of libraries you installed to run the app, so that the Streamlit server can install the same dependencies when it loads the app.

Use the command:

pip freeze > requirements.txt
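pip freeze pins every package in your environment. For this app, the essential entries in requirements.txt would look something like the below (version numbers are illustrative, plus whatever library your model was trained with, e.g. scikit-learn):

```
streamlit==1.20.0
pandas==1.5.3
numpy==1.24.2
scikit-learn==1.2.2
```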

Step 5.2 — Create a new repo and upload your files. These should essentially be your pickle file, the dataset, the exported requirements file, and the Streamlit Python file.

Then upload all the necessary files.

You can just drag and drop the files, then commit your changes.

Your files are now ready to be deployed to Streamlit via this link

Select New app, and specify the GitHub repo you want to deploy, the branch, and the main file path.

The main file path should be the Streamlit Python file. Then choose Deploy! Wait for the requirements to be installed; if successful, the new app page will open automatically in your browser.

Great, we’ve successfully deployed the app! For this project, the app can be found via this link

For more documentation, visit the Streamlit docs: docs.streamlit.io


Big Data & Analytics, Data Science, Machine Learning, Data Engineering | ngugijoan.com
