In this article I will veer off a little from my norm of technical writing and write about personal growth and development.
Someone once asked me, “how would you eat an elephant?” For a second there I was startled, then smiled when it hit me that he wasn’t really asking about eating an elephant literally :). This question and concept have been on my mind ever since. He explained how we needed to “eat” an elephant that was right in front of us. …
At the time of writing this article, Google handles an average of 3.8 million searches per minute across the globe. That translates to 5.6 billion searches per day, or 2 trillion searches per year! When we look at Twitter, there are 500 million tweets per day and around 200 billion tweets per year. The internet is generating huge volumes of data every day. Just by reading that short paragraph, I have approached data from a numbers perspective. If I continued approaching data purely from a numbers perspective, I would run the risk of losing your interest.
This tutorial is the third and final part of the guide on how to deploy Machine Learning models using Flask to Heroku via GitHub.
Step 1: Creating a New App
From your Heroku dashboard click on New and then Create new…
This is the second part of the tutorial on how to create and deploy a machine learning model on Heroku using Flask. The first part covered data pre-processing and the creation of a logistic regression model using a Jupyter Notebook.
Part 2 is a guide on how to structure your Flask application and embed your machine learning model in it, as well as how to upload the application to GitHub.
Structuring the Flask Application
I am using PyCharm as my IDE. Make sure you have installed the packages we used in our model, as well as Flask, NumPy and Gunicorn. …
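As a rough sketch of how such a project is often laid out (the file names here are illustrative, not prescriptive; Heroku reads the Procfile to know how to start the app with Gunicorn):

```
ml-flask-app/
├── app.py              # Flask application: loads the saved model and defines routes
├── model.pkl           # the serialized model from Part 1
├── templates/
│   └── index.html      # form page for user input
├── requirements.txt    # flask, gunicorn, numpy, plus the model's packages
└── Procfile            # contains: web: gunicorn app:app
```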
You have just learned how to create machine learning models. Your model is working as you would like it to. But then you face one dilemma: how do you take your model out of your notebook and package it as a product that people can use with ease?
This tutorial aims to give machine learning beginners an introduction to creating and deploying machine learning models.
Volumes in Kubernetes can be thought of as directories that are accessible to the containers in a pod. Volumes help you persist data even if your container restarts. There are different types of volumes in Kubernetes, and the type defines how the volume is created and what it contains.
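As a minimal sketch (the names are illustrative), one of the simplest types is an emptyDir volume, which is created empty when the Pod starts and survives container restarts, though not Pod deletion:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: cache-volume    # mount the volume into the container
          mountPath: /cache
  volumes:
    - name: cache-volume
      emptyDir: {}              # created when the Pod starts, removed with the Pod
```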
A persistent volume is a piece of storage in a Kubernetes cluster. PersistentVolumes are a cluster-level resource like nodes, which don’t belong to any namespace. A PersistentVolume is provisioned by the administrator and has a particular storage capacity. This way, a developer deploying their app on Kubernetes need not know…
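A hedged sketch of what an administrator-provisioned PersistentVolume might look like (the name, size and path are illustrative; hostPath is only suitable for single-node testing):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo            # cluster-scoped, so no namespace
spec:
  capacity:
    storage: 5Gi           # the size the administrator provisions
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data        # for single-node testing only
```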
This article is a continuation of the ConfigMaps article: https://medium.com/@ngugijoan/understanding-configuration-kubernetes-e527d191ecf2
Secrets are Kubernetes objects mostly used for sensitive configuration data such as passwords or API keys. Secrets are stored base64-encoded within Kubernetes.
Like ConfigMaps, you can create Secrets in two ways.
Secrets are stored in tmpfs on a node (not on disk).
Using Secrets with literal values
To create Secrets directly, use the kubectl create secret command. For instance, how would you save a database username and password?
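A sketch of what that could look like (the secret name and keys are hypothetical). The encoding step is plain shell; the kubectl command itself would be run against a live cluster:

```shell
# Kubernetes stores Secret values base64-encoded:
echo -n 'admin' | base64      # username -> YWRtaW4=
echo -n 's3cr3t' | base64     # password

# Against a running cluster, the Secret would be created directly:
# kubectl create secret generic db-credentials \
#     --from-literal=username=admin \
#     --from-literal=password=s3cr3t
```

Note the `-n` flag on echo: without it, a trailing newline would be encoded into the value.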
As with many other applications, we will always have the application’s logic and its configuration. While we could always put the configuration in the code, this is not a good approach. Kubernetes is no different: it is always better and safer to separate the logic of your Kubernetes application from its configuration.
Configurations are settings or values that might change over the life of an application. Configuration values include things such as authentication credentials, DNS addresses of third-party services and environment-specific settings. If these values are baked into the code, a change to any of them requires a complete redeployment of the entire application.
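As an illustration (the key names and values are hypothetical), such settings can live in a ConfigMap instead of the code, so a change only updates the ConfigMap rather than forcing a rebuild of the image:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: db.example.com   # environment-specific setting
  LOG_LEVEL: info
```

Sensitive values such as credentials belong in a Secret rather than a ConfigMap.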
Traffic lights are used to control the movement of traffic. They direct the flow of traffic for safety and to avoid collisions. Likewise, in Kubernetes, network policies are used to control the traffic flowing into and out of our Pods.
By default, pods accept traffic from any source. We would, however, not want just any traffic sending requests to our pods. Hence the need for a network policy.
What is a Network Policy?
Network policies are simply rules that control traffic to and from a Pod. …
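As a minimal sketch (the labels are illustrative), a NetworkPolicy that only allows ingress traffic from Pods labelled app: frontend to Pods labelled app: backend could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend          # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend Pods may send traffic in
```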
In a real ecosystem, teams usually share a cluster, with each team given a namespace. The issue arises when one team begins using more than its fair share of resources. A Kubernetes administrator would, therefore, create a ResourceQuota specifying the limit of resources (CPU and memory) that can be consumed in each namespace.
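A sketch of such a ResourceQuota (the namespace and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a        # each team gets its own namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU all Pods in the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```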
When an administrator specifies the resource quota for a namespace, the following happens: