Intro
Kubernetes is an amazing orchestrator with tons of configuration options for any size of application you may want to deploy on it. This guide covers the deployment of a pre-built Docker image containing our Laravel 6 application (complete with Nginx, MySQL, etc.).
Kubernetes is super simple to get started with and challenging to master, so let’s spark that curiosity of yours.
Suggested book (affiliate link): Kubernetes: Up and Running: Dive Into the Future of Infrastructure
Minikube
What on earth is that? Simply speaking – it’s a local Kubernetes cluster; think of it as a sandbox environment. An in-depth tutorial on using Minikube is coming soon, so keep an eye out for that.
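If you want to poke around locally before touching a real cluster, here’s a minimal sketch – assuming you’ve already installed Minikube and kubectl (the kubectl install steps are below):
# start a local single-node cluster to use as a sandbox
minikube start
# confirm kubectl can talk to it
kubectl get nodes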
Pre-reqs
I’ll assume you have a Kubernetes cluster set up and that you’ve adjusted your contexts and environments (i.e. you’ve created dev/staging/production contexts). The second assumption is that you have a Docker image available, living in an artifact repository somewhere, ready for deployment.
We need to install kubectl so that we can apply the deployment files to the cluster. I’ll show you how to install it on UNIX-based systems, both macOS and Linux; if you’re on Windows – may Google be with you. Here’s the reference guide: https://kubernetes.io/docs/tasks/tools/install-kubectl/
macOS kubectl install
- Get the installation file by calling:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
- Make it executable:
chmod +x ./kubectl
- Move the executable to your bin dir:
sudo mv ./kubectl /usr/local/bin/kubectl
- Test the installation, by calling version on the executable:
kubectl version --client
For the last step, you should see the client version information printed out.
Linux kubectl install
- Get the installation file:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
- Make it executable:
chmod +x ./kubectl
- Move it to your bin folder:
sudo mv ./kubectl /usr/local/bin/kubectl
- Check the installation:
kubectl version --client
You should see the client version information printed out.
Environments and contexts
So now that you have kubectl installed, you’re halfway to deploying your application. You need to set up your contexts so that you deploy your application into the proper environment (dev/staging/production). Here’s the command to do that:
kubectl config set-context --current --namespace={YOUR_NAMESPACE}
With the above setting, you can call kubectl get {ENTITY} directly, without having to specify the namespace every time.
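As a quick sketch of the wider flow – {YOUR_CONTEXT_NAME} below is a placeholder for whatever you named your dev/staging/production context:
# list the contexts kubectl knows about and mark the active one
kubectl config get-contexts
# switch to the context you want to deploy to, then pin the namespace as shown above
kubectl config use-context {YOUR_CONTEXT_NAME}
# verify which namespace is now active for the current context
kubectl config view --minify | grep namespace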
Deployment files
So, to get our application up on Kubernetes, we need the following files (as a minimum):
- data.yml – basically the config/env file
- deployment.yml – how k8s should deploy the application
- service.yml – basic service description
- ingress.yml – routing instructions
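One possible way of organising these – purely a suggestion, the folder name is up to you – is to keep them together next to your application code:
k8s/
├── data.yml
├── deployment.yml
├── service.yml
└── ingress.yml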
There are a lot more configurations that we could make but think of this as the bare minimum to get you going. I’ve decided not to make them into a Helm template, as this is an intro tutorial. If you really want to go deep into Helm and what it is, check this out: https://helm.sh/docs/
All of the Kubernetes config files start with the same top-most construction:
apiVersion: v1
kind: Service/ConfigMap/Deployment (...)
metadata:
  (...)
data:
  (...)
- apiVersion – tells Kubernetes which version of the API should interpret the declarative instructions. Think of this as the docker-compose.yml versioning.
- kind – tells Kubernetes what this file will be applied as. A ConfigMap, for example, carries configuration information to be applied to the application.
- metadata – this is where you tell Kubernetes which service the file applies to, where you add a name for the entity, etc.
- data – the actual data to be read and applied
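If you’re ever unsure what a given block accepts, kubectl can print the schema for you:
# list the API versions your cluster supports
kubectl api-versions
# show the documented fields for a kind (works for services, configmaps, ingresses too)
kubectl explain deployment
kubectl explain deployment.spec.template.spec.containers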
Let’s dive into the files
data.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {YOUR_APP_NAME}-config-map
data:
  APP_ENV: "production"
  APP_KEY: "base64:{SOME_KEY}"
  APP_SECRET: "{SOME_SECRET}"
  (...)
This file will add the environment variables to our application. Grab all the keys from your .env file and add them to the data block, adjusting them for the production deployment.
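Once it’s applied (the apply step comes later in this guide), you can double-check what Kubernetes actually stored – the name follows the placeholder convention used above:
# dump the config map as YAML
kubectl get configmap {YOUR_APP_NAME}-config-map -o yaml
# or a more human-readable summary
kubectl describe configmap {YOUR_APP_NAME}-config-map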
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {YOUR_APP_NAME}
  labels:
    app: {YOUR_APP_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {YOUR_APP_NAME}
  template:
    metadata:
      labels:
        app: {YOUR_APP_NAME}
    spec:
      containers:
        - name: {YOUR_APP_NAME}
          image: {YOUR_IMAGE_PATH}
          envFrom:
            - configMapRef:
                name: {YOUR_APP_NAME}-config-map
          resources:
            requests:
              memory: "64Mi"
              cpu: "0.1"
            limits:
              memory: "512Mi"
          ports:
            - containerPort: 80
            - containerPort: 443
These are the deployment instructions read by Kubernetes; think of them as the base instructions for what the pods will inherit in terms of resources, base images, ports, etc.
A few key points (with a quick verification sketch after the list):
- selector – this tells the Deployment which pods it manages, by matching the labels defined in the pod template below, so make sure the naming matches
- envFrom – this is how we load the config file we created earlier into the pods. Think of this as the variable injection stage.
- ports – we’ve exposed both the plain and the TLS port here; if you’re only operating over HTTP, you can remove the 443 port declaration
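Once the deployment is applied, a quick way to confirm the pods came up and that the config map values were actually injected – {ONE_OF_YOUR_POD_NAMES} is a placeholder for a pod name taken from kubectl get pods:
# watch the rollout until the new pods are ready
kubectl rollout status deployment/{YOUR_APP_NAME}
# list the pods created by this deployment
kubectl get pods -l app={YOUR_APP_NAME}
# spot-check that the env vars from the config map made it into a pod (assumes printenv exists in the image)
kubectl exec {ONE_OF_YOUR_POD_NAMES} -- printenv | grep APP_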
service.yml
apiVersion: v1
kind: Service
metadata:
  name: {YOUR_APP_NAME}
  labels:
    team: {YOUR_NAMESPACE}
spec:
  selector:
    app: {YOUR_APP_NAME}
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
This is the service declaration file; think of it as the source of truth, the base declaration to which every other config relates. You’re basically telling Kubernetes – here’s a service, it will use these ports, and it lives in the following namespace.
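A couple of quick sanity checks once the service is applied – the port-forward is only a local test, not something you’d keep running:
# the ENDPOINTS column should list your pod IPs; empty means the selector doesn't match the pod labels
kubectl get endpoints {YOUR_APP_NAME}
# temporarily forward local port 8080 to the service's port 80
kubectl port-forward service/{YOUR_APP_NAME} 8080:80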
ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {YOUR_APP_NAME}
  namespace: {YOUR_NAMESPACE}
  annotations:
    kubernetes.io/ingress.class: "ingress-nginx-int"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - {YOUR_APP_HOST_URL}
      secretName: wildcard-tls
  rules:
    - host: {YOUR_APP_HOST_URL}
      http:
        paths:
          - path: /
            backend:
              serviceName: {YOUR_APP_NAME}
              servicePort: 443
This is the traffic entry point definition; as you can see, we’re routing all traffic that hits / to the service on port 443 for handling. The extra annotations are there for the TLS certificate. The wildcard-tls secret is already configured on the cluster, so we just reference it and tell Kubernetes – hey, I’ll use this cert for HTTPS traffic.
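After applying the ingress, it’s worth confirming it picked up the rules and that the host answers – {YOUR_APP_HOST_URL} is the same placeholder as above:
# the ADDRESS column should eventually show the ingress controller's address
kubectl get ingress {YOUR_APP_NAME}
# shows the rules, backend service and TLS secret that were picked up
kubectl describe ingress {YOUR_APP_NAME}
# once DNS points at the ingress, hit the app over HTTPS
curl -I https://{YOUR_APP_HOST_URL}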
Deployment
Now that you have all the files ready, we’re set to deploy the application. It’s as simple as calling the following command:
kubectl apply -f {FILE_PATH}
Everything else is handled by Kubernetes, based on the metadata we added.
There’s no set order to deploy the files in, but let’s go with this as an example and call them one by one:
kubectl apply -f service.yml
kubectl apply -f data.yml
kubectl apply -f ingress.yml
kubectl apply -f deployment.yml
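If you keep the four files in one folder (for example the k8s/ directory suggested earlier – the name is up to you), kubectl also accepts a directory, so you can apply everything in one go:
# apply every manifest in the folder at once; re-running it later only applies the differences
kubectl apply -f k8s/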
Now if you call this command, you should see the service you just applied:
kubectl get services
You can check on all of the above declarations with the following commands:
kubectl get deployments   # deployments
kubectl get services      # services
kubectl get ingresses     # entry points
kubectl get configmaps    # data carriers
kubectl get pods          # pods
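If something doesn’t come up as expected, these two are usually the first stop – again, the pod name is a placeholder taken from kubectl get pods:
# events and current state for a misbehaving pod
kubectl describe pod {ONE_OF_YOUR_POD_NAMES}
# application/container logs, -f to follow them live
kubectl logs -f {ONE_OF_YOUR_POD_NAMES}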
Conclusion
Kubernetes is an amazing orchestrator that gives developers a huge amount of control over how their applications run. This is a massive subject, so keep an eye out for future tutorials on Kubernetes.
Official documentation: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#overview