Kubernetes is an amazing orchestrator with tons of configuration options for applications of any size. This guide covers the deployment of a pre-built Docker image containing our Laravel 6 application (complete with Nginx, MySQL, etc.)

Kubernetes is super simple to get started with, and challenging to master, so let’s spark that curiosity of yours.

Suggested book (affiliate link): Kubernetes: Up and Running: Dive Into the Future of Infrastructure


What on earth is that? Simply speaking – it’s a local Kubernetes cluster; think of it as a sandbox environment. An in-depth tutorial on using MC is coming soon, so keep an eye out for that.


I’ll assume you have a Kubernetes cluster set up, and that you’ve adjusted your contexts and environments (i.e. you’ve created dev/staging/production contexts). The second assumption is that you have a Docker image available, living in a registry somewhere, ready for deployment.

We need to install kubectl so that we can apply the deployment files to the cluster. I’ll show you how to install it on UNIX-based systems, both macOS and Linux; if you’re on Windows – may Google be with you. Here’s the reference guide:

macOS kubectl install

  1. Get the installation file by calling: curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
  2. Make it executable: chmod +x ./kubectl
  3. Move the executable to your bin dir: sudo mv ./kubectl /usr/local/bin/kubectl
  4. Test the installation, by calling version on the executable: kubectl version --client

For the last step, you should see something like this:

kubectl version – macOS

Linux kubectl install

  1. Get the installation file: curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  2. Make it executable: chmod +x ./kubectl
  3. Move it to your bin folder: sudo mv ./kubectl /usr/local/bin/kubectl
  4. Check the installation: kubectl version --client

You should see something like this:

kubectl version – Linux

Environments and contexts

So now that you have kubectl installed, you’re halfway to deploying your application. You’ll need to set up your contexts so that you deploy your application to the proper environment (dev/staging/production). Here’s the command to do that:

kubectl config set-context --current --namespace={YOUR_NAMESPACE}

With the above setting, you can call kubectl get {ENTITY} directly, without having to change the context all the time.
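If you maintain separate dev/staging/production clusters, you can also create a named context per environment and switch between them before deploying. The cluster, user, and namespace names below are placeholders – adjust them to your own setup:

```shell
# Create one context per environment (cluster/user/namespace names are examples)
kubectl config set-context dev --cluster=dev-cluster --user=dev-user --namespace=my-app-dev
kubectl config set-context production --cluster=prod-cluster --user=prod-user --namespace=my-app

# List all contexts; the active one is marked with '*'
kubectl config get-contexts

# Switch to the production context before deploying
kubectl config use-context production
```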

Deployment files

So, to get our application up on Kubernetes, we need the following files (as a minimum):

  1. data.yml – basically the config/env file
  2. deployment.yml – how k8s should deploy the application
  3. service.yml – basic service description
  4. ingress.yml – routing instructions

There are a lot more configurations we could make, but think of this as the bare minimum to get you going. I’ve decided not to turn them into a Helm chart, as this is an intro tutorial. If you really want to go deep into Helm and what it is, check this out:

All of the Kubernetes config files start with the same top-most construction:

apiVersion: v1
kind: Service/ConfigMap/Deployment (...)

apiVersion – tells Kubernetes which API version to use when interpreting the declarative instructions. Think of this as the docker-compose.yml versioning.

kind – tells Kubernetes what this file will be applied as. A ConfigMap, for example, will carry configuration information to be applied to the application.

metadata – this is where you’ll tell Kubernetes which service to apply the file to, where you’ll add a name for the entity, etc.

data – the actual data to be read and applied
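Putting those sections together, a minimal manifest skeleton looks like this (the name and key are placeholders):

```yaml
apiVersion: v1            # which API version interprets this file
kind: ConfigMap           # what the file is applied as
metadata:
  name: my-app-config-map # the entity's name; labels, namespace, etc. also go here
data:                     # the actual payload (other kinds use `spec` instead)
  SOME_KEY: "some value"
```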

Let’s dive into the files


apiVersion: v1
kind: ConfigMap
metadata:
  name: {YOUR_APP_NAME}-config-map
data:
  APP_ENV: "production"
  APP_KEY: "base64:{SOME_KEY}"
This file will add the environment variables to our application. Grab all the keys from your .env file, add them to the data block, and adjust them for production deployment.
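For example, if your Laravel .env contains entries like the ones below, each becomes a key under data (the values shown here are made up for illustration):

```yaml
# .env:
#   APP_ENV=production
#   APP_DEBUG=false
#   DB_HOST=mysql
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config-map
data:
  APP_ENV: "production"
  APP_DEBUG: "false"   # ConfigMap values must be strings, so quote booleans/numbers
  DB_HOST: "mysql"
```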


apiVersion: apps/v1
kind: Deployment
metadata:
  name: {YOUR_APP_NAME}
  labels:
    app: {YOUR_APP_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {YOUR_APP_NAME}
  template:
    metadata:
      labels:
        app: {YOUR_APP_NAME}
    spec:
      containers:
        - name: {YOUR_APP_NAME}
          image: {YOUR_IMAGE_PATH}
          envFrom:
            - configMapRef:
                name: {YOUR_APP_NAME}-config-map
          resources:
            requests:
              memory: "64Mi"
              cpu: "0.1"
            limits:
              memory: "512Mi"
          ports:
            - containerPort: 80
            - containerPort: 443

So these are the deployment instructions read by Kubernetes. Think of this as the base instructions for what the pods will inherit in terms of resources, base images, ports, etc.

A few key points:

  1. selector – this tells Kubernetes which pods belong to this deployment, by matching their labels; make sure the naming matches across your files
  2. envFrom – this is how we load the config file we created earlier into the pods. Think of this as the variable injection stage.
  3. ports – we’ve exposed both the plain and TLS ports here; if you’re only serving HTTP, you can remove the 443 port declaration
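A quick way to confirm the selector and label wiring once the deployment is applied is to query the pods by label (assuming your app label is my-app, as a placeholder):

```shell
# List only the pods carrying the deployment's label
kubectl get pods -l app=my-app

# Check that the ConfigMap values were injected into the running container
kubectl exec deploy/my-app -- printenv APP_ENV
```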


apiVersion: v1
kind: Service
metadata:
  name: {YOUR_APP_NAME}
  labels:
    team: {YOUR_NAMESPACE}
spec:
  selector:
    app: {YOUR_APP_NAME}
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443

This is the service declaration file. Think of this as the source of truth, the base declaration, to which every other config relates. You’re basically telling Kubernetes: here’s a service, it will use these ports, and it lives in the following namespace.


apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {YOUR_APP_NAME}
  namespace: {YOUR_NAMESPACE}
  annotations:
    kubernetes.io/ingress.class: "ingress-nginx-int"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - {YOUR_APP_HOST_URL}
      secretName: wildcard-tls
  rules:
    - host: {YOUR_APP_HOST_URL}
      http:
        paths:
          - path: /
            backend:
              serviceName: {YOUR_APP_NAME}
              servicePort: 443

This is the traffic entry point definition. As you can see, we’re redirecting all traffic calling / to the service, on port 443, for handling. The extra annotations are added for the TLS certificate. The latter is already configured on the cluster, so we just reference it and tell Kubernetes: hey, I’ll use this cert for HTTPS traffic.
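The wildcard-tls secret referenced above has to exist in the namespace before the ingress can use it. If your cluster doesn’t have it pre-configured, it can be created from a certificate/key pair (the file and namespace names here are placeholders):

```shell
kubectl create secret tls wildcard-tls \
  --cert=wildcard.example.com.crt \
  --key=wildcard.example.com.key \
  --namespace=my-namespace
```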


Now that you have all the files ready, we’re set to deploy the application. It’s as simple as calling the following command:

kubectl apply -f {FILE_PATH}

Everything else is handled by Kubernetes via the metadata annotations we added.

There’s no set order to deploy the files in, but let’s go with this order as an example; apply them one by one:

kubectl apply -f service.yml
kubectl apply -f data.yml
kubectl apply -f ingress.yml
kubectl apply -f deployment.yml
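After applying the deployment, you can watch the rollout progress and revert it if something goes wrong (the deployment name is a placeholder):

```shell
# Block until the new pods are up (or the rollout fails)
kubectl rollout status deployment/my-app

# If the new version misbehaves, revert to the previous revision
kubectl rollout undo deployment/my-app
```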

Now if you call this command, you should see the uploaded declarations:

kubectl get services

You can check on the above declarations with the following commands:

kubectl get deployments  # deployments
kubectl get services     # services
kubectl get ingresses    # entry points
kubectl get configmaps   # data carriers
kubectl get pods         # pods
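If something looks off in any of those listings, describe and logs are the usual next steps (my-app-pod-name is a placeholder for a pod name from kubectl get pods):

```shell
# Show events, probe failures, and scheduling info for a pod
kubectl describe pod my-app-pod-name

# Tail the application logs from the pod's container
kubectl logs -f my-app-pod-name
```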


Kubernetes is an amazing orchestrator that gives developers fine-grained control over every part of a deployment. This is a massive subject, so keep an eye out for future tutorials on Kubernetes.

Official documentation: