Getting Started with Kubernetes
As a follow-up to my Docker post, I'll go over my notes from some recent Kubernetes training, covering the key concepts and a few useful commands for getting started!
Why Kubernetes?
Docker was great for setting up and running a single container and Docker Compose could be used to create and connect a set of different containers. The problem occurs when these containers require maintenance or scaling.
For scaling, new containers would need to be created and added to the cluster via a series of Docker commands. If a container failed it would either need to be fixed or deleted and a new one created in its place.
This is where Kubernetes comes in. It’s a system that manages your containers for you. Some benefits of using it include:
- Automates deployments and manages state
- Manages scaling and high availability with self healing
- Load balances requests and monitors resources
- Portable - can be moved between hosting vendors with minimal re-configuration
- Provides service discovery and easy access to logs
- Can be used to validate the health of services
Key Concepts
- YAML Deployment Files are used to define the desired state of deployed containers
- Kubernetes relies on pre-built Docker images, available locally or in a registry such as Docker Hub
- Masters constantly manage Nodes to ensure the desired state is met
- Masters decide where to run each container
- Nodes are individual machines (or VMs) that run containers
- Different Nodes can run dissimilar sets of containers
- A Pod is a group of containers that share a common purpose
The sections below cover how these key components interact.
Kubernetes Master
The Kubernetes Master is a collection of three processes that run on a single Node in your cluster. The processes are the kube-controller-manager, kube-scheduler and kube-apiserver.
- The controller manager is a daemon that embeds the core control loops shipped with Kubernetes.
- The scheduler is in charge of scheduling Pods onto Nodes.
- The API server provides a RESTful interface for retrieving, creating, updating, and deleting the cluster's primary resources.
The API Server is given the desired state (defined in deployment files applied via kubectl) and the Master works to ensure it is met, with the scheduler deciding which Node each new container should run on. In this example the cluster should be running 2 containers from the client Docker image, 1 from the api image and 4 from the worker image. The active column shows the current state; here, 2 more worker containers need to be created.
If one of those containers were killed, its active value would drop by 1 and the API Server would ask the Nodes to create 1 additional container in its place.
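To picture this (the numbers simply follow the example above), the state the Master tracks might look something like:

image | desired | active |
---|---|---|
client | 2 | 2 |
api | 1 | 1 |
worker | 4 | 2 |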
Kubernetes Node
A Kubernetes Node is the machine (or VM) that Pods and services run within. Each Node contains a few key services:
- Docker - when a container needs to be created, Docker downloads the image from Docker Hub if it's not already in the local cache. It then uses the image to populate each Pod so that any number of containers can be created from it.
- Kubelet - responsible for communication between the Kubernetes Master and the Node. It manages the Pods and the containers running on the machine.
- Kube Proxy - responsible for network traffic between Nodes and the load balancer. It's the access point for all communication from the outside world to a Node.
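You can list the Nodes in a running cluster with `kubectl get nodes`, and `kubectl describe node <node_name>` shows an individual Node's capacity, conditions and the Pods currently running on it.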
Pods
A Pod is the smallest unit that Kubernetes can deploy. It is a group of one or more closely related containers that need to be deployed together. In most cases a Pod runs a single container, but tightly coupled helpers such as monitoring or logging containers may run alongside it in the same Pod.
A Pod's definition includes the Docker image used to spin up the required containers, along with a label and a port that allow it to be connected to other Pods or services.
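As a minimal sketch (the name, label and image below are placeholders that mirror the client example used later in this post), a standalone Pod could be defined like this:

```yaml
# client-pod.yaml - a minimal standalone Pod (illustrative only)
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: website
spec:
  containers:
    - name: client
      image: dockerid/client   # placeholder image name
      ports:
        - containerPort: 5000
```

In practice you'll rarely create bare Pods like this; Deployments, covered next, create and manage them for you.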
Deployments
A Deployment is a template for a Pod, combined with a controller that ensures a given number of Pods are running. The template contains the name and image to use for each Pod's containers, along with the port and label information the created Pods need to access other components. The number of Pods to run is set separately via the replicas value.
Deployments make managing Pods much easier. Rather than defining each Pod manually, you write a single template and state how many replicas you want, and that number can easily be increased later on if required.
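For example, once the client-deployment defined later in this post exists, you could scale it from 2 Pods to 5 either by increasing its replicas value and re-applying the file, or imperatively with `kubectl scale deployment client-deployment --replicas=5`.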
Services
A Service is an abstract way to expose an application running on a set of Pods as a network service.
Kubernetes gives each Pod its own IP address, gives a set of Pods a single DNS name and load-balances across them. If a Pod goes down, Kubernetes automatically creates a replacement, most likely with a newly assigned IP address. This means you can't access your running application via a Pod's IP address, as it is likely to change.
This is where Services come in. Services set up networking in a Kubernetes cluster and have a number of different types for different situations:
- ClusterIP - Exposes a set of Pods to other objects in the cluster
- NodePort - Exposes a set of Pods to the outside world (used for development purposes)
- LoadBalancer - Legacy way of getting network traffic into a cluster
- Ingress - Exposes a set of services to the outside world
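As a sketch (the selector and ports simply mirror the client example later in this post, and the nodePort value is an arbitrary choice from the allowed range), a NodePort Service for development could look like this:

```yaml
# client-nodeport.yaml - exposes the client Pods on a port of each
# Node for development use (names and ports are illustrative)
apiVersion: v1
kind: Service
metadata:
  name: client-nodeport
spec:
  type: NodePort
  selector:
    component: website
  ports:
    - port: 5000        # port other objects in the cluster use
      targetPort: 5000  # containerPort on the Pod
      nodePort: 31515   # port exposed on each Node (30000-32767)
```

The application would then be reachable at `http://<node_ip>:31515` while developing.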
Deploying with Kubernetes
There are two ways to deploy to Kubernetes:
Imperative Deployments
‘Do exactly these steps to arrive at this container setup’
Imperative deployments are executed via a series of kubectl commands. These can be saved into a script or run manually.
- Create 3 containers using the worker image
- Make another 2 containers with the worker image
- Delete 1 container running worker
- Those 4 containers should be networked to the api
- Those 4 containers should be updated to use worker v2.0
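In kubectl terms, these steps roughly map to imperative commands such as `kubectl create deployment worker --image=worker`, `kubectl scale deployment worker --replicas=5` and `kubectl set image deployment/worker worker=worker:2.0` (the deployment name, container name and image tags here are purely illustrative).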
Declarative Deployments
‘Our container setup should look like this, make it happen’
Declarative deployments are executed via deployment files and are described in more detail below.
- There should be 3 containers using worker
- There should be 5 containers using worker
- There should be 4 containers using worker
- There should be 4 containers using worker networked to the api
- There should be 4 containers using worker using v2.0 networked to the api
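With the declarative approach, each of these states is just an edit to the deployment file (for example changing the replicas value or the image tag) followed by re-running `kubectl apply -f <config_file>`; Kubernetes works out which steps are needed to reach the new state.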
Deployment Files
A Kubernetes deployment (or configuration) file is used to create 'Objects'. Object types include (but are not limited to) Pods, Services, Deployments, ReplicationControllers and StatefulSets. An application is usually deployed with two of these files:
- Deployment - Used to configure the deployment of a Pod or set of Pods
- Service - Used to configure the service for network communication to the Pod
Here is what a deployment config file looks like for creating a Deployment:
```yaml
# client-deployment.yaml
# Different API versions are available for different object
# types - apps, batch, policy etc.
apiVersion: apps/v1
# The type of Kubernetes object to create - Pod, Deployment etc.
kind: Deployment
# Details about the Deployment itself
metadata:
  # Used for identification when running kubectl commands
  name: client-deployment
# The desired state of the created object
spec:
  # The number of Pods to run from the template below
  replicas: 2
  # Used by the Deployment to identify the Pods created by
  # the template below
  selector:
    # Must match the labels defined in the template below
    matchLabels:
      # Both the key and value are user defined
      component: website
  # The template for each Pod
  template:
    # Metadata applied to each created Pod
    metadata:
      # Every Pod will have this label
      labels:
        component: website
    spec:
      # List of containers to create in each Pod
      containers:
        # The name of the container
        - name: client
          # Docker image used to create the container
          image: dockerid/client
          # Ports exposed by the container
          ports:
            - containerPort: 5000
```
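The Deployment is created by applying the file with `kubectl apply -f client-deployment.yaml`; running `kubectl get deployments` and `kubectl get pods` afterwards shows the Deployment and the two Pods it created.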
An example ClusterIP service would look like the following:
```yaml
# client-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: client-service
spec:
  # Provides access to the Pods from other objects in the cluster
  type: ClusterIP
  # Pods to direct traffic to - must match the labels defined
  # in the Deployment template above
  selector:
    component: website
  # Ports on those Pods to expose to the rest of the cluster
  ports:
    # The port other Pods use to access this Service
    - port: 5000
      # The port on the Pod to forward traffic to - matches
      # the containerPort defined above
      targetPort: 5000
```
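Applying this with `kubectl apply -f client-service.yaml` and running `kubectl get services` shows the Service and its assigned cluster IP. Other Pods in the cluster can then reach the client application through the stable Service name (`client-service`, resolved by the cluster's DNS) on port 5000, no matter which Pod IPs currently sit behind it.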
Summary
Now that all the individual Kubernetes objects have been covered in more detail, let's take a look at how it all fits together!
This isn't a complete picture, but it highlights a few key points:
- The Master manages Nodes via the kubelet
- Services use matching labels and ports to expose Pods to the rest of the cluster
- The Ingress service is used to expose Pods and Services to the outside world via kube-proxy
- The Ingress service also manages internal load-balancing within Nodes
- An external cloud provider is used to manage load-balancing across Nodes
- Docker pulls images from Docker Hub and injects them into Pods to create containers
- Although not shown here, Pods can communicate across Nodes
Useful Commands
action | command |
---|---|
Apply config settings to the cluster | kubectl apply -f <config_file> |
List active Pods/services | kubectl get <object_type> |
Get detailed info on an object | kubectl describe <object_type> <object_name> |
Delete a running object | kubectl delete -f <config_file> |
View persistent volumes | kubectl get pv |
Create a secret | kubectl create secret generic <secret_name> --from-literal=<SECRET_KEY>=<secret_value> |
View created secrets | kubectl get secrets |
Deploy forcing new image version | kubectl set image <object_type>/<object_name> <container_name>=<new_image> |
command | explanation |
---|---|
kubectl | CLI used to change our K8s cluster |
apply | Change the current configuration of our cluster |
get | Retrieve information about a running object |
set | Set information on a running object |
describe | Get detailed info on an object |
create | Create a new object |
secret | An object type |
generic | A type of secret |
delete | Delete a running object |
-f | Specify a file containing the config changes |
--from-literal | Add the secret as part of the command, rather than from a file |
<config_file> | Path to the file with the config |
<object_type> | The object type to get information about |
<object_name> | Name of the object |
<secret_name> | Name of the secret, for reference in a config file |
<SECRET_KEY> | The key for the secret |
<secret_value> | The secret value |
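Once created, a secret can be consumed inside a Deployment's container spec, for example as an environment variable. A minimal sketch, assuming a secret named pg-password with the key PGPASSWORD was created using the command above:

```yaml
# Excerpt from a Deployment's container spec showing one way a
# secret could be consumed as an environment variable.
# The secret name and key (pg-password / PGPASSWORD) are
# illustrative - use whatever was passed to kubectl create secret.
containers:
  - name: api
    image: dockerid/api   # placeholder image
    env:
      - name: PGPASSWORD
        valueFrom:
          secretKeyRef:
            name: pg-password
            key: PGPASSWORD
```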