
Understanding Docker

Posted by Dave N on January 17, 2020 · 12 min read

Why Docker?

Let’s say you’ve just finished prototyping a new web app and you want to pass it on to the team to have a play around with. You don’t have anywhere to host it privately, so you send them the code to run locally.

You soon start getting messages from everyone struggling to get it running. It turns out most of them don’t have Node.js installed. The code is also written in TypeScript, which you only installed globally, so no one else can compile it. On top of that, there’s a weird bug preventing the app from running on Windows. Ugh!

The manual way

Using a little bit of Docker terminology, here are the manual steps you’d need to get your app running successfully:

FROM
  • Install an operating system

RUN
  • Open the default browser
  • Navigate to https://nodejs.org/en/download/
  • Download the installer
  • Run the installer
  • Run npm install -g typescript
  • Download the source code
  • Run npm install

CMD
  • Run npm start

The Docker way

Using Docker, you can predefine these steps with three core commands to create a Docker image.

[Image: Commands]

This image allows someone with Docker installed to run a single command and have your app running in a container, regardless of their operating system or what they have installed.
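
That single command would look something like this (the image name here is a hypothetical placeholder; naming images is covered later in this post):

docker run daven/prototype-app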

Here are the steps Docker takes to achieve the same result as above:

FROM
  • Download the node:alpine image

RUN
  • Get the image from the previous step
  • Create a container out of it
  • Run npm install -g typescript in it
  • Copy across the source code from your machine
  • Run npm install
  • Take a snapshot of the container's filesystem
  • Shut down the temporary container
  • Get the image ready for the next instruction

CMD
  • Get the image from the last step
  • Create a container out of it
  • Tell the container to run npm start when started
  • Shut down the temporary container
  • The image is complete!

What is a Container?

A container is an isolated environment that uses a portion of the host machine's resources, including CPU, network and hard drive.

It contains only the software defined in the Docker image and has no access to any of the files or software installed on the host, unless you specifically grant it. This means that the container will run consistently across any host machine.
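
To make the isolation concrete, here are two quick commands you can try once Docker is installed (the volume path is just a placeholder):

# the container sees only its own filesystem, not the host's
docker run --rm alpine ls /

# host files are only visible if you grant access explicitly
docker run --rm -v "$PWD":/data alpine ls /data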

[Image: Container]

Understanding the Docker Image

A Docker image is the template from which a container is created. A base Docker image has an empty file system and no startup command. This is where the FROM, RUN and CMD commands come in.

[Image: Base Docker Image]

FROM

This is where you select a base image, commonly a lightweight operating system with some pre-installed software. The above example used node:alpine, which is essentially a minimal version of Node.js built on Alpine Linux. It includes the base OS directories bin, dev, etc, as well as node, giving you the core command-line functionality you need.

A quick note: alpine refers to a minimal image with everything but the bare essentials stripped out. For example, the Alpine Linux image is under 4MB, compared to Ubuntu or Debian images that are over 100MB!
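
If you want to see the difference yourself, pull both images and compare the SIZE column (exact numbers vary by version):

docker pull alpine
docker pull ubuntu
docker images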

RUN

This is where you set up any additional software you require that does not come with the base image. In the above example, this is where we copied across our source code and installed the dependencies and TypeScript.

CMD

Finally, you define the command you want to execute when the container starts up. This could be starting your web application, running your tests or anything else.

The image that would have been created in the flow above would look something like this:

[Image: Docker Image]

And here’s how the full workflow looks:

[Image: Workflow]

The Dockerfile

So with all the theory out of the way, how would you actually go about creating the Docker image? After installing Docker, create a file named Dockerfile in the root of your project.

## Dockerfile

# use an existing image as a base
FROM node:alpine

# set the working directory for the following commands
# (without this, npm would run in / where there is no package.json)
WORKDIR /app

# copy your local app directory to the image
COPY ./app .

# download the project dependencies
RUN npm install

# install the TypeScript compiler globally
RUN npm install -g typescript

# what to run when the container starts
CMD ["npm", "start"]

To build the Docker image run the following in the same directory as the Dockerfile:

docker build .

This will build the Docker Image with a unique ID. You can now run your app by spinning up a container using the created Docker Image.

docker run <id>

Tagging Docker Images

As you might rightly think, referring to Docker images by ID can be a pain. It’s not easy to remember a randomly generated hash every time you want to run or stop a container. The solution is to tag your images; tags are written in the following format:

<docker_id>/<image_name>:<tag_name>

  • Docker ID - To get a Docker ID you’ll need to create a free account with Docker Hub. Instructions for this can be found in the Docker Docs.
  • Image Name - The name you want to give your image, which can be used instead of the image ID from then on.
  • Tag Name - Usually the version number of the image, best updated whenever you change the Docker image. If no tag is specified it defaults to latest, which is also the tag assumed when you run commands without specifying one.

To build a Docker Image with a tag run the following command in the same directory as your Dockerfile:

docker build -t <docker_id>/<image_name>:<tag_name> .

The Docker image can now be used to spin up a container by running:

docker run <docker_id>/<image_name>

With your image tagged you can now push it up to Docker Hub for others to use:

docker push <docker_id>/<image_name>
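
Putting that together with some concrete (made-up) values, assuming a Docker Hub ID of janedoe and an image called myapp:

# build and tag the image
docker build -t janedoe/myapp:1.0 .

# run a container from the tagged image
docker run janedoe/myapp:1.0

# push it to Docker Hub (run docker login first)
docker push janedoe/myapp:1.0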

The Docker Ecosystem

At this point we’ve gone through the full Docker flow, so let’s take a step back and see how it all connects together.

[Image: Docker Ecosystem]

  • Docker Client - Provides the commands that are passed on to the Docker Server
  • Docker Server - Responsible for creating images, running containers, etc.
  • Docker Images - A single-file template with all the dependencies and configuration required to create a container
  • Containers - Running instances of the software defined in a Docker image
  • Docker Hub - Repository of Docker images
  • Docker Compose - A tool for defining and running multi-container Docker applications

Docker Compose

Dockerfiles are great for spinning up a single container, but what if we want multiple containers running together? A front-end might not show much unless it’s connected to a back-end and a database.

Docker Compose can be used to build the individual Docker Images and run containers for each with a single command. Running these containers with Docker Compose also allows each of the containers to communicate with each other. This means port mapping is only required for providing access to the host machine, not for containers to communicate with each other.
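
For example, once the containers defined below are running, the api service can reach the database simply by using the service name postgres as a hostname. A quick way to verify this (using the service names from the compose file later in this section):

# resolve and ping the postgres service from inside the api container
docker-compose exec api ping -c 1 postgres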

To illustrate this the following example will show how Docker Compose can be used to build and run a front-end client, a server and a database all in separate containers with their own Docker images.

First, let’s create a Dockerfile for the client:

## client/Dockerfile

# first stage, given a name so it can be referenced in another stage
FROM node:alpine as builder
# set the directory on the container to run the following commands from
WORKDIR /app
# copy package.json from the host to the container /app directory
COPY package.json ./
# install the dependencies listed in the package.json file
RUN npm install
# copy all project src code from the host directory to the
# container /app directory
COPY . .
# build the project
RUN npm run build

# second stage: a clean image that just serves static files
# (nginx shown here as a typical choice; it listens on port 80)
FROM nginx
# copy only the build output (assumed to be /app/build) across
# from the first stage
COPY --from=builder /app/build /usr/share/nginx/html

The above Dockerfile is broken up into two stages. The first brings across all the source code, downloads all the dependencies and builds the application. At this point the container contains Node.js, the project source code and a rather large node_modules directory. None of these bulky files are needed for running a static site.

That’s why the second stage takes just the required build output from the previous stage and places it into a clean container focused on serving static files on port 80. This produces a much more lightweight Docker image, ready to be used for running a front-end.
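
As a rough check of the saving, you can build the image and inspect its size (the tag here is just a placeholder):

# build the two-stage client image and check the SIZE column
docker build -t myclient ./client
docker images myclient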

Now on to the api, following a similar pattern:

## api/Dockerfile

FROM node:alpine
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]

Finally, the docker-compose file itself:

## docker-compose.yml

# the version of the docker-compose file format to use
version: "3"
# a list of the container types we want to create
services:
  # the name of the service, also usable as a hostname
  # by the other containers
  client:
    # settings for building the docker image
    build:
      # the directory the dockerfile is in
      context: ./client
      # the name of the dockerfile
      dockerfile: Dockerfile
    # array of ports to map
    ports:
      # ports are in the format host:container
      # (nginx serves the built client on port 80)
      - "3000:80"
  # setup for the api, similar to above
  api:
    build:
      dockerfile: Dockerfile
      context: ./api
    volumes:
      # do not map a local host directory against node_modules
      # inside the container
      - /app/node_modules
      # map the host ./api directory to the /app directory in
      # the container, so code changes appear without a rebuild
      - ./api:/app
  # for the database simply use a standard postgres image
  postgres:
    image: "postgres:latest"

So, in summary, the above file defines three services: client, api and postgres.

With the file finished, the most important commands are:

  • docker-compose build - build all the docker images
  • docker-compose up - launch a container for each image (add -d to run in the background)
  • docker-compose down - stop and remove the running containers
  • docker-compose ps - show the status of the running containers
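
A typical session with these commands might look like this:

# build the images and start all three containers in the background
docker-compose up -d --build

# check that the client, api and postgres containers are running
docker-compose ps

# follow the api's logs while developing
docker-compose logs -f api

# stop and remove everything when finished
docker-compose down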

Final Notes

For the last few years I’ve always spent my time working on front-ends, so I’ve never really had the chance to learn Docker properly. When I’ve needed to run something I’ve either blindly copied what I’ve needed from old faithful Stack Overflow or tried tweaking Dockerfiles from other projects.

As my team started to adopt Kubernetes more seriously, I quickly found myself lost. After watching a few quick training videos I realised I needed to understand the basics of Docker before I could make any real progress with k8s.

This post has been created from my notes while trying to understand the key Docker concepts and commands. Hopefully you’ve found it useful if you’re just getting started with Docker too! Below are the most useful commands I used, with explanations.

Key Docker Commands

  • Create a container from an image - docker create <image_name>
  • Start a container - docker start <container_id>
  • Create & start a container - docker run <image_name> <command>
  • Create & start a container with port mapping - docker run -p <local_port>:<container_port> <image_name>
  • List running containers - docker ps
  • List all top-level images - docker images
  • Get container logs - docker logs <container_id>
  • Stop a container - docker stop <container_id>
  • Kill a container - docker kill <container_id>
  • Remove unused containers - docker container prune
  • Remove unused Docker images - docker image prune
  • Execute an additional command in a running container - docker exec -it <container_id> <command>
  • Build & tag an image - docker build -t <docker_id>/<image_name>:<version> <build_directory>
  • Create a volume for persisting data - docker run -v <container_dir> <image_name>
  • Map a local directory into a container - docker run -v <host_dir>:/<container_dir> <image_name>

Command Explanations

  • docker - invokes the Docker client
  • create - create a container
  • start - start a container
  • run - create and start a container
  • ps - list all running containers
  • images - list all top-level images
  • logs - get a container's logs
  • stop - gracefully stop a container
  • kill - stop a container immediately
  • prune - remove unused items from the host file system
  • exec - run a command inside a running container
  • <image_name> - the name of the image to use for a container, or what you want to call a new image
  • <container_id> - the ID of the created container
  • <command> - overrides the image's default command
  • <docker_id> - your personal Docker Hub ID
  • <version> - the version of the image, defaults to latest
  • <build_directory> - the directory to build the Docker image from (the build context)
  • <local_port> - port number on the local machine
  • <container_port> - port number in the container
  • -p - maps a local port to a container port
  • -t - tags a Docker image
  • -it - runs interactively, allowing you to provide input to the container
  • -v <container_dir> - creates an anonymous volume for a directory inside the container
  • -v <local>:<container> - mounts a host directory into the container
