Let’s say you’ve just finished prototyping a new web app and you want to pass it on to the team to have a play around with. You don’t have anywhere to host it privately, so you send them the code to run locally.
You soon start getting messages from everyone struggling to get it running. It turns out most don’t have Node.js installed. The code’s also written in TypeScript, which you only installed globally, so no one else can compile it. It also turns out there’s a weird bug preventing the app from running on Windows. Ugh!
The manual way
Using a little bit of Docker terminology, here are the manual steps you’d need to get your app running successfully:
1. Install an operating system
2. Open the default browser
3. Navigate to https://nodejs.org/en/download/
4. Download the installer
5. Run the installer
6. Download the source code
The Docker way
Using Docker you can predefine these steps using 3 core commands to create a Docker Image.
This image allows someone with Docker installed to run a single command and have your app running in a container, regardless of their operating system or what they have installed.
Here are the steps Docker takes to achieve the same result as above:
1. Download the node:alpine image
2. Get the image from the previous step
3. Create a container out of it
4. Copy across the source code from your machine
5. Take a snapshot of that container’s filesystem
6. Shut down the temporary container
7. Get the image ready for the next instruction
8. Get the image from the last step
9. Create a container out of it
10. Tell the container to run `npm start` on startup
11. Shut down the temporary container
12. The image is complete!
What is a Container?
A container is an isolated environment that uses a portion of the host machine’s resources, including CPU, network and hard drive.
It contains only the software defined in the Docker image and has no access to any of the files or software installed on the host, unless you specifically grant it. This means that the container will run consistently across any host machine.
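To make that concrete, here’s a quick experiment you can try once Docker is installed (illustrative only — it needs a working Docker daemon): the container can’t see the host’s files until a volume flag explicitly shares them.

```shell
# the container only sees its own filesystem -- this lists the
# alpine image's root directories, not your machine's
docker run --rm alpine ls /

# explicitly granting access to the current host directory
# makes it visible inside the container at /data
docker run --rm -v "$(pwd):/data" alpine ls /data
```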
Understanding the Docker Image
A Docker image is the template from which a container is created. A base Docker image has an empty file system and no startup command. This is where the FROM, RUN and CMD commands come in.
FROM
This is where you select a base image, commonly a lightweight operating system with some pre-installed software. The example above used node:alpine, which is essentially a minimal version of Node.js built on Alpine Linux. It includes the base OS directories etc., as well as node, giving you the core command line functionality you need.
A quick note: alpine refers to a minimal image with everything but the bare essentials stripped out. For example, the Alpine Linux image is under 4MB, compared to Ubuntu or Debian images at over 100MB!
RUN
This is where you set up any additional software you require that does not come with the base image. In the example above this is where we copied across our source code and installed the dependencies and TypeScript.
CMD
Finally, you define the command you want to execute when the container is started up. This can be starting up your web application, running your tests or anything else.
So with all the theory out of the way, how would you actually go about creating the Docker image?
After installing Docker, create a file named `Dockerfile` in the root of your project.
```dockerfile
## Dockerfile
# use existing image as base
FROM node:alpine

# copy your local app directory to the image
COPY ./app /app

# run the remaining commands from the app directory
WORKDIR /app

# download the project dependencies
RUN npm install

# install the TypeScript compiler globally
RUN npm install typescript -g

# what to run when the container starts
CMD ["npm", "start"]
```
To build the Docker image, run the following in the same directory as the Dockerfile:

```shell
docker build .
```
This will build the Docker image with a unique ID. You can now run your app by spinning up a container using the created image:

```shell
docker run <id>
```
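Since the app is a web server, you’ll usually also want to publish a port so it’s reachable from the host. The flags below are standard `docker run` options; the image ID and the `my-app` name are placeholders:

```shell
# publish container port 3000 on host port 3000, so the app
# is reachable at http://localhost:3000
docker run -p 3000:3000 <id>

# give the container an arbitrary name so it's easier to stop later
docker run --name my-app -p 3000:3000 <id>
docker stop my-app
```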
Tagging Docker Images
As you might rightly think, referring to Docker images by ID can be a pain. It’s not easy to remember a randomly generated hash every time you want to run or stop a container. The solution is tagging your images, which are written in the following format: `<docker_id>/<image_name>:<tag_name>`
- Docker ID - To get a Docker ID you’ll need to create a free account with Docker Hub. Instructions for this can be found in the Docker Docs.
- Image Name - This is the name you want to give your image and can be used instead of the image ID in the future.
- Tag Name - Usually the version number of the image, best updated whenever you make a change to the Docker Image.
If no tag is specified it will default to `latest`, which is also the tag used when none is provided while running commands.
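For example, with a hypothetical Docker Hub ID of `jane` and an image called `web-app`, these all refer to tagged images:

```shell
jane/web-app:1.0.0   # explicit version tag
jane/web-app:latest  # explicit latest tag
jane/web-app         # no tag given, resolves to jane/web-app:latest
```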
To build a Docker image with a tag, run the following command in the same directory as your Dockerfile:

```shell
docker build -t <docker_id>/<project_name>:<tag_name> .
```
The Docker image can now be used to spin up a container by running:

```shell
docker run <docker_id>/<image_name>
```
With your image tagged you can now push it up to Docker Hub for others to use:

```shell
docker push <docker_id>/<image_name>
```
The Docker Ecosystem
At this point we’ve gone through the full Docker flow, so let’s take a step back and see how it all connects together.
- Docker Client - Provides commands that are passed on to the Docker Server
- Docker Server - Responsible for creating images, running containers etc.
- Docker Images - A single file template with all the dependencies and configuration required to create a container
- Containers - Running instances of the software defined in a Docker image
- Docker Hub - Repository of Docker images
- Docker Compose - A tool for defining and running multi-container Docker applications
Using Dockerfiles is great for spinning up a single container, but what if we want multiple containers running together? A front-end might not show much unless it’s connected to a back-end and a database.
Docker Compose can be used to build the individual Docker Images and run containers for each with a single command. Running these containers with Docker Compose also allows each of the containers to communicate with each other. This means port mapping is only required for providing access to the host machine, not for containers to communicate with each other.
To illustrate this the following example will show how Docker Compose can be used to build and run a front-end client, a server and a database all in separate containers with their own Docker images.
First, let’s create a Dockerfile for the client:
```dockerfile
## client/Dockerfile
# stage given a name so it can be referenced in another stage
FROM node:alpine as builder

# set the directory on the container to run the following commands from
WORKDIR /app

# copy package.json from the host to the container /app directory
COPY package.json ./

# install the dependencies listed in the package.json file
RUN npm install

# copy all project src code from the host directory to the
# container /app directory
COPY . .

# build the project
RUN npm run build
```
The above Dockerfile is broken up into two stages. The first brings across all the source code, downloads all the dependencies and builds the application. At this point the container contains Node.js, the project source code and a rather large node_modules directory. None of these bulky files are needed for running a static site.

That’s why the second stage is used: it takes just the required build output from the previous stage and places it into a clean container focused on serving static files on port 80. This produces a much more lightweight Docker image ready to be used for running a front-end.
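The second stage itself isn’t shown above. As a sketch of what it could look like, assuming the build output lands in /app/build (the default for a create-react-app project), it would be appended after the builder stage:

```dockerfile
## client/Dockerfile (second stage, appended after the builder stage)
# start again from a clean, tiny web server image
FROM nginx

# copy only the static build output from the builder stage;
# everything else (node_modules, src) is left behind
COPY --from=builder /app/build /usr/share/nginx/html

# nginx serves on port 80 by default, so no CMD override is needed
```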
Now on to the api, following a similar pattern:
```dockerfile
## server/Dockerfile
FROM node:alpine
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
```
Finally, the docker-compose file itself:
```yaml
## docker-compose.yml
# version of docker-compose syntax to use
version: "3"

# a list of container types we want to create
services:
  # hostname to give the containers created
  client:
    # settings for building the image to start the container from
    build:
      # the directory the dockerfile is in
      context: ./client
      # the name of the dockerfile
      dockerfile: Dockerfile
    # array of ports to map
    ports:
      # ports are in the format local:container
      - "3000:3000"
    # settings for any volumes to set up
    volumes:
      # do not map a local host directory against node_modules
      # inside the container
      - /app/node_modules
      # map the host client directory to the /app directory in the container
      - ./client:/app

  # setup for the api, similar to above
  api:
    build:
      dockerfile: Dockerfile
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app

  # for the database simply use a standard postgres image
  postgres:
    image: "postgres:latest"
```
So in summary, the above file defines three container types: client, api and postgres.
With the file finished, the most important commands are:

- `docker-compose build` - build all docker images
- `docker-compose up` - launch a container for each image (add `-d` to run in the background)
- `docker-compose down` - stop the running containers
- `docker-compose ps` - list the status of running containers
For the last few years I’ve always spent my time working on front-ends, so I’ve never really had the chance to learn Docker properly. When I’ve needed to run something I’ve either blindly copied what I’ve needed from old faithful Stack Overflow or tried tweaking Dockerfiles from other projects.
As my team started adopting Kubernetes more seriously, I quickly found myself lost. After watching a few quick training videos I realised I needed to understand the basics of Docker before I could make any real progress with k8s.
This post has been created from my notes trying to understand the key Docker concepts and commands. Hopefully you’ve found it useful if you’re just getting started with Docker too! Below are the most useful commands I used, with explanations and a few useful links.
Key Docker Commands
| Action | Command |
| --- | --- |
| Create a container from an image | `docker create <image_name>` |
| Start a container | `docker start <container_id>` |
| Create & start a container | `docker run <image_name>` |
| Create & start a container with port mapping | `docker run -p <local_port>:<container_port> <image_name>` |
| List running containers | `docker ps` |
| List all top level images | `docker images` |
| Get container logs | `docker logs <container_id>` |
| Stop a container | `docker stop <container_id>` |
| Kill a container | `docker kill <container_id>` |
| Remove unused containers | `docker container prune` |
| Remove unused Docker images | `docker image prune` |
| Execute an additional command in a container | `docker exec -it <container_id> <command>` |
| Build & tag an image | `docker build -t <docker_id>/<project_name>:<tag_name> .` |
| Create a volume for persisting data | `docker run -v <container_path> <image_name>` |
| Link a local directory to a container | `docker run -v <host_path>:<container_path> <image_name>` |

And a breakdown of the terms used above:

| Term | Meaning |
| --- | --- |
| `docker` | Reference the Docker client |
| `create` | Create a container |
| `start` | Start a container |
| `run` | Create and start a container |
| `ps` | List all running containers |
| `images` | List all top level images |
| `stop` | Stop the container |
| `rm` | Delete the container |
| `system prune` | Removes unused items from the host file system |
| `exec` | Run a command in a container |
| `<image_name>` | Name of the image to use for the container |
| `--name` | Name of the created container |
| `<command>` | Default command override |
| `<docker_id>` | Your personal docker id |
| `<project_name>` | What you want to call the docker image |
| `<tag_name>` | The version of the image, defaults to `latest` |
| `.` | The location to build the docker image from |
| `<local_port>` | Port number on the local machine |
| `<container_port>` | Port number on the container |
| `-p` | Maps a local port to a container port |
| `-it` | Allows you to provide input to the container |
| `-t` | Tag a docker image |
| `-v <container_path>` | Adds a bookmark to a directory inside the container |
| `-v <host_path>:<container_path>` | Creates a reference in the container pointing to a directory on the host |
We are the engineers behind LandInsight and LandEnhance. We’re helping property professionals build more houses, one line of code at a time. We're based in London, and yes, we're hiring!