How to Build Docker Images from a Dockerfile

Introduction

Docker has changed the way we build, package, and deploy applications. In this tutorial, we’ll learn about the Dockerfile: what it is, how to create one, and how to build Docker images from it.

Docker images and Docker containers at a glance:

A Docker image contains the application and everything you need to run the application. Docker containers are instances of Docker images, whether running or stopped. You can run many Docker containers from the same Docker image.

What is a Dockerfile

A Dockerfile is a script containing a collection of commands and instructions that Docker executes in sequence to build a new Docker image.

Before We Begin:

  • An Ubuntu 18.04 system with root access or a user with sudo privileges
  • Docker installed and running

Note: If you do not have Docker installed, refer to one of our installation guides on how to install Docker on Ubuntu.

How to Create a Dockerfile

Creating a Dockerfile is as easy as creating a new file named “Dockerfile” with your text editor of choice and defining some instructions. Dockerfile is the default name, but you can use any filename you want (and even keep multiple Dockerfiles in the same folder).
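If you choose a non-default name, point docker build at it with the -f flag. A minimal sketch, assuming a hypothetical file called prod.Dockerfile in the current directory:

# prod.Dockerfile is just an example filename
docker build -f prod.Dockerfile .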

The most common scenario when creating Docker images is to pull an existing image from a registry (usually from Docker Hub) and specify the changes you want to make to that base image.

Note: The most commonly used base image when creating Docker images is Alpine, because it is small and optimized to run in RAM.
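For instance, a Dockerfile based on Alpine would start with:

FROM alpine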

The first thing you need to do is to create a directory in which you can store the Dockerfiles for the images you build.

1. As an example, we will create a directory named My_DockerImages with the command:

mkdir My_DockerImages

2. Move into that directory and create a new empty file (Dockerfile) in it by typing:

cd My_DockerImages
touch Dockerfile

3. Open the file with a text editor of your choice. In this example, I am using nano:

nano Dockerfile

4. Then, add the following content:

FROM ubuntu
MAINTAINER rajeshk
RUN apt-get update
CMD ["echo", "Hello World"]

Let’s explain the meaning of each of the lines in the Dockerfile:

FROM: Defines the base of the image you are creating. You can start from a parent image (as in the example above) or a base image. When using a parent image, you are using an existing image on which you base a new one.

Using a base image means you are starting from scratch. There are also many base images out there that you can use, so you don’t need to create one in most cases.

FROM must be the first instruction you use when writing a Dockerfile.

You can also use a specific version of your base image, by appending : and the version_name at the end of the image name. For example:

FROM ubuntu:18.04

This is also relevant when you want a specific version of a Ruby or Python interpreter, a particular MySQL version, and so on, when you use an official base image for any of these tools.

MAINTAINER: Specifies the author of the image. Here you can type in your first and/or last name (or even add an email address). Note that MAINTAINER is deprecated in newer versions of Docker; the LABEL instruction is the recommended way to add this kind of metadata to an image.
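For example, the same information could be recorded with LABEL; the maintainer key used here is a common convention, not a requirement:

LABEL maintainer="rajeshk"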

RUN: Executes a command while building the image, adding a new layer on top of the previous one. In this example, the system refreshes the package repository index as it builds the Docker image. You can have more than one RUN instruction in a Dockerfile.
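For instance, a Dockerfile could chain several RUN instructions; installing curl here is only an illustration, not part of the image we build in this tutorial:

RUN apt-get update
# example of an additional RUN layer
RUN apt-get install -y curl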

CMD: Provides the default command for a container started from the image. If a Dockerfile contains multiple CMD instructions, only the last one takes effect. The system executes the default command when you run a container without specifying a command of your own.
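For example, once the image is built and tagged (we tag it first_test_image later in this tutorial), you can override its default CMD simply by passing a command to docker run; the echo text here is just an illustration:

docker run first_test_image echo "Goodbye World"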

5. Save and exit the file.

Build a Docker Image with Dockerfile

The basic syntax used to build an image using a Dockerfile is:

docker build [OPTIONS] PATH | URL | -

To build a Docker image, use the command:

docker build [location of your dockerfile]

If you are already in the directory where the Dockerfile is located, put a . instead of the location:

docker build .

By adding the -t flag, you can tag the new image with a name which will help you when dealing with multiple images:

docker build -t first_test_image .
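You can also append a version tag to the name after a colon; the 1.0 tag below is just an example:

docker build -t first_test_image:1.0 .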

Note:

You may receive an error saying Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker…

This means the user does not have permission to access the Docker engine. Solve this problem by adding sudo before the command or by running it as root. Also check our tutorial on how to run Docker commands without using sudo.
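If you prefer not to prefix every command with sudo, the usual fix is to add your user to the docker group and then start a new login session (or apply the group change in the current shell):

sudo usermod -aG docker $USER
# apply the new group membership in the current shell
newgrp docker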

The output of the build process will show Docker executing each instruction in the Dockerfile, step by step.

Once the image is successfully built, you can verify whether it is on the list of local images with the command:

docker images

The output should show first_test_image available in the repository.

Create a New Container from the Image

Launch a new Docker container based on the image you created in the previous steps. We will name the container “test” and create it with the command:

docker run --name test first_test_image

The Hello World message should appear in the command line.

Create a Dockerfile for a Sample Node.js Web Application

To get more clarity on the Dockerfile and its basics, let’s create another Dockerfile, this time for a Node.js application. We will create a simple web application in Node.js, build a Docker image for that application, and finally run a container from that image.

First, create a new folder named mynode_app inside the My_DockerImages directory and move into it:
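mkdir mynode_app
cd mynode_app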

Create a package.json file that describes your app and its dependencies:

{
  "name": "docker_web_app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "author": "Docker docker@example.com",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.1"
  }
}

With your new package.json file, run npm install. If you are using npm version 5 or later, this will generate a package-lock.json file which will be copied to your Docker image.
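Run the install from inside the mynode_app directory:

npm install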

Note: For this step, you must have Node.js and npm installed on your system.

Then, create a server.js file that defines a web app using the Express.js framework:

'use strict';
const express = require('express');

// Constants
const PORT = 8099;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
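If you like, you can sanity-check the app outside Docker before containerizing it; this assumes Node.js and the dependencies installed by npm install are available locally:

# start the app locally
node server.js

# in another terminal, request the page
curl http://localhost:8099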

Now create the Dockerfile as follows:

nano Dockerfile

Base image:

FROM node:12

Here we will use the LTS (long-term support) version 12 of Node, available from Docker Hub.

Note: When building a Docker image, you also want to keep the image size light. Avoiding large images speeds up building and deploying containers, so it is crucial to keep the image size to a minimum.
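For example, the official slimmer variants of the Node image (such as node:12-slim) can noticeably reduce the final image size; swapping the base image is a one-line change, although we keep node:12 for the rest of this tutorial:

FROM node:12-slim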

Copying source code:

Next, we create a directory to hold the application code inside the image; this will be the working directory for your application:

# Create app directory
WORKDIR /usr/src/app

This image comes with Node.js and npm already installed, so the next thing we need to do is install the app dependencies using the npm binary. Note that if you are using npm version 4 or earlier, a package-lock.json file will not be generated.

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install

First, we set the working directory using WORKDIR. We then copy files using the COPY instruction; the first argument is the source path, and the second is the destination path on the image file system. We copy package.json and install the project dependencies using npm install, which creates the node_modules directory.

Note: You might be wondering why we copied package.json before the source code. Docker images are made up of layers, created from the output of each instruction. Since package.json does not change as often as our source code, we don’t want to rebuild node_modules every time we run docker build.

Copying over the files that define our app dependencies and installing them immediately lets us take advantage of the Docker layer cache. The main benefit is a quicker build time.

Exposing a port:

Exposing port 8099 informs Docker which port the container listens on at runtime. Note that EXPOSE by itself does not publish the port; we still map it with the -p flag when we run the container later. Let’s modify the Dockerfile and expose port 8099.

EXPOSE 8099

Docker CMD:

The CMD instruction tells Docker how to run the application we packaged in the image. CMD follows the format CMD ["command", "argument1", "argument2"].

Here we will use node server.js to start your server:

CMD [ "node", "server.js" ]

Our Dockerfile should now look like this:

FROM node:12

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install

# Bundle app source
COPY . .

EXPOSE 8099
CMD [ "node", "server.js" ]

Create a .dockerignore file:

Create a .dockerignore file in the same directory as your Dockerfile with the following content:

node_modules
npm-debug.log

This prevents your local modules and debug logs from being copied into your Docker image and possibly overwriting modules installed within the image.

Building your image:

Go to the directory that has your Dockerfile and run the following command to build the Docker image. The -t flag lets you tag your image so it’s easier to find later using the docker images command:

docker build -t rjshk/node-webapp .

The build output will show each step being executed, and the image will be created with the name rjshk/node-webapp:latest.

Your image will now be listed by Docker:

docker images

Run the image:

Running your image with -d runs the container in detached mode, leaving the container running in the background.

The -p flag maps a public port on the host to a port inside the container. Run the image you previously built, giving the container the name node-server:

docker run -d --name node-server -p 49160:8099 rjshk/node-webapp:latest

Check that the new node-server container is running properly:

docker ps

The output should show the node-server container with an Up status.

Print the output of your app:

# Get container ID
$ docker ps

# Print app output
$ docker logs <container id>

# Example
Running on http://0.0.0.0:8099

Testing the node-server container

To test your app, get the port that Docker mapped. Here, Docker mapped port 8099 inside the container to port 49160 on your machine.

Now you can call your app using curl (install if needed via: sudo apt-get install curl):

curl -i localhost:49160

You should get an HTTP response whose body contains the Hello World message from the app.
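When you are finished testing, you can stop and remove the container:

docker stop node-server
docker rm node-server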

We hope this tutorial helped you get a simple Node.js application up and running on Docker.

Conclusion

Using a Dockerfile is a simple and fast way of building a Docker image. It automates the process by running through a script containing all the commands needed to build the image. In this tutorial, we learned how to create Docker images using a Dockerfile and basic instructions, and how to containerize a Node.js application with its own Dockerfile.
