Introduction
Running an application across multiple environments (for development and production) can be painful. Containers let you package your application once and run it in any environment you need. In this article, I want to show you how to containerize a Node.js application using Docker.
Why use containers?
Containers provide a standardized building block for your application. They include the application code, dependencies, runtime environment, and everything else required to run your code. All of this is packaged into a single bundle that can be executed anywhere Docker is supported (which is nearly everywhere). Such a portable environment lets you take your containers wherever you need them. For development, you can share them across your devices and set up your test applications with a few clicks rather than through complex set-up sequences. For production, they let you easily set up and run your application on any cloud provider or server.
Each container also behaves consistently. Because containers are created via a pre-defined set of steps, they are always set up and run the same way. Combined with all the dependencies bundled together, a container behaves the same way every time you run it. In a production environment, you can easily create numerous containers, or destroy and recreate them when you make changes. Containers therefore let you fully utilize the advantages of cloud computing.
How to containerize a Node.js application?
Create the node application
const express = require('express');
const app = express();
const port = 8080;
app.get('/', (req, res) => res.send('Hello World!'));
app.listen(port, () => console.log(`Example app listening on port ${port}!`));
First, you need a Node.js application. As an example, I will use a basic API running on Node.js and Express, created with the following steps:
- Init the node application using npm init
- Add the start script to package.json: "start": "node src/express.js"
- Install Express: npm install express
- Create a basic Express application (shown above)
- Run the application (npm start) and test that it works (via curl localhost:8080)
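After these steps, the package.json should look roughly like the following sketch (the version number of express is illustrative and will differ on your machine):

```json
{
  "name": "example-api",
  "version": "1.0.0",
  "scripts": {
    "start": "node src/express.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
```

The important parts for this tutorial are the start script, which the container will rely on, and the express dependency, which npm install will restore inside the image.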
Add docker to the application
The next step is to add a Dockerfile to the application. The Dockerfile is the recipe that Docker uses to create the image for your application.
FROM node:19
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "src/express.js"]
- Select your base image. Here we use node:19, a recent Node.js release. The base image gives us a setup with Node.js already installed, so we only have to make application-specific changes.
- Create a working directory (I use the name app) for Docker. Docker copies the files and performs all operations inside this directory. Docker creates a working directory automatically when it is omitted, but it is best practice to set one explicitly.
- Now that the basic Docker directory is set up, we need to get our files into it. The COPY command copies the files we need (at this point, only the package.json) into the working directory.
- After copying the package.json, we run npm install to install all the dependencies the application requires.
- Next, copy the application code into the working directory. We do a one-to-one copy, so the command is COPY . .
- Then expose the port of our application so that it can be accessed from the outside. In this case it is port 8080, but adjust it to your application.
- Finally, the container needs to start the application. For this, we use CMD ["node", "src/express.js"], which provides the command and its arguments (the shell equivalent is node src/express.js).
Now you have a complete recipe to create the application image for your container. If you need any other steps, you can add them to the end of this recipe.
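One optional companion to this recipe, not strictly required but commonly added, is a .dockerignore file. Without it, COPY . . also copies your local node_modules folder into the image, which shadows the clean npm install done inside the container and bloats the build context. A minimal sketch:

```
node_modules
npm-debug.log
.git
```

Docker reads this file from the build context root and skips the listed paths during COPY, so the image only contains the dependencies installed by the RUN npm install step.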
Creating an image and running the container
Creating a docker image
Now that we have the instructions/recipe to create the application, we need to execute them to create an image. A docker image is an executable file used to run the application, and docker will use it to start up an instance (container).
The docker build command executes the Dockerfile and creates the Docker image. You should specify a name for your image with the -t flag so that you can identify your images.
The name of a Docker image consists of two parts: (1) the name of the repository and (2) the version tag. If you provide no version tag, the latest tag is added automatically.
[image name] = [repository]:[tag]
You should add your Docker Hub username to the repository name if you want to publish your image. For my project, the image name would be kaykleinvogel/example-api:v0.1.0-alpha. Finally, we must specify where the build context is located (the directory containing the Dockerfile and your application files). If you are inside that directory, you can simply use . for the current directory. With this, our final command is:
docker build -t kaykleinvogel/example-api:v0.1.0-alpha .
You can confirm the creation of your image with the docker images command. You should see your image with the corresponding name and version.
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
kaykleinvogel/example-api v0.1.0-alpha 87c3980396ff 2 minutes ago 1GB
If you are using the Docker Desktop application, you can also verify the creation there.
Creating the container
Now that the image is created, we can create an instance of our application. I will go through the creation both in the CLI and in the Docker Desktop application.
docker run --name test-api-container -p 8080:8080 -d kaykleinvogel/example-api:v0.1.0-alpha
So let us go through each argument of the command:
- docker run tells Docker that we want to run an instance of an image.
- --name test-api-container sets the name of the container. You can choose the name freely.
- -p 8080:8080 is the port forwarding. You can choose whichever local port you want to forward to the container. To keep a clear overview, I do a one-to-one mapping from local port 8080 to container port 8080. The order of the ports is local port:container port.
- -d runs the container in detached mode, i.e. in the background.
- kaykleinvogel/example-api:v0.1.0-alpha is the image and version from which Docker will create the container. We use the image we previously created.
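A side note on the port mapping: the sample app hard-codes 8080, so the container side of the -p mapping is fixed. A common tweak (my own suggestion, not part of the original app) is to read the port from an environment variable, so you can override it at run time with docker run -e PORT=3000. A minimal sketch:

```javascript
// Resolve the port the app should listen on. Docker can inject PORT
// via `docker run -e PORT=3000 ...`; otherwise we fall back to 8080.
function resolvePort(env) {
  const parsed = Number(env.PORT);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : 8080;
}

const port = resolvePort(process.env);
console.log(`app would listen on port ${port}`); // 8080 unless PORT is set
```

With this in place, app.listen(port, ...) picks up whatever port the container environment provides, and only the -p mapping needs to change.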
You should get the container ID as output once the container is running. You can use the docker ps command to check the currently running containers.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea7c2b9f618b kaykleinvogel/example-api:v0.1.0-alpha "docker-entrypoint.s…" 2 seconds ago Up 2 seconds 0.0.0.0:8080->8080/tcp test-api-container
To check whether the application works, we can send a simple curl request to localhost:8080. The port forwarding automatically forwards the request to the corresponding port on the container, and we receive our response.
$ curl localhost:8080
Hello World!
If you want to stop the container, use the docker stop command with the container ID. Docker then prints the ID as output to confirm that the container was stopped.
$ docker stop ea7c2b9f618b
ea7c2b9f618b
Creating a container using Docker Desktop
If you prefer the Docker Desktop application, you can easily do the same thing there. Select your target image in the images list, hit Run, and fill out the parameters (under optional settings). To get a setup identical to our CLI command, set the following parameters:
- Container name: test-api-container
- Host port: 8080
After hitting Run, the container is up and working.
To check if everything is working, we can send the same curl request as above or simply navigate there using the web browser.
To stop the container, you can go to the container overview and hit stop.
Conclusion
As you can see, creating a container from a Node.js application is straightforward and gives you a portable image that you can run anywhere. However, the image is quite large (around 1 GB here) and should be optimized further if you want to use it in production. I will publish a second part shortly, showing how to optimize the image for production and deploy it to the cloud.
So stay tuned and curious.