docker-compose for NestJS project
Overview
Recently I was asked to prepare a NestJS development environment for one of our development teams so that they could start the project quickly, without any problems caused by locally installed software and version mismatches. Docker is obviously the best answer, so I decided to start composing a docker-compose file for this task. The development environment needs the following components:
- NestJS as the backend
- PostgreSQL as the database
- Nginx as the web server
docker-compose.yml
Application part
- I started by creating a folder for the whole project:
mkdir my-awesome-project
cd my-awesome-project
- Assuming you have already installed Node.js on your computer, the following commands, taken from the NestJS documentation, will create a NestJS project named `app` inside `my-awesome-project`:
$ npm i -g @nestjs/cli
$ nest new app
- The NestJS CLI will add the necessary files, and we can verify the initialization step by running the following commands:
$ npm install
$ npm run start
※ Once the application is running, open your browser and navigate to http://localhost:3000/. You should see the Hello World! message.
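For reference, that response comes from the controller the CLI scaffolds. At the time of writing, app/src/app.controller.ts looks roughly like this (it may differ slightly between CLI versions):

// app/src/app.controller.ts — generated by the Nest CLI
import { Controller, Get } from '@nestjs/common';
import { AppService } from './app.service';

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  // GET / returns the 'Hello World!' string from AppService
  @Get()
  getHello(): string {
    return this.appService.getHello();
  }
}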
- Next, we will create a docker-compose.yml file for our NestJS project:
version: '3.8'

services:
  api:
    container_name: "hrm_api_${NODE_ENV}"
    image: "hrm_api_${NODE_ENV}"
    environment:
      - NODE_ENV=${NODE_ENV}
    build:
      context: ./app
      target: "${NODE_ENV}"
      dockerfile: Dockerfile
    entrypoint: ["npm", "run", "start:${NODE_ENV}"]
    env_file:
      - .env
    ports:
      - "9229:9229"
    networks:
      - nestjs-network
    volumes:
      - ./app:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped

networks:
  nestjs-network:
There are a few things to notice here.
- First, we use a .env file to define some useful environment variables for docker-compose. In the code above, ${NODE_ENV} will be substituted with the value defined in the .env file at run time. We create the .env file in the same folder as docker-compose.yml, and for now it contains only the following content:
NODE_ENV=dev
- We are going to use NODE_ENV's value later in the Dockerfile, which is why we need the following lines (in the list form, Compose expects KEY=VALUE entries):
environment:
  - NODE_ENV=${NODE_ENV}
- The default network is named nestjs-network.
- The entrypoint tells Docker to run npm run start:dev (after NODE_ENV is substituted) to start our NestJS project; the scaffolded package.json defines start:dev as nest start --watch.
- We then create the Dockerfile in the app folder to define the environment our NestJS app is going to run in. The file is fairly self-explanatory: we build from a node-alpine image, install the postgresql-client package for later use, copy package.json and package-lock.json in, and run npm install to install all the necessary modules. Please note that we use a separate volume for the node modules, /usr/src/app/node_modules, so that running npm install from the host will not affect the container. (A short sketch of how the app itself can use NODE_ENV at runtime follows the Dockerfile below.)
FROM node:fermium-alpine as dev
RUN apk --update add postgresql-client
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
RUN npm install glob rimraf
COPY . .
RUN npm run build
FROM node:fermium-alpine as prod
RUN apk --update add postgresql-client
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
COPY --from=dev /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
Database part
The next step is the setup of our PostgreSQL database. Below the api part, I added the following definitions:
  postgres:
    container_name: postgres
    image: postgres:latest
    networks:
      - nestjs-network
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: "${POSTGRES_DB_PREFIX}_${POSTGRES_DB_NAME}"
      PGDATA: /var/lib/postgresql/data
    ports:
      - "${POSTGRES_DB_PORT}:${POSTGRES_DB_PORT}"
    volumes:
      - ./pgdata/data:/var/lib/postgresql/data
Again we use the .env file for environment variables, and we also mounted the database's data directory to a host folder. The following definitions will be added to the .env file; please adjust the values to suit your needs, the ones below are just samples:
#Postgres
POSTGRES_USER=user
POSTGRES_PASSWORD=secret
POSTGRES_DB_PREFIX=db
POSTGRES_DB_NAME=nestjs
POSTGRES_DB_HOST=postgres
POSTGRES_DB_PORT=5432
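These same variables are available to the application through env_file, so it can build its database connection from them. Below is a hedged sketch of app/src/app.module.ts assuming you add the typeorm and @nestjs/typeorm packages, which are not part of the scaffold; with the sample values above, the database name resolves to db_nestjs:

// app/src/app.module.ts — a sketch assuming typeorm + @nestjs/typeorm are installed
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: process.env.POSTGRES_DB_HOST, // 'postgres' = the compose service name
      port: parseInt(process.env.POSTGRES_DB_PORT ?? '5432', 10),
      username: process.env.POSTGRES_USER,
      password: process.env.POSTGRES_PASSWORD,
      database: `${process.env.POSTGRES_DB_PREFIX}_${process.env.POSTGRES_DB_NAME}`,
      autoLoadEntities: true, // register entities contributed by feature modules
      synchronize: process.env.NODE_ENV !== 'production', // never auto-sync schema in prod
    }),
  ],
})
export class AppModule {}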
PGAdmin
Without a web interface it could be hard for developers to inspect the content of the database, so I added a pgAdmin container for this purpose:
  pgadmin:
    links:
      - postgres:postgres
    container_name: pgadmin
    image: dpage/pgadmin4
    volumes:
      - ./pgdata/pgadmin:/root/.pgadmin
    env_file:
      - .env
    networks:
      - nestjs-network
The user/password for pgAdmin will also be defined in the .env file:
PGADMIN_DEFAULT_EMAIL=admin@admin.com
PGADMIN_DEFAULT_PASSWORD=password
Nginx part
Now everything is almost ready, and we need a web server sitting in the middle to serve all the requests and route them to the appropriate containers:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    volumes:
      - ./nginx/templates:/etc/nginx/templates
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    environment:
      - NGINX_PORT=${NGINX_PORT}
      - BACKEND_PORT=${BACKEND_PORT}
    ports:
      - "80:${NGINX_PORT}"
    depends_on:
      - api
      - postgres
    networks:
      - nestjs-network
Some environment variables are also defined for the Nginx container:
NGINX_PORT=80
BACKEND_PORT=3000
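Note that nginx will proxy to http://api:${BACKEND_PORT}, so the NestJS app must listen on that same port inside its container. The scaffolded main.ts hard-codes 3000, which happens to match the sample value; to keep the two in sync you could read the variable instead, since BACKEND_PORT also reaches the api container through its env_file (a hedged excerpt of the bootstrap function):

// app/src/main.ts (excerpt) — bind the same port that nginx proxies to
await app.listen(parseInt(process.env.BACKEND_PORT ?? '3000', 10));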
The key here is the configuration for nginx. The main file, nginx/nginx.conf, looks like this:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
- templates/default.conf.template:
server {
    listen ${NGINX_PORT};
    server_name _;
    charset utf-8;
    client_max_body_size 50M;

    location /pgadmin {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Script-Name /pgadmin;
        proxy_redirect off;
        # proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://pgadmin;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_next_upstream error timeout http_502 http_503 http_504;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        # proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://api:${BACKEND_PORT};
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_next_upstream error timeout http_502 http_503 http_504;
    }
}
※ As you can see, the Nginx config routes requests to each container according to the URL path.
Startup order
We need the API container to start after the PostgreSQL container, otherwise we could get connection errors. depends_on could be used; however, as the Docker documentation points out, on startup Compose does not wait until a container is "ready" (whatever that means for your particular application), only until it is running, and there is a good reason for this. The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
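If you use TypeORM through @nestjs/typeorm (as sketched in the Database part), this retry behavior is built in and can be tuned on the forRoot() call; the values below are the library defaults (again a hedged excerpt, not the author's confirmed setup):

// app/src/app.module.ts (excerpt) — extends the TypeOrmModule.forRoot() sketch above
TypeOrmModule.forRoot({
  // ...the same connection options as in the Database part...
  retryAttempts: 10, // give up after 10 failed connection attempts (library default)
  retryDelay: 3000,  // wait 3000 ms between attempts (library default)
}),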
Besides application-level retries, we can also make the container itself wait for the database before starting the app, which needs some modification to our api part:
  api:
    container_name: "hrm_api_${NODE_ENV}"
    image: "hrm_api_${NODE_ENV}"
    environment:
      - NODE_ENV=${NODE_ENV}
    build:
      context: ./app
      target: "${NODE_ENV}"
      dockerfile: Dockerfile
    entrypoint: ["./wait-for-postgres.sh", "npm", "run", "start:${NODE_ENV}"]
    env_file:
      - .env
    ports:
      - "9229:9229"
    depends_on:
      - postgres
    networks:
      - nestjs-network
    volumes:
      - ./app:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
Here we create a small script which pings the PostgreSQL server until it is ready (this is why we installed the postgresql-client package in the Dockerfile). Place it in the app folder and make it executable (chmod +x wait-for-postgres.sh). The script is quite simple:
#!/bin/sh
# wait-for-postgres.sh
until PGPASSWORD=$POSTGRES_PASSWORD PGUSER=$POSTGRES_USER PGHOST=$POSTGRES_DB_HOST PGDATABASE=$POSTGRES_DB_PREFIX"_"$POSTGRES_DB_NAME psql -c '\q'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
>&2 echo "Postgres is up"
exec "$@"
# End
So now everything is ready; run the following commands to build and start our containers:
$ docker-compose build
$ docker-compose up
More code can be found at: https://github.com/thanhpt-25/hrm-backend