
Explaining Development Dockerfile & Docker Compose files

This uses a simple example with Node, React, Redis and Postgres containers all working together.

Client React Dev Dockerfile

For the development version of the Dockerfile we use a file named Dockerfile.dev as follows:

FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]

This file:

  • Takes a base node image running on Linux.
  • Sets the working directory to /app, so everything will take place in this folder.
  • Copies just the package.json over, so that if the image is rebuilt because a file copied later has changed, Docker can reuse the cached layer and this step (and the install below) doesn't need to re-run.
  • Installs the dependencies, etc. from package.json.
  • Copies over the remaining files (including package.json again, which is harmless since the install layer is already cached) ¯\_(ツ)_/¯
  • Sets the default command on starting a container instance to npm run start.
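One common addition (an assumption on top of the original setup, not shown above) is a .dockerignore file next to the Dockerfile, so that COPY . . doesn't drag a locally installed node_modules folder into the image:

```text
# .dockerignore: keep local installs and logs out of the build context
node_modules
npm-debug.log
```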

You'd then run the following in a terminal in the same folder as this file to build this:

docker build -f Dockerfile.dev .

We could now run this container with the generated id, for example:

docker run 71088f27a8f5
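Rather than copying the generated id each time, the image can be tagged at build time; the tag name client-dev here is just an illustration:

```shell
# Build with a human-readable tag instead of relying on the image id
docker build -f Dockerfile.dev -t client-dev .

# Run the container by tag
docker run client-dev
```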

Setting up Server and Worker Docker files

These will be fairly similar and have the same name as before (Dockerfile.dev), each in its relevant directory:

Server Dev Dockerfile

FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]

Since we can use nodemon to look for changes in the code and reload automatically, the default starting command is slightly different here in development.
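For that to work, the server's package.json needs a dev script that maps to nodemon; a minimal sketch, assuming an entry file called index.js and nodemon installed as a dev dependency:

```json
{
  "scripts": {
    "start": "node index.js",
    "dev": "nodemon index.js"
  }
}
```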

Worker Dev Dockerfile

This is the same as before:

FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]

Building and running the Server and Worker docker files

Images are built for both in the same way as before:

docker build -f Dockerfile.dev .

These can be tested again as before:

docker run 31488f27a2d7

Creating a docker compose file

This makes starting up multiple containers simpler and allows us to set up the configuration required, including Redis and Postgres.

We will need to specify items such as:

  • What Dockerfile to use
  • Define volumes so that we can update source code without having to rebuild the container.
  • Specify environment variables such as ports, etc.

In the root directory (one level above the client, worker and server folders) we create the file docker-compose.yml as:

version: '3'
services:
  postgres:
    image: 'postgres:latest'
  redis:
    image: 'redis:latest'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - /app/node_modules
      - ./worker:/app
  • The context option tells the build which folder to use as the build context when building the image.
  • The first volumes entry, - /app/node_modules, stops that directory from being overwritten by the bind mount below; the node_modules installed inside the container is left as-is.
  • The second volumes entry, - ./server:/app, bind-mounts the local server folder onto /app (the same path we set WORKDIR to). So, apart from the node_modules exception, all reads of the code inside the container are redirected to our local server folder, and we do not need to rebuild for every change we make to the source code.
  • The environment variables are explicitly defined with their values (see the Docker Hub documentation for each image for more detail on how this works).
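The api service can then pick these values up from process.env; a minimal sketch (the fallback values, for running the server outside of Compose, are assumptions):

```javascript
// Connection settings for the api service, read from the environment
// variables defined in docker-compose.yml. The fallbacks are assumptions
// for running outside of Compose.
const keys = {
  redisHost: process.env.REDIS_HOST || 'redis',
  redisPort: process.env.REDIS_PORT || '6379',
  pgUser: process.env.PGUSER || 'postgres',
  pgHost: process.env.PGHOST || 'postgres',
  pgDatabase: process.env.PGDATABASE || 'postgres',
  pgPassword: process.env.PGPASSWORD || 'postgres_password',
  pgPort: process.env.PGPORT || '5432',
};
```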

Note: Remember to look at Docker Hub for documentation and configuration.


We can run this from the same directory as the docker-compose.yml file with:

docker-compose up

At this point no ports are being exposed. We can use Nginx to handle some of the port mapping.

Adding nginx

We're going to configure nginx so that it will look at the path of incoming requests and route traffic as follows:

Incoming Path    Route to
/api/            Express Server (the api service)
/                React Server (the client service)

We use paths rather than port numbers because in a production environment we don't want to worry about juggling ports; references by path are clearer and can change without issues.

Note: After nginx does the routing for /api/ it will chop off the /api portion, so endpoints can be specified without it; for example /api/values/all will be forwarded as /values/all. This is an optional setting; it doesn't have to work like this!

To configure nginx we create the file default.conf and add this to the nginx image:

  • Tell nginx there is an upstream server at client:3000
  • Tell nginx there is an upstream server at api:5000

The servers (React server and Express server) can only be reached via nginx so are referred to as upstream servers.

client and api are both actual addresses (hostnames) that nginx will direct traffic to, and refer to the client and api services defined inside the docker-compose file (see above).

We will also listen on port 80 in the image.

Finally we set up the routing rules such that:

  • If any traffic comes to / send to the client upstream.
  • If any traffic comes to /api send to the api upstream.

The nginx/default.conf file will look like this:

upstream client {
  server client:3000;
}

upstream api {
  server api:5000;
}

server {
  listen 80;

  location / {
    proxy_pass http://client;
  }

  location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api;
  }
}

Note: The api service was originally called server, but I renamed references above from server to api to avoid confusion and potential conflicts with the nginx configuration, which uses server as a keyword.


NOTE: The directive rewrite /api/(.*) /$1 break; is the regex rewrite rule that removes the /api portion of the incoming endpoint URI when forwarding to the api container, so essentially a request of /api/someEndpoint will get forwarded as /someEndpoint to the api container.

The break keyword prevents further rewrite rules being applied.


Now we need to get this configuration file into nginx. We do this by building a custom image from the stock nginx image, copying the file in via a Dockerfile as usual.

As usual, look at the nginx documentation on Docker Hub for the customisation section, which explains how to configure the custom image.

Creating a new file nginx/Dockerfile.dev will look like this:

FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf

Now we have to add this to our docker-compose.yml file:

version: '3'
services:
  postgres:
    image: 'postgres:latest'
  redis:
    image: 'redis:latest'
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - /app/node_modules
      - ./worker:/app

NOTE: We called this nginx above, but it could have been anything (proxy for example).


NOTE: We mapped this to port 3050, but it could have been any unused port.


This can now be tested inside a terminal window and the browser. The first time we run it, it's likely to fail, as some code expects Redis to already be running and the first start-up is slower while images are built.

In this case the first instance of docker compose may need to be terminated and restarted.
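In that case the standard Compose commands apply; stop the stack and bring it back up:

```shell
# Stop and remove the containers from the failed first run
docker-compose down

# Start the stack again; the images are now already built and cached
docker-compose up
```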

To force a rebuild on the start use:

docker-compose up --build

To test via the nginx in the browser, we need to go to that port we mapped, so http://localhost:3050.
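Both routes can also be checked from a terminal with curl (assuming the api exposes the /api/values/all endpoint used as an example earlier):

```shell
# Root path: proxied to the React dev server (client upstream)
curl http://localhost:3050/

# /api path: /api is stripped, so this reaches the api service as /values/all
curl http://localhost:3050/api/values/all
```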

Opening Web Socket Connections

For React apps running in dev mode, not proxying the development websocket correctly can badly affect performance. This shows up as an error in the console window of the web browser:

(Websocket connection error shown in the browser console)

To fix this, the nginx server must be configured to let the required websocket connections through. We expose one additional route in the nginx configuration file:

location /sockjs-node {
  proxy_pass http://client;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "Upgrade";
}

So our default.conf config file now looks like this:

upstream client {
  server client:3000;
}

upstream api {
  server api:5000;
}

server {
  listen 80;

  location / {
    proxy_pass http://client;
  }

  location /sockjs-node {
    proxy_pass http://client;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
  }

  location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass http://api;
  }
}

We can now run the up command (with the build flag) and everything should work (another restart, without the build flag, may be needed).