Dockerizing Node.js, Prisma ORM, and Postgres – A Complete Tutorial

Want to build a Node.js application with Prisma ORM and PostgreSQL without the hassle of a complex setup? This tutorial will show you how to leverage Docker to get up and running quickly. We’ll create a simple Todo app that demonstrates the core concepts, and you’ll learn patterns you can apply to larger projects.

Before we dive in, make sure you have Docker installed and some familiarity with Node.js and Prisma ORM. Let’s get started!

Generate and set up Prisma Schema

If you don’t want to set up the app from scratch, you can clone the repo with the final result.

First, install Prisma as a dev dependency along with the runtime packages the app uses, then initialize Prisma:

npm install prisma --save-dev
npm install @prisma/client express body-parser
npx prisma init --datasource-provider postgresql

This will create a prisma folder in your project root with a schema.prisma file in it. This file contains the database schema. Let’s create a simple one for our Todo app:

generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Todo {
  id     Int    @id @default(autoincrement())
  title  String
}

As you can see, the Todo model is very simple, with only an id and a title. You have probably noticed that the datasource URL comes from an environment variable – you will set it up in your docker-compose files below.

The application code

Create an index.js file and add the following code. If you have used Express before, this will look familiar: we set up three endpoints for fetching, creating, and updating todos.

import { PrismaClient } from '@prisma/client'
import express from 'express'
import bodyParser from 'body-parser'

const prisma = new PrismaClient()

const app = express()
app.use(bodyParser.json())

app.get('/todos', async (req, res) => {
    const allTodos = await prisma.todo.findMany();

    res.json(allTodos)
})

app.post('/todos', async (req, res) => {
    const post = await prisma.todo.create({
        data: { ...req.body }
    })
    res.json(post)
})

app.put('/todos/:id', async (req, res) => {
    const { id } = req.params
    const { title } = req.body;
    const post = await prisma.todo.update({
        where: { id: Number(id) },
        data: { title },
    })
    res.json(post)
})

const server = app.listen(3005, () => {
    console.log('app listening on port 3005')
})

Also, since index.js uses ES module imports, make sure your package.json contains "type": "module", and add this to its scripts:

"start": "nodemon index.js"
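
For reference, a minimal package.json for this setup might look like the following sketch (the package name and version numbers are illustrative, not pinned requirements):

```json
{
  "name": "todo-app",
  "type": "module",
  "scripts": {
    "start": "nodemon index.js"
  },
  "dependencies": {
    "@prisma/client": "^6.0.0",
    "body-parser": "^1.20.0",
    "express": "^4.19.0"
  },
  "devDependencies": {
    "prisma": "^6.0.0"
  }
}
```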

Docker setup

Our Docker environment needs to handle two core services: the main application and a PostgreSQL database. While this setup works for most scenarios, handling database schema changes requires a bit more complexity. To manage migrations effectively, we’ll create two separate application containers that share the same database connection:

  • A migration container to handle schema updates
  • The main application container

Although this might seem like extra overhead at first, you’ll see how this separation makes managing database changes much smoother. We will begin with the migration container since it will handle our initial schema synchronization with the database.

First, make sure to add the following environment variables in your .env file:

POSTGRES_PASSWORD=your-pass
POSTGRES_USER=your-user
POSTGRES_DB=your-db-name
POSTGRES_PORT=5432

Migration container

Let’s create a Dockerfile.migrate file with the following content:

FROM node:22-alpine

# Required for schema generation
RUN apk add --no-cache openssl

WORKDIR /app

COPY package*.json ./

# Copy the prisma schema before npm install
COPY prisma ./prisma

# This script allows us to detect when Postgres is ready so that we can run our commands without connection errors
# Source: https://github.com/eficode/wait-for/blob/master/wait-for
COPY wait-for.sh .
RUN chmod +x ./wait-for.sh

RUN npm install

RUN npx prisma generate

Now that we have the image, let’s also create docker-compose.migrate.yaml:

version: '3'
services:
  postgres:
    image: postgres:14-alpine
    ports:
      - ${POSTGRES_PORT}:5432
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_DB=${POSTGRES_DB}
    networks:
      - default-network
  node-app-migrate:
    container_name: node-app-migrate
    build:
      context: .
      dockerfile: Dockerfile.migrate
    environment:
      # Inside the Docker network, Postgres is reachable at its container port 5432
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
    depends_on:
      - postgres
    networks:
      - default-network
    command: ./wait-for.sh postgres:5432 -- npx prisma migrate dev --name ${PRISMA_MIGRATION_NAME}
    volumes:
      - .:/app
      - node_modules:/app/node_modules
volumes:
  db_data:
  node_modules:
networks:
  default-network:

Key points about our compose file:

  • The migration container uses a dedicated Dockerfile.migrate instead of the default Dockerfile. This separation allows us to handle migrations independently from the main application.
  • The wait-for.sh script ensures the database is ready to accept connections. Once connected, we run the migration with a name passed through the PRISMA_MIGRATION_NAME environment variable when launching docker-compose up.

Our configuration follows Docker best practices by:

  • Storing database files in a named volume for data persistence
  • Maintaining node_modules in a separate volume for efficient dependency management
  • Creating a dedicated network to enable communication between the migration container and PostgreSQL

Let’s now run the initial migration with the following command:

PRISMA_MIGRATION_NAME=init docker-compose -f ./docker-compose.migrate.yaml up --abort-on-container-exit

We set the PRISMA_MIGRATION_NAME environment variable, point docker-compose at the migrate compose file with -f, and pass --abort-on-container-exit so that the whole stack shuts down once the migration container finishes.

Important: During early database prototyping, you can use npx prisma db push instead of npx prisma migrate dev in docker-compose.migrate.yaml. This approach offers:

  • No need to specify migration names when running docker-compose up
  • Faster schema updates

However, this is not recommended for production, since you won’t have a documented history of schema changes. The Prisma documentation covers the difference between the two approaches in detail.
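
With that approach, only the command line of the node-app-migrate service changes – a sketch, with the rest of the service definition staying exactly as above:

```yaml
  node-app-migrate:
    # build, environment, depends_on, networks, and volumes unchanged
    command: ./wait-for.sh postgres:5432 -- npx prisma db push
```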

The application container

Let’s set up the main application container for local development by creating a Dockerfile. This container will run our Node.js application now that our database schema is synchronized:

FROM node:22-alpine

# Required for schema generation
RUN apk add --no-cache openssl

WORKDIR /app

COPY package*.json ./

# Copy the prisma schema before npm install
COPY prisma ./prisma

RUN npm install

# nodemon restarts the app automatically on file changes and after crashes
RUN npm install -g nodemon

COPY . .

# Make sure that the wait-for script has the necessary permissions
RUN chmod +x ./wait-for.sh

EXPOSE 3005

Unlike the migration image, here we need to copy all the application code and expose the port on which our app runs. Let’s now create docker-compose.yaml:

version: '3'
services:
  postgres:
    image: postgres:14-alpine
    restart: always
    ports:
      - ${POSTGRES_PORT}:5432
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_DB=${POSTGRES_DB}
    networks:
      - default-network
  node-app:
    build: .
    environment:
      # Inside the Docker network, Postgres is reachable at its container port 5432
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    depends_on:
      - postgres
    restart: always
    networks:
      - default-network
    ports:
      - 3005:3005
    command: sh ./wait-for.sh postgres:5432 -- npm run start
volumes:
  db_data:
  node_modules:
networks:
  default-network:

The Postgres setup is the same as in the migration compose file, but the application service differs: we bind-mount our application code into the container so that we can edit it while the container is running, and nodemon picks up the changes. Let’s now run the application with docker-compose up.

In the terminal, you should see app listening on port 3005. Now we can create our first todo:
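
For example, with curl from another terminal (the todo title here is just an example, assuming the app is reachable on localhost:3005):

```shell
# create a todo via the POST /todos endpoint
curl -X POST http://localhost:3005/todos \
  -H "Content-Type: application/json" \
  -d '{"title": "Buy groceries"}'
```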

And fetch all todos:
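
Again with curl, assuming the same local setup:

```shell
# list all todos via the GET /todos endpoint
curl http://localhost:3005/todos
```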

Great, the app is running! Let’s now make a schema change and verify that the migration succeeds.

Schema change and migration

In schema.prisma, we will add a new status property to our Todo model with a default value of “not started”:

model Todo {
  id       Int     @id @default(autoincrement())
  title    String
  status   String   @default("not started")
}

Let’s run another migration, which we will call “add_status”:

PRISMA_MIGRATION_NAME=add_status docker-compose -f ./docker-compose.migrate.yaml up --abort-on-container-exit

And then run our app:

docker-compose up

Now, let’s check that the new property is there:
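
Fetching the todos again should now show the new field, since the column was added with a default value (assuming the app is running on localhost:3005 as above):

```shell
# each existing todo should now carry "status": "not started"
curl http://localhost:3005/todos
```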

Congratulations! You now have a fully dockerized development environment where you can build your application and manage database migrations seamlessly.

Want to take this further? Try creating a production-ready Docker image as a next step. The concepts we covered here provide a solid foundation for more advanced configurations.

You can find all the code from this tutorial in the repository link. Have suggestions or questions? I’d love to hear your feedback and ideas for improving this setup.