Introduction
Over the last few weeks, I've been diving deep into Docker optimizations for my NestJS and Prisma application. One of the areas I was most focused on was reducing the Docker image size to improve deployment speed, resource efficiency, and overall performance in production environments.
At the beginning of this project, my Docker image was sitting at around 1.2 GB, which I felt was unnecessarily large, especially for a production-grade application. Not only was this increasing my deployment times, but it was also consuming more storage and resources than needed.
After some research and experimentation, I managed to cut the Docker image size nearly in half, bringing it down to 650 MB. This was a big win in terms of both resource efficiency and deployment performance.
The Challenge
Docker images, especially for Node.js applications with a heavy set of dependencies, can quickly become bloated. When working with frameworks like NestJS and tools like Prisma, the size can grow significantly, especially if you include unnecessary dependencies or files in the final image. This can affect the speed of deployments, impact CI/CD pipeline efficiency, and increase infrastructure costs.
The Approach: Multi-Stage Docker Builds
To solve this, I decided to break down the Dockerfile into multiple stages using Docker’s multi-stage build feature. The idea behind multi-stage builds is simple: we use intermediate stages to install and build everything we need, and then we copy only the necessary files into the final production image.
This approach ensures that the final image contains only the essential files—no development dependencies, build artifacts, or temporary files.
Here’s a breakdown of how I structured the build:
1. Production Dependencies Stage (prod-deps)
In the first stage, I installed only the production dependencies with yarn install --frozen-lockfile --production. This way, I ensured that no development dependencies would end up in the final image.
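The relevant stage from the final Dockerfile (shown in full further down) looks like this:

# ---------- PROD DEPS ----------
FROM node:22-alpine AS prod-deps
WORKDIR /app
RUN corepack enable
# Only the manifest and lockfile are copied, so this layer is cached
# as long as the dependencies don't change.
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production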
2. Build Stage (builder)
In the second stage, I performed the full application build process. This included installing both production and development dependencies and running Prisma’s npx prisma generate command to generate the necessary database client files. I then built the application using yarn build.
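The builder stage, again taken from the full Dockerfile below:

# ---------- BUILD ----------
FROM node:22-alpine AS builder
WORKDIR /app
RUN corepack enable
COPY package.json yarn.lock ./
# Full install: dev dependencies are needed for the build itself
RUN yarn install --frozen-lockfile
COPY . .
ARG DATABASE_URL
ENV DATABASE_URL=$DATABASE_URL
# Generate the Prisma client, then compile the NestJS app into dist/
RUN npx prisma generate
RUN yarn build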
3. Final Production Image (runner)
In the final stage, I copied over the production-ready application—only the built code and production dependencies—into a fresh Alpine-based image. This kept the final image lightweight and contained only what was necessary for running the application in production.
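The runner stage then pulls together only what is needed from the two earlier stages:

# ---------- RUN ----------
FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
# Production dependencies from the first stage, compiled output from the second
COPY --from=prod-deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/src/main.js"]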
Why Alpine-Based Images?
I chose the Alpine version of the official Node.js image for its small footprint. Alpine is a minimal Linux distribution that’s optimized for Docker, and it reduces the overhead typically associated with larger base images. This was a key part of reducing the size of the final image.
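As a quick sanity check, you can compare the base images locally. The exact numbers vary by Node.js version and platform, but the Alpine variant is consistently a fraction of the size of the Debian-based default:

docker pull node:22
docker pull node:22-alpine
docker images node --format "{{.Repository}}:{{.Tag}}  {{.Size}}"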
The Results
Before these optimizations, my Docker image was 1.2 GB, which is quite large for a web application of this kind. After implementing the changes above, the final image size dropped to 650 MB, cutting it nearly in half.
This optimization not only reduced the image size but also improved deployment times and resource utilization. By trimming the fat from my Docker images, I’ve made the development and deployment process much faster and more efficient.
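If you want to see where the remaining megabytes come from, the standard docker CLI can break an image down layer by layer (my-app here is just a placeholder tag for your own image):

docker images my-app     # overall image size
docker history my-app    # size contributed by each layer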
Dockerfile Optimization
Here’s the optimized Dockerfile I ended up with:
# ---------- PROD DEPS ----------
FROM node:22-alpine AS prod-deps
WORKDIR /app
RUN corepack enable
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production

# ---------- BUILD ----------
FROM node:22-alpine AS builder
WORKDIR /app
RUN corepack enable
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
ARG DATABASE_URL
ENV DATABASE_URL=$DATABASE_URL
RUN npx prisma generate
RUN yarn build

# ---------- RUN ----------
FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=prod-deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/src/main.js"]
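Building and running it looks roughly like this. DATABASE_URL is passed as a build argument so prisma generate can run during the build; the connection string, the .env file, and port 3000 (the NestJS default) are placeholders for your own setup:

docker build \
  --build-arg DATABASE_URL="postgresql://user:password@host:5432/mydb" \
  -t my-app .

docker run -p 3000:3000 --env-file .env my-app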
Key Takeaways:
- Multi-Stage Builds: By splitting the Dockerfile into separate stages, I kept the final image lean and free of unnecessary build tools or development dependencies.
- Alpine Images: Switching to Alpine-based images drastically reduced the image size, while still allowing everything to run smoothly in production.
- Environment Variables: I utilized build-time arguments (like DATABASE_URL) to pass any necessary environment variables into the build process, ensuring a seamless deployment workflow.
- Prisma Optimization: By ensuring Prisma-related files and commands were only executed during the build stage, I kept the final image clean and devoid of unnecessary Prisma-specific files.
Final Thoughts
Optimizing Docker images is an ongoing process that requires a combination of best practices, trial, and error. By leveraging multi-stage builds, Alpine images, and careful management of dependencies, I’ve been able to achieve a significant reduction in image size without compromising on performance or functionality.
This experience has been incredibly rewarding, and it serves as a reminder that continuous improvement is essential, especially in production environments.

