Multi-Stage Dockerfile
Building a Production Image
Up to this point, our local setup has been intentionally split:
- MongoDB in Docker
- Express in Docker
- Vite running natively on the host during development
That was the right move for local learning.
But production is a different story.
We do not want to deploy a Vite dev server.
We do not want a container bloated with build tools it no longer needs at runtime.
And we do not want to manually glue together separate frontend and backend deployment stories for this app.
The Production Image Strategy
Included
- Express server code
- backend production dependencies
- compiled frontend assets
Not Included
- Vite dev server
- frontend source build tooling in the runtime stage
- unnecessary development baggage
That keeps our production image smaller and more focused.
Not microscopic. Not magical. Just appropriately trimmed.
Root Level Dockerfile
Earlier in the lesson, we created a server/Dockerfile so Docker Compose could build and run the API service locally.
That file was fine for the local backend container.
Now we want a different Dockerfile with a different purpose:
- the old one was for the local API service
- this new one is for the full production application image
That means this new Dockerfile belongs at the root of the project, because it needs access to both:
- server/
- client/
It is completely normal for local development and production deployment to use different container strategies. Our earlier API-only Dockerfile helped Compose run the backend. This new root Dockerfile packages the full app for deployment.
Our existing Dockerfile can stay where it is. It won’t interfere.
What a Multi-Stage Build Does
A multi-stage build lets us use one temporary stage to build the frontend, and a second, lighter stage to run the final application.
That means:
Builder Stage
- install frontend dependencies
- run the Vite build
- produce client/dist
Runtime Stage
- install backend dependencies only
- copy the Express server code
- copy the already-built frontend assets
- start the Node server
This way our final image gets the compiled frontend, without the toolchain baggage.
Create a Root Dockerfile
At the root of the project, create a new file named:
Dockerfile

Add this:
# ------ STAGE 1: BUILD THE FRONTEND ------
FROM node:20-slim AS builder

WORKDIR /app

# Install frontend dependencies
COPY client/package*.json ./client/
RUN npm install --prefix client

# Copy frontend source
COPY client/ ./client/

# Build the Vite frontend
RUN npm run build --prefix client

# ------ STAGE 2: RUN THE APP ------
FROM node:20-alpine

WORKDIR /app

# Install backend dependencies only
COPY server/package*.json ./server/
RUN npm install --omit=dev --prefix server

# Copy backend source
COPY server/ ./server/

# Copy compiled frontend assets from the builder stage
COPY --from=builder /app/client/dist ./client/dist

# The app listens on a dynamic host-provided port at runtime
EXPOSE 3000

CMD ["node", "server/index.js"]

This is our production image blueprint.
Clean. Focused. No extra passengers.
Stage 1: Build the Frontend
The first stage is named builder:

FROM node:20-slim AS builder

We use it to compile the Vite app.
Install frontend dependencies
COPY client/package*.json ./client/
RUN npm install --prefix client

We copy the frontend package metadata first so Docker can cache dependency installation more effectively.
Copy frontend source
COPY client/ ./client/

Now the build stage has the actual client code.
Build the frontend
RUN npm run build --prefix client

This generates:

client/dist

That folder is the whole reason this stage exists.
Once the build is done, the final image does not need Vite, Tailwind’s build tooling, or the frontend source dependencies anymore.
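For reference, the contents of client/dist after a Vite build typically look something like this (the exact file names and hashes will differ from build to build):

client/dist/
├── index.html
└── assets/
    ├── index-BxYz123.js
    └── index-CdEf456.css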
Why --prefix Is Useful Here
We are not using a root package.json, so commands need to target the correct subfolder.
That is what --prefix does.
For example:
RUN npm install --prefix client

means:
- run npm install
- but do it inside the client directory
Likewise:
RUN npm install --omit=dev --prefix server

means:
- install the backend dependencies
- inside the server directory
- excluding development-only packages
It is a tidy way to work with a split project structure without bouncing between multiple WORKDIR changes (sketched below for comparison).
Since Voyager’s Log uses separate client and server package files,
--prefix gives us a clean way to install and build each side without
pretending everything lives in one root-level Node project.
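For contrast, here is a rough sketch of how the frontend steps might look without --prefix, leaning on WORKDIR changes instead. This is illustrative only, not part of our actual Dockerfile:

# Hypothetical alternative: hop into the client directory instead of using --prefix
WORKDIR /app/client
# COPY sources are still relative to the build context root
COPY client/package*.json ./
RUN npm install
COPY client/ ./
RUN npm run build
# Hop back out for whatever comes next
WORKDIR /app

It works, but the --prefix version keeps the whole stage anchored at /app, which is easier to read at a glance.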
Stage 2: Build the Runtime Image
The second stage is the actual production runtime image:

FROM node:20-alpine

This image is lighter than the builder stage and only contains what the running app needs.
Install backend dependencies
COPY server/package*.json ./server/
RUN npm install --omit=dev --prefix server

This installs the server dependencies only.
We do not install frontend build dependencies here, because the frontend has already been compiled.
Copy backend source
COPY server/ ./server/

Now the Express app is in place.
Copy compiled frontend assets from the builder stage
COPY --from=builder /app/client/dist ./client/dist

This is the most important line in the file.
It reaches into the earlier build stage, grabs the compiled static frontend assets, and places them into the runtime image.
That means the final container includes:
- backend code in server/
- built frontend assets in client/dist
And because Page 07 taught Express how to serve client/dist in production, the runtime image now has everything it needs.
The runtime container serves compiled frontend files. It does not run the Vite dev server. That is a development tool, not part of the production runtime.
Why This Works
The server is already set up to serve the compiled frontend in production:

if (process.env.NODE_ENV === 'production') {
  const distPath = path.join(__dirname, '../client/dist');

  app.use(express.static(distPath));

  app.get('/{*splat}', (req, res) => {
    res.sendFile(path.join(distPath, 'index.html'));
  });
}

This Dockerfile makes sure that those compiled frontend assets are actually present inside the final image.
So the two pieces work together:
- the server knows how to serve client/dist
- the Dockerfile makes sure client/dist exists in the runtime image
That is the handoff.
Without the server logic, the image would not know how to serve the frontend.
Without this Dockerfile, the server would have nothing to serve.
Add a Root .dockerignore
Because this new Dockerfile builds from the project root, we should also create a root .dockerignore.
At the root of the project, add:
.git
.gitignore
.env
client/node_modules
server/node_modules
client/dist

This helps keep the build context smaller and cleaner.
It prevents Docker from shoveling unnecessary local clutter into the build process.
Building the Image Locally
To test the root Dockerfile locally, from the project root run:
docker build -t voyagers-log-production .

That builds the full production-style image.
We do not need to fully switch our local workflow over to this image yet, but it is useful to understand that this image is now the deployable artifact.
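As a quick sanity check, listing the image shows how large it came out; it should be noticeably slimmer than it would be if the frontend toolchain had been baked into the runtime stage:

docker images voyagers-log-production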
Testing the Production Image Locally
For local testing we need to use a second environment file just for the production image.
Why?
Our Compose setup uses:
MONGO_URI=mongodb://db:27017/voyagers_log

That works because db is the Mongo service name inside the Compose network.
But if we run the production image directly with docker run, there is no Compose network, so db will not exist as a hostname.
Recommended Setup
Keep:
- .env for Docker Compose
- .env.prod for local production-image testing
Example:
MONGO_URI=mongodb://host.docker.internal:27017/voyagers_log
PORT=3000
SESSION_SECRET=super-secret-dev-key-change-this
ADMIN_USERNAME=notmyname
ADMIN_PASSWORD=youhadbetterbeusingstrongpasswordsbynow

This lets us test the production image locally without changing the existing Compose setup.
It also keeps local secrets out of the Docker image itself.
Files like .env, .env.prod, and .env.local contain secrets.
We must always ensure that they are never committed to Git or copied into a Docker image build by accident.
A safe pattern in both .gitignore and .dockerignore is:
.env*
!.env.example
GitHub’s Node.js .gitignore template includes this already, but we need to be sure to add it to .dockerignore ourselves.
Local Production Needs a DB
Running the production image locally does not automatically give it a MongoDB instance.
If we test the image with docker run, make sure MongoDB is already running somewhere the container can reach.
For this project, the easiest option is usually to start just the database service:
docker compose up -d db
Then the production container can connect using the MONGO_URI value from your .env.prod file.
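If you are not sure whether the database container is actually up, one quick way to check before running the image is:

docker compose ps db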
Run it!!
Finally, we can run our production image locally, with our pseudo-production environment variables, like this:
docker run --rm --name voyagers-log-prod -p 3000:3000 --env-file .env.prod -e NODE_ENV=production voyagers-log-production
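Once the container is up, a quick smoke test from another terminal should return the app’s HTML, assuming the environment variables and database connection are in order:

curl http://localhost:3000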
Current Project Structure
Our project now looks roughly like this:
voyagers-log/
├── compose.yaml
├── Dockerfile
├── .dockerignore
├── .gitignore
├── .env
├── .env.prod
├── server/
│   ├── package.json
│   ├── package-lock.json
│   ├── index.js
│   ├── config/
│   ├── middleware/
│   └── models/
└── client/
    ├── package.json
    ├── package-lock.json
    ├── vite.config.js
    ├── index.html
    └── src/

Notice what is not there:
- no root package.json
That matters.
Our dependencies still live separately in:
- server/package.json
- client/package.json
So our Dockerfile needs to respect that structure instead of pretending we built a monorepo package setup we never actually introduced.
What We Have Accomplished
Voyagers Log now has a production-oriented container strategy.
We have:
- kept the local API-only Docker setup for Compose where it made sense
- introduced a root production Dockerfile for the full app
- compiled the Vite frontend in a builder stage
- copied only the built frontend assets into the runtime image
- installed only backend dependencies in the final container
- aligned the image with the Express production serving logic from the previous page
That is a serious upgrade.
The app is no longer just “something we can run locally.”
It is now something we can package cleanly for a real platform.
Extra Bits & Bytes
Section titled “Extra Bits & Bytes”Docker Docs: Multi-Stage Builds
⏭ Data Gets a Cloud
Time to move our data from MongoDB in a container to MongoDB Atlas in the cloud.