This post is part of our ‘The Miners’ Guide to Code Crafting’ series, designed to help aspiring developers learn and grow. Stay tuned for more!
If you’ve been in software development for a while, you’ve probably dealt with this annoying issue: your code runs perfectly on your machine, but when a colleague tries to run it, they get a bunch of error messages. This is the classic "it works on my machine" problem, and it has been a pain for development teams for years. The reason? Different environments: dependencies, configurations, operating systems, and versions vary from machine to machine.
But there’s good news! We now have a modern solution to this problem: Docker.
Deliver Code Through Containers
Think of Docker as a standardized shipping container for your code. Just like how shipping containers revolutionized global trade by providing a standardized way to move goods, Docker ensures that your application runs the same way everywhere. Whether on your local machine, a colleague’s, or a production server, Docker packages your app and everything it needs into a container — a lightweight, self-contained unit that can run anywhere.
With Docker, you can ship your app, run it, and be confident it’ll work the same way, no matter where it’s running. No more worrying about whether your colleague’s machine has the right dependencies or the right version of Node.js. It just works.
Virtual Machines vs. Containers: What’s the Difference?
Now, you might be wondering: “Can’t we just use virtual machines to solve this?” It’s a good question. Both virtual machines (VMs) and containers are ways to virtualize environments, but they work in different ways. Let’s break it down.
Virtual Machines
VMs create full system abstractions, essentially running an entire operating system (OS) within the host system. Each VM operates like a separate physical computer with its own kernel. This provides great isolation but comes with some drawbacks, particularly in terms of resource usage.
- Run a complete operating system: Each VM includes its own OS, meaning you have to install and run everything you need.
- Heavy on storage: VMs need a lot of disk space to store each complete OS and its associated components.
- Resource-hungry: Since each VM is essentially a separate computer, it requires significant CPU, memory, and storage.
Containers
Containers are different. Rather than running a full operating system, containers share the host’s OS kernel while isolating individual processes. This design reduces the amount of system resources needed and makes containers far more efficient for many use cases.
- Share the host’s OS kernel: Containers use the same underlying OS as the host, so they don’t need to replicate an entire operating system.
- Use much less storage: Since containers don’t carry the full OS, they’re much smaller and faster to set up.
- Light on resources: Containers are very lightweight and don’t consume as much CPU, memory, or disk space compared to VMs.
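A quick way to see this kernel sharing for yourself (assuming Docker is installed; on macOS and Windows the "host" kernel is the one inside Docker's Linux VM) is to compare the kernel version on the host with the one reported inside a minimal container:
# Kernel version on the host (Linux)
uname -r
# Kernel version reported from inside an Alpine container: same kernel
docker run --rm alpine uname -r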
Key Differences Between VMs and Containers
| Aspect | Virtual Machines | Containers |
| --- | --- | --- |
| Isolation | Full OS-level isolation (including the kernel) | OS-level virtualization, sharing the kernel |
| Portability | Less portable, depends on the hypervisor | Highly portable, runs anywhere Docker is supported |
| Resource Usage | Heavy on CPU and memory | Lightweight, uses fewer resources |
Let’s Get Practical: Containerizing a Node.js App
Let’s start with a simple Fastify application and see how Docker can help us:
// app.js
import Fastify from "fastify";
const fastify = Fastify({
logger: true,
});
fastify.get("/", async function handler(request, reply) {
return { message: "Hello from Docker! 🐳" };
});
fastify.listen({ port: 3000 }, (err, address) => {
if (err) {
fastify.log.error(err);
process.exit(1);
}
fastify.log.info(`Server running on ${address}`);
});
This code sets up a Fastify server that listens on port 3000. When someone visits the root route, it sends a JSON response: { message: "Hello from Docker! 🐳" }.
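Before containerizing it, the project also needs a package.json, since the Dockerfile we'll write next copies it into the image and installs dependencies from it. Here's a minimal sketch (the fastify version is an assumption; the "type": "module" entry is needed because app.js uses import syntax):
{
  "name": "fastify-docker-example",
  "type": "module",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "fastify": "^5.0.0"
  }
}
With this in place, you can sanity-check the app locally with npm install && node app.js before moving on.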
Now, instead of asking your team to install Node.js and dependencies manually, we will containerize the application using Docker. This ensures that everyone runs the same environment and makes deployment consistent across any machine. We’ll define this environment using a special file called a Dockerfile. Here’s the Dockerfile that will containerize the app:
# Use an official Node.js image as the base
FROM node:23-slim
# Set the working directory
WORKDIR /app
# Copy package.json and package-lock.json (if present)
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the port Fastify will run on
EXPOSE 3000
# Command to start the application
CMD ["node", "app.js"]
Dockerfile Breakdown:
- FROM node:23-slim: Specifies that the Docker image will be built on top of the official Node.js image (version 23, slim variant).
- WORKDIR /app: Creates a working directory inside the container (/app) and makes it the current directory for subsequent instructions (like COPY and RUN).
- COPY package*.json ./: Copies both package.json and package-lock.json (if they exist) into the container. Docker will cache this layer, so if the dependencies don’t change, it avoids reinstalling them during subsequent builds.
- RUN npm install: Installs the dependencies defined in package.json inside the container.
- COPY . .: Copies all the remaining files from your local machine to the /app directory in the container.
- EXPOSE 3000: Tells Docker that the container will listen on port 3000 at runtime. It doesn’t publish the port outside the container by itself; it documents which port to map.
- CMD ["node", "app.js"]: Defines the command that runs when the container starts. In this case, it runs the app.js file with Node.js to start your Fastify server.
Docker Image Layers
Notice how we copy package.json first, then run npm install, and only then copy the rest of the app’s code? This isn’t by accident. Docker builds images in layers, and each layer is cached. If your package.json hasn’t changed, Docker will reuse the cached layer for dependencies, making builds faster.
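For contrast, here’s a sketch of an ordering that defeats the cache: copying all the source code before installing. With this layout, any change to app.js invalidates the COPY . . layer, so npm install runs again on every build, which is exactly what the ordering above avoids.
# Anti-pattern: any source change invalidates the cache before npm install
FROM node:23-slim
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "app.js"]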
Building and Running Docker
Once you have the Dockerfile
and app.js
files ready, you can proceed to build the image and run the Docker container. Here’s how:
- Build the Docker Image:
docker build -t fastify-docker-example .
This command builds a Docker image named fastify-docker-example. The -t flag lets you specify a tag (in this case, the image name). Docker will look for the Dockerfile in the current directory (.) and use it to create the image. If the Dockerfile is located in another directory, you can specify that path instead.
- Run the Docker Container:
docker run -p 3000:3000 fastify-docker-example
This command runs the container and maps port 3000 of your local machine to port 3000 on the container. You can now access your application by visiting http://localhost:3000 in your browser.
- Add a .dockerignore File:
It’s good practice to include a .dockerignore file in your project to exclude unnecessary files from being added to the Docker image. For example, you should exclude the node_modules folder: the dependencies are already listed in package.json and installed inside the container by npm install, so there’s no need to copy your local copy. Create a .dockerignore file with the following content:
node_modules
This will prevent the node_modules folder from being copied into the container, which helps reduce the size of the Docker image.
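As the project grows, you’ll likely want to ignore more than just node_modules. Here’s a sketch of a slightly fuller .dockerignore; the exact entries depend on your project, but these are common candidates:
node_modules
npm-debug.log
.git
.env
Dockerfile
docker-compose.yml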
Adding a Database Layer
As your application grows, you might need to add a database to store and manage your data. In a traditional setup, you would start a separate database service and connect your application to it. However, with Docker, you can manage both your app and your database as separate containers, and tools like Docker Compose make it even easier to handle the orchestration.
Docker Compose
Docker Compose is a powerful tool for managing and orchestrating multiple containers. Using a simple YAML configuration file, you can define and run all the services that make up your application, including your web application, databases, and more. Here’s a quick breakdown of the key concepts:
- Services: Containers that work together to perform a function. For example, you might have a web service running your application and a database service running MongoDB or PostgreSQL.
- Networks: Allow containers to communicate with each other. By default, Compose creates a network for all services, but you can define custom ones if needed.
- Volumes: Persist data outside of containers, ensuring that data is not lost when containers are recreated, which is especially important for databases.
With Docker Compose, you can easily configure your application’s services. Let’s walk through an example of setting up a web app alongside a MongoDB database:
# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - MONGODB_URI=mongodb://db:27017/fastify-app-db
    depends_on:
      - db
  db:
    image: mongo:latest
    volumes:
      - mongodb_data:/data/db

volumes:
  mongodb_data:
Docker Compose Breakdown
- app:
  - build: .: Builds the app service using the local Dockerfile.
  - ports: "3000:3000": Maps port 3000 of your host machine to port 3000 on the container, making the app accessible at http://localhost:3000.
  - environment: MONGODB_URI=mongodb://db:27017/fastify-app-db: Sets the MongoDB connection string, pointing to the db service using the service name (db) as the hostname.
  - depends_on: db: Specifies that the app service depends on the db service. Docker Compose will start db before starting app.
- db:
  - image: mongo:latest: Pulls the latest MongoDB image from Docker Hub.
  - volumes: mongodb_data:/data/db: Persists MongoDB data using a named volume (mongodb_data). This ensures that data stored in MongoDB survives container restarts or recreation.
- volumes: Defines persistent volumes for data storage. The mongodb_data volume is used by the db service to store MongoDB’s data.
Running Our Application with Docker Compose
Once you have the docker-compose.yml file in place, running your entire application stack becomes incredibly easy. Instead of starting each container individually, you can use Docker Compose to bring everything up with a single command:
docker compose up
This command will:
- Build the app service (if necessary)
- Pull the latest MongoDB image (if not already available locally)
- Start both services (app and db) and set up their communication
- Expose the necessary ports, making your app accessible via http://localhost:3000
By sharing this configuration file, we ensure that our application can run across different environments. Once the Docker image is built and the docker-compose.yml file is in place, anyone can run the application with the exact same setup, regardless of their local environment. All the dependencies and services (such as MongoDB) are contained within the Docker ecosystem.
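When you’re done working, the whole stack can be torn down just as easily. The -v flag also removes named volumes, so only use it if you’re fine with deleting the MongoDB data we’ll store later:
# Stop and remove the containers and the default network
docker compose down
# Same, but also remove named volumes (this deletes the MongoDB data)
docker compose down -v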
Modifying Our App to Use MongoDB
To modify the application to connect to MongoDB, we update the app.js file to include MongoDB integration. This involves defining a user model, connecting to the database, and adding a simple /users endpoint that returns a list of users from the database. Here’s the updated code:
import Fastify from "fastify";
import mongoose from "mongoose";
const fastify = Fastify({
logger: true
});
// MongoDB Configuration: read the connection string from the environment
// (set by docker-compose.yml) and fall back to localhost for local runs
const MONGODB_URI = process.env.MONGODB_URI || 'mongodb://localhost:27017/fastify-app-db';
// Define User Model
const User = mongoose.model('User', {
name: String,
email: String,
createdAt: { type: Date, default: Date.now }
});
// Connect to MongoDB
mongoose.connect(MONGODB_URI)
.then(() => fastify.log.info('Connected to MongoDB'))
.catch(err => fastify.log.error(err, 'MongoDB connection error'));
fastify.get('/', async () => {
return { message: 'Hello from a containerized world! 🐳' };
});
fastify.get('/users', async (request, reply) => {
try {
const users = await User.find();
return users;
} catch (error) {
reply.code(500);
return { error: 'Failed to fetch users' };
}
});
// Start server
fastify.listen({ port: 3000, host: '0.0.0.0' }, (err) => {
if (err) {
fastify.log.error(err);
process.exit(1);
}
});
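One detail before rebuilding: the updated code imports mongoose, so it has to be listed in package.json, otherwise npm install inside the container won’t include it. Assuming you’re using npm, something like:
npm install mongoose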
Rebuild and Run
After updating the docker-compose.yml and app.js files, it’s time to rebuild and run the Docker containers. We can do this by executing the following command:
docker compose up --build
This will rebuild the app image and start both the app and MongoDB containers. To test the newly created /users endpoint, you can use tools like Postman or curl. Since the database is initially empty, the response should be an empty array:
curl http://localhost:3000/users
The response will look like this:
[]
This is expected since no users have been added to the database yet. Let’s add some users by accessing the terminal inside our database container.
Adding Users to the Database
To add users to the MongoDB database, we first need to access the MongoDB container. Here’s a guide on how to enter the container and use mongosh to add users to the database.
Accessing the Database Container
To interact with the MongoDB container, we need to enter its shell. Since we’re using Docker Compose, the process is straightforward: we access the container through the service name defined in the docker-compose.yml file. In this case, the service name is db, which corresponds to the MongoDB container.
To enter the container, run the following command:
docker compose exec -it db bash
This command opens a shell inside the MongoDB container, where we can directly execute commands and interact with the database.
Side Note: Using docker exec directly
If we were working with Docker directly instead of Docker Compose, the process would differ slightly. Instead of using the service name, we’d need to know the container name to access it. To find the container name, we can run the docker ps command, which will list all running containers:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfe30915554e node "docker-entrypoint.s…" 22 minutes ago Up 2 seconds 0.0.0.0:3000 fastify-app
f02146e8317b mongo "docker-entrypoint.s…" 22 minutes ago Up 2 seconds 0.0.0.0:27017 fastify-db
Once we have the container name (e.g., fastify-app or fastify-db), we would use the following command to open a shell inside the container:
docker exec -it fastify-app bash
While the concept is the same, this approach requires using the container name rather than the service name defined in the docker-compose.yml file.
Once inside the container, we’re ready to interact with our database.
Adding Users with mongosh
After accessing the container, follow these steps to add users to the database:
Start the mongosh shell:
mongosh
Switch to the database:
use fastify-app-db
Insert some user data into the database:
db.users.insertMany([
  { name: "Lázaro Ramos", email: "lazaro.ramos@example.com", createdAt: new Date() },
  { name: "Caetano Veloso", email: "caetano.veloso@example.com", createdAt: new Date() }
]);
To confirm that the users were added correctly, run:
db.users.find().pretty();
This will display the inserted users along with their _id values.
Testing the /users Endpoint
Once we’ve added users to the database, exit the container and test the /users endpoint from your application:
curl http://localhost:3000/users
The response should now include the users you’ve just added:
[
{
"_id": "...",
"name": "Lázaro Ramos",
"email": "lazaro.ramos@example.com",
"createdAt": "..."
},
{
"_id": "...",
"name": "Caetano Veloso",
"email": "caetano.veloso@example.com",
"createdAt": "..."
}
]
Shipping Containers: Docker Hub
Just as we can push and pull code on platforms like GitHub or GitLab, Docker has its own distribution service called Docker Hub. Docker Hub is a cloud-based registry service where you can find, share, and distribute container images. It serves as a repository for storing public and private Docker images and provides the following features:
- Repositories for storing images.
- CI/CD integration for automated builds.
- Version control with tags.
- Official and verified images.
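The workflow mirrors pushing code to GitHub. As a sketch, here’s how you might publish the image from this post; your-username is a placeholder for your Docker Hub account, and you’d need to run docker login first:
# Tag the local image with your Docker Hub namespace
docker tag fastify-docker-example your-username/fastify-docker-example:1.0.0
# Push it to Docker Hub
docker push your-username/fastify-docker-example:1.0.0
# Anyone can now pull and run the exact same image
docker pull your-username/fastify-docker-example:1.0.0
docker run -p 3000:3000 your-username/fastify-docker-example:1.0.0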
Conclusion
Docker has fundamentally changed how we develop and deploy applications. No more "it works on my machine" syndrome – if it works in the container, it works everywhere!
Docker has become indispensable in modern software development by providing a standardized way to package and deploy applications. Through containers, it offers a complete ecosystem for building, sharing, and running applications consistently across different environments.
That’s it for today! You can find the source code for this blog post right here. Happy containerizing! 🐳
Want to dive deeper? Check out the full ‘The Miners’ Guide to Code Crafting’ series and continue your coding journey with us!
We want to work with you. Check out our "What We Do" section!