Docker for Beginners
Understand the basics of containerization and how to use Docker for your applications.

Introduction: If you've been in the software development world for a while, you've probably heard the phrase "but it works on my machine" more times than you can count. This common frustration has plagued developers for years, leading to countless hours of debugging environment-specific issues. Enter Docker, a revolutionary platform that's changing how we develop, ship, and run applications.
Docker is an open platform that enables you to separate your applications from your infrastructure, allowing you to deliver software quickly and consistently. By using containerization technology, Docker ensures that your application runs the same way regardless of where it's deployed, effectively eliminating the "works on my machine" problem once and for all.
Before Docker became mainstream, deploying applications was often a complex and error-prone process. Developers would write code on their local machines, then operations teams would struggle to recreate the exact same environment in production. Different operating systems, library versions, and configurations would lead to unexpected behaviors and bugs that were difficult to reproduce and fix.
Docker fundamentally changed this paradigm by introducing a standardized way to package applications along with all their dependencies. This approach has become so popular that it's now considered an essential skill for modern developers and DevOps engineers. Companies of all sizes, from startups to tech giants, rely on Docker to streamline their development and deployment workflows.
At its core, a container is a lightweight, standalone, executable package that includes everything needed to run a piece of software. This includes:
- Application code
- The runtime (for example, Node.js or Python)
- System tools and system libraries
- Settings and configuration files
Think of it as a complete, self-contained unit that can run consistently across different computing environments.
Containers are often confused with virtual machines, but they're fundamentally different:
| Feature | Containers | Virtual Machines |
|---|---|---|
| Size | Lightweight (MBs) | Heavy (GBs) |
| Startup | Seconds | Minutes |
| Performance | Near-native | Slower |
| Isolation | Process-level | OS-level |
| Resource Usage | Minimal | Significant |
While virtual machines emulate entire operating systems complete with their own kernels, containers share the host system's kernel and isolate the application processes from the rest of the system. This makes containers much more lightweight and efficient than traditional virtual machines.
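A quick way to see this kernel sharing in practice (a demonstration sketch, assuming Docker is installed and using the public alpine image from Docker Hub): on a Linux host, a container reports the same kernel version as the host.

```shell
# Kernel version of the host
uname -r

# Kernel version as seen from inside an Alpine container
# (--rm removes the container once the command finishes)
docker run --rm alpine uname -r
```

On macOS and Windows, Docker Desktop runs containers inside a lightweight Linux VM, so the second command reports that VM's kernel instead.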
The beauty of containers lies in their portability and consistency. When you containerize an application, you're essentially creating a snapshot of your application's entire runtime environment. This snapshot can then be:
- Shared with teammates
- Deployed to a test or staging server
- Run in production, on-premises or in the cloud
All with the assurance that it will behave exactly the same way everywhere.
Docker has become the de facto standard for containerization, and for good reason. It provides a simple, intuitive interface for creating, managing, and deploying containers. Before Docker, containerization technologies existed but were complex and difficult to use. Docker democratized containers by making them accessible to developers of all skill levels.
One of Docker's greatest strengths is its ecosystem. Docker Hub, the official Docker registry, hosts millions of pre-built container images that you can use as starting points for your own applications.
Need a Node.js environment? There's an image for that.
Want to run PostgreSQL? Just pull the official image and you're ready to go in seconds.
Docker also promotes a microservices architecture, where applications are broken down into smaller, independent services that can be developed, deployed, and scaled individually. This architectural pattern has become increasingly popular because it allows teams to:
- Develop and deploy services independently of one another
- Scale each service based on its own demand
- Choose the most suitable technology stack for each service
Before you can start using Docker, you'll need to install Docker Engine on your machine. Docker provides installation packages for:
- Linux (Ubuntu, Debian, Fedora, and other major distributions)
- macOS
- Windows
For Windows and macOS users, Docker Desktop provides a convenient all-in-one package that includes everything you need to get started.
Once installed, you can verify that Docker is working correctly by opening a terminal and running:
docker --version
The Docker workflow typically follows this pattern:
1. Write a Dockerfile describing your application's environment
2. Build an image from that Dockerfile
3. Run one or more containers from the image
4. Push the image to a registry to share or deploy it
Images are like blueprints or templates, while containers are the actual running instances created from those images. You can create multiple containers from a single image, and each container runs independently of the others.
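For example, nothing stops you from starting several containers from the same image; the container names and host ports below are arbitrary:

```shell
# Two independent web servers created from the same nginx image
docker run -d -p 8081:80 --name web-1 nginx
docker run -d -p 8082:80 --name web-2 nginx

# Both show up as separate running containers
docker ps
```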
Understanding a few key concepts will help you navigate the Docker ecosystem more effectively.
Image: A read-only template that contains the instructions for creating a container.
Container: When you run an image, Docker creates a container from it, adding a writable layer on top where your application can make changes.
Dockerfiles are text files that contain instructions for building Docker images. They specify:
- The base image to start from
- Files to copy into the image
- Commands to run during the build
- The command to execute when a container starts
Dockerfiles make it easy to version control your container configurations and share them with your team.
Docker registries are repositories where Docker images are stored and shared:
- Docker Hub: the default public registry
- Private registries: self-hosted or cloud-hosted registries (for example, Amazon ECR or GitHub Container Registry) for proprietary images
When you need an image, you pull it from a registry.
When you want to share an image, you push it to a registry.
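As a sketch, pushing an image of your own to Docker Hub looks like this (your-username and my-app are placeholders):

```shell
# Tag the local image with your Docker Hub username
docker tag my-app:latest your-username/my-app:latest

# Authenticate, then push
docker login
docker push your-username/my-app:latest

# Anyone can now pull the image by the same name
docker pull your-username/my-app:latest
```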
Volumes are Docker's way of persisting data generated by containers. Since containers are designed to be ephemeral and can be destroyed and recreated at any time, volumes provide a mechanism to store data outside the container's filesystem.
This is essential for:
- Databases whose data must survive container restarts
- User-uploaded files
- Logs and other generated data you want to keep
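A minimal sketch of using a named volume with the official postgres image (the volume name, container name, and password are illustrative):

```shell
# Create a named volume managed by Docker
docker volume create pgdata

# Mount it at the path where PostgreSQL stores its data;
# the data now lives outside the container's filesystem
docker run -d --name db \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres:15-alpine

# Even after removing the container, the volume (and its data) remains
docker rm -f db
docker volume ls
```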
Let's explore the fundamental commands you'll use daily when working with Docker. These commands form the foundation of Docker operations and will become second nature as you gain experience.
The docker run command is your gateway to creating and starting containers. This command pulls an image if it's not already available locally, creates a container from that image, and starts it.
Test your installation:
docker run hello-world
This simple command downloads a small test image and runs it, displaying a welcome message. It's the perfect first command to verify your Docker installation is working correctly.
Run a web server:
docker run -d -p 8080:80 --name my-nginx nginx
This command:
- -d: Runs in detached mode (background)
- -p 8080:80: Maps port 8080 on the host to port 80 in the container
- --name my-nginx: Gives the container a friendly name
- nginx: The image to use

List running containers:
docker ps
List all containers (including stopped):
docker ps -a
Stop, start, and restart containers:
docker stop my-nginx
docker start my-nginx
docker restart my-nginx
Remove containers:
docker rm my-nginx
Note: You can remove multiple containers at once by specifying multiple container IDs or names.
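For example (the container names here are illustrative):

```shell
# Remove several stopped containers in one command
docker rm old-web old-worker test-db

# Force-remove a container even if it is still running (stops it first)
docker rm -f my-nginx
```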
List local images:
docker images
Build an image from a Dockerfile:
docker build -t my-app:latest .
The -t flag assigns a name and tag to your image. The dot . specifies the build context (current directory).
Pull an image from a registry:
docker pull ubuntu:22.04
Remove an image:
docker rmi my-app:latest
Warning: You can't remove an image if containers are still using it.
View container logs:
docker logs my-nginx
docker logs -f my-nginx # Follow logs in real-time
Execute commands inside a running container:
docker exec -it my-nginx bash
This opens an interactive bash shell inside the container. The flags mean:
- -i: Interactive mode (keep STDIN open)
- -t: Allocate a pseudo-TTY

Inspect detailed container information:
docker inspect my-nginx
This provides detailed information in JSON format, including network settings, volume mounts, environment variables, and much more.
A Dockerfile is a blueprint for building Docker images. It's a simple text file that contains a series of instructions that Docker executes to build your image.
Every Dockerfile starts with a FROM instruction, which specifies the base image to build upon.
FROM node:18-alpine
This example uses the official Node.js 18 image based on Alpine Linux, a minimal distribution that keeps image sizes small.
Sets the working directory for subsequent instructions.
WORKDIR /app
Copies files from your local machine into the image.
COPY package*.json ./
COPY . .
Executes commands during the build process.
RUN npm install --production
Documents which ports your application listens on.
EXPOSE 3000
Note: This doesn't actually publish the ports; it serves as documentation.
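At run time you publish ports explicitly with -p, or use -P to publish every EXPOSEd port to a random free port on the host (my-app is a placeholder image name):

```shell
# Map container port 3000 to host port 3000
docker run -d -p 3000:3000 my-app

# Publish all EXPOSEd ports to random host ports
docker run -d -P --name app my-app

# See which host ports were assigned
docker port app
```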
Specifies the default command to run when a container starts.
CMD ["node", "server.js"]
Here's a complete Dockerfile for a Node.js application:
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```
```shell
# Build the image
docker build -t my-node-app:1.0 .

# Run the container
docker run -d -p 3000:3000 --name node-app my-node-app:1.0
```
While docker run is great for single containers, real-world applications often consist of multiple services working together. Docker Compose solves this problem by allowing you to define and run multi-container applications using a simple YAML file.
A docker-compose.yml file describes all the services that make up your application, along with their:
- Build instructions or base images
- Port mappings
- Environment variables
- Volumes
- Dependencies on other services
This declarative approach makes it easy to share complete application stacks with your team.
```yaml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://db:5432/myapp
    depends_on:
      - db
    volumes:
      - .:/app
      - /app/node_modules
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_PASSWORD=secret
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
```
Start all services:
docker-compose up
Start in detached mode:
docker-compose up -d
Stop all services:
docker-compose down
View logs:
docker-compose logs -f
Rebuild services:
docker-compose up --build
Docker Compose handles:
- Creating a shared network so services can reach each other by service name
- Starting services in dependency order
- Creating and managing named volumes
As you work more with Docker, following best practices will help you create more efficient, secure, and maintainable containers.
Use multi-stage builds to keep production images small by separating build-time dependencies from the runtime image. Example multi-stage build:
```dockerfile
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm install --production
CMD ["node", "dist/server.js"]
```
Place instructions that change frequently near the bottom of your Dockerfile.
Good:
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
# This changes frequently, so it's last
COPY . .
```
Bad:
```dockerfile
FROM node:18-alpine
WORKDIR /app
# This invalidates the cache for everything below
COPY . .
COPY package*.json ./
RUN npm install
```
Never hardcode secrets into your images. Use environment variables, Docker secrets, or external configuration management tools instead.
Bad:
ENV API_KEY=super-secret-key-123
Good:
docker run -e API_KEY=${API_KEY} my-app
Exclude unnecessary files from the build context with a .dockerignore file:

```
node_modules
.git
.env
*.log
dist
coverage
```
Run containers as a non-root user to limit the impact of a potential compromise:

```dockerfile
FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodejs -u 1001
WORKDIR /app
COPY --chown=nodejs:nodejs . .
USER nodejs
CMD ["node", "server.js"]
```
Pin specific image versions instead of relying on the latest tag, so your builds stay reproducible.
Bad:
FROM node:latest
Good:
FROM node:18.17-alpine3.18
Add health checks to monitor container health:
```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node healthcheck.js
```
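Once a HEALTHCHECK is defined, Docker tracks the container's health, and you can query it (my-app is a placeholder container name):

```shell
# The STATUS column includes (healthy) or (unhealthy)
docker ps

# Query only the health status of a specific container
docker inspect --format '{{.State.Health.Status}}' my-app
```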
Docker excels in numerous scenarios, making it a versatile tool for different stages of software development and deployment.
Docker ensures that all team members work in identical environments. New developers can get started quickly by pulling a few Docker images instead of spending hours installing and configuring software.
Benefits:
- No more "works on my machine" discrepancies between team members
- New developers are productive in minutes instead of days
- Easy switching between projects with conflicting dependencies
Docker provides reproducible build environments. You can test your application in the exact same environment it will run in production.
Example GitHub Actions:
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: node:18-alpine
    steps:
      - uses: actions/checkout@v3
      - run: npm install
      - run: npm test
```
Docker is ideal for running microservices architectures, where each service runs in its own container.
Advantages:
- Each service can be scaled independently
- A failure in one service doesn't bring down the others
- Teams can deploy their services on their own schedules
Docker makes it easy to spin up temporary environments with databases, message queues, and other dependencies.
```shell
# Start test database
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=test postgres:15-alpine

# Run tests
npm test

# Clean up
docker stop $(docker ps -q)
```
You can containerize older applications without modifying their code, making them easier to deploy and manage alongside modern cloud-native applications.
Even experienced developers encounter Docker issues from time to time. Understanding common problems and their solutions will save you frustration and time.
Symptom: Container exits right after starting.
Solution: Check the logs:
docker logs container-name
Common causes:
- The application inside the container crashed on startup
- The main process finished and exited (a container stops when its main process ends)
- A missing or incorrect CMD or ENTRYPOINT
Symptom: Unable to access containerized service from host.
Checklist:
- Confirm the container is running: docker ps
- Confirm you published the port with -p when starting the container
- Confirm the application inside the container is actually listening on the expected port: docker exec -it container-name netstat -tlnp
Symptom: Docker builds fail or system runs out of space.
Solution: Clean up unused resources:
```shell
# Remove all stopped containers, unused networks, dangling images
docker system prune

# Remove all unused images, not just dangling ones
docker system prune -a

# Remove all unused volumes
docker volume prune
```
Check disk usage:
docker system df
Symptom: Image builds take much longer than expected.
Causes and Solutions:
- Large build context: exclude unneeded files with a .dockerignore file
- Cache not being reused: seed the build cache from an existing image with the --cache-from flag:
docker build --cache-from my-app:latest -t my-app:new .
Symptom: Permission denied when accessing mounted volumes.
Solution: Match user IDs:
docker run -u $(id -u):$(id -g) -v $(pwd):/app my-app
Or in Dockerfile:
RUN chown -R node:node /app
USER node
Symptom: Containers can't communicate with each other.
Solution: Use Docker networks:
```shell
# Create a network
docker network create my-network

# Run containers on the same network
docker run -d --network my-network --name db postgres
docker run -d --network my-network --name web my-web-app
```
Docker is just the beginning of your containerization journey. As you become comfortable with Docker basics, you'll naturally want to explore more advanced topics.
Intermediate Topics:
- Docker networking in depth
- Multi-stage builds and image size optimization
- Docker Compose for local development workflows
Advanced Topics:
- Container orchestration with Kubernetes or Docker Swarm
- Image security scanning and hardening
- Building CI/CD pipelines around containers
Start small by containerizing a simple application, then gradually tackle more complex scenarios:
1. Containerize a simple static site or script
2. Add a database with Docker Compose
3. Build a multi-service application with networks and volumes
4. Integrate your containers into a CI/CD pipeline
The skills you develop with Docker form a foundation that translates well to:
- Kubernetes and other orchestration platforms
- Cloud container services such as AWS ECS, Google Cloud Run, and Azure Container Instances
- Modern CI/CD tooling
The Docker community is vast and helpful. As you grow more proficient, consider contributing back by:
- Answering questions in forums and community channels
- Writing tutorials and blog posts
- Publishing useful images and Dockerfiles
The investment you make in learning Docker will pay dividends throughout your career. Whether you're a developer looking to streamline your workflow, an operations engineer managing deployments, or a student exploring modern development practices, Docker is an essential tool in today's software landscape.
Remember that experimentation is key. Docker's lightweight nature means you can quickly destroy and recreate containers as you learn. Don't be afraid to make mistakes—they're often the best learning opportunities.
Welcome to the world of containerization. Your journey with Docker starts here, and the possibilities are limitless.