How to Use Docker Compose
Docker Compose is a powerful tool that simplifies the management of multi-container Docker applications. While Docker allows you to run individual containers, real-world applications often consist of multiple interconnected services such as a web server, database, cache layer, and message broker. Managing each of these containers manually with individual docker run commands becomes unwieldy, error-prone, and difficult to reproduce across environments. Docker Compose solves this by letting you define and orchestrate your entire application stack in a single YAML configuration file. With a single command, docker compose up, you can start, stop, and manage all services defined in your configuration. This tutorial provides a comprehensive, step-by-step guide to mastering Docker Compose, covering everything from basic setup to advanced best practices and real-world examples. Whether you're a developer, DevOps engineer, or system administrator, understanding how to use Docker Compose effectively is essential for modern application deployment.
Step-by-Step Guide
Prerequisites
Before you begin using Docker Compose, ensure your system meets the following requirements:
- Docker Engine installed (version 17.06 or higher)
- Docker Compose installed (v2.20 or higher recommended)
- A basic understanding of Docker containers and images
- A terminal or command-line interface (CLI)
To verify Docker is installed and running, open your terminal and type:

```shell
docker --version
```

To check Docker Compose:

```shell
docker compose version
```
If Docker Compose is not installed, visit the official Docker Compose installation guide for your operating system. On most modern systems, Docker Desktop includes Docker Compose automatically. On Linux, the recommended route is your distribution's docker-compose-plugin package from Docker's repositories (which provides the docker compose subcommand); alternatively, you can download the standalone binary via curl:
```shell
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
Understanding the docker-compose.yml File
The heart of Docker Compose is the docker-compose.yml file (or compose.yaml in newer versions). This YAML file defines the services, networks, and volumes that make up your application. Each service corresponds to a container, and you specify the image to use, environment variables, ports, dependencies, and more.
A minimal docker-compose.yml file might look like this:
```yaml
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
```
Let's break this down:
- version: Specifies the Compose file format version. Use version 3.8 or higher for modern features and compatibility with Docker Engine 20.10+.
- services: The top-level key that lists all the containers in your application.
- web: A service name you define. It becomes the hostname inside the network.
- image: The Docker image to use. In this case, the latest Nginx image from Docker Hub.
- ports: Maps port 80 on the host to port 80 in the container, allowing external access.
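To see why the service name matters, here is a hypothetical two-service file in which an app container reaches Redis at the hostname redis, simply because that is the service's name (the app service and its REDIS_URL variable are illustrative):

```yaml
services:
  app:
    image: python:3.11-slim
    environment:
      # "redis" resolves to the redis service below via Compose's network DNS
      - REDIS_URL=redis://redis:6379/0
  redis:
    image: redis:7-alpine
```

No ports mapping is needed for container-to-container traffic; published ports are only for reaching a service from the host.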
Creating Your First Docker Compose Project
Let's build a simple web application using Docker Compose. We'll create a Python Flask app that connects to a PostgreSQL database.
Step 1: Create a project directory
```shell
mkdir flask-postgres-app
cd flask-postgres-app
```
Step 2: Create the Flask application
Create a file named app.py:
```python
from flask import Flask
import os
import psycopg2

app = Flask(__name__)

@app.route('/')
def hello():
    # DATABASE_URL is injected by Docker Compose; fall back to the
    # development defaults when running outside Compose
    conn = psycopg2.connect(
        os.environ.get(
            "DATABASE_URL",
            "postgresql://myuser:mypassword@db:5432/mydb",
        )
    )
    cur = conn.cursor()
    cur.execute("SELECT version();")
    db_version = cur.fetchone()
    cur.close()
    conn.close()
    return f"Hello from Flask! Database version: {db_version[0]}"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```
Step 3: Create a requirements file
Create requirements.txt (gunicorn is included here because the Dockerfile below runs the app with it):

```
Flask==3.0.0
psycopg2-binary==2.9.7
gunicorn==21.2.0
```
Step 4: Create a Dockerfile for the Flask app
Create Dockerfile:
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:app"]
```
We use Gunicorn as a production-ready WSGI server instead of Flask's built-in development server.
Step 5: Write the docker-compose.yml file
Create docker-compose.yml:
```yaml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://myuser:mypassword@db:5432/mydb
    depends_on:
      - db
    networks:
      - app-network
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network
volumes:
  postgres_data:
networks:
  app-network:
    driver: bridge
```
Let's examine the key components:
- build: .: Tells Compose to build the image from the current directory using the Dockerfile.
- ports: Exposes port 5000 on the host mapped to port 5000 in the container.
- environment: Sets environment variables for the service. The Flask app uses DATABASE_URL to connect to PostgreSQL.
- depends_on: Ensures the db service starts before web. Note: this only controls startup order, not readiness. For production, use health checks.
- volumes: Persists PostgreSQL data outside the container so it survives restarts.
- networks: Creates a custom bridge network so services can communicate using their service names as hostnames.
Starting the Application
With the files in place, start your application:
```shell
docker compose up
```
This command will:
- Build the Flask app image from the Dockerfile
- Download the PostgreSQL image if not already present
- Create and start both containers
- Connect them via the custom network
- Forward port 5000 to your host machine
Open your browser and navigate to http://localhost:5000. You should see:
Hello from Flask! Database version: PostgreSQL 15.x.x
If you see this message, your Docker Compose setup is working correctly!
Managing the Application
Once your services are running, you can manage them with additional commands:
- docker compose down: Stops and removes the containers and networks defined in the compose file; named volumes are kept unless you add docker compose down -v.
- docker compose ps: Lists all running services and their status.
- docker compose logs web: Shows logs for the web service. Use -f to follow logs in real time.
- docker compose build: Rebuilds images if the Dockerfile changes.
- docker compose restart: Restarts all services.
- docker compose exec web bash: Opens a shell inside the running web container.
Scaling Services
Docker Compose allows you to scale services horizontally. For example, if your Flask app needs to handle more traffic, you can run multiple instances:
```shell
docker compose up --scale web=3
```

This starts three instances of the web service. Note: this works best when your application is stateless. Also beware that replicas cannot all bind the same host port, so the fixed "5000:5000" mapping from the earlier file must be removed (or changed to a port range) before scaling. For stateful services like databases, scaling is not recommended without additional architecture (e.g., replication, clustering).
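If you still need the replicas reachable from the host, one option is to publish a host port range so Docker can assign each replica its own port. A sketch (the range values are illustrative):

```yaml
services:
  web:
    build: .
    ports:
      # each replica binds one free host port from the 5000-5002 range
      - "5000-5002:5000"
```

For production traffic you would more commonly drop the published port entirely and put a reverse proxy (e.g. Nginx) in front of the replicas on the shared network.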
Best Practices
Use Version 3.8 or Higher
When targeting the version-3 file format, specify a recent revision such as 3.8, which supports health checks, deploy configurations, and modern volume syntax, and works with both Docker Swarm and a standalone Docker Engine. Avoid versions 1 and 2 unless maintaining legacy systems. Note that Docker Compose v2 follows the Compose Specification and treats the top-level version key as optional and informational only.
Separate Development and Production Configurations
Never use the same compose file for development and production. Differences include:
- Development: Mount local code directories, enable debug mode, use exposed ports
- Production: Use pre-built images, disable debug, use secrets, restrict ports
Use multiple files with -f flag:
```shell
docker compose -f docker-compose.yml -f docker-compose.prod.yml up
```
Example docker-compose.prod.yml:
```yaml
version: '3.8'
services:
  web:
    image: your-registry/your-app:latest
    environment:
      - FLASK_ENV=production
    ports:
      - "80:5000"
    deploy:
      replicas: 4
      restart_policy:
        condition: on-failure
```
Use Health Checks
Always define health checks for critical services. This ensures dependent services start only when their dependencies are truly ready.
```yaml
db:
  image: postgres:15
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U myuser -d mydb"]
    interval: 10s
    timeout: 5s
    retries: 5
    start_period: 40s
```
With a health check in place, a dependent service can wait until the database is truly ready, not just started, by declaring depends_on in its long form with condition: service_healthy.
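Here is what that long form looks like on the web service (plain list-style depends_on only orders startup; the condition field is what makes it wait for the health check):

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
```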
Minimize Image Size
Use slim or alpine-based base images. Avoid installing unnecessary packages. Multi-stage builds can drastically reduce final image size.
```dockerfile
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user -r requirements.txt

FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
# scripts installed with pip --user (e.g. gunicorn) are not on the default PATH
ENV PATH=/root/.local/bin:$PATH
COPY . .
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:app"]
```
Use .dockerignore
Create a .dockerignore file to exclude files from the build context:
```
.git
node_modules
__pycache__
.env
*.log
```
This reduces build time and prevents sensitive files from being included in the image.
Manage Secrets Securely
Avoid hardcoding passwords in compose files. Use Docker secrets or external files:
```yaml
services:
  web:
    secrets:
      - db_password

secrets:
  db_password:
    file: ./db_password.txt
```
Store secrets in files with restricted permissions:
```shell
echo "mysecretpassword" > db_password.txt
chmod 600 db_password.txt
```
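Inside the container, each secret is mounted as a file under /run/secrets/&lt;name&gt;. A minimal helper for reading one, with an environment-variable fallback for local runs (the function name and the fallback convention are this guide's, not part of Docker):

```python
import os
from pathlib import Path

def read_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Return a secret's value from its mounted file, falling back to an
    environment variable with the same name upper-cased."""
    secret_file = Path(secrets_dir) / name
    if secret_file.is_file():
        # Docker mounts each secret as a single file; strip the trailing newline
        return secret_file.read_text().strip()
    value = os.environ.get(name.upper())
    if value is None:
        raise RuntimeError(f"secret {name!r} not found in {secrets_dir} or environment")
    return value
```

In the Flask app you would then call read_secret("db_password") instead of hardcoding the password.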
Use Named Volumes for Data Persistence
Always use named volumes instead of bind mounts for production data. Named volumes are managed by Docker and are portable across environments.
```yaml
services:
  db:
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
```
Avoid Running as Root
Run containers as non-root users for security:
```dockerfile
FROM python:3.11-slim
RUN adduser --disabled-password --gecos '' appuser && \
    mkdir /app && chown appuser:appuser /app
USER appuser
WORKDIR /app
COPY --chown=appuser:appuser requirements.txt .
RUN pip install --user -r requirements.txt
# pip --user installs scripts to ~/.local/bin, which is not on PATH by default
ENV PATH=/home/appuser/.local/bin:$PATH
COPY --chown=appuser:appuser . .
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:app"]
```
Use Environment Variables for Configuration
Use .env files to manage environment-specific values:
```
DB_HOST=db
DB_PORT=5432
DB_NAME=mydb
DB_USER=myuser
DB_PASSWORD=mypassword
```
Reference them in docker-compose.yml:
```yaml
environment:
  - DATABASE_URL=postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
```
Docker Compose automatically loads .env from the same directory as the compose file.
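The same interpolation can be mirrored in application code. A sketch that assembles the connection URL from the individual variables in the .env file above (the helper function itself is illustrative):

```python
import os

def database_url(env=os.environ) -> str:
    """Build a PostgreSQL connection URL from discrete settings,
    mirroring the ${VAR} interpolation Compose performs."""
    return (
        f"postgresql://{env['DB_USER']}:{env['DB_PASSWORD']}"
        f"@{env['DB_HOST']}:{env['DB_PORT']}/{env['DB_NAME']}"
    )
```

With the .env values above, this yields postgresql://myuser:mypassword@db:5432/mydb, matching the URL Compose injects.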
Tools and Resources
Official Documentation
The most reliable source of information is the official Docker Compose documentation at docs.docker.com/compose.
Visual Editors
Editing YAML files manually can be error-prone. Use tools that provide validation and auto-completion:
- Visual Studio Code with the Docker extension offers syntax highlighting, schema validation, and linting.
- IntelliJ IDEA / PyCharm ship built-in Docker Compose support and file templates.
- Online validators such as YAML Lint can check your compose file's YAML syntax.
Template Repositories
Use open-source templates as starting points:
- Awesome Compose: the official Docker collection of sample applications.
- Real Python's Docker Compose tutorials: Flask, PostgreSQL, and Redis setups.
- jwilder's examples: Nginx, MySQL, WordPress, and more.
Monitoring and Debugging Tools
Use these tools to monitor and troubleshoot your Compose applications:
- Docker Desktop: a GUI for managing containers, viewing logs, and inspecting resources.
- Portainer: an open-source web UI for Docker, which can itself be installed via Compose.
- Netdata: real-time performance monitoring that can be added as a service in your compose file.
- Logspout: routes container logs to centralized logging systems.
CI/CD Integration
Integrate Docker Compose into your CI/CD pipeline:
- GitHub Actions: use docker/setup-docker-compose-action to install Compose and run tests.
- GitLab CI: use Docker-in-Docker to run docker compose up for integration tests.
- Argo CD: deploy Compose stacks to Kubernetes via Kompose, which converts Compose files to Kubernetes manifests.
Convert Compose to Kubernetes
If you plan to migrate to Kubernetes, use Kompose:
```shell
kompose convert -f docker-compose.yml
```
This generates Kubernetes deployment, service, and configmap YAML files. Use it for migration testing or hybrid environments.
Real Examples
Example 1: WordPress with MySQL and phpMyAdmin
A common use case for Docker Compose is hosting a WordPress site. Here's a complete setup:
```yaml
version: '3.8'
services:
  db:
    image: mysql:8.0
    command: --innodb-use-native-aio=0
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
      MYSQL_ROOT_PASSWORD: rootpassword
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8080:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress_data:/var/www/html
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8081:80"
    environment:
      PMA_HOST: db
    depends_on:
      - db
volumes:
  db_data:
  wordpress_data:
```
Run with docker compose up and access WordPress at http://localhost:8080 and phpMyAdmin at http://localhost:8081.
Example 2: Node.js App with Redis and MongoDB
A modern web application stack:
```yaml
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - redis
      - mongo
    environment:
      REDIS_HOST: redis
      MONGO_URI: mongodb://mongo:27017/myapp
    networks:
      - app-network
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    networks:
      - app-network
  mongo:
    image: mongo:6
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
    networks:
      - app-network
volumes:
  mongo_data:
networks:
  app-network:
    driver: bridge
```
Use this structure for API services, microservices, or real-time applications using Redis pub/sub or caching.
Example 3: Multi-Service Microservices Architecture
For a more advanced example, imagine a system with:
- Frontend (React)
- API Gateway (Node.js)
- Auth Service (Python)
- Notification Service (Go)
- PostgreSQL
- Redis
- RabbitMQ
Each service has its own Dockerfile and is defined in a single compose file. Use health checks, named networks, and environment variables to ensure loose coupling. This setup allows developers to start the entire system locally with one command, making onboarding and testing seamless.
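As a sketch, the compose file for such a system might be structured like this (directory names and images are placeholders; health checks, networks, and environment variables are omitted for brevity):

```yaml
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
  gateway:
    build: ./gateway
    depends_on:
      - auth
      - notifications
  auth:
    build: ./auth
    depends_on:
      - db
      - redis
  notifications:
    build: ./notifications
    depends_on:
      - rabbitmq
  db:
    image: postgres:15
  redis:
    image: redis:7-alpine
  rabbitmq:
    image: rabbitmq:3-management
```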
Example 4: Local Development with MailHog
When developing email features, you don't want to send real emails. Use MailHog:
```yaml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - MAIL_HOST=mailhog
      - MAIL_PORT=1025
  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - "8025:8025"
      - "1025:1025"
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypass
    volumes:
      - pg_data:/var/lib/postgresql/data
volumes:
  pg_data:
```
Visit http://localhost:8025 to view all sent emails in a web interface, which is perfect for testing email workflows locally.
FAQs
What is the difference between Docker and Docker Compose?
Docker is the platform that allows you to build, run, and manage individual containers. Docker Compose is a tool built on top of Docker that lets you define and run multi-container applications using a declarative YAML file. You use Docker to run one container; you use Docker Compose to run many containers together as a cohesive application.
Can I use Docker Compose in production?
Yes, but with caution. Docker Compose is excellent for development, staging, and small-scale production deployments. For large-scale, high-availability systems, consider using Kubernetes or Docker Swarm. However, many startups and SMBs successfully run production workloads with Docker Compose, especially when combined with monitoring, backups, and automated restarts.
Why is my container restarting constantly?
Check the logs with docker compose logs <service>. Common causes include:
- Application crashes due to misconfiguration
- Missing environment variables
- Port conflicts
- Dependency services (like databases) not ready
Always use health checks and ensure dependencies are properly configured.
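A common fix for the last cause is to retry the initial connection instead of crashing on startup. A generic sketch of that pattern (the connect callable is whatever your driver provides, e.g. a wrapper around psycopg2.connect; the function name is illustrative):

```python
import time

def connect_with_retry(connect, attempts: int = 5, delay: float = 2.0):
    """Call `connect` until it succeeds, sleeping `delay` seconds between
    attempts; re-raise the last error if every attempt fails."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception as exc:  # in real code, catch your driver's error class
            last_error = exc
            if attempt < attempts:
                time.sleep(delay)
    raise last_error
```

Combined with a Compose health check on the database, this makes startup robust even when services come up in an unlucky order.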
How do I update services after changing code?
For changes to the application code:
- If using build: ., run docker compose build and then docker compose up -d.
- If using volumes to mount local code (development), changes are reflected immediately; just restart the container if needed.
- If using pre-built images, push the new image to a registry and update the image tag in the compose file.
Can I use Docker Compose on Windows and macOS?
Yes. Docker Desktop for Windows and macOS includes Docker Compose. On Windows, make sure you're using the WSL 2 backend for best performance. Linux users can install Compose manually via the CLI.
What happens if I delete a volume?
Deleting a volume with docker compose down -v permanently removes all data stored in that volume. This is useful for resetting environments but dangerous for production databases. Always backup critical data before performing destructive operations.
How do I share my Docker Compose setup with my team?
Commit the docker-compose.yml, Dockerfile, and .dockerignore files to your version control system (e.g., Git). Never commit secrets or sensitive data, including your actual .env file. Instead, provide a template .env.example file to guide team members on the required variables.
Is Docker Compose faster than running containers manually?
Yes. Docker Compose automates the process of linking containers, setting up networks, and managing dependencies. It reduces human error and ensures consistency across environments. A single command replaces dozens of manual docker run commands.
Conclusion
Docker Compose is not just a convenience tool; it's a fundamental component of modern software development and deployment. By enabling developers to define entire application stacks in a single, version-controlled file, Docker Compose eliminates the "it works on my machine" problem and streamlines collaboration, testing, and deployment workflows. From simple web apps to complex microservices architectures, Docker Compose provides a consistent, repeatable, and scalable way to manage containerized applications.
This guide has walked you through the essentials: from setting up your first compose file, to writing production-ready configurations, leveraging best practices, and exploring real-world examples. You've learned how to structure services, manage volumes and networks, secure secrets, and scale applications. You now understand how to integrate Docker Compose into your daily workflow and leverage it for both development and production environments.
As containerization continues to dominate cloud-native development, mastering Docker Compose is no longer optional; it's essential. Start small, experiment with different service combinations, and gradually adopt advanced patterns like multi-stage builds, health checks, and environment-specific configurations. The more you use Docker Compose, the more you'll appreciate its elegance and power. Build your next project with it and see how much simpler development becomes.