How to Install Docker

Oct 30, 2025 - 12:05

How to Install Docker: A Complete Step-by-Step Guide for Developers and DevOps Teams

Docker has revolutionized the way software is developed, tested, and deployed. By enabling containerization, Docker allows developers to package applications and their dependencies into lightweight, portable units called containers. These containers run consistently across different environments — from a developer’s laptop to production servers — eliminating the infamous “it works on my machine” problem. Whether you're a beginner learning modern DevOps practices or a seasoned engineer optimizing deployment pipelines, installing Docker correctly is the foundational step toward building scalable, reliable, and efficient systems.

This comprehensive guide walks you through every aspect of installing Docker on major operating systems, including Windows, macOS, and Linux. Beyond installation, we cover best practices, essential tools, real-world use cases, and answers to frequently asked questions. By the end of this tutorial, you’ll not only have Docker up and running but also understand how to configure it securely and efficiently for professional use.

Step-by-Step Guide

Installing Docker on Windows

Docker Desktop on Windows runs containers through either the WSL 2 backend (recommended) or the legacy Hyper-V backend. The Hyper-V backend requires Windows 10 Pro, Enterprise, or Education (64-bit); Windows Home users can still install Docker Desktop using WSL 2 by following the additional setup steps below.

First, ensure your system meets the prerequisites:

  • 64-bit processor with Second Level Address Translation (SLAT)
  • Minimum 4GB RAM
  • BIOS-level virtualization enabled (Intel VT-x or AMD-V)
  • Windows 10 version 2004 or higher (Build 19041 or higher)

To enable WSL 2:

  1. Open PowerShell as Administrator and run: dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
  2. Then run: dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
  3. Restart your computer.
  4. Download and install the WSL 2 Linux kernel update package from the Microsoft website.
  5. Set WSL 2 as the default version by running: wsl --set-default-version 2

Next, download Docker Desktop for Windows from the official Docker website: https://www.docker.com/products/docker-desktop.

Run the installer and follow the on-screen prompts. During installation, Docker automatically configures the WSL 2 backend and enables the required services. After installation completes:

  • Launch Docker Desktop from the Start menu.
  • Wait for the Docker whale icon to appear in the system tray — this indicates the daemon is running.
  • Open a terminal (Command Prompt, PowerShell, or Windows Terminal) and run: docker --version

If you see output like “Docker version 24.0.7, build afdd53b”, Docker is successfully installed.

Installing Docker on macOS

Docker Desktop for Mac is the recommended and easiest way to install Docker on macOS systems running macOS 11 (Big Sur) or later. Apple Silicon (M1/M2) and Intel-based Macs are both supported.

Begin by visiting the Docker website and downloading the latest Docker Desktop for Mac installer: https://www.docker.com/products/docker-desktop.

Once downloaded:

  1. Open the .dmg file and drag the Docker icon into the Applications folder.
  2. Launch Docker from your Applications folder.
  3. You’ll be prompted to authenticate with your administrator password — enter it to allow Docker to install required components.
  4. Docker will begin initializing. This may take a few minutes as it downloads and sets up the Linux VM and container engine.
  5. When the whale icon in the menu bar stops animating and Docker Desktop reports it is running, the installation is complete.
  6. Open Terminal and run: docker --version

For optimal performance on Apple Silicon Macs, ensure you’re using Docker Desktop version 3.3.0 or later, which includes native ARM64 support. Avoid running Docker through Rosetta 2 unless necessary.

Installing Docker on Ubuntu and Other Debian-Based Linux Distributions

On Linux, Docker is typically installed via the command line using the official Docker repository for better version control and security updates.

Start by updating your package index:

sudo apt update

Install required packages to allow apt to use a repository over HTTPS:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Add Docker’s official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Set up the stable repository:

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the package index again:

sudo apt update

Install the latest version of Docker Engine:

sudo apt install docker-ce docker-ce-cli containerd.io

Verify the installation:

sudo docker --version

By default, Docker requires root privileges. To run Docker commands without sudo, add your user to the docker group:

sudo usermod -aG docker $USER

Log out and log back in, or run newgrp docker to refresh group membership.
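
To verify the installation end to end, run the standard hello-world test image (this pulls a tiny image from Docker Hub, so it requires network access):

```shell
# Pulls and runs hello-world; a welcome message confirms that the
# daemon, image pulls, and container execution all work.
# --rm removes the container automatically after it exits.
docker run --rm hello-world

# Optionally delete the test image afterwards.
docker rmi hello-world
```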

Installing Docker on CentOS, RHEL, and Fedora

On Red Hat-based systems, the process is similar but uses dnf or yum package managers.

First, remove any older Docker installations:

sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

Install required dependencies:

sudo yum install -y yum-utils

Add the Docker repository:

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker Engine:

sudo yum install docker-ce docker-ce-cli containerd.io

Start and enable Docker to run at boot:

sudo systemctl start docker

sudo systemctl enable docker

Verify the installation:

sudo docker --version

Add your user to the docker group:

sudo usermod -aG docker $USER

Log out and back in to apply group changes.

Installing Docker on Arch Linux

Arch Linux users can install Docker using the official package manager, pacman:

sudo pacman -S docker

Start and enable the service:

sudo systemctl start docker

sudo systemctl enable docker

Verify installation and add user to docker group:

docker --version

sudo usermod -aG docker $USER

Best Practices

Use the Official Docker Repository

Always install Docker from the official Docker repository rather than using distribution-provided packages. The official repository ensures you receive the latest stable releases, security patches, and compatibility fixes. Distribution repositories often lag behind in version updates, which may lead to compatibility issues with modern applications or tools.

Regularly Update Docker

Security vulnerabilities in container runtimes are discovered periodically. Regularly updating Docker ensures your environment remains protected. Use the package manager you used for installation to update:

  • Ubuntu/Debian: sudo apt update && sudo apt install --only-upgrade docker-ce docker-ce-cli containerd.io
  • CentOS/RHEL: sudo yum update docker-ce
  • macOS/Windows: Docker Desktop automatically notifies you of updates — always apply them promptly.

Configure Resource Limits

By default, Docker Desktop on macOS and Windows allocates a significant portion of system resources (e.g., 2–4 CPUs, 2–8GB RAM). For development machines with limited resources, adjust these settings to avoid system slowdowns.

In Docker Desktop, go to Settings > Resources to reduce CPU, memory, or disk usage. On Linux, monitor resource usage with docker stats and use cgroups or systemd to enforce limits on containers.
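
Limits can also be applied per container at run time; a sketch (the values here are arbitrary examples, not recommendations):

```shell
# Start nginx capped at 512 MB of memory and 1.5 CPUs;
# Docker enforces these through Linux cgroups.
docker run -d --name limited --memory=512m --cpus="1.5" nginx:alpine

# Verify the cap in the MEM USAGE / LIMIT column.
docker stats --no-stream limited
```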

Use Non-Root Users

Running Docker as root poses a security risk. Even though adding your user to the docker group is standard, ensure only trusted users have access to this group. Avoid running containers with root privileges inside the container unless absolutely necessary. Use the USER directive in Dockerfiles to switch to a non-root user:

FROM ubuntu:22.04
RUN useradd --create-home --shell /bin/bash appuser
USER appuser
COPY --chown=appuser:appuser . /home/appuser/app
WORKDIR /home/appuser/app
CMD ["./app"]

Enable Content Trust and Scan Images

Docker Content Trust (DCT) ensures that only signed images are pulled and run. Enable it by setting:

export DOCKER_CONTENT_TRUST=1

Use tools like Trivy or Docker Scout to scan images for vulnerabilities before deployment (the older docker scan command has been deprecated in recent Docker releases):

trivy image your-image:tag

Use .dockerignore Files

Just as .gitignore excludes files from version control, .dockerignore excludes files from the build context. This reduces image size and speeds up builds. Create a .dockerignore file in your project root:

.git
node_modules
.env
log/
*.log
Dockerfile
docker-compose.yml

Optimize Dockerfile Layers

Each instruction in a Dockerfile creates a new layer. Combine related commands using && to minimize layers:

RUN apt-get update && apt-get install -y \
        curl \
        vim \
        nginx \
    && rm -rf /var/lib/apt/lists/*

Place infrequently changing instructions (like installing dependencies) before frequently changing ones (like copying source code) to leverage Docker’s layer caching.
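
As a sketch of that ordering for a typical Python project (the file names are assumptions, not from the original):

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Changes rarely: this layer stays cached across most rebuilds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Changes often: only these layers are rebuilt after a source edit.
COPY . .
CMD ["python", "app.py"]
```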

Monitor and Log Container Activity

Use docker logs <container-id> to inspect application output. For production environments, integrate centralized logging with tools like ELK Stack, Fluentd, or Loki. Monitor container health with Docker’s built-in health checks:

HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
  CMD curl -f http://localhost/ || exit 1

Tools and Resources

Essential Docker CLI Commands

Mastering the Docker CLI is critical for daily operations. Here are the most essential commands:

  • docker run — Run a container from an image
  • docker ps — List running containers
  • docker ps -a — List all containers (including stopped)
  • docker images — List local images
  • docker build — Build an image from a Dockerfile
  • docker pull — Download an image from a registry
  • docker push — Upload an image to a registry
  • docker stop / docker start — Stop or start a container
  • docker rm — Remove a container
  • docker rmi — Remove an image
  • docker exec -it <container> bash — Open a shell inside a running container
  • docker logs <container> — View container logs
  • docker stats — Monitor real-time resource usage
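
A typical session chains several of these together; for example (the container name is arbitrary):

```shell
docker pull nginx:alpine                           # download the image
docker run -d --name web -p 8080:80 nginx:alpine   # start in the background
docker ps                                          # confirm it is running
docker logs web                                    # view its output
docker exec -it web sh                             # open a shell (alpine ships sh, not bash)
docker stop web && docker rm web                   # stop and remove the container
```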

Docker Compose for Multi-Container Applications

Docker Compose allows you to define and run multi-container applications using a YAML file. It’s ideal for local development environments with databases, caches, and microservices.

Install Docker Compose:

  • On Linux: sudo apt install docker-compose-plugin from Docker's repository (Compose V2, invoked as docker compose), or use the standalone binary from GitHub
  • On macOS/Windows: Included with Docker Desktop

Create a docker-compose.yml file:

version: '3.8'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html
  db:
    image: postgres:14
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:

Start services with: docker-compose up

Container Registries

  • Docker Hub — Free public registry with millions of images
  • GitHub Container Registry (GHCR) — Integrated with GitHub repositories
  • Amazon ECR — Secure registry for AWS users
  • Google Container Registry (GCR) — Integrated with Google Cloud
  • GitLab Container Registry — Built into GitLab CI/CD pipelines

Always prefer private registries for internal applications to maintain security and compliance.
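
Whatever registry you choose, publishing follows the same tag-then-push pattern. A sketch against GitHub Container Registry, where the username, image name, and token variable are placeholders:

```shell
# Authenticate with a personal access token that has the
# write:packages scope (stored here in $GITHUB_TOKEN).
echo "$GITHUB_TOKEN" | docker login ghcr.io -u myusername --password-stdin

# Re-tag the local image with the registry prefix, then push.
docker tag myapp:latest ghcr.io/myusername/myapp:latest
docker push ghcr.io/myusername/myapp:latest
```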

Development and Debugging Tools

  • Dive — Analyze Docker image layers and detect bloat
  • Portainer — Web-based GUI for managing Docker containers and volumes
  • docker-slim — Minimize image size by analyzing runtime behavior
  • Trivy — Open-source vulnerability scanner for containers
  • Visual Studio Code with Docker Extension — Integrated Docker management in your IDE

Real Examples

Example 1: Running a Python Flask App in a Container

Let’s containerize a simple Flask application.

Create a project directory:

mkdir flask-app

cd flask-app

Create app.py:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello from Dockerized Flask!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Create requirements.txt:

Flask==2.3.3
gunicorn==21.2.0

Create Dockerfile:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "1", "app:app"]

Build and run:

docker build -t flask-app .

docker run -p 5000:5000 flask-app

Visit http://localhost:5000 in your browser to see the app running.

Example 2: Database + Web App with Docker Compose

Deploy a Node.js Express app with a PostgreSQL database using docker-compose.yml:

version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DB_HOST=db
      - DB_PORT=5432
      - DB_USER=postgres
      - DB_PASSWORD=secret
      - DB_NAME=myapp
    depends_on:
      - db
    volumes:
      - .:/app
      - /app/node_modules
  db:
    image: postgres:15
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:

Run docker-compose up and your full stack is live with automatic networking between services.
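
Inside the web container, the application picks up the DB_* variables that Compose injects. A minimal Python sketch of assembling a connection string from them (the helper function and its defaults are illustrative, not part of the original stack):

```python
import os

def build_dsn() -> str:
    """Build a PostgreSQL DSN from the environment Compose provides."""
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "5432")
    user = os.environ.get("DB_USER", "postgres")
    password = os.environ.get("DB_PASSWORD", "")
    name = os.environ.get("DB_NAME", "postgres")
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"

if __name__ == "__main__":
    # With the compose file above, the hostname "db" resolves to the
    # database container through Docker's built-in DNS.
    print(build_dsn())
```

Because services on the same Compose network reach each other by service name, no IP addresses need to be hard-coded.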

Example 3: CI/CD Pipeline with Docker

Many teams use Docker in CI/CD pipelines. Here’s a GitHub Actions workflow that builds, tests, and pushes a Docker image:

name: Build and Push Docker Image
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: myusername/myapp:latest

This workflow automatically builds and pushes the image to Docker Hub on every push to main — enabling continuous deployment.

Example 4: Local Development with Multiple Services

Modern applications often require Redis, Elasticsearch, or Kafka. Docker makes it trivial to spin them up locally:

version: '3.8'
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.10.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
  kafka:
    image: bitnami/kafka:3.6
    ports:
      - "9092:9092"
    environment:
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
    depends_on:
      - zookeeper
  zookeeper:
    image: bitnami/zookeeper:3.8
    ports:
      - "2181:2181"

With one command, you have a full local environment matching production.

FAQs

Can I run Docker on Windows 10 Home?

Yes, but only using WSL 2. Docker Desktop for Windows requires WSL 2, which is supported on Windows 10 Home starting with version 2004. You must manually enable WSL 2 and install a Linux distribution from the Microsoft Store (e.g., Ubuntu).

What’s the difference between Docker and a virtual machine?

Docker containers share the host OS kernel and run as isolated processes, making them lightweight and fast to start. Virtual machines emulate an entire operating system, requiring more memory and slower boot times. Containers are ideal for microservices and application deployment; VMs are better for running different OSes or legacy applications.

Why do I get “permission denied” when running docker commands?

This occurs when your user isn’t in the docker group. Fix it by running sudo usermod -aG docker $USER, then log out and back in. Alternatively, prefix commands with sudo — but this is not recommended for daily use.

How do I remove all Docker containers and images?

To remove all stopped containers: docker container prune
To remove all unused images: docker image prune -a
To remove everything: docker system prune -a (use with caution)

Can I run Docker on ARM-based devices like Raspberry Pi?

Yes. Docker supports ARM architectures. Download the appropriate ARM version from the Docker website or use curl -fsSL https://get.docker.com | sh on Raspberry Pi OS. Many popular images (e.g., nginx, postgres, redis) are multi-arch and work natively on ARM.

How do I update Docker Compose?

On Linux, download the latest binary from GitHub: https://github.com/docker/compose/releases. Replace the existing binary in /usr/local/bin/docker-compose. On macOS and Windows, Docker Desktop updates Docker Compose automatically.

Is Docker safe for production use?

Yes, when configured properly. Use trusted base images, scan for vulnerabilities, limit container privileges, enable content trust, and monitor logs and resource usage. Many Fortune 500 companies rely on Docker in production environments.

What happens if Docker crashes or the daemon stops?

Containers will stop running, but their data persists unless volumes are deleted. Use docker start <container-id> to restart them. Enable restart policies to auto-restart containers on failure:

docker run --restart=always your-image

How do I back up Docker volumes?

Use tar to archive volume data:

docker run --rm -v myvolume:/volume -v $(pwd):/backup alpine tar cvf /backup/backup.tar /volume

To restore:

docker run --rm -v myvolume:/volume -v $(pwd):/backup alpine sh -c "cd /volume && tar xvf /backup/backup.tar --no-overwrite-dir"

Conclusion

Installing Docker is more than just running an installer — it’s the gateway to modern software development. By containerizing applications, you gain consistency, portability, and scalability that traditional deployment methods simply cannot match. Whether you're deploying a simple web app or orchestrating complex microservices across cloud environments, Docker provides the foundation for reliability and efficiency.

This guide has walked you through installing Docker on Windows, macOS, and Linux, applied best practices for security and performance, introduced essential tools like Docker Compose and Trivy, and demonstrated real-world use cases from Flask apps to CI/CD pipelines. You now have the knowledge to not only install Docker but to use it effectively in professional environments.

As you continue your journey, remember: the power of Docker lies not in the installation itself, but in how you leverage containers to streamline workflows, reduce complexity, and accelerate delivery. Start small — containerize a single service. Then expand. Eventually, you’ll find that Docker isn’t just a tool — it’s a mindset that transforms how software is built and shipped.

Keep experimenting. Keep learning. And most importantly — keep deploying.