How to Run Containers


Oct 30, 2025 - 12:06


Running containers has become a cornerstone of modern software development, deployment, and infrastructure management. Whether you're a developer building microservices, a DevOps engineer scaling applications, or a system administrator optimizing resource usage, understanding how to run containers effectively is no longer optional—it’s essential. Containers provide a lightweight, portable, and consistent way to package applications and their dependencies, ensuring they run reliably across different environments—from a developer’s laptop to production cloud servers. This guide offers a comprehensive, step-by-step tutorial on how to run containers, covering foundational concepts, practical execution, industry best practices, essential tools, real-world examples, and answers to frequently asked questions. By the end of this tutorial, you’ll have the knowledge and confidence to deploy and manage containers in any environment.

Step-by-Step Guide

Understanding Containers and Containerization

Before diving into execution, it’s critical to understand what containers are and how they differ from traditional virtual machines (VMs). A container is a standardized unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Unlike VMs, which virtualize the entire operating system, containers share the host OS kernel and isolate processes at the application level. This makes them significantly lighter, faster to start, and more resource-efficient.

Containerization relies on operating system-level virtualization. Linux namespaces and control groups (cgroups) are the core technologies enabling this isolation. Namespaces provide separate views of the system—for example, process IDs, network interfaces, and file systems—while cgroups limit and account for resource usage like CPU, memory, and I/O.
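You can see the namespace handles for any process on a Linux host under /proc, which makes the abstraction concrete. A quick look (the check lets it degrade gracefully on non-Linux systems):

```shell
# List the namespaces of the current process (pid, net, mnt, uts, ipc, ...).
if [ -d /proc/self/ns ]; then
  ls /proc/self/ns
else
  echo "no /proc/self/ns here; this check only works on Linux"
fi
```

A containerized process gets fresh entries for most of these, which is why it sees its own PID 1, hostname, and network stack.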

The most widely adopted container platform today is Docker, though alternatives like Podman, containerd, and CRI-O are gaining traction—especially in Kubernetes environments. For this guide, we’ll focus on Docker as the primary tool, given its broad adoption and rich ecosystem.

Prerequisites

Before you begin running containers, ensure your system meets the following requirements:

  • A 64-bit operating system (Linux, Windows, or macOS)
  • At least 4 GB of RAM (8 GB recommended for production)
  • Internet connectivity to download container images
  • Administrative privileges to install and run container engines

For Linux users, ensure your kernel version is 3.10 or higher. On Windows and macOS, Docker Desktop provides a seamless experience by running a lightweight Linux VM in the background.
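On Linux you can verify the kernel version from a terminal before installing anything:

```shell
# Print the running kernel release; 3.10 or higher supports Docker.
uname -r
```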

Step 1: Install a Container Runtime

The first step in running containers is installing a container runtime. We’ll use Docker as our example.

On Ubuntu/Debian Linux:

sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

On macOS:

Download Docker Desktop from docker.com/products/docker-desktop and install it via the GUI. Launch the application after installation—it will automatically start the Docker daemon.

On Windows:

Download Docker Desktop for Windows and install it. Ensure Windows Subsystem for Linux (WSL 2) is enabled. Docker Desktop will configure WSL 2 automatically during installation.

After installation, verify Docker is working:

docker --version

You should see output like: Docker version 24.0.7, build afdd53b
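Beyond checking the version, a useful end-to-end smoke test is the hello-world image, which exercises the daemon, a registry pull, and container startup in one go. Wrapped as a function you can call once the daemon is running (a sketch):

```shell
# Pull and run hello-world; succeeds only if the full pipeline works.
docker_smoke_test() {
  docker run --rm hello-world
}
# usage: docker_smoke_test
```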

Step 2: Pull a Container Image

Containers are instantiated from images. An image is a read-only template that includes everything needed to run an application: code, runtime, libraries, environment variables, and configuration files.

Images are stored in registries—the most popular being Docker Hub. To pull an image, use the docker pull command.

For example, to pull the official Nginx web server image:

docker pull nginx

This downloads the latest version of the Nginx image from Docker Hub. You can specify a tag (version) if needed:

docker pull nginx:1.25

To list all downloaded images on your system:

docker images

Step 3: Run a Container

Once you have an image, you can run it as a container using the docker run command. This command creates and starts a container from the specified image.

Basic syntax:

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Let’s run the Nginx container:

docker run -d -p 8080:80 --name my-nginx nginx

Here’s what each flag does:

  • -d: Run the container in detached mode (in the background)
  • -p 8080:80: Map port 8080 on the host to port 80 inside the container
  • --name my-nginx: Assign a custom name to the container
  • nginx: The image to use

After running this command, open your browser and navigate to http://localhost:8080. You should see the default Nginx welcome page.

Step 4: Manage Running Containers

Once containers are running, you’ll need to monitor and manage them.

To list all running containers:

docker ps

To list all containers (including stopped ones):

docker ps -a

To stop a running container:

docker stop my-nginx

To restart a stopped container:

docker start my-nginx

To remove a container (after stopping it):

docker rm my-nginx

To view container logs:

docker logs my-nginx

To enter a running container interactively:

docker exec -it my-nginx /bin/bash

This opens a shell inside the container, allowing you to inspect files, run commands, or debug issues.
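Note that minimal images (alpine and similar slim bases) often don't ship bash. A small helper that tries bash and falls back to sh (a sketch; the function name is illustrative):

```shell
# Open an interactive shell in a running container, preferring bash.
shell_into() {
  docker exec -it "$1" /bin/bash 2>/dev/null || docker exec -it "$1" /bin/sh
}
# usage: shell_into my-nginx
```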

Step 5: Build a Custom Container Image

While pre-built images are convenient, you’ll often need to create custom images tailored to your application.

Create a directory for your project:

mkdir my-app
cd my-app

Create a simple Python web app called app.py:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello from a custom container!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Create a requirements.txt file. Gunicorn is listed alongside Flask because the Dockerfile's CMD uses it as the production server:

Flask==3.0.0
gunicorn==21.2.0

Create a Dockerfile:

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000

CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "1", "app:app"]

Build the image:

docker build -t my-python-app .

Run the custom container:

docker run -d -p 5000:5000 --name my-app my-python-app

Visit http://localhost:5000 to see your app in action.

Step 6: Use Docker Compose for Multi-Container Applications

Most real-world applications involve multiple services: a web server, a database, a cache, etc. Docker Compose lets you define and run multi-container applications using a single YAML file.

Create a docker-compose.yml file:

version: '3.8'

services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis
    environment:
      - REDIS_HOST=redis

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

Run the entire stack:

docker-compose up -d

This command builds the web service (using your Dockerfile), pulls the Redis image, and starts both containers in detached mode. Use docker-compose logs to monitor output, and docker-compose down to stop and remove all services.

Best Practices

Use Minimal Base Images

Always prefer slim or alpine-based base images. For example, use python:3.11-slim instead of python:3.11. Smaller images reduce attack surface, improve build times, and decrease bandwidth usage during pulls. Alpine Linux images are especially popular due to their tiny size (often under 5 MB).

Minimize Layers in Dockerfiles

Each instruction in a Dockerfile creates a new layer. Combine related commands using && to reduce layers:

RUN apt-get update && apt-get install -y \
    curl \
    vim \
    && rm -rf /var/lib/apt/lists/*

This avoids leaving unnecessary files and keeps the image lean.

Use .dockerignore

Just as you use .gitignore, create a .dockerignore file to exclude unnecessary files from the build context:

.git
node_modules
.env
*.log
__pycache__

This speeds up builds and prevents sensitive files from being included in the image.

Don’t Run as Root

By default, containers run as the root user. This is a security risk. Create a non-root user inside your image:

FROM python:3.11-slim

# Debian-based image: use groupadd/useradd (addgroup/adduser -S is Alpine syntax)
RUN groupadd -g 1001 appuser && useradd -u 1001 -g appuser -m appuser

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY --chown=appuser:appuser . .

# Drop privileges only after dependencies are installed
USER appuser

EXPOSE 5000

CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "1", "app:app"]

Set Resource Limits

Prevent containers from consuming excessive resources. Use flags like --memory and --cpus:

docker run -d --name my-app --memory=512m --cpus=0.5 my-python-app
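To confirm the limits actually took effect, you can read them back from the container's HostConfig. A sketch (my-app is the container started above):

```shell
# Memory is reported in bytes, CPU in nano-CPUs (0.5 CPUs = 500000000).
show_limits() {
  docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' "$1"
}
# usage: show_limits my-app
```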

In Docker Compose:

services:
  web:
    image: my-python-app
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'

Use Environment Variables for Configuration

Never hardcode secrets or environment-specific settings in images. Use environment variables:

docker run -d -e DATABASE_URL=postgres://user:pass@db:5432/mydb my-app

Or use a .env file with Docker Compose:

docker-compose --env-file .env up
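Inside the container, the application reads these values at startup. A common entrypoint pattern is to fall back to a safe development default when a variable is unset (DATABASE_URL and the default URL here are purely illustrative):

```shell
# Use DATABASE_URL if provided, otherwise fall back to a dev default.
if [ -z "${DATABASE_URL:-}" ]; then
  DATABASE_URL="postgres://localhost:5432/devdb"
  echo "DATABASE_URL not set; falling back to ${DATABASE_URL}" >&2
fi
echo "starting with DATABASE_URL=${DATABASE_URL}"
```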

Regularly Update and Rebuild Images

Base images receive security patches. Rebuild your images periodically to incorporate updates:

docker pull python:3.11-slim

docker build -t my-app:latest .

Consider using automated tools like Dependabot or Renovate to monitor for vulnerable dependencies.

Scan Images for Vulnerabilities

Use tools like Docker Scout, Trivy, or Clair to scan images for known vulnerabilities before deployment:

trivy image my-python-app

Integrate scanning into your CI/CD pipeline to block deployments of insecure images.

Tag Images Properly

Use semantic versioning for images:

  • my-app:v1.2.0 — stable release
  • my-app:latest — only for development
  • my-app:dev-20240510 — temporary build tag

Avoid relying on :latest in production. It leads to unpredictable deployments.

Log to stdout/stderr, Not Files

Container logs should be written to stdout and stderr. This allows the container runtime to capture and route logs centrally. Avoid writing logs to files inside the container unless absolutely necessary.
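Because stdout logs are captured by the runtime (the json-file driver by default), it is worth capping their size so a chatty container cannot fill the host disk. A sketch using docker run's log options, with illustrative names:

```shell
# Run a container with rotated json-file logs: at most 3 files of 10 MB.
run_with_log_caps() {
  docker run -d --name "$1" \
    --log-opt max-size=10m --log-opt max-file=3 \
    "$2"
}
# usage: run_with_log_caps my-nginx nginx
```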

Tools and Resources

Core Tools

  • Docker — The most widely used container runtime and toolset. Ideal for development and single-host deployments.
  • Docker Compose — For defining and running multi-container applications on a single host.
  • Podman — A Docker-compatible container engine that runs without a daemon and supports rootless containers. Popular in enterprise and Red Hat environments.
  • containerd — A lightweight container runtime used by Docker and Kubernetes under the hood.
  • CRI-O — A Kubernetes-native container runtime compliant with the Container Runtime Interface (CRI).

Image Registries

  • Docker Hub — Public registry with millions of images. Free tier available.
  • GitHub Container Registry (GHCR) — Integrated with GitHub repositories. Ideal for CI/CD workflows.
  • Amazon ECR — Managed container registry for AWS users.
  • Google Container Registry (GCR) — Google’s managed registry for GCP users.
  • GitLab Container Registry — Built into GitLab CI/CD pipelines.

Orchestration Platforms

For managing containers at scale, use orchestration tools:

  • Kubernetes — The industry standard for container orchestration. Automates deployment, scaling, and management.
  • Docker Swarm — Docker’s native clustering tool. Simpler than Kubernetes but less feature-rich.
  • Nomad — A lightweight scheduler from HashiCorp that supports containers and non-container workloads.

Monitoring and Observability

  • Prometheus + Grafana — For metrics collection and visualization.
  • ELK Stack (Elasticsearch, Logstash, Kibana) — For centralized logging.
  • OpenTelemetry — For distributed tracing and telemetry data collection.
  • Docker Stats — Built-in command to monitor resource usage of running containers.

Security Tools

  • Trivy — Open-source scanner for vulnerabilities, misconfigurations, and secrets.
  • Clair — Static analysis tool for container vulnerabilities.
  • Docker Bench for Security — Script that checks Docker host for security best practices.
  • Notary — For image signing and verification.


Real Examples

Example 1: Running a WordPress Site with MySQL

Many websites still run on WordPress. Here’s how to deploy it using containers:

Create a docker-compose.yml:

version: '3.8'

services:
  db:
    image: mysql:8.0
    container_name: wordpress_db
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - db_data:/var/lib/mysql
    restart: always

  wordpress:
    image: wordpress:latest
    container_name: wordpress_app
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html
    restart: always
    depends_on:
      - db

volumes:
  db_data:
  wp_data:

Run:

docker-compose up -d

Visit http://localhost:8000 to complete the WordPress setup. With the default passwords replaced and a reverse proxy with TLS in front, this setup can serve a small blog; heavier traffic calls for load balancers and a managed database.

Example 2: Deploying a Node.js API with Redis Cache

Consider a REST API that needs a fast in-memory cache:

app.js:

const express = require('express');
const redis = require('redis');

const app = express();
// node-redis v4: pass a URL and call connect() before issuing commands
const client = redis.createClient({ url: 'redis://redis:6379' });

client.on('error', (err) => {
  console.error('Redis error:', err);
});

app.get('/', async (req, res) => {
  const cached = await client.get('homepage');
  if (cached) {
    return res.send('Cached: ' + cached);
  }
  const data = 'Hello from Node.js API!';
  await client.set('homepage', data, { EX: 60 }); // cache for 60 seconds
  res.send(data);
});

client.connect().then(() => {
  app.listen(3000, () => {
    console.log('Server running on port 3000');
  });
});

Dockerfile:

FROM node:18-alpine

WORKDIR /app

COPY package*.json ./

RUN npm install --only=production

COPY . .

EXPOSE 3000

CMD ["node", "app.js"]

docker-compose.yml:

version: '3.8'

services:
  api:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - redis

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

This setup demonstrates a scalable, decoupled architecture. The API and Redis are independent containers, allowing them to be scaled, updated, or replaced without affecting each other.

Example 3: CI/CD Pipeline with GitHub Actions

Automate container builds and pushes using GitHub Actions:

Create .github/workflows/build-and-push.yml:

name: Build and Push Docker Image

on:
  push:
    branches: [ main ]

# GITHUB_TOKEN needs packages: write to push to GHCR
permissions:
  contents: read
  packages: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

Every push to the main branch triggers a build, pushes the image to GitHub Container Registry, and ensures your latest code is always available for deployment.

FAQs

What is the difference between a container and a virtual machine?

Containers share the host operating system’s kernel and isolate applications at the process level, while virtual machines virtualize the entire hardware stack and run a full guest OS. Containers are lighter, start faster, and use fewer resources. VMs offer stronger isolation and are better for running multiple different operating systems on the same host.

Can I run Windows containers on Linux?

No. Containers rely on the host OS kernel. Windows containers require a Windows host, and Linux containers require a Linux host. Docker Desktop on macOS and Windows uses a Linux VM to run Linux containers. Windows containers can be run on Windows Server or Windows 10/11 with Hyper-V enabled.

How do I persist data in containers?

Containers are ephemeral. To persist data, use Docker volumes or bind mounts. Volumes are managed by Docker and stored in a designated location on the host. Bind mounts link a directory on the host to a directory in the container. Use volumes for databases and persistent application data.
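As a sketch, a named volume outlives any container that mounts it (demo-db, demo_data, and the password are illustrative placeholders):

```shell
# Create a named volume and mount it at MySQL's data directory.
start_db_with_volume() {
  docker volume create demo_data
  docker run -d --name demo-db \
    -e MYSQL_ROOT_PASSWORD=changeme \
    -v demo_data:/var/lib/mysql \
    mysql:8.0
}
# Removing demo-db later leaves demo_data (and the database files) intact.
```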

Are containers secure?

Containers are secure when configured properly. Key practices include running as non-root, using minimal base images, scanning for vulnerabilities, limiting resource access, and avoiding exposed ports. However, misconfigurations can lead to privilege escalation or container breakout. Always follow security best practices and integrate scanning into your workflow.

Do I need Kubernetes to run containers?

No. You can run containers on a single machine using Docker or Podman. Kubernetes is necessary only when you need to manage hundreds or thousands of containers across multiple servers with automated scaling, self-healing, and load balancing.

How do I update a running container?

You cannot update a running container directly. Instead, stop and remove the container, then run a new one from an updated image. In production, use orchestration tools like Kubernetes to perform rolling updates with zero downtime.
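The stop-remove-rerun cycle can be captured in a small helper (the port and names are placeholders for whatever your container uses):

```shell
# Replace a running container with one based on a newer image.
update_container() {
  name="$1"; image="$2"
  docker pull "$image"
  docker stop "$name" && docker rm "$name"
  docker run -d --name "$name" -p 5000:5000 "$image"
}
# usage: update_container my-app my-python-app:latest
```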

What happens if a container crashes?

By default, a crashed container stops. You can configure restart policies using --restart flags:

  • no — Do not restart (default)
  • on-failure — Restart only on non-zero exit codes
  • always — Always restart, even after system reboot
  • unless-stopped — Always restart unless manually stopped

Can I run containers without Docker?

Yes. You can use Podman, containerd, or CRI-O directly. Podman is a drop-in replacement for Docker and doesn’t require a daemon. Many cloud platforms and Kubernetes clusters use containerd or CRI-O under the hood.

How do I inspect what’s inside a container image?

Use docker run --rm -it image_name /bin/sh to open a shell inside a temporary container. Alternatively, use tools like dive (a Docker image explorer) to analyze image layers and contents.

Why is my container using so much memory?

Check for memory leaks in your application. Also, ensure you’ve set memory limits using --memory or Docker Compose resource constraints. Some applications (like Java or Node.js) may allocate more memory than needed by default. Use environment variables like JAVA_OPTS or NODE_OPTIONS to limit heap size.
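For Node.js, for example, you can cap the V8 heap below the container limit so the runtime starts collecting garbage before the kernel OOM-kills the container (the image name is hypothetical):

```shell
# Cap container memory at 512 MB and the V8 heap at 384 MB.
run_node_capped() {
  docker run -d --memory=512m \
    -e NODE_OPTIONS=--max-old-space-size=384 \
    "$1"
}
# usage: run_node_capped my-node-app
```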

Conclusion

Running containers is a fundamental skill in modern software engineering. From simple single-service applications to complex microservices architectures, containers provide the speed, portability, and consistency that traditional deployment methods lack. This guide has walked you through the entire lifecycle—from installing Docker and pulling images, to building custom containers, managing multi-service applications with Docker Compose, and applying industry best practices for security, performance, and scalability.

As you continue your journey, remember that containerization is not just a technical tool—it’s a cultural shift toward immutable infrastructure, DevOps collaboration, and automated delivery. Embrace automation, prioritize security from the start, and always test your containers in environments that mirror production.

Whether you’re deploying your first web app or managing a fleet of microservices across the cloud, the principles outlined here will serve as a solid foundation. The future of software delivery is containerized—and now, you’re equipped to build it.