If you have spent your career managing bare metal servers or virtual machines, Docker can feel like an unnecessary layer of abstraction. You are comfortable with apt, yum, and systemd, so the idea of wrapping services in containers might seem like a solution looking for a problem. However, the real value of Docker for a sysadmin is not about following trends, but about environment parity and dependency isolation. It allows you to run a specific version of a service with its exact required libraries without polluting the host operating system or dealing with conflicting Python or Node versions. This guide skips the marketing talk and focuses on how Docker actually functions in a production or lab environment.
Understanding the Image vs Container Distinction
The most common hurdle for traditional admins is grasping the relationship between an image and a container. Think of an image as a read-only snapshot or template, similar to a VM template but far lighter. It contains the application code, the runtime, and the system libraries. A container is a running instance of that image. When you start a container, Docker adds a thin writable layer on top of the static image.
This means any changes you make inside a running container, such as editing a config file via a shell, will be lost the moment the container is deleted. This is by design. If you need to change a configuration permanently, you either modify the image or mount external files. This ephemeral nature is why Docker is so effective for testing tools like a Pi-hole setup. You can deploy it, break it, and reset it to a known good state in seconds.
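The throwaway nature of the writable layer is easy to demonstrate on any host with Docker installed. A quick sketch (the container name and file path are arbitrary):

```shell
# Start a container and create a file inside its writable layer
docker run -d --name scratch-test ubuntu:22.04 sleep infinity
docker exec scratch-test touch /tmp/my-change.conf

# Delete the container; the writable layer is discarded with it
docker rm -f scratch-test

# A fresh container from the same image shows no trace of the file
docker run --rm ubuntu:22.04 ls /tmp
```

The image itself never changed; only the disposable layer on top did.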
The Core Commands You Actually Need
You do not need to memorize fifty commands to be productive. Most of your daily work will involve pulling images, starting containers, and checking logs. The following command structure is the foundation of container management:
# Pull an image from a registry
docker pull ubuntu:22.04
# Run a container in detached mode with a custom name
docker run -d --name my-web-server -p 8080:80 nginx
# View running containers
docker ps
# Access the shell of a running container
docker exec -it my-web-server /bin/bash
# View real-time logs for troubleshooting
docker logs -f my-web-server

The -p flag is critical: it maps a host port to a container port. In the example above, traffic hitting your server on port 8080 is routed to port 80 inside the container. This lets you run multiple services that all want to use port 80 on a single host without port conflicts.
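To see that mapping in action, you can run two copies of the same image side by side, each claiming port 80 inside its own namespace (container names and host ports here are arbitrary):

```shell
# Two nginx containers, both listening on port 80 internally
docker run -d --name site-a -p 8080:80 nginx
docker run -d --name site-b -p 8081:80 nginx

# Each is reachable on its own host port
curl -s -o /dev/null http://localhost:8080 && echo "site-a up"
curl -s -o /dev/null http://localhost:8081 && echo "site-b up"
```

On bare metal, the second service would have failed to bind; with Docker, only the host-side port needs to be unique.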
Persistent Data and Volume Mapping
Since containers are ephemeral, you must explicitly tell Docker where to store data that needs to survive a restart or an upgrade. This is handled through volumes or bind mounts. If you are setting up a database or a service like Bitwarden, you map a directory on your host machine to a directory inside the container.
For example, if you are following a NAS setup guide and want to run a file indexing service in Docker, you would use the -v flag: -v /mnt/data:/app/data. Now, anything the application writes to /app/data is actually being written to your host's /mnt/data directory. When you update the container image later, your data remains untouched on the host disk. This separation of the application logic from the data is the key to a reliable 3-2-1 backup strategy since you only need to back up the host volumes.
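As a sketch, here is the bind-mount pattern from the paragraph above alongside Docker-managed named volumes; the image name my-indexer-image is a placeholder for whatever indexing service you run, and the paths are illustrative:

```shell
# Bind mount: host path on the left, container path on the right
docker run -d --name indexer -v /mnt/data:/app/data my-indexer-image

# Named volume: Docker manages the storage location on the host
docker volume create pg_data
docker run -d --name pg \
  -e POSTGRES_PASSWORD=example_password \
  -v pg_data:/var/lib/postgresql/data postgres:15

# Inspect where a named volume actually lives on the host disk
docker volume inspect pg_data
```

Bind mounts are usually the better fit for a backup-focused sysadmin workflow, since the data sits at a host path you choose and can point your backup jobs at directly.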
Docker Compose: The Sysadmin's Runbook
Running long docker run commands with ten different flags is inefficient and prone to human error. Docker Compose is a tool that allows you to define your entire stack in a single YAML file. This file acts as living documentation for your infrastructure. Instead of documenting which ports and volumes a service needs, you simply write it into a docker-compose.yml file.
A typical compose file looks like this:
version: '3'
services:
  db:
    image: postgres:15
    volumes:
      - ./db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: example_password
  web:
    image: my-app-image
    ports:
      - "80:80"
    depends_on:
      - db

With this file, you can bring up the entire environment by typing docker-compose up -d. This approach is perfect for complex setups like a Proxmox home lab where you might want to run several supporting microservices alongside your main VMs.
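Day-to-day operation of a compose stack comes down to a handful of subcommands, all run from the directory containing docker-compose.yml:

```shell
# Start (or update) the whole stack in the background
docker-compose up -d

# Check status and tail logs for a single service
docker-compose ps
docker-compose logs -f web

# Pull newer images, then recreate only the containers that changed
docker-compose pull
docker-compose up -d

# Tear the stack down (named volumes are kept unless you add -v)
docker-compose down
```

Because the YAML file is the single source of truth, upgrading a service is usually just a matter of bumping the image tag and re-running docker-compose up -d.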
Networking and Security Basics
By default, Docker creates a bridge network for containers. This provides a layer of isolation from the host network. However, sysadmins often need containers to communicate with each other. When containers are in the same Docker Compose file, they can reach each other using their service names as hostnames. This internal DNS is handled automatically by Docker.
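Outside of Compose, you get the same name-based resolution by putting containers on a user-defined bridge network yourself (the network and container names below are arbitrary):

```shell
# User-defined bridges provide automatic DNS; the default bridge does not
docker network create app-net

docker run -d --name db --network app-net \
  -e POSTGRES_PASSWORD=example_password postgres:15
docker run -d --name web --network app-net nginx

# From inside "web", the hostname "db" resolves to the db container
docker exec web getent hosts db
```

Containers that are not attached to app-net cannot resolve or reach these services, which gives you a simple segmentation boundary for free.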
From a security perspective, always run your containers as a non-root user when possible. Many official images support environment variables to set the user ID and group ID. Additionally, keep your host OS hardened. If you are running Docker on Windows, follow a Windows security hardening guide to ensure the underlying subsystem is protected. Docker is not a security sandbox in the same way a VM is, so proper host configuration and firewalling remain your primary lines of defense.
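A minimal sketch of the non-root pattern, assuming a generic image (my-app-image is a placeholder): either pass a UID:GID directly with --user, or use the environment variables some image families (such as linuxserver.io's PUID/PGID convention) expose. Check the specific image's documentation, as support and variable names vary:

```shell
# Run as an explicit non-root UID:GID; this user must have write
# permission on any bind-mounted host paths
docker run -d --user 1000:1000 -v /mnt/data:/data my-app-image

# Images that support it: set the runtime UID/GID via env variables
docker run -d -e PUID=1000 -e PGID=1000 my-app-image
```

Either way, a process escaping the container then holds an unprivileged UID on the host rather than root.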