Ever had an Ansible playbook work perfectly on your machine, only to fail in CI or on someone else’s laptop? That’s usually not an Ansible problem—it’s an environment problem.
This is where containers quietly become one of the most useful tools in an Ansible developer’s workflow. Instead of wrestling with Python versions, missing collections, or conflicting dependencies, you define everything once and run it anywhere.
Why containerize your Ansible workflow?
Ansible is agentless, which is great—but your development environment isn’t. It depends on:
- Python versions
- Ansible versions
- Collections and roles
- System packages (ssh, git, etc.)
These differences cause subtle bugs that are hard to track. Containers fix this by giving you a predictable runtime.
If your Ansible runs inside a container, your playbooks behave the same everywhere—local machine, CI pipeline, or production runner.
Start simple: a minimal Ansible Docker image
Let’s build a lightweight container for running Ansible:
```dockerfile
FROM python:3.11-slim

WORKDIR /ansible

RUN pip install --no-cache-dir ansible

CMD ["ansible", "--version"]
```
Build and test it:
```shell
docker build -t ansible-dev .
docker run --rm ansible-dev
```

This gives you a clean, isolated Ansible environment. No pollution from your host system.
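If you want rebuilds to be reproducible, you can also pin the Ansible version in the Dockerfile. A sketch — the version below is an illustrative assumption, not a recommendation; pin whatever your team has actually tested:

```dockerfile
FROM python:3.11-slim

WORKDIR /ansible

# Pin ansible-core so every rebuild resolves to the same version
# (2.16.* is only an example)
RUN pip install --no-cache-dir "ansible-core==2.16.*"

CMD ["ansible", "--version"]
```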
Mounting your playbooks into the container
You rarely bake playbooks into the image. Instead, mount them:
```shell
docker run --rm \
  -v "$(pwd)":/ansible \
  ansible-dev \
  ansible-playbook playbook.yml
```

This keeps your development workflow flexible while still benefiting from a consistent runtime.
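If you'd rather not retype the bind mount, the same setup can be captured in a Compose file. A sketch — the service name `ansible` is an arbitrary choice:

```yaml
# docker-compose.yml
services:
  ansible:
    image: ansible-dev
    volumes:
      - .:/ansible
```

Then run playbooks with `docker compose run --rm ansible ansible-playbook playbook.yml`.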
Here’s where things get interesting: Ansible Execution Environments
If you’ve worked with Ansible Automation Platform or modern Ansible setups, you’ve probably heard of Execution Environments (EEs).
They’re essentially purpose-built container images that include:
- Ansible Core
- Collections
- Python dependencies
- System libraries
You define them declaratively using an execution-environment.yml file:
```yaml
version: 1

build_arg_defaults:
  EE_BASE_IMAGE: quay.io/ansible/ansible-runner

dependencies:
  galaxy: requirements.yml
  python: requirements.txt
```

Then build with:

```shell
ansible-builder build -t my-ansible-ee
```

This approach scales better than ad-hoc Dockerfiles, especially for teams.
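Once built, tools such as ansible-navigator can run playbooks inside that image. A sketch of a settings file — the schema has shifted between navigator versions, so check the version you have installed:

```yaml
# ansible-navigator.yml
ansible-navigator:
  execution-environment:
    image: my-ansible-ee
    pull:
      policy: missing
```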
Managing dependencies the right way
A common mistake developers make is installing everything manually inside containers. That quickly becomes unmaintainable.
Instead, split dependencies clearly:
1. Ansible collections

```yaml
# requirements.yml
collections:
  - name: community.docker
  - name: ansible.posix
```

2. Python dependencies

```text
# requirements.txt
boto3
requests
```

3. System packages (optional)

Handled in your Dockerfile or EE config:

```dockerfile
RUN apt-get update && apt-get install -y sshpass git
```

This separation makes your builds reproducible and easier to debug.
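In a plain Dockerfile (outside ansible-builder), the three dependency types above can be wired together like this — a sketch that assumes Ansible is already installed in the image and the files sit next to the Dockerfile:

```dockerfile
COPY requirements.txt requirements.yml ./

# Python deps first, so this layer caches independently of playbook changes
RUN pip install --no-cache-dir -r requirements.txt

# Bake the collections into the image
RUN ansible-galaxy collection install -r requirements.yml
```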
Running Ansible in CI/CD with containers
Once containerized, plugging Ansible into CI becomes trivial.
Example using GitHub Actions:
```yaml
jobs:
  ansible:
    runs-on: ubuntu-latest
    container:
      image: my-ansible-ee

    steps:
      - uses: actions/checkout@v3
      - name: Run playbook
        run: ansible-playbook playbook.yml
```

No need to install Ansible in the pipeline. No surprises from environment drift.
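The same idea works in other CI systems. A GitLab CI sketch, assuming the my-ansible-ee image has been pushed to a registry your runners can pull from:

```yaml
# .gitlab-ci.yml
ansible:
  image: my-ansible-ee
  script:
    - ansible-playbook playbook.yml
```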
Local development workflow that actually scales
Here’s a setup that works well for teams:
- Use Docker or Podman for running Ansible
- Keep playbooks in your local filesystem
- Use a Makefile or script for common commands
Example:
```makefile
# Makefile
run:
	docker run --rm -v $(PWD):/ansible my-ansible-ee ansible-playbook playbook.yml

lint:
	docker run --rm -v $(PWD):/ansible my-ansible-ee ansible-lint
```

This removes the need for developers to install anything except Docker.
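A target for dropping into the container interactively is also handy for debugging — a sketch, assuming the image includes bash:

```makefile
shell:
	docker run --rm -it -v $(PWD):/ansible my-ansible-ee bash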
Networking and SSH gotchas
Containers introduce a few quirks when working with remote hosts:
- SSH keys: Mount your ~/.ssh directory
- Known hosts: Avoid interactive prompts by preloading known_hosts
- Network access: Ensure container can reach target hosts
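For the host-key prompt specifically, preloading known_hosts is the safer route; a blunter option is disabling host key checking in ansible.cfg. This is convenient in throwaway CI containers but weakens security, so use it deliberately:

```ini
# ansible.cfg
[defaults]
host_key_checking = False
```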
Example:
```shell
docker run --rm \
  -v "$(pwd)":/ansible \
  -v ~/.ssh:/root/.ssh \
  my-ansible-ee \
  ansible all -i inventory -m ping
```

Performance considerations
Containers are fast, but not free. A few tips:
- Reuse images instead of rebuilding frequently
- Cache dependencies during builds
- Avoid installing packages at runtime
If builds feel slow, your Dockerfile or EE config is usually the culprit.
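For example, the dependency-caching tip above can be applied with a BuildKit cache mount, so pip reuses its download cache across rebuilds — a sketch that requires Docker BuildKit:

```dockerfile
# syntax=docker/dockerfile:1
COPY requirements.txt ./

# Keep pip's cache between builds instead of re-downloading every time
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```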
When this approach really shines
Containerized Ansible development is especially useful when:
- You have multiple contributors
- You rely on many external collections
- You run playbooks in CI/CD pipelines
- You need strict version control over tooling
It’s less critical for small, one-off scripts—but becomes essential as complexity grows.
A quick mental model
Think of your Ansible container as:
- The "runtime" for your automation
- A contract between dev and CI
- A snapshot of everything your playbooks need
Once you adopt this mindset, debugging becomes easier and deployments become more predictable.
And perhaps the biggest benefit: onboarding new developers goes from "install 10 things" to "run this container."