If you’ve been working with Kubernetes for a while, you probably started out using Docker as the container runtime. Docker was everywhere. It was the default. But as Kubernetes matured and grew into the backbone of production-grade container orchestration, a quiet shift began. Now, Kubernetes no longer officially supports Docker as a container runtime — a move that left many engineers scratching their heads and diving into alternatives like Containerd and CRI‑O.
So what happened? And more importantly, which runtime should you be using now? In this article, we’re going beyond Docker, exploring Containerd and CRI‑O in modern Kubernetes deployments, breaking down what they are, why they matter, and how to choose between them.
What’s a Container Runtime, Really?
Let’s start from the ground up. A container runtime is the low-level component responsible for running containers on a system. It handles downloading container images, starting/stopping containers, managing their resources, and isolating processes.
Docker is not just a runtime. It’s a full-featured platform that includes a CLI, a daemon, image building tools, networking plugins, and a bundled runtime stack (Containerd driving runc under the hood).
But Kubernetes doesn’t need all that. It doesn’t care about Docker’s image building or CLI tools. All Kubernetes wants is something that can start and stop containers reliably, efficiently, and securely.
That’s where Containerd and CRI‑O come in.
From Docker Shim to CRI
Until its deprecation in v1.20, Kubernetes shipped a component called the “dockershim” to communicate with Docker. This shim translated Kubernetes Container Runtime Interface (CRI) calls into Docker Engine API calls. It worked, but it wasn’t elegant.
The dockershim added extra layers, used more system resources, and made it harder to decouple Kubernetes from Docker’s design decisions. Kubernetes maintainers deprecated it in v1.20 and removed it entirely in v1.24.
Enter Containerd and CRI‑O, two lightweight runtimes built specifically to implement the CRI directly, without any translation layers or extra bloat.
Why Containerd and CRI‑O Replaced Docker in Kubernetes
Both Containerd and CRI‑O exist to solve the same problem: provide Kubernetes with a simple, secure, and efficient way to manage containers.
Let’s talk about what sets each one apart and why they’ve become the go-to runtimes in modern Kubernetes deployments.
Containerd: The Docker Core, Evolved
Containerd originally came from Docker. In fact, it is the core component responsible for actually running containers, and Docker itself still uses it under the hood today. Docker eventually spun it out into its own CNCF project, where it matured independently.
Why people love Containerd:
- Mature and battle-tested – It’s been used in production for years.
- Part of the Docker ecosystem – If you know Docker, you’re already familiar with much of Containerd’s behavior.
- Pluggable and extensible – It supports plugins for snapshotters, image storage, and more.
- Actively maintained by big names – Including Docker, Google, and AWS.
You can think of it as “Docker without the bells and whistles.” It gives Kubernetes exactly what it needs — no more, no less.
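To make that concrete, Containerd’s kubelet-facing behavior lives in a TOML config file, commonly /etc/containerd/config.toml. The sketch below shows the kind of settings a typical Kubernetes setup touches (the version 2 layout of containerd 1.x; the pause image tag is just an illustrative value):

```toml
# /etc/containerd/config.toml (sketch, containerd 1.x "version = 2" layout)
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # Image used for the pod sandbox ("pause") container.
  sandbox_image = "registry.k8s.io/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # Match the kubelet's cgroup driver (systemd on most modern distros).
  SystemdCgroup = true
```

After editing, restart containerd (e.g. systemctl restart containerd) and the kubelet talks to it over the CRI socket with no translation layer in between.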
CRI‑O: Kubernetes Native from the Start
CRI‑O was built from scratch by the Kubernetes community, specifically to run containers in a way that adheres strictly to Kubernetes’ design and security goals.
Why people choose CRI‑O:
- Security-first – It’s designed with minimal attack surface and SELinux/AppArmor integration.
- Lightweight and minimal – No unnecessary features, just the runtime.
- Red Hat’s favorite – It’s the default in OpenShift and is tightly integrated with enterprise-grade Kubernetes distributions.
CRI‑O runs containers through OCI-compatible low-level runtimes (runc by default, with crun and others also supported) and is tightly aligned with Kubernetes’ release cycle and roadmap.
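As a concrete example, CRI‑O is configured through /etc/crio/crio.conf (plus drop-ins under crio.conf.d). A minimal sketch, assuming a runc-backed setup with the systemd cgroup driver:

```toml
# /etc/crio/crio.conf (sketch)
[crio.runtime]
# Which registered OCI runtime to use by default.
default_runtime = "runc"
# Match the kubelet's cgroup driver.
cgroup_manager = "systemd"

[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"
runtime_type = "oci"
```

Swapping in another OCI runtime is a matter of registering it under its own [crio.runtime.runtimes.&lt;name&gt;] table and pointing default_runtime at it.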
Containerd vs. CRI‑O: A Side-by-Side Comparison
| Feature | Containerd | CRI‑O |
| --- | --- | --- |
| Project origin | Originally part of Docker | Built by the Kubernetes community |
| CRI support | Direct (native CRI implementation) | Direct (native CRI implementation) |
| OCI runtime | runc by default, pluggable | runc by default; crun and others supported |
| Extensibility | Highly extensible (plugins) | Minimalist and opinionated |
| Image support | Docker and OCI images | Docker and OCI images |
| Integration | Works with most Kubernetes setups | Used mainly in Red Hat/OpenShift |
| Performance | Excellent | Excellent |
| Security | Good, flexible with plugins | Excellent, minimal surface area |
There’s no outright winner — the right runtime depends on your use case.
Beyond Docker: Containerd and CRI‑O in Modern Kubernetes Deployments
If you’re setting up a new Kubernetes cluster today, you’re likely to choose between Containerd and CRI‑O depending on your distro. Most mainstream setups, from kubeadm-bootstrapped clusters to managed services like GKE and EKS, have standardized on Containerd. Meanwhile, Red Hat’s OpenShift uses CRI‑O by default, with a focus on security compliance and enterprise integration.
Choosing the right one is not about performance (they’re both fast), but about ecosystem, tooling, and maintainability.
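If you’re unsure what an existing cluster runs, the kubelet reports its runtime in each node’s status. A sketch (the kubectl line needs cluster access, so it is shown commented out; the parsing step at the end is plain shell and runs anywhere):

```shell
# Ask the API server what runtime each node reports (requires cluster access):
# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'

# The reported value looks like "containerd://1.7.13" or "cri-o://1.29.2";
# the part before "://" is the runtime name. Example value, not from a live cluster:
runtime_version="containerd://1.7.13"
echo "${runtime_version%%://*}"   # prints: containerd
```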
How to Choose Between Containerd and CRI‑O
Here’s a basic decision tree:
- Are you running OpenShift or a Red Hat distribution? → Go with CRI‑O. It’s designed for it.
- Are you on a cloud-managed Kubernetes service like GKE or EKS? → Use what they give you. It’s usually Containerd.
- Do you want the most flexible setup with access to Docker-like tooling? → Containerd is likely a better fit.
- Do you prioritize tight Kubernetes alignment and security? → CRI‑O might be your choice.
- Do you need to use Dockerfiles for image builds? → You can still build images with Docker or BuildKit and run them with either runtime.
Tips for Migrating Off Docker
Moving away from Docker can feel scary, especially if your team has years of experience with it. But with the right tools and mindset, it doesn’t have to be painful.
Here’s what helps:
- Know that image building stays the same – Docker is still great for building images. It’s just not used for running them in production anymore.
- Use ctr or crictl – ctr is Containerd’s native CLI, while crictl talks to any CRI-compatible runtime, including both Containerd and CRI‑O. Learn them.
- Update your monitoring tools – Some agents may need tweaking to work with Containerd or CRI‑O.
- Leverage distro defaults – Kubernetes distros will usually have Containerd or CRI‑O set up and optimized out of the box.
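One small, practical first step with crictl: it needs to know which CRI socket to talk to, which you set in /etc/crictl.yaml. A sketch using Containerd’s common default socket path (swap in /var/run/crio/crio.sock for CRI‑O); the file is written to /tmp here purely for illustration:

```shell
# Point crictl at the containerd socket (the real file lives at /etc/crictl.yaml;
# for CRI-O, use unix:///var/run/crio/crio.sock instead).
cat > /tmp/crictl.yaml <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF

# With that in place, the commands map closely to Docker's:
#   crictl ps        # like docker ps, but pod-aware
#   crictl images    # like docker images
#   crictl logs ID   # like docker logs
grep 'runtime-endpoint' /tmp/crictl.yaml
```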
Real‑World Use Cases: What Companies Are Doing
Let’s break it down by provider and use case.
Google Kubernetes Engine (GKE)
GKE uses Containerd under the hood. It provides great performance, works well with gVisor for sandboxing, and is optimized for Google Cloud’s infrastructure.
Amazon Elastic Kubernetes Service (EKS)
AWS also ships Containerd by default. It’s well-integrated with ECR and supports the latest runtime security features.
Red Hat OpenShift
OpenShift uses CRI‑O exclusively. The focus is on SELinux, compliance, and strong enterprise support.
Self-hosted Clusters
If you’re setting up Kubernetes yourself, Containerd is often the simplest path, unless you’re in an environment where security mandates push you toward CRI‑O.
The Future of Container Runtimes
What lies ahead? With Docker out of the Kubernetes runtime conversation, Containerd and CRI‑O will likely continue to evolve independently — each targeting slightly different audiences.
Expect to see:
- More features like rootless containers, improved sandboxing, and better GPU support.
- Tighter integration with cloud-native observability tools.
- Innovation in security – including support for confidential containers and stricter isolation.
The container runtime layer is becoming more invisible — and that’s a good thing. As developers and DevOps teams, we should be able to focus more on workloads and less on the machinery beneath.
Final Thoughts
The shift beyond Docker marks a significant turning point in the Kubernetes world. It signals a move toward more focused, lightweight, and secure container runtimes that align better with Kubernetes’ architecture.
Whether you go with Containerd or CRI‑O in modern Kubernetes deployments, you’ll be working with tools that are faster, simpler, and more reliable. Docker still has its place — mainly in local development and image building — but in production clusters, it’s time to move on.
Don’t fear the change. Embrace it. Learn the tools. And build better.
FAQs
1. Is Docker completely gone from Kubernetes?
Not exactly. You can still build images with Docker, but it’s no longer used as the runtime in Kubernetes clusters.
2. Can I still use Docker images with Containerd or CRI‑O?
Yes. As long as they are OCI-compliant (which most Docker images are), both runtimes can run them.
3. Which runtime is more secure: Containerd or CRI‑O?
CRI‑O is generally considered more minimal and security-focused, but both can be hardened.
4. Is switching from Docker to Containerd difficult?
Not really. Most Kubernetes distributions handle this transition for you, and the CLI tooling is fairly similar.
5. What happens if I don’t switch from Docker in Kubernetes?
If you’re still relying on the Docker shim in Kubernetes 1.24+, your pods won’t run. You need to migrate to a supported runtime.
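For kubeadm-managed nodes, that migration largely boils down to pointing the kubelet at the new runtime’s CRI socket. A hedged sketch of the relevant configuration fragment (socket path shown for Containerd; CRI‑O’s is typically unix:///var/run/crio/crio.sock):

```yaml
# kubeadm init/join configuration fragment (sketch)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  # CRI socket the kubelet should use; containerd's common default path.
  criSocket: unix:///run/containerd/containerd.sock
```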