Caddy v2
Caddy v2 released on May 4 and ended a two-year rewrite that was, as Matt Holt and the contributors had warned, more disruptive than the version number suggests. v2 is not a continuation of v1. It is a complete reimplementation, with a new configuration model, a new module architecture, a different process lifecycle, and a clearer argument for why a modern web server should look different from nginx and Apache.
Six weeks in — long enough for the v2.1 release on June 23 to be imminent, and long enough for the early-adopter reports to be useful — the picture is clearer. Caddy v2 is the right web server for a meaningful subset of deployments, and the wrong choice for others. This post is about what changed, what stayed the same, and how to decide which one you are.
What Caddy is actually for
Caddy's pitch since v1 has been automatic HTTPS. You configure a site by hostname; Caddy obtains a certificate from Let's Encrypt or ZeroSSL when the configuration loads, renews it automatically, and serves over TLS by default. There is no --ssl-on flag and no "I'll add HTTPS later" pattern. HTTPS is the only way Caddy serves traffic, unless you explicitly opt out.
In 2016, this was a meaningful differentiator against nginx and Apache, both of which made HTTPS deployment a multi-step manual process even after Let's Encrypt existed. By 2020, certbot's nginx integration is mature and certbot --nginx is a one-liner; the gap is narrower. But Caddy still does it more cleanly: certificate management is part of the server's data model, not a parallel process that drops files into a directory the server reads. The integration shows.
The deeper argument for Caddy v2 is configuration as data. The native configuration format is JSON, with a complete API for runtime mutation. The Caddyfile — the human-friendly DSL most users actually write — is a thin syntactic layer that compiles down to JSON. Anything you can express in the Caddyfile, you can express in the JSON; the JSON exposes capabilities the Caddyfile does not. And the running server can be reconfigured at runtime by POSTing new JSON to its admin API, with zero downtime.
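To make the relationship concrete: a one-site Caddyfile such as "example.com { reverse_proxy localhost:8080 }" compiles to JSON roughly like the sketch below (the server name "srv0" is the adapter's default; run caddy adapt --pretty against your own Caddyfile to see the exact output rather than trusting this sketch):

{
    "apps": {
        "http": {
            "servers": {
                "srv0": {
                    "listen": [":443"],
                    "routes": [
                        {
                            "match": [{"host": ["example.com"]}],
                            "handle": [
                                {
                                    "handler": "reverse_proxy",
                                    "upstreams": [{"dial": "localhost:8080"}]
                                }
                            ]
                        }
                    ]
                }
            }
        }
    }
}

The JSON is verbose, which is exactly why the Caddyfile exists for humans; but the JSON is what the server actually runs, and what the admin API accepts.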
This is genuinely different from how nginx and Apache work. Both treat their configuration as a text file that gets parsed and applied at startup or signal-driven reload. Caddy v2 treats configuration as live, queryable, and mutable state. For some use cases — load balancers managed by orchestrators, multi-tenant hosting platforms, anything that needs to add and remove sites programmatically — this changes what is possible.
What's new in v2
A non-exhaustive list of the things v1 users will notice:
Module system. Every Caddy capability — TLS, file serving, reverse proxying, load balancing, access logging, rate limiting — is a module. Modules are loaded at startup based on what the configuration references. Custom builds (xcaddy build) let you compile in only the modules you need or add third-party modules from the Go ecosystem. The result is a 30 MB static binary that can do everything the v1 monolithic binary did and more, without runtime plugin loading complexity.
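As an illustration of a custom build — the module path shown is the real caddy-dns/cloudflare DNS module, but check its repository for current status before relying on it:

xcaddy build v2.0.0 \
    --with github.com/caddy-dns/cloudflare

This produces a caddy binary in the current directory with the extra module compiled in; no runtime plugin loading is involved.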
JSON configuration. The native config format is JSON. The schema is documented and stable. The Caddyfile DSL is preserved for human use but is now a frontend to the JSON, not a parallel format. This means Caddy modules that want to add new directives only need to register handlers; the JSON schema picks them up automatically.
Admin API. A REST API on localhost:2019 lets you query and modify the running configuration. POST a new config to /load and Caddy applies it atomically. GET /config/ returns the current state. PATCH a subpath to modify part of the config without replacing the whole thing. This is invaluable for orchestration; it is also a security surface, and the default is to bind only to localhost.
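Assuming a Caddy instance running with the default admin endpoint, the three interactions described above look like this (config.json and the srv0 path are placeholders for your own config):

# Read the full current configuration
curl http://localhost:2019/config/

# Replace the entire configuration atomically
curl -X POST http://localhost:2019/load \
    -H "Content-Type: application/json" \
    -d @config.json

# Modify one value in place, leaving the rest untouched
curl -X PATCH http://localhost:2019/config/apps/http/servers/srv0/listen \
    -H "Content-Type: application/json" \
    -d '[":8443"]'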
Better reverse proxying. The reverse proxy in v2 is a first-class module with health checks, multiple load balancing policies (random, least-connections, round-robin, IP hash, header-based), request and response header manipulation, and a clean error-handling model. Most of what people used nginx as a reverse proxy for is now expressible in a few lines of Caddyfile.
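A sketch of what that looks like in Caddyfile terms — two upstreams, least-connections balancing, health checks, and a header rewrite (the hostnames are placeholders):

app.example.com {
    reverse_proxy backend1:8080 backend2:8080 {
        lb_policy least_conn
        health_uri /health
        health_interval 10s
        header_up X-Real-IP {remote_host}
    }
}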
On-demand TLS. Caddy v2 can obtain certificates at request time for hostnames it has never seen before, with rate limiting and an "ask" hook that lets you decide whether to issue. This is the right primitive for SaaS platforms that host customer domains: a customer points DNS at your Caddy, and Caddy obtains a certificate for that domain on the first request, with no pre-provisioning.
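A hedged sketch of the on-demand setup — the /check-domain endpoint is a placeholder for your own authorization service, which Caddy queries before issuing a certificate for an unknown hostname:

{
    on_demand_tls {
        ask https://internal.example.com/check-domain
    }
}

https:// {
    tls {
        on_demand
    }
}

The https:// site address is a catch-all: any hostname pointed at this server is eligible, subject to the ask hook saying yes.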
HTTP/3 (experimental). Caddy v2 has experimental HTTP/3 support via the quic-go library. It is off by default in v2.0 but can be enabled with experimental_http3 in the global options. This is years ahead of nginx and Apache on the HTTP/3 front, and for sites that benefit from QUIC's connection-migration and 0-RTT properties, Caddy is currently the easiest way to deploy it.
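Enabling it is a single global option in the Caddyfile (as of v2.0; the knob may change as the feature stabilizes):

{
    experimental_http3
}

example.com {
    root * /var/www/example.com
    file_server
}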
A working configuration
A minimal Caddyfile that serves a static site, reverse-proxies an API, and handles HTTPS automatically:
example.com {
    root * /var/www/example.com
    file_server
    encode gzip zstd

    handle /api/* {
        reverse_proxy localhost:8080
    }

    log {
        output file /var/log/caddy/example.com.log
        format json
    }
}

api.example.com {
    reverse_proxy localhost:8080 {
        health_uri /health
        health_interval 30s
    }
}
The TLS certificate is obtained automatically. Let's Encrypt is the default issuer; ZeroSSL is the fallback if Let's Encrypt rate-limits or refuses. The renewal happens automatically thirty days before expiry. There is nothing to configure for the certificate to work.
Compare to the equivalent nginx configuration: two server blocks, manual ssl_certificate and ssl_certificate_key directives, a separate certbot invocation to obtain the certs, and a cron job or systemd timer to handle renewal. The nginx approach is more flexible — it gives you direct control of every TLS parameter — but for the common case, Caddy's defaults are sensible and the configuration is shorter by an order of magnitude.
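For concreteness, a minimal sketch of the nginx side — the certificate paths are the ones certbot creates by default, and certbot must be run separately to populate them:

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/example.com;

    location /api/ {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

And this still omits the renewal timer, gzip configuration, and access-log setup that the Caddyfile above includes.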
Installation on Debian 10 Buster
Caddy provides a Debian repository:
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/cfg/setup/bash.deb.sh' \
  | sudo bash
sudo apt-get update
sudo apt-get install caddy
This installs Caddy as a systemd service, with config at /etc/caddy/Caddyfile and a dedicated caddy user. The service is enabled by default; reload after editing the Caddyfile with:
sudo systemctl reload caddy
Reload is graceful — existing connections complete on the old config, new connections use the new one, and TLS state is preserved. There is essentially no reason to restart Caddy in normal operation; reload handles every config change.
What v2 does not do well
Four things to know before committing.
Migration from v1 is non-trivial. The Caddyfile syntax is different in subtle ways. Site addresses parse differently, directive names have changed, and several v1 features are restructured rather than copied over. The official migration guide is good but not magical; expect to rewrite Caddyfiles, not translate them line-for-line. If you have a complex v1 deployment, plan for a full afternoon to migrate, not a quick syntax patch.
Static binary is large. The v2 binary is around 30 MB. For embedded or highly resource-constrained environments, this is meaningful. For everyone else, it's a one-time disk cost; runtime memory usage is low (typically 20–50 MB for a server hosting dozens of sites), and the static binary makes deployment trivial.
Logging is JSON by default. This is a strength for systems that ingest structured logs (Loki, Elastic, anything that parses JSON), and a friction for tail-and-grep workflows. v2 supports text logging via format console, but the configuration is one extra step compared to nginx's text-by-default.
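That one extra step, for reference, is a single subdirective in the site's log block:

example.com {
    log {
        format console
    }
}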
Plugin ecosystem is younger. v1 had a mature ecosystem of community plugins. v2 is rebuilding it, and most plugins of interest have been ported, but a handful of niche v1 plugins do not yet have v2 equivalents. Check before migrating if you depend on a specific plugin.
When to use Caddy v2
The use cases where Caddy v2 is the right answer:

- Personal sites and small business sites where automatic HTTPS, low operational overhead, and a five-line config matter more than fine-grained tuning.
- SaaS platforms hosting customer domains, where on-demand TLS provisioning is the killer feature.
- Reverse-proxy frontends for application servers, where Caddy's clean reverse proxy module and graceful reloads outperform nginx's configuration ergonomics.
- Edge servers needing HTTP/3 as early as possible.
- Anything orchestrated by code, where the admin API and JSON configuration are first-class.

The use cases where nginx remains the better choice:

- Very high concurrency with finely tuned worker processes. Caddy is fast, but nginx still has the edge at the extreme end.
- Complex location matching, rewrites, and the module ecosystem that nginx has accumulated over twenty years and Caddy does not yet match.
- Existing infrastructure where the team knows nginx and switching has no payoff.
For new deployments without inertia, Caddy v2 is the default I would recommend. For existing nginx deployments that are working, the migration only makes sense if you have a specific reason — usually on-demand TLS or HTTP/3 — that nginx cannot give you.
The summary
Caddy v2 is a thoughtful rewrite of a thoughtful web server. The configuration model is genuinely different from the file-based world of nginx and Apache, and the difference matters most when you are deploying programmatically or hosting many domains. For static sites, simple reverse-proxy frontends, and small-to-medium deployments, the operational simplicity is hard to argue against — five-line configurations that handle TLS automatically and reload gracefully are not nothing.
Six weeks in, v2 is stable enough for production. v2.1 will smooth the remaining rough edges. By the end of 2020, Caddy will be a meaningfully larger fraction of new web-server deployments than it was a year ago. Worth knowing your way around it before you find yourself debugging someone else's Caddyfile in the middle of an incident.