Caddyfile address patterns
The Caddyfile supports several kinds of site addresses, making it easy to configure different types of sites.
Wildcard certificates
You can use * as part of the hostname to serve HTTPS for multiple subdomains. For example:
*.example.com {
	root * /var/www/example
	file_server
}
This automatically obtains a wildcard certificate covering all first-level subdomains of example.com (such as foo.example.com and bar.example.com).
Note:
- Only one level of wildcard is supported (* may only appear as the leftmost label of the hostname).
- You must configure DNS-based ACME validation (for example via Cloudflare or Aliyun); otherwise Let's Encrypt cannot issue a wildcard certificate.
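For example, assuming the Cloudflare DNS plugin is built into your Caddy binary (the environment variable name here is illustrative), the DNS challenge can be enabled like this:

*.example.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	root * /var/www/example
	file_server
}

Other DNS providers use the same pattern with their own module name and credentials.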
Local HTTPS
If you use localhost or a local IP address (such as 127.0.0.1) as the site address, Caddy automatically issues a locally trusted HTTPS certificate for it.
localhost {
	file_server
}
This lets you use HTTPS during local development without configuring certificates by hand.
Ports and protocols
You can specify a port and protocol in the address. For example:
- :8080 listens on port 8080 (HTTP) for all hosts.
- https://example.com:8443 serves example.com over HTTPS on port 8443.
- http:// listens for HTTP on all hosts (port 80).
If the protocol is omitted, Caddy infers it from the port (443 means HTTPS, 80 means HTTP).
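To illustrate (the domain and response bodies are placeholders), the following site blocks combine these address forms in one Caddyfile:

:8080 {
	respond "HTTP on port 8080"
}

https://example.com:8443 {
	respond "HTTPS on port 8443"
}

http:// {
	respond "HTTP on port 80"
}

Each block listens on a different port, so they can coexist in the same config.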
Static file server
example.com {
	root * /var/www
	file_server
}
As usual, the first line is the site address. The root directive specifies the path to the root of the site (the * means to match all requests, so as to disambiguate from a path matcher)—change the path to your site if it isn't the current working directory. Finally, we enable the static file server.
Reverse proxy
Proxy all requests:
example.com {
	reverse_proxy localhost:5000
}
Only proxy requests having a path starting with /api/ and serve static files for everything else:
example.com {
	root * /var/www
	reverse_proxy /api/* localhost:5000
	file_server
}
This uses a request matcher to match only requests that start with /api/ and proxy them to the backend. All other requests will be served from the site root with the static file server. This also depends on the fact that reverse_proxy is higher in the directive order than file_server.
There are many more reverse_proxy examples here.
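One common variant is load balancing across several backends with an active health check; this is a sketch (the backend names and health endpoint are illustrative):

example.com {
	reverse_proxy node1:8080 node2:8080 node3:8080 {
		lb_policy least_conn
		health_uri /healthz
	}
}

Here lb_policy selects the backend with the fewest active connections, and health_uri tells Caddy which path to poll to decide whether a backend is healthy.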
PHP
PHP-FPM
With a PHP FastCGI service running, something like this works for most modern PHP apps:
example.com {
	root * /srv/public
	encode
	php_fastcgi localhost:9000
	file_server
}
Customize the site root accordingly; this example assumes that your PHP app's webroot is within a public directory—requests for files that exist on disk will be served with file_server, and anything else will be routed to index.php for handling by the PHP app.
You may sometimes use a unix socket to connect to PHP-FPM:
php_fastcgi unix//run/php/php8.2-fpm.sock
The php_fastcgi directive is actually just a shortcut for several pieces of configuration.
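For reference, php_fastcgi localhost:9000 expands to roughly the following configuration (paraphrased from the directive's documented expanded form; details may vary by Caddy version):

route {
	# Add trailing slash for directory requests
	@canonicalPath {
		file {path}/index.php
		not path */
	}
	redir @canonicalPath {path}/ 308

	# If the requested file does not exist, try index files
	@indexFiles file {
		try_files {path} {path}/index.php index.php
		split_path .php
	}
	rewrite @indexFiles {http.matchers.file.relative}

	# Proxy PHP files to the FastCGI responder
	@phpFiles path *.php
	reverse_proxy @phpFiles localhost:9000 {
		transport fastcgi {
			split .php
		}
	}
}

If you need to tweak any of these steps, you can replace the shortcut with this expanded form and edit it directly.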
FrankenPHP
Alternatively, you may use FrankenPHP, which is a distribution of Caddy which calls PHP directly using CGO (Go to C bindings). This can be up to 4x faster than with PHP-FPM, and even better if you can use the worker mode.
{
	frankenphp
	order php_server before file_server
}

example.com {
	root * /srv/public
	encode zstd br gzip
	php_server
}
Redirect www. subdomain
To add the www. subdomain with an HTTP redirect:
example.com {
	redir https://www.{host}{uri}
}

www.example.com {
}
To remove it:
www.example.com {
	redir https://example.com{uri}
}

example.com {
}
To remove it for multiple domains at once; this uses the {labels.*} placeholders, which are the parts of the hostname, 0-indexed from the right (e.g. 0=com, 1=example-one, 2=www):
www.example-one.com, www.example-two.com {
	redir https://{labels.1}.{labels.0}{uri}
}

example-one.com, example-two.com {
}
Trailing slashes
You will not usually need to configure this yourself; the file_server directive will automatically add or remove trailing slashes from requests by way of HTTP redirects, depending on whether the requested resource is a directory or file, respectively.
However, if you need to, you can still enforce trailing slashes with your config. There are two ways to do it: internally or externally.
Internal enforcement
This uses the rewrite directive. Caddy will rewrite the URI internally to add or remove the trailing slash:
example.com {
	rewrite /add /add/
	rewrite /remove/ /remove
}
Using a rewrite, requests with and without the trailing slash will be the same.
External enforcement
This uses the redir directive. Caddy will ask the browser to change the URI to add or remove the trailing slash:
example.com {
	redir /add /add/
	redir /remove/ /remove
}
Using a redirect, the client will have to re-issue the request, enforcing a single acceptable URI for a resource.
Wildcard certificates
If you need to serve multiple subdomains with the same wildcard certificate, the best way to handle them is with a Caddyfile like this, making use of the handle directive and host matchers:
*.example.com {
	tls {
		dns <provider_name> [<params...>]
	}

	@foo host foo.example.com
	handle @foo {
		respond "Foo!"
	}

	@bar host bar.example.com
	handle @bar {
		respond "Bar!"
	}

	# Fallback for otherwise unhandled domains
	handle {
		abort
	}
}
You must enable the ACME DNS challenge to have Caddy automatically manage wildcard certificates.
Single-page apps (SPAs)
When a web page does its own routing, servers may receive lots of requests for pages that don't exist server-side, but which are renderable client-side as long as the singular index file is served instead. Web applications architected like this are known as SPAs, or single-page apps.
The main idea is to have the server "try files" to see if the requested file exists server-side, and if not, fall back to an index file where the client does the routing (usually with client-side JavaScript).
A typical SPA config usually looks something like this:
example.com {
	root * /srv
	encode
	try_files {path} /index.html
	file_server
}
If your SPA is coupled with an API or other server-side-only endpoints, you will want to use handle blocks to treat them exclusively:
example.com {
	encode

	handle /api/* {
		reverse_proxy backend:8000
	}

	handle {
		root * /srv
		try_files {path} /index.html
		file_server
	}
}
If your index.html contains references to your JS/CSS assets with hashed filenames, you may want to consider adding a Cache-Control header to instruct clients to not cache it (so that if the assets change, browsers fetch the new ones). Since the try_files rewrite is used to serve your index.html from any path that doesn't match another file on disk, you can wrap the try_files with a route so that the header handler runs after the rewrite (it normally would run before due to the directive order):
route {
	try_files {path} /index.html
	header /index.html Cache-Control "public, max-age=0, must-revalidate"
}
Caddy proxying to another Caddy
If you have one Caddy instance publicly accessible (let's call it "front"), and another Caddy instance in your private network (let's call it "back") serving your actual app, you can use the reverse_proxy directive to pass requests through.
Front instance:
foo.example.com, bar.example.com {
	reverse_proxy 10.0.0.1:80
}
Back instance:
{
	servers {
		trusted_proxies static private_ranges
	}
}

http://foo.example.com {
	reverse_proxy foo-app:8080
}

http://bar.example.com {
	reverse_proxy bar-app:9000
}
- This example serves two different domains, proxying both to the same back Caddy instance, on port 80. Your back instance is serving the two domains different ways, so it's configured with two separate site blocks.
- On the back, http:// is used to accept HTTP on port 80. The front instance terminates TLS, and the traffic between front and back is on a private network, so there's no need to re-encrypt it.
- You may use a different port like 8080 on the back instance if you need to; just append :8080 to each site address on the back's config, OR set the http_port global option to 8080.
- On the back, the trusted_proxies global option is used to tell Caddy to trust the front instance as a proxy. This ensures the real client IP is preserved.
- Going further, you could have more than one back instance that you load balance between. You could set up mTLS (mutual TLS) using the acme_server directive on the front instance such that it acts like the CA for the back instance (useful if the traffic between front and back crosses untrusted networks).
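As a rough sketch of that last idea (the internal hostname and the CA directory path here are assumptions; consult the acme_server documentation for the exact endpoint), the front instance could expose an ACME endpoint that the back instance uses as its certificate authority:

# Front instance: serve an ACME endpoint for internal clients
acme.internal.example.com {
	acme_server
}

# Back instance global options: request certificates from the front
{
	acme_ca https://acme.internal.example.com/acme/local/directory
}

The back instance must also trust the front's internal root certificate; see the acme_ca_root global option.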