
log

Enables and configures HTTP request logging (also known as access logging).

The log directive applies to the hostnames of the site block it appears in, unless overridden with the hostnames subdirective.

Once configured, all requests to the site are logged by default. To conditionally skip some requests from logging, use the log_skip directive.

To add custom fields to the log entries, use the log_append directive.

By default, headers with potentially sensitive information (Cookie, Set-Cookie, Authorization, and Proxy-Authorization) are logged as REDACTED in access logs. This behavior can be disabled with the log_credentials global server option.

Syntax

log [<logger_name>] {
	hostnames <hostnames...>
	no_hostname
	output <writer_module> ...
	format <encoder_module> ...
	level  <level>
}
  • logger_name is an optional override of the logger name for this site.

    By default, a logger name is generated automatically, e.g. log0 or log1, depending on the order of the sites in your Caddyfile. This is only useful if you wish to reliably reference this logger's output from another logger defined in global options. See the example below.

  • hostnames is an optional override of the hostnames that this logger applies to.

    By default, the logger applies to the hostnames of the site block it appears in (i.e. the site addresses). This is useful if you wish to define a different logger for each subdomain of a wildcard site block. See the example below.

  • no_hostname prevents the logger from being associated with any of the site block's hostnames. By default, the logger is associated with the site addresses of the block in which the log directive appears.

    This is useful when you want to log requests to different files depending on some condition (such as the request path or method), in conjunction with the log_name directive.

  • output configures where to write the logs. See output modules below.

    Default: stderr

  • format describes how to encode, or format, the logs. See format modules below.

    Default: console if stderr is detected to be a terminal, json otherwise.

  • level is the minimum entry level to log. Default: INFO.

    Note that access logs currently only emit entries at the INFO and ERROR levels.
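
As a quick sketch of the syntax above, a site that writes only error-level entries to stdout as JSON:

example.com {
	log {
		output stdout
		format json
		level ERROR
	}
}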

Output modules

The output subdirective lets you customize where logs get written.

stderr

Standard error (console, the default).

output stderr

stdout

Standard output (console).

output stdout

discard

No output.

output discard

file

A file. By default, log files are rotated ("rolled") to prevent disk space exhaustion.

Log rolling is provided by lumberjack.

output file <filename> {
	mode          <mode>
	roll_disabled
	roll_size     <size>
	roll_uncompressed
	roll_local_time
	roll_keep     <num>
	roll_keep_for <days>
}
  • <filename> is the path to the log file.

  • mode is the Unix file permissions of the log file, as 1 to 4 octal digits (consistent with the Unix chmod command; 0 means the default of 600). For example, 644 grants the owner read and write, with read-only for the group and others; 600 grants read and write to the owner only.

    Default: 600

  • roll_disabled disables log rolling. Only use this if you have another means of maintaining the log files, otherwise your disk may fill up.

  • roll_size is the size at which to roll the log file. The current implementation supports megabyte resolution; fractional values are rounded up to the next whole megabyte. For example, 1.1MiB becomes 2MiB.

    Default: 100MiB

  • roll_uncompressed turns off gzip compression of rolled log files.

    Default: gzip compression is enabled.

  • roll_local_time names rolled files with a local timestamp.

    Default: UTC time is used.

  • roll_keep is the number of rolled log files to keep; the oldest beyond this count are deleted.

    Default: 10

  • roll_keep_for is how long to keep rolled files, as a duration string. The current implementation supports day resolution; fractional values are rounded up to the next whole day. For example, 36h (1.5 days) becomes 48h (2 days). Default: 2160h (90 days)
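
Putting these options together, a sketch of a file output with custom permissions and rolling behavior (the path and values are illustrative):

example.com {
	log {
		output file /var/log/access.log {
			mode 644
			roll_size 10MiB
			roll_keep 5
			roll_keep_for 240h
		}
	}
}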

net

A network socket. If the socket goes down, logs are written to stderr while it attempts to reconnect.

output net <address> {
	dial_timeout <duration>
	soft_start
}
  • <address> is the address to write logs to.

  • dial_timeout is how long to wait when connecting to the log socket. While the socket is down, logs can be blocked for up to this long.

  • soft_start ignores errors when connecting to the socket, allowing you to load your config even if the remote log service is down; logs are written to stderr instead.
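
For example, a sketch of logging to a network collector that tolerates the collector being down at startup (the address localhost:9999 is illustrative):

example.com {
	log {
		output net localhost:9999 {
			dial_timeout 5s
			soft_start
		}
	}
}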

Format modules

The format subdirective lets you customize how logs get encoded (formatted).

In addition to the syntax for each individual encoder, most encoders also support these common properties:

format <encoder_module> {
	message_key     <key>
	level_key       <key>
	time_key        <key>
	name_key        <key>
	caller_key      <key>
	stacktrace_key  <key>
	line_ending     <char>
	time_format     <format>
	time_local
	duration_format <format>
	level_format    <format>
}
  • message_key is the key to use for the message field of the log entry. Default: msg

  • level_key is the key to use for the level field of the log entry. Default: level

  • time_key is the key to use for the time field of the log entry. Default: ts

  • name_key is the key to use for the logger name field of the log entry. Default: logger

  • caller_key is the key to use for the caller field of the log entry.

  • stacktrace_key is the key to use for the stack trace field of the log entry.

  • line_ending is the line ending character.

  • time_format is the format of the timestamp.

    Default: wall_milli if the format is console, unix_seconds_float otherwise. Possible values:

    • unix_seconds_float: floating-point number of seconds since the Unix epoch
    • unix_milli_float: floating-point number of milliseconds since the Unix epoch
    • unix_nano: integer number of nanoseconds since the Unix epoch
    • iso8601 e.g. 2006-01-02T15:04:05.000Z0700
    • rfc3339 e.g. 2006-01-02T15:04:05Z07:00
    • rfc3339_nano e.g. 2006-01-02T15:04:05.999999999Z07:00
    • wall e.g. 2006/01/02 15:04:05
    • wall_milli e.g. 2006/01/02 15:04:05.000
    • wall_nano e.g. 2006/01/02 15:04:05.000000000
    • common_log e.g. 02/Jan/2006:15:04:05 -0700
    • Or, any compatible time layout string; see the Go documentation for full details.

    Note that the parts of the format string are special layout constants: 2006 is the year, 01 is the month, Jan is the month as a string, and 02 is the day. Do not use actual current date numbers in the format string.

  • time_local logs with the local system time rather than the default of UTC.

  • duration_format is the format of duration values.

    Default: seconds. Possible values:

    • s, second, or seconds: floating-point number of seconds
    • ms, milli, or millis: floating-point number of milliseconds
    • ns, nano, or nanos: integer number of nanoseconds
    • string: Go's built-in duration string format, e.g. 1m32.05s or 6.31ms

  • level_format is the format of the level.

    Default: color if the format is console, lower otherwise. Possible values:

    • lower: lowercase
    • upper: uppercase
    • color: uppercase, with ANSI colors
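
As a sketch combining several of these common properties, customizing the JSON encoder's timestamp and duration fields (the key name timestamp is illustrative):

example.com {
	log {
		format json {
			time_format iso8601
			time_key timestamp
			duration_format ms
		}
	}
}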

console

The console encoder formats the log entry for human readability while preserving some structure.

format console

json

Formats each log entry as a JSON object.

format json

filter

Allows per-field filtering.

format filter {
	fields {
		<field> <filter> ...
	}
	<field> <filter> ...
	wrap <encode_module> ...
}

Nested fields can be referenced by representing a layer of nesting with >. In other words, for an object like {"a":{"b":0}}, the inner field can be referenced as a>b.

The following fields are fundamental to the log and cannot be filtered because they are added by the underlying logging library as special cases: ts, level, logger, and msg.

Specifying wrap is optional; if omitted, a default is chosen depending on whether the current output module is stderr or stdout, and is an interactive terminal, in which case console is chosen, otherwise json is chosen.

As a shortcut, the fields block can be omitted and the filters can be specified directly within the filter block.

These are the available filters:

delete

Marks a field to be skipped from being encoded.

<field> delete

rename

Rename the key of a log field.

<field> rename <key>

replace

Marks a field to be replaced with the provided string at encoding time.

<field> replace <replacement>

ip_mask

Masks IP addresses in the field using a CIDR mask, i.e. the number of bits from the IP to retain, starting from the left side. If the field is an array of strings (e.g. HTTP headers), each value in the array is masked. The value may be a comma separated string of IP addresses.

There is separate configuration for IPv4 and IPv6 addresses, since they have a different total number of bits.

Most commonly, the fields to filter would be:

  • request>remote_ip for the directly connecting client
  • request>client_ip for the parsed "real client" when trusted_proxies is configured
  • request>headers>X-Forwarded-For if behind a reverse proxy

<field> ip_mask [<ipv4> [<ipv6>]] {
	ipv4 <cidr>
	ipv6 <cidr>
}

query

Marks a field to have one or more actions performed, to manipulate the query part of a URL field. Most commonly, the field to filter would be request>uri.

<field> query {
	delete  <key>
	replace <key> <replacement>
	hash    <key>
}

The available actions are:

  • delete removes the given key from the query.

  • replace replaces the value of the given query key with replacement. Useful to insert a redaction placeholder; you'll see that the query key was in the URL, but the value is hidden.

  • hash replaces the value of the given query key with the first 4 bytes of the SHA-256 hash of the value, lowercase hexadecimal. Useful to obscure the value if it's sensitive, while being able to notice whether each request had a different value.
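
For instance, a sketch applying all three actions to the request URI (the query keys utm_source, token, and session are illustrative):

example.com {
	log {
		format filter {
			request>uri query {
				delete  utm_source
				replace token REDACTED
				hash    session
			}
		}
	}
}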

cookie

Marks a field to have one or more actions performed, to manipulate a Cookie HTTP header's value. Most commonly, the field to filter would be request>headers>Cookie.

<field> cookie {
	delete  <name>
	replace <name> <replacement>
	hash    <name>
}

The available actions are:

  • delete removes the given cookie by name from the header.

  • replace replaces the value of the given cookie with replacement. Useful to insert a redaction placeholder; you'll see that the cookie was in the header, but the value is hidden.

  • hash replaces the value of the given cookie with the first 4 bytes of the SHA-256 hash of the value, lowercase hexadecimal. Useful to obscure the value if it's sensitive, while being able to notice whether each request had a different value.

If many actions are defined for the same cookie name, only the first action will be applied.

regexp

Marks a field to have a regular expression replacement applied at encoding time. If the field is an array of strings (e.g. HTTP headers), each value in the array has replacements applied.

<field> regexp <pattern> <replacement>

The regular expression language used is RE2, included in Go. See the RE2 syntax reference and the Go regexp syntax overview.

In the replacement string, capture groups can be referenced with ${group} where group is either the name or number of the capture group in the expression. Capture group 0 is the full regexp match, 1 is the first capture group, 2 is the second capture group, and so on.
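
For instance, a sketch that redacts a query parameter's value in the URI while keeping the parameter name visible, using a numbered capture group in the replacement as described above (the parameter name email is illustrative):

example.com {
	log {
		format filter {
			request>uri regexp "(email=)[^&]*" "${1}REDACTED"
		}
	}
}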

hash

Marks a field to be replaced with the first 4 bytes (8 hex characters) of the SHA-256 hash of the value at encoding time. If the field is a string array (e.g. HTTP headers), each value in the array is hashed.

Useful to obscure the value if it's sensitive, while being able to notice whether each request had a different value.

<field> hash
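
For instance, a sketch that hashes the User-Agent header, letting you notice when requests come from different clients without storing the full string:

example.com {
	log {
		format filter {
			request>headers>User-Agent hash
		}
	}
}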

append

Appends field(s) to all log entries.

format append {
	fields {
		<field> <value>
	}
	<field> <value>
	wrap <encode_module> ...
}

It is most useful for adding information about the Caddy instance that is producing the log entries, possibly via an environment variable. The field values may be global placeholders (e.g. {env.*}), but not per-request placeholders due to logs being written outside of the HTTP request context.

Specifying wrap is optional; if omitted, a default is chosen depending on whether the current output module is stderr or stdout, and is an interactive terminal, in which case console is chosen, otherwise json is chosen.

The fields block can be omitted and the fields can be specified directly within the append block.

Examples

Enable access logging to the default logger.

In other words, by default this logs to stderr, but this can be changed by reconfiguring the default logger with the log global option:

example.com {
	log
}

Write logs to a file (with log rolling, which is enabled by default):

example.com {
	log {
		output file /var/log/access.log
	}
}

Customize log rolling:

example.com {
	log {
		output file /var/log/access.log {
			roll_size 1gb
			roll_keep 5
			roll_keep_for 720h
		}
	}
}

Delete the User-Agent request header from the logs:

example.com {
	log {
		format filter {
			request>headers>User-Agent delete
		}
	}
}

Redact multiple sensitive cookies. (Note that some sensitive headers are logged as REDACTED by default; see the log_credentials global option to enable logging Cookie header values):

example.com {
	log {
		format filter {
			request>headers>Cookie cookie {
				replace session REDACTED
				delete secret
			}
		}
	}
}

Mask the remote address from the request, keeping the first 16 bits (i.e. 255.255.0.0) for IPv4 addresses, and the first 32 bits from IPv6 addresses.

Note that as of Caddy v2.7, both remote_ip and client_ip are logged, where client_ip is the "real IP" when trusted_proxies is configured:

example.com {
	log {
		format filter {
			request>remote_ip ip_mask 16 32
			request>client_ip ip_mask 16 32
		}
	}
}

To append a server ID from an environment variable to all log entries, and chain it with a filter to delete a header:

example.com {
	log {
		format append {
			server_id {env.SERVER_ID}
			wrap filter {
				request>headers>Cookie delete
			}
		}
	}
}

Write separate log files for each subdomain in a wildcard site block by overriding hostnames for each logger. This uses a snippet to avoid repetition:

(subdomain-log) {
	log {
		hostnames {args[0]}
		output file /var/log/{args[0]}.log
	}
}

*.example.com {
	import subdomain-log foo.example.com
	@foo host foo.example.com
	handle @foo {
		respond "foo"
	}

	import subdomain-log bar.example.com
	@bar host bar.example.com
	handle @bar {
		respond "bar"
	}
}

Write the access logs for a particular subdomain to two different files, with different formats (one with the transform-encoder plugin and the other with json).

This works by overriding the logger name as foo in the site block, then including the access logs produced by that logger in the two loggers in global options with include http.log.access.foo:

{
	log access-formatted {
		include http.log.access.foo
		output file /var/log/access-foo.log
		format transform "{common_log}"
	}

	log access-json {
		include http.log.access.foo
		output file /var/log/access-foo.json
		format json
	}
}

foo.example.com {
	log foo
}