# Logging
The processes running in the containers are configured to log to the console. The logs are saved to disk using Docker's default JSON logging driver. The logging configuration is defined in each Compose file using YAML anchor syntax, as shown below.
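A minimal sketch of the anchor pattern follows; the driver options shown are illustrative and may differ from the values the role actually sets:

```yaml
# Define the logging configuration once, as a YAML anchor.
x-logging: &logging
  logging:
    driver: json-file
    options:
      max-size: "100m"   # illustrative rotation settings
      max-file: "3"

services:
  execution:
    image: ethereum/client-go:latest
    # Merge the anchored logging block into each service definition.
    <<: *logging
```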
The logging is configured to facilitate log collection, shipping, and ingestion into log analytics solutions such as ELK, SumoLogic, or Splunk. To simplify this process, the role configures the following:
* JSON format (where available)
* Docker log tags
* Docker labels
## Logging format
The clients that support it are configured to log in JSON format, which makes it easier to feed their logs into log analytics solutions without additional parsing.
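For example, a client that supports structured logging can be started with its JSON output flag. The snippet below uses Lighthouse's `--log-format` flag as an illustration; flag names vary between clients and versions:

```yaml
services:
  consensus:
    image: sigp/lighthouse:latest
    # Lighthouse emits structured JSON logs when started with --log-format JSON.
    command: lighthouse bn --log-format JSON
```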
### Multiline logs
Some clients generate multiline log entries. Multiline logs combined with Docker logging (which turns each line into a separate JSON object) pose a special challenge. Refer to our blog post describing this. The table below summarizes the logging features of each client.
| Client     | JSON format | Multiline |
| ---------- | ----------- | --------- |
| Geth       | yes         | no        |
| Erigon     | yes         | no        |
| Besu       | no          | yes       |
| Nethermind | no          | yes       |
| Lighthouse | yes         | no        |
| Prysm      | yes         | no        |
| Teku       | no          | yes       |
| Nimbus     | yes         | no        |
## Tags
Each Docker Compose file defines a log tag with details that are useful when managing the logs. The tag contains the following data:
* layer (execution, consensus, validator)
* client name
* Docker image name
* container name
* full ID of the image
* full ID of the container
### Execution layer tag
#### Compose template
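The original snippet is not reproduced here; the sketch below shows what the tag option might look like, combining static layer/client values with Docker's log tag template variables (the role's actual format may differ):

```yaml
logging:
  driver: json-file
  options:
    # "execution" and "geth" are static strings; the rest are Docker log tag
    # template variables substituted by the Docker daemon at runtime.
    tag: "execution/geth/{{.ImageName}}/{{.Name}}/{{.ImageFullID}}/{{.FullID}}"
```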
#### Rendered tag
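Assuming the template above, a tag could render along these lines (IDs shown as placeholders):

```
execution/geth/ethereum/client-go:latest/execution/sha256:<image-full-id>/<container-full-id>
```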
### Consensus layer tag
#### Compose template
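A corresponding sketch for the consensus layer, using Lighthouse as an illustrative client:

```yaml
logging:
  driver: json-file
  options:
    tag: "consensus/lighthouse/{{.ImageName}}/{{.Name}}/{{.ImageFullID}}/{{.FullID}}"
```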
#### Rendered tag
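Which might render as:

```
consensus/lighthouse/sigp/lighthouse:latest/consensus/sha256:<image-full-id>/<container-full-id>
```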
### Validator layer tag
#### Compose template
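And for the validator layer, again with illustrative values:

```yaml
logging:
  driver: json-file
  options:
    tag: "validator/lighthouse/{{.ImageName}}/{{.Name}}/{{.ImageFullID}}/{{.FullID}}"
```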
#### Rendered tag
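Rendering to something like:

```
validator/lighthouse/sigp/lighthouse:latest/validator/sha256:<image-full-id>/<container-full-id>
```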
### Jinja template
The variables prefixed with a dot (for example `{{.ImageName}}`) are Docker log tag variables and use double curly brace syntax for substitution. See the Docker documentation for details. Jinja also treats double curly braces `{{ }}` as variable delimiters, so they need to be escaped using the `{% raw %} {% endraw %}` Jinja tags, as shown below.
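A minimal sketch of how the escaping might look inside the role's Jinja2 Compose template (illustrative; not the role's exact file):

```jinja
{# Without the raw block, Jinja would try to substitute {{.ImageName}} itself. #}
{% raw %}
    logging:
      driver: json-file
      options:
        tag: "execution/geth/{{.ImageName}}/{{.Name}}/{{.ImageFullID}}/{{.FullID}}"
{% endraw %}
```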
## Docker labels
The containers have custom labels defined in the Compose files. There are two labels that specify the "layer" and the "client". The labels are required to allow for selective log collection using Filebeat. Refer to our blog post for details.
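A sketch of how the labels might be declared in a Compose service; the label keys are assumptions, and the role may use different names:

```yaml
services:
  execution:
    image: ethereum/client-go:latest
    labels:
      - "layer=execution"   # assumed label keys, shown for illustration
      - "client=geth"
```

Filebeat can then collect logs only from matching containers, for example with an autodiscover condition on the container labels:

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            equals:
              docker.container.labels.layer: "execution"
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
```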
## slingnode.ethereum_observability
SlingNode has developed an Ansible role that can deploy an observability stack that seamlessly integrates with nodes deployed using the slingnode.ethereum role. The role enables log parsing and forwarding to Elasticsearch.