ELK
Overview
The ElasticSearch, Logstash, Kibana (ELK) stack is deployed as a single container using the sebp/elk image.
Image source: https://hub.docker.com/r/sebp/elk/
Documentation: https://elk-docker.readthedocs.io/
Kibana data view
By default there are no data views configured in Kibana. This means that the logs will not be visible. Refer to Creating Kibana Data View for steps to create one.
Note: a data view can be created only if an ElasticSearch index exists, which means some logs must be ingested before the data view can be created.
Container
If you want to use a different version of the container, you can modify the following variable:
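A minimal override sketch is shown below; the variable name is an assumption, so check the role's defaults for the actual name and format.

```yaml
# Hypothetical variable name; the role's defaults define the actual one.
# Pin the sebp/elk image to a specific version instead of the role default.
elk_image: "sebp/elk:8.7.1"
```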
Memory limit
Due to how ElasticSearch manages memory, the container will consume all available RAM. By default the role limits the container memory to 6GB. The required minimum is 4GB.
The value can be customized using the following variable:
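A hedged example of raising the limit is shown below; the variable name is an assumption and the actual one is defined in the role's defaults.

```yaml
# Hypothetical variable name; check the role's defaults for the real one.
# Allow the ELK container to use up to 8 GB of RAM (the required minimum is 4 GB).
elk_memory_limit: 8g
```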
Data persistence
The ELK container uses a named volume to persist its data. The container can be safely deleted and recreated without losing the data stored in ElasticSearch.
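To verify the volume on the server, a quick check such as the one below can be used; the volume name shown is illustrative and depends on your deployment.

```shell
# List Docker volumes, then inspect the ELK data volume.
# "elk_data" is an illustrative name; use the actual name from "docker volume ls".
docker volume ls
docker volume inspect elk_data
```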
Deleting data
To delete the data and start from scratch, you will need to execute the following command on the server:
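A sketch of the procedure is shown below; the container and volume names are assumptions and must be replaced with the names used in your deployment.

```shell
# Illustrative only: replace "elk" and "elk_data" with your actual container and volume names.
# Stop and remove the ELK container, then delete its named data volume.
docker rm -f elk
docker volume rm elk_data
```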
Note: this will delete all existing data and configuration, including all indexes, Kibana data views and dashboards. You may want to export your dashboards first.
Configuration
Logstash
The Logstash configuration directory is mapped as a volume to the host in the Docker Compose template, as shown below:
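The sketch below illustrates the mapping under stated assumptions: the service name, the Ansible variable and the host path are illustrative, while /etc/logstash/conf.d is the directory the sebp/elk image uses for Logstash pipeline files.

```yaml
# Illustrative excerpt of a Compose service definition; names and variables are assumptions.
services:
  elk:
    image: "sebp/elk:latest"
    volumes:
      # Host directory holding the Logstash pipeline files, mapped into the container.
      - "{{ logstash_config_dir }}:/etc/logstash/conf.d"
```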
The path defaults to the following location:
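The value below is a hypothetical placeholder showing the expected form; the actual default path and variable name are defined in the role's defaults.

```yaml
# Hypothetical path and variable name; consult the role's defaults for the real value.
logstash_config_dir: /opt/observability/logstash
```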
The role comes with an opinionated Logstash configuration that seamlessly integrates with Ethereum clients deployed using the slingnode.ethereum role. Refer to the logging documentation for details.
The Logstash pipeline is defined in a single .conf file. The configuration has been designed to properly parse and normalize logs generated by the clients supported by the slingnode.ethereum role. You can review the configuration here: https://github.com/SlingNode/slingnode-ansible-ethereum-observability/blob/master/files/01-logstash-pipeline.conf
To customize the parser or provide your own, you can modify the following variable by editing the "src" key and pointing it to the desired file:
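A hedged sketch of such an override is shown below; the variable name and structure are assumptions, and only the "src" key comes from the text above.

```yaml
# Hypothetical variable name and structure; check the role's defaults for the actual definition.
logstash_pipeline_config:
  # Point "src" at your own pipeline file to replace the bundled one.
  src: files/my-custom-pipeline.conf
```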
ElasticSearch and Kibana
The ElasticSearch and Kibana configuration is left at its defaults. Currently the role does not expose an option to customize it. Please log a GitHub issue if that is something you would like.
Log forwarding
The role uses Filebeat as the log forwarder. Filebeat is configured to autodiscover containers based on container labels and to selectively forward logs. Refer to the Filebeat section for details.
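As an illustration of the approach, the sketch below shows a Filebeat Docker autodiscover configuration that forwards logs only from containers carrying a specific label; the label name, condition and Logstash endpoint are assumptions, not the role's actual settings.

```yaml
# Illustrative Filebeat autodiscover sketch; label name and hosts are assumptions.
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        # Only collect logs from containers labelled logging=enabled.
        - condition:
            equals:
              docker.container.labels.logging: "enabled"
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
output.logstash:
  hosts: ["localhost:5044"]
```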