ELK
The Elasticsearch, Logstash, Kibana (ELK) stack is deployed as a single container using the sebp/elk image.
Image source:
Documentation:
By default there are no data views configured in Kibana, which means the logs will not be visible. Refer to for steps to create one.
Note: a data view can be created only if an Elasticsearch index exists, so some logs must be ingested before the data view can be created.
If you want to use a different version of the container, you can modify the following variable:
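As a sketch, the override could be placed in your playbook or inventory variables. The variable name below is hypothetical; check the role's defaults for the actual name before overriding.

```yaml
# Hypothetical variable name -- consult the role's defaults/main.yml
# for the real one. The value is a tag of the sebp/elk image.
elk_image_version: "<desired-tag>"
```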
Due to how Elasticsearch manages memory, the container would otherwise consume all available RAM. By default the role limits the container's memory to 6 GB; the required minimum is 4 GB.
The value can be customized using the following variable:
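A minimal sketch of such an override, assuming a variable name like the one below (hypothetical; the role's defaults define the actual name and expected format):

```yaml
# Hypothetical variable name -- check the role's defaults before using.
# Raises the default 6 GB limit; do not go below the 4 GB minimum.
elk_memory_limit: "8g"
```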
The ELK container uses a named volume to persist data, so the container can be safely deleted and recreated without losing the data stored in Elasticsearch.
To delete the data and start from scratch, execute the following command on the server:
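A hedged sketch of the procedure; the Compose file path and volume name below are examples, not the role's actual values. List the real volume with `docker volume ls` first.

```shell
# Stop and remove the ELK container (example path -- use the location
# the role deployed the Compose file to).
docker compose -f /path/to/elk/docker-compose.yml down

# Remove the named data volume (example name -- confirm with
# `docker volume ls` before deleting).
docker volume rm elk_data
```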
Note: this will delete all existing data and configuration, including all indexes, Kibana data views and dashboards. You may want to export your dashboards beforehand.
The Logstash configuration directory is mapped as a volume to the host in the Docker Compose template, as shown below:
The path defaults to the following location:
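For illustration, the mapping could look like the fragment below. The host path is an example of a default layout, not the role's actual value; the container path matches where the sebp/elk image reads Logstash pipeline configuration.

```yaml
# Illustrative Compose fragment -- the actual template is rendered by
# the role; the host path shown here is an example.
services:
  elk:
    image: sebp/elk
    volumes:
      - /opt/elk/logstash/config:/etc/logstash/conf.d
```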
In order to customize the parser or provide your own, modify the following variable by editing the "src" attribute and pointing it to the desired file:
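A hedged sketch of what such an override might look like; the variable name and structure are hypothetical, and only the idea of pointing "src" at your own file is from the role's documentation.

```yaml
# Hypothetical variable name and shape -- check the role's defaults
# for the real structure. "src" points at your custom pipeline file.
logstash_pipeline_config:
  src: "/path/to/my/custom-pipeline.conf"
```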
Elasticsearch and Kibana use the default configuration. Currently the role does not expose an option to customize it. Please log a GitHub issue if that is something you'd like.
The role comes with an opinionated Logstash configuration that integrates seamlessly with Ethereum clients deployed using the slingnode.ethereum role. Refer to the for details.
The Logstash pipeline is defined in a single .conf file. The configuration is designed to properly parse and normalize logs generated by the clients supported by the slingnode.ethereum role. You can review the configuration here:
The role uses Filebeat as the log forwarder. Filebeat is configured to autodiscover containers based on their labels and selectively forward logs. Refer to the for details.
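The label-based autodiscovery could be sketched as follows. The label name `collect_logs` is an assumption for illustration; only the autodiscover mechanism itself is standard Filebeat configuration.

```yaml
# Illustrative Filebeat autodiscover config -- the label the role
# matches on may differ; this only demonstrates the mechanism.
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.labels.collect_logs: "true"
          config:
            - type: container
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
```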