Architecture
Overview
Conceptually, the stack is broken down into two components:
monitoring server
monitoring agents
The monitoring server comprises:
ELK
Grafana
Prometheus
The monitoring agents comprise:
Filebeat
cAdvisor
node-exporter
ethereum-metrics-exporter
By default, both the monitoring server and the monitoring agents are installed and configured to monitor Ethereum clients running on the same server (single_server_deployment).
Docker compose
The role utilizes Docker Compose. Each component (monitoring server and monitoring agents) is defined in a separate Compose file. The compose files are templated using Jinja2 and are dynamically generated when the role runs.
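As a rough illustration of this approach (a minimal sketch only; the service definition and variable names below are hypothetical, not the role's actual templates), a templated fragment could look like:

```yaml
# Hypothetical fragment of a Jinja2-templated Compose file
services:
  prometheus:
    image: "prom/prometheus:{{ prometheus_version | default('latest') }}"
    volumes:
      - prometheus_data:/prometheus
```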
Docker network
Docker Compose services (elk, grafana, prometheus, filebeat, exporters) are connected to the same Docker network. The network is defined in both Compose files.
Ethereum clients deployed using the slingnode.ethereum role are connected to the same Docker network. This means they can communicate freely without the need to expose any ports.
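As a sketch of how this can be expressed (the network name is illustrative; the role defines the actual name and whether the network is created by one project and referenced as external by the other):

```yaml
# Illustrative only - attach a service to a shared, pre-existing Docker network
networks:
  monitoring:
    external: true

services:
  prometheus:
    networks:
      - monitoring
```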
Data persistence
The ELK, Grafana and Prometheus containers use named volumes to persist their data. The following volumes are defined in the Compose file:
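The exact volume names are defined in the role's template; purely as an illustration, the definitions take roughly this shape (names below are hypothetical):

```yaml
# Hypothetical volume names - see the role's Compose template for the real ones
volumes:
  elasticsearch_data:
  grafana_data:
  prometheus_data:
```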
Filebeat is the only "monitoring agent" container that persists data. The following volume is defined in the compose file:
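Illustratively (the actual volume name is defined by the role), the persisted data is Filebeat's registry, which tracks which log entries have already been shipped:

```yaml
# Hypothetical volume name; /usr/share/filebeat/data is the image's default data path
services:
  filebeat:
    volumes:
      - filebeat_data:/usr/share/filebeat/data
volumes:
  filebeat_data:
```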
The remaining containers do not persist any data.
Starting the Docker compose project
The compose templates are rendered and copied to the target server location defined by the following variables:
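The variable names below are placeholders used only to show the idea of overriding the destination in your inventory or playbook; check the role's defaults for the actual names:

```yaml
# Hypothetical variable names - see the role's defaults for the real ones
monitoring_server_compose_dir: /opt/slingnode/monitoring_server
monitoring_agent_compose_dir: /opt/slingnode/monitoring_agent
```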
Directory structure
The role creates the following directory structure.
The location can be modified by overriding the following variable:
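For example, in group_vars or a playbook (the variable name here is a placeholder; use the one defined in the role's defaults):

```yaml
# Hypothetical variable name - changes the base directory the role creates
monitoring_base_dir: /data/monitoring
```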
Connectivity between monitoring server and monitoring agents/client targets
As outlined in the Docker network section, in single server deployments all network communication happens over the Docker network. In a distributed deployment, where the monitoring server is installed on a separate server from the endpoints it monitors, the communication naturally occurs over the network.
A high-level traffic flow looks as follows (a configuration sketch follows the list):
Prometheus (monitoring server) -> scrape targets (agents, clients)
Filebeat (endpoint/monitoring agent) -> ELK (monitoring server)
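As a rough sketch of the first flow, the Prometheus configuration on the monitoring server points at the agent and client targets; the job names, targets and ports below are illustrative, the actual configuration is generated by the role:

```yaml
# Illustrative scrape configuration, not the role's generated file
scrape_configs:
  - job_name: node-exporter
    static_configs:
      - targets: ['node-exporter:9100']   # over the Docker network (single server)
  - job_name: ethereum-metrics-exporter
    static_configs:
      - targets: ['10.0.0.11:9090']       # over the network (distributed deployment)
```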
Exposing ports
The role defaults to a "single server" deployment and maps container ports to the Docker host's localhost interface. This means you will need a reverse proxy (with TLS) or SSH tunnels to access the web interfaces (Kibana, Grafana, Prometheus, cAdvisor).
The default port mapping looks as follows when viewed using docker ps (the exact command is docker ps --format '{{ .Names }}\t{{ .Ports }}'):
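The output below is only an illustration of what such a mapping can look like (container names and ports are examples; the role's defaults may differ):

```
grafana         127.0.0.1:3000->3000/tcp
prometheus      127.0.0.1:9090->9090/tcp
kibana          127.0.0.1:5601->5601/tcp
cadvisor        127.0.0.1:8080->8080/tcp
node-exporter   127.0.0.1:9100->9100/tcp
```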
Note: the metrics exporters do not need to be mapped to localhost, since in a single server deployment Prometheus scrapes them over the Docker network. They are mapped to localhost only to make automated role testing easier.
Exposing ports to remote hosts
The configuration outlined in this section is suitable only if you run your servers on a private network behind a network firewall (or security groups). Otherwise, sensitive APIs and metrics will be exposed to the Internet even if your server has FirewallD or UFW enabled, due to how Docker modifies iptables. Refer to the Docker & host firewall section for details.
To make the ports accessible from remote hosts you can override the following variable:
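For instance (the variable name below is a placeholder; the real one is in the role's defaults):

```yaml
# Hypothetical variable name - bind published ports to all interfaces instead of localhost
monitoring_bind_address: 0.0.0.0
```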
Mapping ports to 0.0.0.0 is required for distributed deployments, where the monitoring server communicates with the endpoints over the network (Prometheus scraping) to collect node-exporter and ethereum-metrics-exporter metrics.
This is what it looks like viewed using docker ps on a monitored endpoint:
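Illustrative output only (names and ports are examples):

```
node-exporter               0.0.0.0:9100->9100/tcp
ethereum-metrics-exporter   0.0.0.0:9090->9090/tcp
cadvisor                    0.0.0.0:8080->8080/tcp
```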
This is also required on the monitoring server in order to expose the Logstash port for log forwarding by Filebeat.