Architecture

Overview

This page describes how the role works under the hood and how it configures the managed servers. The goal is to let anyone understand exactly what the role does, which helps with advanced customization and troubleshooting. In practice you should be able to use the role successfully without ever reading this page.

Docker Compose

The role utilizes Docker Compose. Each client is defined as a service in a corresponding Compose file. The Compose files are Jinja2 templates (Jinja2 is Ansible's templating language) and are rendered dynamically every time the role runs, based on the defined variables. Exactly one Compose service is defined per file, which lets the role "compose" the desired client mix or deploy each client on its own. The files use the following naming convention:

dc-blockchain-eth-execution-client_name.yml
dc-blockchain-eth-consensus-client_name.yml
dc-blockchain-eth-validator-client_name.yml
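For illustration, a heavily simplified template of this kind might look as follows. This is a sketch only; the variable name geth_docker_image is a placeholder, not one of the role's actual variables:

services:
  execution:
    container_name: geth
    image: "{{ geth_docker_image }}"    # Rendered by Ansible at run time
    restart: unless-stopped
    networks:
      - service_net

networks:
  service_net:
    name: service_net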

As of writing, the role includes the following Compose templates:

templates/
├── dc-blockchain-eth-consensus-lighthouse.yml.j2
├── dc-blockchain-eth-consensus-nimbus.yml.j2
├── dc-blockchain-eth-consensus-prysm.yml.j2
├── dc-blockchain-eth-consensus-teku.yml.j2
├── dc-blockchain-eth-execution-besu.yml.j2
├── dc-blockchain-eth-execution-erigon.yml.j2
├── dc-blockchain-eth-execution-geth-prune.yml
├── dc-blockchain-eth-execution-geth.yml.j2
├── dc-blockchain-eth-execution-nethermind.yml.j2
├── dc-blockchain-eth-validator-lighthouse.yml.j2
├── dc-blockchain-eth-validator-nimbus.yml.j2
├── dc-blockchain-eth-validator-prysm.yml.j2
└── dc-blockchain-eth-validator-teku.yml.j2

Docker network

Docker Compose services (execution, consensus and validator) are connected to the same Docker network. The network is defined in each Compose file.

networks:
  service_net:
    name: service_net

If you have any containers managed outside of the slingnode.ethereum role (for example, monitoring tools such as Prometheus) that need to connect to the running clients, you will need to either attach them to the same Docker network or expose the required ports on the host.
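For example, an externally managed container could join the network by declaring it as external in its own Compose file (a minimal sketch; the Prometheus service name and image are illustrative):

services:
  prometheus:
    image: prom/prometheus
    networks:
      - service_net

networks:
  service_net:
    external: true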

Starting the Docker Compose project

The Compose templates are rendered and copied to the target server, to the location defined by the following variables:

blockchain_root_path: /opt/blockchain
blockchain_docker_compose_path: "{{ blockchain_root_path }}/blockchain_dc"

The role passes all rendered files to a single Compose invocation, so Docker Compose merges them and starts the services as one project. Refer to the Docker Compose documentation for details on using multiple Compose files.
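Conceptually, this is equivalent to an Ansible task like the one below, which hands all rendered files to a single Compose invocation (a sketch assuming a recent community.docker collection, not necessarily the role's exact implementation):

- name: Start all clients as a single Compose project
  community.docker.docker_compose_v2:
    project_src: "{{ blockchain_docker_compose_path }}"
    files:
      - dc-blockchain-eth-execution-geth.yml
      - dc-blockchain-eth-consensus-lighthouse.yml
      - dc-blockchain-eth-validator-lighthouse.yml
    state: present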

Directory structure

The role creates the following directory structure.

opt/
└── blockchain
    ├── blockchain_dc # Docker Compose files
    ├── consensus     # Consensus clients data directories
    ├── execution     # Execution clients data directories
    ├── jwt           # JWT secret
    ├── logs          # Logs generated by custom scripts
    ├── scripts       # Custom scripts 
    └── validator     # Validator clients data directories

The location can be modified by overriding the following variable:

blockchain_root_path: /opt/blockchain
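For example, to relocate everything under /data/blockchain, override the variable when applying the role (minimal playbook sketch; the ethereum_nodes host group is illustrative):

- hosts: ethereum_nodes
  roles:
    - role: slingnode.ethereum
      vars:
        blockchain_root_path: /data/blockchain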

The exact sub-directories depend on the deployed layers and client mix. For instance, if you deploy only an execution client, the consensus and validator directories are not created. The example below shows the directory and file layout that would exist if Geth and Lighthouse (consensus and validator) were deployed.

blockchain/
├── blockchain_dc
│   ├── dc-blockchain-eth-consensus-lighthouse.yml
│   ├── dc-blockchain-eth-execution-geth.yml
│   └── dc-blockchain-eth-validator-lighthouse.yml
├── consensus
│   └── lighthouse
├── execution
│   └── geth
├── jwt
│   └── jwt.hex
├── scripts
│   └── prune_execution_client.py
└── validator
    └── lighthouse

Volume mapping

Each client's data directory is mapped to a directory on the host.

For example, if Geth is started with the following two flags:

--datadir=/gethdata
--authrpc.jwtsecret=/jwt/jwt.hex

The volume mapping in the Compose file is:

volumes:
  - /opt/blockchain/execution/geth:/gethdata:rw    # Data dir
  - /etc/localtime:/etc/localtime:ro               # Use the host's time
  - /opt/blockchain/jwt/jwt.hex:/jwt/jwt.hex:ro    # JWT secret
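Since the host side of each mapping is derived from blockchain_root_path, the corresponding template roughly looks like this before rendering (a simplified sketch):

volumes:
  - "{{ blockchain_root_path }}/execution/geth:/gethdata:rw"
  - /etc/localtime:/etc/localtime:ro
  - "{{ blockchain_root_path }}/jwt/jwt.hex:/jwt/jwt.hex:ro"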

User accounts and groups

The role creates three user accounts with corresponding groups, plus one additional group, all with predefined UIDs and GIDs. Each container runs in the context of a dedicated user account. Additionally, the execution and consensus containers are added to the jwt_secret_access_group, which allows them to read the file containing the JWT secret.

Created users:

User                      Group                       Purpose
execution_client          execution_client            Runs the execution client container
consensus_client          consensus_client            Runs the consensus client container
validator_client          validator_client            Runs the validator client container
-                         jwt_secret_access_group     Grants access to the JWT secret

The execution_client and consensus_client users are members of the jwt_secret_access_group.

The UIDs, GIDs, and the user and group names can be modified using variables. See defaults/main/main.yml.
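An override might look like the sketch below; the variable names shown here are illustrative, the authoritative names are in defaults/main/main.yml:

execution_client_user: execution_client    # Illustrative variable names
execution_client_uid: 59284
jwt_secret_access_group_gid: 29284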

Container users

The container processes run in the context of their respective user accounts: the UID and GID are passed using the user Compose directive, which overrides the default user defined in the image's Dockerfile. Since those UIDs do not exist inside the containers, all directories the processes write to must be mapped to the host.

    user: "59284:59284"

jwt_secret_access_group

The execution and consensus client containers are added to the jwt_secret_access_group using the group_add Compose directive.

    group_add:
      - "29284"

Directory permissions

The container processes have read and write permissions only to their own data directories. The execution and consensus processes additionally have read access to the jwt directory.

.
├── [drwxr-xr-x root     root    ]  blockchain_dc
├── [drwxr-xr-x root     root    ]  consensus
│   └── [drwx------ consensus_client consensus_client]  lighthouse
├── [drwxr-xr-x root     root    ]  execution
│   └── [drwx------ execution_client execution_client]  geth
├── [drwxr-x--- root     jwt_secret_access_group]  jwt
├── [drwxr-xr-x root     root    ]  logs
├── [drwxr-xr-x root     root    ]  scripts
└── [drwxr-xr-x root     root    ]  validator
    └── [drwx------ validator_client validator_client]  lighthouse
