Commit fab341b1 authored by Barnabas Busa, committed by GitHub

refactor!: participant_network & rename participant fields. (#508)

# Important!
Many participant fields have been renamed to make them consistent with the rest of the package:
### EL Flags
```
el_client_type -> el_type
el_client_image -> el_image
el_client_log_level -> el_log_level
el_client_volume_size -> el_volume_size
```
### CL Flags
```
cl_client_type -> cl_type
cl_client_image -> cl_image
cl_client_volume_size -> cl_volume_size
cl_client_log_level -> cl_log_level
beacon_extra_params -> cl_extra_params
beacon_extra_labels -> cl_extra_labels
bn_min_cpu -> cl_min_cpu
bn_max_cpu -> cl_max_cpu
bn_min_mem -> cl_min_mem
bn_max_mem -> cl_max_mem
use_separate_validator_client -> use_separate_vc
```
### Validator flags
```
validator_client_type -> vc_type
validator_tolerations -> vc_tolerations
validator_client_image -> vc_image
validator_extra_params -> vc_extra_params
validator_extra_labels -> vc_extra_labels
v_min_cpu -> vc_min_cpu
v_max_cpu -> vc_max_cpu
v_min_mem -> vc_min_mem
v_max_mem -> vc_max_mem
```
### Global flags
```
global_client_log_level -> global_log_level
```


Once this PR is merged, the old names will no longer work, and you will
have to bulk-rename the fields in all of your YAML files.

A `rename.sh` bash script has been added, which performs the bulk find-and-replace for you:
```bash
./rename.sh yourFile.yaml
```
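The contents of `rename.sh` are not reproduced in this commit message, but the bulk find-and-replace it performs can be sketched with `sed`; the old/new pairs come from the tables above, while the function name and the `sed` approach are assumptions, not the actual script:

```shell
#!/usr/bin/env bash
# Sketch of the bulk field rename (NOT the actual rename.sh from the repo).
# Rewrites a YAML file in place, mapping each old field name to its new one.
rename_fields() {
  local file="$1"
  # old-name:new-name pairs, taken from the rename tables above
  local renames=(
    "el_client_type:el_type"               "el_client_image:el_image"
    "el_client_log_level:el_log_level"     "el_client_volume_size:el_volume_size"
    "cl_client_type:cl_type"               "cl_client_image:cl_image"
    "cl_client_volume_size:cl_volume_size" "cl_client_log_level:cl_log_level"
    "beacon_extra_params:cl_extra_params"  "beacon_extra_labels:cl_extra_labels"
    "bn_min_cpu:cl_min_cpu"                "bn_max_cpu:cl_max_cpu"
    "bn_min_mem:cl_min_mem"                "bn_max_mem:cl_max_mem"
    "use_separate_validator_client:use_separate_vc"
    "validator_client_type:vc_type"        "validator_tolerations:vc_tolerations"
    "validator_client_image:vc_image"
    "validator_extra_params:vc_extra_params"
    "validator_extra_labels:vc_extra_labels"
    "v_min_cpu:vc_min_cpu"                 "v_max_cpu:vc_max_cpu"
    "v_min_mem:vc_min_mem"                 "v_max_mem:vc_max_mem"
    "global_client_log_level:global_log_level"
  )
  local pair old new
  for pair in "${renames[@]}"; do
    old="${pair%%:*}"
    new="${pair##*:}"
    # \b word boundaries (GNU sed) keep e.g. el_client_type from matching
    # inside cl_client_type
    sed -i "s/\b${old}\b/${new}/g" "$file"
  done
}
```

Note that on macOS/BSD `sed`, `-i` needs an explicit suffix argument (e.g. `sed -i ''`) and `\b` is a GNU extension; the real `rename.sh` may handle these cases differently.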

---------
Co-authored-by: Gyanendra Mishra <anomaly.the@gmail.com>
Parent commit: da55be84
The rest of the diff applies these renames mechanically across every example and test `network_params.yaml` configuration in the repository: the client-pairing matrices (geth, nethermind, erigon, besu, reth, and ethereumjs against teku, prysm, nimbus, lighthouse, and lodestar), the shadowfork, verkle, Dencun, and ephemery network configs, and the MEV, blobber, tolerations, and node-selector tests. Each file changes in the same way, for example:

```yaml
# Before
participants:
  - el_client_type: geth
    cl_client_type: lighthouse
    count: 1

# After
participants:
  - el_type: geth
    cl_type: lighthouse
    count: 1
```

One functional change is bundled with the renames: the Sepolia configuration pins its erigon participant to `el_image: ethpandaops/erigon:devel-d754b29` as a temporary fix until upstream is fixed.
# Important recent update notes - temporary note
There are many participant fields that have been renamed to be more consistent with the rest of the package. The following fields have been renamed:
### EL Flags
```
el_client_type -> el_type
el_client_image -> el_image
el_client_log_level -> el_log_level
el_client_volume_size -> el_volume_size
```
### CL Flags
```
cl_client_type -> cl_type
cl_client_image -> cl_image
cl_client_volume_size -> cl_volume_size
cl_client_log_level -> cl_log_level
beacon_extra_params -> cl_extra_params
beacon_extra_labels -> cl_extra_labels
bn_min_cpu -> cl_min_cpu
bn_max_cpu -> cl_max_cpu
bn_min_mem -> cl_min_mem
bn_max_mem -> cl_max_mem
use_separate_validator_client -> use_separate_vc
```
### Validator flags
```
validator_client_type -> vc_type
validator_tolerations -> vc_tolerations
validator_client_image -> vc_image
validator_extra_params -> vc_extra_params
validator_extra_labels -> vc_extra_labels
v_min_cpu -> vc_min_cpu
v_max_cpu -> vc_max_cpu
v_min_mem -> vc_min_mem
v_max_mem -> vc_max_mem
```
### Global flags
```
global_client_log_level -> global_log_level
```
To help you with the transition, we have added a script that will automatically update your `yaml` file to the new format. You can run the following command to update your network_params.yaml file:
```bash
./rename.sh example.yaml
```
# Ethereum Package

![Run of the Ethereum Network Package](run.gif)

......

Kurtosis packages are parameterizable, meaning you can customize your network and its behavior to suit your needs by storing parameters in a file that you can pass in at runtime like so:

```bash
kurtosis run --enclave my-testnet github.com/kurtosis-tech/ethereum-package --args-file network_params.yaml
```

Where `network_params.yaml` contains the parameters for your network in your home directory.
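A minimal parameter file for that command might look like the following sketch; the client pairings and the `dora` additional service are illustrative choices, not defaults:

```yaml
# network_params.yaml: a small two-participant network (illustrative)
participants:
  - el_type: geth
    cl_type: lighthouse
    count: 1
  - el_type: geth
    cl_type: lodestar
    count: 1
additional_services:
  - dora
```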
......

3. Network Syncing: The disk speed provided by cloud providers may not be sufficient to sync with networks that have high demands, such as mainnet. This could lead to syncing issues and delays.

To mitigate these issues, you can use the `el_volume_size` and `cl_volume_size` flags to override the default settings locally. This allows you to allocate more storage to the EL and CL clients, which can help accommodate faster state growth and improve syncing performance. However, keep in mind that increasing the volume size may also increase your cloud provider costs. Always monitor your usage and adjust as necessary to balance performance and cost.

For optimal performance, we recommend using a cloud provider that allows you to provision Kubernetes clusters with fast persistent storage, or self-hosting your own Kubernetes cluster with fast persistent storage.
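As a hedged illustration of those flags, a participant's volumes might be overridden like this; the sizes are placeholder values, and the unit the package expects should be checked against its documented defaults before copying them:

```yaml
# Illustrative values only: confirm the expected unit against the
# package's documented defaults before using these numbers.
participants:
  - el_type: geth
    cl_type: lighthouse
    el_volume_size: 200000  # larger EL volume for fast-growing state
    cl_volume_size: 100000
persistent: true
```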
......

It is possible to run the package on a Kubernetes cluster with taints and tolerations. This is done by adding the tolerations to the `tolerations` field in the `network_params.yaml` file. For example:

```yaml
participants:
  - el_type: reth
    cl_type: teku
global_tolerations:
  - key: "node-role.kubernetes.io/master6"
    value: "true"
  # ......
```

It is possible to define tolerations globally, per participant, or per container. The order of precedence is as follows:

1. Container (`el_tolerations`, `cl_tolerations`, `vc_tolerations`)
2. Participant (`tolerations`)
3. Global (`global_tolerations`)
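A sketch of how that precedence plays out in practice; the taint keys are illustrative assumptions, not values from the package:

```yaml
# Illustrative: container-level vc_tolerations win over the participant's
# tolerations, which in turn win over global_tolerations.
participants:
  - el_type: reth
    cl_type: teku
    use_separate_vc: true
    tolerations:              # applies to this participant's containers...
      - key: "example.com/dedicated"
        operator: "Exists"
        effect: "NoSchedule"
    vc_tolerations:           # ...except the VC, which uses this instead
      - key: "example.com/validators-only"
        operator: "Exists"
        effect: "NoSchedule"
global_tolerations:           # fallback for participants that set nothing
  - key: "example.com/testnet"
    operator: "Exists"
    effect: "NoSchedule"
```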
@@ -152,9 +198,10 @@ To configure the package behaviour, you can modify your `network_params.yaml` fi

```yaml
# Specification of the participants in the network
participants:
  # EL(Execution Layer) Specific flags
  # The type of EL client that should be started
  # Valid values are geth, nethermind, erigon, besu, ethereumjs, reth, nimbus-eth1
  - el_type: geth

    # The Docker image that should be used for the EL client; leave blank to use the default for the client type
    # Defaults by client:
@@ -165,30 +212,25 @@ participants:
    # - reth: ghcr.io/paradigmxyz/reth
    # - ethereumjs: ethpandaops/ethereumjs:master
    # - nimbus-eth1: ethpandaops/nimbus-eth1:master
    el_image: ""

    # The log level string that this participant's EL client should log at
    # If this is an empty string, then the global `logLevel` parameter's value will be translated into a string appropriate for the client (e.g. if
    # global `logLevel` = `info` then Geth would receive `3`, Besu would receive `INFO`, etc.)
    # If this is not an empty string, then this value will override the global `logLevel` setting to allow for fine-grained control
    # over a specific participant's logging
    el_log_level: ""

    # A list of optional extra env_vars the el container should spin up with
    el_extra_env_vars: {}

    # A list of optional extra labels the el container should spin up with
    # Example: el_extra_labels: {"ethereum-package.partition": "1"}
    el_extra_labels: {}

    # A list of optional extra params that will be passed to the EL client container for modifying its behaviour
    el_extra_params: []

    # A list of tolerations that will be passed to the EL client container
    # Only works with Kubernetes
    # Example: el_tolerations:
@@ -200,9 +242,24 @@ participants:
    # Defaults to empty
    el_tolerations: []

    # Persistent storage size for the EL client container (in MB)
    # Defaults to 0, which means that the default size for the client will be used
    # Default values can be found in /src/package_io/constants.star VOLUME_SIZE
    el_volume_size: 0

    # Resource management for el containers
    # CPU is millicores
    # RAM is in MB
    # Defaults are set per client
    el_min_cpu: 0
    el_max_cpu: 0
    el_min_mem: 0
    el_max_mem: 0
    # CL(Consensus Layer) Specific flags
    # The type of CL client that should be started
    # Valid values are nimbus, lighthouse, lodestar, teku, and prysm
    cl_type: lighthouse

    # The Docker image that should be used for the CL client; leave blank to use the default for the client type
    # Defaults by client:
@@ -211,24 +268,61 @@ participants:
    # - nimbus: statusim/nimbus-eth2:multiarch-latest
    # - prysm: gcr.io/prysmaticlabs/prysm/beacon-chain:latest
    # - lodestar: chainsafe/lodestar:next
    cl_image: ""

    # The log level string that this participant's CL client should log at
    # If this is an empty string, then the global `logLevel` parameter's value will be translated into a string appropriate for the client (e.g. if
    # global `logLevel` = `info` then Teku would receive `INFO`, Prysm would receive `info`, etc.)
    # If this is not an empty string, then this value will override the global `logLevel` setting to allow for fine-grained control
    # over a specific participant's logging
    cl_log_level: ""

    # A list of optional extra env_vars the cl container should spin up with
    cl_extra_env_vars: {}

    # A list of optional extra labels that will be passed to the CL client Beacon container.
    # Example: cl_extra_labels: {"ethereum-package.partition": "1"}
    cl_extra_labels: {}

    # A list of optional extra params that will be passed to the CL client Beacon container for modifying its behaviour
    # If the client combines the Beacon & validator nodes (e.g. Teku, Nimbus), then this list will be passed to the combined Beacon-validator node
    cl_extra_params: []

    # A list of tolerations that will be passed to the CL client container
    # Only works with Kubernetes
    # Example: cl_tolerations:
    #   - key: "key"
    #     operator: "Equal"
    #     value: "value"
    #     effect: "NoSchedule"
    #     toleration_seconds: 3600
    # Defaults to empty
    cl_tolerations: []

    # Persistent storage size for the CL client container (in MB)
    # Defaults to 0, which means that the default size for the client will be used
    # Default values can be found in /src/package_io/constants.star VOLUME_SIZE
    cl_volume_size: 0

    # Resource management for cl containers
    # CPU is millicores
    # RAM is in MB
    # Defaults are set per client
    cl_min_cpu: 0
    cl_max_cpu: 0
    cl_min_mem: 0
    cl_max_mem: 0
    # Whether to use a separate validator client attached to the CL client.
    # Defaults to false for clients that can run both in one process (Teku, Nimbus)
    use_separate_vc: false

    # VC (Validator Client) Specific flags
    # The type of validator client that should be used
    # Valid values are nimbus, lighthouse, lodestar, teku, and prysm
    # (The prysm validator only works with a prysm CL client)
    # Defaults to matching the chosen CL client (cl_type)
    vc_type: ""

    # The Docker image that should be used for the separate validator client
    # Defaults by client:
@@ -237,14 +331,27 @@ participants:
    # - nimbus: statusim/nimbus-validator-client:multiarch-latest
    # - prysm: gcr.io/prysmaticlabs/prysm/validator:latest
    # - teku: consensys/teku:latest
    vc_image: ""

    # The log level string that this participant's validator client should log at
    # If this is an empty string, then the global `logLevel` parameter's value will be translated into a string appropriate for the client (e.g. if
    # global `logLevel` = `info` then Teku would receive `INFO`, Prysm would receive `info`, etc.)
    # If this is not an empty string, then this value will override the global `logLevel` setting to allow for fine-grained control
    # over a specific participant's logging
    vc_log_level: ""

    # A list of optional extra env_vars the vc container should spin up with
    vc_extra_env_vars: {}

    # A list of optional extra labels that will be passed to the CL client validator container.
    # Example: vc_extra_labels: {"ethereum-package.partition": "1"}
    vc_extra_labels: {}

    # A list of optional extra params that will be passed to the CL client validator container for modifying its behaviour
    # If the client combines the Beacon & validator nodes (e.g. Teku, Nimbus), then this list will also be passed to the combined Beacon-validator node
    vc_extra_params: []

    # A list of tolerations that will be passed to the validator container
    # Only works with Kubernetes
    # Example: vc_tolerations:
    #   - key: "key"
@@ -253,18 +360,28 @@ participants:
    #     effect: "NoSchedule"
    #     toleration_seconds: 3600
    # Defaults to empty
    vc_tolerations: []

    # Resource management for vc containers
    # CPU is millicores
    # RAM is in MB
    # Defaults are set per client
    vc_min_cpu: 0
    vc_max_cpu: 0
    vc_min_mem: 0
    vc_max_mem: 0
    # Count of the number of validators you want to run for a given participant
    # Defaults to null, which means that the number of validators will be using the
    # network parameter num_validator_keys_per_node
    validator_count: null

    # Participant specific flags
    # Node selector
    # Only works with Kubernetes
    # Example: node_selectors: { "disktype": "ssd" }
    # Defaults to empty
    node_selectors: {}

    # A list of tolerations that will be passed to the EL/CL/validator containers
    # This is to be used when you don't want to specify the tolerations for each container separately
@@ -278,56 +395,9 @@ participants:
    # Defaults to empty
    tolerations: []

    # Count of nodes to spin up for this participant
    # Defaults to 1
    count: 1
    # Snooper can be enabled with the `snooper_enabled` flag per client or globally
    # Defaults to false
@@ -341,15 +411,6 @@ participants:
    # Defaults to false
    xatu_sentry_enabled: false

    # Prometheus additional configuration for a given participant prometheus target.
    # Execution, beacon and validator client targets on prometheus will include this
    # configuration.
@@ -367,8 +428,26 @@ participants:
    # Defaults to empty
    blobber_extra_params: []
    # A set of parameters the node needs to reach an external block building network
    # If `null` then the builder infrastructure will not be instantiated
    # Example:
    #
    # "relay_endpoints": [
    #   "https://0xdeadbeefcafa@relay.example.com",
    #   "https://0xdeadbeefcafb@relay.example.com",
    #   "https://0xdeadbeefcafc@relay.example.com",
    #   "https://0xdeadbeefcafd@relay.example.com"
    # ]
    builder_network_params: null

# Default configuration parameters for the network
network_params:
  # Network name, used to enable syncing of alternative networks
  # Defaults to "kurtosis"
  # You can sync any public network by setting this to the network name (e.g. "mainnet", "goerli", "sepolia", "holesky")
  # You can sync any devnet by setting this to the network name (e.g. "dencun-devnet-12", "verkle-gen-devnet-2")
  network: "kurtosis"

  # The network ID of the network.
  network_id: 3151908
@@ -384,8 +463,10 @@ network_params:
  # This mnemonic will a) be used to create keystores for all the types of validators that we have and b) be used to generate a CL genesis.ssz that has the children
  # validator keys already preregistered as validators
  preregistered_validator_keys_mnemonic: "giant issue aisle success illegal bike spike question tent bar rely arctic volcano long crawl hungry vocal artwork sniff fantasy very lucky have athlete"

  # The number of pre-registered validators for genesis. If 0 or not specified then the value will be calculated from the participants
  preregistered_validator_count: 0
  # How long you want the network to wait before starting up
  genesis_delay: 20
@@ -403,17 +484,6 @@ network_params:
  # Defaults to 2048
  eth1_follow_distance: 2048

  # The number of epochs to wait for validators to be able to withdraw
  # Defaults to 256 epochs ~27 hours
  min_validator_withdrawability_delay: 256
@@ -422,6 +492,11 @@ network_params:
  # Defaults to 256 epochs ~27 hours
  shard_committee_period: 256

  # The epoch at which the capella/deneb/electra forks are set to occur.
  capella_fork_epoch: 0
  deneb_fork_epoch: 4
  electra_fork_epoch: null

  # Network sync base url for syncing public networks from a custom snapshot (mostly useful for shadowforks)
  # Defaults to "https://ethpandaops-ethereum-node-snapshots.ams3.digitaloceanspaces.com/"
  # If you have a local snapshot, you can set this to the local url:
@@ -429,6 +504,31 @@ network_params:
  # The snapshots are taken with https://github.com/ethpandaops/snapshotter
  network_sync_base_url: https://ethpandaops-ethereum-node-snapshots.ams3.digitaloceanspaces.com/
# Global parameters for the network
# By default includes
# - A transaction spammer & blob spammer is launched to fake transactions sent to the network
# - Forkmon for EL will be launched
# - A prometheus will be started, coupled with grafana
# - A beacon metrics gazer will be launched
# - A light beacon chain explorer will be launched
# - Default: ["tx_spammer", "blob_spammer", "el_forkmon", "beacon_metrics_gazer", "dora", "prometheus_grafana"]
additional_services:
  - assertoor
  - broadcaster
  - tx_spammer
  - blob_spammer
  - custom_flood
  - goomy_blob
  - el_forkmon
  - blockscout
  - beacon_metrics_gazer
  - dora
  - full_beaconchain_explorer
  - prometheus_grafana
  - blobscan

# Configuration place for transaction spammer - https://github.com/MariusVanDerWijden/tx-fuzz
tx_spammer_params:
  # A list of optional extra params that will be passed to the TX Spammer container for modifying its behaviour
@@ -503,35 +603,13 @@ assertoor_params:
  tests: []
# If set, the package will block until a finalized epoch has occurred.
wait_for_finalization: false

# The global log level that all clients should log at
# Valid values are "error", "warn", "info", "debug", and "trace"
# This value will be overridden by participant-specific values
global_log_level: "info"

# EngineAPI Snooper global flags for all participants
# Defaults to false
@@ -619,14 +697,14 @@ xatu_sentry_params:
  xatu_server_headers: {}
  # Beacon event stream topics to subscribe to
  beacon_subscriptions:
    - attestation
    - block
    - chain_reorg
    - finalized_checkpoint
    - head
    - voluntary_exit
    - contribution_and_proof
    - blob_sidecar

# Global tolerations that will be passed to all containers (unless overridden by a more specific toleration)
# Only works with Kubernetes
@@ -653,31 +731,31 @@ global_node_selectors: {}
```

```yaml
participants:
  - el_type: geth
    el_image: ethpandaops/geth:<VERKLE_IMAGE>
    el_extra_params:
      - "--override.verkle=<UNIXTIMESTAMP>"
    cl_type: lighthouse
    cl_image: sigp/lighthouse:latest
  - el_type: geth
    el_image: ethpandaops/geth:<VERKLE_IMAGE>
    el_extra_params:
      - "--override.verkle=<UNIXTIMESTAMP>"
    cl_type: lighthouse
    cl_image: sigp/lighthouse:latest
  - el_type: geth
    el_image: ethpandaops/geth:<VERKLE_IMAGE>
    el_extra_params:
      - "--override.verkle=<UNIXTIMESTAMP>"
    cl_type: lighthouse
    cl_image: sigp/lighthouse:latest
network_params:
  capella_fork_epoch: 2
  deneb_fork_epoch: 4
additional_services: []
wait_for_finalization: false
wait_for_verifications: false
global_log_level: info
```
@@ -689,20 +767,20 @@ global_client_log_level: info

```yaml
participants:
  - el_type: geth
    el_image: ''
    cl_type: lighthouse
    cl_image: ''
    count: 2
  - el_type: nethermind
    el_image: ''
    cl_type: teku
    cl_image: ''
    count: 1
  - el_type: besu
    el_image: ''
    cl_type: prysm
    cl_image: ''
    count: 2
mev_type: mock
additional_services: []
@@ -715,13 +793,13 @@ additional_services: []
```
```yaml
participants:
  - el_type: geth
    cl_type: lighthouse
    count: 2
  - el_type: nethermind
    cl_type: teku
  - el_type: besu
    cl_type: prysm
    count: 2
mev_type: full
network_params:
@@ -737,8 +815,8 @@ additional_services: []
```
```yaml
participants:
  - el_type: geth
    cl_type: lighthouse
    count: 2
    snooper_enabled: true
```
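
Similarly, the renamed validator-client fields can be combined to run a separate validator client for a participant. A sketch with hypothetical values, using only fields documented above:

```yaml
participants:
  - el_type: geth
    cl_type: teku
    # Run validator duties in a dedicated container instead of the combined
    # Beacon-validator process that Teku uses by default
    use_separate_vc: true
    # Defaults to matching cl_type when left empty
    vc_type: teku
    count: 1
```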
@@ -63,7 +63,7 @@ Then the validator keys are generated. A tool called [eth2-val-tools](https://gi
### Starting EL clients

Next, we plug the generated genesis data [into EL client "launchers"](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/el) to start a mining network of EL nodes. The launchers come with a `launch` function that consumes EL genesis data and produces information about the running EL client node. Running EL node information is represented by [an `el_context` struct](https://github.com/kurtosis-tech/ethereum-package/blob/main/src/participant_network/el/el_context.star). Each EL client type has its own launcher (e.g. [Geth](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/el/geth), [Besu](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/el/besu)) because each EL client requires different environment variables and flags to be set when launching the client's container.

### Starting CL clients

@@ -71,9 +71,9 @@ Once CL genesis data and keys have been created, the CL client nodes are started

- CL client launchers come with a `launch` method
- One CL client launcher exists per client type (e.g. [Nimbus](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/cl/nimbus), [Lighthouse](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/cl/lighthouse))
- Launched CL node information is tracked in [a `cl_context` struct](https://github.com/kurtosis-tech/ethereum-package/blob/main/src/participant_network/cl/cl_context.star)

There are only two major differences between CL client and EL client launchers. First, the `cl_client_launcher.launch` method also consumes an `el_context`, because each CL client is connected in a 1:1 relationship with an EL client. Second, because CL clients have keys, the keystore files are passed into the `launch` function as well.
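
The launcher pattern described above can be sketched in plain Python as follows. The field and function names here are hypothetical stand-ins — the real package implements this in Starlark, and the actual structs carry many more fields:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the package's el_context / cl_context structs
@dataclass
class ElContext:
    ip_addr: str
    rpc_port_num: int

@dataclass
class ClContext:
    ip_addr: str
    http_port_num: int

def launch_el(genesis_data: dict) -> ElContext:
    # A real launcher starts a container configured from the genesis data;
    # here we only model the information it returns about the running node.
    return ElContext(ip_addr="172.16.0.10", rpc_port_num=8545)

def launch_cl(genesis_data: dict, el_context: ElContext, keystore_files: list) -> ClContext:
    # CL launchers additionally take the paired EL node (1:1 relationship)
    # and the validator keystore files.
    assert el_context.rpc_port_num > 0
    return ClContext(ip_addr="172.16.0.11", http_port_num=4000)

genesis = {"network_id": 3151908}
el = launch_el(genesis)
cl = launch_cl(genesis, el, keystore_files=[])
print(cl.http_port_num)
```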
## Auxiliary Services
@@ -99,7 +99,7 @@ def run(plan, args={}):
        plan,
        args_with_right_defaults.participants,
        network_params,
        args_with_right_defaults.global_log_level,
        jwt_file,
        keymanager_file,
        keymanager_p12_file,
@@ -112,20 +112,20 @@ def run(plan, args={}):
    plan.print(
        "NODE JSON RPC URI: '{0}:{1}'".format(
            all_participants[0].el_context.ip_addr,
            all_participants[0].el_context.rpc_port_num,
        )
    )

    all_el_contexts = []
    all_cl_contexts = []
    all_vc_contexts = []
    all_ethereum_metrics_exporter_contexts = []
    all_xatu_sentry_contexts = []
    for participant in all_participants:
        all_el_contexts.append(participant.el_context)
        all_cl_contexts.append(participant.cl_context)
        all_vc_contexts.append(participant.vc_context)
        all_ethereum_metrics_exporter_contexts.append(
            participant.ethereum_metrics_exporter_context
        )
@@ -138,13 +138,13 @@ def run(plan, args={}):
    ranges = validator_ranges.generate_validator_ranges(
        plan,
        validator_ranges_config_template,
        all_cl_contexts,
        args_with_right_defaults.participants,
    )

    fuzz_target = "http://{0}:{1}".format(
        all_el_contexts[0].ip_addr,
        all_el_contexts[0].rpc_port_num,
    )

    # Broadcaster forwards requests, sent to it, to all nodes in parallel
@@ -152,7 +152,7 @@ def run(plan, args={}):
        args_with_right_defaults.additional_services.remove("broadcaster")
        broadcaster_service = broadcaster.launch_broadcaster(
            plan,
            all_el_contexts,
            global_node_selectors,
        )
        fuzz_target = "http://{0}:{1}".format(
...@@ -174,18 +174,18 @@ def run(plan, args={}): ...@@ -174,18 +174,18 @@ def run(plan, args={}):
and args_with_right_defaults.mev_type == MOCK_MEV_TYPE and args_with_right_defaults.mev_type == MOCK_MEV_TYPE
): ):
el_uri = "{0}:{1}".format( el_uri = "{0}:{1}".format(
all_el_client_contexts[0].ip_addr, all_el_contexts[0].ip_addr,
all_el_client_contexts[0].engine_rpc_port_num, all_el_contexts[0].engine_rpc_port_num,
) )
beacon_uri = "{0}:{1}".format( beacon_uri = "{0}:{1}".format(
all_cl_client_contexts[0].ip_addr, all_cl_client_contexts[0].http_port_num all_cl_contexts[0].ip_addr, all_cl_contexts[0].http_port_num
) )
endpoint = mock_mev.launch_mock_mev( endpoint = mock_mev.launch_mock_mev(
plan, plan,
el_uri, el_uri,
beacon_uri, beacon_uri,
raw_jwt_secret, raw_jwt_secret,
args_with_right_defaults.global_client_log_level, args_with_right_defaults.global_log_level,
global_node_selectors, global_node_selectors,
) )
mev_endpoints.append(endpoint) mev_endpoints.append(endpoint)
...@@ -194,16 +194,16 @@ def run(plan, args={}): ...@@ -194,16 +194,16 @@ def run(plan, args={}):
and args_with_right_defaults.mev_type == FULL_MEV_TYPE and args_with_right_defaults.mev_type == FULL_MEV_TYPE
): ):
builder_uri = "http://{0}:{1}".format( builder_uri = "http://{0}:{1}".format(
all_el_client_contexts[-1].ip_addr, all_el_client_contexts[-1].rpc_port_num all_el_contexts[-1].ip_addr, all_el_contexts[-1].rpc_port_num
) )
beacon_uris = ",".join( beacon_uris = ",".join(
[ [
"http://{0}:{1}".format(context.ip_addr, context.http_port_num) "http://{0}:{1}".format(context.ip_addr, context.http_port_num)
for context in all_cl_client_contexts for context in all_cl_contexts
] ]
) )
first_cl_client = all_cl_client_contexts[0] first_cl_client = all_cl_contexts[0]
first_client_beacon_name = first_cl_client.beacon_service_name first_client_beacon_name = first_cl_client.beacon_service_name
contract_owner, normal_user = genesis_constants.PRE_FUNDED_ACCOUNTS[6:8] contract_owner, normal_user = genesis_constants.PRE_FUNDED_ACCOUNTS[6:8]
mev_flood.launch_mev_flood( mev_flood.launch_mev_flood(
...@@ -263,8 +263,8 @@ def run(plan, args={}): ...@@ -263,8 +263,8 @@ def run(plan, args={}):
mev_boost_service_name = "{0}-{1}-{2}-{3}".format( mev_boost_service_name = "{0}-{1}-{2}-{3}".format(
input_parser.MEV_BOOST_SERVICE_NAME_PREFIX, input_parser.MEV_BOOST_SERVICE_NAME_PREFIX,
index_str, index_str,
participant.cl_client_type, participant.cl_type,
participant.el_client_type, participant.el_type,
) )
mev_boost_context = mev_boost.launch( mev_boost_context = mev_boost.launch(
plan, plan,
...@@ -306,7 +306,7 @@ def run(plan, args={}): ...@@ -306,7 +306,7 @@ def run(plan, args={}):
plan, plan,
genesis_constants.PRE_FUNDED_ACCOUNTS, genesis_constants.PRE_FUNDED_ACCOUNTS,
fuzz_target, fuzz_target,
all_cl_client_contexts[0], all_cl_contexts[0],
network_params.deneb_fork_epoch, network_params.deneb_fork_epoch,
network_params.seconds_per_slot, network_params.seconds_per_slot,
network_params.genesis_delay, network_params.genesis_delay,
...@@ -319,8 +319,8 @@ def run(plan, args={}): ...@@ -319,8 +319,8 @@ def run(plan, args={}):
goomy_blob.launch_goomy_blob( goomy_blob.launch_goomy_blob(
plan, plan,
genesis_constants.PRE_FUNDED_ACCOUNTS, genesis_constants.PRE_FUNDED_ACCOUNTS,
all_el_client_contexts, all_el_contexts,
all_cl_client_contexts[0], all_cl_contexts[0],
network_params.seconds_per_slot, network_params.seconds_per_slot,
goomy_blob_params, goomy_blob_params,
global_node_selectors, global_node_selectors,
...@@ -336,7 +336,7 @@ def run(plan, args={}): ...@@ -336,7 +336,7 @@ def run(plan, args={}):
el_forkmon.launch_el_forkmon( el_forkmon.launch_el_forkmon(
plan, plan,
el_forkmon_config_template, el_forkmon_config_template,
all_el_client_contexts, all_el_contexts,
global_node_selectors, global_node_selectors,
) )
plan.print("Successfully launched execution layer forkmon") plan.print("Successfully launched execution layer forkmon")
...@@ -345,7 +345,7 @@ def run(plan, args={}): ...@@ -345,7 +345,7 @@ def run(plan, args={}):
beacon_metrics_gazer_prometheus_metrics_job = ( beacon_metrics_gazer_prometheus_metrics_job = (
beacon_metrics_gazer.launch_beacon_metrics_gazer( beacon_metrics_gazer.launch_beacon_metrics_gazer(
plan, plan,
all_cl_client_contexts, all_cl_contexts,
network_params, network_params,
global_node_selectors, global_node_selectors,
) )
...@@ -359,7 +359,7 @@ def run(plan, args={}): ...@@ -359,7 +359,7 @@ def run(plan, args={}):
plan.print("Launching blockscout") plan.print("Launching blockscout")
blockscout_sc_verif_url = blockscout.launch_blockscout( blockscout_sc_verif_url = blockscout.launch_blockscout(
plan, plan,
all_el_client_contexts, all_el_contexts,
persistent, persistent,
global_node_selectors, global_node_selectors,
) )
...@@ -370,7 +370,7 @@ def run(plan, args={}): ...@@ -370,7 +370,7 @@ def run(plan, args={}):
dora.launch_dora( dora.launch_dora(
plan, plan,
dora_config_template, dora_config_template,
all_cl_client_contexts, all_cl_contexts,
el_cl_data_files_artifact_uuid, el_cl_data_files_artifact_uuid,
network_params.electra_fork_epoch, network_params.electra_fork_epoch,
network_params.network, network_params.network,
...@@ -381,8 +381,8 @@ def run(plan, args={}): ...@@ -381,8 +381,8 @@ def run(plan, args={}):
plan.print("Launching blobscan") plan.print("Launching blobscan")
blobscan.launch_blobscan( blobscan.launch_blobscan(
plan, plan,
all_cl_client_contexts, all_cl_contexts,
all_el_client_contexts, all_el_contexts,
network_params.network_id, network_params.network_id,
persistent, persistent,
global_node_selectors, global_node_selectors,
...@@ -396,8 +396,8 @@ def run(plan, args={}): ...@@ -396,8 +396,8 @@ def run(plan, args={}):
full_beaconchain_explorer.launch_full_beacon( full_beaconchain_explorer.launch_full_beacon(
plan, plan,
full_beaconchain_explorer_config_template, full_beaconchain_explorer_config_template,
all_cl_client_contexts, all_cl_contexts,
all_el_client_contexts, all_el_contexts,
persistent, persistent,
global_node_selectors, global_node_selectors,
) )
...@@ -436,9 +436,9 @@ def run(plan, args={}): ...@@ -436,9 +436,9 @@ def run(plan, args={}):
plan.print("Launching prometheus...") plan.print("Launching prometheus...")
prometheus_private_url = prometheus.launch_prometheus( prometheus_private_url = prometheus.launch_prometheus(
plan, plan,
all_el_client_contexts, all_el_contexts,
all_cl_client_contexts, all_cl_contexts,
all_validator_client_contexts, all_vc_contexts,
prometheus_additional_metrics_jobs, prometheus_additional_metrics_jobs,
all_ethereum_metrics_exporter_contexts, all_ethereum_metrics_exporter_contexts,
all_xatu_sentry_contexts, all_xatu_sentry_contexts,
...@@ -458,7 +458,7 @@ def run(plan, args={}): ...@@ -458,7 +458,7 @@ def run(plan, args={}):
if args_with_right_defaults.wait_for_finalization: if args_with_right_defaults.wait_for_finalization:
plan.print("Waiting for the first finalized epoch") plan.print("Waiting for the first finalized epoch")
first_cl_client = all_cl_client_contexts[0] first_cl_client = all_cl_contexts[0]
first_client_beacon_name = first_cl_client.beacon_service_name first_client_beacon_name = first_cl_client.beacon_service_name
epoch_recipe = GetHttpRequestRecipe( epoch_recipe = GetHttpRequestRecipe(
endpoint="/eth/v1/beacon/states/head/finality_checkpoints", endpoint="/eth/v1/beacon/states/head/finality_checkpoints",
......
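The finality wait above polls the standard Beacon API `finality_checkpoints` endpoint with a `GetHttpRequestRecipe`. As a rough illustration (the response shape follows the Beacon API spec, not anything specific to this package), the extraction the recipe performs boils down to:

```python
import json

def finalized_epoch(body):
    # Pull the finalized epoch out of a
    # /eth/v1/beacon/states/head/finality_checkpoints response body.
    return int(json.loads(body)["data"]["finalized"]["epoch"])

# A response shaped like the Beacon API returns:
sample = '{"data": {"finalized": {"epoch": "3", "root": "0xabc"}}}'
print(finalized_epoch(sample))  # → 3
```

Finalization has occurred once this value is greater than zero, which is what `wait_for_finalization` keeps polling for.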
participants: participants:
- el_client_type: geth # EL
el_client_image: ethereum/client-go:latest - el_type: geth
el_client_log_level: "" el_image: ethereum/client-go:latest
el_extra_params: [] el_log_level: ""
el_extra_env_vars: {}
el_extra_labels: {} el_extra_labels: {}
el_extra_params: []
el_tolerations: [] el_tolerations: []
cl_client_type: lighthouse el_volume_size: 0
cl_client_image: sigp/lighthouse:latest
cl_client_log_level: ""
cl_tolerations: []
validator_tolerations: []
tolerations: []
node_selectors: {}
beacon_extra_params: []
beacon_extra_labels: {}
validator_extra_params: []
validator_extra_labels: {}
builder_network_params: null
validator_count: null
snooper_enabled: false
ethereum_metrics_exporter_enabled: false
xatu_sentry_enabled: false
el_min_cpu: 0 el_min_cpu: 0
el_max_cpu: 0 el_max_cpu: 0
el_min_mem: 0 el_min_mem: 0
el_max_mem: 0 el_max_mem: 0
bn_min_cpu: 0 # CL
bn_max_cpu: 0 cl_type: lighthouse
bn_min_mem: 0 cl_image: sigp/lighthouse:latest
bn_max_mem: 0 cl_log_level: ""
v_min_cpu: 0 cl_extra_env_vars: {}
v_max_cpu: 0 cl_extra_labels: {}
v_min_mem: 0 cl_extra_params: []
v_max_mem: 0 cl_tolerations: []
cl_volume_size: 0
cl_min_cpu: 0
cl_max_cpu: 0
cl_min_mem: 0
cl_max_mem: 0
use_separate_vc: true
# Validator
vc_type: lighthouse
vc_image: sigp/lighthouse:latest
vc_log_level: ""
vc_extra_env_vars: {}
vc_extra_labels: {}
vc_extra_params: []
vc_tolerations: []
vc_min_cpu: 0
vc_max_cpu: 0
vc_min_mem: 0
vc_max_mem: 0
validator_count: null
# participant specific
node_selectors: {}
tolerations: []
count: 2 count: 2
snooper_enabled: false
ethereum_metrics_exporter_enabled: false
xatu_sentry_enabled: false
prometheus_config: prometheus_config:
scrape_interval: 15s scrape_interval: 15s
labels: {} labels: {}
blobber_enabled: false blobber_enabled: false
blobber_extra_params: [] blobber_extra_params: []
builder_network_params: null
network_params: network_params:
network: kurtosis
network_id: "3151908" network_id: "3151908"
deposit_contract_address: "0x4242424242424242424242424242424242424242" deposit_contract_address: "0x4242424242424242424242424242424242424242"
seconds_per_slot: 12 seconds_per_slot: 12
...@@ -52,14 +66,13 @@ network_params: ...@@ -52,14 +66,13 @@ network_params:
genesis_delay: 20 genesis_delay: 20
max_churn: 8 max_churn: 8
ejection_balance: 16000000000 ejection_balance: 16000000000
eth1_follow_distance: 2048
min_validator_withdrawability_delay: 256
shard_committee_period: 256
capella_fork_epoch: 0 capella_fork_epoch: 0
deneb_fork_epoch: 4 deneb_fork_epoch: 4
electra_fork_epoch: null electra_fork_epoch: null
network: kurtosis
min_validator_withdrawability_delay: 256
shard_committee_period: 256
network_sync_base_url: https://ethpandaops-ethereum-node-snapshots.ams3.digitaloceanspaces.com/ network_sync_base_url: https://ethpandaops-ethereum-node-snapshots.ams3.digitaloceanspaces.com/
additional_services: additional_services:
- tx_spammer - tx_spammer
- blob_spammer - blob_spammer
...@@ -67,14 +80,34 @@ additional_services: ...@@ -67,14 +80,34 @@ additional_services:
- beacon_metrics_gazer - beacon_metrics_gazer
- dora - dora
- prometheus_grafana - prometheus_grafana
tx_spammer_params:
tx_spammer_extra_args: []
goomy_blob_params:
goomy_blob_args: []
assertoor_params:
image: ""
run_stability_check: true
run_block_proposal_check: true
run_transaction_test: false
run_blob_transaction_test: false
run_opcodes_transaction_test: false
run_lifecycle_test: false
tests: []
wait_for_finalization: false wait_for_finalization: false
global_client_log_level: info global_log_level: info
snooper_enabled: false snooper_enabled: false
ethereum_metrics_exporter_enabled: false ethereum_metrics_exporter_enabled: false
parallel_keystore_generation: false parallel_keystore_generation: false
disable_peer_scoring: false
grafana_additional_dashboards: []
persistent: false
mev_type: null mev_type: null
mev_params: mev_params:
mev_relay_image: flashbots/mev-boost-relay mev_relay_image: flashbots/mev-boost-relay
mev_builder_image: ethpandaops/flashbots-builder:main
mev_builder_cl_image: sigp/lighthouse:latest
mev_boost_image: flashbots/mev-boost
mev_boost_args: ["mev-boost", "--relay-check"]
mev_relay_api_extra_args: [] mev_relay_api_extra_args: []
mev_relay_housekeeper_extra_args: [] mev_relay_housekeeper_extra_args: []
mev_relay_website_extra_args: [] mev_relay_website_extra_args: []
...@@ -85,10 +118,22 @@ mev_params: ...@@ -85,10 +118,22 @@ mev_params:
mev_flood_image: flashbots/mev-flood mev_flood_image: flashbots/mev-flood
mev_flood_extra_args: [] mev_flood_extra_args: []
mev_flood_seconds_per_bundle: 15 mev_flood_seconds_per_bundle: 15
mev_boost_image: flashbots/mev-boost custom_flood_params:
mev_boost_args: ["mev-boost", "--relay-check"] interval_between_transactions: 1
grafana_additional_dashboards: []
persistent: false
xatu_sentry_enabled: false xatu_sentry_enabled: false
xatu_sentry_params:
xatu_sentry_image: ethpandaops/xatu-sentry
xatu_server_addr: localhost:8000
xatu_server_tls: false
xatu_server_headers: {}
beacon_subscriptions:
- attestation
- block
- chain_reorg
- finalized_checkpoint
- head
- voluntary_exit
- contribution_and_proof
- blob_sidecar
global_tolerations: [] global_tolerations: []
global_node_selectors: {} global_node_selectors: {}
#!/bin/bash

# Helper function to perform replacements
perform_replacements() {
    local input_file="$1"
    shift
    local replacements=("$@")
    for ((i = 0; i < ${#replacements[@]}; i+=2)); do
        original="${replacements[$i]}"
        replacement="${replacements[$i+1]}"
        # "--" keeps this portable: GNU sed treats it as end-of-options,
        # while BSD/macOS sed consumes it as the -i backup suffix.
        sed -i -- "s/$original/$replacement/g" "$input_file"
    done
}

# Check if an input file is provided
if [ $# -eq 0 ]; then
    echo "Usage: $0 <input_file>"
    exit 1
fi

# Define the input YAML file
input_file="$1"

# Define the replacement pairs as a flat "old new" list
replacements=(
    el_client_type el_type
    el_client_image el_image
    el_client_log_level el_log_level
    el_client_volume_size el_volume_size
    cl_client_type cl_type
    cl_client_image cl_image
    cl_client_volume_size cl_volume_size
    cl_client_log_level cl_log_level
    beacon_extra_params cl_extra_params
    beacon_extra_labels cl_extra_labels
    bn_min_cpu cl_min_cpu
    bn_max_cpu cl_max_cpu
    bn_min_mem cl_min_mem
    bn_max_mem cl_max_mem
    use_separate_validator_client use_separate_vc
    validator_client_type vc_type
    validator_tolerations vc_tolerations
    validator_client_image vc_image
    validator_extra_params vc_extra_params
    validator_extra_labels vc_extra_labels
    v_min_cpu vc_min_cpu
    v_max_cpu vc_max_cpu
    v_min_mem vc_min_mem
    v_max_mem vc_max_mem
    global_client_log_level global_log_level
)

# Perform replacements
perform_replacements "$input_file" "${replacements[@]}"
echo "Replacements completed."
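For environments without `sed`, the same migration can be sketched in a few lines of Python. The pairs below are a subset of the full table in rename.sh; the order matters, since later patterns must not re-match already-renamed fields:

```python
# Hypothetical pure-Python equivalent of rename.sh.
RENAMES = [
    ("el_client_type", "el_type"),
    ("el_client_image", "el_image"),
    ("cl_client_type", "cl_type"),
    ("validator_client_image", "vc_image"),
    ("global_client_log_level", "global_log_level"),
    # ... remaining pairs omitted here; rename.sh lists them all.
]

def migrate(text):
    # Apply the pairs sequentially, mirroring the sed passes.
    for old, new in RENAMES:
        text = text.replace(old, new)
    return text

print(migrate("el_client_type: geth"))  # → el_type: geth
```

Like the shell script, this is a blind textual find-and-replace, so it will also rewrite matching substrings inside comments or values.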
...@@ -39,12 +39,12 @@ def launch_assertoor( ...@@ -39,12 +39,12 @@ def launch_assertoor(
global_node_selectors, global_node_selectors,
): ):
all_client_info = [] all_client_info = []
validator_client_info = [] vc_info = []
for index, participant in enumerate(participant_contexts): for index, participant in enumerate(participant_contexts):
participant_config = participant_configs[index] participant_config = participant_configs[index]
cl_client = participant.cl_client_context cl_client = participant.cl_context
el_client = participant.el_client_context el_client = participant.el_context
all_client_info.append( all_client_info.append(
new_client_info( new_client_info(
...@@ -57,7 +57,7 @@ def launch_assertoor( ...@@ -57,7 +57,7 @@ def launch_assertoor(
) )
if participant_config.validator_count != 0: if participant_config.validator_count != 0:
validator_client_info.append( vc_info.append(
new_client_info( new_client_info(
cl_client.ip_addr, cl_client.ip_addr,
cl_client.http_port_num, cl_client.http_port_num,
...@@ -68,7 +68,7 @@ def launch_assertoor( ...@@ -68,7 +68,7 @@ def launch_assertoor(
) )
template_data = new_config_template_data( template_data = new_config_template_data(
HTTP_PORT_NUMBER, all_client_info, validator_client_info, assertoor_params HTTP_PORT_NUMBER, all_client_info, vc_info, assertoor_params
) )
template_and_data = shared_utils.new_template_and_data( template_and_data = shared_utils.new_template_and_data(
...@@ -134,9 +134,7 @@ def get_config( ...@@ -134,9 +134,7 @@ def get_config(
) )
def new_config_template_data( def new_config_template_data(listen_port_num, client_info, vc_info, assertoor_params):
listen_port_num, client_info, validator_client_info, assertoor_params
):
additional_tests = [] additional_tests = []
for index, testcfg in enumerate(assertoor_params.tests): for index, testcfg in enumerate(assertoor_params.tests):
if type(testcfg) == "dict": if type(testcfg) == "dict":
...@@ -153,7 +151,7 @@ def new_config_template_data( ...@@ -153,7 +151,7 @@ def new_config_template_data(
return { return {
"ListenPortNum": listen_port_num, "ListenPortNum": listen_port_num,
"ClientInfo": client_info, "ClientInfo": client_info,
"ValidatorClientInfo": validator_client_info, "ValidatorClientInfo": vc_info,
"RunStabilityCheck": assertoor_params.run_stability_check, "RunStabilityCheck": assertoor_params.run_stability_check,
"RunBlockProposalCheck": assertoor_params.run_block_proposal_check, "RunBlockProposalCheck": assertoor_params.run_block_proposal_check,
"RunLifecycleTest": assertoor_params.run_lifecycle_test, "RunLifecycleTest": assertoor_params.run_lifecycle_test,
......
...@@ -33,13 +33,13 @@ MAX_MEMORY = 300 ...@@ -33,13 +33,13 @@ MAX_MEMORY = 300
def launch_beacon_metrics_gazer( def launch_beacon_metrics_gazer(
plan, plan,
cl_client_contexts, cl_contexts,
network_params, network_params,
global_node_selectors, global_node_selectors,
): ):
config = get_config( config = get_config(
cl_client_contexts[0].ip_addr, cl_contexts[0].ip_addr,
cl_client_contexts[0].http_port_num, cl_contexts[0].http_port_num,
global_node_selectors, global_node_selectors,
) )
......
...@@ -14,7 +14,7 @@ def launch_blob_spammer( ...@@ -14,7 +14,7 @@ def launch_blob_spammer(
plan, plan,
prefunded_addresses, prefunded_addresses,
el_uri, el_uri,
cl_client_context, cl_context,
deneb_fork_epoch, deneb_fork_epoch,
seconds_per_slot, seconds_per_slot,
genesis_delay, genesis_delay,
...@@ -23,7 +23,7 @@ def launch_blob_spammer( ...@@ -23,7 +23,7 @@ def launch_blob_spammer(
config = get_config( config = get_config(
prefunded_addresses, prefunded_addresses,
el_uri, el_uri,
cl_client_context, cl_context,
deneb_fork_epoch, deneb_fork_epoch,
seconds_per_slot, seconds_per_slot,
genesis_delay, genesis_delay,
...@@ -35,7 +35,7 @@ def launch_blob_spammer( ...@@ -35,7 +35,7 @@ def launch_blob_spammer(
def get_config( def get_config(
prefunded_addresses, prefunded_addresses,
el_uri, el_uri,
cl_client_context, cl_context,
deneb_fork_epoch, deneb_fork_epoch,
seconds_per_slot, seconds_per_slot,
genesis_delay, genesis_delay,
...@@ -51,12 +51,12 @@ def get_config( ...@@ -51,12 +51,12 @@ def get_config(
"apk update", "apk update",
"apk add curl jq", "apk add curl jq",
'current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version")'.format( 'current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version")'.format(
cl_client_context.ip_addr, cl_client_context.http_port_num cl_context.ip_addr, cl_context.http_port_num
), ),
"echo $current_epoch", "echo $current_epoch",
'while [ $current_epoch != "deneb" ]; do echo "waiting for deneb, current epoch is $current_epoch"; current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version"); sleep {2}; done'.format( 'while [ $current_epoch != "deneb" ]; do echo "waiting for deneb, current epoch is $current_epoch"; current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version"); sleep {2}; done'.format(
cl_client_context.ip_addr, cl_context.ip_addr,
cl_client_context.http_port_num, cl_context.http_port_num,
seconds_per_slot, seconds_per_slot,
), ),
'echo "sleep is over, starting to send blob transactions"', 'echo "sleep is over, starting to send blob transactions"',
......
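The blob spammer's shell loop above keeps polling `blocks/head` until the reported fork version is `deneb`. Abstracting the `curl`+`jq` call behind a callable, the wait logic is just (names here are illustrative, not from the package):

```python
def wait_for_fork(get_version, target="deneb", max_polls=10):
    # get_version stands in for the curl+jq probe in the launcher above;
    # max_polls bounds the loop so the sketch terminates.
    for _ in range(max_polls):
        if get_version() == target:
            return True
    return False

versions = iter(["capella", "capella", "deneb"])
print(wait_for_fork(lambda: next(versions)))  # → True
```

The real script sleeps `seconds_per_slot` between polls and loops indefinitely instead of bounding the attempts.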
shared_utils = import_module("../shared_utils/shared_utils.star") shared_utils = import_module("../shared_utils/shared_utils.star")
input_parser = import_module("../package_io/input_parser.star") input_parser = import_module("../package_io/input_parser.star")
cl_client_context = import_module("../cl/cl_client_context.star") cl_context = import_module("../cl/cl_context.star")
blobber_context = import_module("../blobber/blobber_context.star") blobber_context = import_module("../blobber/blobber_context.star")
......
...@@ -55,18 +55,18 @@ POSTGRES_MAX_MEMORY = 1024 ...@@ -55,18 +55,18 @@ POSTGRES_MAX_MEMORY = 1024
def launch_blobscan( def launch_blobscan(
plan, plan,
cl_client_contexts, cl_contexts,
el_client_contexts, el_contexts,
chain_id, chain_id,
persistent, persistent,
global_node_selectors, global_node_selectors,
): ):
node_selectors = global_node_selectors node_selectors = global_node_selectors
beacon_node_rpc_uri = "http://{0}:{1}".format( beacon_node_rpc_uri = "http://{0}:{1}".format(
cl_client_contexts[0].ip_addr, cl_client_contexts[0].http_port_num cl_contexts[0].ip_addr, cl_contexts[0].http_port_num
) )
execution_node_rpc_uri = "http://{0}:{1}".format( execution_node_rpc_uri = "http://{0}:{1}".format(
el_client_contexts[0].ip_addr, el_client_contexts[0].rpc_port_num el_contexts[0].ip_addr, el_contexts[0].rpc_port_num
) )
postgres_output = postgres.run( postgres_output = postgres.run(
......
...@@ -40,7 +40,7 @@ VERIF_USED_PORTS = { ...@@ -40,7 +40,7 @@ VERIF_USED_PORTS = {
def launch_blockscout( def launch_blockscout(
plan, plan,
el_client_contexts, el_contexts,
persistent, persistent,
global_node_selectors, global_node_selectors,
): ):
...@@ -53,11 +53,11 @@ def launch_blockscout( ...@@ -53,11 +53,11 @@ def launch_blockscout(
node_selectors=global_node_selectors, node_selectors=global_node_selectors,
) )
el_client_context = el_client_contexts[0] el_context = el_contexts[0]
el_client_rpc_url = "http://{}:{}/".format( el_client_rpc_url = "http://{}:{}/".format(
el_client_context.ip_addr, el_client_context.rpc_port_num el_context.ip_addr, el_context.rpc_port_num
) )
el_client_name = el_client_context.client_name el_client_name = el_context.client_name
config_verif = get_config_verif(global_node_selectors) config_verif = get_config_verif(global_node_selectors)
verif_service_name = "{}-verif".format(SERVICE_NAME_BLOCKSCOUT) verif_service_name = "{}-verif".format(SERVICE_NAME_BLOCKSCOUT)
......
...@@ -9,20 +9,20 @@ MIN_MEMORY = 128 ...@@ -9,20 +9,20 @@ MIN_MEMORY = 128
MAX_MEMORY = 2048 MAX_MEMORY = 2048
def launch_broadcaster(plan, all_el_client_contexts, global_node_selectors): def launch_broadcaster(plan, all_el_contexts, global_node_selectors):
config = get_config(all_el_client_contexts, global_node_selectors) config = get_config(all_el_contexts, global_node_selectors)
return plan.add_service(SERVICE_NAME, config) return plan.add_service(SERVICE_NAME, config)
def get_config( def get_config(
all_el_client_contexts, all_el_contexts,
node_selectors, node_selectors,
): ):
return ServiceConfig( return ServiceConfig(
image=IMAGE_NAME, image=IMAGE_NAME,
cmd=[ cmd=[
"http://{0}:{1}".format(context.ip_addr, context.rpc_port_num) "http://{0}:{1}".format(context.ip_addr, context.rpc_port_num)
for context in all_el_client_contexts for context in all_el_contexts
], ],
min_cpu=MIN_CPU, min_cpu=MIN_CPU,
max_cpu=MAX_CPU, max_cpu=MAX_CPU,
......
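The broadcaster's `ServiceConfig` is built by turning every EL context into an RPC URL passed on the command line. A minimal sketch of that comprehension, with a stand-in context type carrying only the two fields used:

```python
from collections import namedtuple

# Minimal stand-in for the EL context struct; only the fields used here.
ELContext = namedtuple("ELContext", ["ip_addr", "rpc_port_num"])

def broadcaster_cmd(all_el_contexts):
    # Mirrors the comprehension in get_config above: one RPC URL per node.
    return ["http://{0}:{1}".format(c.ip_addr, c.rpc_port_num) for c in all_el_contexts]

print(broadcaster_cmd([ELContext("10.0.0.2", 8545), ELContext("10.0.0.3", 8545)]))
# → ['http://10.0.0.2:8545', 'http://10.0.0.3:8545']
```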
def new_cl_client_context( def new_cl_context(
client_name, client_name,
enr, enr,
ip_addr, ip_addr,
......
lighthouse = import_module("./lighthouse/lighthouse_launcher.star")
lodestar = import_module("./lodestar/lodestar_launcher.star")
nimbus = import_module("./nimbus/nimbus_launcher.star")
prysm = import_module("./prysm/prysm_launcher.star")
teku = import_module("./teku/teku_launcher.star")
constants = import_module("../package_io/constants.star")
input_parser = import_module("../package_io/input_parser.star")
shared_utils = import_module("../shared_utils/shared_utils.star")
snooper = import_module("../snooper/snooper_engine_launcher.star")
cl_context_BOOTNODE = None
def launch(
    plan,
    network_params,
    el_cl_data,
    jwt_file,
    keymanager_file,
    keymanager_p12_file,
    participants,
    all_el_contexts,
    global_log_level,
    global_node_selectors,
    global_tolerations,
    persistent,
    network_id,
    num_participants,
    validator_data,
    prysm_password_relative_filepath,
    prysm_password_artifact_uuid,
):
    plan.print("Launching CL network")

    cl_launchers = {
        constants.CL_TYPE.lighthouse: {
            "launcher": lighthouse.new_lighthouse_launcher(
                el_cl_data, jwt_file, network_params.network
            ),
            "launch_method": lighthouse.launch,
        },
        constants.CL_TYPE.lodestar: {
            "launcher": lodestar.new_lodestar_launcher(
                el_cl_data, jwt_file, network_params.network
            ),
            "launch_method": lodestar.launch,
        },
        constants.CL_TYPE.nimbus: {
            "launcher": nimbus.new_nimbus_launcher(
                el_cl_data,
                jwt_file,
                network_params.network,
                keymanager_file,
            ),
            "launch_method": nimbus.launch,
        },
        constants.CL_TYPE.prysm: {
            "launcher": prysm.new_prysm_launcher(
                el_cl_data,
                jwt_file,
                network_params.network,
                prysm_password_relative_filepath,
                prysm_password_artifact_uuid,
            ),
            "launch_method": prysm.launch,
        },
        constants.CL_TYPE.teku: {
            "launcher": teku.new_teku_launcher(
                el_cl_data,
                jwt_file,
                network_params.network,
                keymanager_file,
                keymanager_p12_file,
            ),
            "launch_method": teku.launch,
        },
    }

    all_snooper_engine_contexts = []
    all_cl_contexts = []
    preregistered_validator_keys_for_nodes = (
        validator_data.per_node_keystores
        if network_params.network == constants.NETWORK_NAME.kurtosis
        or constants.NETWORK_NAME.shadowfork in network_params.network
        else None
    )

    for index, participant in enumerate(participants):
        cl_type = participant.cl_type
        el_type = participant.el_type
        node_selectors = input_parser.get_client_node_selectors(
            participant.node_selectors,
            global_node_selectors,
        )

        if cl_type not in cl_launchers:
            fail(
                "Unsupported launcher '{0}', need one of '{1}'".format(
                    cl_type, ", ".join(cl_launchers.keys())
                )
            )

        cl_launcher, launch_method = (
            cl_launchers[cl_type]["launcher"],
            cl_launchers[cl_type]["launch_method"],
        )

        index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
        cl_service_name = "cl-{0}-{1}-{2}".format(index_str, cl_type, el_type)
        new_cl_node_validator_keystores = None
        if participant.validator_count != 0:
            new_cl_node_validator_keystores = preregistered_validator_keys_for_nodes[
                index
            ]

        el_context = all_el_contexts[index]
        cl_context = None
        snooper_engine_context = None
        if participant.snooper_enabled:
            snooper_service_name = "snooper-{0}-{1}-{2}".format(
                index_str, cl_type, el_type
            )
            snooper_engine_context = snooper.launch(
                plan,
                snooper_service_name,
                el_context,
                node_selectors,
            )
            plan.print(
                "Successfully added {0} snooper participants".format(
                    snooper_engine_context
                )
            )
        all_snooper_engine_contexts.append(snooper_engine_context)

        if index == 0:
            cl_context = launch_method(
                plan,
                cl_launcher,
                cl_service_name,
                participant.cl_image,
                participant.cl_log_level,
                global_log_level,
                cl_context_BOOTNODE,
                el_context,
                new_cl_node_validator_keystores,
                participant.cl_min_cpu,
                participant.cl_max_cpu,
                participant.cl_min_mem,
                participant.cl_max_mem,
                participant.snooper_enabled,
                snooper_engine_context,
                participant.blobber_enabled,
                participant.blobber_extra_params,
                participant.cl_extra_params,
                participant.cl_extra_env_vars,
                participant.cl_extra_labels,
                persistent,
                participant.cl_volume_size,
                participant.cl_tolerations,
                participant.tolerations,
                global_tolerations,
                node_selectors,
                participant.use_separate_vc,
            )
        else:
            boot_cl_client_ctx = all_cl_contexts
            cl_context = launch_method(
                plan,
                cl_launcher,
                cl_service_name,
                participant.cl_image,
                participant.cl_log_level,
                global_log_level,
                boot_cl_client_ctx,
                el_context,
                new_cl_node_validator_keystores,
                participant.cl_min_cpu,
                participant.cl_max_cpu,
                participant.cl_min_mem,
                participant.cl_max_mem,
                participant.snooper_enabled,
                snooper_engine_context,
                participant.blobber_enabled,
                participant.blobber_extra_params,
                participant.cl_extra_params,
                participant.cl_extra_env_vars,
                participant.cl_extra_labels,
                persistent,
                participant.cl_volume_size,
                participant.cl_tolerations,
                participant.tolerations,
                global_tolerations,
                node_selectors,
                participant.use_separate_vc,
            )

        # Add participant cl additional prometheus labels
        for metrics_info in cl_context.cl_nodes_metrics_info:
            if metrics_info != None:
                metrics_info["config"] = participant.prometheus_config

        all_cl_contexts.append(cl_context)

    return (
        all_cl_contexts,
        all_snooper_engine_contexts,
        preregistered_validator_keys_for_nodes,
    )
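The launcher lookup in `launch()` above is a plain dispatch table keyed by `cl_type`. A minimal Python sketch of the pattern, with hypothetical launcher functions and return values:

```python
# Hypothetical launchers standing in for lighthouse.launch, teku.launch, etc.
def launch_lighthouse(service_name):
    return "lighthouse:" + service_name

def launch_teku(service_name):
    return "teku:" + service_name

CL_LAUNCHERS = {
    "lighthouse": launch_lighthouse,
    "teku": launch_teku,
}

def launch(cl_type, service_name):
    # Fail fast with the supported set, as the Starlark code does.
    if cl_type not in CL_LAUNCHERS:
        raise ValueError(
            "Unsupported launcher '{0}', need one of '{1}'".format(
                cl_type, ", ".join(CL_LAUNCHERS)
            )
        )
    return CL_LAUNCHERS[cl_type](service_name)

print(launch("teku", "cl-1-teku-geth"))  # → teku:cl-1-teku-geth
```

Adding a new client is then just one more dictionary entry; no conditional chains need to change.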
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -54,11 +54,11 @@ BEACON_USED_PORTS = { ...@@ -54,11 +54,11 @@ BEACON_USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error", constants.GLOBAL_LOG_LEVEL.error: "error",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn", constants.GLOBAL_LOG_LEVEL.warn: "warn",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info", constants.GLOBAL_LOG_LEVEL.info: "info",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug", constants.GLOBAL_LOG_LEVEL.debug: "debug",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace", constants.GLOBAL_LOG_LEVEL.trace: "trace",
} }
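The `VERBOSITY_LEVELS` table above maps the package-wide log-level constants onto Lighthouse's own verbosity strings. The surrounding launcher resolves an empty per-participant `cl_log_level` to the `global_log_level` first; a sketch of that fallback (names here are illustrative):

```python
VERBOSITY_LEVELS = {
    "error": "error",
    "warn": "warn",
    "info": "info",
    "debug": "debug",
    "trace": "trace",
}

def get_log_level(participant_log_level, global_log_level):
    # An empty participant-level value falls back to the global default,
    # then maps through the client's verbosity table.
    level = participant_log_level if participant_log_level else global_log_level
    return VERBOSITY_LEVELS[level]

print(get_log_level("", "info"))  # → info
```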
...@@ -70,25 +70,26 @@ def launch( ...@@ -70,25 +70,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client=True, use_separate_vc=True,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
...@@ -102,16 +103,16 @@ def launch( ...@@ -102,16 +103,16 @@ def launch(
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["lighthouse_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["lighthouse_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["lighthouse_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["lighthouse_max_mem"]
) )
...@@ -130,16 +131,17 @@ def launch( ...@@ -130,16 +131,17 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -198,7 +200,7 @@ def launch( ...@@ -198,7 +200,7 @@ def launch(
) )
nodes_metrics_info = [beacon_node_metrics_info] nodes_metrics_info = [beacon_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"lighthouse", "lighthouse",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -223,15 +225,16 @@ def get_beacon_config( ...@@ -223,15 +225,16 @@ def get_beacon_config(
image, image,
service_name, service_name,
boot_cl_client_ctxs, boot_cl_client_ctxs,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
...@@ -246,8 +249,8 @@ def get_beacon_config( ...@@ -246,8 +249,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
# NOTE: If connecting to the merge devnet remotely we DON'T want the following flags; when they're not set, the node's external IP address is auto-detected # NOTE: If connecting to the merge devnet remotely we DON'T want the following flags; when they're not set, the node's external IP address is auto-detected
...@@ -367,25 +370,27 @@ def get_beacon_config( ...@@ -367,25 +370,27 @@ def get_beacon_config(
persistent_key="data-{0}".format(service_name), persistent_key="data-{0}".format(service_name),
size=cl_volume_size, size=cl_volume_size,
) )
env = {RUST_BACKTRACE_ENVVAR_NAME: RUST_FULL_BACKTRACE_KEYWORD}
env.update(extra_env_vars)
return ServiceConfig( return ServiceConfig(
image=image, image=image,
ports=BEACON_USED_PORTS, ports=BEACON_USED_PORTS,
cmd=cmd, cmd=cmd,
files=files, files=files,
env_vars={RUST_BACKTRACE_ENVVAR_NAME: RUST_FULL_BACKTRACE_KEYWORD}, env_vars=env,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.lighthouse, constants.CL_TYPE.lighthouse,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
image, image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
tolerations=tolerations, tolerations=tolerations,
......
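The Lighthouse hunk above also changes how `env_vars` is built: instead of a hard-coded dict, the launcher starts from its base Rust-backtrace environment and merges in the new `extra_env_vars` participant field. A minimal Python sketch of that merge order, with illustrative constant values (the real values live in the launcher module):

```python
# Sketch of the env-var merge introduced above: start from the launcher's
# base environment, then layer user-supplied extras on top. Because
# dict.update() is applied last, extras win on key collisions.
# The constant VALUES below are illustrative placeholders.
RUST_BACKTRACE_ENVVAR_NAME = "RUST_BACKTRACE"
RUST_FULL_BACKTRACE_KEYWORD = "full"

def build_env(extra_env_vars):
    env = {RUST_BACKTRACE_ENVVAR_NAME: RUST_FULL_BACKTRACE_KEYWORD}
    env.update(extra_env_vars)  # later update overrides base keys
    return env
```

So a participant setting `extra_env_vars: {RUST_BACKTRACE: "1"}` overrides the launcher default, while unrelated keys are simply added.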
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
blobber_launcher = import_module("../../blobber/blobber_launcher.star") blobber_launcher = import_module("../../blobber/blobber_launcher.star")
...@@ -43,11 +43,11 @@ BEACON_USED_PORTS = { ...@@ -43,11 +43,11 @@ BEACON_USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error", constants.GLOBAL_LOG_LEVEL.error: "error",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn", constants.GLOBAL_LOG_LEVEL.warn: "warn",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info", constants.GLOBAL_LOG_LEVEL.info: "info",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug", constants.GLOBAL_LOG_LEVEL.debug: "debug",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace", constants.GLOBAL_LOG_LEVEL.trace: "trace",
} }
...@@ -59,25 +59,26 @@ def launch( ...@@ -59,25 +59,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client=True, use_separate_vc=True,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
log_level = input_parser.get_client_log_level_or_default( log_level = input_parser.get_client_log_level_or_default(
...@@ -90,16 +91,16 @@ def launch( ...@@ -90,16 +91,16 @@ def launch(
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["lodestar_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["lodestar_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["lodestar_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["lodestar_max_mem"]
) )
...@@ -118,16 +119,17 @@ def launch( ...@@ -118,16 +119,17 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -189,7 +191,7 @@ def launch( ...@@ -189,7 +191,7 @@ def launch(
) )
nodes_metrics_info = [beacon_node_metrics_info] nodes_metrics_info = [beacon_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"lodestar", "lodestar",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -214,15 +216,16 @@ def get_beacon_config( ...@@ -214,15 +216,16 @@ def get_beacon_config(
image, image,
service_name, service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
...@@ -230,8 +233,8 @@ def get_beacon_config( ...@@ -230,8 +233,8 @@ def get_beacon_config(
node_selectors, node_selectors,
): ):
el_client_rpc_url_str = "http://{0}:{1}".format( el_client_rpc_url_str = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.rpc_port_num, el_context.rpc_port_num,
) )
# If snooper is enabled use the snooper engine context, otherwise use the execution client context # If snooper is enabled use the snooper engine context, otherwise use the execution client context
...@@ -242,8 +245,8 @@ def get_beacon_config( ...@@ -242,8 +245,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
cmd = [ cmd = [
...@@ -344,20 +347,21 @@ def get_beacon_config( ...@@ -344,20 +347,21 @@ def get_beacon_config(
image=image, image=image,
ports=BEACON_USED_PORTS, ports=BEACON_USED_PORTS,
cmd=cmd, cmd=cmd,
env_vars=extra_env_vars,
files=files, files=files,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.lodestar, constants.CL_TYPE.lodestar,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
image, image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
tolerations=tolerations, tolerations=tolerations,
......
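The `bn_* -> cl_*` renames above all follow the same resource-defaulting pattern: a participant value of `0` (or unset) falls back to the package minimum for the floor, and to the per-network override table for the ceiling. A Python sketch of that logic, with illustrative default values standing in for the package's real constants:

```python
# Sketch of the resource-defaulting pattern used by every launcher above:
# <= 0 means "use the default". The numbers and the network key below are
# illustrative, not the package's actual values.
BEACON_MIN_CPU = 50  # millicores; illustrative default
RAM_CPU_OVERRIDES = {"kurtosis": {"lodestar_max_cpu": 1000}}  # illustrative

def resolve_cl_cpu(cl_min_cpu, cl_max_cpu, network_name):
    cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
    cl_max_cpu = (
        int(cl_max_cpu)
        if int(cl_max_cpu) > 0
        else RAM_CPU_OVERRIDES[network_name]["lodestar_max_cpu"]
    )
    return cl_min_cpu, cl_max_cpu
```

The same shape repeats for memory and for every client, which is why the rename touches four nearly identical blocks per launcher.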
# ---------------------------------- Library Imports ---------------------------------- # ---------------------------------- Library Imports ----------------------------------
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
validator_client_shared = import_module("../../validator_client/shared.star") vc_shared = import_module("../../vc/shared.star")
# ---------------------------------- Beacon client ------------------------------------- # ---------------------------------- Beacon client -------------------------------------
# Nimbus requires that its data directory already exists (because it expects you to bind-mount it), so we # Nimbus requires that its data directory already exists (because it expects you to bind-mount it), so we
# have to create it # have to create it
...@@ -63,11 +63,11 @@ BEACON_USED_PORTS = { ...@@ -63,11 +63,11 @@ BEACON_USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
...@@ -81,25 +81,26 @@ def launch( ...@@ -81,25 +81,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client, use_separate_vc,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
...@@ -113,16 +114,16 @@ def launch( ...@@ -113,16 +114,16 @@ def launch(
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["nimbus_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["nimbus_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["nimbus_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["nimbus_max_mem"]
) )
...@@ -141,18 +142,19 @@ def launch( ...@@ -141,18 +142,19 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
use_separate_validator_client, extra_labels,
use_separate_vc,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -190,7 +192,7 @@ def launch( ...@@ -190,7 +192,7 @@ def launch(
) )
nodes_metrics_info = [nimbus_node_metrics_info] nodes_metrics_info = [nimbus_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"nimbus", "nimbus",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -216,18 +218,19 @@ def get_beacon_config( ...@@ -216,18 +218,19 @@ def get_beacon_config(
image, image,
service_name, service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
use_separate_validator_client, use_separate_vc,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -252,8 +255,8 @@ def get_beacon_config( ...@@ -252,8 +255,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
cmd = [ cmd = [
...@@ -295,12 +298,9 @@ def get_beacon_config( ...@@ -295,12 +298,9 @@ def get_beacon_config(
"--validators-dir=" + validator_keys_dirpath, "--validators-dir=" + validator_keys_dirpath,
"--secrets-dir=" + validator_secrets_dirpath, "--secrets-dir=" + validator_secrets_dirpath,
"--suggested-fee-recipient=" + constants.VALIDATING_REWARDS_ACCOUNT, "--suggested-fee-recipient=" + constants.VALIDATING_REWARDS_ACCOUNT,
"--graffiti=" "--graffiti=" + constants.CL_TYPE.nimbus + "-" + el_context.client_name,
+ constants.CL_CLIENT_TYPE.nimbus
+ "-"
+ el_client_context.client_name,
"--keymanager", "--keymanager",
"--keymanager-port={0}".format(validator_client_shared.VALIDATOR_HTTP_PORT_NUM), "--keymanager-port={0}".format(vc_shared.VALIDATOR_HTTP_PORT_NUM),
"--keymanager-address=0.0.0.0", "--keymanager-address=0.0.0.0",
"--keymanager-allow-origin=*", "--keymanager-allow-origin=*",
"--keymanager-token-file=" + constants.KEYMANAGER_MOUNT_PATH_ON_CONTAINER, "--keymanager-token-file=" + constants.KEYMANAGER_MOUNT_PATH_ON_CONTAINER,
...@@ -332,9 +332,9 @@ def get_beacon_config( ...@@ -332,9 +332,9 @@ def get_beacon_config(
} }
beacon_validator_used_ports = {} beacon_validator_used_ports = {}
beacon_validator_used_ports.update(BEACON_USED_PORTS) beacon_validator_used_ports.update(BEACON_USED_PORTS)
if node_keystore_files != None and not use_separate_validator_client: if node_keystore_files != None and not use_separate_vc:
validator_http_port_id_spec = shared_utils.new_port_spec( validator_http_port_id_spec = shared_utils.new_port_spec(
validator_client_shared.VALIDATOR_HTTP_PORT_NUM, vc_shared.VALIDATOR_HTTP_PORT_NUM,
shared_utils.TCP_PROTOCOL, shared_utils.TCP_PROTOCOL,
shared_utils.HTTP_APPLICATION_PROTOCOL, shared_utils.HTTP_APPLICATION_PROTOCOL,
) )
...@@ -357,20 +357,21 @@ def get_beacon_config( ...@@ -357,20 +357,21 @@ def get_beacon_config(
image=image, image=image,
ports=beacon_validator_used_ports, ports=beacon_validator_used_ports,
cmd=cmd, cmd=cmd,
env_vars=extra_env_vars,
files=files, files=files,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.nimbus, constants.CL_TYPE.nimbus,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
image, image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
user=User(uid=0, gid=0), user=User(uid=0, gid=0),
......
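The Nimbus hunk shows the other half of the `use_separate_vc` rename: the beacon service only exposes the validator keymanager HTTP port when keystores are present *and* the validator runs in-process (no separate VC). A Python sketch of that port selection, with illustrative port numbers (the real one is `vc_shared.VALIDATOR_HTTP_PORT_NUM`):

```python
# Sketch of the use_separate_vc port logic above: copy the beacon ports,
# then add the keymanager port only for the combined beacon+validator
# case. Port numbers are illustrative placeholders.
VALIDATOR_HTTP_PORT_NUM = 5042  # illustrative stand-in for vc/shared.star
BEACON_USED_PORTS = {"http": 4000, "metrics": 5054}  # illustrative

def build_ports(node_keystore_files, use_separate_vc):
    ports = {}
    ports.update(BEACON_USED_PORTS)
    # mirrors the Starlark `!= None` check in the hunk above
    if node_keystore_files != None and not use_separate_vc:
        ports["validator-http"] = VALIDATOR_HTTP_PORT_NUM
    return ports
```

With `use_separate_vc=True` the keymanager port instead belongs to the standalone VC service, so the beacon service keeps only its own ports.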
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -50,11 +50,11 @@ BEACON_NODE_USED_PORTS = { ...@@ -50,11 +50,11 @@ BEACON_NODE_USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error", constants.GLOBAL_LOG_LEVEL.error: "error",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn", constants.GLOBAL_LOG_LEVEL.warn: "warn",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info", constants.GLOBAL_LOG_LEVEL.info: "info",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug", constants.GLOBAL_LOG_LEVEL.debug: "debug",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace", constants.GLOBAL_LOG_LEVEL.trace: "trace",
} }
...@@ -66,25 +66,26 @@ def launch( ...@@ -66,25 +66,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client=True, use_separate_vc=True,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
log_level = input_parser.get_client_log_level_or_default( log_level = input_parser.get_client_log_level_or_default(
...@@ -97,16 +98,16 @@ def launch( ...@@ -97,16 +98,16 @@ def launch(
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["prysm_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["prysm_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["prysm_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["prysm_max_mem"]
) )
...@@ -124,16 +125,17 @@ def launch( ...@@ -124,16 +125,17 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -173,7 +175,7 @@ def launch( ...@@ -173,7 +175,7 @@ def launch(
) )
nodes_metrics_info = [beacon_node_metrics_info] nodes_metrics_info = [beacon_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"prysm", "prysm",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -198,15 +200,16 @@ def get_beacon_config( ...@@ -198,15 +200,16 @@ def get_beacon_config(
beacon_image, beacon_image,
service_name, service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
...@@ -221,8 +224,8 @@ def get_beacon_config( ...@@ -221,8 +224,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
cmd = [ cmd = [
...@@ -326,20 +329,21 @@ def get_beacon_config( ...@@ -326,20 +329,21 @@ def get_beacon_config(
image=beacon_image, image=beacon_image,
ports=BEACON_NODE_USED_PORTS, ports=BEACON_NODE_USED_PORTS,
cmd=cmd, cmd=cmd,
env_vars=extra_env_vars,
files=files, files=files,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.prysm, constants.CL_TYPE.prysm,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
beacon_image, beacon_image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
tolerations=tolerations, tolerations=tolerations,
......
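Every launcher above repeats the same engine-endpoint selection noted in the Lodestar comment: if the snooper is enabled, the CL talks to the snooper engine context; otherwise it talks to the EL context's engine RPC directly. A Python sketch of that branch, using illustrative stand-ins for the context records:

```python
# Sketch of the engine-endpoint selection repeated in every hunk above:
# route through the snooper when enabled, else hit the EL engine RPC.
# The namedtuples are illustrative stand-ins for the package's contexts.
from collections import namedtuple

ElContext = namedtuple("ElContext", "ip_addr engine_rpc_port_num")
SnooperContext = namedtuple("SnooperContext", "ip_addr engine_rpc_port_num")

def engine_endpoint(snooper_enabled, snooper_engine_context, el_context):
    if snooper_enabled:
        return "http://{0}:{1}".format(
            snooper_engine_context.ip_addr,
            snooper_engine_context.engine_rpc_port_num,
        )
    return "http://{0}:{1}".format(
        el_context.ip_addr,
        el_context.engine_rpc_port_num,
    )
```

The rename only touches the `el_client_context -> el_context` attribute accesses; the selection logic itself is unchanged.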
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
validator_client_shared = import_module("../../validator_client/shared.star") vc_shared = import_module("../../vc/shared.star")
# ---------------------------------- Beacon client ------------------------------------- # ---------------------------------- Beacon client -------------------------------------
TEKU_BINARY_FILEPATH_IN_IMAGE = "/opt/teku/bin/teku" TEKU_BINARY_FILEPATH_IN_IMAGE = "/opt/teku/bin/teku"
...@@ -54,11 +54,11 @@ BEACON_USED_PORTS = { ...@@ -54,11 +54,11 @@ BEACON_USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
...@@ -70,25 +70,26 @@ def launch( ...@@ -70,25 +70,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_context, bootnode_context,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client, use_separate_vc,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
log_level = input_parser.get_client_log_level_or_default( log_level = input_parser.get_client_log_level_or_default(
...@@ -99,20 +100,20 @@ def launch( ...@@ -99,20 +100,20 @@ def launch(
cl_tolerations, participant_tolerations, global_tolerations cl_tolerations, participant_tolerations, global_tolerations
) )
extra_params = [param for param in extra_beacon_params] extra_params = [param for param in extra_params]
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["teku_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["teku_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["teku_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["teku_max_mem"]
) )
...@@ -132,18 +133,19 @@ def launch( ...@@ -132,18 +133,19 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_context, bootnode_context,
el_client_context, el_context,
log_level, log_level,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
use_separate_validator_client, extra_labels,
use_separate_vc,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -183,7 +185,7 @@ def launch( ...@@ -183,7 +185,7 @@ def launch(
) )
nodes_metrics_info = [beacon_node_metrics_info] nodes_metrics_info = [beacon_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"teku", "teku",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -210,18 +212,19 @@ def get_beacon_config( ...@@ -210,18 +212,19 @@ def get_beacon_config(
image, image,
service_name, service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
use_separate_validator_client, use_separate_vc,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -246,8 +249,8 @@ def get_beacon_config( ...@@ -246,8 +249,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
cmd = [ cmd = [
"--logging=" + log_level, "--logging=" + log_level,
...@@ -293,14 +296,12 @@ def get_beacon_config( ...@@ -293,14 +296,12 @@ def get_beacon_config(
"--validators-proposer-default-fee-recipient=" "--validators-proposer-default-fee-recipient="
+ constants.VALIDATING_REWARDS_ACCOUNT, + constants.VALIDATING_REWARDS_ACCOUNT,
"--validators-graffiti=" "--validators-graffiti="
+ constants.CL_CLIENT_TYPE.teku + constants.CL_TYPE.teku
+ "-" + "-"
+ el_client_context.client_name, + el_context.client_name,
"--validator-api-enabled=true", "--validator-api-enabled=true",
"--validator-api-host-allowlist=*", "--validator-api-host-allowlist=*",
"--validator-api-port={0}".format( "--validator-api-port={0}".format(vc_shared.VALIDATOR_HTTP_PORT_NUM),
validator_client_shared.VALIDATOR_HTTP_PORT_NUM
),
"--validator-api-interface=0.0.0.0", "--validator-api-interface=0.0.0.0",
"--validator-api-keystore-file=" "--validator-api-keystore-file="
+ constants.KEYMANAGER_P12_MOUNT_PATH_ON_CONTAINER, + constants.KEYMANAGER_P12_MOUNT_PATH_ON_CONTAINER,
...@@ -382,9 +383,9 @@ def get_beacon_config( ...@@ -382,9 +383,9 @@ def get_beacon_config(
} }
beacon_validator_used_ports = {} beacon_validator_used_ports = {}
beacon_validator_used_ports.update(BEACON_USED_PORTS) beacon_validator_used_ports.update(BEACON_USED_PORTS)
if node_keystore_files != None and not use_separate_validator_client: if node_keystore_files != None and not use_separate_vc:
validator_http_port_id_spec = shared_utils.new_port_spec( validator_http_port_id_spec = shared_utils.new_port_spec(
validator_client_shared.VALIDATOR_HTTP_PORT_NUM, vc_shared.VALIDATOR_HTTP_PORT_NUM,
shared_utils.TCP_PROTOCOL, shared_utils.TCP_PROTOCOL,
shared_utils.HTTP_APPLICATION_PROTOCOL, shared_utils.HTTP_APPLICATION_PROTOCOL,
) )
...@@ -407,21 +408,21 @@ def get_beacon_config( ...@@ -407,21 +408,21 @@ def get_beacon_config(
image=image, image=image,
ports=beacon_validator_used_ports, ports=beacon_validator_used_ports,
cmd=cmd, cmd=cmd,
# entrypoint=ENTRYPOINT_ARGS, env_vars=extra_env_vars,
files=files, files=files,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.teku, constants.CL_TYPE.teku,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
image, image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
user=User(uid=0, gid=0), user=User(uid=0, gid=0),
......
...@@ -30,14 +30,14 @@ USED_PORTS = { ...@@ -30,14 +30,14 @@ USED_PORTS = {
def launch_dora( def launch_dora(
plan, plan,
config_template, config_template,
cl_client_contexts, cl_contexts,
el_cl_data_files_artifact_uuid, el_cl_data_files_artifact_uuid,
electra_fork_epoch, electra_fork_epoch,
network, network,
global_node_selectors, global_node_selectors,
): ):
all_cl_client_info = [] all_cl_client_info = []
for index, client in enumerate(cl_client_contexts): for index, client in enumerate(cl_contexts):
all_cl_client_info.append( all_cl_client_info.append(
new_cl_client_info( new_cl_client_info(
client.ip_addr, client.http_port_num, client.beacon_service_name client.ip_addr, client.http_port_num, client.beacon_service_name
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -51,11 +51,11 @@ USED_PORTS = { ...@@ -51,11 +51,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
...@@ -138,7 +138,7 @@ def launch( ...@@ -138,7 +138,7 @@ def launch(
service_name, METRICS_PATH, metrics_url service_name, METRICS_PATH, metrics_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"besu", "besu",
"", # besu has no ENR "", # besu has no ENR
enode, enode,
...@@ -262,7 +262,7 @@ def get_config( ...@@ -262,7 +262,7 @@ def get_config(
min_memory=el_min_mem, min_memory=el_min_mem,
max_memory=el_max_mem, max_memory=el_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.besu, constants.EL_TYPE.besu,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
def new_el_client_context( def new_el_context(
client_name, client_name,
enr, enr,
enode, enode,
......
constants = import_module("../package_io/constants.star")
input_parser = import_module("../package_io/input_parser.star")
shared_utils = import_module("../shared_utils/shared_utils.star")
geth = import_module("./geth/geth_launcher.star")
besu = import_module("./besu/besu_launcher.star")
erigon = import_module("./erigon/erigon_launcher.star")
nethermind = import_module("./nethermind/nethermind_launcher.star")
reth = import_module("./reth/reth_launcher.star")
ethereumjs = import_module("./ethereumjs/ethereumjs_launcher.star")
nimbus_eth1 = import_module("./nimbus-eth1/nimbus_launcher.star")
def launch(
plan,
network_params,
el_cl_data,
jwt_file,
participants,
global_log_level,
global_node_selectors,
global_tolerations,
persistent,
network_id,
num_participants,
):
el_launchers = {
constants.EL_TYPE.geth: {
"launcher": geth.new_geth_launcher(
el_cl_data,
jwt_file,
network_params.network,
network_id,
network_params.capella_fork_epoch,
el_cl_data.cancun_time,
el_cl_data.prague_time,
network_params.electra_fork_epoch,
),
"launch_method": geth.launch,
},
constants.EL_TYPE.gethbuilder: {
"launcher": geth.new_geth_launcher(
el_cl_data,
jwt_file,
network_params.network,
network_id,
network_params.capella_fork_epoch,
el_cl_data.cancun_time,
el_cl_data.prague_time,
network_params.electra_fork_epoch,
),
"launch_method": geth.launch,
},
constants.EL_TYPE.besu: {
"launcher": besu.new_besu_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": besu.launch,
},
constants.EL_TYPE.erigon: {
"launcher": erigon.new_erigon_launcher(
el_cl_data,
jwt_file,
network_params.network,
network_id,
el_cl_data.cancun_time,
),
"launch_method": erigon.launch,
},
constants.EL_TYPE.nethermind: {
"launcher": nethermind.new_nethermind_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": nethermind.launch,
},
constants.EL_TYPE.reth: {
"launcher": reth.new_reth_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": reth.launch,
},
constants.EL_TYPE.ethereumjs: {
"launcher": ethereumjs.new_ethereumjs_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": ethereumjs.launch,
},
constants.EL_TYPE.nimbus: {
"launcher": nimbus_eth1.new_nimbus_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": nimbus_eth1.launch,
},
}
all_el_contexts = []
for index, participant in enumerate(participants):
cl_type = participant.cl_type
el_type = participant.el_type
node_selectors = input_parser.get_client_node_selectors(
participant.node_selectors,
global_node_selectors,
)
tolerations = input_parser.get_client_tolerations(
participant.el_tolerations, participant.tolerations, global_tolerations
)
if el_type not in el_launchers:
fail(
"Unsupported launcher '{0}', need one of '{1}'".format(
el_type, ",".join([el.name for el in el_launchers.keys()])
)
)
el_launcher, launch_method = (
el_launchers[el_type]["launcher"],
el_launchers[el_type]["launch_method"],
)
# Zero-pad the index using the calculated zfill value
index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
el_service_name = "el-{0}-{1}-{2}".format(index_str, el_type, cl_type)
el_context = launch_method(
plan,
el_launcher,
el_service_name,
participant.el_image,
participant.el_log_level,
global_log_level,
all_el_contexts,
participant.el_min_cpu,
participant.el_max_cpu,
participant.el_min_mem,
participant.el_max_mem,
participant.el_extra_params,
participant.el_extra_env_vars,
participant.el_extra_labels,
persistent,
participant.el_volume_size,
tolerations,
node_selectors,
)
# Add participant el additional prometheus metrics
for metrics_info in el_context.el_metrics_info:
if metrics_info != None:
metrics_info["config"] = participant.prometheus_config
all_el_contexts.append(el_context)
plan.print("Successfully added {0} EL participants".format(num_participants))
return all_el_contexts
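The launcher loop above derives each EL service name by zero-padding the participant index to the width of the total participant count (`shared_utils.zfill_custom`). A minimal Python sketch of that naming scheme — `zfill_custom` is reimplemented here for illustration, based only on how it is called in the listing:

```python
def zfill_custom(value, width):
    # Left-pad the numeric index with zeros to a fixed width,
    # mirroring how shared_utils.zfill_custom is used above.
    return str(value).zfill(width)

def el_service_name(index, el_type, cl_type, num_participants):
    # Width grows with the participant count: up to 9 nodes -> "1",
    # 10-99 nodes -> "01", and so on, keeping names sortable.
    index_str = zfill_custom(index + 1, len(str(num_participants)))
    return "el-{0}-{1}-{2}".format(index_str, el_type, cl_type)

print(el_service_name(0, "geth", "lighthouse", 12))  # el-01-geth-lighthouse
print(el_service_name(9, "reth", "teku", 12))        # el-10-reth-teku
```

The padding keeps service names lexicographically ordered even when a network grows past nine participants.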
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -51,11 +51,11 @@ USED_PORTS = { ...@@ -51,11 +51,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "1", constants.GLOBAL_LOG_LEVEL.error: "1",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "2", constants.GLOBAL_LOG_LEVEL.warn: "2",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "3", constants.GLOBAL_LOG_LEVEL.info: "3",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "4", constants.GLOBAL_LOG_LEVEL.debug: "4",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "5", constants.GLOBAL_LOG_LEVEL.trace: "5",
} }
...@@ -142,7 +142,7 @@ def launch( ...@@ -142,7 +142,7 @@ def launch(
service_name, METRICS_PATH, metrics_url service_name, METRICS_PATH, metrics_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"erigon", "erigon",
enr, enr,
enode, enode,
...@@ -284,7 +284,7 @@ def get_config( ...@@ -284,7 +284,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.erigon, constants.EL_TYPE.erigon,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../..//package_io/input_parser.star") input_parser = import_module("../..//package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
...@@ -53,11 +53,11 @@ USED_PORTS = { ...@@ -53,11 +53,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = [] ENTRYPOINT_ARGS = []
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error", constants.GLOBAL_LOG_LEVEL.error: "error",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn", constants.GLOBAL_LOG_LEVEL.warn: "warn",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info", constants.GLOBAL_LOG_LEVEL.info: "info",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug", constants.GLOBAL_LOG_LEVEL.debug: "debug",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace", constants.GLOBAL_LOG_LEVEL.trace: "trace",
} }
...@@ -139,7 +139,7 @@ def launch( ...@@ -139,7 +139,7 @@ def launch(
# metrics_url = "http://{0}:{1}".format(service.ip_address, METRICS_PORT_NUM) # metrics_url = "http://{0}:{1}".format(service.ip_address, METRICS_PORT_NUM)
ethjs_metrics_info = None ethjs_metrics_info = None
return el_client_context.new_el_client_context( return el_context.new_el_context(
"ethereumjs", "ethereumjs",
"", # ethereumjs has no enr "", # ethereumjs has no enr
enode, enode,
...@@ -251,7 +251,7 @@ def get_config( ...@@ -251,7 +251,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.ethereumjs, constants.EL_TYPE.ethereumjs,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
genesis_constants = import_module( genesis_constants = import_module(
"../../prelaunch_data_generator/genesis_constants/genesis_constants.star" "../../prelaunch_data_generator/genesis_constants/genesis_constants.star"
...@@ -58,11 +58,11 @@ USED_PORTS = { ...@@ -58,11 +58,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "1", constants.GLOBAL_LOG_LEVEL.error: "1",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "2", constants.GLOBAL_LOG_LEVEL.warn: "2",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "3", constants.GLOBAL_LOG_LEVEL.info: "3",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "4", constants.GLOBAL_LOG_LEVEL.debug: "4",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "5", constants.GLOBAL_LOG_LEVEL.trace: "5",
} }
BUILDER_IMAGE_STR = "builder" BUILDER_IMAGE_STR = "builder"
...@@ -156,7 +156,7 @@ def launch( ...@@ -156,7 +156,7 @@ def launch(
service_name, METRICS_PATH, metrics_url service_name, METRICS_PATH, metrics_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"geth", "geth",
enr, enr,
enode, enode,
...@@ -370,7 +370,7 @@ def get_config( ...@@ -370,7 +370,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.geth, constants.EL_TYPE.geth,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
...@@ -49,11 +49,11 @@ USED_PORTS = { ...@@ -49,11 +49,11 @@ USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
...@@ -136,7 +136,7 @@ def launch( ...@@ -136,7 +136,7 @@ def launch(
service_name, METRICS_PATH, metrics_url service_name, METRICS_PATH, metrics_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"nethermind", "nethermind",
"", # nethermind has no ENR in the eth2-merge-kurtosis-module either "", # nethermind has no ENR in the eth2-merge-kurtosis-module either
# Nethermind node info endpoint doesn't return ENR field https://docs.nethermind.io/nethermind/ethereum-client/json-rpc/admin # Nethermind node info endpoint doesn't return ENR field https://docs.nethermind.io/nethermind/ethereum-client/json-rpc/admin
...@@ -259,7 +259,7 @@ def get_config( ...@@ -259,7 +259,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.nethermind, constants.EL_TYPE.nethermind,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -53,11 +53,11 @@ USED_PORTS = { ...@@ -53,11 +53,11 @@ USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
...@@ -141,7 +141,7 @@ def launch( ...@@ -141,7 +141,7 @@ def launch(
service_name, METRICS_PATH, metric_url service_name, METRICS_PATH, metric_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"nimbus", "nimbus",
"", # nimbus has no enr "", # nimbus has no enr
enode, enode,
...@@ -252,7 +252,7 @@ def get_config( ...@@ -252,7 +252,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.nimbus, constants.EL_TYPE.nimbus,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -51,11 +51,11 @@ USED_PORTS = { ...@@ -51,11 +51,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "v", constants.GLOBAL_LOG_LEVEL.error: "v",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "vv", constants.GLOBAL_LOG_LEVEL.warn: "vv",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "vvv", constants.GLOBAL_LOG_LEVEL.info: "vvv",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "vvvv", constants.GLOBAL_LOG_LEVEL.debug: "vvvv",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "vvvvv", constants.GLOBAL_LOG_LEVEL.trace: "vvvvv",
} }
...@@ -139,7 +139,7 @@ def launch( ...@@ -139,7 +139,7 @@ def launch(
service_name, METRICS_PATH, metric_url service_name, METRICS_PATH, metric_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"reth", "reth",
"", # reth has no enr "", # reth has no enr
enode, enode,
...@@ -265,7 +265,7 @@ def get_config( ...@@ -265,7 +265,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.reth, constants.EL_TYPE.reth,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
...@@ -29,11 +29,11 @@ MAX_MEMORY = 256 ...@@ -29,11 +29,11 @@ MAX_MEMORY = 256
def launch_el_forkmon( def launch_el_forkmon(
plan, plan,
config_template, config_template,
el_client_contexts, el_contexts,
global_node_selectors, global_node_selectors,
): ):
all_el_client_info = [] all_el_client_info = []
for client in el_client_contexts: for client in el_contexts:
client_info = new_el_client_info( client_info = new_el_client_info(
client.ip_addr, client.rpc_port_num, client.service_name client.ip_addr, client.rpc_port_num, client.service_name
) )
......
...@@ -20,8 +20,8 @@ def launch( ...@@ -20,8 +20,8 @@ def launch(
plan, plan,
pair_name, pair_name,
ethereum_metrics_exporter_service_name, ethereum_metrics_exporter_service_name,
el_client_context, el_context,
cl_client_context, cl_context,
node_selectors, node_selectors,
): ):
exporter_service = plan.add_service( exporter_service = plan.add_service(
...@@ -40,13 +40,13 @@ def launch( ...@@ -40,13 +40,13 @@ def launch(
str(METRICS_PORT_NUMBER), str(METRICS_PORT_NUMBER),
"--consensus-url", "--consensus-url",
"http://{}:{}".format( "http://{}:{}".format(
cl_client_context.ip_addr, cl_context.ip_addr,
cl_client_context.http_port_num, cl_context.http_port_num,
), ),
"--execution-url", "--execution-url",
"http://{}:{}".format( "http://{}:{}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.rpc_port_num, el_context.rpc_port_num,
), ),
], ],
min_cpu=MIN_CPU, min_cpu=MIN_CPU,
...@@ -61,6 +61,6 @@ def launch( ...@@ -61,6 +61,6 @@ def launch(
pair_name, pair_name,
exporter_service.ip_address, exporter_service.ip_address,
METRICS_PORT_NUMBER, METRICS_PORT_NUMBER,
cl_client_context.client_name, cl_context.client_name,
el_client_context.client_name, el_context.client_name,
) )
...@@ -94,8 +94,8 @@ FRONTEND_MAX_MEMORY = 2048 ...@@ -94,8 +94,8 @@ FRONTEND_MAX_MEMORY = 2048
def launch_full_beacon( def launch_full_beacon(
plan, plan,
config_template, config_template,
cl_client_contexts, cl_contexts,
el_client_contexts, el_contexts,
persistent, persistent,
global_node_selectors, global_node_selectors,
): ):
...@@ -143,12 +143,12 @@ def launch_full_beacon( ...@@ -143,12 +143,12 @@ def launch_full_beacon(
) )
el_uri = "http://{0}:{1}".format( el_uri = "http://{0}:{1}".format(
el_client_contexts[0].ip_addr, el_client_contexts[0].rpc_port_num el_contexts[0].ip_addr, el_contexts[0].rpc_port_num
) )
redis_url = "{}:{}".format(redis_output.hostname, redis_output.port_number) redis_url = "{}:{}".format(redis_output.hostname, redis_output.port_number)
template_data = new_config_template_data( template_data = new_config_template_data(
cl_client_contexts[0], cl_contexts[0],
el_uri, el_uri,
little_bigtable.ip_address, little_bigtable.ip_address,
LITTLE_BIGTABLE_PORT_NUMBER, LITTLE_BIGTABLE_PORT_NUMBER,
......
...@@ -13,16 +13,16 @@ MAX_MEMORY = 300 ...@@ -13,16 +13,16 @@ MAX_MEMORY = 300
def launch_goomy_blob( def launch_goomy_blob(
plan, plan,
prefunded_addresses, prefunded_addresses,
el_client_contexts, el_contexts,
cl_client_context, cl_context,
seconds_per_slot, seconds_per_slot,
goomy_blob_params, goomy_blob_params,
global_node_selectors, global_node_selectors,
): ):
config = get_config( config = get_config(
prefunded_addresses, prefunded_addresses,
el_client_contexts, el_contexts,
cl_client_context, cl_context,
seconds_per_slot, seconds_per_slot,
goomy_blob_params.goomy_blob_args, goomy_blob_params.goomy_blob_args,
global_node_selectors, global_node_selectors,
...@@ -32,14 +32,14 @@ def launch_goomy_blob( ...@@ -32,14 +32,14 @@ def launch_goomy_blob(
def get_config( def get_config(
prefunded_addresses, prefunded_addresses,
el_client_contexts, el_contexts,
cl_client_context, cl_context,
seconds_per_slot, seconds_per_slot,
goomy_blob_args, goomy_blob_args,
node_selectors, node_selectors,
): ):
goomy_cli_args = [] goomy_cli_args = []
for index, client in enumerate(el_client_contexts): for index, client in enumerate(el_contexts):
goomy_cli_args.append( goomy_cli_args.append(
"-h http://{0}:{1}".format( "-h http://{0}:{1}".format(
client.ip_addr, client.ip_addr,
...@@ -61,11 +61,11 @@ def get_config( ...@@ -61,11 +61,11 @@ def get_config(
"apt-get update", "apt-get update",
"apt-get install -y curl jq", "apt-get install -y curl jq",
'current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version")'.format( 'current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version")'.format(
cl_client_context.ip_addr, cl_client_context.http_port_num cl_context.ip_addr, cl_context.http_port_num
), ),
'while [ $current_epoch != "deneb" ]; do echo "waiting for deneb, current epoch is $current_epoch"; current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version"); sleep {2}; done'.format( 'while [ $current_epoch != "deneb" ]; do echo "waiting for deneb, current epoch is $current_epoch"; current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version"); sleep {2}; done'.format(
cl_client_context.ip_addr, cl_context.ip_addr,
cl_client_context.http_port_num, cl_context.http_port_num,
seconds_per_slot, seconds_per_slot,
), ),
'echo "sleep is over, starting to send blob transactions"', 'echo "sleep is over, starting to send blob transactions"',
......
...@@ -15,7 +15,7 @@ def launch_mock_mev( ...@@ -15,7 +15,7 @@ def launch_mock_mev(
el_uri, el_uri,
beacon_uri, beacon_uri,
jwt_secret, jwt_secret,
global_client_log_level, global_log_level,
global_node_selectors, global_node_selectors,
): ):
mock_builder = plan.add_service( mock_builder = plan.add_service(
...@@ -32,7 +32,7 @@ def launch_mock_mev( ...@@ -32,7 +32,7 @@ def launch_mock_mev(
"--el={0}".format(el_uri), "--el={0}".format(el_uri),
"--cl={0}".format(beacon_uri), "--cl={0}".format(beacon_uri),
"--bid-multiplier=5", # TODO: This could be customizable "--bid-multiplier=5", # TODO: This could be customizable
"--log-level={0}".format(global_client_log_level), "--log-level={0}".format(global_log_level),
], ],
min_cpu=MIN_CPU, min_cpu=MIN_CPU,
max_cpu=MAX_CPU, max_cpu=MAX_CPU,
......
shared_utils = import_module("../shared_utils/shared_utils.star")
el_cl_genesis_data = import_module(
"../prelaunch_data_generator/el_cl_genesis/el_cl_genesis_data.star"
)
def launch(plan, network, cancun_time, prague_time):
# We are running a devnet
url = shared_utils.calculate_devnet_url(network)
el_cl_genesis_uuid = plan.upload_files(
src=url,
name="el_cl_genesis",
)
el_cl_genesis_data_uuid = plan.run_sh(
run="mkdir -p /network-configs/ && mv /opt/* /network-configs/",
store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
files={"/opt": el_cl_genesis_uuid},
)
genesis_validators_root = read_file(url + "/genesis_validators_root.txt")
el_cl_data = el_cl_genesis_data.new_el_cl_genesis_data(
el_cl_genesis_data_uuid.files_artifacts[0],
genesis_validators_root,
cancun_time,
prague_time,
)
final_genesis_timestamp = shared_utils.read_genesis_timestamp_from_config(
plan, el_cl_genesis_data_uuid.files_artifacts[0]
)
network_id = shared_utils.read_genesis_network_id_from_config(
plan, el_cl_genesis_data_uuid.files_artifacts[0]
)
validator_data = None
return el_cl_data, final_genesis_timestamp, network_id, validator_data
shared_utils = import_module("../shared_utils/shared_utils.star")
el_cl_genesis_data = import_module(
"../prelaunch_data_generator/el_cl_genesis/el_cl_genesis_data.star"
)
def launch(plan, cancun_time, prague_time):
el_cl_genesis_data_uuid = plan.run_sh(
run="mkdir -p /network-configs/ && \
curl -o latest.tar.gz https://ephemery.dev/latest.tar.gz && \
tar xvzf latest.tar.gz -C /network-configs && \
cat /network-configs/genesis_validators_root.txt",
image="badouralix/curl-jq",
store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
)
genesis_validators_root = el_cl_genesis_data_uuid.output
el_cl_data = el_cl_genesis_data.new_el_cl_genesis_data(
el_cl_genesis_data_uuid.files_artifacts[0],
genesis_validators_root,
cancun_time,
prague_time,
)
final_genesis_timestamp = shared_utils.read_genesis_timestamp_from_config(
plan, el_cl_genesis_data_uuid.files_artifacts[0]
)
network_id = shared_utils.read_genesis_network_id_from_config(
plan, el_cl_genesis_data_uuid.files_artifacts[0]
)
validator_data = None
return el_cl_data, final_genesis_timestamp, network_id, validator_data
shared_utils = import_module("../shared_utils/shared_utils.star")
validator_keystores = import_module(
    "../prelaunch_data_generator/validator_keystores/validator_keystore_generator.star"
)
constants = import_module("../package_io/constants.star")

# The time that the CL genesis generation step takes to complete, based off what we've seen
# This is in seconds
CL_GENESIS_DATA_GENERATION_TIME = 5

# Each CL node takes about this time to start up and start processing blocks, so when we create the CL
# genesis data we need to set the genesis timestamp in the future so that nodes don't miss important slots
# (e.g. Altair fork)
# TODO(old) Make this client-specific (currently this is Nimbus)
# This is in seconds
CL_NODE_STARTUP_TIME = 5


def launch(plan, network_params, participants, parallel_keystore_generation):
    num_participants = len(participants)
    plan.print("Generating cl validator key stores")
    validator_data = None
    if not parallel_keystore_generation:
        validator_data = validator_keystores.generate_validator_keystores(
            plan, network_params.preregistered_validator_keys_mnemonic, participants
        )
    else:
        validator_data = validator_keystores.generate_valdiator_keystores_in_parallel(
            plan,
            network_params.preregistered_validator_keys_mnemonic,
            participants,
        )

    plan.print(json.indent(json.encode(validator_data)))

    # We need to send the same genesis time to both the EL and the CL to ensure that timestamp based forking works as expected
    final_genesis_timestamp = shared_utils.get_final_genesis_timestamp(
        plan,
        network_params.genesis_delay
        + CL_GENESIS_DATA_GENERATION_TIME
        + num_participants * CL_NODE_STARTUP_TIME,
    )

    # if preregistered validator count is 0 (default) then calculate the total number of validators from the participants
    total_number_of_validator_keys = network_params.preregistered_validator_count
    if network_params.preregistered_validator_count == 0:
        for participant in participants:
            total_number_of_validator_keys += participant.validator_count

    plan.print("Generating EL CL data")
    # we are running bellatrix genesis (deprecated) - will be removed in the future
    if (
        network_params.capella_fork_epoch > 0
        and network_params.electra_fork_epoch == None
    ):
        ethereum_genesis_generator_image = (
            constants.ETHEREUM_GENESIS_GENERATOR.bellatrix_genesis
        )
    # we are running capella genesis - default behavior
    elif (
        network_params.capella_fork_epoch == 0
        and network_params.electra_fork_epoch == None
        and network_params.deneb_fork_epoch > 0
    ):
        ethereum_genesis_generator_image = (
            constants.ETHEREUM_GENESIS_GENERATOR.capella_genesis
        )
    # we are running deneb genesis - experimental, soon to become default
    elif network_params.deneb_fork_epoch == 0:
        ethereum_genesis_generator_image = (
            constants.ETHEREUM_GENESIS_GENERATOR.deneb_genesis
        )
    # we are running electra - experimental
    elif network_params.electra_fork_epoch != None:
        if network_params.electra_fork_epoch == 0:
            ethereum_genesis_generator_image = (
                constants.ETHEREUM_GENESIS_GENERATOR.verkle_genesis
            )
        else:
            ethereum_genesis_generator_image = (
                constants.ETHEREUM_GENESIS_GENERATOR.verkle_support_genesis
            )
    else:
        fail(
            "Unsupported fork epoch configuration, need to define either capella_fork_epoch, deneb_fork_epoch or electra_fork_epoch"
        )
    return (
        total_number_of_validator_keys,
        ethereum_genesis_generator_image,
        final_genesis_timestamp,
        validator_data,
    )
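The genesis-timestamp padding and the fork-epoch-to-generator-image dispatch above can be sketched in plain Python. This assumes `get_final_genesis_timestamp` adds the given padding to the current time; the image names are stand-in strings, not the real `constants.ETHEREUM_GENESIS_GENERATOR` values:

```python
# Constants copied from the module above.
CL_GENESIS_DATA_GENERATION_TIME = 5  # seconds
CL_NODE_STARTUP_TIME = 5  # seconds per CL node

def final_genesis_timestamp(now, genesis_delay, num_participants):
    # Genesis is pushed far enough into the future that every CL node is up
    # before the first slots, so none miss important forks (e.g. Altair).
    return (
        now
        + genesis_delay
        + CL_GENESIS_DATA_GENERATION_TIME
        + num_participants * CL_NODE_STARTUP_TIME
    )

def pick_generator_image(capella_fork_epoch, deneb_fork_epoch, electra_fork_epoch):
    # Same branch order as launch() above.
    if capella_fork_epoch > 0 and electra_fork_epoch is None:
        return "bellatrix_genesis"  # deprecated
    elif (
        capella_fork_epoch == 0
        and electra_fork_epoch is None
        and deneb_fork_epoch > 0
    ):
        return "capella_genesis"  # default behavior
    elif deneb_fork_epoch == 0:
        return "deneb_genesis"  # experimental
    elif electra_fork_epoch is not None:
        if electra_fork_epoch == 0:
            return "verkle_genesis"
        return "verkle_support_genesis"
    raise ValueError("unsupported fork epoch configuration")
```

Note that the `deneb_fork_epoch == 0` branch is checked before the electra branch, so a network with both `deneb_fork_epoch == 0` and a non-`None` electra epoch gets the deneb image.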
shared_utils = import_module("../shared_utils/shared_utils.star")
el_cl_genesis_data = import_module(
    "../prelaunch_data_generator/el_cl_genesis/el_cl_genesis_data.star"
)
constants = import_module("../package_io/constants.star")


def launch(plan, network, cancun_time, prague_time):
    # We are running a public network
    dummy_genesis_data = plan.run_sh(
        run="mkdir /network-configs",
        store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
    )
    el_cl_data = el_cl_genesis_data.new_el_cl_genesis_data(
        dummy_genesis_data.files_artifacts[0],
        constants.GENESIS_VALIDATORS_ROOT[network],
        cancun_time,
        prague_time,
    )
    final_genesis_timestamp = constants.GENESIS_TIME[network]
    network_id = constants.NETWORK_ID[network]
    validator_data = None
    return el_cl_data, final_genesis_timestamp, network_id, validator_data
shared_utils = import_module("../shared_utils/shared_utils.star")
constants = import_module("../package_io/constants.star")
input_parser = import_module("../package_io/input_parser.star")


def shadowfork_prep(
    plan,
    network_params,
    shadowfork_block,
    participants,
    global_tolerations,
    global_node_selectors,
):
    base_network = shared_utils.get_network_name(network_params.network)
    # overload the network name to remove the shadowfork suffix
    if constants.NETWORK_NAME.ephemery in base_network:
        chain_id = plan.run_sh(
            run="curl -s https://ephemery.dev/latest/config.yaml | yq .DEPOSIT_CHAIN_ID | tr -d '\n'",
            image="linuxserver/yq",
        )
        network_id = chain_id.output
    else:
        network_id = constants.NETWORK_ID[
            base_network
        ]  # overload the network id to match the network name
    latest_block = plan.run_sh(  # fetch the latest block
        run="mkdir -p /shadowfork && \
        curl -o /shadowfork/latest_block.json "
        + network_params.network_sync_base_url
        + base_network
        + "/geth/"
        + shadowfork_block
        + "/_snapshot_eth_getBlockByNumber.json",
        image="badouralix/curl-jq",
        store=[StoreSpec(src="/shadowfork", name="latest_blocks")],
    )

    for index, participant in enumerate(participants):
        tolerations = input_parser.get_client_tolerations(
            participant.el_tolerations,
            participant.tolerations,
            global_tolerations,
        )
        node_selectors = input_parser.get_client_node_selectors(
            participant.node_selectors,
            global_node_selectors,
        )
        cl_type = participant.cl_type
        el_type = participant.el_type

        # Zero-pad the index using the calculated zfill value
        index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
        el_service_name = "el-{0}-{1}-{2}".format(index_str, el_type, cl_type)
        shadowfork_data = plan.add_service(
            name="shadowfork-{0}".format(el_service_name),
            config=ServiceConfig(
                image="alpine:3.19.1",
                cmd=[
                    "apk add --no-cache curl tar zstd && curl -s -L "
                    + network_params.network_sync_base_url
                    + base_network
                    + "/"
                    + el_type
                    + "/"
                    + shadowfork_block
                    + "/snapshot.tar.zst"
                    + " | tar -I zstd -xvf - -C /data/"
                    + el_type
                    + "/execution-data"
                    + " && touch /tmp/finished"
                    + " && tail -f /dev/null"
                ],
                entrypoint=["/bin/sh", "-c"],
                files={
                    "/data/"
                    + el_type
                    + "/execution-data": Directory(
                        persistent_key="data-{0}".format(el_service_name),
                        size=constants.VOLUME_SIZE[base_network][
                            el_type + "_volume_size"
                        ],
                    ),
                },
                tolerations=tolerations,
                node_selectors=node_selectors,
            ),
        )
    for index, participant in enumerate(participants):
        cl_type = participant.cl_type
        el_type = participant.el_type

        # Zero-pad the index using the calculated zfill value
        index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
        el_service_name = "el-{0}-{1}-{2}".format(index_str, el_type, cl_type)
        plan.wait(
            service_name="shadowfork-{0}".format(el_service_name),
            recipe=ExecRecipe(command=["cat", "/tmp/finished"]),
            field="code",
            assertion="==",
            target_value=0,
            interval="1s",
            timeout="6h",  # 6 hours should be enough for the biggest network
        )
    return latest_block, network_id
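The zero-padded service naming used in both loops above can be sketched as follows (`zfill_custom` here is an assumed equivalent of the `shared_utils` helper):

```python
def zfill_custom(value, width):
    # Left-pad with zeros to the width of the largest participant index,
    # so service names sort correctly (el-01, el-02, ..., el-10).
    return str(value).zfill(width)

def el_service_name(index, num_participants, el_type, cl_type):
    # Mirrors shadowfork_prep above: 1-based, zero-padded participant index.
    index_str = zfill_custom(index + 1, len(str(num_participants)))
    return "el-{0}-{1}-{2}".format(index_str, el_type, cl_type)
```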
EL_TYPE = struct(
    gethbuilder="geth-builder",
    geth="geth",
    erigon="erigon",
@@ -9,7 +9,7 @@ EL_CLIENT_TYPE = struct(
    nimbus="nimbus",
)

CL_TYPE = struct(
    lighthouse="lighthouse",
    teku="teku",
    nimbus="nimbus",
@@ -17,7 +17,7 @@ CL_CLIENT_TYPE = struct(
    lodestar="lodestar",
)

VC_TYPE = struct(
    lighthouse="lighthouse",
    lodestar="lodestar",
    nimbus="nimbus",
@@ -25,7 +25,7 @@ VC_CLIENT_TYPE = struct(
    teku="teku",
)

GLOBAL_LOG_LEVEL = struct(
    info="info",
    error="error",
    warn="warn",
@@ -410,11 +410,11 @@ RAM_CPU_OVERRIDES = {
        "prysm_max_cpu": 1000,  # 1 core
        "lighthouse_max_mem": 1024,  # 1GB
        "lighthouse_max_cpu": 1000,  # 1 core
        "teku_max_mem": 2048,  # 2GB
        "teku_max_cpu": 1000,  # 1 core
        "nimbus_max_mem": 1024,  # 1GB
        "nimbus_max_cpu": 1000,  # 1 core
        "lodestar_max_mem": 2048,  # 2GB
        "lodestar_max_cpu": 1000,  # 1 core
    },
}
@@ -163,46 +163,46 @@ def input_parser(plan, input_args):
    return struct(
        participants=[
            struct(
                el_type=participant["el_type"],
                el_image=participant["el_image"],
                el_log_level=participant["el_log_level"],
                el_volume_size=participant["el_volume_size"],
                el_extra_params=participant["el_extra_params"],
                el_extra_env_vars=participant["el_extra_env_vars"],
                el_extra_labels=participant["el_extra_labels"],
                el_tolerations=participant["el_tolerations"],
                cl_type=participant["cl_type"],
                cl_image=participant["cl_image"],
                cl_log_level=participant["cl_log_level"],
                cl_volume_size=participant["cl_volume_size"],
                cl_extra_env_vars=participant["cl_extra_env_vars"],
                cl_tolerations=participant["cl_tolerations"],
                use_separate_vc=participant["use_separate_vc"],
                vc_type=participant["vc_type"],
                vc_image=participant["vc_image"],
                vc_log_level=participant["vc_log_level"],
                vc_tolerations=participant["vc_tolerations"],
                cl_extra_params=participant["cl_extra_params"],
                cl_extra_labels=participant["cl_extra_labels"],
                vc_extra_params=participant["vc_extra_params"],
                vc_extra_env_vars=participant["vc_extra_env_vars"],
                vc_extra_labels=participant["vc_extra_labels"],
                builder_network_params=participant["builder_network_params"],
                el_min_cpu=participant["el_min_cpu"],
                el_max_cpu=participant["el_max_cpu"],
                el_min_mem=participant["el_min_mem"],
                el_max_mem=participant["el_max_mem"],
                cl_min_cpu=participant["cl_min_cpu"],
                cl_max_cpu=participant["cl_max_cpu"],
                cl_min_mem=participant["cl_min_mem"],
                cl_max_mem=participant["cl_max_mem"],
                vc_min_cpu=participant["vc_min_cpu"],
                vc_max_cpu=participant["vc_max_cpu"],
                vc_min_mem=participant["vc_min_mem"],
                vc_max_mem=participant["vc_max_mem"],
                validator_count=participant["validator_count"],
                tolerations=participant["tolerations"],
                node_selectors=participant["node_selectors"],
                snooper_enabled=participant["snooper_enabled"],
                count=participant["count"],
                ethereum_metrics_exporter_enabled=participant[
@@ -296,7 +296,7 @@ def input_parser(plan, input_args):
        ),
        additional_services=result["additional_services"],
        wait_for_finalization=result["wait_for_finalization"],
        global_log_level=result["global_log_level"],
        mev_type=result["mev_type"],
        snooper_enabled=result["snooper_enabled"],
        ethereum_metrics_exporter_enabled=result["ethereum_metrics_exporter_enabled"],
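Because every one of these participant fields was renamed, existing YAML configs must be bulk-updated; the PR ships a `rename.sh` script for that. A hedged Python sketch of the same idea, using only a few of the renames from the PR description (the real script's mechanics may differ):

```python
# Partial old-name -> new-name mapping, copied from the PR description.
RENAMES = {
    "el_client_type": "el_type",
    "el_client_image": "el_image",
    "cl_client_type": "cl_type",
    "cl_client_image": "cl_image",
    "beacon_extra_params": "cl_extra_params",
    "validator_client_type": "vc_type",
    "global_client_log_level": "global_log_level",
}

def rename_fields(yaml_text):
    # Replace longer keys first so overlapping shorter keys cannot
    # clobber a longer match.
    for old in sorted(RENAMES, key=len, reverse=True):
        yaml_text = yaml_text.replace(old, RENAMES[old])
    return yaml_text
```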
@@ -344,62 +344,62 @@ def parse_network_params(input_args):
    actual_num_validators = 0
    # validation of the above defaults
    for index, participant in enumerate(result["participants"]):
        el_type = participant["el_type"]
        cl_type = participant["cl_type"]
        vc_type = participant["vc_type"]
        if cl_type in (NIMBUS_NODE_NAME) and (
            result["network_params"]["seconds_per_slot"] < 12
        ):
            fail("nimbus can't be run with slot times below 12 seconds")
        el_image = participant["el_image"]
        if el_image == "":
            default_image = DEFAULT_EL_IMAGES.get(el_type, "")
            if default_image == "":
                fail(
                    "{0} received an empty image name and we don't have a default for it".format(
                        el_type
                    )
                )
            participant["el_image"] = default_image
        cl_image = participant["cl_image"]
        if cl_image == "":
            default_image = DEFAULT_CL_IMAGES.get(cl_type, "")
            if default_image == "":
                fail(
                    "{0} received an empty image name and we don't have a default for it".format(
                        cl_type
                    )
                )
            participant["cl_image"] = default_image
        if participant["use_separate_vc"] == None:
            # Default to false for CL clients that can run validator clients
            # in the same process.
            if cl_type in (
                constants.CL_TYPE.nimbus,
                constants.CL_TYPE.teku,
            ):
                participant["use_separate_vc"] = False
            else:
                participant["use_separate_vc"] = True
        if vc_type == "":
            # Defaults to matching the chosen CL client
            vc_type = cl_type
            participant["vc_type"] = vc_type
        vc_image = participant["vc_image"]
        if vc_image == "":
            if cl_image == "":
                # If the validator client image is also empty, default to the image for the chosen CL client
                default_image = DEFAULT_VC_IMAGES.get(vc_type, "")
            else:
                if cl_type == "prysm":
                    default_image = cl_image.replace("beacon-chain", "validator")
                elif cl_type == "nimbus":
                    default_image = cl_image.replace(
                        "nimbus-eth2", "nimbus-validator-client"
                    )
@@ -408,10 +408,10 @@ def parse_network_params(input_args):
            if default_image == "":
                fail(
                    "{0} received an empty image name and we don't have a default for it".format(
                        vc_type
                    )
                )
            participant["vc_image"] = default_image
        snooper_enabled = participant["snooper_enabled"]
        if snooper_enabled == False:
@@ -428,10 +428,10 @@ def parse_network_params(input_args):
        blobber_enabled = participant["blobber_enabled"]
        if blobber_enabled:
            # unless we are running lighthouse, we don't support blobber
            if participant["cl_type"] != "lighthouse":
                fail(
                    "blobber is not supported for {0} client".format(
                        participant["cl_type"]
                    )
                )
@@ -458,11 +458,11 @@ def parse_network_params(input_args):
            actual_num_validators += participant["validator_count"]
        cl_extra_params = participant.get("cl_extra_params", [])
        participant["cl_extra_params"] = cl_extra_params
        vc_extra_params = participant.get("vc_extra_params", [])
        participant["vc_extra_params"] = vc_extra_params
        total_participant_count += participant["count"]
@@ -586,91 +586,94 @@ def default_input_args():
        "participants": participants,
        "network_params": network_params,
        "wait_for_finalization": False,
        "global_log_level": "info",
        "snooper_enabled": False,
        "ethereum_metrics_exporter_enabled": False,
        "xatu_sentry_enabled": False,
        "parallel_keystore_generation": False,
        "disable_peer_scoring": False,
        "persistent": False,
        "mev_type": None,
        "global_tolerations": [],
        "global_node_selectors": {},
    }
def default_network_params():
    # this is temporary till we get params working
    return {
        "network": "kurtosis",
        "network_id": "3151908",
        "deposit_contract_address": "0x4242424242424242424242424242424242424242",
        "seconds_per_slot": 12,
        "num_validator_keys_per_node": 64,
        "preregistered_validator_keys_mnemonic": "giant issue aisle success illegal bike spike question tent bar rely arctic volcano long crawl hungry vocal artwork sniff fantasy very lucky have athlete",
        "preregistered_validator_count": 0,
        "genesis_delay": 20,
        "max_churn": 8,
        "ejection_balance": 16000000000,
        "eth1_follow_distance": 2048,
        "min_validator_withdrawability_delay": 256,
        "shard_committee_period": 256,
        "capella_fork_epoch": 0,
        "deneb_fork_epoch": 4,
        "electra_fork_epoch": None,
        "network_sync_base_url": "https://ethpandaops-ethereum-node-snapshots.ams3.digitaloceanspaces.com/",
    }
def default_participant():
    return {
        "el_type": "geth",
        "el_image": "",
        "el_log_level": "",
        "el_extra_env_vars": {},
        "el_extra_labels": {},
        "el_extra_params": [],
        "el_tolerations": [],
        "el_volume_size": 0,
        "el_min_cpu": 0,
        "el_max_cpu": 0,
        "el_min_mem": 0,
        "el_max_mem": 0,
        "cl_type": "lighthouse",
        "cl_image": "",
        "cl_log_level": "",
        "cl_extra_env_vars": {},
        "cl_extra_labels": {},
        "cl_extra_params": [],
        "cl_tolerations": [],
        "cl_volume_size": 0,
        "cl_min_cpu": 0,
        "cl_max_cpu": 0,
        "cl_min_mem": 0,
        "cl_max_mem": 0,
        "use_separate_vc": None,
        "vc_type": "",
        "vc_image": "",
        "vc_log_level": "",
        "vc_extra_env_vars": {},
        "vc_extra_labels": {},
        "vc_extra_params": [],
        "vc_tolerations": [],
        "vc_min_cpu": 0,
        "vc_max_cpu": 0,
        "vc_min_mem": 0,
        "vc_max_mem": 0,
        "validator_count": None,
        "node_selectors": {},
        "tolerations": [],
        "count": 1,
        "snooper_enabled": False,
        "ethereum_metrics_exporter_enabled": False,
        "xatu_sentry_enabled": False,
        "prometheus_config": {
            "scrape_interval": "15s",
            "labels": None,
        },
        "blobber_enabled": False,
        "blobber_extra_params": [],
        "builder_network_params": None,
    }
@@ -742,14 +745,14 @@ def get_default_custom_flood_params():
def enrich_disable_peer_scoring(parsed_arguments_dict):
    for index, participant in enumerate(parsed_arguments_dict["participants"]):
        if participant["cl_type"] == "lighthouse":
            participant["cl_extra_params"].append("--disable-peer-scoring")
        if participant["cl_type"] == "prysm":
            participant["cl_extra_params"].append("--disable-peer-scorer")
        if participant["cl_type"] == "teku":
            participant["cl_extra_params"].append("--Xp2p-gossip-scoring-enabled")
        if participant["cl_type"] == "lodestar":
            participant["cl_extra_params"].append("--disablePeerScoring")
    return parsed_arguments_dict
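The per-client flag selection in `enrich_disable_peer_scoring` is effectively a lookup table; a sketch of the same logic as a dict-driven helper (flag strings copied from the function above, mutation semantics assumed to match):

```python
# Per-client flag that disables peer scoring, appended to cl_extra_params.
DISABLE_PEER_SCORING_FLAGS = {
    "lighthouse": "--disable-peer-scoring",
    "prysm": "--disable-peer-scorer",
    "teku": "--Xp2p-gossip-scoring-enabled",
    "lodestar": "--disablePeerScoring",
}

def enrich_disable_peer_scoring(participants):
    for participant in participants:
        flag = DISABLE_PEER_SCORING_FLAGS.get(participant["cl_type"])
        if flag:
            participant["cl_extra_params"].append(flag)
    return participants
```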
@@ -762,36 +765,34 @@ def enrich_mev_extra_params(parsed_arguments_dict, mev_prefix, mev_port, mev_typ
        mev_url = "http://{0}-{1}-{2}-{3}:{4}".format(
            MEV_BOOST_SERVICE_NAME_PREFIX,
            index_str,
            participant["cl_type"],
            participant["el_type"],
            mev_port,
        )
        if participant["cl_type"] == "lighthouse":
            participant["vc_extra_params"].append("--builder-proposals")
            participant["cl_extra_params"].append("--builder={0}".format(mev_url))
        if participant["cl_type"] == "lodestar":
            participant["vc_extra_params"].append("--builder")
            participant["cl_extra_params"].append("--builder")
            participant["cl_extra_params"].append("--builder.urls={0}".format(mev_url))
        if participant["cl_type"] == "nimbus":
            participant["vc_extra_params"].append("--payload-builder=true")
            participant["cl_extra_params"].append("--payload-builder=true")
            participant["cl_extra_params"].append(
                "--payload-builder-url={0}".format(mev_url)
            )
        if participant["cl_type"] == "teku":
            participant["vc_extra_params"].append(
                "--validators-builder-registration-default-enabled=true"
            )
            participant["cl_extra_params"].append(
                "--builder-endpoint={0}".format(mev_url)
            )
        if participant["cl_type"] == "prysm":
            participant["vc_extra_params"].append("--enable-builder")
            participant["cl_extra_params"].append(
                "--http-mev-relay={0}".format(mev_url)
            )
@@ -801,18 +802,12 @@ def enrich_mev_extra_params(parsed_arguments_dict, mev_prefix, mev_port, mev_typ
    )
    if mev_type == "full":
        mev_participant = default_participant()
        mev_participant["el_type"] = mev_participant["el_type"] + "-builder"
        mev_participant.update(
            {
                "el_image": parsed_arguments_dict["mev_params"]["mev_builder_image"],
                "cl_image": parsed_arguments_dict["mev_params"]["mev_builder_cl_image"],
                "cl_extra_params": [
                    "--always-prepare-payload",
                    "--prepare-payload-lookahead",
                    "12000",
...
def new_participant(
    el_type,
    cl_type,
    vc_type,
    el_context,
    cl_context,
    vc_context,
    snooper_engine_context,
    ethereum_metrics_exporter_context,
    xatu_sentry_context,
):
    return struct(
        el_type=el_type,
        cl_type=cl_type,
        vc_type=vc_type,
        el_context=el_context,
        cl_context=cl_context,
        vc_context=vc_context,
        snooper_engine_context=snooper_engine_context,
        ethereum_metrics_exporter_context=ethereum_metrics_exporter_context,
        xatu_sentry_context=xatu_sentry_context,
...
el_cl_genesis_data_generator = import_module(
    "./prelaunch_data_generator/el_cl_genesis/el_cl_genesis_generator.star"
)
input_parser = import_module("./package_io/input_parser.star")
shared_utils = import_module("./shared_utils/shared_utils.star")
static_files = import_module("./static_files/static_files.star")
ethereum_metrics_exporter = import_module(
    "./ethereum_metrics_exporter/ethereum_metrics_exporter_launcher.star"
)
participant_module = import_module("./participant.star")
xatu_sentry = import_module("./xatu_sentry/xatu_sentry_launcher.star")
launch_ephemery = import_module("./network_launcher/ephemery.star")
launch_public_network = import_module("./network_launcher/public_network.star")
launch_devnet = import_module("./network_launcher/devnet.star")
launch_kurtosis = import_module("./network_launcher/kurtosis.star")
launch_shadowfork = import_module("./network_launcher/shadowfork.star")
el_client_launcher = import_module("./el/el_launcher.star")
cl_client_launcher = import_module("./cl/cl_launcher.star")
vc = import_module("./vc/vc_launcher.star")
 def launch_participant_network(
@@ -77,8 +40,8 @@ def launch_participant_network(
     parallel_keystore_generation=False,
 ):
     network_id = network_params.network_id
-    num_participants = len(participants)
     latest_block = ""
+    num_participants = len(participants)
     cancun_time = 0
     prague_time = 0
     shadowfork_block = "latest"
@@ -96,180 +59,25 @@ def launch_participant_network(
     if (
         constants.NETWORK_NAME.shadowfork in network_params.network
     ):  # shadowfork requires some preparation
-        base_network = shared_utils.get_network_name(network_params.network)
-        # overload the network name to remove the shadowfork suffix
-        if constants.NETWORK_NAME.ephemery in base_network:
-            chain_id = plan.run_sh(
-                run="curl -s https://ephemery.dev/latest/config.yaml | yq .DEPOSIT_CHAIN_ID | tr -d '\n'",
-                image="linuxserver/yq",
-            )
-            network_id = chain_id.output
-        else:
-            network_id = constants.NETWORK_ID[
-                base_network
-            ]  # overload the network id to match the network name
-        latest_block = plan.run_sh(  # fetch the latest block
-            run="mkdir -p /shadowfork && \
-            curl -o /shadowfork/latest_block.json "
-            + network_params.network_sync_base_url
-            + base_network
-            + "/geth/"
-            + shadowfork_block
-            + "/_snapshot_eth_getBlockByNumber.json",
-            image="badouralix/curl-jq",
-            store=[StoreSpec(src="/shadowfork", name="latest_blocks")],
-        )
+        latest_block, network_id = launch_shadowfork.shadowfork_prep(
+            plan,
+            network_params,
+            shadowfork_block,
+            participants,
+            global_tolerations,
+            global_node_selectors,
+        )
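The network-id override that `shadowfork_prep` now encapsulates can be sketched in plain Python. This is a hypothetical, simplified transcription: the abbreviated `NETWORK_ID` table and the suffix-stripping convention stand in for the package's real `constants` module and `shared_utils.get_network_name` helper, and the ephemery branch (which fetches the chain id remotely) is omitted.

```python
# Illustrative subset of the package's NETWORK_ID constants (values are
# the well-known public chain ids).
NETWORK_ID = {"mainnet": 1, "holesky": 17000, "sepolia": 11155111}


def resolve_shadowfork_network_id(network):
    # "holesky-shadowfork" -> base network "holesky"; the chain id is then
    # overloaded to match the base network, as the pre-refactor code did.
    base_network = network.split("-shadowfork")[0]
    return base_network, NETWORK_ID[base_network]


print(resolve_shadowfork_network_id("holesky-shadowfork"))  # ('holesky', 17000)
```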
-    for index, participant in enumerate(participants):
-        tolerations = input_parser.get_client_tolerations(
-            participant.el_tolerations,
-            participant.tolerations,
-            global_tolerations,
-        )
-        node_selectors = input_parser.get_client_node_selectors(
-            participant.node_selectors,
-            global_node_selectors,
-        )
-        cl_client_type = participant.cl_client_type
-        el_client_type = participant.el_client_type
-        # Zero-pad the index using the calculated zfill value
-        index_str = shared_utils.zfill_custom(
-            index + 1, len(str(len(participants)))
-        )
-        el_service_name = "el-{0}-{1}-{2}".format(
-            index_str, el_client_type, cl_client_type
-        )
-        shadowfork_data = plan.add_service(
-            name="shadowfork-{0}".format(el_service_name),
-            config=ServiceConfig(
-                image="alpine:3.19.1",
-                cmd=[
-                    "apk add --no-cache curl tar zstd && curl -s -L "
-                    + network_params.network_sync_base_url
-                    + base_network
-                    + "/"
-                    + el_client_type
-                    + "/"
-                    + shadowfork_block
-                    + "/snapshot.tar.zst"
-                    + " | tar -I zstd -xvf - -C /data/"
-                    + el_client_type
-                    + "/execution-data"
-                    + " && touch /tmp/finished"
-                    + " && tail -f /dev/null"
-                ],
-                entrypoint=["/bin/sh", "-c"],
-                files={
-                    "/data/"
-                    + el_client_type
-                    + "/execution-data": Directory(
-                        persistent_key="data-{0}".format(el_service_name),
-                        size=constants.VOLUME_SIZE[base_network][
-                            el_client_type + "_volume_size"
-                        ],
-                    ),
-                },
-                tolerations=tolerations,
-                node_selectors=node_selectors,
-            ),
-        )
-    for index, participant in enumerate(participants):
-        cl_client_type = participant.cl_client_type
-        el_client_type = participant.el_client_type
-        # Zero-pad the index using the calculated zfill value
-        index_str = shared_utils.zfill_custom(
-            index + 1, len(str(len(participants)))
-        )
-        el_service_name = "el-{0}-{1}-{2}".format(
-            index_str, el_client_type, cl_client_type
-        )
-        plan.wait(
-            service_name="shadowfork-{0}".format(el_service_name),
-            recipe=ExecRecipe(command=["cat", "/tmp/finished"]),
-            field="code",
-            assertion="==",
-            target_value=0,
-            interval="1s",
-            timeout="6h",  # 6 hours should be enough for the biggest network
-        )
     # We are running a kurtosis or shadowfork network
-    plan.print("Generating cl validator key stores")
-    validator_data = None
-    if not parallel_keystore_generation:
-        validator_data = validator_keystores.generate_validator_keystores(
-            plan, network_params.preregistered_validator_keys_mnemonic, participants
-        )
-    else:
-        validator_data = (
-            validator_keystores.generate_valdiator_keystores_in_parallel(
-                plan,
-                network_params.preregistered_validator_keys_mnemonic,
-                participants,
-            )
-        )
-    plan.print(json.indent(json.encode(validator_data)))
-    # We need to send the same genesis time to both the EL and the CL to ensure that timestamp based forking works as expected
-    final_genesis_timestamp = get_final_genesis_timestamp(
-        plan,
-        network_params.genesis_delay
-        + CL_GENESIS_DATA_GENERATION_TIME
-        + num_participants * CL_NODE_STARTUP_TIME,
-    )
+    (
+        total_number_of_validator_keys,
+        ethereum_genesis_generator_image,
+        final_genesis_timestamp,
+        validator_data,
+    ) = launch_kurtosis.launch(
+        plan, network_params, participants, parallel_keystore_generation
+    )
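The genesis-timestamp padding that moved into `launch_kurtosis` follows the formula visible in the removed lines: genesis delay, plus a fixed CL genesis-generation allowance, plus a per-node startup allowance (both constants were 5 seconds in the pre-refactor module). A minimal Python sketch, with `now` injectable for determinism:

```python
import time

# Constants from the pre-refactor module; both were 5 (seconds).
CL_GENESIS_DATA_GENERATION_TIME = 5
CL_NODE_STARTUP_TIME = 5


def final_genesis_timestamp(genesis_delay, num_participants, now=None):
    """Push genesis far enough into the future that every CL node is up
    before the first slots, so no node misses important slots (e.g. the
    Altair fork)."""
    now = int(time.time()) if now is None else now
    padding = (
        genesis_delay
        + CL_GENESIS_DATA_GENERATION_TIME
        + num_participants * CL_NODE_STARTUP_TIME
    )
    return now + padding


# A 20s delay with 2 participants pads genesis by 20 + 5 + 2*5 = 35s.
print(final_genesis_timestamp(20, 2, now=1_700_000_000))  # 1700000035
```

The same timestamp is handed to both the EL and the CL so that timestamp-based forking behaves consistently.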
-    # if preregistered validator count is 0 (default) then calculate the total number of validators from the participants
-    total_number_of_validator_keys = network_params.preregistered_validator_count
-    if network_params.preregistered_validator_count == 0:
-        for participant in participants:
-            total_number_of_validator_keys += participant.validator_count
-    plan.print("Generating EL CL data")
-    # we are running bellatrix genesis (deprecated) - will be removed in the future
-    if (
-        network_params.capella_fork_epoch > 0
-        and network_params.electra_fork_epoch == None
-    ):
-        ethereum_genesis_generator_image = (
-            constants.ETHEREUM_GENESIS_GENERATOR.bellatrix_genesis
-        )
-    # we are running capella genesis - default behavior
-    elif (
-        network_params.capella_fork_epoch == 0
-        and network_params.electra_fork_epoch == None
-        and network_params.deneb_fork_epoch > 0
-    ):
-        ethereum_genesis_generator_image = (
-            constants.ETHEREUM_GENESIS_GENERATOR.capella_genesis
-        )
-    # we are running deneb genesis - experimental, soon to become default
-    elif network_params.deneb_fork_epoch == 0:
-        ethereum_genesis_generator_image = (
-            constants.ETHEREUM_GENESIS_GENERATOR.deneb_genesis
-        )
-    # we are running electra - experimental
-    elif network_params.electra_fork_epoch != None:
-        if network_params.electra_fork_epoch == 0:
-            ethereum_genesis_generator_image = (
-                constants.ETHEREUM_GENESIS_GENERATOR.verkle_genesis
-            )
-        else:
-            ethereum_genesis_generator_image = (
-                constants.ETHEREUM_GENESIS_GENERATOR.verkle_support_genesis
-            )
-    else:
-        fail(
-            "Unsupported fork epoch configuration, need to define either capella_fork_epoch, deneb_fork_epoch or electra_fork_epoch"
-        )
     el_cl_genesis_config_template = read_file(
         static_files.EL_CL_GENESIS_GENERATION_CONFIG_TEMPLATE_FILEPATH
     )
@@ -297,211 +105,47 @@ def launch_participant_network(
     )
     elif network_params.network in constants.PUBLIC_NETWORKS:
         # We are running a public network
-        dummy = plan.run_sh(
-            run="mkdir /network-configs",
-            store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
-        )
-        el_cl_data = el_cl_genesis_data.new_el_cl_genesis_data(
-            dummy.files_artifacts[0],
-            constants.GENESIS_VALIDATORS_ROOT[network_params.network],
-            cancun_time,
-            prague_time,
-        )
-        final_genesis_timestamp = constants.GENESIS_TIME[network_params.network]
-        network_id = constants.NETWORK_ID[network_params.network]
-        validator_data = None
+        (
+            el_cl_data,
+            final_genesis_timestamp,
+            network_id,
+            validator_data,
+        ) = launch_public_network.launch(
+            plan, network_params.network, cancun_time, prague_time
+        )
     elif network_params.network == constants.NETWORK_NAME.ephemery:
-        el_cl_genesis_data_uuid = plan.run_sh(
-            run="mkdir -p /network-configs/ && \
-            curl -o latest.tar.gz https://ephemery.dev/latest.tar.gz && \
-            tar xvzf latest.tar.gz -C /network-configs && \
-            cat /network-configs/genesis_validators_root.txt",
-            image="badouralix/curl-jq",
-            store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
-        )
-        genesis_validators_root = el_cl_genesis_data_uuid.output
-        el_cl_data = el_cl_genesis_data.new_el_cl_genesis_data(
-            el_cl_genesis_data_uuid.files_artifacts[0],
-            genesis_validators_root,
-            cancun_time,
-            prague_time,
-        )
-        final_genesis_timestamp = shared_utils.read_genesis_timestamp_from_config(
-            plan, el_cl_genesis_data_uuid.files_artifacts[0]
-        )
-        network_id = shared_utils.read_genesis_network_id_from_config(
-            plan, el_cl_genesis_data_uuid.files_artifacts[0]
-        )
-        validator_data = None
+        # We are running an ephemery network
+        (
+            el_cl_data,
+            final_genesis_timestamp,
+            network_id,
+            validator_data,
+        ) = launch_ephemery.launch(plan, cancun_time, prague_time)
     else:
         # We are running a devnet
-        url = calculate_devnet_url(network_params.network)
-        el_cl_genesis_uuid = plan.upload_files(
-            src=url,
-            name="el_cl_genesis",
-        )
-        el_cl_genesis_data_uuid = plan.run_sh(
-            run="mkdir -p /network-configs/ && mv /opt/* /network-configs/",
-            store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
-            files={"/opt": el_cl_genesis_uuid},
-        )
-        genesis_validators_root = read_file(url + "/genesis_validators_root.txt")
-        el_cl_data = el_cl_genesis_data.new_el_cl_genesis_data(
-            el_cl_genesis_data_uuid.files_artifacts[0],
-            genesis_validators_root,
-            cancun_time,
-            prague_time,
-        )
-        final_genesis_timestamp = shared_utils.read_genesis_timestamp_from_config(
-            plan, el_cl_genesis_data_uuid.files_artifacts[0]
-        )
-        network_id = shared_utils.read_genesis_network_id_from_config(
-            plan, el_cl_genesis_data_uuid.files_artifacts[0]
-        )
-        validator_data = None
+        (
+            el_cl_data,
+            final_genesis_timestamp,
+            network_id,
+            validator_data,
+        ) = launch_devnet.launch(plan, network_params.network, cancun_time, prague_time)
+
+    # Launch all execution layer clients
+    all_el_contexts = el_client_launcher.launch(
+        plan,
+        network_params,
+        el_cl_data,
+        jwt_file,
+        participants,
+        global_log_level,
+        global_node_selectors,
+        global_tolerations,
+        persistent,
+        network_id,
+        num_participants,
+    )
-    el_launchers = {
-        constants.EL_CLIENT_TYPE.geth: {
-            "launcher": geth.new_geth_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-                network_id,
-                network_params.capella_fork_epoch,
-                el_cl_data.cancun_time,
-                el_cl_data.prague_time,
-                network_params.electra_fork_epoch,
-            ),
-            "launch_method": geth.launch,
-        },
-        constants.EL_CLIENT_TYPE.gethbuilder: {
-            "launcher": geth.new_geth_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-                network_id,
-                network_params.capella_fork_epoch,
-                el_cl_data.cancun_time,
-                el_cl_data.prague_time,
-                network_params.electra_fork_epoch,
-            ),
-            "launch_method": geth.launch,
-        },
-        constants.EL_CLIENT_TYPE.besu: {
-            "launcher": besu.new_besu_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-            ),
-            "launch_method": besu.launch,
-        },
-        constants.EL_CLIENT_TYPE.erigon: {
-            "launcher": erigon.new_erigon_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-                network_id,
-                el_cl_data.cancun_time,
-            ),
-            "launch_method": erigon.launch,
-        },
-        constants.EL_CLIENT_TYPE.nethermind: {
-            "launcher": nethermind.new_nethermind_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-            ),
-            "launch_method": nethermind.launch,
-        },
-        constants.EL_CLIENT_TYPE.reth: {
-            "launcher": reth.new_reth_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-            ),
-            "launch_method": reth.launch,
-        },
-        constants.EL_CLIENT_TYPE.ethereumjs: {
-            "launcher": ethereumjs.new_ethereumjs_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-            ),
-            "launch_method": ethereumjs.launch,
-        },
-        constants.EL_CLIENT_TYPE.nimbus: {
-            "launcher": nimbus_eth1.new_nimbus_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-            ),
-            "launch_method": nimbus_eth1.launch,
-        },
-    }
-    all_el_client_contexts = []
-    for index, participant in enumerate(participants):
-        cl_client_type = participant.cl_client_type
-        el_client_type = participant.el_client_type
-        node_selectors = input_parser.get_client_node_selectors(
-            participant.node_selectors,
-            global_node_selectors,
-        )
-        tolerations = input_parser.get_client_tolerations(
-            participant.el_tolerations, participant.tolerations, global_tolerations
-        )
-        if el_client_type not in el_launchers:
-            fail(
-                "Unsupported launcher '{0}', need one of '{1}'".format(
-                    el_client_type, ",".join([el.name for el in el_launchers.keys()])
-                )
-            )
-        el_launcher, launch_method = (
-            el_launchers[el_client_type]["launcher"],
-            el_launchers[el_client_type]["launch_method"],
-        )
-        # Zero-pad the index using the calculated zfill value
-        index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
-        el_service_name = "el-{0}-{1}-{2}".format(
-            index_str, el_client_type, cl_client_type
-        )
-        el_client_context = launch_method(
-            plan,
-            el_launcher,
-            el_service_name,
-            participant.el_client_image,
-            participant.el_client_log_level,
-            global_log_level,
-            all_el_client_contexts,
-            participant.el_min_cpu,
-            participant.el_max_cpu,
-            participant.el_min_mem,
-            participant.el_max_mem,
-            participant.el_extra_params,
-            participant.el_extra_env_vars,
-            participant.el_extra_labels,
-            persistent,
-            participant.el_client_volume_size,
-            tolerations,
-            node_selectors,
-        )
-        # Add participant el additional prometheus metrics
-        for metrics_info in el_client_context.el_metrics_info:
-            if metrics_info != None:
-                metrics_info["config"] = participant.prometheus_config
-        all_el_client_contexts.append(el_client_context)
-    plan.print("Successfully added {0} EL participants".format(num_participants))
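The service-naming scheme used throughout this loop (`el-{index}-{el_type}-{cl_type}`) zero-pads the participant index to the width of the participant count, so service names sort lexicographically. A small sketch, assuming `shared_utils.zfill_custom` is a plain left-zero-pad (plausible from its usage here, but not shown in this diff):

```python
def zfill_custom(value, width):
    # Assumed behavior of shared_utils.zfill_custom: zero-pad to `width`.
    return str(value).zfill(width)


def el_service_name(index, participants):
    p = participants[index]
    # Width = digits in the participant count, so 10 participants yield
    # names el-01-... through el-10-...
    index_str = zfill_custom(index + 1, len(str(len(participants))))
    return "el-{0}-{1}-{2}".format(index_str, p["el_type"], p["cl_type"])


participants = [{"el_type": "geth", "cl_type": "lighthouse"}] * 10
print(el_service_name(0, participants))  # el-01-geth-lighthouse
```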
-    plan.print("Launching CL network")
+    # Launch all consensus layer clients
     prysm_password_relative_filepath = (
         validator_data.prysm_password_relative_filepath
         if network_params.network == constants.NETWORK_NAME.kurtosis
@@ -512,181 +156,51 @@ def launch_participant_network(
         if network_params.network == constants.NETWORK_NAME.kurtosis
         else None
     )
-    cl_launchers = {
-        constants.CL_CLIENT_TYPE.lighthouse: {
-            "launcher": lighthouse.new_lighthouse_launcher(
-                el_cl_data, jwt_file, network_params.network
-            ),
-            "launch_method": lighthouse.launch,
-        },
-        constants.CL_CLIENT_TYPE.lodestar: {
-            "launcher": lodestar.new_lodestar_launcher(
-                el_cl_data, jwt_file, network_params.network
-            ),
-            "launch_method": lodestar.launch,
-        },
-        constants.CL_CLIENT_TYPE.nimbus: {
-            "launcher": nimbus.new_nimbus_launcher(
-                el_cl_data, jwt_file, network_params.network, keymanager_file
-            ),
-            "launch_method": nimbus.launch,
-        },
-        constants.CL_CLIENT_TYPE.prysm: {
-            "launcher": prysm.new_prysm_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-                prysm_password_relative_filepath,
-                prysm_password_artifact_uuid,
-            ),
-            "launch_method": prysm.launch,
-        },
-        constants.CL_CLIENT_TYPE.teku: {
-            "launcher": teku.new_teku_launcher(
-                el_cl_data,
-                jwt_file,
-                network_params.network,
-                keymanager_file,
-                keymanager_p12_file,
-            ),
-            "launch_method": teku.launch,
-        },
-    }
-    all_snooper_engine_contexts = []
-    all_cl_client_contexts = []
-    all_ethereum_metrics_exporter_contexts = []
-    all_xatu_sentry_contexts = []
-    preregistered_validator_keys_for_nodes = (
-        validator_data.per_node_keystores
-        if network_params.network == constants.NETWORK_NAME.kurtosis
-        or constants.NETWORK_NAME.shadowfork in network_params.network
-        else None
-    )
+    (
+        all_cl_contexts,
+        all_snooper_engine_contexts,
+        preregistered_validator_keys_for_nodes,
+    ) = cl_client_launcher.launch(
+        plan,
+        network_params,
+        el_cl_data,
+        jwt_file,
+        keymanager_file,
+        keymanager_p12_file,
+        participants,
+        all_el_contexts,
+        global_log_level,
+        global_node_selectors,
+        global_tolerations,
+        persistent,
+        network_id,
+        num_participants,
+        validator_data,
+        prysm_password_relative_filepath,
+        prysm_password_artifact_uuid,
+    )
+    ethereum_metrics_exporter_context = None
+    all_ethereum_metrics_exporter_contexts = []
+    all_xatu_sentry_contexts = []
+    all_vc_contexts = []
+
+    # Some CL clients cannot run validator clients in the same process and need
+    # a separate validator client
+    _cls_that_need_separate_vc = [
+        constants.CL_TYPE.prysm,
+        constants.CL_TYPE.lodestar,
+        constants.CL_TYPE.lighthouse,
+    ]
     for index, participant in enumerate(participants):
-        cl_client_type = participant.cl_client_type
-        el_client_type = participant.el_client_type
-        node_selectors = input_parser.get_client_node_selectors(
-            participant.node_selectors,
-            global_node_selectors,
-        )
-        if cl_client_type not in cl_launchers:
-            fail(
-                "Unsupported launcher '{0}', need one of '{1}'".format(
-                    cl_client_type, ",".join([cl.name for cl in cl_launchers.keys()])
-                )
-            )
-        cl_launcher, launch_method = (
-            cl_launchers[cl_client_type]["launcher"],
-            cl_launchers[cl_client_type]["launch_method"],
-        )
+        el_type = participant.el_type
+        cl_type = participant.cl_type
+        vc_type = participant.vc_type
         index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
-        cl_service_name = "cl-{0}-{1}-{2}".format(
-            index_str, cl_client_type, el_client_type
-        )
-        new_cl_node_validator_keystores = None
-        if participant.validator_count != 0:
-            new_cl_node_validator_keystores = preregistered_validator_keys_for_nodes[
-                index
-            ]
-        el_client_context = all_el_client_contexts[index]
-        cl_client_context = None
-        snooper_engine_context = None
+        el_context = all_el_contexts[index]
+        cl_context = all_cl_contexts[index]
-        if participant.snooper_enabled:
-            snooper_service_name = "snooper-{0}-{1}-{2}".format(
-                index_str, cl_client_type, el_client_type
-            )
-            snooper_engine_context = snooper.launch(
-                plan,
-                snooper_service_name,
-                el_client_context,
-                node_selectors,
-            )
-            plan.print(
-                "Successfully added {0} snooper participants".format(
-                    snooper_engine_context
-                )
-            )
-            all_snooper_engine_contexts.append(snooper_engine_context)
-        if index == 0:
-            cl_client_context = launch_method(
-                plan,
-                cl_launcher,
-                cl_service_name,
-                participant.cl_client_image,
-                participant.cl_client_log_level,
-                global_log_level,
-                CL_CLIENT_CONTEXT_BOOTNODE,
-                el_client_context,
-                new_cl_node_validator_keystores,
-                participant.bn_min_cpu,
-                participant.bn_max_cpu,
-                participant.bn_min_mem,
-                participant.bn_max_mem,
-                participant.snooper_enabled,
-                snooper_engine_context,
-                participant.blobber_enabled,
-                participant.blobber_extra_params,
-                participant.beacon_extra_params,
-                participant.beacon_extra_labels,
-                persistent,
-                participant.cl_client_volume_size,
-                participant.cl_tolerations,
-                participant.tolerations,
-                global_tolerations,
-                node_selectors,
-                participant.use_separate_validator_client,
-            )
-        else:
-            boot_cl_client_ctx = all_cl_client_contexts
-            cl_client_context = launch_method(
-                plan,
-                cl_launcher,
-                cl_service_name,
-                participant.cl_client_image,
-                participant.cl_client_log_level,
-                global_log_level,
-                boot_cl_client_ctx,
-                el_client_context,
-                new_cl_node_validator_keystores,
-                participant.bn_min_cpu,
-                participant.bn_max_cpu,
-                participant.bn_min_mem,
-                participant.bn_max_mem,
-                participant.snooper_enabled,
-                snooper_engine_context,
-                participant.blobber_enabled,
-                participant.blobber_extra_params,
-                participant.beacon_extra_params,
-                participant.beacon_extra_labels,
-                persistent,
-                participant.cl_client_volume_size,
-                participant.cl_tolerations,
-                participant.tolerations,
-                global_tolerations,
-                node_selectors,
-                participant.use_separate_validator_client,
-            )
-        # Add participant cl additional prometheus labels
-        for metrics_info in cl_client_context.cl_nodes_metrics_info:
-            if metrics_info != None:
-                metrics_info["config"] = participant.prometheus_config
-        all_cl_client_contexts.append(cl_client_context)
-        ethereum_metrics_exporter_context = None
         if participant.ethereum_metrics_exporter_enabled:
-            pair_name = "{0}-{1}-{2}".format(index_str, cl_client_type, el_client_type)
+            pair_name = "{0}-{1}-{2}".format(index_str, cl_type, el_type)
             ethereum_metrics_exporter_service_name = (
                 "ethereum-metrics-exporter-{0}".format(pair_name)
@@ -696,9 +210,9 @@ def launch_participant_network(
                 plan,
                 pair_name,
                 ethereum_metrics_exporter_service_name,
-                el_client_context,
-                cl_client_context,
-                node_selectors,
+                el_context,
+                cl_context,
+                participant.node_selectors,
             )
             plan.print(
                 "Successfully added {0} ethereum metrics exporter participants".format(
@@ -711,18 +225,18 @@ def launch_participant_network(
         xatu_sentry_context = None
         if participant.xatu_sentry_enabled:
-            pair_name = "{0}-{1}-{2}".format(index_str, cl_client_type, el_client_type)
+            pair_name = "{0}-{1}-{2}".format(index_str, cl_type, el_type)
             xatu_sentry_service_name = "xatu-sentry-{0}".format(pair_name)
             xatu_sentry_context = xatu_sentry.launch(
                 plan,
                 xatu_sentry_service_name,
-                cl_client_context,
+                cl_context,
                 xatu_sentry_params,
                 network_params,
                 pair_name,
-                node_selectors,
+                participant.node_selectors,
             )
             plan.print(
                 "Successfully added {0} xatu sentry participants".format(
@@ -732,42 +246,22 @@ def launch_participant_network(
             all_xatu_sentry_contexts.append(xatu_sentry_context)
     plan.print("Successfully added {0} CL participants".format(num_participants))
-    all_validator_client_contexts = []
-    # Some CL clients cannot run validator clients in the same process and need
-    # a separate validator client
-    _cls_that_need_separate_vc = [
-        constants.CL_CLIENT_TYPE.prysm,
-        constants.CL_CLIENT_TYPE.lodestar,
-        constants.CL_CLIENT_TYPE.lighthouse,
-    ]
-    for index, participant in enumerate(participants):
-        cl_client_type = participant.cl_client_type
-        validator_client_type = participant.validator_client_type
-        if participant.use_separate_validator_client == None:
+        plan.print("Start adding validators for participant #{0}".format(index_str))
+        if participant.use_separate_vc == None:
             # This should only be the case for the MEV participant,
             # the regular participants default to False/True
-            all_validator_client_contexts.append(None)
+            all_vc_contexts.append(None)
             continue
-        if (
-            cl_client_type in _cls_that_need_separate_vc
-            and not participant.use_separate_validator_client
-        ):
-            fail("{0} needs a separate validator client!".format(cl_client_type))
+        if cl_type in _cls_that_need_separate_vc and not participant.use_separate_vc:
+            fail("{0} needs a separate validator client!".format(cl_type))
-        if not participant.use_separate_validator_client:
-            all_validator_client_contexts.append(None)
+        if not participant.use_separate_vc:
+            all_vc_contexts.append(None)
             continue
-        el_client_context = all_el_client_contexts[index]
-        cl_client_context = all_cl_client_contexts[index]
-        # Zero-pad the index using the calculated zfill value
-        index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
         plan.print(
             "Using separate validator client for participant #{0}".format(index_str)
         )
@@ -776,55 +270,51 @@ def launch_participant_network(
         if participant.validator_count != 0:
             vc_keystores = preregistered_validator_keys_for_nodes[index]
-        validator_client_context = validator_client.launch(
+        vc_context = vc.launch(
             plan=plan,
-            launcher=validator_client.new_validator_client_launcher(
-                el_cl_genesis_data=el_cl_data
-            ),
+            launcher=vc.new_vc_launcher(el_cl_genesis_data=el_cl_data),
             keymanager_file=keymanager_file,
             keymanager_p12_file=keymanager_p12_file,
-            service_name="vc-{0}-{1}-{2}".format(
-                index_str, validator_client_type, el_client_type
-            ),
-            validator_client_type=validator_client_type,
-            image=participant.validator_client_image,
-            participant_log_level=participant.validator_client_log_level,
+            service_name="vc-{0}-{1}-{2}".format(index_str, vc_type, el_type),
+            vc_type=vc_type,
+            image=participant.vc_image,
+            participant_log_level=participant.vc_log_level,
             global_log_level=global_log_level,
-            cl_client_context=cl_client_context,
-            el_client_context=el_client_context,
+            cl_context=cl_context,
+            el_context=el_context,
             node_keystore_files=vc_keystores,
-            v_min_cpu=participant.v_min_cpu,
-            v_max_cpu=participant.v_max_cpu,
-            v_min_mem=participant.v_min_mem,
-            v_max_mem=participant.v_max_mem,
-            extra_params=participant.validator_extra_params,
-            extra_labels=participant.validator_extra_labels,
+            vc_min_cpu=participant.vc_min_cpu,
+            vc_max_cpu=participant.vc_max_cpu,
+            vc_min_mem=participant.vc_min_mem,
+            vc_max_mem=participant.vc_max_mem,
+            extra_params=participant.vc_extra_params,
+            extra_env_vars=participant.vc_extra_env_vars,
+            extra_labels=participant.vc_extra_labels,
             prysm_password_relative_filepath=prysm_password_relative_filepath,
             prysm_password_artifact_uuid=prysm_password_artifact_uuid,
-            validator_tolerations=participant.validator_tolerations,
+            vc_tolerations=participant.vc_tolerations,
             participant_tolerations=participant.tolerations,
             global_tolerations=global_tolerations,
-            node_selectors=node_selectors,
+            node_selectors=participant.node_selectors,
-            network=network_params.network,  # TODO: remove when deneb rebase is done
-            electra_fork_epoch=network_params.electra_fork_epoch,  # TODO: remove when deneb rebase is done
+            network=network_params.network,
+            electra_fork_epoch=network_params.electra_fork_epoch,
         )
-        all_validator_client_contexts.append(validator_client_context)
+        all_vc_contexts.append(vc_context)
-        if validator_client_context and validator_client_context.metrics_info:
-            validator_client_context.metrics_info[
-                "config"
-            ] = participant.prometheus_config
+        if vc_context and vc_context.metrics_info:
+            vc_context.metrics_info["config"] = participant.prometheus_config
     all_participants = []
     for index, participant in enumerate(participants):
-        el_client_type = participant.el_client_type
-        cl_client_type = participant.cl_client_type
-        validator_client_type = participant.validator_client_type
+        el_type = participant.el_type
+        cl_type = participant.cl_type
+        vc_type = participant.vc_type
+        snooper_engine_context = None
-        el_client_context = all_el_client_contexts[index]
-        cl_client_context = all_cl_client_contexts[index]
-        validator_client_context = all_validator_client_contexts[index]
+        el_context = all_el_contexts[index]
+        cl_context = all_cl_contexts[index]
+        vc_context = all_vc_contexts[index]
         if participant.snooper_enabled:
             snooper_engine_context = all_snooper_engine_contexts[index]
@@ -841,12 +331,12 @@ def launch_participant_network(
             xatu_sentry_context = all_xatu_sentry_contexts[index]
         participant_entry = participant_module.new_participant(
-            el_client_type,
-            cl_client_type,
-            validator_client_type,
-            el_client_context,
-            cl_client_context,
-            validator_client_context,
+            el_type,
+            cl_type,
+            vc_type,
+            el_context,
+            cl_context,
+            vc_context,
             snooper_engine_context,
             ethereum_metrics_exporter_context,
             xatu_sentry_context,
@@ -860,44 +350,3 @@ def launch_participant_network(
         el_cl_data.genesis_validators_root,
         el_cl_data.files_artifact_uuid,
     )
-# this is a python procedure so that Kurtosis can do idempotent runs
-# time.now() runs everytime bringing non determinism
-# note that the timestamp it returns is a string
-def get_final_genesis_timestamp(plan, padding):
-    result = plan.run_python(
-        run="""
-import time
-import sys
-padding = int(sys.argv[1])
-print(int(time.time()+padding), end="")
-""",
-        args=[str(padding)],
-        store=[StoreSpec(src="/tmp", name="final-genesis-timestamp")],
-    )
-    return result.output
-def calculate_devnet_url(network):
-    sf_suffix_mapping = {"hsf": "-hsf-", "gsf": "-gsf-", "ssf": "-ssf-"}
-    shadowfork = "sf-" in network
-    if shadowfork:
-        for suffix, delimiter in sf_suffix_mapping.items():
-            if delimiter in network:
-                network_parts = network.split(delimiter, 1)
-                network_type = suffix
-    else:
-        network_parts = network.split("-devnet-", 1)
-        network_type = "devnet"
-    devnet_name, devnet_number = network_parts[0], network_parts[1]
-    devnet_category = devnet_name.split("-")[0]
-    devnet_subname = (
-        devnet_name.split("-")[1] + "-" if len(devnet_name.split("-")) > 1 else ""
-    )
-    return "github.com/ethpandaops/{0}-devnets/network-configs/{1}{2}-{3}".format(
-        devnet_category, devnet_subname, network_type, devnet_number
-    )
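Since Starlark is a Python dialect, the removed helper above runs as plain Python with no changes; the sketch below transcribes it so the URL derivation can be exercised directly (in this refactor the logic itself moves into the devnet launcher module).

```python
def calculate_devnet_url(network):
    # Transcription of the removed Starlark helper: derive the
    # ethpandaops network-configs path from a devnet/shadowfork name.
    sf_suffix_mapping = {"hsf": "-hsf-", "gsf": "-gsf-", "ssf": "-ssf-"}
    if "sf-" in network:
        for suffix, delimiter in sf_suffix_mapping.items():
            if delimiter in network:
                network_parts = network.split(delimiter, 1)
                network_type = suffix
    else:
        network_parts = network.split("-devnet-", 1)
        network_type = "devnet"
    devnet_name, devnet_number = network_parts[0], network_parts[1]
    devnet_category = devnet_name.split("-")[0]
    devnet_subname = (
        devnet_name.split("-")[1] + "-" if len(devnet_name.split("-")) > 1 else ""
    )
    return "github.com/ethpandaops/{0}-devnets/network-configs/{1}{2}-{3}".format(
        devnet_category, devnet_subname, network_type, devnet_number
    )


print(calculate_devnet_url("dencun-devnet-12"))
# github.com/ethpandaops/dencun-devnets/network-configs/devnet-12
```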
@@ -134,8 +134,8 @@ def generate_validator_keystores(plan, mnemonic, participants):
         keystore_stop_index = (keystore_start_index + participant.validator_count) - 1
         artifact_name = "{0}-{1}-{2}-{3}-{4}".format(
             padded_idx,
-            participant.cl_client_type,
-            participant.el_client_type,
+            participant.cl_type,
+            participant.el_type,
             keystore_start_index,
             keystore_stop_index,
         )
@@ -286,8 +286,8 @@ def generate_valdiator_keystores_in_parallel(plan, mnemonic, participants):
         keystore_stop_index = (keystore_start_index + participant.validator_count) - 1
         artifact_name = "{0}-{1}-{2}-{3}-{4}".format(
             padded_idx,
-            participant.cl_client_type,
-            participant.el_client_type,
+            participant.cl_type,
+            participant.el_type,
             keystore_start_index,
             keystore_stop_index,
         )
...
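The renamed artifact labels above encode an inclusive key-index range per participant: `stop = start + validator_count - 1`, with `start` accumulating across participants. A hedged sketch — the cumulative start-index bookkeeping and the `padded_idx` width are assumptions, since only the naming line is visible in this hunk:

```python
def keystore_artifact_names(participants):
    # Each participant's keystores span [start, start + validator_count - 1];
    # the artifact name embeds the padded index, client types, and range.
    names, start = [], 0
    width = len(str(len(participants)))  # assumed padding width
    for idx, p in enumerate(participants):
        stop = start + p["validator_count"] - 1
        names.append("{0}-{1}-{2}-{3}-{4}".format(
            str(idx + 1).zfill(width), p["cl_type"], p["el_type"], start, stop
        ))
        start = stop + 1
    return names


print(keystore_artifact_names([
    {"validator_count": 64, "cl_type": "lighthouse", "el_type": "geth"},
    {"validator_count": 64, "cl_type": "lodestar", "el_type": "geth"},
]))
```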
@@ -5,12 +5,12 @@ shared_utils = import_module("../../shared_utils/shared_utils.star")
 def generate_validator_ranges(
     plan,
     config_template,
-    cl_client_contexts,
+    cl_contexts,
     participants,
 ):
     data = []
     running_total_validator_count = 0
-    for index, client in enumerate(cl_client_contexts):
+    for index, client in enumerate(cl_contexts):
         participant = participants[index]
         if participant.validator_count == 0:
             continue
...
@@ -3,7 +3,7 @@ prometheus = import_module("github.com/kurtosis-tech/prometheus-package/main.sta
 EXECUTION_CLIENT_TYPE = "execution"
 BEACON_CLIENT_TYPE = "beacon"
-VALIDATOR_CLIENT_TYPE = "validator"
+vc_type = "validator"
 METRICS_INFO_NAME_KEY = "name"
 METRICS_INFO_URL_KEY = "url"
@@ -21,18 +21,18 @@ MAX_MEMORY = 2048
 def launch_prometheus(
     plan,
-    el_client_contexts,
-    cl_client_contexts,
-    validator_client_contexts,
+    el_contexts,
+    cl_contexts,
+    vc_contexts,
     additional_metrics_jobs,
     ethereum_metrics_exporter_contexts,
     xatu_sentry_contexts,
     global_node_selectors,
 ):
     metrics_jobs = get_metrics_jobs(
-        el_client_contexts,
-        cl_client_contexts,
-        validator_client_contexts,
+        el_contexts,
+        cl_contexts,
+        vc_contexts,
         additional_metrics_jobs,
         ethereum_metrics_exporter_contexts,
         xatu_sentry_contexts,
@@ -51,16 +51,16 @@ def launch_prometheus(
 def get_metrics_jobs(
-    el_client_contexts,
-    cl_client_contexts,
-    validator_client_contexts,
+    el_contexts,
+    cl_contexts,
+    vc_contexts,
     additional_metrics_jobs,
     ethereum_metrics_exporter_contexts,
     xatu_sentry_contexts,
 ):
     metrics_jobs = []
     # Adding execution clients metrics jobs
-    for context in el_client_contexts:
+    for context in el_contexts:
         if len(context.el_metrics_info) >= 1 and context.el_metrics_info[0] != None:
             execution_metrics_info = context.el_metrics_info[0]
             scrape_interval = PROMETHEUS_DEFAULT_SCRAPE_INTERVAL
@@ -90,7 +90,7 @@ def get_metrics_jobs(
             )
         )
     # Adding consensus clients metrics jobs
-    for context in cl_client_contexts:
+    for context in cl_contexts:
         if (
             len(context.cl_nodes_metrics_info) >= 1
             and context.cl_nodes_metrics_info[0] != None
@@ -123,7 +123,7 @@ def get_metrics_jobs(
         )
     # Adding validator clients metrics jobs
-    for context in validator_client_contexts:
+    for context in vc_contexts:
         if context == None:
             continue
         metrics_info = context.metrics_info
@@ -131,7 +131,7 @@ def get_metrics_jobs(
         scrape_interval = PROMETHEUS_DEFAULT_SCRAPE_INTERVAL
         labels = {
             "service": context.service_name,
-            "client_type": VALIDATOR_CLIENT_TYPE,
+            "client_type": vc_type,
             "client_name": context.client_name,
         }
...
@@ -155,3 +155,44 @@ def get_network_name(network):
         network_name = network.split("-shadowfork")[0]
     return network_name
+
+
+# this is a python procedure so that Kurtosis can do idempotent runs
+# time.now() runs everytime bringing non determinism
+# note that the timestamp it returns is a string
+def get_final_genesis_timestamp(plan, padding):
+    result = plan.run_python(
+        run="""
+import time
+import sys
+padding = int(sys.argv[1])
+print(int(time.time()+padding), end="")
+""",
+        args=[str(padding)],
+        store=[StoreSpec(src="/tmp", name="final-genesis-timestamp")],
+    )
+    return result.output
+
+
+def calculate_devnet_url(network):
+    sf_suffix_mapping = {"hsf": "-hsf-", "gsf": "-gsf-", "ssf": "-ssf-"}
+    shadowfork = "sf-" in network
+    if shadowfork:
+        for suffix, delimiter in sf_suffix_mapping.items():
+            if delimiter in network:
+                network_parts = network.split(delimiter, 1)
+                network_type = suffix
+    else:
+        network_parts = network.split("-devnet-", 1)
+        network_type = "devnet"
+
+    devnet_name, devnet_number = network_parts[0], network_parts[1]
+    devnet_category = devnet_name.split("-")[0]
+    devnet_subname = (
+        devnet_name.split("-")[1] + "-" if len(devnet_name.split("-")) > 1 else ""
+    )
+
+    return "github.com/ethpandaops/{0}-devnets/network-configs/{1}{2}-{3}".format(
+        devnet_category, devnet_subname, network_type, devnet_number
+    )
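The added `calculate_devnet_url` helper is plain string logic, so its behavior can be checked outside Starlark. Below is a hypothetical standalone Python copy of the function (not part of the package) with example network names assumed for illustration, showing how devnet and shadowfork names map to `ethpandaops` config URLs:

```python
def calculate_devnet_url(network):
    # Shadowfork networks embed one of these delimiters; everything else
    # is treated as a plain "-devnet-" network.
    sf_suffix_mapping = {"hsf": "-hsf-", "gsf": "-gsf-", "ssf": "-ssf-"}
    if "sf-" in network:
        for suffix, delimiter in sf_suffix_mapping.items():
            if delimiter in network:
                network_parts = network.split(delimiter, 1)
                network_type = suffix
    else:
        network_parts = network.split("-devnet-", 1)
        network_type = "devnet"

    devnet_name, devnet_number = network_parts[0], network_parts[1]
    devnet_category = devnet_name.split("-")[0]
    # A multi-part name like "verkle-gen" keeps its tail as a subname prefix.
    devnet_subname = (
        devnet_name.split("-")[1] + "-" if len(devnet_name.split("-")) > 1 else ""
    )
    return "github.com/ethpandaops/{0}-devnets/network-configs/{1}{2}-{3}".format(
        devnet_category, devnet_subname, network_type, devnet_number
    )


# Example (hypothetical network names):
print(calculate_devnet_url("dencun-devnet-12"))
# github.com/ethpandaops/dencun-devnets/network-configs/devnet-12
```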
 shared_utils = import_module("../shared_utils/shared_utils.star")
 input_parser = import_module("../package_io/input_parser.star")
-el_client_context = import_module("../el/el_client_context.star")
+el_context = import_module("../el/el_context.star")
 el_admin_node_info = import_module("../el/el_admin_node_info.star")
 snooper_engine_context = import_module("../snooper/snooper_engine_context.star")
@@ -25,10 +25,10 @@ MIN_MEMORY = 10
 MAX_MEMORY = 300
-def launch(plan, service_name, el_client_context, node_selectors):
+def launch(plan, service_name, el_context, node_selectors):
     snooper_service_name = "{0}".format(service_name)
-    snooper_config = get_config(service_name, el_client_context, node_selectors)
+    snooper_config = get_config(service_name, el_context, node_selectors)
     snooper_service = plan.add_service(snooper_service_name, snooper_config)
     snooper_http_port = snooper_service.ports[SNOOPER_ENGINE_RPC_PORT_ID]
@@ -37,10 +37,10 @@ def launch(plan, service_name, el_context, node_selectors):
     )
-def get_config(service_name, el_client_context, node_selectors):
+def get_config(service_name, el_context, node_selectors):
     engine_rpc_port_num = "http://{0}:{1}".format(
-        el_client_context.ip_addr,
-        el_client_context.engine_rpc_port_num,
+        el_context.ip_addr,
+        el_context.engine_rpc_port_num,
     )
     cmd = [
         SNOOPER_BINARY_COMMAND,
...
 constants = import_module("../package_io/constants.star")
 input_parser = import_module("../package_io/input_parser.star")
 shared_utils = import_module("../shared_utils/shared_utils.star")
-validator_client_shared = import_module("./shared.star")
+vc_shared = import_module("./shared.star")
 RUST_BACKTRACE_ENVVAR_NAME = "RUST_BACKTRACE"
 RUST_FULL_BACKTRACE_KEYWORD = "full"
 VERBOSITY_LEVELS = {
-    constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error",
-    constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn",
-    constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info",
-    constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug",
-    constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace",
+    constants.GLOBAL_LOG_LEVEL.error: "error",
+    constants.GLOBAL_LOG_LEVEL.warn: "warn",
+    constants.GLOBAL_LOG_LEVEL.info: "info",
+    constants.GLOBAL_LOG_LEVEL.debug: "debug",
+    constants.GLOBAL_LOG_LEVEL.trace: "trace",
 }
@@ -21,14 +21,15 @@ def get_config(
     participant_log_level,
     global_log_level,
     beacon_http_url,
-    cl_client_context,
-    el_client_context,
+    cl_context,
+    el_context,
     node_keystore_files,
-    v_min_cpu,
-    v_max_cpu,
-    v_min_mem,
-    v_max_mem,
+    vc_min_cpu,
+    vc_max_cpu,
+    vc_min_mem,
+    vc_max_mem,
     extra_params,
+    extra_env_vars,
     extra_labels,
     tolerations,
     node_selectors,
@@ -40,17 +41,17 @@ def get_config(
     )
     validator_keys_dirpath = shared_utils.path_join(
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
         node_keystore_files.raw_keys_relative_dirpath,
     )
     validator_secrets_dirpath = shared_utils.path_join(
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
         node_keystore_files.raw_secrets_relative_dirpath,
     )
     cmd = [
         "lighthouse",
-        "validator_client",
+        "vc",
         "--debug-level=" + log_level,
         "--testnet-dir=" + constants.GENESIS_CONFIG_MOUNT_PATH_ON_CONTAINER,
         "--validators-dir=" + validator_keys_dirpath,
@@ -63,7 +64,7 @@ def get_config(
         # burn address - If unset, the validator will scream in its logs
         "--suggested-fee-recipient=" + constants.VALIDATING_REWARDS_ACCOUNT,
         "--http",
-        "--http-port={0}".format(validator_client_shared.VALIDATOR_HTTP_PORT_NUM),
+        "--http-port={0}".format(vc_shared.VALIDATOR_HTTP_PORT_NUM),
         "--http-address=0.0.0.0",
         "--http-allow-origin=*",
         "--unencrypted-http-transport",
@@ -71,14 +72,9 @@ def get_config(
         "--metrics",
         "--metrics-address=0.0.0.0",
         "--metrics-allow-origin=*",
-        "--metrics-port={0}".format(
-            validator_client_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM
-        ),
+        "--metrics-port={0}".format(vc_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM),
         # ^^^^^^^^^^^^^^^^^^^ PROMETHEUS CONFIG ^^^^^^^^^^^^^^^^^^^^^
-        "--graffiti="
-        + cl_client_context.client_name
-        + "-"
-        + el_client_context.client_name,
+        "--graffiti=" + cl_context.client_name + "-" + el_context.client_name,
     ]
     if not (constants.NETWORK_NAME.verkle in network or electra_fork_epoch != None):
@@ -89,24 +85,25 @@ def get_config(
     files = {
         constants.GENESIS_DATA_MOUNTPOINT_ON_CLIENTS: el_cl_genesis_data.files_artifact_uuid,
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
     }
+    env = {RUST_BACKTRACE_ENVVAR_NAME: RUST_FULL_BACKTRACE_KEYWORD}
+    env.update(extra_env_vars)
     return ServiceConfig(
         image=image,
-        ports=validator_client_shared.VALIDATOR_CLIENT_USED_PORTS,
+        ports=vc_shared.VALIDATOR_CLIENT_USED_PORTS,
         cmd=cmd,
+        env_vars=env,
         files=files,
-        env_vars={RUST_BACKTRACE_ENVVAR_NAME: RUST_FULL_BACKTRACE_KEYWORD},
-        min_cpu=v_min_cpu,
-        max_cpu=v_max_cpu,
-        min_memory=v_min_mem,
-        max_memory=v_max_mem,
+        min_cpu=vc_min_cpu,
+        max_cpu=vc_max_cpu,
+        min_memory=vc_min_mem,
+        max_memory=vc_max_mem,
         labels=shared_utils.label_maker(
-            constants.VC_CLIENT_TYPE.lighthouse,
+            constants.VC_TYPE.lighthouse,
             constants.CLIENT_TYPES.validator,
             image,
-            cl_client_context.client_name,
+            cl_context.client_name,
             extra_labels,
         ),
         tolerations=tolerations,
...
 constants = import_module("../package_io/constants.star")
 input_parser = import_module("../package_io/input_parser.star")
 shared_utils = import_module("../shared_utils/shared_utils.star")
-validator_client_shared = import_module("./shared.star")
+vc_shared = import_module("./shared.star")
 VERBOSITY_LEVELS = {
-    constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error",
-    constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn",
-    constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info",
-    constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug",
-    constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace",
+    constants.GLOBAL_LOG_LEVEL.error: "error",
+    constants.GLOBAL_LOG_LEVEL.warn: "warn",
+    constants.GLOBAL_LOG_LEVEL.info: "info",
+    constants.GLOBAL_LOG_LEVEL.debug: "debug",
+    constants.GLOBAL_LOG_LEVEL.trace: "trace",
 }
@@ -18,14 +18,15 @@ def get_config(
     participant_log_level,
     global_log_level,
     beacon_http_url,
-    cl_client_context,
-    el_client_context,
+    cl_context,
+    el_context,
     node_keystore_files,
-    v_min_cpu,
-    v_max_cpu,
-    v_min_mem,
-    v_max_mem,
+    vc_min_cpu,
+    vc_max_cpu,
+    vc_min_mem,
+    vc_max_mem,
     extra_params,
+    extra_env_vars,
     extra_labels,
     tolerations,
     node_selectors,
@@ -35,12 +36,12 @@ def get_config(
     )
     validator_keys_dirpath = shared_utils.path_join(
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
         node_keystore_files.raw_keys_relative_dirpath,
     )
     validator_secrets_dirpath = shared_utils.path_join(
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
         node_keystore_files.raw_secrets_relative_dirpath,
     )
@@ -56,20 +57,15 @@ def get_config(
         "--suggestedFeeRecipient=" + constants.VALIDATING_REWARDS_ACCOUNT,
         "--keymanager",
         "--keymanager.authEnabled=true",
-        "--keymanager.port={0}".format(validator_client_shared.VALIDATOR_HTTP_PORT_NUM),
+        "--keymanager.port={0}".format(vc_shared.VALIDATOR_HTTP_PORT_NUM),
         "--keymanager.address=0.0.0.0",
         "--keymanager.cors=*",
         # vvvvvvvvvvvvvvvvvvv PROMETHEUS CONFIG vvvvvvvvvvvvvvvvvvvvv
         "--metrics",
         "--metrics.address=0.0.0.0",
-        "--metrics.port={0}".format(
-            validator_client_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM
-        ),
+        "--metrics.port={0}".format(vc_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM),
         # ^^^^^^^^^^^^^^^^^^^ PROMETHEUS CONFIG ^^^^^^^^^^^^^^^^^^^^^
-        "--graffiti="
-        + cl_client_context.client_name
-        + "-"
-        + el_client_context.client_name,
+        "--graffiti=" + cl_context.client_name + "-" + el_context.client_name,
         "--useProduceBlockV3",
     ]
@@ -79,24 +75,25 @@ def get_config(
     files = {
         constants.GENESIS_DATA_MOUNTPOINT_ON_CLIENTS: el_cl_genesis_data.files_artifact_uuid,
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
     }
     return ServiceConfig(
         image=image,
-        ports=validator_client_shared.VALIDATOR_CLIENT_USED_PORTS,
+        ports=vc_shared.VALIDATOR_CLIENT_USED_PORTS,
         cmd=cmd,
+        env_vars=extra_env_vars,
         files=files,
-        private_ip_address_placeholder=validator_client_shared.PRIVATE_IP_ADDRESS_PLACEHOLDER,
-        min_cpu=v_min_cpu,
-        max_cpu=v_max_cpu,
-        min_memory=v_min_mem,
-        max_memory=v_max_mem,
+        private_ip_address_placeholder=vc_shared.PRIVATE_IP_ADDRESS_PLACEHOLDER,
+        min_cpu=vc_min_cpu,
+        max_cpu=vc_max_cpu,
+        min_memory=vc_min_mem,
+        max_memory=vc_max_mem,
         labels=shared_utils.label_maker(
-            constants.VC_CLIENT_TYPE.lodestar,
+            constants.VC_TYPE.lodestar,
             constants.CLIENT_TYPES.validator,
             image,
-            cl_client_context.client_name,
+            cl_context.client_name,
             extra_labels,
         ),
         tolerations=tolerations,
...
 constants = import_module("../package_io/constants.star")
 shared_utils = import_module("../shared_utils/shared_utils.star")
-validator_client_shared = import_module("./shared.star")
+vc_shared = import_module("./shared.star")
 def get_config(
@@ -8,14 +8,15 @@ def get_config(
     image,
     keymanager_file,
     beacon_http_url,
-    cl_client_context,
-    el_client_context,
+    cl_context,
+    el_context,
     node_keystore_files,
-    v_min_cpu,
-    v_max_cpu,
-    v_min_mem,
-    v_max_mem,
+    vc_min_cpu,
+    vc_max_cpu,
+    vc_min_mem,
+    vc_max_mem,
     extra_params,
+    extra_env_vars,
     extra_labels,
     tolerations,
     node_selectors,
@@ -24,11 +25,11 @@ def get_config(
     validator_secrets_dirpath = ""
     if node_keystore_files != None:
         validator_keys_dirpath = shared_utils.path_join(
-            validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
+            vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
             node_keystore_files.nimbus_keys_relative_dirpath,
         )
         validator_secrets_dirpath = shared_utils.path_join(
-            validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
+            vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
             node_keystore_files.raw_secrets_relative_dirpath,
         )
@@ -38,20 +39,15 @@ def get_config(
         "--secrets-dir=" + validator_secrets_dirpath,
         "--suggested-fee-recipient=" + constants.VALIDATING_REWARDS_ACCOUNT,
         "--keymanager",
-        "--keymanager-port={0}".format(validator_client_shared.VALIDATOR_HTTP_PORT_NUM),
+        "--keymanager-port={0}".format(vc_shared.VALIDATOR_HTTP_PORT_NUM),
         "--keymanager-address=0.0.0.0",
         "--keymanager-allow-origin=*",
         "--keymanager-token-file=" + constants.KEYMANAGER_MOUNT_PATH_ON_CONTAINER,
         # vvvvvvvvvvvvvvvvvvv METRICS CONFIG vvvvvvvvvvvvvvvvvvvvv
         "--metrics",
         "--metrics-address=0.0.0.0",
-        "--metrics-port={0}".format(
-            validator_client_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM
-        ),
-        "--graffiti="
-        + cl_client_context.client_name
-        + "-"
-        + el_client_context.client_name,
+        "--metrics-port={0}".format(vc_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM),
+        "--graffiti=" + cl_context.client_name + "-" + el_context.client_name,
     ]
     if len(extra_params) > 0:
@@ -59,25 +55,26 @@ def get_config(
         cmd.extend([param for param in extra_params])
     files = {
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
         constants.KEYMANAGER_MOUNT_PATH_ON_CLIENTS: keymanager_file,
     }
     return ServiceConfig(
         image=image,
-        ports=validator_client_shared.VALIDATOR_CLIENT_USED_PORTS,
+        ports=vc_shared.VALIDATOR_CLIENT_USED_PORTS,
         cmd=cmd,
+        env_vars=extra_env_vars,
         files=files,
-        private_ip_address_placeholder=validator_client_shared.PRIVATE_IP_ADDRESS_PLACEHOLDER,
-        min_cpu=v_min_cpu,
-        max_cpu=v_max_cpu,
-        min_memory=v_min_mem,
-        max_memory=v_max_mem,
+        private_ip_address_placeholder=vc_shared.PRIVATE_IP_ADDRESS_PLACEHOLDER,
+        min_cpu=vc_min_cpu,
+        max_cpu=vc_max_cpu,
+        min_memory=vc_min_mem,
+        max_memory=vc_max_mem,
         labels=shared_utils.label_maker(
-            constants.VC_CLIENT_TYPE.nimbus,
+            constants.VC_TYPE.nimbus,
             constants.CLIENT_TYPES.validator,
             image,
-            cl_client_context.client_name,
+            cl_context.client_name,
             extra_labels,
         ),
         user=User(uid=0, gid=0),
...
 constants = import_module("../package_io/constants.star")
 shared_utils = import_module("../shared_utils/shared_utils.star")
-validator_client_shared = import_module("./shared.star")
+vc_shared = import_module("./shared.star")
 PRYSM_PASSWORD_MOUNT_DIRPATH_ON_SERVICE_CONTAINER = "/prysm-password"
 PRYSM_BEACON_RPC_PORT = 4000
@@ -10,14 +10,15 @@ def get_config(
     el_cl_genesis_data,
     image,
     beacon_http_url,
-    cl_client_context,
-    el_client_context,
+    cl_context,
+    el_context,
     node_keystore_files,
-    v_min_cpu,
-    v_max_cpu,
-    v_min_mem,
-    v_max_mem,
+    vc_min_cpu,
+    vc_max_cpu,
+    vc_min_mem,
+    vc_max_mem,
     extra_params,
+    extra_env_vars,
     extra_labels,
     prysm_password_relative_filepath,
     prysm_password_artifact_uuid,
@@ -25,7 +26,7 @@ def get_config(
     node_selectors,
 ):
     validator_keys_dirpath = shared_utils.path_join(
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
         node_keystore_files.prysm_relative_dirpath,
     )
     validator_secrets_dirpath = shared_utils.path_join(
@@ -40,7 +41,7 @@ def get_config(
         + "/config.yaml",
         "--beacon-rpc-provider="
         + "{}:{}".format(
-            cl_client_context.ip_addr,
+            cl_context.ip_addr,
             PRYSM_BEACON_RPC_PORT,
         ),
         "--beacon-rest-api-provider=" + beacon_http_url,
@@ -48,19 +49,14 @@ def get_config(
         "--wallet-password-file=" + validator_secrets_dirpath,
         "--suggested-fee-recipient=" + constants.VALIDATING_REWARDS_ACCOUNT,
         "--rpc",
-        "--rpc-port={0}".format(validator_client_shared.VALIDATOR_HTTP_PORT_NUM),
+        "--rpc-port={0}".format(vc_shared.VALIDATOR_HTTP_PORT_NUM),
         "--rpc-host=0.0.0.0",
         # vvvvvvvvvvvvvvvvvvv METRICS CONFIG vvvvvvvvvvvvvvvvvvvvv
         "--disable-monitoring=false",
         "--monitoring-host=0.0.0.0",
-        "--monitoring-port={0}".format(
-            validator_client_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM
-        ),
+        "--monitoring-port={0}".format(vc_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM),
         # ^^^^^^^^^^^^^^^^^^^ METRICS CONFIG ^^^^^^^^^^^^^^^^^^^^^
-        "--graffiti="
-        + cl_client_context.client_name
-        + "-"
-        + el_client_context.client_name,
+        "--graffiti=" + cl_context.client_name + "-" + el_context.client_name,
     ]
     if len(extra_params) > 0:
@@ -69,25 +65,26 @@ def get_config(
     files = {
         constants.GENESIS_DATA_MOUNTPOINT_ON_CLIENTS: el_cl_genesis_data.files_artifact_uuid,
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
         PRYSM_PASSWORD_MOUNT_DIRPATH_ON_SERVICE_CONTAINER: prysm_password_artifact_uuid,
     }
     return ServiceConfig(
         image=image,
-        ports=validator_client_shared.VALIDATOR_CLIENT_USED_PORTS,
+        ports=vc_shared.VALIDATOR_CLIENT_USED_PORTS,
         cmd=cmd,
+        env_vars=extra_env_vars,
         files=files,
-        private_ip_address_placeholder=validator_client_shared.PRIVATE_IP_ADDRESS_PLACEHOLDER,
-        min_cpu=v_min_cpu,
-        max_cpu=v_max_cpu,
-        min_memory=v_min_mem,
-        max_memory=v_max_mem,
+        private_ip_address_placeholder=vc_shared.PRIVATE_IP_ADDRESS_PLACEHOLDER,
+        min_cpu=vc_min_cpu,
+        max_cpu=vc_max_cpu,
+        min_memory=vc_min_mem,
+        max_memory=vc_max_mem,
         labels=shared_utils.label_maker(
-            constants.VC_CLIENT_TYPE.prysm,
+            constants.VC_TYPE.prysm,
             constants.CLIENT_TYPES.validator,
             image,
-            cl_client_context.client_name,
+            cl_context.client_name,
             extra_labels,
         ),
         tolerations=tolerations,
...
 constants = import_module("../package_io/constants.star")
 shared_utils = import_module("../shared_utils/shared_utils.star")
-validator_client_shared = import_module("./shared.star")
+vc_shared = import_module("./shared.star")
 def get_config(
@@ -9,14 +9,15 @@ def get_config(
     keymanager_p12_file,
     image,
     beacon_http_url,
-    cl_client_context,
-    el_client_context,
+    cl_context,
+    el_context,
     node_keystore_files,
-    v_min_cpu,
-    v_max_cpu,
-    v_min_mem,
-    v_max_mem,
+    vc_min_cpu,
+    vc_max_cpu,
+    vc_min_mem,
+    vc_max_mem,
     extra_params,
+    extra_env_vars,
     extra_labels,
     tolerations,
     node_selectors,
@@ -25,11 +26,11 @@ def get_config(
     validator_secrets_dirpath = ""
     if node_keystore_files != None:
         validator_keys_dirpath = shared_utils.path_join(
-            validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
+            vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
             node_keystore_files.teku_keys_relative_dirpath,
         )
         validator_secrets_dirpath = shared_utils.path_join(
-            validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
+            vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT,
             node_keystore_files.teku_secrets_relative_dirpath,
         )
@@ -46,14 +47,12 @@ def get_config(
         "--validators-proposer-default-fee-recipient="
         + constants.VALIDATING_REWARDS_ACCOUNT,
         "--validators-graffiti="
-        + cl_client_context.client_name
+        + cl_context.client_name
         + "-"
-        + el_client_context.client_name,
+        + el_context.client_name,
         "--validator-api-enabled=true",
         "--validator-api-host-allowlist=*",
-        "--validator-api-port={0}".format(
-            validator_client_shared.VALIDATOR_HTTP_PORT_NUM
-        ),
+        "--validator-api-port={0}".format(vc_shared.VALIDATOR_HTTP_PORT_NUM),
         "--validator-api-interface=0.0.0.0",
         "--validator-api-keystore-file="
         + constants.KEYMANAGER_P12_MOUNT_PATH_ON_CONTAINER,
@@ -63,9 +62,7 @@ def get_config(
         "--metrics-enabled=true",
         "--metrics-host-allowlist=*",
         "--metrics-interface=0.0.0.0",
-        "--metrics-port={0}".format(
-            validator_client_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM
-        ),
+        "--metrics-port={0}".format(vc_shared.VALIDATOR_CLIENT_METRICS_PORT_NUM),
     ]
     if len(extra_params) > 0:
@@ -74,26 +71,27 @@ def get_config(
     files = {
         constants.GENESIS_DATA_MOUNTPOINT_ON_CLIENTS: el_cl_genesis_data.files_artifact_uuid,
-        validator_client_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
+        vc_shared.VALIDATOR_CLIENT_KEYS_MOUNTPOINT: node_keystore_files.files_artifact_uuid,
         constants.KEYMANAGER_MOUNT_PATH_ON_CLIENTS: keymanager_file,
         constants.KEYMANAGER_P12_MOUNT_PATH_ON_CLIENTS: keymanager_p12_file,
     }
     return ServiceConfig(
         image=image,
-        ports=validator_client_shared.VALIDATOR_CLIENT_USED_PORTS,
+        ports=vc_shared.VALIDATOR_CLIENT_USED_PORTS,
         cmd=cmd,
+        env_vars=extra_env_vars,
         files=files,
-        private_ip_address_placeholder=validator_client_shared.PRIVATE_IP_ADDRESS_PLACEHOLDER,
-        min_cpu=v_min_cpu,
-        max_cpu=v_max_cpu,
-        min_memory=v_min_mem,
-        max_memory=v_max_mem,
+        private_ip_address_placeholder=vc_shared.PRIVATE_IP_ADDRESS_PLACEHOLDER,
+        min_cpu=vc_min_cpu,
+        max_cpu=vc_max_cpu,
+        min_memory=vc_min_mem,
+        max_memory=vc_max_mem,
         labels=shared_utils.label_maker(
-            constants.VC_CLIENT_TYPE.teku,
+            constants.VC_TYPE.teku,
             constants.CLIENT_TYPES.validator,
             image,
-            cl_client_context.client_name,
+            cl_context.client_name,
             extra_labels,
         ),
         tolerations=tolerations,
...
-def new_validator_client_context(
+def new_vc_context(
     service_name,
     client_name,
     metrics_info,
...
 input_parser = import_module("../package_io/input_parser.star")
 constants = import_module("../package_io/constants.star")
 node_metrics = import_module("../node_metrics_info.star")
-validator_client_context = import_module("./validator_client_context.star")
+vc_context = import_module("./vc_context.star")
 lighthouse = import_module("./lighthouse.star")
 lodestar = import_module("./lodestar.star")
 nimbus = import_module("./nimbus.star")
 prysm = import_module("./prysm.star")
 teku = import_module("./teku.star")
-validator_client_shared = import_module("./shared.star")
+vc_shared = import_module("./shared.star")
 # The defaults for min/max CPU/memory that the validator client can use
 MIN_CPU = 50
@@ -23,22 +23,23 @@ def launch(
     keymanager_file,
     keymanager_p12_file,
     service_name,
-    validator_client_type,
+    vc_type,
     image,
     participant_log_level,
     global_log_level,
-    cl_client_context,
-    el_client_context,
+    cl_context,
+    el_context,
     node_keystore_files,
-    v_min_cpu,
-    v_max_cpu,
-    v_min_mem,
-    v_max_mem,
+    vc_min_cpu,
+    vc_max_cpu,
+    vc_min_mem,
+    vc_max_mem,
     extra_params,
+    extra_env_vars,
     extra_labels,
     prysm_password_relative_filepath,
     prysm_password_artifact_uuid,
-    validator_tolerations,
+    vc_tolerations,
     participant_tolerations,
     global_tolerations,
     node_selectors,
@@ -49,113 +50,121 @@ def launch(
         return None
     tolerations = input_parser.get_client_tolerations(
-        validator_tolerations, participant_tolerations, global_tolerations
+        vc_tolerations, participant_tolerations, global_tolerations
     )
     beacon_http_url = "http://{}:{}".format(
-        cl_client_context.ip_addr,
-        cl_client_context.http_port_num,
+        cl_context.ip_addr,
+        cl_context.http_port_num,
     )
-    v_min_cpu = int(v_min_cpu) if int(v_min_cpu) > 0 else MIN_CPU
-    v_max_cpu = int(v_max_cpu) if int(v_max_cpu) > 0 else MAX_CPU
-    v_min_mem = int(v_min_mem) if int(v_min_mem) > 0 else MIN_MEMORY
-    v_max_mem = int(v_max_mem) if int(v_max_mem) > 0 else MAX_MEMORY
+    vc_min_cpu = int(vc_min_cpu) if int(vc_min_cpu) > 0 else MIN_CPU
+    vc_max_cpu = int(vc_max_cpu) if int(vc_max_cpu) > 0 else MAX_CPU
+    vc_min_mem = int(vc_min_mem) if int(vc_min_mem) > 0 else MIN_MEMORY
+    vc_max_mem = int(vc_max_mem) if int(vc_max_mem) > 0 else MAX_MEMORY
-    if validator_client_type == constants.VC_CLIENT_TYPE.lighthouse:
+    if vc_type == constants.VC_TYPE.lighthouse:
         config = lighthouse.get_config(
             el_cl_genesis_data=launcher.el_cl_genesis_data,
             image=image,
            participant_log_level=participant_log_level,
             global_log_level=global_log_level,
             beacon_http_url=beacon_http_url,
-            cl_client_context=cl_client_context,
-            el_client_context=el_client_context,
+            cl_context=cl_context,
+            el_context=el_context,
             node_keystore_files=node_keystore_files,
-            v_min_cpu=v_min_cpu,
-            v_max_cpu=v_max_cpu,
-            v_min_mem=v_min_mem,
-            v_max_mem=v_max_mem,
+            vc_min_cpu=vc_min_cpu,
+            vc_max_cpu=vc_max_cpu,
+            vc_min_mem=vc_min_mem,
+            vc_max_mem=vc_max_mem,
             extra_params=extra_params,
+            extra_env_vars=extra_env_vars,
             extra_labels=extra_labels,
             tolerations=tolerations,
             node_selectors=node_selectors,
             network=network,  # TODO: remove when deneb rebase is done
             electra_fork_epoch=electra_fork_epoch,  # TODO: remove when deneb rebase is done
         )
-    elif validator_client_type == constants.VC_CLIENT_TYPE.lodestar:
+    elif vc_type == constants.VC_TYPE.lodestar:
         config = lodestar.get_config(
             el_cl_genesis_data=launcher.el_cl_genesis_data,
             image=image,
             participant_log_level=participant_log_level,
             global_log_level=global_log_level,
             beacon_http_url=beacon_http_url,
-            cl_client_context=cl_client_context,
-            el_client_context=el_client_context,
+            cl_context=cl_context,
+            el_context=el_context,
             node_keystore_files=node_keystore_files,
-            v_min_cpu=v_min_cpu,
-            v_max_cpu=v_max_cpu,
-            v_min_mem=v_min_mem,
-            v_max_mem=v_max_mem,
+            vc_min_cpu=vc_min_cpu,
+            vc_max_cpu=vc_max_cpu,
+            vc_min_mem=vc_min_mem,
+            vc_max_mem=vc_max_mem,
             extra_params=extra_params,
+            extra_env_vars=extra_env_vars,
             extra_labels=extra_labels,
             tolerations=tolerations,
             node_selectors=node_selectors,
         )
-    elif validator_client_type == constants.VC_CLIENT_TYPE.teku:
+    elif vc_type == constants.VC_TYPE.teku:
         config = teku.get_config(
             el_cl_genesis_data=launcher.el_cl_genesis_data,
             keymanager_file=keymanager_file,
             keymanager_p12_file=keymanager_p12_file,
             image=image,
             beacon_http_url=beacon_http_url,
-            cl_client_context=cl_client_context,
-            el_client_context=el_client_context,
+            cl_context=cl_context,
+            el_context=el_context,
             node_keystore_files=node_keystore_files,
-            v_min_cpu=v_min_cpu,
-            v_max_cpu=v_max_cpu,
-            v_min_mem=v_min_mem,
-            v_max_mem=v_max_mem,
+            vc_min_cpu=vc_min_cpu,
+            vc_max_cpu=vc_max_cpu,
+            vc_min_mem=vc_min_mem,
+            vc_max_mem=vc_max_mem,
             extra_params=extra_params,
+            extra_env_vars=extra_env_vars,
             extra_labels=extra_labels,
             tolerations=tolerations,
             node_selectors=node_selectors,
         )
-    elif validator_client_type == constants.VC_CLIENT_TYPE.nimbus:
+    elif vc_type == constants.VC_TYPE.nimbus:
         config = nimbus.get_config(
             el_cl_genesis_data=launcher.el_cl_genesis_data,
             keymanager_file=keymanager_file,
             image=image,
             beacon_http_url=beacon_http_url,
-            cl_client_context=cl_client_context,
-            el_client_context=el_client_context,
+            cl_context=cl_context,
+            el_context=el_context,
             node_keystore_files=node_keystore_files,
-            v_min_cpu=v_min_cpu,
-            v_max_cpu=v_max_cpu,
-            v_min_mem=v_min_mem,
-            v_max_mem=v_max_mem,
+            vc_min_cpu=vc_min_cpu,
+            vc_max_cpu=vc_max_cpu,
+            vc_min_mem=vc_min_mem,
+            vc_max_mem=vc_max_mem,
             extra_params=extra_params,
+            extra_env_vars=extra_env_vars,
             extra_labels=extra_labels,
             tolerations=tolerations,
             node_selectors=node_selectors,
         )
-    elif validator_client_type == constants.VC_CLIENT_TYPE.prysm:
+    elif vc_type == constants.VC_TYPE.prysm:
         # Prysm VC only works with Prysm beacon node right now
-        if cl_client_context.client_name != constants.CL_CLIENT_TYPE.prysm:
-            fail("Prysm VC is only compatible with Prysm beacon node")
+        if cl_context.client_name != constants.CL_TYPE.prysm:
+            fail(
+                cl_context.client_name
+                + "Prysm VC is only compatible with Prysm beacon node"
+            )
         config = prysm.get_config(
             el_cl_genesis_data=launcher.el_cl_genesis_data,
             image=image,
             beacon_http_url=beacon_http_url,
-            cl_client_context=cl_client_context,
-            el_client_context=el_client_context,
+            cl_context=cl_context,
+            el_context=el_context,
             node_keystore_files=node_keystore_files,
-            v_min_cpu=v_min_cpu,
-            v_max_cpu=v_max_cpu,
-            v_min_mem=v_min_mem,
-            v_max_mem=v_max_mem,
+            vc_min_cpu=vc_min_cpu,
+            vc_max_cpu=vc_max_cpu,
+            vc_min_mem=vc_min_mem,
+            vc_max_mem=vc_max_mem,
             extra_params=extra_params,
+            extra_env_vars=extra_env_vars,
             extra_labels=extra_labels,
             prysm_password_relative_filepath=prysm_password_relative_filepath,
             prysm_password_artifact_uuid=prysm_password_artifact_uuid,
@@ -163,30 +172,28 @@ def launch(
             node_selectors=node_selectors,
         )
     else:
-        fail("Unsupported validator_client_type: {0}".format(validator_client_type))
+        fail("Unsupported vc_type: {0}".format(vc_type))
     validator_service = plan.add_service(service_name, config)
     validator_metrics_port = validator_service.ports[
-        validator_client_shared.VALIDATOR_CLIENT_METRICS_PORT_ID
+        vc_shared.VALIDATOR_CLIENT_METRICS_PORT_ID
     ]
     validator_metrics_url = "{0}:{1}".format(
         validator_service.ip_address, validator_metrics_port.number
     )
     validator_node_metrics_info = node_metrics.new_node_metrics_info(
-        service_name, validator_client_shared.METRICS_PATH, validator_metrics_url
+        service_name, vc_shared.METRICS_PATH, validator_metrics_url
     )
-    validator_http_port = validator_service.ports[
-        validator_client_shared.VALIDATOR_HTTP_PORT_ID
-    ]
+    validator_http_port = validator_service.ports[vc_shared.VALIDATOR_HTTP_PORT_ID]
-    return validator_client_context.new_validator_client_context(
+    return vc_context.new_vc_context(
         service_name=service_name,
-        client_name=validator_client_type,
+        client_name=vc_type,
         metrics_info=validator_node_metrics_info,
     )
-def new_validator_client_launcher(el_cl_genesis_data):
+def new_vc_launcher(el_cl_genesis_data):
     return struct(el_cl_genesis_data=el_cl_genesis_data)
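Note how `launch()` clamps each renamed resource field: a participant value of 0 (or unset) falls back to the module-level default. The same pattern in plain Python, using the `MIN_CPU = 50` constant visible in the diff (the other three defaults are illustrative assumptions, since the hunk cuts them off):

```python
MIN_CPU, MAX_CPU = 50, 1000        # millicores; only MIN_CPU is from the diff
MIN_MEMORY, MAX_MEMORY = 64, 1024  # MiB; illustrative values


def resolve_resources(vc_min_cpu=0, vc_max_cpu=0, vc_min_mem=0, vc_max_mem=0):
    # Mirrors the Starlark lines:
    #   vc_min_cpu = int(vc_min_cpu) if int(vc_min_cpu) > 0 else MIN_CPU
    vc_min_cpu = int(vc_min_cpu) if int(vc_min_cpu) > 0 else MIN_CPU
    vc_max_cpu = int(vc_max_cpu) if int(vc_max_cpu) > 0 else MAX_CPU
    vc_min_mem = int(vc_min_mem) if int(vc_min_mem) > 0 else MIN_MEMORY
    vc_max_mem = int(vc_max_mem) if int(vc_max_mem) > 0 else MAX_MEMORY
    return vc_min_cpu, vc_max_cpu, vc_min_mem, vc_max_mem


print(resolve_resources(vc_min_cpu=200))  # → (200, 1000, 64, 1024)
```

The `int(...)` coercion matters because the values arrive from participant YAML and may be parsed as strings.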
@@ -18,7 +18,7 @@ MAX_MEMORY = 1024
 def launch(
     plan,
     xatu_sentry_service_name,
-    cl_client_context,
+    cl_context,
     xatu_sentry_params,
     network_params,
     pair_name,
@@ -30,8 +30,8 @@ def launch(
         str(METRICS_PORT_NUMBER),
         pair_name,
         "http://{}:{}".format(
-            cl_client_context.ip_addr,
-            cl_client_context.http_port_num,
+            cl_context.ip_addr,
+            cl_context.http_port_num,
         ),
         xatu_sentry_params.xatu_server_addr,
         network_params.network,
...
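Since the old field names stop working once this commit lands, existing YAML files must be migrated in bulk; the commit ships a `rename.sh` helper for that. The same migration can be sketched in Python — the mapping below covers only the renames visible in this diff (the complete table is in the commit message):

```python
import re

# Subset of the old -> new renames from this commit.
RENAMES = {
    "el_client_type": "el_type",
    "cl_client_type": "cl_type",
    "validator_client_type": "vc_type",
    "validator_tolerations": "vc_tolerations",
    "v_min_cpu": "vc_min_cpu",
    "v_max_cpu": "vc_max_cpu",
    "v_min_mem": "vc_min_mem",
    "v_max_mem": "vc_max_mem",
}


def migrate(text):
    # \b word boundaries keep a short key from matching inside a longer
    # identifier; longest-first ordering is a cheap extra safeguard.
    for old in sorted(RENAMES, key=len, reverse=True):
        text = re.sub(r"\b%s\b" % old, RENAMES[old], text)
    return text


print(migrate("- el_client_type: geth\n  cl_client_type: teku"))
# → - el_type: geth
#     cl_type: teku
```

Because the new names (`vc_min_cpu` and friends) never themselves match an old pattern, the substitution is safe to run more than once on the same file.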