Commit fab341b1 authored by Barnabas Busa, committed by GitHub

refactor!: participant_network & rename participant fields. (#508)

# Important!
Many participant fields have been renamed to be more consistent with the
rest of the package. The following fields have been renamed:
### EL flags
```
el_client_type -> el_type
el_client_image -> el_image
el_client_log_level -> el_log_level
el_client_volume_size -> el_volume_size
```
### CL flags
```
cl_client_type -> cl_type
cl_client_image -> cl_image
cl_client_volume_size -> cl_volume_size
cl_client_log_level -> cl_log_level
beacon_extra_params -> cl_extra_params
beacon_extra_labels -> cl_extra_labels
bn_min_cpu -> cl_min_cpu
bn_max_cpu -> cl_max_cpu
bn_min_mem -> cl_min_mem
bn_max_mem -> cl_max_mem
use_separate_validator_client -> use_separate_vc
```
### Validator flags
```
validator_client_type -> vc_type
validator_tolerations -> vc_tolerations
validator_client_image -> vc_image
validator_extra_params -> vc_extra_params
validator_extra_labels -> vc_extra_labels
v_min_cpu -> vc_min_cpu
v_max_cpu -> vc_max_cpu
v_min_mem -> vc_min_mem
v_max_mem -> vc_max_mem
```
### Global flags
```
global_client_log_level -> global_log_level
```
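
For example, a participant entry that uses the old field names (all taken from the tables above), such as:
```yaml
participants:
  - el_client_type: geth
    el_client_image: ethereum/client-go:latest
    cl_client_type: lighthouse
    use_separate_validator_client: true
    validator_client_type: lodestar
```
is written with the new names as:
```yaml
participants:
  - el_type: geth
    el_image: ethereum/client-go:latest
    cl_type: lighthouse
    use_separate_vc: true
    vc_type: lodestar
```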


Once this PR is merged, the old names will no longer work, and you will
need to bulk-rename the fields in all of your YAML files.

A `rename.sh` bash script has been added that performs the bulk
find-and-replace for you:
```bash
rename.sh yourFile.yaml
```
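
To migrate every configuration file in a directory tree in one pass, the script can be combined with `find` (a sketch, assuming `rename.sh` is executable and sits in the current directory):
```bash
# Apply the bundled rename script to every YAML file under the current directory.
find . -type f \( -name '*.yaml' -o -name '*.yml' \) -exec ./rename.sh {} \;
```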

---------
Co-authored-by: Gyanendra Mishra <anomaly.the@gmail.com>
parent da55be84
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: lighthouse cl_type: lighthouse
count: 1 count: 1
- el_client_type: geth - el_type: geth
cl_client_type: lodestar cl_type: lodestar
count: 1 count: 1
additional_services: additional_services:
- assertoor - assertoor
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: besu - el_type: besu
cl_client_type: prysm cl_type: prysm
- el_client_type: besu - el_type: besu
cl_client_type: nimbus cl_type: nimbus
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: besu - el_type: besu
cl_client_type: lodestar cl_type: lodestar
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:master el_image: ethpandaops/geth:master
cl_client_type: lighthouse cl_type: lighthouse
blobber_enabled: true blobber_enabled: true
blobber_extra_params: blobber_extra_params:
- --proposal-action-frequency=1 - --proposal-action-frequency=1
- "--proposal-action={\"name\": \"blob_gossip_delay\", \"delay_milliseconds\": 1500}" - "--proposal-action={\"name\": \"blob_gossip_delay\", \"delay_milliseconds\": 1500}"
count: 1 count: 1
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:master el_image: ethpandaops/geth:master
cl_client_type: lodestar cl_type: lodestar
count: 1 count: 1
network_params: network_params:
deneb_fork_epoch: 1 deneb_fork_epoch: 1
......
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
network_params: network_params:
network: "dencun-devnet-12" network: "dencun-devnet-12"
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
network_params: network_params:
deneb_fork_epoch: 0 deneb_fork_epoch: 0
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
additional_services: [] additional_services: []
disable_peer_scoring: true disable_peer_scoring: true
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
network_params: network_params:
network: "ephemery" network: "ephemery"
additional_services: [] additional_services: []
participants: participants:
- el_client_type: erigon - el_type: erigon
cl_client_type: teku cl_type: teku
- el_client_type: erigon - el_type: erigon
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: erigon - el_type: erigon
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: erigon - el_type: erigon
cl_client_type: lodestar cl_type: lodestar
additional_services: [] additional_services: []
participants: participants:
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: prysm cl_type: prysm
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: nimbus cl_type: nimbus
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: lodestar cl_type: lodestar
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: geth - el_type: geth
cl_client_type: prysm cl_type: prysm
- el_client_type: geth - el_type: geth
cl_client_type: nimbus cl_type: nimbus
- el_client_type: geth - el_type: geth
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: geth - el_type: geth
cl_client_type: lodestar cl_type: lodestar
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:transition-post-genesis-04b0304 el_image: ethpandaops/geth:transition-post-genesis-04b0304
cl_client_type: lighthouse cl_type: lighthouse
cl_client_image: ethpandaops/lighthouse:verkle-trees-capella-2ffb8a9 cl_image: ethpandaops/lighthouse:verkle-trees-capella-2ffb8a9
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:transition-post-genesis-04b0304 el_image: ethpandaops/geth:transition-post-genesis-04b0304
cl_client_type: lodestar cl_type: lodestar
cl_client_image: ethpandaops/lodestar:g11tech-verge-815364b cl_image: ethpandaops/lodestar:g11tech-verge-815364b
network_params: network_params:
electra_fork_epoch: 1 electra_fork_epoch: 1
network: holesky-shadowfork-verkle network: holesky-shadowfork-verkle
......
participants: participants:
- el_client_type: geth - el_type: geth
el_client_image: ethereum/client-go:v1.13.14 el_image: ethereum/client-go:v1.13.14
cl_client_type: teku cl_type: teku
cl_client_image: consensys/teku:24.2.0 cl_image: consensys/teku:24.2.0
network_params: network_params:
dencun_fork_epoch: 0 dencun_fork_epoch: 0
network: holesky-shadowfork network: holesky-shadowfork
......
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: nethermind - el_type: nethermind
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: erigon - el_type: erigon
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: lighthouse cl_type: lighthouse
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: nethermind - el_type: nethermind
cl_client_type: lodestar cl_type: lodestar
- el_client_type: erigon - el_type: erigon
cl_client_type: lodestar cl_type: lodestar
- el_client_type: besu - el_type: besu
cl_client_type: lodestar cl_type: lodestar
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: lodestar cl_type: lodestar
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
use_separate_validator_client: true use_separate_vc: true
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: nimbus cl_type: nimbus
additional_services: [] additional_services: []
persistent: true persistent: true
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
use_separate_validator_client: true use_separate_vc: true
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: nimbus cl_type: nimbus
additional_services: [] additional_services: []
persistent: true persistent: true
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
network_params: network_params:
capella_fork_epoch: 1 capella_fork_epoch: 1
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
additional_services: additional_services:
- tx_spammer - tx_spammer
- blob_spammer - blob_spammer
......
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
additional_services: additional_services:
- tx_spammer - tx_spammer
- blob_spammer - blob_spammer
......
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
validator_client_type: lodestar vc_type: lodestar
- el_client_type: besu - el_type: besu
cl_client_type: nimbus cl_type: nimbus
use_separate_validator_client: true use_separate_vc: true
validator_client_type: lighthouse vc_type: lighthouse
additional_services: [] additional_services: []
participants: participants:
- el_client_type: nethermind - el_type: nethermind
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: nethermind - el_type: nethermind
cl_client_type: nimbus cl_type: nimbus
- el_client_type: nethermind - el_type: nethermind
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: nethermind - el_type: nethermind
cl_client_type: lodestar cl_type: lodestar
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: nimbus cl_type: nimbus
- el_client_type: nethermind - el_type: nethermind
cl_client_type: nimbus cl_type: nimbus
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
- el_client_type: besu - el_type: besu
cl_client_type: nimbus cl_type: nimbus
- el_client_type: reth - el_type: reth
cl_client_type: nimbus cl_type: nimbus
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: nimbus cl_type: nimbus
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nimbus - el_type: nimbus
cl_client_type: teku cl_type: teku
- el_client_type: nimbus - el_type: nimbus
cl_client_type: prysm cl_type: prysm
- el_client_type: nimbus - el_type: nimbus
cl_client_type: nimbus cl_type: nimbus
- el_client_type: nimbus - el_type: nimbus
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: nimbus - el_type: nimbus
cl_client_type: lodestar cl_type: lodestar
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: nimbus cl_type: nimbus
mev_type: full mev_type: full
participants: participants:
- el_client_type: reth - el_type: reth
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
node_selectors: { node_selectors: {
"kubernetes.io/hostname": testing-1, "kubernetes.io/hostname": testing-1,
} }
- el_client_type: reth - el_type: reth
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
global_node_selectors: { global_node_selectors: {
"kubernetes.io/hostname": testing-2, "kubernetes.io/hostname": testing-2,
} }
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
validator_count: 0 validator_count: 0
parallel_keystore_generation: true parallel_keystore_generation: true
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
validator_count: 0 validator_count: 0
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
parallel_keystore_generation: true parallel_keystore_generation: true
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
validator_count: 0 validator_count: 0
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
parallel_keystore_generation: true parallel_keystore_generation: true
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: geth - el_type: geth
cl_client_type: prysm cl_type: prysm
- el_client_type: geth - el_type: geth
cl_client_type: nimbus cl_type: nimbus
- el_client_type: geth - el_type: geth
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: geth - el_type: geth
cl_client_type: lodestar cl_type: lodestar
additional_services: [] additional_services: []
network_params: network_params:
preregistered_validator_count: 400 preregistered_validator_count: 400
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: prysm cl_type: prysm
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
cl_client_type: prysm cl_type: prysm
- el_client_type: besu - el_type: besu
cl_client_type: prysm cl_type: prysm
- el_client_type: reth - el_type: reth
cl_client_type: prysm cl_type: prysm
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: prysm cl_type: prysm
additional_services: [] additional_services: []
participants: participants:
- el_client_type: reth - el_type: reth
cl_client_type: teku cl_type: teku
- el_client_type: reth - el_type: reth
cl_client_type: prysm cl_type: prysm
- el_client_type: reth - el_type: reth
cl_client_type: nimbus cl_type: nimbus
- el_client_type: reth - el_type: reth
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: prysm cl_type: prysm
- el_client_type: erigon - el_type: erigon
el_image: ethpandaops/erigon:devel-d754b29 # this is a temp fix, till upstream is fixed
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: besu - el_type: besu
cl_client_type: lighthouse cl_type: lighthouse
- el_client_type: reth - el_type: reth
cl_client_type: lodestar cl_type: lodestar
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: nimbus cl_type: nimbus
network_params: network_params:
network: sepolia network: sepolia
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: nimbus cl_type: nimbus
use_separate_validator_client: true use_separate_vc: true
validator_count: 0 validator_count: 0
- el_client_type: nethermind - el_type: nethermind
cl_client_type: nimbus cl_type: nimbus
use_separate_validator_client: true use_separate_vc: true
- el_client_type: erigon - el_type: erigon
cl_client_type: nimbus cl_type: nimbus
use_separate_validator_client: true use_separate_vc: true
- el_client_type: besu - el_type: besu
cl_client_type: nimbus cl_type: nimbus
use_separate_validator_client: true use_separate_vc: true
- el_client_type: reth - el_type: reth
cl_client_type: nimbus cl_type: nimbus
use_separate_validator_client: true use_separate_vc: true
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: nimbus cl_type: nimbus
use_separate_validator_client: true use_separate_vc: true
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
validator_count: 0 validator_count: 0
use_separate_validator_client: true use_separate_vc: true
- el_client_type: nethermind - el_type: nethermind
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
- el_client_type: erigon - el_type: erigon
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
- el_client_type: besu - el_type: besu
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
- el_client_type: reth - el_type: reth
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
additional_services: [] additional_services: []
participants: participants:
- el_client_type: geth - el_type: geth
cl_client_type: teku cl_type: teku
- el_client_type: nethermind - el_type: nethermind
cl_client_type: teku cl_type: teku
- el_client_type: erigon - el_type: erigon
cl_client_type: teku cl_type: teku
- el_client_type: besu - el_type: besu
cl_client_type: teku cl_type: teku
- el_client_type: reth - el_type: reth
cl_client_type: teku cl_type: teku
- el_client_type: ethereumjs - el_type: ethereumjs
cl_client_type: teku cl_type: teku
additional_services: [] additional_services: []
participants: participants:
- el_client_type: reth - el_type: reth
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
cl_tolerations: cl_tolerations:
- key: "node-role.kubernetes.io/master1" - key: "node-role.kubernetes.io/master1"
operator: "Exists" operator: "Exists"
...@@ -13,20 +13,20 @@ participants: ...@@ -13,20 +13,20 @@ participants:
- key: "node-role.kubernetes.io/master3" - key: "node-role.kubernetes.io/master3"
operator: "Exists" operator: "Exists"
effect: "NoSchedule" effect: "NoSchedule"
validator_tolerations: vc_tolerations:
- key: "node-role.kubernetes.io/master4" - key: "node-role.kubernetes.io/master4"
operator: "Exists" operator: "Exists"
effect: "NoSchedule" effect: "NoSchedule"
- el_client_type: reth - el_type: reth
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
tolerations: tolerations:
- key: "node-role.kubernetes.io/master5" - key: "node-role.kubernetes.io/master5"
operator: "Exists" operator: "Exists"
effect: "NoSchedule" effect: "NoSchedule"
- el_client_type: reth - el_type: reth
cl_client_type: teku cl_type: teku
use_separate_validator_client: true use_separate_vc: true
additional_services: additional_services:
- dora - dora
global_tolerations: global_tolerations:
......
participants: participants:
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:kaustinen-with-shapella-0b110bd el_image: ethpandaops/geth:kaustinen-with-shapella-0b110bd
cl_client_type: lighthouse cl_type: lighthouse
cl_client_image: ethpandaops/lighthouse:verkle-trees-capella-2ffb8a9 cl_image: ethpandaops/lighthouse:verkle-trees-capella-2ffb8a9
count: 2 count: 2
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:kaustinen-with-shapella-0b110bd el_image: ethpandaops/geth:kaustinen-with-shapella-0b110bd
cl_client_type: lodestar cl_type: lodestar
cl_client_image: ethpandaops/lodestar:g11tech-verge-815364b cl_image: ethpandaops/lodestar:g11tech-verge-815364b
network_params: network_params:
network: verkle-gen-devnet-4 network: verkle-gen-devnet-4
participants: participants:
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:kaustinen-with-shapella-0b110bd el_image: ethpandaops/geth:kaustinen-with-shapella-0b110bd
cl_client_type: lighthouse cl_type: lighthouse
cl_client_image: ethpandaops/lighthouse:verkle-trees-capella-2ffb8a9 cl_image: ethpandaops/lighthouse:verkle-trees-capella-2ffb8a9
count: 2 count: 2
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:kaustinen-with-shapella-0b110bd el_image: ethpandaops/geth:kaustinen-with-shapella-0b110bd
cl_client_type: lodestar cl_type: lodestar
cl_client_image: ethpandaops/lodestar:g11tech-verge-815364b cl_image: ethpandaops/lodestar:g11tech-verge-815364b
count: 2 count: 2
network_params: network_params:
electra_fork_epoch: 0 electra_fork_epoch: 0
......
participants: participants:
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:transition-post-genesis-04b0304 el_image: ethpandaops/geth:transition-post-genesis-04b0304
cl_client_type: lighthouse cl_type: lighthouse
cl_client_image: ethpandaops/lighthouse:verkle-trees-capella-2ffb8a9 cl_image: ethpandaops/lighthouse:verkle-trees-capella-2ffb8a9
count: 2 count: 2
- el_client_type: geth - el_type: geth
el_client_image: ethpandaops/geth:transition-post-genesis-04b0304 el_image: ethpandaops/geth:transition-post-genesis-04b0304
cl_client_type: lodestar cl_type: lodestar
cl_client_image: ethpandaops/lodestar:g11tech-verge-815364b cl_image: ethpandaops/lodestar:g11tech-verge-815364b
network_params: network_params:
electra_fork_epoch: 1 electra_fork_epoch: 1
additional_services: additional_services:
......
...@@ -63,7 +63,7 @@ Then the validator keys are generated. A tool called [eth2-val-tools](https://gi ...@@ -63,7 +63,7 @@ Then the validator keys are generated. A tool called [eth2-val-tools](https://gi
### Starting EL clients ### Starting EL clients
Next, we plug the generated genesis data [into EL client "launchers"](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/el) to start a mining network of EL nodes. The launchers come with a `launch` function that consumes EL genesis data and produces information about the running EL client node. Running EL node information is represented by [an `el_client_context` struct](https://github.com/kurtosis-tech/ethereum-package/blob/main/src/participant_network/el/el_client_context.star). Each EL client type has its own launcher (e.g. [Geth](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/el/geth), [Besu](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/el/besu)) because each EL client will require different environment variables and flags to be set when launching the client's container. Next, we plug the generated genesis data [into EL client "launchers"](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/el) to start a mining network of EL nodes. The launchers come with a `launch` function that consumes EL genesis data and produces information about the running EL client node. Running EL node information is represented by [an `el_context` struct](https://github.com/kurtosis-tech/ethereum-package/blob/main/src/participant_network/el/el_context.star). Each EL client type has its own launcher (e.g. [Geth](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/el/geth), [Besu](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/el/besu)) because each EL client will require different environment variables and flags to be set when launching the client's container.
### Starting CL clients ### Starting CL clients
...@@ -71,9 +71,9 @@ Once CL genesis data and keys have been created, the CL client nodes are started ...@@ -71,9 +71,9 @@ Once CL genesis data and keys have been created, the CL client nodes are started
- CL client launchers implement come with a `launch` method - CL client launchers implement come with a `launch` method
- One CL client launcher exists per client type (e.g. [Nimbus](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/cl/nimbus), [Lighthouse](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/cl/lighthouse)) - One CL client launcher exists per client type (e.g. [Nimbus](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/cl/nimbus), [Lighthouse](https://github.com/kurtosis-tech/ethereum-package/tree/main/src/participant_network/cl/lighthouse))
- Launched CL node information is tracked in [a `cl_client_context` struct](https://github.com/kurtosis-tech/ethereum-package/blob/main/src/participant_network/cl/cl_client_context.star) - Launched CL node information is tracked in [a `cl_context` struct](https://github.com/kurtosis-tech/ethereum-package/blob/main/src/participant_network/cl/cl_context.star)
There are only two major difference between CL client and EL client launchers. First, the `cl_client_launcher.launch` method also consumes an `el_client_context`, because each CL client is connected in a 1:1 relationship with an EL client. Second, because CL clients have keys, the keystore files are passed in to the `launch` function as well. There are only two major difference between CL client and EL client launchers. First, the `cl_client_launcher.launch` method also consumes an `el_context`, because each CL client is connected in a 1:1 relationship with an EL client. Second, because CL clients have keys, the keystore files are passed in to the `launch` function as well.
## Auxiliary Services ## Auxiliary Services
......
...@@ -99,7 +99,7 @@ def run(plan, args={}): ...@@ -99,7 +99,7 @@ def run(plan, args={}):
plan, plan,
args_with_right_defaults.participants, args_with_right_defaults.participants,
network_params, network_params,
args_with_right_defaults.global_client_log_level, args_with_right_defaults.global_log_level,
jwt_file, jwt_file,
keymanager_file, keymanager_file,
keymanager_p12_file, keymanager_p12_file,
...@@ -112,20 +112,20 @@ def run(plan, args={}): ...@@ -112,20 +112,20 @@ def run(plan, args={}):
plan.print( plan.print(
"NODE JSON RPC URI: '{0}:{1}'".format( "NODE JSON RPC URI: '{0}:{1}'".format(
all_participants[0].el_client_context.ip_addr, all_participants[0].el_context.ip_addr,
all_participants[0].el_client_context.rpc_port_num, all_participants[0].el_context.rpc_port_num,
) )
) )
all_el_client_contexts = [] all_el_contexts = []
all_cl_client_contexts = [] all_cl_contexts = []
all_validator_client_contexts = [] all_vc_contexts = []
all_ethereum_metrics_exporter_contexts = [] all_ethereum_metrics_exporter_contexts = []
all_xatu_sentry_contexts = [] all_xatu_sentry_contexts = []
for participant in all_participants: for participant in all_participants:
all_el_client_contexts.append(participant.el_client_context) all_el_contexts.append(participant.el_context)
all_cl_client_contexts.append(participant.cl_client_context) all_cl_contexts.append(participant.cl_context)
all_validator_client_contexts.append(participant.validator_client_context) all_vc_contexts.append(participant.vc_context)
all_ethereum_metrics_exporter_contexts.append( all_ethereum_metrics_exporter_contexts.append(
participant.ethereum_metrics_exporter_context participant.ethereum_metrics_exporter_context
) )
...@@ -138,13 +138,13 @@ def run(plan, args={}): ...@@ -138,13 +138,13 @@ def run(plan, args={}):
ranges = validator_ranges.generate_validator_ranges( ranges = validator_ranges.generate_validator_ranges(
plan, plan,
validator_ranges_config_template, validator_ranges_config_template,
all_cl_client_contexts, all_cl_contexts,
args_with_right_defaults.participants, args_with_right_defaults.participants,
) )
fuzz_target = "http://{0}:{1}".format( fuzz_target = "http://{0}:{1}".format(
all_el_client_contexts[0].ip_addr, all_el_contexts[0].ip_addr,
all_el_client_contexts[0].rpc_port_num, all_el_contexts[0].rpc_port_num,
) )
# Broadcaster forwards requests, sent to it, to all nodes in parallel # Broadcaster forwards requests, sent to it, to all nodes in parallel
...@@ -152,7 +152,7 @@ def run(plan, args={}): ...@@ -152,7 +152,7 @@ def run(plan, args={}):
args_with_right_defaults.additional_services.remove("broadcaster") args_with_right_defaults.additional_services.remove("broadcaster")
broadcaster_service = broadcaster.launch_broadcaster( broadcaster_service = broadcaster.launch_broadcaster(
plan, plan,
all_el_client_contexts, all_el_contexts,
global_node_selectors, global_node_selectors,
) )
fuzz_target = "http://{0}:{1}".format( fuzz_target = "http://{0}:{1}".format(
...@@ -174,18 +174,18 @@ def run(plan, args={}): ...@@ -174,18 +174,18 @@ def run(plan, args={}):
and args_with_right_defaults.mev_type == MOCK_MEV_TYPE and args_with_right_defaults.mev_type == MOCK_MEV_TYPE
): ):
el_uri = "{0}:{1}".format( el_uri = "{0}:{1}".format(
all_el_client_contexts[0].ip_addr, all_el_contexts[0].ip_addr,
all_el_client_contexts[0].engine_rpc_port_num, all_el_contexts[0].engine_rpc_port_num,
) )
beacon_uri = "{0}:{1}".format( beacon_uri = "{0}:{1}".format(
all_cl_client_contexts[0].ip_addr, all_cl_client_contexts[0].http_port_num all_cl_contexts[0].ip_addr, all_cl_contexts[0].http_port_num
) )
endpoint = mock_mev.launch_mock_mev( endpoint = mock_mev.launch_mock_mev(
plan, plan,
el_uri, el_uri,
beacon_uri, beacon_uri,
raw_jwt_secret, raw_jwt_secret,
args_with_right_defaults.global_client_log_level, args_with_right_defaults.global_log_level,
global_node_selectors, global_node_selectors,
) )
mev_endpoints.append(endpoint) mev_endpoints.append(endpoint)
...@@ -194,16 +194,16 @@ def run(plan, args={}): ...@@ -194,16 +194,16 @@ def run(plan, args={}):
and args_with_right_defaults.mev_type == FULL_MEV_TYPE and args_with_right_defaults.mev_type == FULL_MEV_TYPE
): ):
builder_uri = "http://{0}:{1}".format( builder_uri = "http://{0}:{1}".format(
all_el_client_contexts[-1].ip_addr, all_el_client_contexts[-1].rpc_port_num all_el_contexts[-1].ip_addr, all_el_contexts[-1].rpc_port_num
) )
beacon_uris = ",".join( beacon_uris = ",".join(
[ [
"http://{0}:{1}".format(context.ip_addr, context.http_port_num) "http://{0}:{1}".format(context.ip_addr, context.http_port_num)
for context in all_cl_client_contexts for context in all_cl_contexts
] ]
) )
first_cl_client = all_cl_client_contexts[0] first_cl_client = all_cl_contexts[0]
first_client_beacon_name = first_cl_client.beacon_service_name first_client_beacon_name = first_cl_client.beacon_service_name
contract_owner, normal_user = genesis_constants.PRE_FUNDED_ACCOUNTS[6:8] contract_owner, normal_user = genesis_constants.PRE_FUNDED_ACCOUNTS[6:8]
mev_flood.launch_mev_flood( mev_flood.launch_mev_flood(
...@@ -263,8 +263,8 @@ def run(plan, args={}): ...@@ -263,8 +263,8 @@ def run(plan, args={}):
mev_boost_service_name = "{0}-{1}-{2}-{3}".format( mev_boost_service_name = "{0}-{1}-{2}-{3}".format(
input_parser.MEV_BOOST_SERVICE_NAME_PREFIX, input_parser.MEV_BOOST_SERVICE_NAME_PREFIX,
index_str, index_str,
participant.cl_client_type, participant.cl_type,
participant.el_client_type, participant.el_type,
) )
mev_boost_context = mev_boost.launch( mev_boost_context = mev_boost.launch(
plan, plan,
...@@ -306,7 +306,7 @@ def run(plan, args={}): ...@@ -306,7 +306,7 @@ def run(plan, args={}):
plan, plan,
genesis_constants.PRE_FUNDED_ACCOUNTS, genesis_constants.PRE_FUNDED_ACCOUNTS,
fuzz_target, fuzz_target,
all_cl_client_contexts[0], all_cl_contexts[0],
network_params.deneb_fork_epoch, network_params.deneb_fork_epoch,
network_params.seconds_per_slot, network_params.seconds_per_slot,
network_params.genesis_delay, network_params.genesis_delay,
...@@ -319,8 +319,8 @@ def run(plan, args={}): ...@@ -319,8 +319,8 @@ def run(plan, args={}):
goomy_blob.launch_goomy_blob( goomy_blob.launch_goomy_blob(
plan, plan,
genesis_constants.PRE_FUNDED_ACCOUNTS, genesis_constants.PRE_FUNDED_ACCOUNTS,
all_el_client_contexts, all_el_contexts,
all_cl_client_contexts[0], all_cl_contexts[0],
network_params.seconds_per_slot, network_params.seconds_per_slot,
goomy_blob_params, goomy_blob_params,
global_node_selectors, global_node_selectors,
...@@ -336,7 +336,7 @@ def run(plan, args={}): ...@@ -336,7 +336,7 @@ def run(plan, args={}):
el_forkmon.launch_el_forkmon( el_forkmon.launch_el_forkmon(
plan, plan,
el_forkmon_config_template, el_forkmon_config_template,
all_el_client_contexts, all_el_contexts,
global_node_selectors, global_node_selectors,
) )
plan.print("Successfully launched execution layer forkmon") plan.print("Successfully launched execution layer forkmon")
...@@ -345,7 +345,7 @@ def run(plan, args={}): ...@@ -345,7 +345,7 @@ def run(plan, args={}):
beacon_metrics_gazer_prometheus_metrics_job = ( beacon_metrics_gazer_prometheus_metrics_job = (
beacon_metrics_gazer.launch_beacon_metrics_gazer( beacon_metrics_gazer.launch_beacon_metrics_gazer(
plan, plan,
all_cl_client_contexts, all_cl_contexts,
network_params, network_params,
global_node_selectors, global_node_selectors,
) )
...@@ -359,7 +359,7 @@ def run(plan, args={}): ...@@ -359,7 +359,7 @@ def run(plan, args={}):
plan.print("Launching blockscout") plan.print("Launching blockscout")
blockscout_sc_verif_url = blockscout.launch_blockscout( blockscout_sc_verif_url = blockscout.launch_blockscout(
plan, plan,
all_el_client_contexts, all_el_contexts,
persistent, persistent,
global_node_selectors, global_node_selectors,
) )
...@@ -370,7 +370,7 @@ def run(plan, args={}): ...@@ -370,7 +370,7 @@ def run(plan, args={}):
dora.launch_dora( dora.launch_dora(
plan, plan,
dora_config_template, dora_config_template,
all_cl_client_contexts, all_cl_contexts,
el_cl_data_files_artifact_uuid, el_cl_data_files_artifact_uuid,
network_params.electra_fork_epoch, network_params.electra_fork_epoch,
network_params.network, network_params.network,
...@@ -381,8 +381,8 @@ def run(plan, args={}): ...@@ -381,8 +381,8 @@ def run(plan, args={}):
plan.print("Launching blobscan") plan.print("Launching blobscan")
blobscan.launch_blobscan( blobscan.launch_blobscan(
plan, plan,
all_cl_client_contexts, all_cl_contexts,
all_el_client_contexts, all_el_contexts,
network_params.network_id, network_params.network_id,
persistent, persistent,
global_node_selectors, global_node_selectors,
...@@ -396,8 +396,8 @@ def run(plan, args={}): ...@@ -396,8 +396,8 @@ def run(plan, args={}):
full_beaconchain_explorer.launch_full_beacon( full_beaconchain_explorer.launch_full_beacon(
plan, plan,
full_beaconchain_explorer_config_template, full_beaconchain_explorer_config_template,
all_cl_client_contexts, all_cl_contexts,
all_el_client_contexts, all_el_contexts,
persistent, persistent,
global_node_selectors, global_node_selectors,
) )
...@@ -436,9 +436,9 @@ def run(plan, args={}): ...@@ -436,9 +436,9 @@ def run(plan, args={}):
plan.print("Launching prometheus...") plan.print("Launching prometheus...")
prometheus_private_url = prometheus.launch_prometheus( prometheus_private_url = prometheus.launch_prometheus(
plan, plan,
all_el_client_contexts, all_el_contexts,
all_cl_client_contexts, all_cl_contexts,
all_validator_client_contexts, all_vc_contexts,
prometheus_additional_metrics_jobs, prometheus_additional_metrics_jobs,
all_ethereum_metrics_exporter_contexts, all_ethereum_metrics_exporter_contexts,
all_xatu_sentry_contexts, all_xatu_sentry_contexts,
...@@ -458,7 +458,7 @@ def run(plan, args={}): ...@@ -458,7 +458,7 @@ def run(plan, args={}):
if args_with_right_defaults.wait_for_finalization: if args_with_right_defaults.wait_for_finalization:
plan.print("Waiting for the first finalized epoch") plan.print("Waiting for the first finalized epoch")
first_cl_client = all_cl_client_contexts[0] first_cl_client = all_cl_contexts[0]
first_client_beacon_name = first_cl_client.beacon_service_name first_client_beacon_name = first_cl_client.beacon_service_name
epoch_recipe = GetHttpRequestRecipe( epoch_recipe = GetHttpRequestRecipe(
endpoint="/eth/v1/beacon/states/head/finality_checkpoints", endpoint="/eth/v1/beacon/states/head/finality_checkpoints",
......
participants: participants:
- el_client_type: geth # EL
el_client_image: ethereum/client-go:latest - el_type: geth
el_client_log_level: "" el_image: ethereum/client-go:latest
el_extra_params: [] el_log_level: ""
el_extra_env_vars: {}
el_extra_labels: {} el_extra_labels: {}
el_extra_params: []
el_tolerations: [] el_tolerations: []
cl_client_type: lighthouse el_volume_size: 0
cl_client_image: sigp/lighthouse:latest
cl_client_log_level: ""
cl_tolerations: []
validator_tolerations: []
tolerations: []
node_selectors: {}
beacon_extra_params: []
beacon_extra_labels: {}
validator_extra_params: []
validator_extra_labels: {}
builder_network_params: null
validator_count: null
snooper_enabled: false
ethereum_metrics_exporter_enabled: false
xatu_sentry_enabled: false
el_min_cpu: 0 el_min_cpu: 0
el_max_cpu: 0 el_max_cpu: 0
el_min_mem: 0 el_min_mem: 0
el_max_mem: 0 el_max_mem: 0
bn_min_cpu: 0 # CL
bn_max_cpu: 0 cl_type: lighthouse
bn_min_mem: 0 cl_image: sigp/lighthouse:latest
bn_max_mem: 0 cl_log_level: ""
v_min_cpu: 0 cl_extra_env_vars: {}
v_max_cpu: 0 cl_extra_labels: {}
v_min_mem: 0 cl_extra_params: []
v_max_mem: 0 cl_tolerations: []
cl_volume_size: 0
cl_min_cpu: 0
cl_max_cpu: 0
cl_min_mem: 0
cl_max_mem: 0
use_separate_vc: true
# Validator
vc_type: lighthouse
vc_image: sigp/lighthouse:latest
vc_log_level: ""
vc_extra_env_vars: {}
vc_extra_labels: {}
vc_extra_params: []
vc_tolerations: []
vc_min_cpu: 0
vc_max_cpu: 0
vc_min_mem: 0
vc_max_mem: 0
validator_count: null
# participant specific
node_selectors: {}
tolerations: []
count: 2 count: 2
snooper_enabled: false
ethereum_metrics_exporter_enabled: false
xatu_sentry_enabled: false
prometheus_config: prometheus_config:
scrape_interval: 15s scrape_interval: 15s
labels: {} labels: {}
blobber_enabled: false blobber_enabled: false
blobber_extra_params: [] blobber_extra_params: []
builder_network_params: null
network_params: network_params:
network: kurtosis
network_id: "3151908" network_id: "3151908"
deposit_contract_address: "0x4242424242424242424242424242424242424242" deposit_contract_address: "0x4242424242424242424242424242424242424242"
seconds_per_slot: 12 seconds_per_slot: 12
...@@ -52,14 +66,13 @@ network_params: ...@@ -52,14 +66,13 @@ network_params:
genesis_delay: 20 genesis_delay: 20
max_churn: 8 max_churn: 8
ejection_balance: 16000000000 ejection_balance: 16000000000
eth1_follow_distance: 2048
min_validator_withdrawability_delay: 256
shard_committee_period: 256
capella_fork_epoch: 0 capella_fork_epoch: 0
deneb_fork_epoch: 4 deneb_fork_epoch: 4
electra_fork_epoch: null electra_fork_epoch: null
network: kurtosis
min_validator_withdrawability_delay: 256
shard_committee_period: 256
network_sync_base_url: https://ethpandaops-ethereum-node-snapshots.ams3.digitaloceanspaces.com/ network_sync_base_url: https://ethpandaops-ethereum-node-snapshots.ams3.digitaloceanspaces.com/
additional_services: additional_services:
- tx_spammer - tx_spammer
- blob_spammer - blob_spammer
...@@ -67,14 +80,34 @@ additional_services: ...@@ -67,14 +80,34 @@ additional_services:
- beacon_metrics_gazer - beacon_metrics_gazer
- dora - dora
- prometheus_grafana - prometheus_grafana
tx_spammer_params:
tx_spammer_extra_args: []
goomy_blob_params:
goomy_blob_args: []
assertoor_params:
image: ""
run_stability_check: true
run_block_proposal_check: true
run_transaction_test: false
run_blob_transaction_test: false
run_opcodes_transaction_test: false
run_lifecycle_test: false
tests: []
wait_for_finalization: false wait_for_finalization: false
global_client_log_level: info global_log_level: info
snooper_enabled: false snooper_enabled: false
ethereum_metrics_exporter_enabled: false ethereum_metrics_exporter_enabled: false
parallel_keystore_generation: false parallel_keystore_generation: false
disable_peer_scoring: false
grafana_additional_dashboards: []
persistent: false
mev_type: null mev_type: null
mev_params: mev_params:
mev_relay_image: flashbots/mev-boost-relay mev_relay_image: flashbots/mev-boost-relay
mev_builder_image: ethpandaops/flashbots-builder:main
mev_builder_cl_image: sigp/lighthouse:latest
mev_boost_image: flashbots/mev-boost
mev_boost_args: ["mev-boost", "--relay-check"]
mev_relay_api_extra_args: [] mev_relay_api_extra_args: []
mev_relay_housekeeper_extra_args: [] mev_relay_housekeeper_extra_args: []
mev_relay_website_extra_args: [] mev_relay_website_extra_args: []
...@@ -85,10 +118,22 @@ mev_params: ...@@ -85,10 +118,22 @@ mev_params:
mev_flood_image: flashbots/mev-flood mev_flood_image: flashbots/mev-flood
mev_flood_extra_args: [] mev_flood_extra_args: []
mev_flood_seconds_per_bundle: 15 mev_flood_seconds_per_bundle: 15
mev_boost_image: flashbots/mev-boost custom_flood_params:
mev_boost_args: ["mev-boost", "--relay-check"] interval_between_transactions: 1
grafana_additional_dashboards: []
persistent: false
xatu_sentry_enabled: false xatu_sentry_enabled: false
xatu_sentry_params:
xatu_sentry_image: ethpandaops/xatu-sentry
xatu_server_addr: localhost:8000
xatu_server_tls: false
xatu_server_headers: {}
beacon_subscriptions:
- attestation
- block
- chain_reorg
- finalized_checkpoint
- head
- voluntary_exit
- contribution_and_proof
- blob_sidecar
global_tolerations: [] global_tolerations: []
global_node_selectors: {} global_node_selectors: {}
#!/bin/bash
# Helper function to perform replacements
perform_replacements() {
local input_file="$1"
shift
local replacements=("$@")
for ((i = 0; i < ${#replacements[@]}; i+=2)); do
original="${replacements[$i]}"
replacement="${replacements[$i+1]}"
sed -i -- "s/$original/$replacement/g" "$input_file"
done
}
# Check if an input file is provided
if [ $# -eq 0 ]; then
echo "Usage: $0 <input_file>"
exit 1
fi
# Define the input YAML file
input_file="$1"
# Define the replacement pairs as a list
replacements=(
el_client_type
el_type
el_client_image
el_image
el_client_log_level
el_log_level
el_client_volume_size
el_volume_size
cl_client_type
cl_type
cl_client_image
cl_image
cl_client_volume_size
cl_volume_size
cl_client_log_level
cl_log_level
beacon_extra_params
cl_extra_params
beacon_extra_labels
cl_extra_labels
bn_min_cpu
cl_min_cpu
bn_max_cpu
cl_max_cpu
bn_min_mem
cl_min_mem
bn_max_mem
cl_max_mem
use_separate_validator_client
use_separate_vc
validator_client_type
vc_type
validator_tolerations
vc_tolerations
validator_client_image
vc_image
validator_extra_params
vc_extra_params
validator_extra_labels
vc_extra_labels
v_min_cpu
vc_min_cpu
v_max_cpu
vc_max_cpu
v_min_mem
vc_min_mem
v_max_mem
vc_max_mem
global_client_log_level
global_log_level
)
# Perform replacements
perform_replacements "$input_file" "${replacements[@]}"
echo "Replacements completed."
...@@ -39,12 +39,12 @@ def launch_assertoor( ...@@ -39,12 +39,12 @@ def launch_assertoor(
global_node_selectors, global_node_selectors,
): ):
all_client_info = [] all_client_info = []
validator_client_info = [] vc_info = []
for index, participant in enumerate(participant_contexts): for index, participant in enumerate(participant_contexts):
participant_config = participant_configs[index] participant_config = participant_configs[index]
cl_client = participant.cl_client_context cl_client = participant.cl_context
el_client = participant.el_client_context el_client = participant.el_context
all_client_info.append( all_client_info.append(
new_client_info( new_client_info(
...@@ -57,7 +57,7 @@ def launch_assertoor( ...@@ -57,7 +57,7 @@ def launch_assertoor(
) )
if participant_config.validator_count != 0: if participant_config.validator_count != 0:
validator_client_info.append( vc_info.append(
new_client_info( new_client_info(
cl_client.ip_addr, cl_client.ip_addr,
cl_client.http_port_num, cl_client.http_port_num,
...@@ -68,7 +68,7 @@ def launch_assertoor( ...@@ -68,7 +68,7 @@ def launch_assertoor(
) )
template_data = new_config_template_data( template_data = new_config_template_data(
HTTP_PORT_NUMBER, all_client_info, validator_client_info, assertoor_params HTTP_PORT_NUMBER, all_client_info, vc_info, assertoor_params
) )
template_and_data = shared_utils.new_template_and_data( template_and_data = shared_utils.new_template_and_data(
...@@ -134,9 +134,7 @@ def get_config( ...@@ -134,9 +134,7 @@ def get_config(
) )
def new_config_template_data( def new_config_template_data(listen_port_num, client_info, vc_info, assertoor_params):
listen_port_num, client_info, validator_client_info, assertoor_params
):
additional_tests = [] additional_tests = []
for index, testcfg in enumerate(assertoor_params.tests): for index, testcfg in enumerate(assertoor_params.tests):
if type(testcfg) == "dict": if type(testcfg) == "dict":
...@@ -153,7 +151,7 @@ def new_config_template_data( ...@@ -153,7 +151,7 @@ def new_config_template_data(
return { return {
"ListenPortNum": listen_port_num, "ListenPortNum": listen_port_num,
"ClientInfo": client_info, "ClientInfo": client_info,
"ValidatorClientInfo": validator_client_info, "ValidatorClientInfo": vc_info,
"RunStabilityCheck": assertoor_params.run_stability_check, "RunStabilityCheck": assertoor_params.run_stability_check,
"RunBlockProposalCheck": assertoor_params.run_block_proposal_check, "RunBlockProposalCheck": assertoor_params.run_block_proposal_check,
"RunLifecycleTest": assertoor_params.run_lifecycle_test, "RunLifecycleTest": assertoor_params.run_lifecycle_test,
......
...@@ -33,13 +33,13 @@ MAX_MEMORY = 300 ...@@ -33,13 +33,13 @@ MAX_MEMORY = 300
def launch_beacon_metrics_gazer( def launch_beacon_metrics_gazer(
plan, plan,
cl_client_contexts, cl_contexts,
network_params, network_params,
global_node_selectors, global_node_selectors,
): ):
config = get_config( config = get_config(
cl_client_contexts[0].ip_addr, cl_contexts[0].ip_addr,
cl_client_contexts[0].http_port_num, cl_contexts[0].http_port_num,
global_node_selectors, global_node_selectors,
) )
......
...@@ -14,7 +14,7 @@ def launch_blob_spammer( ...@@ -14,7 +14,7 @@ def launch_blob_spammer(
plan, plan,
prefunded_addresses, prefunded_addresses,
el_uri, el_uri,
cl_client_context, cl_context,
deneb_fork_epoch, deneb_fork_epoch,
seconds_per_slot, seconds_per_slot,
genesis_delay, genesis_delay,
...@@ -23,7 +23,7 @@ def launch_blob_spammer( ...@@ -23,7 +23,7 @@ def launch_blob_spammer(
config = get_config( config = get_config(
prefunded_addresses, prefunded_addresses,
el_uri, el_uri,
cl_client_context, cl_context,
deneb_fork_epoch, deneb_fork_epoch,
seconds_per_slot, seconds_per_slot,
genesis_delay, genesis_delay,
...@@ -35,7 +35,7 @@ def launch_blob_spammer( ...@@ -35,7 +35,7 @@ def launch_blob_spammer(
def get_config( def get_config(
prefunded_addresses, prefunded_addresses,
el_uri, el_uri,
cl_client_context, cl_context,
deneb_fork_epoch, deneb_fork_epoch,
seconds_per_slot, seconds_per_slot,
genesis_delay, genesis_delay,
...@@ -51,12 +51,12 @@ def get_config( ...@@ -51,12 +51,12 @@ def get_config(
"apk update", "apk update",
"apk add curl jq", "apk add curl jq",
'current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version")'.format( 'current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version")'.format(
cl_client_context.ip_addr, cl_client_context.http_port_num cl_context.ip_addr, cl_context.http_port_num
), ),
"echo $current_epoch", "echo $current_epoch",
'while [ $current_epoch != "deneb" ]; do echo "waiting for deneb, current epoch is $current_epoch"; current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version"); sleep {2}; done'.format( 'while [ $current_epoch != "deneb" ]; do echo "waiting for deneb, current epoch is $current_epoch"; current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version"); sleep {2}; done'.format(
cl_client_context.ip_addr, cl_context.ip_addr,
cl_client_context.http_port_num, cl_context.http_port_num,
seconds_per_slot, seconds_per_slot,
), ),
'echo "sleep is over, starting to send blob transactions"', 'echo "sleep is over, starting to send blob transactions"',
......
shared_utils = import_module("../shared_utils/shared_utils.star") shared_utils = import_module("../shared_utils/shared_utils.star")
input_parser = import_module("../package_io/input_parser.star") input_parser = import_module("../package_io/input_parser.star")
cl_client_context = import_module("../cl/cl_client_context.star") cl_context = import_module("../cl/cl_context.star")
blobber_context = import_module("../blobber/blobber_context.star") blobber_context = import_module("../blobber/blobber_context.star")
......
...@@ -55,18 +55,18 @@ POSTGRES_MAX_MEMORY = 1024 ...@@ -55,18 +55,18 @@ POSTGRES_MAX_MEMORY = 1024
def launch_blobscan( def launch_blobscan(
plan, plan,
cl_client_contexts, cl_contexts,
el_client_contexts, el_contexts,
chain_id, chain_id,
persistent, persistent,
global_node_selectors, global_node_selectors,
): ):
node_selectors = global_node_selectors node_selectors = global_node_selectors
beacon_node_rpc_uri = "http://{0}:{1}".format( beacon_node_rpc_uri = "http://{0}:{1}".format(
cl_client_contexts[0].ip_addr, cl_client_contexts[0].http_port_num cl_contexts[0].ip_addr, cl_contexts[0].http_port_num
) )
execution_node_rpc_uri = "http://{0}:{1}".format( execution_node_rpc_uri = "http://{0}:{1}".format(
el_client_contexts[0].ip_addr, el_client_contexts[0].rpc_port_num el_contexts[0].ip_addr, el_contexts[0].rpc_port_num
) )
postgres_output = postgres.run( postgres_output = postgres.run(
......
...@@ -40,7 +40,7 @@ VERIF_USED_PORTS = { ...@@ -40,7 +40,7 @@ VERIF_USED_PORTS = {
def launch_blockscout( def launch_blockscout(
plan, plan,
el_client_contexts, el_contexts,
persistent, persistent,
global_node_selectors, global_node_selectors,
): ):
...@@ -53,11 +53,11 @@ def launch_blockscout( ...@@ -53,11 +53,11 @@ def launch_blockscout(
node_selectors=global_node_selectors, node_selectors=global_node_selectors,
) )
el_client_context = el_client_contexts[0] el_context = el_contexts[0]
el_client_rpc_url = "http://{}:{}/".format( el_client_rpc_url = "http://{}:{}/".format(
el_client_context.ip_addr, el_client_context.rpc_port_num el_context.ip_addr, el_context.rpc_port_num
) )
el_client_name = el_client_context.client_name el_client_name = el_context.client_name
config_verif = get_config_verif(global_node_selectors) config_verif = get_config_verif(global_node_selectors)
verif_service_name = "{}-verif".format(SERVICE_NAME_BLOCKSCOUT) verif_service_name = "{}-verif".format(SERVICE_NAME_BLOCKSCOUT)
......
...@@ -9,20 +9,20 @@ MIN_MEMORY = 128 ...@@ -9,20 +9,20 @@ MIN_MEMORY = 128
MAX_MEMORY = 2048 MAX_MEMORY = 2048
def launch_broadcaster(plan, all_el_client_contexts, global_node_selectors): def launch_broadcaster(plan, all_el_contexts, global_node_selectors):
config = get_config(all_el_client_contexts, global_node_selectors) config = get_config(all_el_contexts, global_node_selectors)
return plan.add_service(SERVICE_NAME, config) return plan.add_service(SERVICE_NAME, config)
def get_config( def get_config(
all_el_client_contexts, all_el_contexts,
node_selectors, node_selectors,
): ):
return ServiceConfig( return ServiceConfig(
image=IMAGE_NAME, image=IMAGE_NAME,
cmd=[ cmd=[
"http://{0}:{1}".format(context.ip_addr, context.rpc_port_num) "http://{0}:{1}".format(context.ip_addr, context.rpc_port_num)
for context in all_el_client_contexts for context in all_el_contexts
], ],
min_cpu=MIN_CPU, min_cpu=MIN_CPU,
max_cpu=MAX_CPU, max_cpu=MAX_CPU,
......
def new_cl_client_context( def new_cl_context(
client_name, client_name,
enr, enr,
ip_addr, ip_addr,
......
lighthouse = import_module("./lighthouse/lighthouse_launcher.star")
lodestar = import_module("./lodestar/lodestar_launcher.star")
nimbus = import_module("./nimbus/nimbus_launcher.star")
prysm = import_module("./prysm/prysm_launcher.star")
teku = import_module("./teku/teku_launcher.star")
constants = import_module("../package_io/constants.star")
input_parser = import_module("../package_io/input_parser.star")
shared_utils = import_module("../shared_utils/shared_utils.star")
snooper = import_module("../snooper/snooper_engine_launcher.star")
cl_context_BOOTNODE = None
def launch(
plan,
network_params,
el_cl_data,
jwt_file,
keymanager_file,
keymanager_p12_file,
participants,
all_el_contexts,
global_log_level,
global_node_selectors,
global_tolerations,
persistent,
network_id,
num_participants,
validator_data,
prysm_password_relative_filepath,
prysm_password_artifact_uuid,
):
plan.print("Launching CL network")
cl_launchers = {
constants.CL_TYPE.lighthouse: {
"launcher": lighthouse.new_lighthouse_launcher(
el_cl_data, jwt_file, network_params.network
),
"launch_method": lighthouse.launch,
},
constants.CL_TYPE.lodestar: {
"launcher": lodestar.new_lodestar_launcher(
el_cl_data, jwt_file, network_params.network
),
"launch_method": lodestar.launch,
},
constants.CL_TYPE.nimbus: {
"launcher": nimbus.new_nimbus_launcher(
el_cl_data,
jwt_file,
network_params.network,
keymanager_file,
),
"launch_method": nimbus.launch,
},
constants.CL_TYPE.prysm: {
"launcher": prysm.new_prysm_launcher(
el_cl_data,
jwt_file,
network_params.network,
prysm_password_relative_filepath,
prysm_password_artifact_uuid,
),
"launch_method": prysm.launch,
},
constants.CL_TYPE.teku: {
"launcher": teku.new_teku_launcher(
el_cl_data,
jwt_file,
network_params.network,
keymanager_file,
keymanager_p12_file,
),
"launch_method": teku.launch,
},
}
all_snooper_engine_contexts = []
all_cl_contexts = []
preregistered_validator_keys_for_nodes = (
validator_data.per_node_keystores
if network_params.network == constants.NETWORK_NAME.kurtosis
or constants.NETWORK_NAME.shadowfork in network_params.network
else None
)
for index, participant in enumerate(participants):
cl_type = participant.cl_type
el_type = participant.el_type
node_selectors = input_parser.get_client_node_selectors(
participant.node_selectors,
global_node_selectors,
)
if cl_type not in cl_launchers:
fail(
"Unsupported launcher '{0}', need one of '{1}'".format(
cl_type, ",".join([cl.name for cl in cl_launchers.keys()])
)
)
cl_launcher, launch_method = (
cl_launchers[cl_type]["launcher"],
cl_launchers[cl_type]["launch_method"],
)
index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
cl_service_name = "cl-{0}-{1}-{2}".format(index_str, cl_type, el_type)
new_cl_node_validator_keystores = None
if participant.validator_count != 0:
new_cl_node_validator_keystores = preregistered_validator_keys_for_nodes[
index
]
el_context = all_el_contexts[index]
cl_context = None
snooper_engine_context = None
if participant.snooper_enabled:
snooper_service_name = "snooper-{0}-{1}-{2}".format(
index_str, cl_type, el_type
)
snooper_engine_context = snooper.launch(
plan,
snooper_service_name,
el_context,
node_selectors,
)
plan.print(
"Successfully added {0} snooper participants".format(
snooper_engine_context
)
)
all_snooper_engine_contexts.append(snooper_engine_context)
if index == 0:
cl_context = launch_method(
plan,
cl_launcher,
cl_service_name,
participant.cl_image,
participant.cl_log_level,
global_log_level,
cl_context_BOOTNODE,
el_context,
new_cl_node_validator_keystores,
participant.cl_min_cpu,
participant.cl_max_cpu,
participant.cl_min_mem,
participant.cl_max_mem,
participant.snooper_enabled,
snooper_engine_context,
participant.blobber_enabled,
participant.blobber_extra_params,
participant.cl_extra_params,
participant.cl_extra_env_vars,
participant.cl_extra_labels,
persistent,
participant.cl_volume_size,
participant.cl_tolerations,
participant.tolerations,
global_tolerations,
node_selectors,
participant.use_separate_vc,
)
else:
boot_cl_client_ctx = all_cl_contexts
cl_context = launch_method(
plan,
cl_launcher,
cl_service_name,
participant.cl_image,
participant.cl_log_level,
global_log_level,
boot_cl_client_ctx,
el_context,
new_cl_node_validator_keystores,
participant.cl_min_cpu,
participant.cl_max_cpu,
participant.cl_min_mem,
participant.cl_max_mem,
participant.snooper_enabled,
snooper_engine_context,
participant.blobber_enabled,
participant.blobber_extra_params,
participant.cl_extra_params,
participant.cl_extra_env_vars,
participant.cl_extra_labels,
persistent,
participant.cl_volume_size,
participant.cl_tolerations,
participant.tolerations,
global_tolerations,
node_selectors,
participant.use_separate_vc,
)
# Add participant cl additional prometheus labels
for metrics_info in cl_context.cl_nodes_metrics_info:
if metrics_info != None:
metrics_info["config"] = participant.prometheus_config
all_cl_contexts.append(cl_context)
return (
all_cl_contexts,
all_snooper_engine_contexts,
preregistered_validator_keys_for_nodes,
)
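For orientation, a plain-Python sketch of the per-participant loop in the new CL launcher above: dispatch by the renamed `cl_type`, zero-padded service naming, and bootnode selection (the first node boots alone, later nodes receive every previously launched CL context as bootnodes). The launcher values and context dicts are illustrative placeholders, not the real Kurtosis objects.

```python
participants = [
    {"cl_type": "lighthouse", "el_type": "geth"},
    {"cl_type": "teku", "el_type": "besu"},
]
cl_launchers = {"lighthouse": "<lighthouse.launch>", "teku": "<teku.launch>"}

CL_CONTEXT_BOOTNODE = None
all_cl_contexts = []

for index, participant in enumerate(participants):
    cl_type, el_type = participant["cl_type"], participant["el_type"]
    if cl_type not in cl_launchers:
        raise ValueError("Unsupported launcher '{0}'".format(cl_type))
    # Zero-pad the 1-based index to the width of the participant count.
    index_str = str(index + 1).zfill(len(str(len(participants))))
    cl_service_name = "cl-{0}-{1}-{2}".format(index_str, cl_type, el_type)
    bootnodes = CL_CONTEXT_BOOTNODE if index == 0 else list(all_cl_contexts)
    # ... launch_method(plan, launcher, cl_service_name, ..., bootnodes, ...)
    all_cl_contexts.append({"name": cl_service_name, "bootnodes": bootnodes})

print([ctx["name"] for ctx in all_cl_contexts])
# ['cl-1-lighthouse-geth', 'cl-2-teku-besu']
```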
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -54,11 +54,11 @@ BEACON_USED_PORTS = { ...@@ -54,11 +54,11 @@ BEACON_USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error", constants.GLOBAL_LOG_LEVEL.error: "error",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn", constants.GLOBAL_LOG_LEVEL.warn: "warn",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info", constants.GLOBAL_LOG_LEVEL.info: "info",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug", constants.GLOBAL_LOG_LEVEL.debug: "debug",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace", constants.GLOBAL_LOG_LEVEL.trace: "trace",
} }
...@@ -70,25 +70,26 @@ def launch( ...@@ -70,25 +70,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client=True, use_separate_vc=True,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
...@@ -102,16 +103,16 @@ def launch( ...@@ -102,16 +103,16 @@ def launch(
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["lighthouse_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["lighthouse_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["lighthouse_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["lighthouse_max_mem"]
) )
...@@ -130,16 +131,17 @@ def launch( ...@@ -130,16 +131,17 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -198,7 +200,7 @@ def launch( ...@@ -198,7 +200,7 @@ def launch(
) )
nodes_metrics_info = [beacon_node_metrics_info] nodes_metrics_info = [beacon_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"lighthouse", "lighthouse",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -223,15 +225,16 @@ def get_beacon_config( ...@@ -223,15 +225,16 @@ def get_beacon_config(
image, image,
service_name, service_name,
boot_cl_client_ctxs, boot_cl_client_ctxs,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
...@@ -246,8 +249,8 @@ def get_beacon_config( ...@@ -246,8 +249,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
# NOTE: If connecting to the merge devnet remotely we DON'T want the following flags; when they're not set, the node's external IP address is auto-detected # NOTE: If connecting to the merge devnet remotely we DON'T want the following flags; when they're not set, the node's external IP address is auto-detected
...@@ -367,25 +370,27 @@ def get_beacon_config( ...@@ -367,25 +370,27 @@ def get_beacon_config(
persistent_key="data-{0}".format(service_name), persistent_key="data-{0}".format(service_name),
size=cl_volume_size, size=cl_volume_size,
) )
env = {RUST_BACKTRACE_ENVVAR_NAME: RUST_FULL_BACKTRACE_KEYWORD}
env.update(extra_env_vars)
return ServiceConfig( return ServiceConfig(
image=image, image=image,
ports=BEACON_USED_PORTS, ports=BEACON_USED_PORTS,
cmd=cmd, cmd=cmd,
files=files, files=files,
env_vars={RUST_BACKTRACE_ENVVAR_NAME: RUST_FULL_BACKTRACE_KEYWORD}, env_vars=env,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.lighthouse, constants.CL_TYPE.lighthouse,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
image, image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
tolerations=tolerations, tolerations=tolerations,
......
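Beyond the renames, the Lighthouse launcher now merges the participant's `cl_extra_env_vars` on top of the default Rust backtrace setting instead of passing the backtrace variable alone. A small sketch of that merge; the constant names mirror the code above, the values and the extra variable are illustrative:

```python
RUST_BACKTRACE_ENVVAR_NAME = "RUST_BACKTRACE"
RUST_FULL_BACKTRACE_KEYWORD = "full"  # assumed value for illustration

extra_env_vars = {"LH_DEBUG": "1"}  # e.g. a participant's cl_extra_env_vars

env = {RUST_BACKTRACE_ENVVAR_NAME: RUST_FULL_BACKTRACE_KEYWORD}
env.update(extra_env_vars)  # participant-supplied values win on key collisions
# env == {"RUST_BACKTRACE": "full", "LH_DEBUG": "1"}
```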
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
blobber_launcher = import_module("../../blobber/blobber_launcher.star") blobber_launcher = import_module("../../blobber/blobber_launcher.star")
...@@ -43,11 +43,11 @@ BEACON_USED_PORTS = { ...@@ -43,11 +43,11 @@ BEACON_USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error", constants.GLOBAL_LOG_LEVEL.error: "error",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn", constants.GLOBAL_LOG_LEVEL.warn: "warn",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info", constants.GLOBAL_LOG_LEVEL.info: "info",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug", constants.GLOBAL_LOG_LEVEL.debug: "debug",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace", constants.GLOBAL_LOG_LEVEL.trace: "trace",
} }
...@@ -59,25 +59,26 @@ def launch( ...@@ -59,25 +59,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client=True, use_separate_vc=True,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
log_level = input_parser.get_client_log_level_or_default( log_level = input_parser.get_client_log_level_or_default(
...@@ -90,16 +91,16 @@ def launch( ...@@ -90,16 +91,16 @@ def launch(
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["lodestar_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["lodestar_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["lodestar_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["lodestar_max_mem"]
) )
...@@ -118,16 +119,17 @@ def launch( ...@@ -118,16 +119,17 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -189,7 +191,7 @@ def launch( ...@@ -189,7 +191,7 @@ def launch(
) )
nodes_metrics_info = [beacon_node_metrics_info] nodes_metrics_info = [beacon_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"lodestar", "lodestar",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -214,15 +216,16 @@ def get_beacon_config( ...@@ -214,15 +216,16 @@ def get_beacon_config(
image, image,
service_name, service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
...@@ -230,8 +233,8 @@ def get_beacon_config( ...@@ -230,8 +233,8 @@ def get_beacon_config(
node_selectors, node_selectors,
): ):
el_client_rpc_url_str = "http://{0}:{1}".format( el_client_rpc_url_str = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.rpc_port_num, el_context.rpc_port_num,
) )
# If snooper is enabled use the snooper engine context, otherwise use the execution client context # If snooper is enabled use the snooper engine context, otherwise use the execution client context
...@@ -242,8 +245,8 @@ def get_beacon_config( ...@@ -242,8 +245,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
cmd = [ cmd = [
...@@ -344,20 +347,21 @@ def get_beacon_config( ...@@ -344,20 +347,21 @@ def get_beacon_config(
image=image, image=image,
ports=BEACON_USED_PORTS, ports=BEACON_USED_PORTS,
cmd=cmd, cmd=cmd,
env_vars=extra_env_vars,
files=files, files=files,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.lodestar, constants.CL_TYPE.lodestar,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
image, image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
tolerations=tolerations, tolerations=tolerations,
......
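The renamed `cl_min_cpu` / `cl_max_cpu` (and the memory counterparts) keep the same fallback behaviour: a non-positive value means "use the package default", with the upper bound taken from the per-network override table. A sketch of that pattern as it appears in each launcher; the override values below are made up for illustration:

```python
BEACON_MIN_CPU = 50  # illustrative default, in millicores

# Illustrative subset of constants.RAM_CPU_OVERRIDES
RAM_CPU_OVERRIDES = {"kurtosis": {"lodestar_max_cpu": 1000, "lodestar_max_mem": 2048}}

def resolve_cl_cpu(cl_min_cpu, cl_max_cpu, network_name):
    # 0 (or any non-positive value) falls back to the package defaults.
    cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
    cl_max_cpu = (
        int(cl_max_cpu)
        if int(cl_max_cpu) > 0
        else RAM_CPU_OVERRIDES[network_name]["lodestar_max_cpu"]
    )
    return cl_min_cpu, cl_max_cpu

print(resolve_cl_cpu(0, 0, "kurtosis"))      # (50, 1000)
print(resolve_cl_cpu(100, 500, "kurtosis"))  # (100, 500)
```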
# ---------------------------------- Library Imports ---------------------------------- # ---------------------------------- Library Imports ----------------------------------
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
validator_client_shared = import_module("../../validator_client/shared.star") vc_shared = import_module("../../vc/shared.star")
# ---------------------------------- Beacon client ------------------------------------- # ---------------------------------- Beacon client -------------------------------------
# Nimbus requires that its data directory already exists (because it expects you to bind-mount it), so we # Nimbus requires that its data directory already exists (because it expects you to bind-mount it), so we
# have to create it # have to create it
...@@ -63,11 +63,11 @@ BEACON_USED_PORTS = { ...@@ -63,11 +63,11 @@ BEACON_USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
...@@ -81,25 +81,26 @@ def launch( ...@@ -81,25 +81,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client, use_separate_vc,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
...@@ -113,16 +114,16 @@ def launch( ...@@ -113,16 +114,16 @@ def launch(
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["nimbus_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["nimbus_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["nimbus_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["nimbus_max_mem"]
) )
...@@ -141,18 +142,19 @@ def launch( ...@@ -141,18 +142,19 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
use_separate_validator_client, extra_labels,
use_separate_vc,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -190,7 +192,7 @@ def launch( ...@@ -190,7 +192,7 @@ def launch(
) )
nodes_metrics_info = [nimbus_node_metrics_info] nodes_metrics_info = [nimbus_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"nimbus", "nimbus",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -216,18 +218,19 @@ def get_beacon_config( ...@@ -216,18 +218,19 @@ def get_beacon_config(
image, image,
service_name, service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
use_separate_validator_client, use_separate_vc,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -252,8 +255,8 @@ def get_beacon_config( ...@@ -252,8 +255,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
cmd = [ cmd = [
...@@ -295,12 +298,9 @@ def get_beacon_config( ...@@ -295,12 +298,9 @@ def get_beacon_config(
"--validators-dir=" + validator_keys_dirpath, "--validators-dir=" + validator_keys_dirpath,
"--secrets-dir=" + validator_secrets_dirpath, "--secrets-dir=" + validator_secrets_dirpath,
"--suggested-fee-recipient=" + constants.VALIDATING_REWARDS_ACCOUNT, "--suggested-fee-recipient=" + constants.VALIDATING_REWARDS_ACCOUNT,
"--graffiti=" "--graffiti=" + constants.CL_TYPE.nimbus + "-" + el_context.client_name,
+ constants.CL_CLIENT_TYPE.nimbus
+ "-"
+ el_client_context.client_name,
"--keymanager", "--keymanager",
"--keymanager-port={0}".format(validator_client_shared.VALIDATOR_HTTP_PORT_NUM), "--keymanager-port={0}".format(vc_shared.VALIDATOR_HTTP_PORT_NUM),
"--keymanager-address=0.0.0.0", "--keymanager-address=0.0.0.0",
"--keymanager-allow-origin=*", "--keymanager-allow-origin=*",
"--keymanager-token-file=" + constants.KEYMANAGER_MOUNT_PATH_ON_CONTAINER, "--keymanager-token-file=" + constants.KEYMANAGER_MOUNT_PATH_ON_CONTAINER,
...@@ -332,9 +332,9 @@ def get_beacon_config( ...@@ -332,9 +332,9 @@ def get_beacon_config(
} }
beacon_validator_used_ports = {} beacon_validator_used_ports = {}
beacon_validator_used_ports.update(BEACON_USED_PORTS) beacon_validator_used_ports.update(BEACON_USED_PORTS)
if node_keystore_files != None and not use_separate_validator_client: if node_keystore_files != None and not use_separate_vc:
validator_http_port_id_spec = shared_utils.new_port_spec( validator_http_port_id_spec = shared_utils.new_port_spec(
validator_client_shared.VALIDATOR_HTTP_PORT_NUM, vc_shared.VALIDATOR_HTTP_PORT_NUM,
shared_utils.TCP_PROTOCOL, shared_utils.TCP_PROTOCOL,
shared_utils.HTTP_APPLICATION_PROTOCOL, shared_utils.HTTP_APPLICATION_PROTOCOL,
) )
...@@ -357,20 +357,21 @@ def get_beacon_config( ...@@ -357,20 +357,21 @@ def get_beacon_config(
image=image, image=image,
ports=beacon_validator_used_ports, ports=beacon_validator_used_ports,
cmd=cmd, cmd=cmd,
env_vars=extra_env_vars,
files=files, files=files,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.nimbus, constants.CL_TYPE.nimbus,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
image, image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
user=User(uid=0, gid=0), user=User(uid=0, gid=0),
......
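When `use_separate_vc` is false, Nimbus (and Teku, further down) keeps the validator/keymanager API on the beacon container, so the keymanager port only joins the port map in that case. A sketch of that conditional wiring; the port number and map keys are illustrative stand-ins for `vc_shared.VALIDATOR_HTTP_PORT_NUM` and the real port specs:

```python
VALIDATOR_HTTP_PORT_NUM = 5056          # illustrative
BEACON_USED_PORTS = {"http": 4000, "metrics": 8008}  # illustrative subset

def build_ports(node_keystore_files, use_separate_vc):
    beacon_validator_used_ports = {}
    beacon_validator_used_ports.update(BEACON_USED_PORTS)
    # Only expose the keymanager API on the beacon container when the
    # validator client is NOT split out into its own service.
    if node_keystore_files is not None and not use_separate_vc:
        beacon_validator_used_ports.update({"validator-http": VALIDATOR_HTTP_PORT_NUM})
    return beacon_validator_used_ports

print(build_ports({"keys": "..."}, use_separate_vc=False))
# {'http': 4000, 'metrics': 8008, 'validator-http': 5056}
print(build_ports({"keys": "..."}, use_separate_vc=True))
# {'http': 4000, 'metrics': 8008}
```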
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -50,11 +50,11 @@ BEACON_NODE_USED_PORTS = { ...@@ -50,11 +50,11 @@ BEACON_NODE_USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error", constants.GLOBAL_LOG_LEVEL.error: "error",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn", constants.GLOBAL_LOG_LEVEL.warn: "warn",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info", constants.GLOBAL_LOG_LEVEL.info: "info",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug", constants.GLOBAL_LOG_LEVEL.debug: "debug",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace", constants.GLOBAL_LOG_LEVEL.trace: "trace",
} }
...@@ -66,25 +66,26 @@ def launch( ...@@ -66,25 +66,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client=True, use_separate_vc=True,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
log_level = input_parser.get_client_log_level_or_default( log_level = input_parser.get_client_log_level_or_default(
...@@ -97,16 +98,16 @@ def launch( ...@@ -97,16 +98,16 @@ def launch(
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["prysm_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["prysm_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["prysm_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["prysm_max_mem"]
) )
...@@ -124,16 +125,17 @@ def launch( ...@@ -124,16 +125,17 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -173,7 +175,7 @@ def launch( ...@@ -173,7 +175,7 @@ def launch(
) )
nodes_metrics_info = [beacon_node_metrics_info] nodes_metrics_info = [beacon_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"prysm", "prysm",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -198,15 +200,16 @@ def get_beacon_config( ...@@ -198,15 +200,16 @@ def get_beacon_config(
beacon_image, beacon_image,
service_name, service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
...@@ -221,8 +224,8 @@ def get_beacon_config( ...@@ -221,8 +224,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
cmd = [ cmd = [
...@@ -326,20 +329,21 @@ def get_beacon_config( ...@@ -326,20 +329,21 @@ def get_beacon_config(
image=beacon_image, image=beacon_image,
ports=BEACON_NODE_USED_PORTS, ports=BEACON_NODE_USED_PORTS,
cmd=cmd, cmd=cmd,
env_vars=extra_env_vars,
files=files, files=files,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.prysm, constants.CL_TYPE.prysm,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
beacon_image, beacon_image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
tolerations=tolerations, tolerations=tolerations,
......
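Every CL launcher picks its Engine API endpoint the same way: the snooper proxy is used when `snooper_enabled` is set, otherwise the renamed `el_context` is addressed directly. A sketch of that selection; the context objects here are plain illustrative dicts rather than the real context structs:

```python
def engine_endpoint(snooper_enabled, snooper_engine_context, el_context):
    # Prefer the snooper proxy when one was launched for this participant.
    if snooper_enabled and snooper_engine_context is not None:
        return "http://{0}:{1}".format(
            snooper_engine_context["ip_addr"],
            snooper_engine_context["engine_rpc_port_num"],
        )
    return "http://{0}:{1}".format(
        el_context["ip_addr"], el_context["engine_rpc_port_num"]
    )

el_ctx = {"ip_addr": "172.16.0.10", "engine_rpc_port_num": 8551}
snooper_ctx = {"ip_addr": "172.16.0.20", "engine_rpc_port_num": 8561}

print(engine_endpoint(False, None, el_ctx))        # http://172.16.0.10:8551
print(engine_endpoint(True, snooper_ctx, el_ctx))  # http://172.16.0.20:8561
```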
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
cl_client_context = import_module("../../cl/cl_client_context.star") cl_context = import_module("../../cl/cl_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star") cl_node_ready_conditions = import_module("../../cl/cl_node_ready_conditions.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
validator_client_shared = import_module("../../validator_client/shared.star") vc_shared = import_module("../../vc/shared.star")
# ---------------------------------- Beacon client ------------------------------------- # ---------------------------------- Beacon client -------------------------------------
TEKU_BINARY_FILEPATH_IN_IMAGE = "/opt/teku/bin/teku" TEKU_BINARY_FILEPATH_IN_IMAGE = "/opt/teku/bin/teku"
...@@ -54,11 +54,11 @@ BEACON_USED_PORTS = { ...@@ -54,11 +54,11 @@ BEACON_USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
...@@ -70,25 +70,26 @@ def launch( ...@@ -70,25 +70,26 @@ def launch(
participant_log_level, participant_log_level,
global_log_level, global_log_level,
bootnode_context, bootnode_context,
el_client_context, el_context,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
blobber_enabled, blobber_enabled,
blobber_extra_params, blobber_extra_params,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
extra_labels,
persistent, persistent,
cl_volume_size, cl_volume_size,
cl_tolerations, cl_tolerations,
participant_tolerations, participant_tolerations,
global_tolerations, global_tolerations,
node_selectors, node_selectors,
use_separate_validator_client, use_separate_vc,
): ):
beacon_service_name = "{0}".format(service_name) beacon_service_name = "{0}".format(service_name)
log_level = input_parser.get_client_log_level_or_default( log_level = input_parser.get_client_log_level_or_default(
...@@ -99,20 +100,20 @@ def launch( ...@@ -99,20 +100,20 @@ def launch(
cl_tolerations, participant_tolerations, global_tolerations cl_tolerations, participant_tolerations, global_tolerations
) )
extra_params = [param for param in extra_beacon_params] extra_params = [param for param in extra_params]
network_name = shared_utils.get_network_name(launcher.network) network_name = shared_utils.get_network_name(launcher.network)
bn_min_cpu = int(bn_min_cpu) if int(bn_min_cpu) > 0 else BEACON_MIN_CPU cl_min_cpu = int(cl_min_cpu) if int(cl_min_cpu) > 0 else BEACON_MIN_CPU
bn_max_cpu = ( cl_max_cpu = (
int(bn_max_cpu) int(cl_max_cpu)
if int(bn_max_cpu) > 0 if int(cl_max_cpu) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["teku_max_cpu"] else constants.RAM_CPU_OVERRIDES[network_name]["teku_max_cpu"]
) )
bn_min_mem = int(bn_min_mem) if int(bn_min_mem) > 0 else BEACON_MIN_MEMORY cl_min_mem = int(cl_min_mem) if int(cl_min_mem) > 0 else BEACON_MIN_MEMORY
bn_max_mem = ( cl_max_mem = (
int(bn_max_mem) int(cl_max_mem)
if int(bn_max_mem) > 0 if int(cl_max_mem) > 0
else constants.RAM_CPU_OVERRIDES[network_name]["teku_max_mem"] else constants.RAM_CPU_OVERRIDES[network_name]["teku_max_mem"]
) )
...@@ -132,18 +133,19 @@ def launch( ...@@ -132,18 +133,19 @@ def launch(
image, image,
beacon_service_name, beacon_service_name,
bootnode_context, bootnode_context,
el_client_context, el_context,
log_level, log_level,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_beacon_params, extra_params,
extra_beacon_labels, extra_env_vars,
use_separate_validator_client, extra_labels,
use_separate_vc,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -183,7 +185,7 @@ def launch( ...@@ -183,7 +185,7 @@ def launch(
) )
nodes_metrics_info = [beacon_node_metrics_info] nodes_metrics_info = [beacon_node_metrics_info]
return cl_client_context.new_cl_client_context( return cl_context.new_cl_context(
"teku", "teku",
beacon_node_enr, beacon_node_enr,
beacon_service.ip_address, beacon_service.ip_address,
...@@ -210,18 +212,19 @@ def get_beacon_config( ...@@ -210,18 +212,19 @@ def get_beacon_config(
image, image,
service_name, service_name,
bootnode_contexts, bootnode_contexts,
el_client_context, el_context,
log_level, log_level,
node_keystore_files, node_keystore_files,
bn_min_cpu, cl_min_cpu,
bn_max_cpu, cl_max_cpu,
bn_min_mem, cl_min_mem,
bn_max_mem, cl_max_mem,
snooper_enabled, snooper_enabled,
snooper_engine_context, snooper_engine_context,
extra_params, extra_params,
extra_env_vars,
extra_labels, extra_labels,
use_separate_validator_client, use_separate_vc,
persistent, persistent,
cl_volume_size, cl_volume_size,
tolerations, tolerations,
...@@ -246,8 +249,8 @@ def get_beacon_config( ...@@ -246,8 +249,8 @@ def get_beacon_config(
) )
else: else:
EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format( EXECUTION_ENGINE_ENDPOINT = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
cmd = [ cmd = [
"--logging=" + log_level, "--logging=" + log_level,
...@@ -293,14 +296,12 @@ def get_beacon_config( ...@@ -293,14 +296,12 @@ def get_beacon_config(
"--validators-proposer-default-fee-recipient=" "--validators-proposer-default-fee-recipient="
+ constants.VALIDATING_REWARDS_ACCOUNT, + constants.VALIDATING_REWARDS_ACCOUNT,
"--validators-graffiti=" "--validators-graffiti="
+ constants.CL_CLIENT_TYPE.teku + constants.CL_TYPE.teku
+ "-" + "-"
+ el_client_context.client_name, + el_context.client_name,
"--validator-api-enabled=true", "--validator-api-enabled=true",
"--validator-api-host-allowlist=*", "--validator-api-host-allowlist=*",
"--validator-api-port={0}".format( "--validator-api-port={0}".format(vc_shared.VALIDATOR_HTTP_PORT_NUM),
validator_client_shared.VALIDATOR_HTTP_PORT_NUM
),
"--validator-api-interface=0.0.0.0", "--validator-api-interface=0.0.0.0",
"--validator-api-keystore-file=" "--validator-api-keystore-file="
+ constants.KEYMANAGER_P12_MOUNT_PATH_ON_CONTAINER, + constants.KEYMANAGER_P12_MOUNT_PATH_ON_CONTAINER,
...@@ -382,9 +383,9 @@ def get_beacon_config( ...@@ -382,9 +383,9 @@ def get_beacon_config(
} }
beacon_validator_used_ports = {} beacon_validator_used_ports = {}
beacon_validator_used_ports.update(BEACON_USED_PORTS) beacon_validator_used_ports.update(BEACON_USED_PORTS)
if node_keystore_files != None and not use_separate_validator_client: if node_keystore_files != None and not use_separate_vc:
validator_http_port_id_spec = shared_utils.new_port_spec( validator_http_port_id_spec = shared_utils.new_port_spec(
validator_client_shared.VALIDATOR_HTTP_PORT_NUM, vc_shared.VALIDATOR_HTTP_PORT_NUM,
shared_utils.TCP_PROTOCOL, shared_utils.TCP_PROTOCOL,
shared_utils.HTTP_APPLICATION_PROTOCOL, shared_utils.HTTP_APPLICATION_PROTOCOL,
) )
...@@ -407,21 +408,21 @@ def get_beacon_config( ...@@ -407,21 +408,21 @@ def get_beacon_config(
image=image, image=image,
ports=beacon_validator_used_ports, ports=beacon_validator_used_ports,
cmd=cmd, cmd=cmd,
# entrypoint=ENTRYPOINT_ARGS, env_vars=extra_env_vars,
files=files, files=files,
private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER, private_ip_address_placeholder=PRIVATE_IP_ADDRESS_PLACEHOLDER,
ready_conditions=cl_node_ready_conditions.get_ready_conditions( ready_conditions=cl_node_ready_conditions.get_ready_conditions(
BEACON_HTTP_PORT_ID BEACON_HTTP_PORT_ID
), ),
min_cpu=bn_min_cpu, min_cpu=cl_min_cpu,
max_cpu=bn_max_cpu, max_cpu=cl_max_cpu,
min_memory=bn_min_mem, min_memory=cl_min_mem,
max_memory=bn_max_mem, max_memory=cl_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.CL_CLIENT_TYPE.teku, constants.CL_TYPE.teku,
constants.CLIENT_TYPES.cl, constants.CLIENT_TYPES.cl,
image, image,
el_client_context.client_name, el_context.client_name,
extra_labels, extra_labels,
), ),
user=User(uid=0, gid=0), user=User(uid=0, gid=0),
......
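Teku, like Nimbus, now builds its graffiti from the renamed `CL_TYPE` constant and the attached EL's client name. A tiny sketch with illustrative values:

```python
# Illustrative stand-ins for constants.CL_TYPE.teku and el_context.client_name
cl_type = "teku"
el_client_name = "besu"

graffiti_flag = "--validators-graffiti=" + cl_type + "-" + el_client_name
# -> "--validators-graffiti=teku-besu"
```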
...@@ -30,14 +30,14 @@ USED_PORTS = { ...@@ -30,14 +30,14 @@ USED_PORTS = {
def launch_dora( def launch_dora(
plan, plan,
config_template, config_template,
cl_client_contexts, cl_contexts,
el_cl_data_files_artifact_uuid, el_cl_data_files_artifact_uuid,
electra_fork_epoch, electra_fork_epoch,
network, network,
global_node_selectors, global_node_selectors,
): ):
all_cl_client_info = [] all_cl_client_info = []
for index, client in enumerate(cl_client_contexts): for index, client in enumerate(cl_contexts):
all_cl_client_info.append( all_cl_client_info.append(
new_cl_client_info( new_cl_client_info(
client.ip_addr, client.http_port_num, client.beacon_service_name client.ip_addr, client.http_port_num, client.beacon_service_name
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -51,11 +51,11 @@ USED_PORTS = { ...@@ -51,11 +51,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
...@@ -138,7 +138,7 @@ def launch( ...@@ -138,7 +138,7 @@ def launch(
service_name, METRICS_PATH, metrics_url service_name, METRICS_PATH, metrics_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"besu", "besu",
"", # besu has no ENR "", # besu has no ENR
enode, enode,
...@@ -262,7 +262,7 @@ def get_config( ...@@ -262,7 +262,7 @@ def get_config(
min_memory=el_min_mem, min_memory=el_min_mem,
max_memory=el_max_mem, max_memory=el_max_mem,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.besu, constants.EL_TYPE.besu,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
def new_el_client_context( def new_el_context(
client_name, client_name,
enr, enr,
enode, enode,
......
constants = import_module("../package_io/constants.star")
input_parser = import_module("../package_io/input_parser.star")
shared_utils = import_module("../shared_utils/shared_utils.star")
geth = import_module("./geth/geth_launcher.star")
besu = import_module("./besu/besu_launcher.star")
erigon = import_module("./erigon/erigon_launcher.star")
nethermind = import_module("./nethermind/nethermind_launcher.star")
reth = import_module("./reth/reth_launcher.star")
ethereumjs = import_module("./ethereumjs/ethereumjs_launcher.star")
nimbus_eth1 = import_module("./nimbus-eth1/nimbus_launcher.star")
def launch(
plan,
network_params,
el_cl_data,
jwt_file,
participants,
global_log_level,
global_node_selectors,
global_tolerations,
persistent,
network_id,
num_participants,
):
el_launchers = {
constants.EL_TYPE.geth: {
"launcher": geth.new_geth_launcher(
el_cl_data,
jwt_file,
network_params.network,
network_id,
network_params.capella_fork_epoch,
el_cl_data.cancun_time,
el_cl_data.prague_time,
network_params.electra_fork_epoch,
),
"launch_method": geth.launch,
},
constants.EL_TYPE.gethbuilder: {
"launcher": geth.new_geth_launcher(
el_cl_data,
jwt_file,
network_params.network,
network_id,
network_params.capella_fork_epoch,
el_cl_data.cancun_time,
el_cl_data.prague_time,
network_params.electra_fork_epoch,
),
"launch_method": geth.launch,
},
constants.EL_TYPE.besu: {
"launcher": besu.new_besu_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": besu.launch,
},
constants.EL_TYPE.erigon: {
"launcher": erigon.new_erigon_launcher(
el_cl_data,
jwt_file,
network_params.network,
network_id,
el_cl_data.cancun_time,
),
"launch_method": erigon.launch,
},
constants.EL_TYPE.nethermind: {
"launcher": nethermind.new_nethermind_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": nethermind.launch,
},
constants.EL_TYPE.reth: {
"launcher": reth.new_reth_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": reth.launch,
},
constants.EL_TYPE.ethereumjs: {
"launcher": ethereumjs.new_ethereumjs_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": ethereumjs.launch,
},
constants.EL_TYPE.nimbus: {
"launcher": nimbus_eth1.new_nimbus_launcher(
el_cl_data,
jwt_file,
network_params.network,
),
"launch_method": nimbus_eth1.launch,
},
}
all_el_contexts = []
for index, participant in enumerate(participants):
cl_type = participant.cl_type
el_type = participant.el_type
node_selectors = input_parser.get_client_node_selectors(
participant.node_selectors,
global_node_selectors,
)
tolerations = input_parser.get_client_tolerations(
participant.el_tolerations, participant.tolerations, global_tolerations
)
if el_type not in el_launchers:
fail(
"Unsupported launcher '{0}', need one of '{1}'".format(
el_type, ",".join([el.name for el in el_launchers.keys()])
)
)
el_launcher, launch_method = (
el_launchers[el_type]["launcher"],
el_launchers[el_type]["launch_method"],
)
# Zero-pad the index using the calculated zfill value
index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
el_service_name = "el-{0}-{1}-{2}".format(index_str, el_type, cl_type)
el_context = launch_method(
plan,
el_launcher,
el_service_name,
participant.el_image,
participant.el_log_level,
global_log_level,
all_el_contexts,
participant.el_min_cpu,
participant.el_max_cpu,
participant.el_min_mem,
participant.el_max_mem,
participant.el_extra_params,
participant.el_extra_env_vars,
participant.el_extra_labels,
persistent,
participant.el_volume_size,
tolerations,
node_selectors,
)
# Add participant el additional prometheus metrics
for metrics_info in el_context.el_metrics_info:
if metrics_info != None:
metrics_info["config"] = participant.prometheus_config
all_el_contexts.append(el_context)
plan.print("Successfully added {0} EL participants".format(num_participants))
return all_el_contexts
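One detail of the EL loop above (shared with the CL loop) worth illustrating: after a client comes up, each non-empty metrics entry gets the participant's `prometheus_config` attached before the context joins `all_el_contexts`. A sketch with illustrative metrics dicts:

```python
el_metrics_info = [
    {"name": "el-1-geth-lighthouse", "path": "/metrics", "url": "172.16.0.10:9001"},
    None,  # a client without a metrics endpoint contributes a None entry
]

participant_prometheus_config = {"scrape_interval": "15s"}  # illustrative

for metrics_info in el_metrics_info:
    if metrics_info is not None:
        metrics_info["config"] = participant_prometheus_config

print(el_metrics_info[0]["config"])  # {'scrape_interval': '15s'}
```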
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -51,11 +51,11 @@ USED_PORTS = { ...@@ -51,11 +51,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "1", constants.GLOBAL_LOG_LEVEL.error: "1",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "2", constants.GLOBAL_LOG_LEVEL.warn: "2",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "3", constants.GLOBAL_LOG_LEVEL.info: "3",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "4", constants.GLOBAL_LOG_LEVEL.debug: "4",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "5", constants.GLOBAL_LOG_LEVEL.trace: "5",
} }
...@@ -142,7 +142,7 @@ def launch( ...@@ -142,7 +142,7 @@ def launch(
service_name, METRICS_PATH, metrics_url service_name, METRICS_PATH, metrics_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"erigon", "erigon",
enr, enr,
enode, enode,
...@@ -284,7 +284,7 @@ def get_config( ...@@ -284,7 +284,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.erigon, constants.EL_TYPE.erigon,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../..//package_io/input_parser.star") input_parser = import_module("../..//package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
...@@ -53,11 +53,11 @@ USED_PORTS = { ...@@ -53,11 +53,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = [] ENTRYPOINT_ARGS = []
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "error", constants.GLOBAL_LOG_LEVEL.error: "error",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "warn", constants.GLOBAL_LOG_LEVEL.warn: "warn",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "info", constants.GLOBAL_LOG_LEVEL.info: "info",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "debug", constants.GLOBAL_LOG_LEVEL.debug: "debug",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "trace", constants.GLOBAL_LOG_LEVEL.trace: "trace",
} }
...@@ -139,7 +139,7 @@ def launch( ...@@ -139,7 +139,7 @@ def launch(
# metrics_url = "http://{0}:{1}".format(service.ip_address, METRICS_PORT_NUM) # metrics_url = "http://{0}:{1}".format(service.ip_address, METRICS_PORT_NUM)
ethjs_metrics_info = None ethjs_metrics_info = None
return el_client_context.new_el_client_context( return el_context.new_el_context(
"ethereumjs", "ethereumjs",
"", # ethereumjs has no enr "", # ethereumjs has no enr
enode, enode,
...@@ -251,7 +251,7 @@ def get_config( ...@@ -251,7 +251,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.ethereumjs, constants.EL_TYPE.ethereumjs,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
genesis_constants = import_module( genesis_constants = import_module(
"../../prelaunch_data_generator/genesis_constants/genesis_constants.star" "../../prelaunch_data_generator/genesis_constants/genesis_constants.star"
...@@ -58,11 +58,11 @@ USED_PORTS = { ...@@ -58,11 +58,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "1", constants.GLOBAL_LOG_LEVEL.error: "1",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "2", constants.GLOBAL_LOG_LEVEL.warn: "2",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "3", constants.GLOBAL_LOG_LEVEL.info: "3",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "4", constants.GLOBAL_LOG_LEVEL.debug: "4",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "5", constants.GLOBAL_LOG_LEVEL.trace: "5",
} }
BUILDER_IMAGE_STR = "builder" BUILDER_IMAGE_STR = "builder"
...@@ -156,7 +156,7 @@ def launch( ...@@ -156,7 +156,7 @@ def launch(
service_name, METRICS_PATH, metrics_url service_name, METRICS_PATH, metrics_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"geth", "geth",
enr, enr,
enode, enode,
...@@ -370,7 +370,7 @@ def get_config( ...@@ -370,7 +370,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.geth, constants.EL_TYPE.geth,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
...@@ -49,11 +49,11 @@ USED_PORTS = { ...@@ -49,11 +49,11 @@ USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
...@@ -136,7 +136,7 @@ def launch( ...@@ -136,7 +136,7 @@ def launch(
service_name, METRICS_PATH, metrics_url service_name, METRICS_PATH, metrics_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"nethermind", "nethermind",
"", # nethermind has no ENR in the eth2-merge-kurtosis-module either "", # nethermind has no ENR in the eth2-merge-kurtosis-module either
# Nethermind node info endpoint doesn't return ENR field https://docs.nethermind.io/nethermind/ethereum-client/json-rpc/admin # Nethermind node info endpoint doesn't return ENR field https://docs.nethermind.io/nethermind/ethereum-client/json-rpc/admin
...@@ -259,7 +259,7 @@ def get_config( ...@@ -259,7 +259,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.nethermind, constants.EL_TYPE.nethermind,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -53,11 +53,11 @@ USED_PORTS = { ...@@ -53,11 +53,11 @@ USED_PORTS = {
} }
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "ERROR", constants.GLOBAL_LOG_LEVEL.error: "ERROR",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "WARN", constants.GLOBAL_LOG_LEVEL.warn: "WARN",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "INFO", constants.GLOBAL_LOG_LEVEL.info: "INFO",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "DEBUG", constants.GLOBAL_LOG_LEVEL.debug: "DEBUG",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "TRACE", constants.GLOBAL_LOG_LEVEL.trace: "TRACE",
} }
...@@ -141,7 +141,7 @@ def launch( ...@@ -141,7 +141,7 @@ def launch(
service_name, METRICS_PATH, metric_url service_name, METRICS_PATH, metric_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"nimbus", "nimbus",
"", # nimbus has no enr "", # nimbus has no enr
enode, enode,
...@@ -252,7 +252,7 @@ def get_config( ...@@ -252,7 +252,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.nimbus, constants.EL_TYPE.nimbus,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
shared_utils = import_module("../../shared_utils/shared_utils.star") shared_utils = import_module("../../shared_utils/shared_utils.star")
input_parser = import_module("../../package_io/input_parser.star") input_parser = import_module("../../package_io/input_parser.star")
el_client_context = import_module("../../el/el_client_context.star") el_context = import_module("../../el/el_context.star")
el_admin_node_info = import_module("../../el/el_admin_node_info.star") el_admin_node_info = import_module("../../el/el_admin_node_info.star")
node_metrics = import_module("../../node_metrics_info.star") node_metrics = import_module("../../node_metrics_info.star")
constants = import_module("../../package_io/constants.star") constants = import_module("../../package_io/constants.star")
...@@ -51,11 +51,11 @@ USED_PORTS = { ...@@ -51,11 +51,11 @@ USED_PORTS = {
ENTRYPOINT_ARGS = ["sh", "-c"] ENTRYPOINT_ARGS = ["sh", "-c"]
VERBOSITY_LEVELS = { VERBOSITY_LEVELS = {
constants.GLOBAL_CLIENT_LOG_LEVEL.error: "v", constants.GLOBAL_LOG_LEVEL.error: "v",
constants.GLOBAL_CLIENT_LOG_LEVEL.warn: "vv", constants.GLOBAL_LOG_LEVEL.warn: "vv",
constants.GLOBAL_CLIENT_LOG_LEVEL.info: "vvv", constants.GLOBAL_LOG_LEVEL.info: "vvv",
constants.GLOBAL_CLIENT_LOG_LEVEL.debug: "vvvv", constants.GLOBAL_LOG_LEVEL.debug: "vvvv",
constants.GLOBAL_CLIENT_LOG_LEVEL.trace: "vvvvv", constants.GLOBAL_LOG_LEVEL.trace: "vvvvv",
} }
...@@ -139,7 +139,7 @@ def launch( ...@@ -139,7 +139,7 @@ def launch(
service_name, METRICS_PATH, metric_url service_name, METRICS_PATH, metric_url
) )
return el_client_context.new_el_client_context( return el_context.new_el_context(
"reth", "reth",
"", # reth has no enr "", # reth has no enr
enode, enode,
...@@ -265,7 +265,7 @@ def get_config( ...@@ -265,7 +265,7 @@ def get_config(
max_memory=el_max_mem, max_memory=el_max_mem,
env_vars=extra_env_vars, env_vars=extra_env_vars,
labels=shared_utils.label_maker( labels=shared_utils.label_maker(
constants.EL_CLIENT_TYPE.reth, constants.EL_TYPE.reth,
constants.CLIENT_TYPES.el, constants.CLIENT_TYPES.el,
image, image,
cl_client_name, cl_client_name,
......
...@@ -29,11 +29,11 @@ MAX_MEMORY = 256 ...@@ -29,11 +29,11 @@ MAX_MEMORY = 256
def launch_el_forkmon( def launch_el_forkmon(
plan, plan,
config_template, config_template,
el_client_contexts, el_contexts,
global_node_selectors, global_node_selectors,
): ):
all_el_client_info = [] all_el_client_info = []
for client in el_client_contexts: for client in el_contexts:
client_info = new_el_client_info( client_info = new_el_client_info(
client.ip_addr, client.rpc_port_num, client.service_name client.ip_addr, client.rpc_port_num, client.service_name
) )
......
...@@ -20,8 +20,8 @@ def launch( ...@@ -20,8 +20,8 @@ def launch(
plan, plan,
pair_name, pair_name,
ethereum_metrics_exporter_service_name, ethereum_metrics_exporter_service_name,
el_client_context, el_context,
cl_client_context, cl_context,
node_selectors, node_selectors,
): ):
exporter_service = plan.add_service( exporter_service = plan.add_service(
...@@ -40,13 +40,13 @@ def launch( ...@@ -40,13 +40,13 @@ def launch(
str(METRICS_PORT_NUMBER), str(METRICS_PORT_NUMBER),
"--consensus-url", "--consensus-url",
"http://{}:{}".format( "http://{}:{}".format(
cl_client_context.ip_addr, cl_context.ip_addr,
cl_client_context.http_port_num, cl_context.http_port_num,
), ),
"--execution-url", "--execution-url",
"http://{}:{}".format( "http://{}:{}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.rpc_port_num, el_context.rpc_port_num,
), ),
], ],
min_cpu=MIN_CPU, min_cpu=MIN_CPU,
...@@ -61,6 +61,6 @@ def launch( ...@@ -61,6 +61,6 @@ def launch(
pair_name, pair_name,
exporter_service.ip_address, exporter_service.ip_address,
METRICS_PORT_NUMBER, METRICS_PORT_NUMBER,
cl_client_context.client_name, cl_context.client_name,
el_client_context.client_name, el_context.client_name,
) )
...@@ -94,8 +94,8 @@ FRONTEND_MAX_MEMORY = 2048 ...@@ -94,8 +94,8 @@ FRONTEND_MAX_MEMORY = 2048
def launch_full_beacon( def launch_full_beacon(
plan, plan,
config_template, config_template,
cl_client_contexts, cl_contexts,
el_client_contexts, el_contexts,
persistent, persistent,
global_node_selectors, global_node_selectors,
): ):
...@@ -143,12 +143,12 @@ def launch_full_beacon( ...@@ -143,12 +143,12 @@ def launch_full_beacon(
) )
el_uri = "http://{0}:{1}".format( el_uri = "http://{0}:{1}".format(
el_client_contexts[0].ip_addr, el_client_contexts[0].rpc_port_num el_contexts[0].ip_addr, el_contexts[0].rpc_port_num
) )
redis_url = "{}:{}".format(redis_output.hostname, redis_output.port_number) redis_url = "{}:{}".format(redis_output.hostname, redis_output.port_number)
template_data = new_config_template_data( template_data = new_config_template_data(
cl_client_contexts[0], cl_contexts[0],
el_uri, el_uri,
little_bigtable.ip_address, little_bigtable.ip_address,
LITTLE_BIGTABLE_PORT_NUMBER, LITTLE_BIGTABLE_PORT_NUMBER,
......
...@@ -13,16 +13,16 @@ MAX_MEMORY = 300 ...@@ -13,16 +13,16 @@ MAX_MEMORY = 300
def launch_goomy_blob( def launch_goomy_blob(
plan, plan,
prefunded_addresses, prefunded_addresses,
el_client_contexts, el_contexts,
cl_client_context, cl_context,
seconds_per_slot, seconds_per_slot,
goomy_blob_params, goomy_blob_params,
global_node_selectors, global_node_selectors,
): ):
config = get_config( config = get_config(
prefunded_addresses, prefunded_addresses,
el_client_contexts, el_contexts,
cl_client_context, cl_context,
seconds_per_slot, seconds_per_slot,
goomy_blob_params.goomy_blob_args, goomy_blob_params.goomy_blob_args,
global_node_selectors, global_node_selectors,
...@@ -32,14 +32,14 @@ def launch_goomy_blob( ...@@ -32,14 +32,14 @@ def launch_goomy_blob(
def get_config( def get_config(
prefunded_addresses, prefunded_addresses,
el_client_contexts, el_contexts,
cl_client_context, cl_context,
seconds_per_slot, seconds_per_slot,
goomy_blob_args, goomy_blob_args,
node_selectors, node_selectors,
): ):
goomy_cli_args = [] goomy_cli_args = []
for index, client in enumerate(el_client_contexts): for index, client in enumerate(el_contexts):
goomy_cli_args.append( goomy_cli_args.append(
"-h http://{0}:{1}".format( "-h http://{0}:{1}".format(
client.ip_addr, client.ip_addr,
...@@ -61,11 +61,11 @@ def get_config( ...@@ -61,11 +61,11 @@ def get_config(
"apt-get update", "apt-get update",
"apt-get install -y curl jq", "apt-get install -y curl jq",
'current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version")'.format( 'current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version")'.format(
cl_client_context.ip_addr, cl_client_context.http_port_num cl_context.ip_addr, cl_context.http_port_num
), ),
'while [ $current_epoch != "deneb" ]; do echo "waiting for deneb, current epoch is $current_epoch"; current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version"); sleep {2}; done'.format( 'while [ $current_epoch != "deneb" ]; do echo "waiting for deneb, current epoch is $current_epoch"; current_epoch=$(curl -s http://{0}:{1}/eth/v2/beacon/blocks/head | jq -r ".version"); sleep {2}; done'.format(
cl_client_context.ip_addr, cl_context.ip_addr,
cl_client_context.http_port_num, cl_context.http_port_num,
seconds_per_slot, seconds_per_slot,
), ),
'echo "sleep is over, starting to send blob transactions"', 'echo "sleep is over, starting to send blob transactions"',
......
...@@ -15,7 +15,7 @@ def launch_mock_mev( ...@@ -15,7 +15,7 @@ def launch_mock_mev(
el_uri, el_uri,
beacon_uri, beacon_uri,
jwt_secret, jwt_secret,
global_client_log_level, global_log_level,
global_node_selectors, global_node_selectors,
): ):
mock_builder = plan.add_service( mock_builder = plan.add_service(
...@@ -32,7 +32,7 @@ def launch_mock_mev( ...@@ -32,7 +32,7 @@ def launch_mock_mev(
"--el={0}".format(el_uri), "--el={0}".format(el_uri),
"--cl={0}".format(beacon_uri), "--cl={0}".format(beacon_uri),
"--bid-multiplier=5", # TODO: This could be customizable "--bid-multiplier=5", # TODO: This could be customizable
"--log-level={0}".format(global_client_log_level), "--log-level={0}".format(global_log_level),
], ],
min_cpu=MIN_CPU, min_cpu=MIN_CPU,
max_cpu=MAX_CPU, max_cpu=MAX_CPU,
......
shared_utils = import_module("../shared_utils/shared_utils.star")
el_cl_genesis_data = import_module(
"../prelaunch_data_generator/el_cl_genesis/el_cl_genesis_data.star"
)
def launch(plan, network, cancun_time, prague_time):
# We are running a devnet
url = shared_utils.calculate_devnet_url(network)
el_cl_genesis_uuid = plan.upload_files(
src=url,
name="el_cl_genesis",
)
el_cl_genesis_data_uuid = plan.run_sh(
run="mkdir -p /network-configs/ && mv /opt/* /network-configs/",
store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
files={"/opt": el_cl_genesis_uuid},
)
genesis_validators_root = read_file(url + "/genesis_validators_root.txt")
el_cl_data = el_cl_genesis_data.new_el_cl_genesis_data(
el_cl_genesis_data_uuid.files_artifacts[0],
genesis_validators_root,
cancun_time,
prague_time,
)
final_genesis_timestamp = shared_utils.read_genesis_timestamp_from_config(
plan, el_cl_genesis_data_uuid.files_artifacts[0]
)
network_id = shared_utils.read_genesis_network_id_from_config(
plan, el_cl_genesis_data_uuid.files_artifacts[0]
)
validator_data = None
return el_cl_data, final_genesis_timestamp, network_id, validator_data
shared_utils = import_module("../shared_utils/shared_utils.star")
el_cl_genesis_data = import_module(
"../prelaunch_data_generator/el_cl_genesis/el_cl_genesis_data.star"
)
def launch(plan, cancun_time, prague_time):
el_cl_genesis_data_uuid = plan.run_sh(
run="mkdir -p /network-configs/ && \
curl -o latest.tar.gz https://ephemery.dev/latest.tar.gz && \
tar xvzf latest.tar.gz -C /network-configs && \
cat /network-configs/genesis_validators_root.txt",
image="badouralix/curl-jq",
store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
)
genesis_validators_root = el_cl_genesis_data_uuid.output
el_cl_data = el_cl_genesis_data.new_el_cl_genesis_data(
el_cl_genesis_data_uuid.files_artifacts[0],
genesis_validators_root,
cancun_time,
prague_time,
)
final_genesis_timestamp = shared_utils.read_genesis_timestamp_from_config(
plan, el_cl_genesis_data_uuid.files_artifacts[0]
)
network_id = shared_utils.read_genesis_network_id_from_config(
plan, el_cl_genesis_data_uuid.files_artifacts[0]
)
validator_data = None
return el_cl_data, final_genesis_timestamp, network_id, validator_data
shared_utils = import_module("../shared_utils/shared_utils.star")
validator_keystores = import_module(
"../prelaunch_data_generator/validator_keystores/validator_keystore_generator.star"
)
constants = import_module("../package_io/constants.star")
# The time that the CL genesis generation step takes to complete, based off what we've seen
# This is in seconds
CL_GENESIS_DATA_GENERATION_TIME = 5
# Each CL node takes about this time to start up and start processing blocks, so when we create the CL
# genesis data we need to set the genesis timestamp in the future so that nodes don't miss important slots
# (e.g. Altair fork)
# TODO(old) Make this client-specific (currently this is Nimbus)
# This is in seconds
CL_NODE_STARTUP_TIME = 5
def launch(plan, network_params, participants, parallel_keystore_generation):
num_participants = len(participants)
plan.print("Generating cl validator key stores")
validator_data = None
if not parallel_keystore_generation:
validator_data = validator_keystores.generate_validator_keystores(
plan, network_params.preregistered_validator_keys_mnemonic, participants
)
else:
validator_data = validator_keystores.generate_valdiator_keystores_in_parallel(
plan,
network_params.preregistered_validator_keys_mnemonic,
participants,
)
plan.print(json.indent(json.encode(validator_data)))
# We need to send the same genesis time to both the EL and the CL to ensure that timestamp based forking works as expected
final_genesis_timestamp = shared_utils.get_final_genesis_timestamp(
plan,
network_params.genesis_delay
+ CL_GENESIS_DATA_GENERATION_TIME
+ num_participants * CL_NODE_STARTUP_TIME,
)
# if preregistered validator count is 0 (default) then calculate the total number of validators from the participants
total_number_of_validator_keys = network_params.preregistered_validator_count
if network_params.preregistered_validator_count == 0:
for participant in participants:
total_number_of_validator_keys += participant.validator_count
plan.print("Generating EL CL data")
# we are running bellatrix genesis (deprecated) - will be removed in the future
if (
network_params.capella_fork_epoch > 0
and network_params.electra_fork_epoch == None
):
ethereum_genesis_generator_image = (
constants.ETHEREUM_GENESIS_GENERATOR.bellatrix_genesis
)
# we are running capella genesis - default behavior
elif (
network_params.capella_fork_epoch == 0
and network_params.electra_fork_epoch == None
and network_params.deneb_fork_epoch > 0
):
ethereum_genesis_generator_image = (
constants.ETHEREUM_GENESIS_GENERATOR.capella_genesis
)
# we are running deneb genesis - experimental, soon to become default
elif network_params.deneb_fork_epoch == 0:
ethereum_genesis_generator_image = (
constants.ETHEREUM_GENESIS_GENERATOR.deneb_genesis
)
# we are running electra - experimental
elif network_params.electra_fork_epoch != None:
if network_params.electra_fork_epoch == 0:
ethereum_genesis_generator_image = (
constants.ETHEREUM_GENESIS_GENERATOR.verkle_genesis
)
else:
ethereum_genesis_generator_image = (
constants.ETHEREUM_GENESIS_GENERATOR.verkle_support_genesis
)
else:
fail(
"Unsupported fork epoch configuration, need to define either capella_fork_epoch, deneb_fork_epoch or electra_fork_epoch"
)
return (
total_number_of_validator_keys,
ethereum_genesis_generator_image,
final_genesis_timestamp,
validator_data,
)
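As a quick illustration of the logic in the new launcher above (the genesis-timestamp padding and the fork-epoch to genesis-generator-image selection), here is a small runnable Python sketch; the image names and the sample inputs are placeholders, not the actual values from constants.star:

```python
# Illustrative sketch only; mirrors the Starlark branching above with hypothetical values.
CL_GENESIS_DATA_GENERATION_TIME = 5  # seconds
CL_NODE_STARTUP_TIME = 5             # seconds per CL node

def genesis_padding(genesis_delay, num_participants):
    # e.g. genesis_delay=20 and 4 participants -> 20 + 5 + 4*5 = 45 seconds in the future
    return genesis_delay + CL_GENESIS_DATA_GENERATION_TIME + num_participants * CL_NODE_STARTUP_TIME

def pick_generator_image(capella_fork_epoch, deneb_fork_epoch, electra_fork_epoch):
    # Placeholder image names; the real ones live in constants.ETHEREUM_GENESIS_GENERATOR.
    if capella_fork_epoch > 0 and electra_fork_epoch is None:
        return "bellatrix-genesis-image"      # deprecated path
    elif capella_fork_epoch == 0 and electra_fork_epoch is None and deneb_fork_epoch > 0:
        return "capella-genesis-image"        # default behavior
    elif deneb_fork_epoch == 0:
        return "deneb-genesis-image"          # experimental
    elif electra_fork_epoch is not None:
        return "verkle-genesis-image" if electra_fork_epoch == 0 else "verkle-support-genesis-image"
    raise ValueError("define either capella_fork_epoch, deneb_fork_epoch or electra_fork_epoch")

print(genesis_padding(20, 4))            # -> 45
print(pick_generator_image(0, 4, None))  # -> capella-genesis-image
```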
shared_utils = import_module("../shared_utils/shared_utils.star")
el_cl_genesis_data = import_module(
"../prelaunch_data_generator/el_cl_genesis/el_cl_genesis_data.star"
)
constants = import_module("../package_io/constants.star")
def launch(plan, network, cancun_time, prague_time):
# We are running a public network
dummy_genesis_data = plan.run_sh(
run="mkdir /network-configs",
store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
)
el_cl_data = el_cl_genesis_data.new_el_cl_genesis_data(
dummy_genesis_data.files_artifacts[0],
constants.GENESIS_VALIDATORS_ROOT[network],
cancun_time,
prague_time,
)
final_genesis_timestamp = constants.GENESIS_TIME[network]
network_id = constants.NETWORK_ID[network]
validator_data = None
return el_cl_data, final_genesis_timestamp, network_id, validator_data
shared_utils = import_module("../shared_utils/shared_utils.star")
constants = import_module("../package_io/constants.star")
input_parser = import_module("../package_io/input_parser.star")
def shadowfork_prep(
plan,
network_params,
shadowfork_block,
participants,
global_tolerations,
global_node_selectors,
):
base_network = shared_utils.get_network_name(network_params.network)
# overload the network name to remove the shadowfork suffix
if constants.NETWORK_NAME.ephemery in base_network:
chain_id = plan.run_sh(
run="curl -s https://ephemery.dev/latest/config.yaml | yq .DEPOSIT_CHAIN_ID | tr -d '\n'",
image="linuxserver/yq",
)
network_id = chain_id.output
else:
network_id = constants.NETWORK_ID[
base_network
] # overload the network id to match the network name
latest_block = plan.run_sh( # fetch the latest block
run="mkdir -p /shadowfork && \
curl -o /shadowfork/latest_block.json "
+ network_params.network_sync_base_url
+ base_network
+ "/geth/"
+ shadowfork_block
+ "/_snapshot_eth_getBlockByNumber.json",
image="badouralix/curl-jq",
store=[StoreSpec(src="/shadowfork", name="latest_blocks")],
)
for index, participant in enumerate(participants):
tolerations = input_parser.get_client_tolerations(
participant.el_tolerations,
participant.tolerations,
global_tolerations,
)
node_selectors = input_parser.get_client_node_selectors(
participant.node_selectors,
global_node_selectors,
)
cl_type = participant.cl_type
el_type = participant.el_type
# Zero-pad the index using the calculated zfill value
index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
el_service_name = "el-{0}-{1}-{2}".format(index_str, el_type, cl_type)
shadowfork_data = plan.add_service(
name="shadowfork-{0}".format(el_service_name),
config=ServiceConfig(
image="alpine:3.19.1",
cmd=[
"apk add --no-cache curl tar zstd && curl -s -L "
+ network_params.network_sync_base_url
+ base_network
+ "/"
+ el_type
+ "/"
+ shadowfork_block
+ "/snapshot.tar.zst"
+ " | tar -I zstd -xvf - -C /data/"
+ el_type
+ "/execution-data"
+ " && touch /tmp/finished"
+ " && tail -f /dev/null"
],
entrypoint=["/bin/sh", "-c"],
files={
"/data/"
+ el_type
+ "/execution-data": Directory(
persistent_key="data-{0}".format(el_service_name),
size=constants.VOLUME_SIZE[base_network][
el_type + "_volume_size"
],
),
},
tolerations=tolerations,
node_selectors=node_selectors,
),
)
for index, participant in enumerate(participants):
cl_type = participant.cl_type
el_type = participant.el_type
# Zero-pad the index using the calculated zfill value
index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
el_service_name = "el-{0}-{1}-{2}".format(index_str, el_type, cl_type)
plan.wait(
service_name="shadowfork-{0}".format(el_service_name),
recipe=ExecRecipe(command=["cat", "/tmp/finished"]),
field="code",
assertion="==",
target_value=0,
interval="1s",
timeout="6h", # 6 hours should be enough for the biggest network
)
return latest_block, network_id
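The snapshot download URL and the per-participant service names in shadowfork_prep are assembled from several pieces; the following Python sketch traces that composition with hypothetical inputs (the base URL, network, block and participant list are made up):

```python
# Sketch of how shadowfork_prep composes its service names and snapshot URLs.
def zfill_custom(value, width):
    # same behavior as shared_utils.zfill_custom: zero-pad the 1-based index
    return str(value).zfill(width)

def shadowfork_names_and_urls(base_url, base_network, shadowfork_block, participants):
    width = len(str(len(participants)))
    out = []
    for index, (el_type, cl_type) in enumerate(participants):
        index_str = zfill_custom(index + 1, width)
        el_service_name = "el-{0}-{1}-{2}".format(index_str, el_type, cl_type)
        snapshot_url = "{0}{1}/{2}/{3}/snapshot.tar.zst".format(
            base_url, base_network, el_type, shadowfork_block
        )
        out.append(("shadowfork-" + el_service_name, snapshot_url))
    return out

# Example run with assumed values:
for name, url in shadowfork_names_and_urls(
    "https://snapshots.example.org/", "holesky", "latest",
    [("geth", "teku"), ("besu", "lighthouse")],
):
    print(name, url)
# shadowfork-el-1-geth-teku        https://snapshots.example.org/holesky/geth/latest/snapshot.tar.zst
# shadowfork-el-2-besu-lighthouse  https://snapshots.example.org/holesky/besu/latest/snapshot.tar.zst
```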
EL_CLIENT_TYPE = struct( EL_TYPE = struct(
gethbuilder="geth-builder", gethbuilder="geth-builder",
geth="geth", geth="geth",
erigon="erigon", erigon="erigon",
...@@ -9,7 +9,7 @@ EL_CLIENT_TYPE = struct( ...@@ -9,7 +9,7 @@ EL_CLIENT_TYPE = struct(
nimbus="nimbus", nimbus="nimbus",
) )
CL_CLIENT_TYPE = struct( CL_TYPE = struct(
lighthouse="lighthouse", lighthouse="lighthouse",
teku="teku", teku="teku",
nimbus="nimbus", nimbus="nimbus",
...@@ -17,7 +17,7 @@ CL_CLIENT_TYPE = struct( ...@@ -17,7 +17,7 @@ CL_CLIENT_TYPE = struct(
lodestar="lodestar", lodestar="lodestar",
) )
VC_CLIENT_TYPE = struct( VC_TYPE = struct(
lighthouse="lighthouse", lighthouse="lighthouse",
lodestar="lodestar", lodestar="lodestar",
nimbus="nimbus", nimbus="nimbus",
...@@ -25,7 +25,7 @@ VC_CLIENT_TYPE = struct( ...@@ -25,7 +25,7 @@ VC_CLIENT_TYPE = struct(
teku="teku", teku="teku",
) )
GLOBAL_CLIENT_LOG_LEVEL = struct( GLOBAL_LOG_LEVEL = struct(
info="info", info="info",
error="error", error="error",
warn="warn", warn="warn",
...@@ -410,11 +410,11 @@ RAM_CPU_OVERRIDES = { ...@@ -410,11 +410,11 @@ RAM_CPU_OVERRIDES = {
"prysm_max_cpu": 1000, # 1 core "prysm_max_cpu": 1000, # 1 core
"lighthouse_max_mem": 1024, # 1GB "lighthouse_max_mem": 1024, # 1GB
"lighthouse_max_cpu": 1000, # 1 core "lighthouse_max_cpu": 1000, # 1 core
"teku_max_mem": 1024, # 1GB "teku_max_mem": 2048, # 2GB
"teku_max_cpu": 1000, # 1 core "teku_max_cpu": 1000, # 1 core
"nimbus_max_mem": 1024, # 1GB "nimbus_max_mem": 1024, # 1GB
"nimbus_max_cpu": 1000, # 1 core "nimbus_max_cpu": 1000, # 1 core
"lodestar_max_mem": 1024, # 1GB "lodestar_max_mem": 2048, # 2GB
"lodestar_max_cpu": 1000, # 1 core "lodestar_max_cpu": 1000, # 1 core
}, },
} }
def new_participant( def new_participant(
el_client_type, el_type,
cl_client_type, cl_type,
validator_client_type, vc_type,
el_client_context, el_context,
cl_client_context, cl_context,
validator_client_context, vc_context,
snooper_engine_context, snooper_engine_context,
ethereum_metrics_exporter_context, ethereum_metrics_exporter_context,
xatu_sentry_context, xatu_sentry_context,
): ):
return struct( return struct(
el_client_type=el_client_type, el_type=el_type,
cl_client_type=cl_client_type, cl_type=cl_type,
validator_client_type=validator_client_type, vc_type=vc_type,
el_client_context=el_client_context, el_context=el_context,
cl_client_context=cl_client_context, cl_context=cl_context,
validator_client_context=validator_client_context, vc_context=vc_context,
snooper_engine_context=snooper_engine_context, snooper_engine_context=snooper_engine_context,
ethereum_metrics_exporter_context=ethereum_metrics_exporter_context, ethereum_metrics_exporter_context=ethereum_metrics_exporter_context,
xatu_sentry_context=xatu_sentry_context, xatu_sentry_context=xatu_sentry_context,
......
...@@ -134,8 +134,8 @@ def generate_validator_keystores(plan, mnemonic, participants): ...@@ -134,8 +134,8 @@ def generate_validator_keystores(plan, mnemonic, participants):
keystore_stop_index = (keystore_start_index + participant.validator_count) - 1 keystore_stop_index = (keystore_start_index + participant.validator_count) - 1
artifact_name = "{0}-{1}-{2}-{3}-{4}".format( artifact_name = "{0}-{1}-{2}-{3}-{4}".format(
padded_idx, padded_idx,
participant.cl_client_type, participant.cl_type,
participant.el_client_type, participant.el_type,
keystore_start_index, keystore_start_index,
keystore_stop_index, keystore_stop_index,
) )
...@@ -286,8 +286,8 @@ def generate_valdiator_keystores_in_parallel(plan, mnemonic, participants): ...@@ -286,8 +286,8 @@ def generate_valdiator_keystores_in_parallel(plan, mnemonic, participants):
keystore_stop_index = (keystore_start_index + participant.validator_count) - 1 keystore_stop_index = (keystore_start_index + participant.validator_count) - 1
artifact_name = "{0}-{1}-{2}-{3}-{4}".format( artifact_name = "{0}-{1}-{2}-{3}-{4}".format(
padded_idx, padded_idx,
participant.cl_client_type, participant.cl_type,
participant.el_client_type, participant.el_type,
keystore_start_index, keystore_start_index,
keystore_stop_index, keystore_stop_index,
) )
......
...@@ -5,12 +5,12 @@ shared_utils = import_module("../../shared_utils/shared_utils.star") ...@@ -5,12 +5,12 @@ shared_utils = import_module("../../shared_utils/shared_utils.star")
def generate_validator_ranges( def generate_validator_ranges(
plan, plan,
config_template, config_template,
cl_client_contexts, cl_contexts,
participants, participants,
): ):
data = [] data = []
running_total_validator_count = 0 running_total_validator_count = 0
for index, client in enumerate(cl_client_contexts): for index, client in enumerate(cl_contexts):
participant = participants[index] participant = participants[index]
if participant.validator_count == 0: if participant.validator_count == 0:
continue continue
......
...@@ -3,7 +3,7 @@ prometheus = import_module("github.com/kurtosis-tech/prometheus-package/main.sta ...@@ -3,7 +3,7 @@ prometheus = import_module("github.com/kurtosis-tech/prometheus-package/main.sta
EXECUTION_CLIENT_TYPE = "execution" EXECUTION_CLIENT_TYPE = "execution"
BEACON_CLIENT_TYPE = "beacon" BEACON_CLIENT_TYPE = "beacon"
VALIDATOR_CLIENT_TYPE = "validator" vc_type = "validator"
METRICS_INFO_NAME_KEY = "name" METRICS_INFO_NAME_KEY = "name"
METRICS_INFO_URL_KEY = "url" METRICS_INFO_URL_KEY = "url"
...@@ -21,18 +21,18 @@ MAX_MEMORY = 2048 ...@@ -21,18 +21,18 @@ MAX_MEMORY = 2048
def launch_prometheus( def launch_prometheus(
plan, plan,
el_client_contexts, el_contexts,
cl_client_contexts, cl_contexts,
validator_client_contexts, vc_contexts,
additional_metrics_jobs, additional_metrics_jobs,
ethereum_metrics_exporter_contexts, ethereum_metrics_exporter_contexts,
xatu_sentry_contexts, xatu_sentry_contexts,
global_node_selectors, global_node_selectors,
): ):
metrics_jobs = get_metrics_jobs( metrics_jobs = get_metrics_jobs(
el_client_contexts, el_contexts,
cl_client_contexts, cl_contexts,
validator_client_contexts, vc_contexts,
additional_metrics_jobs, additional_metrics_jobs,
ethereum_metrics_exporter_contexts, ethereum_metrics_exporter_contexts,
xatu_sentry_contexts, xatu_sentry_contexts,
...@@ -51,16 +51,16 @@ def launch_prometheus( ...@@ -51,16 +51,16 @@ def launch_prometheus(
def get_metrics_jobs( def get_metrics_jobs(
el_client_contexts, el_contexts,
cl_client_contexts, cl_contexts,
validator_client_contexts, vc_contexts,
additional_metrics_jobs, additional_metrics_jobs,
ethereum_metrics_exporter_contexts, ethereum_metrics_exporter_contexts,
xatu_sentry_contexts, xatu_sentry_contexts,
): ):
metrics_jobs = [] metrics_jobs = []
# Adding execution clients metrics jobs # Adding execution clients metrics jobs
for context in el_client_contexts: for context in el_contexts:
if len(context.el_metrics_info) >= 1 and context.el_metrics_info[0] != None: if len(context.el_metrics_info) >= 1 and context.el_metrics_info[0] != None:
execution_metrics_info = context.el_metrics_info[0] execution_metrics_info = context.el_metrics_info[0]
scrape_interval = PROMETHEUS_DEFAULT_SCRAPE_INTERVAL scrape_interval = PROMETHEUS_DEFAULT_SCRAPE_INTERVAL
...@@ -90,7 +90,7 @@ def get_metrics_jobs( ...@@ -90,7 +90,7 @@ def get_metrics_jobs(
) )
) )
# Adding consensus clients metrics jobs # Adding consensus clients metrics jobs
for context in cl_client_contexts: for context in cl_contexts:
if ( if (
len(context.cl_nodes_metrics_info) >= 1 len(context.cl_nodes_metrics_info) >= 1
and context.cl_nodes_metrics_info[0] != None and context.cl_nodes_metrics_info[0] != None
...@@ -123,7 +123,7 @@ def get_metrics_jobs( ...@@ -123,7 +123,7 @@ def get_metrics_jobs(
) )
# Adding validator clients metrics jobs # Adding validator clients metrics jobs
for context in validator_client_contexts: for context in vc_contexts:
if context == None: if context == None:
continue continue
metrics_info = context.metrics_info metrics_info = context.metrics_info
...@@ -131,7 +131,7 @@ def get_metrics_jobs( ...@@ -131,7 +131,7 @@ def get_metrics_jobs(
scrape_interval = PROMETHEUS_DEFAULT_SCRAPE_INTERVAL scrape_interval = PROMETHEUS_DEFAULT_SCRAPE_INTERVAL
labels = { labels = {
"service": context.service_name, "service": context.service_name,
"client_type": VALIDATOR_CLIENT_TYPE, "client_type": vc_type,
"client_name": context.client_name, "client_name": context.client_name,
} }
......
...@@ -155,3 +155,44 @@ def get_network_name(network): ...@@ -155,3 +155,44 @@ def get_network_name(network):
network_name = network.split("-shadowfork")[0] network_name = network.split("-shadowfork")[0]
return network_name return network_name
# This is a Python procedure so that Kurtosis can do idempotent runs;
# time.now() would run on every execution, introducing non-determinism.
# Note that the timestamp it returns is a string.
def get_final_genesis_timestamp(plan, padding):
result = plan.run_python(
run="""
import time
import sys
padding = int(sys.argv[1])
print(int(time.time()+padding), end="")
""",
args=[str(padding)],
store=[StoreSpec(src="/tmp", name="final-genesis-timestamp")],
)
return result.output
def calculate_devnet_url(network):
sf_suffix_mapping = {"hsf": "-hsf-", "gsf": "-gsf-", "ssf": "-ssf-"}
shadowfork = "sf-" in network
if shadowfork:
for suffix, delimiter in sf_suffix_mapping.items():
if delimiter in network:
network_parts = network.split(delimiter, 1)
network_type = suffix
else:
network_parts = network.split("-devnet-", 1)
network_type = "devnet"
devnet_name, devnet_number = network_parts[0], network_parts[1]
devnet_category = devnet_name.split("-")[0]
devnet_subname = (
devnet_name.split("-")[1] + "-" if len(devnet_name.split("-")) > 1 else ""
)
return "github.com/ethpandaops/{0}-devnets/network-configs/{1}{2}-{3}".format(
devnet_category, devnet_subname, network_type, devnet_number
)
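For reference, here is a Python mirror of calculate_devnet_url that can be run locally to see how network names map to config URLs; the sample network names below are illustrative only:

```python
# Python mirror of the Starlark calculate_devnet_url above, for quick experimentation.
def calculate_devnet_url(network):
    sf_suffix_mapping = {"hsf": "-hsf-", "gsf": "-gsf-", "ssf": "-ssf-"}
    if "sf-" in network:
        # shadowfork-style name: pick the matching suffix and split on it
        for suffix, delimiter in sf_suffix_mapping.items():
            if delimiter in network:
                network_parts = network.split(delimiter, 1)
                network_type = suffix
    else:
        # plain devnet name: split on the "-devnet-" marker
        network_parts = network.split("-devnet-", 1)
        network_type = "devnet"
    devnet_name, devnet_number = network_parts[0], network_parts[1]
    devnet_category = devnet_name.split("-")[0]
    devnet_subname = (
        devnet_name.split("-")[1] + "-" if len(devnet_name.split("-")) > 1 else ""
    )
    return "github.com/ethpandaops/{0}-devnets/network-configs/{1}{2}-{3}".format(
        devnet_category, devnet_subname, network_type, devnet_number
    )

print(calculate_devnet_url("dencun-devnet-12"))  # github.com/ethpandaops/dencun-devnets/network-configs/devnet-12
print(calculate_devnet_url("holesky-gsf-1"))     # github.com/ethpandaops/holesky-devnets/network-configs/gsf-1
```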
shared_utils = import_module("../shared_utils/shared_utils.star") shared_utils = import_module("../shared_utils/shared_utils.star")
input_parser = import_module("../package_io/input_parser.star") input_parser = import_module("../package_io/input_parser.star")
el_client_context = import_module("../el/el_client_context.star") el_context = import_module("../el/el_context.star")
el_admin_node_info = import_module("../el/el_admin_node_info.star") el_admin_node_info = import_module("../el/el_admin_node_info.star")
snooper_engine_context = import_module("../snooper/snooper_engine_context.star") snooper_engine_context = import_module("../snooper/snooper_engine_context.star")
...@@ -25,10 +25,10 @@ MIN_MEMORY = 10 ...@@ -25,10 +25,10 @@ MIN_MEMORY = 10
MAX_MEMORY = 300 MAX_MEMORY = 300
def launch(plan, service_name, el_client_context, node_selectors): def launch(plan, service_name, el_context, node_selectors):
snooper_service_name = "{0}".format(service_name) snooper_service_name = "{0}".format(service_name)
snooper_config = get_config(service_name, el_client_context, node_selectors) snooper_config = get_config(service_name, el_context, node_selectors)
snooper_service = plan.add_service(snooper_service_name, snooper_config) snooper_service = plan.add_service(snooper_service_name, snooper_config)
snooper_http_port = snooper_service.ports[SNOOPER_ENGINE_RPC_PORT_ID] snooper_http_port = snooper_service.ports[SNOOPER_ENGINE_RPC_PORT_ID]
...@@ -37,10 +37,10 @@ def launch(plan, service_name, el_client_context, node_selectors): ...@@ -37,10 +37,10 @@ def launch(plan, service_name, el_client_context, node_selectors):
) )
def get_config(service_name, el_client_context, node_selectors): def get_config(service_name, el_context, node_selectors):
engine_rpc_port_num = "http://{0}:{1}".format( engine_rpc_port_num = "http://{0}:{1}".format(
el_client_context.ip_addr, el_context.ip_addr,
el_client_context.engine_rpc_port_num, el_context.engine_rpc_port_num,
) )
cmd = [ cmd = [
SNOOPER_BINARY_COMMAND, SNOOPER_BINARY_COMMAND,
......
def new_validator_client_context( def new_vc_context(
service_name, service_name,
client_name, client_name,
metrics_info, metrics_info,
......
...@@ -18,7 +18,7 @@ MAX_MEMORY = 1024 ...@@ -18,7 +18,7 @@ MAX_MEMORY = 1024
def launch( def launch(
plan, plan,
xatu_sentry_service_name, xatu_sentry_service_name,
cl_client_context, cl_context,
xatu_sentry_params, xatu_sentry_params,
network_params, network_params,
pair_name, pair_name,
...@@ -30,8 +30,8 @@ def launch( ...@@ -30,8 +30,8 @@ def launch(
str(METRICS_PORT_NUMBER), str(METRICS_PORT_NUMBER),
pair_name, pair_name,
"http://{}:{}".format( "http://{}:{}".format(
cl_client_context.ip_addr, cl_context.ip_addr,
cl_client_context.http_port_num, cl_context.http_port_num,
), ),
xatu_sentry_params.xatu_server_addr, xatu_sentry_params.xatu_server_addr,
network_params.network, network_params.network,
......