vicotor / ethereum-package

Commit 0b2a2ae0 (Unverified)
Authored Nov 27, 2024 by Barnabas Busa; committed by GitHub on Nov 27, 2024
feat: add support for pull through cache (#833)
parent 2633d15b
Showing 53 changed files with 508 additions and 158 deletions (+508 -158)
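At a glance, the change threads a new docker_cache_params option through the package so that every image it pulls can be rewritten to go through a pull-through registry cache. As a quick orientation (the same settings appear verbatim in the README and CI test changes below), enabling the cache in a network-args file looks roughly like this; the URL is the one used by the new CI test and is only an example:

    docker_cache_params:
      enabled: true
      url: "docker.ethquokkaops.io"
      # optional, these are the documented defaults:
      dockerhub_prefix: "/dh/"
      github_prefix: "/gh/"
      google_prefix: "/gcr/"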
.github/tests/mix-with-tools-minimal.yaml  +3 -0
README.md  +30 -5
main.star  +13 -6
src/apache/apache_launcher.star  +8 -1
src/assertoor/assertoor_launcher.star  +3 -5
src/beacon_metrics_gazer/beacon_metrics_gazer_launcher.star  +7 -1
src/blob_spammer/blob_spammer.star  +4 -2
src/blobscan/blobscan_launcher.star  +25 -3
src/blockscout/blockscout_launcher.star  +17 -3
src/blutgang/blutgang_launcher.star  +7 -1
src/cl/cl_launcher.star  +19 -20
src/cl/grandine/grandine_launcher.star  +1 -1
src/cl/lighthouse/lighthouse_launcher.star  +1 -1
src/cl/lodestar/lodestar_launcher.star  +1 -1
src/cl/nimbus/nimbus_launcher.star  +1 -1
src/cl/prysm/prysm_launcher.star  +1 -1
src/cl/teku/teku_launcher.star  +1 -1
src/dora/dora_launcher.star  +3 -5
src/dugtrio/dugtrio_launcher.star  +7 -1
src/el/besu/besu_launcher.star  +1 -1
src/el/erigon/erigon_launcher.star  +1 -1
src/el/ethereumjs/ethereumjs_launcher.star  +1 -1
src/el/geth/geth_launcher.star  +1 -1
src/el/nethermind/nethermind_launcher.star  +1 -1
src/el/nimbus-eth1/nimbus_launcher.star  +1 -1
src/el/reth/reth_launcher.star  +1 -1
src/el_forkmon/el_forkmon_launcher.star  +7 -1
src/ethereum_metrics_exporter/ethereum_metrics_exporter_launcher.star  +5 -1
src/forky/forky_launcher.star  +7 -1
src/goomy_blob/goomy_blob.star  +4 -5
src/grafana/grafana_launcher.star  +1 -3
src/mev/flashbots/mev_custom_flood/mev_custom_flood_launcher.star  +5 -1
src/network_launcher/ephemery.star  +0 -1
src/network_launcher/kurtosis.star  +23 -13
src/network_launcher/shadowfork.star  +0 -1
src/package_io/constants.star  +9 -0
src/package_io/input_parser.star  +145 -6
src/package_io/sanity_check.star  +10 -0
src/participant_network.star  +19 -24
...data_generator/el_cl_genesis/el_cl_genesis_generator.star  +0 -1
...tor/validator_keystores/validator_keystore_generator.star  +13 -9
src/prometheus/prometheus_launcher.star  +1 -0
src/shared_utils/shared_utils.star  +49 -3
src/snooper/snooper_beacon_launcher.star  +22 -4
src/snooper/snooper_engine_launcher.star  +11 -4
src/tracoor/tracoor_launcher.star  +7 -1
src/transaction_spammer/transaction_spammer.star  +5 -7
src/vc/lighthouse.star  +1 -1
src/vc/lodestar.star  +1 -1
src/vc/nimbus.star  +1 -1
src/vc/prysm.star  +1 -1
src/vc/teku.star  +1 -1
src/vc/vero.star  +1 -1
.github/tests/mix-with-tools-minimal.yaml
@@ -31,3 +31,6 @@ additional_services:
 ethereum_metrics_exporter_enabled: true
 snooper_enabled: true
 keymanager_enabled: true
+docker_cache_params:
+  enabled: true
+  url: "docker.ethquokkaops.io"
README.md
@@ -637,19 +637,24 @@ additional_services:
 # Configuration place for dora the explorer - https://github.com/ethpandaops/dora
 dora_params:
   # Dora docker image to use
-  # Leave blank to use the default image according to your network params
-  image: ""
+  # Defaults to the latest image
+  image: "ethpandaops/dora:latest"
   # A list of optional extra env_vars the dora container should spin up with
   env: {}
 # Configuration place for transaction spammer - https://github.com/MariusVanDerWijden/tx-fuzz
 tx_spammer_params:
+  # TX Spammer docker image to use
+  # Defaults to the latest master image
+  image: "ethpandaops/tx-fuzz:master"
   # A list of optional extra params that will be passed to the TX Spammer container for modifying its behaviour
   tx_spammer_extra_args: []
 # Configuration place for goomy the blob spammer - https://github.com/ethpandaops/goomy-blob
 goomy_blob_params:
+  # Goomy Blob docker image to use
+  # Defaults to the latest
+  image: "ethpandaops/goomy-blob:latest"
   # A list of optional params that will be passed to the blob-spammer comamnd for modifying its behaviour
   goomy_blob_args: []
@@ -664,6 +669,9 @@ prometheus_params:
   max_cpu: 1000
   min_mem: 128
   max_mem: 2048
+  # Prometheus docker image to use
+  # Defaults to the latest image
+  image: "prom/prometheus:latest"
 # Configuration place for grafana
 grafana_params:
@@ -676,12 +684,15 @@ grafana_params:
   max_cpu: 1000
   min_mem: 128
   max_mem: 2048
+  # Grafana docker image to use
+  # Defaults to the latest image
+  image: "grafana/grafana:latest"
 # Configuration place for the assertoor testing tool - https://github.com/ethpandaops/assertoor
 assertoor_params:
   # Assertoor docker image to use
-  # Leave blank to use the default image according to your network params
-  image: ""
+  # Defaults to the latest image
+  image: "ethpandaops/assertoor:latest"
   # Check chain stability
   # This check monitors the chain and succeeds if:
@@ -771,6 +782,20 @@ disable_peer_scoring: false
 # Defaults to false
 persistent: false
+# Docker cache url enables all docker images to be pulled through a custom docker registry
+# Disabled by default
+# Defaults to empty cache url
+# Images pulled from dockerhub will be prefixed with "/dh/" by default (docker.io)
+# Images pulled from github registry will be prefixed with "/gh/" by default (ghcr.io)
+# Images pulled from google registory will be prefixed with "/gcr/" by default (gcr.io)
+# If you want to use a local image in combination with the cache, do not put "/" in your local image name
+docker_cache_params:
+  enabled: false
+  url: ""
+  dockerhub_prefix: "/dh/"
+  github_prefix: "/gh/"
+  google_prefix: "/gcr/"
 # Supports three valeus
 # Default: "null" - no mev boost, mev builder, mev flood or relays are spun up
 # "mock" - mock-builder & mev-boost are spun up
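The helper that performs the rewrite, shared_utils.docker_cache_image_calc, is added to src/shared_utils/shared_utils.star (+49 -3 in this commit; that diff is not reproduced on this page). Based on the README comments above and the call sites in the launchers below, a rough, hypothetical sketch of the mapping could look like the following; the field names mirror the docker_cache_params block, and everything else here is an assumption rather than the actual implementation:

    # Hypothetical sketch only; not the code from shared_utils.star.
    def docker_cache_image_calc(docker_cache_params, image):
        if not docker_cache_params.enabled or docker_cache_params.url == "":
            return image  # cache disabled: pull the image directly
        if "/" not in image:
            return image  # local images (no "/") are never routed through the cache
        if image.startswith("ghcr.io/"):
            # GitHub registry images go through the github_prefix ("/gh/")
            return docker_cache_params.url + docker_cache_params.github_prefix + image[len("ghcr.io/"):]
        if image.startswith("gcr.io/"):
            # Google registry images go through the google_prefix ("/gcr/")
            return docker_cache_params.url + docker_cache_params.google_prefix + image[len("gcr.io/"):]
        # everything else is treated as Docker Hub and gets the dockerhub_prefix ("/dh/")
        return docker_cache_params.url + docker_cache_params.dockerhub_prefix + image

Under that reading, "library/httpd:latest" with the CI cache URL would resolve to "docker.ethquokkaops.io/dh/library/httpd:latest", and "ghcr.io/blockscout/smart-contract-verifier:v1.9.0" to "docker.ethquokkaops.io/gh/blockscout/smart-contract-verifier:v1.9.0" - which would also explain why this commit rewrites bare Docker Hub references such as httpd:latest into their explicit library/... form.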
main.star
@@ -92,6 +92,7 @@ def run(plan, args={}):
     global_node_selectors = args_with_right_defaults.global_node_selectors
     keymanager_enabled = args_with_right_defaults.keymanager_enabled
     apache_port = args_with_right_defaults.apache_port
+    docker_cache_params = args_with_right_defaults.docker_cache_params
     prefunded_accounts = genesis_constants.PRE_FUNDED_ACCOUNTS
     if (
@@ -158,9 +159,8 @@ def run(plan, args={}):
         network_id,
     ) = participant_network.launch_participant_network(
         plan,
-        args_with_right_defaults.participants,
+        args_with_right_defaults,
         network_params,
-        args_with_right_defaults.global_log_level,
         jwt_file,
         keymanager_file,
         persistent,
@@ -169,10 +169,6 @@ def run(plan, args={}):
         global_node_selectors,
         keymanager_enabled,
         parallel_keystore_generation,
-        args_with_right_defaults.checkpoint_sync_enabled,
-        args_with_right_defaults.checkpoint_sync_url,
-        args_with_right_defaults.port_publisher,
-        args_with_right_defaults.mev_type,
     )

     plan.print(
@@ -459,6 +455,7 @@ def run(plan, args={}):
                 network_params.seconds_per_slot,
                 network_params.genesis_delay,
                 global_node_selectors,
+                args_with_right_defaults.tx_spammer_params,
             )
             plan.print("Successfully launched blob spammer")
         elif additional_service == "goomy_blob":
@@ -488,6 +485,7 @@ def run(plan, args={}):
                 global_node_selectors,
                 args_with_right_defaults.port_publisher,
                 index,
+                args_with_right_defaults.docker_cache_params,
             )
             plan.print("Successfully launched execution layer forkmon")
         elif additional_service == "beacon_metrics_gazer":
@@ -500,6 +498,7 @@ def run(plan, args={}):
                     global_node_selectors,
                     args_with_right_defaults.port_publisher,
                     index,
+                    args_with_right_defaults.docker_cache_params,
                 )
             )
             launch_prometheus_grafana = True
@@ -516,6 +515,7 @@ def run(plan, args={}):
                 global_node_selectors,
                 args_with_right_defaults.port_publisher,
                 index,
+                args_with_right_defaults.docker_cache_params,
             )
             plan.print("Successfully launched blockscout")
         elif additional_service == "dora":
@@ -550,6 +550,7 @@ def run(plan, args={}):
                 global_node_selectors,
                 args_with_right_defaults.port_publisher,
                 index,
+                args_with_right_defaults.docker_cache_params,
             )
             plan.print("Successfully launched dugtrio")
         elif additional_service == "blutgang":
@@ -566,6 +567,7 @@ def run(plan, args={}):
                 global_node_selectors,
                 args_with_right_defaults.port_publisher,
                 index,
+                args_with_right_defaults.docker_cache_params,
             )
             plan.print("Successfully launched blutgang")
         elif additional_service == "blobscan":
@@ -580,6 +582,7 @@ def run(plan, args={}):
                 global_node_selectors,
                 args_with_right_defaults.port_publisher,
                 index,
+                args_with_right_defaults.docker_cache_params,
             )
             plan.print("Successfully launched blobscan")
         elif additional_service == "forky":
@@ -598,6 +601,7 @@ def run(plan, args={}):
                 final_genesis_timestamp,
                 args_with_right_defaults.port_publisher,
                 index,
+                args_with_right_defaults.docker_cache_params,
             )
             plan.print("Successfully launched forky")
         elif additional_service == "tracoor":
@@ -616,6 +620,7 @@ def run(plan, args={}):
                 final_genesis_timestamp,
                 args_with_right_defaults.port_publisher,
                 index,
+                args_with_right_defaults.docker_cache_params,
             )
             plan.print("Successfully launched tracoor")
         elif additional_service == "apache":
@@ -627,6 +632,7 @@ def run(plan, args={}):
                 all_participants,
                 args_with_right_defaults.participants,
                 global_node_selectors,
+                args_with_right_defaults.docker_cache_params,
             )
             plan.print("Successfully launched apache")
         elif additional_service == "full_beaconchain_explorer":
@@ -673,6 +679,7 @@ def run(plan, args={}):
                 fuzz_target,
                 args_with_right_defaults.custom_flood_params,
                 global_node_selectors,
+                args_with_right_defaults.docker_cache_params,
             )
     else:
         fail("Invalid additional service %s" % (additional_service))
src/apache/apache_launcher.star
@@ -11,6 +11,7 @@ APACHE_ENR_LIST_FILENAME = "bootstrap_nodes.txt"
 APACHE_CONFIG_MOUNT_DIRPATH_ON_SERVICE = "/usr/local/apache2/htdocs/"
+IMAGE_NAME = "library/httpd:latest"
 # The min/max CPU/memory that assertoor can use
 MIN_CPU = 100
 MAX_CPU = 300
@@ -33,6 +34,7 @@ def launch_apache(
     participant_contexts,
     participant_configs,
     global_node_selectors,
+    docker_cache_params,
 ):
     config_files_artifact_name = plan.upload_files(
         src=static_files.APACHE_CONFIG_FILEPATH, name="apache-config"
@@ -93,6 +95,7 @@ def launch_apache(
         public_ports,
         bootstrap_info_files_artifact_name,
         global_node_selectors,
+        docker_cache_params,
     )
     plan.add_service(SERVICE_NAME, config)
@@ -104,6 +107,7 @@ def get_config(
     public_ports,
     bootstrap_info_files_artifact_name,
     node_selectors,
+    docker_cache_params,
 ):
     files = {
         constants.GENESIS_DATA_MOUNTPOINT_ON_CLIENTS: el_cl_genesis_data,
@@ -145,7 +149,10 @@ def get_config(
     cmd_str = " ".join(cmd)
     return ServiceConfig(
-        image="httpd:latest",
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME,
+        ),
         ports=USED_PORTS,
         cmd=[cmd_str],
         public_ports=public_ports,
src/assertoor/assertoor_launcher.star
@@ -119,12 +119,10 @@ def get_config(
         ASSERTOOR_CONFIG_FILENAME,
     )
-    if assertoor_params.image != "":
-        IMAGE_NAME = assertoor_params.image
-    elif network_params.electra_fork_epoch < constants.ELECTRA_FORK_EPOCH:
+    IMAGE_NAME = assertoor_params.image
+    if network_params.electra_fork_epoch < constants.ELECTRA_FORK_EPOCH:
         IMAGE_NAME = "ethpandaops/assertoor:electra-support"
-    else:
-        IMAGE_NAME = "ethpandaops/assertoor:latest"
     return ServiceConfig(
         image=IMAGE_NAME,
src/beacon_metrics_gazer/beacon_metrics_gazer_launcher.star
@@ -37,12 +37,14 @@ def launch_beacon_metrics_gazer(
     global_node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     config = get_config(
         cl_contexts[0].beacon_http_url,
         global_node_selectors,
         port_publisher,
         additional_service_index,
+        docker_cache_params,
     )
     beacon_metrics_gazer_service = plan.add_service(SERVICE_NAME, config)
@@ -64,6 +66,7 @@ def get_config(
     node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     config_file_path = shared_utils.path_join(
         BEACON_METRICS_GAZER_CONFIG_MOUNT_DIRPATH_ON_SERVICE,
@@ -78,7 +81,10 @@ def get_config(
     )
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME,
+        ),
         ports=USED_PORTS,
         public_ports=public_ports,
         files={
src/blob_spammer/blob_spammer.star
-IMAGE_NAME = "ethpandaops/tx-fuzz:master"
 SERVICE_NAME = "blob-spammer"
 ENTRYPOINT_ARGS = ["/bin/sh", "-c"]
@@ -19,6 +18,7 @@ def launch_blob_spammer(
     seconds_per_slot,
     genesis_delay,
     global_node_selectors,
+    tx_spammer_params,
 ):
     config = get_config(
         prefunded_addresses,
@@ -28,6 +28,7 @@ def launch_blob_spammer(
         seconds_per_slot,
         genesis_delay,
         global_node_selectors,
+        tx_spammer_params.image,
     )
     plan.add_service(SERVICE_NAME, config)
@@ -40,10 +41,11 @@ def get_config(
     seconds_per_slot,
     genesis_delay,
     node_selectors,
+    image,
 ):
     dencunTime = (deneb_fork_epoch * 32 * seconds_per_slot) + genesis_delay
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=image,
         entrypoint=ENTRYPOINT_ARGS,
         cmd=[
             " && ".join(
src/blobscan/blobscan_launcher.star
@@ -69,6 +69,7 @@ def launch_blobscan(
     global_node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     node_selectors = global_node_selectors
     beacon_node_rpc_uri = "{0}".format(cl_contexts[0].beacon_http_url)
@@ -83,6 +84,9 @@ def launch_blobscan(
         max_memory=POSTGRES_MAX_MEMORY,
         persistent=persistent,
         node_selectors=node_selectors,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params, "library/postgres:alpine"
+        ),
     )
     redis_output = redis.run(
@@ -94,6 +98,9 @@ def launch_blobscan(
         max_memory=REDIS_MAX_MEMORY,
         persistent=persistent,
         node_selectors=node_selectors,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params, "library/redis:alpine"
+        ),
     )
     api_config = get_api_config(
@@ -104,6 +111,7 @@ def launch_blobscan(
         node_selectors,
         port_publisher,
         additional_service_index,
+        docker_cache_params,
     )
     blobscan_config = plan.add_service(API_SERVICE_NAME, api_config)
@@ -119,6 +127,7 @@ def launch_blobscan(
         node_selectors,
         port_publisher,
         additional_service_index,
+        docker_cache_params,
     )
     plan.add_service(WEB_SERVICE_NAME, web_config)
@@ -128,6 +137,7 @@ def launch_blobscan(
         execution_node_rpc_uri,
         network_params.network,
         node_selectors,
+        docker_cache_params,
     )
     plan.add_service(INDEXER_SERVICE_NAME, indexer_config)
@@ -140,6 +150,7 @@ def get_api_config(
     node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     IMAGE_NAME = "blossomlabs/blobscan-api:latest"
@@ -151,7 +162,10 @@ def get_api_config(
     )
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME,
+        ),
         ports=API_PORTS,
         public_ports=public_ports,
         env_vars={
@@ -192,6 +206,7 @@ def get_web_config(
     node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     # TODO: https://github.com/kurtosis-tech/kurtosis/issues/1861
     # Configure NEXT_PUBLIC_BEACON_BASE_URL and NEXT_PUBLIC_EXPLORER_BASE env vars
@@ -206,7 +221,10 @@ def get_web_config(
     )
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME,
+        ),
         ports=WEB_PORTS,
         public_ports=public_ports,
         env_vars={
@@ -231,11 +249,15 @@ def get_indexer_config(
     execution_node_rpc,
     network_name,
     node_selectors,
+    docker_cache_params,
 ):
     IMAGE_NAME = "blossomlabs/blobscan-indexer:master"
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME,
+        ),
         env_vars={
             "BEACON_NODE_ENDPOINT": beacon_node_rpc,
             "BLOBSCAN_API_ENDPOINT": blobscan_api_url,
src/blockscout/blockscout_launcher.star
@@ -4,6 +4,7 @@ postgres = import_module("github.com/kurtosis-tech/postgres-package/main.star")
 IMAGE_NAME_BLOCKSCOUT = "blockscout/blockscout:6.8.0"
 IMAGE_NAME_BLOCKSCOUT_VERIF = "ghcr.io/blockscout/smart-contract-verifier:v1.9.0"
+POSTGRES_IMAGE = "library/postgres:alpine"
 SERVICE_NAME_BLOCKSCOUT = "blockscout"
@@ -44,6 +45,7 @@ def launch_blockscout(
     global_node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     postgres_output = postgres.run(
         plan,
@@ -52,6 +54,7 @@ def launch_blockscout(
         extra_configs=["max_connections=1000"],
         persistent=persistent,
         node_selectors=global_node_selectors,
+        image=shared_utils.docker_cache_image_calc(docker_cache_params, POSTGRES_IMAGE),
     )
     el_context = el_contexts[0]
@@ -64,6 +67,7 @@ def launch_blockscout(
         global_node_selectors,
         port_publisher,
         additional_service_index,
+        docker_cache_params,
     )
     verif_service_name = "{}-verif".format(SERVICE_NAME_BLOCKSCOUT)
     verif_service = plan.add_service(verif_service_name, config_verif)
@@ -79,6 +83,7 @@ def launch_blockscout(
         global_node_selectors,
         port_publisher,
         additional_service_index,
+        docker_cache_params,
     )
     blockscout_service = plan.add_service(SERVICE_NAME_BLOCKSCOUT, config_backend)
     plan.print(blockscout_service)
@@ -90,7 +95,9 @@ def launch_blockscout(
     return blockscout_url

-def get_config_verif(node_selectors, port_publisher, additional_service_index):
+def get_config_verif(
+    node_selectors, port_publisher, additional_service_index, docker_cache_params
+):
     public_ports = shared_utils.get_additional_service_standard_public_port(
         port_publisher,
         constants.HTTP_PORT_ID,
@@ -99,7 +106,10 @@ def get_config_verif(node_selectors, port_publisher, additional_service_index):
     )
     return ServiceConfig(
-        image=IMAGE_NAME_BLOCKSCOUT_VERIF,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME_BLOCKSCOUT_VERIF,
+        ),
         ports=VERIF_USED_PORTS,
         public_ports=public_ports,
         env_vars={
@@ -123,6 +133,7 @@ def get_config_backend(
     node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     database_url = "{protocol}://{user}:{password}@{hostname}:{port}/{database}".format(
         protocol="postgresql",
@@ -141,7 +152,10 @@ def get_config_backend(
     )
     return ServiceConfig(
-        image=IMAGE_NAME_BLOCKSCOUT,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME_BLOCKSCOUT,
+        ),
         ports=USED_PORTS,
         public_ports=public_ports,
         cmd=[
src/blutgang/blutgang_launcher.star
@@ -41,6 +41,7 @@ def launch_blutgang(
     global_node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     all_el_client_info = []
     for index, participant in enumerate(participant_contexts):
@@ -76,6 +77,7 @@ def launch_blutgang(
         global_node_selectors,
         port_publisher,
         additional_service_index,
+        docker_cache_params,
     )
     plan.add_service(SERVICE_NAME, config)
@@ -87,6 +89,7 @@ def get_config(
     node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     config_file_path = shared_utils.path_join(
         BLUTGANG_CONFIG_MOUNT_DIRPATH_ON_SERVICE,
@@ -105,7 +108,10 @@ def get_config(
     public_ports = shared_utils.get_port_specs(public_port_assignments)
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME,
+        ),
         ports=USED_PORTS,
         public_ports=public_ports,
         files={
src/cl/cl_launcher.star
@@ -20,9 +20,8 @@ def launch(
     el_cl_data,
     jwt_file,
     keymanager_file,
-    participants,
+    args_with_right_defaults,
     all_el_contexts,
-    global_log_level,
     global_node_selectors,
     global_tolerations,
     persistent,
@@ -30,9 +29,6 @@ def launch(
     validator_data,
     prysm_password_relative_filepath,
     prysm_password_artifact_uuid,
-    checkpoint_sync_enabled,
-    checkpoint_sync_url,
-    port_publisher,
 ):
     plan.print("Launching CL network")
@@ -94,7 +90,7 @@ def launch(
         else None
     )
     network_name = shared_utils.get_network_name(network_params.network)
-    for index, participant in enumerate(participants):
+    for index, participant in enumerate(args_with_right_defaults.participants):
         cl_type = participant.cl_type
         el_type = participant.el_type
         node_selectors = input_parser.get_client_node_selectors(
@@ -118,7 +114,9 @@ def launch(
             cl_launchers[cl_type]["launch_method"],
         )
-        index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
+        index_str = shared_utils.zfill_custom(
+            index + 1, len(str(len(args_with_right_defaults.participants)))
+        )
         cl_service_name = "cl-{0}-{1}-{2}".format(index_str, cl_type, el_type)
         new_cl_node_validator_keystores = None
@@ -140,6 +138,7 @@ def launch(
                 snooper_service_name,
                 el_context,
                 node_selectors,
+                args_with_right_defaults.docker_cache_params,
             )
             plan.print(
                 "Successfully added {0} snooper participants".format(
@@ -147,15 +146,15 @@ def launch(
             )
         )
-        if checkpoint_sync_enabled:
-            if checkpoint_sync_url == "":
+        if args_with_right_defaults.checkpoint_sync_enabled:
+            if args_with_right_defaults.checkpoint_sync_url == "":
                 if (
                     network_params.network in constants.PUBLIC_NETWORKS
                     or network_params.network == constants.NETWORK_NAME.ephemery
                 ):
-                    checkpoint_sync_url = constants.CHECKPOINT_SYNC_URL[
-                        network_params.network
-                    ]
+                    args_with_right_defaults.checkpoint_sync_url = (
+                        constants.CHECKPOINT_SYNC_URL[network_params.network]
+                    )
                 else:
                     fail(
                         "Checkpoint sync URL is required if you enabled checkpoint_sync for custom networks. Please provide a valid URL."
@@ -169,7 +168,7 @@ def launch(
                 cl_launcher,
                 cl_service_name,
                 participant,
-                global_log_level,
+                args_with_right_defaults.global_log_level,
                 cl_context_BOOTNODE,
                 el_context,
                 full_name,
@@ -178,9 +177,9 @@ def launch(
                 persistent,
                 tolerations,
                 node_selectors,
-                checkpoint_sync_enabled,
-                checkpoint_sync_url,
-                port_publisher,
+                args_with_right_defaults.checkpoint_sync_enabled,
+                args_with_right_defaults.checkpoint_sync_url,
+                args_with_right_defaults.port_publisher,
                 index,
             )
         else:
@@ -190,7 +189,7 @@ def launch(
                 cl_launcher,
                 cl_service_name,
                 participant,
-                global_log_level,
+                args_with_right_defaults.global_log_level,
                 boot_cl_client_ctx,
                 el_context,
                 full_name,
@@ -199,9 +198,9 @@ def launch(
                 persistent,
                 tolerations,
                 node_selectors,
-                checkpoint_sync_enabled,
-                checkpoint_sync_url,
-                port_publisher,
+                args_with_right_defaults.checkpoint_sync_enabled,
+                args_with_right_defaults.checkpoint_sync_url,
+                args_with_right_defaults.port_publisher,
                 index,
             )
src/cl/grandine/grandine_launcher.star
@@ -326,7 +326,7 @@ def get_beacon_config(
             "labels": shared_utils.label_maker(
                 client=constants.CL_TYPE.grandine,
                 client_type=constants.CLIENT_TYPES.cl,
-                image=participant.cl_image,
+                image=participant.cl_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=el_context.client_name,
                 extra_labels=participant.cl_extra_labels,
                 supernode=participant.supernode,
src/cl/lighthouse/lighthouse_launcher.star
@@ -323,7 +323,7 @@ def get_beacon_config(
             "labels": shared_utils.label_maker(
                 client=constants.CL_TYPE.lighthouse,
                 client_type=constants.CLIENT_TYPES.cl,
-                image=participant.cl_image,
+                image=participant.cl_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=el_context.client_name,
                 extra_labels=participant.cl_extra_labels,
                 supernode=participant.supernode,
src/cl/lodestar/lodestar_launcher.star
@@ -316,7 +316,7 @@ def get_beacon_config(
             "labels": shared_utils.label_maker(
                 client=constants.CL_TYPE.lodestar,
                 client_type=constants.CLIENT_TYPES.cl,
-                image=participant.cl_image,
+                image=participant.cl_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=el_context.client_name,
                 extra_labels=participant.cl_extra_labels,
                 supernode=participant.supernode,
src/cl/nimbus/nimbus_launcher.star
@@ -337,7 +337,7 @@ def get_beacon_config(
             "labels": shared_utils.label_maker(
                 client=constants.CL_TYPE.nimbus,
                 client_type=constants.CLIENT_TYPES.cl,
-                image=participant.cl_image,
+                image=participant.cl_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=el_context.client_name,
                 extra_labels=participant.cl_extra_labels,
                 supernode=participant.supernode,
src/cl/prysm/prysm_launcher.star
@@ -303,7 +303,7 @@ def get_beacon_config(
             "labels": shared_utils.label_maker(
                 client=constants.CL_TYPE.prysm,
                 client_type=constants.CLIENT_TYPES.cl,
-                image=participant.cl_image,
+                image=participant.cl_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=el_context.client_name,
                 extra_labels=participant.cl_extra_labels,
                 supernode=participant.supernode,
src/cl/teku/teku_launcher.star
@@ -346,7 +346,7 @@ def get_beacon_config(
             "labels": shared_utils.label_maker(
                 client=constants.CL_TYPE.teku,
                 client_type=constants.CLIENT_TYPES.cl,
-                image=participant.cl_image,
+                image=participant.cl_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=el_context.client_name,
                 extra_labels=participant.cl_extra_labels,
                 supernode=participant.supernode,
src/dora/dora_launcher.star
@@ -120,12 +120,10 @@ def get_config(
         0,
     )
-    if dora_params.image != "":
-        IMAGE_NAME = dora_params.image
-    elif network_params.electra_fork_epoch < constants.ELECTRA_FORK_EPOCH:
+    IMAGE_NAME = dora_params.image
+    if network_params.electra_fork_epoch < constants.ELECTRA_FORK_EPOCH:
         IMAGE_NAME = "ethpandaops/dora:master"
-    else:
-        IMAGE_NAME = "ethpandaops/dora:latest"
     return ServiceConfig(
         image=IMAGE_NAME,
src/dugtrio/dugtrio_launcher.star
@@ -34,6 +34,7 @@ def launch_dugtrio(
     global_node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     all_cl_client_info = []
     for index, participant in enumerate(participant_contexts):
@@ -66,6 +67,7 @@ def launch_dugtrio(
         global_node_selectors,
         port_publisher,
         additional_service_index,
+        docker_cache_params,
     )
     plan.add_service(SERVICE_NAME, config)
@@ -77,6 +79,7 @@ def get_config(
     node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     config_file_path = shared_utils.path_join(
         DUGTRIO_CONFIG_MOUNT_DIRPATH_ON_SERVICE,
@@ -91,7 +94,10 @@ def get_config(
     )
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME,
+        ),
         ports=USED_PORTS,
         public_ports=public_ports,
         files={
src/el/besu/besu_launcher.star
@@ -233,7 +233,7 @@ def get_config(
             "labels": shared_utils.label_maker(
                 client=constants.EL_TYPE.besu,
                 client_type=constants.CLIENT_TYPES.el,
-                image=participant.el_image,
+                image=participant.el_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=cl_client_name,
                 extra_labels=participant.el_extra_labels,
                 supernode=participant.supernode,
src/el/erigon/erigon_launcher.star
@@ -230,7 +230,7 @@ def get_config(
             "labels": shared_utils.label_maker(
                 client=constants.EL_TYPE.erigon,
                 client_type=constants.CLIENT_TYPES.el,
-                image=participant.el_image,
+                image=participant.el_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=cl_client_name,
                 extra_labels=participant.el_extra_labels,
                 supernode=participant.supernode,
src/el/ethereumjs/ethereumjs_launcher.star
@@ -216,7 +216,7 @@ def get_config(
             "labels": shared_utils.label_maker(
                 client=constants.EL_TYPE.ethereumjs,
                 client_type=constants.CLIENT_TYPES.el,
-                image=participant.el_image,
+                image=participant.el_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=cl_client_name,
                 extra_labels=participant.el_extra_labels,
                 supernode=participant.supernode,
src/el/geth/geth_launcher.star
@@ -312,7 +312,7 @@ def get_config(
             "labels": shared_utils.label_maker(
                 client=constants.EL_TYPE.geth,
                 client_type=constants.CLIENT_TYPES.el,
-                image=participant.el_image,
+                image=participant.el_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=cl_client_name,
                 extra_labels=participant.el_extra_labels,
                 supernode=participant.supernode,
src/el/nethermind/nethermind_launcher.star
@@ -223,7 +223,7 @@ def get_config(
             "labels": shared_utils.label_maker(
                 client=constants.EL_TYPE.nethermind,
                 client_type=constants.CLIENT_TYPES.el,
-                image=participant.el_image,
+                image=participant.el_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=cl_client_name,
                 extra_labels=participant.el_extra_labels,
                 supernode=participant.supernode,
src/el/nimbus-eth1/nimbus_launcher.star
@@ -209,7 +209,7 @@ def get_config(
             "labels": shared_utils.label_maker(
                 client=constants.EL_TYPE.nimbus,
                 client_type=constants.CLIENT_TYPES.el,
-                image=participant.el_image,
+                image=participant.el_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=cl_client_name,
                 extra_labels=participant.el_extra_labels,
                 supernode=participant.supernode,
src/el/reth/reth_launcher.star
@@ -263,7 +263,7 @@ def get_config(
             "labels": shared_utils.label_maker(
                 client=constants.EL_TYPE.reth,
                 client_type=constants.CLIENT_TYPES.el,
-                image=participant.el_image,
+                image=participant.el_image[-constants.MAX_LABEL_LENGTH :],
                 connected_client=cl_client_name,
                 extra_labels=participant.el_extra_labels,
                 supernode=participant.supernode,
src/el_forkmon/el_forkmon_launcher.star
@@ -32,6 +32,7 @@ def launch_el_forkmon(
     global_node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     all_el_client_info = []
     for client in el_contexts:
@@ -59,6 +60,7 @@ def launch_el_forkmon(
         global_node_selectors,
         port_publisher,
         additional_service_index,
+        docker_cache_params,
     )
     plan.add_service(SERVICE_NAME, config)
@@ -69,6 +71,7 @@ def get_config(
     node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     config_file_path = shared_utils.path_join(
         EL_FORKMON_CONFIG_MOUNT_DIRPATH_ON_SERVICE, EL_FORKMON_CONFIG_FILENAME
@@ -82,7 +85,10 @@ def get_config(
     )
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME,
+        ),
         ports=USED_PORTS,
         public_ports=public_ports,
         files={
src/ethereum_metrics_exporter/ethereum_metrics_exporter_launcher.star
@@ -23,11 +23,15 @@ def launch(
     el_context,
     cl_context,
     node_selectors,
+    docker_cache_params,
 ):
     exporter_service = plan.add_service(
         ethereum_metrics_exporter_service_name,
         ServiceConfig(
-            image=DEFAULT_ETHEREUM_METRICS_EXPORTER_IMAGE,
+            image=shared_utils.docker_cache_image_calc(
+                docker_cache_params,
+                DEFAULT_ETHEREUM_METRICS_EXPORTER_IMAGE,
+            ),
             ports={
                 HTTP_PORT_ID: shared_utils.new_port_spec(
                     METRICS_PORT_NUMBER,
src/forky/forky_launcher.star
@@ -38,6 +38,7 @@ def launch_forky(
     final_genesis_timestamp,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     all_cl_client_info = []
     all_el_client_info = []
@@ -88,6 +89,7 @@ def launch_forky(
         global_node_selectors,
         port_publisher,
         additional_service_index,
+        docker_cache_params,
     )
     plan.add_service(SERVICE_NAME, config)
@@ -100,6 +102,7 @@ def get_config(
     node_selectors,
     port_publisher,
     additional_service_index,
+    docker_cache_params,
 ):
     config_file_path = shared_utils.path_join(
         FORKY_CONFIG_MOUNT_DIRPATH_ON_SERVICE,
@@ -116,7 +119,10 @@ def get_config(
     )
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=shared_utils.docker_cache_image_calc(
+            docker_cache_params,
+            IMAGE_NAME,
+        ),
         ports=USED_PORTS,
         public_ports=public_ports,
         files={
src/goomy_blob/goomy_blob.star
 SERVICE_NAME = "goomy-blob-spammer"
-IMAGE_NAME = "ethpandaops/goomy-blob:master"
 ENTRYPOINT_ARGS = ["/bin/sh", "-c"]
@@ -24,7 +23,7 @@ def launch_goomy_blob(
         el_contexts,
         cl_context,
         seconds_per_slot,
-        goomy_blob_params.goomy_blob_args,
+        goomy_blob_params,
         global_node_selectors,
     )
     plan.add_service(SERVICE_NAME, config)
@@ -35,7 +34,7 @@ def get_config(
     el_contexts,
     cl_context,
     seconds_per_slot,
-    goomy_blob_args,
+    goomy_blob_params,
     node_selectors,
 ):
     goomy_cli_args = []
@@ -47,7 +46,7 @@ def get_config(
         )
     )
-    goomy_args = " ".join(goomy_blob_args)
+    goomy_args = " ".join(goomy_blob_params.goomy_blob_args)
     if goomy_args == "":
         goomy_args = "combined -b 2 -t 2 --max-pending 3"
     goomy_cli_args.append(goomy_args)
@@ -57,7 +56,7 @@ def get_config(
     )
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=goomy_blob_params.image,
         entrypoint=ENTRYPOINT_ARGS,
         cmd=[cmd],
         min_cpu=MIN_CPU,
src/grafana/grafana_launcher.star
@@ -3,8 +3,6 @@ static_files = import_module("../static_files/static_files.star")
 SERVICE_NAME = "grafana"
-IMAGE_NAME = "grafana/grafana:latest-ubuntu"
 HTTP_PORT_ID = "http"
 HTTP_PORT_NUMBER_UINT16 = 3000
@@ -128,7 +126,7 @@ def get_config(
     grafana_params,
 ):
     return ServiceConfig(
-        image=IMAGE_NAME,
+        image=grafana_params.image,
         ports=USED_PORTS,
         env_vars={
             CONFIG_DIRPATH_ENV_VAR: GRAFANA_CONFIG_DIRPATH_ON_SERVICE,
src/mev/flashbots/mev_custom_flood/mev_custom_flood_launcher.star
+shared_utils = import_module("../../../shared_utils/shared_utils.star")
 PYTHON_IMAGE = "ethpandaops/python-web3"
 CUSTOM_FLOOD_SERVICE_NAME = "mev-custom-flood"
@@ -15,13 +16,16 @@ def spam_in_background(
     el_uri,
     params,
     global_node_selectors,
+    docker_cache_params,
 ):
     sender_script = plan.upload_files(src="./sender.py", name="mev-custom-flood-sender")
     plan.add_service(
         name=CUSTOM_FLOOD_SERVICE_NAME,
         config=ServiceConfig(
-            image=PYTHON_IMAGE,
+            image=shared_utils.docker_cache_image_calc(
+                docker_cache_params, PYTHON_IMAGE
+            ),
            files={"/tmp": sender_script},
            cmd=["/bin/sh", "-c", "touch /tmp/sender.log && tail -f /tmp/sender.log"],
            env_vars={
src/network_launcher/ephemery.star
@@ -17,7 +17,6 @@ def launch(plan, prague_time):
             mv /ephemery-release/metadata/* /network-configs/ ;\
             cat /network-configs/genesis_validators_root.txt ;\
         '",
-        image="badouralix/curl-jq",
         store=[StoreSpec(src="/network-configs/", name="el_cl_genesis_data")],
     )
     genesis_validators_root = el_cl_genesis_data_uuid.output
src/network_launcher/kurtosis.star
View file @
0b2a2ae0
...
@@ -17,19 +17,25 @@ CL_GENESIS_DATA_GENERATION_TIME = 5
...
@@ -17,19 +17,25 @@ CL_GENESIS_DATA_GENERATION_TIME = 5
CL_NODE_STARTUP_TIME = 5
CL_NODE_STARTUP_TIME = 5
def launch(plan, network_params, participants, parallel_keystore_generation):
def launch(
num_participants = len(participants)
plan, network_params, args_with_right_defaults, parallel_keystore_generation
):
num_participants = len(args_with_right_defaults.participants)
plan.print("Generating cl validator key stores")
plan.print("Generating cl validator key stores")
validator_data = None
validator_data = None
if not parallel_keystore_generation:
if not parallel_keystore_generation:
validator_data = validator_keystores.generate_validator_keystores(
validator_data = validator_keystores.generate_validator_keystores(
plan, network_params.preregistered_validator_keys_mnemonic, participants
plan,
network_params.preregistered_validator_keys_mnemonic,
args_with_right_defaults.participants,
args_with_right_defaults.docker_cache_params,
)
)
else:
else:
validator_data = validator_keystores.generate_valdiator_keystores_in_parallel(
validator_data = validator_keystores.generate_valdiator_keystores_in_parallel(
plan,
plan,
network_params.preregistered_validator_keys_mnemonic,
network_params.preregistered_validator_keys_mnemonic,
participants,
args_with_right_defaults.participants,
args_with_right_defaults.docker_cache_params,
)
)
plan.print(json.indent(json.encode(validator_data)))
plan.print(json.indent(json.encode(validator_data)))
...
@@ -46,30 +52,34 @@ def launch(plan, network_params, participants, parallel_keystore_generation):
...
@@ -46,30 +52,34 @@ def launch(plan, network_params, participants, parallel_keystore_generation):
total_number_of_validator_keys = network_params.preregistered_validator_count
total_number_of_validator_keys = network_params.preregistered_validator_count
if network_params.preregistered_validator_count == 0:
if network_params.preregistered_validator_count == 0:
for participant in participants:
for participant in args_with_right_defaults.participants:
total_number_of_validator_keys += participant.validator_count
total_number_of_validator_keys += participant.validator_count
plan.print("Generating EL CL data")
plan.print("Generating EL CL data")
# we are running capella genesis - deprecated
# we are running capella genesis - deprecated
if network_params.deneb_fork_epoch > 0:
if network_params.deneb_fork_epoch > 0:
ethereum_genesis_generator_image = (
ethereum_genesis_generator_image = shared_utils.docker_cache_image_calc(
constants.ETHEREUM_GENESIS_GENERATOR.capella_genesis
args_with_right_defaults.docker_cache_params,
constants.ETHEREUM_GENESIS_GENERATOR.capella_genesis,
)
)
# we are running deneb genesis - default behavior
# we are running deneb genesis - default behavior
elif network_params.deneb_fork_epoch == 0:
elif network_params.deneb_fork_epoch == 0:
ethereum_genesis_generator_image = (
ethereum_genesis_generator_image = shared_utils.docker_cache_image_calc(
constants.ETHEREUM_GENESIS_GENERATOR.deneb_genesis
args_with_right_defaults.docker_cache_params,
constants.ETHEREUM_GENESIS_GENERATOR.deneb_genesis,
)
)
# we are running electra - experimental
# we are running electra - experimental
elif network_params.electra_fork_epoch != None:
elif network_params.electra_fork_epoch != None:
if network_params.electra_fork_epoch == 0:
if network_params.electra_fork_epoch == 0:
ethereum_genesis_generator_image = (
ethereum_genesis_generator_image = shared_utils.docker_cache_image_calc(
constants.ETHEREUM_GENESIS_GENERATOR.verkle_genesis
args_with_right_defaults.docker_cache_params,
constants.ETHEREUM_GENESIS_GENERATOR.verkle_genesis,
)
)
else:
else:
ethereum_genesis_generator_image = (
ethereum_genesis_generator_image = shared_utils.docker_cache_image_calc(
constants.ETHEREUM_GENESIS_GENERATOR.verkle_support_genesis
args_with_right_defaults.docker_cache_params,
constants.ETHEREUM_GENESIS_GENERATOR.verkle_support_genesis,
)
)
else:
else:
fail(
fail(
...
...
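Read together, the interleaved old/new rows above reduce to the following selection logic: the genesis-generator image is chosen from constants.ETHEREUM_GENESIS_GENERATOR by fork epoch and then passed through the cache helper. This is a condensed sketch only — base_image is introduced here for brevity, the real code calls docker_cache_image_calc inside each branch, and the fail message is elided in the diff:

    # Condensed sketch of the new image selection in kurtosis.star
    if network_params.deneb_fork_epoch > 0:  # capella genesis - deprecated
        base_image = constants.ETHEREUM_GENESIS_GENERATOR.capella_genesis
    elif network_params.deneb_fork_epoch == 0:  # deneb genesis - default behavior
        base_image = constants.ETHEREUM_GENESIS_GENERATOR.deneb_genesis
    elif network_params.electra_fork_epoch != None:  # electra - experimental
        if network_params.electra_fork_epoch == 0:
            base_image = constants.ETHEREUM_GENESIS_GENERATOR.verkle_genesis
        else:
            base_image = constants.ETHEREUM_GENESIS_GENERATOR.verkle_support_genesis
    else:
        fail("unsupported fork configuration")  # placeholder message; the real one is elided above

    ethereum_genesis_generator_image = shared_utils.docker_cache_image_calc(
        args_with_right_defaults.docker_cache_params, base_image
    )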
src/network_launcher/shadowfork.star
View file @
0b2a2ae0
...
@@ -35,7 +35,6 @@ def shadowfork_prep(
...
@@ -35,7 +35,6 @@ def shadowfork_prep(
+ "/geth/"
+ "/geth/"
+ shadowfork_block
+ shadowfork_block
+ "/_snapshot_eth_getBlockByNumber.json",
+ "/_snapshot_eth_getBlockByNumber.json",
image="badouralix/curl-jq",
store=[StoreSpec(src="/shadowfork", name="latest_blocks")],
store=[StoreSpec(src="/shadowfork", name="latest_blocks")],
)
)
...
...
src/package_io/constants.star
View file @
0b2a2ae0
...
@@ -113,6 +113,15 @@ FULU_FORK_EPOCH = 100000001
...
@@ -113,6 +113,15 @@ FULU_FORK_EPOCH = 100000001
EIP7594_FORK_VERSION = "0x80000038"
EIP7594_FORK_VERSION = "0x80000038"
EIP7594_FORK_EPOCH = 100000002
EIP7594_FORK_EPOCH = 100000002
MAX_LABEL_LENGTH = 63
CONTAINER_REGISTRY = struct(
dockerhub="/",
ghcr="ghcr.io",
gcr="gcr.io",
)
ETHEREUM_GENESIS_GENERATOR = struct(
ETHEREUM_GENESIS_GENERATOR = struct(
capella_genesis="ethpandaops/ethereum-genesis-generator:2.0.12", # Deprecated (no support for minimal config)
capella_genesis="ethpandaops/ethereum-genesis-generator:2.0.12", # Deprecated (no support for minimal config)
deneb_genesis="ethpandaops/ethereum-genesis-generator:3.4.1", # Default
deneb_genesis="ethpandaops/ethereum-genesis-generator:3.4.1", # Default
...
...
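The new CONTAINER_REGISTRY struct above is consumed by shared_utils.docker_cache_image_calc (further down in this commit) as plain substring markers rather than full registry URLs; dockerhub is just "/" because Docker Hub images carry no registry host, so once the ghcr.io and gcr.io checks fail, any remaining image path containing a slash is treated as a Docker Hub image. A brief illustration with assumed image names:

    # Illustrative matching only; the image names are examples.
    is_github = constants.CONTAINER_REGISTRY.ghcr in "ghcr.io/org/client:v1"            # True -> github prefix
    is_google = constants.CONTAINER_REGISTRY.gcr in "gcr.io/project/client:v1"          # True -> google/gcr prefix
    is_dockerhub = constants.CONTAINER_REGISTRY.dockerhub in "ethpandaops/python-web3"  # True -> dockerhub prefix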
src/package_io/input_parser.star
View file @
0b2a2ae0
This diff is collapsed.
src/package_io/sanity_check.star
View file @
0b2a2ae0
...
@@ -175,7 +175,15 @@ SUBCATEGORY_PARAMS = {
...
@@ -175,7 +175,15 @@ SUBCATEGORY_PARAMS = {
"image",
"image",
"env",
"env",
],
],
"docker_cache_params": [
"enabled",
"url",
"dockerhub_prefix",
"github_prefix",
"google_prefix",
],
"tx_spammer_params": [
"tx_spammer_params": [
"image",
"tx_spammer_extra_args",
"tx_spammer_extra_args",
],
],
"goomy_blob_params": [
"goomy_blob_params": [
...
@@ -188,6 +196,7 @@ SUBCATEGORY_PARAMS = {
...
@@ -188,6 +196,7 @@ SUBCATEGORY_PARAMS = {
"max_mem",
"max_mem",
"storage_tsdb_retention_time",
"storage_tsdb_retention_time",
"storage_tsdb_retention_size",
"storage_tsdb_retention_size",
"image",
],
],
"grafana_params": [
"grafana_params": [
"additional_dashboards",
"additional_dashboards",
...
@@ -195,6 +204,7 @@ SUBCATEGORY_PARAMS = {
...
@@ -195,6 +204,7 @@ SUBCATEGORY_PARAMS = {
"max_cpu",
"max_cpu",
"min_mem",
"min_mem",
"max_mem",
"max_mem",
"image",
],
],
"assertoor_params": [
"assertoor_params": [
"image",
"image",
...
...
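With the new subcategory above, the sanity check now accepts enabled, url, dockerhub_prefix, github_prefix and google_prefix under docker_cache_params (the defaults themselves live in input_parser.star, whose diff is collapsed above). A hypothetical configuration sketch with placeholder values, written as the Starlark dict the parser would receive:

    # Hypothetical values only; url + prefix + image path must form a valid
    # reference on your pull-through cache.
    docker_cache_params = {
        "enabled": True,
        "url": "my-cache.example.com/",
        "dockerhub_prefix": "dh/",
        "github_prefix": "gh/",
        "google_prefix": "gc/",
    }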
src/participant_network.star
View file @
0b2a2ae0
...
@@ -30,9 +30,8 @@ beacon_snooper = import_module("./snooper/snooper_beacon_launcher.star")
...
@@ -30,9 +30,8 @@ beacon_snooper = import_module("./snooper/snooper_beacon_launcher.star")
def launch_participant_network(
def launch_participant_network(
plan,
plan,
participants,
args_with_right_defaults,
network_params,
network_params,
global_log_level,
jwt_file,
jwt_file,
keymanager_file,
keymanager_file,
persistent,
persistent,
...
@@ -41,14 +40,10 @@ def launch_participant_network(
...
@@ -41,14 +40,10 @@ def launch_participant_network(
global_node_selectors,
global_node_selectors,
keymanager_enabled,
keymanager_enabled,
parallel_keystore_generation,
parallel_keystore_generation,
checkpoint_sync_enabled,
checkpoint_sync_url,
port_publisher,
mev_builder_type,
):
):
network_id = network_params.network_id
network_id = network_params.network_id
latest_block = ""
latest_block = ""
num_participants = len(participants)
num_participants = len(args_with_right_defaults.participants)
prague_time = 0
prague_time = 0
shadowfork_block = "latest"
shadowfork_block = "latest"
total_number_of_validator_keys = 0
total_number_of_validator_keys = 0
...
@@ -70,7 +65,7 @@ def launch_participant_network(
...
@@ -70,7 +65,7 @@ def launch_participant_network(
plan,
plan,
network_params,
network_params,
shadowfork_block,
shadowfork_block,
participants,
args_with_right_defaults.participants,
global_tolerations,
global_tolerations,
global_node_selectors,
global_node_selectors,
)
)
...
@@ -82,7 +77,7 @@ def launch_participant_network(
...
@@ -82,7 +77,7 @@ def launch_participant_network(
final_genesis_timestamp,
final_genesis_timestamp,
validator_data,
validator_data,
) = launch_kurtosis.launch(
) = launch_kurtosis.launch(
plan, network_params, participants, parallel_keystore_generation
plan, network_params, args_with_right_defaults, parallel_keystore_generation
)
)
el_cl_genesis_config_template = read_file(
el_cl_genesis_config_template = read_file(
...
@@ -137,15 +132,15 @@ def launch_participant_network(
...
@@ -137,15 +132,15 @@ def launch_participant_network(
network_params,
network_params,
el_cl_data,
el_cl_data,
jwt_file,
jwt_file,
participants,
args_with_right_defaults.participants,
global_log_level,
args_with_right_defaults.global_log_level,
global_node_selectors,
global_node_selectors,
global_tolerations,
global_tolerations,
persistent,
persistent,
network_id,
network_id,
num_participants,
num_participants,
port_publisher,
args_with_right_defaults.port_publisher,
mev_builder_type,
args_with_right_defaults.mev_type,
)
)
# Launch all consensus layer clients
# Launch all consensus layer clients
...
@@ -170,9 +165,8 @@ def launch_participant_network(
...
@@ -170,9 +165,8 @@ def launch_participant_network(
el_cl_data,
el_cl_data,
jwt_file,
jwt_file,
keymanager_file,
keymanager_file,
participants,
args_with_right_defaults,
all_el_contexts,
all_el_contexts,
global_log_level,
global_node_selectors,
global_node_selectors,
global_tolerations,
global_tolerations,
persistent,
persistent,
...
@@ -180,9 +174,6 @@ def launch_participant_network(
...
@@ -180,9 +174,6 @@ def launch_participant_network(
validator_data,
validator_data,
prysm_password_relative_filepath,
prysm_password_relative_filepath,
prysm_password_artifact_uuid,
prysm_password_artifact_uuid,
checkpoint_sync_enabled,
checkpoint_sync_url,
port_publisher,
)
)
ethereum_metrics_exporter_context = None
ethereum_metrics_exporter_context = None
...
@@ -200,12 +191,14 @@ def launch_participant_network(
...
@@ -200,12 +191,14 @@ def launch_participant_network(
]
]
current_vc_index = 0
current_vc_index = 0
for index, participant in enumerate(participants):
for index, participant in enumerate(args_with_right_defaults.participants):
el_type = participant.el_type
el_type = participant.el_type
cl_type = participant.cl_type
cl_type = participant.cl_type
vc_type = participant.vc_type
vc_type = participant.vc_type
remote_signer_type = participant.remote_signer_type
remote_signer_type = participant.remote_signer_type
index_str = shared_utils.zfill_custom(index + 1, len(str(len(participants))))
index_str = shared_utils.zfill_custom(
index + 1, len(str(len(args_with_right_defaults.participants)))
)
for sub_index in range(participant.vc_count):
for sub_index in range(participant.vc_count):
vc_index_str = shared_utils.zfill_custom(
vc_index_str = shared_utils.zfill_custom(
sub_index + 1, len(str(participant.vc_count))
sub_index + 1, len(str(participant.vc_count))
...
@@ -231,6 +224,7 @@ def launch_participant_network(
...
@@ -231,6 +224,7 @@ def launch_participant_network(
el_context,
el_context,
cl_context,
cl_context,
node_selectors,
node_selectors,
args_with_right_defaults.docker_cache_params,
)
)
plan.print(
plan.print(
"Successfully added {0} ethereum metrics exporter participants".format(
"Successfully added {0} ethereum metrics exporter participants".format(
...
@@ -320,6 +314,7 @@ def launch_participant_network(
...
@@ -320,6 +314,7 @@ def launch_participant_network(
snooper_service_name,
snooper_service_name,
cl_context,
cl_context,
node_selectors,
node_selectors,
args_with_right_defaults.docker_cache_params,
)
)
plan.print(
plan.print(
"Successfully added {0} snooper participants".format(
"Successfully added {0} snooper participants".format(
...
@@ -359,7 +354,7 @@ def launch_participant_network(
...
@@ -359,7 +354,7 @@ def launch_participant_network(
participant=participant,
participant=participant,
global_tolerations=global_tolerations,
global_tolerations=global_tolerations,
node_selectors=node_selectors,
node_selectors=node_selectors,
port_publisher=port_publisher,
port_publisher=args_with_right_defaults.port_publisher,
remote_signer_index=current_vc_index,
remote_signer_index=current_vc_index,
)
)
...
@@ -376,7 +371,7 @@ def launch_participant_network(
...
@@ -376,7 +371,7 @@ def launch_participant_network(
service_name="vc-{0}".format(full_name),
service_name="vc-{0}".format(full_name),
vc_type=vc_type,
vc_type=vc_type,
image=participant.vc_image,
image=participant.vc_image,
global_log_level=global_log_level,
global_log_level=args_with_right_defaults.global_log_level,
cl_context=cl_context,
cl_context=cl_context,
el_context=el_context,
el_context=el_context,
remote_signer_context=remote_signer_context,
remote_signer_context=remote_signer_context,
...
@@ -392,7 +387,7 @@ def launch_participant_network(
...
@@ -392,7 +387,7 @@ def launch_participant_network(
preset=network_params.preset,
preset=network_params.preset,
network=network_params.network,
network=network_params.network,
electra_fork_epoch=network_params.electra_fork_epoch,
electra_fork_epoch=network_params.electra_fork_epoch,
port_publisher=port_publisher,
port_publisher=args_with_right_defaults.port_publisher,
vc_index=current_vc_index,
vc_index=current_vc_index,
)
)
all_vc_contexts.append(vc_context)
all_vc_contexts.append(vc_context)
...
@@ -403,7 +398,7 @@ def launch_participant_network(
...
@@ -403,7 +398,7 @@ def launch_participant_network(
all_participants = []
all_participants = []
for index, participant in enumerate(participants):
for index, participant in enumerate(args_with_right_defaults.participants):
el_type = participant.el_type
el_type = participant.el_type
cl_type = participant.cl_type
cl_type = participant.cl_type
vc_type = participant.vc_type
vc_type = participant.vc_type
...
...
src/prelaunch_data_generator/el_cl_genesis/el_cl_genesis_generator.star
View file @
0b2a2ae0
...
@@ -74,7 +74,6 @@ def generate_el_cl_genesis_data(
...
@@ -74,7 +74,6 @@ def generate_el_cl_genesis_data(
name="read-prague-time",
name="read-prague-time",
description="Reading prague time from genesis",
description="Reading prague time from genesis",
run="jq .config.pragueTime /data/genesis.json | tr -d '\n'",
run="jq .config.pragueTime /data/genesis.json | tr -d '\n'",
image="badouralix/curl-jq",
files={"/data": genesis.files_artifacts[0]},
files={"/data": genesis.files_artifacts[0]},
)
)
...
...
src/prelaunch_data_generator/validator_keystores/validator_keystore_generator.star
View file @
0b2a2ae0
...
@@ -38,8 +38,9 @@ def launch_prelaunch_data_generator(
...
@@ -38,8 +38,9 @@ def launch_prelaunch_data_generator(
plan,
plan,
files_artifact_mountpoints,
files_artifact_mountpoints,
service_name_suffix,
service_name_suffix,
docker_cache_params,
):
):
config = get_config(files_artifact_mountpoints)
config = get_config(files_artifact_mountpoints, docker_cache_params)
service_name = "{0}{1}".format(
service_name = "{0}{1}".format(
SERVICE_NAME_PREFIX,
SERVICE_NAME_PREFIX,
...
@@ -51,11 +52,9 @@ def launch_prelaunch_data_generator(
...
@@ -51,11 +52,9 @@ def launch_prelaunch_data_generator(
def launch_prelaunch_data_generator_parallel(
def launch_prelaunch_data_generator_parallel(
plan, files_artifact_mountpoints, service_name_suffixes
plan, files_artifact_mountpoints, service_name_suffixes, docker_cache_params
):
):
config = get_config(
files_artifact_mountpoints,
)
config = get_config(files_artifact_mountpoints, docker_cache_params)
service_names = [
service_names = [
"{0}{1}".format(
"{0}{1}".format(
SERVICE_NAME_PREFIX,
SERVICE_NAME_PREFIX,
...
@@ -68,9 +67,12 @@ def launch_prelaunch_data_generator_parallel(
...
@@ -68,9 +67,12 @@ def launch_prelaunch_data_generator_parallel(
return service_names
return service_names
def get_config(files_artifact_mountpoints):
def get_config(files_artifact_mountpoints, docker_cache_params):
return ServiceConfig(
return ServiceConfig(
image=ETH_VAL_TOOLS_IMAGE,
image=shared_utils.docker_cache_image_calc(
docker_cache_params,
ETH_VAL_TOOLS_IMAGE,
),
entrypoint=ENTRYPOINT_ARGS,
entrypoint=ENTRYPOINT_ARGS,
files=files_artifact_mountpoints,
files=files_artifact_mountpoints,
)
)
...
@@ -79,8 +81,10 @@ def get_config(files_artifact_mountpoints):
...
@@ -79,8 +81,10 @@ def get_config(files_artifact_mountpoints):
# Generates keystores for the given number of nodes from the given mnemonic, where each keystore contains approximately
# Generates keystores for the given number of nodes from the given mnemonic, where each keystore contains approximately
#
#
# num_keys / num_nodes keys
# num_keys / num_nodes keys
def generate_validator_keystores(plan, mnemonic, participants):
def generate_validator_keystores(plan, mnemonic, participants, docker_cache_params):
service_name = launch_prelaunch_data_generator(plan, {}, "cl-validator-keystore")
service_name = launch_prelaunch_data_generator(
plan, {}, "cl-validator-keystore", docker_cache_params
)
all_output_dirpaths = []
all_output_dirpaths = []
all_sub_command_strs = []
all_sub_command_strs = []
...
...
src/prometheus/prometheus_launcher.star
View file @
0b2a2ae0
...
@@ -46,6 +46,7 @@ def launch_prometheus(
...
@@ -46,6 +46,7 @@ def launch_prometheus(
node_selectors=global_node_selectors,
node_selectors=global_node_selectors,
storage_tsdb_retention_time=prometheus_params.storage_tsdb_retention_time,
storage_tsdb_retention_time=prometheus_params.storage_tsdb_retention_time,
storage_tsdb_retention_size=prometheus_params.storage_tsdb_retention_size,
storage_tsdb_retention_size=prometheus_params.storage_tsdb_retention_size,
image=prometheus_params.image,
)
)
return prometheus_url
return prometheus_url
...
...
src/shared_utils/shared_utils.star
View file @
0b2a2ae0
...
@@ -83,9 +83,9 @@ def label_maker(
...
@@ -83,9 +83,9 @@ def label_maker(
labels = {
labels = {
"ethereum-package.client": client,
"ethereum-package.client": client,
"ethereum-package.client-type": client_type,
"ethereum-package.client-type": client_type,
"ethereum-package.client-image":
image.replace("/", "-")
"ethereum-package.client-image":
ensure_alphanumeric_bounds(
.replace(":", "_")
image.replace("/", "-").replace(":", "_").replace(".", "-").split("@")[0]
.split("@")[0]
, # drop the sha256 part of the image from the label
)
, # drop the sha256 part of the image from the label
"ethereum-package.sha256": sha256,
"ethereum-package.sha256": sha256,
"ethereum-package.connected-client": connected_client,
"ethereum-package.connected-client": connected_client,
}
}
...
@@ -346,3 +346,49 @@ def get_cpu_mem_resource_limits(
...
@@ -346,3 +346,49 @@ def get_cpu_mem_resource_limits(
else constants.VOLUME_SIZE[network_name][client_type + "_volume_size"]
else constants.VOLUME_SIZE[network_name][client_type + "_volume_size"]
)
)
return min_cpu, max_cpu, min_mem, max_mem, volume_size
return min_cpu, max_cpu, min_mem, max_mem, volume_size
def docker_cache_image_calc(docker_cache_params, image):
if docker_cache_params.enabled:
if docker_cache_params.url in image:
return image
if constants.CONTAINER_REGISTRY.ghcr in image:
return (
docker_cache_params.url
+ docker_cache_params.github_prefix
+ "/".join(image.split("/")[1:])
)
elif constants.CONTAINER_REGISTRY.gcr in image:
return (
docker_cache_params.url
+ docker_cache_params.gcr_prefix
+ "/".join(image.split("/")[1:])
)
elif constants.CONTAINER_REGISTRY.dockerhub in image:
return (
docker_cache_params.url + docker_cache_params.dockerhub_prefix + image
)
return image
def is_alphanumeric(c):
return ("a" <= c and c <= "z") or ("A" <= c and c <= "Z") or ("0" <= c and c <= "9")
def ensure_alphanumeric_bounds(s):
# Trim from the start
start = 0
for i in range(len(s)):
if is_alphanumeric(s[i]):
start = i
break
# Trim from the end
end = len(s)
for i in range(len(s) - 1, -1, -1):
if is_alphanumeric(s[i]):
end = i + 1
break
return s[start:end]
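To make the two new helpers concrete, here is a hedged worked example; the struct fields follow the code above, but the cache URL, the prefixes and the second image name are made up:

    # Hypothetical cache settings, chosen so url + prefix + path forms a valid reference.
    docker_cache_params = struct(
        enabled=True,
        url="my-cache.example.com/",
        dockerhub_prefix="dh/",
        github_prefix="gh/",
        gcr_prefix="gc/",
    )

    # Docker Hub image (matched last, via the "/" marker): the prefix is prepended as-is.
    cached_hub = docker_cache_image_calc(docker_cache_params, "ethpandaops/python-web3")
    # cached_hub == "my-cache.example.com/dh/ethpandaops/python-web3"

    # ghcr.io image (made-up name): the registry host is dropped, the github prefix is used.
    cached_ghcr = docker_cache_image_calc(docker_cache_params, "ghcr.io/some-org/some-image:v1")
    # cached_ghcr == "my-cache.example.com/gh/some-org/some-image:v1"

    # Images already pointing at the cache, and any image while the cache is disabled,
    # are returned unchanged.

    # ensure_alphanumeric_bounds trims non-alphanumeric characters from both ends,
    # so sanitized label values start and end with a letter or digit.
    trimmed = ensure_alphanumeric_bounds("-python-web3_latest-")  # == "python-web3_latest"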
src/snooper/snooper_beacon_launcher.star
View file @
0b2a2ae0
...
@@ -21,10 +21,21 @@ MIN_MEMORY = 10
...
@@ -21,10 +21,21 @@ MIN_MEMORY = 10
MAX_MEMORY = 600
MAX_MEMORY = 600
def launch(plan, service_name, cl_context, node_selectors):
def launch(
plan,
service_name,
cl_context,
node_selectors,
docker_cache_params,
):
snooper_service_name = "{0}".format(service_name)
snooper_service_name = "{0}".format(service_name)
snooper_config = get_config(service_name, cl_context, node_selectors)
snooper_config = get_config(
service_name,
cl_context,
node_selectors,
docker_cache_params,
)
snooper_service = plan.add_service(snooper_service_name, snooper_config)
snooper_service = plan.add_service(snooper_service_name, snooper_config)
snooper_http_port = snooper_service.ports[SNOOPER_BEACON_RPC_PORT_ID]
snooper_http_port = snooper_service.ports[SNOOPER_BEACON_RPC_PORT_ID]
...
@@ -33,7 +44,12 @@ def launch(plan, service_name, cl_context, node_selectors):
...
@@ -33,7 +44,12 @@ def launch(plan, service_name, cl_context, node_selectors):
)
)
def get_config(service_name, cl_context, node_selectors):
def get_config(
service_name,
cl_context,
node_selectors,
docker_cache_params,
):
beacon_rpc_port_num = "{0}".format(
beacon_rpc_port_num = "{0}".format(
cl_context.beacon_http_url,
cl_context.beacon_http_url,
)
)
...
@@ -45,7 +61,9 @@ def get_config(service_name, cl_context, node_selectors):
...
@@ -45,7 +61,9 @@ def get_config(service_name, cl_context, node_selectors):
]
]
return ServiceConfig(
return ServiceConfig(
image=constants.DEFAULT_SNOOPER_IMAGE,
image=shared_utils.docker_cache_image_calc(
docker_cache_params, constants.DEFAULT_SNOOPER_IMAGE
),
ports=SNOOPER_USED_PORTS,
ports=SNOOPER_USED_PORTS,
cmd=cmd,
cmd=cmd,
min_cpu=MIN_CPU,
min_cpu=MIN_CPU,
...
...
src/snooper/snooper_engine_launcher.star
View file @
0b2a2ae0
...
@@ -22,10 +22,15 @@ MIN_MEMORY = 10
...
@@ -22,10 +22,15 @@ MIN_MEMORY = 10
MAX_MEMORY = 600
MAX_MEMORY = 600
def launch(plan, service_name, el_context, node_selectors):
def launch(plan, service_name, el_context, node_selectors, docker_cache_params):
snooper_service_name = "{0}".format(service_name)
snooper_service_name = "{0}".format(service_name)
snooper_config = get_config(service_name, el_context, node_selectors)
snooper_config = get_config(
service_name,
el_context,
node_selectors,
docker_cache_params,
)
snooper_service = plan.add_service(snooper_service_name, snooper_config)
snooper_service = plan.add_service(snooper_service_name, snooper_config)
snooper_http_port = snooper_service.ports[SNOOPER_ENGINE_RPC_PORT_ID]
snooper_http_port = snooper_service.ports[SNOOPER_ENGINE_RPC_PORT_ID]
...
@@ -34,7 +39,7 @@ def launch(plan, service_name, el_context, node_selectors):
...
@@ -34,7 +39,7 @@ def launch(plan, service_name, el_context, node_selectors):
)
)
def get_config(service_name, el_context, node_selectors):
def get_config(service_name, el_context, node_selectors, docker_cache_params):
engine_rpc_port_num = "http://{0}:{1}".format(
engine_rpc_port_num = "http://{0}:{1}".format(
el_context.ip_addr,
el_context.ip_addr,
el_context.engine_rpc_port_num,
el_context.engine_rpc_port_num,
...
@@ -47,7 +52,9 @@ def get_config(service_name, el_context, node_selectors):
...
@@ -47,7 +52,9 @@ def get_config(service_name, el_context, node_selectors):
]
]
return ServiceConfig(
return ServiceConfig(
image=constants.DEFAULT_SNOOPER_IMAGE,
image=shared_utils.docker_cache_image_calc(
docker_cache_params, constants.DEFAULT_SNOOPER_IMAGE
),
ports=SNOOPER_USED_PORTS,
ports=SNOOPER_USED_PORTS,
cmd=cmd,
cmd=cmd,
min_cpu=MIN_CPU,
min_cpu=MIN_CPU,
...
...
src/tracoor/tracoor_launcher.star
View file @
0b2a2ae0
...
@@ -36,6 +36,7 @@ def launch_tracoor(
...
@@ -36,6 +36,7 @@ def launch_tracoor(
final_genesis_timestamp,
final_genesis_timestamp,
port_publisher,
port_publisher,
additional_service_index,
additional_service_index,
docker_cache_params,
):
):
all_client_info = []
all_client_info = []
for index, participant in enumerate(participant_contexts):
for index, participant in enumerate(participant_contexts):
...
@@ -81,6 +82,7 @@ def launch_tracoor(
...
@@ -81,6 +82,7 @@ def launch_tracoor(
global_node_selectors,
global_node_selectors,
port_publisher,
port_publisher,
additional_service_index,
additional_service_index,
docker_cache_params,
)
)
plan.add_service(SERVICE_NAME, config)
plan.add_service(SERVICE_NAME, config)
...
@@ -93,6 +95,7 @@ def get_config(
...
@@ -93,6 +95,7 @@ def get_config(
node_selectors,
node_selectors,
port_publisher,
port_publisher,
additional_service_index,
additional_service_index,
docker_cache_params,
):
):
config_file_path = shared_utils.path_join(
config_file_path = shared_utils.path_join(
TRACOOR_CONFIG_MOUNT_DIRPATH_ON_SERVICE,
TRACOOR_CONFIG_MOUNT_DIRPATH_ON_SERVICE,
...
@@ -107,7 +110,10 @@ def get_config(
...
@@ -107,7 +110,10 @@ def get_config(
)
)
return ServiceConfig(
return ServiceConfig(
image=IMAGE_NAME,
image=shared_utils.docker_cache_image_calc(
docker_cache_params,
IMAGE_NAME,
),
ports=USED_PORTS,
ports=USED_PORTS,
public_ports=public_ports,
public_ports=public_ports,
files={
files={
...
...
src/transaction_spammer/transaction_spammer.star
View file @
0b2a2ae0
...
@@ -18,7 +18,7 @@ def launch_transaction_spammer(
...
@@ -18,7 +18,7 @@ def launch_transaction_spammer(
config = get_config(
config = get_config(
prefunded_addresses,
prefunded_addresses,
el_uri,
el_uri,
tx_spammer_params.tx_spammer_extra_args,
tx_spammer_params,
global_node_selectors,
global_node_selectors,
)
)
plan.add_service(SERVICE_NAME, config)
plan.add_service(SERVICE_NAME, config)
...
@@ -27,22 +27,20 @@ def launch_transaction_spammer(
...
@@ -27,22 +27,20 @@ def launch_transaction_spammer(
def get_config(
def get_config(
prefunded_addresses,
prefunded_addresses,
el_uri,
el_uri,
tx_spammer_extra_args,
tx_spammer_params,
node_selectors,
node_selectors,
):
):
tx_spammer_image = "ethpandaops/tx-fuzz:master"
cmd = [
cmd = [
"spam",
"spam",
"--rpc={}".format(el_uri),
"--rpc={}".format(el_uri),
"--sk={0}".format(prefunded_addresses[3].private_key),
"--sk={0}".format(prefunded_addresses[3].private_key),
]
]
if len(tx_spammer_extra_args) > 0:
if len(tx_spammer_params.tx_spammer_extra_args) > 0:
cmd.extend([param for param in tx_spammer_extra_args])
cmd.extend([param for param in tx_spammer_params.tx_spammer_extra_args])
return ServiceConfig(
return ServiceConfig(
image=tx_spammer_image,
image=tx_spammer_params.image,
cmd=cmd,
cmd=cmd,
min_cpu=MIN_CPU,
min_cpu=MIN_CPU,
max_cpu=MAX_CPU,
max_cpu=MAX_CPU,
...
...
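Condensed from the interleaved rows above, get_config now receives the whole tx_spammer_params struct and reads both the image and the extra args from it, replacing the previously hard-coded ethpandaops/tx-fuzz:master image. A sketch only (resource limits and node selectors are omitted, as they are unchanged and partly elided in the diff):

    def get_config(prefunded_addresses, el_uri, tx_spammer_params, node_selectors):
        cmd = [
            "spam",
            "--rpc={}".format(el_uri),
            "--sk={0}".format(prefunded_addresses[3].private_key),
        ]
        if len(tx_spammer_params.tx_spammer_extra_args) > 0:
            cmd.extend([param for param in tx_spammer_params.tx_spammer_extra_args])
        return ServiceConfig(
            image=tx_spammer_params.image,
            cmd=cmd,
        )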
src/vc/lighthouse.star
View file @
0b2a2ae0
...
@@ -119,7 +119,7 @@ def get_config(
...
@@ -119,7 +119,7 @@ def get_config(
"labels": shared_utils.label_maker(
"labels": shared_utils.label_maker(
client=constants.VC_TYPE.lighthouse,
client=constants.VC_TYPE.lighthouse,
client_type=constants.CLIENT_TYPES.validator,
client_type=constants.CLIENT_TYPES.validator,
image=image,
image=image[-constants.MAX_LABEL_LENGTH :],
connected_client=cl_context.client_name,
connected_client=cl_context.client_name,
extra_labels=participant.vc_extra_labels,
extra_labels=participant.vc_extra_labels,
supernode=participant.supernode,
supernode=participant.supernode,
...
...
src/vc/lodestar.star
View file @
0b2a2ae0
...
@@ -135,7 +135,7 @@ def get_config(
...
@@ -135,7 +135,7 @@ def get_config(
"labels": shared_utils.label_maker(
"labels": shared_utils.label_maker(
client=constants.VC_TYPE.lodestar,
client=constants.VC_TYPE.lodestar,
client_type=constants.CLIENT_TYPES.validator,
client_type=constants.CLIENT_TYPES.validator,
image=image,
image=image[-constants.MAX_LABEL_LENGTH :],
connected_client=cl_context.client_name,
connected_client=cl_context.client_name,
extra_labels=participant.vc_extra_labels,
extra_labels=participant.vc_extra_labels,
supernode=participant.supernode,
supernode=participant.supernode,
...
...
src/vc/nimbus.star
View file @
0b2a2ae0
...
@@ -107,7 +107,7 @@ def get_config(
...
@@ -107,7 +107,7 @@ def get_config(
"labels": shared_utils.label_maker(
"labels": shared_utils.label_maker(
client=constants.VC_TYPE.nimbus,
client=constants.VC_TYPE.nimbus,
client_type=constants.CLIENT_TYPES.validator,
client_type=constants.CLIENT_TYPES.validator,
image=image,
image=image[-constants.MAX_LABEL_LENGTH :],
connected_client=cl_context.client_name,
connected_client=cl_context.client_name,
extra_labels=participant.vc_extra_labels,
extra_labels=participant.vc_extra_labels,
supernode=participant.supernode,
supernode=participant.supernode,
...
...
src/vc/prysm.star
View file @
0b2a2ae0
...
@@ -125,7 +125,7 @@ def get_config(
...
@@ -125,7 +125,7 @@ def get_config(
"labels": shared_utils.label_maker(
"labels": shared_utils.label_maker(
client=constants.VC_TYPE.prysm,
client=constants.VC_TYPE.prysm,
client_type=constants.CLIENT_TYPES.validator,
client_type=constants.CLIENT_TYPES.validator,
image=image,
image=image[-constants.MAX_LABEL_LENGTH :],
connected_client=cl_context.client_name,
connected_client=cl_context.client_name,
extra_labels=participant.vc_extra_labels,
extra_labels=participant.vc_extra_labels,
supernode=participant.supernode,
supernode=participant.supernode,
...
...
src/vc/teku.star
View file @
0b2a2ae0
...
@@ -121,7 +121,7 @@ def get_config(
...
@@ -121,7 +121,7 @@ def get_config(
"labels": shared_utils.label_maker(
"labels": shared_utils.label_maker(
client=constants.VC_TYPE.teku,
client=constants.VC_TYPE.teku,
client_type=constants.CLIENT_TYPES.validator,
client_type=constants.CLIENT_TYPES.validator,
image=image,
image=image[-constants.MAX_LABEL_LENGTH :],
connected_client=cl_context.client_name,
connected_client=cl_context.client_name,
extra_labels=participant.vc_extra_labels,
extra_labels=participant.vc_extra_labels,
supernode=participant.supernode,
supernode=participant.supernode,
...
...
src/vc/vero.star
View file @
0b2a2ae0
...
@@ -65,7 +65,7 @@ def get_config(
...
@@ -65,7 +65,7 @@ def get_config(
"labels": shared_utils.label_maker(
"labels": shared_utils.label_maker(
client=constants.VC_TYPE.vero,
client=constants.VC_TYPE.vero,
client_type=constants.CLIENT_TYPES.validator,
client_type=constants.CLIENT_TYPES.validator,
image=image,
image=image[-constants.MAX_LABEL_LENGTH :],
connected_client=cl_context.client_name,
connected_client=cl_context.client_name,
extra_labels=participant.vc_extra_labels,
extra_labels=participant.vc_extra_labels,
supernode=participant.supernode,
supernode=participant.supernode,
...
...
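All six validator-client launchers above make the same change: the image string is truncated to the last constants.MAX_LABEL_LENGTH (63) characters before being handed to label_maker, presumably to stay within the 63-character limit Kubernetes places on label values; label_maker then sanitizes the result with the replaces and ensure_alphanumeric_bounds shown in shared_utils.star. A small illustration with a made-up image reference:

    # Made-up, overly long image reference
    image = "ethpandaops/lighthouse:" + "x" * 80
    truncated = image[-constants.MAX_LABEL_LENGTH :]  # keep only the last 63 characters
    # label_maker then replaces "/", ":" and ".", drops anything after "@" (the sha256 digest),
    # and trims non-alphanumeric characters from the ends via ensure_alphanumeric_bounds.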
vicotor @luxueqian mentioned in commit e957062f619b4c8503c2c41cd7f51dbdb48a4ed3 · Apr 13, 2025