Commit 34540373 authored by Mark Tyneway's avatar Mark Tyneway

ops: delete legacy ops package

The `ops-bedrock` package will likely be migrated to
the `ops` package. Keep the dockerfile around because the
`fault-detector` and `chain-mon` builds depend on it.
We should eventually refactor this dockerfile away and
follow the pattern the other packages use, where the
dockerfile lives in the package itself rather than in
a separate package.
parent 233ede59
DOCKER_COMPOSE_CMD := docker-compose \
-f docker-compose.yml
build:
DOCKER_BUILDKIT=1 \
$(DOCKER_COMPOSE_CMD) \
build
.PHONY: build
up: down
DOCKER_BUILDKIT=1 \
$(DOCKER_COMPOSE_CMD) \
up --build --detach
.PHONY: up
down:
$(DOCKER_COMPOSE_CMD) \
down
.PHONY: down
ps:
$(DOCKER_COMPOSE_CMD) \
ps
.PHONY: ps
up-metrics: down-metrics
DOCKER_BUILDKIT=1 \
$(DOCKER_COMPOSE_CMD) \
-f docker-compose-metrics.yml \
up --build --detach
.PHONY: up-metrics
down-metrics:
$(DOCKER_COMPOSE_CMD) \
-f docker-compose-metrics.yml \
down
.PHONY: down-metrics
ps-metrics:
$(DOCKER_COMPOSE_CMD) \
-f docker-compose-metrics.yml \
ps
.PHONY: ps-metrics
# docker-compose
# ops
The docker-compose project runs a local Optimism stack.
## prerequisites
- docker
- docker-compose
- make
## Building the services
```bash
make build
```
## Starting and stopping the project
The base `docker-compose.yml` file will start the required components for a full stack.
Supplementing the base configuration is an additional metrics file, `docker-compose-metrics.yml`. Adding this file to the stack enables metric emission for l2geth and starts Grafana (for metrics visualisation) and InfluxDB (for metric collection) instances.
Also available for testing is the `rpc-proxy` service in the `docker-compose-rpc-proxy.yml` file. It can be used to restrict which RPC methods are allowed to reach the Sequencer.
The base stack can be started and stopped with a command like this:
```
docker-compose \
-f docker-compose.yml \
up --build --detach
```
*Note*: This generates a large amount of log data which docker stores by default. See [Disk Usage](#disk-usage).
Also note that Docker Desktop only allocates 2GB of memory by default, which isn't enough to run the docker-compose services reliably.
To allocate more memory, go to Settings > Resources in the Docker UI and use the slider to change the value (_8GB recommended_). Make sure to click Apply & Restart for the changes to take effect.
To start the stack with monitoring enabled, add the metrics compose file as well.
```
docker-compose \
-f docker-compose.yml \
-f docker-compose-metrics.yml \
up --build --detach
```
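The `rpc-proxy` service mentioned above can be layered on the same way. This is a sketch, assuming `docker-compose-rpc-proxy.yml` composes like the metrics overlay:

```
docker-compose \
  -f docker-compose.yml \
  -f docker-compose-rpc-proxy.yml \
  up --build --detach
```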
Optionally, run a verifier alongside the rest of the stack. A replica can be run with the same command by switching the service name!
```
docker-compose \
-f docker-compose.yml \
up --scale \
verifier=1 \
--build --detach
```
A Makefile has been provided for convenience. The following targets are available.
- make up
- make down
- make up-metrics
- make down-metrics
## Turning off L2 Fee Enforcement
Fees can be turned off at runtime by setting the environment variable
`ROLLUP_ENFORCE_FEES` to `false`.
```bash
ROLLUP_ENFORCE_FEES=false docker-compose up
```
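Since the compose file defaults this variable via `${ROLLUP_ENFORCE_FEES:-true}`, the same override should also work through the Makefile targets, which simply shell out to docker-compose (a sketch; assumes your shell passes the variable through to the recipe environment):

```bash
ROLLUP_ENFORCE_FEES=false make up
```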
## Cross domain communication
By default, the `message-relayer` service is turned off. This means that
any tests must manually submit withdrawals. The `message-relayer` will
automatically look for withdrawals and submit the proofs. To run with the
`message-relayer` on, use the command:
```bash
docker-compose up --scale relayer=1
```
## Authentication
Influxdb has authentication disabled.
Grafana requires a login. The defaults are:
```
user: admin
password: optimism
```
## Data persistence
Grafana data is not currently saved. Any modifications or additions will be lost on container restart.
InfluxDB persists its data to a Docker volume.
**Stopping the project and removing the containers will not clear this volume.**
To remove the influxdb and grafana data, run commands like:
```
docker volume rm ops_influxdb_data
docker volume rm ops_grafana_data
```
## Accessing Grafana dashboards
After starting up the project, Grafana should be listening on http://localhost:3000.
Access this link and authenticate as `admin` (see [Authentication](#authentication)).
From the Dashboard list, select "Geth dashboard".
## Disk Usage
The logs generated are in the gigabytes per day range, so you need to be wary of disk exhaustion when running for long periods.
One way to solve this is to configure `/etc/docker/daemon.json` like this:
```json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
```
This configures log rotation with a limit of 10MB per file and storing a maximum of three files (per container). [More details on docker logging configuration](https://docs.docker.com/config/containers/logging/configure/).
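Before restarting the Docker daemon it is worth confirming the file parses as JSON; a minimal sketch that validates a scratch copy (`/tmp/daemon.json` is a stand-in for the real `/etc/docker/daemon.json`):

```shell
# Write the log-rotation config to a scratch path and validate it with
# Python's JSON parser before installing it for real.
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
python3 -m json.tool /tmp/daemon.json
```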
You can also decrease logging by increasing polling intervals:
```env
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=100
```
- [./envs/dtl.env#L7](./envs/dtl.env#L7)
```env
ROLLUP_POLL_INTERVAL_FLAG=500ms
```
- [./envs/geth.env#L8](./envs/geth.env#L8)
Various operational packages
version: "3.4"
services:
l2geth:
command: ["--metrics", "--metrics.influxdb", "--metrics.influxdb.endpoint", "http://influxdb:8086", "--metrics.influxdb.database", "l2geth"]
batch_submitter:
environment:
BATCH_SUBMITTER_RUN_METRICS_SERVER: "true"
BATCH_SUBMITTER_METRICS_PORT: 7300
BATCH_SUBMITTER_METRICS_HOSTNAME: 0.0.0.0
grafana:
image: grafana/grafana:7.5.5
env_file:
- ./envs/metrics.env
ports:
- ${GRAFANA_HTTP_PORT:-3000}:3000
volumes:
- ./docker/grafana/provisioning/:/etc/grafana/provisioning/:ro
- grafana_data:/var/lib/grafana/
- grafana_dashboards:/grafana-dashboards:ro
influxdb:
image: quay.io/influxdb/influxdb:1.6
env_file:
- ./envs/metrics.env
volumes:
- influxdb_data:/var/lib/influxdb
prometheus:
image: prom/prometheus
env_file:
- ./envs/metrics.env
volumes:
- ./docker/prometheus:/etc/prometheus
- prometheus_data:/prometheus
dashboard-sync:
image: python:3
env_file:
- ./envs/metrics.env
command:
- python
- /scripts/dashboard-sync.py
volumes:
- ./docker/scripts/:/scripts
- grafana_dashboards:/grafana-dashboards
volumes:
influxdb_data:
grafana_data:
grafana_dashboards:
prometheus_data:
version: '3.4'
x-system-addr-env: &system-addr-env
# private key: a6aecc98b63bafb0de3b29ae9964b14acb4086057808be29f90150214ebd4a0f
# OK to publish this since it will only ever be used in itests
SYSTEM_ADDRESS_0_DEPLOYER: '0xa961b0d6dce82db098cf70a42a14add3ee3db2d5'
# private key: 3b8d2345102cce2443acb240db6e87c8edd4bb3f821b17fab8ea2c9da08ea132
# OK to publish this since it will only ever be used in itests
SYSTEM_ADDRESS_1_DEPLOYER: '0xdfc82d475833a50de90c642770f34a9db7deb725'
services:
# this is a helper service used because there's no official hardhat image
l1_chain:
image: ethereumoptimism/hardhat-node:${DOCKER_TAG_HARDHAT:-latest}
build:
context: ./docker/hardhat
dockerfile: Dockerfile
env_file:
- ./envs/l1_chain.env
ports:
# expose the service to the host for integration testing
- ${L1CHAIN_HTTP_PORT:-9545}:8545
deployer:
depends_on:
- l1_chain
build:
context: ..
dockerfile: ./ops/docker/Dockerfile.packages
target: deployer
image: ethereumoptimism/deployer:${DOCKER_TAG_DEPLOYER:-latest}
entrypoint: ./deployer.sh
environment:
# Env vars for the deployment script.
CONTRACTS_RPC_URL: http://l1_chain:8545
CONTRACTS_DEPLOYER_KEY: 'ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80'
CONTRACTS_TARGET_NETWORK: 'local'
ports:
# expose the service to the host for getting the contract addrs
- ${DEPLOYER_PORT:-8080}:8081
dtl:
depends_on:
- l1_chain
- deployer
- l2geth
build:
context: ..
dockerfile: ./ops/docker/Dockerfile.packages
target: data-transport-layer
image: ethereumoptimism/data-transport-layer:${DOCKER_TAG_DATA_TRANSPORT_LAYER:-latest}
# override with the dtl script and the env vars required for it
entrypoint: ./dtl.sh
env_file:
- ./envs/dtl.env
# set the rest of the env vars for the network which do not
# depend on the docker-compose setup
environment:
# used for setting the address manager address
URL: http://deployer:8081/addresses.json
# connect to the 2 layers
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT: http://l1_chain:8545
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT: http://l2geth:8545
DATA_TRANSPORT_LAYER__SYNC_FROM_L2: 'true'
DATA_TRANSPORT_LAYER__L2_CHAIN_ID: 17
ports:
- ${DTL_PORT:-7878}:7878
l2geth:
depends_on:
- l1_chain
- deployer
build:
context: ..
dockerfile: ./l2geth/Dockerfile
image: ethereumoptimism/l2geth:${DOCKER_TAG_L2GETH:-latest}
# override with the geth script and the env vars required for it
entrypoint: sh ./geth.sh
env_file:
- ./envs/geth.env
environment:
<<: *system-addr-env
ETH1_HTTP: http://l1_chain:8545
ROLLUP_TIMESTAMP_REFRESH: 5s
ROLLUP_STATE_DUMP_PATH: http://deployer:8081/state-dump.latest.json
# connecting to the DTL
ROLLUP_CLIENT_HTTP: http://dtl:7878
ETH1_CTC_DEPLOYMENT_HEIGHT: 8
RETRIES: 60
# no need to keep this secret, only used internally to sign blocks
BLOCK_SIGNER_KEY: '6587ae678cf4fc9a33000cdbf9f35226b71dcc6a4684a31203241f9bcfd55d27'
BLOCK_SIGNER_ADDRESS: '0x00000398232E2064F896018496b4b44b3D62751F'
ROLLUP_ENFORCE_FEES: ${ROLLUP_ENFORCE_FEES:-true}
ROLLUP_FEE_THRESHOLD_DOWN: 0.9
ROLLUP_FEE_THRESHOLD_UP: 1.1
ports:
- ${L2GETH_HTTP_PORT:-8545}:8545
- ${L2GETH_WS_PORT:-8546}:8546
relayer:
depends_on:
- l1_chain
- l2geth
deploy:
replicas: 0
build:
context: ..
dockerfile: ./ops/docker/Dockerfile.packages
target: message-relayer
image: ethereumoptimism/message-relayer:${DOCKER_TAG_MESSAGE_RELAYER:-latest}
entrypoint: ./relayer.sh
environment:
MESSAGE_RELAYER__L1_RPC_PROVIDER: http://l1_chain:8545
MESSAGE_RELAYER__L2_RPC_PROVIDER: http://l2geth:8545
MESSAGE_RELAYER__L1_WALLET: '0xdbda1821b80551c9d65939329250298aa3472ba22feea921c0cf5d620ea67b97'
RETRIES: 60
fault_detector:
depends_on:
- l1_chain
- l2geth
deploy:
replicas: 0
build:
context: ..
dockerfile: ./ops/docker/Dockerfile.packages
target: fault-detector
image: ethereumoptimism/fault-detector:${DOCKER_TAG_FAULT_DETECTOR:-latest}
entrypoint: ./detector.sh
environment:
FAULT_DETECTOR__L1_RPC_PROVIDER: http://l1_chain:8545
FAULT_DETECTOR__L2_RPC_PROVIDER: http://l2geth:8545
RETRIES: 60
verifier:
depends_on:
- l1_chain
- deployer
- dtl
- l2geth
deploy:
replicas: 1
build:
context: ..
dockerfile: ./l2geth/Dockerfile
image: ethereumoptimism/l2geth:${DOCKER_TAG_L2GETH:-latest}
entrypoint: sh ./geth.sh
env_file:
- ./envs/geth.env
environment:
<<: *system-addr-env
ETH1_HTTP: http://l1_chain:8545
SEQUENCER_CLIENT_HTTP: http://l2geth:8545
ROLLUP_STATE_DUMP_PATH: http://deployer:8081/state-dump.latest.json
ROLLUP_CLIENT_HTTP: http://dtl:7878
ROLLUP_BACKEND: 'l1'
ETH1_CTC_DEPLOYMENT_HEIGHT: 8
RETRIES: 60
ROLLUP_VERIFIER_ENABLE: 'true'
ports:
- ${VERIFIER_HTTP_PORT:-8547}:8545
- ${VERIFIER_WS_PORT:-8548}:8546
replica:
depends_on:
- dtl
- l2geth
deploy:
replicas: 1
build:
context: ..
dockerfile: ./l2geth/Dockerfile
image: ethereumoptimism/l2geth:${DOCKER_TAG_L2GETH:-latest}
entrypoint: sh ./geth.sh
env_file:
- ./envs/geth.env
environment:
<<: *system-addr-env
ETH1_HTTP: http://l1_chain:8545
SEQUENCER_CLIENT_HTTP: http://l2geth:8545
ROLLUP_STATE_DUMP_PATH: http://deployer:8081/state-dump.latest.json
ROLLUP_CLIENT_HTTP: http://dtl:7878
ROLLUP_BACKEND: 'l2'
ROLLUP_VERIFIER_ENABLE: 'true'
ETH1_CTC_DEPLOYMENT_HEIGHT: 8
RETRIES: 60
ports:
- ${REPLICA_HTTP_PORT:-8549}:8545
- ${REPLICA_WS_PORT:-8550}:8546
replica_healthcheck:
depends_on:
- l2geth
- replica
deploy:
replicas: 0
build:
context: ..
dockerfile: ./ops/docker/Dockerfile.packages
target: replica-healthcheck
image: ethereumoptimism/replica-healthcheck:${DOCKER_TAG_REPLICA_HEALTHCHECK:-latest}
environment:
HEALTHCHECK__REFERENCE_RPC_PROVIDER: http://l2geth:8545
HEALTHCHECK__TARGET_RPC_PROVIDER: http://replica:8545
ports:
- ${HEALTHCHECK_HTTP_PORT:-7300}:7300
gas_oracle:
deploy:
replicas: 0
build:
context: ..
dockerfile: ./gas-oracle/Dockerfile
image: ethereumoptimism/gas-oracle:${DOCKER_TAG_GAS_ORACLE:-latest}
environment:
GAS_PRICE_ORACLE_ETHEREUM_HTTP_URL: http://l1_chain:8545
GAS_PRICE_ORACLE_LAYER_TWO_HTTP_URL: http://l2geth:8545
# Default hardhat account 5
GAS_PRICE_ORACLE_PRIVATE_KEY: '0x8b3a350cf5c34c9194ca85829a2df0ec3153be0318b5e2d3348e872092edffba'
# @eth-optimism/ci-builder
## 0.5.0
### Minor Changes
- 80f2271f5: Update foundry
### Patch Changes
- 035391a1f: Bump foundry to edf15abd648bb96e2bcee342c1d72ec7d1066cd1
## 0.4.0
### Minor Changes
- 05cc935b2: Bump foundry to 2ff99025abade470a795724c10648c800a41025e
## 0.3.8
### Patch Changes
- 85dfa9fe2: Add echidna tests for encoding
- ea0540e51: Update the slither version to fix echidna tests
- 0f8fc58ad: Add echidna tests for Burn
## 0.3.7
### Patch Changes
- 18d1ce3f4: Require rebuild on null ref
- 1594678e0: Add echidna test for AliasHelper
- 74fd040ce: Pin echidna version
## 0.3.6
### Patch Changes
- 011acf411: Add echidna to ci-builder
## 0.3.5
### Patch Changes
- c44ff357f: Update foundry in ci-builder
## 0.3.4
### Patch Changes
- 8e22c28f: Update geth to 1.10.25
## 0.3.3
### Patch Changes
- 3f485627: Pin slither version to 0.9.0
## 0.3.2
### Patch Changes
- fcfcf6e7: Remove ugly shell hack
- 009939e0: Fix codecov download step
## 0.3.1
### Patch Changes
- 7375a949: Download and verify codecov uploader binary in the ci-builder image
## 0.3.0
### Minor Changes
- 25c564bc: Automate foundry build
## 0.2.4
### Patch Changes
- c6fab69f: Update foundry to fix a bug in coverage generation
- f7323e0b: Upgrade foundry to support consistent storage layouts
## 0.2.3
### Patch Changes
- 9ac88806: Update golang, geth and golangci-lint
## 0.2.2
### Patch Changes
- c666fedc: Upgrade to Debian 11
## 0.2.1
### Patch Changes
- 9bb6a152: Trigger release to update foundry version
## 0.2.0
### Minor Changes
- e8909be0: Fix unbound variable in check_changed script
This now uses -z to check if a variable is unbound instead of -n.
This should fix the error when the script is run on develop.
## 0.1.2
### Patch Changes
- 184f13b6: Retrigger release of ci-builder
## 0.1.1
### Patch Changes
- 7bf30513: Fix publishing
- a60502f9: Install new version of bash
## 0.1.0
### Minor Changes
- 8c121ece: Update foundry in ci builder
### Patch Changes
- 445efe9d: Use ethereumoptimism/foundry:latest
FROM debian:bullseye-20220822-slim as foundry-build
SHELL ["/bin/bash", "-c"]
WORKDIR /opt
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y curl build-essential git && \
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs > rustup.sh && \
chmod +x ./rustup.sh && \
./rustup.sh -y
WORKDIR /opt/foundry
# Only diff from upstream docker image is this clone instead
# of COPY. We select a specific commit to use.
RUN git clone https://github.com/foundry-rs/foundry.git . \
&& git checkout da2392e58bb8a7fefeba46b40c4df1afad8ccd22
RUN source $HOME/.profile && \
cargo build --release && \
strip /opt/foundry/target/release/forge && \
strip /opt/foundry/target/release/cast && \
strip /opt/foundry/target/release/anvil
FROM ethereum/client-go:alltools-v1.10.25 as geth
FROM ghcr.io/crytic/echidna/echidna:v2.0.4 as echidna-test
FROM python:3.8.13-slim-bullseye
ENV GOPATH=/go
ENV PATH=/usr/local/go/bin:$GOPATH/bin:$PATH
ENV DEBIAN_FRONTEND=noninteractive
COPY --from=foundry-build /opt/foundry/target/release/forge /usr/local/bin/forge
COPY --from=foundry-build /opt/foundry/target/release/cast /usr/local/bin/cast
COPY --from=foundry-build /opt/foundry/target/release/anvil /usr/local/bin/anvil
COPY --from=geth /usr/local/bin/abigen /usr/local/bin/abigen
COPY --from=echidna-test /usr/local/bin/echidna-test /usr/local/bin/echidna-test
COPY check-changed.sh /usr/local/bin/check-changed
RUN apt-get update && \
apt-get install -y bash curl openssh-client git build-essential ca-certificates jq musl gnupg coreutils && \
curl -sL https://deb.nodesource.com/setup_16.x -o nodesource_setup.sh && \
curl -sL https://go.dev/dl/go1.19.linux-amd64.tar.gz -o go1.19.linux-amd64.tar.gz && \
tar -C /usr/local/ -xzvf go1.19.linux-amd64.tar.gz && \
ln -s /usr/local/go/bin/gofmt /usr/local/bin/gofmt && \
bash nodesource_setup.sh && \
apt-get install -y nodejs && \
npm i -g npm@8.11.0 && \
npm i -g yarn && \
npm i -g depcheck && \
pip install slither-analyzer==0.9.1 && \
go install gotest.tools/gotestsum@latest && \
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.48.0 && \
curl -fLSs https://raw.githubusercontent.com/CircleCI-Public/circleci-cli/master/install.sh | bash && \
chmod +x /usr/local/bin/check-changed
RUN echo "downloading solidity compilers" && \
curl -o solc-linux-amd64-v0.5.17+commit.d19bba13 -sL https://binaries.soliditylang.org/linux-amd64/solc-linux-amd64-v0.5.17+commit.d19bba13 && \
curl -o solc-linux-amd64-v0.8.9+commit.e5eed63a -sL https://binaries.soliditylang.org/linux-amd64/solc-linux-amd64-v0.8.9+commit.e5eed63a && \
curl -o solc-linux-amd64-v0.8.10+commit.fc410830 -sL https://binaries.soliditylang.org/linux-amd64/solc-linux-amd64-v0.8.10+commit.fc410830 && \
curl -o solc-linux-amd64-v0.8.12+commit.f00d7308 -sL https://binaries.soliditylang.org/linux-amd64/solc-linux-amd64-v0.8.12+commit.f00d7308 && \
echo "verifying checksums" && \
(echo "c35ce7a4d3ffa5747c178b1e24c8541b2e5d8a82c1db3719eb4433a1f19e16f3 solc-linux-amd64-v0.5.17+commit.d19bba13" | sha256sum --check --status - || exit 1) && \
(echo "f851f11fad37496baabaf8d6cb5c057ca0d9754fddb7a351ab580d7fd728cb94 solc-linux-amd64-v0.8.9+commit.e5eed63a" | sha256sum --check --status - || exit 1) && \
(echo "c7effacf28b9d64495f81b75228fbf4266ac0ec87e8f1adc489ddd8a4dd06d89 solc-linux-amd64-v0.8.10+commit.fc410830" | sha256sum --check --status - || exit 1) && \
(echo "556c3ec44faf8ff6b67933fa8a8a403abe82c978d6e581dbfec4bd07360bfbf3 solc-linux-amd64-v0.8.12+commit.f00d7308" | sha256sum --check --status - || exit 1) && \
echo "caching compilers" && \
mkdir -p ~/.cache/hardhat-nodejs/compilers/linux-amd64 && \
cp solc-linux-amd64-v0.5.17+commit.d19bba13 ~/.cache/hardhat-nodejs/compilers/linux-amd64/ && \
cp solc-linux-amd64-v0.8.9+commit.e5eed63a ~/.cache/hardhat-nodejs/compilers/linux-amd64/ && \
cp solc-linux-amd64-v0.8.10+commit.fc410830 ~/.cache/hardhat-nodejs/compilers/linux-amd64/ && \
cp solc-linux-amd64-v0.8.12+commit.f00d7308 ~/.cache/hardhat-nodejs/compilers/linux-amd64/ && \
mkdir -p ~/.svm/0.5.17 && \
cp solc-linux-amd64-v0.5.17+commit.d19bba13 ~/.svm/0.5.17/solc-0.5.17 && \
mkdir -p ~/.svm/0.8.9 && \
cp solc-linux-amd64-v0.8.9+commit.e5eed63a ~/.svm/0.8.9/solc-0.8.9 && \
mkdir -p ~/.svm/0.8.10 && \
cp solc-linux-amd64-v0.8.10+commit.fc410830 ~/.svm/0.8.10/solc-0.8.10 && \
mkdir -p ~/.svm/0.8.12 && \
cp solc-linux-amd64-v0.8.12+commit.f00d7308 ~/.svm/0.8.12/solc-0.8.12 && \
rm solc-linux-amd64-v0.5.17+commit.d19bba13 && \
rm solc-linux-amd64-v0.8.9+commit.e5eed63a && \
rm solc-linux-amd64-v0.8.10+commit.fc410830 && \
rm solc-linux-amd64-v0.8.12+commit.f00d7308
RUN echo "downloading and verifying Codecov uploader" && \
curl https://keybase.io/codecovsecurity/pgp_keys.asc | gpg --no-default-keyring --keyring trustedkeys.gpg --import && \
curl -Os "https://uploader.codecov.io/latest/linux/codecov" && \
curl -Os "https://uploader.codecov.io/latest/linux/codecov.SHA256SUM" && \
curl -Os "https://uploader.codecov.io/latest/linux/codecov.SHA256SUM.sig" && \
gpgv codecov.SHA256SUM.sig codecov.SHA256SUM && \
shasum -a 256 -c codecov.SHA256SUM || sha256sum -c codecov.SHA256SUM && \
cp codecov /usr/local/bin/codecov && \
chmod +x /usr/local/bin/codecov && \
rm codecov
RUN echo "downloading mockery tool" && \
mkdir -p mockery-tmp-dir && \
curl -o mockery-tmp-dir/mockery.tar.gz -sL https://github.com/vektra/mockery/releases/download/v2.28.1/mockery_2.28.1_Linux_x86_64.tar.gz && \
tar -xzvf mockery-tmp-dir/mockery.tar.gz -C mockery-tmp-dir && \
cp mockery-tmp-dir/mockery /usr/local/bin/mockery && \
chmod +x /usr/local/bin/mockery && \
rm -rf mockery-tmp-dir
#!/usr/bin/env -S bash -euET -o pipefail -O inherit_errexit
# Usage: check-changed.sh <diff-pattern>.
#
# This script compares the git diff against the given <diff-pattern> and
# writes TRUE to stdout if the diff matches the pattern, FALSE otherwise.
# It is used by CircleCI jobs to determine whether they need to run.
echoerr() { echo "$@" 1>&2; }
# Check if this is a CircleCI PR.
if [[ -z ${CIRCLE_PULL_REQUEST+x} ]]; then
# CIRCLE_PULL_REQUEST is unbound here
# Non-PR builds always require a rebuild.
echoerr "Not a PR build, requiring a total rebuild."
echo "TRUE"
else
# CIRCLE_PULL_REQUEST is bound here
PACKAGE=$1
# Craft the URL to the GitHub API. The access token is optional for the monorepo since it's an open-source repo.
GITHUB_API_URL="https://api.github.com/repos/ethereum-optimism/optimism/pulls/${CIRCLE_PULL_REQUEST/https:\/\/github.com\/ethereum-optimism\/optimism\/pull\//}"
echoerr "GitHub URL:"
echoerr "$GITHUB_API_URL"
# Grab the PR's base ref using the GitHub API.
PR=$(curl -H "Authorization: token $GITHUB_ACCESS_TOKEN" -H "Accept: application/vnd.github.v3+json" --retry 3 --retry-delay 1 -s "$GITHUB_API_URL")
echoerr "PR data:"
echoerr "$PR"
REF=$(echo "$PR" | jq -r ".base.ref")
if [ "$REF" = "master" ]; then
echoerr "Base ref is master, requiring a total rebuild."
echo "TRUE"
exit 0
fi
if [ "$REF" = "null" ]; then
echoerr "Bad ref, requiring a total rebuild."
echo "TRUE"
exit 1
fi
echoerr "Base Ref: $REF"
echoerr "Base Ref SHA: $(git show-branch --sha1-name "$REF")"
echoerr "Curr Ref: $(git rev-parse --short HEAD)"
DIFF=$(git diff --dirstat=files,0 "$REF...HEAD")
# Compare HEAD to the PR's base ref, stripping out the change percentages that come with git diff --dirstat.
# Pass in the diff pattern to grep, and echo TRUE if there's a match. False otherwise.
(echo "$DIFF" | sed 's/^[ 0-9.]\+% //g' | grep -q -E "$PACKAGE" && echo "TRUE") || echo "FALSE"
fi
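The dirstat-stripping pipeline above can be sanity-checked in isolation; a minimal sketch using made-up `git diff --dirstat=files,0` output (the real script derives it from the PR's base ref):

```shell
# Hypothetical dirstat output: percentage-prefixed directory lines.
DIFF='   4.2% l2geth/
  95.8% ops/docker/'
PACKAGE='ops'
# Strip the leading change percentages, then grep for the package pattern.
(echo "$DIFF" | sed 's/^[ 0-9.]\+% //g' | grep -q -E "$PACKAGE" && echo "TRUE") || echo "FALSE"
# prints TRUE
```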
# @eth-optimism/foundry
## 0.2.0
### Minor Changes
- 05cc935b2: Bump foundry to 2ff99025abade470a795724c10648c800a41025e
## 0.1.3
### Patch Changes
- c6fab69f: Update foundry to fix a bug in coverage generation
- f7323e0b: Upgrade foundry to support consistent storage layouts
## 0.1.2
### Patch Changes
- e736a4d0: Update to 64fe4acc97e6d76551cea7598c201f05ecd65639
## 0.1.1
### Patch Changes
- 921653f8: Bump foundry to 3c49efe58ca4bdeec4729490501da06914446405
## 0.1.0
### Minor Changes
- 5ae9c133: Initial release, pin to b7b1ec471bdd38221773e1a569dc4f20297bd7db
### Patch Changes
- d4de18ea: Use alpine:3.14
FROM alpine:3.14 as build-environment
WORKDIR /opt
RUN apk add clang lld curl build-base linux-headers git \
&& curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs > rustup.sh \
&& chmod +x ./rustup.sh \
&& ./rustup.sh -y
WORKDIR /opt/foundry
# Only diff from upstream docker image is this clone instead
# of COPY. We select a specific commit to use.
RUN git clone https://github.com/foundry-rs/foundry.git . \
&& git checkout 2ff99025abade470a795724c10648c800a41025e
RUN source $HOME/.profile && cargo build --release \
&& strip /opt/foundry/target/release/forge \
&& strip /opt/foundry/target/release/cast \
&& strip /opt/foundry/target/release/anvil
FROM alpine:3.14 as foundry-client
ENV GLIBC_KEY=https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
ENV GLIBC_KEY_FILE=/etc/apk/keys/sgerrand.rsa.pub
ENV GLIBC_RELEASE=https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.35-r0/glibc-2.35-r0.apk
RUN apk add linux-headers gcompat
RUN wget -q -O ${GLIBC_KEY_FILE} ${GLIBC_KEY} \
&& wget -O glibc.apk ${GLIBC_RELEASE} \
&& apk add glibc.apk --force
COPY --from=build-environment /opt/foundry/target/release/forge /usr/local/bin/forge
COPY --from=build-environment /opt/foundry/target/release/cast /usr/local/bin/cast
COPY --from=build-environment /opt/foundry/target/release/anvil /usr/local/bin/anvil
ENTRYPOINT ["/bin/sh", "-c"]
{
"name": "@eth-optimism/foundry",
"version": "0.2.0",
"scripts": {},
"license": "MIT",
"dependencies": {}
}
apiVersion: 1
providers:
- name: dashboards
type: file
updateIntervalSeconds: 30
options:
path: /grafana-dashboards
foldersFromFilesStructure: true
apiVersion: 1
datasources:
- name: InfluxDB
type: influxdb
access: proxy
orgId: 1
database: l2geth
url: http://influxdb:8086
version: 1
editable: false
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
orgId: 1
url: http://prometheus:9090
version: 1
editable: false
# @eth-optimism/hardhat-node
## 0.2.4
### Patch Changes
- 805acded4: Bump hardhat to 2.12.2
## 0.2.3
### Patch Changes
- 6c238212d: Trigger a release of the hardhat-node
## 0.2.2
### Patch Changes
- 8333f0f2c: Upgrade dependencies, add fork chain ID support
## 0.2.1
### Patch Changes
- 839f784f: Fixes CI to properly release the hardhat-node
## 0.2.0
### Minor Changes
- 587a7d0d: Correct configuration file behavior
## 0.1.5
### Patch Changes
- 29ff7462: Revert es target back to 2017
## 0.1.4
### Patch Changes
- 847a6338: Bump to hardhat@2.9.1
## 0.1.3
### Patch Changes
- 88601cb7: Refactored Dockerfiles
## 0.1.2
### Patch Changes
- 72a325f6: Add fork mode config to ethereumoptimism/hardhat docker image
- 50e2f6ff: Update to hardhat@2.7.0
## 0.1.1
### Patch Changes
- 57d5b8f9: Build docker images with node.js version 16
## 0.1.0
### Minor Changes
- 81ccd6e4: `regenesis/0.5.0` release
### Patch Changes
- eba5f50c: Create `ethereumoptimism/hardhat-node` docker image
FROM node:16-alpine
# bring in the config files for installing deps
COPY [ \
"package.json", \
"/hardhat/" \
]
# install deps
WORKDIR /hardhat
RUN yarn install && yarn cache clean
# bring in dockerenv so that hardhat launches with host = 0.0.0.0 instead of 127.0.0.1
# so that it's accessible from other boxes as well
# https://github.com/nomiclabs/hardhat/blob/bd7f4b93ed3724f3473052bebe4f8b5587e8bfa8/packages/hardhat-core/src/builtin-tasks/node.ts#L275-L287
COPY [ ".dockerenv" , "/hardhat/" ]
# bring in the scripts we'll be using
COPY [ "hardhat.config.js" , "/hardhat/" ]
EXPOSE 8545
# runs the script (assumes that the `CONTRACT` and `ARGS` are passed as args to `--env`)
CMD ["yarn", "start"]
require('dotenv').config()
const isForkModeEnabled = !!process.env.FORK_URL
const forkUrl = process.env.FORK_URL
const forkStartingBlock =
parseInt(process.env.FORK_STARTING_BLOCK, 10) || undefined
const gasPrice = parseInt(process.env.GAS_PRICE, 10) || 0
const config = {
networks: {
hardhat: {
gasPrice,
initialBaseFeePerGas: 0,
chainId: process.env.FORK_CHAIN_ID ? Number(process.env.FORK_CHAIN_ID) : 31337
},
},
analytics: { enabled: false },
}
if (isForkModeEnabled) {
console.log(`Running hardhat in a fork mode! URL: ${forkUrl}`)
if (forkStartingBlock) {
console.log(`Starting block: ${forkStartingBlock}`)
}
config.networks.hardhat.forking = {
url: forkUrl,
blockNumber: forkStartingBlock,
}
} else {
console.log('Running with a fresh state...')
}
module.exports = config
{
"name": "@eth-optimism/hardhat-node",
"version": "0.2.4",
"scripts": {
"start": "hardhat node --network hardhat"
},
"license": "MIT",
"dependencies": {
"dotenv": "^10.0.0",
"hardhat": "^2.9.6"
}
}
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
- job_name: 'data-transport-layer'
static_configs:
- targets: ['dtl:7878']
- job_name: 'batch-submitter'
static_configs:
- targets: ['batch_submitter:7300']
import os
import sys
import urllib.request
dashboard_list=[
{
'name': 'Single Geth',
'filename': 'single_geth.json',
'url': 'https://grafana.com/api/dashboards/13877/revisions/1/download'
}
]
dashboard_path="/grafana-dashboards"
GF_SECURITY_ADMIN_PASSWORD = os.environ.get('GF_SECURITY_ADMIN_PASSWORD')
if GF_SECURITY_ADMIN_PASSWORD is None:
print('GF_SECURITY_ADMIN_PASSWORD env value is missing, exiting.')
sys.exit(1)
if (not os.path.exists(dashboard_path)) or (not os.path.isdir(dashboard_path)) or (not os.access(dashboard_path, os.W_OK)):
print('Dashboard path {} is not writable, exiting.'.format(dashboard_path))
sys.exit(1)
for dashboard in dashboard_list:
with urllib.request.urlopen(dashboard['url']) as f:
response = f.read()
decoded_html = response.decode('utf-8')
data = decoded_html.replace('${DS_INFLUXDB}', 'InfluxDB')
with open(os.path.join(dashboard_path, dashboard['filename']), 'w') as d_file:
    d_file.write(data)
DATA_TRANSPORT_LAYER__SYNC_FROM_L1=true
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=false
DATA_TRANSPORT_LAYER__DB_PATH=/db
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
DATA_TRANSPORT_LAYER__CONFIRMATIONS=0
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=100
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__L1_START_HEIGHT=1
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT=
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=
ETH1_HTTP=
ETH1_CTC_DEPLOYMENT_HEIGHT=
ETH1_SYNC_SERVICE_ENABLE=true
ETH1_CONFIRMATION_DEPTH=0
ROLLUP_CLIENT_HTTP=
ROLLUP_POLL_INTERVAL_FLAG=500ms
ROLLUP_ENABLE_L2_GAS_POLLING=true
RPC_ENABLE=true
RPC_ADDR=0.0.0.0
RPC_PORT=8545
RPC_API=eth,net,rollup,web3,debug
RPC_CORS_DOMAIN=*
RPC_VHOSTS=*
WS=true
WS_ADDR=0.0.0.0
WS_PORT=8546
WS_API=eth,net,rollup,web3
WS_ORIGINS=*
CHAIN_ID=17
DATADIR=/root/.ethereum
GASPRICE=0
GCMODE=archive
IPC_DISABLE=true
NETWORK_ID=17
NO_USB=true
NO_DISCOVER=true
TARGET_GAS_LIMIT=15000000
USING_OVM=true
BLOCK_SIGNER_KEY=6587ae678cf4fc9a33000cdbf9f35226b71dcc6a4684a31203241f9bcfd55d27
BLOCK_SIGNER_ADDRESS=0x00000398232E2064F896018496b4b44b3D62751F
L2_BLOCK_GAS_LIMIT=15000000
BUILD_ENV=development
ETH_NETWORK_NAME=clique
LOG_LEVEL=debug
L1_ETH_RPC=http://127.0.0.1:7545
L2_ETH_RPC=http://127.0.0.1:4545
L1_STANDARD_BRIDGE_ADDRESS=
L2_STANDARD_BRIDGE_ADDRESS=
L2_GENESIS_BLOCK_HASH=
INDEXER_START_BLOCK_NUMBER=
INDEXER_START_BLOCK_HASH=
INDEXER_DB_HOST=localhost
INDEXER_DB_PORT=5432
INDEXER_DB_USER=postgres
INDEXER_DB_PASSWORD=
INDEXER_DB_NAME=indexer
FORK_URL=
FORK_STARTING_BLOCK=
GF_SECURITY_ADMIN_PASSWORD=optimism
INFLUXDB_HTTP_AUTH_ENABLED=false
INFLUXDB_DB=l2geth
# Builds an image using Buildx. Usage:
# build <name> <tag> <dockerfile> <context>
function build() {
echo "Building $1."
echo "Tag: $2"
echo "Dockerfile: $3"
echo "Context: $4"
docker buildx build \
--tag "$2" \
--cache-from "type=local,src=/tmp/.buildx-cache/$1" \
--cache-to="type=local,dest=/tmp/.buildx-cache-new/$1" \
--file "$3" \
--load "$4" \
&
}
mkdir -p /tmp/.buildx-cache-new
build l2geth "ethereumoptimism/l2geth:latest" "./l2geth/Dockerfile" .
build l1chain "ethereumoptimism/hardhat-node:latest" "./ops/docker/hardhat/Dockerfile" ./ops/docker/hardhat
wait
build deployer "ethereumoptimism/deployer:latest" "./ops/docker/Dockerfile.deployer" .
build dtl "ethereumoptimism/data-transport-layer:latest" "./ops/docker/Dockerfile.data-transport-layer" .
build relayer "ethereumoptimism/message-relayer:latest" "./ops/docker/Dockerfile.message-relayer" .
build fault-detector "ethereumoptimism/fault-detector:latest" "./ops/docker/Dockerfile.fault-detector" .
wait
#!/usr/bin/env bash
set -euo pipefail
DOCKER_REPO=$1
GIT_TAG=$2
GIT_SHA=$3
IMAGE_NAME=$(echo "$GIT_TAG" | grep -Eow '^(fault-detector|proxyd|indexer|op-[a-z0-9\-]*)' || true)
if [ -z "$IMAGE_NAME" ]; then
echo "image name could not be parsed from git tag '$GIT_TAG'"
exit 1
fi
IMAGE_TAG=$(echo "$GIT_TAG" | grep -Eow 'v.*' || true)
if [ -z "$IMAGE_TAG" ]; then
echo "image tag could not be parsed from git tag '$GIT_TAG'"
exit 1
fi
SOURCE_IMAGE_TAG="$DOCKER_REPO/$IMAGE_NAME:$GIT_SHA"
TARGET_IMAGE_TAG="$DOCKER_REPO/$IMAGE_NAME:$IMAGE_TAG"
TARGET_IMAGE_TAG_LATEST="$DOCKER_REPO/$IMAGE_NAME:latest"
echo "Checking if docker images exist for '$IMAGE_NAME'"
echo ""
tags=$(gcloud container images list-tags "$DOCKER_REPO/$IMAGE_NAME" --limit 1 --format json)
if [ "$tags" = "[]" ]; then
echo "No existing docker images were found for '$IMAGE_NAME'. The code tagged with '$GIT_TAG' may not have an associated dockerfile or docker build job."
echo "If this service has a dockerfile, add a docker-publish job for it in the circleci config."
echo ""
echo "Exiting"
exit 0
fi
echo "Tagging $SOURCE_IMAGE_TAG with '$IMAGE_TAG'"
gcloud container images add-tag -q "$SOURCE_IMAGE_TAG" "$TARGET_IMAGE_TAG"
# Do not tag with latest if the release is a release candidate.
if [[ "$IMAGE_TAG" == *"rc"* ]]; then
echo "Not tagging with 'latest' because the release is a release candidate."
exit 0
fi
echo "Tagging $SOURCE_IMAGE_TAG with 'latest'"
gcloud container images add-tag -q "$SOURCE_IMAGE_TAG" "$TARGET_IMAGE_TAG_LATEST"
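The `IMAGE_NAME`/`IMAGE_TAG` extraction near the top of this script can be exercised on its own; the sample tag below is illustrative:

```shell
# Re-run the two grep extractions from the script above on a
# sample monorepo-style tag.
GIT_TAG='op-node/v1.2.3'
IMAGE_NAME=$(echo "$GIT_TAG" | grep -Eow '^(fault-detector|proxyd|indexer|op-[a-z0-9\-]*)' || true)
IMAGE_TAG=$(echo "$GIT_TAG" | grep -Eow 'v.*' || true)
echo "$IMAGE_NAME $IMAGE_TAG"
```

`-o` prints only the matched portion and `-w` anchors the match at word boundaries, so the `/` separator cleanly splits the service name from the version.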
const os = require('os')
// This script converts the published-packages output of the
// changesets action into key=value pairs for our publishing
// CI workflow.
const data = JSON.parse(process.argv[2])
for (const i of data) {
const name = i.name.replace("@eth-optimism/", "")
const version = i.version
process.stdout.write(`::set-output name=${name}::${version}` + os.EOL)
}
#!/bin/bash
set -euo pipefail
RETRIES=${RETRIES:-20}
JSON='{"jsonrpc":"2.0","id":0,"method":"net_version","params":[]}'
if [ -z "${CONTRACTS_RPC_URL:-}" ]; then
echo "Must specify \$CONTRACTS_RPC_URL."
exit 1
fi
# wait for the base layer to be up
curl \
--fail \
--show-error \
--silent \
-H "Content-Type: application/json" \
--retry-connrefused \
--retry $RETRIES \
--retry-delay 1 \
-d "$JSON" \
"$CONTRACTS_RPC_URL" > /dev/null
echo "Connected to L1."
echo "Building deployment command."
DEPLOY_CMD="npx hardhat deploy --network $CONTRACTS_TARGET_NETWORK"
echo "Deploying contracts. Deployment command:"
echo "$DEPLOY_CMD"
eval "$DEPLOY_CMD"
echo "Building addresses.json."
export ADDRESS_MANAGER_ADDRESS=$(jq -r .address "./deployments/$CONTRACTS_TARGET_NETWORK/Lib_AddressManager.json")
# First, create two files. One of them contains a list of addresses, the other contains a list of contract names.
find "./deployments/$CONTRACTS_TARGET_NETWORK" -maxdepth 1 -name '*.json' | xargs cat | jq -r '.address' > addresses.txt
find "./deployments/$CONTRACTS_TARGET_NETWORK" -maxdepth 1 -name '*.json' | sed -e "s/.\/deployments\/$CONTRACTS_TARGET_NETWORK\///g" | sed -e 's/.json//g' > filenames.txt
# Start building addresses.json (truncate any previous copy).
echo "{" > addresses.json
# Zip the two files described above together, then swap the column order and format the pairs as JSON.
paste addresses.txt filenames.txt | sed -e "s/^\([^ ]\+\)\s\+\([^ ]\+\)/\"\2\": \"\1\",/" >> addresses.json
# Add the address manager alias.
echo "\"AddressManager\": \"$ADDRESS_MANAGER_ADDRESS\"" >> addresses.json
# End addresses.json
echo "}" >> addresses.json
echo "Built addresses.json. Content:"
jq . addresses.json
echo "Building dump file."
npx hardhat take-dump --network "$CONTRACTS_TARGET_NETWORK"
mv addresses.json ./genesis
cp "./genesis/$CONTRACTS_TARGET_NETWORK.json" ./genesis/state-dump.latest.json
# expose the deployments
cp -rf ./deployments ./genesis/deployments
# serve the addresses and dumps
echo "Starting server."
python3 -m http.server \
--bind "0.0.0.0" 8081 \
--directory ./genesis
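The `paste`/`sed` zipping this script uses to assemble `addresses.json` can be tried in isolation; the contract names and addresses below are illustrative:

```shell
# Build a tiny addresses.json from illustrative data using the
# same paste + sed transform as the deployer script above.
cd "$(mktemp -d)"
printf '0xaaa\n0xbbb\n' > addresses.txt
printf 'L1Bridge\nL2Bridge\n' > filenames.txt
{
echo "{"
# paste joins the files tab-separated; sed reorders the columns
# into `"name": "address",` JSON entries.
paste addresses.txt filenames.txt | sed -e "s/^\([^ ]\+\)\s\+\([^ ]\+\)/\"\2\": \"\1\",/"
echo "\"AddressManager\": \"0xccc\""
echo "}"
} > addresses.json
jq -r '.L1Bridge' addresses.json
```

The trailing non-comma entry for `AddressManager` is what keeps the generated object valid JSON.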
#!/bin/bash
set -e
RETRIES=${RETRIES:-60}
# waits for l2geth to be up
curl \
--fail \
--show-error \
--silent \
--output /dev/null \
--retry-connrefused \
--retry $RETRIES \
--retry-delay 1 \
$FAULT_DETECTOR__L2_RPC_PROVIDER
# go
exec yarn start
#!/bin/bash
set -e
RETRIES=${RETRIES:-60}
if [[ ! -z "$URL" ]]; then
# get the addrs from the URL provided
ADDRESSES=$(curl --fail --show-error --silent --retry-connrefused --retry $RETRIES --retry-delay 5 $URL)
# set the env
export DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=$(echo "$ADDRESSES" | jq -r '.AddressManager')
fi
# go
exec node dist/src/services/run.js
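The `jq` lookup above, shown standalone with an illustrative payload in place of the fetched addresses file:

```shell
# Hypothetical addresses payload; only .AddressManager is read.
ADDRESSES='{"AddressManager": "0x1234", "L1Bridge": "0xabcd"}'
ADDRESS_MANAGER=$(echo "$ADDRESSES" | jq -r '.AddressManager')
echo "$ADDRESS_MANAGER"
```

`jq -r` emits the raw string rather than a quoted JSON value, which is what an environment variable needs.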
#!/bin/sh
# FIXME: Cannot use set -e since bash is not installed in Dockerfile
# set -e
RETRIES=${RETRIES:-40}
VERBOSITY=${VERBOSITY:-6}
# get the genesis file from the deployer
curl \
--fail \
--show-error \
--silent \
--retry-connrefused \
--retry-all-errors \
--retry $RETRIES \
--retry-delay 5 \
$ROLLUP_STATE_DUMP_PATH \
-o genesis.json
# wait for the dtl to be up, else geth will crash if it cannot connect
curl \
--fail \
--show-error \
--silent \
--output /dev/null \
--retry-connrefused \
--retry $RETRIES \
--retry-delay 1 \
$ROLLUP_CLIENT_HTTP
# import the key that will be used to locally sign blocks
# this key does not have to be kept secret in order to be secure
# we use an insecure password ("pwd") to lock/unlock the account
echo "Importing private key"
echo "$BLOCK_SIGNER_KEY" > key.prv
echo "pwd" > password
geth account import --password ./password ./key.prv
# initialize the geth node with the genesis file
echo "Initializing Geth node"
geth --verbosity="$VERBOSITY" "$@" init genesis.json
# start the geth node
echo "Starting Geth node"
exec geth \
--verbosity="$VERBOSITY" \
--password ./password \
--allow-insecure-unlock \
--unlock $BLOCK_SIGNER_ADDRESS \
--mine \
--miner.etherbase $BLOCK_SIGNER_ADDRESS \
"$@"
#!/bin/bash
set -e
RETRIES=${RETRIES:-60}
JSON='{"jsonrpc":"2.0","id":0,"method":"rollup_getInfo","params":[]}'
if [[ ! -z "$URL" ]]; then
# get the addrs from the URL provided
ADDRESSES=$(curl --fail --show-error --silent --retry-connrefused --retry $RETRIES --retry-delay 5 $URL)
export ADDRESS_MANAGER=$(echo "$ADDRESSES" | jq -r '.AddressManager')
fi
# wait for the sequencer to be up
curl \
--silent \
--fail \
--show-error \
-H "Content-Type: application/json" \
--retry-connrefused \
--retry $RETRIES \
--retry-delay 3 \
-d "$JSON" \
--output /dev/null \
"$L2_URL"
npx hardhat test --network optimism --no-compile "$@"
#!/bin/bash
set -e
RETRIES=${RETRIES:-60}
# waits for l2geth to be up
curl \
--fail \
--show-error \
--silent \
--output /dev/null \
--retry-connrefused \
--retry $RETRIES \
--retry-delay 1 \
$MESSAGE_RELAYER__L2_RPC_PROVIDER
# go
exec yarn start
#!/bin/bash
# set up the stats file
mkdir -p ~/logs
touch ~/logs/stats.txt
while true; do
{
echo "$(date) ----------------";
echo "total memory usage --------------------------";
free -m;
echo "docker stats --------------------------------";
docker stats --no-stream;
echo "memory munchers -----------------------------";
ps aux --sort=-%mem | head;
} >> ~/logs/stats.txt
sleep 1;
done
#!/usr/bin/env bash
BEDROCK_TAGS_REMOTE="$1"
VERSION="$2"
if [ -z "$VERSION" ]; then
echo "You must specify a version."
exit 1
fi
FIRST_CHAR=$(printf '%s' "$VERSION" | cut -c1)
if [ "$FIRST_CHAR" != "v" ]; then
echo "Tag must start with v."
exit 1
fi
git tag "op-bindings/$VERSION"
git tag "op-service/$VERSION"
git push $BEDROCK_TAGS_REMOTE "op-bindings/$VERSION"
git push $BEDROCK_TAGS_REMOTE "op-service/$VERSION"
cd op-chain-ops
go get github.com/ethereum-optimism/optimism/op-bindings@$VERSION
go get github.com/ethereum-optimism/optimism/op-service@$VERSION
go mod tidy
git add .
git commit -am 'chore: Upgrade op-chain-ops dependencies'
git tag "op-chain-ops/$VERSION"
git push $BEDROCK_TAGS_REMOTE "op-chain-ops/$VERSION"
cd ../op-node
go get github.com/ethereum-optimism/optimism/op-bindings@$VERSION
go get github.com/ethereum-optimism/optimism/op-service@$VERSION
go get github.com/ethereum-optimism/optimism/op-chain-ops@$VERSION
go mod tidy
echo Please update the version to ${VERSION} in op-node/version/version.go
read -p "Press [Enter] key to continue"
git add .
git commit -am 'chore: Upgrade op-node dependencies'
git push $BEDROCK_TAGS_REMOTE
git tag "op-node/$VERSION"
git push $BEDROCK_TAGS_REMOTE "op-node/$VERSION"
cd ../op-proposer
go get github.com/ethereum-optimism/optimism/op-bindings@$VERSION
go get github.com/ethereum-optimism/optimism/op-service@$VERSION
go get github.com/ethereum-optimism/optimism/op-node@$VERSION
go mod tidy
echo Please update the version to ${VERSION} in op-proposer/cmd/main.go
read -p "Press [Enter] key to continue"
git add .
git commit -am 'chore: Upgrade op-proposer dependencies'
git push $BEDROCK_TAGS_REMOTE
git tag "op-proposer/$VERSION"
git push $BEDROCK_TAGS_REMOTE "op-proposer/$VERSION"
cd ../op-batcher
go get github.com/ethereum-optimism/optimism/op-bindings@$VERSION
go get github.com/ethereum-optimism/optimism/op-service@$VERSION
go get github.com/ethereum-optimism/optimism/op-node@$VERSION
go get github.com/ethereum-optimism/optimism/op-proposer@$VERSION
go mod tidy
echo Please update the version to ${VERSION} in op-batcher/cmd/main.go
read -p "Press [Enter] key to continue"
git add .
git commit -am 'chore: Upgrade op-batcher dependencies'
git push $BEDROCK_TAGS_REMOTE
git tag "op-batcher/$VERSION"
git push $BEDROCK_TAGS_REMOTE "op-batcher/$VERSION"
cd ../op-e2e
go get github.com/ethereum-optimism/optimism/op-bindings@$VERSION
go get github.com/ethereum-optimism/optimism/op-service@$VERSION
go get github.com/ethereum-optimism/optimism/op-node@$VERSION
go get github.com/ethereum-optimism/optimism/op-proposer@$VERSION
go get github.com/ethereum-optimism/optimism/op-batcher@$VERSION
go mod tidy
git add .
git commit -am 'chore: Upgrade op-e2e dependencies'
git push $BEDROCK_TAGS_REMOTE
git tag "op-e2e/$VERSION"
git push $BEDROCK_TAGS_REMOTE "op-e2e/$VERSION"
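The version-prefix guard near the top of this script (POSIX `cut -c1` extracts the first character) can be wrapped as a reusable check; the tag values below are illustrative:

```shell
# Accept only versions that start with "v", as the tagging
# script above requires.
starts_with_v() { [ "$(printf '%s' "$1" | cut -c1)" = "v" ]; }
starts_with_v "v0.10.0" && echo "accepted"
starts_with_v "0.10.0" || echo "rejected"
```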
#!/usr/bin/env python3
import json
import subprocess
import os
GETH_VERSION='v1.11.2'
def main():
for project in ('.', 'indexer'):
print(f'Updating {project}...')
update_mod(project)
def update_mod(project):
print('Replacing...')
subprocess.run([
'go',
'mod',
'edit',
'-replace',
f'github.com/ethereum/go-ethereum@{GETH_VERSION}=github.com/ethereum-optimism/op-geth@optimism'
], cwd=os.path.join(project), check=True)
print('Tidying...')
subprocess.run([
'go',
'mod',
'tidy'
], cwd=os.path.join(project), check=True)
if __name__ == '__main__':
main()
#!/bin/bash
CONTAINER=l2geth
RETRIES=90
i=0
until docker-compose logs "$CONTAINER" | grep -q "Starting Sequencer Loop";
do
sleep 1
if [ $i -eq $RETRIES ]; then
echo 'Timed out waiting for sequencer'
exit 1
fi
echo 'Waiting for sequencer...'
((i=i+1))
done
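The polling loop above generalizes to any readiness check. This sketch waits for a flag file instead of a docker-compose log line; the flag path and timings are illustrative:

```shell
# Poll for a condition with a bounded number of retries, as in
# the sequencer wait loop above.
FLAG="$(mktemp -u)"
( sleep 0.2; touch "$FLAG" ) &   # stands in for the service coming up
RETRIES=50
i=0
until [ -f "$FLAG" ]; do
sleep 0.1
if [ "$i" -eq "$RETRIES" ]; then
echo 'Timed out waiting for flag'
exit 1
fi
i=$((i+1))
done
echo 'ready'
```

Bounding the retry count keeps a dead dependency from hanging CI forever, which is why the original loop exits 1 after 90 attempts.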