Commit d31e937d authored by mergify[bot], committed by GitHub

Merge branch 'develop' into qbzzt/230320-get-in-touch-form

parents be20409c cc5a5031
...@@ -2,4 +2,4 @@
'@eth-optimism/contracts-bedrock': patch
---
Print tenderly simulation links during deployment
Reduce the time that the system dictator deploy scripts wait before checking the chain state.
---
'@eth-optimism/sdk': patch
---
Have SDK automatically create Standard and ETH bridges when L1StandardBridge is provided.
---
'@eth-optimism/atst': minor
---
Update readAttestations and prepareWriteAttestation to handle keys longer than 32 bytes
---
'@eth-optimism/atst': minor
---
Remove broken allowFailures as option
---
'@eth-optimism/common-ts': patch
---
Fix BaseServiceV2 configuration for camelCase options
---
'@eth-optimism/atst': patch
---
Update docs
---
'@eth-optimism/atst': minor
---
Move react api to @eth-optimism/atst/react so react isn't required to run the core sdk
...@@ -2,4 +2,4 @@
'@eth-optimism/contracts-bedrock': patch
---
Optionally print cast commands during migration
Added a constructor to the System Dictator
---
'@eth-optimism/sdk': patch
---
Update migrated withdrawal gaslimit calculation
---
'@eth-optimism/atst': minor
---
Fix main and module in atst package.json
---
'@eth-optimism/batch-submitter-service': patch
---
fix flag name for MaxStateRootElements in batch-submitter
fix log package for proposer
---
'@eth-optimism/atst': patch
---
Fixed bug with atst not defaulting to currently connected chain
---
'@eth-optimism/atst': minor
---
Deprecate parseAttestationBytes and createRawKey in favor of createKey, createValue
---
'@eth-optimism/fault-detector': patch
---
Fixes a bug that would cause the fault detector to error out if no outputs had been proposed yet.
---
'@eth-optimism/chain-mon': patch
'@eth-optimism/data-transport-layer': patch
'@eth-optimism/fault-detector': patch
'@eth-optimism/message-relayer': patch
'@eth-optimism/replica-healthcheck': patch
---
Empty patch release to re-release packages that failed to be released by a bug in the release process.
...@@ -518,6 +518,10 @@ jobs:
patterns: packages
# Note: The below needs to be manually configured whenever we
# add a new package to CI.
- run:
name: Check common-ts
command: npx depcheck
working_directory: packages/common-ts
- run:
    name: Check contracts
    command: npx depcheck
...
...@@ -209,7 +209,7 @@ func NewConfig(ctx *cli.Context) (Config, error) {
MaxL1TxSize: ctx.GlobalUint64(flags.MaxL1TxSizeFlag.Name),
MaxPlaintextBatchSize: ctx.GlobalUint64(flags.MaxPlaintextBatchSizeFlag.Name),
MinStateRootElements: ctx.GlobalUint64(flags.MinStateRootElementsFlag.Name),
MaxStateRootElements: ctx.GlobalUint64(flags.MinStateRootElementsFlag.Name),
MaxStateRootElements: ctx.GlobalUint64(flags.MaxStateRootElementsFlag.Name),
MaxBatchSubmissionTime: ctx.GlobalDuration(flags.MaxBatchSubmissionTimeFlag.Name),
PollInterval: ctx.GlobalDuration(flags.PollIntervalFlag.Name),
NumConfirmations: ctx.GlobalUint64(flags.NumConfirmationsFlag.Name),
...
...@@ -13,13 +13,13 @@ import (
"github.com/ethereum-optimism/optimism/bss-core/metrics"
"github.com/ethereum-optimism/optimism/bss-core/txmgr"
l2ethclient "github.com/ethereum-optimism/optimism/l2geth/ethclient"
"github.com/ethereum-optimism/optimism/l2geth/log"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/log"
)

// stateRootSize is the size in bytes of a state root.
...
import event from '@vuepress/plugin-pwa/lib/event'
export default ({ router }) => {
registerAutoReload();
router.addRoutes([
{ path: '/docs/', redirect: '/' },
])
}
// When new content is detected by the app, this will automatically
// refresh the page, so that users do not need to manually click
// the refresh button. For more details see:
// https://linear.app/optimism/issue/FE-1003/investigate-archive-issue-on-docs
const registerAutoReload = () => {
event.$on('sw-updated', e => e.skipWaiting().then(() => {
location.reload(true);
}))
}
...@@ -19,13 +19,13 @@ aside.sidebar {
p.sidebar-heading {
color: #323A43 !important;
font-family: 'Open Sans', sans-serif;
font-weight: 600;
font-weight: 600 !important;
font-size: 14px !important;
line-height: 24px !important;
min-height: 36px;
margin-left: 32px;
margin-left: 20px;
padding: 8px 16px !important;
width: calc(100% - 64px) !important;
width: calc(100% - 60px) !important;
border-radius: 8px;
}
...@@ -34,18 +34,20 @@ a.sidebar-link {
font-size: 14px !important;
line-height: 24px !important;
min-height: 36px;
margin-left: 32px;
margin-top: 3px;
margin-left: 20px;
padding: 8px 16px !important;
width: calc(100% - 64px) !important;
width: calc(100% - 60px) !important;
border-radius: 8px;
}

section.sidebar-group a.sidebar-link {
margin-left: 44px;
width: calc(100% - 64px) !important;
section.sidebar-group a.sidebar-link,
section.sidebar-group p.sidebar-heading.clickable {
margin-left: 32px;
width: calc(100% - 60px) !important;
}

.sidebar-links:not(.sidebar-group-items) > li > a.sidebar-link {
font-weight: 600 !important;
color: #323A43 !important;
}
...
...@@ -59,3 +59,66 @@ Download and install [Docker engine](https://docs.docker.com/engine/install/#ser
After the docker containers start, browse to http://<*computer running Blockscout*>:4000 to view the user interface.
You can also use the [API](https://docs.blockscout.com/for-users/api).
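For a quick sanity check from the command line, you can hit the RPC-style endpoint. This is a minimal sketch that assumes the instance runs on localhost and reuses one of the addresses from the GraphQL example below:

```
curl "http://localhost:4000/api?module=account&action=balance&address=0xcB69A90Aa5311e0e9141a66212489bAfb48b9340"
```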
### GraphQL
Blockscout's API includes [GraphQL](https://graphql.org/) support under `/graphiql`.
For example, this query looks at addresses.
```
query {
addresses(hashes:[
"0xcB69A90Aa5311e0e9141a66212489bAfb48b9340",
"0xC2dfA7205088179A8644b9fDCecD6d9bED854Cfe"])
```
GraphQL queries start with a top level entity (or entities).
In this case, our [top level query](https://docs.blockscout.com/for-users/api/graphql#queries) is for multiple addresses.
Note that you can only query on fields that are indexed.
For example, here we query on the addresses.
However, we couldn't query on `contractCode` or `fetchedCoinBalance`.
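If you only need one address, the schema also exposes a singular `address` field (per the Blockscout GraphQL docs; availability can vary by Blockscout version), so a minimal sketch looks like:

```
query {
  address(hash: "0xcB69A90Aa5311e0e9141a66212489bAfb48b9340") {
    fetchedCoinBalance
  }
}
```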
```
{
hash
contractCode
fetchedCoinBalance
```
The fields above are fetched from the address table.
```
transactions(first:5) {
```
We can also fetch the transactions that include the address (either as source or destination).
The API does not let us fetch an unlimited number of transactions, so here we ask for the first 5.
```
edges {
node {
```
Because this is a [graph](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)), the entities that connect two types, for example addresses and transactions, are called `edges`.
At the other end of each edge there is a transaction, which is a separate `node`.
```
hash
fromAddressHash
toAddressHash
input
}
```
These are the fields we read for each transaction.
```
}
}
}
}
```
Finally, close all the brackets.
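Putting all of the pieces together, the complete query looks like this; you can paste it into `/graphiql` and run it as-is:

```
query {
  addresses(hashes:[
    "0xcB69A90Aa5311e0e9141a66212489bAfb48b9340",
    "0xC2dfA7205088179A8644b9fDCecD6d9bED854Cfe"]) {
    hash
    contractCode
    fetchedCoinBalance
    transactions(first:5) {
      edges {
        node {
          hash
          fromAddressHash
          toAddressHash
          input
        }
      }
    }
  }
}
```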
\ No newline at end of file
...@@ -39,12 +39,11 @@ This tutorial was checked on:

| Software | Version | Installation command(s) |
| -------- | ---------- | - |
| Ubuntu | 20.04 LTS | |
| git | OS default | |
| make | 4.2.1-1.2 | `sudo apt install -y make` |
| git, curl, and make | OS default | `sudo apt install -y git curl make` |
| Go | 1.20 | `sudo apt update` <br> `wget https://go.dev/dl/go1.20.linux-amd64.tar.gz` <br> `tar xvzf go1.20.linux-amd64.tar.gz` <br> `sudo cp go/bin/go /usr/bin/go` <br> `sudo mv go /usr/lib` <br> `echo export GOROOT=/usr/lib/go >> ~/.bashrc` |
| Node | 16.19.0 | `curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -` <br> `sudo apt-get install -y nodejs` |
| Node | 16.19.0 | `curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -` <br> `sudo apt-get install -y nodejs npm` |
| yarn | 1.22.19 | `sudo npm install -g yarn` |
| Foundry | 0.2.0 | `curl -L https://foundry.paradigm.xyz | bash` <br> `sudo bash` <br> `foundryup` |
| Foundry | 0.2.0 | `curl -L https://foundry.paradigm.xyz | bash` <br> `. ~/.bashrc` <br> `foundryup` |
## Build the Source Code
...@@ -74,7 +73,8 @@ We’re going to be spinning up an EVM Rollup from the OP Stack source code. Yo
1. Build the various packages inside of the Optimism Monorepo.
```bash
make build
make op-node op-batcher
yarn build
```
### Build op-geth
...
---
title: Contribute to OP Stack
title: Contribute to the OP Stack
lang: en-US
---
...@@ -21,7 +21,7 @@ The OP Stack needs YOU (yes you!) to help review the codebase for bugs and vulne
## Documentation help
Spot a typo in these docs? See something that could be a little clearer? Head over to the Optimism Monorepo where the OP Stack docs are hosted and make a pull request. No contribution is too small!
## Community contributions
...@@ -30,4 +30,4 @@ If you’re looking for other ways to get involved, here are a few options:
- Grab an idea from the [project ideas list](https://github.com/ethereum-optimism/optimism-project-ideas) and start building
- Suggest a new idea for the [project ideas list](https://github.com/ethereum-optimism/optimism-project-ideas)
- Improve the [Optimism Community Hub](https://community.optimism.io/) [documentation](https://github.com/ethereum-optimism/community-hub) or [tutorials](https://github.com/ethereum-optimism/optimism-tutorial)
- Become an Optimism Ambassador, Support Nerd, and more in the [Optimism Discord](https://discord-gateway.optimism.io/)
\ No newline at end of file
module github.com/ethereum-optimism/optimism

go 1.18
go 1.19

require (
github.com/btcsuite/btcd v0.23.3
...@@ -9,7 +9,7 @@ require (
github.com/docker/docker v20.10.21+incompatible
github.com/docker/go-connections v0.4.0
github.com/ethereum-optimism/go-ethereum-hdwallet v0.1.3
github.com/ethereum/go-ethereum v1.11.2
github.com/ethereum/go-ethereum v1.11.4
github.com/fsnotify/fsnotify v1.6.0
github.com/golang/snappy v0.0.4
github.com/google/go-cmp v0.5.9
...@@ -69,11 +69,11 @@ require (
github.com/francoispqt/gojay v1.2.13 // indirect
github.com/gballet/go-libpcsclite v0.0.0-20191108122812-4678299bea08 // indirect
github.com/getsentry/sentry-go v0.18.0 // indirect
github.com/go-kit/kit v0.10.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/gofrs/flock v0.8.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.4.2 // indirect
github.com/golang/mock v1.6.0 // indirect
...@@ -86,7 +86,6 @@ require (
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-bexpr v0.1.11 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.1 // indirect
github.com/holiman/big v0.0.0-20221017200358-a027dc42d04e // indirect
github.com/holiman/bloomfilter/v2 v2.0.3 // indirect
github.com/huin/goupnp v1.1.0 // indirect
github.com/influxdata/influxdb v1.8.3 // indirect
...@@ -147,7 +146,6 @@ require (
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.39.0 // indirect
github.com/prometheus/procfs v0.9.0 // indirect
github.com/prometheus/tsdb v0.10.0 // indirect
github.com/quic-go/qpack v0.4.0 // indirect
github.com/quic-go/qtls-go1-18 v0.2.0 // indirect
github.com/quic-go/qtls-go1-19 v0.2.0 // indirect
...@@ -191,6 +189,6 @@ require (
nhooyr.io/websocket v1.8.7 // indirect
)

replace github.com/ethereum/go-ethereum v1.11.2 => github.com/ethereum-optimism/op-geth v1.11.2-de8c5df46.0.20230308025559-13ee9ab9153b
replace github.com/ethereum/go-ethereum v1.11.4 => github.com/ethereum-optimism/op-geth v1.11.2-de8c5df46.0.20230321002540-11f0554a4313

//replace github.com/ethereum/go-ethereum v1.11.2 => ../go-ethereum
//replace github.com/ethereum/go-ethereum v1.11.4 => ../go-ethereum
This diff is collapsed.
...@@ -30,10 +30,10 @@
"devDependencies": {
"@babel/eslint-parser": "^7.5.4",
"@eth-optimism/contracts": "^0.5.40",
"@eth-optimism/contracts-bedrock": "0.13.0",
"@eth-optimism/contracts-bedrock": "0.13.1",
"@eth-optimism/contracts-periphery": "^1.0.7",
"@eth-optimism/core-utils": "0.12.0",
"@eth-optimism/sdk": "2.0.0",
"@eth-optimism/sdk": "2.0.1",
"@ethersproject/abstract-provider": "^5.7.0",
"@ethersproject/providers": "^5.7.0",
"@ethersproject/transactions": "^5.7.0",
...
FROM --platform=$BUILDPLATFORM golang:1.18.0-alpine3.15 as builder
FROM --platform=$BUILDPLATFORM golang:1.19.0-alpine3.15 as builder

ARG VERSION=v0.0.0
...
...@@ -19,8 +19,19 @@ test:
lint:
golangci-lint run -E goimports,sqlclosecheck,bodyclose,asciicheck,misspell,errorlint -e "errors.As" -e "errors.Is"
fuzz:
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzChannelConfig_CheckTimeout ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzDurationZero ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzDurationTimeoutMaxChannelDuration ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzDurationTimeoutZeroMaxChannelDuration ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzChannelCloseTimeout ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzChannelZeroCloseTimeout ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzSeqWindowClose ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzSeqWindowZeroTimeoutClose ./batcher
.PHONY: \
op-batcher \
clean \
test \
lint
lint \
fuzz
...@@ -12,9 +12,9 @@ import (
gethrpc "github.com/ethereum/go-ethereum/rpc"
"github.com/urfave/cli"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-batcher/rpc" "github.com/ethereum-optimism/optimism/op-batcher/rpc"
oplog "github.com/ethereum-optimism/optimism/op-service/log" oplog "github.com/ethereum-optimism/optimism/op-service/log"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
oppprof "github.com/ethereum-optimism/optimism/op-service/pprof"
oprpc "github.com/ethereum-optimism/optimism/op-service/rpc"
)
...@@ -36,9 +36,10 @@ func Main(version string, cliCtx *cli.Context) error {
}

l := oplog.NewLogger(cfg.LogConfig)
m := metrics.NewMetrics("default")
l.Info("Initializing Batch Submitter") l.Info("Initializing Batch Submitter")
batchSubmitter, err := NewBatchSubmitterFromCLIConfig(cfg, l) batchSubmitter, err := NewBatchSubmitterFromCLIConfig(cfg, l, m)
if err != nil { if err != nil {
l.Error("Unable to create Batch Submitter", "error", err) l.Error("Unable to create Batch Submitter", "error", err)
return err return err
...@@ -64,16 +65,15 @@ func Main(version string, cliCtx *cli.Context) error {
}()
}
registry := opmetrics.NewRegistry()
metricsCfg := cfg.MetricsConfig
if metricsCfg.Enabled {
l.Info("starting metrics server", "addr", metricsCfg.ListenAddr, "port", metricsCfg.ListenPort)
go func() {
if err := opmetrics.ListenAndServe(ctx, registry, metricsCfg.ListenAddr, metricsCfg.ListenPort); err != nil {
if err := m.Serve(ctx, metricsCfg.ListenAddr, metricsCfg.ListenPort); err != nil {
l.Error("error starting metrics server", err)
}
}()
opmetrics.LaunchBalanceMetrics(ctx, l, registry, "", batchSubmitter.L1Client, batchSubmitter.From)
m.StartBalanceMetrics(ctx, l, batchSubmitter.L1Client, batchSubmitter.From)
}
rpcCfg := cfg.RPCConfig
...@@ -95,6 +95,9 @@ func Main(version string, cliCtx *cli.Context) error {
return fmt.Errorf("error starting RPC server: %w", err)
}
m.RecordInfo(version)
m.RecordUp()
interruptChannel := make(chan os.Signal, 1)
signal.Notify(interruptChannel, []os.Signal{
os.Interrupt,
...
...@@ -12,7 +12,6 @@ import (
)

var (
ErrInvalidMaxFrameSize = errors.New("max frame size cannot be zero")
ErrInvalidChannelTimeout = errors.New("channel timeout is less than the safety margin")
ErrInputTargetReached = errors.New("target amount of input data reached")
ErrMaxFrameIndex = errors.New("max frame index reached (uint16)")
...@@ -82,7 +81,15 @@ func (cc *ChannelConfig) Check() error {
// will infinitely loop when trying to create frames in the
// [channelBuilder.OutputFrames] function.
if cc.MaxFrameSize == 0 {
return ErrInvalidMaxFrameSize
return errors.New("max frame size cannot be zero")
}
// If the [MaxFrameSize] is less than [FrameV0OverHeadSize], the channel
// out will underflow the maxSize variable in the [derive.ChannelOut].
// Since it is of type uint64, it will wrap around to a very large
// number, making the frame size extremely large.
if cc.MaxFrameSize < derive.FrameV0OverHeadSize {
return fmt.Errorf("max frame size %d is less than the minimum 23", cc.MaxFrameSize)
}
return nil
...@@ -127,8 +134,12 @@ type channelBuilder struct {
blocks []*types.Block
// frames data queue, to be send as txs
frames []frameData
// total amount of output data of all frames created yet
outputBytes int
}
// newChannelBuilder creates a new channel builder or returns an error if the
// channel out could not be created.
func newChannelBuilder(cfg ChannelConfig) (*channelBuilder, error) {
co, err := derive.NewChannelOut()
if err != nil {
...@@ -145,11 +156,21 @@ func (c *channelBuilder) ID() derive.ChannelID {
return c.co.ID()
}

// InputBytes returns to total amount of input bytes added to the channel.
// InputBytes returns the total amount of input bytes added to the channel.
func (c *channelBuilder) InputBytes() int {
return c.co.InputBytes()
}
// ReadyBytes returns the amount of bytes ready in the compression pipeline to
// output into a frame.
func (c *channelBuilder) ReadyBytes() int {
return c.co.ReadyBytes()
}
// OutputBytes returns the total amount of output data of all frames created yet.
func (c *channelBuilder) OutputBytes() int {
return c.outputBytes
}
// Blocks returns a backup list of all blocks that were added to the channel. It
// can be used in case the channel needs to be rebuilt.
func (c *channelBuilder) Blocks() []*types.Block {
...@@ -173,22 +194,25 @@ func (c *channelBuilder) Reset() error {
// AddBlock returns a ChannelFullError if called even though the channel is
// already full. See description of FullErr for details.
//
// AddBlock also returns the L1BlockInfo that got extracted from the block's
// first transaction for subsequent use by the caller.
//
// Call OutputFrames() afterwards to create frames.
func (c *channelBuilder) AddBlock(block *types.Block) error {
func (c *channelBuilder) AddBlock(block *types.Block) (derive.L1BlockInfo, error) {
if c.IsFull() {
return c.FullErr()
return derive.L1BlockInfo{}, c.FullErr()
}

batch, err := derive.BlockToBatch(block)
batch, l1info, err := derive.BlockToBatch(block)
if err != nil {
return fmt.Errorf("converting block to batch: %w", err)
return l1info, fmt.Errorf("converting block to batch: %w", err)
}

if _, err = c.co.AddBatch(batch); errors.Is(err, derive.ErrTooManyRLPBytes) {
c.setFullErr(err)
return c.FullErr()
return l1info, c.FullErr()
} else if err != nil {
return fmt.Errorf("adding block to channel out: %w", err)
return l1info, fmt.Errorf("adding block to channel out: %w", err)
}
c.blocks = append(c.blocks, block)
c.updateSwTimeout(batch)
...@@ -198,7 +222,7 @@ func (c *channelBuilder) AddBlock(block *types.Block) error {
// Adding this block still worked, so don't return error, just mark as full
}

return nil
return l1info, nil
}
// Timeout management
...@@ -255,7 +279,7 @@ func (c *channelBuilder) updateTimeout(timeoutBlockNum uint64, reason error) {
}

// checkTimeout checks if the channel is timed out at the given block number and
// in this case marks the channel as full, if it wasn't full alredy.
// in this case marks the channel as full, if it wasn't full already.
func (c *channelBuilder) checkTimeout(blockNum uint64) {
if !c.IsFull() && c.TimedOut(blockNum) {
c.setFullErr(c.timeoutReason)
...@@ -370,10 +394,11 @@ func (c *channelBuilder) outputFrame() error {
}

frame := frameData{
id: txID{chID: c.co.ID(), frameNumber: fn},
id: frameID{chID: c.co.ID(), frameNumber: fn},
data: buf.Bytes(),
}
c.frames = append(c.frames, frame)
c.outputBytes += len(frame.data)
return err // possibly io.EOF (last frame)
}
...
This diff is collapsed.
package batcher_test
import (
"math"
"testing"
"github.com/ethereum-optimism/optimism/op-batcher/batcher"
"github.com/stretchr/testify/require"
)
// TestInputThreshold tests the [ChannelConfig.InputThreshold]
// function using a table-driven testing approach.
func TestInputThreshold(t *testing.T) {
type testInput struct {
TargetFrameSize uint64
TargetNumFrames int
ApproxComprRatio float64
}
type test struct {
input testInput
assertion func(uint64)
}
// Construct test cases that test the boundary conditions
tests := []test{
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(2), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 100000,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(250_000), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 1,
},
assertion: func(output uint64) {
require.Equal(t, uint64(1), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 2,
},
assertion: func(output uint64) {
require.Equal(t, uint64(0), output)
},
},
{
input: testInput{
TargetFrameSize: 100000,
TargetNumFrames: 1,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(250_000), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 100000,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(250_000), output)
},
},
{
input: testInput{
TargetFrameSize: 100000,
TargetNumFrames: 100000,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(25_000_000_000), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 0.000001,
},
assertion: func(output uint64) {
require.Equal(t, uint64(1_000_000), output)
},
},
{
input: testInput{
TargetFrameSize: 0,
TargetNumFrames: 0,
ApproxComprRatio: 0,
},
assertion: func(output uint64) {
// Need to allow for NaN depending on the machine architecture
require.True(t, output == uint64(0) || output == uint64(math.NaN()))
},
},
}
// Validate each test case
for _, tt := range tests {
config := batcher.ChannelConfig{
TargetFrameSize: tt.input.TargetFrameSize,
TargetNumFrames: tt.input.TargetNumFrames,
ApproxComprRatio: tt.input.ApproxComprRatio,
}
got := config.InputThreshold()
tt.assertion(got)
}
}
...@@ -6,7 +6,9 @@ import (
"io"
"math"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
...@@ -22,8 +24,9 @@ var ErrReorg = errors.New("block does not extend existing chain") ...@@ -22,8 +24,9 @@ var ErrReorg = errors.New("block does not extend existing chain")
// channel. // channel.
// Functions on channelManager are not safe for concurrent access. // Functions on channelManager are not safe for concurrent access.
type channelManager struct { type channelManager struct {
log log.Logger log log.Logger
cfg ChannelConfig metr metrics.Metricer
cfg ChannelConfig
// All blocks since the last request for new tx data.
blocks []*types.Block
...@@ -40,10 +43,12 @@ type channelManager struct {
confirmedTransactions map[txID]eth.BlockID
}

func NewChannelManager(log log.Logger, cfg ChannelConfig) *channelManager {
func NewChannelManager(log log.Logger, metr metrics.Metricer, cfg ChannelConfig) *channelManager {
return &channelManager{
log: log,
metr: metr,
cfg: cfg,
pendingTransactions: make(map[txID]txData),
confirmedTransactions: make(map[txID]eth.BlockID),
}
...@@ -71,6 +76,8 @@ func (s *channelManager) TxFailed(id txID) {
} else {
s.log.Warn("unknown transaction marked as failed", "id", id)
}
s.metr.RecordBatchTxFailed()
}
// TxConfirmed marks a transaction as confirmed on L1. Unfortunately even if all frames in
...@@ -78,7 +85,8 @@ func (s *channelManager) TxFailed(id txID) {
// resubmitted.
// This function may reset the pending channel if the pending channel has timed out.
func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) {
s.log.Trace("marked transaction as confirmed", "id", id, "block", inclusionBlock)
s.metr.RecordBatchTxSubmitted()
s.log.Debug("marked transaction as confirmed", "id", id, "block", inclusionBlock)
if _, ok := s.pendingTransactions[id]; !ok {
s.log.Warn("unknown transaction marked as confirmed", "id", id, "block", inclusionBlock)
// TODO: This can occur if we clear the channel while there are still pending transactions
...@@ -92,13 +100,15 @@ func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) {
// If this channel timed out, put the pending blocks back into the local saved blocks
// and then reset this state so it can try to build a new channel.
if s.pendingChannelIsTimedOut() {
s.log.Warn("Channel timed out", "chID", s.pendingChannel.ID())
s.metr.RecordChannelTimedOut(s.pendingChannel.ID())
s.log.Warn("Channel timed out", "id", s.pendingChannel.ID())
s.blocks = append(s.pendingChannel.Blocks(), s.blocks...)
s.clearPendingChannel()
}
// If we are done with this channel, record that.
if s.pendingChannelIsFullySubmitted() {
s.log.Info("Channel is fully submitted", "chID", s.pendingChannel.ID())
s.metr.RecordChannelFullySubmitted(s.pendingChannel.ID())
s.log.Info("Channel is fully submitted", "id", s.pendingChannel.ID())
s.clearPendingChannel()
}
}
...@@ -194,8 +204,8 @@ func (s *channelManager) TxData(l1Head eth.BlockID) (txData, error) {
// all pending blocks be included in this channel for submission.
s.registerL1Block(l1Head)

if err := s.pendingChannel.OutputFrames(); err != nil {
return txData{}, fmt.Errorf("creating frames with channel builder: %w", err)
if err := s.outputFrames(); err != nil {
return txData{}, err
}

return s.nextTxData()
...@@ -211,7 +221,11 @@ func (s *channelManager) ensurePendingChannel(l1Head eth.BlockID) error {
return fmt.Errorf("creating new channel: %w", err)
}
s.pendingChannel = cb
s.log.Info("Created channel", "chID", cb.ID(), "l1Head", l1Head)
s.log.Info("Created channel",
"id", cb.ID(),
"l1Head", l1Head,
"blocks_pending", len(s.blocks))
s.metr.RecordChannelOpened(cb.ID(), len(s.blocks))
return nil
}
...@@ -229,28 +243,27 @@ func (s *channelManager) registerL1Block(l1Head eth.BlockID) {
// processBlocks adds blocks from the blocks queue to the pending channel until
// either the queue got exhausted or the channel is full.
func (s *channelManager) processBlocks() error {
var (
blocksAdded int
_chFullErr *ChannelFullError // throw away, just for type checking
latestL2ref eth.L2BlockRef
)
for i, block := range s.blocks {
if err := s.pendingChannel.AddBlock(block); errors.As(err, &_chFullErr) {
l1info, err := s.pendingChannel.AddBlock(block)
if errors.As(err, &_chFullErr) {
// current block didn't get added because channel is already full
break
} else if err != nil {
return fmt.Errorf("adding block[%d] to channel builder: %w", i, err)
}
blocksAdded += 1
latestL2ref = l2BlockRefFromBlockAndL1Info(block, l1info)
// current block got added but channel is now full
if s.pendingChannel.IsFull() {
break
}
}
s.log.Debug("Added blocks to channel",
"blocks_added", blocksAdded,
"channel_full", s.pendingChannel.IsFull(),
"blocks_pending", len(s.blocks)-blocksAdded,
"input_bytes", s.pendingChannel.InputBytes(),
)
if blocksAdded == len(s.blocks) {
// all blocks processed, reuse slice
s.blocks = s.blocks[:0]
...@@ -258,6 +271,53 @@ func (s *channelManager) processBlocks() error {
// remove processed blocks
s.blocks = s.blocks[blocksAdded:]
}
s.metr.RecordL2BlocksAdded(latestL2ref,
blocksAdded,
len(s.blocks),
s.pendingChannel.InputBytes(),
s.pendingChannel.ReadyBytes())
s.log.Debug("Added blocks to channel",
"blocks_added", blocksAdded,
"blocks_pending", len(s.blocks),
"channel_full", s.pendingChannel.IsFull(),
"input_bytes", s.pendingChannel.InputBytes(),
"ready_bytes", s.pendingChannel.ReadyBytes(),
)
return nil
}
func (s *channelManager) outputFrames() error {
if err := s.pendingChannel.OutputFrames(); err != nil {
return fmt.Errorf("creating frames with channel builder: %w", err)
}
if !s.pendingChannel.IsFull() {
return nil
}
inBytes, outBytes := s.pendingChannel.InputBytes(), s.pendingChannel.OutputBytes()
s.metr.RecordChannelClosed(
s.pendingChannel.ID(),
len(s.blocks),
s.pendingChannel.NumFrames(),
inBytes,
outBytes,
s.pendingChannel.FullErr(),
)
var comprRatio float64
if inBytes > 0 {
comprRatio = float64(outBytes) / float64(inBytes)
}
s.log.Info("Channel closed",
"id", s.pendingChannel.ID(),
"blocks_pending", len(s.blocks),
"num_frames", s.pendingChannel.NumFrames(),
"input_bytes", inBytes,
"output_bytes", outBytes,
"full_reason", s.pendingChannel.FullErr(),
"compr_ratio", comprRatio,
)
return nil
}
...@@ -273,3 +333,14 @@ func (s *channelManager) AddL2Block(block *types.Block) error {
return nil
}
func l2BlockRefFromBlockAndL1Info(block *types.Block, l1info derive.L1BlockInfo) eth.L2BlockRef {
return eth.L2BlockRef{
Hash: block.Hash(),
Number: block.NumberU64(),
ParentHash: block.ParentHash(),
Time: block.Time(),
L1Origin: eth.BlockID{Hash: l1info.BlockHash, Number: l1info.Number},
SequenceNumber: l1info.SequenceNumber,
}
}
This diff is collapsed.
...@@ -9,6 +9,7 @@ import (
"github.com/urfave/cli"

"github.com/ethereum-optimism/optimism/op-batcher/flags"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-batcher/rpc" "github.com/ethereum-optimism/optimism/op-batcher/rpc"
"github.com/ethereum-optimism/optimism/op-node/rollup" "github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/sources" "github.com/ethereum-optimism/optimism/op-node/sources"
...@@ -20,18 +21,21 @@ import ( ...@@ -20,18 +21,21 @@ import (
) )
type Config struct { type Config struct {
log log.Logger log log.Logger
L1Client *ethclient.Client metr metrics.Metricer
L2Client *ethclient.Client L1Client *ethclient.Client
RollupNode *sources.RollupClient L2Client *ethclient.Client
PollInterval time.Duration RollupNode *sources.RollupClient
PollInterval time.Duration
From common.Address
TxManagerConfig txmgr.Config TxManagerConfig txmgr.Config
From common.Address
// RollupConfig is queried at startup // RollupConfig is queried at startup
Rollup *rollup.Config Rollup *rollup.Config
// Channel creation parameters // Channel builder parameters
Channel ChannelConfig Channel ChannelConfig
} }
......
...@@ -10,7 +10,9 @@ import (
"sync"
"time"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opcrypto "github.com/ethereum-optimism/optimism/op-service/crypto"
"github.com/ethereum-optimism/optimism/op-service/txmgr"
"github.com/ethereum/go-ethereum/core/types"
...@@ -34,13 +36,14 @@ type BatchSubmitter struct {
// lastStoredBlock is the last block loaded into `state`. If it is empty it should be set to the l2 safe head.
lastStoredBlock eth.BlockID
lastL1Tip eth.L1BlockRef
state *channelManager
}
// NewBatchSubmitterFromCLIConfig initializes the BatchSubmitter, gathering any resources
// that will be needed during operation.
func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger) (*BatchSubmitter, error) {
func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger, m metrics.Metricer) (*BatchSubmitter, error) {
ctx := context.Background()

signer, fromAddress, err := opcrypto.SignerFactoryFromConfig(l, cfg.PrivateKey, cfg.Mnemonic, cfg.SequencerHDPath, cfg.SignerConfig)
...@@ -104,12 +107,12 @@ func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger) (*BatchSubmitte
return nil, err
}

return NewBatchSubmitter(ctx, batcherCfg, l)
return NewBatchSubmitter(ctx, batcherCfg, l, m)
}
// NewBatchSubmitter initializes the BatchSubmitter, gathering any resources
// that will be needed during operation.
func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger) (*BatchSubmitter, error) {
func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger, m metrics.Metricer) (*BatchSubmitter, error) {
balance, err := cfg.L1Client.BalanceAt(ctx, cfg.From, nil)
if err != nil {
return nil, err
...@@ -118,12 +121,14 @@ func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger) (*BatchSub
cfg.log = l
cfg.log.Info("creating batch submitter", "submitter_addr", cfg.From, "submitter_bal", balance)
cfg.metr = m
return &BatchSubmitter{
Config: cfg,
txMgr: NewTransactionManager(l,
cfg.TxManagerConfig, cfg.Rollup.BatchInboxAddress, cfg.Rollup.L1ChainID,
cfg.From, cfg.L1Client),
state: NewChannelManager(l, cfg.Channel),
state: NewChannelManager(l, m, cfg.Channel),
}, nil
}
...@@ -187,13 +192,16 @@ func (l *BatchSubmitter) Stop() error {

func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) {
start, end, err := l.calculateL2BlockRangeToStore(ctx)
if err != nil {
l.log.Trace("was not able to calculate L2 block range", "err", err)
l.log.Warn("Error calculating L2 block range", "err", err)
return
} else if start.Number == end.Number {
return
}
var latestBlock *types.Block
// Add all blocks to "state"
for i := start.Number + 1; i < end.Number+1; i++ {
id, err := l.loadBlockIntoState(ctx, i)
block, err := l.loadBlockIntoState(ctx, i)
if errors.Is(err, ErrReorg) {
l.log.Warn("Found L2 reorg", "block_number", i)
l.state.Clear()
...@@ -203,24 +211,34 @@ func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) {
l.log.Warn("failed to load block into state", "err", err)
return
}
l.lastStoredBlock = id
l.lastStoredBlock = eth.ToBlockID(block)
latestBlock = block
}
l2ref, err := derive.L2BlockToBlockRef(latestBlock, &l.Rollup.Genesis)
if err != nil {
l.log.Warn("Invalid L2 block loaded into state", "err", err)
return
}
l.metr.RecordL2BlocksLoaded(l2ref)
}
// loadBlockIntoState fetches & stores a single block into `state`. It returns the block it loaded.
func (l *BatchSubmitter) loadBlockIntoState(ctx context.Context, blockNumber uint64) (eth.BlockID, error) {
func (l *BatchSubmitter) loadBlockIntoState(ctx context.Context, blockNumber uint64) (*types.Block, error) {
ctx, cancel := context.WithTimeout(ctx, networkTimeout)
defer cancel()
block, err := l.L2Client.BlockByNumber(ctx, new(big.Int).SetUint64(blockNumber))
cancel()
if err != nil {
return eth.BlockID{}, err
return nil, fmt.Errorf("getting L2 block: %w", err)
}

if err := l.state.AddL2Block(block); err != nil {
return eth.BlockID{}, err
return nil, fmt.Errorf("adding L2 block to state: %w", err)
}
id := eth.ToBlockID(block)
l.log.Info("added L2 block to local state", "block", id, "tx_count", len(block.Transactions()), "time", block.Time()) l.log.Info("added L2 block to local state", "block", eth.ToBlockID(block), "tx_count", len(block.Transactions()), "time", block.Time())
return id, nil return block, nil
} }
// calculateL2BlockRangeToStore determines the range (start,end] that should be loaded into the local state.
...@@ -283,6 +301,7 @@ func (l *BatchSubmitter) loop() {
l.log.Error("Failed to query L1 tip", "error", err)
break
}
l.recordL1Tip(l1tip)
// Collect next transaction data
txdata, err := l.state.TxData(l1tip.ID())
...@@ -316,6 +335,14 @@ func (l *BatchSubmitter) loop() {
}
}
func (l *BatchSubmitter) recordL1Tip(l1tip eth.L1BlockRef) {
if l.lastL1Tip == l1tip {
return
}
l.lastL1Tip = l1tip
l.metr.RecordLatestL1Block(l1tip)
}
func (l *BatchSubmitter) recordFailedTx(id txID, err error) {
l.log.Warn("Failed to send transaction", "err", err)
l.state.TxFailed(id)
...
...@@ -55,7 +55,7 @@ func (t *TransactionManager) SendTransaction(ctx context.Context, data []byte) (
return nil, fmt.Errorf("failed to create tx: %w", err)
}

ctx, cancel := context.WithTimeout(ctx, 100*time.Second) // TODO: Select a timeout that makes sense here.
ctx, cancel := context.WithTimeout(ctx, 10*time.Minute) // TODO: Select a timeout that makes sense here.
defer cancel()
if receipt, err := t.txMgr.Send(ctx, tx); err != nil {
t.log.Warn("unable to publish tx", "err", err, "data_size", len(data))
...
package metrics
import (
"context"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
)
const Namespace = "op_batcher"
type Metricer interface {
RecordInfo(version string)
RecordUp()
// Records all L1 and L2 block events
opmetrics.RefMetricer
RecordLatestL1Block(l1ref eth.L1BlockRef)
RecordL2BlocksLoaded(l2ref eth.L2BlockRef)
RecordChannelOpened(id derive.ChannelID, numPendingBlocks int)
RecordL2BlocksAdded(l2ref eth.L2BlockRef, numBlocksAdded, numPendingBlocks, inputBytes, outputComprBytes int)
RecordChannelClosed(id derive.ChannelID, numPendingBlocks int, numFrames int, inputBytes int, outputComprBytes int, reason error)
RecordChannelFullySubmitted(id derive.ChannelID)
RecordChannelTimedOut(id derive.ChannelID)
RecordBatchTxSubmitted()
RecordBatchTxSuccess()
RecordBatchTxFailed()
Document() []opmetrics.DocumentedMetric
}
type Metrics struct {
ns string
registry *prometheus.Registry
factory opmetrics.Factory
opmetrics.RefMetrics
Info prometheus.GaugeVec
Up prometheus.Gauge
// label by opened, closed, fully_submitted, timed_out
ChannelEvs opmetrics.EventVec
PendingBlocksCount prometheus.GaugeVec
BlocksAddedCount prometheus.Gauge
ChannelInputBytes prometheus.GaugeVec
ChannelReadyBytes prometheus.Gauge
ChannelOutputBytes prometheus.Gauge
ChannelClosedReason prometheus.Gauge
ChannelNumFrames prometheus.Gauge
ChannelComprRatio prometheus.Histogram
BatcherTxEvs opmetrics.EventVec
}
var _ Metricer = (*Metrics)(nil)
func NewMetrics(procName string) *Metrics {
if procName == "" {
procName = "default"
}
ns := Namespace + "_" + procName
registry := opmetrics.NewRegistry()
factory := opmetrics.With(registry)
return &Metrics{
ns: ns,
registry: registry,
factory: factory,
RefMetrics: opmetrics.MakeRefMetrics(ns, factory),
Info: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "info",
Help: "Pseudo-metric tracking version and config info",
}, []string{
"version",
}),
Up: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "up",
Help: "1 if the op-batcher has finished starting up",
}),
ChannelEvs: opmetrics.NewEventVec(factory, ns, "channel", "Channel", []string{"stage"}),
PendingBlocksCount: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "pending_blocks_count",
Help: "Number of pending blocks, not added to a channel yet.",
}, []string{"stage"}),
BlocksAddedCount: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "blocks_added_count",
Help: "Total number of blocks added to current channel.",
}),
ChannelInputBytes: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "input_bytes",
Help: "Number of input bytes to a channel.",
}, []string{"stage"}),
ChannelReadyBytes: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "ready_bytes",
Help: "Number of bytes ready in the compression buffer.",
}),
ChannelOutputBytes: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "output_bytes",
Help: "Number of compressed output bytes from a channel.",
}),
ChannelClosedReason: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "channel_closed_reason",
Help: "Pseudo-metric to record the reason a channel got closed.",
}),
ChannelNumFrames: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "channel_num_frames",
Help: "Total number of frames of closed channel.",
}),
ChannelComprRatio: factory.NewHistogram(prometheus.HistogramOpts{
Namespace: ns,
Name: "channel_compr_ratio",
Help: "Compression ratios of closed channel.",
Buckets: append([]float64{0.1, 0.2}, prometheus.LinearBuckets(0.3, 0.05, 14)...),
}),
BatcherTxEvs: opmetrics.NewEventVec(factory, ns, "batcher_tx", "BatcherTx", []string{"stage"}),
}
}
func (m *Metrics) Serve(ctx context.Context, host string, port int) error {
return opmetrics.ListenAndServe(ctx, m.registry, host, port)
}
func (m *Metrics) Document() []opmetrics.DocumentedMetric {
return m.factory.Document()
}
func (m *Metrics) StartBalanceMetrics(ctx context.Context,
l log.Logger, client *ethclient.Client, account common.Address) {
opmetrics.LaunchBalanceMetrics(ctx, l, m.registry, m.ns, client, account)
}
// RecordInfo sets a pseudo-metric that contains versioning and
// config info for the op-batcher.
func (m *Metrics) RecordInfo(version string) {
m.Info.WithLabelValues(version).Set(1)
}
// RecordUp sets the up metric to 1.
func (m *Metrics) RecordUp() {
prometheus.MustRegister()
m.Up.Set(1)
}
const (
StageLoaded = "loaded"
StageOpened = "opened"
StageAdded = "added"
StageClosed = "closed"
StageFullySubmitted = "fully_submitted"
StageTimedOut = "timed_out"
TxStageSubmitted = "submitted"
TxStageSuccess = "success"
TxStageFailed = "failed"
)
func (m *Metrics) RecordLatestL1Block(l1ref eth.L1BlockRef) {
m.RecordL1Ref("latest", l1ref)
}
// RecordL2BlocksLoaded should be called when a new L2 block was loaded into the
// channel manager (but not processed yet).
func (m *Metrics) RecordL2BlocksLoaded(l2ref eth.L2BlockRef) {
m.RecordL2Ref(StageLoaded, l2ref)
}
func (m *Metrics) RecordChannelOpened(id derive.ChannelID, numPendingBlocks int) {
m.ChannelEvs.Record(StageOpened)
m.BlocksAddedCount.Set(0) // reset
m.PendingBlocksCount.WithLabelValues(StageOpened).Set(float64(numPendingBlocks))
}
// RecordL2BlocksAdded should be called when L2 blocks are added to the channel
// builder, with the latest added block.
func (m *Metrics) RecordL2BlocksAdded(l2ref eth.L2BlockRef, numBlocksAdded, numPendingBlocks, inputBytes, outputComprBytes int) {
m.RecordL2Ref(StageAdded, l2ref)
m.BlocksAddedCount.Add(float64(numBlocksAdded))
m.PendingBlocksCount.WithLabelValues(StageAdded).Set(float64(numPendingBlocks))
m.ChannelInputBytes.WithLabelValues(StageAdded).Set(float64(inputBytes))
m.ChannelReadyBytes.Set(float64(outputComprBytes))
}
func (m *Metrics) RecordChannelClosed(id derive.ChannelID, numPendingBlocks int, numFrames int, inputBytes int, outputComprBytes int, reason error) {
m.ChannelEvs.Record(StageClosed)
m.PendingBlocksCount.WithLabelValues(StageClosed).Set(float64(numPendingBlocks))
m.ChannelNumFrames.Set(float64(numFrames))
m.ChannelInputBytes.WithLabelValues(StageClosed).Set(float64(inputBytes))
m.ChannelOutputBytes.Set(float64(outputComprBytes))
var comprRatio float64
if inputBytes > 0 {
comprRatio = float64(outputComprBytes) / float64(inputBytes)
}
m.ChannelComprRatio.Observe(comprRatio)
m.ChannelClosedReason.Set(float64(ClosedReasonToNum(reason)))
}
func ClosedReasonToNum(reason error) int {
// Reason codes are not implemented yet; tracked in CLI-3640.
return 0
}
func (m *Metrics) RecordChannelFullySubmitted(id derive.ChannelID) {
m.ChannelEvs.Record(StageFullySubmitted)
}
func (m *Metrics) RecordChannelTimedOut(id derive.ChannelID) {
m.ChannelEvs.Record(StageTimedOut)
}
func (m *Metrics) RecordBatchTxSubmitted() {
m.BatcherTxEvs.Record(TxStageSubmitted)
}
func (m *Metrics) RecordBatchTxSuccess() {
m.BatcherTxEvs.Record(TxStageSuccess)
}
func (m *Metrics) RecordBatchTxFailed() {
m.BatcherTxEvs.Record(TxStageFailed)
}
package metrics
import (
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
)
type noopMetrics struct{ opmetrics.NoopRefMetrics }
var NoopMetrics Metricer = new(noopMetrics)
func (*noopMetrics) Document() []opmetrics.DocumentedMetric { return nil }
func (*noopMetrics) RecordInfo(version string) {}
func (*noopMetrics) RecordUp() {}
func (*noopMetrics) RecordLatestL1Block(l1ref eth.L1BlockRef) {}
func (*noopMetrics) RecordL2BlocksLoaded(eth.L2BlockRef) {}
func (*noopMetrics) RecordChannelOpened(derive.ChannelID, int) {}
func (*noopMetrics) RecordL2BlocksAdded(eth.L2BlockRef, int, int, int, int) {}
func (*noopMetrics) RecordChannelClosed(derive.ChannelID, int, int, int, int, error) {}
func (*noopMetrics) RecordChannelFullySubmitted(derive.ChannelID) {}
func (*noopMetrics) RecordChannelTimedOut(derive.ChannelID) {}
func (*noopMetrics) RecordBatchTxSubmitted() {}
func (*noopMetrics) RecordBatchTxSuccess() {}
func (*noopMetrics) RecordBatchTxFailed() {}
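Because NoopMetrics satisfies the same Metricer interface as the real Metrics struct, callers can depend on the interface and tests can skip the registry and HTTP server entirely. A sketch under assumptions: the channelManager type and the import path are illustrative, not the batcher's actual types.

package batcher

import (
	"github.com/ethereum-optimism/optimism/op-batcher/metrics" // assumed import path
)

type channelManager struct {
	metr metrics.Metricer
}

func newTestChannelManager() *channelManager {
	// No Prometheus registry or metrics server is needed in tests.
	return &channelManager{metr: metrics.NoopMetrics}
}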
FROM golang:1.18.0-alpine3.15 as builder FROM golang:1.19.0-alpine3.15 as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git jq bash RUN apk add --no-cache make gcc musl-dev linux-headers git jq bash
......
package main
import (
"context"
"fmt"
"math/big"
"os"
"strings"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum-optimism/optimism/op-chain-ops/db"
"github.com/mattn/go-isatty"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum-optimism/optimism/op-bindings/hardhat"
"github.com/ethereum-optimism/optimism/op-chain-ops/genesis"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/urfave/cli"
)
func main() {
log.Root().SetHandler(log.StreamHandler(os.Stderr, log.TerminalFormat(isatty.IsTerminal(os.Stderr.Fd()))))
app := &cli.App{
Name: "check-migration",
Usage: "Run sanity checks on a migrated database",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "l1-rpc-url",
Value: "http://127.0.0.1:8545",
Usage: "RPC URL for an L1 Node",
Required: true,
},
&cli.StringFlag{
Name: "ovm-addresses",
Usage: "Path to ovm-addresses.json",
Required: true,
},
&cli.StringFlag{
Name: "ovm-allowances",
Usage: "Path to ovm-allowances.json",
Required: true,
},
&cli.StringFlag{
Name: "ovm-messages",
Usage: "Path to ovm-messages.json",
Required: true,
},
&cli.StringFlag{
Name: "witness-file",
Usage: "Path to witness file",
Required: true,
},
&cli.StringFlag{
Name: "db-path",
Usage: "Path to database",
Required: true,
},
&cli.StringFlag{
Name: "deploy-config",
Usage: "Path to hardhat deploy config file",
Required: true,
},
&cli.StringFlag{
Name: "network",
Usage: "Name of hardhat deploy network",
Required: true,
},
&cli.StringFlag{
Name: "hardhat-deployments",
Usage: "Comma separated list of hardhat deployment directories",
Required: true,
},
&cli.IntFlag{
Name: "db-cache",
Usage: "LevelDB cache size in MB",
Value: 1024,
},
&cli.IntFlag{
Name: "db-handles",
Usage: "LevelDB number of handles",
Value: 60,
},
},
Action: func(ctx *cli.Context) error {
deployConfig := ctx.String("deploy-config")
config, err := genesis.NewDeployConfig(deployConfig)
if err != nil {
return err
}
ovmAddresses, err := crossdomain.NewAddresses(ctx.String("ovm-addresses"))
if err != nil {
return err
}
ovmAllowances, err := crossdomain.NewAllowances(ctx.String("ovm-allowances"))
if err != nil {
return err
}
ovmMessages, err := crossdomain.NewSentMessageFromJSON(ctx.String("ovm-messages"))
if err != nil {
return err
}
evmMessages, evmAddresses, err := crossdomain.ReadWitnessData(ctx.String("witness-file"))
if err != nil {
return err
}
log.Info(
"Loaded witness data",
"ovmAddresses", len(ovmAddresses),
"evmAddresses", len(evmAddresses),
"ovmAllowances", len(ovmAllowances),
"ovmMessages", len(ovmMessages),
"evmMessages", len(evmMessages),
)
migrationData := crossdomain.MigrationData{
OvmAddresses: ovmAddresses,
EvmAddresses: evmAddresses,
OvmAllowances: ovmAllowances,
OvmMessages: ovmMessages,
EvmMessages: evmMessages,
}
network := ctx.String("network")
deployments := strings.Split(ctx.String("hardhat-deployments"), ",")
hh, err := hardhat.New(network, []string{}, deployments)
if err != nil {
return err
}
l1RpcURL := ctx.String("l1-rpc-url")
l1Client, err := ethclient.Dial(l1RpcURL)
if err != nil {
return err
}
var block *types.Block
tag := config.L1StartingBlockTag
if tag.BlockNumber != nil {
block, err = l1Client.BlockByNumber(context.Background(), big.NewInt(tag.BlockNumber.Int64()))
} else if tag.BlockHash != nil {
block, err = l1Client.BlockByHash(context.Background(), *tag.BlockHash)
} else {
return fmt.Errorf("invalid l1StartingBlockTag in deploy config: %v", tag)
}
if err != nil {
return err
}
dbCache := ctx.Int("db-cache")
dbHandles := ctx.Int("db-handles")
// Read the required deployment addresses from disk if required
if err := config.GetDeployedAddresses(hh); err != nil {
return err
}
if err := config.Check(); err != nil {
return err
}
postLDB, err := db.Open(ctx.String("db-path"), dbCache, dbHandles)
if err != nil {
return err
}
if err := genesis.PostCheckMigratedDB(
postLDB,
migrationData,
&config.L1CrossDomainMessengerProxy,
config.L1ChainID,
config.FinalSystemOwner,
config.ProxyAdminOwner,
&derive.L1BlockInfo{
Number: block.NumberU64(),
Time: block.Time(),
BaseFee: block.BaseFee(),
BlockHash: block.Hash(),
BatcherAddr: config.BatchSenderAddress,
L1FeeOverhead: eth.Bytes32(common.BigToHash(new(big.Int).SetUint64(config.GasPriceOracleOverhead))),
L1FeeScalar: eth.Bytes32(common.BigToHash(new(big.Int).SetUint64(config.GasPriceOracleScalar))),
},
); err != nil {
return err
}
if err := postLDB.Close(); err != nil {
return err
}
return nil
},
}
if err := app.Run(os.Args); err != nil {
log.Crit("error in migration", "err", err)
}
}
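The L1BlockInfo construction above packs uint64 gas price oracle values into 32-byte fields via left-padded big-endian encoding. A standalone sketch of that packing, using only go-ethereum helpers; the overhead value is illustrative.

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	overhead := uint64(2100) // illustrative gas price oracle overhead
	packed := common.BigToHash(new(big.Int).SetUint64(overhead))
	// Prints 0x0000...000834: the value left-padded to 32 bytes.
	fmt.Println(packed)
}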
...@@ -106,6 +106,11 @@ func main() { ...@@ -106,6 +106,11 @@ func main() {
Value: "rollup.json", Value: "rollup.json",
Required: true, Required: true,
}, },
cli.BoolFlag{
Name: "post-check-only",
Usage: "Only perform sanity checks",
Required: false,
},
}, },
Action: func(ctx *cli.Context) error { Action: func(ctx *cli.Context) error {
deployConfig := ctx.String("deploy-config") deployConfig := ctx.String("deploy-config")
......
...@@ -25,8 +25,8 @@ func GetOVMETHTotalSupplySlot() common.Hash { ...@@ -25,8 +25,8 @@ func GetOVMETHTotalSupplySlot() common.Hash {
return getOVMETHTotalSupplySlot() return getOVMETHTotalSupplySlot()
} }
// getOVMETHBalance gets a user's OVM ETH balance from state by querying the // GetOVMETHBalance gets a user's OVM ETH balance from state by querying the
// appropriate storage slot directly. // appropriate storage slot directly.
func getOVMETHBalance(db *state.StateDB, addr common.Address) *big.Int { func GetOVMETHBalance(db *state.StateDB, addr common.Address) *big.Int {
return db.GetState(OVMETHAddress, CalcOVMETHStorageKey(addr)).Big() return db.GetState(OVMETHAddress, CalcOVMETHStorageKey(addr)).Big()
} }
...@@ -13,6 +13,20 @@ import ( ...@@ -13,6 +13,20 @@ import (
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
) )
const (
// checkJobs is the number of parallel workers to spawn
// when iterating the storage trie.
checkJobs = 64
// BalanceSlot is an ordinal used to represent slots corresponding to OVM_ETH
// balances in the state.
BalanceSlot = 1
// AllowanceSlot is an ordinal used to represent slots corresponding to OVM_ETH
// allowances in the state.
AllowanceSlot = 2
)
var ( var (
// OVMETHAddress is the address of the OVM ETH predeploy. // OVMETHAddress is the address of the OVM ETH predeploy.
OVMETHAddress = common.HexToAddress("0xDeadDeAddeAddEAddeadDEaDDEAdDeaDDeAD0000") OVMETHAddress = common.HexToAddress("0xDeadDeAddeAddEAddeadDEaDDEAdDeaDDeAD0000")
...@@ -27,84 +41,204 @@ var ( ...@@ -27,84 +41,204 @@ var (
common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000005"): true, common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000005"): true,
common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000006"): true, common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000006"): true,
} }
// sequencerEntrypointAddr is the address of the OVM sequencer entrypoint contract.
sequencerEntrypointAddr = common.HexToAddress("0x4200000000000000000000000000000000000005")
) )
func MigrateLegacyETH(db *state.StateDB, addresses []common.Address, chainID int, noCheck bool) error { // accountData is a wrapper struct that contains the balance and address of an account.
// It gets passed via channel to the collector process.
type accountData struct {
balance *big.Int
legacySlot common.Hash
address common.Address
}
// MigrateBalances migrates all balances in the LegacyERC20ETH contract into state. It performs checks
// in parallel with mutations in order to reduce overall migration time.
func MigrateBalances(mutableDB *state.StateDB, dbFactory util.DBFactory, addresses []common.Address, allowances []*crossdomain.Allowance, chainID int, noCheck bool) error {
// Chain params to use for integrity checking. // Chain params to use for integrity checking.
params := crossdomain.ParamsByChainID[chainID] params := crossdomain.ParamsByChainID[chainID]
if params == nil { if params == nil {
return fmt.Errorf("no chain params for %d", chainID) return fmt.Errorf("no chain params for %d", chainID)
} }
// Log the chain params for debugging purposes. return doMigration(mutableDB, dbFactory, addresses, allowances, params.ExpectedSupplyDelta, noCheck)
log.Info("Chain params", "chain-id", chainID, "supply-delta", params.ExpectedSupplyDelta) }
func doMigration(mutableDB *state.StateDB, dbFactory util.DBFactory, addresses []common.Address, allowances []*crossdomain.Allowance, expDiff *big.Int, noCheck bool) error {
// We'll need to maintain a list of all addresses that we've seen along with all of the storage
// slots based on the witness data.
slotsAddrs := make(map[common.Hash]common.Address)
slotsInp := make(map[common.Hash]int)
// Deduplicate the list of addresses by converting to a map. // For each known address, compute its balance key and add it to the list of addresses.
deduped := make(map[common.Address]bool) // Mint events are instrumented as regular ETH events in the witness data, so we no longer
// need to iterate over mint events during the migration.
for _, addr := range addresses { for _, addr := range addresses {
deduped[addr] = true sk := CalcOVMETHStorageKey(addr)
slotsAddrs[sk] = addr
slotsInp[sk] = BalanceSlot
} }
// Migrate the legacy ETH to ETH. // For each known allowance, compute its storage key and add it to the list of addresses.
log.Info("Migrating legacy ETH to ETH", "num-accounts", len(addresses)) for _, allowance := range allowances {
totalMigrated := new(big.Int) sk := CalcAllowanceStorageKey(allowance.From, allowance.To)
logAccountProgress := util.ProgressLogger(1000, "imported accounts") slotsAddrs[sk] = allowance.From
for addr := range deduped { slotsInp[sk] = AllowanceSlot
}
// Add the old SequencerEntrypoint because someone sent it ETH a long time ago and it has a
// balance but none of our instrumentation could easily find it. Special case.
entrySK := CalcOVMETHStorageKey(sequencerEntrypointAddr)
slotsAddrs[entrySK] = sequencerEntrypointAddr
slotsInp[entrySK] = BalanceSlot
// Channel to receive storage slot keys and values from each iteration job.
outCh := make(chan accountData)
// Channel that gets closed when the collector is done.
doneCh := make(chan struct{})
// Create a map of accounts we've seen so that we can filter out duplicates.
seenAccounts := make(map[common.Address]bool)
// Keep track of the total migrated supply.
totalFound := new(big.Int)
// Kick off a background process to collect
// values from the channel and add them to the map.
var count int
progress := util.ProgressLogger(1000, "Migrated OVM_ETH storage slot")
go func() {
defer func() { doneCh <- struct{}{} }()
for account := range outCh {
progress()
// Filter out duplicate accounts. See the below note about keyspace iteration for
// why we may have to filter out duplicates.
if seenAccounts[account.address] {
log.Info("skipping duplicate account during iteration", "addr", account.address)
continue
}
// Accumulate addresses and total supply.
totalFound = new(big.Int).Add(totalFound, account.balance)
mutableDB.SetBalance(account.address, account.balance)
mutableDB.SetState(predeploys.LegacyERC20ETHAddr, account.legacySlot, common.Hash{})
count++
seenAccounts[account.address] = true
}
}()
err := util.IterateState(dbFactory, predeploys.LegacyERC20ETHAddr, func(db *state.StateDB, key, value common.Hash) error {
// We can safely ignore specific slots (totalSupply, name, symbol).
if ignoredSlots[key] {
return nil
}
slotType, ok := slotsInp[key]
if !ok {
log.Error("unknown storage slot in state", "slot", key.String())
if !noCheck {
return fmt.Errorf("unknown storage slot in state: %s", key.String())
}
}
// No accounts should have a balance in state. If they do, bail. // No accounts should have a balance in state. If they do, bail.
if db.GetBalance(addr).Sign() > 0 { addr, ok := slotsAddrs[key]
if !ok {
log.Crit("could not find address in map - should never happen")
}
bal := db.GetBalance(addr)
if bal.Sign() != 0 {
log.Error(
"account has non-zero balance in state - should never happen",
"addr", addr,
"balance", bal.String(),
)
if !noCheck {
return fmt.Errorf("account has non-zero balance in state - should never happen: %s", addr.String())
}
}
// Add balances to the total found.
switch slotType {
case BalanceSlot:
// Send the data to the channel.
outCh <- accountData{
balance: value.Big(),
legacySlot: key,
address: addr,
}
case AllowanceSlot:
// Allowance slot. Do nothing here.
default:
// Should never happen.
if noCheck { if noCheck {
log.Error("account has non-zero balance in state - should never happen", "addr", addr) log.Error("unknown slot type", "slot", key, "type", slotType)
} else { } else {
log.Crit("account has non-zero balance in state - should never happen", "addr", addr) log.Crit("unknown slot type %d, should never happen", slotType)
} }
} }
// Pull out the OVM ETH balance. return nil
ovmBalance := getOVMETHBalance(db, addr) }, checkJobs)
// Actually perform the migration by setting the appropriate values in state. if err != nil {
db.SetBalance(addr, ovmBalance) return err
db.SetState(predeploys.LegacyERC20ETHAddr, CalcOVMETHStorageKey(addr), common.Hash{}) }
// Bump the total OVM balance.
totalMigrated = totalMigrated.Add(totalMigrated, ovmBalance)
// Log progress. // Close the outCh to cancel the collector. The collector will signal that it's done
logAccountProgress() // using doneCh. Any values waiting to be read from outCh will be read before the
// collector exits.
close(outCh)
<-doneCh
// Log how many slots were iterated over.
log.Info("Iterated legacy balances", "count", count)
// Verify the supply delta. Recorded total supply in the LegacyERC20ETH contract may be higher
// than the actual migrated amount because self-destructs will remove ETH supply in a way that
// cannot be reflected in the contract. This is fine because self-destructs just mean the L2 is
// actually *overcollateralized* by some tiny amount.
db, err := dbFactory()
if err != nil {
log.Crit("cannot get database", "err", err)
} }
// Make sure that the total supply delta matches the expected delta. This is equivalent to
// checking that the total migrated is equal to the total found, since we already performed the
// same check against the total found (a = b, b = c => a = c).
totalSupply := getOVMETHTotalSupply(db) totalSupply := getOVMETHTotalSupply(db)
delta := new(big.Int).Sub(totalSupply, totalMigrated) delta := new(big.Int).Sub(totalSupply, totalFound)
if delta.Cmp(params.ExpectedSupplyDelta) != 0 { if delta.Cmp(expDiff) != 0 {
if noCheck { log.Error(
log.Error( "supply mismatch",
"supply mismatch", "migrated", totalFound.String(),
"migrated", totalMigrated.String(), "supply", totalSupply.String(),
"supply", totalSupply.String(), "delta", delta.String(),
"delta", delta.String(), "exp_delta", expDiff.String(),
"exp_delta", params.ExpectedSupplyDelta.String(), )
) if !noCheck {
} else { return fmt.Errorf("supply mismatch: %s", delta.String())
log.Crit(
"supply mismatch",
"migrated", totalMigrated.String(),
"supply", totalSupply.String(),
"delta", delta.String(),
"exp_delta", params.ExpectedSupplyDelta.String(),
)
} }
} }
// Supply is verified.
log.Info(
"supply verified OK",
"migrated", totalFound.String(),
"supply", totalSupply.String(),
"delta", delta.String(),
"exp_delta", expDiff.String(),
)
// Set the total supply to 0. We do this because the total supply is necessarily going to be // Set the total supply to 0. We do this because the total supply is necessarily going to be
// different than the sum of all balances since we no longer track balances inside the contract // different than the sum of all balances since we no longer track balances inside the contract
// itself. The total supply is going to be weird no matter what, might as well set it to zero // itself. The total supply is going to be weird no matter what, might as well set it to zero
// so it's explicitly weird instead of implicitly weird. // so it's explicitly weird instead of implicitly weird.
db.SetState(predeploys.LegacyERC20ETHAddr, getOVMETHTotalSupplySlot(), common.Hash{}) mutableDB.SetState(predeploys.LegacyERC20ETHAddr, getOVMETHTotalSupplySlot(), common.Hash{})
log.Info("Set the totalSupply to 0") log.Info("Set the totalSupply to 0")
// Fin.
return nil return nil
} }
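The balance and allowance keys used above (CalcOVMETHStorageKey, CalcAllowanceStorageKey) follow the standard Solidity mapping layout: the slot of mapping[k] declared at slot N is keccak256(pad32(k) ++ pad32(N)). A standalone sketch of that derivation; the mapping slot index 0 used here is an assumption for illustration, not necessarily OVM_ETH's actual layout.

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// calcMappingKey returns the storage slot of mapping[addr] for a mapping
// declared at the given slot index (assumed 0 here for illustration).
func calcMappingKey(addr common.Address, slot *big.Int) common.Hash {
	return common.BytesToHash(crypto.Keccak256(
		common.LeftPadBytes(addr.Bytes(), 32),
		common.LeftPadBytes(slot.Bytes(), 32),
	))
}

func main() {
	addr := common.HexToAddress("0x4200000000000000000000000000000000000005")
	fmt.Println(calcMappingKey(addr, big.NewInt(0)))
}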
package ether
import (
"errors"
"fmt"
"math/big"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum-optimism/optimism/op-chain-ops/util"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
)
// PreCheckBalances checks that the given list of addresses and allowances represents all storage
// slots in the LegacyERC20ETH contract. We don't have to filter out extra addresses like we do for
// withdrawals because we'll simply carry the balance of a given address to the new system, if the
// account is extra then it won't have any balance and nothing will happen.
func PreCheckBalances(ldb ethdb.Database, db *state.StateDB, addresses []common.Address, allowances []*crossdomain.Allowance, chainID int, noCheck bool) ([]common.Address, error) {
// Chain params to use for integrity checking.
params := crossdomain.ParamsByChainID[chainID]
if params == nil {
return nil, fmt.Errorf("no chain params for %d", chainID)
}
// We'll need to maintain a list of all addresses that we've seen along with all of the storage
// slots based on the witness data.
addrs := make([]common.Address, 0)
slotsInp := make(map[common.Hash]int)
// For each known address, compute its balance key and add it to the list of addresses.
// Mint events are instrumented as regular ETH events in the witness data, so we no longer
// need to iterate over mint events during the migration.
for _, addr := range addresses {
addrs = append(addrs, addr)
slotsInp[CalcOVMETHStorageKey(addr)] = 1
}
// For each known allowance, compute its storage key and add it to the list of addresses.
for _, allowance := range allowances {
addrs = append(addrs, allowance.From)
slotsInp[CalcAllowanceStorageKey(allowance.From, allowance.To)] = 2
}
// Add the old SequencerEntrypoint because someone sent it ETH a long time ago and it has a
// balance but none of our instrumentation could easily find it. Special case.
sequencerEntrypointAddr := common.HexToAddress("0x4200000000000000000000000000000000000005")
addrs = append(addrs, sequencerEntrypointAddr)
slotsInp[CalcOVMETHStorageKey(sequencerEntrypointAddr)] = 1
// Build a mapping of every storage slot in the LegacyERC20ETH contract, except the list of
// slots that we know we can ignore (totalSupply, name, symbol).
var count int
slotsAct := make(map[common.Hash]common.Hash)
progress := util.ProgressLogger(1000, "Read OVM_ETH storage slot")
err := db.ForEachStorage(predeploys.LegacyERC20ETHAddr, func(key, value common.Hash) bool {
progress()
// We can safely ignore specific slots (totalSupply, name, symbol).
if ignoredSlots[key] {
return true
}
// Slot exists, so add it to the map.
slotsAct[key] = value
count++
return true
})
if err != nil {
return nil, fmt.Errorf("cannot iterate over LegacyERC20ETHAddr: %w", err)
}
// Log how many slots were iterated over.
log.Info("Iterated legacy balances", "count", count)
// Iterate over the list of known slots and check that we have a slot for each one. We'll also
// keep track of the total balance to be migrated and throw if the total supply exceeds the
// expected supply delta.
totalFound := new(big.Int)
var unknown bool
for slot := range slotsAct {
slotType, ok := slotsInp[slot]
if !ok {
if noCheck {
log.Error("ignoring unknown storage slot in state", "slot", slot.String())
} else {
unknown = true
log.Error("unknown storage slot in state", "slot", slot.String())
continue
}
}
// Add balances to the total found.
switch slotType {
case 1:
// Balance slot.
totalFound.Add(totalFound, slotsAct[slot].Big())
case 2:
// Allowance slot.
continue
default:
// Should never happen.
if noCheck {
log.Error("unknown slot type", "slot", slot, "type", slotType)
} else {
log.Crit("unknown slot type: %d", slotType)
}
}
}
if unknown {
return nil, errors.New("unknown storage slots in state (see logs for details)")
}
// Verify the supply delta. Recorded total supply in the LegacyERC20ETH contract may be higher
// than the actual migrated amount because self-destructs will remove ETH supply in a way that
// cannot be reflected in the contract. This is fine because self-destructs just mean the L2 is
// actually *overcollateralized* by some tiny amount.
totalSupply := getOVMETHTotalSupply(db)
delta := new(big.Int).Sub(totalSupply, totalFound)
if delta.Cmp(params.ExpectedSupplyDelta) != 0 {
if noCheck {
log.Error(
"supply mismatch",
"migrated", totalFound.String(),
"supply", totalSupply.String(),
"delta", delta.String(),
"exp_delta", params.ExpectedSupplyDelta.String(),
)
} else {
log.Crit(
"supply mismatch",
"migrated", totalFound.String(),
"supply", totalSupply.String(),
"delta", delta.String(),
"exp_delta", params.ExpectedSupplyDelta.String(),
)
}
}
// Supply is verified.
log.Info(
"supply verified OK",
"migrated", totalFound.String(),
"supply", totalSupply.String(),
"delta", delta.String(),
"exp_delta", params.ExpectedSupplyDelta.String(),
)
// We know we have at least a superset of all addresses here since we know that we have every
// storage slot. It's fine to have extras because they won't have any balance.
return addrs, nil
}
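The supply check above reduces to simple big.Int arithmetic: the recorded totalSupply may exceed the sum of migrated balances by exactly the chain's expected delta (supply burned via self-destructs). A toy example with illustrative numbers:

package main

import (
	"fmt"
	"math/big"
)

func main() {
	totalSupply := big.NewInt(1_000_000) // recorded in LegacyERC20ETH
	totalFound := big.NewInt(999_990)    // sum of migrated balances
	expDelta := big.NewInt(10)           // ExpectedSupplyDelta for the chain
	delta := new(big.Int).Sub(totalSupply, totalFound)
	fmt.Println("supply verified:", delta.Cmp(expDelta) == 0) // true
}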
...@@ -6,6 +6,11 @@ import ( ...@@ -6,6 +6,11 @@ import (
"errors" "errors"
"fmt" "fmt"
"math/big" "math/big"
"math/rand"
"github.com/ethereum-optimism/optimism/op-chain-ops/util"
"github.com/ethereum-optimism/optimism/op-chain-ops/ether"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/rawdb"
...@@ -22,11 +27,21 @@ import ( ...@@ -22,11 +27,21 @@ import (
"github.com/ethereum-optimism/optimism/op-node/rollup/derive" "github.com/ethereum-optimism/optimism/op-node/rollup/derive"
) )
// MaxSlotChecks is the maximum number of storage slots to check const (
// when validating the untouched predeploys. This limit is in place // MaxPredeploySlotChecks is the maximum number of storage slots to check
// to bound execution time of the migration. We can parallelize this // when validating the untouched predeploys. This limit is in place
// in the future. // to bound execution time of the migration. We can parallelize this
const MaxSlotChecks = 1000 // in the future.
MaxPredeploySlotChecks = 1000
// MaxOVMETHSlotChecks is the maximum number of OVM ETH storage slots to check
// when validating the OVM ETH migration.
MaxOVMETHSlotChecks = 5000
// OVMETHSampleLikelihood is the probability that a storage slot will be checked
// when validating the OVM ETH migration.
OVMETHSampleLikelihood = 0.1
)
type StorageCheckMap = map[common.Hash]common.Hash type StorageCheckMap = map[common.Hash]common.Hash
...@@ -144,7 +159,7 @@ func PostCheckMigratedDB( ...@@ -144,7 +159,7 @@ func PostCheckMigratedDB(
} }
log.Info("checked L1Block") log.Info("checked L1Block")
if err := PostCheckLegacyETH(db); err != nil { if err := PostCheckLegacyETH(prevDB, db, migrationData); err != nil {
return err return err
} }
log.Info("checked legacy eth") log.Info("checked legacy eth")
...@@ -206,7 +221,7 @@ func PostCheckUntouchables(udb state.Database, currDB *state.StateDB, prevRoot c ...@@ -206,7 +221,7 @@ func PostCheckUntouchables(udb state.Database, currDB *state.StateDB, prevRoot c
if err := prevDB.ForEachStorage(addr, func(key, value common.Hash) bool { if err := prevDB.ForEachStorage(addr, func(key, value common.Hash) bool {
count++ count++
expSlots[key] = value expSlots[key] = value
return count < MaxSlotChecks return count < MaxPredeploySlotChecks
}); err != nil { }); err != nil {
return fmt.Errorf("error iterating over storage: %w", err) return fmt.Errorf("error iterating over storage: %w", err)
} }
...@@ -251,7 +266,7 @@ func PostCheckPredeploys(prevDB, currDB *state.StateDB) error { ...@@ -251,7 +266,7 @@ func PostCheckPredeploys(prevDB, currDB *state.StateDB) error {
// Balances and nonces should match legacy // Balances and nonces should match legacy
oldNonce := prevDB.GetNonce(addr) oldNonce := prevDB.GetNonce(addr)
oldBalance := prevDB.GetBalance(addr) oldBalance := ether.GetOVMETHBalance(prevDB, addr)
newNonce := currDB.GetNonce(addr) newNonce := currDB.GetNonce(addr)
newBalance := currDB.GetBalance(addr) newBalance := currDB.GetBalance(addr)
if oldNonce != newNonce { if oldNonce != newNonce {
...@@ -361,14 +376,94 @@ func PostCheckPredeployStorage(db vm.StateDB, finalSystemOwner common.Address, p ...@@ -361,14 +376,94 @@ func PostCheckPredeployStorage(db vm.StateDB, finalSystemOwner common.Address, p
} }
// PostCheckLegacyETH checks that the legacy eth migration was successful. // PostCheckLegacyETH checks that the legacy eth migration was successful.
// It currently only checks that the total supply was set to 0. // It checks that the total supply was set to 0, and randomly samples storage
func PostCheckLegacyETH(db vm.StateDB) error { // slots pre- and post-migration to ensure that balances were correctly migrated.
func PostCheckLegacyETH(prevDB, migratedDB *state.StateDB, migrationData crossdomain.MigrationData) error {
allowanceSlots := make(map[common.Hash]bool)
addresses := make(map[common.Hash]common.Address)
log.Info("recomputing witness data")
for _, allowance := range migrationData.OvmAllowances {
key := ether.CalcAllowanceStorageKey(allowance.From, allowance.To)
allowanceSlots[key] = true
}
for _, addr := range migrationData.Addresses() {
addresses[ether.CalcOVMETHStorageKey(addr)] = addr
}
log.Info("checking legacy eth fixed storage slots")
for slot, expValue := range LegacyETHCheckSlots { for slot, expValue := range LegacyETHCheckSlots {
actValue := db.GetState(predeploys.LegacyERC20ETHAddr, slot) actValue := migratedDB.GetState(predeploys.LegacyERC20ETHAddr, slot)
if actValue != expValue { if actValue != expValue {
return fmt.Errorf("expected slot %s on %s to be %s, but got %s", slot, predeploys.LegacyERC20ETHAddr, expValue, actValue) return fmt.Errorf("expected slot %s on %s to be %s, but got %s", slot, predeploys.LegacyERC20ETHAddr, expValue, actValue)
} }
} }
var count int
threshold := int(100 * OVMETHSampleLikelihood)
progress := util.ProgressLogger(100, "checking legacy eth balance slots")
var innerErr error
err := prevDB.ForEachStorage(predeploys.LegacyERC20ETHAddr, func(key, value common.Hash) bool {
// Randomly sample storage slots: check each slot with probability OVMETHSampleLikelihood.
if rand.Intn(100) >= threshold {
return true
}
// Ignore fixed slots.
if _, ok := LegacyETHCheckSlots[key]; ok {
return true
}
// Ignore allowances.
if allowanceSlots[key] {
return true
}
// Grab the address, and bail if we can't find it.
addr, ok := addresses[key]
if !ok {
innerErr = fmt.Errorf("unknown OVM_ETH storage slot %s", key)
return false
}
// Pull out the pre-migration OVM ETH balance, and the state balance.
ovmETHBalance := value.Big()
ovmETHStateBalance := prevDB.GetBalance(addr)
// Pre-migration state balance should be zero.
if ovmETHStateBalance.Cmp(common.Big0) != 0 {
innerErr = fmt.Errorf("expected OVM_ETH pre-migration state balance for %s to be 0, but got %s", addr, ovmETHStateBalance)
return false
}
// Migrated state balance should equal the OVM ETH balance.
migratedStateBalance := migratedDB.GetBalance(addr)
if migratedStateBalance.Cmp(ovmETHBalance) != 0 {
innerErr = fmt.Errorf("expected OVM_ETH post-migration state balance for %s to be %s, but got %s", addr, ovmETHStateBalance, migratedStateBalance)
return false
}
// Migrated OVM ETH balance should be zero, since we wipe the slots.
migratedBalance := migratedDB.GetState(predeploys.LegacyERC20ETHAddr, key)
if migratedBalance.Big().Cmp(common.Big0) != 0 {
innerErr = fmt.Errorf("expected OVM_ETH post-migration ERC20 balance for %s to be 0, but got %s", addr, migratedBalance)
return false
}
progress()
count++
// Stop iterating if we've checked enough slots.
return count < MaxOVMETHSlotChecks
})
if err != nil {
return fmt.Errorf("error iterating over OVM_ETH storage: %w", err)
}
if innerErr != nil {
return innerErr
}
return nil return nil
} }
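A toy simulation of the sampling scheme above: each slot is checked with probability OVMETHSampleLikelihood and iteration stops once MaxOVMETHSlotChecks slots have been checked, so large tries are only partially verified. Constants are illustrative.

package main

import (
	"fmt"
	"math/rand"
)

func main() {
	likelihood := 0.1 // OVMETHSampleLikelihood
	const (
		numSlots  = 100_000 // slots in the OVM_ETH storage trie (illustrative)
		maxChecks = 5000    // MaxOVMETHSlotChecks
	)
	threshold := int(100 * likelihood)
	checked := 0
	for i := 0; i < numSlots && checked < maxChecks; i++ {
		if rand.Intn(100) < threshold {
			checked++
		}
	}
	fmt.Println("checked", checked, "slots") // ~5000: capped by maxChecks
}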
...@@ -506,7 +601,9 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros ...@@ -506,7 +601,9 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
// Now, iterate over each legacy withdrawal and check if there is a corresponding // Now, iterate over each legacy withdrawal and check if there is a corresponding
// migrated withdrawal. // migrated withdrawal.
var innerErr error var innerErr error
progress := util.ProgressLogger(1000, "checking withdrawals")
err = db.ForEachStorage(predeploys.LegacyMessagePasserAddr, func(key, value common.Hash) bool { err = db.ForEachStorage(predeploys.LegacyMessagePasserAddr, func(key, value common.Hash) bool {
progress()
// The legacy message passer becomes a proxy during the migration, // The legacy message passer becomes a proxy during the migration,
// so we need to ignore the implementation/admin slots. // so we need to ignore the implementation/admin slots.
if key == ImplementationSlot || key == AdminSlot { if key == ImplementationSlot || key == AdminSlot {
...@@ -543,7 +640,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros ...@@ -543,7 +640,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
// If the sender is _not_ the L2XDM, the value should not be migrated. // If the sender is _not_ the L2XDM, the value should not be migrated.
wd := wdsByOldSlot[key] wd := wdsByOldSlot[key]
if wd.XDomainSender == predeploys.L2CrossDomainMessengerAddr { if wd.MessageSender == predeploys.L2CrossDomainMessengerAddr {
// Make sure the value is abiTrue if this withdrawal should be migrated. // Make sure the value is abiTrue if this withdrawal should be migrated.
if migratedValue != abiTrue { if migratedValue != abiTrue {
innerErr = fmt.Errorf("expected migrated value to be true, but got %s", migratedValue) innerErr = fmt.Errorf("expected migrated value to be true, but got %s", migratedValue)
...@@ -552,7 +649,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros ...@@ -552,7 +649,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
} else { } else {
// Otherwise, ensure that withdrawals from senders other than the L2XDM are _not_ migrated. // Otherwise, ensure that withdrawals from senders other than the L2XDM are _not_ migrated.
if migratedValue != abiFalse { if migratedValue != abiFalse {
innerErr = fmt.Errorf("a migration from a sender other than the L2XDM was migrated") innerErr = fmt.Errorf("a migration from a sender other than the L2XDM was migrated. sender: %s, migrated value: %s", wd.MessageSender, migratedValue)
return false return false
} }
} }
......
...@@ -76,6 +76,8 @@ type DeployConfig struct { ...@@ -76,6 +76,8 @@ type DeployConfig struct {
ProxyAdminOwner common.Address `json:"proxyAdminOwner"` ProxyAdminOwner common.Address `json:"proxyAdminOwner"`
// Owner of the system on L1 // Owner of the system on L1
FinalSystemOwner common.Address `json:"finalSystemOwner"` FinalSystemOwner common.Address `json:"finalSystemOwner"`
// GUARDIAN account in the OptimismPortal
PortalGuardian common.Address `json:"portalGuardian"`
// L1 recipient of fees accumulated in the BaseFeeVault // L1 recipient of fees accumulated in the BaseFeeVault
BaseFeeVaultRecipient common.Address `json:"baseFeeVaultRecipient"` BaseFeeVaultRecipient common.Address `json:"baseFeeVaultRecipient"`
// L1 recipient of fees accumulated in the L1FeeVault // L1 recipient of fees accumulated in the L1FeeVault
...@@ -128,6 +130,9 @@ func (d *DeployConfig) Check() error { ...@@ -128,6 +130,9 @@ func (d *DeployConfig) Check() error {
if d.FinalizationPeriodSeconds == 0 { if d.FinalizationPeriodSeconds == 0 {
return fmt.Errorf("%w: FinalizationPeriodSeconds cannot be 0", ErrInvalidDeployConfig) return fmt.Errorf("%w: FinalizationPeriodSeconds cannot be 0", ErrInvalidDeployConfig)
} }
if d.PortalGuardian == (common.Address{}) {
return fmt.Errorf("%w: PortalGuardian cannot be address(0)", ErrInvalidDeployConfig)
}
if d.MaxSequencerDrift == 0 { if d.MaxSequencerDrift == 0 {
return fmt.Errorf("%w: MaxSequencerDrift cannot be 0", ErrInvalidDeployConfig) return fmt.Errorf("%w: MaxSequencerDrift cannot be 0", ErrInvalidDeployConfig)
} }
......
...@@ -82,16 +82,25 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m ...@@ -82,16 +82,25 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
) )
} }
// Set up the backing store. dbFactory := func() (*state.StateDB, error) {
underlyingDB := state.NewDatabaseWithConfig(ldb, &trie.Config{ // Set up the backing store.
Preimages: true, underlyingDB := state.NewDatabaseWithConfig(ldb, &trie.Config{
Cache: 1024, Preimages: true,
}) Cache: 1024,
})
// Open up the state database.
db, err := state.New(header.Root, underlyingDB, nil) // Open up the state database.
db, err := state.New(header.Root, underlyingDB, nil)
if err != nil {
return nil, fmt.Errorf("cannot open StateDB: %w", err)
}
return db, nil
}
db, err := dbFactory()
if err != nil { if err != nil {
return nil, fmt.Errorf("cannot open StateDB: %w", err) return nil, fmt.Errorf("cannot create StateDB: %w", err)
} }
// Before we do anything else, we need to ensure that all of the input configuration is correct // Before we do anything else, we need to ensure that all of the input configuration is correct
...@@ -134,16 +143,6 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m ...@@ -134,16 +143,6 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
filteredWithdrawals = crossdomain.SafeFilteredWithdrawals(unfilteredWithdrawals) filteredWithdrawals = crossdomain.SafeFilteredWithdrawals(unfilteredWithdrawals)
} }
// We also need to verify that we have all of the storage slots for the LegacyERC20ETH contract
// that we expect to have. An error will be thrown if there are any missing storage slots.
// Unlike with withdrawals, we do not need to filter out extra addresses because their balances
// would necessarily be zero and therefore not affect the migration.
log.Info("Checking addresses...", "no-check", noCheck)
addrs, err := ether.PreCheckBalances(ldb, db, migrationData.Addresses(), migrationData.OvmAllowances, int(config.L1ChainID), noCheck)
if err != nil {
return nil, fmt.Errorf("addresses mismatch: %w", err)
}
// At this point we've fully verified the witness data for the migration, so we can begin the // At this point we've fully verified the witness data for the migration, so we can begin the
// actual migration process. This involves modifying parts of the legacy database and inserting // actual migration process. This involves modifying parts of the legacy database and inserting
// a transition block. // a transition block.
...@@ -193,11 +192,12 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m ...@@ -193,11 +192,12 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
} }
// Finally we migrate the balances held inside the LegacyERC20ETH contract into the state trie. // Finally we migrate the balances held inside the LegacyERC20ETH contract into the state trie.
// We also delete the balances from the LegacyERC20ETH contract. // We also delete the balances from the LegacyERC20ETH contract. Unlike the steps above, this step
// combines the check and mutation steps into one in order to reduce migration time.
log.Info("Starting to migrate ERC20 ETH") log.Info("Starting to migrate ERC20 ETH")
err = ether.MigrateLegacyETH(db, addrs, int(config.L1ChainID), noCheck) err = ether.MigrateBalances(db, dbFactory, migrationData.Addresses(), migrationData.OvmAllowances, int(config.L1ChainID), noCheck)
if err != nil { if err != nil {
return nil, fmt.Errorf("cannot migrate legacy eth: %w", err) return nil, fmt.Errorf("failed to migrate OVM_ETH: %w", err)
} }
// We're done messing around with the database, so we can now commit the changes to the DB. // We're done messing around with the database, so we can now commit the changes to the DB.
......
...@@ -295,7 +295,7 @@ func deployL1Contracts(config *DeployConfig, backend *backends.SimulatedBackend) ...@@ -295,7 +295,7 @@ func deployL1Contracts(config *DeployConfig, backend *backends.SimulatedBackend)
Name: "OptimismPortal", Name: "OptimismPortal",
Args: []interface{}{ Args: []interface{}{
predeploys.DevL2OutputOracleAddr, predeploys.DevL2OutputOracleAddr,
config.FinalSystemOwner, config.PortalGuardian,
true, // _paused true, // _paused
}, },
}, },
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
"l1GenesisBlockGasLimit": "0xe4e1c0", "l1GenesisBlockGasLimit": "0xe4e1c0",
"l1GenesisBlockDifficulty": "0x1", "l1GenesisBlockDifficulty": "0x1",
"finalSystemOwner": "0x0000000000000000000000000000000000000111", "finalSystemOwner": "0x0000000000000000000000000000000000000111",
"portalGuardian": "0x0000000000000000000000000000000000000112",
"finalizationPeriodSeconds": 2, "finalizationPeriodSeconds": 2,
"l1GenesisBlockMixHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "l1GenesisBlockMixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"l1GenesisBlockCoinbase": "0x0000000000000000000000000000000000000000", "l1GenesisBlockCoinbase": "0x0000000000000000000000000000000000000000",
......
package util
import (
"fmt"
"math/big"
"sync"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
)
var (
// maxSlot is the maximum possible storage slot.
maxSlot = common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff")
)
// DBFactory creates a fresh StateDB over the same underlying database; each iteration worker gets its own instance.
type DBFactory func() (*state.StateDB, error)
// StateCallback is invoked for every non-empty storage slot visited. It may be called concurrently from multiple worker goroutines.
type StateCallback func(db *state.StateDB, key, value common.Hash) error
// IterateState iterates the storage trie of the given address in parallel, partitioning the secure-hashed keyspace across the given number of workers and invoking cb for each non-empty slot.
func IterateState(dbFactory DBFactory, address common.Address, cb StateCallback, workers int) error {
if workers <= 0 {
panic("workers must be greater than 0")
}
// WaitGroup to wait for all workers to finish.
var wg sync.WaitGroup
// Channel to receive errors from each iteration job.
errCh := make(chan error, workers)
// Channel to cancel all iteration jobs.
cancelCh := make(chan struct{})
worker := func(start, end common.Hash) {
// Decrement the WaitGroup when the function returns.
defer wg.Done()
db, err := dbFactory()
if err != nil {
// Should never happen, so explode if it does.
log.Crit("cannot create state db", "err", err)
}
st, err := db.StorageTrie(address)
if err != nil {
// Should never happen, so explode if it does.
log.Crit("cannot get storage trie", "address", address, "err", err)
}
// st can be nil if the account doesn't exist.
if st == nil {
errCh <- fmt.Errorf("account does not exist: %s", address.Hex())
return
}
it := trie.NewIterator(st.NodeIterator(start.Bytes()))
// The code below is largely based on db.ForEachStorage. We can't use that
// directly because it doesn't allow us to specify a start and end key.
for it.Next() {
select {
case <-cancelCh:
// If one of the workers encounters an error, cancel all of them.
return
default:
break
}
// Use the raw (i.e., secure hashed) key to check if we've reached
// the end of the partition. Use > rather than >= here to account for
// the fact that the values returned by PartitionKeys are inclusive.
// Duplicate addresses that may be returned by this iteration are
// filtered out in the collector.
if new(big.Int).SetBytes(it.Key).Cmp(end.Big()) > 0 {
return
}
// Skip if the value is empty.
rawValue := it.Value
if len(rawValue) == 0 {
continue
}
// Get the preimage.
rawKey := st.GetKey(it.Key)
if rawKey == nil {
// Should never happen, so explode if it does.
log.Crit("cannot get preimage for storage key", "key", it.Key)
}
key := common.BytesToHash(rawKey)
// Parse the raw value.
_, content, _, err := rlp.Split(rawValue)
if err != nil {
// Should never happen, so explode if it does.
log.Crit("mal-formed data in state: %v", err)
}
value := common.BytesToHash(content)
// Call the callback with the DB, key, and value. Errors get
// bubbled up to the errCh.
if err := cb(db, key, value); err != nil {
errCh <- err
return
}
}
}
for i := 0; i < workers; i++ {
wg.Add(1)
// Partition the keyspace per worker.
start, end := PartitionKeyspace(i, workers)
// Kick off our worker.
go worker(start, end)
}
wg.Wait()
for len(errCh) > 0 {
err := <-errCh
if err != nil {
return err
}
}
return nil
}
// PartitionKeyspace divides the key space into partitions by dividing the maximum keyspace
// by count then multiplying by i. This will leave some slots left over, which we handle below. It
// returns the start and end keys for the partition as a common.Hash. Note that the returned range
// of keys is inclusive, i.e., [start, end] NOT [start, end).
func PartitionKeyspace(i int, count int) (common.Hash, common.Hash) {
if i < 0 || count < 0 {
panic("i and count must not be negative")
}
if i > count-1 {
panic("i must be less than count")
}
// Divide the key space into partitions by dividing the key space by the number
// of jobs. This will leave some slots left over, which we handle below.
partSize := new(big.Int).Div(maxSlot.Big(), big.NewInt(int64(count)))
start := common.BigToHash(new(big.Int).Mul(big.NewInt(int64(i)), partSize))
var end common.Hash
if i < count-1 {
// If this is not the last partition, use the next partition's start key as the end.
end = common.BigToHash(new(big.Int).Mul(big.NewInt(int64(i+1)), partSize))
} else {
// If this is the last partition, use the max slot as the end.
end = maxSlot
}
return start, end
}
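A sketch of calling IterateState from outside this package, assuming the definitions above. Note that the callback runs concurrently on every worker goroutine, so shared accumulators need locking; the address and worker count are illustrative.

package example

import (
	"math/big"
	"sync"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"

	"github.com/ethereum-optimism/optimism/op-chain-ops/util"
)

// sumSlots adds up every non-empty storage slot value of addr.
func sumSlots(dbFactory util.DBFactory, addr common.Address) (*big.Int, error) {
	total := new(big.Int)
	var mu sync.Mutex // cb runs concurrently on up to `workers` goroutines
	err := util.IterateState(dbFactory, addr, func(db *state.StateDB, key, value common.Hash) error {
		mu.Lock()
		defer mu.Unlock()
		total.Add(total, value.Big())
		return nil
	}, 8)
	if err != nil {
		return nil, err
	}
	return total, nil
}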
package util
import (
crand "crypto/rand"
"fmt"
"math/rand"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/trie"
"github.com/stretchr/testify/require"
)
var testAddr = common.Address{0: 0xff}
func TestStateIteratorWorkers(t *testing.T) {
_, factory, _ := setupRandTest(t)
for i := -1; i <= 0; i++ {
require.Panics(t, func() {
_ = IterateState(factory, testAddr, func(db *state.StateDB, key, value common.Hash) error {
return nil
}, i)
})
}
}
func TestStateIteratorNonexistentAccount(t *testing.T) {
_, factory, _ := setupRandTest(t)
require.ErrorContains(t, IterateState(factory, common.Address{}, func(db *state.StateDB, key, value common.Hash) error {
return nil
}, 1), "account does not exist")
}
func TestStateIteratorRandomOK(t *testing.T) {
for i := 0; i < 100; i++ {
hashes, factory, workerCount := setupRandTest(t)
seenHashes := make(map[common.Hash]bool)
hashCh := make(chan common.Hash)
doneCh := make(chan struct{})
go func() {
defer close(doneCh)
for hash := range hashCh {
seenHashes[hash] = true
}
}()
require.NoError(t, IterateState(factory, testAddr, func(db *state.StateDB, key, value common.Hash) error {
hashCh <- key
return nil
}, workerCount))
close(hashCh)
<-doneCh
// Perform a less or equal check here in case of duplicates. The map check below will assert
// that all of the hashes are accounted for.
require.LessOrEqual(t, len(seenHashes), len(hashes))
// Every hash we put into state should have been iterated over.
for _, hash := range hashes {
require.Contains(t, seenHashes, hash)
}
}
}
func TestStateIteratorRandomError(t *testing.T) {
for i := 0; i < 100; i++ {
hashes, factory, workerCount := setupRandTest(t)
failHash := hashes[rand.Intn(len(hashes))]
require.ErrorContains(t, IterateState(factory, testAddr, func(db *state.StateDB, key, value common.Hash) error {
if key == failHash {
return fmt.Errorf("test error")
}
return nil
}, workerCount), "test error")
}
}
func TestPartitionKeyspace(t *testing.T) {
tests := []struct {
i int
count int
expected [2]common.Hash
}{
{
i: 0,
count: 1,
expected: [2]common.Hash{
common.HexToHash("0x00"),
common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
{
i: 0,
count: 2,
expected: [2]common.Hash{
common.HexToHash("0x00"),
common.HexToHash("0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
{
i: 1,
count: 2,
expected: [2]common.Hash{
common.HexToHash("0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
{
i: 0,
count: 3,
expected: [2]common.Hash{
common.HexToHash("0x00"),
common.HexToHash("0x5555555555555555555555555555555555555555555555555555555555555555"),
},
},
{
i: 1,
count: 3,
expected: [2]common.Hash{
common.HexToHash("0x5555555555555555555555555555555555555555555555555555555555555555"),
common.HexToHash("0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"),
},
},
{
i: 2,
count: 3,
expected: [2]common.Hash{
common.HexToHash("0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"),
common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
}
for _, tt := range tests {
t.Run(fmt.Sprintf("i %d, count %d", tt.i, tt.count), func(t *testing.T) {
start, end := PartitionKeyspace(tt.i, tt.count)
require.Equal(t, tt.expected[0], start)
require.Equal(t, tt.expected[1], end)
})
}
t.Run("panics on invalid i or count", func(t *testing.T) {
require.Panics(t, func() {
PartitionKeyspace(1, 1)
})
require.Panics(t, func() {
PartitionKeyspace(-1, 1)
})
require.Panics(t, func() {
PartitionKeyspace(0, -1)
})
require.Panics(t, func() {
PartitionKeyspace(-1, -1)
})
})
}
func setupRandTest(t *testing.T) ([]common.Hash, DBFactory, int) {
memDB := rawdb.NewMemoryDatabase()
db, err := state.New(common.Hash{}, state.NewDatabaseWithConfig(memDB, &trie.Config{
Preimages: true,
Cache: 1024,
}), nil)
require.NoError(t, err)
hashCount := rand.Intn(100)
if hashCount == 0 {
hashCount = 1
}
hashes := make([]common.Hash, hashCount)
db.CreateAccount(testAddr)
for j := 0; j < hashCount; j++ {
hashes[j] = randHash(t)
db.SetState(testAddr, hashes[j], hashes[j])
}
root, err := db.Commit(false)
require.NoError(t, err)
err = db.Database().TrieDB().Commit(root, true)
require.NoError(t, err)
factory := func() (*state.StateDB, error) {
return state.New(root, state.NewDatabaseWithConfig(memDB, &trie.Config{
Preimages: true,
Cache: 1024,
}), nil)
}
workerCount := rand.Intn(64)
if workerCount == 0 {
workerCount = 1
}
return hashes, factory, workerCount
}
func randHash(t *testing.T) common.Hash {
var h common.Hash
_, err := crand.Read(h[:])
require.NoError(t, err)
return h
}
...@@ -35,7 +35,7 @@ func TestBatchInLastPossibleBlocks(gt *testing.T) { ...@@ -35,7 +35,7 @@ func TestBatchInLastPossibleBlocks(gt *testing.T) {
ChainID: sd.L2Cfg.Config.ChainID, ChainID: sd.L2Cfg.Config.ChainID,
Nonce: n, Nonce: n,
GasTipCap: big.NewInt(2 * params.GWei), GasTipCap: big.NewInt(2 * params.GWei),
GasFeeCap: new(big.Int).Add(miner.l1Chain.CurrentBlock().BaseFee(), big.NewInt(2*params.GWei)), GasFeeCap: new(big.Int).Add(miner.l1Chain.CurrentBlock().BaseFee, big.NewInt(2*params.GWei)),
Gas: params.TxGas, Gas: params.TxGas,
To: &dp.Addresses.Bob, To: &dp.Addresses.Bob,
Value: e2eutils.Ether(2), Value: e2eutils.Ether(2),
...@@ -146,7 +146,7 @@ func TestLargeL1Gaps(gt *testing.T) { ...@@ -146,7 +146,7 @@ func TestLargeL1Gaps(gt *testing.T) {
ChainID: sd.L2Cfg.Config.ChainID, ChainID: sd.L2Cfg.Config.ChainID,
Nonce: n, Nonce: n,
GasTipCap: big.NewInt(2 * params.GWei), GasTipCap: big.NewInt(2 * params.GWei),
GasFeeCap: new(big.Int).Add(miner.l1Chain.CurrentBlock().BaseFee(), big.NewInt(2*params.GWei)), GasFeeCap: new(big.Int).Add(miner.l1Chain.CurrentBlock().BaseFee, big.NewInt(2*params.GWei)),
Gas: params.TxGas, Gas: params.TxGas,
To: &dp.Addresses.Bob, To: &dp.Addresses.Bob,
Value: e2eutils.Ether(2), Value: e2eutils.Ether(2),
......
...@@ -21,7 +21,7 @@ func TestShapellaL1Fork(gt *testing.T) { ...@@ -21,7 +21,7 @@ func TestShapellaL1Fork(gt *testing.T) {
_, _, miner, sequencer, _, verifier, _, batcher := setupReorgTestActors(t, dp, sd, log) _, _, miner, sequencer, _, verifier, _, batcher := setupReorgTestActors(t, dp, sd, log)
require.False(t, sd.L1Cfg.Config.IsShanghai(miner.l1Chain.CurrentBlock().Time()), "not active yet") require.False(t, sd.L1Cfg.Config.IsShanghai(miner.l1Chain.CurrentBlock().Time), "not active yet")
// start op-nodes // start op-nodes
sequencer.ActL2PipelineFull(t) sequencer.ActL2PipelineFull(t)
...@@ -34,7 +34,7 @@ func TestShapellaL1Fork(gt *testing.T) { ...@@ -34,7 +34,7 @@ func TestShapellaL1Fork(gt *testing.T) {
// verify Shanghai is active // verify Shanghai is active
l1Head := miner.l1Chain.CurrentBlock() l1Head := miner.l1Chain.CurrentBlock()
require.True(t, sd.L1Cfg.Config.IsShanghai(l1Head.Time())) require.True(t, sd.L1Cfg.Config.IsShanghai(l1Head.Time))
// build L2 chain up to and including L2 blocks referencing shanghai L1 blocks // build L2 chain up to and including L2 blocks referencing shanghai L1 blocks
sequencer.ActL1HeadSignal(t) sequencer.ActL1HeadSignal(t)
......
...@@ -10,6 +10,7 @@ import ( ...@@ -10,6 +10,7 @@ import (
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/trie" "github.com/ethereum/go-ethereum/trie"
"github.com/stretchr/testify/require"
) )
// L1Miner wraps a L1Replica with instrumented block building ability. // L1Miner wraps a L1Replica with instrumented block building ability.
...@@ -100,26 +101,33 @@ func (s *L1Miner) ActL1IncludeTx(from common.Address) Action { ...@@ -100,26 +101,33 @@ func (s *L1Miner) ActL1IncludeTx(from common.Address) Action {
t.Fatalf("no pending txs from %s, and have %d unprocessable queued txs from this account", from, len(q)) t.Fatalf("no pending txs from %s, and have %d unprocessable queued txs from this account", from, len(q))
} }
tx := txs[i] tx := txs[i]
if tx.Gas() > s.l1BuildingHeader.GasLimit { s.IncludeTx(t, tx)
t.Fatalf("tx consumes %d gas, more than available in L1 block %d", tx.Gas(), s.l1BuildingHeader.GasLimit)
}
if tx.Gas() > uint64(*s.l1GasPool) {
t.InvalidAction("action takes too much gas: %d, only have %d", tx.Gas(), uint64(*s.l1GasPool))
return
}
s.pendingIndices[from] = i + 1 // won't retry the tx s.pendingIndices[from] = i + 1 // won't retry the tx
s.l1BuildingState.SetTxContext(tx.Hash(), len(s.l1Transactions))
receipt, err := core.ApplyTransaction(s.l1Cfg.Config, s.l1Chain, &s.l1BuildingHeader.Coinbase,
s.l1GasPool, s.l1BuildingState, s.l1BuildingHeader, tx, &s.l1BuildingHeader.GasUsed, *s.l1Chain.GetVMConfig())
if err != nil {
s.l1TxFailed = append(s.l1TxFailed, tx)
t.Fatalf("failed to apply transaction to L1 block (tx %d): %w", len(s.l1Transactions), err)
}
s.l1Receipts = append(s.l1Receipts, receipt)
s.l1Transactions = append(s.l1Transactions, tx)
} }
} }
func (s *L1Miner) IncludeTx(t Testing, tx *types.Transaction) {
from, err := s.l1Signer.Sender(tx)
require.NoError(t, err)
s.log.Info("including tx", "nonce", tx.Nonce(), "from", from)
if tx.Gas() > s.l1BuildingHeader.GasLimit {
t.Fatalf("tx consumes %d gas, more than available in L1 block %d", tx.Gas(), s.l1BuildingHeader.GasLimit)
}
if tx.Gas() > uint64(*s.l1GasPool) {
t.InvalidAction("action takes too much gas: %d, only have %d", tx.Gas(), uint64(*s.l1GasPool))
return
}
s.l1BuildingState.SetTxContext(tx.Hash(), len(s.l1Transactions))
receipt, err := core.ApplyTransaction(s.l1Cfg.Config, s.l1Chain, &s.l1BuildingHeader.Coinbase,
s.l1GasPool, s.l1BuildingState, s.l1BuildingHeader, tx, &s.l1BuildingHeader.GasUsed, *s.l1Chain.GetVMConfig())
if err != nil {
s.l1TxFailed = append(s.l1TxFailed, tx)
t.Fatalf("failed to apply transaction to L1 block (tx %d): %v", len(s.l1Transactions), err)
}
s.l1Receipts = append(s.l1Receipts, receipt)
s.l1Transactions = append(s.l1Transactions, tx)
}
func (s *L1Miner) ActL1SetFeeRecipient(coinbase common.Address) { func (s *L1Miner) ActL1SetFeeRecipient(coinbase common.Address) {
s.prefCoinbase = coinbase s.prefCoinbase = coinbase
if s.l1Building { if s.l1Building {
......
@@ -31,7 +31,7 @@ func TestL1Miner_BuildBlock(gt *testing.T) {
 		ChainID:   sd.L1Cfg.Config.ChainID,
 		Nonce:     0,
 		GasTipCap: big.NewInt(2 * params.GWei),
-		GasFeeCap: new(big.Int).Add(miner.l1Chain.CurrentBlock().BaseFee(), big.NewInt(2*params.GWei)),
+		GasFeeCap: new(big.Int).Add(miner.l1Chain.CurrentBlock().BaseFee, big.NewInt(2*params.GWei)),
 		Gas:       params.TxGas,
 		To:        &dp.Addresses.Bob,
 		Value:     e2eutils.Ether(2),
@@ -41,7 +41,8 @@ func TestL1Miner_BuildBlock(gt *testing.T) {
 	// make an empty block, even though a tx may be waiting
 	miner.ActL1StartBlock(10)(t)
 	miner.ActL1EndBlock(t)
-	bl := miner.l1Chain.CurrentBlock()
+	header := miner.l1Chain.CurrentBlock()
+	bl := miner.l1Chain.GetBlockByHash(header.Hash())
 	require.Equal(t, uint64(1), bl.NumberU64())
 	require.Zero(gt, bl.Transactions().Len())
@@ -49,7 +50,8 @@ func TestL1Miner_BuildBlock(gt *testing.T) {
 	miner.ActL1StartBlock(10)(t)
 	miner.ActL1IncludeTx(dp.Addresses.Alice)(t)
 	miner.ActL1EndBlock(t)
-	bl = miner.l1Chain.CurrentBlock()
+	header = miner.l1Chain.CurrentBlock()
+	bl = miner.l1Chain.GetBlockByHash(header.Hash())
 	require.Equal(t, uint64(2), bl.NumberU64())
 	require.Equal(t, 1, bl.Transactions().Len())
 	require.Equal(t, tx.Hash(), bl.Transactions()[0].Hash())
...
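The mechanical changes here (BaseFee() becoming a field access, and the extra GetBlockByHash lookup before NumberU64/Transactions) track a go-ethereum API change: BlockChain.CurrentBlock now returns a *types.Header rather than a *types.Block. A sketch of the resulting two-step pattern, assuming that newer geth API; `headerAndBlock` and `chain` are illustrative names:

import (
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
)

// headerAndBlock shows the lookup the test now performs: the header is cheap
// and carries BaseFee/Number as plain fields; the full block (with its tx
// body) must be fetched separately by hash.
func headerAndBlock(chain *core.BlockChain) (*types.Header, *types.Block) {
	header := chain.CurrentBlock()               // *types.Header in newer geth
	_ = header.BaseFee                           // field access, no longer a method
	block := chain.GetBlockByHash(header.Hash()) // needed for block.Transactions()
	return header, block
}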
@@ -103,9 +103,9 @@ func (s *L1Replica) ActL1RewindDepth(depth uint64) Action {
 		t.InvalidAction("cannot rewind L1 past genesis (current: %d, rewind depth: %d)", head, depth)
 		return
 	}
-	finalized := s.l1Chain.CurrentFinalizedBlock()
-	if finalized != nil && head < finalized.NumberU64()+depth {
-		t.InvalidAction("cannot rewind head of chain past finalized block %d with rewind depth %d", finalized.NumberU64(), depth)
+	finalized := s.l1Chain.CurrentFinalBlock()
+	if finalized != nil && head < finalized.Number.Uint64()+depth {
+		t.InvalidAction("cannot rewind head of chain past finalized block %d with rewind depth %d", finalized.Number.Uint64(), depth)
 		return
 	}
 	if err := s.l1Chain.SetHead(head - depth); err != nil {
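One detail worth spelling out in the hunk above: the guard rejects the rewind when head < finalized+depth, which is the underflow-safe way of writing head-depth < finalized with unsigned integers, i.e. a rewind may never cross the finalized height. Restated as a tiny predicate (a sketch; the name is illustrative):

// canRewind reports whether rewinding `depth` blocks from `head` stays at or
// above the finalized height. The comparison keeps the addition on the
// right-hand side so the uint64 arithmetic cannot underflow when depth > head.
func canRewind(head, depth, finalized uint64) bool {
	return head >= finalized+depth
}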
@@ -188,7 +188,7 @@ func (s *L1Replica) UnsafeNum() uint64 {
 	head := s.l1Chain.CurrentBlock()
 	headNum := uint64(0)
 	if head != nil {
-		headNum = head.NumberU64()
+		headNum = head.Number.Uint64()
 	}
 	return headNum
 }
@@ -197,16 +197,16 @@ func (s *L1Replica) SafeNum() uint64 {
 	safe := s.l1Chain.CurrentSafeBlock()
 	safeNum := uint64(0)
 	if safe != nil {
-		safeNum = safe.NumberU64()
+		safeNum = safe.Number.Uint64()
 	}
 	return safeNum
 }
 
 func (s *L1Replica) FinalizedNum() uint64 {
-	finalized := s.l1Chain.CurrentFinalizedBlock()
+	finalized := s.l1Chain.CurrentFinalBlock()
 	finalizedNum := uint64(0)
 	if finalized != nil {
-		finalizedNum = finalized.NumberU64()
+		finalizedNum = finalized.Number.Uint64()
 	}
 	return finalizedNum
 }
@@ -219,7 +219,7 @@ func (s *L1Replica) ActL1Finalize(t Testing, num uint64) {
 		t.InvalidAction("need to move forward safe block before moving finalized block")
 		return
 	}
-	newFinalized := s.l1Chain.GetBlockByNumber(num)
+	newFinalized := s.l1Chain.GetHeaderByNumber(num)
 	if newFinalized == nil {
 		t.Fatalf("expected block at %d after finalized L1 block %d, safe head is ahead", num, finalizedNum)
 	}
@@ -234,7 +234,7 @@ func (s *L1Replica) ActL1FinalizeNext(t Testing) {
 // ActL1Safe marks the given unsafe block as safe.
 func (s *L1Replica) ActL1Safe(t Testing, num uint64) {
-	newSafe := s.l1Chain.GetBlockByNumber(num)
+	newSafe := s.l1Chain.GetHeaderByNumber(num)
 	if newSafe == nil {
 		t.InvalidAction("could not find L1 block %d, cannot label it as safe", num)
 		return
...
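The same header-based API applies to the forkchoice labels: CurrentFinalizedBlock is renamed CurrentFinalBlock and returns a header, block numbers come from the header's *big.Int Number field rather than NumberU64(), and GetHeaderByNumber replaces GetBlockByNumber because moving a safe/finalized label never needs a transaction body. A sketch under the same assumed geth API; that SetSafe takes a *types.Header here is my assumption of what ActL1Safe calls into:

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core"
)

// labelSafe moves the "safe" forkchoice label to block `num`, header-only.
func labelSafe(chain *core.BlockChain, num uint64) error {
	h := chain.GetHeaderByNumber(num) // the header suffices; no body lookup
	if h == nil {
		return fmt.Errorf("no L1 block at height %d to label as safe", num)
	}
	chain.SetSafe(h) // assumed signature: SetSafe(*types.Header)
	return nil
}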
@@ -85,7 +85,7 @@ func TestL1Replica_ActL1Sync(gt *testing.T) {
 	})
 	syncFromA := replica1.ActL1Sync(canonL1(chainA))
 	// sync canonical chain A
-	for replica1.l1Chain.CurrentBlock().NumberU64()+1 < uint64(len(chainA)) {
+	for replica1.l1Chain.CurrentBlock().Number.Uint64()+1 < uint64(len(chainA)) {
 		syncFromA(t)
 	}
 	require.Equal(t, replica1.l1Chain.CurrentBlock().Hash(), chainA[len(chainA)-1].Hash(), "sync replica1 to head of chain A")
@@ -94,7 +94,7 @@ func TestL1Replica_ActL1Sync(gt *testing.T) {
 	// sync new canonical chain B
 	syncFromB := replica1.ActL1Sync(canonL1(chainB))
-	for replica1.l1Chain.CurrentBlock().NumberU64()+1 < uint64(len(chainB)) {
+	for replica1.l1Chain.CurrentBlock().Number.Uint64()+1 < uint64(len(chainB)) {
 		syncFromB(t)
 	}
 	require.Equal(t, replica1.l1Chain.CurrentBlock().Hash(), chainB[len(chainB)-1].Hash(), "sync replica1 to head of chain B")
@@ -105,7 +105,7 @@ func TestL1Replica_ActL1Sync(gt *testing.T) {
 		_ = replica2.Close()
 	})
 	syncFromOther := replica2.ActL1Sync(replica1.CanonL1Chain())
-	for replica2.l1Chain.CurrentBlock().NumberU64()+1 < uint64(len(chainB)) {
+	for replica2.l1Chain.CurrentBlock().Number.Uint64()+1 < uint64(len(chainB)) {
 		syncFromOther(t)
 	}
 	require.Equal(t, replica2.l1Chain.CurrentBlock().Hash(), chainB[len(chainB)-1].Hash(), "sync replica2 to head of chain B")
...
This diff is collapsed.
@@ -55,7 +55,7 @@ func TestL2Sequencer_SequencerDrift(gt *testing.T) {
 		ChainID:   sd.L2Cfg.Config.ChainID,
 		Nonce:     n,
 		GasTipCap: big.NewInt(2 * params.GWei),
-		GasFeeCap: new(big.Int).Add(miner.l1Chain.CurrentBlock().BaseFee(), big.NewInt(2*params.GWei)),
+		GasFeeCap: new(big.Int).Add(miner.l1Chain.CurrentBlock().BaseFee, big.NewInt(2*params.GWei)),
 		Gas:       params.TxGas,
 		To:        &dp.Addresses.Bob,
 		Value:     e2eutils.Ether(2),
@@ -76,7 +76,7 @@ func TestL2Sequencer_SequencerDrift(gt *testing.T) {
 	origin := miner.l1Chain.CurrentBlock()
 	// L2 makes blocks to catch up
-	for sequencer.SyncStatus().UnsafeL2.Time+sd.RollupCfg.BlockTime < origin.Time() {
+	for sequencer.SyncStatus().UnsafeL2.Time+sd.RollupCfg.BlockTime < origin.Time {
 		makeL2BlockWithAliceTx()
 		require.Equal(t, uint64(0), sequencer.SyncStatus().UnsafeL2.L1Origin.Number, "no L1 origin change before time matches")
 	}
@@ -89,7 +89,7 @@ func TestL2Sequencer_SequencerDrift(gt *testing.T) {
 	sequencer.ActL1HeadSignal(t)
 	// Make blocks up till the sequencer drift is about to surpass, but keep the old L1 origin
-	for sequencer.SyncStatus().UnsafeL2.Time+sd.RollupCfg.BlockTime <= origin.Time()+sd.RollupCfg.MaxSequencerDrift {
+	for sequencer.SyncStatus().UnsafeL2.Time+sd.RollupCfg.BlockTime <= origin.Time+sd.RollupCfg.MaxSequencerDrift {
 		sequencer.ActL2KeepL1Origin(t)
 		makeL2BlockWithAliceTx()
 		require.Equal(t, uint64(1), sequencer.SyncStatus().UnsafeL2.L1Origin.Number, "expected to keep old L1 origin")
...
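The two loops in this test bracket the sequencer-drift rule: L2 time first catches up to the L1 origin's timestamp, and new L2 blocks may then keep that origin only while their timestamps stay within MaxSequencerDrift of it (origin.Time is now a header field rather than a Block method, hence the change). The bound, restated as a tiny predicate (a sketch; not the canonical op-node check):

// mayKeepOrigin reports whether an L2 block at nextTime may still reuse an L1
// origin whose timestamp is originTime, given the rollup's max sequencer drift.
func mayKeepOrigin(nextTime, originTime, maxSeqDrift uint64) bool {
	return nextTime <= originTime+maxSeqDrift
}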
This diff is collapsed.