Commit 9902541b authored by mergify[bot], committed by GitHub

Merge branch 'develop' into qbzzt/230314-opstack-build

parents 272cb6c6 60e56b5e
......@@ -2,4 +2,4 @@
'@eth-optimism/contracts-bedrock': patch
---
Print tenderly simulation links during deployment
Reduce the time that the system dictator deploy scripts wait before checking the chain state.
---
'@eth-optimism/atst': minor
---
Update readAttestations and prepareWriteAttestation to handle keys longer than 32 bytes
---
'@eth-optimism/atst': minor
---
Remove the broken allowFailures option
---
'@eth-optimism/common-ts': patch
---
Fix BaseServiceV2 configuration for camelCase options
---
'@eth-optimism/atst': patch
---
Update docs
---
'@eth-optimism/atst': minor
---
Move the React API to @eth-optimism/atst/react so React isn't required to run the core SDK
---
'@eth-optimism/sdk': patch
---
Update the migrated withdrawal gas limit calculation
---
'@eth-optimism/atst': minor
---
Fix main and module in atst package.json
---
'@eth-optimism/atst': patch
---
Fix a bug where atst did not default to the currently connected chain
---
'@eth-optimism/atst': minor
---
Deprecate parseAttestationBytes and createRawKey in favor of createKey and createValue
---
'@eth-optimism/fault-detector': patch
---
Fixes a bug that would cause the fault detector to error out if no outputs had been proposed yet.
---
'@eth-optimism/chain-mon': patch
'@eth-optimism/data-transport-layer': patch
'@eth-optimism/fault-detector': patch
'@eth-optimism/message-relayer': patch
'@eth-optimism/replica-healthcheck': patch
---
Empty patch release to re-release packages that failed to be released due to a bug in the release process.
......@@ -97,7 +97,6 @@ jobs:
- "packages/migration-data/node_modules"
- "packages/replica-healthcheck/node_modules"
- "packages/sdk/node_modules"
- "packages/two-step-monitor/node_modules"
- run:
name: print forge version
command: forge --version
......@@ -543,10 +542,6 @@ jobs:
name: Check integration-tests
command: npx depcheck
working_directory: integration-tests
- run:
name: Check two-step-monitor
command: npx depcheck
working_directory: packages/two-step-monitor
go-lint:
parameters:
......@@ -611,7 +606,7 @@ jobs:
command: |
# Note: We don't use CircleCI test splits because we need to split by test name, not by package. There is an additional
# constraint that gotestsum does not currently (nor likely will) accept files from different packages when building.
OP_TESTLOG_DISABLE_COLOR=true OP_E2E_DISABLE_PARALLEL=false OP_E2E_USE_HTTP=<<parameters.use_http>> gotestsum \
OP_TESTLOG_DISABLE_COLOR=true OP_E2E_DISABLE_PARALLEL=true OP_E2E_USE_HTTP=<<parameters.use_http>> gotestsum \
--format=standard-verbose --junitfile=/tmp/test-results/<<parameters.module>>_http_<<parameters.use_http>>.xml \
-- -timeout=20m ./...
working_directory: <<parameters.module>>
......
(The MIT License)
Copyright 2020-2022 Optimism
Copyright 2020-2023 Optimism
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
......
......@@ -24,7 +24,7 @@ If you want to build Optimism, check out the [Protocol Specs](./specs/).
## Community
General discussion happens most frequently on the [Optimism discord](https://discord.optimism.io).
General discussion happens most frequently on the [Optimism discord](https://discord-gateway.optimism.io).
Governance discussion can also be found on the [Optimism Governance Forum](https://gov.optimism.io/).
## Contributing
......@@ -138,7 +138,7 @@ When merging commits to the `develop` branch you MUST include a changeset file i
To add a changeset, run the command `yarn changeset` in the root of this monorepo.
You will be presented with a small prompt to select the packages to be released, the scope of the release (major, minor, or patch), and the reason for the release.
Comments with in changeset files will be automatically included in the changelog of the package.
Comments within changeset files will be automatically included in the changelog of the package.
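For example, a changeset that requests a patch release of a single package is a small markdown file like this (the text after the frontmatter becomes the changelog entry):
```
---
'@eth-optimism/sdk': patch
---

A short description of the change, which will be included in the package changelog.
```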
### Triggering Releases
......
......@@ -38,8 +38,8 @@ module.exports = {
offset: -200,
},
algolia: {
appId: '7Q6XITDI0Z',
apiKey: '9d55a31a04b210cd26f97deabd161705',
appId: 'O9WKE9RMCV',
apiKey: '00cf17cba30b374d08d7f7afead974be',
indexName: 'optimism'
},
nav: [
......@@ -117,7 +117,7 @@ module.exports = {
{
title: "OP Stack",
collapsable: false,
children: [
children: [
'/',
[
'/docs/understand/design-principles.md',
......@@ -126,7 +126,7 @@ module.exports = {
'/docs/understand/landscape.md',
'/docs/understand/explainer.md'
]
},
},
{
title: "Releases",
collapsable: false,
......@@ -164,10 +164,10 @@ module.exports = {
title: "Sample Hacks",
children: [
"/docs/build/tutorials/add-attr.md",
"/docs/build/tutorials/new-precomp.md",
"/docs/build/tutorials/new-precomp.md",
]
} // End of tutorials
],
} // End of tutorials
],
}, // End of OP Stack hacks
],
}, // End of Building OP Stack Rollups
......@@ -185,7 +185,7 @@ module.exports = {
'/docs/security/faq.md',
'/docs/security/policy.md',
]
},
},
], // end of sidebar
plugins: [
"@vuepress/pwa",
......
import event from '@vuepress/plugin-pwa/lib/event'
export default ({ router }) => {
registerAutoReload();
router.addRoutes([
{ path: '/docs/', redirect: '/' },
])
}
// When new content is detected by the app, this will automatically
// refresh the page, so that users do not need to manually click
// the refresh button. For more details see:
// https://linear.app/optimism/issue/FE-1003/investigate-archive-issue-on-docs
const registerAutoReload = () => {
event.$on('sw-updated', e => e.skipWaiting().then(() => {
location.reload(true);
}))
}
......@@ -19,13 +19,13 @@ aside.sidebar {
p.sidebar-heading {
color: #323A43 !important;
font-family: 'Open Sans', sans-serif;
font-weight: 600;
font-weight: 600 !important;
font-size: 14px !important;
line-height: 24px !important;
min-height: 36px;
margin-left: 32px;
margin-left: 20px;
padding: 8px 16px !important;
width: calc(100% - 64px) !important;
width: calc(100% - 60px) !important;
border-radius: 8px;
}
......@@ -34,18 +34,20 @@ a.sidebar-link {
font-size: 14px !important;
line-height: 24px !important;
min-height: 36px;
margin-left: 32px;
margin-top: 3px;
margin-left: 20px;
padding: 8px 16px !important;
width: calc(100% - 64px) !important;
width: calc(100% - 60px) !important;
border-radius: 8px;
}
section.sidebar-group a.sidebar-link {
margin-left: 44px;
width: calc(100% - 64px) !important;
section.sidebar-group a.sidebar-link,
section.sidebar-group p.sidebar-heading.clickable {
margin-left: 32px;
width: calc(100% - 60px) !important;
}
.sidebar-links:not(.sidebar-group-items) > li > a.sidebar-link {
.sidebar-links:not(.sidebar-group-items) > li > a.sidebar-link {
font-weight: 600 !important;
color: #323A43 !important;
}
......
......@@ -59,3 +59,66 @@ Download and install [Docker engine](https://docs.docker.com/engine/install/#ser
After the Docker containers start, browse to `http://<computer running Blockscout>:4000` to view the user interface.
You can also use the [API](https://docs.blockscout.com/for-users/api).
### GraphQL
Blockscout's API includes [GraphQL](https://graphql.org/) support under `/graphiql`.
For example, this query looks at addresses.
```
query {
addresses(hashes:[
"0xcB69A90Aa5311e0e9141a66212489bAfb48b9340",
"0xC2dfA7205088179A8644b9fDCecD6d9bED854Cfe"])
```
GraphQL queries start with a top level entity (or entities).
In this case, our [top level query](https://docs.blockscout.com/for-users/api/graphql#queries) is for multiple addresses.
Note that you can only query on fields that are indexed.
For example, here we query by address hashes.
We could not query by `contractCode` or `fetchedCoinBalance`, because those fields are not indexed.
```
{
hash
contractCode
fetchedCoinBalance
```
The fields above are fetched from the address table.
```
transactions(first:5) {
```
We can also fetch the transactions that include the address (either as source or destination).
The API does not let us fetch an unlimited number of transactions, so here we ask for the first 5.
```
edges {
node {
```
Because this is a [graph](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)), the entities that connect two types, for example addresses and transactions, are called `edges`.
At the other end of each edge there is a transaction, which is a separate `node`.
```
hash
fromAddressHash
toAddressHash
input
}
```
These are the fields we read for each transaction.
```
}
}
}
}
```
Finally, close all the brackets.
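Putting the pieces together, here is the complete query, reassembled from the fragments above; you can paste it into the `/graphiql` interface to run it:
```
query {
  addresses(hashes:[
    "0xcB69A90Aa5311e0e9141a66212489bAfb48b9340",
    "0xC2dfA7205088179A8644b9fDCecD6d9bED854Cfe"])
  {
    hash
    contractCode
    fetchedCoinBalance
    transactions(first:5) {
      edges {
        node {
          hash
          fromAddressHash
          toAddressHash
          input
        }
      }
    }
  }
}
```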
---
title: Contribute to OP Stack
title: Contribute to the OP Stack
lang: en-US
---
......@@ -21,7 +21,7 @@ The OP Stack needs YOU (yes you!) to help review the codebase for bugs and vulne
## Documentation help
Spot a typo in these docs? See something that could be a little clearer? Head over to the Optimism Monorepo where the OP Stack docs are hosted and make a pull request. No contribution is too small!
Spot a typo in these docs? See something that could be a little clearer? Head over to the Optimism Monorepo where the OP Stack docs are hosted and make a pull request. No contribution is too small!
## Community contributions
......@@ -30,4 +30,4 @@ If you’re looking for other ways to get involved, here are a few options:
- Grab an idea from the [project ideas list](https://github.com/ethereum-optimism/optimism-project-ideas) and start building
- Suggest a new idea for the [project ideas list](https://github.com/ethereum-optimism/optimism-project-ideas)
- Improve the [Optimism Community Hub](https://community.optimism.io/) [documentation](https://github.com/ethereum-optimism/community-hub) or [tutorials](https://github.com/ethereum-optimism/optimism-tutorial)
- Become an Optimism Ambassador, Support Nerd, and more in the [Optimism Discord](https://discord-gateway.optimism.io/)
- Become an Optimism Ambassador, Support Nerd, and more in the [Optimism Discord](https://discord-gateway.optimism.io/)
......@@ -30,10 +30,10 @@
"devDependencies": {
"@babel/eslint-parser": "^7.5.4",
"@eth-optimism/contracts": "^0.5.40",
"@eth-optimism/contracts-bedrock": "0.13.0",
"@eth-optimism/contracts-bedrock": "0.13.1",
"@eth-optimism/contracts-periphery": "^1.0.7",
"@eth-optimism/core-utils": "0.12.0",
"@eth-optimism/sdk": "2.0.0",
"@eth-optimism/sdk": "2.0.1",
"@ethersproject/abstract-provider": "^5.7.0",
"@ethersproject/providers": "^5.7.0",
"@ethersproject/transactions": "^5.7.0",
......
......@@ -19,8 +19,18 @@ test:
lint:
golangci-lint run -E goimports,sqlclosecheck,bodyclose,asciicheck,misspell,errorlint -e "errors.As" -e "errors.Is"
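# fuzz runs each batcher fuzz target below for 10s. The -run NOTAREALTEST
# filter matches no unit tests, so only the fuzz targets execute.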
fuzz:
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzDurationZero ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzDurationTimeoutMaxChannelDuration ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzDurationTimeoutZeroMaxChannelDuration ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzChannelCloseTimeout ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzChannelZeroCloseTimeout ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzSeqWindowClose ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzSeqWindowZeroTimeoutClose ./batcher
.PHONY: \
op-batcher \
clean \
test \
lint
lint \
fuzz
......@@ -12,9 +12,9 @@ import (
gethrpc "github.com/ethereum/go-ethereum/rpc"
"github.com/urfave/cli"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-batcher/rpc"
oplog "github.com/ethereum-optimism/optimism/op-service/log"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
oppprof "github.com/ethereum-optimism/optimism/op-service/pprof"
oprpc "github.com/ethereum-optimism/optimism/op-service/rpc"
)
......@@ -36,9 +36,10 @@ func Main(version string, cliCtx *cli.Context) error {
}
l := oplog.NewLogger(cfg.LogConfig)
m := metrics.NewMetrics("default")
l.Info("Initializing Batch Submitter")
batchSubmitter, err := NewBatchSubmitterFromCLIConfig(cfg, l)
batchSubmitter, err := NewBatchSubmitterFromCLIConfig(cfg, l, m)
if err != nil {
l.Error("Unable to create Batch Submitter", "error", err)
return err
......@@ -64,16 +65,15 @@ func Main(version string, cliCtx *cli.Context) error {
}()
}
registry := opmetrics.NewRegistry()
metricsCfg := cfg.MetricsConfig
if metricsCfg.Enabled {
l.Info("starting metrics server", "addr", metricsCfg.ListenAddr, "port", metricsCfg.ListenPort)
go func() {
if err := opmetrics.ListenAndServe(ctx, registry, metricsCfg.ListenAddr, metricsCfg.ListenPort); err != nil {
if err := m.Serve(ctx, metricsCfg.ListenAddr, metricsCfg.ListenPort); err != nil {
l.Error("error starting metrics server", err)
}
}()
opmetrics.LaunchBalanceMetrics(ctx, l, registry, "", batchSubmitter.L1Client, batchSubmitter.From)
m.StartBalanceMetrics(ctx, l, batchSubmitter.L1Client, batchSubmitter.From)
}
rpcCfg := cfg.RPCConfig
......@@ -95,6 +95,9 @@ func Main(version string, cliCtx *cli.Context) error {
return fmt.Errorf("error starting RPC server: %w", err)
}
m.RecordInfo(version)
m.RecordUp()
interruptChannel := make(chan os.Signal, 1)
signal.Notify(interruptChannel, []os.Signal{
os.Interrupt,
......
......@@ -11,6 +11,29 @@ import (
"github.com/ethereum/go-ethereum/core/types"
)
var (
ErrZeroMaxFrameSize = errors.New("max frame size cannot be zero")
ErrSmallMaxFrameSize = errors.New("max frame size cannot be less than 23")
ErrInvalidChannelTimeout = errors.New("channel timeout is less than the safety margin")
ErrInputTargetReached = errors.New("target amount of input data reached")
ErrMaxFrameIndex = errors.New("max frame index reached (uint16)")
ErrMaxDurationReached = errors.New("max channel duration reached")
ErrChannelTimeoutClose = errors.New("close to channel timeout")
ErrSeqWindowClose = errors.New("close to sequencer window timeout")
)
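// ChannelFullError wraps the reason a channel was marked as full (one of the
// sentinel errors above) so callers can distinguish a full channel from a
// hard failure via errors.As.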
type ChannelFullError struct {
Err error
}
func (e *ChannelFullError) Error() string {
return "channel full: " + e.Err.Error()
}
func (e *ChannelFullError) Unwrap() error {
return e.Err
}
type ChannelConfig struct {
// Number of epochs (L1 blocks) per sequencing window, including the epoch
// L1 origin block itself
......@@ -48,6 +71,32 @@ type ChannelConfig struct {
ApproxComprRatio float64
}
// Check validates the [ChannelConfig] parameters.
func (cc *ChannelConfig) Check() error {
// The [ChannelTimeout] must be larger than the [SubSafetyMargin].
// Otherwise, new blocks would always be considered timed out.
if cc.ChannelTimeout < cc.SubSafetyMargin {
return ErrInvalidChannelTimeout
}
// If the [MaxFrameSize] is set to 0, the channel builder
// will infinitely loop when trying to create frames in the
// [channelBuilder.OutputFrames] function.
if cc.MaxFrameSize == 0 {
return ErrZeroMaxFrameSize
}
// If the [MaxFrameSize] is set to < 23, the channel out
// will underflow the maxSize variable in the [derive.ChannelOut].
// Since it is of type uint64, it will wrap around to a very large
// number, making the frame size extremely large.
if cc.MaxFrameSize < 23 {
return ErrSmallMaxFrameSize
}
return nil
}
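// Example (illustrative): a ChannelConfig with ChannelTimeout=100 and
// SubSafetyMargin=120 fails Check with ErrInvalidChannelTimeout, and one with
// MaxFrameSize=0 fails with ErrZeroMaxFrameSize, so bad configs are rejected
// at startup instead of looping or underflowing during frame production.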
// InputThreshold calculates the input data threshold in bytes from the given
// parameters.
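// As the tests for InputThreshold in this diff show, the result is
// approximately TargetFrameSize * TargetNumFrames / ApproxComprRatio,
// truncated to a uint64.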
func (c ChannelConfig) InputThreshold() uint64 {
......@@ -87,8 +136,12 @@ type channelBuilder struct {
blocks []*types.Block
// frames data queue, to be sent as txs
frames []frameData
// total amount of output data of all frames created so far
outputBytes int
}
// newChannelBuilder creates a new channel builder or returns an error if the
// channel out could not be created.
func newChannelBuilder(cfg ChannelConfig) (*channelBuilder, error) {
co, err := derive.NewChannelOut()
if err != nil {
......@@ -105,11 +158,21 @@ func (c *channelBuilder) ID() derive.ChannelID {
return c.co.ID()
}
// InputBytes returns to total amount of input bytes added to the channel.
// InputBytes returns the total amount of input bytes added to the channel.
func (c *channelBuilder) InputBytes() int {
return c.co.InputBytes()
}
// ReadyBytes returns the amount of bytes ready in the compression pipeline to
// output into a frame.
func (c *channelBuilder) ReadyBytes() int {
return c.co.ReadyBytes()
}
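// OutputBytes returns the total amount of output data of all frames created
// so far.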
func (c *channelBuilder) OutputBytes() int {
return c.outputBytes
}
// Blocks returns a backup list of all blocks that were added to the channel. It
// can be used in case the channel needs to be rebuilt.
func (c *channelBuilder) Blocks() []*types.Block {
......@@ -133,22 +196,25 @@ func (c *channelBuilder) Reset() error {
// AddBlock returns a ChannelFullError if called even though the channel is
// already full. See description of FullErr for details.
//
// AddBlock also returns the L1BlockInfo that got extracted from the block's
// first transaction for subsequent use by the caller.
//
// Call OutputFrames() afterwards to create frames.
func (c *channelBuilder) AddBlock(block *types.Block) error {
func (c *channelBuilder) AddBlock(block *types.Block) (derive.L1BlockInfo, error) {
if c.IsFull() {
return c.FullErr()
return derive.L1BlockInfo{}, c.FullErr()
}
batch, err := derive.BlockToBatch(block)
batch, l1info, err := derive.BlockToBatch(block)
if err != nil {
return fmt.Errorf("converting block to batch: %w", err)
return l1info, fmt.Errorf("converting block to batch: %w", err)
}
if _, err = c.co.AddBatch(batch); errors.Is(err, derive.ErrTooManyRLPBytes) {
c.setFullErr(err)
return c.FullErr()
return l1info, c.FullErr()
} else if err != nil {
return fmt.Errorf("adding block to channel out: %w", err)
return l1info, fmt.Errorf("adding block to channel out: %w", err)
}
c.blocks = append(c.blocks, block)
c.updateSwTimeout(batch)
......@@ -158,7 +224,7 @@ func (c *channelBuilder) AddBlock(block *types.Block) error {
// Adding this block still worked, so don't return error, just mark as full
}
return nil
return l1info, nil
}
// Timeout management
......@@ -215,7 +281,7 @@ func (c *channelBuilder) updateTimeout(timeoutBlockNum uint64, reason error) {
}
// checkTimeout checks if the channel is timed out at the given block number and
// in this case marks the channel as full, if it wasn't full alredy.
// in this case marks the channel as full, if it wasn't full already.
func (c *channelBuilder) checkTimeout(blockNum uint64) {
if !c.IsFull() && c.TimedOut(blockNum) {
c.setFullErr(c.timeoutReason)
......@@ -330,10 +396,11 @@ func (c *channelBuilder) outputFrame() error {
}
frame := frameData{
id: txID{chID: c.co.ID(), frameNumber: fn},
id: frameID{chID: c.co.ID(), frameNumber: fn},
data: buf.Bytes(),
}
c.frames = append(c.frames, frame)
c.outputBytes += len(frame.data)
return err // possibly io.EOF (last frame)
}
......@@ -371,23 +438,3 @@ func (c *channelBuilder) PushFrame(frame frameData) {
}
c.frames = append(c.frames, frame)
}
var (
ErrInputTargetReached = errors.New("target amount of input data reached")
ErrMaxFrameIndex = errors.New("max frame index reached (uint16)")
ErrMaxDurationReached = errors.New("max channel duration reached")
ErrChannelTimeoutClose = errors.New("close to channel timeout")
ErrSeqWindowClose = errors.New("close to sequencer window timeout")
)
type ChannelFullError struct {
Err error
}
func (e *ChannelFullError) Error() string {
return "channel full: " + e.Err.Error()
}
func (e *ChannelFullError) Unwrap() error {
return e.Err
}
package batcher_test
import (
"math"
"testing"
"github.com/ethereum-optimism/optimism/op-batcher/batcher"
"github.com/stretchr/testify/require"
)
// TestInputThreshold tests the [ChannelConfig.InputThreshold]
// function using a table-driven testing approach.
func TestInputThreshold(t *testing.T) {
type testInput struct {
TargetFrameSize uint64
TargetNumFrames int
ApproxComprRatio float64
}
type test struct {
input testInput
assertion func(uint64)
}
// Construct test cases that test the boundary conditions
tests := []test{
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(2), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 100000,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(250_000), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 1,
},
assertion: func(output uint64) {
require.Equal(t, uint64(1), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 2,
},
assertion: func(output uint64) {
require.Equal(t, uint64(0), output)
},
},
{
input: testInput{
TargetFrameSize: 100000,
TargetNumFrames: 1,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(250_000), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 100000,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(250_000), output)
},
},
{
input: testInput{
TargetFrameSize: 100000,
TargetNumFrames: 100000,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(25_000_000_000), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 0.000001,
},
assertion: func(output uint64) {
require.Equal(t, uint64(1_000_000), output)
},
},
{
input: testInput{
TargetFrameSize: 0,
TargetNumFrames: 0,
ApproxComprRatio: 0,
},
assertion: func(output uint64) {
// Need to allow for NaN depending on the machine architecture
require.True(t, output == uint64(0) || output == uint64(math.NaN()))
},
},
}
// Validate each test case
for _, tt := range tests {
config := batcher.ChannelConfig{
TargetFrameSize: tt.input.TargetFrameSize,
TargetNumFrames: tt.input.TargetNumFrames,
ApproxComprRatio: tt.input.ApproxComprRatio,
}
got := config.InputThreshold()
tt.assertion(got)
}
}
......@@ -6,7 +6,9 @@ import (
"io"
"math"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
......@@ -22,8 +24,9 @@ var ErrReorg = errors.New("block does not extend existing chain")
// channel.
// Functions on channelManager are not safe for concurrent access.
type channelManager struct {
log log.Logger
cfg ChannelConfig
log log.Logger
metr metrics.Metricer
cfg ChannelConfig
// All blocks since the last request for new tx data.
blocks []*types.Block
......@@ -40,10 +43,12 @@ type channelManager struct {
confirmedTransactions map[txID]eth.BlockID
}
func NewChannelManager(log log.Logger, cfg ChannelConfig) *channelManager {
func NewChannelManager(log log.Logger, metr metrics.Metricer, cfg ChannelConfig) *channelManager {
return &channelManager{
log: log,
cfg: cfg,
log: log,
metr: metr,
cfg: cfg,
pendingTransactions: make(map[txID]txData),
confirmedTransactions: make(map[txID]eth.BlockID),
}
......@@ -71,6 +76,8 @@ func (s *channelManager) TxFailed(id txID) {
} else {
s.log.Warn("unknown transaction marked as failed", "id", id)
}
s.metr.RecordBatchTxFailed()
}
// TxConfirmed marks a transaction as confirmed on L1. Unfortunately even if all frames in
......@@ -78,7 +85,8 @@ func (s *channelManager) TxFailed(id txID) {
// resubmitted.
// This function may reset the pending channel if the pending channel has timed out.
func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) {
s.log.Trace("marked transaction as confirmed", "id", id, "block", inclusionBlock)
s.metr.RecordBatchTxSubmitted()
s.log.Debug("marked transaction as confirmed", "id", id, "block", inclusionBlock)
if _, ok := s.pendingTransactions[id]; !ok {
s.log.Warn("unknown transaction marked as confirmed", "id", id, "block", inclusionBlock)
// TODO: This can occur if we clear the channel while there are still pending transactions
......@@ -92,13 +100,15 @@ func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) {
// If this channel timed out, put the pending blocks back into the local saved blocks
// and then reset this state so it can try to build a new channel.
if s.pendingChannelIsTimedOut() {
s.log.Warn("Channel timed out", "chID", s.pendingChannel.ID())
s.metr.RecordChannelTimedOut(s.pendingChannel.ID())
s.log.Warn("Channel timed out", "id", s.pendingChannel.ID())
s.blocks = append(s.pendingChannel.Blocks(), s.blocks...)
s.clearPendingChannel()
}
// If we are done with this channel, record that.
if s.pendingChannelIsFullySubmitted() {
s.log.Info("Channel is fully submitted", "chID", s.pendingChannel.ID())
s.metr.RecordChannelFullySubmitted(s.pendingChannel.ID())
s.log.Info("Channel is fully submitted", "id", s.pendingChannel.ID())
s.clearPendingChannel()
}
}
......@@ -194,8 +204,8 @@ func (s *channelManager) TxData(l1Head eth.BlockID) (txData, error) {
// all pending blocks be included in this channel for submission.
s.registerL1Block(l1Head)
if err := s.pendingChannel.OutputFrames(); err != nil {
return txData{}, fmt.Errorf("creating frames with channel builder: %w", err)
if err := s.outputFrames(); err != nil {
return txData{}, err
}
return s.nextTxData()
......@@ -211,7 +221,11 @@ func (s *channelManager) ensurePendingChannel(l1Head eth.BlockID) error {
return fmt.Errorf("creating new channel: %w", err)
}
s.pendingChannel = cb
s.log.Info("Created channel", "chID", cb.ID(), "l1Head", l1Head)
s.log.Info("Created channel",
"id", cb.ID(),
"l1Head", l1Head,
"blocks_pending", len(s.blocks))
s.metr.RecordChannelOpened(cb.ID(), len(s.blocks))
return nil
}
......@@ -229,28 +243,27 @@ func (s *channelManager) registerL1Block(l1Head eth.BlockID) {
// processBlocks adds blocks from the blocks queue to the pending channel until
// either the queue got exhausted or the channel is full.
func (s *channelManager) processBlocks() error {
var blocksAdded int
var _chFullErr *ChannelFullError // throw away, just for type checking
var (
blocksAdded int
_chFullErr *ChannelFullError // throw away, just for type checking
latestL2ref eth.L2BlockRef
)
for i, block := range s.blocks {
if err := s.pendingChannel.AddBlock(block); errors.As(err, &_chFullErr) {
l1info, err := s.pendingChannel.AddBlock(block)
if errors.As(err, &_chFullErr) {
// current block didn't get added because channel is already full
break
} else if err != nil {
return fmt.Errorf("adding block[%d] to channel builder: %w", i, err)
}
blocksAdded += 1
latestL2ref = l2BlockRefFromBlockAndL1Info(block, l1info)
// current block got added but channel is now full
if s.pendingChannel.IsFull() {
break
}
}
s.log.Debug("Added blocks to channel",
"blocks_added", blocksAdded,
"channel_full", s.pendingChannel.IsFull(),
"blocks_pending", len(s.blocks)-blocksAdded,
"input_bytes", s.pendingChannel.InputBytes(),
)
if blocksAdded == len(s.blocks) {
// all blocks processed, reuse slice
s.blocks = s.blocks[:0]
......@@ -258,6 +271,53 @@ func (s *channelManager) processBlocks() error {
// remove processed blocks
s.blocks = s.blocks[blocksAdded:]
}
s.metr.RecordL2BlocksAdded(latestL2ref,
blocksAdded,
len(s.blocks),
s.pendingChannel.InputBytes(),
s.pendingChannel.ReadyBytes())
s.log.Debug("Added blocks to channel",
"blocks_added", blocksAdded,
"blocks_pending", len(s.blocks),
"channel_full", s.pendingChannel.IsFull(),
"input_bytes", s.pendingChannel.InputBytes(),
"ready_bytes", s.pendingChannel.ReadyBytes(),
)
return nil
}
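// outputFrames creates new frames with the channel builder. If the channel is
// now full, it also records the channel-closed metrics and logs a summary,
// including the achieved compression ratio.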
func (s *channelManager) outputFrames() error {
if err := s.pendingChannel.OutputFrames(); err != nil {
return fmt.Errorf("creating frames with channel builder: %w", err)
}
if !s.pendingChannel.IsFull() {
return nil
}
inBytes, outBytes := s.pendingChannel.InputBytes(), s.pendingChannel.OutputBytes()
s.metr.RecordChannelClosed(
s.pendingChannel.ID(),
len(s.blocks),
s.pendingChannel.NumFrames(),
inBytes,
outBytes,
s.pendingChannel.FullErr(),
)
var comprRatio float64
if inBytes > 0 {
comprRatio = float64(outBytes) / float64(inBytes)
}
s.log.Info("Channel closed",
"id", s.pendingChannel.ID(),
"blocks_pending", len(s.blocks),
"num_frames", s.pendingChannel.NumFrames(),
"input_bytes", inBytes,
"output_bytes", outBytes,
"full_reason", s.pendingChannel.FullErr(),
"compr_ratio", comprRatio,
)
return nil
}
......@@ -273,3 +333,14 @@ func (s *channelManager) AddL2Block(block *types.Block) error {
return nil
}
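// l2BlockRefFromBlockAndL1Info builds an eth.L2BlockRef from an L2 block and
// the L1BlockInfo that was extracted from the block's first (deposit)
// transaction.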
func l2BlockRefFromBlockAndL1Info(block *types.Block, l1info derive.L1BlockInfo) eth.L2BlockRef {
return eth.L2BlockRef{
Hash: block.Hash(),
Number: block.NumberU64(),
ParentHash: block.ParentHash(),
Time: block.Time(),
L1Origin: eth.BlockID{Hash: l1info.BlockHash, Number: l1info.Number},
SequenceNumber: l1info.SequenceNumber,
}
}
......@@ -9,6 +9,7 @@ import (
"github.com/urfave/cli"
"github.com/ethereum-optimism/optimism/op-batcher/flags"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-batcher/rpc"
"github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/sources"
......@@ -20,21 +21,35 @@ import (
)
type Config struct {
log log.Logger
L1Client *ethclient.Client
L2Client *ethclient.Client
RollupNode *sources.RollupClient
PollInterval time.Duration
log log.Logger
metr metrics.Metricer
L1Client *ethclient.Client
L2Client *ethclient.Client
RollupNode *sources.RollupClient
PollInterval time.Duration
From common.Address
TxManagerConfig txmgr.Config
From common.Address
// RollupConfig is queried at startup
Rollup *rollup.Config
// Channel creation parameters
// Channel builder parameters
Channel ChannelConfig
}
// Check ensures that the [Config] is valid.
func (c *Config) Check() error {
if err := c.Rollup.Check(); err != nil {
return err
}
if err := c.Channel.Check(); err != nil {
return err
}
return nil
}
type CLIConfig struct {
/* Required Params */
......
......@@ -10,7 +10,9 @@ import (
"sync"
"time"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opcrypto "github.com/ethereum-optimism/optimism/op-service/crypto"
"github.com/ethereum-optimism/optimism/op-service/txmgr"
"github.com/ethereum/go-ethereum/core/types"
......@@ -34,13 +36,14 @@ type BatchSubmitter struct {
// lastStoredBlock is the last block loaded into `state`. If it is empty, it should be set to the L2 safe head.
lastStoredBlock eth.BlockID
lastL1Tip eth.L1BlockRef
state *channelManager
}
// NewBatchSubmitterFromCLIConfig initializes the BatchSubmitter, gathering any resources
// that will be needed during operation.
func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger) (*BatchSubmitter, error) {
func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger, m metrics.Metricer) (*BatchSubmitter, error) {
ctx := context.Background()
signer, fromAddress, err := opcrypto.SignerFactoryFromConfig(l, cfg.PrivateKey, cfg.Mnemonic, cfg.SequencerHDPath, cfg.SignerConfig)
......@@ -99,12 +102,17 @@ func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger) (*BatchSubmitte
},
}
return NewBatchSubmitter(ctx, batcherCfg, l)
// Validate the batcher config
if err := batcherCfg.Check(); err != nil {
return nil, err
}
return NewBatchSubmitter(ctx, batcherCfg, l, m)
}
// NewBatchSubmitter initializes the BatchSubmitter, gathering any resources
// that will be needed during operation.
func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger) (*BatchSubmitter, error) {
func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger, m metrics.Metricer) (*BatchSubmitter, error) {
balance, err := cfg.L1Client.BalanceAt(ctx, cfg.From, nil)
if err != nil {
return nil, err
......@@ -113,12 +121,14 @@ func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger) (*BatchSub
cfg.log = l
cfg.log.Info("creating batch submitter", "submitter_addr", cfg.From, "submitter_bal", balance)
cfg.metr = m
return &BatchSubmitter{
Config: cfg,
txMgr: NewTransactionManager(l,
cfg.TxManagerConfig, cfg.Rollup.BatchInboxAddress, cfg.Rollup.L1ChainID,
cfg.From, cfg.L1Client),
state: NewChannelManager(l, cfg.Channel),
state: NewChannelManager(l, m, cfg.Channel),
}, nil
}
......@@ -182,13 +192,16 @@ func (l *BatchSubmitter) Stop() error {
func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) {
start, end, err := l.calculateL2BlockRangeToStore(ctx)
if err != nil {
l.log.Trace("was not able to calculate L2 block range", "err", err)
l.log.Warn("Error calculating L2 block range", "err", err)
return
} else if start.Number == end.Number {
return
}
var latestBlock *types.Block
// Add all blocks to "state"
for i := start.Number + 1; i < end.Number+1; i++ {
id, err := l.loadBlockIntoState(ctx, i)
block, err := l.loadBlockIntoState(ctx, i)
if errors.Is(err, ErrReorg) {
l.log.Warn("Found L2 reorg", "block_number", i)
l.state.Clear()
......@@ -198,24 +211,34 @@ func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) {
l.log.Warn("failed to load block into state", "err", err)
return
}
l.lastStoredBlock = id
l.lastStoredBlock = eth.ToBlockID(block)
latestBlock = block
}
l2ref, err := derive.L2BlockToBlockRef(latestBlock, &l.Rollup.Genesis)
if err != nil {
l.log.Warn("Invalid L2 block loaded into state", "err", err)
return
}
l.metr.RecordL2BlocksLoaded(l2ref)
}
// loadBlockIntoState fetches & stores a single block into `state`. It returns the block it loaded.
func (l *BatchSubmitter) loadBlockIntoState(ctx context.Context, blockNumber uint64) (eth.BlockID, error) {
func (l *BatchSubmitter) loadBlockIntoState(ctx context.Context, blockNumber uint64) (*types.Block, error) {
ctx, cancel := context.WithTimeout(ctx, networkTimeout)
defer cancel()
block, err := l.L2Client.BlockByNumber(ctx, new(big.Int).SetUint64(blockNumber))
cancel()
if err != nil {
return eth.BlockID{}, err
return nil, fmt.Errorf("getting L2 block: %w", err)
}
if err := l.state.AddL2Block(block); err != nil {
return eth.BlockID{}, err
return nil, fmt.Errorf("adding L2 block to state: %w", err)
}
id := eth.ToBlockID(block)
l.log.Info("added L2 block to local state", "block", id, "tx_count", len(block.Transactions()), "time", block.Time())
return id, nil
l.log.Info("added L2 block to local state", "block", eth.ToBlockID(block), "tx_count", len(block.Transactions()), "time", block.Time())
return block, nil
}
// calculateL2BlockRangeToStore determines the range (start,end] that should be loaded into the local state.
......@@ -278,6 +301,7 @@ func (l *BatchSubmitter) loop() {
l.log.Error("Failed to query L1 tip", "error", err)
break
}
l.recordL1Tip(l1tip)
// Collect next transaction data
txdata, err := l.state.TxData(l1tip.ID())
......@@ -311,6 +335,14 @@ func (l *BatchSubmitter) loop() {
}
}
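// recordL1Tip records the given L1 block as the latest tip metric, skipping
// the update if the tip is unchanged.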
func (l *BatchSubmitter) recordL1Tip(l1tip eth.L1BlockRef) {
if l.lastL1Tip == l1tip {
return
}
l.lastL1Tip = l1tip
l.metr.RecordLatestL1Block(l1tip)
}
func (l *BatchSubmitter) recordFailedTx(id txID, err error) {
l.log.Warn("Failed to send transaction", "err", err)
l.state.TxFailed(id)
......
......@@ -55,7 +55,7 @@ func (t *TransactionManager) SendTransaction(ctx context.Context, data []byte) (
return nil, fmt.Errorf("failed to create tx: %w", err)
}
ctx, cancel := context.WithTimeout(ctx, 100*time.Second) // TODO: Select a timeout that makes sense here.
ctx, cancel := context.WithTimeout(ctx, 10*time.Minute) // TODO: Select a timeout that makes sense here.
defer cancel()
if receipt, err := t.txMgr.Send(ctx, tx); err != nil {
t.log.Warn("unable to publish tx", "err", err, "data_size", len(data))
......
package metrics
import (
"context"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
)
const Namespace = "op_batcher"
type Metricer interface {
RecordInfo(version string)
RecordUp()
// Records all L1 and L2 block events
opmetrics.RefMetricer
RecordLatestL1Block(l1ref eth.L1BlockRef)
RecordL2BlocksLoaded(l2ref eth.L2BlockRef)
RecordChannelOpened(id derive.ChannelID, numPendingBlocks int)
RecordL2BlocksAdded(l2ref eth.L2BlockRef, numBlocksAdded, numPendingBlocks, inputBytes, outputComprBytes int)
RecordChannelClosed(id derive.ChannelID, numPendingBlocks int, numFrames int, inputBytes int, outputComprBytes int, reason error)
RecordChannelFullySubmitted(id derive.ChannelID)
RecordChannelTimedOut(id derive.ChannelID)
RecordBatchTxSubmitted()
RecordBatchTxSuccess()
RecordBatchTxFailed()
Document() []opmetrics.DocumentedMetric
}
type Metrics struct {
ns string
registry *prometheus.Registry
factory opmetrics.Factory
opmetrics.RefMetrics
Info prometheus.GaugeVec
Up prometheus.Gauge
// label by opened, closed, fully_submitted, timed_out
ChannelEvs opmetrics.EventVec
PendingBlocksCount prometheus.GaugeVec
BlocksAddedCount prometheus.Gauge
ChannelInputBytes prometheus.GaugeVec
ChannelReadyBytes prometheus.Gauge
ChannelOutputBytes prometheus.Gauge
ChannelClosedReason prometheus.Gauge
ChannelNumFrames prometheus.Gauge
ChannelComprRatio prometheus.Histogram
BatcherTxEvs opmetrics.EventVec
}
var _ Metricer = (*Metrics)(nil)
func NewMetrics(procName string) *Metrics {
if procName == "" {
procName = "default"
}
ns := Namespace + "_" + procName
registry := opmetrics.NewRegistry()
factory := opmetrics.With(registry)
return &Metrics{
ns: ns,
registry: registry,
factory: factory,
RefMetrics: opmetrics.MakeRefMetrics(ns, factory),
Info: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "info",
Help: "Pseudo-metric tracking version and config info",
}, []string{
"version",
}),
Up: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "up",
Help: "1 if the op-batcher has finished starting up",
}),
ChannelEvs: opmetrics.NewEventVec(factory, ns, "channel", "Channel", []string{"stage"}),
PendingBlocksCount: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "pending_blocks_count",
Help: "Number of pending blocks, not added to a channel yet.",
}, []string{"stage"}),
BlocksAddedCount: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "blocks_added_count",
Help: "Total number of blocks added to current channel.",
}),
ChannelInputBytes: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "input_bytes",
Help: "Number of input bytes to a channel.",
}, []string{"stage"}),
ChannelReadyBytes: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "ready_bytes",
Help: "Number of bytes ready in the compression buffer.",
}),
ChannelOutputBytes: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "output_bytes",
Help: "Number of compressed output bytes from a channel.",
}),
ChannelClosedReason: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "channel_closed_reason",
Help: "Pseudo-metric to record the reason a channel got closed.",
}),
ChannelNumFrames: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "channel_num_frames",
Help: "Total number of frames of closed channel.",
}),
ChannelComprRatio: factory.NewHistogram(prometheus.HistogramOpts{
Namespace: ns,
Name: "channel_compr_ratio",
Help: "Compression ratios of closed channel.",
Buckets: append([]float64{0.1, 0.2}, prometheus.LinearBuckets(0.3, 0.05, 14)...),
}),
BatcherTxEvs: opmetrics.NewEventVec(factory, ns, "batcher_tx", "BatcherTx", []string{"stage"}),
}
}
func (m *Metrics) Serve(ctx context.Context, host string, port int) error {
return opmetrics.ListenAndServe(ctx, m.registry, host, port)
}
func (m *Metrics) Document() []opmetrics.DocumentedMetric {
return m.factory.Document()
}
func (m *Metrics) StartBalanceMetrics(ctx context.Context,
l log.Logger, client *ethclient.Client, account common.Address) {
opmetrics.LaunchBalanceMetrics(ctx, l, m.registry, m.ns, client, account)
}
// RecordInfo sets a pseudo-metric that contains versioning and
// config info for the op-batcher.
func (m *Metrics) RecordInfo(version string) {
m.Info.WithLabelValues(version).Set(1)
}
// RecordUp sets the up metric to 1.
func (m *Metrics) RecordUp() {
prometheus.MustRegister()
m.Up.Set(1)
}
const (
StageLoaded = "loaded"
StageOpened = "opened"
StageAdded = "added"
StageClosed = "closed"
StageFullySubmitted = "fully_submitted"
StageTimedOut = "timed_out"
TxStageSubmitted = "submitted"
TxStageSuccess = "success"
TxStageFailed = "failed"
)
func (m *Metrics) RecordLatestL1Block(l1ref eth.L1BlockRef) {
m.RecordL1Ref("latest", l1ref)
}
// RecordL2BlocksLoaded should be called when a new L2 block was loaded into the
// channel manager (but not processed yet).
func (m *Metrics) RecordL2BlocksLoaded(l2ref eth.L2BlockRef) {
m.RecordL2Ref(StageLoaded, l2ref)
}
func (m *Metrics) RecordChannelOpened(id derive.ChannelID, numPendingBlocks int) {
m.ChannelEvs.Record(StageOpened)
m.BlocksAddedCount.Set(0) // reset
m.PendingBlocksCount.WithLabelValues(StageOpened).Set(float64(numPendingBlocks))
}
// RecordL2BlocksAdded should be called when L2 blocks were added to the channel
// builder, with the latest added block.
func (m *Metrics) RecordL2BlocksAdded(l2ref eth.L2BlockRef, numBlocksAdded, numPendingBlocks, inputBytes, outputComprBytes int) {
m.RecordL2Ref(StageAdded, l2ref)
m.BlocksAddedCount.Add(float64(numBlocksAdded))
m.PendingBlocksCount.WithLabelValues(StageAdded).Set(float64(numPendingBlocks))
m.ChannelInputBytes.WithLabelValues(StageAdded).Set(float64(inputBytes))
m.ChannelReadyBytes.Set(float64(outputComprBytes))
}
func (m *Metrics) RecordChannelClosed(id derive.ChannelID, numPendingBlocks int, numFrames int, inputBytes int, outputComprBytes int, reason error) {
m.ChannelEvs.Record(StageClosed)
m.PendingBlocksCount.WithLabelValues(StageClosed).Set(float64(numPendingBlocks))
m.ChannelNumFrames.Set(float64(numFrames))
m.ChannelInputBytes.WithLabelValues(StageClosed).Set(float64(inputBytes))
m.ChannelOutputBytes.Set(float64(outputComprBytes))
var comprRatio float64
if inputBytes > 0 {
comprRatio = float64(outputComprBytes) / float64(inputBytes)
}
m.ChannelComprRatio.Observe(comprRatio)
m.ChannelClosedReason.Set(float64(ClosedReasonToNum(reason)))
}
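// ClosedReasonToNum maps a channel-closing reason to a numeric value for the
// ChannelClosedReason gauge. It currently always returns 0; a real mapping is
// tracked in CLI-3640.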
func ClosedReasonToNum(reason error) int {
// CLI-3640
return 0
}
func (m *Metrics) RecordChannelFullySubmitted(id derive.ChannelID) {
m.ChannelEvs.Record(StageFullySubmitted)
}
func (m *Metrics) RecordChannelTimedOut(id derive.ChannelID) {
m.ChannelEvs.Record(StageTimedOut)
}
func (m *Metrics) RecordBatchTxSubmitted() {
m.BatcherTxEvs.Record(TxStageSubmitted)
}
func (m *Metrics) RecordBatchTxSuccess() {
m.BatcherTxEvs.Record(TxStageSuccess)
}
func (m *Metrics) RecordBatchTxFailed() {
m.BatcherTxEvs.Record(TxStageFailed)
}
package metrics
import (
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
)
type noopMetrics struct{ opmetrics.NoopRefMetrics }
var NoopMetrics Metricer = new(noopMetrics)
func (*noopMetrics) Document() []opmetrics.DocumentedMetric { return nil }
func (*noopMetrics) RecordInfo(version string) {}
func (*noopMetrics) RecordUp() {}
func (*noopMetrics) RecordLatestL1Block(l1ref eth.L1BlockRef) {}
func (*noopMetrics) RecordL2BlocksLoaded(eth.L2BlockRef) {}
func (*noopMetrics) RecordChannelOpened(derive.ChannelID, int) {}
func (*noopMetrics) RecordL2BlocksAdded(eth.L2BlockRef, int, int, int, int) {}
func (*noopMetrics) RecordChannelClosed(derive.ChannelID, int, int, int, int, error) {}
func (*noopMetrics) RecordChannelFullySubmitted(derive.ChannelID) {}
func (*noopMetrics) RecordChannelTimedOut(derive.ChannelID) {}
func (*noopMetrics) RecordBatchTxSubmitted() {}
func (*noopMetrics) RecordBatchTxSuccess() {}
func (*noopMetrics) RecordBatchTxFailed() {}
package main
import (
"context"
"fmt"
"math/big"
"os"
"strings"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum-optimism/optimism/op-chain-ops/db"
"github.com/mattn/go-isatty"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum-optimism/optimism/op-bindings/hardhat"
"github.com/ethereum-optimism/optimism/op-chain-ops/genesis"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/urfave/cli"
)
func main() {
log.Root().SetHandler(log.StreamHandler(os.Stderr, log.TerminalFormat(isatty.IsTerminal(os.Stderr.Fd()))))
app := &cli.App{
Name: "check-migration",
Usage: "Run sanity checks on a migrated database",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "l1-rpc-url",
Value: "http://127.0.0.1:8545",
Usage: "RPC URL for an L1 Node",
Required: true,
},
&cli.StringFlag{
Name: "ovm-addresses",
Usage: "Path to ovm-addresses.json",
Required: true,
},
&cli.StringFlag{
Name: "ovm-allowances",
Usage: "Path to ovm-allowances.json",
Required: true,
},
&cli.StringFlag{
Name: "ovm-messages",
Usage: "Path to ovm-messages.json",
Required: true,
},
&cli.StringFlag{
Name: "witness-file",
Usage: "Path to witness file",
Required: true,
},
&cli.StringFlag{
Name: "db-path",
Usage: "Path to database",
Required: true,
},
cli.StringFlag{
Name: "deploy-config",
Usage: "Path to hardhat deploy config file",
Required: true,
},
cli.StringFlag{
Name: "network",
Usage: "Name of hardhat deploy network",
Required: true,
},
cli.StringFlag{
Name: "hardhat-deployments",
Usage: "Comma separated list of hardhat deployment directories",
Required: true,
},
cli.IntFlag{
Name: "db-cache",
Usage: "LevelDB cache size in mb",
Value: 1024,
},
cli.IntFlag{
Name: "db-handles",
Usage: "LevelDB number of handles",
Value: 60,
},
},
Action: func(ctx *cli.Context) error {
deployConfig := ctx.String("deploy-config")
config, err := genesis.NewDeployConfig(deployConfig)
if err != nil {
return err
}
ovmAddresses, err := crossdomain.NewAddresses(ctx.String("ovm-addresses"))
if err != nil {
return err
}
ovmAllowances, err := crossdomain.NewAllowances(ctx.String("ovm-allowances"))
if err != nil {
return err
}
ovmMessages, err := crossdomain.NewSentMessageFromJSON(ctx.String("ovm-messages"))
if err != nil {
return err
}
evmMessages, evmAddresses, err := crossdomain.ReadWitnessData(ctx.String("witness-file"))
if err != nil {
return err
}
log.Info(
"Loaded witness data",
"ovmAddresses", len(ovmAddresses),
"evmAddresses", len(evmAddresses),
"ovmAllowances", len(ovmAllowances),
"ovmMessages", len(ovmMessages),
"evmMessages", len(evmMessages),
)
migrationData := crossdomain.MigrationData{
OvmAddresses: ovmAddresses,
EvmAddresses: evmAddresses,
OvmAllowances: ovmAllowances,
OvmMessages: ovmMessages,
EvmMessages: evmMessages,
}
network := ctx.String("network")
deployments := strings.Split(ctx.String("hardhat-deployments"), ",")
hh, err := hardhat.New(network, []string{}, deployments)
if err != nil {
return err
}
l1RpcURL := ctx.String("l1-rpc-url")
l1Client, err := ethclient.Dial(l1RpcURL)
if err != nil {
return err
}
var block *types.Block
tag := config.L1StartingBlockTag
if tag.BlockNumber != nil {
block, err = l1Client.BlockByNumber(context.Background(), big.NewInt(tag.BlockNumber.Int64()))
} else if tag.BlockHash != nil {
block, err = l1Client.BlockByHash(context.Background(), *tag.BlockHash)
} else {
return fmt.Errorf("invalid l1StartingBlockTag in deploy config: %v", tag)
}
if err != nil {
return err
}
dbCache := ctx.Int("db-cache")
dbHandles := ctx.Int("db-handles")
// Read the required deployment addresses from disk if required
if err := config.GetDeployedAddresses(hh); err != nil {
return err
}
if err := config.Check(); err != nil {
return err
}
postLDB, err := db.Open(ctx.String("db-path"), dbCache, dbHandles)
if err != nil {
return err
}
if err := genesis.PostCheckMigratedDB(
postLDB,
migrationData,
&config.L1CrossDomainMessengerProxy,
config.L1ChainID,
config.FinalSystemOwner,
config.ProxyAdminOwner,
&derive.L1BlockInfo{
Number: block.NumberU64(),
Time: block.Time(),
BaseFee: block.BaseFee(),
BlockHash: block.Hash(),
BatcherAddr: config.BatchSenderAddress,
L1FeeOverhead: eth.Bytes32(common.BigToHash(new(big.Int).SetUint64(config.GasPriceOracleOverhead))),
L1FeeScalar: eth.Bytes32(common.BigToHash(new(big.Int).SetUint64(config.GasPriceOracleScalar))),
},
); err != nil {
return err
}
if err := postLDB.Close(); err != nil {
return err
}
return nil
},
}
if err := app.Run(os.Args); err != nil {
log.Crit("error in migration", "err", err)
}
}
......@@ -106,6 +106,11 @@ func main() {
Value: "rollup.json",
Required: true,
},
cli.BoolFlag{
Name: "post-check-only",
Usage: "Only perform sanity checks",
Required: false,
},
},
Action: func(ctx *cli.Context) error {
deployConfig := ctx.String("deploy-config")
......
......@@ -20,3 +20,13 @@ func getOVMETHTotalSupplySlot() common.Hash {
key := common.BytesToHash(common.LeftPadBytes(position.Bytes(), 32))
return key
}
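// GetOVMETHTotalSupplySlot exposes the OVM ETH total supply storage slot for
// use outside this package.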
func GetOVMETHTotalSupplySlot() common.Hash {
return getOVMETHTotalSupplySlot()
}
// GetOVMETHBalance gets a user's OVM ETH balance from state by querying the
// appropriate storage slot directly.
func GetOVMETHBalance(db *state.StateDB, addr common.Address) *big.Int {
return db.GetState(OVMETHAddress, CalcOVMETHStorageKey(addr)).Big()
}
package ether
import (
"fmt"
"math/big"
"math/rand"
"testing"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb"
......@@ -12,7 +16,7 @@ import (
"github.com/stretchr/testify/require"
)
func TestMigrateLegacyETH(t *testing.T) {
func TestMigrateBalances(t *testing.T) {
tests := []struct {
name string
totalSupply *big.Int
......@@ -46,11 +50,9 @@ func TestMigrateLegacyETH(t *testing.T) {
},
check: func(t *testing.T, db *state.StateDB, err error) {
require.NoError(t, err)
require.Equal(t, db.GetBalance(common.HexToAddress("0x123")), big.NewInt(1))
require.Equal(t, db.GetBalance(common.HexToAddress("0x456")), big.NewInt(2))
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x123"))), common.Hash{})
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x456"))), common.Hash{})
require.Equal(t, db.GetState(OVMETHAddress, getOVMETHTotalSupplySlot()), common.Hash{})
require.EqualValues(t, common.Big1, db.GetBalance(common.HexToAddress("0x123")))
require.EqualValues(t, common.Big2, db.GetBalance(common.HexToAddress("0x456")))
require.EqualValues(t, common.Hash{}, db.GetState(predeploys.LegacyERC20ETHAddr, GetOVMETHTotalSupplySlot()))
},
},
{
......@@ -66,9 +68,9 @@ func TestMigrateLegacyETH(t *testing.T) {
},
check: func(t *testing.T, db *state.StateDB, err error) {
require.NoError(t, err)
require.Equal(t, db.GetBalance(common.HexToAddress("0x123")), big.NewInt(1))
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x123"))), common.Hash{})
require.Equal(t, db.GetState(OVMETHAddress, getOVMETHTotalSupplySlot()), common.Hash{})
require.EqualValues(t, common.Big1, db.GetBalance(common.HexToAddress("0x123")))
require.EqualValues(t, common.Big0, db.GetBalance(common.HexToAddress("0x456")))
require.EqualValues(t, common.Hash{}, db.GetState(predeploys.LegacyERC20ETHAddr, GetOVMETHTotalSupplySlot()))
},
},
{
......@@ -97,9 +99,9 @@ func TestMigrateLegacyETH(t *testing.T) {
},
check: func(t *testing.T, db *state.StateDB, err error) {
require.NoError(t, err)
require.Equal(t, db.GetBalance(common.HexToAddress("0x123")), big.NewInt(1))
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x123"))), common.Hash{})
require.Equal(t, db.GetState(OVMETHAddress, getOVMETHTotalSupplySlot()), common.Hash{})
require.EqualValues(t, common.Big1, db.GetBalance(common.HexToAddress("0x123")))
require.EqualValues(t, common.Big0, db.GetBalance(common.HexToAddress("0x456")))
require.EqualValues(t, common.Hash{}, db.GetState(predeploys.LegacyERC20ETHAddr, GetOVMETHTotalSupplySlot()))
},
},
{
......@@ -174,25 +176,23 @@ func TestMigrateLegacyETH(t *testing.T) {
},
check: func(t *testing.T, db *state.StateDB, err error) {
require.NoError(t, err)
require.Equal(t, db.GetBalance(common.HexToAddress("0x123")), big.NewInt(1))
require.Equal(t, db.GetBalance(common.HexToAddress("0x456")), big.NewInt(2))
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x123"))), common.Hash{})
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x456"))), common.Hash{})
require.Equal(t, db.GetState(OVMETHAddress, getOVMETHTotalSupplySlot()), common.Hash{})
require.EqualValues(t, common.Big1, db.GetBalance(common.HexToAddress("0x123")))
require.EqualValues(t, common.Big2, db.GetBalance(common.HexToAddress("0x456")))
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
db := makeLegacyETH(t, tt.totalSupply, tt.stateBalances, tt.stateAllowances)
err := doMigration(db, tt.inputAddresses, tt.inputAllowances, tt.expDiff, false, true)
db, factory := makeLegacyETH(t, tt.totalSupply, tt.stateBalances, tt.stateAllowances)
err := doMigration(db, factory, tt.inputAddresses, tt.inputAllowances, tt.expDiff, false)
tt.check(t, db, err)
})
}
}
func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Address]*big.Int, allowances map[common.Address]common.Address) *state.StateDB {
db, err := state.New(common.Hash{}, state.NewDatabaseWithConfig(rawdb.NewMemoryDatabase(), &trie.Config{
func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Address]*big.Int, allowances map[common.Address]common.Address) (*state.StateDB, DBFactory) {
memDB := rawdb.NewMemoryDatabase()
db, err := state.New(common.Hash{}, state.NewDatabaseWithConfig(memDB, &trie.Config{
Preimages: true,
Cache: 1024,
}), nil)
......@@ -201,7 +201,7 @@ func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Addre
db.CreateAccount(OVMETHAddress)
db.SetState(OVMETHAddress, getOVMETHTotalSupplySlot(), common.BigToHash(totalSupply))
for slot := range OVMETHIgnoredSlots {
for slot := range ignoredSlots {
if slot == getOVMETHTotalSupplySlot() {
continue
}
......@@ -220,5 +220,137 @@ func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Addre
err = db.Database().TrieDB().Commit(root, true)
require.NoError(t, err)
return db
return db, func() (*state.StateDB, error) {
return state.New(root, state.NewDatabaseWithConfig(memDB, &trie.Config{
Preimages: true,
Cache: 1024,
}), nil)
}
}
// TestMigrateBalancesRandom tests that the pre-check balances function works
// with random addresses. This test makes sure that the partition logic doesn't
// miss anything.
func TestMigrateBalancesRandom(t *testing.T) {
for i := 0; i < 100; i++ {
addresses := make([]common.Address, 0)
stateBalances := make(map[common.Address]*big.Int)
allowances := make([]*crossdomain.Allowance, 0)
stateAllowances := make(map[common.Address]common.Address)
totalSupply := big.NewInt(0)
for j := 0; j < rand.Intn(10000); j++ {
addr := randAddr(t)
addresses = append(addresses, addr)
stateBalances[addr] = big.NewInt(int64(rand.Intn(1_000_000)))
totalSupply = new(big.Int).Add(totalSupply, stateBalances[addr])
}
for j := 0; j < rand.Intn(1000); j++ {
addr := randAddr(t)
to := randAddr(t)
allowances = append(allowances, &crossdomain.Allowance{
From: addr,
To: to,
})
stateAllowances[addr] = to
}
db, factory := makeLegacyETH(t, totalSupply, stateBalances, stateAllowances)
err := doMigration(db, factory, addresses, allowances, big.NewInt(0), false)
require.NoError(t, err)
for addr, expBal := range stateBalances {
actBal := db.GetBalance(addr)
require.EqualValues(t, expBal, actBal)
}
}
}
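// TestPartitionKeyspace checks that PartitionKeyspace splits the 2^256-key
// storage keyspace into count contiguous, non-overlapping ranges, with
// partition i spanning roughly [i*2^256/count, (i+1)*2^256/count).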
func TestPartitionKeyspace(t *testing.T) {
tests := []struct {
i int
count int
expected [2]common.Hash
}{
{
i: 0,
count: 1,
expected: [2]common.Hash{
common.HexToHash("0x00"),
common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
{
i: 0,
count: 2,
expected: [2]common.Hash{
common.HexToHash("0x00"),
common.HexToHash("0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
{
i: 1,
count: 2,
expected: [2]common.Hash{
common.HexToHash("0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
{
i: 0,
count: 3,
expected: [2]common.Hash{
common.HexToHash("0x00"),
common.HexToHash("0x5555555555555555555555555555555555555555555555555555555555555555"),
},
},
{
i: 1,
count: 3,
expected: [2]common.Hash{
common.HexToHash("0x5555555555555555555555555555555555555555555555555555555555555555"),
common.HexToHash("0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"),
},
},
{
i: 2,
count: 3,
expected: [2]common.Hash{
common.HexToHash("0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"),
common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
}
for _, tt := range tests {
t.Run(fmt.Sprintf("i %d, count %d", tt.i, tt.count), func(t *testing.T) {
start, end := PartitionKeyspace(tt.i, tt.count)
require.Equal(t, tt.expected[0], start)
require.Equal(t, tt.expected[1], end)
})
}
t.Run("panics on invalid i or count", func(t *testing.T) {
require.Panics(t, func() {
PartitionKeyspace(1, 1)
})
require.Panics(t, func() {
PartitionKeyspace(-1, 1)
})
require.Panics(t, func() {
PartitionKeyspace(0, -1)
})
require.Panics(t, func() {
PartitionKeyspace(-1, -1)
})
})
}
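The expected bounds above all follow one rule: boundary i is i * (2^256 - 1) / count. A sketch that reproduces the test vectors (the real PartitionKeyspace implementation may differ in detail):
// Sketch: slice the 256-bit keyspace into count equal partitions and return
// the bounds of partition i, e.g. (1, 3) -> [0x5555..., 0xaaaa...].
func partitionKeyspaceSketch(i, count int) (common.Hash, common.Hash) {
	if i < 0 || count < 0 || i >= count {
		panic("invalid partition index or count")
	}
	maxU256 := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 256), big.NewInt(1))
	step := new(big.Int).Div(maxU256, big.NewInt(int64(count)))
	start := new(big.Int).Mul(step, big.NewInt(int64(i)))
	end := new(big.Int).Mul(step, big.NewInt(int64(i+1)))
	return common.BigToHash(start), common.BigToHash(end)
}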
func randAddr(t *testing.T) common.Address {
var addr common.Address
_, err := rand.Read(addr[:])
require.NoError(t, err)
return addr
}
......@@ -7,6 +7,10 @@ import (
"fmt"
"math/big"
"github.com/ethereum-optimism/optimism/op-chain-ops/util"
"github.com/ethereum-optimism/optimism/op-chain-ops/ether"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
......@@ -31,6 +35,7 @@ const MaxSlotChecks = 1000
type StorageCheckMap = map[common.Hash]common.Hash
var (
L2XDMOwnerSlot = common.Hash{31: 0x33}
ProxyAdminOwnerSlot = common.Hash{}
LegacyETHCheckSlots = map[common.Hash]common.Hash{
......@@ -250,7 +255,7 @@ func PostCheckPredeploys(prevDB, currDB *state.StateDB) error {
// Balances and nonces should match legacy
oldNonce := prevDB.GetNonce(addr)
oldBalance := prevDB.GetBalance(addr)
oldBalance := ether.GetOVMETHBalance(prevDB, addr)
newNonce := currDB.GetNonce(addr)
newBalance := currDB.GetBalance(addr)
if oldNonce != newNonce {
......@@ -505,7 +510,9 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
// Now, iterate over each legacy withdrawal and check if there is a corresponding
// migrated withdrawal.
var innerErr error
progress := util.ProgressLogger(1000, "checking withdrawals")
err = db.ForEachStorage(predeploys.LegacyMessagePasserAddr, func(key, value common.Hash) bool {
progress()
// The legacy message passer becomes a proxy during the migration,
// so we need to ignore the implementation/admin slots.
if key == ImplementationSlot || key == AdminSlot {
......@@ -542,7 +549,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
// If the sender is _not_ the L2XDM, the value should not be migrated.
wd := wdsByOldSlot[key]
if wd.XDomainSender == predeploys.L2CrossDomainMessengerAddr {
if wd.MessageSender == predeploys.L2CrossDomainMessengerAddr {
// Make sure the value is abiTrue if this withdrawal should be migrated.
if migratedValue != abiTrue {
innerErr = fmt.Errorf("expected migrated value to be true, but got %s", migratedValue)
......@@ -551,7 +558,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
} else {
// Otherwise, ensure that withdrawals from senders other than the L2XDM are _not_ migrated.
if migratedValue != abiFalse {
innerErr = fmt.Errorf("a migration from a sender other than the L2XDM was migrated")
innerErr = fmt.Errorf("a migration from a sender other than the L2XDM was migrated. sender: %s, migrated value: %s", wd.MessageSender, migratedValue)
return false
}
}
......
......@@ -76,6 +76,8 @@ type DeployConfig struct {
ProxyAdminOwner common.Address `json:"proxyAdminOwner"`
// Owner of the system on L1
FinalSystemOwner common.Address `json:"finalSystemOwner"`
// GUARDIAN account in the OptimismPortal
PortalGuardian common.Address `json:"portalGuardian"`
// L1 recipient of fees accumulated in the BaseFeeVault
BaseFeeVaultRecipient common.Address `json:"baseFeeVaultRecipient"`
// L1 recipient of fees accumulated in the L1FeeVault
......@@ -128,6 +130,9 @@ func (d *DeployConfig) Check() error {
if d.FinalizationPeriodSeconds == 0 {
return fmt.Errorf("%w: FinalizationPeriodSeconds cannot be 0", ErrInvalidDeployConfig)
}
if d.PortalGuardian == (common.Address{}) {
return fmt.Errorf("%w: PortalGuardian cannot be address(0)", ErrInvalidDeployConfig)
}
if d.MaxSequencerDrift == 0 {
return fmt.Errorf("%w: MaxSequencerDrift cannot be 0", ErrInvalidDeployConfig)
}
......
......@@ -82,16 +82,25 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
)
}
// Set up the backing store.
underlyingDB := state.NewDatabaseWithConfig(ldb, &trie.Config{
Preimages: true,
Cache: 1024,
})
// Open up the state database.
db, err := state.New(header.Root, underlyingDB, nil)
dbFactory := func() (*state.StateDB, error) {
// Set up the backing store.
underlyingDB := state.NewDatabaseWithConfig(ldb, &trie.Config{
Preimages: true,
Cache: 1024,
})
// Open up the state database.
db, err := state.New(header.Root, underlyingDB, nil)
if err != nil {
return nil, fmt.Errorf("cannot open StateDB: %w", err)
}
return db, nil
}
db, err := dbFactory()
if err != nil {
return nil, fmt.Errorf("cannot open StateDB: %w", err)
return nil, fmt.Errorf("cannot create StateDB: %w", err)
}
// Before we do anything else, we need to ensure that all of the input configuration is correct
......@@ -182,17 +191,13 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
return nil, fmt.Errorf("cannot migrate withdrawals: %w", err)
}
// We also need to verify that we have all of the storage slots for the LegacyERC20ETH contract
// that we expect to have. An error will be thrown if there are any missing storage slots.
// Unlike with withdrawals, we do not need to filter out extra addresses because their balances
// would necessarily be zero and therefore not affect the migration.
//
// Once verified, we migrate the balances held inside the LegacyERC20ETH contract into the state trie.
// We also delete the balances from the LegacyERC20ETH contract.
// Finally we migrate the balances held inside the LegacyERC20ETH contract into the state trie.
// We also delete the balances from the LegacyERC20ETH contract. Unlike the steps above, this step
// combines the check and mutation steps into one in order to reduce migration time.
log.Info("Starting to migrate ERC20 ETH")
err = ether.MigrateLegacyETH(db, migrationData.Addresses(), migrationData.OvmAllowances, int(config.L1ChainID), noCheck, commit)
err = ether.MigrateBalances(db, dbFactory, migrationData.Addresses(), migrationData.OvmAllowances, int(config.L1ChainID), noCheck)
if err != nil {
return nil, fmt.Errorf("cannot migrate legacy eth: %w", err)
return nil, fmt.Errorf("failed to migrate OVM_ETH: %w", err)
}
// We're done messing around with the database, so we can now commit the changes to the DB.
......
......@@ -295,7 +295,7 @@ func deployL1Contracts(config *DeployConfig, backend *backends.SimulatedBackend)
Name: "OptimismPortal",
Args: []interface{}{
predeploys.DevL2OutputOracleAddr,
config.FinalSystemOwner,
config.PortalGuardian,
true, // _paused
},
},
......
......@@ -19,6 +19,7 @@
"l1GenesisBlockGasLimit": "0xe4e1c0",
"l1GenesisBlockDifficulty": "0x1",
"finalSystemOwner": "0x0000000000000000000000000000000000000111",
"portalGuardian": "0x0000000000000000000000000000000000000112",
"finalizationPeriodSeconds": 2,
"l1GenesisBlockMixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"l1GenesisBlockCoinbase": "0x0000000000000000000000000000000000000000",
......
package actions
import (
"testing"
"github.com/ethereum/go-ethereum/log"
"github.com/stretchr/testify/require"
"github.com/ethereum-optimism/optimism/op-e2e/e2eutils"
"github.com/ethereum-optimism/optimism/op-node/testlog"
)
func TestShapellaL1Fork(gt *testing.T) {
t := NewDefaultTesting(gt)
dp := e2eutils.MakeDeployParams(t, defaultRollupTestParams)
sd := e2eutils.Setup(t, dp, defaultAlloc)
activation := sd.L1Cfg.Timestamp + 24
sd.L1Cfg.Config.ShanghaiTime = &activation
log := testlog.Logger(t, log.LvlDebug)
_, _, miner, sequencer, _, verifier, _, batcher := setupReorgTestActors(t, dp, sd, log)
require.False(t, sd.L1Cfg.Config.IsShanghai(miner.l1Chain.CurrentBlock().Time()), "not active yet")
// start op-nodes
sequencer.ActL2PipelineFull(t)
verifier.ActL2PipelineFull(t)
// build empty L1 blocks, crossing the fork boundary
miner.ActEmptyBlock(t)
miner.ActEmptyBlock(t)
miner.ActEmptyBlock(t)
// verify Shanghai is active
l1Head := miner.l1Chain.CurrentBlock()
require.True(t, sd.L1Cfg.Config.IsShanghai(l1Head.Time()))
// build L2 chain up to and including L2 blocks referencing shanghai L1 blocks
sequencer.ActL1HeadSignal(t)
sequencer.ActBuildToL1Head(t)
miner.ActL1StartBlock(12)(t)
batcher.ActSubmitAll(t)
miner.ActL1IncludeTx(batcher.batcherAddr)(t)
miner.ActL1EndBlock(t)
// sync verifier
verifier.ActL1HeadSignal(t)
verifier.ActL2PipelineFull(t)
// verify verifier accepted shanghai L1 inputs
require.Equal(t, l1Head.Hash(), verifier.SyncStatus().SafeL2.L1Origin.Hash, "verifier synced L1 chain that includes shanghai headers")
require.Equal(t, sequencer.SyncStatus().UnsafeL2, verifier.SyncStatus().UnsafeL2, "verifier and sequencer agree")
}
......@@ -10,6 +10,7 @@ import (
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/trie"
"github.com/stretchr/testify/require"
)
// L1Miner wraps a L1Replica with instrumented block building ability.
......@@ -72,6 +73,9 @@ func (s *L1Miner) ActL1StartBlock(timeDelta uint64) Action {
header.GasLimit = parent.GasLimit * s.l1Cfg.Config.ElasticityMultiplier()
}
}
if s.l1Cfg.Config.IsShanghai(header.Time) {
header.WithdrawalsHash = &types.EmptyWithdrawalsHash
}
s.l1Building = true
s.l1BuildingHeader = header
......@@ -97,26 +101,33 @@ func (s *L1Miner) ActL1IncludeTx(from common.Address) Action {
t.Fatalf("no pending txs from %s, and have %d unprocessable queued txs from this account", from, len(q))
}
tx := txs[i]
if tx.Gas() > s.l1BuildingHeader.GasLimit {
t.Fatalf("tx consumes %d gas, more than available in L1 block %d", tx.Gas(), s.l1BuildingHeader.GasLimit)
}
if tx.Gas() > uint64(*s.l1GasPool) {
t.InvalidAction("action takes too much gas: %d, only have %d", tx.Gas(), uint64(*s.l1GasPool))
return
}
s.IncludeTx(t, tx)
s.pendingIndices[from] = i + 1 // won't retry the tx
s.l1BuildingState.SetTxContext(tx.Hash(), len(s.l1Transactions))
receipt, err := core.ApplyTransaction(s.l1Cfg.Config, s.l1Chain, &s.l1BuildingHeader.Coinbase,
s.l1GasPool, s.l1BuildingState, s.l1BuildingHeader, tx, &s.l1BuildingHeader.GasUsed, *s.l1Chain.GetVMConfig())
if err != nil {
s.l1TxFailed = append(s.l1TxFailed, tx)
t.Fatalf("failed to apply transaction to L1 block (tx %d): %w", len(s.l1Transactions), err)
}
s.l1Receipts = append(s.l1Receipts, receipt)
s.l1Transactions = append(s.l1Transactions, tx)
}
}
func (s *L1Miner) IncludeTx(t Testing, tx *types.Transaction) {
from, err := s.l1Signer.Sender(tx)
require.NoError(t, err)
s.log.Info("including tx", "nonce", tx.Nonce(), "from", from)
if tx.Gas() > s.l1BuildingHeader.GasLimit {
t.Fatalf("tx consumes %d gas, more than available in L1 block %d", tx.Gas(), s.l1BuildingHeader.GasLimit)
}
if tx.Gas() > uint64(*s.l1GasPool) {
t.InvalidAction("action takes too much gas: %d, only have %d", tx.Gas(), uint64(*s.l1GasPool))
return
}
s.l1BuildingState.SetTxContext(tx.Hash(), len(s.l1Transactions))
receipt, err := core.ApplyTransaction(s.l1Cfg.Config, s.l1Chain, &s.l1BuildingHeader.Coinbase,
s.l1GasPool, s.l1BuildingState, s.l1BuildingHeader, tx, &s.l1BuildingHeader.GasUsed, *s.l1Chain.GetVMConfig())
if err != nil {
s.l1TxFailed = append(s.l1TxFailed, tx)
t.Fatalf("failed to apply transaction to L1 block (tx %d): %v", len(s.l1Transactions), err)
}
s.l1Receipts = append(s.l1Receipts, receipt)
s.l1Transactions = append(s.l1Transactions, tx)
}
func (s *L1Miner) ActL1SetFeeRecipient(coinbase common.Address) {
s.prefCoinbase = coinbase
if s.l1Building {
......@@ -135,6 +146,9 @@ func (s *L1Miner) ActL1EndBlock(t Testing) {
s.l1BuildingHeader.GasUsed = s.l1BuildingHeader.GasLimit - uint64(*s.l1GasPool)
s.l1BuildingHeader.Root = s.l1BuildingState.IntermediateRoot(s.l1Cfg.Config.IsEIP158(s.l1BuildingHeader.Number))
block := types.NewBlock(s.l1BuildingHeader, s.l1Transactions, nil, s.l1Receipts, trie.NewStackTrie(nil))
if s.l1Cfg.Config.IsShanghai(s.l1BuildingHeader.Time) {
block = block.WithWithdrawals(make([]*types.Withdrawal, 0))
}
// Write state changes to db
root, err := s.l1BuildingState.Commit(s.l1Cfg.Config.IsEIP158(s.l1BuildingHeader.Number))
......
......@@ -91,6 +91,10 @@ func (s *L2Batcher) SubmittingData() bool {
// ActL2BatchBuffer adds the next L2 block to the batch buffer.
// If the buffer is being submitted, the buffer is wiped.
func (s *L2Batcher) ActL2BatchBuffer(t Testing) {
require.NoError(t, s.Buffer(t), "failed to add block to channel")
}
func (s *L2Batcher) Buffer(t Testing) error {
if s.l2Submitting { // break ongoing submitting work if necessary
s.l2ChannelOut = nil
s.l2Submitting = false
......@@ -120,7 +124,7 @@ func (s *L2Batcher) ActL2BatchBuffer(t Testing) {
s.l2ChannelOut = nil
} else {
s.log.Info("nothing left to submit")
return
return nil
}
}
// Create channel if we don't have one yet
......@@ -143,9 +147,10 @@ func (s *L2Batcher) ActL2BatchBuffer(t Testing) {
s.l2ChannelOut = nil
}
if _, err := s.l2ChannelOut.AddBlock(block); err != nil { // should always succeed
t.Fatalf("failed to add block to channel: %v", err)
return err
}
s.l2BufferedBlock = eth.ToBlockID(block)
return nil
}
func (s *L2Batcher) ActL2ChannelClose(t Testing) {
......@@ -158,7 +163,7 @@ func (s *L2Batcher) ActL2ChannelClose(t Testing) {
}
// ActL2BatchSubmit constructs a batch tx from previous buffered L2 blocks, and submits it to L1
func (s *L2Batcher) ActL2BatchSubmit(t Testing) {
func (s *L2Batcher) ActL2BatchSubmit(t Testing, txOpts ...func(tx *types.DynamicFeeTx)) {
// Don't run this action if there's no data to submit
if s.l2ChannelOut == nil {
t.InvalidAction("need to buffer data first, cannot batch submit with empty buffer")
......@@ -192,6 +197,9 @@ func (s *L2Batcher) ActL2BatchSubmit(t Testing) {
GasFeeCap: gasFeeCap,
Data: data.Bytes(),
}
for _, opt := range txOpts {
opt(rawTx)
}
gas, err := core.IntrinsicGas(rawTx.Data, nil, false, true, true, false)
require.NoError(t, err, "need to compute intrinsic gas")
rawTx.Gas = gas
......
package actions
import (
"crypto/rand"
"errors"
"math/big"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
......@@ -12,6 +15,7 @@ import (
"github.com/ethereum-optimism/optimism/op-e2e/e2eutils"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum-optimism/optimism/op-node/testlog"
)
......@@ -378,3 +382,131 @@ func TestExtendedTimeWithoutL1Batches(gt *testing.T) {
sequencer.ActL2PipelineFull(t)
require.Equal(t, sequencer.L2Unsafe(), sequencer.L2Safe(), "same for sequencer")
}
// TestBigL2Txs tests a high-throughput case with a constrained batcher:
// - Fill 100 L2 blocks to near max-capacity, with txs of 120 KB each
// - Buffer the L2 blocks into channels together as much as possible, submit data-txs only when necessary
// (just before crossing the max RLP channel size)
// - Limit the data-tx size to 40 KB, to force data to be split across multiple data-txs
// - Defer all data-tx inclusion till the end
// - Fill L1 blocks with data-txs until we have processed them all
// - Run the verifier, and check if it derives the same L2 chain as was created by the sequencer.
//
// The goal of this test is to quickly run through an otherwise very slow process of submitting and including lots of data.
// This does not test the batcher code, but is focused on testing the batcher utils
// and channel-decoding verifier code in the derive package.
func TestBigL2Txs(gt *testing.T) {
t := NewDefaultTesting(gt)
p := &e2eutils.TestParams{
MaxSequencerDrift: 100,
SequencerWindowSize: 1000,
ChannelTimeout: 200, // give enough space to buffer large amounts of data before submitting it
}
dp := e2eutils.MakeDeployParams(t, p)
sd := e2eutils.Setup(t, dp, defaultAlloc)
log := testlog.Logger(t, log.LvlInfo)
miner, engine, sequencer := setupSequencerTest(t, sd, log)
_, verifier := setupVerifier(t, sd, log, miner.L1Client(t, sd.RollupCfg))
batcher := NewL2Batcher(log, sd.RollupCfg, &BatcherCfg{
MinL1TxSize: 0,
MaxL1TxSize: 40_000, // try a small batch size, to force the data to be split between more frames
BatcherKey: dp.Secrets.Batcher,
}, sequencer.RollupClient(), miner.EthClient(), engine.EthClient())
sequencer.ActL2PipelineFull(t)
verifier.ActL2PipelineFull(t)
cl := engine.EthClient()
batcherNonce := uint64(0) // manually track batcher nonce. the "pending nonce" value in tx-pool is incorrect after we fill the pending-block gas limit and keep adding txs to the pool.
batcherTxOpts := func(tx *types.DynamicFeeTx) {
tx.Nonce = batcherNonce
batcherNonce++
tx.GasFeeCap = e2eutils.Ether(1) // be very generous with basefee, since we're spamming L1
}
// build many L2 blocks filled to the brim with large txs of random data
for i := 0; i < 100; i++ {
aliceNonce, err := cl.PendingNonceAt(t.Ctx(), dp.Addresses.Alice)
status := sequencer.SyncStatus()
// build empty L1 blocks as necessary, so the L2 sequencer can continue to include txs while not drifting too far out
if status.UnsafeL2.Time >= status.HeadL1.Time+12 {
miner.ActEmptyBlock(t)
}
sequencer.ActL1HeadSignal(t)
sequencer.ActL2StartBlock(t)
baseFee := engine.l2Chain.CurrentBlock().BaseFee() // this will go quite high, since so many consecutive blocks are filled at capacity.
// fill the block with large L2 txs from alice
for n := aliceNonce; ; n++ {
require.NoError(t, err)
signer := types.LatestSigner(sd.L2Cfg.Config)
data := make([]byte, 120_000) // very large L2 txs, as large as the tx-pool will accept
_, err := rand.Read(data[:]) // fill with random bytes, to make compression ineffective
require.NoError(t, err)
gas, err := core.IntrinsicGas(data, nil, false, true, true, false)
require.NoError(t, err)
if gas > engine.l2GasPool.Gas() {
break
}
tx := types.MustSignNewTx(dp.Secrets.Alice, signer, &types.DynamicFeeTx{
ChainID: sd.L2Cfg.Config.ChainID,
Nonce: n,
GasTipCap: big.NewInt(2 * params.GWei),
GasFeeCap: new(big.Int).Add(new(big.Int).Mul(baseFee, big.NewInt(2)), big.NewInt(2*params.GWei)),
Gas: gas,
To: &dp.Addresses.Bob,
Value: big.NewInt(0),
Data: data,
})
require.NoError(gt, cl.SendTransaction(t.Ctx(), tx))
engine.ActL2IncludeTx(dp.Addresses.Alice)(t)
}
sequencer.ActL2EndBlock(t)
for batcher.l2BufferedBlock.Number < sequencer.SyncStatus().UnsafeL2.Number {
// if we run out of space, close the channel and submit all the txs
if err := batcher.Buffer(t); errors.Is(err, derive.ErrTooManyRLPBytes) {
log.Info("flushing filled channel to batch txs", "id", batcher.l2ChannelOut.ID())
batcher.ActL2ChannelClose(t)
for batcher.l2ChannelOut != nil {
batcher.ActL2BatchSubmit(t, batcherTxOpts)
}
}
}
}
// if anything is left in the channel, submit it
if batcher.l2ChannelOut != nil {
log.Info("flushing trailing channel to batch txs", "id", batcher.l2ChannelOut.ID())
batcher.ActL2ChannelClose(t)
for batcher.l2ChannelOut != nil {
batcher.ActL2BatchSubmit(t, batcherTxOpts)
}
}
// build L1 blocks until we're out of txs
txs, _ := miner.eth.TxPool().ContentFrom(dp.Addresses.Batcher)
for {
if len(txs) == 0 {
break
}
miner.ActL1StartBlock(12)(t)
for range txs {
if len(txs) == 0 {
break
}
tx := txs[0]
if miner.l1GasPool.Gas() < tx.Gas() { // fill the L1 block with batcher txs until we run out of gas
break
}
log.Info("including batcher tx", "nonce", tx)
miner.IncludeTx(t, tx)
txs = txs[1:]
}
miner.ActL1EndBlock(t)
}
verifier.ActL1HeadSignal(t)
verifier.ActL2PipelineFull(t)
require.Equal(t, sequencer.SyncStatus().UnsafeL2, verifier.SyncStatus().SafeL2, "verifier synced the sequencer data even with huge txs in the blocks")
}
package op_e2e
import (
"math"
"math/big"
"testing"
"time"
"github.com/ethereum-optimism/optimism/op-bindings/bindings"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum-optimism/optimism/op-node/testlog"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/stretchr/testify/require"
)
// TestERC20BridgeDeposits tests the L1StandardBridge's ERC20 bridging
// functionality.
func TestERC20BridgeDeposits(t *testing.T) {
parallel(t)
if !verboseGethNodes {
log.Root().SetHandler(log.DiscardHandler())
}
cfg := DefaultSystemConfig(t)
sys, err := cfg.Start()
require.Nil(t, err, "Error starting up system")
defer sys.Close()
log := testlog.Logger(t, log.LvlInfo)
log.Info("genesis", "l2", sys.RollupConfig.Genesis.L2, "l1", sys.RollupConfig.Genesis.L1, "l2_time", sys.RollupConfig.Genesis.L2Time)
l1Client := sys.Clients["l1"]
l2Client := sys.Clients["sequencer"]
opts, err := bind.NewKeyedTransactorWithChainID(sys.cfg.Secrets.Alice, cfg.L1ChainIDBig())
require.Nil(t, err)
// Deploy WETH9
weth9Address, tx, WETH9, err := bindings.DeployWETH9(opts, l1Client)
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err, "Waiting for deposit tx on L1")
// Get some WETH
opts.Value = big.NewInt(params.Ether)
tx, err = WETH9.Fallback(opts, []byte{})
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err)
opts.Value = nil
wethBalance, err := WETH9.BalanceOf(&bind.CallOpts{}, opts.From)
require.NoError(t, err)
require.Equal(t, big.NewInt(params.Ether), wethBalance)
// Deploy L2 WETH9
l2Opts, err := bind.NewKeyedTransactorWithChainID(sys.cfg.Secrets.Alice, cfg.L2ChainIDBig())
require.NoError(t, err)
optimismMintableTokenFactory, err := bindings.NewOptimismMintableERC20Factory(predeploys.OptimismMintableERC20FactoryAddr, l2Client)
require.NoError(t, err)
tx, err = optimismMintableTokenFactory.CreateOptimismMintableERC20(l2Opts, weth9Address, "L2-WETH", "L2-WETH")
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l2Client, 3*time.Duration(cfg.DeployConfig.L2BlockTime)*time.Second)
require.NoError(t, err)
// Get the deployment event to have access to the L2 WETH9 address
it, err := optimismMintableTokenFactory.FilterOptimismMintableERC20Created(&bind.FilterOpts{Start: 0}, nil, nil)
require.NoError(t, err)
var event *bindings.OptimismMintableERC20FactoryOptimismMintableERC20Created
for it.Next() {
event = it.Event
}
require.NotNil(t, event)
// Approve WETH9 with the bridge
tx, err = WETH9.Approve(opts, predeploys.DevL1StandardBridgeAddr, new(big.Int).SetUint64(math.MaxUint64))
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err)
// Bridge the WETH9
l1StandardBridge, err := bindings.NewL1StandardBridge(predeploys.DevL1StandardBridgeAddr, l1Client)
require.NoError(t, err)
tx, err = l1StandardBridge.BridgeERC20(opts, weth9Address, event.LocalToken, big.NewInt(100), 100000, []byte{})
require.NoError(t, err)
depositReceipt, err := waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err)
t.Log("Deposit through L1StandardBridge", "gas used", depositReceipt.GasUsed)
// compute the deposit transaction hash + poll for it
portal, err := bindings.NewOptimismPortal(predeploys.DevOptimismPortalAddr, l1Client)
require.NoError(t, err)
depIt, err := portal.FilterTransactionDeposited(&bind.FilterOpts{Start: 0}, nil, nil, nil)
require.NoError(t, err)
var depositEvent *bindings.OptimismPortalTransactionDeposited
for depIt.Next() {
depositEvent = depIt.Event
}
require.NotNil(t, depositEvent)
depositTx, err := derive.UnmarshalDepositLogEvent(&depositEvent.Raw)
require.NoError(t, err)
_, err = waitForTransaction(types.NewTx(depositTx).Hash(), l2Client, 3*time.Duration(cfg.DeployConfig.L2BlockTime)*time.Second)
require.NoError(t, err)
// Ensure that the deposit went through
optimismMintableToken, err := bindings.NewOptimismMintableERC20(event.LocalToken, l2Client)
require.NoError(t, err)
// Should have balance on L2
l2Balance, err := optimismMintableToken.BalanceOf(&bind.CallOpts{}, opts.From)
require.NoError(t, err)
require.Equal(t, l2Balance, big.NewInt(100))
}
......@@ -12,6 +12,7 @@ import (
"time"
bss "github.com/ethereum-optimism/optimism/op-batcher/batcher"
batchermetrics "github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/chaincfg"
"github.com/ethereum-optimism/optimism/op-node/sources"
l2os "github.com/ethereum-optimism/optimism/op-proposer/proposer"
......@@ -341,7 +342,7 @@ func TestMigration(t *testing.T) {
Format: "text",
},
PrivateKey: hexPriv(secrets.Batcher),
}, lgr.New("module", "batcher"))
}, lgr.New("module", "batcher"), batchermetrics.NoopMetrics)
require.NoError(t, err)
t.Cleanup(func() {
batcher.StopIfRunning()
......
......@@ -7,6 +7,7 @@ import (
"math/big"
"os"
"path"
"sort"
"strings"
"testing"
"time"
......@@ -23,6 +24,7 @@ import (
"github.com/stretchr/testify/require"
bss "github.com/ethereum-optimism/optimism/op-batcher/batcher"
batchermetrics "github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-chain-ops/genesis"
"github.com/ethereum-optimism/optimism/op-e2e/e2eutils"
......@@ -119,14 +121,6 @@ func DefaultSystemConfig(t *testing.T) SystemConfig {
JWTFilePath: writeDefaultJWT(t),
JWTSecret: testingJWTSecret,
Nodes: map[string]*rollupNode.Config{
"verifier": {
Driver: driver.Config{
VerifierConfDepth: 0,
SequencerConfDepth: 0,
SequencerEnabled: false,
},
L1EpochPollInterval: time.Second * 4,
},
"sequencer": {
Driver: driver.Config{
VerifierConfDepth: 0,
......@@ -141,6 +135,14 @@ func DefaultSystemConfig(t *testing.T) SystemConfig {
},
L1EpochPollInterval: time.Second * 4,
},
"verifier": {
Driver: driver.Config{
VerifierConfDepth: 0,
SequencerConfDepth: 0,
SequencerEnabled: false,
},
L1EpochPollInterval: time.Second * 4,
},
},
Loggers: map[string]log.Logger{
"verifier": testlog.Logger(t, log.LvlInfo).New("role", "verifier"),
......@@ -225,7 +227,43 @@ func (sys *System) Close() {
sys.Mocknet.Close()
}
func (cfg SystemConfig) Start() (*System, error) {
type systemConfigHook func(sCfg *SystemConfig, s *System)
type SystemConfigOption struct {
key string
role string
action systemConfigHook
}
type SystemConfigOptions struct {
opts map[string]systemConfigHook
}
func NewSystemConfigOptions(_opts []SystemConfigOption) (SystemConfigOptions, error) {
opts := make(map[string]systemConfigHook)
for _, opt := range _opts {
if _, ok := opts[opt.key+":"+opt.role]; ok {
return SystemConfigOptions{}, fmt.Errorf("duplicate option for key %s and role %s", opt.key, opt.role)
}
opts[opt.key+":"+opt.role] = opt.action
}
return SystemConfigOptions{
opts: opts,
}, nil
}
func (s *SystemConfigOptions) Get(key, role string) (systemConfigHook, bool) {
v, ok := s.opts[key+":"+role]
return v, ok
}
func (cfg SystemConfig) Start(_opts ...SystemConfigOption) (*System, error) {
opts, err := NewSystemConfigOptions(_opts)
if err != nil {
return nil, err
}
sys := &System{
cfg: cfg,
Nodes: make(map[string]*node.Node),
......@@ -457,7 +495,17 @@ func (cfg SystemConfig) Start() (*System, error) {
snapLog.SetHandler(log.DiscardHandler())
// Rollup nodes
for name, nodeConfig := range cfg.Nodes {
// Ensure we are looping through the nodes in alphabetical order
ks := make([]string, 0, len(cfg.Nodes))
for k := range cfg.Nodes {
ks = append(ks, k)
}
// Sort strings in ascending alphabetical order
sort.Strings(ks)
for _, name := range ks {
nodeConfig := cfg.Nodes[name]
c := *nodeConfig // copy
c.Rollup = makeRollupConfig()
......@@ -482,6 +530,10 @@ func (cfg SystemConfig) Start() (*System, error) {
return nil, err
}
sys.RollupNodes[name] = node
if action, ok := opts.Get("afterRollupNodeStart", name); ok {
action(&cfg, sys)
}
}
if cfg.P2PTopology != nil {
......@@ -549,7 +601,7 @@ func (cfg SystemConfig) Start() (*System, error) {
Format: "text",
},
PrivateKey: hexPriv(cfg.Secrets.Batcher),
}, sys.cfg.Loggers["batcher"])
}, sys.cfg.Loggers["batcher"], batchermetrics.NoopMetrics)
if err != nil {
return nil, fmt.Errorf("failed to setup batch submitter: %w", err)
}
......
......@@ -649,8 +649,94 @@ func TestSystemMockP2P(t *testing.T) {
require.Contains(t, received, receiptVerif.BlockHash)
}
// TestSystemMockAltSync sets up an L1 Geth node, a rollup node, and an L2 geth node and then confirms that
// the nodes can sync L2 blocks before they are confirmed on L1.
//
// Test steps:
// 1. Spin up the nodes (P2P is disabled on the verifier)
// 2. Send a transaction to the sequencer.
// 3. Wait for the TX to be mined on the sequencer chain.
// 4. Wait for the verifier to detect a gap in the payload queue vs. the unsafe head
// 5. Wait for the RPC sync method to grab the block from the sequencer over RPC and insert it into the verifier's unsafe chain.
// 6. Wait for the verifier to sync the unsafe chain into the safe chain.
// 7. Verify that the TX is included in the verifier's safe chain.
func TestSystemMockAltSync(t *testing.T) {
parallel(t)
if !verboseGethNodes {
log.Root().SetHandler(log.DiscardHandler())
}
cfg := DefaultSystemConfig(t)
// slow down L1 blocks so we can see the L2 blocks arrive well before the L1 blocks do.
// Keep the seq window small so the L2 chain is started quickly
cfg.DeployConfig.L1BlockTime = 10
var published, received []common.Hash
seqTracer, verifTracer := new(FnTracer), new(FnTracer)
seqTracer.OnPublishL2PayloadFn = func(ctx context.Context, payload *eth.ExecutionPayload) {
published = append(published, payload.BlockHash)
}
verifTracer.OnUnsafeL2PayloadFn = func(ctx context.Context, from peer.ID, payload *eth.ExecutionPayload) {
received = append(received, payload.BlockHash)
}
cfg.Nodes["sequencer"].Tracer = seqTracer
cfg.Nodes["verifier"].Tracer = verifTracer
sys, err := cfg.Start(SystemConfigOption{
key: "afterRollupNodeStart",
role: "sequencer",
action: func(sCfg *SystemConfig, system *System) {
rpc, _ := system.Nodes["sequencer"].Attach() // never errors
cfg.Nodes["verifier"].L2Sync = &rollupNode.L2SyncRPCConfig{
Rpc: client.NewBaseRPCClient(rpc),
}
},
})
require.Nil(t, err, "Error starting up system")
defer sys.Close()
l2Seq := sys.Clients["sequencer"]
l2Verif := sys.Clients["verifier"]
// Transactor Account
ethPrivKey := cfg.Secrets.Alice
// Submit a TX to L2 sequencer node
toAddr := common.Address{0xff, 0xff}
tx := types.MustSignNewTx(ethPrivKey, types.LatestSignerForChainID(cfg.L2ChainIDBig()), &types.DynamicFeeTx{
ChainID: cfg.L2ChainIDBig(),
Nonce: 0,
To: &toAddr,
Value: big.NewInt(1_000_000_000),
GasTipCap: big.NewInt(10),
GasFeeCap: big.NewInt(200),
Gas: 21000,
})
err = l2Seq.SendTransaction(context.Background(), tx)
require.Nil(t, err, "Sending L2 tx to sequencer")
// Wait for tx to be mined on the L2 sequencer chain
receiptSeq, err := waitForTransaction(tx.Hash(), l2Seq, 6*time.Duration(sys.RollupConfig.BlockTime)*time.Second)
require.Nil(t, err, "Waiting for L2 tx on sequencer")
// Wait for alt RPC sync to pick up the blocks on the sequencer chain
receiptVerif, err := waitForTransaction(tx.Hash(), l2Verif, 12*time.Duration(sys.RollupConfig.BlockTime)*time.Second)
require.Nil(t, err, "Waiting for L2 tx on verifier")
require.Equal(t, receiptSeq, receiptVerif)
// Verify that the tx was received via RPC sync (P2P is disabled)
require.Contains(t, received, receiptVerif.BlockHash)
// Verify that everything that was received was published
require.GreaterOrEqual(t, len(published), len(received))
require.ElementsMatch(t, received, published[:len(received)])
}
// TestSystemDenseTopology sets up a dense p2p topology with 3 verifier nodes and 1 sequencer node.
func TestSystemDenseTopology(t *testing.T) {
t.Skip("Skipping dense topology test to avoid flakiness. @refcell address in p2p scoring pr.")
parallel(t)
if !verboseGethNodes {
log.Root().SetHandler(log.DiscardHandler())
......
......@@ -26,6 +26,16 @@ the transaction hash.
into channels. It then stores the channels with metadata on disk where the file name is the Channel ID.
### Force Close
`batch_decoder force-close` will create transaction data that can be sent from the batcher address to
the batch inbox address to force close the given channels. This allows future channels to
be read without waiting for the channel timeout. It uses the results from `batch_decoder fetch` to
create the close transaction, because the transaction it creates for a specific channel depends on
whether or not the channel has already been closed. If the channel has been closed already but is missing
specific frames, those frames need to be generated differently than by simply closing the channel.
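For example, to generate the force-close calldata for one channel from a previously populated
transaction cache (flag names and defaults match the CLI command added below; the channel ID is a
placeholder):

batch_decoder force-close --id <channel-id> --inbox <batch-inbox-address> --in /tmp/batch_decoder/transactions_cache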
## JQ Cheat Sheet
`jq` is a really useful utility for manipulating JSON files.
......@@ -48,7 +58,6 @@ jq "select(.is_ready == false)|[.id, .frames[0].inclusion_block, .frames[0].tran
## Roadmap
- Parallel transaction fetching (CLI-3563)
- Create force-close channel tx data from channel ID (CLI-3564)
- Pull the batches out of channels & store that information inside the ChannelWithMetadata (CLI-3565)
- Transaction Bytes used
- Total uncompressed (different from tx bytes) + compressed bytes
......
......@@ -9,6 +9,7 @@ import (
"github.com/ethereum-optimism/optimism/op-node/cmd/batch_decoder/fetch"
"github.com/ethereum-optimism/optimism/op-node/cmd/batch_decoder/reassemble"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/urfave/cli"
......@@ -113,6 +114,46 @@ func main() {
return nil
},
},
{
Name: "force-close",
Usage: "Create the tx data which will force close a channel",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Required: true,
Usage: "ID of the channel to close",
},
cli.StringFlag{
Name: "inbox",
Value: "0x0000000000000000000000000000000000000000",
Usage: "(Optional) Batch Inbox Address",
},
cli.StringFlag{
Name: "in",
Value: "/tmp/batch_decoder/transactions_cache",
Usage: "Cache directory for the found transactions",
},
},
Action: func(cliCtx *cli.Context) error {
var id derive.ChannelID
if err := (&id).UnmarshalText([]byte(cliCtx.String("id"))); err != nil {
log.Fatal(err)
}
frames := reassemble.LoadFrames(cliCtx.String("in"), common.HexToAddress(cliCtx.String("inbox")))
var filteredFrames []derive.Frame
for _, frame := range frames {
if frame.Frame.ID == id {
filteredFrames = append(filteredFrames, frame.Frame)
}
}
data, err := derive.ForceCloseTxData(filteredFrames)
if err != nil {
log.Fatal(err)
}
fmt.Printf("%x\n", data)
return nil
},
},
}
if err := app.Run(os.Args); err != nil {
......
......@@ -38,14 +38,8 @@ type Config struct {
OutDirectory string
}
// Channels loads all transactions from the given input directory that are submitted to the
// specified batch inbox and then re-assembles all channels & writes the re-assembled channels
// to the out directory.
func Channels(config Config) {
if err := os.MkdirAll(config.OutDirectory, 0750); err != nil {
log.Fatal(err)
}
txns := loadTransactions(config.InDirectory, config.BatchInbox)
func LoadFrames(directory string, inbox common.Address) []FrameWithMetadata {
txns := loadTransactions(directory, inbox)
// Sort first by block number then by transaction index inside the block number range.
// This is to match the order they are processed in derivation.
sort.Slice(txns, func(i, j int) bool {
......@@ -56,7 +50,17 @@ func Channels(config Config) {
}
})
frames := transactionsToFrames(txns)
return transactionsToFrames(txns)
}
// Channels loads all transactions from the given input directory that are submitted to the
// specified batch inbox and then re-assembles all channels & writes the re-assembled channels
// to the out directory.
func Channels(config Config) {
if err := os.MkdirAll(config.OutDirectory, 0750); err != nil {
log.Fatal(err)
}
frames := LoadFrames(config.InDirectory, config.BatchInbox)
framesByChannel := make(map[derive.ChannelID][]FrameWithMetadata)
for _, frame := range frames {
framesByChannel[frame.Frame.ID] = append(framesByChannel[frame.Frame.ID], frame)
......@@ -143,6 +147,7 @@ func transactionsToFrames(txns []fetch.TransactionWithMetadata) []FrameWithMetad
return out
}
// If the inbox is the zero address, transactions to any inbox address are loaded.
func loadTransactions(dir string, inbox common.Address) []fetch.TransactionWithMetadata {
files, err := os.ReadDir(dir)
if err != nil {
......@@ -152,7 +157,7 @@ func loadTransactions(dir string, inbox common.Address) []fetch.TransactionWithM
for _, file := range files {
f := path.Join(dir, file.Name())
txm := loadTransactionsFile(f)
if txm.InboxAddr == inbox && txm.ValidSender {
if (inbox == common.Address{} || txm.InboxAddr == inbox) && txm.ValidSender {
out = append(out, txm)
}
}
......
......@@ -7,6 +7,7 @@ import (
"github.com/ethereum-optimism/optimism/op-node/chaincfg"
"github.com/ethereum-optimism/optimism/op-node/sources"
oplog "github.com/ethereum-optimism/optimism/op-service/log"
"github.com/urfave/cli"
)
......@@ -113,23 +114,6 @@ var (
Required: false,
Value: time.Second * 12 * 32,
}
LogLevelFlag = cli.StringFlag{
Name: "log.level",
Usage: "The lowest log level that will be output",
Value: "info",
EnvVar: prefixEnvVar("LOG_LEVEL"),
}
LogFormatFlag = cli.StringFlag{
Name: "log.format",
Usage: "Format the log output. Supported formats: 'text', 'json'",
Value: "text",
EnvVar: prefixEnvVar("LOG_FORMAT"),
}
LogColorFlag = cli.BoolFlag{
Name: "log.color",
Usage: "Color the log output",
EnvVar: prefixEnvVar("LOG_COLOR"),
}
MetricsEnabledFlag = cli.BoolFlag{
Name: "metrics.enabled",
Usage: "Enable the metrics server",
......@@ -185,6 +169,12 @@ var (
EnvVar: prefixEnvVar("HEARTBEAT_URL"),
Value: "https://heartbeat.optimism.io",
}
BackupL2UnsafeSyncRPC = cli.StringFlag{
Name: "l2.backup-unsafe-sync-rpc",
Usage: "Set the backup L2 unsafe sync RPC endpoint.",
EnvVar: prefixEnvVar("L2_BACKUP_UNSAFE_SYNC_RPC"),
Required: false,
}
)
var requiredFlags = []cli.Flag{
......@@ -194,7 +184,7 @@ var requiredFlags = []cli.Flag{
RPCListenPort,
}
var optionalFlags = append([]cli.Flag{
var optionalFlags = []cli.Flag{
RollupConfig,
Network,
L1TrustRPC,
......@@ -205,9 +195,6 @@ var optionalFlags = append([]cli.Flag{
SequencerStoppedFlag,
SequencerL1Confs,
L1EpochPollIntervalFlag,
LogLevelFlag,
LogFormatFlag,
LogColorFlag,
RPCEnableAdmin,
MetricsEnabledFlag,
MetricsAddrFlag,
......@@ -219,10 +206,17 @@ var optionalFlags = append([]cli.Flag{
HeartbeatEnabledFlag,
HeartbeatMonikerFlag,
HeartbeatURLFlag,
}, p2pFlags...)
BackupL2UnsafeSyncRPC,
}
// Flags contains the list of configuration options available to the binary.
var Flags = append(requiredFlags, optionalFlags...)
var Flags []cli.Flag
func init() {
optionalFlags = append(optionalFlags, p2pFlags...)
optionalFlags = append(optionalFlags, oplog.CLIFlags(envVarPrefix)...)
Flags = append(requiredFlags, optionalFlags...)
}
func CheckRequired(ctx *cli.Context) error {
l1NodeAddr := ctx.GlobalString(L1NodeAddr.Name)
......
......@@ -9,6 +9,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum-optimism/optimism/op-bindings/bindings"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup"
......@@ -114,7 +115,16 @@ func (n *nodeAPI) OutputAtBlock(ctx context.Context, number hexutil.Uint64) (*et
}
var l2OutputRootVersion eth.Bytes32 // it's zero for now
l2OutputRoot := rollup.ComputeL2OutputRoot(l2OutputRootVersion, head.Hash(), head.Root(), proof.StorageHash)
l2OutputRoot, err := rollup.ComputeL2OutputRoot(&bindings.TypesOutputRootProof{
Version: l2OutputRootVersion,
StateRoot: head.Root(),
MessagePasserStorageRoot: proof.StorageHash,
LatestBlockhash: head.Hash(),
})
if err != nil {
n.log.Error("Error computing L2 output root, nil ptr passed to hashing function")
return nil, err
}
return &eth.OutputResponse{
Version: l2OutputRootVersion,
......
......@@ -19,6 +19,11 @@ type L2EndpointSetup interface {
Check() error
}
type L2SyncEndpointSetup interface {
Setup(ctx context.Context, log log.Logger) (cl client.RPC, err error)
Check() error
}
type L1EndpointSetup interface {
// Setup a RPC client to a L1 node to pull rollup input-data from.
// The results of the RPC client may be trusted for faster processing, or strictly validated.
......@@ -75,6 +80,50 @@ func (p *PreparedL2Endpoints) Setup(ctx context.Context, log log.Logger) (client
return p.Client, nil
}
// L2SyncEndpointConfig contains configuration for the fallback sync endpoint
type L2SyncEndpointConfig struct {
// Address of the L2 RPC to use for backup sync
L2NodeAddr string
}
var _ L2SyncEndpointSetup = (*L2SyncEndpointConfig)(nil)
func (cfg *L2SyncEndpointConfig) Setup(ctx context.Context, log log.Logger) (client.RPC, error) {
l2Node, err := client.NewRPC(ctx, log, cfg.L2NodeAddr)
if err != nil {
return nil, err
}
return l2Node, nil
}
func (cfg *L2SyncEndpointConfig) Check() error {
if cfg.L2NodeAddr == "" {
return errors.New("empty L2 Node Address")
}
return nil
}
type L2SyncRPCConfig struct {
// RPC endpoint to use for syncing
Rpc client.RPC
}
var _ L2SyncEndpointSetup = (*L2SyncRPCConfig)(nil)
func (cfg *L2SyncRPCConfig) Setup(ctx context.Context, log log.Logger) (client.RPC, error) {
return cfg.Rpc, nil
}
func (cfg *L2SyncRPCConfig) Check() error {
if cfg.Rpc == nil {
return errors.New("rpc cannot be nil")
}
return nil
}
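Together, these two implementations let the backup sync endpoint come either from static CLI configuration or from an already-constructed RPC client (as the e2e tests do). A hypothetical wiring sketch for the CLI path, reusing the `l2.backup-unsafe-sync-rpc` flag added elsewhere in this change; the actual node config plumbing may differ:
// Hypothetical sketch: build the L2 sync endpoint setup from the CLI
// context; an empty address means backup sync stays disabled (nil).
func newL2SyncEndpointConfig(ctx *cli.Context) L2SyncEndpointSetup {
	addr := ctx.GlobalString("l2.backup-unsafe-sync-rpc")
	if addr == "" {
		return nil
	}
	return &L2SyncEndpointConfig{L2NodeAddr: addr}
}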
type L1EndpointConfig struct {
L1NodeAddr string // Address of L1 User JSON-RPC endpoint to use (eth namespace required)
......
......@@ -13,8 +13,9 @@ import (
)
type Config struct {
L1 L1EndpointSetup
L2 L2EndpointSetup
L1 L1EndpointSetup
L2 L2EndpointSetup
L2Sync L2SyncEndpointSetup
Driver driver.Config
......
......@@ -197,7 +197,28 @@ func (n *OpNode) initL2(ctx context.Context, cfg *Config, snapshotLog log.Logger
return err
}
n.l2Driver = driver.NewDriver(&cfg.Driver, &cfg.Rollup, n.l2Source, n.l1Source, n, n.log, snapshotLog, n.metrics)
var syncClient *sources.SyncClient
// If the L2 sync config is present, use it to create a sync client
if cfg.L2Sync != nil {
if err := cfg.L2Sync.Check(); err != nil {
log.Info("L2 sync config is not present, skipping L2 sync client setup", "err", err)
} else {
rpcSyncClient, err := cfg.L2Sync.Setup(ctx, n.log)
if err != nil {
return fmt.Errorf("failed to setup L2 execution-engine RPC client for backup sync: %w", err)
}
// The sync client's RPC is always trusted
config := sources.SyncClientDefaultConfig(&cfg.Rollup, true)
syncClient, err = sources.NewSyncClient(n.OnUnsafeL2Payload, rpcSyncClient, n.log, n.metrics.L2SourceCache, config)
if err != nil {
return fmt.Errorf("failed to create sync client: %w", err)
}
}
}
n.l2Driver = driver.NewDriver(&cfg.Driver, &cfg.Rollup, n.l2Source, n.l1Source, syncClient, n, n.log, snapshotLog, n.metrics)
return nil
}
......@@ -263,13 +284,21 @@ func (n *OpNode) initP2PSigner(ctx context.Context, cfg *Config) error {
func (n *OpNode) Start(ctx context.Context) error {
n.log.Info("Starting execution engine driver")
// start driving engine: sync blocks by deriving them from L1 and driving them into the engine
err := n.l2Driver.Start()
if err != nil {
if err := n.l2Driver.Start(); err != nil {
n.log.Error("Could not start a rollup node", "err", err)
return err
}
// If the backup unsafe sync client is enabled, start its event loop
if n.l2Driver.L2SyncCl != nil {
if err := n.l2Driver.L2SyncCl.Start(); err != nil {
n.log.Error("Could not start the backup sync client", "err", err)
return err
}
}
return nil
}
......@@ -382,6 +411,13 @@ func (n *OpNode) Close() error {
if err := n.l2Driver.Close(); err != nil {
result = multierror.Append(result, fmt.Errorf("failed to close L2 engine driver cleanly: %w", err))
}
// If the L2 sync client is present & running, close it.
if n.l2Driver.L2SyncCl != nil {
if err := n.l2Driver.L2SyncCl.Close(); err != nil {
result = multierror.Append(result, fmt.Errorf("failed to close L2 engine backup sync client cleanly: %w", err))
}
}
}
// close L2 engine RPC client
......
......@@ -139,6 +139,7 @@ func (bq *BatchQueue) AddBatch(batch *BatchData, l2SafeHead eth.L2BlockRef) {
if validity == BatchDrop {
return // if we do drop the batch, CheckBatch will log the drop reason with WARN level.
}
bq.log.Debug("Adding batch", "batch_timestamp", batch.Timestamp, "parent_hash", batch.ParentHash, "batch_epoch", batch.Epoch(), "txs", len(batch.Transactions))
bq.batches[batch.Timestamp] = append(bq.batches[batch.Timestamp], &data)
}
......@@ -212,7 +213,7 @@ batchLoop:
if nextBatch.Batch.EpochNum == rollup.Epoch(epoch.Number)+1 {
bq.l1Blocks = bq.l1Blocks[1:]
}
bq.log.Trace("Returning found batch", "epoch", epoch, "batch_epoch", nextBatch.Batch.EpochNum, "batch_timestamp", nextBatch.Batch.Timestamp)
bq.log.Info("Found next batch", "epoch", epoch, "batch_epoch", nextBatch.Batch.EpochNum, "batch_timestamp", nextBatch.Batch.Timestamp)
return nextBatch.Batch, nil
}
......@@ -241,7 +242,7 @@ batchLoop:
// to preserve that L2 time >= L1 time. If this is the first block of the epoch, always generate a
// batch to ensure that we at least have one batch per epoch.
if nextTimestamp < nextEpoch.Time || firstOfEpoch {
bq.log.Trace("Generating next batch", "epoch", epoch, "timestamp", nextTimestamp)
bq.log.Info("Generating next batch", "epoch", epoch, "timestamp", nextTimestamp)
return &BatchData{
BatchV1{
ParentHash: l2SafeHead.Hash,
......
......@@ -69,6 +69,7 @@ func (cb *ChannelBank) prune() {
ch := cb.channels[id]
cb.channelQueue = cb.channelQueue[1:]
delete(cb.channels, id)
cb.log.Info("pruning channel", "channel", id, "totalSize", totalSize, "channel_size", ch.size, "remaining_channel_count", len(cb.channels))
totalSize -= ch.size
}
}
......@@ -77,7 +78,7 @@ func (cb *ChannelBank) prune() {
// Read() should be called repeatedly first, until everything has been read, before adding new data.
func (cb *ChannelBank) IngestFrame(f Frame) {
origin := cb.Origin()
log := log.New("origin", origin, "channel", f.ID, "length", len(f.Data), "frame_number", f.FrameNumber, "is_last", f.IsLast)
log := cb.log.New("origin", origin, "channel", f.ID, "length", len(f.Data), "frame_number", f.FrameNumber, "is_last", f.IsLast)
log.Debug("channel bank got new data")
currentCh, ok := cb.channels[f.ID]
......@@ -86,6 +87,7 @@ func (cb *ChannelBank) IngestFrame(f Frame) {
currentCh = NewChannel(f.ID, origin)
cb.channels[f.ID] = currentCh
cb.channelQueue = append(cb.channelQueue, f.ID)
log.Info("created new channel")
}
// check if the channel is not timed out
......@@ -114,7 +116,7 @@ func (cb *ChannelBank) Read() (data []byte, err error) {
ch := cb.channels[first]
timedOut := ch.OpenBlockNumber()+cb.cfg.ChannelTimeout < cb.Origin().Number
if timedOut {
cb.log.Debug("channel timed out", "channel", first, "frames", len(ch.inputs))
cb.log.Info("channel timed out", "channel", first, "frames", len(ch.inputs))
delete(cb.channels, first)
cb.channelQueue = cb.channelQueue[1:]
return nil, nil // multiple different channels may all be timed out
......@@ -137,7 +139,6 @@ func (cb *ChannelBank) Read() (data []byte, err error) {
// consistency around channel bank pruning which depends upon the order
// of operations.
func (cb *ChannelBank) NextData(ctx context.Context) ([]byte, error) {
// Do the read from the channel bank first
data, err := cb.Read()
if err == io.EOF {
......
......@@ -77,6 +77,7 @@ type PayloadsQueue struct {
pq payloadsByNumber
currentSize uint64
MaxSize uint64
blockNos map[uint64]bool
SizeFn func(p *eth.ExecutionPayload) uint64
}
......@@ -99,6 +100,9 @@ func (upq *PayloadsQueue) Push(p *eth.ExecutionPayload) error {
if p == nil {
return errors.New("cannot add nil payload")
}
if upq.blockNos[p.ID().Number] {
return errors.New("cannot add duplicate payload")
}
size := upq.SizeFn(p)
if size > upq.MaxSize {
return fmt.Errorf("cannot add payload %s, payload mem size %d is larger than max queue size %d", p.ID(), size, upq.MaxSize)
......@@ -111,6 +115,7 @@ func (upq *PayloadsQueue) Push(p *eth.ExecutionPayload) error {
for upq.currentSize > upq.MaxSize {
upq.Pop()
}
upq.blockNos[p.ID().Number] = true
return nil
}
......@@ -132,5 +137,7 @@ func (upq *PayloadsQueue) Pop() *eth.ExecutionPayload {
}
ps := heap.Pop(&upq.pq).(payloadAndSize) // nosemgrep
upq.currentSize -= ps.size
// remove the key from the blockNos map
delete(upq.blockNos, ps.payload.ID().Number)
return ps.payload
}
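For illustration, a minimal in-package sketch of the new duplicate guard (blockNos is unexported, so this only compiles inside the derive package; the payload literal and size function are abbreviated assumptions, and the real constructor is not shown in this diff):
// Sketch: the second Push of a payload with the same block number is rejected.
q := &PayloadsQueue{
	MaxSize:  1 << 20,
	SizeFn:   func(p *eth.ExecutionPayload) uint64 { return 1000 }, // stub size
	blockNos: make(map[uint64]bool),
}
p := &eth.ExecutionPayload{BlockNumber: 42} // abbreviated payload
fmt.Println(q.Push(p)) // nil: accepted
fmt.Println(q.Push(p)) // error: "cannot add duplicate payload"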
......@@ -126,8 +126,17 @@ func FindL2Heads(ctx context.Context, cfg *rollup.Config, l1 L1Chain, l2 L2Chain
// then we return the last L2 block of the epoch before that as safe head.
// Each loop iteration we traverse a single L2 block, and we check if the L1 origins are consistent.
for {
// Fetch L1 information if we never had it, or if we do not have it for the current origin
if l1Block == (eth.L1BlockRef{}) || n.L1Origin.Hash != l1Block.Hash {
// Fetch L1 information if we never had it, or if we do not have it for the current origin.
// Optimization: as soon as we have a previous L1 block, try to traverse L1 by hash instead of by number, to fill the cache.
if n.L1Origin.Hash == l1Block.ParentHash {
b, err := l1.L1BlockRefByHash(ctx, n.L1Origin.Hash)
if err != nil {
// Exit; the sync-start search should start over, falling back to the block-by-number / not-found case on an available L1 chain.
return nil, fmt.Errorf("failed to retrieve L1 block: %w", err)
}
l1Block = b
ahead = false
} else if l1Block == (eth.L1BlockRef{}) || n.L1Origin.Hash != l1Block.Hash {
b, err := l1.L1BlockRefByNumber(ctx, n.L1Origin.Number)
// if L2 is ahead of L1 view, then consider it a "plausible" head
notFound := errors.Is(err, ethereum.NotFound)
......
......@@ -157,7 +157,7 @@ func (cfg *Config) CheckL2ChainID(ctx context.Context, client L2Client) error {
return err
}
if cfg.L2ChainID.Cmp(id) != 0 {
return fmt.Errorf("incorrect L2 RPC chain id %d, expected %d", cfg.L2ChainID, id)
return fmt.Errorf("incorrect L2 RPC chain id, expected from config %d, obtained from client %d", cfg.L2ChainID, id)
}
return nil
}
......