Commit 20cdf1cd authored by mergify[bot], committed by GitHub

Merge branch 'develop' into aj/update-geth-vana

parents 89e9a4cd 7d0c54cd
---
'@eth-optimism/contracts-bedrock': patch
---
Reduce the time that the system dictator deploy scripts wait before checking the chain state.
---
'@eth-optimism/chain-mon': patch
'@eth-optimism/data-transport-layer': patch
'@eth-optimism/fault-detector': patch
'@eth-optimism/message-relayer': patch
'@eth-optimism/replica-healthcheck': patch
---
Empty patch release to re-release packages that failed to be released due to a bug in the release process.
......@@ -150,6 +150,7 @@ module.exports = {
'/docs/build/getting-started.md',
'/docs/build/conf.md',
'/docs/build/explorer.md',
'/docs/build/sdk.md',
{
title: "OP Stack Hacks",
collapsable: true,
......
import event from '@vuepress/plugin-pwa/lib/event'
export default ({ router }) => {
registerAutoReload();
router.addRoutes([
{ path: '/docs/', redirect: '/' },
])
}
// When new content is detected by the app, this will automatically
// refresh the page, so that users do not need to manually click
// the refresh button. For more details see:
// https://linear.app/optimism/issue/FE-1003/investigate-archive-issue-on-docs
const registerAutoReload = () => {
event.$on('sw-updated', e => e.skipWaiting().then(() => {
location.reload(true);
}))
}
......@@ -39,12 +39,11 @@ This tutorial was checked on:
| Software | Version | Installation command(s) |
| -------- | ---------- | - |
| Ubuntu | 20.04 LTS | |
| git | OS default | |
| make | 4.2.1-1.2 | `sudo apt install -y make`
| git, curl, and make | OS default | `sudo apt install -y git curl make` |
| Go | 1.20 | `sudo apt update` <br> `wget https://go.dev/dl/go1.20.linux-amd64.tar.gz` <br> `tar xvzf go1.20.linux-amd64.tar.gz` <br> `sudo cp go/bin/go /usr/bin/go` <br> `sudo mv go /usr/lib` <br> `echo export GOROOT=/usr/lib/go >> ~/.bashrc`
| Node | 16.19.0 | `curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -` <br> `sudo apt-get install -y nodejs`
| Node | 16.19.0 | `curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -` <br> `sudo apt-get install -y nodejs npm`
| yarn | 1.22.19 | `sudo npm install -g yarn`
| Foundry | 0.2.0 | `curl -L https://foundry.paradigm.xyz | bash` <br> `sudo bash` <br> `foundryup`
| Foundry | 0.2.0 | `curl -L https://foundry.paradigm.xyz | bash` <br> `. ~/.bashrc` <br> `foundryup`
## Build the Source Code
......@@ -74,7 +73,8 @@ We’re going to be spinning up an EVM Rollup from the OP Stack source code. Yo
1. Build the various packages inside of the Optimism Monorepo.
```bash
make build
make op-node op-batcher
yarn build
```
### Build op-geth
......@@ -440,20 +440,27 @@ Once you’ve connected your wallet, you’ll probably notice that you don’t h
cd ~/optimism/packages/contracts-bedrock
```
1. Grab the address of the `OptimismPortalProxy` contract:
1. Grab the address of the proxy to the L1 standard bridge contract:
```bash
cat deployments/getting-started/OptimismPortalProxy.json | grep \"address\":
cat deployments/getting-started/Proxy__OVM_L1StandardBridge.json | grep \"address\":
```
You should see a result like the following (**your address will be different**):
```
"address": "0x264B5fde6B37fb6f1C92AaC17BA144cf9e3DcFE9",
"address": "0x264B5fde6B37fb6f1C92AaC17BA144cf9e3DcFE9",
"address": "0x874f2E16D803c044F10314A978322da3c9b075c7",
"internalType": "address",
"type": "address"
"internalType": "address",
"type": "address"
"internalType": "address",
"type": "address"
"internalType": "address",
"type": "address"
```
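If you have `jq` installed, you can print just the top-level address directly (an optional convenience; this assumes the deployment artifact keeps its address in a top-level `address` field, as the output above suggests):
```bash
jq -r .address deployments/getting-started/Proxy__OVM_L1StandardBridge.json
```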
1. Grab the `OptimismPortalProxy` address and, using the wallet that you want to have ETH on your Rollup, send that address a small amount of ETH on Goerli (0.1 or less is fine). It may take up to 5 minutes for that ETH to appear in your wallet on L2.
1. Grab the L1 bridge proxy contract address and, using the wallet that you want to have ETH on your Rollup, send that address a small amount of ETH on Goerli (0.1 or less is fine). It may take up to 5 minutes for that ETH to appear in your wallet on L2.
## Use your Rollup
......
---
title: Using the SDK with OP Stack
lang: en-US
---
When building applications for use with your OP Stack, you can continue to use [the Optimism JavaScript SDK](https://sdk.optimism.io/).
The main difference is you need to provide some contract addresses to the `CrossDomainMessenger` because they aren't preconfigured.
## Contract addresses
### L1 contract addresses
The contract addresses are in `.../optimism/packages/contracts-bedrock/deployments/getting-started`, a directory that was created when you deployed the L1 contracts.
| Contract name when creating `CrossDomainMessenger` | File with address |
| - | - |
| `AddressManager` | `Lib_AddressManager.json`
| `L1CrossDomainMessenger` | `Proxy__OVM_L1CrossDomainMessenger.json`
| `L1StandardBridge` | `Proxy__OVM_L1StandardBridge.json`
| `OptimismPortal` | `OptimismPortalProxy.json`
| `L2OutputOracle` | `L2OutputOracleProxy.json`
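If you would rather not copy addresses by hand, a small helper along these lines can read them from the deployment artifacts (a sketch; `deploymentsDir` is an assumed path, adjust it to wherever your monorepo lives):
```js
const fs = require("fs")

// Assumed location of the deployment artifacts; adjust for your setup.
const deploymentsDir =
  "../optimism/packages/contracts-bedrock/deployments/getting-started"

// Each artifact is a JSON file with a top-level "address" field.
const getAddr = (name) =>
  JSON.parse(fs.readFileSync(`${deploymentsDir}/${name}.json`, "utf8")).address

// Example: getAddr("Proxy__OVM_L1CrossDomainMessenger")
```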
### Unneeded contract addresses
Some contracts are required by the SDK, but not actually used.
For these contracts you can just specify the zero address:
- `StateCommitmentChain`
- `CanonicalTransactionChain`
- `BondManager`
In JavaScript you can create the zero address using the expression `"0x".padEnd(42, "0")`.
## The CrossChainMessenger object
These directions assume you are inside the [Hardhat console](https://hardhat.org/hardhat-runner/docs/guides/hardhat-console).
They further assume that your project already includes the Optimism SDK [`@eth-optimism/sdk`](https://www.npmjs.com/package/@eth-optimism/sdk).
1. Import the SDK
```js
optimismSDK = require("@eth-optimism/sdk")
```
1. Set the configuration parameters.
| Variable name | Value |
| - | - |
| `l1Url` | URL to an RPC provider for L1, for example `https://eth-goerli.g.alchemy.com/v2/<api key>`
| `l2Url` | URL to your OP Stack. If running on the same computer, it is `http://localhost:8545`
| `privKey` | The private key for an account that has some ETH on the L1
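For example (all three values below are placeholders; substitute your own):
```js
l1Url = "https://eth-goerli.g.alchemy.com/v2/<api key>"
l2Url = "http://localhost:8545"
privKey = "<private key>"
```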
1. Create the [providers](https://docs.ethers.org/v5/api/providers/) and [signers](https://docs.ethers.org/v5/api/signer/).
```js
l1Provider = new ethers.providers.JsonRpcProvider(l1Url)
l2Provider = new ethers.providers.JsonRpcProvider(l2Url)
l1Signer = new ethers.Wallet(privKey).connect(l1Provider)
l2Signer = new ethers.Wallet(privKey).connect(l2Provider)
```
1. Create the L1 contracts structure.
```js
zeroAddr = "0x".padEnd(42, "0")
l1Contracts = {
StateCommitmentChain: zeroAddr,
CanonicalTransactionChain: zeroAddr,
BondManager: zeroAddr,
// These contracts have the addresses you found out earlier.
AddressManager: "0x....", // Lib_AddressManager.json
L1CrossDomainMessenger: "0x....", // Proxy__OVM_L1CrossDomainMessenger.json
L1StandardBridge: "0x....", // Proxy__OVM_L1StandardBridge.json
OptimismPortal: "0x....", // OptimismPortalProxy.json
L2OutputOracle: "0x....", // L2OutputOracleProxy.json
}
```
1. Create the data structure for the standard bridge.
```js
bridges = {
Standard: {
l1Bridge: l1Contracts.L1StandardBridge,
l2Bridge: "0x4200000000000000000000000000000000000010",
Adapter: optimismSDK.StandardBridgeAdapter
},
ETH: {
l1Bridge: l1Contracts.L1StandardBridge,
l2Bridge: "0x4200000000000000000000000000000000000010",
Adapter: optimismSDK.ETHBridgeAdapter
}
}
```
1. Create the [`CrossChainMessenger`](https://sdk.optimism.io/classes/crosschainmessenger) object.
```js
crossChainMessenger = new optimismSDK.CrossChainMessenger({
bedrock: true,
contracts: {
l1: l1Contracts
},
bridges: bridges,
l1ChainId: await l1Signer.getChainId(),
l2ChainId: await l2Signer.getChainId(),
l1SignerOrProvider: l1Signer,
l2SignerOrProvider: l2Signer,
})
```
## Verify SDK functionality
To verify the SDK's functionality, transfer some ETH from L1 to L2.
1. Get the current balances.
```js
balances0 = [
await l1Provider.getBalance(l1Signer.address),
await l2Provider.getBalance(l1Signer.address)
]
```
1. Transfer 1 gwei.
```js
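// depositETH expects the amount in wei, so 1e9 wei is 1 gwei.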
tx = await crossChainMessenger.depositETH(1e9)
rcpt = await tx.wait()
```
1. Get the balances after the transfer.
```js
balances1 = [
await l1Provider.getBalance(l1Signer.address),
await l2Provider.getBalance(l1Signer.address)
]
```
1. See that the L1 balance changed (probably by a lot more than 1 gwei because of the cost of the transaction).
```js
(balances0[0]-balances1[0])/1e9
```
1. See that the L2 balance changed (it might take a few minutes).
```js
((await l2Provider.getBalance(l1Signer.address))-balances0[1])/1e9
```
---
title: Contribute to OP Stack
title: Contribute to the OP Stack
lang: en-US
---
......@@ -21,7 +21,7 @@ The OP Stack needs YOU (yes you!) to help review the codebase for bugs and vulne
## Documentation help
Spot a typo in these docs? See something that could be a little clearer? Head over to the Optimism Monorepo where the OP Stack docs are hosted and make a pull request. No contribution is too small!
## Community contributions
......@@ -30,4 +30,4 @@ If you’re looking for other ways to get involved, here are a few options:
- Grab an idea from the [project ideas list](https://github.com/ethereum-optimism/optimism-project-ideas) and start building
- Suggest a new idea for the [project ideas list](https://github.com/ethereum-optimism/optimism-project-ideas)
- Improve the [Optimism Community Hub](https://community.optimism.io/) [documentation](https://github.com/ethereum-optimism/community-hub) or [tutorials](https://github.com/ethereum-optimism/optimism-tutorial)
- Become an Optimism Ambassador, Support Nerd, and more in the [Optimism Discord](https://discord-gateway.optimism.io/)
......@@ -12,9 +12,9 @@ import (
gethrpc "github.com/ethereum/go-ethereum/rpc"
"github.com/urfave/cli"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-batcher/rpc"
oplog "github.com/ethereum-optimism/optimism/op-service/log"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
oppprof "github.com/ethereum-optimism/optimism/op-service/pprof"
oprpc "github.com/ethereum-optimism/optimism/op-service/rpc"
)
......@@ -36,9 +36,10 @@ func Main(version string, cliCtx *cli.Context) error {
}
l := oplog.NewLogger(cfg.LogConfig)
m := metrics.NewMetrics("default")
l.Info("Initializing Batch Submitter")
batchSubmitter, err := NewBatchSubmitterFromCLIConfig(cfg, l)
batchSubmitter, err := NewBatchSubmitterFromCLIConfig(cfg, l, m)
if err != nil {
l.Error("Unable to create Batch Submitter", "error", err)
return err
......@@ -64,16 +65,15 @@ func Main(version string, cliCtx *cli.Context) error {
}()
}
registry := opmetrics.NewRegistry()
metricsCfg := cfg.MetricsConfig
if metricsCfg.Enabled {
l.Info("starting metrics server", "addr", metricsCfg.ListenAddr, "port", metricsCfg.ListenPort)
go func() {
if err := opmetrics.ListenAndServe(ctx, registry, metricsCfg.ListenAddr, metricsCfg.ListenPort); err != nil {
if err := m.Serve(ctx, metricsCfg.ListenAddr, metricsCfg.ListenPort); err != nil {
l.Error("error starting metrics server", err)
}
}()
opmetrics.LaunchBalanceMetrics(ctx, l, registry, "", batchSubmitter.L1Client, batchSubmitter.From)
m.StartBalanceMetrics(ctx, l, batchSubmitter.L1Client, batchSubmitter.From)
}
rpcCfg := cfg.RPCConfig
......@@ -95,6 +95,9 @@ func Main(version string, cliCtx *cli.Context) error {
return fmt.Errorf("error starting RPC server: %w", err)
}
m.RecordInfo(version)
m.RecordUp()
interruptChannel := make(chan os.Signal, 1)
signal.Notify(interruptChannel, []os.Signal{
os.Interrupt,
......
......@@ -12,7 +12,8 @@ import (
)
var (
ErrInvalidMaxFrameSize = errors.New("max frame size cannot be zero")
ErrZeroMaxFrameSize = errors.New("max frame size cannot be zero")
ErrSmallMaxFrameSize = errors.New("max frame size cannot be less than 23")
ErrInvalidChannelTimeout = errors.New("channel timeout is less than the safety margin")
ErrInputTargetReached = errors.New("target amount of input data reached")
ErrMaxFrameIndex = errors.New("max frame index reached (uint16)")
......@@ -82,7 +83,15 @@ func (cc *ChannelConfig) Check() error {
// will infinitely loop when trying to create frames in the
// [channelBuilder.OutputFrames] function.
if cc.MaxFrameSize == 0 {
return ErrInvalidMaxFrameSize
return ErrZeroMaxFrameSize
}
// If the [MaxFrameSize] is set to < 23, the channel out
// will underflow the maxSize variable in the [derive.ChannelOut].
// Since it is of type uint64, it will wrap around to a very large
// number, making the frame size extremely large.
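// (The 23-byte minimum corresponds to the fixed per-frame overhead: a
// 16-byte channel ID, a 2-byte frame number, a 4-byte frame data length
// and a 1-byte is_last flag.)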
if cc.MaxFrameSize < 23 {
return ErrSmallMaxFrameSize
}
return nil
......@@ -127,6 +136,8 @@ type channelBuilder struct {
blocks []*types.Block
// frames data queue, to be sent as txs
frames []frameData
// total amount of output data of all frames created so far
outputBytes int
}
// newChannelBuilder creates a new channel builder or returns an error if the
......@@ -147,11 +158,21 @@ func (c *channelBuilder) ID() derive.ChannelID {
return c.co.ID()
}
// InputBytes returns to total amount of input bytes added to the channel.
// InputBytes returns the total amount of input bytes added to the channel.
func (c *channelBuilder) InputBytes() int {
return c.co.InputBytes()
}
// ReadyBytes returns the amount of bytes ready in the compression pipeline to
// output into a frame.
func (c *channelBuilder) ReadyBytes() int {
return c.co.ReadyBytes()
}
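// OutputBytes returns the total amount of output data (in bytes) of all
// frames created so far.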
func (c *channelBuilder) OutputBytes() int {
return c.outputBytes
}
// Blocks returns a backup list of all blocks that were added to the channel. It
// can be used in case the channel needs to be rebuilt.
func (c *channelBuilder) Blocks() []*types.Block {
......@@ -175,22 +196,25 @@ func (c *channelBuilder) Reset() error {
// AddBlock returns a ChannelFullError if called even though the channel is
// already full. See description of FullErr for details.
//
// AddBlock also returns the L1BlockInfo that got extracted from the block's
// first transaction for subsequent use by the caller.
//
// Call OutputFrames() afterwards to create frames.
func (c *channelBuilder) AddBlock(block *types.Block) error {
func (c *channelBuilder) AddBlock(block *types.Block) (derive.L1BlockInfo, error) {
if c.IsFull() {
return c.FullErr()
return derive.L1BlockInfo{}, c.FullErr()
}
batch, err := derive.BlockToBatch(block)
batch, l1info, err := derive.BlockToBatch(block)
if err != nil {
return fmt.Errorf("converting block to batch: %w", err)
return l1info, fmt.Errorf("converting block to batch: %w", err)
}
if _, err = c.co.AddBatch(batch); errors.Is(err, derive.ErrTooManyRLPBytes) {
c.setFullErr(err)
return c.FullErr()
return l1info, c.FullErr()
} else if err != nil {
return fmt.Errorf("adding block to channel out: %w", err)
return l1info, fmt.Errorf("adding block to channel out: %w", err)
}
c.blocks = append(c.blocks, block)
c.updateSwTimeout(batch)
......@@ -200,7 +224,7 @@ func (c *channelBuilder) AddBlock(block *types.Block) error {
// Adding this block still worked, so don't return error, just mark as full
}
return nil
return l1info, nil
}
// Timeout management
......@@ -372,10 +396,11 @@ func (c *channelBuilder) outputFrame() error {
}
frame := frameData{
id: txID{chID: c.co.ID(), frameNumber: fn},
id: frameID{chID: c.co.ID(), frameNumber: fn},
data: buf.Bytes(),
}
c.frames = append(c.frames, frame)
c.outputBytes += len(frame.data)
return err // possibly io.EOF (last frame)
}
......
......@@ -2,14 +2,17 @@ package batcher
import (
"bytes"
"crypto/rand"
"errors"
"math"
"math/big"
"math/rand"
"testing"
"time"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
dtest "github.com/ethereum-optimism/optimism/op-node/rollup/derive/test"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
......@@ -37,7 +40,11 @@ func TestConfigValidation(t *testing.T) {
// Set the config to have a zero max frame size.
validChannelConfig.MaxFrameSize = 0
require.ErrorIs(t, validChannelConfig.Check(), ErrInvalidMaxFrameSize)
require.ErrorIs(t, validChannelConfig.Check(), ErrZeroMaxFrameSize)
// Set the config to have a max frame size less than 23.
validChannelConfig.MaxFrameSize = 22
require.ErrorIs(t, validChannelConfig.Check(), ErrSmallMaxFrameSize)
// Reset the config and test the Timeout error.
// NOTE: We should be fuzzing these values with the constraint that
......@@ -48,54 +55,64 @@ func TestConfigValidation(t *testing.T) {
require.ErrorIs(t, validChannelConfig.Check(), ErrInvalidChannelTimeout)
}
// addNonsenseBlock is a helper function that adds a nonsense block
// to the channel builder using the [channelBuilder.AddBlock] method.
func addNonsenseBlock(cb *channelBuilder) error {
lBlock := types.NewBlock(&types.Header{
// addMiniBlock adds a minimal valid L2 block to the channel builder using the
// channelBuilder.AddBlock method.
func addMiniBlock(cb *channelBuilder) error {
a := newMiniL2Block(0)
_, err := cb.AddBlock(a)
return err
}
// newMiniL2Block returns a minimal L2 block with a minimal valid L1InfoDeposit
// transaction as first transaction. Both blocks are minimal in the sense that
// most fields are left at defaults or are unset.
//
// If numTx > 0, that many empty DynamicFeeTxs will be added to the txs.
func newMiniL2Block(numTx int) *types.Block {
return newMiniL2BlockWithNumberParent(numTx, new(big.Int), (common.Hash{}))
}
// newMiniL2BlockWithNumberParent returns a minimal L2 block with a minimal valid L1InfoDeposit
// transaction as first transaction. Both blocks are minimal in the sense that
// most fields are left at defaults or are unset. Block number and parent hash
// will be set to the given parameters number and parent.
//
// If numTx > 0, that many empty DynamicFeeTxs will be added to the txs.
func newMiniL2BlockWithNumberParent(numTx int, number *big.Int, parent common.Hash) *types.Block {
l1Block := types.NewBlock(&types.Header{
BaseFee: big.NewInt(10),
Difficulty: common.Big0,
Number: big.NewInt(100),
}, nil, nil, nil, trie.NewStackTrie(nil))
l1InfoTx, err := derive.L1InfoDeposit(0, lBlock, eth.SystemConfig{}, false)
l1InfoTx, err := derive.L1InfoDeposit(0, l1Block, eth.SystemConfig{}, false)
if err != nil {
return err
panic(err)
}
txs := []*types.Transaction{types.NewTx(l1InfoTx)}
a := types.NewBlock(&types.Header{
Number: big.NewInt(0),
txs := make([]*types.Transaction, 0, 1+numTx)
txs = append(txs, types.NewTx(l1InfoTx))
for i := 0; i < numTx; i++ {
txs = append(txs, types.NewTx(&types.DynamicFeeTx{}))
}
return types.NewBlock(&types.Header{
Number: number,
ParentHash: parent,
}, txs, nil, nil, trie.NewStackTrie(nil))
err = cb.AddBlock(a)
return err
}
// buildTooLargeRlpEncodedBlockBatch is a helper function that builds a batch
// of blocks that are too large to be added to a channel.
func buildTooLargeRlpEncodedBlockBatch(cb *channelBuilder) error {
// Construct a block with way too many txs
lBlock := types.NewBlock(&types.Header{
BaseFee: big.NewInt(10),
Difficulty: common.Big0,
Number: big.NewInt(100),
}, nil, nil, nil, trie.NewStackTrie(nil))
l1InfoTx, _ := derive.L1InfoDeposit(0, lBlock, eth.SystemConfig{}, false)
txs := []*types.Transaction{types.NewTx(l1InfoTx)}
for i := 0; i < 500_000; i++ {
txData := make([]byte, 32)
_, _ = rand.Read(txData)
tx := types.NewTransaction(0, common.Address{}, big.NewInt(0), 0, big.NewInt(0), txData)
txs = append(txs, tx)
// addTooManyBlocks adds blocks to the channel until it hits an error,
// which is presumably ErrTooManyRLPBytes.
func addTooManyBlocks(cb *channelBuilder) error {
for i := 0; i < 10_000; i++ {
block := newMiniL2Block(100)
_, err := cb.AddBlock(block)
if err != nil {
return err
}
}
block := types.NewBlock(&types.Header{
Number: big.NewInt(0),
}, txs, nil, nil, trie.NewStackTrie(nil))
// Try to add the block to the channel builder
// This should fail since the block is too large
// When a batch is constructed from the block and
// then rlp encoded in the channel out, the size
// will exceed [derive.MaxRLPBytesPerChannel]
err := cb.AddBlock(block)
return err
return nil
}
// FuzzDurationTimeoutZeroMaxChannelDuration ensures that whenever the MaxChannelDuration
......@@ -387,7 +404,7 @@ func TestOutputFrames(t *testing.T) {
require.Equal(t, 0, readyBytes)
// Let's add a block
err = addNonsenseBlock(cb)
err = addMiniBlock(cb)
require.NoError(t, err)
// Check how many ready bytes
......@@ -417,17 +434,18 @@ func TestOutputFrames(t *testing.T) {
// TestMaxRLPBytesPerChannel tests the [channelBuilder.OutputFrames]
// function errors when the max RLP bytes per channel is reached.
func TestMaxRLPBytesPerChannel(t *testing.T) {
t.Parallel()
channelConfig := defaultTestChannelConfig
channelConfig.MaxFrameSize = 2
channelConfig.MaxFrameSize = derive.MaxRLPBytesPerChannel * 2
channelConfig.TargetFrameSize = derive.MaxRLPBytesPerChannel * 2
channelConfig.ApproxComprRatio = 1
// Construct the channel builder
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
require.False(t, cb.IsFull())
require.Equal(t, 0, cb.NumFrames())
// Add a block that overflows the [ChannelOut]
err = buildTooLargeRlpEncodedBlockBatch(cb)
err = addTooManyBlocks(cb)
require.ErrorIs(t, err, derive.ErrTooManyRLPBytes)
}
......@@ -458,7 +476,7 @@ func TestOutputFramesMaxFrameIndex(t *testing.T) {
a := types.NewBlock(&types.Header{
Number: big.NewInt(0),
}, txs, nil, nil, trie.NewStackTrie(nil))
err = cb.AddBlock(a)
_, err = cb.AddBlock(a)
if cb.IsFull() {
fullErr := cb.FullErr()
require.ErrorIs(t, fullErr, ErrMaxFrameIndex)
......@@ -490,7 +508,7 @@ func TestBuilderAddBlock(t *testing.T) {
require.NoError(t, err)
// Add a nonsense block to the channel builder
err = addNonsenseBlock(cb)
err = addMiniBlock(cb)
require.NoError(t, err)
// Check the fields reset in the AddBlock function
......@@ -501,7 +519,7 @@ func TestBuilderAddBlock(t *testing.T) {
// Since the channel output is full, the next call to AddBlock
// should return the channel out full error
err = addNonsenseBlock(cb)
err = addMiniBlock(cb)
require.ErrorIs(t, err, ErrInputTargetReached)
}
......@@ -516,7 +534,7 @@ func TestBuilderReset(t *testing.T) {
require.NoError(t, err)
// Add a nonsense block to the channel builder
err = addNonsenseBlock(cb)
err = addMiniBlock(cb)
require.NoError(t, err)
// Check the fields reset in the Reset function
......@@ -532,7 +550,7 @@ func TestBuilderReset(t *testing.T) {
require.NoError(t, err)
// Add another block to increment the block count
err = addNonsenseBlock(cb)
err = addMiniBlock(cb)
require.NoError(t, err)
// Check the fields reset in the Reset function
......@@ -618,3 +636,73 @@ func TestFramePublished(t *testing.T) {
// Now the timeout will be 1000
require.Equal(t, uint64(1000), cb.timeout)
}
func TestChannelBuilder_InputBytes(t *testing.T) {
require := require.New(t)
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
cb, _ := defaultChannelBuilderSetup(t)
require.Zero(cb.InputBytes())
var l int
for i := 0; i < 5; i++ {
block := newMiniL2Block(rng.Intn(32))
l += blockBatchRlpSize(t, block)
_, err := cb.AddBlock(block)
require.NoError(err)
require.Equal(cb.InputBytes(), l)
}
}
func TestChannelBuilder_OutputBytes(t *testing.T) {
require := require.New(t)
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
cfg := defaultTestChannelConfig
cfg.TargetFrameSize = 1000
cfg.MaxFrameSize = 1000
cfg.TargetNumFrames = 16
cfg.ApproxComprRatio = 1.0
cb, err := newChannelBuilder(cfg)
require.NoError(err, "newChannelBuilder")
require.Zero(cb.OutputBytes())
for {
block, _ := dtest.RandomL2Block(rng, rng.Intn(32))
_, err := cb.AddBlock(block)
if errors.Is(err, ErrInputTargetReached) {
break
}
require.NoError(err)
}
require.NoError(cb.OutputFrames())
require.True(cb.IsFull())
require.Greater(cb.NumFrames(), 1)
var flen int
for cb.HasFrame() {
f := cb.NextFrame()
flen += len(f.data)
}
require.Equal(cb.OutputBytes(), flen)
}
func defaultChannelBuilderSetup(t *testing.T) (*channelBuilder, ChannelConfig) {
t.Helper()
cfg := defaultTestChannelConfig
cb, err := newChannelBuilder(cfg)
require.NoError(t, err, "newChannelBuilder")
return cb, cfg
}
func blockBatchRlpSize(t *testing.T, b *types.Block) int {
t.Helper()
batch, _, err := derive.BlockToBatch(b)
require.NoError(t, err)
var buf bytes.Buffer
require.NoError(t, batch.EncodeRLP(&buf), "RLP-encoding batch")
return buf.Len()
}
......@@ -6,7 +6,9 @@ import (
"io"
"math"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
......@@ -22,8 +24,9 @@ var ErrReorg = errors.New("block does not extend existing chain")
// channel.
// Functions on channelManager are not safe for concurrent access.
type channelManager struct {
log log.Logger
cfg ChannelConfig
log log.Logger
metr metrics.Metricer
cfg ChannelConfig
// All blocks since the last request for new tx data.
blocks []*types.Block
......@@ -40,10 +43,12 @@ type channelManager struct {
confirmedTransactions map[txID]eth.BlockID
}
func NewChannelManager(log log.Logger, cfg ChannelConfig) *channelManager {
func NewChannelManager(log log.Logger, metr metrics.Metricer, cfg ChannelConfig) *channelManager {
return &channelManager{
log: log,
cfg: cfg,
log: log,
metr: metr,
cfg: cfg,
pendingTransactions: make(map[txID]txData),
confirmedTransactions: make(map[txID]eth.BlockID),
}
......@@ -71,6 +76,8 @@ func (s *channelManager) TxFailed(id txID) {
} else {
s.log.Warn("unknown transaction marked as failed", "id", id)
}
s.metr.RecordBatchTxFailed()
}
// TxConfirmed marks a transaction as confirmed on L1. Unfortunately even if all frames in
......@@ -78,7 +85,8 @@ func (s *channelManager) TxFailed(id txID) {
// resubmitted.
// This function may reset the pending channel if the pending channel has timed out.
func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) {
s.log.Trace("marked transaction as confirmed", "id", id, "block", inclusionBlock)
s.metr.RecordBatchTxSubmitted()
s.log.Debug("marked transaction as confirmed", "id", id, "block", inclusionBlock)
if _, ok := s.pendingTransactions[id]; !ok {
s.log.Warn("unknown transaction marked as confirmed", "id", id, "block", inclusionBlock)
// TODO: This can occur if we clear the channel while there are still pending transactions
......@@ -92,13 +100,15 @@ func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) {
// If this channel timed out, put the pending blocks back into the local saved blocks
// and then reset this state so it can try to build a new channel.
if s.pendingChannelIsTimedOut() {
s.log.Warn("Channel timed out", "chID", s.pendingChannel.ID())
s.metr.RecordChannelTimedOut(s.pendingChannel.ID())
s.log.Warn("Channel timed out", "id", s.pendingChannel.ID())
s.blocks = append(s.pendingChannel.Blocks(), s.blocks...)
s.clearPendingChannel()
}
// If we are done with this channel, record that.
if s.pendingChannelIsFullySubmitted() {
s.log.Info("Channel is fully submitted", "chID", s.pendingChannel.ID())
s.metr.RecordChannelFullySubmitted(s.pendingChannel.ID())
s.log.Info("Channel is fully submitted", "id", s.pendingChannel.ID())
s.clearPendingChannel()
}
}
......@@ -194,8 +204,8 @@ func (s *channelManager) TxData(l1Head eth.BlockID) (txData, error) {
// all pending blocks be included in this channel for submission.
s.registerL1Block(l1Head)
if err := s.pendingChannel.OutputFrames(); err != nil {
return txData{}, fmt.Errorf("creating frames with channel builder: %w", err)
if err := s.outputFrames(); err != nil {
return txData{}, err
}
return s.nextTxData()
......@@ -211,7 +221,11 @@ func (s *channelManager) ensurePendingChannel(l1Head eth.BlockID) error {
return fmt.Errorf("creating new channel: %w", err)
}
s.pendingChannel = cb
s.log.Info("Created channel", "chID", cb.ID(), "l1Head", l1Head)
s.log.Info("Created channel",
"id", cb.ID(),
"l1Head", l1Head,
"blocks_pending", len(s.blocks))
s.metr.RecordChannelOpened(cb.ID(), len(s.blocks))
return nil
}
......@@ -229,28 +243,27 @@ func (s *channelManager) registerL1Block(l1Head eth.BlockID) {
// processBlocks adds blocks from the blocks queue to the pending channel until
// either the queue is exhausted or the channel is full.
func (s *channelManager) processBlocks() error {
var blocksAdded int
var _chFullErr *ChannelFullError // throw away, just for type checking
var (
blocksAdded int
_chFullErr *ChannelFullError // throw away, just for type checking
latestL2ref eth.L2BlockRef
)
for i, block := range s.blocks {
if err := s.pendingChannel.AddBlock(block); errors.As(err, &_chFullErr) {
l1info, err := s.pendingChannel.AddBlock(block)
if errors.As(err, &_chFullErr) {
// current block didn't get added because channel is already full
break
} else if err != nil {
return fmt.Errorf("adding block[%d] to channel builder: %w", i, err)
}
blocksAdded += 1
latestL2ref = l2BlockRefFromBlockAndL1Info(block, l1info)
// current block got added but channel is now full
if s.pendingChannel.IsFull() {
break
}
}
s.log.Debug("Added blocks to channel",
"blocks_added", blocksAdded,
"channel_full", s.pendingChannel.IsFull(),
"blocks_pending", len(s.blocks)-blocksAdded,
"input_bytes", s.pendingChannel.InputBytes(),
)
if blocksAdded == len(s.blocks) {
// all blocks processed, reuse slice
s.blocks = s.blocks[:0]
......@@ -258,6 +271,53 @@ func (s *channelManager) processBlocks() error {
// remove processed blocks
s.blocks = s.blocks[blocksAdded:]
}
s.metr.RecordL2BlocksAdded(latestL2ref,
blocksAdded,
len(s.blocks),
s.pendingChannel.InputBytes(),
s.pendingChannel.ReadyBytes())
s.log.Debug("Added blocks to channel",
"blocks_added", blocksAdded,
"blocks_pending", len(s.blocks),
"channel_full", s.pendingChannel.IsFull(),
"input_bytes", s.pendingChannel.InputBytes(),
"ready_bytes", s.pendingChannel.ReadyBytes(),
)
return nil
}
func (s *channelManager) outputFrames() error {
if err := s.pendingChannel.OutputFrames(); err != nil {
return fmt.Errorf("creating frames with channel builder: %w", err)
}
if !s.pendingChannel.IsFull() {
return nil
}
inBytes, outBytes := s.pendingChannel.InputBytes(), s.pendingChannel.OutputBytes()
s.metr.RecordChannelClosed(
s.pendingChannel.ID(),
len(s.blocks),
s.pendingChannel.NumFrames(),
inBytes,
outBytes,
s.pendingChannel.FullErr(),
)
var comprRatio float64
if inBytes > 0 {
comprRatio = float64(outBytes) / float64(inBytes)
}
s.log.Info("Channel closed",
"id", s.pendingChannel.ID(),
"blocks_pending", len(s.blocks),
"num_frames", s.pendingChannel.NumFrames(),
"input_bytes", inBytes,
"output_bytes", outBytes,
"full_reason", s.pendingChannel.FullErr(),
"compr_ratio", comprRatio,
)
return nil
}
......@@ -273,3 +333,14 @@ func (s *channelManager) AddL2Block(block *types.Block) error {
return nil
}
func l2BlockRefFromBlockAndL1Info(block *types.Block, l1info derive.L1BlockInfo) eth.L2BlockRef {
return eth.L2BlockRef{
Hash: block.Hash(),
Number: block.NumberU64(),
ParentHash: block.ParentHash(),
Time: block.Time(),
L1Origin: eth.BlockID{Hash: l1info.BlockHash, Number: l1info.Number},
SequenceNumber: l1info.SequenceNumber,
}
}
......@@ -7,6 +7,7 @@ import (
"testing"
"time"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
derivetest "github.com/ethereum-optimism/optimism/op-node/rollup/derive/test"
......@@ -14,7 +15,6 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/trie"
"github.com/stretchr/testify/require"
)
......@@ -23,7 +23,7 @@ import (
func TestPendingChannelTimeout(t *testing.T) {
// Create a new channel manager with a ChannelTimeout
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, ChannelConfig{
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{
ChannelTimeout: 100,
})
......@@ -67,7 +67,7 @@ func TestPendingChannelTimeout(t *testing.T) {
// detects a reorg when it has cached L1 blocks.
func TestChannelManagerReturnsErrReorg(t *testing.T) {
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, ChannelConfig{})
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{})
a := types.NewBlock(&types.Header{
Number: big.NewInt(0),
......@@ -101,29 +101,17 @@ func TestChannelManagerReturnsErrReorg(t *testing.T) {
// detects a reorg even if it does not have any blocks inside it.
func TestChannelManagerReturnsErrReorgWhenDrained(t *testing.T) {
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, ChannelConfig{
TargetFrameSize: 0,
MaxFrameSize: 120_000,
ApproxComprRatio: 1.0,
})
l1Block := types.NewBlock(&types.Header{
BaseFee: big.NewInt(10),
Difficulty: common.Big0,
Number: big.NewInt(100),
}, nil, nil, nil, trie.NewStackTrie(nil))
l1InfoTx, err := derive.L1InfoDeposit(0, l1Block, eth.SystemConfig{}, false)
require.NoError(t, err)
txs := []*types.Transaction{types.NewTx(l1InfoTx)}
m := NewChannelManager(log, metrics.NoopMetrics,
ChannelConfig{
TargetFrameSize: 0,
MaxFrameSize: 120_000,
ApproxComprRatio: 1.0,
})
a := types.NewBlock(&types.Header{
Number: big.NewInt(0),
}, txs, nil, nil, trie.NewStackTrie(nil))
x := types.NewBlock(&types.Header{
Number: big.NewInt(1),
ParentHash: common.Hash{0xff},
}, txs, nil, nil, trie.NewStackTrie(nil))
a := newMiniL2Block(0)
x := newMiniL2BlockWithNumberParent(0, big.NewInt(1), common.Hash{0xff})
err = m.AddL2Block(a)
err := m.AddL2Block(a)
require.NoError(t, err)
_, err = m.TxData(eth.BlockID{})
......@@ -138,7 +126,7 @@ func TestChannelManagerReturnsErrReorgWhenDrained(t *testing.T) {
// TestChannelManagerNextTxData checks the nextTxData function.
func TestChannelManagerNextTxData(t *testing.T) {
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, ChannelConfig{})
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{})
// Nil pending channel should return EOF
returnedTxData, err := m.nextTxData()
......@@ -181,7 +169,7 @@ func TestClearChannelManager(t *testing.T) {
// Create a channel manager
log := testlog.Logger(t, log.LvlCrit)
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
m := NewChannelManager(log, ChannelConfig{
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{
// Need to set the channel timeout here so we don't clear pending
// channels on confirmation. This would result in [TxConfirmed]
// clearing confirmed transactions, and reseting the pendingChannels map
......@@ -254,7 +242,7 @@ func TestClearChannelManager(t *testing.T) {
func TestChannelManagerTxConfirmed(t *testing.T) {
// Create a channel manager
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, ChannelConfig{
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{
// Need to set the channel timeout here so we don't clear pending
// channels on confirmation. This would result in [TxConfirmed]
// clearing confirmed transactions, and reseting the pendingChannels map
......@@ -308,7 +296,7 @@ func TestChannelManagerTxConfirmed(t *testing.T) {
func TestChannelManagerTxFailed(t *testing.T) {
// Create a channel manager
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, ChannelConfig{})
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{})
// Let's add a valid pending transaction to the channel
// manager so we can demonstrate correctness
......@@ -351,11 +339,12 @@ func TestChannelManager_TxResend(t *testing.T) {
require := require.New(t)
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
log := testlog.Logger(t, log.LvlError)
m := NewChannelManager(log, ChannelConfig{
TargetFrameSize: 0,
MaxFrameSize: 120_000,
ApproxComprRatio: 1.0,
})
m := NewChannelManager(log, metrics.NoopMetrics,
ChannelConfig{
TargetFrameSize: 0,
MaxFrameSize: 120_000,
ApproxComprRatio: 1.0,
})
a, _ := derivetest.RandomL2Block(rng, 4)
......
......@@ -9,6 +9,7 @@ import (
"github.com/urfave/cli"
"github.com/ethereum-optimism/optimism/op-batcher/flags"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-batcher/rpc"
"github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/sources"
......@@ -20,18 +21,21 @@ import (
)
type Config struct {
log log.Logger
L1Client *ethclient.Client
L2Client *ethclient.Client
RollupNode *sources.RollupClient
PollInterval time.Duration
log log.Logger
metr metrics.Metricer
L1Client *ethclient.Client
L2Client *ethclient.Client
RollupNode *sources.RollupClient
PollInterval time.Duration
From common.Address
TxManagerConfig txmgr.Config
From common.Address
// RollupConfig is queried at startup
Rollup *rollup.Config
// Channel creation parameters
// Channel builder parameters
Channel ChannelConfig
}
......
......@@ -10,7 +10,9 @@ import (
"sync"
"time"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opcrypto "github.com/ethereum-optimism/optimism/op-service/crypto"
"github.com/ethereum-optimism/optimism/op-service/txmgr"
"github.com/ethereum/go-ethereum/core/types"
......@@ -34,13 +36,14 @@ type BatchSubmitter struct {
// lastStoredBlock is the last block loaded into `state`. If it is empty it should be set to the l2 safe head.
lastStoredBlock eth.BlockID
lastL1Tip eth.L1BlockRef
state *channelManager
}
// NewBatchSubmitterFromCLIConfig initializes the BatchSubmitter, gathering any resources
// that will be needed during operation.
func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger) (*BatchSubmitter, error) {
func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger, m metrics.Metricer) (*BatchSubmitter, error) {
ctx := context.Background()
signer, fromAddress, err := opcrypto.SignerFactoryFromConfig(l, cfg.PrivateKey, cfg.Mnemonic, cfg.SequencerHDPath, cfg.SignerConfig)
......@@ -104,12 +107,12 @@ func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger) (*BatchSubmitte
return nil, err
}
return NewBatchSubmitter(ctx, batcherCfg, l)
return NewBatchSubmitter(ctx, batcherCfg, l, m)
}
// NewBatchSubmitter initializes the BatchSubmitter, gathering any resources
// that will be needed during operation.
func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger) (*BatchSubmitter, error) {
func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger, m metrics.Metricer) (*BatchSubmitter, error) {
balance, err := cfg.L1Client.BalanceAt(ctx, cfg.From, nil)
if err != nil {
return nil, err
......@@ -118,12 +121,14 @@ func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger) (*BatchSub
cfg.log = l
cfg.log.Info("creating batch submitter", "submitter_addr", cfg.From, "submitter_bal", balance)
cfg.metr = m
return &BatchSubmitter{
Config: cfg,
txMgr: NewTransactionManager(l,
cfg.TxManagerConfig, cfg.Rollup.BatchInboxAddress, cfg.Rollup.L1ChainID,
cfg.From, cfg.L1Client),
state: NewChannelManager(l, cfg.Channel),
state: NewChannelManager(l, m, cfg.Channel),
}, nil
}
......@@ -187,13 +192,16 @@ func (l *BatchSubmitter) Stop() error {
func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) {
start, end, err := l.calculateL2BlockRangeToStore(ctx)
if err != nil {
l.log.Trace("was not able to calculate L2 block range", "err", err)
l.log.Warn("Error calculating L2 block range", "err", err)
return
} else if start.Number == end.Number {
return
}
var latestBlock *types.Block
// Add all blocks to "state"
for i := start.Number + 1; i < end.Number+1; i++ {
id, err := l.loadBlockIntoState(ctx, i)
block, err := l.loadBlockIntoState(ctx, i)
if errors.Is(err, ErrReorg) {
l.log.Warn("Found L2 reorg", "block_number", i)
l.state.Clear()
......@@ -203,24 +211,34 @@ func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) {
l.log.Warn("failed to load block into state", "err", err)
return
}
l.lastStoredBlock = id
l.lastStoredBlock = eth.ToBlockID(block)
latestBlock = block
}
l2ref, err := derive.L2BlockToBlockRef(latestBlock, &l.Rollup.Genesis)
if err != nil {
l.log.Warn("Invalid L2 block loaded into state", "err", err)
return
}
l.metr.RecordL2BlocksLoaded(l2ref)
}
// loadBlockIntoState fetches & stores a single block into `state`. It returns the block it loaded.
func (l *BatchSubmitter) loadBlockIntoState(ctx context.Context, blockNumber uint64) (eth.BlockID, error) {
func (l *BatchSubmitter) loadBlockIntoState(ctx context.Context, blockNumber uint64) (*types.Block, error) {
ctx, cancel := context.WithTimeout(ctx, networkTimeout)
defer cancel()
block, err := l.L2Client.BlockByNumber(ctx, new(big.Int).SetUint64(blockNumber))
cancel()
if err != nil {
return eth.BlockID{}, err
return nil, fmt.Errorf("getting L2 block: %w", err)
}
if err := l.state.AddL2Block(block); err != nil {
return eth.BlockID{}, err
return nil, fmt.Errorf("adding L2 block to state: %w", err)
}
id := eth.ToBlockID(block)
l.log.Info("added L2 block to local state", "block", id, "tx_count", len(block.Transactions()), "time", block.Time())
return id, nil
l.log.Info("added L2 block to local state", "block", eth.ToBlockID(block), "tx_count", len(block.Transactions()), "time", block.Time())
return block, nil
}
// calculateL2BlockRangeToStore determines the range (start,end] that should be loaded into the local state.
......@@ -283,6 +301,7 @@ func (l *BatchSubmitter) loop() {
l.log.Error("Failed to query L1 tip", "error", err)
break
}
l.recordL1Tip(l1tip)
// Collect next transaction data
txdata, err := l.state.TxData(l1tip.ID())
......@@ -316,6 +335,14 @@ func (l *BatchSubmitter) loop() {
}
}
func (l *BatchSubmitter) recordL1Tip(l1tip eth.L1BlockRef) {
if l.lastL1Tip == l1tip {
return
}
l.lastL1Tip = l1tip
l.metr.RecordLatestL1Block(l1tip)
}
func (l *BatchSubmitter) recordFailedTx(id txID, err error) {
l.log.Warn("Failed to send transaction", "err", err)
l.state.TxFailed(id)
......
package metrics
import (
"context"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
)
const Namespace = "op_batcher"
type Metricer interface {
RecordInfo(version string)
RecordUp()
// Records all L1 and L2 block events
opmetrics.RefMetricer
RecordLatestL1Block(l1ref eth.L1BlockRef)
RecordL2BlocksLoaded(l2ref eth.L2BlockRef)
RecordChannelOpened(id derive.ChannelID, numPendingBlocks int)
RecordL2BlocksAdded(l2ref eth.L2BlockRef, numBlocksAdded, numPendingBlocks, inputBytes, outputComprBytes int)
RecordChannelClosed(id derive.ChannelID, numPendingBlocks int, numFrames int, inputBytes int, outputComprBytes int, reason error)
RecordChannelFullySubmitted(id derive.ChannelID)
RecordChannelTimedOut(id derive.ChannelID)
RecordBatchTxSubmitted()
RecordBatchTxSuccess()
RecordBatchTxFailed()
Document() []opmetrics.DocumentedMetric
}
type Metrics struct {
ns string
registry *prometheus.Registry
factory opmetrics.Factory
opmetrics.RefMetrics
Info prometheus.GaugeVec
Up prometheus.Gauge
// label by opened, closed, fully_submitted, timed_out
ChannelEvs opmetrics.EventVec
PendingBlocksCount prometheus.GaugeVec
BlocksAddedCount prometheus.Gauge
ChannelInputBytes prometheus.GaugeVec
ChannelReadyBytes prometheus.Gauge
ChannelOutputBytes prometheus.Gauge
ChannelClosedReason prometheus.Gauge
ChannelNumFrames prometheus.Gauge
ChannelComprRatio prometheus.Histogram
BatcherTxEvs opmetrics.EventVec
}
var _ Metricer = (*Metrics)(nil)
func NewMetrics(procName string) *Metrics {
if procName == "" {
procName = "default"
}
ns := Namespace + "_" + procName
registry := opmetrics.NewRegistry()
factory := opmetrics.With(registry)
return &Metrics{
ns: ns,
registry: registry,
factory: factory,
RefMetrics: opmetrics.MakeRefMetrics(ns, factory),
Info: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "info",
Help: "Pseudo-metric tracking version and config info",
}, []string{
"version",
}),
Up: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "up",
Help: "1 if the op-batcher has finished starting up",
}),
ChannelEvs: opmetrics.NewEventVec(factory, ns, "channel", "Channel", []string{"stage"}),
PendingBlocksCount: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "pending_blocks_count",
Help: "Number of pending blocks, not added to a channel yet.",
}, []string{"stage"}),
BlocksAddedCount: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "blocks_added_count",
Help: "Total number of blocks added to current channel.",
}),
ChannelInputBytes: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "input_bytes",
Help: "Number of input bytes to a channel.",
}, []string{"stage"}),
ChannelReadyBytes: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "ready_bytes",
Help: "Number of bytes ready in the compression buffer.",
}),
ChannelOutputBytes: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "output_bytes",
Help: "Number of compressed output bytes from a channel.",
}),
ChannelClosedReason: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "channel_closed_reason",
Help: "Pseudo-metric to record the reason a channel got closed.",
}),
ChannelNumFrames: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "channel_num_frames",
Help: "Total number of frames of closed channel.",
}),
ChannelComprRatio: factory.NewHistogram(prometheus.HistogramOpts{
Namespace: ns,
Name: "channel_compr_ratio",
Help: "Compression ratios of closed channel.",
Buckets: append([]float64{0.1, 0.2}, prometheus.LinearBuckets(0.3, 0.05, 14)...),
}),
BatcherTxEvs: opmetrics.NewEventVec(factory, ns, "batcher_tx", "BatcherTx", []string{"stage"}),
}
}
func (m *Metrics) Serve(ctx context.Context, host string, port int) error {
return opmetrics.ListenAndServe(ctx, m.registry, host, port)
}
func (m *Metrics) Document() []opmetrics.DocumentedMetric {
return m.factory.Document()
}
func (m *Metrics) StartBalanceMetrics(ctx context.Context,
l log.Logger, client *ethclient.Client, account common.Address) {
opmetrics.LaunchBalanceMetrics(ctx, l, m.registry, m.ns, client, account)
}
// RecordInfo sets a pseudo-metric that contains versioning and
// config info for the op-batcher.
func (m *Metrics) RecordInfo(version string) {
m.Info.WithLabelValues(version).Set(1)
}
// RecordUp sets the up metric to 1.
func (m *Metrics) RecordUp() {
prometheus.MustRegister()
m.Up.Set(1)
}
const (
StageLoaded = "loaded"
StageOpened = "opened"
StageAdded = "added"
StageClosed = "closed"
StageFullySubmitted = "fully_submitted"
StageTimedOut = "timed_out"
TxStageSubmitted = "submitted"
TxStageSuccess = "success"
TxStageFailed = "failed"
)
func (m *Metrics) RecordLatestL1Block(l1ref eth.L1BlockRef) {
m.RecordL1Ref("latest", l1ref)
}
// RecordL2BlocksLoaded should be called when a new L2 block was loaded into the
// channel manager (but not processed yet).
func (m *Metrics) RecordL2BlocksLoaded(l2ref eth.L2BlockRef) {
m.RecordL2Ref(StageLoaded, l2ref)
}
func (m *Metrics) RecordChannelOpened(id derive.ChannelID, numPendingBlocks int) {
m.ChannelEvs.Record(StageOpened)
m.BlocksAddedCount.Set(0) // reset
m.PendingBlocksCount.WithLabelValues(StageOpened).Set(float64(numPendingBlocks))
}
// RecordL2BlocksAdded should be called when L2 blocks are added to the channel
// builder, with the latest added block.
func (m *Metrics) RecordL2BlocksAdded(l2ref eth.L2BlockRef, numBlocksAdded, numPendingBlocks, inputBytes, outputComprBytes int) {
m.RecordL2Ref(StageAdded, l2ref)
m.BlocksAddedCount.Add(float64(numBlocksAdded))
m.PendingBlocksCount.WithLabelValues(StageAdded).Set(float64(numPendingBlocks))
m.ChannelInputBytes.WithLabelValues(StageAdded).Set(float64(inputBytes))
m.ChannelReadyBytes.Set(float64(outputComprBytes))
}
func (m *Metrics) RecordChannelClosed(id derive.ChannelID, numPendingBlocks int, numFrames int, inputBytes int, outputComprBytes int, reason error) {
m.ChannelEvs.Record(StageClosed)
m.PendingBlocksCount.WithLabelValues(StageClosed).Set(float64(numPendingBlocks))
m.ChannelNumFrames.Set(float64(numFrames))
m.ChannelInputBytes.WithLabelValues(StageClosed).Set(float64(inputBytes))
m.ChannelOutputBytes.Set(float64(outputComprBytes))
var comprRatio float64
if inputBytes > 0 {
comprRatio = float64(outputComprBytes) / float64(inputBytes)
}
m.ChannelComprRatio.Observe(comprRatio)
m.ChannelClosedReason.Set(float64(ClosedReasonToNum(reason)))
}
func ClosedReasonToNum(reason error) int {
// CLI-3640
return 0
}
func (m *Metrics) RecordChannelFullySubmitted(id derive.ChannelID) {
m.ChannelEvs.Record(StageFullySubmitted)
}
func (m *Metrics) RecordChannelTimedOut(id derive.ChannelID) {
m.ChannelEvs.Record(StageTimedOut)
}
func (m *Metrics) RecordBatchTxSubmitted() {
m.BatcherTxEvs.Record(TxStageSubmitted)
}
func (m *Metrics) RecordBatchTxSuccess() {
m.BatcherTxEvs.Record(TxStageSuccess)
}
func (m *Metrics) RecordBatchTxFailed() {
m.BatcherTxEvs.Record(TxStageFailed)
}
package metrics
import (
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
)
type noopMetrics struct{ opmetrics.NoopRefMetrics }
var NoopMetrics Metricer = new(noopMetrics)
func (*noopMetrics) Document() []opmetrics.DocumentedMetric { return nil }
func (*noopMetrics) RecordInfo(version string) {}
func (*noopMetrics) RecordUp() {}
func (*noopMetrics) RecordLatestL1Block(l1ref eth.L1BlockRef) {}
func (*noopMetrics) RecordL2BlocksLoaded(eth.L2BlockRef) {}
func (*noopMetrics) RecordChannelOpened(derive.ChannelID, int) {}
func (*noopMetrics) RecordL2BlocksAdded(eth.L2BlockRef, int, int, int, int) {}
func (*noopMetrics) RecordChannelClosed(derive.ChannelID, int, int, int, int, error) {}
func (*noopMetrics) RecordChannelFullySubmitted(derive.ChannelID) {}
func (*noopMetrics) RecordChannelTimedOut(derive.ChannelID) {}
func (*noopMetrics) RecordBatchTxSubmitted() {}
func (*noopMetrics) RecordBatchTxSuccess() {}
func (*noopMetrics) RecordBatchTxFailed() {}
......@@ -6,6 +6,9 @@ import (
"errors"
"fmt"
"math/big"
"math/rand"
"github.com/ethereum-optimism/optimism/op-chain-ops/util"
"github.com/ethereum-optimism/optimism/op-chain-ops/ether"
......@@ -24,11 +27,21 @@ import (
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
)
// MaxSlotChecks is the maximum number of storage slots to check
// when validating the untouched predeploys. This limit is in place
// to bound execution time of the migration. We can parallelize this
// in the future.
const MaxSlotChecks = 1000
const (
// MaxPredeploySlotChecks is the maximum number of storage slots to check
// when validating the untouched predeploys. This limit is in place
// to bound execution time of the migration. We can parallelize this
// in the future.
MaxPredeploySlotChecks = 1000
// MaxOVMETHSlotChecks is the maximum number of OVM ETH storage slots to check
// when validating the OVM ETH migration.
MaxOVMETHSlotChecks = 5000
// OVMETHSampleLikelihood is the probability that a storage slot will be checked
// when validating the OVM ETH migration.
OVMETHSampleLikelihood = 0.1
)
type StorageCheckMap = map[common.Hash]common.Hash
......@@ -146,7 +159,7 @@ func PostCheckMigratedDB(
}
log.Info("checked L1Block")
if err := PostCheckLegacyETH(db); err != nil {
if err := PostCheckLegacyETH(prevDB, db, migrationData); err != nil {
return err
}
log.Info("checked legacy eth")
......@@ -208,7 +221,7 @@ func PostCheckUntouchables(udb state.Database, currDB *state.StateDB, prevRoot c
if err := prevDB.ForEachStorage(addr, func(key, value common.Hash) bool {
count++
expSlots[key] = value
return count < MaxSlotChecks
return count < MaxPredeploySlotChecks
}); err != nil {
return fmt.Errorf("error iterating over storage: %w", err)
}
......@@ -363,14 +376,94 @@ func PostCheckPredeployStorage(db vm.StateDB, finalSystemOwner common.Address, p
}
// PostCheckLegacyETH checks that the legacy eth migration was successful.
// It currently only checks that the total supply was set to 0.
func PostCheckLegacyETH(db vm.StateDB) error {
// It checks that the total supply was set to 0, and randomly samples storage
// slots pre- and post-migration to ensure that balances were correctly migrated.
func PostCheckLegacyETH(prevDB, migratedDB *state.StateDB, migrationData crossdomain.MigrationData) error {
allowanceSlots := make(map[common.Hash]bool)
addresses := make(map[common.Hash]common.Address)
log.Info("recomputing witness data")
for _, allowance := range migrationData.OvmAllowances {
key := ether.CalcAllowanceStorageKey(allowance.From, allowance.To)
allowanceSlots[key] = true
}
for _, addr := range migrationData.Addresses() {
addresses[ether.CalcOVMETHStorageKey(addr)] = addr
}
log.Info("checking legacy eth fixed storage slots")
for slot, expValue := range LegacyETHCheckSlots {
actValue := db.GetState(predeploys.LegacyERC20ETHAddr, slot)
actValue := migratedDB.GetState(predeploys.LegacyERC20ETHAddr, slot)
if actValue != expValue {
return fmt.Errorf("expected slot %s on %s to be %s, but got %s", slot, predeploys.LegacyERC20ETHAddr, expValue, actValue)
}
}
var count int
threshold := 100 - int(100*OVMETHSampleLikelihood)
progress := util.ProgressLogger(100, "checking legacy eth balance slots")
var innerErr error
err := prevDB.ForEachStorage(predeploys.LegacyERC20ETHAddr, func(key, value common.Hash) bool {
val := rand.Intn(100)
// Randomly sample storage slots: skip when val is below the threshold so
// that each slot is checked with probability OVMETHSampleLikelihood.
if val < threshold {
return true
}
// Ignore fixed slots.
if _, ok := LegacyETHCheckSlots[key]; ok {
return true
}
// Ignore allowances.
if allowanceSlots[key] {
return true
}
// Grab the address, and bail if we can't find it.
addr, ok := addresses[key]
if !ok {
innerErr = fmt.Errorf("unknown OVM_ETH storage slot %s", key)
return false
}
// Pull out the pre-migration OVM ETH balance, and the state balance.
ovmETHBalance := value.Big()
ovmETHStateBalance := prevDB.GetBalance(addr)
// Pre-migration state balance should be zero.
if ovmETHStateBalance.Cmp(common.Big0) != 0 {
innerErr = fmt.Errorf("expected OVM_ETH pre-migration state balance for %s to be 0, but got %s", addr, ovmETHStateBalance)
return false
}
// Migrated state balance should equal the OVM ETH balance.
migratedStateBalance := migratedDB.GetBalance(addr)
if migratedStateBalance.Cmp(ovmETHBalance) != 0 {
innerErr = fmt.Errorf("expected OVM_ETH post-migration state balance for %s to be %s, but got %s", addr, ovmETHStateBalance, migratedStateBalance)
return false
}
// Migrated OVM ETH balance should be zero, since we wipe the slots.
migratedBalance := migratedDB.GetState(predeploys.LegacyERC20ETHAddr, key)
if migratedBalance.Big().Cmp(common.Big0) != 0 {
innerErr = fmt.Errorf("expected OVM_ETH post-migration ERC20 balance for %s to be 0, but got %s", addr, migratedBalance)
return false
}
progress()
count++
// Stop iterating if we've checked enough slots.
return count < MaxOVMETHSlotChecks
})
if err != nil {
return fmt.Errorf("error iterating over OVM_ETH storage: %w", err)
}
if innerErr != nil {
return innerErr
}
return nil
}
......@@ -508,7 +601,9 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
// Now, iterate over each legacy withdrawal and check if there is a corresponding
// migrated withdrawal.
var innerErr error
progress := util.ProgressLogger(1000, "checking withdrawals")
err = db.ForEachStorage(predeploys.LegacyMessagePasserAddr, func(key, value common.Hash) bool {
progress()
// The legacy message passer becomes a proxy during the migration,
// so we need to ignore the implementation/admin slots.
if key == ImplementationSlot || key == AdminSlot {
......
......@@ -76,6 +76,8 @@ type DeployConfig struct {
ProxyAdminOwner common.Address `json:"proxyAdminOwner"`
// Owner of the system on L1
FinalSystemOwner common.Address `json:"finalSystemOwner"`
// GUARDIAN account in the OptimismPortal
PortalGuardian common.Address `json:"portalGuardian"`
// L1 recipient of fees accumulated in the BaseFeeVault
BaseFeeVaultRecipient common.Address `json:"baseFeeVaultRecipient"`
// L1 recipient of fees accumulated in the L1FeeVault
......@@ -128,6 +130,9 @@ func (d *DeployConfig) Check() error {
if d.FinalizationPeriodSeconds == 0 {
return fmt.Errorf("%w: FinalizationPeriodSeconds cannot be 0", ErrInvalidDeployConfig)
}
if d.PortalGuardian == (common.Address{}) {
return fmt.Errorf("%w: PortalGuardian cannot be address(0)", ErrInvalidDeployConfig)
}
if d.MaxSequencerDrift == 0 {
return fmt.Errorf("%w: MaxSequencerDrift cannot be 0", ErrInvalidDeployConfig)
}
......
......@@ -143,16 +143,6 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
filteredWithdrawals = crossdomain.SafeFilteredWithdrawals(unfilteredWithdrawals)
}
// We also need to verify that we have all of the storage slots for the LegacyERC20ETH contract
// that we expect to have. An error will be thrown if there are any missing storage slots.
// Unlike with withdrawals, we do not need to filter out extra addresses because their balances
// would necessarily be zero and therefore not affect the migration.
log.Info("Checking addresses...", "no-check", noCheck)
addrs, err := ether.PreCheckBalances(dbFactory, migrationData.Addresses(), migrationData.OvmAllowances, int(config.L1ChainID), noCheck)
if err != nil {
return nil, fmt.Errorf("addresses mismatch: %w", err)
}
// At this point we've fully verified the witness data for the migration, so we can begin the
// actual migration process. This involves modifying parts of the legacy database and inserting
// a transition block.
......@@ -202,11 +192,12 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
}
// Finally we migrate the balances held inside the LegacyERC20ETH contract into the state trie.
// We also delete the balances from the LegacyERC20ETH contract.
// We also delete the balances from the LegacyERC20ETH contract. Unlike the steps above, this step
// combines the check and mutation steps into one in order to reduce migration time.
log.Info("Starting to migrate ERC20 ETH")
err = ether.MigrateLegacyETH(db, addrs, int(config.L1ChainID), noCheck)
err = ether.MigrateBalances(db, dbFactory, migrationData.Addresses(), migrationData.OvmAllowances, int(config.L1ChainID), noCheck)
if err != nil {
return nil, fmt.Errorf("cannot migrate legacy eth: %w", err)
return nil, fmt.Errorf("failed to migrate OVM_ETH: %w", err)
}
// We're done messing around with the database, so we can now commit the changes to the DB.
......
......@@ -295,7 +295,7 @@ func deployL1Contracts(config *DeployConfig, backend *backends.SimulatedBackend)
Name: "OptimismPortal",
Args: []interface{}{
predeploys.DevL2OutputOracleAddr,
config.FinalSystemOwner,
config.PortalGuardian,
true, // _paused
},
},
......
......@@ -19,6 +19,7 @@
"l1GenesisBlockGasLimit": "0xe4e1c0",
"l1GenesisBlockDifficulty": "0x1",
"finalSystemOwner": "0x0000000000000000000000000000000000000111",
"portalGuardian": "0x0000000000000000000000000000000000000112",
"finalizationPeriodSeconds": 2,
"l1GenesisBlockMixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"l1GenesisBlockCoinbase": "0x0000000000000000000000000000000000000000",
......
package op_e2e
import (
"math"
"math/big"
"testing"
"time"
"github.com/ethereum-optimism/optimism/op-bindings/bindings"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum-optimism/optimism/op-node/testlog"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/stretchr/testify/require"
)
// TestERC20BridgeDeposits tests the L1StandardBridge's ERC20 bridging
// functionality.
func TestERC20BridgeDeposits(t *testing.T) {
parallel(t)
if !verboseGethNodes {
log.Root().SetHandler(log.DiscardHandler())
}
cfg := DefaultSystemConfig(t)
sys, err := cfg.Start()
require.Nil(t, err, "Error starting up system")
defer sys.Close()
log := testlog.Logger(t, log.LvlInfo)
log.Info("genesis", "l2", sys.RollupConfig.Genesis.L2, "l1", sys.RollupConfig.Genesis.L1, "l2_time", sys.RollupConfig.Genesis.L2Time)
l1Client := sys.Clients["l1"]
l2Client := sys.Clients["sequencer"]
opts, err := bind.NewKeyedTransactorWithChainID(sys.cfg.Secrets.Alice, cfg.L1ChainIDBig())
require.Nil(t, err)
// Deploy WETH9
weth9Address, tx, WETH9, err := bindings.DeployWETH9(opts, l1Client)
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err, "Waiting for deposit tx on L1")
// Get some WETH
opts.Value = big.NewInt(params.Ether)
tx, err = WETH9.Fallback(opts, []byte{})
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err)
opts.Value = nil
wethBalance, err := WETH9.BalanceOf(&bind.CallOpts{}, opts.From)
require.NoError(t, err)
require.Equal(t, big.NewInt(params.Ether), wethBalance)
// Deploy the L2 counterpart of WETH9 via the OptimismMintableERC20Factory
l2Opts, err := bind.NewKeyedTransactorWithChainID(sys.cfg.Secrets.Alice, cfg.L2ChainIDBig())
require.NoError(t, err)
optimismMintableTokenFactory, err := bindings.NewOptimismMintableERC20Factory(predeploys.OptimismMintableERC20FactoryAddr, l2Client)
require.NoError(t, err)
tx, err = optimismMintableTokenFactory.CreateOptimismMintableERC20(l2Opts, weth9Address, "L2-WETH", "L2-WETH")
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l2Client, 3*time.Duration(cfg.DeployConfig.L2BlockTime)*time.Second)
require.NoError(t, err)
// Get the deployment event to have access to the L2 WETH9 address
it, err := optimismMintableTokenFactory.FilterOptimismMintableERC20Created(&bind.FilterOpts{Start: 0}, nil, nil)
require.NoError(t, err)
var event *bindings.OptimismMintableERC20FactoryOptimismMintableERC20Created
for it.Next() {
event = it.Event
}
require.NotNil(t, event)
// Approve WETH9 with the bridge
tx, err = WETH9.Approve(opts, predeploys.DevL1StandardBridgeAddr, new(big.Int).SetUint64(math.MaxUint64))
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err)
// Bridge the WETH9
l1StandardBridge, err := bindings.NewL1StandardBridge(predeploys.DevL1StandardBridgeAddr, l1Client)
require.NoError(t, err)
tx, err = l1StandardBridge.BridgeERC20(opts, weth9Address, event.LocalToken, big.NewInt(100), 100000, []byte{})
require.NoError(t, err)
depositReceipt, err := waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err)
t.Log("Deposit through L1StandardBridge", "gas used", depositReceipt.GasUsed)
// compute the deposit transaction hash + poll for it
portal, err := bindings.NewOptimismPortal(predeploys.DevOptimismPortalAddr, l1Client)
require.NoError(t, err)
depIt, err := portal.FilterTransactionDeposited(&bind.FilterOpts{Start: 0}, nil, nil, nil)
require.NoError(t, err)
var depositEvent *bindings.OptimismPortalTransactionDeposited
for depIt.Next() {
depositEvent = depIt.Event
}
require.NotNil(t, depositEvent)
depositTx, err := derive.UnmarshalDepositLogEvent(&depositEvent.Raw)
require.NoError(t, err)
_, err = waitForTransaction(types.NewTx(depositTx).Hash(), l2Client, 3*time.Duration(cfg.DeployConfig.L2BlockTime)*time.Second)
require.NoError(t, err)
// Ensure that the deposit went through
optimismMintableToken, err := bindings.NewOptimismMintableERC20(event.LocalToken, l2Client)
require.NoError(t, err)
// Should have balance on L2
l2Balance, err := optimismMintableToken.BalanceOf(&bind.CallOpts{}, opts.From)
require.NoError(t, err)
require.Equal(t, big.NewInt(100), l2Balance)
}
......@@ -12,6 +12,7 @@ import (
"time"
bss "github.com/ethereum-optimism/optimism/op-batcher/batcher"
batchermetrics "github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/chaincfg"
"github.com/ethereum-optimism/optimism/op-node/sources"
l2os "github.com/ethereum-optimism/optimism/op-proposer/proposer"
......@@ -341,7 +342,7 @@ func TestMigration(t *testing.T) {
Format: "text",
},
PrivateKey: hexPriv(secrets.Batcher),
}, lgr.New("module", "batcher"))
}, lgr.New("module", "batcher"), batchermetrics.NoopMetrics)
require.NoError(t, err)
t.Cleanup(func() {
batcher.StopIfRunning()
......
......@@ -24,6 +24,7 @@ import (
"github.com/stretchr/testify/require"
bss "github.com/ethereum-optimism/optimism/op-batcher/batcher"
batchermetrics "github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-chain-ops/genesis"
"github.com/ethereum-optimism/optimism/op-e2e/e2eutils"
......@@ -600,7 +601,7 @@ func (cfg SystemConfig) Start(_opts ...SystemConfigOption) (*System, error) {
Format: "text",
},
PrivateKey: hexPriv(cfg.Secrets.Batcher),
}, sys.cfg.Loggers["batcher"])
}, sys.cfg.Loggers["batcher"], batchermetrics.NoopMetrics)
if err != nil {
return nil, fmt.Errorf("failed to setup batch submitter: %w", err)
}
......
......@@ -76,7 +76,7 @@ func (co *ChannelOut) AddBlock(block *types.Block) (uint64, error) {
return 0, errors.New("already closed")
}
batch, err := BlockToBatch(block)
batch, _, err := BlockToBatch(block)
if err != nil {
return 0, err
}
......@@ -182,7 +182,7 @@ func (co *ChannelOut) OutputFrame(w *bytes.Buffer, maxSize uint64) (uint16, erro
}
// BlockToBatch transforms a block into a batch object that can easily be RLP encoded.
func BlockToBatch(block *types.Block) (*BatchData, error) {
func BlockToBatch(block *types.Block) (*BatchData, L1BlockInfo, error) {
opaqueTxs := make([]hexutil.Bytes, 0, len(block.Transactions()))
for i, tx := range block.Transactions() {
if tx.Type() == types.DepositTxType {
......@@ -190,17 +190,17 @@ func BlockToBatch(block *types.Block) (*BatchData, error) {
}
otx, err := tx.MarshalBinary()
if err != nil {
return nil, fmt.Errorf("could not encode tx %v in block %v: %w", i, tx.Hash(), err)
return nil, L1BlockInfo{}, fmt.Errorf("could not encode tx %v in block %v: %w", i, tx.Hash(), err)
}
opaqueTxs = append(opaqueTxs, otx)
}
l1InfoTx := block.Transactions()[0]
if l1InfoTx.Type() != types.DepositTxType {
return nil, ErrNotDepositTx
return nil, L1BlockInfo{}, ErrNotDepositTx
}
l1Info, err := L1InfoDepositTxData(l1InfoTx.Data())
if err != nil {
return nil, fmt.Errorf("could not parse the L1 Info deposit: %w", err)
return nil, l1Info, fmt.Errorf("could not parse the L1 Info deposit: %w", err)
}
return &BatchData{
......@@ -211,7 +211,7 @@ func BlockToBatch(block *types.Block) (*BatchData, error) {
Timestamp: block.Time(),
Transactions: opaqueTxs,
},
}, nil
}, l1Info, nil
}
// ForceCloseTxData generates the transaction data for a transaction which will force close
......
......@@ -216,17 +216,17 @@ func TestEngineQueue_Finalize(t *testing.T) {
eng.ExpectL2BlockRefByHash(refF0.ParentHash, refE1, nil)
// meet previous safe, counts 1/2
l1F.ExpectL1BlockRefByNumber(refE.Number, refE, nil)
l1F.ExpectL1BlockRefByHash(refE.Hash, refE, nil)
eng.ExpectL2BlockRefByHash(refE1.ParentHash, refE0, nil)
eng.ExpectL2BlockRefByHash(refE0.ParentHash, refD1, nil)
// now full seq window, inclusive
l1F.ExpectL1BlockRefByNumber(refD.Number, refD, nil)
l1F.ExpectL1BlockRefByHash(refD.Hash, refD, nil)
eng.ExpectL2BlockRefByHash(refD1.ParentHash, refD0, nil)
eng.ExpectL2BlockRefByHash(refD0.ParentHash, refC1, nil)
// now one more L1 origin
l1F.ExpectL1BlockRefByNumber(refC.Number, refC, nil)
l1F.ExpectL1BlockRefByHash(refC.Hash, refC, nil)
eng.ExpectL2BlockRefByHash(refC1.ParentHash, refC0, nil)
// parent of that origin will be considered safe
eng.ExpectL2BlockRefByHash(refC0.ParentHash, refB1, nil)
......@@ -450,17 +450,17 @@ func TestEngineQueue_ResetWhenUnsafeOriginNotCanonical(t *testing.T) {
eng.ExpectL2BlockRefByHash(refF0.ParentHash, refE1, nil)
// meet previous safe, counts 1/2
l1F.ExpectL1BlockRefByNumber(refE.Number, refE, nil)
l1F.ExpectL1BlockRefByHash(refE.Hash, refE, nil)
eng.ExpectL2BlockRefByHash(refE1.ParentHash, refE0, nil)
eng.ExpectL2BlockRefByHash(refE0.ParentHash, refD1, nil)
// now full seq window, inclusive
l1F.ExpectL1BlockRefByNumber(refD.Number, refD, nil)
l1F.ExpectL1BlockRefByHash(refD.Hash, refD, nil)
eng.ExpectL2BlockRefByHash(refD1.ParentHash, refD0, nil)
eng.ExpectL2BlockRefByHash(refD0.ParentHash, refC1, nil)
// now one more L1 origin
l1F.ExpectL1BlockRefByNumber(refC.Number, refC, nil)
l1F.ExpectL1BlockRefByHash(refC.Hash, refC, nil)
eng.ExpectL2BlockRefByHash(refC1.ParentHash, refC0, nil)
// parent of that origin will be considered safe
eng.ExpectL2BlockRefByHash(refC0.ParentHash, refB1, nil)
......@@ -782,17 +782,17 @@ func TestVerifyNewL1Origin(t *testing.T) {
}
// meet previous safe, counts 1/2
l1F.ExpectL1BlockRefByNumber(refE.Number, refE, nil)
l1F.ExpectL1BlockRefByHash(refE.Hash, refE, nil)
eng.ExpectL2BlockRefByHash(refE1.ParentHash, refE0, nil)
eng.ExpectL2BlockRefByHash(refE0.ParentHash, refD1, nil)
// now full seq window, inclusive
l1F.ExpectL1BlockRefByNumber(refD.Number, refD, nil)
l1F.ExpectL1BlockRefByHash(refD.Hash, refD, nil)
eng.ExpectL2BlockRefByHash(refD1.ParentHash, refD0, nil)
eng.ExpectL2BlockRefByHash(refD0.ParentHash, refC1, nil)
// now one more L1 origin
l1F.ExpectL1BlockRefByNumber(refC.Number, refC, nil)
l1F.ExpectL1BlockRefByHash(refC.Hash, refC, nil)
eng.ExpectL2BlockRefByHash(refC1.ParentHash, refC0, nil)
// parent of that origin will be considered safe
eng.ExpectL2BlockRefByHash(refC0.ParentHash, refB1, nil)
......
package derive
import (
"fmt"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup"
)
// L2BlockRefSource is a source for the generation of an L2BlockRef. E.g. a
// *types.Block is an L2BlockRefSource.
//
// L2BlockToBlockRef extracts an L2BlockRef from an L2BlockRefSource. The first
// transaction of a source must be a Deposit transaction.
type L2BlockRefSource interface {
Hash() common.Hash
ParentHash() common.Hash
NumberU64() uint64
Time() uint64
Transactions() types.Transactions
}
// L2BlockToBlockRef extracts the essential L2BlockRef information from an L2
// block ref source, falling back to genesis information if necessary.
func L2BlockToBlockRef(block L2BlockRefSource, genesis *rollup.Genesis) (eth.L2BlockRef, error) {
hash, number := block.Hash(), block.NumberU64()
var l1Origin eth.BlockID
var sequenceNumber uint64
if number == genesis.L2.Number {
if hash != genesis.L2.Hash {
return eth.L2BlockRef{}, fmt.Errorf("expected L2 genesis hash to match L2 block at genesis block number %d: %s <> %s", genesis.L2.Number, hash, genesis.L2.Hash)
}
l1Origin = genesis.L1
sequenceNumber = 0
} else {
txs := block.Transactions()
if txs.Len() == 0 {
return eth.L2BlockRef{}, fmt.Errorf("l2 block is missing L1 info deposit tx, block hash: %s", hash)
}
tx := txs[0]
if tx.Type() != types.DepositTxType {
return eth.L2BlockRef{}, fmt.Errorf("first payload tx has unexpected tx type: %d", tx.Type())
}
info, err := L1InfoDepositTxData(tx.Data())
if err != nil {
return eth.L2BlockRef{}, fmt.Errorf("failed to parse L1 info deposit tx from L2 block: %w", err)
}
l1Origin = eth.BlockID{Hash: info.BlockHash, Number: info.Number}
sequenceNumber = info.SequenceNumber
}
return eth.L2BlockRef{
Hash: hash,
Number: number,
ParentHash: block.ParentHash(),
Time: block.Time(),
L1Origin: l1Origin,
SequenceNumber: sequenceNumber,
}, nil
}
......@@ -126,8 +126,17 @@ func FindL2Heads(ctx context.Context, cfg *rollup.Config, l1 L1Chain, l2 L2Chain
// then we return the last L2 block of the epoch before that as safe head.
// Each loop iteration we traverse a single L2 block, and we check if the L1 origins are consistent.
for {
// Fetch L1 information if we never had it, or if we do not have it for the current origin
if l1Block == (eth.L1BlockRef{}) || n.L1Origin.Hash != l1Block.Hash {
// Fetch L1 information if we never had it, or if we do not have it for the current origin.
// Optimization: as soon as we have a previous L1 block, try to traverse L1 by hash instead of by number, to fill the cache.
if n.L1Origin.Hash == l1Block.ParentHash {
b, err := l1.L1BlockRefByHash(ctx, n.L1Origin.Hash)
if err != nil {
// Exit; find-sync-start will start over and fall back to the block-by-number / not-found handling on an available L1 chain.
return nil, fmt.Errorf("failed to retrieve L1 block: %w", err)
}
l1Block = b
ahead = false
} else if l1Block == (eth.L1BlockRef{}) || n.L1Origin.Hash != l1Block.Hash {
b, err := l1.L1BlockRefByNumber(ctx, n.L1Origin.Number)
// if L2 is ahead of L1 view, then consider it a "plausible" head
notFound := errors.Is(err, ethereum.NotFound)
......
......@@ -24,6 +24,7 @@ type L1ClientConfig struct {
func L1ClientDefaultConfig(config *rollup.Config, trustRPC bool, kind RPCProviderKind) *L1ClientConfig {
// Cache 3/2 worth of sequencing window of receipts and txs
span := int(config.SeqWindowSize) * 3 / 2
fullSpan := span
if span > 1000 { // sanity cap. If a large sequencing window is configured, do not make the cache too large
span = 1000
}
......@@ -40,7 +41,8 @@ func L1ClientDefaultConfig(config *rollup.Config, trustRPC bool, kind RPCProvide
MustBePostMerge: false,
RPCProviderKind: kind,
},
L1BlockRefsCacheSize: span,
// Not bounded by span, to cover find-sync-start range fully for speedy recovery after errors.
L1BlockRefsCacheSize: fullSpan,
}
}
......
......@@ -34,6 +34,7 @@ func L2ClientDefaultConfig(config *rollup.Config, trustRPC bool) *L2ClientConfig
span *= 12
span /= int(config.BlockTime)
}
fullSpan := span
if span > 1000 { // sanity cap. If a large sequencing window is configured, do not make the cache too large
span = 1000
}
......@@ -50,7 +51,8 @@ func L2ClientDefaultConfig(config *rollup.Config, trustRPC bool) *L2ClientConfig
MustBePostMerge: true,
RPCProviderKind: RPCKindBasic,
},
L2BlockRefsCacheSize: span,
// Not bounded by span, to cover find-sync-start range fully for speedy recovery after errors.
L2BlockRefsCacheSize: fullSpan,
L1ConfigsCacheSize: span,
RollupCfg: config,
}
......
......@@ -208,6 +208,13 @@ func NewL2OutputSubmitter(cfg Config, l log.Logger) (*L2OutputSubmitter, error)
return nil, err
}
version, err := l2ooContract.Version(&bind.CallOpts{})
if err != nil {
cancel()
return nil, err
}
log.Info("Connected to L2OutputOracle", "address", cfg.L2OutputOracleAddr, "version", version)
parsed, err := abi.JSON(strings.NewReader(bindings.L2OutputOracleMetaData.ABI))
if err != nil {
cancel()
......
package metrics
import (
"fmt"
"github.com/prometheus/client_golang/prometheus"
)
type Event struct {
Total prometheus.Counter
LastTime prometheus.Gauge
}
func (e *Event) Record() {
e.Total.Inc()
e.LastTime.SetToCurrentTime()
}
func NewEvent(factory Factory, ns string, name string, displayName string) Event {
return Event{
Total: factory.NewCounter(prometheus.CounterOpts{
Namespace: ns,
Name: fmt.Sprintf("%s_total", name),
Help: fmt.Sprintf("Count of %s events", displayName),
}),
LastTime: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: fmt.Sprintf("last_%s_unix", name),
Help: fmt.Sprintf("Timestamp of last %s event", displayName),
}),
}
}
type EventVec struct {
Total prometheus.CounterVec
LastTime prometheus.GaugeVec
}
func (e *EventVec) Record(lvs ...string) {
e.Total.WithLabelValues(lvs...).Inc()
e.LastTime.WithLabelValues(lvs...).SetToCurrentTime()
}
func NewEventVec(factory Factory, ns string, name string, displayName string, labelNames []string) EventVec {
return EventVec{
Total: *factory.NewCounterVec(prometheus.CounterOpts{
Namespace: ns,
Name: fmt.Sprintf("%s_total", name),
Help: fmt.Sprintf("Count of %s events", displayName),
}, labelNames),
LastTime: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: fmt.Sprintf("last_%s_unix", name),
Help: fmt.Sprintf("Timestamp of last %s event", displayName),
},
labelNames),
}
}
package metrics
import (
"encoding/binary"
"time"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum/go-ethereum/common"
"github.com/prometheus/client_golang/prometheus"
)
type RefMetricer interface {
RecordRef(layer string, name string, num uint64, timestamp uint64, h common.Hash)
RecordL1Ref(name string, ref eth.L1BlockRef)
RecordL2Ref(name string, ref eth.L2BlockRef)
}
// RefMetrics provides block reference metrics. It's a metrics module that's
// supposed to be embedded into a service metrics type. The service metrics type
// should set the full namespace and create the factory before calling
// NewRefMetrics.
type RefMetrics struct {
RefsNumber *prometheus.GaugeVec
RefsTime *prometheus.GaugeVec
RefsHash *prometheus.GaugeVec
RefsSeqNr *prometheus.GaugeVec
RefsLatency *prometheus.GaugeVec
// hash of the last seen block per name, so we don't reduce/increase latency on updates of the same data,
// and only count the first occurrence
LatencySeen map[string]common.Hash
}
var _ RefMetricer = (*RefMetrics)(nil)
// MakeRefMetrics returns a new RefMetrics, initializing its prometheus fields
// using factory. It is supposed to be used inside the constructors of metrics
// structs for any op service after the full namespace and factory have been
// set up.
//
// ns is the fully qualified namespace, e.g. "op_node_default".
func MakeRefMetrics(ns string, factory Factory) RefMetrics {
return RefMetrics{
RefsNumber: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_number",
Help: "Gauge representing the different L1/L2 reference block numbers",
}, []string{
"layer",
"type",
}),
RefsTime: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_time",
Help: "Gauge representing the different L1/L2 reference block timestamps",
}, []string{
"layer",
"type",
}),
RefsHash: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_hash",
Help: "Gauge representing the different L1/L2 reference block hashes truncated to float values",
}, []string{
"layer",
"type",
}),
RefsSeqNr: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_seqnr",
Help: "Gauge representing the different L2 reference sequence numbers",
}, []string{
"type",
}),
RefsLatency: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_latency",
Help: "Gauge representing the different L1/L2 reference block timestamps minus current time, in seconds",
}, []string{
"layer",
"type",
}),
LatencySeen: make(map[string]common.Hash),
}
}
func (m *RefMetrics) RecordRef(layer string, name string, num uint64, timestamp uint64, h common.Hash) {
m.RefsNumber.WithLabelValues(layer, name).Set(float64(num))
if timestamp != 0 {
m.RefsTime.WithLabelValues(layer, name).Set(float64(timestamp))
// only meter the latency when we first see this hash for the given label name
if m.LatencySeen[name] != h {
m.LatencySeen[name] = h
m.RefsLatency.WithLabelValues(layer, name).Set(float64(timestamp) - (float64(time.Now().UnixNano()) / 1e9))
}
}
// we map the first 8 bytes to a float64, so we can graph changes of the hash to find divergences visually.
// We don't do math.Float64frombits, just a regular conversion, to keep the value within a manageable range.
m.RefsHash.WithLabelValues(layer, name).Set(float64(binary.LittleEndian.Uint64(h[:])))
}
func (m *RefMetrics) RecordL1Ref(name string, ref eth.L1BlockRef) {
m.RecordRef("l1", name, ref.Number, ref.Time, ref.Hash)
}
func (m *RefMetrics) RecordL2Ref(name string, ref eth.L2BlockRef) {
m.RecordRef("l2", name, ref.Number, ref.Time, ref.Hash)
m.RecordRef("l1_origin", name, ref.L1Origin.Number, 0, ref.L1Origin.Hash)
m.RefsSeqNr.WithLabelValues(name).Set(float64(ref.SequenceNumber))
}
// NoopRefMetrics can be embedded in a noop version of a metric implementation
// to have a noop RefMetricer.
type NoopRefMetrics struct{}
func (*NoopRefMetrics) RecordRef(string, string, uint64, uint64, common.Hash) {}
func (*NoopRefMetrics) RecordL1Ref(string, eth.L1BlockRef) {}
func (*NoopRefMetrics) RecordL2Ref(string, eth.L2BlockRef) {}
......@@ -154,11 +154,22 @@ func (m *SimpleTxManager) IncreaseGasPrice(ctx context.Context, tx *types.Transa
thresholdFeeCap := new(big.Int).Mul(priceBumpPercent, tx.GasFeeCap())
thresholdFeeCap = thresholdFeeCap.Div(thresholdFeeCap, oneHundred)
if tx.GasFeeCapIntCmp(gasFeeCap) >= 0 {
m.l.Debug("Reusing the previous fee cap", "previous", tx.GasFeeCap(), "suggested", gasFeeCap)
gasFeeCap = tx.GasFeeCap()
reusedFeeCap = true
if reusedTip {
m.l.Debug("Reusing the previous fee cap", "previous", tx.GasFeeCap(), "suggested", gasFeeCap)
gasFeeCap = tx.GasFeeCap()
reusedFeeCap = true
} else {
m.l.Debug("Overriding the fee cap to enforce a price bump because we increased the tip", "previous", tx.GasFeeCap(), "suggested", gasFeeCap, "new", thresholdFeeCap)
gasFeeCap = thresholdFeeCap
}
} else if thresholdFeeCap.Cmp(gasFeeCap) > 0 {
m.l.Debug("Overriding the fee cap to enforce a price bump", "previous", tx.GasFeeCap(), "suggested", gasFeeCap, "new", thresholdFeeCap)
if reusedTip {
// TODO (CLI-3620): Increase the basefee then recompute the feecap
m.l.Warn("Overriding the fee cap to enforce a price bump without increasing the tip. Will likely result in ErrReplacementUnderpriced",
"previous", tx.GasFeeCap(), "suggested", gasFeeCap, "new", thresholdFeeCap)
} else {
m.l.Debug("Overriding the fee cap to enforce a price bump", "previous", tx.GasFeeCap(), "suggested", gasFeeCap, "new", thresholdFeeCap)
}
gasFeeCap = thresholdFeeCap
}
......
......@@ -621,6 +621,84 @@ func TestIncreaseGasPriceEnforcesMinBump(t *testing.T) {
baseFee: big.NewInt(460),
}
mgr := &SimpleTxManager{
Config: Config{
ResubmissionTimeout: time.Second,
ReceiptQueryInterval: 50 * time.Millisecond,
NumConfirmations: 1,
SafeAbortNonceTooLowCount: 3,
Signer: func(ctx context.Context, from common.Address, tx *types.Transaction) (*types.Transaction, error) {
return tx, nil
},
From: common.Address{},
},
name: "TEST",
backend: &borkedBackend,
l: testlog.Logger(t, log.LvlTrace),
}
tx := types.NewTx(&types.DynamicFeeTx{
GasTipCap: big.NewInt(100),
GasFeeCap: big.NewInt(1000),
})
ctx := context.Background()
newTx, err := mgr.IncreaseGasPrice(ctx, tx)
require.NoError(t, err)
require.True(t, newTx.GasFeeCap().Cmp(tx.GasFeeCap()) > 0, "new tx fee cap must be larger")
require.True(t, newTx.GasTipCap().Cmp(tx.GasTipCap()) > 0, "new tx tip must be larger")
}
// TestIncreaseGasPriceEnforcesMinBumpForBothOnTipIncrease asserts that if the gasTip goes up,
// but the baseFee doesn't, both values are increased by 10%
func TestIncreaseGasPriceEnforcesMinBumpForBothOnTipIncrease(t *testing.T) {
t.Parallel()
borkedBackend := failingBackend{
gasTip: big.NewInt(101),
baseFee: big.NewInt(440),
}
mgr := &SimpleTxManager{
Config: Config{
ResubmissionTimeout: time.Second,
ReceiptQueryInterval: 50 * time.Millisecond,
NumConfirmations: 1,
SafeAbortNonceTooLowCount: 3,
Signer: func(ctx context.Context, from common.Address, tx *types.Transaction) (*types.Transaction, error) {
return tx, nil
},
From: common.Address{},
},
name: "TEST",
backend: &borkedBackend,
l: testlog.Logger(t, log.LvlCrit),
}
tx := types.NewTx(&types.DynamicFeeTx{
GasTipCap: big.NewInt(100),
GasFeeCap: big.NewInt(1000),
})
ctx := context.Background()
newTx, err := mgr.IncreaseGasPrice(ctx, tx)
require.NoError(t, err)
require.True(t, newTx.GasFeeCap().Cmp(tx.GasFeeCap()) > 0, "new tx fee cap must be larger")
require.True(t, newTx.GasTipCap().Cmp(tx.GasTipCap()) > 0, "new tx tip must be larger")
}
// TestIncreaseGasPriceEnforcesMinBumpForBothOnBaseFeeIncrease asserts that if the baseFee goes up,
// but the tip doesn't, both values are increased by 10%
// TODO(CLI-3620): This test will fail until CLI-3620 is implemented.
func TestIncreaseGasPriceEnforcesMinBumpForBothOnBaseFeeIncrease(t *testing.T) {
t.Skip("Failing until CLI-3620 is implemented")
t.Parallel()
borkedBackend := failingBackend{
gasTip: big.NewInt(99),
baseFee: big.NewInt(460),
}
mgr := &SimpleTxManager{
Config: Config{
ResubmissionTimeout: time.Second,
......
......@@ -141,6 +141,7 @@ L2ERC721Bridge_Test:test_bridgeERC721_succeeds() (gas: 144643)
L2ERC721Bridge_Test:test_bridgeERC721_wrongOwner_reverts() (gas: 29258)
L2ERC721Bridge_Test:test_constructor_succeeds() (gas: 10110)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_alreadyExists_reverts() (gas: 29128)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_interfaceNotCompliant_reverts() (gas: 236012)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_notFromRemoteMessenger_reverts() (gas: 19874)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_notViaLocalMessenger_reverts() (gas: 16104)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_selfToken_reverts() (gas: 17659)
......
......@@ -49,7 +49,7 @@ contract OptimismPortal is Initializable, ResourceMetering, Semver {
L2OutputOracle public immutable L2_ORACLE;
/**
* @notice Address that has the ability to pause and unpause deposits and withdrawals.
* @notice Address that has the ability to pause and unpause withdrawals.
*/
address public immutable GUARDIAN;
......
......@@ -278,6 +278,33 @@ contract L2ERC721Bridge_Test is Messenger_Initializer {
assertEq(localToken.ownerOf(tokenId), alice);
}
function test_finalizeBridgeERC721_interfaceNotCompliant_reverts() external {
// Create a non-compliant token
NonCompliantERC721 nonCompliantToken = new NonCompliantERC721(alice);
// Bridge the non-compliant token.
vm.prank(alice);
bridge.bridgeERC721(address(nonCompliantToken), address(0x01), tokenId, 1234, hex"5678");
// Attempt to finalize the withdrawal. Should revert because the token does not claim
// to be compliant with the `IOptimismMintableERC721` interface.
vm.mockCall(
address(L2Messenger),
abi.encodeWithSelector(L2Messenger.xDomainMessageSender.selector),
abi.encode(otherBridge)
);
vm.prank(address(L2Messenger));
vm.expectRevert("L2ERC721Bridge: local token interface is not compliant");
bridge.finalizeBridgeERC721(
address(nonCompliantToken),
address(0x01),
alice,
alice,
tokenId,
hex"5678"
);
}
function test_finalizeBridgeERC721_notViaLocalMessenger_reverts() external {
// Finalize a withdrawal.
vm.prank(alice);
......@@ -349,3 +376,33 @@ contract L2ERC721Bridge_Test is Messenger_Initializer {
);
}
}
/**
* @dev A non-compliant ERC721 token that does not implement the full ERC721 interface.
*
* This is used to test that the bridge will revert if the token does not claim to support
* the ERC721 interface.
*/
contract NonCompliantERC721 {
address internal immutable owner;
constructor(address _owner) {
owner = _owner;
}
function ownerOf(uint256) external view returns (address) {
return owner;
}
function remoteToken() external pure returns (address) {
return address(0x01);
}
function burn(address, uint256) external {
// Do nothing.
}
function supportsInterface(bytes4) external pure returns (bool) {
return false;
}
}
......@@ -20,6 +20,7 @@
"sequencerFeeVaultRecipient": "0xfabb0ac9d68b0b445fb7357272ff202c5651694a",
"proxyAdminOwner": "0xBcd4042DE499D14e55001CcbB24a551F3b954096",
"finalSystemOwner": "0xBcd4042DE499D14e55001CcbB24a551F3b954096",
"portalGuardian": "0xBcd4042DE499D14e55001CcbB24a551F3b954096",
"controller": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266",
"finalizationPeriodSeconds": 2,
"deploymentWaitConfirmations": 1,
......
{
"finalSystemOwner": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807",
"portalGuardian": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807",
"controller": "0x2d30335B0b807bBa1682C487BaAFD2Ad6da5D675",
"l1StartingBlockTag": "0x4104895a540d87127ff11eef0d51d8f63ce00a6fc211db751a45a4b3a61a9c83",
......
......@@ -2,6 +2,7 @@
"numDeployConfirmations": 1,
"finalSystemOwner": "ADMIN",
"portalGuardian": "ADMIN",
"controller": "ADMIN",
"l1StartingBlockTag": "BLOCKHASH",
......
{
"finalSystemOwner": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807",
"portalGuardian": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807",
"controller": "0x2d30335B0b807bBa1682C487BaAFD2Ad6da5D675",
"l1StartingBlockTag": "0x4104895a540d87127ff11eef0d51d8f63ce00a6fc211db751a45a4b3a61a9c83",
......
......@@ -2,6 +2,7 @@
"numDeployConfirmations": 1,
"finalSystemOwner": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
"portalGuardian": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
"controller": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
"l1StartingBlockTag": "0x6ffc1bf3754c01f6bb9fe057c1578b87a8571ce2e9be5ca14bace6eccfd336c7",
......
{
"finalSystemOwner": "0x9965507D1a55bcC2695C58ba16FB37d819B0A4dc",
"portalGuardian": "0x9965507D1a55bcC2695C58ba16FB37d819B0A4dc",
"controller": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266",
"l1StartingBlockTag": "earliest",
......
{
"finalSystemOwner": "0x858F0751ef8B4067f0d2668C076BDB50a8549fbF",
"portalGuardian": "0x858F0751ef8B4067f0d2668C076BDB50a8549fbF",
"controller": "0x2d30335B0b807bBa1682C487BaAFD2Ad6da5D675",
"l1StartingBlockTag": "0x19c7e6b18fe156e45f4cfef707294fd8f079fa9c30a7b7cd6ec1ce3682ec6a2e",
......
......@@ -17,13 +17,11 @@ const deployFn: DeployFunction = async (hre) => {
'L2OutputOracleProxy'
)
const finalSystemOwner = hre.deployConfig.finalSystemOwner
const finalSystemOwnerCode = await hre.ethers.provider.getCode(
finalSystemOwner
)
if (finalSystemOwnerCode === '0x') {
const portalGuardian = hre.deployConfig.portalGuardian
const portalGuardianCode = await hre.ethers.provider.getCode(portalGuardian)
if (portalGuardianCode === '0x') {
console.log(
`WARNING: setting OptimismPortal.GUARDIAN to ${finalSystemOwner} and it has no code`
`WARNING: setting OptimismPortal.GUARDIAN to ${portalGuardian} and it has no code`
)
if (!isLiveDeployer) {
throw new Error(
......@@ -35,13 +33,13 @@ const deployFn: DeployFunction = async (hre) => {
// Deploy the OptimismPortal implementation as paused to
// ensure that users do not interact with it and instead
// interact with the proxied contract.
// The `finalSystemOwner` is set at the GUARDIAN.
// The `portalGuardian` is set as the GUARDIAN.
await deploy({
hre,
name: 'OptimismPortal',
args: [
L2OutputOracleProxy.address,
finalSystemOwner,
portalGuardian,
true, // paused
],
postDeployAction: async (contract) => {
......@@ -53,7 +51,7 @@ const deployFn: DeployFunction = async (hre) => {
await assertContractVariable(
contract,
'GUARDIAN',
hre.deployConfig.finalSystemOwner
hre.deployConfig.portalGuardian
)
},
})
......
......@@ -109,7 +109,7 @@ const deployFn: DeployFunction = async (hre) => {
const owner = await AddressManager.owner()
return owner === SystemDictator.address
},
30000,
5000,
1000
)
} else {
......@@ -149,7 +149,7 @@ const deployFn: DeployFunction = async (hre) => {
})
return owner === SystemDictator.address
},
30000,
5000,
1000
)
} else {
......@@ -187,7 +187,7 @@ const deployFn: DeployFunction = async (hre) => {
})
return owner === SystemDictator.address
},
30000,
5000,
1000
)
} else {
......
......@@ -203,7 +203,7 @@ const deployFn: DeployFunction = async (hre) => {
async () => {
return SystemDictator.dynamicConfigSet()
},
30000,
5000,
1000
)
}
......@@ -316,7 +316,7 @@ const deployFn: DeployFunction = async (hre) => {
const paused = await OptimismPortal.paused()
return !paused
},
30000,
5000,
1000
)
......@@ -345,7 +345,7 @@ const deployFn: DeployFunction = async (hre) => {
async () => {
return SystemDictator.finalized()
},
30000,
5000,
1000
)
......
......@@ -14,6 +14,12 @@ interface RequiredDeployConfig {
*/
finalSystemOwner?: string
/**
* Address that is set as the GUARDIAN in the OptimismPortal at deploy time.
* Has the ability to pause withdrawals.
*/
portalGuardian: string
/**
* Address that will own the entire system on L1 during the deployment process. This address will
* not own the system after the deployment is complete, ownership will be transferred to the
......@@ -181,6 +187,9 @@ export const deployConfigSpec: {
finalSystemOwner: {
type: 'address',
},
portalGuardian: {
type: 'address',
},
controller: {
type: 'address',
},
......
diff --git a/node_modules/@changesets/cli/dist/cli.cjs.dev.js b/node_modules/@changesets/cli/dist/cli.cjs.dev.js
index b158219..6fdfb6e 100644
--- a/node_modules/@changesets/cli/dist/cli.cjs.dev.js
+++ b/node_modules/@changesets/cli/dist/cli.cjs.dev.js
@@ -937,7 +937,7 @@ async function publishPackages({
}) {
const packagesByName = new Map(packages.map(x => [x.packageJson.name, x]));
const publicPackages = packages.filter(pkg => !pkg.packageJson.private);
- const unpublishedPackagesInfo = await getUnpublishedPackages(publicPackages, preState);
+ const unpublishedPackagesInfo = await getUnpublishedPackages(packages, preState);
if (unpublishedPackagesInfo.length === 0) {
return [];
@@ -957,20 +957,27 @@ async function publishAPackage(pkg, access, twoFactorState, tag) {
const {
name,
version,
- publishConfig
+ publishConfig,
+ private: isPrivate
} = pkg.packageJson;
const localAccess = publishConfig === null || publishConfig === void 0 ? void 0 : publishConfig.access;
- logger.info(`Publishing ${chalk__default['default'].cyan(`"${name}"`)} at ${chalk__default['default'].green(`"${version}"`)}`);
- const publishDir = publishConfig !== null && publishConfig !== void 0 && publishConfig.directory ? path.join(pkg.dir, publishConfig.directory) : pkg.dir;
- const publishConfirmation = await publish(name, {
- cwd: publishDir,
- access: localAccess || access,
- tag
- }, twoFactorState);
+ let published;
+ if (!isPrivate) {
+ logger.info(`Publishing ${chalk__default['default'].cyan(`"${name}"`)} at ${chalk__default['default'].green(`"${version}"`)}`);
+ const publishDir = publishConfig !== null && publishConfig !== void 0 && publishConfig.directory ? path.join(pkg.dir, publishConfig.directory) : pkg.dir;
+ const publishConfirmation = await publish(name, {
+ cwd: publishDir,
+ access: localAccess || access,
+ tag
+ }, twoFactorState);
+ published = publishConfirmation.published;
+ } else {
+ published = true;
+ }
return {
name,
newVersion: version,
- published: publishConfirmation.published
+ published
};
}
@@ -1140,8 +1147,13 @@ async function tagPublish(tool, packageReleases, cwd) {
if (tool !== "root") {
for (const pkg of packageReleases) {
const tag = `${pkg.name}@${pkg.newVersion}`;
- logger.log("New tag: ", tag);
- await git.tag(tag, cwd);
+ const allTags = await git.getAllTags(cwd);
+ if (allTags.has(tag)) {
+ logger.log("Skipping existing tag: ", tag);
+ } else {
+ logger.log("New tag: ", tag);
+ await git.tag(tag, cwd);
+ }
}
} else {
const tag = `v${packageReleases[0].newVersion}`;
......@@ -48,8 +48,8 @@ mechanisms.
### L1 Components
- **DepositFeed**: A feed of L2 transactions which originated as smart contract calls in the L1 state.
- The `DepositFeed` contract emits `TransactionDeposited` events, which the rollup driver reads in order to process
- **OptimismPortal**: A feed of L2 transactions which originated as smart contract calls in the L1 state.
- The `OptimismPortal` contract emits `TransactionDeposited` events, which the rollup driver reads in order to process
deposits.
- Deposits are guaranteed to be reflected in the L2 state within the _sequencing window_.
- Beware that _transactions_ are deposited, not tokens. However deposited transactions are a key part of implementing
......@@ -106,7 +106,7 @@ The below diagram illustrates how the sequencer and verifiers fit together:
- [Deposits](./deposits.md)
Optimism supports two types of deposits: user deposits, and L1 attributes deposits. To perform a user deposit, users
call the `depositTransaction` method on the `DepositFeed` contract. This in turn emits `TransactionDeposited` events,
call the `depositTransaction` method on the `OptimismPortal` contract. This in turn emits `TransactionDeposited` events,
which the rollup node reads during block derivation.
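As a non-normative illustration, here is a minimal Go sketch of such a reader, built from the `bindings` and `derive` calls exercised by the e2e tests in this commit; `portalAddr`, `l1Client`, and `startBlock` are assumed inputs rather than spec-defined values:

```go
package main

import (
	"github.com/ethereum-optimism/optimism/op-bindings/bindings"
	"github.com/ethereum-optimism/optimism/op-node/rollup/derive"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// scanDeposits reads TransactionDeposited events from the OptimismPortal and
// reconstructs the corresponding L2 deposit transactions.
func scanDeposits(portalAddr common.Address, l1Client *ethclient.Client, startBlock uint64) error {
	portal, err := bindings.NewOptimismPortal(portalAddr, l1Client)
	if err != nil {
		return err
	}
	it, err := portal.FilterTransactionDeposited(&bind.FilterOpts{Start: startBlock}, nil, nil, nil)
	if err != nil {
		return err
	}
	for it.Next() {
		depositTx, err := derive.UnmarshalDepositLogEvent(&it.Event.Raw)
		if err != nil {
			return err
		}
		// The hash of the reconstructed deposit is its L2 transaction hash.
		_ = types.NewTx(depositTx).Hash()
	}
	return it.Error()
}
```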
L1 attributes deposits are used to register L1 block attributes (number, timestamp, etc.) on L2 via a call to the L1
......@@ -141,8 +141,8 @@ worth of blocks has passed, i.e. after L1 block number `n + SEQUENCING_WINDOW_SI
Each epoch contains at least one block. Every block in the epoch contains an L1 info transaction which contains
contextual information about L1 such as the block hash and timestamp. The first block in the epoch also contains all
deposits initiated via the `DepositFeed` contract on L1. All L2 blocks can also contain _sequenced transactions_, i.e.
transactions submitted directly to the sequencer.
deposits initiated via the `OptimismPortal` contract on L1. All L2 blocks can also contain _sequenced transactions_,
i.e. transactions submitted directly to the sequencer.
Whenever the sequencer creates a new L2 block for a given epoch, it must submit it to L1 as part of a _batch_, within
the epoch's sequencing window (i.e. the batch must land before L1 block `n + SEQUENCING_WINDOW_SIZE`). These batches are
......
......@@ -130,7 +130,7 @@ recognize that having the same address does not imply that a contract on L2 will
## The Optimism Portal Contract
The Optimism Portal serves as both the entry and exit point to the Optimism L2. It is a contract which inherits from
the [DepositFeed](./deposits.md#deposit-contract) contract, and in addition provides the following interface for
the [OptimismPortal](./deposits.md#deposit-contract) contract, and in addition provides the following interface for
withdrawals:
- [`WithdrawalTransaction` type]
......
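For orientation, a non-normative Go sketch of the withdrawal transaction shape this interface operates on; the field set mirrors the `WithdrawalTransaction` type, but the canonical definition lives in the contracts, not in this snippet:

```go
import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
)

// WithdrawalTransaction sketches the fields referenced above.
type WithdrawalTransaction struct {
	Nonce    *big.Int       // unique identifier for the withdrawal
	Sender   common.Address // L2 account that initiated the withdrawal
	Target   common.Address // L1 address called when the withdrawal finalizes
	Value    *big.Int       // ETH value transferred on L1
	GasLimit *big.Int       // minimum gas provided for the L1 call
	Data     []byte         // calldata forwarded to the target
}
```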