Commit 5fbdf71e authored by mergify[bot], committed by GitHub

Merge branch 'develop' into aj/fpp-e2e-tx

parents a0d6ec9b f9d34808
---
'@eth-optimism/sdk': patch
---
Fix firefox bug with getTokenPair
......@@ -11,7 +11,8 @@ One easy way to do this is to use [Blockscout](https://www.blockscout.com/).
### Archive mode
Blockscout expects to interact with an Ethereum execution client in [archive mode](https://www.alchemy.com/overviews/archive-nodes#archive-nodes).
To create such a node, follow the [directions to add a node](./getting-started.md#adding-nodes), but in the command you use to start `op-geth` replace:
If your `op-geth` is running in full mode, you can create a separate archive node.
To do so, follow the [directions to add a node](./getting-started.md#adding-nodes), but in the command you use to start `op-geth` replace:
```sh
--gcmode=full \
......
......@@ -73,7 +73,7 @@ We’re going to be spinning up an EVM Rollup from the OP Stack source code. Yo
1. Build the various packages inside of the Optimism Monorepo.
```bash
make op-node op-batcher
make op-node op-batcher op-proposer
yarn build
```
......@@ -154,9 +154,9 @@ Save these accounts and their respective private keys somewhere, you’ll need t
Recommended funding amounts are as follows:
- `Admin` — 0.2 ETH
- `Proposer` — 0.5 ETH
- `Batcher` — 1.0 ETH
- `Admin` — 2 ETH
- `Proposer` — 5 ETH
- `Batcher` — 10 ETH
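If you want to confirm the funding went through, one option is Foundry's `cast`, which this guide uses later. The sketch below assumes placeholder addresses and an L1 RPC URL that you substitute with your own values.
```bash
# Placeholder addresses and RPC URL; replace them with your Admin, Proposer,
# and Batcher addresses and your Goerli RPC endpoint.
# cast balance prints the balance in wei.
for ACCT in <ADMIN_ADDRESS> <PROPOSER_ADDRESS> <BATCHER_ADDRESS>; do
  cast balance $ACCT --rpc-url <YOUR_L1_RPC_URL>
done
```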
::: danger Not for production deployments
......@@ -304,13 +304,32 @@ We’re almost ready to run our chain! Now we just need to run a few commands to
Everything is now initialized and ready to go!
## Run op-geth
Whew! We made it. It’s time to run `op-geth` and get our system started.
## Run the node software
Run `op-geth` with the following command. Make sure to replace `<SEQUENCER>` with the address of the `Sequencer` account you generated earlier.
There are four components that need to run for a rollup.
The first two, `op-geth` and `op-node`, have to run on every node.
The other two, `op-batcher` and `op-proposer`, run in only one place: the sequencer that accepts transactions.
Set these environment variables for the configuration:
| Variable | Value |
| -------------- | ----- |
| `SEQ_ADDR` | Address of the `Sequencer` account |
| `SEQ_KEY` | Private key of the `Sequencer` account |
| `BATCHER_KEY` | Private key of the `Batcher` account, which should have at least 1 ETH |
| `PROPOSER_KEY` | Private key of the `Proposer` account |
| `L1_RPC` | URL for the L1 (such as Goerli) you're using |
| `RPC_KIND` | The kind of L1 server you connect to, which `op-node` can use to optimize requests. Available options are `alchemy`, `quicknode`, `parity`, `nethermind`, `debug_geth`, `erigon`, `basic`, and `any` |
| `L2OO_ADDR` | The address of the `L2OutputOracleProxy`, available at `~/optimism/packages/contracts-bedrock/deployments/getting-started/L2OutputOracleProxy.json` |
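For example, the exports could look like the following sketch. Every value is a placeholder, and reading `L2OO_ADDR` out of the deployment artifact with `jq` is an assumption based on the path listed above.
```bash
export SEQ_ADDR=<SEQUENCER_ADDRESS>
export SEQ_KEY=<SEQUENCER_PRIVATE_KEY>
export BATCHER_KEY=<BATCHER_PRIVATE_KEY>   # fund this account with at least 1 ETH
export PROPOSER_KEY=<PROPOSER_PRIVATE_KEY>
export L1_RPC=<URL_OF_YOUR_L1_NODE>
export RPC_KIND=alchemy                    # or quicknode, parity, nethermind, debug_geth, erigon, basic, any
export L2OO_ADDR=$(jq -r .address \
    ~/optimism/packages/contracts-bedrock/deployments/getting-started/L2OutputOracleProxy.json)
```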
### `op-geth`
Run `op-geth` with the following commands.
```bash
cd ~/op-geth
./build/bin/geth \
--datadir ./datadir \
--http \
......@@ -324,7 +343,7 @@ Run `op-geth` with the following command. Make sure to replace `<SEQUENCER>` wit
--ws.origins="*" \
--ws.api=debug,eth,txpool,net,engine \
--syncmode=full \
--gcmode=full \
--gcmode=archive \
--nodiscover \
--maxpeers=0 \
--networkid=42069 \
......@@ -336,13 +355,25 @@ Run `op-geth` with the following command. Make sure to replace `<SEQUENCER>` wit
--password=./datadir/password \
--allow-insecure-unlock \
--mine \
--miner.etherbase=<SEQUENCER> \
--unlock=<SEQUENCER>
--miner.etherbase=$SEQ_ADDR \
--unlock=$SEQ_ADDR
```
And `op-geth` should be running! You should see some output, but you won’t see any blocks being created yet because `op-geth` is driven by the `op-node`. We’ll need to get that running next.
### Reinitializing op-geth
::: tip Why archive mode?
Archive mode takes more disk storage than full mode.
However, using it is important for two reasons:
- The `op-proposer` requires access to the full state.
If at some point `op-proposer` needs to look more than 256 blocks into the past (8.5 minutes in the default configuration), for example because it was down for that long, it needs archive mode.
- The [explorer](./explorer.md) requires archive mode.
:::
#### Reinitializing op-geth
There are several situations that indicate database corruption and require you to reset the `op-geth` component:
......@@ -374,13 +405,13 @@ This is the reinitialization procedure:
1. Start `op-node`
## Run op-node
### `op-node`
Once we’ve got `op-geth` running we’ll need to run `op-node`. Like Ethereum, the OP Stack has a consensus client (the `op-node`) and an execution client (`op-geth`). The consensus client drives the execution client over the Engine API.
Head over to the `op-node` package and start the `op-node` using the following command. Replace `<SEQUENCERKEY>` with the private key for the `Sequencer` account, replace `<RPC>` with the URL for your L1 node, and replace `<RPCKIND>` with the kind of RPC you're connected to. Although the `l1.rpckind` argument is optional, setting it will help the `op-node` optimize requests and reduce the overall load on your endpoint. Available options for the `l1.rpckind` argument are `"alchemy"`, `"quicknode"`, `"parity"`, `"nethermind"`, `"debug_geth"`, `"erigon"`, `"basic"`, and `"any"`.
```bash
cd ~/optimism/op-node
./bin/op-node \
--l2=http://localhost:8551 \
--l2.jwt-secret=./jwt.txt \
......@@ -392,9 +423,9 @@ Head over to the `op-node` package and start the `op-node` using the following c
--rpc.port=8547 \
--p2p.disable \
--rpc.enable-admin \
--p2p.sequencer.key=<SEQUENCERKEY> \
--l1=<RPC> \
--l1.rpckind=<RPCKIND>
--p2p.sequencer.key=$SEQ_KEY \
--l1=$L1_RPC \
--l1.rpckind=$RPC_KIND
```
Once you run this command, you should start seeing the `op-node` begin to process all of the L1 information after the starting block number that you picked earlier. Once the `op-node` has enough information, it’ll begin sending Engine API payloads to `op-geth`. At that point, you’ll start to see blocks being created inside of `op-geth`. We’re live!
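One quick way to confirm that blocks are being produced is to query the local `op-geth` RPC. This is an optional sanity check and assumes `op-geth` is serving HTTP RPC on port 8545, as in this configuration.
```bash
# The reported block number should keep increasing once op-node starts
# driving op-geth over the Engine API.
cast block-number --rpc-url http://localhost:8545
```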
......@@ -420,50 +451,68 @@ Once you have multiple nodes, it makes sense to use these command line parameter
## Run op-batcher
### `op-batcher`
The final component necessary to put all the pieces together is the `op-batcher`. The `op-batcher` takes transactions from the Sequencer and publishes those transactions to L1. Once transactions are on L1, they’re officially part of the Rollup. Without the `op-batcher`, transactions sent to the Sequencer would never make it to L1 and wouldn’t become part of the canonical chain. The `op-batcher` is critical!
The `op-batcher` takes transactions from the Sequencer and publishes those transactions to L1. Once transactions are on L1, they’re officially part of the Rollup. Without the `op-batcher`, transactions sent to the Sequencer would never make it to L1 and wouldn’t become part of the canonical chain. The `op-batcher` is critical!
1. Head over to the `op-batcher` package inside the Optimism Monorepo:
It is best to give the `Batcher` at least 1 Goerli ETH to ensure that it can continue operating without running out of ETH for gas.
```bash
cd ~/optimism/op-batcher
```
1. And run the `op-batcher` using the following command.
Replace `<RPC>` with your L1 node URL and replace `<BATCHERKEY>` with the private key for the `Batcher` account that you created and funded earlier.
It’s best to give the `Batcher` at least 1 Goerli ETH to ensure that it can continue operating without running out of ETH for gas.
```bash
cd ~/optimism/op-batcher
./bin/op-batcher \
--l2-eth-rpc=http://localhost:8545 \
--rollup-rpc=http://localhost:8547 \
--poll-interval=1s \
--sub-safety-margin=6 \
--num-confirmations=1 \
--safe-abort-nonce-too-low-count=3 \
--resubmission-timeout=30s \
--rpc.addr=0.0.0.0 \
--rpc.port=8548 \
--rpc.enable-admin \
--max-channel-duration=1 \
--l1-eth-rpc=$L1_RPC \
--private-key=$BATCHER_KEY
```
```bash
./bin/op-batcher \
--l2-eth-rpc=http://localhost:8545 \
--rollup-rpc=http://localhost:8547 \
--poll-interval=1s \
--sub-safety-margin=6 \
--num-confirmations=1 \
--safe-abort-nonce-too-low-count=3 \
--resubmission-timeout=30s \
--rpc.addr=0.0.0.0 \
--rpc.port=8548 \
--rpc.enable-admin \
--max-channel-duration=1 \
--target-l1-tx-size-bytes=2048 \
--l1-eth-rpc=<RPC> \
--private-key=<BATCHERKEY>
```
::: tip Controlling batcher costs
The `--max-channel-duration=n` setting tells the batcher to write all the data to L1 every `n` L1 blocks.
When it is low, transactions are written to L1 frequently, withdrawals are quick, and other nodes can synchronize from L1 fast.
When it is high, transactions are written to L1 less frequently, and the batcher spends less ETH.
:::
### `op-proposer`
::: tip Controlling batcher costs
Now start `op-proposer`, which proposes new state roots.
The `--max-channel-duration=n` setting tells the batcher to write all the data to L1 every `n` L1 blocks.
When it is low, transactions are written to L1 frequently, withdrawals are quick, and other nodes can synchronize from L1 fast.
When it is high, transactions are written to L1 less frequently, and the batcher spends less ETH.
```bash
cd ~/optimism/op-proposer
./bin/op-proposer \
--poll-interval 12s \
--rpc.port 8560 \
--rollup-rpc http://localhost:8547 \
--l2oo-address $L2OO_ADDR \
--private-key $PROPOSER_KEY \
--l1-eth-rpc $L1_RPC
```
:::
<!--
::: warning Change before moving to production
The `--allow-non-finalized` flag allows for faster tests on a test network.
However, in production you would probably want to only submit proposals on properly finalized blocks.
:::
-->
## Get some ETH on your Rollup
Once you’ve connected your wallet, you’ll probably notice that you don’t have any ETH on your Rollup. You’ll need some ETH to pay for gas on your Rollup. The easiest way to deposit Goerli ETH into your chain is to send funds directly to the `OptimismPortalProxy` contract. You can find the address of the `OptimismPortalProxy` contract for your chain by looking inside the `deployments` folder in the `contracts-bedrock` package.
Once you’ve connected your wallet, you’ll probably notice that you don’t have any ETH on your Rollup. You’ll need some ETH to pay for gas on your Rollup. The easiest way to deposit Goerli ETH into your chain is to send funds directly to the `L1StandardBridge` contract. You can find the address of the `L1StandardBridge` contract for your chain by looking inside the `deployments` folder in the `contracts-bedrock` package.
1. First, head over to the `contracts-bedrock` package:
......@@ -474,13 +523,7 @@ Once you’ve connected your wallet, you’ll probably notice that you don’t h
1. Grab the address of the proxy to the L1 standard bridge contract:
```bash
cat deployments/getting-started/Proxy__OVM_L1StandardBridge.json | grep \"address\":
```
You should see a result similar to the following (**your address will be different**):
```
"address": "0x874f2E16D803c044F10314A978322da3c9b075c7",
cat deployments/getting-started/Proxy__OVM_L1StandardBridge.json | jq -r .address
```
1. Grab the L1 bridge proxy contract address and, using the wallet that you want to have ETH on your Rollup, send that address a small amount of ETH on Goerli (0.1 or less is fine). It may take up to 5 minutes for that ETH to appear in your wallet on L2.
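For example, with Foundry's `cast` (the same tool used later in this guide), a deposit could look like the following sketch; the bridge address and private key are placeholders for your own values.
```bash
# Send 0.1 Goerli ETH (100000000000000000 wei) to the L1 standard bridge proxy.
# Replace the placeholders with the address from the previous step and a funded key.
cast send <L1_STANDARD_BRIDGE_PROXY_ADDRESS> \
    --value 100000000000000000 \
    --private-key <YOUR_GOERLI_PRIVATE_KEY> \
    --rpc-url $L1_RPC
```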
......@@ -529,7 +572,7 @@ To see your rollup in action, you can use the [Optimism Mainnet Getting Started
```bash
cast call $GREETER "greet()" | cast --to-ascii
cast send --mnemonic-path mnem.delme $GREETER "setGreeting(string)" "New greeting"
cast send --mnemonic-path ./mnem.delme $GREETER "setGreeting(string)" "New greeting"
cast call $GREETER "greet()" | cast --to-ascii
```
......
......@@ -79,7 +79,7 @@ func FaultProofProgram(logger log.Logger, cfg *config.Config) error {
l2DebugCl := &L2Source{L2Client: l2Cl, DebugClient: sources.NewDebugClient(l2RPC.CallContext)}
logger.Info("Setting up pre-fetcher")
prefetch := prefetcher.NewPrefetcher(l1Cl, l2DebugCl, kv)
prefetch := prefetcher.NewPrefetcher(logger, l1Cl, l2DebugCl, kv)
preimageOracle = asOracleFn(func(key common.Hash) ([]byte, error) {
return prefetch.GetPreimage(ctx, key)
})
......
......@@ -10,6 +10,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-program/client/l1"
......@@ -38,10 +39,10 @@ type Prefetcher struct {
kvStore kvstore.KV
}
func NewPrefetcher(l1Fetcher L1Source, l2Fetcher L2Source, kvStore kvstore.KV) *Prefetcher {
func NewPrefetcher(logger log.Logger, l1Fetcher L1Source, l2Fetcher L2Source, kvStore kvstore.KV) *Prefetcher {
return &Prefetcher{
l1Fetcher: l1Fetcher,
l2Fetcher: l2Fetcher,
l1Fetcher: NewRetryingL1Source(logger, l1Fetcher),
l2Fetcher: NewRetryingL2Source(logger, l2Fetcher),
kvStore: kvStore,
}
}
......
......@@ -5,9 +5,11 @@ import (
"math/rand"
"testing"
"github.com/ethereum-optimism/optimism/op-node/testlog"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rlp"
"github.com/stretchr/testify/require"
......@@ -263,6 +265,7 @@ type l2Client struct {
}
func createPrefetcher(t *testing.T) (*Prefetcher, *testutils.MockL1Source, *l2Client, kvstore.KV) {
logger := testlog.Logger(t, log.LvlDebug)
kv := kvstore.NewMemKV()
l1Source := new(testutils.MockL1Source)
......@@ -271,7 +274,7 @@ func createPrefetcher(t *testing.T) (*Prefetcher, *testutils.MockL1Source, *l2Cl
MockDebugClient: new(testutils.MockDebugClient),
}
prefetcher := NewPrefetcher(l1Source, l2Source, kv)
prefetcher := NewPrefetcher(logger, l1Source, l2Source, kv)
return prefetcher, l1Source, l2Source, kv
}
......
package prefetcher
import (
"context"
"math"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-service/backoff"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
)
const maxAttempts = math.MaxInt // Succeed or die trying
type RetryingL1Source struct {
logger log.Logger
source L1Source
strategy backoff.Strategy
}
func NewRetryingL1Source(logger log.Logger, source L1Source) *RetryingL1Source {
return &RetryingL1Source{
logger: logger,
source: source,
strategy: backoff.Exponential(),
}
}
func (s *RetryingL1Source) InfoByHash(ctx context.Context, blockHash common.Hash) (eth.BlockInfo, error) {
var info eth.BlockInfo
err := backoff.DoCtx(ctx, maxAttempts, s.strategy, func() error {
res, err := s.source.InfoByHash(ctx, blockHash)
if err != nil {
s.logger.Warn("Failed to retrieve info", "hash", blockHash, "err", err)
return err
}
info = res
return nil
})
return info, err
}
func (s *RetryingL1Source) InfoAndTxsByHash(ctx context.Context, blockHash common.Hash) (eth.BlockInfo, types.Transactions, error) {
var info eth.BlockInfo
var txs types.Transactions
err := backoff.DoCtx(ctx, maxAttempts, s.strategy, func() error {
i, t, err := s.source.InfoAndTxsByHash(ctx, blockHash)
if err != nil {
s.logger.Warn("Failed to retrieve info and txs", "hash", blockHash, "err", err)
return err
}
info = i
txs = t
return nil
})
return info, txs, err
}
func (s *RetryingL1Source) FetchReceipts(ctx context.Context, blockHash common.Hash) (eth.BlockInfo, types.Receipts, error) {
var info eth.BlockInfo
var rcpts types.Receipts
err := backoff.DoCtx(ctx, maxAttempts, s.strategy, func() error {
i, r, err := s.source.FetchReceipts(ctx, blockHash)
if err != nil {
s.logger.Warn("Failed to fetch receipts", "hash", blockHash, "err", err)
return err
}
info = i
rcpts = r
return nil
})
return info, rcpts, err
}
var _ L1Source = (*RetryingL1Source)(nil)
type RetryingL2Source struct {
logger log.Logger
source L2Source
strategy backoff.Strategy
}
func (s *RetryingL2Source) InfoAndTxsByHash(ctx context.Context, blockHash common.Hash) (eth.BlockInfo, types.Transactions, error) {
var info eth.BlockInfo
var txs types.Transactions
err := backoff.DoCtx(ctx, maxAttempts, s.strategy, func() error {
i, t, err := s.source.InfoAndTxsByHash(ctx, blockHash)
if err != nil {
s.logger.Warn("Failed to retrieve info and txs", "hash", blockHash, "err", err)
return err
}
info = i
txs = t
return nil
})
return info, txs, err
}
func (s *RetryingL2Source) NodeByHash(ctx context.Context, hash common.Hash) ([]byte, error) {
var node []byte
err := backoff.DoCtx(ctx, maxAttempts, s.strategy, func() error {
n, err := s.source.NodeByHash(ctx, hash)
if err != nil {
s.logger.Warn("Failed to retrieve node", "hash", hash, "err", err)
return err
}
node = n
return nil
})
return node, err
}
func (s *RetryingL2Source) CodeByHash(ctx context.Context, hash common.Hash) ([]byte, error) {
var code []byte
err := backoff.DoCtx(ctx, maxAttempts, s.strategy, func() error {
c, err := s.source.CodeByHash(ctx, hash)
if err != nil {
s.logger.Warn("Failed to retrieve code", "hash", hash, "err", err)
return err
}
code = c
return nil
})
return code, err
}
func NewRetryingL2Source(logger log.Logger, source L2Source) *RetryingL2Source {
return &RetryingL2Source{
logger: logger,
source: source,
strategy: backoff.Exponential(),
}
}
var _ L2Source = (*RetryingL2Source)(nil)
package prefetcher
import (
"context"
"errors"
"testing"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/testlog"
"github.com/ethereum-optimism/optimism/op-node/testutils"
"github.com/ethereum-optimism/optimism/op-service/backoff"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestRetryingL1Source(t *testing.T) {
ctx := context.Background()
hash := common.Hash{0xab}
info := &testutils.MockBlockInfo{InfoHash: hash}
// The mock really doesn't like returning nil for an eth.BlockInfo, so return a value we expect to be ignored instead
wrongInfo := &testutils.MockBlockInfo{InfoHash: common.Hash{0x99}}
txs := types.Transactions{
&types.Transaction{},
}
rcpts := types.Receipts{
&types.Receipt{},
}
t.Run("InfoByHash Success", func(t *testing.T) {
source, mock := createL1Source(t)
defer mock.AssertExpectations(t)
mock.ExpectInfoByHash(hash, info, nil)
result, err := source.InfoByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, info, result)
})
t.Run("InfoByHash Error", func(t *testing.T) {
source, mock := createL1Source(t)
defer mock.AssertExpectations(t)
expectedErr := errors.New("boom")
mock.ExpectInfoByHash(hash, wrongInfo, expectedErr)
mock.ExpectInfoByHash(hash, info, nil)
result, err := source.InfoByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, info, result)
})
t.Run("InfoAndTxsByHash Success", func(t *testing.T) {
source, mock := createL1Source(t)
defer mock.AssertExpectations(t)
mock.ExpectInfoAndTxsByHash(hash, info, txs, nil)
actualInfo, actualTxs, err := source.InfoAndTxsByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, info, actualInfo)
require.Equal(t, txs, actualTxs)
})
t.Run("InfoAndTxsByHash Error", func(t *testing.T) {
source, mock := createL1Source(t)
defer mock.AssertExpectations(t)
expectedErr := errors.New("boom")
mock.ExpectInfoAndTxsByHash(hash, wrongInfo, nil, expectedErr)
mock.ExpectInfoAndTxsByHash(hash, info, txs, nil)
actualInfo, actualTxs, err := source.InfoAndTxsByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, info, actualInfo)
require.Equal(t, txs, actualTxs)
})
t.Run("FetchReceipts Success", func(t *testing.T) {
source, mock := createL1Source(t)
defer mock.AssertExpectations(t)
mock.ExpectFetchReceipts(hash, info, rcpts, nil)
actualInfo, actualRcpts, err := source.FetchReceipts(ctx, hash)
require.NoError(t, err)
require.Equal(t, info, actualInfo)
require.Equal(t, rcpts, actualRcpts)
})
t.Run("FetchReceipts Error", func(t *testing.T) {
source, mock := createL1Source(t)
defer mock.AssertExpectations(t)
expectedErr := errors.New("boom")
mock.ExpectFetchReceipts(hash, wrongInfo, nil, expectedErr)
mock.ExpectFetchReceipts(hash, info, rcpts, nil)
actualInfo, actualRcpts, err := source.FetchReceipts(ctx, hash)
require.NoError(t, err)
require.Equal(t, info, actualInfo)
require.Equal(t, rcpts, actualRcpts)
})
}
func createL1Source(t *testing.T) (*RetryingL1Source, *testutils.MockL1Source) {
logger := testlog.Logger(t, log.LvlDebug)
mock := &testutils.MockL1Source{}
source := NewRetryingL1Source(logger, mock)
// Avoid sleeping in tests by using a fixed backoff strategy with no delay
source.strategy = backoff.Fixed(0)
return source, mock
}
func TestRetryingL2Source(t *testing.T) {
ctx := context.Background()
hash := common.Hash{0xab}
info := &testutils.MockBlockInfo{InfoHash: hash}
// The mock really doesn't like returning nil for an eth.BlockInfo, so return a value we expect to be ignored instead
wrongInfo := &testutils.MockBlockInfo{InfoHash: common.Hash{0x99}}
txs := types.Transactions{
&types.Transaction{},
}
data := []byte{1, 2, 3, 4, 5}
t.Run("InfoAndTxsByHash Success", func(t *testing.T) {
source, mock := createL2Source(t)
defer mock.AssertExpectations(t)
mock.ExpectInfoAndTxsByHash(hash, info, txs, nil)
actualInfo, actualTxs, err := source.InfoAndTxsByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, info, actualInfo)
require.Equal(t, txs, actualTxs)
})
t.Run("InfoAndTxsByHash Error", func(t *testing.T) {
source, mock := createL2Source(t)
defer mock.AssertExpectations(t)
expectedErr := errors.New("boom")
mock.ExpectInfoAndTxsByHash(hash, wrongInfo, nil, expectedErr)
mock.ExpectInfoAndTxsByHash(hash, info, txs, nil)
actualInfo, actualTxs, err := source.InfoAndTxsByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, info, actualInfo)
require.Equal(t, txs, actualTxs)
})
t.Run("NodeByHash Success", func(t *testing.T) {
source, mock := createL2Source(t)
defer mock.AssertExpectations(t)
mock.ExpectNodeByHash(hash, data, nil)
actual, err := source.NodeByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, data, actual)
})
t.Run("NodeByHash Error", func(t *testing.T) {
source, mock := createL2Source(t)
defer mock.AssertExpectations(t)
expectedErr := errors.New("boom")
mock.ExpectNodeByHash(hash, nil, expectedErr)
mock.ExpectNodeByHash(hash, data, nil)
actual, err := source.NodeByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, data, actual)
})
t.Run("CodeByHash Success", func(t *testing.T) {
source, mock := createL2Source(t)
defer mock.AssertExpectations(t)
mock.ExpectCodeByHash(hash, data, nil)
actual, err := source.CodeByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, data, actual)
})
t.Run("CodeByHash Error", func(t *testing.T) {
source, mock := createL2Source(t)
defer mock.AssertExpectations(t)
expectedErr := errors.New("boom")
mock.ExpectCodeByHash(hash, nil, expectedErr)
mock.ExpectCodeByHash(hash, data, nil)
actual, err := source.CodeByHash(ctx, hash)
require.NoError(t, err)
require.Equal(t, data, actual)
})
}
func createL2Source(t *testing.T) (*RetryingL2Source, *MockL2Source) {
logger := testlog.Logger(t, log.LvlDebug)
mock := &MockL2Source{}
source := NewRetryingL2Source(logger, mock)
// Avoid sleeping in tests by using a fixed backoff strategy with no delay
source.strategy = backoff.Fixed(0)
return source, mock
}
type MockL2Source struct {
mock.Mock
}
func (m *MockL2Source) InfoAndTxsByHash(ctx context.Context, blockHash common.Hash) (eth.BlockInfo, types.Transactions, error) {
out := m.Mock.MethodCalled("InfoAndTxsByHash", blockHash)
return out[0].(eth.BlockInfo), out[1].(types.Transactions), *out[2].(*error)
}
func (m *MockL2Source) NodeByHash(ctx context.Context, hash common.Hash) ([]byte, error) {
out := m.Mock.MethodCalled("NodeByHash", hash)
return out[0].([]byte), *out[1].(*error)
}
func (m *MockL2Source) CodeByHash(ctx context.Context, hash common.Hash) ([]byte, error) {
out := m.Mock.MethodCalled("CodeByHash", hash)
return out[0].([]byte), *out[1].(*error)
}
func (m *MockL2Source) ExpectInfoAndTxsByHash(blockHash common.Hash, info eth.BlockInfo, txs types.Transactions, err error) {
m.Mock.On("InfoAndTxsByHash", blockHash).Once().Return(info, txs, &err)
}
func (m *MockL2Source) ExpectNodeByHash(hash common.Hash, node []byte, err error) {
m.Mock.On("NodeByHash", hash).Once().Return(node, &err)
}
func (m *MockL2Source) ExpectCodeByHash(hash common.Hash, code []byte, err error) {
m.Mock.On("CodeByHash", hash).Once().Return(code, &err)
}
var _ L2Source = (*MockL2Source)(nil)
......@@ -395,14 +395,15 @@ RLPWriter_writeUint_Test:test_writeUint_smallint_succeeds() (gas: 7280)
RLPWriter_writeUint_Test:test_writeUint_zero_succeeds() (gas: 7749)
ResolvedDelegateProxy_Test:test_fallback_addressManagerNotSet_reverts() (gas: 605906)
ResolvedDelegateProxy_Test:test_fallback_delegateCallBar_reverts() (gas: 24783)
ResourceMetering_Test:test_meter_initialResourceParams_succeeds() (gas: 10368)
ResourceMetering_Test:test_meter_updateNoGasDelta_succeeds() (gas: 2009696)
ResourceMetering_Test:test_meter_updateOneEmptyBlock_succeeds() (gas: 18860)
ResourceMetering_Test:test_meter_updateParamsNoChange_succeeds() (gas: 15149)
ResourceMetering_Test:test_meter_updateTenEmptyBlocks_succeeds() (gas: 21713)
ResourceMetering_Test:test_meter_updateTwoEmptyBlocks_succeeds() (gas: 21669)
ResourceMetering_Test:test_meter_useMax_succeeds() (gas: 20018715)
ResourceMetering_Test:test_meter_useMoreThanMax_reverts() (gas: 17505)
ResourceMetering_Test:test_meter_denominatorEq1_reverts() (gas: 20024064)
ResourceMetering_Test:test_meter_initialResourceParams_succeeds() (gas: 12423)
ResourceMetering_Test:test_meter_updateNoGasDelta_succeeds() (gas: 2011591)
ResourceMetering_Test:test_meter_updateOneEmptyBlock_succeeds() (gas: 20894)
ResourceMetering_Test:test_meter_updateParamsNoChange_succeeds() (gas: 17217)
ResourceMetering_Test:test_meter_updateTenEmptyBlocks_succeeds() (gas: 23747)
ResourceMetering_Test:test_meter_updateTwoEmptyBlocks_succeeds() (gas: 23703)
ResourceMetering_Test:test_meter_useMax_succeeds() (gas: 20020816)
ResourceMetering_Test:test_meter_useMoreThanMax_reverts() (gas: 19549)
SafeCall_call_Test:test_callWithMinGas_noLeakageHigh_succeeds() (gas: 2075873614)
SafeCall_call_Test:test_callWithMinGas_noLeakageLow_succeeds() (gas: 753665282)
Semver_Test:test_behindProxy_succeeds() (gas: 506748)
......
......@@ -7,25 +7,28 @@ import { Proxy } from "../universal/Proxy.sol";
import { Constants } from "../libraries/Constants.sol";
contract MeterUser is ResourceMetering {
ResourceMetering.ResourceConfig public innerConfig;
constructor() {
initialize();
innerConfig = Constants.DEFAULT_RESOURCE_CONFIG();
}
function initialize() public initializer {
__ResourceMetering_init();
}
function resourceConfig() public pure returns (ResourceMetering.ResourceConfig memory) {
function resourceConfig() public view returns (ResourceMetering.ResourceConfig memory) {
return _resourceConfig();
}
function _resourceConfig()
internal
pure
view
override
returns (ResourceMetering.ResourceConfig memory)
{
return Constants.DEFAULT_RESOURCE_CONFIG();
return innerConfig;
}
function use(uint64 _amount) public metered(_amount) {}
......@@ -41,6 +44,10 @@ contract MeterUser is ResourceMetering {
prevBlockNum: _prevBlockNum
});
}
function setParams(ResourceMetering.ResourceConfig memory newConfig) public {
innerConfig = newConfig;
}
}
/**
......@@ -134,6 +141,32 @@ contract ResourceMetering_Test is Test {
assertEq(postBaseFee, 2125000000);
}
/**
* @notice This tests that the metered modifier reverts if
* the ResourceConfig baseFeeMaxChangeDenominator
* is set to 1.
* Since the metered modifier internally calls
* solmate's powWad function, it will revert
* with the error string "UNDEFINED" since the
* first parameter will be computed as 0.
*/
function test_meter_denominatorEq1_reverts() external {
ResourceMetering.ResourceConfig memory rcfg = meter.resourceConfig();
uint64 target = uint64(rcfg.maxResourceLimit) / uint64(rcfg.elasticityMultiplier);
uint64 elasticityMultiplier = uint64(rcfg.elasticityMultiplier);
rcfg.baseFeeMaxChangeDenominator = 1;
meter.setParams(rcfg);
meter.use(target * elasticityMultiplier);
(, uint64 prevBoughtGas, ) = meter.params();
assertEq(prevBoughtGas, target * elasticityMultiplier);
vm.roll(initialBlockNum + 2);
vm.expectRevert("UNDEFINED");
meter.use(0);
}
function test_meter_useMoreThanMax_reverts() external {
ResourceMetering.ResourceConfig memory rcfg = meter.resourceConfig();
uint64 target = uint64(rcfg.maxResourceLimit) / uint64(rcfg.elasticityMultiplier);
......
......@@ -187,13 +187,12 @@ export class StandardBridgeAdapter implements IBridgeAdapter {
// exception then we assume that the token is not supported. Other errors are thrown. Since
// the JSON-RPC API is not well-specified, we need to handle multiple possible error codes.
if (
err.message.toString().includes('CALL_EXCEPTION') ||
err.stack.toString().includes('execution reverted')
!err?.message?.toString().includes('CALL_EXCEPTION') &&
!err?.stack?.toString().includes('execution reverted')
) {
return false
} else {
throw err
console.error('Unexpected error when checking bridge', err)
}
return false
}
}
......