Commit db1a81ed authored by mergify[bot], committed by GitHub

Merge branch 'develop' into inphi/fpp-comms

parents 7b425259 3d990967
...@@ -660,6 +660,45 @@ jobs:
command: npx depcheck
working_directory: integration-tests
atst-tests:
docker:
- image: ethereumoptimism/ci-builder:latest
resource_class: large
steps:
- checkout
- attach_workspace: { at: '.' }
- check-changed:
patterns: atst,contracts-periphery
- restore_cache:
name: Restore Yarn Package Cache
keys:
- yarn-packages-v2-{{ checksum "yarn.lock" }}
- run:
name: anvil
background: true
command: anvil --fork-url $ANVIL_L2_FORK_URL_MAINNET --fork-block-number 92093723
- run:
name: build
command: yarn build
working_directory: packages/atst
- run:
name: typecheck
command: yarn typecheck
working_directory: packages/atst
- run:
name: lint
command: yarn lint:check
working_directory: packages/atst
- run:
name: make sure anvil is up
command: npx wait-on tcp:8545 && cast block-number --rpc-url http://localhost:8545
- run:
name: test
command: yarn test
no_output_timeout: 5m
working_directory: packages/atst
go-lint:
parameters:
module:
...@@ -1094,6 +1133,9 @@ workflows:
- op-bindings-build:
requires:
- yarn-monorepo
- atst-tests:
requires:
- yarn-monorepo
- js-lint-test:
name: actor-tests-tests
coverage_flag: actor-tests-tests
...@@ -1464,4 +1506,4 @@ workflows:
docker_tags: <<pipeline.git.revision>>,latest
docker_context: ./ops/docker/ci-builder
context:
- oplabs-gcr
\ No newline at end of file
...@@ -11,7 +11,8 @@ One easy way to do this is to use [Blockscout](https://www.blockscout.com/).
### Archive mode
Blockscout expects to interact with an Ethereum execution client in [archive mode](https://www.alchemy.com/overviews/archive-nodes#archive-nodes).
If your `op-geth` is running in full mode, you can create a separate archive node.
To do so, follow the [directions to add a node](./getting-started.md#adding-nodes), but in the command you use to start `op-geth` replace:
```sh
--gcmode=full \
...
...@@ -73,7 +73,7 @@ We’re going to be spinning up an EVM Rollup from the OP Stack source code. Yo
1. Build the various packages inside of the Optimism Monorepo.
```bash
make op-node op-batcher op-proposer
yarn build
```
...@@ -154,9 +154,9 @@ Save these accounts and their respective private keys somewhere, you’ll need t
Recommended funding amounts are as follows:
- `Admin` — 2 ETH
- `Proposer` — 5 ETH
- `Batcher` — 10 ETH
::: danger Not for production deployments
...@@ -304,13 +304,32 @@ We’re almost ready to run our chain! Now we just need to run a few commands to
Everything is now initialized and ready to go!
## Run the node software

There are four components that need to run for a rollup.
The first two, `op-geth` and `op-node`, have to run on every node.
The other two, `op-batcher` and `op-proposer`, run only in one place, the sequencer that accepts transactions.
Set these environment variables for the configuration:

| Variable       | Value |
| -------------- | ----- |
| `SEQ_ADDR`     | Address of the `Sequencer` account |
| `SEQ_KEY`      | Private key of the `Sequencer` account |
| `BATCHER_KEY`  | Private key of the `Batcher` account, which should have at least 1 ETH |
| `PROPOSER_KEY` | Private key of the `Proposer` account |
| `L1_RPC`       | URL for the L1 (such as Goerli) you're using |
| `RPC_KIND`     | The type of L1 server to which you connect, which can optimize requests. Available options are `alchemy`, `quicknode`, `parity`, `nethermind`, `debug_geth`, `erigon`, `basic`, and `any` |
| `L2OO_ADDR`    | The address of the `L2OutputOracleProxy`, available at `~/optimism/packages/contracts-bedrock/deployments/getting-started/L2OutputOracleProxy.json` |
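One way to set them, sketched here with placeholder values that you must replace with your own, is to export them in the shell session that will run the commands below:

```bash
# Example only: every value here is a placeholder for your own keys,
# addresses, and endpoints.
export SEQ_ADDR=0xYourSequencerAddress
export SEQ_KEY=0xYourSequencerPrivateKey
export BATCHER_KEY=0xYourBatcherPrivateKey
export PROPOSER_KEY=0xYourProposerPrivateKey
export L1_RPC=https://goerli.example.com/your-endpoint
export RPC_KIND=basic
# The proxy address can be read from the deployment artifact mentioned above:
export L2OO_ADDR=$(jq -r .address ~/optimism/packages/contracts-bedrock/deployments/getting-started/L2OutputOracleProxy.json)
```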
### `op-geth`
Run `op-geth` with the following commands.
```bash
cd ~/op-geth
./build/bin/geth \
--datadir ./datadir \
--http \
...@@ -324,7 +343,7 @@ Run `op-geth` with the following command. Make sure to replace `<SEQUENCER>` wit
--ws.origins="*" \
--ws.api=debug,eth,txpool,net,engine \
--syncmode=full \
--gcmode=archive \
--nodiscover \
--maxpeers=0 \
--networkid=42069 \
...@@ -336,13 +355,25 @@ Run `op-geth` with the following command. Make sure to replace `<SEQUENCER>` wit
--password=./datadir/password \
--allow-insecure-unlock \
--mine \
--miner.etherbase=$SEQ_ADDR \
--unlock=$SEQ_ADDR
```
And `op-geth` should be running! You should see some output, but you won’t see any blocks being created yet because `op-geth` is driven by the `op-node`. We’ll need to get that running next.
::: tip Why archive mode?
Archive mode takes more disk storage than full mode.
However, using it is important for two reasons:
- The `op-proposer` requires access to the full state.
If at some point `op-proposer` needs to look beyond 256 blocks in the past (8.5 minutes in the default configuration), for example because it was down for that long, we need archive mode.
- The [explorer](./explorer.md) requires archive mode.
:::
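One quick way to sanity-check that your node is really serving archive data (an illustration, not part of the original guide) is to ask for state at a very old block; a full node rejects such queries once the block falls outside its recent-state window, while an archive node answers:

```bash
# Query any address's balance at block 1. An archive node answers; a full
# node eventually fails with a "missing trie node" style error.
cast balance 0x0000000000000000000000000000000000000000 \
  --block 1 \
  --rpc-url http://localhost:8545
```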
#### Reinitializing op-geth
There are several situations that indicate database corruption and require you to reset the `op-geth` component:
...@@ -374,13 +405,13 @@ This is the reinitialization procedure:
1. Start `op-node`

### `op-node`

Once we’ve got `op-geth` running we’ll need to run `op-node`. Like Ethereum, the OP Stack has a consensus client (the `op-node`) and an execution client (`op-geth`). The consensus client drives the execution client over the Engine API.
```bash
cd ~/optimism/op-node
./bin/op-node \
--l2=http://localhost:8551 \
--l2.jwt-secret=./jwt.txt \
...@@ -392,9 +423,9 @@ Head over to the `op-node` package and start the `op-node` using the following c
--rpc.port=8547 \
--p2p.disable \
--rpc.enable-admin \
--p2p.sequencer.key=$SEQ_KEY \
--l1=$L1_RPC \
--l1.rpckind=$RPC_KIND
```
Once you run this command, you should start seeing the `op-node` begin to process all of the L1 information after the starting block number that you picked earlier. Once the `op-node` has enough information, it’ll begin sending Engine API payloads to `op-geth`. At that point, you’ll start to see blocks being created inside of `op-geth`. We’re live!
...@@ -420,50 +451,68 @@ Once you have multiple nodes, it makes sense to use these command line parameter

### `op-batcher`

The `op-batcher` takes transactions from the Sequencer and publishes those transactions to L1. Once transactions are on L1, they’re officially part of the Rollup. Without the `op-batcher`, transactions sent to the Sequencer would never make it to L1 and wouldn’t become part of the canonical chain. The `op-batcher` is critical!

It is best to give the `Batcher` at least 1 Goerli ETH to ensure that it can continue operating without running out of ETH for gas.

```bash
cd ~/optimism/op-batcher
./bin/op-batcher \
--l2-eth-rpc=http://localhost:8545 \
--rollup-rpc=http://localhost:8547 \
--poll-interval=1s \
--sub-safety-margin=6 \
--num-confirmations=1 \
--safe-abort-nonce-too-low-count=3 \
--resubmission-timeout=30s \
--rpc.addr=0.0.0.0 \
--rpc.port=8548 \
--rpc.enable-admin \
--max-channel-duration=1 \
--l1-eth-rpc=$L1_RPC \
--private-key=$BATCHER_KEY
```

::: tip Controlling batcher costs
The `--max-channel-duration=n` setting tells the batcher to write all the data to L1 every `n` L1 blocks.
When it is low, transactions are written to L1 frequently, withdrawals are quick, and other nodes can synchronize from L1 fast.
When it is high, transactions are written to L1 less frequently, and the batcher spends less ETH.
:::
### `op-proposer`

Now start `op-proposer`, which proposes new state roots.

```bash
cd ~/optimism/op-proposer
./bin/op-proposer \
--poll-interval 12s \
--rpc.port 8560 \
--rollup-rpc http://localhost:8547 \
--l2oo-address $L2OO_ADDR \
--private-key $PROPOSER_KEY \
--l1-eth-rpc $L1_RPC
```

<!--
::: warning Change before moving to production
The `--allow-non-finalized` flag allows for faster tests on a test network.
However, in production you would probably want to only submit proposals on properly finalized blocks.
:::
-->
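To confirm that proposals are landing on L1, one option (an illustration, assuming the standard Bedrock `L2OutputOracle` interface behind the proxy) is to query the oracle for the latest proposed L2 block number and watch it advance:

```bash
# Query the output oracle on L1 for the most recently proposed L2 block
# number; it should increase as op-proposer submits new state roots.
cast call $L2OO_ADDR "latestBlockNumber()(uint256)" --rpc-url $L1_RPC
```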
## Get some ETH on your Rollup

Once you’ve connected your wallet, you’ll probably notice that you don’t have any ETH on your Rollup. You’ll need some ETH to pay for gas on your Rollup. The easiest way to deposit Goerli ETH into your chain is to send funds directly to the `L1StandardBridge` contract. You can find the address of the `L1StandardBridge` contract for your chain by looking inside the `deployments` folder in the `contracts-bedrock` package.
1. First, head over to the `contracts-bedrock` package:
...@@ -474,13 +523,7 @@ Once you’ve connected your wallet, you’ll probably notice that you don’t h
1. Grab the address of the proxy to the L1 standard bridge contract:
```bash
cat deployments/getting-started/Proxy__OVM_L1StandardBridge.json | jq -r .address
```
1. Grab the L1 bridge proxy contract address and, using the wallet that you want to have ETH on your Rollup, send that address a small amount of ETH on Goerli (0.1 or less is fine). It may take up to 5 minutes for that ETH to appear in your wallet on L2.
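If you prefer the command line, the same deposit can be sketched as follows; `MY_KEY` is a placeholder for the private key of the L1 wallet you want funded on L2, and the command assumes you are still in the `contracts-bedrock` directory:

```bash
# Read the bridge proxy address from the deployment artifact, then send a
# small plain-ETH transfer to it on Goerli.
BRIDGE_ADDR=$(jq -r .address deployments/getting-started/Proxy__OVM_L1StandardBridge.json)
cast send $BRIDGE_ADDR \
  --value 0.05ether \
  --private-key $MY_KEY \
  --rpc-url $L1_RPC
```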
...@@ -529,7 +572,7 @@ To see your rollup in action, you can use the [Optimism Mainnet Getting Started
```bash
cast call $GREETER "greet()" | cast --to-ascii
cast send --mnemonic-path ./mnem.delme $GREETER "setGreeting(string)" "New greeting"
cast call $GREETER "greet()" | cast --to-ascii
```
...
...@@ -12,6 +12,7 @@ import (
oppcl "github.com/ethereum-optimism/optimism/op-program/client"
opp "github.com/ethereum-optimism/optimism/op-program/host"
oppconf "github.com/ethereum-optimism/optimism/op-program/host/config"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rpc"
...@@ -39,26 +40,51 @@ func TestVerifyL2OutputRoot(t *testing.T) {
require.Nil(t, err)
rollupClient := sources.NewRollupClient(client.NewBaseRPCClient(rollupRPCClient))
t.Log("Sending transactions to setup existing state, prior to challenged period")
aliceKey := cfg.Secrets.Alice
opts, err := bind.NewKeyedTransactorWithChainID(aliceKey, cfg.L1ChainIDBig())
require.Nil(t, err)
SendDepositTx(t, cfg, l1Client, l2Seq, opts, func(l2Opts *DepositTxOpts) {
l2Opts.Value = big.NewInt(100_000_000)
})
SendL2Tx(t, cfg, l2Seq, aliceKey, func(opts *TxOpts) {
opts.ToAddr = &cfg.Secrets.Addresses().Bob
opts.Value = big.NewInt(1_000)
opts.Nonce = 1
})
SendWithdrawal(t, cfg, l2Seq, aliceKey, func(opts *WithdrawalTxOpts) {
opts.Value = big.NewInt(500)
opts.Nonce = 2
})
t.Log("Capture current L2 head as agreed starting point")
l2AgreedBlock, err := l2Seq.BlockByNumber(ctx, nil)
require.NoError(t, err, "could not retrieve l2 agreed block")
l2Head := l2AgreedBlock.Hash()
t.Log("Sending transactions to modify existing state, within challenged period")
SendDepositTx(t, cfg, l1Client, l2Seq, opts, func(l2Opts *DepositTxOpts) {
l2Opts.Value = big.NewInt(5_000)
})
SendL2Tx(t, cfg, l2Seq, cfg.Secrets.Bob, func(opts *TxOpts) {
opts.ToAddr = &cfg.Secrets.Addresses().Alice
opts.Value = big.NewInt(100)
})
SendWithdrawal(t, cfg, l2Seq, aliceKey, func(opts *WithdrawalTxOpts) {
opts.Value = big.NewInt(100)
opts.Nonce = 4
})
t.Log("Determine L2 claim")
l2ClaimBlockNumber, err := l2Seq.BlockNumber(ctx)
require.NoError(t, err, "get L2 claim block number")
l2Output, err := rollupClient.OutputAtBlock(ctx, l2ClaimBlockNumber)
require.NoError(t, err, "could not get expected output")
l2Claim := l2Output.OutputRoot
t.Log("Determine L1 head that includes all batches required for L2 claim block")
require.NoError(t, waitForSafeHead(ctx, l2ClaimBlockNumber, rollupClient))
l1HeadBlock, err := l1Client.BlockByNumber(ctx, nil)
require.NoError(t, err, "get l1 head block")
l1Head := l1HeadBlock.Hash()
...@@ -73,7 +99,11 @@ func TestVerifyL2OutputRoot(t *testing.T) {
err = opp.FaultProofProgram(log, fppConfig)
require.NoError(t, err)
t.Log("Shutting down network")
// Shutdown the nodes from the actual chain. Should now be able to run using only the pre-fetched data.
sys.BatchSubmitter.StopIfRunning(context.Background())
sys.L2OutputSubmitter.Stop()
sys.L2OutputSubmitter = nil
for _, node := range sys.Nodes {
require.NoError(t, node.Close())
}
...
...@@ -6,18 +6,17 @@ import (
"fmt"
"strings"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-program/client/l1"
"github.com/ethereum-optimism/optimism/op-program/client/l2"
"github.com/ethereum-optimism/optimism/op-program/client/mpt"
"github.com/ethereum-optimism/optimism/op-program/host/kvstore"
"github.com/ethereum-optimism/optimism/op-program/preimage"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/log"
)
type L1Source interface {
...@@ -33,6 +32,7 @@ type L2Source interface {
}
type Prefetcher struct {
logger log.Logger
l1Fetcher L1Source
l2Fetcher L2Source
lastHint string
...@@ -41,6 +41,7 @@ type Prefetcher struct {
func NewPrefetcher(logger log.Logger, l1Fetcher L1Source, l2Fetcher L2Source, kvStore kvstore.KV) *Prefetcher {
return &Prefetcher{
logger: logger,
l1Fetcher: NewRetryingL1Source(logger, l1Fetcher),
l2Fetcher: NewRetryingL2Source(logger, l2Fetcher),
kvStore: kvStore,
...@@ -71,6 +72,7 @@ func (p *Prefetcher) prefetch(ctx context.Context, hint string) error {
if err != nil {
return err
}
p.logger.Debug("Prefetching", "type", hintType, "hash", hash)
switch hintType {
case l1.HintL1BlockHeader:
header, err := p.l1Fetcher.InfoByHash(ctx, hash)
...@@ -143,8 +145,11 @@ func (p *Prefetcher) storeTransactions(txs types.Transactions) error {
func (p *Prefetcher) storeTrieNodes(values []hexutil.Bytes) error {
_, nodes := mpt.WriteTrie(values)
for _, node := range nodes {
key := preimage.Keccak256Key(crypto.Keccak256Hash(node)).PreimageKey()
if err := p.kvStore.Put(key, node); errors.Is(err, kvstore.ErrAlreadyExists) {
// It's not uncommon for different tries to contain common nodes (esp for receipts)
continue
} else if err != nil {
return fmt.Errorf("failed to store node: %w", err)
}
}
...
...@@ -129,6 +129,28 @@ func TestFetchL1Receipts(t *testing.T) {
require.EqualValues(t, hash, header.Hash())
assertReceiptsEqual(t, receipts, actualReceipts)
})
// Blocks may have identical RLP receipts for different transactions.
// Check that the node already existing is handled
t.Run("CommonTrieNodes", func(t *testing.T) {
prefetcher, l1Cl, _, kv := createPrefetcher(t)
l1Cl.ExpectInfoByHash(hash, eth.BlockToInfo(block), nil)
l1Cl.ExpectInfoAndTxsByHash(hash, eth.BlockToInfo(block), block.Transactions(), nil)
l1Cl.ExpectFetchReceipts(hash, eth.BlockToInfo(block), receipts, nil)
defer l1Cl.AssertExpectations(t)
// Pre-store one receipt node (but not the whole trie leading to it)
// This would happen if an identical receipt was in an earlier block
opaqueRcpts, err := eth.EncodeReceipts(receipts)
require.NoError(t, err)
_, nodes := mpt.WriteTrie(opaqueRcpts)
require.NoError(t, kv.Put(preimage.Keccak256Key(crypto.Keccak256Hash(nodes[0])).PreimageKey(), nodes[0]))
oracle := l1.NewPreimageOracle(asOracleFn(t, prefetcher), asHinter(t, prefetcher))
header, actualReceipts := oracle.ReceiptsByBlockHash(hash)
require.EqualValues(t, hash, header.Hash())
assertReceiptsEqual(t, receipts, actualReceipts)
})
}
func TestFetchL2Block(t *testing.T) {
...
...@@ -55,6 +55,14 @@ func (k Keccak256Key) PreimageKey() (out common.Hash) {
return
}
func (k Keccak256Key) String() string {
return common.Hash(k).String()
}
func (k Keccak256Key) TerminalString() string {
return common.Hash(k).String()
}
// Hint is an interface to enable any program type to function as a hint,
// when passed to the Hinter interface, returning a string representation
// of what data the host should prepare pre-images for.
...
...@@ -18,7 +18,7 @@ Vitest snapshots for the vitest tests
CLI implementations of atst read and write
## constants
Internal and external constants
...@@ -32,4 +32,4 @@ Test helpers
## types
Zod and typescript types
\ No newline at end of file