Commit 97b0d79b authored by mergify[bot], committed by GitHub

Merge branch 'develop' into jg/docs

parents 73c4f292 be1cb549
---
'@eth-optimism/proxyd': minor
---
Fixed JSON-RPC 2.0 specification compliance by adding the optional data field on an RPCError
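For context, the JSON-RPC 2.0 spec allows an error object to carry an optional data member alongside the required code and message. A minimal Go sketch of such an error type, with illustrative field names rather than proxyd's actual struct:

package main

import (
	"encoding/json"
	"fmt"
)

// RPCError mirrors the JSON-RPC 2.0 error object: code and message are
// required, data is optional and omitted from the payload when empty.
type RPCError struct {
	Code    int         `json:"code"`
	Message string      `json:"message"`
	Data    interface{} `json:"data,omitempty"`
}

func main() {
	withData := RPCError{Code: -32000, Message: "execution reverted", Data: "0x08c379a0"}
	withoutData := RPCError{Code: -32601, Message: "method not found"}

	a, _ := json.Marshal(withData)
	b, _ := json.Marshal(withoutData)
	fmt.Println(string(a)) // includes "data"
	fmt.Println(string(b)) // "data" omitted
}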
---
'@eth-optimism/sdk': patch
---
Adds contract addresses for the Bedrock Alpha testnet
---
'@eth-optimism/contracts-bedrock': patch
'@eth-optimism/sdk': patch
---
Rename the event emitted in the L2ToL1MessagePasser
...@@ -2,4 +2,4 @@
'@eth-optimism/contracts-bedrock': patch
---
Use uint64 for arithmetic in XDM's baseGas
Moves initializers underneath constructors always
---
'@eth-optimism/contracts-periphery': patch
---
Goerli nft bridge deployment
---
'@eth-optimism/l2geth': patch
---
add --rpc.evmtimeout flag to configure timeout for eth_call
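The flag bounds how long an eth_call may execute before it is aborted. A rough Go sketch of the usual pattern, a context deadline wrapped around EVM execution; the names below are illustrative, not l2geth's actual code:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// evmTimeout would come from the (hypothetical here) --rpc.evmtimeout flag.
var evmTimeout = 5 * time.Second

// doCall sketches the common geth-style pattern: run the call under a context
// with a deadline and report a timeout error when the deadline fires.
func doCall(ctx context.Context, run func(ctx context.Context) (string, error)) (string, error) {
	if evmTimeout > 0 {
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(ctx, evmTimeout)
		defer cancel()
	}
	result, err := run(ctx)
	if errors.Is(ctx.Err(), context.DeadlineExceeded) {
		return "", fmt.Errorf("execution aborted (timeout = %v)", evmTimeout)
	}
	return result, err
}

func main() {
	out, err := doCall(context.Background(), func(ctx context.Context) (string, error) {
		// Stand-in for EVM execution that respects ctx cancellation.
		select {
		case <-time.After(10 * time.Millisecond):
			return "0x", nil
		case <-ctx.Done():
			return "", ctx.Err()
		}
	})
	fmt.Println(out, err)
}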
---
'@eth-optimism/integration-tests': patch
'@eth-optimism/contracts-periphery': patch
---
Fix erc721 factory to match erc21 factory
---
'@eth-optimism/indexer': minor
---
Bedrock support
---
'@eth-optimism/proxyd': minor
---
Adds new Redis rate limiter
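A fixed-window counter in Redis (INCR plus EXPIRE per key) is one common way to build such a limiter. The sketch below assumes the go-redis client and invented key and limit names; it is illustrative only, not proxyd's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-redis/redis/v8"
)

// redisRateLimiter enforces a simple fixed-window limit: at most max
// requests per key per window, tracked with an INCR + EXPIRE counter.
type redisRateLimiter struct {
	rdb    *redis.Client
	max    int64
	window time.Duration
}

func (l *redisRateLimiter) Allow(ctx context.Context, key string) (bool, error) {
	count, err := l.rdb.Incr(ctx, "rl:"+key).Result()
	if err != nil {
		return false, err
	}
	if count == 1 {
		// First hit in this window: start the window timer.
		if err := l.rdb.Expire(ctx, "rl:"+key, l.window).Err(); err != nil {
			return false, err
		}
	}
	return count <= l.max, nil
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	limiter := &redisRateLimiter{rdb: rdb, max: 100, window: time.Second}
	ok, err := limiter.Allow(context.Background(), "sender:0xabc")
	fmt.Println(ok, err)
}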
---
"@eth-optimism/contracts-periphery": patch
---
mainnet nft bridge deployments
---
'@eth-optimism/contracts-bedrock': patch
---
Removes an unnecessary initializer parameter in the L200
---
'@eth-optimism/endpoint-monitor': major
---
Initial release of endpoint monitor
---
'@eth-optimism/ci-builder': patch
---
Pin slither version to 0.9.0
...@@ -2,6 +2,48 @@ version: 2.1
orbs:
go: circleci/go@1.5.0
gcp-cli: circleci/gcp-cli@2.4.1
commands:
gcp-oidc-authenticate:
description: "Authenticate with GCP using a CircleCI OIDC token."
parameters:
project_id:
type: env_var_name
default: GCP_PROJECT_ID
workload_identity_pool_id:
type: env_var_name
default: GCP_WIP_ID
workload_identity_pool_provider_id:
type: env_var_name
default: GCP_WIP_PROVIDER_ID
service_account_email:
type: env_var_name
default: GCP_SERVICE_ACCOUNT_EMAIL
gcp_cred_config_file_path:
type: string
default: /home/circleci/gcp_cred_config.json
oidc_token_file_path:
type: string
default: /home/circleci/oidc_token.json
steps:
- run:
name: "Create OIDC credential configuration"
command: |
# Store OIDC token in temp file
echo $CIRCLE_OIDC_TOKEN > << parameters.oidc_token_file_path >>
# Create a credential configuration for the generated OIDC ID Token
gcloud iam workload-identity-pools create-cred-config \
"projects/${<< parameters.project_id >>}/locations/global/workloadIdentityPools/${<< parameters.workload_identity_pool_id >>}/providers/${<< parameters.workload_identity_pool_provider_id >>}"\
--output-file="<< parameters.gcp_cred_config_file_path >>" \
--service-account="${<< parameters.service_account_email >>}" \
--credential-source-file=<< parameters.oidc_token_file_path >>
- run:
name: "Authenticate with GCP using OIDC"
command: |
# Configure gcloud to leverage the generated credential configuration
gcloud auth login --brief --cred-file "<< parameters.gcp_cred_config_file_path >>"
# Configure ADC
echo "export GOOGLE_APPLICATION_CREDENTIALS='<< parameters.gcp_cred_config_file_path >>'" | tee -a "$BASH_ENV"
jobs:
yarn-monorepo:
docker:
...@@ -67,6 +109,7 @@ jobs:
image: ubuntu-2204:2022.07.1
resource_class: xlarge
steps:
- gcp-oidc-authenticate
# Below is CircleCI recommended way of specifying nameservers on an Ubuntu box:
# https://support.circleci.com/hc/en-us/articles/7323511028251-How-to-set-custom-DNS-on-Ubuntu-based-images-using-netplan
- run: sudo sed -i '13 i \ \ \ \ \ \ \ \ \ \ \ \ nameservers:' /etc/netplan/50-cloud-init.yaml
...@@ -101,7 +144,7 @@ jobs:
- run:
name: Publish
command: |
echo "$DOCKER_PASS" | docker login -u "$DOCKER_USERNAME" --password-stdin "<<parameters.repo>>"
gcloud auth configure-docker us-central1-docker.pkg.dev
docker push <<parameters.docker_tags>>
contracts-bedrock-tests:
...@@ -145,7 +188,7 @@ jobs:
working_directory: packages/contracts-bedrock
- run:
name: upload coverage
command: codecov --verbose --clean --flag contracts-bedrock-tests
command: codecov --verbose --clean --flags contracts-bedrock-tests
environment:
FOUNDRY_PROFILE: ci
- run:
...@@ -217,7 +260,7 @@ jobs:
working_directory: packages/<<parameters.package_name>>
- run:
name: Upload coverage
command: codecov --verbose --clean --flag <<parameters.coverage_flag>>
command: codecov --verbose --clean --flags <<parameters.coverage_flag>>
bedrock-go-tests:
docker:
...@@ -308,7 +351,7 @@ jobs:
working_directory: op-chain-ops
- run:
name: upload coverage
command: codecov --verbose --clean --flag bedrock-go-tests
command: codecov --verbose --clean --flags bedrock-go-tests
- store_test_results:
path: /test-results
- run:
...
...@@ -105,3 +105,39 @@ pull_request_rules:
comment:
message: |
This PR changes implementation code, but doesn't include a changeset. Did you forget to add one?
- name: Add indexer tag and ecopod reviewers
conditions:
- 'files~=^indexer/'
actions:
label:
add:
- indexer
request_reviews:
users:
- roninjin10
- nickbalestra
- name: Add sdk tag and ecopod reviewers
conditions:
- 'files~=^packages/sdk/'
actions:
label:
add:
- sdk
request_reviews:
users:
- roninjin10
- nickbalestra
- name: Add common-ts tag and ecopod reviewers
conditions:
- 'files~=^packages/common-ts/'
actions:
label:
add:
- common-ts
request_reviews:
users:
- imranjami
- roninjin10
exclude:
global:
- infra/op-replica/** # snyk does not respect kustomizations, so not useful here
# @eth-optimism/endpoint-monitor
## 1.0.0
### Major Changes
- a10c2b49: Initial release of endpoint monitor
{
"name": "@eth-optimism/endpoint-monitor",
"version": "0.0.0",
"version": "1.0.0",
"private": true,
"dependencies": {}
}
# @eth-optimism/indexer
## 0.3.0
### Minor Changes
- 19e581d8: Bedrock support
## 0.2.0
### Minor Changes
...
{
"name": "@eth-optimism/indexer",
"version": "0.2.0",
"version": "0.3.0",
"private": true,
"license": "MIT"
}
# @eth-optimism/integration-tests
## 0.5.21
### Patch Changes
- a3242d4f: Fix erc721 factory to match erc21 factory
## 0.5.20
### Patch Changes
...
{
"private": true,
"name": "@eth-optimism/integration-tests",
"version": "0.5.20",
"version": "0.5.21",
"description": "[Optimism] Integration tests",
"scripts": {
"lint": "yarn lint:fix && yarn lint:check",
...@@ -30,9 +30,9 @@
"devDependencies": {
"@babel/eslint-parser": "^7.5.4",
"@eth-optimism/contracts": "^0.5.37",
"@eth-optimism/contracts-periphery": "^1.0.1",
"@eth-optimism/contracts-periphery": "^1.0.2",
"@eth-optimism/core-utils": "0.10.1",
"@eth-optimism/sdk": "1.6.6",
"@eth-optimism/sdk": "1.6.7",
"@ethersproject/abstract-provider": "^5.7.0",
"@ethersproject/providers": "^5.7.0",
"@ethersproject/transactions": "^5.7.0",
...
# Changelog
## 0.5.25
### Patch Changes
- 89f1abfa: add --rpc.evmtimeout flag to configure timeout for eth_call
## 0.5.24
### Patch Changes
...
{
"name": "@eth-optimism/l2geth",
"version": "0.5.24",
"version": "0.5.25",
"private": true,
"devDependencies": {}
}
...@@ -10,6 +10,7 @@ const (
DevOptimismMintableERC20Factory = "0x6900000000000000000000000000000000000004"
DevAddressManager = "0x6900000000000000000000000000000000000005"
DevProxyAdmin = "0x6900000000000000000000000000000000000006"
DevWETH9 = "0x6900000000000000000000000000000000000007"
)
var (
...@@ -20,6 +21,7 @@ var (
DevOptimismMintableERC20FactoryAddr = common.HexToAddress(DevOptimismMintableERC20Factory)
DevAddressManagerAddr = common.HexToAddress(DevAddressManager)
DevProxyAdminAddr = common.HexToAddress(DevProxyAdmin)
DevWETH9Addr = common.HexToAddress(DevWETH9)
DevPredeploys = make(map[string]*common.Address)
)
...@@ -32,4 +34,5 @@ func init() {
DevPredeploys["OptimismMintableERC20Factory"] = &DevOptimismMintableERC20FactoryAddr
DevPredeploys["AddressManager"] = &DevAddressManagerAddr
DevPredeploys["Admin"] = &DevProxyAdminAddr
DevPredeploys["WETH9"] = &DevWETH9Addr
}
...@@ -14,10 +14,10 @@ import (
// LegacyWithdrawal represents a pre bedrock upgrade withdrawal.
type LegacyWithdrawal struct {
Target *common.Address
Sender *common.Address
Data []byte
Nonce *big.Int
Target *common.Address `json:"target"`
Sender *common.Address `json:"sender"`
Data []byte `json:"data"`
Nonce *big.Int `json:"nonce"`
}
var _ WithdrawalMessage = (*LegacyWithdrawal)(nil)
...
package crossdomain
import (
"errors"
"fmt"
"math/big"
"github.com/ethereum-optimism/optimism/op-bindings/bindings"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/vm"
)
var (
abiTrue = common.Hash{31: 0x01}
errLegacyStorageSlotNotFound = errors.New("cannot find storage slot")
)
// MigrateWithdrawals will migrate a list of pending withdrawals given a StateDB.
func MigrateWithdrawals(withdrawals []*PendingWithdrawal, db vm.StateDB, l1CrossDomainMessenger, l1StandardBridge *common.Address) error {
for _, legacy := range withdrawals {
legacySlot, err := legacy.StorageSlot()
if err != nil {
return err
}
legacyValue := db.GetState(predeploys.LegacyMessagePasserAddr, legacySlot)
if legacyValue != abiTrue {
return fmt.Errorf("%w: %s", errLegacyStorageSlotNotFound, legacyValue)
}
withdrawal, err := MigrateWithdrawal(&legacy.LegacyWithdrawal, l1CrossDomainMessenger, l1StandardBridge)
if err != nil {
return err
}
slot, err := withdrawal.StorageSlot()
if err != nil {
return err
}
db.SetState(predeploys.L2ToL1MessagePasserAddr, slot, abiTrue)
}
return nil
}
// MigrateWithdrawal will turn a LegacyWithdrawal into a bedrock
// style Withdrawal.
func MigrateWithdrawal(withdrawal *LegacyWithdrawal, l1CrossDomainMessenger, l1StandardBridge *common.Address) (*Withdrawal, error) {
value := new(big.Int)
isFromL2StandardBridge := *withdrawal.Sender == predeploys.L2StandardBridgeAddr
if withdrawal.Target == nil {
return nil, errors.New("withdrawal target cannot be nil")
}
isToL1StandardBridge := *withdrawal.Target == *l1StandardBridge
if isFromL2StandardBridge && isToL1StandardBridge {
abi, err := bindings.L1StandardBridgeMetaData.GetAbi()
if err != nil {
return nil, err
}
method, err := abi.MethodById(withdrawal.Data)
if err != nil {
return nil, err
}
if method.Name == "finalizeETHWithdrawal" {
data, err := method.Inputs.Unpack(withdrawal.Data[4:])
if err != nil {
return nil, err
}
// bounds check
if len(data) < 3 {
return nil, errors.New("not enough data")
}
var ok bool
value, ok = data[2].(*big.Int)
if !ok {
return nil, errors.New("not big.Int")
}
}
}
abi, err := bindings.L1CrossDomainMessengerMetaData.GetAbi()
if err != nil {
return nil, err
}
versionedNonce := EncodeVersionedNonce(withdrawal.Nonce, common.Big1)
data, err := abi.Pack(
"relayMessage",
versionedNonce,
withdrawal.Sender,
withdrawal.Target,
value,
new(big.Int),
withdrawal.Data,
)
if err != nil {
return nil, fmt.Errorf("cannot abi encode relayMessage: %w", err)
}
w := NewWithdrawal(
withdrawal.Nonce,
&predeploys.L2CrossDomainMessengerAddr,
l1CrossDomainMessenger,
value,
new(big.Int),
data,
)
return w, nil
}
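MigrateWithdrawal re-encodes the legacy message as a relayMessage call with a versioned nonce. Assuming the common scheme of packing a 16-bit version into the top bits of the 256-bit nonce, a small illustrative helper (not the actual EncodeVersionedNonce implementation) would look like:

package main

import (
	"fmt"
	"math/big"
)

// encodeVersionedNonce packs a version into the upper bits of a nonce:
// versioned = (version << 240) | nonce. This mirrors the assumed scheme;
// the real encoding lives in EncodeVersionedNonce.
func encodeVersionedNonce(nonce, version *big.Int) *big.Int {
	out := new(big.Int).Lsh(version, 240)
	return out.Or(out, nonce)
}

func main() {
	v := encodeVersionedNonce(big.NewInt(42), big.NewInt(1))
	fmt.Println(v.Text(16)) // version 1 in the top 16 bits, nonce 0x2a in the low bits
}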
package crossdomain_test
import (
"fmt"
"testing"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum/go-ethereum/common"
"github.com/stretchr/testify/require"
)
func TestMigrateWithdrawal(t *testing.T) {
withdrawals := make([]*crossdomain.LegacyWithdrawal, 0)
for _, receipt := range receipts {
msg, err := findCrossDomainMessage(receipt)
require.Nil(t, err)
withdrawal, err := msg.ToWithdrawal()
require.Nil(t, err)
legacyWithdrawal, ok := withdrawal.(*crossdomain.LegacyWithdrawal)
require.True(t, ok)
withdrawals = append(withdrawals, legacyWithdrawal)
}
l1CrossDomainMessenger := common.HexToAddress("0x25ace71c97B33Cc4729CF772ae268934F7ab5fA1")
l1StandardBridge := common.HexToAddress("0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1")
for i, legacy := range withdrawals {
t.Run(fmt.Sprintf("test%d", i), func(t *testing.T) {
withdrawal, err := crossdomain.MigrateWithdrawal(legacy, &l1CrossDomainMessenger, &l1StandardBridge)
require.Nil(t, err)
require.NotNil(t, withdrawal)
require.Equal(t, legacy.Nonce.Uint64(), withdrawal.Nonce.Uint64())
require.Equal(t, *withdrawal.Sender, predeploys.L2CrossDomainMessengerAddr)
require.Equal(t, *withdrawal.Target, l1CrossDomainMessenger)
})
}
}
...@@ -13,12 +13,12 @@ var _ WithdrawalMessage = (*Withdrawal)(nil)
// Withdrawal represents a withdrawal transaction on L2
type Withdrawal struct {
Nonce *big.Int
Sender *common.Address
Target *common.Address
Value *big.Int
GasLimit *big.Int
Data []byte
Nonce *big.Int `json:"nonce"`
Sender *common.Address `json:"sender"`
Target *common.Address `json:"target"`
Value *big.Int `json:"value"`
GasLimit *big.Int `json:"gasLimit"`
Data []byte `json:"data"`
}
// NewWithdrawal will create a Withdrawal
...
...@@ -15,12 +15,8 @@ import (
// A PendingWithdrawal represents a withdrawal that has
// not been finalized on L1
type PendingWithdrawal struct {
Target common.Address `json:"target"`
Sender common.Address `json:"sender"`
Message []byte `json:"message"`
MessageNonce *big.Int `json:"nonce"`
GasLimit *big.Int `json:"gasLimit"`
TransactionHash common.Hash `json:"transactionHash"`
LegacyWithdrawal `json:"withdrawal"`
TransactionHash common.Hash `json:"transactionHash"`
}
// Backends represents a set of backends for L1 and L2.
...@@ -119,11 +115,12 @@ func GetPendingWithdrawals(messengers *Messengers, version *big.Int, start, end
log.Info("%s not yet relayed", event.Raw.TxHash)
withdrawal := PendingWithdrawal{
Target: event.Target,
Sender: event.Sender,
Message: event.Message,
MessageNonce: event.MessageNonce,
GasLimit: event.GasLimit,
LegacyWithdrawal: LegacyWithdrawal{
Target: &event.Target,
Sender: &event.Sender,
Data: event.Message,
Nonce: event.MessageNonce,
},
TransactionHash: event.Raw.TxHash,
}
...
...@@ -272,8 +272,7 @@ func TestGetPendingWithdrawals(t *testing.T) {
for i, msg := range msgs[3:] {
withdrawal := withdrawals[i]
require.Equal(t, msg.Target, withdrawal.Target)
require.Equal(t, msg.Message, withdrawal.Message)
require.Equal(t, uint64(msg.MinGasLimit), withdrawal.GasLimit.Uint64())
require.Equal(t, msg.Target, *withdrawal.Target)
require.Equal(t, msg.Message, withdrawal.Data)
}
}
...@@ -165,6 +165,16 @@ func BuildL1DeveloperGenesis(config *DeployConfig) (*core.Genesis, error) {
for name, proxyAddr := range predeploys.DevPredeploys {
memDB.SetState(*proxyAddr, ImplementationSlot, depsByName[name].Address.Hash())
// Special case for WETH since it was not designed to be behind a proxy
if name == "WETH9" {
name, _ := state.EncodeStringValue("Wrapped Ether", 0)
symbol, _ := state.EncodeStringValue("WETH", 0)
decimals, _ := state.EncodeUintValue(18, 0)
memDB.SetState(*proxyAddr, common.Hash{}, name)
memDB.SetState(*proxyAddr, common.Hash{31: 0x01}, symbol)
memDB.SetState(*proxyAddr, common.Hash{31: 0x02}, decimals)
}
}
stateDB, err := backend.Blockchain().State()
...@@ -183,6 +193,7 @@ func BuildL1DeveloperGenesis(config *DeployConfig) (*core.Genesis, error) {
memDB.CreateAccount(depAddr)
memDB.SetCode(depAddr, dep.Bytecode)
for iter.Next() {
_, data, _, err := rlp.Split(iter.Value)
if err != nil {
...@@ -250,6 +261,9 @@ func deployL1Contracts(config *DeployConfig, backend *backends.SimulatedBackend)
common.Address{19: 0x01},
},
},
{
Name: "WETH9",
},
}...)
return deployer.Deploy(backend, constructors, l1Deployer)
}
...@@ -308,6 +322,11 @@ func l1Deployer(backend *backends.SimulatedBackend, opts *bind.TransactOpts, dep
backend,
common.Address{},
)
case "WETH9":
_, tx, _, err = bindings.DeployWETH9(
opts,
backend,
)
default:
if strings.HasSuffix(deployment.Name, "Proxy") {
_, tx, _, err = bindings.DeployProxy(opts, backend, deployer.TestAddress)
...
...@@ -92,6 +92,18 @@ func TestBuildL1DeveloperGenesis(t *testing.T) {
require.NoError(t, err)
require.Equal(t, predeploys.DevL1StandardBridgeAddr, bridgeAddr)
weth9, err := bindings.NewWETH9(predeploys.DevWETH9Addr, sim)
require.NoError(t, err)
decimals, err := weth9.Decimals(callOpts)
require.NoError(t, err)
require.Equal(t, uint8(18), decimals)
symbol, err := weth9.Symbol(callOpts)
require.NoError(t, err)
require.Equal(t, "WETH", symbol)
name, err := weth9.Name(callOpts)
require.NoError(t, err)
require.Equal(t, "Wrapped Ether", name)
// test that we can do deposits, etc.
priv, err := crypto.HexToECDSA("ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80")
require.NoError(t, err)
...
...@@ -106,6 +106,7 @@ func (s *L1Miner) ActL1IncludeTx(from common.Address) Action {
return
}
s.pendingIndices[from] = i + 1 // won't retry the tx
s.l1BuildingState.Prepare(tx.Hash(), len(s.l1Transactions))
receipt, err := core.ApplyTransaction(s.l1Cfg.Config, s.l1Chain, &s.l1BuildingHeader.Coinbase,
s.l1GasPool, s.l1BuildingState, s.l1BuildingHeader, tx, &s.l1BuildingHeader.GasUsed, *s.l1Chain.GetVMConfig())
if err != nil {
...
package actions
import (
"bytes"
"context"
"crypto/ecdsa"
"io"
"math/big"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/stretchr/testify/require"
)
type SyncStatusAPI interface {
SyncStatus(ctx context.Context) (*eth.SyncStatus, error)
}
type BlocksAPI interface {
BlockByNumber(ctx context.Context, number *big.Int) (*types.Block, error)
}
type L1TxAPI interface {
PendingNonceAt(ctx context.Context, account common.Address) (uint64, error)
HeaderByNumber(ctx context.Context, number *big.Int) (*types.Header, error)
SendTransaction(ctx context.Context, tx *types.Transaction) error
}
type BatcherCfg struct {
// Limit the size of txs
MinL1TxSize uint64
MaxL1TxSize uint64
BatcherKey *ecdsa.PrivateKey
}
// L2Batcher buffers and submits L2 batches to L1.
//
// TODO: note the batcher shares little logic/state with actual op-batcher,
// tests should only use this actor to build batch contents for rollup node actors to consume,
// until the op-batcher is refactored and can be covered better.
type L2Batcher struct {
log log.Logger
rollupCfg *rollup.Config
syncStatusAPI SyncStatusAPI
l2 BlocksAPI
l1 L1TxAPI
l1Signer types.Signer
l2ChannelOut *derive.ChannelOut
l2Submitting bool // when the channel out is being submitted, and not safe to write to without resetting
l2BufferedBlock eth.BlockID
l2SubmittedBlock eth.BlockID
l2BatcherCfg *BatcherCfg
}
func NewL2Batcher(log log.Logger, rollupCfg *rollup.Config, batcherCfg *BatcherCfg, api SyncStatusAPI, l1 L1TxAPI, l2 BlocksAPI) *L2Batcher {
return &L2Batcher{
log: log,
rollupCfg: rollupCfg,
syncStatusAPI: api,
l1: l1,
l2: l2,
l2BatcherCfg: batcherCfg,
l1Signer: types.LatestSignerForChainID(rollupCfg.L1ChainID),
}
}
// SubmittingData indicates if the actor is submitting buffer data.
// All data must be submitted before it can safely continue buffering more L2 blocks.
func (s *L2Batcher) SubmittingData() bool {
return s.l2Submitting
}
// ActL2BatchBuffer adds the next L2 block to the batch buffer.
// If the buffer is being submitted, the buffer is wiped.
func (s *L2Batcher) ActL2BatchBuffer(t Testing) {
if s.l2Submitting { // break ongoing submitting work if necessary
s.l2ChannelOut = nil
s.l2Submitting = false
}
syncStatus, err := s.syncStatusAPI.SyncStatus(t.Ctx())
require.NoError(t, err, "no sync status error")
// If we just started, start at safe-head
if s.l2SubmittedBlock == (eth.BlockID{}) {
s.log.Info("Starting batch-submitter work at safe-head", "safe", syncStatus.SafeL2)
s.l2SubmittedBlock = syncStatus.SafeL2.ID()
s.l2BufferedBlock = syncStatus.SafeL2.ID()
s.l2ChannelOut = nil
}
// If it's lagging behind, catch it up.
if s.l2SubmittedBlock.Number < syncStatus.SafeL2.Number {
s.log.Warn("last submitted block lagged behind L2 safe head: batch submission will continue from the safe head now", "last", s.l2SubmittedBlock, "safe", syncStatus.SafeL2)
s.l2SubmittedBlock = syncStatus.SafeL2.ID()
s.l2BufferedBlock = syncStatus.SafeL2.ID()
s.l2ChannelOut = nil
}
// Create channel if we don't have one yet
if s.l2ChannelOut == nil {
ch, err := derive.NewChannelOut()
require.NoError(t, err, "failed to create channel")
s.l2ChannelOut = ch
}
// Add the next unsafe block to the channel
if s.l2BufferedBlock.Number >= syncStatus.UnsafeL2.Number {
return
}
block, err := s.l2.BlockByNumber(t.Ctx(), big.NewInt(int64(s.l2BufferedBlock.Number+1)))
require.NoError(t, err, "need l2 block %d from sync status", s.l2SubmittedBlock.Number+1)
if block.ParentHash() != s.l2BufferedBlock.Hash {
s.log.Error("detected a reorg in L2 chain vs previous submitted information, resetting to safe head now", "safe_head", syncStatus.SafeL2)
s.l2SubmittedBlock = syncStatus.SafeL2.ID()
s.l2BufferedBlock = syncStatus.SafeL2.ID()
s.l2ChannelOut = nil
}
if err := s.l2ChannelOut.AddBlock(block); err != nil { // should always succeed
t.Fatalf("failed to add block to channel: %v", err)
}
}
func (s *L2Batcher) ActL2ChannelClose(t Testing) {
// Don't run this action if there's no data to submit
if s.l2ChannelOut == nil {
t.InvalidAction("need to buffer data first, cannot batch submit with empty buffer")
return
}
require.NoError(t, s.l2ChannelOut.Close(), "must close channel before submitting it")
}
// ActL2BatchSubmit constructs a batch tx from previous buffered L2 blocks, and submits it to L1
func (s *L2Batcher) ActL2BatchSubmit(t Testing) {
// Don't run this action if there's no data to submit
if s.l2ChannelOut == nil {
t.InvalidAction("need to buffer data first, cannot batch submit with empty buffer")
return
}
// Collect the output frame
data := new(bytes.Buffer)
data.WriteByte(derive.DerivationVersion0)
// subtract one, to account for the version byte
if err := s.l2ChannelOut.OutputFrame(data, s.l2BatcherCfg.MaxL1TxSize-1); err == io.EOF {
s.l2Submitting = false
// there may still be some data to submit
} else if err != nil {
s.l2Submitting = false
t.Fatalf("failed to output channel data to frame: %v", err)
}
nonce, err := s.l1.PendingNonceAt(t.Ctx(), s.rollupCfg.BatchSenderAddress)
require.NoError(t, err, "need batcher nonce")
gasTipCap := big.NewInt(2 * params.GWei)
pendingHeader, err := s.l1.HeaderByNumber(t.Ctx(), big.NewInt(-1))
require.NoError(t, err, "need l1 pending header for gas price estimation")
gasFeeCap := new(big.Int).Add(gasTipCap, new(big.Int).Mul(pendingHeader.BaseFee, big.NewInt(2)))
rawTx := &types.DynamicFeeTx{
ChainID: s.rollupCfg.L1ChainID,
Nonce: nonce,
To: &s.rollupCfg.BatchInboxAddress,
GasTipCap: gasTipCap,
GasFeeCap: gasFeeCap,
Data: data.Bytes(),
}
gas, err := core.IntrinsicGas(rawTx.Data, nil, false, true, true)
require.NoError(t, err, "need to compute intrinsic gas")
rawTx.Gas = gas
tx, err := types.SignNewTx(s.l2BatcherCfg.BatcherKey, s.l1Signer, rawTx)
require.NoError(t, err, "need to sign tx")
err = s.l1.SendTransaction(t.Ctx(), tx)
require.NoError(t, err, "need to send tx")
}
package actions
import (
"math/big"
"testing"
"github.com/ethereum-optimism/optimism/op-e2e/e2eutils"
"github.com/ethereum-optimism/optimism/op-node/testlog"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/stretchr/testify/require"
)
func TestBatcher(gt *testing.T) {
t := NewDefaultTesting(gt)
p := &e2eutils.TestParams{
MaxSequencerDrift: 20, // larger than L1 block time we simulate in this test (12)
SequencerWindowSize: 24,
ChannelTimeout: 20,
}
dp := e2eutils.MakeDeployParams(t, p)
sd := e2eutils.Setup(t, dp, defaultAlloc)
log := testlog.Logger(t, log.LvlDebug)
miner, seqEngine, sequencer := setupSequencerTest(t, sd, log)
verifEngine, verifier := setupVerifier(t, sd, log, miner.L1Client(t, sd.RollupCfg))
rollupSeqCl := sequencer.RollupClient()
batcher := NewL2Batcher(log, sd.RollupCfg, &BatcherCfg{
MinL1TxSize: 0,
MaxL1TxSize: 128_000,
BatcherKey: dp.Secrets.Batcher,
}, rollupSeqCl, miner.EthClient(), seqEngine.EthClient())
// Alice makes a L2 tx
cl := seqEngine.EthClient()
n, err := cl.PendingNonceAt(t.Ctx(), dp.Addresses.Alice)
require.NoError(t, err)
signer := types.LatestSigner(sd.L2Cfg.Config)
tx := types.MustSignNewTx(dp.Secrets.Alice, signer, &types.DynamicFeeTx{
ChainID: sd.L2Cfg.Config.ChainID,
Nonce: n,
GasTipCap: big.NewInt(2 * params.GWei),
GasFeeCap: new(big.Int).Add(miner.l1Chain.CurrentBlock().BaseFee(), big.NewInt(2*params.GWei)),
Gas: params.TxGas,
To: &dp.Addresses.Bob,
Value: e2eutils.Ether(2),
})
require.NoError(gt, cl.SendTransaction(t.Ctx(), tx))
sequencer.ActL2PipelineFull(t)
verifier.ActL2PipelineFull(t)
// Make L2 block
sequencer.ActL2StartBlock(t)
seqEngine.ActL2IncludeTx(dp.Addresses.Alice)(t)
sequencer.ActL2EndBlock(t)
// batch submit to L1
batcher.ActL2BatchBuffer(t)
batcher.ActL2ChannelClose(t)
batcher.ActL2BatchSubmit(t)
// confirm batch on L1
miner.ActL1StartBlock(12)(t)
miner.ActL1IncludeTx(dp.Addresses.Batcher)(t)
miner.ActL1EndBlock(t)
bl := miner.l1Chain.CurrentBlock()
log.Info("bl", "txs", len(bl.Transactions()))
// Now make enough L1 blocks that the verifier will have to derive a L2 block
for i := uint64(1); i < sd.RollupCfg.SeqWindowSize; i++ {
miner.ActL1StartBlock(12)(t)
miner.ActL1EndBlock(t)
}
// sync verifier from L1 batch in otherwise empty sequence window
verifier.ActL1HeadSignal(t)
verifier.ActL2PipelineFull(t)
require.Equal(t, uint64(1), verifier.SyncStatus().SafeL2.L1Origin.Number)
// check that the tx from alice made it into the L2 chain
verifCl := verifEngine.EthClient()
vTx, isPending, err := verifCl.TransactionByHash(t.Ctx(), tx.Hash())
require.NoError(t, err)
require.False(t, isPending)
require.NotNil(t, vTx)
}
...@@ -174,6 +174,7 @@ func (e *L2Engine) ActL2IncludeTx(from common.Address) Action {
return
}
e.pendingIndices[from] = i + 1 // won't retry the tx
e.l2BuildingState.Prepare(tx.Hash(), len(e.l2Transactions))
receipt, err := core.ApplyTransaction(e.l2Cfg.Config, e.l2Chain, &e.l2BuildingHeader.Coinbase,
e.l2GasPool, e.l2BuildingState, e.l2BuildingHeader, tx, &e.l2BuildingHeader.GasUsed, *e.l2Chain.GetVMConfig())
if err != nil {
...
...@@ -96,7 +96,7 @@ func (ea *L2EngineAPI) startBlock(parent common.Hash, params *eth.PayloadAttribu
if err := tx.UnmarshalBinary(otx); err != nil {
return fmt.Errorf("transaction %d is not valid: %v", i, err)
}
ea.l2BuildingState.Prepare(tx.Hash(), i)
receipt, err := core.ApplyTransaction(ea.l2Cfg.Config, ea.l2Chain, &ea.l2BuildingHeader.Coinbase,
ea.l2GasPool, ea.l2BuildingState, ea.l2BuildingHeader, &tx, &ea.l2BuildingHeader.GasUsed, *ea.l2Chain.GetVMConfig())
if err != nil {
...
package actions
import (
"math/rand"
"testing"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum-optimism/optimism/op-e2e/e2eutils"
"github.com/ethereum-optimism/optimism/op-node/testlog"
"github.com/ethereum-optimism/optimism/op-node/withdrawals"
)
func TestCrossLayerUser(gt *testing.T) {
t := NewDefaultTesting(gt)
dp := e2eutils.MakeDeployParams(t, defaultRollupTestParams)
sd := e2eutils.Setup(t, dp, defaultAlloc)
log := testlog.Logger(t, log.LvlDebug)
miner, seqEngine, seq := setupSequencerTest(t, sd, log)
// need to start derivation before we can make L2 blocks
seq.ActL2PipelineFull(t)
l1Cl := miner.EthClient()
l2Cl := seqEngine.EthClient()
withdrawalsCl := &withdrawals.Client{} // TODO: need a rollup node actor to wrap for output root proof RPC
addresses := e2eutils.CollectAddresses(sd, dp)
l1UserEnv := &BasicUserEnv[*L1Bindings]{
EthCl: l1Cl,
Signer: types.LatestSigner(sd.L1Cfg.Config),
AddressCorpora: addresses,
Bindings: NewL1Bindings(t, l1Cl, &sd.DeploymentsL1),
}
l2UserEnv := &BasicUserEnv[*L2Bindings]{
EthCl: l2Cl,
Signer: types.LatestSigner(sd.L2Cfg.Config),
AddressCorpora: addresses,
Bindings: NewL2Bindings(t, l2Cl, withdrawalsCl),
}
alice := NewCrossLayerUser(log, dp.Secrets.Alice, rand.New(rand.NewSource(1234)))
alice.L1.SetUserEnv(l1UserEnv)
alice.L2.SetUserEnv(l2UserEnv)
// regular L2 tx, in new L2 block
alice.L2.ActResetTxOpts(t)
alice.L2.ActSetTxToAddr(&dp.Addresses.Bob)(t)
alice.L2.ActMakeTx(t)
seq.ActL2StartBlock(t)
seqEngine.ActL2IncludeTx(alice.Address())(t)
seq.ActL2EndBlock(t)
alice.L2.ActCheckReceiptStatusOfLastTx(true)(t)
// regular L1 tx, in new L1 block
alice.L1.ActResetTxOpts(t)
alice.L1.ActSetTxToAddr(&dp.Addresses.Bob)(t)
alice.L1.ActMakeTx(t)
miner.ActL1StartBlock(12)(t)
miner.ActL1IncludeTx(alice.Address())(t)
miner.ActL1EndBlock(t)
alice.L1.ActCheckReceiptStatusOfLastTx(true)(t)
// regular Deposit, in new L1 block
alice.ActDeposit(t)
miner.ActL1StartBlock(12)(t)
miner.ActL1IncludeTx(alice.Address())(t)
miner.ActL1EndBlock(t)
seq.ActL1HeadSignal(t)
// sync sequencer build enough blocks to adopt latest L1 origin
for seq.SyncStatus().UnsafeL2.L1Origin.Number < miner.l1Chain.CurrentBlock().NumberU64() {
seq.ActL2StartBlock(t)
seq.ActL2EndBlock(t)
}
// Now that the L2 chain adopted the latest L1 block, check that we processed the deposit
alice.ActCheckDepositStatus(true, true)(t)
}
...@@ -43,7 +43,7 @@ func NewRPC(ctx context.Context, lgr log.Logger, addr string, opts ...rpc.Client
func DialRPCClientWithBackoff(ctx context.Context, log log.Logger, addr string, opts ...rpc.ClientOption) (*rpc.Client, error) {
bOff := backoff.Exponential()
var ret *rpc.Client
err := backoff.Do(10, bOff, func() error {
err := backoff.DoCtx(ctx, 10, bOff, func() error {
client, err := rpc.DialOptions(ctx, addr, opts...)
if err != nil {
if client == nil {
...
...@@ -4,8 +4,11 @@ import (
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
)
var _ BlockInfo = (&types.Block{})
type BlockInfo interface {
Hash() common.Hash
ParentHash() common.Hash
...@@ -16,7 +19,6 @@ type BlockInfo interface {
// MixDigest field, reused for randomness after The Merge (Bellatrix hardfork)
MixDigest() common.Hash
BaseFee() *big.Int
ID() BlockID
ReceiptHash() common.Hash
}
...@@ -28,3 +30,15 @@ func InfoToL1BlockRef(info BlockInfo) L1BlockRef {
Time: info.Time(),
}
}
type NumberAndHash interface {
Hash() common.Hash
NumberU64() uint64
}
func ToBlockID(b NumberAndHash) BlockID {
return BlockID{
Hash: b.Hash(),
Number: b.NumberU64(),
}
}
...@@ -10,6 +10,8 @@ import (
"strconv"
"time"
libp2pmetrics "github.com/libp2p/go-libp2p-core/metrics"
pb "github.com/libp2p/go-libp2p-pubsub/pb"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
"github.com/prometheus/client_golang/prometheus/promauto"
...@@ -30,6 +32,32 @@ const (
BatchMethod = "<batch>"
)
type Metricer interface {
RecordInfo(version string)
RecordUp()
RecordRPCServerRequest(method string) func()
RecordRPCClientRequest(method string) func(err error)
RecordRPCClientResponse(method string, err error)
SetDerivationIdle(status bool)
RecordPipelineReset()
RecordSequencingError()
RecordPublishingError()
RecordDerivationError()
RecordReceivedUnsafePayload(payload *eth.ExecutionPayload)
recordRef(layer string, name string, num uint64, timestamp uint64, h common.Hash)
RecordL1Ref(name string, ref eth.L1BlockRef)
RecordL2Ref(name string, ref eth.L2BlockRef)
RecordUnsafePayloadsBuffer(length uint64, memSize uint64, next eth.BlockID)
CountSequencedTxs(count int)
RecordL1ReorgDepth(d uint64)
RecordGossipEvent(evType int32)
IncPeerCount()
DecPeerCount()
IncStreamCount()
DecStreamCount()
RecordBandwidth(ctx context.Context, bwc *libp2pmetrics.BandwidthCounter)
}
type Metrics struct {
Info *prometheus.GaugeVec
Up prometheus.Gauge
...@@ -67,6 +95,12 @@ type Metrics struct {
TransactionsSequencedTotal prometheus.Counter
// P2P Metrics
PeerCount prometheus.Gauge
StreamCount prometheus.Gauge
GossipEventsTotal *prometheus.CounterVec
BandwidthTotal *prometheus.GaugeVec
registry *prometheus.Registry
}
...@@ -217,6 +251,35 @@ func NewMetrics(procName string) *Metrics {
Help: "Count of total transactions sequenced",
}),
PeerCount: promauto.With(registry).NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Subsystem: "p2p",
Name: "peer_count",
Help: "Count of currently connected p2p peers",
}),
StreamCount: promauto.With(registry).NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Subsystem: "p2p",
Name: "stream_count",
Help: "Count of currently connected p2p streams",
}),
GossipEventsTotal: promauto.With(registry).NewCounterVec(prometheus.CounterOpts{
Namespace: ns,
Subsystem: "p2p",
Name: "gossip_events_total",
Help: "Count of gossip events by type",
}, []string{
"type",
}),
BandwidthTotal: promauto.With(registry).NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Subsystem: "p2p",
Name: "bandwidth_bytes_total",
Help: "P2P bandwidth by direction",
}, []string{
"direction",
}),
registry: registry,
}
}
...@@ -348,6 +411,42 @@ func (m *Metrics) RecordL1ReorgDepth(d uint64) {
m.L1ReorgDepth.Observe(float64(d))
}
func (m *Metrics) RecordGossipEvent(evType int32) {
m.GossipEventsTotal.WithLabelValues(pb.TraceEvent_Type_name[evType]).Inc()
}
func (m *Metrics) IncPeerCount() {
m.PeerCount.Inc()
}
func (m *Metrics) DecPeerCount() {
m.PeerCount.Dec()
}
func (m *Metrics) IncStreamCount() {
m.StreamCount.Inc()
}
func (m *Metrics) DecStreamCount() {
m.StreamCount.Dec()
}
func (m *Metrics) RecordBandwidth(ctx context.Context, bwc *libp2pmetrics.BandwidthCounter) {
tick := time.NewTicker(10 * time.Second)
defer tick.Stop()
for {
select {
case <-tick.C:
bwTotals := bwc.GetBandwidthTotals()
m.BandwidthTotal.WithLabelValues("in").Set(float64(bwTotals.TotalIn))
m.BandwidthTotal.WithLabelValues("out").Set(float64(bwTotals.TotalOut))
case <-ctx.Done():
return
}
}
}
// Serve starts the metrics server on the given hostname and port.
// The server will be closed when the passed-in context is cancelled.
func (m *Metrics) Serve(ctx context.Context, hostname string, port int) error {
...@@ -364,3 +463,78 @@ func (m *Metrics) Serve(ctx context.Context, hostname string, port int) error {
}()
return server.ListenAndServe()
}
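The body of Serve is elided in the hunk above. As a general illustration of the pattern it describes (serving a dedicated Prometheus registry over HTTP and shutting the server down when the context is cancelled), not the file's actual code:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// serveMetrics exposes a dedicated registry on hostname:port and shuts the
// server down when ctx is cancelled.
func serveMetrics(ctx context.Context, registry *prometheus.Registry, hostname string, port int) error {
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
	server := &http.Server{
		Addr:    fmt.Sprintf("%s:%d", hostname, port),
		Handler: mux,
	}
	go func() {
		<-ctx.Done()
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		_ = server.Shutdown(shutdownCtx)
	}()
	return server.ListenAndServe()
}

func main() {
	_ = serveMetrics(context.Background(), prometheus.NewRegistry(), "0.0.0.0", 7300)
}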
type noopMetricer struct{}
var NoopMetrics = new(noopMetricer)
func (n *noopMetricer) RecordInfo(version string) {
}
func (n *noopMetricer) RecordUp() {
}
func (n *noopMetricer) RecordRPCServerRequest(method string) func() {
return func() {}
}
func (n *noopMetricer) RecordRPCClientRequest(method string) func(err error) {
return func(err error) {}
}
func (n *noopMetricer) RecordRPCClientResponse(method string, err error) {
}
func (n *noopMetricer) SetDerivationIdle(status bool) {
}
func (n *noopMetricer) RecordPipelineReset() {
}
func (n *noopMetricer) RecordSequencingError() {
}
func (n *noopMetricer) RecordPublishingError() {
}
func (n *noopMetricer) RecordDerivationError() {
}
func (n *noopMetricer) RecordReceivedUnsafePayload(payload *eth.ExecutionPayload) {
}
func (n *noopMetricer) recordRef(layer string, name string, num uint64, timestamp uint64, h common.Hash) {
}
func (n *noopMetricer) RecordL1Ref(name string, ref eth.L1BlockRef) {
}
func (n *noopMetricer) RecordL2Ref(name string, ref eth.L2BlockRef) {
}
func (n *noopMetricer) RecordUnsafePayloadsBuffer(length uint64, memSize uint64, next eth.BlockID) {
}
func (n *noopMetricer) CountSequencedTxs(count int) {
}
func (n *noopMetricer) RecordL1ReorgDepth(d uint64) {
}
func (n *noopMetricer) RecordGossipEvent(evType int32) {
}
func (n *noopMetricer) IncPeerCount() {
}
func (n *noopMetricer) DecPeerCount() {
}
func (n *noopMetricer) IncStreamCount() {
}
func (n *noopMetricer) DecStreamCount() {
}
func (n *noopMetricer) RecordBandwidth(ctx context.Context, bwc *libp2pmetrics.BandwidthCounter) {
}
...@@ -196,7 +196,7 @@ func (n *OpNode) initMetricsServer(ctx context.Context, cfg *Config) error {
func (n *OpNode) initP2P(ctx context.Context, cfg *Config) error {
if cfg.P2P != nil {
p2pNode, err := p2p.NewNodeP2P(n.resourcesCtx, &cfg.Rollup, n.log, cfg.P2P, n)
p2pNode, err := p2p.NewNodeP2P(n.resourcesCtx, &cfg.Rollup, n.log, cfg.P2P, n, n.metrics)
if err != nil || p2pNode == nil {
return err
}
...
...@@ -41,7 +41,7 @@ import (
type SetupP2P interface {
Check() error
// Host creates a libp2p host service. Returns nil, nil if p2p is disabled.
Host(log log.Logger) (host.Host, error)
Host(log log.Logger, reporter metrics.Reporter) (host.Host, error)
// Discovery creates a disc-v5 service. Returns nil, nil, nil if discovery is disabled.
Discovery(log log.Logger, rollupCfg *rollup.Config, tcpPort uint16) (*enode.LocalNode, *discover.UDPv5, error)
TargetPeers() uint
...@@ -91,8 +91,6 @@ type Config struct {
ConnGater func(conf *Config) (connmgr.ConnectionGater, error)
ConnMngr func(conf *Config) (connmgr.ConnManager, error)
// nil to disable bandwidth metrics
BandwidthMetrics metrics.Reporter
}
type ConnectionGater interface {
...
...@@ -46,6 +46,10 @@ var MessageDomainValidSnappy = [4]byte{1, 0, 0, 0}
const MaxGossipSize = 1 << 20
type GossipMetricer interface {
RecordGossipEvent(evType int32)
}
func blocksTopicV1(cfg *rollup.Config) string {
return fmt.Sprintf("/optimism/%s/0/blocks", cfg.L2ChainID.String())
}
...@@ -115,7 +119,7 @@ func BuildGlobalGossipParams(cfg *rollup.Config) pubsub.GossipSubParams {
return params
}
func NewGossipSub(p2pCtx context.Context, h host.Host, cfg *rollup.Config) (*pubsub.PubSub, error) {
func NewGossipSub(p2pCtx context.Context, h host.Host, cfg *rollup.Config, m GossipMetricer) (*pubsub.PubSub, error) {
denyList, err := pubsub.NewTimeCachedBlacklist(30 * time.Second)
if err != nil {
return nil, err
...@@ -132,6 +136,7 @@ func NewGossipSub(p2pCtx context.Context, h host.Host, cfg *rollup.Config) (*pub
pubsub.WithPeerExchange(false),
pubsub.WithBlacklist(denyList),
pubsub.WithGossipSubParams(BuildGlobalGossipParams(cfg)),
pubsub.WithEventTracer(&gossipTracer{m: m}),
)
// TODO: pubsub.WithPeerScoreInspect(inspect, InspectInterval) to update peerstore scores with gossip scores
}
...@@ -441,3 +446,13 @@ func LogTopicEvents(ctx context.Context, log log.Logger, evHandler *pubsub.Topic
}
}
}
type gossipTracer struct {
m GossipMetricer
}
func (g *gossipTracer) Trace(evt *pb.TraceEvent) {
if g.m != nil {
g.m.RecordGossipEvent(int32(*evt.Type))
}
}
...@@ -8,6 +8,7 @@ import (
"github.com/libp2p/go-libp2p-core/connmgr"
"github.com/libp2p/go-libp2p-core/host"
"github.com/libp2p/go-libp2p-core/metrics"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p-peerstore/pstoreds"
lconf "github.com/libp2p/go-libp2p/config"
...@@ -41,7 +42,7 @@ func (e *extraHost) ConnectionManager() connmgr.ConnManager {
var _ ExtraHostFeatures = (*extraHost)(nil)
func (conf *Config) Host(log log.Logger) (host.Host, error) {
func (conf *Config) Host(log log.Logger, reporter metrics.Reporter) (host.Host, error) {
if conf.DisableP2P {
return nil, nil
}
...@@ -115,7 +116,7 @@ func (conf *Config) Host(log log.Logger) (host.Host, error) {
ResourceManager: nil, // TODO use resource manager interface to manage resources per peer better.
NATManager: nat,
Peerstore: ps,
Reporter: conf.BandwidthMetrics, // may be nil if disabled
Reporter: reporter, // may be nil if disabled
MultiaddrResolver: madns.DefaultResolver,
// Ping is a small built-in libp2p protocol that helps us check/debug latency between peers.
DisablePing: false,
...
...@@ -21,7 +21,6 @@ import (
"github.com/stretchr/testify/require"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/metrics"
"github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/testlog"
"github.com/ethereum/go-ethereum/log"
...@@ -65,10 +64,10 @@ func TestingConfig(t *testing.T) *Config {
func TestP2PSimple(t *testing.T) {
confA := TestingConfig(t)
confB := TestingConfig(t)
hostA, err := confA.Host(testlog.Logger(t, log.LvlError).New("host", "A"))
hostA, err := confA.Host(testlog.Logger(t, log.LvlError).New("host", "A"), nil)
require.NoError(t, err, "failed to launch host A")
defer hostA.Close()
hostB, err := confB.Host(testlog.Logger(t, log.LvlError).New("host", "B"))
hostB, err := confB.Host(testlog.Logger(t, log.LvlError).New("host", "B"), nil)
require.NoError(t, err, "failed to launch host B")
defer hostB.Close()
err = hostA.Connect(context.Background(), peer.AddrInfo{ID: hostB.ID(), Addrs: hostB.Addrs()})
...@@ -132,7 +131,7 @@ func TestP2PFull(t *testing.T) {
// TODO: maybe swap the order of sec/mux preferences, to test that negotiation works
logA := testlog.Logger(t, log.LvlError).New("host", "A")
nodeA, err := NewNodeP2P(context.Background(), &rollup.Config{}, logA, &confA, &mockGossipIn{})
nodeA, err := NewNodeP2P(context.Background(), &rollup.Config{}, logA, &confA, &mockGossipIn{}, nil)
require.NoError(t, err)
defer nodeA.Close()
...@@ -143,7 +142,7 @@ func TestP2PFull(t *testing.T) {
conns <- conn
}})
backend := NewP2PAPIBackend(nodeA, logA, metrics.NewMetrics(""))
backend := NewP2PAPIBackend(nodeA, logA, nil)
srv := rpc.NewServer()
require.NoError(t, srv.RegisterName("opp2p", backend))
client := rpc.DialInProc(srv)
...@@ -155,7 +154,7 @@ func TestP2PFull(t *testing.T) {
logB := testlog.Logger(t, log.LvlError).New("host", "B")
nodeB, err := NewNodeP2P(context.Background(), &rollup.Config{}, logB, &confB, &mockGossipIn{})
nodeB, err := NewNodeP2P(context.Background(), &rollup.Config{}, logB, &confB, &mockGossipIn{}, nil)
require.NoError(t, err)
defer nodeB.Close()
hostB := nodeB.Host()
...@@ -289,7 +288,7 @@ func TestDiscovery(t *testing.T) {
resourcesCtx, resourcesCancel := context.WithCancel(context.Background())
defer resourcesCancel()
nodeA, err := NewNodeP2P(context.Background(), rollupCfg, logA, &confA, &mockGossipIn{})
nodeA, err := NewNodeP2P(context.Background(), rollupCfg, logA, &confA, &mockGossipIn{}, nil)
require.NoError(t, err)
defer nodeA.Close()
hostA := nodeA.Host()
...@@ -304,7 +303,7 @@ func TestDiscovery(t *testing.T) {
confB.DiscoveryDB = discDBC
// Start B
nodeB, err := NewNodeP2P(context.Background(), rollupCfg, logB, &confB, &mockGossipIn{})
nodeB, err := NewNodeP2P(context.Background(), rollupCfg, logB, &confB, &mockGossipIn{}, nil)
require.NoError(t, err)
defer nodeB.Close()
hostB := nodeB.Host()
...@@ -319,7 +318,7 @@ func TestDiscovery(t *testing.T) {
}})
// Start C
nodeC, err := NewNodeP2P(context.Background(), rollupCfg, logC, &confC, &mockGossipIn{})
nodeC, err := NewNodeP2P(context.Background(), rollupCfg, logC, &confC, &mockGossipIn{}, nil)
require.NoError(t, err)
defer nodeC.Close()
hostC := nodeC.Host()
...
...@@ -6,9 +6,11 @@ import ( ...@@ -6,9 +6,11 @@ import (
"fmt" "fmt"
"strconv" "strconv"
"github.com/ethereum-optimism/optimism/op-node/metrics"
"github.com/hashicorp/go-multierror" "github.com/hashicorp/go-multierror"
"github.com/libp2p/go-libp2p-core/connmgr" "github.com/libp2p/go-libp2p-core/connmgr"
"github.com/libp2p/go-libp2p-core/host" "github.com/libp2p/go-libp2p-core/host"
p2pmetrics "github.com/libp2p/go-libp2p-core/metrics"
pubsub "github.com/libp2p/go-libp2p-pubsub" pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/p2p/protocol/identify" "github.com/libp2p/go-libp2p/p2p/protocol/identify"
ma "github.com/multiformats/go-multiaddr" ma "github.com/multiformats/go-multiaddr"
...@@ -30,12 +32,12 @@ type NodeP2P struct { ...@@ -30,12 +32,12 @@ type NodeP2P struct {
gsOut GossipOut // p2p gossip application interface for publishing gsOut GossipOut // p2p gossip application interface for publishing
} }
func NewNodeP2P(resourcesCtx context.Context, rollupCfg *rollup.Config, log log.Logger, setup SetupP2P, gossipIn GossipIn) (*NodeP2P, error) { func NewNodeP2P(resourcesCtx context.Context, rollupCfg *rollup.Config, log log.Logger, setup SetupP2P, gossipIn GossipIn, metrics metrics.Metricer) (*NodeP2P, error) {
if setup == nil { if setup == nil {
return nil, errors.New("p2p node cannot be created without setup") return nil, errors.New("p2p node cannot be created without setup")
} }
var n NodeP2P var n NodeP2P
if err := n.init(resourcesCtx, rollupCfg, log, setup, gossipIn); err != nil { if err := n.init(resourcesCtx, rollupCfg, log, setup, gossipIn, metrics); err != nil {
closeErr := n.Close() closeErr := n.Close()
if closeErr != nil { if closeErr != nil {
log.Error("failed to close p2p after starting with err", "closeErr", closeErr, "err", err) log.Error("failed to close p2p after starting with err", "closeErr", closeErr, "err", err)
...@@ -48,10 +50,12 @@ func NewNodeP2P(resourcesCtx context.Context, rollupCfg *rollup.Config, log log. ...@@ -48,10 +50,12 @@ func NewNodeP2P(resourcesCtx context.Context, rollupCfg *rollup.Config, log log.
return &n, nil return &n, nil
} }
func (n *NodeP2P) init(resourcesCtx context.Context, rollupCfg *rollup.Config, log log.Logger, setup SetupP2P, gossipIn GossipIn) error { func (n *NodeP2P) init(resourcesCtx context.Context, rollupCfg *rollup.Config, log log.Logger, setup SetupP2P, gossipIn GossipIn, metrics metrics.Metricer) error {
bwc := p2pmetrics.NewBandwidthCounter()
var err error var err error
// nil if disabled. // nil if disabled.
n.host, err = setup.Host(log) n.host, err = setup.Host(log, bwc)
if err != nil { if err != nil {
if n.dv5Udp != nil { if n.dv5Udp != nil {
n.dv5Udp.Close() n.dv5Udp.Close()
...@@ -66,10 +70,10 @@ func (n *NodeP2P) init(resourcesCtx context.Context, rollupCfg *rollup.Config, l ...@@ -66,10 +70,10 @@ func (n *NodeP2P) init(resourcesCtx context.Context, rollupCfg *rollup.Config, l
n.connMgr = extra.ConnectionManager() n.connMgr = extra.ConnectionManager()
} }
// notify of any new connections/streams/etc. // notify of any new connections/streams/etc.
n.host.Network().Notify(NewNetworkNotifier(log)) n.host.Network().Notify(NewNetworkNotifier(log, metrics))
// unregister identify-push handler. Only identifying on dial is fine, and more robust against spam // unregister identify-push handler. Only identifying on dial is fine, and more robust against spam
n.host.RemoveStreamHandler(identify.IDDelta) n.host.RemoveStreamHandler(identify.IDDelta)
n.gs, err = NewGossipSub(resourcesCtx, n.host, rollupCfg) n.gs, err = NewGossipSub(resourcesCtx, n.host, rollupCfg, metrics)
if err != nil { if err != nil {
return fmt.Errorf("failed to start gossipsub router: %w", err) return fmt.Errorf("failed to start gossipsub router: %w", err)
} }
...@@ -90,6 +94,10 @@ func (n *NodeP2P) init(resourcesCtx context.Context, rollupCfg *rollup.Config, l ...@@ -90,6 +94,10 @@ func (n *NodeP2P) init(resourcesCtx context.Context, rollupCfg *rollup.Config, l
if err != nil { if err != nil {
return fmt.Errorf("failed to start discv5: %w", err) return fmt.Errorf("failed to start discv5: %w", err)
} }
if metrics != nil {
go metrics.RecordBandwidth(resourcesCtx, bwc)
}
} }
return nil return nil
} }
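
For context on the new RecordBandwidth hook: the bandwidth counter created above is handed both to the libp2p host and, when metrics are enabled, to a background sampling goroutine. A minimal sketch of what such a sampler could look like follows; the package name, gauge names, and the 10-second interval are assumptions for illustration, not the op-node's actual implementation.

package metricsexample

import (
	"context"
	"time"

	libp2pmetrics "github.com/libp2p/go-libp2p-core/metrics"
	"github.com/prometheus/client_golang/prometheus"
)

// Metrics is an illustrative stand-in for a Metricer implementation.
type Metrics struct {
	BandwidthTotalIn  prometheus.Gauge // hypothetical gauge names, not the op-node's real metric names
	BandwidthTotalOut prometheus.Gauge
}

func NewMetrics() *Metrics {
	return &Metrics{
		BandwidthTotalIn:  prometheus.NewGauge(prometheus.GaugeOpts{Name: "p2p_bandwidth_total_in_bytes"}),
		BandwidthTotalOut: prometheus.NewGauge(prometheus.GaugeOpts{Name: "p2p_bandwidth_total_out_bytes"}),
	}
}

// RecordBandwidth periodically samples the libp2p bandwidth counter until ctx is cancelled.
func (m *Metrics) RecordBandwidth(ctx context.Context, bwc *libp2pmetrics.BandwidthCounter) {
	tick := time.NewTicker(10 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-tick.C:
			totals := bwc.GetBandwidthTotals()
			m.BandwidthTotalIn.Set(float64(totals.TotalIn))
			m.BandwidthTotalOut.Set(float64(totals.TotalOut))
		case <-ctx.Done():
			return
		}
	}
}
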
......
package p2p package p2p
import ( import (
"github.com/ethereum-optimism/optimism/op-node/metrics"
"github.com/libp2p/go-libp2p-core/network" "github.com/libp2p/go-libp2p-core/network"
ma "github.com/multiformats/go-multiaddr" ma "github.com/multiformats/go-multiaddr"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
) )
// TODO: add metrics here as well type NotificationsMetricer interface {
IncPeerCount()
DecPeerCount()
IncStreamCount()
DecStreamCount()
}
type notifications struct { type notifications struct {
log log.Logger log log.Logger
m NotificationsMetricer
} }
func (notif *notifications) Listen(n network.Network, a ma.Multiaddr) { func (notif *notifications) Listen(n network.Network, a ma.Multiaddr) {
...@@ -20,20 +27,27 @@ func (notif *notifications) ListenClose(n network.Network, a ma.Multiaddr) { ...@@ -20,20 +27,27 @@ func (notif *notifications) ListenClose(n network.Network, a ma.Multiaddr) {
notif.log.Info("stopped listening network address", "addr", a) notif.log.Info("stopped listening network address", "addr", a)
} }
func (notif *notifications) Connected(n network.Network, v network.Conn) { func (notif *notifications) Connected(n network.Network, v network.Conn) {
notif.m.IncPeerCount()
notif.log.Info("connected to peer", "peer", v.RemotePeer(), "addr", v.RemoteMultiaddr()) notif.log.Info("connected to peer", "peer", v.RemotePeer(), "addr", v.RemoteMultiaddr())
} }
func (notif *notifications) Disconnected(n network.Network, v network.Conn) { func (notif *notifications) Disconnected(n network.Network, v network.Conn) {
notif.m.DecPeerCount()
notif.log.Info("disconnected from peer", "peer", v.RemotePeer(), "addr", v.RemoteMultiaddr()) notif.log.Info("disconnected from peer", "peer", v.RemotePeer(), "addr", v.RemoteMultiaddr())
} }
func (notif *notifications) OpenedStream(n network.Network, v network.Stream) { func (notif *notifications) OpenedStream(n network.Network, v network.Stream) {
notif.m.IncStreamCount()
c := v.Conn() c := v.Conn()
notif.log.Trace("opened stream", "protocol", v.Protocol(), "peer", c.RemotePeer(), "addr", c.RemoteMultiaddr()) notif.log.Trace("opened stream", "protocol", v.Protocol(), "peer", c.RemotePeer(), "addr", c.RemoteMultiaddr())
} }
func (notif *notifications) ClosedStream(n network.Network, v network.Stream) { func (notif *notifications) ClosedStream(n network.Network, v network.Stream) {
notif.m.DecStreamCount()
c := v.Conn() c := v.Conn()
notif.log.Trace("opened stream", "protocol", v.Protocol(), "peer", c.RemotePeer(), "addr", c.RemoteMultiaddr()) notif.log.Trace("opened stream", "protocol", v.Protocol(), "peer", c.RemotePeer(), "addr", c.RemoteMultiaddr())
} }
func NewNetworkNotifier(log log.Logger) network.Notifiee { func NewNetworkNotifier(log log.Logger, m metrics.Metricer) network.Notifiee {
return &notifications{log: log} if m == nil {
m = metrics.NoopMetrics
}
return &notifications{log: log, m: m}
} }
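
Because a nil metricer is replaced with metrics.NoopMetrics, callers without a metrics backend can still register the notifier. A one-line usage sketch (the host and logger variables are assumed):

// No metrics wired up: NewNetworkNotifier falls back to metrics.NoopMetrics internally.
h.Network().Notify(NewNetworkNotifier(logger, nil))
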
...@@ -5,6 +5,7 @@ import ( ...@@ -5,6 +5,7 @@ import (
"fmt" "fmt"
"github.com/libp2p/go-libp2p-core/host" "github.com/libp2p/go-libp2p-core/host"
"github.com/libp2p/go-libp2p-core/metrics"
"github.com/ethereum-optimism/optimism/op-node/rollup" "github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
...@@ -38,7 +39,7 @@ func (p *Prepared) Check() error { ...@@ -38,7 +39,7 @@ func (p *Prepared) Check() error {
} }
// Host creates a libp2p host service. Returns nil, nil if p2p is disabled. // Host creates a libp2p host service. Returns nil, nil if p2p is disabled.
func (p *Prepared) Host(log log.Logger) (host.Host, error) { func (p *Prepared) Host(log log.Logger, reporter metrics.Reporter) (host.Host, error) {
return p.HostP2P, nil return p.HostP2P, nil
} }
......
...@@ -54,12 +54,16 @@ type Node interface { ...@@ -54,12 +54,16 @@ type Node interface {
type APIBackend struct { type APIBackend struct {
node Node node Node
log log.Logger log log.Logger
m *metrics.Metrics m metrics.Metricer
} }
var _ API = (*APIBackend)(nil) var _ API = (*APIBackend)(nil)
func NewP2PAPIBackend(node Node, log log.Logger, m *metrics.Metrics) *APIBackend { func NewP2PAPIBackend(node Node, log log.Logger, m metrics.Metricer) *APIBackend {
if m == nil {
m = metrics.NoopMetrics
}
return &APIBackend{ return &APIBackend{
node: node, node: node,
log: log, log: log,
......
...@@ -59,7 +59,11 @@ func (bq *BatchQueue) Origin() eth.L1BlockRef { ...@@ -59,7 +59,11 @@ func (bq *BatchQueue) Origin() eth.L1BlockRef {
} }
func (bq *BatchQueue) NextBatch(ctx context.Context, safeL2Head eth.L2BlockRef) (*BatchData, error) { func (bq *BatchQueue) NextBatch(ctx context.Context, safeL2Head eth.L2BlockRef) (*BatchData, error) {
originBehind := bq.origin.Number < safeL2Head.L1Origin.Number // Note: We use the origin that we are about to adopt when deciding whether we are behind. This is important
// because it is that future origin which gets saved into the l1Blocks array.
// We always update the origin of this stage when it differs, so once the update code
// runs, this stays consistent.
originBehind := bq.prev.Origin().Number < safeL2Head.L1Origin.Number
// Advance origin if needed // Advance origin if needed
// Note: The entire pipeline has the same origin // Note: The entire pipeline has the same origin
......
...@@ -138,7 +138,7 @@ func BatchReader(r io.Reader, l1InclusionBlock eth.L1BlockRef) (func() (BatchWit ...@@ -138,7 +138,7 @@ func BatchReader(r io.Reader, l1InclusionBlock eth.L1BlockRef) (func() (BatchWit
if err != nil { if err != nil {
return nil, err return nil, err
} }
rlpReader := rlp.NewStream(zr, 10_000_000) rlpReader := rlp.NewStream(zr, MaxRLPBytesPerChannel)
// Read each batch iteratively // Read each batch iteratively
return func() (BatchWithL1InclusionBlock, error) { return func() (BatchWithL1InclusionBlock, error) {
ret := BatchWithL1InclusionBlock{ ret := BatchWithL1InclusionBlock{
......
...@@ -5,6 +5,7 @@ import ( ...@@ -5,6 +5,7 @@ import (
"compress/zlib" "compress/zlib"
"crypto/rand" "crypto/rand"
"errors" "errors"
"fmt"
"io" "io"
"github.com/ethereum-optimism/optimism/op-node/rollup" "github.com/ethereum-optimism/optimism/op-node/rollup"
...@@ -14,15 +15,14 @@ import ( ...@@ -14,15 +15,14 @@ import (
) )
var ErrNotDepositTx = errors.New("first transaction in block is not a deposit tx") var ErrNotDepositTx = errors.New("first transaction in block is not a deposit tx")
var ErrTooManyRLPBytes = errors.New("batch would cause RLP bytes to go over limit")
type ChannelOut struct { type ChannelOut struct {
id ChannelID id ChannelID
// Frame ID of the next frame to emit. Increment after emitting // Frame ID of the next frame to emit. Increment after emitting
frame uint64 frame uint64
// How much we've pulled from the reader so far // rlpLength is the uncompressed size of the channel. Must be less than MAX_RLP_BYTES_PER_CHANNEL
offset uint64 rlpLength int
// scratch for temporary buffering
scratch bytes.Buffer
// Compressor stage. Write input data to it // Compressor stage. Write input data to it
compress *zlib.Writer compress *zlib.Writer
...@@ -38,9 +38,9 @@ func (co *ChannelOut) ID() ChannelID { ...@@ -38,9 +38,9 @@ func (co *ChannelOut) ID() ChannelID {
func NewChannelOut() (*ChannelOut, error) { func NewChannelOut() (*ChannelOut, error) {
c := &ChannelOut{ c := &ChannelOut{
id: ChannelID{}, // TODO: use GUID here instead of fully random data id: ChannelID{}, // TODO: use GUID here instead of fully random data
frame: 0, frame: 0,
offset: 0, rlpLength: 0,
} }
_, err := rand.Read(c.id[:]) _, err := rand.Read(c.id[:])
if err != nil { if err != nil {
...@@ -59,9 +59,8 @@ func NewChannelOut() (*ChannelOut, error) { ...@@ -59,9 +59,8 @@ func NewChannelOut() (*ChannelOut, error) {
// TODO: reuse ChannelOut for performance // TODO: reuse ChannelOut for performance
func (co *ChannelOut) Reset() error { func (co *ChannelOut) Reset() error {
co.frame = 0 co.frame = 0
co.offset = 0 co.rlpLength = 0
co.buf.Reset() co.buf.Reset()
co.scratch.Reset()
co.compress.Reset(&co.buf) co.compress.Reset(&co.buf)
co.closed = false co.closed = false
_, err := rand.Read(co.id[:]) _, err := rand.Read(co.id[:])
...@@ -71,11 +70,33 @@ func (co *ChannelOut) Reset() error { ...@@ -71,11 +70,33 @@ func (co *ChannelOut) Reset() error {
return nil return nil
} }
// AddBlock adds a block to the channel. It returns an error
// if there is a problem adding the block. The only sentinel
// error that it returns is ErrTooManyRLPBytes. If this error
// is returned, the channel should be closed and a new one
// should be made.
func (co *ChannelOut) AddBlock(block *types.Block) error { func (co *ChannelOut) AddBlock(block *types.Block) error {
if co.closed { if co.closed {
return errors.New("already closed") return errors.New("already closed")
} }
return blockToBatch(block, co.compress) batch, err := blockToBatch(block)
if err != nil {
return err
}
// We encode to a temporary buffer to determine the encoded length to
// ensure that the total size of all RLP elements is less than MAX_RLP_BYTES_PER_CHANNEL
var buf bytes.Buffer
if err := rlp.Encode(&buf, batch); err != nil {
return err
}
if co.rlpLength+buf.Len() > MaxRLPBytesPerChannel {
return fmt.Errorf("could not add %d bytes to channel of %d bytes, max is %d. err: %w",
buf.Len(), co.rlpLength, MaxRLPBytesPerChannel, ErrTooManyRLPBytes)
}
co.rlpLength += buf.Len()
_, err = io.Copy(co.compress, &buf)
return err
} }
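
As the AddBlock comment above notes, ErrTooManyRLPBytes is a signal to close the current channel and start a new one. A hedged sketch of that caller-side pattern follows; addBlockOrRotate is an illustrative helper, not part of the package, and it assumes the ChannelOut Close method defined elsewhere in this package.

package derive

import (
	"errors"

	"github.com/ethereum/go-ethereum/core/types"
)

// addBlockOrRotate adds a block to the given channel; when the RLP size limit is hit,
// it closes that channel, opens a fresh one, and retries the block there.
func addBlockOrRotate(co *ChannelOut, block *types.Block) (*ChannelOut, error) {
	err := co.AddBlock(block)
	if !errors.Is(err, ErrTooManyRLPBytes) {
		return co, err
	}
	if err := co.Close(); err != nil {
		return nil, err
	}
	next, err := NewChannelOut()
	if err != nil {
		return nil, err
	}
	return next, next.AddBlock(block)
}
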
// ReadyBytes returns the number of bytes that the channel out can immediately output into a frame. // ReadyBytes returns the number of bytes that the channel out can immediately output into a frame.
...@@ -141,35 +162,35 @@ func (co *ChannelOut) OutputFrame(w *bytes.Buffer, maxSize uint64) error { ...@@ -141,35 +162,35 @@ func (co *ChannelOut) OutputFrame(w *bytes.Buffer, maxSize uint64) error {
} }
} }
// blockToBatch writes the raw block bytes (after batch encoding) to the writer // blockToBatch transforms a block into a batch object that can easily be RLP encoded.
func blockToBatch(block *types.Block, w io.Writer) error { func blockToBatch(block *types.Block) (*BatchData, error) {
var opaqueTxs []hexutil.Bytes var opaqueTxs []hexutil.Bytes
for _, tx := range block.Transactions() { for i, tx := range block.Transactions() {
if tx.Type() == types.DepositTxType { if tx.Type() == types.DepositTxType {
continue continue
} }
otx, err := tx.MarshalBinary() otx, err := tx.MarshalBinary()
if err != nil { if err != nil {
return err // TODO: wrap err return nil, fmt.Errorf("could not encode tx %v in block %v: %w", i, tx.Hash(), err)
} }
opaqueTxs = append(opaqueTxs, otx) opaqueTxs = append(opaqueTxs, otx)
} }
l1InfoTx := block.Transactions()[0] l1InfoTx := block.Transactions()[0]
if l1InfoTx.Type() != types.DepositTxType { if l1InfoTx.Type() != types.DepositTxType {
return ErrNotDepositTx return nil, ErrNotDepositTx
} }
l1Info, err := L1InfoDepositTxData(l1InfoTx.Data()) l1Info, err := L1InfoDepositTxData(l1InfoTx.Data())
if err != nil { if err != nil {
return err // TODO: wrap err return nil, fmt.Errorf("could not parse the L1 Info deposit: %w", err)
} }
batch := &BatchData{BatchV1{ return &BatchData{
ParentHash: block.ParentHash(), BatchV1{
EpochNum: rollup.Epoch(l1Info.Number), ParentHash: block.ParentHash(),
EpochHash: l1Info.BlockHash, EpochNum: rollup.Epoch(l1Info.Number),
Timestamp: block.Time(), EpochHash: l1Info.BlockHash,
Transactions: opaqueTxs, Timestamp: block.Time(),
}, Transactions: opaqueTxs,
} },
return rlp.Encode(w, batch) }, nil
} }
...@@ -405,7 +405,7 @@ func (eq *EngineQueue) forceNextSafeAttributes(ctx context.Context) error { ...@@ -405,7 +405,7 @@ func (eq *EngineQueue) forceNextSafeAttributes(ctx context.Context) error {
// Reset walks the L2 chain backwards until it finds an L2 block whose L1 origin is canonical. // Reset walks the L2 chain backwards until it finds an L2 block whose L1 origin is canonical.
// The unsafe head is set to the head of the L2 chain, unless the existing safe head is not canonical. // The unsafe head is set to the head of the L2 chain, unless the existing safe head is not canonical.
func (eq *EngineQueue) Reset(ctx context.Context, _ eth.L1BlockRef) error { func (eq *EngineQueue) Reset(ctx context.Context, _ eth.L1BlockRef) error {
result, err := sync.FindL2Heads(ctx, eq.cfg, eq.l1Fetcher, eq.engine) result, err := sync.FindL2Heads(ctx, eq.cfg, eq.l1Fetcher, eq.engine, eq.log)
if err != nil { if err != nil {
return NewTemporaryError(fmt.Errorf("failed to find the L2 Heads to start from: %w", err)) return NewTemporaryError(fmt.Errorf("failed to find the L2 Heads to start from: %w", err))
} }
......
...@@ -15,6 +15,10 @@ const DerivationVersion0 = 0 ...@@ -15,6 +15,10 @@ const DerivationVersion0 = 0
// starting with the oldest channel. // starting with the oldest channel.
const MaxChannelBankSize = 100_000_000 const MaxChannelBankSize = 100_000_000
// MaxRLPBytesPerChannel is the maximum amount of bytes that will be read from
// a channel. This limit is set when decoding the RLP.
const MaxRLPBytesPerChannel = 10_000_000
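
To illustrate what an RLP input limit does in practice, here is a small standalone sketch using go-ethereum's rlp package; the 16-byte cap is chosen only so the example trips the limit, and the quoted error text is the expected rlp.ErrValueTooLarge message.

package main

import (
	"bytes"
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	// Encode a 64-byte string, then decode it through a stream capped at 16 bytes.
	payload, err := rlp.EncodeToBytes(make([]byte, 64))
	if err != nil {
		panic(err)
	}
	s := rlp.NewStream(bytes.NewReader(payload), 16)
	var out []byte
	err = s.Decode(&out)
	fmt.Println(err) // expected: "rlp: value size exceeds available input length"
}
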
// DuplicateErr is returned when a newly read frame is already known // DuplicateErr is returned when a newly read frame is already known
var DuplicateErr = errors.New("duplicate frame") var DuplicateErr = errors.New("duplicate frame")
......
...@@ -6,6 +6,7 @@ import ( ...@@ -6,6 +6,7 @@ import (
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive" "github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum" "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
) )
// confDepth is an util that wraps the L1 input fetcher used in the pipeline, // confDepth is an util that wraps the L1 input fetcher used in the pipeline,
...@@ -30,7 +31,9 @@ func (c *confDepth) L1BlockRefByNumber(ctx context.Context, num uint64) (eth.L1B ...@@ -30,7 +31,9 @@ func (c *confDepth) L1BlockRefByNumber(ctx context.Context, num uint64) (eth.L1B
// TODO: performance optimization: buffer the l1Unsafe, invalidate any reorged previous buffer content, // TODO: performance optimization: buffer the l1Unsafe, invalidate any reorged previous buffer content,
// and instantly return the origin by number from the buffer if we can. // and instantly return the origin by number from the buffer if we can.
if num == 0 || c.depth == 0 || num+c.depth <= c.l1Head().Number { // Don't apply the conf depth is l1Head is empty (as it is during the startup case before the l1State is initialized).
l1Head := c.l1Head()
if num == 0 || c.depth == 0 || num+c.depth <= l1Head.Number || l1Head.Hash == (common.Hash{}) {
return c.L1Fetcher.L1BlockRefByNumber(ctx, num) return c.L1Fetcher.L1BlockRefByNumber(ctx, num)
} }
return eth.L1BlockRef{}, ethereum.NotFound return eth.L1BlockRef{}, ethereum.NotFound
......
...@@ -9,11 +9,15 @@ import ( ...@@ -9,11 +9,15 @@ import (
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/testutils" "github.com/ethereum-optimism/optimism/op-node/testutils"
"github.com/ethereum/go-ethereum" "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
) )
var exHash = common.Hash{0xff}
type confTest struct { type confTest struct {
name string name string
head uint64 head uint64
hash common.Hash // hash of head block
req uint64 req uint64
depth uint64 depth uint64
pass bool pass bool
...@@ -21,7 +25,7 @@ type confTest struct { ...@@ -21,7 +25,7 @@ type confTest struct {
func (ct *confTest) Run(t *testing.T) { func (ct *confTest) Run(t *testing.T) {
l1Fetcher := &testutils.MockL1Source{} l1Fetcher := &testutils.MockL1Source{}
l1Head := eth.L1BlockRef{Number: ct.head} l1Head := eth.L1BlockRef{Number: ct.head, Hash: ct.hash}
l1HeadGetter := func() eth.L1BlockRef { return l1Head } l1HeadGetter := func() eth.L1BlockRef { return l1Head }
cd := NewConfDepth(ct.depth, l1HeadGetter, l1Fetcher) cd := NewConfDepth(ct.depth, l1HeadGetter, l1Fetcher)
...@@ -43,18 +47,19 @@ func TestConfDepth(t *testing.T) { ...@@ -43,18 +47,19 @@ func TestConfDepth(t *testing.T) {
// note: we're not testing overflows. // note: we're not testing overflows.
// If a request is large enough to overflow the conf depth check, it's not returning anything anyway. // If a request is large enough to overflow the conf depth check, it's not returning anything anyway.
testCases := []confTest{ testCases := []confTest{
{name: "zero conf future", head: 4, req: 5, depth: 0, pass: true}, {name: "zero conf future", head: 4, hash: exHash, req: 5, depth: 0, pass: true},
{name: "zero conf present", head: 4, req: 4, depth: 0, pass: true}, {name: "zero conf present", head: 4, hash: exHash, req: 4, depth: 0, pass: true},
{name: "zero conf past", head: 4, req: 4, depth: 0, pass: true}, {name: "zero conf past", head: 4, hash: exHash, req: 4, depth: 0, pass: true},
{name: "one conf future", head: 4, req: 5, depth: 1, pass: false}, {name: "one conf future", head: 4, hash: exHash, req: 5, depth: 1, pass: false},
{name: "one conf present", head: 4, req: 4, depth: 1, pass: false}, {name: "one conf present", head: 4, hash: exHash, req: 4, depth: 1, pass: false},
{name: "one conf past", head: 4, req: 3, depth: 1, pass: true}, {name: "one conf past", head: 4, hash: exHash, req: 3, depth: 1, pass: true},
{name: "two conf future", head: 4, req: 5, depth: 2, pass: false}, {name: "two conf future", head: 4, hash: exHash, req: 5, depth: 2, pass: false},
{name: "two conf present", head: 4, req: 4, depth: 2, pass: false}, {name: "two conf present", head: 4, hash: exHash, req: 4, depth: 2, pass: false},
{name: "two conf not like 1", head: 4, req: 3, depth: 2, pass: false}, {name: "two conf not like 1", head: 4, hash: exHash, req: 3, depth: 2, pass: false},
{name: "two conf pass", head: 4, req: 2, depth: 2, pass: true}, {name: "two conf pass", head: 4, hash: exHash, req: 2, depth: 2, pass: true},
{name: "easy pass", head: 100, req: 20, depth: 5, pass: true}, {name: "easy pass", head: 100, hash: exHash, req: 20, depth: 5, pass: true},
{name: "genesis case", head: 0, req: 0, depth: 4, pass: true}, {name: "genesis case", head: 0, hash: exHash, req: 0, depth: 4, pass: true},
{name: "no L1 state", req: 10, depth: 4, pass: true},
} }
for _, tc := range testCases { for _, tc := range testCases {
t.Run(tc.name, tc.Run) t.Run(tc.name, tc.Run)
......
...@@ -91,6 +91,7 @@ func (d *Sequencer) CompleteBuildingBlock(ctx context.Context) (*eth.ExecutionPa ...@@ -91,6 +91,7 @@ func (d *Sequencer) CompleteBuildingBlock(ctx context.Context) (*eth.ExecutionPa
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to complete building on top of L2 chain %s, error (%d): %w", d.buildingOnto.HeadBlockHash, errTyp, err) return nil, fmt.Errorf("failed to complete building on top of L2 chain %s, error (%d): %w", d.buildingOnto.HeadBlockHash, errTyp, err)
} }
d.buildingID = eth.PayloadID{}
return payload, nil return payload, nil
} }
...@@ -103,7 +104,6 @@ func (d *Sequencer) CreateNewBlock(ctx context.Context, l2Head eth.L2BlockRef, l ...@@ -103,7 +104,6 @@ func (d *Sequencer) CreateNewBlock(ctx context.Context, l2Head eth.L2BlockRef, l
if err != nil { if err != nil {
return l2Head, nil, err return l2Head, nil, err
} }
d.buildingID = eth.PayloadID{}
// Generate an L2 block ref from the payload. // Generate an L2 block ref from the payload.
ref, err := derive.PayloadToBlockRef(payload, &d.config.Genesis) ref, err := derive.PayloadToBlockRef(payload, &d.config.Genesis)
......
...@@ -266,7 +266,7 @@ func (s *Driver) eventLoop() { ...@@ -266,7 +266,7 @@ func (s *Driver) eventLoop() {
s.log.Warn("not creating block, node is deriving new l2 data", "head_l1", l1Head) s.log.Warn("not creating block, node is deriving new l2 data", "head_l1", l1Head)
break break
} }
ctx, cancel := context.WithTimeout(ctx, 10*time.Second) ctx, cancel := context.WithTimeout(ctx, 20*time.Minute)
err := s.createNewL2Block(ctx) err := s.createNewL2Block(ctx)
cancel() cancel()
if err != nil { if err != nil {
...@@ -309,9 +309,7 @@ func (s *Driver) eventLoop() { ...@@ -309,9 +309,7 @@ func (s *Driver) eventLoop() {
s.metrics.SetDerivationIdle(false) s.metrics.SetDerivationIdle(false)
s.idleDerivation = false s.idleDerivation = false
s.log.Debug("Derivation process step", "onto_origin", s.derivation.Origin(), "attempts", stepAttempts) s.log.Debug("Derivation process step", "onto_origin", s.derivation.Origin(), "attempts", stepAttempts)
stepCtx, cancel := context.WithTimeout(ctx, time.Second*10) // TODO pick a timeout for executing a single step err := s.derivation.Step(context.Background())
err := s.derivation.Step(stepCtx)
cancel()
stepAttempts += 1 // count as attempt by default. We reset to 0 if we are making healthy progress. stepAttempts += 1 // count as attempt by default. We reset to 0 if we are making healthy progress.
if err == io.EOF { if err == io.EOF {
s.log.Debug("Derivation process went idle", "progress", s.derivation.Origin()) s.log.Debug("Derivation process went idle", "progress", s.derivation.Origin())
......
...@@ -32,6 +32,7 @@ import ( ...@@ -32,6 +32,7 @@ import (
"github.com/ethereum-optimism/optimism/op-node/rollup" "github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum/go-ethereum" "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log"
) )
type L1Chain interface { type L1Chain interface {
...@@ -101,7 +102,7 @@ func currentHeads(ctx context.Context, cfg *rollup.Config, l2 L2Chain) (*FindHea ...@@ -101,7 +102,7 @@ func currentHeads(ctx context.Context, cfg *rollup.Config, l2 L2Chain) (*FindHea
// Plausible: meaning that the blockhash of the L2 block's L1 origin // Plausible: meaning that the blockhash of the L2 block's L1 origin
// (as reported in the L1 Attributes deposit within the L2 block) is not canonical at another height in the L1 chain, // (as reported in the L1 Attributes deposit within the L2 block) is not canonical at another height in the L1 chain,
// and the same holds for all its ancestors. // and the same holds for all its ancestors.
func FindL2Heads(ctx context.Context, cfg *rollup.Config, l1 L1Chain, l2 L2Chain) (result *FindHeadsResult, err error) { func FindL2Heads(ctx context.Context, cfg *rollup.Config, l1 L1Chain, l2 L2Chain, lgr log.Logger) (result *FindHeadsResult, err error) {
// Fetch current L2 forkchoice state // Fetch current L2 forkchoice state
result, err = currentHeads(ctx, cfg, l2) result, err = currentHeads(ctx, cfg, l2)
if err != nil { if err != nil {
...@@ -137,6 +138,8 @@ func FindL2Heads(ctx context.Context, cfg *rollup.Config, l1 L1Chain, l2 L2Chain ...@@ -137,6 +138,8 @@ func FindL2Heads(ctx context.Context, cfg *rollup.Config, l1 L1Chain, l2 L2Chain
ahead = notFound ahead = notFound
} }
lgr.Trace("walking sync start", "number", n.Number)
// Don't walk past genesis. If we were at the L2 genesis, but could not find its L1 origin, // Don't walk past genesis. If we were at the L2 genesis, but could not find its L1 origin,
// the L2 chain is building on the wrong L1 branch. // the L2 chain is building on the wrong L1 branch.
if n.Number == cfg.Genesis.L2.Number { if n.Number == cfg.Genesis.L2.Number {
......
...@@ -73,7 +73,9 @@ func (c *syncStartTestCase) Run(t *testing.T) { ...@@ -73,7 +73,9 @@ func (c *syncStartTestCase) Run(t *testing.T) {
Genesis: genesis, Genesis: genesis,
SeqWindowSize: c.SeqWindowSize, SeqWindowSize: c.SeqWindowSize,
} }
result, err := FindL2Heads(context.Background(), cfg, chain, chain) lgr := log.New()
lgr.SetHandler(log.DiscardHandler())
result, err := FindL2Heads(context.Background(), cfg, chain, chain, lgr)
if c.ExpectedErr != nil { if c.ExpectedErr != nil {
require.ErrorIs(t, err, c.ExpectedErr, "expected error") require.ErrorIs(t, err, c.ExpectedErr, "expected error")
return return
......
...@@ -251,7 +251,7 @@ func (s *EthClient) FetchReceipts(ctx context.Context, blockHash common.Hash) (e ...@@ -251,7 +251,7 @@ func (s *EthClient) FetchReceipts(ctx context.Context, blockHash common.Hash) (e
for i := 0; i < len(txs); i++ { for i := 0; i < len(txs); i++ {
txHashes[i] = txs[i].Hash() txHashes[i] = txs[i].Hash()
} }
fetcher = NewReceiptsFetcher(info.ID(), info.ReceiptHash(), txHashes, s.client.BatchCallContext, s.maxBatchSize) fetcher = NewReceiptsFetcher(eth.ToBlockID(info), info.ReceiptHash(), txHashes, s.client.BatchCallContext, s.maxBatchSize)
s.receiptsCache.Add(blockHash, fetcher) s.receiptsCache.Add(blockHash, fetcher)
} }
// Fetch all receipts // Fetch all receipts
......
# @eth-optimism/ci-builder # @eth-optimism/ci-builder
## 0.3.3
### Patch Changes
- 3f485627: Pin slither version to 0.9.0
## 0.3.2 ## 0.3.2
### Patch Changes ### Patch Changes
......
{ {
"name": "@eth-optimism/ci-builder", "name": "@eth-optimism/ci-builder",
"version": "0.3.2", "version": "0.3.3",
"scripts": {}, "scripts": {},
"license": "MIT", "license": "MIT",
"dependencies": {} "dependencies": {}
......
# @eth-optimism/actor-tests # @eth-optimism/actor-tests
## 0.0.9
### Patch Changes
- Updated dependencies [35a7bb5e]
- Updated dependencies [b40913b1]
- Updated dependencies [a5e715c3]
- Updated dependencies [d18b8aa3]
- @eth-optimism/contracts-bedrock@0.8.1
- @eth-optimism/sdk@1.6.7
## 0.0.8 ## 0.0.8
### Patch Changes ### Patch Changes
......
{ {
"name": "@eth-optimism/actor-tests", "name": "@eth-optimism/actor-tests",
"version": "0.0.8", "version": "0.0.9",
"description": "A library and suite of tests to stress test Optimism Bedrock.", "description": "A library and suite of tests to stress test Optimism Bedrock.",
"license": "MIT", "license": "MIT",
"author": "", "author": "",
...@@ -18,9 +18,9 @@ ...@@ -18,9 +18,9 @@
"test:coverage": "yarn test" "test:coverage": "yarn test"
}, },
"dependencies": { "dependencies": {
"@eth-optimism/contracts-bedrock": "0.8.0", "@eth-optimism/contracts-bedrock": "0.8.1",
"@eth-optimism/core-utils": "^0.10.1", "@eth-optimism/core-utils": "^0.10.1",
"@eth-optimism/sdk": "^1.6.6", "@eth-optimism/sdk": "^1.6.7",
"@types/chai": "^4.2.18", "@types/chai": "^4.2.18",
"@types/chai-as-promised": "^7.1.4", "@types/chai-as-promised": "^7.1.4",
"async-mutex": "^0.3.2", "async-mutex": "^0.3.2",
......
# @eth-optimism/contracts-bedrock # @eth-optimism/contracts-bedrock
## 0.8.1
### Patch Changes
- 35a7bb5e: Use uint64 for arithmetic in XDM's baseGas
- a5e715c3: Rename the event emitted in the L2ToL1MessagePasser
- d18b8aa3: Removes an unnecessary initializer parameter in the L200
## 0.8.0 ## 0.8.0
### Minor Changes ### Minor Changes
......
...@@ -99,6 +99,14 @@ contract OptimismPortal is Initializable, ResourceMetering, Semver { ...@@ -99,6 +99,14 @@ contract OptimismPortal is Initializable, ResourceMetering, Semver {
initialize(); initialize();
} }
/**
* @notice Initializer;
*/
function initialize() public initializer {
l2Sender = DEFAULT_L2_SENDER;
__ResourceMetering_init();
}
/** /**
* @notice Accepts value so that users can send ETH directly to this contract and have the * @notice Accepts value so that users can send ETH directly to this contract and have the
* funds be deposited to their address on L2. This is intended as a convenience * funds be deposited to their address on L2. This is intended as a convenience
...@@ -214,14 +222,6 @@ contract OptimismPortal is Initializable, ResourceMetering, Semver { ...@@ -214,14 +222,6 @@ contract OptimismPortal is Initializable, ResourceMetering, Semver {
return _isOutputFinalized(proposal); return _isOutputFinalized(proposal);
} }
/**
* @notice Initializer;
*/
function initialize() public initializer {
l2Sender = DEFAULT_L2_SENDER;
__ResourceMetering_init();
}
/** /**
* @notice Accepts deposits of ETH and data, and emits a TransactionDeposited event for use in * @notice Accepts deposits of ETH and data, and emits a TransactionDeposited event for use in
* deriving deposit transactions. Note that if a deposit is made by a contract, its * deriving deposit transactions. Note that if a deposit is made by a contract, its
......
...@@ -7,7 +7,7 @@ import '@eth-optimism/hardhat-deploy-config' ...@@ -7,7 +7,7 @@ import '@eth-optimism/hardhat-deploy-config'
const upgradeABIs = { const upgradeABIs = {
L2OutputOracleProxy: async (deployConfig) => [ L2OutputOracleProxy: async (deployConfig) => [
'initialize(bytes32,uint256,address,address)', 'initialize(bytes32,address,address)',
[ [
deployConfig.l2OutputOracleGenesisL2Output, deployConfig.l2OutputOracleGenesisL2Output,
deployConfig.l2OutputOracleProposer, deployConfig.l2OutputOracleProposer,
......
{ {
"name": "@eth-optimism/contracts-bedrock", "name": "@eth-optimism/contracts-bedrock",
"version": "0.8.0", "version": "0.8.1",
"description": "Contracts for Optimism Specs", "description": "Contracts for Optimism Specs",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
......
# @eth-optimism/contracts-periphery # @eth-optimism/contracts-periphery
## 1.0.2
### Patch Changes
- e81a6ff5: Goerli nft bridge deployment
- a3242d4f: Fix erc721 factory to match erc21 factory
- ffa5297e: mainnet nft bridge deployments
## 1.0.1 ## 1.0.1
### Patch Changes ### Patch Changes
......
{ {
"name": "@eth-optimism/contracts-periphery", "name": "@eth-optimism/contracts-periphery",
"version": "1.0.1", "version": "1.0.2",
"description": "[Optimism] External (out-of-protocol) L1 and L2 smart contracts for Optimism", "description": "[Optimism] External (out-of-protocol) L1 and L2 smart contracts for Optimism",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -55,7 +55,7 @@ ...@@ -55,7 +55,7 @@
"devDependencies": { "devDependencies": {
"@defi-wonderland/smock": "^2.0.7", "@defi-wonderland/smock": "^2.0.7",
"@eth-optimism/contracts": "^0.5.37", "@eth-optimism/contracts": "^0.5.37",
"@eth-optimism/contracts-bedrock": "^0.8.0", "@eth-optimism/contracts-bedrock": "^0.8.1",
"@eth-optimism/core-utils": "^0.10.1", "@eth-optimism/core-utils": "^0.10.1",
"@eth-optimism/hardhat-deploy-config": "^0.2.4", "@eth-optimism/hardhat-deploy-config": "^0.2.4",
"@ethersproject/hardware-wallets": "^5.7.0", "@ethersproject/hardware-wallets": "^5.7.0",
......
# @eth-optimism/drippie-mon # @eth-optimism/drippie-mon
## 0.3.18
### Patch Changes
- Updated dependencies [b40913b1]
- Updated dependencies [a5e715c3]
- Updated dependencies [e81a6ff5]
- Updated dependencies [a3242d4f]
- Updated dependencies [ffa5297e]
- @eth-optimism/sdk@1.6.7
- @eth-optimism/contracts-periphery@1.0.2
## 0.3.17 ## 0.3.17
### Patch Changes ### Patch Changes
......
{ {
"private": true, "private": true,
"name": "@eth-optimism/drippie-mon", "name": "@eth-optimism/drippie-mon",
"version": "0.3.17", "version": "0.3.18",
"description": "[Optimism] Service for monitoring Drippie instances", "description": "[Optimism] Service for monitoring Drippie instances",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -33,9 +33,9 @@ ...@@ -33,9 +33,9 @@
}, },
"dependencies": { "dependencies": {
"@eth-optimism/common-ts": "0.6.6", "@eth-optimism/common-ts": "0.6.6",
"@eth-optimism/contracts-periphery": "1.0.1", "@eth-optimism/contracts-periphery": "1.0.2",
"@eth-optimism/core-utils": "0.10.1", "@eth-optimism/core-utils": "0.10.1",
"@eth-optimism/sdk": "1.6.6", "@eth-optimism/sdk": "1.6.7",
"ethers": "^5.7.0" "ethers": "^5.7.0"
}, },
"devDependencies": { "devDependencies": {
......
...@@ -29,7 +29,7 @@ ...@@ -29,7 +29,7 @@
"devDependencies": { "devDependencies": {
"@eth-optimism/contracts": "0.5.37", "@eth-optimism/contracts": "0.5.37",
"@eth-optimism/core-utils": "0.10.1", "@eth-optimism/core-utils": "0.10.1",
"@eth-optimism/sdk": "1.6.6", "@eth-optimism/sdk": "1.6.7",
"@ethersproject/abstract-provider": "^5.7.0", "@ethersproject/abstract-provider": "^5.7.0",
"chai-as-promised": "^7.1.1", "chai-as-promised": "^7.1.1",
"chai": "^4.3.4", "chai": "^4.3.4",
......
# @eth-optimism/message-relayer # @eth-optimism/message-relayer
## 0.5.17
### Patch Changes
- Updated dependencies [b40913b1]
- Updated dependencies [a5e715c3]
- @eth-optimism/sdk@1.6.7
## 0.5.16 ## 0.5.16
### Patch Changes ### Patch Changes
......
{ {
"private": true, "private": true,
"name": "@eth-optimism/message-relayer", "name": "@eth-optimism/message-relayer",
"version": "0.5.16", "version": "0.5.17",
"description": "[Optimism] Service for automatically relaying L2 to L1 transactions", "description": "[Optimism] Service for automatically relaying L2 to L1 transactions",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -33,7 +33,7 @@ ...@@ -33,7 +33,7 @@
"dependencies": { "dependencies": {
"@eth-optimism/common-ts": "0.6.6", "@eth-optimism/common-ts": "0.6.6",
"@eth-optimism/core-utils": "0.10.1", "@eth-optimism/core-utils": "0.10.1",
"@eth-optimism/sdk": "1.6.6", "@eth-optimism/sdk": "1.6.7",
"ethers": "^5.7.0" "ethers": "^5.7.0"
}, },
"devDependencies": { "devDependencies": {
......
# @eth-optimism/sdk # @eth-optimism/sdk
## 1.6.7
### Patch Changes
- b40913b1: Adds contract addresses for the Bedrock Alpha testnet
- a5e715c3: Rename the event emitted in the L2ToL1MessagePasser
- Updated dependencies [35a7bb5e]
- Updated dependencies [a5e715c3]
- Updated dependencies [d18b8aa3]
- @eth-optimism/contracts-bedrock@0.8.1
## 1.6.6 ## 1.6.6
### Patch Changes ### Patch Changes
......
{ {
"name": "@eth-optimism/sdk", "name": "@eth-optimism/sdk",
"version": "1.6.6", "version": "1.6.7",
"description": "[Optimism] Tools for working with Optimism", "description": "[Optimism] Tools for working with Optimism",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -50,7 +50,7 @@ ...@@ -50,7 +50,7 @@
"dependencies": { "dependencies": {
"@eth-optimism/contracts": "0.5.37", "@eth-optimism/contracts": "0.5.37",
"@eth-optimism/core-utils": "0.10.1", "@eth-optimism/core-utils": "0.10.1",
"@eth-optimism/contracts-bedrock": "0.8.0", "@eth-optimism/contracts-bedrock": "0.8.1",
"lodash": "^4.17.21", "lodash": "^4.17.21",
"merkletreejs": "^0.2.27", "merkletreejs": "^0.2.27",
"rlp": "^2.2.7" "rlp": "^2.2.7"
......
# @eth-optimism/proxyd # @eth-optimism/proxyd
## 3.12.0
### Minor Changes
- e9f2c701: Allow disabling backend rate limiter
- ca45a85e: Support pattern matching in exempt origins/user agents
- f4faa44c: adds server.log_level config
## 3.11.0
### Minor Changes
- b3c5eeec: Fixed JSON-RPC 2.0 specification compliance by adding the optional data field on an RPCError
- 01ae6625: Adds new Redis rate limiter
## 3.10.2 ## 3.10.2
### Patch Changes ### Patch Changes
......
...@@ -5,6 +5,7 @@ import ( ...@@ -5,6 +5,7 @@ import (
"crypto/rand" "crypto/rand"
"encoding/hex" "encoding/hex"
"fmt" "fmt"
"math"
"sync" "sync"
"time" "time"
...@@ -256,5 +257,30 @@ func randStr(l int) string { ...@@ -256,5 +257,30 @@ func randStr(l int) string {
return hex.EncodeToString(b) return hex.EncodeToString(b)
} }
type ServerRateLimiter struct { type NoopBackendRateLimiter struct{}
var noopBackendRateLimiter = &NoopBackendRateLimiter{}
func (n *NoopBackendRateLimiter) IsBackendOnline(name string) (bool, error) {
return true, nil
}
func (n *NoopBackendRateLimiter) SetBackendOffline(name string, duration time.Duration) error {
return nil
}
func (n *NoopBackendRateLimiter) IncBackendRPS(name string) (int, error) {
return math.MaxInt, nil
}
func (n *NoopBackendRateLimiter) IncBackendWSConns(name string, max int) (bool, error) {
return true, nil
}
func (n *NoopBackendRateLimiter) DecBackendWSConns(name string) error {
return nil
}
func (n *NoopBackendRateLimiter) FlushBackendWSConns(names []string) error {
return nil
} }
...@@ -37,6 +37,21 @@ func main() { ...@@ -37,6 +37,21 @@ func main() {
log.Crit("error reading config file", "err", err) log.Crit("error reading config file", "err", err)
} }
// update log level from config
logLevel, err := log.LvlFromString(config.Server.LogLevel)
if err != nil {
logLevel = log.LvlInfo
if config.Server.LogLevel != "" {
log.Warn("invalid server.log_level set: " + config.Server.LogLevel)
}
}
log.Root().SetHandler(
log.LvlFilterHandler(
logLevel,
log.StreamHandler(os.Stdout, log.JSONFormat()),
),
)
shutdown, err := proxyd.Start(config) shutdown, err := proxyd.Start(config)
if err != nil { if err != nil {
log.Crit("error starting proxyd", "err", err) log.Crit("error starting proxyd", "err", err)
......
...@@ -14,6 +14,7 @@ type ServerConfig struct { ...@@ -14,6 +14,7 @@ type ServerConfig struct {
WSPort int `toml:"ws_port"` WSPort int `toml:"ws_port"`
MaxBodySizeBytes int64 `toml:"max_body_size_bytes"` MaxBodySizeBytes int64 `toml:"max_body_size_bytes"`
MaxConcurrentRPCs int64 `toml:"max_concurrent_rpcs"` MaxConcurrentRPCs int64 `toml:"max_concurrent_rpcs"`
LogLevel string `toml:"log_level"`
// TimeoutSeconds specifies the maximum time spent serving an HTTP request. Note that this isn't used for websocket connections // TimeoutSeconds specifies the maximum time spent serving an HTTP request. Note that this isn't used for websocket connections
TimeoutSeconds int `toml:"timeout_seconds"` TimeoutSeconds int `toml:"timeout_seconds"`
...@@ -41,13 +42,14 @@ type MetricsConfig struct { ...@@ -41,13 +42,14 @@ type MetricsConfig struct {
} }
type RateLimitConfig struct { type RateLimitConfig struct {
UseRedis bool `toml:"use_redis"` UseRedis bool `toml:"use_redis"`
BaseRate int `toml:"base_rate"` EnableBackendRateLimiter bool `toml:"enable_backend_rate_limiter"`
BaseInterval TOMLDuration `toml:"base_interval"` BaseRate int `toml:"base_rate"`
ExemptOrigins []string `toml:"exempt_origins"` BaseInterval TOMLDuration `toml:"base_interval"`
ExemptUserAgents []string `toml:"exempt_user_agents"` ExemptOrigins []string `toml:"exempt_origins"`
ErrorMessage string `toml:"error_message"` ExemptUserAgents []string `toml:"exempt_user_agents"`
MethodOverrides map[string]*RateLimitMethodOverride `toml:"method_overrides"` ErrorMessage string `toml:"error_message"`
MethodOverrides map[string]*RateLimitMethodOverride `toml:"method_overrides"`
} }
type RateLimitMethodOverride struct { type RateLimitMethodOverride struct {
......
...@@ -19,6 +19,8 @@ ws_port = 8085 ...@@ -19,6 +19,8 @@ ws_port = 8085
# Maximum client body size, in bytes, that the server will accept. # Maximum client body size, in bytes, that the server will accept.
max_body_size_bytes = 10485760 max_body_size_bytes = 10485760
max_concurrent_rpcs = 1000 max_concurrent_rpcs = 1000
# Server log level
log_level = "info"
[redis] [redis]
# URL to a Redis instance. # URL to a Redis instance.
......
...@@ -15,4 +15,7 @@ max_rps = 2 ...@@ -15,4 +15,7 @@ max_rps = 2
backends = ["good"] backends = ["good"]
[rpc_method_mappings] [rpc_method_mappings]
eth_chainId = "main" eth_chainId = "main"
\ No newline at end of file
[rate_limit]
enable_backend_rate_limiter = true
\ No newline at end of file
...@@ -19,4 +19,7 @@ ws_url = "$BAD_BACKEND_RPC_URL" ...@@ -19,4 +19,7 @@ ws_url = "$BAD_BACKEND_RPC_URL"
backends = ["bad", "good"] backends = ["bad", "good"]
[rpc_method_mappings] [rpc_method_mappings]
eth_chainId = "main" eth_chainId = "main"
\ No newline at end of file
[rate_limit]
enable_backend_rate_limiter = true
\ No newline at end of file
...@@ -26,3 +26,6 @@ backends = ["good"] ...@@ -26,3 +26,6 @@ backends = ["good"]
[rpc_method_mappings] [rpc_method_mappings]
eth_chainId = "main" eth_chainId = "main"
[rate_limit]
enable_backend_rate_limiter = true
\ No newline at end of file
{ {
"name": "@eth-optimism/proxyd", "name": "@eth-optimism/proxyd",
"version": "3.10.2", "version": "3.12.0",
"private": true, "private": true,
"dependencies": {} "dependencies": {}
} }
...@@ -53,11 +53,15 @@ func Start(config *Config) (func(), error) { ...@@ -53,11 +53,15 @@ func Start(config *Config) (func(), error) {
var lim BackendRateLimiter var lim BackendRateLimiter
var err error var err error
if redisClient == nil { if config.RateLimit.EnableBackendRateLimiter {
log.Warn("redis is not configured, using local rate limiter") if redisClient != nil {
lim = NewLocalBackendRateLimiter() lim = NewRedisRateLimiter(redisClient)
} else {
log.Warn("redis is not configured, using local rate limiter")
lim = NewLocalBackendRateLimiter()
}
} else { } else {
lim = NewRedisRateLimiter(redisClient) lim = noopBackendRateLimiter
} }
// While modifying shared globals is a bad practice, the alternative // While modifying shared globals is a bad practice, the alternative
......
...@@ -8,6 +8,7 @@ import ( ...@@ -8,6 +8,7 @@ import (
"io" "io"
"math" "math"
"net/http" "net/http"
"regexp"
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
...@@ -49,8 +50,8 @@ type Server struct { ...@@ -49,8 +50,8 @@ type Server struct {
upgrader *websocket.Upgrader upgrader *websocket.Upgrader
mainLim FrontendRateLimiter mainLim FrontendRateLimiter
overrideLims map[string]FrontendRateLimiter overrideLims map[string]FrontendRateLimiter
limExemptOrigins map[string]bool limExemptOrigins []*regexp.Regexp
limExemptUserAgents map[string]bool limExemptUserAgents []*regexp.Regexp
rpcServer *http.Server rpcServer *http.Server
wsServer *http.Server wsServer *http.Server
cache RPCCache cache RPCCache
...@@ -104,15 +105,23 @@ func NewServer( ...@@ -104,15 +105,23 @@ func NewServer(
} }
var mainLim FrontendRateLimiter var mainLim FrontendRateLimiter
limExemptOrigins := make(map[string]bool) limExemptOrigins := make([]*regexp.Regexp, 0)
limExemptUserAgents := make(map[string]bool) limExemptUserAgents := make([]*regexp.Regexp, 0)
if rateLimitConfig.BaseRate > 0 { if rateLimitConfig.BaseRate > 0 {
mainLim = limiterFactory(time.Duration(rateLimitConfig.BaseInterval), rateLimitConfig.BaseRate, "main") mainLim = limiterFactory(time.Duration(rateLimitConfig.BaseInterval), rateLimitConfig.BaseRate, "main")
for _, origin := range rateLimitConfig.ExemptOrigins { for _, origin := range rateLimitConfig.ExemptOrigins {
limExemptOrigins[strings.ToLower(origin)] = true pattern, err := regexp.Compile(origin)
if err != nil {
return nil, err
}
limExemptOrigins = append(limExemptOrigins, pattern)
} }
for _, agent := range rateLimitConfig.ExemptUserAgents { for _, agent := range rateLimitConfig.ExemptUserAgents {
limExemptUserAgents[strings.ToLower(agent)] = true pattern, err := regexp.Compile(agent)
if err != nil {
return nil, err
}
limExemptUserAgents = append(limExemptUserAgents, pattern)
} }
} else { } else {
mainLim = NoopFrontendRateLimiter mainLim = NoopFrontendRateLimiter
...@@ -548,11 +557,22 @@ func (s *Server) populateContext(w http.ResponseWriter, r *http.Request) context ...@@ -548,11 +557,22 @@ func (s *Server) populateContext(w http.ResponseWriter, r *http.Request) context
} }
func (s *Server) isUnlimitedOrigin(origin string) bool { func (s *Server) isUnlimitedOrigin(origin string) bool {
return s.limExemptOrigins[strings.ToLower(origin)] for _, pat := range s.limExemptOrigins {
if pat.MatchString(origin) {
return true
}
}
return false
} }
func (s *Server) isUnlimitedUserAgent(origin string) bool { func (s *Server) isUnlimitedUserAgent(origin string) bool {
return s.limExemptUserAgents[strings.ToLower(origin)] for _, pat := range s.limExemptUserAgents {
if pat.MatchString(origin) {
return true
}
}
return false
} }
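
Since exempt origins and user agents are now compiled as regular expressions instead of exact lowercase matches, one pattern can cover a whole family of values. A small illustrative check (the pattern is hypothetical, not a recommended configuration value):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical exempt_origins entry: example.com and any of its subdomains over https.
	pat := regexp.MustCompile(`^https://([a-z0-9-]+\.)?example\.com$`)
	fmt.Println(pat.MatchString("https://app.example.com")) // true
	fmt.Println(pat.MatchString("https://example.com"))     // true
	fmt.Println(pat.MatchString("https://evil.com"))        // false
}
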
func setCacheHeader(w http.ResponseWriter, cached bool) { func setCacheHeader(w http.ResponseWriter, cached bool) {
......
...@@ -366,6 +366,9 @@ When decompressing a channel, we limit the amount of decompressed data to `MAX_R ...@@ -366,6 +366,9 @@ When decompressing a channel, we limit the amount of decompressed data to `MAX_R
humongous amount of data). If the decompressed data exceeds the limit, things proceed as though the channel contained humongous amount of data). If the decompressed data exceeds the limit, things proceed as though the channel contained
only the first `MAX_RLP_BYTES_PER_CHANNEL` decompressed bytes. only the first `MAX_RLP_BYTES_PER_CHANNEL` decompressed bytes.
When decoding batches, all batches that can be completely decoded below `MAX_RLP_BYTES_PER_CHANNEL` will be accepted
even if the size of the channel is greater than `MAX_RLP_BYTES_PER_CHANNEL`.
While the above pseudocode implies that all batches are known in advance, it is possible to perform streaming While the above pseudocode implies that all batches are known in advance, it is possible to perform streaming
compression and decompression of RLP-encoded batches. This means it is possible to start including channel frames in a compression and decompression of RLP-encoded batches. This means it is possible to start including channel frames in a
[batcher transaction][g-batcher-transaction] before we know how many batches (and how many frames) the channel will [batcher transaction][g-batcher-transaction] before we know how many batches (and how many frames) the channel will
......