exchain / nebula — Commit 565a4e67 (Unverified)

Commit 565a4e67 authored Jun 02, 2023 by mergify[bot], committed by GitHub on Jun 02, 2023.

Merge branch 'develop' into aj/add-gaters

Parents: 2e7bed79, 2de63457

Showing 28 changed files with 1118 additions and 66 deletions (+1118 −66).
Changed files:

.changeset/three-colts-march.md (+7 −0)
.circleci/config.yml (+3 −3)
docs/op-stack/package.json (+3 −1)
docs/op-stack/src/.vuepress/enhanceApp.js (+8 −3)
docs/op-stack/yarn.lock (+251 −7)
op-chain-ops/cmd/rollover/main.go (+36 −5)
op-challenger/challenger/challenger.go (+14 −10)
op-challenger/challenger/oracle_test.go (+1 −1)
op-challenger/challenger/output.go (+52 −0)
op-challenger/challenger/output_test.go (+159 −0)
op-node/p2p/monitor/peer_monitor.go (+1 −1)
op-node/p2p/sync.go (+1 −0)
op-node/p2p/sync_test.go (+68 −31)
op-proposer/proposer/l2_output_submitter.go (+1 −1)
op-wheel/cheat/cheat.go (+8 −0)
op-wheel/commands.go (+21 −0)
ops/docker/ci-builder/Dockerfile (+8 −0)
packages/contracts-bedrock/contracts/test/invariants/Encoding.t.sol (+92 −0)
packages/contracts-bedrock/contracts/test/invariants/Hashing.t.sol (+171 −0)
packages/contracts-bedrock/deployments/mainnet/AddressManager.json (+0 −0, moved)
packages/contracts-bedrock/invariant-docs/Encoding.md (+12 −0)
packages/contracts-bedrock/invariant-docs/Hashing.md (+18 −0)
packages/contracts-bedrock/scripts/eoa-migration.sh (+104 −0)
packages/sdk/src/utils/chain-constants.ts (+18 −2)
proxyd/integration_tests/consensus_test.go (+50 −0)
proxyd/integration_tests/testdata/consensus.toml (+1 −0)
proxyd/metrics.go (+9 −0)
specs/derivation.md (+1 −1)
.changeset/three-colts-march.md (new file, 0 → 100644)

---
'@eth-optimism/contracts-bedrock': minor
'@eth-optimism/contracts': minor
'@eth-optimism/sdk': minor
---

Update sdk contract addresses for bedrock
.circleci/config.yml

@@ -293,7 +293,7 @@ jobs:
   contracts-bedrock-coverage:
     docker:
-      - image: ethereumoptimism/ci-builder:latest
+      - image: us-docker.pkg.dev/oplabs-tools-artifacts/images/ci-builder:latest
     resource_class: large
     steps:
       - checkout

@@ -553,7 +553,7 @@ jobs:
   sdk-next-tests:
     docker:
-      - image: ethereumoptimism/ci-builder:latest
+      - image: us-docker.pkg.dev/oplabs-tools-artifacts/images/ci-builder:latest
     resource_class: large
     steps:
       - checkout

@@ -700,7 +700,7 @@ jobs:
   atst-tests:
     docker:
-      - image: ethereumoptimism/ci-builder:latest
+      - image: us-docker.pkg.dev/oplabs-tools-artifacts/images/ci-builder:latest
     resource_class: large
     steps:
       - checkout
docs/op-stack/package.json

@@ -5,12 +5,14 @@
   "main": "index.js",
   "scripts": {
     "dev": "vuepress dev src",
-    "build": "vuepress build src"
+    "build": "vuepress build src",
+    "preview": "yarn build && serve -s src/.vuepress/dist -p 8080"
   },
   "license": "MIT",
   "devDependencies": {
     "@vuepress/plugin-medium-zoom": "^1.8.2",
     "@vuepress/plugin-pwa": "^1.9.7",
+    "serve": "^14.2.0",
     "vuepress": "^1.8.2",
     "vuepress-plugin-plausible-analytics": "^0.2.1",
     "vuepress-theme-hope": "^1.22.0"
docs/op-stack/src/.vuepress/enhanceApp.js

@@ -13,7 +13,12 @@ export default ({ router }) => {
  // the refresh button. For more details see:
  // https://linear.app/optimism/issue/FE-1003/investigate-archive-issue-on-docs
  const registerAutoReload = () => {
-    event.$on('sw-updated', e => e.skipWaiting().then(() => {
-      location.reload(true);
-    }))
+    event.$on('sw-updated', e => {
+      e.skipWaiting().then(() => {
+        if (typeof location !== 'undefined') location.reload(true);
+      })
+    })
  }
docs/op-stack/yarn.lock

This diff is collapsed (+251 −7; not shown).
op-chain-ops/cmd/rollover/main.go

@@ -51,6 +51,8 @@ func main() {
			return err
		}

+		log.Info("Requires an archive node")
+
		log.Info("Connecting to AddressManager", "address", addresses.AddressManager)
		addressManager, err := bindings.NewAddressManager(addresses.AddressManager, clients.L1Client)
		if err != nil {

@@ -70,28 +72,42 @@ func main() {
			time.Sleep(3 * time.Second)
		}

+		shutoffBlock, err := addressManager.GetAddress(&bind.CallOpts{}, "DTL_SHUTOFF_BLOCK")
+		if err != nil {
+			return err
+		}
+		shutoffHeight := shutoffBlock.Big()
+
		log.Info("Connecting to CanonicalTransactionChain", "address", addresses.CanonicalTransactionChain)
		ctc, err := legacy_bindings.NewCanonicalTransactionChain(addresses.CanonicalTransactionChain, clients.L1Client)
		if err != nil {
			return err
		}

-		queueLength, err := ctc.GetQueueLength(&bind.CallOpts{})
+		queueLength, err := ctc.GetQueueLength(&bind.CallOpts{
+			BlockNumber: shutoffHeight,
+		})
		if err != nil {
			return err
		}

-		totalElements, err := ctc.GetTotalElements(&bind.CallOpts{})
+		totalElements, err := ctc.GetTotalElements(&bind.CallOpts{
+			BlockNumber: shutoffHeight,
+		})
		if err != nil {
			return err
		}

-		totalBatches, err := ctc.GetTotalBatches(&bind.CallOpts{})
+		totalBatches, err := ctc.GetTotalBatches(&bind.CallOpts{
+			BlockNumber: shutoffHeight,
+		})
		if err != nil {
			return err
		}

-		pending, err := ctc.GetNumPendingQueueElements(&bind.CallOpts{})
+		pending, err := ctc.GetNumPendingQueueElements(&bind.CallOpts{
+			BlockNumber: shutoffHeight,
+		})
		if err != nil {
			return err
		}

@@ -131,6 +147,7 @@ func main() {
				if err != nil {
					return err
				}
+
				// If the queue origin is l1, then it is a deposit.
				if json.QueueOrigin == "l1" {
					if json.QueueIndex == nil {

@@ -138,12 +155,26 @@ func main() {
						return fmt.Errorf("queue index is nil for tx %s at height %d", hash.Hex(), blockNumber)
					}
					queueIndex := uint64(*json.QueueIndex)
+					if json.L1BlockNumber == nil {
+						// This should never happen.
+						return fmt.Errorf("L1 block number is nil for tx %s at height %d", hash.Hex(), blockNumber)
+					}
					l1BlockNumber := json.L1BlockNumber.ToInt()
					log.Info("Deposit found", "l2-block", blockNumber, "l1-block", l1BlockNumber, "queue-index", queueIndex)
+					// This should never happen
+					if json.L1BlockNumber.ToInt().Uint64() > shutoffHeight.Uint64() {
+						log.Warn("Lost deposit")
+						return fmt.Errorf("Lost deposit: %s", hash.Hex())
+					}
					// Check to see if the final deposit was ingested. Subtract 1 here to handle zero
					// indexing.
					if queueIndex == queueLength.Uint64()-1 {
						log.Info("Found final deposit in l2geth", "queue-index", queueIndex)
						break
					}
					// If the queue index is less than the queue length, then not all deposits have
					// been ingested by l2geth yet. This means that we need to reset the blocknumber
					// to the latest block number to restart walking backwards to find deposits that

@@ -258,7 +289,7 @@ func waitForTotalElements(wg *sync.WaitGroup, contract RollupContract, client *e
		log.Info(
			"Waiting for elements to be submitted",
			"name", name,
-			"count", totalElements.Uint64()-bn,
+			"count", bn-totalElements.Uint64(),
			"height", bn,
			"total-elements", totalElements.Uint64(),
		)
op-challenger/challenger/challenger.go

@@ -6,21 +6,25 @@ import (
	"sync"
	"time"

-	abi "github.com/ethereum/go-ethereum/accounts/abi"
-	bind "github.com/ethereum/go-ethereum/accounts/abi/bind"
-	common "github.com/ethereum/go-ethereum/common"
+	"github.com/ethereum/go-ethereum/accounts/abi"
+	"github.com/ethereum/go-ethereum/accounts/abi/bind"
+	"github.com/ethereum/go-ethereum/common"
	ethclient "github.com/ethereum/go-ethereum/ethclient"
-	log "github.com/ethereum/go-ethereum/log"
+	"github.com/ethereum/go-ethereum/log"

-	config "github.com/ethereum-optimism/optimism/op-challenger/config"
-	metrics "github.com/ethereum-optimism/optimism/op-challenger/metrics"
+	"github.com/ethereum-optimism/optimism/op-challenger/config"
+	"github.com/ethereum-optimism/optimism/op-challenger/metrics"

-	bindings "github.com/ethereum-optimism/optimism/op-bindings/bindings"
-	sources "github.com/ethereum-optimism/optimism/op-node/sources"
+	"github.com/ethereum-optimism/optimism/op-bindings/bindings"
+	"github.com/ethereum-optimism/optimism/op-node/eth"
	opclient "github.com/ethereum-optimism/optimism/op-service/client"
-	txmgr "github.com/ethereum-optimism/optimism/op-service/txmgr"
+	"github.com/ethereum-optimism/optimism/op-service/txmgr"
)

+type OutputAPI interface {
+	OutputAtBlock(ctx context.Context, blockNum uint64) (*eth.OutputResponse, error)
+}
+
// Challenger contests invalid L2OutputOracle outputs
type Challenger struct {
	txMgr txmgr.TxManager

@@ -35,7 +39,7 @@ type Challenger struct {
	l1Client *ethclient.Client

-	rollupClient *sources.RollupClient
+	rollupClient OutputAPI

	// l2 Output Oracle contract
	l2ooContract *bindings.L2OutputOracleCaller
op-challenger/challenger/oracle_test.go

@@ -48,5 +48,5 @@ func TestBuildOutputLogFilter_Fails(t *testing.T) {
	// Build the filter
	_, err := BuildOutputLogFilter(&l2ooABI)
	require.Error(t, err)
-	require.Equal(t, ErrMissingEvent, err)
+	require.ErrorIs(t, err, ErrMissingEvent)
}
op-challenger/challenger/output.go (new file, 0 → 100644)

package challenger

import (
	"context"
	"errors"
	"math/big"

	"github.com/ethereum-optimism/optimism/op-node/eth"
)

var (
	// supportedL2OutputVersion is the version of the L2 output that the challenger supports.
	supportedL2OutputVersion = eth.Bytes32{}
	// ErrInvalidBlockNumber is returned when the block number of the output does not match the expected block number.
	ErrInvalidBlockNumber = errors.New("invalid block number")
	// ErrUnsupportedL2OOVersion is returned when the output version is not supported.
	ErrUnsupportedL2OOVersion = errors.New("unsupported l2oo version")
)

// ValidateOutput checks that a given output is expected via a trusted rollup node rpc.
// It returns: if the output is correct, the fetched output, error
func (c *Challenger) ValidateOutput(ctx context.Context, l2BlockNumber *big.Int, expected eth.Bytes32) (bool, *eth.Bytes32, error) {
	// Fetch the output from the rollup node
	ctx, cancel := context.WithTimeout(ctx, c.networkTimeout)
	defer cancel()
	output, err := c.rollupClient.OutputAtBlock(ctx, l2BlockNumber.Uint64())
	if err != nil {
		c.log.Error("Failed to fetch output", "blockNum", l2BlockNumber, "err", err)
		return false, nil, err
	}

	// Compare the output root to the expected output root
	equalRoots, err := c.compareOutputRoots(output, expected, l2BlockNumber)
	if err != nil {
		return false, nil, err
	}

	return equalRoots, &output.OutputRoot, nil
}

// compareOutputRoots compares the output root of the given block number to the expected output root.
func (c *Challenger) compareOutputRoots(received *eth.OutputResponse, expected eth.Bytes32, blockNumber *big.Int) (bool, error) {
	if received.Version != supportedL2OutputVersion {
		c.log.Error("Unsupported l2 output version", "version", received.Version)
		return false, ErrUnsupportedL2OOVersion
	}
	if received.BlockRef.Number != blockNumber.Uint64() {
		c.log.Error("Invalid blockNumber", "expected", blockNumber, "actual", received.BlockRef.Number)
		return false, ErrInvalidBlockNumber
	}
	return received.OutputRoot == expected, nil
}
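As a hedged illustration of how the new ValidateOutput API is meant to be consumed, an in-package sketch; the outputMatches helper is hypothetical and not part of this commit:

package challenger

import (
	"context"
	"math/big"

	"github.com/ethereum-optimism/optimism/op-node/eth"
)

// outputMatches is a hypothetical helper: it reports whether a proposed output
// root agrees with the trusted rollup node, and treats version/height mismatches
// and RPC failures (surfaced as typed errors by ValidateOutput) as a non-match.
func (c *Challenger) outputMatches(ctx context.Context, blockNum *big.Int, proposed eth.Bytes32) bool {
	valid, fetched, err := c.ValidateOutput(ctx, blockNum, proposed)
	if err != nil {
		// ErrUnsupportedL2OOVersion, ErrInvalidBlockNumber, and RPC errors all land here.
		c.log.Warn("could not validate proposed output", "blockNum", blockNum, "err", err)
		return false
	}
	if !valid {
		c.log.Warn("proposed output disagrees with rollup node",
			"blockNum", blockNum, "proposed", proposed, "fetched", *fetched)
	}
	return valid
}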
op-challenger/challenger/output_test.go (new file, 0 → 100644)

package challenger

import (
	"context"
	"errors"
	"math/big"
	"testing"
	"time"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	"github.com/ethereum/go-ethereum/log"

	"github.com/ethereum-optimism/optimism/op-challenger/metrics"
	"github.com/ethereum-optimism/optimism/op-node/eth"
	"github.com/ethereum-optimism/optimism/op-node/testlog"
)

func TestChallenger_ValidateOutput_RollupClientErrors(t *testing.T) {
	output := eth.OutputResponse{
		Version:    supportedL2OutputVersion,
		OutputRoot: eth.Bytes32{},
		BlockRef:   eth.L2BlockRef{},
	}

	challenger := newTestChallenger(t, output, true)

	valid, received, err := challenger.ValidateOutput(context.Background(), big.NewInt(0), output.OutputRoot)
	require.False(t, valid)
	require.Nil(t, received)
	require.ErrorIs(t, err, mockOutputApiError)
}

func TestChallenger_ValidateOutput_ErrorsWithWrongVersion(t *testing.T) {
	output := eth.OutputResponse{
		Version:    eth.Bytes32{0x01},
		OutputRoot: eth.Bytes32{0x01},
		BlockRef:   eth.L2BlockRef{},
	}

	challenger := newTestChallenger(t, output, false)

	valid, received, err := challenger.ValidateOutput(context.Background(), big.NewInt(0), eth.Bytes32{})
	require.False(t, valid)
	require.Nil(t, received)
	require.ErrorIs(t, err, ErrUnsupportedL2OOVersion)
}

func TestChallenger_ValidateOutput_ErrorsInvalidBlockNumber(t *testing.T) {
	output := eth.OutputResponse{
		Version:    supportedL2OutputVersion,
		OutputRoot: eth.Bytes32{0x01},
		BlockRef:   eth.L2BlockRef{},
	}

	challenger := newTestChallenger(t, output, false)

	valid, received, err := challenger.ValidateOutput(context.Background(), big.NewInt(1), output.OutputRoot)
	require.False(t, valid)
	require.Nil(t, received)
	require.ErrorIs(t, err, ErrInvalidBlockNumber)
}

func TestOutput_ValidateOutput(t *testing.T) {
	output := eth.OutputResponse{
		Version:    eth.Bytes32{},
		OutputRoot: eth.Bytes32{},
		BlockRef:   eth.L2BlockRef{},
	}

	challenger := newTestChallenger(t, output, false)

	valid, expected, err := challenger.ValidateOutput(context.Background(), big.NewInt(0), output.OutputRoot)
	require.Equal(t, *expected, output.OutputRoot)
	require.True(t, valid)
	require.NoError(t, err)
}

func TestChallenger_CompareOutputRoots_ErrorsWithDifferentRoots(t *testing.T) {
	output := eth.OutputResponse{
		Version:    eth.Bytes32{0xFF, 0xFF, 0xFF, 0xFF},
		OutputRoot: eth.Bytes32{},
		BlockRef:   eth.L2BlockRef{},
	}

	challenger := newTestChallenger(t, output, false)

	valid, err := challenger.compareOutputRoots(&output, output.OutputRoot, big.NewInt(0))
	require.False(t, valid)
	require.ErrorIs(t, err, ErrUnsupportedL2OOVersion)
}

func TestChallenger_CompareOutputRoots_ErrInvalidBlockNumber(t *testing.T) {
	output := eth.OutputResponse{
		Version:    supportedL2OutputVersion,
		OutputRoot: eth.Bytes32{},
		BlockRef:   eth.L2BlockRef{},
	}

	challenger := newTestChallenger(t, output, false)

	valid, err := challenger.compareOutputRoots(&output, output.OutputRoot, big.NewInt(1))
	require.False(t, valid)
	require.ErrorIs(t, err, ErrInvalidBlockNumber)
}

func TestChallenger_CompareOutputRoots_Succeeds(t *testing.T) {
	output := eth.OutputResponse{
		Version:    supportedL2OutputVersion,
		OutputRoot: eth.Bytes32{},
		BlockRef:   eth.L2BlockRef{},
	}

	challenger := newTestChallenger(t, output, false)

	valid, err := challenger.compareOutputRoots(&output, output.OutputRoot, big.NewInt(0))
	require.True(t, valid)
	require.NoError(t, err)

	valid, err = challenger.compareOutputRoots(&output, eth.Bytes32{0x01}, big.NewInt(0))
	require.False(t, valid)
	require.NoError(t, err)
}

func newTestChallenger(t *testing.T, output eth.OutputResponse, errors bool) *Challenger {
	outputApi := newMockOutputApi(output, errors)
	log := testlog.Logger(t, log.LvlError)
	metr := metrics.NewMetrics("test")
	challenger := Challenger{
		rollupClient:   outputApi,
		log:            log,
		metr:           metr,
		networkTimeout: time.Duration(5) * time.Second,
	}
	return &challenger
}

var mockOutputApiError = errors.New("mock output api error")

type mockOutputApi struct {
	mock.Mock
	expected eth.OutputResponse
	errors   bool
}

func newMockOutputApi(output eth.OutputResponse, errors bool) *mockOutputApi {
	return &mockOutputApi{
		expected: output,
		errors:   errors,
	}
}

func (m *mockOutputApi) OutputAtBlock(ctx context.Context, blockNumber uint64) (*eth.OutputResponse, error) {
	if m.errors {
		return nil, mockOutputApiError
	}
	return &m.expected, nil
}
op-node/p2p/monitor/peer_monitor.go

@@ -86,7 +86,7 @@ func (p *PeerMonitor) checkNextPeer() error {
	if err != nil {
		return fmt.Errorf("retrieve score for peer %v: %w", id, err)
	}
-	if score > p.minScore {
+	if score >= p.minScore {
		return nil
	}
	if p.manager.IsStatic(id) {
op-node/p2p/sync.go

@@ -368,6 +368,7 @@ func (s *SyncClient) onRangeRequest(ctx context.Context, req rangeRequest) {
		}
		if _, ok := s.inFlight[num]; ok {
+			log.Debug("request still in-flight, not rescheduling sync request", "num", num)
			continue // request still in flight
		}
		pr := peerRequest{num: num, complete: new(atomic.Bool)}
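The new debug log documents the scheduler's in-flight bookkeeping: a block number is requested at most once until its request completes. A minimal, self-contained sketch of that idea follows; this is an assumption-based illustration, not op-node's actual scheduler:

package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	// inFlight maps a block number to a flag that flips to true once the
	// request for that block has finished (successfully or not).
	inFlight := map[uint64]*atomic.Bool{}

	schedule := func(num uint64) bool {
		if done, ok := inFlight[num]; ok && !done.Load() {
			fmt.Println("request still in-flight, not rescheduling sync request, num:", num)
			return false
		}
		inFlight[num] = new(atomic.Bool)
		return true
	}

	fmt.Println(schedule(25)) // true: first request is scheduled
	fmt.Println(schedule(25)) // false: still in flight, skipped
	inFlight[25].Store(true)  // mark complete, as the peer loop would on finish/failure
	fmt.Println(schedule(25)) // true: can be re-requested now
}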
op-node/p2p/sync_test.go

@@ -3,7 +3,9 @@ package p2p

import (
	"context"
	"math/big"
+	"sync"
	"testing"
+	"time"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"

@@ -29,7 +31,42 @@ func (fn mockPayloadFn) PayloadByNumber(_ context.Context, number uint64) (*eth.
var _ L2Chain = mockPayloadFn(nil)

-func setupSyncTestData(length uint64) (*rollup.Config, map[uint64]*eth.ExecutionPayload, func(i uint64) eth.L2BlockRef) {
+type syncTestData struct {
+	sync.RWMutex
+	payloads map[uint64]*eth.ExecutionPayload
+}
+
+func (s *syncTestData) getPayload(i uint64) (payload *eth.ExecutionPayload, ok bool) {
+	s.RLock()
+	defer s.RUnlock()
+	payload, ok = s.payloads[i]
+	return payload, ok
+}
+
+func (s *syncTestData) deletePayload(i uint64) {
+	s.Lock()
+	defer s.Unlock()
+	delete(s.payloads, i)
+}
+
+func (s *syncTestData) addPayload(payload *eth.ExecutionPayload) {
+	s.Lock()
+	defer s.Unlock()
+	s.payloads[uint64(payload.BlockNumber)] = payload
+}
+
+func (s *syncTestData) getBlockRef(i uint64) eth.L2BlockRef {
+	s.RLock()
+	defer s.RUnlock()
+	return eth.L2BlockRef{
+		Hash:       s.payloads[i].BlockHash,
+		Number:     uint64(s.payloads[i].BlockNumber),
+		ParentHash: s.payloads[i].ParentHash,
+		Time:       uint64(s.payloads[i].Timestamp),
+	}
+}
+
+func setupSyncTestData(length uint64) (*rollup.Config, *syncTestData) {
	// minimal rollup config to build mock blocks & verify their time.
	cfg := &rollup.Config{
		Genesis: rollup.Genesis{

@@ -57,15 +94,7 @@ func setupSyncTestData(length uint64) (*rollup.Config, map[uint64]*eth.Execution
		payloads[i] = payload
	}

-	l2Ref := func(i uint64) eth.L2BlockRef {
-		return eth.L2BlockRef{
-			Hash:       payloads[i].BlockHash,
-			Number:     uint64(payloads[i].BlockNumber),
-			ParentHash: payloads[i].ParentHash,
-			Time:       uint64(payloads[i].Timestamp),
-		}
-	}
-
-	return cfg, payloads, l2Ref
+	return cfg, &syncTestData{payloads: payloads}
}

func TestSinglePeerSync(t *testing.T) {

@@ -73,11 +102,11 @@ func TestSinglePeerSync(t *testing.T) {
	log := testlog.Logger(t, log.LvlError)

-	cfg, payloads, l2Ref := setupSyncTestData(25)
+	cfg, payloads := setupSyncTestData(25)

	// Serving payloads: just load them from the map, if they exist
	servePayload := mockPayloadFn(func(n uint64) (*eth.ExecutionPayload, error) {
-		p, ok := payloads[n]
+		p, ok := payloads.getPayload(n)
		if !ok {
			return nil, ethereum.NotFound
		}

@@ -116,13 +145,13 @@ func TestSinglePeerSync(t *testing.T) {
	defer cl.Close()

	// request to start syncing between 10 and 20
-	require.NoError(t, cl.RequestL2Range(ctx, l2Ref(10), l2Ref(20)))
+	require.NoError(t, cl.RequestL2Range(ctx, payloads.getBlockRef(10), payloads.getBlockRef(20)))

	// and wait for the sync results to come in (in reverse order)
	for i := uint64(19); i > 10; i-- {
		p := <-received
		require.Equal(t, uint64(p.BlockNumber), i, "expecting payloads in order")
-		exp, ok := payloads[uint64(p.BlockNumber)]
+		exp, ok := payloads.getPayload(uint64(p.BlockNumber))
		require.True(t, ok, "expecting known payload")
		require.Equal(t, exp.BlockHash, p.BlockHash, "expecting the correct payload")
	}

@@ -131,14 +160,14 @@ func TestSinglePeerSync(t *testing.T) {
func TestMultiPeerSync(t *testing.T) {
	t.Parallel() // Takes a while, but can run in parallel

-	log := testlog.Logger(t, log.LvlError)
+	log := testlog.Logger(t, log.LvlDebug)

-	cfg, payloads, l2Ref := setupSyncTestData(100)
+	cfg, payloads := setupSyncTestData(100)

	setupPeer := func(ctx context.Context, h host.Host) (*SyncClient, chan *eth.ExecutionPayload) {
		// Serving payloads: just load them from the map, if they exist
		servePayload := mockPayloadFn(func(n uint64) (*eth.ExecutionPayload, error) {
-			p, ok := payloads[n]
+			p, ok := payloads.getPayload(n)
			if !ok {
				return nil, ethereum.NotFound
			}

@@ -190,23 +219,25 @@ func TestMultiPeerSync(t *testing.T) {
	clC.Start()
	defer clC.Close()

-	// request to start syncing between 10 and 100
-	require.NoError(t, clA.RequestL2Range(ctx, l2Ref(10), l2Ref(90)))
+	// request to start syncing between 10 and 90
+	require.NoError(t, clA.RequestL2Range(ctx, payloads.getBlockRef(10), payloads.getBlockRef(90)))

	// With such large range to request we are going to hit the rate-limits of B and C,
	// but that means we'll balance the work between the peers.
-	p := <-recvA
-	exp, ok := payloads[uint64(p.BlockNumber)]
-	require.True(t, ok, "expecting known payload")
-	require.Equal(t, exp.BlockHash, p.BlockHash, "expecting the correct payload")
+	for i := uint64(89); i > 10; i-- { // wait for all payloads
+		p := <-recvA
+		exp, ok := payloads.getPayload(uint64(p.BlockNumber))
+		require.True(t, ok, "expecting known payload")
+		require.Equal(t, exp.BlockHash, p.BlockHash, "expecting the correct payload")
+	}

	// now see if B can sync a range, and fill the gap with a re-request
-	bl25 := payloads[25] // temporarily remove it from the available payloads. This will create a gap
-	delete(payloads, uint64(25))
-	require.NoError(t, clB.RequestL2Range(ctx, l2Ref(20), l2Ref(30)))
+	bl25, _ := payloads.getPayload(25) // temporarily remove it from the available payloads. This will create a gap
+	payloads.deletePayload(25)
+	require.NoError(t, clB.RequestL2Range(ctx, payloads.getBlockRef(20), payloads.getBlockRef(30)))
	for i := uint64(29); i > 25; i-- {
		p := <-recvB
-		exp, ok := payloads[uint64(p.BlockNumber)]
+		exp, ok := payloads.getPayload(uint64(p.BlockNumber))
		require.True(t, ok, "expecting known payload")
		require.Equal(t, exp.BlockHash, p.BlockHash, "expecting the correct payload")
	}

@@ -215,13 +246,19 @@ func TestMultiPeerSync(t *testing.T) {
	// client: WARN failed p2p sync request num=25 err="peer failed to serve request with code 1"
	require.Zero(t, len(recvB), "there is a gap, should not see other payloads yet")
	// Add back the block
-	payloads[25] = bl25
+	payloads.addPayload(bl25)
+	// race-condition fix: the request for 25 is expected to error, but is marked as complete in the peer-loop.
+	// But the re-request checks the status in the main loop, and it may thus look like it's still in-flight,
+	// and thus not run the new request.
+	// Wait till the failed request is recognized as marked as done, so the re-request actually runs.
+	for !clB.inFlight[25].Load() {
+		time.Sleep(time.Second)
+	}
	// And request a range again, 25 is there now, and 21-24 should follow quickly (some may already have been fetched and wait in quarantine)
-	require.NoError(t, clB.RequestL2Range(ctx, l2Ref(20), l2Ref(26)))
+	require.NoError(t, clB.RequestL2Range(ctx, payloads.getBlockRef(20), payloads.getBlockRef(26)))
	for i := uint64(25); i > 20; i-- {
		p := <-recvB
-		exp, ok := payloads[uint64(p.BlockNumber)]
+		exp, ok := payloads.getPayload(uint64(p.BlockNumber))
		require.True(t, ok, "expecting known payload")
		require.Equal(t, exp.BlockHash, p.BlockHash, "expecting the correct payload")
	}
op-proposer/proposer/l2_output_submitter.go

@@ -187,7 +187,7 @@ func NewL2OutputSubmitter(cfg Config, l log.Logger, m metrics.Metricer) (*L2Outp
	l2ooContract, err := bindings.NewL2OutputOracleCaller(cfg.L2OutputOracleAddr, cfg.L1Client)
	if err != nil {
		cancel()
-		return nil, err
+		return nil, fmt.Errorf("failed to create L2OO at address %s: %w", cfg.L2OutputOracleAddr, err)
	}

	cCtx, cCancel := context.WithTimeout(ctx, cfg.NetworkTimeout)
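This change wraps the underlying error with %w instead of returning it bare, so the L2OutputOracle address is carried in the message while errors.Is and errors.As still reach the cause. A small sketch of why that matters; the underlying error and address below are hypothetical:

package main

import (
	"errors"
	"fmt"
)

// errNoCode stands in for whatever error the binding constructor might return.
var errNoCode = errors.New("no contract code at given address")

func main() {
	wrapped := fmt.Errorf("failed to create L2OO at address %s: %w", "0x1234...", errNoCode)
	fmt.Println(wrapped)                       // message keeps the address context
	fmt.Println(errors.Is(wrapped, errNoCode)) // true: the cause is still detectable
}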
op-wheel/cheat/cheat.go

@@ -14,6 +14,7 @@ import (
	"github.com/ethereum-optimism/optimism/op-node/eth"
	"github.com/ethereum/go-ethereum/common"
+	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/consensus/beacon"
	"github.com/ethereum/go-ethereum/consensus/ethash"
	"github.com/ethereum/go-ethereum/core"

@@ -379,6 +380,13 @@ func SetBalance(addr common.Address, amount *big.Int) HeadFn {
	}
}

+func SetCode(addr common.Address, code hexutil.Bytes) HeadFn {
+	return func(headState *state.StateDB) error {
+		headState.SetCode(addr, code)
+		return nil
+	}
+}
+
func SetNonce(addr common.Address, nonce uint64) HeadFn {
	return func(headState *state.StateDB) error {
		headState.SetNonce(addr, nonce)
op-wheel/commands.go

@@ -12,6 +12,7 @@ import (
	"time"

	"github.com/ethereum/go-ethereum/common"
+	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/rpc"

@@ -179,6 +180,10 @@ func addrFlag(name string, usage string) cli.GenericFlag {
	return textFlag[*common.Address](name, usage, new(common.Address))
}

+func bytesFlag(name string, usage string) cli.GenericFlag {
+	return textFlag[*hexutil.Bytes](name, usage, new(hexutil.Bytes))
+}
+
func hashFlag(name string, usage string) cli.GenericFlag {
	return textFlag[*common.Hash](name, usage, new(common.Hash))
}

@@ -191,6 +196,10 @@ func addrFlagValue(name string, ctx *cli.Context) common.Address {
	return *ctx.Generic(name).(*TextFlag[*common.Address]).Value
}

+func bytesFlagValue(name string, ctx *cli.Context) hexutil.Bytes {
+	return *ctx.Generic(name).(*TextFlag[*hexutil.Bytes]).Value
+}
+
func hashFlagValue(name string, ctx *cli.Context) common.Hash {
	return *ctx.Generic(name).(*TextFlag[*common.Hash]).Value
}

@@ -271,6 +280,17 @@ var (
			return ch.RunAndClose(cheat.SetBalance(addrFlagValue("address", ctx), bigFlagValue("balance", ctx)))
		}),
	}
+	CheatSetCodeCmd = cli.Command{
+		Name: "code",
+		Flags: []cli.Flag{
+			DataDirFlag,
+			addrFlag("address", "Address to change code of"),
+			bytesFlag("code", "New code of the account"),
+		},
+		Action: CheatAction(false, func(ctx *cli.Context, ch *cheat.Cheater) error {
+			return ch.RunAndClose(cheat.SetCode(addrFlagValue("address", ctx), bytesFlagValue("code", ctx)))
+		}),
+	}
	CheatSetNonceCmd = cli.Command{
		Name: "nonce",
		Flags: []cli.Flag{

@@ -440,6 +460,7 @@ var CheatCmd = cli.Command{
	Subcommands: []cli.Command{
		CheatStorageCmd,
		CheatSetBalanceCmd,
+		CheatSetCodeCmd,
		CheatSetNonceCmd,
		CheatOvmOwnersCmd,
		CheatPrintHeadBlock,
ops/docker/ci-builder/Dockerfile

@@ -97,3 +97,11 @@ RUN echo "downloading and verifying Codecov uploader" && \
	cp codecov /usr/local/bin/codecov && \
	chmod +x /usr/local/bin/codecov && \
	rm codecov
+
+RUN echo "downloading mockery tool" && \
+	mkdir -p mockery-tmp-dir && \
+	curl -o mockery-tmp-dir/mockery.tar.gz -sL https://github.com/vektra/mockery/releases/download/v2.28.1/mockery_2.28.1_Linux_x86_64.tar.gz && \
+	tar -xzvf mockery-tmp-dir/mockery.tar.gz -C mockery-tmp-dir && \
+	cp mockery-tmp-dir/mockery /usr/local/bin/mockery && \
+	chmod +x /usr/local/bin/mockery && \
+	rm -rf mockery-tmp-dir
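mockery is a Go mock generator, so installing it in the CI builder image suggests mocks are generated during builds. A hypothetical usage sketch; the interface name, package placement, and output directory are assumptions, not taken from this commit:

package challenger

// Hypothetical example: with mockery on PATH (as installed in the CI image),
// running `go generate ./...` would write a mock implementation of the
// OutputAPI interface under ./mocks.
//go:generate mockery --name OutputAPI --output mocks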
packages/contracts-bedrock/contracts/test/invariants/Encoding.t.sol (new file, 0 → 100644)
// SPDX-License-Identifier: MIT
pragma solidity 0.8.15;
import { Test } from "forge-std/Test.sol";
import { StdInvariant } from "forge-std/StdInvariant.sol";
import { Encoding } from "../../libraries/Encoding.sol";
contract Encoding_Converter {
bool public failedRoundtripAToB;
bool public failedRoundtripBToA;
/**
* @notice Takes a pair of integers to be encoded into a versioned nonce with the
* Encoding library and then decoded and updates the test contract's state
* indicating if the round trip encoding failed.
*/
function convertRoundTripAToB(uint240 _nonce, uint16 _version) external {
// Encode the nonce and version
uint256 encodedVersionedNonce = Encoding.encodeVersionedNonce(_nonce, _version);
// Decode the nonce and version
uint240 decodedNonce;
uint16 decodedVersion;
(decodedNonce, decodedVersion) = Encoding.decodeVersionedNonce(encodedVersionedNonce);
// If our round trip encoding did not return the original result, set our state.
if ((decodedNonce != _nonce) || (decodedVersion != _version)) {
failedRoundtripAToB = true;
}
}
/**
* @notice Takes an integer representing a packed version and nonce and attempts
* to decode them using the Encoding library before re-encoding and updates
* the test contract's state indicating if the round trip encoding failed.
*/
function convertRoundTripBToA(uint256 _versionedNonce) external {
// Decode the nonce and version
uint240 decodedNonce;
uint16 decodedVersion;
(decodedNonce, decodedVersion) = Encoding.decodeVersionedNonce(_versionedNonce);
// Encode the nonce and version
uint256 encodedVersionedNonce = Encoding.encodeVersionedNonce(decodedNonce, decodedVersion);
// If our round trip encoding did not return the original result, set our state.
if (encodedVersionedNonce != _versionedNonce) {
failedRoundtripBToA = true;
}
}
}
contract Encoding_Invariant is StdInvariant, Test {
Encoding_Converter internal actor;
function setUp() public {
// Create a converter actor.
actor = new Encoding_Converter();
targetContract(address(actor));
bytes4[] memory selectors = new bytes4[](2);
selectors[0] = actor.convertRoundTripAToB.selector;
selectors[1] = actor.convertRoundTripBToA.selector;
FuzzSelector memory selector = FuzzSelector({ addr: address(actor), selectors: selectors });
targetSelector(selector);
}
/**
* @custom:invariant `convertRoundTripAToB` never fails.
*
* Asserts that a raw versioned nonce can be encoded / decoded to reach the same raw value.
*/
function invariant_round_trip_encoding_AToB() external {
// ASSERTION: The round trip encoding done in testRoundTripAToB(...)
assertEq(actor.failedRoundtripAToB(), false);
}
/**
* @custom:invariant `convertRoundTripBToA` never fails.
*
* Asserts that an encoded versioned nonce can always be decoded / re-encoded to reach
* the same encoded value.
*/
function invariant_round_trip_encoding_BToA() external {
// ASSERTION: The round trip encoding done in testRoundTripBToA should never
// fail.
assertEq(actor.failedRoundtripBToA(), false);
}
}
packages/contracts-bedrock/contracts/test/invariants/Hashing.t.sol (new file, 0 → 100644)
// SPDX-License-Identifier: MIT
pragma solidity 0.8.15;
import { Test } from "forge-std/Test.sol";
import { StdInvariant } from "forge-std/StdInvariant.sol";
import { Encoding } from "../../libraries/Encoding.sol";
import { Hashing } from "../../libraries/Hashing.sol";
contract Hash_CrossDomainHasher {
bool public failedCrossDomainHashHighVersion;
bool public failedCrossDomainHashV0;
bool public failedCrossDomainHashV1;
/**
* @notice Takes the necessary parameters to perform a cross domain hash with a randomly
* generated version. Only schema versions 0 and 1 are supported and all others should revert.
*/
function hashCrossDomainMessageHighVersion(
uint16 _version,
uint240 _nonce,
address _sender,
address _target,
uint256 _value,
uint256 _gasLimit,
bytes memory _data
) external {
// generate the versioned nonce
uint256 encodedNonce = Encoding.encodeVersionedNonce(_nonce, _version);
// hash the cross domain message. we don't need to store the result since the function
// validates and should revert if an invalid version (>1) is encoded
Hashing.hashCrossDomainMessage(encodedNonce, _sender, _target, _value, _gasLimit, _data);
// check that execution never makes it this far for an invalid version
if (_version > 1) {
failedCrossDomainHashHighVersion = true;
}
}
/**
* @notice Takes the necessary parameters to perform a cross domain hash using the v0 schema
* and compares the output of a call to the unversioned function to the v0 function directly
*/
function hashCrossDomainMessageV0(
uint240 _nonce,
address _sender,
address _target,
uint256 _value,
uint256 _gasLimit,
bytes memory _data
) external {
// generate the versioned nonce with the version set to 0
uint256 encodedNonce = Encoding.encodeVersionedNonce(_nonce, 0);
// hash the cross domain message using the unversioned and versioned functions for
// comparison
bytes32 sampleHash1 = Hashing.hashCrossDomainMessage(
encodedNonce,
_sender,
_target,
_value,
_gasLimit,
_data
);
bytes32 sampleHash2 = Hashing.hashCrossDomainMessageV0(
_target,
_sender,
_data,
encodedNonce
);
// check that the output of both functions matches
if (sampleHash1 != sampleHash2) {
failedCrossDomainHashV0 = true;
}
}
/**
* @notice Takes the necessary parameters to perform a cross domain hash using the v1 schema
* and compares the output of a call to the unversioned function to the v1 function directly
*/
function hashCrossDomainMessageV1(
uint240 _nonce,
address _sender,
address _target,
uint256 _value,
uint256 _gasLimit,
bytes memory _data
) external {
// generate the versioned nonce with the version set to 1
uint256 encodedNonce = Encoding.encodeVersionedNonce(_nonce, 1);
// hash the cross domain message using the unversioned and versioned functions for
// comparison
bytes32 sampleHash1 = Hashing.hashCrossDomainMessage(
encodedNonce,
_sender,
_target,
_value,
_gasLimit,
_data
);
bytes32 sampleHash2 = Hashing.hashCrossDomainMessageV1(
encodedNonce,
_sender,
_target,
_value,
_gasLimit,
_data
);
// check that the output of both functions matches
if (sampleHash1 != sampleHash2) {
failedCrossDomainHashV1 = true;
}
}
}
contract Hashing_Invariant is StdInvariant, Test {
Hash_CrossDomainHasher internal actor;
function setUp() public {
// Create a hasher actor.
actor = new Hash_CrossDomainHasher();
targetContract(address(actor));
bytes4[] memory selectors = new bytes4[](3);
selectors[0] = actor.hashCrossDomainMessageHighVersion.selector;
selectors[1] = actor.hashCrossDomainMessageV0.selector;
selectors[2] = actor.hashCrossDomainMessageV1.selector;
FuzzSelector memory selector = FuzzSelector({ addr: address(actor), selectors: selectors });
targetSelector(selector);
}
/**
* @custom:invariant `hashCrossDomainMessage` reverts if `version` is > `1`.
*
* The `hashCrossDomainMessage` function should always revert if the `version` passed is > `1`.
*/
function invariant_hash_xdomain_msg_high_version() external {
// ASSERTION: The round trip aliasing done in testRoundTrip(...) should never fail.
assertFalse(actor.failedCrossDomainHashHighVersion());
}
/**
* @custom:invariant `version` = `0`: `hashCrossDomainMessage` and `hashCrossDomainMessageV0`
* are equivalent.
*
* If the version passed is 0, `hashCrossDomainMessage` and `hashCrossDomainMessageV0` should be
* equivalent.
*/
function invariant_hash_xdomain_msg_0() external {
// ASSERTION: A call to hashCrossDomainMessage and hashCrossDomainMessageV0
// should always match when the version passed is 0
assertFalse(actor.failedCrossDomainHashV0());
}
/**
* @custom:invariant `version` = `1`: `hashCrossDomainMessage` and `hashCrossDomainMessageV1`
* are equivalent.
*
* If the version passed is 1, `hashCrossDomainMessage` and `hashCrossDomainMessageV1` should be
* equivalent.
*/
function invariant_hash_xdomain_msg_1() external {
// ASSERTION: A call to hashCrossDomainMessage and hashCrossDomainMessageV1
// should always match when the version passed is 1
assertFalse(actor.failedCrossDomainHashV1());
}
}
packages/contracts-bedrock/deployments/mainnet/Lib_AddressManager.json → packages/contracts-bedrock/deployments/mainnet/AddressManager.json

File moved (no content changes).
packages/contracts-bedrock/invariant-docs/Encoding.md

# `Encoding` Invariants

## `convertRoundTripAToB` never fails.
**Test:** [`Encoding.t.sol#L76`](../contracts/test/invariants/Encoding.t.sol#L76)

Asserts that a raw versioned nonce can be encoded / decoded to reach the same raw value.

## `convertRoundTripBToA` never fails.
**Test:** [`Encoding.t.sol#L87`](../contracts/test/invariants/Encoding.t.sol#L87)

Asserts that an encoded versioned nonce can always be decoded / re-encoded to reach the same encoded value.

## `testRoundTripAToB` never fails.
**Test:** [`FuzzEncoding.sol#L56`](../contracts/echidna/FuzzEncoding.sol#L56)

...
packages/contracts-bedrock/invariant-docs/Hashing.md

# `Hashing` Invariants

## `hashCrossDomainMessage` reverts if `version` is > `1`.
**Test:** [`Hashing.t.sol#L141`](../contracts/test/invariants/Hashing.t.sol#L141)

The `hashCrossDomainMessage` function should always revert if the `version` passed is > `1`.

## `version` = `0`: `hashCrossDomainMessage` and `hashCrossDomainMessageV0` are equivalent.
**Test:** [`Hashing.t.sol#L153`](../contracts/test/invariants/Hashing.t.sol#L153)

If the version passed is 0, `hashCrossDomainMessage` and `hashCrossDomainMessageV0` should be equivalent.

## `version` = `1`: `hashCrossDomainMessage` and `hashCrossDomainMessageV1` are equivalent.
**Test:** [`Hashing.t.sol#L166`](../contracts/test/invariants/Hashing.t.sol#L166)

If the version passed is 1, `hashCrossDomainMessage` and `hashCrossDomainMessageV1` should be equivalent.

## `hashCrossDomainMessage` reverts if `version` is > `1`.
**Test:** [`FuzzHashing.sol#L120`](../contracts/echidna/FuzzHashing.sol#L120)

...
packages/contracts-bedrock/scripts/eoa-migration.sh (new file, 0 → 100755)

#!/bin/sh

# RPC endpoint for the migration rehearsal network
export ETH_RPC_URL="https://mainnet-l1-rehearsal.optimism.io/"
# export ETH_RPC_URL="localhost:8545"

# Default HH key
HH_KEY="0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"
# Default HH addr
HH_ADDR="0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"
# SystemDictator contract (deployed by Maurelian on 2023-05-09)
MSD="0x49149a233de6E4cD6835971506F47EE5862289c1"
# ProxyAdmin contract
PROXY_ADMIN="0x43cA9bAe8dF108684E5EAaA720C25e1b32B0A075"
# AddressManager contract
ADDRESS_MANAGER="0xdE1FCfB0851916CA5101820A69b13a4E276bd81F"
# ResolvedDelegateProxy contract
RESOLVED_DELEGATE_PROXY="0x25ace71c97B33Cc4729CF772ae268934F7ab5fA1"
# L1ChugSplashProxy contract
# use `setOwner(address)` for this one
L1_CHUG_SPLASH_PROXY="0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1"
# Proxy contract
# use `changeAdmin(address)` for this one
PROXY="0x5a7749f83b81B301cAb5f48EB8516B986DAef23D"
# OptimismPortal proxy
PORTAL_PROXY="0x59C4e2c6a6dC27c259D6d067a039c831e1ff4947"

# Check existing owners (should all be $HH_ADDR)
# cast call $PROXY_ADMIN "owner()(address)"
# cast call $ADDRESS_MANAGER "owner()(address)"
# cast admin $L1_CHUG_SPLASH_PROXY
# cast admin $PROXY

# ---
# Transfer ownership
# ---
# cast send --private-key $HH_KEY $PROXY_ADMIN "transferOwnership(address)" $MSD
# cast send --private-key $HH_KEY $ADDRESS_MANAGER "transferOwnership(address)" $MSD
# cast send --private-key $HH_KEY $L1_CHUG_SPLASH_PROXY "setOwner(address)" $MSD
# cast send --private-key $HH_KEY $PROXY "changeAdmin(address)" $MSD

# ---
# Execute Phase 1
# ---
# cast send --private-key $HH_KEY $MSD "phase1()"

# updateDynamicConfig signature
SIG="updateDynamicConfig((uint256,uint256),bool)"
# Encode calldata
CALLDATA=$(cast abi-encode $SIG "(17377105,1685641931)" true)
# Grab the selector
SELECTOR=$(cast sig $SIG)
# Prepare full payload
PAYLOAD=$(cast --concat-hex $SELECTOR $CALLDATA)
# Sanity check calldata
# cast pretty-calldata $PAYLOAD

# ---
# Update dynamic config
# ---
# cast send --private-key $HH_KEY $MSD $PAYLOAD

# ---
# !!!POINT OF NO RETURN!!!
# Execute phase 2
# ---
# cast send --private-key $HH_KEY $MSD "phase2()"

# ---
# Unpause the portal
# ---
# cast send --private-key $HH_KEY $PORTAL_PROXY "unpause()"

# ---
# Unpause Portal with CallForwarder
# ---
# Fetch portal guardian
# cast call $PORTAL_PROXY "GUARDIAN()(address)"
SIG="forward(address,bytes)"
FORWARD_SIG=$(cast sig $SIG)
CALLDATA=$(cast abi-encode $SIG $PORTAL_PROXY $(cast sig "unpause()"))
PAYLOAD=$(cast --concat-hex $FORWARD_SIG $CALLDATA)
# Sanity check calldata
# cast pretty-calldata $PAYLOAD
# Send unpause tx from multisig
# cast send "0x9BA6e03D8B90dE867373Db8cF1A58d2F7F006b3A" $PAYLOAD --private-key $HH_KEY
# Check if paused
# cast call $PORTAL_PROXY "paused()(bool)"
# Check bytecode of multisig
# cast code "0x9BA6e03D8B90dE867373Db8cF1A58d2F7F006b3A" --rpc-url https://mainnet-l1-rehearsal.optimism.io
packages/sdk/src/utils/chain-constants.ts

@@ -3,6 +3,20 @@ import {
  getDeployedContractDefinition,
} from '@eth-optimism/contracts'
import { predeploys as bedrockPredeploys } from '@eth-optimism/contracts-bedrock'
+import portalArtifactsMainnet from '@eth-optimism/contracts-bedrock/deployments/mainnet/OptimismPortalProxy.json'
+import portalArtifactsGoerli from '@eth-optimism/contracts-bedrock/deployments/goerli/OptimismPortalProxy.json'
+import l2OutputOracleArtifactsMainnet from '@eth-optimism/contracts-bedrock/deployments/mainnet/L2OutputOracleProxy.json'
+import l2OutputOracleArtifactsGoerli from '@eth-optimism/contracts-bedrock/deployments/goerli/L2OutputOracleProxy.json'
+
+const portalAddresses = {
+  mainnet: portalArtifactsMainnet,
+  goerli: portalArtifactsGoerli,
+}
+
+const l2OutputOracleAddresses = {
+  mainnet: l2OutputOracleArtifactsMainnet,
+  goerli: l2OutputOracleArtifactsGoerli,
+}

import {
  L1ChainID,

@@ -64,6 +78,7 @@ export const DEFAULT_L2_CONTRACT_ADDRESSES: OEL2ContractsLike = {
 * @returns The L1 contracts for the given network.
 */
const getL1ContractsByNetworkName = (network: string): OEL1ContractsLike => {
+  // TODO this doesn't code split and makes the sdk artifacts way too big
  const getDeployedAddress = (name: string) => {
    return getDeployedContractDefinition(name, network).address
  }

@@ -77,8 +92,8 @@ const getL1ContractsByNetworkName = (network: string): OEL1ContractsLike => {
    StateCommitmentChain: getDeployedAddress('StateCommitmentChain'),
    CanonicalTransactionChain: getDeployedAddress('CanonicalTransactionChain'),
    BondManager: getDeployedAddress('BondManager'),
-    OptimismPortal: '0x5b47E1A08Ea6d985D6649300584e6722Ec4B1383' as const,
-    L2OutputOracle: '0xE6Dfba0953616Bacab0c9A8ecb3a9BBa77FC15c0' as const,
+    OptimismPortal: portalAddresses[network].address,
+    L2OutputOracle: l2OutputOracleAddresses[network].address,
  }
}

@@ -109,6 +124,7 @@ export const CONTRACT_ADDRESSES: {
    CanonicalTransactionChain: '0xCf7Ed3AccA5a467e9e704C703E8D87F634fB0Fc9' as const,
    BondManager: '0x5FC8d32690cc91D4c39d9d3abcBD16989F875707' as const,
+    // FIXME
    OptimismPortal: '0x0000000000000000000000000000000000000000' as const,
    L2OutputOracle: '0x0000000000000000000000000000000000000000' as const,
  },
proxyd/integration_tests/consensus_test.go

@@ -8,6 +8,7 @@ import (
	"os"
	"path"
	"testing"
+	"time"

	"github.com/ethereum/go-ethereum/common/hexutil"

@@ -630,6 +631,55 @@ func TestConsensus(t *testing.T) {
		require.Equal(t, len(nodes["node2"].mockBackend.Requests()), 0, msg)
	})

+	t.Run("load balancing should not hit if node is degraded", func(t *testing.T) {
+		reset()
+		useOnlyNode1()
+
+		// replace node1 handler with one that adds a 500ms delay
+		oldHandler := nodes["node1"].mockBackend.handler
+		defer func() { nodes["node1"].mockBackend.handler = oldHandler }()
+
+		nodes["node1"].mockBackend.SetHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			time.Sleep(500 * time.Millisecond)
+			oldHandler.ServeHTTP(w, r)
+		}))
+
+		update()
+
+		// send 10 requests to make node1 degraded
+		numberReqs := 10
+		for numberReqs > 0 {
+			_, statusCode, err := client.SendRPC("eth_getBlockByNumber", []interface{}{"0x101", false})
+			require.NoError(t, err)
+			require.Equal(t, 200, statusCode)
+			numberReqs--
+		}
+
+		// bring back node2
+		nodes["node2"].handler.ResetOverrides()
+		update()
+
+		// reset request counts
+		nodes["node1"].mockBackend.Reset()
+		nodes["node2"].mockBackend.Reset()
+
+		require.Equal(t, 0, len(nodes["node1"].mockBackend.Requests()))
+		require.Equal(t, 0, len(nodes["node2"].mockBackend.Requests()))
+
+		numberReqs = 10
+		for numberReqs > 0 {
+			_, statusCode, err := client.SendRPC("eth_getBlockByNumber", []interface{}{"0x101", false})
+			require.NoError(t, err)
+			require.Equal(t, 200, statusCode)
+			numberReqs--
+		}
+
+		msg := fmt.Sprintf("n1 %d, n2 %d", len(nodes["node1"].mockBackend.Requests()), len(nodes["node2"].mockBackend.Requests()))
+		require.Equal(t, 0, len(nodes["node1"].mockBackend.Requests()), msg)
+		require.Equal(t, 10, len(nodes["node2"].mockBackend.Requests()), msg)
+	})
+
	t.Run("rewrite response of eth_blockNumber", func(t *testing.T) {
		reset()
		update()
proxyd/integration_tests/testdata/consensus.toml

@@ -3,6 +3,7 @@ rpc_port = 8545

[backend]
response_timeout_seconds = 1
+max_degraded_latency_threshold = "30ms"

[backends]

[backends.node1]
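The new max_degraded_latency_threshold setting feeds the degraded-backend behavior exercised by the test above and exported by the metrics change below. A minimal sketch of how such a threshold could be applied to a sliding window of latencies; this is an assumption-based illustration, not proxyd's actual implementation:

package main

import (
	"fmt"
	"time"
)

// backend tracks recent request latencies and is flagged as degraded once the
// sliding-window average exceeds the configured threshold.
type backend struct {
	name      string
	threshold time.Duration   // parsed from max_degraded_latency_threshold
	samples   []time.Duration // recent request latencies (the "sliding window")
}

func (b *backend) avgLatency() time.Duration {
	if len(b.samples) == 0 {
		return 0
	}
	var total time.Duration
	for _, s := range b.samples {
		total += s
	}
	return total / time.Duration(len(b.samples))
}

func (b *backend) isDegraded() bool {
	return b.avgLatency() > b.threshold
}

func main() {
	threshold, _ := time.ParseDuration("30ms")
	b := &backend{name: "node1", threshold: threshold}
	b.samples = append(b.samples, 500*time.Millisecond) // e.g. the delayed handler in the test above
	fmt.Println(b.name, "degraded:", b.isDegraded())    // true
}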
proxyd/metrics.go

@@ -358,6 +358,14 @@ var (
		"backend_name",
	})

+	degradedBackends = promauto.NewGaugeVec(prometheus.GaugeOpts{
+		Namespace: MetricsNamespace,
+		Name:      "backend_degraded",
+		Help:      "Bool gauge for degraded backends",
+	}, []string{
+		"backend_name",
+	})
+
	networkErrorRateBackend = promauto.NewGaugeVec(prometheus.GaugeOpts{
		Namespace: MetricsNamespace,
		Name:      "backend_error_rate",

@@ -493,6 +501,7 @@ func RecordConsensusBackendUpdateDelay(b *Backend, lastUpdate time.Time) {
func RecordBackendNetworkLatencyAverageSlidingWindow(b *Backend, avgLatency time.Duration) {
	avgLatencyBackend.WithLabelValues(b.Name).Set(float64(avgLatency.Milliseconds()))
+	degradedBackends.WithLabelValues(b.Name).Set(boolToFloat64(b.IsDegraded()))
}

func RecordBackendNetworkErrorRateSlidingWindow(b *Backend, rate float64) {
specs/derivation.md

@@ -107,7 +107,7 @@ To derive the L2 blocks in an epoch `E`, we need the following inputs:
  is the sequencing window size (note that this means that epochs are overlapping). In particular, we need:
- The [batcher transactions][g-batcher-transaction] included in the sequencing window. These allow us to
    reconstruct [sequencer batches][g-sequencer-batch] containing the transactions to include in L2 blocks (each batch
-    maps to a single L2 block).
+    contains a list of L2 blocks).
  - Note that it is impossible to have a batcher transaction containing a batch relative to epoch `E` on L1 block
    `E`, as the batch must contain the hash of L1 block `E`.
- The [deposits][g-deposits] made in L1 block `E` (in the form of events emitted by the [deposit