exchain / nebula · Commit 4f5cd706

Unverified commit, authored Oct 26, 2022 by mergify[bot]; committed via GitHub on Oct 26, 2022.

Merge branch 'develop' into feat/beta-1-artifacts

Parents: b41a4018, 26b1d32e
Showing 10 changed files with 180 additions and 72 deletions (+180 −72):

- op-node/rollup/derive/channel_bank.go (+28 −36)
- op-node/rollup/derive/channel_bank_test.go (+27 −22)
- op-node/rollup/derive/engine_queue.go (+17 −11)
- op-node/rollup/derive/frame_queue.go (+61 −0, new file)
- op-node/rollup/derive/pipeline.go (+2 −1)
- packages/contracts-bedrock/package.json (+1 −1)
- packages/contracts-bedrock/scripts/verify-foundry-install.ts (+0 −0, renamed)
- packages/migration-data/.gitignore (+1 −0)
- packages/migration-data/bin/cli.ts (+27 −0)
- specs/derivation.md (+16 −1)
op-node/rollup/derive/channel_bank.go

```diff
@@ -10,8 +10,8 @@ import (
 	"github.com/ethereum/go-ethereum/log"
 )
 
-type NextDataProvider interface {
-	NextData(ctx context.Context) ([]byte, error)
+type NextFrameProvider interface {
+	NextFrame(ctx context.Context) (Frame, error)
 	Origin() eth.L1BlockRef
 }
@@ -34,14 +34,14 @@ type ChannelBank struct {
 	channels     map[ChannelID]*Channel // channels by ID
 	channelQueue []ChannelID            // channels in FIFO order
 
-	prev    NextDataProvider
+	prev    NextFrameProvider
 	fetcher L1Fetcher
 }
 
 var _ ResetableStage = (*ChannelBank)(nil)
 
 // NewChannelBank creates a ChannelBank, which should be Reset(origin) before use.
-func NewChannelBank(log log.Logger, cfg *rollup.Config, prev NextDataProvider, fetcher L1Fetcher) *ChannelBank {
+func NewChannelBank(log log.Logger, cfg *rollup.Config, prev NextFrameProvider, fetcher L1Fetcher) *ChannelBank {
 	return &ChannelBank{
 		log: log,
 		cfg: cfg,
@@ -73,22 +73,12 @@ func (cb *ChannelBank) prune() {
 }
 
 // IngestData adds new L1 data to the channel bank.
 // Read() should be called repeatedly first, until everything has been read, before adding new data.
-func (cb *ChannelBank) IngestData(data []byte) {
+func (cb *ChannelBank) IngestFrame(f Frame) {
 	origin := cb.Origin()
-	cb.log.Debug("channel bank got new data", "origin", origin, "data_len", len(data))
+	log := log.New("origin", origin, "channel", f.ID, "length", len(f.Data), "frame_number", f.FrameNumber)
+	log.Debug("channel bank got new data")
 
-	// TODO: Why is the prune here?
-	cb.prune()
-
-	frames, err := ParseFrames(data)
-	if err != nil {
-		cb.log.Warn("malformed frame", "err", err)
-		return
-	}
-
-	// Process each frame
-	for _, f := range frames {
-		currentCh, ok := cb.channels[f.ID]
-		if !ok {
-			// create new channel if it doesn't exist yet
+	currentCh, ok := cb.channels[f.ID]
+	if !ok {
+		// create new channel if it doesn't exist yet
@@ -99,16 +89,18 @@ func (cb *ChannelBank) IngestData(data []byte) {
 
 	// check if the channel is not timed out
 	if currentCh.OpenBlockNumber()+cb.cfg.ChannelTimeout < origin.Number {
-		cb.log.Warn("channel is timed out, ignore frame", "channel", f.ID, "frame", f.FrameNumber)
-		continue
+		log.Warn("channel is timed out, ignore frame")
+		return
 	}
 
-	cb.log.Trace("ingesting frame", "channel", f.ID, "frame_number", f.FrameNumber, "length", len(f.Data))
+	log.Trace("ingesting frame")
 	if err := currentCh.AddFrame(f, origin); err != nil {
-		cb.log.Warn("failed to ingest frame into channel", "channel", f.ID, "frame_number", f.FrameNumber, "err", err)
-		continue
+		log.Warn("failed to ingest frame into channel", "err", err)
+		return
 	}
-	}
+
+	// Prune after the frame is loaded.
+	cb.prune()
 }
 
 // Read the raw data of the first channel, if it's timed-out or closed.
@@ -156,12 +148,12 @@ func (cb *ChannelBank) NextData(ctx context.Context) ([]byte, error) {
 	}
 
 	// Then load data into the channel bank
-	if data, err := cb.prev.NextData(ctx); err == io.EOF {
+	if frame, err := cb.prev.NextFrame(ctx); err == io.EOF {
 		return nil, io.EOF
 	} else if err != nil {
 		return nil, err
 	} else {
-		cb.IngestData(data)
+		cb.IngestFrame(frame)
 		return nil, NotEnoughData
 	}
 }
```
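Net effect of this file's change: the channel bank no longer receives raw L1 calldata and calls `ParseFrames` itself; it is handed one already-parsed `Frame` at a time by the stage before it, and prunes after each load. A minimal sketch of the new call pattern, assuming it compiles inside the `derive` package; the `ingestAll` helper is hypothetical and not part of the commit:

```go
package derive

// ingestAll is a hypothetical helper illustrating the new ingestion path:
// frames arrive pre-parsed, one at a time, and the bank prunes itself
// after each frame is loaded.
func ingestAll(cb *ChannelBank, frames []Frame) {
	for _, f := range frames {
		// Previously: cb.IngestData(rawCalldata), with ParseFrames called inside the bank.
		cb.IngestFrame(f)
	}
}
```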
op-node/rollup/derive/channel_bank_test.go

```diff
 package derive
 
 import (
-	"bytes"
 	"context"
-	"fmt"
 	"io"
 	"math/rand"
 	"strconv"
@@ -21,7 +19,7 @@ import (
 type fakeChannelBankInput struct {
 	origin eth.L1BlockRef
 	data   []struct {
-		data []byte
+		frame Frame
 		err  error
 	}
 }
@@ -30,34 +28,28 @@ func (f *fakeChannelBankInput) Origin() eth.L1BlockRef {
 	return f.origin
 }
 
-func (f *fakeChannelBankInput) NextData(_ context.Context) ([]byte, error) {
+func (f *fakeChannelBankInput) NextFrame(_ context.Context) (Frame, error) {
 	out := f.data[0]
 	f.data = f.data[1:]
-	return out.data, out.err
+	return out.frame, out.err
 }
 
-func (f *fakeChannelBankInput) AddOutput(data []byte, err error) {
+func (f *fakeChannelBankInput) AddFrame(frame Frame, err error) {
 	f.data = append(f.data, struct {
-		data []byte
+		frame Frame
 		err  error
-	}{data: data, err: err})
+	}{frame: frame, err: err})
 }
 
 // ExpectNextFrameData takes a set of test frame & turns into the raw data
 // for reading into the channel bank via `NextData`
 func (f *fakeChannelBankInput) AddFrames(frames ...testFrame) {
-	data := new(bytes.Buffer)
-	data.WriteByte(DerivationVersion0)
 	for _, frame := range frames {
-		ff := frame.ToFrame()
-		if err := ff.MarshalBinary(data); err != nil {
-			panic(fmt.Errorf("error in making frame during test: %w", err))
-		}
+		f.AddFrame(frame.ToFrame(), nil)
 	}
-	f.AddOutput(data.Bytes(), nil)
 }
 
-var _ NextDataProvider = (*fakeChannelBankInput)(nil)
+var _ NextFrameProvider = (*fakeChannelBankInput)(nil)
 
 // format: <channelID-data>:<frame-number>:<content><optional-last-frame-marker "!">
 // example: "abc:0:helloworld!"
@@ -105,17 +97,22 @@ func TestChannelBankSimple(t *testing.T) {
 	input := &fakeChannelBankInput{origin: a}
 	input.AddFrames("a:0:first", "a:2:third!")
 	input.AddFrames("a:1:second")
-	input.AddOutput(nil, io.EOF)
+	input.AddFrame(Frame{}, io.EOF)
 
 	cfg := &rollup.Config{ChannelTimeout: 10}
 
 	cb := NewChannelBank(testlog.Logger(t, log.LvlCrit), cfg, input, nil)
 
-	// Load the first + third frame
+	// Load the first frame
 	out, err := cb.NextData(context.Background())
 	require.ErrorIs(t, err, NotEnoughData)
 	require.Equal(t, []byte(nil), out)
 
+	// Load the third frame
+	out, err = cb.NextData(context.Background())
+	require.ErrorIs(t, err, NotEnoughData)
+	require.Equal(t, []byte(nil), out)
+
 	// Load the second frame
 	out, err = cb.NextData(context.Background())
 	require.ErrorIs(t, err, NotEnoughData)
@@ -140,21 +137,29 @@ func TestChannelBankDuplicates(t *testing.T) {
 	input.AddFrames("a:0:first", "a:2:third!")
 	input.AddFrames("a:0:altfirst", "a:2:altthird!")
 	input.AddFrames("a:1:second")
-	input.AddOutput(nil, io.EOF)
+	input.AddFrame(Frame{}, io.EOF)
 
 	cfg := &rollup.Config{ChannelTimeout: 10}
 
 	cb := NewChannelBank(testlog.Logger(t, log.LvlCrit), cfg, input, nil)
 
-	// Load the first + third frame
+	// Load the first frame
 	out, err := cb.NextData(context.Background())
 	require.ErrorIs(t, err, NotEnoughData)
 	require.Equal(t, []byte(nil), out)
 
+	// Load the third frame
+	out, err = cb.NextData(context.Background())
+	require.ErrorIs(t, err, NotEnoughData)
+	require.Equal(t, []byte(nil), out)
+
 	// Load the duplicate frames
 	out, err = cb.NextData(context.Background())
 	require.ErrorIs(t, err, NotEnoughData)
 	require.Equal(t, []byte(nil), out)
 
+	out, err = cb.NextData(context.Background())
+	require.ErrorIs(t, err, NotEnoughData)
+	require.Equal(t, []byte(nil), out)
+
 	// Load the second frame
 	out, err = cb.NextData(context.Background())
```
op-node/rollup/derive/engine_queue.go

```diff
@@ -9,7 +9,6 @@ import (
 	"github.com/ethereum/go-ethereum"
 	"github.com/ethereum/go-ethereum/common"
-	"github.com/ethereum/go-ethereum/common/hexutil"
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/log"
@@ -404,21 +403,28 @@ func (eq *EngineQueue) forceNextSafeAttributes(ctx context.Context) error {
 	case BlockInsertPrestateErr:
 		return NewResetError(fmt.Errorf("need reset to resolve pre-state problem: %w", err))
 	case BlockInsertPayloadErr:
-		eq.log.Warn("could not process payload derived from L1 data", "err", err)
-		// filter everything but the deposits
-		var deposits []hexutil.Bytes
+		eq.log.Warn("could not process payload derived from L1 data, dropping batch", "err", err)
+		// Count the number of deposits to see if the tx list is deposit only.
+		depositCount := 0
 		for _, tx := range attrs.Transactions {
 			if len(tx) > 0 && tx[0] == types.DepositTxType {
-				deposits = append(deposits, tx)
+				depositCount += 1
 			}
 		}
-		if len(attrs.Transactions) > len(deposits) {
-			eq.log.Warn("dropping sequencer transactions from payload for re-attempt, batcher may have included invalid transactions",
-				"txs", len(attrs.Transactions), "deposits", len(deposits), "parent", eq.safeHead)
-			eq.safeAttributes[0].Transactions = deposits
-			return nil
+		// Deposit transaction execution errors are suppressed in the execution engine, but if the
+		// block is somehow invalid, there is nothing we can do to recover & we should exit.
+		// TODO: Can this be triggered by an empty batch with invalid data (like parent hash or gas limit?)
+		if len(attrs.Transactions) == depositCount {
+			eq.log.Error("deposit only block was invalid", "parent", eq.safeHead, "err", err)
+			return NewCriticalError(fmt.Errorf("failed to process block with only deposit transactions: %w", err))
 		}
-		return NewCriticalError(fmt.Errorf("failed to process block with only deposit transactions: %w", err))
+		// drop the payload without inserting it
+		eq.safeAttributes = eq.safeAttributes[1:]
+		// suppress the error b/c we want to retry with the next batch from the batch queue
+		// If there is no valid batch the node will eventually force a deposit only block. If
+		// the deposit only block fails, this will return the critical error above.
+		return nil
 	default:
 		return NewCriticalError(fmt.Errorf("unknown InsertHeadBlock error type %d: %w", errType, err))
 	}
```
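The retry-or-fail decision above turns entirely on whether the failing payload was deposit-only. A standalone restatement of that check as a sketch: the `depositOnly` helper is hypothetical, while `hexutil.Bytes` matches the element type removed from the old code and `types.DepositTxType` comes from the diff above:

```go
package derive

import (
	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/core/types"
)

// depositOnly reports whether every transaction is a deposit, identified by
// its first byte being the deposit tx type prefix. If the payload contained
// anything else, the engine queue drops it and retries with the next batch;
// if it was already deposit-only, the failure is critical.
func depositOnly(txs []hexutil.Bytes) bool {
	for _, tx := range txs {
		if len(tx) == 0 || tx[0] != types.DepositTxType {
			return false
		}
	}
	return true
}
```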
op-node/rollup/derive/frame_queue.go (new file, mode 100644)

```go
package derive

import (
	"context"
	"io"

	"github.com/ethereum-optimism/optimism/op-node/eth"
	"github.com/ethereum/go-ethereum/log"
)

var _ NextFrameProvider = &FrameQueue{}

type NextDataProvider interface {
	NextData(context.Context) ([]byte, error)
	Origin() eth.L1BlockRef
}

type FrameQueue struct {
	log    log.Logger
	frames []Frame
	prev   NextDataProvider
}

func NewFrameQueue(log log.Logger, prev NextDataProvider) *FrameQueue {
	return &FrameQueue{
		log:  log,
		prev: prev,
	}
}

func (fq *FrameQueue) Origin() eth.L1BlockRef {
	return fq.prev.Origin()
}

func (fq *FrameQueue) NextFrame(ctx context.Context) (Frame, error) {
	// Find more frames if we need to
	if len(fq.frames) == 0 {
		if data, err := fq.prev.NextData(ctx); err != nil {
			return Frame{}, err
		} else {
			if new, err := ParseFrames(data); err == nil {
				fq.frames = append(fq.frames, new...)
			} else {
				fq.log.Warn("Failed to parse frames", "origin", fq.prev.Origin(), "err", err)
			}
		}
	}
	// If we did not add more frames but still have more data, retry this function.
	if len(fq.frames) == 0 {
		return Frame{}, NotEnoughData
	}

	ret := fq.frames[0]
	fq.frames = fq.frames[1:]
	return ret, nil
}

func (fq *FrameQueue) Reset(ctx context.Context, base eth.L1BlockRef) error {
	fq.frames = fq.frames[:0]
	return io.EOF
}
```
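A usage sketch for the new stage, assuming it sits alongside `FrameQueue` in the `derive` package: a stub `NextDataProvider` serves calldata blobs, and repeated `NextFrame` calls drain the parsed frames one at a time until `io.EOF` propagates up. Both `stubDataSource` and `drainFrames` are hypothetical helpers, not part of the commit:

```go
package derive

import (
	"context"
	"io"

	"github.com/ethereum-optimism/optimism/op-node/eth"
	"github.com/ethereum/go-ethereum/log"
)

// stubDataSource is a hypothetical NextDataProvider serving queued blobs.
type stubDataSource struct {
	blobs [][]byte
}

func (s *stubDataSource) NextData(_ context.Context) ([]byte, error) {
	if len(s.blobs) == 0 {
		return nil, io.EOF
	}
	out := s.blobs[0]
	s.blobs = s.blobs[1:]
	return out, nil
}

func (s *stubDataSource) Origin() eth.L1BlockRef { return eth.L1BlockRef{} }

// drainFrames pulls frames until the underlying source is exhausted.
func drainFrames(ctx context.Context, blobs [][]byte) []Frame {
	fq := NewFrameQueue(log.New(), &stubDataSource{blobs: blobs})
	var out []Frame
	for {
		f, err := fq.NextFrame(ctx)
		if err == io.EOF {
			return out // source exhausted
		} else if err == NotEnoughData {
			continue // blob held no parseable frames; pull the next one
		} else if err != nil {
			return out
		}
		out = append(out, f)
	}
}
```

Pulling frame parsing into its own stage is what lets the channel bank above ingest exactly one frame per step and prune deterministically afterwards, which the derivation spec change below makes normative.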
op-node/rollup/derive/pipeline.go

```diff
@@ -67,7 +67,8 @@ func NewDerivationPipeline(log log.Logger, cfg *rollup.Config, l1Fetcher L1Fetch
 	l1Traversal := NewL1Traversal(log, l1Fetcher)
 	dataSrc := NewDataSourceFactory(log, cfg, l1Fetcher) // auxiliary stage for L1Retrieval
 	l1Src := NewL1Retrieval(log, dataSrc, l1Traversal)
-	bank := NewChannelBank(log, cfg, l1Src, l1Fetcher)
+	frameQueue := NewFrameQueue(log, l1Src)
+	bank := NewChannelBank(log, cfg, frameQueue, l1Fetcher)
 	chInReader := NewChannelInReader(log, bank)
 	batchQueue := NewBatchQueue(log, cfg, chInReader)
 	attributesQueue := NewAttributesQueue(log, cfg, l1Fetcher, batchQueue)
```
packages/contracts-bedrock/package.json

```diff
@@ -16,7 +16,7 @@
   "scripts": {
     "build:forge": "forge build",
     "build:differential": "tsc scripts/differential-testing.ts --outDir dist --moduleResolution node --esModuleInterop",
-    "prebuild": "yarn ts-node scripts/verifyFoundryInstall.ts",
+    "prebuild": "yarn ts-node scripts/verify-foundry-install.ts",
     "build": "hardhat compile && yarn autogen:artifacts && yarn build:ts && yarn typechain",
     "build:ts": "tsc -p tsconfig.json",
     "autogen:artifacts": "ts-node scripts/generate-artifacts.ts",
```
packages/contracts-bedrock/scripts/verifyFoundryInstall.ts → packages/contracts-bedrock/scripts/verify-foundry-install.ts

File moved without content changes.
packages/migration-data/.gitignore

```diff
 /data/evm-messages.json
 /data/slots.json
+/data/evm-addresses.json
```
packages/migration-data/bin/cli.ts

```diff
@@ -14,6 +14,33 @@ program
   .description('CLI for querying Bedrock migration data')
   .version(version)
 
+program
+  .command('parse-state-dump')
+  .description('parses state dump to json')
+  .option('--file <file>', 'path to state dump file')
+  .action(async (options) => {
+    const iface = getContractInterface('OVM_L2ToL1MessagePasser')
+    const dump = fs.readFileSync(options.file, 'utf-8')
+
+    const addrs: string[] = []
+    const msgs: any[] = []
+    for (const line of dump.split('\n')) {
+      if (line.startsWith('ETH')) {
+        addrs.push(line.split('|')[1].replace('\r', ''))
+      } else if (line.startsWith('MSG')) {
+        const msg = '0x' + line.split('|')[2].replace('\r', '')
+        const parsed = iface.decodeFunctionData('passMessageToL1', msg)
+        msgs.push({
+          who: line.split('|')[1],
+          msg: parsed._message,
+        })
+      }
+    }
+
+    fs.writeFileSync('./data/evm-addresses.json', JSON.stringify(addrs, null, 2))
+    fs.writeFileSync('./data/evm-messages.json', JSON.stringify(msgs, null, 2))
+  })
+
 program
   .command('evm-sent-messages')
   .description('queries messages sent after the EVM upgrade')
```
specs/derivation.md

```diff
@@ -520,6 +520,14 @@ As currently implemented, each step in this stage performs the following actions
   frame are discarded.
 - Concatenate the data of the *contiguous frame sequence* (in sequential order) and push it to the next stage.
 
+The ordering of these actions is very important to be consistent across nodes & pipeline resets. The rollup node
+must attempt to do the following in order to maintain a consistent channel bank even in the presence of pruning.
+
+1. Attempt to read as many channels as possible from the channel bank.
+2. Load in a single frame.
+3. Check if the channel bank needs to be pruned & do so if needed.
+4. Go to step 1 once the channel bank is under its size limit.
+
 > **TODO** Instead of waiting on the first seen channel (which might not contain the oldest batches, meaning buffering
 > further down the pipeline), we could process any channel in the queue that is ready. We could do this by checking for
 > channel readiness upon writing into the bank, and moving ready channel to the front of the queue.
@@ -537,8 +545,9 @@ During the *Batch Buffering* stage, we reorder batches by their timestamps. If b
 slots][g-time-slot] and a valid batch with a higher timestamp exists, this stage also generates empty batches to fill
 the gaps.
 
-Batches are pushed to the next stage whenever there is one or more sequential batch(es) directly following the timestamp
+Batches are pushed to the next stage whenever there is one sequential batch directly following the timestamp
 of the current [safe L2 head][g-safe-l2-head] (the last block that can be derived from the canonical L1 chain).
+The parent hash of the batch must also match the hash of the current safe L2 head.
 
 Note that the presence of any gaps in the batches derived from L1 means that this stage will need to buffer for a whole
 [sequencing window][g-sequencing-window] before it can generate empty batches (because the missing batch(es) could have
@@ -645,6 +654,12 @@ If consolidation fails, the unsafe L2 head is reset to the safe L2 head.
 If the safe and unsafe L2 heads are identical (whether because of failed consolidation or not), we send the block to the
 execution engine to be converted into a proper L2 block, which will become both the new L2 safe and unsafe head.
 
+If a payload attributes created from a batch cannot be inserted into the chain because of a validation error (i.e. there
+was an invalid transaction or state transition in the block) the batch should be dropped & the safe head should not be
+advanced. The engine queue will attempt to use the next batch for that timestamp from the batch queue. If no valid batch
+is found, the rollup node will create a deposit only batch which should always pass validation because deposits are
+always valid.
+
 Interaction with the execution engine via the execution engine API is detailed in the [Communication with the Execution
 Engine][exec-engine-comm] section.
```
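The four-step ordering added to the spec is what the Go changes in this commit implement: `ChannelBank.NextData` reads first, then pulls exactly one frame from the new `FrameQueue`, and `IngestFrame` prunes after the load. A pseudocode-style sketch of one derivation step under those assumptions; `channelBankStep` is hypothetical and the `([]byte, error)` shape of `Read` is assumed from the surrounding code:

```go
package derive

import "context"

// channelBankStep mirrors the spec's ordering: drain ready channels first
// (step 1), then load exactly one frame (step 2), letting IngestFrame prune
// the bank back under its size limit (steps 3-4) before the caller loops.
func channelBankStep(ctx context.Context, cb *ChannelBank) ([]byte, error) {
	if data, err := cb.Read(); err == nil { // step 1: read as much as possible
		return data, nil
	}
	frame, err := cb.prev.NextFrame(ctx) // step 2: load a single frame
	if err != nil {
		return nil, err // io.EOF: no more L1 data at this origin
	}
	cb.IngestFrame(frame) // steps 3-4: ingest, then prune to the size limit
	return nil, NotEnoughData // signal the caller to step again
}
```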