Commit ab7ed0d4 authored by norswap, committed by GitHub

[specs] knock some TODOs + review execution engine interaction during derivation (#3223)

* many clarifications and corrections to stage descriptions

* knock some TODOs + review execution engine interaction during derivation

* spec that we drop sequencer transactions if there is a payload error

* lint
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
parent 1c5cb6bd
@@ -37,6 +37,7 @@
[g-time-slot]: glossary.md#time-slot
[g-consolidation]: glossary.md#unsafe-block-consolidation
[g-safe-l2-head]: glossary.md#safe-l2-head
[g-safe-l2-block]: glossary.md#safe-l2-block
[g-unsafe-l2-head]: glossary.md#unsafe-l2-head
[g-unsafe-l2-block]: glossary.md#unsafe-l2-block
[g-unsafe-sync]: glossary.md#unsafe-sync

@@ -70,8 +71,8 @@
- [Deriving Payload Attributes](#deriving-payload-attributes)
  - [Deriving the Transaction List](#deriving-the-transaction-list)
  - [Building Individual Payload Attributes](#building-individual-payload-attributes)
- [Communication with the Execution Engine](#communication-with-the-execution-engine)
- [WARNING: BELOW THIS LINE, THE SPEC HAS NOT BEEN REVIEWED AND MAY CONTAIN MISTAKES](#warning-below-this-line-the-spec-has-not-been-reviewed-and-may-contain-mistakes)
- [Handling L1 Re-Orgs](#handling-l1-re-orgs)
  - [Resetting the Engine Queue](#resetting-the-engine-queue)
  - [Resetting Payload Attribute Derivation](#resetting-payload-attribute-derivation)

@@ -102,6 +103,8 @@ To derive the L2 blocks in an epoch `E`, we need the following inputs:
- The [batcher transactions][g-batcher-transaction] included in the sequencing window. These allow us to
  reconstruct [sequencer batches][g-sequencer-batch] containing the transactions to include in L2 blocks (each batch
  maps to a single L2 block).
  - Note that it is impossible to have a batcher transaction containing a batch relative to epoch `E` in L1 block
    `E`, as the batch must contain the hash of L1 block `E`.
- The [deposits][g-deposits] made in L1 block `E` (in the form of events emitted by the [deposit
  contract][g-deposit-contract]).
- The L1 block attributes from L1 block `E` (to derive the [L1 attributes deposited transaction][g-l1-attr-deposit]).

@@ -109,9 +112,8 @@ To derive the L2 blocks in an epoch `E`, we need the following inputs:
  [L2 genesis state][g-l2-genesis].
- An epoch `E` does not exist if `E <= L2CI`, where `L2CI` is the [L2 chain inception][g-l2-chain-inception].

> **TODO** specify sequencing window size (current thinking: on the order of a few hours, to give maximal flexibility to
> the batch submitter)

To derive the whole L2 chain from scratch, we simply start with the [L2 genesis state][g-l2-genesis], and the [L2 chain
inception][g-l2-chain-inception] as first epoch, then process all sequencing windows in order. Refer to the

@@ -129,7 +131,8 @@ Each epoch may contain a variable number of L2 blocks (one every `l2_block_time`
- `l1_timestamp` is the timestamp of the L1 block associated with the L2 block's epoch
- `max_sequencer_drift` is the most a sequencer is allowed to get ahead of L1

> **TODO** specify max sequencer drift (current thinking: on the order of 10
> minutes, we've been using 2-4 minutes in testnets)

Put together, these constraints mean that there must be an L2 block every `l2_block_time` seconds, and that the
timestamp for the first L2 block of an epoch must never fall behind the timestamp of the L1 block matching the epoch.
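
To make these two constraints concrete, here is a minimal sketch in the pseudocode style used later in this document; the names (`l2BlockTime`, `maxSequencerDrift`) are illustrative, not taken from an implementation:

```javascript
// One L2 block every `l2BlockTime` seconds within an epoch.
function l2BlockTimestamp(epochFirstTimestamp, blockIndex, l2BlockTime) {
  return epochFirstTimestamp + blockIndex * l2BlockTime;
}

// The first L2 block of an epoch must not fall behind its epoch's L1
// timestamp, and the sequencer may not get ahead of L1 by more than the drift.
function isValidEpochStartTimestamp(l2Timestamp, l1Timestamp, maxSequencerDrift) {
  return l2Timestamp >= l1Timestamp
      && l2Timestamp <= l1Timestamp + maxSequencerDrift;
}
```
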
@@ -193,6 +196,7 @@ Refer to the [Batch Submission specification][batcher-spec] for more information
> - The batcher requests the safe L2 head from the rollup node, then queries the execution engine for the block data.
> - In the future we might be able to get the safe head information from the execution engine directly. Not possible
>   right now but there is an upstream geth PR open.
> - specify batcher authentication (cf. TODO below)

## Batch Submission Wire Format

@@ -274,9 +278,17 @@ frame `B2` indicates that it is the last frame within the channel and (2) no emp
The diagram does not specify the sequencing window size in use, but from it we can infer that it must be at least 4
blocks, because the last frame of channel `A` appears in block 102, but belongs to epoch 99.

As for the comment on "security types", it explains the classification of blocks as used on L1 and L2.

- [Unsafe L2 blocks][g-unsafe-l2-block]:
- [Safe L2 blocks][g-safe-l2-block]:
- Finalized L2 blocks: currently the same as safe L2 blocks, but could be changed in the future to refer to blocks
  that have been derived from [finalized][g-finalized-l2-head] L1 data, or alternatively, from L1 blocks that are older
  than the [challenge period].

These security levels map to the `headBlockHash`, `safeBlockHash` and `finalizedBlockHash` values transmitted when
interacting with the [execution-engine API][exec-engine]. Refer to the [Communication with the Execution
Engine][exec-engine-comm] section for more information.

### Batcher Transaction Format

@@ -308,10 +320,13 @@ frame_number = uint16
frame_data_length = uint32
frame_data = bytes
is_last = bool
```

Where `uint64`, `uint32` and `uint16` are all big-endian unsigned integers. Type names should be interpreted and
encoded according to [the Solidity ABI][solidity-abi].

[solidity-abi]: https://docs.soliditylang.org/en/v0.8.16/abi-spec.html

All data in a frame is fixed-size, except the `frame_data`. The fixed overhead is `32 + 8 + 2 + 4 + 1 = 47 bytes`.
Fixed-size frame metadata avoids a circular dependency with the target total data length,
to simplify packing of frames with varying content length.
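
As an illustration, a frame decoder might look as follows; this is a sketch assuming a Node.js `Buffer` as input, not the reference implementation:

```javascript
// Decode a single frame starting at `offset` in `buf`, per the format above.
// Returns the decoded frame and the offset just past it.
function decodeFrame(buf, offset = 0) {
  const random = buf.subarray(offset, offset + 32);    // channel_id.random (bytes32)
  const timestamp = buf.readBigUInt64BE(offset + 32);  // channel_id.timestamp (uint64)
  const frameNumber = buf.readUInt16BE(offset + 40);   // frame_number (uint16)
  const dataLength = buf.readUInt32BE(offset + 42);    // frame_data_length (uint32)
  if (dataLength > 1_000_000) throw new Error("frame_data_length exceeds cap");
  const data = buf.subarray(offset + 46, offset + 46 + dataLength);
  const isLast = buf[offset + 46 + dataLength];        // is_last (single byte)
  if (isLast !== 0 && isLast !== 1) throw new Error("invalid frame"); // must be ignored
  return [{ random, timestamp, frameNumber, data, isLast: isLast === 1 },
          offset + 47 + dataLength];
}
```
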
@@ -321,27 +336,21 @@ where:
- `channel_id` uniquely identifies a channel as the concatenation of a random value and a timestamp
  - `random` is a random value such that two channels with different batches should have a different random value
  - `timestamp` is the time at which the channel was created (UNIX time in seconds)
  - The ID includes both the random value and the timestamp, in order to prevent a malicious sequencer from reusing the
    random value after the channel has [timed out][g-channel-timeout] (refer to the [batcher
    specification][batcher-spec] to learn more about channel timeouts). This will also allow us to substitute `random`
    by a hash commitment to the batches, should we want to do so in the future.
  - Frames with a channel ID whose timestamp is higher than that of the L1 block on which the frame appears must be
    ignored. Note that L1 nodes cannot easily manipulate the L1 block timestamp: they have a soft constraint to
    ignore blocks whose timestamps are ahead of the wallclock time by a certain margin. (A soft constraint is not a
    consensus rule: nodes will accept such blocks in the canonical chain but will not attempt to build directly on
    them.) This issue goes away with Proof of Stake, where timestamps are determined by [L1 time slots][g-time-slot].
- `frame_number` identifies the index of the frame within the channel
- `frame_data_length` is the length of `frame_data` in bytes. It is capped to 1,000,000 bytes.
- `frame_data` is a sequence of bytes belonging to the channel, logically after the bytes from the previous frames
- `is_last` is a single byte with a value of 1 if the frame is the last in the channel, 0 if there are more frames in
  the channel. Any other value makes the frame invalid (it must be ignored by the rollup node).

[batcher-spec]: batching.md

### Channel Format

@@ -365,12 +374,10 @@ where:
[rfc1950]: https://www.rfc-editor.org/rfc/rfc1950.html

When decompressing a channel, we limit the amount of decompressed data to `MAX_RLP_BYTES_PER_CHANNEL` (currently
10,000,000 bytes), in order to avoid "zip-bomb" types of attack (where a small compressed input decompresses to a
humongous amount of data). If the decompressed data exceeds the limit, things proceed as though the channel contained
only the first `MAX_RLP_BYTES_PER_CHANNEL` decompressed bytes.
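
A sketch of size-capped decompression, using Node's streaming `zlib` so that a zip-bomb never materializes more than the cap in memory (illustrative, not the reference implementation):

```javascript
const zlib = require("zlib");

const MAX_RLP_BYTES_PER_CHANNEL = 10_000_000;

// Inflate `compressed`, keeping only the first MAX_RLP_BYTES_PER_CHANNEL
// decompressed bytes and ignoring everything past the cap.
function decompressChannel(compressed) {
  return new Promise((resolve, reject) => {
    const inflate = zlib.createInflate();
    const chunks = [];
    let size = 0;
    inflate.on("data", (chunk) => {
      if (size + chunk.length >= MAX_RLP_BYTES_PER_CHANNEL) {
        chunks.push(chunk.subarray(0, MAX_RLP_BYTES_PER_CHANNEL - size));
        inflate.destroy(); // stop decompressing; proceed with the truncated data
        resolve(Buffer.concat(chunks));
      } else {
        chunks.push(chunk);
        size += chunk.length;
      }
    });
    inflate.on("end", () => resolve(Buffer.concat(chunks)));
    inflate.on("error", reject);
    inflate.end(compressed);
  });
}
```
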
While the channel-encoding pseudocode above implies that all batches are known in advance, it is possible to perform
streaming compression and decompression of RLP-encoded batches. This means it is possible to start including channel
frames in a

@@ -449,8 +456,8 @@ This ensures that we use the data we already have before pulling more data and m
the derivation pipeline.

Each stage can maintain its own inner state as necessary. **In particular, each stage maintains an L1 block reference
(number + hash) to the latest L1 block such that all data originating from previous blocks has been fully processed, and
the data from that block is being or has been processed.**

Let's briefly describe each stage of the pipeline.

@@ -466,7 +473,7 @@ particular we extract a byte string that corresponds to the concatenation of the
transaction][g-batcher-transaction] belonging to the block. This byte stream encodes a stream of [channel
frames][g-channel-frame] (see the [Batch Submission Wire Format][wire-format] section for more info).

These frames are parsed, then grouped per [channel][g-channel] into a structure we call the *channel bank*.

Some frames are ignored:

@@ -477,11 +484,15 @@ Some frames are ignored:
  `frame.is_last == 1`) are ignored.
  - These frames could still be written into the channel bank if we haven't seen the final frame yet. But they will
    never be read out from the channel bank.
- Frames with a channel ID whose timestamp is higher than that of the L1 block on which the frame appears.

Channels are also recorded in FIFO order in a structure called the *channel queue*. A channel is added to the channel
queue the first time a frame belonging to the channel is seen. This structure is used in the next stage.
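
Putting the filtering rules and the channel queue together, one step of frame ingestion could be sketched as follows. Names like `channelBank` and `channelQueue` mirror the concepts above but are otherwise illustrative, and the duplicate-frame check (only the first frame with a given number counts) is an assumption of this sketch:

```javascript
// Ingest one decoded frame found in L1 block `l1Block`, or ignore it.
function ingestFrame(frame, l1Block, channelBank, channelQueue) {
  // ignore frames whose channel timestamp is ahead of the L1 block they appear in
  if (frame.timestamp > l1Block.timestamp) return;
  const id = frame.random.toString("hex") + ":" + frame.timestamp; // channel_id as a map key
  let channel = channelBank.get(id);
  if (channel === undefined) {
    channel = { timestamp: Number(frame.timestamp), frames: new Map(), lastFrameNumber: null };
    channelBank.set(id, channel);
    channelQueue.push(id); // FIFO: first time a frame for this channel is seen
  }
  // only the first frame received for a given frame number counts
  if (channel.frames.has(frame.frameNumber)) return;
  // frames numbered past the final frame will never be read out of the bank
  if (channel.lastFrameNumber !== null && frame.frameNumber > channel.lastFrameNumber) return;
  if (frame.isLast) channel.lastFrameNumber = frame.frameNumber;
  channel.frames.set(frame.frameNumber, frame.data);
}
```
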
### Channel Bank

The *Channel Bank* stage is responsible for managing buffering from the channel bank that was written to by the L1
retrieval stage. A step in the channel bank stage tries to read data from channels that are "ready".

In principle, we should be able to read any channel that has any number of sequential frames at the "front" of the
channel (i.e. right after any frames that have been read from the bank already) and decompress batches from them. (Note

@@ -489,28 +500,41 @@ that if we did this, we'd need to keep partially decompressed batches around.)
However, our current implementation doesn't support streaming decompression, so currently we have to wait until either:

- We have received all frames in the channel: i.e. we received the last frame in the channel (`is_last == 1`) and every
  frame with a lower number.
- The channel has timed out (in which case we read all contiguous sequential frames from the start of the channel).
  - A channel is considered to be *timed out* if `currentL1Block.timestamp > channel_id.timestamp + CHANNEL_TIMEOUT`.
  - where `currentL1Block` is the L1 block maintained by this stage, which is the most recent L1 block whose frames
    have been added to the channel bank.
- The channel is pruned out of the channel bank (see below), in which case it isn't passed to further stages.

> **TODO** specify CHANNEL_TIMEOUT (currently 120s on Goerli testnet)

As currently implemented, each step in this stage performs the following actions:

- Try to prune the channel bank.
  - This occurs if the size of the channel bank exceeds `MAX_CHANNEL_BANK_SIZE` (currently set to 100,000,000 bytes).
  - The size of the channel bank is the sum of the sizes (in bytes) of all the frames contained within it.
  - In this case, channels are dropped from the front of the *channel queue* (see previous stage), and the frames
    belonging to these channels are dropped from the channel bank.
  - As many channels are dropped as is necessary so that the channel bank size falls back below
    `MAX_CHANNEL_BANK_SIZE`.
- Take the first channel in the *channel queue*, determine if it is ready, and process it if so.
  - A channel is ready if all its frames have been received or it has timed out (see the list above for details).
  - If the channel is ready, determine its *contiguous frame sequence*: a contiguous sequence of frames,
    starting from the first frame in the channel.
    - For a full channel, those are all the frames.
    - For a timed-out channel, those are all the frames until the first missing frame. Frames after the first missing
      frame are discarded.
  - Concatenate the data of the *contiguous frame sequence* (in sequential order) and push it to the next stage.

> **TODO** Instead of waiting on the first seen channel (which might not contain the oldest batches, meaning buffering
> further down the pipeline), we could process any channel in the queue that is ready. We could do this by checking for
> channel readiness upon writing into the bank, and moving ready channels to the front of the queue.
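
The step just described could be sketched as follows, continuing the illustrative `channelBank` and `channelQueue` structures from the previous stage (constants as given above):

```javascript
const MAX_CHANNEL_BANK_SIZE = 100_000_000;

// One step of the Channel Bank stage: prune, then try to read the first
// channel in the queue. Returns channel data for the next stage, or null.
function channelBankStep(channelBank, channelQueue, currentL1Block, CHANNEL_TIMEOUT) {
  // prune: drop whole channels from the front of the queue until under the cap
  while (channelQueue.length > 0 && bankSize(channelBank) > MAX_CHANNEL_BANK_SIZE) {
    channelBank.delete(channelQueue.shift());
  }
  if (channelQueue.length === 0) return null;
  const channel = channelBank.get(channelQueue[0]);
  const timedOut = currentL1Block.timestamp > channel.timestamp + CHANNEL_TIMEOUT;
  if (!timedOut && !hasAllFrames(channel)) return null; // not ready yet
  channelBank.delete(channelQueue.shift());
  // concatenate the contiguous frame sequence, starting at the first frame
  const parts = [];
  for (let i = 0; channel.frames.has(i); i++) parts.push(channel.frames.get(i));
  return Buffer.concat(parts); // pushed to the Batch Decoding stage
}

function bankSize(channelBank) {
  let size = 0;
  for (const channel of channelBank.values())
    for (const data of channel.frames.values()) size += data.length;
  return size;
}

function hasAllFrames(channel) {
  if (channel.lastFrameNumber === null) return false;
  for (let i = 0; i <= channel.lastFrameNumber; i++)
    if (!channel.frames.has(i)) return false;
  return true;
}
```
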
### Batch Decoding

In the *Batch Decoding* stage, we decompress the channel we received in the last stage, then parse
[batches][g-sequencer-batch] from the decompressed byte stream.

### Batch Buffering

@@ -521,7 +545,7 @@ During the *Batch Buffering* stage, we reorder batches by their timestamps. If b
slots][g-time-slot] and a valid batch with a higher timestamp exists, this stage also generates empty batches to fill
the gaps.

Batches are pushed to the next stage whenever one or more sequential batches directly follow the timestamp
of the current [safe L2 head][g-safe-l2-head] (the last block that can be derived from the canonical L1 chain).
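
A sketch of this buffering rule, under the simplifying assumptions that batches are keyed by timestamp and that `emptyBatch` is a hypothetical helper producing an empty batch for a skipped [time slot][g-time-slot]:

```javascript
// Return the next batch extending the safe L2 head, or null if we must wait.
function nextBatch(batchBuffer, safeHeadTimestamp, l2BlockTime) {
  const wanted = safeHeadTimestamp + l2BlockTime;
  if (batchBuffer.has(wanted)) {
    const batch = batchBuffer.get(wanted);
    batchBuffer.delete(wanted);
    return batch;
  }
  // generate an empty batch for the gap only once a valid batch with a
  // higher timestamp proves that the slot was really skipped
  for (const ts of batchBuffer.keys()) {
    if (ts > wanted) return emptyBatch(wanted); // `emptyBatch` is hypothetical
  }
  return null; // wait for more batches to be derived from L1
}
```
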
Note that the presence of any gaps in the batches derived from L1 means that this stage will need to buffer for a whole

@@ -548,7 +572,8 @@ We also ignore invalid batches, which do not satisfy one of the following constr
- The batch only contains sequenced transactions, i.e. it must NOT contain any [deposited-type transactions][
  g-deposit-tx-type].

> **TODO** specify `max_sequencer_drift` (see TODO above) (current thinking: on the order of 10 minutes, we've been
> using 2-4 minutes in testnets)

### Payload Attributes Derivation

@@ -587,7 +612,7 @@ In particular, the following fields of the payload attributes are checked for eq
If consolidation fails, the unsafe L2 head is reset to the safe L2 head.

If the safe and unsafe L2 heads are identical (whether because of failed consolidation or not), we send the block to the
execution engine to be converted into a proper L2 block, which will become both the new L2 safe and unsafe head.

Interaction with the execution engine via the execution engine API is detailed in the [Communication with the Execution
Engine][exec-engine-comm] section.

@@ -612,10 +637,10 @@ which includes the additional `transactions` and `noTxPool` fields.
## Deriving the Transaction List

For each L2 block to be created by the sequencer, we start from a [sequencer batch][g-sequencer-batch] matching the
target L2 block number. This could potentially be an empty auto-generated batch, if the L1 chain did not include a batch
for the target L2 block number. [Remember][batch-format] that the batch includes a [sequencing
epoch][g-sequencing-epoch] number, an L2 timestamp, and a transaction list.

This block is part of a [sequencing epoch][g-sequencing-epoch],
whose number matches that of an L1 block (its *[L1 origin][g-l1-origin]*).

@@ -645,24 +670,14 @@ entries.
After deriving the transaction list, the rollup node constructs a [`PayloadAttributesV1`][expanded-payload] as follows:

- `timestamp` is set to the batch's timestamp.
- `random` is set to the `prev_randao` L1 block attribute.
- `suggestedFeeRecipient` is set to an address determined by the sequencer.
- `transactions` is the array of the derived transactions: deposited transactions and sequenced transactions, all
  encoded with [EIP-2718].
- `noTxPool` is set to `true`, to use the exact above `transactions` list when constructing the block.

[expanded-payload]: exec-engine.md#extended-payloadattributesv1
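
For illustration, the construction might look like this in the pseudocode style of this document; `batch`, `l1Origin`, `feeRecipient` and `derivedTransactions` are assumed inputs:

```javascript
// Build the PayloadAttributesV1 object from the derived inputs above.
function buildPayloadAttributes(batch, l1Origin, feeRecipient, derivedTransactions) {
  return {
    timestamp: batch.timestamp,            // the batch's timestamp
    random: l1Origin.prevRandao,           // the `prev_randao` L1 block attribute
    suggestedFeeRecipient: feeRecipient,   // address determined by the sequencer
    transactions: derivedTransactions,     // deposited + sequenced txs, EIP-2718 encoded
    noTxPool: true,                        // use exactly this transaction list
  };
}
```
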
# Communication with the Execution Engine

[exec-engine-comm]: #communication-with-the-execution-engine

@@ -679,74 +694,109 @@ place. This section explains how this happens.
Let:

- `safeL2Head` be a variable in the state of the execution engine, tracking the (hash of) the current [safe L2
  head][g-safe-l2-head]
- `unsafeL2Head` be a variable in the state of the execution engine, tracking the (hash of) the current [unsafe L2
  head][g-unsafe-l2-head]
- `finalizedL2Head` be a variable in the state of the execution engine, tracking the (hash of) the current [finalized L2
  head][g-finalized-l2-head]
  - This is not yet implemented, and currently always holds the zero hash — this does not prevent the pseudocode below
    from working.
- `payloadAttributes` be some previously derived [payload attributes][g-payload-attr] for the L2 block with number
  `l2Number(safeL2Head) + 1`

Then we can apply the following pseudocode logic to update the state of both the rollup driver and execution engine:

```javascript
fun makeL2Block(payloadAttributes) {

    // request a new execution payload
    forkChoiceState = {
        headBlockHash: safeL2Head,
        safeBlockHash: safeL2Head,
        finalizedBlockHash: finalizedL2Head,
    }
    [status, payloadID, rpcErr] = engine_forkchoiceUpdatedV1(forkChoiceState, payloadAttributes)
    if (rpcErr != null) return softError()
    if (status != "VALID") return payloadError()

    // retrieve and execute the execution payload
    [executionPayload, rpcErr] = engine_getPayloadV1(payloadID)
    if (rpcErr != null) return softError()

    [status, rpcErr] = engine_newPayloadV1(executionPayload)
    if (rpcErr != null) return softError()
    if (status != "VALID") return payloadError()

    newL2Head = executionPayload.blockHash

    // update the head to the new L2 block
    forkChoiceState = {
        headBlockHash: newL2Head,
        safeBlockHash: newL2Head,
        finalizedBlockHash: finalizedL2Head,
    }
    [status, payloadID, rpcErr] = engine_forkchoiceUpdatedV1(forkChoiceState, null)
    if (rpcErr != null) return softError()
    if (status != "SUCCESS") return payloadError()

    return newL2Head
}

result = softError()
while (isSoftError(result)) {
    result = makeL2Block(payloadAttributes)
    if (isPayloadError(result)) {
        // drop the sequenced transactions from the batch and retry with deposits only
        payloadAttributes = onlyDeposits(payloadAttributes)
        result = makeL2Block(payloadAttributes)
    }
    if (isPayloadError(result)) {
        panic("this should never happen")
    }
}

if (!isError(result)) {
    safeL2Head = result
    unsafeL2Head = result
}
```

> **TODO** `finalizedL2Head` is not being changed yet, but can be set to point to an L2 block fully derived from data up
> to a finalized L1 block.

As should be apparent from the assignments, within the `forkChoiceState` object, the properties have the following
meaning:

- `headBlockHash`: block hash of the last block of the L2 chain, according to the sequencer.
- `safeBlockHash`: same as `headBlockHash`.
- `finalizedBlockHash`: the [finalized L2 head][g-finalized-l2-head].

Error handling:

- A value returned by `payloadError()` means the inputs were wrong.
  - This could mean the sequencer included invalid transactions in the batch. **In this case, all transactions from the
    batch should be dropped**. We assume this is the case, and modify the payload via `onlyDeposits` to only include
    [deposited transactions][g-deposited], and retry.
  - In the case of deposits, the [execution engine][g-exec-engine] will skip invalid transactions, so bad deposited
    transactions should never cause a payload error.
- A value returned by `softError()` means that the interaction failed by chance, and should be reattempted (this is the
  purpose of the `while` loop in the pseudo-code).
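
The `onlyDeposits` helper from the pseudocode could be sketched as follows; the `0x7E` deposited-transaction type byte is an assumption of this sketch, not something specified here:

```javascript
// Keep only deposited transactions, dropping all sequenced transactions.
// Each tx is assumed to be an EIP-2718 encoded byte array, whose first byte
// is the transaction type (0x7E for deposited transactions, assumed here).
function onlyDeposits(payloadAttributes) {
  return {
    ...payloadAttributes,
    transactions: payloadAttributes.transactions.filter((tx) => tx[0] === 0x7e),
  };
}
```
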
> **TODO** define "invalid transactions" properly, check the interpretation for the execution engine

The following JSON-RPC methods are part of the [execution engine API][exec-engine]:

[exec-engine]: exec-engine.md

- [`engine_forkchoiceUpdatedV1`] — updates the forkchoice (i.e. the chain head) to `headBlockHash` if different, and
  instructs the engine to start building an execution payload if the payload attributes argument isn't `null`
- [`engine_getPayloadV1`] — retrieves a previously requested execution payload
- [`engine_newPayloadV1`] — executes an execution payload to create a block
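
For reference, an `engine_forkchoiceUpdatedV1` call has the following JSON-RPC shape (hash values elided; the second parameter is non-`null` only when requesting that a payload be built):

```javascript
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "engine_forkchoiceUpdatedV1",
  params: [
    { headBlockHash: "0x...", safeBlockHash: "0x...", finalizedBlockHash: "0x..." },
    null, // or the payload attributes, to start building an execution payload
  ],
};
// A successful response carries { payloadStatus: { status, ... }, payloadId }.
```
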
@@ -760,6 +810,12 @@ The execution payload is an object of type [`ExecutionPayloadV1`][eth-payload].
------------------------------------------------------------------------------------------------------------------------

# WARNING: BELOW THIS LINE, THE SPEC HAS NOT BEEN REVIEWED AND MAY CONTAIN MISTAKES
We still expect that the explanations here should be pretty useful.
------------------------------------------------------------------------------------------------------------------------

# Handling L1 Re-Orgs

[handling-reorgs]: #handling-l1-re-orgs

...

@@ -47,6 +47,7 @@
- [Payload Attributes](#payload-attributes)
- [L2 Genesis Block](#l2-genesis-block)
- [L2 Chain Inception](#l2-chain-inception)
- [Safe L2 Block](#safe-l2-block)
- [Safe L2 Head](#safe-l2-head)
- [Unsafe L2 Block](#unsafe-l2-block)
- [Unsafe L2 Head](#unsafe-l2-head)

@@ -575,12 +576,18 @@ oracle][output-oracle] contract.
In the current implementation, this is the L1 block number at which the output oracle contract was deployed or upgraded.

## Safe L2 Block

[safe-l2-block]: glossary.md#safe-l2-block

A safe L2 block is an L2 block that can be derived entirely from L1 by a [rollup node][rollup-node]. This can vary
between different nodes, based on their view of the L1 chain.

## Safe L2 Head

[safe-l2-head]: glossary.md#safe-l2-head

The safe L2 head is the highest [safe L2 block][safe-l2-block] that a [rollup node][rollup-node] knows about.

## Unsafe L2 Block

...