Commit 98ae8c3b authored by mergify[bot], committed by GitHub

Merge pull request #3731 from ethereum-optimism/jg/decompress_channel_size_limit

specs,op-node: Clarify Max RLP Bytes Per Channel
parents 936b8ba5 c8f36940
@@ -138,7 +138,7 @@ func BatchReader(r io.Reader, l1InclusionBlock eth.L1BlockRef) (func() (BatchWit
 	if err != nil {
 		return nil, err
 	}
-	rlpReader := rlp.NewStream(zr, 10_000_000)
+	rlpReader := rlp.NewStream(zr, MaxRLPBytesPerChannel)
 	// Read each batch iteratively
 	return func() (BatchWithL1InclusionBlock, error) {
 		ret := BatchWithL1InclusionBlock{
......
@@ -15,6 +15,10 @@ const DerivationVersion0 = 0
 // starting with the oldest channel.
 const MaxChannelBankSize = 100_000_000
 
+// MaxRLPBytesPerChannel is the maximum number of bytes that will be read from
+// a channel. This limit is enforced when decoding the RLP.
+const MaxRLPBytesPerChannel = 10_000_000
+
 // DuplicateErr is returned when a newly read frame is already known
 var DuplicateErr = errors.New("duplicate frame")
......
@@ -367,6 +367,9 @@
 When decompressing a channel, we limit the amount of decompressed data to `MAX_RLP_BYTES_PER_CHANNEL` (currently
 10,000,000 bytes), in order to avoid "zip bomb" types of attack (where a small compressed input decompresses to a
 humongous amount of data). If the decompressed data exceeds the limit, things proceed as though the channel contained
 only the first `MAX_RLP_BYTES_PER_CHANNEL` decompressed bytes.
 
+When decoding batches, all batches that can be completely decoded within the first `MAX_RLP_BYTES_PER_CHANNEL` bytes
+will be accepted, even if the total size of the channel exceeds `MAX_RLP_BYTES_PER_CHANNEL`.
+
 While the above pseudocode implies that all batches are known in advance, it is possible to perform streaming
 compression and decompression of RLP-encoded batches. This means it is possible to start including channel frames in a
 [batcher transaction][g-batcher-transaction] before we know how many batches (and how many frames) the channel will
......
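
The truncation rule in the spec hunk above can be illustrated with a minimal sketch. This is not the op-node implementation (the real reader decodes typed batch data, not raw byte strings); it assumes go-ethereum's `rlp` package and a zlib-compressed channel, and shows how the input limit passed to `rlp.NewStream` yields exactly the batches that decode completely within `MaxRLPBytesPerChannel`:

```go
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
	"io"

	"github.com/ethereum/go-ethereum/rlp"
)

const MaxRLPBytesPerChannel = 10_000_000

// decodeChannel keeps every batch that decodes completely within the
// MaxRLPBytesPerChannel input limit and discards everything after it.
func decodeChannel(compressed []byte) ([][]byte, error) {
	zr, err := zlib.NewReader(bytes.NewReader(compressed))
	if err != nil {
		return nil, err
	}
	defer zr.Close()

	// The second argument caps how many decompressed bytes the stream may
	// consume, which is what defeats zip-bomb style inputs.
	s := rlp.NewStream(zr, MaxRLPBytesPerChannel)

	var batches [][]byte
	for {
		var batch []byte // treat each batch as an opaque RLP byte string
		if err := s.Decode(&batch); err != nil {
			if err == io.EOF {
				return batches, nil // clean end of channel
			}
			// Limit exceeded or corrupt data: batches decoded so far
			// are still accepted, per the spec text above.
			return batches, nil
		}
		batches = append(batches, batch)
	}
}

func main() {
	// Build a small channel: two RLP-encoded batches, zlib-compressed.
	var raw bytes.Buffer
	zw := zlib.NewWriter(&raw)
	rlp.Encode(zw, []byte("batch-1"))
	rlp.Encode(zw, []byte("batch-2"))
	zw.Close()

	batches, _ := decodeChannel(raw.Bytes())
	fmt.Println("decoded batches:", len(batches)) // 2
}
```

Note that a batch straddling the limit fails to decode and is dropped along with everything after it, which matches "as though the channel contained only the first `MAX_RLP_BYTES_PER_CHANNEL` decompressed bytes".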
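The streaming claim in the last paragraph of the hunk can be sketched the same way. Nothing below is the batcher's actual API: `emitFrames` and the framing logic are hypothetical (real frames also carry a channel ID, frame number, and so on). The point is only that zlib and RLP both write incrementally, so frame data exists before the number of batches is known:

```go
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

// emitFrames streams batches into a compressed channel and cuts frames as
// soon as compressed bytes are available, before the batch list is closed.
func emitFrames(batches <-chan []byte, maxFrameSize int) {
	var buf bytes.Buffer
	zw := zlib.NewWriter(&buf)

	for batch := range batches {
		// RLP-encode straight into the compressor; the whole channel is
		// never materialized in memory.
		if err := rlp.Encode(zw, batch); err != nil {
			panic(err)
		}
		// Flush pushes the compressed bytes produced so far into buf.
		if err := zw.Flush(); err != nil {
			panic(err)
		}
		// Emit any full frames that are already available.
		for buf.Len() >= maxFrameSize {
			fmt.Printf("frame: %d bytes\n", len(buf.Next(maxFrameSize)))
		}
	}
	zw.Close() // writes the zlib trailer into buf
	for buf.Len() > 0 {
		fmt.Printf("frame: %d bytes\n", len(buf.Next(maxFrameSize)))
	}
}

func main() {
	ch := make(chan []byte, 2)
	ch <- []byte("batch-1")
	ch <- []byte("batch-2")
	close(ch)
	emitFrames(ch, 16)
}
```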