Commit d9e4363d authored by Joshua Gutow, committed by GitHub

Merge pull request #8539 from 0xn4de/develop

chore: fix typos & wording
parents b02c4969 d4926405
@@ -12,7 +12,7 @@
 <!-- END doctoc generated TOC please keep comment here to allow auto update -->
-[Deposited transaction](./glossary.md#deposited-transaction) are transactions on L2 that are
+[Deposited transactions](./glossary.md#deposited-transaction) are transactions on L2 that are
 initiated on L1. The gas that they use on L2 is bought on L1 via a gas burn (or a direct payment
 in the future). We maintain a fee market and hard cap on the amount of gas provided to all deposits
 in a single L1 block.
@@ -145,7 +145,7 @@ An attacker would observe a deposit in the mempool and frontrun it with a deposi
 that purchases enough gas such that the other deposit reverts.
 The smaller the max resource limit is, the easier this attack is to pull off.
 This attack is mitigated by having a large resource limit as well as a large
-elastcity multiplier. This means that the target resource usage is kept small,
+elasticity multiplier. This means that the target resource usage is kept small,
 giving a lot of room for the deposit base fee to rise when the max resource limit
 is being purchased.
...
@@ -34,7 +34,7 @@ fundamental difference.
 - All existing Ethereum tooling works - all you have to do is change the chain ID.
 - **Maximal compatibility with ETH1 nodes:** The implementation should minimize any differences with a vanilla Geth
 node, and leverage as many existing L1 standards as possible.
-- The execution engine/rollup node use the ETH2 Engine API to build the canonical L2 chain.
+- The execution engine/rollup node uses the ETH2 Engine API to build the canonical L2 chain.
 - The execution engine leverages Geth's existing mempool and sync implementations, including snap sync.
 - **Minimize state and complexity:**
 - Whenever possible, services contributing to the rollup infrastructure are stateless.
@@ -149,7 +149,7 @@ the epoch's sequencing window (i.e. the batch must land before L1 block `n + SEQ
 (along with the `TransactionDeposited` L1 events) what allows the derivation of the L2 chain from the L1 chain.
 The sequencer does not need for a L2 block to be batch-submitted to L1 in order to build on top of it. In fact, batches
-typically contain multiple L2 blocks worth of sequenced transaction. This is what enables
+typically contain multiple L2 blocks worth of sequenced transactions. This is what enables
 _fast transaction confirmations_ on the sequencer.
 Since transaction batches for a given epoch can be submitted anywhere within the sequencing window, verifiers must
@@ -183,7 +183,7 @@ This process is then repeated with incrementing epochs until the tip of L1 is re
 The rollup driver doesn't actually create blocks. Instead, it directs the execution engine to do so via the Engine API.
 For each iteration of the block derivation loop described above, the rollup driver will craft a _payload attributes_
 object and send it to the execution engine. The execution engine will then convert the payload attributes object into a
-block, and add it to the chain. The basic sequence the rollup driver is as follows:
+block, and add it to the chain. The basic sequence of the rollup driver is as follows:
 1. Call `engine_forkchoiceUpdatedV2` with the payload attributes object. We'll skip over the details of the fork choice
 state parameter for now - just know that one of its fields is the L2 chain's `headBlockHash`, and that it is set to the
...
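The hunk above shows only the first step of that sequence. As a rough Go sketch of the full Engine API round trip: the types and client interface below are placeholders (assumptions), while the method names mirror the standard Engine API JSON-RPC methods.

```go
package enginesketch

// Minimal stand-in types for this sketch; the real definitions live in the
// Engine API types and the rollup node codebase.
type (
	Hash              [32]byte
	PayloadID         [8]byte
	PayloadAttributes struct{ Timestamp uint64 }
	ExecutionPayload  struct{ BlockHash Hash }
	ForkchoiceState   struct{ HeadBlockHash, SafeBlockHash, FinalizedBlockHash Hash }
	ForkchoiceResult  struct{ PayloadID PayloadID }
	PayloadStatus     struct{ Status string }
)

// EngineClient models the Engine API calls used below; this Go surface is an
// assumption, only the JSON-RPC method names are standard.
type EngineClient interface {
	ForkchoiceUpdatedV2(state ForkchoiceState, attrs *PayloadAttributes) (ForkchoiceResult, error)
	GetPayloadV2(id PayloadID) (*ExecutionPayload, error)
	NewPayloadV2(payload *ExecutionPayload) (PayloadStatus, error)
}

// buildBlock walks one iteration of the sequence described above: start block
// building on the current head, fetch the built payload, insert it as a block,
// and advance the canonical head to it.
func buildBlock(eng EngineClient, head ForkchoiceState, attrs *PayloadAttributes) (ForkchoiceState, error) {
	res, err := eng.ForkchoiceUpdatedV2(head, attrs) // 1. begin building on headBlockHash
	if err != nil {
		return head, err
	}
	payload, err := eng.GetPayloadV2(res.PayloadID) // 2. retrieve the built payload
	if err != nil {
		return head, err
	}
	if _, err := eng.NewPayloadV2(payload); err != nil { // 3. insert it into the chain
		return head, err
	}
	head.HeadBlockHash = payload.BlockHash // 4. mark it as the new canonical head
	_, err = eng.ForkchoiceUpdatedV2(head, nil)
	return head, err
}
```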
@@ -43,13 +43,13 @@ submits it to the `L2OutputOracle` contract on the settlement layer (L1).
 ### L2OutputOracle v1.0.0
 The submission of output proposals is permissioned to a single account. It is expected that this
-account continues to submit output proposals over time to ensure that user withdrawals do not halt.
+account will continue to submit output proposals over time to ensure that user withdrawals do not halt.
 The [L2 output proposer](../op-proposer) is expected to submit output roots on a deterministic
 interval based on the configured `SUBMISSION_INTERVAL` in the `L2OutputOracle`. The larger
 the `SUBMISSION_INTERVAL`, the less often L1 transactions need to be sent to the `L2OutputOracle`
 contract, but L2 users will need to wait a bit longer for an output root to be included in L1 (the settlement layer)
-that includes their intention to withdrawal from the system.
+that includes their intention to withdraw from the system.
 The honest `op-proposer` algorithm assumes a connection to the `L2OutputOracle` contract to know
 the L2 block number that corresponds to the next output proposal that must be submitted. It also
...
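A compressed sketch of the honest proposer loop described above, with assumed Go interfaces standing in for the `L2OutputOracle` binding and the rollup node client (names and shapes are illustrative, not the actual `op-proposer` code):

```go
package proposersketch

// OutputOracle stands in for a binding to the L2OutputOracle contract.
type OutputOracle interface {
	NextBlockNumber() (uint64, error) // next L2 block number an output is expected for
	ProposeL2Output(outputRoot [32]byte, l2BlockNumber uint64) error
}

// RollupClient stands in for the rollup node RPC used by the proposer.
type RollupClient interface {
	SafeL2Block() (uint64, error) // highest L2 block the proposer is willing to propose for
	OutputRootAt(l2BlockNumber uint64) ([32]byte, error)
}

// proposeNext submits one output proposal if the L2 chain has reached the
// block number the oracle is waiting for; otherwise it does nothing.
func proposeNext(oracle OutputOracle, rollup RollupClient) error {
	next, err := oracle.NextBlockNumber()
	if err != nil {
		return err
	}
	tip, err := rollup.SafeL2Block()
	if err != nil || tip < next {
		return err // not yet time to propose, or the lookup failed
	}
	root, err := rollup.OutputRootAt(next)
	if err != nil {
		return err
	}
	return oracle.ProposeL2Output(root, next)
}
```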
@@ -132,7 +132,7 @@ The default is to maintain a peer count with a tide-system based on active peer
 except those that are marked as trusted or have a grace period.
 Peers will have a grace period for a configurable amount of time after joining.
-In emergency, when memory runs low, the node should start pruning more aggressively.
+In an emergency, when memory runs low, the node should start pruning more aggressively.
 Peer records can be persisted to disk to quickly reconnect with known peers after restarting the rollup node.
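A small sketch of the pruning policy described above; field and threshold names are illustrative rather than the rollup node's actual configuration.

```go
package peersketch

import "time"

type Peer struct {
	Trusted     bool
	ConnectedAt time.Time
}

type PrunePolicy struct {
	MaxPeers    int
	GracePeriod time.Duration
}

// shouldPrune reports whether a given peer may be disconnected right now:
// only when the active peer count is over the limit, and never for trusted
// peers or peers still inside their grace period.
func (p PrunePolicy) shouldPrune(peer Peer, activePeers int, lowMemory bool, now time.Time) bool {
	limit := p.MaxPeers
	if lowMemory {
		limit = limit / 2 // prune more aggressively in an emergency
	}
	if activePeers <= limit {
		return false
	}
	if peer.Trusted || now.Sub(peer.ConnectedAt) < p.GracePeriod {
		return false // trusted peers and fresh connections are exempt
	}
	return true
}
```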
@@ -398,7 +398,7 @@ A `res = 0` response should be verified to:
 override any previous chain, until the final L2 chain can be reproduced from L1 data.
 A `res > 0` response code should not be accepted. The result code is helpful for debugging,
-but the client should regard any error like any any other unanswered request, as the responding peer cannot be trusted.
+but the client should regard any error like any other unanswered request, as the responding peer cannot be trusted.
---- ----
...
@@ -160,7 +160,7 @@ equal to `MAX_RLP_BYTES_PER_CHANNEL`). Therefore every field size of span batch
 `MAX_SPAN_BATCH_SIZE` . There can be at least single span batch per channel, and channel size is limited
 to `MAX_RLP_BYTES_PER_CHANNEL` and you may think that there is already an implicit limit. However, having an explicit
 limit for span batch is helpful for several reasons. We may save computation costs by avoiding malicious input while
-decoding. For example, lets say bad batcher wrote span batch which `block_count = max.Uint64`. We may early return using
+decoding. For example, let's say bad batcher wrote span batch which `block_count = max.Uint64`. We may early return using
 the explicit limit, not trying to consume data until EOF is reached. We can also safely preallocate memory for decoding
 because we know the upper limit of memory usage.
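A minimal Go sketch of that early-return idea: validate `block_count` against an explicit upper bound before decoding or preallocating anything. The constant value and helper shape are assumptions for illustration, not the actual span batch decoder.

```go
package spanbatchsketch

import (
	"encoding/binary"
	"errors"
	"io"
)

// Assumed bound for illustration; the real explicit limit is the
// MAX_SPAN_BATCH_SIZE constant discussed above.
const maxSpanBatchElements = 10_000_000

// readBlockCount decodes block_count and rejects absurd values up front, so a
// malicious `block_count = max.Uint64` never drives further decoding or
// allocation.
func readBlockCount(r io.ByteReader) (uint64, error) {
	blockCount, err := binary.ReadUvarint(r)
	if err != nil {
		return 0, err
	}
	if blockCount > maxSpanBatchElements {
		return 0, errors.New("span batch block_count exceeds the explicit limit")
	}
	// With the bound established, per-block slices can be preallocated safely,
	// e.g. make([]uint64, 0, blockCount) for the block timestamps.
	return blockCount, nil
}
```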
@@ -203,7 +203,7 @@ This adds more complexity, but organizes data for improved compression by groupi
 ### RLP encoding for only variable length fields
 Further size optimization can be done by packing variable length fields, such as `access_list`.
-However, doing this will introduce much more code complexity, comparing to benefiting by size reduction.
+However, doing this will introduce much more code complexity, compared to benefiting from size reduction.
 Our goal is to find the sweet spot on code complexity - span batch size tradeoff.
 I decided that using RLP for all variable length fields will be the best option,
...
@@ -104,7 +104,7 @@ The major/minor/patch versions should align with that of the upstream protocol t
 Users of the protocol can choose to implement custom support for the alternative `<build>`,
 but may work out of the box if the major features are consistent with that of the upstream protocol version.
-The 8 byte `<build>` identifier may be presented as string for human readability if the contents are alpha-numeric,
+The 8 byte `<build>` identifier may be presented as a string for human readability if the contents are alpha-numeric,
 including `-` and `.`, as outlined in the [Semver] format specs. Trailing `0` bytes can be used for padding.
 It may be presented as `0x`-prefixed hex string otherwise.
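A short Go sketch of that presentation rule, with an illustrative helper name:

```go
package versionsketch

import (
	"encoding/hex"
	"strings"
)

// formatBuild renders the 8-byte <build> field as a readable string when it
// is alpha-numeric (plus '-' and '.'), treating trailing 0 bytes as padding,
// and as a 0x-prefixed hex string otherwise.
func formatBuild(build [8]byte) string {
	s := strings.TrimRight(string(build[:]), "\x00")
	for _, c := range s {
		alnum := (c >= '0' && c <= '9') || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
		if !alnum && c != '-' && c != '.' {
			return "0x" + hex.EncodeToString(build[:])
		}
	}
	return s
}
```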
@@ -137,7 +137,7 @@ The `<pre-release>` `0`-value is reserved for non-prereleases, i.e. `v3.1.0` is
 Node-software may support a pre-release, but must not activate any protocol changes without the user explicitly
 opting in through the means of a feature-flag or configuration change.
-A pre-release is not an official version, and meant for protocol developers to communicate an experimental changeset
+A pre-release is not an official version and is meant for protocol developers to communicate an experimental changeset
 before the changeset is reviewed by governance. Pre-releases are subject to change.
 ### Protocol Version Exposure
@@ -290,7 +290,7 @@ The Canyon upgrade contains the Shapella upgrade from L1 and some minor protocol
 - [EIP-3855: PUSH0 instruction](https://eips.ethereum.org/EIPS/eip-3855)
 - [EIP-3860: Limit and meter initcode](https://eips.ethereum.org/EIPS/eip-3860)
 - [EIP-4895: Beacon chain push withdrawals as operations](https://eips.ethereum.org/EIPS/eip-4895)
-- [Withdrawlas are prohibited in P2P Blocks](./rollup-node-p2p.md#block-validation)
+- [Withdrawals are prohibited in P2P Blocks](./rollup-node-p2p.md#block-validation)
 - [Withdrawals should be set to the empty array with Canyon](./derivation.md#building-individual-payload-attributes)
 - [EIP-6049: Deprecate SELFDESTRUCT](https://eips.ethereum.org/EIPS/eip-6049)
 - [Modifies the EIP-1559 Denominator](./exec-engine.md#1559-parameters)
...