exchain / nebula · Commits

Commit `4b755207` "op-node: update readme (#13164)", authored Dec 02, 2024 by protolambda, committed by GitHub Dec 02, 2024.
Parent: `ebd2cfd6`. 1 changed file: `op-node/README.md` (+222, −75).
# `op-node`

Issues:
[monorepo](https://github.com/ethereum-optimism/optimism/issues?q=is%3Aissue%20state%3Aopen%20label%3AA-op-node)

Pull requests:
[monorepo](https://github.com/ethereum-optimism/optimism/pulls?q=is%3Aopen+is%3Apr+label%3AA-op-node)
User docs:

- [How to run a node](https://docs.optimism.io/builders/node-operators/rollup-node)

Specs:

- [rollup-node spec]

The op-node is the reference implementation of the [rollup-node spec].
It functions as a Consensus Layer client of an OP Stack chain:
it builds, relays and verifies the canonical chain of blocks.
The blocks are processed by an execution layer client, like [op-geth].

[rollup-node spec]: https://github.com/ethereum-optimism/specs/blob/main/specs/protocol/rollup-node.md
[op-geth]: https://github.com/ethereum-optimism/op-geth
## Quickstart

```bash
make op-node

# Network selection:
# - Join any of the pre-configured networks with the `--network` flag.
# - Alternatively, join a custom network with the `--rollup.config` flag.
#
# Essential connections:
# - L1 ethereum RPC, to fetch blocks, receipts, finality
# - L1 beacon API, to fetch blobs
# - L2 engine API, to apply new blocks to
# - P2P TCP port, to expose publicly, to retrieve and relay the latest L2 blocks
# - P2P UDP port, to expose publicly, to discover other nodes to peer with
# - RPC port, to serve RPC of the op-node
#
# Other:
# - Sync mode: how to interact with the execution-engine,
#   such that it enters the preferred form of syncing:
#   - consensus-layer (block by block sync)
#   - execution-layer (e.g. snap-sync)
#
# Tip: every CLI flag has an env-var equivalent (run `op-node --help` for more information)
#
# If running inside docker, make sure to mount the persistent data below as a (host) volume,
# as it may be lost on restart otherwise:
# - P2P private key: auto-generated when missing, used to maintain a stable peer identity.
# - Peerstore DB: remembers peer records to connect with, used to not wait for peer discovery.
# - Discovery DB: maintains DHT data, to avoid repeating some discovery work after restarting.
./bin/op-node \
  --network=op-sepolia \
  --l1=ws://localhost:8546 \
  --l1.beacon=http://localhost:4000 \
  --l2=ws://localhost:9001 \
  --p2p.listen.tcp=9222 \
  --p2p.listen.udp=9222 \
  --rpc.port=7000 \
  --syncmode=execution-layer \
  --p2p.priv.path=opnode_p2p_priv.txt \
  --p2p.peerstore.path=opnode_peerstore_db \
  --p2p.discovery.path=opnode_discovery_db
```
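Every flag in the example above has an env-var equivalent. A minimal sketch of the naming convention, assuming the `OP_NODE_` prefix with dots and dashes mapped to underscores; a few flags use hand-picked names, so `op-node --help` remains the authoritative list:

```go
package main

import (
	"fmt"
	"strings"
)

// envVarName sketches the assumed flag-to-env-var naming convention:
// strip the leading dashes, replace separators with underscores,
// uppercase, and prefix with OP_NODE_.
func envVarName(flag string) string {
	name := strings.TrimPrefix(flag, "--")
	name = strings.NewReplacer(".", "_", "-", "_").Replace(name)
	return "OP_NODE_" + strings.ToUpper(name)
}

func main() {
	for _, f := range []string{"--network", "--l1.beacon", "--rpc.port"} {
		fmt.Printf("%s -> %s\n", f, envVarName(f))
	}
}
```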
## Usage

### Build from source

```bash
# from op-node dir:
make op-node
./bin/op-node --help
```

### Run from source

```bash
# from op-node dir:
go run ./cmd --help
```

### Run unit tests

```bash
# from op-node dir:
make test
```

[eth-json-rpc-spec]: https://ethereum.github.io/execution-apis/api-documentation
### Build docker image

See the `op-node` docker-bake target.
## Implementation overview

### Interactions

<!-- how this interacts with other modules -->
<!-- dependencies on other modules -->

To start syncing the rollup, connect to one L1 Execution Client that supports the
[Ethereum JSON-RPC spec][eth-json-rpc-spec],
an L1 Consensus Client that supports the
[Beacon Node API](https://ethereum.github.io/beacon-APIs),
and an OP Stack based Execution Client that supports the
[Ethereum JSON-RPC spec][eth-json-rpc-spec]:

- L1: use any L1 client; RPC, websocket, or IPC (connection config may differ)
- L2: use any OP Stack Execution Client, like [`op-geth`](https://github.com/ethereum-optimism/op-geth)

Note that a websocket or IPC connection is preferred for event notifications to improve sync; HTTP RPC works with adaptive polling.

```bash
./bin/op-node \
  --l1=ws://localhost:8546 \
  --l1.beacon=http://localhost:4000 \
  --l2=ws://localhost:9001 \
  --rollup.config=./path-to-network-config/rollup.json \
  --rpc.addr=127.0.0.1 \
  --rpc.port=7000
```
## Product

The op-node **builds**, **relays** and **verifies** the canonical chain of blocks.

The op-node does not store critical data:
it can recover from any existing L2 chain pre-state
that is sufficiently synced such that the available input data can complete the sync.

The op-node **builds** blocks:
either from scratch as a sequencer, or from block-inputs (made available through L1) as a verifier.

The block **relay** is a happy-path: the P2P sync is optional, and does not affect the ability to verify.
However, the block relay is still important for UX, as it lowers the latency to the latest state.

The blocks are **verified**: only valid L2 blocks that can be reproduced from L1 data are accepted.

### Optimization target

<!-- What do we optimize for in this implementation? -->

**Safely and reliably sync the canonical chain.**

The op-node implements the three core product features as follows:

- Block **building**: extend the chain at a throughput rate and latency that is safe to relay and verify.
- Block **relaying**: while keeping throughput high and latency low, prevent single points of failure.
- Block **verification**: efficiently sync, but always fully verify, and follow the canonical chain.

Trade-offs are made here: verification safety is at odds with ideal throughput, latency and efficiency.
Or in other words: safety vs. liveness; chain parameters determine this trade-off.
The implementation offers this trade-off, siding with safety by default,
and design choices should aim to improve the trade-off.
### Vision

The op-node is changing in two ways:

- [Reliability](#reliability): improve reliability through better processing, testing and syncing.
- [Interoperability](#interoperability): cross-chain messaging support.
#### Reliability

- Parallel derivation processes: [Issue 10864](https://github.com/ethereum-optimism/optimism/issues/10864)
- Event tests: [Issue 13163](https://github.com/ethereum-optimism/optimism/issues/13163)
- Improving P2P sync: [Issue 11779](https://github.com/ethereum-optimism/optimism/issues/11779)
#### Interoperability

The OP Stack is making chains natively interoperable:
messages between chains form safety dependencies, and are verified asynchronously.
Asynchronous verification entails that the op-node reorgs away a block
if and when the block is determined to be invalid.

The [op-supervisor] specializes in this dependency-verification work.
The op-node encapsulates all the single-chain concerns:
it prepares the local safety data-points (DA confirmation and block contents) for the op-supervisor.
The op-supervisor then verifies the cross-chain safety, and promotes the block safety level accordingly,
which the op-node then follows.

See [Interop specs] and [Interop design-docs] for more information about interoperability.

[op-supervisor]: ../op-supervisor/README.md
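The promotion flow described above can be sketched as follows. The level names echo the interop terminology, but the set of levels is simplified (the interop specs distinguish more granular local/cross levels), and the types and promotion rule are illustrative, not the op-supervisor's actual API:

```go
package main

import "fmt"

// SafetyLevel sketches the block safety labels the op-node follows.
// Each level requires strictly more verification work than the previous one.
type SafetyLevel int

const (
	Unsafe      SafetyLevel = iota // received via P2P, not yet derivable from L1
	CrossUnsafe                    // local data known, cross-chain messages unverified
	Safe                           // derivable from L1 data, cross-chain deps verified
	Finalized                      // backed by finalized L1 data
)

func (s SafetyLevel) String() string {
	return [...]string{"unsafe", "cross-unsafe", "safe", "finalized"}[s]
}

// promote models the op-supervisor raising a block's safety level once the
// corresponding verification step completes; levels never skip or regress here.
func promote(current SafetyLevel) SafetyLevel {
	if current < Finalized {
		return current + 1
	}
	return current
}

func main() {
	lvl := Unsafe
	for lvl != Finalized {
		next := promote(lvl)
		fmt.Printf("%s -> %s\n", lvl, next)
		lvl = next
	}
}
```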
### User stories

<!-- As a **actor** I want **achievement** so that I **benefit** -->

As *a user* I want *reliability* so that I *don't miss blocks or fall out of sync*.

As *a RaaS dev* I want *easy configuration and monitoring* so that I *can run more chains*.

As *a customizoor* I want *clear extensible APIs* so that I *can avoid forking and be a contributor*.

As *a protocol dev* I want *integration with tests* so that I *assert protocol conformance*.

As *a proof dev* I want *reusable state-transition code* so that I *don't reimplement the same thing*.
## Design principles

<!-- design choices / trade-offs -->

- Encapsulate the state-transition:
  - Use interfaces to abstract file-IO / concurrency / etc. away from the state-transition logic.
  - Ensure code-sharing with action-tests and op-program.
- No critical database:
  - Persisting data is ok, but it should be recoverable from external data without too much work.
  - The best chain "sync" is no sync.
- Keep the tech-stack compatible with Ethereum L1:
  - L1 offers well-adopted and battle-tested libraries and standards, e.g. LibP2P, DiscV5, JSON-RPC.
  - L1 supports a tech-stack in different languages, ensuring client diversity, which is important to L2 as well.
  - Downstream devs of the OP Stack should be able to pull in *one* instance of a library that serves both the OP Stack and L1.
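The first principle can be sketched as follows: the derivation logic depends only on a narrow interface, so the same code can be driven by a live RPC client, an action-test harness, or a preimage oracle as in op-program. All names here are illustrative, not the op-node's actual types:

```go
package main

import "fmt"

// L1Source is the seam: state-transition code sees only this interface,
// never file-IO, RPC transports, or concurrency details.
type L1Source interface {
	BlockHashByNumber(n uint64) (string, error)
}

// mapSource is a trivial test double standing in for a live client.
type mapSource map[uint64]string

func (m mapSource) BlockHashByNumber(n uint64) (string, error) {
	h, ok := m[n]
	if !ok {
		return "", fmt.Errorf("unknown block %d", n)
	}
	return h, nil
}

// derive shows the shape only: real derivation would also read receipts
// and blobs through similarly narrow interfaces.
func derive(src L1Source, n uint64) (string, error) {
	return src.BlockHashByNumber(n)
}

func main() {
	h, err := derive(mapSource{1: "0xabc"}, 1)
	fmt.Println(h, err)
}
```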
## Failure modes

This is a brief overview of what might fail, and how the op-node responds.

### L1 downtime

When the L1 data-source is temporarily unavailable, the op-node's `safe`/`finalized` progression halts.
Blocks may continue to sync through the happy-path if P2P connectivity is undisrupted.
### No batch confirmation

As per the [rollup-node spec], the sequencing-window ensures that after a bounded number of L1 blocks
the verifier will infer blocks, to ensure liveness of blocks with deposited transactions.
The op-node will continue to process the happy-path in the meantime,
which may have to be reorged out if it does not match the blocks that are inferred after sequencing-window expiry.
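The sequencing-window bound can be sketched as a simple check. The function and field names are illustrative, not the op-node's actual types, and the window size is per-chain configuration:

```go
package main

import "fmt"

// sequencingWindowExpired sketches the liveness rule: if no batch for an L2
// block lands within sws L1 blocks of its L1 origin, the verifier infers
// (deposits-only) blocks instead of waiting for the sequencer's batch.
func sequencingWindowExpired(l1Origin, l1Head, sws uint64) bool {
	return l1Head >= l1Origin+sws
}

func main() {
	const sws = 3600 // e.g. a sequencing window of 3600 L1 blocks
	fmt.Println(sequencingWindowExpired(100, 3699, sws)) // still within the window
	fmt.Println(sequencingWindowExpired(100, 3700, sws)) // expired: infer blocks
}
```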
### L1 reorg

L1 reorgs are detected passively during traversal: upon traversal to block `N+1`,
if the next canonical block has a parent-hash that does not match the current block `N`,
we know the remote L1 chain view has diverged.
When this happens, the op-node assumes the local view is wrong, and resets itself to follow that of the remote node,
dropping any non-canonical blocks in the process.
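A minimal sketch of this parent-hash check, with illustrative types rather than the op-node's actual ones:

```go
package main

import "fmt"

// BlockRef carries the minimum needed for passive reorg detection.
type BlockRef struct {
	Number     uint64
	Hash       string
	ParentHash string
}

// reorgDetected reports divergence: when traversing from block N to the
// remote chain's block N+1, a parent-hash mismatch means the remote L1
// view has diverged and the op-node must reset to follow it.
func reorgDetected(current, next BlockRef) bool {
	return next.ParentHash != current.Hash
}

func main() {
	n := BlockRef{Number: 100, Hash: "0xaaa"}
	fmt.Println(reorgDetected(n, BlockRef{Number: 101, ParentHash: "0xaaa"})) // canonical extension
	fmt.Println(reorgDetected(n, BlockRef{Number: 101, ParentHash: "0xbbb"})) // diverged: reset
}
```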
### No L1 finality

When L1 does not finalize for an extended period of time,
the op-node is also unable to finalize the L2 chain for that same period.
Note that the `safe` block in the execution-layer is bootstrapped from the `finalized` block:
some verification work may repeat after a restart.
Blocks will continue to be derived from L1 batch-submissions, and optimistic processing will also continue to function.
### P2P failure

On P2P failure, e.g. issues with peering or failed propagation of block-data, the `unsafe` part of the chain may stall:
it will no longer progress optimistically ahead of the `safe` part.
The `safe` blocks will continue to be derived from L1, however, providing higher-latency access to the latest chain.
The op-node may pick the latest `unsafe` blocks back up after recovering its P2P connectivity,
buffering `unsafe` blocks until the `safe` block progression meets the first known buffered `unsafe` block.
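A minimal sketch of this buffering behaviour, assuming contiguous, ascending block numbers; the types are illustrative, not the op-node's actual P2P machinery:

```go
package main

import "fmt"

// unsafeBuffer holds `unsafe` block numbers received after a P2P recovery,
// until the L1-derived `safe` head catches up to the first buffered block.
type unsafeBuffer struct {
	blocks []uint64 // assumed contiguous and ascending
}

func (b *unsafeBuffer) add(n uint64) { b.blocks = append(b.blocks, n) }

// drain returns the buffered blocks once the safe head has reached the
// block just before the first buffered one; until then it returns nil
// and the op-node keeps deriving safe blocks from L1 to close the gap.
func (b *unsafeBuffer) drain(safeHead uint64) []uint64 {
	if len(b.blocks) == 0 || safeHead+1 < b.blocks[0] {
		return nil
	}
	out := b.blocks
	b.blocks = nil
	return out
}

func main() {
	var buf unsafeBuffer
	buf.add(105)
	buf.add(106)
	fmt.Println(buf.drain(100)) // gap remains: nothing to apply yet
	fmt.Println(buf.drain(104)) // safe head reached 104: apply 105, 106
}
```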
### Restarts and resyncing

After a restart, or detection of missing chain data,
the op-node dynamically determines what L1 data is required to continue, based on the syncing state of the execution-engine.
If the sync-state is far behind, the op-node may need archived blob data to sync from the original L1 inputs.
A faster alternative may be to bootstrap through the execution-layer sync mode,
where the execution-engine may perform an optimized long-range sync, such as snap-sync.
## Testing

<!-- describe testing methods and approach to test coverage -->

- Unit tests: encapsulated functionality, fuzz tests, etc. in the op-node Go packages.
- `op-e2e` action tests: in-process Go testing, focused on the onchain aspects,
  e.g. state-transition edge-cases. This applies primarily to the derivation pipeline.
- `op-e2e` system tests: in-process Go testing, focused on the offchain aspects of the op-node,
  e.g. background work, P2P integration, general service functionality.
- Local devnet tests: full end-to-end testing, but set up on minimal resources.
- Kurtosis tests: new automated devnet-like testing. Work in progress.
- Long-running devnet: roll-out for experimental features, to ensure sufficient stability for testnet users.
- Long-running testnet: battle-testing in a public environment.
- Shadow-forks: design phase; testing experiments against shadow copies of real networks.
## L2 Genesis Generation

The `op-node` can generate geth-compatible `genesis.json` files. These files
can be used with `geth init` to initialize the `StateDB` with accounts, storage,
code and balances. The L2 state must be initialized with predeploy contracts
that exist in the state and act as system-level contracts. The `op-node` can
generate a genesis file with these predeploys configured correctly, given
an L1 RPC URL, a deploy config, L2 genesis allocs and an L1 deployments artifact.

The deploy config contains all of the config required to deploy the
system. Examples can be found in `packages/contracts-bedrock/deploy-config`; each
deploy config file is a JSON file. The L2 allocs can be generated using a forge script
in the `contracts-bedrock` package, and the L1 deployments are a JSON file that is the
output of doing an L1 contracts deployment.

Example usage:

```bash
./bin/op-node genesis l2 \
  --l1-rpc $ETH_RPC_URL \
  --deploy-config <PATH_TO_MY_DEPLOY_CONFIG> \
  --l2-allocs <PATH_TO_L2_ALLOCS> \
  --l1-deployments <PATH_TO_L1_DEPLOY_ARTIFACT> \
  --outfile.l2 <PATH_TO_WRITE_L2_GENESIS> \
  --outfile.rollup <PATH_TO_WRITE_OP_NODE_CONFIG>
```

## L1 Devnet Genesis Generation

It is also possible to generate a devnet L1 `genesis.json` file. The L1 allocs can
be generated with the foundry L1 contracts deployment script if the extra parameter
`--sig 'runWithStateDump()'` is added to the deployment command.

```bash
./bin/op-node genesis l1 \
  --deploy-config $CONTRACTS_BEDROCK/deploy-config \
  --l1-deployments <PATH_TO_L1_DEPLOY_ARTIFACT> \
  --l1-allocs <PATH_TO_L1_ALLOCS>
```