Commit 7f8b74de authored by Andreas Bigger

📝 fix batcher intradoc links

📝 batcher doc-links

indexer: Fix startup errors

- The block locator was being initialized with a zero hash, which caused `Update` to return `nil` and crash the process (see the sketch after this list).
- Adds an L2 conf depth parameter, since L2 can now reorg.
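A minimal sketch of the startup fix, in the spirit of the `Update` change in the diff below. The `BlockLocator` struct and the `startingLocator` helper are illustrative names for this sketch, not the indexer's actual API:

```go
package indexer

import (
	"context"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// BlockLocator mirrors the indexer's db.BlockLocator: a (number, hash) pair
// identifying the last block that was indexed.
type BlockLocator struct {
	Number uint64
	Hash   common.Hash
}

// startingLocator returns the locator to resume indexing from. If the database
// has no highest block yet (first run), the locator is seeded from the
// configured start block's real header rather than a zero hash.
func startingLocator(
	ctx context.Context,
	l1 *ethclient.Client,
	highestFromDB *BlockLocator, // nil on first run
	startBlock uint64,
) (BlockLocator, error) {
	if highestFromDB != nil {
		return *highestFromDB, nil
	}
	header, err := l1.HeaderByNumber(ctx, new(big.Int).SetUint64(startBlock))
	if err != nil {
		return BlockLocator{}, fmt.Errorf("error fetching header by number: %w", err)
	}
	return BlockLocator{Number: startBlock, Hash: header.Hash()}, nil
}
```

Seeding the locator with the start block's real hash means the first header-selection pass has a valid parent to compare against, instead of a zero hash that matches nothing.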

update changesets to latest version

op-node: Turn down log level of enode filtering

This was spamming ~12 log lines every 2 seconds, so I turned the log level down from `Debug` to `Trace` (sketched below).
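A minimal sketch of the effect, assuming a simplified filter closure and an illustrative `opstackEntry` ENR record type (the real filter lives in op-node's discovery code): rejections that fire for nearly every discovered peer now log at `Trace`, so they no longer appear even when `Debug` logging is enabled.

```go
package p2p

import (
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

// opstackEntry is an illustrative stand-in for the op-stack ENR entry; the
// real entry has its own RLP encoding.
type opstackEntry struct {
	ChainID uint64
	Version uint64
}

func (opstackEntry) ENRKey() string { return "opstack" }

// filterEnodes sketches the dial filter: nodes without a matching op-stack
// record are rejected, and the rejection is logged at Trace rather than Debug
// so routine discovery noise stays out of debug output.
func filterEnodes(logger log.Logger, wantChainID uint64) func(*enode.Node) bool {
	return func(node *enode.Node) bool {
		var dat opstackEntry
		if err := node.Load(&dat); err != nil {
			logger.Trace("discovered node record has no opstack info", "node", node.ID(), "err", err)
			return false
		}
		if dat.ChainID != wantChainID {
			logger.Trace("discovered node record has no matching chain ID", "node", node.ID(), "got", dat.ChainID, "expected", wantChainID)
			return false
		}
		return true
	}
}
```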

deletes old changeset patches

check-changed: Add additional patterns to force total rebuild

Adds several patterns that force a rebuild of all packages when they match a changed file: the CI configuration, the root `package.json` and `yarn.lock`, and the check-changed script itself (see the sketch below).
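For illustration, a Go sketch of the mechanism, assuming a hypothetical `shouldRebuildAll` helper; the real logic is the Python `check-changed` script shown in the diff below, and the patterns here are simplified to prefix matches:

```go
package checkchanged

import "regexp"

// rebuildAllPatterns mirrors the patterns added to the check-changed script:
// touching CI config, the root dependency manifests, or the script itself
// invalidates every package.
var rebuildAllPatterns = []*regexp.Regexp{
	regexp.MustCompile(`^\.circleci/`),
	regexp.MustCompile(`^\.github/`),
	regexp.MustCompile(`^package\.json`),
	regexp.MustCompile(`^yarn\.lock`),
	regexp.MustCompile(`ops/check-changed/`),
}

// shouldRebuildAll is a hypothetical helper: it reports whether any changed
// file matches a pattern that forces a total rebuild.
func shouldRebuildAll(changedFiles []string) bool {
	for _, f := range changedFiles {
		for _, p := range rebuildAllPatterns {
			if p.MatchString(f) {
				return true
			}
		}
	}
	return false
}
```

If any changed path matches, every package is treated as changed and CI rebuilds everything.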

fix(bmon): Fix balance monitor name

Version Packages

indexer: changeset

Version Packages

creates indexer Docker build CircleCI configs

feat(ctb): commit prod Goerli config

Commits the production Goerli configuration for Bedrock.

Add vault recipients

Remove coinbase

feat(ctb): Goerli deployment artifacts

Commits deployment artifacts for Goerli

 typo fix

📝 update specs link for bedrock 🔨

fix: batcher doc line lengths

fix: batcher doc aliasing?

nit: remove doc link alias hyphen
parent e8c5eac8
@@ -1045,6 +1045,20 @@ workflows:
     - oplabs-gcr
 requires:
     - op-heartbeat-docker-build
+- docker-build:
+    name: indexer-docker-build
+    docker_file: indexer/Dockerfile
+    docker_name: indexer
+    docker_tags: <<pipeline.git.revision>>,<<pipeline.git.branch>>
+    docker_context: .
+- docker-publish:
+    name: indexer-docker-publish
+    docker_name: indexer
+    docker_tags: <<pipeline.git.revision>>,<<pipeline.git.branch>>
+    context:
+      - oplabs-gcr
+    requires:
+      - indexer-docker-build
 - hive-test:
     name: hive-test-rpc
     version: <<pipeline.git.revision>>
......
@@ -8,7 +8,7 @@ There are plenty of ways to contribute, in particular we appreciate support in t
 - Fixing and responding to existing issues. You can start off with those tagged ["good first issue"](https://github.com/ethereum-optimism/optimism/contribute) which are meant as introductory issues for external contributors.
 - Improving the [community site](https://community.optimism.io/), [documentation](https://github.com/ethereum-optimism/community-hub) and [tutorials](https://github.com/ethereum-optimism/optimism-tutorial).
 - Become an "Optimizer" and answer questions in the [Optimism Discord](https://discord.optimism.io).
-- Get involved in the protocol design process by proposing changes or new features or write parts of the spec yourself in the [optimistic-specs repo](https://github.com/ethereum-optimism/optimistic-specs).
+- Get involved in the protocol design process by proposing changes or new features or write parts of the spec yourself in the [specs subdirectory](./specs/).
 Note that we have a [Code of Conduct](https://github.com/ethereum-optimism/.github/blob/master/CODE_OF_CONDUCT.md), please follow it in all your interactions with the project.
......
 # @eth-optimism/indexer
+## 0.7.0
+### Minor Changes
+- ed50bd5b4: Bump indexer
+## 0.6.0
+### Minor Changes
+- ecf0cc59b: Fix startup issues, add L2 conf depth
 ## 0.5.0
 ### Minor Changes
......
@@ -65,9 +65,13 @@ type Config struct {
 // L1StartBlockNumber is the block number to start indexing L1 from.
 L1StartBlockNumber uint64
-// ConfDepth is the number of confirmations after which headers are
-// considered confirmed.
-ConfDepth uint64
+// L1ConfDepth is the number of confirmations after which headers are
+// considered confirmed on L1.
+L1ConfDepth uint64
+// L2ConfDepth is the number of confirmations after which headers are
+// considered confirmed on L2.
+L2ConfDepth uint64
 // MaxHeaderBatchSize is the maximum number of headers to request as a
 // batch.
@@ -122,7 +126,8 @@ func NewConfig(ctx *cli.Context) (Config, error) {
 LogLevel: ctx.GlobalString(flags.LogLevelFlag.Name),
 LogTerminal: ctx.GlobalBool(flags.LogTerminalFlag.Name),
 L1StartBlockNumber: ctx.GlobalUint64(flags.L1StartBlockNumberFlag.Name),
-ConfDepth: ctx.GlobalUint64(flags.ConfDepthFlag.Name),
+L1ConfDepth: ctx.GlobalUint64(flags.L1ConfDepthFlag.Name),
+L2ConfDepth: ctx.GlobalUint64(flags.L2ConfDepthFlag.Name),
 MaxHeaderBatchSize: ctx.GlobalUint64(flags.MaxHeaderBatchSizeFlag.Name),
 MetricsServerEnable: ctx.GlobalBool(flags.MetricsServerEnableFlag.Name),
 RESTHostname: ctx.GlobalString(flags.RESTHostnameFlag.Name),
......
@@ -137,11 +137,17 @@ var (
 Value: 0,
 EnvVar: prefixEnvVar("START_BLOCK_NUMBER"),
 }
-ConfDepthFlag = cli.Uint64Flag{
-Name: "conf-depth",
-Usage: "The number of confirmations after which headers are considered confirmed",
+L1ConfDepthFlag = cli.Uint64Flag{
+Name: "l1-conf-depth",
+Usage: "The number of confirmations after which headers are considered confirmed on L1",
 Value: 20,
-EnvVar: prefixEnvVar("CONF_DEPTH"),
+EnvVar: prefixEnvVar("L1_CONF_DEPTH"),
+}
+L2ConfDepthFlag = cli.Uint64Flag{
+Name: "l2-conf-depth",
+Usage: "The number of confirmations after which headers are considered confirmed on L2",
+Value: 24,
+EnvVar: prefixEnvVar("L2_CONF_DEPTH"),
 }
 MaxHeaderBatchSizeFlag = cli.Uint64Flag{
 Name: "max-header-batch-size",
@@ -203,7 +209,8 @@ var optionalFlags = []cli.Flag{
 SentryEnableFlag,
 SentryDsnFlag,
 SentryTraceRateFlag,
-ConfDepthFlag,
+L1ConfDepthFlag,
+L2ConfDepthFlag,
 MaxHeaderBatchSizeFlag,
 L1StartBlockNumberFlag,
 RESTHostnameFlag,
......
@@ -164,7 +164,7 @@ func NewIndexer(cfg Config) (*Indexer, error) {
 ChainID: new(big.Int).SetUint64(cfg.ChainID),
 AddressManager: addrManager,
 DB: db,
-ConfDepth: cfg.ConfDepth,
+ConfDepth: cfg.L1ConfDepth,
 MaxHeaderBatchSize: cfg.MaxHeaderBatchSize,
 StartBlockNumber: cfg.L1StartBlockNumber,
 Bedrock: cfg.Bedrock,
@@ -179,7 +179,7 @@ func NewIndexer(cfg Config) (*Indexer, error) {
 L2RPC: l2RPC,
 L2Client: l2Client,
 DB: db,
-ConfDepth: cfg.ConfDepth,
+ConfDepth: cfg.L2ConfDepth,
 MaxHeaderBatchSize: cfg.MaxHeaderBatchSize,
 StartBlockNumber: uint64(0),
 Bedrock: cfg.Bedrock,
......
@@ -73,7 +73,8 @@ func TestBedrockIndexer(t *testing.T) {
 LogLevel: "info",
 LogTerminal: true,
 L1StartBlockNumber: 0,
-ConfDepth: 1,
+L1ConfDepth: 1,
+L2ConfDepth: 1,
 MaxHeaderBatchSize: 2,
 RESTHostname: "127.0.0.1",
 RESTPort: 7980,
......
 {
 "name": "@eth-optimism/indexer",
-"version": "0.5.0",
+"version": "0.7.0",
 "private": true,
 "license": "MIT"
 }
@@ -71,6 +71,7 @@ type Service struct {
 batchScanner *scc.StateCommitmentChainFilterer
 latestHeader uint64
 headerSelector *ConfirmedHeaderSelector
+l1Client *ethclient.Client
 metrics *metrics.Metrics
 tokenCache map[common.Address]*db.Token
@@ -143,6 +144,7 @@ func NewService(cfg ServiceConfig) (*Service, error) {
 ZeroAddress: db.ETHL1Token,
 },
 isBedrock: cfg.Bedrock,
+l1Client: cfg.L1Client,
 }
 service.wg.Add(1)
 return service, nil
@@ -202,16 +204,22 @@ func (s *Service) loop() {
 }
 func (s *Service) Update(newHeader *types.Header) error {
-var lowest = db.BlockLocator{
-Number: s.cfg.StartBlockNumber,
-}
+var lowest db.BlockLocator
 highestConfirmed, err := s.cfg.DB.GetHighestL1Block()
 if err != nil {
 return err
 }
-if highestConfirmed != nil {
-lowest = *highestConfirmed
+if highestConfirmed == nil {
+startHeader, err := s.l1Client.HeaderByNumber(s.ctx, new(big.Int).SetUint64(s.cfg.StartBlockNumber))
+if err != nil {
+return fmt.Errorf("error fetching header by number: %w", err)
+}
+highestConfirmed = &db.BlockLocator{
+Number: s.cfg.StartBlockNumber,
+Hash: startHeader.Hash(),
+}
 }
+lowest = *highestConfirmed
 headers, err := s.headerSelector.NewHead(s.ctx, lowest.Number, newHeader, s.cfg.RawL1Client)
 if err != nil {
@@ -260,22 +268,28 @@ func (s *Service) Update(newHeader *types.Header) error {
 bridgeDepositsCh <- deposits
 }(bridgeImpl)
 }
-go func() {
-provenWithdrawals, err := s.portal.GetProvenWithdrawalsByBlockRange(s.ctx, startHeight, endHeight)
-if err != nil {
-errCh <- err
-return
-}
-provenWithdrawalsCh <- provenWithdrawals
-}()
-go func() {
-finalizedWithdrawals, err := s.portal.GetFinalizedWithdrawalsByBlockRange(s.ctx, startHeight, endHeight)
-if err != nil {
-errCh <- err
-return
-}
-finalizedWithdrawalsCh <- finalizedWithdrawals
-}()
+if s.isBedrock {
+go func() {
+provenWithdrawals, err := s.portal.GetProvenWithdrawalsByBlockRange(s.ctx, startHeight, endHeight)
+if err != nil {
+errCh <- err
+return
+}
+provenWithdrawalsCh <- provenWithdrawals
+}()
+go func() {
+finalizedWithdrawals, err := s.portal.GetFinalizedWithdrawalsByBlockRange(s.ctx, startHeight, endHeight)
+if err != nil {
+errCh <- err
+return
+}
+finalizedWithdrawalsCh <- finalizedWithdrawals
+}()
+} else {
+provenWithdrawalsCh <- make(bridge.ProvenWithdrawalsMap)
+finalizedWithdrawalsCh <- make(bridge.FinalizedWithdrawalsMap)
+}
 var receives int
 for {
......
@@ -216,17 +216,17 @@ func FilterEnodes(log log.Logger, cfg *rollup.Config) func(node *enode.Node) boo
 err := node.Load(&dat)
 // if the entry does not exist, or if it is invalid, then ignore the node
 if err != nil {
-log.Debug("discovered node record has no opstack info", "node", node.ID(), "err", err)
+log.Trace("discovered node record has no opstack info", "node", node.ID(), "err", err)
 return false
 }
 // check chain ID matches
 if cfg.L2ChainID.Uint64() != dat.chainID {
-log.Debug("discovered node record has no matching chain ID", "node", node.ID(), "got", dat.chainID, "expected", cfg.L2ChainID.Uint64())
+log.Trace("discovered node record has no matching chain ID", "node", node.ID(), "got", dat.chainID, "expected", cfg.L2ChainID.Uint64())
 return false
 }
 // check version matches
 if dat.version != 0 {
-log.Debug("discovered node record has no matching version", "node", node.ID(), "got", dat.version, "expected", 0)
+log.Trace("discovered node record has no matching version", "node", node.ID(), "got", dat.version, "expected", 0)
 return false
 }
 return true
......
@@ -6,6 +6,14 @@ import sys
 from github import Github
+REBUILD_ALL_PATTERNS = [
+    r'^\.circleci/\.*',
+    r'^\.github/\.*',
+    r'^package\.json',
+    r'^yarn\.lock',
+    r'ops/check-changed/.*'
+]
 WHITELISTED_BRANCHES = {
     'master',
     'develop'
@@ -42,7 +50,7 @@ log = logging.getLogger(__name__)
 def main():
     patterns = sys.argv[1].split(',')
-    patterns.append(r'^\.circleci/\.*')
+    patterns = patterns + REBUILD_ALL_PATTERNS
     fp = os.path.realpath(__file__)
     monorepo_path = os.path.realpath(os.path.join(fp, '..', '..'))
......
@@ -6,25 +6,25 @@ FROM ethereumoptimism/foundry:latest as foundry
 FROM node:16-alpine3.14 as base
 RUN apk --no-cache add curl \
 jq \
 python3 \
 ca-certificates \
 git \
 make \
 gcc \
 musl-dev \
 linux-headers \
 bash \
 build-base \
 gcompat
 ENV GLIBC_KEY=https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
 ENV GLIBC_KEY_FILE=/etc/apk/keys/sgerrand.rsa.pub
 ENV GLIBC_RELEASE=https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.35-r0/glibc-2.35-r0.apk
 RUN wget -q -O ${GLIBC_KEY_FILE} ${GLIBC_KEY} \
 && wget -O glibc.apk ${GLIBC_RELEASE} \
 && apk add glibc.apk --force
 COPY --from=foundry /usr/local/bin/forge /usr/local/bin/forge
 COPY --from=foundry /usr/local/bin/cast /usr/local/bin/cast
@@ -108,6 +108,6 @@ FROM base as drippie-mon
 WORKDIR /opt/optimism/packages/drippie-mon
 ENTRYPOINT ["npm", "run", "start"]
-FROM base as drippie-mon
+FROM base as balance-monitor
 WORKDIR /opt/optimism/packages/balance-monitor
 ENTRYPOINT ["yarn", "run", "start:prod"]
 # @eth-optimism/balance-monitor
+## 0.0.4
+### Patch Changes
+- 013bd456f: Fixed the name in Dockerfile.packages
 ## 0.0.3
 ### Patch Changes
......
 {
 "name": "@eth-optimism/balance-monitor",
-"version": "0.0.3",
+"version": "0.0.4",
 "description": "[Optimism] Forta Agent that reports whether certain accounts have fallen below some balance",
 "main": "dist/index",
 "types": "dist/index",
......
 {
-"finalSystemOwner": "DUMMY",
-"controller": "DUMMY",
-"l1StartingBlockTag": "DUMMY",
+"numDeployConfirmations": 1,
+"finalSystemOwner": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
+"controller": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
+"l1StartingBlockTag": "FILL_IN_DAY_OF",
 "l1ChainID": 5,
 "l2ChainID": 420,
 "l2BlockTime": 2,
-"maxSequencerDrift": 1200,
+"maxSequencerDrift": 600,
 "sequencerWindowSize": 3600,
-"channelTimeout": 120,
+"channelTimeout": 300,
-"p2pSequencerAddress": "DUMMY",
+"p2pSequencerAddress": "0x715b7219D986641DF9eFd9C7Ef01218D528e19ec",
 "batchInboxAddress": "0xff00000000000000000000000000000000000420",
-"batchSenderAddress": "DUMMY",
+"batchSenderAddress": "0x7431310e026B69BFC676C0013E12A1A11411EEc9",
-"l2OutputOracleSubmissionInterval": 20,
-"l2OutputOracleStartingTimestamp": -1,
-"l2OutputOracleProposer": "DUMMY",
-"l2OutputOracleChallenger": "DUMMY",
-"finalizationPeriodSeconds": 2,
-"l2GenesisBlockGasLimit": "0x17D7840",
-"l2GenesisBlockBaseFeePerGas": "0x3b9aca00",
-"l2CrossDomainMessengerOwner": "DUMMY",
-"governanceTokenName": "Optimism",
-"governanceTokenSymbol": "OP",
-"governanceTokenOwner": "0x038a8825A3C3B0c08d52Cc76E5E361953Cf6Dc76",
+"l2OutputOracleSubmissionInterval": 12,
+"l2OutputOracleStartingBlockNumber": "FILL_IN_DAY_OF",
+"l2OutputOracleStartingTimestamp": "FILL_IN_DAY_OF",
+"l2OutputOracleProposer": "0x02b1786A85Ec3f71fBbBa46507780dB7cF9014f6",
+"l2OutputOracleChallenger": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
+"finalizationPeriodSeconds": 12,
+"proxyAdminOwner": "0xf80267194936da1E98dB10bcE06F3147D580a62e",
+"baseFeeVaultRecipient": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
+"l1FeeVaultRecipient": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
+"sequencerFeeVaultRecipient": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
 "gasPriceOracleOverhead": 2100,
 "gasPriceOracleScalar": 1000000,
+"governanceTokenSymbol": "OP",
+"governanceTokenName": "Optimism",
+"governanceTokenOwner": "0x038a8825A3C3B0c08d52Cc76E5E361953Cf6Dc76",
+"l2GenesisBlockGasLimit": "0x17D7840",
+"l2GenesisBlockBaseFeePerGas": "0x3b9aca00",
 "eip1559Denominator": 50,
 "eip1559Elasticity": 10
 }
@@ -4,10 +4,10 @@
 The batch submitter, also referred to as the batcher, is the entity submitting the L2 sequencer data to L1,
 to make it available for verifiers.
-[derivation-spec]: ./derivation.md
+[derivation spec]: derivation.md
-The format of the data transactions is defined in the [derivation spec]: the data is constructed from L2 blocks
-in the reverse order as it is derived from data into L2 blocks.
+The format of the data transactions is defined in the [derivation spec]:
+the data is constructed from L2 blocks in the reverse order as it is derived from data into L2 blocks.
 The timing, operation and transaction signing is implementation-specific: any data can be submitted at any time,
 but only the data that matches the [derivation spec] rules will be valid from the verifier perspective.
......
@@ -263,7 +263,7 @@ The rest of the diagram is conceptually distinct from the first part and illustr
 channels have been reordered.
 The first line shows batcher transactions. Note that in this case, there exists an ordering of the batches that makes
-all frames within the channels appear contiguously. This is not true in true in general. For instance, in the second
+all frames within the channels appear contiguously. This is not true in general. For instance, in the second
 transaction, the position of `A1` and `B0` could have been inverted for exactly the same result — no changes needed in
 the rest of the diagram.
......