Commit 9902541b authored by mergify[bot], committed by GitHub

Merge branch 'develop' into qbzzt/230314-opstack-build

parents 272cb6c6 60e56b5e
...@@ -2,4 +2,4 @@ ...@@ -2,4 +2,4 @@
'@eth-optimism/contracts-bedrock': patch '@eth-optimism/contracts-bedrock': patch
--- ---
Print tenderly simulation links during deployment Reduce the time that the system dictator deploy scripts wait before checking the chain state.
---
'@eth-optimism/atst': minor
---
Update readAttestations and prepareWriteAttestation to handle keys longer than 32 bytes
---
'@eth-optimism/atst': minor
---
Remove broken allowFailures as option
---
'@eth-optimism/common-ts': patch
---
Fix BaseServiceV2 configuration for camelCase options
---
'@eth-optimism/atst': patch
---
Update docs
---
'@eth-optimism/atst': minor
---
Move react api to @eth-optimism/atst/react so react isn't required to run the core sdk
---
'@eth-optimism/sdk': patch
---
Update migrated withdrawal gaslimit calculation
---
'@eth-optimism/atst': minor
---
Fix main and module in atst package.json
---
'@eth-optimism/atst': patch
---
Fixed bug with atst not defaulting to currently connected chain
---
'@eth-optimism/atst': minor
---
Deprecate parseAttestationBytes and createRawKey in favor of createKey and createValue
---
'@eth-optimism/fault-detector': patch
---
Fixes a bug that would cause the fault detector to error out if no outputs had been proposed yet.
---
'@eth-optimism/chain-mon': patch
'@eth-optimism/data-transport-layer': patch
'@eth-optimism/fault-detector': patch
'@eth-optimism/message-relayer': patch
'@eth-optimism/replica-healthcheck': patch
---
Empty patch release to re-release packages that failed to be released by a bug in the release process.
...@@ -97,7 +97,6 @@ jobs: ...@@ -97,7 +97,6 @@ jobs:
- "packages/migration-data/node_modules" - "packages/migration-data/node_modules"
- "packages/replica-healthcheck/node_modules" - "packages/replica-healthcheck/node_modules"
- "packages/sdk/node_modules" - "packages/sdk/node_modules"
- "packages/two-step-monitor/node_modules"
- run: - run:
name: print forge version name: print forge version
command: forge --version command: forge --version
...@@ -543,10 +542,6 @@ jobs: ...@@ -543,10 +542,6 @@ jobs:
name: Check integration-tests name: Check integration-tests
command: npx depcheck command: npx depcheck
working_directory: integration-tests working_directory: integration-tests
- run:
name: Check two-step-monitor
command: npx depcheck
working_directory: packages/two-step-monitor
go-lint: go-lint:
parameters: parameters:
...@@ -611,7 +606,7 @@ jobs: ...@@ -611,7 +606,7 @@ jobs:
command: | command: |
# Note: We don't use circle CI test splits because we need to split by test name, not by package. There is an additional # Note: We don't use circle CI test splits because we need to split by test name, not by package. There is an additional
# constraint that gotestsum does not currently (nor likely will) accept files from different packages when building. # constraint that gotestsum does not currently (nor likely will) accept files from different packages when building.
OP_TESTLOG_DISABLE_COLOR=true OP_E2E_DISABLE_PARALLEL=false OP_E2E_USE_HTTP=<<parameters.use_http>> gotestsum \ OP_TESTLOG_DISABLE_COLOR=true OP_E2E_DISABLE_PARALLEL=true OP_E2E_USE_HTTP=<<parameters.use_http>> gotestsum \
--format=standard-verbose --junitfile=/tmp/test-results/<<parameters.module>>_http_<<parameters.use_http>>.xml \ --format=standard-verbose --junitfile=/tmp/test-results/<<parameters.module>>_http_<<parameters.use_http>>.xml \
-- -timeout=20m ./... -- -timeout=20m ./...
working_directory: <<parameters.module>> working_directory: <<parameters.module>>
......
(The MIT License) (The MIT License)
Copyright 2020-2022 Optimism Copyright 2020-2023 Optimism
Permission is hereby granted, free of charge, to any person obtaining Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the a copy of this software and associated documentation files (the
......
...@@ -24,7 +24,7 @@ If you want to build Optimism, check out the [Protocol Specs](./specs/). ...@@ -24,7 +24,7 @@ If you want to build Optimism, check out the [Protocol Specs](./specs/).
## Community ## Community
General discussion happens most frequently on the [Optimism discord](https://discord.optimism.io). General discussion happens most frequently on the [Optimism discord](https://discord-gateway.optimism.io).
Governance discussion can also be found on the [Optimism Governance Forum](https://gov.optimism.io/). Governance discussion can also be found on the [Optimism Governance Forum](https://gov.optimism.io/).
## Contributing ## Contributing
...@@ -138,7 +138,7 @@ When merging commits to the `develop` branch you MUST include a changeset file i ...@@ -138,7 +138,7 @@ When merging commits to the `develop` branch you MUST include a changeset file i
To add a changeset, run the command `yarn changeset` in the root of this monorepo. To add a changeset, run the command `yarn changeset` in the root of this monorepo.
You will be presented with a small prompt to select the packages to be released, the scope of the release (major, minor, or patch), and the reason for the release. You will be presented with a small prompt to select the packages to be released, the scope of the release (major, minor, or patch), and the reason for the release.
Comments with in changeset files will be automatically included in the changelog of the package. Comments within changeset files will be automatically included in the changelog of the package.
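For reference, `yarn changeset` writes a small markdown file into the `.changeset/` directory at the repo root. One of the changesets included in this merge looks like this (the auto-generated file name is omitted):
```
---
'@eth-optimism/contracts-bedrock': patch
---

Reduce the time that the system dictator deploy scripts wait before checking the chain state.
```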
### Triggering Releases ### Triggering Releases
......
...@@ -38,8 +38,8 @@ module.exports = { ...@@ -38,8 +38,8 @@ module.exports = {
offset: -200, offset: -200,
}, },
algolia: { algolia: {
appId: '7Q6XITDI0Z', appId: 'O9WKE9RMCV',
apiKey: '9d55a31a04b210cd26f97deabd161705', apiKey: '00cf17cba30b374d08d7f7afead974be',
indexName: 'optimism' indexName: 'optimism'
}, },
nav: [ nav: [
......
import event from '@vuepress/plugin-pwa/lib/event'
export default ({ router }) => { export default ({ router }) => {
registerAutoReload();
router.addRoutes([ router.addRoutes([
{ path: '/docs/', redirect: '/' }, { path: '/docs/', redirect: '/' },
]) ])
} }
// When new content is detected by the app, this will automatically
// refresh the page, so that users do not need to manually click
// the refresh button. For more details see:
// https://linear.app/optimism/issue/FE-1003/investigate-archive-issue-on-docs
const registerAutoReload = () => {
event.$on('sw-updated', e => e.skipWaiting().then(() => {
location.reload(true);
}))
}
...@@ -19,13 +19,13 @@ aside.sidebar { ...@@ -19,13 +19,13 @@ aside.sidebar {
p.sidebar-heading { p.sidebar-heading {
color: #323A43 !important; color: #323A43 !important;
font-family: 'Open Sans', sans-serif; font-family: 'Open Sans', sans-serif;
font-weight: 600; font-weight: 600 !important;
font-size: 14px !important; font-size: 14px !important;
line-height: 24px !important; line-height: 24px !important;
min-height: 36px; min-height: 36px;
margin-left: 32px; margin-left: 20px;
padding: 8px 16px !important; padding: 8px 16px !important;
width: calc(100% - 64px) !important; width: calc(100% - 60px) !important;
border-radius: 8px; border-radius: 8px;
} }
...@@ -34,15 +34,17 @@ a.sidebar-link { ...@@ -34,15 +34,17 @@ a.sidebar-link {
font-size: 14px !important; font-size: 14px !important;
line-height: 24px !important; line-height: 24px !important;
min-height: 36px; min-height: 36px;
margin-left: 32px; margin-top: 3px;
margin-left: 20px;
padding: 8px 16px !important; padding: 8px 16px !important;
width: calc(100% - 64px) !important; width: calc(100% - 60px) !important;
border-radius: 8px; border-radius: 8px;
} }
section.sidebar-group a.sidebar-link { section.sidebar-group a.sidebar-link,
margin-left: 44px; section.sidebar-group p.sidebar-heading.clickable {
width: calc(100% - 64px) !important; margin-left: 32px;
width: calc(100% - 60px) !important;
} }
.sidebar-links:not(.sidebar-group-items) > li > a.sidebar-link { .sidebar-links:not(.sidebar-group-items) > li > a.sidebar-link {
......
...@@ -59,3 +59,66 @@ Download and install [Docker engine](https://docs.docker.com/engine/install/#ser ...@@ -59,3 +59,66 @@ Download and install [Docker engine](https://docs.docker.com/engine/install/#ser
After the docker containers start, browse to http:// < *computer running Blockscout* > :4000 to view the user interface. After the docker containers start, browse to http:// < *computer running Blockscout* > :4000 to view the user interface.
You can also use the [API](https://docs.blockscout.com/for-users/api) You can also use the [API](https://docs.blockscout.com/for-users/api)
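For example, assuming your instance exposes Blockscout's Etherscan-compatible RPC endpoints on the same host and port as the UI, an account balance lookup could look roughly like this (the address is only a placeholder):
```
curl "http://<computer running Blockscout>:4000/api?module=account&action=balance&address=0xcB69A90Aa5311e0e9141a66212489bAfb48b9340"
```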
### GraphQL
Blockscout's API includes [GraphQL](https://graphql.org/) support under `/graphiql`.
For example, this query looks at addresses.
```
query {
addresses(hashes:[
"0xcB69A90Aa5311e0e9141a66212489bAfb48b9340",
"0xC2dfA7205088179A8644b9fDCecD6d9bED854Cfe"])
```
GraphQL queries start with a top level entity (or entities).
In this case, our [top level query](https://docs.blockscout.com/for-users/api/graphql#queries) is for multiple addresses.
Note that you can only query on fields that are indexed.
For example, here we query on the address hashes.
However, we could not query on `contractCode` or `fetchedCoinBalance`.
```
{
hash
contractCode
fetchedCoinBalance
```
The fields above are fetched from the address table.
```
transactions(first:5) {
```
We can also fetch the transactions that include the address (either as source or destination).
The API does not let us fetch an unlimited number of transactions, so here we ask for the first 5.
```
edges {
node {
```
Because this is a [graph](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)), the entities that connect two types, for example addresses and transactions, are called `edges`.
At the other end of each edge there is a transaction, which is a separate `node`.
```
hash
fromAddressHash
toAddressHash
input
}
```
These are the fields we read for each transaction.
```
}
}
}
}
```
Finally, close all the brackets.
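Assembled from the snippets above, the complete query is:
```
query {
  addresses(hashes:[
    "0xcB69A90Aa5311e0e9141a66212489bAfb48b9340",
    "0xC2dfA7205088179A8644b9fDCecD6d9bED854Cfe"]) {
    hash
    contractCode
    fetchedCoinBalance
    transactions(first:5) {
      edges {
        node {
          hash
          fromAddressHash
          toAddressHash
          input
        }
      }
    }
  }
}
```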
\ No newline at end of file
--- ---
title: Contribute to OP Stack title: Contribute to the OP Stack
lang: en-US lang: en-US
--- ---
......
...@@ -1292,6 +1292,46 @@ ...@@ -1292,6 +1292,46 @@
dependencies: dependencies:
"@hapi/hoek" "^8.3.0" "@hapi/hoek" "^8.3.0"
"@jridgewell/gen-mapping@^0.3.0":
version "0.3.2"
resolved "https://registry.yarnpkg.com/@jridgewell/gen-mapping/-/gen-mapping-0.3.2.tgz#c1aedc61e853f2bb9f5dfe6d4442d3b565b253b9"
integrity sha512-mh65xKQAzI6iBcFzwv28KVWSmCkdRBWoOh+bYQGW3+6OZvbbN3TqMGo5hqYxQniRcH9F2VZIoJCm4pa3BPDK/A==
dependencies:
"@jridgewell/set-array" "^1.0.1"
"@jridgewell/sourcemap-codec" "^1.4.10"
"@jridgewell/trace-mapping" "^0.3.9"
"@jridgewell/resolve-uri@3.1.0":
version "3.1.0"
resolved "https://registry.yarnpkg.com/@jridgewell/resolve-uri/-/resolve-uri-3.1.0.tgz#2203b118c157721addfe69d47b70465463066d78"
integrity sha512-F2msla3tad+Mfht5cJq7LSXcdudKTWCVYUgw6pLFOOHSTtZlj6SWNYAp+AhuqLmWdBO2X5hPrLcu8cVP8fy28w==
"@jridgewell/set-array@^1.0.1":
version "1.1.2"
resolved "https://registry.yarnpkg.com/@jridgewell/set-array/-/set-array-1.1.2.tgz#7c6cf998d6d20b914c0a55a91ae928ff25965e72"
integrity sha512-xnkseuNADM0gt2bs+BvhO0p78Mk762YnZdsuzFV018NoG1Sj1SCQvpSqa7XUaTam5vAGasABV9qXASMKnFMwMw==
"@jridgewell/source-map@^0.3.2":
version "0.3.2"
resolved "https://registry.yarnpkg.com/@jridgewell/source-map/-/source-map-0.3.2.tgz#f45351aaed4527a298512ec72f81040c998580fb"
integrity sha512-m7O9o2uR8k2ObDysZYzdfhb08VuEml5oWGiosa1VdaPZ/A6QyPkAJuwN0Q1lhULOf6B7MtQmHENS743hWtCrgw==
dependencies:
"@jridgewell/gen-mapping" "^0.3.0"
"@jridgewell/trace-mapping" "^0.3.9"
"@jridgewell/sourcemap-codec@1.4.14", "@jridgewell/sourcemap-codec@^1.4.10":
version "1.4.14"
resolved "https://registry.yarnpkg.com/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.4.14.tgz#add4c98d341472a289190b424efbdb096991bb24"
integrity sha512-XPSJHWmi394fuUuzDnGz1wiKqWfo1yXecHQMRf2l6hztTO+nPru658AyDngaBe7isIxEkRsPR3FZh+s7iVa4Uw==
"@jridgewell/trace-mapping@^0.3.9":
version "0.3.17"
resolved "https://registry.yarnpkg.com/@jridgewell/trace-mapping/-/trace-mapping-0.3.17.tgz#793041277af9073b0951a7fe0f0d8c4c98c36985"
integrity sha512-MCNzAp77qzKca9+W/+I0+sEpaUnZoeasnghNeVc41VZCEKaCH73Vq3BZZ/SzWIgrqE4H4ceI+p+b6C0mHf9T4g==
dependencies:
"@jridgewell/resolve-uri" "3.1.0"
"@jridgewell/sourcemap-codec" "1.4.14"
"@leancloud/adapter-types@^3.0.0": "@leancloud/adapter-types@^3.0.0":
version "3.0.0" version "3.0.0"
resolved "https://registry.yarnpkg.com/@leancloud/adapter-types/-/adapter-types-3.0.0.tgz#71c5e8e37065bea4914650848b55a6262d658577" resolved "https://registry.yarnpkg.com/@leancloud/adapter-types/-/adapter-types-3.0.0.tgz#71c5e8e37065bea4914650848b55a6262d658577"
...@@ -2436,6 +2476,11 @@ acorn@^6.4.1: ...@@ -2436,6 +2476,11 @@ acorn@^6.4.1:
resolved "https://registry.yarnpkg.com/acorn/-/acorn-6.4.2.tgz#35866fd710528e92de10cf06016498e47e39e1e6" resolved "https://registry.yarnpkg.com/acorn/-/acorn-6.4.2.tgz#35866fd710528e92de10cf06016498e47e39e1e6"
integrity sha512-XtGIhXwF8YM8bJhGxG5kXgjkEuNGLTkoYqVE+KMR+aspr4KGYmKYg7yUe3KghyQ9yheNwLnjmzh/7+gfDBmHCQ== integrity sha512-XtGIhXwF8YM8bJhGxG5kXgjkEuNGLTkoYqVE+KMR+aspr4KGYmKYg7yUe3KghyQ9yheNwLnjmzh/7+gfDBmHCQ==
acorn@^8.5.0:
version "8.8.2"
resolved "https://registry.yarnpkg.com/acorn/-/acorn-8.8.2.tgz#1b2f25db02af965399b9776b0c2c391276d37c4a"
integrity sha512-xjIYgE8HBrkpd/sJqOGNspf8uHG+NOHGOw6a/Urj8taM2EXfdNAH2oFcPeIFfsv3+kz/mJrS5VuMqbNLjCa2vw==
agentkeepalive@^2.2.0: agentkeepalive@^2.2.0:
version "2.2.0" version "2.2.0"
resolved "https://registry.yarnpkg.com/agentkeepalive/-/agentkeepalive-2.2.0.tgz#c5d1bd4b129008f1163f236f86e5faea2026e2ef" resolved "https://registry.yarnpkg.com/agentkeepalive/-/agentkeepalive-2.2.0.tgz#c5d1bd4b129008f1163f236f86e5faea2026e2ef"
...@@ -2755,6 +2800,11 @@ autoprefixer@^9.5.1: ...@@ -2755,6 +2800,11 @@ autoprefixer@^9.5.1:
postcss "^7.0.32" postcss "^7.0.32"
postcss-value-parser "^4.1.0" postcss-value-parser "^4.1.0"
autosize@^4.0.2:
version "4.0.4"
resolved "https://registry.yarnpkg.com/autosize/-/autosize-4.0.4.tgz#924f13853a466b633b9309330833936d8bccce03"
integrity sha512-5yxLQ22O0fCRGoxGfeLSNt3J8LB1v+umtpMnPW6XjkTWXKoN0AmXAIhelJcDtFT/Y/wYWmfE+oqU10Q0b8FhaQ==
aws-sign2@~0.7.0: aws-sign2@~0.7.0:
version "0.7.0" version "0.7.0"
resolved "https://registry.yarnpkg.com/aws-sign2/-/aws-sign2-0.7.0.tgz#b46e890934a9591f2d2f6f86d7e6a9f1b3fe76a8" resolved "https://registry.yarnpkg.com/aws-sign2/-/aws-sign2-0.7.0.tgz#b46e890934a9591f2d2f6f86d7e6a9f1b3fe76a8"
...@@ -2938,6 +2988,11 @@ bluebird@^3.1.1, bluebird@^3.5.5: ...@@ -2938,6 +2988,11 @@ bluebird@^3.1.1, bluebird@^3.5.5:
resolved "https://registry.yarnpkg.com/bluebird/-/bluebird-3.7.2.tgz#9f229c15be272454ffa973ace0dbee79a1b0c36f" resolved "https://registry.yarnpkg.com/bluebird/-/bluebird-3.7.2.tgz#9f229c15be272454ffa973ace0dbee79a1b0c36f"
integrity sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg== integrity sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg==
blueimp-md5@^2.8.0:
version "2.19.0"
resolved "https://registry.yarnpkg.com/blueimp-md5/-/blueimp-md5-2.19.0.tgz#b53feea5498dcb53dc6ec4b823adb84b729c4af0"
integrity sha512-DRQrD6gJyy8FbiE4s+bDoXS9hiW3Vbx5uCdwvcCf3zLHL+Iv7LtGHLpr+GZV8rHG8tK766FGYBwRbu8pELTt+w==
bn.js@^4.0.0, bn.js@^4.1.0, bn.js@^4.11.9: bn.js@^4.0.0, bn.js@^4.1.0, bn.js@^4.11.9:
version "4.12.0" version "4.12.0"
resolved "https://registry.yarnpkg.com/bn.js/-/bn.js-4.12.0.tgz#775b3f278efbb9718eec7361f483fb36fbbfea88" resolved "https://registry.yarnpkg.com/bn.js/-/bn.js-4.12.0.tgz#775b3f278efbb9718eec7361f483fb36fbbfea88"
...@@ -3374,6 +3429,21 @@ check-md@^1.0.2: ...@@ -3374,6 +3429,21 @@ check-md@^1.0.2:
globby "^9.1.0" globby "^9.1.0"
yargs-parser "^20.2.1" yargs-parser "^20.2.1"
"chokidar@>=3.0.0 <4.0.0":
version "3.5.3"
resolved "https://registry.yarnpkg.com/chokidar/-/chokidar-3.5.3.tgz#1cf37c8707b932bd1af1ae22c0432e2acd1903bd"
integrity sha512-Dr3sfKRP6oTcjf2JmUmFJfeVMvXBdegxB0iVQ5eb2V10uFJUCAS8OByZdVAyVb8xXNz3GjjTgj9kLWsZTqE6kw==
dependencies:
anymatch "~3.1.2"
braces "~3.0.2"
glob-parent "~5.1.2"
is-binary-path "~2.1.0"
is-glob "~4.0.1"
normalize-path "~3.0.0"
readdirp "~3.6.0"
optionalDependencies:
fsevents "~2.3.2"
chokidar@^2.0.3, chokidar@^2.1.8: chokidar@^2.0.3, chokidar@^2.1.8:
version "2.1.8" version "2.1.8"
resolved "https://registry.yarnpkg.com/chokidar/-/chokidar-2.1.8.tgz#804b3a7b6a99358c3c5c61e71d8728f041cff917" resolved "https://registry.yarnpkg.com/chokidar/-/chokidar-2.1.8.tgz#804b3a7b6a99358c3c5c61e71d8728f041cff917"
...@@ -3587,6 +3657,11 @@ commander@~2.19.0: ...@@ -3587,6 +3657,11 @@ commander@~2.19.0:
resolved "https://registry.yarnpkg.com/commander/-/commander-2.19.0.tgz#f6198aa84e5b83c46054b94ddedbfed5ee9ff12a" resolved "https://registry.yarnpkg.com/commander/-/commander-2.19.0.tgz#f6198aa84e5b83c46054b94ddedbfed5ee9ff12a"
integrity sha512-6tvAOO+D6OENvRAh524Dh9jcfKTYDQAqvqezbCW82xj5X0pSrcpxtvRKHLG0yBY6SD7PSDrJaj+0AiOcKVd1Xg== integrity sha512-6tvAOO+D6OENvRAh524Dh9jcfKTYDQAqvqezbCW82xj5X0pSrcpxtvRKHLG0yBY6SD7PSDrJaj+0AiOcKVd1Xg==
comment-regex@^1.0.0:
version "1.0.1"
resolved "https://registry.yarnpkg.com/comment-regex/-/comment-regex-1.0.1.tgz#e070d2c4db33231955d0979d27c918fcb6f93565"
integrity sha512-IWlN//Yfby92tOIje7J18HkNmWRR7JESA/BK8W7wqY/akITpU5B0JQWnbTjCfdChSrDNb0DrdA9jfAxiiBXyiQ==
common-tags@^1.8.0: common-tags@^1.8.0:
version "1.8.2" version "1.8.2"
resolved "https://registry.yarnpkg.com/common-tags/-/common-tags-1.8.2.tgz#94ebb3c076d26032745fd54face7f688ef5ac9c6" resolved "https://registry.yarnpkg.com/common-tags/-/common-tags-1.8.2.tgz#94ebb3c076d26032745fd54face7f688ef5ac9c6"
...@@ -3706,9 +3781,9 @@ cookie@0.4.0: ...@@ -3706,9 +3781,9 @@ cookie@0.4.0:
integrity sha512-+Hp8fLp57wnUSt0tY0tHEXh4voZRDnoIrZPqlo3DPiI4y9lwg/jqx+1Om94/W6ZaPDOUbnjOt/99w66zk+l1Xg== integrity sha512-+Hp8fLp57wnUSt0tY0tHEXh4voZRDnoIrZPqlo3DPiI4y9lwg/jqx+1Om94/W6ZaPDOUbnjOt/99w66zk+l1Xg==
cookiejar@^2.1.0, cookiejar@^2.1.2: cookiejar@^2.1.0, cookiejar@^2.1.2:
version "2.1.3" version "2.1.4"
resolved "https://registry.yarnpkg.com/cookiejar/-/cookiejar-2.1.3.tgz#fc7a6216e408e74414b90230050842dacda75acc" resolved "https://registry.yarnpkg.com/cookiejar/-/cookiejar-2.1.4.tgz#ee669c1fea2cf42dc31585469d193fef0d65771b"
integrity sha512-JxbCBUdrfr6AQjOXrxoTvAMJO4HBTUIlBzslcJPAz+/KT8yk53fXun51u+RenNYvad/+Vc2DIz5o9UxlCDymFQ== integrity sha512-LDx6oHrK+PhzLKJU9j5S7/Y3jM/mUHvD/DeI1WQmJn652iPC5Y4TBzC9l+5OMOXlyTTA+SmVUPm0HQUwpD5Jqw==
copy-concurrently@^1.0.0: copy-concurrently@^1.0.0:
version "1.0.5" version "1.0.5"
...@@ -4626,9 +4701,9 @@ decamelize@^1.1.1, decamelize@^1.2.0: ...@@ -4626,9 +4701,9 @@ decamelize@^1.1.1, decamelize@^1.2.0:
integrity sha1-9lNNFRSCabIDUue+4m9QH5oZEpA= integrity sha1-9lNNFRSCabIDUue+4m9QH5oZEpA=
decode-uri-component@^0.2.0: decode-uri-component@^0.2.0:
version "0.2.0" version "0.2.2"
resolved "https://registry.yarnpkg.com/decode-uri-component/-/decode-uri-component-0.2.0.tgz#eb3913333458775cb84cd1a1fae062106bb87545" resolved "https://registry.yarnpkg.com/decode-uri-component/-/decode-uri-component-0.2.2.tgz#e69dbe25d37941171dd540e024c444cd5188e1e9"
integrity sha1-6zkTMzRYd1y4TNGh+uBiEGu4dUU= integrity sha512-FqUYQ+8o158GyGTrMFJms9qh3CqTKvAqgqsTnkLI8sKu0028orqBhxNMFkFen0zGyg6epACD32pjVk58ngIErQ==
decompress-response@^3.3.0: decompress-response@^3.3.0:
version "3.3.0" version "3.3.0"
...@@ -5855,6 +5930,13 @@ gray-matter@^4.0.1: ...@@ -5855,6 +5930,13 @@ gray-matter@^4.0.1:
section-matter "^1.0.0" section-matter "^1.0.0"
strip-bom-string "^1.0.0" strip-bom-string "^1.0.0"
hanabi@^0.4.0:
version "0.4.0"
resolved "https://registry.yarnpkg.com/hanabi/-/hanabi-0.4.0.tgz#ebbb251358c1337db1eabda686c43ff777d30d82"
integrity sha512-ixJH94fwmmVzUSdxl7TMkVZJmsq4d2JKrxedpM5V1V+91iVHL0q6NnJi4xiDahK6Vo00xT17H8H6b4F6RVbsOg==
dependencies:
comment-regex "^1.0.0"
handle-thing@^2.0.0: handle-thing@^2.0.0:
version "2.0.1" version "2.0.1"
resolved "https://registry.yarnpkg.com/handle-thing/-/handle-thing-2.0.1.tgz#857f79ce359580c340d43081cc648970d0bb234e" resolved "https://registry.yarnpkg.com/handle-thing/-/handle-thing-2.0.1.tgz#857f79ce359580c340d43081cc648970d0bb234e"
...@@ -6068,9 +6150,9 @@ htmlparser2@^6.1.0: ...@@ -6068,9 +6150,9 @@ htmlparser2@^6.1.0:
entities "^2.0.0" entities "^2.0.0"
http-cache-semantics@^4.0.0: http-cache-semantics@^4.0.0:
version "4.1.0" version "4.1.1"
resolved "https://registry.yarnpkg.com/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz#49e91c5cbf36c9b94bcfcd71c23d5249ec74e390" resolved "https://registry.yarnpkg.com/http-cache-semantics/-/http-cache-semantics-4.1.1.tgz#abe02fcb2985460bf0323be664436ec3476a6d5a"
integrity sha512-carPklcUh7ROWRK7Cv27RPtdhYhUsela/ue5/jKzjegVvXDqM2ILE9Q2BGn9JZJh1g87cp56su/FgQSzcWS8cQ== integrity sha512-er295DKPVsV82j5kw1Gjt+ADA/XYHsajl82cGNQG2eyoPkvgUhX+nDIyelzhIWbbsXP39EHcI6l5tYs2FYqYXQ==
http-deceiver@^1.2.7: http-deceiver@^1.2.7:
version "1.2.7" version "1.2.7"
...@@ -6219,6 +6301,11 @@ immediate@^3.2.3: ...@@ -6219,6 +6301,11 @@ immediate@^3.2.3:
resolved "https://registry.yarnpkg.com/immediate/-/immediate-3.3.0.tgz#1aef225517836bcdf7f2a2de2600c79ff0269266" resolved "https://registry.yarnpkg.com/immediate/-/immediate-3.3.0.tgz#1aef225517836bcdf7f2a2de2600c79ff0269266"
integrity sha512-HR7EVodfFUdQCTIeySw+WDRFJlPcLOJbXfwwZ7Oom6tjsvZ3bOkCDJHehQC3nxJrv7+f9XecwazynjU8e4Vw3Q== integrity sha512-HR7EVodfFUdQCTIeySw+WDRFJlPcLOJbXfwwZ7Oom6tjsvZ3bOkCDJHehQC3nxJrv7+f9XecwazynjU8e4Vw3Q==
immutable@^4.0.0:
version "4.3.0"
resolved "https://registry.yarnpkg.com/immutable/-/immutable-4.3.0.tgz#eb1738f14ffb39fd068b1dbe1296117484dd34be"
integrity sha512-0AOCmOip+xgJwEVTQj1EfiDDOkPmuyllDuTuEX+DDXUgapLAsBIfkg3sxCYyCEA8mQqZrrxPUGjcOQ2JS3WLkg==
import-cwd@^2.0.0: import-cwd@^2.0.0:
version "2.1.0" version "2.1.0"
resolved "https://registry.yarnpkg.com/import-cwd/-/import-cwd-2.1.0.tgz#aa6cf36e722761285cb371ec6519f53e2435b0a9" resolved "https://registry.yarnpkg.com/import-cwd/-/import-cwd-2.1.0.tgz#aa6cf36e722761285cb371ec6519f53e2435b0a9"
...@@ -7228,9 +7315,9 @@ lru-cache@^6.0.0: ...@@ -7228,9 +7315,9 @@ lru-cache@^6.0.0:
yallist "^4.0.0" yallist "^4.0.0"
luxon@^1.3.3: luxon@^1.3.3:
version "1.28.0" version "1.28.1"
resolved "https://registry.yarnpkg.com/luxon/-/luxon-1.28.0.tgz#e7f96daad3938c06a62de0fb027115d251251fbf" resolved "https://registry.yarnpkg.com/luxon/-/luxon-1.28.1.tgz#528cdf3624a54506d710290a2341aa8e6e6c61b0"
integrity sha512-TfTiyvZhwBYM/7QdAVDh+7dBTBA29v4ik0Ce9zda3Mnf8on1S5KJI8P2jKFZ8+5C0jhmr0KwJEO/Wdpm0VeWJQ== integrity sha512-gYHAa180mKrNIUJCbwpmD0aTu9kV0dREDrwNnuyFAsO1Wt0EVYSZelPnJlbj9HplzXX/YWXHFTL45kvZ53M0pw==
magic-string@^0.25.0, magic-string@^0.25.7: magic-string@^0.25.0, magic-string@^0.25.7:
version "0.25.7" version "0.25.7"
...@@ -7309,6 +7396,11 @@ markdown-it@^8.4.1: ...@@ -7309,6 +7396,11 @@ markdown-it@^8.4.1:
mdurl "^1.0.1" mdurl "^1.0.1"
uc.micro "^1.0.5" uc.micro "^1.0.5"
marked@^4.0.8:
version "4.2.12"
resolved "https://registry.yarnpkg.com/marked/-/marked-4.2.12.tgz#d69a64e21d71b06250da995dcd065c11083bebb5"
integrity sha512-yr8hSKa3Fv4D3jdZmtMMPghgVt6TWbk86WQaWhDloQjRSQhMMYCAro7jP7VDJrjjdV8pxVxMssXS8B8Y5DZ5aw==
md5.js@^1.3.4: md5.js@^1.3.4:
version "1.3.5" version "1.3.5"
resolved "https://registry.yarnpkg.com/md5.js/-/md5.js-1.3.5.tgz#b5d07b8e3216e3e27cd728d72f70d1e6a342005f" resolved "https://registry.yarnpkg.com/md5.js/-/md5.js-1.3.5.tgz#b5d07b8e3216e3e27cd728d72f70d1e6a342005f"
...@@ -7510,16 +7602,16 @@ minimalistic-crypto-utils@^1.0.1: ...@@ -7510,16 +7602,16 @@ minimalistic-crypto-utils@^1.0.1:
integrity sha1-9sAMHAsIIkblxNmd+4x8CDsrWCo= integrity sha1-9sAMHAsIIkblxNmd+4x8CDsrWCo=
minimatch@^3.0.4: minimatch@^3.0.4:
version "3.0.4" version "3.1.2"
resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.0.4.tgz#5166e286457f03306064be5497e8dbb0c3d32083" resolved "https://registry.yarnpkg.com/minimatch/-/minimatch-3.1.2.tgz#19cd194bfd3e428f049a70817c038d89ab4be35b"
integrity sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA== integrity sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==
dependencies: dependencies:
brace-expansion "^1.1.7" brace-expansion "^1.1.7"
minimist@^1.2.0, minimist@^1.2.5: minimist@^1.2.0, minimist@^1.2.5:
version "1.2.5" version "1.2.8"
resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.5.tgz#67d66014b66a6a8aaa0c083c5fd58df4e4e97602" resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.8.tgz#c1a464e7693302e082a075cee0c057741ac4772c"
integrity sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw== integrity sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==
miniprogram-api-typings@^2.10.2: miniprogram-api-typings@^2.10.2:
version "2.12.0" version "2.12.0"
...@@ -8701,16 +8793,16 @@ qs@6.7.0: ...@@ -8701,16 +8793,16 @@ qs@6.7.0:
integrity sha512-VCdBRNFTX1fyE7Nb6FYoURo/SPe62QCaAyzJvUjwRaIsc+NePBEniHlvxFmmX56+HZphIGtV0XeCirBtpDrTyQ== integrity sha512-VCdBRNFTX1fyE7Nb6FYoURo/SPe62QCaAyzJvUjwRaIsc+NePBEniHlvxFmmX56+HZphIGtV0XeCirBtpDrTyQ==
qs@^6.5.1, qs@^6.6.0, qs@^6.9.4: qs@^6.5.1, qs@^6.6.0, qs@^6.9.4:
version "6.10.2" version "6.11.1"
resolved "https://registry.yarnpkg.com/qs/-/qs-6.10.2.tgz#c1431bea37fc5b24c5bdbafa20f16bdf2a4b9ffe" resolved "https://registry.yarnpkg.com/qs/-/qs-6.11.1.tgz#6c29dff97f0c0060765911ba65cbc9764186109f"
integrity sha512-mSIdjzqznWgfd4pMii7sHtaYF8rx8861hBO80SraY5GT0XQibWZWJSid0avzHGkDIZLImux2S5mXO0Hfct2QCw== integrity sha512-0wsrzgTz/kAVIeuxSjnpGC56rzYtr6JT/2BwEvMaPhFIoYa1aGO8LbzuU1R0uUYQkLpWBTOj0l/CLAJB64J6nQ==
dependencies: dependencies:
side-channel "^1.0.4" side-channel "^1.0.4"
qs@~6.5.2: qs@~6.5.2:
version "6.5.2" version "6.5.3"
resolved "https://registry.yarnpkg.com/qs/-/qs-6.5.2.tgz#cb3ae806e8740444584ef154ce8ee98d403f3e36" resolved "https://registry.yarnpkg.com/qs/-/qs-6.5.3.tgz#3aeeffc91967ef6e35c0e488ef46fb296ab76aad"
integrity sha512-N5ZAX4/LxJmF+7wN74pUD6qAh9/wnvdQcjq9TZjevvXzSUo7bfmw91saqMjzGS2xq91/odN2dW/WOl7qQHNDGA== integrity sha512-qxXIEh4pCGfHICj1mAJQ2/2XVZkjCDTcEgfoSQxc/fYivUZxTkk7L3bDBJSoNrEzXI17oUO5Dp07ktqE5KzczA==
query-string@^5.0.1: query-string@^5.0.1:
version "5.1.1" version "5.1.1"
...@@ -9136,6 +9228,15 @@ safe-regex@^1.1.0: ...@@ -9136,6 +9228,15 @@ safe-regex@^1.1.0:
resolved "https://registry.yarnpkg.com/safer-buffer/-/safer-buffer-2.1.2.tgz#44fa161b0187b9549dd84bb91802f9bd8385cd6a" resolved "https://registry.yarnpkg.com/safer-buffer/-/safer-buffer-2.1.2.tgz#44fa161b0187b9549dd84bb91802f9bd8385cd6a"
integrity sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg== integrity sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==
sass@^1.49.9:
version "1.59.3"
resolved "https://registry.yarnpkg.com/sass/-/sass-1.59.3.tgz#a1ddf855d75c70c26b4555df4403e1bbf8e4403f"
integrity sha512-QCq98N3hX1jfTCoUAsF3eyGuXLsY7BCnCEg9qAact94Yc21npG2/mVOqoDvE0fCbWDqiM4WlcJQla0gWG2YlxQ==
dependencies:
chokidar ">=3.0.0 <4.0.0"
immutable "^4.0.0"
source-map-js ">=0.6.2 <2.0.0"
sax@^1.2.4, sax@~1.2.4: sax@^1.2.4, sax@~1.2.4:
version "1.2.4" version "1.2.4"
resolved "https://registry.yarnpkg.com/sax/-/sax-1.2.4.tgz#2816234e2378bddc4e5354fab5caa895df7100d9" resolved "https://registry.yarnpkg.com/sax/-/sax-1.2.4.tgz#2816234e2378bddc4e5354fab5caa895df7100d9"
...@@ -9455,6 +9556,11 @@ source-list-map@^2.0.0: ...@@ -9455,6 +9556,11 @@ source-list-map@^2.0.0:
resolved "https://registry.yarnpkg.com/source-list-map/-/source-list-map-2.0.1.tgz#3993bd873bfc48479cca9ea3a547835c7c154b34" resolved "https://registry.yarnpkg.com/source-list-map/-/source-list-map-2.0.1.tgz#3993bd873bfc48479cca9ea3a547835c7c154b34"
integrity sha512-qnQ7gVMxGNxsiL4lEuJwe/To8UnK7fAnmbGEEH8RpLouuKbeEm0lhbQVFIrNSuB+G7tVrAlVsZgETT5nljf+Iw== integrity sha512-qnQ7gVMxGNxsiL4lEuJwe/To8UnK7fAnmbGEEH8RpLouuKbeEm0lhbQVFIrNSuB+G7tVrAlVsZgETT5nljf+Iw==
"source-map-js@>=0.6.2 <2.0.0":
version "1.0.2"
resolved "https://registry.yarnpkg.com/source-map-js/-/source-map-js-1.0.2.tgz#adbc361d9c62df380125e7f161f71c826f1e490c"
integrity sha512-R0XvVJ9WusLiqTCEiGCmICCMplcCkIwwR11mOSD9CR5u+IXYdiseeEuXCVAjS54zqwkLcPNnmU4OeJ6tUrWhDw==
source-map-resolve@^0.5.0, source-map-resolve@^0.5.2: source-map-resolve@^0.5.0, source-map-resolve@^0.5.2:
version "0.5.3" version "0.5.3"
resolved "https://registry.yarnpkg.com/source-map-resolve/-/source-map-resolve-0.5.3.tgz#190866bece7553e1f8f267a2ee82c606b5509a1a" resolved "https://registry.yarnpkg.com/source-map-resolve/-/source-map-resolve-0.5.3.tgz#190866bece7553e1f8f267a2ee82c606b5509a1a"
...@@ -9502,7 +9608,7 @@ source-map@^0.6.0, source-map@^0.6.1, source-map@~0.6.0, source-map@~0.6.1: ...@@ -9502,7 +9608,7 @@ source-map@^0.6.0, source-map@^0.6.1, source-map@~0.6.0, source-map@~0.6.1:
resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.6.1.tgz#74722af32e9614e9c287a8d0bbde48b5e2f1a263" resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.6.1.tgz#74722af32e9614e9c287a8d0bbde48b5e2f1a263"
integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g== integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==
source-map@^0.7.3, source-map@~0.7.2: source-map@^0.7.3:
version "0.7.3" version "0.7.3"
resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.7.3.tgz#5302f8169031735226544092e64981f751750383" resolved "https://registry.yarnpkg.com/source-map/-/source-map-0.7.3.tgz#5302f8169031735226544092e64981f751750383"
integrity sha512-CkCj6giN3S+n9qrYiBTX5gystlENnRW5jZeNLHpe6aue+SrHcG5VYwujhW9s4dY31mEGsxBDrHR6oI69fTXsaQ== integrity sha512-CkCj6giN3S+n9qrYiBTX5gystlENnRW5jZeNLHpe6aue+SrHcG5VYwujhW9s4dY31mEGsxBDrHR6oI69fTXsaQ==
...@@ -9960,21 +10066,22 @@ terser-webpack-plugin@^1.4.3: ...@@ -9960,21 +10066,22 @@ terser-webpack-plugin@^1.4.3:
worker-farm "^1.7.0" worker-farm "^1.7.0"
terser@^4.1.2: terser@^4.1.2:
version "4.8.0" version "4.8.1"
resolved "https://registry.yarnpkg.com/terser/-/terser-4.8.0.tgz#63056343d7c70bb29f3af665865a46fe03a0df17" resolved "https://registry.yarnpkg.com/terser/-/terser-4.8.1.tgz#a00e5634562de2239fd404c649051bf6fc21144f"
integrity sha512-EAPipTNeWsb/3wLPeup1tVPaXfIaU68xMnVdPafIL1TV05OhASArYyIfFvnvJCNrR2NIOvDVNNTFRa+Re2MWyw== integrity sha512-4GnLC0x667eJG0ewJTa6z/yXrbLGv80D9Ru6HIpCQmO+Q4PfEtBFi0ObSckqwL6VyQv/7ENJieXHo2ANmdQwgw==
dependencies: dependencies:
commander "^2.20.0" commander "^2.20.0"
source-map "~0.6.1" source-map "~0.6.1"
source-map-support "~0.5.12" source-map-support "~0.5.12"
terser@^5.0.0: terser@^5.0.0:
version "5.10.0" version "5.16.6"
resolved "https://registry.yarnpkg.com/terser/-/terser-5.10.0.tgz#b86390809c0389105eb0a0b62397563096ddafcc" resolved "https://registry.yarnpkg.com/terser/-/terser-5.16.6.tgz#f6c7a14a378ee0630fbe3ac8d1f41b4681109533"
integrity sha512-AMmF99DMfEDiRJfxfY5jj5wNH/bYO09cniSqhfoyxc8sFoYIgkJy86G04UoZU5VjlpnplVu0K6Tx6E9b5+DlHA== integrity sha512-IBZ+ZQIA9sMaXmRZCUMDjNH0D5AQQfdn4WUjHL0+1lF4TP1IHRJbrhb6fNaXWikrYQTSkb7SLxkeXAiy1p7mbg==
dependencies: dependencies:
"@jridgewell/source-map" "^0.3.2"
acorn "^8.5.0"
commander "^2.20.0" commander "^2.20.0"
source-map "~0.7.2"
source-map-support "~0.5.20" source-map-support "~0.5.20"
text-table@^0.2.0: text-table@^0.2.0:
...@@ -10408,15 +10515,20 @@ uuid@^3.0.0, uuid@^3.3.2, uuid@^3.4.0: ...@@ -10408,15 +10515,20 @@ uuid@^3.0.0, uuid@^3.3.2, uuid@^3.4.0:
integrity sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A== integrity sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A==
valine@^1.4.16: valine@^1.4.16:
version "1.4.16" version "1.5.1"
resolved "https://registry.yarnpkg.com/valine/-/valine-1.4.16.tgz#6b077734c94d75cb6b1cbfb88e9e634000c1bfef" resolved "https://registry.yarnpkg.com/valine/-/valine-1.5.1.tgz#cb7aa97c07588646e7cd1494bd1a4d09093931c3"
integrity sha512-gKmVzOAYrHNKt2Oswu6+UFiR+qJfTuVjPj6XrqtoPNZFcoJaITftGI6qw/n//YhWMd/75E4WTL6WeZ5CHhADdA== integrity sha512-uT9tHuoKkFUqqVHHRyz4EPe8U+h1ZkCa2lVKH8SKZM9r1Whm8DZW9GfrK4USmLJ+QwpHWenPIKOH0s2xO0BVbQ==
dependencies: dependencies:
autosize "^4.0.2"
balajs "^1.0.7" balajs "^1.0.7"
balalaika "^1.0.1" balalaika "^1.0.1"
blueimp-md5 "^2.8.0"
element-closest "^3.0.2" element-closest "^3.0.2"
hanabi "^0.4.0"
insane "^2.6.2" insane "^2.6.2"
leancloud-storage "^3.0.4" leancloud-storage "^3.0.4"
marked "^4.0.8"
sass "^1.49.9"
storejs "^1.0.25" storejs "^1.0.25"
xss "^1.0.6" xss "^1.0.6"
......
...@@ -30,10 +30,10 @@ ...@@ -30,10 +30,10 @@
"devDependencies": { "devDependencies": {
"@babel/eslint-parser": "^7.5.4", "@babel/eslint-parser": "^7.5.4",
"@eth-optimism/contracts": "^0.5.40", "@eth-optimism/contracts": "^0.5.40",
"@eth-optimism/contracts-bedrock": "0.13.0", "@eth-optimism/contracts-bedrock": "0.13.1",
"@eth-optimism/contracts-periphery": "^1.0.7", "@eth-optimism/contracts-periphery": "^1.0.7",
"@eth-optimism/core-utils": "0.12.0", "@eth-optimism/core-utils": "0.12.0",
"@eth-optimism/sdk": "2.0.0", "@eth-optimism/sdk": "2.0.1",
"@ethersproject/abstract-provider": "^5.7.0", "@ethersproject/abstract-provider": "^5.7.0",
"@ethersproject/providers": "^5.7.0", "@ethersproject/providers": "^5.7.0",
"@ethersproject/transactions": "^5.7.0", "@ethersproject/transactions": "^5.7.0",
......
...@@ -19,8 +19,18 @@ test: ...@@ -19,8 +19,18 @@ test:
lint: lint:
golangci-lint run -E goimports,sqlclosecheck,bodyclose,asciicheck,misspell,errorlint -e "errors.As" -e "errors.Is" golangci-lint run -E goimports,sqlclosecheck,bodyclose,asciicheck,misspell,errorlint -e "errors.As" -e "errors.Is"
fuzz:
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzDurationZero ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzDurationTimeoutMaxChannelDuration ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzDurationTimeoutZeroMaxChannelDuration ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzChannelCloseTimeout ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzChannelZeroCloseTimeout ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzSeqWindowClose ./batcher
go test -run NOTAREALTEST -v -fuzztime 10s -fuzz FuzzSeqWindowZeroTimeoutClose ./batcher
.PHONY: \ .PHONY: \
op-batcher \ op-batcher \
clean \ clean \
test \ test \
lint lint \
fuzz
...@@ -12,9 +12,9 @@ import ( ...@@ -12,9 +12,9 @@ import (
gethrpc "github.com/ethereum/go-ethereum/rpc" gethrpc "github.com/ethereum/go-ethereum/rpc"
"github.com/urfave/cli" "github.com/urfave/cli"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-batcher/rpc" "github.com/ethereum-optimism/optimism/op-batcher/rpc"
oplog "github.com/ethereum-optimism/optimism/op-service/log" oplog "github.com/ethereum-optimism/optimism/op-service/log"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
oppprof "github.com/ethereum-optimism/optimism/op-service/pprof" oppprof "github.com/ethereum-optimism/optimism/op-service/pprof"
oprpc "github.com/ethereum-optimism/optimism/op-service/rpc" oprpc "github.com/ethereum-optimism/optimism/op-service/rpc"
) )
...@@ -36,9 +36,10 @@ func Main(version string, cliCtx *cli.Context) error { ...@@ -36,9 +36,10 @@ func Main(version string, cliCtx *cli.Context) error {
} }
l := oplog.NewLogger(cfg.LogConfig) l := oplog.NewLogger(cfg.LogConfig)
m := metrics.NewMetrics("default")
l.Info("Initializing Batch Submitter") l.Info("Initializing Batch Submitter")
batchSubmitter, err := NewBatchSubmitterFromCLIConfig(cfg, l) batchSubmitter, err := NewBatchSubmitterFromCLIConfig(cfg, l, m)
if err != nil { if err != nil {
l.Error("Unable to create Batch Submitter", "error", err) l.Error("Unable to create Batch Submitter", "error", err)
return err return err
...@@ -64,16 +65,15 @@ func Main(version string, cliCtx *cli.Context) error { ...@@ -64,16 +65,15 @@ func Main(version string, cliCtx *cli.Context) error {
}() }()
} }
registry := opmetrics.NewRegistry()
metricsCfg := cfg.MetricsConfig metricsCfg := cfg.MetricsConfig
if metricsCfg.Enabled { if metricsCfg.Enabled {
l.Info("starting metrics server", "addr", metricsCfg.ListenAddr, "port", metricsCfg.ListenPort) l.Info("starting metrics server", "addr", metricsCfg.ListenAddr, "port", metricsCfg.ListenPort)
go func() { go func() {
if err := opmetrics.ListenAndServe(ctx, registry, metricsCfg.ListenAddr, metricsCfg.ListenPort); err != nil { if err := m.Serve(ctx, metricsCfg.ListenAddr, metricsCfg.ListenPort); err != nil {
l.Error("error starting metrics server", err) l.Error("error starting metrics server", err)
} }
}() }()
opmetrics.LaunchBalanceMetrics(ctx, l, registry, "", batchSubmitter.L1Client, batchSubmitter.From) m.StartBalanceMetrics(ctx, l, batchSubmitter.L1Client, batchSubmitter.From)
} }
rpcCfg := cfg.RPCConfig rpcCfg := cfg.RPCConfig
...@@ -95,6 +95,9 @@ func Main(version string, cliCtx *cli.Context) error { ...@@ -95,6 +95,9 @@ func Main(version string, cliCtx *cli.Context) error {
return fmt.Errorf("error starting RPC server: %w", err) return fmt.Errorf("error starting RPC server: %w", err)
} }
m.RecordInfo(version)
m.RecordUp()
interruptChannel := make(chan os.Signal, 1) interruptChannel := make(chan os.Signal, 1)
signal.Notify(interruptChannel, []os.Signal{ signal.Notify(interruptChannel, []os.Signal{
os.Interrupt, os.Interrupt,
......
...@@ -11,6 +11,29 @@ import ( ...@@ -11,6 +11,29 @@ import (
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
) )
var (
ErrZeroMaxFrameSize = errors.New("max frame size cannot be zero")
ErrSmallMaxFrameSize = errors.New("max frame size cannot be less than 23")
ErrInvalidChannelTimeout = errors.New("channel timeout is less than the safety margin")
ErrInputTargetReached = errors.New("target amount of input data reached")
ErrMaxFrameIndex = errors.New("max frame index reached (uint16)")
ErrMaxDurationReached = errors.New("max channel duration reached")
ErrChannelTimeoutClose = errors.New("close to channel timeout")
ErrSeqWindowClose = errors.New("close to sequencer window timeout")
)
type ChannelFullError struct {
Err error
}
func (e *ChannelFullError) Error() string {
return "channel full: " + e.Err.Error()
}
func (e *ChannelFullError) Unwrap() error {
return e.Err
}
type ChannelConfig struct { type ChannelConfig struct {
// Number of epochs (L1 blocks) per sequencing window, including the epoch // Number of epochs (L1 blocks) per sequencing window, including the epoch
// L1 origin block itself // L1 origin block itself
...@@ -48,6 +71,32 @@ type ChannelConfig struct { ...@@ -48,6 +71,32 @@ type ChannelConfig struct {
ApproxComprRatio float64 ApproxComprRatio float64
} }
// Check validates the [ChannelConfig] parameters.
func (cc *ChannelConfig) Check() error {
// The [ChannelTimeout] must be larger than the [SubSafetyMargin].
// Otherwise, new blocks would always be considered timed out.
if cc.ChannelTimeout < cc.SubSafetyMargin {
return ErrInvalidChannelTimeout
}
// If the [MaxFrameSize] is set to 0, the channel builder
// will infinitely loop when trying to create frames in the
// [channelBuilder.OutputFrames] function.
if cc.MaxFrameSize == 0 {
return ErrZeroMaxFrameSize
}
// If the [MaxFrameSize] is set to < 23, the channel out
// will underflow the maxSize variable in the [derive.ChannelOut].
// Since it is of type uint64, it will wrap around to a very large
// number, making the frame size extremely large.
if cc.MaxFrameSize < 23 {
return ErrSmallMaxFrameSize
}
return nil
}
// InputThreshold calculates the input data threshold in bytes from the given // InputThreshold calculates the input data threshold in bytes from the given
// parameters. // parameters.
func (c ChannelConfig) InputThreshold() uint64 { func (c ChannelConfig) InputThreshold() uint64 {
...@@ -87,8 +136,12 @@ type channelBuilder struct { ...@@ -87,8 +136,12 @@ type channelBuilder struct {
blocks []*types.Block blocks []*types.Block
// frames data queue, to be send as txs // frames data queue, to be send as txs
frames []frameData frames []frameData
// total amount of output data of all frames created yet
outputBytes int
} }
// newChannelBuilder creates a new channel builder or returns an error if the
// channel out could not be created.
func newChannelBuilder(cfg ChannelConfig) (*channelBuilder, error) { func newChannelBuilder(cfg ChannelConfig) (*channelBuilder, error) {
co, err := derive.NewChannelOut() co, err := derive.NewChannelOut()
if err != nil { if err != nil {
...@@ -105,11 +158,21 @@ func (c *channelBuilder) ID() derive.ChannelID { ...@@ -105,11 +158,21 @@ func (c *channelBuilder) ID() derive.ChannelID {
return c.co.ID() return c.co.ID()
} }
// InputBytes returns to total amount of input bytes added to the channel. // InputBytes returns the total amount of input bytes added to the channel.
func (c *channelBuilder) InputBytes() int { func (c *channelBuilder) InputBytes() int {
return c.co.InputBytes() return c.co.InputBytes()
} }
// ReadyBytes returns the amount of bytes ready in the compression pipeline to
// output into a frame.
func (c *channelBuilder) ReadyBytes() int {
return c.co.ReadyBytes()
}
func (c *channelBuilder) OutputBytes() int {
return c.outputBytes
}
// Blocks returns a backup list of all blocks that were added to the channel. It // Blocks returns a backup list of all blocks that were added to the channel. It
// can be used in case the channel needs to be rebuilt. // can be used in case the channel needs to be rebuilt.
func (c *channelBuilder) Blocks() []*types.Block { func (c *channelBuilder) Blocks() []*types.Block {
...@@ -133,22 +196,25 @@ func (c *channelBuilder) Reset() error { ...@@ -133,22 +196,25 @@ func (c *channelBuilder) Reset() error {
// AddBlock returns a ChannelFullError if called even though the channel is // AddBlock returns a ChannelFullError if called even though the channel is
// already full. See description of FullErr for details. // already full. See description of FullErr for details.
// //
// AddBlock also returns the L1BlockInfo that got extracted from the block's
// first transaction for subsequent use by the caller.
//
// Call OutputFrames() afterwards to create frames. // Call OutputFrames() afterwards to create frames.
func (c *channelBuilder) AddBlock(block *types.Block) error { func (c *channelBuilder) AddBlock(block *types.Block) (derive.L1BlockInfo, error) {
if c.IsFull() { if c.IsFull() {
return c.FullErr() return derive.L1BlockInfo{}, c.FullErr()
} }
batch, err := derive.BlockToBatch(block) batch, l1info, err := derive.BlockToBatch(block)
if err != nil { if err != nil {
return fmt.Errorf("converting block to batch: %w", err) return l1info, fmt.Errorf("converting block to batch: %w", err)
} }
if _, err = c.co.AddBatch(batch); errors.Is(err, derive.ErrTooManyRLPBytes) { if _, err = c.co.AddBatch(batch); errors.Is(err, derive.ErrTooManyRLPBytes) {
c.setFullErr(err) c.setFullErr(err)
return c.FullErr() return l1info, c.FullErr()
} else if err != nil { } else if err != nil {
return fmt.Errorf("adding block to channel out: %w", err) return l1info, fmt.Errorf("adding block to channel out: %w", err)
} }
c.blocks = append(c.blocks, block) c.blocks = append(c.blocks, block)
c.updateSwTimeout(batch) c.updateSwTimeout(batch)
...@@ -158,7 +224,7 @@ func (c *channelBuilder) AddBlock(block *types.Block) error { ...@@ -158,7 +224,7 @@ func (c *channelBuilder) AddBlock(block *types.Block) error {
// Adding this block still worked, so don't return error, just mark as full // Adding this block still worked, so don't return error, just mark as full
} }
return nil return l1info, nil
} }
// Timeout management // Timeout management
...@@ -215,7 +281,7 @@ func (c *channelBuilder) updateTimeout(timeoutBlockNum uint64, reason error) { ...@@ -215,7 +281,7 @@ func (c *channelBuilder) updateTimeout(timeoutBlockNum uint64, reason error) {
} }
// checkTimeout checks if the channel is timed out at the given block number and // checkTimeout checks if the channel is timed out at the given block number and
// in this case marks the channel as full, if it wasn't full alredy. // in this case marks the channel as full, if it wasn't full already.
func (c *channelBuilder) checkTimeout(blockNum uint64) { func (c *channelBuilder) checkTimeout(blockNum uint64) {
if !c.IsFull() && c.TimedOut(blockNum) { if !c.IsFull() && c.TimedOut(blockNum) {
c.setFullErr(c.timeoutReason) c.setFullErr(c.timeoutReason)
...@@ -330,10 +396,11 @@ func (c *channelBuilder) outputFrame() error { ...@@ -330,10 +396,11 @@ func (c *channelBuilder) outputFrame() error {
} }
frame := frameData{ frame := frameData{
id: txID{chID: c.co.ID(), frameNumber: fn}, id: frameID{chID: c.co.ID(), frameNumber: fn},
data: buf.Bytes(), data: buf.Bytes(),
} }
c.frames = append(c.frames, frame) c.frames = append(c.frames, frame)
c.outputBytes += len(frame.data)
return err // possibly io.EOF (last frame) return err // possibly io.EOF (last frame)
} }
...@@ -371,23 +438,3 @@ func (c *channelBuilder) PushFrame(frame frameData) { ...@@ -371,23 +438,3 @@ func (c *channelBuilder) PushFrame(frame frameData) {
} }
c.frames = append(c.frames, frame) c.frames = append(c.frames, frame)
} }
var (
ErrInputTargetReached = errors.New("target amount of input data reached")
ErrMaxFrameIndex = errors.New("max frame index reached (uint16)")
ErrMaxDurationReached = errors.New("max channel duration reached")
ErrChannelTimeoutClose = errors.New("close to channel timeout")
ErrSeqWindowClose = errors.New("close to sequencer window timeout")
)
type ChannelFullError struct {
Err error
}
func (e *ChannelFullError) Error() string {
return "channel full: " + e.Err.Error()
}
func (e *ChannelFullError) Unwrap() error {
return e.Err
}
package batcher
import (
"bytes"
"errors"
"math"
"math/big"
"math/rand"
"testing"
"time"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
dtest "github.com/ethereum-optimism/optimism/op-node/rollup/derive/test"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/trie"
"github.com/stretchr/testify/require"
)
var defaultTestChannelConfig = ChannelConfig{
SeqWindowSize: 15,
ChannelTimeout: 40,
MaxChannelDuration: 1,
SubSafetyMargin: 4,
MaxFrameSize: 120000,
TargetFrameSize: 100000,
TargetNumFrames: 1,
ApproxComprRatio: 0.4,
}
// TestConfigValidation tests the validation of the [ChannelConfig] struct.
func TestConfigValidation(t *testing.T) {
// Construct a valid config.
validChannelConfig := defaultTestChannelConfig
require.NoError(t, validChannelConfig.Check())
// Set the config to have a zero max frame size.
validChannelConfig.MaxFrameSize = 0
require.ErrorIs(t, validChannelConfig.Check(), ErrZeroMaxFrameSize)
// Set the config to have a max frame size less than 23.
validChannelConfig.MaxFrameSize = 22
require.ErrorIs(t, validChannelConfig.Check(), ErrSmallMaxFrameSize)
// Reset the config and test the Timeout error.
// NOTE: We should be fuzzing these values with the constraint that
// SubSafetyMargin > ChannelTimeout to ensure validation.
validChannelConfig = defaultTestChannelConfig
validChannelConfig.ChannelTimeout = 0
validChannelConfig.SubSafetyMargin = 1
require.ErrorIs(t, validChannelConfig.Check(), ErrInvalidChannelTimeout)
}
// addMiniBlock adds a minimal valid L2 block to the channel builder using the
// channelBuilder.AddBlock method.
func addMiniBlock(cb *channelBuilder) error {
a := newMiniL2Block(0)
_, err := cb.AddBlock(a)
return err
}
// newMiniL2Block returns a minimal L2 block with a minimal valid L1InfoDeposit
// transaction as first transaction. Both blocks are minimal in the sense that
// most fields are left at defaults or are unset.
//
// If numTx > 0, that many empty DynamicFeeTxs will be added to the txs.
func newMiniL2Block(numTx int) *types.Block {
return newMiniL2BlockWithNumberParent(numTx, new(big.Int), (common.Hash{}))
}
// newMiniL2BlockWithNumberParent returns a minimal L2 block with a minimal valid L1InfoDeposit
// transaction as first transaction. Both blocks are minimal in the sense that
// most fields are left at defaults or are unset. Block number and parent hash
// will be set to the given parameters number and parent.
//
// If numTx > 0, that many empty DynamicFeeTxs will be added to the txs.
func newMiniL2BlockWithNumberParent(numTx int, number *big.Int, parent common.Hash) *types.Block {
l1Block := types.NewBlock(&types.Header{
BaseFee: big.NewInt(10),
Difficulty: common.Big0,
Number: big.NewInt(100),
}, nil, nil, nil, trie.NewStackTrie(nil))
l1InfoTx, err := derive.L1InfoDeposit(0, l1Block, eth.SystemConfig{}, false)
if err != nil {
panic(err)
}
txs := make([]*types.Transaction, 0, 1+numTx)
txs = append(txs, types.NewTx(l1InfoTx))
for i := 0; i < numTx; i++ {
txs = append(txs, types.NewTx(&types.DynamicFeeTx{}))
}
return types.NewBlock(&types.Header{
Number: number,
ParentHash: parent,
}, txs, nil, nil, trie.NewStackTrie(nil))
}
// addTooManyBlocks adds blocks to the channel until it hits an error,
// which is presumably ErrTooManyRLPBytes.
func addTooManyBlocks(cb *channelBuilder) error {
for i := 0; i < 10_000; i++ {
block := newMiniL2Block(100)
_, err := cb.AddBlock(block)
if err != nil {
return err
}
}
return nil
}
// FuzzDurationTimeoutZeroMaxChannelDuration ensures that whenever the MaxChannelDuration
// is set to 0, the channel builder cannot have a duration timeout.
func FuzzDurationTimeoutZeroMaxChannelDuration(f *testing.F) {
for i := range [10]int{} {
f.Add(uint64(i))
}
f.Fuzz(func(t *testing.T, l1BlockNum uint64) {
channelConfig := defaultTestChannelConfig
channelConfig.MaxChannelDuration = 0
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
cb.timeout = 0
cb.updateDurationTimeout(l1BlockNum)
require.False(t, cb.TimedOut(l1BlockNum))
})
}
// FuzzDurationZero ensures that whenever the MaxChannelDuration
// is not set to 0, the channel builder will always have a duration timeout
// as long as the channel builder's timeout is set to 0.
func FuzzDurationZero(f *testing.F) {
for i := range [10]int{} {
f.Add(uint64(i), uint64(i))
}
f.Fuzz(func(t *testing.T, l1BlockNum uint64, maxChannelDuration uint64) {
if maxChannelDuration == 0 {
t.Skip("Max channel duration cannot be 0")
}
// Create the channel builder
channelConfig := defaultTestChannelConfig
channelConfig.MaxChannelDuration = maxChannelDuration
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Whenever the timeout is set to 0, the channel builder should have a duration timeout
cb.timeout = 0
cb.updateDurationTimeout(l1BlockNum)
cb.checkTimeout(l1BlockNum + maxChannelDuration)
require.ErrorIs(t, cb.FullErr(), ErrMaxDurationReached)
})
}
// FuzzDurationTimeoutMaxChannelDuration ensures that whenever the MaxChannelDuration
// is not set to 0, the channel builder will always have a duration timeout
// as long as the channel builder's timeout is greater than the target block number.
func FuzzDurationTimeoutMaxChannelDuration(f *testing.F) {
// Set multiple seeds in case fuzzing isn't explicitly used
for i := range [10]int{} {
f.Add(uint64(i), uint64(i), uint64(i))
}
f.Fuzz(func(t *testing.T, l1BlockNum uint64, maxChannelDuration uint64, timeout uint64) {
if maxChannelDuration == 0 {
t.Skip("Max channel duration cannot be 0")
}
// Create the channel builder
channelConfig := defaultTestChannelConfig
channelConfig.MaxChannelDuration = maxChannelDuration
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Whenever the timeout is greater than the l1BlockNum,
// the channel builder should have a duration timeout
cb.timeout = timeout
cb.updateDurationTimeout(l1BlockNum)
if timeout > l1BlockNum+maxChannelDuration {
// Notice: we cannot call this outside of the if statement
// because it would put the channel builder in an invalid state.
// That is, where the channel builder has a value set for the timeout
// with no timeoutReason. This subsequently causes a panic when
// a nil timeoutReason is used as an error (eg when calling FullErr).
cb.checkTimeout(l1BlockNum + maxChannelDuration)
require.ErrorIs(t, cb.FullErr(), ErrMaxDurationReached)
} else {
require.NoError(t, cb.FullErr())
}
})
}
// FuzzChannelCloseTimeout ensures that the channel builder has a [ErrChannelTimeoutClose]
// as long as the timeout constraint is met and the builder's timeout is greater than
// the calculated timeout
func FuzzChannelCloseTimeout(f *testing.F) {
// Set multiple seeds in case fuzzing isn't explicitly used
for i := range [10]int{} {
f.Add(uint64(i), uint64(i), uint64(i), uint64(i*5))
}
f.Fuzz(func(t *testing.T, l1BlockNum uint64, channelTimeout uint64, subSafetyMargin uint64, timeout uint64) {
// Create the channel builder
channelConfig := defaultTestChannelConfig
channelConfig.ChannelTimeout = channelTimeout
channelConfig.SubSafetyMargin = subSafetyMargin
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Check the timeout
cb.timeout = timeout
cb.FramePublished(l1BlockNum)
calculatedTimeout := l1BlockNum + channelTimeout - subSafetyMargin
if timeout > calculatedTimeout && calculatedTimeout != 0 {
cb.checkTimeout(calculatedTimeout)
require.ErrorIs(t, cb.FullErr(), ErrChannelTimeoutClose)
} else {
require.NoError(t, cb.FullErr())
}
})
}
// FuzzChannelZeroCloseTimeout ensures that the channel builder has a [ErrChannelTimeoutClose]
// as long as the timeout constraint is met and the builder's timeout is set to zero.
func FuzzChannelZeroCloseTimeout(f *testing.F) {
// Set multiple seeds in case fuzzing isn't explicitly used
for i := range [10]int{} {
f.Add(uint64(i), uint64(i), uint64(i))
}
f.Fuzz(func(t *testing.T, l1BlockNum uint64, channelTimeout uint64, subSafetyMargin uint64) {
// Create the channel builder
channelConfig := defaultTestChannelConfig
channelConfig.ChannelTimeout = channelTimeout
channelConfig.SubSafetyMargin = subSafetyMargin
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Check the timeout
cb.timeout = 0
cb.FramePublished(l1BlockNum)
calculatedTimeout := l1BlockNum + channelTimeout - subSafetyMargin
cb.checkTimeout(calculatedTimeout)
if cb.timeout != 0 {
require.ErrorIs(t, cb.FullErr(), ErrChannelTimeoutClose)
}
})
}
// FuzzSeqWindowClose ensures that the channel builder has a [ErrSeqWindowClose]
// as long as the timeout constraint is met and the builder's timeout is greater than
// the calculated timeout
func FuzzSeqWindowClose(f *testing.F) {
// Set multiple seeds in case fuzzing isn't explicitly used
for i := range [10]int{} {
f.Add(uint64(i), uint64(i), uint64(i), uint64(i*5))
}
f.Fuzz(func(t *testing.T, epochNum uint64, seqWindowSize uint64, subSafetyMargin uint64, timeout uint64) {
// Create the channel builder
channelConfig := defaultTestChannelConfig
channelConfig.SeqWindowSize = seqWindowSize
channelConfig.SubSafetyMargin = subSafetyMargin
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Check the timeout
cb.timeout = timeout
cb.updateSwTimeout(&derive.BatchData{
BatchV1: derive.BatchV1{
EpochNum: rollup.Epoch(epochNum),
},
})
calculatedTimeout := epochNum + seqWindowSize - subSafetyMargin
if timeout > calculatedTimeout && calculatedTimeout != 0 {
cb.checkTimeout(calculatedTimeout)
require.ErrorIs(t, cb.FullErr(), ErrSeqWindowClose)
} else {
require.NoError(t, cb.FullErr())
}
})
}
// FuzzSeqWindowZeroTimeoutClose ensures that the channel builder has a [ErrSeqWindowClose]
// as long as the timeout constraint is met and the builder's timeout is set to zero.
func FuzzSeqWindowZeroTimeoutClose(f *testing.F) {
// Set multiple seeds in case fuzzing isn't explicitly used
for i := range [10]int{} {
f.Add(uint64(i), uint64(i), uint64(i))
}
f.Fuzz(func(t *testing.T, epochNum uint64, seqWindowSize uint64, subSafetyMargin uint64) {
// Create the channel builder
channelConfig := defaultTestChannelConfig
channelConfig.SeqWindowSize = seqWindowSize
channelConfig.SubSafetyMargin = subSafetyMargin
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Check the timeout
cb.timeout = 0
cb.updateSwTimeout(&derive.BatchData{
BatchV1: derive.BatchV1{
EpochNum: rollup.Epoch(epochNum),
},
})
calculatedTimeout := epochNum + seqWindowSize - subSafetyMargin
cb.checkTimeout(calculatedTimeout)
if cb.timeout != 0 {
require.ErrorIs(t, cb.FullErr(), ErrSeqWindowClose, "Sequence window close should be reached")
}
})
}
// TestBuilderNextFrame tests calling NextFrame on a ChannelBuilder with only one frame
func TestBuilderNextFrame(t *testing.T) {
channelConfig := defaultTestChannelConfig
// Create a new channel builder
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Mock the internals of `channelBuilder.outputFrame`
// to construct a single frame
co := cb.co
var buf bytes.Buffer
fn, err := co.OutputFrame(&buf, channelConfig.MaxFrameSize)
require.NoError(t, err)
// Push one frame into the channel builder
expectedTx := txID{chID: co.ID(), frameNumber: fn}
expectedBytes := buf.Bytes()
frameData := frameData{
id: frameID{
chID: co.ID(),
frameNumber: fn,
},
data: expectedBytes,
}
cb.PushFrame(frameData)
// There should only be 1 frame in the channel builder
require.Equal(t, 1, cb.NumFrames())
// We should be able to get the next frame
constructedFrame := cb.NextFrame()
require.Equal(t, expectedTx, constructedFrame.id)
require.Equal(t, expectedBytes, constructedFrame.data)
require.Equal(t, 0, cb.NumFrames())
// The next call should panic since the length of frames is 0
require.PanicsWithValue(t, "no next frame", func() { cb.NextFrame() })
}
// TestBuilderWrongFramePanic tests that a panic is thrown when a frame is pushed with a mismatched channel ID
func TestBuilderWrongFramePanic(t *testing.T) {
channelConfig := defaultTestChannelConfig
// Construct a channel builder
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Mock the internals of `channelBuilder.outputFrame`
// to construct a single frame
co, err := derive.NewChannelOut()
require.NoError(t, err)
var buf bytes.Buffer
fn, err := co.OutputFrame(&buf, channelConfig.MaxFrameSize)
require.NoError(t, err)
// The frame push should panic since we constructed a new channel out
// so the channel out id won't match
require.PanicsWithValue(t, "wrong channel", func() {
frame := frameData{
id: frameID{
chID: co.ID(),
frameNumber: fn,
},
data: buf.Bytes(),
}
cb.PushFrame(frame)
})
}
// TestOutputFrames tests the OutputFrames function
func TestOutputFrames(t *testing.T) {
channelConfig := defaultTestChannelConfig
channelConfig.MaxFrameSize = 2
// Construct the channel builder
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
require.False(t, cb.IsFull())
require.Equal(t, 0, cb.NumFrames())
// Calling OutputFrames without having called [AddBlock]
// should return no error
require.NoError(t, cb.OutputFrames())
// There should be no ready bytes yet
readyBytes := cb.co.ReadyBytes()
require.Equal(t, 0, readyBytes)
// Let's add a block
err = addMiniBlock(cb)
require.NoError(t, err)
// Check how many ready bytes
readyBytes = cb.co.ReadyBytes()
require.Equal(t, 2, readyBytes)
require.Equal(t, 0, cb.NumFrames())
// The channel should not be full,
// but we want to output the frames for testing anyway
isFull := cb.IsFull()
require.False(t, isFull)
// Since we manually set the max frame size to 2,
// OutputFrames should now produce a frame from the ready bytes
err = cb.OutputFrames()
require.NoError(t, err)
// There should be one frame in the channel builder now
require.Equal(t, 1, cb.NumFrames())
// There should no longer be any ready bytes
readyBytes = cb.co.ReadyBytes()
require.Equal(t, 0, readyBytes)
}
// TestMaxRLPBytesPerChannel tests the [channelBuilder.OutputFrames]
// function errors when the max RLP bytes per channel is reached.
func TestMaxRLPBytesPerChannel(t *testing.T) {
t.Parallel()
channelConfig := defaultTestChannelConfig
channelConfig.MaxFrameSize = derive.MaxRLPBytesPerChannel * 2
channelConfig.TargetFrameSize = derive.MaxRLPBytesPerChannel * 2
channelConfig.ApproxComprRatio = 1
// Construct the channel builder
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Add a block that overflows the [ChannelOut]
err = addTooManyBlocks(cb)
require.ErrorIs(t, err, derive.ErrTooManyRLPBytes)
}
// TestOutputFramesMaxFrameIndex tests the [channelBuilder.OutputFrames]
// function errors when the max frame index is reached.
func TestOutputFramesMaxFrameIndex(t *testing.T) {
channelConfig := defaultTestChannelConfig
channelConfig.MaxFrameSize = 1
channelConfig.TargetNumFrames = math.MaxInt
channelConfig.TargetFrameSize = 1
channelConfig.ApproxComprRatio = 0
// Continuously add blocks until the max frame index is reached
// This should cause the [channelBuilder.OutputFrames] function
// to error
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
require.False(t, cb.IsFull())
require.Equal(t, 0, cb.NumFrames())
for {
lBlock := types.NewBlock(&types.Header{
BaseFee: common.Big0,
Difficulty: common.Big0,
Number: common.Big0,
}, nil, nil, nil, trie.NewStackTrie(nil))
l1InfoTx, _ := derive.L1InfoDeposit(0, lBlock, eth.SystemConfig{}, false)
txs := []*types.Transaction{types.NewTx(l1InfoTx)}
a := types.NewBlock(&types.Header{
Number: big.NewInt(0),
}, txs, nil, nil, trie.NewStackTrie(nil))
_, err = cb.AddBlock(a)
if cb.IsFull() {
fullErr := cb.FullErr()
require.ErrorIs(t, fullErr, ErrMaxFrameIndex)
break
}
require.NoError(t, err)
_ = cb.OutputFrames()
// Flushing so we can construct new frames
_ = cb.co.Flush()
}
}
// TestBuilderAddBlock tests the AddBlock function
func TestBuilderAddBlock(t *testing.T) {
channelConfig := defaultTestChannelConfig
// Lower the max frame size so that we can batch
channelConfig.MaxFrameSize = 2
// Configure the Input Threshold params so we observe a full channel
// In reality, we only need the input bytes (74) below to be greater than
// or equal to the input threshold (3 * 2) / 1 = 6
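// (assuming InputThreshold is computed as TargetFrameSize * TargetNumFrames / ApproxComprRatio)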
channelConfig.TargetFrameSize = 3
channelConfig.TargetNumFrames = 2
channelConfig.ApproxComprRatio = 1
// Construct the channel builder
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Add a nonsense block to the channel builder
err = addMiniBlock(cb)
require.NoError(t, err)
// Check the fields updated by the AddBlock function
require.Equal(t, 74, cb.co.InputBytes())
require.Equal(t, 1, len(cb.blocks))
require.Equal(t, 0, len(cb.frames))
require.True(t, cb.IsFull())
// Since the channel output is full, the next call to AddBlock
// should return the channel out full error
err = addMiniBlock(cb)
require.ErrorIs(t, err, ErrInputTargetReached)
}
// TestBuilderReset tests the Reset function
func TestBuilderReset(t *testing.T) {
channelConfig := defaultTestChannelConfig
// Lower the max frame size so that we can batch
channelConfig.MaxFrameSize = 2
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Add a nonsense block to the channel builder
err = addMiniBlock(cb)
require.NoError(t, err)
// Check the fields that the Reset function will later clear
require.Equal(t, 1, len(cb.blocks))
require.Equal(t, 0, len(cb.frames))
// Timeout should be updated in the AddBlock internal call to `updateSwTimeout`
timeout := uint64(100) + cb.cfg.SeqWindowSize - cb.cfg.SubSafetyMargin
require.Equal(t, timeout, cb.timeout)
require.NoError(t, cb.fullErr)
// Output frames so the channel builder's frames slice is populated
err = cb.OutputFrames()
require.NoError(t, err)
// Add another block to increment the block count
err = addMiniBlock(cb)
require.NoError(t, err)
// Check the state again before the reset
require.Equal(t, 2, len(cb.blocks))
require.Equal(t, 1, len(cb.frames))
require.Equal(t, timeout, cb.timeout)
require.NoError(t, cb.fullErr)
// Reset the channel builder
err = cb.Reset()
require.NoError(t, err)
// Check the fields reset in the Reset function
require.Equal(t, 0, len(cb.blocks))
require.Equal(t, 0, len(cb.frames))
require.Equal(t, uint64(0), cb.timeout)
require.NoError(t, cb.fullErr)
require.Equal(t, 0, cb.co.InputBytes())
require.Equal(t, 0, cb.co.ReadyBytes())
}
// TestBuilderRegisterL1Block tests the RegisterL1Block function
func TestBuilderRegisterL1Block(t *testing.T) {
channelConfig := defaultTestChannelConfig
// Construct the channel builder
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Assert the initial values before registering an L1 block
require.Equal(t, uint64(1), channelConfig.MaxChannelDuration)
require.Equal(t, uint64(0), cb.timeout)
// Register a new L1 block
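// Assuming RegisterL1Block sets timeout = l1BlockNum + MaxChannelDuration when
// MaxChannelDuration is non-zero, registering block 100 should yield 100 + 1 = 101.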
cb.RegisterL1Block(uint64(100))
// Assert params modified in RegisterL1Block
require.Equal(t, uint64(1), channelConfig.MaxChannelDuration)
require.Equal(t, uint64(101), cb.timeout)
}
// TestBuilderRegisterL1BlockZeroMaxChannelDuration tests the RegisterL1Block function
func TestBuilderRegisterL1BlockZeroMaxChannelDuration(t *testing.T) {
channelConfig := defaultTestChannelConfig
// Set the max channel duration to 0
channelConfig.MaxChannelDuration = 0
// Construct the channel builder
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Assert the initial values before registering an L1 block
require.Equal(t, uint64(0), channelConfig.MaxChannelDuration)
require.Equal(t, uint64(0), cb.timeout)
// Register a new L1 block
cb.RegisterL1Block(uint64(100))
// Since the max channel duration is set to 0,
// the L1 block register should not update the timeout
require.Equal(t, uint64(0), channelConfig.MaxChannelDuration)
require.Equal(t, uint64(0), cb.timeout)
}
// TestFramePublished tests the FramePublished function
func TestFramePublished(t *testing.T) {
channelConfig := defaultTestChannelConfig
// Construct the channel builder
cb, err := newChannelBuilder(channelConfig)
require.NoError(t, err)
// Let's say the block number is fed in as 100
// and the channel timeout is 1000
l1BlockNum := uint64(100)
cb.cfg.ChannelTimeout = uint64(1000)
cb.cfg.SubSafetyMargin = 100
// Publishing the frame should then update the timeout
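// Assuming FramePublished sets timeout = l1BlockNum + ChannelTimeout - SubSafetyMargin,
// that is 100 + 1000 - 100 = 1000 here.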
cb.FramePublished(l1BlockNum)
// Now the timeout will be 1000
require.Equal(t, uint64(1000), cb.timeout)
}
func TestChannelBuilder_InputBytes(t *testing.T) {
require := require.New(t)
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
cb, _ := defaultChannelBuilderSetup(t)
require.Zero(cb.InputBytes())
var l int
for i := 0; i < 5; i++ {
block := newMiniL2Block(rng.Intn(32))
l += blockBatchRlpSize(t, block)
_, err := cb.AddBlock(block)
require.NoError(err)
require.Equal(cb.InputBytes(), l)
}
}
func TestChannelBuilder_OutputBytes(t *testing.T) {
require := require.New(t)
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
cfg := defaultTestChannelConfig
cfg.TargetFrameSize = 1000
cfg.MaxFrameSize = 1000
cfg.TargetNumFrames = 16
cfg.ApproxComprRatio = 1.0
cb, err := newChannelBuilder(cfg)
require.NoError(err, "newChannelBuilder")
require.Zero(cb.OutputBytes())
for {
block, _ := dtest.RandomL2Block(rng, rng.Intn(32))
_, err := cb.AddBlock(block)
if errors.Is(err, ErrInputTargetReached) {
break
}
require.NoError(err)
}
require.NoError(cb.OutputFrames())
require.True(cb.IsFull())
require.Greater(cb.NumFrames(), 1)
var flen int
for cb.HasFrame() {
f := cb.NextFrame()
flen += len(f.data)
}
require.Equal(cb.OutputBytes(), flen)
}
func defaultChannelBuilderSetup(t *testing.T) (*channelBuilder, ChannelConfig) {
t.Helper()
cfg := defaultTestChannelConfig
cb, err := newChannelBuilder(cfg)
require.NoError(t, err, "newChannelBuilder")
return cb, cfg
}
func blockBatchRlpSize(t *testing.T, b *types.Block) int {
t.Helper()
batch, _, err := derive.BlockToBatch(b)
require.NoError(t, err)
var buf bytes.Buffer
require.NoError(t, batch.EncodeRLP(&buf), "RLP-encoding batch")
return buf.Len()
}
package batcher_test
import (
"math"
"testing"
"github.com/ethereum-optimism/optimism/op-batcher/batcher"
"github.com/stretchr/testify/require"
)
// TestInputThreshold tests the [ChannelConfig.InputThreshold]
// function using a table-driven testing approach.
func TestInputThreshold(t *testing.T) {
type testInput struct {
TargetFrameSize uint64
TargetNumFrames int
ApproxComprRatio float64
}
type test struct {
input testInput
assertion func(uint64)
}
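// The expected outputs below assume InputThreshold is computed roughly as
// uint64(float64(TargetNumFrames) * float64(TargetFrameSize) / ApproxComprRatio),
// e.g. one frame of size 1 at a 0.4 compression ratio gives 1 / 0.4 = 2.5, truncated to 2.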
// Construct test cases that test the boundary conditions
tests := []test{
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(2), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 100000,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(250_000), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 1,
},
assertion: func(output uint64) {
require.Equal(t, uint64(1), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 2,
},
assertion: func(output uint64) {
require.Equal(t, uint64(0), output)
},
},
{
input: testInput{
TargetFrameSize: 100000,
TargetNumFrames: 1,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(250_000), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 100000,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(250_000), output)
},
},
{
input: testInput{
TargetFrameSize: 100000,
TargetNumFrames: 100000,
ApproxComprRatio: 0.4,
},
assertion: func(output uint64) {
require.Equal(t, uint64(25_000_000_000), output)
},
},
{
input: testInput{
TargetFrameSize: 1,
TargetNumFrames: 1,
ApproxComprRatio: 0.000001,
},
assertion: func(output uint64) {
require.Equal(t, uint64(1_000_000), output)
},
},
{
input: testInput{
TargetFrameSize: 0,
TargetNumFrames: 0,
ApproxComprRatio: 0,
},
assertion: func(output uint64) {
// Need to allow for NaN depending on the machine architecture
require.True(t, output == uint64(0) || output == uint64(math.NaN()))
},
},
}
// Validate each test case
for _, tt := range tests {
config := batcher.ChannelConfig{
TargetFrameSize: tt.input.TargetFrameSize,
TargetNumFrames: tt.input.TargetNumFrames,
ApproxComprRatio: tt.input.ApproxComprRatio,
}
got := config.InputThreshold()
tt.assertion(got)
}
}
...@@ -6,7 +6,9 @@ import ( ...@@ -6,7 +6,9 @@ import (
"io" "io"
"math" "math"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
...@@ -23,6 +25,7 @@ var ErrReorg = errors.New("block does not extend existing chain") ...@@ -23,6 +25,7 @@ var ErrReorg = errors.New("block does not extend existing chain")
// Functions on channelManager are not safe for concurrent access. // Functions on channelManager are not safe for concurrent access.
type channelManager struct { type channelManager struct {
log log.Logger log log.Logger
metr metrics.Metricer
cfg ChannelConfig cfg ChannelConfig
// All blocks since the last request for new tx data. // All blocks since the last request for new tx data.
...@@ -40,10 +43,12 @@ type channelManager struct { ...@@ -40,10 +43,12 @@ type channelManager struct {
confirmedTransactions map[txID]eth.BlockID confirmedTransactions map[txID]eth.BlockID
} }
func NewChannelManager(log log.Logger, cfg ChannelConfig) *channelManager { func NewChannelManager(log log.Logger, metr metrics.Metricer, cfg ChannelConfig) *channelManager {
return &channelManager{ return &channelManager{
log: log, log: log,
metr: metr,
cfg: cfg, cfg: cfg,
pendingTransactions: make(map[txID]txData), pendingTransactions: make(map[txID]txData),
confirmedTransactions: make(map[txID]eth.BlockID), confirmedTransactions: make(map[txID]eth.BlockID),
} }
...@@ -71,6 +76,8 @@ func (s *channelManager) TxFailed(id txID) { ...@@ -71,6 +76,8 @@ func (s *channelManager) TxFailed(id txID) {
} else { } else {
s.log.Warn("unknown transaction marked as failed", "id", id) s.log.Warn("unknown transaction marked as failed", "id", id)
} }
s.metr.RecordBatchTxFailed()
} }
// TxConfirmed marks a transaction as confirmed on L1. Unfortunately even if all frames in // TxConfirmed marks a transaction as confirmed on L1. Unfortunately even if all frames in
...@@ -78,7 +85,8 @@ func (s *channelManager) TxFailed(id txID) { ...@@ -78,7 +85,8 @@ func (s *channelManager) TxFailed(id txID) {
// resubmitted. // resubmitted.
// This function may reset the pending channel if the pending channel has timed out. // This function may reset the pending channel if the pending channel has timed out.
func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) { func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) {
s.log.Trace("marked transaction as confirmed", "id", id, "block", inclusionBlock) s.metr.RecordBatchTxSubmitted()
s.log.Debug("marked transaction as confirmed", "id", id, "block", inclusionBlock)
if _, ok := s.pendingTransactions[id]; !ok { if _, ok := s.pendingTransactions[id]; !ok {
s.log.Warn("unknown transaction marked as confirmed", "id", id, "block", inclusionBlock) s.log.Warn("unknown transaction marked as confirmed", "id", id, "block", inclusionBlock)
// TODO: This can occur if we clear the channel while there are still pending transactions // TODO: This can occur if we clear the channel while there are still pending transactions
...@@ -92,13 +100,15 @@ func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) { ...@@ -92,13 +100,15 @@ func (s *channelManager) TxConfirmed(id txID, inclusionBlock eth.BlockID) {
// If this channel timed out, put the pending blocks back into the local saved blocks // If this channel timed out, put the pending blocks back into the local saved blocks
// and then reset this state so it can try to build a new channel. // and then reset this state so it can try to build a new channel.
if s.pendingChannelIsTimedOut() { if s.pendingChannelIsTimedOut() {
s.log.Warn("Channel timed out", "chID", s.pendingChannel.ID()) s.metr.RecordChannelTimedOut(s.pendingChannel.ID())
s.log.Warn("Channel timed out", "id", s.pendingChannel.ID())
s.blocks = append(s.pendingChannel.Blocks(), s.blocks...) s.blocks = append(s.pendingChannel.Blocks(), s.blocks...)
s.clearPendingChannel() s.clearPendingChannel()
} }
// If we are done with this channel, record that. // If we are done with this channel, record that.
if s.pendingChannelIsFullySubmitted() { if s.pendingChannelIsFullySubmitted() {
s.log.Info("Channel is fully submitted", "chID", s.pendingChannel.ID()) s.metr.RecordChannelFullySubmitted(s.pendingChannel.ID())
s.log.Info("Channel is fully submitted", "id", s.pendingChannel.ID())
s.clearPendingChannel() s.clearPendingChannel()
} }
} }
...@@ -194,8 +204,8 @@ func (s *channelManager) TxData(l1Head eth.BlockID) (txData, error) { ...@@ -194,8 +204,8 @@ func (s *channelManager) TxData(l1Head eth.BlockID) (txData, error) {
// all pending blocks be included in this channel for submission. // all pending blocks be included in this channel for submission.
s.registerL1Block(l1Head) s.registerL1Block(l1Head)
if err := s.pendingChannel.OutputFrames(); err != nil { if err := s.outputFrames(); err != nil {
return txData{}, fmt.Errorf("creating frames with channel builder: %w", err) return txData{}, err
} }
return s.nextTxData() return s.nextTxData()
...@@ -211,7 +221,11 @@ func (s *channelManager) ensurePendingChannel(l1Head eth.BlockID) error { ...@@ -211,7 +221,11 @@ func (s *channelManager) ensurePendingChannel(l1Head eth.BlockID) error {
return fmt.Errorf("creating new channel: %w", err) return fmt.Errorf("creating new channel: %w", err)
} }
s.pendingChannel = cb s.pendingChannel = cb
s.log.Info("Created channel", "chID", cb.ID(), "l1Head", l1Head) s.log.Info("Created channel",
"id", cb.ID(),
"l1Head", l1Head,
"blocks_pending", len(s.blocks))
s.metr.RecordChannelOpened(cb.ID(), len(s.blocks))
return nil return nil
} }
...@@ -229,28 +243,27 @@ func (s *channelManager) registerL1Block(l1Head eth.BlockID) { ...@@ -229,28 +243,27 @@ func (s *channelManager) registerL1Block(l1Head eth.BlockID) {
// processBlocks adds blocks from the blocks queue to the pending channel until // processBlocks adds blocks from the blocks queue to the pending channel until
// either the queue got exhausted or the channel is full. // either the queue got exhausted or the channel is full.
func (s *channelManager) processBlocks() error { func (s *channelManager) processBlocks() error {
var blocksAdded int var (
var _chFullErr *ChannelFullError // throw away, just for type checking blocksAdded int
_chFullErr *ChannelFullError // throw away, just for type checking
latestL2ref eth.L2BlockRef
)
for i, block := range s.blocks { for i, block := range s.blocks {
if err := s.pendingChannel.AddBlock(block); errors.As(err, &_chFullErr) { l1info, err := s.pendingChannel.AddBlock(block)
if errors.As(err, &_chFullErr) {
// current block didn't get added because channel is already full // current block didn't get added because channel is already full
break break
} else if err != nil { } else if err != nil {
return fmt.Errorf("adding block[%d] to channel builder: %w", i, err) return fmt.Errorf("adding block[%d] to channel builder: %w", i, err)
} }
blocksAdded += 1 blocksAdded += 1
latestL2ref = l2BlockRefFromBlockAndL1Info(block, l1info)
// current block got added but channel is now full // current block got added but channel is now full
if s.pendingChannel.IsFull() { if s.pendingChannel.IsFull() {
break break
} }
} }
s.log.Debug("Added blocks to channel",
"blocks_added", blocksAdded,
"channel_full", s.pendingChannel.IsFull(),
"blocks_pending", len(s.blocks)-blocksAdded,
"input_bytes", s.pendingChannel.InputBytes(),
)
if blocksAdded == len(s.blocks) { if blocksAdded == len(s.blocks) {
// all blocks processed, reuse slice // all blocks processed, reuse slice
s.blocks = s.blocks[:0] s.blocks = s.blocks[:0]
...@@ -258,6 +271,53 @@ func (s *channelManager) processBlocks() error { ...@@ -258,6 +271,53 @@ func (s *channelManager) processBlocks() error {
// remove processed blocks // remove processed blocks
s.blocks = s.blocks[blocksAdded:] s.blocks = s.blocks[blocksAdded:]
} }
s.metr.RecordL2BlocksAdded(latestL2ref,
blocksAdded,
len(s.blocks),
s.pendingChannel.InputBytes(),
s.pendingChannel.ReadyBytes())
s.log.Debug("Added blocks to channel",
"blocks_added", blocksAdded,
"blocks_pending", len(s.blocks),
"channel_full", s.pendingChannel.IsFull(),
"input_bytes", s.pendingChannel.InputBytes(),
"ready_bytes", s.pendingChannel.ReadyBytes(),
)
return nil
}
func (s *channelManager) outputFrames() error {
if err := s.pendingChannel.OutputFrames(); err != nil {
return fmt.Errorf("creating frames with channel builder: %w", err)
}
if !s.pendingChannel.IsFull() {
return nil
}
inBytes, outBytes := s.pendingChannel.InputBytes(), s.pendingChannel.OutputBytes()
s.metr.RecordChannelClosed(
s.pendingChannel.ID(),
len(s.blocks),
s.pendingChannel.NumFrames(),
inBytes,
outBytes,
s.pendingChannel.FullErr(),
)
var comprRatio float64
if inBytes > 0 {
comprRatio = float64(outBytes) / float64(inBytes)
}
s.log.Info("Channel closed",
"id", s.pendingChannel.ID(),
"blocks_pending", len(s.blocks),
"num_frames", s.pendingChannel.NumFrames(),
"input_bytes", inBytes,
"output_bytes", outBytes,
"full_reason", s.pendingChannel.FullErr(),
"compr_ratio", comprRatio,
)
return nil return nil
} }
...@@ -273,3 +333,14 @@ func (s *channelManager) AddL2Block(block *types.Block) error { ...@@ -273,3 +333,14 @@ func (s *channelManager) AddL2Block(block *types.Block) error {
return nil return nil
} }
func l2BlockRefFromBlockAndL1Info(block *types.Block, l1info derive.L1BlockInfo) eth.L2BlockRef {
return eth.L2BlockRef{
Hash: block.Hash(),
Number: block.NumberU64(),
ParentHash: block.ParentHash(),
Time: block.Time(),
L1Origin: eth.BlockID{Hash: l1info.BlockHash, Number: l1info.Number},
SequenceNumber: l1info.SequenceNumber,
}
}
package batcher_test package batcher
import ( import (
"io" "io"
...@@ -7,7 +7,7 @@ import ( ...@@ -7,7 +7,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/ethereum-optimism/optimism/op-batcher/batcher" "github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive" "github.com/ethereum-optimism/optimism/op-node/rollup/derive"
derivetest "github.com/ethereum-optimism/optimism/op-node/rollup/derive/test" derivetest "github.com/ethereum-optimism/optimism/op-node/rollup/derive/test"
...@@ -15,15 +15,59 @@ import ( ...@@ -15,15 +15,59 @@ import (
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/trie"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
// TestPendingChannelTimeout tests that the channel manager
// correctly identifies when a pending channel is timed out.
func TestPendingChannelTimeout(t *testing.T) {
// Create a new channel manager with a ChannelTimeout
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{
ChannelTimeout: 100,
})
// Pending channel is nil so it cannot be timed out
timeout := m.pendingChannelIsTimedOut()
require.False(t, timeout)
// Set the pending channel
err := m.ensurePendingChannel(eth.BlockID{})
require.NoError(t, err)
// There are no confirmed transactions so
// the pending channel cannot be timed out
timeout = m.pendingChannelIsTimedOut()
require.False(t, timeout)
// Manually set confirmed transactions
// to avoid other methods clearing state
m.confirmedTransactions[frameID{frameNumber: 0}] = eth.BlockID{Number: 0}
m.confirmedTransactions[frameID{frameNumber: 1}] = eth.BlockID{Number: 99}
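// Assuming the timeout check compares the spread of confirmed inclusion blocks against
// ChannelTimeout: 99 - 0 = 99 < 100, so the channel is not timed out yet.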
// Since the ChannelTimeout is 100, the
// pending channel should not be timed out
timeout = m.pendingChannelIsTimedOut()
require.False(t, timeout)
// Add a confirmed transaction with a higher number
// than the ChannelTimeout
m.confirmedTransactions[frameID{
frameNumber: 2,
}] = eth.BlockID{
Number: 101,
}
// Now the pending channel should be timed out
timeout = m.pendingChannelIsTimedOut()
require.True(t, timeout)
}
// TestChannelManagerReturnsErrReorg ensures that the channel manager // TestChannelManagerReturnsErrReorg ensures that the channel manager
// detects a reorg when it has cached L1 blocks. // detects a reorg when it has cached L1 blocks.
func TestChannelManagerReturnsErrReorg(t *testing.T) { func TestChannelManagerReturnsErrReorg(t *testing.T) {
log := testlog.Logger(t, log.LvlCrit) log := testlog.Logger(t, log.LvlCrit)
m := batcher.NewChannelManager(log, batcher.ChannelConfig{}) m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{})
a := types.NewBlock(&types.Header{ a := types.NewBlock(&types.Header{
Number: big.NewInt(0), Number: big.NewInt(0),
...@@ -48,36 +92,26 @@ func TestChannelManagerReturnsErrReorg(t *testing.T) { ...@@ -48,36 +92,26 @@ func TestChannelManagerReturnsErrReorg(t *testing.T) {
err = m.AddL2Block(c) err = m.AddL2Block(c)
require.NoError(t, err) require.NoError(t, err)
err = m.AddL2Block(x) err = m.AddL2Block(x)
require.ErrorIs(t, err, batcher.ErrReorg) require.ErrorIs(t, err, ErrReorg)
require.Equal(t, []*types.Block{a, b, c}, m.blocks)
} }
// TestChannelManagerReturnsErrReorgWhenDrained ensures that the channel manager // TestChannelManagerReturnsErrReorgWhenDrained ensures that the channel manager
// detects a reorg even if it does not have any blocks inside it. // detects a reorg even if it does not have any blocks inside it.
func TestChannelManagerReturnsErrReorgWhenDrained(t *testing.T) { func TestChannelManagerReturnsErrReorgWhenDrained(t *testing.T) {
log := testlog.Logger(t, log.LvlCrit) log := testlog.Logger(t, log.LvlCrit)
m := batcher.NewChannelManager(log, batcher.ChannelConfig{ m := NewChannelManager(log, metrics.NoopMetrics,
ChannelConfig{
TargetFrameSize: 0, TargetFrameSize: 0,
MaxFrameSize: 120_000, MaxFrameSize: 120_000,
ApproxComprRatio: 1.0, ApproxComprRatio: 1.0,
}) })
l1Block := types.NewBlock(&types.Header{
BaseFee: big.NewInt(10),
Difficulty: common.Big0,
Number: big.NewInt(100),
}, nil, nil, nil, trie.NewStackTrie(nil))
l1InfoTx, err := derive.L1InfoDeposit(0, l1Block, eth.SystemConfig{}, false)
require.NoError(t, err)
txs := []*types.Transaction{types.NewTx(l1InfoTx)}
a := types.NewBlock(&types.Header{ a := newMiniL2Block(0)
Number: big.NewInt(0), x := newMiniL2BlockWithNumberParent(0, big.NewInt(1), common.Hash{0xff})
}, txs, nil, nil, trie.NewStackTrie(nil))
x := types.NewBlock(&types.Header{
Number: big.NewInt(1),
ParentHash: common.Hash{0xff},
}, txs, nil, nil, trie.NewStackTrie(nil))
err = m.AddL2Block(a) err := m.AddL2Block(a)
require.NoError(t, err) require.NoError(t, err)
_, err = m.TxData(eth.BlockID{}) _, err = m.TxData(eth.BlockID{})
...@@ -86,14 +120,227 @@ func TestChannelManagerReturnsErrReorgWhenDrained(t *testing.T) { ...@@ -86,14 +120,227 @@ func TestChannelManagerReturnsErrReorgWhenDrained(t *testing.T) {
require.ErrorIs(t, err, io.EOF) require.ErrorIs(t, err, io.EOF)
err = m.AddL2Block(x) err = m.AddL2Block(x)
require.ErrorIs(t, err, batcher.ErrReorg) require.ErrorIs(t, err, ErrReorg)
}
// TestChannelManagerNextTxData checks the nextTxData function.
func TestChannelManagerNextTxData(t *testing.T) {
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{})
// Nil pending channel should return EOF
returnedTxData, err := m.nextTxData()
require.ErrorIs(t, err, io.EOF)
require.Equal(t, txData{}, returnedTxData)
// Set the pending channel
// The nextTxData function should still return EOF
// since the pending channel has no frames
err = m.ensurePendingChannel(eth.BlockID{})
require.NoError(t, err)
returnedTxData, err = m.nextTxData()
require.ErrorIs(t, err, io.EOF)
require.Equal(t, txData{}, returnedTxData)
// Manually push a frame into the pending channel
channelID := m.pendingChannel.ID()
frame := frameData{
data: []byte{},
id: frameID{
chID: channelID,
frameNumber: uint16(0),
},
}
m.pendingChannel.PushFrame(frame)
require.Equal(t, 1, m.pendingChannel.NumFrames())
// Now the nextTxData function should return the frame
returnedTxData, err = m.nextTxData()
expectedTxData := txData{frame}
expectedChannelID := expectedTxData.ID()
require.NoError(t, err)
require.Equal(t, expectedTxData, returnedTxData)
require.Equal(t, 0, m.pendingChannel.NumFrames())
require.Equal(t, expectedTxData, m.pendingTransactions[expectedChannelID])
}
// TestClearChannelManager tests clearing the channel manager.
func TestClearChannelManager(t *testing.T) {
// Create a channel manager
log := testlog.Logger(t, log.LvlCrit)
rng := rand.New(rand.NewSource(time.Now().UnixNano()))
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{
// Need to set the channel timeout here so we don't clear pending
// channels on confirmation. This would result in [TxConfirmed]
// clearing confirmed transactions, and resetting the pendingChannels map
ChannelTimeout: 10,
// Have to set the max frame size here otherwise the channel builder would not
// be able to output any frames
MaxFrameSize: 1,
})
// Channel Manager state should be empty by default
require.Empty(t, m.blocks)
require.Equal(t, common.Hash{}, m.tip)
require.Nil(t, m.pendingChannel)
require.Empty(t, m.pendingTransactions)
require.Empty(t, m.confirmedTransactions)
// Add a block to the channel manager
a, _ := derivetest.RandomL2Block(rng, 4)
newL1Tip := a.Hash()
l1BlockID := eth.BlockID{
Hash: a.Hash(),
Number: a.NumberU64(),
}
err := m.AddL2Block(a)
require.NoError(t, err)
// Make sure there is a channel builder
err = m.ensurePendingChannel(l1BlockID)
require.NoError(t, err)
require.NotNil(t, m.pendingChannel)
require.Equal(t, 0, len(m.confirmedTransactions))
// Process the blocks
// We should have a pending channel with 1 frame
// and no more blocks since processBlocks consumes
// the list
err = m.processBlocks()
require.NoError(t, err)
err = m.pendingChannel.OutputFrames()
require.NoError(t, err)
_, err = m.nextTxData()
require.NoError(t, err)
require.Equal(t, 0, len(m.blocks))
require.Equal(t, newL1Tip, m.tip)
require.Equal(t, 1, len(m.pendingTransactions))
// Add a new block so we can test clearing
// the channel manager with a full state
b := types.NewBlock(&types.Header{
Number: big.NewInt(1),
ParentHash: a.Hash(),
}, nil, nil, nil, nil)
err = m.AddL2Block(b)
require.NoError(t, err)
require.Equal(t, 1, len(m.blocks))
require.Equal(t, b.Hash(), m.tip)
// Clear the channel manager
m.Clear()
// Check that the entire channel manager state cleared
require.Empty(t, m.blocks)
require.Equal(t, common.Hash{}, m.tip)
require.Nil(t, m.pendingChannel)
require.Empty(t, m.pendingTransactions)
require.Empty(t, m.confirmedTransactions)
}
// TestChannelManagerTxConfirmed checks the [ChannelManager.TxConfirmed] function.
func TestChannelManagerTxConfirmed(t *testing.T) {
// Create a channel manager
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{
// Need to set the channel timeout here so we don't clear pending
// channels on confirmation. This would result in [TxConfirmed]
// clearing confirmed transactions, and resetting the pendingChannels map
ChannelTimeout: 10,
})
// Let's add a valid pending transaction to the channel manager
// so we can demonstrate TxConfirmed's correctness
err := m.ensurePendingChannel(eth.BlockID{})
require.NoError(t, err)
channelID := m.pendingChannel.ID()
frame := frameData{
data: []byte{},
id: frameID{
chID: channelID,
frameNumber: uint16(0),
},
}
m.pendingChannel.PushFrame(frame)
require.Equal(t, 1, m.pendingChannel.NumFrames())
returnedTxData, err := m.nextTxData()
expectedTxData := txData{frame}
expectedChannelID := expectedTxData.ID()
require.NoError(t, err)
require.Equal(t, expectedTxData, returnedTxData)
require.Equal(t, 0, m.pendingChannel.NumFrames())
require.Equal(t, expectedTxData, m.pendingTransactions[expectedChannelID])
require.Equal(t, 1, len(m.pendingTransactions))
// An unknown pending transaction should not be marked as confirmed
// and should not be removed from the pending transactions map
actualChannelID := m.pendingChannel.ID()
unknownChannelID := derive.ChannelID([derive.ChannelIDLength]byte{0x69})
require.NotEqual(t, actualChannelID, unknownChannelID)
unknownTxID := frameID{chID: unknownChannelID, frameNumber: 0}
blockID := eth.BlockID{Number: 0, Hash: common.Hash{0x69}}
m.TxConfirmed(unknownTxID, blockID)
require.Empty(t, m.confirmedTransactions)
require.Equal(t, 1, len(m.pendingTransactions))
// Now let's mark the pending transaction as confirmed
// and check that it is removed from the pending transactions map
// and added to the confirmed transactions map
m.TxConfirmed(expectedChannelID, blockID)
require.Empty(t, m.pendingTransactions)
require.Equal(t, 1, len(m.confirmedTransactions))
require.Equal(t, blockID, m.confirmedTransactions[expectedChannelID])
}
// TestChannelManagerTxFailed checks the [ChannelManager.TxFailed] function.
func TestChannelManagerTxFailed(t *testing.T) {
// Create a channel manager
log := testlog.Logger(t, log.LvlCrit)
m := NewChannelManager(log, metrics.NoopMetrics, ChannelConfig{})
// Let's add a valid pending transaction to the channel
// manager so we can demonstrate correctness
err := m.ensurePendingChannel(eth.BlockID{})
require.NoError(t, err)
channelID := m.pendingChannel.ID()
frame := frameData{
data: []byte{},
id: frameID{
chID: channelID,
frameNumber: uint16(0),
},
}
m.pendingChannel.PushFrame(frame)
require.Equal(t, 1, m.pendingChannel.NumFrames())
returnedTxData, err := m.nextTxData()
expectedTxData := txData{frame}
expectedChannelID := expectedTxData.ID()
require.NoError(t, err)
require.Equal(t, expectedTxData, returnedTxData)
require.Equal(t, 0, m.pendingChannel.NumFrames())
require.Equal(t, expectedTxData, m.pendingTransactions[expectedChannelID])
require.Equal(t, 1, len(m.pendingTransactions))
// Trying to mark an unknown pending transaction as failed
// shouldn't modify state
m.TxFailed(frameID{})
require.Equal(t, 0, m.pendingChannel.NumFrames())
require.Equal(t, expectedTxData, m.pendingTransactions[expectedChannelID])
// Now we still have a pending transaction
// Let's mark it as failed
m.TxFailed(expectedChannelID)
require.Empty(t, m.pendingTransactions)
// There should be a frame in the pending channel now
require.Equal(t, 1, m.pendingChannel.NumFrames())
} }
func TestChannelManager_TxResend(t *testing.T) { func TestChannelManager_TxResend(t *testing.T) {
require := require.New(t) require := require.New(t)
rng := rand.New(rand.NewSource(time.Now().UnixNano())) rng := rand.New(rand.NewSource(time.Now().UnixNano()))
log := testlog.Logger(t, log.LvlError) log := testlog.Logger(t, log.LvlError)
m := batcher.NewChannelManager(log, batcher.ChannelConfig{ m := NewChannelManager(log, metrics.NoopMetrics,
ChannelConfig{
TargetFrameSize: 0, TargetFrameSize: 0,
MaxFrameSize: 120_000, MaxFrameSize: 120_000,
ApproxComprRatio: 1.0, ApproxComprRatio: 1.0,
......
...@@ -9,6 +9,7 @@ import ( ...@@ -9,6 +9,7 @@ import (
"github.com/urfave/cli" "github.com/urfave/cli"
"github.com/ethereum-optimism/optimism/op-batcher/flags" "github.com/ethereum-optimism/optimism/op-batcher/flags"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-batcher/rpc" "github.com/ethereum-optimism/optimism/op-batcher/rpc"
"github.com/ethereum-optimism/optimism/op-node/rollup" "github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/sources" "github.com/ethereum-optimism/optimism/op-node/sources"
...@@ -21,20 +22,34 @@ import ( ...@@ -21,20 +22,34 @@ import (
type Config struct { type Config struct {
log log.Logger log log.Logger
metr metrics.Metricer
L1Client *ethclient.Client L1Client *ethclient.Client
L2Client *ethclient.Client L2Client *ethclient.Client
RollupNode *sources.RollupClient RollupNode *sources.RollupClient
PollInterval time.Duration PollInterval time.Duration
TxManagerConfig txmgr.Config
From common.Address From common.Address
TxManagerConfig txmgr.Config
// RollupConfig is queried at startup // RollupConfig is queried at startup
Rollup *rollup.Config Rollup *rollup.Config
// Channel creation parameters // Channel builder parameters
Channel ChannelConfig Channel ChannelConfig
} }
// Check ensures that the [Config] is valid.
func (c *Config) Check() error {
if err := c.Rollup.Check(); err != nil {
return err
}
if err := c.Channel.Check(); err != nil {
return err
}
return nil
}
type CLIConfig struct { type CLIConfig struct {
/* Required Params */ /* Required Params */
......
...@@ -10,7 +10,9 @@ import ( ...@@ -10,7 +10,9 @@ import (
"sync" "sync"
"time" "time"
"github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opcrypto "github.com/ethereum-optimism/optimism/op-service/crypto" opcrypto "github.com/ethereum-optimism/optimism/op-service/crypto"
"github.com/ethereum-optimism/optimism/op-service/txmgr" "github.com/ethereum-optimism/optimism/op-service/txmgr"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
...@@ -34,13 +36,14 @@ type BatchSubmitter struct { ...@@ -34,13 +36,14 @@ type BatchSubmitter struct {
// lastStoredBlock is the last block loaded into `state`. If it is empty it should be set to the l2 safe head. // lastStoredBlock is the last block loaded into `state`. If it is empty it should be set to the l2 safe head.
lastStoredBlock eth.BlockID lastStoredBlock eth.BlockID
lastL1Tip eth.L1BlockRef
state *channelManager state *channelManager
} }
// NewBatchSubmitterFromCLIConfig initializes the BatchSubmitter, gathering any resources // NewBatchSubmitterFromCLIConfig initializes the BatchSubmitter, gathering any resources
// that will be needed during operation. // that will be needed during operation.
func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger) (*BatchSubmitter, error) { func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger, m metrics.Metricer) (*BatchSubmitter, error) {
ctx := context.Background() ctx := context.Background()
signer, fromAddress, err := opcrypto.SignerFactoryFromConfig(l, cfg.PrivateKey, cfg.Mnemonic, cfg.SequencerHDPath, cfg.SignerConfig) signer, fromAddress, err := opcrypto.SignerFactoryFromConfig(l, cfg.PrivateKey, cfg.Mnemonic, cfg.SequencerHDPath, cfg.SignerConfig)
...@@ -99,12 +102,17 @@ func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger) (*BatchSubmitte ...@@ -99,12 +102,17 @@ func NewBatchSubmitterFromCLIConfig(cfg CLIConfig, l log.Logger) (*BatchSubmitte
}, },
} }
return NewBatchSubmitter(ctx, batcherCfg, l) // Validate the batcher config
if err := batcherCfg.Check(); err != nil {
return nil, err
}
return NewBatchSubmitter(ctx, batcherCfg, l, m)
} }
// NewBatchSubmitter initializes the BatchSubmitter, gathering any resources // NewBatchSubmitter initializes the BatchSubmitter, gathering any resources
// that will be needed during operation. // that will be needed during operation.
func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger) (*BatchSubmitter, error) { func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger, m metrics.Metricer) (*BatchSubmitter, error) {
balance, err := cfg.L1Client.BalanceAt(ctx, cfg.From, nil) balance, err := cfg.L1Client.BalanceAt(ctx, cfg.From, nil)
if err != nil { if err != nil {
return nil, err return nil, err
...@@ -113,12 +121,14 @@ func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger) (*BatchSub ...@@ -113,12 +121,14 @@ func NewBatchSubmitter(ctx context.Context, cfg Config, l log.Logger) (*BatchSub
cfg.log = l cfg.log = l
cfg.log.Info("creating batch submitter", "submitter_addr", cfg.From, "submitter_bal", balance) cfg.log.Info("creating batch submitter", "submitter_addr", cfg.From, "submitter_bal", balance)
cfg.metr = m
return &BatchSubmitter{ return &BatchSubmitter{
Config: cfg, Config: cfg,
txMgr: NewTransactionManager(l, txMgr: NewTransactionManager(l,
cfg.TxManagerConfig, cfg.Rollup.BatchInboxAddress, cfg.Rollup.L1ChainID, cfg.TxManagerConfig, cfg.Rollup.BatchInboxAddress, cfg.Rollup.L1ChainID,
cfg.From, cfg.L1Client), cfg.From, cfg.L1Client),
state: NewChannelManager(l, cfg.Channel), state: NewChannelManager(l, m, cfg.Channel),
}, nil }, nil
} }
...@@ -182,13 +192,16 @@ func (l *BatchSubmitter) Stop() error { ...@@ -182,13 +192,16 @@ func (l *BatchSubmitter) Stop() error {
func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) { func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) {
start, end, err := l.calculateL2BlockRangeToStore(ctx) start, end, err := l.calculateL2BlockRangeToStore(ctx)
if err != nil { if err != nil {
l.log.Trace("was not able to calculate L2 block range", "err", err) l.log.Warn("Error calculating L2 block range", "err", err)
return
} else if start.Number == end.Number {
return return
} }
var latestBlock *types.Block
// Add all blocks to "state" // Add all blocks to "state"
for i := start.Number + 1; i < end.Number+1; i++ { for i := start.Number + 1; i < end.Number+1; i++ {
id, err := l.loadBlockIntoState(ctx, i) block, err := l.loadBlockIntoState(ctx, i)
if errors.Is(err, ErrReorg) { if errors.Is(err, ErrReorg) {
l.log.Warn("Found L2 reorg", "block_number", i) l.log.Warn("Found L2 reorg", "block_number", i)
l.state.Clear() l.state.Clear()
...@@ -198,24 +211,34 @@ func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) { ...@@ -198,24 +211,34 @@ func (l *BatchSubmitter) loadBlocksIntoState(ctx context.Context) {
l.log.Warn("failed to load block into state", "err", err) l.log.Warn("failed to load block into state", "err", err)
return return
} }
l.lastStoredBlock = id l.lastStoredBlock = eth.ToBlockID(block)
latestBlock = block
}
l2ref, err := derive.L2BlockToBlockRef(latestBlock, &l.Rollup.Genesis)
if err != nil {
l.log.Warn("Invalid L2 block loaded into state", "err", err)
return
} }
l.metr.RecordL2BlocksLoaded(l2ref)
} }
// loadBlockIntoState fetches & stores a single block into `state`. It returns the block it loaded. // loadBlockIntoState fetches & stores a single block into `state`. It returns the block it loaded.
func (l *BatchSubmitter) loadBlockIntoState(ctx context.Context, blockNumber uint64) (eth.BlockID, error) { func (l *BatchSubmitter) loadBlockIntoState(ctx context.Context, blockNumber uint64) (*types.Block, error) {
ctx, cancel := context.WithTimeout(ctx, networkTimeout) ctx, cancel := context.WithTimeout(ctx, networkTimeout)
defer cancel()
block, err := l.L2Client.BlockByNumber(ctx, new(big.Int).SetUint64(blockNumber)) block, err := l.L2Client.BlockByNumber(ctx, new(big.Int).SetUint64(blockNumber))
cancel()
if err != nil { if err != nil {
return eth.BlockID{}, err return nil, fmt.Errorf("getting L2 block: %w", err)
} }
if err := l.state.AddL2Block(block); err != nil { if err := l.state.AddL2Block(block); err != nil {
return eth.BlockID{}, err return nil, fmt.Errorf("adding L2 block to state: %w", err)
} }
id := eth.ToBlockID(block)
l.log.Info("added L2 block to local state", "block", id, "tx_count", len(block.Transactions()), "time", block.Time()) l.log.Info("added L2 block to local state", "block", eth.ToBlockID(block), "tx_count", len(block.Transactions()), "time", block.Time())
return id, nil return block, nil
} }
// calculateL2BlockRangeToStore determines the range (start,end] that should be loaded into the local state. // calculateL2BlockRangeToStore determines the range (start,end] that should be loaded into the local state.
...@@ -278,6 +301,7 @@ func (l *BatchSubmitter) loop() { ...@@ -278,6 +301,7 @@ func (l *BatchSubmitter) loop() {
l.log.Error("Failed to query L1 tip", "error", err) l.log.Error("Failed to query L1 tip", "error", err)
break break
} }
l.recordL1Tip(l1tip)
// Collect next transaction data // Collect next transaction data
txdata, err := l.state.TxData(l1tip.ID()) txdata, err := l.state.TxData(l1tip.ID())
...@@ -311,6 +335,14 @@ func (l *BatchSubmitter) loop() { ...@@ -311,6 +335,14 @@ func (l *BatchSubmitter) loop() {
} }
} }
func (l *BatchSubmitter) recordL1Tip(l1tip eth.L1BlockRef) {
if l.lastL1Tip == l1tip {
return
}
l.lastL1Tip = l1tip
l.metr.RecordLatestL1Block(l1tip)
}
func (l *BatchSubmitter) recordFailedTx(id txID, err error) { func (l *BatchSubmitter) recordFailedTx(id txID, err error) {
l.log.Warn("Failed to send transaction", "err", err) l.log.Warn("Failed to send transaction", "err", err)
l.state.TxFailed(id) l.state.TxFailed(id)
......
...@@ -55,7 +55,7 @@ func (t *TransactionManager) SendTransaction(ctx context.Context, data []byte) ( ...@@ -55,7 +55,7 @@ func (t *TransactionManager) SendTransaction(ctx context.Context, data []byte) (
return nil, fmt.Errorf("failed to create tx: %w", err) return nil, fmt.Errorf("failed to create tx: %w", err)
} }
ctx, cancel := context.WithTimeout(ctx, 100*time.Second) // TODO: Select a timeout that makes sense here. ctx, cancel := context.WithTimeout(ctx, 10*time.Minute) // TODO: Select a timeout that makes sense here.
defer cancel() defer cancel()
if receipt, err := t.txMgr.Send(ctx, tx); err != nil { if receipt, err := t.txMgr.Send(ctx, tx); err != nil {
t.log.Warn("unable to publish tx", "err", err, "data_size", len(data)) t.log.Warn("unable to publish tx", "err", err, "data_size", len(data))
......
package metrics
import (
"context"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/ethereum/go-ethereum/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
)
const Namespace = "op_batcher"
type Metricer interface {
RecordInfo(version string)
RecordUp()
// Records all L1 and L2 block events
opmetrics.RefMetricer
RecordLatestL1Block(l1ref eth.L1BlockRef)
RecordL2BlocksLoaded(l2ref eth.L2BlockRef)
RecordChannelOpened(id derive.ChannelID, numPendingBlocks int)
RecordL2BlocksAdded(l2ref eth.L2BlockRef, numBlocksAdded, numPendingBlocks, inputBytes, outputComprBytes int)
RecordChannelClosed(id derive.ChannelID, numPendingBlocks int, numFrames int, inputBytes int, outputComprBytes int, reason error)
RecordChannelFullySubmitted(id derive.ChannelID)
RecordChannelTimedOut(id derive.ChannelID)
RecordBatchTxSubmitted()
RecordBatchTxSuccess()
RecordBatchTxFailed()
Document() []opmetrics.DocumentedMetric
}
type Metrics struct {
ns string
registry *prometheus.Registry
factory opmetrics.Factory
opmetrics.RefMetrics
Info prometheus.GaugeVec
Up prometheus.Gauge
// labeled by opened, closed, fully_submitted, timed_out
ChannelEvs opmetrics.EventVec
PendingBlocksCount prometheus.GaugeVec
BlocksAddedCount prometheus.Gauge
ChannelInputBytes prometheus.GaugeVec
ChannelReadyBytes prometheus.Gauge
ChannelOutputBytes prometheus.Gauge
ChannelClosedReason prometheus.Gauge
ChannelNumFrames prometheus.Gauge
ChannelComprRatio prometheus.Histogram
BatcherTxEvs opmetrics.EventVec
}
var _ Metricer = (*Metrics)(nil)
func NewMetrics(procName string) *Metrics {
if procName == "" {
procName = "default"
}
ns := Namespace + "_" + procName
registry := opmetrics.NewRegistry()
factory := opmetrics.With(registry)
return &Metrics{
ns: ns,
registry: registry,
factory: factory,
RefMetrics: opmetrics.MakeRefMetrics(ns, factory),
Info: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "info",
Help: "Pseudo-metric tracking version and config info",
}, []string{
"version",
}),
Up: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "up",
Help: "1 if the op-batcher has finished starting up",
}),
ChannelEvs: opmetrics.NewEventVec(factory, ns, "channel", "Channel", []string{"stage"}),
PendingBlocksCount: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "pending_blocks_count",
Help: "Number of pending blocks, not added to a channel yet.",
}, []string{"stage"}),
BlocksAddedCount: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "blocks_added_count",
Help: "Total number of blocks added to current channel.",
}),
ChannelInputBytes: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "input_bytes",
Help: "Number of input bytes to a channel.",
}, []string{"stage"}),
ChannelReadyBytes: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "ready_bytes",
Help: "Number of bytes ready in the compression buffer.",
}),
ChannelOutputBytes: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "output_bytes",
Help: "Number of compressed output bytes from a channel.",
}),
ChannelClosedReason: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "channel_closed_reason",
Help: "Pseudo-metric to record the reason a channel got closed.",
}),
ChannelNumFrames: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: "channel_num_frames",
Help: "Total number of frames of closed channel.",
}),
ChannelComprRatio: factory.NewHistogram(prometheus.HistogramOpts{
Namespace: ns,
Name: "channel_compr_ratio",
Help: "Compression ratios of closed channel.",
Buckets: append([]float64{0.1, 0.2}, prometheus.LinearBuckets(0.3, 0.05, 14)...),
}),
BatcherTxEvs: opmetrics.NewEventVec(factory, ns, "batcher_tx", "BatcherTx", []string{"stage"}),
}
}
func (m *Metrics) Serve(ctx context.Context, host string, port int) error {
return opmetrics.ListenAndServe(ctx, m.registry, host, port)
}
func (m *Metrics) Document() []opmetrics.DocumentedMetric {
return m.factory.Document()
}
func (m *Metrics) StartBalanceMetrics(ctx context.Context,
l log.Logger, client *ethclient.Client, account common.Address) {
opmetrics.LaunchBalanceMetrics(ctx, l, m.registry, m.ns, client, account)
}
// RecordInfo sets a pseudo-metric that contains versioning and
// config info for the op-batcher.
func (m *Metrics) RecordInfo(version string) {
m.Info.WithLabelValues(version).Set(1)
}
// RecordUp sets the up metric to 1.
func (m *Metrics) RecordUp() {
m.Up.Set(1)
}
const (
StageLoaded = "loaded"
StageOpened = "opened"
StageAdded = "added"
StageClosed = "closed"
StageFullySubmitted = "fully_submitted"
StageTimedOut = "timed_out"
TxStageSubmitted = "submitted"
TxStageSuccess = "success"
TxStageFailed = "failed"
)
func (m *Metrics) RecordLatestL1Block(l1ref eth.L1BlockRef) {
m.RecordL1Ref("latest", l1ref)
}
// RecordL2BlocksLoaded should be called when a new L2 block was loaded into the
// channel manager (but not processed yet).
func (m *Metrics) RecordL2BlocksLoaded(l2ref eth.L2BlockRef) {
m.RecordL2Ref(StageLoaded, l2ref)
}
func (m *Metrics) RecordChannelOpened(id derive.ChannelID, numPendingBlocks int) {
m.ChannelEvs.Record(StageOpened)
m.BlocksAddedCount.Set(0) // reset
m.PendingBlocksCount.WithLabelValues(StageOpened).Set(float64(numPendingBlocks))
}
// RecordL2BlocksAdded should be called when L2 blocks were added to the channel
// builder, with the latest added block.
func (m *Metrics) RecordL2BlocksAdded(l2ref eth.L2BlockRef, numBlocksAdded, numPendingBlocks, inputBytes, outputComprBytes int) {
m.RecordL2Ref(StageAdded, l2ref)
m.BlocksAddedCount.Add(float64(numBlocksAdded))
m.PendingBlocksCount.WithLabelValues(StageAdded).Set(float64(numPendingBlocks))
m.ChannelInputBytes.WithLabelValues(StageAdded).Set(float64(inputBytes))
m.ChannelReadyBytes.Set(float64(outputComprBytes))
}
func (m *Metrics) RecordChannelClosed(id derive.ChannelID, numPendingBlocks int, numFrames int, inputBytes int, outputComprBytes int, reason error) {
m.ChannelEvs.Record(StageClosed)
m.PendingBlocksCount.WithLabelValues(StageClosed).Set(float64(numPendingBlocks))
m.ChannelNumFrames.Set(float64(numFrames))
m.ChannelInputBytes.WithLabelValues(StageClosed).Set(float64(inputBytes))
m.ChannelOutputBytes.Set(float64(outputComprBytes))
var comprRatio float64
if inputBytes > 0 {
comprRatio = float64(outputComprBytes) / float64(inputBytes)
}
m.ChannelComprRatio.Observe(comprRatio)
m.ChannelClosedReason.Set(float64(ClosedReasonToNum(reason)))
}
func ClosedReasonToNum(reason error) int {
// CLI-3640
return 0
}
func (m *Metrics) RecordChannelFullySubmitted(id derive.ChannelID) {
m.ChannelEvs.Record(StageFullySubmitted)
}
func (m *Metrics) RecordChannelTimedOut(id derive.ChannelID) {
m.ChannelEvs.Record(StageTimedOut)
}
func (m *Metrics) RecordBatchTxSubmitted() {
m.BatcherTxEvs.Record(TxStageSubmitted)
}
func (m *Metrics) RecordBatchTxSuccess() {
m.BatcherTxEvs.Record(TxStageSuccess)
}
func (m *Metrics) RecordBatchTxFailed() {
m.BatcherTxEvs.Record(TxStageFailed)
}
package metrics
import (
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
opmetrics "github.com/ethereum-optimism/optimism/op-service/metrics"
)
type noopMetrics struct{ opmetrics.NoopRefMetrics }
var NoopMetrics Metricer = new(noopMetrics)
func (*noopMetrics) Document() []opmetrics.DocumentedMetric { return nil }
func (*noopMetrics) RecordInfo(version string) {}
func (*noopMetrics) RecordUp() {}
func (*noopMetrics) RecordLatestL1Block(l1ref eth.L1BlockRef) {}
func (*noopMetrics) RecordL2BlocksLoaded(eth.L2BlockRef) {}
func (*noopMetrics) RecordChannelOpened(derive.ChannelID, int) {}
func (*noopMetrics) RecordL2BlocksAdded(eth.L2BlockRef, int, int, int, int) {}
func (*noopMetrics) RecordChannelClosed(derive.ChannelID, int, int, int, int, error) {}
func (*noopMetrics) RecordChannelFullySubmitted(derive.ChannelID) {}
func (*noopMetrics) RecordChannelTimedOut(derive.ChannelID) {}
func (*noopMetrics) RecordBatchTxSubmitted() {}
func (*noopMetrics) RecordBatchTxSuccess() {}
func (*noopMetrics) RecordBatchTxFailed() {}
package main
import (
"context"
"fmt"
"math/big"
"os"
"strings"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum-optimism/optimism/op-chain-ops/db"
"github.com/mattn/go-isatty"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum-optimism/optimism/op-bindings/hardhat"
"github.com/ethereum-optimism/optimism/op-chain-ops/genesis"
"github.com/ethereum/go-ethereum/ethclient"
"github.com/urfave/cli"
)
func main() {
log.Root().SetHandler(log.StreamHandler(os.Stderr, log.TerminalFormat(isatty.IsTerminal(os.Stderr.Fd()))))
app := &cli.App{
Name: "check-migration",
Usage: "Run sanity checks on a migrated database",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "l1-rpc-url",
Value: "http://127.0.0.1:8545",
Usage: "RPC URL for an L1 Node",
Required: true,
},
&cli.StringFlag{
Name: "ovm-addresses",
Usage: "Path to ovm-addresses.json",
Required: true,
},
&cli.StringFlag{
Name: "ovm-allowances",
Usage: "Path to ovm-allowances.json",
Required: true,
},
&cli.StringFlag{
Name: "ovm-messages",
Usage: "Path to ovm-messages.json",
Required: true,
},
&cli.StringFlag{
Name: "witness-file",
Usage: "Path to witness file",
Required: true,
},
&cli.StringFlag{
Name: "db-path",
Usage: "Path to database",
Required: true,
},
cli.StringFlag{
Name: "deploy-config",
Usage: "Path to hardhat deploy config file",
Required: true,
},
cli.StringFlag{
Name: "network",
Usage: "Name of hardhat deploy network",
Required: true,
},
cli.StringFlag{
Name: "hardhat-deployments",
Usage: "Comma separated list of hardhat deployment directories",
Required: true,
},
cli.IntFlag{
Name: "db-cache",
Usage: "LevelDB cache size in mb",
Value: 1024,
},
cli.IntFlag{
Name: "db-handles",
Usage: "LevelDB number of handles",
Value: 60,
},
},
Action: func(ctx *cli.Context) error {
deployConfig := ctx.String("deploy-config")
config, err := genesis.NewDeployConfig(deployConfig)
if err != nil {
return err
}
ovmAddresses, err := crossdomain.NewAddresses(ctx.String("ovm-addresses"))
if err != nil {
return err
}
ovmAllowances, err := crossdomain.NewAllowances(ctx.String("ovm-allowances"))
if err != nil {
return err
}
ovmMessages, err := crossdomain.NewSentMessageFromJSON(ctx.String("ovm-messages"))
if err != nil {
return err
}
evmMessages, evmAddresses, err := crossdomain.ReadWitnessData(ctx.String("witness-file"))
if err != nil {
return err
}
log.Info(
"Loaded witness data",
"ovmAddresses", len(ovmAddresses),
"evmAddresses", len(evmAddresses),
"ovmAllowances", len(ovmAllowances),
"ovmMessages", len(ovmMessages),
"evmMessages", len(evmMessages),
)
migrationData := crossdomain.MigrationData{
OvmAddresses: ovmAddresses,
EvmAddresses: evmAddresses,
OvmAllowances: ovmAllowances,
OvmMessages: ovmMessages,
EvmMessages: evmMessages,
}
network := ctx.String("network")
deployments := strings.Split(ctx.String("hardhat-deployments"), ",")
hh, err := hardhat.New(network, []string{}, deployments)
if err != nil {
return err
}
l1RpcURL := ctx.String("l1-rpc-url")
l1Client, err := ethclient.Dial(l1RpcURL)
if err != nil {
return err
}
var block *types.Block
tag := config.L1StartingBlockTag
if tag.BlockNumber != nil {
block, err = l1Client.BlockByNumber(context.Background(), big.NewInt(tag.BlockNumber.Int64()))
} else if tag.BlockHash != nil {
block, err = l1Client.BlockByHash(context.Background(), *tag.BlockHash)
} else {
return fmt.Errorf("invalid l1StartingBlockTag in deploy config: %v", tag)
}
if err != nil {
return err
}
dbCache := ctx.Int("db-cache")
dbHandles := ctx.Int("db-handles")
// Read the required deployment addresses from disk if needed
if err := config.GetDeployedAddresses(hh); err != nil {
return err
}
if err := config.Check(); err != nil {
return err
}
postLDB, err := db.Open(ctx.String("db-path"), dbCache, dbHandles)
if err != nil {
return err
}
if err := genesis.PostCheckMigratedDB(
postLDB,
migrationData,
&config.L1CrossDomainMessengerProxy,
config.L1ChainID,
config.FinalSystemOwner,
config.ProxyAdminOwner,
&derive.L1BlockInfo{
Number: block.NumberU64(),
Time: block.Time(),
BaseFee: block.BaseFee(),
BlockHash: block.Hash(),
BatcherAddr: config.BatchSenderAddress,
L1FeeOverhead: eth.Bytes32(common.BigToHash(new(big.Int).SetUint64(config.GasPriceOracleOverhead))),
L1FeeScalar: eth.Bytes32(common.BigToHash(new(big.Int).SetUint64(config.GasPriceOracleScalar))),
},
); err != nil {
return err
}
if err := postLDB.Close(); err != nil {
return err
}
return nil
},
}
if err := app.Run(os.Args); err != nil {
log.Crit("error in migration", "err", err)
}
}
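// Illustrative sketch (not part of this diff): an example invocation of check-migration
// using the flags defined above; the binary name and all file paths are placeholders.
//
//	check-migration \
//	  --l1-rpc-url http://127.0.0.1:8545 \
//	  --ovm-addresses ovm-addresses.json \
//	  --ovm-allowances ovm-allowances.json \
//	  --ovm-messages ovm-messages.json \
//	  --witness-file witness.txt \
//	  --db-path ./geth-db \
//	  --deploy-config deploy-config.json \
//	  --network mainnet \
//	  --hardhat-deployments deployments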
...@@ -106,6 +106,11 @@ func main() { ...@@ -106,6 +106,11 @@ func main() {
Value: "rollup.json", Value: "rollup.json",
Required: true, Required: true,
}, },
cli.BoolFlag{
Name: "post-check-only",
Usage: "Only perform sanity checks",
Required: false,
},
}, },
Action: func(ctx *cli.Context) error { Action: func(ctx *cli.Context) error {
deployConfig := ctx.String("deploy-config") deployConfig := ctx.String("deploy-config")
......
...@@ -20,3 +20,13 @@ func getOVMETHTotalSupplySlot() common.Hash { ...@@ -20,3 +20,13 @@ func getOVMETHTotalSupplySlot() common.Hash {
key := common.BytesToHash(common.LeftPadBytes(position.Bytes(), 32)) key := common.BytesToHash(common.LeftPadBytes(position.Bytes(), 32))
return key return key
} }
func GetOVMETHTotalSupplySlot() common.Hash {
return getOVMETHTotalSupplySlot()
}
// GetOVMETHBalance gets a user's OVM ETH balance from state by querying the
// appropriate storage slot directly.
func GetOVMETHBalance(db *state.StateDB, addr common.Address) *big.Int {
return db.GetState(OVMETHAddress, CalcOVMETHStorageKey(addr)).Big()
}
...@@ -3,6 +3,10 @@ package ether ...@@ -3,6 +3,10 @@ package ether
import ( import (
"fmt" "fmt"
"math/big" "math/big"
"sync"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain" "github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum-optimism/optimism/op-chain-ops/util" "github.com/ethereum-optimism/optimism/op-chain-ops/util"
...@@ -13,11 +17,25 @@ import ( ...@@ -13,11 +17,25 @@ import (
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
) )
const (
// checkJobs is the number of parallel workers to spawn
// when iterating the storage trie.
checkJobs = 64
// BalanceSlot is an ordinal used to represent slots corresponding to OVM_ETH
// balances in the state.
BalanceSlot = 1
// AllowanceSlot is an ordinal used to represent slots corresponding to OVM_ETH
// allowances in the state.
AllowanceSlot = 2
)
var ( var (
// OVMETHAddress is the address of the OVM ETH predeploy. // OVMETHAddress is the address of the OVM ETH predeploy.
OVMETHAddress = common.HexToAddress("0xDeadDeAddeAddEAddeadDEaDDEAdDeaDDeAD0000") OVMETHAddress = common.HexToAddress("0xDeadDeAddeAddEAddeadDEaDDEAdDeaDDeAD0000")
OVMETHIgnoredSlots = map[common.Hash]bool{ ignoredSlots = map[common.Hash]bool{
// Total Supply // Total Supply
common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000002"): true, common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000002"): true,
// Name // Name
...@@ -27,163 +45,348 @@ var ( ...@@ -27,163 +45,348 @@ var (
common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000005"): true, common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000005"): true,
common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000006"): true, common.HexToHash("0x0000000000000000000000000000000000000000000000000000000000000006"): true,
} }
// maxSlot is the maximum possible storage slot.
maxSlot = common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff")
// sequencerEntrypointAddr is the address of the OVM sequencer entrypoint contract.
sequencerEntrypointAddr = common.HexToAddress("0x4200000000000000000000000000000000000005")
) )
// MigrateLegacyETH checks that the given list of addresses and allowances represents all storage // accountData is a wrapper struct that contains the balance and address of an account.
// slots in the LegacyERC20ETH contract. We don't have to filter out extra addresses like we do for // It gets passed via channel to the collector process.
// withdrawals because we'll simply carry the balance of a given address to the new system, if the type accountData struct {
// account is extra then it won't have any balance and nothing will happen. For each valid balance, balance *big.Int
// this method will migrate into state. This method does the checking as part of the migration loop legacySlot common.Hash
// in order to avoid having to iterate over state twice. This saves approximately 40 minutes during address common.Address
// the mainnet migration. }
func MigrateLegacyETH(db *state.StateDB, addresses []common.Address, allowances []*crossdomain.Allowance, chainID int, noCheck bool, commit bool) error {
type DBFactory func() (*state.StateDB, error)
// MigrateBalances migrates all balances in the LegacyERC20ETH contract into state. It performs checks
// in parallel with mutations in order to reduce overall migration time.
func MigrateBalances(mutableDB *state.StateDB, dbFactory DBFactory, addresses []common.Address, allowances []*crossdomain.Allowance, chainID int, noCheck bool) error {
// Chain params to use for integrity checking. // Chain params to use for integrity checking.
params := crossdomain.ParamsByChainID[chainID] params := crossdomain.ParamsByChainID[chainID]
if params == nil { if params == nil {
return fmt.Errorf("no chain params for %d", chainID) return fmt.Errorf("no chain params for %d", chainID)
} }
// Log the chain params for debugging purposes. return doMigration(mutableDB, dbFactory, addresses, allowances, params.ExpectedSupplyDelta, noCheck)
log.Info("Chain params", "chain-id", chainID, "supply-delta", params.ExpectedSupplyDelta)
return doMigration(db, addresses, allowances, params.ExpectedSupplyDelta, noCheck, commit)
} }
func doMigration(db *state.StateDB, addresses []common.Address, allowances []*crossdomain.Allowance, expSupplyDiff *big.Int, noCheck bool, commit bool) error { func doMigration(mutableDB *state.StateDB, dbFactory DBFactory, addresses []common.Address, allowances []*crossdomain.Allowance, expDiff *big.Int, noCheck bool) error {
// We'll need to maintain a list of all addresses that we've seen along with all of the storage // We'll need to maintain a list of all addresses that we've seen along with all of the storage
// slots based on the witness data. // slots based on the witness data.
slotsAddrs := make(map[common.Hash]common.Address) slotsAddrs := make(map[common.Hash]common.Address)
slotTypes := make(map[common.Hash]int) slotsInp := make(map[common.Hash]int)
// For each known address, compute its balance key and add it to the list of addresses. // For each known address, compute its balance key and add it to the list of addresses.
// Mint events are instrumented as regular ETH events in the witness data, so we no longer // Mint events are instrumented as regular ETH events in the witness data, so we no longer
// need to iterate over mint events during the migration. // need to iterate over mint events during the migration.
for _, addr := range addresses { for _, addr := range addresses {
sk := CalcOVMETHStorageKey(addr) sk := CalcOVMETHStorageKey(addr)
slotTypes[sk] = 1
slotsAddrs[sk] = addr slotsAddrs[sk] = addr
slotsInp[sk] = BalanceSlot
} }
// For each known allowance, compute its storage key and add it to the list of addresses. // For each known allowance, compute its storage key and add it to the list of addresses.
for _, allowance := range allowances { for _, allowance := range allowances {
slotTypes[CalcAllowanceStorageKey(allowance.From, allowance.To)] = 2 sk := CalcAllowanceStorageKey(allowance.From, allowance.To)
slotsAddrs[sk] = allowance.From
slotsInp[sk] = AllowanceSlot
} }
// Add the old SequencerEntrypoint because someone sent it ETH a long time ago and it has a // Add the old SequencerEntrypoint because someone sent it ETH a long time ago and it has a
// balance but none of our instrumentation could easily find it. Special case. // balance but none of our instrumentation could easily find it. Special case.
sequencerEntrypointAddr := common.HexToAddress("0x4200000000000000000000000000000000000005") entrySK := CalcOVMETHStorageKey(sequencerEntrypointAddr)
slotTypes[CalcOVMETHStorageKey(sequencerEntrypointAddr)] = 1 slotsAddrs[entrySK] = sequencerEntrypointAddr
slotsInp[entrySK] = BalanceSlot
// WaitGroup to wait on each iteration job to finish.
var wg sync.WaitGroup
// Channel to receive storage slot keys and values from each iteration job.
outCh := make(chan accountData)
// Channel to receive errors from each iteration job.
errCh := make(chan error, checkJobs)
// Channel to cancel all iteration jobs as well as the collector.
cancelCh := make(chan struct{})
// Migrate the OVM_ETH to ETH. // Define a worker function to iterate over each partition.
log.Info("Migrating legacy ETH to ETH", "num-accounts", len(addresses)) worker := func(start, end common.Hash) {
totalMigrated := new(big.Int) // Decrement the WaitGroup when the function returns.
logAccountProgress := util.ProgressLogger(1000, "imported OVM_ETH storage slot") defer wg.Done()
var innerErr error
err := db.ForEachStorage(predeploys.LegacyERC20ETHAddr, func(key, value common.Hash) bool { db, err := dbFactory()
defer logAccountProgress() if err != nil {
log.Crit("cannot get database", "err", err)
}
// Create a new storage trie. Each trie returned by db.StorageTrie
// is a copy, so this is safe for concurrent use.
st, err := db.StorageTrie(predeploys.LegacyERC20ETHAddr)
if err != nil {
// Should never happen, so explode if it does.
log.Crit("cannot get storage trie for LegacyERC20ETHAddr", "err", err)
}
if st == nil {
// Should never happen, so explode if it does.
log.Crit("nil storage trie for LegacyERC20ETHAddr")
}
it := trie.NewIterator(st.NodeIterator(start.Bytes()))
// Below code is largely based on db.ForEachStorage. We can't use that
// because it doesn't allow us to specify a start and end key.
for it.Next() {
select {
case <-cancelCh:
// If one of the workers encounters an error, cancel all of them.
return
default:
break
}
// Use the raw (i.e., secure hashed) key to check if we've reached
// the end of the partition. Use > rather than >= here to account for
// the fact that the values returned by PartitionKeys are inclusive.
// Duplicate addresses that may be returned by this iteration are
// filtered out in the collector.
if new(big.Int).SetBytes(it.Key).Cmp(end.Big()) > 0 {
return
}
// Skip if the value is empty.
rawValue := it.Value
if len(rawValue) == 0 {
continue
}
// Get the preimage.
rawKey := st.GetKey(it.Key)
if rawKey == nil {
// Should never happen, so explode if it does.
log.Crit("cannot get preimage for storage key", "key", it.Key)
}
key := common.BytesToHash(rawKey)
// Parse the raw value.
_, content, _, err := rlp.Split(rawValue)
if err != nil {
// Should never happen, so explode if it does.
log.Crit("mal-formed data in state: %v", err)
}
// We can safely ignore specific slots (totalSupply, name, symbol). // We can safely ignore specific slots (totalSupply, name, symbol).
if OVMETHIgnoredSlots[key] { if ignoredSlots[key] {
return true continue
} }
// Look up the slot type. slotType, ok := slotsInp[key]
slotType, ok := slotTypes[key]
if !ok { if !ok {
log.Error("unknown storage slot in state", "slot", key.String()) if noCheck {
if !noCheck { log.Error("ignoring unknown storage slot in state", "slot", key.String())
innerErr = fmt.Errorf("unknown storage slot in state: %s", key.String()) } else {
return false errCh <- fmt.Errorf("unknown storage slot in state: %s", key.String())
return
} }
} }
switch slotType { // No accounts should have a balance in state. If they do, bail.
case 1: addr, ok := slotsAddrs[key]
// Balance slot. if !ok {
bal := value.Big() log.Crit("could not find address in map - should never happen")
totalMigrated.Add(totalMigrated, bal) }
addr := slotsAddrs[key] bal := db.GetBalance(addr)
if bal.Sign() != 0 {
// There should never be any balances in state, so verify that here. log.Error(
if db.GetBalance(addr).Sign() > 0 { "account has non-zero balance in state - should never happen",
log.Error("account has non-zero balance in state - should never happen", "addr", addr) "addr", addr,
"balance", bal.String(),
)
if !noCheck { if !noCheck {
innerErr = fmt.Errorf("account has non-zero balance in state - should never happen: %s", addr) errCh <- fmt.Errorf("account has non-zero balance in state - should never happen: %s", addr.String())
return false return
} }
} }
if !commit { // Add balances to the total found.
return true switch slotType {
case BalanceSlot:
// Convert the value to a common.Hash, then send to the channel.
value := common.BytesToHash(content)
outCh <- accountData{
balance: value.Big(),
legacySlot: key,
address: addr,
} }
case AllowanceSlot:
// Set the balance, and delete the legacy slot. // Allowance slot.
db.SetBalance(addr, bal) continue
db.SetState(predeploys.LegacyERC20ETHAddr, key, common.Hash{})
case 2:
// Allowance slot. Nothing to do here.
return true
default: default:
// Should never happen. // Should never happen.
log.Error("unknown slot type", "slot", key.String(), "type", slotType) if noCheck {
if !noCheck { log.Error("unknown slot type", "slot", key, "type", slotType)
innerErr = fmt.Errorf("unknown slot type: %d", slotType) } else {
return false log.Crit("unknown slot type %d, should never happen", slotType)
} }
} }
}
}
for i := 0; i < checkJobs; i++ {
wg.Add(1)
// Partition the keyspace per worker.
start, end := PartitionKeyspace(i, checkJobs)
// Kick off our worker.
go worker(start, end)
}
// Make a channel to make sure that the collector process completes.
collectorCloseCh := make(chan struct{})
// Keep track of the last error seen.
var lastErr error
// There are multiple ways that the cancel channel can be closed:
// - if we receive an error from the errCh
// - if the collector process completes
// To prevent panics, we wrap the close in a sync.Once.
var cancelOnce sync.Once
// Create a map of accounts we've seen so that we can filter out duplicates.
seenAccounts := make(map[common.Address]bool)
// Keep track of the total migrated supply.
totalFound := new(big.Int)
return true // Kick off another background process to collect
// values from the channel and add them to the map.
var count int
progress := util.ProgressLogger(1000, "Migrated OVM_ETH storage slot")
go func() {
defer func() {
collectorCloseCh <- struct{}{}
}()
for {
select {
case account := <-outCh:
progress()
// Filter out duplicate accounts. See the below note about keyspace iteration for
// why we may have to filter out duplicates.
if seenAccounts[account.address] {
log.Info("skipping duplicate account during iteration", "addr", account.address)
continue
}
// Accumulate addresses and total supply.
totalFound = new(big.Int).Add(totalFound, account.balance)
mutableDB.SetBalance(account.address, account.balance)
mutableDB.SetState(predeploys.LegacyERC20ETHAddr, account.legacySlot, common.Hash{})
count++
seenAccounts[account.address] = true
case err := <-errCh:
cancelOnce.Do(func() {
lastErr = err
close(cancelCh)
}) })
if err != nil { case <-cancelCh:
return fmt.Errorf("failed to iterate over OVM_ETH storage: %w", err) return
}
}
}()
// Wait for the workers to finish.
wg.Wait()
// Close the cancel channel to signal the collector process to stop.
cancelOnce.Do(func() {
close(cancelCh)
})
// Wait for the collector process to finish.
<-collectorCloseCh
// If we saw an error, return it.
if lastErr != nil {
return lastErr
} }
if innerErr != nil {
return fmt.Errorf("error in migration: %w", innerErr) // Log how many slots were iterated over.
log.Info("Iterated legacy balances", "count", count)
// Verify the supply delta. Recorded total supply in the LegacyERC20ETH contract may be higher
// than the actual migrated amount because self-destructs will remove ETH supply in a way that
// cannot be reflected in the contract. This is fine because self-destructs just mean the L2 is
// actually *overcollateralized* by some tiny amount.
db, err := dbFactory()
if err != nil {
log.Crit("cannot get database", "err", err)
} }
// Make sure that the total supply delta matches the expected delta. Every balance that was
// found was also migrated, so checking the delta against the total found is equivalent to
// checking it against the total migrated (a = b, b = c => a = c).
totalSupply := getOVMETHTotalSupply(db) totalSupply := getOVMETHTotalSupply(db)
delta := new(big.Int).Sub(totalSupply, totalMigrated) delta := new(big.Int).Sub(totalSupply, totalFound)
if delta.Cmp(expSupplyDiff) != 0 { if delta.Cmp(expDiff) != 0 {
if noCheck {
log.Error( log.Error(
"supply mismatch", "supply mismatch",
"migrated", totalMigrated.String(), "migrated", totalFound.String(),
"supply", totalSupply.String(), "supply", totalSupply.String(),
"delta", delta.String(), "delta", delta.String(),
"exp_delta", expSupplyDiff.String(), "exp_delta", expDiff.String(),
) )
} else { if !noCheck {
log.Error( return fmt.Errorf("supply mismatch: %s", delta.String())
"supply mismatch",
"migrated", totalMigrated.String(),
"supply", totalSupply.String(),
"delta", delta.String(),
"exp_delta", expSupplyDiff.String(),
)
return fmt.Errorf("supply mismatch: exp delta %s != %s", expSupplyDiff.String(), delta.String())
} }
} }
// Supply is verified. // Supply is verified.
log.Info( log.Info(
"supply verified OK", "supply verified OK",
"migrated", totalMigrated.String(), "migrated", totalFound.String(),
"supply", totalSupply.String(), "supply", totalSupply.String(),
"delta", delta.String(), "delta", delta.String(),
"exp_delta", expSupplyDiff.String(), "exp_delta", expDiff.String(),
) )
// Set the total supply to 0. We do this because the total supply is necessarily going to be // Set the total supply to 0. We do this because the total supply is necessarily going to be
// different than the sum of all balances since we no longer track balances inside the contract // different than the sum of all balances since we no longer track balances inside the contract
// itself. The total supply is going to be weird no matter what, might as well set it to zero // itself. The total supply is going to be weird no matter what, might as well set it to zero
// so it's explicitly weird instead of implicitly weird. // so it's explicitly weird instead of implicitly weird.
if commit { mutableDB.SetState(predeploys.LegacyERC20ETHAddr, getOVMETHTotalSupplySlot(), common.Hash{})
db.SetState(predeploys.LegacyERC20ETHAddr, getOVMETHTotalSupplySlot(), common.Hash{})
log.Info("Set the totalSupply to 0") log.Info("Set the totalSupply to 0")
}
return nil return nil
} }
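// Illustrative sketch (not part of this diff, names are hypothetical): callers are expected
// to hand MigrateBalances a mutable StateDB plus a DBFactory that re-opens the same state
// root, so every worker can build its own read-only view of the storage trie:
//
//	factory := func() (*state.StateDB, error) {
//		return state.New(root, stateDatabase, nil)
//	}
//	mutableDB, err := factory()
//	if err != nil {
//		return err
//	}
//	if err := MigrateBalances(mutableDB, factory, addresses, allowances, chainID, noCheck); err != nil {
//		return fmt.Errorf("failed to migrate OVM_ETH: %w", err)
//	}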
// PartitionKeyspace divides the key space into partitions by dividing the maximum keyspace
// by count then multiplying by i. This will leave some slots left over, which we handle below. It
// returns the start and end keys for the partition as a common.Hash. Note that the returned range
// of keys is inclusive, i.e., [start, end] NOT [start, end).
func PartitionKeyspace(i int, count int) (common.Hash, common.Hash) {
if i < 0 || count < 0 {
panic("i and count must be greater than 0")
}
if i > count-1 {
panic("i must be less than count - 1")
}
// Divide the key space into partitions by dividing the key space by the number
// of jobs. This will leave some slots left over, which we handle below.
partSize := new(big.Int).Div(maxSlot.Big(), big.NewInt(int64(count)))
start := common.BigToHash(new(big.Int).Mul(big.NewInt(int64(i)), partSize))
var end common.Hash
if i < count-1 {
// If this is not the last partition, use the next partition's start key as the end.
end = common.BigToHash(new(big.Int).Mul(big.NewInt(int64(i+1)), partSize))
} else {
// If this is the last partition, use the max slot as the end.
end = maxSlot
}
return start, end
}
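// Illustrative note (not part of this diff): because the returned ranges are inclusive,
// adjacent partitions share their boundary key, e.g. with count = 2 both partition 0 and
// partition 1 contain 0x7fff...ff. A slot landing exactly on a boundary may therefore be
// visited by two workers, which is why the collector in doMigration filters out duplicate
// accounts.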
package ether package ether
import ( import (
"fmt"
"math/big" "math/big"
"math/rand"
"testing" "testing"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain" "github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/rawdb"
...@@ -12,7 +16,7 @@ import ( ...@@ -12,7 +16,7 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
func TestMigrateLegacyETH(t *testing.T) { func TestMigrateBalances(t *testing.T) {
tests := []struct { tests := []struct {
name string name string
totalSupply *big.Int totalSupply *big.Int
...@@ -46,11 +50,9 @@ func TestMigrateLegacyETH(t *testing.T) { ...@@ -46,11 +50,9 @@ func TestMigrateLegacyETH(t *testing.T) {
}, },
check: func(t *testing.T, db *state.StateDB, err error) { check: func(t *testing.T, db *state.StateDB, err error) {
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, db.GetBalance(common.HexToAddress("0x123")), big.NewInt(1)) require.EqualValues(t, common.Big1, db.GetBalance(common.HexToAddress("0x123")))
require.Equal(t, db.GetBalance(common.HexToAddress("0x456")), big.NewInt(2)) require.EqualValues(t, common.Big2, db.GetBalance(common.HexToAddress("0x456")))
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x123"))), common.Hash{}) require.EqualValues(t, common.Hash{}, db.GetState(predeploys.LegacyERC20ETHAddr, GetOVMETHTotalSupplySlot()))
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x456"))), common.Hash{})
require.Equal(t, db.GetState(OVMETHAddress, getOVMETHTotalSupplySlot()), common.Hash{})
}, },
}, },
{ {
...@@ -66,9 +68,9 @@ func TestMigrateLegacyETH(t *testing.T) { ...@@ -66,9 +68,9 @@ func TestMigrateLegacyETH(t *testing.T) {
}, },
check: func(t *testing.T, db *state.StateDB, err error) { check: func(t *testing.T, db *state.StateDB, err error) {
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, db.GetBalance(common.HexToAddress("0x123")), big.NewInt(1)) require.EqualValues(t, common.Big1, db.GetBalance(common.HexToAddress("0x123")))
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x123"))), common.Hash{}) require.EqualValues(t, common.Big0, db.GetBalance(common.HexToAddress("0x456")))
require.Equal(t, db.GetState(OVMETHAddress, getOVMETHTotalSupplySlot()), common.Hash{}) require.EqualValues(t, common.Hash{}, db.GetState(predeploys.LegacyERC20ETHAddr, GetOVMETHTotalSupplySlot()))
}, },
}, },
{ {
...@@ -97,9 +99,9 @@ func TestMigrateLegacyETH(t *testing.T) { ...@@ -97,9 +99,9 @@ func TestMigrateLegacyETH(t *testing.T) {
}, },
check: func(t *testing.T, db *state.StateDB, err error) { check: func(t *testing.T, db *state.StateDB, err error) {
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, db.GetBalance(common.HexToAddress("0x123")), big.NewInt(1)) require.EqualValues(t, common.Big1, db.GetBalance(common.HexToAddress("0x123")))
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x123"))), common.Hash{}) require.EqualValues(t, common.Big0, db.GetBalance(common.HexToAddress("0x456")))
require.Equal(t, db.GetState(OVMETHAddress, getOVMETHTotalSupplySlot()), common.Hash{}) require.EqualValues(t, common.Hash{}, db.GetState(predeploys.LegacyERC20ETHAddr, GetOVMETHTotalSupplySlot()))
}, },
}, },
{ {
...@@ -174,25 +176,23 @@ func TestMigrateLegacyETH(t *testing.T) { ...@@ -174,25 +176,23 @@ func TestMigrateLegacyETH(t *testing.T) {
}, },
check: func(t *testing.T, db *state.StateDB, err error) { check: func(t *testing.T, db *state.StateDB, err error) {
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, db.GetBalance(common.HexToAddress("0x123")), big.NewInt(1)) require.EqualValues(t, common.Big1, db.GetBalance(common.HexToAddress("0x123")))
require.Equal(t, db.GetBalance(common.HexToAddress("0x456")), big.NewInt(2)) require.EqualValues(t, common.Big2, db.GetBalance(common.HexToAddress("0x456")))
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x123"))), common.Hash{})
require.Equal(t, db.GetState(OVMETHAddress, CalcOVMETHStorageKey(common.HexToAddress("0x456"))), common.Hash{})
require.Equal(t, db.GetState(OVMETHAddress, getOVMETHTotalSupplySlot()), common.Hash{})
}, },
}, },
} }
for _, tt := range tests { for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) { t.Run(tt.name, func(t *testing.T) {
db := makeLegacyETH(t, tt.totalSupply, tt.stateBalances, tt.stateAllowances) db, factory := makeLegacyETH(t, tt.totalSupply, tt.stateBalances, tt.stateAllowances)
err := doMigration(db, tt.inputAddresses, tt.inputAllowances, tt.expDiff, false, true) err := doMigration(db, factory, tt.inputAddresses, tt.inputAllowances, tt.expDiff, false)
tt.check(t, db, err) tt.check(t, db, err)
}) })
} }
} }
func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Address]*big.Int, allowances map[common.Address]common.Address) *state.StateDB { func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Address]*big.Int, allowances map[common.Address]common.Address) (*state.StateDB, DBFactory) {
db, err := state.New(common.Hash{}, state.NewDatabaseWithConfig(rawdb.NewMemoryDatabase(), &trie.Config{ memDB := rawdb.NewMemoryDatabase()
db, err := state.New(common.Hash{}, state.NewDatabaseWithConfig(memDB, &trie.Config{
Preimages: true, Preimages: true,
Cache: 1024, Cache: 1024,
}), nil) }), nil)
...@@ -201,7 +201,7 @@ func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Addre ...@@ -201,7 +201,7 @@ func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Addre
db.CreateAccount(OVMETHAddress) db.CreateAccount(OVMETHAddress)
db.SetState(OVMETHAddress, getOVMETHTotalSupplySlot(), common.BigToHash(totalSupply)) db.SetState(OVMETHAddress, getOVMETHTotalSupplySlot(), common.BigToHash(totalSupply))
for slot := range OVMETHIgnoredSlots { for slot := range ignoredSlots {
if slot == getOVMETHTotalSupplySlot() { if slot == getOVMETHTotalSupplySlot() {
continue continue
} }
...@@ -220,5 +220,137 @@ func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Addre ...@@ -220,5 +220,137 @@ func makeLegacyETH(t *testing.T, totalSupply *big.Int, balances map[common.Addre
err = db.Database().TrieDB().Commit(root, true) err = db.Database().TrieDB().Commit(root, true)
require.NoError(t, err) require.NoError(t, err)
return db return db, func() (*state.StateDB, error) {
return state.New(root, state.NewDatabaseWithConfig(memDB, &trie.Config{
Preimages: true,
Cache: 1024,
}), nil)
}
}
// TestMigrateBalancesRandom tests that the balance migration works
// with random addresses. This test makes sure that the partition logic doesn't
// miss anything.
func TestMigrateBalancesRandom(t *testing.T) {
for i := 0; i < 100; i++ {
addresses := make([]common.Address, 0)
stateBalances := make(map[common.Address]*big.Int)
allowances := make([]*crossdomain.Allowance, 0)
stateAllowances := make(map[common.Address]common.Address)
totalSupply := big.NewInt(0)
for j := 0; j < rand.Intn(10000); j++ {
addr := randAddr(t)
addresses = append(addresses, addr)
stateBalances[addr] = big.NewInt(int64(rand.Intn(1_000_000)))
totalSupply = new(big.Int).Add(totalSupply, stateBalances[addr])
}
for j := 0; j < rand.Intn(1000); j++ {
addr := randAddr(t)
to := randAddr(t)
allowances = append(allowances, &crossdomain.Allowance{
From: addr,
To: to,
})
stateAllowances[addr] = to
}
db, factory := makeLegacyETH(t, totalSupply, stateBalances, stateAllowances)
err := doMigration(db, factory, addresses, allowances, big.NewInt(0), false)
require.NoError(t, err)
for addr, expBal := range stateBalances {
actBal := db.GetBalance(addr)
require.EqualValues(t, expBal, actBal)
}
}
}
func TestPartitionKeyspace(t *testing.T) {
tests := []struct {
i int
count int
expected [2]common.Hash
}{
{
i: 0,
count: 1,
expected: [2]common.Hash{
common.HexToHash("0x00"),
common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
{
i: 0,
count: 2,
expected: [2]common.Hash{
common.HexToHash("0x00"),
common.HexToHash("0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
{
i: 1,
count: 2,
expected: [2]common.Hash{
common.HexToHash("0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
{
i: 0,
count: 3,
expected: [2]common.Hash{
common.HexToHash("0x00"),
common.HexToHash("0x5555555555555555555555555555555555555555555555555555555555555555"),
},
},
{
i: 1,
count: 3,
expected: [2]common.Hash{
common.HexToHash("0x5555555555555555555555555555555555555555555555555555555555555555"),
common.HexToHash("0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"),
},
},
{
i: 2,
count: 3,
expected: [2]common.Hash{
common.HexToHash("0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"),
common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"),
},
},
}
for _, tt := range tests {
t.Run(fmt.Sprintf("i %d, count %d", tt.i, tt.count), func(t *testing.T) {
start, end := PartitionKeyspace(tt.i, tt.count)
require.Equal(t, tt.expected[0], start)
require.Equal(t, tt.expected[1], end)
})
}
t.Run("panics on invalid i or count", func(t *testing.T) {
require.Panics(t, func() {
PartitionKeyspace(1, 1)
})
require.Panics(t, func() {
PartitionKeyspace(-1, 1)
})
require.Panics(t, func() {
PartitionKeyspace(0, -1)
})
require.Panics(t, func() {
PartitionKeyspace(-1, -1)
})
})
}
func randAddr(t *testing.T) common.Address {
var addr common.Address
_, err := rand.Read(addr[:])
require.NoError(t, err)
return addr
} }
...@@ -7,6 +7,10 @@ import ( ...@@ -7,6 +7,10 @@ import (
"fmt" "fmt"
"math/big" "math/big"
"github.com/ethereum-optimism/optimism/op-chain-ops/util"
"github.com/ethereum-optimism/optimism/op-chain-ops/ether"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state" "github.com/ethereum/go-ethereum/core/state"
...@@ -31,6 +35,7 @@ const MaxSlotChecks = 1000 ...@@ -31,6 +35,7 @@ const MaxSlotChecks = 1000
type StorageCheckMap = map[common.Hash]common.Hash type StorageCheckMap = map[common.Hash]common.Hash
var ( var (
L2XDMOwnerSlot = common.Hash{31: 0x33}
ProxyAdminOwnerSlot = common.Hash{} ProxyAdminOwnerSlot = common.Hash{}
LegacyETHCheckSlots = map[common.Hash]common.Hash{ LegacyETHCheckSlots = map[common.Hash]common.Hash{
...@@ -250,7 +255,7 @@ func PostCheckPredeploys(prevDB, currDB *state.StateDB) error { ...@@ -250,7 +255,7 @@ func PostCheckPredeploys(prevDB, currDB *state.StateDB) error {
// Balances and nonces should match legacy // Balances and nonces should match legacy
oldNonce := prevDB.GetNonce(addr) oldNonce := prevDB.GetNonce(addr)
oldBalance := prevDB.GetBalance(addr) oldBalance := ether.GetOVMETHBalance(prevDB, addr)
newNonce := currDB.GetNonce(addr) newNonce := currDB.GetNonce(addr)
newBalance := currDB.GetBalance(addr) newBalance := currDB.GetBalance(addr)
if oldNonce != newNonce { if oldNonce != newNonce {
...@@ -505,7 +510,9 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros ...@@ -505,7 +510,9 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
// Now, iterate over each legacy withdrawal and check if there is a corresponding // Now, iterate over each legacy withdrawal and check if there is a corresponding
// migrated withdrawal. // migrated withdrawal.
var innerErr error var innerErr error
progress := util.ProgressLogger(1000, "checking withdrawals")
err = db.ForEachStorage(predeploys.LegacyMessagePasserAddr, func(key, value common.Hash) bool { err = db.ForEachStorage(predeploys.LegacyMessagePasserAddr, func(key, value common.Hash) bool {
progress()
// The legacy message passer becomes a proxy during the migration, // The legacy message passer becomes a proxy during the migration,
// so we need to ignore the implementation/admin slots. // so we need to ignore the implementation/admin slots.
if key == ImplementationSlot || key == AdminSlot { if key == ImplementationSlot || key == AdminSlot {
...@@ -542,7 +549,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros ...@@ -542,7 +549,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
// If the sender is _not_ the L2XDM, the value should not be migrated. // If the sender is _not_ the L2XDM, the value should not be migrated.
wd := wdsByOldSlot[key] wd := wdsByOldSlot[key]
if wd.XDomainSender == predeploys.L2CrossDomainMessengerAddr { if wd.MessageSender == predeploys.L2CrossDomainMessengerAddr {
// Make sure the value is abiTrue if this withdrawal should be migrated. // Make sure the value is abiTrue if this withdrawal should be migrated.
if migratedValue != abiTrue { if migratedValue != abiTrue {
innerErr = fmt.Errorf("expected migrated value to be true, but got %s", migratedValue) innerErr = fmt.Errorf("expected migrated value to be true, but got %s", migratedValue)
...@@ -551,7 +558,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros ...@@ -551,7 +558,7 @@ func CheckWithdrawalsAfter(db vm.StateDB, data crossdomain.MigrationData, l1Cros
} else { } else {
// Otherwise, ensure that withdrawals from senders other than the L2XDM are _not_ migrated. // Otherwise, ensure that withdrawals from senders other than the L2XDM are _not_ migrated.
if migratedValue != abiFalse { if migratedValue != abiFalse {
innerErr = fmt.Errorf("a migration from a sender other than the L2XDM was migrated") innerErr = fmt.Errorf("a migration from a sender other than the L2XDM was migrated. sender: %s, migrated value: %s", wd.MessageSender, migratedValue)
return false return false
} }
} }
......
...@@ -76,6 +76,8 @@ type DeployConfig struct { ...@@ -76,6 +76,8 @@ type DeployConfig struct {
ProxyAdminOwner common.Address `json:"proxyAdminOwner"` ProxyAdminOwner common.Address `json:"proxyAdminOwner"`
// Owner of the system on L1 // Owner of the system on L1
FinalSystemOwner common.Address `json:"finalSystemOwner"` FinalSystemOwner common.Address `json:"finalSystemOwner"`
// GUARDIAN account in the OptimismPortal
PortalGuardian common.Address `json:"portalGuardian"`
// L1 recipient of fees accumulated in the BaseFeeVault // L1 recipient of fees accumulated in the BaseFeeVault
BaseFeeVaultRecipient common.Address `json:"baseFeeVaultRecipient"` BaseFeeVaultRecipient common.Address `json:"baseFeeVaultRecipient"`
// L1 recipient of fees accumulated in the L1FeeVault // L1 recipient of fees accumulated in the L1FeeVault
...@@ -128,6 +130,9 @@ func (d *DeployConfig) Check() error { ...@@ -128,6 +130,9 @@ func (d *DeployConfig) Check() error {
if d.FinalizationPeriodSeconds == 0 { if d.FinalizationPeriodSeconds == 0 {
return fmt.Errorf("%w: FinalizationPeriodSeconds cannot be 0", ErrInvalidDeployConfig) return fmt.Errorf("%w: FinalizationPeriodSeconds cannot be 0", ErrInvalidDeployConfig)
} }
if d.PortalGuardian == (common.Address{}) {
return fmt.Errorf("%w: PortalGuardian cannot be address(0)", ErrInvalidDeployConfig)
}
if d.MaxSequencerDrift == 0 { if d.MaxSequencerDrift == 0 {
return fmt.Errorf("%w: MaxSequencerDrift cannot be 0", ErrInvalidDeployConfig) return fmt.Errorf("%w: MaxSequencerDrift cannot be 0", ErrInvalidDeployConfig)
} }
......
...@@ -82,6 +82,7 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m ...@@ -82,6 +82,7 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
) )
} }
dbFactory := func() (*state.StateDB, error) {
// Set up the backing store. // Set up the backing store.
underlyingDB := state.NewDatabaseWithConfig(ldb, &trie.Config{ underlyingDB := state.NewDatabaseWithConfig(ldb, &trie.Config{
Preimages: true, Preimages: true,
...@@ -94,6 +95,14 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m ...@@ -94,6 +95,14 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
return nil, fmt.Errorf("cannot open StateDB: %w", err) return nil, fmt.Errorf("cannot open StateDB: %w", err)
} }
return db, nil
}
db, err := dbFactory()
if err != nil {
return nil, fmt.Errorf("cannot create StateDB: %w", err)
}
// Before we do anything else, we need to ensure that all of the input configuration is correct // Before we do anything else, we need to ensure that all of the input configuration is correct
// and nothing is missing. We'll first verify the contract configuration, then we'll verify the // and nothing is missing. We'll first verify the contract configuration, then we'll verify the
// witness data for the migration. We operate under the assumption that the witness data is // witness data for the migration. We operate under the assumption that the witness data is
...@@ -182,17 +191,13 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m ...@@ -182,17 +191,13 @@ func MigrateDB(ldb ethdb.Database, config *DeployConfig, l1Block *types.Block, m
return nil, fmt.Errorf("cannot migrate withdrawals: %w", err) return nil, fmt.Errorf("cannot migrate withdrawals: %w", err)
} }
// We also need to verify that we have all of the storage slots for the LegacyERC20ETH contract // Finally we migrate the balances held inside the LegacyERC20ETH contract into the state trie.
// that we expect to have. An error will be thrown if there are any missing storage slots. // We also delete the balances from the LegacyERC20ETH contract. Unlike the steps above, this step
// Unlike with withdrawals, we do not need to filter out extra addresses because their balances // combines the check and mutation steps into one in order to reduce migration time.
// would necessarily be zero and therefore not affect the migration.
//
// Once verified, we migrate the balances held inside the LegacyERC20ETH contract into the state trie.
// We also delete the balances from the LegacyERC20ETH contract.
log.Info("Starting to migrate ERC20 ETH") log.Info("Starting to migrate ERC20 ETH")
err = ether.MigrateLegacyETH(db, migrationData.Addresses(), migrationData.OvmAllowances, int(config.L1ChainID), noCheck, commit) err = ether.MigrateBalances(db, dbFactory, migrationData.Addresses(), migrationData.OvmAllowances, int(config.L1ChainID), noCheck)
if err != nil { if err != nil {
return nil, fmt.Errorf("cannot migrate legacy eth: %w", err) return nil, fmt.Errorf("failed to migrate OVM_ETH: %w", err)
} }
// We're done messing around with the database, so we can now commit the changes to the DB. // We're done messing around with the database, so we can now commit the changes to the DB.
......
...@@ -295,7 +295,7 @@ func deployL1Contracts(config *DeployConfig, backend *backends.SimulatedBackend) ...@@ -295,7 +295,7 @@ func deployL1Contracts(config *DeployConfig, backend *backends.SimulatedBackend)
Name: "OptimismPortal", Name: "OptimismPortal",
Args: []interface{}{ Args: []interface{}{
predeploys.DevL2OutputOracleAddr, predeploys.DevL2OutputOracleAddr,
config.FinalSystemOwner, config.PortalGuardian,
true, // _paused true, // _paused
}, },
}, },
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
"l1GenesisBlockGasLimit": "0xe4e1c0", "l1GenesisBlockGasLimit": "0xe4e1c0",
"l1GenesisBlockDifficulty": "0x1", "l1GenesisBlockDifficulty": "0x1",
"finalSystemOwner": "0x0000000000000000000000000000000000000111", "finalSystemOwner": "0x0000000000000000000000000000000000000111",
"portalGuardian": "0x0000000000000000000000000000000000000112",
"finalizationPeriodSeconds": 2, "finalizationPeriodSeconds": 2,
"l1GenesisBlockMixHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "l1GenesisBlockMixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"l1GenesisBlockCoinbase": "0x0000000000000000000000000000000000000000", "l1GenesisBlockCoinbase": "0x0000000000000000000000000000000000000000",
......
package actions
import (
"testing"
"github.com/ethereum/go-ethereum/log"
"github.com/stretchr/testify/require"
"github.com/ethereum-optimism/optimism/op-e2e/e2eutils"
"github.com/ethereum-optimism/optimism/op-node/testlog"
)
func TestShapellaL1Fork(gt *testing.T) {
t := NewDefaultTesting(gt)
dp := e2eutils.MakeDeployParams(t, defaultRollupTestParams)
sd := e2eutils.Setup(t, dp, defaultAlloc)
activation := sd.L1Cfg.Timestamp + 24
sd.L1Cfg.Config.ShanghaiTime = &activation
log := testlog.Logger(t, log.LvlDebug)
_, _, miner, sequencer, _, verifier, _, batcher := setupReorgTestActors(t, dp, sd, log)
require.False(t, sd.L1Cfg.Config.IsShanghai(miner.l1Chain.CurrentBlock().Time()), "not active yet")
// start op-nodes
sequencer.ActL2PipelineFull(t)
verifier.ActL2PipelineFull(t)
// build empty L1 blocks, crossing the fork boundary
miner.ActEmptyBlock(t)
miner.ActEmptyBlock(t)
miner.ActEmptyBlock(t)
// verify Shanghai is active
l1Head := miner.l1Chain.CurrentBlock()
require.True(t, sd.L1Cfg.Config.IsShanghai(l1Head.Time()))
// build L2 chain up to and including L2 blocks referencing shanghai L1 blocks
sequencer.ActL1HeadSignal(t)
sequencer.ActBuildToL1Head(t)
miner.ActL1StartBlock(12)(t)
batcher.ActSubmitAll(t)
miner.ActL1IncludeTx(batcher.batcherAddr)(t)
miner.ActL1EndBlock(t)
// sync verifier
verifier.ActL1HeadSignal(t)
verifier.ActL2PipelineFull(t)
// verify verifier accepted shanghai L1 inputs
require.Equal(t, l1Head.Hash(), verifier.SyncStatus().SafeL2.L1Origin.Hash, "verifier synced L1 chain that includes shanghai headers")
require.Equal(t, sequencer.SyncStatus().UnsafeL2, verifier.SyncStatus().UnsafeL2, "verifier and sequencer agree")
}
...@@ -10,6 +10,7 @@ import ( ...@@ -10,6 +10,7 @@ import (
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/trie" "github.com/ethereum/go-ethereum/trie"
"github.com/stretchr/testify/require"
) )
// L1Miner wraps a L1Replica with instrumented block building ability. // L1Miner wraps a L1Replica with instrumented block building ability.
...@@ -72,6 +73,9 @@ func (s *L1Miner) ActL1StartBlock(timeDelta uint64) Action { ...@@ -72,6 +73,9 @@ func (s *L1Miner) ActL1StartBlock(timeDelta uint64) Action {
header.GasLimit = parent.GasLimit * s.l1Cfg.Config.ElasticityMultiplier() header.GasLimit = parent.GasLimit * s.l1Cfg.Config.ElasticityMultiplier()
} }
} }
if s.l1Cfg.Config.IsShanghai(header.Time) {
header.WithdrawalsHash = &types.EmptyWithdrawalsHash
}
s.l1Building = true s.l1Building = true
s.l1BuildingHeader = header s.l1BuildingHeader = header
...@@ -97,6 +101,15 @@ func (s *L1Miner) ActL1IncludeTx(from common.Address) Action { ...@@ -97,6 +101,15 @@ func (s *L1Miner) ActL1IncludeTx(from common.Address) Action {
t.Fatalf("no pending txs from %s, and have %d unprocessable queued txs from this account", from, len(q)) t.Fatalf("no pending txs from %s, and have %d unprocessable queued txs from this account", from, len(q))
} }
tx := txs[i] tx := txs[i]
s.IncludeTx(t, tx)
s.pendingIndices[from] = i + 1 // won't retry the tx
}
}
func (s *L1Miner) IncludeTx(t Testing, tx *types.Transaction) {
from, err := s.l1Signer.Sender(tx)
require.NoError(t, err)
s.log.Info("including tx", "nonce", tx.Nonce(), "from", from)
if tx.Gas() > s.l1BuildingHeader.GasLimit { if tx.Gas() > s.l1BuildingHeader.GasLimit {
t.Fatalf("tx consumes %d gas, more than available in L1 block %d", tx.Gas(), s.l1BuildingHeader.GasLimit) t.Fatalf("tx consumes %d gas, more than available in L1 block %d", tx.Gas(), s.l1BuildingHeader.GasLimit)
} }
...@@ -104,17 +117,15 @@ func (s *L1Miner) ActL1IncludeTx(from common.Address) Action { ...@@ -104,17 +117,15 @@ func (s *L1Miner) ActL1IncludeTx(from common.Address) Action {
t.InvalidAction("action takes too much gas: %d, only have %d", tx.Gas(), uint64(*s.l1GasPool)) t.InvalidAction("action takes too much gas: %d, only have %d", tx.Gas(), uint64(*s.l1GasPool))
return return
} }
s.pendingIndices[from] = i + 1 // won't retry the tx
s.l1BuildingState.SetTxContext(tx.Hash(), len(s.l1Transactions)) s.l1BuildingState.SetTxContext(tx.Hash(), len(s.l1Transactions))
receipt, err := core.ApplyTransaction(s.l1Cfg.Config, s.l1Chain, &s.l1BuildingHeader.Coinbase, receipt, err := core.ApplyTransaction(s.l1Cfg.Config, s.l1Chain, &s.l1BuildingHeader.Coinbase,
s.l1GasPool, s.l1BuildingState, s.l1BuildingHeader, tx, &s.l1BuildingHeader.GasUsed, *s.l1Chain.GetVMConfig()) s.l1GasPool, s.l1BuildingState, s.l1BuildingHeader, tx, &s.l1BuildingHeader.GasUsed, *s.l1Chain.GetVMConfig())
if err != nil { if err != nil {
s.l1TxFailed = append(s.l1TxFailed, tx) s.l1TxFailed = append(s.l1TxFailed, tx)
t.Fatalf("failed to apply transaction to L1 block (tx %d): %w", len(s.l1Transactions), err) t.Fatalf("failed to apply transaction to L1 block (tx %d): %v", len(s.l1Transactions), err)
} }
s.l1Receipts = append(s.l1Receipts, receipt) s.l1Receipts = append(s.l1Receipts, receipt)
s.l1Transactions = append(s.l1Transactions, tx) s.l1Transactions = append(s.l1Transactions, tx)
}
} }
func (s *L1Miner) ActL1SetFeeRecipient(coinbase common.Address) { func (s *L1Miner) ActL1SetFeeRecipient(coinbase common.Address) {
...@@ -135,6 +146,9 @@ func (s *L1Miner) ActL1EndBlock(t Testing) { ...@@ -135,6 +146,9 @@ func (s *L1Miner) ActL1EndBlock(t Testing) {
s.l1BuildingHeader.GasUsed = s.l1BuildingHeader.GasLimit - uint64(*s.l1GasPool) s.l1BuildingHeader.GasUsed = s.l1BuildingHeader.GasLimit - uint64(*s.l1GasPool)
s.l1BuildingHeader.Root = s.l1BuildingState.IntermediateRoot(s.l1Cfg.Config.IsEIP158(s.l1BuildingHeader.Number)) s.l1BuildingHeader.Root = s.l1BuildingState.IntermediateRoot(s.l1Cfg.Config.IsEIP158(s.l1BuildingHeader.Number))
block := types.NewBlock(s.l1BuildingHeader, s.l1Transactions, nil, s.l1Receipts, trie.NewStackTrie(nil)) block := types.NewBlock(s.l1BuildingHeader, s.l1Transactions, nil, s.l1Receipts, trie.NewStackTrie(nil))
if s.l1Cfg.Config.IsShanghai(s.l1BuildingHeader.Time) {
block = block.WithWithdrawals(make([]*types.Withdrawal, 0))
}
// Write state changes to db // Write state changes to db
root, err := s.l1BuildingState.Commit(s.l1Cfg.Config.IsEIP158(s.l1BuildingHeader.Number)) root, err := s.l1BuildingState.Commit(s.l1Cfg.Config.IsEIP158(s.l1BuildingHeader.Number))
......
...@@ -91,6 +91,10 @@ func (s *L2Batcher) SubmittingData() bool { ...@@ -91,6 +91,10 @@ func (s *L2Batcher) SubmittingData() bool {
// ActL2BatchBuffer adds the next L2 block to the batch buffer. // ActL2BatchBuffer adds the next L2 block to the batch buffer.
// If the buffer is being submitted, the buffer is wiped. // If the buffer is being submitted, the buffer is wiped.
func (s *L2Batcher) ActL2BatchBuffer(t Testing) { func (s *L2Batcher) ActL2BatchBuffer(t Testing) {
require.NoError(t, s.Buffer(t), "failed to add block to channel")
}
func (s *L2Batcher) Buffer(t Testing) error {
if s.l2Submitting { // break ongoing submitting work if necessary if s.l2Submitting { // break ongoing submitting work if necessary
s.l2ChannelOut = nil s.l2ChannelOut = nil
s.l2Submitting = false s.l2Submitting = false
...@@ -120,7 +124,7 @@ func (s *L2Batcher) ActL2BatchBuffer(t Testing) { ...@@ -120,7 +124,7 @@ func (s *L2Batcher) ActL2BatchBuffer(t Testing) {
s.l2ChannelOut = nil s.l2ChannelOut = nil
} else { } else {
s.log.Info("nothing left to submit") s.log.Info("nothing left to submit")
return return nil
} }
} }
// Create channel if we don't have one yet // Create channel if we don't have one yet
...@@ -143,9 +147,10 @@ func (s *L2Batcher) ActL2BatchBuffer(t Testing) { ...@@ -143,9 +147,10 @@ func (s *L2Batcher) ActL2BatchBuffer(t Testing) {
s.l2ChannelOut = nil s.l2ChannelOut = nil
} }
if _, err := s.l2ChannelOut.AddBlock(block); err != nil { // should always succeed if _, err := s.l2ChannelOut.AddBlock(block); err != nil { // should always succeed
t.Fatalf("failed to add block to channel: %v", err) return err
} }
s.l2BufferedBlock = eth.ToBlockID(block) s.l2BufferedBlock = eth.ToBlockID(block)
return nil
} }
func (s *L2Batcher) ActL2ChannelClose(t Testing) { func (s *L2Batcher) ActL2ChannelClose(t Testing) {
...@@ -158,7 +163,7 @@ func (s *L2Batcher) ActL2ChannelClose(t Testing) { ...@@ -158,7 +163,7 @@ func (s *L2Batcher) ActL2ChannelClose(t Testing) {
} }
// ActL2BatchSubmit constructs a batch tx from previous buffered L2 blocks, and submits it to L1 // ActL2BatchSubmit constructs a batch tx from previous buffered L2 blocks, and submits it to L1
func (s *L2Batcher) ActL2BatchSubmit(t Testing) { func (s *L2Batcher) ActL2BatchSubmit(t Testing, txOpts ...func(tx *types.DynamicFeeTx)) {
// Don't run this action if there's no data to submit // Don't run this action if there's no data to submit
if s.l2ChannelOut == nil { if s.l2ChannelOut == nil {
t.InvalidAction("need to buffer data first, cannot batch submit with empty buffer") t.InvalidAction("need to buffer data first, cannot batch submit with empty buffer")
...@@ -192,6 +197,9 @@ func (s *L2Batcher) ActL2BatchSubmit(t Testing) { ...@@ -192,6 +197,9 @@ func (s *L2Batcher) ActL2BatchSubmit(t Testing) {
GasFeeCap: gasFeeCap, GasFeeCap: gasFeeCap,
Data: data.Bytes(), Data: data.Bytes(),
} }
for _, opt := range txOpts {
opt(rawTx)
}
gas, err := core.IntrinsicGas(rawTx.Data, nil, false, true, true, false) gas, err := core.IntrinsicGas(rawTx.Data, nil, false, true, true, false)
require.NoError(t, err, "need to compute intrinsic gas") require.NoError(t, err, "need to compute intrinsic gas")
rawTx.Gas = gas rawTx.Gas = gas
......
package actions package actions
import ( import (
"crypto/rand"
"errors"
"math/big" "math/big"
"testing" "testing"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
...@@ -12,6 +15,7 @@ import ( ...@@ -12,6 +15,7 @@ import (
"github.com/ethereum-optimism/optimism/op-e2e/e2eutils" "github.com/ethereum-optimism/optimism/op-e2e/e2eutils"
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum-optimism/optimism/op-node/testlog" "github.com/ethereum-optimism/optimism/op-node/testlog"
) )
...@@ -378,3 +382,131 @@ func TestExtendedTimeWithoutL1Batches(gt *testing.T) { ...@@ -378,3 +382,131 @@ func TestExtendedTimeWithoutL1Batches(gt *testing.T) {
sequencer.ActL2PipelineFull(t) sequencer.ActL2PipelineFull(t)
require.Equal(t, sequencer.L2Unsafe(), sequencer.L2Safe(), "same for sequencer") require.Equal(t, sequencer.L2Unsafe(), sequencer.L2Safe(), "same for sequencer")
} }
// TestBigL2Txs tests a high-throughput case with a constrained batcher:
// - Fill 100 L2 blocks to near max-capacity, with txs of 120 KB each
// - Buffer the L2 blocks into channels together as much as possible, submit data-txs only when necessary
// (just before crossing the max RLP channel size)
// - Limit the data-tx size to 40 KB, to force data to be split across multiple data-txs
// - Defer all data-tx inclusion till the end
// - Fill L1 blocks with data-txs until we have processed them all
// - Run the verifier, and check if it derives the same L2 chain as was created by the sequencer.
//
// The goal of this test is to quickly run through an otherwise very slow process of submitting and including lots of data.
// This does not test the batcher code, but is really focused on testing the batcher utils
// and channel-decoding verifier code in the derive package.
func TestBigL2Txs(gt *testing.T) {
t := NewDefaultTesting(gt)
p := &e2eutils.TestParams{
MaxSequencerDrift: 100,
SequencerWindowSize: 1000,
ChannelTimeout: 200, // give enough space to buffer large amounts of data before submitting it
}
dp := e2eutils.MakeDeployParams(t, p)
sd := e2eutils.Setup(t, dp, defaultAlloc)
log := testlog.Logger(t, log.LvlInfo)
miner, engine, sequencer := setupSequencerTest(t, sd, log)
_, verifier := setupVerifier(t, sd, log, miner.L1Client(t, sd.RollupCfg))
batcher := NewL2Batcher(log, sd.RollupCfg, &BatcherCfg{
MinL1TxSize: 0,
MaxL1TxSize: 40_000, // try a small batch size, to force the data to be split between more frames
BatcherKey: dp.Secrets.Batcher,
}, sequencer.RollupClient(), miner.EthClient(), engine.EthClient())
sequencer.ActL2PipelineFull(t)
verifier.ActL2PipelineFull(t)
cl := engine.EthClient()
batcherNonce := uint64(0) // manually track batcher nonce. the "pending nonce" value in tx-pool is incorrect after we fill the pending-block gas limit and keep adding txs to the pool.
batcherTxOpts := func(tx *types.DynamicFeeTx) {
tx.Nonce = batcherNonce
batcherNonce++
tx.GasFeeCap = e2eutils.Ether(1) // be very generous with basefee, since we're spamming L1
}
// build many L2 blocks filled to the brim with large txs of random data
for i := 0; i < 100; i++ {
aliceNonce, err := cl.PendingNonceAt(t.Ctx(), dp.Addresses.Alice)
status := sequencer.SyncStatus()
// build empty L1 blocks as necessary, so the L2 sequencer can continue to include txs while not drifting too far out
if status.UnsafeL2.Time >= status.HeadL1.Time+12 {
miner.ActEmptyBlock(t)
}
sequencer.ActL1HeadSignal(t)
sequencer.ActL2StartBlock(t)
baseFee := engine.l2Chain.CurrentBlock().BaseFee() // this will go quite high, since so many consecutive blocks are filled at capacity.
// fill the block with large L2 txs from alice
for n := aliceNonce; ; n++ {
require.NoError(t, err)
signer := types.LatestSigner(sd.L2Cfg.Config)
data := make([]byte, 120_000) // very large L2 txs, as large as the tx-pool will accept
_, err := rand.Read(data[:]) // fill with random bytes, to make compression ineffective
require.NoError(t, err)
gas, err := core.IntrinsicGas(data, nil, false, true, true, false)
require.NoError(t, err)
if gas > engine.l2GasPool.Gas() {
break
}
tx := types.MustSignNewTx(dp.Secrets.Alice, signer, &types.DynamicFeeTx{
ChainID: sd.L2Cfg.Config.ChainID,
Nonce: n,
GasTipCap: big.NewInt(2 * params.GWei),
GasFeeCap: new(big.Int).Add(new(big.Int).Mul(baseFee, big.NewInt(2)), big.NewInt(2*params.GWei)),
Gas: gas,
To: &dp.Addresses.Bob,
Value: big.NewInt(0),
Data: data,
})
require.NoError(gt, cl.SendTransaction(t.Ctx(), tx))
engine.ActL2IncludeTx(dp.Addresses.Alice)(t)
}
sequencer.ActL2EndBlock(t)
for batcher.l2BufferedBlock.Number < sequencer.SyncStatus().UnsafeL2.Number {
// if we run out of space, close the channel and submit all the txs
if err := batcher.Buffer(t); errors.Is(err, derive.ErrTooManyRLPBytes) {
log.Info("flushing filled channel to batch txs", "id", batcher.l2ChannelOut.ID())
batcher.ActL2ChannelClose(t)
for batcher.l2ChannelOut != nil {
batcher.ActL2BatchSubmit(t, batcherTxOpts)
}
}
}
}
// if anything is left in the channel, submit it
if batcher.l2ChannelOut != nil {
log.Info("flushing trailing channel to batch txs", "id", batcher.l2ChannelOut.ID())
batcher.ActL2ChannelClose(t)
for batcher.l2ChannelOut != nil {
batcher.ActL2BatchSubmit(t, batcherTxOpts)
}
}
// build L1 blocks until we're out of txs
txs, _ := miner.eth.TxPool().ContentFrom(dp.Addresses.Batcher)
for {
if len(txs) == 0 {
break
}
miner.ActL1StartBlock(12)(t)
for range txs {
if len(txs) == 0 {
break
}
tx := txs[0]
if miner.l1GasPool.Gas() < tx.Gas() { // fill the L1 block with batcher txs until we run out of gas
break
}
log.Info("including batcher tx", "nonce", tx)
miner.IncludeTx(t, tx)
txs = txs[1:]
}
miner.ActL1EndBlock(t)
}
verifier.ActL1HeadSignal(t)
verifier.ActL2PipelineFull(t)
require.Equal(t, sequencer.SyncStatus().UnsafeL2, verifier.SyncStatus().SafeL2, "verifier synced sequencer data even with huge txs in blocks")
}
package op_e2e
import (
"math"
"math/big"
"testing"
"time"
"github.com/ethereum-optimism/optimism/op-bindings/bindings"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum-optimism/optimism/op-node/testlog"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params"
"github.com/stretchr/testify/require"
)
// TestERC20BridgeDeposits tests the L1StandardBridge ERC20 bridging
// functionality.
func TestERC20BridgeDeposits(t *testing.T) {
parallel(t)
if !verboseGethNodes {
log.Root().SetHandler(log.DiscardHandler())
}
cfg := DefaultSystemConfig(t)
sys, err := cfg.Start()
require.Nil(t, err, "Error starting up system")
defer sys.Close()
log := testlog.Logger(t, log.LvlInfo)
log.Info("genesis", "l2", sys.RollupConfig.Genesis.L2, "l1", sys.RollupConfig.Genesis.L1, "l2_time", sys.RollupConfig.Genesis.L2Time)
l1Client := sys.Clients["l1"]
l2Client := sys.Clients["sequencer"]
opts, err := bind.NewKeyedTransactorWithChainID(sys.cfg.Secrets.Alice, cfg.L1ChainIDBig())
require.Nil(t, err)
// Deploy WETH9
weth9Address, tx, WETH9, err := bindings.DeployWETH9(opts, l1Client)
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err, "Waiting for deposit tx on L1")
// Get some WETH
opts.Value = big.NewInt(params.Ether)
tx, err = WETH9.Fallback(opts, []byte{})
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err)
opts.Value = nil
wethBalance, err := WETH9.BalanceOf(&bind.CallOpts{}, opts.From)
require.NoError(t, err)
require.Equal(t, big.NewInt(params.Ether), wethBalance)
// Deploy L2 WETH9
l2Opts, err := bind.NewKeyedTransactorWithChainID(sys.cfg.Secrets.Alice, cfg.L2ChainIDBig())
require.NoError(t, err)
optimismMintableTokenFactory, err := bindings.NewOptimismMintableERC20Factory(predeploys.OptimismMintableERC20FactoryAddr, l2Client)
require.NoError(t, err)
tx, err = optimismMintableTokenFactory.CreateOptimismMintableERC20(l2Opts, weth9Address, "L2-WETH", "L2-WETH")
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l2Client, 3*time.Duration(cfg.DeployConfig.L2BlockTime)*time.Second)
require.NoError(t, err)
// Get the deployment event to have access to the L2 WETH9 address
it, err := optimismMintableTokenFactory.FilterOptimismMintableERC20Created(&bind.FilterOpts{Start: 0}, nil, nil)
require.NoError(t, err)
var event *bindings.OptimismMintableERC20FactoryOptimismMintableERC20Created
for it.Next() {
event = it.Event
}
require.NotNil(t, event)
// Approve WETH9 with the bridge
tx, err = WETH9.Approve(opts, predeploys.DevL1StandardBridgeAddr, new(big.Int).SetUint64(math.MaxUint64))
require.NoError(t, err)
_, err = waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err)
// Bridge the WETH9
l1StandardBridge, err := bindings.NewL1StandardBridge(predeploys.DevL1StandardBridgeAddr, l1Client)
require.NoError(t, err)
tx, err = l1StandardBridge.BridgeERC20(opts, weth9Address, event.LocalToken, big.NewInt(100), 100000, []byte{})
require.NoError(t, err)
depositReceipt, err := waitForTransaction(tx.Hash(), l1Client, 3*time.Duration(cfg.DeployConfig.L1BlockTime)*time.Second)
require.NoError(t, err)
t.Log("Deposit through L1StandardBridge", "gas used", depositReceipt.GasUsed)
// compute the deposit transaction hash + poll for it
portal, err := bindings.NewOptimismPortal(predeploys.DevOptimismPortalAddr, l1Client)
require.NoError(t, err)
depIt, err := portal.FilterTransactionDeposited(&bind.FilterOpts{Start: 0}, nil, nil, nil)
require.NoError(t, err)
var depositEvent *bindings.OptimismPortalTransactionDeposited
for depIt.Next() {
depositEvent = depIt.Event
}
require.NotNil(t, depositEvent)
depositTx, err := derive.UnmarshalDepositLogEvent(&depositEvent.Raw)
require.NoError(t, err)
_, err = waitForTransaction(types.NewTx(depositTx).Hash(), l2Client, 3*time.Duration(cfg.DeployConfig.L2BlockTime)*time.Second)
require.NoError(t, err)
// Ensure that the deposit went through
optimismMintableToken, err := bindings.NewOptimismMintableERC20(event.LocalToken, l2Client)
require.NoError(t, err)
// Should have balance on L2
l2Balance, err := optimismMintableToken.BalanceOf(&bind.CallOpts{}, opts.From)
require.NoError(t, err)
require.Equal(t, l2Balance, big.NewInt(100))
}
...@@ -12,6 +12,7 @@ import ( ...@@ -12,6 +12,7 @@ import (
"time" "time"
bss "github.com/ethereum-optimism/optimism/op-batcher/batcher" bss "github.com/ethereum-optimism/optimism/op-batcher/batcher"
batchermetrics "github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-node/chaincfg" "github.com/ethereum-optimism/optimism/op-node/chaincfg"
"github.com/ethereum-optimism/optimism/op-node/sources" "github.com/ethereum-optimism/optimism/op-node/sources"
l2os "github.com/ethereum-optimism/optimism/op-proposer/proposer" l2os "github.com/ethereum-optimism/optimism/op-proposer/proposer"
...@@ -341,7 +342,7 @@ func TestMigration(t *testing.T) { ...@@ -341,7 +342,7 @@ func TestMigration(t *testing.T) {
Format: "text", Format: "text",
}, },
PrivateKey: hexPriv(secrets.Batcher), PrivateKey: hexPriv(secrets.Batcher),
}, lgr.New("module", "batcher")) }, lgr.New("module", "batcher"), batchermetrics.NoopMetrics)
require.NoError(t, err) require.NoError(t, err)
t.Cleanup(func() { t.Cleanup(func() {
batcher.StopIfRunning() batcher.StopIfRunning()
......
...@@ -7,6 +7,7 @@ import ( ...@@ -7,6 +7,7 @@ import (
"math/big" "math/big"
"os" "os"
"path" "path"
"sort"
"strings" "strings"
"testing" "testing"
"time" "time"
...@@ -23,6 +24,7 @@ import ( ...@@ -23,6 +24,7 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
bss "github.com/ethereum-optimism/optimism/op-batcher/batcher" bss "github.com/ethereum-optimism/optimism/op-batcher/batcher"
batchermetrics "github.com/ethereum-optimism/optimism/op-batcher/metrics"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys" "github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-chain-ops/genesis" "github.com/ethereum-optimism/optimism/op-chain-ops/genesis"
"github.com/ethereum-optimism/optimism/op-e2e/e2eutils" "github.com/ethereum-optimism/optimism/op-e2e/e2eutils"
...@@ -119,14 +121,6 @@ func DefaultSystemConfig(t *testing.T) SystemConfig { ...@@ -119,14 +121,6 @@ func DefaultSystemConfig(t *testing.T) SystemConfig {
JWTFilePath: writeDefaultJWT(t), JWTFilePath: writeDefaultJWT(t),
JWTSecret: testingJWTSecret, JWTSecret: testingJWTSecret,
Nodes: map[string]*rollupNode.Config{ Nodes: map[string]*rollupNode.Config{
"verifier": {
Driver: driver.Config{
VerifierConfDepth: 0,
SequencerConfDepth: 0,
SequencerEnabled: false,
},
L1EpochPollInterval: time.Second * 4,
},
"sequencer": { "sequencer": {
Driver: driver.Config{ Driver: driver.Config{
VerifierConfDepth: 0, VerifierConfDepth: 0,
...@@ -141,6 +135,14 @@ func DefaultSystemConfig(t *testing.T) SystemConfig { ...@@ -141,6 +135,14 @@ func DefaultSystemConfig(t *testing.T) SystemConfig {
}, },
L1EpochPollInterval: time.Second * 4, L1EpochPollInterval: time.Second * 4,
}, },
"verifier": {
Driver: driver.Config{
VerifierConfDepth: 0,
SequencerConfDepth: 0,
SequencerEnabled: false,
},
L1EpochPollInterval: time.Second * 4,
},
}, },
Loggers: map[string]log.Logger{ Loggers: map[string]log.Logger{
"verifier": testlog.Logger(t, log.LvlInfo).New("role", "verifier"), "verifier": testlog.Logger(t, log.LvlInfo).New("role", "verifier"),
...@@ -225,7 +227,43 @@ func (sys *System) Close() { ...@@ -225,7 +227,43 @@ func (sys *System) Close() {
sys.Mocknet.Close() sys.Mocknet.Close()
} }
func (cfg SystemConfig) Start() (*System, error) { type systemConfigHook func(sCfg *SystemConfig, s *System)
type SystemConfigOption struct {
key string
role string
action systemConfigHook
}
type SystemConfigOptions struct {
opts map[string]systemConfigHook
}
func NewSystemConfigOptions(_opts []SystemConfigOption) (SystemConfigOptions, error) {
opts := make(map[string]systemConfigHook)
for _, opt := range _opts {
if _, ok := opts[opt.key+":"+opt.role]; ok {
return SystemConfigOptions{}, fmt.Errorf("duplicate option for key %s and role %s", opt.key, opt.role)
}
opts[opt.key+":"+opt.role] = opt.action
}
return SystemConfigOptions{
opts: opts,
}, nil
}
func (s *SystemConfigOptions) Get(key, role string) (systemConfigHook, bool) {
v, ok := s.opts[key+":"+role]
return v, ok
}
func (cfg SystemConfig) Start(_opts ...SystemConfigOption) (*System, error) {
opts, err := NewSystemConfigOptions(_opts)
if err != nil {
return nil, err
}
sys := &System{ sys := &System{
cfg: cfg, cfg: cfg,
Nodes: make(map[string]*node.Node), Nodes: make(map[string]*node.Node),
...@@ -457,7 +495,17 @@ func (cfg SystemConfig) Start() (*System, error) { ...@@ -457,7 +495,17 @@ func (cfg SystemConfig) Start() (*System, error) {
snapLog.SetHandler(log.DiscardHandler()) snapLog.SetHandler(log.DiscardHandler())
// Rollup nodes // Rollup nodes
for name, nodeConfig := range cfg.Nodes {
// Ensure we are looping through the nodes in alphabetical order
ks := make([]string, 0, len(cfg.Nodes))
for k := range cfg.Nodes {
ks = append(ks, k)
}
// Sort strings in ascending alphabetical order
sort.Strings(ks)
for _, name := range ks {
nodeConfig := cfg.Nodes[name]
c := *nodeConfig // copy c := *nodeConfig // copy
c.Rollup = makeRollupConfig() c.Rollup = makeRollupConfig()
...@@ -482,6 +530,10 @@ func (cfg SystemConfig) Start() (*System, error) { ...@@ -482,6 +530,10 @@ func (cfg SystemConfig) Start() (*System, error) {
return nil, err return nil, err
} }
sys.RollupNodes[name] = node sys.RollupNodes[name] = node
if action, ok := opts.Get("afterRollupNodeStart", name); ok {
action(&cfg, sys)
}
} }
if cfg.P2PTopology != nil { if cfg.P2PTopology != nil {
...@@ -549,7 +601,7 @@ func (cfg SystemConfig) Start() (*System, error) { ...@@ -549,7 +601,7 @@ func (cfg SystemConfig) Start() (*System, error) {
Format: "text", Format: "text",
}, },
PrivateKey: hexPriv(cfg.Secrets.Batcher), PrivateKey: hexPriv(cfg.Secrets.Batcher),
}, sys.cfg.Loggers["batcher"]) }, sys.cfg.Loggers["batcher"], batchermetrics.NoopMetrics)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to setup batch submitter: %w", err) return nil, fmt.Errorf("failed to setup batch submitter: %w", err)
} }
......
...@@ -649,8 +649,94 @@ func TestSystemMockP2P(t *testing.T) { ...@@ -649,8 +649,94 @@ func TestSystemMockP2P(t *testing.T) {
require.Contains(t, received, receiptVerif.BlockHash) require.Contains(t, received, receiptVerif.BlockHash)
} }
// TestSystemMockAltSync sets up a L1 Geth node, a rollup node, and a L2 geth node and then confirms that
// the verifier can sync L2 blocks via the alt RPC sync mechanism before they are confirmed on L1.
//
// Test steps:
// 1. Spin up the nodes (P2P is disabled on the verifier)
// 2. Send a transaction to the sequencer.
// 3. Wait for the TX to be mined on the sequencer chain.
// 4. Wait for the verifier to detect a gap in the payload queue vs. the unsafe head
// 5. Wait for the RPC sync method to grab the block from the sequencer over RPC and insert it into the verifier's unsafe chain.
// 6. Wait for the verifier to sync the unsafe chain into the safe chain.
// 7. Verify that the TX is included in the verifier's safe chain.
func TestSystemMockAltSync(t *testing.T) {
parallel(t)
if !verboseGethNodes {
log.Root().SetHandler(log.DiscardHandler())
}
cfg := DefaultSystemConfig(t)
// slow down L1 blocks so we can see the L2 blocks arrive well before the L1 blocks do.
// Keep the seq window small so the L2 chain is started quickly
cfg.DeployConfig.L1BlockTime = 10
var published, received []common.Hash
seqTracer, verifTracer := new(FnTracer), new(FnTracer)
seqTracer.OnPublishL2PayloadFn = func(ctx context.Context, payload *eth.ExecutionPayload) {
published = append(published, payload.BlockHash)
}
verifTracer.OnUnsafeL2PayloadFn = func(ctx context.Context, from peer.ID, payload *eth.ExecutionPayload) {
received = append(received, payload.BlockHash)
}
cfg.Nodes["sequencer"].Tracer = seqTracer
cfg.Nodes["verifier"].Tracer = verifTracer
sys, err := cfg.Start(SystemConfigOption{
key: "afterRollupNodeStart",
role: "sequencer",
action: func(sCfg *SystemConfig, system *System) {
rpc, _ := system.Nodes["sequencer"].Attach() // never errors
cfg.Nodes["verifier"].L2Sync = &rollupNode.L2SyncRPCConfig{
Rpc: client.NewBaseRPCClient(rpc),
}
},
})
require.Nil(t, err, "Error starting up system")
defer sys.Close()
l2Seq := sys.Clients["sequencer"]
l2Verif := sys.Clients["verifier"]
// Transactor Account
ethPrivKey := cfg.Secrets.Alice
// Submit a TX to L2 sequencer node
toAddr := common.Address{0xff, 0xff}
tx := types.MustSignNewTx(ethPrivKey, types.LatestSignerForChainID(cfg.L2ChainIDBig()), &types.DynamicFeeTx{
ChainID: cfg.L2ChainIDBig(),
Nonce: 0,
To: &toAddr,
Value: big.NewInt(1_000_000_000),
GasTipCap: big.NewInt(10),
GasFeeCap: big.NewInt(200),
Gas: 21000,
})
err = l2Seq.SendTransaction(context.Background(), tx)
require.Nil(t, err, "Sending L2 tx to sequencer")
// Wait for tx to be mined on the L2 sequencer chain
receiptSeq, err := waitForTransaction(tx.Hash(), l2Seq, 6*time.Duration(sys.RollupConfig.BlockTime)*time.Second)
require.Nil(t, err, "Waiting for L2 tx on sequencer")
// Wait for alt RPC sync to pick up the blocks on the sequencer chain
receiptVerif, err := waitForTransaction(tx.Hash(), l2Verif, 12*time.Duration(sys.RollupConfig.BlockTime)*time.Second)
require.Nil(t, err, "Waiting for L2 tx on verifier")
require.Equal(t, receiptSeq, receiptVerif)
// Verify that the tx was received via RPC sync (P2P is disabled)
require.Contains(t, received, receiptVerif.BlockHash)
// Verify that everything that was received was published
require.GreaterOrEqual(t, len(published), len(received))
require.ElementsMatch(t, received, published[:len(received)])
}
// TestSystemDenseTopology sets up a dense p2p topology with 3 verifier nodes and 1 sequencer node. // TestSystemDenseTopology sets up a dense p2p topology with 3 verifier nodes and 1 sequencer node.
func TestSystemDenseTopology(t *testing.T) { func TestSystemDenseTopology(t *testing.T) {
t.Skip("Skipping dense topology test to avoid flakiness. @refcell address in p2p scoring pr.")
parallel(t) parallel(t)
if !verboseGethNodes { if !verboseGethNodes {
log.Root().SetHandler(log.DiscardHandler()) log.Root().SetHandler(log.DiscardHandler())
......
...@@ -26,6 +26,16 @@ the transaction hash. ...@@ -26,6 +26,16 @@ the transaction hash.
into channels. It then stores the channels with metadata on disk where the file name is the Channel ID. into channels. It then stores the channels with metadata on disk where the file name is the Channel ID.
### Force Close
`batch_decoder force-close` will create transaction data that can be sent from the batcher address to
the batch inbox address to force close the given channels. This allows future channels to
be read without waiting for the channel timeout. It uses the results from `batch_decoder fetch` to
create the close transaction, because the transaction it creates for a specific channel requires information
about whether the channel has already been closed. If it has already been closed but specific frames are
missing, those frames need to be generated differently than by simply closing the channel.
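
For illustration, a hypothetical invocation might look like the following; the channel ID and inbox address are placeholder values, and `--in` defaults to the cache directory written by `batch_decoder fetch`:

```sh
# Print the calldata (hex) that force-closes the given channel.
# The channel ID and inbox address below are example values only.
batch_decoder force-close \
  --id deadbeefdeadbeefdeadbeefdeadbeef \
  --inbox 0xff00000000000000000000000000000000000010 \
  --in /tmp/batch_decoder/transactions_cache
```

The printed hex data can then be submitted as calldata in a transaction from the batcher address to the batch inbox address.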
## JQ Cheat Sheet ## JQ Cheat Sheet
`jq` is a really useful utility for manipulating JSON files. `jq` is a really useful utility for manipulating JSON files.
...@@ -48,7 +58,6 @@ jq "select(.is_ready == false)|[.id, .frames[0].inclusion_block, .frames[0].tran ...@@ -48,7 +58,6 @@ jq "select(.is_ready == false)|[.id, .frames[0].inclusion_block, .frames[0].tran
## Roadmap ## Roadmap
- Parallel transaction fetching (CLI-3563) - Parallel transaction fetching (CLI-3563)
- Create force-close channel tx data from channel ID (CLI-3564)
- Pull the batches out of channels & store that information inside the ChannelWithMetadata (CLI-3565) - Pull the batches out of channels & store that information inside the ChannelWithMetadata (CLI-3565)
- Transaction Bytes used - Transaction Bytes used
- Total uncompressed (different from tx bytes) + compressed bytes - Total uncompressed (different from tx bytes) + compressed bytes
......
...@@ -9,6 +9,7 @@ import ( ...@@ -9,6 +9,7 @@ import (
"github.com/ethereum-optimism/optimism/op-node/cmd/batch_decoder/fetch" "github.com/ethereum-optimism/optimism/op-node/cmd/batch_decoder/fetch"
"github.com/ethereum-optimism/optimism/op-node/cmd/batch_decoder/reassemble" "github.com/ethereum-optimism/optimism/op-node/cmd/batch_decoder/reassemble"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethclient" "github.com/ethereum/go-ethereum/ethclient"
"github.com/urfave/cli" "github.com/urfave/cli"
...@@ -113,6 +114,46 @@ func main() { ...@@ -113,6 +114,46 @@ func main() {
return nil return nil
}, },
}, },
{
Name: "force-close",
Usage: "Create the tx data which will force close a channel",
Flags: []cli.Flag{
cli.StringFlag{
Name: "id",
Required: true,
Usage: "ID of the channel to close",
},
cli.StringFlag{
Name: "inbox",
Value: "0x0000000000000000000000000000000000000000",
Usage: "(Optional) Batch Inbox Address",
},
cli.StringFlag{
Name: "in",
Value: "/tmp/batch_decoder/transactions_cache",
Usage: "Cache directory for the found transactions",
},
},
Action: func(cliCtx *cli.Context) error {
var id derive.ChannelID
if err := (&id).UnmarshalText([]byte(cliCtx.String("id"))); err != nil {
log.Fatal(err)
}
frames := reassemble.LoadFrames(cliCtx.String("in"), common.HexToAddress(cliCtx.String("inbox")))
var filteredFrames []derive.Frame
for _, frame := range frames {
if frame.Frame.ID == id {
filteredFrames = append(filteredFrames, frame.Frame)
}
}
data, err := derive.ForceCloseTxData(filteredFrames)
if err != nil {
log.Fatal(err)
}
fmt.Printf("%x\n", data)
return nil
},
},
} }
if err := app.Run(os.Args); err != nil { if err := app.Run(os.Args); err != nil {
......
...@@ -38,14 +38,8 @@ type Config struct { ...@@ -38,14 +38,8 @@ type Config struct {
OutDirectory string OutDirectory string
} }
// Channels loads all transactions from the given input directory that are submitted to the func LoadFrames(directory string, inbox common.Address) []FrameWithMetadata {
// specified batch inbox and then re-assembles all channels & writes the re-assembled channels txns := loadTransactions(directory, inbox)
// to the out directory.
func Channels(config Config) {
if err := os.MkdirAll(config.OutDirectory, 0750); err != nil {
log.Fatal(err)
}
txns := loadTransactions(config.InDirectory, config.BatchInbox)
// Sort first by block number then by transaction index inside the block number range. // Sort first by block number then by transaction index inside the block number range.
// This is to match the order they are processed in derivation. // This is to match the order they are processed in derivation.
sort.Slice(txns, func(i, j int) bool { sort.Slice(txns, func(i, j int) bool {
...@@ -56,7 +50,17 @@ func Channels(config Config) { ...@@ -56,7 +50,17 @@ func Channels(config Config) {
} }
}) })
frames := transactionsToFrames(txns) return transactionsToFrames(txns)
}
// Channels loads all transactions from the given input directory that are submitted to the
// specified batch inbox and then re-assembles all channels & writes the re-assembled channels
// to the out directory.
func Channels(config Config) {
if err := os.MkdirAll(config.OutDirectory, 0750); err != nil {
log.Fatal(err)
}
frames := LoadFrames(config.InDirectory, config.BatchInbox)
framesByChannel := make(map[derive.ChannelID][]FrameWithMetadata) framesByChannel := make(map[derive.ChannelID][]FrameWithMetadata)
for _, frame := range frames { for _, frame := range frames {
framesByChannel[frame.Frame.ID] = append(framesByChannel[frame.Frame.ID], frame) framesByChannel[frame.Frame.ID] = append(framesByChannel[frame.Frame.ID], frame)
...@@ -143,6 +147,7 @@ func transactionsToFrames(txns []fetch.TransactionWithMetadata) []FrameWithMetad ...@@ -143,6 +147,7 @@ func transactionsToFrames(txns []fetch.TransactionWithMetadata) []FrameWithMetad
return out return out
} }
// if inbox is the zero address, transactions for all inbox addresses are loaded
func loadTransactions(dir string, inbox common.Address) []fetch.TransactionWithMetadata { func loadTransactions(dir string, inbox common.Address) []fetch.TransactionWithMetadata {
files, err := os.ReadDir(dir) files, err := os.ReadDir(dir)
if err != nil { if err != nil {
...@@ -152,7 +157,7 @@ func loadTransactions(dir string, inbox common.Address) []fetch.TransactionWithM ...@@ -152,7 +157,7 @@ func loadTransactions(dir string, inbox common.Address) []fetch.TransactionWithM
for _, file := range files { for _, file := range files {
f := path.Join(dir, file.Name()) f := path.Join(dir, file.Name())
txm := loadTransactionsFile(f) txm := loadTransactionsFile(f)
if txm.InboxAddr == inbox && txm.ValidSender { if (inbox == common.Address{} || txm.InboxAddr == inbox) && txm.ValidSender {
out = append(out, txm) out = append(out, txm)
} }
} }
......
...@@ -7,6 +7,7 @@ import ( ...@@ -7,6 +7,7 @@ import (
"github.com/ethereum-optimism/optimism/op-node/chaincfg" "github.com/ethereum-optimism/optimism/op-node/chaincfg"
"github.com/ethereum-optimism/optimism/op-node/sources" "github.com/ethereum-optimism/optimism/op-node/sources"
oplog "github.com/ethereum-optimism/optimism/op-service/log"
"github.com/urfave/cli" "github.com/urfave/cli"
) )
...@@ -113,23 +114,6 @@ var ( ...@@ -113,23 +114,6 @@ var (
Required: false, Required: false,
Value: time.Second * 12 * 32, Value: time.Second * 12 * 32,
} }
LogLevelFlag = cli.StringFlag{
Name: "log.level",
Usage: "The lowest log level that will be output",
Value: "info",
EnvVar: prefixEnvVar("LOG_LEVEL"),
}
LogFormatFlag = cli.StringFlag{
Name: "log.format",
Usage: "Format the log output. Supported formats: 'text', 'json'",
Value: "text",
EnvVar: prefixEnvVar("LOG_FORMAT"),
}
LogColorFlag = cli.BoolFlag{
Name: "log.color",
Usage: "Color the log output",
EnvVar: prefixEnvVar("LOG_COLOR"),
}
MetricsEnabledFlag = cli.BoolFlag{ MetricsEnabledFlag = cli.BoolFlag{
Name: "metrics.enabled", Name: "metrics.enabled",
Usage: "Enable the metrics server", Usage: "Enable the metrics server",
...@@ -185,6 +169,12 @@ var ( ...@@ -185,6 +169,12 @@ var (
EnvVar: prefixEnvVar("HEARTBEAT_URL"), EnvVar: prefixEnvVar("HEARTBEAT_URL"),
Value: "https://heartbeat.optimism.io", Value: "https://heartbeat.optimism.io",
} }
BackupL2UnsafeSyncRPC = cli.StringFlag{
Name: "l2.backup-unsafe-sync-rpc",
Usage: "Set the backup L2 unsafe sync RPC endpoint.",
EnvVar: prefixEnvVar("L2_BACKUP_UNSAFE_SYNC_RPC"),
Required: false,
}
) )
var requiredFlags = []cli.Flag{ var requiredFlags = []cli.Flag{
...@@ -194,7 +184,7 @@ var requiredFlags = []cli.Flag{ ...@@ -194,7 +184,7 @@ var requiredFlags = []cli.Flag{
RPCListenPort, RPCListenPort,
} }
var optionalFlags = append([]cli.Flag{ var optionalFlags = []cli.Flag{
RollupConfig, RollupConfig,
Network, Network,
L1TrustRPC, L1TrustRPC,
...@@ -205,9 +195,6 @@ var optionalFlags = append([]cli.Flag{ ...@@ -205,9 +195,6 @@ var optionalFlags = append([]cli.Flag{
SequencerStoppedFlag, SequencerStoppedFlag,
SequencerL1Confs, SequencerL1Confs,
L1EpochPollIntervalFlag, L1EpochPollIntervalFlag,
LogLevelFlag,
LogFormatFlag,
LogColorFlag,
RPCEnableAdmin, RPCEnableAdmin,
MetricsEnabledFlag, MetricsEnabledFlag,
MetricsAddrFlag, MetricsAddrFlag,
...@@ -219,10 +206,17 @@ var optionalFlags = append([]cli.Flag{ ...@@ -219,10 +206,17 @@ var optionalFlags = append([]cli.Flag{
HeartbeatEnabledFlag, HeartbeatEnabledFlag,
HeartbeatMonikerFlag, HeartbeatMonikerFlag,
HeartbeatURLFlag, HeartbeatURLFlag,
}, p2pFlags...) BackupL2UnsafeSyncRPC,
}
// Flags contains the list of configuration options available to the binary. // Flags contains the list of configuration options available to the binary.
var Flags = append(requiredFlags, optionalFlags...) var Flags []cli.Flag
func init() {
optionalFlags = append(optionalFlags, p2pFlags...)
optionalFlags = append(optionalFlags, oplog.CLIFlags(envVarPrefix)...)
Flags = append(requiredFlags, optionalFlags...)
}
func CheckRequired(ctx *cli.Context) error { func CheckRequired(ctx *cli.Context) error {
l1NodeAddr := ctx.GlobalString(L1NodeAddr.Name) l1NodeAddr := ctx.GlobalString(L1NodeAddr.Name)
......
...@@ -9,6 +9,7 @@ import ( ...@@ -9,6 +9,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil" "github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum-optimism/optimism/op-bindings/bindings"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys" "github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup" "github.com/ethereum-optimism/optimism/op-node/rollup"
...@@ -114,7 +115,16 @@ func (n *nodeAPI) OutputAtBlock(ctx context.Context, number hexutil.Uint64) (*et ...@@ -114,7 +115,16 @@ func (n *nodeAPI) OutputAtBlock(ctx context.Context, number hexutil.Uint64) (*et
} }
var l2OutputRootVersion eth.Bytes32 // it's zero for now var l2OutputRootVersion eth.Bytes32 // it's zero for now
l2OutputRoot := rollup.ComputeL2OutputRoot(l2OutputRootVersion, head.Hash(), head.Root(), proof.StorageHash) l2OutputRoot, err := rollup.ComputeL2OutputRoot(&bindings.TypesOutputRootProof{
Version: l2OutputRootVersion,
StateRoot: head.Root(),
MessagePasserStorageRoot: proof.StorageHash,
LatestBlockhash: head.Hash(),
})
if err != nil {
n.log.Error("Error computing L2 output root, nil ptr passed to hashing function")
return nil, err
}
return &eth.OutputResponse{ return &eth.OutputResponse{
Version: l2OutputRootVersion, Version: l2OutputRootVersion,
......
...@@ -19,6 +19,11 @@ type L2EndpointSetup interface { ...@@ -19,6 +19,11 @@ type L2EndpointSetup interface {
Check() error Check() error
} }
type L2SyncEndpointSetup interface {
Setup(ctx context.Context, log log.Logger) (cl client.RPC, err error)
Check() error
}
type L1EndpointSetup interface { type L1EndpointSetup interface {
// Setup a RPC client to a L1 node to pull rollup input-data from. // Setup a RPC client to a L1 node to pull rollup input-data from.
// The results of the RPC client may be trusted for faster processing, or strictly validated. // The results of the RPC client may be trusted for faster processing, or strictly validated.
...@@ -75,6 +80,50 @@ func (p *PreparedL2Endpoints) Setup(ctx context.Context, log log.Logger) (client ...@@ -75,6 +80,50 @@ func (p *PreparedL2Endpoints) Setup(ctx context.Context, log log.Logger) (client
return p.Client, nil return p.Client, nil
} }
// L2SyncEndpointConfig contains configuration for the fallback sync endpoint
type L2SyncEndpointConfig struct {
// Address of the L2 RPC to use for backup sync
L2NodeAddr string
}
var _ L2SyncEndpointSetup = (*L2SyncEndpointConfig)(nil)
func (cfg *L2SyncEndpointConfig) Setup(ctx context.Context, log log.Logger) (client.RPC, error) {
l2Node, err := client.NewRPC(ctx, log, cfg.L2NodeAddr)
if err != nil {
return nil, err
}
return l2Node, nil
}
func (cfg *L2SyncEndpointConfig) Check() error {
if cfg.L2NodeAddr == "" {
return errors.New("empty L2 Node Address")
}
return nil
}
type L2SyncRPCConfig struct {
// RPC endpoint to use for syncing
Rpc client.RPC
}
var _ L2SyncEndpointSetup = (*L2SyncRPCConfig)(nil)
func (cfg *L2SyncRPCConfig) Setup(ctx context.Context, log log.Logger) (client.RPC, error) {
return cfg.Rpc, nil
}
func (cfg *L2SyncRPCConfig) Check() error {
if cfg.Rpc == nil {
return errors.New("rpc cannot be nil")
}
return nil
}
type L1EndpointConfig struct { type L1EndpointConfig struct {
L1NodeAddr string // Address of L1 User JSON-RPC endpoint to use (eth namespace required) L1NodeAddr string // Address of L1 User JSON-RPC endpoint to use (eth namespace required)
......
...@@ -15,6 +15,7 @@ import ( ...@@ -15,6 +15,7 @@ import (
type Config struct { type Config struct {
L1 L1EndpointSetup L1 L1EndpointSetup
L2 L2EndpointSetup L2 L2EndpointSetup
L2Sync L2SyncEndpointSetup
Driver driver.Config Driver driver.Config
......
...@@ -197,7 +197,28 @@ func (n *OpNode) initL2(ctx context.Context, cfg *Config, snapshotLog log.Logger ...@@ -197,7 +197,28 @@ func (n *OpNode) initL2(ctx context.Context, cfg *Config, snapshotLog log.Logger
return err return err
} }
n.l2Driver = driver.NewDriver(&cfg.Driver, &cfg.Rollup, n.l2Source, n.l1Source, n, n.log, snapshotLog, n.metrics) var syncClient *sources.SyncClient
// If the L2 sync config is present, use it to create a sync client
if cfg.L2Sync != nil {
if err := cfg.L2Sync.Check(); err != nil {
log.Info("L2 sync config is not present, skipping L2 sync client setup", "err", err)
} else {
rpcSyncClient, err := cfg.L2Sync.Setup(ctx, n.log)
if err != nil {
return fmt.Errorf("failed to setup L2 execution-engine RPC client for backup sync: %w", err)
}
// The sync client's RPC is always trusted
config := sources.SyncClientDefaultConfig(&cfg.Rollup, true)
syncClient, err = sources.NewSyncClient(n.OnUnsafeL2Payload, rpcSyncClient, n.log, n.metrics.L2SourceCache, config)
if err != nil {
return fmt.Errorf("failed to create sync client: %w", err)
}
}
}
n.l2Driver = driver.NewDriver(&cfg.Driver, &cfg.Rollup, n.l2Source, n.l1Source, syncClient, n, n.log, snapshotLog, n.metrics)
return nil return nil
} }
...@@ -263,13 +284,21 @@ func (n *OpNode) initP2PSigner(ctx context.Context, cfg *Config) error { ...@@ -263,13 +284,21 @@ func (n *OpNode) initP2PSigner(ctx context.Context, cfg *Config) error {
func (n *OpNode) Start(ctx context.Context) error { func (n *OpNode) Start(ctx context.Context) error {
n.log.Info("Starting execution engine driver") n.log.Info("Starting execution engine driver")
// start driving engine: sync blocks by deriving them from L1 and driving them into the engine // start driving engine: sync blocks by deriving them from L1 and driving them into the engine
err := n.l2Driver.Start() if err := n.l2Driver.Start(); err != nil {
if err != nil {
n.log.Error("Could not start a rollup node", "err", err) n.log.Error("Could not start a rollup node", "err", err)
return err return err
} }
// If the backup unsafe sync client is enabled, start its event loop
if n.l2Driver.L2SyncCl != nil {
if err := n.l2Driver.L2SyncCl.Start(); err != nil {
n.log.Error("Could not start the backup sync client", "err", err)
return err
}
}
return nil return nil
} }
...@@ -382,6 +411,13 @@ func (n *OpNode) Close() error { ...@@ -382,6 +411,13 @@ func (n *OpNode) Close() error {
if err := n.l2Driver.Close(); err != nil { if err := n.l2Driver.Close(); err != nil {
result = multierror.Append(result, fmt.Errorf("failed to close L2 engine driver cleanly: %w", err)) result = multierror.Append(result, fmt.Errorf("failed to close L2 engine driver cleanly: %w", err))
} }
// If the L2 sync client is present & running, close it.
if n.l2Driver.L2SyncCl != nil {
if err := n.l2Driver.L2SyncCl.Close(); err != nil {
result = multierror.Append(result, fmt.Errorf("failed to close L2 engine backup sync client cleanly: %w", err))
}
}
} }
// close L2 engine RPC client // close L2 engine RPC client
......
...@@ -139,6 +139,7 @@ func (bq *BatchQueue) AddBatch(batch *BatchData, l2SafeHead eth.L2BlockRef) { ...@@ -139,6 +139,7 @@ func (bq *BatchQueue) AddBatch(batch *BatchData, l2SafeHead eth.L2BlockRef) {
if validity == BatchDrop { if validity == BatchDrop {
return // if we do drop the batch, CheckBatch will log the drop reason with WARN level. return // if we do drop the batch, CheckBatch will log the drop reason with WARN level.
} }
bq.log.Debug("Adding batch", "batch_timestamp", batch.Timestamp, "parent_hash", batch.ParentHash, "batch_epoch", batch.Epoch(), "txs", len(batch.Transactions))
bq.batches[batch.Timestamp] = append(bq.batches[batch.Timestamp], &data) bq.batches[batch.Timestamp] = append(bq.batches[batch.Timestamp], &data)
} }
...@@ -212,7 +213,7 @@ batchLoop: ...@@ -212,7 +213,7 @@ batchLoop:
if nextBatch.Batch.EpochNum == rollup.Epoch(epoch.Number)+1 { if nextBatch.Batch.EpochNum == rollup.Epoch(epoch.Number)+1 {
bq.l1Blocks = bq.l1Blocks[1:] bq.l1Blocks = bq.l1Blocks[1:]
} }
bq.log.Trace("Returning found batch", "epoch", epoch, "batch_epoch", nextBatch.Batch.EpochNum, "batch_timestamp", nextBatch.Batch.Timestamp) bq.log.Info("Found next batch", "epoch", epoch, "batch_epoch", nextBatch.Batch.EpochNum, "batch_timestamp", nextBatch.Batch.Timestamp)
return nextBatch.Batch, nil return nextBatch.Batch, nil
} }
...@@ -241,7 +242,7 @@ batchLoop: ...@@ -241,7 +242,7 @@ batchLoop:
// to preserve that L2 time >= L1 time. If this is the first block of the epoch, always generate a // to preserve that L2 time >= L1 time. If this is the first block of the epoch, always generate a
// batch to ensure that we at least have one batch per epoch. // batch to ensure that we at least have one batch per epoch.
if nextTimestamp < nextEpoch.Time || firstOfEpoch { if nextTimestamp < nextEpoch.Time || firstOfEpoch {
bq.log.Trace("Generating next batch", "epoch", epoch, "timestamp", nextTimestamp) bq.log.Info("Generating next batch", "epoch", epoch, "timestamp", nextTimestamp)
return &BatchData{ return &BatchData{
BatchV1{ BatchV1{
ParentHash: l2SafeHead.Hash, ParentHash: l2SafeHead.Hash,
......
...@@ -69,6 +69,7 @@ func (cb *ChannelBank) prune() { ...@@ -69,6 +69,7 @@ func (cb *ChannelBank) prune() {
ch := cb.channels[id] ch := cb.channels[id]
cb.channelQueue = cb.channelQueue[1:] cb.channelQueue = cb.channelQueue[1:]
delete(cb.channels, id) delete(cb.channels, id)
cb.log.Info("pruning channel", "channel", id, "totalSize", totalSize, "channel_size", ch.size, "remaining_channel_count", len(cb.channels))
totalSize -= ch.size totalSize -= ch.size
} }
} }
...@@ -77,7 +78,7 @@ func (cb *ChannelBank) prune() { ...@@ -77,7 +78,7 @@ func (cb *ChannelBank) prune() {
// Read() should be called repeatedly first, until everything has been read, before adding new data. // Read() should be called repeatedly first, until everything has been read, before adding new data.
func (cb *ChannelBank) IngestFrame(f Frame) { func (cb *ChannelBank) IngestFrame(f Frame) {
origin := cb.Origin() origin := cb.Origin()
log := log.New("origin", origin, "channel", f.ID, "length", len(f.Data), "frame_number", f.FrameNumber, "is_last", f.IsLast) log := cb.log.New("origin", origin, "channel", f.ID, "length", len(f.Data), "frame_number", f.FrameNumber, "is_last", f.IsLast)
log.Debug("channel bank got new data") log.Debug("channel bank got new data")
currentCh, ok := cb.channels[f.ID] currentCh, ok := cb.channels[f.ID]
...@@ -86,6 +87,7 @@ func (cb *ChannelBank) IngestFrame(f Frame) { ...@@ -86,6 +87,7 @@ func (cb *ChannelBank) IngestFrame(f Frame) {
currentCh = NewChannel(f.ID, origin) currentCh = NewChannel(f.ID, origin)
cb.channels[f.ID] = currentCh cb.channels[f.ID] = currentCh
cb.channelQueue = append(cb.channelQueue, f.ID) cb.channelQueue = append(cb.channelQueue, f.ID)
log.Info("created new channel")
} }
// check if the channel is not timed out // check if the channel is not timed out
...@@ -114,7 +116,7 @@ func (cb *ChannelBank) Read() (data []byte, err error) { ...@@ -114,7 +116,7 @@ func (cb *ChannelBank) Read() (data []byte, err error) {
ch := cb.channels[first] ch := cb.channels[first]
timedOut := ch.OpenBlockNumber()+cb.cfg.ChannelTimeout < cb.Origin().Number timedOut := ch.OpenBlockNumber()+cb.cfg.ChannelTimeout < cb.Origin().Number
if timedOut { if timedOut {
cb.log.Debug("channel timed out", "channel", first, "frames", len(ch.inputs)) cb.log.Info("channel timed out", "channel", first, "frames", len(ch.inputs))
delete(cb.channels, first) delete(cb.channels, first)
cb.channelQueue = cb.channelQueue[1:] cb.channelQueue = cb.channelQueue[1:]
return nil, nil // multiple different channels may all be timed out return nil, nil // multiple different channels may all be timed out
...@@ -137,7 +139,6 @@ func (cb *ChannelBank) Read() (data []byte, err error) { ...@@ -137,7 +139,6 @@ func (cb *ChannelBank) Read() (data []byte, err error) {
// consistency around channel bank pruning which depends upon the order // consistency around channel bank pruning which depends upon the order
// of operations. // of operations.
func (cb *ChannelBank) NextData(ctx context.Context) ([]byte, error) { func (cb *ChannelBank) NextData(ctx context.Context) ([]byte, error) {
// Do the read from the channel bank first // Do the read from the channel bank first
data, err := cb.Read() data, err := cb.Read()
if err == io.EOF { if err == io.EOF {
......
...@@ -76,7 +76,7 @@ func (co *ChannelOut) AddBlock(block *types.Block) (uint64, error) { ...@@ -76,7 +76,7 @@ func (co *ChannelOut) AddBlock(block *types.Block) (uint64, error) {
return 0, errors.New("already closed") return 0, errors.New("already closed")
} }
batch, err := BlockToBatch(block) batch, _, err := BlockToBatch(block)
if err != nil { if err != nil {
return 0, err return 0, err
} }
...@@ -182,7 +182,7 @@ func (co *ChannelOut) OutputFrame(w *bytes.Buffer, maxSize uint64) (uint16, erro ...@@ -182,7 +182,7 @@ func (co *ChannelOut) OutputFrame(w *bytes.Buffer, maxSize uint64) (uint16, erro
} }
// BlockToBatch transforms a block into a batch object that can easily be RLP encoded. // BlockToBatch transforms a block into a batch object that can easily be RLP encoded.
func BlockToBatch(block *types.Block) (*BatchData, error) { func BlockToBatch(block *types.Block) (*BatchData, L1BlockInfo, error) {
opaqueTxs := make([]hexutil.Bytes, 0, len(block.Transactions())) opaqueTxs := make([]hexutil.Bytes, 0, len(block.Transactions()))
for i, tx := range block.Transactions() { for i, tx := range block.Transactions() {
if tx.Type() == types.DepositTxType { if tx.Type() == types.DepositTxType {
...@@ -190,17 +190,17 @@ func BlockToBatch(block *types.Block) (*BatchData, error) { ...@@ -190,17 +190,17 @@ func BlockToBatch(block *types.Block) (*BatchData, error) {
} }
otx, err := tx.MarshalBinary() otx, err := tx.MarshalBinary()
if err != nil { if err != nil {
return nil, fmt.Errorf("could not encode tx %v in block %v: %w", i, tx.Hash(), err) return nil, L1BlockInfo{}, fmt.Errorf("could not encode tx %v in block %v: %w", i, tx.Hash(), err)
} }
opaqueTxs = append(opaqueTxs, otx) opaqueTxs = append(opaqueTxs, otx)
} }
l1InfoTx := block.Transactions()[0] l1InfoTx := block.Transactions()[0]
if l1InfoTx.Type() != types.DepositTxType { if l1InfoTx.Type() != types.DepositTxType {
return nil, ErrNotDepositTx return nil, L1BlockInfo{}, ErrNotDepositTx
} }
l1Info, err := L1InfoDepositTxData(l1InfoTx.Data()) l1Info, err := L1InfoDepositTxData(l1InfoTx.Data())
if err != nil { if err != nil {
return nil, fmt.Errorf("could not parse the L1 Info deposit: %w", err) return nil, l1Info, fmt.Errorf("could not parse the L1 Info deposit: %w", err)
} }
return &BatchData{ return &BatchData{
...@@ -211,5 +211,60 @@ func BlockToBatch(block *types.Block) (*BatchData, error) { ...@@ -211,5 +211,60 @@ func BlockToBatch(block *types.Block) (*BatchData, error) {
Timestamp: block.Time(), Timestamp: block.Time(),
Transactions: opaqueTxs, Transactions: opaqueTxs,
}, },
}, nil }, l1Info, nil
}
// ForceCloseTxData generates the transaction data for a transaction which will force close
// a channel. It should be given every frame of that channel which has been submitted on
// chain. The frames should be given in the order in which they appear on L1.
func ForceCloseTxData(frames []Frame) ([]byte, error) {
if len(frames) == 0 {
return nil, errors.New("must provide at least one frame")
}
frameNumbers := make(map[uint16]struct{})
id := frames[0].ID
closeNumber := uint16(0)
closed := false
for i, frame := range frames {
if !closed && frame.IsLast {
closeNumber = frame.FrameNumber
}
closed = closed || frame.IsLast
frameNumbers[frame.FrameNumber] = struct{}{}
if frame.ID != id {
return nil, fmt.Errorf("invalid ID in list: first ID: %v, %vth ID: %v", id, i, frame.ID)
}
}
var out bytes.Buffer
out.WriteByte(DerivationVersion0)
if !closed {
f := Frame{
ID: id,
FrameNumber: 0,
Data: nil,
IsLast: true,
}
if err := f.MarshalBinary(&out); err != nil {
return nil, err
}
} else {
for i := uint16(0); i <= closeNumber; i++ {
if _, ok := frameNumbers[i]; ok {
continue
}
f := Frame{
ID: id,
FrameNumber: i,
Data: nil,
IsLast: false,
}
if err := f.MarshalBinary(&out); err != nil {
return nil, err
}
}
}
return out.Bytes(), nil
} }
...@@ -5,6 +5,7 @@ import ( ...@@ -5,6 +5,7 @@ import (
"math/big" "math/big"
"testing" "testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/rlp" "github.com/ethereum/go-ethereum/rlp"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
...@@ -49,3 +50,69 @@ func TestRLPByteLimit(t *testing.T) { ...@@ -49,3 +50,69 @@ func TestRLPByteLimit(t *testing.T) {
require.Equal(t, err, rlp.ErrValueTooLarge) require.Equal(t, err, rlp.ErrValueTooLarge)
require.Equal(t, out2, "") require.Equal(t, out2, "")
} }
func TestForceCloseTxData(t *testing.T) {
id := [16]byte{0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef}
tests := []struct {
frames []Frame
errors bool
output string
}{
{
frames: []Frame{},
errors: true,
output: "",
},
{
frames: []Frame{Frame{FrameNumber: 0, IsLast: false}, Frame{ID: id, FrameNumber: 1, IsLast: true}},
errors: true,
output: "",
},
{
frames: []Frame{Frame{ID: id, FrameNumber: 0, IsLast: false}},
errors: false,
output: "00deadbeefdeadbeefdeadbeefdeadbeef00000000000001",
},
{
frames: []Frame{Frame{ID: id, FrameNumber: 0, IsLast: true}},
errors: false,
output: "00",
},
{
frames: []Frame{Frame{ID: id, FrameNumber: 1, IsLast: false}},
errors: false,
output: "00deadbeefdeadbeefdeadbeefdeadbeef00000000000001",
},
{
frames: []Frame{Frame{ID: id, FrameNumber: 1, IsLast: true}},
errors: false,
output: "00deadbeefdeadbeefdeadbeefdeadbeef00000000000000",
},
{
frames: []Frame{Frame{ID: id, FrameNumber: 2, IsLast: true}},
errors: false,
output: "00deadbeefdeadbeefdeadbeefdeadbeef00000000000000deadbeefdeadbeefdeadbeefdeadbeef00010000000000",
},
{
frames: []Frame{Frame{ID: id, FrameNumber: 1, IsLast: false}, Frame{ID: id, FrameNumber: 3, IsLast: true}},
errors: false,
output: "00deadbeefdeadbeefdeadbeefdeadbeef00000000000000deadbeefdeadbeefdeadbeefdeadbeef00020000000000",
},
{
frames: []Frame{Frame{ID: id, FrameNumber: 1, IsLast: false}, Frame{ID: id, FrameNumber: 3, IsLast: true}, Frame{ID: id, FrameNumber: 5, IsLast: true}},
errors: false,
output: "00deadbeefdeadbeefdeadbeefdeadbeef00000000000000deadbeefdeadbeefdeadbeefdeadbeef00020000000000",
},
}
for i, test := range tests {
out, err := ForceCloseTxData(test.frames)
if test.errors {
require.NotNil(t, err, "Should error on tc %v", i)
require.Nil(t, out, "Should return no value in tc %v", i)
} else {
require.NoError(t, err, "Should not error on tc %v", i)
require.Equal(t, common.FromHex(test.output), out, "Should match output tc %v", i)
}
}
}
...@@ -104,6 +104,8 @@ type EngineQueue struct { ...@@ -104,6 +104,8 @@ type EngineQueue struct {
finalizedL1 eth.L1BlockRef finalizedL1 eth.L1BlockRef
// The queued-up attributes
safeAttributesParent eth.L2BlockRef
safeAttributes *eth.PayloadAttributes safeAttributes *eth.PayloadAttributes
unsafePayloads PayloadsQueue // queue of unsafe payloads, ordered by ascending block number, may have gaps unsafePayloads PayloadsQueue // queue of unsafe payloads, ordered by ascending block number, may have gaps
...@@ -133,6 +135,7 @@ func NewEngineQueue(log log.Logger, cfg *rollup.Config, engine Engine, metrics M ...@@ -133,6 +135,7 @@ func NewEngineQueue(log log.Logger, cfg *rollup.Config, engine Engine, metrics M
unsafePayloads: PayloadsQueue{ unsafePayloads: PayloadsQueue{
MaxSize: maxUnsafePayloadsMemory, MaxSize: maxUnsafePayloadsMemory,
SizeFn: payloadMemSize, SizeFn: payloadMemSize,
blockNos: make(map[uint64]bool),
}, },
prev: prev, prev: prev,
l1Fetcher: l1Fetcher, l1Fetcher: l1Fetcher,
...@@ -224,6 +227,7 @@ func (eq *EngineQueue) Step(ctx context.Context) error { ...@@ -224,6 +227,7 @@ func (eq *EngineQueue) Step(ctx context.Context) error {
return err return err
} else { } else {
eq.safeAttributes = next eq.safeAttributes = next
eq.safeAttributesParent = eq.safeHead
eq.log.Debug("Adding next safe attributes", "safe_head", eq.safeHead, "next", eq.safeAttributes) eq.log.Debug("Adding next safe attributes", "safe_head", eq.safeHead, "next", eq.safeAttributes)
return NotEnoughData return NotEnoughData
} }
...@@ -426,6 +430,20 @@ func (eq *EngineQueue) tryNextUnsafePayload(ctx context.Context) error { ...@@ -426,6 +430,20 @@ func (eq *EngineQueue) tryNextUnsafePayload(ctx context.Context) error {
} }
func (eq *EngineQueue) tryNextSafeAttributes(ctx context.Context) error { func (eq *EngineQueue) tryNextSafeAttributes(ctx context.Context) error {
if eq.safeAttributes == nil { // sanity check the attributes are there
return nil
}
// validate the safe attributes before processing them. The engine may have completed processing them through other means.
if eq.safeHead != eq.safeAttributesParent {
if eq.safeHead.ParentHash != eq.safeAttributesParent.Hash {
return NewResetError(fmt.Errorf("safe head changed to %s with parent %s, conflicting with queued safe attributes on top of %s",
eq.safeHead, eq.safeHead.ParentID(), eq.safeAttributesParent))
}
eq.log.Warn("queued safe attributes are stale, safe-head progressed",
"safe_head", eq.safeHead, "safe_head_parent", eq.safeHead.ParentID(), "attributes_parent", eq.safeAttributesParent)
eq.safeAttributes = nil
return nil
}
if eq.safeHead.Number < eq.unsafeHead.Number { if eq.safeHead.Number < eq.unsafeHead.Number {
return eq.consolidateNextSafeAttributes(ctx) return eq.consolidateNextSafeAttributes(ctx)
} else if eq.safeHead.Number == eq.unsafeHead.Number { } else if eq.safeHead.Number == eq.unsafeHead.Number {
...@@ -485,14 +503,15 @@ func (eq *EngineQueue) forceNextSafeAttributes(ctx context.Context) error { ...@@ -485,14 +503,15 @@ func (eq *EngineQueue) forceNextSafeAttributes(ctx context.Context) error {
_, errType, err = eq.ConfirmPayload(ctx) _, errType, err = eq.ConfirmPayload(ctx)
} }
if err != nil { if err != nil {
_ = eq.CancelPayload(ctx, true)
switch errType { switch errType {
case BlockInsertTemporaryErr: case BlockInsertTemporaryErr:
// RPC errors are recoverable, we can retry the buffered payload attributes later. // RPC errors are recoverable, we can retry the buffered payload attributes later.
return NewTemporaryError(fmt.Errorf("temporarily cannot insert new safe block: %w", err)) return NewTemporaryError(fmt.Errorf("temporarily cannot insert new safe block: %w", err))
case BlockInsertPrestateErr: case BlockInsertPrestateErr:
_ = eq.CancelPayload(ctx, true)
return NewResetError(fmt.Errorf("need reset to resolve pre-state problem: %w", err)) return NewResetError(fmt.Errorf("need reset to resolve pre-state problem: %w", err))
case BlockInsertPayloadErr: case BlockInsertPayloadErr:
_ = eq.CancelPayload(ctx, true)
eq.log.Warn("could not process payload derived from L1 data, dropping batch", "err", err) eq.log.Warn("could not process payload derived from L1 data, dropping batch", "err", err)
// Count the number of deposits to see if the tx list is deposit only. // Count the number of deposits to see if the tx list is deposit only.
depositCount := 0 depositCount := 0
...@@ -662,3 +681,20 @@ func (eq *EngineQueue) Reset(ctx context.Context, _ eth.L1BlockRef, _ eth.System ...@@ -662,3 +681,20 @@ func (eq *EngineQueue) Reset(ctx context.Context, _ eth.L1BlockRef, _ eth.System
eq.logSyncProgress("reset derivation work") eq.logSyncProgress("reset derivation work")
return io.EOF return io.EOF
} }
// GetUnsafeQueueGap retrieves the current [start, end] range of the gap between the tip of the unsafe priority queue and the unsafe head.
// If there is no gap, the difference between end and start will be 0.
func (eq *EngineQueue) GetUnsafeQueueGap(expectedNumber uint64) (start uint64, end uint64) {
// The start of the gap is always the unsafe head + 1
start = eq.unsafeHead.Number + 1
// If the priority queue is not empty, the end is the block number of the payload at the top of the priority queue
// Otherwise, the end is the expected block number
if first := eq.unsafePayloads.Peek(); first != nil {
end = first.ID().Number
} else {
end = expectedNumber
}
return start, end
}
...@@ -2,10 +2,13 @@ package derive ...@@ -2,10 +2,13 @@ package derive
import ( import (
"context" "context"
"fmt"
"io" "io"
"math/big"
"math/rand" "math/rand"
"testing" "testing"
"github.com/holiman/uint256"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
...@@ -19,6 +22,7 @@ import ( ...@@ -19,6 +22,7 @@ import (
type fakeAttributesQueue struct { type fakeAttributesQueue struct {
origin eth.L1BlockRef origin eth.L1BlockRef
attrs *eth.PayloadAttributes
} }
func (f *fakeAttributesQueue) Origin() eth.L1BlockRef { func (f *fakeAttributesQueue) Origin() eth.L1BlockRef {
...@@ -26,7 +30,10 @@ func (f *fakeAttributesQueue) Origin() eth.L1BlockRef { ...@@ -26,7 +30,10 @@ func (f *fakeAttributesQueue) Origin() eth.L1BlockRef {
} }
func (f *fakeAttributesQueue) NextAttributes(_ context.Context, _ eth.L2BlockRef) (*eth.PayloadAttributes, error) { func (f *fakeAttributesQueue) NextAttributes(_ context.Context, _ eth.L2BlockRef) (*eth.PayloadAttributes, error) {
if f.attrs == nil {
return nil, io.EOF return nil, io.EOF
}
return f.attrs, nil
} }
var _ NextAttributesProvider = (*fakeAttributesQueue)(nil) var _ NextAttributesProvider = (*fakeAttributesQueue)(nil)
...@@ -209,17 +216,17 @@ func TestEngineQueue_Finalize(t *testing.T) { ...@@ -209,17 +216,17 @@ func TestEngineQueue_Finalize(t *testing.T) {
eng.ExpectL2BlockRefByHash(refF0.ParentHash, refE1, nil) eng.ExpectL2BlockRefByHash(refF0.ParentHash, refE1, nil)
// meet previous safe, counts 1/2 // meet previous safe, counts 1/2
l1F.ExpectL1BlockRefByNumber(refE.Number, refE, nil) l1F.ExpectL1BlockRefByHash(refE.Hash, refE, nil)
eng.ExpectL2BlockRefByHash(refE1.ParentHash, refE0, nil) eng.ExpectL2BlockRefByHash(refE1.ParentHash, refE0, nil)
eng.ExpectL2BlockRefByHash(refE0.ParentHash, refD1, nil) eng.ExpectL2BlockRefByHash(refE0.ParentHash, refD1, nil)
// now full seq window, inclusive // now full seq window, inclusive
l1F.ExpectL1BlockRefByNumber(refD.Number, refD, nil) l1F.ExpectL1BlockRefByHash(refD.Hash, refD, nil)
eng.ExpectL2BlockRefByHash(refD1.ParentHash, refD0, nil) eng.ExpectL2BlockRefByHash(refD1.ParentHash, refD0, nil)
eng.ExpectL2BlockRefByHash(refD0.ParentHash, refC1, nil) eng.ExpectL2BlockRefByHash(refD0.ParentHash, refC1, nil)
// now one more L1 origin // now one more L1 origin
l1F.ExpectL1BlockRefByNumber(refC.Number, refC, nil) l1F.ExpectL1BlockRefByHash(refC.Hash, refC, nil)
eng.ExpectL2BlockRefByHash(refC1.ParentHash, refC0, nil) eng.ExpectL2BlockRefByHash(refC1.ParentHash, refC0, nil)
// parent of that origin will be considered safe // parent of that origin will be considered safe
eng.ExpectL2BlockRefByHash(refC0.ParentHash, refB1, nil) eng.ExpectL2BlockRefByHash(refC0.ParentHash, refB1, nil)
...@@ -443,17 +450,17 @@ func TestEngineQueue_ResetWhenUnsafeOriginNotCanonical(t *testing.T) { ...@@ -443,17 +450,17 @@ func TestEngineQueue_ResetWhenUnsafeOriginNotCanonical(t *testing.T) {
eng.ExpectL2BlockRefByHash(refF0.ParentHash, refE1, nil) eng.ExpectL2BlockRefByHash(refF0.ParentHash, refE1, nil)
// meet previous safe, counts 1/2 // meet previous safe, counts 1/2
l1F.ExpectL1BlockRefByNumber(refE.Number, refE, nil) l1F.ExpectL1BlockRefByHash(refE.Hash, refE, nil)
eng.ExpectL2BlockRefByHash(refE1.ParentHash, refE0, nil) eng.ExpectL2BlockRefByHash(refE1.ParentHash, refE0, nil)
eng.ExpectL2BlockRefByHash(refE0.ParentHash, refD1, nil) eng.ExpectL2BlockRefByHash(refE0.ParentHash, refD1, nil)
// now full seq window, inclusive // now full seq window, inclusive
l1F.ExpectL1BlockRefByNumber(refD.Number, refD, nil) l1F.ExpectL1BlockRefByHash(refD.Hash, refD, nil)
eng.ExpectL2BlockRefByHash(refD1.ParentHash, refD0, nil) eng.ExpectL2BlockRefByHash(refD1.ParentHash, refD0, nil)
eng.ExpectL2BlockRefByHash(refD0.ParentHash, refC1, nil) eng.ExpectL2BlockRefByHash(refD0.ParentHash, refC1, nil)
// now one more L1 origin // now one more L1 origin
l1F.ExpectL1BlockRefByNumber(refC.Number, refC, nil) l1F.ExpectL1BlockRefByHash(refC.Hash, refC, nil)
eng.ExpectL2BlockRefByHash(refC1.ParentHash, refC0, nil) eng.ExpectL2BlockRefByHash(refC1.ParentHash, refC0, nil)
// parent of that origin will be considered safe // parent of that origin will be considered safe
eng.ExpectL2BlockRefByHash(refC0.ParentHash, refB1, nil) eng.ExpectL2BlockRefByHash(refC0.ParentHash, refB1, nil)
...@@ -775,17 +782,17 @@ func TestVerifyNewL1Origin(t *testing.T) { ...@@ -775,17 +782,17 @@ func TestVerifyNewL1Origin(t *testing.T) {
} }
// meet previous safe, counts 1/2 // meet previous safe, counts 1/2
l1F.ExpectL1BlockRefByNumber(refE.Number, refE, nil) l1F.ExpectL1BlockRefByHash(refE.Hash, refE, nil)
eng.ExpectL2BlockRefByHash(refE1.ParentHash, refE0, nil) eng.ExpectL2BlockRefByHash(refE1.ParentHash, refE0, nil)
eng.ExpectL2BlockRefByHash(refE0.ParentHash, refD1, nil) eng.ExpectL2BlockRefByHash(refE0.ParentHash, refD1, nil)
// now full seq window, inclusive // now full seq window, inclusive
l1F.ExpectL1BlockRefByNumber(refD.Number, refD, nil) l1F.ExpectL1BlockRefByHash(refD.Hash, refD, nil)
eng.ExpectL2BlockRefByHash(refD1.ParentHash, refD0, nil) eng.ExpectL2BlockRefByHash(refD1.ParentHash, refD0, nil)
eng.ExpectL2BlockRefByHash(refD0.ParentHash, refC1, nil) eng.ExpectL2BlockRefByHash(refD0.ParentHash, refC1, nil)
// now one more L1 origin // now one more L1 origin
l1F.ExpectL1BlockRefByNumber(refC.Number, refC, nil) l1F.ExpectL1BlockRefByHash(refC.Hash, refC, nil)
eng.ExpectL2BlockRefByHash(refC1.ParentHash, refC0, nil) eng.ExpectL2BlockRefByHash(refC1.ParentHash, refC0, nil)
// parent of that origin will be considered safe // parent of that origin will be considered safe
eng.ExpectL2BlockRefByHash(refC0.ParentHash, refB1, nil) eng.ExpectL2BlockRefByHash(refC0.ParentHash, refB1, nil)
...@@ -837,3 +844,166 @@ func TestVerifyNewL1Origin(t *testing.T) { ...@@ -837,3 +844,166 @@ func TestVerifyNewL1Origin(t *testing.T) {
}) })
} }
} }
func TestBlockBuildingRace(t *testing.T) {
logger := testlog.Logger(t, log.LvlInfo)
eng := &testutils.MockEngine{}
rng := rand.New(rand.NewSource(1234))
refA := testutils.RandomBlockRef(rng)
refA0 := eth.L2BlockRef{
Hash: testutils.RandomHash(rng),
Number: 0,
ParentHash: common.Hash{},
Time: refA.Time,
L1Origin: refA.ID(),
SequenceNumber: 0,
}
cfg := &rollup.Config{
Genesis: rollup.Genesis{
L1: refA.ID(),
L2: refA0.ID(),
L2Time: refA0.Time,
SystemConfig: eth.SystemConfig{
BatcherAddr: common.Address{42},
Overhead: [32]byte{123},
Scalar: [32]byte{42},
GasLimit: 20_000_000,
},
},
BlockTime: 1,
SeqWindowSize: 2,
}
refA1 := eth.L2BlockRef{
Hash: testutils.RandomHash(rng),
Number: refA0.Number + 1,
ParentHash: refA0.Hash,
Time: refA0.Time + cfg.BlockTime,
L1Origin: refA.ID(),
SequenceNumber: 1,
}
l1F := &testutils.MockL1Source{}
eng.ExpectL2BlockRefByLabel(eth.Finalized, refA0, nil)
eng.ExpectL2BlockRefByLabel(eth.Safe, refA0, nil)
eng.ExpectL2BlockRefByLabel(eth.Unsafe, refA0, nil)
l1F.ExpectL1BlockRefByNumber(refA.Number, refA, nil)
l1F.ExpectL1BlockRefByHash(refA.Hash, refA, nil)
l1F.ExpectL1BlockRefByHash(refA.Hash, refA, nil)
eng.ExpectSystemConfigByL2Hash(refA0.Hash, cfg.Genesis.SystemConfig, nil)
metrics := &testutils.TestDerivationMetrics{}
gasLimit := eth.Uint64Quantity(20_000_000)
attrs := &eth.PayloadAttributes{
Timestamp: eth.Uint64Quantity(refA1.Time),
PrevRandao: eth.Bytes32{},
SuggestedFeeRecipient: common.Address{},
Transactions: nil,
NoTxPool: false,
GasLimit: &gasLimit,
}
prev := &fakeAttributesQueue{origin: refA, attrs: attrs}
eq := NewEngineQueue(logger, cfg, eng, metrics, prev, l1F)
require.ErrorIs(t, eq.Reset(context.Background(), eth.L1BlockRef{}, eth.SystemConfig{}), io.EOF)
id := eth.PayloadID{0xff}
preFc := &eth.ForkchoiceState{
HeadBlockHash: refA0.Hash,
SafeBlockHash: refA0.Hash,
FinalizedBlockHash: refA0.Hash,
}
preFcRes := &eth.ForkchoiceUpdatedResult{
PayloadStatus: eth.PayloadStatusV1{
Status: eth.ExecutionValid,
LatestValidHash: &refA0.Hash,
ValidationError: nil,
},
PayloadID: &id,
}
// Expect initial forkchoice update
eng.ExpectForkchoiceUpdate(preFc, nil, preFcRes, nil)
require.NoError(t, eq.Step(context.Background()), "clean forkchoice state after reset")
// Expect initial building update, to process the attributes we queued up
eng.ExpectForkchoiceUpdate(preFc, attrs, preFcRes, nil)
// Don't let the payload be confirmed straight away
mockErr := fmt.Errorf("mock error")
eng.ExpectGetPayload(id, nil, mockErr)
// The job will not be cancelled, the untyped error is a temporary error
require.ErrorIs(t, eq.Step(context.Background()), NotEnoughData, "queue up attributes")
require.ErrorIs(t, eq.Step(context.Background()), mockErr, "expecting to fail to process attributes")
require.NotNil(t, eq.safeAttributes, "still have attributes")
// Now allow the building to complete
a1InfoTx, err := L1InfoDepositBytes(refA1.SequenceNumber, &testutils.MockBlockInfo{
InfoHash: refA.Hash,
InfoParentHash: refA.ParentHash,
InfoCoinbase: common.Address{},
InfoRoot: common.Hash{},
InfoNum: refA.Number,
InfoTime: refA.Time,
InfoMixDigest: [32]byte{},
InfoBaseFee: big.NewInt(7),
InfoReceiptRoot: common.Hash{},
InfoGasUsed: 0,
}, cfg.Genesis.SystemConfig, false)
require.NoError(t, err)
payloadA1 := &eth.ExecutionPayload{
ParentHash: refA1.ParentHash,
FeeRecipient: attrs.SuggestedFeeRecipient,
StateRoot: eth.Bytes32{},
ReceiptsRoot: eth.Bytes32{},
LogsBloom: eth.Bytes256{},
PrevRandao: eth.Bytes32{},
BlockNumber: eth.Uint64Quantity(refA1.Number),
GasLimit: gasLimit,
GasUsed: 0,
Timestamp: eth.Uint64Quantity(refA1.Time),
ExtraData: nil,
BaseFeePerGas: *uint256.NewInt(7),
BlockHash: refA1.Hash,
Transactions: []eth.Data{
a1InfoTx,
},
}
eng.ExpectGetPayload(id, payloadA1, nil)
eng.ExpectNewPayload(payloadA1, &eth.PayloadStatusV1{
Status: eth.ExecutionValid,
LatestValidHash: &refA1.Hash,
ValidationError: nil,
}, nil)
postFc := &eth.ForkchoiceState{
HeadBlockHash: refA1.Hash,
SafeBlockHash: refA1.Hash,
FinalizedBlockHash: refA0.Hash,
}
postFcRes := &eth.ForkchoiceUpdatedResult{
PayloadStatus: eth.PayloadStatusV1{
Status: eth.ExecutionValid,
LatestValidHash: &refA1.Hash,
ValidationError: nil,
},
PayloadID: &id,
}
eng.ExpectForkchoiceUpdate(postFc, nil, postFcRes, nil)
// Now complete the job, as external user of the engine
_, _, err = eq.ConfirmPayload(context.Background())
require.NoError(t, err)
require.Equal(t, refA1, eq.SafeL2Head(), "safe head should have changed")
require.NoError(t, eq.Step(context.Background()))
require.Nil(t, eq.safeAttributes, "attributes should now be invalidated")
l1F.AssertExpectations(t)
eng.AssertExpectations(t)
}
package derive
import (
"fmt"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup"
)
// L2BlockRefSource is a source for the generation of an L2BlockRef. E.g. a
// *types.Block is an L2BlockRefSource.
//
// L2BlockToBlockRef extracts an L2BlockRef from an L2BlockRefSource. The first
// transaction of a source must be a Deposit transaction.
type L2BlockRefSource interface {
Hash() common.Hash
ParentHash() common.Hash
NumberU64() uint64
Time() uint64
Transactions() types.Transactions
}
// L2BlockToBlockRef extracts the essential L2BlockRef information from an L2
// block ref source, falling back to genesis information if necessary.
func L2BlockToBlockRef(block L2BlockRefSource, genesis *rollup.Genesis) (eth.L2BlockRef, error) {
hash, number := block.Hash(), block.NumberU64()
var l1Origin eth.BlockID
var sequenceNumber uint64
if number == genesis.L2.Number {
if hash != genesis.L2.Hash {
return eth.L2BlockRef{}, fmt.Errorf("expected L2 genesis hash to match L2 block at genesis block number %d: %s <> %s", genesis.L2.Number, hash, genesis.L2.Hash)
}
l1Origin = genesis.L1
sequenceNumber = 0
} else {
txs := block.Transactions()
if txs.Len() == 0 {
return eth.L2BlockRef{}, fmt.Errorf("l2 block is missing L1 info deposit tx, block hash: %s", hash)
}
tx := txs[0]
if tx.Type() != types.DepositTxType {
return eth.L2BlockRef{}, fmt.Errorf("first payload tx has unexpected tx type: %d", tx.Type())
}
info, err := L1InfoDepositTxData(tx.Data())
if err != nil {
return eth.L2BlockRef{}, fmt.Errorf("failed to parse L1 info deposit tx from L2 block: %w", err)
}
l1Origin = eth.BlockID{Hash: info.BlockHash, Number: info.Number}
sequenceNumber = info.SequenceNumber
}
return eth.L2BlockRef{
Hash: hash,
Number: number,
ParentHash: block.ParentHash(),
Time: block.Time(),
L1Origin: l1Origin,
SequenceNumber: sequenceNumber,
}, nil
}
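// A minimal caller-side sketch (not part of the file above; the package name,
// function name and variables are assumptions) showing how a go-ethereum
// *types.Block, which satisfies L2BlockRefSource, can be turned into an
// L2BlockRef with L2BlockToBlockRef.
package example

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"

	"github.com/ethereum-optimism/optimism/op-node/rollup"
	"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
)

func printBlockRef(block *types.Block, cfg *rollup.Config) error {
	// Non-genesis blocks must carry the L1 info deposit as their first transaction,
	// otherwise L2BlockToBlockRef returns an error.
	ref, err := derive.L2BlockToBlockRef(block, &cfg.Genesis)
	if err != nil {
		return err
	}
	fmt.Println("hash:", ref.Hash, "number:", ref.Number, "l1 origin:", ref.L1Origin)
	return nil
}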
...@@ -77,6 +77,7 @@ type PayloadsQueue struct { ...@@ -77,6 +77,7 @@ type PayloadsQueue struct {
pq payloadsByNumber pq payloadsByNumber
currentSize uint64 currentSize uint64
MaxSize uint64 MaxSize uint64
blockNos map[uint64]bool
SizeFn func(p *eth.ExecutionPayload) uint64 SizeFn func(p *eth.ExecutionPayload) uint64
} }
...@@ -99,6 +100,9 @@ func (upq *PayloadsQueue) Push(p *eth.ExecutionPayload) error { ...@@ -99,6 +100,9 @@ func (upq *PayloadsQueue) Push(p *eth.ExecutionPayload) error {
if p == nil { if p == nil {
return errors.New("cannot add nil payload") return errors.New("cannot add nil payload")
} }
if upq.blockNos[p.ID().Number] {
return errors.New("cannot add duplicate payload")
}
size := upq.SizeFn(p) size := upq.SizeFn(p)
if size > upq.MaxSize { if size > upq.MaxSize {
return fmt.Errorf("cannot add payload %s, payload mem size %d is larger than max queue size %d", p.ID(), size, upq.MaxSize) return fmt.Errorf("cannot add payload %s, payload mem size %d is larger than max queue size %d", p.ID(), size, upq.MaxSize)
...@@ -111,6 +115,7 @@ func (upq *PayloadsQueue) Push(p *eth.ExecutionPayload) error { ...@@ -111,6 +115,7 @@ func (upq *PayloadsQueue) Push(p *eth.ExecutionPayload) error {
for upq.currentSize > upq.MaxSize { for upq.currentSize > upq.MaxSize {
upq.Pop() upq.Pop()
} }
upq.blockNos[p.ID().Number] = true
return nil return nil
} }
...@@ -132,5 +137,7 @@ func (upq *PayloadsQueue) Pop() *eth.ExecutionPayload { ...@@ -132,5 +137,7 @@ func (upq *PayloadsQueue) Pop() *eth.ExecutionPayload {
} }
ps := heap.Pop(&upq.pq).(payloadAndSize) // nosemgrep ps := heap.Pop(&upq.pq).(payloadAndSize) // nosemgrep
upq.currentSize -= ps.size upq.currentSize -= ps.size
// remove the key from the blockNos map
delete(upq.blockNos, ps.payload.ID().Number)
return ps.payload return ps.payload
} }
...@@ -77,6 +77,7 @@ func TestPayloadsQueue(t *testing.T) { ...@@ -77,6 +77,7 @@ func TestPayloadsQueue(t *testing.T) {
pq := PayloadsQueue{ pq := PayloadsQueue{
MaxSize: payloadMemFixedCost * 3, MaxSize: payloadMemFixedCost * 3,
SizeFn: payloadMemSize, SizeFn: payloadMemSize,
blockNos: make(map[uint64]bool),
} }
require.Equal(t, 0, pq.Len()) require.Equal(t, 0, pq.Len())
require.Equal(t, (*eth.ExecutionPayload)(nil), pq.Peek()) require.Equal(t, (*eth.ExecutionPayload)(nil), pq.Peek())
...@@ -85,6 +86,7 @@ func TestPayloadsQueue(t *testing.T) { ...@@ -85,6 +86,7 @@ func TestPayloadsQueue(t *testing.T) {
a := &eth.ExecutionPayload{BlockNumber: 3} a := &eth.ExecutionPayload{BlockNumber: 3}
b := &eth.ExecutionPayload{BlockNumber: 4} b := &eth.ExecutionPayload{BlockNumber: 4}
c := &eth.ExecutionPayload{BlockNumber: 5} c := &eth.ExecutionPayload{BlockNumber: 5}
d := &eth.ExecutionPayload{BlockNumber: 6}
bAlt := &eth.ExecutionPayload{BlockNumber: 4} bAlt := &eth.ExecutionPayload{BlockNumber: 4}
require.NoError(t, pq.Push(b)) require.NoError(t, pq.Push(b))
require.Equal(t, pq.Len(), 1) require.Equal(t, pq.Len(), 1)
...@@ -105,28 +107,33 @@ func TestPayloadsQueue(t *testing.T) { ...@@ -105,28 +107,33 @@ func TestPayloadsQueue(t *testing.T) {
require.Equal(t, pq.Pop(), a) require.Equal(t, pq.Pop(), a)
require.Equal(t, pq.Len(), 2, "expecting to pop the lowest") require.Equal(t, pq.Len(), 2, "expecting to pop the lowest")
require.NoError(t, pq.Push(bAlt)) require.Equal(t, pq.Peek(), b, "expecting b to be lowest, compared to c")
require.Equal(t, pq.Len(), 3)
require.Equal(t, pq.Peek(), b, "expecting b to be lowest, compared to bAlt and c")
require.Equal(t, pq.Pop(), b) require.Equal(t, pq.Pop(), b)
require.Equal(t, pq.Len(), 2)
require.Equal(t, pq.MemSize(), 2*payloadMemFixedCost)
require.Equal(t, pq.Pop(), bAlt)
require.Equal(t, pq.Len(), 1) require.Equal(t, pq.Len(), 1)
require.Equal(t, pq.Peek(), c, "expecting c to only remain") require.Equal(t, pq.MemSize(), payloadMemFixedCost)
require.Equal(t, pq.Pop(), c)
require.Equal(t, pq.Len(), 0, "expecting no items to remain")
d := &eth.ExecutionPayload{BlockNumber: 5, Transactions: []eth.Data{make([]byte, payloadMemFixedCost*3+1)}} e := &eth.ExecutionPayload{BlockNumber: 5, Transactions: []eth.Data{make([]byte, payloadMemFixedCost*3+1)}}
require.Error(t, pq.Push(d), "cannot add payloads that are too large") require.Error(t, pq.Push(e), "cannot add payloads that are too large")
require.NoError(t, pq.Push(b)) require.NoError(t, pq.Push(b))
require.Equal(t, pq.Len(), 1, "expecting b")
require.Equal(t, pq.Peek(), b)
require.NoError(t, pq.Push(c))
require.Equal(t, pq.Len(), 2, "expecting b, c") require.Equal(t, pq.Len(), 2, "expecting b, c")
require.Equal(t, pq.Peek(), b) require.Equal(t, pq.Peek(), b)
require.NoError(t, pq.Push(a)) require.NoError(t, pq.Push(a))
require.Equal(t, pq.Len(), 3, "expecting a, b, c") require.Equal(t, pq.Len(), 3, "expecting a, b, c")
require.Equal(t, pq.Peek(), a) require.Equal(t, pq.Peek(), a)
require.NoError(t, pq.Push(bAlt))
require.Equal(t, pq.Len(), 3, "expecting b, bAlt, c") // No duplicates allowed
require.Error(t, pq.Push(bAlt))
require.NoError(t, pq.Push(d))
require.Equal(t, pq.Len(), 3)
require.Equal(t, pq.Peek(), b, "expecting b, c, d")
require.NotContainsf(t, pq.pq[:], a, "a should be dropped after 3 items already exist under max size constraint") require.NotContainsf(t, pq.pq[:], a, "a should be dropped after 3 items already exist under max size constraint")
} }
...@@ -51,6 +51,7 @@ type EngineQueueStage interface { ...@@ -51,6 +51,7 @@ type EngineQueueStage interface {
Finalize(l1Origin eth.L1BlockRef) Finalize(l1Origin eth.L1BlockRef)
AddUnsafePayload(payload *eth.ExecutionPayload) AddUnsafePayload(payload *eth.ExecutionPayload)
GetUnsafeQueueGap(expectedNumber uint64) (uint64, uint64)
Step(context.Context) error Step(context.Context) error
} }
...@@ -106,6 +107,12 @@ func NewDerivationPipeline(log log.Logger, cfg *rollup.Config, l1Fetcher L1Fetch ...@@ -106,6 +107,12 @@ func NewDerivationPipeline(log log.Logger, cfg *rollup.Config, l1Fetcher L1Fetch
} }
} }
// EngineReady returns true if the engine is ready to be used.
// While the engine is being reset its state is inconsistent, and it should not be used externally.
func (dp *DerivationPipeline) EngineReady() bool {
return dp.resetting > 0
}
func (dp *DerivationPipeline) Reset() { func (dp *DerivationPipeline) Reset() {
dp.resetting = 0 dp.resetting = 0
} }
...@@ -160,6 +167,12 @@ func (dp *DerivationPipeline) AddUnsafePayload(payload *eth.ExecutionPayload) { ...@@ -160,6 +167,12 @@ func (dp *DerivationPipeline) AddUnsafePayload(payload *eth.ExecutionPayload) {
dp.eng.AddUnsafePayload(payload) dp.eng.AddUnsafePayload(payload)
} }
// GetUnsafeQueueGap retrieves the current [start, end] range of the gap between the tip of the unsafe priority queue and the unsafe head.
// If there is no gap, the start and end will be 0.
func (dp *DerivationPipeline) GetUnsafeQueueGap(expectedNumber uint64) (uint64, uint64) {
return dp.eng.GetUnsafeQueueGap(expectedNumber)
}
// Step tries to progress the buffer. // Step tries to progress the buffer.
// An EOF is returned if the pipeline is blocked by waiting for new L1 data. // An EOF is returned if the pipeline is blocked by waiting for new L1 data.
// If ctx errors no error is returned, but the step may exit early in a state that can still be continued. // If ctx errors no error is returned, but the step may exit early in a state that can still be continued.
......
...@@ -10,6 +10,7 @@ import ( ...@@ -10,6 +10,7 @@ import (
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup" "github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive" "github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum-optimism/optimism/op-node/sources"
) )
type Metrics interface { type Metrics interface {
...@@ -48,12 +49,14 @@ type DerivationPipeline interface { ...@@ -48,12 +49,14 @@ type DerivationPipeline interface {
Reset() Reset()
Step(ctx context.Context) error Step(ctx context.Context) error
AddUnsafePayload(payload *eth.ExecutionPayload) AddUnsafePayload(payload *eth.ExecutionPayload)
GetUnsafeQueueGap(expectedNumber uint64) (uint64, uint64)
Finalize(ref eth.L1BlockRef) Finalize(ref eth.L1BlockRef)
FinalizedL1() eth.L1BlockRef FinalizedL1() eth.L1BlockRef
Finalized() eth.L2BlockRef Finalized() eth.L2BlockRef
SafeL2Head() eth.L2BlockRef SafeL2Head() eth.L2BlockRef
UnsafeL2Head() eth.L2BlockRef UnsafeL2Head() eth.L2BlockRef
Origin() eth.L1BlockRef Origin() eth.L1BlockRef
EngineReady() bool
} }
type L1StateIface interface { type L1StateIface interface {
...@@ -80,7 +83,7 @@ type Network interface { ...@@ -80,7 +83,7 @@ type Network interface {
} }
// NewDriver composes an events handler that tracks L1 state, triggers L2 derivation, and optionally sequences new L2 blocks. // NewDriver composes an events handler that tracks L1 state, triggers L2 derivation, and optionally sequences new L2 blocks.
func NewDriver(driverCfg *Config, cfg *rollup.Config, l2 L2Chain, l1 L1Chain, network Network, log log.Logger, snapshotLog log.Logger, metrics Metrics) *Driver { func NewDriver(driverCfg *Config, cfg *rollup.Config, l2 L2Chain, l1 L1Chain, syncClient *sources.SyncClient, network Network, log log.Logger, snapshotLog log.Logger, metrics Metrics) *Driver {
l1State := NewL1State(log, metrics) l1State := NewL1State(log, metrics)
sequencerConfDepth := NewConfDepth(driverCfg.SequencerConfDepth, l1State.L1Head, l1) sequencerConfDepth := NewConfDepth(driverCfg.SequencerConfDepth, l1State.L1Head, l1)
findL1Origin := NewL1OriginSelector(log, cfg, sequencerConfDepth) findL1Origin := NewL1OriginSelector(log, cfg, sequencerConfDepth)
...@@ -112,5 +115,6 @@ func NewDriver(driverCfg *Config, cfg *rollup.Config, l2 L2Chain, l1 L1Chain, ne ...@@ -112,5 +115,6 @@ func NewDriver(driverCfg *Config, cfg *rollup.Config, l2 L2Chain, l1 L1Chain, ne
l1SafeSig: make(chan eth.L1BlockRef, 10), l1SafeSig: make(chan eth.L1BlockRef, 10),
l1FinalizedSig: make(chan eth.L1BlockRef, 10), l1FinalizedSig: make(chan eth.L1BlockRef, 10),
unsafeL2Payloads: make(chan *eth.ExecutionPayload, 10), unsafeL2Payloads: make(chan *eth.ExecutionPayload, 10),
L2SyncCl: syncClient,
} }
} }
...@@ -123,6 +123,14 @@ func (d *Sequencer) CancelBuildingBlock(ctx context.Context) { ...@@ -123,6 +123,14 @@ func (d *Sequencer) CancelBuildingBlock(ctx context.Context) {
// PlanNextSequencerAction returns a desired delay till the RunNextSequencerAction call. // PlanNextSequencerAction returns a desired delay till the RunNextSequencerAction call.
func (d *Sequencer) PlanNextSequencerAction() time.Duration { func (d *Sequencer) PlanNextSequencerAction() time.Duration {
// If the engine is busy building safe blocks (and thus changing the head that we would sync on top of),
// then give it time to sync up.
if onto, _, safe := d.engine.BuildingPayload(); safe {
d.log.Warn("delaying sequencing to not interrupt safe-head changes", "onto", onto, "onto_time", onto.Time)
// approximates the worst-case time it takes to build a block, to reattempt sequencing after.
return time.Second * time.Duration(d.config.BlockTime)
}
head := d.engine.UnsafeL2Head() head := d.engine.UnsafeL2Head()
now := d.timeNow() now := d.timeNow()
...@@ -173,7 +181,7 @@ func (d *Sequencer) BuildingOnto() eth.L2BlockRef { ...@@ -173,7 +181,7 @@ func (d *Sequencer) BuildingOnto() eth.L2BlockRef {
// Only critical errors are bubbled up, other errors are handled internally. // Only critical errors are bubbled up, other errors are handled internally.
// Internally starting or sealing of a block may fail with a derivation-like error: // Internally starting or sealing of a block may fail with a derivation-like error:
// - If it is a critical error, the error is bubbled up to the caller. // - If it is a critical error, the error is bubbled up to the caller.
// - If it is a reset error, the ResettableEngineControl used to build blocks is requested to reset, and a backoff aplies. // - If it is a reset error, the ResettableEngineControl used to build blocks is requested to reset, and a backoff applies.
// No attempt is made at completing the block building. // No attempt is made at completing the block building.
// - If it is a temporary error, a backoff is applied to reattempt building later. // - If it is a temporary error, a backoff is applied to reattempt building later.
// - If it is any other error, a backoff is applied and building is cancelled. // - If it is any other error, a backoff is applied and building is cancelled.
...@@ -187,8 +195,15 @@ func (d *Sequencer) BuildingOnto() eth.L2BlockRef { ...@@ -187,8 +195,15 @@ func (d *Sequencer) BuildingOnto() eth.L2BlockRef {
// since it can consolidate previously sequenced blocks by comparing sequenced inputs with derived inputs. // since it can consolidate previously sequenced blocks by comparing sequenced inputs with derived inputs.
// If the derivation pipeline does force a conflicting block, then an ongoing sequencer task might still finish, // If the derivation pipeline does force a conflicting block, then an ongoing sequencer task might still finish,
// but the derivation can continue to reset until the chain is correct. // but the derivation can continue to reset until the chain is correct.
// If the engine is currently building safe blocks, then that building is not interrupted, and sequencing is delayed.
func (d *Sequencer) RunNextSequencerAction(ctx context.Context) (*eth.ExecutionPayload, error) { func (d *Sequencer) RunNextSequencerAction(ctx context.Context) (*eth.ExecutionPayload, error) {
if _, buildingID, _ := d.engine.BuildingPayload(); buildingID != (eth.PayloadID{}) { if onto, buildingID, safe := d.engine.BuildingPayload(); buildingID != (eth.PayloadID{}) {
if safe {
d.log.Warn("avoiding sequencing to not interrupt safe-head changes", "onto", onto, "onto_time", onto.Time)
// approximates the worst-case time it takes to build a block, to reattempt sequencing after.
d.nextAction = d.timeNow().Add(time.Second * time.Duration(d.config.BlockTime))
return nil, nil
}
payload, err := d.CompleteBuildingBlock(ctx) payload, err := d.CompleteBuildingBlock(ctx)
if err != nil { if err != nil {
if errors.Is(err, derive.ErrCritical) { if errors.Is(err, derive.ErrCritical) {
......
...@@ -16,6 +16,7 @@ import ( ...@@ -16,6 +16,7 @@ import (
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup" "github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive" "github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum-optimism/optimism/op-node/sources"
"github.com/ethereum-optimism/optimism/op-service/backoff" "github.com/ethereum-optimism/optimism/op-service/backoff"
) )
...@@ -63,7 +64,11 @@ type Driver struct { ...@@ -63,7 +64,11 @@ type Driver struct {
l1SafeSig chan eth.L1BlockRef l1SafeSig chan eth.L1BlockRef
l1FinalizedSig chan eth.L1BlockRef l1FinalizedSig chan eth.L1BlockRef
// Backup unsafe sync client
L2SyncCl *sources.SyncClient
// L2 Signals: // L2 Signals:
unsafeL2Payloads chan *eth.ExecutionPayload unsafeL2Payloads chan *eth.ExecutionPayload
l1 L1Chain l1 L1Chain
...@@ -195,10 +200,18 @@ func (s *Driver) eventLoop() { ...@@ -195,10 +200,18 @@ func (s *Driver) eventLoop() {
sequencerTimer.Reset(delay) sequencerTimer.Reset(delay)
} }
// Create a ticker to check if there is a gap in the engine queue every 15 seconds
// If there is, we send requests to the backup RPC to retrieve the missing payloads
// and add them to the unsafe queue.
altSyncTicker := time.NewTicker(15 * time.Second)
defer altSyncTicker.Stop()
for { for {
// If we are sequencing, and the L1 state is ready, update the trigger for the next sequencer action. // If we are sequencing, and the L1 state is ready, update the trigger for the next sequencer action.
// This may adjust at any time based on fork-choice changes or previous errors. // This may adjust at any time based on fork-choice changes or previous errors.
if s.driverConfig.SequencerEnabled && !s.driverConfig.SequencerStopped && s.l1State.L1Head() != (eth.L1BlockRef{}) { // And avoid sequencing if the derivation pipeline indicates the engine is not ready.
if s.driverConfig.SequencerEnabled && !s.driverConfig.SequencerStopped &&
s.l1State.L1Head() != (eth.L1BlockRef{}) && s.derivation.EngineReady() {
// update sequencer time if the head changed // update sequencer time if the head changed
if s.sequencer.BuildingOnto().ID() != s.derivation.UnsafeL2Head().ID() { if s.sequencer.BuildingOnto().ID() != s.derivation.UnsafeL2Head().ID() {
planSequencerAction() planSequencerAction()
...@@ -223,6 +236,12 @@ func (s *Driver) eventLoop() { ...@@ -223,6 +236,12 @@ func (s *Driver) eventLoop() {
} }
} }
planSequencerAction() // schedule the next sequencer action to keep the sequencing looping planSequencerAction() // schedule the next sequencer action to keep the sequencing looping
case <-altSyncTicker.C:
// Check if there is a gap in the current unsafe payload queue. If there is, attempt to fetch
// missing payloads from the backup RPC (if it is configured).
if s.L2SyncCl != nil {
s.checkForGapInUnsafeQueue(ctx)
}
case payload := <-s.unsafeL2Payloads: case payload := <-s.unsafeL2Payloads:
s.snapshot("New unsafe payload") s.snapshot("New unsafe payload")
s.log.Info("Optimistically queueing unsafe L2 execution payload", "id", payload.ID()) s.log.Info("Optimistically queueing unsafe L2 execution payload", "id", payload.ID())
...@@ -442,3 +461,36 @@ type hashAndErrorChannel struct { ...@@ -442,3 +461,36 @@ type hashAndErrorChannel struct {
hash common.Hash hash common.Hash
err chan error err chan error
} }
// checkForGapInUnsafeQueue checks if there is a gap in the unsafe queue and attempts to retrieve the missing payloads from the backup RPC.
// WARNING: The sync client's attempt to retrieve the missing payloads is not guaranteed to succeed, and it will fail silently (besides
// emitting warning logs) if the requests fail.
func (s *Driver) checkForGapInUnsafeQueue(ctx context.Context) {
// subtract genesis time from wall clock to get the time elapsed since genesis, and then divide that
// difference by the block time to get the expected L2 block number at the current time. If the
// unsafe head does not have this block number, then there is a gap in the queue.
wallClock := uint64(time.Now().Unix())
genesisTimestamp := s.config.Genesis.L2Time
wallClockGenesisDiff := wallClock - genesisTimestamp
expectedL2Block := wallClockGenesisDiff / s.config.BlockTime
start, end := s.derivation.GetUnsafeQueueGap(expectedL2Block)
size := end - start
// Check if there is a gap between the unsafe head and the expected L2 block number at the current time.
if size > 0 {
s.log.Warn("Gap in payload queue tip and expected unsafe chain detected", "start", start, "end", end, "size", size)
s.log.Info("Attempting to fetch missing payloads from backup RPC", "start", start, "end", end, "size", size)
// Attempt to fetch the missing payloads from the backup unsafe sync RPC concurrently.
// Concurrent requests are safe here due to the engine queue being a priority queue.
for blockNumber := start; blockNumber <= end; blockNumber++ {
select {
case s.L2SyncCl.FetchUnsafeBlock <- blockNumber:
// Do nothing; the block number was successfully sent into the channel
default:
return // If the channel is full, return and wait for the next iteration of the event loop
}
}
}
}
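// A worked example of the gap arithmetic in checkForGapInUnsafeQueue above
// (the numbers are hypothetical, not taken from any chain config): with genesis
// at unix time 1_700_000_000, 2-second blocks and a wall clock of 1_700_000_100,
// the expected unsafe tip is block (1_700_000_100 - 1_700_000_000) / 2 = 50.
// If the unsafe queue only reaches block 45, blocks 46 through 50 are requested
// from the backup RPC through the FetchUnsafeBlock channel.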
package rollup package rollup
import ( import (
"errors"
"github.com/ethereum-optimism/optimism/op-bindings/bindings" "github.com/ethereum-optimism/optimism/op-bindings/bindings"
"github.com/ethereum-optimism/optimism/op-node/eth" "github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
) )
// ComputeL2OutputRoot computes the L2 output root var NilProof = errors.New("Output root proof is nil")
func ComputeL2OutputRoot(l2OutputRootVersion eth.Bytes32, blockHash common.Hash, blockRoot common.Hash, storageRoot common.Hash) eth.Bytes32 {
digest := crypto.Keccak256Hash( // ComputeL2OutputRoot computes the L2 output root by hashing an output root proof.
l2OutputRootVersion[:], func ComputeL2OutputRoot(proofElements *bindings.TypesOutputRootProof) (eth.Bytes32, error) {
blockRoot.Bytes(), if proofElements == nil {
storageRoot[:], return eth.Bytes32{}, NilProof
blockHash.Bytes(), }
)
return eth.Bytes32(digest)
}
// HashOutputRootProof computes the hash of the output root proof digest := crypto.Keccak256Hash(
func HashOutputRootProof(proof *bindings.TypesOutputRootProof) eth.Bytes32 { proofElements.Version[:],
return ComputeL2OutputRoot( proofElements.StateRoot[:],
proof.Version, proofElements.MessagePasserStorageRoot[:],
proof.StateRoot, proofElements.LatestBlockhash[:],
proof.MessagePasserStorageRoot,
proof.LatestBlockhash,
) )
return eth.Bytes32(digest), nil
} }
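// A minimal caller-side sketch (not part of the diff above; package name,
// function name and parameters are assumptions, and the [32]byte field types
// are assumed from the generated bindings) of the new proof-based
// ComputeL2OutputRoot: it hashes version || stateRoot || messagePasserStorageRoot
// || latestBlockhash, matching the function body above.
package example

import (
	"fmt"

	"github.com/ethereum-optimism/optimism/op-bindings/bindings"
	"github.com/ethereum-optimism/optimism/op-node/rollup"
)

func exampleOutputRoot(stateRoot, messagePasserStorageRoot, blockHash [32]byte) error {
	proof := &bindings.TypesOutputRootProof{
		Version:                  [32]byte{}, // output root version 0
		StateRoot:                stateRoot,
		MessagePasserStorageRoot: messagePasserStorageRoot,
		LatestBlockhash:          blockHash,
	}
	outputRoot, err := rollup.ComputeL2OutputRoot(proof)
	if err != nil {
		return err // only the nil-proof case errors
	}
	fmt.Printf("output root: %x\n", outputRoot)
	return nil
}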
...@@ -126,8 +126,17 @@ func FindL2Heads(ctx context.Context, cfg *rollup.Config, l1 L1Chain, l2 L2Chain ...@@ -126,8 +126,17 @@ func FindL2Heads(ctx context.Context, cfg *rollup.Config, l1 L1Chain, l2 L2Chain
// then we return the last L2 block of the epoch before that as safe head. // then we return the last L2 block of the epoch before that as safe head.
// Each loop iteration we traverse a single L2 block, and we check if the L1 origins are consistent. // Each loop iteration we traverse a single L2 block, and we check if the L1 origins are consistent.
for { for {
// Fetch L1 information if we never had it, or if we do not have it for the current origin // Fetch L1 information if we never had it, or if we do not have it for the current origin.
if l1Block == (eth.L1BlockRef{}) || n.L1Origin.Hash != l1Block.Hash { // Optimization: as soon as we have a previous L1 block, try to traverse L1 by hash instead of by number, to fill the cache.
if n.L1Origin.Hash == l1Block.ParentHash {
b, err := l1.L1BlockRefByHash(ctx, n.L1Origin.Hash)
if err != nil {
// Exit; the find-sync-start process should start over, moving to an available L1 chain via the block-by-number / not-found case.
return nil, fmt.Errorf("failed to retrieve L1 block: %w", err)
}
l1Block = b
ahead = false
} else if l1Block == (eth.L1BlockRef{}) || n.L1Origin.Hash != l1Block.Hash {
b, err := l1.L1BlockRefByNumber(ctx, n.L1Origin.Number) b, err := l1.L1BlockRefByNumber(ctx, n.L1Origin.Number)
// if L2 is ahead of L1 view, then consider it a "plausible" head // if L2 is ahead of L1 view, then consider it a "plausible" head
notFound := errors.Is(err, ethereum.NotFound) notFound := errors.Is(err, ethereum.NotFound)
......
...@@ -157,7 +157,7 @@ func (cfg *Config) CheckL2ChainID(ctx context.Context, client L2Client) error { ...@@ -157,7 +157,7 @@ func (cfg *Config) CheckL2ChainID(ctx context.Context, client L2Client) error {
return err return err
} }
if cfg.L2ChainID.Cmp(id) != 0 { if cfg.L2ChainID.Cmp(id) != 0 {
return fmt.Errorf("incorrect L2 RPC chain id %d, expected %d", cfg.L2ChainID, id) return fmt.Errorf("incorrect L2 RPC chain id, expected from config %d, obtained from client %d", cfg.L2ChainID, id)
} }
return nil return nil
} }
......
...@@ -36,10 +36,7 @@ func NewConfig(ctx *cli.Context, log log.Logger) (*node.Config, error) { ...@@ -36,10 +36,7 @@ func NewConfig(ctx *cli.Context, log log.Logger) (*node.Config, error) {
return nil, err return nil, err
} }
driverConfig, err := NewDriverConfig(ctx) driverConfig := NewDriverConfig(ctx)
if err != nil {
return nil, err
}
p2pSignerSetup, err := p2pcli.LoadSignerSetup(ctx) p2pSignerSetup, err := p2pcli.LoadSignerSetup(ctx)
if err != nil { if err != nil {
...@@ -51,19 +48,19 @@ func NewConfig(ctx *cli.Context, log log.Logger) (*node.Config, error) { ...@@ -51,19 +48,19 @@ func NewConfig(ctx *cli.Context, log log.Logger) (*node.Config, error) {
return nil, fmt.Errorf("failed to load p2p config: %w", err) return nil, fmt.Errorf("failed to load p2p config: %w", err)
} }
l1Endpoint, err := NewL1EndpointConfig(ctx) l1Endpoint := NewL1EndpointConfig(ctx)
if err != nil {
return nil, fmt.Errorf("failed to load l1 endpoint info: %w", err)
}
l2Endpoint, err := NewL2EndpointConfig(ctx, log) l2Endpoint, err := NewL2EndpointConfig(ctx, log)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to load l2 endpoints info: %w", err) return nil, fmt.Errorf("failed to load l2 endpoints info: %w", err)
} }
l2SyncEndpoint := NewL2SyncEndpointConfig(ctx)
cfg := &node.Config{ cfg := &node.Config{
L1: l1Endpoint, L1: l1Endpoint,
L2: l2Endpoint, L2: l2Endpoint,
L2Sync: l2SyncEndpoint,
Rollup: *rollupConfig, Rollup: *rollupConfig,
Driver: *driverConfig, Driver: *driverConfig,
RPC: node.RPCConfig{ RPC: node.RPCConfig{
...@@ -96,12 +93,12 @@ func NewConfig(ctx *cli.Context, log log.Logger) (*node.Config, error) { ...@@ -96,12 +93,12 @@ func NewConfig(ctx *cli.Context, log log.Logger) (*node.Config, error) {
return cfg, nil return cfg, nil
} }
func NewL1EndpointConfig(ctx *cli.Context) (*node.L1EndpointConfig, error) { func NewL1EndpointConfig(ctx *cli.Context) *node.L1EndpointConfig {
return &node.L1EndpointConfig{ return &node.L1EndpointConfig{
L1NodeAddr: ctx.GlobalString(flags.L1NodeAddr.Name), L1NodeAddr: ctx.GlobalString(flags.L1NodeAddr.Name),
L1TrustRPC: ctx.GlobalBool(flags.L1TrustRPC.Name), L1TrustRPC: ctx.GlobalBool(flags.L1TrustRPC.Name),
L1RPCKind: sources.RPCProviderKind(strings.ToLower(ctx.GlobalString(flags.L1RPCProviderKind.Name))), L1RPCKind: sources.RPCProviderKind(strings.ToLower(ctx.GlobalString(flags.L1RPCProviderKind.Name))),
}, nil }
} }
func NewL2EndpointConfig(ctx *cli.Context, log log.Logger) (*node.L2EndpointConfig, error) { func NewL2EndpointConfig(ctx *cli.Context, log log.Logger) (*node.L2EndpointConfig, error) {
...@@ -134,13 +131,21 @@ func NewL2EndpointConfig(ctx *cli.Context, log log.Logger) (*node.L2EndpointConf ...@@ -134,13 +131,21 @@ func NewL2EndpointConfig(ctx *cli.Context, log log.Logger) (*node.L2EndpointConf
}, nil }, nil
} }
func NewDriverConfig(ctx *cli.Context) (*driver.Config, error) { // NewL2SyncEndpointConfig returns a pointer to a L2SyncEndpointConfig if the
// flag is set, otherwise nil.
func NewL2SyncEndpointConfig(ctx *cli.Context) *node.L2SyncEndpointConfig {
return &node.L2SyncEndpointConfig{
L2NodeAddr: ctx.GlobalString(flags.BackupL2UnsafeSyncRPC.Name),
}
}
func NewDriverConfig(ctx *cli.Context) *driver.Config {
return &driver.Config{ return &driver.Config{
VerifierConfDepth: ctx.GlobalUint64(flags.VerifierL1Confs.Name), VerifierConfDepth: ctx.GlobalUint64(flags.VerifierL1Confs.Name),
SequencerConfDepth: ctx.GlobalUint64(flags.SequencerL1Confs.Name), SequencerConfDepth: ctx.GlobalUint64(flags.SequencerL1Confs.Name),
SequencerEnabled: ctx.GlobalBool(flags.SequencerEnabledFlag.Name), SequencerEnabled: ctx.GlobalBool(flags.SequencerEnabledFlag.Name),
SequencerStopped: ctx.GlobalBool(flags.SequencerStoppedFlag.Name), SequencerStopped: ctx.GlobalBool(flags.SequencerStoppedFlag.Name),
}, nil }
} }
func NewRollupConfig(ctx *cli.Context) (*rollup.Config, error) { func NewRollupConfig(ctx *cli.Context) (*rollup.Config, error) {
......
...@@ -24,6 +24,7 @@ type L1ClientConfig struct { ...@@ -24,6 +24,7 @@ type L1ClientConfig struct {
func L1ClientDefaultConfig(config *rollup.Config, trustRPC bool, kind RPCProviderKind) *L1ClientConfig { func L1ClientDefaultConfig(config *rollup.Config, trustRPC bool, kind RPCProviderKind) *L1ClientConfig {
// Cache 3/2 worth of sequencing window of receipts and txs // Cache 3/2 worth of sequencing window of receipts and txs
span := int(config.SeqWindowSize) * 3 / 2 span := int(config.SeqWindowSize) * 3 / 2
fullSpan := span
if span > 1000 { // sanity cap. If a large sequencing window is configured, do not make the cache too large if span > 1000 { // sanity cap. If a large sequencing window is configured, do not make the cache too large
span = 1000 span = 1000
} }
...@@ -40,7 +41,8 @@ func L1ClientDefaultConfig(config *rollup.Config, trustRPC bool, kind RPCProvide ...@@ -40,7 +41,8 @@ func L1ClientDefaultConfig(config *rollup.Config, trustRPC bool, kind RPCProvide
MustBePostMerge: false, MustBePostMerge: false,
RPCProviderKind: kind, RPCProviderKind: kind,
}, },
L1BlockRefsCacheSize: span, // Not bounded by span, to cover find-sync-start range fully for speedy recovery after errors.
L1BlockRefsCacheSize: fullSpan,
} }
} }
......
...@@ -34,6 +34,7 @@ func L2ClientDefaultConfig(config *rollup.Config, trustRPC bool) *L2ClientConfig ...@@ -34,6 +34,7 @@ func L2ClientDefaultConfig(config *rollup.Config, trustRPC bool) *L2ClientConfig
span *= 12 span *= 12
span /= int(config.BlockTime) span /= int(config.BlockTime)
} }
fullSpan := span
if span > 1000 { // sanity cap. If a large sequencing window is configured, do not make the cache too large if span > 1000 { // sanity cap. If a large sequencing window is configured, do not make the cache too large
span = 1000 span = 1000
} }
...@@ -50,7 +51,8 @@ func L2ClientDefaultConfig(config *rollup.Config, trustRPC bool) *L2ClientConfig ...@@ -50,7 +51,8 @@ func L2ClientDefaultConfig(config *rollup.Config, trustRPC bool) *L2ClientConfig
MustBePostMerge: true, MustBePostMerge: true,
RPCProviderKind: RPCKindBasic, RPCProviderKind: RPCKindBasic,
}, },
L2BlockRefsCacheSize: span, // Not bounded by span, to cover find-sync-start range fully for speedy recovery after errors.
L2BlockRefsCacheSize: fullSpan,
L1ConfigsCacheSize: span, L1ConfigsCacheSize: span,
RollupCfg: config, RollupCfg: config,
} }
......
package sources
import (
"context"
"errors"
"sync"
"github.com/ethereum-optimism/optimism/op-node/client"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/sources/caching"
"github.com/ethereum/go-ethereum/log"
"github.com/libp2p/go-libp2p/core/peer"
)
var ErrNoUnsafeL2PayloadChannel = errors.New("unsafeL2Payloads channel must not be nil")
// RpcSyncPeer is a mock PeerID for the RPC sync client.
var RpcSyncPeer peer.ID = "ALT_RPC_SYNC"
type receivePayload = func(ctx context.Context, from peer.ID, payload *eth.ExecutionPayload) error
type SyncClientInterface interface {
Start() error
Close() error
fetchUnsafeBlockFromRpc(ctx context.Context, blockNumber uint64)
}
type SyncClient struct {
*L2Client
FetchUnsafeBlock chan uint64
done chan struct{}
receivePayload receivePayload
wg sync.WaitGroup
}
var _ SyncClientInterface = (*SyncClient)(nil)
type SyncClientConfig struct {
L2ClientConfig
}
func SyncClientDefaultConfig(config *rollup.Config, trustRPC bool) *SyncClientConfig {
return &SyncClientConfig{
*L2ClientDefaultConfig(config, trustRPC),
}
}
func NewSyncClient(receiver receivePayload, client client.RPC, log log.Logger, metrics caching.Metrics, config *SyncClientConfig) (*SyncClient, error) {
l2Client, err := NewL2Client(client, log, metrics, &config.L2ClientConfig)
if err != nil {
return nil, err
}
return &SyncClient{
L2Client: l2Client,
FetchUnsafeBlock: make(chan uint64, 128),
done: make(chan struct{}),
receivePayload: receiver,
}, nil
}
// Start starts up the state loop.
// The loop will have been started if the returned error is nil.
func (s *SyncClient) Start() error {
s.wg.Add(1)
go s.eventLoop()
return nil
}
// Close sends a signal to the event loop to stop.
func (s *SyncClient) Close() error {
s.done <- struct{}{}
s.wg.Wait()
return nil
}
// eventLoop is the main event loop for the sync client.
func (s *SyncClient) eventLoop() {
defer s.wg.Done()
s.log.Info("Starting sync client event loop")
for {
select {
case <-s.done:
return
case blockNumber := <-s.FetchUnsafeBlock:
s.fetchUnsafeBlockFromRpc(context.Background(), blockNumber)
}
}
}
// fetchUnsafeBlockFromRpc attempts to fetch an unsafe execution payload from the backup unsafe sync RPC.
// WARNING: This function fails silently (aside from warning logs).
//
// Post Shanghai hardfork, the engine API's `PayloadBodiesByRange` method will be much more efficient, but for now,
// the `eth_getBlockByNumber` method is more widely available.
func (s *SyncClient) fetchUnsafeBlockFromRpc(ctx context.Context, blockNumber uint64) {
s.log.Info("Requesting unsafe payload from backup RPC", "block number", blockNumber)
payload, err := s.PayloadByNumber(ctx, blockNumber)
if err != nil {
s.log.Warn("Failed to convert block to execution payload", "block number", blockNumber, "err", err)
return
}
// Signature validation is not necessary here since the backup RPC is trusted.
if _, ok := payload.CheckBlockHash(); !ok {
s.log.Warn("Received invalid payload from backup RPC; invalid block hash", "payload", payload.ID())
return
}
s.log.Info("Received unsafe payload from backup RPC", "payload", payload.ID())
// Send the retrieved payload to the `unsafeL2Payloads` channel.
if err = s.receivePayload(ctx, RpcSyncPeer, payload); err != nil {
s.log.Warn("Failed to send payload into the driver's unsafeL2Payloads channel", "payload", payload.ID(), "err", err)
return
} else {
s.log.Info("Sent received payload into the driver's unsafeL2Payloads channel", "payload", payload.ID())
}
}
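// A minimal wiring sketch (assumed caller-side code, not part of the file above;
// the helper name and the logging receiver are hypothetical stand-ins): the
// receiver callback is whatever feeds payloads into the driver's
// unsafeL2Payloads channel, here replaced by a function that only logs.
package example

import (
	"context"

	"github.com/ethereum/go-ethereum/log"
	"github.com/libp2p/go-libp2p/core/peer"

	"github.com/ethereum-optimism/optimism/op-node/client"
	"github.com/ethereum-optimism/optimism/op-node/eth"
	"github.com/ethereum-optimism/optimism/op-node/rollup"
	"github.com/ethereum-optimism/optimism/op-node/sources"
	"github.com/ethereum-optimism/optimism/op-node/sources/caching"
)

func startBackupSync(rpcCl client.RPC, cfg *rollup.Config, logger log.Logger, m caching.Metrics) (*sources.SyncClient, error) {
	receiver := func(ctx context.Context, from peer.ID, payload *eth.ExecutionPayload) error {
		// In the driver this hands the payload to the unsafeL2Payloads channel.
		logger.Info("received backup-sync payload", "payload", payload.ID(), "from", from)
		return nil
	}
	syncCl, err := sources.NewSyncClient(receiver, rpcCl, logger, m, sources.SyncClientDefaultConfig(cfg, true))
	if err != nil {
		return nil, err
	}
	if err := syncCl.Start(); err != nil {
		return nil, err
	}
	// Requests are non-blocking as long as the 128-entry FetchUnsafeBlock buffer has room.
	syncCl.FetchUnsafeBlock <- 1234
	return syncCl, nil
}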
...@@ -47,6 +47,9 @@ type HeaderInfo struct { ...@@ -47,6 +47,9 @@ type HeaderInfo struct {
txHash common.Hash txHash common.Hash
receiptHash common.Hash receiptHash common.Hash
gasUsed uint64 gasUsed uint64
// withdrawalsRoot was added in Shapella and is thus optional
withdrawalsRoot *common.Hash
} }
var _ eth.BlockInfo = (*HeaderInfo)(nil) var _ eth.BlockInfo = (*HeaderInfo)(nil)
...@@ -113,7 +116,10 @@ type rpcHeader struct { ...@@ -113,7 +116,10 @@ type rpcHeader struct {
Nonce types.BlockNonce `json:"nonce"` Nonce types.BlockNonce `json:"nonce"`
// BaseFee was added by EIP-1559 and is ignored in legacy headers. // BaseFee was added by EIP-1559 and is ignored in legacy headers.
BaseFee *hexutil.Big `json:"baseFeePerGas" rlp:"optional"` BaseFee *hexutil.Big `json:"baseFeePerGas"`
// WithdrawalsRoot was added by EIP-4895 and is ignored in legacy headers.
WithdrawalsRoot *common.Hash `json:"withdrawalsRoot"`
// untrusted info included by RPC, may have to be checked // untrusted info included by RPC, may have to be checked
Hash common.Hash `json:"hash"` Hash common.Hash `json:"hash"`
...@@ -160,6 +166,7 @@ func (hdr *rpcHeader) computeBlockHash() common.Hash { ...@@ -160,6 +166,7 @@ func (hdr *rpcHeader) computeBlockHash() common.Hash {
MixDigest: hdr.MixDigest, MixDigest: hdr.MixDigest,
Nonce: hdr.Nonce, Nonce: hdr.Nonce,
BaseFee: (*big.Int)(hdr.BaseFee), BaseFee: (*big.Int)(hdr.BaseFee),
WithdrawalsHash: hdr.WithdrawalsRoot,
} }
return gethHeader.Hash() return gethHeader.Hash()
} }
...@@ -188,6 +195,7 @@ func (hdr *rpcHeader) Info(trustCache bool, mustBePostMerge bool) (*HeaderInfo, ...@@ -188,6 +195,7 @@ func (hdr *rpcHeader) Info(trustCache bool, mustBePostMerge bool) (*HeaderInfo,
txHash: hdr.TxHash, txHash: hdr.TxHash,
receiptHash: hdr.ReceiptHash, receiptHash: hdr.ReceiptHash,
gasUsed: uint64(hdr.GasUsed), gasUsed: uint64(hdr.GasUsed),
withdrawalsRoot: hdr.WithdrawalsRoot,
} }
return &info, nil return &info, nil
} }
......
package sources
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/require"
)
func TestBlockJSON(t *testing.T) {
testCases := []struct {
Name string
OK bool
Data string
}{
{Name: "pre-Shanghai good tx", OK: true, Data: `{"number":"0x840249","hash":"0x9ef7cd2241202b919a0e51240818a8666c73f7ce4b908931e3ae6d26d30f7663","transactions":["0x39c666d9b5cec429accad7b0f94f789ca2ebeb5294b8b129c1b76f552daf57d3","0x2ca7289ab3738d17e0f5093bd96c97c06c9a2ea4c22fc84a6a7fbfda93ce55ee","0xb0085de1476530de3efc6928c4683e7c40f8fac18875f74cbcc47df159de17d9","0xe01c8631c86ded63af95b8dbc0c8aac5d31254c14d6ecb4cc51d98259d838e52","0x69414a126a6f07ab5e31ad2f9069fb986b7c490e096898473873e41ece6af783","0xa2fef1133ee726533c7f190f246fede123e3706a03933c1febc92618f90d2804","0x6585ec5c4c2bbf1f683f90f58e18f3b38d875e94457fe4cbb7bc5bf6581f83af","0x1db276b864fbf01dcf8cededf8d597553ecb0eb9438edfaf2f5bd0cc93297c66","0xcbe7ed31654af4e191ca53445b82de040ae2cd92459a3f951bdcce423d780f08","0x808ba5211f03cc78a732ff0f9383c6355e63c83ae8c6035ced2ba6f7c331dc63","0xdd66f1f26672849ef54c420210f479c9f0c46924d8e9f7b210981ffe8d3fac82","0x254abb2f8cdcffe9ef62ab924312a1e4142578db87e4f7c199fd35991e92f014","0xa7b7c654e7073b8043b680b7ffc95d3f2099abaa0b0578d6f954a2a7c99404e1","0x7ccdfa698c8acf47ab9316ed078eb40819ff575bcf612c6f59f29e7726df3f96","0xa0b035ef315824a6f6a6565fa8de27042ade3af9cf0583a36dea83d6e01bf2a8","0x1ebad7f3e8cb3543d4963686a94d99f61839f666831eab9c9c1b4711de11d3d9","0x501750278e91d8b5be1ccf60e793d4bbcd9b3bb3ccc518d3634a71caeac65f48","0xd80ff8af29ae163d5811ba511e60b3a87a279f677bb3872a0f1aa6d0a226e880","0x096acab3b3fe47b149d375782d1eb00b9fef7904076d60c54b3c197b04e6bf82","0xbe9d1738af74a22400591a9a808fb01a25ab41e2e56f202dd7251eb113e8ceeb","0x0834c720e55cccd97aaf4f8fb0cb66afb9881fb6a762c0f70473ec53f98a712e","0x51a0c33c9b37245b416575bdd2751c0d8a5d8bead49585ac427bfc873d4016af","0x531c25d51ccda59aa9ea82e85c99be9dd4e285af9b8973cbab9ac4a38e26e55a","0x93ac6c08d21cb1b61ff59e5e2d6fa3f9ad54008b0a66c669199050bef219f6e3","0x3792db6dd6285f409e4281951e9f78dad16c4a78072ff1c909dfadea5658d857","0xd2d51764c01e8c0a43fbe362704388df5bacf7e5e620c3864e242530ffb3e828","0x516b0227d9e64eb6e0de6862764d40f5376b5f12fec878436fea3479b4c36bb8","0x81b0abc78b82840adb666775b182a9e292f663b64bcd35004c04436ed3c8281c","0xd0287570d431d2baea96ecc81cb890e7f4f06ab5df02f9b4067768abca19acb5","0x76ddab2674369f34946c5fa2f05e2aa8566d86235b83e808e9b27bc106e04ac7","0x34a5c74011a2c8a00103bc91bfbfd94aa99cd569be69066e4bf64d188fe8714e","0x7b9730ead1b9f59b206d0ddea87be9383ba3fc7b496c7863b0cb847889b86617","0x77166ee0409ba86bd26e7c03ad1a927abaf5af8a8a37149e725cd37512091dd6","0x3c2b6c2ae505c5c36d5f316c1fcb5f54f7346ed35ae35c93462991ded7968a68","0xf99a792837e13827b5e0a8915fb59c760babc95d242feca99a5594e64ff6b6e2","0x522313f5d923f048ae5bd0b5595c1f4fc883bc0b3cf3cb0939d3fcf8b08c829c","0x471ceb0e85af594aa56deca54cb8198567b2afd8406722ea530077aaa6b641b3","0x3e9dca502e9039ae0c6d642f62e9562ff00010c6bfbb8234a6135712ba70dfda","0xc95cac67267f4accb9b5950316ac64772f7d082bed6b712c09cf2da0bdc237b7","0xfca28fdbd13fc16daf7aec7d4a2ad2c6b5f0b2a7b0fb1d9167c09b5e115ff26e","0xc73124ca798b2f7a5df2ea4d568efab2f41b135130ea5cc41d4bcb4b5c57d5bd","0x29abb76b5e7a5ce137bf9c22474d386eb58d249f43178d2b2e15c16dfdc5ca80","0x03e5ab25a58bd44fb9dd0c698b323eab8b8363479dfcbcbb16d0a0bd983880ae","0x3c8ee80ddea7fa2d2b75e44563c10c10756f598e8ad252a49c5d3e8a5c8e6cbf","0xaffa73b68bc7ab0c3f5e28377f5ca0a5df33c0a485f64dc094b7f6ae23353203","0xc66c9c66fbc8fe97fcc16506cde7a58689af1004a18c6171cfe763bcd94f50b2","0x80fec96707519172b53790610d5800cd09a4243aca9bacfa956c56337d06f820","0x61b33bfcf11214906dcdce7d7ed83ad82f38184c03ded07f7782059d02eeedea","0x5d4138d4e28a8327e506cb012346b1b38b65f615a2b991d35cf5d4de244b3e6d","0x875a142b6d
fcf10ffb71a7afe0ce4672c047fc7e162ba0383390516d6334d45d","0x79b6df832bfbd04085d0b005a6e3ad8f00fc8717eed59280aa8107268b71e7e0","0xcb2fb25d268f65dc9312e89bd3c328c9847a3c9da282026793c54a745f825ab5","0xe483d4a36ad19fd5eacb7f6d9ad3ce080ad70ac673273e710f6e3d5acbc6559c","0x0564242c37d5013b671ef4864394cc0f3924c589f8aad64118223a9af2f164f6","0x48db358e80b278c3a46c2a166339797060a40f33984a5d974992cd9722139d5d","0x69d7758db91fae31fa35ecbed4d40897c5087f45dc796cd796b8ceead21f972e","0x2951478916ecd27a8e808d08f85be4bf2c0b0e0546f21f4e309145dd96eb8df1","0xaca9028cb5d55bbf71b7bff9884a9a3b0b38a575ffc8f8807ce345cf8bd298ef","0xc7f625a19ee41a1750eac9428b4394a9a2476b8ea2d31b4c2f9f5b4fcb86cae3","0x45499074aa521ac4151138f0aad969bcc2dfc1648d22ff8c42e51c74cb77414d","0x00b5b05c6d1a2eb8abe2c383da600516515e383fc8a29953bb6e6d167e9705b2","0x6fc411f24c7b4b8d821b45de32b9edc5ac998d1ac748a98abe8e983c6f39fc19"],"difficulty":"0x0","extraData":"0xd883010b02846765746888676f312e32302e31856c696e7578","gasLimit":"0x1c9c380","gasUsed":"0xa79638","logsBloom":"0xb034000008010014411408c080a0018440087220211154100005a1388807241142a2504080034a00111212a47f05008520200000280202a12800538cc06488486a0141989c7800c0c848011f02249661800e08449145b040a252d18082c009000641004052c80102000804ac10901c24032000980010438a01e50a90a0d8008c138c21204040000b20425000833041028000148124c2012d0aa8d1d0548301808228002015184090000224021040d68220100210220480420308455c382a40020130dc42502986080600000115034c0401c81828490410308005610048026b822e10b4228071ba00bdd20140621b2000c02012300808084181ac308200000011","miner":"0x0000000000000000000000000000000000000000","mixHash":"0x31f0c0305fc07a93b1a33da339c79aadbe8d9811c78d2b514cd18d64e1328f25","nonce":"0x0000000000000000","parentHash":"0x2303b55af4add799b19275a491b150c1a03075395f87a7856a4e3327595ed7df","receiptsRoot":"0x99da71b17ae1929db912c3315ebe349d37f2bb600454616fdde0ee90d6dbc59e","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","size":"0xea6d","stateRoot":"0xd12bf4cf3941cf48be329a939b13d3403d326841c69cdcc9a9c13ab2f227e904","timestamp":"0x640fdeb0","totalDifficulty":"0xa4a470","transactionsRoot":"0x1ad3212eca045505cfc4cacf675b5fa2e7dc7b9f9cee88191464f97d1c9fbca4","uncles":[],"baseFeePerGas":"0x7ccf990f8"}
`},
{Name: "pre-Shanghai bad receipts root", OK: false, Data: `{"number":"0x840249","hash":"0x9ef7cd2241202b919a0e51240818a8666c73f7ce4b908931e3ae6d26d30f7663","transactions":["0x39c666d9b5cec429accad7b0f94f789ca2ebeb5294b8b129c1b76f552daf57d3","0x2ca7289ab3738d17e0f5093bd96c97c06c9a2ea4c22fc84a6a7fbfda93ce55ee","0xb0085de1476530de3efc6928c4683e7c40f8fac18875f74cbcc47df159de17d9","0xe01c8631c86ded63af95b8dbc0c8aac5d31254c14d6ecb4cc51d98259d838e52","0x69414a126a6f07ab5e31ad2f9069fb986b7c490e096898473873e41ece6af783","0xa2fef1133ee726533c7f190f246fede123e3706a03933c1febc92618f90d2804","0x6585ec5c4c2bbf1f683f90f58e18f3b38d875e94457fe4cbb7bc5bf6581f83af","0x1db276b864fbf01dcf8cededf8d597553ecb0eb9438edfaf2f5bd0cc93297c66","0xcbe7ed31654af4e191ca53445b82de040ae2cd92459a3f951bdcce423d780f08","0x808ba5211f03cc78a732ff0f9383c6355e63c83ae8c6035ced2ba6f7c331dc63","0xdd66f1f26672849ef54c420210f479c9f0c46924d8e9f7b210981ffe8d3fac82","0x254abb2f8cdcffe9ef62ab924312a1e4142578db87e4f7c199fd35991e92f014","0xa7b7c654e7073b8043b680b7ffc95d3f2099abaa0b0578d6f954a2a7c99404e1","0x7ccdfa698c8acf47ab9316ed078eb40819ff575bcf612c6f59f29e7726df3f96","0xa0b035ef315824a6f6a6565fa8de27042ade3af9cf0583a36dea83d6e01bf2a8","0x1ebad7f3e8cb3543d4963686a94d99f61839f666831eab9c9c1b4711de11d3d9","0x501750278e91d8b5be1ccf60e793d4bbcd9b3bb3ccc518d3634a71caeac65f48","0xd80ff8af29ae163d5811ba511e60b3a87a279f677bb3872a0f1aa6d0a226e880","0x096acab3b3fe47b149d375782d1eb00b9fef7904076d60c54b3c197b04e6bf82","0xbe9d1738af74a22400591a9a808fb01a25ab41e2e56f202dd7251eb113e8ceeb","0x0834c720e55cccd97aaf4f8fb0cb66afb9881fb6a762c0f70473ec53f98a712e","0x51a0c33c9b37245b416575bdd2751c0d8a5d8bead49585ac427bfc873d4016af","0x531c25d51ccda59aa9ea82e85c99be9dd4e285af9b8973cbab9ac4a38e26e55a","0x93ac6c08d21cb1b61ff59e5e2d6fa3f9ad54008b0a66c669199050bef219f6e3","0x3792db6dd6285f409e4281951e9f78dad16c4a78072ff1c909dfadea5658d857","0xd2d51764c01e8c0a43fbe362704388df5bacf7e5e620c3864e242530ffb3e828","0x516b0227d9e64eb6e0de6862764d40f5376b5f12fec878436fea3479b4c36bb8","0x81b0abc78b82840adb666775b182a9e292f663b64bcd35004c04436ed3c8281c","0xd0287570d431d2baea96ecc81cb890e7f4f06ab5df02f9b4067768abca19acb5","0x76ddab2674369f34946c5fa2f05e2aa8566d86235b83e808e9b27bc106e04ac7","0x34a5c74011a2c8a00103bc91bfbfd94aa99cd569be69066e4bf64d188fe8714e","0x7b9730ead1b9f59b206d0ddea87be9383ba3fc7b496c7863b0cb847889b86617","0x77166ee0409ba86bd26e7c03ad1a927abaf5af8a8a37149e725cd37512091dd6","0x3c2b6c2ae505c5c36d5f316c1fcb5f54f7346ed35ae35c93462991ded7968a68","0xf99a792837e13827b5e0a8915fb59c760babc95d242feca99a5594e64ff6b6e2","0x522313f5d923f048ae5bd0b5595c1f4fc883bc0b3cf3cb0939d3fcf8b08c829c","0x471ceb0e85af594aa56deca54cb8198567b2afd8406722ea530077aaa6b641b3","0x3e9dca502e9039ae0c6d642f62e9562ff00010c6bfbb8234a6135712ba70dfda","0xc95cac67267f4accb9b5950316ac64772f7d082bed6b712c09cf2da0bdc237b7","0xfca28fdbd13fc16daf7aec7d4a2ad2c6b5f0b2a7b0fb1d9167c09b5e115ff26e","0xc73124ca798b2f7a5df2ea4d568efab2f41b135130ea5cc41d4bcb4b5c57d5bd","0x29abb76b5e7a5ce137bf9c22474d386eb58d249f43178d2b2e15c16dfdc5ca80","0x03e5ab25a58bd44fb9dd0c698b323eab8b8363479dfcbcbb16d0a0bd983880ae","0x3c8ee80ddea7fa2d2b75e44563c10c10756f598e8ad252a49c5d3e8a5c8e6cbf","0xaffa73b68bc7ab0c3f5e28377f5ca0a5df33c0a485f64dc094b7f6ae23353203","0xc66c9c66fbc8fe97fcc16506cde7a58689af1004a18c6171cfe763bcd94f50b2","0x80fec96707519172b53790610d5800cd09a4243aca9bacfa956c56337d06f820","0x61b33bfcf11214906dcdce7d7ed83ad82f38184c03ded07f7782059d02eeedea","0x5d4138d4e28a8327e506cb012346b1b38b65f615a2b991d35cf5d4de244b3e6d","0
x875a142b6dfcf10ffb71a7afe0ce4672c047fc7e162ba0383390516d6334d45d","0x79b6df832bfbd04085d0b005a6e3ad8f00fc8717eed59280aa8107268b71e7e0","0xcb2fb25d268f65dc9312e89bd3c328c9847a3c9da282026793c54a745f825ab5","0xe483d4a36ad19fd5eacb7f6d9ad3ce080ad70ac673273e710f6e3d5acbc6559c","0x0564242c37d5013b671ef4864394cc0f3924c589f8aad64118223a9af2f164f6","0x48db358e80b278c3a46c2a166339797060a40f33984a5d974992cd9722139d5d","0x69d7758db91fae31fa35ecbed4d40897c5087f45dc796cd796b8ceead21f972e","0x2951478916ecd27a8e808d08f85be4bf2c0b0e0546f21f4e309145dd96eb8df1","0xaca9028cb5d55bbf71b7bff9884a9a3b0b38a575ffc8f8807ce345cf8bd298ef","0xc7f625a19ee41a1750eac9428b4394a9a2476b8ea2d31b4c2f9f5b4fcb86cae3","0x45499074aa521ac4151138f0aad969bcc2dfc1648d22ff8c42e51c74cb77414d","0x00b5b05c6d1a2eb8abe2c383da600516515e383fc8a29953bb6e6d167e9705b2","0x6fc411f24c7b4b8d821b45de32b9edc5ac998d1ac748a98abe8e983c6f39fc19"],"difficulty":"0x0","extraData":"0xd883010b02846765746888676f312e32302e31856c696e7578","gasLimit":"0x1c9c380","gasUsed":"0xa79638","logsBloom":"0xb034000008010014411408c080a0018440087220211154100005a1388807241142a2504080034a00111212a47f05008520200000280202a12800538cc06488486a0141989c7800c0c848011f02249661800e08449145b040a252d18082c009000641004052c80102000804ac10901c24032000980010438a01e50a90a0d8008c138c21204040000b20425000833041028000148124c2012d0aa8d1d0548301808228002015184090000224021040d68220100210220480420308455c382a40020130dc42502986080600000115034c0401c81828490410308005610048026b822e10b4228071ba00bdd20140621b2000c02012300808084181ac308200000011","miner":"0x0000000000000000000000000000000000000000","mixHash":"0x31f0c0305fc07a93b1a33da339c79aadbe8d9811c78d2b514cd18d64e1328f25","nonce":"0x0000000000000000","parentHash":"0x2303b55af4add799b19275a491b150c1a03075395f87a7856a4e3327595ed7df","receiptsRoot":"0x99da71b17ae1929db912c3315ebe349d37f2bb600454616fdde0ee90d6dbc59f","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","size":"0xea6d","stateRoot":"0xd12bf4cf3941cf48be329a939b13d3403d326841c69cdcc9a9c13ab2f227e904","timestamp":"0x640fdeb0","totalDifficulty":"0xa4a470","transactionsRoot":"0x1ad3212eca045505cfc4cacf675b5fa2e7dc7b9f9cee88191464f97d1c9fbca4","uncles":[],"baseFeePerGas":"0x7ccf990f8"}
`},
{Name: "Shanghai good tx", OK: true, Data: `{"baseFeePerGas":"0x3fb7c357","difficulty":"0x0","extraData":"0x","gasLimit":"0x1c9c380","gasUsed":"0x18f759","hash":"0xa16c6bcda4fdca88b5761965c4d724f7afc6a6900d9051a204e544870adb3452","logsBloom":"0x020010404000001a0000021000000080001100410000100001000010040200980220400000008806200200000100000000000000000000008000000400042000000050000040000112080808800002044000040004042008800480002000000000000002020020000042002400000820000080040000000010200010020010100101212050000008000000008000001010200c80000112010000438040020400000000202400000000002002a0210402000622010000000001700144000040000000002204000000c000410105024010000808000000002004002000000261000000822200200800881000000012500400400000000000000040010000800000","miner":"0x000095e79eac4d76aab57cb2c1f091d553b36ca0","mixHash":"0x5b53dc49cbab268ef9950b1d81b5e36a1b2f1b97aee1b7ff6e4db0e06c29a8b0","nonce":"0x0000000000000000","number":"0x84161e","parentHash":"0x72d92c1498e05952988d4e79a695928a6bcbd37239f8a1734051263b4d3504b8","receiptsRoot":"0xaff90ae18dcc35924a4bddb68d403b8b7812c10c3ea2a114f34105c87d75bcdb","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","size":"0x2a51","stateRoot":"0xc56738518b2c7854a640ae25996d2211c9ef0dd2e4dd9e59e9d9cacef39622da","timestamp":"0x64110a5c","totalDifficulty":"0xa4a470","transactions":["0x1e8f148a9aea7d8d16ea6e9446723b8f262e8bcd89c7c961d52046ebd43b4598","0xab5c870f4c367012bd763172afbfbe68fbf35336a66ae41aff3f2c9dbf4ea3f8","0xa81fd92b2d0f0bbd3cc355f869cca3243c98c5e2641db9ecf3eeabb3b13bff6a","0xa92c7b720c08c83f1a0ed7e4c163200e30a3a8c03fcc5a51e685ea20cd0cb577","0x6921b429ad2ec1e97d3457049ad2e893b5a0349beba47ca1c74a9540af75347a","0xf776b2da0b835dde05d0d8b76fd19385d61e7055036cf637f804b36dc94f2384","0x9a08d899cd14ebb930ed59fa774afdb88a22615b3a931e930931ea54d26dc0bc","0x0fe0d97e25d5eb11a33a3e8278584c3780941fc2675bdf8fc547cee3d1fd3b17","0xef47a60f57f177a683c723c658137efab66d311e1c5abbc4d74f653535144d03","0xe23a5b35faae5335adc5aca38c5d633b00438b798c2053104b8df48406c9b141","0xd8cea4ba619b317bc05d58534af73beec6c2548b31b24d4dc61c9bbd29cfa17a","0x79a4b9d90b02c768baaad305f266281213cc75062cbe99a13222cc0c4b509498","0x6790a3bbddbeb21fcb736a59b3775755051c3a6344d8390cf8ca27f2e8a814f0","0x87ec7ace5442db252b5751ffddd38dcb04b088d36b6b0e526ff25607a4293c81","0x40cb487ecffda94f97ce7fc0f7163f2f024235df2c8291169edc80dac063e6d0","0xb76bb3d88c9b30d927c45ccfcf8d5b0054411ac8501ad588822a7d04690cccf6","0x798ebe823209869347c08bd81e04fbf60e9bdfe44b1cc923215182d0cf3d4edb","0xbe68a7e02725f799a65ebb069ccc83a014ac7c40e4119bf7c220a2f6ddfee295","0xc90c3a72efe81331727fcce4b5bd4906066da314ca9a0b44023a6b09ea7e8114","0x619a6cbd43cde074d314c19623bd66d9fb1e13c158d7138775236f798dc1245e","0xca5a56cd77b9e5b0e79020cc6346edf205bc11e901984d805125f28c2e6686e6","0x999c9ddeed67c6ef6fbf02a6e977a6c1b68e18d24814e51643c7157b87a43e0a","0x47c8f5d0b3778e4c34eba7fcc356fa04a5afd954ccf484728e72c002764dd3c4","0x396797ae0ebcdb72ff1f96fd08b6128f78acc7417353f142f1a5facd425a33e6","0x454aa43d6546a6f62246826c16b7a49c6c704238c18802ef0d659922f23a573c","0x317ecb5bd19caa42a69f836d41556ebb0e0e00e1c6cd2dee230e6e6192612527","0xc879285db5ef0a6bce98021584d16f134c1dc0aed8cc988802c4f72ba6877ff6","0xecaa2d6f597608307e5084854854ba6dc1e69395e2abea14f2c6a2fa1d6faf9a","0x4dd69b69a568ff30ae439e2ded72fbd7f2e7aaa345836703663f155c749c5eed"],"transactionsRoot":"0x4a87d0cf5990b1c5bac631583e5965c2ba943858bebb2e07f74d0b697f73821a","uncles":[],"withdrawals":[{"index":"0x1170","validatorIndex":"0x38c2c","address":"0x8f0844fd51e31ff6bf5babe21
dccf7328e19fd9f","amount":"0x66edfd65"},{"index":"0x1171","validatorIndex":"0x38c2d","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6cd228e4"},{"index":"0x1172","validatorIndex":"0x38c2e","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x77f3431b"},{"index":"0x1173","validatorIndex":"0x38c2f","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6b61f268"},{"index":"0x1174","validatorIndex":"0x38c30","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6e10bb21"},{"index":"0x1175","validatorIndex":"0x38c31","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6eb115a5"},{"index":"0x1176","validatorIndex":"0x38c32","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x7caead1d"},{"index":"0x1177","validatorIndex":"0x38c33","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x772c0ddf"},{"index":"0x1178","validatorIndex":"0x38c34","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x75930a95"},{"index":"0x1179","validatorIndex":"0x38c35","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x76a4db09"},{"index":"0x117a","validatorIndex":"0x38c36","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x7e692b27"},{"index":"0x117b","validatorIndex":"0x38c37","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x72038ae6"},{"index":"0x117c","validatorIndex":"0x38c38","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6ccce352"},{"index":"0x117d","validatorIndex":"0x38c39","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x79ef6898"},{"index":"0x117e","validatorIndex":"0x38c3a","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6d58977d"},{"index":"0x117f","validatorIndex":"0x38c3b","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x76f7d208"}],"withdrawalsRoot":"0xbe712c930a0665264b025ced87cc7839eef95a3cbc26dadc93e9e185a350ad28"}
`},
{Name: "Shanghai bad withdrawals root", OK: false, Data: `{"baseFeePerGas":"0x3fb7c357","difficulty":"0x0","extraData":"0x","gasLimit":"0x1c9c380","gasUsed":"0x18f759","hash":"0xa16c6bcda4fdca88b5761965c4d724f7afc6a6900d9051a204e544870adb3452","logsBloom":"0x020010404000001a0000021000000080001100410000100001000010040200980220400000008806200200000100000000000000000000008000000400042000000050000040000112080808800002044000040004042008800480002000000000000002020020000042002400000820000080040000000010200010020010100101212050000008000000008000001010200c80000112010000438040020400000000202400000000002002a0210402000622010000000001700144000040000000002204000000c000410105024010000808000000002004002000000261000000822200200800881000000012500400400000000000000040010000800000","miner":"0x000095e79eac4d76aab57cb2c1f091d553b36ca0","mixHash":"0x5b53dc49cbab268ef9950b1d81b5e36a1b2f1b97aee1b7ff6e4db0e06c29a8b0","nonce":"0x0000000000000000","number":"0x84161e","parentHash":"0x72d92c1498e05952988d4e79a695928a6bcbd37239f8a1734051263b4d3504b8","receiptsRoot":"0xaff90ae18dcc35924a4bddb68d403b8b7812c10c3ea2a114f34105c87d75bcdb","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","size":"0x2a51","stateRoot":"0xc56738518b2c7854a640ae25996d2211c9ef0dd2e4dd9e59e9d9cacef39622da","timestamp":"0x64110a5c","totalDifficulty":"0xa4a470","transactions":["0x1e8f148a9aea7d8d16ea6e9446723b8f262e8bcd89c7c961d52046ebd43b4598","0xab5c870f4c367012bd763172afbfbe68fbf35336a66ae41aff3f2c9dbf4ea3f8","0xa81fd92b2d0f0bbd3cc355f869cca3243c98c5e2641db9ecf3eeabb3b13bff6a","0xa92c7b720c08c83f1a0ed7e4c163200e30a3a8c03fcc5a51e685ea20cd0cb577","0x6921b429ad2ec1e97d3457049ad2e893b5a0349beba47ca1c74a9540af75347a","0xf776b2da0b835dde05d0d8b76fd19385d61e7055036cf637f804b36dc94f2384","0x9a08d899cd14ebb930ed59fa774afdb88a22615b3a931e930931ea54d26dc0bc","0x0fe0d97e25d5eb11a33a3e8278584c3780941fc2675bdf8fc547cee3d1fd3b17","0xef47a60f57f177a683c723c658137efab66d311e1c5abbc4d74f653535144d03","0xe23a5b35faae5335adc5aca38c5d633b00438b798c2053104b8df48406c9b141","0xd8cea4ba619b317bc05d58534af73beec6c2548b31b24d4dc61c9bbd29cfa17a","0x79a4b9d90b02c768baaad305f266281213cc75062cbe99a13222cc0c4b509498","0x6790a3bbddbeb21fcb736a59b3775755051c3a6344d8390cf8ca27f2e8a814f0","0x87ec7ace5442db252b5751ffddd38dcb04b088d36b6b0e526ff25607a4293c81","0x40cb487ecffda94f97ce7fc0f7163f2f024235df2c8291169edc80dac063e6d0","0xb76bb3d88c9b30d927c45ccfcf8d5b0054411ac8501ad588822a7d04690cccf6","0x798ebe823209869347c08bd81e04fbf60e9bdfe44b1cc923215182d0cf3d4edb","0xbe68a7e02725f799a65ebb069ccc83a014ac7c40e4119bf7c220a2f6ddfee295","0xc90c3a72efe81331727fcce4b5bd4906066da314ca9a0b44023a6b09ea7e8114","0x619a6cbd43cde074d314c19623bd66d9fb1e13c158d7138775236f798dc1245e","0xca5a56cd77b9e5b0e79020cc6346edf205bc11e901984d805125f28c2e6686e6","0x999c9ddeed67c6ef6fbf02a6e977a6c1b68e18d24814e51643c7157b87a43e0a","0x47c8f5d0b3778e4c34eba7fcc356fa04a5afd954ccf484728e72c002764dd3c4","0x396797ae0ebcdb72ff1f96fd08b6128f78acc7417353f142f1a5facd425a33e6","0x454aa43d6546a6f62246826c16b7a49c6c704238c18802ef0d659922f23a573c","0x317ecb5bd19caa42a69f836d41556ebb0e0e00e1c6cd2dee230e6e6192612527","0xc879285db5ef0a6bce98021584d16f134c1dc0aed8cc988802c4f72ba6877ff6","0xecaa2d6f597608307e5084854854ba6dc1e69395e2abea14f2c6a2fa1d6faf9a","0x4dd69b69a568ff30ae439e2ded72fbd7f2e7aaa345836703663f155c749c5eed"],"transactionsRoot":"0x4a87d0cf5990b1c5bac631583e5965c2ba943858bebb2e07f74d0b697f73821a","uncles":[],"withdrawals":[{"index":"0x1170","validatorIndex":"0x38c2c","address":"0x8f0844fd51e
31ff6bf5babe21dccf7328e19fd9f","amount":"0x66edfd65"},{"index":"0x1171","validatorIndex":"0x38c2d","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6cd228e4"},{"index":"0x1172","validatorIndex":"0x38c2e","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x77f3431b"},{"index":"0x1173","validatorIndex":"0x38c2f","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6b61f268"},{"index":"0x1174","validatorIndex":"0x38c30","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6e10bb21"},{"index":"0x1175","validatorIndex":"0x38c31","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6eb115a5"},{"index":"0x1176","validatorIndex":"0x38c32","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x7caead1d"},{"index":"0x1177","validatorIndex":"0x38c33","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x772c0ddf"},{"index":"0x1178","validatorIndex":"0x38c34","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x75930a95"},{"index":"0x1179","validatorIndex":"0x38c35","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x76a4db09"},{"index":"0x117a","validatorIndex":"0x38c36","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x7e692b27"},{"index":"0x117b","validatorIndex":"0x38c37","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x72038ae6"},{"index":"0x117c","validatorIndex":"0x38c38","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6ccce352"},{"index":"0x117d","validatorIndex":"0x38c39","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x79ef6898"},{"index":"0x117e","validatorIndex":"0x38c3a","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x6d58977d"},{"index":"0x117f","validatorIndex":"0x38c3b","address":"0x8f0844fd51e31ff6bf5babe21dccf7328e19fd9f","amount":"0x76f7d208"}],"withdrawalsRoot":"0xbe712c930a0665264b025ced87cc7839eef95a3cbc26dadc93e9e185a350ad27"}
`},
}
for _, testCase := range testCases {
var x rpcHeader
require.NoError(t, json.Unmarshal([]byte(testCase.Data), &x))
h := x.computeBlockHash()
if testCase.OK {
require.Equal(t, h, x.Hash, "blockhash should verify ok")
} else {
require.NotEqual(t, h, x.Hash, "expecting verification error")
}
}
}
...@@ -208,6 +208,13 @@ func NewL2OutputSubmitter(cfg Config, l log.Logger) (*L2OutputSubmitter, error) ...@@ -208,6 +208,13 @@ func NewL2OutputSubmitter(cfg Config, l log.Logger) (*L2OutputSubmitter, error)
return nil, err return nil, err
} }
version, err := l2ooContract.Version(&bind.CallOpts{})
if err != nil {
cancel()
return nil, err
}
log.Info("Connected to L2OutputOracle", "address", cfg.L2OutputOracleAddr, "version", version)
parsed, err := abi.JSON(strings.NewReader(bindings.L2OutputOracleMetaData.ABI)) parsed, err := abi.JSON(strings.NewReader(bindings.L2OutputOracleMetaData.ABI))
if err != nil { if err != nil {
cancel() cancel()
...@@ -384,30 +391,31 @@ func (l *L2OutputSubmitter) loop() { ...@@ -384,30 +391,31 @@ func (l *L2OutputSubmitter) loop() {
for { for {
select { select {
case <-ticker.C: case <-ticker.C:
cCtx, cancel := context.WithTimeout(ctx, 3*time.Minute) cCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
output, shouldPropose, err := l.FetchNextOutputInfo(cCtx) output, shouldPropose, err := l.FetchNextOutputInfo(cCtx)
if err != nil {
l.log.Error("Failed to fetch next output", "err", err)
cancel() cancel()
if err != nil {
break break
} }
if !shouldPropose { if !shouldPropose {
cancel()
break break
} }
cCtx, cancel = context.WithTimeout(ctx, 30*time.Second)
tx, err := l.CreateProposalTx(cCtx, output) tx, err := l.CreateProposalTx(cCtx, output)
cancel()
if err != nil { if err != nil {
l.log.Error("Failed to create proposal transaction", "err", err) l.log.Error("Failed to create proposal transaction", "err", err)
cancel()
break break
} }
cCtx, cancel = context.WithTimeout(ctx, 10*time.Minute)
if err := l.SendTransaction(cCtx, tx); err != nil { if err := l.SendTransaction(cCtx, tx); err != nil {
l.log.Error("Failed to send proposal transaction", "err", err) l.log.Error("Failed to send proposal transaction", "err", err)
cancel() cancel()
break break
} } else {
cancel() cancel()
}
case <-l.done: case <-l.done:
return return
......
package metrics
import (
"fmt"
"github.com/prometheus/client_golang/prometheus"
)
type Event struct {
Total prometheus.Counter
LastTime prometheus.Gauge
}
func (e *Event) Record() {
e.Total.Inc()
e.LastTime.SetToCurrentTime()
}
func NewEvent(factory Factory, ns string, name string, displayName string) Event {
return Event{
Total: factory.NewCounter(prometheus.CounterOpts{
Namespace: ns,
Name: fmt.Sprintf("%s_total", name),
Help: fmt.Sprintf("Count of %s events", displayName),
}),
LastTime: factory.NewGauge(prometheus.GaugeOpts{
Namespace: ns,
Name: fmt.Sprintf("last_%s_unix", name),
Help: fmt.Sprintf("Timestamp of last %s event", displayName),
}),
}
}
type EventVec struct {
Total prometheus.CounterVec
LastTime prometheus.GaugeVec
}
func (e *EventVec) Record(lvs ...string) {
e.Total.WithLabelValues(lvs...).Inc()
e.LastTime.WithLabelValues(lvs...).SetToCurrentTime()
}
func NewEventVec(factory Factory, ns string, name string, displayName string, labelNames []string) EventVec {
return EventVec{
Total: *factory.NewCounterVec(prometheus.CounterOpts{
Namespace: ns,
Name: fmt.Sprintf("%s_total", name),
Help: fmt.Sprintf("Count of %s events", displayName),
}, labelNames),
LastTime: *factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: fmt.Sprintf("last_%s_unix", name),
Help: fmt.Sprintf("Timestamp of last %s event", displayName),
},
labelNames),
}
}
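A minimal usage sketch of the Event helper added above, assuming a Factory value is already available from the service's metrics setup; the struct and field names below are illustrative only, not part of this commit:
// Hypothetical sketch (same package): a service metrics struct using Event.
type exampleServiceMetrics struct {
	BatchSubmitted Event
}
func newExampleServiceMetrics(ns string, factory Factory) *exampleServiceMetrics {
	return &exampleServiceMetrics{
		BatchSubmitted: NewEvent(factory, ns, "batch_submitted", "batch submitted"),
	}
}
// After a successful submission the service would call:
//	m.BatchSubmitted.Record()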
package metrics
import (
"encoding/binary"
"time"
"github.com/ethereum-optimism/optimism/op-node/eth"
"github.com/ethereum/go-ethereum/common"
"github.com/prometheus/client_golang/prometheus"
)
type RefMetricer interface {
RecordRef(layer string, name string, num uint64, timestamp uint64, h common.Hash)
RecordL1Ref(name string, ref eth.L1BlockRef)
RecordL2Ref(name string, ref eth.L2BlockRef)
}
// RefMetrics provides block reference metrics. It's a metrics module that's
// supposed to be embedded into a service metrics type. The service metrics type
// should set the full namespace and create the factory before calling
// MakeRefMetrics.
type RefMetrics struct {
RefsNumber *prometheus.GaugeVec
RefsTime *prometheus.GaugeVec
RefsHash *prometheus.GaugeVec
RefsSeqNr *prometheus.GaugeVec
RefsLatency *prometheus.GaugeVec
// hash of the last seen block per name, so we don't reduce/increase latency on updates of the same data,
// and only count the first occurrence
LatencySeen map[string]common.Hash
}
var _ RefMetricer = (*RefMetrics)(nil)
// MakeRefMetrics returns a new RefMetrics, initializing its prometheus fields
// using factory. It is supposed to be used inside the constructors of metrics
// structs for any op service after the full namespace and factory have been
// set up.
//
// ns is the fully qualified namespace, e.g. "op_node_default".
func MakeRefMetrics(ns string, factory Factory) RefMetrics {
return RefMetrics{
RefsNumber: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_number",
Help: "Gauge representing the different L1/L2 reference block numbers",
}, []string{
"layer",
"type",
}),
RefsTime: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_time",
Help: "Gauge representing the different L1/L2 reference block timestamps",
}, []string{
"layer",
"type",
}),
RefsHash: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_hash",
Help: "Gauge representing the different L1/L2 reference block hashes truncated to float values",
}, []string{
"layer",
"type",
}),
RefsSeqNr: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_seqnr",
Help: "Gauge representing the different L2 reference sequence numbers",
}, []string{
"type",
}),
RefsLatency: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: ns,
Name: "refs_latency",
Help: "Gauge representing the different L1/L2 reference block timestamps minus current time, in seconds",
}, []string{
"layer",
"type",
}),
LatencySeen: make(map[string]common.Hash),
}
}
func (m *RefMetrics) RecordRef(layer string, name string, num uint64, timestamp uint64, h common.Hash) {
m.RefsNumber.WithLabelValues(layer, name).Set(float64(num))
if timestamp != 0 {
m.RefsTime.WithLabelValues(layer, name).Set(float64(timestamp))
// only meter the latency when we first see this hash for the given label name
if m.LatencySeen[name] != h {
m.LatencySeen[name] = h
m.RefsLatency.WithLabelValues(layer, name).Set(float64(timestamp) - (float64(time.Now().UnixNano()) / 1e9))
}
}
// we map the first 8 bytes to a float64, so we can graph changes of the hash to find divergences visually.
// We don't do math.Float64frombits, just a regular conversion, to keep the value within a manageable range.
m.RefsHash.WithLabelValues(layer, name).Set(float64(binary.LittleEndian.Uint64(h[:])))
}
func (m *RefMetrics) RecordL1Ref(name string, ref eth.L1BlockRef) {
m.RecordRef("l1", name, ref.Number, ref.Time, ref.Hash)
}
func (m *RefMetrics) RecordL2Ref(name string, ref eth.L2BlockRef) {
m.RecordRef("l2", name, ref.Number, ref.Time, ref.Hash)
m.RecordRef("l1_origin", name, ref.L1Origin.Number, 0, ref.L1Origin.Hash)
m.RefsSeqNr.WithLabelValues(name).Set(float64(ref.SequenceNumber))
}
// NoopRefMetrics can be embedded in a noop version of a metric implementation
// to have a noop RefMetricer.
type NoopRefMetrics struct{}
func (*NoopRefMetrics) RecordRef(string, string, uint64, uint64, common.Hash) {}
func (*NoopRefMetrics) RecordL1Ref(string, eth.L1BlockRef) {}
func (*NoopRefMetrics) RecordL2Ref(string, eth.L2BlockRef) {}
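A hedged sketch of the embedding pattern the RefMetrics doc comment describes: the service metrics type fixes the namespace and factory, embeds RefMetrics via MakeRefMetrics, and records references as blocks arrive. Names are illustrative, not from this commit:
// Hypothetical embedding sketch (same package); assumes a Factory value.
type exampleNodeMetrics struct {
	RefMetrics
}
func newExampleNodeMetrics(factory Factory) *exampleNodeMetrics {
	return &exampleNodeMetrics{
		// "op_node_default" is the namespace format the MakeRefMetrics
		// doc comment gives as an example.
		RefMetrics: MakeRefMetrics("op_node_default", factory),
	}
}
// On a new unsafe L2 block the caller would record:
//	m.RecordL2Ref("l2_unsafe", ref)
// which also emits the l1_origin ref and the sequence number gauge.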
...@@ -154,11 +154,22 @@ func (m *SimpleTxManager) IncreaseGasPrice(ctx context.Context, tx *types.Transa ...@@ -154,11 +154,22 @@ func (m *SimpleTxManager) IncreaseGasPrice(ctx context.Context, tx *types.Transa
thresholdFeeCap := new(big.Int).Mul(priceBumpPercent, tx.GasFeeCap()) thresholdFeeCap := new(big.Int).Mul(priceBumpPercent, tx.GasFeeCap())
thresholdFeeCap = thresholdFeeCap.Div(thresholdFeeCap, oneHundred) thresholdFeeCap = thresholdFeeCap.Div(thresholdFeeCap, oneHundred)
if tx.GasFeeCapIntCmp(gasFeeCap) >= 0 { if tx.GasFeeCapIntCmp(gasFeeCap) >= 0 {
if reusedTip {
m.l.Debug("Reusing the previous fee cap", "previous", tx.GasFeeCap(), "suggested", gasFeeCap) m.l.Debug("Reusing the previous fee cap", "previous", tx.GasFeeCap(), "suggested", gasFeeCap)
gasFeeCap = tx.GasFeeCap() gasFeeCap = tx.GasFeeCap()
reusedFeeCap = true reusedFeeCap = true
} else {
m.l.Debug("Overriding the fee cap to enforce a price bump because we increased the tip", "previous", tx.GasFeeCap(), "suggested", gasFeeCap, "new", thresholdFeeCap)
gasFeeCap = thresholdFeeCap
}
} else if thresholdFeeCap.Cmp(gasFeeCap) > 0 { } else if thresholdFeeCap.Cmp(gasFeeCap) > 0 {
if reusedTip {
// TODO (CLI-3620): Increase the basefee then recompute the feecap
m.l.Warn("Overriding the fee cap to enforce a price bump without increasing the tip. Will likely result in ErrReplacementUnderpriced",
"previous", tx.GasFeeCap(), "suggested", gasFeeCap, "new", thresholdFeeCap)
} else {
m.l.Debug("Overriding the fee cap to enforce a price bump", "previous", tx.GasFeeCap(), "suggested", gasFeeCap, "new", thresholdFeeCap) m.l.Debug("Overriding the fee cap to enforce a price bump", "previous", tx.GasFeeCap(), "suggested", gasFeeCap, "new", thresholdFeeCap)
}
gasFeeCap = thresholdFeeCap gasFeeCap = thresholdFeeCap
} }
......
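A worked example of the fee-cap threshold arithmetic in the hunk above, assuming priceBumpPercent is 110 (i.e. a 10% minimum bump, consistent with the "increased by 10%" expectations in the tests below); the concrete values are illustrative only:
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// thresholdFeeCap = priceBumpPercent * oldFeeCap / 100, as in the hunk above.
	oldFeeCap := big.NewInt(1000)
	threshold := new(big.Int).Mul(big.NewInt(110), oldFeeCap) // 110000
	threshold.Div(threshold, big.NewInt(100))                 // 1100
	// A replacement tx with a fee cap below this threshold would be rejected
	// as underpriced, so the manager bumps the fee cap up to the threshold
	// when the suggested cap is too low.
	fmt.Println(threshold) // 1100
}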
...@@ -621,6 +621,84 @@ func TestIncreaseGasPriceEnforcesMinBump(t *testing.T) { ...@@ -621,6 +621,84 @@ func TestIncreaseGasPriceEnforcesMinBump(t *testing.T) {
baseFee: big.NewInt(460), baseFee: big.NewInt(460),
} }
mgr := &SimpleTxManager{
Config: Config{
ResubmissionTimeout: time.Second,
ReceiptQueryInterval: 50 * time.Millisecond,
NumConfirmations: 1,
SafeAbortNonceTooLowCount: 3,
Signer: func(ctx context.Context, from common.Address, tx *types.Transaction) (*types.Transaction, error) {
return tx, nil
},
From: common.Address{},
},
name: "TEST",
backend: &borkedBackend,
l: testlog.Logger(t, log.LvlTrace),
}
tx := types.NewTx(&types.DynamicFeeTx{
GasTipCap: big.NewInt(100),
GasFeeCap: big.NewInt(1000),
})
ctx := context.Background()
newTx, err := mgr.IncreaseGasPrice(ctx, tx)
require.NoError(t, err)
require.True(t, newTx.GasFeeCap().Cmp(tx.GasFeeCap()) > 0, "new tx fee cap must be larger")
require.True(t, newTx.GasTipCap().Cmp(tx.GasTipCap()) > 0, "new tx tip must be larger")
}
// TestIncreaseGasPriceEnforcesMinBumpForBothOnTipIncrease asserts that if the gasTip goes up,
// but the baseFee doesn't, both values are increased by 10%
func TestIncreaseGasPriceEnforcesMinBumpForBothOnTipIncrease(t *testing.T) {
t.Parallel()
borkedBackend := failingBackend{
gasTip: big.NewInt(101),
baseFee: big.NewInt(440),
}
mgr := &SimpleTxManager{
Config: Config{
ResubmissionTimeout: time.Second,
ReceiptQueryInterval: 50 * time.Millisecond,
NumConfirmations: 1,
SafeAbortNonceTooLowCount: 3,
Signer: func(ctx context.Context, from common.Address, tx *types.Transaction) (*types.Transaction, error) {
return tx, nil
},
From: common.Address{},
},
name: "TEST",
backend: &borkedBackend,
l: testlog.Logger(t, log.LvlCrit),
}
tx := types.NewTx(&types.DynamicFeeTx{
GasTipCap: big.NewInt(100),
GasFeeCap: big.NewInt(1000),
})
ctx := context.Background()
newTx, err := mgr.IncreaseGasPrice(ctx, tx)
require.NoError(t, err)
require.True(t, newTx.GasFeeCap().Cmp(tx.GasFeeCap()) > 0, "new tx fee cap must be larger")
require.True(t, newTx.GasTipCap().Cmp(tx.GasTipCap()) > 0, "new tx tip must be larger")
}
// TestIncreaseGasPriceEnforcesMinBumpForBothOnBaseFeeIncrease asserts that if the baseFee goes up,
// but the tip doesn't, both values are increased by 10%
// TODO(CLI-3620): This test will fail until CLI-3620 is implemented.
func TestIncreaseGasPriceEnforcesMinBumpForBothOnBaseFeeIncrease(t *testing.T) {
t.Skip("Failing until CLI-3620 is implemented")
t.Parallel()
borkedBackend := failingBackend{
gasTip: big.NewInt(99),
baseFee: big.NewInt(460),
}
mgr := &SimpleTxManager{ mgr := &SimpleTxManager{
Config: Config{ Config: Config{
ResubmissionTimeout: time.Second, ResubmissionTimeout: time.Second,
......
# @eth-optimism/actor-tests # @eth-optimism/actor-tests
## 0.0.23
### Patch Changes
- Updated dependencies [22c3885f5]
- Updated dependencies [66cafc00a]
- Updated dependencies [f52c07529]
- @eth-optimism/contracts-bedrock@0.13.1
- @eth-optimism/sdk@2.0.1
## 0.0.22 ## 0.0.22
### Patch Changes ### Patch Changes
......
{ {
"name": "@eth-optimism/actor-tests", "name": "@eth-optimism/actor-tests",
"version": "0.0.22", "version": "0.0.23",
"description": "A library and suite of tests to stress test Optimism Bedrock.", "description": "A library and suite of tests to stress test Optimism Bedrock.",
"license": "MIT", "license": "MIT",
"author": "", "author": "",
...@@ -18,9 +18,9 @@ ...@@ -18,9 +18,9 @@
"test:coverage": "yarn test" "test:coverage": "yarn test"
}, },
"dependencies": { "dependencies": {
"@eth-optimism/contracts-bedrock": "0.13.0", "@eth-optimism/contracts-bedrock": "0.13.1",
"@eth-optimism/core-utils": "^0.12.0", "@eth-optimism/core-utils": "^0.12.0",
"@eth-optimism/sdk": "^2.0.0", "@eth-optimism/sdk": "^2.0.1",
"@types/chai": "^4.2.18", "@types/chai": "^4.2.18",
"@types/chai-as-promised": "^7.1.4", "@types/chai-as-promised": "^7.1.4",
"async-mutex": "^0.3.2", "async-mutex": "^0.3.2",
......
# @eth-optimism/atst # @eth-optimism/atst
## 0.2.0
### Minor Changes
- dcd13eec1: Update readAttestations and prepareWriteAttestation to handle keys longer than 32 bytes
- 9fd5be8e2: Remove broken allowFailures as option
- 3f4a43542: Move react api to @eth-optimism/atst/react so react isn't required to run the core sdk
- 71727eae9: Fix main and module in atst package.json
- 3d5f26c49: Deprecate parseAttestationBytes and createRawKey in favor for createKey, createValue
### Patch Changes
- 68bbe48b6: Update docs
- 6fea2f2db: Fixed bug with atst not defaulting to currently connected chain
## 0.1.0 ## 0.1.0
### Minor Changes ### Minor Changes
......
{ {
"name": "@eth-optimism/atst", "name": "@eth-optimism/atst",
"version": "0.1.0", "version": "0.2.0",
"type": "module", "type": "module",
"main": "dist/index.cjs", "main": "dist/index.cjs",
"types": "src/index.ts", "types": "src/index.ts",
......
# @eth-optimism/drippie-mon # @eth-optimism/drippie-mon
## 0.2.1
### Patch Changes
- Updated dependencies [fecd42d67]
- Updated dependencies [66cafc00a]
- @eth-optimism/common-ts@0.8.1
- @eth-optimism/sdk@2.0.1
## 0.2.0 ## 0.2.0
### Minor Changes ### Minor Changes
......
{ {
"private": true, "private": true,
"name": "@eth-optimism/chain-mon", "name": "@eth-optimism/chain-mon",
"version": "0.2.0", "version": "0.2.1",
"description": "[Optimism] Chain monitoring services", "description": "[Optimism] Chain monitoring services",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -32,10 +32,10 @@ ...@@ -32,10 +32,10 @@
"url": "https://github.com/ethereum-optimism/optimism.git" "url": "https://github.com/ethereum-optimism/optimism.git"
}, },
"dependencies": { "dependencies": {
"@eth-optimism/common-ts": "0.8.0", "@eth-optimism/common-ts": "0.8.1",
"@eth-optimism/contracts-periphery": "1.0.7", "@eth-optimism/contracts-periphery": "1.0.7",
"@eth-optimism/core-utils": "0.12.0", "@eth-optimism/core-utils": "0.12.0",
"@eth-optimism/sdk": "2.0.0", "@eth-optimism/sdk": "2.0.1",
"ethers": "^5.7.0", "ethers": "^5.7.0",
"@types/dateformat": "^5.0.0", "@types/dateformat": "^5.0.0",
"chai-as-promised": "^7.1.1", "chai-as-promised": "^7.1.1",
......
# @eth-optimism/common-ts # @eth-optimism/common-ts
## 0.8.1
### Patch Changes
- fecd42d67: Fix BaseServiceV2 configuration for caseCase options
## 0.8.0 ## 0.8.0
### Minor Changes ### Minor Changes
......
{ {
"name": "@eth-optimism/common-ts", "name": "@eth-optimism/common-ts",
"version": "0.8.0", "version": "0.8.1",
"description": "[Optimism] Advanced typescript tooling used by various services", "description": "[Optimism] Advanced typescript tooling used by various services",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
......
...@@ -7,3 +7,7 @@ PRIVATE_KEY_DEPLOYER= ...@@ -7,3 +7,7 @@ PRIVATE_KEY_DEPLOYER=
# Optional Tenderly details for a simulation link during deployment # Optional Tenderly details for a simulation link during deployment
TENDERLY_PROJECT= TENDERLY_PROJECT=
TENDERLY_USERNAME= TENDERLY_USERNAME=
# Optional boolean to define if cast commands should be printed.
# Useful during migration testing
CAST_COMMANDS=1
...@@ -141,6 +141,7 @@ L2ERC721Bridge_Test:test_bridgeERC721_succeeds() (gas: 144643) ...@@ -141,6 +141,7 @@ L2ERC721Bridge_Test:test_bridgeERC721_succeeds() (gas: 144643)
L2ERC721Bridge_Test:test_bridgeERC721_wrongOwner_reverts() (gas: 29258) L2ERC721Bridge_Test:test_bridgeERC721_wrongOwner_reverts() (gas: 29258)
L2ERC721Bridge_Test:test_constructor_succeeds() (gas: 10110) L2ERC721Bridge_Test:test_constructor_succeeds() (gas: 10110)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_alreadyExists_reverts() (gas: 29128) L2ERC721Bridge_Test:test_finalizeBridgeERC721_alreadyExists_reverts() (gas: 29128)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_interfaceNotCompliant_reverts() (gas: 236012)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_notFromRemoteMessenger_reverts() (gas: 19874) L2ERC721Bridge_Test:test_finalizeBridgeERC721_notFromRemoteMessenger_reverts() (gas: 19874)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_notViaLocalMessenger_reverts() (gas: 16104) L2ERC721Bridge_Test:test_finalizeBridgeERC721_notViaLocalMessenger_reverts() (gas: 16104)
L2ERC721Bridge_Test:test_finalizeBridgeERC721_selfToken_reverts() (gas: 17659) L2ERC721Bridge_Test:test_finalizeBridgeERC721_selfToken_reverts() (gas: 17659)
...@@ -266,9 +267,9 @@ OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_ifOutp ...@@ -266,9 +267,9 @@ OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_ifOutp
OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_ifOutputTimestampIsNotFinalized_reverts() (gas: 207520) OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_ifOutputTimestampIsNotFinalized_reverts() (gas: 207520)
OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_ifWithdrawalNotProven_reverts() (gas: 41753) OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_ifWithdrawalNotProven_reverts() (gas: 41753)
OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_ifWithdrawalProofNotOldEnough_reverts() (gas: 199464) OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_ifWithdrawalProofNotOldEnough_reverts() (gas: 199464)
OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_onInsufficientGas_reverts() (gas: 206360) OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_onInsufficientGas_reverts() (gas: 205818)
OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_onRecentWithdrawal_reverts() (gas: 180229) OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_onRecentWithdrawal_reverts() (gas: 180229)
OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_onReentrancy_reverts() (gas: 244377) OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_onReentrancy_reverts() (gas: 243835)
OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_onReplay_reverts() (gas: 245528) OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_onReplay_reverts() (gas: 245528)
OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_paused_reverts() (gas: 53555) OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_paused_reverts() (gas: 53555)
OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_provenWithdrawalHash_succeeds() (gas: 234941) OptimismPortal_FinalizeWithdrawal_Test:test_finalizeWithdrawalTransaction_provenWithdrawalHash_succeeds() (gas: 234941)
......
...@@ -12,3 +12,4 @@ deployments/mainnet-forked ...@@ -12,3 +12,4 @@ deployments/mainnet-forked
deploy-config/mainnet-forked.json deploy-config/mainnet-forked.json
test-case-generator/fuzz test-case-generator/fuzz
.resource-metering.csv .resource-metering.csv
scripts/differential-testing/differential-testing
# @eth-optimism/contracts-bedrock # @eth-optimism/contracts-bedrock
## 0.13.1
### Patch Changes
- 22c3885f5: Optionally print cast commands during migration
- f52c07529: Print tenderly simulation links during deployment
## 0.13.0 ## 0.13.0
### Minor Changes ### Minor Changes
......
...@@ -49,7 +49,7 @@ contract OptimismPortal is Initializable, ResourceMetering, Semver { ...@@ -49,7 +49,7 @@ contract OptimismPortal is Initializable, ResourceMetering, Semver {
L2OutputOracle public immutable L2_ORACLE; L2OutputOracle public immutable L2_ORACLE;
/** /**
* @notice Address that has the ability to pause and unpause deposits and withdrawals. * @notice Address that has the ability to pause and unpause withdrawals.
*/ */
address public immutable GUARDIAN; address public immutable GUARDIAN;
......
...@@ -477,16 +477,15 @@ contract FFIInterface is Test { ...@@ -477,16 +477,15 @@ contract FFIInterface is Test {
bytes[] memory bytes[] memory
) )
{ {
string[] memory cmds = new string[](9); string[] memory cmds = new string[](8);
cmds[0] = "node"; cmds[0] = "scripts/differential-testing/differential-testing";
cmds[1] = "dist/scripts/differential-testing.js"; cmds[1] = "getProveWithdrawalTransactionInputs";
cmds[2] = "getProveWithdrawalTransactionInputs"; cmds[2] = vm.toString(_tx.nonce);
cmds[3] = vm.toString(_tx.nonce); cmds[3] = vm.toString(_tx.sender);
cmds[4] = vm.toString(_tx.sender); cmds[4] = vm.toString(_tx.target);
cmds[5] = vm.toString(_tx.target); cmds[5] = vm.toString(_tx.value);
cmds[6] = vm.toString(_tx.value); cmds[6] = vm.toString(_tx.gasLimit);
cmds[7] = vm.toString(_tx.gasLimit); cmds[7] = vm.toString(_tx.data);
cmds[8] = vm.toString(_tx.data);
bytes memory result = vm.ffi(cmds); bytes memory result = vm.ffi(cmds);
( (
...@@ -508,16 +507,15 @@ contract FFIInterface is Test { ...@@ -508,16 +507,15 @@ contract FFIInterface is Test {
uint256 _gasLimit, uint256 _gasLimit,
bytes memory _data bytes memory _data
) external returns (bytes32) { ) external returns (bytes32) {
string[] memory cmds = new string[](9); string[] memory cmds = new string[](8);
cmds[0] = "node"; cmds[0] = "scripts/differential-testing/differential-testing";
cmds[1] = "dist/scripts/differential-testing.js"; cmds[1] = "hashCrossDomainMessage";
cmds[2] = "hashCrossDomainMessage"; cmds[2] = vm.toString(_nonce);
cmds[3] = vm.toString(_nonce); cmds[3] = vm.toString(_sender);
cmds[4] = vm.toString(_sender); cmds[4] = vm.toString(_target);
cmds[5] = vm.toString(_target); cmds[5] = vm.toString(_value);
cmds[6] = vm.toString(_value); cmds[6] = vm.toString(_gasLimit);
cmds[7] = vm.toString(_gasLimit); cmds[7] = vm.toString(_data);
cmds[8] = vm.toString(_data);
bytes memory result = vm.ffi(cmds); bytes memory result = vm.ffi(cmds);
return abi.decode(result, (bytes32)); return abi.decode(result, (bytes32));
...@@ -531,16 +529,15 @@ contract FFIInterface is Test { ...@@ -531,16 +529,15 @@ contract FFIInterface is Test {
uint256 _gasLimit, uint256 _gasLimit,
bytes memory _data bytes memory _data
) external returns (bytes32) { ) external returns (bytes32) {
string[] memory cmds = new string[](9); string[] memory cmds = new string[](8);
cmds[0] = "node"; cmds[0] = "scripts/differential-testing/differential-testing";
cmds[1] = "dist/scripts/differential-testing.js"; cmds[1] = "hashWithdrawal";
cmds[2] = "hashWithdrawal"; cmds[2] = vm.toString(_nonce);
cmds[3] = vm.toString(_nonce); cmds[3] = vm.toString(_sender);
cmds[4] = vm.toString(_sender); cmds[4] = vm.toString(_target);
cmds[5] = vm.toString(_target); cmds[5] = vm.toString(_value);
cmds[6] = vm.toString(_value); cmds[6] = vm.toString(_gasLimit);
cmds[7] = vm.toString(_gasLimit); cmds[7] = vm.toString(_data);
cmds[8] = vm.toString(_data);
bytes memory result = vm.ffi(cmds); bytes memory result = vm.ffi(cmds);
return abi.decode(result, (bytes32)); return abi.decode(result, (bytes32));
...@@ -552,14 +549,13 @@ contract FFIInterface is Test { ...@@ -552,14 +549,13 @@ contract FFIInterface is Test {
bytes32 _messagePasserStorageRoot, bytes32 _messagePasserStorageRoot,
bytes32 _latestBlockhash bytes32 _latestBlockhash
) external returns (bytes32) { ) external returns (bytes32) {
string[] memory cmds = new string[](7); string[] memory cmds = new string[](6);
cmds[0] = "node"; cmds[0] = "scripts/differential-testing/differential-testing";
cmds[1] = "dist/scripts/differential-testing.js"; cmds[1] = "hashOutputRootProof";
cmds[2] = "hashOutputRootProof"; cmds[2] = Strings.toHexString(uint256(_version));
cmds[3] = Strings.toHexString(uint256(_version)); cmds[3] = Strings.toHexString(uint256(_stateRoot));
cmds[4] = Strings.toHexString(uint256(_stateRoot)); cmds[4] = Strings.toHexString(uint256(_messagePasserStorageRoot));
cmds[5] = Strings.toHexString(uint256(_messagePasserStorageRoot)); cmds[5] = Strings.toHexString(uint256(_latestBlockhash));
cmds[6] = Strings.toHexString(uint256(_latestBlockhash));
bytes memory result = vm.ffi(cmds); bytes memory result = vm.ffi(cmds);
return abi.decode(result, (bytes32)); return abi.decode(result, (bytes32));
...@@ -572,20 +568,19 @@ contract FFIInterface is Test { ...@@ -572,20 +568,19 @@ contract FFIInterface is Test {
uint256 _value, uint256 _value,
uint64 _gas, uint64 _gas,
bytes memory _data, bytes memory _data,
uint256 _logIndex uint64 _logIndex
) external returns (bytes32) { ) external returns (bytes32) {
string[] memory cmds = new string[](11); string[] memory cmds = new string[](10);
cmds[0] = "node"; cmds[0] = "scripts/differential-testing/differential-testing";
cmds[1] = "dist/scripts/differential-testing.js"; cmds[1] = "hashDepositTransaction";
cmds[2] = "hashDepositTransaction"; cmds[2] = "0x0000000000000000000000000000000000000000000000000000000000000000";
cmds[3] = "0x0000000000000000000000000000000000000000000000000000000000000000"; cmds[3] = vm.toString(_logIndex);
cmds[4] = vm.toString(_logIndex); cmds[4] = vm.toString(_from);
cmds[5] = vm.toString(_from); cmds[5] = vm.toString(_to);
cmds[6] = vm.toString(_to); cmds[6] = vm.toString(_mint);
cmds[7] = vm.toString(_mint); cmds[7] = vm.toString(_value);
cmds[8] = vm.toString(_value); cmds[8] = vm.toString(_gas);
cmds[9] = vm.toString(_gas); cmds[9] = vm.toString(_data);
cmds[10] = vm.toString(_data);
bytes memory result = vm.ffi(cmds); bytes memory result = vm.ffi(cmds);
return abi.decode(result, (bytes32)); return abi.decode(result, (bytes32));
...@@ -595,19 +590,18 @@ contract FFIInterface is Test { ...@@ -595,19 +590,18 @@ contract FFIInterface is Test {
external external
returns (bytes memory) returns (bytes memory)
{ {
string[] memory cmds = new string[](12); string[] memory cmds = new string[](11);
cmds[0] = "node"; cmds[0] = "scripts/differential-testing/differential-testing";
cmds[1] = "dist/scripts/differential-testing.js"; cmds[1] = "encodeDepositTransaction";
cmds[2] = "encodeDepositTransaction"; cmds[2] = vm.toString(txn.from);
cmds[3] = vm.toString(txn.from); cmds[3] = vm.toString(txn.to);
cmds[4] = vm.toString(txn.to); cmds[4] = vm.toString(txn.value);
cmds[5] = vm.toString(txn.value); cmds[5] = vm.toString(txn.mint);
cmds[6] = vm.toString(txn.mint); cmds[6] = vm.toString(txn.gasLimit);
cmds[7] = vm.toString(txn.gasLimit); cmds[7] = vm.toString(txn.isCreation);
cmds[8] = vm.toString(txn.isCreation); cmds[8] = vm.toString(txn.data);
cmds[9] = vm.toString(txn.data); cmds[9] = vm.toString(txn.l1BlockHash);
cmds[10] = vm.toString(txn.l1BlockHash); cmds[10] = vm.toString(txn.logIndex);
cmds[11] = vm.toString(txn.logIndex);
bytes memory result = vm.ffi(cmds); bytes memory result = vm.ffi(cmds);
return abi.decode(result, (bytes)); return abi.decode(result, (bytes));
...@@ -621,27 +615,25 @@ contract FFIInterface is Test { ...@@ -621,27 +615,25 @@ contract FFIInterface is Test {
uint256 _gasLimit, uint256 _gasLimit,
bytes memory _data bytes memory _data
) external returns (bytes memory) { ) external returns (bytes memory) {
string[] memory cmds = new string[](9); string[] memory cmds = new string[](8);
cmds[0] = "node"; cmds[0] = "scripts/differential-testing/differential-testing";
cmds[1] = "dist/scripts/differential-testing.js"; cmds[1] = "encodeCrossDomainMessage";
cmds[2] = "encodeCrossDomainMessage"; cmds[2] = vm.toString(_nonce);
cmds[3] = vm.toString(_nonce); cmds[3] = vm.toString(_sender);
cmds[4] = vm.toString(_sender); cmds[4] = vm.toString(_target);
cmds[5] = vm.toString(_target); cmds[5] = vm.toString(_value);
cmds[6] = vm.toString(_value); cmds[6] = vm.toString(_gasLimit);
cmds[7] = vm.toString(_gasLimit); cmds[7] = vm.toString(_data);
cmds[8] = vm.toString(_data);
bytes memory result = vm.ffi(cmds); bytes memory result = vm.ffi(cmds);
return abi.decode(result, (bytes)); return abi.decode(result, (bytes));
} }
function decodeVersionedNonce(uint256 nonce) external returns (uint256, uint256) { function decodeVersionedNonce(uint256 nonce) external returns (uint256, uint256) {
string[] memory cmds = new string[](4); string[] memory cmds = new string[](3);
cmds[0] = "node"; cmds[0] = "scripts/differential-testing/differential-testing";
cmds[1] = "dist/scripts/differential-testing.js"; cmds[1] = "decodeVersionedNonce";
cmds[2] = "decodeVersionedNonce"; cmds[2] = vm.toString(nonce);
cmds[3] = vm.toString(nonce);
bytes memory result = vm.ffi(cmds); bytes memory result = vm.ffi(cmds);
return abi.decode(result, (uint256, uint256)); return abi.decode(result, (uint256, uint256));
......
...@@ -91,7 +91,7 @@ contract Encoding_Test is CommonTest { ...@@ -91,7 +91,7 @@ contract Encoding_Test is CommonTest {
uint64 _gas, uint64 _gas,
bool isCreate, bool isCreate,
bytes memory _data, bytes memory _data,
uint256 _logIndex uint64 _logIndex
) external { ) external {
Types.UserDepositTransaction memory t = Types.UserDepositTransaction( Types.UserDepositTransaction memory t = Types.UserDepositTransaction(
_from, _from,
......
...@@ -129,7 +129,7 @@ contract Hashing_hashDepositTransaction_Test is CommonTest { ...@@ -129,7 +129,7 @@ contract Hashing_hashDepositTransaction_Test is CommonTest {
uint256 _value, uint256 _value,
uint64 _gas, uint64 _gas,
bytes memory _data, bytes memory _data,
uint256 _logIndex uint64 _logIndex
) external { ) external {
assertEq( assertEq(
Hashing.hashDepositTransaction( Hashing.hashDepositTransaction(
......
...@@ -278,6 +278,33 @@ contract L2ERC721Bridge_Test is Messenger_Initializer { ...@@ -278,6 +278,33 @@ contract L2ERC721Bridge_Test is Messenger_Initializer {
assertEq(localToken.ownerOf(tokenId), alice); assertEq(localToken.ownerOf(tokenId), alice);
} }
function test_finalizeBridgeERC721_interfaceNotCompliant_reverts() external {
// Create a non-compliant token
NonCompliantERC721 nonCompliantToken = new NonCompliantERC721(alice);
// Bridge the non-compliant token.
vm.prank(alice);
bridge.bridgeERC721(address(nonCompliantToken), address(0x01), tokenId, 1234, hex"5678");
// Attempt to finalize the withdrawal. Should revert because the token does not claim
// to be compliant with the `IOptimismMintableERC721` interface.
vm.mockCall(
address(L2Messenger),
abi.encodeWithSelector(L2Messenger.xDomainMessageSender.selector),
abi.encode(otherBridge)
);
vm.prank(address(L2Messenger));
vm.expectRevert("L2ERC721Bridge: local token interface is not compliant");
bridge.finalizeBridgeERC721(
address(address(nonCompliantToken)),
address(address(0x01)),
alice,
alice,
tokenId,
hex"5678"
);
}
function test_finalizeBridgeERC721_notViaLocalMessenger_reverts() external { function test_finalizeBridgeERC721_notViaLocalMessenger_reverts() external {
// Finalize a withdrawal. // Finalize a withdrawal.
vm.prank(alice); vm.prank(alice);
...@@ -349,3 +376,33 @@ contract L2ERC721Bridge_Test is Messenger_Initializer { ...@@ -349,3 +376,33 @@ contract L2ERC721Bridge_Test is Messenger_Initializer {
); );
} }
} }
/**
* @dev A non-compliant ERC721 token that does not implement the full ERC721 interface.
*
* This is used to test that the bridge will revert if the token does not claim to support
* the ERC721 interface.
*/
contract NonCompliantERC721 {
address internal immutable owner;
constructor(address _owner) {
owner = _owner;
}
function ownerOf(uint256) external view returns (address) {
return owner;
}
function remoteToken() external pure returns (address) {
return address(0x01);
}
function burn(address, uint256) external {
// Do nothing.
}
function supportsInterface(bytes4) external pure returns (bool) {
return false;
}
}
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
"sequencerFeeVaultRecipient": "0xfabb0ac9d68b0b445fb7357272ff202c5651694a", "sequencerFeeVaultRecipient": "0xfabb0ac9d68b0b445fb7357272ff202c5651694a",
"proxyAdminOwner": "0xBcd4042DE499D14e55001CcbB24a551F3b954096", "proxyAdminOwner": "0xBcd4042DE499D14e55001CcbB24a551F3b954096",
"finalSystemOwner": "0xBcd4042DE499D14e55001CcbB24a551F3b954096", "finalSystemOwner": "0xBcd4042DE499D14e55001CcbB24a551F3b954096",
"portalGuardian": "0xBcd4042DE499D14e55001CcbB24a551F3b954096",
"controller": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266", "controller": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266",
"finalizationPeriodSeconds": 2, "finalizationPeriodSeconds": 2,
"deploymentWaitConfirmations": 1, "deploymentWaitConfirmations": 1,
......
{ {
"finalSystemOwner": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807", "finalSystemOwner": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807",
"portalGuardian": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807",
"controller": "0x2d30335B0b807bBa1682C487BaAFD2Ad6da5D675", "controller": "0x2d30335B0b807bBa1682C487BaAFD2Ad6da5D675",
"l1StartingBlockTag": "0x4104895a540d87127ff11eef0d51d8f63ce00a6fc211db751a45a4b3a61a9c83", "l1StartingBlockTag": "0x4104895a540d87127ff11eef0d51d8f63ce00a6fc211db751a45a4b3a61a9c83",
......
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
"numDeployConfirmations": 1, "numDeployConfirmations": 1,
"finalSystemOwner": "ADMIN", "finalSystemOwner": "ADMIN",
"portalGuardian": "ADMIN",
"controller": "ADMIN", "controller": "ADMIN",
"l1StartingBlockTag": "BLOCKHASH", "l1StartingBlockTag": "BLOCKHASH",
......
{ {
"finalSystemOwner": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807", "finalSystemOwner": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807",
"portalGuardian": "0x62790eFcB3a5f3A5D398F95B47930A9Addd83807",
"controller": "0x2d30335B0b807bBa1682C487BaAFD2Ad6da5D675", "controller": "0x2d30335B0b807bBa1682C487BaAFD2Ad6da5D675",
"l1StartingBlockTag": "0x4104895a540d87127ff11eef0d51d8f63ce00a6fc211db751a45a4b3a61a9c83", "l1StartingBlockTag": "0x4104895a540d87127ff11eef0d51d8f63ce00a6fc211db751a45a4b3a61a9c83",
......
...@@ -2,6 +2,7 @@ ...@@ -2,6 +2,7 @@
"numDeployConfirmations": 1, "numDeployConfirmations": 1,
"finalSystemOwner": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f", "finalSystemOwner": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
"portalGuardian": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
"controller": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f", "controller": "0xBc1233d0C3e6B5d53Ab455cF65A6623F6dCd7e4f",
"l1StartingBlockTag": "0x6ffc1bf3754c01f6bb9fe057c1578b87a8571ce2e9be5ca14bace6eccfd336c7", "l1StartingBlockTag": "0x6ffc1bf3754c01f6bb9fe057c1578b87a8571ce2e9be5ca14bace6eccfd336c7",
......
{ {
"finalSystemOwner": "0x9965507D1a55bcC2695C58ba16FB37d819B0A4dc", "finalSystemOwner": "0x9965507D1a55bcC2695C58ba16FB37d819B0A4dc",
"portalGuardian": "0x9965507D1a55bcC2695C58ba16FB37d819B0A4dc",
"controller": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266", "controller": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266",
"l1StartingBlockTag": "earliest", "l1StartingBlockTag": "earliest",
......
{ {
"finalSystemOwner": "0x858F0751ef8B4067f0d2668C076BDB50a8549fbF", "finalSystemOwner": "0x858F0751ef8B4067f0d2668C076BDB50a8549fbF",
"portalGuardian": "0x858F0751ef8B4067f0d2668C076BDB50a8549fbF",
"controller": "0x2d30335B0b807bBa1682C487BaAFD2Ad6da5D675", "controller": "0x2d30335B0b807bBa1682C487BaAFD2Ad6da5D675",
"l1StartingBlockTag": "0x19c7e6b18fe156e45f4cfef707294fd8f079fa9c30a7b7cd6ec1ce3682ec6a2e", "l1StartingBlockTag": "0x19c7e6b18fe156e45f4cfef707294fd8f079fa9c30a7b7cd6ec1ce3682ec6a2e",
......
...@@ -17,13 +17,11 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -17,13 +17,11 @@ const deployFn: DeployFunction = async (hre) => {
'L2OutputOracleProxy' 'L2OutputOracleProxy'
) )
const finalSystemOwner = hre.deployConfig.finalSystemOwner const portalGuardian = hre.deployConfig.portalGuardian
const finalSystemOwnerCode = await hre.ethers.provider.getCode( const portalGuardianCode = await hre.ethers.provider.getCode(portalGuardian)
finalSystemOwner if (portalGuardianCode === '0x') {
)
if (finalSystemOwnerCode === '0x') {
console.log( console.log(
`WARNING: setting OptimismPortal.GUARDIAN to ${finalSystemOwner} and it has no code` `WARNING: setting OptimismPortal.GUARDIAN to ${portalGuardian} and it has no code`
) )
if (!isLiveDeployer) { if (!isLiveDeployer) {
throw new Error( throw new Error(
...@@ -35,13 +33,13 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -35,13 +33,13 @@ const deployFn: DeployFunction = async (hre) => {
// Deploy the OptimismPortal implementation as paused to // Deploy the OptimismPortal implementation as paused to
// ensure that users do not interact with it and instead // ensure that users do not interact with it and instead
// interact with the proxied contract. // interact with the proxied contract.
// The `finalSystemOwner` is set at the GUARDIAN. // The `portalGuardian` is set at the GUARDIAN.
await deploy({ await deploy({
hre, hre,
name: 'OptimismPortal', name: 'OptimismPortal',
args: [ args: [
L2OutputOracleProxy.address, L2OutputOracleProxy.address,
finalSystemOwner, portalGuardian,
true, // paused true, // paused
], ],
postDeployAction: async (contract) => { postDeployAction: async (contract) => {
...@@ -53,7 +51,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -53,7 +51,7 @@ const deployFn: DeployFunction = async (hre) => {
await assertContractVariable( await assertContractVariable(
contract, contract,
'GUARDIAN', 'GUARDIAN',
hre.deployConfig.finalSystemOwner hre.deployConfig.portalGuardian
) )
}, },
}) })
......
...@@ -14,6 +14,7 @@ import { ...@@ -14,6 +14,7 @@ import {
doStep, doStep,
jsonifyTransaction, jsonifyTransaction,
getTenderlySimulationLink, getTenderlySimulationLink,
getCastCommand,
} from '../src/deploy-utils' } from '../src/deploy-utils'
const deployFn: DeployFunction = async (hre) => { const deployFn: DeployFunction = async (hre) => {
...@@ -98,6 +99,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -98,6 +99,7 @@ const deployFn: DeployFunction = async (hre) => {
console.log(`MSD address: ${SystemDictator.address}`) console.log(`MSD address: ${SystemDictator.address}`)
console.log(`JSON:`) console.log(`JSON:`)
console.log(jsonifyTransaction(tx)) console.log(jsonifyTransaction(tx))
console.log(getCastCommand(tx))
console.log(await getTenderlySimulationLink(SystemDictator.provider, tx)) console.log(await getTenderlySimulationLink(SystemDictator.provider, tx))
} }
...@@ -107,7 +109,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -107,7 +109,7 @@ const deployFn: DeployFunction = async (hre) => {
const owner = await AddressManager.owner() const owner = await AddressManager.owner()
return owner === SystemDictator.address return owner === SystemDictator.address
}, },
30000, 5000,
1000 1000
) )
} else { } else {
...@@ -135,6 +137,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -135,6 +137,7 @@ const deployFn: DeployFunction = async (hre) => {
console.log(`MSD address: ${SystemDictator.address}`) console.log(`MSD address: ${SystemDictator.address}`)
console.log(`JSON:`) console.log(`JSON:`)
console.log(jsonifyTransaction(tx)) console.log(jsonifyTransaction(tx))
console.log(getCastCommand(tx))
console.log(await getTenderlySimulationLink(SystemDictator.provider, tx)) console.log(await getTenderlySimulationLink(SystemDictator.provider, tx))
} }
...@@ -146,7 +149,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -146,7 +149,7 @@ const deployFn: DeployFunction = async (hre) => {
}) })
return owner === SystemDictator.address return owner === SystemDictator.address
}, },
30000, 5000,
1000 1000
) )
} else { } else {
...@@ -172,6 +175,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -172,6 +175,7 @@ const deployFn: DeployFunction = async (hre) => {
console.log(`MSD address: ${SystemDictator.address}`) console.log(`MSD address: ${SystemDictator.address}`)
console.log(`JSON:`) console.log(`JSON:`)
console.log(jsonifyTransaction(tx)) console.log(jsonifyTransaction(tx))
console.log(getCastCommand(tx))
console.log(await getTenderlySimulationLink(SystemDictator.provider, tx)) console.log(await getTenderlySimulationLink(SystemDictator.provider, tx))
} }
...@@ -183,7 +187,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -183,7 +187,7 @@ const deployFn: DeployFunction = async (hre) => {
}) })
return owner === SystemDictator.address return owner === SystemDictator.address
}, },
30000, 5000,
1000 1000
) )
} else { } else {
......
...@@ -14,6 +14,7 @@ import { ...@@ -14,6 +14,7 @@ import {
isStep, isStep,
doStep, doStep,
getTenderlySimulationLink, getTenderlySimulationLink,
getCastCommand,
} from '../src/deploy-utils' } from '../src/deploy-utils'
const deployFn: DeployFunction = async (hre) => { const deployFn: DeployFunction = async (hre) => {
...@@ -194,6 +195,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -194,6 +195,7 @@ const deployFn: DeployFunction = async (hre) => {
console.log(`MSD address: ${SystemDictator.address}`) console.log(`MSD address: ${SystemDictator.address}`)
console.log(`JSON:`) console.log(`JSON:`)
console.log(jsonifyTransaction(tx)) console.log(jsonifyTransaction(tx))
console.log(getCastCommand(tx))
console.log(await getTenderlySimulationLink(SystemDictator.provider, tx)) console.log(await getTenderlySimulationLink(SystemDictator.provider, tx))
} }
...@@ -201,7 +203,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -201,7 +203,7 @@ const deployFn: DeployFunction = async (hre) => {
async () => { async () => {
return SystemDictator.dynamicConfigSet() return SystemDictator.dynamicConfigSet()
}, },
30000, 5000,
1000 1000
) )
} }
...@@ -305,6 +307,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -305,6 +307,7 @@ const deployFn: DeployFunction = async (hre) => {
console.log(`OptimismPortal address: ${OptimismPortal.address}`) console.log(`OptimismPortal address: ${OptimismPortal.address}`)
console.log(`JSON:`) console.log(`JSON:`)
console.log(jsonifyTransaction(tx)) console.log(jsonifyTransaction(tx))
console.log(getCastCommand(tx))
console.log(await getTenderlySimulationLink(SystemDictator.provider, tx)) console.log(await getTenderlySimulationLink(SystemDictator.provider, tx))
} }
...@@ -313,7 +316,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -313,7 +316,7 @@ const deployFn: DeployFunction = async (hre) => {
const paused = await OptimismPortal.paused() const paused = await OptimismPortal.paused()
return !paused return !paused
}, },
30000, 5000,
1000 1000
) )
...@@ -334,6 +337,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -334,6 +337,7 @@ const deployFn: DeployFunction = async (hre) => {
console.log(`MSD address: ${SystemDictator.address}`) console.log(`MSD address: ${SystemDictator.address}`)
console.log(`JSON:`) console.log(`JSON:`)
console.log(jsonifyTransaction(tx)) console.log(jsonifyTransaction(tx))
console.log(getCastCommand(tx))
console.log(await getTenderlySimulationLink(SystemDictator.provider, tx)) console.log(await getTenderlySimulationLink(SystemDictator.provider, tx))
} }
...@@ -341,7 +345,7 @@ const deployFn: DeployFunction = async (hre) => { ...@@ -341,7 +345,7 @@ const deployFn: DeployFunction = async (hre) => {
async () => { async () => {
return SystemDictator.finalized() return SystemDictator.finalized()
}, },
30000, 5000,
1000 1000
) )
......
{
"address": "0x5086d1eEF304eb5284A0f6720f79403b4e9bE294",
"abi": [
{
"inputs": [
{
"internalType": "address",
"name": "_libAddressManager",
"type": "address"
},
{
"internalType": "string",
"name": "_implementationName",
"type": "string"
}
],
"stateMutability": "nonpayable",
"type": "constructor"
},
{
"stateMutability": "payable",
"type": "fallback"
}
],
"transactionHash": "0xc547cd677c4bcb87deead498c827b1dfcfd5d14826f58a0f7416a46024a03e85",
"receipt": {
"to": null,
"from": "0x3a605B442055DF2898E18cF518feb2e2A6BD0D31",
"contractAddress": "0x5086d1eEF304eb5284A0f6720f79403b4e9bE294",
"transactionIndex": 13,
"gasUsed": "291461",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"blockHash": "0x3e18927a7be75d3ac2b6f54f8c40d781756e78327f9cedd8dff5e79dd49403ff",
"transactionHash": "0xc547cd677c4bcb87deead498c827b1dfcfd5d14826f58a0f7416a46024a03e85",
"logs": [],
"blockNumber": 7017129,
"cumulativeGasUsed": "1033610",
"status": 1,
"byzantium": true
},
"args": [
"0xa6f73589243a6A7a9023b1Fa0651b1d89c177111",
"OVM_L1CrossDomainMessenger"
],
"numDeployments": 1,
"solcInputHash": "76c096070f4b72a86045eb6ab63709ed",
"metadata": "{\"compiler\":{\"version\":\"0.8.9+commit.e5eed63a\"},\"language\":\"Solidity\",\"output\":{\"abi\":[{\"inputs\":[{\"internalType\":\"address\",\"name\":\"_libAddressManager\",\"type\":\"address\"},{\"internalType\":\"string\",\"name\":\"_implementationName\",\"type\":\"string\"}],\"stateMutability\":\"nonpayable\",\"type\":\"constructor\"},{\"stateMutability\":\"payable\",\"type\":\"fallback\"}],\"devdoc\":{\"kind\":\"dev\",\"methods\":{\"constructor\":{\"params\":{\"_implementationName\":\"implementationName of the contract to proxy to.\",\"_libAddressManager\":\"Address of the Lib_AddressManager.\"}}},\"title\":\"Lib_ResolvedDelegateProxy\",\"version\":1},\"userdoc\":{\"kind\":\"user\",\"methods\":{},\"version\":1}},\"settings\":{\"compilationTarget\":{\"contracts/libraries/resolver/Lib_ResolvedDelegateProxy.sol\":\"Lib_ResolvedDelegateProxy\"},\"evmVersion\":\"london\",\"libraries\":{},\"metadata\":{\"bytecodeHash\":\"ipfs\",\"useLiteralContent\":true},\"optimizer\":{\"enabled\":true,\"runs\":10000},\"remappings\":[]},\"sources\":{\"@openzeppelin/contracts/access/Ownable.sol\":{\"content\":\"// SPDX-License-Identifier: MIT\\n\\npragma solidity ^0.8.0;\\n\\nimport \\\"../utils/Context.sol\\\";\\n\\n/**\\n * @dev Contract module which provides a basic access control mechanism, where\\n * there is an account (an owner) that can be granted exclusive access to\\n * specific functions.\\n *\\n * By default, the owner account will be the one that deploys the contract. This\\n * can later be changed with {transferOwnership}.\\n *\\n * This module is used through inheritance. It will make available the modifier\\n * `onlyOwner`, which can be applied to your functions to restrict their use to\\n * the owner.\\n */\\nabstract contract Ownable is Context {\\n address private _owner;\\n\\n event OwnershipTransferred(address indexed previousOwner, address indexed newOwner);\\n\\n /**\\n * @dev Initializes the contract setting the deployer as the initial owner.\\n */\\n constructor() {\\n _setOwner(_msgSender());\\n }\\n\\n /**\\n * @dev Returns the address of the current owner.\\n */\\n function owner() public view virtual returns (address) {\\n return _owner;\\n }\\n\\n /**\\n * @dev Throws if called by any account other than the owner.\\n */\\n modifier onlyOwner() {\\n require(owner() == _msgSender(), \\\"Ownable: caller is not the owner\\\");\\n _;\\n }\\n\\n /**\\n * @dev Leaves the contract without owner. It will not be possible to call\\n * `onlyOwner` functions anymore. 
Can only be called by the current owner.\\n *\\n * NOTE: Renouncing ownership will leave the contract without an owner,\\n * thereby removing any functionality that is only available to the owner.\\n */\\n function renounceOwnership() public virtual onlyOwner {\\n _setOwner(address(0));\\n }\\n\\n /**\\n * @dev Transfers ownership of the contract to a new account (`newOwner`).\\n * Can only be called by the current owner.\\n */\\n function transferOwnership(address newOwner) public virtual onlyOwner {\\n require(newOwner != address(0), \\\"Ownable: new owner is the zero address\\\");\\n _setOwner(newOwner);\\n }\\n\\n function _setOwner(address newOwner) private {\\n address oldOwner = _owner;\\n _owner = newOwner;\\n emit OwnershipTransferred(oldOwner, newOwner);\\n }\\n}\\n\",\"keccak256\":\"0x6bb804a310218875e89d12c053e94a13a4607cdf7cc2052f3e52bd32a0dc50a1\",\"license\":\"MIT\"},\"@openzeppelin/contracts/utils/Context.sol\":{\"content\":\"// SPDX-License-Identifier: MIT\\n\\npragma solidity ^0.8.0;\\n\\n/**\\n * @dev Provides information about the current execution context, including the\\n * sender of the transaction and its data. While these are generally available\\n * via msg.sender and msg.data, they should not be accessed in such a direct\\n * manner, since when dealing with meta-transactions the account sending and\\n * paying for execution may not be the actual sender (as far as an application\\n * is concerned).\\n *\\n * This contract is only required for intermediate, library-like contracts.\\n */\\nabstract contract Context {\\n function _msgSender() internal view virtual returns (address) {\\n return msg.sender;\\n }\\n\\n function _msgData() internal view virtual returns (bytes calldata) {\\n return msg.data;\\n }\\n}\\n\",\"keccak256\":\"0x90565a39ae45c80f0468dc96c7b20d0afc3055f344c8203a0c9258239f350b9f\",\"license\":\"MIT\"},\"contracts/libraries/resolver/Lib_AddressManager.sol\":{\"content\":\"// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.9;\\n\\n/* External Imports */\\nimport { Ownable } from \\\"@openzeppelin/contracts/access/Ownable.sol\\\";\\n\\n/**\\n * @title Lib_AddressManager\\n */\\ncontract Lib_AddressManager is Ownable {\\n /**********\\n * Events *\\n **********/\\n\\n event AddressSet(string indexed _name, address _newAddress, address _oldAddress);\\n\\n /*************\\n * Variables *\\n *************/\\n\\n mapping(bytes32 => address) private addresses;\\n\\n /********************\\n * Public Functions *\\n ********************/\\n\\n /**\\n * Changes the address associated with a particular name.\\n * @param _name String name to associate an address with.\\n * @param _address Address to associate with the name.\\n */\\n function setAddress(string memory _name, address _address) external onlyOwner {\\n bytes32 nameHash = _getNameHash(_name);\\n address oldAddress = addresses[nameHash];\\n addresses[nameHash] = _address;\\n\\n emit AddressSet(_name, _address, oldAddress);\\n }\\n\\n /**\\n * Retrieves the address associated with a given name.\\n * @param _name Name to retrieve an address for.\\n * @return Address associated with the given name.\\n */\\n function getAddress(string memory _name) external view returns (address) {\\n return addresses[_getNameHash(_name)];\\n }\\n\\n /**********************\\n * Internal Functions *\\n **********************/\\n\\n /**\\n * Computes the hash of a name.\\n * @param _name Name to compute a hash for.\\n * @return Hash of the given name.\\n */\\n function _getNameHash(string memory _name) internal pure 
returns (bytes32) {\\n return keccak256(abi.encodePacked(_name));\\n }\\n}\\n\",\"keccak256\":\"0xcde9b29429d512c549f7c1b8a033f161fa71c18cda08b241748663854196ae14\",\"license\":\"MIT\"},\"contracts/libraries/resolver/Lib_ResolvedDelegateProxy.sol\":{\"content\":\"// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.9;\\n\\n/* Library Imports */\\nimport { Lib_AddressManager } from \\\"./Lib_AddressManager.sol\\\";\\n\\n/**\\n * @title Lib_ResolvedDelegateProxy\\n */\\ncontract Lib_ResolvedDelegateProxy {\\n /*************\\n * Variables *\\n *************/\\n\\n // Using mappings to store fields to avoid overwriting storage slots in the\\n // implementation contract. For example, instead of storing these fields at\\n // storage slot `0` & `1`, they are stored at `keccak256(key + slot)`.\\n // See: https://solidity.readthedocs.io/en/v0.7.0/internals/layout_in_storage.html\\n // NOTE: Do not use this code in your own contract system.\\n // There is a known flaw in this contract, and we will remove it from the repository\\n // in the near future. Due to the very limited way that we are using it, this flaw is\\n // not an issue in our system.\\n mapping(address => string) private implementationName;\\n mapping(address => Lib_AddressManager) private addressManager;\\n\\n /***************\\n * Constructor *\\n ***************/\\n\\n /**\\n * @param _libAddressManager Address of the Lib_AddressManager.\\n * @param _implementationName implementationName of the contract to proxy to.\\n */\\n constructor(address _libAddressManager, string memory _implementationName) {\\n addressManager[address(this)] = Lib_AddressManager(_libAddressManager);\\n implementationName[address(this)] = _implementationName;\\n }\\n\\n /*********************\\n * Fallback Function *\\n *********************/\\n\\n fallback() external payable {\\n address target = addressManager[address(this)].getAddress(\\n (implementationName[address(this)])\\n );\\n\\n require(target != address(0), \\\"Target address must be initialized.\\\");\\n\\n // slither-disable-next-line controlled-delegatecall\\n (bool success, bytes memory returndata) = target.delegatecall(msg.data);\\n\\n if (success == true) {\\n assembly {\\n return(add(returndata, 0x20), mload(returndata))\\n }\\n } else {\\n assembly {\\n revert(add(returndata, 0x20), mload(returndata))\\n }\\n }\\n }\\n}\\n\",\"keccak256\":\"0x987774d18365ed25f5be61198e8b241728db6f97c6f2496f4a35bf9dbe0bda2b\",\"license\":\"MIT\"}},\"version\":1}",
"bytecode": "0x608060405234801561001057600080fd5b506040516105b53803806105b583398101604081905261002f91610125565b30600090815260016020908152604080832080546001600160a01b0319166001600160a01b038716179055828252909120825161006e92840190610076565b505050610252565b82805461008290610217565b90600052602060002090601f0160209004810192826100a457600085556100ea565b82601f106100bd57805160ff19168380011785556100ea565b828001600101855582156100ea579182015b828111156100ea5782518255916020019190600101906100cf565b506100f69291506100fa565b5090565b5b808211156100f657600081556001016100fb565b634e487b7160e01b600052604160045260246000fd5b6000806040838503121561013857600080fd5b82516001600160a01b038116811461014f57600080fd5b602084810151919350906001600160401b038082111561016e57600080fd5b818601915086601f83011261018257600080fd5b8151818111156101945761019461010f565b604051601f8201601f19908116603f011681019083821181831017156101bc576101bc61010f565b8160405282815289868487010111156101d457600080fd5b600093505b828410156101f657848401860151818501870152928501926101d9565b828411156102075760008684830101525b8096505050505050509250929050565b600181811c9082168061022b57607f821691505b6020821081141561024c57634e487b7160e01b600052602260045260246000fd5b50919050565b610354806102616000396000f3fe608060408181523060009081526001602090815282822054908290529181207fbf40fac1000000000000000000000000000000000000000000000000000000009093529173ffffffffffffffffffffffffffffffffffffffff9091169063bf40fac19061006d9060846101f2565b60206040518083038186803b15801561008557600080fd5b505afa158015610099573d6000803e3d6000fd5b505050506040513d601f19601f820116820180604052508101906100bd91906102d1565b905073ffffffffffffffffffffffffffffffffffffffff8116610166576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602360248201527f5461726765742061646472657373206d75737420626520696e697469616c697a60448201527f65642e0000000000000000000000000000000000000000000000000000000000606482015260840160405180910390fd5b6000808273ffffffffffffffffffffffffffffffffffffffff1660003660405161019192919061030e565b600060405180830381855af49150503d80600081146101cc576040519150601f19603f3d011682016040523d82523d6000602084013e6101d1565b606091505b509092509050600182151514156101ea57805160208201f35b805160208201fd5b600060208083526000845481600182811c91508083168061021457607f831692505b85831081141561024b577f4e487b710000000000000000000000000000000000000000000000000000000085526022600452602485fd5b8786018381526020018180156102685760018114610297576102c2565b7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff008616825287820196506102c2565b60008b81526020902060005b868110156102bc578154848201529085019089016102a3565b83019750505b50949998505050505050505050565b6000602082840312156102e357600080fd5b815173ffffffffffffffffffffffffffffffffffffffff8116811461030757600080fd5b9392505050565b818382376000910190815291905056fea2646970667358221220d66a7dad92a7f7528f41181719174e1d244423b8bb730d2884645c76cfa0944064736f6c63430008090033",
"deployedBytecode": "0x608060408181523060009081526001602090815282822054908290529181207fbf40fac1000000000000000000000000000000000000000000000000000000009093529173ffffffffffffffffffffffffffffffffffffffff9091169063bf40fac19061006d9060846101f2565b60206040518083038186803b15801561008557600080fd5b505afa158015610099573d6000803e3d6000fd5b505050506040513d601f19601f820116820180604052508101906100bd91906102d1565b905073ffffffffffffffffffffffffffffffffffffffff8116610166576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152602360248201527f5461726765742061646472657373206d75737420626520696e697469616c697a60448201527f65642e0000000000000000000000000000000000000000000000000000000000606482015260840160405180910390fd5b6000808273ffffffffffffffffffffffffffffffffffffffff1660003660405161019192919061030e565b600060405180830381855af49150503d80600081146101cc576040519150601f19603f3d011682016040523d82523d6000602084013e6101d1565b606091505b509092509050600182151514156101ea57805160208201f35b805160208201fd5b600060208083526000845481600182811c91508083168061021457607f831692505b85831081141561024b577f4e487b710000000000000000000000000000000000000000000000000000000085526022600452602485fd5b8786018381526020018180156102685760018114610297576102c2565b7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff008616825287820196506102c2565b60008b81526020902060005b868110156102bc578154848201529085019089016102a3565b83019750505b50949998505050505050505050565b6000602082840312156102e357600080fd5b815173ffffffffffffffffffffffffffffffffffffffff8116811461030757600080fd5b9392505050565b818382376000910190815291905056fea2646970667358221220d66a7dad92a7f7528f41181719174e1d244423b8bb730d2884645c76cfa0944064736f6c63430008090033",
"devdoc": {
"kind": "dev",
"methods": {
"constructor": {
"params": {
"_implementationName": "implementationName of the contract to proxy to.",
"_libAddressManager": "Address of the Lib_AddressManager."
}
}
},
"title": "Lib_ResolvedDelegateProxy",
"version": 1
},
"userdoc": {
"kind": "user",
"methods": {},
"version": 1
},
"storageLayout": {
"storage": [
{
"astId": 7129,
"contract": "contracts/libraries/resolver/Lib_ResolvedDelegateProxy.sol:Lib_ResolvedDelegateProxy",
"label": "implementationName",
"offset": 0,
"slot": "0",
"type": "t_mapping(t_address,t_string_storage)"
},
{
"astId": 7134,
"contract": "contracts/libraries/resolver/Lib_ResolvedDelegateProxy.sol:Lib_ResolvedDelegateProxy",
"label": "addressManager",
"offset": 0,
"slot": "1",
"type": "t_mapping(t_address,t_contract(Lib_AddressManager)7084)"
}
],
"types": {
"t_address": {
"encoding": "inplace",
"label": "address",
"numberOfBytes": "20"
},
"t_contract(Lib_AddressManager)7084": {
"encoding": "inplace",
"label": "contract Lib_AddressManager",
"numberOfBytes": "20"
},
"t_mapping(t_address,t_contract(Lib_AddressManager)7084)": {
"encoding": "mapping",
"key": "t_address",
"label": "mapping(address => contract Lib_AddressManager)",
"numberOfBytes": "32",
"value": "t_contract(Lib_AddressManager)7084"
},
"t_mapping(t_address,t_string_storage)": {
"encoding": "mapping",
"key": "t_address",
"label": "mapping(address => string)",
"numberOfBytes": "32",
"value": "t_string_storage"
},
"t_string_storage": {
"encoding": "bytes",
"label": "string",
"numberOfBytes": "32"
}
}
}
}
\ No newline at end of file
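The storageLayout above lists only the base slots (0 and 1) of the two mappings; as the embedded Solidity comment explains, the actual values are written to hashed slots derived from the key and the base slot, so nothing ever lands on slots 0 or 1 themselves and the proxy avoids clobbering the implementation's storage. A minimal sketch of that slot derivation, assuming ethers v5 and a placeholder proxy address:

import { ethers } from 'ethers'

// For a Solidity mapping at base slot p, the value for key k lives at
// keccak256(abi.encode(k, p)). Both proxy fields are keyed by address(this).
const proxy = '0x1111111111111111111111111111111111111111' // placeholder address

const mappingSlot = (key: string, baseSlot: number): string =>
  ethers.utils.keccak256(
    ethers.utils.defaultAbiCoder.encode(['address', 'uint256'], [key, baseSlot])
  )

console.log(mappingSlot(proxy, 0)) // slot holding implementationName[address(this)]
console.log(mappingSlot(proxy, 1)) // slot holding addressManager[address(this)]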
{
"address": "0x636Af16bf2f682dD3109e60102b8E1A089FedAa8",
"abi": [
{
"inputs": [
{
"internalType": "address",
"name": "_owner",
"type": "address"
}
],
"stateMutability": "nonpayable",
"type": "constructor"
},
{
"stateMutability": "payable",
"type": "fallback"
},
{
"inputs": [],
"name": "getImplementation",
"outputs": [
{
"internalType": "address",
"name": "",
"type": "address"
}
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [],
"name": "getOwner",
"outputs": [
{
"internalType": "address",
"name": "",
"type": "address"
}
],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "bytes",
"name": "_code",
"type": "bytes"
}
],
"name": "setCode",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "address",
"name": "_owner",
"type": "address"
}
],
"name": "setOwner",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
},
{
"inputs": [
{
"internalType": "bytes32",
"name": "_key",
"type": "bytes32"
},
{
"internalType": "bytes32",
"name": "_value",
"type": "bytes32"
}
],
"name": "setStorage",
"outputs": [],
"stateMutability": "nonpayable",
"type": "function"
}
],
"transactionHash": "0xd57130025124776619229f980ae08c3097156ab127c897fa9ef17e29bf757a16",
"receipt": {
"to": null,
"from": "0x3a605B442055DF2898E18cF518feb2e2A6BD0D31",
"contractAddress": "0x636Af16bf2f682dD3109e60102b8E1A089FedAa8",
"transactionIndex": 8,
"gasUsed": "614417",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"blockHash": "0x85ce123b23be93051e83bc9d6dd2bf7817ee0fd338b936684c821020b8285393",
"transactionHash": "0xd57130025124776619229f980ae08c3097156ab127c897fa9ef17e29bf757a16",
"logs": [],
"blockNumber": 7017137,
"cumulativeGasUsed": "5543459",
"status": 1,
"byzantium": true
},
"args": [
"0x3a605B442055DF2898E18cF518feb2e2A6BD0D31"
],
"numDeployments": 1,
"solcInputHash": "0a41276e1e61949b5de1e4f1cd89fb6c",
"metadata": "{\"compiler\":{\"version\":\"0.8.9+commit.e5eed63a\"},\"language\":\"Solidity\",\"output\":{\"abi\":[{\"inputs\":[{\"internalType\":\"address\",\"name\":\"_owner\",\"type\":\"address\"}],\"stateMutability\":\"nonpayable\",\"type\":\"constructor\"},{\"stateMutability\":\"payable\",\"type\":\"fallback\"},{\"inputs\":[],\"name\":\"getImplementation\",\"outputs\":[{\"internalType\":\"address\",\"name\":\"\",\"type\":\"address\"}],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"getOwner\",\"outputs\":[{\"internalType\":\"address\",\"name\":\"\",\"type\":\"address\"}],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[{\"internalType\":\"bytes\",\"name\":\"_code\",\"type\":\"bytes\"}],\"name\":\"setCode\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[{\"internalType\":\"address\",\"name\":\"_owner\",\"type\":\"address\"}],\"name\":\"setOwner\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[{\"internalType\":\"bytes32\",\"name\":\"_key\",\"type\":\"bytes32\"},{\"internalType\":\"bytes32\",\"name\":\"_value\",\"type\":\"bytes32\"}],\"name\":\"setStorage\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"}],\"devdoc\":{\"details\":\"Basic ChugSplash proxy contract for L1. Very close to being a normal proxy but has added functions `setCode` and `setStorage` for changing the code or storage of the contract. Nifty! Note for future developers: do NOT make anything in this contract 'public' unless you know what you're doing. Anything public can potentially have a function signature that conflicts with a signature attached to the implementation contract. Public functions SHOULD always have the 'proxyCallIfNotOwner' modifier unless there's some *really* good reason not to have that modifier. And there almost certainly is not a good reason to not have that modifier. Beware!\",\"kind\":\"dev\",\"methods\":{\"constructor\":{\"params\":{\"_owner\":\"Address of the initial contract owner.\"}},\"getImplementation()\":{\"returns\":{\"_0\":\"Implementation address.\"}},\"getOwner()\":{\"returns\":{\"_0\":\"Owner address.\"}},\"setCode(bytes)\":{\"params\":{\"_code\":\"New contract code to run inside this contract.\"}},\"setOwner(address)\":{\"params\":{\"_owner\":\"New owner of the proxy contract.\"}},\"setStorage(bytes32,bytes32)\":{\"params\":{\"_key\":\"Storage key to modify.\",\"_value\":\"New value for the storage key.\"}}},\"title\":\"L1ChugSplashProxy\",\"version\":1},\"userdoc\":{\"kind\":\"user\",\"methods\":{\"getImplementation()\":{\"notice\":\"Queries the implementation address. Can only be called by the owner OR by making an eth_call and setting the \\\"from\\\" address to address(0).\"},\"getOwner()\":{\"notice\":\"Queries the owner of the proxy contract. Can only be called by the owner OR by making an eth_call and setting the \\\"from\\\" address to address(0).\"},\"setCode(bytes)\":{\"notice\":\"Sets the code that should be running behind this proxy. Note that this scheme is a bit different from the standard proxy scheme where one would typically deploy the code separately and then set the implementation address. We're doing it this way because it gives us a lot more freedom on the client side. Can only be triggered by the contract owner.\"},\"setOwner(address)\":{\"notice\":\"Changes the owner of the proxy contract. 
Only callable by the owner.\"},\"setStorage(bytes32,bytes32)\":{\"notice\":\"Modifies some storage slot within the proxy contract. Gives us a lot of power to perform upgrades in a more transparent way. Only callable by the owner.\"}},\"version\":1}},\"settings\":{\"compilationTarget\":{\"contracts/chugsplash/L1ChugSplashProxy.sol\":\"L1ChugSplashProxy\"},\"evmVersion\":\"london\",\"libraries\":{},\"metadata\":{\"bytecodeHash\":\"ipfs\",\"useLiteralContent\":true},\"optimizer\":{\"enabled\":true,\"runs\":10000},\"remappings\":[]},\"sources\":{\"contracts/chugsplash/L1ChugSplashProxy.sol\":{\"content\":\"// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.9;\\n\\nimport { iL1ChugSplashDeployer } from \\\"./interfaces/iL1ChugSplashDeployer.sol\\\";\\n\\n/**\\n * @title L1ChugSplashProxy\\n * @dev Basic ChugSplash proxy contract for L1. Very close to being a normal proxy but has added\\n * functions `setCode` and `setStorage` for changing the code or storage of the contract. Nifty!\\n *\\n * Note for future developers: do NOT make anything in this contract 'public' unless you know what\\n * you're doing. Anything public can potentially have a function signature that conflicts with a\\n * signature attached to the implementation contract. Public functions SHOULD always have the\\n * 'proxyCallIfNotOwner' modifier unless there's some *really* good reason not to have that\\n * modifier. And there almost certainly is not a good reason to not have that modifier. Beware!\\n */\\ncontract L1ChugSplashProxy {\\n /*************\\n * Constants *\\n *************/\\n\\n // \\\"Magic\\\" prefix. When prepended to some arbitrary bytecode and used to create a contract, the\\n // appended bytecode will be deployed as given.\\n bytes13 internal constant DEPLOY_CODE_PREFIX = 0x600D380380600D6000396000f3;\\n\\n // bytes32(uint256(keccak256('eip1967.proxy.implementation')) - 1)\\n bytes32 internal constant IMPLEMENTATION_KEY =\\n 0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc;\\n\\n // bytes32(uint256(keccak256('eip1967.proxy.admin')) - 1)\\n bytes32 internal constant OWNER_KEY =\\n 0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103;\\n\\n /***************\\n * Constructor *\\n ***************/\\n\\n /**\\n * @param _owner Address of the initial contract owner.\\n */\\n constructor(address _owner) {\\n _setOwner(_owner);\\n }\\n\\n /**********************\\n * Function Modifiers *\\n **********************/\\n\\n /**\\n * Blocks a function from being called when the parent signals that the system should be paused\\n * via an isUpgrading function.\\n */\\n modifier onlyWhenNotPaused() {\\n address owner = _getOwner();\\n\\n // We do a low-level call because there's no guarantee that the owner actually *is* an\\n // L1ChugSplashDeployer contract and Solidity will throw errors if we do a normal call and\\n // it turns out that it isn't the right type of contract.\\n (bool success, bytes memory returndata) = owner.staticcall(\\n abi.encodeWithSelector(iL1ChugSplashDeployer.isUpgrading.selector)\\n );\\n\\n // If the call was unsuccessful then we assume that there's no \\\"isUpgrading\\\" method and we\\n // can just continue as normal. We also expect that the return value is exactly 32 bytes\\n // long. If this isn't the case then we can safely ignore the result.\\n if (success && returndata.length == 32) {\\n // Although the expected value is a *boolean*, it's safer to decode as a uint256 in the\\n // case that the isUpgrading function returned something other than 0 or 1. 
But we only\\n // really care about the case where this value is 0 (= false).\\n uint256 ret = abi.decode(returndata, (uint256));\\n require(ret == 0, \\\"L1ChugSplashProxy: system is currently being upgraded\\\");\\n }\\n\\n _;\\n }\\n\\n /**\\n * Makes a proxy call instead of triggering the given function when the caller is either the\\n * owner or the zero address. Caller can only ever be the zero address if this function is\\n * being called off-chain via eth_call, which is totally fine and can be convenient for\\n * client-side tooling. Avoids situations where the proxy and implementation share a sighash\\n * and the proxy function ends up being called instead of the implementation one.\\n *\\n * Note: msg.sender == address(0) can ONLY be triggered off-chain via eth_call. If there's a\\n * way for someone to send a transaction with msg.sender == address(0) in any real context then\\n * we have much bigger problems. Primary reason to include this additional allowed sender is\\n * because the owner address can be changed dynamically and we do not want clients to have to\\n * keep track of the current owner in order to make an eth_call that doesn't trigger the\\n * proxied contract.\\n */\\n // slither-disable-next-line incorrect-modifier\\n modifier proxyCallIfNotOwner() {\\n if (msg.sender == _getOwner() || msg.sender == address(0)) {\\n _;\\n } else {\\n // This WILL halt the call frame on completion.\\n _doProxyCall();\\n }\\n }\\n\\n /*********************\\n * Fallback Function *\\n *********************/\\n\\n // slither-disable-next-line locked-ether\\n fallback() external payable {\\n // Proxy call by default.\\n _doProxyCall();\\n }\\n\\n /********************\\n * Public Functions *\\n ********************/\\n\\n /**\\n * Sets the code that should be running behind this proxy. Note that this scheme is a bit\\n * different from the standard proxy scheme where one would typically deploy the code\\n * separately and then set the implementation address. We're doing it this way because it gives\\n * us a lot more freedom on the client side. Can only be triggered by the contract owner.\\n * @param _code New contract code to run inside this contract.\\n */\\n // slither-disable-next-line external-function\\n function setCode(bytes memory _code) public proxyCallIfNotOwner {\\n // Get the code hash of the current implementation.\\n address implementation = _getImplementation();\\n\\n // If the code hash matches the new implementation then we return early.\\n if (keccak256(_code) == _getAccountCodeHash(implementation)) {\\n return;\\n }\\n\\n // Create the deploycode by appending the magic prefix.\\n bytes memory deploycode = abi.encodePacked(DEPLOY_CODE_PREFIX, _code);\\n\\n // Deploy the code and set the new implementation address.\\n address newImplementation;\\n assembly {\\n newImplementation := create(0x0, add(deploycode, 0x20), mload(deploycode))\\n }\\n\\n // Check that the code was actually deployed correctly. I'm not sure if you can ever\\n // actually fail this check. Should only happen if the contract creation from above runs\\n // out of gas but this parent execution thread does NOT run out of gas. Seems like we\\n // should be doing this check anyway though.\\n require(\\n _getAccountCodeHash(newImplementation) == keccak256(_code),\\n \\\"L1ChugSplashProxy: code was not correctly deployed.\\\"\\n );\\n\\n _setImplementation(newImplementation);\\n }\\n\\n /**\\n * Modifies some storage slot within the proxy contract. 
Gives us a lot of power to perform\\n * upgrades in a more transparent way. Only callable by the owner.\\n * @param _key Storage key to modify.\\n * @param _value New value for the storage key.\\n */\\n // slither-disable-next-line external-function\\n function setStorage(bytes32 _key, bytes32 _value) public proxyCallIfNotOwner {\\n assembly {\\n sstore(_key, _value)\\n }\\n }\\n\\n /**\\n * Changes the owner of the proxy contract. Only callable by the owner.\\n * @param _owner New owner of the proxy contract.\\n */\\n // slither-disable-next-line external-function\\n function setOwner(address _owner) public proxyCallIfNotOwner {\\n _setOwner(_owner);\\n }\\n\\n /**\\n * Queries the owner of the proxy contract. Can only be called by the owner OR by making an\\n * eth_call and setting the \\\"from\\\" address to address(0).\\n * @return Owner address.\\n */\\n // slither-disable-next-line external-function\\n function getOwner() public proxyCallIfNotOwner returns (address) {\\n return _getOwner();\\n }\\n\\n /**\\n * Queries the implementation address. Can only be called by the owner OR by making an\\n * eth_call and setting the \\\"from\\\" address to address(0).\\n * @return Implementation address.\\n */\\n // slither-disable-next-line external-function\\n function getImplementation() public proxyCallIfNotOwner returns (address) {\\n return _getImplementation();\\n }\\n\\n /**********************\\n * Internal Functions *\\n **********************/\\n\\n /**\\n * Sets the implementation address.\\n * @param _implementation New implementation address.\\n */\\n function _setImplementation(address _implementation) internal {\\n assembly {\\n sstore(IMPLEMENTATION_KEY, _implementation)\\n }\\n }\\n\\n /**\\n * Queries the implementation address.\\n * @return Implementation address.\\n */\\n function _getImplementation() internal view returns (address) {\\n address implementation;\\n assembly {\\n implementation := sload(IMPLEMENTATION_KEY)\\n }\\n return implementation;\\n }\\n\\n /**\\n * Changes the owner of the proxy contract.\\n * @param _owner New owner of the proxy contract.\\n */\\n function _setOwner(address _owner) internal {\\n assembly {\\n sstore(OWNER_KEY, _owner)\\n }\\n }\\n\\n /**\\n * Queries the owner of the proxy contract.\\n * @return Owner address.\\n */\\n function _getOwner() internal view returns (address) {\\n address owner;\\n assembly {\\n owner := sload(OWNER_KEY)\\n }\\n return owner;\\n }\\n\\n /**\\n * Gets the code hash for a given account.\\n * @param _account Address of the account to get a code hash for.\\n * @return Code hash for the account.\\n */\\n function _getAccountCodeHash(address _account) internal view returns (bytes32) {\\n bytes32 codeHash;\\n assembly {\\n codeHash := extcodehash(_account)\\n }\\n return codeHash;\\n }\\n\\n /**\\n * Performs the proxy call via a delegatecall.\\n */\\n function _doProxyCall() internal onlyWhenNotPaused {\\n address implementation = _getImplementation();\\n\\n require(implementation != address(0), \\\"L1ChugSplashProxy: implementation is not set yet\\\");\\n\\n assembly {\\n // Copy calldata into memory at 0x0....calldatasize.\\n calldatacopy(0x0, 0x0, calldatasize())\\n\\n // Perform the delegatecall, make sure to pass all available gas.\\n let success := delegatecall(gas(), implementation, 0x0, calldatasize(), 0x0, 0x0)\\n\\n // Copy returndata into memory at 0x0....returndatasize. 
Note that this *will*\\n // overwrite the calldata that we just copied into memory but that doesn't really\\n // matter because we'll be returning in a second anyway.\\n returndatacopy(0x0, 0x0, returndatasize())\\n\\n // Success == 0 means a revert. We'll revert too and pass the data up.\\n if iszero(success) {\\n revert(0x0, returndatasize())\\n }\\n\\n // Otherwise we'll just return and pass the data up.\\n return(0x0, returndatasize())\\n }\\n }\\n}\\n\",\"keccak256\":\"0xc3cb52dfdc2706992572dd5621ae89ba919fd20539b73488a455d564f16f1b8d\",\"license\":\"MIT\"},\"contracts/chugsplash/interfaces/iL1ChugSplashDeployer.sol\":{\"content\":\"// SPDX-License-Identifier: MIT\\npragma solidity ^0.8.9;\\n\\n/**\\n * @title iL1ChugSplashDeployer\\n */\\ninterface iL1ChugSplashDeployer {\\n function isUpgrading() external view returns (bool);\\n}\\n\",\"keccak256\":\"0x9a496d99f111c1091f0c33d6bfc7802a522baa7235614b0014f35e4bbe280e57\",\"license\":\"MIT\"}},\"version\":1}",
"bytecode": "0x608060405234801561001057600080fd5b50604051610a5d380380610a5d83398101604081905261002f9161005d565b610057817fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d610355565b5061008d565b60006020828403121561006f57600080fd5b81516001600160a01b038116811461008657600080fd5b9392505050565b6109c18061009c6000396000f3fe60806040526004361061005a5760003560e01c8063893d20e811610043578063893d20e8146100a45780639b0b0fda146100e2578063aaf10f42146101025761005a565b806313af4035146100645780636c5d4ad014610084575b610062610117565b005b34801561007057600080fd5b5061006261007f366004610792565b6103ba565b34801561009057600080fd5b5061006261009f3660046107fe565b61044b565b3480156100b057600080fd5b506100b9610601565b60405173ffffffffffffffffffffffffffffffffffffffff909116815260200160405180910390f35b3480156100ee57600080fd5b506100626100fd3660046108cd565b610698565b34801561010e57600080fd5b506100b9610706565b60006101417fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035490565b60408051600481526024810182526020810180517bffffffffffffffffffffffffffffffffffffffffffffffffffffffff167fb7947262000000000000000000000000000000000000000000000000000000001790529051919250600091829173ffffffffffffffffffffffffffffffffffffffff8516916101c3919061092a565b600060405180830381855afa9150503d80600081146101fe576040519150601f19603f3d011682016040523d82523d6000602084013e610203565b606091505b5091509150818015610216575080516020145b156102c8576000818060200190518101906102319190610936565b905080156102c6576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152603560248201527f4c314368756753706c61736850726f78793a2073797374656d2069732063757260448201527f72656e746c79206265696e67207570677261646564000000000000000000000060648201526084015b60405180910390fd5b505b60006102f27f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc5490565b905073ffffffffffffffffffffffffffffffffffffffff8116610397576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152603060248201527f4c314368756753706c61736850726f78793a20696d706c656d656e746174696f60448201527f6e206973206e6f7420736574207965740000000000000000000000000000000060648201526084016102bd565b3660008037600080366000845af43d6000803e806103b4573d6000fd5b503d6000f35b7fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035473ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff161480610413575033155b1561044357610440817fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d610355565b50565b610440610117565b7fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035473ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff1614806104a4575033155b156104435760006104d37f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc5490565b9050803f8251602084012014156104e8575050565b60405160009061051e907f600d380380600d6000396000f30000000000000000000000000000000000000090859060200161094f565b604051602081830303815290604052905060008151602083016000f084516020860120909150813f146105d3576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152603360248201527f4c314368756753706c61736850726f78793a20636f646520776173206e6f742060448201527f636f72726563746c79206465706c6f7965642e0000000000000000000000000060648201526084016102bd565b6105fb817f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc55565b50505050565b600061062b7fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035490565b73ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffff
ffffffffffffffffffff161480610662575033155b1561068d57507fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035490565b610695610117565b90565b7fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035473ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff1614806106f1575033155b156106fa579055565b610702610117565b5050565b60006107307fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035490565b73ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff161480610767575033155b1561068d57507f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc5490565b6000602082840312156107a457600080fd5b813573ffffffffffffffffffffffffffffffffffffffff811681146107c857600080fd5b9392505050565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052604160045260246000fd5b60006020828403121561081057600080fd5b813567ffffffffffffffff8082111561082857600080fd5b818401915084601f83011261083c57600080fd5b81358181111561084e5761084e6107cf565b604051601f82017fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe0908116603f01168101908382118183101715610894576108946107cf565b816040528281528760208487010111156108ad57600080fd5b826020860160208301376000928101602001929092525095945050505050565b600080604083850312156108e057600080fd5b50508035926020909101359150565b6000815160005b8181101561091057602081850181015186830152016108f6565b8181111561091f576000828601525b509290920192915050565b60006107c882846108ef565b60006020828403121561094857600080fd5b5051919050565b7fffffffffffffffffffffffffff00000000000000000000000000000000000000831681526000610983600d8301846108ef565b94935050505056fea2646970667358221220aea34fd8cdcf3a9cced029d5f7b1e628f42ad1514501878e0040df2afddb6e7164736f6c63430008090033",
"deployedBytecode": "0x60806040526004361061005a5760003560e01c8063893d20e811610043578063893d20e8146100a45780639b0b0fda146100e2578063aaf10f42146101025761005a565b806313af4035146100645780636c5d4ad014610084575b610062610117565b005b34801561007057600080fd5b5061006261007f366004610792565b6103ba565b34801561009057600080fd5b5061006261009f3660046107fe565b61044b565b3480156100b057600080fd5b506100b9610601565b60405173ffffffffffffffffffffffffffffffffffffffff909116815260200160405180910390f35b3480156100ee57600080fd5b506100626100fd3660046108cd565b610698565b34801561010e57600080fd5b506100b9610706565b60006101417fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035490565b60408051600481526024810182526020810180517bffffffffffffffffffffffffffffffffffffffffffffffffffffffff167fb7947262000000000000000000000000000000000000000000000000000000001790529051919250600091829173ffffffffffffffffffffffffffffffffffffffff8516916101c3919061092a565b600060405180830381855afa9150503d80600081146101fe576040519150601f19603f3d011682016040523d82523d6000602084013e610203565b606091505b5091509150818015610216575080516020145b156102c8576000818060200190518101906102319190610936565b905080156102c6576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152603560248201527f4c314368756753706c61736850726f78793a2073797374656d2069732063757260448201527f72656e746c79206265696e67207570677261646564000000000000000000000060648201526084015b60405180910390fd5b505b60006102f27f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc5490565b905073ffffffffffffffffffffffffffffffffffffffff8116610397576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152603060248201527f4c314368756753706c61736850726f78793a20696d706c656d656e746174696f60448201527f6e206973206e6f7420736574207965740000000000000000000000000000000060648201526084016102bd565b3660008037600080366000845af43d6000803e806103b4573d6000fd5b503d6000f35b7fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035473ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff161480610413575033155b1561044357610440817fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d610355565b50565b610440610117565b7fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035473ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff1614806104a4575033155b156104435760006104d37f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc5490565b9050803f8251602084012014156104e8575050565b60405160009061051e907f600d380380600d6000396000f30000000000000000000000000000000000000090859060200161094f565b604051602081830303815290604052905060008151602083016000f084516020860120909150813f146105d3576040517f08c379a000000000000000000000000000000000000000000000000000000000815260206004820152603360248201527f4c314368756753706c61736850726f78793a20636f646520776173206e6f742060448201527f636f72726563746c79206465706c6f7965642e0000000000000000000000000060648201526084016102bd565b6105fb817f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc55565b50505050565b600061062b7fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035490565b73ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff161480610662575033155b1561068d57507fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035490565b610695610117565b90565b7fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035473ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffff
ff1614806106f1575033155b156106fa579055565b610702610117565b5050565b60006107307fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035490565b73ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff161480610767575033155b1561068d57507f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc5490565b6000602082840312156107a457600080fd5b813573ffffffffffffffffffffffffffffffffffffffff811681146107c857600080fd5b9392505050565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052604160045260246000fd5b60006020828403121561081057600080fd5b813567ffffffffffffffff8082111561082857600080fd5b818401915084601f83011261083c57600080fd5b81358181111561084e5761084e6107cf565b604051601f82017fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe0908116603f01168101908382118183101715610894576108946107cf565b816040528281528760208487010111156108ad57600080fd5b826020860160208301376000928101602001929092525095945050505050565b600080604083850312156108e057600080fd5b50508035926020909101359150565b6000815160005b8181101561091057602081850181015186830152016108f6565b8181111561091f576000828601525b509290920192915050565b60006107c882846108ef565b60006020828403121561094857600080fd5b5051919050565b7fffffffffffffffffffffffffff00000000000000000000000000000000000000831681526000610983600d8301846108ef565b94935050505056fea2646970667358221220aea34fd8cdcf3a9cced029d5f7b1e628f42ad1514501878e0040df2afddb6e7164736f6c63430008090033",
"devdoc": {
"details": "Basic ChugSplash proxy contract for L1. Very close to being a normal proxy but has added functions `setCode` and `setStorage` for changing the code or storage of the contract. Nifty! Note for future developers: do NOT make anything in this contract 'public' unless you know what you're doing. Anything public can potentially have a function signature that conflicts with a signature attached to the implementation contract. Public functions SHOULD always have the 'proxyCallIfNotOwner' modifier unless there's some *really* good reason not to have that modifier. And there almost certainly is not a good reason to not have that modifier. Beware!",
"kind": "dev",
"methods": {
"constructor": {
"params": {
"_owner": "Address of the initial contract owner."
}
},
"getImplementation()": {
"returns": {
"_0": "Implementation address."
}
},
"getOwner()": {
"returns": {
"_0": "Owner address."
}
},
"setCode(bytes)": {
"params": {
"_code": "New contract code to run inside this contract."
}
},
"setOwner(address)": {
"params": {
"_owner": "New owner of the proxy contract."
}
},
"setStorage(bytes32,bytes32)": {
"params": {
"_key": "Storage key to modify.",
"_value": "New value for the storage key."
}
}
},
"title": "L1ChugSplashProxy",
"version": 1
},
"userdoc": {
"kind": "user",
"methods": {
"getImplementation()": {
"notice": "Queries the implementation address. Can only be called by the owner OR by making an eth_call and setting the \"from\" address to address(0)."
},
"getOwner()": {
"notice": "Queries the owner of the proxy contract. Can only be called by the owner OR by making an eth_call and setting the \"from\" address to address(0)."
},
"setCode(bytes)": {
"notice": "Sets the code that should be running behind this proxy. Note that this scheme is a bit different from the standard proxy scheme where one would typically deploy the code separately and then set the implementation address. We're doing it this way because it gives us a lot more freedom on the client side. Can only be triggered by the contract owner."
},
"setOwner(address)": {
"notice": "Changes the owner of the proxy contract. Only callable by the owner."
},
"setStorage(bytes32,bytes32)": {
"notice": "Modifies some storage slot within the proxy contract. Gives us a lot of power to perform upgrades in a more transparent way. Only callable by the owner."
}
},
"version": 1
},
"storageLayout": {
"storage": [],
"types": null
}
}
\ No newline at end of file
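The setCode scheme described in the devdoc above relies on the 13-byte DEPLOY_CODE_PREFIX: prepended to arbitrary runtime bytecode, it forms init code that simply copies the appended bytes to memory and returns them, so CREATE deploys exactly the bytes that were passed in. A rough sketch of the construction, assuming ethers v5 and a placeholder runtime bytecode:

import { ethers } from 'ethers'

// 0x600d380380600d6000396000f3 disassembles to:
//   PUSH1 0x0d, CODESIZE, SUB          -> length of the appended runtime code
//   DUP1, PUSH1 0x0d, PUSH1 0x00,
//   CODECOPY                           -> copy it to memory offset 0
//   PUSH1 0x00, RETURN                 -> return it as the deployed code
const DEPLOY_CODE_PREFIX = '0x600d380380600d6000396000f3'

const runtimeCode = '0x6080604052600080fd' // placeholder runtime bytecode
const deploycode = ethers.utils.hexConcat([DEPLOY_CODE_PREFIX, runtimeCode])
console.log(deploycode)

The proxy then creates a contract from deploycode and stores the resulting address under the EIP-1967 implementation slot, which is why getImplementation() behaves as it would on a standard proxy.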
...@@ -23,7 +23,7 @@ const config: HardhatUserConfig = { ...@@ -23,7 +23,7 @@ const config: HardhatUserConfig = {
live: false, live: false,
}, },
mainnet: { mainnet: {
url: process.env.RPC_URL || 'http://localhost:8545', url: process.env.L1_RPC || 'http://localhost:8545',
}, },
devnetL1: { devnetL1: {
live: false, live: false,
......
{ {
"name": "@eth-optimism/contracts-bedrock", "name": "@eth-optimism/contracts-bedrock",
"version": "0.13.0", "version": "0.13.1",
"description": "Contracts for Optimism Specs", "description": "Contracts for Optimism Specs",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -17,7 +17,7 @@ ...@@ -17,7 +17,7 @@
"bindings": "cd ../../op-bindings && make", "bindings": "cd ../../op-bindings && make",
"build:forge": "forge build", "build:forge": "forge build",
"build:with-metadata": "FOUNDRY_PROFILE=echidna yarn build:forge", "build:with-metadata": "FOUNDRY_PROFILE=echidna yarn build:forge",
"build:differential": "tsc scripts/differential-testing.ts --outDir dist --moduleResolution node --esModuleInterop", "build:differential": "go build -o ./scripts/differential-testing/differential-testing ./scripts/differential-testing",
"build:fuzz": "(cd test-case-generator && go build ./cmd/fuzz.go)", "build:fuzz": "(cd test-case-generator && go build ./cmd/fuzz.go)",
"prebuild": "yarn ts-node scripts/verify-foundry-install.ts", "prebuild": "yarn ts-node scripts/verify-foundry-install.ts",
"build": "hardhat compile && yarn autogen:artifacts && yarn build:ts && yarn typechain", "build": "hardhat compile && yarn autogen:artifacts && yarn build:ts && yarn typechain",
...@@ -59,8 +59,6 @@ ...@@ -59,8 +59,6 @@
}, },
"devDependencies": { "devDependencies": {
"@eth-optimism/hardhat-deploy-config": "^0.2.5", "@eth-optimism/hardhat-deploy-config": "^0.2.5",
"@ethereumjs/trie": "^5.0.0-beta.1",
"@ethereumjs/util": "^8.0.0-beta.1",
"@ethersproject/abstract-provider": "^5.7.0", "@ethersproject/abstract-provider": "^5.7.0",
"@ethersproject/abstract-signer": "^5.7.0", "@ethersproject/abstract-signer": "^5.7.0",
"ethereumjs-wallet": "^1.0.2", "ethereumjs-wallet": "^1.0.2",
......
import { BigNumber, utils, constants } from 'ethers'
import {
decodeVersionedNonce,
hashCrossDomainMessage,
DepositTx,
SourceHashDomain,
encodeCrossDomainMessage,
hashWithdrawal,
hashOutputRootProof,
} from '@eth-optimism/core-utils'
import { SecureTrie } from '@ethereumjs/trie'
import { Account, Address, toBuffer, bufferToHex } from '@ethereumjs/util'
import { predeploys } from '../src'
const { hexZeroPad, keccak256 } = utils
const args = process.argv.slice(2)
const command = args[0]
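// Each command below ABI-encodes its result and writes it to stdout, so a Solidity
// differential test can read the value back over FFI.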
;(async () => {
switch (command) {
case 'decodeVersionedNonce': {
const input = BigNumber.from(args[1])
const { nonce, version } = decodeVersionedNonce(input)
const output = utils.defaultAbiCoder.encode(
['uint256', 'uint256'],
[nonce.toHexString(), version.toHexString()]
)
process.stdout.write(output)
break
}
case 'encodeCrossDomainMessage': {
const nonce = BigNumber.from(args[1])
const sender = args[2]
const target = args[3]
const value = BigNumber.from(args[4])
const gasLimit = BigNumber.from(args[5])
const data = args[6]
const encoding = encodeCrossDomainMessage(
nonce,
sender,
target,
value,
gasLimit,
data
)
const output = utils.defaultAbiCoder.encode(['bytes'], [encoding])
process.stdout.write(output)
break
}
case 'hashCrossDomainMessage': {
const nonce = BigNumber.from(args[1])
const sender = args[2]
const target = args[3]
const value = BigNumber.from(args[4])
const gasLimit = BigNumber.from(args[5])
const data = args[6]
const hash = hashCrossDomainMessage(
nonce,
sender,
target,
value,
gasLimit,
data
)
const output = utils.defaultAbiCoder.encode(['bytes32'], [hash])
process.stdout.write(output)
break
}
case 'hashDepositTransaction': {
// The solidity transaction hash computation currently only works with
// user deposits. System deposit transaction hashing is not supported.
const l1BlockHash = args[1]
const logIndex = BigNumber.from(args[2])
const from = args[3]
const to = args[4]
const mint = BigNumber.from(args[5])
const value = BigNumber.from(args[6])
const gas = BigNumber.from(args[7])
const data = args[8]
const tx = new DepositTx({
l1BlockHash,
logIndex,
from,
to,
mint,
value,
gas,
data,
isSystemTransaction: false,
domain: SourceHashDomain.UserDeposit,
})
const digest = tx.hash()
const output = utils.defaultAbiCoder.encode(['bytes32'], [digest])
process.stdout.write(output)
break
}
case 'encodeDepositTransaction': {
const from = args[1]
const to = args[2]
const value = BigNumber.from(args[3])
const mint = BigNumber.from(args[4])
const gasLimit = BigNumber.from(args[5])
      const isCreate = args[6] === 'true'
const data = args[7]
const l1BlockHash = args[8]
const logIndex = BigNumber.from(args[9])
const tx = new DepositTx({
from,
to: isCreate ? null : to,
value,
mint,
gas: gasLimit,
data,
l1BlockHash,
logIndex,
domain: SourceHashDomain.UserDeposit,
})
const raw = tx.encode()
const output = utils.defaultAbiCoder.encode(['bytes'], [raw])
process.stdout.write(output)
break
}
case 'hashWithdrawal': {
const nonce = BigNumber.from(args[1])
const sender = args[2]
const target = args[3]
const value = BigNumber.from(args[4])
const gas = BigNumber.from(args[5])
const data = args[6]
const hash = hashWithdrawal(nonce, sender, target, value, gas, data)
const output = utils.defaultAbiCoder.encode(['bytes32'], [hash])
process.stdout.write(output)
break
}
case 'hashOutputRootProof': {
const version = hexZeroPad(BigNumber.from(args[1]).toHexString(), 32)
const stateRoot = hexZeroPad(BigNumber.from(args[2]).toHexString(), 32)
const messagePasserStorageRoot = hexZeroPad(
BigNumber.from(args[3]).toHexString(),
32
)
const latestBlockhash = hexZeroPad(
BigNumber.from(args[4]).toHexString(),
32
)
const hash = hashOutputRootProof({
version,
stateRoot,
messagePasserStorageRoot,
latestBlockhash,
})
const output = utils.defaultAbiCoder.encode(['bytes32'], [hash])
process.stdout.write(output)
break
}
case 'getProveWithdrawalTransactionInputs': {
const nonce = BigNumber.from(args[1])
const sender = args[2]
const target = args[3]
const value = BigNumber.from(args[4])
const gas = BigNumber.from(args[5])
const data = args[6]
// Compute the withdrawalHash
const withdrawalHash = hashWithdrawal(
nonce,
sender,
target,
value,
gas,
data
)
// Compute the storage slot the withdrawalHash will be stored in
const slot = utils.defaultAbiCoder.encode(
['bytes32', 'bytes32'],
[withdrawalHash, utils.hexZeroPad('0x', 32)]
)
const key = keccak256(slot)
// Create the account storage trie
const storage = new SecureTrie()
// Put a bool "true" into storage
await storage.put(toBuffer(key), toBuffer('0x01'))
// Put the storage root into the L2ToL1MessagePasser storage
const address = Address.fromString(predeploys.L2ToL1MessagePasser)
const account = Account.fromAccountData({
nonce: 0,
balance: 0,
stateRoot: storage.root,
})
const world = new SecureTrie()
await world.put(address.toBuffer(), account.serialize())
const proof = await SecureTrie.createProof(storage, toBuffer(key))
const outputRoot = hashOutputRootProof({
version: constants.HashZero,
stateRoot: bufferToHex(world.root),
messagePasserStorageRoot: bufferToHex(storage.root),
latestBlockhash: constants.HashZero,
})
const output = utils.defaultAbiCoder.encode(
['bytes32', 'bytes32', 'bytes32', 'bytes32', 'bytes[]'],
[world.root, storage.root, outputRoot, withdrawalHash, proof]
)
process.stdout.write(output)
break
}
}
})().catch((err: Error) => {
console.error(err)
process.stdout.write('')
})
package main
import (
"bytes"
"fmt"
"math/big"
"os"
"github.com/ethereum-optimism/optimism/op-bindings/predeploys"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/trie"
)
// ABI types
var (
// Plain dynamic dynBytes type
dynBytes, _ = abi.NewType("bytes", "", nil)
bytesArgs = abi.Arguments{
{Type: dynBytes},
}
// Plain fixed bytes32 type
fixedBytes, _ = abi.NewType("bytes32", "", nil)
fixedBytesArgs = abi.Arguments{
{Type: fixedBytes},
}
// Decoded nonce tuple (nonce, version)
decodedNonce, _ = abi.NewType("tuple", "DecodedNonce", []abi.ArgumentMarshaling{
{Name: "nonce", Type: "uint256"},
{Name: "version", Type: "uint256"},
})
decodedNonceArgs = abi.Arguments{
{Name: "encodedNonce", Type: decodedNonce},
}
// WithdrawalHash slot tuple (bytes32, bytes32)
withdrawalSlot, _ = abi.NewType("tuple", "SlotHash", []abi.ArgumentMarshaling{
{Name: "withdrawalHash", Type: "bytes32"},
{Name: "zeroPadding", Type: "bytes32"},
})
withdrawalSlotArgs = abi.Arguments{
{Name: "slotHash", Type: withdrawalSlot},
}
// Prove withdrawal inputs tuple (bytes32, bytes32, bytes32, bytes32, bytes[])
proveWithdrawalInputs, _ = abi.NewType("tuple", "ProveWithdrawalInputs", []abi.ArgumentMarshaling{
{Name: "worldRoot", Type: "bytes32"},
{Name: "stateRoot", Type: "bytes32"},
{Name: "outputRoot", Type: "bytes32"},
{Name: "withdrawalHash", Type: "bytes32"},
{Name: "proof", Type: "bytes[]"},
})
proveWithdrawalInputsArgs = abi.Arguments{
{Name: "inputs", Type: proveWithdrawalInputs},
}
)
func main() {
args := os.Args[1:]
// This command requires arguments
if len(args) == 0 {
panic("Error: No arguments provided")
}
switch args[0] {
case "decodeVersionedNonce":
// Parse input arguments
input, ok := new(big.Int).SetString(args[1], 10)
checkOk(ok)
// Decode versioned nonce
nonce, version := crossdomain.DecodeVersionedNonce(input)
// ABI encode output
packArgs := struct {
Nonce *big.Int
Version *big.Int
}{
nonce,
version,
}
packed, err := decodedNonceArgs.Pack(&packArgs)
checkErr(err, "Error encoding output")
fmt.Print(hexutil.Encode(packed))
case "encodeCrossDomainMessage":
// Parse input arguments
nonce, ok := new(big.Int).SetString(args[1], 10)
checkOk(ok)
sender := common.HexToAddress(args[2])
target := common.HexToAddress(args[3])
value, ok := new(big.Int).SetString(args[4], 10)
checkOk(ok)
gasLimit, ok := new(big.Int).SetString(args[5], 10)
checkOk(ok)
data := common.FromHex(args[6])
// Encode cross domain message
encoded, err := encodeCrossDomainMessage(nonce, sender, target, value, gasLimit, data)
checkErr(err, "Error encoding cross domain message")
// Pack encoded cross domain message
packed, err := bytesArgs.Pack(&encoded)
checkErr(err, "Error encoding output")
fmt.Print(hexutil.Encode(packed))
case "hashCrossDomainMessage":
// Parse input arguments
nonce, ok := new(big.Int).SetString(args[1], 10)
checkOk(ok)
sender := common.HexToAddress(args[2])
target := common.HexToAddress(args[3])
value, ok := new(big.Int).SetString(args[4], 10)
checkOk(ok)
gasLimit, ok := new(big.Int).SetString(args[5], 10)
checkOk(ok)
data := common.FromHex(args[6])
// Encode cross domain message
encoded, err := encodeCrossDomainMessage(nonce, sender, target, value, gasLimit, data)
checkErr(err, "Error encoding cross domain message")
// Hash encoded cross domain message
hash := crypto.Keccak256Hash(encoded)
// Pack hash
packed, err := fixedBytesArgs.Pack(&hash)
checkErr(err, "Error encoding output")
fmt.Print(hexutil.Encode(packed))
case "hashDepositTransaction":
// Parse input arguments
l1BlockHash := common.HexToHash(args[1])
logIndex, ok := new(big.Int).SetString(args[2], 10)
checkOk(ok)
from := common.HexToAddress(args[3])
to := common.HexToAddress(args[4])
mint, ok := new(big.Int).SetString(args[5], 10)
checkOk(ok)
value, ok := new(big.Int).SetString(args[6], 10)
checkOk(ok)
gasLimit, ok := new(big.Int).SetString(args[7], 10)
checkOk(ok)
data := common.FromHex(args[8])
// Create deposit transaction
depositTx := makeDepositTx(from, to, value, mint, gasLimit, false, data, l1BlockHash, logIndex)
// RLP encode deposit transaction
encoded, err := types.NewTx(&depositTx).MarshalBinary()
checkErr(err, "Error encoding deposit transaction")
// Hash encoded deposit transaction
hash := crypto.Keccak256Hash(encoded)
// Pack hash
packed, err := fixedBytesArgs.Pack(&hash)
checkErr(err, "Error encoding output")
fmt.Print(hexutil.Encode(packed))
case "encodeDepositTransaction":
// Parse input arguments
from := common.HexToAddress(args[1])
to := common.HexToAddress(args[2])
value, ok := new(big.Int).SetString(args[3], 10)
checkOk(ok)
mint, ok := new(big.Int).SetString(args[4], 10)
checkOk(ok)
gasLimit, ok := new(big.Int).SetString(args[5], 10)
checkOk(ok)
isCreate := args[6] == "true"
data := common.FromHex(args[7])
l1BlockHash := common.HexToHash(args[8])
logIndex, ok := new(big.Int).SetString(args[9], 10)
checkOk(ok)
depositTx := makeDepositTx(from, to, value, mint, gasLimit, isCreate, data, l1BlockHash, logIndex)
// RLP encode deposit transaction
encoded, err := types.NewTx(&depositTx).MarshalBinary()
checkErr(err, "Failed to RLP encode deposit transaction")
// Pack rlp encoded deposit transaction
packed, err := bytesArgs.Pack(&encoded)
checkErr(err, "Error encoding output")
fmt.Print(hexutil.Encode(packed))
case "hashWithdrawal":
// Parse input arguments
nonce, ok := new(big.Int).SetString(args[1], 10)
checkOk(ok)
sender := common.HexToAddress(args[2])
target := common.HexToAddress(args[3])
value, ok := new(big.Int).SetString(args[4], 10)
checkOk(ok)
gasLimit, ok := new(big.Int).SetString(args[5], 10)
checkOk(ok)
data := common.FromHex(args[6])
// Hash withdrawal
hash, err := hashWithdrawal(nonce, sender, target, value, gasLimit, data)
checkErr(err, "Error hashing withdrawal")
// Pack hash
packed, err := fixedBytesArgs.Pack(&hash)
checkErr(err, "Error encoding output")
fmt.Print(hexutil.Encode(packed))
case "hashOutputRootProof":
// Parse input arguments
version := common.HexToHash(args[1])
stateRoot := common.HexToHash(args[2])
messagePasserStorageRoot := common.HexToHash(args[3])
latestBlockHash := common.HexToHash(args[4])
// Hash the output root proof
hash, err := hashOutputRootProof(version, stateRoot, messagePasserStorageRoot, latestBlockHash)
checkErr(err, "Error hashing output root proof")
// Pack hash
packed, err := fixedBytesArgs.Pack(&hash)
checkErr(err, "Error encoding output")
fmt.Print(hexutil.Encode(packed))
case "getProveWithdrawalTransactionInputs":
// Parse input arguments
nonce, ok := new(big.Int).SetString(args[1], 10)
checkOk(ok)
sender := common.HexToAddress(args[2])
target := common.HexToAddress(args[3])
value, ok := new(big.Int).SetString(args[4], 10)
checkOk(ok)
gasLimit, ok := new(big.Int).SetString(args[5], 10)
checkOk(ok)
data := common.FromHex(args[6])
wdHash, err := hashWithdrawal(nonce, sender, target, value, gasLimit, data)
checkErr(err, "Error hashing withdrawal")
// Compute the storage slot the withdrawalHash will be stored in
slot := struct {
WithdrawalHash common.Hash
ZeroPadding common.Hash
}{
WithdrawalHash: wdHash,
ZeroPadding: common.Hash{},
}
packed, err := withdrawalSlotArgs.Pack(&slot)
checkErr(err, "Error packing withdrawal slot")
		// Hash the encoded (withdrawalHash, slot) pair to get the actual storage slot
hash := crypto.Keccak256Hash(packed)
// Create a secure trie for state
state, err := trie.NewStateTrie(
trie.TrieID(types.EmptyRootHash),
trie.NewDatabase(rawdb.NewMemoryDatabase()),
)
checkErr(err, "Error creating secure trie")
// Put a "true" bool in the storage slot
state.Update(hash.Bytes(), []byte{0x01})
// Create a secure trie for the world state
world, err := trie.NewStateTrie(
trie.TrieID(types.EmptyRootHash),
trie.NewDatabase(rawdb.NewMemoryDatabase()),
)
checkErr(err, "Error creating secure trie")
		// Put the RLP-encoded account into the world trie
account := types.StateAccount{
Nonce: 0,
Balance: big.NewInt(0),
Root: state.Hash(),
}
writer := new(bytes.Buffer)
checkErr(account.EncodeRLP(writer), "Error encoding account")
world.Update(predeploys.L2ToL1MessagePasserAddr.Bytes(), writer.Bytes())
// Get the proof
var proof proofList
checkErr(state.Prove(predeploys.L2ToL1MessagePasserAddr.Bytes(), 0, &proof), "Error getting proof")
// Get the output root
outputRoot, err := hashOutputRootProof(common.Hash{}, world.Hash(), state.Hash(), common.Hash{})
checkErr(err, "Error hashing output root proof")
// Pack the output
output := struct {
WorldRoot common.Hash
StateRoot common.Hash
OutputRoot common.Hash
WithdrawalHash common.Hash
Proof proofList
}{
WorldRoot: world.Hash(),
StateRoot: state.Hash(),
OutputRoot: outputRoot,
WithdrawalHash: wdHash,
Proof: proof,
}
packed, err = proveWithdrawalInputsArgs.Pack(&output)
checkErr(err, "Error encoding output")
// Print the output
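		// The first 32 bytes of the packed data are the ABI head (the offset word for the
		// dynamic tuple); they are stripped so the output begins directly at the tuple fields.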
fmt.Print(hexutil.Encode(packed[32:]))
default:
panic(fmt.Errorf("Unknown command: %s", args[0]))
}
}
package main
import (
"errors"
"fmt"
"math/big"
"github.com/ethereum-optimism/optimism/op-bindings/bindings"
"github.com/ethereum-optimism/optimism/op-chain-ops/crossdomain"
"github.com/ethereum-optimism/optimism/op-node/rollup"
"github.com/ethereum-optimism/optimism/op-node/rollup/derive"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
)
var UnknownNonceVersion = errors.New("Unknown nonce version")
// checkOk checks if ok is false, and panics if so.
// Shorthand to ease go's god awful error handling
func checkOk(ok bool) {
if !ok {
panic(fmt.Errorf("checkOk failed"))
}
}
// checkErr checks if err is not nil, and panics if so.
// Shorthand to ease go's god awful error handling
func checkErr(err error, failReason string) {
if err != nil {
panic(fmt.Errorf("%s: %w", failReason, err))
}
}
// encodeCrossDomainMessage encodes a versioned cross domain message into a byte array.
func encodeCrossDomainMessage(nonce *big.Int, sender common.Address, target common.Address, value *big.Int, gasLimit *big.Int, data []byte) ([]byte, error) {
_, version := crossdomain.DecodeVersionedNonce(nonce)
var encoded []byte
var err error
if version.Cmp(big.NewInt(0)) == 0 {
// Encode cross domain message V0
encoded, err = crossdomain.EncodeCrossDomainMessageV0(target, sender, data, nonce)
} else if version.Cmp(big.NewInt(1)) == 0 {
// Encode cross domain message V1
encoded, err = crossdomain.EncodeCrossDomainMessageV1(nonce, sender, target, value, gasLimit, data)
} else {
return nil, UnknownNonceVersion
}
return encoded, err
}
// hashWithdrawal hashes a withdrawal transaction.
func hashWithdrawal(nonce *big.Int, sender common.Address, target common.Address, value *big.Int, gasLimit *big.Int, data []byte) (common.Hash, error) {
wd := crossdomain.Withdrawal{
Nonce: nonce,
Sender: &sender,
Target: &target,
Value: value,
GasLimit: gasLimit,
Data: data,
}
return wd.Hash()
}
// hashOutputRootProof hashes an output root proof.
func hashOutputRootProof(version common.Hash, stateRoot common.Hash, messagePasserStorageRoot common.Hash, latestBlockHash common.Hash) (common.Hash, error) {
hash, err := rollup.ComputeL2OutputRoot(&bindings.TypesOutputRootProof{
Version: version,
StateRoot: stateRoot,
MessagePasserStorageRoot: messagePasserStorageRoot,
LatestBlockhash: latestBlockHash,
})
if err != nil {
return common.Hash{}, err
}
return common.Hash(hash), nil
}
// makeDepositTx creates a deposit transaction type.
func makeDepositTx(
from common.Address,
to common.Address,
value *big.Int,
mint *big.Int,
gasLimit *big.Int,
isCreate bool,
data []byte,
l1BlockHash common.Hash,
logIndex *big.Int,
) types.DepositTx {
// Create deposit transaction source
udp := derive.UserDepositSource{
L1BlockHash: l1BlockHash,
LogIndex: logIndex.Uint64(),
}
// Create deposit transaction
depositTx := types.DepositTx{
SourceHash: udp.SourceHash(),
From: from,
Value: value,
Gas: gasLimit.Uint64(),
IsSystemTransaction: false, // This will never be a system transaction in the tests.
Data: data,
}
// Fill optional fields
if mint.Cmp(big.NewInt(0)) == 1 {
depositTx.Mint = mint
}
if !isCreate {
depositTx.To = &to
}
return depositTx
}
// Custom type to write the generated proof to
type proofList [][]byte
func (n *proofList) Put(key []byte, value []byte) error {
*n = append(*n, value)
return nil
}
func (n *proofList) Delete(key []byte) error {
panic("not supported")
}
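As a companion to hashOutputRootProof above: to my understanding, the Bedrock output root is the keccak256 of the ABI-encoded (version, stateRoot, messagePasserStorageRoot, latestBlockhash) tuple. Below is a minimal TypeScript sketch under that assumption, with placeholder hashes.
```ts
import { ethers } from 'ethers'

// Placeholder 32-byte values; in practice these come from the L2 block header and
// the L2ToL1MessagePasser storage root, as in the Go helper above.
const version = ethers.constants.HashZero
const stateRoot = ethers.utils.keccak256('0x01')
const messagePasserStorageRoot = ethers.utils.keccak256('0x02')
const latestBlockhash = ethers.utils.keccak256('0x03')

// Assumed encoding: keccak256(abi.encode(version, stateRoot, messagePasserStorageRoot, latestBlockhash))
const outputRoot = ethers.utils.keccak256(
  ethers.utils.defaultAbiCoder.encode(
    ['bytes32', 'bytes32', 'bytes32', 'bytes32'],
    [version, stateRoot, messagePasserStorageRoot, latestBlockhash]
  )
)

console.log('output root:', outputRoot)
```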
...@@ -14,6 +14,12 @@ interface RequiredDeployConfig { ...@@ -14,6 +14,12 @@ interface RequiredDeployConfig {
*/ */
finalSystemOwner?: string finalSystemOwner?: string
/**
* Address that is deployed as the GUARDIAN in the OptimismPortal. Has the
* ability to pause withdrawals.
*/
portalGuardian: string
/** /**
* Address that will own the entire system on L1 during the deployment process. This address will * Address that will own the entire system on L1 during the deployment process. This address will
* not own the system after the deployment is complete, ownership will be transferred to the * not own the system after the deployment is complete, ownership will be transferred to the
...@@ -181,6 +187,9 @@ export const deployConfigSpec: { ...@@ -181,6 +187,9 @@ export const deployConfigSpec: {
finalSystemOwner: { finalSystemOwner: {
type: 'address', type: 'address',
}, },
portalGuardian: {
type: 'address',
},
controller: { controller: {
type: 'address', type: 'address',
}, },
......
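For illustration, a hedged sketch of how the new field might appear in a deploy config once `portalGuardian` is required; the addresses below are placeholders, not values from this change.
```ts
// Hypothetical deploy-config fragment (all addresses are placeholders).
export const exampleDeployConfig = {
  finalSystemOwner: '0x0000000000000000000000000000000000000111',
  // New in this change: the OptimismPortal GUARDIAN, which can pause withdrawals.
  portalGuardian: '0x0000000000000000000000000000000000000222',
  controller: '0x0000000000000000000000000000000000000333',
}
```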
...@@ -395,3 +395,15 @@ export const getTenderlySimulationLink = async ( ...@@ -395,3 +395,15 @@ export const getTenderlySimulationLink = async (
}).toString()}` }).toString()}`
} }
} }
/**
* Returns a cast command for submitting a given transaction.
*
* @param tx Ethers transaction object.
* @returns the cast command, or undefined if the CAST_COMMANDS env var is not set.
*/
export const getCastCommand = (tx: ethers.PopulatedTransaction): string | undefined => {
if (process.env.CAST_COMMANDS) {
return `cast send ${tx.to} ${tx.data} --from ${tx.from} --value ${tx.value}`
}
}
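A possible way to use the helper above from a deploy script, assuming the CAST_COMMANDS environment variable is set; the import path and transaction fields are placeholders for illustration.
```ts
import { ethers } from 'ethers'
import { getCastCommand } from './solidity' // hypothetical import path

const tx: ethers.PopulatedTransaction = {
  to: '0x0000000000000000000000000000000000000aaa',
  from: '0x0000000000000000000000000000000000000bbb',
  data: '0x',
  value: ethers.BigNumber.from(0),
}

// getCastCommand only returns a command when CAST_COMMANDS is set in the environment.
const cmd = getCastCommand(tx)
if (cmd) {
  console.log(cmd)
}
```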
...@@ -53,7 +53,7 @@ ...@@ -53,7 +53,7 @@
"url": "https://github.com/ethereum-optimism/optimism.git" "url": "https://github.com/ethereum-optimism/optimism.git"
}, },
"devDependencies": { "devDependencies": {
"@eth-optimism/contracts-bedrock": "0.13.0", "@eth-optimism/contracts-bedrock": "0.13.1",
"@eth-optimism/core-utils": "^0.12.0", "@eth-optimism/core-utils": "^0.12.0",
"@eth-optimism/hardhat-deploy-config": "^0.2.5", "@eth-optimism/hardhat-deploy-config": "^0.2.5",
"@ethersproject/hardware-wallets": "^5.7.0", "@ethersproject/hardware-wallets": "^5.7.0",
......
# data transport layer # data transport layer
## 0.5.54
### Patch Changes
- Updated dependencies [fecd42d67]
- @eth-optimism/common-ts@0.8.1
## 0.5.53 ## 0.5.53
### Patch Changes ### Patch Changes
......
{ {
"private": true, "private": true,
"name": "@eth-optimism/data-transport-layer", "name": "@eth-optimism/data-transport-layer",
"version": "0.5.53", "version": "0.5.54",
"description": "[Optimism] Service for shuttling data from L1 into L2", "description": "[Optimism] Service for shuttling data from L1 into L2",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -36,7 +36,7 @@ ...@@ -36,7 +36,7 @@
"url": "https://github.com/ethereum-optimism/optimism.git" "url": "https://github.com/ethereum-optimism/optimism.git"
}, },
"dependencies": { "dependencies": {
"@eth-optimism/common-ts": "0.8.0", "@eth-optimism/common-ts": "0.8.1",
"@eth-optimism/contracts": "0.5.40", "@eth-optimism/contracts": "0.5.40",
"@eth-optimism/core-utils": "0.12.0", "@eth-optimism/core-utils": "0.12.0",
"@ethersproject/providers": "^5.7.0", "@ethersproject/providers": "^5.7.0",
......
# @eth-optimism/fault-detector # @eth-optimism/fault-detector
## 0.6.2
### Patch Changes
- f9b579d55: Fixes a bug that would cause the fault detector to error out if no outputs had been proposed yet.
- Updated dependencies [fecd42d67]
- Updated dependencies [66cafc00a]
- @eth-optimism/common-ts@0.8.1
- @eth-optimism/sdk@2.0.1
## 0.6.1 ## 0.6.1
### Patch Changes ### Patch Changes
......
{ {
"private": true, "private": true,
"name": "@eth-optimism/fault-detector", "name": "@eth-optimism/fault-detector",
"version": "0.6.1", "version": "0.6.2",
"description": "[Optimism] Service for detecting faulty L2 output proposals", "description": "[Optimism] Service for detecting faulty L2 output proposals",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -47,10 +47,10 @@ ...@@ -47,10 +47,10 @@
"ts-node": "^10.9.1" "ts-node": "^10.9.1"
}, },
"dependencies": { "dependencies": {
"@eth-optimism/common-ts": "^0.8.0", "@eth-optimism/common-ts": "^0.8.1",
"@eth-optimism/contracts": "^0.5.40", "@eth-optimism/contracts": "^0.5.40",
"@eth-optimism/core-utils": "^0.12.0", "@eth-optimism/core-utils": "^0.12.0",
"@eth-optimism/sdk": "^2.0.0", "@eth-optimism/sdk": "^2.0.1",
"@ethersproject/abstract-provider": "^5.7.0" "@ethersproject/abstract-provider": "^5.7.0"
} }
} }
# @eth-optimism/message-relayer # @eth-optimism/message-relayer
## 0.5.32
### Patch Changes
- Updated dependencies [fecd42d67]
- Updated dependencies [66cafc00a]
- @eth-optimism/common-ts@0.8.1
- @eth-optimism/sdk@2.0.1
## 0.5.31 ## 0.5.31
### Patch Changes ### Patch Changes
......
{ {
"private": true, "private": true,
"name": "@eth-optimism/message-relayer", "name": "@eth-optimism/message-relayer",
"version": "0.5.31", "version": "0.5.32",
"description": "[Optimism] Service for automatically relaying L2 to L1 transactions", "description": "[Optimism] Service for automatically relaying L2 to L1 transactions",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -31,9 +31,9 @@ ...@@ -31,9 +31,9 @@
"url": "https://github.com/ethereum-optimism/optimism.git" "url": "https://github.com/ethereum-optimism/optimism.git"
}, },
"dependencies": { "dependencies": {
"@eth-optimism/common-ts": "0.8.0", "@eth-optimism/common-ts": "0.8.1",
"@eth-optimism/core-utils": "0.12.0", "@eth-optimism/core-utils": "0.12.0",
"@eth-optimism/sdk": "2.0.0", "@eth-optimism/sdk": "2.0.1",
"ethers": "^5.7.0" "ethers": "^5.7.0"
}, },
"devDependencies": { "devDependencies": {
......
# @eth-optimism/replica-healthcheck # @eth-optimism/replica-healthcheck
## 1.2.3
### Patch Changes
- Updated dependencies [fecd42d67]
- @eth-optimism/common-ts@0.8.1
## 1.2.2 ## 1.2.2
### Patch Changes ### Patch Changes
......
{ {
"private": true, "private": true,
"name": "@eth-optimism/replica-healthcheck", "name": "@eth-optimism/replica-healthcheck",
"version": "1.2.2", "version": "1.2.3",
"description": "[Optimism] Service for monitoring the health of replica nodes", "description": "[Optimism] Service for monitoring the health of replica nodes",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -32,7 +32,7 @@ ...@@ -32,7 +32,7 @@
"url": "https://github.com/ethereum-optimism/optimism.git" "url": "https://github.com/ethereum-optimism/optimism.git"
}, },
"dependencies": { "dependencies": {
"@eth-optimism/common-ts": "0.8.0", "@eth-optimism/common-ts": "0.8.1",
"@eth-optimism/core-utils": "0.12.0", "@eth-optimism/core-utils": "0.12.0",
"@ethersproject/abstract-provider": "^5.7.0" "@ethersproject/abstract-provider": "^5.7.0"
}, },
......
# @eth-optimism/sdk # @eth-optimism/sdk
## 2.0.1
### Patch Changes
- 66cafc00a: Update migrated withdrawal gaslimit calculation
- Updated dependencies [22c3885f5]
- Updated dependencies [f52c07529]
- @eth-optimism/contracts-bedrock@0.13.1
## 2.0.0 ## 2.0.0
### Major Changes ### Major Changes
......
{ {
"name": "@eth-optimism/sdk", "name": "@eth-optimism/sdk",
"version": "2.0.0", "version": "2.0.1",
"description": "[Optimism] Tools for working with Optimism", "description": "[Optimism] Tools for working with Optimism",
"main": "dist/index", "main": "dist/index",
"types": "dist/index", "types": "dist/index",
...@@ -50,7 +50,7 @@ ...@@ -50,7 +50,7 @@
"dependencies": { "dependencies": {
"@eth-optimism/contracts": "0.5.40", "@eth-optimism/contracts": "0.5.40",
"@eth-optimism/core-utils": "0.12.0", "@eth-optimism/core-utils": "0.12.0",
"@eth-optimism/contracts-bedrock": "0.13.0", "@eth-optimism/contracts-bedrock": "0.13.1",
"lodash": "^4.17.21", "lodash": "^4.17.21",
"merkletreejs": "^0.2.27", "merkletreejs": "^0.2.27",
"rlp": "^2.2.7" "rlp": "^2.2.7"
......
ignores: [
"@babel/eslint-parser",
"@typescript-eslint/parser",
"eslint-plugin-import",
"eslint-plugin-unicorn",
"eslint-plugin-jsdoc",
"eslint-plugin-prefer-arrow",
"eslint-plugin-react",
"@typescript-eslint/eslint-plugin",
"eslint-config-prettier",
"eslint-plugin-prettier",
"chai"
]
# URL for an L1 RPC provider, used to query L2 output proposals
TWO_STEP_MONITOR__L1_RPC_PROVIDER=
# URL for an L2 RPC provider, used to query canonical L2 state
TWO_STEP_MONITOR__L2_RPC_PROVIDER=
TWO_STEP_MONITOR__HOSTNAME=
TWO_STEP_MONITOR__PORT=
TWO_STEP_MONITOR__START_BATCH_INDEX=
TWO_STEP_MONITOR__LOOP_INTERVAL_MS=
module.exports = {
extends: '../../.eslintrc.js',
}
module.exports = {
...require('../../.prettierrc.js'),
};
(The MIT License)
Copyright 2020-2021 Optimism
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# @eth-optimism/two-step-monitor
[![codecov](https://codecov.io/gh/ethereum-optimism/optimism/branch/develop/graph/badge.svg?token=0VTG7PG7YR&flag=two-step-monitor-tests)](https://codecov.io/gh/ethereum-optimism/optimism)
The `two-step-monitor` is a simple service for detecting discrepancies between withdrawals created on L2 and
withdrawals proven on L1.
## Installation
Clone, install, and build the Optimism monorepo:
```
git clone https://github.com/ethereum-optimism/optimism.git
yarn install
yarn build
```
## Running the service
Copy `.env.example` into a new file named `.env`, then set the environment variables listed there.
Once your environment variables have been set, run the service via:
```
yarn start
```
## Ports
- API is exposed at `$TWO_STEP_MONITOR__HOSTNAME:$TWO_STEP_MONITOR__PORT/api`
- Metrics are exposed at `$TWO_STEP_MONITOR__HOSTNAME:$TWO_STEP_MONITOR__PORT/metrics`
- `$TWO_STEP_MONITOR__HOSTNAME` defaults to `0.0.0.0`
- `$TWO_STEP_MONITOR__PORT` defaults to `7300`
## What this service does
The `two-step-monitor` detects when a withdrawal is proven on L1 and verifies that a corresponding withdrawal
has been created on L2.
We export a series of Prometheus metrics that you can use to trigger alerting when issues are detected.
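Conceptually, the check is something like the sketch below (ethers v5). The event names (`WithdrawalProven` on the OptimismPortal, `MessagePassed` on the L2ToL1MessagePasser), the minimal ABIs, and the addresses are assumptions for illustration only, not the service's actual implementation.
```ts
import { ethers } from 'ethers'

// Assumed, minimal ABIs for illustration only.
const portalAbi = [
  'event WithdrawalProven(bytes32 indexed withdrawalHash, address indexed from, address indexed to)',
]
const passerAbi = [
  'event MessagePassed(uint256 indexed nonce, address indexed sender, address indexed target, uint256 value, uint256 gasLimit, bytes data, bytes32 withdrawalHash)',
]

const l1 = new ethers.providers.JsonRpcProvider(process.env.TWO_STEP_MONITOR__L1_RPC_PROVIDER)
const l2 = new ethers.providers.JsonRpcProvider(process.env.TWO_STEP_MONITOR__L2_RPC_PROVIDER)

// Placeholder addresses; the real service takes these from its configuration.
const portal = new ethers.Contract('0x0000000000000000000000000000000000000001', portalAbi, l1)
const passer = new ethers.Contract('0x0000000000000000000000000000000000000002', passerAbi, l2)

export const checkProvenWithdrawals = async (): Promise<void> => {
  // Every withdrawal proven on L1...
  const proven = await portal.queryFilter(portal.filters.WithdrawalProven())
  // ...should correspond to a withdrawal created on L2.
  const created = await passer.queryFilter(passer.filters.MessagePassed())
  const createdHashes = new Set(created.map((e) => e.args?.withdrawalHash))
  for (const e of proven) {
    if (!createdHashes.has(e.args?.withdrawalHash)) {
      console.error(`withdrawal proven on L1 but not created on L2: ${e.args?.withdrawalHash}`)
    }
  }
}
```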
Check the list of available metrics via `yarn start --help`:
```sh
> yarn start --help
yarn run v1.22.19
$ ts-node ./src/service.ts --help
Usage: service [options]
Options:
--l1rpcprovider Provider for interacting with L1 (env: TWO_STEP_MONITOR__L1_RPC_PROVIDER)
--l2rpcprovider Provider for interacting with L2 (env: TWO_STEP_MONITOR__L2_RPC_PROVIDER)
--port Port for the app server (env: TWO_STEP_MONITOR__PORT)
--hostname Hostname for the app server (env: TWO_STEP_MONITOR__HOSTNAME)
-h, --help display help for command
Metrics:
l1_node_connection_failures Number of times L1 node connection has failed (type: Gauge)
l2_node_connection_failures Number of times L2 node connection has failed (type: Gauge)
metadata Service metadata (type: Gauge)
unhandled_errors Unhandled errors (type: Counter)
Done in 2.19s.
```
import { HardhatUserConfig } from 'hardhat/types'
// Hardhat plugins
import '@nomiclabs/hardhat-ethers'
import '@nomiclabs/hardhat-waffle'
const config: HardhatUserConfig = {
mocha: {
timeout: 50000,
},
}
export default config
{
"private": true,
"name": "@eth-optimism/two-step-monitor",
"version": "0.5.0",
"description": "[Optimism] Service for detecting faulty L2 output proposals",
"main": "dist/index",
"types": "dist/index",
"files": [
"dist/*"
],
"scripts": {
"start": "ts-node ./src/service.ts",
"test": "hardhat test",
"test:coverage": "nyc hardhat test && nyc merge .nyc_output coverage.json",
"build": "tsc -p tsconfig.json",
"clean": "rimraf dist/ ./tsconfig.tsbuildinfo",
"lint": "yarn lint:fix && yarn lint:check",
"pre-commit": "lint-staged",
"lint:fix": "yarn lint:check --fix",
"lint:check": "eslint . --max-warnings=0"
},
"keywords": [
"optimism",
"ethereum",
"fault",
"detector"
],
"homepage": "https://github.com/ethereum-optimism/optimism/tree/develop/packages/two-step-monitor#readme",
"license": "MIT",
"author": "Optimism PBC",
"repository": {
"type": "git",
"url": "https://github.com/ethereum-optimism/optimism.git"
},
"devDependencies": {
"@nomiclabs/hardhat-ethers": "^2.0.6",
"@nomiclabs/hardhat-waffle": "^2.0.3",
"@types/chai": "^4.3.1",
"chai-as-promised": "^7.1.1",
"ethers": "^5.7.0",
"hardhat": "^2.9.6",
"ts-node": "^10.9.1"
}
}
export const todo = 'implement me'
import chai = require('chai')
import chaiAsPromised from 'chai-as-promised'
// Chai plugins go here.
chai.use(chaiAsPromised)
const should = chai.should()
const expect = chai.expect
export { should, expect }
{
"extends": "../../tsconfig.json",
"compilerOptions": {
"outDir": "./dist"
},
"include": [
"package.json",
"src/**/*"
]
}
diff --git a/node_modules/@changesets/cli/dist/cli.cjs.dev.js b/node_modules/@changesets/cli/dist/cli.cjs.dev.js
index b158219..6fdfb6e 100644
--- a/node_modules/@changesets/cli/dist/cli.cjs.dev.js
+++ b/node_modules/@changesets/cli/dist/cli.cjs.dev.js
@@ -937,7 +937,7 @@ async function publishPackages({
}) {
const packagesByName = new Map(packages.map(x => [x.packageJson.name, x]));
const publicPackages = packages.filter(pkg => !pkg.packageJson.private);
- const unpublishedPackagesInfo = await getUnpublishedPackages(publicPackages, preState);
+ const unpublishedPackagesInfo = await getUnpublishedPackages(packages, preState);
if (unpublishedPackagesInfo.length === 0) {
return [];
@@ -957,20 +957,27 @@ async function publishAPackage(pkg, access, twoFactorState, tag) {
const {
name,
version,
- publishConfig
+ publishConfig,
+ private: isPrivate
} = pkg.packageJson;
const localAccess = publishConfig === null || publishConfig === void 0 ? void 0 : publishConfig.access;
- logger.info(`Publishing ${chalk__default['default'].cyan(`"${name}"`)} at ${chalk__default['default'].green(`"${version}"`)}`);
- const publishDir = publishConfig !== null && publishConfig !== void 0 && publishConfig.directory ? path.join(pkg.dir, publishConfig.directory) : pkg.dir;
- const publishConfirmation = await publish(name, {
- cwd: publishDir,
- access: localAccess || access,
- tag
- }, twoFactorState);
+ let published;
+ if (!isPrivate) {
+ logger.info(`Publishing ${chalk__default['default'].cyan(`"${name}"`)} at ${chalk__default['default'].green(`"${version}"`)}`);
+ const publishDir = publishConfig !== null && publishConfig !== void 0 && publishConfig.directory ? path.join(pkg.dir, publishConfig.directory) : pkg.dir;
+ const publishConfirmation = await publish(name, {
+ cwd: publishDir,
+ access: localAccess || access,
+ tag
+ }, twoFactorState);
+ published = publishConfirmation.published;
+ } else {
+ published = true;
+ }
return {
name,
newVersion: version,
- published: publishConfirmation.published
+ published
};
}
@@ -1140,8 +1147,13 @@ async function tagPublish(tool, packageReleases, cwd) {
if (tool !== "root") {
for (const pkg of packageReleases) {
const tag = `${pkg.name}@${pkg.newVersion}`;
- logger.log("New tag: ", tag);
- await git.tag(tag, cwd);
+ const allTags = await git.getAllTags(cwd);
+ if (allTags.has(tag)) {
+ logger.log("Skipping existing tag: ", tag);
+ } else {
+ logger.log("New tag: ", tag);
+ await git.tag(tag, cwd);
+ }
}
} else {
const tag = `v${packageReleases[0].newVersion}`;
[SVG diff: diagrams.net architecture diagram. The old and new renderings are identical except that the L1 contract box previously labeled "DepositFeed" is renamed "OptimismPortal". Other labeled elements: L2OutputOracle, BatchInbox, Rollup Node, Batch Submitter, Output Submitter, L2 Execution Engine (L2 Geth), a Legend (Sequencer, Verifier), and L1/L2 region markers.]
[SVG diff (truncated): second diagrams.net diagram showing the Rollup Node and L2 Execution Engine (EE) connected via the Engine API, alongside Batch Submitter, Output Submitter, BatchInbox, DepositFeed, and L1/L2 region markers.]
width="60" height="60" fill="#dae8fc" stroke="#6c8ebf" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 58px; height: 1px; padding-top: 250px; margin-left: 41px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">EE</div></div></div></foreignObject><text x="70" y="254" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">EE</text></switch></g><path d="M 100 55 L 55 55 L 55 93.63" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 55 98.88 L 51.5 91.88 L 55 93.63 L 58.5 91.88 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 213.63 130 L 106.37 130" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 218.88 130 L 211.88 133.5 L 213.63 130 L 211.88 126.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 101.12 130 L 108.12 126.5 L 106.37 130 L 108.12 133.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 1px; height: 1px; padding-top: 130px; margin-left: 160px;"><div data-drawio-colors="color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 11px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; background-color: rgb(255, 255, 255); white-space: nowrap;">Unsafe<br />Block<br />Propagation</div></div></div></foreignObject><text x="160" y="133" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="11px" text-anchor="middle">Unsafe...</text></switch></g><path d="M 213.63 265 L 106.37 265" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 218.88 265 L 211.88 268.5 L 213.63 265 L 211.88 261.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 101.12 265 L 108.12 261.5 L 106.37 265 L 108.12 268.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 1px; height: 1px; padding-top: 265px; margin-left: 160px;"><div data-drawio-colors="color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); " style="box-sizing: 
border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 11px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; background-color: rgb(255, 255, 255); white-space: nowrap;">TX Sync</div></div></div></foreignObject><text x="160" y="268" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="11px" text-anchor="middle">TX Sync</text></switch></g><path d="M 213.63 235 L 106.37 235" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 218.88 235 L 211.88 238.5 L 213.63 235 L 211.88 231.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 101.12 235 L 108.12 231.5 L 106.37 235 L 108.12 238.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 1px; height: 1px; padding-top: 235px; margin-left: 160px;"><div data-drawio-colors="color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 11px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; background-color: rgb(255, 255, 255); white-space: nowrap;">State Sync</div></div></div></foreignObject><text x="160" y="238" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="11px" text-anchor="middle">State Sync</text></switch></g><rect x="330" y="190" width="80" height="30" fill="rgb(255, 255, 255)" stroke="rgb(0, 0, 0)" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 78px; height: 1px; padding-top: 205px; margin-left: 331px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">Legend</div></div></div></foreignObject><text x="370" y="209" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Legend</text></switch></g><rect x="330" y="220" width="80" height="30" fill="#f8cecc" stroke="#b85450" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 78px; height: 1px; padding-top: 235px; margin-left: 331px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: 
normal; overflow-wrap: normal;">Sequencer</div></div></div></foreignObject><text x="370" y="239" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Sequencer</text></switch></g><rect x="330" y="250" width="80" height="30" fill="#dae8fc" stroke="#6c8ebf" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 78px; height: 1px; padding-top: 265px; margin-left: 331px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">Verifier</div></div></div></foreignObject><text x="370" y="269" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Verifier</text></switch></g></g><switch><g requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"/><a transform="translate(0,-5)" xlink:href="https://www.diagrams.net/doc/faq/svg-export-text-problems" target="_blank"><text text-anchor="middle" font-size="10px" x="50%" y="100%">Text is not SVG - cannot display</text></a></switch></svg> <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="481px" height="291px" viewBox="-0.5 -0.5 481 291" content="&lt;mxfile host=&quot;Electron&quot; modified=&quot;2023-03-20T08:58:47.205Z&quot; agent=&quot;5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/20.8.16 Chrome/106.0.5249.199 Electron/21.4.0 Safari/537.36&quot; etag=&quot;q6rv6FZaU_LAJZ8DLjMH&quot; version=&quot;20.8.16&quot; type=&quot;device&quot;&gt;&lt;diagram id=&quot;z_UtGdSUZwvYQkXjM40-&quot; 
name=&quot;Page-1&quot;&gt;5Vpdl5s2EP01fmyO+TR+jHedNOds2z11P7aPMghQIiMqxNrur6+wJQOWnMWLFpzkyehKgHRHmrkzeOLcbXYfKcjTX0gE8cSeRruJcz+xbcuyXP5TIfsjEswFkFAUiUE1sEL/QQFOBVqiCBatgYwQzFDeBkOSZTBkLQxQSrbtYTHB7bfmIIEKsAoBVtG/UcRSsQpvWuM/Q5Sk8s3WVPRsgBwsgCIFEdk2IGc5ce4oIex4tdndQVyRJ3k53vfhQu9pYhRmrMsN9vGGZ4BLsTYxL7aXi6WkzCJYjZ9OnMU2RQyuchBWvVtuXo6lbIN5y+KXMcL4jmBCD/c6cRzbYcjxglHyBTZ6In/tez7vUWcsFvEMKYO7BiRW8BGSDWR0z4eIXkeQKXaTK9ne1rZxA4GlDbtIDIjtkJyeXDPGLwRpegKdtyUw8mAQuToCA3vt+IYItOZnDAYqg46tYfB0IvtQ6CoULrMEZZBj7x8/KXTCiJ9F0SSUpSQhGcDLGl20Ca/HPBCSC5o/Q8b2wrGAkpG2ETiTdP9U3f/Ok81/xOMOjftdq7UXrYIByt5XDoYDGcmgxD6gavWHMRfNVZCShmKNM+HUAE2g3KxHqFr9V01KIQYMPbc9VR/zeKNYYIfYU+P6wP67mSeaNf9VY9+TWssZi1v/dri1v85tfSRaB6I+HxeOxKttYo9lk5nijn4nGJf5pHKAPubzW6wpv0qqq1+5tunn8BMKIsTJuUeUSxVEMo5DUDBNMA1CqA+m68BzvWnjYbJP+CADIcL22iHC0wRZXxMhfAMBIlADxPKHJN23hyN9rpD+YPUj3QAh7uxMqHgaoaLTKQYIkVKnyUhPsfcGjPj+gIxYCiOjxDMZmqzrQlMXtSZjpXiajJb9ZYiMb62Y53aMecL2P/GofbyjcxAUz3okiM+tHkLiuOCzOLf96ZXdtoOaTC4AC1MOrcr1BjEGab/zcr1vNnDCHP/lyGe5miPmmDhian75W8nykn1vpDpDkqpmnIeN+ilbk93oHt06L2dYGmp0ybgRasbJ9l6Zb+szGQNJouWr3nnW0TkbT0jkZJo+IGdog4rNIycb4Nvbsu6QW1ZN2Iw6xGFKcOcMaipwlk7qG6nAWWqGNa4b6CHkQgyKAoVnWq5Z95he4S1gFp0pRI5cW82TO7+P1uMTtaRVbkPtzb+lSPFqO8lxw7t9W803b6QQFQEYxFqJ6IcBXMdmPOLsZdn9VrUPW01sh6s4jcLukJUl6cZGjjZXpf9XhYyTi7LN+yidMpW2G8FHqenpn1kBYqg6qAUm4RcVfqQkBwk4nIixN8Fl6dDJNJqvGCNaRs1x/3iqigb7LBwlcGtP0gsf9C5V9maGFGEnswY3JQo8xawrBhi8Oct2/Zx4Zln7x7WsmuU/wIRnG/10hwGpoNTq5qpW0P29xkRyr/kau4L/ljALv4uyp052vRmVapL/F6QoRoaZHEa/KkxqvgQaYpI36z/kHTPl+m+NzvJ/&lt;/diagram&gt;&lt;/mxfile&gt;"><defs/><g><rect x="0" y="0" width="480" height="80" fill="#fff2cc" stroke="#d6b656" pointer-events="all"/><rect x="160" y="80" width="320" height="210" fill="#d5e8d4" stroke="#82b366" pointer-events="all"/><path d="M 250 160 L 250 213.63" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 250 218.88 L 246.5 211.88 L 250 213.63 L 253.5 211.88 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 1px; height: 1px; padding-top: 190px; margin-left: 250px;"><div data-drawio-colors="color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 11px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; background-color: rgb(255, 255, 255); white-space: nowrap;">Engine API</div></div></div></foreignObject><text x="250" y="193" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="11px" text-anchor="middle">Engine API</text></switch></g><path d="M 280 145 L 323.63 145" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 328.88 145 L 321.88 148.5 L 323.63 145 L 321.88 141.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 280 115 L 323.63 115" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 328.88 115 L 321.88 118.5 L 323.63 115 L 321.88 111.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><rect x="220" y="100" width="60" 
height="60" fill="#f8cecc" stroke="#b85450" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 58px; height: 1px; padding-top: 130px; margin-left: 221px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">Rollup <br />Node</div></div></div></foreignObject><text x="250" y="134" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Rollup...</text></switch></g><rect x="220" y="220" width="60" height="60" fill="#f8cecc" stroke="#b85450" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 58px; height: 1px; padding-top: 250px; margin-left: 221px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">EE</div></div></div></foreignObject><text x="250" y="254" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">EE</text></switch></g><rect x="440" y="50" width="30" height="20" fill="rgb(255, 255, 255)" stroke="rgb(0, 0, 0)" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 28px; height: 1px; padding-top: 60px; margin-left: 441px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">L1</div></div></div></foreignObject><text x="455" y="64" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">L1</text></switch></g><rect x="440" y="260" width="30" height="20" fill="rgb(255, 255, 255)" stroke="rgb(0, 0, 0)" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 28px; height: 1px; padding-top: 270px; margin-left: 441px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: 
center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">L2</div></div></div></foreignObject><text x="455" y="274" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">L2</text></switch></g><path d="M 400 100 L 400 25 L 226.37 25" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 221.12 25 L 228.12 21.5 L 226.37 25 L 228.12 28.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><rect x="330" y="100" width="140" height="30" fill="#f8cecc" stroke="#b85450" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 138px; height: 1px; padding-top: 115px; margin-left: 331px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">Batch Submitter</div></div></div></foreignObject><text x="400" y="119" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Batch Submitter</text></switch></g><rect x="330" y="130" width="140" height="30" fill="#f8cecc" stroke="#b85450" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 138px; height: 1px; padding-top: 145px; margin-left: 331px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">Output Submitter</div></div></div></foreignObject><text x="400" y="149" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Output Submitter</text></switch></g><rect x="100" y="10" width="120" height="30" fill="rgb(255, 255, 255)" stroke="rgb(0, 0, 0)" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 118px; height: 1px; padding-top: 25px; margin-left: 101px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">BatchInbox</div></div></div></foreignObject><text x="160" y="29" fill="rgb(0, 0, 0)" 
font-family="Helvetica" font-size="12px" text-anchor="middle">BatchInbox</text></switch></g><path d="M 220 55 L 250 55 L 250 93.63" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 250 98.88 L 246.5 91.88 L 250 93.63 L 253.5 91.88 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><rect x="100" y="40" width="120" height="30" fill="rgb(255, 255, 255)" stroke="rgb(0, 0, 0)" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 118px; height: 1px; padding-top: 55px; margin-left: 101px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">OptimismPortal</div></div></div></foreignObject><text x="160" y="59" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">OptimismPortal</text></switch></g><rect x="0" y="80" width="160" height="210" fill="#d5e8d4" stroke="#82b366" pointer-events="all"/><path d="M 33.63 130 L 20 130 L 20 25 L 100 25" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 38.88 130 L 31.88 133.5 L 33.63 130 L 31.88 126.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 70 160 L 70 213.63" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 70 218.88 L 66.5 211.88 L 70 213.63 L 73.5 211.88 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><rect x="40" y="100" width="60" height="60" fill="#dae8fc" stroke="#6c8ebf" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 58px; height: 1px; padding-top: 130px; margin-left: 41px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">Rollup <br />Node</div></div></div></foreignObject><text x="70" y="134" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Rollup...</text></switch></g><rect x="40" y="220" width="60" height="60" fill="#dae8fc" stroke="#6c8ebf" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 58px; height: 1px; padding-top: 250px; margin-left: 41px;"><div data-drawio-colors="color: rgb(0, 
0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">EE</div></div></div></foreignObject><text x="70" y="254" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">EE</text></switch></g><path d="M 100 55 L 55 55 L 55 93.63" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 55 98.88 L 51.5 91.88 L 55 93.63 L 58.5 91.88 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 213.63 130 L 106.37 130" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 218.88 130 L 211.88 133.5 L 213.63 130 L 211.88 126.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 101.12 130 L 108.12 126.5 L 106.37 130 L 108.12 133.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 1px; height: 1px; padding-top: 130px; margin-left: 160px;"><div data-drawio-colors="color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 11px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; background-color: rgb(255, 255, 255); white-space: nowrap;">Unsafe<br />Block<br />Propagation</div></div></div></foreignObject><text x="160" y="133" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="11px" text-anchor="middle">Unsafe...</text></switch></g><path d="M 213.63 265 L 106.37 265" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="stroke"/><path d="M 218.88 265 L 211.88 268.5 L 213.63 265 L 211.88 261.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 101.12 265 L 108.12 261.5 L 106.37 265 L 108.12 268.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 1px; height: 1px; padding-top: 265px; margin-left: 160px;"><div data-drawio-colors="color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 11px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; background-color: rgb(255, 255, 255); white-space: nowrap;">TX Sync</div></div></div></foreignObject><text x="160" y="268" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="11px" text-anchor="middle">TX Sync</text></switch></g><path d="M 213.63 235 L 106.37 235" fill="none" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" 
pointer-events="stroke"/><path d="M 218.88 235 L 211.88 238.5 L 213.63 235 L 211.88 231.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><path d="M 101.12 235 L 108.12 231.5 L 106.37 235 L 108.12 238.5 Z" fill="rgb(0, 0, 0)" stroke="rgb(0, 0, 0)" stroke-miterlimit="10" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 1px; height: 1px; padding-top: 235px; margin-left: 160px;"><div data-drawio-colors="color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 11px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; background-color: rgb(255, 255, 255); white-space: nowrap;">State Sync</div></div></div></foreignObject><text x="160" y="238" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="11px" text-anchor="middle">State Sync</text></switch></g><rect x="330" y="190" width="80" height="30" fill="rgb(255, 255, 255)" stroke="rgb(0, 0, 0)" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 78px; height: 1px; padding-top: 205px; margin-left: 331px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">Legend</div></div></div></foreignObject><text x="370" y="209" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Legend</text></switch></g><rect x="330" y="220" width="80" height="30" fill="#f8cecc" stroke="#b85450" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 78px; height: 1px; padding-top: 235px; margin-left: 331px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">Sequencer</div></div></div></foreignObject><text x="370" y="239" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Sequencer</text></switch></g><rect x="330" y="250" width="80" height="30" fill="#dae8fc" stroke="#6c8ebf" pointer-events="all"/><g transform="translate(-0.5 -0.5)"><switch><foreignObject pointer-events="none" width="100%" height="100%" requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility" style="overflow: visible; 
text-align: left;"><div xmlns="http://www.w3.org/1999/xhtml" style="display: flex; align-items: unsafe center; justify-content: unsafe center; width: 78px; height: 1px; padding-top: 265px; margin-left: 331px;"><div data-drawio-colors="color: rgb(0, 0, 0); " style="box-sizing: border-box; font-size: 0px; text-align: center;"><div style="display: inline-block; font-size: 12px; font-family: Helvetica; color: rgb(0, 0, 0); line-height: 1.2; pointer-events: all; white-space: normal; overflow-wrap: normal;">Verifier</div></div></div></foreignObject><text x="370" y="269" fill="rgb(0, 0, 0)" font-family="Helvetica" font-size="12px" text-anchor="middle">Verifier</text></switch></g></g><switch><g requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility"/><a transform="translate(0,-5)" xlink:href="https://www.diagrams.net/doc/faq/svg-export-text-problems" target="_blank"><text text-anchor="middle" font-size="10px" x="50%" y="100%">Text is not SVG - cannot display</text></a></switch></svg>
\ No newline at end of file \ No newline at end of file
...@@ -48,8 +48,8 @@ mechanisms. ...@@ -48,8 +48,8 @@ mechanisms.
### L1 Components ### L1 Components
- **DepositFeed**: A feed of L2 transactions which originated as smart contract calls in the L1 state. - **OptimismPortal**: A feed of L2 transactions which originated as smart contract calls in the L1 state.
- The `DepositFeed` contract emits `TransactionDeposited` events, which the rollup driver reads in order to process - The `OptimismPortal` contract emits `TransactionDeposited` events, which the rollup driver reads in order to process
deposits. deposits.
- Deposits are guaranteed to be reflected in the L2 state within the _sequencing window_. - Deposits are guaranteed to be reflected in the L2 state within the _sequencing window_.
- Beware that _transactions_ are deposited, not tokens. However deposited transactions are a key part of implementing - Beware that _transactions_ are deposited, not tokens. However deposited transactions are a key part of implementing
...@@ -106,7 +106,7 @@ The below diagram illustrates how the sequencer and verifiers fit together: ...@@ -106,7 +106,7 @@ The below diagram illustrates how the sequencer and verifiers fit together:
- [Deposits](./deposits.md) - [Deposits](./deposits.md)
Optimism supports two types of deposits: user deposits, and L1 attributes deposits. To perform a user deposit, users Optimism supports two types of deposits: user deposits, and L1 attributes deposits. To perform a user deposit, users
call the `depositTransaction` method on the `DepositFeed` contract. This in turn emits `TransactionDeposited` events, call the `depositTransaction` method on the `OptimismPortal` contract. This in turn emits `TransactionDeposited` events,
which the rollup node reads during block derivation. which the rollup node reads during block derivation.
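As a rough illustration of the user-deposit flow just described, the sketch below watches the `OptimismPortal` contract for `TransactionDeposited` events from an off-chain script, much as the rollup node does during block derivation. The RPC URL, portal address, and the simplified ABI fragment are assumptions made for this example; the authoritative event definition lives in deposits.md.

```ts
// Illustrative sketch only: subscribe to TransactionDeposited events emitted
// by the OptimismPortal contract on L1. Values and the ABI fragment below are
// placeholders/assumptions, not canonical.
import { ethers } from "ethers";

const L1_RPC_URL = "http://localhost:8545"; // placeholder L1 endpoint
const PORTAL_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder

// Assumed (simplified) event shape; see deposits.md for the real definition.
const portalAbi = [
  "event TransactionDeposited(address indexed from, address indexed to, uint256 indexed version, bytes opaqueData)",
];

async function watchDeposits(): Promise<void> {
  const provider = new ethers.providers.JsonRpcProvider(L1_RPC_URL);
  const portal = new ethers.Contract(PORTAL_ADDRESS, portalAbi, provider);

  // The rollup node consumes these events as part of block derivation;
  // here we simply log each one as it appears on L1.
  portal.on("TransactionDeposited", (from, to, version, opaqueData, event) => {
    console.log(
      `deposit from ${from} to ${to} (version ${version}) in L1 block ${event.blockNumber}`
    );
  });
}

watchDeposits().catch(console.error);
```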
L1 attributes deposits are used to register L1 block attributes (number, timestamp, etc.) on L2 via a call to the L1 L1 attributes deposits are used to register L1 block attributes (number, timestamp, etc.) on L2 via a call to the L1
...@@ -141,8 +141,8 @@ worth of blocks has passed, i.e. after L1 block number `n + SEQUENCING_WINDOW_SI ...@@ -141,8 +141,8 @@ worth of blocks has passed, i.e. after L1 block number `n + SEQUENCING_WINDOW_SI
Each epoch contains at least one block. Every block in the epoch contains an L1 info transaction which contains Each epoch contains at least one block. Every block in the epoch contains an L1 info transaction which contains
contextual information about L1 such as the block hash and timestamp. The first block in the epoch also contains all contextual information about L1 such as the block hash and timestamp. The first block in the epoch also contains all
deposits initiated via the `DepositFeed` contract on L1. All L2 blocks can also contain _sequenced transactions_, i.e. deposits initiated via the `OptimismPortal` contract on L1. All L2 blocks can also contain _sequenced transactions_,
transactions submitted directly to the sequencer. i.e. transactions submitted directly to the sequencer.
Whenever the sequencer creates a new L2 block for a given epoch, it must submit it to L1 as part of a _batch_, within Whenever the sequencer creates a new L2 block for a given epoch, it must submit it to L1 as part of a _batch_, within
the epoch's sequencing window (i.e. the batch must land before L1 block `n + SEQUENCING_WINDOW_SIZE`). These batches are the epoch's sequencing window (i.e. the batch must land before L1 block `n + SEQUENCING_WINDOW_SIZE`). These batches are
......
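A minimal sketch of the sequencing-window rule from the passage above: an epoch anchored at L1 block `n` only accepts batches that land before L1 block `n + SEQUENCING_WINDOW_SIZE`. The window size used here is a placeholder, not the canonical chain parameter.

```ts
// Placeholder value for the sketch; the real SEQUENCING_WINDOW_SIZE is a
// per-chain configuration parameter.
const SEQUENCING_WINDOW_SIZE = 3600;

// An epoch is identified by its L1 origin block number `n`. A batch for that
// epoch is only valid if it is included in an L1 block strictly before
// n + SEQUENCING_WINDOW_SIZE.
function batchWithinSequencingWindow(
  epochL1Origin: number,
  batchL1Block: number
): boolean {
  return batchL1Block < epochL1Origin + SEQUENCING_WINDOW_SIZE;
}

// Example: an epoch anchored at L1 block 100 with a 3600-block window accepts
// batches included up to, but not including, L1 block 3700.
console.log(batchWithinSequencingWindow(100, 3699)); // true
console.log(batchWithinSequencingWindow(100, 3700)); // false
```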
...@@ -130,7 +130,7 @@ recognize that having the same address does not imply that a contract on L2 will ...@@ -130,7 +130,7 @@ recognize that having the same address does not imply that a contract on L2 will
## The Optimism Portal Contract ## The Optimism Portal Contract
The Optimism Portal serves as both the entry and exit point to the Optimism L2. It is a contract which inherits from The Optimism Portal serves as both the entry and exit point to the Optimism L2. It is a contract which inherits from
the [DepositFeed](./deposits.md#deposit-contract) contract, and in addition provides the following interface for the [OptimismPortal](./deposits.md#deposit-contract) contract, and in addition provides the following interface for
withdrawals: withdrawals:
- [`WithdrawalTransaction` type] - [`WithdrawalTransaction` type]
......
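The withdrawal interface above is built around the `WithdrawalTransaction` type. One plausible TypeScript rendering of that struct is sketched below; the field names and types are assumptions made for illustration, and the withdrawals spec remains authoritative.

```ts
// Illustrative assumption of the WithdrawalTransaction shape referenced above;
// the actual struct is defined in the withdrawals spec and may differ.
interface WithdrawalTransaction {
  nonce: bigint;    // replay-protection nonce assigned when the withdrawal is initiated on L2
  sender: string;   // L2 address that initiated the withdrawal
  target: string;   // L1 address the withdrawal calls into
  value: bigint;    // ETH value forwarded to the target
  gasLimit: bigint; // gas provided for the L1 call
  data: string;     // hex-encoded calldata passed to the target
}

// Hypothetical example value, for illustration only.
const exampleWithdrawal: WithdrawalTransaction = {
  nonce: 0n,
  sender: "0x0000000000000000000000000000000000000001",
  target: "0x0000000000000000000000000000000000000002",
  value: 0n,
  gasLimit: 100_000n,
  data: "0x",
};
```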
...@@ -1105,20 +1105,6 @@ ...@@ -1105,20 +1105,6 @@
ethereumjs-util "^7.1.1" ethereumjs-util "^7.1.1"
miller-rabin "^4.0.0" miller-rabin "^4.0.0"
"@ethereumjs/trie@^5.0.0-beta.1":
version "5.0.0-beta.1"
resolved "https://registry.yarnpkg.com/@ethereumjs/trie/-/trie-5.0.0-beta.1.tgz#79d1108222b45bc3576d62583364c96626ce4175"
integrity sha512-OjTzt9fK5aMzm84GRSe+C7bO2zorbEWRueLbxOMlS7lHCiXA7akIQ3mzz9VBSMjT7m01hZ1r3fZIOGHzQVCHtw==
dependencies:
"@ethereumjs/util" "8.0.0-beta.1"
abstract-level "^1.0.3"
ethereum-cryptography "^1.0.3"
level "^8.0.0"
memory-level "^1.0.0"
readable-stream "^3.6.0"
rlp "4.0.0-beta.1"
semaphore-async-await "^1.5.1"
"@ethereumjs/tx@^3.2.1": "@ethereumjs/tx@^3.2.1":
version "3.3.0" version "3.3.0"
resolved "https://registry.yarnpkg.com/@ethereumjs/tx/-/tx-3.3.0.tgz#14ed1b7fa0f28e1cd61e3ecbdab824205f6a4378" resolved "https://registry.yarnpkg.com/@ethereumjs/tx/-/tx-3.3.0.tgz#14ed1b7fa0f28e1cd61e3ecbdab824205f6a4378"
...@@ -1143,14 +1129,6 @@ ...@@ -1143,14 +1129,6 @@
"@ethereumjs/common" "^2.6.3" "@ethereumjs/common" "^2.6.3"
ethereumjs-util "^7.1.4" ethereumjs-util "^7.1.4"
"@ethereumjs/util@8.0.0-beta.1", "@ethereumjs/util@^8.0.0-beta.1":
version "8.0.0-beta.1"
resolved "https://registry.yarnpkg.com/@ethereumjs/util/-/util-8.0.0-beta.1.tgz#369526faf6e9f1cadfd39c7741cc07cf33d128f8"
integrity sha512-yUg3TdJm25HiamAXbNuOagXQPmgdSrV3oEH0h+Adsxt6D7qHw8HyHLA8C+tNrLP2YwcjF1dGJ+F7WtOibzEp9g==
dependencies:
ethereum-cryptography "^1.0.3"
rlp "4.0.0-beta.1"
"@ethereumjs/vm@^5.9.0": "@ethereumjs/vm@^5.9.0":
version "5.9.0" version "5.9.0"
resolved "https://registry.yarnpkg.com/@ethereumjs/vm/-/vm-5.9.0.tgz#54e485097c6dbb42554d541ef8d84d06b7ddf12f" resolved "https://registry.yarnpkg.com/@ethereumjs/vm/-/vm-5.9.0.tgz#54e485097c6dbb42554d541ef8d84d06b7ddf12f"
...@@ -5534,19 +5512,6 @@ abort-controller@^3.0.0: ...@@ -5534,19 +5512,6 @@ abort-controller@^3.0.0:
dependencies: dependencies:
event-target-shim "^5.0.0" event-target-shim "^5.0.0"
abstract-level@^1.0.0, abstract-level@^1.0.2, abstract-level@^1.0.3:
version "1.0.3"
resolved "https://registry.yarnpkg.com/abstract-level/-/abstract-level-1.0.3.tgz#78a67d3d84da55ee15201486ab44c09560070741"
integrity sha512-t6jv+xHy+VYwc4xqZMn2Pa9DjcdzvzZmQGRjTFc8spIbRGHgBrEKbPq+rYXc7CCo0lxgYvSgKVg9qZAhpVQSjA==
dependencies:
buffer "^6.0.3"
catering "^2.1.0"
is-buffer "^2.0.5"
level-supports "^4.0.0"
level-transcoder "^1.0.1"
module-error "^1.0.1"
queue-microtask "^1.2.3"
abstract-leveldown@3.0.0: abstract-leveldown@3.0.0:
version "3.0.0" version "3.0.0"
resolved "https://registry.yarnpkg.com/abstract-leveldown/-/abstract-leveldown-3.0.0.tgz#5cb89f958a44f526779d740d1440e743e0c30a57" resolved "https://registry.yarnpkg.com/abstract-leveldown/-/abstract-leveldown-3.0.0.tgz#5cb89f958a44f526779d740d1440e743e0c30a57"
...@@ -7086,16 +7051,6 @@ brorand@^1.0.1, brorand@^1.1.0: ...@@ -7086,16 +7051,6 @@ brorand@^1.0.1, brorand@^1.1.0:
resolved "https://registry.yarnpkg.com/brorand/-/brorand-1.1.0.tgz#12c25efe40a45e3c323eb8675a0a0ce57b22371f" resolved "https://registry.yarnpkg.com/brorand/-/brorand-1.1.0.tgz#12c25efe40a45e3c323eb8675a0a0ce57b22371f"
integrity sha1-EsJe/kCkXjwyPrhnWgoM5XsiNx8= integrity sha1-EsJe/kCkXjwyPrhnWgoM5XsiNx8=
browser-level@^1.0.1:
version "1.0.1"
resolved "https://registry.yarnpkg.com/browser-level/-/browser-level-1.0.1.tgz#36e8c3183d0fe1c405239792faaab5f315871011"
integrity sha512-XECYKJ+Dbzw0lbydyQuJzwNXtOpbMSq737qxJN11sIRTErOMShvDpbzTlgju7orJKvx4epULolZAuJGLzCmWRQ==
dependencies:
abstract-level "^1.0.2"
catering "^2.1.1"
module-error "^1.0.2"
run-parallel-limit "^1.1.0"
browser-stdout@1.3.1: browser-stdout@1.3.1:
version "1.3.1" version "1.3.1"
resolved "https://registry.yarnpkg.com/browser-stdout/-/browser-stdout-1.3.1.tgz#baa559ee14ced73452229bad7326467c61fabd60" resolved "https://registry.yarnpkg.com/browser-stdout/-/browser-stdout-1.3.1.tgz#baa559ee14ced73452229bad7326467c61fabd60"
...@@ -7528,11 +7483,6 @@ caseless@^0.12.0, caseless@~0.12.0: ...@@ -7528,11 +7483,6 @@ caseless@^0.12.0, caseless@~0.12.0:
resolved "https://registry.yarnpkg.com/caseless/-/caseless-0.12.0.tgz#1b681c21ff84033c826543090689420d187151dc" resolved "https://registry.yarnpkg.com/caseless/-/caseless-0.12.0.tgz#1b681c21ff84033c826543090689420d187151dc"
integrity sha1-G2gcIf+EAzyCZUMJBolCDRhxUdw= integrity sha1-G2gcIf+EAzyCZUMJBolCDRhxUdw=
catering@^2.1.0, catering@^2.1.1:
version "2.1.1"
resolved "https://registry.yarnpkg.com/catering/-/catering-2.1.1.tgz#66acba06ed5ee28d5286133982a927de9a04b510"
integrity sha512-K7Qy8O9p76sL3/3m7/zLKbRkyOlSZAgzEaLhyj2mXS8PsCud2Eo4hAb8aLtZqHh0QGqLcb9dlJSu6lHRVENm1w==
cbor@^5.0.2: cbor@^5.0.2:
version "5.2.0" version "5.2.0"
resolved "https://registry.yarnpkg.com/cbor/-/cbor-5.2.0.tgz#4cca67783ccd6de7b50ab4ed62636712f287a67c" resolved "https://registry.yarnpkg.com/cbor/-/cbor-5.2.0.tgz#4cca67783ccd6de7b50ab4ed62636712f287a67c"
...@@ -7803,17 +7753,6 @@ class-utils@^0.3.5: ...@@ -7803,17 +7753,6 @@ class-utils@^0.3.5:
isobject "^3.0.0" isobject "^3.0.0"
static-extend "^0.1.1" static-extend "^0.1.1"
classic-level@^1.2.0:
version "1.2.0"
resolved "https://registry.yarnpkg.com/classic-level/-/classic-level-1.2.0.tgz#2d52bdec8e7a27f534e67fdeb890abef3e643c27"
integrity sha512-qw5B31ANxSluWz9xBzklRWTUAJ1SXIdaVKTVS7HcTGKOAmExx65Wo5BUICW+YGORe2FOUaDghoI9ZDxj82QcFg==
dependencies:
abstract-level "^1.0.2"
catering "^2.1.0"
module-error "^1.0.1"
napi-macros "~2.0.0"
node-gyp-build "^4.3.0"
clean-regexp@^1.0.0: clean-regexp@^1.0.0:
version "1.0.0" version "1.0.0"
resolved "https://registry.yarnpkg.com/clean-regexp/-/clean-regexp-1.0.0.tgz#8df7c7aae51fd36874e8f8d05b9180bc11a3fed7" resolved "https://registry.yarnpkg.com/clean-regexp/-/clean-regexp-1.0.0.tgz#8df7c7aae51fd36874e8f8d05b9180bc11a3fed7"
...@@ -13089,7 +13028,7 @@ is-buffer@^1.1.5: ...@@ -13089,7 +13028,7 @@ is-buffer@^1.1.5:
resolved "https://registry.yarnpkg.com/is-buffer/-/is-buffer-1.1.6.tgz#efaa2ea9daa0d7ab2ea13a97b2b8ad51fefbe8be" resolved "https://registry.yarnpkg.com/is-buffer/-/is-buffer-1.1.6.tgz#efaa2ea9daa0d7ab2ea13a97b2b8ad51fefbe8be"
integrity sha512-NcdALwpXkTm5Zvvbk7owOUSvVvBKDgKP5/ewfXEznmQFfs4ZRmanOeKBTjRVjka3QFoN6XJ+9F3USqfHqTaU5w== integrity sha512-NcdALwpXkTm5Zvvbk7owOUSvVvBKDgKP5/ewfXEznmQFfs4ZRmanOeKBTjRVjka3QFoN6XJ+9F3USqfHqTaU5w==
is-buffer@^2.0.0, is-buffer@^2.0.5, is-buffer@~2.0.3: is-buffer@^2.0.0, is-buffer@~2.0.3:
version "2.0.5" version "2.0.5"
resolved "https://registry.yarnpkg.com/is-buffer/-/is-buffer-2.0.5.tgz#ebc252e400d22ff8d77fa09888821a24a658c191" resolved "https://registry.yarnpkg.com/is-buffer/-/is-buffer-2.0.5.tgz#ebc252e400d22ff8d77fa09888821a24a658c191"
integrity sha512-i2R6zNFDwgEHJyQUtJEk0XFi1i0dPFn/oqjK3/vPCcDeJvW5NQ83V8QbicfF1SupOaB0h8ntgBC2YiE7dfyctQ== integrity sha512-i2R6zNFDwgEHJyQUtJEk0XFi1i0dPFn/oqjK3/vPCcDeJvW5NQ83V8QbicfF1SupOaB0h8ntgBC2YiE7dfyctQ==
...@@ -14256,11 +14195,6 @@ level-sublevel@6.6.4: ...@@ -14256,11 +14195,6 @@ level-sublevel@6.6.4:
typewiselite "~1.0.0" typewiselite "~1.0.0"
xtend "~4.0.0" xtend "~4.0.0"
level-supports@^4.0.0:
version "4.0.1"
resolved "https://registry.yarnpkg.com/level-supports/-/level-supports-4.0.1.tgz#431546f9d81f10ff0fea0e74533a0e875c08c66a"
integrity sha512-PbXpve8rKeNcZ9C1mUicC9auIYFyGpkV9/i6g76tLgANwWhtG2v7I4xNBUlkn3lE2/dZF3Pi0ygYGtLc4RXXdA==
level-supports@~1.0.0: level-supports@~1.0.0:
version "1.0.1" version "1.0.1"
resolved "https://registry.yarnpkg.com/level-supports/-/level-supports-1.0.1.tgz#2f530a596834c7301622521988e2c36bb77d122d" resolved "https://registry.yarnpkg.com/level-supports/-/level-supports-1.0.1.tgz#2f530a596834c7301622521988e2c36bb77d122d"
...@@ -14268,14 +14202,6 @@ level-supports@~1.0.0: ...@@ -14268,14 +14202,6 @@ level-supports@~1.0.0:
dependencies: dependencies:
xtend "^4.0.2" xtend "^4.0.2"
level-transcoder@^1.0.1:
version "1.0.1"
resolved "https://registry.yarnpkg.com/level-transcoder/-/level-transcoder-1.0.1.tgz#f8cef5990c4f1283d4c86d949e73631b0bc8ba9c"
integrity sha512-t7bFwFtsQeD8cl8NIoQ2iwxA0CL/9IFw7/9gAjOonH0PWTTiRfY7Hq+Ejbsxh86tXobDQ6IOiddjNYIfOBs06w==
dependencies:
buffer "^6.0.3"
module-error "^1.0.1"
level-ws@0.0.0: level-ws@0.0.0:
version "0.0.0" version "0.0.0"
resolved "https://registry.yarnpkg.com/level-ws/-/level-ws-0.0.0.tgz#372e512177924a00424b0b43aef2bb42496d228b" resolved "https://registry.yarnpkg.com/level-ws/-/level-ws-0.0.0.tgz#372e512177924a00424b0b43aef2bb42496d228b"
...@@ -14311,14 +14237,6 @@ level-ws@^2.0.0: ...@@ -14311,14 +14237,6 @@ level-ws@^2.0.0:
level-packager "^5.1.0" level-packager "^5.1.0"
leveldown "^5.4.0" leveldown "^5.4.0"
level@^8.0.0:
version "8.0.0"
resolved "https://registry.yarnpkg.com/level/-/level-8.0.0.tgz#41b4c515dabe28212a3e881b61c161ffead14394"
integrity sha512-ypf0jjAk2BWI33yzEaaotpq7fkOPALKAgDBxggO6Q9HGX2MRXn0wbP1Jn/tJv1gtL867+YOjOB49WaUF3UoJNQ==
dependencies:
browser-level "^1.0.1"
classic-level "^1.2.0"
leveldown@^5.4.0: leveldown@^5.4.0:
version "5.6.0" version "5.6.0"
resolved "https://registry.yarnpkg.com/leveldown/-/leveldown-5.6.0.tgz#16ba937bb2991c6094e13ac5a6898ee66d3eee98" resolved "https://registry.yarnpkg.com/leveldown/-/leveldown-5.6.0.tgz#16ba937bb2991c6094e13ac5a6898ee66d3eee98"
...@@ -15176,15 +15094,6 @@ memdown@~3.0.0: ...@@ -15176,15 +15094,6 @@ memdown@~3.0.0:
ltgt "~2.2.0" ltgt "~2.2.0"
safe-buffer "~5.1.1" safe-buffer "~5.1.1"
memory-level@^1.0.0:
version "1.0.0"
resolved "https://registry.yarnpkg.com/memory-level/-/memory-level-1.0.0.tgz#7323c3fd368f9af2f71c3cd76ba403a17ac41692"
integrity sha512-UXzwewuWeHBz5krr7EvehKcmLFNoXxGcvuYhC41tRnkrTbJohtS7kVn9akmgirtRygg+f7Yjsfi8Uu5SGSQ4Og==
dependencies:
abstract-level "^1.0.0"
functional-red-black-tree "^1.0.1"
module-error "^1.0.1"
memorystream@^0.3.1: memorystream@^0.3.1:
version "0.3.1" version "0.3.1"
resolved "https://registry.yarnpkg.com/memorystream/-/memorystream-0.3.1.tgz#86d7090b30ce455d63fbae12dda51a47ddcaf9b2" resolved "https://registry.yarnpkg.com/memorystream/-/memorystream-0.3.1.tgz#86d7090b30ce455d63fbae12dda51a47ddcaf9b2"
...@@ -15849,11 +15758,6 @@ modify-values@^1.0.0: ...@@ -15849,11 +15758,6 @@ modify-values@^1.0.0:
resolved "https://registry.yarnpkg.com/modify-values/-/modify-values-1.0.1.tgz#b3939fa605546474e3e3e3c63d64bd43b4ee6022" resolved "https://registry.yarnpkg.com/modify-values/-/modify-values-1.0.1.tgz#b3939fa605546474e3e3e3c63d64bd43b4ee6022"
integrity sha512-xV2bxeN6F7oYjZWTe/YPAy6MN2M+sL4u/Rlm2AHCIVGfo2p1yGmBHQ6vHehl4bRTZBdHu3TSkWdYgkwpYzAGSw== integrity sha512-xV2bxeN6F7oYjZWTe/YPAy6MN2M+sL4u/Rlm2AHCIVGfo2p1yGmBHQ6vHehl4bRTZBdHu3TSkWdYgkwpYzAGSw==
module-error@^1.0.1, module-error@^1.0.2:
version "1.0.2"
resolved "https://registry.yarnpkg.com/module-error/-/module-error-1.0.2.tgz#8d1a48897ca883f47a45816d4fb3e3c6ba404d86"
integrity sha512-0yuvsqSCv8LbaOKhnsQ/T5JhyFlCYLPXK3U2sgV10zoKQwzs/MyfuQUOZQ1V/6OCOJsK/TRgNVrPuPDqtdMFtA==
morgan@^1.10.0: morgan@^1.10.0:
version "1.10.0" version "1.10.0"
resolved "https://registry.yarnpkg.com/morgan/-/morgan-1.10.0.tgz#091778abc1fc47cd3509824653dae1faab6b17d7" resolved "https://registry.yarnpkg.com/morgan/-/morgan-1.10.0.tgz#091778abc1fc47cd3509824653dae1faab6b17d7"
...@@ -16164,11 +16068,6 @@ node-gyp-build@^4.2.0: ...@@ -16164,11 +16068,6 @@ node-gyp-build@^4.2.0:
resolved "https://registry.yarnpkg.com/node-gyp-build/-/node-gyp-build-4.2.3.tgz#ce6277f853835f718829efb47db20f3e4d9c4739" resolved "https://registry.yarnpkg.com/node-gyp-build/-/node-gyp-build-4.2.3.tgz#ce6277f853835f718829efb47db20f3e4d9c4739"
integrity sha512-MN6ZpzmfNCRM+3t57PTJHgHyw/h4OWnZ6mR8P5j/uZtqQr46RRuDE/P+g3n0YR/AiYXeWixZZzaip77gdICfRg== integrity sha512-MN6ZpzmfNCRM+3t57PTJHgHyw/h4OWnZ6mR8P5j/uZtqQr46RRuDE/P+g3n0YR/AiYXeWixZZzaip77gdICfRg==
node-gyp-build@^4.3.0:
version "4.5.0"
resolved "https://registry.yarnpkg.com/node-gyp-build/-/node-gyp-build-4.5.0.tgz#7a64eefa0b21112f89f58379da128ac177f20e40"
integrity sha512-2iGbaQBV+ITgCz76ZEjmhUKAKVf7xfY1sRl4UiKQspfZMH2h06SyhNsnSVy50cwkFQDGLyif6m/6uFXHkOZ6rg==
node-gyp-build@~4.1.0: node-gyp-build@~4.1.0:
version "4.1.1" version "4.1.1"
resolved "https://registry.yarnpkg.com/node-gyp-build/-/node-gyp-build-4.1.1.tgz#d7270b5d86717068d114cc57fff352f96d745feb" resolved "https://registry.yarnpkg.com/node-gyp-build/-/node-gyp-build-4.1.1.tgz#d7270b5d86717068d114cc57fff352f96d745feb"
...@@ -18046,7 +17945,7 @@ querystringify@^2.1.1: ...@@ -18046,7 +17945,7 @@ querystringify@^2.1.1:
resolved "https://registry.yarnpkg.com/querystringify/-/querystringify-2.2.0.tgz#3345941b4153cb9d082d8eee4cda2016a9aef7f6" resolved "https://registry.yarnpkg.com/querystringify/-/querystringify-2.2.0.tgz#3345941b4153cb9d082d8eee4cda2016a9aef7f6"
integrity sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ== integrity sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ==
queue-microtask@^1.2.2, queue-microtask@^1.2.3: queue-microtask@^1.2.2:
version "1.2.3" version "1.2.3"
resolved "https://registry.yarnpkg.com/queue-microtask/-/queue-microtask-1.2.3.tgz#4929228bbc724dfac43e0efb058caf7b6cfb6243" resolved "https://registry.yarnpkg.com/queue-microtask/-/queue-microtask-1.2.3.tgz#4929228bbc724dfac43e0efb058caf7b6cfb6243"
integrity sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A== integrity sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==
...@@ -18818,11 +18717,6 @@ ripemd160@^2.0.0, ripemd160@^2.0.1: ...@@ -18818,11 +18717,6 @@ ripemd160@^2.0.0, ripemd160@^2.0.1:
hash-base "^3.0.0" hash-base "^3.0.0"
inherits "^2.0.1" inherits "^2.0.1"
rlp@4.0.0-beta.1:
version "4.0.0-beta.1"
resolved "https://registry.yarnpkg.com/rlp/-/rlp-4.0.0-beta.1.tgz#46983ee758344e5eee48f135129407434cfea2b6"
integrity sha512-UVIENF7Rw+nX5cpfzw6X3/oXNQKsSZ8HbDJUeU9RoIs1LLyMjcPZR1o26i1vFbpuVN8GRmcdopEYOMjVsLRsQQ==
rlp@^2.0.0, rlp@^2.2.1, rlp@^2.2.2, rlp@^2.2.3, rlp@^2.2.4, rlp@^2.2.6: rlp@^2.0.0, rlp@^2.2.1, rlp@^2.2.2, rlp@^2.2.3, rlp@^2.2.4, rlp@^2.2.6:
version "2.2.6" version "2.2.6"
resolved "https://registry.yarnpkg.com/rlp/-/rlp-2.2.6.tgz#c80ba6266ac7a483ef1e69e8e2f056656de2fb2c" resolved "https://registry.yarnpkg.com/rlp/-/rlp-2.2.6.tgz#c80ba6266ac7a483ef1e69e8e2f056656de2fb2c"
...@@ -18862,13 +18756,6 @@ run-async@^2.2.0, run-async@^2.4.0: ...@@ -18862,13 +18756,6 @@ run-async@^2.2.0, run-async@^2.4.0:
resolved "https://registry.yarnpkg.com/run-async/-/run-async-2.4.1.tgz#8440eccf99ea3e70bd409d49aab88e10c189a455" resolved "https://registry.yarnpkg.com/run-async/-/run-async-2.4.1.tgz#8440eccf99ea3e70bd409d49aab88e10c189a455"
integrity sha512-tvVnVv01b8c1RrA6Ep7JkStj85Guv/YrMcwqYQnwjsAS2cTmmPGBBjAjpCW7RrSodNSoE2/qg9O4bceNvUuDgQ== integrity sha512-tvVnVv01b8c1RrA6Ep7JkStj85Guv/YrMcwqYQnwjsAS2cTmmPGBBjAjpCW7RrSodNSoE2/qg9O4bceNvUuDgQ==
run-parallel-limit@^1.1.0:
version "1.1.0"
resolved "https://registry.yarnpkg.com/run-parallel-limit/-/run-parallel-limit-1.1.0.tgz#be80e936f5768623a38a963262d6bef8ff11e7ba"
integrity sha512-jJA7irRNM91jaKc3Hcl1npHsFLOXOoTkPCUL1JEa1R82O2miplXXRaGdjW/KM/98YQWDhJLiSs793CnXfblJUw==
dependencies:
queue-microtask "^1.2.2"
run-parallel@^1.1.9: run-parallel@^1.1.9:
version "1.2.0" version "1.2.0"
resolved "https://registry.yarnpkg.com/run-parallel/-/run-parallel-1.2.0.tgz#66d1368da7bdf921eb9d95bd1a9229e7f21a43ee" resolved "https://registry.yarnpkg.com/run-parallel/-/run-parallel-1.2.0.tgz#66d1368da7bdf921eb9d95bd1a9229e7f21a43ee"
......