Commit 79598c8b authored by Matthew Slipper, committed by GitHub

Merge pull request #2348 from ethereum-optimism/develop

Develop -> Master
parents 83a45f9e 06d821ba
---
'@eth-optimism/teleportr': patch
---
Add SuggestGasTipCap fallback
---
'@eth-optimism/data-transport-layer': patch
---
Removes the unused L1DataTransportClient and its dependencies
---
'@eth-optimism/batch-submitter-service': patch
'@eth-optimism/gas-oracle': patch
'@eth-optimism/integration-tests': patch
'@eth-optimism/l2geth': patch
'@eth-optimism/hardhat-node': patch
'@eth-optimism/contracts': patch
'@eth-optimism/data-transport-layer': patch
'@eth-optimism/message-relayer': patch
---
Refactored Dockerfiles
---
'@eth-optimism/common-ts': patch
---
Update log lines for service shutdown
---
'@eth-optimism/indexer': patch
---
Indexer: initial release
---
'@eth-optimism/replica-healthcheck': patch
---
Fixes a bug in the replica-healthcheck dockerfile
---
'@eth-optimism/indexer': patch
---
Bump `go-ethereum` to `v1.10.16`
---
'@eth-optimism/batch-submitter-service': patch
'@eth-optimism/teleportr': patch
---
Count reverted transactions in failed_submissions
---
'@eth-optimism/teleportr': patch
---
Add teleportr API server
---
'@eth-optimism/teleportr': patch
---
Bump `go-ethereum` to `v1.10.16`
---
'@eth-optimism/data-transport-layer': patch
---
dtl: Support basic authentication for RPC endpoints
---
'@eth-optimism/batch-submitter-service': patch
---
Add Min/MaxStateRootElements configuration
---
'@eth-optimism/common-ts': patch
---
Update metric names to include proper snake_case for strings that include "L1" or "L2"
---
'@eth-optimism/common-ts': patch
'@eth-optimism/message-relayer': patch
'@eth-optimism/replica-healthcheck': patch
---
Have BaseServiceV2 add spaces to environment variable names
---
'@eth-optimism/batch-submitter-service': patch
'@eth-optimism/l2geth': patch
---
l2geth: Sync from Backend Queue
---
'@eth-optimism/sdk': patch
---
Comment out non-functional getMessagesByAddress function
---
'@eth-optimism/teleportr': patch
---
Restructure Deposit and CompletedTeleport to use struct embeddings
---
'@eth-optimism/batch-submitter-service': patch
---
Enforce min/max tx size on plaintext batch encoding
---
'@eth-optimism/teleportr': patch
---
Add LoadInTeleport method to database
---
'@eth-optimism/teleportr': patch
---
Add btree index on deposit.txn_hash and deposit.address
name: indexer unit tests

on:
  push:
    paths:
      - 'go/indexer/**'
    branches:
      - 'master'
      - 'develop'
      - '*rc'
      - 'release/*'
  pull_request:
    branches:
      - '*'
  workflow_dispatch:

defaults:
  run:
    working-directory: './go/indexer'

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - name: Install Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.16.x
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Install
        run: make
      - name: Test
        run: make test
@@ -24,9 +24,10 @@ jobs:
       hardhat-node: ${{ steps.packages.outputs.hardhat-node }}
       canary-docker-tag: ${{ steps.docker-image-name.outputs.canary-docker-tag }}
       proxyd: ${{ steps.packages.outputs.proxyd }}
-      op-exporter : ${{ steps.packages.outputs.op-exporter }}
-      l2geth-exporter : ${{ steps.packages.outputs.l2geth-exporter }}
-      batch-submitter-service : ${{ steps.packages.outputs.batch-submitter-service }}
+      op-exporter: ${{ steps.packages.outputs.op-exporter }}
+      l2geth-exporter: ${{ steps.packages.outputs.l2geth-exporter }}
+      batch-submitter-service: ${{ steps.packages.outputs.batch-submitter-service }}
+      indexer: ${{ steps.packages.outputs.indexer }}
     steps:
       - name: Check out source code
@@ -438,3 +439,40 @@ jobs:
           file: ./ops/docker/Dockerfile.batch-submitter-service
           push: true
           tags: ethereumoptimism/batch-submitter-service:${{ needs.canary-publish.outputs.batch-submitter-service }}
+
+  indexer:
+    name: Publish indexer Version ${{ needs.canary-publish.outputs.canary-docker-tag }}
+    needs: canary-publish
+    if: needs.canary-publish.outputs.indexer != ''
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v1
+      - name: Login to Docker Hub
+        uses: docker/login-action@v1
+        with:
+          username: ${{ secrets.DOCKERHUB_ACCESS_TOKEN_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN_SECRET }}
+      - name: Set build args
+        id: build_args
+        run: |
+          echo ::set-output name=GITDATE::"$(date +%d-%m-%Y)"
+          echo ::set-output name=GITVERSION::$(jq -r .version ./go/indexer/package.json)
+          echo ::set-output name=GITCOMMIT::"$GITHUB_SHA"
+      - name: Build and push
+        uses: docker/build-push-action@v2
+        with:
+          context: .
+          file: ./ops/docker/Dockerfile.indexer
+          push: true
+          tags: ethereumoptimism/indexer:${{ needs.canary-publish.outputs.indexer }}
+          build-args: |
+            GITDATE=${{ steps.build_args.outputs.GITDATE }}
+            GITCOMMIT=${{ steps.build_args.outputs.GITCOMMIT }}
+            GITVERSION=${{ steps.build_args.outputs.GITVERSION }}
@@ -19,9 +19,10 @@ jobs:
       replica-healthcheck: ${{ steps.packages.outputs.replica-healthcheck }}
       proxyd: ${{ steps.packages.outputs.proxyd }}
       hardhat-node: ${{ steps.packages.outputs.hardhat-node }}
-      op-exporter : ${{ steps.packages.outputs.op-exporter }}
-      l2geth-exporter : ${{ steps.packages.outputs.l2geth-exporter }}
-      batch-submitter-service : ${{ steps.packages.outputs.batch-submitter-service }}
+      op-exporter: ${{ steps.packages.outputs.op-exporter }}
+      l2geth-exporter: ${{ steps.packages.outputs.l2geth-exporter }}
+      batch-submitter-service: ${{ steps.packages.outputs.batch-submitter-service }}
+      indexer: ${{ steps.packages.outputs.indexer }}
     steps:
       - name: Checkout Repo
@@ -416,3 +417,40 @@ jobs:
           file: ./ops/docker/Dockerfile.batch-submitter-service
           push: true
           tags: ethereumoptimism/batch-submitter-service:${{ needs.release.outputs.batch-submitter-service }},ethereumoptimism/batch-submitter-service:latest
+
+  indexer:
+    name: Publish Indexer Version ${{ needs.release.outputs.indexer }}
+    needs: release
+    if: needs.release.outputs.indexer != ''
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+      - name: Login to Docker Hub
+        uses: docker/login-action@v1
+        with:
+          username: ${{ secrets.DOCKERHUB_ACCESS_TOKEN_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_ACCESS_TOKEN_SECRET }}
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v1
+      - name: Set build args
+        id: build_args
+        run: |
+          echo ::set-output name=GITDATE::"$(date +%d-%m-%Y)"
+          echo ::set-output name=GITVERSION::$(jq -r .version ./go/indexer/package.json)
+          echo ::set-output name=GITCOMMIT::"$GITHUB_SHA"
+      - name: Publish Indexer
+        uses: docker/build-push-action@v2
+        with:
+          context: .
+          file: ./ops/docker/Dockerfile.indexer
+          push: true
+          tags: ethereumoptimism/indexer:${{ needs.release.outputs.indexer }},ethereumoptimism/indexer:latest
+          build-args: |
+            GITDATE=${{ steps.build_args.outputs.GITDATE }}
+            GITCOMMIT=${{ steps.build_args.outputs.GITCOMMIT }}
+            GITVERSION=${{ steps.build_args.outputs.GITVERSION }}
@@ -138,6 +138,8 @@ docker-compose build
 docker-compose up
 ```
+
+**If a node process exits with exit code 137**, you may need to increase the default memory limit of your Docker containers.
+
 Finally, **if you're running into weird problems and nothing seems to be working**, run:
 ```bash
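Exit code 137 is the out-of-memory kill (128 + SIGKILL). As an illustration only (the service name here is a placeholder, not taken from this repository's compose file), a memory limit can be raised in a Compose override:

```yaml
# docker-compose.override.yml -- illustrative sketch; "l2geth" is a placeholder
# service name, and mem_limit is the Compose v2 file-format key.
services:
  l2geth:
    mem_limit: 8g
```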
@@ -74,7 +74,7 @@ Our update process takes the form of a PR merging the `develop` branch into the
 ### The `develop` branch
 Our primary development branch is [`develop`](https://github.com/ethereum-optimism/optimism/tree/develop/).
-`develop` contains the most up-to-date software that remains backwards compatible with our latest experimental [network deployments](https://community.optimism.io/docs/developers/networks.html).
+`develop` contains the most up-to-date software that remains backwards compatible with our latest experimental [network deployments](https://community.optimism.io/docs/useful-tools/networks/).
 If you're making a backwards compatible change, please direct your pull request towards `develop`.
 **Changes to contracts within `packages/contracts/contracts` are usually NOT considered backwards compatible and SHOULD be made against a release candidate branch**.
@@ -148,15 +148,16 @@ func Main(gitVersion string) func(ctx *cli.Context) error {
 	if cfg.RunStateBatchSubmitter {
 		batchStateDriver, err := proposer.NewDriver(proposer.Config{
-			Name:        "Proposer",
-			L1Client:    l1Client,
-			L2Client:    l2Client,
-			BlockOffset: cfg.BlockOffset,
-			MaxTxSize:   cfg.MaxL1TxSize,
-			SCCAddr:     sccAddress,
-			CTCAddr:     ctcAddress,
-			ChainID:     chainID,
-			PrivKey:     proposerPrivKey,
+			Name:                 "Proposer",
+			L1Client:             l1Client,
+			L2Client:             l2Client,
+			BlockOffset:          cfg.BlockOffset,
+			MinStateRootElements: cfg.MinStateRootElements,
+			MaxStateRootElements: cfg.MaxStateRootElements,
+			SCCAddr:              sccAddress,
+			CTCAddr:              ctcAddress,
+			ChainID:              chainID,
+			PrivKey:              proposerPrivKey,
 		})
 		if err != nil {
 			return err
@@ -74,14 +74,18 @@ type Config struct {
 	// by the batch submitter.
 	MaxL1TxSize uint64
 
+	// MinStateRootElements is the minimum number of state root elements that
+	// can be submitted in a single proposer batch.
+	MinStateRootElements uint64
+
+	// MaxStateRootElements is the maximum number of state root elements that
+	// can be submitted in a single proposer batch.
+	MaxStateRootElements uint64
+
 	// MaxTxBatchCount is the maximum number of L2 transactions that can ever be
 	// in a batch.
 	MaxTxBatchCount uint64
 
-	// MaxStateBatchCount is the maximum number of L2 state roots that can ever
-	// be in a batch.
-	MaxStateBatchCount uint64
-
 	// MaxBatchSubmissionTime is the maximum amount of time that we will
 	// wait before submitting an under-sized batch.
 	MaxBatchSubmissionTime time.Duration
@@ -199,6 +203,8 @@ func NewConfig(ctx *cli.Context) (Config, error) {
 		SCCAddress:             ctx.GlobalString(flags.SCCAddressFlag.Name),
 		MinL1TxSize:            ctx.GlobalUint64(flags.MinL1TxSizeFlag.Name),
 		MaxL1TxSize:            ctx.GlobalUint64(flags.MaxL1TxSizeFlag.Name),
+		MinStateRootElements:   ctx.GlobalUint64(flags.MinStateRootElementsFlag.Name),
+		MaxStateRootElements:   ctx.GlobalUint64(flags.MaxStateRootElementsFlag.Name),
 		MaxBatchSubmissionTime: ctx.GlobalDuration(flags.MaxBatchSubmissionTimeFlag.Name),
 		PollInterval:           ctx.GlobalDuration(flags.PollIntervalFlag.Name),
 		NumConfirmations:       ctx.GlobalUint64(flags.NumConfirmationsFlag.Name),
@@ -28,15 +28,16 @@ const stateRootSize = 32
 var bigOne = new(big.Int).SetUint64(1) //nolint:unused
 
 type Config struct {
-	Name        string
-	L1Client    *ethclient.Client
-	L2Client    *l2ethclient.Client
-	BlockOffset uint64
-	MaxTxSize   uint64
-	SCCAddr     common.Address
-	CTCAddr     common.Address
-	ChainID     *big.Int
-	PrivKey     *ecdsa.PrivateKey
+	Name                 string
+	L1Client             *ethclient.Client
+	L2Client             *l2ethclient.Client
+	BlockOffset          uint64
+	MaxStateRootElements uint64
+	MinStateRootElements uint64
+	SCCAddr              common.Address
+	CTCAddr              common.Address
+	ChainID              *big.Int
+	PrivKey              *ecdsa.PrivateKey
 }
 
 type Driver struct {
@@ -165,13 +166,10 @@ func (d *Driver) CraftBatchTx(
 	log.Info(name+" crafting batch tx", "start", start, "end", end,
 		"nonce", nonce)
 
-	var (
-		stateRoots         [][stateRootSize]byte
-		totalStateRootSize uint64
-	)
+	var stateRoots [][stateRootSize]byte
 	for i := new(big.Int).Set(start); i.Cmp(end) < 0; i.Add(i, bigOne) {
-		// Consume state roots until reach our maximum tx size.
-		if totalStateRootSize+stateRootSize > d.cfg.MaxTxSize {
+		// Consume state roots until we reach our maximum batch size.
+		if uint64(len(stateRoots)) >= d.cfg.MaxStateRootElements {
 			break
 		}
@@ -180,10 +178,18 @@ func (d *Driver) CraftBatchTx(
 			return nil, err
 		}
 
-		totalStateRootSize += stateRootSize
 		stateRoots = append(stateRoots, block.Root())
 	}
 
+	// Abort if we don't have enough state roots to meet our minimum
+	// requirement.
+	if uint64(len(stateRoots)) < d.cfg.MinStateRootElements {
+		log.Info(name+" number of state roots below minimum",
+			"num_state_roots", len(stateRoots),
+			"min_state_roots", d.cfg.MinStateRootElements)
+		return nil, nil
+	}
+
 	d.metrics.NumElementsPerBatch().Observe(float64(len(stateRoots)))
 	log.Info(name+" batch constructed", "num_state_roots", len(stateRoots))
@@ -78,7 +78,6 @@ func GenSequencerBatchParams(
 	shouldStartAtElement uint64,
 	blockOffset uint64,
 	batch []BatchElement,
-	batchType BatchType,
 ) (*AppendSequencerBatchParams, error) {
 
 	var (
@@ -189,6 +188,5 @@ func GenSequencerBatchParams(
 		TotalElementsToAppend: uint64(len(batch)),
 		Contexts:              contexts,
 		Txs:                   txs,
-		Type:                  batchType,
 	}, nil
 }
@@ -199,39 +199,57 @@ func (d *Driver) CraftBatchTx(
 	var pruneCount int
 	for {
 		batchParams, err := GenSequencerBatchParams(
-			shouldStartAt, d.cfg.BlockOffset, batchElements, d.cfg.BatchType,
+			shouldStartAt, d.cfg.BlockOffset, batchElements,
 		)
 		if err != nil {
 			return nil, err
 		}
 
-		batchArguments, err := batchParams.Serialize()
+		// Use plaintext encoding to enforce size constraints.
+		plaintextBatchArguments, err := batchParams.Serialize(BatchTypeLegacy)
 		if err != nil {
 			return nil, err
 		}
 
 		appendSequencerBatchID := d.ctcABI.Methods[appendSequencerBatchMethodName].ID
-		batchCallData := append(appendSequencerBatchID, batchArguments...)
+		plaintextCalldata := append(appendSequencerBatchID, plaintextBatchArguments...)
 
-		// Continue pruning until calldata size is less than configured max.
-		calldataSize := uint64(len(batchCallData))
-		if calldataSize > d.cfg.MaxTxSize {
+		// Continue pruning until plaintext calldata size is less than
+		// configured max.
+		plaintextCalldataSize := uint64(len(plaintextCalldata))
+		if plaintextCalldataSize > d.cfg.MaxTxSize {
 			oldLen := len(batchElements)
 			newBatchElementsLen := (oldLen * 9) / 10
 			batchElements = batchElements[:newBatchElementsLen]
-			log.Info(name+" pruned batch", "old_num_txs", oldLen, "new_num_txs", newBatchElementsLen)
+			log.Info(name+" pruned batch",
+				"plaintext_size", plaintextCalldataSize,
+				"max_tx_size", d.cfg.MaxTxSize,
+				"old_num_txs", oldLen,
+				"new_num_txs", newBatchElementsLen)
 			pruneCount++
 			continue
-		} else if calldataSize < d.cfg.MinTxSize {
+		} else if plaintextCalldataSize < d.cfg.MinTxSize {
 			log.Info(name+" batch tx size below minimum",
-				"size", calldataSize, "min_tx_size", d.cfg.MinTxSize)
+				"plaintext_size", plaintextCalldataSize,
+				"min_tx_size", d.cfg.MinTxSize,
+				"num_txs", len(batchElements))
 			return nil, nil
 		}
 
 		d.metrics.NumElementsPerBatch().Observe(float64(len(batchElements)))
 		d.metrics.BatchPruneCount.Set(float64(pruneCount))
 
-		log.Info(name+" batch constructed", "num_txs", len(batchElements), "length", len(batchCallData))
+		// Finally, encode the batch using the configured batch type.
+		var calldata = plaintextCalldata
+		if d.cfg.BatchType != BatchTypeLegacy {
+			batchArguments, err := batchParams.Serialize(d.cfg.BatchType)
+			if err != nil {
+				return nil, err
+			}
+			calldata = append(appendSequencerBatchID, batchArguments...)
+		}
+
+		log.Info(name+" batch constructed", "num_txs", len(batchElements), "length", len(calldata))
 
 		opts, err := bind.NewKeyedTransactorWithChainID(
 			d.cfg.PrivKey, d.cfg.ChainID,
@@ -243,7 +261,7 @@ func (d *Driver) CraftBatchTx(
 		opts.Nonce = nonce
 		opts.NoSend = true
 
-		tx, err := d.rawCtcContract.RawTransact(opts, batchCallData)
+		tx, err := d.rawCtcContract.RawTransact(opts, calldata)
 		switch {
 		case err == nil:
 			return tx, nil
@@ -258,7 +276,7 @@ func (d *Driver) CraftBatchTx(
 			log.Warn(d.cfg.Name + " eth_maxPriorityFeePerGas is unsupported " +
 				"by current backend, using fallback gasTipCap")
 			opts.GasTipCap = drivers.FallbackGasTipCap
-			return d.rawCtcContract.RawTransact(opts, batchCallData)
+			return d.rawCtcContract.RawTransact(opts, calldata)
 		default:
 			return nil, err
@@ -47,6 +47,25 @@ type BatchContext struct {
 	BlockNumber uint64 `json:"block_number"`
 }
 
+// IsMarkerContext returns true if the BatchContext is a marker context used to
+// specify the encoding format. This is only valid if called on the first
+// BatchContext in the calldata.
+func (c BatchContext) IsMarkerContext() bool {
+	return c.Timestamp == 0
+}
+
+// MarkerBatchType returns the BatchType specified by a marker BatchContext.
+// The return value is only valid if called on the first BatchContext in the
+// calldata and IsMarkerContext returns true.
+func (c BatchContext) MarkerBatchType() BatchType {
+	switch c.BlockNumber {
+	case 0:
+		return BatchTypeZlib
+	default:
+		return BatchTypeLegacy
+	}
+}
+
 // Write encodes the BatchContext into a 16-byte stream using the following
 // encoding:
 //  - num_sequenced_txs: 3 bytes
@@ -83,13 +102,34 @@ func (c *BatchContext) Read(r io.Reader) error {
 	return readUint64(r, &c.BlockNumber, 5)
 }
 
-// BatchType represents the type of batch being
-// submitted. When the first context in the batch
-// has a timestamp of 0, the blocknumber is interpreted
-// as an enum that represets the type
+// BatchType represents the type of batch being submitted. When the first
+// context in the batch has a timestamp of 0, the blocknumber is interpreted as
+// an enum that represents the type.
 type BatchType int8
 
-// Implements the Stringer interface for BatchType
+const (
+	// BatchTypeLegacy represents the legacy batch type.
+	BatchTypeLegacy BatchType = -1
+
+	// BatchTypeZlib represents a batch type where the transaction data is
+	// compressed using zlib.
+	BatchTypeZlib BatchType = 0
+)
+
+// BatchTypeFromString returns the BatchType enum based on a human readable
+// string.
+func BatchTypeFromString(s string) BatchType {
+	switch s {
+	case "zlib", "ZLIB":
+		return BatchTypeZlib
+	case "legacy", "LEGACY":
+		return BatchTypeLegacy
+	default:
+		return BatchTypeLegacy
+	}
+}
+
+// String implements the Stringer interface for BatchType.
 func (b BatchType) String() string {
 	switch b {
 	case BatchTypeLegacy:
@@ -101,27 +141,26 @@ func (b BatchType) String() string {
 	}
 }
 
-// BatchTypeFromString returns the BatchType
-// enum based on a human readable string
-func BatchTypeFromString(s string) BatchType {
-	switch s {
-	case "zlib", "ZLIB":
-		return BatchTypeZlib
-	case "legacy", "LEGACY":
-		return BatchTypeLegacy
-	default:
-		return BatchTypeLegacy
-	}
-}
-
-const (
-	// BatchTypeLegacy represets the legacy batch type
-	BatchTypeLegacy BatchType = -1
-
-	// BatchTypeZlib represents a batch type where the
-	// transaction data is compressed using zlib
-	BatchTypeZlib BatchType = 0
-)
+// MarkerContext returns the marker context, if any, for the given batch type.
+func (b BatchType) MarkerContext() *BatchContext {
+	switch b {
+
+	// No marker context for legacy encoding.
+	case BatchTypeLegacy:
+		return nil
+
+	// Zlib marker context sets block number equal to zero.
+	case BatchTypeZlib:
+		return &BatchContext{
+			Timestamp:   0,
+			BlockNumber: 0,
+		}
+
+	default:
+		return nil
+	}
+}
 
 // AppendSequencerBatchParams holds the raw data required to submit a batch of
 // L2 txs to L1 CTC contract. Rather than encoding the objects using the
 // standard ABI encoding, a custom encoding is provided in the call data to
@@ -146,9 +185,6 @@ type AppendSequencerBatchParams struct {
 	// Txs contains all sequencer txs that will be recorded in the L1 CTC
 	// contract.
 	Txs []*CachedTx
-
-	// The type of the batch
-	Type BatchType
 }
 
 // Write encodes the AppendSequencerBatchParams using the following format:
@@ -173,7 +209,11 @@ type AppendSequencerBatchParams struct {
 //
 // Note that writing to a bytes.Buffer cannot
 // error, so errors are ignored here
-func (p *AppendSequencerBatchParams) Write(w *bytes.Buffer) error {
+func (p *AppendSequencerBatchParams) Write(
+	w *bytes.Buffer,
+	batchType BatchType,
+) error {
 	_ = writeUint64(w, p.ShouldStartAtElement, 5)
 	_ = writeUint64(w, p.TotalElementsToAppend, 3)
@@ -190,10 +230,10 @@ func (p *AppendSequencerBatchParams) Write(w *bytes.Buffer) error {
 	// copy the contexts as to not malleate the struct
 	// when it is a typed batch
 	contexts := make([]BatchContext, 0, len(p.Contexts)+1)
-	if p.Type == BatchTypeZlib {
-		// All zero values for the single batch context
-		// is desired here as blocknumber 0 means it is a zlib batch
-		contexts = append(contexts, BatchContext{})
+	// Add the marker context, if any, for non-legacy encodings.
+	markerContext := batchType.MarkerContext()
+	if markerContext != nil {
+		contexts = append(contexts, *markerContext)
 	}
 	contexts = append(contexts, p.Contexts...)
@@ -203,7 +243,7 @@ func (p *AppendSequencerBatchParams) Write(w *bytes.Buffer) error {
 		context.Write(w)
 	}
 
-	switch p.Type {
+	switch batchType {
 	case BatchTypeLegacy:
 		// Write each length-prefixed tx.
 		for _, tx := range p.Txs {
@@ -225,7 +265,7 @@ func (p *AppendSequencerBatchParams) Write(w *bytes.Buffer) error {
 	}
 
 	default:
-		return fmt.Errorf("Unknown batch type: %s", p.Type)
+		return fmt.Errorf("Unknown batch type: %s", batchType)
 	}
 
 	return nil
@@ -233,9 +273,12 @@ func (p *AppendSequencerBatchParams) Write(w *bytes.Buffer) error {
 // Serialize performs the same encoding as Write, but returns the resulting
 // bytes slice.
-func (p *AppendSequencerBatchParams) Serialize() ([]byte, error) {
+func (p *AppendSequencerBatchParams) Serialize(
+	batchType BatchType,
+) ([]byte, error) {
 	var buf bytes.Buffer
-	if err := p.Write(&buf); err != nil {
+	if err := p.Write(&buf, batchType); err != nil {
 		return nil, err
 	}
 	return buf.Bytes(), nil
@@ -266,6 +309,9 @@ func (p *AppendSequencerBatchParams) Read(r io.Reader) error {
 		return err
 	}
 
+	// Assume that it is a legacy batch at first; this will be overwritten if
+	// we detect a marker context.
+	var batchType = BatchTypeLegacy
+
 	// Ensure that contexts is never nil
 	p.Contexts = make([]BatchContext, 0)
 	for i := uint64(0); i < numContexts; i++ {
@@ -274,30 +320,33 @@ func (p *AppendSequencerBatchParams) Read(r io.Reader) error {
 			return err
 		}
 
+		if i == 0 && batchContext.IsMarkerContext() {
+			batchType = batchContext.MarkerBatchType()
+			continue
+		}
+
 		p.Contexts = append(p.Contexts, batchContext)
 	}
 
-	// Assume that it is a legacy batch at first
-	p.Type = BatchTypeLegacy
-
-	// Handle backwards compatible batch types
-	if len(p.Contexts) > 0 && p.Contexts[0].Timestamp == 0 {
-		switch p.Contexts[0].BlockNumber {
-		case 0:
-			// zlib compressed transaction data
-			p.Type = BatchTypeZlib
-			// remove the first dummy context
-			p.Contexts = p.Contexts[1:]
-			numContexts--
-
-			zr, err := zlib.NewReader(r)
-			if err != nil {
-				return err
-			}
-			defer zr.Close()
-
-			r = bufio.NewReader(zr)
-		}
-	}
+	// Define a closure to clean up the reader used by the specified encoding.
+	var closeReader func() error
+	switch batchType {
+
+	// The legacy serialization does not require closing, so we instantiate a
+	// dummy closure.
+	case BatchTypeLegacy:
+		closeReader = func() error { return nil }
+
+	// The zlib serialization requires decompression before reading the
+	// plaintext bytes, and also requires proper cleanup.
+	case BatchTypeZlib:
+		zr, err := zlib.NewReader(r)
+		if err != nil {
+			return err
+		}
+		closeReader = zr.Close
+		r = bufio.NewReader(zr)
+	}
 	// Deserialize any transactions. Since the number of txs is omitted
@@ -315,7 +364,7 @@ func (p *AppendSequencerBatchParams) Read(r io.Reader) error {
 		if len(p.Txs) == 0 && len(p.Contexts) != 0 {
 			return ErrMalformedBatch
 		}
-		return nil
+		return closeReader()
 	} else if err != nil {
 		return err
 	}
@@ -327,7 +376,6 @@ func (p *AppendSequencerBatchParams) Read(r io.Reader) error {
 		p.Txs = append(p.Txs, NewCachedTx(tx))
 	}
 }
-
 // writeUint64 writes the bottom `n` bytes of `val` to `w`.
@@ -119,7 +119,6 @@ func testAppendSequencerBatchParamsEncodeDecode(
 		TotalElementsToAppend: test.TotalElementsToAppend,
 		Contexts:              test.Contexts,
 		Txs:                   nil,
-		Type:                  sequencer.BatchTypeLegacy,
 	}
 
 	// Decode the batch from the test string.
@@ -133,7 +132,6 @@ func testAppendSequencerBatchParamsEncodeDecode(
 	} else {
 		require.Nil(t, err)
 	}
-	require.Equal(t, params.Type, sequencer.BatchTypeLegacy)
 
 	// Assert that the decoded params match the expected params. The
 	// transactions are compared separately (via hash), since the internal
@@ -149,7 +147,7 @@ func testAppendSequencerBatchParamsEncodeDecode(
 	// Finally, encode the decoded object and assert it matches the original
 	// hex string.
-	paramsBytes, err := params.Serialize()
+	paramsBytes, err := params.Serialize(sequencer.BatchTypeLegacy)
 
 	// Return early when testing error cases, no need to reserialize again
 	if test.Error {
@@ -161,17 +159,14 @@ func testAppendSequencerBatchParamsEncodeDecode(
 	require.Equal(t, test.HexEncoding, hex.EncodeToString(paramsBytes))
 
 	// Serialize the batches in compressed form
-	params.Type = sequencer.BatchTypeZlib
-	compressedParamsBytes, err := params.Serialize()
+	compressedParamsBytes, err := params.Serialize(sequencer.BatchTypeZlib)
 	require.Nil(t, err)
 
 	// Deserialize the compressed batch
 	var paramsCompressed sequencer.AppendSequencerBatchParams
 	err = paramsCompressed.Read(bytes.NewReader(compressedParamsBytes))
 	require.Nil(t, err)
-	require.Equal(t, paramsCompressed.Type, sequencer.BatchTypeZlib)
-	expParams.Type = sequencer.BatchTypeZlib
 
 	decompressedTxs := paramsCompressed.Txs
 	paramsCompressed.Txs = nil
...@@ -189,3 +184,71 @@ func compareTxs(t *testing.T, a []*l2types.Transaction, b []*sequencer.CachedTx) ...@@ -189,3 +184,71 @@ func compareTxs(t *testing.T, a []*l2types.Transaction, b []*sequencer.CachedTx)
require.Equal(t, txA.Hash(), b[i].Tx().Hash()) require.Equal(t, txA.Hash(), b[i].Tx().Hash())
} }
} }
// TestMarkerContext asserts that each batch type returns the correct marker
// context.
func TestMarkerContext(t *testing.T) {
batchTypes := []sequencer.BatchType{
sequencer.BatchTypeLegacy,
sequencer.BatchTypeZlib,
}
for _, batchType := range batchTypes {
t.Run(batchType.String(), func(t *testing.T) {
markerContext := batchType.MarkerContext()
if batchType == sequencer.BatchTypeLegacy {
require.Nil(t, markerContext)
} else {
require.NotNil(t, markerContext)
// All marker contexts MUST have a zero timestamp.
require.Equal(t, uint64(0), markerContext.Timestamp)
// Currently all other fields besides block number are defined
// as zero.
require.Equal(t, uint64(0), markerContext.NumSequencedTxs)
require.Equal(t, uint64(0), markerContext.NumSubsequentQueueTxs)
// Assert that the block number for each batch type is set to
// the correct constant.
switch batchType {
case sequencer.BatchTypeZlib:
require.Equal(t, uint64(0), markerContext.BlockNumber)
default:
t.Fatalf("unknown batch type")
}
// Ensure MarkerBatchType produces the expected BatchType.
require.Equal(t, batchType, markerContext.MarkerBatchType())
}
})
}
}
// TestIsMarkerContext asserts that IsMarkerContext returns true iff the
// timestamp is zero.
func TestIsMarkerContext(t *testing.T) {
batchContext := sequencer.BatchContext{
NumSequencedTxs: 1,
NumSubsequentQueueTxs: 2,
Timestamp: 3,
BlockNumber: 4,
}
require.False(t, batchContext.IsMarkerContext())
batchContext = sequencer.BatchContext{
NumSequencedTxs: 0,
NumSubsequentQueueTxs: 0,
Timestamp: 3,
BlockNumber: 0,
}
require.False(t, batchContext.IsMarkerContext())
batchContext = sequencer.BatchContext{
NumSequencedTxs: 1,
NumSubsequentQueueTxs: 2,
Timestamp: 0,
BlockNumber: 4,
}
require.True(t, batchContext.IsMarkerContext())
}
...@@ -66,6 +66,20 @@ var ( ...@@ -66,6 +66,20 @@ var (
Required: true, Required: true,
EnvVar: prefixEnvVar("MAX_L1_TX_SIZE"), EnvVar: prefixEnvVar("MAX_L1_TX_SIZE"),
} }
MinStateRootElementsFlag = cli.Uint64Flag{
Name: "min-state-root-elements",
Usage: "Minimum number of elements required to submit a state " +
"root batch",
Required: true,
EnvVar: prefixEnvVar("MIN_STATE_ROOT_ELEMENTS"),
}
MaxStateRootElementsFlag = cli.Uint64Flag{
Name: "max-state-root-elements",
Usage: "Maximum number of elements allowed in a submitted state " +
"root batch",
Required: true,
EnvVar: prefixEnvVar("MAX_STATE_ROOT_ELEMENTS"),
}
MaxBatchSubmissionTimeFlag = cli.DurationFlag{ MaxBatchSubmissionTimeFlag = cli.DurationFlag{
Name: "max-batch-submission-time", Name: "max-batch-submission-time",
Usage: "Maximum amount of time that we will wait before " + Usage: "Maximum amount of time that we will wait before " +
...@@ -240,6 +254,8 @@ var requiredFlags = []cli.Flag{ ...@@ -240,6 +254,8 @@ var requiredFlags = []cli.Flag{
SCCAddressFlag, SCCAddressFlag,
MinL1TxSizeFlag, MinL1TxSizeFlag,
MaxL1TxSizeFlag, MaxL1TxSizeFlag,
MinStateRootElementsFlag,
MaxStateRootElementsFlag,
MaxBatchSubmissionTimeFlag, MaxBatchSubmissionTimeFlag,
PollIntervalFlag, PollIntervalFlag,
NumConfirmationsFlag, NumConfirmationsFlag,
......
...@@ -231,6 +231,7 @@ func TestClearPendingTxClearingTxConfirms(t *testing.T) { ...@@ -231,6 +231,7 @@ func TestClearPendingTxClearingTxConfirms(t *testing.T) {
return &types.Receipt{ return &types.Receipt{
TxHash: txHash, TxHash: txHash,
BlockNumber: big.NewInt(int64(testBlockNumber)), BlockNumber: big.NewInt(int64(testBlockNumber)),
Status: types.ReceiptStatusSuccessful,
}, nil }, nil
}, },
}) })
...@@ -296,6 +297,7 @@ func TestClearPendingTxMultipleConfs(t *testing.T) { ...@@ -296,6 +297,7 @@ func TestClearPendingTxMultipleConfs(t *testing.T) {
return &types.Receipt{ return &types.Receipt{
TxHash: txHash, TxHash: txHash,
BlockNumber: big.NewInt(int64(testBlockNumber)), BlockNumber: big.NewInt(int64(testBlockNumber)),
Status: types.ReceiptStatusSuccessful,
}, nil }, nil
}, },
}, numConfs) }, numConfs)
......
...@@ -215,6 +215,18 @@ func (s *Service) eventLoop() { ...@@ -215,6 +215,18 @@ func (s *Service) eventLoop() {
receipt, err := s.txMgr.Send( receipt, err := s.txMgr.Send(
s.ctx, updateGasPrice, s.cfg.Driver.SendTransaction, s.ctx, updateGasPrice, s.cfg.Driver.SendTransaction,
) )
// Record the confirmation time and gas used if we receive a
// receipt, as this indicates the transaction confirmed. We record
// these metrics here as the transaction may have reverted, and will
// abort below.
if receipt != nil {
batchConfirmationTime := time.Since(batchConfirmationStart) /
time.Millisecond
s.metrics.BatchConfirmationTimeMs().Set(float64(batchConfirmationTime))
s.metrics.SubmissionGasUsedWei().Set(float64(receipt.GasUsed))
}
if err != nil { if err != nil {
log.Error(name+" unable to publish batch tx", log.Error(name+" unable to publish batch tx",
"err", err) "err", err)
...@@ -225,11 +237,7 @@ func (s *Service) eventLoop() { ...@@ -225,11 +237,7 @@ func (s *Service) eventLoop() {
// The transaction was successfully submitted. // The transaction was successfully submitted.
log.Info(name+" batch tx successfully published", log.Info(name+" batch tx successfully published",
"tx_hash", receipt.TxHash) "tx_hash", receipt.TxHash)
batchConfirmationTime := time.Since(batchConfirmationStart) /
time.Millisecond
s.metrics.BatchConfirmationTimeMs().Set(float64(batchConfirmationTime))
s.metrics.BatchesSubmitted().Inc() s.metrics.BatchesSubmitted().Inc()
s.metrics.SubmissionGasUsedWei().Set(float64(receipt.GasUsed))
s.metrics.SubmissionTimestamp().Set(float64(time.Now().UnixNano() / 1e6)) s.metrics.SubmissionTimestamp().Set(float64(time.Now().UnixNano() / 1e6))
case err := <-s.ctx.Done(): case err := <-s.ctx.Done():
......
...@@ -2,6 +2,7 @@ package txmgr ...@@ -2,6 +2,7 @@ package txmgr
import ( import (
"context" "context"
"errors"
"math/big" "math/big"
"strings" "strings"
"sync" "sync"
...@@ -12,6 +13,9 @@ import ( ...@@ -12,6 +13,9 @@ import (
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
) )
// ErrReverted signals that a mined transaction reverted.
var ErrReverted = errors.New("transaction reverted")
// UpdateGasPriceSendTxFunc defines a function signature for publishing a // UpdateGasPriceSendTxFunc defines a function signature for publishing a
// desired tx with a specific gas price. Implementations of this signature // desired tx with a specific gas price. Implementations of this signature
// should also return promptly when the context is canceled. // should also return promptly when the context is canceled.
...@@ -225,6 +229,9 @@ func (m *SimpleTxManager) Send( ...@@ -225,6 +229,9 @@ func (m *SimpleTxManager) Send(
// The transaction has confirmed. // The transaction has confirmed.
case receipt := <-receiptChan: case receipt := <-receiptChan:
if receipt.Status == types.ReceiptStatusFailed {
return receipt, ErrReverted
}
return receipt, nil return receipt, nil
} }
} }
...@@ -288,7 +295,10 @@ func waitMined( ...@@ -288,7 +295,10 @@ func waitMined(
// tipHeight. The equation is rewritten in this form to avoid // tipHeight. The equation is rewritten in this form to avoid
// underflows. // underflows.
if txHeight+numConfirmations <= tipHeight+1 { if txHeight+numConfirmations <= tipHeight+1 {
log.Info("Transaction confirmed", "txHash", txHash) reverted := receipt.Status == types.ReceiptStatusFailed
log.Info("Transaction confirmed",
"txHash", txHash,
"reverted", reverted)
return receipt, nil return receipt, nil
} }
......
...@@ -98,6 +98,7 @@ func (g *gasPricer) sample() (*big.Int, *big.Int) { ...@@ -98,6 +98,7 @@ func (g *gasPricer) sample() (*big.Int, *big.Int) {
type minedTxInfo struct { type minedTxInfo struct {
gasFeeCap *big.Int gasFeeCap *big.Int
blockNumber uint64 blockNumber uint64
reverted bool
} }
// mockBackend implements txmgr.ReceiptSource that tracks mined transactions // mockBackend implements txmgr.ReceiptSource that tracks mined transactions
...@@ -123,6 +124,20 @@ func newMockBackend() *mockBackend { ...@@ -123,6 +124,20 @@ func newMockBackend() *mockBackend {
// TransactionReceipt with a matching txHash will result in a non-nil receipt. // TransactionReceipt with a matching txHash will result in a non-nil receipt.
// If a nil txHash is supplied this has the effect of mining an empty block. // If a nil txHash is supplied this has the effect of mining an empty block.
func (b *mockBackend) mine(txHash *common.Hash, gasFeeCap *big.Int) { func (b *mockBackend) mine(txHash *common.Hash, gasFeeCap *big.Int) {
b.mineWithStatus(txHash, gasFeeCap, false)
}
// mineWithStatus records a (txHash, gasFeeCap) pair as confirmed, but also
// includes the option to specify whether or not the transaction reverted.
// Subsequent calls to TransactionReceipt with a matching txHash will result in
// a non-nil receipt. If a nil txHash is supplied this has the effect of mining
// an empty block.
func (b *mockBackend) mineWithStatus(
txHash *common.Hash,
gasFeeCap *big.Int,
revert bool,
) {
b.mu.Lock() b.mu.Lock()
defer b.mu.Unlock() defer b.mu.Unlock()
...@@ -131,6 +146,7 @@ func (b *mockBackend) mine(txHash *common.Hash, gasFeeCap *big.Int) { ...@@ -131,6 +146,7 @@ func (b *mockBackend) mine(txHash *common.Hash, gasFeeCap *big.Int) {
b.minedTxs[*txHash] = minedTxInfo{ b.minedTxs[*txHash] = minedTxInfo{
gasFeeCap: gasFeeCap, gasFeeCap: gasFeeCap,
blockNumber: b.blockHeight, blockNumber: b.blockHeight,
reverted: revert,
} }
} }
} }
...@@ -160,12 +176,18 @@ func (b *mockBackend) TransactionReceipt( ...@@ -160,12 +176,18 @@ func (b *mockBackend) TransactionReceipt(
return nil, nil return nil, nil
} }
var status = types.ReceiptStatusSuccessful
if txInfo.reverted {
status = types.ReceiptStatusFailed
}
// Return the gas fee cap for the transaction in the GasUsed field so that // Return the gas fee cap for the transaction in the GasUsed field so that
// we can assert the proper tx confirmed in our tests. // we can assert the proper tx confirmed in our tests.
return &types.Receipt{ return &types.Receipt{
TxHash: txHash, TxHash: txHash,
GasUsed: txInfo.gasFeeCap.Uint64(), GasUsed: txInfo.gasFeeCap.Uint64(),
BlockNumber: big.NewInt(int64(txInfo.blockNumber)), BlockNumber: big.NewInt(int64(txInfo.blockNumber)),
Status: status,
}, nil }, nil
} }
...@@ -201,6 +223,39 @@ func TestTxMgrConfirmAtMinGasPrice(t *testing.T) { ...@@ -201,6 +223,39 @@ func TestTxMgrConfirmAtMinGasPrice(t *testing.T) {
require.Equal(t, gasPricer.expGasFeeCap().Uint64(), receipt.GasUsed) require.Equal(t, gasPricer.expGasFeeCap().Uint64(), receipt.GasUsed)
} }
// TestTxMgrFailsForRevertedTxn asserts that Send returns ErrReverted if the
// confirmed transaction reverts during execution, and returns the resulting
// receipt.
func TestTxMgrFailsForRevertedTxn(t *testing.T) {
t.Parallel()
h := newTestHarness()
gasPricer := newGasPricer(1)
updateGasPrice := func(ctx context.Context) (*types.Transaction, error) {
gasTipCap, gasFeeCap := gasPricer.sample()
return types.NewTx(&types.DynamicFeeTx{
GasTipCap: gasTipCap,
GasFeeCap: gasFeeCap,
}), nil
}
sendTx := func(ctx context.Context, tx *types.Transaction) error {
if gasPricer.shouldMine(tx.GasFeeCap()) {
txHash := tx.Hash()
h.backend.mineWithStatus(&txHash, tx.GasFeeCap(), true)
}
return nil
}
ctx := context.Background()
receipt, err := h.mgr.Send(ctx, updateGasPrice, sendTx)
require.Equal(t, txmgr.ErrReverted, err)
require.NotNil(t, receipt)
require.Equal(t, gasPricer.expGasFeeCap().Uint64(), receipt.GasUsed)
}
// TestTxMgrNeverConfirmCancel asserts that a Send can be canceled even if no // TestTxMgrNeverConfirmCancel asserts that a Send can be canceled even if no
// transaction is mined. This is done to ensure the tx mgr can properly // transaction is mined. This is done to ensure the tx mgr can properly
// abort on shutdown, even if a txn is in the process of being published. // abort on shutdown, even if a txn is in the process of being published.
...@@ -519,6 +574,7 @@ func (b *failingBackend) TransactionReceipt( ...@@ -519,6 +574,7 @@ func (b *failingBackend) TransactionReceipt(
return &types.Receipt{ return &types.Receipt{
TxHash: txHash, TxHash: txHash,
BlockNumber: big.NewInt(1), BlockNumber: big.NewInt(1),
Status: types.ReceiptStatusSuccessful,
}, nil }, nil
} }
......
...@@ -4,7 +4,7 @@ go 1.17 ...@@ -4,7 +4,7 @@ go 1.17
require ( require (
github.com/ethereum-optimism/optimism/l2geth v0.0.0-20220104205740-f39387287484 github.com/ethereum-optimism/optimism/l2geth v0.0.0-20220104205740-f39387287484
github.com/ethereum/go-ethereum v1.10.14 github.com/ethereum/go-ethereum v1.10.16
github.com/getsentry/sentry-go v0.12.0 github.com/getsentry/sentry-go v0.12.0
github.com/google/uuid v1.3.0 github.com/google/uuid v1.3.0
github.com/gorilla/mux v1.8.0 github.com/gorilla/mux v1.8.0
...@@ -24,7 +24,7 @@ require ( ...@@ -24,7 +24,7 @@ require (
github.com/cespare/xxhash/v2 v2.1.1 // indirect github.com/cespare/xxhash/v2 v2.1.1 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect
github.com/davecgh/go-spew v1.1.1 // indirect github.com/davecgh/go-spew v1.1.1 // indirect
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea // indirect github.com/deckarep/golang-set v1.8.0 // indirect
github.com/edsrzf/mmap-go v1.0.0 // indirect github.com/edsrzf/mmap-go v1.0.0 // indirect
github.com/elastic/gosigar v0.8.1-0.20180330100440-37f05ff46ffa // indirect github.com/elastic/gosigar v0.8.1-0.20180330100440-37f05ff46ffa // indirect
github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff // indirect github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff // indirect
...@@ -35,8 +35,8 @@ require ( ...@@ -35,8 +35,8 @@ require (
github.com/gorilla/websocket v1.4.2 // indirect github.com/gorilla/websocket v1.4.2 // indirect
github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d // indirect github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d // indirect
github.com/huin/goupnp v1.0.2 // indirect github.com/huin/goupnp v1.0.2 // indirect
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458 // indirect github.com/jackpal/go-nat-pmp v1.0.2 // indirect
github.com/karalabe/usb v0.0.0-20211005121534-4c5740d64559 // indirect github.com/karalabe/usb v0.0.2 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/olekukonko/tablewriter v0.0.5 // indirect github.com/olekukonko/tablewriter v0.0.5 // indirect
......
...@@ -115,8 +115,9 @@ github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2 ...@@ -115,8 +115,9 @@ github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea h1:j4317fAZh7X6GqbFowYdYdI0L9bwxL07jyPZIdepyZ0=
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea/go.mod h1:93vsz/8Wt4joVM7c2AVqh+YRMiUSc14yDtF28KmMOgQ= github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea/go.mod h1:93vsz/8Wt4joVM7c2AVqh+YRMiUSc14yDtF28KmMOgQ=
github.com/deckarep/golang-set v1.8.0 h1:sk9/l/KqpunDwP7pSjUg0keiOOLEnOBHzykLrsPppp4=
github.com/deckarep/golang-set v1.8.0/go.mod h1:5nI87KwE7wgsBU1F4GKAw2Qod7p5kyS383rP6+o6qqo=
github.com/decred/dcrd/lru v1.0.0/go.mod h1:mxKOwFd7lFjN2GZYsiz/ecgqR6kkYAl+0pz0tEMk218= github.com/decred/dcrd/lru v1.0.0/go.mod h1:mxKOwFd7lFjN2GZYsiz/ecgqR6kkYAl+0pz0tEMk218=
github.com/deepmap/oapi-codegen v1.6.0/go.mod h1:ryDa9AgbELGeB+YEXE1dR53yAjHwFvE9iAUlWl9Al3M= github.com/deepmap/oapi-codegen v1.6.0/go.mod h1:ryDa9AgbELGeB+YEXE1dR53yAjHwFvE9iAUlWl9Al3M=
github.com/deepmap/oapi-codegen v1.8.2/go.mod h1:YLgSKSDv/bZQB7N4ws6luhozi3cEdRktEqrX88CvjIw= github.com/deepmap/oapi-codegen v1.8.2/go.mod h1:YLgSKSDv/bZQB7N4ws6luhozi3cEdRktEqrX88CvjIw=
...@@ -143,8 +144,8 @@ github.com/etcd-io/bbolt v1.3.3/go.mod h1:ZF2nL25h33cCyBtcyWeZ2/I3HQOfTP+0PIEvHj ...@@ -143,8 +144,8 @@ github.com/etcd-io/bbolt v1.3.3/go.mod h1:ZF2nL25h33cCyBtcyWeZ2/I3HQOfTP+0PIEvHj
github.com/ethereum-optimism/optimism/l2geth v0.0.0-20220104205740-f39387287484 h1:HbNZa+JqIBEWgTmqUY6/iHNNKWVycVFSQ9BJitYIy6U= github.com/ethereum-optimism/optimism/l2geth v0.0.0-20220104205740-f39387287484 h1:HbNZa+JqIBEWgTmqUY6/iHNNKWVycVFSQ9BJitYIy6U=
github.com/ethereum-optimism/optimism/l2geth v0.0.0-20220104205740-f39387287484/go.mod h1:Tiv7YftnDjuhq2ktkynxSujAASpUxZP+E0RRPjQD3z0= github.com/ethereum-optimism/optimism/l2geth v0.0.0-20220104205740-f39387287484/go.mod h1:Tiv7YftnDjuhq2ktkynxSujAASpUxZP+E0RRPjQD3z0=
github.com/ethereum/go-ethereum v1.10.12/go.mod h1:W3yfrFyL9C1pHcwY5hmRHVDaorTiQxhYBkKyu5mEDHw= github.com/ethereum/go-ethereum v1.10.12/go.mod h1:W3yfrFyL9C1pHcwY5hmRHVDaorTiQxhYBkKyu5mEDHw=
github.com/ethereum/go-ethereum v1.10.14 h1:EJ/ucQzFlgKgwblIwU8R6ABnZ9kgUnIG2+Q1tiSrt4M= github.com/ethereum/go-ethereum v1.10.16 h1:3oPrumn0bCW/idjcxMn5YYVCdK7VzJYIvwGZUGLEaoc=
github.com/ethereum/go-ethereum v1.10.14/go.mod h1:W3yfrFyL9C1pHcwY5hmRHVDaorTiQxhYBkKyu5mEDHw= github.com/ethereum/go-ethereum v1.10.16/go.mod h1:Anj6cxczl+AHy63o4X9O8yWNHuN5wMpfb8MAnHkWn7Y=
github.com/fasthttp-contrib/websocket v0.0.0-20160511215533-1f3b11f56072/go.mod h1:duJ4Jxv5lDcvg4QuQr0oowTf7dz4/CR8NtyCooz9HL8= github.com/fasthttp-contrib/websocket v0.0.0-20160511215533-1f3b11f56072/go.mod h1:duJ4Jxv5lDcvg4QuQr0oowTf7dz4/CR8NtyCooz9HL8=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M= github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
...@@ -250,6 +251,7 @@ github.com/gorilla/websocket v1.4.1/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/ad ...@@ -250,6 +251,7 @@ github.com/gorilla/websocket v1.4.1/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/ad
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc= github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/graph-gophers/graphql-go v0.0.0-20201113091052-beb923fada29/go.mod h1:9CQHMSxwO4MprSdzoIEobiHpoLtHm77vfxsvsIN5Vuc= github.com/graph-gophers/graphql-go v0.0.0-20201113091052-beb923fada29/go.mod h1:9CQHMSxwO4MprSdzoIEobiHpoLtHm77vfxsvsIN5Vuc=
github.com/graph-gophers/graphql-go v1.3.0/go.mod h1:9CQHMSxwO4MprSdzoIEobiHpoLtHm77vfxsvsIN5Vuc=
github.com/hashicorp/go-bexpr v0.1.10 h1:9kuI5PFotCboP3dkDYFr/wi0gg0QVbSNz5oFRpxn4uE= github.com/hashicorp/go-bexpr v0.1.10 h1:9kuI5PFotCboP3dkDYFr/wi0gg0QVbSNz5oFRpxn4uE=
github.com/hashicorp/go-bexpr v0.1.10/go.mod h1:oxlubA2vC/gFVfX1A6JGp7ls7uCDlfJn732ehYYg+g0= github.com/hashicorp/go-bexpr v0.1.10/go.mod h1:oxlubA2vC/gFVfX1A6JGp7ls7uCDlfJn732ehYYg+g0=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
...@@ -285,8 +287,9 @@ github.com/iris-contrib/go.uuid v2.0.0+incompatible/go.mod h1:iz2lgM/1UnEf1kP0L/ ...@@ -285,8 +287,9 @@ github.com/iris-contrib/go.uuid v2.0.0+incompatible/go.mod h1:iz2lgM/1UnEf1kP0L/
github.com/iris-contrib/jade v1.1.3/go.mod h1:H/geBymxJhShH5kecoiOCSssPX7QWYH7UaeZTSWddIk= github.com/iris-contrib/jade v1.1.3/go.mod h1:H/geBymxJhShH5kecoiOCSssPX7QWYH7UaeZTSWddIk=
github.com/iris-contrib/pongo2 v0.0.1/go.mod h1:Ssh+00+3GAZqSQb30AvBRNxBx7rf0GqwkjqxNd0u65g= github.com/iris-contrib/pongo2 v0.0.1/go.mod h1:Ssh+00+3GAZqSQb30AvBRNxBx7rf0GqwkjqxNd0u65g=
github.com/iris-contrib/schema v0.0.1/go.mod h1:urYA3uvUNG1TIIjOSCzHr9/LmbQo8LrOcOqfqxa4hXw= github.com/iris-contrib/schema v0.0.1/go.mod h1:urYA3uvUNG1TIIjOSCzHr9/LmbQo8LrOcOqfqxa4hXw=
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458 h1:6OvNmYgJyexcZ3pYbTI9jWx5tHo1Dee/tWbLMfPe2TA=
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc= github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jackpal/go-nat-pmp v1.0.2 h1:KzKSgb7qkJvOUTqYl9/Hg/me3pWgBmERKrTGD7BdWus=
github.com/jackpal/go-nat-pmp v1.0.2/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jarcoal/httpmock v1.0.8/go.mod h1:ATjnClrvW/3tijVmpL/va5Z3aAyGvqU3gCT8nX0Txik= github.com/jarcoal/httpmock v1.0.8/go.mod h1:ATjnClrvW/3tijVmpL/va5Z3aAyGvqU3gCT8nX0Txik=
github.com/jedisct1/go-minisign v0.0.0-20190909160543-45766022959e/go.mod h1:G1CVv03EnqU1wYL2dFwXxW2An0az9JTl/ZsqXQeBlkU= github.com/jedisct1/go-minisign v0.0.0-20190909160543-45766022959e/go.mod h1:G1CVv03EnqU1wYL2dFwXxW2An0az9JTl/ZsqXQeBlkU=
github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
...@@ -304,8 +307,9 @@ github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7V ...@@ -304,8 +307,9 @@ github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7V
github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes= github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
github.com/jwilder/encoding v0.0.0-20170811194829-b4e1701a28ef/go.mod h1:Ct9fl0F6iIOGgxJ5npU/IUOhOhqlVrGjyIZc8/MagT0= github.com/jwilder/encoding v0.0.0-20170811194829-b4e1701a28ef/go.mod h1:Ct9fl0F6iIOGgxJ5npU/IUOhOhqlVrGjyIZc8/MagT0=
github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88/go.mod h1:3w7q1U84EfirKl04SVQ/s7nPm1ZPhiXd34z40TNz36k= github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88/go.mod h1:3w7q1U84EfirKl04SVQ/s7nPm1ZPhiXd34z40TNz36k=
github.com/karalabe/usb v0.0.0-20211005121534-4c5740d64559 h1:0VWDXPNE0brOek1Q8bLfzKkvOzwbQE/snjGojlCr8CY=
github.com/karalabe/usb v0.0.0-20211005121534-4c5740d64559/go.mod h1:Od972xHfMJowv7NGVDiWVxk2zxnWgjLlJzE+F4F7AGU= github.com/karalabe/usb v0.0.0-20211005121534-4c5740d64559/go.mod h1:Od972xHfMJowv7NGVDiWVxk2zxnWgjLlJzE+F4F7AGU=
github.com/karalabe/usb v0.0.2 h1:M6QQBNxF+CQ8OFvxrT90BA0qBOXymndZnk5q235mFc4=
github.com/karalabe/usb v0.0.2/go.mod h1:Od972xHfMJowv7NGVDiWVxk2zxnWgjLlJzE+F4F7AGU=
github.com/kataras/golog v0.0.10/go.mod h1:yJ8YKCmyL+nWjERB90Qwn+bdyBZsaQwU3bTVFgkFIp8= github.com/kataras/golog v0.0.10/go.mod h1:yJ8YKCmyL+nWjERB90Qwn+bdyBZsaQwU3bTVFgkFIp8=
github.com/kataras/iris/v12 v12.1.8/go.mod h1:LMYy4VlP67TQ3Zgriz8RE2h2kMZV2SgMYbq3UhfoFmE= github.com/kataras/iris/v12 v12.1.8/go.mod h1:LMYy4VlP67TQ3Zgriz8RE2h2kMZV2SgMYbq3UhfoFmE=
github.com/kataras/neffos v0.0.14/go.mod h1:8lqADm8PnbeFfL7CLXh1WHw53dG27MC3pgi2R1rmoTE= github.com/kataras/neffos v0.0.14/go.mod h1:8lqADm8PnbeFfL7CLXh1WHw53dG27MC3pgi2R1rmoTE=
......
...@@ -13,8 +13,12 @@ DISBURSER_ARTIFACT := ../../packages/contracts/artifacts/contracts/L2/teleportr/ ...@@ -13,8 +13,12 @@ DISBURSER_ARTIFACT := ../../packages/contracts/artifacts/contracts/L2/teleportr/
teleportr: teleportr:
env GO111MODULE=on go build -v $(LDFLAGS) ./cmd/teleportr env GO111MODULE=on go build -v $(LDFLAGS) ./cmd/teleportr
teleportr-api:
env GO111MODULE=on go build -v $(LDFLAGS) ./cmd/teleportr-api
clean: clean:
rm teleportr rm teleportr
rm teleportr-api
test: test:
go test -v ./... go test -v ./...
...@@ -48,6 +52,7 @@ bindings-disburser: ...@@ -48,6 +52,7 @@ bindings-disburser:
.PHONY: \ .PHONY: \
teleportr \ teleportr \
teleportr-api \
bindings \ bindings \
bindings-deposit \ bindings-deposit \
bindings-disburser \ bindings-disburser \
......
package api
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
const TeleportrAPINamespace = "teleportr_api"
var (
rpcRequestsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: TeleportrAPINamespace,
Name: "rpc_requests_total",
Help: "Count of total client RPC requests.",
})
httpResponseCodesTotal = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: TeleportrAPINamespace,
Name: "http_response_codes_total",
Help: "Count of total HTTP response codes.",
}, []string{
"status_code",
})
httpRequestDurationSumm = promauto.NewSummary(prometheus.SummaryOpts{
Namespace: TeleportrAPINamespace,
Name: "http_request_duration_seconds",
Help: "Summary of HTTP request durations, in seconds.",
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.95: 0.005, 0.99: 0.001},
})
databaseErrorsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: TeleportrAPINamespace,
Name: "database_errors_total",
Help: "Count of total database failures.",
}, []string{
"method",
})
rpcErrorsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: TeleportrAPINamespace,
Name: "rpc_errors_total",
Help: "Count of total L1 rpc failures.",
}, []string{
"method",
})
)
package main
import (
"fmt"
"os"
"github.com/ethereum/go-ethereum/log"
"github.com/urfave/cli"
"github.com/ethereum-optimism/optimism/go/teleportr/api"
"github.com/ethereum-optimism/optimism/go/teleportr/flags"
)
var (
GitVersion = ""
GitCommit = ""
GitDate = ""
)
func main() {
// Set up logger with a default INFO level in case we fail to parse flags.
// Otherwise the final critical log won't show what the parsing error was.
log.Root().SetHandler(
log.LvlFilterHandler(
log.LvlInfo,
log.StreamHandler(os.Stdout, log.TerminalFormat(true)),
),
)
app := cli.NewApp()
app.Flags = flags.APIFlags
app.Version = fmt.Sprintf("%s-%s-%s", GitVersion, GitCommit, GitDate)
app.Name = "teleportr-api"
app.Usage = "Teleportr API server"
app.Description = "API serving teleportr data"
app.Action = api.Main(GitVersion)
err := app.Run(os.Args)
if err != nil {
log.Crit("Application failed", "message", err)
}
}
...@@ -21,17 +21,6 @@ var ( ...@@ -21,17 +21,6 @@ var (
ErrUnknownDeposit = errors.New("unknown deposit") ErrUnknownDeposit = errors.New("unknown deposit")
) )
// Deposit represents an event emitted from the TeleportrDeposit contract on L1,
// along with additional info about the tx that generated the event.
type Deposit struct {
ID uint64
TxnHash common.Hash
BlockNumber uint64
BlockTimestamp time.Time
Address common.Address
Amount *big.Int
}
// ConfirmationInfo holds metadata about a tx on either the L1 or L2 chain. // ConfirmationInfo holds metadata about a tx on either the L1 or L2 chain.
type ConfirmationInfo struct { type ConfirmationInfo struct {
TxnHash common.Hash TxnHash common.Hash
...@@ -39,15 +28,28 @@ type ConfirmationInfo struct { ...@@ -39,15 +28,28 @@ type ConfirmationInfo struct {
BlockTimestamp time.Time BlockTimestamp time.Time
} }
// CompletedTeleport represents an L1 deposit that has been disbursed on L2. The // Deposit represents an event emitted from the TeleportrDeposit contract on L1,
// struct also hold info about the L1 and L2 txns involved. // along with additional info about the tx that generated the event.
type CompletedTeleport struct { type Deposit struct {
ID uint64 ID uint64
Address common.Address Address common.Address
Amount *big.Int Amount *big.Int
Success bool
Deposit ConfirmationInfo ConfirmationInfo
Disbursement ConfirmationInfo }
type Disbursement struct {
Success bool
ConfirmationInfo
}
// Teleport represents the combination of an L1 deposit and its disbursement on
// L2. Disbursement will be nil if the L2 disbursement has not occurred.
type Teleport struct {
Deposit
Disbursement *Disbursement
} }
const createDepositsTable = ` const createDepositsTable = `
...@@ -61,6 +63,14 @@ CREATE TABLE IF NOT EXISTS deposits ( ...@@ -61,6 +63,14 @@ CREATE TABLE IF NOT EXISTS deposits (
); );
` `
const createDepositTxnHashIndex = `
CREATE INDEX ON deposits (txn_hash)
`
const createDepositAddressIndex = `
CREATE INDEX ON deposits (address)
`
const createDisbursementsTable = ` const createDisbursementsTable = `
CREATE TABLE IF NOT EXISTS disbursements ( CREATE TABLE IF NOT EXISTS disbursements (
id INT8 NOT NULL PRIMARY KEY REFERENCES deposits(id), id INT8 NOT NULL PRIMARY KEY REFERENCES deposits(id),
...@@ -89,6 +99,8 @@ CREATE TABLE IF NOT EXISTS pending_txs ( ...@@ -89,6 +99,8 @@ CREATE TABLE IF NOT EXISTS pending_txs (
var migrations = []string{ var migrations = []string{
createDepositsTable, createDepositsTable,
createDepositTxnHashIndex,
createDepositAddressIndex,
createDisbursementsTable, createDisbursementsTable,
lastProcessedBlockTable, lastProcessedBlockTable,
pendingTxTable, pendingTxTable,
...@@ -135,7 +147,7 @@ func (c Config) WithoutDB() string { ...@@ -135,7 +147,7 @@ func (c Config) WithoutDB() string {
// sslMode returns "require" if EnableSSL is true, otherwise returns "disable". // sslMode returns "require" if EnableSSL is true, otherwise returns "disable".
func (c Config) sslMode() string { func (c Config) sslMode() string {
if c.EnableSSL { if c.EnableSSL {
return "enable" return "require"
} }
return "disable" return "disable"
} }
...@@ -371,6 +383,69 @@ func (d *Database) UpsertDisbursement( ...@@ -371,6 +383,69 @@ func (d *Database) UpsertDisbursement(
return nil return nil
} }
const loadTeleportByDepositHashQuery = `
SELECT
dep.id, dep.address, dep.amount, dis.success,
dep.txn_hash, dep.block_number, dep.block_timestamp,
dis.txn_hash, dis.block_number, dis.block_timestamp
FROM deposits AS dep
LEFT JOIN disbursements AS dis
ON dep.id = dis.id AND dep.txn_hash = $1
LIMIT 1
`
func (d *Database) LoadTeleportByDepositHash(
txHash common.Hash,
) (*Teleport, error) {
row := d.conn.QueryRow(loadTeleportByDepositHashQuery, txHash.String())
teleport, err := scanTeleport(row)
if err == sql.ErrNoRows {
return nil, nil
} else if err != nil {
return nil, err
}
return &teleport, nil
}
const loadTeleportsByAddressQuery = `
SELECT
dep.id, dep.address, dep.amount, dis.success,
dep.txn_hash, dep.block_number, dep.block_timestamp,
dis.txn_hash, dis.block_number, dis.block_timestamp
FROM deposits AS dep
LEFT JOIN disbursements AS dis
ON dep.id = dis.id AND dep.address = $1
ORDER BY dep.block_timestamp DESC, dep.id DESC
LIMIT 100
`
func (d *Database) LoadTeleportsByAddress(
addr common.Address,
) ([]Teleport, error) {
rows, err := d.conn.Query(loadTeleportsByAddressQuery, addr.String())
if err != nil {
return nil, err
}
defer rows.Close()
var teleports []Teleport
for rows.Next() {
teleport, err := scanTeleport(rows)
if err != nil {
return nil, err
}
teleports = append(teleports, teleport)
}
if err := rows.Err(); err != nil {
return nil, err
}
return teleports, nil
}
const completedTeleportsQuery = `
SELECT
	dep.id, dep.address, dep.amount, dis.success,
@@ -383,46 +458,19 @@ ORDER BY id DESC
// CompletedTeleports returns the set of all deposits that have also been
// disbursed.
func (d *Database) CompletedTeleports() ([]Teleport, error) {
	rows, err := d.conn.Query(completedTeleportsQuery)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var teleports []Teleport
	for rows.Next() {
		teleport, err := scanTeleport(rows)
		if err != nil {
			return nil, err
		}
		teleports = append(teleports, teleport)
	}
	if err := rows.Err(); err != nil {
@@ -432,6 +480,63 @@ func (d *Database) CompletedTeleports() ([]CompletedTeleport, error) {
	return teleports, nil
}
type Scanner interface {
Scan(...interface{}) error
}
func scanTeleport(scanner Scanner) (Teleport, error) {
var teleport Teleport
var addressStr string
var amountStr string
var depTxnHashStr string
var disTxnHashStr *string
var disBlockNumber *uint64
var disBlockTimestamp *time.Time
var success *bool
err := scanner.Scan(
&teleport.ID,
&addressStr,
&amountStr,
&success,
&depTxnHashStr,
&teleport.Deposit.BlockNumber,
&teleport.Deposit.BlockTimestamp,
&disTxnHashStr,
&disBlockNumber,
&disBlockTimestamp,
)
if err != nil {
return Teleport{}, err
}
amount, ok := new(big.Int).SetString(amountStr, 10)
if !ok {
return Teleport{}, fmt.Errorf("unable to parse amount %v", amount)
}
teleport.Address = common.HexToAddress(addressStr)
teleport.Amount = amount
teleport.Deposit.TxnHash = common.HexToHash(depTxnHashStr)
teleport.Deposit.BlockTimestamp = teleport.Deposit.BlockTimestamp.Local()
hasDisbursement := success != nil &&
disTxnHashStr != nil &&
disBlockNumber != nil &&
disBlockTimestamp != nil
if hasDisbursement {
teleport.Disbursement = &Disbursement{
ConfirmationInfo: ConfirmationInfo{
TxnHash: common.HexToHash(*disTxnHashStr),
BlockNumber: *disBlockNumber,
BlockTimestamp: disBlockTimestamp.Local(),
},
Success: *success,
}
}
return teleport, nil
}
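`scanTeleport` reads the disbursement-side columns into pointers because the `LEFT JOIN` leaves them NULL until a disbursement exists, and `database/sql` reports a NULL by leaving a pointer destination nil. A minimal sketch of that convention, with illustrative values:

```go
package main

import "fmt"

// hasDisbursement reports whether all nullable disbursement columns were
// non-NULL, i.e. every pointer Scan filled is non-nil.
func hasDisbursement(success *bool, txnHash *string, blockNumber *uint64) bool {
	return success != nil && txnHash != nil && blockNumber != nil
}

func main() {
	// Before any disbursement exists, every right-side column of the
	// LEFT JOIN is NULL, so database/sql leaves the pointers nil.
	var success *bool
	var txnHash *string
	var blockNumber *uint64
	fmt.Println(hasDisbursement(success, txnHash, blockNumber)) // false

	// Once the disbursement row exists the pointers are populated and it
	// is safe to dereference them when building the Disbursement struct.
	s, h, n := true, "0x0e01", uint64(2)
	fmt.Println(hasDisbursement(&s, &h, &n)) // true
}
```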
// PendingTx encapsulates the metadata stored about published disbursement txs.
type PendingTx struct {
	// Txhash is the tx hash of the disbursement tx.
......
@@ -91,12 +91,14 @@ func TestUpsertDeposits(t *testing.T) {
	defer d.Close()

	deposit1 := db.Deposit{
		ID:      1,
		Address: common.HexToAddress("0xaa01"),
		Amount:  big.NewInt(1),
		ConfirmationInfo: db.ConfirmationInfo{
			TxnHash:        common.HexToHash("0xff01"),
			BlockNumber:    1,
			BlockTimestamp: testTimestamp,
		},
	}

	err := d.UpsertDeposits([]db.Deposit{deposit1}, 0)
@@ -107,12 +109,14 @@ func TestUpsertDeposits(t *testing.T) {
	require.Equal(t, deposits, []db.Deposit{deposit1})

	deposit2 := db.Deposit{
		ID:      1,
		Address: common.HexToAddress("0xaa02"),
		Amount:  big.NewInt(2),
		ConfirmationInfo: db.ConfirmationInfo{
			TxnHash:        common.HexToHash("0xff02"),
			BlockNumber:    2,
			BlockTimestamp: testTimestamp,
		},
	}

	err = d.UpsertDeposits([]db.Deposit{deposit2}, 0)
@@ -160,12 +164,14 @@ func TestUpsertDepositsRecordsLastProcessedBlock(t *testing.T) {
	// Insert real deposit in block 3 with last processed at 4.
	deposit := db.Deposit{
		ID:      0,
		Address: common.HexToAddress("0xaa03"),
		Amount:  big.NewInt(3),
		ConfirmationInfo: db.ConfirmationInfo{
			TxnHash:        common.HexToHash("0xff03"),
			BlockNumber:    3,
			BlockTimestamp: testTimestamp,
		},
	}
	err = d.UpsertDeposits([]db.Deposit{deposit}, 4)
	require.Nil(t, err)
@@ -190,28 +196,34 @@ func TestConfirmedDeposits(t *testing.T) {
	require.Equal(t, int(0), len(deposits))

	deposit1 := db.Deposit{
		ID:      1,
		Address: common.HexToAddress("0xaa01"),
		Amount:  big.NewInt(1),
		ConfirmationInfo: db.ConfirmationInfo{
			TxnHash:        common.HexToHash("0xff01"),
			BlockNumber:    1,
			BlockTimestamp: testTimestamp,
		},
	}
	deposit2 := db.Deposit{
		ID:      2,
		Address: common.HexToAddress("0xaa21"),
		Amount:  big.NewInt(2),
		ConfirmationInfo: db.ConfirmationInfo{
			TxnHash:        common.HexToHash("0xff21"),
			BlockNumber:    2,
			BlockTimestamp: testTimestamp,
		},
	}
	deposit3 := db.Deposit{
		ID:      3,
		Address: common.HexToAddress("0xaa22"),
		Amount:  big.NewInt(2),
		ConfirmationInfo: db.ConfirmationInfo{
			TxnHash:        common.HexToHash("0xff22"),
			BlockNumber:    2,
			BlockTimestamp: testTimestamp,
		},
	}

	err = d.UpsertDeposits([]db.Deposit{
@@ -269,12 +281,14 @@ func TestUpsertDisbursement(t *testing.T) {
	// Now, insert a real deposit that we will disburse.
	err = d.UpsertDeposits([]db.Deposit{
		{
			ID:      1,
			Address: address,
			Amount:  amount,
			ConfirmationInfo: db.ConfirmationInfo{
				TxnHash:        depTxnHash,
				BlockNumber:    depBlockNumber,
				BlockTimestamp: testTimestamp,
			},
		},
	}, 0)
	require.Nil(t, err)
@@ -287,21 +301,25 @@ func TestUpsertDisbursement(t *testing.T) {
	)
	require.Nil(t, err)

	expTeleports := []db.Teleport{
		{
			Deposit: db.Deposit{
				ID:      1,
				Address: address,
				Amount:  amount,
				ConfirmationInfo: db.ConfirmationInfo{
					TxnHash:        depTxnHash,
					BlockNumber:    depBlockNumber,
					BlockTimestamp: testTimestamp,
				},
			},
			Disbursement: &db.Disbursement{
				Success: false,
				ConfirmationInfo: db.ConfirmationInfo{
					TxnHash:        tempDisTxnHash,
					BlockNumber:    tempDisBlockNumber,
					BlockTimestamp: testTimestamp,
				},
			},
		},
	}
@@ -316,21 +334,25 @@ func TestUpsertDisbursement(t *testing.T) {
	err = d.UpsertDisbursement(1, disTxnHash, disBlockNumber, testTimestamp, true)
	require.Nil(t, err)

	expTeleports = []db.Teleport{
		{
			Deposit: db.Deposit{
				ID:      1,
				Address: address,
				Amount:  amount,
				ConfirmationInfo: db.ConfirmationInfo{
					TxnHash:        depTxnHash,
					BlockNumber:    depBlockNumber,
					BlockTimestamp: testTimestamp,
				},
			},
			Disbursement: &db.Disbursement{
				Success: true,
				ConfirmationInfo: db.ConfirmationInfo{
					TxnHash:        disTxnHash,
					BlockNumber:    disBlockNumber,
					BlockTimestamp: testTimestamp,
				},
			},
		},
	}
@@ -473,3 +495,78 @@ func TestDeletePendingTx(t *testing.T) {
	require.Nil(t, err)
	require.Nil(t, pendingTxs)
}
// TestLoadTeleports asserts that LoadTeleportByDepositHash and
// LoadTeleportsByAddress are able to query for a specific deposit in various
// stages through the teleport process.
func TestLoadTeleports(t *testing.T) {
t.Parallel()
d := newDatabase(t)
defer d.Close()
address := common.HexToAddress("0x01")
amount := big.NewInt(1000)
depTxnHash := common.HexToHash("0x0d01")
depBlockNumber := uint64(1)
disTxnHash := common.HexToHash("0x0e01")
disBlockNumber := uint64(2)
// Insert deposit.
deposit1 := db.Deposit{
ID: 1,
Address: address,
Amount: amount,
ConfirmationInfo: db.ConfirmationInfo{
TxnHash: depTxnHash,
BlockNumber: depBlockNumber,
BlockTimestamp: testTimestamp,
},
}
err := d.UpsertDeposits([]db.Deposit{deposit1}, 0)
require.Nil(t, err)
// The same, undisbursed teleport should be returned by hash and address.
expTeleport := db.Teleport{
Deposit: deposit1,
Disbursement: nil,
}
teleport, err := d.LoadTeleportByDepositHash(depTxnHash)
require.Nil(t, err)
require.NotNil(t, teleport)
require.Equal(t, expTeleport, *teleport)
teleports, err := d.LoadTeleportsByAddress(address)
require.Nil(t, err)
require.Equal(t, []db.Teleport{expTeleport}, teleports)
// Insert a disbursement for the above deposit.
err = d.UpsertDisbursement(
1, disTxnHash, disBlockNumber, testTimestamp, true,
)
require.Nil(t, err)
// The now-complete teleport should be returned from both queries.
expTeleport = db.Teleport{
Deposit: deposit1,
Disbursement: &db.Disbursement{
Success: true,
ConfirmationInfo: db.ConfirmationInfo{
TxnHash: disTxnHash,
BlockNumber: disBlockNumber,
BlockTimestamp: testTimestamp,
},
},
}
teleport, err = d.LoadTeleportByDepositHash(depTxnHash)
require.Nil(t, err)
require.NotNil(t, teleport)
require.Equal(t, expTeleport, *teleport)
teleports, err = d.LoadTeleportsByAddress(address)
require.Nil(t, err)
require.Equal(t, []db.Teleport{expTeleport}, teleports)
}
@@ -509,12 +509,14 @@ func (d *Driver) ingestDeposits(
	}

	deposits = append(deposits, db.Deposit{
		ID:      event.DepositId.Uint64(),
		Address: event.Emitter,
		Amount:  event.Amount,
		ConfirmationInfo: db.ConfirmationInfo{
			TxnHash:        event.Raw.TxHash,
			BlockNumber:    event.Raw.BlockNumber,
			BlockTimestamp: time.Unix(int64(header.Time), 0),
		},
	})
	}

	err = events.Error()
......
package flags
import (
"fmt"
"strings"
"github.com/urfave/cli"
)
func prefixAPIEnvVar(name string) string {
return fmt.Sprintf("TELEPORTR_API_%s", strings.ToUpper(name))
}
var (
APIHostnameFlag = cli.StringFlag{
Name: "hostname",
Usage: "The hostname of the API server",
Required: true,
EnvVar: prefixAPIEnvVar("HOSTNAME"),
}
APIPortFlag = cli.StringFlag{
Name: "port",
Usage: "The port of the API server",
Required: true,
EnvVar: prefixAPIEnvVar("PORT"),
}
APIL1EthRpcFlag = cli.StringFlag{
Name: "l1-eth-rpc",
Usage: "The endpoint for the L1 ETH provider",
Required: true,
EnvVar: prefixAPIEnvVar("L1_ETH_RPC"),
}
APIDepositAddressFlag = cli.StringFlag{
Name: "deposit-address",
Usage: "Address of the TeleportrDeposit contract",
Required: true,
EnvVar: prefixAPIEnvVar("DEPOSIT_ADDRESS"),
}
APINumConfirmationsFlag = cli.StringFlag{
Name: "num-confirmations",
Usage: "Number of confirmations required until deposits are " +
"considered confirmed",
Required: true,
EnvVar: prefixAPIEnvVar("NUM_CONFIRMATIONS"),
}
APIPostgresHostFlag = cli.StringFlag{
Name: "postgres-host",
Usage: "Host of the teleportr postgres instance",
Required: true,
EnvVar: prefixAPIEnvVar("POSTGRES_HOST"),
}
APIPostgresPortFlag = cli.Uint64Flag{
Name: "postgres-port",
Usage: "Port of the teleportr postgres instance",
Required: true,
EnvVar: prefixAPIEnvVar("POSTGRES_PORT"),
}
APIPostgresUserFlag = cli.StringFlag{
Name: "postgres-user",
Usage: "Username of the teleportr postgres instance",
Required: true,
EnvVar: prefixAPIEnvVar("POSTGRES_USER"),
}
APIPostgresPasswordFlag = cli.StringFlag{
Name: "postgres-password",
Usage: "Password of the teleportr postgres instance",
Required: true,
EnvVar: prefixAPIEnvVar("POSTGRES_PASSWORD"),
}
APIPostgresDBNameFlag = cli.StringFlag{
Name: "postgres-db-name",
Usage: "Database name of the teleportr postgres instance",
Required: true,
EnvVar: prefixAPIEnvVar("POSTGRES_DB_NAME"),
}
APIPostgresEnableSSLFlag = cli.BoolFlag{
Name: "postgres-enable-ssl",
Usage: "Whether or not to enable SSL on connections to " +
"teleportr postgres instance",
Required: true,
EnvVar: prefixAPIEnvVar("POSTGRES_ENABLE_SSL"),
}
)
var APIFlags = []cli.Flag{
APIHostnameFlag,
APIPortFlag,
APIL1EthRpcFlag,
APIDepositAddressFlag,
APINumConfirmationsFlag,
APIPostgresHostFlag,
APIPostgresPortFlag,
APIPostgresUserFlag,
APIPostgresPasswordFlag,
APIPostgresDBNameFlag,
APIPostgresEnableSSLFlag,
}
@@ -4,9 +4,12 @@ go 1.17
require (
	github.com/ethereum-optimism/optimism/go/bss-core v0.0.0-20220218171106-67a0414d7606
	github.com/ethereum/go-ethereum v1.10.16
	github.com/google/uuid v1.3.0
	github.com/gorilla/mux v1.8.0
	github.com/lib/pq v1.10.4
	github.com/prometheus/client_golang v1.11.0
	github.com/rs/cors v1.7.0
	github.com/stretchr/testify v1.7.0
	github.com/urfave/cli v1.22.5
)
@@ -19,7 +22,7 @@ require (
	github.com/cespare/xxhash/v2 v2.1.1 // indirect
	github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/deckarep/golang-set v1.8.0 // indirect
	github.com/decred/base58 v1.0.3 // indirect
	github.com/decred/dcrd/crypto/blake256 v1.0.0 // indirect
	github.com/decred/dcrd/crypto/ripemd160 v1.0.1 // indirect
@@ -39,7 +42,6 @@ require (
	github.com/olekukonko/tablewriter v0.0.5 // indirect
	github.com/pkg/errors v0.9.1 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/prometheus/client_model v0.2.0 // indirect
	github.com/prometheus/common v0.26.0 // indirect
	github.com/prometheus/procfs v0.6.0 // indirect
......
@@ -118,8 +118,9 @@ github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea/go.mod h1:93vsz/8Wt4joVM7c2AVqh+YRMiUSc14yDtF28KmMOgQ=
github.com/deckarep/golang-set v1.8.0 h1:sk9/l/KqpunDwP7pSjUg0keiOOLEnOBHzykLrsPppp4=
github.com/deckarep/golang-set v1.8.0/go.mod h1:5nI87KwE7wgsBU1F4GKAw2Qod7p5kyS383rP6+o6qqo=
github.com/decred/base58 v1.0.3 h1:KGZuh8d1WEMIrK0leQRM47W85KqCAdl2N+uagbctdDI=
github.com/decred/base58 v1.0.3/go.mod h1:pXP9cXCfM2sFLb2viz2FNIdeMWmZDBKG3ZBYbiSM78E=
github.com/decred/dcrd/chaincfg/chainhash v1.0.2 h1:rt5Vlq/jM3ZawwiacWjPa+smINyLRN07EO0cNBV6DGU=
@@ -163,8 +164,8 @@ github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.m
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/etcd-io/bbolt v1.3.3/go.mod h1:ZF2nL25h33cCyBtcyWeZ2/I3HQOfTP+0PIEvHjkjCrw=
github.com/ethereum/go-ethereum v1.10.12/go.mod h1:W3yfrFyL9C1pHcwY5hmRHVDaorTiQxhYBkKyu5mEDHw=
github.com/ethereum/go-ethereum v1.10.16 h1:3oPrumn0bCW/idjcxMn5YYVCdK7VzJYIvwGZUGLEaoc=
github.com/ethereum/go-ethereum v1.10.16/go.mod h1:Anj6cxczl+AHy63o4X9O8yWNHuN5wMpfb8MAnHkWn7Y=
github.com/fasthttp-contrib/websocket v0.0.0-20160511215533-1f3b11f56072/go.mod h1:duJ4Jxv5lDcvg4QuQr0oowTf7dz4/CR8NtyCooz9HL8=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
@@ -265,11 +266,13 @@ github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v1.4.1/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/graph-gophers/graphql-go v0.0.0-20201113091052-beb923fada29/go.mod h1:9CQHMSxwO4MprSdzoIEobiHpoLtHm77vfxsvsIN5Vuc=
github.com/graph-gophers/graphql-go v1.3.0/go.mod h1:9CQHMSxwO4MprSdzoIEobiHpoLtHm77vfxsvsIN5Vuc=
github.com/hashicorp/go-bexpr v0.1.10 h1:9kuI5PFotCboP3dkDYFr/wi0gg0QVbSNz5oFRpxn4uE=
github.com/hashicorp/go-bexpr v0.1.10/go.mod h1:oxlubA2vC/gFVfX1A6JGp7ls7uCDlfJn732ehYYg+g0=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
@@ -305,8 +308,9 @@ github.com/iris-contrib/go.uuid v2.0.0+incompatible/go.mod h1:iz2lgM/1UnEf1kP0L/
github.com/iris-contrib/jade v1.1.3/go.mod h1:H/geBymxJhShH5kecoiOCSssPX7QWYH7UaeZTSWddIk=
github.com/iris-contrib/pongo2 v0.0.1/go.mod h1:Ssh+00+3GAZqSQb30AvBRNxBx7rf0GqwkjqxNd0u65g=
github.com/iris-contrib/schema v0.0.1/go.mod h1:urYA3uvUNG1TIIjOSCzHr9/LmbQo8LrOcOqfqxa4hXw=
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jackpal/go-nat-pmp v1.0.2 h1:KzKSgb7qkJvOUTqYl9/Hg/me3pWgBmERKrTGD7BdWus=
github.com/jackpal/go-nat-pmp v1.0.2/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jedisct1/go-minisign v0.0.0-20190909160543-45766022959e/go.mod h1:G1CVv03EnqU1wYL2dFwXxW2An0az9JTl/ZsqXQeBlkU=
github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
@@ -328,6 +332,7 @@ github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E
github.com/jwilder/encoding v0.0.0-20170811194829-b4e1701a28ef/go.mod h1:Ct9fl0F6iIOGgxJ5npU/IUOhOhqlVrGjyIZc8/MagT0=
github.com/k0kubun/colorstring v0.0.0-20150214042306-9440f1994b88/go.mod h1:3w7q1U84EfirKl04SVQ/s7nPm1ZPhiXd34z40TNz36k=
github.com/karalabe/usb v0.0.0-20211005121534-4c5740d64559/go.mod h1:Od972xHfMJowv7NGVDiWVxk2zxnWgjLlJzE+F4F7AGU=
github.com/karalabe/usb v0.0.2/go.mod h1:Od972xHfMJowv7NGVDiWVxk2zxnWgjLlJzE+F4F7AGU=
github.com/kataras/golog v0.0.10/go.mod h1:yJ8YKCmyL+nWjERB90Qwn+bdyBZsaQwU3bTVFgkFIp8=
github.com/kataras/iris/v12 v12.1.8/go.mod h1:LMYy4VlP67TQ3Zgriz8RE2h2kMZV2SgMYbq3UhfoFmE=
github.com/kataras/neffos v0.0.14/go.mod h1:8lqADm8PnbeFfL7CLXh1WHw53dG27MC3pgi2R1rmoTE=
......
# op-replica infra
Deployment examples and resources for running an Optimism replica.
[./envs](./envs) contains working example environment files to configure a replica for different networks.
[./scripts](./scripts/) contains helper scripts to prepare or verify an environment.
[./docker-compose](./docker-compose/) provides working docker-compose example deployments.
[./kustomize](./kustomize/) has Kubernetes base overlays and network examples.
# Running a Network Node
This project lets you set up a local replica of the Optimistic Ethereum chain (either the main one or the Kovan testnet). [New
transactions are submitted either to the sequencer outside of Ethereum or to the Canonical Transaction Chain on
L1](https://research.paradigm.xyz/optimism#data-availability-batches). To submit transactions via a replica, set
`SEQUENCER_CLIENT_HTTP` to a sequencer URL.
## Architecture
You need two components to replicate Optimistic Ethereum:
- `data-transport-layer`, which retrieves and indexes blocks from L1. To access L1 you need an Ethereum Layer 1 provider, such as
  [Infura](https://infura.io/).
- `l2geth`, which provides an Ethereum node where your applications can connect and run API calls.
## Resource requirements
The `data-transport-layer` should run with 1 CPU and 256 MB of memory.
The `l2geth` process should run with 1 or 2 CPUs and between 4 and 8 GB of memory.
With this configuration, a synchronization from block 0 to the current height is expected to take about 8 hours.
## Software Packages
These packages are required to run the replica:
1. [Docker](https://www.docker.com/)
1. [Docker Compose](https://docs.docker.com/compose/install/)
## Configuration
To configure the project, clone this repository and copy either the `default-kovan.env` or `default-mainnet.env` file to `.env`.
Review the `SHARED_ENV_PATH` configuration; no changes are required, but depending on your use case you may need to copy these examples to a new directory and make your changes there.
!! Update `DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT` to a valid endpoint !!
### Settings
Change any other settings required for your environment.

| Variable | Purpose | Default |
| -------- | ------- | ------- |
| COMPOSE_FILE | The yml files to use with docker-compose | replica.yml:replica-shared.yml |
| ETH_NETWORK | Ethereum Layer 1 and Layer 2 network (mainnet, kovan) | kovan (change to `mainnet` for the production network) |
| DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT | An endpoint for the L1 network, either kovan or mainnet | |
| DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT | Optimistic endpoint, such as https://kovan.optimism.io or https://mainnet.optimism.io | |
| REPLICA_HEALTHCHECK__ETH_NETWORK_RPC_PROVIDER | The L2 endpoint to check the replica against | typically the same as DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT |
| SEQUENCER_CLIENT_HTTP | The L2 sequencer to forward txs to | typically the same as DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT |
| SHARED_ENV_PATH | Path to a directory containing env files | [a directory under ./kustomize/replica/envs](https://github.com/optimisticben/op-replica/tree/main/kustomize/replica/envs) |
| GCMODE | Whether to run l2geth as an `archive` or `full` node | archive |
| L2GETH_IMAGE_TAG | L2geth version | 0.5.8 (see below) |
| DTL_IMAGE_TAG | Data transport layer version | latest (see below) |
| HC_IMAGE_TAG | Health check version | latest (see below) |
| L2GETH_HTTP_PORT | Port number for the l2geth RPC endpoint | 9991 |
| L2GETH_WS_PORT | Port number for the l2geth WebSocket endpoint | 9992 |
| DTL_PORT | Port number for the DTL endpoint, for troubleshooting | 7878 |
| GETH_INIT_SCRIPT | The script name to run when initializing l2geth | A file under kustomize/replica/bases/configmaps/ |
### Docker Image Versions
We recommend using the latest versions of both docker images. Find them as GitHub tags
[here](https://github.com/ethereum-optimism/optimism/tags) and as published Docker images linked in the badges:
| Package                                                                                                                         | Docker                                                                                                                                                                                                              |
| ------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [`@eth-optimism/l2geth`](https://github.com/ethereum-optimism/optimism/tree/master/l2geth)                                      | [![Docker Image Version (latest by date)](https://img.shields.io/docker/v/ethereumoptimism/l2geth)](https://hub.docker.com/r/ethereumoptimism/l2geth/tags?page=1&ordering=last_updated)                             |
| [`@eth-optimism/data-transport-layer`](https://github.com/ethereum-optimism/optimism/tree/master/packages/data-transport-layer) | [![Docker Image Version (latest by date)](https://img.shields.io/docker/v/ethereumoptimism/data-transport-layer)](https://hub.docker.com/r/ethereumoptimism/data-transport-layer/tags?page=1&ordering=last_updated) |
| [`@eth-optimism/replica-healthcheck`](https://github.com/ethereum-optimism/optimism/tree/master/packages/replica-healthcheck) | [![Docker Image Version (latest by date)](https://img.shields.io/docker/v/ethereumoptimism/replica-healthcheck)](https://hub.docker.com/r/ethereumoptimism/replica-healthcheck/tags?page=1&ordering=last_updated) |
## Usage
| Action | Command |
| - | - |
| Start the replica (after which you can access it at `http://localhost:L2GETH_HTTP_PORT`) | `docker-compose up -d` |
| Get the logs for `l2geth` | `docker-compose logs -f l2geth-replica` |
| Get the logs for `data-transport-layer` | `docker-compose logs -f data-transport-layer` |
| Stop the replica | `docker-compose down` |
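A quick way to verify the replica is serving requests is a JSON-RPC `eth_blockNumber` call. The sketch below assumes the default `L2GETH_HTTP_PORT` of 9991; the parsing runs against a canned sample response so it can be tried offline:

```shell
# Against a running stack you would do:
# RESPONSE=$(curl -s -X POST -H 'Content-Type: application/json' \
#   --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
#   http://localhost:9991)
# Canned sample with the same shape as a real response:
RESPONSE='{"jsonrpc":"2.0","id":1,"result":"0x1b4"}'
# Extract the hex quantity and convert it to decimal.
HEX=$(echo "$RESPONSE" | sed -n 's/.*"result":"\(0x[0-9a-fA-F]*\)".*/\1/p')
BLOCK=$(printf '%d' "$HEX")
echo "replica is at block $BLOCK"
```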
## Sync Check
 
There is a sync check container. It fails at startup because the replica is not yet running at that point. It exposes metrics on port 3000, which you can scrape with Prometheus. You can view its status with this command:
```sh
docker-compose logs -f replica-healthcheck
```
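The healthcheck's height gauges can also be read directly. This is a minimal sketch of computing replica lag from the two metrics that the haproxy check script further down relies on; the scrape output here is a canned sample, whereas in a live stack you would `curl` the healthcheck's `/metrics` endpoint instead:

```shell
# Sample /metrics lines (names from check-ecsync-optimism.sh; values made up):
METRICS='replica_health_sequencer_height 1200
replica_health_height 1198'
# Pull out the two heights the same way the check script does.
SEQ=$(echo "$METRICS" | grep '^replica_health_sequencer_height' | cut -d' ' -f2)
REP=$(echo "$METRICS" | grep '^replica_health_height' | cut -d' ' -f2)
echo "replica lag: $((SEQ - REP)) blocks"
```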
## Registration
[Register here](https://groups.google.com/a/optimism.io/g/optimism-announce) to get announcements, such as notifications when you need to update your replica.
# Overview
These files allow you to expose the l2geth RPC/WS ports to the Internet, TLS-encrypted,
via a traefik reverse proxy and Let's Encrypt certificates.
Instructions for the CloudFlare and AWS options are kept up-to-date at https://eth-docker.net/docs/Usage/ReverseProxy
Please open issues for this contribution in the fork at https://github.com/CryptoManufaktur-io/op-replica
## Usage
The `.env` in the main project directory needs to contain traefik-specific variables. Make a backup copy with
`cp .env .env.bak`, then bring in the `default.env` from this directory: `cp contrib/traefik-haproxy/default.env .env`.
Adjust replica variables to match what they were and add either `contrib/traefik-haproxy/traefik-cf.yml` or
`contrib/traefik-haproxy/traefik-aws.yml` to `COMPOSE_FILE`.
Then edit traefik-specific variables for CloudFlare or AWS as per above-linked instructions.
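The `COMPOSE_FILE` addition can be scripted; here is a sketch using GNU `sed` against a scratch copy, so it is safe to try (drop `.example` to modify the real `.env`):

```shell
# Scratch copy with the stock COMPOSE_FILE value from this repo's defaults:
printf 'COMPOSE_FILE=replica.yml:replica-shared.yml:replica-toml.yml\n' > .env.example
# Append the CloudFlare traefik overlay to the compose file list.
sed -i 's#^COMPOSE_FILE=.*#&:contrib/traefik-haproxy/traefik-cf.yml#' .env.example
grep ^COMPOSE_FILE .env.example
```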
Alternatively, if you have traefik running in its own stack, you can add `contrib/traefik-haproxy/ext-network.yml`
and adjust it for the network traefik runs in.
The haproxy files are examples taken from a docker swarm mode installation. They should work
with minor modifications in k8s via kompose or Portainer.
`optimism-haproxy.cfg` is the configuration file for haproxy; adjust the host and domain names you'll use in there.
`haproxy-docker-sample.yml` is an example docker-compose-style deployment in docker swarm.
For example, you may have two replicas called `optimism-a.example.com` and `optimism-b.example.com`.
Both are configured in their `traefik.env` to respond to `optimism-lb.example.com`. Haproxy has
a local alias `optimism-lb.example.com` and will forward traffic to a and b servers, which know to respond
to the `optimism-lb` hostname.
`check-ecsync-optimism.sh` is an external check script that takes a server out of rotation when it is not
in sync. It relies on the sequencer-health metrics being reachable via traefik, and assumes that
servers have `-a`, `-b`, `-c` etc. suffixes. The maximum block lag and the name of the healthcheck host
without its suffix are configured in the script.
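To see which healthcheck URL the script will derive for a given server, its name-mangling steps can be run in isolation (the server name below is a sample; `HEALTH_HOST` matches the script's configured value):

```shell
# Same derivation as in check-ecsync-optimism.sh, against a sample server name.
HEALTH_HOST=optimismhealth
HAPROXY_SERVER_NAME=optimism-b.example.com
__domain=$(echo -n $HAPROXY_SERVER_NAME | cut -d '.' -f2-)   # example.com
__host=$(echo -n $HAPROXY_SERVER_NAME | cut -d '.' -f1)      # optimism-b
__dashcount=$(echo -n $__host | grep -o "-" | wc -w)         # number of dashes
__suffix=$(echo -n $__host | cut -d '-' -f$(($__dashcount+1)))  # b
echo "would scrape: $HEALTH_HOST-$__suffix.$__domain/metrics"
```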
#!/bin/sh
MAX_LAG=2
# Assumes that servers are labeled -a, -b, -c etc and contain their domain name, and that corresponding
# sequencer health hosts exist
HEALTH_HOST=optimismhealth
__domain=$(echo -n $HAPROXY_SERVER_NAME | cut -d '.' -f2-)
__host=$(echo -n $HAPROXY_SERVER_NAME | cut -d '.' -f1)
__dashcount=$(echo -n $__host | grep -o "-" | wc -w)
__suffix=$(echo -n $__host | cut -d '-' -f$(($__dashcount+1)))
__sequencerheight=$(curl -s -m2 -N -L "$HEALTH_HOST-$__suffix.$__domain/metrics" | grep ^replica_health_sequencer_height | cut -d' ' -f2)
__replicaheight=$(curl -s -m2 -N -L "$HEALTH_HOST-$__suffix.$__domain/metrics" | grep ^replica_health_height | cut -d' ' -f2)
if [ -z "$__sequencerheight" -o -z "$__replicaheight" ]; then
  exit 1
fi
__distance=$(expr $__sequencerheight - $__replicaheight)
if [ $__distance -le $MAX_LAG ]; then
  exit 0
else
  exit 1
fi
COMPOSE_FILE=replica.yml:replica-shared.yml:replica-toml.yml
ETH_NETWORK=kovan
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT=WONT_WORK_UNLESS_YOU_PROVIDE_A_VALID_ETHEREUM_L1_ENDPOINT
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=https://kovan.optimism.io
REPLICA_HEALTHCHECK__ETH_NETWORK_RPC_PROVIDER=https://kovan.optimism.io
SEQUENCER_CLIENT_HTTP=https://kovan.optimism.io
SHARED_ENV_PATH=../envs/kovan
GCMODE=archive
L2GETH_IMAGE_TAG=0.5.12
DTL_IMAGE_TAG=
HC_IMAGE_TAG=
L2GETH_HTTP_PORT=9991
L2GETH_WS_PORT=9992
DTL_PORT=7878
GETH_INIT_SCRIPT=check-for-chaindata-berlin.sh
RESTART=unless-stopped
# Secure web proxy - advanced use, please see instructions at https://eth-docker.net/docs/Usage/ReverseProxy
DOMAIN=example.com
ACME_EMAIL=user@example.com
CF_EMAIL=user@example.com
CF_API_TOKEN=SECRETTOKEN
AWS_PROFILE=myprofile
AWS_HOSTED_ZONE_ID=myzoneid
L2GETH_HOST=optimism
L2GETH_LB=optimism-lb
L2GETH_WS_HOST=optimismws
L2GETH_WS_LB=optimismws-lb
L2GETH_HEALTH_HOST=optimismhealth
DDNS_SUBDOMAIN=
DDNS_PROXY=false
TRAEFIK_WEB_PORT=443
TRAEFIK_WEB_HTTP_PORT=80
COMPOSE_FILE=replica.yml:replica-shared.yml:replica-toml.yml
ETH_NETWORK=mainnet
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT=WONT_WORK_UNLESS_YOU_PROVIDE_A_VALID_ETHEREUM_L1_ENDPOINT
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=https://mainnet.optimism.io
REPLICA_HEALTHCHECK__ETH_NETWORK_RPC_PROVIDER=https://mainnet.optimism.io
SEQUENCER_CLIENT_HTTP=https://mainnet.optimism.io
SHARED_ENV_PATH=../envs/mainnet
GCMODE=archive
L2GETH_IMAGE_TAG=0.5.12
DTL_IMAGE_TAG=
HC_IMAGE_TAG=
L2GETH_HTTP_PORT=9991
L2GETH_WS_PORT=9992
DTL_PORT=7878
GETH_INIT_SCRIPT=check-for-chaindata-berlin.sh
RESTART=unless-stopped
# Secure web proxy - advanced use, please see instructions at https://eth-docker.net/docs/Usage/ReverseProxy
DOMAIN=example.com
ACME_EMAIL=user@example.com
CF_EMAIL=user@example.com
CF_API_TOKEN=SECRETTOKEN
AWS_PROFILE=myprofile
AWS_HOSTED_ZONE_ID=myzoneid
L2GETH_HOST=optimism
L2GETH_LB=optimism-lb
L2GETH_WS_HOST=optimismws
L2GETH_WS_LB=optimismws-lb
L2GETH_HEALTH_HOST=optimismhealth
DDNS_SUBDOMAIN=
DDNS_PROXY=false
TRAEFIK_WEB_PORT=443
TRAEFIK_WEB_HTTP_PORT=80
version: "3.4"
networks:
  default:
    external:
      name: traefik_default
services:
  l2geth-replica:
    labels:
      - traefik.enable=true
      - traefik.http.routers.l2geth.service=l2geth
      - traefik.http.routers.l2geth.entrypoints=websecure
      - traefik.http.routers.l2geth.rule=Host(`${L2GETH_HOST}.${DOMAIN}`)
      - traefik.http.routers.l2geth.tls.certresolver=letsencrypt
      - traefik.http.routers.l2gethlb.service=l2geth
      - traefik.http.routers.l2gethlb.entrypoints=websecure
      - traefik.http.routers.l2gethlb.rule=Host(`${L2GETH_LB}.${DOMAIN}`)
      - traefik.http.routers.l2gethlb.tls.certresolver=letsencrypt
      - traefik.http.services.l2geth.loadbalancer.server.port=8545
      - traefik.http.routers.l2gethws.service=l2gethws
      - traefik.http.routers.l2gethws.entrypoints=websecure
      - traefik.http.routers.l2gethws.rule=Host(`${L2GETH_WS_HOST}.${DOMAIN}`)
      - traefik.http.routers.l2gethws.tls.certresolver=letsencrypt
      - traefik.http.routers.l2gethwslb.service=l2gethws
      - traefik.http.routers.l2gethwslb.entrypoints=websecure
      - traefik.http.routers.l2gethwslb.rule=Host(`${L2GETH_WS_LB}.${DOMAIN}`)
      - traefik.http.routers.l2gethwslb.tls.certresolver=letsencrypt
      - traefik.http.services.l2gethws.loadbalancer.server.port=8546
  replica-healthcheck:
    labels:
      - traefik.enable=true
      - traefik.http.routers.l2gethhealth.service=l2gethhealth
      - traefik.http.routers.l2gethhealth.entrypoints=websecure
      - traefik.http.routers.l2gethhealth.rule=Host(`${L2GETH_HEALTH_HOST}.${DOMAIN}`)
      - traefik.http.routers.l2gethhealth.tls.certresolver=letsencrypt
      - traefik.http.services.l2gethhealth.loadbalancer.server.port=3000
version: "3.4"
x-logging: &logging
  logging:
    driver: json-file
    options:
      max-size: 20m
      max-file: "3"
services:
  optimism-haproxy:
    image: haproxy:latest
    user: root
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        apt-get update
        apt-get install --no-install-recommends -y curl jq bc ca-certificates
        exec haproxy -f /usr/local/etc/haproxy/haproxy.cfg
    networks:
      default:
        aliases:
          - optimismws-lb.example.com
          - optimism-lb.example.com
    configs:
      - source: optimism-haproxy.cfg
        target: /usr/local/etc/haproxy/haproxy.cfg
      - source: check-ecsync-optimism.sh
        target: /var/lib/haproxy/check-ecsync.sh
        mode: 0555
    deploy:
      mode: replicated
      replicas: 2
      placement:
        constraints: ["node.role == worker"]
    <<: *logging
configs:
  optimism-haproxy.cfg:
    external: true
  check-ecsync-optimism.sh:
    external: true
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log stdout format raw local0
    maxconn 20000
    user haproxy
    group haproxy
    ssl-default-server-options force-tlsv12
    ssl-default-bind-options force-tlsv12
    ca-base /etc/ssl/certs
    external-check
    insecure-fork-wanted
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode tcp
    option tcplog
    log global
    option dontlognull
    timeout client 5m
#---------------------------------------------------------------------
# dedicated stats page
#---------------------------------------------------------------------
listen stats
    mode http
    bind :22222
    stats enable
    stats uri /haproxy?stats
    stats realm Haproxy\ Statistics
    stats auth admin:YOURPW
    stats refresh 30s
#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend main_https_listen
    bind *:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # -------------------------------
    # ACLs
    # -------------------------------
    acl acl_ec req.ssl_sni -i optimism-lb.example.com
    acl acl_ecws req.ssl_sni -i optimismws-lb.example.com
    # -------------------------------
    # Conditions
    # -------------------------------
    use_backend backend_ec if acl_ec
    use_backend backend_ecws if acl_ecws
#---------------------------------------------------------------------
# Backends
#---------------------------------------------------------------------
# RPC execution client
backend backend_ec
    description Optimism L2Geth
    default-server init-addr libc no-tls-tickets check inter 10000 on-marked-down shutdown-sessions
    timeout connect 5s
    timeout server 30s
    retries 2
    balance first
    option external-check
    external-check path "/usr/bin:/bin"
    external-check command /var/lib/haproxy/check-ecsync.sh
    server optimism-a.example.com optimism-a.example.com:443
    server optimism-b.example.com optimism-b.example.com:443 backup
# WebSockets execution client
backend backend_ecws
    description Optimism L2Geth WebSockets
    default-server init-addr libc no-tls-tickets check inter 10000 on-marked-down shutdown-sessions
    timeout connect 5s
    timeout server 30s
    timeout tunnel 3600s
    retries 2
    balance first
    option external-check
    external-check path "/usr/bin:/bin"
    external-check command /var/lib/haproxy/check-ecsync.sh
    server optimism-a.example.com optimismws-a.example.com:443
    server optimism-b.example.com optimismws-b.example.com:443 backup
version: "3.4"
x-logging: &logging
  logging:
    driver: json-file
    options:
      max-size: 10m
      max-file: "3"
services:
  traefik:
    image: traefik-aws
    build:
      context: ./contrib/traefik-haproxy/traefik
    restart: ${RESTART}
    command:
      # - --log.level=DEBUG
      # - --accesslog=true
      # - --accesslog.format=json
      # - --accesslog.fields.defaultmode=keep
      # - --accesslog.fields.headers.defaultmode=keep
      # - --certificatesResolvers.letsencrypt.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
      - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=route53
      - --certificatesresolvers.letsencrypt.acme.email=${ACME_EMAIL}
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      - --entrypoints.web.address=:${TRAEFIK_WEB_HTTP_PORT}
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
      - --entrypoints.websecure.address=:${TRAEFIK_WEB_PORT}
    ports:
      - ${TRAEFIK_WEB_PORT}:${TRAEFIK_WEB_PORT}/tcp
      - ${TRAEFIK_WEB_HTTP_PORT}:${TRAEFIK_WEB_HTTP_PORT}/tcp
    environment:
      - AWS_PROFILE=${AWS_PROFILE}
      - AWS_HOSTED_ZONE_ID=${AWS_HOSTED_ZONE_ID}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/letsencrypt
      - ~/.aws:/root/.aws:ro
      - /etc/localtime:/etc/localtime:ro
    <<: *logging
  l2geth-replica:
    labels:
      - traefik.enable=true
      - traefik.http.routers.l2geth.service=l2geth
      - traefik.http.routers.l2geth.entrypoints=websecure
      - traefik.http.routers.l2geth.rule=Host(`${L2GETH_HOST}.${DOMAIN}`)
      - traefik.http.routers.l2geth.tls.certresolver=letsencrypt
      - traefik.http.routers.l2gethlb.service=l2geth
      - traefik.http.routers.l2gethlb.entrypoints=websecure
      - traefik.http.routers.l2gethlb.rule=Host(`${L2GETH_LB}.${DOMAIN}`)
      - traefik.http.routers.l2gethlb.tls.certresolver=letsencrypt
      - traefik.http.services.l2geth.loadbalancer.server.port=8545
      - traefik.http.routers.l2gethws.service=l2gethws
      - traefik.http.routers.l2gethws.entrypoints=websecure
      - traefik.http.routers.l2gethws.rule=Host(`${L2GETH_WS_HOST}.${DOMAIN}`)
      - traefik.http.routers.l2gethws.tls.certresolver=letsencrypt
      - traefik.http.routers.l2gethwslb.service=l2gethws
      - traefik.http.routers.l2gethwslb.entrypoints=websecure
      - traefik.http.routers.l2gethwslb.rule=Host(`${L2GETH_WS_LB}.${DOMAIN}`)
      - traefik.http.routers.l2gethwslb.tls.certresolver=letsencrypt
      - traefik.http.services.l2gethws.loadbalancer.server.port=8546
  replica-healthcheck:
    labels:
      - traefik.enable=true
      - traefik.http.routers.l2gethhealth.service=l2gethhealth
      - traefik.http.routers.l2gethhealth.entrypoints=websecure
      - traefik.http.routers.l2gethhealth.rule=Host(`${L2GETH_HEALTH_HOST}.${DOMAIN}`)
      - traefik.http.routers.l2gethhealth.tls.certresolver=letsencrypt
      - traefik.http.services.l2gethhealth.loadbalancer.server.port=3000
volumes:
  certs:
version: "3.4"
x-logging: &logging
  logging:
    driver: json-file
    options:
      max-size: 10m
      max-file: "3"
services:
  traefik:
    image: traefik:latest
    restart: ${RESTART}
    command:
      # - --log.level=DEBUG
      # - --accesslog=true
      # - --accesslog.format=json
      # - --accesslog.fields.defaultmode=keep
      # - --accesslog.fields.headers.defaultmode=keep
      # - --certificatesResolvers.letsencrypt.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --certificatesresolvers.letsencrypt.acme.dnschallenge=true
      - --certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
      - --certificatesresolvers.letsencrypt.acme.email=${ACME_EMAIL}
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      - --entrypoints.web.address=:${TRAEFIK_WEB_HTTP_PORT}
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
      - --entrypoints.websecure.address=:${TRAEFIK_WEB_PORT}
    ports:
      - ${TRAEFIK_WEB_PORT}:${TRAEFIK_WEB_PORT}/tcp
      - ${TRAEFIK_WEB_HTTP_PORT}:${TRAEFIK_WEB_HTTP_PORT}/tcp
    environment:
      - CLOUDFLARE_EMAIL=${CF_EMAIL}
      - CLOUDFLARE_DNS_API_TOKEN=${CF_API_TOKEN}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/letsencrypt
      - /etc/localtime:/etc/localtime:ro
    <<: *logging
    depends_on:
      - cf-ddns
  cf-ddns:
    image: oznu/cloudflare-ddns:latest
    restart: ${RESTART}
    environment:
      - API_KEY=${CF_API_TOKEN}
      - ZONE=${DOMAIN}
      - SUBDOMAIN=${DDNS_SUBDOMAIN}
      - PROXIED=${DDNS_PROXY}
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    <<: *logging
  l2geth-replica:
    labels:
      - traefik.enable=true
      - traefik.http.routers.l2geth.service=l2geth
      - traefik.http.routers.l2geth.entrypoints=websecure
      - traefik.http.routers.l2geth.rule=Host(`${L2GETH_HOST}.${DOMAIN}`)
      - traefik.http.routers.l2geth.tls.certresolver=letsencrypt
      - traefik.http.routers.l2gethlb.service=l2geth
      - traefik.http.routers.l2gethlb.entrypoints=websecure
      - traefik.http.routers.l2gethlb.rule=Host(`${L2GETH_LB}.${DOMAIN}`)
      - traefik.http.routers.l2gethlb.tls.certresolver=letsencrypt
      - traefik.http.services.l2geth.loadbalancer.server.port=8545
      - traefik.http.routers.l2gethws.service=l2gethws
      - traefik.http.routers.l2gethws.entrypoints=websecure
      - traefik.http.routers.l2gethws.rule=Host(`${L2GETH_WS_HOST}.${DOMAIN}`)
      - traefik.http.routers.l2gethws.tls.certresolver=letsencrypt
      - traefik.http.routers.l2gethwslb.service=l2gethws
      - traefik.http.routers.l2gethwslb.entrypoints=websecure
      - traefik.http.routers.l2gethwslb.rule=Host(`${L2GETH_WS_LB}.${DOMAIN}`)
      - traefik.http.routers.l2gethwslb.tls.certresolver=letsencrypt
      - traefik.http.services.l2gethws.loadbalancer.server.port=8546
  replica-healthcheck:
    labels:
      - traefik.enable=true
      - traefik.http.routers.l2gethhealth.service=l2gethhealth
      - traefik.http.routers.l2gethhealth.entrypoints=websecure
      - traefik.http.routers.l2gethhealth.rule=Host(`${L2GETH_HEALTH_HOST}.${DOMAIN}`)
      - traefik.http.routers.l2gethhealth.tls.certresolver=letsencrypt
      - traefik.http.services.l2gethhealth.loadbalancer.server.port=3000
volumes:
  certs:
# Add AWS CLI to traefik image
FROM traefik:latest
RUN apk add --no-cache \
      python3 \
      py3-pip \
    && pip3 install --upgrade pip \
    && pip3 install \
      awscli \
      tzdata \
    && rm -rf /var/cache/apk/*
COMPOSE_PROJECT_NAME=op-replica
COMPOSE_FILE=replica.yml:replica-shared.yml:replica-toml.yml
ETH_NETWORK=kovan
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT=WONT_WORK_UNLESS_YOU_PROVIDE_A_VALID_ETHEREUM_L1_ENDPOINT
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=https://kovan.optimism.io
REPLICA_HEALTHCHECK__ETH_NETWORK_RPC_PROVIDER=https://kovan.optimism.io
SEQUENCER_CLIENT_HTTP=https://kovan.optimism.io
SHARED_ENV_PATH=../envs/kovan
GCMODE=archive
L2GETH_IMAGE_TAG=0.5.13
DTL_IMAGE_TAG=0.5.20
HC_IMAGE_TAG=
L2GETH_HTTP_PORT=9991
L2GETH_WS_PORT=9992
DTL_PORT=7878
GETH_INIT_SCRIPT=check-for-chaindata-berlin.sh
RESTART=unless-stopped
COMPOSE_PROJECT_NAME=op-replica
COMPOSE_FILE=replica.yml:replica-shared.yml:replica-toml.yml
ETH_NETWORK=mainnet
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT=WONT_WORK_UNLESS_YOU_PROVIDE_A_VALID_ETHEREUM_L1_ENDPOINT
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=https://mainnet.optimism.io
REPLICA_HEALTHCHECK__ETH_NETWORK_RPC_PROVIDER=https://mainnet.optimism.io
SEQUENCER_CLIENT_HTTP=https://mainnet.optimism.io
SHARED_ENV_PATH=../envs/mainnet
GCMODE=archive
L2GETH_IMAGE_TAG=0.5.12
DTL_IMAGE_TAG=0.5.20
HC_IMAGE_TAG=
L2GETH_HTTP_PORT=9991
L2GETH_WS_PORT=9992
DTL_PORT=7878
GETH_INIT_SCRIPT=check-for-chaindata-berlin.sh
RESTART=unless-stopped
version: "3.4"
services:
  l2geth-replica:
    ports:
      - ${L2GETH_HTTP_PORT:-9991}:8545
      - ${L2GETH_WS_PORT:-9992}:8546
  data-transport-layer:
    ports:
      - ${DTL_PORT:-7878}:7878
---
version: "3.4"
services:
  l2geth-replica:
    entrypoint:
      - /bin/sh
      - -c
      - "/scripts/$GETH_INIT_SCRIPT && /scripts/l2geth-replica-start.sh --config /scripts/l2geth.toml"
---
version: "3.4"
x-logging: &logging
  logging:
    driver: json-file
    options:
      max-size: 10m
      max-file: "3"
services:
  data-transport-layer:
    image: ethereumoptimism/data-transport-layer:${DTL_IMAGE_TAG:-latest}
    restart: ${RESTART}
    env_file:
      - ${SHARED_ENV_PATH}/data-transport-layer.env
      - .env
    volumes:
      - dtl:/db
    <<: *logging
  l2geth-replica:
    image: ethereumoptimism/l2geth:${L2GETH_IMAGE_TAG:-latest}
    restart: ${RESTART}
    stop_grace_period: 3m
    entrypoint:
      - /bin/sh
      - -c
      #- "sleep infinity"
      - "/scripts/$GETH_INIT_SCRIPT && /scripts/l2geth-replica-start.sh"
    env_file:
      - ${SHARED_ENV_PATH}/l2geth-replica.env
      - .env
    volumes:
      - geth:/geth
      - ../scripts/:/scripts/
    <<: *logging
  replica-healthcheck:
    image: ethereumoptimism/replica-healthcheck:${HC_IMAGE_TAG:-latest}
    restart: ${RESTART}
    env_file:
      - ${SHARED_ENV_PATH}/replica-healthcheck.env
      - .env
    volumes:
      - ../scripts/:/scripts/
    <<: *logging
volumes:
  dtl:
  geth:
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=0x3AD1eeD551d26335caD030911C15d008abBe9825
DATA_TRANSPORT_LAYER__CONFIRMATIONS=12
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__DB_PATH=/db
DATA_TRANSPORT_LAYER__DEFAULT_BACKEND=l2
DATA_TRANSPORT_LAYER__ENABLE_METRICS=true
DATA_TRANSPORT_LAYER__ETH_NETWORK_NAME=kovan
DATA_TRANSPORT_LAYER__L1_GAS_PRICE_BACKEND=l2
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT=http://failover-proxy.default:8000
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=69
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=http://sequencer.default:8545
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000
DATA_TRANSPORT_LAYER__NODE_ENV=production
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=500
DATA_TRANSPORT_LAYER__SENTRY_TRACE_RATE=0.05
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__SYNC_FROM_L1=false
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=true
DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
DEPLOYER_HTTP=
L1_NODE_WEB3_URL=http://failover-proxy.default:8000
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: data-transport-layer
    envs:
      - ./data-transport-layer.env
  - name: l2geth-replica
    envs:
      - ./l2geth-replica.env
  - name: replica-healthcheck
    envs:
      - ./replica-healthcheck.env
CHAIN_ID=69
DATADIR=/geth
DEV=true
NETWORK_ID=69
NO_DISCOVER=true
NO_USB=true
GASPRICE=0
GCMODE=archive
ETH1_CTC_DEPLOYMENT_HEIGHT=25502591
ETH1_L1_FEE_WALLET_ADDRESS=0x18394B52d3Cb931dfA76F63251919D051953413d
ETH1_L1_CROSS_DOMAIN_MESSENGER_ADDRESS=0x4361d0F75A0186C05f971c566dC6bEa5957483fD
ETH1_L1_ETH_GATEWAY_ADDRESS=
ETH1_L1_STANDARD_BRIDGE_ADDRESS=0x22F24361D548e5FaAfb36d1437839f080363982B
ETH1_SYNC_SERVICE_ENABLE=true
ROLLUP_ADDRESS_MANAGER_OWNER_ADDRESS=0x18394b52d3cb931dfa76f63251919d051953413d
ROLLUP_STATE_DUMP_PATH=https://storage.googleapis.com/optimism/kovan/v0.4.0-rc0.json
ROLLUP_BACKEND=l2
ROLLUP_CLIENT_HTTP=http://data-transport-layer:7878
ROLLUP_DISABLE_TRANSFERS=false
ROLLUP_ENABLE_L2_GAS_POLLING=false
ROLLUP_ENABLE_ARBITRARY_CONTRACT_DEPLOYMENT=true
ROLLUP_GAS_PRICE_ORACLE_OWNER_ADDRESS=0x18394B52d3Cb931dfA76F63251919D051953413d
ROLLUP_MAX_CALLDATA_SIZE=40000
ROLLUP_POLL_INTERVAL_FLAG=3s
ROLLUP_SYNC_SERVICE_ENABLE=true
ROLLUP_TIMESTAMP_REFRESH=3m
ROLLUP_VERIFIER_ENABLE=true
RPC_ADDR=0.0.0.0
RPC_API=eth,rollup,net,web3,debug
RPC_CORS_DOMAIN=*
RPC_ENABLE=true
RPC_PORT=8545
RPC_VHOSTS=*
TARGET_GAS_LIMIT=11000000
USING_OVM=true
WS_ADDR=0.0.0.0
WS_API=eth,rollup,net,web3,debug
WS_ORIGINS=*
WS=true
REPLICA_HEALTHCHECK__ETH_NETWORK=kovan
REPLICA_HEALTHCHECK__L2GETH_IMAGE_TAG=0.4.9
REPLICA_HEALTHCHECK__ETH_NETWORK_RPC_PROVIDER=http://sequencer.default:8545
REPLICA_HEALTHCHECK__ETH_REPLICA_RPC_PROVIDER=http://l2geth-replica:8545
#!/bin/sh
set -exu
GETH_DATA_DIR=/geth
GETH_CHAINDATA_DIR=$GETH_DATA_DIR/geth/chaindata
GETH_KEYSTORE_DIR=$GETH_DATA_DIR/keystore
if [ ! -d "$GETH_KEYSTORE_DIR" ]; then
  echo "$GETH_KEYSTORE_DIR missing, running account import"
  echo -n "$BLOCK_SIGNER_PRIVATE_KEY_PASSWORD" > "$GETH_DATA_DIR"/password
  echo -n "$BLOCK_SIGNER_PRIVATE_KEY" > "$GETH_DATA_DIR"/block-signer-key
  geth account import \
    --datadir="$GETH_DATA_DIR" \
    --password="$GETH_DATA_DIR"/password \
    "$GETH_DATA_DIR"/block-signer-key
  echo "geth account import complete"
fi
if [ ! -d "$GETH_CHAINDATA_DIR" ]; then
  echo "$GETH_CHAINDATA_DIR missing, running init"
  geth init --datadir="$GETH_DATA_DIR" "$L2GETH_GENESIS_URL" "$L2GETH_GENESIS_HASH"
  echo "geth init complete"
else
  echo "$GETH_CHAINDATA_DIR exists, checking for hardfork."
  echo "Chain config:"
  geth dump-chain-cfg --datadir="$GETH_DATA_DIR"
  if geth dump-chain-cfg --datadir="$GETH_DATA_DIR" | grep -q "\"berlinBlock\": $L2GETH_BERLIN_ACTIVATION_HEIGHT"; then
    echo "Hardfork already activated."
  else
    echo "Hardfork not activated, running init."
    geth init --datadir="$GETH_DATA_DIR" "$L2GETH_GENESIS_URL" "$L2GETH_GENESIS_HASH"
    echo "geth hardfork activation complete"
  fi
fi
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=0x100Dd3b414Df5BbA2B542864fF94aF8024aFdf3a
DATA_TRANSPORT_LAYER__CONFIRMATIONS=12
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__DB_PATH=/db
DATA_TRANSPORT_LAYER__DEFAULT_BACKEND=l2
DATA_TRANSPORT_LAYER__ENABLE_METRICS=true
DATA_TRANSPORT_LAYER__ETH_NETWORK_NAME=kovan
DATA_TRANSPORT_LAYER__L1_GAS_PRICE_BACKEND=l2
DATA_TRANSPORT_LAYER__L1_START_HEIGHT=27989623
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=69
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000
DATA_TRANSPORT_LAYER__NODE_ENV=production
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=500
DATA_TRANSPORT_LAYER__SENTRY_TRACE_RATE=0.05
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__SYNC_FROM_L1=false
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=true
DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: data-transport-layer
    envs:
      - ./data-transport-layer.env
  - name: l2geth-replica
    envs:
      - ./l2geth-replica.env
  - name: replica-healthcheck
    envs:
      - ./replica-healthcheck.env
  - name: geth-scripts
    files:
      - ./check-for-chaindata.sh
CHAIN_ID=69
DATADIR=/geth
NETWORK_ID=69
NO_DISCOVER=true
NO_USB=true
GASPRICE=0
GCMODE=archive
BLOCK_SIGNER_ADDRESS=0x00000398232E2064F896018496b4b44b3D62751F
BLOCK_SIGNER_PRIVATE_KEY=6587ae678cf4fc9a33000cdbf9f35226b71dcc6a4684a31203241f9bcfd55d27
BLOCK_SIGNER_PRIVATE_KEY_PASSWORD=pwd
ETH1_CTC_DEPLOYMENT_HEIGHT=27989623
ETH1_L1_FEE_WALLET_ADDRESS=0x18394B52d3Cb931dfA76F63251919D051953413d
ETH1_L1_CROSS_DOMAIN_MESSENGER_ADDRESS=0x4361d0F75A0186C05f971c566dC6bEa5957483fD
ETH1_L1_STANDARD_BRIDGE_ADDRESS=0x22F24361D548e5FaAfb36d1437839f080363982B
ETH1_SYNC_SERVICE_ENABLE=true
L2GETH_GENESIS_URL=https://storage.googleapis.com/optimism/kovan/kovan-berlin-genesis.json
L2GETH_GENESIS_HASH=0xaed938bc5dee7eb703658d4bec1f3e28f8b92bd9c032b2be779186eafc2b5a2a
L2GETH_BERLIN_ACTIVATION_HEIGHT=1138900
ROLLUP_ADDRESS_MANAGER_OWNER_ADDRESS=0x100Dd3b414Df5BbA2B542864fF94aF8024aFdf3a
ROLLUP_BACKEND=l2
ROLLUP_CLIENT_HTTP=http://data-transport-layer:7878
ROLLUP_DISABLE_TRANSFERS=false
ROLLUP_ENABLE_L2_GAS_POLLING=false
ROLLUP_GAS_PRICE_ORACLE_OWNER_ADDRESS=0x18394B52d3Cb931dfA76F63251919D051953413d
ROLLUP_MAX_CALLDATA_SIZE=40000
ROLLUP_POLL_INTERVAL_FLAG=3s
ROLLUP_SYNC_SERVICE_ENABLE=true
ROLLUP_TIMESTAMP_REFRESH=3m
ROLLUP_VERIFIER_ENABLE=true
RPC_ADDR=0.0.0.0
RPC_API=eth,rollup,net,web3,debug
RPC_CORS_DOMAIN=*
RPC_ENABLE=true
RPC_PORT=8545
RPC_VHOSTS=*
SEQUENCER_CLIENT_HTTP=https://kovan.optimism.io
TARGET_GAS_LIMIT=15000000
USING_OVM=true
WS_ADDR=0.0.0.0
WS_API=eth,rollup,net,web3,debug
WS_ORIGINS=*
WS=true
REPLICA_HEALTHCHECK__ETH_NETWORK=kovan
REPLICA_HEALTHCHECK__L2GETH_IMAGE_TAG=0.4.9
REPLICA_HEALTHCHECK__ETH_REPLICA_RPC_PROVIDER=http://l2geth-replica:8545
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=0x100Dd3b414Df5BbA2B542864fF94aF8024aFdf3a
DATA_TRANSPORT_LAYER__CONFIRMATIONS=12
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__DB_PATH=/db
DATA_TRANSPORT_LAYER__DEFAULT_BACKEND=l2
DATA_TRANSPORT_LAYER__ENABLE_METRICS=true
DATA_TRANSPORT_LAYER__ETH_NETWORK_NAME=kovan
DATA_TRANSPORT_LAYER__L1_GAS_PRICE_BACKEND=l2
DATA_TRANSPORT_LAYER__L1_START_HEIGHT=27989623
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=69
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000
DATA_TRANSPORT_LAYER__NODE_ENV=production
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=500
DATA_TRANSPORT_LAYER__SENTRY_TRACE_RATE=0.05
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__SYNC_FROM_L1=false
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=true
DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
DEPLOYER_HTTP=
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: data-transport-layer
    envs:
      - ./data-transport-layer.env
  - name: l2geth-replica
    envs:
      - ./l2geth-replica.env
  - name: replica-healthcheck
    envs:
      - ./replica-healthcheck.env
CHAIN_ID=69
DATADIR=/geth
NETWORK_ID=69
NO_DISCOVER=true
NO_USB=true
GASPRICE=0
GCMODE=archive
BLOCK_SIGNER_ADDRESS=0x00000398232E2064F896018496b4b44b3D62751F
BLOCK_SIGNER_PRIVATE_KEY=6587ae678cf4fc9a33000cdbf9f35226b71dcc6a4684a31203241f9bcfd55d27
BLOCK_SIGNER_PRIVATE_KEY_PASSWORD=pwd
ETH1_CTC_DEPLOYMENT_HEIGHT=27989623
ETH1_L1_FEE_WALLET_ADDRESS=0x18394B52d3Cb931dfA76F63251919D051953413d
ETH1_L1_CROSS_DOMAIN_MESSENGER_ADDRESS=0x4361d0F75A0186C05f971c566dC6bEa5957483fD
ETH1_L1_STANDARD_BRIDGE_ADDRESS=0x22F24361D548e5FaAfb36d1437839f080363982B
ETH1_SYNC_SERVICE_ENABLE=true
L2GETH_GENESIS_URL=https://storage.googleapis.com/optimism/kovan/v0.5.0-rc2.json
L2GETH_GENESIS_URL_SHA256SUM=17bc1ef020273bcaaa21b05666f912ebf330c0e99a7963b9e5ed61d649043fbd
ROLLUP_ADDRESS_MANAGER_OWNER_ADDRESS=0x100Dd3b414Df5BbA2B542864fF94aF8024aFdf3a
ROLLUP_BACKEND=l2
ROLLUP_CLIENT_HTTP=http://data-transport-layer:7878
ROLLUP_DISABLE_TRANSFERS=false
ROLLUP_ENABLE_L2_GAS_POLLING=false
ROLLUP_GAS_PRICE_ORACLE_OWNER_ADDRESS=0x18394B52d3Cb931dfA76F63251919D051953413d
ROLLUP_MAX_CALLDATA_SIZE=40000
ROLLUP_POLL_INTERVAL_FLAG=3s
ROLLUP_SYNC_SERVICE_ENABLE=true
ROLLUP_TIMESTAMP_REFRESH=3m
ROLLUP_VERIFIER_ENABLE=true
RPC_ADDR=0.0.0.0
RPC_API=eth,rollup,net,web3,debug
RPC_CORS_DOMAIN=*
RPC_ENABLE=true
RPC_PORT=8545
RPC_VHOSTS=*
TARGET_GAS_LIMIT=15000000
USING_OVM=true
WS_ADDR=0.0.0.0
WS_API=eth,rollup,net,web3,debug
WS_ORIGINS=*
WS=true
REPLICA_HEALTHCHECK__ETH_NETWORK=kovan
REPLICA_HEALTHCHECK__L2GETH_IMAGE_TAG=0.4.9
REPLICA_HEALTHCHECK__ETH_REPLICA_RPC_PROVIDER=http://l2geth-replica:8545
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=0x100Dd3b414Df5BbA2B542864fF94aF8024aFdf3a
DATA_TRANSPORT_LAYER__CONFIRMATIONS=12
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__DB_PATH=/db
DATA_TRANSPORT_LAYER__DEFAULT_BACKEND=l2
DATA_TRANSPORT_LAYER__ENABLE_METRICS=true
DATA_TRANSPORT_LAYER__ETH_NETWORK_NAME=kovan
DATA_TRANSPORT_LAYER__L1_GAS_PRICE_BACKEND=l2
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT=http://failover-proxy.default:8000
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=69
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=http://sequencer.default:8545
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000
DATA_TRANSPORT_LAYER__NODE_ENV=production
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=500
DATA_TRANSPORT_LAYER__SENTRY_TRACE_RATE=0.05
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__SYNC_FROM_L1=false
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=true
DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
DEPLOYER_HTTP=
L1_NODE_WEB3_URL=http://failover-proxy.default:8000
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: data-transport-layer
  envs:
  - ./data-transport-layer.env
- name: l2geth-replica
  envs:
  - ./l2geth-replica.env
- name: replica-healthcheck
  envs:
  - ./replica-healthcheck.env
CHAIN_ID=69
DATADIR=/geth
NETWORK_ID=69
NO_DISCOVER=true
NO_USB=true
GASPRICE=0
GCMODE=archive
BLOCK_SIGNER_ADDRESS=0x27770a9694e4B4b1E130Ab91Bc327C36855f612E
BLOCK_SIGNER_PRIVATE_KEY=da5deb73dbc9dea2e3916929daaf079f75232d32a2cf37ff8b1f7140ef3fd9db
BLOCK_SIGNER_PRIVATE_KEY_PASSWORD=pwd
ETH1_CTC_DEPLOYMENT_HEIGHT=27989623
ETH1_L1_FEE_WALLET_ADDRESS=0x18394B52d3Cb931dfA76F63251919D051953413d
ETH1_L1_CROSS_DOMAIN_MESSENGER_ADDRESS=0x4361d0F75A0186C05f971c566dC6bEa5957483fD
ETH1_L1_STANDARD_BRIDGE_ADDRESS=0x22F24361D548e5FaAfb36d1437839f080363982B
ETH1_SYNC_SERVICE_ENABLE=true
L2GETH_GENESIS_URL=https://storage.googleapis.com/optimism/kovan/v0.5.0-rc2.json
L2GETH_GENESIS_URL_SHA256SUM=17bc1ef020273bcaaa21b05666f912ebf330c0e99a7963b9e5ed61d649043fbd
ROLLUP_ADDRESS_MANAGER_OWNER_ADDRESS=0x18394B52d3Cb931dfA76F63251919D051953413d
ROLLUP_BACKEND=l2
ROLLUP_CLIENT_HTTP=http://data-transport-layer:7878
ROLLUP_DISABLE_TRANSFERS=false
ROLLUP_ENABLE_L2_GAS_POLLING=false
ROLLUP_ENABLE_ARBITRARY_CONTRACT_DEPLOYMENT=true
ROLLUP_GAS_PRICE_ORACLE_OWNER_ADDRESS=0x18394B52d3Cb931dfA76F63251919D051953413d
ROLLUP_MAX_CALLDATA_SIZE=40000
ROLLUP_POLL_INTERVAL_FLAG=3s
ROLLUP_SYNC_SERVICE_ENABLE=true
ROLLUP_TIMESTAMP_REFRESH=3m
ROLLUP_VERIFIER_ENABLE=true
RPC_ADDR=0.0.0.0
RPC_API=eth,rollup,net,web3,debug
RPC_CORS_DOMAIN=*
RPC_ENABLE=true
RPC_PORT=8545
RPC_VHOSTS=*
TARGET_GAS_LIMIT=15000000
USING_OVM=true
WS_ADDR=0.0.0.0
WS_API=eth,rollup,net,web3,debug
WS_ORIGINS=*
WS=true
REPLICA_HEALTHCHECK__ETH_NETWORK=kovan
REPLICA_HEALTHCHECK__L2GETH_IMAGE_TAG=prerelease-0.5.0-rc-7-ee217ce
REPLICA_HEALTHCHECK__ETH_NETWORK_RPC_PROVIDER=http://sequencer.default:8545
REPLICA_HEALTHCHECK__ETH_REPLICA_RPC_PROVIDER=http://l2geth-replica:8545
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=0x100Dd3b414Df5BbA2B542864fF94aF8024aFdf3a
DATA_TRANSPORT_LAYER__CONFIRMATIONS=12
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__DB_PATH=/db
DATA_TRANSPORT_LAYER__DEFAULT_BACKEND=l2
DATA_TRANSPORT_LAYER__ENABLE_METRICS=true
DATA_TRANSPORT_LAYER__ETH_NETWORK_NAME=kovan
DATA_TRANSPORT_LAYER__L1_GAS_PRICE_BACKEND=l2
DATA_TRANSPORT_LAYER__L1_START_HEIGHT=27989623
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=69
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000
DATA_TRANSPORT_LAYER__NODE_ENV=production
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=500
DATA_TRANSPORT_LAYER__SENTRY_TRACE_RATE=0.05
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__SYNC_FROM_L1=false
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=true
DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
DEPLOYER_HTTP=
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: data-transport-layer
  envs:
  - ./data-transport-layer.env
- name: l2geth-replica
  envs:
  - ./l2geth-replica.env
- name: replica-healthcheck
  envs:
  - ./replica-healthcheck.env
CHAIN_ID=69
DATADIR=/geth
NETWORK_ID=69
NO_DISCOVER=true
NO_USB=true
GASPRICE=0
GCMODE=archive
BLOCK_SIGNER_ADDRESS=0x00000398232E2064F896018496b4b44b3D62751F
BLOCK_SIGNER_PRIVATE_KEY=6587ae678cf4fc9a33000cdbf9f35226b71dcc6a4684a31203241f9bcfd55d27
BLOCK_SIGNER_PRIVATE_KEY_PASSWORD=pwd
ETH1_CTC_DEPLOYMENT_HEIGHT=27989623
ETH1_SYNC_SERVICE_ENABLE=true
L2GETH_GENESIS_URL=https://storage.googleapis.com/optimism/kovan/kovan-berlin-genesis.json
L2GETH_GENESIS_HASH=0xaed938bc5dee7eb703658d4bec1f3e28f8b92bd9c032b2be779186eafc2b5a2a
L2GETH_BERLIN_ACTIVATION_HEIGHT=1138900
ROLLUP_BACKEND=l2
ROLLUP_CLIENT_HTTP=http://data-transport-layer:7878
ROLLUP_MAX_CALLDATA_SIZE=40000
ROLLUP_POLL_INTERVAL_FLAG=3s
ROLLUP_SYNC_SERVICE_ENABLE=true
ROLLUP_TIMESTAMP_REFRESH=3m
ROLLUP_VERIFIER_ENABLE=true
RPC_ADDR=0.0.0.0
RPC_API=eth,rollup,net,web3,debug
RPC_CORS_DOMAIN=*
RPC_ENABLE=true
RPC_PORT=8545
RPC_VHOSTS=*
TARGET_GAS_LIMIT=15000000
USING_OVM=true
WS_ADDR=0.0.0.0
WS_API=eth,rollup,net,web3,debug
WS_ORIGINS=*
WS=true
IPC_DISABLE=true
REPLICA_HEALTHCHECK__ETH_NETWORK=kovan
REPLICA_HEALTHCHECK__L2GETH_IMAGE_TAG=0.4.9
REPLICA_HEALTHCHECK__ETH_NETWORK_RPC_PROVIDER=http://sequencer.default:8545
REPLICA_HEALTHCHECK__ETH_REPLICA_RPC_PROVIDER=http://l2geth-replica:8545
#!/bin/sh
set -exu
GETH_DATA_DIR=/geth
GETH_CHAINDATA_DIR=$GETH_DATA_DIR/geth/chaindata
GETH_KEYSTORE_DIR=$GETH_DATA_DIR/keystore
if [ ! -d "$GETH_KEYSTORE_DIR" ]; then
  echo "$GETH_KEYSTORE_DIR missing, running account import"
  echo -n "$BLOCK_SIGNER_PRIVATE_KEY_PASSWORD" > "$GETH_DATA_DIR"/password
  echo -n "$BLOCK_SIGNER_PRIVATE_KEY" > "$GETH_DATA_DIR"/block-signer-key
  geth account import \
    --datadir="$GETH_DATA_DIR" \
    --password="$GETH_DATA_DIR"/password \
    "$GETH_DATA_DIR"/block-signer-key
  echo "geth account import complete"
fi
if [ ! -d "$GETH_CHAINDATA_DIR" ]; then
  echo "$GETH_CHAINDATA_DIR missing, running init"
  geth init --datadir="$GETH_DATA_DIR" "$L2GETH_GENESIS_URL" "$L2GETH_GENESIS_HASH"
  echo "geth init complete"
else
  echo "$GETH_CHAINDATA_DIR exists, checking for hardfork."
  echo "Chain config:"
  geth dump-chain-cfg --datadir="$GETH_DATA_DIR"
  if geth dump-chain-cfg --datadir="$GETH_DATA_DIR" | grep -q "\"berlinBlock\": $L2GETH_BERLIN_ACTIVATION_HEIGHT"; then
    echo "Hardfork already activated."
  else
    echo "Hardfork not activated, running init."
    geth init --datadir="$GETH_DATA_DIR" "$L2GETH_GENESIS_URL" "$L2GETH_GENESIS_HASH"
    echo "geth hardfork activation complete"
  fi
fi
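The Kovan env files above pin `L2GETH_GENESIS_URL_SHA256SUM` alongside `L2GETH_GENESIS_URL`, but the init script does not check it. A minimal sketch of verifying a downloaded genesis file against such a digest before handing it to `geth init`; assumes `sha256sum` from coreutils is available, and the file path and `verify_genesis` helper name are illustrative:

```shell
#!/bin/sh
set -eu

# Compare the sha256 of a local genesis file against an expected digest;
# fail loudly on mismatch so geth init never runs on a corrupted download.
verify_genesis() {
  file="$1"
  expected="$2"
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  if [ "$actual" != "$expected" ]; then
    echo "genesis checksum mismatch: got $actual, want $expected" >&2
    return 1
  fi
  echo "genesis checksum OK"
}

# Demonstrate with a throwaway file standing in for the downloaded genesis.
printf '{"config":{}}' > /tmp/genesis.json
verify_genesis /tmp/genesis.json \
  "$(sha256sum /tmp/genesis.json | cut -d' ' -f1)"
```

In the real flow this check would run between the download and the `geth init` call, using `$L2GETH_GENESIS_URL_SHA256SUM` as the expected digest.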
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=0xdE1FCfB0851916CA5101820A69b13a4E276bd81F
DATA_TRANSPORT_LAYER__CONFIRMATIONS=12
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__DB_PATH=/db
DATA_TRANSPORT_LAYER__DEFAULT_BACKEND=l2
DATA_TRANSPORT_LAYER__ENABLE_METRICS=true
DATA_TRANSPORT_LAYER__ETH_NETWORK_NAME=mainnet
DATA_TRANSPORT_LAYER__L1_GAS_PRICE_BACKEND=l2
DATA_TRANSPORT_LAYER__L1_START_HEIGHT=13596466
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=10
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000
DATA_TRANSPORT_LAYER__NODE_ENV=production
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=500
DATA_TRANSPORT_LAYER__SENTRY_TRACE_RATE=0.05
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__SYNC_FROM_L1=false
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=true
DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
L1_NODE_WEB3_URL=http://failover-proxyd.default:8080
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: data-transport-layer
  envs:
  - ./data-transport-layer.env
- name: l2geth-replica
  envs:
  - ./l2geth-replica.env
- name: replica-healthcheck
  envs:
  - ./replica-healthcheck.env
- name: geth-scripts
  files:
  - ./check-for-chaindata.sh
CHAIN_ID=10
DATADIR=/geth
NETWORK_ID=10
NO_DISCOVER=true
NO_USB=true
GASPRICE=0
GCMODE=archive
BLOCK_SIGNER_ADDRESS=0x00000398232E2064F896018496b4b44b3D62751F
BLOCK_SIGNER_PRIVATE_KEY=6587ae678cf4fc9a33000cdbf9f35226b71dcc6a4684a31203241f9bcfd55d27
BLOCK_SIGNER_PRIVATE_KEY_PASSWORD=pwd
ETH1_CTC_DEPLOYMENT_HEIGHT=13596466
ETH1_L1_FEE_WALLET_ADDRESS=0x391716d440c151c42cdf1c95c1d83a5427bca52c
ETH1_L1_CROSS_DOMAIN_MESSENGER_ADDRESS=0x25ace71c97B33Cc4729CF772ae268934F7ab5fA1
ETH1_L1_STANDARD_BRIDGE_ADDRESS=0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1
ETH1_SYNC_SERVICE_ENABLE=true
L2GETH_GENESIS_URL=https://storage.googleapis.com/optimism/mainnet/genesis-berlin.json
L2GETH_GENESIS_HASH=0x106b0a3247ca54714381b1109e82cc6b7e32fd79ae56fbcc2e7b1541122f84ea
L2GETH_BERLIN_ACTIVATION_HEIGHT=3950000
ROLLUP_ADDRESS_MANAGER_OWNER_ADDRESS=0x9BA6e03D8B90dE867373Db8cF1A58d2F7F006b3A
ROLLUP_BACKEND=l2
ROLLUP_CLIENT_HTTP=http://data-transport-layer:7878
ROLLUP_DISABLE_TRANSFERS=false
ROLLUP_ENABLE_L2_GAS_POLLING=false
ROLLUP_GAS_PRICE_ORACLE_OWNER_ADDRESS=0x7107142636C85c549690b1Aca12Bdb8052d26Ae6
ROLLUP_MAX_CALLDATA_SIZE=40000
ROLLUP_POLL_INTERVAL_FLAG=1s
ROLLUP_SYNC_SERVICE_ENABLE=true
ROLLUP_TIMESTAMP_REFRESH=5m
ROLLUP_VERIFIER_ENABLE=true
RPC_ADDR=0.0.0.0
RPC_API=eth,rollup,net,web3,debug
RPC_CORS_DOMAIN=*
RPC_ENABLE=true
RPC_PORT=8545
RPC_VHOSTS=*
TARGET_GAS_LIMIT=15000000
USING_OVM=true
WS_ADDR=0.0.0.0
WS_API=eth,rollup,net,web3,debug
WS_ORIGINS=*
WS=true
REPLICA_HEALTHCHECK__ETH_NETWORK=mainnet
REPLICA_HEALTHCHECK__L2GETH_IMAGE_TAG=0.5.11
REPLICA_HEALTHCHECK__ETH_REPLICA_RPC_PROVIDER=http://l2geth-replica:8545
#!/bin/sh
set -exu
GETH_DATA_DIR=/geth
GETH_CHAINDATA_DIR=$GETH_DATA_DIR/geth/chaindata
GETH_KEYSTORE_DIR=$GETH_DATA_DIR/keystore
if [ ! -d "$GETH_KEYSTORE_DIR" ]; then
  echo "$GETH_KEYSTORE_DIR missing, running account import"
  echo -n "$BLOCK_SIGNER_PRIVATE_KEY_PASSWORD" > "$GETH_DATA_DIR"/password
  echo -n "$BLOCK_SIGNER_PRIVATE_KEY" > "$GETH_DATA_DIR"/block-signer-key
  geth account import \
    --datadir="$GETH_DATA_DIR" \
    --password="$GETH_DATA_DIR"/password \
    "$GETH_DATA_DIR"/block-signer-key
  echo "geth account import complete"
fi
if [ ! -d "$GETH_CHAINDATA_DIR" ]; then
  echo "$GETH_CHAINDATA_DIR missing, running init"
  geth init --datadir="$GETH_DATA_DIR" "$L2GETH_GENESIS_URL" "$L2GETH_GENESIS_HASH"
  echo "geth init complete"
else
  echo "$GETH_CHAINDATA_DIR exists, checking for hardfork."
  echo "Chain config:"
  geth dump-chain-cfg --datadir="$GETH_DATA_DIR"
  if geth dump-chain-cfg --datadir="$GETH_DATA_DIR" | grep -q "\"berlinBlock\": $L2GETH_BERLIN_ACTIVATION_HEIGHT"; then
    echo "Hardfork already activated."
  else
    echo "Hardfork not activated, running init."
    geth init --datadir="$GETH_DATA_DIR" "$L2GETH_GENESIS_URL" "$L2GETH_GENESIS_HASH"
    echo "geth hardfork activation complete"
  fi
fi
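The hardfork branch of the script above decides whether to re-run `geth init` by grepping the `dump-chain-cfg` output for the expected `berlinBlock` height. The same check can be exercised against a canned chain config instead of a live datadir; the JSON values here are illustrative:

```shell
#!/bin/sh
set -eu

# Stand-in for `geth dump-chain-cfg` output on an already-upgraded datadir.
L2GETH_BERLIN_ACTIVATION_HEIGHT=3950000
cfg='{ "chainId": 10, "berlinBlock": 3950000 }'

# Same grep the init script uses: match the exact activation height.
if echo "$cfg" | grep -q "\"berlinBlock\": $L2GETH_BERLIN_ACTIVATION_HEIGHT"; then
  echo "Hardfork already activated."
else
  echo "Hardfork not activated, running init."
fi
```

Note the check is a literal string match, so it only passes when `dump-chain-cfg` prints the height with that exact `"berlinBlock": N` formatting; a config with a different height (or different JSON spacing) falls through to the re-init branch.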
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=0xdE1FCfB0851916CA5101820A69b13a4E276bd81F
DATA_TRANSPORT_LAYER__CONFIRMATIONS=12
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__DB_PATH=/db
DATA_TRANSPORT_LAYER__DEFAULT_BACKEND=l1
DATA_TRANSPORT_LAYER__ENABLE_METRICS=true
DATA_TRANSPORT_LAYER__ETH_NETWORK_NAME=mainnet
DATA_TRANSPORT_LAYER__L1_GAS_PRICE_BACKEND=l1
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT=http://failover-proxyd.default:8080
DATA_TRANSPORT_LAYER__L1_START_HEIGHT=13596466
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=10
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=http://sequencer.default:8545
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000
DATA_TRANSPORT_LAYER__NODE_ENV=production
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=500
DATA_TRANSPORT_LAYER__SENTRY_TRACE_RATE=0.05
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__SYNC_FROM_L1=true
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=false
DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
L1_NODE_WEB3_URL=http://failover-proxyd.default:8080
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: data-transport-layer
  envs:
  - ./data-transport-layer.env
- name: l2geth-replica
  envs:
  - ./l2geth-replica.env
- name: replica-healthcheck
  envs:
  - ./replica-healthcheck.env
- name: geth-scripts
  files:
  - ./check-for-chaindata.sh
CHAIN_ID=10
DATADIR=/geth
NETWORK_ID=10
NO_DISCOVER=true
NO_USB=true
GASPRICE=0
GCMODE=archive
BLOCK_SIGNER_ADDRESS=0x00000398232E2064F896018496b4b44b3D62751F
BLOCK_SIGNER_PRIVATE_KEY=6587ae678cf4fc9a33000cdbf9f35226b71dcc6a4684a31203241f9bcfd55d27
BLOCK_SIGNER_PRIVATE_KEY_PASSWORD=pwd
ETH1_CTC_DEPLOYMENT_HEIGHT=13596466
ETH1_L1_FEE_WALLET_ADDRESS=0x391716d440c151c42cdf1c95c1d83a5427bca52c
ETH1_L1_CROSS_DOMAIN_MESSENGER_ADDRESS=0x25ace71c97B33Cc4729CF772ae268934F7ab5fA1
ETH1_L1_STANDARD_BRIDGE_ADDRESS=0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1
ETH1_SYNC_SERVICE_ENABLE=true
L2GETH_GENESIS_URL=https://storage.googleapis.com/optimism/mainnet/genesis-berlin.json
L2GETH_GENESIS_HASH=0x106b0a3247ca54714381b1109e82cc6b7e32fd79ae56fbcc2e7b1541122f84ea
L2GETH_BERLIN_ACTIVATION_HEIGHT=3950000
ROLLUP_ADDRESS_MANAGER_OWNER_ADDRESS=0x9BA6e03D8B90dE867373Db8cF1A58d2F7F006b3A
ROLLUP_BACKEND=l1
ROLLUP_CLIENT_HTTP=http://data-transport-layer:7878
ROLLUP_DISABLE_TRANSFERS=false
ROLLUP_ENABLE_L2_GAS_POLLING=false
ROLLUP_GAS_PRICE_ORACLE_OWNER_ADDRESS=0x7107142636C85c549690b1Aca12Bdb8052d26Ae6
ROLLUP_MAX_CALLDATA_SIZE=40000
ROLLUP_POLL_INTERVAL_FLAG=1s
ROLLUP_SYNC_SERVICE_ENABLE=true
ROLLUP_TIMESTAMP_REFRESH=5m
ROLLUP_VERIFIER_ENABLE=true
RPC_ADDR=0.0.0.0
RPC_API=eth,rollup,net,web3,debug
RPC_CORS_DOMAIN=*
RPC_ENABLE=true
RPC_PORT=8545
RPC_VHOSTS=*
TARGET_GAS_LIMIT=15000000
USING_OVM=true
WS_ADDR=0.0.0.0
WS_API=eth,rollup,net,web3,debug
WS_ORIGINS=*
WS=true
REPLICA_HEALTHCHECK__ETH_NETWORK=mainnet
REPLICA_HEALTHCHECK__L2GETH_IMAGE_TAG=0.5.13
REPLICA_HEALTHCHECK__ETH_NETWORK_RPC_PROVIDER=http://sequencer.default:8545
REPLICA_HEALTHCHECK__ETH_REPLICA_RPC_PROVIDER=http://l2geth-replica:8545
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=0xdE1FCfB0851916CA5101820A69b13a4E276bd81F
DATA_TRANSPORT_LAYER__CONFIRMATIONS=12
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__DB_PATH=/db
DATA_TRANSPORT_LAYER__DEFAULT_BACKEND=l1
DATA_TRANSPORT_LAYER__ENABLE_METRICS=true
DATA_TRANSPORT_LAYER__ETH_NETWORK_NAME=mainnet
DATA_TRANSPORT_LAYER__L1_GAS_PRICE_BACKEND=l1
DATA_TRANSPORT_LAYER__L1_RPC_ENDPOINT=http://failover-proxyd.default:8080
DATA_TRANSPORT_LAYER__L1_START_HEIGHT=13596466
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=10
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=http://sequencer.default:8545
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000
DATA_TRANSPORT_LAYER__NODE_ENV=production
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=500
DATA_TRANSPORT_LAYER__SENTRY_TRACE_RATE=0.05
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__SYNC_FROM_L1=true
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=false
DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
DEPLOYER_HTTP=
L1_NODE_WEB3_URL=http://failover-proxyd.default:8080
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: data-transport-layer
  envs:
  - ./data-transport-layer.env
- name: l2geth-replica
  envs:
  - ./l2geth-replica.env
- name: replica-healthcheck
  envs:
  - ./replica-healthcheck.env
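Each `configMapGenerator` entry above reads the listed `.env` file line by line and turns every `KEY=VALUE` pair into a ConfigMap data key. A rough shell approximation of that parse, using a throwaway env file with illustrative contents:

```shell
#!/bin/sh
set -eu

# Throwaway env file standing in for e.g. ./l2geth-replica.env.
cat > /tmp/demo.env <<'EOF'
RPC_PORT=8545
WS=true
EOF

# Split each line on the first '=' into a key and a value, the way the
# generator maps env entries onto ConfigMap data fields.
while IFS='=' read -r key value; do
  echo "data[$key] = $value"
done < /tmp/demo.env
```

This is only a sketch of the mapping; the real generator also appends a content hash to the ConfigMap name so that workloads roll when the env file changes.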