Commit a965bebf authored by Hamdi Allam, committed by GitHub

feat(indexer): default devnet config + grafana dashboards (#8732)

* temp

* temp

* remove prism, changelog

* update toml to devnet default. no default in config deser, fail if not set

* grafana & prometheus config files

* add grafana/prometheus to dockerfile. updated and moved into /ops

* update docs

* nits

* unused var

* move Dockerfile

* nits

* fix toml & test

* docker-compose update
parent 306b47d8
# @eth-optimism/indexer
## 0.7.0
### Minor Changes
- ed50bd5b4: Bump indexer
## 0.6.0
### Minor Changes
- ecf0cc59b: Fix startup issues, add L2 conf depth
## 0.5.0
### Minor Changes
- f030d8660: Adds support for the OptimismPortal's new `WithdrawalProven` event to the indexer
## 0.4.0
### Minor Changes
- 1bfe79f20: Adds an implementation of the Two Step Withdrawals V2 proposal
### Patch Changes
- f49b71d50: Updated forge-std version
## 0.3.3
### Patch Changes
- 587f309bf: Fix the docker build
## 0.3.2
### Patch Changes
- f505078be: Update go-ethereum to v1.10.26
## 0.3.1
### Patch Changes
- 4006ef3a: Delete unused flags
## 0.3.0
### Minor Changes
- 19e581d8: Bedrock support
## 0.2.0
### Minor Changes
- 4b0d5109: This release supports bedrock contracts and is configured for the public alpha testnet on goerli.
## 0.1.4
### Patch Changes
- 74babaa4: Delete dead file
- 3e67f784: Update go-ethereum to 1.10.21
- ec8d6b7c: Remove some duplicated code
## 0.1.3
### Patch Changes
- f30a5d39: Fix contract bindings
## 0.1.2
### Patch Changes
- b3921408: Fix a couple semgrep issues
- c6f6d68b: Deduplicate some l2geth and geth utils
- 0b30ae05: Use op-bindings package for address manager
## 0.1.1
### Patch Changes
- 6f458607: Bump go-ethereum to 1.10.17
## 0.1.0
### Minor Changes
- 79be3e80: Add airdrops API
### Patch Changes
- d7bb9625: fix context reuse
## 0.0.4
### Patch Changes
- 1df934a1: Don't spam the backend
- 94876e28: fix (indexer): update l2 bridge addresses
## 0.0.3
### Patch Changes
- 160f4c3d: Update docker image to use golang 1.18.0
## 0.0.2
### Patch Changes
- 0e40dcb6: Indexer: initial release
- 93131547: Bump `go-ethereum` to `v1.10.16`
@@ -15,19 +15,15 @@ COPY ./op-bindings /app/op-bindings
 COPY ./op-service /app/op-service
 COPY ./op-node /app/op-node
 COPY ./op-chain-ops /app/op-chain-ops
+COPY ./go.mod /app/go.mod
+COPY ./go.sum /app/go.sum

 WORKDIR /app/indexer
 RUN make indexer

 FROM alpine:3.18

 COPY --from=builder /app/indexer/indexer /usr/local/bin
+COPY --from=builder /app/indexer/indexer.toml /app/indexer/indexer.toml
 COPY --from=builder /app/indexer/migrations /app/indexer/migrations

-WORKDIR /app
-CMD ["indexer"]
+WORKDIR /app/indexer
+ENV INDEXER_MIGRATIONS_DIR="/app/indexer/migrations"
+CMD ["indexer", "index", "--config", "/app/indexer/indexer.toml"]
@@ -8,6 +8,9 @@ LDFLAGS := -ldflags "$(LDFLAGSSTRING)"
 indexer:
 	env GO111MODULE=on go build -v $(LDFLAGS) ./cmd/indexer

+up:
+	docker-compose up --build
+
 clean:
 	rm indexer
@@ -2,78 +2,68 @@
 ## Getting started

-### Setup env
-The `indexer.toml` stores a set of preset environmental variables that can be used to run the indexer with the exception of the network specific `l1-rpc` and `l2-rpc` variables. The `indexer.toml` file can be ran as a default config, otherwise a custom `.toml` config can provided via the `--config` flag when running the application.
-
-### Setup polling intervals
-The indexer polls and processes batches from the L1 and L2 chains on a set interval/size. The default polling interval is 5 seconds for both chains with a default batch header size of 500. The polling frequency can be changed by setting the `l1-polling-interval` and `l2-polling-interval` values in the `indexer.toml` file. The batch header size can be changed by setting the `l1-batch-size` and `l2-batch-size` values in the `indexer.toml` file.
+### Configuration
+The `indexer.toml` contains configuration for the indexer. The file is templated for the local devnet; however, presets are available for [known networks](https://github.com/ethereum-optimism/optimism/blob/develop/indexer/config/presets.go). The file also templates the keys needed for custom networks, such as the rollup contract addresses and the `l1-starting-height` for the deployment height.
+
+Required configuration is the network-specific `(l1|l2)-rpc` urls, which must point to archival nodes, as well as the `(l1|l2)-polling-interval` & `(l1|l2)-header-buffer-size` keys, which control the rate of data retrieval from these nodes.
+
+An optional `l1-starting-height` value can be provided to specify the L1 block height to begin indexing from. This should ideally be an L1 block that holds a correlated L2 genesis commitment. Furthermore, this value must be less than the current L1 block height to pass validation. If no starting height is provided and the database is empty, the indexer will begin sequentially processing from L1 genesis.
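For reference, the required keys above can be collected into a minimal `indexer.toml` sketch. The values are illustrative placeholders taken from the config tests in this commit; the `[db]` key names are assumptions based on the `conf.DB` fields and may differ from the actual schema:

```toml
[chain]
preset = 420

l1-polling-interval = 5000
l2-polling-interval = 5000
l1-header-buffer-size = 1000
l2-header-buffer-size = 1000

[rpcs]
l1-rpc = "https://l1.example.com"
l2-rpc = "https://l2.example.com"

[db]
host = "127.0.0.1"
port = 5432
user = "postgres"
name = "indexer"
```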
 ### Testing
 All tests can be run with `make test` from the `/indexer` directory. This will run all unit and e2e tests.

-**NOTE:** Successfully running the E2E tests requires spinning up a local L1 geth node and pre-populating it with necessary bedrock genesis state. This can be done by calling `make devnet-allocs` from the root of the optimism monorepo before running the indexer tests. More information on this can be found in the [op-e2e README](../op-e2e/README.md).
+> **NOTE:** Successfully running the E2E tests requires spinning up a local devnet via [op-e2e](https://github.com/ethereum-optimism/optimism/tree/develop/op-e2e) and pre-populating it with the necessary bedrock genesis state. This genesis state is generated by invoking the `make devnet-allocs` target from the root of the optimism monorepo before running the indexer tests. More information on this can be found in the [op-e2e README](../op-e2e/README.md). A postgres database with a password-less user (supplied via the `$DB_USER` env variable) must also be available on port 5432.
-### Run indexer vs goerli
-- install docker
-- `cp example.env .env`
-- fill in .env
-- run `docker compose up` to start the indexer vs optimism goerli network
-
-### Run indexer with go
-See the flags in `flags.go` for reference of what command line flags to pass to `go run`
-
-### Run indexer vs devnet
-TODO add indexer to the optimism devnet compose file (previously removed for breaking CI)
+### Run the Indexer (docker-compose)
+The local [docker-compose.yml](https://github.com/ethereum-optimism/optimism/blob/develop/indexer/docker-compose.yml) file spins up the **index, api, postgres, prometheus and grafana** services. The `indexer.toml` file is set up for the local devnet. To run against a live network, update the `indexer.toml` with the desired configuration.
+
+> The API and Postgres services have their ports mapped externally. Postgres is externally mapped on port 5433 to deconflict with any instance already running on the host.
+
+1. Install deps (Docker) and generate the genesis state: `make devnet-allocs`
+2. *Optional*: Start the devnet: `make devnet-up`
+3. Start the indexer: `cd indexer && docker-compose up`
+4. View the Grafana dashboard at http://localhost:3000
+   - **User**: admin, **Password**: optimism
-### Run indexer vs a custom configuration
-`docker-compose.dev.yml` is git ignored. Fill in your own docker-compose file here.
+### Run the Indexer (Go Binary or Dockerfile)
+1. Prepare the `indexer.toml` file
+2. Run database migrations: `indexer migrate --config <indexer.toml>`
+3. Run the index service: `indexer index --config <indexer.toml>`
+4. Run the api service: `indexer api --config <indexer.toml>`
+
+> Both the index and api services listen on an HTTP and a metrics port. Migrations should **always** be run prior to starting the indexer to ensure the latest schemas are set.
 ## Architecture
-![Architectural Diagram](./assets/architecture.png)
+![Architectural Diagram](./ops/assets/architecture.png)

 The indexer application supports two separate services for collective operation:

-**Indexer API** - Provides a lightweight API service that supports paginated lookups for bridge events.
+**Indexer API** - Provides a lightweight API service that supports paginated lookups for bridge data.
 **Indexer Service** - A polling based service that constantly reads and persists OP Stack chain data (i.e., block meta, system contract events, synchronized bridge events) from an L1 and L2 chain.

 ### Indexer API
-TBD
+See `api/api.go` & `api/routes/` for the available API endpoints for paginated retrieval of bridge data. **TBD** docs.

 ### Indexer Service
-![Service Component Diagram](./assets/indexer-service.png)
+![Service Component Diagram](./ops/assets/indexer-service.png)
 The indexer service is responsible for polling and processing real-time batches of L1 and L2 chain data. The indexer service is currently composed of the following key components:

-- **Poller Routines** - Individually polls the L1/L2 chain for new blocks and OP Stack system contract events.
+- **Poller Routines** - Individually polls the L1/L2 chain for new blocks and OP Stack contract events.
 - **Insertion Routines** - Awaits new batches from the poller routines and inserts them into the database upon retrieval.
 - **Bridge Routine** - Polls the database directly for new L1 blocks and bridge events. Upon retrieval, the bridge routine will:
     * Process and persist new bridge events
     * Synchronize L1 proven/finalized withdrawals with their L2 initialization counterparts

-### L1 Polling
-L1 blocks are only indexed if they contain L1 system contract events. This is done to reduce the amount of unnecessary data that is indexed. Because of this, the `l1_block_headers` table will not contain every L1 block header.
+#### L1 Poller
+L1 blocks are only indexed if they contain L1 contract events. This is done to reduce the amount of unnecessary data that is indexed. Because of this, the `l1_block_headers` table will not contain every L1 block header, unlike L2 blocks.

-#### API
-The indexer service runs a lightweight health server adjacently to the main service. The health server exposes a single endpoint `/healthz` that can be used to check the health of the indexer service. The health assessment doesn't check dependency health (ie. database) but rather checks the health of the indexer service itself.
+#### Database
+The indexer service currently supports a Postgres database for storing L1/L2 OP Stack chain data. The most up-to-date database schemas can be found in the `./migrations` directory. **Run the idempotent migrations prior to starting the indexer.**

-### Database
-The indexer service currently supports a Postgres database for storing L1/L2 OP Stack chain data. The most up-to-date database schemas can be found in the `./migrations` directory.
+#### HTTP
+The indexer service runs a lightweight health server alongside the main service. The health server exposes a single endpoint, `/healthz`, that can be used to check the health of the indexer service. The health assessment doesn't check dependency health (i.e. the database) but rather the health of the indexer service itself.
-## Metrics
+#### Metrics
 The indexer service exposes a set of Prometheus metrics that can be used to monitor the health of the service. The metrics are exposed via the `/metrics` endpoint on the health server.

-## Prerequisites
-Before launching an instance of the service, ensure you have the following:
-- A postgres database configured with user/password credentials.
-- Access to RPC endpoints for archival layer1 and layer2 nodes.
-- Access to at least two server instances with sufficient resources (TODO - Add resource reqs).
-- Use of a migration procedure for applying database schema changes.
-- Telemetry and monitoring configured for the service.
 ## Security
 All security related issues should be filed via github issues and will be triaged by the team. The following are some security considerations to be taken when running the service:
@@ -83,4 +73,4 @@ All security related issues should be filed via github issues and will be triage
 - Setting confirmation count values too low can result in indexing failures due to chain reorgs.

 ## Troubleshooting
-Please advise the [troubleshooting](./docs/troubleshooting.md) guide for common failure scenarios and how to resolve them.
+Please consult the [troubleshooting](./ops/docs/troubleshooting.md) guide for common failure scenarios and how to resolve them.
@@ -12,6 +12,7 @@ import (
 	"github.com/go-chi/chi/v5"
 	"github.com/go-chi/chi/v5/middleware"
 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/ethereum/go-ethereum/log"
@@ -31,8 +32,7 @@ const (
 	addressParam = "{address:%s}"

 	// Endpoint paths
-	// NOTE - This can be further broken out over time as new version iterations
-	// are implemented
+	DocsPath        = "/docs"
 	HealthPath      = "/healthz"
 	DepositsPath    = "/api/v0/deposits/"
 	WithdrawalsPath = "/api/v0/withdrawals/"
@@ -144,27 +144,29 @@ func (a *APIService) initRouter(apiConfig config.ServerConfig) {
 	apiRouter := chi.NewRouter()
 	h := routes.NewRoutes(a.log, apiRouter, svc)

-	promRecorder := metrics.NewPromHTTPRecorder(a.metricsRegistry, MetricsNamespace)
-	apiRouter.Use(chiMetricsMiddleware(promRecorder))
+	apiRouter.Use(middleware.Logger)
 	apiRouter.Use(middleware.Timeout(time.Duration(apiConfig.WriteTimeout) * time.Second))
 	apiRouter.Use(middleware.Recoverer)
 	apiRouter.Use(middleware.Heartbeat(HealthPath))
+	apiRouter.Use(chiMetricsMiddleware(metrics.NewPromHTTPRecorder(a.metricsRegistry, MetricsNamespace)))

 	apiRouter.Get(fmt.Sprintf(DepositsPath+addressParam, ethereumAddressRegex), h.L1DepositsHandler)
 	apiRouter.Get(fmt.Sprintf(WithdrawalsPath+addressParam, ethereumAddressRegex), h.L2WithdrawalsHandler)
 	apiRouter.Get(SupplyPath, h.SupplyView)
+	apiRouter.Get(DocsPath, h.DocsHandler)

 	a.router = apiRouter
 }

 // startServer ... Starts the API server
 func (a *APIService) startServer(serverConfig config.ServerConfig) error {
 	a.log.Debug("API server listening...", "port", serverConfig.Port)
 	addr := net.JoinHostPort(serverConfig.Host, strconv.Itoa(serverConfig.Port))
 	srv, err := httputil.StartHTTPServer(addr, a.router)
 	if err != nil {
 		return fmt.Errorf("failed to start API server: %w", err)
 	}
 	a.log.Info("API server started", "addr", srv.Addr().String())
 	a.apiServer = srv
 	return nil
 package config

 import (
+	"errors"
 	"fmt"
 	"os"
 	"reflect"
@@ -11,13 +12,8 @@ import (
 	"github.com/ethereum/go-ethereum/log"
 )

-const (
-	// default to 5 seconds
-	defaultLoopInterval     = 5000
-	defaultHeaderBufferSize = 500
-)
-
-// In the future, presets can just be onchain config and fetched on initialization
+// In the future, presets can just be onchain system config with everything else
+// fetched on initialization

 // Config represents the `indexer.toml` file used to configure the indexer
 type Config struct {
@@ -196,24 +192,21 @@ func LoadConfig(log log.Logger, path string) (Config, error) {
 		}
 	}

-	// Defaults for any unset options
+	// Check to make sure some required properties are set
+	var errs error
 	if cfg.Chain.L1PollingInterval == 0 {
-		cfg.Chain.L1PollingInterval = defaultLoopInterval
+		errs = errors.Join(errs, errors.New("`l1-polling-interval` unset"))
 	}
 	if cfg.Chain.L2PollingInterval == 0 {
-		cfg.Chain.L2PollingInterval = defaultLoopInterval
+		errs = errors.Join(errs, errors.New("`l2-polling-interval` unset"))
 	}
 	if cfg.Chain.L1HeaderBufferSize == 0 {
-		cfg.Chain.L1HeaderBufferSize = defaultHeaderBufferSize
+		errs = errors.Join(errs, errors.New("`l1-header-buffer-size` unset"))
 	}
 	if cfg.Chain.L2HeaderBufferSize == 0 {
-		cfg.Chain.L2HeaderBufferSize = defaultHeaderBufferSize
+		errs = errors.Join(errs, errors.New("`l2-header-buffer-size` unset"))
 	}

 	log.Info("loaded chain config", "config", cfg.Chain)
-	return cfg, nil
+	return cfg, errs
 }
@@ -22,6 +22,11 @@ func TestLoadConfig(t *testing.T) {
 	[chain]
 	preset = 420

+	l1-polling-interval = 5000
+	l2-polling-interval = 5000
+	l1-header-buffer-size = 1000
+	l2-header-buffer-size = 1000
+
 	[rpcs]
 	l1-rpc = "https://l1.example.com"
 	l2-rpc = "https://l2.example.com"
@@ -58,6 +63,10 @@ func TestLoadConfig(t *testing.T) {
 	require.Equal(t, conf.Chain.L1Contracts.L1CrossDomainMessengerProxy.String(), Presets[420].ChainConfig.L1Contracts.L1CrossDomainMessengerProxy.String())
 	require.Equal(t, conf.Chain.L1Contracts.L1StandardBridgeProxy.String(), Presets[420].ChainConfig.L1Contracts.L1StandardBridgeProxy.String())
 	require.Equal(t, conf.Chain.L1Contracts.L2OutputOracleProxy.String(), Presets[420].ChainConfig.L1Contracts.L2OutputOracleProxy.String())
+	require.Equal(t, conf.Chain.L1PollingInterval, uint(5000))
+	require.Equal(t, conf.Chain.L1HeaderBufferSize, uint(1000))
+	require.Equal(t, conf.Chain.L2PollingInterval, uint(5000))
+	require.Equal(t, conf.Chain.L2HeaderBufferSize, uint(1000))
 	require.Equal(t, conf.RPCs.L1RPC, "https://l1.example.com")
 	require.Equal(t, conf.RPCs.L2RPC, "https://l2.example.com")
 	require.Equal(t, conf.DB.Host, "127.0.0.1")
@@ -79,6 +88,10 @@ func TestLoadConfigWithoutPreset(t *testing.T) {
 	testData := `
 	[chain]
+	l1-polling-interval = 5000
+	l2-polling-interval = 5000
+	l1-header-buffer-size = 1000
+	l2-header-buffer-size = 1000

 	[chain.l1-contracts]
 	optimism-portal = "0x4205Fc579115071764c7423A4f12eDde41f106Ed"
@@ -108,12 +121,6 @@ func TestLoadConfigWithoutPreset(t *testing.T) {
 	require.Equal(t, conf.Chain.L1Contracts.L1CrossDomainMessengerProxy.String(), common.HexToAddress("0x420ce71c97B33Cc4729CF772ae268934F7ab5fA1").String())
 	require.Equal(t, conf.Chain.L1Contracts.L1StandardBridgeProxy.String(), common.HexToAddress("0x4209fc46f92E8a1c0deC1b1747d010903E884bE1").String())
 	require.Equal(t, conf.Chain.Preset, 0)
-
-	// Enforce polling default values
-	require.Equal(t, conf.Chain.L1PollingInterval, uint(5000))
-	require.Equal(t, conf.Chain.L2PollingInterval, uint(5000))
-	require.Equal(t, conf.Chain.L1HeaderBufferSize, uint(500))
-	require.Equal(t, conf.Chain.L2HeaderBufferSize, uint(500))
 }

 func TestLoadConfigWithUnknownPreset(t *testing.T) {
@@ -160,7 +167,8 @@ func TestLoadConfigPollingValues(t *testing.T) {
 	l1-polling-interval = 1000
 	l2-polling-interval = 1005
 	l1-header-buffer-size = 100
-	l2-header-buffer-size = 105`
+	l2-header-buffer-size = 105
+	`

 	data := []byte(testData)
 	err = os.WriteFile(tmpfile.Name(), data, 0644)
@@ -190,6 +198,11 @@ func TestLoadedConfigPresetPrecendence(t *testing.T) {
 	[chain]
 	preset = 10 # Optimism Mainnet

+	l1-polling-interval = 1000
+	l2-polling-interval = 1000
+	l1-header-buffer-size = 100
+	l2-header-buffer-size = 100
+
 	# confirmation depths are explicitly set
 	l1-confirmation-depth = 50
 	l2-confirmation-depth = 100
@@ -198,7 +211,6 @@ func TestLoadedConfigPresetPrecendence(t *testing.T) {
 	[chain.l1-contracts]
 	optimism-portal = "0x0000000000000000000000000000000000000001"
-
 	[rpcs]
 	l1-rpc = "https://l1.example.com"
 	l2-rpc = "https://l2.example.com"
@@ -235,6 +247,11 @@ func TestLocalDevnet(t *testing.T) {
 	[chain]
 	preset = 901

+	l1-polling-interval = 5000
+	l2-polling-interval = 5000
+	l1-header-buffer-size = 1000
+	l2-header-buffer-size = 1000
+
 	[rpcs]
 	l1-rpc = "https://l1.example.com"
 	l2-rpc = "https://l2.example.com"
@@ -3,6 +3,7 @@ package config
 import (
 	"encoding/json"
 	"errors"
+	"fmt"
 	"io/fs"
 	"os"
 	"path/filepath"
@@ -25,12 +26,12 @@ func DevnetPreset() (*Preset, error) {
 	devnetFilepath := filepath.Join(root, ".devnet", "addresses.json")
 	if _, err := os.Stat(devnetFilepath); errors.Is(err, fs.ErrNotExist) {
-		return nil, err
+		return nil, fmt.Errorf(".devnet/addresses.json not found. `make devnet-allocs` in monorepo root: %w", err)
 	}

 	content, err := os.ReadFile(devnetFilepath)
 	if err != nil {
-		return nil, err
+		return nil, fmt.Errorf("unable to read .devnet/addresses.json: %w", err)
 	}

 	var l1Contracts L1Contracts
@@ -33,6 +33,8 @@ var Presets = map[int]Preset{
 			L1StartingHeight:        13596466,
 			L1BedrockStartingHeight: 17422590,
 			L2BedrockStartingHeight: 105235063,
+			L1ConfirmationDepth:     10,
+			L2ConfirmationDepth:     75,
 		},
 	},
 	420: {
@@ -55,6 +57,8 @@ var Presets = map[int]Preset{
 			L1StartingHeight:        7017096,
 			L1BedrockStartingHeight: 8300214,
 			L2BedrockStartingHeight: 4061224,
+			L1ConfirmationDepth:     10,
+			L2ConfirmationDepth:     75,
 		},
 	},
 	11155420: {
@@ -70,7 +74,9 @@ var Presets = map[int]Preset{
 				L1StandardBridgeProxy: common.HexToAddress("0xFBb0621E0B23b5478B630BD55a5f21f67730B0F1"),
 				L1ERC721BridgeProxy:   common.HexToAddress("0xd83e03D576d23C9AEab8cC44Fa98d058D2176D1f"),
 			},
 			L1StartingHeight:    4071408,
+			L1ConfirmationDepth: 10,
+			L2ConfirmationDepth: 75,
 		},
 	},
 	8453: {
@@ -86,7 +92,9 @@ var Presets = map[int]Preset{
 				L1StandardBridgeProxy: common.HexToAddress("0x3154Cf16ccdb4C6d922629664174b904d80F2C35"),
 				L1ERC721BridgeProxy:   common.HexToAddress("0x608d94945A64503E642E6370Ec598e519a2C1E53"),
 			},
 			L1StartingHeight:    17481768,
+			L1ConfirmationDepth: 10,
+			L2ConfirmationDepth: 75,
 		},
 	},
 	84531: {
@@ -102,7 +110,9 @@ var Presets = map[int]Preset{
 				L1StandardBridgeProxy: common.HexToAddress("0xfA6D8Ee5BE770F84FC001D098C4bD604Fe01284a"),
 				L1ERC721BridgeProxy:   common.HexToAddress("0x5E0c967457347D5175bF82E8CCCC6480FCD7e568"),
 			},
 			L1StartingHeight:    8410981,
+			L1ConfirmationDepth: 10,
+			L2ConfirmationDepth: 75,
 		},
 	},
 	84532: {
@@ -118,7 +128,9 @@ var Presets = map[int]Preset{
 				L1StandardBridgeProxy: common.HexToAddress("0xfd0Bf71F60660E2f608ed56e1659C450eB113120"),
 				L1ERC721BridgeProxy:   common.HexToAddress("0x21eFD066e581FA55Ef105170Cc04d74386a09190"),
 			},
 			L1StartingHeight:    4370868,
+			L1ConfirmationDepth: 10,
+			L2ConfirmationDepth: 75,
 		},
 	},
 	7777777: {
@@ -134,7 +146,9 @@ var Presets = map[int]Preset{
 				L1StandardBridgeProxy: common.HexToAddress("0x3e2Ea9B92B7E48A52296fD261dc26fd995284631"),
 				L1ERC721BridgeProxy:   common.HexToAddress("0x83A4521A3573Ca87f3a971B169C5A0E1d34481c3"),
 			},
 			L1StartingHeight:    17473923,
+			L1ConfirmationDepth: 10,
+			L2ConfirmationDepth: 75,
 		},
 	},
 	999: {
@@ -150,7 +164,9 @@ var Presets = map[int]Preset{
 				L1StandardBridgeProxy: common.HexToAddress("0x7CC09AC2452D6555d5e0C213Ab9E2d44eFbFc956"),
 				L1ERC721BridgeProxy:   common.HexToAddress("0x57C1C6b596ce90C0e010c358DD4Aa052404bB70F"),
 			},
 			L1StartingHeight:    8942381,
+			L1ConfirmationDepth: 10,
+			L2ConfirmationDepth: 75,
 		},
 	},
 	424: {
@@ -166,7 +182,9 @@ var Presets = map[int]Preset{
 				L1StandardBridgeProxy: common.HexToAddress("0xD0204B9527C1bA7bD765Fa5CCD9355d38338272b"),
 				L1ERC721BridgeProxy:   common.HexToAddress("0xaFF0F8aaB6Cc9108D34b3B8423C76d2AF434d115"),
 			},
 			L1StartingHeight:    17672702,
+			L1ConfirmationDepth: 10,
+			L2ConfirmationDepth: 75,
 		},
 	},
 	58008: {
@@ -182,7 +200,9 @@ var Presets = map[int]Preset{
 				L1StandardBridgeProxy: common.HexToAddress("0xFaE6abCAF30D23e233AC7faF747F2fC3a5a6Bfa3"),
 				L1ERC721BridgeProxy:   common.HexToAddress("0xBA8397B6f255618D5985d0fB427D8c0496F3a5FA"),
 			},
 			L1StartingHeight:    17672702,
+			L1ConfirmationDepth: 10,
+			L2ConfirmationDepth: 75,
 		},
 	},
 }
`docker-compose.yml`, rewritten for the devnet defaults (`@@ -2,166 +2,106 @@`). The previous `migrations`, `indexer`, `ui`, `prisma-check`, and `backend-goerli` services are removed; migration and indexing are consolidated into a single `index` service, and `prometheus`/`grafana` services are added:

```yaml
version: '3.8'

services:
  postgres:
    image: postgres:14.1
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_DB=indexer
      - PGDATA=/data/postgres
      - POSTGRES_HOST_AUTH_METHOD=trust
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -q -U postgres -d indexer" ]
    ports:
      # deconflict with postgres that might be running already on
      # the host machine
      - "5433:5432"
    volumes:
      - postgres_data:/data/postgres
      - ./migrations:/docker-entrypoint-initdb.d/

  index:
    build:
      context: ..
      dockerfile: indexer/Dockerfile
    command: ["/bin/sh", "-c", "indexer migrate && indexer index"]
    expose:
      - "8100"
      - "7300"
    environment:
      - INDEXER_CONFIG=/app/indexer/config.toml
      - INDEXER_L1_RPC_URL=http://host.docker.internal:8545
      - INDEXER_L2_RPC_URL=http://host.docker.internal:9545
      - DB_HOST=postgres
      - DB_PORT=5432
      - DB_USER=postgres
      - DB_NAME=indexer
    volumes:
      - ./indexer.toml:/app/indexer/config.toml/:ro
      # needed only when running against the local devnet such
      # that it can bootstrap the local deployment addresses
      - ../go.mod:/app/go.mod/:ro
      - ../.devnet/addresses.json:/app/.devnet/addresses.json/:ro
    healthcheck:
      test: wget index:8100/healthz -q -O - > /dev/null 2>&1
    depends_on:
      postgres:
        condition: service_healthy

  api:
    build:
      context: ..
      dockerfile: indexer/Dockerfile
    command: ["indexer", "api"]
    environment:
      - INDEXER_CONFIG=/app/indexer/config.toml
      - DB_HOST=postgres
      - DB_PORT=5432
      - DB_USER=postgres
      - DB_NAME=indexer
    ports:
      - "8100:8100"
    expose:
      - "7300"
    volumes:
      - ./indexer.toml:/app/indexer/config.toml/:ro
      # needed only when running against the local devnet such
      # that it can bootstrap the local deployment addresses
      - ../go.mod:/app/go.mod/:ro
      - ../.devnet/addresses.json:/app/.devnet/addresses.json/:ro
    healthcheck:
      test: wget api:8100/healthz -q -O - > /dev/null 2>&1
    depends_on:
      postgres:
        condition: service_healthy

  prometheus:
    image: prom/prometheus:latest
    expose:
      - "9090"
    volumes:
      - ./ops/prometheus:/etc/prometheus/:ro
      - prometheus_data:/prometheus
    depends_on:
      index:
        condition: service_healthy
      api:
        condition: service_healthy

  grafana:
    image: grafana/grafana:latest
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=optimism
      - GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH=/var/lib/grafana/dashboards/indexer.json
    ports:
      - "3000:3000"
    volumes:
      - ./ops/grafana/provisioning:/etc/grafana/provisioning/:ro
      - ./ops/grafana/dashboards:/var/lib/grafana/dashboards/:ro
      - grafana_data:/var/lib/grafana
    depends_on:
      prometheus:
        condition: service_started

volumes:
  postgres_data:
  prometheus_data:
  grafana_data:
```
# Fill these in with Goerli URIs and run docker-compose up to run the indexer against Goerli
INDEXER_L1_ETH_RPC=FILL_ME_IN
INDEXER_L2_ETH_RPC=FILL_ME_IN
# Fill in to use prisma studio ui with a db other than the default
# DATABASE_URL=FILL_ME_IN
```diff
@@ -35,13 +35,11 @@ type Indexer struct {
 	l1Client node.EthClient
 	l2Client node.EthClient
 
-	// api server only really serves a /health endpoint here, but this may change in the future
-	apiServer *httputil.HTTPServer
-
-	metricsServer   *httputil.HTTPServer
-	metricsRegistry *prometheus.Registry
+	metricsRegistry *prometheus.Registry
+	apiServer       *httputil.HTTPServer
+	metricsServer   *httputil.HTTPServer
 
 	L1ETL           *etl.L1ETL
 	L2ETL           *etl.L2ETL
 	BridgeProcessor *processors.BridgeProcessor
@@ -235,15 +233,20 @@ func (ix *Indexer) startHttpServer(ctx context.Context, cfg config.ServerConfig)
 	ix.log.Debug("starting http server...", "port", cfg.Port)
 
 	r := chi.NewRouter()
+	r.Use(middleware.Logger)
 	r.Use(middleware.Heartbeat("/healthz"))
+	// needed so that the middleware gets invoked
+	r.Get("/", r.NotFoundHandler())
 
 	addr := net.JoinHostPort(cfg.Host, strconv.Itoa(cfg.Port))
 	srv, err := httputil.StartHTTPServer(addr, r)
 	if err != nil {
 		return fmt.Errorf("http server failed to start: %w", err)
 	}
 
-	ix.apiServer = srv
 	ix.log.Info("http server started", "addr", srv.Addr())
+	ix.apiServer = srv
 
 	return nil
 }
```
`indexer.toml`, updated to devnet defaults:

```toml
[chain]

## Required ETL configuration which controls
## the rate and range of data retrieval
l1-polling-interval = 5000 # 5s
l2-polling-interval = 5000

l1-header-buffer-size = 1000
l2-header-buffer-size = 1000

l1-confirmation-depth = 0 # unnecessary for devnet
l2-confirmation-depth = 0

### Option 1: Utilize a known preset for chain configuration.
### See config/presets.go for available presets,
### i.e. OP, Base, Zora, PGN, etc.
preset = 901 # Local Devnet

### Option 2: Custom networks
###
### Deployment block height of the rollup. Must be set
### correctly, otherwise the ETL will start from L1
### genesis, resulting in a lot of wasted work. Confirmation
### depths should be set appropriately to avoid the ETL
### getting terminally stuck due to a reorg.
# l1-starting-height = 0
#
# l1-confirmation-depth = 10
# l2-confirmation-depth = 75 (roughly a 10 L1 block equivalent)
#
### These contract addresses MUST be the Proxy contract
### addresses, not the implementations.
# [chain.l1-contracts]
# address-manager = ""
# system-config = ""
# optimism-portal = ""
# l2-output-oracle = ""
# l1-cross-domain-messenger = ""
# l1-standard-bridge = ""
# l1-erc721-bridge = ""

[rpcs]
l1-rpc = "${INDEXER_L1_RPC_URL}"
l2-rpc = "${INDEXER_L2_RPC_URL}"

[db]
host = "${DB_HOST}"
port = ${DB_PORT}
user = "${DB_USER}"
password = "${DB_PWD}"
name = "${DB_NAME}"

[http]
host = "0.0.0.0"
port = 8100
timeout = 10

[metrics]
host = "0.0.0.0"
port = 7300
```
Grafana dashboard provisioning config (under `./ops/grafana/provisioning`):

```yaml
apiVersion: 1

providers:
  - name: 'default'
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    options:
      path: /var/lib/grafana/dashboards
```
Grafana datasource provisioning config:

```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    uid: '6R74VAnVz'
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```
Prometheus scrape config (under `./ops/prometheus`):

```yaml
global:
  scrape_interval: 5s
  evaluation_interval: 5s

scrape_configs:
  - job_name: 'indexer'
    static_configs:
      - targets: ['index:7300']
```
`indexer/ui/Dockerfile`:

```dockerfile
FROM node:18.16.0-bullseye-slim

WORKDIR /app

RUN echo {} > package.json && \
    npm install prisma

COPY indexer/ui/prisma.sh prisma.sh
COPY indexer/ui/schema.prisma schema.prisma
RUN npx prisma generate --schema schema.prisma

CMD ["npx", "prisma", "studio", "--port", "5555", "--hostname", "0.0.0.0", "--schema", "schema.prisma"]
```
## @eth-optimism/indexer-ui
A simple UI for exploring the indexer DB using [Prisma studio](https://www.prisma.io)
## Usage
Included in the docker-compose file as the `ui` service
```bash
docker compose up
```
Prisma Studio can be viewed at [localhost:5555](http://localhost:5555)
## Update the schema
The [prisma schema](https://www.prisma.io/docs/reference/api-reference/prisma-schema-reference) is what allows prisma to work. It is automatically generated from the db schema.
To update the schema to the latest DB schema, start the database and run [./prisma.sh](./prisma.sh). Optionally pass in a `DATABASE_URL` if not using the default:

```bash
DATABASE_URL=postgresql://db_username:db_password@postgres:5432/db_name ./prisma.sh
```
## Other functionality
We mostly just use Prisma as a UI, but Prisma provides much other functionality that can be useful, including:

- Ability to change the [db schema](https://www.prisma.io/docs/reference/api-reference/command-reference#db-push) directly by modifying [schema.prisma](./schema.prisma) in place. This can be a fast way to [start prototyping](https://www.prisma.io/docs/guides/migrate/prototyping-schema-db-push)
- Ability to [seed the database](https://www.prisma.io/docs/guides/migrate/seed-database)
- Ability to write quick scripts with [prisma client](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference)
## Running prisma studio outside of docker
Prisma can also be run with [npx](https://docs.npmjs.com/cli/v8/commands/npx)
```bash
npx prisma studio --schema indexer/ui/schema.prisma
```
`prisma.sh`:

```bash
#!/usr/bin/env bash
# This script updates the prisma schema
#
SCRIPT_DIR=$( cd "$(dirname "${BASH_SOURCE[0]}")" || exit ; pwd -P )
DATABASE_URL=${DATABASE_URL:-postgresql://db_username:db_password@localhost:5434/db_name}
PRISMA_FILE="$SCRIPT_DIR/schema.prisma"
TEMP_FILE="$SCRIPT_DIR/temp-schema.prisma"

function update_prisma() {
  echo "Updating Prisma Schema..."
  npx prisma db pull --url "$DATABASE_URL" --schema "$PRISMA_FILE"
  echo "Update completed."
}

function check_prisma() {
  echo "Checking Prisma Schema..."
  cp "$PRISMA_FILE" "$TEMP_FILE"
  npx prisma db pull --url "$DATABASE_URL" --schema "$TEMP_FILE"
  if diff "$PRISMA_FILE" "$TEMP_FILE" > /dev/null; then
    echo "Prisma Schema is up-to-date."
    rm "$TEMP_FILE"
  else
    echo "Prisma Schema is not up-to-date."
    rm "$TEMP_FILE"
    return 1
  fi
}

if [ "$1" == "--check" ]; then
  check_prisma
else
  update_prisma
fi
```