Commit d820a8a7 authored by Joshua Gutow

Aggregate typo fixes

parent fe8e1217
@@ -14,7 +14,7 @@
 ## Overview
-This document contains guidelines best practices in PRs that should be enforced as much as possible. The motivations and goals behind these best practices are:
+This document contains guidelines and best practices in PRs that should be enforced as much as possible. The motivations and goals behind these best practices are:
 - **Ensure thorough reviews**: By the time the PR is merged, at least one other person—because there is always at least one reviewer—should understand the PR’s changes just as well as the PR author. This helps improve security by reducing bugs and single points of failure (i.e. there should never be only one person who understands certain code).
 - **Reduce PR churn**: PRs should be quickly reviewable and mergeable without much churn (both in terms of code rewrites and comment cycles). This saves time by reducing the need for rebases due to conflicts. Similarly, too many review cycles are a burden for both PR authors and reviewers, and results in “review fatigue” where reviews become less careful and thorough, increasing the likelihood of bugs.
...
@@ -130,7 +130,7 @@ func (a *APIService) Addr() string {
 func (a *APIService) initDB(ctx context.Context, connector DBConnector) error {
     db, err := connector.OpenDB(ctx, a.log)
     if err != nil {
-        return fmt.Errorf("failed to connect to databse: %w", err)
+        return fmt.Errorf("failed to connect to database: %w", err)
     }
     a.dbClose = db.Closer
     a.bv = db.BridgeTransfers
...
@@ -24,7 +24,7 @@ type DBConfigConnector struct {
 func (cfg *DBConfigConnector) OpenDB(ctx context.Context, log log.Logger) (*DB, error) {
     db, err := database.NewDB(ctx, log, cfg.DBConfig)
     if err != nil {
-        return nil, fmt.Errorf("failed to connect to databse: %w", err)
+        return nil, fmt.Errorf("failed to connect to database: %w", err)
     }
     return &DB{
         BridgeTransfers: db.BridgeTransfers,
...
@@ -20,7 +20,7 @@ Header traversal is a client abstraction that allows the indexer to sequentially
   * This error occurs when the indexer is operating on a different block state than the node. This is typically caused by network reorgs and is the result of `l1-confirmation-count` or `l2-confirmation-count` values being set too low. To resolve this issue, increase the confirmation count values and restart the indexer service.
 2. `the HeaderTraversal's internal state is ahead of the provider`
-  * This error occurs when the indexer is operating on a block that the upstream provider does not have. This is typically occurs when resyncing upstream node services. This issue typically resolves itself once the upstream node service is fully synced. If the problem persists, please file an issue.
+  * This error occurs when the indexer is operating on a block that the upstream provider does not have. This typically occurs when resyncing upstream node services. This issue typically resolves itself once the upstream node service is fully synced. If the problem persists, please file an issue.
 ### L1/L2 Processor Failures
 The L1 and L2 processors are responsible for processing new blocks and system txs. Processor failures can spread and contaminate other downstream processors (i.e, bridge) as well. For example, if a L2 processor misses a block and fails to index a `MessagePassed` event, the bridge processor will fail to index the corresponding `WithdrawalProven` event and halt progress. The following are some common failure modes and how to resolve them:
...
@@ -66,7 +66,7 @@ func TestE2EBridgeL1CrossDomainMessenger(t *testing.T) {
     require.Equal(t, aliceAddr, sentMessage.Tx.ToAddress)
     require.ElementsMatch(t, calldata, sentMessage.Tx.Data)
-    // (2) Process RelayedMesssage on inclusion
+    // (2) Process RelayedMessage on inclusion
     // - We dont assert that `RelayedMessageEventGUID` is nil prior to inclusion since there isn't a
     // a straightforward way of pausing/resuming the processors at the right time. The codepath is the
     // same for L2->L1 messages which does check for this so we are still covered
...
@@ -131,8 +131,8 @@ func (etl *ETL) processBatch(headers []types.Header) error {
     batchLog.Warn("mismatch in FilterLog#ToBlock number", "queried_to_block_number", lastHeader.Number, "reported_to_block_number", logs.ToBlockHeader.Number)
     return fmt.Errorf("mismatch in FilterLog#ToBlock number")
 } else if logs.ToBlockHeader.Hash() != lastHeader.Hash() {
-    batchLog.Error("mismatch in FitlerLog#ToBlock block hash!!!", "queried_to_block_hash", lastHeader.Hash().String(), "reported_to_block_hash", logs.ToBlockHeader.Hash().String())
-    return fmt.Errorf("mismatch in FitlerLog#ToBlock block hash!!!")
+    batchLog.Error("mismatch in FilterLog#ToBlock block hash!!!", "queried_to_block_hash", lastHeader.Hash().String(), "reported_to_block_hash", logs.ToBlockHeader.Hash().String())
+    return fmt.Errorf("mismatch in FilterLog#ToBlock block hash!!!")
 }
 if len(logs.Logs) > 0 {
...
@@ -91,7 +91,7 @@ func CrossDomainMessengerSentMessageEvents(chainSelector string, contractAddress
 default:
     // NOTE: We explicitly fail here since the presence of a new version means finalization
     // logic needs to be updated to ensure L1 finalization can run from genesis and handle
-    // the changing version formats. Any unrelayed OVM1 messages that have been harcoded with
+    // the changing version formats. Any unrelayed OVM1 messages that have been hardcoded with
     // the v1 hash format also need to be updated. This failure is a serving indicator
     return nil, fmt.Errorf("expected cross domain version 0 or version 1: %d", version)
 }
...
@@ -39,7 +39,7 @@ func LegacyCTCDepositEvents(contractAddress common.Address, db *database.DB, fro
     return nil, err
 }
-// Enqueued Deposits do not carry a `msg.value` amount. ETH is only minted on L2 via the L1StandardBrige
+// Enqueued Deposits do not carry a `msg.value` amount. ETH is only minted on L2 via the L1StandardBridge
 ctcTxDeposits[i] = LegacyCTCDepositEvent{
     Event:    &events[i].ContractEvent,
     GasLimit: txEnqueued.GasLimit,
...
@@ -41,7 +41,7 @@ func NewLegacyWithdrawal(msgSender, target, sender common.Address, data []byte,
     }
 }
-// Encode will serialze the Withdrawal in the legacy format so that it
+// Encode will serialize the Withdrawal in the legacy format so that it
 // is suitable for hashing. This assumes that the message is being withdrawn
 // through the standard optimism cross domain messaging system by hashing in
 // the L2CrossDomainMessenger address.
...
@@ -86,7 +86,7 @@ func init() {
     if err := readStateDiffs(); err != nil {
         panic(err)
     }
-    // Initialze the message passer ABI
+    // Initialize the message passer ABI
     var err error
     passMessage, err = abi.JSON(strings.NewReader(passMessageABI))
     if err != nil {
...
@@ -686,7 +686,7 @@ func NewL1Deployments(path string) (*L1Deployments, error) {
     var deployments L1Deployments
     if err := json.Unmarshal(file, &deployments); err != nil {
-        return nil, fmt.Errorf("cannot unmarshal L1 deployements: %w", err)
+        return nil, fmt.Errorf("cannot unmarshal L1 deployments: %w", err)
     }
     return &deployments, nil
...
 # op-preimage
-`op-preimage` offers simple Go bindings to interact as client or sever over the Pre-image Oracle ABI.
+`op-preimage` offers simple Go bindings to interact as client or server over the Pre-image Oracle ABI.
 Read more about the Preimage Oracle in the [specs](../specs/fault-proof.md).
...
@@ -4,7 +4,7 @@ Implements a fault proof program that runs through the rollup state-transition t
 This verifiable output can then resolve a disputed output on L1.
 The program is designed such that it can be run in a deterministic way such that two invocations with the same input
-data wil result in not only the same output, but the same program execution trace. This allows it to be run in an
+data will result in not only the same output, but the same program execution trace. This allows it to be run in an
 on-chain VM as part of the dispute resolution process.
 ## Compiling
...
 # Smart Contract Style Guide
-This document providing guidance on how we organize and write our smart contracts. For cases where
+This document provides guidance on how we organize and write our smart contracts. For cases where
 this document does not provide guidance, please refer to existing contracts for guidance,
 with priority on the `L2OutputOracle` and `OptimismPortal`.
@@ -154,7 +154,7 @@ Test contracts should be named one of the following according to their use:
 To minimize clutter, getter functions can be grouped together into a single test contract,
 ie. `TargetContract_Getters_Test`.
-## Withdrawaing From Fee Vaults
+## Withdrawing From Fee Vaults
 See the file `scripts/FeeVaultWithdrawal.s.sol` to withdraw from the L2 fee vaults. It includes
 instructions on how to run it. `foundry` is required.
@@ -3,7 +3,7 @@
 Summary -
 - This package is generated from [contracts-bedrock](../contracts-bedrock/)
-- It's version is kept in sync with contracts bedrock via the [changeset config](../../.changeset/config.json) e.g. if contracts-bedrock is `4.2.0` this package will have the same version.
+- Its version is kept in sync with contracts bedrock via the [changeset config](../../.changeset/config.json) e.g. if contracts-bedrock is `4.2.0` this package will have the same version.
 ## Code gen instructions
...
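The version pinning mentioned above is what the changesets `fixed` option does: packages in the same `fixed` group are always bumped to the same version. A hedged sketch of a `.changeset/config.json` fragment; the package names and schema version here are illustrative, not copied from the repo:

```json
{
  "$schema": "https://unpkg.com/@changesets/config@2.3.0/schema.json",
  "fixed": [["@eth-optimism/contracts-bedrock", "@eth-optimism/contracts-ts"]],
  "access": "public",
  "baseBranch": "main"
}
```

With this grouping, releasing `contracts-bedrock@4.2.0` forces the generated package to `4.2.0` as well.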
@@ -9,7 +9,7 @@ This tool implements `proxyd`, an RPC request router and proxy. It does the foll
 5. Re-write requests and responses to enforce consensus.
 6. Load balance requests across backend services.
 7. Cache immutable responses from backends.
-8. Provides metrics the measure request latency, error rates, and the like.
+8. Provides metrics to measure request latency, error rates, and the like.
 ## Usage
...
@@ -749,7 +749,7 @@ of even benign consensus issues.
 The L2 block time is 2 second, meaning there is an L2 block at every 2s [time slot][time-slot].
-Post-[merge], it could be said the that L1 block time is 12s as that is the L1 [time slot][time-slot]. However, in
+Post-[merge], it could be said that the L1 block time is 12s as that is the L1 [time slot][time-slot]. However, in
 reality the block time is variable as some time slots might be skipped.
 Pre-merge, the L1 block time is variable, though it is on average 13s.
...
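The arithmetic implied by the block times above: a fixed 2s L2 block time against a 12s post-merge L1 slot means each L1 slot spans six L2 blocks (when no slots are skipped). A trivial sketch; the function name is illustrative:

```go
package main

import "fmt"

// l2BlocksPerL1Slot computes how many fixed-length L2 blocks fit in one L1 slot.
func l2BlocksPerL1Slot(l1SlotSeconds, l2BlockSeconds uint64) uint64 {
	return l1SlotSeconds / l2BlockSeconds
}

func main() {
	// Post-merge: 12s L1 slots, 2s L2 blocks.
	fmt.Println(l2BlocksPerL1Slot(12, 2)) // 6
}
```

Pre-merge this ratio is only an average, since the ~13s L1 block time is variable.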
@@ -286,7 +286,7 @@ used for depositing native L1 tokens into. These ERC20 contracts can be created
 and implement the interface required by the `StandardBridge` to just work with deposits and withdrawals.
 Each ERC20 contract that is created by the `OptimismMintableERC20Factory` allows for the `L2StandardBridge` to mint
-and burn tokens, depending on if the user is depositing from L1 to L2 or withdrawaing from L2 to L1.
+and burn tokens, depending on if the user is depositing from L1 to L2 or withdrawing from L2 to L1.
 ## OptimismMintableERC721Factory
...
@@ -12,9 +12,9 @@ Starting from left to right in the above diagram:
 1. Github Workflow files are created for each time interval Test Services should be ran
    - All Test Services that should be ran for a specific time interval (e.g. 1 hour) should be defined in the same Github Workflow file
-2. Github will run a workflow at it's specified time interval, triggering all of it's defined Test Services to run
+2. Github will run a workflow at its specified time interval, triggering all of it's defined Test Services to run
 3. `docker-compose.yml` builds and runs each Test Service, setting any environment variables that can be sourced from Github secrets
-4. Each Test Service will run it's defined tasks, generate it's metrics, and push them to an already deployed instance of Prometheus Pushgateway
+4. Each Test Service will run its defined tasks, generate its metrics, and push them to an already deployed instance of Prometheus Pushgateway
 5. An already deployed instance of Prometheus will scrape the Pushgateway for metrics
 6. An already deployed Grafana dashboard will query Prometheus for metric data to display
@@ -71,7 +71,7 @@ Starting from left to right in the above diagram:
 # Runs every 1 day
 0 12 * * * /usr/local/bin/docker-compose -f /path/to/docker-compose.yml --profile 1day up -d
-# Runs every 7 day
+# Runs every 7 days
 0 12 */7 * * /usr/local/bin/docker-compose -f /path/to/docker-compose.yml --profile 7day up -d
 ```
...