This README describes how to use the actor testing library to write new tests. If you're just looking for how to run test cases, check out the README [in the root of the repo](../README.md).
## Introduction
An "actor test" is a test case that simulates how a user might interact with Optimism. For example, an actor test might automate a user minting NFTs or swapping on Uniswap. Multiple actor tests are composed together in order to simulate real-world usage and help us optimize network performance under realistic load.
Actor tests are designed to catch race conditions, resource leaks, and performance regressions. They aren't a replacement for standard unit/integration tests, and aren't executed on every pull request since they take time to run.
This directory contains the actor testing framework as well as the tests themselves. The framework lives in `lib` and the tests live in this directory with `.test.ts` suffixes. Read on to find out more about how to use the framework to write actor tests of your own.
## CLI
Use the following command to run actor tests from the CLI:
- `name`: Sets the actor's name. Used in logs and in outputted metrics.
- `cb`: The body of the actor. Cannot be async. All the other DSL methods (i.e. `setup*`, `run`) must be called within this callback.
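To make these pieces concrete, here is a minimal sketch of an actor definition. The import path, the use of `ethers`, and the transaction details are assumptions for illustration; the individual DSL methods are described in detail below.

```typescript
import { Wallet } from 'ethers'

// Import path is an assumption -- check the framework's exports under `lib`.
import { actor, setupRun, run } from './lib/convenience'

actor('eth-sender', () => {
  setupRun(async () => {
    // Runs once per worker; the returned object becomes the worker's context.
    const wallet = Wallet.createRandom()
    return { wallet }
  })

  run(async (b, ctx) => {
    // Runs repeatedly; benchmark the interesting part of each iteration.
    await b.bench('send-eth', async () => {
      // ... send a transaction from ctx.wallet ...
    })
  })
})
```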
#### `setupActor(cb: () => Promise<void>)`
Defines a setup method that gets called after the actor is instantiated but before any workers are spawned. Useful for setting variables that need to be shared across all worker instances.
**Note:** Any variables set using `setupActor` must be thread-safe. Don't use `setupActor` to define a shared `Provider` instance, for example, since this will introduce nonce errors. Use `setupRun` to define a test context instead.
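As a sketch, `setupActor` is a good place for plain, immutable configuration that every worker can safely read. The environment variable name below is made up for illustration:

```typescript
let contractAddress: string

setupActor(async () => {
  // Plain values like strings are safe to share across workers.
  // Don't create a shared Provider or Wallet here -- see the note above.
  const addr = process.env.NFT_CONTRACT_ADDRESS // hypothetical variable
  if (!addr) {
    throw new Error('NFT_CONTRACT_ADDRESS must be set')
  }
  contractAddress = addr
})
```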
#### `setupRun(cb: () => Promise<T>)`
Defines a setup method that gets called inside each worker after it is instantiated but before any runs have executed. The value returned by the `setupRun` method becomes the worker's test context, which will be described in more detail below.
**Note:** While `setupRun` is called once in each worker, invocations of the `setupRun` callback are executed serially. This makes `setupRun` useful for nonce-dependent setup tasks like funding worker wallets.
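For example, a `setupRun` callback might fund a fresh wallet for the worker and return it as the test context. This sketch assumes ethers v5 and the environment variables `L2_RPC_URL` and `FAUCET_PRIVATE_KEY`, neither of which is defined by the framework:

```typescript
import { ethers } from 'ethers'

setupRun(async () => {
  // Each worker gets its own provider and wallet, which avoids nonce clashes.
  const provider = new ethers.providers.JsonRpcProvider(process.env.L2_RPC_URL)
  const wallet = ethers.Wallet.createRandom().connect(provider)

  // setupRun callbacks execute serially across workers, so it's safe to fund
  // each wallet from a single faucet account here.
  const faucet = new ethers.Wallet(process.env.FAUCET_PRIVATE_KEY!, provider)
  const tx = await faucet.sendTransaction({
    to: wallet.address,
    value: ethers.utils.parseEther('0.1'),
  })
  await tx.wait()

  // The returned value becomes this worker's test context.
  return { wallet }
})
```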
#### `run(cb: (b: Benchmarker, ctx: T) => Promise<void>)`
Defines what the actor actually does. The test runner will execute the `run` method multiple times depending on its configuration.
**Benchmarker**
Sections of the `run` method can be benchmarked using the `Benchmarker` argument to the `run` callback. Use the `Benchmarker` like this:
```typescript
b.bench('bench name', async () => {
  // benchmarked code here
})
```
A summary of the benchmark's runtime and a count of how many times it succeeded/failed across each worker will be recorded in the run's metrics.
**Context**
The value returned by `setupRun` will be passed into the `ctx` argument to the `run` callback. Use the test context for values that need to be local to a particular worker. In the example, we use it to pass around the worker's wallet.
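For instance, a `run` callback might read the wallet back out of the context. This is a sketch that assumes the `{ wallet }` context shape from the `setupRun` example above:

```typescript
run(async (b, ctx) => {
  // ctx is whatever this worker's setupRun callback returned.
  const { wallet } = ctx

  await b.bench('send-eth', async () => {
    // A self-transfer keeps the example simple; any transaction works.
    const tx = await wallet.sendTransaction({ to: wallet.address, value: 1 })
    await tx.wait()
  })
})
```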
### Error Handling
Errors in setup methods cause the test process to crash. Errors in the `run` method are recorded in the test's metrics and cause the run to be retried. The runtimes of failed runs are not recorded.
Using `expect`/`assert` assertions is a good way to make sure that actors are executing properly.
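For example, assuming Chai's `expect` is available and the `{ wallet }` context from the earlier sketches, a failed assertion inside `run` throws and is handled like any other run error:

```typescript
import { expect } from 'chai'

run(async (b, ctx) => {
  await b.bench('send-eth', async () => {
    const tx = await ctx.wallet.sendTransaction({
      to: ctx.wallet.address,
      value: 1,
    })
    const receipt = await tx.wait()

    // A failed assertion throws, so the run is counted as failed and retried.
    expect(receipt.status).to.equal(1)
  })
})
```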
### Test Runner
The test runner is responsible for executing actor tests and managing their lifecycle. It can run in one of two modes:
1. Fixed run mode, which will execute the `run` method a fixed number of times.
2. Timed mode, which will execute the `run` method as many times as possible until a period of time has elapsed.
Test lifecycle is as follows:
1. The runner collects all the actors it needs to run.
> Actors automatically register themselves with the default instance of the runner upon being `require()`d.
2. The runner executes each actor's `setupActor` method.
3. The runner spawns `n` workers.
4. The runner executes the `setupRun` method in each worker. The runner will wait for all `setupRun` methods to complete before continuing.
5. The runner executes the `run` method according to the mode described above.
## Metrics
The test runner prints metrics about each run to `stdout` on exit. This output can then be piped into Prometheus for visualization in Grafana or similar tools. Example metrics output might look like:
```
# HELP actor_successful_bench_runs_total Count of total successful bench runs.