Commit eaed52d1 authored by Freeman, committed by GitHub

Feature/rv compute (#11015)

* Add dummy tests for passing and failing

* Update: pass along return errors and fix a duplicate call to clean_docker

* Fixing logging

* Update to latest kontrol version supporting error codes

* Remove trap for testing

* Add install-kontrol and run local with enforced version requirements

* Drop on failure trap and test

* The cleanup steps may be altering the return results. We're not running docker, not needed here

* Test with traps re-added after finding issue was with using find

* Formatting, now test with passing test

* Update run-kontrol.sh

Revert debug set to original header

* Increase CPU workers to 16 for kaas runners

* Update config.yml

* Remove dummy proofs

* run-kontrol.sh: set `max_workers` to 16

* run-kontrol.sh: execute all tests with `script` option

* run-kontrol.sh: match tests more precisely

* run-kontrol.sh: add back `break_every_step` variable

* versions.json: bump Kontrol from 0.1.247 to 0.1.258

* Ignore vscode configuration files

* Move on regardless of docker removal; the container is started with automatic removal on stop, we just want to make sure it happens for redundancy

* Extract content of Results to upload to kaas

* Call into RV Workflow to run symbolic tests

* Update CircleCI to the latest credentials setup/secret references. Set context to proper env. Remove unused inputs

* Reverting unnecessary change

* Add RV context

* Removing comma from end of json block

* Set the specific branch name to run on, do not assume develop.

* Temporary when conditional to always run

* Revert test when conditional

* Suppress extraction of outputs

* Accept Suggestion for comment line in packages/contracts-bedrock/test/kontrol/scripts/run-kontrol.sh
Co-authored-by: Matt Solomon <matt@mattsolomon.dev>

* Suppress tar file creation output

* Dropping -verbose in tar command. Reduce output on executions

* run-kontrol.sh: Update Temporarily unexecuted tests list

* run-kontrol.sh: Update Temporarily unexecuted tests list v2

* Update documentation noting changes / usage / flow / secrets for utilizing kaas

* run-kontrol.sh: Fetch the xml file if it exists from the build env and make available for posting to summary

* Update packages/contracts-bedrock/test/kontrol/README.md
Co-authored-by: Matt Solomon <matt@mattsolomon.dev>

* Update packages/contracts-bedrock/test/kontrol/README.md
Co-authored-by: Matt Solomon <matt@mattsolomon.dev>

* Update packages/contracts-bedrock/test/kontrol/README.md
Co-authored-by: Matt Solomon <matt@mattsolomon.dev>

* Update packages/contracts-bedrock/test/kontrol/README.md
Co-authored-by: Matt Solomon <matt@mattsolomon.dev>

* Update packages/contracts-bedrock/test/kontrol/README.md
Co-authored-by: Matt Solomon <matt@mattsolomon.dev>

* kontrol/README.md: Update instructions for clarity and usage

* .github/workflows: Adding a new workflow to trigger on updates to deployment_status from third party

* Force push to test

* Remove if check for environment.

* status-test: This looks like the wrong trigger

* Update slack notification workflow with template

* proof-runner-notification.yml: Slack notifications on failure, and webhook URL

* Replace Link with provided target_url from commit status

* Update payload variables per slack workflow variable requirements.

---------
Co-authored-by: Juan C <juanconrod@protonmail.com>
Co-authored-by: Matt Solomon <matt@mattsolomon.dev>
parent 549b52e0
......@@ -1443,12 +1443,22 @@ jobs:
docker_layer_caching: true
- run:
name: Run Kontrol Tests
command: just test-kontrol
command: |
curl -X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer $RV_COMPUTE_TOKEN" \
https://api.github.com/repos/runtimeverification/optimism-ci/actions/workflows/optimism-ci.yaml/dispatches \
-d '{
"ref": "master",
"inputs": {
"branch_name": "<<pipeline.git.branch>>",
"extra_args": "script",
"statuses_sha": "<< pipeline.git.revision >>",
"org": "ethereum-optimism",
"repository": "optimism"
}
}'
working_directory: ./packages/contracts-bedrock
- store_artifacts:
path: ./packages/contracts-bedrock/test/kontrol/logs/kontrol-results_latest.tar.gz
- store_test_results:
path: ./packages/contracts-bedrock
- notify-failures-on-develop
workflows:
......@@ -1990,6 +2000,7 @@ workflows:
- kontrol-tests:
context:
- slack
- runtimeverification
scheduled-docker-publish:
when:
......
name: Proof Runner Deployment Status
on:
status
jobs:
handle-proof-runner:
runs-on: ubuntu-latest
if: >-
github.event.state == 'error' ||
github.event.state == 'failure'
steps:
- name: Generate Slack Payload
run: |
echo '{
"kontrol_status": "${{ github.event.state }}",
"ci_run_link": "${{ github.event.target_url }}"
}' > payload.json
- name: Send Status Check to Slack
uses: slackapi/slack-github-action@v1.26.0
with:
payload-file-path: "./payload.json"
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_KONTROL_WEBHOOK_URL }}
\ No newline at end of file
......@@ -190,3 +190,64 @@ Therefore, the snapshots CI check will fail if the committed Kontrol state diff
Note that the CI check only compares the JSON state diff, not the generated `DeploymentSummary.sol` or `DeploymentSummaryCode` contracts.
This is for simplicity, as those three files will be in sync upon successful execution of the `make-summary-deployment.sh` script.
We commit the `DeploymentSummary.sol` and `DeploymentSummaryCode.sol` contracts, because forge fails to build if those contracts are not present—it is simpler to commit these autogenerated files than to workaround their absence in forge builds.
### Kontrol Proof Execution using KaaS (Kontrol as a Service) & CI Integration
To execute compute-intensive symbolic Kontrol proofs in a CI pipeline, we leverage KaaS (Kontrol as a Service) and CircleCI.
On each execution, a status is reported back to the checks of the commit the execution was run from. The checks for that commit provide a "Details" link to the execution's KCFG artifacts and Kontrol proof summary results. See below for further information on how to fetch these artifacts from the summary using the GitHub API.
A high-level overview of the CI setup is defined in CircleCI and configured within [.circleci/config.yml](../../../../.circleci/config.yml). The CircleCI flow calls the GitHub API to execute Kontrol proofs using the [test scripts](scripts/run-kontrol.sh) in this repo. CircleCI is configured with a GitHub fine-grained token provided by Runtime Verification, giving access to machines more powerful than those available in CircleCI for running Kontrol proofs. The token is stored in the CircleCI project as a secret.
The proofs can be run manually by using the GitHub fine-grained token and providing the appropriate parameters, as outlined in the GitHub API call below.
```bash
curl -X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer $RV_COMPUTE_TOKEN" \
https://api.github.com/repos/runtimeverification/optimism-ci/actions/workflows/optimism-ci.yaml/dispatches \
-d '{
"ref": "master",
"inputs": {
"branch_name": "<<pipeline.git.branch>>",
"extra_args": "script",
"statuses_sha": "<< pipeline.git.revision >>",
"org": "ethereum-optimism",
"repository": "optimism"
}
}'
```
NOTE: Making this call manually and providing a SHA will overwrite the existing status on that commit. The history is not removed on RV's end, but the previous status is replaced in the commit checks table of a PR and in the commit statuses.
Parameters to be replaced are (see the filled-in sketch below for a concrete example):
- **RV_COMPUTE_TOKEN**: The token used to authenticate to the KaaS service.
- **pipeline.git.branch**: The branch on which to run the proofs.
- **pipeline.git.revision**: The commit hash for the proof execution.
- **org**: The organization under which to run the proofs.
- **repository**: The repository for the proof execution.
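For reference, here is the same dispatch call with the CircleCI pipeline variables substituted by hand. This is only a sketch; the branch name and commit SHA shown are hypothetical placeholders.
```bash
# Hypothetical manual dispatch: replace the branch name and SHA with real values.
curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $RV_COMPUTE_TOKEN" \
  https://api.github.com/repos/runtimeverification/optimism-ci/actions/workflows/optimism-ci.yaml/dispatches \
  -d '{
    "ref": "master",
    "inputs": {
      "branch_name": "feature/my-kontrol-change",
      "extra_args": "script",
      "statuses_sha": "0123456789abcdef0123456789abcdef01234567",
      "org": "ethereum-optimism",
      "repository": "optimism"
    }
  }'
```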
Artifacts are produced after execution finishes and are posted to the summary page. Two artifacts are expected (see the inspection sketch after this list):
- `kontrol_prove_report.xml` -- a Kontrol-generated JUnit XML report containing results and performance data.
- `Kontrol Results Folder.zip` -- artifacts produced by Kontrol during execution that can be used for debugging. This is expected whether the proof run passes or fails.
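As a quick local sanity check, the downloaded artifacts can be inspected directly. A minimal sketch, assuming both artifacts are already in the current directory and the report follows the standard JUnit layout (artifact names are illustrative):
```bash
# Unpack the debugging artifacts into a working directory.
unzip -o "Kontrol Results Folder.zip" -d kontrol-results
# Show the reported test/failure/error counts from the JUnit XML header.
grep -o '<testsuite[^>]*' kontrol_prove_report.xml | head -n 1
```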
To fetch the artifacts of an execution from KaaS, you can either navigate to the summary page manually using the 'Details' link, or use the GitHub API to pull the artifacts.
Method 1: GitHub's `gh` CLI tool (see the sketch below)
- `gh run download RUN_ID`
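A minimal sketch of the `gh` approach, assuming `gh` is authenticated with a token that can read the runtimeverification/optimism-ci repository and that RUN_ID is taken from the run linked in the 'Details' page:
```bash
# List recent runs of the workflow to find the RUN_ID.
gh run list -R runtimeverification/optimism-ci --workflow optimism-ci.yaml --limit 10
# Download all artifacts for that run into the current directory.
gh run download RUN_ID -R runtimeverification/optimism-ci
```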
Method 2: [GitHub API](https://docs.github.com/en/rest/actions/artifacts?apiVersion=2022-11-28)
List the artifacts for a run:
- `GET /repos/{owner}/{repo}/actions/runs/{run_id}/artifacts` -- See the [documentation](https://docs.github.com/en/rest/actions/artifacts?apiVersion=2022-11-28#list-workflow-run-artifacts) for more details
```bash
curl -L \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer <RV_COMPUTE_TOKEN>" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/runtimeverification/optimism-ci/actions/runs/RUN_ID/artifacts
```
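The response is JSON; if `jq` is available, the artifact IDs and names can be pulled out directly (a convenience sketch, not part of the repository's scripts):
```bash
# Extract artifact IDs and names from the list-artifacts response (requires jq).
curl -sL \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer <RV_COMPUTE_TOKEN>" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/repos/runtimeverification/optimism-ci/actions/runs/RUN_ID/artifacts \
  | jq -r '.artifacts[] | "\(.id)\t\(.name)"'
```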
Then download the artifacts:
- `GET /repos/{owner}/{repo}/actions/artifacts/{artifact_id}/{archive_format}` -- See the [documentation](https://docs.github.com/en/rest/actions/artifacts?apiVersion=2022-11-28#download-an-artifact) for more details
```bash
curl -L \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer <RV_COMPUTE_TOKEN>" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/runtimeverification/optimism-ci/actions/artifacts/ARTIFACT_ID/ARCHIVE_FORMAT
```
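The only archive format GitHub currently serves for artifacts is `zip`, and the endpoint answers with a redirect, so follow it and write the result to a file. For example:
```bash
# Download a single artifact as a zip archive (the only supported format) and unpack it.
curl -sL \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer <RV_COMPUTE_TOKEN>" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  -o artifact.zip \
  https://api.github.com/repos/runtimeverification/optimism-ci/actions/artifacts/ARTIFACT_ID/zip
unzip -o artifact.zip -d kontrol-artifact
```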
\ No newline at end of file
......@@ -35,6 +35,7 @@ usage_make_summary() {
# Set Run Directory <root>/packages/contracts-bedrock
WORKSPACE_DIR=$( cd "$SCRIPT_HOME/../../.." >/dev/null 2>&1 && pwd )
pushd "$WORKSPACE_DIR" > /dev/null || exit
# Variables
export KONTROL_FP_DEPLOYMENT="${KONTROL_FP_DEPLOYMENT:-false}"
......@@ -111,7 +112,6 @@ parse_first_arg() {
export LOCAL=true
export SCRIPT_TESTS=$SCRIPT_OPTION
export CUSTOM_TESTS=$CUSTOM_OPTION
pushd "$WORKSPACE_DIR" > /dev/null || exit
elif [ "$1" == "script" ]; then
notif "Running in docker container (DEFAULT)"
export LOCAL=false
......@@ -130,7 +130,6 @@ check_kontrol_version() {
if [ "$(kontrol version | awk -F': ' '{print$2}')" == "$KONTROLRC" ]; then
notif "Kontrol version matches $KONTROLRC"
export LOCAL=true
pushd "$WORKSPACE_DIR" > /dev/null || exit
else
notif "Kontrol version does NOT match $KONTROLRC"
notif "Please run 'kup install kontrol --version v$KONTROLRC'"
......@@ -182,11 +181,16 @@ copy_to_docker() {
}
clean_docker(){
notif "Stopping Docker Container"
docker stop "$CONTAINER_NAME"
if [ "$LOCAL" = false ]; then
notif "Cleaning Docker Container"
docker stop "$CONTAINER_NAME" > /dev/null 2>&1 || true
docker rm "$CONTAINER_NAME" > /dev/null 2>&1 || true
sleep 2 # Give time for system to clean up container
else
notif "Not Running in Container. Done."
fi
}
docker_exec () {
docker exec --user user --workdir /home/user/workspace $CONTAINER_NAME "${@}"
}
......@@ -200,4 +204,5 @@ run () {
notif "Running in docker"
docker_exec "${@}"
fi
return $? # Return the exit code of the command
}
......@@ -20,6 +20,7 @@ kontrol_build() {
--require $lemmas \
--module-import $module \
$rekompile
return $?
}
kontrol_prove() {
......@@ -38,56 +39,63 @@ kontrol_prove() {
--init-node-from $state_diff \
--kore-rpc-command 'kore-rpc-booster --equation-max-recursion 100' \
--xml-test-report
return $?
}
dump_log_results(){
trap clean_docker ERR
RESULTS_FILE="results-$(date +'%Y-%m-%d-%H-%M-%S').tar.gz"
LOG_PATH="test/kontrol/logs"
RESULTS_LOG="$LOG_PATH/$RESULTS_FILE"
get_log_results(){
RESULTS_FILE="results-$(date +'%Y-%m-%d-%H-%M-%S').tar.gz"
LOG_PATH="test/kontrol/logs"
RESULTS_LOG="$LOG_PATH/$RESULTS_FILE"
if [ ! -d $LOG_PATH ]; then
mkdir $LOG_PATH
fi
notif "Generating Results Log: $LOG_PATH"
if [ ! -d $LOG_PATH ]; then
mkdir $LOG_PATH
fi
run tar -czvf results.tar.gz kout-proofs/ > /dev/null 2>&1
if [ "$LOCAL" = true ]; then
mv results.tar.gz "$RESULTS_LOG"
else
docker cp "$CONTAINER_NAME:/home/user/workspace/results.tar.gz" "$RESULTS_LOG"
fi
if [ -f "$RESULTS_LOG" ]; then
cp "$RESULTS_LOG" "$LOG_PATH/kontrol-results_latest.tar.gz"
notif "Generating Results Log: $RESULTS_LOG"
run tar -czf results.tar.gz kout-proofs/ > /dev/null 2>&1
if [ "$LOCAL" = true ]; then
mv results.tar.gz "$RESULTS_LOG"
else
docker cp "$CONTAINER_NAME:/home/user/workspace/results.tar.gz" "$RESULTS_LOG"
# Check if kontrol_prove_report.xml exists in the container and copy it out if it does
if docker exec "$CONTAINER_NAME" test -f /home/user/workspace/kontrol_prove_report.xml; then
docker cp "$CONTAINER_NAME:/home/user/workspace/kontrol_prove_report.xml" "$LOG_PATH/kontrol_prove_report.xml"
notif "Copied kontrol_prove_report.xml to $LOG_PATH"
else
notif "Results Log: $RESULTS_LOG not found, skipping.."
notif "kontrol_prove_report.xml not found in container"
fi
# Report where the file was generated and placed
notif "Results Log: $(dirname "$RESULTS_LOG") generated"
tar -xzf "$RESULTS_LOG" > /dev/null 2>&1
fi
if [ -f "$RESULTS_LOG" ]; then
cp "$RESULTS_LOG" "$LOG_PATH/kontrol-results_latest.tar.gz"
else
notif "Results Log: $RESULTS_LOG not found, skipping.."
fi
# Report where the file was generated and placed
notif "Results Log: $(dirname "$RESULTS_LOG") generated"
if [ "$LOCAL" = false ]; then
notif "Results Log: $RESULTS_LOG generated"
RUN_LOG="run-kontrol-$(date +'%Y-%m-%d-%H-%M-%S').log"
docker logs "$CONTAINER_NAME" > "$LOG_PATH/$RUN_LOG"
fi
if [ "$LOCAL" = false ]; then
notif "Results Log: $RESULTS_LOG generated"
RUN_LOG="run-kontrol-$(date +'%Y-%m-%d-%H-%M-%S').log"
docker logs "$CONTAINER_NAME" > "$LOG_PATH/$RUN_LOG"
# Expand the tar folder to kout-proofs for Summary Results and caching
tar -xzf "$RESULTS_LOG" -C "$WORKSPACE_DIR" > /dev/null 2>&1
fi
}
# Define the function to run on failure
on_failure() {
dump_log_results
get_log_results
if [ "$LOCAL" = false ]; then
clean_docker
fi
notif "Cleanup complete."
notif "Failure Cleanup Complete."
exit 1
}
# Set up the trap to run the function on failure
trap on_failure ERR INT
#########################
# kontrol build options #
#########################
......@@ -107,21 +115,9 @@ regen=
#################################
# Tests to symbolically execute #
#################################
# Temporarily unexecuted tests
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused0" \ -- This one is executed below.
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused1" \
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused2" \
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused3" \
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused4" \
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused5" \
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused6" \
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused7" \
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused8" \
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused9" \
# "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused10" \
# "OptimismPortal2Kontrol.prove_proveWithdrawalTransaction_paused0" \
# "OptimismPortal2Kontrol.prove_proveWithdrawalTransaction_paused1" \ -- This one is executed below.
# "OptimismPortal2Kontrol.prove_proveWithdrawalTransaction_paused1(" \
# "OptimismPortal2Kontrol.prove_proveWithdrawalTransaction_paused2" \
# "OptimismPortal2Kontrol.prove_proveWithdrawalTransaction_paused3" \
# "OptimismPortal2Kontrol.prove_proveWithdrawalTransaction_paused4" \
......@@ -134,13 +130,23 @@ regen=
test_list=()
if [ "$SCRIPT_TESTS" == true ]; then
test_list=( "OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused0" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused1(" \
"OptimismPortalKontrol.prove_finalizeWithdrawalTransaction_paused" \
"L1StandardBridgeKontrol.prove_finalizeBridgeERC20_paused" \
"L1StandardBridgeKontrol.prove_finalizeBridgeETH_paused" \
"L1ERC721BridgeKontrol.prove_finalizeBridgeERC721_paused" \
"L1CrossDomainMessengerKontrol.prove_relayMessage_paused"
test_list=(
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused0" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused1(" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused2" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused3" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused4" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused5" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused6" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused7" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused8" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused9" \
"OptimismPortalKontrol.prove_proveWithdrawalTransaction_paused10" \
"OptimismPortalKontrol.prove_finalizeWithdrawalTransaction_paused" \
"L1StandardBridgeKontrol.prove_finalizeBridgeERC20_paused" \
"L1StandardBridgeKontrol.prove_finalizeBridgeETH_paused" \
"L1ERC721BridgeKontrol.prove_finalizeBridgeERC721_paused" \
"L1CrossDomainMessengerKontrol.prove_relayMessage_paused"
)
elif [ "$CUSTOM_TESTS" != 0 ]; then
test_list=( "${@:${CUSTOM_TESTS}}" )
......@@ -156,9 +162,9 @@ done
max_depth=10000
max_iterations=10000
smt_timeout=100000
max_workers=7 # Set to 7 since the CI machine has 8 CPUs
# workers is the minimum between max_workers and the length of test_list
# unless no test arguments are provided, in which case we default to max_workers
max_workers=16 # Set to 16 since there are 16 proofs to run
# workers is the minimum between max_workers and the length of test_list unless
# no test arguments are provided, in which case we default to max_workers
if [ "$CUSTOM_TESTS" == 0 ] && [ "$SCRIPT_TESTS" == false ]; then
workers=${max_workers}
else
......@@ -177,16 +183,35 @@ state_diff="./snapshots/state-diff/Kontrol-31337.json"
#############
# RUN TESTS #
#############
# Set up the trap to run the function on failure
trap on_failure ERR INT TERM
trap clean_docker EXIT
conditionally_start_docker
results=()
# Run kontrol_build and store the result
kontrol_build
results[0]=$?
# Run kontrol_prove and store the result
kontrol_prove
results[1]=$?
dump_log_results
get_log_results
if [ "$LOCAL" == false ]; then
notif "Stopping docker container"
clean_docker
# Now you can use ${results[0]} and ${results[1]}
# to check the results of kontrol_build and kontrol_prove, respectively
if [ ${results[0]} -ne 0 ] && [ ${results[1]} -ne 0 ]; then
echo "Kontrol Build and Prove Failed"
exit 1
elif [ ${results[0]} -ne 0 ]; then
echo "Kontrol Build Failed"
exit 1
elif [ ${results[1]} -ne 0 ]; then
echo "Kontrol Prove Failed"
exit 2
else
echo "Kontrol Passed"
fi
notif "DONE"