improve benchmark scripts and add a nightly job
This patch:
- Adds a benchmark suite abstraction and moves all existing benchmarks
  into it. This makes individual benchmark types self-contained and
  allows us to skip benchmarks whose dependencies are not met.
- Makes sycl and ur optional; they are no longer positional
  arguments (see the CLI sketch after this list).
- Creates a benchmark history that stores benchmark runs. This
  enables comparisons not just with the latest result but also
  against, e.g., a historical average.
- Adds a nightly job to store baseline results.
- Adds HTML output.
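
As a sketch of the resulting command-line surface: the option names below (`--sycl`, `--ur`, `--adapter`, `--save`, `--compare`, `--output-html`) all appear in the diffs that follow, but the defaults and help strings are illustrative assumptions, not the actual `main.py`.

```python
# Illustrative sketch of the new CLI surface, not the actual main.py.
# Option names come from the diffs below; defaults/help text are assumptions.
import argparse

parser = argparse.ArgumentParser(description="Run benchmark suites.")
parser.add_argument("benchmark_directory",
                    help="working directory for benchmark runs")
parser.add_argument("--sycl", default=None,
                    help="path to a SYCL build (optional)")
parser.add_argument("--ur", default=None,
                    help="path to a UR install (optional)")
parser.add_argument("--adapter", default="level_zero",
                    help="UR adapter name, e.g. level_zero_v2")
parser.add_argument("--save", default=None,
                    help="store this run's results under the given name")
parser.add_argument("--compare", action="append", default=[],
                    help="compare against a stored result (repeatable)")
parser.add_argument("--output-html", action="store_true",
                    help="generate an HTML report")
args = parser.parse_args()
```

With `--sycl` and `--ur` defaulting to `None`, suites whose dependencies are absent can be skipped instead of failing outright.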
pbalcer committed Nov 6, 2024
1 parent 3edf997 commit aa3c170
Showing 25 changed files with 1,182 additions and 433 deletions.
38 changes: 38 additions & 0 deletions .github/workflows/benchmarks-nightly.yml
@@ -0,0 +1,38 @@
name: Compute Benchmarks Nightly

on:
schedule:
- cron: '0 0 * * *' # Runs at midnight UTC every day

permissions:
contents: read
pull-requests: write

jobs:
nightly:
name: Compute Benchmarks Nightly level-zero
uses: ./.github/workflows/benchmarks-reusable.yml
with:
str_name: 'level_zero'
unit: 'gpu'
pr_no: 0
bench_script_params: '--save baseline'
sycl_config_params: ''
sycl_repo: 'intel/llvm'
sycl_commit: ''

nightly2:
# we need to wait until the previous job is done so that the HTML report
# contains both runs
needs: nightly
name: Compute Benchmarks Nightly level-zero v2
uses: ./.github/workflows/benchmarks-reusable.yml
with:
str_name: 'level_zero_v2'
unit: 'gpu'
pr_no: 0
bench_script_params: '--save baseline-v2'
sycl_config_params: ''
sycl_repo: 'intel/llvm'
sycl_commit: ''
upload_report: true
@@ -1,50 +1,39 @@
name: Compute Benchmarks
name: Benchmarks Reusable

on:
# Can be triggered via manual "dispatch" (from workflow view in GitHub Actions tab)
workflow_dispatch:
# acceptable input for adapter-specific runs
workflow_call:
inputs:
str_name:
description: Formatted adapter name
type: choice
required: true
default: 'level_zero'
options:
- level_zero
- level_zero_v2
type: string
unit:
description: Test unit (cpu/gpu)
type: choice
required: true
default: 'gpu'
options:
- cpu
- gpu
type: string
pr_no:
description: PR number (if 0, it'll run on the main branch)
type: number
required: true
bench_script_params:
description: Parameters passed to script executing benchmark
# even though this is a number, this is a workaround for issues with
# reusable workflow calls that result in "Unexpected value '0'" error.
type: string
bench_script_params:
required: false
type: string
default: ''
sycl_config_params:
description: Extra params for SYCL configuration
type: string
required: false
type: string
default: ''
sycl_repo:
description: 'Compiler repo'
type: string
required: true
type: string
default: 'intel/llvm'
sycl_commit:
description: 'Compiler commit'
type: string
required: false
type: string
default: ''
upload_report:
required: false
type: boolean
default: false

permissions:
contents: read
@@ -56,19 +45,17 @@ jobs:
strategy:
matrix:
adapter: [
{str_name: "${{inputs.str_name}}",
sycl_config: "${{inputs.sycl_config_params}}",
unit: "${{inputs.unit}}"
{str_name: "${{ inputs.str_name }}",
sycl_config: "${{ inputs.sycl_config_params }}",
unit: "${{ inputs.unit }}"
}
]
build_type: [Release]
compiler: [{c: clang, cxx: clang++}]

runs-on: "${{inputs.str_name}}_PERF"
runs-on: "${{ inputs.str_name }}_PERF"

steps:
# Workspace on self-hosted runners is not cleaned automatically.
# We have to delete the files created outside of using actions.
- name: Cleanup self-hosted workspace
if: always()
run: |
@@ -99,7 +86,8 @@ jobs:
path: ur-repo

- name: Install pip packages
run: pip install -r ${{github.workspace}}/ur-repo/third_party/requirements.txt
run: |
pip install --force-reinstall -r ${{github.workspace}}/ur-repo/third_party/benchmark_requirements.txt
# We need to fetch a special ref for the PR's merge commit. Note: this ref may be absent if the PR is already merged.
- name: Fetch PR's merge commit
@@ -169,13 +157,15 @@ jobs:
run: cmake --install ${{github.workspace}}/ur_build

- name: Run benchmarks
working-directory: ${{ github.workspace }}/ur-repo/
id: benchmarks
run: >
numactl -N 0 ${{ github.workspace }}/ur-repo/scripts/benchmarks/main.py
~/bench_workdir
${{github.workspace}}/sycl_build
${{github.workspace}}/ur_install
${{ matrix.adapter.str_name }}
${{ github.workspace }}/ur-repo/scripts/benchmarks/main.py
~/bench_workdir
--sycl ${{ github.workspace }}/sycl_build
--ur ${{ github.workspace }}/ur_install
--adapter ${{ matrix.adapter.str_name }}
${{ inputs.upload_report && '--output-html' || '' }}
${{ inputs.bench_script_params }}
- name: Add comment to PR
@@ -204,3 +194,10 @@ jobs:
repo: context.repo.repo,
body: body
})
- name: Upload HTML report
if: ${{ always() && inputs.upload_report }}
uses: actions/cache/save@6849a6489940f00c2f30c0fb92c6274307ccb58a # v4.1.2
with:
path: ${{ github.workspace }}/ur-repo/benchmark_results.html
key: benchmark-results-${{ matrix.adapter.str_name }}-${{ github.run_id }}
68 changes: 68 additions & 0 deletions .github/workflows/benchmarks.yml
@@ -0,0 +1,68 @@
name: Compute Benchmarks

on:
workflow_dispatch:
inputs:
str_name:
description: Formatted adapter name
type: choice
required: true
default: 'level_zero'
options:
- level_zero
- level_zero_v2
unit:
description: Test unit (cpu/gpu)
type: choice
required: true
default: 'gpu'
options:
- cpu
- gpu
pr_no:
description: PR number (if 0, it'll run on the main branch)
type: number
required: true
bench_script_params:
description: Parameters passed to script executing benchmark
type: string
required: false
default: ''
sycl_config_params:
description: Extra params for SYCL configuration
type: string
required: false
default: ''
sycl_repo:
description: 'Compiler repo'
type: string
required: true
default: 'intel/llvm'
sycl_commit:
description: 'Compiler commit'
type: string
required: false
default: ''
upload_report:
description: 'Upload HTML report'
type: boolean
required: false
default: false

permissions:
contents: read
pull-requests: write

jobs:
manual:
name: Compute Benchmarks
uses: ./.github/workflows/benchmarks-reusable.yml
with:
str_name: ${{ inputs.str_name }}
unit: ${{ inputs.unit }}
pr_no: ${{ inputs.pr_no }}
bench_script_params: ${{ inputs.bench_script_params }}
sycl_config_params: ${{ inputs.sycl_config_params }}
sycl_repo: ${{ inputs.sycl_repo }}
sycl_commit: ${{ inputs.sycl_commit }}
upload_report: ${{ inputs.upload_report }}
18 changes: 17 additions & 1 deletion .github/workflows/docs.yml
@@ -45,7 +45,23 @@ jobs:

- name: Build Documentation
working-directory: ${{github.workspace}}/scripts
run: python3 run.py --core
run: |
python3 run.py --core
mkdir -p ${{ github.workspace }}/ur-repo/
mkdir -p ${{github.workspace}}/docs/html
- name: Download benchmark HTML
id: download-bench-html
uses: actions/cache/restore@6849a6489940f00c2f30c0fb92c6274307ccb58a # v4.1.2
with:
path: ${{ github.workspace }}/ur-repo/benchmark_results.html
key: benchmark-results-

- name: Move benchmark HTML
# exact or partial cache hit
if: steps.download-bench-html.outputs.cache-hit != ''
run: |
mv ${{ github.workspace }}/ur-repo/benchmark_results.html ${{ github.workspace }}/docs/html/
- name: Upload artifact
uses: actions/upload-pages-artifact@0252fc4ba7626f0298f0cf00902a25c6afc77fa8 # v3.0.0
1 change: 1 addition & 0 deletions README.md
@@ -8,6 +8,7 @@
[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/oneapi-src/unified-runtime/badge)](https://securityscorecards.dev/viewer/?uri=github.com/oneapi-src/unified-runtime)
[![Trivy](https://github.com/oneapi-src/unified-runtime/actions/workflows/trivy.yml/badge.svg)](https://github.com/oneapi-src/unified-runtime/actions/workflows/trivy.yml)
[![Deploy documentation to Pages](https://github.com/oneapi-src/unified-runtime/actions/workflows/docs.yml/badge.svg)](https://github.com/oneapi-src/unified-runtime/actions/workflows/docs.yml)
[![Compute Benchmarks Nightly](https://github.com/oneapi-src/unified-runtime/actions/workflows/benchmarks-nightly.yml/badge.svg)](https://github.com/oneapi-src/unified-runtime/actions/workflows/benchmarks-nightly.yml)

<!-- TODO: add general description and purpose of the project -->

7 changes: 4 additions & 3 deletions scripts/benchmarks/README.md
@@ -37,9 +37,10 @@ By default, the benchmark results are not stored. To store them, use the option

To compare a benchmark run with a previously stored result, use the option `--compare <name>`. You can compare with more than one result.

If no `--compare` option is specified, the benchmark run is compared against a previously stored `baseline`. This baseline is **not** automatically updated. To update it, use the `--save baseline` option.
The recommended way of updating the baseline is running the benchmarking
job on main after a merge of relevant changes.
If no `--compare` option is specified, the benchmark run is compared against a previously stored `baseline`.

The `baseline`, as well as `baseline-v2` (for the level-zero v2 adapter), is updated
automatically during a nightly job. The results are stored
[here](https://oneapi-src.github.io/unified-runtime/benchmark_results.html).
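
The history that backs these comparisons also allows comparing against an aggregate rather than a single run (the commit message mentions a historical average). A minimal sketch of such an aggregation, assuming each stored run maps a benchmark label to a measured value; this data layout is an illustration, not the repository's actual history format.

```python
# Illustrative sketch: averaging stored benchmark runs per benchmark label.
# The data layout here is an assumption, not the actual history format.
from collections import defaultdict
from statistics import mean

def historical_average(runs: list[dict]) -> dict[str, float]:
    """Average each benchmark's value across a list of stored runs."""
    samples: dict[str, list[float]] = defaultdict(list)
    for run in runs:
        for label, value in run.items():
            samples[label].append(value)
    return {label: mean(values) for label, values in samples.items()}

# Example: three stored runs of the same benchmark.
runs = [{"api_overhead": 10.2}, {"api_overhead": 9.8}, {"api_overhead": 10.0}]
print(historical_average(runs))  # {'api_overhead': 10.0}
```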

## Requirements

39 changes: 0 additions & 39 deletions scripts/benchmarks/benches/SobelFilter.py

This file was deleted.

15 changes: 12 additions & 3 deletions scripts/benchmarks/benches/base.py
Original file line number Diff line number Diff line change
@@ -20,16 +20,18 @@ def __init__(self, directory):
def get_adapter_full_path():
for libs_dir_name in ['lib', 'lib64']:
adapter_path = os.path.join(
options.ur_dir, libs_dir_name, f"libur_adapter_{options.ur_adapter_name}.so")
options.ur, libs_dir_name, f"libur_adapter_{options.ur_adapter}.so")
if os.path.isfile(adapter_path):
return adapter_path
assert False, \
f"could not find adapter file {adapter_path} (and in similar lib paths)"

def run_bench(self, command, env_vars):
env_vars_with_forced_adapter = env_vars.copy()
env_vars_with_forced_adapter.update(
{'UR_ADAPTERS_FORCE_LOAD': Benchmark.get_adapter_full_path()})
if options.ur is not None:
env_vars_with_forced_adapter.update(
{'UR_ADAPTERS_FORCE_LOAD': Benchmark.get_adapter_full_path()})

return run(
command=command,
env_vars=env_vars_with_forced_adapter,
@@ -76,3 +78,10 @@ def run(self, env_vars) -> list[Result]:

def teardown(self):
raise NotImplementedError()

class Suite:
def benchmarks(self) -> list[Benchmark]:
raise NotImplementedError()

def setup(self):
return
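
The new `Suite` base class above is deliberately minimal: `benchmarks()` enumerates the suite's benchmarks and `setup()` is an optional hook. A hypothetical subclass showing the intended shape, including the dependency check the commit message describes; `ComputeSuite`, `SomeBenchmark`, and the `options.sycl` check are illustrative assumptions, not code from this patch.

```python
# Hypothetical suite built on the Suite/Benchmark classes above; assumes
# base.py's surrounding imports (options etc.). Names here are illustrative.
class ComputeSuite(Suite):
    def __init__(self, directory):
        self.directory = directory

    def setup(self):
        # Optional hook: fetch or build this suite's workloads here.
        pass

    def benchmarks(self) -> list[Benchmark]:
        # Skip the whole suite when its dependencies aren't available,
        # rather than failing mid-run.
        if options.sycl is None:
            return []
        return [SomeBenchmark(self.directory)]
```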
34 changes: 0 additions & 34 deletions scripts/benchmarks/benches/bitcracker.py

This file was deleted.

