Tint Benchmark

The tint_benchmark executable measures the cost of translating shaders from WGSL to each backend language. The benchmark uses Google Benchmark and is integrated into Chromium's performance waterfall.

Benchmark inputs

Shaders used for benchmarking can be provided as either WGSL or SPIR-V. The shaders are embedded into the binary at build time to avoid runtime file dependencies. A Python script generates a header file that contains all of the benchmark shaders and the macros that register benchmarks with the Google Benchmark harness.
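Conceptually, the generated header embeds each shader and defines a registration macro that expands to one Google Benchmark case per shader. A minimal sketch of that shape, using Google Benchmark's BENCHMARK_CAPTURE macro; the shader names and macro body here are illustrative, not the script's actual output:

```cpp
// Hypothetical sketch of the generated header's shape; the real contents
// are produced by the Python generator.
#define TINT_BENCHMARK_PROGRAMS(FUNC)                                 \
    BENCHMARK_CAPTURE(FUNC, simple_fragment, "simple_fragment.wgsl"); \
    BENCHMARK_CAPTURE(FUNC, particles, "particles.wgsl");             \
    BENCHMARK_CAPTURE(FUNC, skinning, "skinning.wgsl")
```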

SPIR-V shaders are converted to WGSL as an offline step using Tint, since the SPIR-V reader is not available on the waterfall bots. The generated WGSL files are checked into the repository, and the same script is run on the CQ to verify that the generated files are up to date.

The script lists the paths to all of the benchmark shaders, which live either in the end-to-end test directory (test/tint/benchmark/) or, for external shaders, in third_party/benchmark_shaders/.

Adding benchmarks

Files whose names end with _bench.cc are automatically included in the benchmark binary. To benchmark a component of Tint against the benchmark shaders, define a function that performs the measurement and register it with the TINT_BENCHMARK_PROGRAMS macro. For example, the SPIR-V backend benchmark code lives in src/tint/lang/spirv/writer/writer_bench.cc.
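The overall shape of such a function, as a minimal sketch: the shader is loaded and parsed once outside the timed loop, and only the backend generation step is measured. LoadProgramByName and the Generate call below are illustrative stand-ins for the real helpers in the benchmark code, not the actual API:

```cpp
#include <string>

#include "benchmark/benchmark.h"

namespace tint::spirv::writer {
namespace {

// Sketch only: LoadProgramByName is a hypothetical helper that looks up an
// embedded benchmark shader by name and parses it into a program.
void GenerateSPIRV(benchmark::State& state, std::string shader_name) {
    auto program = LoadProgramByName(shader_name);  // hypothetical helper
    for (auto _ : state) {
        // Time only the backend generation; the exact writer entry point
        // and its signature may differ from this sketch.
        auto result = Generate(program, Options{});
        benchmark::DoNotOptimize(result);
    }
}

TINT_BENCHMARK_PROGRAMS(GenerateSPIRV);

}  // namespace
}  // namespace tint::spirv::writer
```

TINT_BENCHMARK_PROGRAMS then expands to one registration per embedded shader, so each (function, shader) pair shows up as its own benchmark case.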

Other parts of Tint can be benchmarked independently of the benchmark shader corpus by registering the benchmark function with the plain BENCHMARK macro.
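This is the standard Google Benchmark registration pattern; the body below is a placeholder and only the registration is the point:

```cpp
#include <string>

#include "benchmark/benchmark.h"

namespace {

// Placeholder body: benchmark some fixed piece of work that does not
// depend on the embedded shader corpus.
void BuildEmptyProgram(benchmark::State& state) {
    for (auto _ : state) {
        std::string src = "fn main() {}";  // fixed WGSL snippet to process
        benchmark::DoNotOptimize(src);
    }
}
BENCHMARK(BuildEmptyProgram);

}  // namespace
```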

Chromium performance waterfall

The tint_benchmark binary is a dependency of the performance_test_suite_template_base template in chrome/test/BUILD.gn.

The bot_platforms.py script controls which platforms run the benchmark. The benchmark was added to the waterfall in this CL.

The chrome.perf console shows the status of the builder and tester bots, and can be used to check that changes to the benchmarking setup are working correctly.

Viewing benchmark data

View benchmark data with the perf.luci.app dashboard. The units for Tint benchmarks are nanoseconds.

The query parameters are:

  • benchmark - select tint_benchmark
  • bot - select the waterfall bot from which to view data
  • test - select the component of Tint to focus on (e.g. a specific backend)
  • subtest_1 - select the benchmark shader to focus on
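For example, to track the SPIR-V backend for a single shader, one might select benchmark = tint_benchmark, one of the waterfall bots, a test such as GenerateSPIRV, and the shader of interest as subtest_1. The exact test and subtest names depend on the registered benchmark functions and the shader corpus, so these values are illustrative.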