BenchmarkDotNet Performance Benchmarks #1085
Labels
benchmarks, help-wanted, is:enhancement, is:task, pri:normal
Task description
This issue formalizes the plan for benchmarking with BenchmarkDotNet that was started in PRs #310 and #349 (#349 can likely be closed, as it was effectively an update to #310), and tracks it against the release. Note that this is specifically about BenchmarkDotNet benchmarks; it is not the same thing as the Lucene.Net.Benchmark project or its use in lucene-cli.
First, we should get the project that benchmarks the Demos (PR #310) into the repo as a starting point, addressing the structural feedback from PR #349 so that it is set up for future projects. This will allow us to run the benchmarks between branches locally to watch for performance regressions as we go, and to compare against perhaps the last 2 or 3 NuGet packages. We should also have CI scripts for GitHub and Azure DevOps that run this benchmark project, to ensure the benchmarks continue to work as future changes are made; centralizing benchmark reporting will come next. If the CI could trivially output a user-friendly file such as HTML that could be published as a build asset (and even visualized in, e.g., an Azure DevOps tab), that would be great, but this would be limited to viewing the data from that single benchmark run to keep the scope reasonable. That latter part can be split out as a separate issue if needed. Having this initial benchmarking infrastructure in place should be a requirement for beta 18. A rough sketch of what a benchmark class in such a project might look like is shown below.
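For illustration only, here is a minimal sketch of a BenchmarkDotNet benchmark class, assuming a small indexing scenario rather than the actual Demos code. The namespace, class, and field names (`Lucene.Net.Benchmarks`, `IndexingBenchmarks`, the `body` field) are hypothetical placeholders and not part of PR #310 or #349; the sketch relies on BenchmarkDotNet's `[MemoryDiagnoser]` and `[HtmlExporter]` attributes, the latter of which writes an HTML report under `BenchmarkDotNet.Artifacts/results` that CI could publish as a build asset.

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Lucene.Net.Util;

namespace Lucene.Net.Benchmarks // hypothetical project/namespace name
{
    [MemoryDiagnoser] // report allocations alongside timings
    [HtmlExporter]    // emit an HTML report that CI could attach as a build asset
    public class IndexingBenchmarks
    {
        private Document[] _docs;

        [GlobalSetup]
        public void Setup()
        {
            // Build a small batch of documents once, outside the measured code.
            _docs = new Document[1000];
            for (int i = 0; i < _docs.Length; i++)
            {
                var doc = new Document();
                doc.Add(new TextField("body", "some sample text " + i, Field.Store.NO));
                _docs[i] = doc;
            }
        }

        [Benchmark]
        public void IndexDocuments()
        {
            // Measure indexing the batch into an in-memory directory.
            using var dir = new RAMDirectory();
            var config = new IndexWriterConfig(
                LuceneVersion.LUCENE_48,
                new StandardAnalyzer(LuceneVersion.LUCENE_48));
            using var writer = new IndexWriter(dir, config);
            foreach (var doc in _docs)
            {
                writer.AddDocument(doc);
            }
            writer.Commit();
        }
    }

    public static class Program
    {
        // BenchmarkSwitcher lets a CI script run all benchmark classes in the assembly
        // (or a filtered subset) via command-line arguments.
        public static void Main(string[] args)
            => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
    }
}
```

For the branch/package comparisons, BenchmarkDotNet jobs can be configured to target specific NuGet package versions, which is one possible way to implement the "compare against the last 2 or 3 NuGet packages" step; the exact configuration is left to the PR.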
Second, we should set up centralized benchmark reporting so that we can track benchmark performance over time. While our first attempt should naturally start out much smaller in scope, it would be nice to aim for something eventually equivalent to Lucene's nightly benchmarks. Where to publish this data, how to visualize it, and so on are TBD. This part will likely be a post-beta-18 item, and we can split it out as its own issue if needed.
Any additional benchmarks that we think would be useful can be logged as their own issues. Hopefully, having this infrastructure in place will encourage the community to provide additional benchmarks to help us build out our benchmark test suite.