BenchmarkDotNet Performance Benchmarks #1085

Open · 1 task done
paulirwin opened this issue Jan 2, 2025 · 0 comments
Labels: benchmarks, help-wanted, is:enhancement, is:task, pri:normal

Comments

@paulirwin
Contributor

Is there an existing issue for this?

  • I have searched the existing issues

Task description

This issue formalizes the plan for benchmarking with BenchmarkDotNet that was started in PR #310 (PR #349 can likely be closed, as it was effectively an update to #310), and tracks it against the release. Note that this is specifically about BenchmarkDotNet benchmarks; it is not the same thing as the Lucene.Net.Benchmark project or its use in lucene-cli.

First, we should get the project that benchmarks the Demos (PR #310) into the repo as a starting point, addressing the structural feedback in PR #349 so that it is set up for future projects. This will allow us to run the benchmarks between branches locally to watch for performance regressions as we go, and to compare them against perhaps the last 2 or 3 NuGet packages. We should also have CI scripts for GitHub and Azure DevOps that run this benchmark project, to ensure the benchmarks continue to work as future changes are made; centralizing benchmark reporting will come next. If the CI run can trivially output a user-friendly file such as HTML that could be published as a build asset (and even visualized in, e.g., an ADO tab), that would be great, but this would be limited to viewing the data from that single benchmark run to keep the scope reasonable. That latter part can be split out as a separate issue if needed. Having this initial benchmarking infrastructure in place should be a requirement for beta 18.
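To make the intent concrete, here is a minimal sketch of what a BenchmarkDotNet benchmark in such a project might look like. The class, method, and workload shown are hypothetical illustrations (not taken from PR #310); `[HtmlExporter]` is BenchmarkDotNet's built-in way to emit an HTML report of the kind mentioned above.

```csharp
// Hypothetical sketch; class/method names and workload are illustrative only, not from PR #310.
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Lucene.Net.Util;

[MemoryDiagnoser]   // report allocations alongside timings
[HtmlExporter]      // emit an HTML report that CI could publish as a build asset
public class IndexingBenchmarks
{
    [Benchmark]
    public void IndexOneHundredSmallDocuments()
    {
        using var directory = new RAMDirectory();
        using var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
        using var writer = new IndexWriter(directory, new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer));

        for (int i = 0; i < 100; i++)
        {
            var doc = new Document();
            doc.Add(new TextField("body", $"the quick brown fox jumps over the lazy dog {i}", Field.Store.NO));
            writer.AddDocument(doc);
        }

        writer.Commit();
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<IndexingBenchmarks>();
}
```

With something like this in place, the CI job would presumably just invoke the project with `dotnet run -c Release` (the standard way to launch BenchmarkDotNet) and collect the generated `BenchmarkDotNet.Artifacts/results` output, including the HTML file, as a build artifact.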

Second, we should set up centralized benchmark reporting so that we can track benchmark performance data over time. While our first attempt should naturally start out much smaller in scope, it would be nice to have something that eventually aims to be equivalent to Lucene's nightly benchmarks. Where to publish the data, how to visualize it, etc., is TBD. This will likely be a post-beta-18 item, and we can split it out into its own issue if needed.

Any additional benchmarks that we think would be useful can be logged as their own issues. Hopefully, having this infrastructure in place will encourage the community to provide additional benchmarks to help us build out our benchmark test suite.

@paulirwin added the benchmarks, help-wanted, is:enhancement, is:task, and pri:normal labels on Jan 2, 2025
@paulirwin added this to the 4.8.0-beta00018 milestone on Jan 2, 2025