Opt-in client kubernetes version success tracking / ability to test each version of kubernetes w/ each provider #1585
Testing Matrix (known-to-work combinations of versions)
To simplify, the version combinations could be limited to image-builder running in a container, rather than every possible version running on every possible Linux or macOS distribution. Instead of 'verifiedby' it could just be 'votes': how many folks have reported success with that combination.
Maybe, after a build completes successfully, it could prompt the user to report the working version combination, with a command-line option like '--report yes/no' to avoid the prompt.
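As a rough, hypothetical sketch of that flow (the flag name, prompt wording, and the reporting step itself are all assumptions, not existing image-builder behavior):

```go
// Hypothetical sketch of an opt-in success report after a build.
// The --report flag, prompt text, and reporting endpoint are illustrative only.
package main

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"strings"
)

func main() {
	// --report yes|no skips the interactive prompt entirely.
	report := flag.String("report", "", "report successful version combinations (yes/no); empty means prompt")
	flag.Parse()

	shouldReport := false
	switch strings.ToLower(*report) {
	case "yes":
		shouldReport = true
	case "no":
		shouldReport = false
	default:
		// No flag given: ask once after a successful build.
		fmt.Print("Build succeeded. Report the working version combination? [y/N]: ")
		answer, _ := bufio.NewReader(os.Stdin).ReadString('\n')
		shouldReport = strings.HasPrefix(strings.ToLower(strings.TrimSpace(answer)), "y")
	}

	if shouldReport {
		fmt.Println("Would send the version combination to the (hypothetical) metrics endpoint.")
	}
}
```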
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Is your feature request related to a problem? Please describe.
If I want to build using Kubernetes v1.29.6, I can't just specify that version; I have to specify multiple version values, because apparently the Debian package might be named differently.
Describe the solution you'd like
If someone specifies the needed versions and is able to build v1.29.6 successfully, why not register those values online somewhere? Then others could specify just the version and reuse the combinations that have already been reported to work.
Describe alternatives you've considered
Additional context
I'm planning to set up automation that watches for a new release of Kubernetes and then automatically generates the needed images using image-builder, storing the results locally. I will do my best to fully automate this, working out the needed versions using strategies like the above, and perhaps adding steps to look up the packages themselves.
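For the "watches for a new release" part, one workable approach is polling the stable-release marker Kubernetes publishes at https://dl.k8s.io/release/stable.txt and reacting when it changes; this is only a sketch, and the trigger step is a placeholder for an actual image-builder invocation:

```go
// Sketch: poll the Kubernetes stable-release marker and react to a new version.
// The trigger logic is a placeholder; a real setup would invoke image-builder.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func latestStable() (string, error) {
	resp, err := http.Get("https://dl.k8s.io/release/stable.txt")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	var seen string
	for {
		v, err := latestStable()
		if err == nil && v != seen {
			fmt.Printf("new stable Kubernetes release detected: %s -> trigger image-builder here\n", v)
			seen = v
		}
		time.Sleep(6 * time.Hour) // polling interval is arbitrary
	}
}
```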
Each time the image build is successful, I could provide the versions I used to the image-builder project if only there was a place to upload the values to.
Really, the image-builder project itself should probably be building new images each time Kubernetes releases a new version and testing them for success by deploying a cluster. If that happened, then again, the working versions could be provided via some sort of programmatically accessible page.
So, whether I provide the values, many volunteers provide the values, or the image-builder project itself provides the values, there still needs to be a place to store those values.
Potential storage
Maybe something like:
image-builder/images/capi/packer/proxmox/known.json
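For illustration only, an entry in such a known.json might carry the version values a build needs plus the 'votes' count suggested earlier; the field names below are assumptions, not a schema image-builder defines:

```go
// Hypothetical schema for entries in known.json; field names are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

// KnownCombination is one reported-to-work set of versions for a provider.
type KnownCombination struct {
	Provider             string `json:"provider"`               // e.g. "proxmox"
	KubernetesSemver     string `json:"kubernetes_semver"`      // e.g. "v1.29.6"
	KubernetesSeries     string `json:"kubernetes_series"`      // e.g. "v1.29"
	KubernetesDebVersion string `json:"kubernetes_deb_version"` // e.g. "1.29.6-1.1"
	Votes                int    `json:"votes"`                  // how many people reported success
}

func main() {
	entry := KnownCombination{
		Provider:             "proxmox",
		KubernetesSemver:     "v1.29.6",
		KubernetesSeries:     "v1.29",
		KubernetesDebVersion: "1.29.6-1.1",
		Votes:                3,
	}
	out, _ := json.MarshalIndent(entry, "", "  ")
	fmt.Println(string(out))
}
```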
Instead of a file in a git repo, it might make more sense for it to be a REST API: a POST that listens for success metrics and a GET that returns known values.
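A minimal in-memory sketch of that REST shape, with made-up paths (POST /reports, GET /known), just to show the idea:

```go
// Minimal in-memory sketch of the suggested REST API.
// Paths, payload shape, and storage are all hypothetical.
package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

type report struct {
	Provider          string `json:"provider"`
	KubernetesVersion string `json:"kubernetes_version"`
	Votes             int    `json:"votes"`
}

var (
	mu    sync.Mutex
	known []report
)

func main() {
	// POST /reports: accept a success report from a build.
	http.HandleFunc("/reports", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		var rep report
		if err := json.NewDecoder(r.Body).Decode(&rep); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		mu.Lock()
		known = append(known, rep)
		mu.Unlock()
		w.WriteHeader(http.StatusCreated)
	})

	// GET /known: return every combination reported so far.
	http.HandleFunc("/known", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		defer mu.Unlock()
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(known)
	})

	http.ListenAndServe(":8080", nil)
}
```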
If there were an opt-in that explained the type of data gathered, how it lets us know which version combinations work, and how that gives us all a better experience overall, I suspect there would be a lot of participation... but it would only take a little participation to be useful.
Maybe at first it could be volunteer-generated versions, but after testing is fully automated, the image-builder project would take over delivering those versions, making the data more reliable. Users could choose to use the public version data or not.
This could also make for a nice stats page maybe, if lots of people participate.
Maybe it becomes its own project used by this one, "image-builder-metrics"? Or just a couple more paths on the existing documentation website URL?
Alternatively
Another way to think of all this is as a TestResultsPage: essentially listing out all the different combinations of variables (within reason) and then, after testing, marking whether each combination resulted in success. With this feature suggestion, some of that data could come from users, since such thorough testing is not yet part of the image-builder project. However, once a list of test combinations exists, it also creates the possibility for folks to put together solutions that test all those combinations. So it is probably as good a starting point as any to have a list of what is to be tested and a way to update that list (see the sketch below).
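To make "a list of what is to be tested" concrete, here is a small sketch that enumerates combinations to be marked later as success or failure; the provider, Kubernetes, and OS values are illustrative placeholders:

```go
// Sketch: enumerate test combinations to be marked pass/fail later.
// Version and provider lists are illustrative placeholders.
package main

import "fmt"

func main() {
	providers := []string{"proxmox", "ova", "qemu"}
	kubernetesVersions := []string{"v1.29.6", "v1.30.2"}
	osImages := []string{"ubuntu-22.04", "rockylinux-9"}

	// Each combination would get a status (untested / success / failure)
	// filled in by automation or by user reports.
	for _, p := range providers {
		for _, k := range kubernetesVersions {
			for _, o := range osImages {
				fmt.Printf("%s / %s / %s: untested\n", p, k, o)
			}
		}
	}
}
```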
/kind feature