Added more memory tiers with proper resource access #969

Closed

Conversation

bamachrn (Contributor)

This is a replacement for PR #944.

Signed-off-by: Bama Charan Kundu <bamachrn@gmail.com>

openshift-ci bot commented Feb 12, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: bamachrn
Once this PR has been reviewed and has the lgtm label, please assign sbryzak for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Feb 12, 2024

Hi @bamachrn. Thanks for your PR.

I'm waiting for a codeready-toolchain member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@alexeykazakov (Contributor) left a comment

It looks like there are problems with the unit tests for this PR.

content, err := assets.Asset(templatePath) // load the embedded template file by path
require.NoError(t, err)
expected := templatev1.Template{}
_, _, err = decoder.Decode(content, nil, &expected) // decode the raw bytes into an OpenShift Template
require.NoError(t, err)
// then override the templates' parameters (if applicable)
if basedOnOtherTier(expectedTiers, tier) {
Contributor

I'm not sure why this is needed. Could you please comment on this?

Contributor

I believe this comes from my contribution to #944 but I don't recall all the details :/
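For context on the test snippet above: a minimal sketch of the kind of template asset such a test decodes, assuming a shape similar to the tier templates discussed in this PR (all names and values here are illustrative, not the repo's actual assets):

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-tier-template # illustrative name
objects:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: compute-deploy # assumed quota name
  spec:
    scopes:
    - NotTerminating
    hard:
      limits.cpu: "20"
      limits.memory: ${MEMORY_LIMIT}
parameters:
- name: MEMORY_LIMIT
  value: 7Gi # illustrative default

The parameters section appears to be what the "override the templates' parameters" comment in the test refers to: a tier that is based on another tier reuses the template but substitutes its own parameter values.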

Quality Gate passed

Issues: 0 new issues
Measures: 0 security hotspots, no data about coverage, 12.0% duplication on new code

See analysis details on SonarCloud

@MatousJobanek (Contributor) left a comment

Could you please explain why you changed the templates in the test/templates/ folder?
There is no point in changing them, right? It's not a problem that they don't align with the actual ones; they are just dummy testing templates.

@@ -39,7 +39,6 @@ objects:
     scopes:
     - NotTerminating
     hard:
-      limits.cpu: "20"
Contributor

Are you sure you want to remove the quota for CPU limits completely?
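For reference, a minimal sketch of a ResourceQuota of the shape this hunk edits, reconstructed from the fields visible in the diff context (the object name is an assumption):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-deploy # assumed name; not visible in the hunk
spec:
  scopes:
  - NotTerminating
  hard:
    limits.cpu: "20" # the entry the diff removes

With the limits.cpu entry gone, the quota no longer caps the aggregate CPU limits of non-terminating pods in the namespace, which is what this question is asking about.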

@MatousJobanek (Contributor)

This PR hasn't been updated for a long time and there are tons of conflicts. Closing it.
If you still want to change the values, please open a new PR.
