Our larger RHTAP tenants need more memory quota #944

Closed
amfred wants to merge 37 commits into master from more-memory-quota

Commits (37)
0e40660  our larger tenants need more memory quota (amfred, Dec 5, 2023)
52b8d93  remove CPU limits (amfred, Dec 5, 2023)
428a402  let the large tenants use a few nodes if needed (amfred, Dec 5, 2023)
3a6c708  end with blank line (amfred, Dec 5, 2023)
85cacca  really a blank line (amfred, Dec 5, 2023)
5224c5b  remove limits from other constraints (amfred, Dec 5, 2023)
9ab255f  increase default CPU request (amfred, Dec 14, 2023)
9a397e2  updated to latest specs (amfred, Dec 18, 2023)
ab0d96b  Added appstudiolarge to the test tier list (bamachrn, Jan 4, 2024)
8902940  wip (xcoulon, Jan 9, 2024)
b2af475  fix test failure (xcoulon, Jan 9, 2024)
c719b00  Merge branch 'master' into more-memory-quota (alexeykazakov, Jan 9, 2024)
76b3414  bugfix: invalid number of secrets found (#953) (mfrancisc, Jan 10, 2024)
3ae7d0d  upgrade ose-kube-rbac-proxy to v4.14 (#960) (xcoulon, Jan 16, 2024)
a5eff41  KSPACE-28 define the SpaceProvisionerConfig CRD (#959) (metlos, Jan 17, 2024)
3ecc2a9  Regenerated from API changes (#958) (sbryzak, Jan 17, 2024)
80e0752  Regenerated from API (#961) (sbryzak, Jan 18, 2024)
6ce040e  Reintroduced migration to fix broken UserSignups (#962) (sbryzak, Jan 18, 2024)
babf78b  our larger tenants need more memory quota (amfred, Dec 5, 2023)
1a6797c  remove CPU limits (amfred, Dec 5, 2023)
1245967  let the large tenants use a few nodes if needed (amfred, Dec 5, 2023)
227cac3  end with blank line (amfred, Dec 5, 2023)
783af62  really a blank line (amfred, Dec 5, 2023)
4aa1bf0  remove limits from other constraints (amfred, Dec 5, 2023)
04304e7  increase default CPU request (amfred, Dec 14, 2023)
3d90f6e  updated to latest specs (amfred, Dec 18, 2023)
a9b7cc1  Added appstudiolarge to the test tier list (bamachrn, Jan 4, 2024)
e07744f  wip (xcoulon, Jan 9, 2024)
b1e274d  fix test failure (xcoulon, Jan 9, 2024)
afcc469  our larger tenants need more memory quota (amfred, Dec 5, 2023)
8f0d805  remove CPU limits (amfred, Dec 5, 2023)
64234a9  wip (xcoulon, Jan 9, 2024)
3e9ae60  fix test failure (xcoulon, Jan 9, 2024)
4cc09a2  Regenerated from API changes (#958) (sbryzak, Jan 17, 2024)
2d5efbf  Regenerated from API (#961) (sbryzak, Jan 18, 2024)
8c97da2  Merge remote-tracking branch 'refs/remotes/origin/more-memory-quota' … (bamachrn, Jan 22, 2024)
efc5ab2  Merge branch 'master' into more-memory-quota (mfrancisc, Feb 2, 2024)
deploy/templates/nstemplatetiers/appstudio/ns_tenant.yaml (4 additions, 5 deletions)

@@ -39,7 +39,6 @@ objects:
     scopes:
     - NotTerminating
     hard:
-      limits.cpu: "20"
       limits.memory: ${MEMORY_LIMIT}
       requests.cpu: 1750m
       requests.memory: ${MEMORY_REQUEST}
@@ -52,9 +51,8 @@ objects:
     scopes:
     - Terminating
     hard:
-      limits.cpu: "120"
       limits.memory: ${MEMORY_BUILD_LIMIT}
-      requests.cpu: "12"
+      requests.cpu: ${CPU_BUILD_REQUEST}
       requests.memory: ${MEMORY_BUILD_REQUEST}
 - apiVersion: v1
   kind: ResourceQuota
@@ -156,10 +154,9 @@ objects:
     limits:
     - type: "Container"
       default:
-        cpu: 2000m
         memory: 2Gi
       defaultRequest:
-        cpu: 10m
+        cpu: 200m
        memory: 256Mi
 
 - apiVersion: networking.k8s.io/v1
@@ -271,5 +268,7 @@ parameters:
   value: "32Gi"
 - name: MEMORY_BUILD_LIMIT
   value: "128Gi"
+- name: CPU_BUILD_REQUEST
+  value: "12"
 - name: MEMORY_BUILD_REQUEST
   value: "64Gi"
(next file: tier parameter overrides; file path not captured in this view)

@@ -12,3 +12,9 @@ parameters:
   value: "300"
 - name: SECRET_QUOTA
   value: "300"
+- name: MEMORY_BUILD_LIMIT
+  value: "512Gi"
+- name: CPU_BUILD_REQUEST
+  value: "24"
+- name: MEMORY_BUILD_REQUEST
+  value: "128Gi"
(next file: Go tests for the tier templates; file path not captured in this view)

@@ -40,16 +40,17 @@ var expectedProdTiers = map[string]bool{
 	"appstudio-env": false,
 }
 
+// tier_name: true/false (if based on the other tier)
 var expectedTestTiers = map[string]bool{
-	"advanced":  true, // tier_name: true/false (if based on the other tier)
+	"advanced":  true,
 	"base":      false,
 	"nocluster": false,
 	"appstudio": false,
 }
 
 func nsTypes(tier string) []string {
 	switch tier {
-	case "appstudio":
+	case "appstudio", "appstudiolarge":
 		return []string{"tenant"}
 	case "appstudio-env":
 		return []string{"env"}
@@ -62,7 +63,7 @@
 
 func roles(tier string) []string {
 	switch tier {
-	case "appstudio", "appstudio-env":
+	case "appstudio", "appstudiolarge", "appstudio-env":
 		return []string{"admin", "maintainer", "contributor"}
 	default:
 		return []string{"admin"}
@@ -173,7 +174,7 @@ func TestLoadTemplatesByTiers(t *testing.T) {
 	tmpls, err := loadTemplatesByTiers(assets)
 	// then
 	require.NoError(t, err)
-	require.Len(t, tmpls, 4)
+	require.Len(t, tmpls, 4) // advanced,appstudio,base,nocluster
 	require.NotContains(t, "foo", tmpls) // make sure that the `foo: bar` entry was ignored
 
 	for _, tier := range tiers(expectedTestTiers) {
@@ -599,11 +600,27 @@ func assertNamespaceTemplate(t *testing.T, decoder runtime.Decoder, actual templ
 	} else {
 		templatePath = fmt.Sprintf("%s/ns_%s.yaml", tier, typeName)
 	}
+	t.Logf("checking template '%s' (based on another tier: %t)", templatePath, basedOnOtherTier(expectedTiers, tier))
 	content, err := assets.Asset(templatePath)
 	require.NoError(t, err)
 	expected := templatev1.Template{}
 	_, _, err = decoder.Decode(content, nil, &expected)
 	require.NoError(t, err)
+	// then override the templates' parameters (if applicable)
+	if basedOnOtherTier(expectedTiers, tier) {
+		content, err = assets.Asset(fmt.Sprintf("%s/based_on_tier.yaml", tier))
+		require.NoError(t, err)
+		extension := BasedOnTier{}
+		err = yaml.Unmarshal(content, &extension)
+		require.NoError(t, err)
+		for i, p := range expected.Parameters {
+			for _, ep := range extension.Parameters {
+				if p.Name == ep.Name {
+					expected.Parameters[i].Value = ep.Value
+				}
+			}
+		}
+	}
 	assert.Equal(t, expected, actual)
 	assert.NotEmpty(t, actual.Objects)
 }
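The new override block in assertNamespaceTemplate reads a per-tier based_on_tier.yaml asset and substitutes any matching parameter values into the expected template before comparing it with the actual one. A minimal sketch of such a file, showing only the parameters field this test exercises (the path follows the "%s/based_on_tier.yaml" pattern in the code; any other BasedOnTier fields are out of scope here):

    # hypothetical appstudiolarge/based_on_tier.yaml, shape inferred from the test code above
    parameters:
    - name: MEMORY_BUILD_LIMIT
      value: "512Gi"
    - name: CPU_BUILD_REQUEST
      value: "24"
    - name: MEMORY_BUILD_REQUEST
      value: "128Gi"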
test/templates/nstemplatetiers/appstudio/cluster.yaml (123 additions, 15 deletions)

@@ -6,30 +6,138 @@ objects:
 - apiVersion: quota.openshift.io/v1
   kind: ClusterResourceQuota
   metadata:
-    name: for-${SPACE_NAME}-compute
+    name: for-${SPACE_NAME}-deployments
   spec:
     quota:
       hard:
-        limits.cpu: 20000m
-        limits.memory: ${MEMORY_LIMIT}
-        limits.ephemeral-storage: 7Gi
-        requests.cpu: 1750m
-        requests.memory: ${MEMORY_REQUEST}
-        requests.storage: 15Gi
-        requests.ephemeral-storage: 7Gi
-        count/persistentvolumeclaims: "5"
+        count/deployments.apps: ${{DEPLOYMENT_QUOTA}}
+        count/deploymentconfigs.apps: ${{DEPLOYMENT_QUOTA}}
+        count/pods: ${{POD_QUOTA}}
     selector:
       annotations: null
       labels:
         matchLabels:
           toolchain.dev.openshift.com/space: ${SPACE_NAME}
+- apiVersion: quota.openshift.io/v1
+  kind: ClusterResourceQuota
+  metadata:
+    name: for-${SPACE_NAME}-replicas
+  spec:
+    quota:
+      hard:
+        count/replicasets.apps: ${{REPLICASET_QUOTA}}
+        count/replicationcontrollers: ${{REPLICASET_QUOTA}}
+    selector:
+      annotations: null
+      labels:
+        matchLabels:
+          toolchain.dev.openshift.com/space: ${SPACE_NAME}
+- apiVersion: quota.openshift.io/v1
+  kind: ClusterResourceQuota
+  metadata:
+    name: for-${SPACE_NAME}-routes
+  spec:
+    quota:
+      hard:
+        count/ingresses.extensions: ${{ROUTE_QUOTA}}
+        count/routes.route.openshift.io: ${{ROUTE_QUOTA}}
+    selector:
+      annotations: null
+      labels:
+        matchLabels:
+          toolchain.dev.openshift.com/space: ${SPACE_NAME}
+- apiVersion: quota.openshift.io/v1
+  kind: ClusterResourceQuota
+  metadata:
+    name: for-${SPACE_NAME}-jobs
+  spec:
+    quota:
+      hard:
+        count/jobs.batch: "30"
+        count/daemonsets.apps: "30"
+        count/cronjobs.batch: "30"
+        count/statefulsets.apps: "30"
+    selector:
+      annotations: null
+      labels:
+        matchLabels:
+          toolchain.dev.openshift.com/space: ${SPACE_NAME}
+- apiVersion: quota.openshift.io/v1
+  kind: ClusterResourceQuota
+  metadata:
+    name: for-${SPACE_NAME}-services
+  spec:
+    quota:
+      hard:
+        count/services: ${{SERVICE_QUOTA}}
+    selector:
+      annotations: null
+      labels:
+        matchLabels:
+          toolchain.dev.openshift.com/space: ${SPACE_NAME}
+- apiVersion: quota.openshift.io/v1
+  kind: ClusterResourceQuota
+  metadata:
+    name: for-${SPACE_NAME}-bc
+  spec:
+    quota:
+      hard:
+        count/buildconfigs.build.openshift.io: "30"
+    selector:
+      annotations: null
+      labels:
+        matchLabels:
+          toolchain.dev.openshift.com/space: ${SPACE_NAME}
+- apiVersion: quota.openshift.io/v1
+  kind: ClusterResourceQuota
+  metadata:
+    name: for-${SPACE_NAME}-secrets
+  spec:
+    quota:
+      hard:
+        count/secrets: ${{SECRET_QUOTA}}
+    selector:
+      annotations: null
+      labels:
+        matchLabels:
+          toolchain.dev.openshift.com/space: ${SPACE_NAME}
+- apiVersion: quota.openshift.io/v1
+  kind: ClusterResourceQuota
+  metadata:
+    name: for-${SPACE_NAME}-cm
+  spec:
+    quota:
+      hard:
+        count/configmaps: ${{CONFIGMAP_QUOTA}}
+    selector:
+      annotations: null
+      labels:
+        matchLabels:
+          toolchain.dev.openshift.com/space: ${SPACE_NAME}
 - apiVersion: toolchain.dev.openshift.com/v1alpha1
   kind: Idler
   metadata:
     name: ${SPACE_NAME}
   spec:
     timeoutSeconds: ${{IDLER_TIMEOUT_SECONDS}}
 parameters:
 - name: SPACE_NAME
   required: true
 - name: IDLER_TIMEOUT_SECONDS
-  # 12 hours
-  value: "43200"
-- name: MEMORY_LIMIT
-  value: "7Gi"
-- name: MEMORY_REQUEST
-  value: "7Gi"
+  # No Idling
+  value: "0"
+# Quota
+- name: REPLICASET_QUOTA
+  value: "30"
+- name: DEPLOYMENT_QUOTA
+  value: "30"
+- name: POD_QUOTA
+  value: "300"
+- name: ROUTE_QUOTA
+  value: "30"
+- name: SERVICE_QUOTA
+  value: "30"
+- name: CONFIGMAP_QUOTA
+  value: "100"
+- name: SECRET_QUOTA
+  value: "100"