
[DEV] split Helm values file into multiple files to support multiple Kubernetes environments #40

Open
bcfriesen opened this issue Apr 11, 2024 · 4 comments

@bcfriesen
Contributor

Currently there is a single values.yaml file for the Helm charts that deploy OCHAMI services. Those values expect a Google Kubernetes Engine (GKE) environment and include annotations targeting it; in any other Kubernetes environment those annotations will, at best, be ignored and, at worst, break the deployment.

Helm supports passing values files to helm install with the -f flag, and multiple values files can be specified in a single invocation. So let's split the existing values file into one for GKE and one for CSM, which is somewhat closer to a "plain" Kubernetes environment than GKE is.

I don't have access to a completely unmodified Kubernetes environment so I am not sure how to write a values file targeting that environment. But hopefully some combination of the GKE and CSM values files will get pretty close.
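As a rough sketch of the layered-values approach (the key names below are illustrative, not the chart's actual structure): the base values.yaml keeps provider-neutral defaults, and a small per-environment overlay carries anything GKE-specific, layered on at install time with an extra flag, e.g. `helm install ochami . -f values.yaml -f values-gke.yaml`.

```yaml
# values-gke.yaml -- hypothetical GKE-only overlay (key names are illustrative).
# The base values.yaml would keep provider-neutral defaults (e.g. annotations: {})
# and this file overrides them only when deploying to GKE.
krakend:
  service:
    annotations:
      # GKE NEG annotation: meaningful only on GKE; on other providers it is
      # ignored at best, which is why it lives in the overlay, not the base file.
      cloud.google.com/neg: '{"ingress": true}'
```

Later -f files override earlier ones, so each environment overlay only needs to contain the keys that actually differ from the base.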

@bcfriesen bcfriesen self-assigned this Apr 11, 2024
@rainest
Contributor

rainest commented Nov 6, 2024

I don't have access to a completely unmodified Kubernetes environment

https://kind.sigs.k8s.io/ generally works well enough for this--there's not really any true "vanilla" Kubernetes, but KiND doesn't have any cloud provider. I usually use it via my previous job's test instance builder, primarily to set up metallb.
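For reference, a throwaway no-cloud-provider cluster for testing the charts needs nothing beyond a stock KinD config (the file name and node layout here are just an example); metallb, if you want working LoadBalancer Services, gets installed on top separately.

```yaml
# kind-config.yaml -- minimal cluster with no cloud provider integration,
# created with `kind create cluster --config kind-config.yaml`.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker   # optional; a single control-plane node also works
```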

We don't currently have much that's GKE-specific other than the default GatewayClass, but there is no provider-agnostic option there. There was a request to add support for indicating a default GatewayClass at the cluster level. It's currently not planned, but the issue may be reopened.


We probably do want to move Gateway configuration into a separate key and reserve the GKE key for resources that are actually GKE-specific. The Gateway itself isn't provider-specific; it's bound to some provider-specific GatewayClass outside the chart.
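To illustrate why the Gateway can live under a provider-neutral key: the resource itself is plain Gateway API, and only the gatewayClassName value ties it to a particular provider (the class name below is an example, not a value from this chart).

```yaml
# A provider-neutral Gateway; only spec.gatewayClassName is provider-specific.
# On GKE that might be gke-l7-global-external-managed; elsewhere it would be
# whatever GatewayClass the cluster's gateway controller provides.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ochami
spec:
  gatewayClassName: gke-l7-global-external-managed  # swap per environment
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```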

Do we actually want separate Gateways for different services? We currently spawn separate Gateway resources (effectively, separate LoadBalancer Services) for krakend and lighttpd, but both use the single gke.gateway key.

If we don't need separate Gateways (for network-level isolation, different TLS parameters, or avoiding TCP/UDP listener port collisions) and can multiplex by HTTP hostname, I'd say create a single gateway key that spawns one resource. If we do want them separate, we'd want distinct lighttpd.gateway and krakend.gateway keys.
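A sketch of the two layouts (key names hypothetical, not the chart's current structure):

```yaml
# Option A (hypothetical keys): one shared Gateway; krakend and lighttpd attach
# to it with HTTPRoutes and are multiplexed by hostname.
gateway:
  className: example-gatewayclass
  hostname: ochami.example.com
---
# Option B (hypothetical keys): a Gateway per service, if we need separate TLS
# parameters, network-level isolation, or distinct listener ports.
krakend:
  gateway:
    className: example-gatewayclass
lighttpd:
  gateway:
    className: example-gatewayclass
```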

@alexlovelltroy
Member

KTF looks great! Do you have examples of using it as part of an integration test flow with GitHub Actions?

@rainest
Contributor

rainest commented Nov 18, 2024

Expanding on the brief examples I provided in chat earlier, there are currently two options for setting up a KTF cluster and running tests against it: via Golang instance creation, or via the KTF CLI.

For Golang projects, the better option is to configure and run a KTF environment struct using its Golang API. The go test command then runs this as part of a test suite. The Action invokes this Make target, using a matrix strategy to handle different test variants (toggling feature flags, selecting different database options, different component container images, etc.).
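A minimal workflow sketch of that pattern, with invented job, target, and variable names (the real workflow and Make target aren't reproduced here): each matrix entry gets its own test run, and the Go test code itself builds the KTF environment.

```yaml
# Hypothetical GitHub Actions workflow: run the Go integration suite once per
# matrix variant; cluster setup happens inside the test suite via KTF.
name: integration
on: [pull_request]
jobs:
  integration:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        database: [embedded, external]       # invented variant axes for illustration
        feature_flag: [enabled, disabled]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: run integration tests
        run: make test.integration            # hypothetical Make target
        env:
          TEST_DATABASE_MODE: ${{ matrix.database }}
          TEST_FEATURE_FLAG: ${{ matrix.feature_flag }}
```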

Non-Golang projects can use the CLI to provision an instance. The Action invokes this script along with others that run tests (roughly, running helm install and then deploying/interacting with resources).

The CLI does not have full support for all features (for example, the metallb pool creation toggle is not exposed as a CLI flag). It's arguably better suited to easily deploying a local environment than to CI as such, but we just didn't need more complex configuration for our chart tests. If we did, a small Golang cluster-setup helper would probably be a better option--it's easier to write one of those than to maintain a YAML-based config system or similar.

@rainest
Contributor

rainest commented Nov 22, 2024

Not sure why my KinD deploys earlier didn't catch it, but the storageClassName setting in postgres and lighttpd can cause issues.

I don't see any always-available class in the docs other than leaving it unspecified, which uses whatever class the cluster marks as the default.

The current value is the default on some GKE clusters. It is not available in KinD.

We can probably just remove it; if we keep it, it needs to be exposed in values.yaml and defaulted to empty.
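If we do keep it, a sketch of what exposing it could look like (key paths are illustrative, not the chart's current layout):

```yaml
# values.yaml -- hypothetical keys: expose the class per component and default
# it to empty so clusters without a matching class use their own default.
postgres:
  persistence:
    storageClassName: ""
lighttpd:
  persistence:
    storageClassName: ""
```

Note that the template would need to omit the field entirely when the value is empty (e.g. behind an `{{- if }}` guard), since rendering a literal `storageClassName: ""` on a PVC disables dynamic provisioning rather than selecting the cluster default.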
