aws-eks: cdk should validate cluster version and kubectl layer version #24580
Comments
From another issue, it looks like the library in some cases prints a warning:
But I've never seen that warning. Was it maybe removed in a newer version?
According to the document:
But I agree with you that we probably should implement a check to avoid potential errors like that. I am making this a p2 feature request and any PR would be appreciated!
@pahud According to this reply, when I am trying to use
@ShankarDhandapani looks like you need to instantiate it like:
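A minimal sketch of that instantiation, assuming CDK v2 and, for illustration, the @aws-cdk/lambda-layer-kubectl-v24 package (pick the layer package that matches your cluster version; construct IDs are arbitrary):

```ts
import { App, Stack } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import { KubectlV24Layer } from '@aws-cdk/lambda-layer-kubectl-v24';

class EksStack extends Stack {
  constructor(scope: App, id: string) {
    super(scope, id);
    // Instantiate the versioned kubectl layer and hand it to the cluster,
    // keeping the layer version in lockstep with the cluster version.
    new eks.Cluster(this, 'Cluster', {
      version: eks.KubernetesVersion.V1_24,
      kubectlLayer: new KubectlV24Layer(this, 'KubectlLayer'),
    });
  }
}

new EksStack(new App(), 'EksStack');
```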
I am currently struggling with the same issue.
This solution does not seem to apply to v2 of the AWS CDK.
We probably can add the validation here: aws-cdk/packages/aws-cdk-lib/aws-eks/lib/cluster.ts, lines 1473 to 1475 (commit cc4ce12).
I guess the challenge is that lambda.ILayerVersion does not have any attribute for the kubectl version, so it's not easy to compare them.
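A rough sketch of what such a validation could look like, as a hypothetical helper rather than the actual implementation (since lambda.ILayerVersion carries no kubectl version, the best the construct can do without extra metadata is warn when no explicit layer is supplied):

```ts
import { Annotations } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

// Hypothetical helper: warn when a cluster is defined without an explicit
// kubectl layer, because the bundled default layer may not match the
// requested Kubernetes version.
function warnOnMissingKubectlLayer(
  scope: Construct,
  version: eks.KubernetesVersion,
  kubectlLayer?: lambda.ILayerVersion,
): void {
  if (kubectlLayer === undefined) {
    Annotations.of(scope).addWarning(
      `Cluster version ${version.version} was specified without a kubectlLayer; ` +
      'the default kubectl layer may be incompatible. Pass a layer from the ' +
      'matching @aws-cdk/lambda-layer-kubectl-vXY package.',
    );
  }
}
```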
Thanks for starting this thread. I was running into the same issue, but I was able to fix it by following the suggestions posted here. I am using CDK v2 and I see that my kubectl version is at its latest; I don't know why my cdk is not validating the kubectl version. Is anyone working on fixing this? Any idea when this issue will be fixed so that it can pick the matching kubectlLayer version based on the Kubernetes version provided? I imported the KubectlLambdaLayer package from here: `import { KubectlV26Layer } from '@aws-cdk/lambda-layer-kubectl-v26'; kubectlLayer: new KubectlV26Layer(this, 'KubectlLayer'),`
I've seen this error several times while attempting to update resources created with
It appears CloudFormation is attempting to use an API version mismatched with what is actually deployed, e.g. attempting to use
Full error response
When it occurs, it leaves the stack in an
Running Kubernetes 1.29.
I've deployed a 1.29 EKS cluster via cdk, specifying the kubectlLayer as KubectlV29Layer() when creating the cluster, and I'm having the same issue as @graydenshand, where the only way to get changes applied is to destroy and deploy again. This blocks just about any management of the cluster. From the lambda kubectl layer logs:
We are experiencing the same problem. To make matters worse for us, it appears that https://github.com/cdklabs/awscdk-kubectl-go/commits/kubectl.29
@graydenshand @benjamin-at-greensky Are you able to reproduce this issue for us? For example, after initially creating a 1.29 cluster with the kubectl v29 layer, what could cause this error after that?
@kriscoleman Can you create a new issue and provide your CDK in Go code snippet in the issue description? |
@pahud I have been able to reproduce this by deploying a fresh EKS cluster with kubectlLayer set to v29 and then redeploying a helm chart with updated values.
After this, I make an update to the cdk code that deploys a helm chart (for example, I was redeploying one with some annotations on an ingress). I then receive this error when running a cdk deploy:
I have no CronJobs deployed to the cluster:
It is worth mentioning that the helm chart I'm deploying has no references to batch/v1beta1 anywhere.
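A rough sketch of that reproduction, assuming CDK v2 in TypeScript (the chart name, repository, and values are placeholders):

```ts
import { App, Stack } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import { KubectlV29Layer } from '@aws-cdk/lambda-layer-kubectl-v29';

class ReproStack extends Stack {
  constructor(scope: App, id: string) {
    super(scope, id);

    // 1. Deploy a fresh 1.29 cluster with the matching kubectl layer.
    const cluster = new eks.Cluster(this, 'Cluster', {
      version: eks.KubernetesVersion.V1_29,
      kubectlLayer: new KubectlV29Layer(this, 'KubectlV29Layer'),
    });

    // 2. Deploy a helm chart; on a second `cdk deploy`, change `values`
    //    (e.g. add an ingress annotation) to trigger a helm upgrade.
    cluster.addHelmChart('Ingress', {
      chart: 'ingress-nginx',
      repository: 'https://kubernetes.github.io/ingress-nginx',
      namespace: 'ingress-nginx',
      values: {
        controller: { replicaCount: 1 },
      },
    });
  }
}

new ReproStack(new App(), 'ReproStack');
```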
I had the same issue, and defining the layer via `from aws_cdk.lambda_layer_kubectl_v28 import KubectlV28Layer` solved my issue.
I am using
Can someone please look into this issue? It's been a while and it's technically blocking us from using EKS at the moment.
I tried to create a new cluster in version 1.28 and use KubectlV28Layer, but still got the same error. |
@tchcxp
By default, if you don't specify the layer version, it will default to a kubectl layer built for Kubernetes 1.20. To resolve this, you need to set the kubectl layer again:

```ts
eks.Cluster.fromClusterAttributes(this, 'ImportedCluster', {
  clusterName: clusterName,
  kubectlRoleArn: kubectlRoleArn,
  blah: blah,
  kubectlLayer: new KubectlV28Layer(this, `kubectl-v28-layer`), // <---
});
```

This should address the issue.
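The same constraint applies when defining a new cluster rather than importing one: the kubectlLayer passed to new eks.Cluster(...) should come from the @aws-cdk/lambda-layer-kubectl-vXY package that matches the cluster's version property.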
Describe the bug
Ever since we upgraded from Kubernetes 1.21 to newer versions, we're getting lots of weird errors related to what I believe are kubectl layer incompatibilities, like
It would be much better if cdk actually validated the layer version against the intended Kubernetes version when synthesising, so that these issues didn't occur.
Expected Behavior
cdk should error out, informing me that the selected cluster version doesn't match the configured layer
Current Behavior
No validation occurs, which leads to lots of errors when trying to change the cluster later
Reproduction Steps
Possible Solution
No response
Additional Information/Context
No response
CDK CLI Version
2.67.0
Framework Version
2.66.1
Node.js Version
v18.14.2
OS
Ubuntu
Language
Python
Language Version
3.9
Other information
No response