The JVM runtime is notorious for its long, multi-second cold starts. Workarounds in the form of Lambda warm-up magic are rather ugly.
With the addition of custom Lambda runtimes, we can use GraalVM to build a native image, which should start significantly faster.
=>
The native lambda generated by this repo needs roughly 150 ms for a cold start with 256 MB of memory.
Other tech used: Gradle + Terraform
- basic Gradle setup
- Terraform EC2 instance for the build (GraalVM only works on *nix)
- GraalVM build task works
- create an HTTP client for the Lambda runtime interface (HTTP + JSON parsing)
- implement the Lambda runtime interface (see the sketch below)
- Terraform the Lambda function using the GraalVM image
- replace the manual EC2 build with CodeBuild from the Git repo and an automatic Lambda rollout
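For context, here is a minimal sketch of such a custom-runtime loop against the AWS Lambda Runtime API. It assumes a plain `HttpURLConnection` client and a hypothetical echo handler; the actual code and JSON parsing in this repo may differ.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Minimal custom-runtime loop: poll the Runtime API for the next invocation,
// run the handler, and post the result back under the same request id.
fun main() {
    // host:port of the Runtime API, injected by the Lambda environment
    val api = System.getenv("AWS_LAMBDA_RUNTIME_API")
    val base = "http://$api/2018-06-01/runtime/invocation"

    while (true) {
        // 1) Long-poll for the next event
        val next = URL("$base/next").openConnection() as HttpURLConnection
        val requestId = next.getHeaderField("Lambda-Runtime-Aws-Request-Id")
        val event = next.inputStream.bufferedReader().readText()

        // 2) Run the handler (placeholder: echo the raw event back)
        val result = handle(event)

        // 3) Post the handler result as the invocation response
        val resp = URL("$base/$requestId/response").openConnection() as HttpURLConnection
        resp.requestMethod = "POST"
        resp.doOutput = true
        resp.outputStream.use { it.write(result.toByteArray()) }
        check(resp.responseCode in 200..299) { "posting the invocation response failed" }
    }
}

// Hypothetical handler; real business logic and JSON parsing live elsewhere.
fun handle(event: String): String = event
```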
Currently, the Lambda function is created by Terraform. On a clean slate (no state), the creation would fail because
the expected zip is not yet present in the S3 bucket (no CodeBuild run has happened). Therefore a fake zip is used
during the Lambda creation, and the CodeBuild job calls lambda:UpdateFunctionCode afterwards.
Alternatively, the CodeBuild job could also handle creating the function if it does not exist yet. Neither approach is without flaws.
- fighting-cold-startup-issues-for-your-kotlin-lambda-with-graalvm
- AWS Lambda Runtime Interface
- A simple native HTTP server with GraalVM
- HttpServer.kt
- palantir/gradle-graal
- compute_type = "BUILD_GENERAL1_SMALL" keeps crashing, possibly because the native-image build needs more RAM -> use BUILD_GENERAL1_MEDIUM
- Maybe you forgot to apply Terraform?!