Examples of using k6 for load testing your services
- Install k6:

  ```shell
  brew install k6
  ```

- Run a test:

  ```shell
  k6 run tests/simple-service-test.js
  ```
By using the `ramping-arrival-rate` k6 executor you can configure your tests to ramp up to, and then hold, a fixed throughput rate. For example, the following `options` block will start a test at 0 iterations per second, ramp up to 200 iterations per second over 2 minutes, and then hold 200 iterations per second for 5 minutes:
```javascript
export const options = {
  scenarios: {
    load_test: {
      executor: 'ramping-arrival-rate',
      startRate: 0,
      timeUnit: '1s',
      preAllocatedVUs: 1,
      maxVUs: 200,
      stages: [
        { target: 200, duration: '2m' },
        { target: 200, duration: '5m' }
      ],
      tags: {
        testName: 'my-test'
      }
    },
  }
};
```
k6 does not support writing to a file out of the box, but it is possible to use a third-party extension that lets a test write to the local filesystem. To do so, we need to build a custom version of k6 bundled with this extension.
- Make sure that your `go` version is at least 1.17.0.

- Install the `xk6` tool for building k6:

  ```shell
  go install go.k6.io/xk6/cmd/xk6@latest
  ```

- Make a custom build of k6 with the command:

  ```shell
  xk6 build v0.36.0 --with github.com/avitalique/xk6-file@latest
  ```

- Now you can use the new `k6` binary to run a test that writes into a local file, for example:

  ```shell
  ./k6 run tests/print-user-file.js
  ```
By using a k6 `SharedArray` you can initialize an array, read from CSV or JSON files, that is shared across the VUs of a given k6 instance. The following example initializes a `SharedArray` and accesses a random (shuffled) element of it:
```javascript
import { SharedArray } from 'k6/data';
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';

// Initializing the SharedArray: the function runs only once,
// and the resulting array is shared across all VUs
const users = new SharedArray('Users', () => {
  return papaparse.parse(open('../resources/testUsers.csv'), { header: true }).data;
});

// Accessing users (shuffled)
const user = users[Math.floor(Math.random() * users.length)];
```
- Run the Grafana and InfluxDB docker containers:

  ```shell
  docker-compose up influxdb grafana
  ```

- Run a test configured to send metrics to InfluxDB:

  ```shell
  k6 run -o influxdb=http://localhost:8086/k6 tests/simple-service-test.js
  ```

- You should now be able to access Grafana at `localhost:3000` and set up a data source and custom dashboards to monitor tests. You can import an example dashboard from the `grafana` folder in this repo. Use `http://host.docker.internal:8086` as the data source URL for InfluxDB, and `k6` as the database.

See the k6 docs for more details.
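For reference, a minimal sketch of what the two compose services above could look like — the image tags, ports, and database name here are assumptions, not the repo's actual docker-compose.yml (note that k6's `influxdb` output speaks the InfluxDB 1.x API):

```yaml
# Hypothetical sketch -- not the actual docker-compose.yml from this repo.
services:
  influxdb:
    image: influxdb:1.8          # k6's influxdb output targets the 1.x API
    ports:
      - "8086:8086"
    environment:
      - INFLUXDB_DB=k6           # database that k6 writes metrics into
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
```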
- Run the datadog agent on your machine (remember to input your datadog API key):

  ```shell
  DOCKER_CONTENT_TRUST=1 \
  docker run --rm -d \
      --name datadog \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v /proc/:/host/proc/:ro \
      -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
      -e DD_SITE="datadoghq.com" \
      -e DD_API_KEY=<YOUR_DATADOG_API_KEY> \
      -e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=1 \
      -p 8125:8125/udp \
      datadog/agent:latest
  ```

- Run tests:

  ```shell
  K6_STATSD_ENABLE_TAGS=true K6_STATSD_ADDR=localhost:8125 k6 run --out statsd tests/simple-service-test.js
  ```

- Metrics should now be accessible in datadog under the `k6` namespace.

More docs on pushing metrics to datadog are found here.
- Install and start minikube:

  ```shell
  brew install minikube
  minikube start
  ```

- Point your kubectl context to your local k8s cluster.

- Follow the k6 documentation to clone k6-operator into a location you like on your machine, and deploy its infrastructure on your local k8s cluster:

  ```shell
  git clone https://github.com/grafana/k6-operator && cd k6-operator
  make deploy
  ```

- Go back to the working directory of this repo and create a configmap for your test script:

  ```shell
  kubectl create configmap my-test-example --from-file tests/simple-service-test.js
  ```

- Create a custom resource that will run the test in your namespace. In the custom resource you can control how many machines (pods) to run and which test script to run, as well as various test configs (throughput, warmup, duration) via environment variables:

  ```shell
  kubectl apply -f custom-resource.yml
  ```

- Now you will see several pods spinning up in your default namespace, and you should be able to see in the service dashboards that traffic is hitting the service.

- Clean up the local cluster afterwards:

  ```shell
  kubectl delete -f custom-resource.yml
  kubectl delete configmap my-test-example
  cd /path/to/k6-operator
  make delete
  minikube stop
  ```
The `custom-resource.yml` file used to initiate a test in step 5 above has a config variable called `parallelism`. The number defined there controls how many pods will spin up to run tests in parallel.
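For illustration, a minimal custom resource could look like the following — a sketch based on the k6-operator's `K6` custom resource, where the metadata name and parallelism value are placeholders, not necessarily what this repo's `custom-resource.yml` uses:

```yaml
# Hypothetical sketch of a k6-operator custom resource.
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: my-test
spec:
  parallelism: 4               # number of pods that run the test in parallel
  script:
    configMap:
      name: my-test-example    # the configmap created in the previous step
      file: simple-service-test.js
```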
By using environment variables to assign values to the `options` block in the tests, you can configure the tests running on k8s without changing the test script itself each time. Within the tests, env vars are available on the `__ENV` object. For an example of this, see `simple-service-test.js`, and note how env vars are passed to the containers in `custom-resource.yml`.
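As an illustration of the pattern (the `RATE`, `RAMP_UP`, and `DURATION` variable names here are hypothetical, not necessarily what this repo's tests use), a test can build its `options` block from `__ENV` with defaults:

```javascript
// In a real k6 script, __ENV is provided by k6 (populated via `k6 run -e NAME=value`
// or the container's environment). The fallback below only exists so the snippet
// is self-contained outside of k6.
const env = typeof __ENV !== 'undefined' ? __ENV : {};

// Hypothetical variable names -- adapt to whatever your test actually reads.
const rate = Number(env.RATE || 200);

// In a k6 test this would be `export const options = { ... }`.
const options = {
  scenarios: {
    load_test: {
      executor: 'ramping-arrival-rate',
      startRate: 0,
      timeUnit: '1s',
      preAllocatedVUs: 1,
      maxVUs: rate,
      stages: [
        { target: rate, duration: env.RAMP_UP || '2m' },   // ramp-up
        { target: rate, duration: env.DURATION || '5m' },  // steady state
      ],
    },
  },
};
```

With this shape, the same script can run locally with defaults or on k8s with env vars injected through the custom resource.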
Not implemented. The idea so far is to use shared volumes and reference these within the pods running the tests. See github issue for more info.
Alternatively, we could store files on s3 and use the k6 s3 client to fetch them. See slack thread for more info.