# Buran

Lightning-fast proxy cache for the Prismic Headless API CMS
Prismic is an amazing headless CMS, with features such as experiments, previews and planned releases. To support these, it uses a versioning model that appears to be heavily inspired by how Git works.
For instance, retrieving a document usually consists of two steps:
```shell
# 1. Look for the master reference
ref=`curl http://your-repo.cdn.prismic.io/api/v2 | jq -r '.refs[] | select(.isMasterRef == true) | .ref'`

# 2. Query documents based on that reference
curl -g "http://your-repo.cdn.prismic.io/api/v2/documents/search?ref=$ref&q=[[at(document.type, \"home_page\")]]"
```
There are two problems here:
- The first request is never cached by the CDN, so requests originating far from Prismic's servers suffer from latency
- CDNs are fast, but local cache is faster
Sample measurements from running the script above from different locations (times in milliseconds):
| avg | min | max | scenario |
| --- | --- | --- | --- |
| 10.8 | 8 | 30 | Call proxy from inside Kubernetes cluster |
| 20.7 | 16 | 34 | Call proxy from GCP instance in southamerica-east-1 |
| 46.3 | 6 | 223 | Call Prismic CDN from AWS instance in us-east-1 |
| 86.2 | 20 | 329 | Call Prismic CDN from GCP instance in southamerica-east-1 |
Note: the Redis instance used for the cache has low network performance, so its latency could likely be improved further.
## Running the benchmark

- Run `npm install axios`
- Edit `configs` in `benchmark.js` to point to the desired endpoints
- Run `node benchmark.js <target>`, where `<target>` is one of `prismic`, `buran-remote` or `buran-local`
Buran solves the latency problem by adding a cache layer (Redis, in the example). The cache entry for the first call (fetching the master reference) is invalidated whenever content is published, which removes the need to invalidate each individual query.
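In practice, clients keep the same two-step flow but point it at Buran instead of Prismic's CDN. A minimal sketch, assuming Buran is running locally on port 3000 (as in the examples further down):

```shell
# Same two-step flow as before, pointed at a local Buran instance
# (the base URL is an assumption; substitute your own deployment)
BASE='http://localhost:3000'

# 1. Look for the master reference, now answered from Buran's cache
ref=$(curl -s "$BASE/api/v2" | jq -r '.refs[] | select(.isMasterRef == true) | .ref')

# 2. Query documents based on that reference
curl -sg "$BASE/api/v2/documents/search?ref=$ref&q=[[at(document.type, \"home_page\")]]"
```

Only the base URL changes: Buran mirrors the Prismic API paths, so existing client code keeps working.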
Use the following environment variables to configure the server:
- `PORT` (default: `3000`) - Port on which the application will listen
- `BACKEND_URL` (required, ex.: `http://your-repo.cdn.prismic.io`) - URL of your Prismic API backend
- `REDIS_URL` (default: `redis://localhost`) - Redis connection URL, if you choose to use Redis as a cache
- `CACHE_PROVIDER` (default: `memory`, values: `redis`, `redis-cluster`, `memory`) - Which cache provider implementation to use
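For example, the server could be started with Redis as the cache provider like this (a sketch: the `./buran` binary name and the Redis address are assumptions, not prescribed by the project):

```shell
# Hypothetical invocation: run Buran with a Redis cache
# (binary name and Redis URL are assumptions for illustration)
PORT=3000 \
BACKEND_URL='http://your-repo.cdn.prismic.io' \
CACHE_PROVIDER=redis \
REDIS_URL='redis://localhost:6379' \
./buran
```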
This proxy was built with Kubernetes in mind, so check out the Kubernetes example to see how to deploy it.
The Docker image used in the Kubernetes example is available for you to use in any way you choose:
```shell
$ docker run --name prismic \
    --env BACKEND_URL='http://<your-repo>.cdn.prismic.io' \
    -p 3000:3000 escaletech/buran
```
To build from source, you will need:

- Go 1.13
- GNU Make
After cloning the project, run:
```shell
$ BACKEND_URL='http://<your-repo>.cdn.prismic.io' make
```
And you should see:
```
INFO[0000] listening on port 3000
```
Then you can make a request to the local server and see the Prismic API response:
```shell
$ curl localhost:3000/api/v2
```
If a request is served from cache, you should see the header `X-From-Cache: 1`.
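One way to check for the header is to print only the response headers; this assumes the local server from the previous step is still running:

```shell
# Print only the response headers; repeat the request and look for
# X-From-Cache: 1 on the second call (requires the server to be running)
URL='localhost:3000/api/v2'
curl -s -D - -o /dev/null "$URL" | grep -i 'x-from-cache' \
  || echo 'no X-From-Cache header (cache miss, or server not running)'
```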
Contributions are welcome, so feel free to open an issue or submit a pull request.