A data API for data management systems, including CKAN.
- GraphQL endpoint
- Bulk export of data to JSON/CSV/XLSX files
- `datastore_search` endpoint (similar to the CKAN DataStore extension)
The current version is `v1` (`APP_VERSION = 'v1'`). The available endpoints are:
- `/{APP_VERSION}/graphql`
- `/{APP_VERSION}/download`
- `/{APP_VERSION}/datastore_search`
- `/{APP_VERSION}/datastore_search/help`
The GraphQL endpoint exposes the Hasura GraphQL API.
For GraphQL documentation, please refer to the Hasura documentation.
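For illustration, a query can be sent to the versioned endpoint with a plain HTTP POST. In the sketch below the host/port, the table name `test_table`, and its fields are assumptions, not part of this repo; depending on your Hasura configuration you may also need to pass an `X-Hasura-Admin-Secret` header.

```bash
# Minimal sketch: query a hypothetical "test_table" through the GraphQL endpoint.
# Host, port, table and field names are placeholders -- adjust to your deployment.
curl -X POST http://localhost:3000/v1/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ test_table(limit: 5) { field1 field2 } }"}'
```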
- resource_id (string) – MANDATORY – id or alias of the resource to be searched against
- q (string or dictionary) – JSON-format query restrictions, e.g. {"key1": "a", "key2": "b"}; the search is applied to each specified field (optional)
- distinct_on (bool or list of field names) – if true, return only distinct rows; if a list of field names, return only rows distinct on those fields (optional)
- limit (int) – maximum number of rows to return (optional, default: 100)
- offset (int) – offset this number of rows (optional)
- fields (list of strings) – fields to return (optional, default: all fields)
- sort (string) – comma-separated field names with ordering, e.g. "fieldname1, fieldname2 desc" (not implemented yet)
- filters (dictionary) – matching conditions to select, e.g. {"key1": "a", "key2": "b"} (optional; not implemented – similar to q)
The result is a JSON document containing:
- schema (JSON) – The data schema
- data (JSON) – matching results in JSON format
- fields (list of dictionaries) – fields/columns and their extra metadata (not implemented)
- offset (int) – query offset value (not implemented)
- total (int) – number of total matching records (not implemented)
With a test table having the following schema:
We can make different queries:
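The example schema and queries are not reproduced here. As an illustrative sketch, assuming a hypothetical resource aliased `test_table` with fields `field1` and `field2`, and assuming the parameters are passed CKAN-style as query parameters, requests could look like:

```bash
# Hypothetical example: fetch up to 5 rows from a resource aliased "test_table",
# returning only two assumed fields. Resource id, fields, host and port are placeholders.
curl 'http://localhost:3000/v1/datastore_search?resource_id=test_table&limit=5&fields=field1,field2'

# The help endpoint documents the accepted parameters:
curl 'http://localhost:3000/v1/datastore_search/help'
```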
- Copy env variables: `cp .env.example .env`
- Set up background services: this repo contains a mock environment you can launch with `bash run-mock-environment.sh`, or, if connecting to an existing one, edit the URLs in the `.env` file
- Install dependencies: `yarn`
- Run `yarn start` to launch the server and `yarn test` to run the tests
- Set up the automatic code formatter: install and use Prettier; if using VS Code, install the prettier-vscode extension
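Putting the steps together, a typical first run (using the provided mock environment) looks like this:

```bash
cp .env.example .env            # copy env variables
bash run-mock-environment.sh    # launch the mock background services
yarn                            # install dependencies
yarn start                      # launch the server
yarn test                       # run the tests (e.g. in another shell)
```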
Please don't forget to add new variables to `.env.example`; they will also be used in CI tests.
Tests and formatting checks run automatically on every push and pull request to master. They run on Docker Hub; see the documentation here: https://docs.docker.com/docker-hub/builds/automated-testing/
To simulate the tests as they run on Docker Hub, you can run `bash run-tests-in-docker.sh`.
If a pull request has failed checks, GitHub shows an error message. The link to Docker Hub does not work, though; you will need to navigate there yourself:
- The Docker repository is here: https://hub.docker.com/repository/docker/datopian/data-api
- To see build jobs, go to https://hub.docker.com/repository/docker/datopian/data-api/builds and find your build/test
After every push to master (or a pull request to it) with successful tests, a new Docker image is built here: https://hub.docker.com/repository/docker/datopian/data-api/builds
New images can be built with the provided Dockerfile or fetched from Docker Hub.
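For example (the local image name and tag are illustrative):

```bash
# Build a local image from the provided Dockerfile...
docker build -t data-api .

# ...or pull the published image from Docker Hub
docker pull datopian/data-api
```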
The service can be deployed as a usual Docker container. The environment for this microservice should contain:
- a PostgreSQL database
- Hasura
- environment variables (see `.env.example`)
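A minimal run could look like the sketch below; the exposed port is an assumption and depends on your configuration, and the PostgreSQL and Hasura services must be reachable via the URLs set in `.env`.

```bash
# Run the container with the environment variables from .env
# (port 3000 is an assumed default -- adjust to your setup)
docker run --env-file .env -p 3000:3000 datopian/data-api
```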
This project is licensed under the MIT License - see the LICENSE file for details