The apitest tool helps you to build automated API tests that can be run after every build to ensure constant product quality.
A single testcase is also a perfect definition of an occurring problem and helps the developers to fix your issues faster!
To configure the apitest tool, add the following section to your 'apitest.yml' configuration file.
The report parameters of this config can be overwritten via a command line flag, so you should set your intended standard values here.
apitest:
  server: "http://5.simon.pf-berlin.de/api/v1" # The base URL of the API you want to fire the apitests against. Important: don't add a trailing '/'
  log:
    short: true # Configures minimal logs by default for all tests
  report: # Configures the machine report, for usage with Jenkins or any other CI tool
    file: "apitest_report.xml" # Filename of the report file. The file gets saved in the same directory as the apitest binary
    format: "json.junit" # Format of the report. (Supported formats: json, junit or stats)
  store: # Initial values for the datastore, parsed as map[string]interface{}
    email.server: smtp.google.com
  oauth2_client: # Map of client configs for oAuth clients
    my_client: # oAuth client ID
      endpoint: # Endpoints on the oAuth server
        auth_url: "http://auth.myserver.de/oauth/auth"
        token_url: "http://auth.myserver.de/oauth/token"
      secret: "foobar" # oAuth client secret
      redirect_url: "http://myfancyapp.de/auth/receive-fancy-token" # Redirect URL, usually on client side
The YAML config is optional. All config values can be overwritten/set by command line parameters: see Overwrite config parameters
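For example, the report settings from the config file could be overridden on the command line like this (the flags are described in the sections below; the values are illustrative):
./apitest --config subfolder/apitest.yml --server "http://localhost:8080/api/v1" --report-file other_report.xml --report-format junit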
You start the apitest tool with the following command
./apitest
This starts the command with the following default settings:
- Runs all tests that are in the current directory, or in any of its subdirectories
- Logs to console
- Writes the machine log to the file given in the apitest.yml
- Logs only the request & responses if a test fails
--directory testDirectory or -d testDirectory
: Defines which directory should be used for running the tests in it. The tool walks recursively through all subdirectories and runs all tests that have a "manifest.json" file, in alphabetical order of the folder names (depth-first search). See the example layout after this list.
--single path/to/a/single/manifest.json or -s path/to/a/single/manifest.json
: Run only a single test. The path needs to point directly to the manifest file (not the directory containing it).
--stop-on-fail
: Stop execution of later test suites if a test suite fails.
--keep-running
: Wait for a keyboard interrupt after each test suite invocation. This can be useful for keeping the HTTP / SMTP server running for manual inspection.
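As an illustration, a test directory passed via --directory could be laid out as follows (folder names are hypothetical); every folder containing a manifest.json is run, in alphabetical order:
apitests/
├── 001_setup/
│   └── manifest.json
├── 002_search/
│   └── manifest.json
└── 003_cleanup/
    └── manifest.json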
By default, the request and response of a request will only be logged on test failure. If you want to see more information, you can configure the tool with additional log flags:
--log-network
: Log all network traffic
--log-datastore
: Log datastore operations into the datastore
--log-verbose
: --log-network, --log-datastore and some additional trace information
--log-short
: Show minimal logs, useful for CI chains
--log-timestamp or -t
: Log the timestamp of the log message into the console
--curl-bash
: Log the request as a curl command
-l
: Limit the lines of request log output. Configure the limit in apitest.yml
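For example, the log flags above can be combined in one run (illustrative combination):
./apitest -d apitests --log-network --curl-bash -t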
You can also set the log verbosity per single testcase. The greater verbosity wins.
--log-console-enable false
: If you want to see a log in the console, this parameter needs to be "true" (which is also the default)
--log-console-level debug
: Sets the log level which controls what kind of output should be displayed in the console
  - --log-console-level info (default): Shows only critical information
  - --log-console-level warn: Shows more verbose log output
  - --log-console-level debug: Shows all possible log output
--log-sqlite-enable false
: If you want to save the log into a sqlite database, this parameter needs to be "true"
--log-sqlite-file newLog.db
: Defines the filename in which the sqlite log should be saved
--log-sqlite-level debug
: Sets the log level which controls what kind of output should be saved into the sqlite database
  - --log-sqlite-level info (default): Saves only critical information
  - --log-sqlite-level warn: Saves more verbose log output
  - --log-sqlite-level debug: Saves all possible log output
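For example, console and sqlite logging can be combined like this (illustrative values):
./apitest --log-console-level warn --log-sqlite-enable true --log-sqlite-file newLog.db --log-sqlite-level debug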
--config subfolder/newConfigFile or -c subfolder/newConfigFile
: Overwrites the path of the config file (default "./apitest.yml") with "subfolder/newConfigFile"
--server URL
: Overwrites the base url to the api
--report-file newReportFile
: Overwrites the report file name from the apitest.yml config with "newReportFile"
--report-format junit
: Overwrites the report format from the apitest.yml config with "junit"
--replace-host [host][:port]
: Overwrites the built-in server host in the template function "replace_host"
--report-format-stats-group 3
: Sets the number of groups for manifest distribution when using the report format stats
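For example, to write a stats report split into 3 groups (illustrative):
./apitest -d apitests --report-format stats --report-format-stats-group 3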
- Run all tests in the directory apitests, display all server communication and save the machine report as junit for later parsing with Jenkins:
./apitest --directory apitests --verbosity 2 --report-format junit
- Only run a single test apitests/test1/manifest.json with no console output and save the machine report to the standard file defined in the apitest.yml:
./apitest --single apitests/test1/manifest.json --log-console-enable false
- Run all tests in the directory apitests with http server host replacement for those templates using the replace_host template function:
./apitest -d apitests --replace-host my.fancy.host:8989
The manifest is loaded as a Go template, so you can use variables, range, if and other constructs.
{
// General info about the test suite. Try to explain your problem in depth here, so that someone who works on the test years from now knows what is happening
"description": "search api tests for filename",
// Testname. Should be the ticket number if the test is based on a ticket
"name": "ticket_48565",
// init store
"store": {
"custom": "data"
},
// Test suites you want to run upfront (e.g. a setup). Paths are relative to the current test manifest
"require": [
"setup_manifests/purge.yaml",
"setup_manifests/config.yaml",
"setup_manifests/upload_datamodel.yaml"
],
// Array of single testcases. Add as many as you want. They get executed in chronological order
"tests": [
// [SINGLE TESTCASE]: See below for more information
// [SINGLE TESTCASE]: See below for more information
// [SINGLE TESTCASE]: See below for more information
// We also support the external loading of a complete test:
"@pathToTest.json",
// By prefixing it with a number, the test tool runs that many instances of
// the included test file in parallel to each other.
//
// Only tests directly included by the manifest are allowed to run in parallel.
"5@pathToTestsThatShouldRunInParallel.json"
]
}
{
// Define if the test suite should continue even if this test fails. (default: false)
"continue_on_failure": true,
// Name to identify this single test. Is important for the log. Try to give an explanatory name
"name": "Testname",
// Store custom values to the datastore
"store": {
"key1": "value1",
"key2": "value2"
},
// Optional temporary HTTP Server (see below)
"http_server": {
"addr": ":1234",
"dir": ".",
"testmode": false
},
// Optional temporary SMTP Server (see below)
"smtp_server": {
"addr": ":9025",
"max_message_size": 1000000,
},
// Specify a unique log behavior only for this single test.
"log_network": true,
"log_verbose": false,
// Show or disable minimal logs for this test
"log_short": false,
// Defines what gets sent to the server
"request": {
// What endpoint we want to target. You find all possible endpoints in the api documentation
"endpoint": "suggest",
// the server url to connect can be set directly for a request, overwriting the configured server url
"server_url": "",
// How the endpoint should be accessed. The api documentation tells you which methods are possible for an endpoint. All HTTP methods are possible.
"method": "GET",
// If set to true, don't follow redirects.
"no_redirect": false,
// Parameters that will be added to the url, e.g. http://5.testing.pf-berlin.de/api/v1/session?token=testtoken&number=2 would be defined as follows
"query_params": {
"number": 2,
"token": "testtoken"
},
// With query_params_from_store you set a query parameter to the value of a datastore field
"query_params_from_store": {
"format": "formatFromDatastore",
// If the datastore key starts with a ?, we do not throw an error if the key could not be found, but just
// do not set the query param. If the key "a" is not found in the datastore, the query parameter test will not be set
"test": "?a"
},
// Additional headers that should be added to the request
"header": {
"header1": "value",
"header2": "value"
},
// Cookies can be added to the request
"cookies": {
// name of a cookie to be set
"cookie1": {
// A cookie can be parsed from the store if it was saved before
// It will ignore the cookie if it is not set
"value_from_store": "sess_cookie",
// Or its value can be set directly, overriding the one from the store, if defined
"value": "value"
},
"cookie2": {
"value_from_store": "ads_cookie",
}
},
// The special header `X-Test-Set-Cookie` can be populated in the request (one entry per cookie)
// It is used in the built-in `http_server` to automatically set those cookies on the response
// So it is useful for mocking them for further testing
"header-x-test-set-cookie": [
{
"name": "sess",
"value": "myauthtoken"
},
{
"name": "jwtoken",
"value": "tokenized",
"path": "/auth",
"domain": "mydomain",
"expires": "2021-11-10T10:00:00Z",
"max_age": 86400,
"secure": false,
"http_only": true,
"same_site": 1
}
],
// With header_from_store you set a header to the value of a datastore field
// In this example we set the "Content-Type" header to the value "application/json"
// As "application/json" is stored as string in the datastore on index "contentType"
"header_from_store": {
"Content-Type": "contentType",
// If the datastore key starts with a ?, we do not throw an error if the key could not be found, but just
// do not set the header. If the key "a" is not found in the datastore, the header Range will not be set
"Range": "?a"
},
// All the content you want to send in the http body. Is a JSON Object
"body": {
"flower": "rose",
"animal": "dog"
},
// If the body should be marshaled in a special way, you can define this here. Not a required attribute. The default is to marshal the body as json. Possible values: [multipart, urlencoded, file]
"body_type": "urlencoded",
// If body_type is file, "body_file" points to the file to be sent as binary body
"body_file": "<path|url>"
},
// Define what the response should look like. The test tool checks against this response
"response": {
// Expected http status code. See the api documentation for the right ones
"statuscode": 200,
// If you expect certain response headers, you can define them here. A single key can have multiple headers (as defined in rfc2616)
"header": {
"key1": [
"val1",
"val2",
"val3"
],
"x-easydb-token": [
"csdklmwerf8ßwji02kopwfjko2"
]
},
// Cookies will be under this key, in a map name => cookie
"cookie": {
"sess": {
"name": "sess",
"value": "myauthtoken"
},
"jwtoken": {
"name": "jwtoken",
"value": "tokenized",
"path": "/auth",
"domain": "mydomain",
"expires": "2021-11-10T10:00:00Z",
"max_age": 86400,
"secure": false,
"http_only": true,
"same_site": 1
}
},
// Optionally, the expected format of the response can be specified so that it can be converted into json and checked
"format": {
"type": "csv",
"csv": {
"comma": ";"
}
},
// The body we want to assert on
"body": {
"objecttypes": [
"pictures"
]
}
},
// Store parts of the response into the datastore
"store_response_qjson": {
"eas_id": "body.0.eas._id",
// Cookies are stored in `cookie` map
"sess_cookie": "cookie.sess"
},
// wait_before_ms pauses right before sending the test request <n> milliseconds
"wait_before_ms": 0,
// wait_after_ms pauses right after sending the test request <n> milliseconds
"wait_after_ms": 0,
// Delay the request by x msec
"delay_ms": 5000,
// With poll we can make the testing tool redo the request to wait for certain events (only timeout_ms is required)
// timeout_ms: If this timeout is reached, no new redo will be started
// -1: No timeout - run endlessly
// break_response: [Array] [Logical OR] If one of these responses occurs, the tool fails the test and tells you it found a break response
// collect_response: [Array] [Logical AND] If this is set, the tool will check if all responses occur in the response (even in different poll runs)
"timeout_ms": 5000,
"break_response": [
"@break_response.json"
],
"collect_response": [
"@continue_response_pending.json",
"@continue_response_processing.json"
],
// If set to true, the test case will consider its failure as a success, and the other way around
"reverse_test_result": false
}
Go template delimiters can be redefined as part of a single line comment in any of these syntaxes:
// template-delims: <delim_left> <delim_right>
/* template-delims: <delim_left> <delim_right> */
Examples:
// template-delims: /* */
/* template-delims: // // */
// template-delims: {{ }}
/* template-delims: {* *} */
**All external tests/requests/responses inherit those delimiters if not overridden in their template.**
Go templates may break the proper JSONC format, even when the delimiters are comments. So placeholders can be used to fill the missing parts, which are then stripped.
// template-remove-tokens: <token> [<token>]*
/* template-remove-tokens: <token> [<token>] */
Example:
// template-delims: /* */
// template-remove-tokens: "delete_me"
{
"prop": /* datastore "something" */"delete_me"
}
This is proper JSONC, thanks to the "delete_me" string.
However, that string will be stripped before parsing the template, so the template actually used is just:
{
"prop": /* datastore "something" */
}
**Unlike delimiters, external tests/requests/responses don't inherit those removals; they need to be specified per file.**
The tool is able to run tests in parallel to themselves. You activate this mechanism by including an external test file with N@pathtofile.json, where N is the number of parallel "clones" you want to have of the included tests.
The included tests themselves are still run serially; only the entire set of tests will run in parallel for the specified number of replications.
This is useful e.g. for stress-testing an API.
Only tests directly included by a manifest are allowed to run in parallel.
Using "0@file.json" will not run that specific test.
{
"name": "Binary Comparison",
"request": {
"endpoint": "suggest",
"method": "GET"
},
// Path to binary file with N@
"response": "123@simple.bin"
}
The tool is able to do a comparison with a binary file. Here we take an MD5 hash of the file and then later compare that hash.
For comparing a binary file, simply point the response to the binary file:
{
"name": "Binary Comparison",
"request": {
"endpoint": "suggest",
"method": "GET"
},
// Path to binary file with @
"response": {
"format": {
"type": "binary"
},
"body": {
"md5sum": {{ md5sum "@simple.bin" || marshal }}
}
}
}
The format must be specified as
"type": "binary"
If the response format is specified as "type": "xml"
or "type": "xml2"
, we internally marshal that XML into json using github.com/clbanning/mxj.
The format "xml"
uses NewMapXmlSeq()
, whereas the format "xml2"
uses NewMapXml()
, which provides a simpler json format.
See also template file_xml2json
.
On that json you can work as you are used to with the json syntax. To see how the converted json looks, you can use the --log-verbose command line flag.
If the response format is specified as "type": "html"
, we internally marshal that HTML into json using github.com/PuerkitoBio/goquery.
This marshalling is less strict than for XHTML. For example, it will not raise errors for unclosed tags like <p> or <hr>, or for Javascript code inside the HTML code. But it is possible that unclosed tags are missing in the resulting JSON if the tokenizer can not find a matching closing tag.
See also template file_html2json
.
If the response format is specified as "type": "xhtml"
, we internally marshal that XHTML into json using github.com/clbanning/mxj.
The XHTML code in the response must comply with the XHTML standard, which means it must be parsable as XML.
See also template file_xhtml2json
.
If the response format is specified as "type": "csv"
, we internally marshal that CSV into json.
You can also specify the delimiter (comma
) for the CSV format (default: ,
):
{
"name": "CSV comparison",
"request": {
"endpoint": "export/1/files/file.csv",
"method": "GET"
},
"response": {
"format": {
"type": "csv",
"csv": {
"comma": ";"
}
},
"body": {
}
}
}
Responses in arbitrary formats can be preprocessed by calling any command line tool that can produce JSON, XML, CSV or binary output. In combination with the type
parameter in format
, non-JSON output can be formatted after preprocessing. If the result is already in JSON format, it can be checked directly.
The response body is piped to the stdin
of the tool and the result is read from stdout
. The result of the command is then used as the actual response and is checked.
To define a preprocessing for a response, add a format
object that defines the pre_process
to the response definition:
{
"response": {
"format": {
"pre_process": {
"cmd": {
"name": "...",
"args": [ ],
"output": "stdout"
}
}
}
}
}
format.pre_process.cmd.name
: (string, mandatory) name of the command line tool
format.pre_process.cmd.args
: (string array, optional) list of command line parameters
format.pre_process.cmd.output
: (string, optional) which command output to use as the result response; it can be one of exitcode, stderr or stdout (default)
This basic example shows how to use the pre_process
feature. The response is piped through cat
which returns the input without any changes. This command takes no arguments.
{
"response": {
"format": {
"pre_process": {
"cmd": {
"name": "cat"
}
}
}
}
}
This example shows how to use the pre_process
feature with stderr
output. The response is the metric result of running imagemagick compare
which returns the absolute error between 2 images given a threshold (0 if identical, number of different pixels otherwise). The arguments are the piped binary from the response and the image to compare against (a local file using the file_path template function).
{
"response": {
"format": {
"pre_process": {
"cmd": {
"name": "compare",
"args": [
"-metric",
"AE",
"-fuzz",
"2%",
"-",
{{ file_path "other/file.jpg" | marshal }},
"/dev/null"
],
"output": "stderr"
}
}
},
"body": 0
}
}
format.pre_process:
- Command: compare -metric AE -fuzz 2% - /path/to/other/file.jpg /dev/null
- Parameters:
  - -metric AE: metric to use for image comparison
  - -fuzz 2%: threshold for allowed pixel color difference
  - -: read first image from stdin instead of loading a saved file
  - /path/to/other/file.jpg: read second image from local path (result from the template function above)
  - /dev/null: discard stdout (it contains a binary we don't want, we use the stderr output)
To check the file metadata of a file that is directly downloaded as a binary file using the eas/download
API, use exiftool
to read the file and output the metadata in JSON format.
If there is a file with the asset ID 1
, and the apitest needs to check that the MIME type is image/jpeg
, create the following test case:
{
"request": {
"endpoint": "eas/download/1/original",
"method": "GET"
},
"response": {
"format": {
"pre_process": {
"cmd": {
"name": "exiftool",
"args": [
"-j",
"-g",
"-"
]
}
}
},
"body": [
{
"File": {
"MIMEType": "image/jpeg"
}
}
]
}
}
format.pre_process:
- Command: exiftool -j -g -
- Parameters:
  - -j: output in JSON format
  - -g: group output by tag class
  - -: read from stdin instead of loading a saved file
This example shows the combination of pre_process
and type
. Instead of calling exiftool
with JSON output, it can also be used with XML output, which then will be formatted to JSON by the apitest tool.
{
"request": {
"endpoint": "eas/download/1/original",
"method": "GET"
},
"response": {
"format": {
"pre_process": {
"cmd": {
"name": "exiftool",
"args": [
"-X",
"-"
]
}
},
"type": "xml"
},
"body": [
{
"File": {
"MIMEType": "image/jpeg"
}
}
]
}
}
format.pre_process:
- Command: exiftool -X -
- Parameters:
  - -X: output in XML format
  - -: read from stdin instead of loading a saved file
format.type:
- xml: convert the output of exiftool, which is expected to be in XML format, into JSON
If there is any error during the call of the command line tool, the error is formatted as a JSON object and returned instead of the expected response:
{
"command": "cat --INVALID",
"error": "exit status 1",
"exit_code": 1,
"stderr": "cat: unrecognized option '--INVALID'\nTry 'cat --help' for more information.\n"
}
command
: the command that was executed (consisting of cmd.name and cmd.args)
error
: error message (message of the internal exec.ExitError)
exit_code
: integer value of the exit code
stderr
: additional error information from stderr of the command line tool
If such an error is expected as a result, this formatted error message can be checked as the response.
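A minimal sketch of such a check, reusing the cat --INVALID example above. Since the exact stderr text may differ between systems, this sketch only asserts on the stable fields (extra fields in the actual response are allowed by default):
{
    "response": {
        "format": {
            "pre_process": {
                "cmd": {
                    "name": "cat",
                    "args": ["--INVALID"]
                }
            }
        },
        "body": {
            "command": "cat --INVALID",
            "error": "exit status 1",
            "exit_code": 1
        }
    }
}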
The datastore is a storage for arbitrary data. It can be set directly or set using values received from a response. It has two parts:
- Custom storage with custom key
- Sequential response store per test suite (one manifest)
The custom storage is persistent throughout the apitest run, across all requirements, all manifests and all tests. The sequential storage is cleared at the start of each manifest.
The custom store uses a string as index and can store any type of data.
Array: If a key ends in [], the value is assumed to be an Array and is appended. If no Array exists, an array is created.
Map: If a key ends in [key], the value is assumed to be a Map and the data is written into the map at that key. If no map exists, a map is created.
{
"store": {
"eas_ids[]": 15,
"mapStorage[keyIWantToStore]": "value"
}
}
This example would create an Array in index eas_ids and append 15 to it.
Arrays are useful using the Go-Template range function.
To set data in the custom store, you can use 4 methods:
- Use store on the manifest.json top level; the data is set before the session authentication (if any)
- Use store_response_qjson in authentication.store_response_qjson
- Use store on the test level; the data is set before request and response are evaluated
- Use store_response_qjson on the test level; the data is set after each response. (If you want the datastore to delete the current entry if no new one could be found with qjson, just prepend the qjson key with a !. E.g. "eventId": "!body.0._id" will delete the eventId entry from the datastore if body.0._id could not be found in the response json)
All methods use a Map as value; the keys of the map are strings, the values can be anything. If the key (or index) ends in [], an Array is created if the key does not yet exist, or the value is appended to the Array if it does exist.
The method store_response_qjson
takes only string as value. This qjson-string is used to parse the current response using the qjson feature. The return value from the qjson call is then stored in the datastore.
The data from the custom store is retrieved using the datastore <key> template function. The key must have been set by one of the store methods before it is requested. If the key is unset, the datastore function returns an empty string. Use the special key - to return the entire datastore.
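For example, a later request can read a previously stored value (a minimal sketch; it assumes eas_id was stored by an earlier test via store_response_qjson as shown above, and the endpoint is hypothetical):
{
    "request": {
        "endpoint": "objects/{{ datastore "eas_id" }}",
        "method": "GET"
    }
}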
Slices allow backwards index access. If you have a slice of length 3 and access it at index -1, you get the last element in the slice (original index 2).
If you access an invalid index for datastore map[index]
or slice[]
you get an empty string. No error is thrown.
To get the data from the sequential store, an integer number has to be given to the datastore function as a string. So datastore "0" would be a valid request. This would return the response from the first test of the current manifest. datastore "-1" returns the last response from the current manifest. datastore "-2" returns the second to last response from the current manifest. If the index is wrong, the function returns an error.
The sequential store stores the body and header of all responses. Use qjson
to access values in the responses. See template functions datastore
and qjson
.
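For example, a value of the previous response can be pulled out with qjson and put back into the custom store (a minimal sketch; the qjson path is hypothetical):
{
    "store": {
        "previous_id": {{ datastore "-1" | qjson "body.0._id" }}
    }
}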
When using relative indices (negative indices), use the same index to get values from the datastore to use in the request and response definition. Especially, for evaluating the current response, it has not yet been stored. So, datastore "-1"
will still return the last response in the datastore. The current response will be appended after it was evaluated, and then will be returned with datastore "-1"
.
We support certain control structures in the response definition. You can use these control structures whenever you are able to set keys in the json (so you have to be inside an object). Some of them also need a value and some don't. For those which don't need a value, you can just set up the control structure without the corresponding real key. When you give a value, the tool always tries to deep check if that value is correct and present in the actual response. So be aware of this behavior, as it could interfere with your intended test behavior.
In the example we use the json object test and define some control structures on it. A control structure uses the key it is attached to, plus :control. So in our case it would be test:control. The tool understands that the two keys test and test:control are in relationship with each other.
{
"test": {
"hallo": 2,
"hello": 3
},
"test:control": {
"is_object": true,
"no_extra": true
}
}
There are several controls available. The first two, no_extra and order_matters, always need their corresponding real key and value to function as intended. The others can be used without a real key.
The default behavior for all controls is false. So you only have to set them if you want to explicitly use them as true.
This control defines an exact match; if it is set, no fields other than those defined in the testcase are allowed in the response.
no_extra is available for objects and arrays.
The following response would fail as there are too many fields in the actual response:
{
"body": {
"testObject": {
"a": "z",
"b": "y"
},
"testObject:control": {
"no_extra": true
}
}
}
{
"body": {
"testObject": {
"a": "z",
"b": "y",
"c": "to much, so we fail"
}
}
}
This control defines that the order in an array should be checked.
order_matters is available only for arrays.
E.g. the following response would fail as the order in the actual response is wrong:
{
"body": {
"testArray": [
"a",
"b",
"c"
],
"testArray:control": {
"order_matters": true
}
}
}
{
"body": {
"testArray": [
"c",
"b",
"a"
]
}
}
This setting defines the depth that the no_extra
and order_matters
should consider when matching arrays.
depth
is available only for arrays.
The possible values of depth
are:
-1 -> full depth
0 -> top element only (default)
N -> N elements deep
The following response would fail as there are too many entries in the inner array of the actual response.
{
"body": {
"testArray": [
[1, 3, 5],
[2, 4, 6]
],
"testObject:control": {
"no_extra": true,
"depth": 1
}
}
}
{
"body": {
"testArray": [
[1, 3, 5],
[2, 4, 6, 8]
]
}
}
Check if a certain value does exist in the response (no matter what its content is).
must_exist
is available for all types.
This control can be used without a "real" key. So only the :control
key is present.
E.g. the following response would fail as "iShouldExists"
is not in the actual response
{
"body": {
"iShouldExists:control": {
"must_exist": true
}
}
}
Check if the size of an array equals the element_count
element_count
is available only for arrays
This control can be used without a "real" key. So only the :control
key is present.
E.g. the following response would fail as "count"
has the wrong length
{
"body": {
"count:control": {
"element_count": 2
}
}
}
{
"body": {
"count": [
1,
2,
3
]
}
}
Passes no_extra down to the underlying structures in an array.
element_no_extra
is available only for arrays
This control can be used without a "real" key. So only the :control
key is present.
E.g. the following response would fail as "extra"
is an extra element
{
"body": {
"count": [
{
"fine": true,
}
],
"count:control": {
"element_no_extra": true
}
}
}
{
"body": {
"count": [
{
"fine": true,
"extra": "shouldNotBeHere"
}
]
}
}
Check if a certain value does not exist in the response.
must_not_exist
is available for all types.
This control can be used without a "real" key. So only the :control
key is present.
E.g. the following response would fail as "iShouldNotExists"
is in the actual response
{
"body": {
"iShouldNotExists:control": {
"must_not_exist": true
}
}
}
{
"body": {
"iShouldNotExists": "i exist, hahahah"
}
}
Check if a string value matches a given regular expression
{
"body": {
"text:control": {
"match": ".+-\\d+"
}
}
}
{
"body": {
"text": "valid_string-123"
}
}
Check if a string value starts with a given string prefix
{
"body": {
"text:control": {
"starts_with": "abc-"
}
}
}
{
"body": {
"text": "abc-123"
}
}
Check if a string value ends with a given string suffix
{
"body": {
"text:control": {
"ends_with": "-123"
}
}
}
{
"body": {
"text": "abc-123"
}
}
With is_string
, is_bool
, is_object
, is_array
and is_number
you can check if your field has a certain type
The type checkers are available for all types. They implicitly also check must_exist for the value, as there is no sense in type checking a value that does not exist.
This control can be used without a "real" key. So only the :control
key is present.
E.g. the following response would fail as "testNumber" is not a number in the actual response
{
"body": {
"testNumber:control": {
"is_number": true
}
}
}
{
"body": {
"testNumber": false
}
}
With number_gt
(greater than >
), number_ge
(greater equal >=
), number_lt
(less than <
), number_le
(less equal <=
) you can check if your field of type number (implicit check) is in a certain number range
This control can be used without a "real" key. So only the :control
key is present.
E.g. the following response would fail as "beGreater"
is smaller than expected
{
"body": {
"beGreater:control": {
"number_gt": 5
}
}
}
{
"body": {
"beGreater": 4
}
}
In the request and response part of the single testcase you can also load the content from an external file. This is especially helpful for keeping the manifest file simpler/smaller and keeping a better overview. On top of that, you can use so-called template functions in the external file. (We will dig deeper into the template functions later.)
A single test could look as simple as following:
{
"name": "Test loading request & response from external file",
"request": "@path/to/requestFile.json",
"response": "@path/to/responseFile.json"
}
Important: The paths to the external files start with a '@' and are relative to the location of the manifest.json, or can be web urls, e.g. https://programmfabrik.de/testfile.json.
The content of the request and response file is exactly the same as if you would place the json code inline:
{
"body": {
"animal": "dog",
"flower": "rose"
},
"body_type": "urlencoded",
"endpoint": "suggest",
"header": {
"header1": "value",
"header2": "value"
},
"method": "GET",
"query_params": {
"number": 2,
"token": "testtoken"
}
}
{
"body": {
"objecttypes": [
"pictures"
],
"query": ">>>[0-9]*<<<"
},
"header": {
"key1": [
"val1",
"val2",
"val3"
],
"x-easydb-token": [
"csdklmwerf8\u00dfwji02kopwfjko2"
]
},
"statuscode": 200
}
apitest supports the Sprig template function library in v3. Internally provided functions like add overwrite the Sprig function of the same name.
As described before, if you use an external file you can make use of so called template functions. What they are and how they work for the apitesting tool is described in the following part.
Template functions are invoked using the tags {{ }} and, upon returning, the function call is substituted with its result. We use the golang "text/template" package, so all functions provided there are also supported here.
For a reference see [https://golang.org/pkg/text/template/]
manifest.json->external file: load external file
external file->another file: render template with file parameter "hello"
another file->external file: return rendered template "hello world"
external file->manifest.json: return rendered template
Assume that the template function myfunc
, given the arguments 1 "foo"
, returns
"bar"
. The call {{ myfunc 1 "foo" }}
would translate to bar
.
Consequently, rendering Lets meet at the {{ myfunc 1 "foo" }}
results in an invitation to the bar
.
We provide the following functions:
Helper function to load the contents of a file; if this file contains templates, it will render these templates with the parameters provided in the invocation; they can be accessed from the loaded file via {{ .Param1-n }}; see example below.
Loads the file with the path relative to the file this template function is invoked in ("relative/path"), or a web url, e.g. https://docs.google.com/test/tmpl.txt. Returns a string.
Content of file at some/path/example.tmpl
:
{{ load_file "../target.tmpl" "hello" }}
Content of file at some/target.tmpl
:
{{ .Param1 }} world
Rendering example.tmpl
will result in hello world
Returns the relative path ( to the file this template function is invoked in ) "relative/path" or a weburl e.g. https://docs.google.com/test/tmpl.txt
Absolute path of file at some/path/myfile.cpp
:
{{ file_path "../myfile.tmpl" }}
Read a CSV map and turn rows into columns and columns into rows.
Assume you have the following structure in your sheet:
| key    | type   | 1      | 2      |
|--------|--------|--------|--------|
| string | string | string | string |
| name   | string | bicyle | car    |
| wheels | int64  | 2      | 4      |
As a convention the data columns need to be named 1
, 2
, ... Allowed types are:
- string
- int64
- number (JSON type number)
- float64
Calling pivot_rows("key","type",(file_csv "file.csv" ','))
returns
[
{
"filename": "bicyle",
"wheels": 2
},
{
"filename": "car",
"wheels": 4
}
]
## `rows_to_map "keyColumn" "valueColumn" [input]`
Generates a key-value map from your input rows.
Assume you have the following structure in your sheet:
| column_a | column_b | column_c |
| -------- | -------- | -------- |
| row1a | row1b | row1c |
| row2a | row2b | row2c |
If you parse this now to CSV and then load it via `file_csv` you get the following JSON structure:
```json
[
{
"column_a": "row1a",
"column_b": "row1b",
"column_c": "row1c"
},
{
"column_a": "row2a",
"column_b": "row2b",
"column_c": 22
}
]
```
To now map certain values into a map, you can use rows_to_map "column_a" "column_c", and the output will be a map with the following content:
{
"row1a": "row1c",
"row2a": 22
}
Generates an Array of rows from input rows. The groupColumn needs to be set to a column which will be used for grouping the rows into the Array.
The column needs to:
- be an int64 column
- use integers between 0 and 999
The Array will group all rows with identical values in the groupColumn.
The CSV can look as follows; use file_csv to read it and pipe it into group_rows:
batch | reference | title |
---|---|---|
int64 | string | string |
1 | ref1a | title1a |
1 | ref1b | title1b |
4 | ref4 | title4 |
3 | ref3 | title3 |
Produces this output (presented as json for better readability):
[
[
{
"batch": 1,
"reference": "ref1a",
"title": "title1a"
},
{
"batch": 1,
"reference": "ref1b",
"title": "title1b"
}
],
[
{
"batch": 3,
"reference": "ref3",
"title": "title3"
}
],
[
{
"batch": 4,
"reference": "ref4",
"title": "title4"
}
]
]
Generates a Map of rows from input rows. The groupColumn needs to be set to a column which will be used for grouping the rows into the Map.
The column needs to be a string column.
The Map will group all rows with identical values in the groupColumn.
The CSV can look as follows; use file_csv to read it and pipe it into group_map_rows:
batch | reference | title |
---|---|---|
string | string | string |
one | ref1a | title1a |
one | ref1b | title1b |
4 | ref4 | title4 |
3 | ref3 | title3 |
Produces this output (presented as json for better readability):
{
"one": [
{
"batch": "one",
"reference": "ref1a",
"title": "title1a"
},
{
"batch": "one",
"reference": "ref1b",
"title": "title1b"
}
],
"4": [
{
"batch": "4",
"reference": "ref3",
"title": "title3"
}
],
"3": [
{
"batch": "3",
"reference": "ref4",
"title": "title4"
}
]
}
With the parameters keyColumn
and valueColumn
you can select the two columns you want to use for map. (Only two are supported)
The keyColumn
must be of the type string, as it functions as map index (which is of type string)
{{ unmarshal "[{\"column_a\": \"row1a\",\"column_b\": \"row1b\",\"column_c\": \"row1c\"},{\"column_a\": \"row2a\",\"column_b\": \"row2b\",\"column_c\": \"row2c\"}]" | rows_to_map "column_a" "column_c" | marshal }}
Rendering that will give you:
{
"row1a": "row1c",
"row2a": "row2c"
}
The function returns an empty map
For rows_to_map
:
{}
The complete row gets mapped
For rows_to_map "column_a"
:
{
"row1a":{
column_a: "row1a",
column_b: "row1b",
column_c: "row1c",
},
"row2a":{
column_a: "row2a",
column_b: "row2b",
column_c: "row2c",
}
}
The row does get skipped
Input:
[
{
column_a: "row1a",
column_b: "row1b",
column_c: "row1c",
},
{
column_b: "row2b",
column_c: "row2c",
}
{
column_a: "row3a",
column_b: "row3b",
column_c: "row3c",
}
]
For rows_to_map "column_a" "column_c"
:
{
row1a: "row1c",
row3a: "row3c",
}
The value will be set to ""
(empty string)
Input:
[
{
column_a: "row1a",
column_b: "row1b",
column_c: "row1c",
},
{
column_a: "row2a",
column_b: "row2b",
}
{
column_a: "row3a",
column_b: "row3b",
column_c: "row3c",
}
]
For rows_to_map "column_a" "column_c"
:
{
row1a": "row1c",
row2a: "",
row3a: "row3c",
}
Helper function to query the datastore; used most of the time in conjunction with qjson
.
The key can be an int or int64, accessing the store of previous responses. The responses are accessed in the order received. Using a negative value accesses the store from the back, so a value of -2 would access the second to last response struct.
This function returns a string; if the key does not exist, an empty string is returned.
If the key
is a string, the datastore is accessed directly, allowing access to custom set values using store
or store_response_qjson
parameters.
The datastore stores all responses in a list. We can retrieve the response (as a json string) by using this
template function. {{ datastore 0 }}
will render to
{
"statuscode": 200,
"header": {
"foo": [
"bar",
"baz"
]
},
"body": "..."
}
This function is intended to be used with the qjson
template function.
The key -
has a special meaning, it returns the entire custom datastore (not the sequentially stored responses)
Helper function to extract fields from the 'json'.
@path
: string; a description of the location of the field to extract. For array access use integers; for object access use keys. Example: 'body.1.field'; see below for more details
@json_string
: string; a valid json blob to be queried; can be supplied via pipes from 'datastore idx'
@result
: the content of the json blob at the specified path
The call
{{ qjson "foo.1.bar" "{\"foo\": [{\"bar\": \"baz\"}, 42]}" }}
would return baz
.
As an example with pipes, the call
{{ datastore idx | qjson "header.foo.1" }}
would return bar
given the response above.
See gjson
Helper function to load a csv file.
@path
: string; a path to the csv file that should be loaded. The path is either relative to the manifest or a weburl
@delimiter
: rune; the delimiter that is used in the given csv, e.g. ','. Defaults to ','
@result
: the content of the csv as a json array, so we can work on this data with qjson
The CSV must have a certain structure. If the structure of the given CSV differs, the apitest tool will fail with an error.
- The first row must contain the names of the fields
- The second row must contain the types of the fields
Valid types
- int64
- int
- string
- float64
- bool
- int64,array
- string,array
- float64,array
- bool,array
- json
All types can be prefixed with * to return a pointer to the value. Empty strings initialize the Golang zero value for the type; for array types, the empty string initializes an empty array. For pointer types, the empty string returns an untyped nil.
Content of file at some/path/example.csv
:
id,name
int64,string
1,simon
2,martin
The call
{{ file_csv "some/path/example.csv" ','}}
would result in
[map[id:1 name:simon] map[id:2 name:martin]]
As an example with pipes, the call
{{ file_csv "some/path/example.csv" ',' | marshal | qjson "1.name" }}
would result in martin
given the response above.
There are some corner cases that trigger a certain behavior you should keep in mind
The column gets skipped in every row
Input
id,name
int64,
1,simon
2,martin
Result
[map[id:1] map[id:2]]
The column gets skipped in every row
Input
,name
int64,string
1,simon
2,martin
Result
[map[name:simon] map[name:martin]]
If there is a comment marked with #, or an empty line, it does not get rendered into the result
Input
id,name
int64,string
1,simon
2,martin
#3,philipp
4,roman
#5,markus
6,klaus
7,sebastian
Result
[map[id:1 name:simon] map[id:2 name:martin] map[id:4 name:roman] map[id:6 name:klaus] map[id:7 name:sebastian]]
Helper function to parse an XML file and convert it into json
@path
: string; a path to the XML file that should be loaded. The path is either relative to the manifest or a weburl
This function uses the function NewMapXml()
from github.com/clbanning/mxj.
Content of XML file some/path/example.xml
:
<objects xmlns="https://schema.easydb.de/EASYDB/1.0/objects/">
<obj>
<_standard>
<de-DE>Beispiel Objekt</de-DE>
<en-US>Example Object</en-US>
</_standard>
<_system_object_id>123</_system_object_id>
<_id>45</_id>
<name type="text_oneline"
column-api-id="263">Example</name>
</obj>
</objects>
The call
{{ file_xml2json "some/path/example.xml" }}
would result in
{
"objects": {
"-xmlns": "https://schema.easydb.de/EASYDB/1.0/objects/",
"obj": {
"_id": "45",
"_standard": {
"de-DE": "Beispiel Objekt",
"en-US": "Example Object"
},
"_system_object_id": "123",
"name": {
"#text": "Example",
"-column-api-id": "263",
"-type": "text_oneline"
}
}
}
}
Helper function to parse an HTML file and convert it into json
@path
: string; a path to the HTML file that should be loaded. The path is either relative to the manifest or a weburl
This marshalling is less strict than for XHTML. For example, it will not raise errors for unclosed tags like <p> or <hr>, or for Javascript code inside the HTML code. But it is possible that unclosed tags are missing in the resulting JSON if the goquery tokenizer can not find a matching closing tag.
Content of HTML file some/path/example.html
:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>fylr</title>
<meta name="description" content="fylr - manage your data" />
<script>
function onInputHandler(event) {
const form = event.currentTarget;
submitForm(form);
}
</script>
</head>
<body>
<div class="container">
<h1>Register</h1>
<p class="required-information"><sup>*</sup>Mandatory fields<br>
<p class="error-summary">Form has errors
<hr>
</div>
</body>
</html>
The call
{{ file_html2json "some/path/example.html" }}
would result in
{
"html": {
"-lang": "en",
"head": {
"meta": [
{
"-charset": "utf-8"
},
{
"-content": "fylr - manage your data",
"-name": "description"
}
],
"title": {
"#text": "fylr"
},
"script": {
"#text": "function onInputHandler(event) {\n\t\t\t\tconst form = event.currentTarget;\n\t\t\t\tsubmitForm(form);\n\t\t\t}"
}
},
"body": {
"div": {
"-class": "container",
"h1": {
"#text": "Register"
},
"p": [
{
"-class": "required-information",
"sup": {
"#text": "*"
},
"br": {}
},
{
"#text": "Form has errors",
"-class": "error-summary"
}
],
"hr": {}
}
}
}
}
Helper function to parse an XHTML file and convert it into json
@path
: string; a path to the XHTML file that should be loaded. The path is either relative to the manifest or a weburl
Content of XHTML file some/path/example.xhtml
:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<link href="/css/easydb.css" rel="stylesheet" type="text/css" />
<title>easydb documentation</title>
</head>
<body>
<h1 id="welcome-to-the-easydb-documentation">Welcome to the easydb documentation</h1>
</body>
</html>
The call
{{ file_xhtml2json "some/path/example.xhtml" }}
would result in
{
"html": {
"-xmlns": "http://www.w3.org/1999/xhtml",
"head": {
"link": {
"-href": "/css/easydb.css",
"-rel": "stylesheet",
"-type": "text/css"
},
"title": "easydb documentation"
},
"body": {
"h1": {
"#text": "Welcome to the easydb documentation",
"-id": "welcome-to-the-easydb-documentation"
}
}
}
}
Helper function to return the result of an SQL statement from a sqlite3 file.
@path
: string; a path to the sqlite file that should be loaded. The path is either relative to the manifest or a weburl
@statement
: string; a SQL statement that returns data (SELECT)
@result
: the result of the statement as a json array, so we can work on this data with qjson
Content of sqlite file at some/path/example.sqlite
:
Table names:
- column id: type INTEGER
- column name: type TEXT

| id | name   |
|----|--------|
| 2  | martin |
| 3  | NULL   |
| 1  | simon  |
The call
{{ file_sqlite "some/path/example.sqlite" `
SELECT id, name FROM names
WHERE name IS NOT NULL
ORDER BY id ASC
` }}
would result in
[map[id:1 name:simon] map[id:2 name:martin]]
NULL
values in the database are returned as nil
in the template. To check if a value in the sqlite file is NULL
, use a comparison to nil
:
The call
{{ file_sqlite "some/path/example.sqlite" `
SELECT id, name FROM names
ORDER BY id ASC
` }}
would result in
[map[id:1 name:simon] map[id:2 name:martin] map[id:3 name:nil]]
The NULL
value in name
can be checked with
{{ if ne $row.name nil }}
// use name, else skip
{{ end }}
Returns a slice with the given parameters as elements. Use this for range in templates.
Returns a string slice with s
split by sep
.
Returns the sum of a
and b
. a, b
can be any numeric type or string. The function returns a numeric type, depending on the input. With string
we return int64
.
Returns a - b
. a, b
can be any numeric type or string. The function returns a numeric type, depending on the input. With string
we return int64
.
Returns a * b
. a, b
can be any numeric type or string. The function returns a numeric type, depending on the input. With string
we return int64
.
Returns a / b
. a, b
can be any numeric type or string. The function returns a numeric type, depending on the input. With string
we return int64
.
Returns a util.GenericJson
Object (go: interface{}
) of the unmarshalled JSON
string.
Returns a string
of the marshalled interface{}
object.
Returns a string
of the MD5 sum of the file found in filepath
.
Returns a string
where all "
are escaped to \"
. This is useful in Strings which need to be concatenated.
Returns a string
as the result of escaping input as if it was intended for use in a URL query string.
Returns a string
as the result of unescaping input as if it was coming from a URL query string.
Returns a string
as the result of encoding input into base64.
Returns a string
as the result of decoding input from base64.
Uses Url.PathEscape to escape given string
to use in endpoint
or server_url
. Returns string
.
Returns a bool
value. If text
matches the regular expression regex
, it returns true
, else false
. This is useful inside {{ if ... }}
templates.
Just for reference, this is a Go Template built-in.
Returns a slice of n 0-sized elements, suitable for ranging over.
Example how to range over 100 objects
{
"body": [
{{ range $idx, $v := N 100 }}
...
{{ end }}
]
}
replace_host replaces the host and port in the given url
with the actual address of the built-in HTTP server (see below). This address, taken from the manifest.json,
can be overwritten with the command line parameter --replace-host
.
As an example, the URL http://localhost/myimage.jpg would be changed into http://localhost:8788/myimage.jpg following the example below.
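A minimal sketch of using it inside a request body (the image URL is illustrative; the host and port get rewritten to point at the built-in HTTP server):
{
    "body": {
        "image_url": "{{ replace_host "http://localhost/myimage.jpg" }}"
    }
}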
server_url returns the server url, which can be globally provided in the config file or directly by the command line parameter --server
. This is a *url.URL
.
server_url_no_user returns the server url, which can be globally provided in the config file or directly by the command line parameter --server
. Any information about the user authentication is removed. This is a *url.URL
.
If the server_url is in the form of http://user:password@localhost, server_url_no_user will return http://localhost.
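A minimal sketch of both functions in a store block; it assumes the returned value behaves like Go's *url.URL, so fields such as .Host are accessible in the template:
{
    "store": {
        "api_base": "{{ server_url_no_user }}",
        "api_host": "{{ (server_url).Host }}"
    }
}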
is_zero returns true if the passed value is the Golang zero value of the type.
oauth2_password_token returns an oauth token for a configured client and given some user credentials. Such token is an object which contains several properties, being access_token one of them. It uses the trusted
oAuth2 flow
Example:
{
"store": {
"access_token": {{ oauth2_password_token "my_client" "john" "pass" | marshal | qjson "access_token" }}
}
}
oauth2_client_token returns an oauth token for a configured client. Such token is an object which contains several properties, being access_token one of them. It uses the client credentials
oAuth2 flow.
Example:
{
"store": {
"access_token": {{ oauth2_client_token "my_client" | marshal | qjson "access_token" }}
}
}
oauth2_code_token returns an oauth token for a configured client and accepts a variable number of key/value parameters. Such token is an object which contains several properties, being access_token one of them. It uses the code grant
oAuth2 flow.
Behind the scenes the function will do a GET request to the auth URL
, adding such parameters to it, and interpret the last URL such request was redirected to, extracting the code from it and passing it to the last step of the regular flow.
Example:
{
"store": {
"access_token": {{ oauth2_code_token "my_client" "username" "myuser" "password" "mypass" | marshal | qjson "access_token" }}
}
}
Or:
{
"store": {
"access_token": {{ oauth2_code_token "my_client" "guess_access" "true" | marshal | qjson "access_token" }}
}
}
oauth2_implicit_token returns an oauth token for a configured client and accepts a variable number of key/value parameters. Such token is an object which contains several properties, being access_token one of them. It uses the implicit grant
oAuth2 flow.
Behind the scenes the function will do a GET request to the auth URL
, adding such parameters to it, and interpret the last URL such request was redirected to, extracting the token from its fragment.
Example:
{
"store": {
"access_token": {{ oauth2_password_token "my_client" "myuser" "mypass" | marshal | qjson "access_token" }}
}
}
oauth2_client returns a configured oauth client given its client_id
. Result is an object which contains several properties.
Example:
{
"store": {
"oauth2_client_config": {{ oauth2_client "my_client" | marshal }}
}
}
**oauth2_basic_auth** returns the authentication header for basic authentication for the given oauth client.
semver_compare compares two semantic version strings. This calls https://pkg.go.dev/golang.org/x/mod/semver#Compare, so check there for additional documentation. If the version is ""
the version v0.0.0
is assumed. Before comparing, the function checks if the strings are valid. In case they are not, an error is returned.
Write msg to log output. Args can be given. This uses logrus.Debugf to output.
Removes the key from the url's query and returns the url with the key removed. In case of an error, the url is returned as is. Unparsable urls are ignored and the url is returned.
Returns the value from the url's query for key. In case of an error, an empty string is returned. Unparsable urls are ignored and an empty string is returned.
Returns the index of the Parallel Run that the template is executed in, or -1 if it is not executed within a parallel run.
The apitest tool includes an HTTP Server. It can be used to serve files from the local disk temporarily. The HTTP Server can run in test mode. In this mode, the apitest tool does not run any tests, but starts the HTTP Server in the foreground, until CTRL-C is pressed. It is possible to define a proxy in the server which accepts and stores request data. It is useful if there is a need to test that expected webhook calls are properly performed. Different stores can be configured within the proxy.
To configure an HTTP Server, the manifest needs to include these lines:
{
"http_server": {
"addr": ":8788", // address to listen on
"dir": "", // directory to server, relative to the manifest.json, defaults to "."
"testmode": false, // boolean flag to switch test mode on / off
"proxy": { // proxy configuration
"test": { // proxy store configuration
"mode": "passthru" // proxy store mode
}
}
}
}
The proxy mode
parameter supports these values:
passthru
: The request is stored as it is, without further processing
The HTTP Server is started and stopped per test.
The server provides endpoints to serve local files and return responses based on request data.
To access any static file, use the path relative to the server directory (dir
) as the endpoint:
{
"request": {
"endpoint": "path/to/file.jpg",
"method": "GET"
}
}
If there is any error (for example a wrong path), an HTTP error response will be returned.
For some tests, you may not want the Content-Length header to be sent alongside the asset.
In this case, add no-content-length=1
to the query string of the asset url:
{
"request": {
"endpoint": "path/to/file.jpg?no-content-length=1",
"method": "GET"
}
}
The endpoint bounce
returns the binary of the request body, as well as the request headers and query parameters as part of the response headers.
{
"request": {
"endpoint": "bounce",
"method": "POST",
"query_params": {
"param1": "abc"
},
"header": {
"header1": 123
},
"body": {
"file": "@path/to/file.jpg"
},
"body_type": "multipart"
}
}
The file that is specified is relative to the apitest file, not relative to the http server directory. The response will include the binary of the file, which can be handled with pre_process
and format
.
Request headers are included in the response header with the prefix X-Req-Header-
, request query parameters are included in the response header with the prefix X-Req-Query-
:
{
"response": {
"header": {
"X-Req-Query-Param1": [
"abc"
],
"X-Req-Header-Header1": [
"123"
]
}
}
}
The endpoint bounce-json
returns a response that includes header
, query_params
and body
in the body.
{
"request": {
"endpoint": "bounce-json",
"method": "POST",
"query_params": {
"param1": "abc"
},
"header": {
"header1": 123
},
"body": {
"value1": "test",
"value2": {
"hello": "world"
}
}
}
}
will return this response:
{
"response": {
"body": {
"query_params": {
"param1": [
"abc"
]
},
"header": {
"Header1": [
"123"
]
},
"body": {
"value1": "test",
"value2": {
"hello": "world"
}
}
}
}
}
The endpoint bounce-query
returns a response that includes in its body
the request query string
as it is.
This is useful in endpoints where a body cannot be configured, like oAuth urls, so we can simulate responses in the request for testing.
{
"request": {
"endpoint": "bounce-query?here=is&all=stuff",
"method": "POST",
"body": {}
}
}
will return this response:
{
"response": {
"body": "here=is&all=stuff"
}
}
The different proxy stores can be used to both store requests and read the stored requests. The configuration, as already defined in HTTP Server, is as follows:
"proxy": { // proxy configuration
"<store_name>": { // proxy store configuration
"mode": "passthru" // proxy store mode
}
}
Key | Value Type | Value description |
---|---|---|
proxy | JSON Object | An object with the store names as keys and their configuration as values |
<store_name> | JSON Object | An object with the store configuration |
mode | string | The mode the store runs on (see below) |
Store modes:
Value | Description |
---|---|
passthru | The request to the proxy store will be stored as it is without any further processing |
Perform a request against the http server path /proxywrite/<store_name>
.
Where <store_name>
is a key (store name) inside the proxy
object in the configuration.
The expected response will have either a 200 status code and the used offset as body, or another status and an error body.
Given this request:
{
"endpoint": "/proxywrite/test",
"method": "POST",
"query_params": {
"some": "param"
},
"header": {
"X-My-Header": 0
},
"body": {
"post": {
"my": ["body", "here"]
}
}
}
The expected response:
{
"statuscode": 200,
"body": {
"offset": 0
}
}
Perform any request against the http server path /proxyread/<store_name>?offset=<offset>.
Where:
- <store_name> is a key inside the proxy object in the server configuration, aka the proxy store name
- <offset> represents the entry to be retrieved in the proxy store requests collection. If not provided, 0 is assumed.
Given this request:
{
"endpoint": "/proxyread/test",
"method": "GET",
"query_params": {
"offset": 0
}
}
The expected response:
{
"header": { // Merged headers. original request headers prefixed with 'X-Request`
"X-Apitest-Proxy-Request-Method": ["POST"], // The method of the request to the proxy store
"X-Apitest-Proxy-Request-Path": ["/proxywrite/test"], // The url path requested (including query string)
"X-Apitest-Proxy-Request-Query": ["is=here&my=data&some=value"], // The request query string only
"X-My-Header": ["blah"], // Original request custom header
"X-Apitest-Proxy-Store-Count": ["7"], // The number of requests stored
"X-Apitest-Proxy-Store-Next-Offset": ["1"] // The next offset in the store
... // All other standard headers sent with the original request (like Content-Type)
},
"body": { // The body of this request to the proxy store, always in binary format
"whatever": ["is", "here"] // Content-Type header will reveal its format on client side, in this case, it's JSON, but it could be a byte stream of an image, etc.
}
}
The apitest tool can run a mock SMTP server intended to catch locally sent emails for testing purposes.
To add the SMTP Server to your test, put the following in your manifest:
{
"smtp_server": {
"addr": ":9025", // address to listen on
"max_message_size": 1000000 // maximum accepted message size in bytes
// (defaults to 30MiB)
}
}
The server will then listen on the specified address for incoming emails. Incoming messages are stored in memory and can be accessed using the HTTP endpoints described further below. No authentication is performed when receiving messages.
If the test mode is enabled on the HTTP server and an SMTP server is also configured, both the HTTP and the SMTP server will be available during interactive testing.
On its own, the SMTP server has only limited use, e.g. as an email sink for applications that require such an email sink to function. But when combined with the HTTP server (see above in section HTTP Server), the messages received by the SMTP server can be reproduced in JSON format.
When both the SMTP server and the HTTP server are enabled, the following additional endpoints are made available on the HTTP server:
A very basic HTML/JavaScript GUI that displays and auto-refreshes the received
messages is made available on the /smtp/gui
endpoint.
On the /smtp
endpoint, an index of all received messages will be made
available as JSON in the following schema:
{
"count": 3,
"messages": [
{
"from": [
"testsender@programmfabrik.de"
],
"idx": 0,
"isMultipart": false,
"receivedAt": "2024-07-02T11:23:31.212023129+02:00",
"smtpFrom": "testsender@programmfabrik.de",
"smtpRcptTo": [
"testreceiver@programmfabrik.de"
],
"to": [
"testreceiver@programmfabrik.de"
]
},
{
"from": [
"testsender2@programmfabrik.de"
],
"idx": 1,
"isMultipart": true,
"receivedAt": "2024-07-02T11:23:31.212523916+02:00",
"smtpFrom": "testsender2@programmfabrik.de",
"smtpRcptTo": [
"testreceiver2@programmfabrik.de"
],
"subject": "Example Message",
"to": [
"testreceiver2@programmfabrik.de"
]
},
{
"from": [
"testsender3@programmfabrik.de"
],
"idx": 2,
"isMultipart": false,
"receivedAt": "2024-07-02T11:23:31.212773829+02:00",
"smtpFrom": "testsender3@programmfabrik.de",
"smtpRcptTo": [
"testreceiver3@programmfabrik.de"
],
"to": [
"testreceiver3@programmfabrik.de"
]
}
]
}
Headers that were encoded according to RFC2047 are decoded first.
On the /smtp/$idx
endpoint (e.g. /smtp/1
), metadata about the message with
the corresponding index is made available as JSON:
{
"bodySize": 306,
"contentType": "multipart/mixed",
"contentTypeParams": {
"boundary": "d36c3118be4745f9a1cb4556d11fe92d"
},
"from": [
"testsender2@programmfabrik.de"
],
"headers": {
"Content-Type": [
"multipart/mixed; boundary=\"d36c3118be4745f9a1cb4556d11fe92d\""
],
"Date": [
"Tue, 25 Jun 2024 11:15:57 +0200"
],
"From": [
"testsender2@programmfabrik.de"
],
"Mime-Version": [
"1.0"
],
"Subject": [
"Example Message"
],
"To": [
"testreceiver2@programmfabrik.de"
]
},
"idx": 1,
"isMultipart": true,
"multiparts": [
{
"bodySize": 15,
"contentType": "text/plain",
"contentTypeParams": {
"charset": "utf-8"
},
"headers": {
"Content-Type": [
"text/plain; charset=utf-8"
]
},
"idx": 0,
"isMultipart": false
},
{
"bodySize": 39,
"contentType": "text/html",
"contentTypeParams": {
"charset": "utf-8"
},
"headers": {
"Content-Type": [
"text/html; charset=utf-8"
]
},
"idx": 1,
"isMultipart": false
}
],
"multipartsCount": 2,
"receivedAt": "2024-07-02T12:54:44.443488367+02:00",
"smtpFrom": "testsender2@programmfabrik.de",
"smtpRcptTo": [
"testreceiver2@programmfabrik.de"
],
"subject": "Example Message",
"to": [
"testreceiver2@programmfabrik.de"
]
}
Headers that were encoded according to RFC2047 are decoded first.
On the /smtp/$idx/body
endpoint (e.g. /smtp/1/body
), the message body
(excluding message headers, including multipart part headers) is made available
for the message with the corresponding index.
If the message was sent with a Content-Transfer-Encoding
of either base64
or quoted-printable
, the endpoint returns the decoded body.
If the message was sent with a Content-Type
header, it will be passed through
to the HTTP response.
For multipart messages, the /smtp/$idx/multipart
endpoint (e.g.
/smtp/1/multipart
) will contain an index of that message's multiparts in the
following schema:
{
"multiparts": [
{
"bodySize": 15,
"contentType": "text/plain",
"contentTypeParams": {
"charset": "utf-8"
},
"headers": {
"Content-Type": [
"text/plain; charset=utf-8"
]
},
"idx": 0,
"isMultipart": false
},
{
"bodySize": 39,
"contentType": "text/html",
"contentTypeParams": {
"charset": "utf-8"
},
"headers": {
"Content-Type": [
"text/html; charset=utf-8"
]
},
"idx": 1,
"isMultipart": false
}
],
"multipartsCount": 2
}
On the /smtp/$idx/multipart/$partIdx
endpoint (e.g. /smtp/1/multipart/0
),
metadata about the multipart with the corresponding index is made available:
{
"bodySize": 15,
"contentType": "text/plain",
"contentTypeParams": {
"charset": "utf-8"
},
"headers": {
"Content-Type": [
"text/plain; charset=utf-8"
]
},
"idx": 0,
"isMultipart": false
}
Headers that were encoded according to RFC2047 are decoded first.
The endpoint can be called recursively for nested multipart messages, e.g.
/smtp/1/multipart/0/multipart/1
.
On the /smtp/$idx/multipart/$partIdx/body
endpoint (e.g.
/smtp/1/multipart/0/body
), the body of the multipart (excluding headers)
is made available.
If the multipart was sent with a Content-Transfer-Encoding
of either base64
or quoted-printable
, the endpoint returns the decoded body.
If the message was sent with a Content-Type
header, it will be passed through
to the HTTP response.
The endpoint can be called recursively for nested multipart messages, e.g.
/smtp/1/multipart/0/multipart/1/body
.