diff --git a/docs/runner/android/configure.md b/docs/runner/android/configure.md
index 3b671c0b4..c49846685 100644
--- a/docs/runner/android/configure.md
+++ b/docs/runner/android/configure.md
@@ -86,7 +86,7 @@ vendorConfiguration:
-#### Multi module testing
+#### Multi-module testing
 :::danger
@@ -119,7 +119,7 @@ Each entry consists of `testApplication` in case of library testing and `application`
 This mode is not supported by Gradle Plugin
-This mode is also not available for Android devices with version less Android 5.
+This mode is also not available for Android devices with versions below Android 5.
 :::
 Marathon supports testing dynamic feature modules:
@@ -213,7 +213,7 @@ marathon {
 ### Device serial number assignment
-This option allows to customise how marathon assigns a serial number to devices.
+This option allows customisation of how marathon assigns a serial number to devices.
 Possible values are:
 * ```automatic```
@@ -262,7 +262,7 @@ Notes on the source of serial number:
 ```ddms``` - Adb serial number(same as you see with `adb devices` command)
-```automatic``` - Sequantially checks all available options for first non empty value.
+```automatic``` - Sequentially checks all available options for the first non-empty value.
 Priority order:
@@ -272,7 +272,7 @@ After 0.6: ```marathon_property``` -> ```ddms``` -> ```boot_property``` -> ```h
 ### Install options
-By default, these will be ```-g -r``` (```-r``` prior to marshmallow). You can specify additional options to append to the default ones.
+By default, these will be ```-g -r``` (```-r``` prior to Marshmallow). You can specify additional options to append to the default ones.
@@ -307,7 +307,7 @@ marathon {
 ### Screen recorder configuration
-By default, device will record a 1280x720 1Mbps video of up to 180 seconds if it is supported. If on the other hand you want to force
+By default, the device will record a 1280x720 1Mbps video of up to 180 seconds if it is supported.
If on the other hand you want to force
 screenshots or configure the recording parameters you can specify this as follows:
 :::tip
@@ -478,7 +478,7 @@ marathon {
 ### [Allure-kotlin][3] support
 This option enables collection of allure's data from devices.
-Configuration below works out of the box for allure-kotlin 2.3.0+.
+The configuration below works out of the box for allure-kotlin 2.3.0+.
@@ -517,7 +517,7 @@ marathon {
 Additional configuration parameters include **pathRoot** which has two options:
-* `EXTERNAL_STORAGE` that is usually the `/sdcard/` on most of the devices
+* `EXTERNAL_STORAGE` which is usually `/sdcard/` on most devices
 * `APP_DATA` which is usually `/data/data/$appPackage/`
 Besides the expected path root, you might need to provide the **relativeResultsDirectory**: this is the relative path to `pathRoot`. The
@@ -565,7 +565,7 @@ Please refer to [allure's documentation][3] on the usage of allure.
 :::tip
-Starting with allure 2.3.0 your test application no longer needs MANAGE_EXTERNAL_STORAGE permission to write allure's output, so there is no
+Starting with allure 2.3.0, your test application no longer needs the **MANAGE_EXTERNAL_STORAGE** permission to write allure's output, so there is no
 need to add any special permissions.
 :::
@@ -582,7 +582,7 @@ The on-device report gives you more flexibility and allows you to:
 * Capture window hierarchy and more.
-All allure output from devices will be collected under `$output/device-files/allure-results` folder.
+All allure output from devices will be collected under the `$output/device-files/allure-results` folder.
 ### Timeout configuration
@@ -637,7 +637,7 @@ marathon {
 ### Sync/pull files from device after test run
 Sometimes you need to pull some folders from each device after the test execution. It may be screenshots or logs or other debug information.
-To help with this marathon supports pulling files from devices at the end of the test batch execution.
Here is how you can configure it:
+To help with this, marathon supports pulling files from devices at the end of the test batch execution. Here is how you can configure it:
@@ -909,13 +909,13 @@ marathon {
 ### Test access configuration
 :::info
-This is power-user feature of marathon that allows setting up GPS location on the emulator, simulating calls, SMS and more thanks to the
-access to device-under-test from the test itself.
+This is a power-user feature of marathon that allows setting up GPS locations on the emulator, simulating calls, SMS and more thanks to
+access to the device-under-test from the test itself.
 :::
 Marathon supports adam's junit extensions which allow tests to gain access to adb on all devices and emulator's control + gRPC port. See the
-[docs](https://malinskiy.github.io/adam/extensions/1-android-junit/) as well as the [PR](https://github.com/Malinskiy/adam/pull/30) for
+[docs](https://malinskiy.github.io/adam/extensions/1-android-junit/) as well as the [PR](https://github.com/Malinskiy/adam/pull/30) for
 a description on how this works.
@@ -950,14 +950,14 @@ marathon {
 ### Multiple adb servers
-Default configuration of marathon assumes that adb server is started locally and is available at `127.0.0.1:5037`. In some cases it may be
+The default configuration of marathon assumes that the adb server is started locally and is available at `127.0.0.1:5037`. In some cases it may be
 desirable to connect multiple adb servers instead of connecting devices to a single adb server. An example of this is distributed execution of tests using test access (calling adb commands from tests). For such scenario all emulators should be connected via a local (in relation to the emulator) adb server. Default port for each host is 5037.
 :::tip
-Adb server started on another machine should be exposed to external traffic, e.g. using option `-a`. For example, if you want to
+Adb servers started on another machine should be exposed to external traffic, e.g.
using option `-a`. For example, if you want to
 expose the adb server and start it in foreground explicitly on port 5037: `adb nodaemon server -a -P 5037`.
 :::
@@ -1065,7 +1065,7 @@ found [here](https://malinskiy.github.io/adam/extensions/2-android-event-produce
 ### Enable window animations
-By default, marathon uses `--no-window-animation` flag. Use the following option if you want to enable window animations:
+By default, marathon uses the `--no-window-animation` flag. Use the following option if you want to enable window animations:
diff --git a/docs/runner/configuration/dynamic-configuration.md b/docs/runner/configuration/dynamic-configuration.md
index f7108b60e..49b93a58b 100644
--- a/docs/runner/configuration/dynamic-configuration.md
+++ b/docs/runner/configuration/dynamic-configuration.md
@@ -9,7 +9,7 @@ Marathon allows you to pass dynamic variables to your marathon configuration, e.
 ## CLI
-Marathonfile support environment variable interpolation in the Marathonfile. Every occurance of `${X}` in the Marathonfile will be replaced
+The Marathonfile supports environment variable interpolation. Every occurrence of `${X}` in the Marathonfile will be replaced
 with the value of envvar `X` For example, if you want to dynamically pass the index of the test run to the fragmentation filter:
 ```yaml
diff --git a/docs/runner/intro/configure.md b/docs/runner/intro/configure.md
index 6e4aa54d7..282fdb6cb 100644
--- a/docs/runner/intro/configure.md
+++ b/docs/runner/intro/configure.md
@@ -16,23 +16,29 @@ outputDir: "marathon"
 debug: false
 ```
-There are _a lot_ of options in marathon. This can be overwhelming especially when you're just starting out. We will split the options into
-general options below, complex options that you can find as subsections in the menu on the left and platform-specific options under each platform section.
+There are _a lot_ of options in marathon. This can be overwhelming, especially when you're just starting out.
We will split the options into
+general options below, complex options that you can find as subsections in the menu on the left and platform-specific options under each platform section.
-If you're unsure how to properly format your options in Marathonfile take a look at the samples or take a look at the [deserialisation logic][1] in the *configuration* module of the project.
 Each option might use a default deserializer from yaml or a custom one. Usually the custom deserializer expects the _type_ option for polymorphic types to
+If you're unsure how to properly format your options in the Marathonfile, take a look at the samples or at
+the [deserialisation logic][1] in the *configuration* module of the project.
+Each option might use a default deserializer from yaml or a custom one. Usually the custom deserializer expects the _type_ option for
+polymorphic types to
 understand which specific object we need to instantiate.
 ## Important notes
+
 ### File-system path handling
-When specifying **relative host file** paths in the configuration they will be resolved relative to the directory of the Marathonfile, e.g. if
+When specifying **relative host file** paths in the configuration, they will be resolved relative to the directory of the Marathonfile, e.g.
+if
 you have `/home/user/app/Marathonfile` with `baseOutputDir = "./output"` then the actual path to the output directory will be `/home/user/app/output`.
 ## Required
-Below you will find a list of currently supported configuration parameters and examples of how to set them up. Keep in mind that not all of the
+
+Below you will find a list of currently supported configuration parameters and examples of how to set them up. Keep in mind that not all of
+the
 additional parameters are supported by each platform. If you find that something doesn't work - please submit an issue for a platform at fault.
@@ -103,7 +109,9 @@ marathon {
 ### Platform-specific options
+
 Marathon requires you to specify the platform for each run, for example:
+
 ```yaml
 vendorConfiguration:
   type: "Android"
 ```
@@ -114,7 +122,8 @@ Refer to platform configuration for additional options inside the `vendorConfigu
 ## Optional
 ### Ignore failures
-By default, the build fails if some tests failed. If you want to the build to succeed even if some tests failed use *true*.
+
+By default, the build fails if some tests fail. If you want the build to succeed even if some tests fail, use *true*.
@@ -145,8 +154,11 @@ marathon {
 ### Code coverage
-Depending on the vendor implementation code coverage may not be supported. By default, code coverage is disabled. If this option is enabled,
-code coverage will be collected and marathon assumes that code coverage generation will be setup by user (e.g. proper build flags, jacoco
+
+Depending on the vendor implementation, code coverage may not be supported. By default, code coverage is disabled. If this option is
+enabled,
+code coverage will be collected and marathon assumes that code coverage generation will be set up by the user (e.g. proper build flags,
+jacoco
 jar added to classpath, etc).
@@ -178,8 +190,9 @@ marathon {
 ### Test output timeout
-This parameter specifies the behaviour for the underlying test executor to timeout if there is no output. By default, this is set to 5
-minutes.
+
+This parameter specifies that the underlying test executor should time out if there is no test output within this duration. By default,
+this is set to 5 minutes.
@@ -211,8 +224,8 @@ marathon {
 ### Test batch timeout
-This parameter specifies the behaviour for the underlying test executor to timeout if the batch execution exceeded some duration. By
-default, this is set to 30 minutes.
+This parameter specifies that the underlying test executor should time out if the batch execution exceeds this duration. By default, this is
+set to 30 minutes.
@@ -244,8 +257,8 @@ marathon {
 ### Device provider init timeout
-When the test run starts device provider is expected to provide some devices. This should not take more than 3 minutes by default. If your
-setup requires this to be changed please override as following:
+When the test run starts, the device provider is expected to provide some devices. This should not take more than 3 minutes by default. If your
+setup requires this to be changed, please override as follows:
@@ -277,7 +290,7 @@ marathon {
 ### Analytics tracking
-To better understand the use-cases that marathon is used for we're asking you to provide us with anonymised information about your usage. By
+To better understand the use cases that marathon is used for, we're asking you to provide us with anonymised information about your usage. By
 default, this is enabled. Use **false** to disable.
@@ -310,11 +323,11 @@ marathon {
 :::note
-analyticsTracking can also be enabled (default value) / disabled directly from the CLI. It is disabled if it's set to be disabled in either the config or the CLI.
+analyticsTracking can also be enabled (default value) / disabled directly from the CLI. It is disabled if it's set to be disabled in either
+the config or the CLI.
 :::
-
 ### BugSnag reporting
 To better understand crashes, we report crashes with anonymised info. By default, this is enabled. Use **false** to disable.
@@ -349,12 +362,13 @@ marathon {
 :::note
-bugsnagReporting can also be enabled (default value) / disabled directly from the CLI. It is disabled if it's set to be disabled in either the config or the CLI.
+bugsnagReporting can also be enabled (default value) / disabled directly from the CLI. It is disabled if it's set to be disabled in either
+the config or the CLI.
 :::
-
 ### Uncompleted test retry quota
+
 By default, tests that don't have any status reported after execution (for example a device disconnected during the execution) retry indefinitely.
You can limit the number of total execution for such cases using this option.
@@ -387,12 +401,17 @@ marathon {
 ### Execution strategy
-When executing tests with retries there are multiple trade-offs to be made. Two execution strategies are supported: any success or all success.
-By default, `ANY_SUCCESS` strategy is used with fast execution i.e. if one of the test retries succeeds then the test is considered successfully
+
+When executing tests with retries, there are multiple trade-offs to be made. Two execution strategies are supported: any success or all
+success.
+By default, the `ANY_SUCCESS` strategy is used with fast execution, i.e. if one of the test retries succeeds then the test is considered
+successfully
 executed and all non-started retries are removed.
 #### Any success
-Test passes if any of its executions are passing. This mode works only if there is no complex sharding strategy applied. This is the default.
+
+A test passes if any of its executions pass. This mode works only if there is no complex sharding strategy applied. This is the
+default.
@@ -425,12 +444,15 @@ marathon {
 :::info
-Complex sharding with `ANY_SUCCESS` mode doesn't make sense when user asks for N number of tests to run explicitly, and we pass on the first one.
+Complex sharding with `ANY_SUCCESS` mode doesn't make sense when the user asks for N number of tests to run explicitly, and we pass on the first
+one.
 :::
 #### All success
-Test passes if and only if all its executions are passing. This mode works only if there are no retries, i.e. no complex flakiness strategy, no retry strategy.
+
+A test passes if and only if all its executions pass. This mode works only if there are no retries, i.e. no complex flakiness strategy,
+no retry strategy.
@@ -463,28 +485,39 @@ marathon {
 :::info
-Adding retries with retry/flakiness strategies means users wants to trade off cost for reliability, i.e.
add more retries and pass if one
-of test retries passes, so retries only make sense for `ANY_SUCCESS` mode.
+Adding retries with retry/flakiness strategies means the user wants to trade off cost for reliability, i.e. add more retries and pass if one
+of the test retries passes, so retries only make sense for `ANY_SUCCESS` mode.
-When we use `ALL_SUCCESS` mode it means the user want to verify each test with a number of tries (they are not retries per se) and pass only if
-all of them succeed. This is the case when fixing a flaky test or adding a new test, and we want to have a signal that the test is fixed/not flaky.
+When we use `ALL_SUCCESS` mode it means the user wants to verify each test with a number of tries (they are not retries per se) and pass only
+if
+all of them succeed. This is the case when fixing a flaky test or adding a new test, and we want to have a signal that the test is fixed/not
+flaky.
 :::
 #### Fast execution mode
+
 When the test reaches a state where a decision about its state can be made, marathon can remove additional in-progress retries. This decision point is different depending on the execution mode used. Let's walk through two examples.
-Assume `ANY_SUCCESS` strategy is used and 100 retries are scheduled for a test A via flakiness strategy. Let's say first 3 failed and the 4th attempt succeeded. At
-this point the test should already be considered passed since `ANY_SUCCESS` out of all retries leads to the result by definition of `ANY_SUCCESS`
-execution strategy. To save cost one can remove additional non-started retries by using fast execution mode (this is the default behavior for
-`ANY_SUCCESS` strategy). On the other hand one could disable fast execution and get much more accurate statistics about this test by
+Assume the `ANY_SUCCESS` strategy is used and 100 retries are scheduled for a test A via flakiness strategy. Let's say the first 3 failed and the
+4th attempt succeeded.
At
+this point the test should already be considered passed since `ANY_SUCCESS` out of all retries leads to the result by definition
+of `ANY_SUCCESS`
+execution strategy. To save cost, one can remove additional non-started retries by using fast execution mode (this is the default behaviour
+for
+`ANY_SUCCESS` strategy). On the other hand, one could disable fast execution and get much more accurate statistics about this test by
+executing
 more retries and calculating the probability of passing as a measure of flakiness for test A.
-Assume `ALL_SUCCESS` strategy is used and 100 retries are scheduled using sharding strategy. Let's say first 3 passed and the 4th attempt failed.
-At this point the test should already be considered failed since any failure out of all retries leads to the result by definition of `ALL_SUCCESS`
-execution strategy. You can save cost by removing additional non-started retries by using fast execution mode (this is the default behaviour for
-`ALL_SUCCESS` strategy). On the other hand one could disable fast execution and verify the flakiness rate with a defined precision, in this case
+Assume the `ALL_SUCCESS` strategy is used and 100 retries are scheduled using sharding strategy. Let's say the first 3 passed and the
+4th attempt failed.
+At this point the test should already be considered failed since any failure out of all retries leads to the result by definition
+of `ALL_SUCCESS`
+execution strategy. You can save cost by removing additional non-started retries by using fast execution mode (this is the default behaviour
+for
+`ALL_SUCCESS` strategy). On the other hand, one could disable fast execution and verify the flakiness rate with a defined precision; in this
+case
 there are 100 retries, so you would get precision up to 1% for test A.
@@ -517,7 +550,8 @@ marathon {
 ### Debug mode
-Enabled very verbose logging to stdout of all the marathon components. Very useful for debugging.
+
+Enables very verbose logging to stdout of all the marathon components. Very useful for debugging.
@@ -548,8 +582,9 @@ marathon {
 ### Screen recording policy
-By default, screen recording will only be pulled for tests that failed (**ON_FAILURE** option). This is to save space and also to reduce the
-test duration time since we're not pulling additional files. If you need to save screen recording regardless of the test pass/failure please
+
+By default, screen recordings will only be pulled for tests that failed (**ON_FAILURE** option). This is to save space and also to reduce the
+test duration since we're not pulling additional files. If you need to save screen recordings regardless of the test pass/failure, please
 use the **ON_ANY** option:
@@ -581,10 +616,12 @@ marathon {
 ### Output configuration
+
 #### Max file path
-By default, the max file path for any output file is capped at 255 characters due to some of OSs limitations. This is the reason why some
-test runs have lots of "File path length cannot exceed" messages in the log. Since there is currently no API to programmatically
-establish this limit it's user's responsibility to set it up to larger value if OS supports this and the user desires it.
+
+By default, the max file path for any output file is capped at 255 characters due to limitations of some OSs. This is the reason why some
+test runs have lots of "File path length cannot exceed" messages in the log. Since there is currently no API to programmatically
+establish this limit, it's the user's responsibility to set a larger value if the OS supports this and the user desires it.