Test results
This topic describes how to use the test-results resource to send test results to the ValueEdge server.
Tip: We recommend sending up to 20 requests per second.
Overview
Use the test-results resource to send test results to the ValueEdge server.
Using this resource, you can:
- Create or update test and test run entities in ValueEdge.
- Link test entities to backlog items (epics, stories, features, defects) and to application modules.
- Link test run entities to releases and to environment information (using taxonomy).
- Define test and run characteristics (using the test level, type, testing tool, and framework fields).
Note: You must log in to the server using the Authentication API before using this resource.
Resources
Send test results
POST .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results
- The Content-Type header field must be set to application/xml.
- The content can be compressed in gzip format. In this case, the Content-Encoding header field must be set to application/gzip (see the example after this list).
- The XML payload must conform to the XSD schema, as described below.
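For example, a request that sends a gzip-compressed payload might look as follows (the space and workspace IDs are placeholders):
POST /api/shared_spaces/1001/workspaces/1002/test-results HTTP/1.1
Content-Type: application/xml
Content-Encoding: application/gzip

<gzip-compressed XML payload>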
Response
This is an asynchronous resource that assigns a task to the ValueEdge server to process the payload.
- If the payload conforms to the XSD schema (see below) and the task is accepted, the resource returns status 202 (Accepted) and a JSON that contains the task status and task ID:
{"status":"queued","id":1103}
For more details about the JSON, see "Get task status" below.
- Otherwise, the resource returns status 400 (Bad Request), indicating that the payload was not processed because it does not conform to the XSD schema.
Note: To avoid double processing of the same payload sent more than once (either by the same node or by two different nodes), the received payload is hashed and the hash is saved. If the hash of a received payload is already stored, the payload is not processed and the latest status of the original task is returned.
- In response to an attempt to inject the same payload more than once, the JSON includes the additional fields 'fromOlderPush' and 'until', indicating that the task has already been processed:
{"fromOlderPush":"true","id":1103,"until":"2020-01-28T11:29:51+0000","status":"SUCCESS"}
Get task status
After you send test results and receive the task ID, you can check the task status:
GET .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results/<task_id>
The response is a JSON with the following fields:
Field | Description |
---|---|
id | ID of the task. |
status | Status of the task. Possible values: QUEUED - Task is waiting in the queue. RUNNING - Task is running. SUCCESS - Publishing operation finished successfully. WARNING - Publishing operation succeeded partially. See details in the errorDetails field. FAILED - Publishing operation did not succeed. See details in the errorDetails field. ERROR - System error encountered. |
fromOlderPush | Optional field. Appears only on an attempt to inject a payload that has already been processed. |
until | Deadline for the cache to be valid. If the same payload is injected again, the cached value is used to return the result. |
errorDetails | May contain several error messages separated by a semicolon (;). The field can contain global errors that are relevant for all test results, as well as errors related to specific test results (for example, item 0 and item 2). |
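For example, polling the task from the earlier response might return (values are illustrative):
GET .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results/1103
{"status":"RUNNING","id":1103}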
Query parameters
The test-results resource supports the following parameter:
Parameter | Type | Description |
---|---|---|
skip-errors | Boolean | Whether parsing ignores recoverable errors and continues to process data. The default is false. |
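For example, to skip recoverable errors and continue processing the remaining test results:
POST .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results?skip-errors=true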
Payload definition
The payload describes:
- The test and test run entities that you want to create or update.
- ValueEdge entities that you can link to the tests and test runs.
Root element: test_result
The minimal payload is:
<test_result>
<test_runs>
<test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed" started="1430919295889"/>
</test_runs>
</test_result>
The test_runs element contains:
- test_run, which is translated to automated tests and automated runs in ValueEdge.
- gherkin_test_run, which is translated to Gherkin tests and Gherkin runs in ValueEdge.
The following table describes the test_run element attributes:
Attribute name | Required | Target | Description |
---|---|---|---|
module | No | Component field of test and run | The name of the module or component that contains the test. For example, the name of the Maven module. |
package | No | Package field of test and run | The location of the test inside the module. For example, the name of the Java package. |
class | No | Class field of test and run | The name of the file that contains the test. For example, the name of the Java class. |
name | Yes | Name field of test and run | The name of the executed test. For example, the name of the Java method. |
duration | Yes | Duration field of run | The duration of the test execution, in milliseconds. |
status | Yes | Status and Native Status fields of run | The final status of the test execution. Possible values: 'Passed', 'Failed', 'Skipped'. |
started | No | Started field of run | The starting time of the test execution, in milliseconds since the standard base time known as "the epoch": January 1, 1970, 00:00:00 GMT. If not supplied, the injection time is used. |
external_report_url | No | External report url field of run | The URL to a report in the external system that produced this run. The value must start with http(s) or td(s). |
external_test_id | No | External test id field of test | An external reference to the test. Limitation: In the injection API, the External test id field can be updated only during test creation. After the test has been created, the field can be updated only through the regular API or the UI. |
external_run_id | No | External run id field of run. Also the External test in suite id in the test suite planning, if injection is done with a suite. | Used to differentiate between runs of the same test: if several test runs belong to the same test, a separate run is generated for each unique external_run_id. Without external_run_id, several runs of the same test result in one run in ValueEdge, and older runs are considered previous runs of that run. You can also differentiate test runs by test characteristics (test level, type, testing tool, and framework) and environment labels (such as DB, OS, browser), as described below. Use external_run_id if it is not meaningful to differentiate runs by test characteristics or environment labels. Note: If this attribute is used along with the testing framework, the field is passed back to the CI server when the suite run is executed from ValueEdge. |
Note: The characters & and < are illegal in XML attribute values.
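In standard XML, these characters are represented with the entities &amp; and &lt;. Assuming the service accepts standard entity escaping, a test name containing them could be written as:
<test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne &amp; testTwo" duration="3" status="Passed"/>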
ValueEdge identifies each test by a unique combination of the following fields: Component, Package, Class, Name.
When ValueEdge receives a test run, it checks these attributes to associate the run with a specific test. If no existing test matches the run, ValueEdge creates a new automated test entity.
Each test can have more than one run, each uniquely identified by Test, Release, Milestone, Program, and Environment tags. These are the test's Last Runs.
ValueEdge stores the details of a single Last Run for each unique combination of these attributes. Each last run accumulates its own history of Previous Runs.
For example, if a test was run on Chrome and Firefox in two different releases, the test would have four last runs.
When POSTing test and test run results, ValueEdge determines if a new test or test run is created, or if an existing one is updated (simulating a PUT operation). For each reported test run:
- If a matching test run exists, its data (such as status, starting time, and duration) is updated.
- If such a test run does not exist, a new one is created and linked to the relevant test entity.
(Optional) If a test run finished with an error or exception, you can report the error details in a nested error element as follows:
- Set the type attribute to the exception type.
- Set the message attribute to the exception message.
- Set the element body to the exception stack trace.
<test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testTwo" duration="2" status="Failed" started="1430919316223">
<error type="java.lang.AssertionError" message="expected:'111' but was:'222'">java.lang.AssertionError: expected:'111' but was:'222'
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
</error>
</test_run>
(Optional) You can add a run description in a nested description element as follows:
<test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testTwo" duration="2" status="Failed" started="1430919316223">
<description>My run description</description>
</test_run>
If error details are also included, the description element must come after the error element.
The following table describes the gherkin_test_run element attributes:
Attribute name | Description |
---|---|
name | The name of the executed Gherkin test. For example, the name of the Java method. |
duration | The duration of the Gherkin test execution, in milliseconds. |
status | The final status of the Gherkin test execution. Possible values: 'Passed', 'Failed', 'Skipped'. |
feature | The feature in the Gherkin test (see the nested feature element in the sample payloads below). |
For details on how unique Gherkin tests and test runs are identified, see Automate Gherkin tests.
Linking tests and test runs to other entities
You can link tests and runs to other entities. The link can appear in:
- Global context (before the test_runs element), in which case all test results have that link.
- Test result context (inside a test_run element), in which case only the specific test and run have that link.
Links to different entities must appear in the order that the entities are listed below.
All links are optional.
Note: The specified links must exist in ValueEdge. Otherwise, it is considered an error (unless you set the skip-errors parameter to true when POSTing the results), and the test_run containing such a link is skipped.
If the payload contains a link to a suite, new or updated tests are assigned to the suite, and test runs are injected under the suite run belonging to that suite.
Linking is available only by ID, using the suite_ref element:
<suite_ref id="1001" external_run_id="Regression1" component=”my component”/>
- external_run_id attribute: (Optional) Contains the name of the external set of test results. If external_run_id is specified, the new suite run and inner runs have the name of the specified external_run_id. Otherwise, the new suite run and inner runs have the name of the suite.
- component attribute: (Optional) Contains the name of a component or characteristic that the suite run is a member of (for example, ALM-domain-project). If the component attribute is missing and the parent suite has a defined component field, that value is used when a new suite run is created.
The suite run used for injection is selected by a combination of the following links:
- suite run name (external_run_id, or suite name if external_run_id is missing)
- release
- milestone
- program
If a suite run with those links does not exist in the suite, it is created.
Note: Creation of a new suite run requires assignment to a release, so you need to specify a release link.
In addition, if the specified release is assigned to programs, a program link is also required; otherwise, the program link is optional. A milestone link is optional. See the example below.
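For example, the following global context (IDs are hypothetical) creates or reuses a suite run named "Regression1" in a release that is assigned to a program:
<test_result>
  <suite_ref id="3001" external_run_id="Regression1"/>
  <program_ref id="1001"/>
  <release_ref id="1002"/>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed"/>
  </test_runs>
</test_result>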
Limitations:
- ValueEdge does not currently allow adding the same test twice in the suite planning tab. When injecting two runs of the same test to a suite run, the test is added only once to the suite planning tab, without any specific environment information.
- Runs injected to ValueEdge cannot be re-run from ValueEdge. To run the same test again, create a new suite run.
- The number of runs injected to a suite is limited to 1000 in a single payload. Runs exceeding this limit are ignored.
If the payload contains a link to a program, new or updated run entities are linked to the specified program.
Linking is available only by ID, using the program_ref element: <program_ref id="1001"/>
If both program and release links are specified, and the release has assigned programs, the program must be one of the assigned programs.
If the payload contains a link to a release, new or updated run entities are linked to the specified release.
- Use the release element to refer to the release by name: <release name="myRelease"/>
  Linking by name is possible because the release name is unique in the context of a workspace.
- Use the release_ref element to refer to the release by ID: <release_ref id="1001"/>
- To specify the default release, use the release element as follows: <release name="_default_"/>
If the payload contains a link to a milestone, new or updated run entities are linked to the specified milestone.
The milestone link can be defined along with the release link. The specified milestone must belong to the specified release, as shown in the example below.
Linking is available only by ID, using the milestone_ref element: <milestone_ref id="1001"/>
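For example, the following payload (ID is hypothetical) links the run to a milestone within the release myRelease:
<test_result>
  <release name="myRelease"/>
  <milestone_ref id="1001"/>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed"/>
  </test_runs>
</test_result>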
If the payload contains links to backlog items, new or updated test entities are linked to the specified backlog items. You can specify more than one such link.
Linking is available only by ID, using backlog_item_ref elements wrapped in a backlog_items element.
<backlog_items>
<backlog_item_ref id="2001"/>
<backlog_item_ref id="2002"/>
</backlog_items>
If the payload contains links to application modules, new or updated test entities are linked to the specified application modules.
You can specify more than one such link.
Linking is available only by ID, using product_area_ref elements wrapped in a product_areas element.
<product_areas>
<product_area_ref id="2001"/>
<product_area_ref id="2002"/>
</product_areas>
The test_fields and test_field elements are used to set the values of the following fields for new or updated tests or test runs: test level, type, testing tool, and framework.
These correspond to ValueEdge list-based fields, which means you can select a value from a predefined list (or add values to that list).
List name | Field value cardinality | Predefined values |
---|---|---|
Test_Level | single | Unit Test, System Test, Integration Test |
Test_Type | multiple | Acceptance, Regression, End to End, Sanity, Security, Performance, UI, API |
Testing_Tool_Type | single | UFT One, UFT Developer, Digital Lab, StormRunner Load, Selenium, Manual Runner, SoapUI, Protractor, LoadRunner Enterprise |
Framework | single | JUnit, UFT One, TestNG, NUnit, JBehave, Karma, Jasmine, Mocha, Cucumber |
To refer to a test field value, use the test_field element and its type and value attributes:
- Set the type attribute to the list name.
- Set the value attribute to one of the list's predefined values, or to any custom string value. If you use a custom value, it is added in ValueEdge as a new display name value so it can be reused.
<test_fields>
<test_field type="Test_Level" value="Integration Test"/>
<test_field type="Test_Type" value="Acceptance"/>
<test_field type="Test_Type" value="My custom test type"/>
<test_field type="Testing_Tool_Type" value="Selenium"/>
<test_field type="Framework" value="Cucumber"/>
</test_fields>
You can specify Environment labels of test runs. These labels differentiate between different runs of the same test and are part of the Last run definition. For details, see How does ValueEdge uniquely identify test runs?
Use the taxonomy element and its type and value attributes, wrapped in an environment element.
- Set the type attribute to the name of the taxonomy type, such as browser. If you use a type that does not yet exist in ValueEdge, it is added as a new type so it can be reused.
- Set the value attribute to one of the list's predefined values or any custom string value (for example: <taxonomy type="Browser" value="Chrome"/>). If you use a custom value, it is added in ValueEdge as a new value so it can be reused (for example: <taxonomy type="MyEnvironmentType" value="MyEnvironmentValue"/>).
Predefined values:
Type | Predefined values |
---|---|
AUT Env | Dev, Production, QA, Staging |
Browser | Chrome, Firefox, IE, Opera, Safari |
DB | IBM DB2, MSSQL, MySQL, Oracle, Postgres |
Distribution | Debian, Fedora, RedHat, SUSE, Ubuntu |
OS | Android, BlackBerryOS, iOS, Linux, MacOS, WinServer2008, WinServer2012 |
<environment>
<taxonomy type="MyEnvironmentType" value="MyEnvironmentValue"/>
<taxonomy type="Browser" value="Chrome"/>
<taxonomy type="DB" value="Oracle"/>
<taxonomy type="OS" value="Linux"/>
<taxonomy type="AUT Env" value="Staging"/>
</environment>
Resolving link conflicts
Follow these rules for resolving conflicts when link elements are set in both the global context and the test result context (see the example after this list):
- If the elements support multiple values (product_areas, backlog_items, taxonomy, and test_fields of type Test_Type), the elements from both contexts are merged for further processing.
- For elements that do not support multiple values, such as release or test_fields with single cardinality:
  - If skip-errors is false (the default), such conflicts are considered an error. If the conflict is in the global context, processing stops. If it is in a single test, that test is skipped.
  - If skip-errors is set to true:
    - If different releases are specified in both contexts, this information is ignored and no release is linked to the test run.
    - If a test characteristic (field) that supports a single value is set in both contexts, only the test result value is used.
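For example, the following payload (a sketch; values are illustrative) sets Test_Type in both contexts. Because Test_Type supports multiple values, the values are merged and the run is tagged with both Regression and Sanity:
<test_result>
  <test_fields>
    <test_field type="Test_Type" value="Regression"/>
  </test_fields>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed">
      <test_fields>
        <test_field type="Test_Type" value="Sanity"/>
      </test_fields>
    </test_run>
  </test_runs>
</test_result>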
Sample payloads
<?xml version="1.0"?> <test_result> <release name="myRelease"/><!--set release by name--> <backlog_items> <backlog_item_ref id="2001"/> <backlog_item_ref id="2002"/> </backlog_items> <product_areas> <product_area_ref id="2001"/> <product_area_ref id="2002"/> </product_areas> <test_fields> <test_field type="Test_Level" value="Integration Test"/> <test_field type="Test_Type" value="Acceptance"/> <test_field type="Test_Type" value="End to End"/> <test_field type="Testing_Tool_Type" value="Selenium"/> <test_field type="Framework" value="Cucumber"/> </test_fields> <environment> <taxonomy type="MyEnvironmentType" value="MyEnvironmentValue"/> <taxonomy type="Browser" value="Chrome"/> <taxonomy type="DB" value="Oracle"/> <taxonomy type="OS" value="Linux"/> <taxonomy type="AUT Env" value="Staging"/> </environment> <test_runs> <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed" started="1430919295889"/> <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testThree" duration="4" status="Skipped" started="1430919319624"/> </test_runs> </test_result>
<?xml version="1.0"?> <test_result> <test_runs> <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed" started="1430919295890"> <release_ref id="1001"/> <!--set release by id--> <backlog_items> <backlog_item_ref id="2001"/> <backlog_item_ref id="2002"/> </backlog_items> <product_areas> <product_area_ref id="2001"/> </product_areas> <test_fields> <test_field type="Test_Level" value="Integration Test"/> <test_field type="Test_Type" value="Acceptance"/> </test_fields> <environment> <taxonomy type="MyEnvironmentType" value="MyEnvironmentValue"/> <taxonomy type="Browser" value="Chrome"/> <taxonomy type="DB" value="Oracle"/> </environment> </test_run> <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testThree" duration="4" status="Passed" started="1430919319624"> <release name="myRelease"/> <!--set release by name--> <backlog_items> <backlog_item_ref id="2003"/> <backlog_item_ref id="2004"/> </backlog_items> <product_areas> <product_area_ref id="2002"/> </product_areas> <test_fields> <test_field type="Test_Type" value="End to End"/> <test_field type="Testing_Tool_Type" value="Selenium"/> <test_field type="Framework" value="Cucumber"/> </test_fields> <environment> <taxonomy type="OS" value="Linux"/> <taxonomy type="AUT Env" value="Staging"/> </environment> </test_run> </test_runs> </test_result>
Sample payload with error details and a run description:
<?xml version='1.0' encoding='UTF-8'?>
<test_result>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testTwo" duration="2" status="Failed" started="1430919316223">
      <error type="java.lang.AssertionError" message="expected:'111' but was:'222'">java.lang.AssertionError: expected:'111' but was:'222'
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
      </error>
      <description>My run description</description>
    </test_run>
  </test_runs>
</test_result>
Sample payload with Gherkin test runs:
<?xml version='1.0' encoding='UTF-8'?>
<test_result>
  <test_runs>
    <gherkin_test_run duration="173757728" name="My Amazing Feature" status="Passed">
      <feature name="My Amazing Feature" path="src/test/resources/My amazing feature_4002.feature" started="1482231569810" tag="@TID4002REV0.2.0">
        <file>#Auto generated revision tag
@TID4002REV0.2.0
Feature: My Amazing Feature

Scenario: Amazing scenario
Given user is logged in
When she looks at this feature
Then she is amazed!
        </file>
        <scenarios>
          <scenario name="Amazing scenario" status="Passed">
            <steps>
              <step duration="173722717" name="Given user is logged in" status="passed"/>
              <step duration="22125" name="When she looks at this feature" status="passed"/>
              <step duration="12886" name="Then she is amazed!" status="passed"/>
            </steps>
          </scenario>
        </scenarios>
      </feature>
    </gherkin_test_run>
    <gherkin_test_run duration="130365" name="dfg f f" status="Passed">
      <feature name="ScriptFileName" path="src/test/resources/gherkin_2008.feature" started="1482231570001" tag="@TID2008REV0.2.0">
        <file>#Auto generated NGA revision tag
@TID2008REV0.2.0
Feature: Another feature

Scenario: Another scenario
Given certain circumstances
When an event happens
Then something occurs
        </file>
        <scenarios>
          <scenario name="ScenarioName" status="Passed">
            <steps>
              <step duration="60817" name="Given MyGiven" status="passed"/>
              <step duration="54430" name="When MyWhen" status="passed"/>
              <step duration="15118" name="Then MyThen" status="passed"/>
            </steps>
          </scenario>
        </scenarios>
      </feature>
    </gherkin_test_run>
    <gherkin_test_run duration="207011" name="My Next Feature" status="Passed">
      <feature name="My Next Feature" path="src/test/resources/my-fancy-feature.feature" started="1482231570005" tag="">
        <file>Feature: My Next Feature

Scenario: Button opens the dialog
Given user is logged in
And she is in the module
When she clicks the button
Then the dialog pops up

Scenario: Dialog is good
Given user is logged in
And the dialog is opened
When she looks at it
Then it looks good
        </file>
        <scenarios>
          <scenario name="Button opens the dialog" status="Passed">
            <steps>
              <step duration="61058" name="Given user is logged in" status="passed"/>
              <step duration="19578" name="And she is in the module" status="passed"/>
              <step duration="15757" name="When she clicks the button" status="passed"/>
              <step duration="17327" name="Then the dialog pops up" status="passed"/>
            </steps>
          </scenario>
          <scenario name="Dialog is good" status="Passed">
            <steps>
              <step duration="44123" name="Given user is logged in" status="passed"/>
              <step duration="12931" name="And the dialog is opened" status="passed"/>
              <step duration="23090" name="When she looks at it" status="passed"/>
              <step duration="13147" name="Then it looks good" status="passed"/>
            </steps>
          </scenario>
        </scenarios>
      </feature>
    </gherkin_test_run>
  </test_runs>
</test_result>
Sample payload that injects runs into a test suite:
<test_result>
  <suite_ref id="3001" external_run_id="regression Q01"/>
  <release name="_default_"/>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="myTest1" duration="82" status="Passed" started="1580375724752"/>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="myTest2" duration="70" status="Passed" started="1580375724752"/>
  </test_runs>
</test_result>