Test results

This topic describes how to send test results using the test-results resource.

Overview

The test-results resource sends test results to the server.

Using this resource, you can do the following:

  • Create or update test and test run entities.

  • Link test entities to backlog items (epics, stories, features, defects), and to application modules.

  • Link test run entities to releases and environment information (using taxonomy).

  • Define test and run characteristics (using the test level, type, testing tool, and framework fields).

Note: You must log in to the server using the Authentication API before using this resource.
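
For example, a sign-in request to the Authentication API might look like this (the credentials shown are illustrative):

POST .../authentication/sign_in
Content-Type: application/json

{"user":"myUser","password":"myPassword"}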


Resources

Send test results

POST .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results

  • The Content-Type header field must be set to application/xml.

  • The content can be compressed in gzip format. In this case, the Content-Encoding header field must be set to application/gzip.

  • The XML payload must conform to the XSD schema as described below.
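
For example, a request sending a gzip-compressed payload might look like this (the server, space, and workspace IDs are illustrative):

POST https://myserver.com/api/shared_spaces/1001/workspaces/1002/test-results
Content-Type: application/xml
Content-Encoding: application/gzip

<gzip-compressed XML payload>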

Response

This is an asynchronous resource that assigns a task to process the payload.

The following responses are possible:

Response Details
202 (Accepted)

The payload conforms to the XSD schema. A JSON object is returned, containing the task status and ID.

If you attempt to inject the same payload more than once, the JSON includes additional fields indicating that the task has already been processed: {"fromOlderPush":"true","id":1103,"until":"2020-01-28T11:29:51+0000","status":"SUCCESS"}

400 (Bad Request)

The payload does not conform to the XSD schema.

503 (Service Unavailable)

The request cannot be handled. The server may be down or overloaded with requests. Resend the request at a later time.

The following reasons may be provided in the response:

  • testbox.queue_full: The queue has reached the limit of test result processing tasks that are allowed for the shared space.

  • testbox.request_limit: The server has reached the limit of test results that it can handle.

Get task status

After you send test results and receive the task ID, you can check the task status:

GET .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results/<task_id>

The response is a JSON object with the following fields.

Field Description

id

ID of the task

status

Status of the task. Possible values:

QUEUED - Task is waiting in the queue.

RUNNING - Task is running.

SUCCESS - Publishing operation finished successfully.

WARNING - Publishing operation succeeded partially. See details in the errorDetails field.

FAILED - Publishing operation did not succeed. See details in the errorDetails field.

ERROR - System error encountered.

fromOlderPush

Optional field. Appears only on an attempt to inject a payload that has already been processed.

until

Deadline for the cache to be valid. If the same payload is injected again, the cached value is used to return the result.
errorDetails

This field may contain several error messages separated by a semicolon (;).

The field might contain global errors that are relevant for all test results, and errors that are related to specific test results.

Example of global error: The release '1022' does not exist; The milestone '1022' does not exist

Example of error related to specific test results, item 0 and item 2: Test[0]: The release '1022' does not exist; Test[2]: The release '1025' does not exist
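
For example, a task that succeeded partially might return a response like this (the values are illustrative):

{"id":1103,"status":"WARNING","errorDetails":"Test[0]: The release '1022' does not exist"}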

Get XSD schema

The XML payload of test results must conform to the XSD schema. To get the XSD schema:

GET .../api/shared_spaces/<shared_space_id>/workspaces/<workspace_id>/test-results/xsd


Query parameters

The test-results resource supports the following parameter.

Parameter Value Description
skip-errors Boolean

Whether parsing ignores recoverable errors and continues to process data. The default is false.

  • true. Parsing ignores references to non-existing entities and any other recoverable errors. All valid data is processed.

  • false. When encountering a recoverable error within a test element, that element is skipped completely, and processing continues with the next element.

    For details on how conflicts between data at different levels are resolved when ignoring errors, see Resolving link conflicts.
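
For example, to ignore recoverable errors and process all valid data:

POST .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results?skip-errors=true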


Payload definition

The payload describes:

  • The test and test run entities that you want to create or update.
  • Entities that you can link to the tests and test runs.

Root element: test_result

The minimal payload is:

<test_result>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed" started="1430919295889"/>
  </test_runs>
</test_result>

test_runs element

The test_runs element contains:

  • test_run, which is translated to automated tests and automated runs.

  • gherkin_test_run, which is translated to Gherkin tests and Gherkin runs.

test_run element

The following table describes the test_run element attributes.

Attribute name Required Target Description

module

No

Component field of test and run

The name of the module or component that contains the test. For example, the name of the Maven module.

package

No

Package field of test and run

The location of the test inside the module. For example, the name of the Java package.

class

No

Class field of test and run

The name of the file that contains the test. For example, the name of the Java class.

name

Yes

Name field of test and run

The name of the executed test. For example, the name of the Java method.

duration

Yes

Duration field of run

The duration of the test execution, in milliseconds.

status

Yes

Status and Native Status fields of run

The final status of the test execution. Possible values: 'Passed', 'Failed', 'Skipped'.

started

No

Started field of run

The starting time of the test execution.

This is the number of milliseconds since the standard base time known as "the epoch": January 1, 1970, 00:00:00 GMT.

If not supplied, injection time is used.

external_report_url

No

External report url field of run

The URL to a report in the external system that produced this run. The value must start with http(s) or td(s).
external_test_id

No

External test id field of test

External reference to the test.

Limitation: In the injection API, the External test id field of a test can be updated only during test creation. After the test has been created, this field can be updated only through the regular API or the UI.

external_run_id

No

External run id field of run. If injection is done with a suite, this also sets the External test in suite id field in the test suite planning.

Used to differentiate between runs of the same test: if there are several test runs belonging to the same test, a separate run is generated for each unique external_run_id, as shown in the example below.

Without external_run_id, several runs of the same test result in a single run, with the older runs considered previous runs of that run.

You can also differentiate test runs by using test characteristics (test level, type, testing tool, and framework) and environment labels (such as DB, OS, browser), as described below. Use the external_run_id attribute if it is not meaningful to differentiate runs by test characteristics or environment labels.

Note: If this attribute is used along with the testing framework, this field is passed back to the CI server when the suite run is run.
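
For example, the following fragment generates two separate runs of the same test, one for each unique external_run_id (the attribute values are illustrative):

<test_runs>
  <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed" external_run_id="run1"/>
  <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="4" status="Failed" external_run_id="run2"/>
</test_runs>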

Note: The characters & and < are illegal in XML attribute values. Escape them as &amp; and &lt; respectively.

Unique identifiers for automated tests

Each test is identified by a unique combination of the following fields: Component, Package, Class, Name.

When a test run is received, the above attributes are checked to associate the run with a specific test. If no existing test matches the run, a new automated test entity is created.
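
For example, the following two runs create (or update) two distinct tests, because the class attribute differs while the other identifying attributes are the same (the values are illustrative):

<test_runs>
  <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed"/>
  <test_run module="/helloWorld" package="hello" class="GoodbyeWorldTest" name="testOne" duration="3" status="Passed"/>
</test_runs>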

Unique identifiers for test runs

Each test can have more than one run, each uniquely identified by Test, Release, Milestone, Program, and Environment tags. These are the test's Last Runs.

The details of a single Last Run are stored for each unique combination of these attributes. Each Last Run accumulates its own history of Previous Runs.

For example, if a test was run on Chrome and Firefox in two different releases, the test would have four last runs.

Creation and update times of tests and test runs

When POSTing test and test run results, checks are made to determine whether a new test or test run is created, or an existing one is updated (simulating a PUT operation). For each reported test run:

  • If a matching test run exists, its data is updated (such as status, starting time and duration).

  • If such a test run does not exist, a new one is created and linked to the relevant test entity.

Reporting errors

(Optional) If a test run finished with an error or exception, you can report the error details in a nested error element as follows:

  • Set the type attribute to the exception type.

  • Set the message attribute to the exception message.

  • Set the element body to the exception stack trace.

<test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testTwo" duration="2" status="Failed" started="1430919316223">
  <error type="java.lang.AssertionError" message="expected:'111' but was:'222'">java.lang.AssertionError: expected:'111' but was:'222'
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
  </error>
</test_run>

Reporting run descriptions

(Optional) You can add a run description in a nested description element as follows:

<test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testTwo" duration="2" status="Failed" started="1430919316223">
  <description>My run description</description>
</test_run>

If error details are also included, the description element should come after the error element.


gherkin_test_run element

The following table describes the gherkin_test_run element attributes.

Attribute name Description

name

The name of the executed Gherkin test. For example, the name of the feature.

duration

The duration of the Gherkin test execution, in milliseconds.

status

The final status of the Gherkin test execution. Possible values: 'Passed', 'Failed', 'Skipped'.

feature

The feature in the Gherkin test. The feature is associated with:

  • path. A path identifying the location of the feature.

  • tag. If the Gherkin test was downloaded from OpenText Core Software Delivery Platform, its script has a tag that contains the test ID and revision in a format like this: @TID4002REV0.2.0.

For details on how unique Gherkin tests and test runs are identified, see Automate Gherkin tests.


Linking tests and test run to other entities

You can link tests and runs to other entities. The link can appear in:

  • Global context (before the test_runs element): all test results have that link.

  • Test result context (inside a test_run element): only the specific test and run have that link.

Links to different entities must appear in the order that the entities are listed below.

All links are optional.

Note: The specified links must already exist. Otherwise, it is considered an error (unless you set the skip-errors parameter to true when POSTing the results), and the test_run containing such a link is skipped.

Link to suite and suite run

If the payload contains a link to a suite, new or updated tests are assigned to the suite, and test runs are injected under the suite run belonging to that suite.

Linking is available only by ID, using the suite_ref element:

<suite_ref id="1001" external_run_id="Regression1" component="my component"/>
  • external_run_id attribute: (Optional) Contains the name of the external set of test results. If external_run_id is specified, the new suite run and inner runs have the name of the specified external_run_id. Otherwise, the new suite run and inner runs have the name of the suite.
  • component attribute: (Optional) Contains the name of a component or characteristic that the suite run is a member of (for example, ALM-domain-project). If the component attribute is missing and the parent suite has a defined component field, that value is used when a new suite run is created.

The suite run used for injection is selected by a combination of the following links:

  • suite run name (external_run_id, or suite name if external_run_id is missing)
  • release
  • milestone
  • program

If a suite run with those links does not exist in the suite, it is created.

Note: Creation of a new suite run requires assignment to a release, so you need to specify a release link.

In addition, if the specified release is assigned to programs, a program link is also required; otherwise, the link to a program is optional. The link to a milestone is optional.

Limitations:

  • You cannot add the same test more than once in the suite planning tab. When injecting two runs of the same test to a suite run, the test is added only once to the suite planning tab, without any specific environment information.
  • Injected runs cannot be re-run. To run the same test again, create a new suite run.
  • The number of runs injected to a suite is limited to 1000 in a single payload. Runs exceeding this limit are ignored.

Link to program

If the payload contains a link to a program, new or updated run entities are linked to the specified program.

Linking is available only by ID, using the program_ref element: <program_ref id="1001"/>

If both program and release links are specified, and the release has assigned programs, the program must be one of the assigned programs.

Link to release

If the payload contains a link to a release, new or updated run entities are linked to the specified release.

  • Use the release element to refer to the release by name: <release name="myRelease"/>

    Linking by name is possible because the release name is unique in the context of a workspace.

  • Use the release_ref element to refer to the release by ID: <release_ref id="1001"/>

  • To specify a default release, use the release element as follows: <release name="_default_"/>

Link to milestone

If the payload contains a link to a milestone, new or updated run entities are linked to the specified milestone.

The milestone link can be defined along with the release link. The specified milestone must belong to the specified release.

Linking is available only by ID, using the milestone_ref element: <milestone_ref id="1001"/>
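
For example, to link new or updated runs to a milestone within a release (the IDs are illustrative):

<release_ref id="1001"/>
<milestone_ref id="1005"/>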

Link to backlog items (Epics, Features, User Stories, Defects)

If the payload contains links to backlog items, new or updated test entities are linked to the specified backlog items. You can specify more than one such link.

Linking is available only by ID, using backlog_item_ref elements wrapped in a backlog_items element.

<backlog_items>
  <backlog_item_ref id="2001"/>
  <backlog_item_ref id="2002"/>
</backlog_items>

Link to application module

If the payload contains links to application modules, new or updated test entities are linked to the specified application modules.

You can specify more than one such link.

Linking is available only by ID, using product_area_ref elements wrapped in a product_areas element.

<product_areas>
  <product_area_ref id="2001"/>
  <product_area_ref id="2002"/>
</product_areas>

Test characteristics

The test_fields and test_field elements are used to set the values of the following fields for new or updated tests or test runs: test level, type, testing tool, and framework.

Technical preview: The testing_tool_type field is technical preview. When calling it, use the ALM-OCTANE-TECH-PREVIEW header.

These fields are list-based fields that correspond to OpenText Core Software Delivery Platform list-based fields. This means you can select a value from a pre-defined list (or add values to that list).

List name Field value cardinality Predefined values

Test_Level

single

Unit Test, System Test, Integration Test

Test_Type

multiple

Acceptance, Regression, End to End, Sanity, Security, Performance, UI, API

Testing_Tool_Type

single

UFT One (for OpenText Functional Testing), UFT Developer (for OpenText Functional Testing for Developers), Digital Lab (for OpenText Functional Testing Lab for Mobile and Web), StormRunner Load (for OpenText Core Performance Engineering), LoadRunner Enterprise (for OpenText Enterprise Performance Engineering), Selenium, Manual Runner, SoapUI, Protractor

Framework

single

JUnit, UFT One (for OpenText Functional Testing), TestNG, NUnit, JBehave, Karma, Jasmine, Mocha, Cucumber

To refer to the test field value, use the test_field element and its attributes: type and value.

  • Set the type attribute to the List name.

  • Set the value attribute to one of the list’s predefined values or any custom string value.

    If you use a custom value, it is added as a new display name value so it can be reused.

<test_fields>
  <test_field type="Test_Level" value="Integration Test"/>
  <test_field type="Test_Type" value="Acceptance"/>
  <test_field type="Test_Type" value="My custom test type"/>
  <test_field type="Testing_Tool_Type" value="Selenium"/>
  <test_field type="Framework" value="Cucumber"/>
</test_fields>

Environment labels

You can specify environment labels for test runs. These labels differentiate between different runs of the same test and are part of the Last Run definition. For details, see Unique identifiers for test runs.

Use the taxonomy element and its attributes, type and value, wrapped in an environment element.

  • Set the type attribute to the name of the taxonomy type, such as browser.

    If you use a type that does not yet exist, it is added as a new type so it can be reused.

  • Set the value attribute to one of the list’s predefined values or any custom string value. (For example: <taxonomy type="Browser" value="Chrome"/>).

    If you use a custom value, it is added as a new value so it can be reused. (For example: <taxonomy type="MyEnvironmentType" value="MyEnvironmentValue"/>)

The following table lists the predefined values.

Type Predefined values

AUT Env

Dev, Production, QA, Staging

Browser

Chrome, Firefox, IE, Opera, Safari

DB

IBM DB2, MSSQL, MySQL, Oracle, Postgres

Distribution

Debian, Fedora, RedHat, SUSE, Ubuntu

OS

Android, BlackBerryOS, iOS, Linux, MacOS, WinServer2008, WinServer2012

<environment>
  <taxonomy type="MyEnvironmentType" value="MyEnvironmentValue"/>
  <taxonomy type="Browser" value="Chrome"/>
  <taxonomy type="DB" value="Oracle"/>
  <taxonomy type="OS" value="Linux"/>
  <taxonomy type="AUT Env" value="Staging"/>
</environment>


Resolving link conflicts

The following rules govern how conflicts are resolved when link elements are set in both the global context and the test result context:

  • If elements support multiple values, like product_areas, backlog_items, taxonomy, and test_fields (Test_Type only), the elements from both contexts are merged for further processing (see the example after this list).

  • For elements that do not support multiple values, like release or test_fields with single cardinality:

    1. If skip-errors is false (the default), such conflicts are considered an error. If the conflict is in the global context, processing stops. If it is in a single test, that test is skipped.
    2. If skip-errors is set to true:

      • If different releases are specified in both contexts, this information is ignored. No release is linked to the test run.

      • If a test characteristic (field) that supports a single value is set in both contexts, then only the test result value is used.
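
For example, in the following payload the environment labels from both contexts are merged, so the run is linked to both Browser=Chrome and DB=Oracle (the values are illustrative):

<test_result>
  <environment>
    <taxonomy type="Browser" value="Chrome"/>
  </environment>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed">
      <environment>
        <taxonomy type="DB" value="Oracle"/>
      </environment>
    </test_run>
  </test_runs>
</test_result>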


Sample payloads

The following are examples of payloads.

All links appear in global context

<?xml version="1.0"?>
<test_result>
  <release name="myRelease"/><!--set release by name-->
  <backlog_items>
    <backlog_item_ref id="2001"/>
    <backlog_item_ref id="2002"/>
  </backlog_items>
  <product_areas>
    <product_area_ref id="2001"/>
    <product_area_ref id="2002"/>
  </product_areas>
  <test_fields>
    <test_field type="Test_Level" value="Integration Test"/>
    <test_field type="Test_Type" value="Acceptance"/>
    <test_field type="Test_Type" value="End to End"/>
    <test_field type="Testing_Tool_Type" value="Selenium"/>
    <test_field type="Framework" value="Cucumber"/>
  </test_fields>
  <environment>
    <taxonomy type="MyEnvironmentType" value="MyEnvironmentValue"/>
    <taxonomy type="Browser" value="Chrome"/>
    <taxonomy type="DB" value="Oracle"/>
    <taxonomy type="OS" value="Linux"/>
    <taxonomy type="AUT Env" value="Staging"/>
  </environment>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed" started="1430919295889"/>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testThree" duration="4" status="Skipped" started="1430919319624"/>
  </test_runs>
</test_result>  

All links appear in test-run context

<?xml version="1.0"?>
<test_result>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testOne" duration="3" status="Passed" started="1430919295890">
      <release_ref id="1001"/>      <!--set release by id-->
      <backlog_items>
        <backlog_item_ref id="2001"/>
        <backlog_item_ref id="2002"/>
      </backlog_items>
      <product_areas>
        <product_area_ref id="2001"/>
      </product_areas>
      <test_fields>
        <test_field type="Test_Level" value="Integration Test"/>
        <test_field type="Test_Type" value="Acceptance"/>
      </test_fields>
      <environment>
        <taxonomy type="MyEnvironmentType" value="MyEnvironmentValue"/>
        <taxonomy type="Browser" value="Chrome"/>
        <taxonomy type="DB" value="Oracle"/>
      </environment>
    </test_run>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testThree" duration="4" status="Passed" started="1430919319624">
      <release name="myRelease"/>      <!--set release by name-->
      <backlog_items>
        <backlog_item_ref id="2003"/>
        <backlog_item_ref id="2004"/>
      </backlog_items>
      <product_areas>
        <product_area_ref id="2002"/>
      </product_areas>
      <test_fields>
        <test_field type="Test_Type" value="End to End"/>
        <test_field type="Testing_Tool_Type" value="Selenium"/>
        <test_field type="Framework" value="Cucumber"/>
      </test_fields>
      <environment>
        <taxonomy type="OS" value="Linux"/>
        <taxonomy type="AUT Env" value="Staging"/>
      </environment>
    </test_run>
  </test_runs>
</test_result>

Payload with error details and run description

<?xml version='1.0' encoding='UTF-8'?>
<test_result>
  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="testTwo" duration="2" status="Failed" started="1430919316223">
      <error type="java.lang.AssertionError" message="expected:'111' but was:'222'">java.lang.AssertionError: expected:'111' but was:'222'
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
      </error>
      <description>My run description</description>
    </test_run>
  </test_runs>
</test_result>  

Payload with Gherkin test run results

<?xml version='1.0' encoding='UTF-8'?>
<test_result>
 <test_runs>
  <gherkin_test_run duration="173757728" name="My Amazing Feature" status="Passed">
   <feature name="My Amazing Feature" path="src/test/resources/My amazing feature_4002.feature" started="1482231569810" tag="@TID4002REV0.2.0">
    <file>#Auto generated revision tag
          @TID4002REV0.2.0
          Feature: My Amazing Feature

	   Scenario: Amazing scenario
		Given user is logged in
		When she looks at this feature
		Then she is amazed!
    </file>
    <scenarios>
     <scenario name="Amazing scenario" status="Passed">
      <steps>
       <step duration="173722717" name="Given user is logged in" status="passed" />
       <step duration="22125" name="When she looks at this feature" status="passed" />
       <step duration="12886" name="Then she is amazed!" status="passed" />
      </steps>
     </scenario>
    </scenarios>
   </feature>
  </gherkin_test_run>
  <gherkin_test_run duration="130365" name="dfg f f" status="Passed">
   <feature name="ScriptFileName" path="src/test/resources/gherkin_2008.feature" started="1482231570001" tag="@TID2008REV0.2.0">
    <file>#Auto generated NGA revision tag
    @TID2008REV0.2.0
    Feature: Another feature
	Scenario: Another scenario
		Given certain circumstances
		When an event happens
		Then something occurs
    </file>
    <scenarios>
     <scenario name="ScenarioName" status="Passed">
      <steps>
       <step duration="60817" name="Given MyGiven" status="passed" />
       <step duration="54430" name="When MyWhen" status="passed" />
       <step duration="15118" name="Then MyThen" status="passed" />
      </steps>
     </scenario>
    </scenarios>
   </feature>
  </gherkin_test_run>
  <gherkin_test_run duration="207011" name="My Next Feature" status="Passed">
   <feature name="My Next Feature" path="src/test/resources/my-fancy-feature.feature" started="1482231570005" tag="">
    <file>Feature: My Next Feature
      Scenario: Button opens the dialog
        Given user is logged in
              And she is in the module
        When she clicks the button
        Then the dialog pops up

      Scenario: Dialog is good
        Given user is logged in
             And the dialog is opened
        When she looks at it
        Then it looks good
   </file>
   <scenarios>
     <scenario name="Button opens the dialog" status="Passed">
      <steps>
       <step duration="61058" name="Given user is logged in" status="passed" />
       <step duration="19578" name="And she is in the module" status="passed" />
       <step duration="15757" name="When she clicks the button" status="passed" />
       <step duration="17327" name="Then the dialog pops up" status="passed" />
      </steps>
     </scenario>
     <scenario name="Dialog is good" status="Passed">
      <steps>
       <step duration="44123" name="Given user is logged in" status="passed" />
       <step duration="12931" name="And the dialog is opened" status="passed" />
       <step duration="23090" name="When she looks at it" status="passed" />
       <step duration="13147" name="Then it looks good" status="passed" />
      </steps>
     </scenario>
    </scenarios>
   </feature>
  </gherkin_test_run>
 </test_runs>
</test_result>   

Payload with links to suite and default release

<test_result>
  <suite_ref id="3001" external_run_id="regression Q01"/>
  <release name="_default_"/>

  <test_runs>
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="myTest1" duration="82" status="Passed" started="1580375724752" />
    <test_run module="/helloWorld" package="hello" class="HelloWorldTest" name="myTest2" duration="70" status="Passed" started="1580375724752" />
  </test_runs>
</test_result>

