Add automated test results

This topic describes how to add automated test results manually using the REST API. 

Flow

To send test results to the OpenText Core Software Delivery Platform server from other applications: 

  1. Authenticate and sign in.

  2. Retrieve the XSD schema of the payload containing the test results.

  3. Verify that the payload conforms to the XSD schema before pushing the test results.

  4. Push the test results using the test-results resource.

  5. Check the status of the POST.


Retrieve the test results payload schema

Before pushing the test results, retrieve the XSD schema of the payload using a GET operation:

GET .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results/xsd

Use the XSD schema to prepare and validate your payload before sending it. You use the payload to send the:

  • Test and test run entities that you want to create or update.

  • Entities to link to the tests and test runs. You can specify these links globally for all entities reported in the payload, or specifically per test or test run.


Validate the test results payload

Get familiar with the payload XML and how it relates to entities. For sample payloads, see Sample payloads.
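
For orientation, the following is a minimal sketch of a payload that reports a single passed run. The element names follow the Automated run mapping in the table below; the module, package, class, name, and timing values are hypothetical, and the XSD you retrieved remains the authoritative definition of the structure.

<test_result>
  <test_runs>
    <!-- started: epoch time in milliseconds; duration: milliseconds
         (confirm data types against the XSD) -->
    <test_run module="/hello-world" package="com.example.tests"
              class="LoginTest" name="testValidLogin"
              status="Passed" started="1464360000000" duration="3000"/>
  </test_runs>
</test_result>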

Link tests and test runs to other entities in the payload

The following table lists each OpenText Core Software Delivery Platform entity or attribute that you can link to from a test or test run, the entity name to use in REST, and the element or attribute to use in the payload XML.

Entity / attribute    Entity name in REST    Element / attribute in payload XML
Release               release                release or release_ref
Story or Feature      story or feature       backlog_items or backlog_item_ref
Application module    product_area           product_areas or product_area_ref
Test                  test                   test_fields or test_field or test_field_ref
Environment           taxonomy_node          environment or taxonomy or taxonomy_ref
Automated run         run_automated          test_runs or test_run
Build                 ci_build               build_name
CI server             ci_server              server_id
Build job             ci_job                 job_name
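
As an illustration of these mappings, the hedged sketch below links all runs in the report to a release, an application module, and a backlog item, and tags them with environment information. All IDs are hypothetical placeholders for existing entities in your workspace, and the required element order is defined by the XSD.

<test_result>
  <release_ref id="1005"/>
  <product_areas>
    <product_area_ref id="2001"/>
  </product_areas>
  <backlog_items>
    <backlog_item_ref id="3001"/>
  </backlog_items>
  <environment>
    <taxonomy_ref id="4001"/>
  </environment>
  <test_runs>
    <test_run module="/hello-world" package="com.example.tests"
              class="LoginTest" name="testValidLogin"
              status="Passed" started="1464360000000" duration="3000"/>
  </test_runs>
</test_result>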


Report the tests and test runs

Verify that the payload is defined according to these guidelines:

  • Use the test_runs element to wrap the test runs you are reporting.

  • Each child test_run element represents a single executed test, that is, a test run.

  • Do not include multiple test_run elements that refer to the same run (for example, elements with the same values for module, package, class, and name).

  • The maximum number of tests that can be posted at once: 1000.

For details on the elements, see Test results.
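
Following these guidelines, a report with two runs might look like the sketch below. The names and timings are hypothetical; note that the two elements differ in their name attribute, so they report two distinct tests.

<test_result>
  <test_runs>
    <!-- One element per executed test; no two elements share the same
         values for module, package, class, and name -->
    <test_run module="/hello-world" package="com.example.tests"
              class="LoginTest" name="testValidLogin"
              status="Passed" started="1464360000000" duration="3000"/>
    <test_run module="/hello-world" package="com.example.tests"
              class="LoginTest" name="testInvalidPassword"
              status="Failed" started="1464360004000" duration="2500"/>
  </test_runs>
</test_result>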


Set test characteristics and context globally per report

If the tests included in the report share the same characteristics and context, you can specify those globally for all the tests in the report.

  • Set characteristics

    For details, see Test characteristics.

  • Set context

    You can specify the product / project context of a test by associating it with application modules, requirements, or both. For details, see Test results.

  • Set test execution context

    You can specify the context of a test run by associating it with a release, a build, a pipeline, and environment information. For details, see Test results.

Set test characteristics and context individually per test

If the test report contains tests with different characteristics or contexts, you can specify them per test. To do this, use the same elements as you would use for setting the context per report.
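
For example, in the hedged sketch below, the release and environment are set once at the report level, while one run is additionally linked to a specific backlog item. All IDs are hypothetical. If the same element is set both at the report level and on an individual run, the conflict rules described below apply.

<test_result>
  <!-- Report-level context: applies to all runs in this payload -->
  <release_ref id="1005"/>
  <environment>
    <taxonomy_ref id="4001"/>
  </environment>
  <test_runs>
    <test_run module="/hello-world" package="com.example.tests"
              class="CheckoutTest" name="testCheckout"
              status="Passed" started="1464360010000" duration="4100">
      <!-- Run-level context: applies to this run only -->
      <backlog_items>
        <backlog_item_ref id="3001"/>
      </backlog_items>
    </test_run>
  </test_runs>
</test_result>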

Resolve conflicts

Context elements might be set both per report and per test, causing conflicts. To handle these conflicts, see Resolving link conflicts.


POST the test results

Set the Content-Type header to application/xml. Make sure that the XML payload containing the test results conforms to the XSD schema, as described above.

POST .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results

By default, errors that occur while pushing the test results are not ignored and the POST fails. To skip the errors, specify the skip-errors parameter:

POST .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results?skip-errors=true

For details on this parameter, see skip-errors.

Note: You can also use the POST request to update existing test run results. Test runs are identified by the combination of module, package, class, and name. If you POST results that match an existing test run, that run's data (such as status, start time, and duration) is updated, and no new test run is created.
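
For instance, re-posting the hedged sketch below, whose module, package, class, and name match the earlier example, updates that run's status and timing instead of creating a new run.

<test_result>
  <test_runs>
    <!-- Same module, package, class, and name as the earlier run:
         the existing run is updated -->
    <test_run module="/hello-world" package="com.example.tests"
              class="LoginTest" name="testValidLogin"
              status="Failed" started="1464446400000" duration="3400"/>
  </test_runs>
</test_result>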


Check status and results

  • Check for errors that may have occurred. For details, see Test results.

  • Check return status.

    Example: To see the status of request #1206:

    GET .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results/1206

    Response:

    {
      "id": "1206",
      "status": "failed",
      "until": "2016-05-18T05:33:53+0000"
    }

    To see the log of the request, use the ID from the response to the test-results POST (see above).

    GET .../api/shared_spaces/<space_id>/workspaces/<workspace_id>/test-results/1206/log

    Sample response:

    status: failed
    until: 2016-05-18T08:33:53+0300
    Build reference {server: uuid; build_type: junit-job; build_sid: 1} not resolved

    For details, see Test results.

  • Check if the new automated tests and their run results exist in OpenText Core Software Delivery Platform.
