Run SRF tests and analyze results
To run your SRF tests, select an existing test or create a new one, and then click RUN.
The running tests icon in the upper right corner of SRF indicates how many tests are currently running on your tenant. Click the icon to view tests that are currently running or recently finished.
You can view the run results from the HOME, AUTOMATION, or RESULTS tabs. The HOME tab dashboard shows the test result trends and a list of the latest runs. The AUTOMATION tab lets you view and manage the test runs, while the RESULTS tab lets you drill down into the results. This section describes the view from the RESULTS tab.
The RESULTS tab displays details about your test results, such as last run dates and success status.
To filter the displayed results, use the toggles on the left to enable or disable a filter, and then define the filter criteria you want to use.
For example, you might filter by tag to find all results from runs with a specific tag.
To change how the results are sorted, use the SORT BY options at the top right.
Use the search box above the grid to search for a string from your test name or for your run ID.
If your test was run multiple times, click the arrow on the right to view the status of each run, or click the test results to drill down further.
Tip: The run results include all test runs, regardless of where the test was run from, as well as the results of exploratory (manual) sessions.
SRF uses the following test statuses:
| Status | Description |
| --- | --- |
| Success | The test ran successfully, without warnings or errors. |
| Completed | The test run completed, but SRF has no details about the success or failure of the test scripts. For example, your test may have been run remotely in a way that did not report detailed results back to SRF. |
| Failed | The test run finished with failures. For example, an object that the test was looking for was not found. In such cases, you may have defects in your product that need fixing. |
| Error | SRF could not run the test. In such cases, check your code and run your test again. |
| Warning | The test run found scenarios that generated warnings, as defined in the script code itself. For example, you can configure your test to generate warnings for specific checkpoints or assertions. Not relevant for Selenium or Appium testing. |
| Canceled | The test run was canceled by the user or by SRF, such as a test that timed out. |
| Running | The test is currently running. |
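The warning status described above can be sketched with a generic "soft checkpoint" pattern: a check that records a warning instead of failing the run outright. This is a hypothetical illustration in Python; `RunResult`, `soft_check`, and `hard_check` are invented names for this sketch, not part of the SRF or LeanFT API.

```python
# Hypothetical sketch of warning-generating checkpoints. A soft checkpoint
# records a warning and lets the run continue; a hard checkpoint records a
# failure. None of these names come from the SRF API.

class RunResult:
    def __init__(self):
        self.warnings = []
        self.failures = []

    def soft_check(self, condition, message):
        """Record a warning instead of raising, so the run can continue."""
        if not condition:
            self.warnings.append(message)

    def hard_check(self, condition, message):
        """Record a failure; a real framework would typically raise here."""
        if not condition:
            self.failures.append(message)

    @property
    def status(self):
        # Failures outrank warnings when deciding the overall run status.
        if self.failures:
            return "Failed"
        if self.warnings:
            return "Warning"
        return "Success"

result = RunResult()
result.soft_check("Welcome" in "Welcome, user!", "welcome banner missing")
result.soft_check("Logout" in "Home | Profile", "logout link missing")
print(result.status)  # a soft-check miss yields a warning, not a failed run
```

The design point is simply that the script, not SRF, decides which checks are warnings and which are failures.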
On a Results List page, click a status dot for a specific script run to drill down and see more details.
Each green dot represents the status of the relevant script run in a specific environment, such as Firefox 55 or Firefox 57.
For tests with errors or warnings, click the Errors | Warnings drop-down arrow to expand their details.
Click SUBMIT DEFECT at the top to open a defect based on the results you see. For details, see Create defects.
On a Results List page for a specific script, view a performance log for each step in the script, with comparison details for each test environment. Click a status dot again to toggle between the results for each environment.
Below the performance graph, select the view you want to see on each side of the details page:
| View | Description |
| --- | --- |
| Script | The script code for each step. |
| Transcript | A human-readable description of the step. |
| Log | The script run log. |
| Screenshot | Snapshots captured during the script run. For more details, see Screenshots and Drill down to Applitools data. |
| Video | A video recording of the entire test run. Supported for web tests on Windows and Mac. Click DOWNLOAD below the video to download the recording and save it locally. |
Note: This option is enabled only if video was recorded for this test run, as configured in the SRF settings, and the recording has not timed out.
For more details, see Define video recording settings.
When viewing screenshots, note that snapshot captures depend on your testing method and configuration.
| Snapshot timing | Description |
| --- | --- |
| At the start of the step | Snapshots are captured before the step is performed. |
| At the end of the step | Snapshots are captured after the step is performed. |
If your test was run remotely, snapshots are recorded as configured in your remote testing tool. For more details, see Results from remote tests.
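For illustration, here is a minimal sketch of per-step snapshot capture as a remote Selenium script might do it, assuming the Python bindings. `StubDriver` stands in for a real webdriver so the sketch runs without a browser; only the `save_screenshot` method name mirrors the real Selenium API, and the file names are illustrative.

```python
# Hypothetical sketch: in a remote run, the script and the remote tool's
# configuration control snapshot capture, not SRF. The stub driver below
# replaces a real Selenium webdriver so this runs anywhere.
import os
import tempfile

class StubDriver:
    """Stands in for a real Selenium webdriver (no browser needed)."""
    def save_screenshot(self, path):   # same method name as the real API
        with open(path, "wb") as f:
            f.write(b"\x89PNG stub")   # placeholder bytes, not a real PNG
        return True

driver = StubDriver()
outdir = tempfile.mkdtemp()

# Capture one snapshot at the end of each step, as a remote tool might be
# configured to do. Step names are invented for this sketch.
for step in ["open_page", "fill_form", "submit"]:
    driver.save_screenshot(os.path.join(outdir, f"{step}.png"))

print(sorted(os.listdir(outdir)))  # one snapshot file per step
```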
If you have Applitools visual testing enabled, continue with Drill down to Applitools data.
Access your Applitools data from your test results, from either the AUTOMATION or the RESULTS tab.
From your test results, click the status dot you want to view results for.
In the result panes at the bottom, select Script on one side and Screenshot on the other.
Screenshots include images saved both by SRF and Applitools during the test run.
To show or hide the Applitools highlights for differences found in your step, click Show or Hide Applitools data in the Screenshot pane.
- Applitools highlights differences between the expected and actual test results in purple.
- If a visual validation step fails, the SRF run results show this status as Failed. For more details, see Run status reference.
To open the Applitools report for a specific step in your test, scroll down in the Script pane to the Applitools step.
Click Open Applitools report to open the relevant report in Applitools.
For more details, see Applitools visual validations.
If you've recorded your script in SRF, or are using a script uploaded from UFT or LeanFT, you can download a local copy of the run results for any test run from SRF.
Browse to your test results, and drill down to a specific environment.
At the top right, click DOWNLOAD REPORT.
Tip: If you are running your tests remotely, use your remote testing tool to save a local copy of your run results.
If your test was run remotely, there may be additional test steps displayed in your tool's run results that are filtered from SRF. SRF is unaware of these steps and cannot report on their status.
In SRF, tests with such steps may have a status of COMPLETED. This indicates that the test run finished, but that SRF cannot fully report on whether it passed or failed.
Example: This may occur for Selenium assertions, or when the test includes steps performed locally instead of on the SRF browser or device.
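A minimal sketch of why this happens: in a remote Selenium run, browser commands travel to the remote endpoint, but assertions execute only in the local process, so the remote side never learns their outcome. `FakeRemoteDriver` below is a stand-in for `selenium.webdriver.Remote` so the sketch runs without a grid; no real SRF endpoint or capability names are used.

```python
# Hypothetical sketch of a remote run with a local assertion. The remote
# side can only observe the browser commands it receives, which is why SRF
# reports such a run as COMPLETED rather than Success or Failed.

class FakeRemoteDriver:
    """Stand-in for selenium.webdriver.Remote, so the sketch runs anywhere."""
    def __init__(self):
        self.commands = []            # what the remote side could observe

    def get(self, url):
        self.commands.append(("get", url))

    @property
    def title(self):
        self.commands.append(("getTitle", None))
        return "Example Domain"

driver = FakeRemoteDriver()
driver.get("https://example.com")
title = driver.title

# The remote side sees the two browser commands above...
print(len(driver.commands))   # 2

# ...but this pass/fail verdict exists only in the local process, so the
# remote run result cannot reflect it.
assert title == "Example Domain"
```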
Run results for exploratory sessions display details about the session on the left, and the following tabs on the right:
- Storyboard. A description of each of the steps performed during the session. The camera icon indicates the steps where a screenshot was captured. Click the down arrow to display more details about each step.
- Video. A video recording of the entire session, saved as configured in the SRF settings. For details, see Define video recording settings.
If you have saved a particularly helpful exploratory session that is a good candidate for reuse and automated testing, save it as an automated script.
At the top of a results details page for an exploratory session, click AUTOMATE.
The session is saved as a new script, with a name based on the session name and run ID. Add your new script to automation tests to run it using various environments and parameter values.
Note: Automating exploratory sessions run in Internet Explorer or Edge is not currently supported.
Submit a defect
Click SUBMIT DEFECT at the top to open a defect based on the results you see.
For details, see Create defects.