Test Execution Overview
You begin test execution by creating test sets and choosing tests to include in each set. A test set is a group of tests in an Application Lifecycle Management (ALM) project, assembled to achieve specific test goals. As your application changes, you run the manual and automated tests in your project to locate defects and assess quality.
You can run ALM tests in different ways.
Run Tests Using Functional Test Sets
ALM Editions: Functional test sets are available only for users with ALM Edition. For information about ALM editions and their functionality, see ALM editions. To find out which edition of ALM you are using, ask your ALM site administrator.
Tests in Functional test sets are run using server-side execution, so you do not need to be present to initiate or control them. Functional test sets are run via timeslots: you can schedule a test set to run immediately or at a future time. Once you schedule the test set, ALM ensures that the necessary resources are reserved for it. The test set is launched without user intervention and runs in sequence with the input you provided in advance.
- You can schedule the execution of Functional tests or Functional test sets in the Timeslots module. If there are currently available hosts for your test, you can also use the Execution Grid to arrange for tests to run immediately. For details, see Run tests in ALM.
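The reservation idea behind timeslots can be sketched in a few lines: a window of host time can only be booked if it does not overlap an existing reservation. This is a conceptual illustration only, not ALM's actual scheduling code; the `window_is_free` helper and its data shapes are assumptions made for the example.

```python
from datetime import datetime, timedelta

def window_is_free(reservations, start, duration):
    """Return True if [start, start + duration) overlaps no existing
    reservation. Each reservation is a (start, end) datetime pair.
    Hypothetical helper for illustration; not part of any ALM API."""
    end = start + duration
    return all(end <= r_start or start >= r_end
               for r_start, r_end in reservations)

# Example: one host already reserved from 09:00 to 10:00.
day = datetime(2024, 1, 15)
reservations = [(day.replace(hour=9), day.replace(hour=10))]

# A 10:00-11:00 request fits; a 09:30-10:30 request collides.
print(window_is_free(reservations, day.replace(hour=10), timedelta(hours=1)))             # True
print(window_is_free(reservations, day.replace(hour=9, minute=30), timedelta(hours=1)))   # False
```

Scheduling "immediately" is then just requesting a window that starts now; scheduling for later is requesting a future window.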
Functional tests run on testing hosts that are configured in Lab Resources in ALM or Lab Management. To run tests in a Functional test set, you must have testing hosts available to your project. For details about testing hosts, see Testing Hosts Overview.
- When you schedule a test, an appropriate testing host is reserved for it. That host cannot be reserved for another test unless another suitable host can be found for yours.
- ALM manages host allocation dynamically. If the testing host reserved for your test becomes unavailable before your test can be run, ALM automatically reshuffles the remaining testing hosts and, if possible, reallocates another suitable host to your test. For details, see Host Allocation.
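The dynamic reallocation described above can be sketched as: when a reserved host goes down, move each test assigned to it to another idle host that supports the test's requirements. This is an assumed simplification for illustration, not ALM's actual allocation algorithm; all names and data shapes here are hypothetical.

```python
def reallocate(hosts, requirements, assignment, failed_host):
    """Sketch of dynamic host reallocation (illustrative only).
    hosts: host name -> set of purposes the host supports.
    requirements: test name -> required purpose.
    assignment: test name -> reserved host (mutated in place).
    Returns the tests that could not be moved to another host."""
    in_use = set(assignment.values())
    stranded = []
    for test, host in list(assignment.items()):
        if host != failed_host:
            continue  # this test's host is still available
        candidates = [h for h, purposes in hosts.items()
                      if h != failed_host
                      and h not in in_use
                      and requirements[test] in purposes]
        if candidates:
            new_host = candidates[0]
            assignment[test] = new_host
            in_use.add(new_host)
        else:
            stranded.append(test)   # no suitable host remains
            del assignment[test]
    return stranded

hosts = {"host-a": {"UFT"}, "host-b": {"UFT"}, "host-c": {"LoadRunner"}}
requirements = {"login-test": "UFT"}
assignment = {"login-test": "host-a"}

stranded = reallocate(hosts, requirements, assignment, "host-a")
print(assignment)   # {'login-test': 'host-b'}
print(stranded)     # []
```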
Functional test sets are a key component in ALM's Continuous Delivery solution. They facilitate an automated, end-to-end deployment and testing framework that makes application development more efficient, reliable, and quick. For details about how Functional test sets can be used as part of this process, see Deploying and Testing your Application in ALM.
Run Tests Using Default Test Sets
Tests in Default test sets are run using client-side execution. You control the test directly from your local computer. You can run Default test sets manually or automatically in ALM.
To run tests in Default test sets manually:
- Use Sprinter. Sprinter provides enhanced functionality to assist you in the manual testing process.
- Use Manual Runner. If you don't use Sprinter, you can run tests manually with Manual Runner.
When you run a test manually, you follow the test steps and perform operations on the application under test. You pass or fail each step, depending on whether the actual application results match the expected output.
To run tests in Default test sets automatically:
You can run tests automatically from your local machine using Automatic Runner.
When you run an automated test, ALM opens the selected testing tool automatically, runs the test on your local machine or on remote hosts, and exports the results to ALM.
You can also run manual tests automatically. When you run a manual test automatically and specify a remote host, ALM notifies a designated tester by email to run the test on the specified host.
Following test runs, you review and analyze test results. Your goal is to identify failed steps and determine whether a defect has been detected in your application, or if the expected results of your test need to be updated. You can validate test results regularly by viewing run data and by generating reports and graphs. For details, see Test Runs Overview.
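When reviewing run data, the first pass is usually a status tally: how many runs passed, and which ones failed. The sketch below shows that summarization step; the `name`/`status` field names are assumptions for the example, not ALM's actual run schema.

```python
from collections import Counter

def summarize_runs(runs):
    """Count run statuses and list failed runs, as you might when
    reviewing a test set's results. `runs` is a list of dicts with
    assumed "name" and "status" keys (illustrative, not ALM's schema)."""
    counts = Counter(run["status"] for run in runs)
    failed = [run["name"] for run in runs if run["status"] == "Failed"]
    return counts, failed

runs = [
    {"name": "Run_1", "status": "Passed"},
    {"name": "Run_2", "status": "Failed"},
    {"name": "Run_3", "status": "Passed"},
]
counts, failed = summarize_runs(runs)
print(dict(counts))   # {'Passed': 2, 'Failed': 1}
print(failed)         # ['Run_2']
```

A failed run then prompts the decision the text describes: is it a defect in the application, or an expected result that needs updating?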
You can also set a test as a draft run to instruct ALM to ignore the run results. For details, see Draft Runs.
For task details, see Run tests in ALM.