Test planning involves deciding which tests to automate. If you choose to automate a test, you can generate a test script and run the test using Unified Functional Testing, LoadRunner, or Visual API-XP.
Automating a test allows unattended execution of the test at high speed, and it makes the test reusable and repeatable. For example, you can automate functional, benchmark, unit, stress, and load tests, as well as tests that require detailed information about applications.
Consider the following issues when deciding whether to automate a test.
Tests that will run with each new version of your application are good candidates for automation. These include sanity tests that check basic functionality across an entire application. Each time there is a new version of the application, you run these tests to check the stability of the new version before proceeding to more in-depth testing.
Tests that use multiple data values for the same operation (data-driven tests) are also good candidates for automation. Running the same test manually—each time with a different set of input data—can be tedious and ineffective. By creating an automated data-driven test, you can run a single test with multiple sets of data.
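The idea of a data-driven test can be sketched in a few lines: one test routine is executed once per row of input data, instead of being run manually for each data set. This is a minimal illustration, not the scripting API of any particular testing tool; the `discount` function is a hypothetical stand-in for the operation under test.

```python
def discount(order_total):
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Each tuple is one data set: (input value, expected result).
test_data = [
    (50, 50),
    (100, 90.0),
    (200, 180.0),
]

def run_data_driven_test():
    """Run the same check against every data set; collect any failures."""
    failures = []
    for order_total, expected in test_data:
        actual = discount(order_total)
        if actual != expected:
            failures.append((order_total, expected, actual))
    return failures

print(run_data_driven_test())  # [] means every data set passed
```

Adding another scenario means adding one row to `test_data`; the test logic itself never changes, which is what makes the data-driven approach scale well.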
It is also recommended that you automate tests that are run many times (stress tests) and tests that check a multi-user client/server system (load tests). For example, suppose a test must be repeated a thousand times. Running the test manually would be extremely impractical. In this case, you can create an automated test that runs a thousand iterations.
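A thousand-iteration stress run like the one described above reduces to a simple loop that repeats the operation and counts failures rather than stopping at the first one. This is only a sketch of the pattern; `login_and_logout` is a hypothetical placeholder for the operation being stressed.

```python
def login_and_logout():
    """Hypothetical operation under test; returns True on success."""
    return True

def run_stress_test(iterations=1000):
    """Repeat the operation the requested number of times, counting failures."""
    failures = 0
    for _ in range(iterations):
        if not login_and_logout():
            failures += 1
    return failures

print(run_stress_test())  # 0 failures across 1,000 iterations
```

In a real tool the loop body would drive the application under test, and the failure count (with timing data) would feed the test report.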
Generally, the more user involvement a test requires, the less appropriate it is to automate. The following describes test cases that should not be automated:
Usability tests, which provide usage models to check how easy the application is to use.
Tests that you only have to run once.
Tests that you need to run immediately.
Tests based on user intuition and knowledge of the application.
Tests with no predictable results.