Design performance tests
This section provides best practices and instructions for designing a performance test.
Before you begin
Before you start designing a performance test, we recommend that you create scripts, topologies, and monitor profiles, and enable diagnostics (as required).
For details, see Create test assets.
Design a performance test - workflow
This task describes how to design a performance test from the Performance Test Designer.
1. Prerequisites
- Make sure the relevant scripts and tests have been uploaded and saved to LoadRunner Enterprise. For details, see Upload and run scripts or tests in LoadRunner Enterprise. (A minimal example of such a script follows this list.)
- Make sure that there is at least one Controller, one load generator, and one data processor in the host pool of your project (under Resources, select Hosts). If not, contact your administrator to add them.
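To make the first prerequisite concrete, here is a minimal sketch of the kind of Vuser script that gets uploaded: a Web (HTTP/HTML) Action section written in C. The server URL is a hypothetical placeholder, not a value from your project.

    /* Action.c - a minimal Web (HTTP/HTML) Vuser action.
       Sketch only; "http://myserver/index.htm" is a hypothetical URL. */
    Action()
    {
        /* Fetch the application's home page. */
        web_url("home",
            "URL=http://myserver/index.htm",
            LAST);

        return 0;
    }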
2. Create a new performance test

From the LoadRunner Enterprise navigation toolbar, select Create New Test. Define the test and assign it to a test set as described in Create a test.
3. Define a workload for the test
Defining a workload involves creating Vuser groups, distributing Vusers among the Vuser groups, and assigning hosts to the Vuser groups. For task details, see Define test workloads.
4. Define a schedule for the performance test
Schedule how the Vuser groups are to run in the test. For details, see Define a schedule for the test.
5. Configure terminal sessions - optional
When using manual load generator distribution, you can open terminal services sessions on the load generators, enabling you to run multiple GUI Vusers simultaneously on the same application. For details, see Configure terminal sessions.
6. Select or create monitor profiles to monitor the test - optional
In the LoadRunner Enterprise Designer's Monitors tab, click Add Monitor Profile or Add Monitor OFW. The respective tree opens on the right.
- To use existing profiles or monitor-over-firewall agents, select the monitors to add to the test, and click Add selected monitors.
Note: Before you can select monitors for a test, you must configure monitor machines and create monitor profiles. For details, see Create and configure monitor profiles.
Similarly, you must define monitor-over-firewall agents in the system before you can select them to monitor a test.
- To create a new monitor profile or monitor-over-firewall agent, click Create Monitor or Create Monitor OFW.
For a monitor profile: In the Create Monitor Profile dialog box, enter a name and description, and select a folder for storing the profile. Click OK, and create a new monitor profile as described in Create and configure monitor profiles.
For a monitor-over-firewall agent: In the Create Monitor Over Firewall dialog box, enter a name and the machine key, and select a folder for storing the profile and the MI Listener with which the monitor is to connect. Click OK.
After you have created new monitor profiles or monitor-over-firewall agents, select the monitors to add to the test, and click Add selected monitors/monitors over firewall.
For user interface details, see Monitors tab (Performance Test Designer).
7. Select a topology for the test - optional
Note: Before you can select a topology for a test, you must design the topology. To design topologies, see Design topologies.
In the LoadRunner Enterprise Designer's Topology tab, click Select Topology and select a topology for the test. For user interface details, see Topology tab (Performance Test Designer).
8. Enable and configure diagnostics - optional
Enable and configure diagnostic modules to collect J2EE/.NET diagnostics data from the test run. For details, see Diagnostics.
9. Configure runtime settings - optional
You can configure the runtime settings of uploaded Vuser scripts. Runtime settings are applied to Vusers when the script runs. For details, see Configure runtime settings.
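For instance, think time is governed by runtime settings: a script can request a pause, but the runtime settings decide whether that pause is replayed as recorded, multiplied by a factor, capped, or ignored. A minimal sketch (the 10-second value is arbitrary):

    /* The script requests 10 seconds of think time, but the test's
       think-time runtime settings decide whether this is replayed
       as recorded, scaled, limited, or skipped entirely. */
    Action()
    {
        lr_output_message("Simulating a user pausing to read the page");
        lr_think_time(10);
        return 0;
    }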
10. Define service level agreements (SLAs) for the test - optional
Define SLAs to measure performance metrics against performance goals. For details, see Define service level agreements (SLAs).
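SLA goals are typically measured per transaction, and transactions are marked in the Vuser script itself. As a sketch (the transaction name and URL are hypothetical placeholders), an SLA goal such as an average response time threshold for a "login" transaction would be measured over the span between the two transaction calls below:

    /* A per-transaction SLA (for example, an average response time
       goal) is measured over the "login" transaction marked here.
       The transaction name and URL are hypothetical. */
    Action()
    {
        lr_start_transaction("login");

        web_url("login",
            "URL=http://myserver/login",
            LAST);

        /* LR_AUTO sets pass/fail status from the outcome of the steps. */
        lr_end_transaction("login", LR_AUTO);

        return 0;
    }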
11. Save the test

When you save the test, it goes through a validation process. The test is valid only if it contains no errors. For details, see Test Validation Results dialog box.
Note: If you make changes to a test that is linked to a timeslot, the timeslot is automatically updated with these changes.