Define a load test

A load test is a test designed to measure an application's behavior under normal and peak conditions. You add and configure scripts, monitors, and SLAs to a load test definition.

Get started

From your Home page, you can select a project or create and edit a test.

Action How to

Select a project

On the Home page, select a project from the menu bar.

Create a test

To create a test, perform one of the following steps:

  • On the Home page, click Create a test.
  • On the Load Tests page, click Create.
Edit a test
  • Highlight a test in the grid.
  • Click Edit or click the test name link in the test grid.
Duplicate a test

Click Duplicate to copy a test definition to a new test.

Note: To use this feature, the test name must be 120 characters or fewer.


Navigation bar

The Load Tests navigation bar provides the following options:

General. Configures basic, run configuration, log, and load generator settings. For details, see Define general test settings.
Scripts. Shows a list of the scripts and lets you set a script schedule. For details, see Configure a schedule for your script and Manage scripts.
Monitors. Shows a list of monitors. For details, see Add monitors.
Distribution. Lets you choose the load generator machines for your Vusers and, optionally, assign scripts and the number of Vusers for on-premises load generators. For details, see On-premises locations.
Rendezvous. Sets up rendezvous points for your Vusers. For details, see Configure rendezvous settings.
SLA. Shows the SLAs for the test. For details, see Configure SLAs.
Single user performance. Collects client-side breakdown data. For details, see Generate single user performance data.
Streaming. Shows a list of the streaming agents. For details, see Configure the Streaming agent. (This option is only visible if data streaming has been enabled for your tenant via a support ticket.)
Schedules. Sets a schedule for the test run. For details, see Schedule a test run.
Runs. Opens the Runs pane, listing the runs for the selected test along with run statistics such as run status, regressions, and failed transactions. For details, see Find out the status of my test run.
Trends. For details, see Trends.


Configure a schedule for your script

On the Scripts page, you configure a schedule for each script.

There are three script schedule modes: Simple, Manual, and Advanced. For all modes, you configure basic settings, which may differ depending on the run mode selected in the General settings.

To configure the Simple, Manual, or Advanced mode settings, click the arrow adjacent to a script to expand its details.


Configure a goal for a load test

You can configure a load test to run in Goal Oriented mode. If you select this option, you configure a goal for the test and when the test runs, it continues until the goal is reached. For more details, see How LoadRunner Cloud works with goals below.

To configure a goal-oriented test:

  1. In the load test's General page, select Run Mode > Goal Oriented. For details, see Run mode.

  2. In the load test's Scripts page, configure the % of Vusers and Location for each script. For details, see Configure a schedule for your script.

  3. In the load test's Scripts page, click the Goal settings button. In the Goal Settings dialog box that opens, configure the following:

    Setting Description
    Goal type

    Select the goal type:

    • Hits per second. The number of hits (HTTP requests) to the Web server per second.
    • Transactions per second. The number of transactions completed per second. Only passed transactions are counted.

    Transaction name If you selected Transactions per second as the goal type, select or enter the transaction to use for the goal.
    Goal value Enter the number of hits per second or transactions per second that must be reached for the goal to be fulfilled.
    Vusers Enter the minimum and maximum number of Vusers to be used in the load test.
    Ramp up

    Select the ramp up method and time. This determines the amount of time in which the goal must be reached.

    • Ramp up automatically with a maximum duration of. Ramps up Vusers automatically to reach the goal as soon as possible. If the goal cannot be reached within the maximum duration, the ramp up stops.

    • Reach target number of hits or transactions per second <time> after test is started. Ramps up Vusers to try to reach the goal within the specified duration.

    • Step up N hits or transactions per second every. Sets the number of hits or transactions per second to add, and the time interval at which to add them. Hits or transactions per second are added in these increments until the goal is reached.

    Action if target cannot be reached

    Select what to do if the target is not reached within the time frame set by the Ramp up setting.

    • Stop test run. Stops the test as soon as the time frame has ended.

    • Continue test run without reaching goal. Continues running the test for the time set in the Duration setting, even though the goal was not reached.

    Duration

    Set the amount of time for the load test to continue running after the time frame set by the Ramp up setting has elapsed.

    Note: The total test time is the sum of the Ramp up time and the Duration.

The following do not support the Goal Oriented run mode:

  • Adding Vusers

  • Generating single user performance data

  • The Timeline tab when scheduling a load test

  • Network emulations for cloud locations

  • Changing the load in a running load test

  • TruClient - Native Mobile scripts

How LoadRunner Cloud works with goals

LoadRunner Cloud runs Vusers in batches to reach the configured goal value. Each batch has a duration of 2 minutes.

In the first batch, LoadRunner Cloud determines the number of Vusers for each script according to the configured percentages and the minimum and maximum number of Vusers in the goal settings.

After running each batch, LoadRunner Cloud evaluates whether the goal has been reached. If it has not, LoadRunner Cloud calculates the number of additional Vusers to add in the next batch.

If the goal has been reached, or there are no remaining Vusers to add, the ramp up ends. Note that the achieved value can be greater than the defined goal value: LoadRunner Cloud does not remove Vusers to reduce the load.

During run time, if the goal was reached but the measured value subsequently drops below the configured value, LoadRunner Cloud tries to add Vusers (if any remain to be added) to reach the goal again.

LoadRunner Cloud does not change think time and pacing configured for scripts.
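The batch behavior can be pictured with a short sketch. The following Python is a simplified illustration of the documented behavior only, not LoadRunner Cloud's actual algorithm; the linear throughput-scaling estimate and the measure_goal_metric helper are assumptions made for the example:

```python
# Illustrative sketch only: this is NOT LoadRunner Cloud's actual algorithm.
# It models the documented batch behavior: run 2-minute batches, add Vusers
# until the goal value is reached or the maximum Vuser count is exhausted,
# and never remove Vusers to reduce the load. measure_goal_metric is a
# hypothetical callback standing in for the hits/transactions-per-second
# measurement taken during a batch.

BATCH_DURATION_SEC = 120  # each batch runs for 2 minutes

def ramp_to_goal(goal_value, min_vusers, max_vusers, measure_goal_metric):
    running = min_vusers  # the first batch starts from the configured minimum
    while True:
        achieved = measure_goal_metric(running, BATCH_DURATION_SEC)
        if achieved >= goal_value:
            return running, achieved  # goal reached; the ramp up ends
        if running >= max_vusers:
            return running, achieved  # no remaining Vusers to add
        # Assumption: throughput scales roughly linearly with Vusers, so
        # estimate the Vuser count needed to hit the goal in the next batch.
        estimate = int(running * goal_value / max(achieved, 1e-9))
        running = min(max(estimate, running + 1), max_vusers)
```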


Add monitors

You can add monitors to your load test.

  • Each load test can have only one type of monitoring: SiteScope on-premises, Dynatrace, New Relic, AppDynamics, or Application Insights.
  • If your test requires SiteScope on-premises or New Relic monitoring, you can add only one monitor per load test.

Open the Monitors pane and perform one of the following actions:

Action How to

Create

Click Create to open the New Monitor dialog box.

For details on creating and configuring monitors, see Monitors.

Add from assets
  1. Click Add from assets.
  2. Select one or more monitors to add to the test definition.
Edit
  1. Select a monitor.
  2. Click Edit and modify the monitor details.


Configure load generator locations

For each script in the test, you configure a location (Cloud or On Premise). This is the location of the load generators that run the script. For details, see Configure a schedule for your script.

You configure the settings for each location type in the Distribution pane:

  1. In the Load Tests tab, open the Distribution pane.
  2. Click the tab for the location type you want to configure: Cloud or On Premise.


Configure rendezvous settings

When performing load testing, you need to emulate heavy user load on your system. To help accomplish this, you can instruct Vusers to perform a task at exactly the same moment using a rendezvous point. When a Vuser arrives at the rendezvous point, it waits until the configured percentage of Vusers participating in the rendezvous arrive. When the designated number of Vusers arrive, they are released.

You can configure the way LoadRunner Cloud handles rendezvous points included in scripts.

If scripts containing rendezvous points are included in a load test, click the Rendezvous tab and configure the following for each relevant rendezvous point:

  • Enable or disable the rendezvous point. If you disable it, it is ignored by LoadRunner Cloud when the script runs. Other configuration options described below are not valid for disabled rendezvous points.

  • Set the percentage of currently running Vusers that must reach the rendezvous point before they can continue with the script.

  • Set the timeout (in seconds) between Vusers for reaching the rendezvous point. This means that if a Vuser does not reach the rendezvous point within the configured timeout (from when the previous Vuser reached the rendezvous point), all the Vusers that have already reached the rendezvous point are released to continue with the script.

Note:  

  • When you select a rendezvous point in the list, all the scripts in which that rendezvous point is included are displayed on the right (under the configuration settings). If the script is disabled, its name is grayed out.

  • If no rendezvous points are displayed for a script that does contain such points, go to Assets > Scripts and reload the script.
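To make the release rules described above concrete, here is a minimal Python sketch of the documented semantics. It is an illustration only; the class and method names are invented for the example:

```python
# Illustrative sketch only: models the documented rendezvous release rules.
# Arriving Vusers wait until either the configured percentage of running
# Vusers has arrived, or no new Vuser arrives within the timeout measured
# from the previous arrival. The class and method names are invented.

import time

class Rendezvous:
    def __init__(self, percent_required, timeout_sec, running_vusers):
        self.required = max(1, int(running_vusers * percent_required / 100))
        self.timeout_sec = timeout_sec
        self.waiting = 0
        self.last_arrival = None

    def arrive(self):
        """A Vuser reaches the rendezvous point."""
        self.waiting += 1
        self.last_arrival = time.monotonic()
        # Release everyone once the required percentage has arrived.
        return "release" if self.waiting >= self.required else "wait"

    def check_timeout(self):
        """Called periodically: release the waiting Vusers if the
        inter-arrival timeout expired since the previous Vuser arrived."""
        if self.waiting and time.monotonic() - self.last_arrival > self.timeout_sec:
            return "release"
        return "wait"
```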


Configure SLAs

Service level agreements (SLAs) are specific goals that you define for your load test run. After a test run, LoadRunner Cloud compares these goals against performance related data that was gathered and stored during the course of the run, and determines whether the SLA passed or failed. Both the Dashboard and Report show the run's compliance with the SLA.

On the SLA page, you set a percentile (the percentage of transactions expected to complete successfully; 90% by default) and the following goals:

Goal Description
Percentile TRT (sec)

The expected transaction response time within which the specified percentage of transactions should complete (3 seconds by default). If more than the allowed percentage of successful transactions take longer than this value, the SLA is considered "broken".

For example, assume you have a transaction named "Login" in your script, where the 90th percentile TRT (sec) value is set to 2.50 seconds. If more than 10% of successful transactions that ended during a 5 second window have a response time greater than 2.50 seconds, the SLA is considered "broken" and shows a Failed status. For details about how the percentiles are calculated, see Percentiles.

Failed TRX (%) The percentage of failed transactions allowed (10% by default). If the percentage of failed transactions exceeds the SLA's value, the SLA is considered "broken" and shows a Failed status.
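A minimal sketch of how these two checks could be evaluated for a single 5-second measurement window follows; the function names and input shapes are invented for the example:

```python
# Illustrative sketch only: evaluates the two documented SLA checks for a
# single 5-second measurement window. The function names and the shape of
# the inputs are invented for the example.

def percentile_trt_broken(response_times, percentile=90, trt_limit_sec=2.5):
    """True if more than (100 - percentile)% of the successful transactions
    in the window took longer than the Percentile TRT limit."""
    if not response_times:
        return False
    slow = sum(1 for t in response_times if t > trt_limit_sec)
    return slow / len(response_times) > (100 - percentile) / 100.0

def failed_trx_broken(passed, failed, max_failed_pct=10):
    """True if the percentage of failed transactions exceeds the limit."""
    total = passed + failed
    return total > 0 and (failed / total) * 100 > max_failed_pct

# Example: 100 "Login" transactions, 12 of them slower than 2.50 s.
window = [1.2] * 88 + [3.1] * 12
print(percentile_trt_broken(window))           # True: more than 10% exceeded 2.50 s
print(failed_trx_broken(passed=95, failed=5))  # False: 5% is within the 10% limit
```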

You can configure one or more SLAs (service level agreements) for each load test.

To configure an SLA:

  1. In the Load Tests tab, choose a test and open the SLA pane.
  2. Set percentile settings:

    1. Select the percentage of transactions expected to complete successfully. The default value is 90%.

    2. Set the Percentile TRT (sec) (transaction response time) value. This is the expected time it takes the specified percent (from the previous step) of transactions to complete.

      For example, if you set your percentile to 90 and the Percentile TRT (sec) value to 2.50 seconds, then if more than 10% of successful transactions that ended during a 5 second interval have a response time higher than 2.50 seconds, an SLA warning is recorded (using the average over time algorithm). For details, see Percentiles.

      The default Percentile TRT (sec) value is 3.00 seconds.

      Tip: If you have run your test more than once, LoadRunner Cloud will display a suggested percentile TRT value. Click the button to use the suggested value.

    3. Select the Stop check box to stop the test if the SLA is broken.
    4. Clear the Enable option adjacent to the Percentile TRT (sec) column if you do not want the percentile TRT (seconds) value to be used during the test run.

      Note: For tenants created from January 2021 (version 2021.01), SLAs will be disabled by default. You can enable them manually or use the Bulk actions dropdown to enable SLAs for multiple scripts. To change this behavior and have all SLAs enabled by default, open a support ticket.

  3. Set failed transaction settings:
    1. Set the Failed TRX (%) value (default 10%). If the percentage of failed transactions exceeds this value, the SLA is assigned a status of Failed.
    2. Select the Enable option if you want the Failed TRX (%) value to be used during the test run.

Tip: Select multiple SLAs at one time and use Bulk actions to apply one or more actions to the set.


Generate single user performance data

When enabled, this feature lets you create client side breakdown data to analyze user experience for a specific business process.

Note: You cannot generate single user performance data for:

  • Load tests configured for the Iterations run mode.
  • Scripts configured to run on on-premises load generators.

Select one or more of the following report types:

Report type How to
NV Insights

Generate a comprehensive report based on the selected script that provides information about how your application performs for a specific business process.

To generate an NV Insights report:

  1. Select the Single user performance pane.
  2. Under the Client Side Breakdown tab, select the NV Insights check box.
  3. Click Select Script and select a script for the report.

Note:

  • It can take several minutes for the NV Insights report to be generated when viewing it for the first time in a load test.

  • The NV Insights report does not support scripts in which WinInet is enabled.

For details, see The NV Insights report.

TRT Breakdown

To generate TRT breakdown data:

  1. Select the Single user performance pane.
  2. Under the Client Side Breakdown tab, select the Transaction Response Time Breakdown check box.
  3. Click Select Scripts and select up to five scripts.
  4. To display the results, open the Dashboard and select the Breakdown widget.

For details, see Transaction response time breakdown data.

WebPage

To generate a WebPage report:

  1. Select the Single user performance pane.
  2. Under the WebPage Test Report tab, enter a URL of your application. You can add up to three URLs.
  3. Select a network speed.

    Network speed Description
    Cable 5/1 Mbps, 28 ms RTT
    DSL 1.5 Mbps/384 Kbps, 50 ms RTT
    FIOS 20/5 Mbps, 4 ms RTT
    56K Dial-Up 49/30 Kbps, 120 ms RTT
    Mobile 3G 1.6 Mbps/786 Kbps, 300 ms RTT
    Mobile 3G - Fast 1.6 Mbps/786 Kbps, 150 ms RTT
    Native Connection No traffic shaping

For details, see WebPage Test report data.


Assign labels

Use labels to help you organize scripts in your repository, or to organize your load tests in the Load Tests and Results pages. The labels you create are common to both scripts and load tests.

Labels can be nested in sub-categories; expand the Labels pane to view them.

You can perform the following actions for labels:

Action How to

Create a label

  1. In the Labels pane, click Create label to open the New label dialog box.
  2. Give the label a name.
  3. Optionally, nest the label under another label.
  4. Select a label color.
Edit a label

From the Labels pane, highlight a label.

Click the vertical ellipsis and select Edit.

Delete a label

From the Labels pane, highlight a label.

Click the vertical ellipsis and select Remove.

Removing a label also removes any sub-labels.

Assign a color to the label

From the Labels pane, highlight a label.

Click the vertical ellipsis and select Color.

Add a sub-label

From the Labels pane, highlight a label.

Click the vertical ellipsis and select Add sub-label.

Assign a label
  1. In the Assets > Scripts grid, select the check box of the scripts you want to label. For Results or Load tests, select one item in the grid.
  2. Expand the Assign labels drop down.
  3. Select one or more labels to assign to the selected items.

Use the Search box to find a label name.

Filter by a label

You can filter scripts, load tests, and results by a specific label.

In the Labels pane, highlight the label or sub-label to search for.

Use the Search box to find a label name.


Schedule a test run

Schedule a test to run on a specific date and time.

Caution: The most up-to-date test settings will be used when the test is launched.

To schedule a test:

  1. From the Load Tests page, open the Schedules pane.
  2. Click the Add schedules button.

    Note: You can configure up to 50 schedules for a load test.

  3. From the date picker, select a date to run the test.
  4. From the time picker, specify a time to run the test.
  5. To apply the schedule, click the toggle to On.

    The Schedule On button appears in the cart section indicating that a schedule is set for this load test.

Tip: On the Load Tests page, a test with a set schedule has an icon displayed next to its status.

On the lower part of the page, underneath the list of set schedules, the following two tabs show additional schedule information:

Runs tab

The Runs tab displays a list of previously scheduled runs and their launch status.

  • Launched. The test was launched successfully. Click the run ID to view results.

  • Failed to launch. The test was not launched successfully.

    Troubleshooting failed launches:

    • The number of Vusers scheduled in your test exceeds the amount remaining in your license.

    • A load generator or monitor that is defined in your test is down.

Timeline tab

Note:  

  • This tab is only displayed for tests whose license mode is set as VU.
  • This tab is not displayed for load tests configured for the Iterations run mode.

The Timeline tab displays the timeline for the selected schedule, as well as for any other schedule in the project that overlaps with the selected schedule.

If the current project uses global licenses (that is, it does not have dedicated licenses assigned to it), then schedules from other global license projects that overlap with the selected schedule are also displayed.

The timeline is colored according to its status, as follows:

  • OK (green). There are no conflicts and this schedule will run as planned.

  • Conflicting (orange). This schedule impacts another schedule and will cause it to breach your license count, but this schedule will still run as planned.

  • Blocked (red). This schedule will breach your license count and may not run.
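As an illustration only (not the actual scheduling logic), the following Python sketch shows one way the three statuses could be derived from overlapping schedules and a Vuser license count; the schedule representation and the earlier-start-wins priority rule are assumptions:

```python
# Illustrative sketch only: NOT the actual scheduling logic. It shows one
# way the three timeline statuses could be derived from overlapping
# schedules and a Vuser license count. The schedule representation and the
# earlier-start-wins priority rule are assumptions.

def classify(schedule, others, license_vusers):
    """schedule/others: dicts with 'start', 'end', and 'vusers' keys."""
    overlapping = [o for o in others
                   if o["start"] < schedule["end"] and schedule["start"] < o["end"]]
    earlier = sum(o["vusers"] for o in overlapping if o["start"] <= schedule["start"])
    later = sum(o["vusers"] for o in overlapping if o["start"] > schedule["start"])
    if earlier + schedule["vusers"] > license_vusers:
        return "Blocked"      # red: this schedule would breach the license count
    if earlier + schedule["vusers"] + later > license_vusers:
        return "Conflicting"  # orange: runs, but pushes a later schedule over
    return "OK"               # green: no conflicts; runs as planned
```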


Generate and download logs

To generate and download logs for your load tests, do the following:

Action How to
Enable logs
  1. Select the Load Tests tab and select a load test.
  2. Click the Settings button to open the General settings page.
  3. In the Data and logs section, select Collect Vuser logs from … load generators. For details, see Data and logs.
Configure your script to generate logs
  1. In VuGen, select VuGen > Runtime Settings > Logs > Log Level and associated options.

    In TruClient, select Runtime settings > Log > Log level and associated options.

  2. Set the Log level to Standard.
Configure your load generator to collect logs (on-premises LGs only)
  1. Under Assets, select the Load Generators tab.

  2. Select a load generator and click Edit.
  3. Turn on the Enable Vuser Logs Collection toggle.
  4. Repeat the above steps for each on-premises load generator for which you want to collect logs.
Retain the log (on-premises LGs only)

To retain the logs on an on-premises machine indefinitely, until you manually delete them:

  1. Open the On-premises load generator configuration tool.
  2. In the Options tab, set the SRL_AGENT_KEEP_VUSER_LOGS parameter to true.

The logs are stored on the load generator machine, in a subfolder with the inj_o_ prefix under the %TEMP% folder. A sketch for locating these folders appears after this table.

Download the log

For details, see Download logs.
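If you need to locate retained logs on a load generator machine, a small script can list the relevant folders. This Python sketch is an illustration based only on the folder convention described above (subfolders prefixed inj_o_ under %TEMP%):

```python
# Illustrative sketch only: lists retained Vuser log folders on an
# on-premises load generator, based on the folder convention described
# above (subfolders prefixed inj_o_ under the %TEMP% folder).

import os
import tempfile

def find_vuser_log_folders():
    temp_dir = tempfile.gettempdir()  # resolves %TEMP% on Windows
    return [os.path.join(temp_dir, name)
            for name in os.listdir(temp_dir)
            if name.startswith("inj_o_")
            and os.path.isdir(os.path.join(temp_dir, name))]

for folder in find_vuser_log_folders():
    print(folder)
```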


Stream script errors to Splunk

You can stream script errors to a Splunk Cloud system during a test run. To set this up, do the following:

Action How to
Configure your Splunk account

In your Splunk account, configure the following for the HTTP Event Collector.

  1. Create a new token
    1. Enter a name for the HTTP Event Collector.
    2. Set the Source type to _json.

    When the new token is created, note the token value that is displayed (you will need this when configuring LoadRunner Cloud).

  2. Enable the token

    Note the HTTP port number that is displayed (you will need this when configuring LoadRunner Cloud).

For details on configuring the HTTP Event Collector, refer to the Splunk documentation. A connectivity check sketch appears below, after the notes and limitations.

Configure your Splunk account details in LoadRunner Cloud

In LoadRunner Cloud:

  1. Navigate to Menu bar > Your user name > Splunk account.
  2. In the dialog box that opens, configure:
    • Your Splunk Cloud URL that was sent to you by Splunk when you created your Splunk instance.
    • The HTTP Event Collector port number.
    • If your account is a managed Splunk Cloud account (rather than a self-service account), select the Managed Splunk check box.
    • The HTTP Event Collector token.

  3. Click Apply.
Enable script error streaming for a load test
  1. Select the Load Tests tab and select a load test.
  2. Click the Settings button to open the General settings page.
  3. In the Data and logs section, select Stream script errors to Splunk. For details, see Define general test settings.

Notes and Limitations

  • Streaming script errors to Splunk is enabled only for tests run in the cloud.
  • You can stream script errors to a Splunk Cloud account only.
  • Only one Splunk account can be configured for a LoadRunner Cloud tenant.
  • During a load test run, only the first 500,000 script errors are sent to the Splunk account.
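Before entering the collector details in LoadRunner Cloud, you may want to verify that the token and port accept events. The following Python sketch posts a test event to the standard Splunk HEC endpoint; the host, port, and token values are placeholders you must replace:

```python
# Illustrative sketch only: posts a test event to the standard Splunk HTTP
# Event Collector endpoint to verify the token and port before entering
# them in LoadRunner Cloud. The host, port, and token are placeholders.

import json
import urllib.request

SPLUNK_URL = "https://example.splunkcloud.com:8088"  # placeholder host and port
HEC_TOKEN = "YOUR-HEC-TOKEN"                         # placeholder token value

event = {"event": {"message": "HEC connectivity check"}, "sourcetype": "_json"}
req = urllib.request.Request(
    SPLUNK_URL + "/services/collector/event",
    data=json.dumps(event).encode("utf-8"),
    headers={"Authorization": "Splunk " + HEC_TOKEN,
             "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # A healthy collector answers 200 with {"text": "Success", "code": 0}.
    print(resp.status, resp.read().decode())
```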


Configure the Streaming agent

You can view streaming data metrics during a test run through LoadRunner Cloud's integration with the InfluxDB database. You can stream either raw or aggregated data.

Note: The streaming of raw data is a technical preview and limited to 5000 Vusers, not including Vusers running on on-premises load generators.

For information on how to set up the integration, see Data streaming (Beta). Once your integration is complete, you can configure the Streaming agent.

To add a Streaming agent:

  1. On the Load Tests page, open the Streaming pane.
  2. Click + Add from Assets.
  3. Select an agent and click Add.
  4. In the Data Type column, choose Raw or Aggregated. (If only one of the data types is enabled for your tenant, the dropdown is disabled and the relevant data type is displayed.)

Once you begin the test run, you can check the streaming data in the InfluxDB database using a monitoring tool such as Grafana.
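For example, once streaming is running you can confirm that data is arriving with a quick query. This Python sketch uses the InfluxDB 1.x client; the connection details, database name, and the "transactions" measurement are placeholders, since the actual names depend on your streaming agent configuration:

```python
# Illustrative sketch only: queries InfluxDB to confirm that streamed data
# is arriving. Uses the influxdb 1.x Python client (pip install influxdb);
# the connection details, database name, and the "transactions" measurement
# are placeholders that depend on your streaming agent configuration.

from influxdb import InfluxDBClient

client = InfluxDBClient(host="influx.example.com", port=8086,
                        username="reader", password="secret",
                        database="loadtest_metrics")  # placeholder values

# Discover which measurements the streaming agent wrote.
print(list(client.query("SHOW MEASUREMENTS").get_points()))

# Sample a placeholder measurement.
for point in client.query("SELECT * FROM transactions LIMIT 10").get_points():
    print(point)
```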


Run the test

Once you configure all of the settings, you are ready to run the load test.

Before you run the test, review your settings in the Run Configuration Summary toolbar.

To run the test:

  1. Click Run Test. The Dashboard opens showing the default metrics for your test run in real time.

    During the test run, the dashboard's toolbar shows summary information such as the elapsed time, number of running Vusers, and so forth.

  2. To pause scheduling during the test run, click the Pause scheduling button on the toolbar. To resume, click the button again. For details, see Pause scheduling during a test run.
  3. To add Vusers during a test run, click the Change Load button. For details, see Change the Vuser load dynamically.
  4. To end a test run before its completion, click Stop Test.


Notes and limitations

  • Before running your load test, always make sure that the monitor defined in the test is running and accessible, so that it can monitor the test and display results in the dashboard.
  • An on-premises load generator can only be used by one running test at a time.
  • The number of Vusers defined in your test must not exceed the maximum number of Vusers defined in your Vuser license.

  • The SiteScope agent can only be used by one running test at a time.

