Manage elastic load generators

This section describes how to set up and manage elastic load generators, enabling users to dynamically assign them to performance tests.

About using elastic load generators

Before you can use elastic load generators, you need to configure the orchestrators and load generator images which are used for dynamic provisioning. Orchestrators are responsible for automating deployment and management of load generators running inside containers.

After the load generator image has been configured, the image can then be assigned to tests in the project from My Performance Center or the Performance Center REST API.

Supported orchestrators and environments

Orchestrator | Supported Images | Supported API Version
Docker Swarm | Windows, Linux | Docker Swarm 1.29 or later
Kubernetes | Linux | Kubernetes 1.7 or later

Performance Center currently supports Kubernetes and Swarm orchestrators with Dockerized images. For more details, see the Kubernetes or Docker Swarm documentation.

Note: Support for Swarm orchestrators is provided in tech preview mode.

Advantages of elastic provisioning

Benefits of using elastic load generators include:

  • Efficient on-demand allocation of resources in response to dynamic workloads, without having to rely on load generators defined in the lab or to reserve load generators in advance.

  • Automated provisioning and de-provisioning of load generators, which are added seamlessly to performance tests.

You manage your orchestrators and load generator images from the Orchestration page. For details, see Set up elastic load generator provisioning on Windows or Linux containers and Add load generator images.

Note: For users to be able to assign elastic load generators to a test, the project must be linked to an orchestrator.

Back to top

Set up elastic load generator provisioning on Windows or Linux containers

  1. Prerequisites

    Download the Linux or Windows (base load generator) image. For details, see Download Dockerized load generator image.

    The following are key requirements for setting up an orchestrator which you should obtain from your IT team:

    • For Kubernetes: full orchestrator URL, Namespace, external storage path (if used), authentication token, and Heapster server URL (if you want to monitor the provisioned containers).

    • For Swarm: full orchestrator URL, external storage path (if used), and authentication token.

  2. Configure the orchestrator

    1. In Performance Center Administration, select Configuration > Orchestration. In the Orchestrators tab, click Add Orchestrator.

    2. Enter the following (all entries for Kubernetes must be in lower case):

      Type
      Select the orchestrator type (Kubernetes or Swarm).

      Full URL
      Select the connection type (http/https), and enter the full URL, including port, in the format: http(s)://<server>:<port>.
      The default connection type is https for Kubernetes, and http for Swarm.

      Orchestrator Name
      Enter a name for the orchestrator.

      Namespace (Kubernetes only)
      Enter the name for the namespace. This is a private space where your containers are created.
      Example: pc

      Token (Kubernetes only)
      Enter the orchestrator bearer token used to authenticate API requests.
      Note: After you save the orchestrator settings, the token is hidden (displayed as asterisks).

      API Version (Swarm only)
      Enter the Swarm API version.
      Note: After every Docker Swarm upgrade, you need to update the API version.

      Use External Storage
      Select this option to store the performance test run files on an external machine. We recommend using this option to prevent loss of, or inaccessibility to, results if result collation fails (it enables you to access the files even after the container has been removed). For more details, see Retain run results after a performance test ends.

      Path (Available when Use External Storage is selected)
      For Kubernetes: Enter the path of the environment for storing all the run files.
      For Docker Swarm:
      • For load generators running on Linux containers, test data is saved to the default volume location: /var/lib/docker/volumes/<DOMAIN_NAME-PROJECT_NAME-QC_RUN_ID>/_data
      • For load generators running on Windows containers: C:\ProgramData\docker\volumes\<DOMAIN-PROJECT-ID>\_data

      Use Monitoring
      Enables collecting monitoring metrics on the orchestrators. You can collect metrics on Kubernetes container clusters using Heapster, and on Docker Swarm using the Prometheus monitoring solution.

      Heapster Server (Available for Kubernetes only when Use Monitoring is selected)
      Enter the full URL of the Heapster server, including port, in the format: <server_name>:<port>.

      Prometheus (Available for Swarm only when Use Monitoring is selected)
      Enter the full URL of the Prometheus server, including port, in the format: <server_name>:<port>. Enter a user name and password for the Prometheus server.

      Resource Limits
      • Memory (GB). Specify how much of the available memory (in gigabytes) a container can use during a performance test run.
      • CPUs. Specify how much of the available CPU resources a container can use during a performance test run.
      Note: When users configure elastic load generator properties, they cannot enter values that exceed these limits. If the administrator reduces the project limits below the value a user configured in the Assign Load Generators to Groups dialog box, the user's settings are automatically adjusted to fall within the new limits.

      Description
      Enter a description of the orchestrator.
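The automatic adjustment described in the Resource Limits note can be sketched as a simple clamp. The helper below is illustrative only, not a Performance Center API:

```python
def clamp_to_limits(user_value, project_limit):
    """Clamp a user-configured resource value to the project limit.

    Hypothetical helper illustrating the behavior described above: when an
    administrator lowers a project limit below a value a user has already
    configured, the user's setting is adjusted down to the new limit.
    """
    return min(user_value, project_limit)

# A user configured 8 GB of memory; the administrator lowers the limit to 6 GB.
adjusted_memory = clamp_to_limits(8, 6)   # -> 6
# A user configured 2 CPUs; the 4-CPU limit is not exceeded, so 2 is kept.
adjusted_cpus = clamp_to_limits(2, 4)     # -> 2
```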
  3. Assign load generator images to the orchestrator

    Click Assign Images to open the Assign Images to the Orchestrator dialog box.

    Select images to assign to the orchestrator, and click Assign.

    If no image is assigned, Performance Center uses the default performancetesting/load_generator_linux image from Docker Hub.

    For details on creating load generator images, see Add load generator images.

  4. Assign projects to the orchestrator

    1. In the Linked Projects area, click the Assign button to open the Assign Projects to the Orchestrator dialog box.

    2. Select the projects you want to assign to the orchestrator, and click Assign. The selected projects are added to the Linked Projects grid.

      Note: You can assign multiple projects to an orchestrator, but only one orchestrator to a project.

    3. You can see project details, and add or remove linked projects using the Linked Projects grid.

      Field | Description
      Assign Projects | Assigns the selected projects to the orchestrator.
      Remove Assigned Project | Removes the selected project from the orchestrator.
      Refresh | Refreshes the grid so that it displays the most up-to-date information.
      ID | The project's ID.
      Project Name | The name of the project.
      Domain | The domain in which the project was created.
  5. Click Save to save the settings. The orchestrator is added to the Orchestrators grid.

Back to top

Add load generator images

You can add and manage load generator images that are used for elastic provisioning in the Docker Images tab.

  1. Prerequisites.

    Download the Linux or Windows (base load generator) image from Docker hub and set up the host as described in Set up elastic load generator provisioning on Windows or Linux containers.

  2. In Performance Center Administration, select Configuration > Orchestration and click the Docker Images tab. The Docker Images page opens.

  3. To add a new image, click the Add Docker button.

  4. Add a name and description (optional) for the image, and select the image type (Windows or Linux). Click Save. The image is added to the Docker Images grid.

Back to top

Configure a performance test with elastic load generators

After configuring the orchestrator and adding the load generator image, you can configure the performance test either manually or by using the REST API.

To configure a performance test manually:

In My Performance Center, assign Dockerized load generators to a test. For details, see Assign elastic load generators to a test.

To configure a performance test using the REST API:

  1. Create or update load tests with elastic load generators, by selecting dynamic type hosts for your groups.

    Add elastic load generators named DOCKER1, DOCKER2 and so forth to the <Host> XML field.

    Example: 
    
    <Hosts>
      <Host>
        <Name>DOCKER1</Name>
        <Type>dynamic</Type>
      </Host>
      <Host>
        <Name>DOCKER2</Name>
        <Type>dynamic</Type>
      </Host>
    </Hosts>

    For API details, see test entity XML in the Performance Center Administration REST API Guide.
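As a sketch, the <Hosts> block above could be generated programmatically before it is sent to the REST API. The element names follow the example above; the helper function itself is illustrative:

```python
import xml.etree.ElementTree as ET

def build_dynamic_hosts(count):
    """Build a <Hosts> element containing DOCKER1..DOCKERn hosts of type
    "dynamic", matching the XML example above."""
    hosts = ET.Element("Hosts")
    for i in range(1, count + 1):
        host = ET.SubElement(hosts, "Host")
        ET.SubElement(host, "Name").text = f"DOCKER{i}"
        ET.SubElement(host, "Type").text = "dynamic"
    return ET.tostring(hosts, encoding="unicode")

xml_payload = build_dynamic_hosts(2)
# xml_payload contains <Name>DOCKER1</Name> and <Name>DOCKER2</Name>,
# each with <Type>dynamic</Type>.
```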

  2. Continue from the Run the performance test step in Add Dockerized load generators to tests from the REST API.

Back to top

Customize timeout settings for elastic load generators

When provisioning elastic load generators, you can customize the timeout settings.

  1. On the Performance Center server, open the pcs.config file located in <PC server installation directory>\dat\.

  2. Enter a new timeout value in ElasticProvisionTimeoutInSeconds. The default timeout value is 300 seconds (5 minutes).
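The edit in step 2 can be scripted. The sketch below assumes the setting appears in pcs.config as ElasticProvisionTimeoutInSeconds="<number>"; verify the actual file format on your Performance Center server before using it:

```python
import re

def set_provision_timeout(config_text, seconds):
    """Replace the ElasticProvisionTimeoutInSeconds value in config text.

    Illustrative sketch only: it assumes the setting is written as
    ElasticProvisionTimeoutInSeconds="<number>" (or name=number / name:number)
    somewhere in pcs.config.
    """
    return re.sub(
        r'(ElasticProvisionTimeoutInSeconds"?\s*[=:]\s*"?)\d+',
        lambda m: m.group(1) + str(seconds),
        config_text,
    )

sample = '<PCSSettings ElasticProvisionTimeoutInSeconds="300" />'
print(set_provision_timeout(sample, 600))
# <PCSSettings ElasticProvisionTimeoutInSeconds="600" />
```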

Back to top

Retain run results after a performance test ends

To ensure that run results are retained after a performance test ends, you should select Collate results as the Post-Run Action. For details, see View or modify the project settings.

If you run a test without collating results, or if collation fails:

• If external storage is not used, the result data is lost, since elastic load generators are freed immediately after the run finishes.

• If external storage is used, you need to collate the results manually, as follows.

  1. On each node running the Dockerized load generator, navigate to the \Results folders for the run.

    • For load generators running on Linux containers, test data is saved to the default volume location:

      /var/lib/docker/volumes/<DOMAIN_NAME-PROJECT_NAME-QC_RUN_ID>/_data/<random_data_folder>/netdir/res

    • For load generators running on Windows containers:

      C:\ProgramData\docker\volumes\<DOMAIN-PROJECT-ID>\_data\<random_data_folder>\netdir\res

    Note: The run data is saved on the specific Swarm node where the container is created, so for each test run, the data might be spread between different Swarm nodes (in case load generators run on different nodes).
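The volume locations above can be assembled from the domain, project, and run ID. The sketch below follows the naming pattern shown in the paths above; the example values are illustrative:

```python
from pathlib import PurePosixPath, PureWindowsPath

def linux_results_volume(domain, project, run_id):
    """Default Linux volume path pattern shown above:
    /var/lib/docker/volumes/<DOMAIN_NAME-PROJECT_NAME-QC_RUN_ID>/_data"""
    return str(PurePosixPath("/var/lib/docker/volumes",
                             f"{domain}-{project}-{run_id}", "_data"))

def windows_results_volume(domain, project, run_id):
    """Default Windows volume path pattern shown above:
    C:\\ProgramData\\docker\\volumes\\<DOMAIN-PROJECT-ID>\\_data"""
    return str(PureWindowsPath(r"C:\ProgramData\docker\volumes",
                               f"{domain}-{project}-{run_id}", "_data"))

print(linux_results_volume("DEFAULT", "MyProject", 260))
# /var/lib/docker/volumes/DEFAULT-MyProject-260/_data
```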

  2. Copy *.eve and *.map files to the \Results folder for the Controller run.

    For example, on a Linux machine running the dockerized load generators:

    vm09348_dkr_32775_1.eve

    vm09348_dkr_32775_1.map

    The default path for the run’s results folder on the Controller:

    \<LR Installation folder>\LoadRunner\orchidtmp\Results\<RunFolder>\Run_RunID\res<RunID>.

    Example:

    C:\Program Files (x86)\Micro Focus\LoadRunner\orchidtmp\Results\My_Test_Run_Results\Run_260\res260
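The manual copy in this step can be sketched as follows. The paths in the commented call are illustrative; substitute the actual volume results directory and Controller results folder:

```python
import shutil
from pathlib import Path

def copy_run_artifacts(volume_res_dir, controller_res_dir):
    """Copy the *.eve and *.map files from a load generator's results
    directory into the Controller's results folder, as described above."""
    src = Path(volume_res_dir)
    dst = Path(controller_res_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for pattern in ("*.eve", "*.map"):
        for f in src.glob(pattern):
            shutil.copy2(f, dst / f.name)
            copied.append(f.name)
    return copied

# Example (illustrative paths):
# copy_run_artifacts(
#     "/var/lib/docker/volumes/DEFAULT-MyProject-260/_data/abc123/netdir/res",
#     r"C:\Program Files (x86)\Micro Focus\LoadRunner\orchidtmp"
#     r"\Results\My_Test_Run_Results\Run_260\res260",
# )
```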

  3. Collate the results.

    • For a test where no collation has been performed yet, right-click the run and select Collate results (the collation will fail because the provisioned load generators no longer exist).
    • For a test that failed to collate, right-click the run, and select Recover results to recover and collate the results of the failed test run.

    For more details, see Manage results.

  4. Select Analyze results to generate reports.

    Note: Dockerized load generators with Network Virtualization are not supported, and therefore no NV Insights report is generated.

Back to top

Notes and limitations

  • Network Virtualization is not supported when using elastic load generators.

  • Setting a Resource Limit when provisioning Windows load generators using a Docker Swarm orchestrator is currently supported only in the following instances (a blank cell indicates that the resource type is not supported):

    Performance Center version Memory CPU
    12.60
    Later than 12.60
  • For issues when running tests without collating results, or if collation fails, see Retain run results after a performance test ends above.

Back to top

See also: