Performance Model Editor

The Performance Model Editor enables you to configure performance metrics for a virtual service to use during simulation. You can configure the performance for the whole service or for its individual operations.

To access

Use one of the following:

  • In the Virtualization Explorer, double-click a performance model.
  • In the Virtual Service Editor, under Performance Models, select a performance model and click Edit.
Important information
  • Click a value to edit it.
  • The Throughput Limit and Transaction Limit performance metrics are not affected by learned data; learning does not modify these values.
See also

Performance modeling

User interface elements are described below (unlabeled elements are shown in angle brackets).

Common Areas

UI Element Description
<performance model name and description> The name and description of the performance model. Click to edit.
<operations>

Located in the left pane of the editor. Displays a list of the operations in the service associated with the selected performance model.

By default, the service name is selected, and a performance overview is displayed in the main pane of the Performance Model Editor. For details, see Service Level View.

Enter text in the filter box to filter for specific operations in the list.

Select an operation from the list to display its details in the main pane of the Performance Model Editor. For details, see Operation Level View.

Edit Service Description Opens the Service Description Editor. For details, see Service Description Editor.


Service Level View

UI Element Description
Booster

A set of boosters that provides high-level control over the operations selected in the operation table.

Available boosters include:

  • CPU. CPU power multiplication factor.
  • Network. Network throughput multiplication factor.
  • Cluster. Scalability multiplication factor.
  • Expert. Multiplication factors for Response Time, Hit Rate, and Throughput Limit values.
  • None. Turn off all boosters.

Note: You must restart the simulation to apply changes.

<booster controls> The sliding controls and inputs enable you to set the boost level for the selected booster. The setting affects the various performance criteria displayed in the operation table.
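As an illustration, each booster can be thought of as applying a plain multiplication factor to an operation's performance metrics. The sketch below is only an assumption about what such a factor implies; the metric names and the choice of which values the factor scales are illustrative, not the product's actual data model:

```python
# Hypothetical sketch: applying a booster's multiplication factor.
# Metric names and scaling choices are illustrative assumptions.

def apply_booster(metrics, factor):
    """Scale the metrics a booster affects by its multiplication factor."""
    boosted = dict(metrics)
    # More CPU power would shorten response time and raise the hit-rate threshold.
    boosted["response_time_ms"] = metrics["response_time_ms"] / factor
    boosted["threshold_hits_per_s"] = metrics["threshold_hits_per_s"] * factor
    return boosted

base = {"response_time_ms": 3000, "threshold_hits_per_s": 50}
print(apply_booster(base, 2.0))  # factor 2.0 halves response time, doubles threshold
```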
Performance Metrics

Enables you to configure individual performance criteria at a more granular level, per operation. You can set the following:

  • Response Time [ms]. The time for the service to process a request and return a relevant response.
  • Threshold [hits/s]. The maximum number of requests and responses the service can process without any impact on performance.
  • Throughput Limit [MB/s]. The maximum data capacity the service can process.
  • Transaction Limit [transactions/s]. The maximum number of responses that the virtual service can send per second.

To apply the performance changes to the service and all of its operations, select the Boost, Throughput Limit, or Transaction Limit check boxes at the top of the table.

Alternatively, select options separately for the service and per operation.

Click an operation name to open the operation level view for the specific operation.

Note: Throughput Limit and Transaction Limit are not affected by learned data. Learning does not modify these values.
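To make the Transaction Limit concrete: a limit of, say, 2 transactions/s means responses can leave the virtual service no faster than one every 0.5 seconds. The sketch below assumes simple uniform pacing, purely to illustrate what such a cap implies; it is not the simulator's actual throttling strategy:

```python
# Illustrative only: uniform pacing under a transaction limit.

def send_times(num_responses, transaction_limit):
    """Earliest time (in seconds) each response can be sent under the limit."""
    return [i / transaction_limit for i in range(num_responses)]

# With a limit of 2 transactions/s, 5 responses span 2 seconds:
print(send_times(5, 2))  # [0.0, 0.5, 1.0, 1.5, 2.0]
```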

Batch Simulation

Enables you to define a schedule for sending responses back to the client application.

Click an operation name to open the operation level view for the specific operation.

For details, see Batch Simulation.


Operation Level View

Performance Metrics

UI Element Description
<performance graph>

The graph displays the expected performance based on the criteria set for the operation. In Range mode (see below), the expected performance is displayed between the minimum and maximum values that you specified for the range.

Select Show Measured Data to view any recorded performance data in the graph. Note: This option is displayed only after data is recorded for the service.

The graph is interactive. Move the graph elements to show the effects on performance.

<performance criteria>

Displays the advanced performance criteria for the operation with the option to edit them. The following additional criteria are available:

  • Mode. The mode for setting the response time. The response time is the time it takes the service to process a request and return a relevant response.

    • Fixed with tolerance. A fixed response time.
    • Range. A range of possible response times in milliseconds. A random value from this range is used as the response time.
  • Multi Response Interval. When generating multiple responses for a single request, the interval at which responses are sent. (This metric is only available for protocols that support multiple responses.)

    Example: You define the following parameters for use during simulation:

    Response Time = 3000ms

    Multi Response Interval = 100ms

    A request is intercepted during simulation, and the simulator determines that it needs to return 4 different responses. Simulation will look like this:

    [3000ms] The first response is sent.

    [3100ms] The second response is sent.

    [3200ms] The third response is sent.

    [3300ms] The fourth response is sent.

  • Throughput Limit. The maximum data capacity the service can process.
  • Transaction Limit. The maximum number of responses that the virtual service can send per second.

The following metrics are only available for the Fixed with tolerance mode:

  • Tolerance [%]. The acceptable range of variation in performance for the operation.
  • Threshold - Hits per Second. The maximum number of requests and responses the service can process for this operation without impacting performance.
  • Maximum Hits per Second. The maximum number of requests and responses the operation is allowed to process.
  • Maximum Response Time. The maximum time for a response at peak performance levels.

Click a metric to edit its value.
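The Multi Response Interval example above follows a simple rule: the first response is sent when the response time elapses, and each subsequent response follows after one interval. That arithmetic can be sketched as follows (the function name is illustrative):

```python
# Reproduces the worked example: Response Time = 3000 ms,
# Multi Response Interval = 100 ms, 4 responses.

def multi_response_schedule(response_time_ms, interval_ms, num_responses):
    """Time (ms after the request is intercepted) at which each response is sent."""
    return [response_time_ms + i * interval_ms for i in range(num_responses)]

print(multi_response_schedule(3000, 100, 4))  # [3000, 3100, 3200, 3300]
```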

Batch Simulation

UI Element Description
Actual State

Enable batch simulation.

Select a response sending strategy:

  • Sequential (Single Threaded). Sends responses in a single thread, according to the order in which they are simulated.
  • Parallel (Multi Threaded). Sends responses concurrently, according to the number of system CPUs. Responses are sent in a random order.
Scheduling Start

Defines when the virtual service starts publishing message responses.

  • After simulation launch with delay <x>. Set the amount of time after the start of simulation that you want the virtual service to begin sending response messages.
  • At <time x> of the simulation start day. Define a set time on the day that simulation is started to begin sending response messages.
Send Responses

Defines when or how often response messages are sent.

Note: Responses are created immediately after the request is received by the virtual service. This is important, for example, when the virtual service includes service call activity, or date/time generator functions.

Options include:

  • Periodically. Sends messages every <x> amount of time.

    If you do not specify a number in the Number of Messages field, all waiting responses are sent.

  • At defined times. Send <x> messages at the specified time period.

    In the first row, define the number of messages to send when the schedule starts.

    Click Add to add a new row.

    Double-click the time or message boxes to edit. Define time periods and the number of messages to send at each time period.

    Each row represents the amount of time to wait, after the previous time period, before additional messages are sent.

    For the last time period in the schedule, you can leave the Number of Messages field empty; all remaining messages are then sent.

 

Example:  

The schedule is set to start 4 hours after simulation starts.

  • 1st row: When the schedule starts, 1000 messages are sent.
  • 2nd row: One hour later, 2000 messages are sent. It is now 5 hours after the simulation started.
  • 3rd row: Two hours later, 3000 messages are sent. It is now 7 hours after the simulation started.
  • 4th row: Three hours later, all remaining messages are sent. It is now 10 hours after the simulation started.
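The schedule arithmetic in this example can be checked with a short sketch (the data structure is illustrative): each row adds its wait time to the previous row's send time.

```python
# Reproduces the example schedule: start 4 hours after simulation launch,
# then rows waiting 0, 1, 2, and 3 hours after the previous time period.

def schedule_times(start_delay_h, rows):
    """Return (hours after simulation start, message count) per schedule row."""
    t = start_delay_h
    result = []
    for wait_h, messages in rows:
        t += wait_h  # each wait is relative to the previous time period
        result.append((t, messages))
    return result

rows = [(0, 1000), (1, 2000), (2, 3000), (3, "all remaining")]
print(schedule_times(4, rows))
# [(4, 1000), (5, 2000), (7, 3000), (10, 'all remaining')]
```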
