Single user performance data
When enabled, you can collect client side breakdown data to analyze user experience for a specific business process. The NV Insights report is a comprehensive report, based on the selected script, that provides information about how your application performs for a specific business process.
After the test runs, the NV Insights report is available from the Single user performance reports page.
NV Insights report known limitations:

|Limitation|Details|
|---|---|
|Load generators|The test must be configured to run on cloud load generators.|
|Number of scripts|You can specify one script per test to generate a client side breakdown report. The following script types are supported:|
|IP Whitelist|When using whitelisted IP addresses, an additional, dedicated IP address for the NV Insights report is required. For details, see Obtain IP addresses provided by Micro Focus.|
Troubleshooting NV Insights report generation

|Issue|Details|
|---|---|
|Your script is configured to run with proxy settings that are inaccessible from outside your local network|Client side breakdown widgets and the report are not generated. Disable the proxy and run the test again. If the issue persists, open a support ticket.|

NV Insights report limitations:

- The report is generated for the location with the highest distribution of Vusers. If the distribution is spread equally among locations, a location is selected randomly.
- If emulation is specified, the report is generated for the network profile that has the highest distribution of Vusers.
- A report might not be generated for the following reasons:
  - The test duration is too short.
  - The script does not contain network actions.
Client side breakdown

When enabled, you can collect navigation timing data on up to five scripts to analyze user experience for a specific business process.
Scripts are run on the location with the highest distribution of Vusers. If the distribution is spread equally among locations, a location is selected randomly.
If emulation is specified, scripts are run on the network profile that has the highest distribution of Vusers.
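The selection rule above (highest Vuser distribution, random tie-break) can be sketched as follows. This is a minimal illustration of the described behavior, not the product's actual code; the function and the `vusers_by_location` mapping are hypothetical:

```python
import random

def select_breakdown_location(vusers_by_location):
    """Pick the location with the highest Vuser distribution.

    If several locations tie for the highest count, one of them is
    chosen at random -- mirroring the tie-break behavior described above.
    """
    if not vusers_by_location:
        raise ValueError("no locations configured")
    peak = max(vusers_by_location.values())
    candidates = [loc for loc, count in vusers_by_location.items()
                  if count == peak]
    return random.choice(candidates)

# Frankfurt clearly has the highest distribution, so it is selected.
print(select_breakdown_location({"Frankfurt": 600, "Oregon": 300, "Sydney": 100}))
# -> Frankfurt
```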
Data is gathered for the following metrics for each transaction in the breakdown script:

|Metric|Description|
|---|---|
|Blocked|Average amount of time for all HTTP(s) requests in the queue waiting for a network connection.|
|DNS|Average amount of time taken for all HTTP(s) requests to resolve a host name.|
|Connect|Average amount of time taken for all HTTP(s) requests to create a TCP connection.|
|Send|Average amount of time taken to send an HTTP request to the server.|
|Wait|Average amount of time taken for all HTTP(s) requests waiting for a response from the server.|
|SSL|Average amount of time for all HTTP(s) requests to negotiate SSL/TLS.|
|Receive|Average amount of time taken for all HTTP(s) requests to read an entire response from the server (or cache).|
|Other|Transaction response time minus the sum of all the metrics above.|
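Conceptually, each phase is the gap between adjacent timestamps on a request, and the residual metric is the transaction response time minus the sum of all phases. A rough sketch of that arithmetic, with purely illustrative values and hypothetical field names (not the product's data model):

```python
# Timestamps (ms from request start) for one illustrative HTTP request.
# Field names are hypothetical, not LoadRunner's data model.
request_timestamps = {
    "queued": 0,           # request enters the connection queue
    "dns_start": 12,       # leaves the queue       -> Blocked = 12
    "connect_start": 30,   # host name resolved     -> DNS     = 18
    "ssl_start": 55,       # TCP connection created -> Connect = 25
    "send_start": 90,      # SSL/TLS negotiated     -> SSL     = 35
    "wait_start": 95,      # request fully sent     -> Send    = 5
    "receive_start": 200,  # first response byte    -> Wait    = 105
    "done": 240,           # full response read     -> Receive = 40
}

order = ["queued", "dns_start", "connect_start", "ssl_start",
         "send_start", "wait_start", "receive_start", "done"]
labels = ["Blocked", "DNS", "Connect", "SSL", "Send", "Wait", "Receive"]

# Each phase is the difference between adjacent timestamps.
phases = {
    label: request_timestamps[order[i + 1]] - request_timestamps[order[i]]
    for i, label in enumerate(labels)
}

# "Other" is the transaction response time minus the sum of all phases.
transaction_response_time = 260
phases["Other"] = transaction_response_time - sum(phases.values())
print(phases)  # phases sum to 240, so Other = 20
```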
Note: Transaction Response Time (TRT) included in Client Side Breakdown is different from TRT included in reports. In Client Side Breakdown, TRT includes Wasted Time, which is time spent during a transaction for internal functionality of the script engine. In reports, TRT does not include Wasted Time.
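The note above reduces to simple arithmetic: the two TRT figures differ by exactly the Wasted Time. A sketch with illustrative values (the variable names are ours, not the product's):

```python
# Illustrative values in milliseconds.
wasted_time = 35    # script-engine internal overhead during the transaction
trt_reports = 1200  # TRT as shown in reports (excludes Wasted Time)

# TRT shown in Client Side Breakdown includes Wasted Time:
trt_client_side_breakdown = trt_reports + wasted_time
print(trt_client_side_breakdown)  # -> 1235
```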
WebPageTest report

For each location, the latest versions of Chrome, Firefox, and Internet Explorer are used, and the network conditions defined in the navigation breakdown page are enforced.
Each browser navigates to the defined URL(s) twice: once in the middle of the test, when the load is at its peak, and once at the end of the test, when the load is over.
After the test is finished, the WebPageTest report for each navigation is available from the Single user performance reports page.
Note: Load tests configured for the WebPageTest report do not support Azure locations.
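The two navigation points described above can be expressed as a simple schedule. A sketch under the assumption that navigations fire at 50% and 100% of the test duration (the exact trigger times are internal to the product and not guaranteed here):

```python
def navigation_schedule(test_duration_minutes):
    """Return the two points (in minutes) at which each browser
    navigates to the defined URL(s): mid-test, when load peaks,
    and the end of the test, when the load is over. Illustrative only.
    """
    return [test_duration_minutes / 2, test_duration_minutes]

print(navigation_schedule(60))  # -> [30.0, 60]
```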