Creating a single test for all browser testing
When you test your application or Web page on multiple browsers, you want to verify that it behaves the same in each browser type (unless designed otherwise). When you set up your test, you would also expect the same test to work for each browser type, with minimal maintenance and updating to account for browser differences. In practice, however, creating a single test for cross-browser testing presents a set of challenges.
In UFT, a test of your application accesses your test's object repository or repositories, which contain the test objects needed to identify the application's objects. Ideally, you would create a single object repository, or one object repository per application section or Web page. You could then run the test on all browser versions using that single object repository or set of object repositories, and UFT would identify the objects in the application or Web page without issue.
In reality, providing the correct objects in the correct object repository is not always simple. First, there are object identification issues: if UFT identifies a certain object very differently between browser types or versions, you may need to create separate repositories for each browser type so that UFT can find the right object. Then, when the test runs on a specific browser, you need to make sure the correct object repository is used for that run.
When you have a scenario such as this, you can enable UFT to dynamically add an object repository at the beginning of the test run. For details, see Dynamically load an object repository during the test run.
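As a sketch of this approach, the following snippet uses the RepositoriesCollection object to load a browser-specific repository at the start of the run. The repository file paths and the use of a "BROWSER_TYPE" environment variable are assumptions for this example; adapt them to your own setup.

```vbscript
' Load a browser-specific object repository at the start of the run.
' The repository paths and the BROWSER_TYPE environment variable
' are placeholders for this sketch.
Dim repoPath
Select Case Environment.Value("BROWSER_TYPE")
    Case "IE"
        repoPath = "C:\Tests\Repositories\MyApp_IE.tsr"
    Case "Firefox"
        repoPath = "C:\Tests\Repositories\MyApp_FF.tsr"
    Case Else
        repoPath = "C:\Tests\Repositories\MyApp_Chrome.tsr"
End Select

RepositoriesCollection.RemoveAll    ' clear any repositories loaded earlier
RepositoriesCollection.Add repoPath ' attach the repository for this browser
```

Placing this logic in an initialization action (or a function library) keeps the rest of the test identical across browser types.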
In some cases, you may also have dynamically created objects that are displayed on a page as a result of previous operations performed in the application. However, in the process of creating the test and the object repositories, these objects are not identified (as they do not exist when UFT is initially learning the application). Like many other objects, these objects can be created and identified very differently between browsers, making it even more difficult for UFT to identify them.
To help UFT identify these objects, you can use descriptive programming. For details, see Programmatic descriptions.
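For example, a programmatic description identifies an object by its property values at run time instead of relying on the object repository. The property values below (the page title pattern and link text) are illustrative assumptions, not taken from a real application:

```vbscript
' Identify a dynamically created link by its run-time properties
' instead of the object repository. Property values are illustrative.
Dim desc, pg
Set desc = Description.Create()
desc("micclass").Value = "Link"
desc("text").Value = "Confirm booking"

Set pg = Browser("title:=.*MyApp.*").Page("title:=.*MyApp.*")
If pg.Link(desc).Exist(10) Then ' wait up to 10 seconds for the link
    pg.Link(desc).Click
End If
```

Because the description is evaluated at run time, the same step can match the object even if different browsers render or name it slightly differently, as long as the described properties hold in each browser.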
Each browser version can also have dynamic updates to the browser or page as part of its normal workflow. For example, alert dialogs differ between browser versions, making it difficult for UFT to know how to recognize, handle, or ignore them. As a result, if you create test steps to close the alert dialog using one browser type, the other browser types may have trouble recognizing the dialog or performing steps on it. In other cases, these dialogs do not exist in the other browsers at all. The browser may not even let you continue the test until the alert dialog is closed, causing the test to fail simply because it did not recognize the dialog and perform the correct steps on it.
For browser dialog boxes, there are special methods that help you handle pop-up dialogs appropriately, including the Browser.HandleDialog, Browser.GetDialogText, and Browser.DialogExists methods. (Note that even these methods do not behave exactly the same in all browsers.)
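A minimal sketch of using these methods defensively, so the same steps work whether or not the dialog appears in a given browser (the browser title pattern is an assumption):

```vbscript
' Close a browser alert dialog only if it is actually displayed.
' Dialog text and button behavior vary by browser, so check first.
Dim br
Set br = Browser("title:=.*MyApp.*")
If br.DialogExists() Then
    Print "Dialog text: " & br.GetDialogText() ' log what the dialog said
    br.HandleDialog micOK                      ' accept/close the dialog
End If
```

Guarding the HandleDialog step with DialogExists prevents the test from failing on browsers where the dialog is never shown.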
In other cases, you can add steps to your test to account for browser-specific behavior. For details, see Add steps for browser specific behavior.
There are scenarios where the same application operation - such as pressing the Back button - results in very different behavior between browsers. For example, in an appointment booking application, pressing the Back button in Internet Explorer returns you to the previously viewed page, while pressing Back in Firefox or Chrome logs you out of the application.
Because of such differences, your test must be prepared to address the different behaviors. There are some potential solutions to help:
Navigate to a specific URL/location in the application instead of a previous/next page (such as in the example above)
Insert Wait steps that pause the test until an application or an application object achieves a certain state (which can be checked using the Exist property for an object)
For details, see Add steps for browser specific behavior.
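The two suggestions above can be combined in a sketch like the following. The URL and object names are placeholders, not taken from a real application:

```vbscript
' Instead of relying on the browser's Back button (whose behavior
' differs between browsers), navigate directly to a known URL, then
' wait until a page object exists before continuing.
Dim br
Set br = Browser("title:=.*MyApp.*")
br.Navigate "http://myapp.example.com/appointments" ' placeholder URL

' Exist accepts a timeout (in seconds) and polls until the object appears.
If br.Page("title:=.*Appointments.*").WebTable("name:=results").Exist(20) Then
    ' ... continue with the test steps
Else
    Reporter.ReportEvent micFail, "Open appointments page", _
        "Results table was not displayed within 20 seconds"
End If
```

Navigating to an explicit URL makes the step deterministic in every browser, and the Exist check ensures the page has reached the expected state before the next steps run.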