For visual regression testing, NCDIT uses Applitools Eyes, an end-to-end software testing platform powered by visual artificial intelligence.
Features & Benefits
Features
- Applitools Eyes detects differences between two screen displays and can replace hundreds of lines of testing code with a single smart visual scan.
- Applitools’ set of SDKs supports a wide variety of popular web, mobile and desktop test automation frameworks, along with various application driver infrastructures, programming languages and all common platforms, browsers and operating systems. These SDKs do not interact directly with the application under test, so Eyes is completely independent of how the application is implemented and deployed.
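To illustrate the "one smart visual scan" point, the sketch below shows a single checkpoint written against the Python Selenium SDK (the eyes-selenium package). The application name, test name, and URL are placeholders, and the exact API surface should be confirmed against the Applitools documentation for the framework in use.

```python
# A single visual checkpoint with the Applitools Eyes Python Selenium SDK.
# Assumes the "eyes-selenium" package is installed and APPLITOOLS_API_KEY is
# set in the environment; the URL and names below are placeholders.
import os
from selenium import webdriver
from applitools.selenium import Eyes, Target

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = os.environ["APPLITOOLS_API_KEY"]

try:
    eyes.open(driver, app_name="NCDIT Portal", test_name="Home page layout")
    driver.get("https://example.ncdit.gov")   # placeholder URL
    # One checkpoint replaces many element-by-element layout assertions:
    # the SDK uses the driver only to capture a screenshot, so the test is
    # independent of how the page itself is implemented.
    eyes.check("Home page", Target.window())
    eyes.close()   # raises if the Eyes Server found visual differences
finally:
    driver.quit()
    eyes.abort()   # ends the session cleanly if close() was never reached
```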
Benefits
- It can be used by people in engineering, test automation, manual quality assurance, DevOps and digital transformation teams.
Technical Information
The following steps outline the major components of Applitools Eyes and how they interact to run a test and to view and manage the test results (a code sketch of the same flow appears after the steps).
- 1. Testers run the test suite; the test code typically repeats steps 2.1 and 2.2 for each application state.
- 2.1. Simulate user actions (e.g., mouse click, keyboard entry) by using a driver such as Selenium or Appium.
- 2.2. Call an Eyes SDK API to perform a visual checkpoint.
- 2.2a. The Eyes software development kit (SDK) uses the driver to obtain a screenshot.
- 2.2b. The Eyes SDK then sends the image to the Eyes Server, where it and the other checkpoint images are compared to the baseline images stored from previous runs.
- 3. After the images in the test have been processed, the Eyes Server replies with information such as whether any differences were found and a link to the Eyes site where the results can be viewed.
- 4. Testers use the Eyes Test Manager to view the test results, update the baselines, mark bugs and annotate regions that need special handling. After viewing all the results, testers save the baseline, which then becomes the basis for comparison in the next test run.
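The sketch below walks through the same flow in code: driving the application through several states, placing a checkpoint at each one, and reading the Eyes Server's reply. It again assumes the Python Selenium SDK; the result-object attribute names (is_passed, mismatches, url) reflect that SDK and should be verified against its documentation, and the URLs and state names are placeholders.

```python
# Checkpoint loop over several application states, then read the server's
# reply (steps 2.1 through 4 above). Assumes the "eyes-selenium" package;
# URLs and state names are placeholders.
import os
from selenium import webdriver
from applitools.selenium import Eyes, Target

STATES = {                       # placeholder application states
    "Login form": "https://example.ncdit.gov/login",
    "Dashboard": "https://example.ncdit.gov/dashboard",
}

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = os.environ["APPLITOOLS_API_KEY"]

try:
    eyes.open(driver, app_name="NCDIT Portal", test_name="Key pages")

    for name, url in STATES.items():
        driver.get(url)                      # 2.1 drive the app to a state
        eyes.check(name, Target.window())    # 2.2 visual checkpoint

    # Step 3: closing the session returns the Eyes Server's summary;
    # raise_ex=False returns the results instead of raising on mismatches.
    results = eyes.close(raise_ex=False)
    print(f"Passed: {results.is_passed}, mismatches: {results.mismatches}")
    print(f"Review in the Eyes Test Manager: {results.url}")   # step 4
finally:
    driver.quit()
    eyes.abort()
```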