What lies beneath the cool tools? Iryna Suprun shares her analysis of automation tools in a market bursting with buzzwords.

I just spent some time googling, trying to find out who created the first-ever automated test, and where. No luck. The same goes for the year it happened. Most of us just agree that automation has been around in one form or another for a long time; I would say more than twenty years. Mass adoption of agile methodology triggered mass adoption of automation, so I would say we have been actively automating for the last ten to fifteen years. That is ages in the software industry.

Some would expect that by this time we would have mastered test automation, but the reality is different. Many organizations are still struggling even with unit testing. Flaky functional and end-to-end tests are a constant topic of discussion between test engineers and leadership teams; test maintenance takes time and dedication while causing a lot of pain. There is never enough coverage, and there are builds that take hours because test execution lasts hours.

Part of this is caused by the growing complexity of the software under test. Many new technologies emerged in a very short time. First we got cell phones, then wearables and IoT devices. We have containers, we have clouds, AI, and Big Data. All of this needs to be tested, and testing tools have always stayed a little bit behind, to be honest. Many companies built in-house automation tools and frameworks to compensate.

Something changed a couple of years ago. I believe we are now reaching a new age of automation, where automation tools pop up on the market almost every month. Or rather, they are not exactly new, but newer. Most of them have been around for a few years and only recently became mature enough to be considered a replacement for older technologies. I would not use a tool that has two customers and is six months old, because automation is a big commitment. I don’t know about you, but I don’t want to invest in something that might be gone in another year, even if it looks very good.

These new(er) tools are all “industry-leading”, “game-changing”, “trusted by many”, “AI-based”, “the future of testing”, “fully autonomous”, and there are many of them. I think we will see a lot of mergers and acquisitions in this industry soon (it has already started to happen). We will end up with a few solid products, but for now it is hard to see who will win, because so much can still go wrong, or right.

Today I would like to talk about tools that use the most popular technology of the moment: AI (Artificial Intelligence). There are many AI testing tools, and their maturity, usability, and delivered promises vary. They use Supervised Learning, Unsupervised Learning, Reinforcement Learning, and Deep Learning algorithms to automatically generate tests, make test creation easy, and decrease maintenance time. Almost every test automation tool on the market nowadays uses AI in one form or another, or claims it does.

There are two main categories of these tools. The first category uses AI/ML in a supporting role. The AI algorithms in these tools are designed to ease tasks that are hard to do manually due to the physical limitations of human beings. They also help with testing that previously relied on subjective measurements (audio/video quality, for example) by turning those subjective ratings into objective numbers. Test authoring in this category usually requires heavy user involvement: users either need to write code or record the test. The second category of testing tools uses AI for test generation. These tools use the software under test itself to create tests, whether by analyzing code or production logs, gathering clickstreams from actual users, or traversing links in the app.

I think AI/ML pays off the most in the following testing tool features:

  1. Test recording. Although many traditional testing tools also provide the ability to record tests, they do not collect data during the recording process. The number of data points for a single UI element can reach 30-50 entries. This information is used later to improve the stability and maintainability of the tests (a minimal sketch of this idea follows the list).
  2. Autonomous creation of automated test cases, based on traffic from real users, logs, and analysis of application functionality or code.
  3. Self-healing. Automation maintenance is the most time-consuming, never-ending task. AI tools help reduce maintenance time by making automatic adjustments to the tests when the software changes. In most cases, though, it is still the job of the QA engineer to decide whether a given change is acceptable.
  4. Converting test documentation to automation. This is a very useful feature if you already have documentation written in a structured, formal way.
  5. Visual testing. Some test tools tried to achieve this without AI and ML algorithms by using pixel-by-pixel comparison. This proved not very effective, because it usually leads to many false positives. ML algorithms help increase the robustness and stability of automated tests by identifying changes that do not impact the user experience and ignoring them.
  6. Audio/Video quality testing. AI can collect multiple data points about audio/video at any given point in time and learn how variations in them impact how humans perceive the audio/video. Testing is done faster and is based on data, which makes it more objective.
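To make item 1 above concrete, here is a minimal sketch of the idea, assuming a browser DOM context. The fingerprint fields and scoring weights are my own invention for illustration, not any particular vendor's implementation:

```typescript
// Sketch of "smart locator" capture: record many data points per element at
// recording time, then find the best-scoring match at playback time.

interface ElementFingerprint {
  tag: string;
  id: string;
  text: string;
  classes: string[];
  attributes: Record<string, string>;
}

// Recording phase: capture a fingerprint instead of a single brittle selector.
function capture(el: Element): ElementFingerprint {
  const attributes: Record<string, string> = {};
  for (const attr of Array.from(el.attributes)) {
    attributes[attr.name] = attr.value;
  }
  return {
    tag: el.tagName.toLowerCase(),
    id: el.id,
    text: (el.textContent ?? '').trim(),
    classes: Array.from(el.classList),
    attributes,
  };
}

// Playback phase: score every candidate and pick the closest match, so a
// changed id or class does not immediately break the test.
function findBestMatch(fp: ElementFingerprint): Element | null {
  let best: Element | null = null;
  let bestScore = 0;
  for (const el of Array.from(document.querySelectorAll(fp.tag))) {
    let score = 0;
    if (el.id === fp.id) score += 3;
    if ((el.textContent ?? '').trim() === fp.text) score += 2;
    score += fp.classes.filter((c) => el.classList.contains(c)).length;
    if (score > bestScore) {
      bestScore = score;
      best = el;
    }
  }
  return best;
}
```

Because a changed attribute only lowers the score rather than breaking the lookup outright, the same collected data points are also what makes the self-healing described in item 3 possible.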

This list looks impressive and promising, but there are major drawbacks, at least for now. When considering newer tools that have not been on the market for long, we should remember that AI tools are not as mature or widely used as traditional testing tools. If their users encounter a problem, they will most likely need to go to the vendor’s support team, which can increase test creation and troubleshooting time. There is also less available information about AI tools in general: comparison charts, use cases, real reviews, and case studies are hard to find. If you are considering adopting an AI tool, be ready to do a lot of legwork yourself.

Most of these tools are also not cheap. Open-source AI-based test tools, as well as free ones, are rare and very young (and in that case you have neither a wide community nor company-provided technical support, so you are truly on your own with your issues).

Last but not least: while there are plenty of traditional testing tools for every type of application, most AI tools only support automation of web applications. Only a couple of AI tools provide automation support for mobile apps.

I was lucky enough to get my hands on a few test tools that use AI and ML. I used some of them (Mabl, Functionize, Appvance IQ) for a full-scale proof of concept. I played with the free versions of others (Testim, TestCraft) and participated in some hackathons (Applitools Eyes). This gave me a pretty good understanding of where these tools are now, what works, and what does not.

Let’s start with the positives and review what worked well:

Quick Start. The pleasant surprise was how easy it was to start using most of the tools. The documentation, the setup, and the intuitive interfaces of Mabl, Testim, and Applitools Eyes are very user-friendly. It took me just a couple of hours to go from opening a tool for the first time in my life to having my first automated test running. Of course, one should invest far more time to use any of these tools to their full potential, but a quick start is super important during the initial rollout.

Codeless Script Generation (recording). Honestly, I was a little bit sceptical about this feature, but test recording has come a long way in recent years. I found that Mabl has the most mature test recording feature for web applications, with the most stable and intuitive interface. Another tool I would like to mention is Testim; its test recording feature is also implemented pretty well. TestCraft, Functionize, and Appvance also provide test recording, but there it is a much more challenging task that requires a lot of additional actions, not just pressing the “Record” button and going with the flow.

Decrease of maintenance time. Tests I automated using different AI-based tools were executed on multiple builds and releases. Every tool I tried handled insignificant UI changes, such as changes to text, size, or other attributes, well. If the change was something a tool could not handle (the introduction of new elements, a major redesign of the UI), it was much easier and faster to fix automated test cases by re-recording steps than by introducing the same change in code.

Visual testing. Unfortunately, I was only able to try this feature during the Applitools Hackathon, using Applitools Eyes. I was impressed by how easy it was to automate challenging but very common use cases (selecting a specific colour and model of a product in an online store).

All these features are great, but there are still some challenges that everyone who wants to use an AI-based automation tool should be aware of:

Codeless script generation. Although creating tests with recording features is much easier and faster than with traditional tools, it rarely goes smoothly. It is not just “press the button and go with the flow”. The automation engineer still needs to add JavaScript snippets for complex cases, take care of reusable flows, and set up variables and test data. So do not expect that it will be done at lightning speed, or that it will not require any technical and test automation skills.
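For example, a recorder cannot express an assertion like “the cart total equals the sum of the line items”, so the engineer adds a small custom step by hand. A hypothetical snippet of that kind, using only standard DOM APIs (the selectors and amounts are invented for illustration):

```typescript
// Custom step pasted into a recorded test: verify the cart total matches the
// sum of the individual line-item prices shown on the page.
function assertCartTotalMatchesItems(): void {
  const items = Array.from(document.querySelectorAll('.line-item-price'));
  const sum = items.reduce(
    (acc, el) => acc + parseFloat(el.textContent?.replace('$', '') ?? '0'),
    0,
  );
  const total = parseFloat(
    document.querySelector('.cart-total')?.textContent?.replace('$', '') ?? '0',
  );
  if (Math.abs(sum - total) > 0.005) {
    throw new Error(`Cart total ${total} does not match item sum ${sum}`);
  }
}
```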

The self-healing feature. Self-healing is a great feature, but it does not mean that all your tests will be magically fixed every time. Different tools handle self-healing in different ways. Some of them require the approval of every change, at least in the beginning while the system learns, so it is still a time investment. Others just make changes and proceed without notification, which I think is dangerous. Fortunately, most AI-based tools are starting to add the ability to review auto-fixed tests, accept or decline the changes, track history, and roll back to a previous version.

Every big change in the application, like introducing a new element, removing tabs, or anything else, will still result in test updates that have to be made by the QA engineer.
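To show what the review-based workflow might look like under the hood, here is a minimal sketch, assuming a fingerprint-based fallback like the one sketched earlier. The types and function names are hypothetical; the key design point is that the tool does not silently rewrite the test, it queues the proposed fix for human review:

```typescript
// Self-healing step: try the stored selector first, fall back to a smarter
// lookup, and record the proposed fix instead of silently accepting it.

interface HealingProposal {
  step: string;
  oldSelector: string;
  newSelector: string;
  approved: boolean | null; // null = pending human review
}

const pendingProposals: HealingProposal[] = [];

function resolveElement(
  step: string,
  selector: string,
  fallback: () => Element | null,
): Element {
  const el = document.querySelector(selector);
  if (el) return el;

  // The primary selector broke; try the fingerprint-based fallback.
  const healed = fallback();
  if (!healed) {
    throw new Error(`Step "${step}": element not found, cannot self-heal`);
  }

  // Queue the change for the QA engineer to accept or decline.
  pendingProposals.push({
    step,
    oldSelector: selector,
    newSelector: buildSelector(healed),
    approved: null,
  });
  return healed;
}

function buildSelector(el: Element): string {
  return el.id ? `#${el.id}` : el.tagName.toLowerCase();
}
```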

Autonomous creation of automated test cases. AI-based automation tools can generate tests in different ways. Some generate tests by collecting information on how real users use the software in production. This information can be extracted from logs, clickstreams, or both. As you might guess, this requires the application to have quite a few users, so that enough data can be collected for the ML algorithms, and it must already be running in a production environment. As a result, test generation based on application usage can only be used to add missing regression cases.
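A minimal sketch of the usage-based approach, assuming clickstream logs with a session id and an action per entry (the field names are invented). Frequent paths become candidate regression tests, and rare or never-seen paths are ignored, which is exactly why this approach cannot cover brand-new features:

```typescript
// Mine frequent user paths out of a clickstream; each frequent path is a
// candidate for a generated regression test.

interface ClickEvent {
  sessionId: string;
  action: string; // e.g. "open:/search", "click:#buy"
}

function frequentPaths(events: ClickEvent[], minSessions: number): string[][] {
  // Group the stream into one ordered path per session.
  const paths = new Map<string, string[]>();
  for (const e of events) {
    const path = paths.get(e.sessionId) ?? [];
    path.push(e.action);
    paths.set(e.sessionId, path);
  }

  // Count how many sessions followed each identical path.
  const counts = new Map<string, number>();
  for (const path of paths.values()) {
    const key = path.join(' -> ');
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }

  // Keep only paths common enough to be worth a generated test.
  return Array.from(counts.entries())
    .filter(([, n]) => n >= minSessions)
    .map(([key]) => key.split(' -> '));
}
```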

Another type of self-generated test is the test generated by link crawlers. These tests check that every link in the app works. This is useful, but a working link does not necessarily mean it is the right link, or that the application behind it functions correctly.
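A minimal link-crawler check, using only the standard fetch API, might look like the sketch below. Note that it only proves each link responds: a 200 status says nothing about whether it is the right page.

```typescript
// Fetch a page, extract its absolute links, and report any that fail.
// A real crawler would parse the DOM and recurse; this is a one-level sketch.

async function checkLinks(startUrl: string): Promise<string[]> {
  const broken: string[] = [];
  const res = await fetch(startUrl);
  const html = await res.text();

  // Crude href extraction, good enough for illustration.
  const hrefs = [...html.matchAll(/href="(https?:\/\/[^"]+)"/g)].map(
    (m) => m[1],
  );

  for (const url of hrefs) {
    try {
      const r = await fetch(url, { method: 'HEAD' });
      if (!r.ok) broken.push(`${url} -> HTTP ${r.status}`);
    } catch {
      broken.push(`${url} -> unreachable`);
    }
  }
  return broken;
}
```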

Auto-generated tests only cover what can be easily tested. They find shallow bugs: buttons that do not work, incorrect values, broken links. They will never help you find missing requirements or logic flaws, or spot usability issues.

As we can see, there are plenty of use cases where AI and ML can improve the everyday life of QA engineers and help us solve, more efficiently, the problems usually tackled with traditional test automation tools. AI-based tools are good at decreasing, or in some cases fully eliminating, the time QA engineers spend on mechanical, boring tasks. They also allow us to automate tests much faster. We should not expect automation to become effortless, though; it will only be easy when software development is also an easy, mechanical task. Another thing to remember is that with new technologies, new issues and new testing needs emerge all the time, and tools… tools usually stay a little bit behind.

Iryna Suprun

Iryna started her career as a software engineer in 2004 in Ukraine, where she was born. She received her master’s degree in computer science in 2006, and in 2007 she began her first position as a quality analyst. She remained in QA, focusing on the telecom industry and testing real-time communication systems and products such as the audio platform for the GoToMeeting application. Three years ago, she decided it was time to try something new and moved to AdTech. She is presently an Engineering Manager at Xandr, where she concentrates on automation frameworks and implementing testing processes from scratch.