
Thrown into automation

Situation & Problem

I was thrown into an automation test project.

Concretely, it meant test automation of 3 different applications (different in purpose, look, structure, and behavior) whose regression testing was covered only by an automation test suite written in AutoIt; the codebase was quite complex and huge for a newcomer to automation.
Well, that was not the problem; it is just a description of the situation.

The problems weren't initially visible. I had never done automation before, so I needed to learn quite a bit of the code and get to know all the applications that were part of the project.

The problems were not so apparent at the start, but I would formulate them as:
  • Maintenance of the scripts took too long
    • With each new version of the application, it took time to adjust the scripts to the changes
    • This caused delays in the information flow from testers to managers & developers
    • The changes in the application were not clearly communicated to testers
  • Testing was done purely through automation scripts, covering regression relatively poorly (~45% code coverage)
    • There was no exploratory testing, retesting of bugs, performance testing...
  • Scripts were dependent on coordinates and simulated keystrokes (see the sketch after this list)
    • Every small change in the position of windows or controls caused the scripts to get stuck
    • We were missing direct control handling - for example through IDs
  • There were too few control points
    • The script executed mostly blindly for a very long time
    • It could often happen that the script misclicked and then went rampant - opening programs, typing into them and so on...
  • Oracles were weak
    • We had no specifications or requirements
    • The task descriptions for the developers' work were often too short and unclear (we missed quite a lot of information flow)
    • There was no written documentation on the script execution
    • The manuals were either not available or written mostly for business purposes, giving little information about other parts
    • Feedback from business about calculation data was slow
  • To sum it up: testing was underfed
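
To illustrate the coordinate dependence and the missing control points, here is a minimal AutoIt sketch contrasting the fragile style with a control-based style that has an explicit checkpoint. The window title "Report Editor" and the control descriptors are hypothetical placeholders, not taken from the actual project:

    ; Fragile style: blind coordinates and simulated keystrokes
    MouseClick("left", 310, 220)      ; hope the button is still at this position
    Send("quarterly-report{ENTER}")   ; typed into whatever currently has focus

    ; More robust style: address controls directly and verify state first
    ; (hypothetical window title and control descriptors)
    Local $hWnd = WinWaitActive("Report Editor", "", 10) ; checkpoint: wait up to 10 s
    If $hWnd = 0 Then
        ConsoleWriteError("Report Editor did not appear - aborting run" & @CRLF)
        Exit 1 ; fail loudly instead of clicking blindly
    EndIf
    ControlSetText($hWnd, "", "[CLASS:Edit; INSTANCE:1]", "quarterly-report")
    ControlClick($hWnd, "", "[CLASS:Button; TEXT:&Save]")

The checkpoint is the important part: a run that stops with a clear error is far cheaper than one that keeps clicking into the wrong windows.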

Journey to the solution

First of all, the solutions to the problems were often the product of the team; I don't want to take sole credit for them.

 

Collaboration

At the start, the most glaring weakness was the maintenance issue. One of the reasons was a lack of communication between testing and development.

To mitigate this issue, we decided to hold regular Demo sessions, where the developers introduce changes to the application, giving the testers a chance to prepare for them and to plan the tests with a better overview. Demo sessions were organized every second week. There was a need to keep fueling interest in them, because the sessions tended to be unintentionally forgotten or ignored.

Another benefit of the collaboration with developers is an increase in the testability of the application. The automation tool (AutoIt) often lacks the capacity to check various inputs and basically to "see" the current state of the application. These issues were partially solved by custom functionality that gave the automation tool these "eyes", or the ability to browse through items or tabs with the help of command inputs (not only mouse clicks).
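
As a rough illustration, this is the kind of "eyes" such helper functionality can give an AutoIt script: reading a control's text to check the application's state, and switching tabs through the control itself instead of via mouse clicks. The window title "Invoice Tool" and the control descriptors are assumptions for the sketch, not the project's real ones:

    ; Hypothetical check of the AUT's state via a status-bar control
    Local $sStatus = ControlGetText("Invoice Tool", "", "[CLASS:msctls_statusbar32; INSTANCE:1]")
    If StringInStr($sStatus, "Ready") = 0 Then
        ConsoleWriteError("AUT not ready, status: " & $sStatus & @CRLF)
        Exit 1
    EndIf
    ; Browse tabs through the tab control itself instead of clicking coordinates
    ControlCommand("Invoice Tool", "", "[CLASS:SysTabControl32; INSTANCE:1]", "TabRight", "")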

 

Reporting

It is important for managers to stay informed about the work you are doing. That's why we started to send a weekly summary report every Friday (or the last workday of the week) on top of the weekly meetings.

 

Adding up

Beyond the actual testing and maintenance work, there was a lot of work in increasing the coverage and the testing possibilities on the project. Starting with covering more of the actual AUTs (applications under test), continuing with test automation for other applications and performance testing for one particular part, there was still a need to provide something more. I would say big benefits came from:

 

Exploratory testing

The automation so far touched purely the regression part; there was no room to test the new features, so the need arose to cover them. An exploratory approach was the most obvious choice. To differentiate this approach from "random clicking", we needed to pay attention to:
  • Planning
    • There is a clear need to gather information about which areas are our targets with this approach
    • Gather as much information as possible about these areas - Demo sessions, asking around, manuals, task descriptions
  • Reporting
    • Information about what we cover with this approach
    • Information about issues found through it - mentioning in the bug reports that they were found thanks to exploratory testing
  • Evaluating
    • Evaluate and analyze each exploration to enhance future runs

This approach proved to be a worthy addition to the existing automation on the project, both in defects found and in information gathered about the AUTs.

 

Requirements

A requirements process is starting up to improve the information flow about changes in the AUTs, but it is still in its beginnings; I hope for the best.

 

New tools

The current tool on the project is quite outdated, so this area offers big potential for improvement. We have started to look for alternatives; however, a change of automation tool would mean a big workload in rewriting the current coverage and overcoming problems that could emerge.

The advantages of switching to a new tool would not come fast, but in the mid and long term there would be:
  • Increased reliability of runs
  • More coverage
  • Quicker production of new scripts
  • More possibilities for what to do with the scripts
  • Greater clarity of runs, reports ...

Conclusion

Pure automation is, in my opinion, rarely enough to cover the testing needs of any project. You can put automation at the center and build on it, but you should never rely only on it. With every test that you automate, you take away its dynamic, sapient part, and you must keep that in mind.

