
Thrown into automation

Situation & Problem

I was thrown into an automation test project.

Concretely: test automation of three different applications (different in purpose, look, structure, and behavior) whose regression testing was covered only by an automation test suite written in AutoIt. The codebase was quite large and complex for someone new to automation.
Well, that was not the problem itself; it is a description of the situation.

The problems weren't initially visible. I had never done automation before, so I needed to learn quite a bit of the code and get to know all the applications that were part of the project.

The problems were not so apparent at the start, but I would formulate them as:
  • Maintenance of the scripts took too long
    • With new versions of the application, it took some time to adjust the scripts to the changes
    • This caused delays in the information flow from testers to managers and developers
    • Changes in the application were not clearly communicated to testers
  • Testing was done purely through automation scripts, covering regression (relatively poorly, at ~45% code coverage)
    • There was no exploratory testing, retesting of bugs, performance testing...
  • Scripts depended on coordinates and simulated keystrokes
    • Every small change in the position of windows or controls caused the scripts to get stuck
    • We were missing direct control handling, for example through IDs
  • There were too few control points
    • The script mostly executed blindly for a very long time
    • It could often happen that the script misclicked and then ran rampant, opening programs, typing into them, and so on...
  • Oracles were weak
    • We had no specifications or requirements
    • The task descriptions for the developers' work were often too short and unclear (we missed quite a lot of the information flow)
    • There was no written documentation on the script execution
    • The manuals were either not available or written mostly for business purposes, giving little information on anything else
    • Feedback from the business side about calculation data was slow
  • To sum it up: testing was underfed
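Several of the problems above (coordinate dependence, blind execution, runs going rampant after a misclick) boil down to missing checkpoints. A minimal Python sketch of the idea, purely hypothetical (our real suite was AutoIt, and all names here are invented): a step wrapper that verifies the expected application state before acting, so a failed checkpoint aborts the run instead of letting it click blindly:

```python
# Hypothetical sketch: a guarded step runner that checks the expected
# UI state before each action, instead of executing blindly.

class CheckpointError(Exception):
    """Raised when the application is not in the expected state."""

def run_step(action, precondition, description):
    """Run one automation step only if its precondition holds.

    action       -- callable performing the step (e.g. a click)
    precondition -- callable returning True when the app is in the right state
    description  -- human-readable step name for the log
    """
    if not precondition():
        raise CheckpointError(f"Checkpoint failed before step: {description}")
    action()
    return f"OK: {description}"

# Tiny simulated 'application' state for demonstration:
app_state = {"active_window": "Login"}

def open_settings():
    app_state["active_window"] = "Settings"

result = run_step(
    action=open_settings,
    precondition=lambda: app_state["active_window"] == "Login",
    description="open settings from login screen",
)
```

In AutoIt terms, the same idea amounts to a WinWaitActive or state check before every ControlClick, instead of unconditional MouseClick calls on fixed coordinates.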

Journey to the solution

First of all, the solutions to these problems were often a team effort; I don't want to take sole credit for them.

 

Collaboration

At the start, the most apparent weakness was the maintenance issue. One of the reasons was a lack of communication between testing and development.

To mitigate this issue, we decided to hold regular demo sessions, where the developers introduced upcoming changes to the application, giving the testers a chance to prepare for them and to plan the tests with a better overview. Demo sessions were organized every second week. We needed to keep fueling interest in them, because the sessions tended to be forgotten or ignored.

Another benefit of the collaboration with developers was an increase in the testability of the application. The automation tool (AutoIt) often lacks the capacity to check various inputs and, basically, to "see" the current state of the application. These issues were partially solved by custom functionality that gave the automation tool these "eyes", or the ability to browse through items or tabs via command inputs (not only mouse clicks).
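The kind of testability hook described above can be sketched in Python (a hypothetical interface; the real hooks were custom additions on the application and AutoIt side): the application exposes its current state and accepts command-based navigation, so the script can "see" where it is instead of guessing from coordinates:

```python
# Hypothetical sketch of an application-side testability hook:
# instead of locating controls by screen coordinates, the automation
# queries the state directly and navigates by control ID.

class TestableApp:
    """Simulated AUT exposing its state and command-based navigation."""

    def __init__(self):
        self.tabs = ["Overview", "Orders", "Reports"]
        self.current_tab = "Overview"

    def get_state(self):
        # The "eyes": report the current state directly to the automation.
        return {"current_tab": self.current_tab}

    def goto_tab(self, tab_id):
        # A command input instead of a simulated mouse click.
        if tab_id not in self.tabs:
            raise ValueError(f"Unknown tab: {tab_id}")
        self.current_tab = tab_id

app = TestableApp()
app.goto_tab("Reports")
state = app.get_state()  # the script can now verify where it really is
```

The sketch only illustrates the principle: the application answers state queries and accepts commands, rather than being driven blind through pixels and keystrokes.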

 

Reporting

It is important for managers to stay informed about the work you are doing. That's why, on top of the weekly meetings, we started to send a weekly summary report every Friday (or the last workday of the week).
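As an illustration (hypothetical data and format, not our actual report), a minimal Python sketch that aggregates a week of run results into the kind of short summary such a report could carry:

```python
from collections import Counter

# Hypothetical week of automated run results: (script name, outcome).
runs = [
    ("app1_regression", "pass"),
    ("app2_regression", "fail"),
    ("app3_regression", "pass"),
    ("app1_smoke", "pass"),
]

def weekly_summary(results):
    """Build a short textual pass/fail summary and list failed scripts."""
    counts = Counter(outcome for _, outcome in results)
    failed = [name for name, outcome in results if outcome == "fail"]
    lines = [
        f"Runs: {len(results)}, passed: {counts['pass']}, failed: {counts['fail']}"
    ]
    if failed:
        lines.append("Failed scripts: " + ", ".join(failed))
    return "\n".join(lines)

report = weekly_summary(runs)
```

Even this much structure keeps the Friday report consistent from week to week, which is what the managers actually need.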

 

Adding up

Beyond the actual testing and maintenance work, a lot of effort went into increasing the coverage and the testing possibilities on the project: covering more of the actual AUTs (applications under test), then adding test automation for other applications and performance testing for one particular part. Still, there was a need to provide something more. I would say big benefits came from

 

Exploratory testing

The automation so far touched purely the regression part; there was no room to test the new features, and we needed to cover that gap. An exploratory approach was the most obvious choice. To differentiate this approach from "random clicking", we needed to pay attention to
  • Planning
    • There is a clear need to identify which areas we target with this approach
    • Gather as much information as possible about these areas: demo sessions, asking around, manuals, task descriptions
  • Reporting
    • Information about what we cover with this approach
    • Information about issues found through it, mentioning in the bug reports that they were found thanks to exploratory testing
  • Evaluating
    • Evaluate and analyze each exploration to enhance future runs

This approach proved a worthy addition to the existing automation on the project, both in defects found and in information gathered about the AUT.

 

Requirements

A requirements process is starting to take shape to improve the information flow about changes in the AUT. It is still in its beginnings, but I hope for the best.

 

New tools

The current tool on the project is quite outdated, so this area offers big potential for improvement. We have started to look for alternatives; however, a change of automation tool would mean a big workload: rewriting the current coverage and overcoming whatever problems emerge along the way.

The advantages of switching to a new tool would not come fast, but in the mid and long term there would be:
  • Increased reliability of runs
  • More coverage
  • Quicker production of new scripts
  • More possibilities for what to do with the scripts
  • Greater clarity of runs, reports...

Conclusion

In my opinion, pure automation is rarely enough to cover the testing needs of any project. You can put automation at the center and build on it, but you should never rely on it alone. With every test that you automate, you take away its dynamic, sapient part, and you must keep that in mind.

