
RTI - training for testers in Bratislava

I wanted to contribute to the testing culture of my company, so I decided to lead a training.

I tried to condense the three-day Rapid Testing Intensive online course by James Bach and Michael Bolton into a six-hour training class.

In my opinion, it went quite well. I don't want to reproduce the theory part, which you can see here

The practical part was more interesting: each participant tested a freeware screenshot tool (which I'd rather not name here). We were surprised how many bugs we found in a publicly used tool, some of them crashing the whole application.

Our Mission:


The whole testing took over 2 hours, and we had an interesting review session afterwards, the output of which was this Test Report.

Test Report

Results:

- The product's basic functionality works. Non-typical scenarios produce unstable and unacceptable results; there are also minor bugs that are acceptable
- A few specific scenarios result in an application crash
- The application's hierarchy is unintuitive in several places
- There are indications that the portable and installed versions differ
- Our mutual recommendation -> buy only on the condition that the known bugs are fixed and deeper testing is conducted afterwards

Testing:

- We ran 4 sanity-check sessions in parallel, each by a different tester, both to gain confidence in the results and to reduce subjective bias
- Each session was 120 minutes long and was followed by a group review

Caveats and recommendations:

- Bug fixing followed by retesting is required
- Additional deeper testing is strongly recommended, using various platforms, resolutions, and monitor devices
- A redesign and a comprehensive manual need to be delivered

Returning from holiday recently, I was expecting a calm day of catching up and doing some basic tasks. The opposite was true, this day I was introduced to a situation which puzzled us for two weeks. Situation We have been reported that Android sometimes get the wrong reply to a particular GET requests. Ok, let us investigate, I got this, will be quick... Reproducibility The bug is up till now non-deterministic to us. We were firstly not able to find the determining factor, it just occasionally occurred, persisted for some minutes (maybe up to half an hour) and then disappeared without a trace. This made the investigation and also any communication much harder. This happened for both iOS and Android apps. We got ourselves here a Mandelbug: A bug whose underlying causes are so complex and obscure as to make its behavior appear chaotic or even non-deterministic First hypothesis We have decided to focus only on the android part. A debugging proxy was attached shortly for c