BUG HUNT

I recently organized a bug hunt at my company.
The point was to motivate testers to test outside the scope of their everyday work.
Here are the general settings:
________________________________________________________________________________

TERMINOLOGY

Territory – application, system
Prey – bug, issue, problem, meaningful change request
Hunt – the event in which bugs are searched for and reported in a particular application within a given time

ROLES

Old Hunter
  • He is the most experienced Hunter in the particular territory
  • A tester who does not currently have the role of Hunter
  • He informs the other Hunters about the application
  • He evaluates the reports
Gatherer
  • He gathers the reports from Hunters
  • He anonymizes the reports and forwards them to the Old Hunter for evaluation
  • He may provide a reward for the Lead Hunter
Hunter
  • His role is to catch the prey
Lead Hunter
  • He will be revealed at the end of the hunt
  • He is the best among the Hunters, the chosen one
  • He will receive a big chunk of meat or something

RULES

  • You will be given access to the territory (application, system) by the Old Hunter
  • He will provide information about
    • Basic application logic, functionality, purpose
    • What not to look for
    • What to look for
    • Whom to address bug reports to
    • Known prey (bugs, issues)
  • You can always ask the Old Hunter about the territory when you are not sure
  • Your goal is to find the prey (bugs, issues, defects, problems etc.)
  • You can also suggest meaningful changes
  • When to hunt? Whenever you want. You can spend the whole week looking for prey, or you can avoid hunting and eat grass :)
  • You will report everything to a Gatherer, who will send it anonymously to the Old Hunter for evaluation
  • Every prey is evaluated and given a value (0-5 meat points) by the Old Hunter – see the scoring sketch below the rules
    • Prey is evaluated firstly according to its size (significance, severity, priority)
    • More points are given to the Hunter who catches the prey first
    • Points devalue if many Hunters catch the same prey
    • Points are also given according to the quality of the hunt report – simplicity, accuracy, clarity, reproducibility
  • Hunters should not share information about the prey – but this is only a recommendation
  • The hunt lasts at most a week, until enough prey is caught, or until one Lead Hunter (most points) is clear
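
To make the scoring concrete, here is a minimal sketch in Python of how the meat points for a single catch could be computed. The function name, the 20% devaluation per duplicate catch, and the linear quality weighting are illustrative assumptions of mine, not official rules of the hunt:

# Illustrative scoring sketch – assumed parameters, not the official rules
def score_catch(size_points, catch_order, report_quality):
    """Meat points for one catch of a prey.

    size_points    -- 0-5 value set by the Old Hunter (significance, severity, priority)
    catch_order    -- 1 for the first Hunter to catch this prey, 2 for the second, ...
    report_quality -- 0.0-1.0 rating of the report (simplicity, accuracy, clarity, reproducibility)
    """
    devaluation = 0.8 ** (catch_order - 1)  # assumed: each later duplicate is worth 20% less
    return size_points * devaluation * report_quality  # assumed: quality scales the result linearly

# Example: a severe bug (5 points) caught second, with a clear, reproducible report
print(score_catch(5, catch_order=2, report_quality=0.9))  # -> 3.6

The exact numbers are the Old Hunter's call; the sketch only shows that duplicate catches and sloppy reports are worth less than a first, clean catch.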

LET THE HUNT BEGIN



Returning from holiday recently, I was expecting a calm day of catching up and doing some basic tasks. The opposite was true, this day I was introduced to a situation which puzzled us for two weeks. Situation We have been reported that Android sometimes get the wrong reply to a particular GET requests. Ok, let us investigate, I got this, will be quick... Reproducibility The bug is up till now non-deterministic to us. We were firstly not able to find the determining factor, it just occasionally occurred, persisted for some minutes (maybe up to half an hour) and then disappeared without a trace. This made the investigation and also any communication much harder. This happened for both iOS and Android apps. We got ourselves here a Mandelbug: A bug whose underlying causes are so complex and obscure as to make its behavior appear chaotic or even non-deterministic First hypothesis We have decided to focus only on the android part. A debugging proxy was attached shortly for c