
Testing impact on security

... or the impact when testing is lacking?

Security breaches, hacks, exploits, major ransomware attacks - their frequency
seems to be increasing lately. They can result in financial, credibility and data
loss, and increasingly in the endangerment of human lives.

I don't want to propose that testing will always prevent these situations.
Testers were probably present (and I'm sure often also security testers) when
such systems were created. I think there was simply a general lack of
risk awareness on these projects.

There are many tools and techniques, from a purely technical point of view, to harden software in a security context. Some of them are automated scanners (ZAP, Burp Suite...) that crawl through your website and may discover the low-hanging fruit of security weaknesses, without much technical knowledge required from the person operating them.
The more important aspect, however, is the mindset with which you approach the product. The tester is often the first person to discover these risks, simply because of that difference in mindset.
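To make the "low-hanging fruit" concrete: one of the simplest things such scanners report is missing HTTP security headers. Here is a minimal sketch of that kind of passive check - the header list and function names are my own illustration, not any scanner's actual API:

```python
# Recommended response headers that passive scanners commonly flag when absent.
EXPECTED_HEADERS = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
}

def missing_security_headers(response_headers: dict) -> set:
    """Return the recommended headers absent from a response (case-insensitive)."""
    present = {name.title() for name in response_headers}
    return {h for h in EXPECTED_HEADERS if h.title() not in present}

# Example: a response that sets a content type and one security header.
headers = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(sorted(missing_security_headers(headers)))
# → ['Content-Security-Policy', 'Strict-Transport-Security', 'X-Content-Type-Options']
```

A real scanner does far more than this, of course, but the point stands: these findings require no exploit skills, just knowing what to look for.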

We don’t think 'how can this work?', but 'where might it fail?'

Let's look at a security approach. There is a methodology in security called 'threat modeling' (or building a 'threat model'), which forms the security strategy before even looking at the technical details of the system. It describes the risk analysis from a security point of view: it maps the set of possible adversaries who could attack our system and the vulnerabilities and attack vectors they could exploit. It helps to pinpoint the places where the system is weakest against a probable attack, so that security improvements can be focused more effectively.
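In its simplest form, the output of such an exercise is a ranked table of adversaries and attack vectors. A toy sketch - all names and scores below are illustrative assumptions, not a real analysis:

```python
# A toy threat-model table: for each (adversary, attack vector) pair, score
# risk as likelihood x impact on a 1-5 scale, then rank to see where the
# system is weakest. Real threat modeling adds assets, mitigations, etc.
threats = [
    # (adversary, attack vector, likelihood, impact)
    ("script kiddie", "login brute force",   4, 2),
    ("insider",       "unencrypted backups", 2, 5),
    ("botnet",        "outdated dependency", 3, 4),
]

def rank_threats(entries):
    """Sort threats by risk score (likelihood * impact), highest first."""
    return sorted(entries, key=lambda t: t[2] * t[3], reverse=True)

for adversary, vector, likelihood, impact in rank_threats(threats):
    print(f"{likelihood * impact:>2}  {adversary:<14} via {vector}")
```

The ranking, not the scores themselves, is what matters: it tells you where to spend your limited security effort first.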

This methodology is in many aspects similar to the exploratory testing mindset. Both try to learn about the system; exploratory testing is more general, while threat modeling has a more specific scope. I admit I have never yet done threat modeling professionally. However, from testing numerous systems it is clear that something similar to a threat model takes shape in a tester's mind. When testing a system, after creating and continually refining a model in my head, I often ask myself where the places that might be exploited are (the “attack vectors”). There are often security defects that can be found without any penetration testing experience - buggy password prompts (revealing information or allowing unauthorized entry), data leaks, unencrypted storage of sensitive data, etc.
Usually these are not identified thanks to any specification or predefined test case (unless you are focusing specifically on security aspects); you just need to look for these loose ends as you navigate through the system.
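One of those loose ends - a password prompt that reveals whether a username exists - can even be checked mechanically. A minimal sketch, where the fake `login()` stands in for the system under test and the check itself is the interesting part:

```python
# Hypothetical check: does the login prompt leak whether a username exists?
# A leaky system returns different error text for "unknown user" vs
# "known user, wrong password" - a small but real information disclosure.
USERS = {"alice": "s3cret"}

def login(username: str, password: str) -> str:
    if username not in USERS:
        return "Invalid username or password."  # good: generic message
    if USERS[username] != password:
        return "Invalid username or password."  # same text either way
    return "Welcome!"

def leaks_user_existence() -> bool:
    """True if the error text differs between unknown user and wrong password."""
    return login("nobody", "x") != login("alice", "wrong-password")

print(leaks_user_existence())  # a leaky prompt would print True here
```

No pentesting toolkit required - just comparing two error messages while exploring the login screen.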

Any person with the right mindset can make a difference in software security. In the absence of a security specialist, that person is often your software tester - I hope you have one ;)


