
Sunday, May 8, 2011

Is there a Pesticide paradox in testing?

As a tester, I have heard and read the term “pesticide paradox” on many occasions. However, I do not feel comfortable with it, so I avoid using it. In the last few days, I decided to examine it more carefully: does this term make sense? I did some googling to explore the common use of the term in SW testing and the definition of the paradox in the real pesticide world, and to try to answer the question.

The original paradox is explained in Wikipedia:
“The Paradox of the pesticides is a paradox that states that by applying pesticide to a pest, one may in fact increase its abundance. This happens when the pesticide upsets natural predator-prey dynamics in the ecosystem.” I'll refer to this definition as the "original".

The common use of the term in testing, as I have experienced it, describes the observation that when scripted tests (automated in most cases) are repeated over and over again, eventually the same set of test cases will no longer find any new defects (I took a quote from the ISTQB syllabus, which is a great source for finding "terms of common use"). I'll refer to this definition as the "common use".

I also found the explanation that "A static set of tests will become less effective as developers learn to avoid making mistakes that trigger the tests" (a paper by Rex Black).

I could say that the common use of the term describes how repeating the same checks tends to yield fewer bugs from run to run. I agree that this is usually true. In my experience, since bugs get fixed, and as long as no major changes are introduced and no very bad development occurs, products become more stable from release to release. The development-learning explanation is logical as well.

If we try to correlate the testing common use of the term with the original one, we can see only a very loose connection – SW bugs do not increase because you repeat some checks, and moreover, where is the paradox here? If you ask a question over and over, it is very likely that most of the time you'll get the correct answer. I don't call this a paradox.

When you analyze a term, it is good practice to read the source. I don't have the book Software Testing Techniques by Boris Beizer, from which the term originates, but thanks to http://www.softwarequotes.com/ , I found it:
First law: The pesticide paradox. Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.
- Boris Beizer, Chapter 1, Section 1.7 (Beizer notes that farmers solve this problem by planting sacrifice crops for the bugs to eat, and laments that programmers are unable to write sacrifice functions), Software Testing Techniques, ISBN: 0442206720. I'll refer to this quote as "Beizer's".

Well, that makes sense too, and it is a good foundation law before you learn about methods: no method is fully effective. As with the common-use term, I don't see the paradox.

I'll summarize my conclusions on the subject:

• The connection between the original term – the biological phenomenon of the "pesticide paradox" – and the common use in the testing world rests mostly on the use of the word "bug" to describe a defect, and on the fact that the original paradox deals with a kind of inefficiency when trying to eliminate pests with pesticides.

• A clear logical paradox appears in the original phenomenon – you kill bugs, but this increases their abundance – while Beizer's quote and the common use of the term talk about reduced efficiency, not a paradox. A possible response would be to argue that when you do something and it is not efficient, that is a paradox; to my taste, this is too apologetic an argument.

• The original SW testing usage, Beizer's quote, is a warning against relying on a sole method, while the common use by others, who usually refer to Beizer as the source, describes the decreased efficiency of repeating a scripted test.

It’s fine to use a cool term with a loose analogy to describe your idea, but as you can see in our example, this might cause others to "steal" your term to describe other things (and worse – to reference you as the source). In addition, it will be hard to convince people with critical thinking to use your term. I will leave the pesticide paradox to its original meaning.

Tuesday, January 4, 2011

Note about terminology

Sometimes, it’s all about branding. When you want to sell a product or an approach, using terminology that will "sell" your approach to your stakeholders or the professional community affects the chances that it will be accepted.
When Fred Hoyle coined the term Big Bang during a 1949 BBC radio broadcast, he did not anticipate that he was doing a branding service for the competing theory. According to Hoyle, who favored an alternative "steady state" cosmological model, he used the striking image to highlight the difference between the two models. He probably did it too well :-)

Markus Gärtner, in his post Active vs. passive testing, introduces refreshing terminology for what we used to call Testing vs. Checking or Exploratory vs. Scripted: he uses the terms Active vs. Passive testing. He also talks about the role of judgment, which is part of being active, but basically, the new pair of Active vs. Passive describes a researching, critical, exploratory approach versus executing the planned tests, checking, and following defined scripts.

This new terminology has some benefits over the terms we are used to. It does not reuse a term that we already use to describe a wider area, like "Testing". And unlike the term "exploratory", it does not suffer from the "unstructured" public image.

Thursday, November 11, 2010

Using the definition of quality as a tool for context awareness

What is quality?
Since a tester's job is to perform quality assessments, ask questions, and provide answers regarding quality, understanding what quality is can help identify dilemmas in our work and put them in context.
My preferred definition for quality is “value to someone” (Jerry Weinberg). Cem Kaner adds the extension “who matters”. Usually I refer to this VIP as a “User”.

Recently, I noticed that using this definition helps me identify the context and explain it to others during discussions.
A few examples:

Focusing a discussion on the goal rather than the process.
Process is important, but sometimes process discussions get disconnected from the goal. For example, when a bug is opened and there is a discussion about whether or not it’s a requirement violation, providing insight into the user value can help direct the discussion to a productive place. This is also true when a tester spots an issue and is not sure whether it falls under his responsibility to report it – “is there a threat to the value to the users?” is a good litmus test to aid the decision.


Selecting a process
When defining a process, examining how it connects to the value to the user is a good way to evaluate it. A process should connect our efforts to the goal rather than disconnect them. A negative example is mixing the priority for fixing a bug with the bug's severity. The two often correlate, but it is a good idea to define a process that addresses the cases where they differ (such as a separate field for each), so the information about the severity of the value threat is not replaced by the work plan.
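To make the point concrete, here is a minimal sketch (in Python, with hypothetical field and level names – not any real bug tracker's schema) of a bug record that keeps severity and priority as two separate fields, so a scheduling decision never overwrites the information about the value threat:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    """How badly the problem threatens value to the user."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


class Priority(Enum):
    """Where the fix sits in the work plan."""
    LATER = 1
    NEXT_RELEASE = 2
    THIS_RELEASE = 3
    IMMEDIATE = 4


@dataclass
class BugReport:
    title: str
    severity: Severity  # value threat to the user, set by the bug submitter
    priority: Priority  # scheduling decision, set by the team


# A rare data-loss crash is a severe value threat, yet the team may still
# schedule the fix for the next release; with two fields, deferring the
# work does not erase the record of how bad the problem is for the user.
bug = BugReport(
    title="Data loss when device is unplugged mid-sync",
    severity=Severity.CRITICAL,
    priority=Priority.NEXT_RELEASE,
)
print(bug.severity.name, bug.priority.name)
```

With a single combined field, lowering the priority of this bug would silently discard the fact that it is critical for the user; the two-field record preserves both pieces of information independently.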

Determine classification of a problem
We face many types of problems. Some relate to the quality of the product, and some interfere with other aspects of our work, delay our progress, or block our testing efforts. When a testability issue is examined from the perspective of user value, it can be underrated, since our inability to test efficiently is the real issue here, not the impact of the tested attribute on the end user.
Sometimes two types of issues are combined in one problem description. Distinguishing between the value to the user and the other issue helps provide a clearer explanation.

Overcoming the tunnel effect when setting bug severity
Setting the correct bug exposure classification helps the bug life cycle start on the right foot. When a tester tests his area of responsibility and spots a problem, it is sometimes not easy to relate it to the big picture: what will be the impact on the user? Will he be able to recover? In other words, what is the threat in terms of value to the user? Answering this question easily directs the bug submitter to the correct bug severity.