Recently, I have noticed a growing emphasis on test design techniques. People get trained in the common techniques, and I hear calls for more explicit documentation of the techniques used in test design documents.
I would like to take a closer look at this trend by analyzing a simple but extreme example that demonstrates the hidden pitfalls along this path.
Let's take a look at the lovely pair of equivalence partitioning and boundary value analysis. These are usually the first and most basic techniques mentioned whenever test design techniques are discussed.
Equivalence partitioning derives from the fact that testing all possible values is impossible, and assumes that one representative value from each partition is sufficient for good enough testing.
Boundary value analysis compensates for a weakness of equivalence partitioning: although the specification does not define different behavior for processing boundary values, we know from experience that these areas are likely to fail due to architecture or programming bugs.
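To make the pair concrete before we start, here is a minimal Python sketch of how the two techniques combine for a hypothetical specification whose valid inputs are lo..hi (the range and the off-partition representatives are arbitrary choices of mine, not taken from any real spec):

    def ep_bva_values(lo, hi):
        """Derive test values for a spec whose valid inputs are lo..hi.
        Equivalence partitioning: one representative per partition.
        Boundary value analysis: the values on both sides of each boundary."""
        representatives = {
            "below range": lo - 10,      # invalid partition (arbitrary pick)
            "in range": (lo + hi) // 2,  # valid partition
            "above range": hi + 10,      # invalid partition (arbitrary pick)
        }
        boundaries = [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]
        return representatives, boundaries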
Let's apply these techniques to derive data for our test design. Assume we have to test the following function:
MemoryCopy(Source,Length,Destination)
The requirements specification says:
This function is used by other software components to copy memory cells, given the source memory cell number, the length of the data to copy, and the destination cell to which the group of cells is copied.
The function operates on a memory area consisting of memory cells 0 up to N, each of which can contain any data.
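The specification says nothing about the implementation, but to reason about the function we need some model of it. Here is a minimal Python sketch of the naive reading, treating the memory area as a plain list of cells (the name memory_copy and the cell-by-cell loop are my assumptions, not part of the spec):

    def memory_copy(mem, source, length, destination):
        """Naive reading of the spec: copy `length` cells starting at
        `source` into the cells starting at `destination`, left to right."""
        for i in range(length):
            mem[destination + i] = mem[source + i]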
Using the equivalence partitioning approach, we analyze our environment: there are three data partitions: below zero, zero up to N, and above N. Each of our parameters can fall into any of them. Pairwise planning lowers the number of tested combinations, so to get good enough coverage of the function we will run the following test cases and call the result TDD#1 (a short sketch reconstructing this derivation follows the table):
Case# | Source | Length | Destination |
1 | 0-N | 0-N | 0-N |
2 | N+1-infinite | 0-N | 0-N |
3 | 0-N | Negative | 0-N |
4 | 0-N | N+1-infinite | 0-N |
5 | 0-N | 0-N | Negative |
6 | 0-N | 0-N | N+1-infinite |
7 | Negative | 0-N | 0-N |
Note that only one of the seven cases is a positive test case (Case#1), since the specification defines only one valid partition.
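To make the derivation explicit, here is a small sketch that rebuilds the logic of the table above: one all-valid case, then each parameter pushed into each invalid partition while the others stay valid (the representative values and the case ordering are placeholders of mine):

    def tdd1_cases(n):
        """Rebuild TDD#1: one positive case plus one negative case per
        (parameter, invalid partition) pair -- 1 + 3*2 = 7 cases."""
        reps = {"valid": n // 2, "negative": -5, "above_n": n + 5}
        params = ("source", "length", "destination")
        cases = [{p: reps["valid"] for p in params}]   # Case 1: all valid
        for p in params:
            for bad in ("negative", "above_n"):
                case = {q: reps["valid"] for q in params}
                case[p] = reps[bad]                    # one invalid parameter
                cases.append(case)
        return cases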
Since we already know that we must not use this technique without the complementary boundary values, we will specify the boundary values to check: instead of 0-N we will use 0, 1, N-1, and N, the boundary values within the valid partition range, plus -1 and N+1, the boundary values of the invalid partitions. Thanks to an all-pairs algorithm, we end up with the following 44 cases (the negative ones are marked with ~), which we will call TDD#2; an illustrative sketch of such a generator follows the tables:
Case# | Source | Length | Destination | | Case# | Source | Length | Destination |
1 | N-1 | 1 | 0 | | 12 | 1 | N-1 | N-1 |
2 | N | N | N-1 | | 13 | N-1 | 0 | N-1 |
3 | N | 0 | N | | 14 | 1 | 0 | 0 |
4 | 0 | 1 | N-1 | | 15 | 0 | 0 | 1 |
5 | N | N-1 | 0 | | 16 | 0 | N | 0 |
6 | N-1 | N-1 | 1 | | 17 | 1 | N | 1 |
7 | 0 | N-1 | N | | 18 | ~-1 | N | 0 |
8 | 1 | 1 | 1 | | 19 | N | N | ~-1 |
9 | N-1 | N | N | | 20 | N-1 | ~-1 | N-1 |
10 | N | 1 | 1 | | 21 | ~-1 | 0 | 1 |
11 | 1 | 1 | N | | 22 | 0 | N | ~N+1 |
| | | | | | | | |
Case# | Source | Length | Destination | | Case# | Source | Length | Destination |
23 | N | ~-1 | 1 | | 34 | ~-1 | N | N |
24 | 0 | ~-1 | 0 | | 35 | N-1 | 0 | ~-1 |
25 | 1 | ~-1 | N | | 36 | ~N+1 | 0 | N |
26 | ~N+1 | N-1 | N-1 | | 37 | 0 | N-1 | ~-1 |
27 | N-1 | ~N+1 | N | | 38 | N | 1 | ~-1 |
28 | ~N+1 | 1 | 0 | | 39 | 1 | 1 | ~-1 |
29 | 0 | ~N+1 | 1 | | 40 | ~N+1 | N-1 | 1 |
30 | ~-1 | 1 | N-1 | | 41 | ~N+1 | N | 0 |
31 | N-1 | 1 | ~N+1 | | 42 | 1 | ~N+1 | 0 |
32 | N | ~N+1 | N-1 | | 43 | N | 0 | ~N+1 |
33 | 1 | N-1 | ~N+1 | | 44 | ~-1 | N-1 | N |
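I won't claim this is how the table above was produced; which all-pairs tool generated it is not important here. As an illustration only, a toy greedy all-pairs generator might look like the sketch below (a real tool may well emit a different number of rows than our 44, and N = 100 is an arbitrary stand-in):

    from itertools import combinations, product

    def all_pairs(parameters):
        """Greedy all-pairs: keep adding the candidate row that covers the
        most not-yet-covered value pairs until every pair is covered."""
        names = list(parameters)
        uncovered = set()
        for (i, a), (j, b) in combinations(enumerate(names), 2):
            for va, vb in product(parameters[a], parameters[b]):
                uncovered.add((i, va, j, vb))
        rows = []
        while uncovered:
            best, best_gain = None, -1
            for row in product(*(parameters[n] for n in names)):
                gain = sum(1 for (i, va, j, vb) in uncovered
                           if row[i] == va and row[j] == vb)
                if gain > best_gain:
                    best, best_gain = row, gain
            rows.append(dict(zip(names, best)))
            uncovered = {(i, va, j, vb) for (i, va, j, vb) in uncovered
                         if not (best[i] == va and best[j] == vb)}
        return rows

    # The six boundary values per parameter, for a concrete N:
    N = 100
    values = [-1, 0, 1, N - 1, N, N + 1]
    cases = all_pairs({"source": values, "length": values, "destination": values})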
So far so good, but we have just been trapped…
While we achieved state-of-the-art coverage in terms of valid and invalid data ranges and their boundaries, we forgot the main functional attribute of our function: it copies data from source to destination, and it has to maintain data integrity.
To find what we missed, let's go back to our natural tester's mindset and do the analysis without binding ourselves to a technique.
Seeing the above specification for the first time, we would use our drawing board to draw a typical environment, and use our best abstraction ability to imagine the possible cases and list them. We'll pay attention to the partitions and the boundaries when they cross our path, but defer their full discussion to a later stage, so as to remain focused on the functionality.
Figure - drawing board
Having the drawing in front of us, we would imagine and note the different possible cases:
Let's move memory cells 0-3 to cells N-3 up to N. Next, intuitively, we would move them to a destination of N-2, so that the copied block extends one cell past the end (cases 1 and 2).
Even nicer would be to copy cells 0-3 to cells 1-4, since there is potential data loss when part of the destination range overlaps the source range. At this point we'll take the idea further and imagine how the function copies the data (cases 3 and 4).
If the data is copied cell by cell, the function had better take the rightmost cell (in our example, "D" from cell 3), copy it to the rightmost cell of the destination (cell 4), and so on until it reaches the leftmost cell, to avoid losing data. On the other hand, if we copy cells 1-4 to cells 0-3, the function must start with the leftmost cell to avoid the same data loss.
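Readers who know the C library will recognize this as exactly the memcpy-versus-memmove distinction: a correct implementation must choose its copy direction based on how the ranges overlap. Under the same list-of-cells model as before, a sketch of the safe version:

    def memory_copy_safe(mem, source, length, destination):
        """Pick the copy direction so an overlapping destination never
        overwrites source cells before they have been read."""
        if destination > source:
            for i in reversed(range(length)):   # rightmost cell first
                mem[destination + i] = mem[source + i]
        else:
            for i in range(length):             # leftmost cell first
                mem[destination + i] = mem[source + i]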
Let's summarize our findings as test cases (we will call them TDD#3):
Case# | Source | Length | Destination |
1 | 0 | 0+X | N-X |
2 | 0 | 0+X | N-X+1 |
3 | 0 | Y | 1 |
4 | 1 | Y | 0 |
5 | Valid | Invalid | Valid |
6 | Invalid | Valid | Valid |
7 | Valid | Valid | Invalid |
We'll still have to define X and Y, and in some cases expand the definition of invalid data, but it's clear that TDD#3 is much more powerful than the previous two, since it covers functional aspects of the software.
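As a sketch of how cases 3 and 4 could be automated (Y = 4 is an arbitrary choice, and the memory_copy_safe model from the earlier sketch stands in for the real function under test):

    def check_overlap_integrity(copy_fn, n=16, y=4):
        """TDD#3 cases 3 and 4: overlapping copies in both directions.
        `copy_fn(mem, source, length, destination)` is the unit under test."""
        # Case 3: copy cells 0..y-1 one cell to the right (overlap ahead).
        mem = list(range(n + 1))
        expected = mem[:]
        expected[1:1 + y] = mem[0:y]
        copy_fn(mem, 0, y, 1)
        assert mem == expected, "forward overlapping copy lost data"
        # Case 4: copy cells 1..y one cell to the left (overlap behind).
        mem = list(range(n + 1))
        expected = mem[:]
        expected[0:y] = mem[1:1 + y]
        copy_fn(mem, 1, y, 0)
        assert mem == expected, "backward overlapping copy lost data"

    check_overlap_integrity(memory_copy_safe)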
O.K., the reader will say, you have wasted our time demonstrating a totally improper use of a test technique that relates to data, when the actual focus has to be on the functionality. Would you like us to add a warning label against improper use of a technique on the package?
Yes, we must warn against improper use. When the focus wanders from cognitive research and analysis to the BKM (best-known-methods) area, we and our reviewers are likely to be less critical. If, in our example, we had started from the simple abstraction on the drawing board, and only then looked inside the "techniques basket", picking the ones we find most adequate for our purpose, we would give the techniques their proper weight and guard against technique blindness.
In our last test design, we might notice that a few boundary values are missing and that we should address both invalid partitions in the invalid input cases, but we would understand that these are only second-priority items. Since the whole BVA technique is blind to the relations between the variables, starting with the technique leads our mind in a less relevant direction.