Sunday, 13 June 2010

A model for testing in practice

One of the main problems around testing is that the terms, and the understanding of them, differ a lot from person to person. Although standard lists of term definitions have been around for a decade or so, along with certification schemes that put huge weight on learning these definitions - the terms are in reality still quite open and change meaning from project to project.

I guess it's only fair to add something to the confusion, which I will now do. :-)

For some time I have criticised the writing of test cases, for what I think is a valiant cause: writing test cases according to the standards has a clear tendency to stupefy tests and, in a popular phrase, to dumb testers down into mere key bangers following a script.
I dislike writing test cases as a prerequisite to testing, not only because it is boring and tedious work, but also because it creates a distance between the actual system and the tester/test case designer. Getting the test cases 'approved' before testing starts is often a mandatory step in certain fixed-time, fixed-price projects, and this always ends up in a discussion about the format and volume of the set of test cases, which has nothing at all to do with the value of the test itself. Furthermore - just to add a little extra bit - I find that the so-called 'quality' of the test is measured from the test cases directly, and not from the test result and the story of how the test actually went. And honestly - how much do we learn from what we would like to do while testing, compared to what we in fact get to do while testing?
This leads to demands for precise and accurate test cases that serve as documentation. A quote often heard: "if the test case doesn't explicitly contain the fields that should be checked and what these should contain - those fields won't get tested". A quote that, however popular, misses the point completely: the testers may (and should) have other documents than the test case lying around their working place. It's as if, all of a sudden, the test cases were the only surviving piece of documentation in the whole project. This is where the golden sentence "don't dumb down your test, smart up your testers" comes to mind.

I find this a great misunderstanding and an outrageous demand on the testers. No one else in an IT project works under such documentation demands. Project managers could never write down their actions and plans in this detail. No programmer would ever accept this demand either. No architect could do it, and no end user would ever restrain themselves to just following a script anyway.

There - it's out. This is my opinion based on my experience. Others may have other experiences and other opinions. I welcome those, but after more than ten years' work in testing I find it hard to change my opinion.

So - to add some more comments on this, I think that the demand for test cases comes out of the wrong model. For years and years the V-model has been the foundation of many people's testing theory. However, the theory is seldom put to exact practice. I think - currently - that this has to do with a different model being the base of our actual work. Hence the big confusion: one model for theory, one model for practice.

The V-model looks on testing as activities closely bound to development activities. At the time it was formulated, development was done on monolithic systems and almost everything was created by the programmers. In that kind of development project, the V-model makes a lot of sense. Unit testing is done on each little chunk of code, where all input, output and interfaces are simulated and under 100% control.
Integration testing makes sense too: putting these smaller chunks together, while still keeping interfaces (drivers and stubs) under control, tells us that this larger, compiled chunk of code 'works'. And so on. In the end we have acceptance testing and we're all done.
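To make that V-model idea concrete, here is a minimal sketch (in Python, with made-up names like convert and a stubbed rate lookup, purely for illustration) of a unit test where the collaborator is simulated, so the small chunk of code is entirely under control:

```python
# A tiny unit test in the V-model spirit: the collaborator (a currency rate
# lookup) is replaced by a stub we control completely, so only the small
# chunk of code under test is exercised. All names here are made up.
import unittest

def convert(amount, rate_lookup):
    """Convert an amount using whatever rate the lookup returns."""
    return amount * rate_lookup()

class ConvertTest(unittest.TestCase):
    def test_uses_stubbed_rate(self):
        stub_rate = lambda: 7.5          # the 'stub': fully under our control
        self.assertEqual(convert(10.0, stub_rate), 75.0)

if __name__ == "__main__":
    unittest.main()
```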

What happens today is different. Today, libraries of code are used and reused for the 'smaller chunks of code'. The developers are busier adding configuration parameters than writing lines of code (I'm not trying to trivialise the work of the developers, but their work has changed considerably in the past 20 years). In a way they start at a higher level, and thus they end up having something executable much sooner and without as much testing (unit and integration testing).

When I observe testing in practice, it doesn't really follow the V-model. We can't make test cases for each level in the V-model. We're working in stages (mind you, I mean stages, not phases - because a phase must end before the next phase starts. I find that stages are more flexible: parts of a system can be in one stage while another part is in a later stage. At least, that's my definition ;-).

Here's how it goes: smoke testing - debugging testing - stabilization testing - extreme testing.

Smoke testing is our first attempt to just start up the application or system and move around in it to see what it is and what it can do. In this stage we would benefit little from test cases, as everything that happens would be a surprise and unpredictable. Exploratory touring would be great for this - just looking at different aspects in turn would make the smoke test structured and valuable. It's as if we're making a map for ourselves.
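As a rough illustration (not a prescription), a tiny script can support this map-making - for instance by visiting a handful of entry points of a hypothetical web application and noting what comes back. The base URL and page paths below are pure assumptions:

```python
# A tiny smoke "tour" helper: hit a handful of entry points of a
# (hypothetical) web application and note what comes back, so the
# tester can start sketching a map of the system.
import requests

BASE_URL = "http://test-env.example.com"                     # assumed test environment
ENTRY_POINTS = ["/", "/customers", "/accounts", "/reports"]  # guessed pages

def smoke_tour():
    for path in ENTRY_POINTS:
        try:
            response = requests.get(BASE_URL + path, timeout=5)
            print(f"{path}: HTTP {response.status_code}, {len(response.text)} bytes")
        except requests.RequestException as exc:
            # A failure here is a finding in itself at this stage.
            print(f"{path}: could not be reached ({exc})")

if __name__ == "__main__":
    smoke_tour()
```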

Debugging testing (forgive me if the English grammar breaks on this term. Should it be debug testing, perhaps?) - Once we have got a foothold on the application we're testing, we start to investigate whether it can in fact do what we want it to. If it's got customers and bank accounts, just as an example, it should be able to create and edit both of these and link them together, so a customer can have one or more bank accounts. I call it debugging because this is the first time we're letting the system try to meet its requirements, and a lot of bugs would be expected - and easily found too. At this stage test cases could be beneficial as descriptions of what the system is supposed to be able to do. One form could be use cases plus sets of data sheets, telling the tester what kind of data and what interaction to expect. Traditional test cases would be difficult to work with and difficult to handle: we're not isolating the functions, we're using them in sequences. Or you could take an exploratory approach and just question what you see when you see it, thus building on the system's capabilities and honing your test oracles. One thing is for sure: the system is stuffed with bugs, so whatever you do, you're likely to find a couple or more.
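To illustrate the use-case-plus-data-sheet idea with the customer/bank-account example, here is a minimal sketch in Python. The Customer and BankAccount classes are stand-ins I've made up for whatever the real system exposes; the point is driving one realistic sequence end to end, not isolating each function:

```python
# A use-case style check: create a customer, create accounts, link them,
# then question what we see. The classes below are stand-ins for the real
# system, used only to show the flow.
class BankAccount:
    def __init__(self, number):
        self.number = number

class Customer:
    def __init__(self, name):
        self.name = name
        self.accounts = []

    def add_account(self, account):
        self.accounts.append(account)

def test_customer_can_hold_several_accounts():
    # The 'data sheet' for this run of the use case.
    customer = Customer("Ada Example")
    first = BankAccount("0001-111")
    second = BankAccount("0001-222")

    # Walk the sequence: create, link, link again.
    customer.add_account(first)
    customer.add_account(second)

    # Did the links survive, and can there be several accounts?
    assert [a.number for a in customer.accounts] == ["0001-111", "0001-222"]
```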

Stabilization testing is the next stage. Now we know that the test cases - in my view the use cases or something similar - can be completed and no major bugs are revealed along the way. In other words, we can create the customers, we can assign bank accounts to them and we can work the system - along the 'highways'. Now we need to crawl out onto the smaller roads and see that traffic in these areas is possible too. Stabilization means that the focus now is on getting the system to crash or break down in yet new ways, by being nasty to it and by taking the full functionality into account. Test cases in this kind of testing are not possible as I see it. Imaginative, exploratory testers will be required.
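As one small, made-up illustration of 'being nasty', the sketch below throws awkward inputs at a single stand-in entry point (a hypothetical create_customer function) and separates polite rejections from outright breakage. The real work here is of course the tester's imagination, not the script:

```python
# Throw awkward inputs at one entry point and note which ones merely get
# rejected and which ones actually break something. create_customer is a
# stand-in for the real system's entry point.
def create_customer(name):
    if not name.strip():
        raise ValueError("name must not be blank")
    return name.strip().title()

NASTY_NAMES = [
    "", " ", "\t\n", "A" * 10_000,            # empty and oversized
    "Robert'); DROP TABLE customers;--",       # hostile-looking content
    "名前",                                    # non-ASCII
    None,                                      # wrong type altogether
]

for candidate in NASTY_NAMES:
    try:
        result = create_customer(candidate)
        print(f"accepted: {candidate!r} -> {result!r}")
    except ValueError as exc:
        print(f"rejected: {candidate!r} ({exc})")
    except Exception as exc:
        # Anything unexpected is exactly what stabilization testing hunts for.
        print(f"BROKE: {candidate!r} ({type(exc).__name__}: {exc})")
```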

Extreme testing is the last stage. Once we realize that we cannot seem to break the system anymore, we have to go to extremes to hurt it. Push the load up, increase the data volume to the vicinity of 'disk full', do hundreds of things at once - if possible - and distress the system in whatever way you possibly can. One could reasonably argue that there's not much difference between stabilization and extreme testing. It's quite possible that there's not. In the mental model of testing, however, there's a distinction: stabilizing a system is different from pushing a system to its limits.
When we start stabilizing the system, all we know is that if we behave nicely, the system will do what we want it to do. Somewhere down the road of stabilization, the system will be robust and survive even if we treat it roughly and nastily. Then we can start doing something extreme. It's not a rock-solid definition, but it should work.
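A rough sketch of what 'pushing the load up' might look like in code - assuming, purely for illustration, a stand-in transfer operation and made-up worker/call counts; in a real project the calls would go against the system under test, and the interesting part is what falls over and how:

```python
# Many concurrent calls against a stand-in operation, counting how many
# come back at all and how long the whole barrage takes.
import concurrent.futures
import random
import time

def transfer(amount):
    # Stand-in for a real operation; sleeps to fake processing time.
    time.sleep(random.uniform(0.001, 0.01))
    return amount

def hammer(workers=50, calls=2000):
    failures = 0
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(transfer, 100.0) for _ in range(calls)]
        for future in concurrent.futures.as_completed(futures):
            try:
                future.result()
            except Exception:
                failures += 1
    elapsed = time.perf_counter() - start
    print(f"{calls} calls, {failures} failures, {elapsed:.1f}s")

if __name__ == "__main__":
    hammer()
```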

Anyway - this is what I see done in testing, and it brings us far when we do it. What doesn't bring us anywhere is writing test cases to follow the V-model. Some of these test cases are bound to be very mystifying, because they really serve more to be aligned with the V-model than to provide valuable information. Smoke testing - debugging - stabilization - extreme testing: that's valuable all the way, and it even has a plausible place for test cases. It could unite us, in that you can do it completely without predefined test cases, being exploratory all the way - or you could write thousands of test cases if that's how you like to work (of course they are only applicable inside 'debugging', so the system sort of 'grows out' of the test case stage).

With the original problem being that test cases are demanded and need to be approved, I can only hope that this scheme could be part of changing the expectations of the test cases: that they are not all the documentation, but closer to being test conditions. 'We want it to be able to XX' is much more operational, and must be more satisfying for a client to 'approve' than deciphering complex, detailed test cases - hundreds or thousands of them, too. Or maybe they just don't trust the testers to do a proper job?

There - I said my bit. What do YOU think of it?
