Sunday, 3 March 2013

My two € on opposite terms

It seems that an easy way to stir up a lively discussion on Twitter is simply to combine any of the following words:

exploratory, scripted,
automated, manual,
testing and checking

The discussions tend to fade out when they turn into a competition of who knows the most exquisite Greek form of a word's ancient roots, which is rather ridiculous considering how some words and terms have changed meaning over just the past decade. In my language, Danish, there are quite a few examples of this - for instance (in direct translation): a "bear's favour", originally referring to bears as big, opportunistic, self-minded beasts that will stop at nothing to make you their next dinner. Hence a bear's favour would be bad for you (you'll end up on the dinner plate in any case). But younger people have focused only on the "bigness" of bears, and thus a bear's favour has become a huge favour, with nothing bad about it. Besides widening the generation gap, this serves as an example of why following words like "automated" all the way to their Greek roots may be wasted energy. Let's face it: in IT - which is usually the reference frame for these discussions - automation simply means that we programmed it. It's software. End of story.

To me there's more to these discussions than philology. There's a certain pitch of identity-seeking about them. Just as football fans dress up in the colours and logos of their team and publicly resent competing teams, testers pick words like these to signal their standing, their opinions and their values. So we find testers claiming that they are exploratory, not scripted - or that true testing is manual, not automated. But guys! Really! Any debate about preferring one over the other is ridiculous, because these words.are.not.opposites!

Exploratory is an adjective which means that we are looking for what we might find. We explore by examining what we've got or where we are, taking in the whole picture and possibly zooming in on the interesting bits - at least until we figure out that they weren't in fact interesting. We want to learn, to add to our knowledge, which requires some room for experimentation and research.

Scripted testing has never been the opposite of that! It just means that we write down our planned paths before setting out. In some cases we need to explore areas that are not easily accessed. Think about what goes on at CERN: to operate such a huge testbed you need to follow instructions and coordinate many people's work. That doesn't mean you cannot explore. It just means that you are attempting something which requires some degree of planning and coordination. Testing software in a scripted fashion can be a good way - sometimes the only way - to assess a certain situation.

What is really opposite to exploration is confirmation. When we rerun a testcase to check that it still passes, we merely confirm what we already know. We seek only to renew our knowledge, without even attempting to learn anything. Confirmatory testing is the true opposite of exploratory testing. Scripting is just a tool, and it can be used in either - or not at all. Not being allowed to revise scripts, or to add more scripts as your knowledge grows, means that you're confirmatory, not exploratory. But if you are allowed to, you can script your way through exploration if you like.
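To make that distinction concrete, here is a minimal sketch in Python. The function and the test data are invented for illustration: the very same scripting serves confirmation when the cases are frozen, and exploration when we are free to add cases as we learn.

```python
# Toy illustration - discount() and all cases are invented, not from any
# real project. The point: "scripted" says nothing about whether we may
# revise the script; that is what separates confirmation from exploration.

def discount(price, customer_years):
    """The hypothetical function under test."""
    rate = 0.05 if customer_years >= 5 else 0.0
    return round(price * (1 - rate), 2)

# Confirmatory use: rerun a fixed set of known cases, verbatim, forever.
CONFIRMATORY_CASES = [(100.0, 5, 95.0), (100.0, 1, 100.0)]

# Exploratory use: the script is a starting point; as we learn, we add
# the cases our exploration suggests (boundaries, odd inputs, ...).
exploratory_cases = list(CONFIRMATORY_CASES)
exploratory_cases.append((100.0, 4, 100.0))   # just below the boundary
exploratory_cases.append((0.0, 10, 0.0))      # what about a free item?

def run(cases):
    return [discount(p, y) == expected for (p, y, expected) in cases]

assert all(run(CONFIRMATORY_CASES))  # confirms what we already knew
assert all(run(exploratory_cases))   # same scripting, used to learn
```

The scripts are identical in kind; only the permission to grow them differs.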

Likewise, "automated" and "manual" aren't exactly opposites. In IT, most of what we do involves software. Debating whether software runs by itself or "by hand" is futile. Software has an intention, and that intention is man-made. No piece of software has ever been the product of a machine or of another piece of software without that being the intention of a human. The keyboard, printer and screen drivers are software too. We can only do software testing without software by studying software outside its natural environment - on paper or whiteboards, as models and sketches. Which is a bit like studying bears from a few bones of a skeleton: we get a few glimpses, but never the full story. So why should it ever matter whether we use a program to do some typing for us? Get a grip, guys!

The great source of tweet flaming is testing and checking. Again, these are just not opposites - testing contains and involves checking, whereas checking might happen without testing. But it's so easy to grab one and proclaim something preposterous like "I'm a tester, not a checker", to try to build some identity. Unfortunately it doesn't really identify an opponent. Testing and checking are just useful terms that distinguish what we do at various times, and knowing the difference means we can improve what we do and be honest about the limitations of our results.
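One way to picture "testing contains checking" is a small sketch; everything here is invented for illustration. A check is a question a machine can decide on its own; testing is the human activity that decides which checks are worth asking in the first place.

```python
# Toy example - parse_age() and the cases are invented, not from any
# real codebase.

def parse_age(text):
    """Parse a non-negative integer age from a string."""
    value = int(text)
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

def check(description, passed):
    """A 'check': a question with a machine-decidable yes/no answer."""
    return (description, bool(passed))

# Checking: a machine can run and evaluate these unattended.
results = [
    check("plain number parses", parse_age("42") == 42),
    check("zero is allowed", parse_age("0") == 0),
]

# Testing is what produced them: someone wondered about "-1", about
# "forty-two", about surrounding whitespace - and judged which of those
# questions matter. The checks are the residue of that thinking.
assert all(passed for _, passed in results)
```

The checks run without a tester present; deciding what to check, and what the results mean, does not.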

As long as we do not discuss true opposites we will never reach any conclusion and the debates will be endless and fruitless - and a bit dumb! Most of the testers I've met over the years are pretty smart people, so I am truly astonished that so many are still confused about these terms.

For my part, I script, test, explore and automate or invoke checks interchangeably all the time. I think most testers actually do; at least in my daily work I meet testers who do. True, I do love exploratory testing - not only because I like to learn stuff, but because locking myself to just one variation of reality will eventually make my work outdated and decrease its value. Despite that, I recognize that confirmatory work has its time and place - and in those circumstances I do not resent doing it. Though I go far to avoid wasting time scripting things that I do not think are worth the effort, scripted testing is never my "enemy". No, that would be working in a non-context-driven environment:

  • believing that one standard will fit all (what was good on the last project is still good for this one)
  • writing a pre-decided number of test cases for each requirement, "because that gives good coverage"
  • seeking approval of testcases before they are run
  • counting testcases and bug reports and worse: turning those counts into a "result"
  • working as if testing is predictable and deterministic, i.e. having clear stopping criteria and sticking to the originally plotted path at all cost, regardless of reality
  • certifying testers as professionals without having them demonstrate their skills as testers - or the variation: labelling people as testers in the belief that no skills are needed.

In contrast - the opposite - working context-driven means:

  • we encourage challenging our work, not to put anyone down, but to strengthen our understanding and improve our skillsets (for instance, I do expect some "bashing" and comments on this blog post!).
  • we look to the product as our focal point, not to the ceremony of the process. 
  • we believe that testing is more closely related to social science than to mere technique. There are people skills involved as well.
  • we honour that our work often takes us to places we didn't forecast. And that's no catastrophic event. In fact that's usually where the fun begins.
  • we believe that skill is important, and if we lack a certain skill we'd better acquire and master it. Learning is part of our work.
  • testing is never a mindless questioning of everything. It's a focused, skilled activity that serves a valuable purpose to the benefit of the stakeholders that matter.

How we achieve this - by use of automation, checking, scripting, exploring or testing "by hand" - is less important. The trick is to be confident about when to do what, and why. That is our craft, our skill.

Those were my two € - let the "bashing" begin…

1 comment:

  1. My two R worth...
    As I was reading this, the thought came to me that the approach to the testing would depend entirely upon the context of the testing endeavour, so I was pleased to see, as the blog progressed, that this was in fact where it led.

    I too have found, during my years in testing, that one cannot approach each project with the same ‘cast in concrete’ mindset and apply the same process to each one. Whatever is required to assist in producing a quality product should be utilized, regardless of the label given to it by the greater (or lesser) community, discarding the accompanying biases, if any. The lessons learned in each project should be carried over to the next, thus building the toolbox into which one can delve and retrieve any approach, method, technique, etc., as and when required.