Monday, September 13, 2010

Play Me a Song


My sister was visiting us a few weeks ago so that she might meet her new niece in person. She lives about 1500km away from us, so it’s just far enough away to make these sorts of visits infrequent.

I learned a lot.

I watched her reverse-engineer a song that I’d been singing to my son and reduce it to its essentials so that I might learn to play it on the piano. In about a minute and a half, she identified the major chord and said, “just play around with that chord with the right rhythm and you’ll get it”.

I didn’t believe it.

That is, until I started playing the song, a couple of weeks after she left. Her gift – the countless hours of practice as a performer and teacher that made it possible notwithstanding – was to show me the essentials, to guide me with just enough to get started, and then to let me go experiment for myself.

I didn’t get a lesson plan.

She didn’t give me a script – the song transcribed into notes on paper. She didn’t tell me the rhythm; she let me feel that out for myself. She didn’t tell me what to do when.

Turns out this is what checklists should do for testers, especially those business testers who are not necessarily software test professionals in the first place. They know their business. They know how to use a computer. They need guidance, but just as I didn’t need a script to learn a song on the piano, they don’t need a script to test effectively.

In my experience, the more we tell our testers what to do, how to do it, in what order, how often, etc., the more likely it is that they will do just that. And only that. Sticking to a script is rarely a good thing in testing.

Checklist items are chords, not notes. If you believe the tester is a person, a sentient being capable of putting emotional labour and thought into their work, then identifying the chords is enough. They will play the notes without you going through the effort of transcribing them.

Give them room to create their own art.

Thursday, August 05, 2010


There’s an excellent discussion initiated by Gojko Adzic on terminology used in association with acceptance test driven development (ATDD). It’s excellent because it’s playing with people’s mental models of what’s really going on. Getting to the essence of things. I support what’s going on there.

I need to get something out of my head though.

My problem (because I’m not convinced that anyone else shares it) is that we too often change the ‘nouns’ when we are describing what’s going on. Work items transition from ‘idea’ to ‘feature’ to ‘requirement’ to ‘story’ to ‘example’ to ‘specification’ to ‘test’ with too much fluency for my comfort, as if there were magic implied behind the scenes. It seems we transform work items into completely different things simply by doing something to them.

Maybe we do, maybe we don’t. What I need to get out of my head is – what if we just used one noun, and described everything else as a state change?


My first candidate noun was minimal marketable feature (MMF). The challenge, however, was that an ‘automated MMF’ is, well, the solution we’re trying to specify. Didn’t work.

My second candidate was, as in the diagram above, specification. I transcribed Gojko’s terminology into a statechart. I used ‘Refined’ instead of ‘Distilled’ based on a twitter conversation I overheard between Gojko and Elisabeth Hendrickson. This seems to work better.

The ‘Executable’ state is actually a composite state that I almost labelled ‘Automated’. I only prefer ‘executable’ since it seems to be used more often in conjunction with my preferred noun (‘specification’), while the term ‘automated’ is more often used with a different noun (‘test’).


I wanted two states to represent the higher-level ‘Executable’ state because I know that just writing and running something in a BDD container doesn’t mean that the specification actually exercises the system under development, nor does it automagically get incorporated into the continuous validation system Gojko was referring to. I wanted this other state so that I could add the transition and indicate the work that still has to happen for that specification to be useful. In keeping with the spirit of Gojko’s post, the format of the specification, even after this work is completed, should still be as it came off the whiteboard – that is, a literal representation of the specification.
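The statechart described above can be sketched as a small state machine. This is only an illustrative model, not anything from Gojko’s post: the `Spec` class, the transition names, and the initial ‘Raw’ state are my assumptions; the remaining state names follow the terminology used here, with ‘Executable’ split into ‘Not Integrated’ and ‘Integrated’ to show the extra work still required.

```python
class Spec:
    """Hypothetical model of one work item under a single noun: specification."""

    # Ordered lifecycle states; the last two together form the
    # composite 'Executable' state from the statechart.
    STATES = ["Raw", "Refined", "Not Integrated", "Integrated"]

    def __init__(self, text):
        self.text = text      # the literal, whiteboard-style wording
        self.state = "Raw"    # assumed starting state

    def _advance(self, expected, target):
        # Only the noun stays fixed; everything else is a state change.
        if self.state != expected:
            raise ValueError(f"cannot reach {target!r} from {self.state!r}")
        self.state = target

    def refine(self):
        # Collaborative refinement ('Refined' rather than 'Distilled')
        self._advance("Raw", "Refined")

    def automate(self):
        # Written and runnable in a BDD container, but not yet
        # exercising the system under development
        self._advance("Refined", "Not Integrated")

    def integrate(self):
        # Wired into the system and its continuous validation
        self._advance("Not Integrated", "Integrated")


spec = Spec("Free delivery for orders of five or more books")
spec.refine()
spec.automate()
spec.integrate()
print(spec.state)  # -> Integrated
```

Note that skipping a transition (say, calling `integrate()` on a freshly refined spec) raises an error – the point being that the integration work is a distinct, visible step, not implied magic.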

After having done this, I believe there is value in unifying the conversation around a single noun, and I believe that the best candidate noun – for now – is ‘specification’.

Other random thoughts on this:

  • Interesting that this post isn’t about testing, although testing skills certainly help in doing this. Gojko called this collaborative specification.
  • I’ve de-emphasized the design skills required to complete the transition from ‘Not Integrated’ to ‘Integrated’. Unfortunately. But I don’t want to clutter the diagram.
  • My dream – presented as a lightning talk at the first Agile Alliance Functional Testing Tools workshop in 2007 – is that the literal executable specification described above (the one that appears as it did on the whiteboard) runs either inside or outside the GUI as requested at runtime. Mostly for ‘separation of concerns’ reasons. An easy way to validate the back end is a must, so that business rules get automated and validated first. Then it’s a second step to figure out the optimal user experience, and I would still like to use that very same specification to drive the development of that user experience.

Please continue the discussion on Gojko’s blog so that we keep our thoughts and minds directed at a single place.