Monday, March 19, 2012

A Mandated Tool

In my current context I’m required to use a certain tool for managing the test effort; you probably know the tool. I won’t mention it by name unless the vendor pays me to, but it’s the first on the list when non-testers choose to purchase a test tool.

My approach is to use this tool as a checklist manager, and it works really well for that purpose, provided you don’t over-engineer your use of it.

Forget Requirements – List Test Targets

This means avoiding the requirements-management trap: use the requirements section for nothing but organizing the test effort. A test target is anything that needs testing: a requirement, a user story, a feature, a benefit, a risk, a performance criterion, a security criterion, and so on.

Test targets are kept in checklist format and organized in a way that makes the next step easy: creating the playbook.

In my current project, I have device specifications (temperature operating range, voltage operating range, etc.) that are clear and unambiguous test targets; I’m lucky. We will employ the same concept when we need to test business processes, though.
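As a minimal sketch of the idea, here is what a checklist of test targets might look like as data. The device-spec values are paraphrased from the examples above; the business-process target, the `kind` labels, and the `tags` field (used later to pull items into plays) are all invented for illustration, not from any real tool.

```python
from dataclasses import dataclass, field

@dataclass
class TestTarget:
    """One checklist item: something that needs testing."""
    name: str
    kind: str                               # "spec", "requirement", "risk", ...
    tags: set = field(default_factory=set)  # used later to group items into plays

# Device-spec targets paraphrased from the post; the business-process
# target and all values are illustrative.
targets = [
    TestTarget("Operates from -20C to +60C", "spec", {"environmental"}),
    TestTarget("Operates from 10.8V to 13.2V", "spec", {"power"}),
    TestTarget("Order entry completes in under 3s",
               "performance criterion", {"business-process"}),
]

# Note what is absent: no dates, no release numbers. Targets should
# outlive any single release.
for t in targets:
    print(f"[{t.kind}] {t.name}")
```

The point of the structure is only that every target is a flat checklist item with enough metadata to organize it later.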

Forget Test Plans – Create a Playbook

Think of this section as designing/creating a playbook. When we get a new release – we do this. When we test this feature right here, we do that. You should organize the plays in the playbook according to the conditions that exist before you run the play.

Back to my current project – when we receive a device firmware update or upgrade – the set of tests (the play) that we should run to accept that new configuration should be easy to find.
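One way to picture a playbook keyed by preconditions is a mapping from a triggering condition to the tags of the checklist items that make up the play. Everything below (condition names, tags, target data) is a hypothetical sketch, not a feature of any particular tool:

```python
# Playbook: each entry maps a precondition ("when we receive X") to the
# checklist tags that make up the play for that condition.
playbook = {
    "firmware update received": ["environmental", "power"],
    "new business process version": ["business-process"],
}

def select_play(condition, targets):
    """Return the checklist items belonging to the play for this condition."""
    wanted = set(playbook[condition])
    return [t for t in targets if t["tags"] & wanted]

# A date-free checklist of targets (illustrative values).
targets = [
    {"name": "Operates from -20C to +60C", "tags": {"environmental"}},
    {"name": "Operates from 10.8V to 13.2V", "tags": {"power"}},
    {"name": "Order entry completes in under 3s", "tags": {"business-process"}},
]

play = select_play("firmware update received", targets)
```

The lookup is the whole design: given the condition that exists before you test, the right set of tests should be one step away.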

The Test Lab is for Test Sessions

Think of this section as recording the results of test sessions. For any given release or time period, pull plays from the playbook, run the play, discover what more you can/should do, update the play book, run the new/revised plays, rinse, repeat, wipe hands on pants.

At the end of the session, upload your session record so that you have evidence of having tested. Revisions of the playbook are evidence of having learned something about the product, and of continual improvement.

Current project example again – the February 12 release firmware tests were run over a period of a few days by two engineers and their session record (a Word document) was uploaded into this section in the test management tool. We will create a new folder for the March 29 release and go back to the source – the playbook – to define the firmware test session that will be held at that time. We will not re-run the checklist in the February 12 release section.
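A session record in this scheme might carry just a few fields. The sketch below assumes hypothetical field names; the key property is that the release/date lives only here, while the play the session points to stays date-free in the playbook:

```python
# One test-session record: the ONLY artifact that carries a date/release.
# Field names and values are illustrative, not from any real tool.
session = {
    "release": "March 29",                   # metadata lives on the session only
    "play": "firmware update received",      # pointer back into the playbook
    "testers": ["engineer 1", "engineer 2"],
    "findings": [],                          # feeds the next playbook revision
}

def close_session(session, findings):
    """Attach findings; these drive the next revision of the playbook."""
    session["findings"].extend(findings)
    return session

closed = close_session(session, ["voltage check needs a second meter"])
```

When the next release arrives, a new session record points at the same (possibly revised) play; nothing from the old session is re-run as a checklist.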

Principles

DRY – Don’t repeat yourself. This works in code, and it works in checklists. Don’t organize your test targets by release since that limits your checklist’s value when used three years from now to test a new release or upgrade. If your list of test targets and your list of set plays look the same, then you really haven’t done any design work and you’re not helping your future self as much as you could be.

Ignore the temptation to put a release/time/date anywhere but in the metadata for test sessions. Nothing else needs a date: test targets and the plays in the playbook should survive every release of the solution until the solution itself is decommissioned. They are for all time. Well, at least as long as the solution itself survives. I would organize the test sessions by release.

Forget you’re on a project. You’re creating the list of test targets and a playbook for all releases, for the duration of the solution’s useful life. Without this goal – why are you using this tool again?

Tuesday, March 13, 2012

Requirements Management and Test Tools

When modeling requirements in English, one of the heuristics is to avoid ‘weak verbs’ such as ‘to process’, ‘to handle’ and ‘to manage’. The weak verb is subject to broader interpretation than other, more constrained (strong) verbs, and in writing software, broad interpretations are more likely to result in building the wrong thing.

Some test management tools help us ‘manage requirements’. What exactly does that mean? It’s ironic that we use a weak verb in describing a tool used for manipulating (ideally) pairs of strong nouns and strong verbs.

Increasingly, I find that I’m evaluating test management tools based on how they help me work and collaborate around those requirements and how effectively they help me use the requirements as test targets.

  • Create/edit/share checklists of test targets (including requirements).
  • Version-control and label checklists so I can associate them with other version-controlled artifacts like source code. I don’t need per-item version control, just version control on the checklist.
  • Layer profiles onto the checklist to highlight themes: the high-risk theme unites high-risk items, the new-construction theme unites items that are new, … and I get to invent the layers/profiles that suit my context.
  • Create/edit/share exploratory testing session charters based on a selection of items from a checklist.
  • Create/edit/share exploratory testing session records.
  • Compile checklists/session records into information radiators / dashboards for myself, for team members, and for external stakeholders.
  • Create/edit/share a script based on a selection of items from a checklist (could be one, could be many); creating a script for a given checklist item shouldn’t be the default.
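The "layer profiles onto the checklist" item above can be sketched briefly. The checklist stays the single source of truth, and each profile is just a named subset of its items; the item ids and profile names below are invented for illustration:

```python
# One checklist, several profile layers over it. Ids and names are
# hypothetical examples, not drawn from any real tool.
checklist = {
    "T1": "Operates from -20C to +60C",
    "T2": "Operates from 10.8V to 13.2V",
    "T3": "Order entry completes in under 3s",
}

profiles = {
    "high-risk": {"T1", "T3"},          # the high-risk theme
    "new-construction": {"T3"},         # items that are new this release
}

def view(profile):
    """Render the checklist filtered through one profile layer."""
    return [checklist[i] for i in sorted(profiles[profile])]
```

Because a profile only references ids, adding or retiring a theme never duplicates checklist content, which keeps the DRY principle from the previous entry intact.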

That’s what “manage requirements” means to me, a test manager.