Monday, January 30, 2012

Personal Kanban for Exploratory Testing–Part 1

Part 1 | Part 2 | Part 3 | Part 4

For this post I’m going to assume that you’re familiar with exploratory testing and personal kanban. There are many good references out there, and I encourage you to get out there and read them – and at the same time, keep your mind fresh for bringing your own definition into focus.

I have a couple of twisted definitions for you to use so you don’t have to rush around finding links. Just in case.

Exploratory Testing

For my purposes here, exploratory testing is test-driven testing. Weird, huh? It’s like going meta. And being proud of it.

When I explore I don’t start with a script. I run the system under test, so it is a form of dynamic testing; yet I don’t pretend to know which tests are best. So I discover them by running tests.

Test-driven developers elaborate requirements and specify designs by writing tests, and they use the running tests as an indicator of what’s done. It’s emergent because the number and extent of the examples and specifications required is unknown ahead of time. It’s as if the first expression of the requirements is a skull, and through continued refinement the developers fill in the facial features as they work. No one on the team pretends to know it all upfront; that is, no one knows what the face will actually look like until they go through the process.

Test-driven testers elaborate tests by running tests. Their drive is to find a better test than the one they just ran. Repeatedly. Once they discover a gem of a test, they look back at their journal and perhaps ‘Save As…’ so that someone else knows how they got there and how to run it again. I’m ignoring the session-based aspects of exploration and test chartering on purpose – you don’t need those details for the rest of this post. At least I don’t think you do. Aside: in the big picture they are important.

Personal Kanban

I use personal kanban as a mechanism for visualizing the ways that I could improve the work that I do. Visualizing the work is the key since it forces you to ask questions about your work:

  • what am I delivering? (work types, work items)
  • what is the life cycle I use as I complete work items? (board design)
  • who am I trying to help by doing this work? (who defines done)
  • what is my capacity? (work in process limits)

These are all good questions, and the kanban board (a left-to-right visualization of the work in play) is the radiator that answers them at a glance. I envision each work item as one of those glowing orbs that changes colour: green if the exploration is going well, red if I’m stuck, yellow and orange if I’m swamped with uncertainty.
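
To make those questions concrete, here’s a minimal sketch in Python of how each one might map onto a personal kanban model. Every name in it (WorkItem, Board, wip_limits, health) is my own illustration, not from any kanban tool; your columns and limits will differ.

```python
from dataclasses import dataclass, field

@dataclass
class WorkItem:
    title: str             # what am I delivering?
    customer: str          # who am I trying to help? (who defines done)
    health: str = "green"  # the glowing orb: green, yellow/orange, red

@dataclass
class Board:
    columns: list          # the life cycle I use (board design)
    wip_limits: dict       # my capacity (work in process limits)
    cards: dict = field(default_factory=dict)  # column name -> work items

    def __post_init__(self):
        for col in self.columns:
            self.cards.setdefault(col, [])

# An illustrative board; the column names and limit are placeholders.
board = Board(columns=["backlog", "in progress", "done"],
              wip_limits={"in progress": 2})
board.cards["backlog"].append(WorkItem("explore the import feature", "the ops team"))
```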

A Mashup of Exploratory Testing and Personal Kanban

Notice it says, “a mashup”. Not “the mashup”. This is a personal thing. What you end up with might be different, and that would be OK. The more you use your board, the more you will want to fiddle. So fiddle.

Let’s go through those questions.

What Am I Delivering – Work Types and Work Items

A great debate ensued. Fortunately I was the only debater. Once I realized I was talking to my own foot, I relented and just got to work.

If I’m exploratory testing, what should flow across the board? Candidates: test scripts (no – they don’t exist until the very end, if at all); test cases (yeah, but I don’t know what they are ahead of time, PLUS I could cheat and make these as trivial as I needed to in order to demonstrate hyper-productivity to my bosses).

Requirements. Scenarios. Minimum Viable Product. These are better for one reason: I’m not the only one that defines them. Their definition is (ideally, usually?) a team thing, or at least there is some consensus on what is being delivered. I tried these for a bit, and sure, they worked. What I soon discovered though was that I could put a lot through my board and nobody cared. What was important to me was cadence – I needed to achieve something and if the work item was too big, I lost that sense of achievement. If the work item was too small, it seemed like I was inventing progress.

So I came up with my own definition of a “minimum beneficial feature”, or MBF, as a work item. I adapted the definition so that I could focus on one feature at a time that someone actually cared about – which seems congruent with the definitions of quality I had read. At the same time, it needed to be small enough to give me that cadence. I like moving 2-5 things per day as a tester. In acceptance testing, when the testers are business users, this would be per user.

In one session I gathered the business leads for the work areas that were expected to adopt the solution and were most affected by its smooth operation. Got out the index cards and markers. Described what I needed. Gave them time to write on cards. Role-played the test process using a three-column kanban board as the backdrop (to test, being tested, done).
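
In code, that role-play board might look something like the sketch below. The WIP limit is an assumed number, and the card is just the text of an index card:

```python
COLUMNS = ["to test", "being tested", "done"]
WIP_LIMITS = {"being tested": 3}  # an assumed limit; tune to your own pace

board = {col: [] for col in COLUMNS}

def pull(card, src, dst):
    """Move a card one column to the right, respecting the WIP limit."""
    limit = WIP_LIMITS.get(dst)
    if limit is not None and len(board[dst]) >= limit:
        raise RuntimeError(f"'{dst}' is at its WIP limit - finish something first")
    board[src].remove(card)
    board[dst].append(card)

board["to test"].append("card text from the session")
pull("card text from the session", "to test", "being tested")
```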

And then I typed up all the index cards and used that as our collective backlog.

I didn’t immediately see a need for work types when I used the minimum beneficial feature as the work item, so I didn’t push it. And that worked for me. Let the work types evolve (I’ll address those in a subsequent post).

An interesting thing that happened over the course of time was that a work item we thought was a natural candidate – a defect or issue – never emerged. We kept needing to retest a corresponding minimum beneficial feature, so we moved it backwards on the board from ‘tested’ to ‘to be tested’ and attached the reasons for the move to the card.
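
As a sketch of that convention (the card fields and the reason string are my own illustration, not our actual cards): no defect card gets minted; the MBF itself moves backwards and carries its history with it.

```python
board = {"to be tested": [], "being tested": [], "tested": []}

def send_back_for_retest(card, reason):
    """Move an MBF card from 'tested' back to 'to be tested', recording why."""
    board["tested"].remove(card)
    card["retest_reasons"].append(reason)
    board["to be tested"].append(card)

card = {"title": "some MBF", "retest_reasons": []}
board["tested"].append(card)  # pretend it already flowed across the board
send_back_for_retest(card, "needs a retest - problem found in a related area")
```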

The other concept that we had to deal with was our desire to practice using the solution under test. Our mission was to find barriers to solution adoption in all affected areas of the enterprise, so acceptance testing was a business simulation – a miniature version of the entire enterprise, all in one testing room. One MBF could therefore intentionally be run through the board several times, even if no issues were detected on an earlier run. Do we move the item backwards? Do we define a separate work item with the same name? We chose the latter. Worked well.
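
Here’s a minimal sketch of that choice, with hypothetical field names: rather than moving a finished card backwards, mint a fresh work item with the same name for the next practice run.

```python
def new_practice_run(card):
    """Mint a separate work item with the same name for another pass
    through the board, instead of moving the finished one backwards."""
    return {"title": card["title"],
            "run": card.get("run", 1) + 1,
            "retest_reasons": []}

first = {"title": "some MBF", "run": 1, "retest_reasons": []}
second = new_practice_run(first)  # same name, its own trip across the board
```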

As an aside: this meant I had some late nights/early mornings figuring out what we needed to re-cycle through the board, and a lot of talking to the business people during the course of the day, gauging our risk level and their tolerance of it. Every morning during testing I created a fresh daily backlog, informed by yesterday’s weather and by where we had planned to be ahead of time. Management liked that I could report the number of MBFs we expected to test versus the number we were able to test every day.
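
A sketch of that morning ritual follows; the default capacity echoes the 2-5 cards per day I mentioned earlier, and all the function and field names are mine, not anything we formalized.

```python
def build_daily_backlog(carried_over, planned_for_today, capacity=5):
    """Yesterday's leftovers first, then today's plan, capped at capacity."""
    backlog = carried_over + [c for c in planned_for_today if c not in carried_over]
    return backlog[:capacity]

def daily_report(expected, tested):
    """The number management liked: expected vs. actually tested MBFs."""
    return f"MBFs expected: {len(expected)}, tested: {len(tested)}"

today = build_daily_backlog(["MBF retest"], ["MBF A", "MBF B", "MBF C"])
print(daily_report(expected=today, tested=today[:2]))
```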

As another aside: I wish we had done more of this with business people other than the ones we had seconded to the project. There were still post-go-live adoption issues because our testers had become so darn good at using the system – by the time our acceptance testing wrapped, they were no longer good at picking out new-user adoption problems.

Who Defines Done

I’ll give you a hint. Not you. More coming in Part 2.

2 comments:

Jim Benson said...

Hurry up and define done!

Adam said...

Thanks for the comment! In some contexts, being done testing is the hardest thing to declare... (of blog posts too, in some cases. Ahem.)