Thursday, April 28, 2011

Checklist Formatted as a Dashboard

 

Context: I have used this format on a number of user acceptance initiatives; in one of those cases, a main component of the solution was delivered by an agile team that used both automated unit tests and automated functional tests.

The concept of a “session” was not consistent across projects. On one project there was one of these per day: a 1-4 hour block of testing that an end user or group of end users completed, which was useful for scheduling part-time testers into the test effort. I also gave those user-testers a session record template in which to journal their test results. On another project, the “session” was really a test cycle rather than a session at all, but the format was still useful as an information radiator.

Everything, session records and checklist/dashboard alike, was posted to a project team site so that everyone could see the latest updates whenever they wished.

There is another variation of this format that I’ve used which replaces the traffic light metaphor with a weather indicator (five different images, from stormy to sunny) so that there are more than just three choices available.
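
The idea translates directly into a simple status calculation. Here is a minimal sketch, assuming the indicator is driven by something like the pass rate within a checklist area; the five level names and the even thresholds are illustrative only, not the exact scale I used.

```python
# Illustrative only: map a checklist area's progress to one of five
# weather images. Level names and the even thresholds are assumptions.
WEATHER_LEVELS = ["stormy", "rainy", "overcast", "partly sunny", "sunny"]

def weather_for(passed: int, total: int, blocked: bool = False) -> str:
    """Pick a weather image for a checklist area."""
    if blocked or total == 0:
        return "stormy"
    ratio = passed / total
    # Five even bands from stormy (0%) to sunny (100%).
    index = min(int(ratio * len(WEATHER_LEVELS)), len(WEATHER_LEVELS) - 1)
    return WEATHER_LEVELS[index]

print(weather_for(passed=7, total=10))   # -> "partly sunny"
print(weather_for(passed=2, total=10))   # -> "rainy"
```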

Sample UAT Checklist in One Page Project Manager format

More details on the one page project manager format for other purposes are available at oppmi.com (my apologies for posting an incorrect URL to the forum earlier).

This dashboard, along with an associated burn-down chart showing actual progress compared to ideal progress, is consistently ranked highly in retrospectives (I host a test retrospective separate from the project retrospective for initiatives like these).
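
For anyone building the burn-down chart alongside the dashboard, the arithmetic is simple. This sketch assumes the unit of progress is completed sessions and that the ideal line falls in a straight line to zero over the scheduled days; the numbers are invented for illustration.

```python
# Illustrative sketch of a burn-down: ideal vs. actual sessions remaining.
def ideal_remaining(total_sessions: int, total_days: int) -> list[float]:
    """Ideal sessions remaining at the end of each scheduled day."""
    return [total_sessions - total_sessions * day / total_days
            for day in range(1, total_days + 1)]

def actual_remaining(total_sessions: int, completed_per_day: list[int]) -> list[int]:
    """Actual sessions remaining after each day's completed sessions are logged."""
    remaining, series = total_sessions, []
    for done in completed_per_day:
        remaining -= done
        series.append(remaining)
    return series

print(ideal_remaining(20, 10))           # [18.0, 16.0, ..., 0.0]
print(actual_remaining(20, [1, 3, 2]))   # [19, 16, 14]
```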

One of the oft-hidden benefits of this checklist is that it is useful for the next release of the solution, especially if you revise it every release to reflect what the organization has learned during the previous one. Refactoring, of a sort.

Tuesday, April 12, 2011

Feature Advocacy

Take everything you know about bug advocacy – the art of maximizing the likelihood that any given bug is fixed in proportion to its impact on those who care about it – and direct it towards feature advocacy – the art of maximizing the likelihood that any given feature is adopted in proportion to its value to those who would use it.

In a world where the concept of a ‘project’ as a way of working is being eaten away by pull systems and continuous delivery models, one possible side effect is the death of the bug report.

Features (user stories, etc.) are either ready to be adopted by the user community, or they are not. As complex and subjective as that decision might be, it is still ultimately a decision that a person makes. As testers, we investigate and explore and communicate so that the decision is an informed one. The people employing a pull system might require a few more columns than just two – but fundamentally, there is a state that a feature must reach that says, “as far as we can tell, we can adopt this.”

Everything we gather during testing is data about that particular feature – successes, failures, shortcomings, omissions; its beauty, economy, utility – and all of that information should be available when making that decision. Whether or not that feature plays well with other features is still information about that feature, especially if that other feature is already in active use. We either move the kanban card for the feature to the right, or we do not.

So why create a bug report? Why introduce a whole new ‘thing’ to create, resolve, test, re-open, close, etc.? Why not just add to the information that is known about that feature and ask the appropriate authority to either move it to the right, or send it back to the left?
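
A minimal sketch of that idea follows; the column names and the Feature structure are illustrative, not any particular tool’s data model. Everything testing teaches us is simply attached to the feature, and the only state change is the card moving right toward adoption or back to the left.

```python
# Illustrative only: findings stay attached to the feature they concern;
# there is no separate bug-report entity.
from dataclasses import dataclass, field

COLUMNS = ["ready", "in progress", "in test", "adoptable"]  # assumed column names

@dataclass
class Feature:
    title: str
    column: int = 0
    findings: list[str] = field(default_factory=list)  # what testing has taught us so far

    def note(self, finding: str) -> None:
        """Record what we learned about this feature; no new work item is created."""
        self.findings.append(finding)

    def move_right(self) -> None:
        self.column = min(self.column + 1, len(COLUMNS) - 1)

    def send_back_left(self) -> None:
        self.column = max(self.column - 1, 0)

# A hypothetical feature card, partway across the board.
card = Feature("Export statements to PDF", column=COLUMNS.index("in test"))
card.note("Totals are wrong when a statement spans a year end")
card.send_back_left()  # the appropriate authority decides it is not adoptable yet
```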

I understand that kanban has the capacity to include a ‘bug’ work type, and a class of service that gets serious ‘bugs’ resolved sooner. But detaching that ‘bug’ from the work item whose adoption it prevents – the feature that was being explored when the ‘bug’ was identified – is a problem to me. I would vote for splitting the feature if and only if some of the original feature could still be adopted (or considered for acceptance, or whatever the end state of the kanban is). I suggest that the problem preventing acceptance be kept with the item being considered for said acceptance.

Feature splitting doesn’t generate a bug report; it generates another feature, presumably worded the same way that any other feature is worded. Still no bug report.

And we would again advocate for its adoption given the value it brings to the user community in the context of all the other features being considered.