Thursday, December 29, 2011

Beyond the Agile Testing Quadrants

You can build the right product and you can build it right, and still not deliver value to the customer/user. For any number of reasons, they don’t adopt it easily, completely, or on time.

You can blame them. Luddites.

You can blame others. Microsoft.

You can blame yourself. Incompetent. Need a hairshirt.

Or you can evolve your testing to include user adoption and benefits realization. Ask questions aimed at finding user adoption issues and explore risks to benefits realization.

I remember a Twitter post from Joshua Kerievsky about usage data – how, once he and his team started using it, evolving a product without it seemed inadequate. In my mind, this extends the agile testing quadrants to finding adoption issues in addition to helping discover the right product.

A/B testing is another example. This is using real customer behaviour to guide the evolution of the product. Imagine what a tester could do in this field – with their skills tuned towards asking the important questions at the right time. I think it’s filled with opportunity.

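To make that concrete, here is a minimal sketch of deterministic A/B assignment – the experiment name and the even split are my assumptions for illustration, not a prescription:

import hashlib

def ab_variant(user_id: str, experiment: str = "checkout-flow") -> str:
    # Hash (experiment, user_id) so a user always lands in the same
    # variant across visits, without storing per-user state.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

The tester’s questions then shift: is the split actually even, are we measuring the behaviour we think we’re measuring, and what happens to users mid-experiment?
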
In the enterprise domain (where I spend all my time) the agile testing quadrants are not enough. You can follow them bravely and assuredly and still not deliver value because of:

  • an inadequate end-user communications plan
  • an inadequate end-user training program, or having a gap between training and when the end-users need to start using the solution
  • using the same testers in every test cycle or continuously, making them more expert and less skilled in finding adoption issues
  • not translating pre-release performance testing to post-release performance monitoring

So, because I love the agile testing quadrants for getting the right product built the right way, I thought I’d extend the idea to include supporting adoption and supporting benefits realization.

[Image: the extended agile testing quadrants]

Q1: Discover barriers to solution adoption by the support/sustainment/operations group(s). For example, assess the communications and training for support/sustainment personnel.

Q2: Discover barriers to solution adoption by end-users/customers. For example, assess the communications and training for end-users/customers, use A/B testing to determine/optimize features/interactions the customer cares about.

Q3: Discover risks to benefits realization that originate with the end-users/customers using the solution. For example, monitor the benefits to see if there is progress towards their realization.

Q4: Discover risks to benefits realization that originate in support/sustainment/operations. For example, monitor solution performance on an ongoing basis so that performance bottlenecks don’t become barriers to effective usage and, in turn, to the planned benefits being realized (already a common practice).

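As a trivial sketch of what translating pre-release performance testing into post-release monitoring can start as – the URL and threshold below are placeholders, not recommendations:

import time
import urllib.request

def check_latency(url: str = "https://example.com/health",
                  threshold_s: float = 2.0) -> None:
    # Time one request; flag it when it is slower than the threshold
    # that pre-release performance testing established as acceptable.
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10):
        pass
    elapsed = time.monotonic() - start
    if elapsed > threshold_s:
        print(f"SLOW: {url} took {elapsed:.2f}s (threshold {threshold_s}s)")

Run something like this on a schedule and the pre-release baseline becomes a post-release alarm.
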
It’s a start. I almost put the agile testing quadrants themselves in Q2 and I might yet put them back there. For now, I’m hoping this conveys the way that testers need to start thinking:

  • write a test strategy that lasts forever, not merely to the end of the project and/or release; deployment is not done 
  • create checklists that the solution delivery team can evolve forever, not to just get the solution shipped/in production but one they can use to continually increase the likelihood of user adoption and benefits realization; deployment is not done 
  • explore barriers to solution adoption; expand the list of test targets to include end-user communications and training; deployment is not done 
  • explore risks and barriers to benefits being realized; deployment is not done 
  • do this continually; deployment still isn’t done

Maybe you detected the theme in the above points: deployment is not done.

You can’t afford to think like this if there are product defects that decrease the likelihood of user adoption. You’ll be too busy writing bug reports. So yeah, this is complementary to agile testing (and the agile testing quadrants concept). These new quadrants are the next step, at least in the enterprise.

Wednesday, December 07, 2011

Vacation Smoke

As test managers, we ought to coach others on giving and receiving feedback with grace, since creating feedback cycles is so much a part of testing. In that line of thinking, Scott Berkun’s latest book, Mindfire, contains an essay on feedback that should be required reading for test managers and testers alike.

I have a story about giving/receiving feedback that has stuck with me for many years.

---------------------------------------

A boss from a previous lifetime told me this story. It was from an ancient time when you could still smoke on airplanes.

Brian and his wife were going on a vacation to a sunny destination from their home in Winnipeg, Canada – I think it was Cuba. Festive and feeling good, Brian lit up a small cigar just after they got seated and looked out the window, on top of the world.

His wife was beside him in the three-seat row – he was in the window seat – and asked him to put out the cigar in her usual way. He made a special point, when he was repeating this story to me, that her usual way was loving and warm. Not a demand in any sense of the word. They were both feeling great about the vacation; there was no tension there. A simple request.

“It’s bothering me in this space. I don’t care when we’re at home because I can open a window or leave the room but here I can’t leave my seat. Do you mind?”

In relaying his response, Brian choked up a tiny bit. He told me that his response was swift and direct. It was not mean, but it was without hesitation: “No way. I’m on vacation. This is something that I do on vacation.”

She sat back. Disappointed, but committed to enjoying every bit of the vacation, including the flights there and back. She closed her eyes, cuddling his arm with both of her hands.

A few minutes later, a woman sat in the aisle seat next to Brian’s wife, settled in, read the in-flight magazine for a bit and then leaned forward, asking Brian if he would mind putting out his cigar. Another polite, non-demanding request (these are Canadians after all). Brian put it out immediately.

This was when Brian really choked up. “There was the woman beside me, the one I married, the beautiful woman that I have planned my life with asking for a simple favour and I refuse her. Yet a stranger asks the exact same favour and I don’t hesitate to comply.” He paused for effect. “Never again.”

To his credit, Brian told the story to all his staff. There are many lessons here: one about serving those closest to you as you would a stranger, and a second about teaching others what you’ve learned. The third is more obscure, but clear once I tell the rest of the story.

Brian looked at his wife. She opened one eye at him, a small, sly smile curling the edges of her mouth upward. He was looking at her with an embarrassed smile on his face.

“Sorry honey. That should have been for you.”

“I know.”

The third lesson is his willingness to stop and talk and to do a micro-retrospective on his own behaviour and adapt.

Then there is her response. The graceful acknowledgement and gentle teaching she chose to give him. She could have been mad, she could have embarrassed him further, and she could have spent the rest of the flight making fun of his mistakes, perhaps even with the woman sitting beside her. She chose none of those options; instead she respectfully and gracefully showed him the way.

As test managers, we ought to coach others on giving and receiving feedback with grace. A feedback cycle is an important part of testing. Turning the feedback cycle negative is generally harmful and rarely, if ever, needed. Sure, graceful feedback takes energy, but it is worth it.

Friday, October 14, 2011

Benefits Modeling

The user story template helps us build the right thing by identifying why we want a particular capability in the product we are working on.

As a <blank>
I want to <blank>
so that <blank>

Interpreted from a benefits modeling perspective and mapped directly into the structure as it stands, the template is

As a <stakeholder>
I want to <be able to perform or have performed some enabling action>
so that <I get some specific benefit>

The order seems odd to me, that is, identifying the enabling action before we identify the benefit. Most businesses drive initiatives from benefits first, and determine the most appropriate actions second. Maybe it should be

As a <stakeholder>
I want <this benefit>
so I need to <be able to perform or have performed some enabling action>

The best place for this kind of thinking? In the project charter – the vision section. Why have a vision statement when you can have a whole set of benefits and enabling actions? They fit nicely into the concept of story mapping too:

As a <stakeholder>
I want <this benefit>
so I need <this user story>

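A hypothetical, filled-in example (the domain details are invented):

As a claims manager
I want claims settled within five business days
so I need one-click approval for low-risk claims
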
Now that’s a vision worth sharing.

Wednesday, September 14, 2011

The Tourist Trap

Recently, at a conference I didn’t attend, there was a talk on exploratory testing “tours” suggesting that, by adopting various personas, you might change up the “tour” you were on to fit the context.

Seems reasonable. The problem is, I don’t like to travel that way.

When I travel, I use a tour as a last resort – as in, you aren’t going to see that attraction any other way than to take a tour. Like the Vatican. But Rome, the city? A tour isn’t for me, even if I’m the one making up the tour’s stops and activities.

When in Rome, we stayed in a quiet residential area and walked to local restaurants that only had single-language menus. I had an even better experience traveling to Argentina, where I could live there and experience what “there” is and who “they” are.

Traveling on a tour, even one of my own invention, I have never experienced cultural authenticity beyond being an observer. Seth Godin describes this harshly in his book Graceful when he says, “if you go as a tourist, you will come back as you went”. Being a tourist is being there but being insulated – an observer, but not someone participating in something bigger.

Here’s the trap. A “tour”, in the sense I’m thinking about, is akin to a scripted experience. Restricting me to certain hallways at certain times. Keeping me away from the arrival city so that I only see certain highlights, insulated from ugliness, hustled away from the imperfect.

That’s not the way I want to test. I want the ugly bits, quickly. I want to smell the garbage and follow my nose to the source. I want to appreciate the structure, feel it, breathe it, “get it” in the gut, and then see the ugliness in that context.

“You never really understand a person until you consider things from his point of view […] until you climb into his skin and walk around in it.” – Harper Lee in To Kill a Mockingbird, spoken by the character Atticus.

So avoid interpreting “tour” as the happy-path tour in the tourist brochures. Certainly, sometimes traveling this way changes my mind about something. Or someone. Opening myself up to someone else’s mental model is a complex experience, needing skills ranging from humility/compassion/patience to technical/problem-solving/communication skills. Understanding is a non-competitive way of gaining credibility with someone else; show them you understand their model, their way, their view. Paraphrase, then… shift. Them, or you. Travel changes you, if you let it.

Because then, when you do find a problem, you already understand before you have to help someone else understand (sorry St. Francis, you’re irresistible again). You can anticipate their fears with what you’re about to say, and help them through that fear. I’m using the fear word in the same sense as in Fearless Change: the fear of change, the fear to change. Because sometimes even a small bug shatters someone else’s view of the way things are. This gives you the insight and ability to lead them. Not push them, cajole them, or bully them into making that change. Lead. Yes, lead. Leadership.

So take your tours, sure. Just don’t stay on the happy-path tour. Take the other tours, like commenter Philk points out – the garbage collector tour, the red light tour, … Don’t do things just to check them off the checklist. Pivot. Shift. Change your mind about the way things work. Start a new tour without planning ahead.

Be like my Dad - in tears at the Roman Coliseum from grasping the immensity of it all. He let the place in. He hasn’t been the same since.

Thursday, July 14, 2011

A Measure of Trust

When we complain about metrics, it seems that we focus on the potential for abuse. A team that will optimize to whatever measure is put into place. A consultant that considers yak-shaving billable, even if that means their projects overrun.

But what about metrics when the environment is a high-trust environment?

What if the team truly wanted to improve, their management truly wanted them to improve, and their clients were bought in as well? If the potential for abuse is eliminated (or at least unlikely), why not use simple, non-invasive measures to rally around? Why not celebrate that half-day improvement in cycle time?

Work on trust first, then metrics.

Thursday, July 07, 2011

UAT as a Lean Startup

User acceptance testing (UAT) – that misunderstood human activity that is squeezed from front to back by late delivery and set-in-stone go-live dates – is an opportunity to use lean tactics.

The user acceptance team is a cross-functional team. An enterprise solution will have many different flavours of technical folk and an equally diverse range of business folk. If it were a coach on the sidelines, Lean would say to bring those diverse individuals together early to craft a shared approach.

The time allocated to UAT is fixed, and might shrink. Our coach Lean would say it’s better to identify the things that really matter and get to them first – an excellent example of a context for relentless prioritization.

The technologies you use to set up, track, and communicate with don’t have to be the expensive ones. Lean would say you can be capital-efficient by favouring simple tools – checklists over test management systems, burndown charts over tool-generated coverage reports, a kanban board for discovered issues over a defect tracking system (issue severity is a natural attribute to use for defining classes of service).

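As a sketch of that last idea – severity as a natural attribute for defining classes of service – a mapping this small is enough to start with (the tier names are my assumptions, not a standard):

# A class of service decides how an issue queues, not just its priority.
CLASS_OF_SERVICE = {
    "blocker": "expedite",     # swarm it now, bypass WIP limits
    "major": "fixed-date",     # must be resolved before go-live
    "minor": "standard",       # normal pull order
    "cosmetic": "intangible",  # pull when capacity allows
}

def class_of_service(severity: str) -> str:
    return CLASS_OF_SERVICE.get(severity, "standard")
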
Then there’s the communication piece. As lean is transparent to and inclusive of the customer, UAT is transparent to and inclusive of its customers – developers and those people making the acceptance decision. Constant communication and adjustment should be expected, and given. The coach would suggest that since a decision is required, asking what information those decision-makers need in front of them to make that decision is prudent. And do that at the beginning.

Tuesday, June 28, 2011

Ready, Set, Fail

You’re a test manager in a waterfall-style solution delivery project. UAT is coming; it’s in the project plan and the delivery team is working hard to get the solution ready for it. Really hard. The project manager from the humungous-company vendor is coming down hard on them.

The test environment, the test data, the master test data, the people lined up ready to test, the scripts written and distributed. You’ve done your part.

If UAT goes well, you’re the hero of the day.

If UAT uncovers issues, well, you’re telling bad news to a large audience that doesn’t want to hear it. No one wants to hear it. Not the delivery people, not the implementing company, and certainly not the project manager. The aggressive, classically-trained, command-and-control project manager.

They fight to lower the issue severity. They challenge your evidence that the issue isn’t new but rather was re-introduced in the rush. They censor your status report and work hard to make sure you are isolated and shielded from the project stakeholders. A subtle censorship, sometimes not so subtle. You feel terrible, you feel like a failure for having caused so much trouble.

It’s go-live. Cutover to production. The deployment script is run, the users are notified, stakeholders sign off. And then there are issues. Issues that come in all sizes of pyjamas, and none of them can be easily put to bed. You are certain that those scenarios were tested; you know it. You have the evidence. And yet there are issues: some subtlety that wasn’t checked, or some difference between the test environment and production got exposed, or some scenario varied from what was trained. You feel terrible, you feel like a failure for having caused so much trouble.

--

You’re a test manager on a traditional, waterfall-style delivery project with a command-and-control project manager. You will feel terrible, you will feel like a failure for having caused so much trouble.

Unless.

Unless you know this is coming and prepare everyone, including the project manager, for those issues. Scrap the scripts and replace them with checklists built by people that know the business, and convert the checklists into information radiators for those not close to the project. Lower the barriers to issue resolution by skipping root cause analysis in favour of finding the fastest route to the individuals that can resolve the issue. Don’t think deployment is done. Use an information radiator for issues that everyone can access, and update it daily. Publish the link so that anyone, everyone, can see it from anywhere.

Transparency is the only way to get around the censorship, and if you set that up from the beginning, it will be embraced by even the project manager. I’m willing to bet that the project status report will ultimately contain parts of, or links to, the information radiators you set up.

Friday, May 13, 2011

Working with Business Testers

From Nancy Kelln’s presentation at STPCon, business testers

  • are not testers
  • do not want to be testers
  • require guidance and support
  • require their expectations to be managed
  • may be working part-time on the project

and all of these things need to be considered when creating the test strategy/approach on a project that will utilize them.

My last two projects as QA/test lead launched for 3500 and 7500 users respectively, and I can say that we only used a smattering of professional testers on those projects (not counting the professional testers that might have been working for the vendors). The end-of-cycle testing was all done by business testers, that is, people that would be classified as full-time end-users of the solution.

I can add the following to Nancy’s list:

  • business testers tend towards happy-day scenarios; adjust your coaching so they get a sense of what exploring means
  • avoid technical testing terms like boundary value analysis, equivalence class partitioning, etc. in your coaching; introduce the concepts, but through examples
  • removing obstacles to their testing is critical, especially since they rarely differentiate between a test environment problem and a problem with the solution – it’s all the same to them
  • you can use session-based testing and call sheets as a way of managing part-time testers; a call sheet tells them what to investigate and when, and gives them space to report back. I populate the call sheets with items from a main checklist everyone is working from (a skeleton appears at the end of this post).
  • as a test lead, avoid becoming a test enforcer – you won’t be able to convince them to test in a timeline that actually helps you. Instead, make them aware that they are driving the investigation in their area and that how much they test is their decision; then communicate, communicate, communicate what they have told you and what they end up doing.
  • business testers can absolutely use exploratory testing even in projects with stringent audit and control requirements; you do need to state that this is the process you will follow, and then be able to demonstrate that you followed that process. You will need to instruct the testers on what evidence they need to gather for you.
  • another risk is what I refer to as cloister vision – the business testers get too good at using the system under test and are no longer adequate proxies for the end-users that will soon be required to adopt the solution; mix it up a bit if you can. A core of business testers is a good idea when they are providing test data for downstream workflows but for investigating solution adoption concerns, work with the training people to get some additional testers that have been trained on the solution involved so that you can observe the “unboxing” first-hand.

That last one – cloister vision – was largely responsible for go-live complications that involved call centre software. Yeah. My customer’s customers noticed, and that hurt, big-time. It’s why I keep writing about solution adoption considerations and have started using “deployment is not done” as a mantra.

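As mentioned in the list above, here is a bare-bones call sheet skeleton. The fields are the ones I tend to include; treat it as a starting point, not a standard:

Call Sheet
  Tester: ______________    Area/charter: ______________
  When to test: ______________    Time box: ______________
  Items to investigate (pulled from the main checklist):
    1. ______________
    2. ______________
    3. ______________
  What you found (including anywhere you went off-list):
  Questions/obstacles for the test lead:
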
Thursday, May 05, 2011

Whatever In a Flash

If you enjoyed Agile in a Flash: Speed-Learning Agile Software Development by Jeff Langr and Tim Ottinger and were burning to create your own whatever-in-a-flash as a result, then here’s a template for you in Word 2010 format.

I print on card or cover stock and then, if it’s low volume, cut it myself. For volume cutting, it’s easier to take it to a print shop with one of those guillotine cutters. For my stack of 15 sets of 35 cards, it cost me less than $5 for this service, and it would have taken me hours to do myself…

Sunday, May 01, 2011

Strategy vs. Plan

The strategy survives multiple iterations, multiple releases, multiple projects for the same, albeit evolving, solution.

The plan implements the strategy by naming names, specifying dates, and identifying the risks that guide the investigation for that iteration or release.

This way, we can use the lessons learned during each investigation to continually improve the strategy and not get lost in the concern about deadlines.

Thursday, April 28, 2011

Checklist Formatted as a Dashboard

Context: This format was used on a number of user acceptance initiatives; in one of those cases, one of the main components of the solution was delivered by an agile team that exploits both automated unit tests and automated functional tests.

The concept of a “session” was not consistent throughout. On one project, there was one of these per day – the “session” was a 1-4 hour block that an end-user or group of end-users completed. This was useful for scheduling part-time testers into the test effort. I also gave those user-testers a session record template to journal their test results in. On another project, the “session” was really a test cycle and not a session at all, but the format was still useful as an information radiator.

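To make the format concrete, a single (entirely hypothetical) row of the checklist/dashboard might read:

Business process          S1   S2   S3   Owner
Submit expense claim      G    G    Y    J. Smith   (Y: approval step slow)

where G/Y/R are the traffic lights and S1-S3 are the testing sessions.
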
Everything, session records and checklist/dashboard, was posted into a project team site so that everyone could see the latest updates when they wished.

There is another variation of this format I’ve used that replaces the traffic-light metaphor with a weather indicator (5 different images, from stormy to sunny), so that there are more than just three choices available.

Sample UAT Checklist in One Page Project Manager format

More details on the one page project manager format for other purposes are available at oppmi.com (my apologies for posting an incorrect URL to the forum earlier).

This dashboard, and an associated burn-down chart showing actual progress compared to ideal progress, have consistently ranked highly in retrospectives (I host a test retrospective separate from the project retrospective for initiatives like these).

One of the oft-hidden benefits of this checklist is that it is useful for the next release of the solution, especially if you revise it every release to reflect what the organization learned during the previous one. Refactoring, in effect.

Tuesday, April 12, 2011

Feature Advocacy

Take everything you know about bug advocacy – the art of maximizing the likelihood that any given bug is fixed as per its impact on those who care about it – and direct it towards feature advocacy – the art of maximizing the likelihood that any given feature might be adopted as per its value to those who would use it.

In a world where the concept of a ‘project’ as a way of working is being eaten away by pull systems and continuous delivery models, one possible side effect is the death of the bug report.

Features (user stories, etc.) are either ready to adopt by the user community, or they are not. As complex and subjective as that decision might be, it is still ultimately a decision that a person makes. As testers, we investigate and explore and communicate so that the decision is an informed one. The people employing a pull system might require a few more columns than just two – but fundamentally, there is ultimately a state that a feature must reach that says, “as far as we can tell, we can adopt this.”

Everything we gather during testing is data about that particular feature – successes, failures, shortcomings, omissions; its beauty, economy, utility – and all of that information should be available in making that decision. Whether or not that feature plays well with other features is still information about that feature, especially if that other feature has already been in active use. We either move the kanban for the feature to the right, or we do not.

So why create a bug report? Why introduce a whole new ‘thing’ to create, resolve, test, re-open, close, etc.? Why not just add to the information that is known about that feature and ask the appropriate authority to either move it to the right, or send it back to the left?

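A minimal sketch of what “adding to the information known about that feature” might look like in data terms – the field and column names are invented for illustration:

from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    column: str = "in-test"   # position on the board
    findings: list[str] = field(default_factory=list)

    def note(self, finding: str) -> None:
        # Successes, failures, shortcomings, omissions – all attached
        # to the feature itself; no separate bug report is created.
        self.findings.append(finding)

    def decide(self, adopt: bool) -> None:
        # The appropriate authority moves the card right, or sends it back left.
        self.column = "adoptable" if adopt else "in-development"
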
I understand that kanban has the capacity to include a ‘bug’ work type, and a class of service that gets serious ‘bugs’ resolved sooner. But detaching that ‘bug’ from the work item whose adoption it prevents – the feature that was being explored when the ‘bug’ was identified – is a problem to me, and I would vote for splitting the feature if and only if some of the original feature could still be adopted (or considered for acceptance, or whatever the end state of the kanban is). I suggest the problem preventing acceptance be kept with the item being considered for said acceptance.

Feature splitting doesn’t generate a bug report, it generates another feature, presumably worded the same way that any other feature is worded. Still no bug report.

And we would again advocate for its adoption given the value it brings to the user community in the context of all the other features being considered.

Tuesday, January 04, 2011

Checklists, Refactored

Part 1 – Background

Guided exploratory testing means using checklists to support and guide the exploration. It also means using those checklists as the backbone for communicating to other stakeholders on the project/initiative.

I’ve started to advocate the term ‘testlists’ to replace ‘checklists’ when they are used this way, that is, when checklists are used to support testing and the communications on and about a test initiative/mission (credit goes to Michael Bolton for positing the difference between checking and testing).

Supporting the Acceptance Decision

The procurement process for a new solution includes an acceptance decision, well described by Gerard Meszaros, Grigori Melnik and Jon Bach in Acceptance Test Engineering Guide – Volume 1 (http://testingguidance.codeplex.com/).

The acceptance decision is usually undertaken late in the project life cycle, in a critical-path “acceptance test phase”. This is true even if agile testing, acceptance test-driven development (ATDD), or behaviour-driven development (BDD) is the development style of the team delivering the solution. I can’t emphasize that point enough. Ron Jeffries put it succinctly in a tweet:

“don't care if they are ATDD'd. I'm a user of this stuff. Needs to be in the farking manual. if they had a farking manual” – Ron Jeffries

Ron was talking about adopting the software in question, the step that comes after the development and deployment. In short, what happens after ‘done’.

I’m advocating exploratory testing for finding the information needed to make the acceptance decision, as do many others. The tweak I propose is using testlists to support collaborative planning of the exploration, to understand where the explorers currently are, and to report results to outsiders – internal/external auditors, steering committee members, board members, etc.

Checklists and Testlists

The difference between a checklist and a testlist is the following:

  • A checklist lists things to check; a testlist lists things to explore in and around.
  • To most people, a checklist is a means of tracking to completion. Check these items and you are done. A testlist is a list of places to start. Where you “end” depends on what you find in the micro-context of each item on the list.
  • A testlist has attributes – such as risk, business value ranking, and persona – attached to its items so that explorers can use that information to decide what to explore first, given a limited amount of time (see the sketch after this list).
  • A testlist might be hierarchical to support communicating to different audiences. A testlist might also evolve into a mind map depending on the usage and in that case be called a testmap.

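As a sketch of the attribute idea from the list above – the field names are mine, not a standard:

from dataclasses import dataclass

@dataclass
class TestlistItem:
    title: str             # a place to start exploring, not a step to check
    risk: str              # e.g. "high" – explore these first
    business_value: int    # ranking supplied by stakeholders
    persona: str           # whose perspective to explore from
    status: str = "not started"

def explore_first(items: list[TestlistItem]) -> list[TestlistItem]:
    # With limited time, sort by risk then value to choose starting points.
    risk_rank = {"high": 0, "medium": 1, "low": 2}
    return sorted(items, key=lambda i: (risk_rank.get(i.risk, 3), -i.business_value))
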
We use testlists in the format of the one-page project manager (http://oppmi.com/) so that we can also provide testlist audiences with an indication of status and of when, and by whom, the exploration/investigation will be done. But really, any format will do. The content of the testlist has been business processes/workflows/functions from the business user’s perspective; or, if one exists, the story map that guided the solution development can be the starting point for the testlist content.

More on format, building testlists, and refactoring testlists in upcoming posts.

A.