Saturday, December 17, 2016

The Calculus of Acceptance Testing

It’s tempting to believe that acceptance testing is the straightforward process of comparing the as-built solution to the as-required solution. Any differences discovered during testing are highlighted in defect reports and ultimately resolved through a defect resolution process. You’re done when the business has tested everything and enough of the differences have gone away for the business to believe it could use the solution as part of its regular business activities.


It’s tempting to believe this statement because it’s simple and it seems reasonable. Looking closely, however, it’s an oversimplification that relies on three assumptions that are difficult to uphold in practice.


First, the assumption that there is agreement on what the “as-required solution” really is. The various people involved may not share the same mental model of the business problem, the solution to that problem, or the implementation of that solution. There may be a document that represents one point of view, or even a shared point of view, but even then it only represents a point in time - the moment the document was published. In a large enough management context, where multiple workgroups interact with the solution on a daily basis, the needs and wants of the various parties may be in conflict. In addition, business and technical people will rotate on and off the project. This too leaves gaps in the collective understanding.


Second, the assumption that the business testers can test everything. Defining ‘everything’ is an art in itself. There are story maps, use case models, system integration diagrams, or even the Unified Modeling Language to help define ‘everything’. Comprehending ‘everything’ is a huge undertaking, even if it’s done wisely so that the essential models are identified organically and then grown, not drawn. Finding the “just enough” set of testing that gets you to the point of acceptance - by that standard - is a black art. It’s a calculus of people’s perspectives, beliefs, and biases - because it’s people who accept the functionality - and of technical perspectives, beliefs, and biases - because there are technical elements to testing. Even acceptance testing.


Third, the assumption that the target of the test is the software. In reality the target of acceptance testing is the intersection of software (multiple layers and/or participating components), business process (new, changed, and unchanged), and people’s adoption skills. To make this even more complex, consider that the business testers are often not the only users; sometimes they represent a workgroup with a wide range of technology adoption skills. So they’re not testing at this critical intersection with solely their own needs in mind - they have to consider what other solution adopters might experience using the software.


For these and other reasons that I will explore in this blog, acceptance isn’t an event as much as a process, and acceptance testing isn’t about software quality as much as it is about solution adoptability. Of course those two things are related because you can’t address adoptability without quality. The core of the calculation is a gambit - spend the time assessing quality and hope for the best on adoptability, or spend the time exploring that intersection mentioned above - software, business process, and the likelihood of adoption.


That puts a different spin on the term “acceptance testing”. Instead of evaluating software against requirements, what we do in the last moments before go-live is test acceptance. Acceptance testing.

Sunday, February 28, 2016

Test Envisioning

Starting a project seems to be one of the most underrated steps when it comes to describing critical success factors for projects. A project well-launched is like a well-designed set play that comes together on the field during a game. It's not launch effort but launch effectiveness that matters. In Agile Testing and Quality Strategies: Discipline Over Rhetoric, Scott Ambler specifically describes requirements envisioning and architecture envisioning as key elements of initiating a project (he specifies this for launching an agile project, and I've come to believe that it's true for any project with stakeholders).

In a classic standing-on-the-shoulders-of-giants manner, "yes, and" ... we could also spend some of that project initiation time envisioning what the testing is going to look like. It is Iteration 0, after all, so there is a large landscape of possible conversations that we could have. And increasingly I've seen misconceptions about how the project's testing will be conducted surface as one of those "wait a second" moments. People just weren't on the same page, and they needed to be.

It's a fundamental part of the working agreement that Iteration 0 is intended to create - who will test what, and when? Will the developers use test-first programming? Will they also use higher-level test automation such as one of the BDD tools? Will they do build verification testing beyond the automated tests? If the solution provider is a remote, outsourced team, what testing will the solution acquirer organization do? When will they do it? How will they do it? Will they automate some tests too? Is there formal ITIL release management at the acquiring organization that will demand a test-last acceptance test phase? Will the contract require such testing?
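To make the first two of those questions concrete, here's a minimal sketch of what test-first programming might look like. This is an illustration only - Python and pytest are assumptions on my part, and the discount_for function with its 10%-over-$100 rule is hypothetical, not from any real project. The tests come first and fail until the code exists:

    # Test-first sketch (assumes pytest as the runner; discount_for and
    # its discount rule are hypothetical examples, not a real project's).

    def test_discount_applies_at_or_above_threshold():
        assert discount_for(order_total=150.00) == 15.00  # 10% at $100 and up

    def test_no_discount_below_threshold():
        assert discount_for(order_total=99.99) == 0.00

    # The simplest implementation that makes both tests pass:
    def discount_for(order_total):
        return round(order_total * 0.10, 2) if order_total >= 100.00 else 0.00

A BDD tool would express the same expectations in business-readable language; either way, the working agreement question is the same - who writes these, at what level, and when?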

You see my point. There are a lot of alternate pathways, a lot of questions, and it's an important conversation. Even if some of the answers are unknown and require some experimentation, at least everyone should agree on the questions. Context matters.

I come back to the point about test envisioning. The result of such envisioning is a working agreement on how testing might be conducted. That working agreement might well be called a test strategy, albeit a lean one. That's why I promote it as a poster first, document second (and only if it helps to resolve some communication need). What you put on that poster is driven by what conversations need to take place and what uncertainties exist.

To build on Scott's description of Iteration 0, then, the work breakdown structure for Envisioning may include the following:

Iteration 0: Envisioning

  • Requirements Envisioning
  • Architecture Envisioning
  • Test Envisioning
and the result might very well be three big visible charts - one for each. Talking about testing up front lets everyone listen, understand, and contribute to the mental model melding that must take place for the team to be effective.