Saturday, December 17, 2016

The Calculus of Acceptance Testing

It’s tempting to believe that acceptance testing is the straightforward process of comparing the as-built solution to the as-required solution. Any differences discovered during testing are highlighted in defect reports and ultimately resolved through a defect resolution process. You’re done when the business has tested everything and enough of the differences have gone away for the business to believe it could use the solution as part of its regular business activities.
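
Before pulling that belief apart, it helps to see how little it actually says. Here is a minimal sketch - in Python, with an invented order-discount feature and invented numbers - of what “comparing as-built to as-required” looks like once it’s reduced to an automated check. Nothing here comes from a real project; it exists only to show that this framing treats acceptance as a pass/fail comparison.

    # A minimal sketch of the naive view: acceptance as a pass/fail
    # comparison of as-built behavior against an as-required value.
    # The order-discount feature and all numbers are hypothetical.

    def apply_volume_discount(order_total):
        """As-built behavior: 10% discount on orders of 1000 or more."""
        if order_total >= 1000:
            return round(order_total * 0.90, 2)
        return order_total

    def test_volume_discount_meets_requirement():
        # As-required: "Orders of 1,000 or more receive a 10% discount."
        assert apply_volume_discount(1000) == 900.00
        assert apply_volume_discount(999.99) == 999.99

    if __name__ == "__main__":
        test_volume_discount_meets_requirement()
        print("as-built matches as-required - for what we thought to check")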


It’s tempting to believe this statement because it’s simple and it seems reasonable. Look closely, however, and it’s an over-simplification that rests on three assumptions, each difficult to uphold in practice.


First, the assumption that there is agreement on what the “as-required solution” really is. The various people involved may not share the same mental model of the business problem, the solution to that problem, or the implementation of that solution. There may be a document that represents one point of view, or even a shared point of view, but even then it only represents a point in time - the moment the document was published. In a large enough management context, where multiple workgroups interact with the solution on a daily basis, the needs and wants of the various parties may conflict. In addition, business and technical people come and go over the life of the project, leaving further gaps in the collective understanding.


Second, the assumption that the business testers can test everything. Defining ‘everything’ is an art in itself. There are story maps, use case models, system integration diagrams, even the Unified Modeling Language to help define ‘everything’. Comprehending ‘everything’ is a huge undertaking, even when it’s done wisely, with the essential models identified organically and then grown, not drawn. Finding the “just enough” testing that gets you to the point of acceptance is, by that standard, a black art. It’s a calculus of people’s perspectives, beliefs, and biases - because it is people who accept the functionality - and of technical perspectives, beliefs, and biases - because there are technical elements to testing. Even acceptance testing.
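
One way to picture that calculus is as a portfolio problem: score candidate scenarios by business risk and adoption impact, then spend a fixed testing budget on the highest-scoring ones first. The sketch below is purely illustrative - the scenarios, weights, and budget are all invented, and no formula can substitute for the perspectives and biases just described.

    # Illustrative only: choosing a "just enough" test set under a
    # fixed budget. Every scenario, score, and hour below is invented.

    scenarios = [
        # (name, business_risk 1-5, adoption_impact 1-5, hours_to_test)
        ("month-end close", 5, 4, 16),
        ("new-customer onboarding", 4, 5, 8),
        ("ad-hoc reporting", 2, 3, 6),
        ("archived-data lookup", 1, 2, 4),
    ]

    budget_hours = 24

    # Rank by combined risk and adoption impact per hour of testing.
    ranked = sorted(scenarios, key=lambda s: (s[1] * s[2]) / s[3], reverse=True)

    chosen, spent = [], 0
    for name, risk, impact, hours in ranked:
        if spent + hours <= budget_hours:
            chosen.append(name)
            spent += hours

    print(f"test first ({spent}h of {budget_hours}h budget):", chosen)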


Third, the assumption that the target of the test is the software. In reality the target of acceptance testing is the intersection of software (multiple layers and/or participating components), business process (new, changed, and unchanged), and people’s adoption skills. To make this even more complex, consider that the business testers are often not the only users; they frequently represent a workgroup with a wide range of technology adoption skills. So they’re not testing at this critical intersection with solely their own needs in mind - they have to consider what other solution adopters might experience using the software.
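
To make that three-way target concrete, here is a hypothetical sketch that treats an acceptance scenario as three checks rather than one. The structure and field names are my own invention, not an established practice; the point is only that a scenario isn’t accepted until software, process, and adoption all pass.

    # Hypothetical sketch: the acceptance target as three checks, not one.
    from dataclasses import dataclass

    @dataclass
    class AcceptanceScenario:
        name: str
        software_passes: bool  # does the code behave as specified?
        process_fits: bool     # does the business process work end to end?
        adoptable: bool        # can the whole workgroup use it, not just the tester?

        def accepted(self) -> bool:
            return self.software_passes and self.process_fits and self.adoptable

    scenario = AcceptanceScenario(
        name="approve purchase order",
        software_passes=True,
        process_fits=True,
        adoptable=False,  # e.g. the screens assume skills some adopters lack
    )

    print(scenario.name, "-", "accepted" if scenario.accepted() else "not accepted")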


For these and other reasons that I will explore in this blog, acceptance isn’t an event so much as a process, and acceptance testing isn’t about software quality so much as solution adoptability. Of course the two are related, because you can’t address adoptability without quality. The core of the calculation is a gambit - spend the time assessing quality and hope for the best on adoptability, or spend the time exploring the intersection described above: software, business process, and the likelihood of adoption.


That puts a different spin on the term “acceptance testing”. Instead of evaluating software against requirements, what we do in the last moments before go-live is test acceptance. Acceptance testing.

1 comment:

Lisa said...

Thank you for this post - you have captured so much I have been thinking about but couldn't put into words. I look forward to more blogs about this from you! We've oversimplified "acceptance tests" for so long. We need better approaches.