Wednesday, February 01, 2017

You Don't Need a Test Strategy

The Problem (Speaking Plainly)

 

In agile testing, you use the whole-team approach to maximize the likelihood that you’re building the right thing and that you’re building it right, by first distributing the responsibility for quality - whatever that means to the product owner and team - across the entire team, customer included. Requirements that are collated, communicated and understood just in time, and articulated as executable examples, give the best chance of building the right thing. Similarly, fine-grained, well-focused automated unit tests provide the idioms for describing and communicating the intended design, and best of all, they are articulated in source code, an ever-growing repository of “this is how it works.” Developers make changes with less fear because they know the tests will guide them; failing tests tell them quickly if they have broken anything. Life is good, and after a suitable sprint in which the minimum viable product yields discernible benefits to the customer, a set of activities nurtures the system through final end-to-end testing and acceptance/sign-off, and deployment happens. The testing is thinner at this stage because the end users/customers have been engaged all along. Deployment is a rehearsed action, ideally automated as well, so it’s not that big a deal. It’s live.
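
To make “executable examples” concrete, here is a minimal sketch in Python (run with pytest). The requirement, the loyalty_discount function and the numbers are all hypothetical - the point is that the requirement reads as a check the whole team can understand and the machine can run.

# Hypothetical requirement: "a returning customer gets a 10% loyalty discount",
# captured as an executable example. The function is a toy stand-in for real logic.

def loyalty_discount(previous_orders: int, subtotal: float) -> float:
    # Toy rule: 10% off for anyone with at least one previous order.
    return round(subtotal * 0.10, 2) if previous_orders >= 1 else 0.0

def test_returning_customer_gets_ten_percent_discount():
    # Given a customer with three previous orders,
    # when they place an order for 50.00,
    # then they receive a 5.00 discount.
    assert loyalty_discount(previous_orders=3, subtotal=50.00) == 5.00

def test_new_customer_gets_no_discount():
    assert loyalty_discount(previous_orders=0, subtotal=50.00) == 0.0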

 

And it’s not done.

 

We are used to thinking that being done means going live. Go-live, after all, represents a huge milestone, no doubt about it. It’s not done because it has not yet yielded the intended benefits. Although we have set it up for use and laid the groundwork for achieving those benefits, they haven’t been realized yet. Unused software just doesn’t magically realize those intended benefits - people need to start using the product, and that usage has to be monitored and measured to back up claims that it is achieving those benefits. This usually does happen, to a degree. It might be orchestrated, it might not be. It absolutely needs to be.

 

The Solution (Speaking Hopefully)

 

A “test strategy” is aimed at achieving a goal related to testing. Based on its name alone, it seems that we have accomplished our goal and fully implemented the strategy once we have completed the testing. I want to toy with that word “strategy” for a sec ...

 

Sales strategy - maximize the likelihood of more sales - i.e., setting prices to maximize sales/profitability 

Marketing strategy - maximize the likelihood of more sales - i.e., bringing more eyes to what is being offered

Business strategy - maximize the likelihood of achieving business goals, whatever those might be

Test strategy - ???

 

What purpose does a test strategy fulfill? To maximize the effectiveness of testing? To squeeze as much testing as possible into a fixed budget/time window? To make sure that the product is tested? We test because we want answers to questions about the solution: we want to know if there are bugs, we want to know that we’ve built the right thing, etc. We want information. Testing is “the headlights of the project” [Kaner et al], after all. So it’s tempting to say that the test strategy is all about testing - knowing when we’re done testing, getting everyone on the same page about how the solution will be tested, etc.

 

I’m suggesting that this is not enough. It exposes us to the risk that we’ve fulfilled the test strategy and yet, somehow, the solution is not considered a success. We want to make a call on the solution’s adoptability - will the customer/user/business be able to adopt the solution and make it part of their day? If every user type has a micro-goal (place an order, generate an invoice, etc.) that the solution supports, can they accomplish that goal more easily with the solution than without it?

 

I’m proposing that an effective test strategy maximizes the likelihood of solution adoption and benefits realization - so much so that I suggest we rename it the “adoption strategy” instead. That name states more clearly what we actually need.

 

Tested is a milestone, but it isn’t the end state we are looking for.

Deployed is a milestone, but it isn’t the end state we are looking for.

Used. Adopted. That’s the end state we are looking for.

 

What I’ve seen in most businesses where I’ve worked is that the “test strategy” really only covers the first of these - getting to this ambiguous state called “solution tested.”

 

If we were to include ALL THREE elements above - tested, deployed, and used/adopted - then our test strategy shifts towards including elements of a marketing strategy: the tested solution being what it is, how can we bring people around to its benefits and get them using it? What other non-software items should we take care of in order to maximize the likelihood of a smooth adoption? We need a rapid, repeatable deployment process so that we can adapt quickly to feedback. We need automated tests to support checking for regressions. We need to monitor usage in production and measure adoption and accrued benefits. This is part of the overall “headlights” vision of testing, albeit extended into production - a place where, at least in some environments and for some testing people, testing is not allowed.

Marketing people don’t hesitate to use A/B testing on an e-commerce site. I’m suggesting that an acceptance/adoption strategy would include A/B testing whether or not there is a marketing element in the target business domain. I’m suggesting that adoption statistics be automated and implemented alongside the target business functionality. That is what distinguishes an adoption strategy from a traditional test strategy. Depending on the context, it may also include testing the effectiveness of the training, testing the job aids that are rolled out along with the solution, identifying meaningful business process measures, etc.
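
As a rough illustration of what “adoption statistics implemented alongside the business functionality” could look like, here is a minimal sketch in Python. The event fields, the log file destination and the generate_invoice function are all invented for the example - a real system might push these events to a metrics or analytics service instead.

import json
import time

USAGE_LOG = "usage_events.log"  # assumed destination; a metrics service would work too

def record_usage_event(user_role: str, goal: str, succeeded: bool) -> None:
    # Append one usage/adoption event per attempted micro-goal.
    event = {
        "timestamp": time.time(),
        "user_role": user_role,   # e.g. "clerk", "manager"
        "goal": goal,             # e.g. "place_order", "generate_invoice"
        "succeeded": succeeded,
    }
    with open(USAGE_LOG, "a") as log:
        log.write(json.dumps(event) + "\n")

def generate_invoice(order_id: str) -> None:
    # Hypothetical business function instrumented with an adoption event.
    try:
        # ... real invoicing logic would go here ...
        record_usage_event("clerk", "generate_invoice", succeeded=True)
    except Exception:
        record_usage_event("clerk", "generate_invoice", succeeded=False)
        raise

Counting events like these over time is what lets you say “this user group is (or isn’t) getting through its micro-goals” - which is adoption data, not just test data.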

 

For example, in my most recent project we used a testing framework to monitor financial transactions as they were fed into the financial systems. Since it logged every assertion as it ran, the resulting logs were useful for measuring overall error rates. Clearly, errors in financial data would be a barrier to acceptance and adoption. In acceptance testing it was relatively easy to establish an error-free set of transactions, but that was only a sample of the combinations and permutations that could happen in production. Because the checks were automated, we promoted them to production, and then we had the means to provide actual error rates to stakeholders, making it easier for them to accept that the solution wasn’t actually a random number generator in disguise.
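
To give a flavour of how such logs become information for stakeholders, here is a minimal sketch in Python. The log format shown in the comment is invented for illustration and is not the actual format our framework produced.

# Derive an error rate from a log in which every assertion was recorded as
# PASS or FAIL, one per line, e.g. "2017-01-15T10:22:03 FAIL amount_matches txn=12345".

def error_rate(log_path: str) -> float:
    passes = failures = 0
    with open(log_path) as log:
        for line in log:
            if " PASS " in line:
                passes += 1
            elif " FAIL " in line:
                failures += 1
    total = passes + failures
    return failures / total if total else 0.0

if __name__ == "__main__":
    rate = error_rate("transaction_assertions.log")
    print(f"Observed transaction error rate: {rate:.2%}")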

 

That’s a small example of the kind of information - helpful to adoption and benefits realization - that people with testing skills can provide. It’s really just another feedback loop that we could be exploiting to improve the product or service under development.