Quantifying Quality

A Quality Assurance professional faces one of the most burdensome questions in all of digital media: Go or no-go? In theory a QA professional could be excused for saying “no-go” more often than “go”. For one, it is highly unlikely that every defect has been fixed or even addressed (has that ever happened, really?). In addition, we didn’t write the code, we didn’t choose the layouts, colors, or typography, and we certainly didn’t forecast ROI. But nobody wants to hear no-go, so in practice we’re left in a quandary: quantifying the unquantifiable, Quality.

First and foremost, quality is subjective. Having spent half of my career in creative disciplines, I find it difficult to get on board with design work that doesn’t speak to me. That said, a very important and often-overlooked artifact is the key to defining metrics for quality: the acceptance criteria document. In its simplest form, the acceptance criteria document defines the agreed-upon set of stories, cases, preconditions, and expectations for an application. How does this measure quality, you ask? The trick is to start with the correct definitions. How many conditions must pass for it to be a quality product? This is precisely where true quality assurance must step up and shine.

The QA professional’s task is to make a solid recommendation for these metrics based on his or her expertise and knowledge of the client and project, and then to get every key decision maker to agree to it. Doing so quantifies quality in realistic terms everyone understands and alleviates the burden of saying “go” when the inevitable question is posed. And to top it off, the QA professional is armed with a shiny new and resourceful little document.
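That metric, an agreed-upon set of acceptance criteria plus a pass threshold everyone has signed off on, can be sketched in a few lines of code. This is only an illustration of the idea; the criteria, the function name, and the 90% threshold below are hypothetical, not taken from any real project.

```python
def go_no_go(criteria_results: dict[str, bool], threshold: float = 0.9) -> str:
    """Return "go" if the agreed-upon share of acceptance criteria pass.

    criteria_results maps each acceptance criterion to whether it passed.
    threshold is the fraction that decision makers agreed must pass.
    (All names and values here are illustrative.)
    """
    if not criteria_results:
        raise ValueError("no acceptance criteria defined")
    passed = sum(criteria_results.values())
    pass_rate = passed / len(criteria_results)
    return "go" if pass_rate >= threshold else "no-go"


# Hypothetical acceptance criteria for a small web app:
results = {
    "user can log in": True,
    "checkout completes under 3 seconds": True,
    "layout matches approved comps": False,
}
print(go_no_go(results, threshold=0.9))  # 2/3 pass, below 0.9, so "no-go"
```

The point is not the arithmetic but the agreement: once the threshold is negotiated up front, the go/no-go answer follows mechanically from the criteria instead of from one person’s gut.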

There. Now, everyone’s on the hook.

  • Michael Bolton

    There’s a binary fallacy here: that the “QA professional” must say either “go” or “no-go”. There’s another fallacy, too: mathematosis, the idea that counting criteria is in some way meaningful.

    Instead, accept the fact that unless he or she is also the product owner, the “QA professional” (you probably mean “tester”) ISN’T the product owner.  The decision to ship is not a technical decision, but a business decision, and the tester typically doesn’t have the information needed to make that decision.  In the unlikely event that the tester does have the information, the tester doesn’t have the authority to make the decision.  And businesses aren’t typically democracies, and usually they’re not run by consensus. The product owner makes the decisions, not the tester.  So what should a tester do when asked for a go/no-go decision?

    Here’s what I do:  Throughout the project, I provide a test report in three mutually supporting levels:

    Level One:  The Product Story

    • The product has these features and these benefits.
    • We know that the product has these problems that threaten its value.
    • There’s a plausible risk that the product may also have these other problems, but so far our testing hasn’t revealed them.

    The testing client—the product owner or project manager—is responsible for the go/no-go decision. The product story is an important part of that decision, and it’s the primary product of testing. But there are other elements in the plot of the overall story.

    Level Two:  The Testing Story

    The testing story is what we did to obtain the product story.  A credible testing story gives warrant to the product story.

    • This is the testing that we’ve done. This is how we configured, observed, operated, and evaluated the product.
    • These are the risks that we considered.
    • These are the oracles that we applied.
    • This is the coverage that we obtained, in these areas.
    • This is the testing that we haven’t done yet.
    • This is the testing that we haven’t done, and that we’re not planning to do.

    This story ends with a question for our testing client:  Are you okay with this?  By asking that question continuously throughout the project and making corrections or adaptations when the answer is No, we stay on mission.

    Level Three:  The Quality-of-Testing Story

    The quality-of-testing story explains and gives warrant to the testing story.

    • This is why the tests we have performed were, as best as we can figure, the most appropriate tests.
    • This is why the tests we haven’t performed aren’t (or haven’t yet been) as important as the ones we have performed.
    • These are the things that made testing harder or slower.
    • This is what we need and what we recommend.

    This story also ends with a question for our testing client: Are you okay with this? By asking that question continuously throughout the project, we can negotiate and refine our strategies and application of resources in collaboration with the client.

    The point of all this is to keep the testing client supplied with the information that he or she needs to inform a shipping decision at any time—or to charter a mission to find that information, if it’s not yet available.  This puts the tester in what I consider to be an appropriate role:  as an extension of the client’s senses and awareness.

    —Michael B.

  • Will Creedle

    “Instead, accept the fact that unless he or she is also the product owner, the “QA professional” (you probably mean “tester”) ISN’T the product owner.  The decision to ship is not a technical decision, but a business decision, and the tester typically doesn’t have the information needed to make that decision.  In the unlikely event that the tester does have the information, the tester doesn’t have the authority to make the decision.  And businesses aren’t typically democracies, and usually they’re not run by consensus. The product owner makes the decisions, not the tester.  So what should a tester do when asked for a go/no-go decision?”
    That’s an interesting point, and a valid one, and part of the problem. Testing is just one component (albeit a big one) of the QA professional’s role. If the QA professional does not have the information needed to make that decision, that is also a problem. As the defender of the company’s reputation and of the end-user experience, QA certainly has the authority to plant its feet in the sand. After all, if a QA professional doesn’t understand what a product does or how to use it, why would the eventual user?

    If an executive decision is eventually made to overrule QA’s recommendation, we certainly understand that and don’t take it personally.