
Tuesday, March 22, 2016

Timeboxing: Budgeting Time in Test Design with an Operational Profile

In the last few posts we've been talking about the importance of operational profiles as a tool for working smart in test design, and how to generate an operational profile from a use case diagram using a simple QFD matrix. In this post we look at how to use an operational profile to budget your time in test design to maximize bang for the buck, or, as John Musa said, deliver “More reliable software, faster and cheaper”.

 

Timeboxing

The strategy for test design presented in my book is to budget test design time for each use case based on an operational profile: an analysis of the frequency of use of each use case. After all, frequency of use is opportunity for failure: the more use, the more opportunities for users to encounter defects.

The idea of budgeting a fixed time for some task is called timeboxing. Timeboxing is a planning strategy often associated with iterative software development in which the duration of a task (here, designing tests for a use case) is fixed, forcing hard decisions about what scope (number and rigor of tests) can be delivered in the allotted time. What's new in the approach here is that the budgeted times are based on an operational profile.

To illustrate, say you’ve been given a week to write test cases from the use cases of the library system (see the last two posts). During that week you estimate your team will be able to spend about 40 staff hours. To get an idea of how best to budget the team’s time, you construct a quick spreadsheet using the relative frequencies from the operational profile (again, see the last two posts). Results are shown below; hours budgeted (right column) have been rounded to the nearest whole hour.

Allocating 40 hours of test design based on the operational profile
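To make the arithmetic concrete, here is a minimal sketch in Python of the calculation behind the spreadsheet. The use case names and relative frequencies are illustrative placeholders, not the actual numbers from the library system's operational profile:

```python
# Sketch of the allocation arithmetic behind the spreadsheet.
# Use case names and relative frequencies are illustrative placeholders;
# the real numbers come from your operational profile.
profile = {
    "Check Out Book": 0.45,
    "Return Book": 0.30,
    "Search Catalog": 0.15,
    "Add New Member": 0.07,
    "Generate Overdue Report": 0.03,
}

TOTAL_HOURS = 40

for use_case, frequency in profile.items():
    hours = round(TOTAL_HOURS * frequency)  # nearest whole hour, as in the table
    print(f"{use_case}: {hours} h")
```

Because each row is rounded independently, the budgeted hours may not always sum to exactly 40; in practice you nudge a row or two by hand, just as you would in the spreadsheet.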

The first thing you will notice in this example is that some use cases have been allocated zero hours of test design; that is a consequence of their low frequency of use. And this is a good time to reiterate one of the reasons use cases have gained attention in testing: in a pinch (you've run out of time for test design), the use cases for which you didn't get around to test design are workable substitutes for full-fledged test cases.

Alternatively, you may decide you really want some tests designed for every use case, no matter how infrequently it is used. The figure below illustrates budgeting the 40 hours of test design to strike a balance: honoring the spirit of the operational profile while still reserving some time for test design on every use case. It is the spirit of what the operational profile is showing us that matters for planning; that, combined with some common sense, will lead to a better allocation of time.


Reallocation of 40 hours honoring the spirit of the operational profile, yet still making sure each use case has at least a minimal amount of test design.
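One way to express this reallocation, reusing the illustrative profile dict from the sketch above: give every use case a minimum floor of hours first, then split the remainder in proportion to the operational profile. The 2-hour floor here is an assumption for illustration; the actual minimum is a judgment call:

```python
def allocate_with_floor(profile, total_hours, floor):
    """Give every use case at least `floor` hours of test design,
    then split the remaining hours in proportion to the profile."""
    remainder = total_hours - floor * len(profile)
    if remainder < 0:
        raise ValueError("floor is too high for the total budget")
    return {uc: floor + round(remainder * freq) for uc, freq in profile.items()}

# Reusing the illustrative profile from the previous sketch;
# the 2-hour floor is an assumed value, not a recommendation.
budget = allocate_with_floor(profile, total_hours=40, floor=2)
for use_case, hours in budget.items():
    print(f"{use_case}: {hours} h")
```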

 

Timeboxing, Use Case Levels of Test, and Test Rigor

The four use case levels of test
In the sidebar is an illustration I've used before of the four levels of use case test, and a strategy for test design based on it. Using that illustration, once budgeted time is in place for each use case (level 1), test design proceeds use case by use case, working top-down from level 2 (use case) to level 3 (single scenario) to level 4 (operations) until the budgeted time for a use case is up.

For use cases with many budgeted hours, this strategy will produce tests at all levels, 2 through 4. And at any given level, when the option is available to increase or decrease rigor, rigor could be increased.[1] Tests at more, deeper levels, with increased rigor, translate to more tests, in greater detail, and ultimately more test execution time for frequently used use cases.

For use cases with few budgeted hours, test design may not progress deeper than level 2 (use case) or level 3 (single scenario) before time is up. And at any given level, less rigor is likely, to accommodate the smaller budget. Tests at fewer levels with less rigor translate to fewer, coarser-grained tests for infrequently used use cases.
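Here is a sketch of that top-down descent through the levels. The per-level effort estimates are purely illustrative guesses (they are not figures from the book):

```python
# Sketch of descending through the use case levels of test until the
# budgeted hours run out. Per-level costs are illustrative assumptions.
LEVELS = [
    (2, "use case level tests", 3),   # (level, description, est. hours)
    (3, "single scenario tests", 4),
    (4, "operation level tests", 6),
]

def plan_levels(budgeted_hours):
    """Return the levels of test design that fit in the budget,
    working top-down from level 2 to level 4."""
    planned, remaining = [], budgeted_hours
    for level, description, cost in LEVELS:
        if cost > remaining:
            break  # time is up; stop at the current depth
        planned.append((level, description))
        remaining -= cost
    return planned

# A heavily used use case (say, 13 budgeted hours) reaches level 4;
# a lightly used one (say, 3 hours) stops at level 2.
print(plan_levels(13))
print(plan_levels(3))
```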

 

Footnotes

[1] For an example of what I mean by increasing or decreasing test design rigor, refer to Chapter 4 of the book, Control Flow Graphs: Adjusting the Rigor of Test Design. Similar ideas for adjusting rigor are provided at all use case levels of test. I'll also cover this in a future post.
