Every student in school – from elementary to graduate – is
familiar with the angst of taking tests: hearing the dreaded line “Time’s up,
put your pencils down!”, followed by that feeling of regret as you think,
“If only I hadn’t spent so much time on that one question!”
I’d like you to consider that writing tests for software is a
bit like taking tests in school. Both are tasks typically done in a finite,
allotted amount of time, so it’s best to have a strategy for using your time
wisely and to know what techniques work well (or don’t!) on various problem
types.
One of the key goals of my book, Use Case Levels of Test, is to provide you – the
tester tasked with designing tests for a system or application – with a strategy for
approaching test design so that when you hear “Time’s up, put your pencils down!”,
you can relax knowing you budgeted your time wisely.
My assumption is that you are a tester or test manager
working in a software development shop where use cases have been used, in whole
or in part, to specify the system or application for which you’ve been tasked to
write tests, or, alternatively, that you are familiar enough with use cases to write
them as proxy tests before real test design begins (in a future post I’ll explain the role
use cases play in this strategy). I also assume you have a fixed amount of time in which to get
tests written, hence prioritizing and budgeting test design time is important.
The question then is: Where
do you start?!
One approach would be bottom-up[1]:
jump in feet first with some use case and start writing test cases using every
conceivable test design technique you are familiar with. The problem with this approach? When the clock runs out on
test design, there’s a very good chance
you’ve written too many tests, or the wrong granularity of tests, for some use
cases, and not enough – maybe none? – for many others (say the ones used most
frequently). Also, the bottom-up approach, focusing solely on individual use
cases, may lead you to neglect use cases that should be tested
in concert, i.e., integration tested. And finally, who’s to say the use
cases you started with were an adequate base from which to design tests in the
first place, e.g., is there some part of the system that won’t get tested
because a use case was not provided?
An alternative approach is to:
- First, evaluate the use cases as a whole for test adequacy; determine whether you are missing any use cases essential for adequate testing.
- Next, budget test design time for each use case, a technique often referred to in planning as timeboxing. Rather than an arbitrary allocation of time (say, equal time for all), budget time based on an operational profile: an analysis of the high-traffic paths through your system. This allows you, the test designer, to concentrate on the most frequently used use cases, the ones with the greatest chance of failing in the hands of a user (see the sketch after this list).
- Then, for each use case, design tests top-down, level by level (I’ll explain this in a future post), applying the test design techniques that work well at each level and adjusting the rigor of test design to match the time budgeted for each use case. This top-down, level-by-level approach means you will produce coarser-grained tests first and finer-grained tests later as time permits. This, coupled with time budgets per use case based on an operational profile, helps strike a balance between breadth of coverage across all use cases and depth of coverage for the use cases used most frequently.
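To make the timeboxing step concrete, here is a minimal sketch (not taken from the book) of how you might split a fixed test design budget across use cases in proportion to an operational profile. The use case names, percentages, and total hours are hypothetical; a real profile would come from your own usage data or logs.

```python
# Sketch: allocate a fixed test design budget across use cases in
# proportion to an operational profile (hypothetical data).

# Operational profile: estimated share of user traffic per use case.
# These names and percentages are made up for illustration.
operational_profile = {
    "Place Order": 0.55,
    "Track Shipment": 0.25,
    "Update Account": 0.15,
    "Close Account": 0.05,
}

TOTAL_DESIGN_HOURS = 40  # the fixed time available for test design


def budget_hours(profile, total_hours):
    """Return test design hours per use case, proportional to usage."""
    total_weight = sum(profile.values())
    return {
        use_case: round(total_hours * weight / total_weight, 1)
        for use_case, weight in profile.items()
    }


if __name__ == "__main__":
    for use_case, hours in budget_hours(operational_profile, TOTAL_DESIGN_HOURS).items():
        print(f"{use_case}: {hours} hours")
```

The point is simply that the high-traffic use cases get the lion’s share of design time, while low-traffic ones still get some coverage rather than none.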
My book Use Case Levels of Test is organized around just such a strategy, so
that when the clock runs out – “Time’s
up! Put your pencils down!” – you can relax knowing you have spent your test
design time wisely.
[1]
In the book I use the terms “bottom-up” and “top-down”, but not in the sense used to describe strategies for integration testing. Here they refer solely to the
use case levels of test being described and to how you come at the
problem of test design. Both are covered in the book and will be discussed in a future post.