
Wednesday, November 25, 2015

Use Case Levels of Test: Parts I - IV of the Book

Book is organized around four use case levels of test

Four Use Case Levels of Test

In my book, Use Case Levels of Test, I like to use the analogy of the “View from 30,000 feet” to illustrate how use case levels let you zoom from the big picture (the major interstate highways through your application) down to the discrete operations that form the steps of a use case scenario. These use case levels of test provide a sort of road map for how to approach test design for a system, working top-down from the 30,000-foot view to ground level.

The four parts of the book are organized around these four levels of use case test design. The bit of pseudo code below illustrates a strategy for using the parts of the book.


Apply Part I to the use case diagram. This produces a test-adequate set of prioritized use cases with budgeted test design time for each, as well as integration tests for the set.

For each use case U from the previous step, until budgeted time for U expires, do:
    Apply Part II to use case U to produce a set of tests to cover its scenarios.
    For each test T from the previous step, in priority order of the scenario T covers, do:
        Apply Part III to test T, creating additional tests based on inputs and combinations of inputs.
        Apply Part IV to any critical operations in test T, producing additional tests based on preconditions, postconditions and invariants.
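For readers who like to see such a strategy as runnable code, here is a minimal Python sketch of the same loop. It is only an illustration of the control flow: the part1 through part4 callables stand in for the techniques of Parts I through IV and are hypothetical placeholders, not an API from the book.

import time

def apply_strategy(use_case_diagram, part1, part2, part3, part4):
    """Sketch of the Parts I-IV loop; the part* callables are placeholders."""
    # Part I: a test-adequate, prioritized set of use cases, a time budget
    # (in seconds) for each, and integration tests for the set as a whole
    use_cases, budgets, integration_tests = part1(use_case_diagram)

    all_tests = list(integration_tests)
    for use_case in use_cases:
        deadline = time.monotonic() + budgets[use_case]
        # Part II: scenario-covering tests, highest-priority scenario first
        for test in part2(use_case):
            if time.monotonic() >= deadline:
                break                      # budgeted time for this use case expired
            all_tests.append(test)
            all_tests.extend(part3(test))  # Part III: inputs and input combinations
            all_tests.extend(part4(test))  # Part IV: preconditions, postconditions, invariants
    return all_tests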

 

Timeboxing, Operational Profile, and Use Case Levels of Test

Recall from a previous post that this book is organized around a strategy of budgeting test design time for each use case, often referred to in planning as timeboxing. But rather than an arbitrary allocation of time (say equal time for all), time is budgeted based on an operational profile: an analysis of the high traffic paths through your system.

Because Parts II-IV are timeboxed according to the operational profile (notice the phrase “until budgeted time for U expires”), this strategy will produce more tests for the use cases that are most frequently used. This is the key contribution from the field of software reliability engineering: maximizing quality while minimizing testing costs.
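As a back-of-the-envelope illustration (the use cases, frequencies and hours below are made up, not taken from the book), an operational profile converts directly into per-use-case time budgets:

# Hypothetical operational profile: relative usage of each use case (sums to 1.0)
profile = {"Search Catalog": 0.50, "Place Order": 0.30,
           "Track Shipment": 0.15, "Update Account": 0.05}

total_design_hours = 40  # total time available for test design

budgets = {use_case: total_design_hours * freq for use_case, freq in profile.items()}
for use_case, hours in budgets.items():
    print(f"{use_case:15} {hours:4.1f} hours")
# Search Catalog gets 20 hours of test design, Update Account only 2:
# more tests where the traffic is.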

This strategy is depicted graphically below. The numbers refer to the parts, 1 through 4, of the book.

Strategy for budgeting test design time to maximize quality while minimizing testing costs

Let’s walk through each part of the book and see what is covered.

 

Part I The Use Case Diagram

Part I of the book starts with the view of the system from 30,000 feet: the use case diagram. At this level the system is a set of actors interacting with a system described as a set of use cases. Paths through the system are in terms of traffic trod across the whole set of use cases belonging to the use case diagram.

As tests at lower levels will be based on these use cases, it makes sense to begin by asking: Is the use case diagram missing any use cases essential for adequate testing? This is addressed in Chapter 1, where we look at the use of a C.R.U.D. matrix, a tool that lets you judge test adequacy by how well the use cases of the use case diagram cover the data life-cycle of entities in your system. Not only does the C.R.U.D. matrix test the adequacy of the use case diagram, it is essentially a high-level test case for the entire system, providing expected inputs (read) and outputs (create, update, delete) for the entire system in a compact, succinct form.
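A C.R.U.D. matrix is also easy to represent and check mechanically. The sketch below (the use cases and entities are invented for illustration) flags any entity whose create, read, update or delete is not covered by some use case, which is exactly the test adequacy question Chapter 1 asks:

# Rows: use cases; columns: data entities; cell: which of C, R, U, D the use case exercises
crud = {
    "Place Order":    {"Order": "CR",  "Customer": "R"},
    "Cancel Order":   {"Order": "RUD"},
    "Register":       {"Customer": "C"},
    "Update Profile": {"Customer": "RU"},
}

entities = {entity for row in crud.values() for entity in row}
for entity in sorted(entities):
    covered = set("".join(row.get(entity, "") for row in crud.values()))
    missing = set("CRUD") - covered
    if missing:
        print(f"{entity}: no use case covers {sorted(missing)}")  # gap in the data life-cycle
# Customer: no use case covers ['D']  -> a deletion use case (or test) is missing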

Still working at the use case diagram level, in Chapter 2 we look at another tool – the operational profile – as a way to help you, the tester, concentrate on the most frequently used use cases, and hence those having a greater chance of failure in the hands of the user. The chapter describes how to construct an operational profile from the use case diagram, and the use of an operational profile as a way to prioritize use cases; budget test design time per use case; spot “high risk” data; and design load and stress tests.

Chapter 3 wraps up the look at testing at the use case diagram level by introducing techniques for testing the use cases in concert, i.e. integration testing of the use cases.

 

Part II The Use Case

Part II drops down to the 20,000-foot view, where we have the individual use case itself. At this level of test, paths through the system are in terms of paths through an individual use case.

In Chapter 4 the book looks at control flow graphs, a graph-oriented approach to designing tests based on modeling paths through a use case. For use case based testing, control flow graphs have a lot going for them: they are easy to learn, work nicely with risk-driven testing, provide a way to estimate the number of tests needed, and can also be re-used for the design of load tests.
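Control flow graphs also lend themselves to a little tooling. The sketch below (the graph is a made-up order-entry flow, not an example from the book) enumerates start-to-end paths through a use case, each of which is a candidate scenario to test:

# Directed control flow graph of a use case: node -> list of successor nodes
flow = {
    "start":       ["enter order"],
    "enter order": ["validate"],
    "validate":    ["confirm", "show error"],
    "show error":  ["enter order", "abandon"],  # retry, or give up
    "abandon":     ["end"],
    "confirm":     ["end"],
    "end":         [],
}

def paths(node, visited=()):
    """Enumerate start-to-end paths, visiting each node at most once to bound the retry loop."""
    if node == "end":
        yield visited + (node,)
        return
    for nxt in flow[node]:
        if nxt not in visited:
            yield from paths(nxt, visited + (node,))

for p in paths("start"):
    print(" -> ".join(p))
# start -> enter order -> validate -> confirm -> end             (the happy path)
# start -> enter order -> validate -> show error -> abandon -> end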

Chapter 5 looks at two alternate techniques for working with use cases at this level of test: decision tables and pairwise testing. A decision table is a table showing combinations of inputs with their associated outputs and/or actions (effects): briefly, each row of the decision table describes a separate path through the use case.

For some use cases, any (or almost any) combination of input values is valid and describes a path through the use case. In such cases the number of paths becomes prohibitive to describe, much less test! Chapter 5 concludes with a technique for addressing such use cases: pairwise testing.
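Pairwise testing is easy to experiment with. The greedy sketch below (the inputs and their values are invented) picks tests so that every pair of values across any two inputs appears in at least one test, usually far fewer tests than the full set of combinations:

from itertools import combinations, product

def pairwise_tests(params):
    """Greedy all-pairs generator; params is a list of value lists, one per input."""
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(len(params)), 2)
                 for a in params[i] for b in params[j]}
    tests = []
    while uncovered:
        # choose the full combination that covers the most still-uncovered pairs
        best = max(product(*params),
                   key=lambda t: sum(((i, t[i]), (j, t[j])) in uncovered
                                     for i, j in combinations(range(len(t)), 2)))
        tests.append(best)
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(len(best)), 2)}
    return tests

# Hypothetical inputs to one use case: 3 x 2 x 2 = 12 full combinations
inputs = [["Visa", "MasterCard", "PayPal"], ["ship", "pickup"], ["gift", "no gift"]]
print(len(pairwise_tests(inputs)), "tests cover all pairs")  # prints 6, vs. 12 full combinations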

 

Part III A Single Scenario

In this part of the book we arrive at the 10,000-foot view and test design will focus on a single scenario of a use case. While a scenario represents a single path through the system from a black-box perspective, at the code level different inputs to the same scenario likely cause the code to execute differently.

Chapter 6 looks at test design from scenario inputs, in the hope of more fully testing the paths through the actual code. It covers the most widely discussed topics in input testing: error guessing, random input testing, equivalence partitioning, and boundary value analysis.
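As a concrete illustration (the quantity field and its 1-to-99 valid range are assumptions for the example, not from the book), equivalence partitioning and boundary value analysis of a single numeric input can be as simple as:

# Input under test: an order "quantity" field, assumed valid from 1 to 99 inclusive
LOW, HIGH = 1, 99

# Equivalence partitioning: one representative value per class
partitions = {
    "below valid range": 0,     # invalid class
    "inside valid range": 50,   # valid class
    "above valid range": 150,   # invalid class
}

# Boundary value analysis: values on, and just either side of, each boundary
boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def is_valid(quantity):
    return LOW <= quantity <= HIGH

for value in list(partitions.values()) + boundaries:
    print(f"quantity={value:4} expected {'accept' if is_valid(value) else 'reject'}")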

For a use case scenario with more than a single input, after selecting test inputs for each separate input there’s the question of how to test the inputs in combination. Chapter 6 concludes with ideas for “working smart” in striking the balance between rigor and practicality for testing inputs in combination.

Chapter 7 looks at additional (and a bit more advanced) techniques for describing inputs and how they are used to do equivalence partitioning and boundary value analysis testing.  The chapter begins with a look at syntax diagrams, which will look very familiar as they re-use directed graphs (which you’ll have already seen used as control flow graphs in Chapter 4).

Regular expressions are cousins of syntax diagrams, though not graphic (visual) in nature. While much has been written about regular expressions, a goal in Chapter 7 is to make a bit more specific how regular expressions relate to equivalence partitioning and boundary value analysis, a point that doesn’t always seem to come across in discussions of regular expressions in test design.
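One way to make the relationship concrete (my example here, not the book's): treat the strings a regular expression accepts as one equivalence class, the strings it rejects as others, and derive boundary cases from its quantifiers. The assumed input format below is a US-style ZIP code.

import re

# Assumed input format: five digits, optionally followed by a hyphen and four more
zip_re = re.compile(r"\d{5}(-\d{4})?")

test_inputs = {
    "12345":      True,   # valid: minimal form
    "12345-6789": True,   # valid: extended form
    "1234":       False,  # boundary: one digit short of the {5} quantifier
    "123456":     False,  # boundary: one digit past the {5} quantifier
    "12345-678":  False,  # boundary: extension one digit short
    "abcde":      False,  # a different equivalence class entirely: non-digits
}

for value, expected in test_inputs.items():
    actual = bool(zip_re.fullmatch(value))
    assert actual == expected, value
    print(f"{value!r:14} {'accept' if actual else 'reject'}")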

For the adventuresome at heart, the last technique discussed in Chapter 7 is recursive definitions, with examples written in Prolog (Programming in Logic). Recursion is one of those topics people are sometimes intimidated by, but it is truly a Swiss-army knife for defining, partitioning and doing boundary value analysis of all types of inputs (and outputs), be they numeric, sets of things, Boolean or syntactic.

 

Part IV Operations

In Part IV, the final part of the book, we arrive at “ground level”: test design from the individual operations of a use case scenario. A use case describes the behavior of an application or system as a sequence of steps, some of which result in the invocation of an operation in the system. The operation is the finest level of granularity for which we’ll be looking at test design. Just as varying inputs may cause execution of different code paths through a single use case scenario, other factors – e.g. violated preconditions – may cause execution of different code paths through a single operation.

In Chapter 8, Preconditions, Postconditions and Invariants: Thinking About How Operations Fail, we’ll look at specifying the expected behavior of abstract data types and objects (model-based specification) and apply it to use case failure analysis: the analysis of the potential ways a use case operation might fail. In doing so, the reader will learn some things about preconditions and postconditions that they forgot to mention in “Use Case 101”!

You may find Chapter 8 the most challenging in the book as it involves lightweight formal methods to systematically calculate preconditions as a technique for failure analysis. If prior to reading this book your only exposure to preconditions and postconditions has been via the use case literature, this chapter may be a bit like, as they say, “drinking from a fire hose”.  For the reader not interested in diving this deep, the chapter concludes with a section titled The Least You Need to Know About Preconditions, Postconditions and Invariants, providing one fundamental lesson and three simple rules that the reader can use on absolutely any use case anytime.
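To make the vocabulary concrete, here is a small sketch (a made-up account-withdrawal operation, not an example from the book) of how a precondition, postcondition and invariant attach to a single operation, and how a violated precondition becomes a failure-analysis test:

class Account:
    def __init__(self, balance):
        self.balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # Invariant: must hold before and after every operation
        assert self.balance >= 0, "invariant violated: negative balance"

    def withdraw(self, amount):
        # Preconditions: what the caller must guarantee for the operation to succeed
        assert amount > 0, "precondition violated: amount must be positive"
        assert amount <= self.balance, "precondition violated: insufficient funds"
        old_balance = self.balance
        self.balance -= amount
        # Postcondition: what the operation guarantees when the preconditions held
        assert self.balance == old_balance - amount, "postcondition violated"
        self._check_invariant()
        return self.balance

acct = Account(100)
acct.withdraw(40)       # happy path: every condition holds
try:
    acct.withdraw(500)  # a test derived from a violated precondition
except AssertionError as error:
    print("expected failure:", error)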

Having gained some insight into the true relationship between preconditions, postconditions and invariants in Chapter 8, Chapter 9 provides “lower-tech” ways to identify preconditions that could lead to the failure of an operation. We’ll look at ways to use models built from sets and relations (i.e. discrete math for testers) to help spot preconditions without actually going through the formalization of calculating the precondition. More generally we’ll find that these models are a good way to brainstorm tests for use cases at the operation level.

Friday, November 20, 2015

Working Smart in Software Test Design: The Role of Use Cases

In my last post I said one of the key goals of my book, Use Case Levels of Test, is to provide you – the tester tasked with designing tests for a system or application – a strategy for budgeting test design time wisely.

My assumption is that you are working in a software development shop where use cases have been used, in whole or in part, to specify the system or application for which you’ve been tasked to write tests, or alternatively that you are familiar enough with use cases to write them as proxy tests before starting real test design.

Use cases play a key role in the “working smart” strategy presented in this book; let’s see why.

 

Use Cases: Compromise Between Ad Hoc And “Real Tests”

Part of the reason use cases have gained the attention they have in testing is that they are already pretty close to what testers often write for test cases.

One of the leaders in the testing tool arena is HP’s Quality Center (formerly Mercury Test Director). The example test shown below is part of a Quality Center tutorial in which the tester is instructed that “After you add a test  ... you define test steps -- detailed, step-by-step instructions  ... A step includes the actions to be performed on your application and the expected results”. As the example illustrates, anyone comfortable with writing use cases would be comfortable writing tests in Quality Center, and vice versa.

Anyone comfortable with writing use cases would be comfortable writing test cases in Quality Center, and vice versa

The early availability of use cases written by a development team gives testers a good head start on test design. What this means for our strategy is that, in a pinch (you’ve run out of time for test design), use cases provide a good compromise between ad hoc testing and full-blown test cases.

On the one hand, ad hoc testing carries the risks of unplanned, undocumented (can’t peer review; not repeatable) testing that depends solely on the improvisation of the tester to find bugs. On the other hand, we have full-fledged test cases, for which even Boris Beizer, a proponent of “real testing” vs. “kiddie testing”, has said “If every test designer had to analyze and predict the expected behavior for every test case for every component, then test design would be very expensive”.

So use cases provide a good balance between the informal and the formal. By starting test design from a set of use cases, a test team can design full-fledged test cases as time permits from the most important use cases, and for the rest allow testers to “pound away” on and off the happy path: a sort of controlled ad hoc testing where use cases provide some order to the otherwise unplanned and undocumented testing.

 

Use Case Levels of Test

Another facet of use cases key to the strategy presented in this book is a way to decompose a big problem (say, write tests for a system) into smaller problems (e.g. test the preconditions on a critical operation) by using use case levels of test.

So what do I mean by use case levels of test? Most testers are familiar with the concept of levels of test such as system test (the testing of a whole system, i.e. all assembled units), integration test (testing two or more units together), or unit test (testing a unit standalone). Use cases provide a way to decompose test design into levels of test as well, based not on units but on increasingly fine-grained paths through the system.

The parts of my book are organized around four levels of use case test design. In the book I like to use the analogy of the “View from 30,000 feet” to illustrate how use case levels let you zoom from the big picture (the major interstate highways through your application) down to the discrete operations that form the steps of a use case scenario. This is illustrated in the figure below.

The four use case levels of test

Let’s start with the view of the system from 30,000 feet: the use case diagram. At this level the system is a set of actors interacting with a system described as a set of use cases. This is the coarsest level of paths through your system: the collection of major interstate highways of your system.

Dropping down to the 20,000-foot view, we have the individual use case itself, typically viewed as a particular “happy path”[1] through the system, with associated branches, some being not-so-happy paths. In other words, a use case is a collection of related paths through the system, each called a scenario.

From there we drop to the 10,000-foot view which zooms in on a particular scenario of a use case, i.e. one path through a use case. While a scenario represents a single path through the system from a black-box perspective, different inputs to the same scenario very likely cause the underlying code to execute differently; there are multiple paths through the single scenario at the code level.

And finally, at ground level we reach discrete operations: the finest-granularity action/reaction of the dance between actor and system. These are what make up the steps of a scenario. At this level, paths are paths through the code implementing an operation. If you are dealing with an application or system implemented using object-oriented technology (quite likely), these could be paths through a single method on an object. A main concern at this level is the paths associated with operation failures: testing the conditions under which a use case operation is intended to work correctly, and conversely the conditions under which it might fail.

The fact that use cases provide an alternate way to view levels of test wouldn’t necessarily be all that interesting to the tester but for a couple of important facts.

First, each level has certain standard black-box test design techniques that work well at that level. So the use case levels of test provide a way to index the wealth of black-box testing techniques, which helps answer the plea of “Just tell me where to start!”.

Second, for each use case level of test, the “path through a system” metaphor affords a way to prioritize where you do test design at that level, e.g. spending more time on the paths most frequently traveled by the user or that touch critical data.




[1] If you are unfamiliar with this term, the “happy path” of a use case is the default scenario, or path, through the use case, generally free of exceptions or errors; life is "happy".

Monday, November 16, 2015

Working Smart: A Strategy to Better Budget Test Design Time

Every student in school – from elementary to graduate – is familiar with the angst of taking tests, hearing the dreaded line “Time’s up, put your pencils down!”, followed by that feeling of regret as you think “If only I hadn’t spent so much time on that one question!”

I’d like you to consider that writing tests for software is a bit like taking tests in school. Both are tasks typically done in a finite, allotted amount of time, so it’s best to have a strategy for using your time wisely, and knowing what techniques work well (or don’t!) on various problem types.

One of the key goals of my book, Use Case Levels of Test, is to provide you – the tester tasked with designing tests for a system or application – with a strategy for approaching test design so that when you hear “Time’s up, put your pencils down!”, you can relax knowing you budgeted your time wisely.

My assumption is that you are a tester or test manager working in a software development shop where use cases have been used, in whole or in part, to specify the system or application for which you’ve been tasked to write tests, or alternatively that you are familiar enough with use cases to write them as proxy tests before starting real test design (in a future post I'll explain the importance of the role of use cases in this strategy). And I assume you have a fixed amount of time in which to get tests written, hence prioritizing and budgeting test design time is important.

The question then is: Where do you start?!

One approach would be bottom-up[1]: jump in feet first with some use case and start writing test cases using every conceivable test design technique you are familiar with. The problem with this approach? When the clock runs out on test design, there’s a very good chance you’ve written too many tests, or the wrong granularity of tests, for some use cases, and not enough – maybe none? – for many others (say the ones used most frequently). Also, the bottom-up approach, focusing solely on individual use cases, may lead you to neglect the testing of use cases that should be tested in concert, i.e. integration tested. And finally, what’s to say the use cases you started with were an adequate base from which to design tests in the first place, e.g. is there some part of the system that won’t get tested because a use case was not provided?

An alternate approach is to

  • First evaluate the use cases as a whole for test adequacy; determine if you are missing any use cases essential for adequate testing.
  • Next budget test design time for each use case, a technique often referred to in planning as timeboxing. But rather than an arbitrary allocation of time (say equal time for all), budget time based on an operational profile: an analysis of the high traffic paths through your system. This allows you, the test designer, to concentrate on the most frequently used use cases; those having a greater chance of failure in the hands of the user.
  • Then for each use case, design tests top-down, level by level (I’ll explain this in a future post), applying the test design techniques that work well at each level and adjusting the rigor of test design to match the time budgeted for each use case. This top-down, level-by-level approach means you will produce coarser-grain tests first and finer-grain tests later as time permits. This, coupled with time budgets per use case based on an operational profile, will help strike the balance between breadth of coverage of all use cases and depth of coverage for the use cases most frequently used.
So my book Use Case Levels of Test is organized around just such a strategy so that when the clock runs out – “Time’s up! Put your pencils down!” – you can relax knowing you have spent test design time wisely.




[1] In the book I use the terms “bottom-up” and “top-down”, but not as they are used to describe strategies for integration testing. They are used solely in terms of the use case levels of test being described, and how to come at the problem of test design. All covered in the book, and to be discussed in a future post.

Saturday, November 14, 2015

Use Case Levels of Test, Second Edition

Books, like software, get released with imperfections. The second edition of my book Use Case Levels of Test was just released -- in both print and kindle -- to correct defects in the first, make points I didn’t feel I properly made in the first edition, smooth out the rough edges, and add new material that didn’t make it into the first edition (scope control!).
Use Case Levels of Test, second edition

The focus of the second edition remains the same: a strategy for software test design based on the idea of use case levels of test, combined with high-bang-for-the-buck ideas from software testing, quality function deployment (QFD), software reliability’s operational profiles, structured analysis and design’s C.R.U.D. matrix, and formal methods like model-based specification and discrete math for testers.

The goal of this “cross-pollination” is to provide testers with a test design strategy to

  • Evaluate a set of use cases for test adequacy, determining if you are missing any use cases essential for testing
  • Budget test design time to maximize reliability and minimize testing cost
  • Strike a balance between breadth of coverage of all use cases and depth of coverage for the most frequently used, critical use cases
  • Provide a step-by-step process for when to use the plethora of test techniques covered in so many testing books, helping address the plea “Just tell me where to start!”
  • Decompose the big problem of test design for a whole system or application into manageable chunks by using levels of test based not on units, modules or subsystems, but on paths through the system
  • Introduce innovative test design techniques not covered in other testing books, and elaborate on key techniques covered only briefly in other books
Let me review here the disciplines that I see as having “cross-pollinated” to make this book, and in general touch on what I see as this book’s value added to the set of books we already have on testing.

 

Operational Profiles

In John Musa’s Amazon review of my first book (Succeeding with Use Cases) he commented: “I have always felt that there were many fruitful relationships between use cases and software reliability engineering”.

To me, operational profiles and use cases seem such a natural fit. Operational profiles (from software reliability engineering; used in my book to help budget time in test design) have been discussed in a few other testing books, but none that I’m aware of discusses their integration with use case based testing as presented here, or to the depth discussed in my book. See, for example, QFD next.

 

QFD

Quality function deployment, or QFD (from requirements engineering; used here for test prioritization), appeared in my first book and spurred a lot of interest. This book’s presentation of generating an operational profile from a use case diagram via a QFD matrix is, as far as I’m aware, unique.

 

C.R.U.D.

The C.R.U.D. matrix (from structured analysis and design; used in my book to help determine test adequacy of a set of use cases) has been covered in a few testing books and use case books. In Use Case Levels of Test I’ve tried to expand upon its use as described in those other books, as well as showing how to leverage an operational profile and C.R.U.D. matrix together to help spot high-risk data entities in the system or application.

 

Formal Methods, Discrete Math for Testers

Al Davis’ software development principle #28: “Know Formal Methods ... their use (even on the back of an envelope) can aid significantly in uncovering problems ... At least one person on every project should be comfortable with formal methods to ensure that opportunities for building quality into the product are not lost”.[1]

And why shouldn’t that “at least one person” be a tester?! [2] The formal methods community has long been concerned with test design, as indicated for example by panels such as the one I was on in 1996, which asked the question “Formal methods and testing: why the state-of-the art is not the state-of-the practice”.[3]

The approach discussed in Use Case Levels of Test of pairing lightweight, “back of an envelope” style model-based specification and discrete math with use case scenario operations feels like a natural fit to me, and jibes with Al Davis’ principle #28.

This topic was covered in my first book (Succeeding with Use Cases) and prompted some questions on how to expand the techniques while keeping them practical, so I’ve borrowed from and expanded on it in this book. I see the approach advocated in this book – selective use of these techniques on high-risk operations of high-risk use cases to augment use cases (but not replace them! Al Davis’ principle #54[4]) – as a practical approach to helping close the gap between the state of the art and the state of the practice in testing.

Discrete math for testers (sets, relations, Venn diagrams) has been covered by a number of testing books. They are powerful tools for testers, but getting across their practical application is in my opinion a shortcoming in many testing books. In this book I’ve tried to give them a very “Here’s how to use them” approach via lots of examples.

Prolog (Programming in Logic) came on the scene in the early 70s as a programming language popular for tackling problems in artificial intelligence like problem solving, natural language understanding (think syntax as in syntax testing), and rule-based expert systems. And the formal methods testing community recognized its potential as a tool to aid in test design. I’ve included one such example in this book, illustrating its use to sanity check that a syntax definition of an input to be used for testing actually says what we think it says, then to help write tests by acting like a “code coverage” tool, but for syntax rules.

 

A Deeper Dive on Some Commonly Discussed Testing Techniques

There are some topics covered in Use Case Levels of Test that have been covered in nearly every testing book written. For such areas I try to provide the 20/80 you need to know (so the book is fairly stand-alone) with pointers to other existing sources if you want to do further reading. But additionally, I try to provide some different angles on these topics.

For example, it’s probably the case that no other topic in testing has been written about as much as equivalence partitioning and boundary value analysis. But it’s also the case that most books use a simple numeric input to explain equivalence partitioning and boundary value analysis. So I’ve tried to add value on these well-discussed topics by taking on more complicated problems.

One example: syntax testing is a common problem in input testing, and recursive rule definitions are a common way to describe many inputs. Yet few books tackle the problem of black-box test design from such recursive definitions of syntax. In this book we’ll take on syntax testing of such inputs using the example of internet keyword search queries.
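As a taste of that example (the grammar below is my own simplification for illustration, not the book's), a recursive definition of keyword search queries doubles as a test-input generator, with each extra level of recursion pushing on a boundary of query complexity:

import random

# Assumed recursive grammar, simplified for illustration:
#   query := keyword | keyword " AND " query | keyword " OR " query
#            | "NOT " query | "(" query ")"
KEYWORDS = ["testing", "use", "case", "prolog"]

def gen_query(depth):
    """Generate one random query; depth bounds the recursion."""
    if depth == 0:
        return random.choice(KEYWORDS)          # base case: a single keyword
    rule = random.choice(["and", "or", "not", "paren"])
    if rule == "and":
        return f"{random.choice(KEYWORDS)} AND {gen_query(depth - 1)}"
    if rule == "or":
        return f"{random.choice(KEYWORDS)} OR {gen_query(depth - 1)}"
    if rule == "not":
        return f"NOT {gen_query(depth - 1)}"
    return f"({gen_query(depth - 1)})"          # parenthesized sub-query

# Boundary value analysis on the recursion itself: depth 0 (simplest query) upward
for depth in range(4):
    print(depth, gen_query(depth))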

 

Use Case Levels of Test

Last but certainly not least is the idea of use case levels of test. Use cases as a basis for test design have been discussed by a number of books,  but at this writing the strategy presented here based on four levels of use case test is, as far as I’m aware, unique (for example generating an operational profile from a use case diagram; working with preconditions, postconditions and invariants at the operation level).

Use case levels of test provide the framework that hopefully helps address that plea so often uttered by testers and organizations starting to climb the testing learning curve: “Just tell me where to start!”. And augmented with operational profiles, use case levels of test are key to budgeting test design time wisely.

So, this book is the accumulation of thoughts, conference papers, white-papers, training classes and slide presentations I’ve done over the years explaining to others – as well as helping myself come to grips with – a framework built around use cases for leveraging a wealth of testing techniques, as well as techniques from other software disciplines, for innovation and a way of working smarter in software test design.




[1] 201 Principles of Software Development by Alan Davis, McGraw-Hill, 1995.
[2] Jorgensen, in Software Testing: A Craftsman’s Approach, argues “More than any other life cycle activity, testing lends itself to mathematical descriptions and analysis”.
[3] Formal Methods and Testing: Why the State-of-the Art is Not the State-of-the Practice, ACM SIGSOFT, Software Engineering Notes vol. 21 no 4, July 1996, p64.
[4] Principle #54: “Augment, never replace, natural language  ... In fact, one good idea is to keep the natural language and more formal specification side-by-side  ... do a manual check between the two to verify conformity..” 201 Principles of Software Development, Alan Davis.
