XUnit Test Patterns and Smells: Improving the ROI of Test Code

Mon 1:30-5:00 pm - Ponderosa A
Gerard Meszaros, Solution Frameworks Inc., Canada

High-quality automated unit tests are one of the key development practices that enable incremental development and delivery of software, reducing the number of bugs introduced as code evolves. But writing lots of tests is not enough: the tests must be maintained over the life of the software, and this maintenance cost can quickly outweigh the benefits the tests provide. This tutorial gives participants a vocabulary of smells with which to reason about the quality of their test code, and a set of reusable test-design patterns that can be used to eliminate those smells. Participants will learn to write tests that are easier to understand and maintain.

xUnit is the generic name for the family of unit test frameworks that are now available in almost every programming language. JUnit, NUnit, MSTest, and CppUnit are some of the better-known members of the xUnit family. While the examples in this tutorial are based on xUnit, many of the smells and patterns are equally applicable to functional test automation using keyword-driven and script-driven approaches.
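As an illustration (not part of the tutorial materials), Python's built-in unittest module is itself a member of the xUnit family; a minimal xUnit-style test case, with the characteristic per-test fixture setup hook, looks like this:

```python
import unittest

class CalculatorTest(unittest.TestCase):
    """A minimal xUnit-style test: set up a fixture, exercise, verify."""

    def setUp(self):
        # xUnit's per-test setup hook: a fresh fixture for every test method.
        self.values = [1, 2, 3]

    def test_sum(self):
        # Exercise the code under test and verify the expected outcome.
        self.assertEqual(sum(self.values), 6)
```

A test runner discovers every `test_*` method, runs `setUp` before each one, and reports any assertion failures; the same lifecycle applies in JUnit, NUnit, and the other frameworks named above.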

Audience

Practitioners, Managers
Objectives

Participants will learn:

  • The difference between well-written automated unit tests and poorly written ones
  • To recognize common code smells that make test code hard to understand and maintain
  • To recognize common behavior smells that increase the cost of running tests and interpreting the test results
  • Test design patterns that avoid the behavior smells and lead to highly repeatable and robust tests
  • Test coding idioms that make tests easier to understand and less fragile
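As a taste of the kind of smell/pattern pairing the tutorial covers, consider an Obscure Test: a test body cluttered with fixture detail that is irrelevant to what is being verified. One remedy is a Creation Method that hides the irrelevant setup so only intent-revealing values remain. The `Invoice` class and helper below are hypothetical, invented purely for illustration:

```python
import unittest

class Invoice:
    """Hypothetical system under test, for illustration only."""

    def __init__(self, customer):
        self.customer = customer
        self.lines = []

    def add_line(self, description, amount):
        self.lines.append((description, amount))

    def total(self):
        return sum(amount for _, amount in self.lines)

def create_invoice_with_lines(*amounts):
    # Creation Method: hides fixture detail (customer, descriptions)
    # that does not matter to the test's intent.
    invoice = Invoice(customer="any customer")
    for amount in amounts:
        invoice.add_line("any item", amount)
    return invoice

class InvoiceTotalTest(unittest.TestCase):
    def test_total_sums_all_line_amounts(self):
        # Only the values relevant to this test appear in its body.
        invoice = create_invoice_with_lines(10, 20)
        self.assertEqual(invoice.total(), 30)
```

Compared with inlining the full `Invoice` construction in every test, the helper keeps each test short, makes its intent obvious, and localizes the impact of constructor changes to one place.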
Class format

The tutorial is presentation-based, frequently punctuated with short (5-10 minute) hands-on exercises that help participants "experience" the smells and patterns. The material is presented as a sequence of mini case studies. Each case study starts with a sample of test code or test results; we discuss the "test smells" present in the tests and their impact on achieving our goal of repeatable, robust, fully automated tests. We then dig into the root causes of the smell(s) and present a set of alternative patterns that can be used to address them.

Exercises will be done in small groups; ideally, participants will form groups of 3-4 to discuss the smells, causes, and patterns in each exercise. The exercises are paper-based, so laptop computers are not required.