
Python Testing 101: pytest-bdd

Warning: If you are new to BDD, then I strongly recommend reading the BDD 101 series before trying to use pytest-bdd. Also, make sure that you are already familiar with the pytest framework.

Overview

pytest-bdd is a behavior-driven (BDD) test framework that is very similar to behave, Cucumber, and SpecFlow. BDD frameworks are very different from more traditional frameworks like unittest and pytest. Test scenarios are written in Gherkin “.feature” files using plain language. Each Given, When, and Then step is “glued” to a step definition – a Python function decorated by a matching string in a step definition module. This means that there is a separation of concerns between test cases and test code. Gherkin steps may also be reused by multiple scenarios.

pytest-bdd is very similar to other Python BDD frameworks like behave, radish, and lettuce. However, unlike the others, pytest-bdd is not a standalone framework: it is a plugin for pytest. Thus, all of pytest’s features and plugins can be used with pytest-bdd. This is a huge advantage!

Installation

Use pip to install both pytest and pytest-bdd.

pip install pytest
pip install pytest-bdd

Project Structure

Project structure for pytest-bdd is actually pretty flexible (since it is based on pytest), but the following conventions are recommended:

  • All test code should appear under a test directory named “tests”.
  • Feature files should be placed in a test subdirectory named “features”.
  • Step definition modules should be placed in a test subdirectory named “step_defs”.
  • conftest.py files should be located together with step definition modules.

Other names and hierarchies may be used. For example, large test suites can have feature-specific directories of features and step defs. pytest should be able to discover tests anywhere under the test directory.

[project root directory]
|-- [product code packages]
|-- [test directories]
|   |-- features
|   |   `-- *.feature
|   `-- step_defs
|       |-- conftest.py
|       `-- test_*.py
`-- [pytest.ini|tox.ini|setup.cfg]

Note: Step definition module names do not need to be the same as feature file names. Any step definition can be used by any feature file within the same project.
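As a hedged illustration, a minimal pytest.ini can anchor test discovery to the recommended “tests” directory (testpaths is a standard pytest setting; the directory name is just the convention above):

# pytest.ini
[pytest]
testpaths = tests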

Example Code

An example project named behavior-driven-python, located on GitHub, shows how to write tests using pytest-bdd. This section explains how the Web tests are designed.

The top layer for pytest-bdd tests is the set of Gherkin feature files. Notice how the scenario below is concise, focused, meaningful, and declarative:

@web @duckduckgo
Feature: DuckDuckGo Web Browsing
  As a web surfer,
  I want to find information online,
  So I can learn new things and get tasks done.

  # The "@" annotations are tags
  # One feature can have multiple scenarios
  # The lines immediately after the feature title are an optional description

  Scenario: Basic DuckDuckGo Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "panda"
    Then results are shown for "panda"

Each scenario step is “glued” to a decorated Python function called a step definition. Step definitions are written in Python test modules, as shown below:

import pytest
from pytest_bdd import scenarios, given, when, then, parsers
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

# Constants

DUCKDUCKGO_HOME = 'https://duckduckgo.com/'

# Scenarios

scenarios('../features/web.feature')

# Fixtures

@pytest.fixture
def browser():
    b = webdriver.Firefox()
    b.implicitly_wait(10)
    yield b
    b.quit()

# Given Steps

@given('the DuckDuckGo home page is displayed')
def ddg_home(browser):
    browser.get(DUCKDUCKGO_HOME)

# When Steps

@when(parsers.parse('the user searches for "{phrase}"'))
def search_phrase(browser, phrase):
    search_input = browser.find_element(By.NAME, 'q')
    search_input.send_keys(phrase + Keys.RETURN)

# Then Steps

@then(parsers.parse('results are shown for "{phrase}"'))
def search_results(browser, phrase):
    links_div = browser.find_element(By.ID, 'links')
    # use a relative XPath so the search stays inside the results div
    assert len(links_div.find_elements(By.XPATH, './/div')) > 0
    search_input = browser.find_element(By.NAME, 'q')
    assert search_input.get_attribute('value') == phrase

Notice how each Given/When/Then step has a function with an appropriate decorator. Arguments, such as the search “phrase,” may also be passed from step to function. pytest-bdd provides a few argument parsers out of the box and also lets programmers implement their own. (By default, strings are compared using equality.) One function can be decorated for many steps, too.
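Below is a hedged sketch of the main parser types available from pytest_bdd.parsers; the step texts and function names are made up for illustration:

from pytest_bdd import then, parsers

# string (the default): the step text must match exactly, compared by equality
@then('results are shown')
def results_shown():
    pass

# parse: "{name}" placeholders are extracted and passed as arguments
@then(parsers.parse('results are shown for "{phrase}"'))
def results_for_phrase(phrase):
    pass

# re: a full regular expression with named groups
@then(parsers.re(r'results are shown for "(?P<phrase>.+)"'))
def results_for_phrase_re(phrase):
    pass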

pytest fixtures may also be used by step functions. The code above uses a fixture to initialize the Firefox WebDriver before each scenario and then quit it after each scenario. Fixtures follow all the same rules, including scope. Any step function can use a fixture by declaring it as an argument. Furthermore, any “@given” step function that returns a value can also be used as a fixture. Please read the official docs for more info about fixtures with pytest-bdd.
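For example, here is a hedged sketch of a Given step that publishes a value as a fixture; recent pytest-bdd versions use the target_fixture argument (older versions exposed the return value under the function name automatically), and the second step is hypothetical:

from pytest_bdd import given, when

@given('the DuckDuckGo home page is displayed', target_fixture='home_url')
def home_url(browser):
    browser.get(DUCKDUCKGO_HOME)
    return browser.current_url

# any later step can use the fixture simply by declaring it as an argument
@when('the user returns to the home page')
def return_home(browser, home_url):
    browser.get(home_url)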

One important, easily-overlooked detail is that scenarios must be explicitly declared in test modules. Unlike other BDD frameworks that treat feature files as the main scripts, pytest-bdd treats “test_*.py” modules as the main scripts (because that’s what pytest does). Scenarios may be specified explicitly using scenario decorators, or all scenarios in a list of feature files may be included implicitly using the “scenarios” shortcut function shown above.
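For example, a single scenario may be declared explicitly with the scenario decorator; the feature path and scenario name below match the earlier example:

from pytest_bdd import scenario

@scenario('../features/web.feature', 'Basic DuckDuckGo Search')
def test_basic_search():
    pass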

To share steps across multiple feature files, add them to the “conftest.py” file instead of the test modules. Since scenarios must be declared within a test module, they can only use step functions available within the same module or in “conftest.py”. As a best practice, put commonly shared steps in “conftest.py” and feature-specific steps in the test module. The same recommendation also applies for hooks.
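As a hedged sketch, moving the browser fixture and the common Given step into conftest.py makes them available to every test module in the directory:

# step_defs/conftest.py
import pytest
from pytest_bdd import given
from selenium import webdriver

DUCKDUCKGO_HOME = 'https://duckduckgo.com/'

@pytest.fixture
def browser():
    b = webdriver.Firefox()
    b.implicitly_wait(10)
    yield b
    b.quit()

@given('the DuckDuckGo home page is displayed')
def ddg_home(browser):
    browser.get(DUCKDUCKGO_HOME)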

Scenario outlines require special implementation on the Python side to run successfully. Unfortunately, steps used by scenario outlines need parameterized step decorators and extra value converters, as sketched below. Please read the official docs or the example project for more examples.
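Below is a hedged sketch based on the pytest-bdd documentation of that era (newer releases handle outlines more transparently); the cucumber-counting scenario is hypothetical:

# features/outline.feature would contain:
#   Scenario Outline: Eating cucumbers
#     Given there are <start> cucumbers
#     When I eat <eat> cucumbers
#     Then I should have <left> cucumbers
#     Examples:
#       | start | eat | left |
#       | 12    | 5   | 7    |

from pytest_bdd import scenario, given, when, then

@scenario('../features/outline.feature', 'Eating cucumbers',
          example_converters=dict(start=int, eat=int, left=int))
def test_outlined():
    pass

@given('there are <start> cucumbers')
def start_cucumbers(start):
    return dict(count=start)

@when('I eat <eat> cucumbers')
def eat_cucumbers(start_cucumbers, eat):
    start_cucumbers['count'] -= eat

@then('I should have <left> cucumbers')
def should_have_left(start_cucumbers, left):
    assert start_cucumbers['count'] == left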

Test Launch

pytest-bdd can leverage the full power of pytest. Tests can be run in full or filtered by tag. Below are example commands using the example project:

# run all tests
pytest

# filter tests by test module
# note: feature files cannot be run directly
pytest tests/step_defs/test_unit_basic.py
pytest tests/step_defs/test_unit_outlines.py
pytest tests/step_defs/test_unit_service.py
pytest tests/step_defs/test_unit_web.py

# filter tests by tags
# running by tag is typically better than running by path
pytest -k "unit"
pytest -k "service"
pytest -k "web"
pytest -k "add or remove"
pytest -k "unit and not outline"

# generate a JUnit XML report
pytest --junitxml=/path/for/output

pytest-bdd tests can be executed and filtered together with regular pytest tests. Tests can all be located within the same directory. Tags work just like pytest.mark.
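Since Gherkin tags map to pytest marks, scenarios can also be filtered with the -m option; a hedged sketch (marks may need to be registered in pytest.ini to avoid warnings):

# filter tests by Gherkin tag
pytest -m web
pytest -m "web and duckduckgo"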

All other pytest plugins should work, too. For example, pytest-html generates pretty HTML reports, pytest-cov measures code coverage, and pytest-xdist runs tests in parallel.
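Hedged command sketches for those plugins (the report path is a placeholder):

# pretty HTML report with pytest-html
pytest --html=report.html

# code coverage with pytest-cov
pytest --cov

# parallel execution across auto-detected CPUs with pytest-xdist
pytest -n auto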

Pros and Cons

Just like for other BDD frameworks, pytest-bdd is best suited for black-box testing because it forces the developer to write test cases in plain, descriptive language. In my opinion, it is arguably the best BDD framework currently available for Python because it rests on the strength and extendability of pytest. It also has PyCharm support (in the Professional Edition). However, it can be more cumbersome to use than behave due to the extra code needed for declaring scenarios, implementing scenario outlines, and sharing steps. Nevertheless, I would still recommend pytest-bdd over behave for most users because it is more powerful – pytest is just awesome!

The Software Engineer in Test

I am a Software Engineer in Test (SET). Many people don’t know quite what that means, though. Developers frequently refer to me as a “tester” or “QA,” and a former director once thought I did DevOps. While my work covers these areas, they aren’t the main focus. Let’s clarify what it means to be a SET.

What is a Software Engineer in Test?

A “Software Engineer in Test” (a.k.a. “Software Development Engineer in Test”) is a software developer who develops software for testing: tools, frameworks, and automated tests. SETs focus primarily on automation for running tests quickly and repeatedly. Test automation is a software product: just as front-end developers write web pages and back-end developers write microservices, SETs write automated tests. The same practices and coding skills apply. I frequently say that a Software Engineer in Test must have the heart of a developer.

So they just write test scripts?

No. Never say this to a SET. Test automation involves much more than just “writing test scripts.” A serious testing solution requires serious design and effort. The top-level automation of a test case is usually just a small piece. SETs are responsible for:

  • Collaborating with developers and product owners
    • Contributing to planning and design
    • Reviewing product code
    • Formulating test scenarios
  • Developing test automation frameworks
  • Automating test scenarios using the frameworks
  • Knowing and using design patterns where appropriate
  • Setting up the infrastructure to run tests
    • Running tests in CI/CD
    • Running tests in parallel
    • Running tests with the appropriate test data
  • Setting up dashboards for reporting test results in real time
  • Teaching others good quality and testing practices
  • Developing tools to assist manual and exploratory testing

Saying that testing is “just scripting” belittles the role of the SET and underestimates the workload it requires.

How did this role start?

Software “tester” and “QA” roles have existed for decades, but the SET role first became distinct in the 2000s when large-scale test automation became both feasible and necessary. According to Wikipedia, Microsoft coined the title “Software Development Engineer in Test” (SDET) in 2005, and others like Amazon and Apple quickly adopted it. Google coined the name “Software Engineer in Test” for the same type of role. I personally prefer the SET title over the SDET title simply because it is more concise.

How is it different from “QA” or “testing” roles?

To me, the SET role is distinct from the traditional “QA” or “testing” role. Software testers historically focused on manual testing and thus didn’t need strong programming skills. Their fortes were product domain knowledge, intuition, out-of-the-box thinking, test planning, and test system setup. And these certainly are important, indispensable skills! SETs, however, live in both the development and testing worlds. They use developer skills to provide software solutions for testing problems. Automation is much more central to the SET role. These days, almost all “testing” job openings are SET roles; manual-only test roles have largely disappeared. Nevertheless, there will always be a need for manual testing because there are some problems a human can catch much more easily than a script (or an AI agent).

Personally, I avoid using the titles “QA” and “software tester” for myself because they don’t accurately describe all that I do. I also avoid the title “automation engineer” because, again, it is reductionist. I tackle software testing with the heart of a developer, and I set up test automation solutions from the ground up. I’m proud to be a software engineer who specializes in testing.

 

As a bonus, check out Test and Code episode 47, in which Brian Okken and I discuss what it means to be a Software Engineer in Test (among other topics).

 

Book Review: pytest Quick Start Guide

tl;dr

  • Title: pytest Quick Start Guide (available from Packt and Amazon)
  • Author: Bruno Oliveira (@nicoddemus)
  • Publication: 2018, Packt Publishing
  • Summary: “Write better Python code with simple and maintainable tests” – a very readable guide to pytest’s main features.
  • Prerequisites: Intermediate-level Python programming.

Summary

pytest Quick Start Guide is a new book about using pytest for Python test automation. Bruno Oliveira explains not only how to use pytest but also why its features are useful. Even though this book is written as a “quick start” introduction, it nevertheless dives deep into pytest’s major features. It covers:

  1. An introduction to pytest
  2. Why pytest is superior to unittest
  3. Setting up pytest with pip and virtualenv
  4. Writing basic tests
  5. How assertions really work
  6. Common command line arguments
  7. Marks and parametrization
  8. Using and writing fixtures
  9. Popular plugins
  10. Converting unittest suites to pytest

Example code is hosted on GitHub at PacktPublishing/pytest-Quick-Start-Guide.

Praises

This book is an easy introduction to test automation in Python with pytest. Readers should have intermediate-level Python skills, but they do not need previous testing or automation skills. The progression of chapters makes it easy to start quickly and then go deeper. Oliveira has a very accessible writing style, too.

The unittest refactoring guide is a hidden gem. The unittest module is great because it comes bundled with Python, but it is also clunky and not very Pythonic. Not all teams know the best way to modernize their test suites. Oliveira provides many pieces of practical advice for making the change, at varying degrees of conversion. The big changes use pytest’s fixtures and assertions.

A Comparison to Python Testing with pytest

Python Testing with pytest by Brian Okken is another popular pytest book. Both books are great resources for learning pytest, but each approaches the framework from a unique perspective. How do they compare? Here’s what I saw:

Oliveira’s book: A great introductory guide for pytest. Oliveira’s writing style makes the reader feel like the author is almost teaching in person. The book’s main theme is getting the reader to use the main features of pytest pragmatically for general testing needs. The main differences in content are the unittest refactoring guide and some of the plugins and fixtures covered. This book is probably the best choice for beginners who want to learn the basics of pytest quickly.

Okken’s book: Introductory but also a great manual for future reference. Okken’s writing style is direct and concise, covering more material in fewer pages. Each chapter follows a consistent format: idea → code → output → explanation, with exercises at the end. Okken also covers how to create and share custom pytest plugins. This book is probably the best choice for people who want to master the ins and outs of pytest.

Ultimately, I recommend both books because they are both excellent.

Takeaways

Reading books on the same subject by different authors helps the reader learn the subject better. I’ve used pytest quite a lot myself, but I was able to learn new things from both pytest Quick Start Guide and Python Testing with pytest. Reading how experts use and think about the framework makes me a better engineer. Different writing styles and different opinions also challenge my own understandings. (It’s also funny that the authors of both pytest books have the same initials – “B.O.”)

pytest is really popular. There are now multiple good books on the subject. It’s becoming the de facto test automation framework for Python, outpacing unittest, nose, and others. These days, it seems more popular to write a pytest plugin than to create a new framework.

Book Review: Python Testing with pytest

tl;dr

  • Title: Python Testing with pytest
  • Author: Brian Okken (@brianokken)
  • Publication: 2017, The Pragmatic Programmers
  • Summary: How to use all the features of pytest for Python test automation – “simple, rapid, effective, and scalable.”
  • Prerequisites: Intermediate-level Python programming.

Summary

Python Testing with pytest is the book on pytest*. Brian Okken covers all the ins and outs of the framework. The book is useful both as a tutorial for learning pytest and as a reference for specific framework features. It covers:

  • Getting started with pytest
  • Writing simple tests as functions
  • Writing more interesting tests with assertions, exceptions, and parameters
  • Using all the different execution options
  • Writing fixtures to flexibly separate concerns and reuse code
  • Using built-in fixtures like tmpdir, pytestconfig, and monkeypatch
  • Using configuration files to control execution
  • Integrating pytest with other tools like pdb, tox, and Jenkins

Appendices also cover:

  • Using Python virtual environments
  • Installing packages with pip
  • An overview of popular plugins like pytest-xdist and pytest-cov
  • Packaging and distributing Python packages

[* Update on 9/27/2018: Check out my review on another excellent pytest book, pytest Quick Start Guide by Bruno Oliveira.]

Praises

This book is a comprehensive guide to pytest. It thoroughly covers the framework’s features and gives pointers to more info elsewhere. Even though pytest has excellent online documentation, I still recommend this book to anyone who wants to become a pytest master. Online docs tend to be fragmented with each piece limited in scope, whereas books like this one are designed to be read progressively and orderly for maximal understanding of the material.

I love how this book is example-driven. Each section follows a simple yet powerful outline: idea → code → output → explanation. Having real code with real output truly cements the point of each mini-lesson. New topics are carefully unfolded so that they build upon previous topics, making the book read like a collection of tutorials. Examples at the end of every chapter challenge the readers to practice what they learn. The formatting of each section also looks great.

The extra info on related topics like pip and virtualenv is also a nice touch. Python pros probably don’t need it, but beginners might get stuck without it.

The rocket ship logo on the cover is also really cool!

Takeaways

pytest is one of the best functional test frameworks currently available in any language. It breaks the clunky xUnit mold, in which class structures were awkwardly superimposed over test cases as if one size fits all. Instead, pytest is lightweight and minimalist because it relies on functions and fixtures. Scope is much easier to manage, code is more reusable, and side effects can more easily be avoided. pytest has taken over Python testing because it is so Pythonic.

Brian’s concise writing style has also inspired me to be more direct in my own writing. I tend to be rather verbose out of my desire to be descriptive. However, fewer words often leave a more powerful impression. They also make the message easier to comprehend. Python is beloved for its concise expressiveness, and as a Pythonista, it would be fitting for me to adopt that trait into my English.

If I had a wish list for a second edition, I’d love to see more info about assertions and other plugins (namely pytest-bdd). I think an appendix with comparisons to other Python test frameworks would also be valuable.

A Warning

I ordered a physical copy of this book directly from Amazon (not a third-party seller). Unfortunately, that copy was missing all the introductory content: the table of contents, the acknowledgements, and the preface. The first page after the front cover was Chapter 1. Befuddled, I reached out to Brian Okken (who I personally met at PyCon 2018). We suspected that it was either a misprint or a bootleg copy. Either way, we sent the evidence to the publisher, and Amazon graciously exchanged my defective copy for the real deal. Please look out for this sort of problem if you purchase a printed copy of this book for yourself!

If you want to learn more about pytest, please read my article Python Testing 101: pytest.

Behavior-Driven Blasphemy

This is my 100th post on Automation Panda! I’m thrilled to see how much this blog has grown and how many people it has helped. For such a monumental occasion, I have chosen to voice a rather controversial opinion about test automation.

Behavior-driven development seems to be the software testing buzzword of the decade. What started as a refinement of test-driven development by developers in Europe and the UK quickly became the big process fad of the 2010s. The Cucumber project (now 10 years old) developed or inspired Gherkin-based test automation frameworks in all the major programming languages. Companies started requiring Given-When-Then format for acceptance criteria and test scenarios. Three Amigos meetings became standard calendar fixtures during sprints. Organizations that once undertook “Agile transformations” now have similar initiatives for BDD. For better or worse, BDD exists and cannot be ignored.

The dogmatic benefits of BDD are better collaboration and automation. However, leaders frequently insist that Gherkin-style test frameworks add value only when paired with practices like Example Mapping. “BDD is a process, not a tool,” is a common mantra. “Otherwise, the Gherkin just gets in the way.” Although I wholeheartedly agree that behavior-driven practices add significant value to the development process, I nevertheless espouse a rather blasphemous opinion:

BDD test automation frameworks are better than traditional frameworks for black box functional testing even when BDD processes are not followed.

What Exactly Are You Saying?

My claim is that behavior-driven test frameworks like Cucumber, SpecFlow, and behave are significantly better than traditional xUnit-style frameworks for testing live features. For example, I would rather use SpecFlow than NUnit for testing a Web app with Selenium WebDriver, whether or not the other two Amigos are with me. The resulting automation code has better structure, readability, and reusability.

I’m not saying that teams shouldn’t do BDD practices, and I’m not saying that the Three Amigos should be separated. Collaboration is key to success, and BDD really helps. Example Mapping is one of the most useful practices a development team can do. I’m also not saying that BDD frameworks should be used for all testing purposes – they are poorly suited for unit testing and for performance testing.

Objection!

I find myself very lonely in this opinion. BDD leaders repeatedly insist that BDD is not about testing and automation.

The most outspoken BDDers (mostly coalescing around the Cucumber community) have largely moved their focus to the collaboration benefits, almost forsaking the automation benefits. (This may not necessarily be true, but it appears that way based on the literature and materials floating on the Web.) That outlook is somewhat disingenuous because the main tools supporting BDD are, in fact, test frameworks.

BDD also has outspoken opponents – it’s love or hate. I’ve personally spoken with several engineers who despise Gherkin-based frameworks. “I can see how it would be valuable when a whole team embraces behavior-driven practices,” many have told me, “but otherwise, the Gherkin layer just gets in the way of automation.” I’ve heard it called “plaster” and “garbage.” Engineers just want to code their tests. And code should always be readable, right?


Testing is an inherently opinionated space. People can never seem to agree on things.

The Bigger Picture

Test automation must be developed regardless of any specific development practices, and its architecture must stand firmly in its own right. Unfortunately, both sides miss the bigger picture:

The best solution for test automation is a domain-specific language.

A domain-specific language (DSL) is a programming language with a purpose. It is designed to handle very specific needs, rather than general-purpose programming. For example:

  • SQL is a DSL for database queries.
  • XPath is a DSL for finding elements in an XML document.
  • YAML is a DSL for object serialization.

Gherkin is also a DSL – for behavior specification.

Domain-specific languages naturally suit test automation due to the clear difference between test cases and test code. Test cases are procedures that exercise product behavior. Anyone can write a test case, because test cases are dictated or explained in plain language. Test code, however, is the software implementation of test cases. Test code handles function calls, logging, exceptions, and all those other little programming details that help run tests. A test automation DSL separates those concerns: test cases are written in a special language, and the interpreter handles the repetitive, low-level details. Some form of extension mechanism handles product-specific interactions. The purpose of a language is to effectively express intention – and here, the intention is to test the product.

To truly achieve an optimal solution, however, the DSL and its interpreter must be treated as part of the automation software, just like the test cases and extensions. Remember, a language’s interpreter is just another piece of software. The interpreter is part of the separation of concerns and the single responsibility principle. Concerns that would typically be handled by classes and functions in traditional test code should be moved to the interpreter. For example, the interpreter should automatically log every test case step, rather than forcing the author to write explicit logging statements.
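As a minimal, hypothetical sketch (in Python rather than Perl, and not a real implementation from any product), a DSL interpreter loop can centralize per-step logging so test authors never write logging statements themselves:

import logging

logging.basicConfig(level=logging.INFO, format='%(message)s')

def run_test(steps, registry):
    """Interpret a list of DSL step strings against a registry of step functions."""
    for number, step in enumerate(steps, start=1):
        logging.info('Step %d: %s', number, step)  # every step is logged automatically
        name, *args = step.split()
        registry[name](*args)  # dispatch to an extension function

# hypothetical usage with a made-up step vocabulary
registry = {'ping': lambda host: logging.info('  pinging %s ...', host)}
run_test(['ping filer01'], registry)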

When I worked at NetApp years ago, I implemented a DSL to test platform-level features of our operating system. I called it DS, short for “Design Steps” (a term from HP ALM), though the name also nodded to the Nintendo DS. NetApp’s entire test automation code base was written in Perl at the time, so I implemented the DS interpreter in Perl to reuse existing libraries. DS test cases were typically only a dozen lines long each, and DS expressions could call specially-written Perl modules directly for complete extendability. During the first big release using DS, my team saved countless hours of automation development time compared to the previous release, while delivering more tests. I also did this before I had ever heard of BDD.

Unfortunately, most teams have neither the time to develop their own testing DSL nor the understanding of compiler theory to build it right. And if they were given such a language, they typically limit themselves to the provided implementation instead of taking ownership to extend the language for their needs.


The original Nintendo DS. Fun times!

Who Truly Misunderstands Gherkin?

Enter Gherkin: the world’s first major general-purpose, off-the-shelf language for test automation. It is general enough to cover any case through its plain language steps, yet specific enough to standardize tests. Users don’t need to be compiler theory experts – they just make up their own step names and provide the definition code to execute them. Early BDD projects like JBehave and Cucumber packaged an interpreter as a test framework and delivered it to a testing world still stuck on JUnit. The need for a testing DSL was there, whether or not the BDD folks meant to serve it.

Cucumber-ites frequently bemoan that their framework is misunderstood by the masses. They shudder to see teams using their framework purely for test automation. However, Cucumber effectively lowered the entry barrier for teams to make their own testing DSLs. Kodak did the same thing for film: they made it cheap and standard so anyone could be a photographer. Not everyone who uses a BDD framework misunderstands its purpose: some (like me) just see a different value proposition from the one preached by orthodox BDD practitioners. Gherkin filled a need that nobody knew existed. Its popularity validates that claim.

Benefits Apart from Process

Using a BDD framework adds much value to testing and development even without BDD processes. Below are just a handful of benefits:

  1. Focus first on good scenarios. Gherkin forces authors to think before they code.
  2. Faster automation development. Gherkin steps are reusable and parametrizable.
  3. Stronger structure. Engineers know where to put things in the framework.
  4. Test understandability. Anyone can read scenarios because they are written in plain language. Business people can help. New people can pick it up fast.
  5. Test sharing. Feature files can be shared apart from test code, which can be helpful for business partners.
  6. Test similarity. Tests all look the same. Team members can more easily help each other.
  7. Clearer failures. When a scenario fails, reports show exactly what step failed.
  8. Simpler bug reports. Use scenario steps as instructions to reproduce the failure.
  9. 2-phase test reviews. Review Gherkin first and then test code second to make sure the test cases are good before implementing the wrong things.
  10. BDD enablement. Using a BDD framework opens the door for a team to embrace better behavioral practices in the future.

I have written about these advantages before.

Case Studies

I’m also not the only one who finds value in BDD test frameworks outside of the full BDD process. Below are five case studies.

radish

radish is a Python test framework inspired by Cucumber. Its DSL syntax is a superset of Gherkin that adds preconditions, loops, variables, and expressions. These language additions indicate a bias towards automation because they enable engineers to write tests more programmatically, albeit in a Gherkin-ese way.

Karate

Karate is a test framework with a full DSL based on Gherkin, with steps specifically tailored to Web service calls. Although it is implemented in Java, testers do not need to do any Java programming to write complete test cases from day one. Peter Thomas, the creator of Karate, unabashedly declares that Karate does not truly adhere to BDD but nevertheless uses Cucumber for its automation benefits. (Note: Karate is working to move completely off of Cucumber. See GitHub issue #444.)

REST Assured

REST Assured is a Java package for testing REST APIs. Unlike Karate, REST Assured provides a fluent syntax (and not a DSL) for writing service calls directly in Java code. The fluent syntax is based on Gherkin: given() a request spec is created, when() the call is made, then() verify the response. Although REST Assured is not a full testing framework, it nevertheless pulls inspiration from BDD frameworks for order and structure.

Cycle

Cycle is a BDD-focused solution from Tryon Solutions for testing Web, terminal, and desktop apps. Cycle is unique because it provides out-of-the-box steps for all types of supported testing so that no programming experience is required. Testers write feature files using Cycle 2.0’s slick new Electron app. Scenarios are written in CycleScript, a Gherkin-ese language with additions like variables and sub-scenario calls. Steps tend to be imperative, but that’s the tradeoff for not requiring lower-level programming.

Hexawise

Hexawise is a combinatorial testing tool designed to maximize coverage with minimal test counts by smartly joining feature variations. It helps testers write better tests with less redundancy and fewer gaps. Although Hexawise has historically assisted manual testers, it also can generate Gherkin feature files for test variations.


Not all cucumbers are the same. Above is a sea cucumber.

Good Enough?

Gherkin-based test frameworks are not perfect, but they do provide good structure. They gained popularity outside of the pure BDD movement because they genuinely added value to testing and automation. Like any other tool, teams will use them in both good and bad ways. (Trust me, I’ve seen scary Gherkin.)

It’s interesting to see how groups outside the Cucumber diaspora are attempting to solve the limitations of pure Gherkin. Each case study above showed a unique path. Clearly, the test automation problem has not yet been completely solved, but current BDD frameworks are the best off-the-shelf solutions we have until a new software testing movement comes along.

How Do I Know My Tests Add Value?

Software testing is a huge effort, especially for automation. Teams can spend a lot of time, money, and resources on testing (or not). People literally make careers out of it. That investment ought to be worthwhile – we shouldn’t test for the sake of testing.

So, therein lies the million-dollar question: How do we know that our tests add meaningful value?

Or, more bluntly: How do we know that testing isn’t a waste of time?

That’s easy: bugs!

The stock answer goes something like this: We know tests add value when they find bugs! So, let’s track the number of bugs we find.

That answer is wrong, despite its good intentions. Bug count is a terrible metric for judging the value of tests.

What do you mean bug counts aren’t good?

I know that sounds blasphemous. Let’s unpack it. Finding bugs is a good thing, and tests certainly should find bugs in the features they cover. But, the premise that the value of testing lies exclusively in the bugs found is wrong. Here’s why:

  1. The main value of testing is fast feedback. Testing serves two purposes: (1) validating goodness and (2) identifying badness. Passing tests are validated goodness. Failing tests, meaning uncovered bugs, are identified badness. Both types of feedback add value to the development process. Developers can proceed confidently with code changes when trustworthy tests are passing, and management can assess lower risk. Unfortunately, bug counts don’t measure that type of goodness.
  2. Good testing might actually reduce bug count. Testing means accountability for development. Developers must think more carefully about design. They can also run tests locally before committing changes. They could even do Test-Driven Development. Better practices could prevent many bugs from ever happening.
  3. Tracking bug count can drive bad behavior. When a high bug discovery rate looks good (or, worse, is tied to quotas), testers will strive to post numbers. If they don’t find critical bugs, they will open bug reports for nitpicks and trivialities. The extra effort spent to report inconsequential problems may not be of value to the business – wasting their time and the developers’ time all for the sake of metrics.
  4. Bugs are usually rare. Unless a team is dysfunctional, the product usually works as expected. Hundreds of test runs may not yield a single bug. That’s a wonderful thing if the tests have good coverage. Those tests still add value. Saying they don’t belittles the whole testing effort.

Then what metrics should we use?

Bugs happen arbitrarily, and unlimited testing is not possible. Metrics should focus on the return-on-investment for testing efforts. Here are a few:

  1. Time-to-bug-discovery. Rather than track bug counts, track the time until each bug is discovered. This metric genuinely measures the feedback loop for test results. Make sure to track the severity of each bug, too. For example, if high-severity bugs are not caught until production, then the tests don’t have enough coverage. Teams should strive for the shortest time possible – fast feedback means lower development costs. This metric also encourages teams to follow the Testing Pyramid.
  2. Coverage. Coverage is the degree to which tests exercise product behavior. Higher coverage means more feedback and greater chances of identifying badness. Most unit test frameworks can use code coverage tools to verify paths through code (see the example command after this list). Feature coverage requires extra process or instrumentation. Tests should avoid duplicate coverage, too.
  3. Test failure proportions. Tests fail for a variety of reasons. Ideally, tests should fail only when they discover bugs. However, tests may also fail for other reasons: unexpected feature changes, environment instability, or even test automation bugs. Non-bug failures disrupt the feedback loop: they force a team to fix testing problems rather than product problems, and they might cause engineers to devalue the whole testing effort. Tracking failure proportions will reveal what problems inhibit tests from delivering their top value.
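As a hedged illustration of the coverage point above, code coverage can be reported through the pytest-cov plugin; the package name my_package is a placeholder:

# measure coverage of my_package and list uncovered lines
pytest --cov=my_package --cov-report=term-missing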


EGAD! How Do We Start Writing (Better) Tests?

Some have never automated tests and can’t check themselves before they wreck themselves. Others have thousands of tests that are flaky, duplicative, and slow. Wa-do-we-do? Well, I gave a talk about this problem at PyOhio 2018 and again at PyGotham 2018. The language used for example code was Python, but the principles apply to any language.

Here’s the PyGotham talk:

And here’s the earlier PyOhio version: