
EGAD! How Do We Start Writing (Better) Tests?

Some have never automated tests and can’t check themselves before they wreck themselves. Others have thousands of tests that are flaky, duplicative, and slow. Wa-do-we-do? Well, I gave a talk about this problem at a few Python conferences. The example code was written in Python, but the principles apply to any language.

Here’s the PyTexas 2019 talk:

And here’s the PyGotham 2018 talk:

And here’s the first time I gave this talk, at PyOhio 2018:

I also gave this talk at PyCaribbean 2019 and PyTennessee 2020 (as an impromptu talk), but it was not recorded.

The Testing Pyramid

The “Testing Pyramid” is an industry-standard guideline for functional test case development. Love it or hate it, the Pyramid has endured since the mid-2000s because it continues to be practical. So, what is it, and how can it help us write better tests?

Layers

The Testing Pyramid has three classic layers:

  • Unit tests are at the bottom. Unit tests directly interact with product code, meaning they are “white box.” Typically, they exercise functions, methods, and classes. Unit tests should be short, sweet, and focused on one thing/variation. They should not have any external dependencies – mocks/monkey-patching should be used instead.
  • Integration tests are in the middle. Integration tests cover the point where two different things meet. They should be “black box” in that they interact with live instances of the product under test, not code. Service call tests (REST, SOAP, etc.) are examples of integration tests.
  • End-to-end tests are at the top. End-to-end tests cover a path through a system. They could arguably be defined as multi-step integration tests, and they should also be “black box.” Typically, they interact with the product like a real user. Web UI tests are examples of end-to-end tests because they need the full stack beneath them.

All layers are functional tests because they verify that the product works correctly.
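
To make the bottom layer concrete, here is a minimal pytest sketch (the functions under test are hypothetical): one test exercises a pure function directly, and the other uses monkey-patching so the unit test has no external dependency.

import requests

def is_palindrome(text):                      # hypothetical product code
    normalized = text.lower()
    return normalized == normalized[::-1]

def get_status(url):                          # hypothetical product code with an external call
    return requests.get(url).status_code

def test_is_palindrome():
    # Short, sweet, and focused on one thing
    assert is_palindrome("Racecar")

def test_get_status(monkeypatch):
    # Monkey-patch the HTTP call so this unit test has no external dependency
    class FakeResponse:
        status_code = 200
    monkeypatch.setattr(requests, "get", lambda url: FakeResponse())
    assert get_status("https://example.com/") == 200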

Proportions

The Testing Pyramid is triangular for a reason: there should be more tests at the bottom and fewer tests at the top. Why?

  1. Distance from code. Ideally, tests should catch bugs as close to the root cause as possible. Unit tests are the first line of defense. Simple issues like formatting errors, calculation blunders, and null pointers are easy to identify with unit tests but much harder to identify with integration and end-to-end tests.
  2. Execution time. Unit tests are very quick, but end-to-end tests are very slow. Consider the Rule of 1’s for Web apps: a unit test takes ~1 millisecond, a service test takes ~1 second, and a Web UI test takes ~1 minute (see the runtime sketch below). If test suites have hundreds to thousands of tests at the upper layers of the Testing Pyramid, then they could take hours to run. An hours-long turnaround time is unacceptable for continuous integration.
  3. Development cost. Tests near the top of the Testing Pyramid are more challenging to write than ones near the bottom because they cover more of the product. They’re longer. They need more tools and packages (like Selenium WebDriver). They have more dependencies.
  4. Reliability. Black box tests are susceptible to race conditions and environmental failures, making them inherently more fragile. Recovery mechanisms take extra engineering.

The total cost of ownership increases when climbing the Testing Pyramid. When deciding the level at which to automate a test (and whether to automate it at all), taking a risk-based strategy to push tests down the Pyramid is better than writing all tests at the top. Each proportionate layer mitigates risk at its optimal return on investment.
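
To see how the Rule of 1’s plays out, here is a rough back-of-the-envelope sketch. The suite sizes are hypothetical (though typical for a large Web app), and the per-test times come straight from the Rule of 1’s above.

# Back-of-the-envelope runtime estimates using the Rule of 1's
# (unit ~1 ms, service ~1 s, Web UI ~1 min per test, run serially).
test_counts = {"unit": 10000, "service": 1000, "web_ui": 300}
seconds_per_test = {"unit": 0.001, "service": 1, "web_ui": 60}

for layer, count in test_counts.items():
    total_seconds = count * seconds_per_test[layer]
    print(f"{layer}: {count} tests take ~{total_seconds / 60:.1f} minutes")

# Output:
# unit: 10000 tests take ~0.2 minutes
# service: 1000 tests take ~16.7 minutes
# web_ui: 300 tests take ~300.0 minutes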

Practice

The Testing Pyramid should be a guideline, not a hard rule. Don’t require hard proportions for test counts at each layer. Why not? Arbitrary metrics cause bad practices: a team might skip valuable end-to-end tests or write needless unit tests just to hit numbers. W. Edwards Deming would shudder!

Instead, use loose proportions to foster better retrospectives. Are we covering too many input combos through the Web UI when they could be checked via service tests? Are there unit test coverage gaps? Do we have a pyramid, a diamond, a funnel, a cupcake, or some other wonky shape? Each layer’s test count should be roughly an order of magnitude smaller than the layer beneath it. Large Web apps often have 10K unit tests, 1K service tests, and a few hundred Web UI tests.


Why Python is Great for Test Automation

Python is an incredible programming language. As Dan Callahan said in his PyCon 2018 keynote, “Python is the second best language for anything, and that’s an amazing aspiration.” For test automation, however, I believe it is one of the best choices. Here are ten reasons why:

#1: The Zen of Python

The Zen of Python, as codified in PEP 20, is an ideal guideline for test automation. Test code should be a natural bridge between plain-language test steps and the programming calls to automate them. Tests should be readable and descriptive because they describe the features under test. They should be explicit in what they cover. Simple steps are better than complicated ones. Test code should add minimal extra verbiage to the tests themselves. Python, in its concise elegance, is a powerful bridge from test case to test code.

(Want a shortcut to the Zen of Python? Run “import this” at the Python interpreter.)
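
For example, straight from the interpreter:

>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
...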

#2: pytest

pytest is one of the best test frameworks currently available in any language, not just for Python. It can handle any functional tests: unit, integration, and end-to-end. Test cases are written simply as functions (meaning no side effects as long as globals are avoided) and can take parametrized inputs. Fixtures are a generic, reusable way to handle setup and cleanup operations. Basic “assert” statements have automatic introspection so failure messages print meaningful values. Tests can be filtered when executed. Plugins extend pytest to do code coverage, run tests in parallel, use Gherkin scenarios, and integrate with other frameworks like Django and Flask. Other Python test frameworks are great, but pytest is by far the best in show. (Pythonic frameworks always win in Python.)
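
To illustrate a few of those features together, here is a minimal sketch (the code under test is trivial and hypothetical):

import pytest

@pytest.fixture
def basket():
    # Fixtures handle setup (and cleanup, via yield) for any test that requests them
    return ["cucumber", "tomato"]

@pytest.mark.parametrize("value, expected", [(2, 4), (3, 9), (4, 16)])
def test_square(value, expected):
    # Plain assert statements get automatic introspection when they fail
    assert value * value == expected

def test_basket_size(basket):
    assert len(basket) == 2

Running a command like “pytest -k square” would filter execution down to just the parametrized tests.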

#3: Packages

For all the woes about the CheeseShop, Python has a rich library of useful packages for testing: pytest, unittest, doctest, tox, logging, paramiko, requests, Selenium WebDriver, Splinter, Hypothesis, and others are available as off-the-shelf ingredients for custom automation recipes. They’re just a “pip install” away. No reinventing wheels here!

#4: Multi-Paradigm

Python is object-oriented and functional. It lets programmers decide if functions or classes are better for the needs at hand. This is a major boon for test automation because (a) stateless functions avoid side effects and (b) simple syntax for those functions makes them readable. pytest itself uses functions for test cases instead of shoehorning them into classes (à la JUnit).
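
For example, here is the same tiny check written as a pytest-style function and as a class-based unittest/JUnit-style test (a sketch, not from any real project):

import unittest

# Functional style: the test case is just a function
def test_reverse_string():
    assert "panda"[::-1] == "adnap"

# Class-based style: the same check shoehorned into a class
class ReverseStringTest(unittest.TestCase):
    def test_reverse_string(self):
        self.assertEqual("panda"[::-1], "adnap")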

#5: Typing Your Way

Python’s out-of-the-box dynamic duck typing is great for test automation because most feature tests (“above unit”) don’t need to be picky about types. However, when static types are needed, projects like mypy, Pyre, and MonkeyType come to the rescue. Python provides typing both ways!
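
As a small sketch, annotations can be added only where they pay off and checked with mypy, while duck typing handles the rest (the functions below are hypothetical):

from typing import List

def total_price(prices: List[float], tax_rate: float = 0.07) -> float:
    # mypy can statically verify that callers pass the right types here
    return sum(prices) * (1 + tax_rate)

def page_title(browser):
    # No annotations needed: duck typing accepts any object with a .title attribute
    return browser.title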

#6: IDEs

Good IDE support goes a long way to make a language and its frameworks easy to use. For Python testing, JetBrains PyCharm supports visual testing with pytest, unittest, and doctest out of the box, and its Professional Edition includes support for BDD frameworks (like pytest-bdd, behave, and lettuce) and Web development. For a lighter offering, Visual Studio Code is taking the world by storm. Its Python extensions support all the good stuff: snippets, linting, environments, debugging, testing, and a command line terminal right in the window. Atom, Sublime, PyDev, and Notepad++ also get the job done.

#7: Command Line Workflow

Python and the command line are like peanut butter and jelly – a match made in heaven. The entire test automation workflow can be driven from the command line. Pipenv can manage packages and environments. Every test framework has a console runner to discover and launch tests. There’s no need to “build” test code first because Python is an interpreted language, further simplifying execution. Rich command line support makes testing easy to manage manually, with tools, or as part of build scripts / CI pipelines.

As a bonus, automation modules can be called from the Python REPL interpreter or, even better, a Jupyter notebook. What does this mean? Automation-assisted exploratory testing! Imagine using Python calls to automatically steer a Web app to a point that requires a manual check. Calls can be swapped out, rerun, skipped, or changed on the fly. Python makes it possible.
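
Here is a rough sketch of what such a session could look like from the REPL, assuming Selenium WebDriver is installed and using a DuckDuckGo search like the behave example later in this post:

>>> from selenium import webdriver
>>> browser = webdriver.Firefox()
>>> browser.get('https://duckduckgo.com/')
>>> browser.find_element_by_name('q').send_keys('panda\n')
>>> # Pause here for a manual check, then keep steering, rerun, or change calls...
>>> browser.quit()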

#8: Ease of Entry

Python has always been friendly to beginners thanks to its Zen, whether those beginners are programming newbies or expert engineers. This gives Python a big advantage as an automation language choice because tests need to be done quickly and easily. Nobody wants to waste time when the features are in hand and just need to be verified. Plus, many manual software testers (often without programming experience) are now starting to do automation work (by choice or by force) and benefit from Python’s low learning curve.

#9: Strength for Scalability

Even though Python is great for beginners, it’s also no toy language. Python has industrial-grade strength because its design always favors one right way to get a job done. Development can scale thanks to meaningful syntax, good structure, modularity, and a rich ecosystem of tools and packages. Command line versatility enables it to fit into any tool or workflow. The fact that Python may be slower than other languages is not an issue for feature tests because system delays (such as response times for Web pages and REST calls) are orders of magnitude slower than language-level performance hits.

#10: Popularity

Python is one of the most popular programming languages in the world today. It is consistently ranked near the top on TIOBE, Stack Overflow, and GitHub (as well as GitHut). It is a beloved choice for Web developers, infrastructure engineers, data scientists, and test automationeers alike. The Python community also powers it forward. There is no shortage of Python developers, nor is there any dearth of support online. Python is not going away anytime soon. (Python 3, that is.)

Other Languages?

The purpose of this article is to highlight what makes Python great for test automation based on its own merits. Although I strongly believe that Python is one of the best automation languages, other choices like Java, C#, and Ruby are also viable. Check out my article The Best Programming Language for Test Automation for a comparison.

 

This article was posted with the author’s permission on both Automation Panda and PyBites.

Cypress.io and the Future of Web Testing

What is Cypress.io?

Cypress.io is an up-and-coming Web test automation framework. It is open source and written entirely in JavaScript. Unlike Selenium WebDriver tests that work outside the browser, Cypress works directly inside the browser. It enables developers to write front-end tests entirely in JavaScript, directly accessing everything within the browser. As a result, tests run much more quickly and reliably than Selenium-based tests.

Some nifty features include:

  • A rich yet simple API for interactions with automatic waiting
  • Mocha, Chai, and Sinon bundled in
  • A sleek dashboard with automatic reloads for Test-Driven Development
  • Easy debugging
  • Network traffic control for validation and mocking
  • Automatic screenshots and videos

Cypress was clearly developed for developers. It enables rapid test development with rapid feedback. The Cypress Test Runner is free, while the Cypress Dashboard Service (for better reporting and CI) will require a paid license.

How Do I Start Using Cypress?

I won’t post examples or instructions for using Cypress here. Please refer to the Cypress documentation for getting started and the tutorial video below. Make sure your machine is set up for JavaScript development.

Will Cypress Replace WebDriver?

TL;DR: No.

Cypress has its niche. It is ideal for small teams whose stacks are exclusively JavaScript and whose developers are responsible for all testing. However, WebDriver still has key advantages.

  1. While Selenium WebDriver supports nearly all major browsers, Cypress currently supports only one browser: Google Chrome. That’s a major limitation. Web apps do not work the same across browsers. Many industries (especially banking and finance) put strict controls on browser types and versions, too.
  2. Cypress is JavaScript only. Its website proudly touts its JavaScript purity like a badge of honor. However, that has downsides. First, all testing must happen inside the bubble of the browser, which makes parallel testing and system interactions much more difficult. Second, testers must essentially be developers, which may not work well for all teams. Third, other programming languages that may offer advantages for testing (like Python) cannot be used. Selenium WebDriver, on the other hand, has multiple language bindings and lets tests live outside the browser.
  3. Within the JavaScript ecosystem, Cypress is not the only all-in-one end-to-end framework. Protractor is more mature, more customizable, and easier to parallelize. It wraps Selenium WebDriver calls for simplification and safety, offering an easy-to-use API much like Cypress does.
  4. The WebDriver standard is a W3C Recommendation. What does this mean? All major browsers have a vested interest in implementing the standard. Selenium is simply the most popular implementation of the standard. It’s not going away. Cypress, however, is just a cool project backed with commercial intent.

What Does Cypress Mean for the Future?

There are a few big takeaways.

  1. JavaScript is taking over the world. It was the most popular language on GitHub in 2017. JavaScript-only stacks like MEAN and MERN are increasingly popular. The demand for a complete JavaScript-only test framework like Cypress is further evidence.
  2. “Bundled” test frameworks are becoming popular. Historically, a test framework simply provided test structure, basic fixtures, and maybe an assertion library (like JUnit). Then, extra test packages became popular (like Selenium WebDriver, REST APIs, mocking, logging, etc.). Now, new frameworks like Cypress and Protractor aim to provide pre-canned recipes of all these pieces to simplify the setup.
  3. Many new test frameworks will likely be developer-centric. There is a trend in the software industry (especially with Agile) of eliminating traditional tester roles and putting testing work onto developers. The role of the “Software Engineer in Test” – a developer who builds test systems – is also on the rise. Test automation tools and frameworks will need to provide good developer experience (DX) to survive. Cypress is poised to ride that wave.
  4. WebDriver is not perfect. Cypress was developed in large part to address WebDriver’s shortcomings, namely the slowness, difficulty, and unreliability (though unreliability is often a result of poor implementation). Many developers don’t like to use Selenium WebDriver, and so there will be a constant itch to make something better. Cypress isn’t there yet, but it might get there one day.

Clicking Web Elements with Selenium WebDriver

Selenium WebDriver is the most popular open source package for Web UI test automation. It allows tests to interact directly with a web page in a live browser. However, using Selenium WebDriver can be very frustrating because basic interactions often lack robustness, causing intermittent errors for tests.

The Basics

One such vulnerable interaction is clicking elements on a page. Clicking is probably the most common interaction for tests. In C#, a basic click would look like this:

webDriver.FindElement(By.Id("my-id")).Click();

This is the easy and standard way to click elements using Selenium WebDriver. However, it will work only if the targeted element exists and is visible on the page. Otherwise, the WebDriver will throw exceptions. This is when programmers pull their hair out.

Waiting for Existence

To avoid race conditions, interactions should not happen until the target element exists on the page. Even split-second loading times can break automation. The best practice is to use explicit waits before interactions with a reasonable timeout value, like this:

const int timeoutSeconds = 15;
var ts = new TimeSpan(0, 0, timeoutSeconds);
var wait = new WebDriverWait(webDriver, ts);

wait.Until((driver) => driver.FindElements(By.Id("my-id")).Count > 0);
webDriver.FindElement(By.Id("my-id")).Click();

Other Preconditions

Sometimes, Web elements won’t appear without first triggering something else. Even if the element exists on the page, the WebDriver cannot click it until it is made visible. Always look for the proper way to make that element available for clicking. Click on any parent panels or expanders first. Scroll if necessary. Make sure the state of the system permits the element to be clickable.

If the element is scrolled out of view, move to the element before clicking it:

new Actions(webDriver)
    .MoveToElement(webDriver.FindElement(By.Id("my-id")))
    .Click()
    .Perform();

Last Ditch Efforts

Nevertheless, there are times when clickable elements just don’t cooperate. They just can’t seem to be made visible. When all else fails, drop directly into JavaScript:

((IJavaScriptExecutor)webDriver).ExecuteScript(
    "arguments[0].click();",
    webDriver.FindElement(By.Id("my-id")));

Do this only when absolutely necessary. It is a best practice to use Selenium WebDriver methods because they make automated interaction behave more like a real user than raw JavaScript calls. Make sure to give good reasons in code comments whenever doing this, too.

Final Advice

This article was written specifically for clicks, but its advice can be applied to other sorts of interactions, too. Just be smart about waits and preconditions.

Note: Code examples on this page are written in C#, but calls are similar for other languages supported by Selenium WebDriver.

Why Choose BDD Over Other Test Frameworks?

People are heavily opinionated about Behavior-Driven Development. I frequently hear opponents say things like this:

Why would I use a BDD test framework instead of a traditional test framework like JUnit, NUnit, or pytest? The extra layer of plain language Gherkin steps gets in the way of the automation code. I can directly write code for those steps instead. BDD frameworks require lots of extra work that just doesn’t seem to add value. My team isn’t doing behavior-driven development practices, anyway.

I can sympathize with these sentiments, especially for those who have participated in projects where BDD was done poorly. Even if a team isn’t doing full behavior-driven development practices, I still assert that BDD test automation frameworks are better than traditional test frameworks for most feature testing (above-unit, black box). Here are reasons why.

Separation of Test Cases from Test Code

Test cases and test code are separate concerns. I should be able to design, discuss, and digest a test case without ever touching code. We describe features in plain language, and so we should also describe tests in plain language. Step definitions are nothing more than the automation behind the test case steps. Traditional test frameworks simply don’t have this separation of concerns, even if test methods are loaded with comments.

Guide Rails

BDD frameworks enforce good structure and layers for automation. There are designated places for test cases, step definitions, and support classes. The framework encourages good practices. Traditional test frameworks, however, are much more free-form. Programmers can do scary and stupid things with test classes. Functionally, a traditional test framework can still be structured well with layers and support classes, but it’s not required. Based on my experiences seeing less experienced automationeers shoving everything into Frankenstein’ed test methods, I much prefer to have the guide rails of a BDD framework.

Inherent Reusability

Steps are the building blocks of test cases, and test cases almost always have the same steps. BDD frameworks identify the step as a unique concern. One step with its definition can be used by any scenario, and steps can be parametrized for flexibility. This creates a “snowball” effect once enough steps have been developed: new tests may not require any new automation code! Traditional test frameworks simply don’t have this mechanism. It could be implemented by calling functions and classes outside of test classes, but not all automationeers are disciplined to do so, and everyone who does it will do it differently.

Aspect-Oriented Controls

Good frameworks handle cross-cutting concerns automatically. Things like logging, dependency injection, and cleanup shouldn’t interfere with test cases. BDD frameworks provide hooks to insert extra logic for these concerns around steps, scenarios, features, and even the whole test suite. Hooks can squeeze into steps because the framework is structured around steps. For example, hooks can automatically log steps to Extent Reports, instead of forcing programmers to write explicit logging calls in each test method.
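
For instance, in a Python BDD framework like behave, a hedged sketch of such hooks in environment.py might look like this (the plain logging calls stand in for whatever reporting tool a team actually uses):

# environment.py (behave) - illustrative step and scenario hooks for logging
import logging

def before_scenario(context, scenario):
    logging.info("Starting scenario: %s", scenario.name)

def before_step(context, step):
    # Every step gets logged automatically, so test cases stay free of logging calls
    logging.info("Step: %s %s", step.keyword, step.name)

def after_scenario(context, scenario):
    logging.info("Finished scenario: %s (%s)", scenario.name, scenario.status)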


The HOOKS, me bucko!

Easier Reviews

Nothing ruins your day like an illegible code review on features you don’t know. You are responsible for providing valuable feedback, but you can’t figure out what’s going on in the short amount of time you can dedicate to the review. Good Gherkin, however, makes it easy. A reviewer can review the test case apart from any code first to make sure it is a good test case. At this level, the reviewer could even be a non-technical person like a product owner. Then, the reviewer can either send the test case back with suggestions or, if the test case passes muster, dig deeper into the automation code.

Easier Onboarding

It can be hard to onboard new team members. They have so much to learn about the product, the code base, and the team practices. If tests are written using a BDD framework, then newbies can learn the features simply by reading the behavior specs. New automationeers likewise can rely on existing steps both for reuse and for examples as they develop new tests.

Other Reasons?

I’m sure there are other benefits to BDD frameworks, but these are the big ones for me. It’s an opinionated thing. Feel free to add comments below!

The Panda’s Dozen: Top PyCon 2018 Talks

There were tons of great talks at PyCon 2018 – more than I could attend in person – that are now available on the PyCon 2018 YouTube channel. This post has links to my favorites. Enjoy!

Check out PyCon 2018 Reflections to read my personal reflections, and watch my talk, Behavior-Driven Python, too!

By the Numbers: Python Community Trends in 2017/2018 (Dmitry Filippov, Ewa Jodlowska) – At the end of 2017, the Python Software Foundation teamed up with JetBrains to conduct an official Python Developers Survey. Data science is taking Python by storm, and Python 3 now has majority adoption. There are tons of other cool statistics, too!

How Netflix does failovers in 7 minutes flat (Amjith Ramanujam) – That speed at that scale is mind-blowing. This is a fascinating talk, even for non-engineers!

Solve Your Problem With Sloppy Python (Larry Hastings) – “If you ever start writing a shell script, delete it and write a Python script instead.” This talk is a jovial reminder that Python is a powerful tool, even for hack-n-slash jobs.

The AST and Me (Emily Morehouse-Valcarcel) – Emily gives a great overview of the inner workings of the Python language. This talk is a must-see for anyone into compiler theory.

Dataclasses: The code generator to end all code generators (Raymond Hettinger) – Dataclasses are a new Python feature that generates class boilerplate from simple field specifications.

Pipenv: the Future of Python Dependency Management (Kenneth Reitz) – Pipenv is a new tool that makes pip, Pipfile, and virtualenv easier to use together. Kenneth gives a good overview of Python packaging and why pipenv is awesome.

Type-checked Python in the real world (Carl Meyer) – Sometimes, I wish Python had static typing. Now, it can! Facebook has done some innovative things to make it possible.

Beyond Unit Tests: Taking Your Testing to the Next Level (Hillel Wayne) – Property tests + contracts = integration tests. Hillel gives a fantastic strategy for making tests smarter.

“WHAT IS THIS MESS?” – Writing tests for pre-existing code bases (Justin Crown) – This is a pragmatic guide to adding new tests to old code. Now, you’ll never procrastinate on your tests again!

Demystifying the Patch Function (Lisa Roach) – Mocking is a great practice for limiting scope in unit tests. Whether using unittest.mock or other patching/mocking packages, this is a great talk for learning why and how to do mocking in unit tests!

Automating Code Quality (Kyle Knapp) – One of Python’s most beloved traits is its elegance. Maintaining high standards of code quality can be challenging for large projects, though. Kyle shows how to use existing tools to drive higher quality.

Keynote (Ying Li) – [35:07] – Ying is a software security engineer at Docker. In her keynote, she urged all people involved in technology to learn the basics of security. Definitely watch the video recording – she used a fun story to illustrate her points.

Keynote: The People and Python (Qumisha Goss) – [1:07:35] – “Q” is a librarian at the Detroit Public Library who taught herself Python so she could teach coding classes to kids. She shared the highs and lows of her experiences, especially in light of many disadvantages her students had. My favorite takeaway was to “cultivate greatness in others.”

Python Testing 101: behave

Warning: If you are new to BDD, then I strongly recommend reading the BDD 101 series before trying to use the behave framework.

Overview

behave is a behavior-driven development (BDD) test framework that is very similar to Cucumber, Cucumber-JVM, and SpecFlow. BDD frameworks are unique in that test cases are not written in raw programming code but rather in plain specification language that is then “glued” to code. The “behavior specs” help to define what the behavior is, and steps can be reused by multiple test cases (or “scenarios”). This is very different from more traditional frameworks like unittest and pytest. Although behave is not an official Cucumber variant, it still uses the Gherkin language (“Given-When-Then”) for behavior specification.

Test scenarios are written in Gherkin “.feature” files. Each Given, When, and Then step is “glued” to a step definition – a Python function decorated by a matching string in a step definition module. The behave framework essentially runs feature files like test scripts. Hooks (in “environment.py”) and fixtures can also insert helper logic for test execution.

behave is officially supported for Python 2, but it seems to run just fine using Python 3.

Installation

Use pip to install the behave module.

pip install behave

Project Structure

Since behave is an opinionated framework, it has a very opinionated project structure. All code must be located under a directory named “features”. Gherkin feature files and the “environment.py” file for hooks must appear under “features”, and step definition modules must appear under “features/steps”. Configuration files can store common execution settings and even override the path to the “features” directory.

Note: Step definition module names do not need to be the same as feature file names. Any step definition can be used by any feature file within the same project.

[project root directory]
|-- [product code packages]
|-- features
|   |-- environment.py
|   |-- *.feature
|   `-- steps
|       `-- *_steps.py
`-- [behave.ini|.behaverc|tox.ini|setup.cfg]

Example Code

An example project named behavior-driven-python, located on GitHub, shows how to write tests using behave. This section will explain how the Web tests are designed.

The top layer in a behave project is the set of Gherkin feature files. Notice how the scenario below is concise, focused, meaningful, and declarative:

@web @duckduckgo
Feature: DuckDuckGo Web Browsing
  As a web surfer,
  I want to find information online,
  So I can learn new things and get tasks done.

  # The "@" annotations are tags
  # One feature can have multiple scenarios
  # The lines immediately after the feature title are just comments

  Scenario: Basic DuckDuckGo Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "panda"
    Then results are shown for "panda"

Each scenario step is “glued” to a decorated Python function called a step definition. Step defs can use different types of step matchers and can also take parametrized inputs:

from behave import *
from selenium.webdriver.common.keys import Keys

DUCKDUCKGO_HOME = 'https://duckduckgo.com/'

@given('the DuckDuckGo home page is displayed')
def step_impl(context):
  context.browser.get(DUCKDUCKGO_HOME)

@when('the user searches for "{phrase}"')
def step_impl(context, phrase):
  search_input = context.browser.find_element_by_name('q')
  search_input.send_keys(phrase + Keys.RETURN)

@then('results are shown for "{phrase}"')
def step_impl(context, phrase):
  links_div = context.browser.find_element_by_id('links')
  assert len(links_div.find_elements_by_xpath('//div')) > 0
  search_input = context.browser.find_element_by_name('q')
  assert search_input.get_attribute('value') == phrase

The “environment.py” file can specify hooks to execute additional logic before and after steps, scenarios, features, and even the whole test suite. Hooks should handle automation concerns that should not be exposed through Gherkin. For example, Selenium WebDriver setup and cleanup should be handled by hooks instead of step definitions because “after” hooks always run even when a scenario fails, whereas any remaining steps are skipped.

from selenium import webdriver

def before_scenario(context, scenario):
  if 'web' in context.tags:
    context.browser = webdriver.Firefox()
    context.browser.implicitly_wait(10)

def after_scenario(context, scenario):
  if 'web' in context.tags:
    context.browser.quit()

Test Launch

behave boasts a powerful command line with many options. Below are common use case examples when running tests from the project root directory:

# Run all scenarios in the project
behave

# Run all scenarios in a specific feature file
behave features/web.feature

# Filter tests by tag
behave --tags-help
behave --tags @duckduckgo
behave --tags ~@unit
behave --tags @basket --tags @add,@remove

# Write a JUnit report (useful for Jenkins and other CI tools)
behave --junit

# Don't print skipped scenarios
behave -k

Pros and Cons

Like all BDD test frameworks, behave is opinionated. It works best for black box testing due to its behavior focus. Web testing would be a great use case because user interactions can easily be described using plain language. Reusable steps also foster a snowball effect for automation development. However, behave would not be good for unit testing or low-level integration testing – the verbosity would become more of a hindrance than a helper.

My recommendation is to use behave for black box testing if the team has bought into BDD. I would also strongly consider pytest-bdd as an alternative BDD framework because it leverages all the goodness of pytest.

5 Things I Love About SpecFlow

SpecFlow, a.k.a. “Cucumber for .NET,” is a leading BDD test automation framework for .NET. Created by Gáspár Nagy and maintained as a free, open source project on GitHub by TechTalk, SpecFlow presently has almost 3 million total NuGet downloads. I’ve used it myself at a few companies, and, I must say as an automationeer, it’s awesome! SpecFlow shares a lot in common with other Cucumber frameworks like Cucumber-JVM, but it is not a knockoff – it excels in many ways. Below are five features I love about SpecFlow.

#1: Declarative Specification by Example

SpecFlow is a behavior-driven test framework. Test cases are written as Given-When-Then scenarios in Gherkin “.feature” files. For example, imagine testing a cucumber basket:

Feature: Cucumber Basket
  As a gardener,
  I want to carry many cucumbers in a basket,
  So that I don’t drop them all.
  
  @cucumber-basket
  Scenario: Fill an empty basket with cucumbers
    Given the basket is empty
    When "10" cucumbers are added to the basket
    Then the basket is full

Notice a few things:

  • It is declarative in that steps indicate what should be done at a high level.
  • It is concise in that a full test case is only a few lines long.
  • It is meaningful in that the coverage and purpose of the test are intuitively obvious.
  • It is focused in that the scenario covers only one main behavior.

Gherkin makes it easy to specify behaviors by example. That way, everybody can understand what is happening. C# code will implement each step in lower layers. Even if your team doesn’t do the full-blown BDD process, using a BDD framework like SpecFlow is still great for test automation. Test code naturally abstracts into separate layers, and steps are reusable, too!

#2: Context is King

Safely sharing data (i.e., “context”) between steps is a big challenge in BDD test frameworks. Using static variables is a simple yet terrible solution – any class can access them, but they create collisions for parallel test runs. SpecFlow provides much better patterns for sharing context.

Context injection is SpecFlow’s simple yet powerful mechanism for inversion of control (using BoDi). Any POCOs can be injected into any step definition class, either using default values or using a specific initialization, by declaring the POCO as a step def constructor argument. Those instances will also be shared instances, meaning steps across different classes can share the same objects! For example, steps for Web tests will all need a reference to the scenario’s one WebDriver instance. The context-injected objects are also created fresh for each scenario to protect test case independence.

Another powerful context mechanism is ScenarioContext. Every scenario has a unique context: title, tags, feature, and errors. Arbitrary objects can also be stored in the context object like a Dictionary, which is a simple way to pass data between steps without constructor-level context injection. Step definition classes can access the current scenario context using the static ScenarioContext.Current variable, but a better, thread-safe pattern is to make all step def classes extend the Steps class and simply reference the ScenarioContext instance variable.

#3: Hooks for Any Occasion

Hooks are special methods that insert extra logic at critical points of execution. For example, WebDriver cleanup should happen after a Web test scenario completes, no matter the result. If the cleanup routine were put into a Then step, then it would not be executed if the scenario had a failure in a When step. Hooks are reminiscent of Aspect-Oriented Programming.

Most BDD frameworks have some sort of hooks, but SpecFlow stands out for its hook richness. Hooks can be applied before and after steps, scenario blocks, scenarios, features, and even around the whole test run. (Cucumber-JVM, by contrast, does not support global hooks.) Hooks can be selectively applied using tags, and they can be assigned an order if a project has multiple hooks of the same type. Hook methods will also be picked up from any step definition class. SpecFlow hooks are just awesome!

#4: Thorough Outline Templating

Scenario Outlines are a standard part of Gherkin syntax. They’re very useful for templating scenarios with multiple input combinations. Consider the cucumber basket again:

Feature: Cucumber Basket
  
  Scenario Outline: Add cucumbers to the basket
    Given the basket has "<initial>" cucumbers
    When "<some>" cucumbers are added to the basket
    Then the basket has "<total>" cucumbers

    Examples: Counts
      | initial | some | total |
      | 1       | 2    | 3     |
      | 5       | 3    | 8     |

All BDD frameworks can parametrize step inputs (shown in double quotes). However, SpecFlow can also parametrize the non-input parts of a step!

Feature: Cucumber Basket
  
  Scenario Outline: Use the cucumber basket
    Given the basket has "<initial>" cucumbers
    When "<some>" cucumbers are <handled-with> the basket
    Then the basket has "<total>" cucumbers

    Examples: Counts
      | initial | some | handled-with | total |
      | 1       | 2    | added to     | 3     |
      | 5       | 3    | removed from | 2     |

The step definitions for the add and remove steps are separate. The step text for the action is parametrized, even though it is not a step input:

[When(@"""(\d+)"" cucumbers are added to the basket")]
public void WhenCucumbersAreAddedToTheBasket(int count) { /* */ }

[When(@"""(\d+)"" cucumbers are removed from the basket")]
public void WhenCucumbersAreRemovedFromTheBasket(int count) { /* */ }

That’s cool!

#5: Test Thread Affinity

SpecFlow can use any unit test runner (like MsTest, NUnit, and xUnit.net), but TechTalk provides the official SpecFlow+ Runner for a license fee. I’m not associated with TechTalk in any way, but the SpecFlow+ Runner is worth the cost for enterprise-level projects. It has a friendly command line, a profile file to customize execution, parallel execution, and nice integrations.

The major differentiator, in my opinion, is its test thread affinity feature. When running tests in parallel, the major challenge is avoiding collisions. Test thread affinity is a simple yet powerful way to control which tests run on which threads. For example, consider testing a website with user accounts. No two tests should use the same user at the same time, for fear of collision. Scenarios can be tagged for different users, and each thread can have the affinity to run scenarios for a unique user. Some sort of parallel isolation management like test thread affinity is absolutely necessary for test automation at scale. Given that the SpecFlow+ Runner can handle up to 64 threads (according to TechTalk), massive scale-up is possible.

But Wait, There’s More!

SpecFlow is an all-around great test automation framework, whether or not your team is doing full BDD. Feel free to add comments below about other features you love (or *gasp* hate) about SpecFlow!