In BDD, What Should Be A Feature?

How do I decide what a feature should be? And should I define a feature first before writing behavior specs, or should I start with behaviors and see how they fit together into features?

Features, scenarios, and behaviors are all common BDD terms that should be carefully defined:

  • A behavior is an operation with inputs, actions, and expected outcomes.
  • A scenario is the specification of a behavior using formal steps and examples.
  • A feature is a desired product functionality, often involving multiple behaviors.

Don’t try to over-think the definition of “feature.” Some features are small, while other features are large. The main distinction between a feature and a scenario or behavior is that features are what customers expect to receive. Small features may cover only a few behaviors, or even just one, while large features may cover many.

The Gherkin language has Feature and Scenario sections. In this sense, a Feature is simply a collection of related Scenarios. These keywords align roughly with the more general meanings of the terms.

Don’t over-think features with Agile, either. Some teams define a feature as a collection of user stories. Other teams say that one user story is a feature. In terms of Gherkin, don’t presume that one user story must have exactly one feature file with one Feature section. A user story could have zero-to-many feature files to cover its behaviors. Do whatever is appropriate.

Features should be determined by customer needs. They should solve problems the customers have. For example, perhaps the customer needs a better way to process orders through their online store. That’s where features should start – as business needs. Behaviors should then naturally come as part of grooming and refinement efforts. Thus, in most cases, features should be identified first before individual behaviors.

Nevertheless, there may be times during development when scenario-to-feature realignment should be done. It may be more convenient to create a new feature file for related behaviors. Or, a new feature may be “discovered” out of particularly useful behaviors. This is the exception rather than the norm.

BDD 101: Unit, Integration, and End-to-End Tests

There are many types of software tests. BDD practices can be incorporated into all aspects of testing, but BDD frameworks are not meant to handle all test types. Behavior scenarios are inherently functional tests – they verify that the product under test works correctly. While instrumentation for performance metrics could be added, BDD frameworks are not intended for performance testing. This post focuses on how BDD automation fits into the Testing Pyramid. Please read BDD 101: Manual Testing for manual test considerations.

The Testing Pyramid

The Testing Pyramid is a functional test development approach that divides tests into three layers: unit, integration, and end-to-end.

  • Unit tests are white-box tests that verify individual “units” of code, such as functions, methods, and classes. They should be written in the same language as the product under test, and they should be stored in the same repository. They often run as part of the build to indicate immediate success or failure.
  • Integration tests are black-box tests that verify integration points between system components work correctly. The product under test should be active and deployed to a test environment. Service tests are often integration-level tests.
  • End-to-end tests are black-box tests that test execution paths through a system. They could be seen as multi-step integration tests. Web UI tests are often end-to-end-level tests.

Below is a visual representation of the Testing Pyramid:

The Testing Pyramid

From bottom to top, the tests increase in complexity: unit tests are the simplest and run very fast, while end-to-end tests require lots of setup, logic, and execution time. Ideally, there should be more tests at the bottom and fewer tests at the top. Test coverage is easier to implement and isolate at lower levels, so fewer high-investment, more-fragile tests need to be written at the top. Pushing tests down the pyramid can also mean wider coverage with less execution time.

Behavior-Driven Unit Testing

BDD test frameworks are not meant for writing unit tests. Unit tests are meant to be low-level, program-y tests for individual functions and methods. Writing Gherkin for unit tests is doable, but it is overkill. It is much better to use established unit test frameworks like JUnit, NUnit, and pytest.

Nevertheless, behavior-driven practices still apply to unit tests. Each unit test should focus on one main thing: a single call, an individual variation, or a specific input combo. In other words, each unit test should cover one behavior. Furthermore, in the software process, feature-level behavior specs draw a clear dividing line between unit and above-unit tests. The developer of a feature is often responsible for its unit tests, while a separate engineer is responsible for integration and end-to-end tests, for the sake of accountability. Behavior specs carry a gentleman’s agreement that unit tests will be completed separately.
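
For example, a behavior-driven unit test suite in pytest might look like the sketch below. The multiply function and the test cases are purely hypothetical; the point is that each test covers exactly one behavior with its own inputs and expected outcome.

# A hypothetical function under test
def multiply(a, b):
    return a * b

# Each pytest test covers exactly one behavior: one input variation, one expected outcome
def test_multiply_two_positive_numbers():
    assert multiply(3, 4) == 12

def test_multiply_by_zero_returns_zero():
    assert multiply(7, 0) == 0

def test_multiply_negative_by_positive_is_negative():
    assert multiply(-3, 4) == -12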

Integration and End-to-End Testing

BDD test frameworks shine at the integration and end-to-end testing levels. Behavior specs expressively and concisely capture test case intent. Steps can be written at either the integration or end-to-end level. Service tests can be written as behavior specs, as in Karate. End-to-end tests are essentially multi-step integration tests. Note how a seemingly basic web interaction is truly a large end-to-end test:

Given a user is logged into the social media site
When the user writes a new post
Then the user's home feed displays the new post
And all the friends' home feeds display the new post

Making a simple social media post involves web UI interaction, backend service calls, and database updates all in real time. That’s a full pathway through the system. The automated step definitions may choose to cover these layers implicitly or explicitly, but they are nevertheless covered.
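
As a sketch, step definitions for the scenario above could be written with a Python BDD framework such as behave. The page object and API client helpers below are hypothetical placeholders, and only three of the four steps are shown; the point is that a single step can assert against the UI and the service layer explicitly.

from behave import given, when, then

@given("a user is logged into the social media site")
def step_user_logged_in(context):
    # "context.site" is a hypothetical helper that drives the web UI
    context.user = context.site.log_in_test_user()

@when("the user writes a new post")
def step_write_post(context):
    context.post = context.site.home_page.submit_post("Hello, world!")

@then("the user's home feed displays the new post")
def step_home_feed_shows_post(context):
    # Explicit layer coverage: check the web UI and the backend service
    assert context.site.home_page.shows_post(context.post)
    assert context.api.get_home_feed(context.user).contains(context.post)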

Lengthy End-to-End Tests

Terms often mean different things to different people. When many people say “end-to-end tests,” what they really mean are lengthy procedure-driven tests: tests that cover multiple behaviors in sequence. That makes BDD purists shudder because it goes against the cardinal rule of BDD: one scenario, one behavior. BDD frameworks can certainly handle lengthy end-to-end tests, but careful consideration should be given to whether and how they should be done.

There are five main ways to handle lengthy end-to-end scenarios in BDD:

  1. Don’t bother. If BDD is done right, then every individual behavior would already be comprehensively covered by scenarios. Each scenario should cover all equivalence classes of inputs and outputs. Thus, lengthy end-to-end scenarios would primarily be duplicate test coverage. Rather than waste the development effort, skip lengthy end-to-end scenario automation as a small test risk, and compensate with manual and exploratory testing.
  2. Combine existing scenarios into new ones. Each When-Then pair represents an individual behavior. Steps from existing scenarios could be smashed together with very little refactoring. This violates good Gherkin rules and could result in very lengthy scenarios, but it would be the most pragmatic way to reuse steps for large end-to-end scenarios. Most BDD frameworks don’t enforce step type order, and if they do, steps could be re-typed to work. (This approach is the most pragmatic but least pure.)
  3. Embed assertions in Given and When steps. This strategy avoids duplicate When-Then pairs and ensures validations are still performed. Each step along the way is validated for correctness with explicit Gherkin text. However, it may require a number of new steps.
  4. Treat the sequence of behaviors as a unique, separate behavior. This is the best way to think about lengthy end-to-end scenarios because it reinforces behavior-driven thinking. A lengthy scenario adds value only if it can be justified as a uniquely separate behavior. The scenario should then be written to highlight this uniqueness. Otherwise, it’s not a scenario worth having. These scenarios will often be very declarative and high-level.
  5. Ditch the BDD framework and write them purely in the automation programming language (see the sketch after this list). Gherkin is meant for collaboration about behaviors, while lengthy end-to-end tests are meant exclusively for intense QA work. Biz roles will write behavior specs but will never write end-to-end tests. Forcing behavior specification on lengthy end-to-end scenarios can inhibit their development. A better practice could be coexistence: acceptance tests could be written with Gherkin, while lengthy end-to-end tests could be written in raw programming. Automation for both test sets could nevertheless share the same automation code base – they could share the same support modules and even step definition methods.

Pick the approach that best meets the team’s needs.
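
As an illustration of the fifth approach, a lengthy end-to-end test could be written directly in pytest while still sharing the automation project’s support code. Everything imported below is a hypothetical shared module, not a real library; the sketch only shows how raw tests and step definitions could reuse the same helpers.

# A raw pytest end-to-end test that reuses support modules shared with the step definitions
from support import social_site, api_client  # hypothetical shared modules

def test_post_then_comment_then_delete():
    site = social_site.launch()
    user = site.log_in_test_user()

    post = site.home_page.submit_post("Lengthy end-to-end test")
    assert site.home_page.shows_post(post)

    comment = site.home_page.add_comment(post, "First!")
    assert api_client.get_comments(post).contains(comment)

    site.home_page.delete_post(post)
    assert not api_client.post_exists(post)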

Gherkin Syntax Highlighting in Atom

Atom, “a hackable editor for the 21st Century,” is a really great text editor for both quick edits and serious programming. Atom is free, open-source, and developed by GitHub. It can support a host of languages out-of-the-box, with plugins for even more. What makes Atom really nice compared to Notepad++ is that Atom is cross-platform: it runs on Linux, macOS, and Windows. Another bonus point over Notepad++ is the in-editor Project tree view for directories. Atom also has Atom IDE for advanced development support. Even though Atom is feature-rich, its response time is pretty fast. It’s a solid text editor choice for both technical and non-technical users.

One of my first blog posts on Automation Panda was Gherkin Syntax Highlighting in Notepad++. It continues to be one of my most popular posts, too. However, Notepad++ doesn’t help feature file authors who use macOS or Linux. Thankfully, Atom has a decent plugin for Gherkin. In fact, it has a number of Gherkin plugins available.

Atom Install Plugin

On macOS, Settings are available under File -> Preferences… and on the Install tab.

I installed the first package, language-gherkin, and I was very pleased with the syntax highlighting. I also tried the internationalized package below it in the list, but the colors were not as nice (call me picky). It looked like other packages could do autocomplete and table formatting as well.

Atom Gherkin

Nice!

Atom is just another great option for writing Gherkin feature files.

BDD 101: Manual Testing

Behavior-driven development takes an automation-first philosophy: behavior specs should become automated tests. However, BDD can also accommodate manual testing. Manual testing has a place and a purpose, even in BDD. Remember, behavior scenarios are first and foremost behavior specifications, and they provide value beyond testing and automation. Any behavior scenario could be run as a manual test. The main questions, then, are (1) when is manual testing appropriate and (2) how should it be handled.

When is Manual Testing Appropriate?

Automation is not a silver bullet – it doesn’t satisfy all testing needs. Scenarios should be written for all behaviors, but they likely shouldn’t be automated under the following circumstances:

  • The return-on-investment to automate the scenarios is too low.
  • The scenarios won’t be included in regression or continuous integration.
  • The behaviors are temporary (ex: hotfixes).
  • The automation itself would be too complex or too fragile.
  • The nature of the feature is non-functional (ex: performance, UX, etc.).
  • The team is still learning BDD and is not yet ready to automate all scenarios.

Manual testing is also appropriate for exploratory testing, in which engineers rely upon experience rather than explicit test procedures to “explore” the product under test for bugs and quality concerns. It complements automation because both testing styles serve different purposes. However, behavior scenarios themselves are incompatible with exploratory testing. The point of exploring is for engineers to go “unscripted” – without formal test plans – to find problems only a user would catch. Rather than writing scenarios, the appropriate way to approach behavior-driven exploratory testing is more holistic: testers should assume the role of a user and exercise the product under test as a collection of interacting behaviors. If exploring uncovers any glaring behavior gaps, then new behavior scenarios should be added to the catalog.

How Should Manual Testing Be Handled?

Manual testing fits into BDD in much the same way as automated testing because both formats share the same process for behavior specification. Where the two ways diverge is in how the tests are run. There are a few special considerations to make when writing scenarios that won’t be automated.

Repository

Both manual and automated behavior scenarios should be stored in the same repository. The natural way to organize behaviors is by feature, regardless of how the tests will be run. All scenarios should also be managed by some form of version control.

Furthermore, all scenarios should be co-located for document-generation tools like Pickles. Doc tools make it easy to expose behavior specs and steps to everyone. They make it easier for the Three Amigos to collaborate. Non-technical people are not likely to dig into programming projects.

Tags

Scenarios must be classified as manual or automated. When BDD frameworks run tests, they need a way to exclude tests that are not automated. Otherwise, test reports would be full of errors! In Gherkin, scenarios should be classified using tags. For example, scenarios could be tagged as either “@manual” or “@automated”. A third tag, “@automatable”, could be used to distinguish scenarios that are not yet automated but are targeted for automation.

Some BDD frameworks have nifty features for tags. In Cucumber-JVM, tags can be set as runner class options for convenience. This means that tag options could be set to “~@manual” to avoid manual tests. In SpecFlow, any scenario with the special “@ignore” tag will automatically be skipped. Nevertheless, I strongly recommend using custom tags to denote manual tests, since there are many reasons why a test may be ignored (such as known bugs).
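
For example, a Python framework like behave could enforce the “@manual” tag with a hook in environment.py. This is only a rough sketch of one possible mechanism, and it assumes behave’s scenario.skip() support; command-line tag expressions would work just as well.

# environment.py -- skip any scenario tagged @manual before it runs
def before_scenario(context, scenario):
    # behave stores tag names without the "@" prefix
    if "manual" in scenario.tags:
        scenario.skip("Marked as a manual test")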

Extra Comments

The conciseness of behavior scenarios is problematic for manual testing because steps don’t provide all the information a tester may need. For example, test data may not be written explicitly in the spec. The best way to add extra information to a scenario is to add comments. Gherkin allows any number of lines for comments and description. Comments provide extra information to the reader but are ignored by the automation.

It may be tempting to simply write new Gherkin steps to handle the extra information for manual testing. However, this is not a good approach. Principles of good Gherkin should be used for all scenarios, regardless of whether or not the scenarios will be automated. High-quality specification should be maintained for consistency, for documentation tools, and for potential future automation.

An Example

Below is a feature that shows how to write behavior scenarios for manual tests:

Feature: Google Searching

  @automated
  Scenario: Search from the search bar
    Given a web browser is at the Google home page
    When the user enters "panda" into the search bar
    Then links related to "panda" are shown on the results page

  @manual
  Scenario: Image search
    # The Google home page URL is: http://www.google.com/
    # Make sure the images shown include pandas eating bamboo
    Given Google search results for "panda" are shown
    When the user clicks on the "Images" link at the top of the results page
    Then images related to "panda" are shown on the results page

It’s not really different from any other behavior scenarios.

 

As stated in the beginning, BDD should be automation-first. Don’t use the content of this article to justify avoiding automation. Rather, use the techniques outlined here for manual testing only as needed.

 

Test Automation Myth-Busting

Test automation is a vital part of software quality assurance, now more than ever. However, it is a discipline that is often poorly understood. I’ve heard all sorts of crazy claims about automation throughout my career. This post debunks a number of commonly held but erroneous beliefs about automation.

Myth #1: Every test should be automated.

“100% automation” seems to be a new buzz-phrase. If automation is so great, why not automate every test? Not every test is worth automating in terms of return-on-investment. Automation requires significant expertise to design, implement, and maintain. There are limits to how many tests a team can reasonably produce and manage. Furthermore, not all tests are equal. Some require more effort to handle, or may not be run as frequently, or cover less important features. Just because a test could be automated does not mean that it should be automated. Using a risk-based test strategy, tests to automate should be prioritized by highest ROI.

Automated testing does not completely replace manual testing, either. Automated testing is defensive: it protects a code line by consistently running scripted tests for core functionality. Manual testing, by contrast, is offensive: it uses human expertise to explore features off-script, test-to-break, and evaluate holistic quality. For the same test, the return-on-investment of automating it versus running it manually is often the opposite. Automated and manual testing together fulfill vital, complementary roles.

Myth #2: Automation means we can downsize QA.

Executives often see test automation as a way to automate QA out of a job. This is simply not true: Automation makes QA jobs more efficient and all the more necessary. Automation is software and thus requires strong software development skills. It also requires extra tools, processes, and labor to maintain. The benefit is that more tests can be run more quickly. QA jobs won’t vanish due to automation – they simply assume new responsibilities.

Myth #3: Automation will catch all bugs.

By their very nature, automated tests are “scripted” – each test always follows the same pre-programmed steps. This is a very good thing for catching regression bugs, but it inherently cannot handle new, unforeseen situations. That’s why manual, exploratory testing is needed. Automation, being software, may also have its own bugs. Automation is not a silver bullet.

Myth #4: Automation must be written in the same language as the product code.

Automation must be written in the same programming language as the product code for white-box unit tests. However, any programming language may be used for black-box functional tests. Black-box functional tests (like integration and end-to-end tests) test a live product. There’s no direct connection between the automation code and the product code. For example, a web app could have a REST service layer written in Java, a Web UI frontend written in .NET and JavaScript, and test automation written in Python using requests and Selenium WebDriver. It may be helpful to write automation in the same language as the product so that developers can more easily contribute, but it is not required. Choose the best language for test automation to meet the present needs.
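
To make that concrete, here is a minimal sketch of black-box tests written in Python with requests and Selenium WebDriver. The URL, endpoint, and page title are hypothetical placeholders.

import requests
from selenium import webdriver

def test_health_endpoint_returns_ok():
    # Service-level check through the public REST API
    response = requests.get("https://example.com/api/health")
    assert response.status_code == 200

def test_home_page_title():
    # UI-level check through a real browser
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")
        assert "Example" in driver.title
    finally:
        driver.quit()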

Myth #5: All tests should be such-and-such-level tests.

This argument varies by product type and team. For web apps, it could be phrased as, “All tests should be web UI tests, since that’s how the user interacts with the product.” This is nonsense – different layers of testing mitigate risk at their optimal returns-on-investment. The Testing Pyramid applies this principle. Consider a web app with a service layer in terms of automation – service calls have faster response times and are more reliable than web UI interactions. It would likely be wise to test input combinations at the service layer while focusing on UI-specific functionality only at the web layer.

Myth #6: Unit tests aren’t necessary because the QA team does the testing.

The existence of a QA team or of a black-box automated test suite does not negate the need for unit tests. Unit tests are an insurance policy – they make sure the software programming is fundamentally working. In continuous integration, they make sure builds are good. They are essential for good software development. Many times, I caught bugs in my own code through writing unit tests – bugs that nobody else ever saw because I fixed them before committing code. Personally, I would never want to work on a product without strong unit tests.

Myth #7: We can complete a user story this sprint and automate its tests next sprint.

In Agile Scrum, teams face immense pressure to finish user stories within a sprint. Test automation is often the last part of a story to be done, and it frequently slips when product code is delivered late or a team’s capacity is overestimated. If the automation isn’t completed by the end of the sprint, teams are tempted to mark the story as complete and finish the test automation in the future. This is a terrible mistake and a violation of Agile principles. Test automation should be included in the definition of done. A story isn’t complete without its prescribed tests! Punting tests into the next sprint merely builds technical debt and forces QA into constant catch-up. To mitigate the risk of incomplete stories, teams should size stories to include automation work, shift left to start QA sooner, or reduce the total sprint commitment.

Myth #8: Automation is just a bunch of “test scripts.”

It’s quite common to hear developers or managers refer to automated tests as “test scripts.” While this term itself is not inherently derogatory, it oversimplifies the complexity of test automation. “Scripts” sound like short, hacky sequences of commands to do system dirty-work. Test automation, however, is a full stack: in addition to the product under test, automation involves design patterns, dependency packages, development processes, version control, builds, deployments, reporting, and failure triage. Referring to test automation as “scripting” leads to chronic planning underestimations. Automation is a discipline, and the investment it requires should be honored.

 

Do you have any other automation myths to debunk? Share them in the comment section below!

Purist vs. Pragmatist

There’s often more than one way to solve a problem. Engineers tend to be pretty opinionated about solutions, too. Whenever I see disagreements in design, I typically notice two competing stances: the pragmatist and the purist. Identifying these approaches helps to understand how others think and fosters healthier team collaboration.

A purist is one who focuses primarily on the correctness of a solution. They typically seek a systematic, comprehensive, and verifiable design. A pragmatist, however, favors practical, expedient solutions. They are okay with a solution so long as it works.

The table below shows some of the ways these two mindsets may differ:

| Purist | Pragmatist |
| --- | --- |
| Focus more on what is correct | Focus more on what is expedient |
| Spend more effort on design and the “big picture” | Spend more effort on implementation |
| Very picky in code review | Less picky in code review |
| Interested more in white-box code quality | Interested more in black-box code quality |
| Favors strong design patterns, even if they are complicated | Favors simpler design patterns, even if they have less-than-desirable consequences |
| Prefers to redesign than to hack | Prefers to hack than to redesign |
| Good at handling long-term problems | Good at handling short-term problems |
| Views software development as an art as well as an engineering practice | Views development primarily as an engineering practice |
| Aligns well with academia | Aligns well with business |
| In test automation, better for framework development | In test automation, better for test case development |

These descriptions are not absolute: many people fall somewhere between the poles of purist and pragmatist. However, most people tend to exhibit stronger tendencies in one direction.

Personally, I tend to be a purist. If I need to get a job done, I feel ashamed if I cannot afford the time to do it properly. However, I often find myself working with pragmatists. That’s not a bad thing – I recognize the value in each perspective. There is much to learn from both sides!

Django Settings for Different Environments

The Django settings module is one of the most important files in a Django web project. It contains all of the configuration for the project, both standard and custom. Django settings are really nice because they are written in Python instead of a text file format, meaning they can be set using code instead of literal values.

Settings must often use different values for different environments. The DEBUG setting is a perfect example: It should always be True in a development environment to help debug problems, but it should never be True in a production environment to avoid security holes. Another good example is the DATABASES setting: Development and test environments should not use production data. This article covers two good ways to handle environment-specific settings.

Multiple Settings Modules

The simplest way to handle environment-specific settings is to create a separate settings module for each environment. Different settings values would be assigned in each module. For example, instead of just one mysite.settings module, there could be:

mysite
`-- mysite
    |-- __init__.py
    |-- settings_dev.py
    |-- settings_prod.py
    `-- settings_test.py

For the DEBUG setting, mysite.settings_dev and mysite.settings_test would contain:

DEBUG = True

And mysite.settings_prod would contain:

DEBUG = False

Then, set the DJANGO_SETTINGS_MODULE environment variable to the name of the desired settings module. The default value is mysite.settings, where “mysite” is the name of the project. Make sure to set this variable wherever the Django site is run. Also make sure that the settings module is available in PYTHONPATH.
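
For example, Django’s generated manage.py and wsgi.py files call os.environ.setdefault to provide a fallback value, which could point at one of the environment-specific modules (the module name below is just an example):

# manage.py (excerpt): fall back to the dev settings when the variable is not set
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings_dev")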

More details on this approach are given on the Django settings page.

Using Environment Variables

One problem with multiple settings modules is that many settings won’t need to be different between environments. Duplicating these settings then violates the DRY principle (“don’t repeat yourself”). A more advanced approach for handling environment-specific settings is to use custom environment variables as Django inputs. Remember, the settings module is written in Python, so values can be set using calls and conditions. One settings module can be written to handle all environments.

Add a function like this to read environment variables:

# Imports
import os
from django.core.exceptions import ImproperlyConfigured

# Function
def read_env_var(name, default=None):
    # Read the value from the environment, falling back to the default if provided
    value = os.environ.get(name, default)
    if not value:
        raise ImproperlyConfigured("The %s value must be provided as an env variable" % name)
    return value

Then, use it to read environment variables in the settings module:

# Read the secret key directly
# This is a required value
# If the env variable is not found, the site will not launch
SECRET_KEY = read_env_var("SECRET_KEY")

# Read the debug setting
# Default the value to False
# Environment variables are strings, so the value must be converted to a Boolean
DEBUG = read_env_var("DEBUG", "False") == "True"

To avoid a proliferation of required environment variables, one variable could be used to specify the target environment like this:

# Read the target environment
TARGET_ENV = read_env_var("TARGET_ENV")

# Set the debug setting to True for every environment except production
DEBUG = (TARGET_ENV != "prod")

# Set database config for the chosen environment
if TARGET_ENV == "dev":
    DATABASES = { ... }
elif TARGET_ENV == "prod":
    DATABASES = { ... }
elif TARGET_ENV == "test":
    DATABASES = { ... }

Managing environment variables can be pesky. A good way to manage them is using shell scripts. If the Django site will be deployed to Heroku, variables should be saved as config vars.

Conclusion

These are the two primary ways I recommend to handle different settings for different environments in a Django project. Personally, I prefer the second approach of using one settings module with environment variable inputs. Just make sure to reference all settings from the settings module (“from django.conf import settings”) instead of directly referencing environment variables!
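
For instance, application code should read values through django.conf rather than calling os.environ directly, so that the settings module remains the single source of truth:

# Read settings through django.conf, never through os.environ
from django.conf import settings

def is_debug_mode():
    return settings.DEBUG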