Jasmine

JavaScript Testing with Jasmine

Table of Contents

  1. Introduction
  2. Setup and Installation
  3. Project Structure
  4. Unit Tests for Functions
  5. Unit Tests for Classes
  6. Unit Tests with Mocks
  7. Integration Tests for REST APIs
  8. End-to-End Tests for Web UIs
  9. Basic Test Execution
  10. Advanced Test Execution with Karma
  11. Angular Testing

Introduction

Jasmine is one of the most popular JavaScript test frameworks available. Its tests are immediately recognizable by their describe/it format. Jasmine is inspired by Behavior-Driven Development and comes with many basic features out of the box. Jasmine is best known for its Node.js support, but it also supports Python and Ruby, and it works with languages that compile to JavaScript, such as TypeScript and CoffeeScript.

This guide shows how to write tests in JavaScript on Node.js using Jasmine. It uses the jasmine-node-js-example project (hosted on GitHub). Content includes:

  • Basic white-box unit tests
  • REST API integration tests with frisby
  • Web UI end-to-end tests with Protractor
  • Spying with sinon
  • Monkeypatching with rewire
  • Handling config data with JSON files
  • Advanced execution features with Karma
  • Special considerations for Angular projects

The Jasmine API Reference is also indispensable when writing tests.

Setup and Installation

The official Jasmine Node.js Setup Guide explains how to set up and install Jasmine. Jasmine tests may be added to an existing project or to an entirely new project. As a prerequisite, Node.js must already be installed. Use the following commands to set things up.

# Initialize a new project (if necessary)
# This will create the package.json file
$ mkdir [project-name]
$ cd [project-name]
$ npm init

# Install Jasmine locally for the project and globally for the CLI
$ npm install jasmine
$ npm install -g jasmine

# Create a spec directory with configuration file for Jasmine
$ jasmine init

# Optional: Install official Jasmine examples
# Do this only for self-education in a separate project
$ jasmine examples

The code used by this guide is available on GitHub at jasmine-node-js-example. Feel free to clone this repository to try things out yourself!

Recommended editors and IDEs include Visual Studio Code with the Jasmine Snippets extensions, Atom, and JetBrains WebStorm.

Project Structure

Jasmine does not require the project to have a specific directory layout, but it does use a configuration file to specify where to find tests. The default, conventional project structure created by “jasmine init” puts all Jasmine code into a “spec” directory, which contains “*spec.js” files for tests, helpers that run before specs, and a support directory for config. The JASMINE_CONFIG_PATH environment variable can be set to change the config file used. (The default config file is spec/support/jasmine.json.)

[project-name]
|-- [product source code]
|-- spec
|   |-- [spec sub-directory]
|   |   `-- *spec.js
|   |-- helpers
|   |   `-- [helper sub-directory]
|   `-- support
|       `-- jasmine.json
`-- package.json

This structure may be changed using the “spec_dir”, “spec_files”, and “helpers” properties in the config file. For example, it may be useful to add more than one level of directories to the hierarchy. However, it is typically best to leave the conventional directory layout in place. The default config values as of Jasmine 2.8 are shown below.

{
  "spec_dir": "spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ],
  "helpers": [
    "helpers/**/*.js"
  ],
  "stopSpecOnExpectationFailure": false,
  "random": false
}

It is also a best practice to separate tests between different levels of the Testing Pyramid. The example project has spec subdirectories for unit, integration, and end-to-end tests. Directory-level organization makes it easy to filter tests by level when executed.
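For example, with the layout above, one level can be run on its own simply by path, or each level can get its own config file selected through JASMINE_CONFIG_PATH. (The jasmine.integration.json file name below is only a hypothetical illustration.)

# Run only the unit-level specs by path
$ jasmine spec/unit/*spec.js

# Or select a level-specific config file (hypothetical file name)
$ JASMINE_CONFIG_PATH=spec/support/jasmine.integration.json jasmine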

Unit Tests for Functions

The most basic unit of code to be tested in JavaScript is a function. The “lib/calculator.functions.js” module contains some basic math functions for easy testing.

// --------------------------------------------------
// lib/calculator.functions.js
// --------------------------------------------------

// Calculator Functions

function add(a, b) {
    return a + b;
}

function subtract(a, b) {
    return a - b;
}

function multiply(a, b) {
    return a * b;
}

function divide(a, b) {
    let value = a * 1.0 / b;
    if (!isFinite(value))
        throw new RangeError('Divide-by-zero');
    else
        return value;
}

function maximum(a, b) {
    return (a >= b) ? a : b;
}

function minimum(a, b) {
    return (a <= b) ? a : b;
}

// Module Exports

module.exports = {
    add: add,
    subtract: subtract,
    multiply: multiply,
    divide: divide,
    maximum: maximum,
    minimum: minimum,
}

Its tests are in “spec/unit/calculator.function.spec.js”. Below is a snippet showing simple tests for the “add” function. A describe block groups a “suite” of specs together, and each it block is an individual spec (or test). Spec titles are often phrased as what the spec should do. Describe blocks may be nested for hierarchical grouping, but it blocks, being bottom-level, may not. Assertions are made using Jasmine’s expect function and fluent matcher methods. Since the functions are stateless, no setup or cleanup is needed. Tests for the other math functions are similar.

// --------------------------------------------------
// spec/unit/calculator.function.spec.js
// --------------------------------------------------

const calc = require('../../lib/calculator.functions');

describe("Calculator Functions", function() {

  describe("add", function() {

    it("should add two positive numbers", function() {
      let value = calc.add(3, 2);
      expect(value).toBe(5);
    });

    it("should add a positive and a negative number", function() {
      let value = calc.add(3, -2);
      expect(value).toBe(1);
    });

    it("should give the same value when adding zero", function() {
      let value = calc.add(3, 0);
      expect(value).toBe(3);
    });

  });

});

The divide-by-zero test for the “divide” function is special because it must verify that an exception is thrown. The divide call is wrapped in a function so that it may be passed into the expect call.

  describe("divide", function() {

    // ...

    it("should throw an exception when dividing by zero", function() {
      let divideByZero = function() { calc.divide(3, 0); };
      expect(divideByZero).toThrowError(RangeError, 'Divide-by-zero');
    });

    // ...

  });

The “maximum” and “minimum” functions have parametrized tests using the Array class’s forEach method. This is a nifty trick for hitting multiple input sets without duplicating code or combining specs. Note that the spec titles are also parametrized. Tests for “maximum” are shown below.

  describe("maximum", function() {

    [
      [1, 2, 2],
      [2, 1, 2],
      [2, 2, 2],
    ].forEach(([a, b, expected]) => {
      it(`should return ${expected} when given ${a} and ${b}`, () => {
        let value = calc.maximum(a, b);
        expect(value).toBe(expected);
      });
    });

  });

Unit Tests for Classes

Jasmine can also test classes. When testing classes, setup and cleanup routines become more helpful. The Calculator class in the “lib/calculator.class.js” module calls the math functions and caches the last answer.

// --------------------------------------------------
// lib/calculator.class.js
// --------------------------------------------------

// Imports

const calcFunc = require('./calculator.functions');

// Calculator Class

class Calculator {

  constructor() {
      this.last_answer = 0;
  }

  do_math(a, b, func) {
      return (this.last_answer = func(a, b));
  }

  add(a, b) {
      return this.do_math(a, b, calcFunc.add);
  }

  subtract(a, b) {
      return this.do_math(a, b, calcFunc.subtract);
  }

  multiply(a, b) {
      return this.do_math(a, b, calcFunc.multiply);
  }

  divide(a, b) {
      return this.do_math(a, b, calcFunc.divide);
  }

  maximum(a, b) {
      return this.do_math(a, b, calcFunc.maximum);
  }

  minimum(a, b) {
      return this.do_math(a, b, calcFunc.minimum);
  }

}

// Module Exports

module.exports = {
  Calculator: Calculator,
}

The Jasmine specs in “spec/unit/calculator.class.spec.js” are very similar but now call the beforeEach method to construct the Calculator object before each scenario. (Jasmine also has methods for afterEach, beforeAll, and afterAll.) The verifyAnswer helper function also makes assertions easier. The addition tests are shown below.

// --------------------------------------------------
// spec/unit/calculator.class.spec.js
// --------------------------------------------------

const calc = require('../../lib/calculator.class');

describe("Calculator Class", function() {

  let calculator;

  beforeEach(function() {
    calculator = new calc.Calculator();
  });

  function verifyAnswer(actual, expected) {
    expect(actual).toBe(expected);
    expect(calculator.last_answer).toBe(expected);
  }

  describe("add", function() {

    it("should add two positive numbers", function() {
      verifyAnswer(calculator.add(3, 2), 5);
    });

    it("should add a positive and a negative number", function() {
      verifyAnswer(calculator.add(3, -2), 1);
    });

    it("should give the same value when adding zero", function() {
      verifyAnswer(calculator.add(3, 0), 3);
    });

  });

  // ...

});

Unit Tests with Mocks

Mocks help to keep unit tests focused narrowly upon the unit under test. They are essential when units of code depend upon other callable entities. For example, mocks can be used to provide dummy test values for REST APIs instead of calling the real endpoints so that receiving code can be tested independently.

Jasmine’s out-of-the-box spies can do some mocking and spying, but they are not very powerful. For example, they don’t work when members of one module call members of another, or even when members of the same module call each other (unless they belong to the same class). It is better to use rewire for monkey-patching (mocking via member substitution) and sinon for stubbing and spying.
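For the simple case where the dependency is a method on an object the test can reach, the built-in spyOn function is enough. Below is a minimal sketch with a made-up object, just to show the style; the limitation described above is that spyOn cannot reach a free function hidden inside another module.

// A minimal sketch of Jasmine's built-in spies (the "api" object is made up)
describe("Jasmine spyOn example", function() {

  it("should stub a method on a reachable object", function() {
    let api = { fetchValue: function() { return 0; } };

    // Replace the method and give it a canned return value
    spyOn(api, 'fetchValue').and.returnValue(42);

    expect(api.fetchValue()).toBe(42);
    expect(api.fetchValue).toHaveBeenCalled();
  });

});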

The “lib/weather.js” module shows how mocking can be done with member dependencies. The WeatherCaller class’s “getForecast” method calls the “callForecast” function, which is meant to represent a service call to get live weather forecasts. The “callForecast” function returns an empty object, but the specs will “rewire” it to return dummy test values that can be used by the WeatherCaller class. Rewiring will work even though “callForecast” is not exported!

// --------------------------------------------------
// lib/weather.js
// --------------------------------------------------

function callForecast(month, day, year, zipcode) {
  return {};
}

class WeatherCaller {

  constructor() {
    this.forecasts = {};
  }

  getForecast(month, day, year, zipcode) {
    let key = `${month}/${day}/${year} for ${zipcode}`;
    if (!(key in this.forecasts)) {
      this.forecasts[key] = callForecast(month, day, year, zipcode);
    }
    return this.forecasts[key];
  }

}

module.exports = {
  WeatherCaller: WeatherCaller,
}

The tests in “spec/unit/weather.mock.spec.js” monkey-patch the “callForecast” function with a sinon stub in the beforeEach call so that each test has a fresh spy count. Note that the weather module is imported using “rewire” instead of “require” so that it can be monkey-patched. Even though the original function returns an empty object, the tests pass because the mock returns the dummy test value.

// --------------------------------------------------
// spec/unit/weather.mock.spec.js
// --------------------------------------------------

// Imports

const rewire = require('rewire');
const sinon = require('sinon');

// Rewirings

const weather = rewire('../../lib/weather');

// WeatherCaller Specs
describe("WeatherCaller Class", function() {

  // Test constants
  const dummyForecast = {"high": 42, "low": 26};

  // Test variables
  let callForecastMock;
  let weatherModuleRestore;
  let weatherCaller;

  beforeEach(function() {
    // Mock the inner function's return value using sinon
    // Do this for each test to avoid side effects of call count
    callForecastMock = sinon.stub().returns(dummyForecast);
    weatherModuleRestore = weather.__set__("callForecast", callForecastMock);

    // Construct the main caller object
    weatherCaller = new weather.WeatherCaller();
  });

  it("should be empty upon construction", function() {
    // No mocks required here
    expect(Object.keys(weatherCaller.forecasts).length).toBe(0);
  });

  it("should get a forecast for a date and a zipcode", function() {
    // This simply verifies that the return value is correct
    let forecast = weatherCaller.getForecast(12, 25, 2017, 21047);
    expect(forecast).toEqual(dummyForecast);
  });

  it("should get a fresh forecast the first time", function() {
    // The inner function should be called and the value should be cached
    // Note the sequence of assertions, which guarantee safety
    let forecast = weatherCaller.getForecast(12, 25, 2017, 21047);
    const forecastKey = "12/25/2017 for 21047";
    expect(callForecastMock.called).toBeTruthy();
    expect(Object.keys(weatherCaller.forecasts).length).toBe(1);
    expect(forecastKey in weatherCaller.forecasts).toBeTruthy();
    expect(weatherCaller.forecasts[forecastKey]).toEqual(dummyForecast);
  });

  it("should get a cached forecast the second time", function() {
    // The inner function should be called only once
    // The same object should be returned by both method calls
    let forecast1 = weatherCaller.getForecast(12, 25, 2017, 21047);
    let forecast2 = weatherCaller.getForecast(12, 25, 2017, 21047);
    expect(callForecastMock.calledOnce).toBeTruthy();
    expect(forecast1).toBe(forecast2);
  });

  it("should get and cache multiple forecasts", function() {
    // The other tests verify the mechanics of individual calls
    // This test verifies that the caller can handle multiple forecasts

    // Initial forecasts
    let forecast1 = weatherCaller.getForecast(12, 25, 2017, 27518);
    let forecast2 = weatherCaller.getForecast(12, 25, 2017, 27518);
    let forecast3 = weatherCaller.getForecast(12, 25, 2017, 21047);

    // Change forecast value
    weatherModuleRestore();
    const newForecast = {"high": 39, "low": 18}
    callForecastMock = sinon.stub().returns(newForecast);
    weatherModuleRestore = weather.__set__("callForecast", callForecastMock);

    // More forecasts
    let forecast4 = weatherCaller.getForecast(12, 26, 2017, 21047);
    let forecast5 = weatherCaller.getForecast(12, 27, 2017, 21047);

    // Assertions
    expect(Object.keys(weatherCaller.forecasts).length).toBe(4);
    expect("12/25/2017 for 27518" in weatherCaller.forecasts).toBeTruthy();
    expect("12/25/2017 for 21047" in weatherCaller.forecasts).toBeTruthy();
    expect("12/26/2017 for 21047" in weatherCaller.forecasts).toBeTruthy();
    expect("12/27/2017 for 21047" in weatherCaller.forecasts).toBeTruthy();
    expect(forecast1).toEqual(dummyForecast);
    expect(forecast2).toEqual(dummyForecast);
    expect(forecast3).toEqual(dummyForecast);
    expect(forecast4).toEqual(newForecast);
    expect(forecast5).toEqual(newForecast);
  });

  afterEach(function() {
    // Undo the monkeypatching
    weatherModuleRestore();
  });

});

Integration Tests for REST APIs

Jasmine can do black-box tests just as well as it can do white-box tests. Tests for REST API service calls are among the most common integration-level tests. There are many REST request packages for Node.js, but frisby is designed specifically for testing. Frisby even has its own expect methods (though the standard Jasmine expect and matchers may still be used).
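As a quick illustration of frisby’s own expect helpers, the sketch below checks the same Wikipedia endpoint used later in this section; returning the frisby promise lets Jasmine wait for the request to finish instead of using a done callback. Treat it as a rough sketch rather than a drop-in spec.

// Sketch: frisby's built-in expect helpers instead of Jasmine matchers
const frisby = require('frisby');

it("should return the Pikachu page summary", function() {
  return frisby
    .get("https://en.wikipedia.org/api/rest_v1/page/summary/Pikachu")
    .expect('status', 200)
    .expect('json', 'title', 'Pikachu');
});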

A best practice for black-box tests is to put config data for external dependencies into a config file. Config data for REST API calls could be URLs, usernames, and passwords. Never hard-code config data into test automation. Config files in JavaScript are simple: write a JSON file and read it during test setup using the “require” function, just like a module. The config data is automatically parsed into a JavaScript object!

Below is an example test for calling Wikipedia’s REST API. It reads the base URL from a config file and uses it in the frisby call. The config file:

// --------------------------------------------------
// spec/support/env.json
// --------------------------------------------------
{
  "integration" : {
    "wikipediaServiceBaseUrl": "https://en.wikipedia.org/api/rest_v1"
  }
}

And the spec:

// --------------------------------------------------
// spec/integration/wikipedia.service.spec.js
// --------------------------------------------------

const frisby = require('frisby');

describe("English Wikipedia REST API", function() {

  const ENV = require("../support/env.json");
  const BASE_URL = ENV.integration.wikipediaServiceBaseUrl;

  describe("GET /page/summary/{title}", function() {

    it("should return the summary for the given page title", function(done) {
      frisby
        .get(BASE_URL + "/page/summary/Pikachu")
        .then(function(response) {
          expect(response.status).toBe(200);
          expect(response.json.title).toBe("Pikachu");
          expect(response.json.pageid).toBe(269816);
          expect(response.json.extract).toContain("Pokémon");
        })
        .done(done);
    })

  });

  // ...
});

End-to-End Tests for Web UIs

Jasmine can also be used for end-to-end Web UI tests. One of the most popular packages for web browser automation is Selenium WebDriver, which uses programming calls to interact with a browser like a real user. Selenium provides a WebDriver package for JavaScript on Node.js, but it is typically better practice to use Protractor.

Protractor integrates WebDriver with JavaScript test frameworks to make it easier to use. Jasmine is the default framework for Protractor, but Mocha, Cucumber, and other JavaScript frameworks may be used instead. One of Protractor’s biggest advantages over raw WebDriver is automatic waiting: explicit calls to wait for page elements are not necessary. This is a wonderful feature that eliminates a lot of repetitive automation code. Protractor also provides tools to easily set up the Selenium Server and browsers (including mobile browsers). Even though Protractor is designed for Angular apps, it can nevertheless be used for non-Angular front-ends.

Web UI tests can be quite complicated because they cover many layers and require extra configuration. Web page interactions frequently need to be reused, too. It is a best practice to use a pattern like the Page Object Model to handle web interactions in one reusable layer. Page objects pull WebDriver locators and actions out of test fixtures (like describe/it functions) so that they may be updated more easily when changes are developed for the actual web pages. (In fact, some teams choose to co-locate page object classes with product source code for the web app so that both are updated simultaneously.) The Page Object Model is a great way to manage the inherently complicated Web automation design.
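For orientation only, below is a minimal, hypothetical sketch of a page object for Protractor. The URL, selectors, and method names are all made up; the point is the shape: locators and actions live in the page class, and specs only call its methods.

// Sketch: a hypothetical page object (Protractor provides the
// "browser", "element", and "by" globals used here)
class SearchPage {

  constructor() {
    // Locators are kept in one place (these selectors are made up)
    this.searchBox = element(by.css('input.search'));
    this.searchButton = element(by.css('button.submit'));
  }

  async load() {
    await browser.get('https://www.example.com/');
  }

  async searchFor(phrase) {
    await this.searchBox.sendKeys(phrase);
    await this.searchButton.click();
  }

}

module.exports = { SearchPage };

A spec would then construct the page object in a beforeEach call and use load and searchFor, keeping the describe/it blocks free of raw selectors.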

This guide does not provide a custom example for Protractor with Jasmine because the Protractor documentation is pretty good. It contains a decent tutorial, setup and config instructions, framework integrations, and a full reference. Furthermore, proper Protractor setup requires careful local setup with a live site to test. Please refer to the official doc for more information. Most of the examples in the doc use Jasmine.

Basic Test Execution

The simplest way to run Jasmine tests is to use the “jasmine” command. Make sure you are in the project’s root directory when running tests. Below are example invocations.

# Run all specs in the project (according to the Jasmine config)
$ jasmine

# Run a specific spec by file path
$ jasmine spec/integration/wikipedia.service.spec.js

# Run all specs that match a path pattern
# Warning: this call is NOT recursive and will not search sub-directories!
$ jasmine spec/unit/*

# Run all specs whose titles match a regex filter
# This searches both "describe" and "it" titles
$ jasmine --filter="Calculator"

# Stop testing after the first failure happens
$ jasmine --stop-on-failure=true

# Run tests in a random order
# Optionally include a seed value
$ jasmine --random=true --seed=4321

Test execution options may also be set in the Jasmine config file.
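For example, the command-line behavior above can be mirrored in spec/support/jasmine.json by adjusting the properties shown earlier (the values below are only illustrative):

{
  "spec_dir": "spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ],
  "helpers": [
    "helpers/**/*.js"
  ],
  "stopSpecOnExpectationFailure": true,
  "random": true
}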

Advanced Test Execution with Karma

Karma is a self-described “spectacular test runner for JavaScript.” Its main value is that it runs JavaScript tests in live web browsers (rather than merely on Node.js), testing actual browser compatibility. In fact, developers can keep Karma running while they develop code so they can see test results in real time as they make changes. Karma integrates with many test tools (including Istanbul for code coverage) and frameworks (including Jasmine). Karma itself runs on Node.js and is distributed as a number of packages for different browsers and frameworks. Check out this Google Testing Blog article to learn the original impetus behind developing Karma, originally called “Testacular.”

Karma and Protractor are similar in that they run tests against real web browsers, but they serve different purposes. Karma is meant for running unit tests against JavaScript code, whereas Protractor is meant for running end-to-end tests against a full, live site like a user. Karma tests go through a “back door” to exercise pieces of a site. Karma and Protractor are not meant to be used together for the same tests (see Protractor Issue #9 on GitHub). However, one project can use both tools at their appropriate test layers, as done for standard Angular testing.

This guide does not provide a custom example for Karma with Jasmine because it requires local setup with the right packages and browser versions. Karma packages are distributed through npm. Karma with Jasmine requires the main karma package, the karma-jasmine package, and a launcher package for each desired browser (like karma-chrome-launcher). There are also plenty of decent examples available online. Please refer to the official Karma documentation for more info.
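That said, for orientation only, a bare-bones karma.conf.js wiring together the packages named above might look roughly like the sketch below. The file patterns are placeholders, and a real setup will also need to deal with modules, as discussed next.

// Sketch: a bare-bones karma.conf.js for Jasmine and Chrome
// (file patterns are placeholders; module handling is not shown)
module.exports = function(config) {
  config.set({
    frameworks: ['jasmine'],
    files: [
      'lib/**/*.js',
      'spec/**/*[sS]pec.js'
    ],
    browsers: ['Chrome'],
    reporters: ['progress'],
    singleRun: true
  });
};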

Running Jasmine tests with Karma is not without its difficulties, however. One challenge is handling modules and imports. ECMAScript 6 (ES6) introduced a new syntax for modules and imports that is incompatible with the CommonJS module system (require) used by Node.js. Node.js is working on ES6-style module support, but at the time this article was written, full support was not yet available. Module imports are troublesome for Karma because Karma is launched from Node.js (which uses require) but runs the tests in a browser (which doesn’t support require). There are a few workarounds:

  • Use RequireJS to load modules.
  • Use Browserify to make require work in browsers.
  • Use rollup.js to bundle all modules into one to sidestep imports.
  • Use Angular with TypeScript, which builds and links everything automatically.

Angular Testing

Angular is a very popular front-end Web framework. It is a complete rewrite of AngularJS and is seen as an alternative to React. One of Angular’s perks is its excellent support for testing. Out of the box, new Angular projects come with config for unit testing with Jasmine/Karma and end-to-end testing with Jasmine/Protractor. It’s easy to integrate other automation tools like Istanbul code coverage or HTML reporting. Standard Angular projects using TypeScript also don’t suffer from the module import problem: imports are linked properly when TypeScript is compiled into JavaScript.

Angular unit tests are written just like any other Jasmine unit tests except for one main difference: the Angular testing utilities. These extra packages create a test environment (a “TestBed”) for testing each part of the Angular app internally and independently. Dependencies can be easily stubbed and mocked using Jasmine’s spies, with no need for sinon, because the TestBed’s dependency injection makes it easy to substitute test doubles. NgRx also provides its own testing utilities. The Angular testing utilities can seem overwhelming at first, but together with Jasmine, they make it easy to write laser-precise unit tests.

Another interesting best practice for Angular unit tests is to co-locate them with the modules they cover. For every *.js/*.ts file, there should be a *.spec.js/*.spec.ts file with the covering describe/it tests. This is not common practice for unit tests, but the Angular doc notes several advantages: tests are easy to find, missing coverage is visible at a glance, and updates are less likely to be forgotten. The automatically-generated test config has settings to search the whole project for spec files.

Angular end-to-end tests are treated differently from unit tests, however. Since they test the app as a whole, they don’t use the Angular testing utilities, and they should be located in their own directory (usually named “e2e”). Thus, Angular end-to-end tests are really no different from any other Web UI tests that use Protractor. Jasmine is the default test framework, but it may be worthwhile to switch to Cucumber.js to gain the benefits of BDD.

This guide does not provide Angular testing examples because the official Angular documentation is stellar. It contains a tutorial, a whole page on testing, and live examples of tests (linked from the testing page).

Missing Error Messages with Angular Testing

Logs are an essential part of test automation – they leave a trace of execution that is indispensable when backtracking through failures. Missing logs can make it much, much harder to figure out problems in the code. Recently, I hit this problem while writing unit tests for an Angular project: neither the console nor Google Chrome’s debugger showed any helpful error messages! Thankfully, there was a pretty easy solution. This article will explain the problem and the solution.

Update (January 18, 2018):

After further research, it appears that this problem was fixed in the @angular/cli 1.3.x release. I updated to 1.3.2, removed the “--sourcemaps=false” option, and verified that the error messages are printed. Furthermore, the source mapping is correct: the errors map to the correct line and column in the source files!

If you are stuck using a version prior to 1.3.x, then use the workaround detailed below. Otherwise, upgrade the package and avoid the problem altogether!

TL;DR

Disable source maps when running Angular tests:

$ ng test --sourcemaps=false

Angular Project Setup

This article presumes the standard Angular 4 project setup, as automatically generated by the “ng new” command. Jasmine unit tests are written in “*.spec.ts” files and run with Karma using Google Chrome as the browser.

The Problem

The Angular testing utilities provide great support for isolating and exercising parts of Angular code for unit testing. However, programmers need to use them properly, or else they won’t work. When I tried writing some unit tests for ngrx, I quickly hit dependency problems. However, it took me hours to figure it out because the console output was not helpful – all it would print was “ERROR”:

[Screenshot: console output showing nothing but “ERROR”]

As a newbie, I had no idea what went wrong. I tried debugging with Chrome, but the error message I got there was cryptic and not much more helpful:

[Screenshot: cryptic error message in the Chrome debugger]

The Solution

After googling for a while, I discovered that there is a bug with source maps in the Angular CLI (Issue #7296). The workaround is to add the “--sourcemaps=false” option to the “ng test” command. If the package.json file contains a “test” script that calls “ng test”, the option may be added there.
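For example, the scripts section of package.json could carry the workaround (a minimal sketch; other entries omitted):

{
  "scripts": {
    "test": "ng test --sourcemaps=false"
  }
}

With the option in place, the console prints error messages: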

[Screenshot: console output now printing the full error messages]

Errors also appear on the Karma page in the browser:

[Screenshot: errors displayed on the Karma page in the browser]

One side effect of this workaround, however, is that the line and column numbers don’t correctly line up to the TypeScript files. I presume that they map to the compiled JavaScript files instead. Nevertheless, error messages with wrong line numbers are better than no error messages at all. There may be a way to fix the source mapping, but that’s a problem for another day. Hopefully, the Angular team will fix this “feature” for us.

Now, time to go fix those test errors!

BDD 101: Frameworks

Every major programming language has a BDD automation framework. Some even have multiple choices. Building upon the structural basics from the previous post, this post provides a survey of the major frameworks available today. Since I cannot possibly cover every BDD framework in depth in this 101 series, my goal is to empower you, the reader, to pick the best framework for your needs. Each framework has support documentation online justifying its unique goodness and detailing how to use it, and I would prefer not to duplicate documentation. Use this post primarily as a reference. (Check the Automation Panda BDD page for the full table of contents.)

Major Frameworks

Most BDD frameworks are Cucumber versions, JBehave derivatives inspired by Dan North, or non-Gherkin spec runners. Some put behavior scenarios into separate files, while others put them directly into the source code.

C# and Microsoft .NET

SpecFlow, created by Gáspár Nagy, is arguably the most popular BDD framework for Microsoft .NET languages. Its tagline is “Cucumber for .NET”, and it is fully compliant with Gherkin. SpecFlow also has polished, well-designed hooks, context injection, and parallel execution (especially with test thread affinity). The basic package is free and open source, but SpecFlow also sells licenses for SpecFlow+ extensions. The free version requires a unit test runner like MsTest, NUnit, or xUnit.net in order to run scenarios. This makes SpecFlow flexible but also feels jury-rigged and inelegant. The licensed version provides a slick runner named SpecFlow+ Runner (which is BDD-friendly) and a Microsoft Excel integration tool named SpecFlow+ Excel. Microsoft Visual Studio has extensions for SpecFlow to make development easier.

There are plenty of other BDD frameworks for C# and .NET, too. xBehave.net is an alternative that pairs nicely with xUnit.net. A major difference of xBehave.net is that scenario steps are written directly in the code, instead of in separate text (feature) files. LightBDD bills itself as being more lightweight than other frameworks and basically does some tricks with partial classes to make the code more readable. NSpec is similar to RSpec and Mocha and uses lambda expressions heavily. Concordion offers some interesting ways to write specs, too. NBehave is a JBehave descendant, but the project appears to be dead without any updates since 2014.

Java and JVM Languages

The main Java rivalry is between Cucumber-JVM and JBehave. Cucumber-JVM is the official Cucumber version for Java and other JVM languages (Groovy, Scala, Clojure, etc.). It is fully compliant with Gherkin and generates beautiful reports. The Cucumber-JVM driver can be customized, as well. JBehave is one of the first and foremost BDD frameworks available. It was originally developed by Dan North, the “father of BDD.” However, JBehave is missing key Gherkin features like backgrounds, doc strings, and tags. It was also a pure-Java implementation before Cucumber-JVM existed. Both frameworks are widely used, have plugins for major IDEs, and distribute Maven packages. This popular but older article compares the two in slight favor of JBehave, but I think Cucumber-JVM is better, given its features and support.

The Automation Panda article Cucumber-JVM for Java is a thorough guide to the Cucumber-JVM framework.

Java also has a number of other BDD frameworks. JGiven uses a fluent API to spell out scenarios, and pretty HTML reports print the scenarios with the results. It is fairly clean and concise. Spock and JDave are spec frameworks, but JDave has been inactive for years. Scalatest for Scala also has spec-oriented features. Concordion also provides a Java implementation.

JavaScript

Almost all JavaScript BDD frameworks run on Node.js. Jasmine and Mocha are two of the most popular general-purpose JS test frameworks. They differ in that Jasmine includes many features (like assertions and spies) that Mocha does not. This makes Jasmine easier to get started with (good for beginners) and Mocha more customizable (good for power users). Both claim to be behavior-driven because they structure tests using “describe” and “it-should” phrases in the code, but they do not have the advantage of separate, reusable steps like Gherkin. Personally, I consider Jasmine and Mocha to be behavior-inspired but not fully behavior-driven.

Other BDD frameworks are more true to form. Cucumber provides Cucumber.js for Gherkin-compliant happiness. Yadda is Gherkin-like but with a more flexible syntax. Vows provides a different way to approach behavior using more formalized phrase partitions for a unique form of reusability. The Cucumber blog argues that Cucumber.js is best due to its focus on good communication through plain language steps, whereas other JavaScript BDD frameworks are more code-y. (Keep in mind, though, that Cucumber would naturally boast of its own framework.) Several other comparisons have been posted online as well.

PHP

The two major BDD frameworks for PHP are Behat and Codeception. Behat is the official Cucumber version for PHP, and as such is seen as the more “pure” BDD framework. Codeception is more programmer-focused and can handle other styles of testing. There are plenty of articles online comparing the two (although some are out of date). Both seem like good choices, but Codeception seems more flexible.

Python

Python has a plethora of test frameworks, and many are BDD. behave and lettuce are probably the two most popular players. The feature comparison is analogous to Cucumber-JVM versus JBehave: behave is practically Gherkin compliant, while lettuce lacks a few language elements. Both have plugins for major IDEs. pytest-bdd is on the rise because it integrates with all the wonderful features of pytest. radish is another framework that extends the Gherkin language to include scenario loops, scenario preconditions, and variables. All these frameworks put scenarios into separate feature files. They all also implement step definitions as functions instead of classes, which not only makes steps feel simpler and more independent, but also avoids unnecessary object construction.

Other Python frameworks exist as well. pyspecs is a spec-oriented framework. Freshen was a BDD plugin for Nose, but both Freshen and Nose are discontinued projects.

Ruby

Cucumber, the gold standard for BDD frameworks, was first implemented in Ruby. Cucumber maintains the official Gherkin language standard, and all Cucumber versions are inspired by the original Ruby version. Spinach bills itself as an enhancement to Cucumber by encapsulating steps better. RSpec is a spec-oriented framework that does not use Gherkin.

Which One is Best?

There is no right answer – the best BDD framework is the one that best fits your needs. However, there are a few points to consider when weighing your options:

  • What programming language should I use for test automation?
  • Is it a popular framework that many others use?
  • Is the framework actively supported?
  • Is the spec language compliant with Gherkin?
  • What type of testing will you do with the framework?
  • What are the limitations as compared to other frameworks?

Frameworks that separate scenario text from implementation code are best for shift-left testing. Frameworks that put scenario text directly into the source code are better for white box testing, but they may look confusing to less experienced programmers.

Personally, my favorites are SpecFlow and pytest-bdd. At LexisNexis, I used SpecFlow and Cucumber-JVM. For Python, I used behave at MaxPoint, but I have since fallen in love with pytest-bdd since it piggybacks on the wonderfulness of pytest. (I can’t wait for this open ticket to add pytest-bdd support in PyCharm.) For skill transferability, I recommend Gherkin compliance, as well.

Reference Table

The table below categorizes BDD frameworks by language and type for quick reference. It also includes frameworks in languages not described above. Recommended frameworks are denoted with an asterisk (*). Inactive projects are denoted with an X (x).

Language              Framework             Type
C                     Catch                 In-line Spec
C++                   Igloo                 In-line Spec
C# and .NET           Concordion            In-line Spec
                      LightBDD              In-line Gherkin
                      NBehave x             Separated semi-Gherkin
                      NSpec                 In-line Spec
                      SpecFlow *            Separated Gherkin
                      xBehave.net           In-line Gherkin
Golang                Ginkgo                In-line Spec
Java and JVM          Cucumber-JVM *        Separated Gherkin
                      JBehave               Separated semi-Gherkin
                      JDave x               In-line Spec
                      JGiven *              In-line Gherkin
                      Scalatest             In-line Spec
                      Spock                 In-line Spec
JavaScript            Cucumber.js *         Separated Gherkin
                      Yadda                 Separated semi-Gherkin
                      Jasmine               In-line Spec
                      Mocha                 In-line Spec
                      Vows                  In-line Spec
Perl                  Test::BDD::Cucumber   Separated Gherkin
PHP                   Behat                 Separated Gherkin
                      Codeception *         Separated or In-line
Python                behave *              Separated Gherkin
                      freshen x             Separated Gherkin
                      lettuce               Separated semi-Gherkin
                      pyspecs               In-line Spec
                      pytest-bdd *          Separated semi-Gherkin
                      radish                Separated Gherkin-plus
Ruby                  Cucumber *            Separated Gherkin
                      RSpec                 In-line Spec
                      Spinach               Separated Gherkin
Swift / Objective C   Quick                 In-line Spec

 

[4/22/2018] Update: I updated info for C# and Python frameworks.