JavaScript Testing with Jasmine

Table of Contents

  1. Introduction
  2. Setup and Installation
  3. Project Structure
  4. Unit Tests for Functions
  5. Unit Tests for Classes
  6. Unit Tests with Mocks
  7. Integration Tests for REST APIs
  8. End-to-End Tests for Web UIs
  9. Basic Test Execution
  10. Advanced Test Execution with Karma
  11. Angular Testing


Jasmine is one of the most popular JavaScript test frameworks available. Its tests are intuitively recognizable by their describe/it format. Jasmine is inspired by Behavior-Driven Development and comes with many basic features out-of-the-box. While Jasmine is best known for testing JavaScript on web front-ends and Node.js back-ends, it also provides runner packages for Python and Ruby projects. Jasmine likewise works with languages that compile to JavaScript, such as TypeScript and CoffeeScript.

This guide shows how to write tests in JavaScript on Node.js using Jasmine. It uses the jasmine-node-js-example project (hosted on GitHub). Content includes:

  • Basic white-box unit tests
  • REST API integration tests with frisby
  • Web UI end-to-end tests with Protractor
  • Spying with sinon
  • Monkeypatching with rewire
  • Handling config data with JSON files
  • Advanced execution features with Karma
  • Special considerations for Angular projects

The Jasmine API Reference is also indispensable when writing tests.

Setup and Installation

The official Jasmine Node.js Setup Guide explains how to set up and install Jasmine. Jasmine tests may be added to an existing project or to an entirely new project. As a prerequisite, Node.js must already be installed. Use the following commands to set things up.

# Initialize a new project (if necessary)
# This will create the package.json file
$ mkdir [project-name]
$ cd [project-name]
$ npm init

# Install Jasmine locally for the project and globally for the CLI
$ npm install jasmine
$ npm install -g jasmine

# Create a spec directory with configuration file for Jasmine
$ jasmine init

# Optional: Install official Jasmine examples
# Do this only for self-education in a separate project
$ jasmine examples

The code used by this guide is available on GitHub at jasmine-node-js-example. Feel free to clone this repository to try things out yourself!

Recommended editors and IDEs include Visual Studio Code with the Jasmine Snippets extensions, Atom, and JetBrains WebStorm.

Project Structure

Jasmine does not require the project to have a specific directory layout, but it does use a configuration file to specify where to find tests. The default, conventional project structure created by “jasmine init” puts all Jasmine code into a “spec” directory, which contains “*spec.js” files for tests, helpers that run before specs, and a support directory for config. The JASMINE_CONFIG_PATH environment variable can be set to change the config file used. (The default config file is spec/support/jasmine.json.)

|-- [product source code]
|-- spec
|   |-- [spec sub-directory]
|   |   `-- *spec.js
|   |-- helpers
|   |   `-- [helper sub-directory]
|   `-- support
|       `-- jasmine.json
`-- package.json

This structure may be changed using the “spec_dir”, “spec_files”, and “helpers” properties in the config file. For example, it may be useful to add more than one level of directories to the hierarchy. However, it is typically best to leave the conventional directory layout in place. The default config values as of Jasmine 2.8 are below.

  "spec_dir": "spec",
  "spec_files": [
  "helpers": [
  "stopSpecOnExpectationFailure": false,
  "random": false

It is also a best practice to separate tests between different levels of the Testing Pyramid. The example project has spec subdirectories for unit, integration, and end-to-end tests. Directory-level organization makes it easy to filter tests by level when executed.
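For example, a separate config file per test level (a hypothetical sketch below; the file name and patterns are illustrative) can be selected at run time with the JASMINE_CONFIG_PATH environment variable:

// --------------------------------------------------
// spec/support/jasmine-unit.json (hypothetical)
// --------------------------------------------------

{
  "spec_dir": "spec/unit",
  "spec_files": ["**/*[sS]pec.js"],
  "stopSpecOnExpectationFailure": false,
  "random": false
}

# Run only the unit-level specs using the alternate config
$ JASMINE_CONFIG_PATH=spec/support/jasmine-unit.json jasmine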

Unit Tests for Functions

The most basic unit of code to be tested in JavaScript is a function. The “lib/calculator.functions.js” module contains some basic math functions for easy testing.

// --------------------------------------------------
// lib/calculator.functions.js
// --------------------------------------------------

// Calculator Functions

function add(a, b) {
    return a + b;
}

function subtract(a, b) {
    return a - b;
}

function multiply(a, b) {
    return a * b;
}

function divide(a, b) {
    let value = a * 1.0 / b;
    if (!isFinite(value))
        throw new RangeError('Divide-by-zero');
    return value;
}

function maximum(a, b) {
    return (a >= b) ? a : b;
}

function minimum(a, b) {
    return (a <= b) ? a : b;
}

// Module Exports

module.exports = {
    add: add,
    subtract: subtract,
    multiply: multiply,
    divide: divide,
    maximum: maximum,
    minimum: minimum,
};

Its tests are in “spec/unit/calculator.function.spec.js”. Below is a snippet showing simple tests for the “add” function. A describe block groups a “suite” of specs together. Each it block is an individual spec (or test). Titles for specs are often written as what the spec should do. Describe blocks may be nested for hierarchical grouping, but it blocks (being bottom-level) may not. Assertions are made using Jasmine’s fluent-like expect and matcher methods. Since the functions are stateless, no setup or cleanup is needed. Tests for other math functions are similar.

// --------------------------------------------------
// spec/unit/calculator.function.spec.js
// --------------------------------------------------

const calc = require('../../lib/calculator.functions');

describe("Calculator Functions", function() {

  describe("add", function() {

    it("should add two positive numbers", function() {
      let value = calc.add(3, 2);
      expect(value).toBe(5);
    });

    it("should add a positive and a negative number", function() {
      let value = calc.add(3, -2);
      expect(value).toBe(1);
    });

    it("should give the same value when adding zero", function() {
      let value = calc.add(3, 0);
      expect(value).toBe(3);
    });

  });

  // ...

});


The divide-by-zero test for the “divide” function is special because it must verify that an exception is thrown. The divide call is wrapped in a function so that it may be passed into the expect call.

  describe("divide", function() {

    // ...

    it("should throw an exception when dividing by zero", function() {
      let divideByZero = function() { calc.divide(3, 0); };
      expect(divideByZero).toThrowError(RangeError, 'Divide-by-zero');

    // ...


The “maximum” and “minimum” functions have parametrized tests using the Array class’s forEach method. This is a nifty trick for hitting multiple input sets without duplicating code or combining specs. Note that the spec titles are also parametrized. Tests for “maximum” are shown below.

  describe("maximum", function() {

      [1, 2, 2],
      [2, 1, 2],
      [2, 2, 2],
    ].forEach(([a, b, expected]) => {
      it(`should return ${expected} when given ${a} and ${b}`, () => {
        let value = calc.maximum(a, b);


Unit Tests for Classes

Jasmine can also test classes. When testing classes, setup and cleanup routines become more helpful. The Calculator class in the “lib/calculator.class.js” module calls the math functions and caches the last answer.

// --------------------------------------------------
// lib/calculator.class.js
// --------------------------------------------------

// Imports

const calcFunc = require('./calculator.functions');

// Calculator Class

class Calculator {

  constructor() {
      this.last_answer = 0;
  }

  do_math(a, b, func) {
      return (this.last_answer = func(a, b));
  }

  add(a, b) {
      return this.do_math(a, b, calcFunc.add);
  }

  subtract(a, b) {
      return this.do_math(a, b, calcFunc.subtract);
  }

  multiply(a, b) {
      return this.do_math(a, b, calcFunc.multiply);
  }

  divide(a, b) {
      return this.do_math(a, b, calcFunc.divide);
  }

  maximum(a, b) {
      return this.do_math(a, b, calcFunc.maximum);
  }

  minimum(a, b) {
      return this.do_math(a, b, calcFunc.minimum);
  }

}

// Module Exports

module.exports = {
  Calculator: Calculator,
};

The Jasmine specs in “spec/unit/calculator.class.spec.js” are very similar but now call the beforeEach method to construct the Calculator object before each scenario. (Jasmine also has methods for afterEach, beforeAll, and afterAll.) The verifyAnswer helper function also makes assertions easier. The addition tests are shown below.

// --------------------------------------------------
// spec/unit/calculator.class.spec.js
// --------------------------------------------------

const calc = require('../../lib/calculator.class');

describe("Calculator Class", function() {

  let calculator;

  beforeEach(function() {
    calculator = new calc.Calculator();
  });

  function verifyAnswer(actual, expected) {
    expect(actual).toBe(expected);
    expect(calculator.last_answer).toBe(expected);
  }

  describe("add", function() {

    it("should add two positive numbers", function() {
      verifyAnswer(calculator.add(3, 2), 5);
    });

    it("should add a positive and a negative number", function() {
      verifyAnswer(calculator.add(3, -2), 1);
    });

    it("should give the same value when adding zero", function() {
      verifyAnswer(calculator.add(3, 0), 3);
    });

  });

  // ...

});


Unit Tests with Mocks

Mocks help to keep unit tests focused narrowly upon the unit under test. They are essential when units of code depend upon other callable entities. For example, mocks can be used to provide dummy test values for REST APIs instead of calling the real endpoints so that receiving code can be tested independently.

Jasmine’s out-of-the-box spies can do some mocking and spying, but they are not very powerful. For example, they don’t work when members of one module call members of another, or even when members of the same module call each other (unless they are within the same class). It is better to use rewire for monkey-patching (mocking via member substitution) and sinon for stubbing and spying.

The “lib/weather.js” module shows how mocking can be done with member dependencies. The WeatherCaller class’s “getForecast” method calls the “callForecast” function, which is meant to represent a service call to get live weather forecasts. The “callForecast” function returns an empty object, but the specs will “rewire” it to return dummy test values that can be used by the WeatherCaller class. Rewiring will work even though “callForecast” is not exported!

// --------------------------------------------------
// lib/weather.js
// --------------------------------------------------

function callForecast(month, day, year, zipcode) {
  return {};
}

class WeatherCaller {

  constructor() {
    this.forecasts = {};
  }

  getForecast(month, day, year, zipcode) {
    let key = `${month}/${day}/${year} for ${zipcode}`;
    if (!(key in this.forecasts)) {
      this.forecasts[key] = callForecast(month, day, year, zipcode);
    }
    return this.forecasts[key];
  }

}

module.exports = {
  WeatherCaller: WeatherCaller,
};

The tests in “spec/unit/weather.mock.spec.js” monkey-patch the “callForecast” function with a sinon stub in the beforeEach call so that each test has a fresh spy count. Note that the weather module is imported using “rewire” instead of “require” so that it can be monkey-patched. Even though the original function returns an empty object, the tests pass because the mock returns the dummy test value.

// --------------------------------------------------
// spec/unit/weather.mock.spec.js
// --------------------------------------------------

// Imports

const rewire = require('rewire');
const sinon = require('sinon');

// Rewirings

const weather = rewire('../../lib/weather');

// WeatherCaller Specs
describe("WeatherCaller Class", function() {

  // Test constants
  const dummyForecast = {"high": 42, "low": 26};

  // Test variables
  let callForecastMock;
  let weatherModuleRestore;
  let weatherCaller;

  beforeEach(function() {
    // Mock the inner function's return value using sinon
    // Do this for each test to avoid side effects of call count
    callForecastMock = sinon.stub().returns(dummyForecast);
    weatherModuleRestore = weather.__set__("callForecast", callForecastMock);

    // Construct the main caller object
    weatherCaller = new weather.WeatherCaller();

  it("should be empty upon construction", function() {
    // No mocks required here

  it("should get a forecast for a date and a zipcode", function() {
    // This simply verifies that the return value is correct
    let forecast = weatherCaller.getForecast(12, 25, 2017, 21047);

  it("should get a fresh forecast the first time", function() {
    // The inner function should be called and the value should be cached
    // Note the sequence of assertions, which guarantee safety
    let forecast = weatherCaller.getForecast(12, 25, 2017, 21047);
    const forecastKey = "12/25/2017 for 21047";
    expect(forecastKey in weatherCaller.forecasts).toBeTruthy();

  it("should get a cached forecast the second time", function() {
    // The inner function should be called only once
    // The same object should be returned by both method calls
    let forecast1 = weatherCaller.getForecast(12, 25, 2017, 21047);
    let forecast2 = weatherCaller.getForecast(12, 25, 2017, 21047);

  it("should get and cache multiple forecasts", function() {
    // The other tests verify the mechanics of individual calls
    // This test verifies that the caller can handle multiple forecasts

    // Initial forecasts
    let forecast1 = weatherCaller.getForecast(12, 25, 2017, 27518);
    let forecast2 = weatherCaller.getForecast(12, 25, 2017, 27518);
    let forecast3 = weatherCaller.getForecast(12, 25, 2017, 21047);

    // Change forecast value
    const newForecast = {"high": 39, "low": 18}
    callForecastMock = sinon.stub().returns(newForecast);
    weatherModuleRestore = weather.__set__("callForecast", callForecastMock);

    // More forecasts
    let forecast4 = weatherCaller.getForecast(12, 26, 2017, 21047);
    let forecast5 = weatherCaller.getForecast(12, 27, 2017, 21047);

    // Assertions
    expect("12/25/2017 for 27518" in weatherCaller.forecasts).toBeTruthy();
    expect("12/25/2017 for 21047" in weatherCaller.forecasts).toBeTruthy();
    expect("12/26/2017 for 21047" in weatherCaller.forecasts).toBeTruthy();
    expect("12/27/2017 for 21047" in weatherCaller.forecasts).toBeTruthy();

  afterEach(function() {
    // Undo the monkeypatching


Integration Tests for REST APIs

Jasmine can do black-box tests just as well as it can do white-box tests. Testing REST API service calls is one of the most common kinds of integration-level testing. There are many REST request packages for Node.js, but frisby is designed specifically for testing. Frisby even has its own expect methods (though the standard Jasmine expect and matchers may still be used).

A best practice for black-box tests is to put config data for external dependencies into a config file. Config data for REST API calls could be URLs, usernames, and passwords. Never hard-code config data into test automation. JavaScript config files are super simple: just write a JSON file and read it during test setup using the “require” function, just like any module. The config data will be automatically parsed as a JavaScript object!

Below is an example test for calling Wikipedia’s REST API. It reads the base URL from a config file and uses it in the frisby call. The config file:

// --------------------------------------------------
// spec/support/env.json
// --------------------------------------------------
  "integration" : {
    "wikipediaServiceBaseUrl": ""

And the spec:

// --------------------------------------------------
// spec/integration/wikipedia.service.spec.js
// --------------------------------------------------

const frisby = require('frisby');

describe("English Wikipedia REST API", function() {

  const ENV = require("../support/env.json");
  const BASE_URL = ENV.integration.wikipediaServiceBaseUrl;

  describe("GET /page/summary/{title}", function() {

    it("should return the summary for the given page title", function(done) {
      frisby
        .get(BASE_URL + "/page/summary/Pikachu")
        .then(function(response) {
          expect(response.status).toBe(200);
          expect(response.json.title).toBe("Pikachu");
        })
        .done(done);
    });

  });

  // ...

});

End-to-End Tests for Web UIs

Jasmine can also be used for end-to-end Web UI tests. One of the most popular packages for web browser automation is Selenium WebDriver, which uses programming calls to interact with a browser the way a real user would. Selenium provides a WebDriver package for JavaScript on Node.js, but it is typically a better practice to use Protractor.

Protractor integrates WebDriver with JavaScript test frameworks to make it easier to use. Jasmine is the default framework for Protractor, but Mocha, Cucumber, and any other JavaScript framework could be used. One of the biggest advantages Protractor has over WebDriver by itself is automatic waiting: explicit calls to wait for page elements are not necessary. This is a wonderful feature that eliminates a lot of repetitive automation code. Protractor also provides tools to easily set up the Selenium Server and browsers (including mobile browsers). Even though Protractor is designed for Angular apps, it can nevertheless be used for non-Angular front-ends.

Web UI tests can be quite complicated because they cover many layers and require extra configuration. Web page interactions frequently need to be reused, too. It is a best practice to use a pattern like the Page Object Model to handle web interactions in one reusable layer. Page objects pull WebDriver locators and actions out of test fixtures (like describe/it functions) so that they may be updated more easily when changes are developed for the actual web pages. (In fact, some teams choose to co-locate page object classes with product source code for the web app so that both are updated simultaneously.) The Page Object Model is a great way to manage the inherently complicated Web automation design.

This guide does not provide a custom example for Protractor with Jasmine because the Protractor documentation is pretty good. It contains a decent tutorial, setup and config instructions, framework integrations, and a full reference. Furthermore, proper Protractor setup requires careful local setup with a live site to test. Please refer to the official doc for more information. Most of the examples in the doc use Jasmine.
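Even so, a rough sketch may help set expectations. The spec below is hypothetical (the URL, locators, and binding are made up) and is not part of the example project; Protractor supplies the global browser, element, and by objects:

// --------------------------------------------------
// Hypothetical Protractor + Jasmine spec (sketch only)
// --------------------------------------------------

describe("Example Angular page", function() {

  beforeEach(function() {
    // Protractor's global "browser" object drives the real web browser
    browser.get("https://example.com/");   // assumed URL
  });

  it("should greet the user by name", function() {
    // Automatic waiting: no explicit waits are needed for Angular pages
    element(by.model("yourName")).sendKeys("Panda");        // assumed locator
    expect(element(by.binding("yourName")).getText())
      .toContain("Panda");                                  // assumed binding
  });

});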

Basic Test Execution

The simplest way to run Jasmine tests is to use the “jasmine” command. Make sure you are in the project’s root directory when running tests. Below are example invocations.

# Run all specs in the project (according to the Jasmine config)
$ jasmine

# Run a specific spec by file path
$ jasmine spec/integration/wikipedia.service.spec.js

# Run all specs that match a path pattern
# Warning: this call is NOT recursive and will not search sub-directories!
$ jasmine spec/unit/*

# Run all specs whose titles match a regex filter
# This searches both "describe" and "it" titles
$ jasmine --filter="Calculator"

# Stop testing after the first failure happens
$ jasmine --stop-on-failure=true

# Run tests in a random order
# Optionally include a seed value
$ jasmine --random=true --seed=4321

Test execution options may also be set in the Jasmine config file.
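For instance, the command-line options above have config-file equivalents (a sketch based on the default spec/support/jasmine.json; adjust values as needed):

// --------------------------------------------------
// spec/support/jasmine.json (execution options)
// --------------------------------------------------

{
  "spec_dir": "spec",
  "spec_files": ["**/*[sS]pec.js"],
  "helpers": ["helpers/**/*.js"],
  "stopSpecOnExpectationFailure": true,
  "random": true
}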

Advanced Test Execution with Karma

Karma is a self-described “spectacular test runner for JavaScript.” Its main value is that it runs JavaScript tests in live web browsers (rather than merely on Node.js), testing actual browser compatibility. In fact, developers can keep Karma running while they develop code so they can see test results in real time as they make changes. Karma integrates with many test tools (including Istanbul for code coverage) and frameworks (including Jasmine). Karma itself runs on Node.js and is distributed as a number of packages for different browsers and frameworks. Check out this Google Testing Blog article to learn the original impetus behind developing Karma, originally called “Testacular.”

Karma and Protractor are similar in that they run tests against real web browsers, but they serve different purposes. Karma is meant for running unit tests against JavaScript code, whereas Protractor is meant for running end-to-end tests against a full, live site like a user. Karma tests go through a “back door” to exercise pieces of a site. Karma and Protractor are not meant to be used together for the same tests (see Protractor Issue #9 on GitHub). However, one project can use both tools at their appropriate test layers, as done for standard Angular testing.

This guide does not provide a custom example for Karma with Jasmine because it requires local setup with the right packages and browser versions. Karma packages are distributed through npm. Karma with Jasmine requires the main karma package, the karma-jasmine package, and a launcher package for each desired browser (like karma-chrome-launcher). Plenty of decent examples are also available online. Please refer to the official Karma documentation for more info.
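As a rough sketch only (the file patterns and browser choice are assumptions), a minimal Karma config for Jasmine might look like this:

// --------------------------------------------------
// karma.conf.js (minimal sketch)
// --------------------------------------------------

module.exports = function(config) {
  config.set({
    frameworks: ['jasmine'],           // provided by the karma-jasmine package
    files: ['spec/**/*[sS]pec.js'],    // assumed location of browser-ready specs
    browsers: ['Chrome'],              // requires karma-chrome-launcher
    singleRun: true                    // run once and exit (useful for CI)
  });
};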

Running Jasmine tests with Karma is not without its difficulties, however. One challenge is handling modules and imports. ECMAScript 6 (ES6) introduced a new module syntax that is incompatible with the CommonJS module system (require) used by Node.js. Node.js is working on ES6-style module support, but at the time this article was written, full support was not yet available. Module imports are troublesome for Karma because Karma is launched from Node.js (which uses require) but runs the tests in a browser (which does not support require). There are a few workarounds:

  • Use RequireJS to load modules.
  • Use Browserify to make require work in browsers.
  • Use rollup.js to bundle all modules into one to sidestep imports.
  • Use Angular with TypeScript, which builds and links everything automatically.

Angular Testing

Angular is a very popular front-end Web framework. It is a complete rewrite of AngularJS and is seen as an alternative to React. One of Angular’s perks is its excellent support for testing. Out of the box, new Angular projects come with config for unit testing with Jasmine/Karma and end-to-end testing with Jasmine/Protractor. It’s easy to integrate other automation tools like Istanbul code coverage or HTML reporting. Standard Angular projects using TypeScript also don’t suffer from the module import problem: imports are linked properly when TypeScript is compiled into JavaScript.

Angular unit tests are written just like any other Jasmine unit tests except for one main difference: the Angular testing utilities. These extra packages create a test environment (a “TestBed”) for testing each part of the Angular app internally and independently. Dependencies can easily be stubbed and mocked with Jasmine’s spies; there is no need for sinon because the TestBed injects dependencies that tests can swap out directly. NGRX also provides extended test utilities. The Angular testing utilities can seem overwhelming at first, but together with Jasmine, they make it easy to write laser-precise unit tests.

Another interesting best practice for Angular unit tests is to co-locate them with the modules they cover. For every *.js/*.ts file, there should be a *.spec.js/*.spec.ts file with the covering describe/it tests. This is not common practice for unit tests, but the Angular doc notes many advantages: tests are easy to find, coverage is visible at a glance, and updates are less likely to be forgotten. The automatically-generated test config has settings to search the whole project for spec files.

Angular end-to-end tests are treated differently from unit tests, however. Since they test the app as a whole, they don’t use the Angular testing utilities, and they should be located in their own directory (usually named “e2e”). Thus, Angular end-to-end tests are really no different than any other Web UI tests that use Protractor. Jasmine is the default test framework, but it may be worthwhile to switch to Cucumber.js for the advantages of BDD.

This guide does not provide Angular testing examples because the official Angular documentation is stellar. It contains a tutorial, a whole page on testing, and live examples of tests (linked from the testing page).

To Infinity and Beyond: A Guide to Parallel Testing

Are your automated tests running in parallel? If not, then they probably should be. Together with continuous integration, parallel testing is the best way to fail fast during software development and ultimately enforce higher software quality. Switching tests from serial to parallel execution, however, is not a simple task. Tests themselves must be designed to run concurrently without colliding, and extra tools and systems are needed to handle the extra stress. This article is a high-level guide to good parallel testing practices.

What is Parallel Testing?

Parallel testing means running multiple automated tests simultaneously to shorten the overall start-to-end runtime of a test suite. For example, if 10 tests take a total of 10 minutes to run, then 2 parallel processes could execute 5 tests each and cut the total runtime down to 5 minutes. Even better, 10 processes could execute 1 test each to shrink runtime to 1 minute. Parallel testing is usually managed by either a test framework or a continuous integration tool. It also requires more compute resources than serial testing.

Why Go Parallel?

Running automated tests in parallel does require more effort (and potentially cost) than running tests serially. So, why go through the trouble?

The answer is simple: time. It is well documented that software bugs cost more when they are discovered later. That’s why current development practices like Agile and BDD strive to avoid problems from the start through small iterations and healthy collaboration (“shift left”), while CI/CD defensively catches regressions as soon as they happen (“fail fast”). Reducing the time to discover a problem after it has been introduced means higher quality and higher productivity.

Ideally, a developer should be told if a code change is good or bad immediately after committing it. The change should automatically trigger a new build that runs all tests. Unfortunately, tests are not instantaneous – they could take minutes, hours, or even days to complete. A test automation strategy based on the Testing Pyramid will certainly shorten start-to-end execution time but likely still require parallelization. Consider the layers of the Testing Pyramid and their tests’ average runtimes, the Testing Pyramid Rule of 1’s:

The Testing Pyramid with Times: roughly 1 millisecond per unit test, 1 second per integration test, and 1 minute per end-to-end test.

Each layer is listed above with the rough runtime of a typical test. Though actual runtimes will vary, the Rule of 1’s focuses on orders of magnitude. Unit tests typically run in milliseconds because they often exercise product code in memory. Integration tests exercise live products but are limited in scope and often cover low-level areas (like REST service calls). End-to-end tests, however, cover full paths through a live system, which requires extra setup and waiting (like Selenium WebDriver interaction).

Now, consider how many tests from each layer could be run within given time limits, if the tests are run serially:

Test Layer    | 1 Minute | 10 Minutes (Coffee Break) | 1 Hour (There Goes Today)
Unit          | 60,000   | 600,000                   | 3,600,000
Integration   | 60       | 600                       | 3,600
End-to-End    | 1        | 10                        | 60

Unit test numbers look pretty good, though keep in mind 1 millisecond is often the best-case runtime for a unit test. Integration and end-to-end runtimes, however, pose a more pressing problem. It is not uncommon for a project to have thousands of above-unit tests, yet not even a hundred end-to-end tests could complete within an hour, nor could a thousand integration tests complete within 10 minutes. Now, consider two more facts: (1) tests often run as different phases in a CI pipeline, so total runtimes are stacked, and (2) multiple commits would trigger multiple builds, which could cause a serious backup. Serial test execution would starve engineering feedback in any continuous integration system of scale. A team would need to drastically shrink test coverage or give up on being truly “continuous” in favor of running tests daily or weekly. Neither alternative is acceptable these days. CI needs parallel testing to be truly continuous.

The Danger of Collisions

The biggest danger for parallel testing is collision – when tests interfere with each other, causing invalid test failures. Collisions may happen in the product under test if product state is manipulated by more than one test at a time, or they may happen in the automation code itself if the code is not thread-safe. Collisions are also inherently intermittent, which makes them all the more difficult to diagnose. As a design principle, automated tests must avoid collisions for correct parallel execution.

Making tests run in parallel is not as simple as flipping a switch or adding a new config file. Automated tests must be specifically designed to run in parallel. A team may need to significantly redevelop their automation code to make parallel execution work right.


A train collision in Iran in November 2016. Don’t let this happen to your tests!

Handling Product-Level Collisions

Product-level collisions essentially reduce to how environments are set up and handled.

Separate Environments

The most basic way to avoid product-level collisions would be to run each test thread or process against its own instance of the product in an exclusive environment. (In the most extreme case, every single test could have its own product instance.) No collisions would happen in the product because each product instance would be touched by only one test instance at a time. Separate environments are possible to implement using various configuration and deployment tools. Docker containers are quick and easy to spin up. VMs with Vagrant, Puppet, Chef, and/or Ansible can also get it done.

However, it may not always be sensible to make separate environments for each test thread/process:

  • Creating a new environment is inefficient – it takes extra time to set up that may cancel out any time saved from parallel execution.
  • Many projects simply don’t have the money or the compute resources to handle a massive scale-out.
  • Some tests may not cause collisions and therefore may not need total isolation.
  • Some product environments are extremely large and complicated and would not be practical to replicate for each test individually.

Shared Environments

Environments with a shared product instance are quite common. One could be a common environment that everyone on a team shares, or one could be freshly created during a CI run and accessed by multiple test threads/processes. Either way, product-level collisions are possible, and tests must be designed to avoid clashing product states. Any test covering a persistent state is vulnerable; usually, this is the vast majority of tests. Consider web app testing as an example. Tests to load a page and do some basic interactions can probably run in parallel without extra protection, but tests that use a login to enter data or change settings could certainly collide. In this case, collisions could be avoided by using different logins for each simultaneous test instance – by using either a pool of logins, a unique login per test case, or a unique login per thread/process. Each product is different and will require different strategies for avoiding collisions.


We all share certain environments. Take care of them when you do. (Photo: The Blue Marble, taken by the Apollo 17 crew on Dec 7, 1972)

Handling Automation-Level Collisions

Automation-level collisions can happen when automation code is not thread-safe, and thread safety involves more than simply locks and semaphores.

#1: Test Independence

Test cases must be completely independent of each other. One test must not require another test to run before it for the sake of setup. A test case should be able to run by itself without any others. A test suite should be able to run successfully in random order.

#2: Proper Variable Scope

If parallel tests will be run in the same memory address space, then it is imperative to properly scope all variables. Global or static mutable variables (i.e., “non-constants”) must not be allowed because they could be changed unexpectedly. The best pattern for handling scope is dependency injection. Thread-safe singletons would be a second choice. (Typically, global or static variables are used to subvert design patterns, so they may reveal further necessary automation rework when discovered.)
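As a small illustration (the class and values here are hypothetical), injected dependencies keep state scoped to each test instead of to the whole process:

// Risky: a module-level mutable variable that any parallel test could change
// let apiBaseUrl = "http://test-env.example.com";

// Safer: inject the value so each test owns its instance
class ApiClient {
  constructor(baseUrl) {
    this.baseUrl = baseUrl;   // instance state, never shared across tests
  }
  urlFor(path) {
    return this.baseUrl + path;
  }
}

describe("ApiClient scoping", function() {
  let client;

  beforeEach(function() {
    // A fresh, independently-scoped client for every test
    client = new ApiClient("http://test-env.example.com");
  });

  it("builds URLs from the injected base", function() {
    expect(client.urlFor("/status")).toBe("http://test-env.example.com/status");
  });
});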

#3: External Resources

Automation may sometimes interact with external resources, such as test config files or test result databases/services. Make sure no external interactions collide. For example, make sure test run updates don’t overwrite each other.

#4: Logging

Logs are very difficult to trace when multiple tests are simultaneously printed to the same file. The best practice is to generate separate log files for each test case, thread, or process to make them readable.

#5: Result Aggregation

A test suite is a unified collection of tests, no matter how many threads/processes are used to run its tests in parallel. Make sure test results are aggregated together into one report. Some frameworks will do this automatically, while others will require custom post-processing.

#6: Test Filtering

One strategy to avoid collisions may be to run non-colliding partitions (subsets) of tests in parallel. Test tagging and filtering would make this possible. For example, tests that require a special login could be tagged as such and run together on one thread.

Test Scalability

The previous section on collisions discussed how to handle product environments. It is also important to consider how to handle the test automation environment. These are two different things: the product environment contains the live product under test, while the test environment contains the automation software and resources that run tests against the product. The test environment is where the parallel tests will be executed, and, as such, it must be scalable to handle the parallelization. A common example of a test environment could be a Jenkins master with a few agents for running build pipelines. There are two primary ways to scale the test environment: scale-up and scale-out.

Parallel Scale-Up

Scale-up is when one machine is configured to handle more tests in parallel. For example, scale-up would be when a machine switches from one (serial) thread to two, three, or even more in parallel. Many popular test runners support this type of scale-up by spawning and joining threads in a common memory address space or by forking processes. (For example, the SpecFlow+ Runner lets you choose.)

Scale-up is a simple way to squeeze as much utility out of an existing machine as possible. If tests are designed to handle collisions, and the test runner has out-of-the-box support, then it’s usually pretty easy to add more test threads/processes. However, parallel test scale-up is inherently limited by the machine’s capacity. Each additional test process succumbs to the law of diminishing returns as more memory and processor cycles are used. Eventually, adding more threads will actually slow down test execution because the processor(s) will waste time constantly switching between tests. (Anecdotally, I found the optimal test-thread-to-processor ratio to be 2-to-1 for running C#/SpecFlow/Selenium-WebDriver tests on Amazon EC2 M4 instances.) A machine itself could be upgraded with more threads and processors, but nevertheless, there are limits to a single machine’s maximum capacity. Weird problems like TCP/IP port exhaustion may also arise.


Scale-up adds more threads to one machine.

Parallel Scale-Out

Scale-out is when multiple machines are configured to run tests in parallel. Whereas scale-up had one machine running multiple tests, scale-out has multiple machines each running tests. Scale-out can be achieved in a number of ways. A few examples are:

  • One master test execution machine launches multiple Web UI tests that each use a remote Selenium WebDriver with a service like Selenium Grid, Sauce Labs, or BrowserStack.
  • A Jenkins pipeline launches tests across ten agents in parallel, in which each agent executes a tenth of the tests independently.

Scale-out is a better long-term solution than scale-up because scale-out can handle an unlimited number of machines for parallel testing. The limiting factor with scale-out is not the maximum capacity of the hardware but rather the cost of running more machines. However, scale-out is much harder to implement than scale-up. It requires tests to be evenly divided with some sort of balancer and filter. It also requires some sort of test result aggregation for joint reporting – people won’t want to piece together a bunch of separate reports to get an overall snapshot of quality. Plus, the test environment is more complicated to build and maintain (though tools like CloudBees Jenkins Enterprise or Amazon EC2 can make it easier.)


Scale-out distributes tests across multiple machines.

Upwards and Outwards

Of course, scale-up and scale-out are not mutually exclusive. Scaled-out nodes could individually be scaled-up. Consider a test environment with 10 powerful VMs that could each handle 10 tests in parallel – that means 100 tests could run simultaneously. Using the Rule of 1’s, it would take only about a minute to run 100 Web UI tests, which serially would have taken over an hour and a half! Use both strategies to shorten start-to-end runtime as much as possible.


Parallel testing is a worthwhile endeavor. When done properly, it will not only reduce development time but also improve the development experience. For readers who want to start doing parallel testing, I recommend researching the tools and frameworks you want to use. Many popular test frameworks support parallel execution, and even if the one you choose doesn’t, you can always invoke tests in parallel from the command line. Do well!

Missing Error Messages with Angular Testing

Logs are an essential part of test automation – they leave a trace of execution that is indispensable when backtracking through failures. Missing logs can make it much, much harder to figure out problems in the code. Recently, I hit this problem while writing unit tests for an Angular project: neither the console nor Google Chrome’s debugger showed any helpful error messages! Thankfully, there was a pretty easy solution. This article will explain the problem and the solution.

Update (January 18, 2018):

After further research, it appears that this problem was fixed in the @angular/cli 1.3.x release. I updated to 1.3.2, removed the “--sourcemaps=false” option, and verified that the error messages are printed. Furthermore, the source mapping is correct – the errors map to the correct line and column in the source files!

If you are stuck using a version prior to 1.3.x, then use the workaround detailed below. Otherwise, upgrade the package and avoid the problem altogether!


Disable source maps when running Angular tests:

$ ng test --sourcemaps=false

Angular Project Setup

This article presumes the standard Angular 4 project setup, as automatically generated by the “ng new” command. Jasmine unit tests are written in “*.spec.ts” files and run with Karma using Google Chrome as the browser.

The Problem

The Angular testing utilities provide great support for isolating and exercising parts of Angular code for unit testing. However, programmers need to use them properly, or else they won’t work. When I tried writing some unit tests for ngrx, I quickly hit dependency problems. However, it took me hours to figure it out because the console output was not helpful – all it would print was “ERROR”:

Angular Test Errors 1

As a newbie, I had no idea what went wrong. I tried debugging with Chrome, but the error message I got there was cryptic and not much more helpful:

Angular Test Errors 2

The Solution

After googling for a while, I discovered that there is a bug with source maps in the Angular CLI (Issue #7296). The workaround is to add the “--sourcemaps=false” option to the “ng test” command. If the package.json file contains a “test” script that calls “ng test”, the option may be added there. Now, the console prints error messages:

Angular Test Errors 3

Errors also appear on the Karma page in the browser:

Angular Test Errors 4
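
For reference, the workaround can be baked into the package.json test script (snippet only; the rest of the file is omitted):

{
  "scripts": {
    "test": "ng test --sourcemaps=false"
  }
}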

One side effect of this workaround, however, is that the line and column numbers don’t correctly line up to the TypeScript files. I presume that they map to the compiled JavaScript files instead. Nevertheless, error messages with wrong line numbers are better than no error messages at all. There may be a way to fix the source mapping, but that’s a problem for another day. Hopefully, the Angular team will fix this “feature” for us.

Now, time to go fix those test errors!

Please Hang Up and Dial Again: Handling Test Interruptions in CI/CD

This post was originally published by Sealights on December 19, 2017 as part of their article Test Quality in CI/CD – Expert Roundup. I was honored to contribute my thoughts on automatic recovery in test automation, and I reblogged the text of my contribution here for Automation Panda readers. Please check out contributions from other experts in the full article!

Test automation is an essential part of CI/CD, but it must be extremely robust. Unfortunately, tests running in live environments (integration and end-to-end) often suffer rare but pesky “interruptions” that, if unhandled, will cause tests to fail. These interruptions could be network blips, web pages not fully loaded, or temporarily downed services – any environment issues unrelated to product bugs. Interruptive failures are problematic because they (a) are intermittent and thus difficult to pinpoint, (b) waste engineering time, (c) potentially hide real failures, and (d) cast doubt over process/product quality. Furthermore, CI/CD magnifies even rare issues. If an interruption has only a 1% chance of happening during a test, then, considering binomial probabilities, there is a 63% chance it will happen after 100 tests, and a 99% chance it will happen after 500 tests. Keep in mind that it is not uncommon for thousands of tests to run daily in CI – Google Guava had over 286K tests back in July 2012!

It is impossible to completely avoid interruptions – they will happen. Therefore, it is imperative to handle interruptions at multiple layers:

  1. First, secure the platform upon which the tests run. Make sure system performance is healthy and that network connections are stable.
  2. Second, add failover logic to the automated tests. Any time an interruption happens, catch it as close to its source as possible, pause briefly, and retry the operation(s). Do not catch any type of error: pinpoint specific interruption signatures to avoid false positives. Build failover logic into the framework rather than implementing it for one-off cases. For example, wrappers around web element or service calls could automatically perform retries (see the sketch at the end of this section). Aspect-oriented programming can help here tremendously. Repeating failed tests in their entirety also works and may be easier to implement but takes much more time to run.
  3. Third, log any interruptions and recovery attempts as warnings. Do not neglect to report them because they could indicate legitimate problems, especially if patterns appear.

It may be difficult to differentiate interruptions from legitimate bugs. Or, certain retry attempts might take too long to be practical. When in doubt, just fail the test – that’s the safer approach.
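
To make the failover idea from point 2 concrete, here is a minimal sketch of a retry wrapper; the interruption signatures and the wrapped operation are hypothetical:

// Hypothetical failover wrapper: retry an operation only for known interruptions
async function withRetry(operation, {retries = 2, waitMs = 1000} = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (error) {
      // Pinpoint specific interruption signatures; rethrow anything else
      const isInterruption = /ECONNRESET|ETIMEDOUT/.test(String(error.message));
      if (!isInterruption || attempt >= retries) {
        throw error;
      }
      // Log the recovery attempt as a warning so patterns can be investigated
      console.warn(`Interruption caught (attempt ${attempt + 1}): ${error.message}`);
      await new Promise(resolve => setTimeout(resolve, waitMs));
    }
  }
}

// Example usage inside a test or framework layer (fetchForecast is hypothetical):
// let forecast = await withRetry(() => fetchForecast(url));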

Pipe Character Escape for Gherkin Tables

For the first time today, I had to write a Gherkin behavior scenario in which table text needed to use the pipe character “|”. I wanted a generic step that would find and click web page links by name, and one of the link names had the pipe in it! The first version of the step I wrote looked like this:

When the user follows the links:
  | link              |
  | Category          |
  | Sub-Category      |
  | Index|Description |

Naturally, this step didn’t parse – the “|” was parsed as a table delimiter instead of the intended link text. I could have rewritten the step to search for partial link text, or I could have done a key-value lookup, but I wanted to keep the step simple and direct.

The solution was simple: escape the pipe character “|” with a backslash character “\”. Easy! Thanks, StackOverflow! The updated table looks like this:

When the user follows the links:
 | link               |
 | Category           |
 | Sub-Category       |
 | Index\|Description |

“\|” works for both step tables and scenario outline example tables, and it appears to be fairly standard among test frameworks that use Gherkin. I verified that Cucumber-JVM and SpecFlow support it, and Cucumber for Ruby appears to support it as well. behave is slated to support it in 1.2.6.

After learning this trick, I updated the BDD 101: The Gherkin Language page.

Note that backslash escape sequences won’t work for quotes in Gherkin steps. Quotes in steps are merely conventions and not part of the Gherkin language standard.

The Airing of Grievances: Selenium WebDriver

Selenium WebDriver is the de facto standard for Web UI automation. It’s a great tool, but like anything good, it can also be misused. And that’s where I have grievances. I got a lot of problems with Selenium WebDriver abuses, and now you’re gonna hear about it!

WebDriver “Unit Tests”

“WebDriver unit tests” are like square circles – definitionally, they are logical fallacies. In my book, a unit test must be white box, meaning it has direct access to the product code. However, Web UI tests using WebDriver are inherently black box tests because they are interacting with an actively running website. Thus, they must be above-unit tests by definition. Don’t call them unit tests!

Making Every Test a Web Test

NO! The Testing Pyramid is vital to a healthy overall testing strategy. Web tests are great because they test a website in the ways a user would interact with it, but they have a significant cost. As compared to lower-level tests, they are more fragile, they require more development resources, and they take much more time to run. Browser differences may also affect testing. Furthermore, problems in lower level components should be caught at those lower levels! Sure, HTTP 400s and 500s will appear at the web app layer, but they would be much faster to find and fix with service layer tests. Different layers of testing mitigate risk at their optimal returns-on-investment.

No WebDriver Cleanup

Every WebDriver instance spawns a new system process for “driving” web browser interactions. When the test automation process completes, the WebDriver process may not necessarily terminate with it. It is imperative that test automation quits the WebDriver instance once testing is complete. Make sure cleanup happens even when abortive exceptions occur! Otherwise, zombie WebDriver processes may continue on the test machine, causing any number of problems: locked files and directories, high memory usage, wasted CPU cycles, and blocked network ports. These problems can cripple a system and even break future test runs, especially on shared testing machines (like Jenkins nodes). Please, only you can stop the zombie apocalypse – always quit WebDriver instances!
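
A minimal sketch of the safe pattern with Jasmine and the selenium-webdriver package (the browser choice and structure are assumptions):

const {Builder} = require('selenium-webdriver');

describe("Site under test", function() {

  let driver;

  beforeEach(async function() {
    driver = await new Builder().forBrowser('chrome').build();
  });

  afterEach(async function() {
    // Runs even when a spec throws, so no zombie WebDriver processes linger
    if (driver) {
      await driver.quit();
    }
  });

  // ... specs ...

});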

Using “Close” Instead of “Quit”

Regardless of programming language, the WebDriver class has both “close” and “quit” methods. “Close” will close the current browser tab or window, while “quit” will close all windows and terminate the WebDriver process. Make sure to quit during final cleanup. Doing only a close may result in zombie WebDriver processes. It’s a rookie mistake.

Not Optimizing Setup/Cleanup with Service Calls

Web tests are notoriously slow. Whenever you can speed them up, do it! Some tests can be optimized by preparing initial state with service calls. For example, let’s say a user visiting a car dealership website needs to have favorite cars pre-selected for a comparison page test. Rather than navigating to a bunch of car pages and clicking a “favorite” icon, make a setup routine that calls a service to select favorites. Not all tests can do this sort of optimization, but definitely do it for those that can!
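
A hedged sketch of the idea, with a completely hypothetical endpoint and test data (BASE_URL and TEST_USER stand in for real config values):

const frisby = require('frisby');   // any HTTP client would do

beforeAll(async function() {
  // Instead of clicking the "favorite" icon across several car pages,
  // seed the favorites directly through a (hypothetical) REST endpoint
  await frisby.post(`${BASE_URL}/api/users/${TEST_USER}/favorites`, {
    favorites: ['sedan-123', 'coupe-456', 'truck-789']
  });
});

// The Web UI test can then open the comparison page with state already prepared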

Web Elements with No ID

Developers, we need to talk – give every significant element a unique ID. PLEASE! WebDriver calls are so much easier to write and so much more robust to run when locator queries can use IDs instead of CSS selectors or XPaths. Let’s pick ID names during our Three Amigos meetings so that I can program the tests while you develop the features. Determining which elements are important should be easy based on our wireframes. You will save us automators so much time and frustration, since we won’t need to dig through HTML and wonder why our XPaths don’t work.

Changing Web Elements Without Warning

Hey, another thing, developers – don’t change the web page structure without telling us! WebDriver locator queries will break if you change the web elements. Even a seemingly innocuous change could wipe out hundreds of tests. Automation effort is non-trivial. Changes must be planned and sized with automation considerations in mind.

Not Using the Page Object Model

The Page Object Model is a widely-used design pattern for modeling a web page (or components on a web page) as an object in terms of its web elements and user interactions with it. It abstracts Web UI interactions into a common layer that can be reused by many different tests. (The Screenplay pattern, also good, is an evolution of the Page Object Model.) Not using the Page Object Model is Selenium suicide. It will result in rampant code duplication.
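
A bare-bones page object sketch (the page, locators, and element IDs are made up):

const {By} = require('selenium-webdriver');

class LoginPage {

  constructor(driver) {
    this.driver = driver;
  }

  async load(baseUrl) {
    await this.driver.get(baseUrl + '/login');
  }

  async logIn(username, password) {
    // Locators and interactions live here, in one reusable place
    await this.driver.findElement(By.id('username')).sendKeys(username);
    await this.driver.findElement(By.id('password')).sendKeys(password);
    await this.driver.findElement(By.id('submit')).click();
  }

}

module.exports = { LoginPage };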

Demonizing XPath

XPaths have long been criticized for being slower than CSS selectors. That claim is outdated baloney. In many cases, XPaths outperform CSS selectors. Another common complaint is that XPath syntax is more complicated than CSS selector syntax. Honestly, I think they’re about the same in terms of learning curve. XPaths are also more powerful than CSS selectors because they can uniquely pinpoint any element on the page.

Inefficient Web Element Access

Web element IDs make access extremely efficient. However, when IDs are not provided, other locator query types are needed. It is always better to use locator queries to pinpoint elements, rather than to get a list of elements (or even a parent/child chain) to traverse using programming code. For example, I often see code reviews in which an XPath returns a list of results with text labels, and then the programming code (C# or Java or whatever) has a for loop that iterates over each element in the list and exits when the element with the desired label is found. Just add “[text()=’desired text’]” or “[contains(text(), ‘desired text’)]” to the XPath! Use locator queries for all they’re worth.
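
For example (hypothetical link text, inside an async test with a driver and By already available):

// Inefficient: fetch all links, then loop in code to find the right one
let links = await driver.findElements(By.xpath("//a"));
for (let link of links) {
  if (await link.getText() === "Pricing") {   // hypothetical label
    await link.click();
    break;
  }
}

// Better: let a single locator query pinpoint the element
await driver.findElement(By.xpath("//a[text()='Pricing']")).click();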

Interacting with Web Elements Before the Page is Ready

Web UI test automation is inherently full of race conditions. Make sure the elements are ready before calling them, or else face a bunch of “element not found” exceptions. Use WebDriver waits for efficient waiting. Do not use hard sleeps (like Java’s Thread.sleep).
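
With the selenium-webdriver package, the difference looks roughly like this (the element ID and timeout are assumptions, inside an async test with a driver available):

const {By, until} = require('selenium-webdriver');

// Bad: a hard sleep wastes time and still might not be long enough
// await driver.sleep(5000);

// Good: a "smart" wait polls for the element, up to a tuned timeout
let results = await driver.wait(until.elementLocated(By.id('results')), 10000);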

Untuned Timeouts

WebDriver calls need timeouts, or else they could hang forever if there is a problem. (Check online docs for default timeout values.) Timeout values ought to be tuned appropriately for different test environments and different websites. Timeouts that are too short will unnecessarily abort tests, while timeouts that are too long will lengthen precious test runtime.

The Airing of Grievances: Test Automation Code

More grievances! Test code ought to be developed with the same high standards as product code, but often it is neglected. That bothers me so much, and it costs teams a significant amount of time and money through its bad consequences. I got a lot of problems with bad test automation code, and now you’re gonna hear about it!


Rampant Code Duplication

Code duplication is code cancer. It is particularly rampant in test automation because test steps are often repetitive. That’s no excuse, however, to allow it. Use better programming practices, or face my code review rejections!

Hard-Coded Configuration Data

Automation should be able to run on any environment without hassle. Don’t hard-code URLs, usernames, passwords, and other config-specific values! Read them in as inputs or config files. Nothing is more frustrating than switching environments but – surprise! – not running with the correct values.

Re-reading Inputs for Every Single Test

Config files and input values should not change during a test suite run. So, don’t repeatedly read them! That’s inefficient. Read them once, and hold them in memory in a centrally-accessible location.

Leaving Temporary Files or Settings

Clean up your mess! Don’t leave temporary files or settings after a test completes. Not doing proper cleanup can harm future tests. Files can also build up and eventually run the system out of storage space. It’s a Boy Scout principle: Leave No Trace!

Incorrect Test Results

Test results need integrity to be trusted. Watch out for false positives and false negatives. If there’s a problem during a test, handle it – don’t ignore it. Don’t skip assertion calls. Business decisions are made based on test results.

Uninformative Failure Messages

Here’s one: “Failed.” That tells me nothing. Why did the test fail? Was something missing from a web page? Was there an unexpected exception? Did a service call return 500? TELL ME! Otherwise, I need to waste time rerunning and even debugging the test. The failure may also be intermittent. Tell me what the problem is right when it happens.

Making Assertion Calls in Automation Support Code

Every automated test case must make assertions to verify goodness or badness of system conditions. But, as a best practice, those assertion calls should be written only in the test case code – not in support code like page objects or framework-level classes. Assertions are test-specific, while support code is generic. Support code should simply give system state, while test cases use assertions to validate system state. If support code has assertion calls, then it is far less reusable and traceable.
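
A small sketch of the separation (the page object, locator, and setup are hypothetical):

const {By} = require('selenium-webdriver');

// Support code: report system state, make no assertions
class ResultsPage {
  constructor(driver) {
    this.driver = driver;
  }
  async resultCount() {
    let results = await this.driver.findElements(By.className('result'));
    return results.length;
  }
}

// Test case code: the spec owns the assertion
// (assumes resultsPage was constructed in a beforeEach)
it("should show at least one search result", async function() {
  expect(await resultsPage.resultCount()).toBeGreaterThan(0);
});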

Treating Boolean Methods like Assertions

This is an all-too-common rookie mistake. Boolean methods simply return a true/false value. They do not (or, as least according to the previous grievance, should not) contain assertion calls. However, I’ve seen many code reviews where programmers call a Boolean method but do nothing with the result. Whoops!

Burying Exceptions

Exceptions mean problems. Test automation is meant to expose problems. Don’t blindly catch all exceptions, log a message, and continue on with a test like nothing’s wrong. Let the exception rise! Most modern test frameworks will catch all exceptions at the test case level, automatically report them as failures, and safely move on to the next test. There is no need to catch exceptions yourself if they ought to abort the test, anyway – that adds a lot of unnecessary code. Catch exceptions only if they are recoverable!

No Automatic Recovery

True automation means no manual intervention. An automation framework should have built-in ways to automatically recover from common problems. For example, if a network connection breaks, automatically attempt to reconnect. If a test fails, retry it. If too many failures happen, either end the run early or pause for some time. And make sure recovery mechanisms are built into the framework, rather than appearing as copypasta throughout the codebase. The purpose of retries (especially for tests) is not to blindly run tests until they turn from red to green, but rather to overcome unexpected interruptions and also to gather more data to better understand failure reasons. Interruptions will happen – handle them when they do. And be sure to log any failures that do happen so that the reasons may be investigated.

No Logging

Please leave a helpful trail of log messages. Tracing through automation code can be difficult after a failure, especially when you’re not the original author. Logging helps everyone quickly get to the root cause of failures. It also really helps to create reproduction scenarios. If I can copy the log from a failing test into a bug report and be done, then AMEN for an easy triage!

Hard Waits

Ain’t nobody got time for that! Automation needs to be fast, because time is money. Forcing Thread.sleep (or other such equivalent in your language of choice) shows either laziness or desperation. It unnecessarily wastes precious runtime. It also creates race conditions: what if the wait time ends before the thing is ready? Always use “smart” waits – actively and repeatedly check the system at short intervals for the thing to be ready, and abort after a healthy timeout value.