
Want to practice test automation? Try these demo sites!

One of the biggest struggles in learning how to develop top-notch automation is practice. Testing is as much an art as it is a science. It takes time to discern when to add explicit waits, how to craft robust locators, and why to verify one element over another. It also requires apps with specific elements or endpoints to try certain operations. Unfortunately, although resources for learning how to automate tests (like Test Automation University) are abundant these days, public demo web sites for practicing remain elusive. I’ve struggled to find ones that I like, and others frequently ask me for recommendations.

Which demo sites are good?

Below is a list of demo sites I’ve found either by searching online or through recommendations from friends. Many of these sites appear on other “Top N Demo Sites for Testing” articles as well. My list aims not only to provide links to popular demo sites but also to provide recommendations on how to use them.

Types of sites:

  • Web UI site – looks like a real web site
  • Web UI elements – tutorial pages showcasing web element types
  • Mobile UI site – looks like a real mobile site
  • API site – provides public APIs for testing
  • DIY – “do it yourself”; you must set up and run the demo site yourself
The demo sites:

  • ParaBank (Web UI & API site) – An online banking site from Parasoft with login and REST/SOAP APIs. You can also access the database if you run the project locally using the source code.
  • Restful Booker (Web UI & API site) – An online site for bed & breakfast bookings from Mark Winteringham. The frontend is a React app (source), and the backend is a REST API (source).
  • Automation Practice Website (Web UI site) – A basic online store with optional login from SeleniumFramework.com. Great for example web UI tests.
  • Luma – Magento eCommerce (Web UI site) – Another online store with several pages.
  • Demoblaze (Web UI site) – A basic online store with optional login from BlazeMeter. Great for example web UI tests.
  • Swag Labs (Web UI site) – A basic online store with required login from Sauce Labs. Great for example web UI tests.
  • Applitools demo site (Web UI site) – A small site with a login page and a home page from Applitools. Compare/contrast with a second version for visual testing.
  • Automation Bookstore (Web UI site) – A one-page site for dynamically searching book titles. Good for testing responsive design in short demos.
  • JPetStore Demo (Web UI site) – A pet store site from MyBatis built on top of MyBatis 3, Spring 3, and Stripes (source).
  • GlobalSQA Banking Project (Web UI site) – A basic banking app with dropdown login and simple pages from GlobalSQA.
  • Gatling Computers Database (Web UI site) – A one-page site from Gatling that provides a paginated list of computer models with the ability to filter and add new computers.
  • CandyMapper (Web UI site) – A Halloween-themed site from Paul Grossman that shows scary bugs. It also comes with a second release that has the fixes.
  • Bulldoggy: The Reminders App (DIY Web UI site) – A small web app I built myself in Python using FastAPI and HTMX. It has login and uses a TinyDB database.
  • OWASP Juice Shop (DIY Web UI site) – A test site written entirely in JavaScript using Node.js, Express, and Angular. Specifically built for testing security vulnerabilities.
  • Cypress Real-World App (DIY Web UI site) – A fake payment app from Cypress meant for demonstrating real-world Cypress testing. It could be used for other purposes.
  • RealWorld example apps (DIY Web UI site) – One demo app implemented in multiple languages and frameworks. Not developed for testing but could nevertheless be used.
  • the-internet (Web UI elements) – A site from Dave Haeffner and Elemental Selenium with several concise examples of web elements and interactions.
  • Selenium Test Pages (Web UI elements) – A site with several pages that have slightly deeper examples than the-internet.
  • LetCode (Web UI elements) – A set of very clean pages along with video tutorials explaining how to automate interactions.
  • DemoQA (Web UI elements) – A practice site from ToolsQA that includes pages for elements, forms, frames, interactions, and even a small bookstore.
  • Ultimate QA Automation Practice (Web UI elements) – A set of rich practice pages from Ultimate QA.
  • UI Test Automation Playground (Web UI elements) – A set of educational pages with interactable elements from the Rapise team.
  • SelectorsHub Practice Page (Web UI elements) – A practice page from SelectorsHub for interacting with different kinds of web elements.
  • WebDriverUniversity.com (Web UI elements) – Another set of educational pages with interactable elements.
  • Sauce Labs Native Sample Application (DIY Mobile UI site) – A mobile app from Sauce Labs for Android and iOS that is very similar to the Swag Labs web site.
  • JSONPlaceholder (API site) – A public REST API for generating fake data from a set of predefined resources. Follow the guide to learn how to make requests.
  • Swagger Petstore (API site) – A public REST API from Swagger for testing authentication and CRUD operations for pet store data.
  • Public APIs (API site) – A long list of public APIs that can be used for testing.
  • Device Registry Service (DIY API site) – A Flask app I developed for teaching how to test REST APIs. It includes the app and automated API tests, which can both be run locally.
  • Best Buy API Playground (DIY API site) – A JavaScript-based REST API demo service from Best Buy.

Which ones do you recommend most?

There are lots of demo sites above. Here are the ones I’d personally recommend for different needs:

Of course, the others are still good. That’s why they made the list!

Why bother with “fake” sites?

You might be wondering, “Why do we need demo sites? Why don’t we just use real sites?” Well, demo or “fake” web sites meet a few needs that real sites cannot:

  • Demo sites offer consistency. They are implemented one way and then do not change. Folks can trust that their tests will always work on them.
  • Demo sites are often simpler than real sites. They feel less intimidating for newcomers.
  • Demo sites can be designed for instruction. If they are part of a tutorial, the author can add features to the site to demonstrate concepts.
  • Demo sites are safer for publications like articles, tutorials, and books. Written content is static, so any sites referenced by examples should also be static. Real sites could change.
  • Real sites might require end user agreements that forbid automated requests. Some may even throttle or block requests if they suspect the requests come from a “bot.”
  • Real sites might also bear legal or copyright implications, especially if a company is using the sites for their own content.

What limitations do demo sites have?

Unfortunately, demo sites do have limitations:

  • Demo sites may be too simplified. They may lack large workflows or real-world data. Folks might become frustrated when they discover inactive elements that appear to be real.
  • Demo sites may not be built to scale. High request volumes or heavy parallel testing might cripple them.
  • Demo sites may seem low-quality, whether or not they actually are. They are often built quickly for testing purposes and therefore don’t receive the same attention to detail as real sites.
  • Demo sites with significant company branding may be inappropriate to use. For example, if A and B are competitors, then A shouldn’t use B’s demo site for their product tutorials.

Other great lists

Here are other great articles that list demo sites for testing. I referred to them while compiling this list:

Do you have other demo sites to recommend? Put them in the comments below!

How Q2 uses BDD with SpecFlow for testing PrecisionLender

This case study was written by Andrew Knight, Lead Software Engineer in Test for Q2’s PrecisionLender product, in collaboration with Q2 and Tricentis. It explains the PrecisionLender team’s continuous testing journey and how SpecFlow served as a cornerstone for success.

What is PrecisionLender?

PrecisionLender is a web application that empowers commercial bankers with in-the-moment insights that help them structure and price commercial deals. Andi®, PrecisionLender’s intelligent virtual analyst, delivers these hyper-focused recommendations in real-time, allowing relationship managers to make data-driven decisions while pricing their commercial deals. PrecisionLender is owned and developed by Q2, a financial experience software company dedicated to providing digital banking and lending solutions to banks, credit unions, alternative finance, and fintech companies in the U.S. and internationally.

The PrecisionLender Opportunity Screen
(Picture taken from the PrecisionLender Support Center)

The starting point

The PrecisionLender team had a robust Continuous Integration (CI) delivery pipeline with strong unit test coverage, but they lacked end-to-end feature coverage. Developers would fill this gap by manually inspecting their changes in a shared development environment. However, as the PrecisionLender app grew, manual checks could not cover all possible integrations. The team knew they needed continuous automated testing to provide a safety net for development to remain lean and efficient. In April 2018, they hired Andrew Knight as their first Software Engineer in Test (SET) – a new role for the company – to lead the effort.

Automating tests with SpecFlow

The PrecisionLender team developed the Boa test solution – a project for automating end-to-end tests at scale. Boa would become PrecisionLender’s internal platform for test automation development. The name “Boa” is a loose acronym for “Behavior-Oriented Automation.”

The team chose SpecFlow to be the core framework for Boa tests. Since the PrecisionLender app’s backend is developed using .NET, SpecFlow was a natural fit. SpecFlow’s Gherkin syntax made tests readable and understandable, even to product owners and product support specialists who do not code.

The SpecFlow framework integrates with tools like Selenium WebDriver for testing Web UIs and RestSharp for testing REST APIs to exercise vital pathways for thorough app coverage. SpecFlow’s dependency injection mechanisms are solid yet simple, and the online docs are thorough. Plus, SpecFlow is an open-source project, so anyone can look at its code to learn how things work, open requests for new features, and even offer code contributions.

An example Boa test, written in Gherkin using SpecFlow.
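The original screenshot is not reproduced here, but a hypothetical scenario in that style might look like the following sketch (the feature, steps, and values are invented for illustration):

Feature: Opportunity Pricing

Scenario: Price a new commercial loan opportunity
  Given the relationship manager is logged into PrecisionLender
  When the relationship manager adds a $1,000,000 term loan to a new opportunity
  Then the opportunity summary shows the loan's calculated profitability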

Executing tests with SpecFlow+ Runner

Writing good tests was only part of the challenge. The PrecisionLender team needed to execute Boa tests continuously to provide fast feedback on changes to the app. The team chose to run Boa tests using SpecFlow+ Runner, which is tailored for SpecFlow tests. The team uses SpecFlow+ Runner to launch tests in parallel in TeamCity any time a developer deploys a code change to internal pre-production environments. The entire test suite also runs every night against multiple product configurations. SpecFlow+ Runner produces a helpful test report with everything needed to triage test failures: pass-and-fail tallies overall and per feature, a visual execution timeline, and full system logs. If engineers need to investigate certain failures more closely, they can use SpecFlow tags and SpecFlow+ Runner profiles to selectively filter tests for reruns. SpecFlow+ Runner’s multiple features help the team expedite test execution and investigation.

The SpecFlow+ Runner report for a dozen smoke tests.

Sharing features with SpecFlow+ LivingDoc

Good test cases are more than just verification procedures – they are behavior specifications. They define how features should work. Instead of keeping testing work siloed by role, the PrecisionLender team wanted to share Boa tests as behavior specs with all stakeholders to foster greater collaboration and understanding around features. The team also wanted to share Boa tests with specific customers without sharing the entire automation code.

SpecFlow+ LivingDoc enabled the PrecisionLender team to turn Gherkin feature files into living documentation. Whereas the SpecFlow+ Runner report focuses on automation execution, the SpecFlow+ LivingDoc report focuses on behavior specification apart from coding and automation details. LivingDoc displays Gherkin scenarios in a readable, searchable way that both internal folks and customers can consume. It can also optionally include high-level pass-and-fail results for each scenario, providing just enough information to be helpful and not overwhelming. LivingDoc has also helped PrecisionLender’s engineers identify and eliminate unused step definitions within the automation code. PrecisionLender benefits greatly from complementary reports from SpecFlow+ Runner and SpecFlow+ LivingDoc.

The SpecFlow+ LivingDoc report for a dozen smoke tests with their pass-and-fail results.

Improving interactions with Boa Constrictor

The Boa test solution initially used the Page Object Model to model interactions with the PrecisionLender app. However, as the PrecisionLender team automated more and more Boa tests, it became apparent that page objects did not scale well. Many page object classes had duplicative methods, making automation code messy. Some methods also did not include appropriate waiting mechanisms, introducing flaky failures.

PrecisionLender’s SETs developed Boa Constrictor, a .NET implementation of the Screenplay Pattern, to make better interactions for better automation. In Screenplay, actors use abilities to perform interactions. For example, an ability could be using Selenium WebDriver, and an interaction could be clicking an element. The Screenplay Pattern can be seen as a refactoring of the Page Object Model that minimizes duplicate code through a better separation of concerns. Individual interactions can be hardened for robustness, eliminating flaky hotspots. The Boa test solution now exclusively uses Boa Constrictor for interactions.

In October 2020, Q2 released Boa Constrictor as an open-source project so that anyone can use it. It is fully compatible with SpecFlow and other .NET test frameworks, and it provides rich interactions for Selenium WebDriver and RestSharp out of the box.

Boa Constrictor, the .NET Screenplay Pattern.

Scaling massively with Selenium Grid

When the PrecisionLender team first started automating Boa tests, they ran tests one at a time. That soon became too slow since the average Boa test took 20 to 50 seconds to complete. The team then started running up to 3 tests in parallel on one machine, but that also was not fast enough. They turned to Selenium Grid, a tool for running WebDriver sessions remotely across multiple machines.

PrecisionLender built a set of internal Selenium Grid instances using Microsoft Azure virtual machines to run Boa tests at high scale. As of July 2021, PrecisionLender has over 1800 unique Boa tests that run across four distinct product configurations. Whenever TeamCity detects a code change, it triggers a “continuous” Boa test suite of over 1000 tests, which run 50 at a time in parallel using Google Chrome on Selenium Grid. It completes execution in about 10 minutes. TeamCity launches the full test suite every night against all product configurations with 64-100 parallel tests on Selenium Grid. Continuous Integration currently runs up to 10K Boa tests daily against the PrecisionLender app with SpecFlow+ Runner and Selenium Grid.

The Boa test solution architecture, including Continuous Integration through TeamCity and parallel testing with SpecFlow+ Runner and Selenium Grid.

Shifting left with BDD

Better testing and automation practices eventually inspired better development practices. Product owners would create user stories, but developers would struggle to understand requirements and business purposes fully. PrecisionLender’s SETs started bringing together the Three Amigos – business, development, and testing roles – to discuss product behaviors proactively while creating user stories. They introduced Behavior-Driven Development (BDD) activities like Example Mapping to explore behaviors together. Then, well-defined stories could be easily connected to SpecFlow tests written in Gherkin following Specification by Example (SBE). Teams repeatedly saved time by thinking before coding and specifying before testing. They built higher quality into features from the beginning, and they avoided working on half-baked stories with unjustified value propositions. Developers who participated in these behavior-driven practices were also more likely to automate Boa tests on their own. Furthermore, one of PrecisionLender’s developers loved BDD practices so much that he joined the team of SETs! Through Gherkin, SpecFlow provided a foundation that enabled quality work to shift left.

Challenges along the way

Achieving true continuous testing had its challenges. Intermittent failures were the most significant issue PrecisionLender faced at scale. With so many tests, environments, and infrastructural pieces, arbitrary failures were statistically unavoidable. The PrecisionLender team took a two-pronged approach to handle intermittent failures: (1) eliminate race conditions in automation using good interactions with Boa Constrictor, and (2) use SpecFlow+ Runner to automatically retry failed tests to determine if failures were consistent or intermittent. These two approaches reduced the frequency of flaky failures and helped engineers quickly resolve any remaining issues. As a result, Boa tests enjoy well above a 99% success rate, and most failures are due to actual bugs.

PrecisionLender app performance at scale was a second big challenge. Running up to 100 tests in parallel turned functional tests into de facto load tests. Testing at scale repeatedly uncovered performance bottlenecks in the app. Performance issues caused widespread test failures that were difficult to diagnose because they appeared intermittently. Still, the visual timeline and timestamps in the SpecFlow+ Runner report helped the team identify periods of failure that could be cross-checked against backend logs, metrics, and database queries. Developers resolved many performance issues and significantly improved the app’s response times and load capacity.

Training team members to develop solid test automation was the third challenge. At the start of the journey, test automation, Gherkin, and BDD were all new to PrecisionLender. The PrecisionLender SETs took active steps to train others on how to develop good tests and good automation through group workshops, Three Amigos meetings, and one-on-one mentoring sessions. They shared resources like the Automation Panda blog for how to write good tests and good Gherkin. The investment in education paid off: many developers have joined the SETs in writing readable, reliable Boa tests that run continuously.

Benefits to the business

Developing a continuous testing solution brought many incredible benefits to PrecisionLender. First, the quality of the PrecisionLender app improved because continuous testing provided fast feedback on failures that developers could quickly fix. Instead of relying on manual spot checks, the team could trust the comprehensive safety net of Boa tests to catch bugs. Many issues would be caught within an hour of a developer making a code commit, and the longest feedback cycle would be only one business day for the full nightly test suites to run. Boa tests catch failures before customers ever experience them. The continuous nature of testing enables PrecisionLender to publish new releases every two weeks.

Second, the high reliability of the Boa test solution means that the PrecisionLender team can trust test results. When a test passes, the behavior is working. When a test fails, there is a real bug. Reliability also means that engineers spend less time on automation maintenance and more time on more valuable activities, like developing new features and adding new tests. Quality is present in both the product code and the test code.

Third, continuous testing boosts customer confidence in PrecisionLender. Customers trust the software quality because they know that PrecisionLender thoroughly tests every release. The PrecisionLender team also shares SpecFlow+ LivingDoc reports with specific clients to prove quality.

A bright future

PrecisionLender’s continuous testing journey is not over. Since the PrecisionLender team hired its first SET, it has hired three more, in addition to a testing manager, to grow quality improvement efforts. Multiple development teams have written their own Boa tests, and they plan to write more tests independently. SpecFlow’s tools have been indispensable in helping the PrecisionLender team achieve successful quality assurance. As PrecisionLender welcomes more customers, the Boa solution will be ready to scale with more tests, more configurations, and more executions.

Testing Web Services with Karate

Karate is a relatively new open source framework for testing Web services. Even though Karate is written in Java, its main value proposition is that testers don’t need to do any Java programming in order to write fully automated tests. Instead, testers use a Gherkin-like language with steps for making requests and validating responses. It’s like Cucumber with out-of-the-box Web API steps! There are a bunch of other nifty features, too.

This article is my quick-start guide for Karate. As a prerequisite, make sure you understand how Web services work (like REST APIs). Knowing BDD will also help.

My System

Since Karate is an open-source Java project, it can run almost anywhere. Here’s my system config:

  • macOS 10.13.6 (High Sierra)
  • Java 1.8.0_191
  • Apache Maven 3.6.0
  • Karate 0.9.0

Warning: I initially attempted to run my Karate project using Java 11.0.1, but I repeatedly hit SSL handshake exceptions despite trying many fixes. Downgrading to Java 8 fixed the errors. See Issue #617.

Project Setup

I created my Karate project using the Maven archetype since I am quite familiar with Maven. I named my project “firstchop”:

mvn archetype:generate \
  -DarchetypeGroupId=com.intuit.karate \
  -DarchetypeArtifactId=karate-archetype \
  -DarchetypeVersion=0.9.0 \
  -DgroupId=com.automationpanda \
  -DartifactId=firstchop

The directory layout was standard for Java Maven / Cucumber-JVM projects. (In fact, Karate was based on Cucumber-JVM until version 0.8.0.) Interestingly, the Karate docs recommend placing feature files under src/test/java instead of src/test/resources.
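For reference, here is a sketch of the layout the archetype generated (exact files may vary by version):

firstchop
|-- pom.xml
`-- src
    `-- test
        `-- java
            |-- examples
            |   |-- ExamplesTest.java
            |   `-- users
            |       |-- UsersRunner.java
            |       `-- users.feature
            |-- karate-config.js
            `-- logback-test.xml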

IDE

The Karate docs recommend using Eclipse or IntelliJ IDEA for developing Karate tests. Both IDEs offer support for JUnit and Cucumber, which Karate can leverage not only for editing but also for running tests. I’d probably use IntelliJ IDEA for serious testing.

However, for my initial exploration, I chose to use Visual Studio Code. Why?

  • It’s fast and easy.
  • It has good support for Java and Gherkin.
  • It has a file explorer and an integrated terminal.


Here’s what my Karate project looked like inside Visual Studio Code. Nice!

The Examples

The archetype project included example tests in the users.feature file. The first scenario from the file, copied below, tests getting users from the JSONPlaceholder REST API:

Feature: sample karate test script

Background:
* url 'https://jsonplaceholder.typicode.com'

Scenario: get all users and then get the first user by id

Given path 'users'
When method get
Then status 200

* def first = response[0]

Given path 'users', first.id
When method get
Then status 200

Anyone familiar with Cucumber will immediately recognize the Given/When/Then format for scenarios. However, Karate’s standard steps make the language more powerful than raw Gherkin:

  • Given steps build requests
  • When steps make request calls
  • Then steps validate responses
  • Catch-all steps (*) provide additional directives, like setting variables

All steps are quite concise. The Java implementation is essentially hidden from the tester (unless, of course, they want to plunge into the framework’s lower levels). Furthermore, this scenario shows how to use data from one response as the input for a second request.

Running Tests

Since I chose to use Maven and Visual Studio Code, the easiest way to run tests without any additional configuration was through the command line using “mvn test”. The example tests came with an ExamplesTest.java file that will run all feature files in the package when it is discovered during Maven’s “test” phase. The project defaulted to using JUnit 4, but JUnit 5 is also supported.
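The runner class itself is tiny. Based on the 0.9.0 archetype, it looks roughly like this:

package examples;

import com.intuit.karate.junit4.Karate;
import org.junit.runner.RunWith;

// Runs every *.feature file in this package and its sub-packages
@RunWith(Karate.class)
public class ExamplesTest {
}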

Running the tests will print many lines to the console. Every request and response will be printed. Below is the tail end of a successful run for users.feature:

$ mvn test
.
.
.
---------------------------------------------------------
feature: classpath:examples/users/users.feature
scenarios:  2 | passed:  2 | failed:  0 | time: 1.5232
---------------------------------------------------------
HTML report: (paste into browser to view) | Karate version: 0.9.0
file:/path/to/firstchop/target/surefire-reports/examples.users.users.html
---------------------------------------------------------

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.09 sec

Results :

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  5.635 s
[INFO] Finished at: 2018-12-09T23:32:29-05:00
[INFO] ------------------------------------------------------------------------

Karate can generate helpful test reports, too. The project generates JUnit reports by default, but other report formats like Cucumber reports are also possible.


The JUnit HTML report shows a step-by-step log for each scenario.


Full requests and responses are automatically logged, which is great for debugging.


Failures are visually easy to identify. Above, I hacked the example to deliberately fail.

My Scenarios

Seeing examples is great, but I wanted to write my own scenarios with a real Web service for some hands-on experience. So, I wrote a test for Recipe Puppy, a search engine that provides links to recipes for input ingredients:

Feature: Recipe Puppy

Background:
* url 'http://www.recipepuppy.com'

Scenario Outline: Get a recipe for <ingredient>

Given path 'api'
And params {i: '<ingredient>'}
When method get
Then status 200
And match response contains {results: '#array'}
And match response.results[*] contains
  """
  {
    title: '#string',
    href: '#string',
    ingredients: '#regex <ingredient>',
    thumbnail: '#string'
  }
  """

  Examples:
    | ingredient |
    | tomato     |
    | pepperoni  |
    | cheese     |

 

Here are a few things I learned while writing this new feature file:

  • Scenario Outlines are supported, but data-driven features are preferred.
  • JSON objects can be used as step arguments or as variables.
  • Matching syntax is simple but sophisticated.
  • Markers like “#array” support fuzzy matching when specific values are unknown.
  • Multi-line JSON expressions need block quotes.

This new scenario ran just as successfully as the examples!

Standalone Execution

Even though writing tests using Karate’s domain-specific language does not require Java development skills, setting up the full Karate project does. Thankfully, Karate provides a standalone JAR that can run feature files without any other dependencies or configuration. It simply takes in paths to feature files, runs them, and generates Cucumber reports. The standalone JAR would be a good option for testers who don’t have strong programming skills.
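For example, running my Recipe Puppy feature with the standalone JAR might look like this (assuming the JAR downloaded from the Karate releases page has been renamed to karate.jar):

# Run feature files directly (no Maven project required)
$ java -jar karate.jar path/to/recipepuppy.feature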


This was the Cucumber report generated by running my Recipe Puppy test with the standalone JAR.

Other Features

Despite being a fairly young project (with a GitHub creation date of February 7, 2017), Karate is full of nifty features that I have yet to try:

  • Parallel test execution
  • A mock servlet
  • A UI for visually debugging scripts
  • Reading files of many types into variables
  • Calling a feature file from another feature file
  • Calling JavaScript code
  • Calling Java code
  • Using scenarios as Gatling performance tests

Project contributors are also experimenting to extend Karate to test browser, mobile, and desktop UIs using Selenium WebDriver.

Analysis

Overall, Karate is a great tool for testing Web services. It handles all of the programming implementation details so that testers can focus more on testing. Its syntax is concise, clear, and versatile. Its native support for JSON objects makes request and response handling feel natural. The GitHub docs are on-point. Anyone testing REST APIs should give it serious consideration.

Even though Karate uses Gherkin-like syntax, it is not truly a behavior-driven test framework. Instead, it simply leverages the strengths of Cucumber – readability, reusability, and tooling – to make automated Web service testing easier. I have long stated that one of the best solutions to test automation challenges is to create a domain-specific testing language that automatically handles low-level details. (See Behavior-Driven Blasphemy.) The Karate project validates my claim. Karate’s DSL acts much more like a testing language than a business-oriented specification language. Its Gherkin keywords simply provide structure and familiarity to its steps. And it’s perfectly okay that Karate isn’t “pure BDD” – just ask the project’s creator.

With that said, I would argue that a tester probably needs basic programming skills to be successful with Karate. Execution needs Java setup no matter what, and feature files are really just fancy test scripts. And Web service APIs are inherently code-y.

I look forward to seeing how the Karate project grows, especially if/when WebDriver-based steps are added to the language.


EGAD! How Do We Start Writing (Better) Tests?

Some have never automated tests and can’t check themselves before they wreck themselves. Others have 1000s of tests that are flaky, duplicative, and slow. Wa-do-we-do? Well, I gave a talk about this problem at a few Python conferences. The language used for example code was Python, but the principles apply to any language.

Here’s the PyTexas 2019 talk:

And here’s the PyGotham 2018 talk:

And here’s the first time I gave this talk, at PyOhio 2018:

I also gave this talk at PyCaribbean 2019 and PyTennessee 2020 (as an impromptu talk), but it was not recorded.

JavaScript Testing with Jasmine

Table of Contents

  1. Introduction
  2. Setup and Installation
  3. Project Structure
  4. Unit Tests for Functions
  5. Unit Tests for Classes
  6. Unit Tests with Mocks
  7. Integration Tests for REST APIs
  8. End-to-End Tests for Web UIs
  9. Basic Test Execution
  10. Advanced Test Execution with Karma
  11. Angular Testing

Introduction

Jasmine is one of the most popular JavaScript test frameworks available. Its tests are intuitively recognizable by their describe/it format. Jasmine is inspired by Behavior-Driven Development and comes with many basic features out-of-the-box. While Jasmine is renowned for its Node.js support, it also supports Python and Ruby. Jasmine also works with JavaScript-based languages like TypeScript and CoffeeScript.

This guide shows how to write tests in JavaScript on Node.js using Jasmine. It uses the jasmine-node-js-example project (hosted on GitHub). Content includes:

  • Basic white-box unit tests
  • REST API integration tests with frisby
  • Web UI end-to-end tests with Protractor
  • Spying with sinon
  • Monkeypatching with rewire
  • Handling config data with JSON files
  • Advanced execution features with Karma
  • Special considerations for Angular projects

The Jasmine API Reference is also indispensable when writing tests.

Setup and Installation

The official Jasmine Node.js Setup Guide explains how to set up and install Jasmine. Jasmine tests may be added to an existing project or to an entirely new project. As a prerequisite, Node.js must already be installed. Use the following commands to set things up.

# Initialize a new project (if necessary)
# This will create the package.json file
$ mkdir [project-name]
$ cd [project-name]
$ npm init

# Install Jasmine locally for the project and globally for the CLI
$ npm install jasmine
$ npm install -g jasmine

# Create a spec directory with configuration file for Jasmine
$ jasmine init

# Optional: Install official Jasmine examples
# Do this only for self-education in a separate project
$ jasmine examples

The code used by this guide is available on GitHub at jasmine-node-js-example. Feel free to clone this repository to try things out yourself!

Recommended editors and IDEs include Visual Studio Code with the Jasmine Snippets extensions, Atom, and JetBrains WebStorm.

Project Structure

Jasmine does not require the project to have a specific directory layout, but it does use a configuration file to specify where to find tests. The default, conventional project structure created by “jasmine init” puts all Jasmine code into a “spec” directory, which contains “*spec.js” files for tests, helpers that run before specs, and a support directory for config. The JASMINE_CONFIG_PATH environment variable can be set to change the config file used. (The default config file is spec/support/jasmine.json.)

[project-name]
|-- [product source code]
|-- spec
|   |-- [spec sub-directory]
|   |   `-- *spec.js
|   |-- helpers
|   |   `-- [helper sub-directory]
|   `-- support
|       `-- jasmine.json
`-- package.json

This structure may be changed using the “spec_dir”, “spec_files”, and “helpers” properties in the config file. For example, it may be useful to add more than one level of directories to the hierarchy. However, it is typically best to leave the conventional directory layout in place. The default config values as of Jasmine 2.8 are below.

{
  "spec_dir": "spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ],
  "helpers": [
    "helpers/**/*.js"
  ],
  "stopSpecOnExpectationFailure": false,
  "random": false
}

It is also a best practice to separate tests between different levels of the Testing Pyramid. The example project has spec subdirectories for unit, integration, and end-to-end tests. Directory-level organization makes it easy to filter tests by level when executed.

Unit Tests for Functions

The most basic unit of code to be tested in JavaScript is a function. The “lib/calculator.functions.js” module contains some basic math functions for easy testing.

// --------------------------------------------------
// lib/calculator.functions.js
// --------------------------------------------------

// Calculator Functions

function add(a, b) {
    return a + b;
}

function subtract(a, b) {
    return a - b;
}

function multiply(a, b) {
    return a * b;
}

function divide(a, b) {
    let value = a * 1.0 / b;
    if (!isFinite(value))
        throw new RangeError('Divide-by-zero');
    else
        return value;
}

function maximum(a, b) {
    return (a >= b) ? a : b;
}

function minimum(a, b) {
    return (a <= b) ? a : b;
}

// Module Exports

module.exports = {
    add: add,
    subtract: subtract,
    multiply: multiply,
    divide: divide,
    maximum: maximum,
    minimum: minimum,
}

Its tests are in “spec/unit/calculator.function.spec.js”. Below is a snippet showing simple tests for the “add” function. A describe block groups a “suite” of specs together. Each it block is an individual spec (or test). Titles for specs are often written as what the spec should do. Describe blocks may be nested for hierarchical grouping, but it blocks (being bottom-level) may not. Assertions are made using Jasmine’s fluent-like expect and matcher methods. Since the functions are stateless, no setup or cleanup is needed. Tests for other math functions are similar.

// --------------------------------------------------
// spec/unit/calculator.function.spec.js
// --------------------------------------------------

const calc = require('../../lib/calculator.functions');

describe("Calculator Functions", function() {

  describe("add", function() {

    it("should add two positive numbers", function() {
      let value = calc.add(3, 2);
      expect(value).toBe(5);
    });

    it("should add a positive and a negative number", function() {
      let value = calc.add(3, -2);
      expect(value).toBe(1);
    });

    it("should give the same value when adding zero", function() {
      let value = calc.add(3, 0);
      expect(value).toBe(3);
    });

  });

});

The divide-by-zero test for the “divide” function is special because it must verify that an exception is thrown. The divide call is wrapped in a function so that it may be passed into the expect call.

  describe("divide", function() {

    // ...

    it("should throw an exception when dividing by zero", function() {
      let divideByZero = function() { calc.divide(3, 0); };
      expect(divideByZero).toThrowError(RangeError, 'Divide-by-zero');
    });

    // ...

  });

The “maximum” and “minimum” functions have parametrized tests using the Array class’s forEach method. This is a nifty trick for hitting multiple input sets without duplicating code or combining specs. Note that the spec titles are also parametrized. Tests for “maximum” are shown below.

  describe("maximum", function() {

    [
      [1, 2, 2],
      [2, 1, 2],
      [2, 2, 2],
    ].forEach(([a, b, expected]) => {
      it(`should return ${expected} when given ${a} and ${b}`, () => {
        let value = calc.maximum(a, b);
        expect(value).toBe(expected);
      });
    });

  });

Unit Tests for Classes

Jasmine can also test classes. When testing classes, setup and cleanup routines become more helpful. The Calculator class in the “lib/calculator.class.js” module calls the math functions and caches the last answer.

// --------------------------------------------------
// lib/calculator.class.js
// --------------------------------------------------

// Imports

const calcFunc = require('./calculator.functions');

// Calculator Class

class Calculator {

  constructor() {
      this.last_answer = 0;
  }

  do_math(a, b, func) {
      return (this.last_answer = func(a, b));
  }

  add(a, b) {
      return this.do_math(a, b, calcFunc.add);
  }

  subtract(a, b) {
      return this.do_math(a, b, calcFunc.subtract);
  }

  multiply(a, b) {
      return this.do_math(a, b, calcFunc.multiply);
  }

  divide(a, b) {
      return this.do_math(a, b, calcFunc.divide);
  }

  maximum(a, b) {
      return this.do_math(a, b, calcFunc.maximum);
  }

  minimum(a, b) {
      return this.do_math(a, b, calcFunc.minimum);
  }

}

// Module Exports

module.exports = {
  Calculator: Calculator,
}

The Jasmine specs in “spec/unit/calculator.class.spec.js” are very similar but now call the beforeEach method to construct the Calculator object before each scenario. (Jasmine also has methods for afterEach, beforeAll, and afterAll.) The verifyAnswer helper function also makes assertions easier. The addition tests are shown below.

// --------------------------------------------------
// spec/unit/calculator.class.spec.js
// --------------------------------------------------

const calc = require('../../lib/calculator.class');

describe("Calculator Class", function() {

  let calculator;

  beforeEach(function() {
    calculator = new calc.Calculator();
  });

  function verifyAnswer(actual, expected) {
    expect(actual).toBe(expected);
    expect(calculator.last_answer).toBe(expected);
  }

  describe("add", function() {

    it("should add two positive numbers", function() {
      verifyAnswer(calculator.add(3, 2), 5);
    });

    it("should add a positive and a negative number", function() {
      verifyAnswer(calculator.add(3, -2), 1);
    });

    it("should give the same value when adding zero", function() {
      verifyAnswer(calculator.add(3, 0), 3);
    });

  });

  // ...

});

Unit Tests with Mocks

Mocks help to keep unit tests focused narrowly upon the unit under test. They are essential when units of code depend upon other callable entities. For example, mocks can be used to provide dummy test values for REST APIs instead of calling the real endpoints so that receiving code can be tested independently.

Jasmine’s out-of-the-box spies can do some mocking and spying, but they are not very powerful. For example, they don’t work when members of one module call members of another, or even when members of the same module call each other (unless they are within the same class). It is better to use rewire for monkey-patching (mocking via member substitution) and sinon for stubbing and spying.

The “lib/weather.js” module shows how mocking can be done with member dependencies. The WeatherCaller class’s “getForecast” method calls the “callForecast” function, which is meant to represent a service call to get live weather forecasts. The “callForecast” function returns an empty object, but the specs will “rewire” it to return dummy test values that can be used by the WeatherCaller class. Rewiring will work even though “callForecast” is not exported!

// --------------------------------------------------
// lib/weather.js
// --------------------------------------------------

function callForecast(month, day, year, zipcode) {
  return {};
}

class WeatherCaller {

  constructor() {
    this.forecasts = {};
  }

  getForecast(month, day, year, zipcode) {
    let key = `${month}/${day}/${year} for ${zipcode}`;
    if (!(key in this.forecasts)) {
      this.forecasts[key] = callForecast(month, day, year, zipcode);
    }
    return this.forecasts[key];
  }

}

module.exports = {
  WeatherCaller: WeatherCaller,
}

The tests in “spec/unit/weather.mock.spec.js” monkey-patch the “callForecast” function with a sinon stub in the beforeEach call so that each test has a fresh spy count. Note that the weather method is imported using “rewire” instead of “require” so that it can be monkey-patched. Even though the original function returns an empty object, the tests pass because the mock returns the dummy test value.

// --------------------------------------------------
// spec/unit/weather.mock.spec.js
// --------------------------------------------------

// Imports

const rewire = require('rewire');
const sinon = require('sinon');

// Rewirings

const weather = rewire('../../lib/weather');

// WeatherCaller Specs
describe("WeatherCaller Class", function() {

  // Test constants
  const dummyForecast = {"high": 42, "low": 26};

  // Test variables
  let callForecastMock;
  let weatherModuleRestore;
  let weatherCaller;

  beforeEach(function() {
    // Mock the inner function's return value using sinon
    // Do this for each test to avoid side effects of call count
    callForecastMock = sinon.stub().returns(dummyForecast);
    weatherModuleRestore = weather.__set__("callForecast", callForecastMock);

    // Construct the main caller object
    weatherCaller = new weather.WeatherCaller();
  });

  it("should be empty upon construction", function() {
    // No mocks required here
    expect(Object.keys(weatherCaller.forecasts).length).toBe(0);
  });

  it("should get a forecast for a date and a zipcode", function() {
    // This simply verifies that the return value is correct
    let forecast = weatherCaller.getForecast(12, 25, 2017, 21047);
    expect(forecast).toEqual(dummyForecast);
  });

  it("should get a fresh forecast the first time", function() {
    // The inner function should be called and the value should be cached
    // Note the sequence of assertions, which guarantee safety
    let forecast = weatherCaller.getForecast(12, 25, 2017, 21047);
    const forecastKey = "12/25/2017 for 21047";
    expect(callForecastMock.called).toBeTruthy();
    expect(Object.keys(weatherCaller.forecasts).length).toBe(1);
    expect(forecastKey in weatherCaller.forecasts).toBeTruthy();
    expect(weatherCaller.forecasts[forecastKey]).toEqual(dummyForecast);
  });

  it("should get a cached forecast the second time", function() {
    // The inner function should be called only once
    // The same object should be returned by both method calls
    let forecast1 = weatherCaller.getForecast(12, 25, 2017, 21047);
    let forecast2 = weatherCaller.getForecast(12, 25, 2017, 21047);
    expect(callForecastMock.calledOnce).toBeTruthy();
    expect(forecast1).toBe(forecast2);
  });

  it("should get and cache multiple forecasts", function() {
    // The other tests verify the mechanics of individual calls
    // This test verifies that the caller can handle multiple forecasts

    // Initial forecasts
    let forecast1 = weatherCaller.getForecast(12, 25, 2017, 27518);
    let forecast2 = weatherCaller.getForecast(12, 25, 2017, 27518);
    let forecast3 = weatherCaller.getForecast(12, 25, 2017, 21047);

    // Change forecast value
    weatherModuleRestore();
    const newForecast = {"high": 39, "low": 18}
    callForecastMock = sinon.stub().returns(newForecast);
    weatherModuleRestore = weather.__set__("callForecast", callForecastMock);

    // More forecasts
    let forecast4 = weatherCaller.getForecast(12, 26, 2017, 21047);
    let forecast5 = weatherCaller.getForecast(12, 27, 2017, 21047);

    // Assertions
    expect(Object.keys(weatherCaller.forecasts).length).toBe(4);
    expect("12/25/2017 for 27518" in weatherCaller.forecasts).toBeTruthy();
    expect("12/25/2017 for 21047" in weatherCaller.forecasts).toBeTruthy();
    expect("12/26/2017 for 21047" in weatherCaller.forecasts).toBeTruthy();
    expect("12/27/2017 for 21047" in weatherCaller.forecasts).toBeTruthy();
    expect(forecast1).toEqual(dummyForecast);
    expect(forecast2).toEqual(dummyForecast);
    expect(forecast3).toEqual(dummyForecast);
    expect(forecast4).toEqual(newForecast);
    expect(forecast5).toEqual(newForecast);
  });

  afterEach(function() {
    // Undo the monkeypatching
    weatherModuleRestore();
  });

});

Integration Tests for REST APIs

Jasmine can do black-box tests just as well as it can do white-box tests. Tests for REST API service calls are some of the most common integration-level tests. There are many REST request packages for Node.js, but frisby is designed particularly for testing. Frisby even has its own expect methods (though the standard Jasmine expect and matchers may still be used).
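For instance, a spec can verify a response entirely through frisby’s chainable expect methods. Below is a minimal sketch assuming frisby 2.x (with the URL hard-coded just for brevity):

// A spec using frisby's built-in expect methods
const frisby = require('frisby');

describe("English Wikipedia REST API", function() {

  it("should return a 200 with the expected title", function() {
    // Returning the promise lets Jasmine wait for the request to complete
    return frisby
      .get('https://en.wikipedia.org/api/rest_v1/page/summary/Pikachu')
      .expect('status', 200)
      .expect('json', 'title', 'Pikachu');
  });

});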

A best practice for black-box tests is to put config data for external dependencies into a config file. Config data for REST API calls could be URLs, usernames, and passwords. Never hard-code config data into test automation. JavaScript config files are super simple: just write a JSON file and read it during test setup using the “require” function, just like any module. The config data will be automatically parsed as a JavaScript object!

Below is an example test for calling Wikipedia’s REST API. It reads the base URL from the “spec/support/env.json” config file and uses it in the frisby call. (The config file must be pure JSON with no comments so that “require” can parse it.) The config file:

{
  "integration" : {
    "wikipediaServiceBaseUrl": "https://en.wikipedia.org/api/rest_v1"
  }
}

And the spec:

// --------------------------------------------------
// spec/integration/wikipedia.service.spec.js
// --------------------------------------------------

const frisby = require('frisby');

describe("English Wikipedia REST API", function() {

  const ENV = require("../support/env.json");
  const BASE_URL = ENV.integration.wikipediaServiceBaseUrl;

  describe("GET /page/summary/{title}", function() {

    it("should return the summary for the given page title", function(done) {
      frisby
        .get(BASE_URL + "/page/summary/Pikachu")
        .then(function(response) {
          expect(response.status).toBe(200);
          expect(response.json.title).toBe("Pikachu");
          expect(response.json.pageid).toBe(269816);
          expect(response.json.extract).toContain("Pokémon");
        })
        .done(done);
    })

  });

  // ...
});

End-to-End Tests for Web UIs

Jasmine can also be used for end-to-end Web UI tests. One of the most popular packages for web browser automation is Selenium WebDriver, which uses programming calls to interact with a browser like a real user. Selenium provides a WebDriver package for JavaScript on Node.js, but it is typically better to use Protractor.

Protractor integrates WebDriver with JavaScript test frameworks to make it easier to use. Jasmine is the default framework for Protractor, but Mocha, Cucumber, and any other JavaScript framework could be used. One of the best advantages Protractor has over WebDriver by itself is that Protractor does automatic waiting: explicit calls to wait for page elements are not necessary. This is a wonderful feature that eliminates a lot of repetitive automation code. Protractor also provides tools to easily set up the Selenium Server and browsers (including mobile browsers). Even though Protractor is designed for Angular apps, it can nevertheless be used for non-Angular front-ends.

Web UI tests can be quite complicated because they cover many layers and require extra configuration. Web page interactions frequently need to be reused, too. It is a best practice to use a pattern like the Page Object Model to handle web interactions in one reusable layer. Page objects pull WebDriver locators and actions out of test fixtures (like describe/it functions) so that they may be updated more easily when changes are developed for the actual web pages. (In fact, some teams choose to co-locate page object classes with product source code for the web app so that both are updated simultaneously.) The Page Object Model is a great way to manage the inherently complicated Web automation design.
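For a rough illustration, a minimal page object might look like the sketch below. The page, URL, and locators are hypothetical, and the browser, element, and by globals come from the Protractor runtime:

// A minimal page object sketch for Protractor (hypothetical page and locators)
class LoginPage {

  constructor() {
    this.username = element(by.id('username'));
    this.password = element(by.id('password'));
    this.loginButton = element(by.id('login'));
  }

  async load() {
    await browser.get('https://example.com/login');
  }

  async logIn(user, pass) {
    // Protractor synchronizes with Angular apps automatically,
    // so no explicit waits are needed here
    await this.username.sendKeys(user);
    await this.password.sendKeys(pass);
    await this.loginButton.click();
  }

}

module.exports = new LoginPage();

Specs then call methods like logIn instead of touching locators directly, so locator changes stay in one place.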

Beyond that sketch, this guide does not provide a full working example for Protractor with Jasmine because the Protractor documentation is pretty good. It contains a decent tutorial, setup and config instructions, framework integrations, and a full reference. Furthermore, proper Protractor setup requires careful local setup with a live site to test. Please refer to the official doc for more information. Most of the examples in the doc use Jasmine.

Basic Test Execution

The simplest way to run Jasmine tests is to use the “jasmine” command. Make sure you are in the project’s root directory when running tests. Below are example invocations.

# Run all specs in the project (according to the Jasmine config)
$ jasmine

# Run a specific spec by file path
$ jasmine spec/integration/wikipedia.service.spec.js

# Run all specs that match a path pattern
# Warning: this call is NOT recursive and will not search sub-directories!
$ jasmine spec/unit/*

# Run all specs whose titles match a regex filter
# This searches both "describe" and "it" titles
$ jasmine --filter="Calculator"

# Stop testing after the first failure happens
$ jasmine --stop-on-failure=true

# Run tests in a random order
# Optionally include a seed value
$ jasmine --random=true --seed=4321

Test execution options may also be set in the Jasmine config file.

Advanced Test Execution with Karma

Karma is a self-described “spectacular test runner for JavaScript.” Its main value is that it runs JavaScript tests in live web browsers (rather than merely on Node.js), testing actual browser compatibility. In fact, developers can keep Karma running while they develop code so they can see test results in real time as they make changes. Karma integrates with many test tools (including Istanbul for code coverage) and frameworks (including Jasmine). Karma itself runs on Node.js and is distributed as a number of packages for different browsers and frameworks. Check out this Google Testing Blog article to learn the original impetus behind developing Karma, originally called “Testacular.”

Karma and Protractor are similar in that they run tests against real web browsers, but they serve different purposes. Karma is meant for running unit tests against JavaScript code, whereas Protractor is meant for running end-to-end tests against a full, live site like a user. Karma tests go through a “back door” to exercise pieces of a site. Karma and Protractor are not meant to be used together for the same tests (see Protractor Issue #9 on GitHub). However, one project can use both tools at their appropriate test layers, as done for standard Angular testing.

This guide does not provide a custom example for Karma with Jasmine because it requires local setup with the right packages and browser versions. Karma packages are distributed through npm. Karma with Jasmine requires the main karma package, the karma-jasmine package, and a launcher package for each desired browser (like karma-chrome-launcher). There are also plenty of decent examples available online. Please refer to the official Karma documentation for more info.
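That said, a minimal config sketch may help orient newcomers. Assuming the karma, karma-jasmine, and karma-chrome-launcher packages are installed, a karma.conf.js file for this project’s unit specs might look like this:

// karma.conf.js - a minimal sketch for running unit specs in Chrome
module.exports = function(config) {
  config.set({
    frameworks: ['jasmine'],
    files: [
      'lib/**/*.js',
      'spec/unit/**/*.spec.js'
    ],
    browsers: ['Chrome'],
    reporters: ['progress'],
    singleRun: true
  });
};

Note that this project’s specs use require, so they would still need one of the module workarounds described below before they could actually run in a browser.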

Running Jasmine tests with Karma is not without its difficulties, however. One challenge is handling modules and imports. ECMAScript 6 (ES6) has a totally new syntax for modules and imports that is incompatible with the CommonJS require system used by Node.js. Node.js is working on ES6-style module support, but at the time this article was written, full support was not yet available. Module imports are troublesome for Karma because Karma is launched from Node.js (which uses require) but runs tests in a browser (which doesn’t support require). There are a few workarounds:

  • Use RequireJS to load modules.
  • Use Browserify to make require work in browsers.
  • Use rollup.js to bundle all modules into one to sidestep imports.
  • Use Angular with TypeScript, which builds and links everything automatically.

Angular Testing

Angular is a very popular front-end Web framework. It is a complete rewrite of AngularJS and is seen as an alternative to React. One of Angular’s perks is its excellent support for testing. Out of the box, new Angular projects come with config for unit testing with Jasmine/Karma and end-to-end testing with Jasmine/Protractor. It’s easy to integrate other automation tools like Istanbul code coverage or HTML reporting. Standard Angular projects using TypeScript also don’t suffer from the module import problem: imports are linked properly when TypeScript is compiled into JavaScript.

Angular unit tests are written just like any other Jasmine unit tests except for one main difference: the Angular testing utilities. These extra packages create a test environment (a “TestBed”) for testing each part of the Angular app internally and independently. Dependencies can be easily stubbed and mocked using Jasmine’s spies, with no need for sinon, because Angular’s dependency injection makes test doubles easy to substitute. NgRx also provides extended test utilities. The Angular testing utilities can seem overwhelming at first, but together with Jasmine, they make it easy to write laser-precise unit tests.

Another interesting best practice for Angular unit tests is to co-locate them with the modules they cover. For every *.js/*.ts file, there should be a *.spec.js/*.spec.ts file with the covering describe/it tests. This is not common practice for unit tests, but the Angular doc notes many advantages: tests are easy to find, missing coverage is easy to spot, and updates are less likely to be forgotten. The automatically-generated test config has settings to search the whole project for spec files.
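For example, a hypothetical service and its co-located spec would sit side by side:

src/app/heroes
|-- hero.service.ts
`-- hero.service.spec.ts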

Angular end-to-end tests are treated differently from unit tests, however. Since they test the app as a whole, they don’t use the Angular testing utilities, and they should be located in their own directory (usually named “e2e”). Thus, Angular end-to-end tests are really no different than any other Web UI tests that use Protractor. Jasmine is the default test framework, but it may be advantageous to switch to Cucumber.js for all the advantages of BDD.

This guide does not provide Angular testing examples because the official Angular documentation is stellar. It contains a tutorial, a whole page on testing, and live examples of tests (linked from the testing page).