
Book Review: Python Testing with pytest

tl;dr

  • Title: Python Testing with pytest
  • Author: Brian Okken (@brianokken)
  • Publication: 2017 (The Pragmatic Programmers)
  • Summary: How to use all the features of pytest for Python test automation – “simple, rapid, effective, and scalable.”
  • Prerequisites: Intermediate-level Python programming.

Summary

Python Testing with pytest is the book on pytest. Brian Okken covers all the ins and outs of the framework. The book is useful both as a tutorial for learning pytest and as a reference for specific framework features. It covers:

  • Getting started with pytest
  • Writing simple tests as functions
  • Writing more interesting tests with assertions, exceptions, and parameters
  • Using all the different execution options
  • Writing fixtures to flexibly separate concerns and reuse code
  • Using built-in fixtures like tmpdir, pytestconfig, and monkeypatch
  • Using configuration files to control execution
  • Integrating pytest with other tools like pdb, tox, and Jenkins

Appendices also cover:

  • Using Python virtual environments
  • Installing packages with pip
  • An overview of popular plugins like pytest-xdist and pytest-cov
  • Packaging and distributing Python packages

Praises

This book is a comprehensive guide to pytest. It thoroughly covers the framework’s features and points to more information elsewhere. Even though pytest has excellent online documentation, I still recommend this book to anyone who wants to become a pytest master. Online docs tend to be fragmented, with each piece limited in scope, whereas books like this one are designed to be read progressively and in order for maximal understanding of the material.

I love how this book is example-driven. Each section follows a simple yet powerful outline: idea → code → output → explanation. Having real code with real output truly cements the point of each mini-lesson. New topics are carefully unfolded so that they build upon previous topics, making the book read like a collection of tutorials. Examples at the end of every chapter challenge the readers to practice what they learn. The formatting of each section also looks great.

The extra info on related topics like pip and virtualenv is also a nice touch. Python pros probably don’t need it, but beginners might get stuck without it.

The rocket ship logo on the cover is also really cool!

Takeaways

pytest is one of the best functional test frameworks currently available in any language. It breaks the clunky xUnit mold, in which class structures were awkwardly superimposed over test cases as if one size fits all. Instead, pytest is lightweight and minimalist because it relies on functions and fixtures. Scope is much easier to manage, code is more reusable, and side effects can more easily be avoided. pytest has taken over Python testing because it is so Pythonic.
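To make that concrete, here is a minimal sketch of a function-style test with a fixture (a toy example of my own, not code from the book; all names are made up):

```python
# Minimal pytest sketch: a fixture handles setup/teardown, a plain function is the test.
# All names here are hypothetical.
import pytest

@pytest.fixture
def records():
    data = [1, 2, 3]   # setup
    yield data         # the test runs while the fixture is "live"
    data.clear()       # teardown: no state leaks into the next test

def test_record_count(records):
    # The fixture is injected by parameter name; no classes or setUp/tearDown required.
    assert len(records) == 3
```

If broader scope is ever needed, the same fixture can be declared with scope="module" or scope="session" without touching the test itself.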

Brian’s concise writing style has also inspired me to be more direct in my own writing. I tend to be rather verbose out of my desire to be descriptive. However, fewer words often leave a more powerful impression. They also make the message easier to comprehend. Python is beloved for its concise expressiveness, and as a Pythonista, it would be fitting for me to adopt that trait into my English.

If I had a wish list for a second edition, I’d love to see more info about assertions and other plugins (namely pytest-bdd). I think an appendix with comparisons to other Python test frameworks would also be valuable.

A Warning

I ordered a physical copy of this book directly from Amazon (not a third-party seller). Unfortunately, that copy was missing all the introductory content: the table of contents, the acknowledgements, and the preface. The first page after the front cover was Chapter 1. Befuddled, I reached out to Brian Okken (whom I met in person at PyCon 2018). We suspected that it was either a misprint or a bootleg copy. Either way, we sent the evidence to the publisher, and Amazon graciously exchanged my defective copy for the real deal. Please look out for this sort of problem if you purchase a printed copy of this book for yourself!

 

If you want to learn more about pytest, please read my article Python Testing 101: pytest.

Behavior-Driven Blasphemy

This is my 100th post on Automation Panda! I’m thrilled to see how much this blog has grown and how many people it has helped. For such a monumental occasion, I have chosen to voice a rather controversial opinion about test automation.

Behavior-driven development seems to be the software testing buzzword of the decade. What started as a refinement of test-driven development by developers in Europe and the UK quickly became the big process fad of the 2010s. The Cucumber project (now 10 years old) developed or inspired Gherkin-based test automation frameworks in all the major programming languages. Companies started requiring Given-When-Then format for acceptance criteria and test scenarios. Three Amigos meetings became standard calendar fixtures during sprints. Organizations that once undertook “Agile transformations” now have similar initiatives for BDD. For better or worse, BDD exists and cannot be ignored.

The dogmatic benefits of BDD are better collaboration and automation. However, leaders frequently insist that Gherkin-style test frameworks add value only when paired with practices like Example Mapping. “BDD is a process, not a tool,” is a common mantra. “Otherwise, the Gherkin just gets in the way.” Although I wholeheartedly agree that behavior-driven practices add significant value to the development process, I nevertheless espouse a rather blasphemous opinion:

BDD test automation frameworks are better than traditional frameworks for black box functional testing even when BDD processes are not followed.

What Exactly Are You Saying?

My claim is that behavior-driven test frameworks like Cucumber, SpecFlow, and behave are significantly better than traditional xUnit-style frameworks for testing live features. For example, I would rather use SpecFlow than NUnit for testing a Web app with Selenium WebDriver, whether or not the other two Amigos are with me. The resulting automation code has better structure, readability, and reusability.

I’m not saying that teams shouldn’t do BDD practices, and I’m not saying that the Three Amigos should be separated. Collaboration is key to success, and BDD really helps. Example Mapping is one of the most useful practices a development team can do. I’m also not saying that BDD frameworks should be used for all testing purposes – they are poorly suited for unit testing and for performance testing.

Objection!

I find myself very lonely in this opinion. BDD leaders repeatedly insist that BDD is not about testing and automation.

The most outspoken BDDers (mostly coalescing around the Cucumber community) have largely moved their focus to the collaboration benefits, almost forsaking the automation benefits. (This may not necessarily be true, but it appears that way based on the literature and materials floating on the Web.) That outlook is somewhat disingenuous because the main tools supporting BDD are, in fact, test frameworks.

BDD also has outspoken opponents – it’s love or hate. I’ve personally spoken with several engineers who despise Gherkin-based frameworks. “I can see how it would be valuable when a whole team embraces behavior-driven practices,” many have told me, “but otherwise, the Gherkin layer just gets in the way of automation.” I’ve heard it called “plaster” and “garbage.” Engineers just want to code their tests. And code should always be readable, right?


Testing is an inherently opinionated space. People can never seem to agree on things.

The Bigger Picture

Test automation must be developed regardless of any specific development practices, and its architecture must stand firmly in its own right. Unfortunately, both sides miss the bigger picture:

The best solution for test automation is a domain-specific language.

A domain-specific language (DSL) is a programming language with a purpose. It is designed to handle very specific needs, rather than general-purpose programming. For example:

  • SQL is a DSL for database queries.
  • XPath is a DSL for finding elements in an XML document.
  • YAML is a DSL for object serialization.

Gherkin is also a DSL – for behavior specification.

Domain-specific languages naturally suit test automation due to the clear difference between test cases and test code. Test cases are procedures that exercise product behavior. Anyone can write a test case. They are dictated or explained in plain language. Test code, however, is the software implementation of test cases. Test code handles function calls, logging, exceptions, and all those other little programming details that help run tests. A test automation DSL separates those concerns: test cases are written in a special language, and the interpreter handles repetitive, low-level details. Extensions of some kind handle product-specific interactions. The purpose of a language is to effectively express intention – and the intention is to test the product.

To truly achieve an optimal solution, however, the DSL and its interpreter must be treated as part of the automation software, just like the test cases and extensions. Remember, a language’s interpreter is just another piece of software. The interpreter is part of the separation of concerns and the single responsibility principle. Concerns that would typically be handled by classes and functions in traditional test code should be moved to the interpreter. For example, the interpreter should automatically log every test case step, rather than forcing the author to write explicit logging statements.
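As a toy illustration (this is not the NetApp tool described next, and the step names are invented), a testing DSL interpreter can be a few dozen lines of Python that dispatch plain-language steps to registered functions and log every step automatically:

```python
# Toy sketch of a testing DSL interpreter: step implementations are registered
# by phrase, and the interpreter logs every step so test authors don't have to.
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
STEPS = {}

def step(phrase):
    """Register a function as the implementation of a plain-language step."""
    def decorator(func):
        STEPS[phrase] = func
        return func
    return decorator

@step("start the service")
def start_the_service():
    pass  # product-specific setup would go here

@step("check the status")
def check_the_status():
    assert True  # product-specific verification would go here

def run(test_case):
    """Interpret a test case: one plain-language step per line."""
    for line in test_case.strip().splitlines():
        phrase = line.strip()
        logging.info("STEP: %s", phrase)   # automatic logging, not the author's job
        STEPS[phrase]()                    # dispatch to the step implementation

run("""
    start the service
    check the status
""")
```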

When I worked at NetApp years ago, I implemented a DSL to test platform-level features of our operating system. I called it DS – short for “Design Steps” from HP ALM, though the name also carried an affinity for the Nintendo DS. NetApp’s test automation code was written entirely in Perl at the time, so I implemented the DS interpreter in Perl to reuse existing libraries. DS test cases were typically only a dozen lines long each, and DS expressions could call specially-written Perl modules directly for complete extensibility. During the first big release using DS, my team saved countless hours of automation development time compared to the previous release while delivering more tests. I also did this before I had ever heard of BDD.

Unfortunately, most teams have neither the time to develop their own testing DSL nor the understanding of compiler theory to build it right. And when teams are given such a language, they typically limit themselves to the provided implementation instead of taking ownership and extending the language for their needs.


The original Nintendo DS. Fun times!

Who Truly Misunderstands Gherkin?

Enter Gherkin: the world’s first major general-purpose, off-the-shelf language for test automation. It is general enough to cover any case through its plain language steps, yet specific enough to standardize tests. Users don’t need to be compiler theory experts – they just make up their own step names and provide the definition code to execute them. Early BDD projects like JBehave and Cucumber packaged an interpreter as a test framework and delivered it to a testing world still stuck on JUnit. The need for a testing DSL was there, whether or not the BDD folks meant to serve it.
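For a flavor of how low the barrier is, here is a hedged sketch using behave, one of the Gherkin-based Python frameworks mentioned earlier; the feature text and step names are invented:

```python
# features/steps/adding.py — a hedged behave sketch; all step names are invented.
# The matching feature file (features/adding.feature) would read:
#
#   Feature: Adding
#     Scenario: Add two numbers
#       Given the calculator is cleared
#       When I add 2 and 3
#       Then the total should be 5
#
from behave import given, when, then

@given("the calculator is cleared")
def clear(context):
    context.total = 0

@when("I add {a} and {b}")            # one step definition serves any pair of numbers
def add(context, a, b):
    context.total += int(a) + int(b)

@then("the total should be {expected}")
def check_total(context, expected):
    assert context.total == int(expected)
```

The step definitions are ordinary Python; the Gherkin is the DSL that the framework interprets.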

Cucumber-ites frequently bemoan that their framework is misunderstood by the masses. They shudder to see teams using their framework purely for test automation. However, Cucumber effectively lowered the entry barrier for teams to make their own testing DSLs. Kodak did the same thing for film: they made it cheap and standard so anyone could be a photographer. Not everyone who uses a BDD framework misunderstands its purpose: some (like me) just see a different value proposition from the one preached by orthodox BDD practitioners. Gherkin fills a need that nobody knew they had. Its popularity validates that claim.

Benefits Apart from Process

Using a BDD framework adds much value to testing and development even without BDD processes. Below are just a handful of benefits:

  1. Focus first on good scenarios. Gherkin forces authors to think before they code.
  2. Faster automation development. Gherkin steps are reusable and parametrizable.
  3. Stronger structure. Engineers know where to put things in the framework.
  4. Test understandability. Anyone can read scenarios because they are written in plain language. Business people can help. New people can pick it up fast.
  5. Test sharing. Feature files can be shared apart from test code, which can be helpful for business partners.
  6. Test similarity. Tests all look the same. Team members can more easily help each other.
  7. Clearer failures. When a scenario fails, reports show exactly what step failed.
  8. Simpler bug reports. Use scenario steps as instructions to reproduce the failure.
  9. 2-phase test reviews. Review the Gherkin first and the test code second, so that test cases are vetted before effort is wasted implementing the wrong things.
  10. BDD enablement. Using a BDD framework opens the door for a team to embrace better behavioral practices in the future.

I have written about these advantages before.

Case Studies

I’m also not the only one who finds value in BDD test frameworks outside of the full BDD process. Below are five case studies.

radish

radish is a Python test framework inspired by Cucumber. Its DSL syntax is a superset of Gherkin that adds preconditions, loops, variables, and expressions. These language additions indicate a bias towards automation because they enable engineers to write tests more programmatically, albeit in a Gherkin-ese way.

Karate

Karate is a test framework with a full DSL based on Gherkin with steps specifically tailored to Web service calls. Although it is implemented in Java, testers do not need to do any Java programming to write complete test cases from day one. Peter Thomas, the creator of Karate, unabashedly declares that Karate does not truly adhere to BDD but nevertheless uses Cucumber for its automation benefits. (Note: Karate is working to move completely off of Cucumber. See GitHub issue #444.)

REST Assured

REST Assured is a Java package for testing REST APIs. Unlike Karate, REST Assured provides a fluent syntax (and not a DSL) for writing service calls directly in Java code. The fluent syntax is based on Gherkin: given() a request spec is created, when() the call is made, then() verify the response. Although REST Assured is not a full testing framework, it nevertheless pulls inspiration from BDD frameworks for order and structure.

Cycle

Cycle is a BDD-focused solution from Tryon Solutions for testing Web, terminal, and desktop apps. Cycle is unique because it provides out-of-the-box steps for all types of supported testing so that no programming experience is required. Testers write feature files using Cycle 2.0’s slick new Electron app. Scenarios are written in CycleScript, a Gherkin-ese language with additions like variables and sub-scenario calls. Steps tend to be imperative, but that’s the tradeoff for not requiring lower-level programming.

Hexawise

Hexawise is a combinatorial testing tool designed to maximize coverage with minimal test counts by smartly joining feature variations. It helps testers write better tests with less redundancy and fewer gaps. Although Hexawise has historically assisted manual testers, it also can generate Gherkin feature files for test variations.


Not all cucumbers are the same. Above is a sea cucumber.

Good Enough?

Gherkin-based test frameworks are not perfect, but they do provide good structure. They gained popularity outside of the pure BDD movement because they genuinely added value to testing and automation. Like any other tool, teams will use them in both good and bad ways. (Trust me, I’ve seen scary Gherkin.)

It’s interesting to see how groups outside the Cucumber diaspora are attempting to solve the limitations of pure Gherkin. Each case study above showed a unique path. Clearly, the test automation problem has not yet been completely solved, but current BDD frameworks are the best off-the-shelf solutions we have until a new software testing movement comes along.

How Do I Know My Tests Add Value?

Software testing is a huge effort, especially for automation. Teams can spend a lot of time, money, and resources on testing (or not). People literally make careers out of it. That investment ought to be worthwhile – we shouldn’t test for the sake of testing.

So, therein lies the million-dollar question: How do we know that our tests add meaningful value?

Or, more bluntly: How do we know that testing isn’t a waste of time?

That’s easy: bugs!

The stock answer goes something like this: We know tests add value when they find bugs! So, let’s track the number of bugs we find.

That answer is wrong, despite its good intentions. Bug count is a terrible metric for judging the value of tests.

What do you mean bug counts aren’t good?

I know that sounds blasphemous. Let’s unpack it. Finding bugs is a good thing, and tests certainly should find bugs in the features they cover. But, the premise that the value of testing lies exclusively in the bugs found is wrong. Here’s why:

  1. The main value of testing is fast feedback. Testing serves two purposes: (1) validating goodness and (2) identifying badness. Passing tests are validated goodness. Failing tests, meaning uncovered bugs, are identified badness. Both types of feedback add value to the development process. Developers can proceed confidently with code changes when trustworthy tests are passing, and management can assess lower risk. Unfortunately, bug counts don’t measure that type of goodness.
  2. Good testing might actually reduce bug count. Testing means accountability for development. Developers must think more carefully about design. They can also run tests locally before committing changes. They could even do Test-Driven Development. Better practices could prevent many bugs from ever happening.
  3. Tracking bug count can drive bad behavior. When a high bug discovery rate looks good (or, worse, is tied to quotas), testers will strive to post numbers. If they don’t find critical bugs, they will open bug reports for nitpicks and trivialities. The extra effort they spend to report inconsequential problems may not be of value to the business – wasting their time and the developers’ time all for the sake of metrics.
  4. Bugs are usually rare. Unless a team is dysfunctional, the product usually works as expected. Hundreds of test runs may not yield a single bug. That’s a wonderful thing if the tests have good coverage. Those tests still add value. Saying they don’t belittles the whole testing effort.

Then what metrics should we use?

Bugs happen unpredictably, and unlimited testing is not possible. Metrics should focus on the return-on-investment for testing efforts. Here are a few (a small sketch of computing two of them follows the list):

  1. Time-to-bug-discovery. Rather than track bug counts, track the time until each bug is discovered. This metric genuinely measures the feedback loop for test results. Make sure to track the severity of each bug, too. For example, if high-severity bugs are not caught until production, then the tests don’t have enough coverage. Teams should strive for the shortest time possible – fast feedback means lower development costs. This metric also encourages teams to follow the Testing Pyramid.
  2. Coverage. Coverage is the degree to which tests exercise product behavior. Higher coverage means more feedback and greater chances of identifying badness. Most unit test frameworks can use code coverage tools to verify paths through code. Feature coverage requires extra process or instrumentation. Tests should avoid duplicate coverage, too.
  3. Test failure proportions. Tests fail for a variety of reasons. Ideally, tests should fail only when they discover bugs. However, tests may also fail for other reasons: unexpected feature changes, environment instability, or even test automation bugs. Non-bug failures disrupt the feedback loop: they force a team to fix testing problems rather than product problems, and they might cause engineers to devalue the whole testing effort. Tracking failure proportions will reveal what problems inhibit tests from delivering their top value.
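As a rough illustration, here is a hedged sketch of computing the first and third metrics from hypothetical records (the data structures are invented; a real team would pull them from its bug tracker and CI results):

```python
# Hedged sketch: computing time-to-bug-discovery and failure proportions
# from hypothetical, hard-coded records.
from collections import Counter
from datetime import date

# Each bug: (severity, date the offending change was merged, date the bug was found)
bugs = [
    ("high", date(2019, 3, 1), date(2019, 3, 2)),
    ("low",  date(2019, 3, 5), date(2019, 4, 1)),
]
for severity, introduced, found in bugs:
    days = (found - introduced).days
    print(f"{severity}-severity bug discovered after {days} days")

# Each test result: "pass", or "fail:<reason>" when it failed
results = ["pass"] * 95 + ["fail:product-bug"] * 2 + ["fail:test-bug"] * 3
reasons = Counter(r.split(":")[1] for r in results if r.startswith("fail"))
total_failures = sum(reasons.values())
for reason, count in reasons.items():
    print(f"{reason}: {count / total_failures:.0%} of failures")
```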


EGAD! How Do We Start Writing (Better) Tests?

Some have never automated tests and can’t check themselves before they wreck themselves. Others have thousands of tests that are flaky, duplicative, and slow. Wa-do-we-do? Well, I gave a talk about this problem at PyOhio 2018. The language used for example code was Python, but the principles apply to any language. Please watch it below!

The Testing Pyramid

The “Testing Pyramid” is an industry-standard guideline for functional test case development. Love it or hate it, the Pyramid has endured since the mid-2000s because it continues to be practical. So, what is it, and how can it help us write better tests?

Layers

The Testing Pyramid has three classic layers:

  • Unit tests are at the bottom. Unit tests directly interact with product code, meaning they are “white box.” Typically, they exercise functions, methods, and classes. Unit tests should be short, sweet, and focused on one thing/variation. They should not have any external dependencies – mocks/monkey-patching should be used instead (see the sketch below this list).
  • Integration tests are in the middle. Integration tests cover the point where two different things meet. They should be “black box” in that they interact with live instances of the product under test, not code. Service call tests (REST, SOAP, etc.) are examples of integration tests.
  • End-to-end tests are at the top. End-to-end tests cover a path through a system. They could arguably be defined as multi-step integration tests, and they should also be “black box.” Typically, they interact with the product like a real user. Web UI tests are examples of end-to-end tests because they need the full stack beneath them.

All layers are functional tests because they verify that the product works correctly.
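As promised above, here is a hedged sketch of a bottom-layer test: a pytest unit test that uses the built-in monkeypatch fixture to remove the external dependency (the product function and names are invented):

```python
# Hedged sketch: a unit test with no external dependencies, using pytest's
# built-in monkeypatch fixture. The product function is hypothetical.
import requests

def get_status(url):
    """Product code under test: returns the HTTP status of a URL."""
    return requests.get(url).status_code

def test_get_status_ok(monkeypatch):
    # Replace the real network call so the unit test stays fast and isolated.
    class FakeResponse:
        status_code = 200
    monkeypatch.setattr(requests, "get", lambda url: FakeResponse())
    assert get_status("http://example.com") == 200
```

An integration test of the same behavior would call a live endpoint instead, and an end-to-end test would drive the whole UI, which is why cost climbs with each layer.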

Proportions

The Testing Pyramid is triangular for a reason: there should be more tests at the bottom and fewer tests at the top. Why?

  1. Distance from code. Ideally, tests should catch bugs as close to the root cause as possible. Unit tests are the first line of defense. Simple issues like formatting errors, calculation blunders, and null pointers are easy to identify with unit tests but much harder to identify with integration and end-to-end tests.
  2. Execution time. Unit tests are very quick, but end-to-end tests are very slow. Consider the Rule of 1’s for Web apps: a unit test takes ~1 millisecond, a service test takes ~1 second, and a Web UI test takes ~1 minute. If test suites have hundreds to thousands of tests at the upper layers of the Testing Pyramid, then they could take hours to run (the arithmetic is worked out below the list). An hours-long turnaround time is unacceptable for continuous integration.
  3. Development cost. Tests near the top of the Testing Pyramid are more challenging to write than ones near the bottom because they cover more stuff. They’re longer. They need more tools and packages (like Selenium WebDriver). They have more dependencies.
  4. Reliability. Black box tests are susceptible to race conditions and environmental failures, making them inherently more fragile. Recovery mechanisms take extra engineering.
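To work the Rule of 1’s arithmetic (rough approximations, of course): 1,000 unit tests finish in about a second, 1,000 service tests take roughly 17 minutes, and 1,000 Web UI tests run serially would take nearly 17 hours. Only the bottom layer can afford large test counts.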

The total cost of ownership increases when climbing the Testing Pyramid. When deciding the level at which to automate a test (and whether to automate it at all), a risk-based strategy of pushing tests down the Pyramid is better than writing all tests at the top. Each proportionate layer mitigates risk at its optimal return-on-investment.

Practice

The Testing Pyramid should be a guideline, not a hard rule. Don’t require hard proportions for test counts at each layer. Why not? Arbitrary metrics cause bad practices: a team might skip valuable end-to-end tests or write needless unit tests just to hit numbers. W. Edwards Deming would shudder!

Instead, use loose proportions to foster better retrospectives. Are we covering too many input combos through the Web UI when they could be checked via service tests? Are there unit test coverage gaps? Do we have a pyramid, a diamond, a funnel, a cupcake, or some other wonky shape? Each layer’s test count should be roughly an order of magnitude smaller than the layer beneath it. Large Web apps often have 10K unit tests, 1K service tests, and a few hundred Web UI tests.

Resources

Check out the many other great articles on the Testing Pyramid available around the Web.

Why Python is Great for Test Automation

Python is an incredible programming language. As Dan Callahan said in his PyCon 2018 keynote, “Python is the second best language for anything, and that’s an amazing aspiration.” For test automation, however, I believe it is one of the best choices. Here are ten reasons why:

#1: The Zen of Python

The Zen of Python, as codified in PEP 20, is an ideal guideline for test automation. Test code should be a natural bridge between plain-language test steps and the programming calls to automate them. Tests should be readable and descriptive because they describe the features under test. They should be explicit in what they cover. Simple steps are better than complicated ones. Test code should add minimal extra verbiage to the tests themselves. Python, in its concise elegance, is a powerful bridge from test case to test code.

(Want a shortcut to the Zen of Python? Run “import this” at the Python interpreter.)

#2: pytest

pytest is one of the best test frameworks currently available in any language, not just for Python. It can handle any functional tests: unit, integration, and end-to-end. Test cases are written simply as functions (meaning no side effects as long as globals are avoided) and can take parametrized inputs. Fixtures are a generic, reusable way to handle setup and cleanup operations. Basic “assert” statements have automatic introspection so failure messages print meaningful values. Tests can be filtered when executed. Plugins extend pytest to do code coverage, run tests in parallel, use Gherkin scenarios, and integrate with other frameworks like Django and Flask. Other Python test frameworks are great, but pytest is by far the best in show. (Pythonic frameworks always win in Python.)
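A hedged sketch of two of those features together, parametrized inputs and a bare assert statement (the test and its data are made up):

```python
# Hedged sketch: a parametrized pytest test with a plain assert.
# On failure, pytest's introspection prints the actual and expected values.
import pytest

@pytest.mark.parametrize(
    "word, expected",
    [("pytest", "tsetyp"), ("panda", "adnap")],
)
def test_reverse(word, expected):
    assert word[::-1] == expected
```

If a case failed, pytest would print the reversed string next to the expected value without any custom assertion message.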

#3: Packages

For all the woes about the CheeseShop, Python has a rich set of useful packages for testing: pytest, unittest, doctest, tox, logging, paramiko, requests, Selenium WebDriver, Splinter, Hypothesis, and others are available as off-the-shelf ingredients for custom automation recipes. They’re either built into the standard library or just a “pip install” away. No reinventing wheels here!

#4: Multi-Paradigm

Python is object-oriented and functional. It lets programmers decide if functions or classes are better for the needs at hand. This is a major boon for test automation because (a) stateless functions avoid side effects and (b) simple syntax for those functions makes them readable. pytest itself uses functions for test cases instead of shoehorning them into classes (à la JUnit).

#5: Typing Your Way

Python’s out-of-the-box dynamic duck typing is great for test automation because most feature tests (“above unit”) don’t need to be picky about types. However, when static types are needed, projects like mypy, Pyre, and MonkeyType come to the rescue. Python provides typing both ways!
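A tiny sketch of what “typing both ways” can look like in test helper code (both helpers are invented; the annotated one is something a tool like mypy could check):

```python
from typing import List

def count_items(response):              # duck-typed: anything with a .json() method works,
    return len(response.json())         # which is usually fine above the unit level

def count_names(names: List[str]) -> int:   # annotated: mypy or Pyre can verify callers
    return len(names)
```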

#6: IDEs

Good IDE support goes a long way to make a language and its frameworks easy to use. For Python testing, JetBrains PyCharm provides visual test runners for pytest, unittest, and doctest out of the box, and its Professional Edition includes support for BDD frameworks (like pytest-bdd, behave, and lettuce) and Web development. For a lighter offering, Visual Studio Code is taking the world by storm. Its Python extensions support all the good stuff: snippets, linting, environments, debugging, testing, and a command line terminal right in the window. Atom, Sublime, PyDev, and Notepad++ also get the job done.

#7: Command Line Workflow

Python and the command line are like peanut butter and jelly – a match made in heaven. The entire test automation workflow can be driven from the command line. Pipenv can manage packages and environments. Every test framework has a console runner to discover and launch tests. There’s no need to “build” test code first because Python is an interpreted language, further simplifying execution. Rich command line support makes testing easy to manage manually, with tools, or as part of build scripts / CI pipelines.

As a bonus, automation modules can be called from the Python REPL interpreter or, even better, a Jupyter notebook. What does this mean? Automation-assisted exploratory testing! Imagine using Python calls to automatically steer a Web app to a point that requires a manual check. Calls can be swapped out, rerun, skipped, or changed on the fly. Python makes it possible.
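A hedged sketch of that workflow (assuming Selenium and a matching browser driver are installed; the URL and locator are made up):

```python
# Hedged sketch of automation-assisted exploratory testing from a Python REPL
# or Jupyter cell. The URL and element locator are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/app")
driver.find_element(By.ID, "login").click()   # steer the app to the point of interest
# ...pause here, inspect the page manually, then keep issuing calls...
driver.quit()
```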

#8: Ease of Entry

Python has always been friendly to beginners thanks to its Zen, whether those beginners are programming newbies or expert engineers. This gives Python a big advantage as an automation language choice because tests need to be done quickly and easily. Nobody wants to waste time when the features are in hand and just need to be verified. Plus, many manual software testers (often without programming experience) are now starting to do automation work (by choice or by force) and benefit from Python’s low learning curve.

#9: Strength for Scalability

Even though Python is great for beginners, it’s also no toy language. Python has industrial-grade strength because its design always favors one right way to get a job done. Development can scale thanks to meaningful syntax, good structure, modularity, and a rich ecosystem of tools and packages. Command line versatility enables it to fit into any tool or workflow. The fact that Python may be slower than other languages is not an issue for feature tests because system delays (such as response times for Web pages and REST calls) are orders of magnitude slower than language-level performance hits.

#10: Popularity

Python is one of the most popular programming languages in the world today. It is consistently ranked near the top on TIOBE, Stack Overflow, and GitHub (as well as GitHut). It is a beloved choice for Web developers, infrastructure engineers, data scientists, and test automationeers alike. The Python community also powers it forward. There is no shortage of Python developers, nor is there any dearth of support online. Python is not going away anytime soon. (Python 3, that is.)

Other Languages?

The purpose of this article is to highlight what makes Python great for test automation based on its own merits. Although I strongly believe that Python is one of the best automation languages, other choices like Java, C#, and Ruby are also viable. Check out my article The Best Programming Language for Test Automation for a comparison.

 

This article was posted with the author’s permission on both Automation Panda and PyBites.

Cypress.io and the Future of Web Testing

What is Cypress.io?

Cypress.io is an up-and-coming Web test automation framework. It is open source and written entirely in JavaScript. Unlike Selenium WebDriver tests that work outside the browser, Cypress works directly inside the browser. It enables developers to write front-end tests entirely in JavaScript, directly accessing everything within the browser. As a result, tests run much more quickly and reliably than Selenium-based tests.

Some nifty features include:

  • A rich yet simple API for interactions with automatic waiting
  • Mocha, Chai, and Sinon bundled in
  • A sleek dashboard with automatic reloads for Test-Driven Development
  • Easy debugging
  • Network traffic control for validation and mocking
  • Automatic screenshots and videos

Cypress was clearly developed for developers. It enables rapid test development with rapid feedback. The Cypress Test Runner is free, while the Cypress Dashboard Service (for better reporting and CI) will require a paid license.

How Do I Start Using Cypress?

I won’t post examples or instructions for using Cypress here. Please refer to the Cypress documentation for getting started and the tutorial video below. Make sure your machine is set up for JavaScript development.

Will Cypress Replace WebDriver?

TL;DR: No.

Cypress has its niche. It is ideal for small teams whose stacks are exclusively JavaScript and whose developers are responsible for all testing. However, WebDriver still has key advantages.

  1. While Selenium WebDriver supports nearly all major browsers, Cypress currently supports only one browser: Google Chrome. That’s a major limitation. Web apps do not work the same across browsers. Many industries (especially banking and finance) put strict controls on browser types and versions, too.
  2. Cypress is JavaScript only. Its website proudly touts its JavaScript purity like a badge of honor. However, that has downsides. First, all testing must happen inside the bubble of the browser, which makes parallel testing and system interactions much more difficult. Second, testers must essentially be developers, which may not work well for all teams. Third, other programming languages that may offer advantages for testing (like Python) cannot be used. Selenium WebDriver, on the other hand, has multiple language bindings and lets tests live outside the browser.
  3. Within the JavaScript ecosystem, Cypress is not the only all-in-one end-to-end framework. Protractor is more mature, more customizable, and easier to parallelize. It wraps Selenium WebDriver calls for simplification and safety, offering much of the same ease of use as Cypress’s API.
  4. The WebDriver standard is a W3C Recommendation. What does this mean? All major browsers have a vested interest in implementing the standard. Selenium is simply the most popular implementation of the standard. It’s not going away. Cypress, however, is just a cool project backed with commercial intent.


What Does Cypress Mean for the Future?

There are a few big takeaways.

  1. JavaScript is taking over the world. It was the most popular language on GitHub in 2017. JavaScript-only stacks like MEAN and MERN are increasingly popular. The demand for a complete JavaScript-only test framework like Cypress is further evidence.
  2. “Bundled” test frameworks are becoming popular. Historically, a test framework simply provided test structure, basic fixtures, and maybe an assertion library (like JUnit). Then, extra test packages became popular (like Selenium WebDriver, REST APIs, mocking, logging, etc.). Now, new frameworks like Cypress and Protractor aim to provide pre-canned recipes of all these pieces to simplify the setup.
  3. Many new test frameworks will likely be developer-centric. There is a trend in the software industry (especially with Agile) of eliminating traditional tester roles and putting testing work onto developers. The role of the “Software Engineer in Test” – a developer who builds test systems – is also on the rise. Test automation tools and frameworks will need to provide good developer experience (DX) to survive. Cypress is poised to ride that wave.
  4. WebDriver is not perfect. Cypress was developed in large part to address WebDriver’s shortcomings, namely the slowness, difficulty, and unreliability (though unreliability is often a result of poor implementation). Many developers don’t like to use Selenium WebDriver, and so there will be a constant itch to make something better. Cypress isn’t there yet, but it might get there one day.