
BDD Gherkin Guidelines for AI Coding and Testing

AI coding agents following BDD (Behavior-Driven Development) principles can write great Gherkin scenarios if they are given the proper rules. Without explicit rules, AI-generated Gherkin often drifts into vague Then steps, UI-heavy scripts, multi-behavior scenarios, and placeholder examples that read like filler. That is not a model problem alone; it is a missing context problem.

I have written Gherkin Guidelines for AI, an open-source context file created to become a default BDD reference for AI-assisted scenario writing, AI code review, and Gherkin-based test automation. It is one Markdown file you can attach to Cursor, Claude, Copilot, Codex, or any tool that accepts project context.

The context file is gherkin-guidelines.md, located in the GitHub repository at https://github.com/AutomationPanda/gherkin-guidelines-for-ai.

To add these guidelines to your project:

  1. Download gherkin-guidelines.md from the GitHub repository.
  2. Place it in your own project alongside your specs or context files.
  3. Wire it up to your project’s rules, skills, or sub-agents so your AI coding agents will abide by it.

Please review the repository’s README for full setup and usage instructions.

When you are ready, try it on your next user story: load the guidelines, ask your agent for scenarios, and see how it feels when everyone is reading from the same playbook. Good specs are a team sport, and this file is here to make your first pass a little lighter and a lot clearer. Whether you want to simply code the vibes or do all-out Spec-Driven Development, I hope that my Gherkin guidelines can help!

A lonely panda staring at an ancient pyramid and a modern skyscraper.

The Testing Skyscraper: A Modern Alternative to the Testing Pyramid

Every good software tester knows that a good testing strategy should adhere to the classic Testing Pyramid structure: a strong base of unit tests at the bottom, a solid layer of API tests in the middle, and a few UI tests at the top for good measure. The Testing Pyramid has been around longer than I’ve been working in the software industry, and it is arguably the most prevalent mental model in the discipline of testing.

For years, I abided by the Testing Pyramid. I formed my test plans based upon it. Heck, I even wrote a popular article about it. However, after many years of blindly accepting it, I’m ready to make a rather bold claim: the Testing Pyramid is an antiquated scheme that deceives testers. I’m leaving the pyramid scheme and embracing a new, more modern approach. Even if you think this is heresy, please allow me to explain my rationale.

The Testing Pyramid: A Relic of History

I started my professional career in software in 2007. Back then, Apple had just released the first iPhone, and Facebook was so new that they only allowed college students to create accounts. Web applications, RESTful architecture, and Selenium were all new things. Developing and testing software systems looked much different.

The Testing Pyramid evolved as a simple mental model to help testers decide what to test and how to test it based on the constraints of the time. Web UI testing was notoriously difficult. Browsers were not as standardized as they are today. Selenium WebDriver enabled UI automation but required testers to write their own frameworks around it. Test execution was often slow and flaky. As a result, testers called UI tests “bad” and did everything they could to avoid them, favoring lower-level tests instead. Unit tests were “good” because they were fast, reliable, and close to the code they covered. Plus, code coverage tools could automatically quantify coverage levels and identify gaps. API tests were “okay” because they were typically small and fast, even if they needed to make a network hop to a live environment. Thus, a “proper” test strategy took a pyramidal shape that favored lower level tests for their speed and reliability. It made sense at the time.

Are We Stuck in the Past?

The factors that pushed strategies to take a triangular shape have changed since the inception of the Testing Pyramid all those years ago. Pyramids now feel like relics of ancient history. Let’s take a reality check.

  1. UI testing tools are better, faster, and more reliable. New frameworks like Playwright and Cypress provide greater stability through automatic waiting, faster execution times, and overall better testing experiences. Selenium is still kicking with the BiDi protocol for better testing support, Selenium Manager for automatic driver management, and a plethora of community projects (like Boa Constrictor) helping testers maximize Selenium’s potential.
  2. Traditional API testing can largely be replaced by other kinds of tests. Internal handler unit tests can cover the domain logic for what happens inside the services. Contract tests can cover the handshakes between different services to make sure updates to one won’t break the integrations with others. And UI tests can make sure the system works as a whole.
  3. Test orchestration can now run tests continuously. Tests can run for every code change. They can run for pull requests. Some developers even run end-to-end tests locally before committing changes. The ability to deliver fast feedback on important areas matters far more than the types or times of tests.

Therefore, it is wrong to say a test is bad simply based on its type. All test types are good because they mitigate different kinds of risks. We should focus on building robust continuous feedback loops rather than quotas for test types.

The Testing Skyscraper: A New Model

I think a better mental model for modern testing is the Testing Skyscraper. The skyscraper is a symbol of industrial might and technological advancement. Each skyscraper has a unique architecture that makes it stand out against the skyline. Pyramids get narrower as you approach the top, but skyscrapers have several levels of varying sizes and layouts, where each floor is tailored to the needs of the building’s tenants.

Skyscrapers are a great analogy for testing strategies because one size does not fit all:

  • Testers can architect their strategies to meet their needs. They can design them as they see fit.
  • Testers can build out tests at any level they need. Every level of testing is deemed good if it meets the business needs.
  • Testers can build out as much testing at each level as they like. A floor may have zero-to-many “tenants.”
  • Testers can choose to skip tests at different levels as a calculated risk. They’ll just be empty floors in the building until needs change.
  • New testing tools are as strong as steel. Testers can build strategies that scale upwards and onwards, faster and higher than ever before.

The shape of the skyscraper does NOT imply that there should be an equal number of tests at each level. Instead, the metaphor implies that each test strategy is unique and that each level can be built as needed with the freedom of modern architecture. It’s not about quantities or quotas.

I’ve seen anti-pattern models such as ice cream cones and cupcakes. The Testing Pyramid might now be an anti-pattern as well.

Modern Architecture for the Present Day

Pyramids were great for their time. The ancients like the Egyptians, the Sumerians, and the Mayans built impressive pyramids that still stand today. However, no civilization has built new pyramids for centuries, unless you count the ones at the Louvre or in Las Vegas. They’re impractical. They’re short. They require an enormous base. Let’s let go of the past and embrace the modern future. Let’s build structures that reflect our times. Let’s build Testing Skyscrapers that reach for the stars – and look snazzy while doing it.


Vibe coding while live streaming

AI-assisted coding tools are great, but they can do unexpected things.

I gave a 90-minute workshop today on Playwright at Testµ 2025, an online testing conference hosted by LambdaTest. I’ve given my Playwright workshop many times before, but I’ve always taught it with “traditional” coding techniques – the way we automated tests before LLMs hit the scene. In today’s workshop, I tried to spice it up with vibe coding in Cursor. We had mixed results.

Perhaps my approach was too ambitious. I tried to develop a small web app first and then teach how to automate tests for it. I’ve had great success recently building small web apps quickly with AI. Unfortunately, though, the workshop app turned out to be a mess. Thankfully, I had another pre-built web app handy as a backup plan. I was able to get Playwright tests up and running with it pretty quickly. The AI did a decent job refining a script produced by Playwright’s code generator, removing duplicate interactions, adding assertions, and abstracting steps into page objects.

My “workshop” was really more like a livestream. It’s hard to set up lessons with exercises on a short virtual call. Even though the code didn’t turn out like I planned, I was honest and direct with the attendees. I demonstrated where AI-assisted coding tools shine and where they stink. We came up with decent results for Playwright testing. And the attendees seemed to get a lot of value out of it. They asked tons of questions and remained active in the chat for the whole session.

While I am slightly disappointed in myself for not being more prepared to avoid pitfalls, I’m glad that I could show the real me. I’m not perfect in my software practices, but I can still be productive, and I can deliver meaningful value. Hopefully, my workshop encouraged others to be bold in trying new things. And I even learned a few things to make my future workshops better.

Running tests in a Java Maven project

Java continues to be one of the most popular languages for test automation, and Maven continues to be its most popular build tool. Adding tests in the right place to a Java project with Maven can be a bit tricky, however. Let’s briefly learn how to do it. These steps work for both JUnit and TestNG.

Test code location

Maven projects follow the Standard Directory Layout. The main code should go under src/main, while all test code should go under src/test:

  • src/test/java should hold Java test classes
  • src/test/resources should hold resource files that tests use

Your project directory should look something like this:

project/
+-- src/
|   +-- main/
|   |   +-- java/
|   |   \-- resources/
|   \-- test/
|       +-- java/
|       \-- resources/
\-- pom.xml

Don’t put test code in the main source folder. You don’t want to include it with the final build artifact. The project might have other files as well, like a README.

Unit tests

The Maven Surefire Plugin runs unit tests during Maven’s test phase. To run unit tests:

  1. Add maven-surefire-plugin to the plugins section of your pom.xml
  2. Name your unit tests *Tests.java
  3. Put them under src/test/java
  4. Always mirror the package structure of the main code
  5. Run tests with mvn test

There are a bunch of options for configuring the Maven Surefire Plugin. If you don’t want to configure anything special, you actually don’t need to add the plugin to the POM file. Nevertheless, it’s good practice to add it to the POM file anyway. Here’s what that would look like:

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>3.2.5</version>
            </plugin>
        </plugins>
    </build>

Integration tests

The Maven Failsafe Plugin runs integration tests during Maven’s integration-test phase. Integration tests are distinct from unit tests due to their external dependencies and should be treated differently. To run integration tests:

  1. Add maven-failsafe-plugin to the plugins section of your pom.xml
  2. Name your integration tests *IT.java
  3. Put them under src/test/java
  4. Mirror the package structure of the main code as appropriate
  5. Run tests with mvn verify

Maven actually has multiple integration test phases – pre-integration-test, integration-test, and post-integration-test – to handle setup, testing, and cleanup. However, failures in these phases do not fail the build directly, so cleanup can always run. Instead, the Failsafe plugin’s verify goal, bound to the verify phase, makes the build fail when integration tests have failed.

Like the Maven Surefire Plugin, the Maven Failsafe Plugin has a bunch of options. However, to run integration tests, you must configure the Failsafe plugin in the POM file. Here’s what it looks like:

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>3.2.5</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>integration-test</goal>
                            <goal>verify</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

Maven phases

Phases in the Maven Build Lifecycle are cumulative:

  • Running mvn test includes the compile phase
  • Running mvn verify includes the compile and test phases

It is also a good practice to run mvn clean before other phases to delete the build output (target/) directory. That way, old class files and test reports are removed before generating new ones. You may also include clean with commands to run tests, like this: mvn clean test or mvn clean verify.

Customizations

You can customize how tests run. For example, you can create a separate directory for integration tests (like src/it instead of src/test). However, I recommend avoiding customizations like this. They require complicated settings in the POM file that are difficult to get right and confusing to maintain. Others who join the project later will expect Maven standards.

Test coverage and trusting your instincts

Picture this: It’s 2010, and I’m fresh out of college, eager to dive into the software industry. Little did I know, a simple interview question would challenge not only my knowledge about testing but also my instincts.

Job openings were hard to find in the wake of the Great Recession. Thankfully, I landed a few interviews with IBM, where I completed a series of internships over the summers of 2007-2009. I was willing to take any kind of job – as long as it involved coding. One of those interviews was for an entry-level position on a data warehouse team in Boston. I honestly don’t remember much from this interview, but there was one question I will never forget:

How do you know when you’ve done enough testing?

Now, remember, back in 2010, I wasn’t the Automation Panda yet. Nevertheless, since I had experience with testing during my internships, I felt prepared to give a reasonable answer. If I recall correctly, I said something about covering all paths through the code and being mindful to consider edge cases that could be overlooked. (My answer today would likely frame “completeness” around acceptable risk, but that’s not the point of the story.) I’ll never forget what the interviewer said in reply:

Well, that’s not the answer I was looking for.

Oh? What’s the “right” answer?

If you write roughly the same number of lines of test code as you write for product code, then you have enough coverage.

That answer stunned me. Despite my limited real-world experience as a recent college graduate, I knew that answer was blatantly wrong. During my internships, I wrote plenty of code with plenty of tests, and I knew from experience that there was no correlation between lines of test code and actual coverage. Even a short snippet could require multiple tests to thoroughly cover all of its variations.

For example, here’s a small Python class that keeps track of a counter:

class Counter:

  def __init__(self):
    self.count = 0

  def add(self, more=1):
    self.count += more

And here’s a set of pytest tests to cover it:

import pytest

from counter import Counter  # assuming the Counter class lives in counter.py

@pytest.fixture
def counter():
  return Counter()

def test_counter_init(counter):
  assert counter.count == 0

def test_counter_add_one(counter):
  counter.add()
  assert counter.count == 1

def test_counter_add_three(counter):
  counter.add(3)
  assert counter.count == 3

def test_counter_add_twice(counter):
  counter.add()
  counter.add()
  assert counter.count == 2

There are three times as many lines of test code as product code, and I could still come up with a few more test cases.
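For example, a parametrized test could pile on even more cases while reusing the same counter fixture. This is just a sketch appended to the same test module as above, not part of the original example:

# Appended to the same test module as the tests above
@pytest.mark.parametrize('amounts, expected', [
  ([], 0),
  ([5], 5),
  ([1, 2, 3], 6),
  ([10, -10], 0),
])
def test_counter_add_amounts(counter, amounts, expected):
  # Add each amount in turn, then check the running total
  for amount in amounts:
    counter.add(amount)
  assert counter.count == expected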

In the moment, I didn’t know how to reply to the interviewer. He sounded very confident in his answer. All I could say was, “I don’t think I agree with that.” I didn’t have any examples or evidence to share; I just had my gut feeling.

I sensed my interviewer’s disappointment with my response. Who was I, a lowly intern, to challenge a senior engineer? Needless to say, I did not receive a job offer. I ended up taking a different job with IBM in Raleigh-Durham instead.

Nevertheless, this exchange taught me a very valuable lesson: trust your instincts. While I didn’t land the job that day, the encounter left an indelible mark on my approach to problem-solving. It instilled in me the confidence to question assumptions and trust my instincts, qualities that would shape my career trajectory in unforeseen ways. Never dismiss your instincts because you are less senior than others. You just might be right!

Modern Web Testing with Playwright

Playwright is an awesome new web testing framework, and it can help you take a modern approach to web development. In this article, let’s learn how.

Asking tough questions about testing

Let me ask you a series of questions:

Question 1: Do you like it when bugs happen in your code? Most likely not. Bugs are problems. They shouldn’t happen in the first place, and they require effort to fix. They’re a big hassle.

Question 2: Would you rather let those bugs ship to production? Absolutely not! We want to fix bugs before users ever see them. Serious bugs could cause a lot of damage to systems, businesses, and even reputations. Whenever bugs do slip into production, we want to find them and fix them ASAP.

Question 3: Do you like to create tests to catch bugs before that happens? Hmmm… this question is tougher to answer. Most folks understand that good tests can provide valuable feedback on software quality, but not everyone likes to put in the work for testing.

Why the distaste for testing?

Why doesn’t everyone like to do testing? Testing is HARD! Here are common complaints I hear:

  • Tests are slow – they take too long to run!
  • Tests are brittle – they break whenever the app changes!
  • Tests are flaky – they crash all the time!
  • Tests don’t make sense – they are complicated and unreadable!
  • Tests don’t make money – we could be building new features instead!
  • Tests require changing context – they interrupt my development workflow!
Testing challenges

These are all valid reasons. To mitigate these pain points, software teams have historically created testing strategies around the Testing Pyramid, which separates tests by layer from top to bottom:

  • UI tests
  • API tests
  • Component tests
  • Unit tests
Testing Pyramid

Tests at the bottom were considered “better” because they were closer to the code, easier to automate, and faster to execute. They were also considered to be less susceptible to flakiness and, therefore, easier to maintain. Tests at the top were considered just the opposite: big, slow, and expensive. The pyramid shape implied that teams should spend more time on tests at the base of the pyramid and less time on tests at the top.

End-to-end tests can be very valuable. Unfortunately, the Testing Pyramid labeled them as “difficult” and “bad” primarily due to poor practices and tool shortcomings. It also led teams to form testing strategies that emphasized categories of tests over the feedback they delivered.

Rethinking modern web testing goals

Testing doesn’t need to be hard, and it doesn’t need to suffer from the problems of the past. We should take a fresh approach to testing modern web apps.

Here are three major goals for modern web testing:

  1. Focus on building fast feedback loops rather than certain types of tests.
  2. Make test development as fast and painless as possible.
  3. Choose test tooling that naturally complements dev workflows.
Modern testing goals

These goals put emphasis on results and efficiency. Testing should just be a natural part of development without any friction.

Introducing Playwright

Playwright is a modern web testing framework that can help us meet these goals.

  • It is an open source project from Microsoft.
  • It manipulates the browser via (superfast) debug protocols
  • It works with Chromium/Chrome/Edge, Firefox, and WebKit
  • It provides automatic waiting, test generation, UI mode, and more
  • It can test UIs and APIs together
  • It provides bindings for JavaScript/TypeScript, Python, Java, and C#
Playwright overview

Playwright takes a unique approach to browser automation. First of all, it uses browser projects rather than full browser apps. For example, this means you would test Chromium instead of Google Chrome. Browser projects are smaller and don’t use as many resources as full browsers. Playwright also manages the browser projects for you, so you don’t need to install extra stuff.

Second, it uses browsers very efficiently:

  1. Instead of launching a full, new browser instance for each test, Playwright launches one browser instance for the entire suite of tests.
  2. It then creates a unique browser context from that instance for each test. A browser context is essentially like an incognito session: it has its own session storage and tabs that are not shared with any other context. Browser contexts are very fast to create and destroy.
  3. Then, each browser context can have one or more pages. All Playwright interactions happen through a page, like clicks and scrapes. Most tests only ever need one page.
Playwright browsers, contexts, and pages

Playwright handles all this setup automatically for you.
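For illustration, here is roughly what that setup looks like if you drive it by hand with Playwright’s Python sync API. Test runners such as pytest-playwright do this for you behind the scenes, so treat this as a sketch of the mechanics rather than code you would normally write in a test:

from playwright.sync_api import sync_playwright

with sync_playwright() as playwright:
  # One browser instance can serve an entire test suite
  browser = playwright.chromium.launch()

  # Each test gets its own isolated context, like a fresh incognito session
  context = browser.new_context()

  # Most tests need only one page within that context
  page = context.new_page()
  page.goto('https://playwright.dev')
  print(page.title())

  # Contexts are cheap to destroy; the browser instance stays up for the next test
  context.close()
  browser.close()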

Comparing Playwright to other tools

Playwright is not the only browser automation tool out there. The other two most popular tools are Selenium and Cypress. Here is a chart with high-level comparisons:

Browser automation tool comparison

All three are good tools, and each one has its advantages. Playwright’s main advantages are that it offers an excellent developer experience with the fastest execution time, multiple language bindings, and several quality-of-life features.

Learning Playwright

If you want to learn how to automate your web tests with Playwright, take my tutorial, Awesome Web Testing with Playwright. All instructions and example code for the tutorial are located in GitHub. This tutorial is designed to be self-guided, so give it a try!

Test Automation University also has a Playwright learning path with introductory and advanced courses.

Playwright is an awesome new framework for modern web testing. Give it a try, and let me know what you automate!

Which web testing tool should I use?

This article is based on my talk at PyCon US 2023. The web app under test and most of the example code is written in Python, but the information presented is applicable to any stack.

There are several great tools and frameworks for automating browser-based web UI testing these days. Personally, I gravitate towards open source projects that require coding skills to use, rather than low-code/no-code automation tools. The big three browser automation tools right now are Selenium, Cypress, and Playwright. There are other great tools, too, but these three seem to be the ones everyone is talking about the most.

It can be tough to pick the right tool for your needs. In this article, let’s compare and contrast these tools.

Choosing a web app to test

I developed a small web app named Bulldoggy, the reminders app. You can clone the repository and run it yourself. The repository URL is https://github.com/AutomationPanda/bulldoggy-reminders-app.

Bulldoggy is a full-stack Python app:

  • It uses FastAPI for APIs.
  • It uses Jinja templates for HTML and CSS files.
  • It uses HTMX for handling dynamic interactions without needing any explicit JavaScript.
  • It uses TinyDB to store data.
  • It uses Pydantic to model data.

If you want to run it locally, all you need is Python!

The app is pretty simple. When you first load it, it presents a standard login page. I actually used ChatGPT to help me write the HTML and CSS:

The Bulldoggy login page

After logging in, you’ll see the reminders page:

The Bulldoggy reminders page

The title card at the top has the app’s name, the logo, and a logout button. On the left, there is a card for reminder lists. Here, I have different lists for Chores and Projects. On the right, there is a card for all the reminders in the selected list. So, when I click the Chores list, I see reminders like “Buy groceries” and “Walk the dog.” I can click individual reminder rows to strike them out, indicating that they are complete. I can also add, edit, or delete reminders and lists through the buttons along the right sides of the cards.

Now that we have a web app to test, let’s learn how to use the big three web testing tools to automate tests for it.

Selenium

Selenium WebDriver is the classic and still the most popular browser automation tool. It’s the original. It carries that old-school style and swagger. Selenium manipulates the browser using the WebDriver protocol, a W3C Recommendation that all major browsers have adopted. The Selenium project is fully open source. It relies on open standards, and it is run by community volunteers according to open governance policies. Selenium WebDriver offers language bindings for Java, JavaScript, C#, and – my favorite language – Python.

Selenium WebDriver works with real, live browsers through a proxy server running on the same machine as the target browser. When test automation starts, it will launch the WebDriver executable for the proxy and then send commands through it via the WebDriver protocol.

How Selenium WebDriver works

To set up Selenium WebDriver, you need to install the WebDriver executables on your machine’s system path for the browsers you intend to test. Make sure the versions all match!

Then, you’ll need to add the appropriate Selenium package(s) to your test automation project. The names for the packages and the methods for installation are different for each language. For example, in Python, you’ll probably run pip install selenium.

In your project, you’ll need to construct a WebDriver instance. The best place to do that is in a setup method within a test framework. If you are using Python with pytest, that would go into a fixture like this:

Selenium WebDriver setup
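The original shows this fixture as a screenshot, so here is a minimal sketch of what it might look like in code. The browser fixture name matches the text below; hardcoding Chrome is just an assumption for simplicity:

import pytest
from selenium import webdriver

@pytest.fixture
def browser():
  # Hardcode the browser type for simplicity (it could be chosen from test inputs instead)
  driver = webdriver.Chrome()
  yield driver
  # Cleanup after the yield: explicitly quit the browser
  driver.quit()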

We could hardcode the browser type we want to use as shown here in the example, or we could dynamically pick the browser type based on some sort of test inputs. We may also set options on the WebDriver instance, such as running it headless or setting an implicit wait. For cleanup after the yield statement, we need to explicitly quit the browser.

Here’s what a login test would look like when using Selenium in Python:

Selenium WebDriver tests
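The original shows this test as a screenshot as well, so here is a rough sketch of it in code form. The URL, locators, and credentials below are placeholders standing in for the values in the screenshot, not the exact originals:

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def test_successful_login(browser):
  # Given the login page is displayed
  browser.get('http://127.0.0.1:8000/login')

  # When the user logs into the app with valid credentials
  browser.find_element(By.CSS_SELECTOR, 'input[name="username"]').send_keys('pandy')
  browser.find_element(By.CSS_SELECTOR, 'input[name="password"]').send_keys('DandyAndySugarCandy')
  browser.find_element(By.XPATH, '//button[text()="Login"]').click()

  # Then the reminders page is displayed
  # (explicit wait: the page needs time to load before elements can be checked)
  wait = WebDriverWait(browser, timeout=10)
  title_card = wait.until(EC.visibility_of_element_located((By.ID, 'bulldoggy-title')))
  assert 'Bulldoggy' in title_card.text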

The test function would receive the WebDriver instance through the browser fixture we just wrote. When I write tests, I follow the Arrange-Act-Assert pattern, and I like to write my test steps using Given-When-Then language in comments.

The first step is, “Given the login page is displayed.” Here, we call browser.get() with the full URL for the Bulldoggy app running on the local machine.

The second step is, “When the user logs into the app with valid credentials.” This actually requires three interactions: typing the username, typing the password, and clicking the login button. For each of these, the test must first call browser.find_element() with a locator to get the element object. These calls locate the username and password fields using CSS selectors based on input name, and they locate the login button using an XPath that searches for the text of the button. Once the elements are found, the test can call interactions on them like send_keys() and click().

Now, one thing to note is that these calls should probably use page objects or the Screenplay Pattern to make them reusable, but I chose to put raw Selenium code here to keep it basic.

The third step is, “Then the reminders page is displayed.” These lines perform assertions, but they need to wait for the reminders page to load before they can check any elements. The WebDriverWait object enables explicit waiting. With Selenium WebDriver, we need to handle waiting by ourselves, or else tests will crash when they can’t find target elements. Improper waiting is the main cause for flakiness in tests. Furthermore, implicit and explicit waits don’t mix. We must choose one or the other. Personally, I’ve found that any test project beyond a small demo needs explicit waits to be maintainable and runnable.

Selenium is great because it works well, but it does have some pain points:

  1. Like we just said, there is no automatic waiting. Folks often write flaky tests unintentionally because they don’t handle waiting properly. Therefore, it is strongly recommended to use a layer on top of raw Selenium like Pylenium, SeleniumBase, or a Screenplay implementation. Selenium isn’t a full test framework by itself – it is a browser automation tool that becomes part of a test framework.
  2. Selenium setup can be annoying. We need to install matching WebDriver executables onto the system path for every browser we test, and we need to keep their versions in sync. It’s very common to discover that tests start failing one day because a browser automatically updated its version and no longer matched its WebDriver executable. Thankfully, a new part of the Selenium project named Selenium Manager now automatically handles the executables.
  3. Selenium-based tests have a bad reputation for slowness. Usually, poor performance comes more from the apps under test than the tool itself, but Selenium setup and cleanup do cause a performance hit.

Cypress

Cypress is a modern frontend test framework with rich developer experience. Instead of using the WebDriver protocol, it manipulates the browser via in-browser JavaScript calls. The tests and the app operate in the same browser process. Cypress is an open source project, and the company behind it sells advanced features for it as a paid service. It can run tests on Chrome, Firefox, Edge, Electron, and WebKit (but not Safari). It also has built-in API testing support. Unfortunately, due to its design, Cypress tests must be written exclusively in JavaScript (or TypeScript).

Here’s the code for the Bulldoggy login test in Cypress in JavaScript:

Cypress tests

The steps are pretty much the same as before. Instead of creating some sort of browser object, all Cypress calls go to its cy object. The syntax is very concise and readable. We could even fit in a few more assertions. Cypress also handles waiting automatically, which makes the code less prone to flakiness.

The rich developer experience comes alive when running Cypress tests. Cypress will open a browser window that will visually execute the test in front of us. Every step is traced so we can quickly pinpoint failures. Cypress is essentially a web app that tests web apps.

Cypress test execution

While Cypress is awesome, it is JavaScript-only, which stinks for folks who use other programming languages. For example, I’m a Pythonista at heart. Would I really want to test a full-stack Python web app like Bulldoggy with a browser automation tool that doesn’t have a Python language binding? Cypress is also trapped in the browser. It has some inherent limitations, like the fact that it can’t handle more than one open tab.

Playwright

Playwright is similar to Cypress in that it’s a modern, open source test framework that is developed and maintained by a company. Playwright manipulates the browser via debug protocols, which make it the fastest of the three tools we’ve discussed today. Playwright also takes a unique approach to browsers. Instead of testing full browsers like Chrome, Firefox, and Safari, it tests the corresponding browser engines: Chromium, Firefox (Gecko), and WebKit. Like Cypress, Playwright can also test APIs, and like Selenium, Playwright offers bindings for multiple popular languages, including Python.

To set up Playwright, of course we need to install the dependency packages. Then, we need to install the browser engines. Thankfully, Playwright manages its browsers for us. All we need to do is run the appropriate playwright install command for the chosen language.

Playwright takes a unique approach to browser setup. Instead of launching a new browser instance for each test, it uses one browser instance for all tests in the suite. Each test then creates a unique browser context within the browser instance, which is like an incognito session within the browser. It is very fast to create and destroy – much faster than a full browser instance. One browser instance may simultaneously have multiple contexts. Each context keeps its own cookies and session storage, so contexts are independent of each other. Each context may also have multiple pages or tabs open at any given time. Contexts also enable scalable parallel execution. We could easily run tests in parallel with the same browser instance because each context is isolated.

Playwright browsers, context, and pages

Let’s see that Bulldoggy login test one more time, but this time with Playwright code in Python. Again, the code is pretty similar to what we saw before. The major differences between these browser automation tools are not so much in the appearance of the code but rather in how they work and perform:

Playwright tests
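Since the original shows this test as a screenshot, here is a rough sketch of it in code form, using the page fixture from the pytest-playwright plugin. As before, the URL, locators, and credentials are placeholders rather than the exact values from the original:

from playwright.sync_api import Page, expect

def test_successful_login(page: Page):
  # Given the login page is displayed
  page.goto('http://127.0.0.1:8000/login')

  # When the user logs into the app with valid credentials
  page.locator('input[name="username"]').fill('pandy')
  page.locator('input[name="password"]').fill('DandyAndySugarCandy')
  page.get_by_role('button', name='Login').click()

  # Then the reminders page is displayed
  # (expect assertions automatically wait for their conditions to become true)
  expect(page.locator('#reminders')).to_be_visible()
  expect(page).to_have_url('http://127.0.0.1:8000/reminders')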

With Playwright, all interactions happen with the “page” object. By default, Playwright will create:

  • One browser instance to be shared by all tests in a suite
  • One context for each test case
  • One page within the context for each test case

When we read this code, we see locators for finding elements and methods for acting upon found elements. Notice how, like Cypress, Playwright automatically handles waiting. Playwright also packs an extensive assertion library whose assertions wait up to a reasonable timeout for their conditions to become true.

Again, like we said for the Selenium example code, if this were a real-world project, we would probably want to use page objects or the Screenplay Pattern to handle interactions rather than raw calls.

Playwright has a lot more cool stuff, such as the code generator and the trace viewer. However, Playwright isn’t perfect, and it also has some pain points:

  1. Playwright tests browser engines, not full browsers. For example, Chrome is not the same as Chromium. There might be small test gaps between the two. Your team might also need to test full browsers to satisfy compliance rules. 
  2. Playwright is still new. It is years younger than Selenium and Cypress, so its community is smaller. You probably won’t find as many StackOverflow articles to help you as you would for the other tools. Features are also evolving rapidly, so brace yourself for changes.

Which one should you choose?

So, now that we have learned all about Selenium, Cypress, and Playwright, here’s the million-dollar question: Which one should we use? Well, the best web test tool to choose really depends on your needs. They are all great tools with pros and cons. I wanted to compare these tools head-to-head, so I created this table for quick reference:

Web test automation tool comparison

In summary:

  1. Selenium WebDriver is the classic tool that historically has appealed to testers. It supports all major browsers and several programming languages. It abides by open source, standards, and governance. However, it is a low-level browser automation tool, not a full test framework. Use it with a layer on top like Serenity, Boa Constrictor, or Pylenium.
  2. Cypress is the darling test framework for frontend web developers. It is essentially a web app that tests web apps, and it executes tests in the same browser process as the app under test. It supports many browsers but must be coded exclusively in JavaScript. Nevertheless, its developer experience is top-notch.
  3. Playwright is gaining popularity very quickly for its speed and innovative optimizations. It packs all the modern features of Cypress with the multilingual support of Selenium. Although it is newer than Cypress and Selenium, it’s growing fast in terms of features and user base.

If you want to know which one I would choose, come talk with me about it! You can also watch my PyCon US 2023 talk recording to see which one I would specifically choose for my personal Python projects.

Passing Test Inputs into pytest

Someone recently asked me this question:

I’m developing a pytest project to test an API. How can I pass environment information into my tests? I need to run tests against different environments like DEV, TEST, and PROD. Each environment has a different URL and a unique set of users.

This is a common problem for automated test suites, not just in Python or pytest. Any information a test needs about the environment under test is called configuration metadata. URLs and user accounts are common configuration metadata values. Tests need to know what site to hit and how to authenticate.

Using config files with an environment variable

There are many ways to handle inputs like this. I like to create JSON files to store the configuration metadata for each environment. So, something like this:

  • dev.json
  • test.json
  • prod.json

Each one could look like this:

{
  "base_url": "http://my.site.com/",
  "username": "pandy",
  "password": "DandyAndySugarCandy"
}

The structure of each file must be the same so that tests can treat them interchangeably.

I like using JSON files because:

  • they are plain text files with a standard format
  • they are easy to diff
  • they store data hierarchically
  • Python’s standard json module turns them into dictionaries in 2 lines flat

Then, I create an environment variable to set the desired config file:

export TARGET_ENV=dev.json

In my pytest project, I write a fixture to get the config file path from this environment variable and then read that file as a dictionary:

import json
import os
import pytest

@pytest.fixture(scope='session')
def target_env():
  config_path = os.environ['TARGET_ENV']
  with open(config_path) as config_file:
    config_data = json.load(config_file)
  return config_data

I’ll put this fixture in a conftest.py file so all tests can share it. Since it uses session scope, pytest will execute it one time before all tests. Test functions can call it like this:

import requests

def test_api_get(target_env):
  url = target_env['base_url']
  creds = (target_env['username'], target_env['password'])
  response = requests.get(url, auth=creds)
  assert response.status_code == 200

Selecting the config file with a command line argument

If you don’t want to use environment variables to select the config file, you could instead create a custom pytest command line argument. Bas Dijkstra wrote an excellent article showing how to do this. Basically, you could add the following function to conftest.py to add the custom argument:

def pytest_addoption(parser):
  parser.addoption(
    '--target-env',
    action='store',
    default='dev.json',
    help='Path to the target environment config file')

Then, update the target_env fixture:

import json
import pytest

@pytest.fixture
def target_env(request):
  config_path = request.config.getoption('--target-env')
  with open(config_path) as config_file:
    config_data = json.load(config_file)
  return config_data

When running your tests, you would specify the config file path like this:

python -m pytest --target-env dev.json

Why bother with JSON files?

In theory, you could pass all inputs into your tests with pytest command line arguments or environment variables. You don’t need config files. However, I find that storing configuration metadata in files is much more convenient than setting a bunch of inputs each time I need to run my tests. In our example above, passing one value for the config file path is much easier than passing three different values for base URL, username, and password. Real-world test projects might need more inputs. Plus, configurations don’t change frequently, so it’s okay to save them in a file for repeated use. Just make sure to keep your config files safe if they have any secrets.

Validating inputs

Whenever reading inputs, it’s good practice to make sure their values are good. Otherwise, tests could crash! I like to add a few basic assertions as safety checks:

import json
import os
import pytest

@pytest.fixture
def target_env(request):
  config_path = request.config.getoption('--target-env')
  assert os.path.isfile(config_path)

  with open(config_path) as config_file:
    config_data = json.load(config_file)

  assert 'base_url' in config_data
  assert 'username' in config_data
  assert 'password' in config_data

  return config_data

Now, pytest will stop immediately if inputs are wrong.

Democratizing the Screenplay Pattern

I started Boa Constrictor back in 2018 because I loathed page objects. On a previous project, I saw page objects balloon to several thousand lines long with duplicative methods. Developing new tests became a nightmare, and about 10% of tests failed daily because they didn’t handle waiting properly.

So, while preparing a test strategy at a new company, I invested time in learning the Screenplay Pattern. To be honest, the pattern seemed a bit confusing at first, but I was willing to try anything other than page objects again. Eventually, it clicked for me: Actors use Abilities to perform Interactions. Boom! It was a clean separation of concerns.
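Boa Constrictor itself is a .NET library, but the idea is language-agnostic. Here is a toy sketch of how the three concepts relate, written in Python for brevity; all of the names are illustrative, not Boa Constrictor’s actual API:

class BrowseTheWeb:
  """Ability: wraps the tool an Actor can use (here, a Selenium WebDriver)."""
  def __init__(self, driver):
    self.driver = driver

class Navigate:
  """Interaction: one small, reusable unit of behavior."""
  def __init__(self, url):
    self.url = url

  def perform_as(self, actor):
    # Look up the Ability the Actor holds and use it to do the work
    actor.using(BrowseTheWeb).driver.get(self.url)

class Actor:
  """Actor: holds Abilities and uses them to perform Interactions."""
  def __init__(self):
    self._abilities = {}

  def can_use(self, ability):
    self._abilities[type(ability)] = ability
    return self

  def using(self, ability_type):
    return self._abilities[ability_type]

  def attempts_to(self, interaction):
    interaction.perform_as(self)

A hypothetical test would then read like a script: the actor can_use BrowseTheWeb with a WebDriver instance, and then attempts_to Navigate to a page. Interactions stay small and composable instead of piling up inside page object classes.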

Unfortunately, the only major implementations I could find for the Screenplay Pattern at the time were Serenity BDD in Java and JavaScript. My company was a .NET shop. I looked for C# implementations, but I didn’t find anything that I trusted. So, I took matters into my own hands and implemented the Screenplay Pattern myself in .NET. Initially, I implemented Selenium WebDriver interactions. Later, my team and I added RestSharp interactions. We eventually released Boa Constrictor as an open source project in October 2020 as part of Hacktoberfest.

With Boa Constrictor, I personally sought to reinvigorate interest in the Screenplay Pattern. By bringing the Screenplay Pattern to .NET, we enabled folks outside of the Java and JavaScript communities to give it a try. With our rich docs, examples, and videos, we made it easy to onboard new users. And through conference talks and webinars, we popularized the concepts behind Screenplay, even for non-C# programmers. It’s been awesome to see so many other folks in the testing community start talking about the Screenplay Pattern in the past few years.

I also wanted to provide a standalone implementation of the Screenplay Pattern. Since the Screenplay Pattern is a design for automating interactions, it could and should integrate with any .NET test framework: SpecFlow, MsTest, NUnit, xUnit.net, and any others. With Boa Constrictor, we focused singularly on making interactions as excellent as possible, and we let other projects handle separate concerns. I did not want Boa Constrictor to be locked into any particular tool or system. In this sense, Boa Constrictor diverged from Serenity BDD – it was not meant to be a .NET version of Serenity, despite taking much inspiration from Serenity.

Furthermore, in the design and all the messaging for Boa Constrictor, I strived to make the Screenplay Pattern easy to understand. So many folks I knew gave up on Screenplay in the past because they thought it was too complicated. I wanted to break things down so that any automation developer could pick it up quickly. Hence, I formed the soundbite, “Actors use Abilities to perform Interactions,” to describe the pattern in one line. I also coined the project’s slogan, “Better Interactions for Better Automation,” to clearly communicate why Screenplay should be used over alternatives like raw calls or page objects.

So far, Boa Constrictor has succeeded modestly well in these goals. Now, the project is pursuing one more goal: democratizing the Screenplay Pattern.

At its heart, the Screenplay Pattern is a generic pattern for any kind of interactions. The core pattern should not favor any particular tool or package. Anyone should be able to implement interaction libraries using the tools (or “Abilities”) they want, and each of those libraries should be treated equally without preference. Recently, in our plans for Boa Constrictor 3, we announced that we want to create separate packages for the “core” pattern and for each library of interactions. We also announced plans to add new libraries for Playwright and Applitools. The existing libraries – Selenium WebDriver and RestSharp – need not be the only libraries. Boa Constrictor was never meant to be merely a WebDriver wrapper or a superior page object. It was meant to provide better interactions for any kind of test automation.

In version 3.0.0, we successfully separated the Boa.Constrictor project into three new .NET projects and released a NuGet package for each:

  • Boa.Constrictor.Screenplay for the core Screenplay Pattern
  • Boa.Constrictor.Selenium for Selenium WebDriver interactions
  • Boa.Constrictor.RestSharp for RestSharp interactions

This separation enables folks to pick the parts they need. If they only need Selenium WebDriver interactions, then they can use just the Boa.Constrictor.Selenium package. If they want to implement their own interactions and don’t need Selenium or RestSharp, then they can use the Boa.Constrictor.Screenplay package without being forced to take on those extra dependencies.

Furthermore, we continued to maintain the “classic” Boa.Constrictor package. Now, this package simply claims dependencies on the other three packages in order to preserve backwards compatibility for folks who used previous versions of Boa Constrictor. As part of the upgrade from 2.0.x to 3.0.x, we did change some namespaces (which are documented in the project changelog), but the rest of the code remained the same. We wanted the upgrade to be as straightforward as possible.

The core contributors and I will continue to implement our plans for Boa Constrictor 3 over the coming weeks. There’s a lot to do, and we will do our best to implement new code with thoughtfulness and quality. We will also strive to keep everything documented. Please be patient with us as development progresses. We also welcome your contributions, ideas, and feedback. Let’s make Boa Constrictor excellent together.

Plans for Boa Constrictor 3

Boa Constrictor is the .NET Screenplay Pattern. It helps you make better interactions for better test automation!

I originally created Boa Constrictor starting in 2018 as the cornerstone of PrecisionLender’s end-to-end test automation project. In October 2020, my team and I released it as an open source project hosted on GitHub. Since then, the Boa Constrictor NuGet package has been downloaded over 44K times, and my team and I have shared the project through multiple conference talks and webinars. It’s awesome to see the project really take off!

Unfortunately, Boa Constrictor has had very little development over the past year. The latest release was version 2.0.0 in November 2021. What happened? Well, first, I left Q2 (the company that acquired PrecisionLender) to join Applitools, so I personally was not working on Boa Constrictor as part of my day job. Second, Boa Constrictor didn’t need much development. The core Screenplay Pattern was well-established, and the interactions for Selenium WebDriver and RestSharp were battle-hardened. Even though we made no new releases for a year, the project remained alive and well. The team at Q2 still uses Boa Constrictor as part of thousands of test iterations per day!

The time has now come for new development. Today, I’m excited to announce our plans for the next phase of Boa Constrictor! In this article, I’ll share the vision that the core contributors and I have for the project – tentatively casting it as “version 3.” We will also share a rough timeline for development.

Separate interaction packages

Currently, the Boa.Constrictor NuGet package has three main parts:

  1. The Screenplay Pattern’s core interfaces and classes
  2. Interactions for Selenium WebDriver
  3. Interactions for RestSharp

This structure is convenient for a test automation project that uses Selenium and RestSharp, but it forces projects that don’t use them to take on their dependencies. What if a project uses Playwright instead of Selenium, or RestAssured.NET instead of RestSharp? What if a project wants to make different kinds of interactions, like mobile interactions with Appium?

At its heart, the Screenplay Pattern is a generic pattern for any kind of interactions. In theory, the core pattern should not favor any particular tool or package. Anyone should be able to implement interaction libraries using the core pattern.

With that in mind, we intend to split the current Boa.Constrictor package into three separate packages, one for each of the existing parts. That way, a project can declare dependencies only on the parts of Boa Constrictor that it needs. It also enables us (and others) to develop new packages for different kinds of interactions.

Playwright support

One of the new interaction packages we intend to create is a library for Playwright interactions. Playwright is a fantastic new web testing framework from Microsoft. It provides several advantages over Selenium WebDriver, such as faster execution, automatic waiting, and trace logging.

We want to give people the ability to choose between Selenium WebDriver or Playwright for their web UI interactions. Since a test automation project would use only one, and since there could be overlap in the names and types of interactions, separating interaction packages as detailed in the previous section will be a prerequisite for developing Playwright support.

We may also try to develop an adapter for Playwright interactions that uses the same interfaces as Selenium interactions so that folks could switch from Selenium to Playwright without rewriting their interactions.

Applitools support

Another new interaction package we intend to create is a library for Applitools interactions. Applitools is the premier visual testing platform. Visual testing catches UI bugs that are difficult to catch with traditional assertions, such as missing elements, broken styling, and overlapping text. A Boa Constrictor package for Applitools interactions would make it easier to capture visual snapshots together with Selenium WebDriver interactions. It would also be an “optional” feature since it would be its own package.

Shadow DOM support

Shadow DOM is a technique for encapsulating parts of a web page. It enables a hidden DOM tree to be attached to an element in the “regular” DOM tree so that different parts between the two DOMs do not clash. Shadow DOM usage has become quite prevalent in web apps these days.

Selenium WebDriver requires extra calls to pierce the shadow DOM, and unfortunately, Boa Constrictor’s Selenium interactions do not currently support shadow DOM interactivity. We intend to add that support. Most likely, we will add new builder methods for Selenium-based Tasks and Questions that take in a locator for the shadow root element and then update the action methods to handle the shadow DOM if necessary.
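For context, here is what those extra calls look like in raw Selenium, sketched in Python rather than C# for brevity. The page and selectors are made up for illustration:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('http://127.0.0.1:8000')  # hypothetical page that uses shadow DOM

# First, find the shadow host element in the regular DOM
host = driver.find_element(By.CSS_SELECTOR, 'my-widget')

# Then, get its shadow root and search within it (CSS selectors only)
shadow_root = host.shadow_root
button = shadow_root.find_element(By.CSS_SELECTOR, 'button.submit')
button.click()

driver.quit()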

.NET 7 targets

The main Boa Constrictor project, the unit tests project, and the example project all target .NET 5. Unfortunately, .NET 5 is no longer supported by Microsoft. The latest release is .NET 7.

We intend to add .NET 7 targets. We will make the library packages target .NET 7, .NET 5 (for backwards compatibility), and .NET Standard 2.0 (again, for backwards compatibility). We will change the unit test and example projects to target .NET 7 exclusively. In fact, we have already made this change in version 2.0.2!

Dependency updates

Many of Boa Constrictor’s dependencies have released new versions over the past year. GitHub’s Dependabot has also flagged some security vulnerabilities. It’s time to update dependency versions. This is standard periodic maintenance for any project. Already, we have updated our Selenium WebDriver dependencies to version 4.6.

Documentation enhancements

Boa Constrictor has a doc site hosted using GitHub Pages. As we make the changes described above, we must also update the documentation for the project. Most notably, we will need to update our tutorial and example project, since the packages will be different, and we will have support for more kinds of interactions.

What’s the timeline?

The core contributors and I plan to implement these enhancements within the next three months:

  • Today, we just released two new versions with incremental changes: 2.0.1 and 2.0.2.
  • This week, we hope to split the existing package into three, which we intend to release as version 3.0.
  • In December, we will refresh the GitHub Issues for the project.
  • In January, the core contributors and I will host an in-person hackathon (a “Constrictathon”) in Cary, NC.

There is tons of work ahead, and we’d love for you to join us. Check out the GitHub repository, read our contributing guide, and join our Discord server!