
Are Automated Test Retries Good or Bad?

What happens when a test fails? If someone is manually running the test, then they will pause and poke around to learn more about the problem. However, when an automated test fails, the rest of the suite keeps running. Testers won’t get to view results until the suite is complete, and the automation won’t perform any extra exploration at the time of failure. Instead, testers must review logs and other artifacts gathered during testing, and they might even need to rerun the failed test to check if the failure is consistent.

Since testers typically rerun failed tests as part of their investigation, why not configure the test framework to rerun failures automatically? On the surface, this seems logical: automated retries can eliminate one more manual step. Unfortunately, automated retries can also enable poor practices, like ignoring legitimate issues.

So, are automated test retries good or bad? This is actually a rather controversial topic. I’ve heard many voices strongly condemn automated retries as an antipattern (see here, here, and here). While I agree that automated retries can be abused, I nevertheless still believe they can add value to test automation. Reaching a deeper understanding requires a nuanced approach.

So, how do automated retries work?

To avoid any confusion, let’s carefully define what we mean by “automated test retries.”

Let’s say I have a suite of 100 automated tests. When I run these tests, the framework will execute each test individually and yield a pass or fail result for the test. At the end of the suite, the framework will aggregate all the results together into one report. In the best case, all tests pass: 100/100.

However, suppose that one of the tests fails. Upon failure, the test framework would capture any exceptions, perform any cleanup routines, log a failure, and safely move on to the next test case. At the end of the suite, the report would show 99/100 passing tests with one test failure.

By default, most test frameworks will run each test one time. However, some test frameworks have features for automatically rerunning test cases that fail. The framework may even enable testers to specify how many retries to attempt. So, let’s say that we configure 2 retries for our suite of 100 tests. When that one test fails, the framework would queue that failing test to run twice more before moving on to the next test. It would also add more information to the test report. For example, if one retry passed but another one failed, the report would show 99/100 passing tests with a 1/3 pass rate for the failing test.
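Different frameworks expose this feature in different ways. As one hedged illustration (not the tool used later in this article), NUnit has a built-in [Retry] attribute; the test body and the GetResultCount helper below are purely hypothetical:

using NUnit.Framework;

public class SearchTests
{
    // [Retry] reruns a failing test; per the NUnit docs, the argument is the total
    // number of tries (so 3 means 1 initial attempt plus up to 2 retries), and
    // retries trigger on assertion failures rather than unexpected exceptions.
    [Test, Retry(3)]
    public void SearchReturnsResults()
    {
        // GetResultCount is a hypothetical helper standing in for real test steps.
        Assert.That(GetResultCount(), Is.GreaterThan(0));
    }

    private int GetResultCount() => 0;  // placeholder
}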

In this article, we will focus on automated retries for test cases. Testers could also program other types of retries into automated tests, such as retrying browser page loads or REST requests. Interaction-level retries require sophisticated, context-specific logic, whereas test-level retry logic works the same for any kind of test case. (Interaction-level retries would also need their own article.)

Automated retries can be a terrible antipattern

Let’s see how automated test retries can be abused:

Jeremy is a member of a team that runs a suite of 300 automated tests for their web app every night. Unfortunately, the tests are notoriously flaky. About a dozen different tests fail every night, and Jeremy spends a lot of time each morning triaging the failures. Whenever he reruns failed tests individually on his laptop, they almost always pass.

To save himself time in the morning, Jeremy decides to add automatic retries to the test suite. Whenever a test fails, the framework will attempt one retry. Jeremy will only investigate tests whose retries failed. If a test had a passing retry, then he will presume that the original failure was just a flaky test.

Ouch! There are several problems here.

First, Jeremy is using retries to conceal information rather than reveal information. If a test fails but its retries pass, then the test still reveals a problem! In this case, the underlying problem is flaky behavior. Jeremy is using automated retries to overwrite intermittent failures with intermittent passes. Instead, he should investigate why the tests are flaky. Perhaps automated interactions have race conditions that need more careful waiting. Or, perhaps features in the web app itself are behaving unexpectedly. Test failures indicate a problem – either in test code, product code, or infrastructure.

Second, Jeremy is using automated retries to perpetuate poor practices. Before adding automated retries to the test suite, Jeremy was already manually retrying tests and disregarding flaky failures. Adding retries to the test suite merely speeds up the process, making it easier to sidestep failures.

Third, the way Jeremy uses automated retries indicates that the team does not value their automated test suite very much. Good test automation requires effort and investment. Persistent flakiness is a sign of neglect, and it fosters low trust in testing. Using retries is merely a “band-aid” on both the test failures and the team’s attitude about test automation.

In this example, automated test retries are indeed a terrible antipattern. They enable Jeremy and his team to ignore legitimate issues. In fact, they incentivize the team to ignore failures because they institutionalize the practice of replacing red X’s with green checkmarks. This team should scrap automated test retries and address the root causes of flakiness.

Testers should not conceal failures by overwriting them with passes.

Automated retries are not the main problem

Ignoring flaky failures is unfortunately all too common in the software industry. I must admit that in my days as a newbie engineer, I was guilty of rerunning tests to get them to pass. Why do people do this? The answer is simple: intermittent failures are difficult to resolve.

Testers love to find consistent, reproducible failures because those are easy to explain. Other developers can’t push back against hard evidence. However, intermittent failures take much more time to isolate. Root causes can become mind-bending puzzles. They might be triggered by environmental factors or awkward timings. Sometimes, teams never figure out what causes them. In my personal experience, bug tickets for intermittent failures get far less traction than bug tickets for consistent failures. All these factors incentivize folks to turn a blind eye to intermittent failures when convenient.

Automated retries are just a tool and a technique. They may enable bad practices, but they aren’t inherently bad. The main problem is willfully ignoring certain test results.

Automated retries can be incredibly helpful

So, what is the right way to use automated test retries? Use them to gather more information from the tests. Test results are simply artifacts of feedback. They reveal how a software product behaved under specific conditions and stimuli. The pass-or-fail nature of assertions simplifies test results at the top level of a report in order to draw attention to failures. However, reports can give more information than just binary pass-or-fail results. Automated test retries yield a series of results for a failing test that indicate a success rate.

For example, SpecFlow and the SpecFlow+ Runner make it easy to use automatic retries the right way. Testers simply need to add the retryFor setting to their SpecFlow+ Runner profile to set the number of retries to attempt. In the final report, SpecFlow records the success rate of each test with color-coded counts. Results are revealed, not concealed.
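As a rough sketch, the relevant part of a .srprofile configuration might look something like the following; the retryFor setting is the one named above, while the retryCount attribute and the "Failing" value are my assumptions, so verify the exact names against the SpecFlow+ Runner documentation:

<!-- Default.srprofile (sketch; attribute names should be checked against the docs) -->
<Execution retryFor="Failing" retryCount="2" />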

Here is a snippet of the SpecFlow+ Report showing both intermittent failures (in orange) and consistent failures (in red).

This information jumpstarts analysis. As a tester, one of the first questions I ask myself about a failing test is, “Is the failure reproducible?” Without automated retries, I need to manually rerun the test to find out – often at a much later time and potentially within a different context. With automated retries, that step happens automatically and in the same context. Analysis then takes two branches:

  1. If all retry attempts failed, then the failure is probably consistent and reproducible. I would expect it to be a clear functional failure that would be fast and easy to report. I jump on these first to get them out of the way.
  2. If some retry attempts passed, then the failure is intermittent, and it will probably take more time to investigate. I will look more closely at the logs and screenshots to determine what went wrong. I will try to exercise the product behavior manually to see if the product itself is inconsistent. I will also review the automation code to make sure there are no unhandled race conditions. I might even need to rerun the test multiple times to measure a more accurate failure rate.

I do not ignore any failures. Instead, I use automated retries to gather more information about the nature of the failures. In the moment, this extra info helps me expedite triage. Over time, the trends this info reveals help me identify weak spots in both the product under test and the test automation.

Automated retries are most helpful at high scale

When used appropriately, automated retries can be helpful for any size test automation project. However, they are arguably more helpful for large projects running tests at high scale than for small projects. Why? Two main reasons: complexities and priorities.

Large-scale test projects have many moving parts. For example, at PrecisionLender, we presently run 4K-10K end-to-end tests against our web app every business day. (We also run ~100K unit tests every business day.) Our tests launch from TeamCity as part of our Continuous Integration system, and they use in-house Selenium Grid instances to run 50-100 tests in parallel. The PrecisionLender application itself is enormous, too.

Intermittent failures are inevitable in large-scale projects for many different reasons. There could be problems in the test code, but those aren’t the only possible problems. At PrecisionLender, Boa Constrictor already protects us from race conditions, so our intermittent test failures are rarely due to problems in automation code. Other causes for flakiness include:

  • The app’s complexity makes certain features behave inconsistently or unexpectedly
  • Extra load on the app slows down response times
  • The cloud hosting platform has a service blip
  • Selenium Grid arbitrarily chokes on a browser session
  • The DevOps team recycles some resources
  • An engineer makes a system change while tests are running
  • The CI pipeline deploys a new change in the middle of testing

Many of these problems result from infrastructure and process. They can’t easily be fixed, especially when environments are shared. As one tester, I can’t rewrite my whole company’s CI pipeline to be “better.” I can’t rearchitect the app’s whole delivery model to avoid all collisions. I can’t perfectly guarantee 100% uptime for my cloud resources or my test tools like Selenium Grid. Some of these might be good initiatives to pursue, but one tester’s dictates do not immediately become reality. Many times, we need to work with what we have. Curt demands to “just fix the tests” come off as pedantic.

Automated test retries provide very useful information for discerning the nature of such intermittent failures. For example, at PrecisionLender, we hit Selenium Grid problems frequently. Roughly 1/10000 Selenium Grid browser sessions will inexplicably freeze during testing. We don’t know why this happens, and our investigations have been unfruitful. We chalk it up to minor instability at scale. Whenever the 1/10000 failure strikes, our suite’s automated retries kick in and pass. When we review the test report, we see the intermittent failure along with its exception message. Based on its signature, we immediately know that test is fine. We don’t need to do extra investigation work or manual reruns. Automated retries gave us the info we needed.

Selenium Grid is a large cluster with many potential points of failure.
(Image source: LambdaTest.)

Another type of common failure is intermittently slow performance in the PrecisionLender application. Occasionally, the app will freeze for a minute or two and then recover. When that happens, we see a “brick wall” of failures in our report: all tests during that time frame fail. Then, automated retries kick in, and the tests pass once the app recovers. Automatic retries prove in the moment that the app momentarily froze but that the individual behaviors covered by the tests are okay. This indicates functional correctness for the behaviors amidst a performance failure in the app. Our team has used these kinds of results on multiple occasions to identify performance bugs in the app by cross-checking system logs and database queries during the time intervals for those brick walls of intermittent failures. Again, automated retries gave us extra information that helped us find deep issues.

Automated retries delineate failure priorities

That answers complexity, but what about priority? Unfortunately, in large projects, there is more work to do than any team can handle. Teams need to make tough decisions about what to do now, what to do later, and what to skip. That’s just business. Testing decisions become part of that prioritization.

In almost all cases, consistent failures are inherently a higher priority than intermittent failures because they have a greater impact on the end users. If a feature fails every single time it is attempted, then the user is blocked from using the feature, and they cannot receive any value from it. However, if a feature works some of the time, then the user can still get some value out of it. Furthermore, the rarer the intermittency, the lower the impact, and consequently the lower the priority. Intermittent failures are still important to address, but they must be prioritized relative to other work at hand.

Automated test retries automate that initial prioritization. When I triage PrecisionLender tests, I look into consistent “red” failures first. Our SpecFlow reports make them very obvious. I know those failures will be straightforward to reproduce, explain, and hopefully resolve. Then, I look into intermittent “orange” failures second. Those take more time. I can quickly identify issues like Selenium Grid disconnections, but other issues may not be obvious (like system interruptions) or may need additional context (like the performance freezes). Sometimes, we may need to let tests run for a few days to get more data. If I get called away to another more urgent task while I’m triaging results, then at least I could finish the consistent failures. It’s a classic 80/20 rule: investigating consistent failures typically gives more return for less work, while investigating intermittent failures gives less return for more work. It is what it is.

The only time I would prioritize an intermittent failure over a consistent failure would be if the intermittent failure causes catastrophic or irreversible damage, like wiping out an entire system, corrupting data, or burning money. However, that type of disastrous failure is very rare. In my experience, almost all intermittent failures are due to poorly written test code, automation timeouts from poor app performance, or infrastructure blips.

Context matters

Automated test retries can be a blessing or a curse. It all depends on how testers use them. If testers use retries to reveal more information about failures, then retries greatly assist triage. Otherwise, if testers use retries to conceal intermittent failures, then they aren’t doing their jobs as testers. Folks should not be quick to presume that automated retries are always an antipattern. We couldn’t achieve our scale of testing at PrecisionLender without them. Context matters.

Managing the Test Data Nightmare

On April 22, 2021, I delivered a talk entitled “Managing the Test Data Nightmare” at SauceCon 2021. SauceCon is Sauce Labs’ annual conference for the testing community. Due to the COVID-19 pandemic, the conference was virtual, but I still felt a bit of that exciting conference buzz.

My talk covers the topic of test data, which can be a nightmare to handle. Data must be prepped in advance, loaded before testing, and cleaned up afterwards. Sometimes, teams don’t have much control over the data in their systems under test—it’s just dropped in, and it can change arbitrarily. Hard-coding values that reference that system data into tests can make the tests brittle, especially when running tests in different environments.

In this talk, I covered strategies for managing each type of test data: test case variations, test control inputs, config metadata, and product state. I also covered how to “discover” test data instead of hard-coding it, how to pass inputs into automation (including secrets like passwords), and how to manage data in the system. After watching this talk, you can wake up from the nightmare and handle test data cleanly and efficiently like a pro!
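As a small, hedged illustration of “discovering” inputs rather than hard-coding them, automation code can read environment-specific values and secrets at runtime; the class and variable names below are purely illustrative, not the exact approach from the talk:

using System;

public static class TestConfig
{
    // Discover environment-specific inputs at runtime instead of hard-coding them.
    // The variable names below are illustrative, not a prescribed convention.
    public static string BaseUrl =>
        Environment.GetEnvironmentVariable("TEST_BASE_URL") ?? "https://staging.example.com";

    // Secrets like passwords should come from the environment or a secret store,
    // never from source code or version control.
    public static string ApiPassword =>
        Environment.GetEnvironmentVariable("TEST_API_PASSWORD")
        ?? throw new InvalidOperationException("TEST_API_PASSWORD is not set");
}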

Here are some other articles I wrote about test data:

As usual, I hit up Twitter throughout the conference. Here are some action shots:

Many thanks to Sauce Labs and all the organizers who made SauceCon 2021 happen. If SauceCon was this awesome as a virtual event, then I can’t wait to attend in person (hopefully) in 2022!

Announcing Boa Constrictor Docs!

Doc site:
https://q2ebanking.github.io/boa-constrictor/

Boa Constrictor is a C# implementation of the Screenplay Pattern. My team and I at PrecisionLender, a Q2 Company, developed Boa Constrictor as part of our test automation solution. Its primary use case is Web UI and REST API test automation. Boa Constrictor helps you make better interactions for better automation!

Our team released Boa Constrictor as an open source project on GitHub in October 2020. This week, we published a full documentation site for Boa Constrictor. The docs include an introduction to the Screenplay Pattern, a quick-start guide, a full tutorial, and ways to contribute to the project. The doc site itself uses GitHub Pages, Jekyll, and Minimal Mistakes.

Our team hopes that the docs help you with testing and automation. Enjoy!

Using Domain-Specific Languages for Security Testing

I love programming languages. They have fascinated me ever since I first learned to program my TI-83 Plus calculator in ninth grade, many years ago. When I studied computer science in college, I learned how parsers, interpreters, and compilers work. During my internships at IBM, I worked on a language named Enterprise Generation Language as both a tester and a developer. At NetApp, I even developed my own language named DS for test automation. Languages are so much fun to learn, build, and extend.

Today, even though I do not actively work on compilers, I still do some pretty interesting things with languages and testing. I strongly advocate for Behavior-Driven Development and its domain-specific language (DSL) Gherkin. In fact, as I wrote in my article Behavior-Driven Blasphemy, I support using Gherkin-based BDD test frameworks for test automation even if a team is not also doing BDD’s collaborative activities. Why? Gherkin is the world’s first major off-the-shelf DSL for test automation, and it doesn’t require the average tester to know the complexities of compiler theory. DSLs like Gherkin can make tests easier to read, faster to write, and more reliable to run. They provide a healthy separation of concerns between test cases and test code. After working on successful large-scale test automation projects with C# and SpecFlow, I don’t think I could go back to traditional test frameworks.

I’m not the only one who thinks this way. Here’s a tweet from Dinis Cruz, CTO and CISO at Glasswall, after he read one of my articles:

Dinis then tweeted at me to invite me to speak about using DSLs for testing at the Open Security Summit in 2021:

Now, I’m not a “security guy” at all, but I do know a thing or two about DSLs and testing. So, I gladly accepted the invitation to speak! I delivered my talk, “Using DSLs for Security Testing” virtually on Thursday, January 14, 2021 at 10am US Eastern. I also uploaded my slides to GitHub at AndyLPK247/using-dsls-for-security-testing. Check out the YouTube recording here:

This talk was not meant to be a technical demo or tutorial. Instead, it was meant to be a “think big” proposal. The main question I raised was, “How can we use DSLs for security testing?” I used my own story to illustrate the value languages deliver, particularly for testing. My call to action breaks that question down into three parts:

  1. Can DSLs make security testing easier to do and thereby more widely practiced?
  2. Is Gherkin good enough for security testing, or do we need to make a DSL specific to security?
  3. Would it be possible to write a set of “standard” or “universal” security tests using a DSL that anyone could either run directly or use as a template?

My goal for this talk was to spark a conversation about DSLs and security testing. Immediately after my talk, Luis Saiz shared two projects he’s working on regarding DSLs and security: SUSTO and Mist. Dinis also invited me back for a session at the Open Security Summit Mini Summit in February to have a follow-up roundtable discussion for my talk. I can’t wait to explore this idea further. It’s an exciting new space for me.

If this topic sparks your interest, be sure to watch my talk recording, and then join us live in February 2021 for the next Open Security Summit event. Virtual sessions are free to join. Many thanks again to Dinis and the whole team behind the Open Security Summit for inviting me to speak and organizing the events.

I’m Writing a Software Testing Book!

That’s right! You read the title. I’m writing a book about software testing!

One of the most common questions people ask me is, “What books can you recommend on software testing and automation?” Unfortunately, I don’t have many that I can recommend. There are plenty of great books, but most of them focus on a particular tool, framework, or process. I haven’t found a modern book that covers software testing as a whole. Trust me, I looked – when I taught my college course on software testing at Wake Tech, the textbook’s copyright date was 2002. Its content felt just as antiquated.

I want to write a book worthy of answering that question. I want to write a treatise on software testing for our current generation of software professionals. My goal is ambitious, but I think I can do it. It will probably take a year to write. I hope to find deep joy in this endeavor.

Manning Publications will be the publisher. They accepted my proposal, and we signed a contract. The working title of the book is The Way to Test Software. The title pays homage to Julia Child’s classic, The Way to Cook. Like Julia Child, I want to teach “master recipes” that can be applied to any testing situation.

I don’t want to share too many details this early in the process, but the tentative table of contents has the following parts:

  1. Orientation
  2. Testing Code
  3. Testing Features
  4. Testing Performance
  5. Running Tests
  6. Development Practices

Python will be the language of demonstration. This should be no surprise to anyone. I chose Python because I love the language. I also think it’s a great language for test automation. Python will be easy for both beginners and experts to learn. Besides, the book is about testing, not programming – Python will be just the linguistic tool for automation.

If you’re as excited about this book as I am, please let me know! I need all the encouragement I can get. This book probably won’t enter print until 2022, given the breadth of its scope. I’ll work to get it done as soon as I can.

Learning Python Test Automation

Do you want to learn how to automate tests in Python? Python is one of the best languages for test automation because it is easy to learn, concise to write, and powerful to scale. These days, there’s a wealth of great content on Python testing. Here’s a brief reference to help you get started.

If you are new to Python, read How Do I Start Learning Python? to find the best way to start learning the language.

If you want to roll up your sleeves, check out Test Automation University. I developed a “trifecta” of Python testing courses for TAU with videos, transcripts, quizzes, and example code. You can take them for FREE!

  1. Introduction to pytest
  2. Selenium WebDriver with Python
  3. Behavior-Driven Python with pytest-bdd

If you want some brief articles for reference, check out my Python Testing 101 blog series:

  1. Python Testing 101: Introduction
  2. Python Testing 101: unittest
  3. Python Testing 101: doctest
  4. Python Testing 101: pytest
  5. Python Testing 101: behave
  6. Python Testing 101: pytest-bdd
  7. Python BDD Framework Comparison

RealPython also has excellent guides:

I’ve given several talks about Python testing:

If you prefer to read books, here are some great titles:

Here are links to popular Python test tools and frameworks:

Do you have any other great resources? Drop them in the comments below! Happy testing!

Boa Constrictor Intro Video with Transcript

The Video

Boa Constrictor is the .NET Screenplay Pattern, and I’m its lead developer. Check out this intro video to learn why we need the Screenplay Pattern and how to use it with Boa Constrictor.

The Transcript

[Camera]

Hello, everyone! My name is Andrew Knight, or “Pandy” for short. I’m the Automation Panda – I build solutions to testing problems. Be sure to read my blog and follow me on Twitter at “AutomationPanda”.

Today, I’m going to introduce you to a new test automation library called Boa Constrictor, the .NET Screenplay Pattern. Boa Constrictor can help you make better interactions for better automation. Its primary use cases are Web UI and REST API interactions, but it can be extended to handle any type of interaction.

My team and I at PrecisionLender originally developed Boa Constrictor as the cornerstone of our .NET end-to-end test automation solution. We found the Screenplay Pattern to be a great way to scale our test development, avoid duplicate code, and stay focused on behaviors. In October 2020, together with help from our parent company Q2, we released Boa Constrictor as an open source project.

In this video, we will cover three things:

  1. First, problems with traditional ways of automating interactions.
  2. Second, why the Screenplay Pattern is a better way.
  3. Third, how to use the Screenplay Pattern with Boa Constrictor in C#.

Since Boa Constrictor is open source, you can check out its repository. I’ll paste the link below: https://github.com/q2ebanking/boa-constrictor. The repository also has a hands-on tutorial you can try. Make sure to have Visual Studio and some .NET skills because the code is written in C#.

My main goal with the Boa Constrictor project is to help improve test automation practices. For so long, our industry has relied on page objects, and I think it’s time we talk about a better way. Boa Constrictor strives to make that easy.

[Slide]

To start, let’s define that big “I” word I kept tossing around:

[Slide]

Interactions.

[Slide]

Simply put, interactions are how users operate software. For this video, I’ll focus on Web UI interactions, like clicking buttons and scraping text.

[Slide]

Interactions are indispensable to testing. The simplest way to define “testing” is interaction plus verification. That’s it! You do something, and you make sure it works.

Think about any functional test case that you have ever written or executed. The test case was a step-by-step procedure, in which each step had interactions and verifications.

[Slide]

Here’s an example of a simple DuckDuckGo search test. DuckDuckGo is a search engine like Google or Yahoo. The steps here are very straightforward.

[Slide]

Opening the search engine requires navigation.

[Slide]

Searching for a phrase requires entering keystrokes and clicking the search button.

[Slide]

Verifying results requires scraping the page title and result links from the new page. 

Interactions are everywhere!

[Slide]

Unfortunately, our industry struggles to handle automated Web UI interactions well. Even though most teams use Selenium WebDriver in their test automation code, every team seems to use it differently. There’s lots of duplicate code and flakiness, too. Let’s take a look at the way many teams evolve their WebDriver-based interactions. I will use C# for code examples, and I will continue to use DuckDuckGo for testing.

[Slide]

When teams first start writing test automation code using Selenium WebDriver, they frequently write raw calls. Anyone familiar with the WebDriver API should recognize these calls.

[Slide]

The WebDriver object is initialized using, say, ChromeDriver for the Chrome browser.

[Slide]

The first step to open the search engine calls “driver dot navigate dot go to URL” with the DuckDuckGo website address.

[Slide]

The second step performs the search by fetching Web elements using “driver dot find element” with locators and then calling methods like “send keys” and “click”.

[Slide]

The third step uses assertions to verify the contents of the page title and the existence of result links.

[Slide]

Finally, at the end of the test, the WebDriver quits the browser for cleanup.
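For reference, a rough C# sketch of those raw calls might look like this; the search input locator is the one mentioned later in this talk, while the button and result-link locators are illustrative guesses:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Raw WebDriver calls, with no waiting (which sets up the race conditions below)
IWebDriver driver = new ChromeDriver();

// Open the search engine
driver.Navigate().GoToUrl("https://www.duckduckgo.com/");

// Search for a phrase
driver.FindElement(By.Id("search_form_input_homepage")).SendKeys("panda");
driver.FindElement(By.Id("search_button_homepage")).Click();

// Verify the title and the result links
Assert.IsTrue(driver.Title.Contains("panda"));
Assert.IsTrue(driver.FindElements(By.CssSelector("a.result__a")).Count > 0);

// Quit the browser for cleanup
driver.Quit();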

Like I said, these are all common WebDriver calls. Unfortunately, there’s a big problem in this code.

[Slide]

Race conditions. There are three race conditions in this code in which the automation does NOT wait for the page to be ready before making interactions! WebDriver does not automatically wait for elements to load or titles to appear. Waiting is a huge challenge for Web UI automation, and it is one of the main reasons for “flaky” tests.

[Slide]

You could set an implicit wait that makes calls wait until target elements appear, but implicit waits don’t work for all cases, such as the title check in race condition #2.

[Slide]

Explicit waits provide much more control over waiting timeout and conditions. They use a “WebDriverWait” object with a pre-set timeout value, and they must be placed explicitly throughout the code. Here, they are placed in the three spots where race conditions could happen. Each “wait dot until” call takes in a function that returns true when the condition is satisfied.
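A partial sketch of those explicit waits, using the same illustrative locators as before:

// Requires OpenQA.Selenium.Support.UI for WebDriverWait
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));

// Race condition #1: wait for the search input before typing
wait.Until(d => d.FindElements(By.Id("search_form_input_homepage")).Count > 0);
driver.FindElement(By.Id("search_form_input_homepage")).SendKeys("panda");
driver.FindElement(By.Id("search_button_homepage")).Click();

// Race condition #2: wait for the title to contain the phrase
wait.Until(d => d.Title.Contains("panda"));

// Race condition #3: wait for result links to appear before checking them
wait.Until(d => d.FindElements(By.CssSelector("a.result__a")).Count > 0);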

[Slide]

These waits are necessary, but they cause new problems. First, they cause duplicate code because Web element locators are used multiple times. Notice how “search form input homepage” is called twice.

[Slide]

Second, raw calls with explicit waits make the code less intuitive. If I remove the comments from each paragraph of code, what’s left is a wall of text. It is difficult to understand what this code does at a glance.

[Slide]

To remedy these problems, most teams use the Page Object Pattern. In the Page Object Pattern, each page is modeled as a class with locator variables and interaction methods. So, a “search page” class could look like this.

[Slide]

At the top, there could be a constant for the page URL and variables for the search input and search button locators. Notice how each has an intuitive name.

[Slide]

Next, there could be a variable to hold the WebDriver reference. This reference would come via dependency injection through the constructor.

[Slide]

The first method would be a “load” method that navigates the browser to the page’s URL.

[Slide]

And, the second method would be a “search” method that waits for the elements to appear, enters the phrase into the input field, and clicks the search button.

This page object class has a decent structure and a mild separation of concerns. Locators and interactions have meaningful names. Page objects require a few more lines of code than raw calls at first, but their parts can easily be reused.
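Putting those pieces together, a sketch of such a page object might look like this (same illustrative locators as before):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public class SearchPage
{
    public const string Url = "https://www.duckduckgo.com/";

    // Locators with intuitive names (the button locator is an illustrative guess)
    public static readonly By SearchInput = By.Id("search_form_input_homepage");
    public static readonly By SearchButton = By.Id("search_button_homepage");

    private readonly IWebDriver _driver;
    private readonly WebDriverWait _wait;

    // The WebDriver reference comes in via dependency injection
    public SearchPage(IWebDriver driver)
    {
        _driver = driver;
        _wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
    }

    // Navigate the browser to the page's URL
    public void Load() => _driver.Navigate().GoToUrl(Url);

    // Wait for the elements, enter the phrase, and click the search button
    public void Search(string phrase)
    {
        _wait.Until(d => d.FindElements(SearchInput).Count > 0);
        _driver.FindElement(SearchInput).SendKeys(phrase);
        _driver.FindElement(SearchButton).Click();
    }
}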

[Slide]

The original test steps can be rewritten using this new SearchPage class. Notice how much cleaner this new code looks.

[Slide]

The other steps can be rewritten using page objects, too.

[Slide]

Unfortunately, page objects themselves suffer problems with duplication in their interaction methods.

[Slide]

Suppose a page object needs a method to click an element. We already know the logic: wait for the element to exist, and then click it.

But what about clicking another element? This method is essentially hard coded for one button.

[Slide]

A second “click” method is needed to click the other button.

[Slide]

Unfortunately, the code for both methods is the same. The code will be the same for any other click method, too. This is copy pasta, and it happens all the time in page objects. I’ve seen page objects grow to be thousands of lines long due to duplicative methods like this.

At this point, some teams will say, “Aha! More duplicate code? We can solve this problem with more Object-Oriented Programming!”

[Slide]

And they’ll create the infamous “base page”, a parent class for all other page object classes.

[Slide]

The base page will have variables for the WebDriver and the wait object.

[Slide]

It will also provide common interaction methods, such as this click method that can click on any element. Abstraction for the win!

[Slide]

Child pages will inherit everything from the base page. Child page interaction methods frequently just call base page methods.
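A sketch of such a base page, assuming the same Selenium namespaces as the earlier snippets:

// "Base page" sketch: a parent class holding the WebDriver, the wait object,
// and shared interaction methods
public abstract class BasePage
{
    protected IWebDriver Driver { get; }
    protected WebDriverWait Wait { get; }

    protected BasePage(IWebDriver driver)
    {
        Driver = driver;
        Wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
    }

    // A generic click method that works on any element
    protected void Click(By locator)
    {
        Wait.Until(d => d.FindElements(locator).Count > 0);
        Driver.FindElement(locator).Click();
    }
}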

I’ve seen many teams stop here and say, “This is good enough.” Unfortunately, this really isn’t very good at all!

[Slide]

The base page helps mitigate code duplication, but it doesn’t solve its root cause. Page objects inherently combine two separate concerns: page structure and interactions. Interactions are often generic enough to be used on any Web element. Coupling interaction code with specific locators or pages forces testers to add new page object methods for every type of interaction needed for an element. Every element could potentially need a click, a text, a “displayed”, or any other type of WebDriver interaction. That’s a lot of extra code that shouldn’t be necessary. The base page also becomes very top-heavy as testers add more and more code to share.

[Slide]

Most frustratingly, the page object code I showed here is merely one type of implementation. What do your page objects look like? I’d bet dollars to doughnuts that they look different than mine. Page objects are completely free form. Every team implements them differently. There’s no official version of the Page Object Pattern. There’s no conformity in its design. Even worse, within its design, there is almost no way for the pattern to enforce good practices. That’s why people argue whether page object locators should be public or private. Page objects would be better described as a “convention” than as a true design pattern.

[Slide]

There must be a better way to handle interactions. Thankfully, there is.

[Slide]

Let’s take a closer look at how interactions happen.

[Slide]

First, there is someone who initiates the interactions. Usually, this is a user. They are the ones making the clicks and taking the scrapes. Let’s call them the “Actor”.

[Slide]

Second, there is the thing under test. For our examples in this video, that’s a Web app. It has pages with elements. Web page structure is modeled using locators to access page elements from the DOM. Keep in mind, the thing under test could also be anything else, like a mobile app, a microservice, or even a command line.

[Slide]

Third, there are the interactions themselves. For Web apps, they could be simple clicks and keystrokes, or they could be more complex interactions like logging into the app or searching for a phrase. Each interaction will do the same type of operation on whatever target page or element it is given.

[Slide]

Finally, there are objects that enable Actors to perform certain types of Interactions. For example, browser interactions need a tool like Selenium WebDriver to make clicks and scrapes. Let’s call these things “Abilities”.

Actors, Abilities, and Interactions are each different types of concerns. We could summarize their relationship in one line.

[Slide]

Actors use Abilities to perform Interactions.

Actors use Abilities to perform Interactions.

[Slide]

This is the heart of the Screenplay Pattern. In the Page Object Convention, page objects become messy because concerns are all combined. The Screenplay Pattern separates concerns for maximal reusability and scalability.

[Slide]

So, let’s learn how to Screenplay, using Boa Constrictor.

[Slide]

“Boa Constrictor” is an open source C# implementation of the Screenplay Pattern my team and I developed at PrecisionLender. Like I said before, it is the cornerstone of PrecisionLender’s end-to-end test automation solution. It can be used with any .NET test framework, like SpecFlow or NUnit. The GitHub repository name is q2ebanking/boa-constrictor, and the NuGet package name is Boa.Constrictor.

[Slide]

Let’s rewrite that DuckDuckGo search test from before using Boa Constrictor. As you watch this video, I recommend just reading along with the code as it appears on screen to get the concepts. Trying to code along in real time might be challenging. After this video, you can take the official Boa Constrictor tutorial to get hands-on with the code.

[Slide]

To use Boa Constrictor, you will need to install the Boa Constrictor and Selenium WebDriver NuGet packages. My example code will also use Fluent Assertions and ChromeDriver.

[Slide]

The Actor is the entity that initiates Interactions. All Screenplay calls start with an Actor. Most test cases need only one Actor.

The Actor class optionally takes two arguments. The first argument is a name, which can help describe who the actor is. The name will appear in logged messages. The second argument is a logger, which will send log messages from Screenplay calls to a target destination. Loggers must implement Boa Constrictor’s ILogger interface. ConsoleLogger is a class that will log messages to the system console. You can define your own custom loggers by implementing ILogger.
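In code, that might look roughly like this (the actor’s name is arbitrary):

// Create the Actor with a name and a console logger
IActor actor = new Actor(name: "Andy", logger: new ConsoleLogger());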

[Slide]

Abilities enable Actors to initiate Interactions. For example, an Actor needs a Selenium WebDriver instance to click elements on a Web page.

Read this new line in plain English: “The actor can browse the Web with a new ChromeDriver.” Boa Constrictor’s fluent-like syntax makes its call chains very readable. “actor dot Can” adds an Ability to an Actor.

[Slide]

“BrowseTheWeb” is the Ability that enables Actors to perform Web UI Interactions. “BrowseTheWeb dot With” provides the WebDriver object that the Actor will use, which, in this case, is a new ChromeDriver object. Boa Constrictor supports all browser types.

All Abilities must implement the IAbility interface. Actors can be given any number of Abilities. “BrowseTheWeb” simply holds a reference to the WebDriver object. Web UI Interactions will retrieve this WebDriver object from the Actor.
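In C#, that Ability line is:

// Give the Actor the Ability to browse the Web using Chrome
actor.Can(BrowseTheWeb.With(new ChromeDriver()));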

[Slide]

Before the Actor can call any WebDriver-based Interactions, the Web pages under test need models. These models should be static classes that include locators for elements on the page and possibly page URLs. Page classes should only model structure – they should not include any interaction logic.

The Screenplay Pattern separates the concerns of page structure from interactions. That way, interactions can target any element, maximizing code reusability. Interactions like clicks and scrapes work the same regardless of the target elements.

The SearchPage class has two members. The first member is a URL string named Url. The second member is a locator for the search input element named SearchInput.

A locator has two parts. First, it has a plain-language Description that will be used for logging. Second, it has a Query that is used to find the element on the page. Boa Constrictor uses Selenium WebDriver’s By queries. For convenience, locators can be constructed using the statically imported L method.
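A sketch of that page class might look like the following; the exact locator type and the required using statements may differ between Boa Constrictor versions, so treat this as illustrative:

// Page model: structure only, no interaction logic
public static class SearchPage
{
    public const string Url = "https://www.duckduckgo.com/";

    // L(description, query) builds a locator with a plain-language description
    public static IWebLocator SearchInput => L(
        "DuckDuckGo search input",
        By.Id("search_form_input_homepage"));
}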

[Slide]

The Screenplay Pattern has two types of Interactions. The first type of Interaction is called a Task. A Task performs actions without returning a value. Examples of Tasks include clicking an element, refreshing the browser, and loading a page. These interactions all “do” something rather than “get” something.

Boa Constrictor provides a Task named Navigate for loading a Web page using a target URL. Read this line in plain English: “The actor attempts to navigate to the URL for the search page.” Again, Boa Constrictor’s fluent-like syntax is very readable. Clearly, this line will load the DuckDuckGo search page. 
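That call in code:

// Load the DuckDuckGo search page
actor.AttemptsTo(Navigate.ToUrl(SearchPage.Url));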

[Slide]

“Actor dot attempts to” calls a Task. All Tasks must implement the ITask interface. When the Actor calls “AttemptsTo” on a task, it calls the task’s “PerformAs” method.

[Slide]

“Navigate” is the name of the task, and “dot to URL” provides the target URL.

[Slide]

The Navigate Task’s “PerformAs” method fetches the WebDriver object from the Actor’s Ability and uses it to load the given URL.

[Slide]

“Search page dot URL” comes from the SearchPage class we previously wrote. Putting the URL in the page class makes it universally available.

[Slide]

The second type of Interaction is called a Question. A Question returns an answer after performing actions. Examples of Questions include getting an element’s text, location, and appearance. Each of these interactions return some sort of value. 

Boa Constrictor provides a Question named ValueAttribute that gets the “value” of the text currently inside an input field. Read this line in plain English: “The actor asking for the value attribute of the search page’s search input element should be empty.”
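That line in code, with the Fluent Assertion attached:

// The search input should be empty when the page first loads
actor.AskingFor(ValueAttribute.Of(SearchPage.SearchInput)).Should().BeEmpty();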

[Slide]

“Actor dot asking for” calls a Question. All Questions must implement the IQuestion interface. When the Actor calls “AskingFor” or the equivalent “AsksFor” method, it calls the question’s “RequestAs” method.

[Slide]

“ValueAttribute” is the name of the Question, and “dot Of” provides the target Web element’s locator. 

[Slide]

The ValueAttribute’s “RequestAs” method fetches the WebDriver object, waits for the target element to exist on the page, and scrapes and returns its value attribute.

[Slide]

“Search page dot search input” is the locator for the search input field. It comes from the SearchPage class.

[Slide]

Finally, once the value is obtained, the test must make an assertion on it. “Should be empty” is a Fluent Assertion that verifies that the search input field is empty when the page is first loaded.

[Slide]

The test case’s next step is to enter a search phrase. Doing this requires two interactions: typing the phrase into the search input and clicking the search button. However, since searching is such a common operation, we can create a custom interaction for search by composing the lower-level interactions together.

[Slide]

The “SearchDuckDuckGo” task takes in a search phrase.

[Slide]

In its “PerformAs” method, it calls two other interactions: “SendKeys” and “Click”.

[Slide]

Using one task to combine these lower-level interactions makes the test code more readable and understandable. It also improves automation reusability. Read this line in plain English now: “The actor attempts to search DuckDuckGo for ‘panda’.” That’s concise and intuitive!
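A rough sketch of that custom Task and its call is below; the SendKeys.To and Click.On builder names and the SearchPage.SearchButton locator are my assumptions based on the interactions named in this talk, and the ITask signature is simplified for illustration:

// A custom Task that composes lower-level interactions
public class SearchDuckDuckGo : ITask
{
    private string Phrase { get; }

    private SearchDuckDuckGo(string phrase) => Phrase = phrase;

    // Builder method for a readable call chain
    public static SearchDuckDuckGo For(string phrase) => new SearchDuckDuckGo(phrase);

    public void PerformAs(IActor actor)
    {
        actor.AttemptsTo(SendKeys.To(SearchPage.SearchInput, Phrase));
        actor.AttemptsTo(Click.On(SearchPage.SearchButton));
    }
}

// Usage: "The actor attempts to search DuckDuckGo for 'panda'."
actor.AttemptsTo(SearchDuckDuckGo.For("panda"));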

[Slide]

The last test case step should verify that result links appear after entering a search phrase. Unfortunately, this step has a race condition: the result page takes a few seconds to display result links. Automation must wait for those links to appear. Checking too early will make the test case fail.

Boa Constrictor makes waiting easy. Read this line in plain English: “The actor attempts to wait until the appearance of result page result links is equal to true.” In simpler terms, “Wait until the result links appear.”
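That wait in code:

// Wait until result links appear on the result page
actor.AttemptsTo(Wait.Until(
    Appearance.Of(ResultPage.ResultLinks),
    IsEqualTo.True()));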

[Slide]

“Wait” is a special Task. It will repeatedly call a Question until the answer meets a given condition.

[Slide]

For this step, the Question is the appearance of result links on the result page. Before links are loaded, this Question will return “false”. Once links appear, it will return “true”.

[Slide]

The Condition for waiting is for the answer value to become “true”. Boa Constrictor provides several conditions out of the box, such as equality, mathematical comparisons, and string matching. You can also implement custom conditions by implementing the “ICondition” interface.

[Slide]

Waiting is smart – it will repeatedly ask the question until the answer is met, and then it will move on. This makes waiting much more efficient than hard sleeps. If the answer does not meet the condition within the timeout, then the wait will raise an exception. The timeout defaults to 30 seconds, but it can be overridden.

Many of Boa Constrictor’s WebDriver-based interactions already handle waiting. Anything that uses a target element, such as “Click”, “SendKeys”, or “Text” will wait for the element to exist before attempting the operation. We saw this in some of the previous example code. However, there are times where explicit waits are needed. Interactions that query appearance or existence do not automatically wait.

[Slide]

The final step is to quit the browser. Boa Constrictor’s “QuitWebDriver” task does this. If you don’t quit the browser, then it will remain open and turn into a zombie. Always quit the browser. Furthermore, in whatever test framework you use, put the step to quit the browser in a cleanup or teardown routine so that it is called even when the test fails.
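In code, that step might look like this; the ForBrowser builder name is my assumption, so check the Boa Constrictor docs for the exact call:

// Quit the browser so it does not linger as a zombie process
actor.AttemptsTo(QuitWebDriver.ForBrowser());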

[Slide]

And there we have our completed test using Boa Constrictor’s Screenplay Pattern. All the separated concerns come together beautifully to handle interactions in a much better way.

[Slide]

As we said before, the Screenplay Pattern can be summed up in one line:

[Slide]

Actors [Slide] use Abilities [Slide] to perform Interactions.

It’s that simple. Actors use Abilities to perform Interactions.

[Slide]

For those who like Object-Oriented Programming, the Screenplay Pattern is, in a sense, a SOLID refactoring of the Page Object Convention. SOLID refers to five design principles for maintainability and extensibility. I won’t go into detail about each principle here because the information is a bit dense, but if you’re interested, then pause the video, snap a quick screenshot, and check out each of these principles later. Wikipedia is a good source. You’ll find that the Screenplay Pattern follows each one nicely.

[Slide]

So, why should you use the Screenplay Pattern over the Page Object Convention or raw WebDriver calls? There are a few key reasons.

[Slide]

First, the Screenplay Pattern, and specifically the Boa Constrictor project, provide rich, reusable, reliable interactions out of the box. Boa Constrictor already has Tasks and Questions for every type of WebDriver-based interaction. Each one is battle-hardened and safe.

[Slide]

Second, Screenplay interactions are composable. Like we saw with searching for a phrase, you can easily combine interactions. This makes code easier to use and reuse, and it avoids lots of duplication.

[Slide]

Third, the Screenplay Pattern makes waiting easy using existing questions and conditions. Waiting is one of the toughest parts of black box automation.

[Slide]

Fourth, Screenplay calls are readable and understandable. They use a fluent-like syntax that reads more like prose than code.

[Slide]

Finally, the Screenplay Pattern, at its core, is a design pattern for any type of interaction. In this video, I showed how to use it for Web UI interactions, but the Screenplay Pattern could also be used for mobile, REST API, and other platforms. You can make your own interactions, too!

[Slide]

Overall, the Screenplay Pattern [Slide] provides better interactions [Slide] for better automation.

That’s the point. It’s not just another Selenium WebDriver wrapper. It’s not just a new spin on page objects. Screenplay is a great way to exercise any feature behaviors under test.

And, as we saw before…

[Slide]

The Screenplay Pattern isn’t that complicated. Actors use Abilities to perform Interactions. That’s it. The programming behind it just has some nifty dependency injection.

[Slide]

If you’d like to start using the Screenplay Pattern for your test automation, there are a few ways to get started.

[Slide]

If you are programming in C#, you can use Boa Constrictor, the library I showed in the examples. You can download Boa Constrictor as a NuGet package. It works with any .NET test framework, like SpecFlow and NUnit. I recommend taking the hands-on tutorial so you can develop a test automation project yourself with Boa Constrictor. Also, since Boa Constrictor is an open source project, I’d love for you to contribute!

[Slide]

If you are programming in Java or JavaScript, you can use Serenity BDD – a mature, complete test automation framework that includes the Screenplay Pattern. Serenity BDD greatly influenced Boa Constrictor, but the two are entirely separate projects. Boa Constrictor is NOT Serenity BDD for .NET. Instead, Boa Constrictor aims to be a simpler, standalone implementation of the Screenplay Pattern.

[Slide]

If none of those options suit you, then you could create your own. The Screenplay Pattern does require a bit of boilerplate code, but it’s worthwhile in the end. You can always reference code from Boa Constrictor and Serenity BDD.

[Slide]

Thank you so much for taking the time to learn more about the Screenplay Pattern and Boa Constrictor. I’d like to give special thanks to everyone at PrecisionLender and Q2 who helped make Boa Constrictor’s open source release happen.

Again, my name is Andrew Knight. I’m the Automation Panda. Be sure to read my blog, follow me on Twitter, and reach out to me if you’d like to join the Boa Constrictor project! Thank you.

Introducing Boa Constrictor: The .NET Screenplay Pattern

Today, I’m excited to announce the release of a new open source project for test automation: Boa Constrictor, the .NET Screenplay Pattern!

The Screenplay Pattern helps you make better interactions for better automation. The pattern can be summarized in one line: Actors use Abilities to perform Interactions.

  • Actors initiate Interactions. Every test has an Actor.
  • Abilities enable Actors to perform Interactions. They hold objects that Interactions need, like WebDrivers or REST API clients.
  • Interactions exercise behaviors under test. They could be clicks, requests, commands, and anything else.

This separation of concerns makes Screenplay code very reusable and scalable, much more so than traditional page objects. Check it out, here’s a C# script to test a search engine:

// Create the Actor
IActor actor = new Actor(logger: new ConsoleLogger());

// Add an Ability to use a WebDriver
actor.Can(BrowseTheWeb.With(new ChromeDriver()));

// Load the search engine
actor.AttemptsTo(Navigate.ToUrl(SearchPage.Url));

// Get the page's title
string title = actor.AsksFor(Title.OfPage());

// Search for something
actor.AttemptsTo(Search.For("panda"));

// Wait for results
actor.AttemptsTo(Wait.Until(
    Appearance.Of(ResultPage.ResultLinks),
    IsEqualTo.True()));

Boa Constrictor provides many interactions for Selenium WebDriver and RestSharp out of the box, like Navigate, Title, and Appearance shown above. It also lets you compose interactions together, like how Search is a composition of typing and clicking.

Over the past two years, my team and I at PrecisionLender, a Q2 Company, developed Boa Constrictor internally as the cornerstone of Boa, our comprehensive end-to-end test automation solution. We were inspired by Serenity BDD’s Screenplay implementation. After battle-hardening Boa Constrictor with thousands of automated tests, we are releasing it publicly as an open source project. Our goal is to help everyone make better interactions for better test automation.

If you’d like to give Boa Constrictor a try, then start with the tutorial. You’ll implement that search engine test from above in full. Then, once you’re ready to use it for some serious test automation, add the Boa.Constrictor NuGet package to your .NET project and go!

You can view the full source code on GitHub at q2ebanking/boa-constrictor. Check out the repository for full information. In the coming weeks, we’ll be developing more content and code. Since Boa Constrictor is open source, we’d love for you to contribute to the project, too!

Mentoring Software Testers

Mentoring is important in any field, but it’s especially critical for software testing. I’ve been blessed with good mentors throughout my life, and I’ve also been honored to serve as a mentor for other software testers. In this article, I’ll explain what mentoring is and how to practice it within the context of software testing.

What is Mentoring?

Mentoring is a one-on-one relationship in which the experienced guides the inexperienced.

  • It is explicit in that the two individuals formally agree to the relationship.
  • It is intentional in that both individuals want to participate.
  • It is long-term in that the relationship is ongoing.
  • It is purposeful in that the relationship has a clear goal or development objective.
  • It is meaningful in that growth happens for both individuals.

Mentoring is more than merely answering questions or doing code reviews. It is an intentional relationship for learning and growth.

Why Software Testing Mentoring Matters

Software testing is a specialty within software engineering. People enter software testing roles in various ways, like these:

  • A new college graduate lands their first job as a QA engineer.
  • A 20-year manual tester transitions into an automation role.
  • A developer assumes more testing responsibilities for their team.
  • A coding bootcamp graduate decides to change careers.

There’s no single path to entering software testing. Personally, I graduated college with a Computer Science degree and found my way into testing through internships.

Unfortunately, there’s also no “universal” training program for software testing. Universities don’t offer programs in software testing – at best, they introduce unit test frameworks and bug report formats. Most coding bootcamps focus on Web development or data science. Certifications like ISTQB can feel arcane and antiquated (and, to be honest, I don’t hold any myself).

The best ways we have for self-training are communities, conferences, and online resources. For example, I’m a member of my local testing community, the Triangle Software Quality Association (TSQA). TSQA hosts meetups monthly and a conference every other year in North Carolina. Through these events, TSQA welcomes everyone interested in software testing to learn, share, and network. I also recommend free courses from Test Automation University, and I frequently share blogs and articles from other software testing leaders.

Nevertheless, while these resources are deeply valuable, they can be overwhelming for a newbie who literally does not know where to begin. An experienced mentor provides guidance. They can introduce a newcomer to communities and events. They can recommend specific resources and answer questions in a safe space. Mentors can also provide encouragement, motivation, and accountability – things that online resources cannot provide.

How to Do Mentoring

I’ve had the pleasure of mentoring multiple individuals throughout my career. I mentor no more than a few individuals at a time so that I can give each mentee the attention they deserve. Mentoring relationships typically start in one of these ways:

  1. Someone asks me to be their mentor.
  2. A manager or team leader arranges a mentoring relationship.
  3. As a team leader, I initiate a mentoring relationship with a new team member (because that’s a team leader’s job).

Almost all of my mentoring relationships have existed within company walls. Mentoring becomes one of my job responsibilities. Personally, I would recommend forming mentoring relationships within company walls so that both individuals can dedicate time, availability, and shared knowledge to each other. However, that may not always be possible (if management does not prioritize professional development) or beneficial (if the work environment is toxic).

My teammates and me at TSQA 2020!

From the start, I like to be explicit about the mentoring relationship with the mentee. I learn what they want to get out of mentoring. I give them the top priority of my time and attention, but I also expect them to reciprocate. I don’t enter mentoring relationships if the other person doesn’t want one.

Then, I create what I call a “growth plan” for the mentee. The growth plan is a tailored training program for the mentee to accomplish their learning objectives. I set a schedule with the following types of activities:

  • One-on-one or small-group teaching sessions
    • The best format for making big points that stick
    • Gives individual care and attention to the mentee
    • Provides a safe space for questions
  • Reading assignments
    • Helpful for independently learning specific topics
    • May be blogs, articles, or documentation
    • Allows the mentee to complete it at their own pace
  • Online training courses
    • Example: Test Automation University
    • Provides comprehensive, self-paced instruction
    • However, may not be 100% pertinent to the learning objectives
  • Pair programming or code review sessions
    • Hands-on time for mentor and mentee to work together
    • Allows learning by example and by osmosis
    • However, can be mentally draining, so use sparingly
  • Independent work items
    • Real, actual work items that the team must complete
    • Makes the mentee feel like they are making valuable contributions even while still learning
    • “Practice makes perfect”

These activities should be structured to fit all learning styles and build upon each other. For example, if I am mentoring someone about how to do Behavior-Driven Development, I would probably schedule the following activities:

  1. A “Welcome to BDD” whiteboard session with me
  2. Reading my BDD 101 series
  3. Watching a video about Example Mapping
  4. A small group activity for doing Example Mapping on a new story
  5. A work item to write Gherkin scenarios for that mapped story
  6. A review session for those scenarios
  7. A “BDD test frameworks” deep dive session with me
  8. A work item to automate the Gherkin scenarios they wrote
  9. A review session for those automated tests
  10. Another work item for writing and automating a new round of tests

Any learning objective can be mapped to a growth plan like this. Make sure each step is reasonable, well-defined, and builds upon the previous steps.

I would like to give one warning about mentoring for test automation. If a mentee wants to learn test automation, then they must first learn programming. Historically, software testing was a manual process, and most testers didn’t need to know how to code. Now, automation is indispensable for organizations that want to move fast without breaking things. Most software testing jobs require some level of automation skills. Many employers are even forcing their manual testers to pick up automation. However, good automation skills are rooted in good programming skills. Someone can’t learn “just enough Java” and then expect to be a successful automation engineer – they must, in effect, first become a developer.

Characteristics of Good Mentoring

Mentoring requires both individuals to commit time and effort to the relationship.

To be a good mentor:

  • Be helpful – provide valuable guidance that the mentee needs
  • Be prepared – know what you should know, and be ready to share it
  • Be approachable – never be “too busy” to talk
  • Be humble – reveal your limits, and admit what you don’t know
  • Be patient – newbies can be slow

To be a good mentee:

  • Seek long-term growth, not just answers to today’s questions
  • Come prepared in mind and materials
  • Ask thoughtful questions and record the answers
  • Practice what you learn
  • Express appreciation for your mentor – it’s often a thankless job

Take Your Time

Mentoring may take a lot of time, but good mentoring bears good fruit. Mentees will produce higher-quality work. They’ll get things done faster. They’ll also have more confidence in themselves. Mentors will feel greater satisfaction in their own work, too. The whole team wins.

The alternative would waste a lot more time. Without good mentoring, newcomers will be forced to sink or swim. They won’t be able to finish tasks as quickly, and their work will more likely have problems. They will feel stressed and doubtful, too. Forcing people to tough things out is a very inefficient learning process, and it can also devolve into forms of hazing in unhealthy work environments. Anytime someone says there isn’t enough time for mentoring, I would reply by saying there’s even less time for fixing poor-quality work. In the case of software testing, the lack of mentoring could cause bugs to escape to production!

I encourage leaders, managers, and senior engineers to make mentoring happen as part of their job responsibilities. Dedicate time for it. Facilitate it. Normalize it. Be intentional. Furthermore, I encourage leaders to be force multipliers: mentor others to mentor others. Time is always tight, so make the most of it.

More Advice?

I hope this article is helpful! Do you have any thoughts, advice, or questions about mentoring, specifically in the field of software testing? I’d love to hear them, so drop a comment below.

I wrote this article as a follow-up to an “Ask Me Anything” session on July 15, 2020 with Tristan Lombard and the Testim Community. Tristan published the full AMA transcript in a Medium article. Many thanks to them for the opportunity!

Test-Driving TestProject’s New Python SDK

TestProject recently released its new OpenSDK, and one of its major features is the inclusion of Python testing support! Since I love using Python for test automation, I couldn’t wait to give it a try. This article is my crash-course tutorial on writing Web UI tests in Python with TestProject.

What is TestProject?

TestProject is a free end-to-end test automation platform for Web, mobile, and API tests. It provides a cloud-based way for teams to build, run, share, and analyze tests. Manual testers can visually build tests for desktop or mobile sites using TestProject’s in-browser recorder and test builder. Automation engineers can use TestProject’s SDKs in Java, C#, and now Python for developing coded test automation solutions, and they can use packages already developed by others in the community through TestProject’s add-ons. Whether manual or automated, TestProject displays all test results in a sleek reporting dashboard with helpful analytics. And all of these features are legitimately free – there’s no tiered model or service plan.

Recently, TestProject announced the official release of its new OpenSDK. This new SDK (“software development kit”) provides a simple, unified interface for running tests with TestProject across multiple platforms and languages (now including Python). Things look exciting for the future of TestProject!

What’s My Interest?

It’s no secret that I love testing with Python. When I heard that TestProject added Python support, I knew I had to give it a try. I had never used TestProject before, but I was interested to learn what it could do. Specifically, I wanted to see the value it could bring to reporting automated tests. In the Python space, test automation is slick, but reporting can be rough since frameworks like pytest and unittest are command-line-focused. I also wanted to see if TestProject’s SDK would genuinely help me automate tests or if it would get in my way. Furthermore, I know some great people in the TestProject community, so I figured it was time to jump in myself!

The Python SDK

TestProject’s Python SDK is an open-source project. It was originally developed by Bas Dijkstra, with the support of the TestProject team, and its code is hosted on GitHub. The Python SDK supports Selenium for Web UI automation (which will be the focus for this tutorial) and Appium for Android and iOS UI automation as well!

Since I’d never used TestProject before, let alone this new Python SDK, I wanted to review the code to see how to use it. Thankfully, the README included lots of helpful information and example code. When I looked at the code for TestProject’s BaseDriver, I discovered that it simply extends Selenium WebDriver’s RemoteDriver class. That means all the TestProject WebDrivers use exactly the same API as Python’s Selenium WebDriver implementation. To me, that was a big relief. I know WebDriver’s API very well, so I wouldn’t need to learn anything different in order to use TestProject. It also means that any test automation project can be easily retrofitted to use TestProject’s SDKs – they just need to swap in a new WebDriver object!
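
For illustration, here’s a minimal sketch of what that swap could look like in a standalone script. The project name and token values below are placeholders, and the keyword arguments mirror the ones used in the browser fixture later in this article:

# Before: a plain Selenium WebDriver
# from selenium import webdriver
# driver = webdriver.Chrome()

# After: TestProject's drop-in replacement (placeholder project name and token)
from src.testproject.sdk.drivers import webdriver

driver = webdriver.Chrome(projectname='My Project', token='MY_DEV_TOKEN')
driver.get('https://www.duckduckgo.com')
print(driver.title)
driver.quit()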

Setup Steps

TestProject has a straightforward architecture. Users sign up for free TestProject accounts online. Then, they set up their own machines for running tests. Each testing machine must have the TestProject agent installed and linked to a user’s account. When tests run, agents automatically push results to the TestProject cloud. Users can then log into the TestProject portal to view and analyze results. They can invite teammates to share results, and they can also set up multiple test machines with agents. Users can even integrate TestProject with other tools like Jenkins, qTest, and Sauce Labs. The TestProject docs, especially the ecosystem diagram, explain everything in more detail.

When I did my test drive, I created a TestProject account, installed the agent on my Mac, and ran Python Web UI tests from my Mac. I already had the latest version of Python installed (Python 3.8 at the time of writing this article). I also already had my target browsers installed: Google Chrome and Mozilla Firefox.

Below are the precise steps I followed to set up TestProject:

1. Sign up for an account

TestProject accounts are “free forever.” Use this signup link.

The TestProject signup page

2. Download the TestProject Agent

The signup wizard should direct you to download the TestProject agent. If not, you can always download it from the TestProject dashboard. Be warned, the download package is pretty large – the macOS package was 345 MB. Alternatively, you can fetch the agent as a container image from Docker Hub.

The TestProject agent download page

The TestProject agent contains all the stuff needed to run tests and upload results to the TestProject app in the cloud. You don’t need to install WebDriver executables like ChromeDriver or geckodriver. Once the agent is downloaded, install it on the machine and register the agent with your account. For me, registration happened automatically.

This is what the TestProject agent looks like when running on macOS. You can also close this window to let it run in the background.

3. Find your developer token

You’ll need to use your developer token to connect your automated tests to your account in the TestProject app. The signup wizard should reveal it to you, but you can always find it (and also reset it) on the Integrations page.

The Integrations page. Check here for your developer token. No, you can’t use mine.

4. Install the Python SDK

TestProject’s Python SDK is distributed as a package through PyPI. To install it, simply run pip install testproject-python-sdk at the command line. This package will also install dependencies like selenium and requests.

A Classic Web UI Test

After setting up my Mac to use TestProject, it was time to write some Web UI tests in Python! Since I discovered that TestProject’s WebDriver objects could easily retrofit any existing test automation project, I thought, “What if I try to run my PyCon 2020 tutorial project with TestProject?” For PyCon 2020, I gave an online tutorial about building a Web UI test automation project in Python from the ground up using pytest and Selenium WebDriver. The tutorial includes one test case: a DuckDuckGo web search and verification. I thought it would be easy to integrate with TestProject since I already had the code. Thankfully, it was!

Below, I’ll walk through my code. You can check out my example project repository from GitHub at AndyLPK247/testproject-python-sdk-example. My code will be a bit more advanced than the examples shown in the Python SDK’s README or in Bas Dijkstra’s tutorial article because it uses the Page Object Model and pytest fixtures. Make sure to pip install pytest, too.

1. Write the test steps

The test case covers a simple DuckDuckGo web search. Whenever I automate tests, I always write out the steps in plain language. Good tests follow the Arrange-Act-Assert pattern, and I like to use Gherkin’s Given-When-Then phrasing. Here’s the test case:

Scenario: Basic DuckDuckGo Web Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "panda"
    Then the search result query is "panda"
    And the search result links pertain to "panda"
    And the search result title contains "panda"

2. Specify automation inputs

Inputs configure how automated tests run. They can be passed into a test automation solution using configuration files. Testers can then easily change input values in the config file without changing code. Automation should read config files once at the start of testing and inject necessary inputs into every test case.

In Python, I like to use JSON for config files. JSON data is simple and hierarchical, and Python includes a module in its standard library named json that can parse a JSON file into a Python dictionary in one line. I also like to put config files either in the project root directory or in the tests directory.

Here’s the contents of config.json for this test:

{
  "browser": "Chrome",
  "implicit_wait": 10,
  "testproject_projectname": "TestProject Python SDK Example",
  "testproject_token": ""
}
  • browser is the name of the browser to test
  • implicit_wait is the implicit waiting timeout for the WebDriver instance
  • testproject_projectname is the project name to use for this test suite in the TestProject app
  • testproject_token is the developer token

3. Read automation inputs

Automation code should read inputs one time before any tests run and then inject inputs into appropriate tests. pytest fixtures make this easy to do.

I created a fixture named config in the tests/conftest.py module to read config.json:

import json
import pytest


@pytest.fixture(scope='session')
def config():

  # Read the file
  with open('config.json') as config_file:
    config = json.load(config_file)
  
  # Assert values are acceptable
  assert config['browser'] in ['Firefox', 'Chrome', 'Headless Chrome']
  assert isinstance(config['implicit_wait'], int)
  assert config['implicit_wait'] > 0
  assert config['testproject_projectname'] != ""
  assert config['testproject_token'] != ""

  # Return config so it can be used
  return config

Setting the fixture’s scope to “session” means that it will run only one time for the whole test suite. The fixture reads the JSON config file, parses its text into a Python dictionary, and performs basic input validation. Note that Firefox, Chrome, and Headless Chrome will be supported browsers.

4. Set up WebDriver

Each Web UI test should have its own WebDriver instance so that it remains independent from other tests. Once again, pytest fixtures make setup easy.

The browser fixture in tests/conftest.py initializes the appropriate TestProject WebDriver type based on inputs returned by the config fixture:

from selenium.webdriver import ChromeOptions
from src.testproject.sdk.drivers import webdriver


@pytest.fixture
def browser(config):

  # Initialize shared arguments
  kwargs = {
    'projectname': config['testproject_projectname'],
    'token': config['testproject_token']
  }

  # Initialize the TestProject WebDriver instance
  if config['browser'] == 'Firefox':
    b = webdriver.Firefox(**kwargs)
  elif config['browser'] == 'Chrome':
    b = webdriver.Chrome(**kwargs)
  elif config['browser'] == 'Headless Chrome':
    opts = ChromeOptions()
    opts.add_argument('headless')
    b = webdriver.Chrome(chrome_options=opts, **kwargs)
  else:
    raise Exception(f'Browser "{config["browser"]}" is not supported')

  # Make its calls wait for elements to appear
  b.implicitly_wait(config['implicit_wait'])

  # Return the WebDriver instance for the setup
  yield b

  # Quit the WebDriver instance for the cleanup
  b.quit()

This was the only section of code I needed to change to make my PyCon 2020 tutorial project work with TestProject. I had to change the WebDriver invocations to use the TestProject classes. I also had to add arguments for the project name and developer token, which come from the config file. (Note: you may alternatively set the developer token as an environment variable.)
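
If you take the environment-variable route, one simple option is to let a variable override the value from config.json before building the keyword arguments. The sketch below is my own variation, not the SDK’s built-in mechanism, and the TESTPROJECT_TOKEN name is purely illustrative:

import os

def resolve_token(config):
  # Prefer an environment variable if it is set; otherwise fall back to config.json.
  # TESTPROJECT_TOKEN is an illustrative name, not an official SDK setting.
  return os.environ.get('TESTPROJECT_TOKEN', config['testproject_token'])

# Usage inside the browser fixture (sketch):
# kwargs = {'projectname': config['testproject_projectname'], 'token': resolve_token(config)}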

5. Create page objects

Automated tests could make direct calls to the WebDriver interface to interact with the browser, but WebDriver calls are typically low-level and wordy. The Page Object Model is a much better design pattern. Page object classes encapsulate WebDriver gorp so that tests can call simpler, more readable methods.

The DuckDuckGo search test interacts with two pages: the search page and the result page. The pages package contains a module for each page. Here’s pages/search.py:

from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


class DuckDuckGoSearchPage:

  URL = 'https://www.duckduckgo.com'

  SEARCH_INPUT = (By.ID, 'search_form_input_homepage')

  def __init__(self, browser):
    self.browser = browser

  def load(self):
    self.browser.get(self.URL)

  def search(self, phrase):
    search_input = self.browser.find_element(*self.SEARCH_INPUT)
    search_input.send_keys(phrase + Keys.RETURN)

And here’s pages/result.py:

from selenium.webdriver.common.by import By

class DuckDuckGoResultPage:
  
  RESULT_LINKS = (By.CSS_SELECTOR, 'a.result__a')
  SEARCH_INPUT = (By.ID, 'search_form_input')

  def __init__(self, browser):
    self.browser = browser

  def result_link_titles(self):
    links = self.browser.find_elements(*self.RESULT_LINKS)
    titles = [link.text for link in links]
    return titles
  
  def search_input_value(self):
    search_input = self.browser.find_element(*self.SEARCH_INPUT)
    value = search_input.get_attribute('value')
    return value

  def title(self):
    return self.browser.title

Notice that this code uses the “regular” WebDriver interface because TestProject’s WebDriver classes extend the Selenium WebDriver classes.

To make setup easier, I added fixtures to tests/conftest.py to construct each page object, too. They call the browser fixture and inject the WebDriver instance into each page object:

from pages.result import DuckDuckGoResultPage
from pages.search import DuckDuckGoSearchPage


@pytest.fixture
def search_page(browser):
  return DuckDuckGoSearchPage(browser)


@pytest.fixture
def result_page(browser):
  return DuckDuckGoResultPage(browser)

6. Automate the test case

All the automation plumbing is finally in place. Here’s the test case in tests/traditional/test_duckduckgo.py:

import pytest


@pytest.mark.parametrize('phrase', ['panda', 'python', 'polar bear'])
def test_basic_duckduckgo_search(search_page, result_page, phrase):
  
  # Given the DuckDuckGo home page is displayed
  search_page.load()

  # When the user searches for the phrase
  search_page.search(phrase)

  # Then the search result query is the phrase
  assert phrase == result_page.search_input_value()
  
  # And the search result links pertain to the phrase
  titles = result_page.result_link_titles()
  matches = [t for t in titles if phrase.lower() in t.lower()]
  assert len(matches) > 0

  # And the search result title contains the phrase
  assert phrase in result_page.title()

I parametrized the test to run it for three different phrases. The test function does not interact with the WebDriver instance directly. Instead, it interacts exclusively with the page objects.

7. Run the tests

The tests run like any other pytest tests: python -m pytest at the command line. If everything is set up correctly, then the tests will run successfully and upload results to the TestProject app.
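
If you’d rather launch the suite from Python code instead of the shell, pytest’s own entry point works as well. This optional sketch is equivalent to running python -m pytest from the project root:

import pytest

# Run the whole suite with verbose output; pytest.main returns the exit code.
exit_code = pytest.main(['-v'])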

In the TestProject dashboard, the Reports tab shows all the tests you have run. It also shows the different test projects you have.

Check out those results!

You can also drill into results for individual test case runs. TestProject automatically records the browser type, timestamps, pass-or-fail results, and every WebDriver call. You can also download PDF reports!

Results for an individual test

What if … BDD?

I was delighted to see how easily I could run a traditional pytest suite using TestProject. Then, I thought to myself, “What if I could use a BDD test framework?” I personally love Behavior-Driven Development, and Python has multiple BDD test frameworks. There is no reason why a BDD test framework wouldn’t work with TestProject!

So, I rewrote the DuckDuckGo search test as a feature file with step definitions using pytest-bdd. The BDD-style test uses the same fixtures and page objects as the traditional test.

Here’s the Gherkin scenario in tests/bdd/features/duckduckgo.feature:

Feature: DuckDuckGo
  As a Web surfer,
  I want to search for websites using plain-language phrases,
  So that I can learn more about the world around me.


  Scenario Outline: Basic DuckDuckGo Web Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "<phrase>"
    Then the search result query is "<phrase>"
    And the search result links pertain to "<phrase>"
    And the search result title contains "<phrase>"

    Examples:
      | phrase     |
      | panda      |
      | python     |
      | polar bear |

And here’s the step definition module in tests/bdd/step_defs/test_duckduckgo_bdd.py:

from pytest_bdd import scenarios, given, when, then, parsers
from selenium.webdriver.common.keys import Keys


scenarios('../features/duckduckgo.feature')


@given('the DuckDuckGo home page is displayed')
def load_duckduckgo(search_page):
  search_page.load()


@when(parsers.parse('the user searches for "{phrase}"'))
@when('the user searches for "<phrase>"')
def search_phrase(search_page, phrase):
  search_page.search(phrase)


@then(parsers.parse('the search result query is "{phrase}"'))
@then('the search result query is "<phrase>"')
def check_search_result_query(result_page, phrase):
  assert phrase == result_page.search_input_value()


@then(parsers.parse('the search result links pertain to "{phrase}"'))
@then('the search result links pertain to "<phrase>"')
def check_search_result_links(result_page, phrase):
  titles = result_page.result_link_titles()
  matches = [t for t in titles if phrase.lower() in t.lower()]
  assert len(matches) > 0


@then(parsers.parse('the search result title contains "{phrase}"'))
@then('the search result title contains "<phrase>"')
def check_search_result_title(result_page, phrase):
  assert phrase in result_page.title()

There’s one more nifty trick I added with pytest-bdd. I added a hook to report each Gherkin step to TestProject with a screenshot! That way, testers can trace each test case step more easily in the TestProject reports. Capturing screenshots also greatly assists test triage when failures arise. This hook is located in tests/conftest.py:

def pytest_bdd_after_step(request, feature, scenario, step, step_func):
  browser = request.getfixturevalue('browser')
  browser.report().step(description=str(step), message=str(step), passed=True, screenshot=True)

Since pytest-bdd is just a pytest plugin, its tests run using the same python -m pytest command. TestProject will group these test results into the same project as before, but it will separate the traditional tests from the BDD tests by name. Here’s what the Gherkin steps with screenshots look like:

Custom Gherkin step with screenshot reported in the TestProject app

This is Awesome!

As its name denotes, TestProject is a great platform for handling project-level concerns for testing work: reporting, integrations, and fast feedback. Adding TestProject to an existing automation solution feels seamless, and its sleek user experience gives me what I need as a tester without getting in my way. The one word that keeps coming to mind is “simple” – TestProject simplifies setup and sharing. Its design takes to heart the renowned Python adage, “Simple is better than complex.” As such, TestProject’s new Python SDK is a welcome addition to the Python testing ecosystem.

I look forward to exploring Python support for mobile testing with Appium soon. I also look forward to seeing all the new Python add-ons the community will develop.