Happy Global Testers Day! For 2021, QA Touch is celebrating with webinars, games, competitions, blogs, and videos. I participated by sharing an “upside-down” story from years ago when I accidentally wiped out all of NetApp’s continuous integration testing. Please watch my story below. I hope you find it both insightful and entertaining!
What happens when a test fails? If someone is manually running the test, then they will pause and poke around to learn more about the problem. However, when an automated test fails, the rest of the suite keeps running. Testers won’t get to view results until the suite is complete, and the automation won’t perform any extra exploration at the time of failure. Instead, testers must review logs and other artifacts gathered during testing, and they even might need to rerun the failed test to check if the failure is consistent.
Since testers typically rerun failed tests as part of their investigation, why not configure automated tests to automatically rerun failed tests? On the surface, this seems logical: automated retries can eliminate one more manual step. Unfortunately, automated retries can also enable poor practices, like ignoring legitimate issues.
So, are automated test retries good or bad? This is actually a rather controversial topic. I’ve heard many voices strongly condemn automated retries as an antipattern (see here, here, and here). While I agree that automated retries can be abused, I nevertheless believe they can add value to test automation. Reaching a deeper understanding requires a nuanced approach.
So, how do automated retries work?
To avoid any confusion, let’s carefully define what we mean by “automated test retries.”
Let’s say I have a suite of 100 automated tests. When I run these tests, the framework will execute each test individually and yield a pass or fail result for the test. At the end of the suite, the framework will aggregate all the results together into one report. In the best case, all tests pass: 100/100.
However, suppose that one of the tests fails. Upon failure, the test framework would capture any exceptions, perform any cleanup routines, log a failure, and safely move onto the next test case. At the end of the suite, the report would show 99/100 passing tests with one test failure.
By default, most test frameworks will run each test one time. However, some test frameworks have features for automatically rerunning test cases that fail. The framework may even enable testers to specify how many retries to attempt. So, let’s say that we configure 2 retries for our suite of 100 tests. When that one test fails, the framework would queue that failing test to run twice more before moving onto the next test. It would also add more information to the test report. For example, if one retry passed but another one failed, the report would show 99/100 passing tests with a 1/3 pass rate for the failing test.
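To make those mechanics concrete, here is a minimal sketch of test-level retry logic in Python. This is my own illustrative code, not any particular framework's implementation; real frameworks (like SpecFlow+ Runner or pytest-rerunfailures) handle this internally:

```python
def safe_run(test_fn):
    """Execute one test attempt, converting any exception into a failure."""
    try:
        test_fn()
        return True
    except Exception:
        return False

def run_with_retries(test_fn, retries=2):
    """Run a test once; if it fails, rerun it up to `retries` more times.
    Returns the list of pass/fail results so a report can show a
    success rate instead of a single binary outcome."""
    results = [safe_run(test_fn)]
    if not results[0]:
        for _ in range(retries):
            results.append(safe_run(test_fn))
    return results

# A hypothetical flaky test: fails, passes, then fails again.
attempts = iter([False, True, False])
def flaky_test():
    if not next(attempts):
        raise AssertionError("intermittent failure")

print(run_with_retries(flaky_test))  # [False, True, False] -> a 1/3 pass rate
```

Note how the failing test yields a series of results rather than a single verdict, which is exactly the extra information the report aggregates.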
In this article, we will focus on automated retries for test cases. Testers could also program other types of retries into automated tests, such as retrying browser page loads or REST requests. Interaction-level retries require sophisticated, context-specific logic, whereas test-level retry logic works the same for any kind of test case. (Interaction-level retries would also need their own article.)
Automated retries can be a terrible antipattern
Let’s see how automated test retries can be abused:
Jeremy is a member of a team that runs a suite of 300 automated tests for their web app every night. Unfortunately, the tests are notoriously flaky. About a dozen different tests fail every night, and Jeremy spends a lot of time each morning triaging the failures. Whenever he reruns failed tests individually on his laptop, they almost always pass.
To save himself time in the morning, Jeremy decides to add automatic retries to the test suite. Whenever a test fails, the framework will attempt one retry. Jeremy will only investigate tests whose retries failed. If a test had a passing retry, then he will presume that the original failure was just a flaky test.
Ouch! There are several problems here.
First, Jeremy is using retries to conceal information rather than reveal information. If a test fails but its retries pass, then the test still reveals a problem! In this case, the underlying problem is flaky behavior. Jeremy is using automated retries to overwrite intermittent failures with intermittent passes. Instead, he should investigate why the tests are flaky. Perhaps automated interactions have race conditions that need more careful waiting. Or, perhaps features in the web app itself are behaving unexpectedly. Test failures indicate a problem – in test code, product code, or infrastructure.
Second, Jeremy is using automated retries to perpetuate poor practices. Before adding automated retries to the test suite, Jeremy was already manually retrying tests and disregarding flaky failures. Adding retries to the test suite merely speeds up the process, making it easier to sidestep failures.
Third, the way Jeremy uses automated retries indicates that the team does not value their automated test suite very much. Good test automation requires effort and investment. Persistent flakiness is a sign of neglect, and it fosters low trust in testing. Using retries is merely a “band-aid” on both the test failures and the team’s attitude about test automation.
In this example, automated test retries are indeed a terrible antipattern. They enable Jeremy and his team to ignore legitimate issues. In fact, they incentivize the team to ignore failures because they institutionalize the practice of replacing red X’s with green checkmarks. This team should scrap automated test retries and address the root causes of flakiness.
Automated retries are not the main problem
Ignoring flaky failures is unfortunately all too common in the software industry. I must admit that in my days as a newbie engineer, I was guilty of rerunning tests to get them to pass. Why do people do this? The answer is simple: intermittent failures are difficult to resolve.
Testers love to find consistent, reproducible failures because those are easy to explain. Other developers can’t push back against hard evidence. However, intermittent failures take much more time to isolate. Root causes can become mind-bending puzzles. They might be triggered by environmental factors or awkward timings. Sometimes, teams never figure out what causes them. In my personal experience, bug tickets for intermittent failures get far less traction than bug tickets for consistent failures. All these factors incentivize folks to turn a blind eye to intermittent failures when convenient.
Automated retries are just a tool and a technique. They may enable bad practices, but they aren’t inherently bad. The main problem is willfully ignoring certain test results.
Automated retries can be incredibly helpful
So, what is the right way to use automated test retries? Use them to gather more information from the tests. Test results are simply artifacts of feedback. They reveal how a software product behaved under specific conditions and stimuli. The pass-or-fail nature of assertions simplifies test results at the top level of a report in order to draw attention to failures. However, reports can give more information than just binary pass-or-fail results. Automated test retries yield a series of results for a failing test that indicate a success rate.
For example, SpecFlow and the SpecFlow+ Runner make it easy to use automatic retries the right way. Testers simply need to add the retryFor setting to their SpecFlow+ Runner profile to set the number of retries to attempt. In the final report, SpecFlow records the success rate of each test with color-coded counts. Results are revealed, not concealed.
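For reference, the retry settings live in the SpecFlow+ Runner profile file. The element and attribute names below are from my recollection of the SpecFlow+ Runner documentation, so check your version's docs for the exact syntax:

```xml
<!-- Default.srprofile (SpecFlow+ Runner) -->
<TestProfile xmlns="http://www.specflow.org/schemas/plus/TestProfile/1.5">
  <!-- retryFor="Failing" reruns only failed tests; retryCount sets attempts -->
  <Execution retryFor="Failing" retryCount="2" />
</TestProfile>
```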
This information jumpstarts analysis. As a tester, one of the first questions I ask myself about a failing test is, “Is the failure reproducible?” Without automated retries, I need to manually rerun the test to find out – often at a much later time and potentially within a different context. With automated retries, that step happens automatically and in the same context. Analysis then takes two branches:
- If all retry attempts failed, then the failure is probably consistent and reproducible. I would expect it to be a clear functional failure that would be fast and easy to report. I jump on these first to get them out of the way.
- If some retry attempts passed, then the failure is intermittent, and it will probably take more time to investigate. I will look more closely at the logs and screenshots to determine what went wrong. I will try to exercise the product behavior manually to see if the product itself is inconsistent. I will also review the automation code to make sure there are no unhandled race conditions. I might even need to rerun the test multiple times to measure a more accurate failure rate.
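The two triage branches above can be expressed as a tiny classifier over a failing test's retry results. This is illustrative Python of my own, not production code:

```python
def triage(results):
    """Classify a failing test's attempts for triage order.
    `results` is the list of pass/fail attempts (the first entry failed)."""
    if not any(results):
        return "consistent"    # all attempts failed: reproduce and report first
    return "intermittent"      # mixed results: dig into logs, timing, app behavior

print(triage([False, False, False]))  # consistent
print(triage([False, True, False]))   # intermittent
```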
I do not ignore any failures. Instead, I use automated retries to gather more information about the nature of the failures. In the moment, this extra info helps me expedite triage. Over time, the trends it reveals help me identify weak spots in both the product under test and the test automation.
Automated retries are most helpful at high scale
When used appropriately, automated retries can help a test automation project of any size. However, they are arguably more valuable for large projects running tests at high scale than for small projects. Why? Two main reasons: complexity and priority.
Large-scale test projects have many moving parts. For example, at PrecisionLender, we presently run 4K-10K end-to-end tests against our web app every business day. (We also run ~100K unit tests every business day.) Our tests launch from TeamCity as part of our Continuous Integration system, and they use in-house Selenium Grid instances to run 50-100 tests in parallel. The PrecisionLender application itself is enormous, too.
Intermittent failures are inevitable in large-scale projects for many different reasons. There could be problems in the test code, but those aren’t the only possible problems. At PrecisionLender, Boa Constrictor already protects us from race conditions, so our intermittent test failures are rarely due to problems in automation code. Other causes for flakiness include:
- The app’s complexity makes certain features behave inconsistently or unexpectedly
- Extra load on the app slows down response times
- The cloud hosting platform has a service blip
- Selenium Grid arbitrarily chokes on a browser session
- The DevOps team recycles some resources
- An engineer makes a system change while tests are running
- The CI pipeline deploys a new change in the middle of testing
Many of these problems result from infrastructure and process. They can’t easily be fixed, especially when environments are shared. As one tester, I can’t rewrite my whole company’s CI pipeline to be “better.” I can’t rearchitect the app’s whole delivery model to avoid all collisions. I can’t perfectly guarantee 100% uptime for my cloud resources or my test tools like Selenium Grid. Some of these might be good initiatives to pursue, but one tester’s dictates do not immediately become reality. Many times, we need to work with what we have. Curt demands to “just fix the tests” come off as pedantic.
Automated test retries provide very useful information for discerning the nature of such intermittent failures. For example, at PrecisionLender, we hit Selenium Grid problems frequently. Roughly 1/10000 Selenium Grid browser sessions will inexplicably freeze during testing. We don’t know why this happens, and our investigations have been unfruitful. We chalk it up to minor instability at scale. Whenever the 1/10000 failure strikes, our suite’s automated retries kick in and pass. When we review the test report, we see the intermittent failure along with its exception message. Based on its signature, we immediately know that the test is fine. We don’t need to do extra investigation work or manual reruns. Automated retries gave us the info we needed.
Another type of common failure is intermittently slow performance in the PrecisionLender application. Occasionally, the app will freeze for a minute or two and then recover. When that happens, we see a “brick wall” of failures in our report: all tests during that time frame fail. Then, automated retries kick in, and the tests pass once the app recovers. Automatic retries prove in the moment that the app momentarily froze but that the individual behaviors covered by the tests are okay. This indicates functional correctness for the behaviors amidst a performance failure in the app. Our team has used these kinds of results on multiple occasions to identify performance bugs in the app by cross-checking system logs and database queries during the time intervals for those brick walls of intermittent failures. Again, automated retries gave us extra information that helped us find deep issues.
Automated retries delineate failure priorities
That answers complexity, but what about priority? Unfortunately, in large projects, there is more work to do than any team can handle. Teams need to make tough decisions about what to do now, what to do later, and what to skip. That’s just business. Testing decisions become part of that prioritization.
In almost all cases, consistent failures are inherently a higher priority than intermittent failures because they have a greater impact on the end users. If a feature fails every single time it is attempted, then the user is blocked from using the feature, and they cannot receive any value from it. However, if a feature works some of the time, then the user can still get some value out of it. Furthermore, the rarer the intermittency, the lower the impact, and consequently the lower the priority. Intermittent failures are still important to address, but they must be prioritized relative to other work at hand.
Automated test retries automate that initial prioritization. When I triage PrecisionLender tests, I look into consistent “red” failures first. Our SpecFlow reports make them very obvious. I know those failures will be straightforward to reproduce, explain, and hopefully resolve. Then, I look into intermittent “orange” failures second. Those take more time. I can quickly identify issues like Selenium Grid disconnections, but other issues may not be obvious (like system interruptions) or may need additional context (like the performance freezes). Sometimes, we may need to let tests run for a few days to get more data. If I get called away to another more urgent task while I’m triaging results, then at least I could finish the consistent failures. It’s a classic 80/20 rule: investigating consistent failures typically gives more return for less work, while investigating intermittent failures gives less return for more work. It is what it is.
The only time I would prioritize an intermittent failure over a consistent failure would be if the intermittent failure causes catastrophic or irreversible damage, like wiping out an entire system, corrupting data, or burning money. However, that type of disastrous failure is very rare. In my experience, almost all intermittent failures are due to poorly written test code, automation timeouts from poor app performance, or infrastructure blips.
Automated test retries can be a blessing or a curse. It all depends on how testers use them. If testers use retries to reveal more information about failures, then retries greatly assist triage. Otherwise, if testers use retries to conceal intermittent failures, then they aren’t doing their jobs as testers. Folks should not be quick to presume that automated retries are always an antipattern. We couldn’t achieve our scale of testing at PrecisionLender without them. Context matters.
On April 22, 2021, I delivered a talk entitled “Managing the Test Data Nightmare” at SauceCon 2021. SauceCon is Sauce Labs’ annual conference for the testing community. Due to the COVID-19 pandemic, the conference was virtual, but I still felt a bit of that exciting conference buzz.
My talk covers the topic of test data, which can be a nightmare to handle. Data must be prepped in advance, loaded before testing, and cleaned up afterwards. Sometimes, teams don’t have much control over the data in their systems under test—it’s just dropped in, and it can change arbitrarily. Hard-coding values that reference the system under test into tests can make the tests brittle, especially when running tests in different environments.
In this talk, I covered strategies for managing each type of test data: test case variations, test control inputs, config metadata, and product state. I also covered how to “discover” test data instead of hard-coding it, how to pass inputs into automation (including secrets like passwords), and how to manage data in the system. After watching this talk, you can wake up from the nightmare and handle test data cleanly and efficiently like a pro!
Here are some other articles I wrote about test data:
As usual, I hit up Twitter throughout the conference. Here are some action shots:
Many thanks to Sauce Labs and all the organizers who made SauceCon 2021 happen. If SauceCon was this awesome as a virtual event, then I can’t wait to attend in person (hopefully) in 2022!
Boa Constrictor is a C# implementation of the Screenplay Pattern. My team and I at PrecisionLender, a Q2 Company, developed Boa Constrictor as part of our test automation solution. Its primary use case is Web UI and REST API test automation. Boa Constrictor helps you make better interactions for better automation!
Our team released Boa Constrictor as an open source project on GitHub in October 2020. This week, we published a full documentation site for Boa Constrictor. It includes an introduction to the Screenplay Pattern, a quick-start guide, a full tutorial, and ways to contribute to the project. The doc site itself uses GitHub Pages, Jekyll, and Minimal Mistakes.
Our team hopes that the docs help you with testing and automation. Enjoy!
I love programming languages. They have fascinated me ever since I first learned to program my TI-83 Plus calculator in ninth grade, many years ago. When I studied computer science in college, I learned how parsers, interpreters, and compilers work. During my internships at IBM, I worked on a language named Enterprise Generation Language as both a tester and a developer. At NetApp, I even developed my own language named DS for test automation. Languages are so much fun to learn, build, and extend.
Today, even though I do not actively work on compilers, I still do some pretty interesting things with languages and testing. I strongly advocate for Behavior-Driven Development and its domain-specific language (DSL) Gherkin. In fact, as I wrote in my article Behavior-Driven Blasphemy, I support using Gherkin-based BDD test frameworks for test automation even if a team is not also doing BDD’s collaborative activities. Why? Gherkin is the world’s first major off-the-shelf DSL for test automation, and it doesn’t require the average tester to know the complexities of compiler theory. DSLs like Gherkin can make tests easier to read, faster to write, and more reliable to run. They provide a healthy separation of concerns between test cases and test code. After working on successful large-scale test automation projects with C# and SpecFlow, I don’t think I could go back to traditional test frameworks.
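For readers unfamiliar with Gherkin, here is a typical scenario of my own invention, showing how the DSL keeps test cases readable and separate from the automation code behind each step:

```gherkin
Feature: Web Search

  Scenario: Simple DuckDuckGo search
    Given the DuckDuckGo home page is displayed
    When the user searches for "panda"
    Then the results page shows links related to "panda"
```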
I’m not the only one who thinks this way. Here’s a tweet from Dinis Cruz, CTO and CISO at Glasswall, after he read one of my articles:
Dinis then tweeted at me to invite me to speak about using DSLs for testing at the Open Security Summit in 2021:
Now, I’m not a “security guy” at all, but I do know a thing or two about DSLs and testing. So, I gladly accepted the invitation to speak! I delivered my talk, “Using DSLs for Security Testing” virtually on Thursday, January 14, 2021 at 10am US Eastern. I also uploaded my slides to GitHub at AndyLPK247/using-dsls-for-security-testing. Check out the YouTube recording here:
This talk was not meant to be a technical demo or tutorial. Instead, it was meant to be a “think big” proposal. The main question I raised was, “How can we use DSLs for security testing?” I used my own story to illustrate the value languages deliver, particularly for testing. My call to action breaks that question down into three parts:
- Can DSLs make security testing easier to do and thereby more widely practiced?
- Is Gherkin good enough for security testing, or do we need to make a DSL specific to security?
- Would it be possible to write a set of “standard” or “universal” security tests using a DSL that anyone could either run directly or use as a template?
My goal for this talk was to spark a conversation about DSLs and security testing. Immediately after my talk, Luis Saiz shared two projects he’s working on regarding DSLs and security: SUSTO and Mist. Dinis also invited me back for a session at the Open Source Summit Mini Summit in February to have a follow-up roundtable discussion for my talk. I can’t wait to explore this idea further. It’s an exciting new space for me.
If this topic sparks your interest, be sure to watch my talk recording, and then join us live in February 2021 for the next Open Source Summit event. Virtual sessions are free to join. Many thanks again to Dinis and the whole team behind Open Source Summit for inviting me to speak and organizing the events.
That’s right! You read the title. I’m writing a book about software testing!
One of the most common questions people ask me is, “What books can you recommend on software testing and automation?” Unfortunately, I don’t have many that I can recommend. There are plenty of great books, but most of them focus on a particular tool, framework, or process. I haven’t found a modern book that covers software testing as a whole. Trust me, I looked – when I taught my college course on software testing at Wake Tech, the textbook’s copyright date was 2002. Its content felt just as antiquated.
I want to write a book worthy of answering that question. I want to write a treatise on software testing for our current generation of software professionals. My goal is ambitious, but I think I can do it. It will probably take a year to write. I hope to find deep joy in this endeavor.
Manning Publications will be the publisher. They accepted my proposal, and we signed a contract. The working title of the book is The Way to Test Software. The title pays homage to Julia Child’s classic, The Way to Cook. Like Julia Child, I want to teach “master recipes” that can be applied to any testing situations.
I don’t want to share too many details this early in the process, but the tentative table of contents has the following parts:
- Testing Code
- Testing Features
- Testing Performance
- Running Tests
- Development Practices
Python will be the language of demonstration. This should be no surprise to anyone. I chose Python because I love the language. I also think it’s a great language for test automation. Python will be easy for both beginners and experts to learn. Besides, the book is about testing, not programming – Python will be just the linguistic tool for automation.
If you’re as excited about this book as I am, please let me know! I need all the encouragement I can get. This book probably won’t enter print until 2022, given the breadth of its scope. I’ll work to get it done as soon as I can.
Do you want to learn how to automate tests in Python? Python is one of the best languages for test automation because it is easy to learn, concise to write, and powerful to scale. These days, there’s a wealth of great content on Python testing. Here’s a brief reference to help you get started.
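To show just how concise Python test automation can be, here is a complete pytest example. The file and function names are my own:

```python
# test_math.py -- run with `pytest test_math.py`

def multiply(a, b):
    """The 'product under test': a trivial function for demonstration."""
    return a * b

def test_multiply_positive():
    # pytest discovers test_* functions and uses plain assert statements.
    assert multiply(3, 4) == 12

def test_multiply_by_zero():
    assert multiply(3, 0) == 0
```

That's the whole test module: no boilerplate classes, no special assertion methods.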
If you are new to Python, read How Do I Start Learning Python? to find the best way to start learning the language.
If you want to roll up your sleeves, check out Test Automation University. I developed a “trifecta” of Python testing courses for TAU with videos, transcripts, quizzes, and example code. You can take them for FREE!
If you want some brief articles for reference, check out my Python Testing 101 blog series:
- Python Testing 101: Introduction
- Python Testing 101: unittest
- Python Testing 101: doctest
- Python Testing 101: pytest
- Python Testing 101: behave
- Python Testing 101: pytest-bdd
- Python BDD Framework Comparison
RealPython also has excellent guides:
- Getting Started with Testing in Python by Anthony Shaw
- Effective Python Testing with Pytest by Dane Hillard
I’ve given several talks about Python testing:
- How to Write a Test Case at PyOhio 2020
- Hands-On Web App Test Automation (Tutorial) at PyCon 2020
- How to Start Testing with Python at Automation Guild 2020
- Beyond Unit Tests: End-to-End Web UI Testing at PyGotham 2019
- Hands-On Web UI Testing (Tutorial) at DjangoCon 2019
- Hands-On Web UI Testing (Tutorial) at PyOhio 2019
- Egad! How Do We Start Writing (Better) Tests? at PyTexas 2019
- Egad! How Do We Start Writing (Better) Tests? at PyGotham 2018
- Egad! How Do We Start Writing (Better) Tests? at PyOhio 2018
- Behavior-Driven Python at PyCon 2018
- Testing is Fun in Python! at PyData Carolinas 2016
If you prefer to read books, here are some great titles:
- Test-Driven Development with Python by Harry J.W. Percival
- Python Testing with pytest by Brian Okken
- pytest Quick Start Guide by Bruno Oliveira
Here are links to popular Python test tools and frameworks:
Do you have any other great resources? Drop them in the comments below! Happy testing!
Boa Constrictor is the .NET Screenplay Pattern, and I’m its lead developer. Check out this intro video to learn why we need the Screenplay Pattern and how to use it with Boa Constrictor.
Hello, everyone! My name is Andrew Knight, or “Pandy” for short. I’m the Automation Panda – I build solutions to testing problems. Be sure to read my blog and follow me on Twitter at “AutomationPanda”.
Today, I’m going to introduce you to a new test automation library called Boa Constrictor, the .NET Screenplay Pattern. Boa Constrictor can help you make better interactions for better automation. Its primary use cases are Web UI and REST API interactions, but it can be extended to handle any type of interaction.
My team and I at PrecisionLender originally developed Boa Constrictor as the cornerstone of our .NET end-to-end test automation solution. We found the Screenplay Pattern to be a great way to scale our test development, avoid duplicate code, and stay focused on behaviors. In October 2020, together with help from our parent company Q2, we released Boa Constrictor as an open source project.
In this video, we will cover three things:
- First, problems with traditional ways of automating interactions.
- Second, why the Screenplay Pattern is a better way.
- Third, how to use the Screenplay Pattern with Boa Constrictor in C#.
Since Boa Constrictor is open source, you can check out its repository. I’ll paste the link below: https://github.com/q2ebanking/boa-constrictor. The repository also has a hands-on tutorial you can try. Make sure to have Visual Studio and some .NET skills because the code is written in C#.
My main goal with the Boa Constrictor project is to help improve test automation practices. For so long, our industry has relied on page objects, and I think it’s time we talk about a better way. Boa Constrictor strives to make that easy.
To start, let’s define that big “I” word I kept tossing around:
Simply put, interactions are how users operate software. For this video, I’ll focus on Web UI interactions, like clicking buttons and scraping text.
Interactions are indispensable to testing. The simplest way to define “testing” is interaction plus verification. That’s it! You do something, and you make sure it works.
Think about any functional test case that you have ever written or executed. The test case was a step-by-step procedure, in which each step had interactions and verifications.
Opening the search engine requires navigation.
Searching for a phrase requires entering keystrokes and clicking the search button.
Verifying results requires scraping the page title and result links from the new page.
Interactions are everywhere!
Unfortunately, our industry struggles to handle automated Web UI interactions well. Even though most teams use Selenium WebDriver in their test automation code, every team seems to use it differently. There’s lots of duplicate code and flakiness, too. Let’s take a look at the way many teams evolve their WebDriver-based interactions. I will use C# for code examples, and I will continue to use DuckDuckGo for testing.
When teams first start writing test automation code using Selenium WebDriver, they frequently write raw calls. Anyone familiar with the WebDriver API should recognize these calls.
The WebDriver object is initialized using, say, ChromeDriver for the Chrome browser.
The first step to open the search engine calls “driver dot navigate dot go to URL” with the DuckDuckGo website address.
The second step performs the search by fetching Web elements using “driver dot find element” with locators and then calling methods like “send keys” and “click”.
The third step uses assertions to verify the contents of the page title and the existence of result links.
Finally, at the end of the test, the WebDriver quits the browser for cleanup.
Like I said, these are all common WebDriver calls. Unfortunately, there’s a big problem in this code.
Race conditions. There are three race conditions in this code in which the automation does NOT wait for the page to be ready before making interactions! WebDriver does not automatically wait for elements to load or titles to appear. Waiting is a huge challenge for Web UI automation, and it is one of the main reasons for “flaky” tests.
You could set an implicit wait that makes calls pause until target elements appear, but implicit waits don’t work for all cases, such as the title check in race condition #2.
Explicit waits provide much more control over waiting timeout and conditions. They use a “WebDriverWait” object with a pre-set timeout value, and they must be placed explicitly throughout the code. Here, they are placed in the three spots where race conditions could happen. Each “wait dot until” call takes in a function that returns true when the condition is satisfied.
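The examples in this video are C#, but the idea behind WebDriverWait is language-agnostic: poll a condition function until it returns a truthy value or a timeout expires. Here is a rough Python sketch of that core loop; it is a simplification of my own, not Selenium's actual implementation:

```python
import time

def wait_until(condition, timeout=30.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or the timeout
    expires. This mirrors the shape of WebDriverWait's until() loop."""
    end = time.monotonic() + timeout
    while True:
        try:
            value = condition()
            if value:
                return value
        except Exception:
            pass  # condition not ready yet; keep polling
        if time.monotonic() >= end:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)

# Usage sketch: wait_until(lambda: driver.title == "DuckDuckGo", timeout=10)
```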
These waits are necessary, but they cause new problems. First, they create duplicate code because Web element locators are used multiple times. Notice how “search form input homepage” appears twice.
Second, raw calls with explicit waits make the code less intuitive. If I remove the comments from each paragraph of code, what’s left is a wall of text. It is difficult to understand what this code does at a glance.
To remedy these problems, most teams use the Page Object Pattern. In the Page Object Pattern, each page is modeled as a class with locator variables and interaction methods. So, a “search page” class could look like this.
At the top, there could be a constant for the page URL and variables for the search input and search button locators. Notice how each has an intuitive name.
Next, there could be a variable to hold the WebDriver reference. This reference would come via dependency injection through the constructor.
The first method would be a “load” method that navigates the browser to the page’s URL.
And, the second method would be a “search” method that waits for the elements to appear, enters the phrase into the input field, and clicks the search button.
This page object class has a decent structure and a mild separation of concerns. Locators and interactions have meaningful names. Page objects require a few more lines of code than raw calls at first, but their parts can easily be reused.
The original test steps can be rewritten using this new SearchPage class. Notice how much cleaner this new code looks.
The other steps can be rewritten using page objects, too.
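For readers who prefer code on the page, here is a Python rendition of the SearchPage class described above. The driver is duck-typed here so the sketch stays self-contained; real code would use Selenium's WebDriver with By locators and WebDriverWait for the waits:

```python
class SearchPage:
    """Page object for the DuckDuckGo search page (Python rendition of
    the C# example described in this video)."""

    URL = "https://duckduckgo.com/"
    SEARCH_INPUT = "search_form_input_homepage"
    SEARCH_BUTTON = "search_button_homepage"

    def __init__(self, driver):
        self.driver = driver  # dependency injection via the constructor

    def load(self):
        # Navigate the browser to the page's URL.
        self.driver.get(self.URL)

    def search(self, phrase):
        # Real code would first wait for both elements to appear.
        self.driver.find_element(self.SEARCH_INPUT).send_keys(phrase)
        self.driver.find_element(self.SEARCH_BUTTON).click()
```

A test would construct it as `SearchPage(driver)` and then call `load()` followed by `search("panda")`.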
Unfortunately, page objects themselves suffer problems with duplication in their interaction methods.
Suppose a page object needs a method to click an element. We already know the logic: wait for the element to exist, and then click it.
But what about clicking another element? This method is essentially hard coded for one button.
A second “click” method is needed to click the other button.
Unfortunately, the code for both methods is the same. The code will be the same for any other click method, too. This is copy pasta, and it happens all the time in page objects. I’ve seen page objects grow to be thousands of lines long due to duplicative methods like this.
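For example, two click methods inside a page object might look like this (the element names and locators are illustrative):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public class HomePage
{
    public static readonly By SearchButton = By.Id("search_button_homepage");
    public static readonly By Logo = By.Id("logo_homepage_link");

    private readonly IWebDriver _driver;
    private readonly WebDriverWait _wait;

    public HomePage(IWebDriver driver)
    {
        _driver = driver;
        _wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
    }

    // Hard-coded click for the search button
    public void ClickSearchButton()
    {
        _wait.Until(d => d.FindElements(SearchButton).Count > 0);
        _driver.FindElement(SearchButton).Click();
    }

    // Identical logic duplicated for a different element
    public void ClickLogo()
    {
        _wait.Until(d => d.FindElements(Logo).Count > 0);
        _driver.FindElement(Logo).Click();
    }
}
```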
At this point, some teams will say, “Aha! More duplicate code? We can solve this problem with more Object-Oriented Programming!”
And they’ll create the infamous “base page”, a parent class for all other page object classes.
The base page will have variables for the WebDriver and the wait object.
It will also provide common interaction methods, such as this click method that can click on any element. Abstraction for the win!
Child pages will inherit everything from the base page. Child page interaction methods frequently just call base page methods.
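A base page hierarchy typically looks something like this (a sketch; the locators are assumptions):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public abstract class BasePage
{
    // Shared WebDriver and wait objects
    protected readonly IWebDriver Driver;
    protected readonly WebDriverWait Wait;

    protected BasePage(IWebDriver driver)
    {
        Driver = driver;
        Wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
    }

    // One generic click method that works on any element
    protected void Click(By locator)
    {
        Wait.Until(d => d.FindElements(locator).Count > 0);
        Driver.FindElement(locator).Click();
    }
}

public class SearchPage : BasePage
{
    public static readonly By SearchButton = By.Id("search_button_homepage");

    public SearchPage(IWebDriver driver) : base(driver) { }

    // Child page methods frequently just delegate to base page methods
    public void ClickSearchButton() => Click(SearchButton);
}
```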
I’ve seen many teams stop here and say, “This is good enough.” Unfortunately, this really isn’t very good at all!
The base page helps mitigate code duplication, but it doesn’t solve its root cause. Page objects inherently combine two separate concerns: page structure and interactions. Interactions are often generic enough to be used on any Web element. Coupling interaction code with specific locators or pages forces testers to add new page object methods for every type of interaction needed for an element. Every element could potentially need a click, a text, a “displayed”, or any other type of WebDriver interaction. That’s a lot of extra code that shouldn’t be necessary. The base page also becomes very top-heavy as testers add more and more code to share.
Most frustratingly, the page object code I showed here is merely one type of implementation. What do your page objects look like? I’d bet dollars to doughnuts that they look different than mine. Page objects are completely free form. Every team implements them differently. There’s no official version of the Page Object Pattern. There’s no conformity in its design. Even worse, within its design, there is almost no way for the pattern to enforce good practices. That’s why people argue whether page object locators should be public or private. Page objects would be better described as a “convention” than as a true design pattern.
There must be a better way to handle interactions. Thankfully, there is.
Let’s take a closer look at how interactions happen.
First, there is someone who initiates the interactions. Usually, this is a user. They are the ones making the clicks and taking the scrapes. Let’s call them the “Actor”.
Second, there is the thing under test. For our examples in this video, that’s a Web app. It has pages with elements. Web page structure is modeled using locators to access page elements from the DOM. Keep in mind, the thing under test could also be anything else, like a mobile app, a microservice, or even a command line.
Third, there are the interactions themselves. For Web apps, they could be simple clicks and keystrokes, or they could be more complex interactions like logging into the app or searching for a phrase. Each interaction will do the same type of operation on whatever target page or element it is given.
Finally, there are objects that enable Actors to perform certain types of Interactions. For example, browser interactions need a tool like Selenium WebDriver to make clicks and scrapes. Let’s call these things “Abilities”.
Actors, Abilities, and Interactions are each different types of concerns. We could summarize their relationship in one line.
Actors use Abilities to perform Interactions.
This is the heart of the Screenplay Pattern. In the Page Object Convention, page objects become messy because concerns are all combined. The Screenplay Pattern separates concerns for maximal reusability and scalability.
So, let’s learn how to Screenplay, using Boa Constrictor.
“Boa Constrictor” is an open source C# implementation of the Screenplay Pattern that my team and I developed at PrecisionLender. Like I said before, it is the cornerstone of PrecisionLender’s end-to-end test automation solution. It can be used with any .NET test framework, like SpecFlow or NUnit. The GitHub repository name is q2ebanking/boa-constrictor, and the NuGet package name is Boa.Constrictor.
Let’s rewrite that DuckDuckGo search test from before using Boa Constrictor. As you watch this video, I recommend just reading along with the code as it appears on screen to get the concepts. Trying to code along in real time might be challenging. After this video, you can take the official Boa Constrictor tutorial to get hands-on with the code.
The Actor is the entity that initiates Interactions. All Screenplay calls start with an Actor. Most test cases need only one Actor.
The Actor class takes two optional arguments. The first argument is a name, which can help describe who the actor is. The name will appear in logged messages. The second argument is a logger, which will send log messages from Screenplay calls to a target destination. Loggers must implement Boa Constrictor’s ILogger interface. ConsoleLogger is a class that will log messages to the system console. You can define your own custom loggers by implementing ILogger.
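For example, an Actor with both arguments could be constructed like this (the name “Andy” is illustrative, and the named-argument form for `name` is my assumption; the `logger:` argument matches Boa Constrictor’s ConsoleLogger):

```csharp
// The name appears in logged messages; ConsoleLogger writes them to the console
IActor actor = new Actor(name: "Andy", logger: new ConsoleLogger());
```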
Abilities enable Actors to initiate Interactions. For example, an Actor needs a Selenium WebDriver instance to click elements on a Web page.
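In code, granting that Ability looks like this:

```csharp
// Give the Actor the Ability to drive a Chrome browser via Selenium WebDriver
actor.Can(BrowseTheWeb.With(new ChromeDriver()));
```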
Read this new line in plain English: “The actor can browse the Web with a new ChromeDriver.” Boa Constrictor’s fluent-like syntax makes its call chains very readable. “actor dot Can” adds an Ability to an Actor.
“BrowseTheWeb” is the Ability that enables Actors to perform Web UI Interactions. “BrowseTheWeb dot With” provides the WebDriver object that the Actor will use, which, in this case, is a new ChromeDriver object. Boa Constrictor supports all browser types.
All Abilities must implement the IAbility interface. Actors can be given any number of Abilities. “BrowseTheWeb” simply holds a reference to the WebDriver object. Web UI Interactions will retrieve this WebDriver object from the Actor.
Before the Actor can call any WebDriver-based Interactions, the Web pages under test need models. These models should be static classes that include locators for elements on the page and possibly page URLs. Page classes should only model structure – they should not include any interaction logic.
The Screenplay Pattern separates the concerns of page structure from interactions. That way, interactions can target any element, maximizing code reusability. Interactions like clicks and scrapes work the same regardless of the target elements.
The SearchPage class has two members. The first member is a URL string named Url. The second member is a locator for the search input element named SearchInput.
A locator has two parts. First, it has a plain-language Description that will be used for logging. Second, it has a Query that is used to find the element on the page. Boa Constrictor uses Selenium WebDriver’s By queries. For convenience, locators can be constructed using the statically imported L method.
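A page model following that guidance might look like this (the element ID is an assumption, and `L` is Boa Constrictor’s statically imported locator builder):

```csharp
public static class SearchPage
{
    // Page URL
    public const string Url = "https://www.duckduckgo.com/";

    // Locator: a plain-language Description for logging plus a By Query
    public static IWebLocator SearchInput => L(
        "DuckDuckGo Search Input",
        By.Id("search_form_input_homepage"));
}
```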
The Screenplay Pattern has two types of Interactions. The first type of Interaction is called a Task. A Task performs actions without returning a value. Examples of Tasks include clicking an element, refreshing the browser, and loading a page. These interactions all “do” something rather than “get” something.
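A Task step looks like this in code:

```csharp
// Load the search engine's page
actor.AttemptsTo(Navigate.ToUrl(SearchPage.Url));
```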
Boa Constrictor provides a Task named Navigate for loading a Web page using a target URL. Read this line in plain English: “The actor attempts to navigate to the URL for the search page.” Again, Boa Constrictor’s fluent-like syntax is very readable. Clearly, this line will load the DuckDuckGo search page.
“Actor dot attempts to” calls a Task. All Tasks must implement the ITask interface. When the Actor calls “AttemptsTo” on a task, it calls the task’s “PerformAs” method.
“Navigate” is the name of the task, and “dot to URL” provides the target URL.
The Navigate Task’s “PerformAs” method fetches the WebDriver object from the Actor’s Ability and uses it to load the given URL.
“Search page dot URL” comes from the SearchPage class we previously wrote. Putting the URL in the page class makes it universally available.
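Put together, a simplified reimplementation of such a Task might look like this sketch (the `Using<BrowseTheWeb>()` helper for fetching the Ability is my assumption about the API):

```csharp
public class Navigate : ITask
{
    public string Url { get; }

    private Navigate(string url) => Url = url;

    // Builder method for a readable call chain
    public static Navigate ToUrl(string url) => new Navigate(url);

    // Called when the Actor "AttemptsTo" this Task
    public void PerformAs(IActor actor)
    {
        // Fetch the WebDriver from the Actor's BrowseTheWeb Ability
        IWebDriver driver = actor.Using<BrowseTheWeb>().WebDriver;
        driver.Navigate().GoToUrl(Url);
    }
}
```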
The second type of Interaction is called a Question. A Question returns an answer after performing actions. Examples of Questions include getting an element’s text, location, and appearance. Each of these interactions return some sort of value.
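In code, a Question step with its assertion fits on one line (a sketch; `Should().BeEmpty()` comes from the Fluent Assertions library):

```csharp
// Ask the Question, then assert on the answer with Fluent Assertions
actor.AskingFor(ValueAttribute.Of(SearchPage.SearchInput)).Should().BeEmpty();
```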
Boa Constrictor provides a Question named ValueAttribute that gets the “value” of the text currently inside an input field. Read this line in plain English: “The actor asking for the value attribute of the search page’s search input element should be empty.”
“Actor dot asking for” calls a Question. All Questions must implement the IQuestion interface. When the Actor calls “AskingFor” or the equivalent “AsksFor” method, it calls the question’s “RequestAs” method.
“ValueAttribute” is the name of the Question, and “dot Of” provides the target Web element’s locator.
The ValueAttribute’s “RequestAs” method fetches the WebDriver object, waits for the target element to exist on the page, and scrapes and returns its value attribute.
“Search page dot search input” is the locator for the search input field. It comes from the SearchPage class.
Finally, once the value is obtained, the test must make an assertion on it. “Should be empty” is a Fluent Assertion that verifies that the search input field is empty when the page is first loaded.
The test case’s next step is to enter a search phrase. Doing this requires two interactions: typing the phrase into the search input and clicking the search button. However, since searching is such a common operation, we can create a custom interaction for search by composing the lower-level interactions together.
The “SearchDuckDuckGo” task takes in a search phrase.
In its “PerformAs” method, it calls two other interactions: “SendKeys” and “Click”.
Using one task to combine these lower-level interactions makes the test code more readable and understandable. It also improves automation reusability. Read this line in plain English now: “The actor attempts to search DuckDuckGo for ‘panda’.” That’s concise and intuitive!
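A sketch of that composite Task (the `SendKeys.To` and `Click.On` builders match Boa Constrictor’s built-in interactions as I recall them, and `SearchPage.SearchButton` is an assumed additional locator on the page model):

```csharp
public class SearchDuckDuckGo : ITask
{
    public string Phrase { get; }

    private SearchDuckDuckGo(string phrase) => Phrase = phrase;

    // Builder method for a readable call chain
    public static SearchDuckDuckGo For(string phrase) => new SearchDuckDuckGo(phrase);

    public void PerformAs(IActor actor)
    {
        // Compose lower-level interactions into one higher-level Task
        actor.AttemptsTo(SendKeys.To(SearchPage.SearchInput, Phrase));
        actor.AttemptsTo(Click.On(SearchPage.SearchButton));
    }
}

// Usage: actor.AttemptsTo(SearchDuckDuckGo.For("panda"));
```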
The last test case step should verify that result links appear after entering a search phrase. Unfortunately, this step has a race condition: the result page takes a few seconds to display result links. Automation must wait for those links to appear. Checking too early will make the test case fail.
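In code, the waiting step looks like this (`ResultPage.ResultLinks` would be a locator on a result-page model class analogous to SearchPage):

```csharp
// Repeatedly ask the Appearance Question until the answer becomes true
actor.AttemptsTo(Wait.Until(
    Appearance.Of(ResultPage.ResultLinks),
    IsEqualTo.True()));
```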
Boa Constrictor makes waiting easy. Read this line in plain English: “The actor attempts to wait until the appearance of result page result links is equal to true.” In simpler terms, “Wait until the result links appear.”
“Wait” is a special Task. It will repeatedly call a Question until the answer meets a given condition.
For this step, the Question is the appearance of result links on the result page. Before links are loaded, this Question will return “false”. Once links appear, it will return “true”.
The Condition for waiting is for the answer value to become “true”. Boa Constrictor provides several conditions out of the box, such as equality, mathematical comparisons, and string matching. You can also implement custom conditions by implementing the “ICondition” interface.
Waiting is smart – it will repeatedly ask the Question until the answer meets the condition, and then it will move on. This makes waiting much more efficient than hard sleeps. If the answer does not meet the condition within the timeout, then the wait will raise an exception. The timeout defaults to 30 seconds, but it can be overridden.
Many of Boa Constrictor’s WebDriver-based interactions already handle waiting. Anything that uses a target element, such as “Click”, “SendKeys”, or “Text” will wait for the element to exist before attempting the operation. We saw this in some of the previous example code. However, there are times where explicit waits are needed. Interactions that query appearance or existence do not automatically wait.
The final step is to quit the browser. Boa Constrictor’s “QuitWebDriver” task does this. If you don’t quit the browser, then it will remain open and turn into a zombie. Always quit the browser. Furthermore, in whatever test framework you use, put the step to quit the browser in a cleanup or teardown routine so that it is called even when the test fails.
And there we have our completed test using Boa Constrictor’s Screenplay Pattern. All the separated concerns come together beautifully to handle interactions in a much better way.
As we said before, the Screenplay Pattern can be summed up in one line:
Actors use Abilities to perform Interactions.
It’s that simple. Actors use Abilities to perform Interactions.
For those who like Object-Oriented Programming, the Screenplay Pattern is, in a sense, a SOLID refactoring of the Page Object Convention. SOLID refers to five design principles for maintainability and extensibility. I won’t go into detail about each principle here because the information is a bit dense, but if you’re interested, then pause the video, snap a quick screenshot, and check out each of these principles later. Wikipedia is a good source. You’ll find that the Screenplay Pattern follows each one nicely.
- Single-responsibility principle
- Open–closed principle
- Liskov substitution principle
- Interface segregation principle
- Dependency inversion principle
So, why should you use the Screenplay Pattern over the Page Object Convention or raw WebDriver calls? There are a few key reasons.
First, the Screenplay Pattern, and specifically the Boa Constrictor project, provide rich, reusable, reliable interactions out of the box. Boa Constrictor already has Tasks and Questions for every type of WebDriver-based interaction. Each one is battle-hardened and safe.
Second, Screenplay interactions are composable. Like we saw with searching for a phrase, you can easily combine interactions. This makes code easier to use and reuse, and it avoids lots of duplication.
Third, the Screenplay Pattern makes waiting easy using existing questions and conditions. Waiting is one of the toughest parts of black box automation.
Fourth, Screenplay calls are readable and understandable. They use a fluent-like syntax that reads more like prose than code.
Finally, the Screenplay Pattern, at its core, is a design pattern for any type of interaction. In this video, I showed how to use it for Web UI interactions, but the Screenplay Pattern could also be used for mobile, REST API, and other platforms. You can make your own interactions, too!
Overall, the Screenplay Pattern provides better interactions for better automation.
That’s the point. It’s not just another Selenium WebDriver wrapper. It’s not just a new spin on page objects. Screenplay is a great way to exercise any feature behaviors under test.
And, as we saw before…
The Screenplay Pattern isn’t that complicated. Actors use Abilities to perform Interactions. That’s it. The programming behind it just has some nifty dependency injection.
If you’d like to start using the Screenplay Pattern for your test automation, there are a few ways to get started.
If you are programming in C#, you can use Boa Constrictor, the library I showed in the examples. You can download Boa Constrictor as a NuGet package. It works with any .NET test framework, like SpecFlow and NUnit. I recommend taking the hands-on tutorial so you can develop a test automation project yourself with Boa Constrictor. Also, since Boa Constrictor is an open source project, I’d love for you to contribute!
If none of those options suit you, then you could create your own. The Screenplay Pattern does require a bit of boilerplate code, but it’s worthwhile in the end. You can always reference code from Boa Constrictor and Serenity BDD.
Thank you so much for taking the time to learn more about the Screenplay Pattern and Boa Constrictor. I’d like to give special thanks to everyone at PrecisionLender and Q2 who helped make Boa Constrictor’s open source release happen.
Today, I’m excited to announce the release of a new open source project for test automation: Boa Constrictor, the .NET Screenplay Pattern!
The Screenplay Pattern helps you make better interactions for better automation. The pattern can be summarized in one line: Actors use Abilities to perform Interactions.
- Actors initiate Interactions. Every test has an Actor.
- Abilities enable Actors to perform Interactions. They hold objects that Interactions need, like WebDrivers or REST API clients.
- Interactions exercise behaviors under test. They could be clicks, requests, commands, and anything else.
This separation of concerns makes Screenplay code very reusable and scalable, much more so than traditional page objects. Check it out, here’s a C# script to test a search engine:
```csharp
// Create the Actor
IActor actor = new Actor(logger: new ConsoleLogger());

// Add an Ability to use a WebDriver
actor.Can(BrowseTheWeb.With(new ChromeDriver()));

// Load the search engine
actor.AttemptsTo(Navigate.ToUrl(SearchPage.Url));

// Get the page's title
string title = actor.AsksFor(Title.OfPage());

// Search for something
actor.AttemptsTo(Search.For("panda"));

// Wait for results
actor.AttemptsTo(Wait.Until(
    Appearance.Of(ResultPage.ResultLinks),
    IsEqualTo.True()));
```
Boa Constrictor provides many interactions for Selenium WebDriver and RestSharp out of the box, like Appearance shown above. It also lets you compose interactions together, like how Search is a composition of typing and clicking.
Over the past two years, my team and I at PrecisionLender, a Q2 Company, developed Boa Constrictor internally as the cornerstone of Boa, our comprehensive end-to-end test automation solution. We were inspired by Serenity BDD’s Screenplay implementation. After battle-hardening Boa Constrictor with thousands of automated tests, we are releasing it publicly as an open source project. Our goal is to help everyone make better interactions for better test automation.
If you’d like to give Boa Constrictor a try, then start with the tutorial. You’ll implement that search engine test from above in full. Then, once you’re ready to use it for some serious test automation, add the Boa.Constrictor NuGet package to your .NET project and go!
You can view the full source code on GitHub at q2ebanking/boa-constrictor. Check out the repository for full information. In the coming weeks, we’ll be developing more content and code. Since Boa Constrictor is open source, we’d love for you to contribute to the project, too!
Mentoring is important in any field, but it’s especially critical for software testing. I’ve been blessed with good mentors throughout my life, and I’ve also been honored to serve as a mentor for other software testers. In this article, I’ll explain what mentoring is and how to practice it within the context of software testing.
What is Mentoring?
Mentoring is a one-on-one relationship in which the experienced guides the inexperienced.
- It is explicit in that the two individuals formally agree to the relationship.
- It is intentional in that both individuals want to participate.
- It is long-term in that the relationship is ongoing.
- It is purposeful in that the relationship has a clear goal or development objective.
- It is meaningful in that growth happens for both individuals.
Mentoring is more than merely answering questions or doing code reviews. It is an intentional relationship for learning and growth.
Why Software Testing Mentoring Matters
Software testing is a specialty within software engineering. People enter software testing roles in various ways, like these:
- A new college graduate lands their first job as a QA engineer.
- A 20-year manual tester transitions into an automation role.
- A developer assumes more testing responsibilities for their team.
- A coding bootcamp graduate decides to change careers.
There’s no single path to entering software testing. Personally, I graduated college with a Computer Science degree and found my way into testing through internships.
Unfortunately, there’s also no “universal” training program for software testing. Universities don’t offer programs in software testing – at best, they introduce unit test frameworks and bug report formats. Most coding bootcamps focus on Web development or data science. Certifications like ISTQB can feel arcane and antiquated (and, to be honest, I don’t hold any myself).
The best ways we have for self-training are communities, conferences, and online resources. For example, I’m a member of my local testing community, the Triangle Software Quality Association (TSQA). TSQA hosts meetups monthly and a conference every other year in North Carolina. Through these events, TSQA welcomes everyone interested in software testing to learn, share, and network. I also recommend free courses from Test Automation University, and I frequently share blogs and articles from other software testing leaders.
Nevertheless, while these resources are deeply valuable, they can be overwhelming for a newbie who literally does not know where to begin. An experienced mentor provides guidance. They can introduce a newcomer to communities and events. They can recommend specific resources and answer questions in a safe space. Mentors can also provide encouragement, motivation, and accountability – things that online resources cannot provide.
How to Do Mentoring
I’ve had the pleasure of mentoring multiple individuals throughout my career. I mentor no more than a few individuals at a time so that I can give each mentee the attention they deserve. Mentoring relationships typically start in one of these ways:
- Someone asks me to be their mentor.
- A manager or team leader arranges a mentoring relationship.
- As a team leader, I initiate a mentoring relationship with a new team member (because that’s a team leader’s job).
Almost all of my mentoring relationships have existed within company walls. Mentoring becomes one of my job responsibilities. Personally, I would recommend forming mentoring relationships within company walls so that both individuals can dedicate time, availability, and shared knowledge to each other. However, that may not always be possible (if management does not prioritize professional development) or beneficial (if the work environment is toxic).
From the start, I like to be explicit about the mentoring relationship with the mentee. I learn what they want to get out of mentoring. I give them the top priority of my time and attention, but I also expect them to reciprocate. I don’t enter mentoring relationships if the other person doesn’t want one.
Then, I create what I call a “growth plan” for the mentee. The growth plan is a tailored training program for the mentee to accomplish their learning objectives. I set a schedule with the following types of activities:
- One-on-one or small-group teaching sessions
  - The best format for making big points that stick
  - Gives individual care and attention to the mentee
  - Provides a safe space for questions
- Reading assignments
  - Helpful for independently learning specific topics
  - May be blogs, articles, or documentation
  - Allows the mentee to complete it at their own pace
- Online training courses
  - Example: Test Automation University
  - Provides comprehensive, self-paced instruction
  - However, may not be 100% pertinent to the learning objectives
- Pair programming or code review sessions
  - Hands-on time for mentor and mentee to work together
  - Allows learning by example and by osmosis
  - However, can be mentally draining, so use sparingly
- Independent work items
  - Real, actual work items that the team must complete
  - Makes the mentee feel like they are making valuable contributions even while still learning
  - “Practice makes perfect”
These activities should be structured to fit all learning styles and build upon each other. For example, if I am mentoring someone about how to do Behavior-Driven Development, I would probably schedule the following activities:
- A “Welcome to BDD” whiteboard session with me
- Reading my BDD 101 series
- Watching a video about Example Mapping
- A small group activity for doing Example Mapping on a new story
- A work item to write Gherkin scenarios for that mapped story
- A review session for those scenarios
- A “BDD test frameworks” deep dive session with me
- A work item to automate the Gherkin scenarios they wrote
- A review session for those automated tests
- Another work item for writing and automating a new round of tests
Any learning objective can be mapped to a growth plan like this. Make sure each step is reasonable, well-defined, and builds upon the previous steps.
I would like to give one warning about mentoring for test automation. If a mentee wants to learn test automation, then they must first learn programming. Historically, software testing was a manual process, and most testers didn’t need to know how to code. Now, automation is indispensable for organizations that want to move fast without breaking things. Most software testing jobs require some level of automation skills. Many employers are even forcing their manual testers to pick up automation. However, good automation skills are rooted in good programming skills. Someone can’t learn “just enough Java” and then expect to be a successful automation engineer – they must effectively first become a developer.
Characteristics of Good Mentoring
Mentoring requires both individuals to commit time and effort to the relationship.
To be a good mentor:
- Be helpful – provide valuable guidance that the mentee needs
- Be prepared – know what you should know, and be ready to share it
- Be approachable – never be “too busy” to talk
- Be humble – reveal your limits, and admit what you don’t know
- Be patient – newbies can be slow
To be a good mentee:
- Seek long-term growth, not just answers to today’s questions
- Come prepared in mind and materials
- Ask thoughtful questions and record the answers
- Practice what you learn
- Express appreciation for your mentor – it’s often a thankless job
Take Your Time
Mentoring may take a lot of time, but good mentoring bears good fruit. Mentees will produce higher-quality work. They’ll get things done faster. They’ll also have more confidence in themselves. Mentors themselves will feel a higher satisfaction in their own work, too. The whole team wins.
The alternative would waste a lot more time. Without good mentoring, newcomers will be forced to sink or swim. They won’t be able to finish tasks as quickly, and their work will be more likely to have problems. They will feel stressed and doubtful, too. Forcing people to tough things out is a very inefficient learning process, and it can also devolve into forms of hazing in unhealthy work environments. Anytime someone says there isn’t enough time for mentoring, I reply that there’s even less time for fixing poor-quality work. In the case of software testing, the lack of mentoring could cause bugs to escape to production!
I encourage leaders, managers, and senior engineers to make mentoring happen as part of their job responsibilities. Dedicate time for it. Facilitate it. Normalize it. Be intentional. Furthermore, I encourage leaders to be force multipliers: mentor others to mentor others. Time is always tight, so make the most of it.
I hope this article is helpful! Do you have any thoughts, advice, or questions about mentoring, specifically in the field of software testing? I’d love to hear them, so drop a comment below.
I wrote this article as a follow-up to an “Ask Me Anything” session on July 15, 2020 with Tristan Lombard and the Testim Community. Tristan published the full AMA transcript in a Medium article. Many thanks to them for the opportunity!