Learning Python Test Automation

Do you want to learn how to automate tests in Python? Python is one of the best languages for test automation because it is easy to learn, concise to write, and powerful to scale. These days, there’s a wealth of great content on Python testing. Here’s a brief reference to help you get started.

If you are new to Python, read How Do I Start Learning Python? to find the best way to start learning the language.

If you want to roll up your sleeves, check out Test Automation University. I developed a “trifecta” of Python testing courses for TAU with videos, transcripts, quizzes, and example code. You can take them for FREE!

  1. Introduction to pytest
  2. Selenium WebDriver with Python
  3. Behavior-Driven Python with pytest-bdd

If you want some brief articles for reference, check out my Python Testing 101 blog series:

  1. Python Testing 101: Introduction
  2. Python Testing 101: unittest
  3. Python Testing 101: doctest
  4. Python Testing 101: pytest
  5. Python Testing 101: behave
  6. Python Testing 101: pytest-bdd
  7. Python BDD Framework Comparison

RealPython also has excellent guides:

I’ve given several talks about Python testing:

If you prefer to read books, here are some great titles:

Here are links to popular Python test tools and frameworks:

Do you have any other great resources? Drop them in the comments below! Happy testing!

Boa Constrictor Intro Video with Transcript

The Video

Boa Constrictor is the .NET Screenplay Pattern, and I’m its lead developer. Check out this intro video to learn why we need the Screenplay Pattern and how to use it with Boa Constrictor.

The Transcript

[Camera]

Hello, everyone! My name is Andrew Knight, or “Pandy” for short. I’m the Automation Panda – I build solutions to testing problems. Be sure to read my blog and follow me on Twitter at “AutomationPanda”.

Today, I’m going to introduce you to a new test automation library called Boa Constrictor, the .NET Screenplay Pattern. Boa Constrictor can help you make better interactions for better automation. Its primary use cases are Web UI and REST API interactions, but it can be extended to handle any type of interaction.

My team and I at PrecisionLender originally developed Boa Constrictor as the cornerstone of our .NET end-to-end test automation solution. We found the Screenplay Pattern to be a great way to scale our test development, avoid duplicate code, and stay focused on behaviors. In October 2020, together with help from our parent company Q2, we released Boa Constrictor as an open source project.

In this video, we will cover three things:

  1. First, problems with traditional ways of automating interactions.
  2. Second, why the Screenplay Pattern is a better way.
  3. Third, how to use the Screenplay Pattern with Boa Constrictor in C#.

Since Boa Constrictor is open source, you can check out its repository. I’ll paste the link below: https://github.com/q2ebanking/boa-constrictor. The repository also has a hands-on tutorial you can try. Make sure to have Visual Studio and some .NET skills because the code is written in C#.

My main goal with the Boa Constrictor project is to help improve test automation practices. For so long, our industry has relied on page objects, and I think it’s time we talk about a better way. Boa Constrictor strives to make that easy.

[Slide]

To start, let’s define that big “I” word I kept tossing around:

[Slide]

Interactions.

[Slide]

Simply put, interactions are how users operate software. For this video, I’ll focus on Web UI interactions, like clicking buttons and scraping text.

[Slide]

Interactions are indispensable to testing. The simplest way to define “testing” is interaction plus verification. That’s it! You do something, and you make sure it works.

Think about any functional test case that you have ever written or executed. The test case was a step-by-step procedure, in which each step had interactions and verifications.

[Slide]

Here’s an example of a simple DuckDuckGo search test. DuckDuckGo is a search engine like Google or Yahoo. The steps here are very straightforward.

[Slide]

Opening the search engine requires navigation.

[Slide]

Searching for a phrase requires entering keystrokes and clicking the search button.

[Slide]

Verifying results requires scraping the page title and result links from the new page. 

Interactions are everywhere!

[Slide]

Unfortunately, our industry struggles to handle automated Web UI interactions well. Even though most teams use Selenium WebDriver in their test automation code, every team seems to use it differently. There’s lots of duplicate code and flakiness, too. Let’s take a look at the way many teams evolve their WebDriver-based interactions. I will use C# for code examples, and I will continue to use DuckDuckGo for testing.

[Slide]

When teams first start writing test automation code using Selenium WebDriver, they frequently write raw calls. Anyone familiar with the WebDriver API should recognize these calls.

[Slide]

The WebDriver object is initialized using, say, ChromeDriver for the Chrome browser.

[Slide]

The first step to open the search engine calls “driver dot navigate dot go to URL” with the DuckDuckGo website address.

[Slide]

The second step performs the search by fetching Web elements using “driver dot find element” with locators and then calling methods like “send keys” and “click”.

[Slide]

The third step uses assertions to verify the contents of the page title and the existence of result links.

[Slide]

Finally, at the end of the test, the WebDriver quits the browser for cleanup.
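To make this concrete, here's a rough C# sketch of those raw WebDriver calls. The DuckDuckGo locators and the FluentAssertions checks are illustrative, not copied from the slides:

using FluentAssertions;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Initialize the WebDriver for Chrome.
IWebDriver driver = new ChromeDriver();

// Step 1: Open the search engine.
driver.Navigate().GoToUrl("https://www.duckduckgo.com/");

// Step 2: Enter keystrokes and click the search button.
driver.FindElement(By.Id("search_form_input_homepage")).SendKeys("panda");
driver.FindElement(By.Id("search_button_homepage")).Click();

// Step 3: Verify the page title and the result links.
driver.Title.ToLower().Should().Contain("panda");
driver.FindElements(By.CssSelector("a.result__a")).Should().NotBeEmpty();

// Cleanup: Quit the browser.
driver.Quit();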

Like I said, these are all common WebDriver calls. Unfortunately, there’s a big problem in this code.

[Slide]

Race conditions. There are three race conditions in this code in which the automation does NOT wait for the page to be ready before making interactions! WebDriver does not automatically wait for elements to load or titles to appear. Waiting is a huge challenge for Web UI automation, and it is one of the main reasons for “flaky” tests.

[Slide]

You could set an implicit wait that makes calls wait until target elements appear, but implicit waits don’t work for all cases, such as the title in race condition #2.
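An implicit wait is configured once on the WebDriver object. It would look something like this:

// Make FindElement calls wait up to 30 seconds for target elements to appear.
driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(30);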

[Slide]

Explicit waits provide much more control over waiting timeout and conditions. They use a “WebDriverWait” object with a pre-set timeout value, and they must be placed explicitly throughout the code. Here, they are placed in the three spots where race conditions could happen. Each “wait dot until” call takes in a function that returns true when the condition is satisfied.
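Here's a sketch of the first of those spots, using the same illustrative locator as before:

// Create a wait object with a pre-set timeout.
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));

// Race condition #1: wait for the search input to exist before typing.
wait.Until(d => d.FindElements(By.Id("search_form_input_homepage")).Count > 0);
driver.FindElement(By.Id("search_form_input_homepage")).SendKeys("panda");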

[Slide]

These waits are necessary, but they cause new problems. First, they cause duplicate code because Web element locators are used multiple times. Notice how “search form input homepage” is called twice.

[Slide]

Second, raw calls with explicit waits make the code less intuitive. If I remove the comments from each paragraph of code, what’s left is a wall of text. It is difficult to understand what this code does at a glance.

[Slide]

To remedy these problems, most teams use the Page Object Pattern. In the Page Object Pattern, each page is modeled as a class with locator variables and interaction methods. So, a “search page” class could look like this.

[Slide]

At the top, there could be a constant for the page URL and variables for the search input and search button locators. Notice how each has an intuitive name.

[Slide]

Next, there could be a variable to hold the WebDriver reference. This reference would come via dependency injection through the constructor.

[Slide]

The first method would be a “load” method that navigates the browser to the page’s URL.

[Slide]

And, the second method would be a “search” method that waits for the elements to appear, enters the phrase into the input field, and clicks the search button.

This page object class has a decent structure and a mild separation of concerns. Locators and interactions have meaningful names. Page objects require a few more lines of code than raw calls at first, but their parts can easily be reused.
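Assembled from those pieces, the page object might look like this sketch (the locators are illustrative):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public class SearchPage
{
    public const string Url = "https://www.duckduckgo.com/";
    public static By SearchInput => By.Id("search_form_input_homepage");
    public static By SearchButton => By.Id("search_button_homepage");

    private readonly IWebDriver _driver;
    private readonly WebDriverWait _wait;

    // The WebDriver reference comes via dependency injection.
    public SearchPage(IWebDriver driver)
    {
        _driver = driver;
        _wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
    }

    public void Load() => _driver.Navigate().GoToUrl(Url);

    public void Search(string phrase)
    {
        _wait.Until(d => d.FindElements(SearchInput).Count > 0);
        _driver.FindElement(SearchInput).SendKeys(phrase);
        _driver.FindElement(SearchButton).Click();
    }
}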

[Slide]

The original test steps can be rewritten using this new SearchPage class. Notice how much cleaner this new code looks.

[Slide]

The other steps can be rewritten using page objects, too.
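Put together, the rewritten test steps might read like this (a ResultPage class analogous to SearchPage is assumed here):

SearchPage searchPage = new SearchPage(driver);
searchPage.Load();
searchPage.Search("panda");

// ResultPage and its methods are assumed for illustration.
ResultPage resultPage = new ResultPage(driver);
resultPage.Title().ToLower().Should().Contain("panda");
resultPage.ResultLinkTitles().Should().NotBeEmpty();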

[Slide]

Unfortunately, page objects themselves suffer problems with duplication in their interaction methods.

[Slide]

Suppose a page object needs a method to click an element. We already know the logic: wait for the element to exist, and then click it.

But what about clicking another element? This method is essentially hard coded for one button.

[Slide]

A second “click” method is needed to click the other button.
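Side by side inside the page object, the two methods might look like this (ImagesLink is a hypothetical second locator):

public void ClickSearchButton()
{
    _wait.Until(d => d.FindElements(SearchButton).Count > 0);
    _driver.FindElement(SearchButton).Click();
}

// ImagesLink is a hypothetical locator for another element on the page.
public void ClickImagesLink()
{
    _wait.Until(d => d.FindElements(ImagesLink).Count > 0);
    _driver.FindElement(ImagesLink).Click();
}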

[Slide]

Unfortunately, the code for both methods is the same. The code will be the same for any other click method, too. This is copy pasta, and it happens all the time in page objects. I’ve seen page objects grow to be thousands of lines long due to duplicative methods like this.

At this point, some teams will say, “Aha! More duplicate code? We can solve this problem with more Object-Oriented Programming!”

[Slide]

And they’ll create the infamous “base page”, a parent class for all other page object classes.

[Slide]

The base page will have variables for the WebDriver and the wait object.

[Slide]

It will also provide common interaction methods, such as this click method that can click on any element. Abstraction for the win!

[Slide]

Child pages will inherit everything from the base page. Child page interaction methods frequently just call base page methods.
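Here's a sketch of that inheritance structure:

public abstract class BasePage
{
    protected readonly IWebDriver Driver;
    protected readonly WebDriverWait Wait;

    protected BasePage(IWebDriver driver)
    {
        Driver = driver;
        Wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
    }

    // A generic click that works for any element.
    protected void Click(By locator)
    {
        Wait.Until(d => d.FindElements(locator).Count > 0);
        Driver.FindElement(locator).Click();
    }
}

public class SearchPage : BasePage
{
    public static By SearchButton => By.Id("search_button_homepage");

    public SearchPage(IWebDriver driver) : base(driver) { }

    // Child page methods frequently just call base page methods.
    public void ClickSearchButton() => Click(SearchButton);
}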

I’ve seen many teams stop here and say, “This is good enough.” Unfortunately, this really isn’t very good at all!

[Slide]

The base page helps mitigate code duplication, but it doesn’t solve its root cause. Page objects inherently combine two separate concerns: page structure and interactions. Interactions are often generic enough to be used on any Web element. Coupling interaction code with specific locators or pages forces testers to add new page object methods for every type of interaction needed for an element. Every element could potentially need a click, a text, a “displayed”, or any other type of WebDriver interaction. That’s a lot of extra code that shouldn’t be necessary. The base page also becomes very top-heavy as testers add more and more code to share.

[Slide]

Most frustratingly, the page object code I showed here is merely one type of implementation. What do your page objects look like? I’d bet dollars to doughnuts that they look different than mine. Page objects are completely free form. Every team implements them differently. There’s no official version of the Page Object Pattern. There’s no conformity in its design. Even worse, within its design, there is almost no way for the pattern to enforce good practices. That’s why people argue whether page object locators should be public or private. Page objects would be better described as a “convention” than as a true design pattern.

[Slide]

There must be a better way to handle interactions. Thankfully, there is.

[Slide]

Let’s take a closer look at how interactions happen.

[Slide]

First, there is someone who initiates the interactions. Usually, this is a user. They are the ones making the clicks and taking the scrapes. Let’s call them the “Actor”.

[Slide]

Second, there is the thing under test. For our examples in this video, that’s a Web app. It has pages with elements. Web page structure is modeled using locators to access page elements from the DOM. Keep in mind, the thing under test could also be anything else, like a mobile app, a microservice, or even a command line.

[Slide]

Third, there are the interactions themselves. For Web apps, they could be simple clicks and keystrokes, or they could be more complex interactions like logging into the app or searching for a phrase. Each interaction will do the same type of operation on whatever target page or element it is given.

[Slide]

Finally, there are objects that enable Actors to perform certain types of Interactions. For example, browser interactions need a tool like Selenium WebDriver to make clicks and scrapes. Let’s call these things “Abilities”.

Actors, Abilities, and Interactions are each different types of concerns. We could summarize their relationship in one line.

[Slide]

Actors use Abilities to perform Interactions.

Actors use Abilities to perform Interactions.

[Slide]

This is the heart of the Screenplay Pattern. In the Page Object Convention, page objects become messy because concerns are all combined. The Screenplay Pattern separates concerns for maximal reusability and scalability.

[Slide]

So, let’s learn how to Screenplay, using Boa Constrictor.

[Slide]

“Boa Constrictor” is an open source C# implementation of the Screenplay Pattern my team and I developed at PrecisionLender. Like I said before, it is the cornerstone of PrecisionLender’s end-to-end test automation solution. It can be used with any .NET test framework, like SpecFlow or NUnit. The GitHub repository name is q2ebanking/boa-constrictor, and the NuGet package name is Boa.Constrictor.

[Slide]

Let’s rewrite that DuckDuckGo search test from before using Boa Constrictor. As you watch this video, I recommend just reading along with the code as it appears on screen to get the concepts. Trying to code along in real time might be challenging. After this video, you can take the official Boa Constrictor tutorial to get hands-on with the code.

[Slide]

To use Boa Constrictor, you will need to install the Boa Constrictor and Selenium WebDriver NuGet packages. My example code will also use Fluent Assertions and ChromeDriver.

[Slide]

The Actor is the entity that initiates Interactions. All Screenplay calls start with an Actor. Most test cases need only one Actor.

The Actor class optionally takes two arguments. The first argument is a name, which can help describe who the actor is. The name will appear in logged messages. The second argument is a logger, which will send log messages from Screenplay calls to a target destination. Loggers must implement Boa Constrictor’s ILogger interface. ConsoleLogger is a class that will log messages to the system console. You can define your own custom loggers by implementing ILogger.
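In code, creating an Actor looks like this (the name here is arbitrary):

IActor actor = new Actor(name: "Andy", logger: new ConsoleLogger());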

[Slide]

Abilities enable Actors to initiate Interactions. For example, an Actor needs a Selenium WebDriver instance to click elements on a Web page.

Read this new line in plain English: “The actor can browse the Web with a new ChromeDriver.” Boa Constrictor’s fluent-like syntax makes its call chains very readable. “actor dot Can” adds an Ability to an Actor.
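Here is that line in code:

actor.Can(BrowseTheWeb.With(new ChromeDriver()));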

[Slide]

“BrowseTheWeb” is the Ability that enables Actors to perform Web UI Interactions. “BrowseTheWeb dot With” provides the WebDriver object that the Actor will use, which, in this case, is a new ChromeDriver object. Boa Constrictor supports all browser types.

All Abilities must implement the IAbility interface. Actors can be given any number of Abilities. “BrowseTheWeb” simply holds a reference to the WebDriver object. Web UI Interactions will retrieve this WebDriver object from the Actor.

[Slide]

Before the Actor can call any WebDriver-based Interactions, the Web pages under test need models. These models should be static classes that include locators for elements on the page and possibly page URLs. Page classes should only model structure – they should not include any interaction logic.

The Screenplay Pattern separates the concerns of page structure from interactions. That way, interactions can target any element, maximizing code reusability. Interactions like clicks and scrapes work the same regardless of the target elements.

The SearchPage class has two members. The first member is a URL string named Url. The second member is a locator for the search input element named SearchInput.

A locator has two parts. First, it has a plain-language Description that will be used for logging. Second, it has a Query that is used to find the element on the page. Boa Constrictor uses Selenium WebDriver’s By queries. For convenience, locators can be constructed using the statically imported L method.
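Here's a sketch of the SearchPage model, assuming the IWebLocator type and the statically imported L method from the Boa Constrictor tutorial (the locator query is illustrative):

using OpenQA.Selenium;
using static Boa.Constrictor.WebDriver.WebLocator;

public static class SearchPage
{
    public const string Url = "https://www.duckduckgo.com/";

    // A locator has a plain-language Description and a By Query.
    public static IWebLocator SearchInput => L(
        "search input",
        By.Id("search_form_input_homepage"));
}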

[Slide]

The Screenplay Pattern has two types of Interactions. The first type of Interaction is called a Task. A Task performs actions without returning a value. Examples of Tasks include clicking an element, refreshing the browser, and loading a page. These interactions all “do” something rather than “get” something.

Boa Constrictor provides a Task named Navigate for loading a Web page using a target URL. Read this line in plain English: “The actor attempts to navigate to the URL for the search page.” Again, Boa Constrictor’s fluent-like syntax is very readable. Clearly, this line will load the DuckDuckGo search page. 
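Here is that line in code:

actor.AttemptsTo(Navigate.ToUrl(SearchPage.Url));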

[Slide]

“Actor dot attempts to” calls a Task. All Tasks must implement the ITask interface. When the Actor calls “AttemptsTo” on a task, it calls the task’s “PerformAs” method.

[Slide]

“Navigate” is the name of the task, and “dot to URL” provides the target URL.

[Slide]

The Navigate Task’s “PerformAs” method fetches the WebDriver object from the Actor’s Ability and uses it to load the given URL.

[Slide]

“Search page dot URL” comes from the SearchPage class we previously wrote. Putting the URL in the page class makes it universally available.

[Slide]

The second type of Interaction is called a Question. A Question returns an answer after performing actions. Examples of Questions include getting an element’s text, location, and appearance. Each of these interactions returns some sort of value.

Boa Constrictor provides a Question named ValueAttribute that gets the “value” of the text currently inside an input field. Read this line in plain English: “The actor asking for the value attribute of the search page’s search input element should be empty.”
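Here is that line in code, with a Fluent Assertion on the answer:

actor.AskingFor(ValueAttribute.Of(SearchPage.SearchInput)).Should().BeEmpty();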

[Slide]

“Actor dot asking for” calls a Question. All Questions must implement the IQuestion interface. When the Actor calls “AskingFor” or the equivalent “AsksFor” method, it calls the question’s “RequestAs” method.

[Slide]

“ValueAttribute” is the name of the Question, and “dot Of” provides the target Web element’s locator. 

[Slide]

The ValueAttribute’s “RequestAs” method fetches the WebDriver object, waits for the target element to exist on the page, and scrapes and returns its value attribute.

[Slide]

“Search page dot search input” is the locator for the search input field. It comes from the SearchPage class.

[Slide]

Finally, once the value is obtained, the test must make an assertion on it. “Should be empty” is a Fluent Assertion that verifies that the search input field is empty when the page is first loaded.

[Slide]

The test case’s next step is to enter a search phrase. Doing this requires two interactions: typing the phrase into the search input and clicking the search button. However, since searching is such a common operation, we can create a custom interaction for search by composing the lower-level interactions together.

[Slide]

The “SearchDuckDuckGo” task takes in a search phrase.

[Slide]

In its “PerformAs” method, it calls two other interactions: “SendKeys” and “Click”.
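Here's a sketch of the custom Task (it assumes a SearchButton locator has been added to the SearchPage model):

public class SearchDuckDuckGo : ITask
{
    public string Phrase { get; }

    private SearchDuckDuckGo(string phrase) => Phrase = phrase;

    // Builder method for a readable call chain.
    public static SearchDuckDuckGo For(string phrase) => new SearchDuckDuckGo(phrase);

    // Compose the lower-level interactions into one search interaction.
    public void PerformAs(IActor actor)
    {
        actor.AttemptsTo(SendKeys.To(SearchPage.SearchInput, Phrase));
        actor.AttemptsTo(Click.On(SearchPage.SearchButton));
    }
}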

[Slide]

Using one task to combine these lower-level interactions makes the test code more readable and understandable. It also improves automation reusability. Read this line in plain English now: “The actor attempts to search DuckDuckGo for ‘panda’.” That’s concise and intuitive!
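That line, in code:

actor.AttemptsTo(SearchDuckDuckGo.For("panda"));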

[Slide]

The last test case step should verify that result links appear after entering a search phrase. Unfortunately, this step has a race condition: the result page takes a few seconds to display result links. Automation must wait for those links to appear. Checking too early will make the test case fail.

Boa Constrictor makes waiting easy. Read this line in plain English: “The actor attempts to wait until the appearance of result page result links is equal to true.” In simpler terms, “Wait until the result links appear.”
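Here is that wait in code (the same call appears in the repository's examples):

actor.AttemptsTo(Wait.Until(
    Appearance.Of(ResultPage.ResultLinks),
    IsEqualTo.True()));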

[Slide]

“Wait” is a special Task. It will repeatedly call a Question until the answer meets a given condition.

[Slide]

For this step, the Question is the appearance of result links on the result page. Before links are loaded, this Question will return “false”. Once links appear, it will return “true”.

[Slide]

The Condition for waiting is for the answer value to become “true”. Boa Constrictor provides several conditions out of the box, such as equality, mathematical comparisons, and string matching. You can also implement custom conditions by implementing the “ICondition” interface.

[Slide]

Waiting is smart – it will repeatedly ask the Question until the answer meets the condition, and then it will move on. This makes waiting much more efficient than hard sleeps. If the answer does not meet the condition within the timeout, then the wait will raise an exception. The timeout defaults to 30 seconds, but it can be overridden.

Many of Boa Constrictor’s WebDriver-based interactions already handle waiting. Anything that uses a target element, such as “Click”, “SendKeys”, or “Text”, will wait for the element to exist before attempting the operation. We saw this in some of the previous example code. However, there are times when explicit waits are needed. Interactions that query appearance or existence do not automatically wait.

[Slide]

The final step is to quit the browser. Boa Constrictor’s “QuitWebDriver” task does this. If you don’t quit the browser, then it will remain open and turn into a zombie. Always quit the browser. Furthermore, in whatever test framework you use, put the step to quit the browser in a cleanup or teardown routine so that it is called even when the test fails.
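In code, assuming the builder method name from the Boa Constrictor tutorial, the call looks like this:

actor.AttemptsTo(QuitWebDriver.ForBrowser());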

[Slide]

And there we have our completed test using Boa Constrictor’s Screenplay Pattern. All the separated concerns come together beautifully to handle interactions in a much better way.

[Slide]

As we said before, the Screenplay Pattern can be summed up in one line:

[Slide]

Actors [Slide] use Abilities [Slide] to perform Interactions.

It’s that simple. Actors use Abilities to perform Interactions.

[Slide]

For those who like Object-Oriented Programming, the Screenplay Pattern is, in a sense, a SOLID refactoring of the Page Object Convention. SOLID refers to five design principles for maintainability and extensibility. I won’t go into detail about each principle here because the information is a bit dense, but if you’re interested, then pause the video, snap a quick screenshot, and check out each of these principles later. Wikipedia is a good source. You’ll find that the Screenplay Pattern follows each one nicely.

[Slide]

So, why should you use the Screenplay Pattern over the Page Object Convention or raw WebDriver calls? There are a few key reasons.

[Slide]

First, the Screenplay Pattern, and specifically the Boa Constrictor project, provide rich, reusable, reliable interactions out of the box. Boa Constrictor already has Tasks and Questions for every type of WebDriver-based interaction. Each one is battle-hardened and safe.

[Slide]

Second, Screenplay interactions are composable. Like we saw with searching for a phrase, you can easily combine interactions. This makes code easier to use and reuse, and it avoids lots of duplication.

[Slide]

Third, the Screenplay Pattern makes waiting easy using existing questions and conditions. Waiting is one of the toughest parts of black box automation.

[Slide]

Fourth, Screenplay calls are readable and understandable. They use a fluent-like syntax that reads more like prose than code.

[Slide]

Finally, the Screenplay Pattern, at its core, is a design pattern for any type of interaction. In this video, I showed how to use it for Web UI interactions, but the Screenplay Pattern could also be used for mobile, REST API, and other platforms. You can make your own interactions, too!

[Slide]

Overall, the Screenplay Pattern [Slide] provides better interactions [Slide] for better automation.

That’s the point. It’s not just another Selenium WebDriver wrapper. It’s not just a new spin on page objects. Screenplay is a great way to exercise any feature behaviors under test.

And, as we saw before…

[Slide]

The Screenplay Pattern isn’t that complicated. Actors use Abilities to perform Interactions. That’s it. The programming behind it just has some nifty dependency injection.

[Slide]

If you’d like to start using the Screenplay Pattern for your test automation, there are a few ways to get started.

[Slide]

If you are programming in C#, you can use Boa Constrictor, the library I showed in the examples. You can download Boa Constrictor as a NuGet package. It works with any .NET test framework, like SpecFlow and NUnit. I recommend taking the hands-on tutorial so you can develop a test automation project yourself with Boa Constrictor. Also, since Boa Constrictor is an open source project, I’d love for you to contribute!

[Slide]

If you are programming in Java or JavaScript, you can use Serenity BDD – a mature, complete test automation framework that includes the Screenplay Pattern. Serenity BDD greatly influenced Boa Constrictor, but the two are entirely separate projects. Boa Constrictor is NOT Serenity BDD for .NET. Instead, Boa Constrictor aims to be a simpler, standalone implementation of the Screenplay Pattern.

[Slide]

If none of those options suit you, then you could create your own. The Screenplay Pattern does require a bit of boilerplate code, but it’s worthwhile in the end. You can always reference code from Boa Constrictor and Serenity BDD.

[Slide]

Thank you so much for taking the time to learn more about the Screenplay Pattern and Boa Constrictor. I’d like to give special thanks to everyone at PrecisionLender and Q2 who helped make Boa Constrictor’s open source release happen.

Again, my name is Andrew Knight. I’m the Automation Panda. Be sure to read my blog, follow me on Twitter, and reach out to me if you’d like to join the Boa Constrictor project! Thank you.

Introducing Boa Constrictor: The .NET Screenplay Pattern

Today, I’m excited to announce the release of a new open source project for test automation: Boa Constrictor, the .NET Screenplay Pattern!

The Screenplay Pattern helps you make better interactions for better automation. The pattern can be summarized in one line: Actors use Abilities to perform Interactions.

  • Actors initiate Interactions. Every test has an Actor.
  • Abilities enable Actors to perform Interactions. They hold objects that Interactions need, like WebDrivers or REST API clients.
  • Interactions exercise behaviors under test. They could be clicks, requests, commands, and anything else.

This separation of concerns makes Screenplay code very reusable and scalable, much more so than traditional page objects. Check it out, here’s a C# script to test a search engine:

// Create the Actor
IActor actor = new Actor(logger: new ConsoleLogger());

// Add an Ability to use a WebDriver
actor.Can(BrowseTheWeb.With(new ChromeDriver()));

// Load the search engine
actor.AttemptsTo(Navigate.ToUrl(SearchPage.Url));

// Get the page's title
string title = actor.AsksFor(Title.OfPage());

// Search for something
actor.AttemptsTo(Search.For("panda"));

// Wait for results
actor.AttemptsTo(Wait.Until(
    Appearance.Of(ResultPage.ResultLinks),
    IsEqualTo.True()));

Boa Constrictor provides many interactions for Selenium WebDriver and RestSharp out of the box, like Navigate, Title, and Appearance shown above. It also lets you compose interactions together, like how Search is a composition of typing and clicking.

Over the past two years, my team and I at PrecisionLender, a Q2 Company, developed Boa Constrictor internally as the cornerstone of Boa, our comprehensive end-to-end test automation solution. We were inspired by Serenity BDD’s Screenplay implementation. After battle-hardening Boa Constrictor with thousands of automated tests, we are releasing it publicly as an open source project. Our goal is to help everyone make better interactions for better test automation.

If you’d like to give Boa Constrictor a try, then start with the tutorial. You’ll implement that search engine test from above in full. Then, once you’re ready to use it for some serious test automation, add the Boa.Constrictor NuGet package to your .NET project and go!

You can view the full source code on GitHub at q2ebanking/boa-constrictor. Check out the repository for full information. In the coming weeks, we’ll be developing more content and code. Since Boa Constrictor is open source, we’d love for you to contribute to the project, too!

Mentoring Software Testers

Mentoring is important in any field, but it’s especially critical for software testing. I’ve been blessed with good mentors throughout my life, and I’ve also been honored to serve as a mentor for other software testers. In this article, I’ll explain what mentoring is and how to practice it within the context of software testing.

What is Mentoring?

Mentoring is a one-on-one relationship in which the experienced guides the inexperienced.

  • It is explicit in that the two individuals formally agree to the relationship.
  • It is intentional in that both individuals want to participate.
  • It is long-term in that the relationship is ongoing.
  • It is purposeful in that the relationship has a clear goal or development objective.
  • It is meaningful in that growth happens for both individuals.

Mentoring is more than merely answering questions or doing code reviews. It is an intentional relationship for learning and growth.

Why Software Testing Mentoring Matters

Software testing is a specialty within software engineering. People enter software testing roles in various ways, like these:

  • A new college graduate lands their first job as a QA engineer.
  • A 20-year manual tester transitions into an automation role.
  • A developer assumes more testing responsibilities for their team.
  • A coding bootcamp graduate decides to change careers.

There’s no single path to entering software testing. Personally, I graduated college with a Computer Science degree and found my way into testing through internships.

Unfortunately, there’s also no “universal” training program for software testing. Universities don’t offer programs in software testing – at best, they introduce unit test frameworks and bug report formats. Most coding bootcamps focus on Web development or data science. Certifications like ISTQB can feel arcane and antiquated (and, to be honest, I don’t hold any myself).

The best ways we have for self-training are communities, conferences, and online resources. For example, I’m a member of my local testing community, the Triangle Software Quality Association (TSQA). TSQA hosts meetups monthly and a conference every other year in North Carolina. Through these events, TSQA welcomes everyone interested in software testing to learn, share, and network. I also recommend free courses from Test Automation University, and I frequently share blogs and articles from other software testing leaders.

Nevertheless, while these resources are deeply valuable, they can be overwhelming for a newbie who literally does not know where to begin. An experienced mentor provides guidance. They can introduce a newcomer to communities and events. They can recommend specific resources and answer questions in a safe space. Mentors can also provide encouragement, motivation, and accountability – things that online resources cannot provide.

How to Do Mentoring

I’ve had the pleasure of mentoring multiple individuals throughout my career. I mentor no more than a few individuals at a time so that I can give each mentee the attention they deserve. Mentoring relationships typically start in one of these ways:

  1. Someone asks me to be their mentor.
  2. A manager or team leader arranges a mentoring relationship.
  3. As a team leader, I initiate a mentoring relationship with a new team member (because that’s a team leader’s job).

Almost all of my mentoring relationships have existed within company walls. Mentoring becomes one of my job responsibilities. Personally, I would recommend forming mentoring relationships within company walls so that both individuals can dedicate time, availability, and shared knowledge to each other. However, that may not always be possible (if management does not prioritize professional development) or beneficial (if the work environment is toxic).

My teammates and me at TSQA 2020!

From the start, I like to be explicit about the mentoring relationship with the mentee. I learn what they want to get out of mentoring. I give them the top priority of my time and attention, but I also expect them to reciprocate. I don’t enter mentoring relationships if the other person doesn’t want one.

Then, I create what I call a “growth plan” for the mentee. The growth plan is a tailored training program for the mentee to accomplish their learning objectives. I set a schedule with the following types of activities:

  • One-on-one or small-group teaching sessions
    • The best format for making big points that stick
    • Gives individual care and attention to the mentee
    • Provides a safe space for questions
  • Reading assignments
    • Helpful for independently learning specific topics
    • May be blogs, articles, or documentation
    • Allows the mentee to complete it at their own pace
  • Online training courses
    • Example: Test Automation University
    • Provides comprehensive, self-paced instruction
    • However, may not be 100% pertinent to the learning objectives
  • Pair programming or code review sessions
    • Hands-on time for mentor and mentee to work together
    • Allows learning by example and by osmosis
    • However, can be mentally draining, so use sparingly
  • Independent work items
    • Real, actual work items that the team must complete
    • Makes the mentee feel like they are making valuable contributions even while still learning
    • “Practice makes perfect”

These activities should be structured to fit all learning styles and build upon each other. For example, if I am mentoring someone about how to do Behavior-Driven Development, I would probably schedule the following activities:

  1. A “Welcome to BDD” whiteboard session with me
  2. Reading my BDD 101 series
  3. Watching a video about Example Mapping
  4. A small group activity for doing Example Mapping on a new story
  5. A work item to write Gherkin scenarios for that mapped story
  6. A review session for those scenarios
  7. A “BDD test frameworks” deep dive session with me
  8. A work item to automate the Gherkin scenarios they wrote
  9. A review session for those automated tests
  10. Another work item for writing and automating a new round of tests

Any learning objective can be mapped to a growth plan like this. Make sure each step is reasonable, well-defined, and builds upon the previous steps.

I would like to give one warning about mentoring for test automation. If a mentee wants to learn test automation, then they must first learn programming. Historically, software testing was a manual process, and most testers didn’t need to know how to code. Now, automation is indispensable for organizations that want to move fast without breaking things. Most software testing jobs require some level of automation skills. Many employers are even forcing their manual testers to pick up automation. However, good automation skills are rooted in good programming skills. Someone can’t learn “just enough Java” and then expect to be a successful automation engineer – they must effectively first become a developer.

Characteristics of Good Mentoring

Mentoring requires both individuals to commit time and effort to the relationship.

To be a good mentor:

  • Be helpful – provide valuable guidance that the mentee needs
  • Be prepared – know what you should know, and be ready to share it
  • Be approachable – never be “too busy” to talk
  • Be humble – reveal your limits, and admit what you don’t know
  • Be patient – newbies can be slow

To be a good mentee:

  • Seek long-term growth, not just answers to today’s questions
  • Come prepared in mind and materials
  • Ask thoughtful questions and record the answers
  • Practice what you learn
  • Express appreciation for your mentor – it’s often a thankless job

Take Your Time

Mentoring may take a lot of time, but good mentoring bears good fruit. Mentees will produce higher-quality work. They’ll get things done faster. They’ll also have more confidence in themselves. Mentors themselves will feel a higher satisfaction in their own work, too. The whole team wins.

The alternative would waste a lot more time. Without good mentoring, newcomers will be forced to sink or swim. They won’t be able to finish tasks as quickly, and their work will more likely have problems. They will feel stressed and doubtful, too. Forcing people to tough things out is a very inefficient learning process, and it can also devolve into forms of hazing in unhealthy work environments. Anytime someone says there isn’t enough time for mentoring, I would reply by saying there’s even less time for fixing poor-quality work. In the case of software testing, the lack of mentoring could cause bugs to escape to production!

I encourage leaders, managers, and senior engineers to make mentoring happen as part of their job responsibilities. Dedicate time for it. Facilitate it. Normalize it. Be intentional. Furthermore, I encourage leaders to be force multipliers: mentor others to mentor others. Time is always tight, so make the most of it.

More Advice?

I hope this article is helpful! Do you have any thoughts, advice, or questions about mentoring, specifically in the field of software testing? I’d love to hear them, so drop a comment below.

I wrote this article as a follow-up to an “Ask Me Anything” session on July 15, 2020 with Tristan Lombard and the Testim Community. Tristan published the full AMA transcript in a Medium article. Many thanks to them for the opportunity!

Test-Driving TestProject’s New Python SDK

TestProject recently released its new OpenSDK, and one of its major features is the inclusion of Python testing support! Since I love using Python for test automation, I couldn’t wait to give it a try. This article is my crash-course tutorial on writing Web UI tests in Python with TestProject.

What is TestProject?

TestProject is a free end-to-end test automation platform for Web, mobile, and API tests. It provides a cloud-based way for teams to build, run, share, and analyze tests. Manual testers can visually build tests for desktop or mobile sites using TestProject’s in-browser recorder and test builder. Automation engineers can use TestProject’s SDKs in Java, C#, and now Python for developing coded test automation solutions, and they can use packages already developed by others in the community through TestProject’s add-ons. Whether manual or automated, TestProject displays all test results in a sleek reporting dashboard with helpful analytics. And all of these features are legitimately free – there’s no tiered model or service plan.

Recently, TestProject announced the official release of its new OpenSDK. This new SDK (“software development kit”) provides a simple, unified interface for running tests with TestProject across multiple platforms and languages (now including Python). Things look exciting for the future of TestProject!

What’s My Interest?

It’s no secret that I love testing with Python. When I heard that TestProject added Python support, I knew I had to give it a try. I had never used TestProject before, but I was interested to learn what it could do. Specifically, I wanted to see the value it could bring to reporting automated tests. In the Python space, test automation is slick, but reporting can be rough since frameworks like pytest and unittest are command-line-focused. I also wanted to see if TestProject’s SDK would genuinely help me automate tests or if it would get in my way. Furthermore, I know some great people in the TestProject community, so I figured it was time to jump in myself!

The Python SDK

TestProject’s Python SDK is an open-source project. It was originally developed by Bas Dijkstra, with the support of the TestProject team, and its code is hosted on GitHub. The Python SDK supports Selenium for Web UI automation (which will be the focus for this tutorial) and Appium for Android and iOS UI automation as well!

Since I’d never used TestProject before, let alone this new Python SDK, I wanted to review the code to see how to use it. Thankfully, the README included lots of helpful information and example code. When I looked at the code for TestProject’s BaseDriver, I discovered that it simply extends Selenium WebDriver’s RemoteDriver class. That means all the TestProject WebDrivers use exactly the same API as Python’s Selenium WebDriver implementation. To me, that was a big relief. I know WebDriver’s API very well, so I wouldn’t need to learn anything different in order to use TestProject. It also means that any test automation project can be easily retrofitted to use TestProject’s SDKs – they just need to swap in a new WebDriver object!

Setup Steps

TestProject has a straightforward architecture. Users sign up for free TestProject accounts online. Then, they set up their own machines for running tests. Each testing machine must have the TestProject agent installed and linked to a user’s account. When tests run, agents automatically push results to the TestProject cloud. Users can then log into the TestProject portal to view and analyze results. They can invite teammates to share results, and they can also set up multiple test machines with agents. Users can even integrate TestProject with other tools like Jenkins, qTest, and Sauce Labs. The TestProject docs, especially the ecosystem diagram, explain everything in more detail.

When I did my test drive, I created a TestProject account, installed the agent on my Mac, and ran Python Web UI tests from my Mac. I already had the latest version of Python installed (Python 3.8 at the time of writing this article). I also already had my target browsers installed: Google Chrome and Mozilla Firefox.

Below are the precise steps I followed to set up TestProject:

1. Sign up for an account

TestProject accounts are “free forever.” Use this signup link.

The TestProject signup page

2. Download the TestProject Agent

The signup wizard should direct you to download the TestProject agent. If not, you can always download it from the TestProject dashboard. Be warned, the download package is pretty large – the macOS package was 345 MB. Alternatively, you can fetch the agent as a container image from Docker Hub.

The TestProject agent download page

The TestProject agent contains all the stuff needed to run tests and upload results to the TestProject app in the cloud. You don’t need to install WebDriver executables like ChromeDriver or geckodriver. Once the agent is downloaded, install it on the machine and register the agent with your account. For me, registration happened automatically.

This is what the TestProject agent looks like when running on macOS. You can also close this window to let it run in the background.

3. Find your developer token

You’ll need to use your developer token to connect your automated tests to your account in the TestProject app. The signup wizard should reveal it to you, but you can always find it (and also reset it) on the Integrations page.

The Integrations page. Check here for your developer token. No, you can’t use mine.

4. Install the Python SDK

TestProject’s Python SDK is distributed as a package through PyPI. To install it, simply run pip install testproject-python-sdk at the command line. This package will also install dependencies like selenium and requests.

A Classic Web UI Test

After setting up my Mac to use TestProject, it was time to write some Web UI tests in Python! Since I discovered that TestProject’s WebDriver objects could easily retrofit any existing test automation project, I thought, “What if I try to run my PyCon 2020 tutorial project with TestProject?” For PyCon 2020, I gave an online tutorial about building a Web UI test automation project in Python from the ground up using pytest and Selenium WebDriver. The tutorial includes one test case: a DuckDuckGo web search and verification. I thought it would be easy to integrate with TestProject since I already had the code. Thankfully, it was!

Below, I’ll walk through my code. You can check out my example project repository from GitHub at AndyLPK247/testproject-python-sdk-example. My code will be a bit more advanced than the examples shown in the Python SDK’s README or in Bas Dijkstra’s tutorial article because it uses the Page Object Model and pytest fixtures. Make sure to pip install pytest, too.

1. Write the test steps

The test case covers a simple DuckDuckGo web search. Whenever I automate tests, I always write out the steps in plain language. Good tests follow the Arrange-Act-Assert pattern, and I like to use Gherkin’s Given-When-Then phrasing. Here’s the test case:

Scenario: Basic DuckDuckGo Web Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "panda"
    Then the search result query is "panda"
    And the search result links pertain to "panda"
    And the search result title contains "panda"

2. Specify automation inputs

Inputs configure how automated tests run. They can be passed into a test automation solution using configuration files. Testers can then easily change input values in the config file without changing code. Automation should read config files once at the start of testing and inject necessary inputs into every test case.

In Python, I like to use JSON for config files. JSON data is simple and hierarchical, and Python includes a module in its standard library named json that can parse a JSON file into a Python dictionary in one line. I also like to put config files either in the project root directory or in the tests directory.

Here’s the contents of config.json for this test:

{
  "browser": "Chrome",
  "implicit_wait": 10,
  "testproject_projectname": "TestProject Python SDK Example",
  "testproject_token": ""
}
  • browser is the name of the browser to test
  • implicit_wait is the implicit waiting timeout for the WebDriver instance
  • testproject_projectname is the project name to use for this test suite in the TestProject app
  • testproject_token is the developer token

3. Read automation inputs

Automation code should read inputs one time before any tests run and then inject inputs into appropriate tests. pytest fixtures make this easy to do.

I created a fixture named config in the tests/conftest.py module to read config.json:

import json
import pytest


@pytest.fixture(scope='session')
def config():

  # Read the file
  with open('config.json') as config_file:
    config = json.load(config_file)
  
  # Assert values are acceptable
  assert config['browser'] in ['Firefox', 'Chrome', 'Headless Chrome']
  assert isinstance(config['implicit_wait'], int)
  assert config['implicit_wait'] > 0
  assert config['testproject_projectname'] != ""
  assert config['testproject_token'] != ""

  # Return config so it can be used
  return config

Setting the fixture’s scope to “session” means that it will run only one time for the whole test suite. The fixture reads the JSON config file, parses its text into a Python dictionary, and performs basic input validation. Note that Firefox, Chrome, and Headless Chrome will be supported browsers.

4. Set up WebDriver

Each Web UI test should have its own WebDriver instance so that it remains independent from other tests. Once again, pytest fixtures make setup easy.

The browser fixture in tests/conftest.py initializes the appropriate TestProject WebDriver type based on inputs returned by the config fixture:

from selenium.webdriver import ChromeOptions
from src.testproject.sdk.drivers import webdriver


@pytest.fixture
def browser(config):

  # Initialize shared arguments
  kwargs = {
    'projectname': config['testproject_projectname'],
    'token': config['testproject_token']
  }

  # Initialize the TestProject WebDriver instance
  if config['browser'] == 'Firefox':
    b = webdriver.Firefox(**kwargs)
  elif config['browser'] == 'Chrome':
    b = webdriver.Chrome(**kwargs)
  elif config['browser'] == 'Headless Chrome':
    opts = ChromeOptions()
    opts.add_argument('headless')
    b = webdriver.Chrome(chrome_options=opts, **kwargs)
  else:
    raise Exception(f'Browser "{config["browser"]}" is not supported')

  # Make its calls wait for elements to appear
  b.implicitly_wait(config['implicit_wait'])

  # Return the WebDriver instance for the setup
  yield b

  # Quit the WebDriver instance for the cleanup
  b.quit()

This was the only section of code I needed to change to make my PyCon 2020 tutorial project work with TestProject. I had to change the WebDriver invocations to use the TestProject classes. I also had to add arguments for the project name and developer token, which come from the config file. (Note: you may alternatively set the developer token as an environment variable.)

5. Create page objects

Automated tests could make direct calls to the WebDriver interface to interact with the browser, but WebDriver calls are typically low-level and wordy. The Page Object Model is a much better design pattern. Page object classes encapsulate WebDriver gorp so that tests can call simpler, more readable methods.

The DuckDuckGo search test interacts with two pages: the search page and the result page. The pages package contains a module for each page. Here’s pages/search.py:

from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


class DuckDuckGoSearchPage:

  URL = 'https://www.duckduckgo.com'

  SEARCH_INPUT = (By.ID, 'search_form_input_homepage')

  def __init__(self, browser):
    self.browser = browser

  def load(self):
    self.browser.get(self.URL)

  def search(self, phrase):
    search_input = self.browser.find_element(*self.SEARCH_INPUT)
    search_input.send_keys(phrase + Keys.RETURN)

And here’s pages/result.py:

from selenium.webdriver.common.by import By

class DuckDuckGoResultPage:
  
  RESULT_LINKS = (By.CSS_SELECTOR, 'a.result__a')
  SEARCH_INPUT = (By.ID, 'search_form_input')

  def __init__(self, browser):
    self.browser = browser

  def result_link_titles(self):
    links = self.browser.find_elements(*self.RESULT_LINKS)
    titles = [link.text for link in links]
    return titles
  
  def search_input_value(self):
    search_input = self.browser.find_element(*self.SEARCH_INPUT)
    value = search_input.get_attribute('value')
    return value

  def title(self):
    return self.browser.title

Notice that this code uses the “regular” WebDriver interface because TestProject’s WebDriver classes extend the Selenium WebDriver classes.

To make setup easier, I added fixtures to tests/conftest.py to construct each page object, too. They call the browser fixture and inject the WebDriver instance into each page object:

from pages.result import DuckDuckGoResultPage
from pages.search import DuckDuckGoSearchPage


@pytest.fixture
def search_page(browser):
  return DuckDuckGoSearchPage(browser)


@pytest.fixture
def result_page(browser):
  return DuckDuckGoResultPage(browser)

6. Automate the test case

All the automation plumbing is finally in place. Here’s the test case in tests/traditional/test_duckduckgo.py:

import pytest


@pytest.mark.parametrize('phrase', ['panda', 'python', 'polar bear'])
def test_basic_duckduckgo_search(search_page, result_page, phrase):
  
  # Given the DuckDuckGo home page is displayed
  search_page.load()

  # When the user searches for the phrase
  search_page.search(phrase)

  # Then the search result query is the phrase
  assert phrase == result_page.search_input_value()
  
  # And the search result links pertain to the phrase
  titles = result_page.result_link_titles()
  matches = [t for t in titles if phrase.lower() in t.lower()]
  assert len(matches) > 0

  # And the search result title contains the phrase
  assert phrase in result_page.title()

I parametrized the test to run it for three different phrases. The test function does not interact with the WebDriver instance directly. Instead, it interacts exclusively with the page objects.

7. Run the tests

The tests run like any other pytest tests: python -m pytest at the command line. If everything is set up correctly, then the tests will run successfully and upload results to the TestProject app.

In the TestProject dashboard, the Reports tab shows all the tests you have run. It also shows the different test projects you have.

Check out those results!

You can also drill into results for individual test case runs. TestProject automatically records the browser type, timestamps, pass-or-fail results, and every WebDriver call. You can also download PDF reports!

Results for an individual test

What if … BDD?

I was delighted to see how easily I could run a traditional pytest suite using TestProject. Then, I thought to myself, “What if I could use a BDD test framework?” I personally love Behavior-Driven Development, and Python has multiple BDD test frameworks. There is no reason why a BDD test framework wouldn’t work with TestProject!

So, I rewrote the DuckDuckGo search test as a feature file with step definitions using pytest-bdd. The BDD-style test uses the same fixtures and page objects as the traditional test.

Here’s the Gherkin scenario in tests/bdd/features/duckduckgo.feature:

Feature: DuckDuckGo
  As a Web surfer,
  I want to search for websites using plain-language phrases,
  So that I can learn more about the world around me.


  Scenario Outline: Basic DuckDuckGo Web Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "<phrase>"
    Then the search result query is "<phrase>"
    And the search result links pertain to "<phrase>"
    And the search result title contains "<phrase>"

    Examples:
      | phrase     |
      | panda      |
      | python     |
      | polar bear |

And here’s the step definition module in tests/bdd/step_defs/test_duckduckgo_bdd.py:

from pytest_bdd import scenarios, given, when, then, parsers
from selenium.webdriver.common.keys import Keys


scenarios('../features/duckduckgo.feature')


@given('the DuckDuckGo home page is displayed')
def load_duckduckgo(search_page):
  search_page.load()


@when(parsers.parse('the user searches for "{phrase}"'))
@when('the user searches for "<phrase>"')
def search_phrase(search_page, phrase):
  search_page.search(phrase)


@then(parsers.parse('the search result query is "{phrase}"'))
@then('the search result query is "<phrase>"')
def check_search_result_query(result_page, phrase):
  assert phrase == result_page.search_input_value()


@then(parsers.parse('the search result links pertain to "{phrase}"'))
@then('the search result links pertain to "<phrase>"')
def check_search_result_links(result_page, phrase):
  titles = result_page.result_link_titles()
  matches = [t for t in titles if phrase.lower() in t.lower()]
  assert len(matches) > 0


@then(parsers.parse('the search result title contains "{phrase}"'))
@then('the search result title contains "<phrase>"')
def check_search_result_title(result_page, phrase):
  assert phrase in result_page.title()

There’s one more nifty trick I added with pytest-bdd. I added a hook to report each Gherkin step to TestProject with a screenshot! That way, testers can trace each test case step more easily in the TestProject reports. Capturing screenshots also greatly assists test triage when failures arise. This hook is located in tests/conftest.py:

def pytest_bdd_after_step(request, feature, scenario, step, step_func):
  browser = request.getfixturevalue('browser')
  browser.report().step(description=str(step), message=str(step), passed=True, screenshot=True)

Since pytest-bdd is just a pytest plugin, its tests run using the same python -m pytest command. TestProject will group these test results into the same project as before, but it will separate the traditional tests from the BDD tests by name. Here’s what the Gherkin steps with screenshots look like:

Custom Gherkin step with screenshot reported in the TestProject app

This is Awesome!

As its name denotes, TestProject is a great platform for handling project-level concerns for testing work: reporting, integrations, and fast feedback. Adding TestProject to an existing automation solution feels seamless, and its sleek user experience gives me what I need as a tester without getting in my way. The one word that keeps coming to mind is “simple” – TestProject simplifies setup and sharing. Its design takes to heart the renowned Python adage, “Simple is better than complex.” As such, TestProject’s new Python SDK is a welcome addition to the Python testing ecosystem.

I look forward to exploring Python support for mobile testing with Appium soon. I also look forward to seeing all the new Python add-ons the community will develop.

12 Traits of Highly Effective Tests

Writing effective tests is hard. Tests that are flaky, confusing, or slow are effectively useless because they do more harm than good. The Arrange-Act-Assert pattern gives good structure, but what other characteristics should test cases have? Here are 12 traits for highly effective tests.

#1. Understandable

At its core, a test is just a step-by-step procedure. It exercises a behavior and verifies the outcome. In a sense, tests are living specifications – they detail exactly how a feature should function. Everyone should be able to intuitively understand how a test works. Follow conventions like Arrange-Act-Assert or Given-When-Then. Seek conciseness without vagueness. Avoid walls of text.

If you find yourself struggling to write a test in plain language, then you should review the design for the feature under test. If you can’t explain it, then how will others know how to use it?

#2. Unique

Each test case in a suite should cover a unique behavior. Don’t Repeat Yourself – repetitive tests with few differences bear a heavy cost to maintain and execute without delivering much additional value. If a test can cover multiple inputs, then focus on one variation per equivalence class.

For example, equivalence classes for the absolute value function could be a positive number, a negative number, and zero. There’s little need to cover multiple negative numbers because the absolute value function performs the same operation on all negatives.
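
To show the idea in code, here’s a minimal pytest sketch that covers one input per equivalence class:

import pytest

# Cover one representative input per equivalence class
@pytest.mark.parametrize('value, expected', [
  (5, 5),     # positive numbers pass through unchanged
  (-5, 5),    # negative numbers are negated
  (0, 0),     # zero is its own boundary class
])
def test_abs_equivalence_classes(value, expected):
  assert abs(value) == expected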

#3. Individual

Test one thing at a time. Tests that each focus on one main behavior are easier to formulate and automate. They naturally become understandable and maintainable. When a test covering only one behavior fails, then its failure reason is straightforward to deduce.

Any time you want to combine multiple behaviors into one test, consider separating them into different tests. Make a clear distinction between “arrange” and “act” steps. Write atomic tests as much as possible. Avoid writing “world tours,” too. I’ve seen repositories where tests are a hundred steps long and meander through an application like Mr. Toad’s Wild Ride.

#4. Independent

Each test should be independent of all other tests. That means testers should be able to run each test as a standalone unit. Each test should have appropriate setup and cleanup routines to do no harm and leave no trace. Set up new resources for each test. Automated tests should use patterns like dependency injection instead of global variables. If one test fails, others should still run successfully. Test case independence is the cornerstone for scalable, parallelizable tests.

Modern test automation frameworks strongly support test independence. However, folks who are new to automation frequently presume interdependence – they think the end of one test is the starting point for the next one in the source code file. Don’t write tests like that! Write your tests as if each one could run on its own, or as if the suite’s test order could be randomized.
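
As a quick sketch, pytest fixtures use dependency injection to give each test its own fresh resources (the fixture and tests below are purely illustrative):

import pytest

@pytest.fixture
def user_list():
  # Arrange: build a fresh resource for every test
  users = ['panda']
  yield users
  # Cleanup: leave no trace for the next test
  users.clear()

def test_add_user(user_list):
  user_list.append('python')
  assert len(user_list) == 2

def test_remove_user(user_list):
  user_list.remove('panda')
  assert len(user_list) == 0

Either test passes whether it runs first, last, or alone.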

#5. Repeatable

Testing tends to be a repetitive activity. Test suites need to run continuously to provide fast feedback as development progresses. Every time they run, they must yield deterministic results because teams expect consistency.

Unfortunately, manual tests are not very repeatable. They require lots of time to run, and human testers may not run them exactly the same way each iteration. Test automation enables tests to be truly repeatable. Tests can be automated once and run repeatedly and continuously. Automated scripts always run the same way, too.

#6. Reliable

Tests must run successfully to completion, whether they return PASS or FAIL results. “Flaky” tests – tests that occasionally fail for arbitrary reasons – waste time and create doubt. If a test cannot run reliably, then how can its results be trusted? And why would a team invest so much time developing tests if they don’t run well?

You shouldn’t need to rerun tests to get good results. If tests fail intermittently, find out why. Correct any automation errors. Tune automation timeouts. Scale infrastructure to the appropriate sizes. Prioritize test stability over speed. And don’t overlook any wonky bugs that could be lurking in the product under test!

#7. Efficient

Providing fast feedback is testing’s main purpose. Fast feedback helps teams catch issues early and keep developing safely. Fast tests enable fast feedback. Slow tests cause slow feedback. They force teams to limit coverage. They waste time and money, and they increase the risk that bugs do more damage.

Optimize tests to be as efficient as possible without jeopardizing stability. Don’t include unnecessary steps. Use smart waits instead of hard sleeps. Write atomic tests that cover individual behaviors. For example, use APIs instead of UIs to prep data. Set up tests to run in parallel. Run tests as part of Continuous Integration pipelines so that they deliver results immediately.
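
For example, here’s a small Selenium WebDriver sketch of a smart wait (the element ID is hypothetical):

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def wait_for_results(browser):
  # Poll until the results appear, up to 10 seconds,
  # instead of always sleeping for a fixed duration
  wait = WebDriverWait(browser, 10)
  wait.until(EC.visibility_of_element_located((By.ID, 'links')))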

#8. Organized

An effective test has a clear identity:

  • Purpose: Why run this test?
  • Coverage: What behavior or feature does this test cover?
  • Level: Should this test be a unit, integration, or end-to-end test?

Identity informs placement and type. Make sure tests belong to appropriate suites. For example, tests that interact with Web UIs via Selenium WebDriver do not belong in unit test suites. Group related tests together using subdirectories and/or tags.
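
For instance, pytest markers can tag tests by type so suites can be filtered at run time (the marker name and the browser fixture below are illustrative):

import pytest

# Register custom markers in pytest.ini to avoid warnings
@pytest.mark.e2e
def test_home_page_title(browser):
  assert 'DuckDuckGo' in browser.title

A command like python -m pytest -m e2e would then run only the tagged tests.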

#9. Reportable

Functional tests yield PASS or FAIL results with logs, screenshots, and other artifacts. Large suites yield lots of results. Reports should present results in a readable, searchable format. They should make failures stand out with colors and error messages. They should also include other helpful information like duration times and pass rates. Unit test reports should include code coverage, too.

Publish test reports to public dashboards so everyone can see them. Most Continuous Integration servers like Jenkins include some sort of test reporting mechanism. Furthermore, capture metrics like test result histories and duration times in data formats instead of textual reports so they can be analyzed for trends.
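
For example, pytest can emit a JUnit-style XML report that most Continuous Integration servers can ingest:

python -m pytest --junitxml=report.xml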

#10. Maintainable

Tests are inherently fragile because they depend upon the features they cover. If features change, then tests probably break. Furthermore, automated tests are susceptible to code duplication because they frequently repeat similar steps. Code duplication is code cancer – it copies problems throughout a code base.

Fragility and duplication cause a nightmare for maintainability. To mitigate the maintenance burden, develop tests using the same practices as developing products. Don’t Repeat Yourself. Simple is better than complex. Do test reviews. For automation, follow good design principles like separating concerns and building solution layers. Make tests easy to update in the future!
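
As a small sketch of solution layering, a page object keeps locators and interactions in one place so tests never repeat raw WebDriver calls (the URL and locator below are illustrative):

from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

class SearchPage:

  URL = 'https://duckduckgo.com'
  SEARCH_INPUT = (By.ID, 'search_form_input_homepage')

  def __init__(self, browser):
    self.browser = browser

  def load(self):
    self.browser.get(self.URL)

  def search(self, phrase):
    # If this locator changes, only the page object needs an update
    search_input = self.browser.find_element(*self.SEARCH_INPUT)
    search_input.send_keys(phrase + Keys.RETURN)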

#11. Trustworthy

A test is “successful” if it runs to completion and yields a correct PASS or FAIL result. The veracity of the outcome matters. Tests that report false failures make teams waste time on unnecessary triage. Tests that report false passing results give a false sense of security and let bugs go undetected. Either way, the team ultimately loses trust in its tests.

Unfortunately, I’ve seen quite a few untrustworthy tests before. Sometimes, test assertions don’t check the right things, or they might be missing entirely! I’ve also seen tests for which the title does not match the behavior under test. These problems tend to go unnoticed in large test suites, too. Make sure every single test is trustworthy. Review new tests carefully, and take time to improve existing tests whenever problems are discovered.

#12. Valuable

Testing takes a lot of work. It takes time away from developing new things. Therefore, testing must be worth the effort. Since covering every single behavior is impossible, teams should apply a risk-based strategy to determine which behaviors pose the most risk if they fail and then prioritize testing for those behaviors.

If you are unsure if a test is genuinely valuable, ask this question: If the test fails, will the team take action to fix the defect? If the answer is yes, then the test is very valuable. If the answer is no, then look for other, more important behaviors to cover with tests.

Any more traits?

These dozen traits certainly make tests highly effective. However, this list is not necessarily complete. Do you have any more traits to add to the list? Do you agree or disagree with the traits I’ve given? Let me know by retweeting and commenting on my tweet below!

(Note: I changed #8 from “Leveled Appropriately” to “Organized” to be more concise. The tweet is older than the article.)

Arrange-Act-Assert: A Pattern for Writing Good Tests

A test is a procedure that exercises a behavior to determine if the behavior functions correctly. There are several different kinds of tests, like unit tests, integration tests, or end-to-end tests, but all functional tests do the same basic thing: they try something and report PASS or FAIL.

Testing provides an empirical feedback loop for development. That’s how testing keeps us safe. With tests, we know when things break. Without tests, coding can be dangerous. We don’t want to deploy big ol’ bugs!

So, if we intend to spend time writing tests, how can we write good tests? There’s a simple but powerful pattern I like to follow: Arrange-Act-Assert.

The Pattern

Arrange-Act-Assert is a great way to structure test cases. It prescribes an order of operations:

  1. Arrange inputs and targets. Arrange steps should set up the test case. Does the test require any objects or special settings? Does it need to prep a database? Does it need to log into a web app? Handle all of these operations at the start of the test.
  2. Act on the target behavior. Act steps should cover the main thing to be tested. This could be calling a function or method, calling a REST API, or interacting with a web page. Keep actions focused on the target behavior.
  3. Assert expected outcomes. Act steps should elicit some sort of response. Assert steps verify the goodness or badness of that response. Sometimes, assertions are as simple as checking numeric or string values. Other times, they may require checking multiple facets of a system. Assertions will ultimately determine if the test passes or fails.

Behavior-Driven Development follows the Arrange-Act-Assert pattern by another name: Given-When-Then. The Gherkin language uses Given-When-Then steps to specify behaviors in scenarios. Given-When-Then is essentially the same formula as Arrange-Act-Assert.

Every major programming language has at least one test framework. Frameworks like JUnit, NUnit, Cucumber, and (my favorite) pytest enable you, as the programmer, to automate tests, execute suites, and report results. However, the framework itself doesn’t make a test case “good” or “bad.” You, as the tester, must know how to write good tests!

Let’s look at how to apply the Arrange-Act-Assert pattern in Python code. I’ll use pytest for demonstration.

Unit Testing

Here’s a basic unit test for Python’s absolute value function:

def test_abs_for_a_negative_number():

  # Arrange
  negative = -5
  
  # Act
  answer = abs(negative)
  
  # Assert
  assert answer == 5

This test may seem trivial, but we can use it to illustrate our pattern. I like to write comments denoting each phase of the test case as well.

  1. The Arrange step creates a variable named “negative” for testing.
  2. The Act step calls the “abs” function using the “negative” variable and stores the returned value in a variable named “answer.”
  3. The Assert step verifies that “answer” equals 5, the expected absolute value.

Feature Testing

Let’s kick it up a notch with a more complicated test. This next example tests the DuckDuckGo Instant Answer API using the requests package:

import requests

def test_duckduckgo_instant_answer_api_search():

  # Arrange
  url = 'https://api.duckduckgo.com/?q=python+programming&format=json'
  
  # Act
  response = requests.get(url)
  body = response.json()
  
  # Assert
  assert response.status_code == 200
  assert 'Python' in body['AbstractText']

We can clearly see that the Arrange-Act-Assert pattern works for feature tests as well as unit tests.

  1. The Arrange step forms the endpoint URL for searching for “Python Programming.” Notice the base URL and the query parameters.
  2. The Act steps call the API at that URL using “requests” and then parse the response’s body from JSON into a Python dictionary.
  3. The Assert steps then verify that the HTTP status code was 200, meaning “OK” or “success,” and that the word “Python” appears somewhere in the response’s abstract text.

Arrange-Act-Assert also works for other types of feature tests, like Web UI and mobile tests.

More Advice

Arrange-Act-Assert is powerful because it is simple. It forces tests to focus on independent, individual behaviors. It separates setup actions from the main actions. It requires tests to make verifications and not merely run through motions. Notice how the pattern is not Arrange-Act-Assert-Act-Assert – subsequent actions and assertions belong in separate tests! Arrange-Act-Assert is a great pattern to follow for writing good functional tests.

Using Multiple Test Frameworks Simultaneously

Someone recently asked me the following question, which I’ve paraphrased for better context:

Is it good practice to use multiple test frameworks simultaneously? For example, I’m working on a Python project. I want to do BDD with behave for feature testing, but pytest would be better for unit testing. Can I use both? If so, how should I structure my project(s)?

The short answer: Yes, you should use the right frameworks for the right needs. Using more than one test framework is typically not difficult to set up. Let’s dig into this.

The F-word

I despise the F-word – “framework.” Within the test automation space, people use the word “framework” to refer to different things. Is the “framework” only the test package like pytest or JUnit? Does it include the tests themselves? Does it refer to automation tools like Selenium WebDriver?

For clarity, I prefer to use two different terms: “framework” and “solution.” A test framework is a software package that lets programmers write tests as methods or functions, run the tests, and report the results. A test solution is a software implementation for a testing problem. It typically includes frameworks, tools, and test cases. To me, a framework is narrow, but a solution is comprehensive.

The original question used the word “framework,” but I think it should be answered in terms of solutions. There are two potential solutions at hand: one for unit tests written with pytest, and another for feature tests written with behave.

One Size Does Not Fit All

Always use the right tools or frameworks for the right needs. Unit tests and feature tests are fundamentally different. Unit tests directly access internal functions and methods in product code, whereas feature tests interact with live versions of the product as an external user or caller. Thus, they need different kinds of testing solutions, which most likely will require different tools and frameworks.

For example, behave is a BDD framework for Python. Programmers write test cases in plain-language Gherkin with step definitions as Python functions. Gherkin test cases are intuitively readable and understandable, which makes them great for testing high-level behaviors like interacting with a Web page. However, BDD frameworks add complexity that hampers unit test development. Unit tests are inherently “code-y” and low-level because they directly call product code. The pytest framework would be a better choice for unit testing. Conversely, feature tests could be written using raw pytest, but behave provides a more natural structure for describing features. Hence, separate solutions for different test types would be ideal.

Same or Separate Repositories?

If more than one test solution is appropriate for a given software project, the next question is where to put the test code. Should all test code go into the same repository as the product code, or should they go into separate repositories? Unfortunately, there is no universally correct answer. Here are some factors to consider.

Unit tests should always be located in the same repository as the product code they test. Unit tests directly depend upon the product code. They must be written in the same language. Any time the product code is refactored, unit tests must be updated.

Feature tests can be placed in the same repository or a separate repository. I recommend putting feature tests in the same repository as product code if feature tests are written in the same language as the product code and if all the product code under test is located in the same repository. That way, tests are version-controlled together with the product under test. Otherwise, I recommend putting feature tests in their own separate repository. Mixed language repositories can be confusing to maintain, and version control must be handled differently with multi-repository products.

Same Repository Structure

One test solution in one repository is easy to set up, but multiple test solutions in one repository can be tricky. Thankfully, it’s not impossible. Project structure ultimately depends upon the language. Regardless of language, I recommend separating concerns. A repository should have clearly separate spaces (e.g., subdirectories) for product code and test code. Test code should be further divided by test types and then coverage areas. Testers should be able to run specific tests using convenient filters.

Here are ways to handle multiple test solutions in a few different languages:

  • In Python, project structure is fairly flexible. Conventionally, all tests belong under a top-level directory named “tests.” Subdirectories may be added thereunder, such as “unit” and “feature”. Frameworks like pytest and behave can take search paths so they run the proper tests. Furthermore, if using pytest-bdd instead of behave, pytest can use markers/tags instead of search paths for filtering tests. (See the sketch after this list.)
  • In .NET (like C#), the terms “project” and “solution” have special meanings. A .NET project is a collection of code that is built into one artifact. A .NET solution is a collection of projects that interrelate. Typically, the best practice in .NET would be to create a separate project for each test type/suite within the same .NET solution. I have personally set up a .NET solution that included separate projects for NUnit unit tests and SpecFlow feature tests.
  • In Java, project structure depends upon the project’s build automation tool. Most Java projects seem to use Maven or Gradle. In Maven’s Standard Directory Layout, tests belong under “src/test”. Different test types can be placed under separate packages there. The POM might need some extra configuration to run tests at different build phases.
  • In JavaScript, test placement depends heavily upon the project type. For example, Angular creates separate directories for unit tests using Jasmine and end-to-end tests using Protractor when initializing a new project.
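
For instance, a hypothetical Python repository hosting both solutions might look like this, with a search-path command to run each:

my-project/
  my_package/        # product code
  tests/
    unit/            # pytest unit tests
    feature/         # behave tests (.feature files plus steps/)

python -m pytest tests/unit
behave tests/feature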

Do What’s Best

Different test tools and frameworks meet different needs. No single one can solve all problems. Make sure to use the right tools for the problems at hand. Don’t force yourself to use the wrong thing simply because it is already used elsewhere.

Grace Hopper Bug

Writing Good Bug Reports

Bugs, bugs, bugs! Talking about software development is impossible without also talking about bugs. At first, the term “bug” may seem like strange slang for “defect.” Are there creepy-crawlies running about our code and computers? Not usually – but sometimes, yes! In 1947, Grace Hopper found a dead moth stuck in a relay in Harvard’s Mark II computer, and her “bug” report (pictured above) joked about finding a real bug behind a computer defect. Even though inventors like Thomas Edison had used the term “bug” to describe technological glitches for years beforehand, Grace Hopper’s bug cemented the terminology for computers and software.

Bugs happen. Why? Nobody is perfect, and therefore, no software is perfect. Building software of high quality requires good designs to resist bugs, good implementations to avoid bugs, and good feedback to report bugs when they inevitably appear. This article covers best practices for writing good bug reports when they do happen.

What is a bug “report”?

A “bug” is a defect, plain and simple. The term refers specifically to an issue in the software. However, a bug report (or ticket) is a written record detailing the defect. Bug reports are typically written in a project management tool like Jira. The bug and its report are two separate entities. Certainly, undetected bugs can exist in a software product without having associated reports.

When should a bug report be written?

A bug report should be written whenever a new problem that appears to be a defect is discovered. Problems can be discovered during testing activities like automated test runs or exploratory manual testing. They can also be discovered while developing new features. In the worst case, customers will find problems and submit complaints!

However, notice how I used the term “problem” and not “defect.” All problems need solutions, but not all problems are truly defects. Sometimes, the user who reported the problem doesn’t know how a feature should work. Other times, the environment in which the problem occurred is improperly configured. The team member who first discovered the problem or received the customer complaint should initially do a light investigation to make sure the problem looks like a genuine software defect. Initial investigation should be done expediently while context is fresh.

If the problem appears to be a real defect and not a misunderstanding or misconfiguration, then the investigator should search existing bug reports for the same issue. Someone else on the team might have recently discovered the same issue or a similar issue. Bugs also can reappear even after being “fixed.” Adding information to existing reports is typically a better practice than creating duplicative reports.

What if the problem is unclear? Whenever I’m not sure if a problem is a bug or another type of issue, I ask others on my team for their thoughts. I try to ask questions like, “Does this look right? What could cause this behavior? Did I do something incorrectly?” Blindly opening bug reports for every problem is akin to “the boy who cried wolf” – it can desensitize a team to warnings of real, important bugs. Doing just a bit of investigation shows good intentions and, in many cases, spares the team extra work later. Nevertheless, when in doubt, creating a report is better than not creating a report. A little churn from false positives is better than risking real problems.

Why should bug reports be written?

Whenever a real bug is discovered, a team should write a report for it. Simply talking about the bug might seem like an easier, faster approach, especially for smaller teams, but the act of writing a report for the bug is important for the integrity of the software development process. A written record is an artifact that requires resolution:

  • A report provides a feedback loop to developers.
  • A report contains all bug information in a single source.
  • A report can be tracked in a project management tool.
  • A report can be sized and prioritized with development work.
  • A report records work history for the bug.

Bug reports help make bug fixes part of the development process. They bring attention to bugs, and they cannot be ignored or overlooked easily.

What goes into a bug report?

Regardless of tool or team process, good bug reports contain the following information:

  • Problem Summary
    • A brief, one-line description of the defect
    • Clearly states what is defective
    • Should be used as the title of the report
  • Report Identifier
    • A unique identifier for the bug report
    • Typically generated automatically by the management tool (like Jira)
  • Full Description
    • A longer description of the problem
    • Explain any relevant information
    • Use clear, plain language
  • Steps to Reproduce
    • A clear procedure for manually reproducing the failure
    • Could be steps from a failing test case
    • Include actual vs. expected results
  • Occurrences
    • Cases when the defect does and does not appear
    • Share product version, code branch, environment name, system configuration, etc.
    • Does the defect appear consistently or intermittently?
  • Artifacts
    • Attach logs, screenshots, files, links, etc.
  • Impact
    • How does the defect affect the customer?
    • Does the defect block any development work?
    • What test cases will fail due to this defect?
  • Root Cause Analysis
    • If known, explain why the defect happened
    • If unknown, offer possible reasons for the defect
    • Warning: clearly denote proof vs. speculation!
  • Triage
    • Assign an owner if possible
    • Assign a severity or priority based on guidelines and common sense
    • Assign a deadline if applicable
    • Assign any other information the team needs

I use this list as a template whenever I write bug reports. For example, in a Jira bug ticket, I’ll make each item a heading in the ticket’s “Description” field. Sometimes, I might skip sections if I don’t have the information. However, I typically don’t open a bug report until I have most of this information for the defect.
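
To illustrate, the first few sections of a completely made-up report might read like this:

Problem Summary: Search results page shows no links for multi-word phrases
Steps to Reproduce:
  1. Load the DuckDuckGo home page.
  2. Search for "giant panda".
  3. Expected: result links appear. Actual: the results list is empty.
Occurrences: Appears consistently in the staging environment; production has not yet been checked.
Impact: Blocks all search-related Web UI tests.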

How should bug reports be handled?

One word: professionally. Handle bug reports professionally. What does that mean?

Provide as much information as possible in bug reports. Bug reports are a form of communication and record. Saying little more than, “It dun broke,” doesn’t help anyone fix the problem. Provide useful, accurate information so that others who didn’t discover the bug have enough context to help.

Triage bugs expediently. When you uncover a problem, investigate it. When you need a second opinion, ask for it. When someone sends a bug report to you or your team, triage it, fix it, and reply to the person who reported it. Don’t ignore problems, and don’t let them fester.

Treat bug reports as unfolding stories. Bugs are usually unexpected, tricky surprises. The information in a bug report can be incomplete or even incorrect because it represents best-guess theories about the defect. The report artifact should be treated as a living document. Information can be added or updated as work proceeds. Team members should be gracious to each other regarding available information.

Do not shame or be shamed. Bugs happen. Even the best developers make mistakes. A mature, healthy team should faithfully report bugs, quickly resolve them, and take steps to avoid similar problems in the future. Developers should not stigmatize bugs or try to censor bug counts. Testers should not brag about the number of bugs they find. Language used in bug reports should focus on the software, not the people. Gossiping and public shaming over bugs should not happen. Any shame associated with bugs can drive a team toward bad practices. Any recurring issues should be addressed with individuals directly or with the help of management.

Good bug reports matter

Writing bug reports well is vital for team collaboration. Organized, accurate information can save hours of time wasted on fruitless attempts to reproduce issues or attempt fixes. Give these practices a try the next time you discover a bug!

Beyond Unit Tests: End-to-End Web UI Testing

On October 4, 2019, I gave a talk entitled Beyond Unit Tests: End-to-End Web UI Testing at PyGotham 2019. Check it out below! I show how to write a concise-yet-complete test solution for Web UI test cases using Python, pytest, and Selenium WebDriver.

This talk is a condensed version of my Hands-On Web UI Testing tutorials that I delivered at DjangoCon 2019 and PyOhio 2019. If you’d like to take the full tutorial, check out https://github.com/AndyLPK247/djangocon-2019-web-ui-testing. Full instructions are in the README.

Be sure to check out the other PyGotham 2019 talks, too. My favorite was Dungeons & Dragons & Python: Epic Adventures with Prompt-Toolkit and Friends by Mike Pirnat.