1974 VW Karmann Ghia Convertible

7 Major Trends in Front End Web Testing

This article is based on my opening keynote address for Front End Test Fest 2022.

In the featured image for this article, you see a beautiful front end. It’s probably not the kind of “front end” you expected. It’s the front end of a 1974 Volkswagen Karmann Ghia. The Karmann Ghia was known as the “poor man’s Porsche.” It’s a very special car. It was actually a collaboration project between Wilhelm Karmann, a German automobile manufacturer, and Carrozzeria Ghia, an Italian automobile designer. Ghia designed the body as a work of art, and Karmann put it on the tried-and-true platform of the classic Volkswagen Beetle. When the Volkswagen executives saw it, they couldn’t say no to mass production.

The Karmann Ghia is a perfect symbol of the state of web development today. We strive to make beautiful front ends with reliable platforms supporting them on the back end. Collaboration from both sides is key to success, but what people remember most is the experience they have with your apps. My mom drove a Karmann Ghia like this when she was a teenager, and to this day she still talks about the good times she had with it.

Good quality, design, and experience are indispensable aspects of front ends – whether for classic cars or for the Web. In this article, I’ll share seven major trends I see in front end web testing. While there are a lot of cool new things happening, I want y’all to keep in mind one main thing: tools and technologies may change, but the fundamentals of testing remain the same. Testing is interaction plus verification. Tests reveal the truth about our code and our features. We do testing as part of development to gather fast feedback for fixes and improvements. All the trends I will share today are rooted in these principles. With good testing, you can make sure your apps will look visually perfect, just like… you know.

#1. End-to-end testing

Here’s our first trend: End-to-end testing has become a three-way battle. For clarity, when I say “end-to-end” testing, I mean black-box test automation that interacts with a live web app in an active browser.

Historically, Selenium has been the most popular tool for browser automation. The project has been around for over a decade, and the WebDriver protocol is a W3C standard. It is open source, open standards, and open governance. Selenium WebDriver has bindings for C#, Java, JavaScript, Ruby, PHP, and Python. The project also includes Selenium IDE, a record-and-playback tool, and Selenium Grid, a scalable cluster for cross-browser testing. Selenium is alive and well, having just released version 4.

Over the years, though, Selenium has received a lot of criticism. Selenium WebDriver is a low-level protocol. It does not handle waiting automatically, leading many folks to unknowingly write flaky scripts. It requires clunky setup since WebDriver executables must be separately installed. Many developers dislike Selenium because coding with it requires a separate workflow or state of mind from the main apps they are developing.

Cypress was the answer to Selenium’s shortcomings. It aimed to be a modern framework with excellent developer experience, and in a few short years, it quickly became the darling test tool for front end developers. Cypress tests run in the browser side-by-side with the app under test. The syntax is super concise. There’s automatic waiting, meaning less flakiness. There’s visual tracing. There’s API calls. It’s nice. And it took a big chomp out of Selenium’s market share.

Cypress isn’t perfect, though. Its browser support is limited to Chromium-based browsers and Firefox. Cypress is also JavaScript-only, which excludes several communities. While Cypress is open source, it does not follow open standards or open governance like Selenium. And, sadly, Cypress’ performance is slow – equivalent tests run slower than they do with Selenium.

Enter Playwright, the new open source test framework from Microsoft. Playwright is the spiritual successor to Puppeteer. It boasts the wide browser and language compatibility of Selenium with the refined developer experience of Cypress. It even has a code generator to help write tests. Plus, Playwright is fast – multiple times faster than Selenium or Cypress.
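
To give you a taste, here’s a minimal sketch of what browser automation looks like with Playwright’s Python sync API. (The URL and selectors are placeholders I made up for illustration, not code from any real project.)

# A minimal sketch using Playwright's Python sync API.
# The URL and selectors below are placeholders for illustration.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()              # Playwright also bundles Firefox and WebKit
    page = browser.new_page()
    page.goto("https://example.com/")
    page.fill("#search-input", "playwright")   # interactions automatically wait for elements
    page.click("#search-button")
    print(page.title())
    browser.close()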

Playwright is still a newcomer, and it doesn’t yet have the footprint of the other tools. Some folks might be cautious that it uses browser projects instead of stock browsers. Nevertheless, it’s growing fast, and it could be a major contender for the #1 title. In Applitools’ recent Let The Code Speak code battles, Playwright handily beat out both Selenium and Cypress.

A side-by-side comparison of Selenium, Cypress, and Playwright

Selenium, Cypress, and Playwright are definitely now the “big three” browser automation tools for testing. A respectable fourth mention would be WebdriverIO. WebdriverIO is a JavaScript-based tool that can use WebDriver or debug protocols. It has a very large user base, but it is JavaScript-only, and it is not as big as Cypress. There are other tools, too. Puppeteer is still very popular but used more for web crawling than testing. Protractor, once developed by the Angular team, is now deprecated.

All these are good tools to choose (except Protractor). They can handle any kind of web app that you’re building. If you want to learn more about them, Test Automation University has courses for each.

#2. Component testing

End-to-end testing isn’t the only type of testing a team can or should do. Component testing is on the rise because components are on the rise! Many teams now build shareable component libraries to enforce consistency in their web design and to avoid code duplication. Each component is like a “unit of user interface.” Not only do they make development easier, they also make testing easier.

Component testing is distinct from unit testing. A unit test interacts directly with code. It calls a function or method and verifies its outcomes. Since components are inherently visual, they need to be rendered in the browser for proper testing. They might have multiple behaviors, or they may even trigger API calls. However, they can be tested in isolation from other components, so individually, they don’t need full end-to-end tests. That’s why, from a front end perspective, component testing is the new integration testing.

Storybook is a very popular tool for building and testing components in isolation. In Storybook, each component has a set of stories that denote how that component looks and behaves. While developing components, you can render them in the Storybook viewer. You can then manually test components by interacting with them or changing their settings. Applitools also provides an SDK for automatically running visual tests against a Storybook library.

The Storybook viewer

Cypress is also entering the component testing game. On June 1, 2022, Cypress released version 10, which included component testing support. This is a huge step forward. Before, folks would need to cobble together their own component test framework, usually as an extension of a unit test project or an end-to-end test project. Many solutions just ran automated component tests purely as Node.js processes without any browser component. Now, Cypress makes it natural to exercise component behaviors individually yet visually.

I love this quote from Cypress about their approach to component testing:

When testing anything for the web, we believe that tests should view and interact with the application in the same way that an actual user does. Anything less, and it’s hard to have confidence that your application is doing what it is supposed to.

https://www.cypress.io/blog/2022/06/01/cypress-10-release/

This quote hits on something big. So many automated tests fail to interact with apps like real users. They hinge on things like IDs, CSS selectors, and XPaths. They make minimal checks like appearance of certain elements or text. Pages could be completely broken, but automated tests could still pass.

#3. Visual testing

We really want the best of both worlds: the simplicity and sensibility of manual testing with the speed and scalability of automated testing. Historically, this has been a painful tradeoff. Most teams struggle to decide what to automate, what to check manually, and what to skip. I think there is tremendous opportunity in bridging the gap. Modern tools should help us automate human-like sensibilities into our tests, not merely fire events on a page.

That’s why visual testing has become indispensable for front end testing. Web apps are visual encounters. Visuals are the DNA of user experience. Functionality alone is insufficient. Users expect to be wowed. As app creators, we need to make sure those vital visuals are tested. Heaven forbid a button goes missing or our CSS goes sideways. And since we live in a world of continuous development and delivery, we need those visual checkpoints happening continuously at scale. Real human eyes are just too slow.

For example, I could have a login page that has an original version (left) and a changed version (right):

Visual comparison between versions of a login page

Visual testing tools alert you to meaningful changes and make it easy to compare them side-by-side. They catch things you might miss. Plus, they run just like any other automated test suite. Visual testing was tough in the past because tools merely did pixel-to-pixel comparisons, which generated lots of noise for small changes and environmental differences. Now, with a tool like Applitools Visual AI, visual comparisons accurately pinpoint the changes that matter.
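
As a rough sketch of what that looks like in code, here’s a visual checkpoint using the Applitools Eyes SDK for Selenium in Python. (The app name, test name, URL, and API key are placeholders; consult the Applitools docs for a full setup.)

# Rough sketch of a visual checkpoint with the Applitools Eyes SDK for Selenium (Python).
# The app name, test name, URL, and API key are placeholders.
from applitools.selenium import Eyes, Target
from selenium import webdriver

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"

try:
    eyes.open(driver, "My Web App", "Login page should look right")
    driver.get("https://example.com/login")
    eyes.check("Login page", Target.window())  # Visual AI compares this snapshot to the baseline
    eyes.close()
finally:
    eyes.abort()   # cleans up if the test did not close properly
    driver.quit()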

Test automation needs to check visuals these days. Traditional scripts interact with only the basic bones of the page. You could break the layout and remove all styling like this, and there’s a good chance a traditional automated test would still pass:

The same login page from before, but without any CSS styling

With visual testing techniques, you can also rethink how you approach cross-browser and cross-device testing. Instead of rerunning full tests against every browser configuration you need, you can run them once and then simply re-render the visual snapshots they capture against different browsers to verify the visuals. You can do this even for browsers that the test framework doesn’t natively support! For example, using a platform like Applitools Ultrafast Test Cloud, you could run Cypress tests against Electron in CI and then perform visual checks in the Cloud against Safari and Internet Explorer, among other browsers. This style of cross-platform testing is faster, more reliable, and less expensive than traditional ways.

#4. Performance testing

Functionality isn’t the only aspect of quality that matters. Performance can make or break user experience. Most people expect any given page to load in a second or two. Back in 2016, Google discovered that half of all people leave a site if it takes longer than 3 seconds to load. As an industry, we’ve put in so much work to make the front end faster. Modern techniques like server-side rendering, hydration, and bloat reduction all aim to improve response times. It’s important to test the performance of our pages to make sure the user experience is tight.

Thankfully, performance testing is easier than ever before. There’s no excuse for not testing performance when it is so vital to success. There are many great ways to get started.

The simplest approach is right in your browser. You can profile any site with Chrome DevTools. Just right click the page, select “Inspect,” and switch to the Performance tab. Then start the profiler and start interacting with the page. Chrome DevTools will capture full metrics as a visual time series so you can explore exactly what happens as you interact with the page. You can also flip over to the Network tab to look for any API calls that take too long. If you want to learn more about this type of performance analysis, Test Automation University offers a course entitled Tools and Techniques for Performance and Load Testing by Amber Race. Amber shows how to get the most value out of that Performance tab.

Chrome DevTools Performance tab

Another nifty tool that’s also available in Chrome DevTools is Google Lighthouse. Lighthouse is a website auditor. It scores your site on performance, accessibility, progressive web apps, SEO, and more. It will also provide recommendations for how to improve your scores right within its reports. You can also run Lighthouse from the command line or as a Node module instead of from Chrome DevTools.

Google Lighthouse from Chrome DevTools

Using Chrome DevTools manually for one-off checks or exploratory testing is helpful, but regular testing needs automation. One really cool way to automate performance checks is using Playwright, the end-to-end test framework I mentioned earlier. In Playwright, you can create a Chrome DevTools Protocol session and gather all the metrics you want. You can do other cool things with profiling and interception. It’s like a backdoor into the browser. Best of all, you could gather these metrics together with functional testing! One framework can meet the needs of both functional and performance test automation.

John Hill is a trailblazer in this space. He’s currently doing this as part of the Open MCT project. He’s the one who showed me how to automate performance tests with Playwright! If you want to learn more, check out this talk he gave recently on performance testing with Playwright, as well as his js-perf-toolkit project on GitHub.

Below is an example snippet I copied from js-perf-toolkit showing how to gather performance metrics using Playwright:

// Open a Chrome DevTools Protocol session and enable performance metrics
const client = await page.context().newCDPSession(page);
await client.send('Performance.enable');

// Perform a Google search for "playwright"
await page.goto('https://www.google.com/');
await page.click('[aria-label="Search"]');
await page.fill('[aria-label="Search"]', 'playwright');

// Submit the search and wait for the results page to load
await Promise.all([
    page.waitForNavigation(),
    page.press('[aria-label="Search"]', 'Enter')
]);

// Gather and print the metrics collected during the interactions
let perfMetrics = await client.send('Performance.getMetrics');
console.log( perfMetrics.metrics );

#5. Machine learning models

There’s another curve ball when testing websites: what about machine learning models? Whenever you shop at an online store, for example, the bottom of almost every product page has a list of recommendations for similar or complementary products. When I searched Amazon for the latest Pokémon video game, it recommended other games and toys.

Recommendation systems like this might be hard-coded for small stores, but large retailers like Amazon and Walmart use machine learning models to back up their recommendations. Models like this are notoriously difficult to test. How do we know if a recommendation is “good” or “bad”? How do I know if folks who like Pokémon would be enticed to buy a Kirby game or a Zelda game? Lousy recommendations are a lost business opportunity. Other models could have more serious consequences, like introducing harmful biases that affect users.

Machine learning models need separate approaches to testing. It might be tempting to skip data validation because it’s harder than basic functional testing, but that’s a risk not worth taking. To do testing right, separate the functional correctness of the frontend from the validity of data given to it. For example, we could provide mocked data for product recommendations so that tests would have consistent outcomes for verifying visuals. Then, we could test the recommendation system apart from the UI to make sure its answers seem correct. Separating these testing concerns makes each type of test more helpful in figuring out bugs. It also makes machine learning models faster to test, since testers or scripts don’t need to navigate a UI just to exercise them.
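
Here’s a rough sketch of that idea using Playwright in Python: the test intercepts a made-up recommendations endpoint and returns canned data, so the UI check stays deterministic. (The endpoint, payload, URL, and selector are all hypothetical.)

# Sketch: stub a hypothetical recommendations API so UI checks get consistent data.
# The endpoint, payload, URL, and selector are made up for illustration.
import json
from playwright.sync_api import sync_playwright

FAKE_RECOMMENDATIONS = {"items": [{"title": "Kirby and the Forgotten Land"}, {"title": "The Legend of Zelda"}]}

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Intercept the recommendations call and fulfill it with canned data
    page.route(
        "**/api/recommendations*",
        lambda route: route.fulfill(
            status=200,
            content_type="application/json",
            body=json.dumps(FAKE_RECOMMENDATIONS),
        ),
    )

    page.goto("https://store.example.com/products/pokemon")
    page.wait_for_selector(".recommendation-card")  # now rendered from predictable data
    browser.close()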

If you want to learn more about testing machine learning models, Carlos Kidman created an excellent course all about it on Test Automation University named Intro to Testing Machine Learning Models. In his course, Carlos shows how to test models for adversarial attacks, behavioral aspects, and unfair biases.

#6. JavaScript

Now, the next trend I see will probably be controversial to many of you out there: JavaScript isn’t everything. Historically, JavaScript has been the only language for front end web development. As a result, a JavaScript monoculture has developed around the front end ecosystem. There’s nothing inherently wrong with that, but I see that changing in the coming years – and I don’t mean TypeScript.

In recent years, frustrations with single-page applications (SPAs) and client-heavy front ends have spurred a server-side renaissance. In addition to JavaScript frameworks that support SSR, classic server-side projects like Django, Rails, and Laravel are alive and kicking. Folks in those communities do JavaScript when they must, but they love exploring alternatives. For example, HTMX is a framework that provides hypertext directives for many dynamic actions that would otherwise be coded directly in JavaScript. I could use any of those classic web frameworks with HTMX and almost completely avoid JavaScript code. That makes it easier for programmers to make cool things happen on the front end without needing to navigate a foreign ecosystem.

Below is an example snippet of HTML code with HTMX attributes for posting a click and showing the response:

  <script src="https://unpkg.com/htmx.org@1.7.0"></script>
  <!-- have a button POST a click via AJAX -->
  <button hx-post="/clicked" hx-swap="outerHTML">
    Click Me
  </button>
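
The server side of that interaction can stay in whatever backend language you prefer. Here’s a hedged sketch of a matching endpoint in Flask (my choice purely for illustration) that returns the HTML fragment HTMX swaps into the page:

# Sketch: a Flask endpoint that could serve the hx-post="/clicked" request above.
# HTMX swaps the returned HTML fragment into the page; no hand-written JavaScript needed.
from flask import Flask

app = Flask(__name__)

@app.route("/clicked", methods=["POST"])
def clicked():
    return "<button hx-post='/clicked' hx-swap='outerHTML'>Clicked!</button>"

if __name__ == "__main__":
    app.run()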

WebAssembly, or “Wasm,” is also here. WebAssembly is essentially an assembly language for browsers. Code written in higher-level languages can be compiled down into WebAssembly and run in the browser. All major browsers now support WebAssembly to some degree. That means JavaScript no longer holds a monopoly on the browser.

I don’t know if any language will ever dethrone JavaScript in the browser, but I predict that browsers will become multilingual platforms through WebAssembly in the coming years. For example, at PyCon 2022, Anaconda announced PyScript, a framework for running Python code in the browser. Blazor enables C# code to run in-browser. Emscripten compiles C/C++ programs to WebAssembly. Other languages like Ruby and Rust also have WebAssembly support.

Regardless of what happens inside the browser, black-box testing tools and frameworks outside the browser can use any language. Tools like Playwright and Selenium support languages other than JavaScript. That brings many more people to the table. Testers shouldn’t be forced to learn JavaScript just to automate some tests when they already know another language. This is happening today, and I don’t expect it to change.

#7. Autonomous testing

Finally, there is one more trend I want to share, and this one is more about the future than the present: autonomous testing is coming. Ironically, today’s automated testing is still manually-intensive. Someone needs to figure out features, write down the test steps, develop the scripts, and maintain them when they inevitably break. Visual testing makes verification autonomous because assertions don’t need explicit code, but figuring out the right interactions to exercise features is still a hard problem.

I think the next big advancement for testing and automation will be autonomous testing: tools that autonomously look at an app, figure out what tests should be run, and then run those tests automatically. The key to making this work will be machine learning algorithms that can learn the context of the apps they target for testing. Human testers will need to work together with these tools to make them truly effective. For example, one type of tool could be a test recommendation engine that proposes tests for an app, and the human tester could pick the ones to run.

Autonomous testing will greatly simplify testing. It will make developers and testers far more productive. As an industry, we aren’t there yet, but it’s coming, and I think it’s coming soon. I delivered a keynote address on this topic at Future of Testing: Frameworks 2022:

Conclusion

There’s lots of exciting stuff happening in the world of the front end. As I said before, tools and technologies may change, but fundamentals remain the same. Each of these trends is rooted in tried-and-true principles of testing. They remind us that software quality is a multifaceted challenge, and the best strategy is the one that provides the most value for your project.

So, what do you think? Did I hit all the major front end trends? Did I miss anything? Let me know in the comments!

Testing GitHub Pages without Local Jekyll Setup

TL;DR: If you want to test your full GitHub Pages site before publishing but don’t want to set up Ruby and Jekyll on your local machine, then:

  1. Commit your doc changes to a new branch.
  2. Push the new branch to GitHub.
  3. Temporarily change the repository’s GitHub Pages publishing source to the new branch.
  4. Reload the GitHub Pages site, and review the changes.

If you have a GitHub repository, did you know that you can create your own documentation site for it within GitHub? Using GitHub Pages, you can write your docs as a set of Markdown pages and then configure your repository to generate and publish a static web site for those pages. All you need to do is configure a publishing source for your repository. Your doc site will go live at:

https://<user>.github.io/<repository>

If this is new to you, then you can learn all about it from the GitHub docs here: Working with GitHub Pages. I just found out about this cool feature myself!

GitHub Pages are great because they make it easy to develop docs and code together as part of the same workflow without needing extra tools. Docs can be written as Markdown files, Liquid templates, or raw assets like HTML and CSS. The docs will be version-controlled for safety and shared from a single source of truth. GitHub Pages also provides free hosting with a decent domain name for the doc site. Clearly, the theme is simplicity.

Unfortunately, I hit one challenge while trying GitHub Pages for the first time: How could I test the doc site before publishing it? A repository using GitHub Pages must be configured with a specific branch and folder (/ (root) or /docs) as the publishing source. As soon as changes are committed to that source, the updated pages go live. However, I want a way to view the doc site in its fullness before committing any changes so I don’t accidentally publish any mistakes.

One way to test pages is to use a Markdown editor. Many IDEs have Markdown editors with preview panes. Even GitHub’s web editor lets you preview Markdown before committing it. Unfortunately, while editor previews may help catch a few typos, they won’t test the full end result of static site generation and deployment. They may also have trouble with links or templates.

GitHub’s docs recommend testing your site locally using Jekyll. Jekyll is a static site generator written in Ruby. GitHub Pages uses Jekyll behind the scenes to turn doc pages into full doc sites. If you want to keep your doc development simple, you can just edit Markdown files and let GitHub do the dirty work. However, if you want to do more hands-on things with your docs like testing site generation, then you need to set up Ruby and Jekyll on your local machine. Thankfully, you don’t need to know any Ruby programming to use Jekyll.

I followed GitHub’s instructions for setting up a GitHub Pages site with Jekyll. I installed Ruby and Jekyll and then created a Jekyll site in the /docs folder of my repository. I verified that I could edit and run my site locally in a branch. However, the setup process felt rather hefty. I’m not a Ruby programmer, so setting up a Ruby environment with a few gems felt like a lot of extra work just to verify that my doc pages looked okay. Plus, I could foresee some developers getting stuck while trying to set up these doc tools, especially if the repository’s main code isn’t a Ruby project. Even if setting up Jekyll locally would be the “right” way to develop and test docs, I still wanted a lighter, faster alternative.

Thankfully, I found a workaround that didn’t require any tools outside of GitHub: Commit doc changes to a branch, push the branch to GitHub, and then temporarily change the repository’s GitHub Pages source to the branch! I originally configured my repository to publish docs from the /docs folder in the main branch. When I changed the publishing source to another branch, it regenerated and refreshed the GitHub Pages site. When I changed it back to main, the site reverted without any issues. Eureka! This is a quick, easy hack for testing changes to docs before merging them. You get to try the full site in the main environment without needing any additional tools or setup.

Above is a screenshot of the GitHub Pages settings for one of my repositories. You can find these settings under Settings -> Options for any repository, as long as you have the administrative rights. In this screenshot, you can see how I changed the publishing source’s branch from main to docs/test. As soon as I selected this change, GitHub Pages republished the repository’s doc site.

Now, I recognize that this solution is truly a hack. Changing the publishing source affects the “live”, “production” version of the site. It effectively does publish the changes, albeit temporarily. If some random reader happens to visit the site during this type of testing, they may see incorrect or even broken pages. I’d recommend changing the publishing source’s branch only for small projects and for short periods of time. Don’t forget to revert the branch once testing is complete, too. If you are working on a larger, more serious project, then I’d recommend doing full setup for local doc development. Local setup would be safer and would probably make it easier to try more advanced tricks, like templates and themes.

Test-Driving TestProject’s New Python SDK

TestProject recently released its new OpenSDK, and one of its major features is the inclusion of Python testing support! Since I love using Python for test automation, I couldn’t wait to give it a try. This article is my crash-course tutorial on writing Web UI tests in Python with TestProject.

What is TestProject?

TestProject is a free end-to-end test automation platform for Web, mobile, and API tests. It provides a cloud-based way for teams to build, run, share, and analyze tests. Manual testers can visually build tests for desktop or mobile sites using TestProject’s in-browser recorder and test builder. Automation engineers can use TestProject’s SDKs in Java, C#, and now Python for developing coded test automation solutions, and they can use packages already developed by others in the community through TestProject’s add-ons. Whether manual or automated, TestProject displays all test results in a sleek reporting dashboard with helpful analytics. And all of these features are legitimately free – there’s no tiered model or service plan.

Recently, TestProject announced the official release of its new OpenSDK. This new SDK (“software development kit”) provides a simple, unified interface for running tests with TestProject across multiple platforms and languages (now including Python). Things look exciting for the future of TestProject!

What’s My Interest?

It’s no secret that I love testing with Python. When I heard that TestProject added Python support, I knew I had to give it a try. I had never used TestProject before, but I was interested to learn what it could do. Specifically, I wanted to see the value it could bring to reporting automated tests. In the Python space, test automation is slick, but reporting can be rough since frameworks like pytest and unittest are command-line-focused. I also wanted to see if TestProject’s SDK would genuinely help me automate tests or if it would get in my way. Furthermore, I know some great people in the TestProject community, so I figured it was time to jump in myself!

The Python SDK

TestProject’s Python SDK is an open-source project. It was originally developed by Bas Dijkstra, with the support of the TestProject team, and its code is hosted on GitHub. The Python SDK supports Selenium for Web UI automation (which will be the focus for this tutorial) and Appium for Android and iOS UI automation as well!

Since I’d never used TestProject before, let alone this new Python SDK, I wanted to review the code to see how to use it. Thankfully, the README included lots of helpful information and example code. When I looked at the code for TestProject’s BaseDriver, I discovered that it simply extends Selenium WebDriver’s remote WebDriver class. That means all the TestProject WebDrivers use exactly the same API as Python’s Selenium WebDriver implementation. To me, that was a big relief. I know WebDriver’s API very well, so I wouldn’t need to learn anything different in order to use TestProject. It also means that any test automation project can be easily retrofitted to use TestProject’s SDKs – they just need to swap in a new WebDriver object!
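
In other words, the retrofit is mostly an import-and-constructor change. Here’s a minimal sketch of the swap (the project name and token values are placeholders):

# Minimal sketch of retrofitting plain Selenium code to TestProject's WebDriver.
# Before: from selenium import webdriver; driver = webdriver.Chrome()
# After:  the TestProject driver exposes the same Selenium API.
from src.testproject.sdk.drivers import webdriver

driver = webdriver.Chrome(projectname='My Project',    # placeholder project name
                          token='YOUR_DEV_TOKEN')      # your developer token
driver.get('https://www.duckduckgo.com')               # same calls as plain Selenium
driver.quit()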

Setup Steps

TestProject has a straightforward architecture. Users sign up for free TestProject accounts online. Then, they set up their own machines for running tests. Each testing machine must have the TestProject agent installed and linked to a user’s account. When tests run, agents automatically push results to the TestProject cloud. Users can then log into the TestProject portal to view and analyze results. They can invite teammates to share results, and they can also set up multiple test machines with agents. Users can even integrate TestProject with other tools like Jenkins, qTest, and Sauce Labs. The TestProject docs, especially the ecosystem diagram, explain everything in more detail.

When I did my test drive, I created a TestProject account, installed the agent on my Mac, and ran Python Web UI tests from my Mac. I already had the latest version of Python installed (Python 3.8 at the time of writing this article). I also already had my target browsers installed: Google Chrome and Mozilla Firefox.

Below are the precise steps I followed to set up TestProject:

1. Sign up for an account

TestProject accounts are “free forever.” Use this signup link.

The TestProject signup page

2. Download the TestProject Agent

The signup wizard should direct you to download the TestProject agent. If not, you can always download it from the TestProject dashboard. Be warned, the download package is pretty large – the macOS package was 345 MB. Alternatively, you can fetch the agent as a container image from Docker Hub.

The TestProject agent download page

The TestProject agent contains all the stuff needed to run tests and upload results to the TestProject app in the cloud. You don’t need to install WebDriver executables like ChromeDriver or geckodriver. Once the agent is downloaded, install it on the machine and register the agent with your account. For me, registration happened automatically.

This is what the TestProject agent looks like when running on macOS. You can also close this window to let it run in the background.

3. Find your developer token

You’ll need to use your developer token to connect your automated tests to your account in the TestProject app. The signup wizard should reveal it to you, but you can always find it (and also reset it) on the Integrations page.

The Integrations page. Check here for your developer token. No, you can’t use mine.

4. Install the Python SDK

TestProject’s Python SDK is distributed as a package through PyPI. To install it, simply run pip install testproject-python-sdk at the command line. This package will also install dependencies like selenium and requests.

A Classic Web UI Test

After setting up my Mac to use TestProject, it was time to write some Web UI tests in Python! Since I discovered that TestProject’s WebDriver objects could easily retrofit any existing test automation project, I thought, “What if I try to run my PyCon 2020 tutorial project with TestProject?” For PyCon 2020, I gave an online tutorial about building a Web UI test automation project in Python from the ground up using pytest and Selenium WebDriver. The tutorial includes one test case: a DuckDuckGo web search and verification. I thought it would be easy to integrate with TestProject since I already had the code. Thankfully, it was!

Below, I’ll walk through my code. You can check out my example project repository from GitHub at AndyLPK247/testproject-python-sdk-example. My code will be a bit more advanced than the examples shown in the Python SDK’s README or in Bas Dijkstra’s tutorial article because it uses the Page Object Model and pytest fixtures. Make sure to pip install pytest, too.

1. Write the test steps

The test case covers a simple DuckDuckGo web search. Whenever I automate tests, I always write out the steps in plain language. Good tests follow the Arrange-Act-Assert pattern, and I like to use Gherkin’s Given-When-Then phrasing. Here’s the test case:

Scenario: Basic DuckDuckGo Web Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "panda"
    Then the search result query is "panda"
    And the search result links pertain to "panda"
    And the search result title contains "panda"

2. Specify automation inputs

Inputs configure how automated tests run. They can be passed into a test automation solution using configuration files. Testers can then easily change input values in the config file without changing code. Automation should read config files once at the start of testing and inject necessary inputs into every test case.

In Python, I like to use JSON for config files. JSON data is simple and hierarchical, and Python includes a module in its standard library named json that can parse a JSON file into a Python dictionary in one line. I also like to put config files either in the project root directory or in the tests directory.

Here’s the contents of config.json for this test:

{
  "browser": "Chrome",
  "implicit_wait": 10,
  "testproject_projectname": "TestProject Python SDK Example",
  "testproject_token": ""
}
  • browser is the name of the browser to test
  • implicit_wait is the implicit waiting timeout for the WebDriver instance
  • testproject_projectname is the project name to use for this test suite in the TestProject app
  • testproject_token is the developer token

3. Read automation inputs

Automation code should read inputs one time before any tests run and then inject inputs into appropriate tests. pytest fixtures make this easy to do.

I created a fixture named config in the tests/conftest.py module to read config.json:

import json
import pytest


@pytest.fixture(scope='session')
def config():

  # Read the file
  with open('config.json') as config_file:
    config = json.load(config_file)
  
  # Assert values are acceptable
  assert config['browser'] in ['Firefox', 'Chrome', 'Headless Chrome']
  assert isinstance(config['implicit_wait'], int)
  assert config['implicit_wait'] > 0
  assert config['testproject_projectname'] != ""
  assert config['testproject_token'] != ""

  # Return config so it can be used
  return config

Setting the fixture’s scope to “session” means that it will run only one time for the whole test suite. The fixture reads the JSON config file, parses its text into a Python dictionary, and performs basic input validation. Note that Firefox, Chrome, and Headless Chrome will be supported browsers.

4. Set up WebDriver

Each Web UI test should have its own WebDriver instance so that it remains independent from other tests. Once again, pytest fixtures make setup easy.

The browser fixture in tests/conftest.py initializes the appropriate TestProject WebDriver type based on inputs returned by the config fixture:

from selenium.webdriver import ChromeOptions
from src.testproject.sdk.drivers import webdriver


@pytest.fixture
def browser(config):

  # Initialize shared arguments
  kwargs = {
    'projectname': config['testproject_projectname'],
    'token': config['testproject_token']
  }

  # Initialize the TestProject WebDriver instance
  if config['browser'] == 'Firefox':
    b = webdriver.Firefox(**kwargs)
  elif config['browser'] == 'Chrome':
    b = webdriver.Chrome(**kwargs)
  elif config['browser'] == 'Headless Chrome':
    opts = ChromeOptions()
    opts.add_argument('headless')
    b = webdriver.Chrome(chrome_options=opts, **kwargs)
  else:
    raise Exception(f'Browser "{config["browser"]}" is not supported')

  # Make its calls wait for elements to appear
  b.implicitly_wait(config['implicit_wait'])

  # Return the WebDriver instance for the setup
  yield b

  # Quit the WebDriver instance for the cleanup
  b.quit()

This was the only section of code I needed to change to make my PyCon 2020 tutorial project work with TestProject. I had to change the WebDriver invocations to use the TestProject classes. I also had to add arguments for the project name and developer token, which come from the config file. (Note: you may alternatively set the developer token as an environment variable.)

5. Create page objects

Automated tests could make direct calls to the WebDriver interface to interact with the browser, but WebDriver calls are typically low-level and wordy. The Page Object Model is a much better design pattern. Page object classes encapsulate WebDriver gorp so that tests can call simpler, more readable methods.

The DuckDuckGo search test interacts with two pages: the search page and the result page. The pages package contains a module for each page. Here’s pages/search.py:

from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


class DuckDuckGoSearchPage:

  URL = 'https://www.duckduckgo.com'

  SEARCH_INPUT = (By.ID, 'search_form_input_homepage')

  def __init__(self, browser):
    self.browser = browser

  def load(self):
    self.browser.get(self.URL)

  def search(self, phrase):
    search_input = self.browser.find_element(*self.SEARCH_INPUT)
    search_input.send_keys(phrase + Keys.RETURN)

And here’s pages/result.py:

from selenium.webdriver.common.by import By

class DuckDuckGoResultPage:
  
  RESULT_LINKS = (By.CSS_SELECTOR, 'a.result__a')
  SEARCH_INPUT = (By.ID, 'search_form_input')

  def __init__(self, browser):
    self.browser = browser

  def result_link_titles(self):
    links = self.browser.find_elements(*self.RESULT_LINKS)
    titles = [link.text for link in links]
    return titles
  
  def search_input_value(self):
    search_input = self.browser.find_element(*self.SEARCH_INPUT)
    value = search_input.get_attribute('value')
    return value

  def title(self):
    return self.browser.title

Notice that this code uses the “regular” WebDriver interface because TestProject’s WebDriver classes extend the Selenium WebDriver classes.

To make setup easier, I added fixtures to tests/conftest.py to construct each page object, too. They call the browser fixture and inject the WebDriver instance into each page object:

from pages.result import DuckDuckGoResultPage
from pages.search import DuckDuckGoSearchPage


@pytest.fixture
def search_page(browser):
  return DuckDuckGoSearchPage(browser)


@pytest.fixture
def result_page(browser):
  return DuckDuckGoResultPage(browser)

6. Automate the test case

All the automation plumbing is finally in place. Here’s the test case in tests/traditional/test_duckduckgo.py:

import pytest


@pytest.mark.parametrize('phrase', ['panda', 'python', 'polar bear'])
def test_basic_duckduckgo_search(search_page, result_page, phrase):
  
  # Given the DuckDuckGo home page is displayed
  search_page.load()

  # When the user searches for the phrase
  search_page.search(phrase)

  # Then the search result query is the phrase
  assert phrase == result_page.search_input_value()
  
  # And the search result links pertain to the phrase
  titles = result_page.result_link_titles()
  matches = [t for t in titles if phrase.lower() in t.lower()]
  assert len(matches) > 0

  # And the search result title contains the phrase
  assert phrase in result_page.title()

I parametrized the test to run it for three different phrases. The test function does not interact with the WebDriver instance directly. Instead, it interacts exclusively with the page objects.

7. Run the tests

The tests run like any other pytest tests: python -m pytest at the command line. If everything is set up correctly, then the tests will run successfully and upload results to the TestProject app.

In the TestProject dashboard, the Reports tab shows all the tests you have run. It also shows the different test projects you have.

Check out those results!

You can also drill into results for individual test case runs. TestProject automatically records the browser type, timestamps, pass-or-fail results, and every WebDriver call. You can also download PDF reports!

Results for an individual test

What if … BDD?

I was delighted to see how easily I could run a traditional pytest suite using TestProject. Then, I thought to myself, “What if I could use a BDD test framework?” I personally love Behavior-Driven Development, and Python has multiple BDD test frameworks. There is no reason why a BDD test framework wouldn’t work with TestProject!

So, I rewrote the DuckDuckGo search test as a feature file with step definitions using pytest-bdd. The BDD-style test uses the same fixtures and page objects as the traditional test.

Here’s the Gherkin scenario in tests/bdd/features/duckduckgo.feature:

Feature: DuckDuckGo
  As a Web surfer,
  I want to search for websites using plain-language phrases,
  So that I can learn more about the world around me.


  Scenario Outline: Basic DuckDuckGo Web Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "<phrase>"
    Then the search result query is "<phrase>"
    And the search result links pertain to "<phrase>"
    And the search result title contains "<phrase>"

    Examples:
      | phrase     |
      | panda      |
      | python     |
      | polar bear |

And here’s the step definition module in tests/bdd/step_defs/test_duckduckgo_bdd.py:

from pytest_bdd import scenarios, given, when, then, parsers
from selenium.webdriver.common.keys import Keys


scenarios('../features/duckduckgo.feature')


@given('the DuckDuckGo home page is displayed')
def load_duckduckgo(search_page):
  search_page.load()


@when(parsers.parse('the user searches for "{phrase}"'))
@when('the user searches for "<phrase>"')
def search_phrase(search_page, phrase):
  search_page.search(phrase)


@then(parsers.parse('the search result query is "{phrase}"'))
@then('the search result query is "<phrase>"')
def check_search_result_query(result_page, phrase):
  assert phrase == result_page.search_input_value()


@then(parsers.parse('the search result links pertain to "{phrase}"'))
@then('the search result links pertain to "<phrase>"')
def check_search_result_links(result_page, phrase):
  titles = result_page.result_link_titles()
  matches = [t for t in titles if phrase.lower() in t.lower()]
  assert len(matches) > 0


@then(parsers.parse('the search result title contains "{phrase}"'))
@then('the search result title contains "<phrase>"')
def check_search_result_title(result_page, phrase):
  assert phrase in result_page.title()

There’s one more nifty trick I added with pytest-bdd. I added a hook to report each Gherkin step to TestProject with a screenshot! That way, testers can trace each test case step more easily in the TestProject reports. Capturing screenshots also greatly assists test triage when failures arise. This hook is located in tests/conftest.py:

def pytest_bdd_after_step(request, feature, scenario, step, step_func):
  browser = request.getfixturevalue('browser')
  browser.report().step(description=str(step), message=str(step), passed=True, screenshot=True)

Since pytest-bdd is just a pytest plugin, its tests run using the same python -m pytest command. TestProject will group these test results into the same project as before, but it will separate the traditional tests from the BDD tests by name. Here’s what the Gherkin steps with screenshots look like:

Custom Gherkin step with screenshot reported in the TestProject app

This is Awesome!

As its name denotes, TestProject is a great platform for handling project-level concerns for testing work: reporting, integrations, and fast feedback. Adding TestProject to an existing automation solution feels seamless, and its sleek user experience gives me what I need as a tester without getting in my way. The one word that keeps coming to mind is “simple” – TestProject simplifies setup and sharing. Its design takes to heart the renowned Python adage, “Simple is better than complex.” As such, TestProject’s new Python SDK is a welcome addition to the Python testing ecosystem.

I look forward to exploring Python support for mobile testing with Appium soon. I also look forward to seeing all the new Python add-ons the community will develop.

Beyond Unit Tests: End-to-End Web UI Testing

On October 4, 2019, I gave a talk entitled Beyond Unit Tests: End-to-End Web UI Testing at PyGotham 2019. Check it out below! I show how to write a concise-yet-complete test solution for Web UI test cases using Python, pytest, and Selenium WebDriver.

This talk is a condensed version of my Hands-On Web UI Testing tutorials that I delivered at DjangoCon 2019 and PyOhio 2019. If you’d like to take the full tutorial, check out https://github.com/AndyLPK247/djangocon-2019-web-ui-testing. Full instructions are in the README.

Be sure to check out the other PyGotham 2019 talks, too. My favorite was Dungeons & Dragons & Python: Epic Adventures with Prompt-Toolkit and Friends by Mike Pirnat.

WebDriver Element Existence vs. Appearance

Web UI tests with Selenium WebDriver must interact with elements on a Web page. Locating elements can be tricky because expected elements may or may not be on the page. Furthermore, WebDriver might not be able to interact with some elements that exist on the page. That may seem crazy, but let’s understand why.

Web UI interactions universally follow these steps:

  1. Wait for an element to be ready.
  2. Get the element using a locator (ID, CSS selector, XPath, etc.).
  3. Send commands (like clicking or typing) or queries (like getting text) to the element.

Clearly, an element must be “ready” before interactions can happen. As humans, we intuitively define “ready” as, “The page is loaded, and the element is visible.” Automation code is a bit more technical because there are two different ways to define readiness:

  1. Existence: the element exists in the HTML structure of the page.
  2. Appearance: the element exists and it is visible on the page.

Existence can easily be determined by WebDriver’s “find elements” method. The plural “find elements” method will return a list of all elements matching a locator query. If no elements match the locator, then an empty list is returned. The singular “find element” method, on the other hand, will return the first element matching the locator or throw an exception if no elements are found. Thus, the plural version is more convenient to use for checking existence.

Here’s an example existence method in C#:

public bool Exists(IWebDriver driver, By locator) =>
    driver.FindElements(locator).Count > 0;

Checking for existence is the most basic level of readiness. If an element doesn’t exist, interactions with it simply cannot happen. However, existence alone may not be sufficient for interactions. Selenium WebDriver requires elements to not only exist but also to be displayed for interactions like sending clicks and scraping text. Existing elements may be scrolled out of view or even deliberately hidden. WebDriver calls to such elements will yield cryptic exceptions. That’s why waiting for appearance is usually the better readiness condition.

Here’s an example appearance method in C#:

// Assume that the locator targets one element, not multiple
public bool Appears(IWebDriver driver, By locator) =>
    Exists(driver, locator) && driver.FindElement(locator).Displayed;

Existence must be checked first, or else the “Displayed” call will throw an exception whenever existence is false.

Putting it all together, here’s what a button click interaction could look like in C#:

// Assume this is a method in a Page Object class
// Assume that "Driver" is the WebDriver instance
public void ClickThatButton()
{
    var button = By.Id("that-button");
    var wait = new WebDriverWait(Driver, new System.TimeSpan(0, 0, 15));
    wait.Until((driver) => Appears(driver, button));
    Driver.FindElement(button).Click();
}

It’s good practice to make explicit waits before locating and using elements. It’s also good practice to get fresh elements for every interaction call in order to avoid pesky stale element exceptions. Calls like these should be placed in Page Object methods or Screenplay Pattern tasks and questions so that interactions are safe and thorough.

Appearance may not always be the right choice. There may be times when a test should check if an element doesn’t exist or if an element exists but is hidden. Just think before you code.
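
For those inverse conditions, here’s a small sketch in Python (the same idea ports directly to C#); the locator is a placeholder:

# Sketch of the inverse readiness checks (Python Selenium; locator is a placeholder).
from selenium.webdriver.common.by import By

def does_not_exist(driver, locator):
    return len(driver.find_elements(*locator)) == 0

def exists_but_hidden(driver, locator):
    elements = driver.find_elements(*locator)
    return len(elements) > 0 and not elements[0].is_displayed()

# Example usage with a hypothetical locator:
# assert exists_but_hidden(driver, (By.ID, 'hidden-banner'))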

Web Element Locators for Test Automation

Do you want a full course? Check out Web Element Locator Strategies on Test Automation University!

If you do any Web UI test automation (like with Selenium WebDriver), then you probably spend a large chunk of your test development time finding elements on a page, like buttons, inputs, and divs. Finding the right elements, however, can be challenging, especially when they lack unique IDs or class names. This guide will show you how to locate any Web element like a pro.

What are Web elements?

A Web element is an individual entity rendered on a Web page. Everything a user sees on a Web page (and even some things they don’t see) are elements: title headers, okay buttons, input fields, text areas, and more. Elements are specified in HTML by tag name, attributes, and contents. They may also have child elements, such as a table containing rows. CSS may be applied to elements to style them with colors, sizes, position, etc. Programming languages typically access Web elements as nodes in the Document Object Model (DOM).

What are Web element locators?

Web elements and locators are two different things. A Web element locator is an object that finds and returns Web elements on a page using a given query. In short, locators find elements.

Why are locators needed? As human users, we interact with Web pages visually: We look, scroll, click, and type through a browser. However, test automation interacts with Web pages programmatically: it needs a coded way to find and manipulate those same elements. Traditional automation won’t “look” at the page like a human* – it will search the DOM instead.

(*Newer automation technologies enable visual testing, which will be discussed later in this article.)

Selenium WebDriver separates the concerns of element location and interaction. WebDriver calls for these two concerns are frequently written back-to-back:

// WebDriver example: typing a search phrase at www.google.com
// This code is written in C#, but the calls are the same in any language

// First, element location
IWebElement searchField = driver.FindElement(By.Name("q"));

// Second, element interaction
searchField.SendKeys("panda");

WebDriver provides the following locator query types using “By”:

  • ID
  • Name
  • Class name
  • Tag name
  • Link text and partial link text
  • CSS selector
  • XPath

Which one is best? We’ll discuss that below.

Locators may also return multiple elements, or none at all! For example:

// Get the list of results from a Google search
// Using "FindElements" will return a list of all elements found in order
// Using "FindElement" would return the first element found (or throw an exception if no elements were found)
IList<IWebElement> results = driver.FindElements(By.CssSelector("div.r"));
results.Count.Should().BeGreaterThan(0);

Large test frameworks often use design patterns for structuring locators and interactions. The Page Object Model organizes locators and action methods together in classes by page or component. However, I strongly recommend the Screenplay Pattern over page objects because its pieces are more reusable and scalable. Whatever the pattern, locators are needed.

How do I find elements?

Elements can be a hassle to find when writing locators for test automation. To simplify my workflow, I use Google Chrome’s Developer Tools side-by-side with my IDE. Why choose Chrome? It’s the most popular browser, and its DevTools make it easy to inspect pages and test locators.

To inspect any Web page in Chrome, simply right-click anywhere on the page and select “Inspect.” Voila! DevTools will open. For finding Web elements, we want to use the Elements tab.

Visually pinpointing an element is easy. Click the “select” tool in the upper-left corner of the DevTools pane. (It looks like a square with a cursor on it.) The icon should turn blue.

Then, move the cursor to the desired element on the page. You will see each element highlighted in different colors as the mouse moves over. The corresponding HTML source code in the Elements tab will simultaneously be highlighted, too. Nice! Click on the desired element to set the highlighting so that it won’t disappear when you move the cursor elsewhere.

From here, you can check out the element’s tag, classes, attributes, contents, parents, and children.

How do I write good locators?

Finding the element is half the battle. Forming a unique locator query is the other half. If a locator is too broad, then it could return false positives. However, if a locator is too specific, then it could be susceptible to break whenever the DOM changes, and it could also be difficult for others to read. The best philosophy is this: Write the simplest locator query that uniquely identifies the target element(s).

My locator query type order-of-preference is:

  1. ID (if unique)
  2. Name (if unique)
  3. Class name
  4. CSS Selector
  5. XPath without text or indexing
  6. Link text / partial link text
  7. XPath with text and/or indexing

Unique IDs, names, and class names make locators super easy to write: queries are short and don’t need extra anchors. Always encourage developers on the team to use unique identifiers like class names for all elements. However, many elements do not have them, which means locators must fall back on more complicated CSS selectors and XPaths (*shiver*). Whenever this happens, here’s some advice:

  • Use parents as anchors if they have unique identifiers.
    • CSS selector example: "#some-list > li"
    • XPath example: "//ul[@id='some-list']/li"
  • Avoid XPaths that use text or indexing if possible.
    • Bad example: "//div[3]//span[text()='hello']"
    • Those tend to be the most brittle checks.
  • Use the "contains" function when checking for classes in XPath.
    • Example: "//div[contains(@class, 'some-class')]"
    • Elements frequently have more than one class.
    • "contains" will check a substring instead of the full class string.
    • However, be careful because "some-class2" would be matched!

Always test locators, too. Syntax errors and false positives happen frequently. Chrome DevTools makes testing locators easy. Simply hit Ctrl-F on the Elements tab and then paste the locator query into the finder field. DevTools will highlight all the matching elements in order. Spiffy!

Sometimes, when I can’t figure out why a locator isn’t working for a test case, I’ll do the following:

  1. Run the test case with debugging from my IDE.
  2. Set a break point on the locator.
  3. Wait for the test case to stop at the break point.
  4. Enter DevTools on the active Chrome window.
  5. Check the DOM and test the locators on the live page.

What if my tests are flaky?

Web UI testing is roundly criticized for being “flaky” because tests often crash for unexpected reasons. However, much of the unreliability people hit with Web UI testing (and often blame on Selenium WebDriver itself) comes from the fact that all Web interactions inherently pose race conditions. The automation and the browser execute independently, so interactions must be synchronized with page state. Otherwise, WebDriver will throw exceptions for timeouts, stale elements, and elements not found. Many times, these issues happen intermittently, so they can be difficult to trace and resolve.

The best way to avoid race conditions is this: Always wait for an element to exist before interacting with it. This may seem basic, but it’s easy to overlook. Selenium WebDriver packages all offer some sort of WebDriverWait object that will force the driver to wait for a given condition to be true before proceeding. The easiest way to check if an element exists is to check if the list of elements returned by a FindElements (plural) call is non-empty. Adding another call for each interaction may feel burdensome, but design patterns within well-designed frameworks (like the Screenplay Pattern) can make these checks happen automatically.
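
Here’s a sketch of that existence wait using Selenium WebDriver in Python; every language binding has an equivalent WebDriverWait. (The page and locator are placeholders.)

# Sketch: wait for existence before interacting (Python Selenium; page and locator are placeholders).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/")

locator = (By.ID, "that-button")
wait = WebDriverWait(driver, 15)

# Either use the built-in expected condition...
wait.until(EC.presence_of_element_located(locator))
# ...or simply wait until "find elements" returns a non-empty list
wait.until(lambda d: len(d.find_elements(*locator)) > 0)

driver.find_element(*locator).click()
driver.quit()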

Another good practice is this: Always fetch fresh elements. Sometimes, automation will first get some elements and then use a second query to get more elements. Or, in the case of the Page Object Factory (which should never be used because, bluntly, its design is terrible), elements are fetched once when the page object is constructed and referenced thereafter. No matter which way, the longer a Web element object exists, the more prone it is to become stale and cause exceptions. I’ve seen elements turn stale inexplicably even when they still seem to be on the page, too. Always get an element in the moment when it is needed. That way, it can’t go stale!
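
Here’s a rough Python illustration of the difference, reusing the hypothetical "some-list" element from earlier:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://example.com')  # hypothetical page

# Risky: holding onto element objects across page updates invites staleness
items = driver.find_elements(By.CSS_SELECTOR, '#some-list > li')
# ... if the page re-renders here, items[0] may raise StaleElementReferenceException

# Safer: fetch the element fresh at the moment it is needed
driver.find_element(By.CSS_SELECTOR, '#some-list > li').click()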

Want some helpful tips for clicking tricky elements? Check out this article: Clicking Web Elements with Selenium WebDriver.

How can AI help Web UI testing?

Several new AI-based projects/products aim to improve automated Web UI testing over traditional methods:

  • Applitools extends Selenium WebDriver automation with checks for nontrivial visual differences.
  • Testim can automatically heal locators whenever they break, avoiding test flakiness due to front-end changes.
  • Mabl is an assistant that will learn and rerun tests that developers teach it without writing any code.
  • Test.ai runs common user tests like login, searching, and shopping on mobile apps based on what its AI has learned from several other apps.
  • Rainforest QA uses crowdsourcing plus AI to run manual tests specified by a team almost like they are automated.

Test Automation University also offers a free course on using AI for element selection: AI for Element Selection: Erasing the Pain of Fragile Test Scripts.

Many AI testing tools definitely add value, but keep in mind, under the hood, locators are still used somewhere.

EGAD! How Do We Start Writing (Better) Tests?

Some have never automated tests and can’t check themselves before they wreck themselves. Others have 1000s of tests that are flaky, duplicative, and slow. Wa-do-we-do? Well, I gave a talk about this problem at a few Python conferences. The language used for example code was Python, but the principles apply to any language.

Here’s the PyTexas 2019 talk:

And here’s the PyGotham 2018 talk:

And here’s the first time I gave this talk, at PyOhio 2018:

I also gave this talk at PyCaribbean 2019 and PyTennessee 2020 (as an impromptu talk), but it was not recorded.

Cypress.io and the Future of Web Testing

What is Cypress.io?

Cypress.io is an up-and-coming Web test automation framework. It is open source and written entirely in JavaScript. Unlike Selenium WebDriver tests that work outside the browser, Cypress works directly inside the browser. It enables developers to write front-end tests entirely in JavaScript, directly accessing everything within the browser. As a result, tests run much more quickly and reliably than Selenium-based tests.

Some nifty features include:

  • A rich yet simple API for interactions with automatic waiting
  • Mocha, Chai, and Sinon bundled in
  • A sleek dashboard with automatic reloads for Test-Driven Development
  • Easy debugging
  • Network traffic control for validation and mocking
  • Automatic screenshots and videos

Cypress was clearly developed for developers. It enables rapid test development with rapid feedback. The Cypress Test Runner is free, while the Cypress Dashboard Service (for better reporting and CI) will require a paid license.

How Do I Start Using Cypress?

I won’t post examples or instructions for using Cypress here. Please refer to the Cypress documentation for getting started and the tutorial video below. Make sure your machine is set up for JavaScript development.

Will Cypress Replace WebDriver?

TL;DR: No.

Cypress has its niche. It is ideal for small teams whose stacks are exclusively JavaScript and whose developers are responsible for all testing. However, WebDriver still has key advantages.

  1. While Selenium WebDriver supports nearly all major browsers, Cypress currently supports only one browser: Google Chrome. That’s a major limitation. Web apps do not work the same across browsers. Many industries (especially banking and finance) put strict controls on browser types and versions, too.
  2. Cypress is JavaScript only. Its website proudly touts its JavaScript purity like a badge of honor. However, that has downsides. First, all testing must happen inside the bubble of the browser, which makes parallel testing and system interactions much more difficult. Second, testers must essentially be developers, which may not work well for all teams. Third, other programming languages that may offer advantages for testing (like Python) cannot be used. Selenium WebDriver, on the other hand, has multiple language bindings and lets tests live outside the browser.
  3. Within the JavaScript ecosystem, Cypress is not the only all-in-one end-to-end framework. Protractor is more mature, more customizable, and easier to parallelize. It wraps Selenium WebDriver calls for simplification and safety in a similar way to how Cypress’s API is easy to use.
  4. The WebDriver standard is a W3C Recommendation. What does this mean? All major browsers have a vested interest in implementing the standard. Selenium is simply the most popular implementation of the standard. It’s not going away. Cypress, however, is just a cool project backed with commercial intent.

What Does Cypress Mean for the Future?

There are a few big takeaways.

  1. JavaScript is taking over the world. It was the most popular language on GitHub in 2017. JavaScript-only stacks like MEAN and MERN are increasingly popular. The demand for a complete JavaScript-only test framework like Cypress is further evidence.
  2. “Bundled” test frameworks are becoming popular. Historically, a test framework simply provided test structure, basic fixtures, and maybe an assertion library (like JUnit). Then, extra test packages became popular (like Selenium WebDriver, REST APIs, mocking, logging, etc.). Now, new frameworks like Cypress and Protractor aim to provide pre-canned recipes of all these pieces to simplify the setup.
  3. Many new test frameworks will likely be developer-centric. There is a trend in the software industry (especially with Agile) of eliminating traditional tester roles and putting testing work onto developers. The role of the “Software Engineer in Test” – a developer who builds test systems – is also on the rise. Test automation tools and frameworks will need to provide good developer experience (DX) to survive. Cypress is poised to ride that wave.
  4. WebDriver is not perfect. Cypress was developed in large part to address WebDriver’s shortcomings, namely the slowness, difficulty, and unreliability (though unreliability is often a result of poor implementation). Many developers don’t like to use Selenium WebDriver, and so there will be a constant itch to make something better. Cypress isn’t there yet, but it might get there one day.

Clicking Web Elements with Selenium WebDriver

Selenium WebDriver is the most popular open source package for Web UI test automation. It allows tests to interact directly with a web page in a live browser. However, using Selenium WebDriver can be very frustrating because basic interactions often lack robustness, causing intermittent errors for tests.

The Basics

One such vulnerable interaction is clicking elements on a page. Clicking is probably the most common interaction for tests. In C#, a basic click would look like this:

webDriver.FindElement(By.Id("my-id")).Click();

This is the easy and standard way to click elements using Selenium WebDriver. However, it will work only if the targeted element exists and is visible on the page. Otherwise, the WebDriver will throw exceptions. This is when programmers pull their hair out.

Waiting for Existence

To avoid race conditions, interactions should not happen until the target element exists on the page. Even split-second loading times can break automation. The best practice is to use explicit waits before interactions with a reasonable timeout value, like this:

const int timeoutSeconds = 15;
var ts = new TimeSpan(0, 0, timeoutSeconds);
var wait = new WebDriverWait(webDriver, ts);

wait.Until((driver) => driver.FindElements(By.Id("my-id")).Count > 0);
webDriver.FindElement(By.Id("my-id")).Click();

Other Preconditions

Sometimes, Web elements won’t appear without first triggering something else. Even if the element exists on the page, the WebDriver cannot click it until it is made visible. Always look for the proper way to make that element available for clicking. Click on any parent panels or expanders first. Scroll if necessary. Make sure the state of the system permits the element to be clickable.

If the element is scrolled out of view, move to the element before clicking it:

new Actions(webDriver)
    .MoveToElement(webDriver.FindElement(By.Id("my-id")))
    .Click()
    .Perform();

Last Ditch Efforts

Nevertheless, there are times when clickable elements just don’t cooperate and can’t seem to be made visible no matter what. When all else fails, drop directly into JavaScript:

((IJavaScriptExecutor)webDriver).ExecuteScript(
    "arguments[0].click();",
    webDriver.FindElement(By.Id("my-id")));

Do this only when absolutely necessary. It is a best practice to use Selenium WebDriver methods because they make automated interaction behave more like a real user than raw JavaScript calls. Make sure to give good reasons in code comments whenever doing this, too.

Final Advice

This article was written specifically for clicks, but its advice can be applied to other sorts of interactions, too. Just be smart about waits and preconditions.

Note: Code examples on this page are written in C#, but calls are similar for other languages supported by Selenium WebDriver.

Django Admin Translations

Django is a fantastic Python Web framework, and one of its great out-of-the-box features is internationalization (or “i18n” for short). It’s pretty easy to add translations to nearly any string in a Django app, but what about translating admin site pages? Titles, names, and actions all need translations. Those admin pages are automatically generated, so how can their words be translated? This guide shows you how to do it easily.

[Screenshot: the Django admin home page rendered in Chinese]
Want an internationalized admin site like this? Follow this guide to find out how!

i18n Review

If you are new to translations in Django, definitely read the official Translation page first. In a nutshell, all strings that need translation should be passed into a translation function for Python code or a translation block for Django template code. Django management commands then generate language-specific message files, translators add translations for the marked strings to those files, and another command compiles the translations for the app to use. Note that translations require the gettext tools to be installed on your machine. Django also provides advanced logic for handling special cases like date formats and pluralization. It’s really that simple!
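
For example, marking a string in Python code looks like the following sketch (a hypothetical view, using the standard django.utils.translation API; templates use translation blocks instead):

# views.py (hypothetical example)

from django.http import HttpResponse
from django.utils.translation import gettext as _

def welcome(request):
    # Marked string: extracted by makemessages and translated per the active language
    return HttpResponse(_('Welcome to the site!'))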

Initial Setup

A Django project needs some basic configuration before translations can be done. This setup applies to both the main site and the admin.

Enabling Internationalization

Make sure the following settings are given in settings.py:

# settings.py

LANGUAGE_CODE = 'en-us'  # or other appropriate code
USE_I18N = True
USE_L10N = True

They were probably added by default. The Booleans could be set to False to give apps with no internationalization a small performance boost, but we need them to be True so that translations happen.

Changing Locale Paths

By default, message files will be generated into locale directories for each app with strings marked for translation. You may optionally want to set LOCALE_PATHS to change the paths. For example, it may be easiest to put all message files into one directory like this, rather than splitting them out by app:

# settings.py

LOCALE_PATHS = [os.path.join(BASE_DIR, 'locale')]

This will avoid translation duplication between apps. It’s a good strategy for small projects, but be warned that it won’t scale well for larger projects.

Middleware for Automatic Translation

Django provides LocaleMiddleware to automatically translate pages using “context clues” like URL language prefixes, session values, and cookies. (The full pecking order is documented under How Django discovers language preference on the official doc page.) So, if a user accesses the site from China, then they should automatically receive Chinese translations! To use the middleware, add django.middleware.locale.LocaleMiddleware to the MIDDLEWARE setting in settings.py. Make sure it comes after SessionMiddleware and CacheMiddleware and before CommonMiddleware, if those other middlewares are used.

# settings.py

MIDDLEWARE = [
    # ...
    'django.middleware.locale.LocaleMiddleware',
    # ...
]

URL Pattern Language Prefixes

Getting automatic translations from context clues is great, but it’s nevertheless useful to have direct URLs to different page translations. The i18n_patterns function can easily add the language code as a prefix to URL patterns. It can be applied to all URLs for the site or only a subset of URLs (such as the admin site). Optionally, patterns can be set so that URLs without a language prefix will use the default language. The main caveat for using i18n_patterns is that it must be used from the root URLconf and not from included ones. The project’s root urls.py file should look like this:

# urls.py

from django.conf.urls.i18n import i18n_patterns
from django.contrib import admin
from django.urls import path

urlpatterns = i18n_patterns(
    # ...
    path('admin/', admin.site.urls),
    # ...

    # If no prefix is given, use the default language
    prefix_default_language=False
)

Limiting Language Choices

When adding language prefixes to URLs, I strongly recommend limiting the available languages. Django includes ready-made message files for several languages. A site would look bad if, for example, the “/fr/” prefix were available without any French translations. Set the available languages using LANGUAGES in settings.py:

# settings.py

from django.utils.translation import gettext_lazy as _

LANGUAGES = [
    ('en', _('English')),
    ('zh-hans', _('Simplified Chinese')),
]

Note that language codes follow the ISO 639-1 standard.

Doing the Translations

With the configurations above, translations can now be added for the main site! The steps below show how to add translations specifically for the admin. Unless there is a specific need, use lazy translation for all cases.
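
To illustrate why lazy translation matters (a small sketch, not part of the original setup): strings defined at import time, such as model fields and admin titles, need the lazy form so that translation is deferred until the active language for a request is known.

from django.utils.translation import gettext, gettext_lazy

# Evaluated immediately, in whatever language is active at import time
eager_title = gettext('My Index Title')

# A lazy proxy: translated only when the string is actually rendered for a request
lazy_title = gettext_lazy('My Index Title')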

Out-of-the-Box Phrases

Admin site pages are automatically generated using out-of-the-box templates with lots of canned phrases for things like “login,” “save,” and “delete.” How do those get translated? Thankfully, Django already has translations for many major languages. Check out the list under django/contrib/admin/locale for available languages. Django will automatically use translations for these languages in the admin site – there’s nothing else you need to do! If you need a language that’s not available, I strongly encourage you to contribute new translations to the Django project so that everyone can share them. (I suspect that you could also try to manually create message files in your locale directory, but I have not tested that myself.)

Custom Admin Titles

There are a few ways to set custom admin site titles. My preferred method is to set them in the root urls.py file. Wherever they are set, mark them for lazy translation. It’s easy to overlook them!

from django.contrib import admin
from django.utils.translation import gettext_lazy as _

admin.site.index_title = _('My Index Title')
admin.site.site_header = _('My Site Administration')
admin.site.site_title = _('My Site Management')

App Names

App names are another set of phrases that can be easily missed. Add a verbose_name field with a translatable string to every AppConfig class in the project. Do not simply try to translate the string given for the name field: Django will yield a runtime exception!

from django.apps import AppConfig
from django.utils.translation import gettext_lazy as _

class CustomersConfig(AppConfig):
    name = 'customers'
    verbose_name = _('Customers')

Model Names

Models are full of strings that need translations. Here are the things to look for:

  • Give each field a verbose_name value, since the identifiers cannot be translated.
  • Mark help texts, choice descriptions, and validator messages as translatable.
  • Add a Meta class with verbose_name and verbose_name_plural values.
  • Look out for any other strings that might need translations.

Here is an example model:

from django.db import models
from django.core.validators import RegexValidator
from django.utils.translation import gettext_lazy as _

class Customer(models.Model):
    name = models.CharField(
        max_length=100,
        help_text=_('First and last name.'),
        verbose_name=_('name'))
    address = models.CharField(
        max_length=100,
        verbose_name=_('address'))
    phone = models.CharField(
        max_length=10,
        validators=[RegexValidator(
            r'^\d{10}$',
            _('Phone must be exactly 10 digits.'))],
        verbose_name=_('phone number'))

    class Meta:
        verbose_name = _('customer')
        verbose_name_plural = _('customers')

Run the Commands

Once all strings are marked for translation, generate the message files:

# Generate message files for a desired language
python manage.py makemessages -l zh_Hans

# After adding translations to the .po files, compile the messages
python manage.py compilemessages

Warning: The language code and the locale name may be different! For example, take Simplified Chinese: the language code is “zh-hans”, but the locale name is “zh_Hans”. Notice the underscore and the caps. Locale names often include a country code to differentiate language nuances, like American English vs. British English. Refer to django/contrib/admin/locale for a list of examples.

Bonus: Admin Language Buttons

With LocaleMiddleware and i18n_patterns, pages should be automatically translated based on context or URL prefix. However, it would still be great to let the user manually switch the language from the admin interface. Clicking a button is more intuitive than fumbling with URL prefixes.

There are many ways to add language switchers to the admin site. To me, the most sensible way is to add flag icons to the title bar. Behind the scenes, each flag icon would be linked to a language-prefixed URL for the page. That way, whenever a user clicks the flag, then the same page is loaded in the desired language.

[Screenshot: admin title bar user links with flag icons for switching languages]
It’s pretty easy to make something like this, but it needs a few steps.

Language Code Prefix Switcher

Since URL paths use i18n_patterns, their language codes can be trusted to be uniform. A utility function can easily add or substitute the desired language code as a URL path prefix. For example, it would convert “/admin/” and “/en/admin/” into “/zh-hans/admin/” for Simplified Chinese. This function should also validate that the path and language are correct. It can be put anywhere in the project. Below is the code:

from django.conf import settings

def switch_lang_code(path, language):

    # Get the supported language codes
    lang_codes = [c for (c, name) in settings.LANGUAGES]

    # Validate the inputs
    if path == '':
        raise Exception('URL path for language switch is empty')
    elif path[0] != '/':
        raise Exception('URL path for language switch does not start with "/"')
    elif language not in lang_codes:
        raise Exception('%s is not a supported language code' % language)

    # Split the parts of the path
    parts = path.split('/')

    # Add or substitute the new language prefix
    if parts[1] in lang_codes:
        parts[1] = language
    else:
        parts[0] = "/" + language

    # Return the full new path
    return '/'.join(parts)
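
As a quick sanity check (not from the original guide), here is how the function behaves with a couple of hypothetical paths, assuming the LANGUAGES setting shown earlier:

# Assuming LANGUAGES includes 'en' and 'zh-hans':
switch_lang_code('/admin/', 'zh-hans')     # returns '/zh-hans/admin/'
switch_lang_code('/en/admin/', 'zh-hans')  # returns '/zh-hans/admin/'
switch_lang_code('/en/admin/', 'fr')       # raises Exception (unsupported language)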

Prefix Switch Template Filter

Ultimately, this function must be called by Django templates in order to provide links to language-specific pages. Thus, we need a custom template filter. The filter implementation module can be put into any app, but it must be in a sub-package named templatetags – that’s how Django knows to look for custom template tags and filters. The new filters will be easy to write because we already have the switch_lang_code function. (Separating the logic to handle the prefix from the filter itself makes both more testable and reusable.) The code is below:

# [app]/templatetags/i18n_switcher.py

from django import template
from django.template.defaultfilters import stringfilter

# switch_lang_code is the helper function defined earlier; either define it in
# this module or import it from wherever you placed it in your project.

register = template.Library()

@register.filter
@stringfilter
def switch_i18n_prefix(path, language):
    """takes in a string path"""
    return switch_lang_code(path, language)

@register.filter
def switch_i18n(request, language):
    """takes in a request object and gets the path from it"""
    return switch_lang_code(request.get_full_path(), language)

Admin Template Override

Finally, admin templates must be overridden so that we can add new elements to the admin pages. Any admin template can be overridden by creating new templates of the same name under [project-root]/templates/admin. Parent content will be used unless explicitly overridden within the child template file. Since we want to change the title bar, create a new template file for base_site.html with the following contents:

{% extends "admin/base_site.html" %}

{% load static %}
{% load i18n %}

<!-- custom filter module -->
{% load i18n_switcher %}

{% block extrahead %}
    <link rel="shortcut icon" href="{% static 'images/favicon.ico' %}" />
    <link rel="stylesheet" type="text/css" href="{% static 'css/custom_admin.css' %}"/>
{% endblock %}

{% block userlinks %}
    <a href="{{ request|switch_i18n:'en' }}">
        <img class="i18n_flag" src="{% static 'images/flag-usa-16.png' %}"/>
    </a> /
    <a href="{{ request|switch_i18n:'zh-hans' }}">
        <img class="i18n_flag" src="{% static 'images/flag-china-16.png' %}"/>
    </a> /
    {% if user.is_active and user.is_staff %}
        {% url 'django-admindocs-docroot' as docsroot %}
        {% if docsroot %}
            <a href="{{ docsroot }}">{% trans 'Documentation' %}</a> /
        {% endif %}
    {% endif %}
    {% if user.has_usable_password %}
        <a href="{% url 'admin:password_change' %}">{% trans 'Change password' %}</a> /
    {% endif %}
    <a href="{% url 'admin:logout' %}">{% trans 'Log out' %}</a>
{% endblock %}

The static CSS file named css/custom_admin.css should have the following contents:

img.i18n_flag {
    width: 16px;
    vertical-align: text-top;
}

Notice that the whole userlinks block had to be rewritten to fit the flag into place. The static image files for the flags are simply free flag emojis. They are hyperlinked to the appropriate language URL for the page: the switch_i18n filter is applied to the active request object to get the desired language-prefixed path. (Note: In my example code, I removed the “View Site” link because my site didn’t need it.)

Completed View

The admin site should now look like this:

In my project, I chose to put the language prefix switcher code in its own application named i18n_switcher. The files in my project needed for the admin language buttons are organized like this (without showing other files in the project):

[root]
|- i18n_switcher
|  |- templatetags
|  |  |- __init__.py
|  |  `- i18n_switcher.py
|  |- __init__.py
|  `- apps.py
|- locale
|  `- zh_Hans
|     `- LC_MESSAGES
|        |- django.mo
|        `- django.po
|- static
|  |- css
|  |  `- custom_admin.css
|  `- images
|     |- flag-china-16.png
|     `- flag-usa-16.png
`- templates
   `- admin
      `- base_site.html

Since I created a new app for the new code, I also had to add the app name to INSTALLED_APPS in settings.py:

# settings.py

INSTALLED_APPS = [
    # ...
    'i18n_switcher.apps.I18NSwitcherConfig',
    # ...
]

As mentioned before, flag icons in the title bar are simply one way to provide easy links to translated pages. It works well when there are only a few language choices available. A different view would be better for more languages, like a dropdown, a second line in the title bar, or even a page footer.

With a bit more polishing, this would also make a nifty little Django app package that others could use for their projects. Maybe I’ll get to that someday.