Playwright is an awesome new web testing framework, and it can help you take a modern approach to web development. In this article, let’s learn how.
Asking tough questions about testing
Let me ask you a series of questions:
Question 1: Do you like it when bugs happen in your code? Most likely not. Bugs are problems. They shouldn’t happen in the first place, and they require effort to fix. They’re a big hassle.
Question 2: Would you rather let those bugs ship to production? Absolutely not! We want to fix bugs before users ever see them. Serious bugs could cause a lot of damage to systems, businesses, and even reputations. Whenever bugs do slip into production, we want to find them and fix them ASAP.
Question 3: Do you like to create tests to catch bugs before that happens? Hmmm… this question is tougher to answer. Most folks understand that good tests can provide valuable feedback on software quality, but not everyone likes to put in the work for testing.
Why the distaste for testing?
Why doesn’t everyone like to do testing? Testing is HARD! Here are common complaints I hear:
Tests are slow – they take too long to run!
Tests are brittle – they break whenever the app changes!
Tests are flaky – they crash all the time!
Tests don’t make sense – they are complicated and unreadable!
Tests don’t make money – we could be building new features instead!
Tests require changing context – they interrupt my development workflow!
These are all valid reasons. To mitigate these pain points, software teams have historically created testing strategies around the Testing Pyramid, which separates tests by layer from top to bottom:
UI tests
API tests
Component tests
Unit tests
Tests at the bottom were considered “better” because they were closer to the code, easier to automate, and faster to execute. They were also considered to be less susceptible to flakiness and, therefore, easier to maintain. Tests at the top were considered just the opposite: big, slow, and expensive. The pyramid shape implied that teams should spend more time on tests at the base of the pyramid and less time on tests at the top.
End-to-end tests can be very valuable. Unfortunately, the Testing Pyramid labeled them as “difficult” and “bad” primarily due to poor practices and tool shortcomings. It also led teams to form testing strategies that emphasized categories of tests over the feedback they delivered.
Rethinking modern web testing goals
Testing doesn’t need to be hard, and it doesn’t need to suffer from the problems of the past. We should take a fresh approach to testing modern web apps.
Here are three major goals for modern web testing:
Focus on building fast feedback loops rather than certain types of tests.
Make test development as fast and painless as possible.
Choose test tooling that naturally complements dev workflows.
These goals put emphasis on results and efficiency. Testing should just be a natural part of development without any friction.
Introducing Playwright
Playwright is a modern web testing framework that can help us meet these goals.
It is an open source project from Microsoft.
It manipulates the browser via (superfast) debug protocols
It works with Chromium/Chrome/Edge, Firefox, and WebKit
It provides automatic waiting, test generation, UI mode, and more
It can test UIs and APIs together
It provides bindings for JavaScript/TypeScript, Python, Java, and C#
Playwright takes a unique approach to browser automation. First of all, it uses browser projects rather than full browser apps. For example, this means you would test Chromium instead of Google Chrome. Browser projects are smaller and don’t use as many resources as full browsers. Playwright also manages the browser projects for you, so you don’t need to install extra stuff.
Second, it uses browsers very efficiently:
Instead of launching a full, new browser instance for each test, Playwright launches one browser instance for the entire suite of tests.
It then creates a unique browser context from that instance for each test. A browser context is essentially like an incognito session: it has its own session storage and tabs that are not shared with any other context. Browser contexts are very fast to create and destroy.
Then, each browser context can have one or more pages. All Playwright interactions happen through a page, like clicks and scrapes. Most tests only ever need one page.
Playwright handles all this setup automatically for you.
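To make that hierarchy concrete, here is a minimal sketch using Playwright's Python sync API. When you use Playwright's test runners, you never write this boilerplate yourself, so treat it purely as an illustration; the URL is just a placeholder.

from playwright.sync_api import sync_playwright

with sync_playwright() as playwright:
    # One browser instance can serve an entire suite of tests
    browser = playwright.chromium.launch()
    # Each test gets its own isolated context, like an incognito session
    context = browser.new_context()
    # Interactions happen through a page within that context
    page = context.new_page()
    page.goto("https://example.com")
    print(page.title())
    # Contexts are cheap to create and destroy
    context.close()
    browser.close()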
Comparing Playwright to other tools
Playwright is not the only browser automation tool out there. The other two most popular tools are Selenium and Cypress. All three are good tools, and each one has its advantages. Playwright’s main advantages are an excellent developer experience, the fastest execution time, multiple language bindings, and several quality-of-life features.
Learning Playwright
If you want to learn how to automate your web tests with Playwright, take my tutorial, Awesome Web Testing with Playwright. All instructions and example code for the tutorial are located on GitHub. This tutorial is designed to be self-guided, so give it a try!
Test Automation University also has a Playwright learning path with introductory and advanced courses.
Playwright is an awesome new framework for modern web testing. Give it a try, and let me know what you automate!
In the featured image for this article, you see a beautiful front end. It’s probably not the kind of “front end” you expected. It’s the front end of a 1974 Volkswagen Karmann Ghia. The Karmann Ghia was known as the “poor man’s Porsche.” It’s a very special car. It was actually a collaboration project between Wilhelm Karmann, a German automobile manufacturer, and Carrozzeria Ghia, an Italian automobile designer. Ghia designed the body as a work of art, and Karmann put it on the tried-and-true platform of the classic Volkswagen Beetle. When the Volkswagen executives saw it, they couldn’t say no to mass production.
The Karmann Ghia is a perfect symbol of the state of web development today. We strive to make beautiful front ends with reliable platforms supporting them on the back end. Collaboration from both sides is key to success, but what people remember most is the experience they have with your apps. My mom drove a Karmann Ghia like this when she was a teenager, and to this day she still talks about the good times she had with it.
Good quality, design, and experience are indispensable aspects of front ends – whether for classic cars or for the Web. In this article, I’ll share seven major trends I see in front end web testing. While there’s a lot of cool new things happening, I want y’all to keep in mind one main thing: tools and technologies may change, but the fundamentals of testing remain the same. Testing is interaction plus verification. Tests reveal the truth about our code and our features. We do testing as part of development to gather fast feedback for fixes and improvements. All the trends I will share today are rooted in these principles. With good testing, you can make sure your apps will look visually perfect, just like… you know.
#1. End-to-end testing
Here’s our first trend: End-to-end testing has become a three-way battle. For clarity, when I say “end-to-end” testing, I mean black-box test automation that interacts with a live web app in an active browser.
Historically, Selenium has been the most popular tool for browser automation. The project has been around for over a decade, and the WebDriver protocol is a W3C standard. It is open source, open standards, and open governance. Selenium WebDriver has bindings for C#, Java, JavaScript, Ruby, PHP, and Python. The project also includes Selenium IDE, a record-and-playback tool, and Selenium Grid, a scalable cluster for cross-browser testing. Selenium is alive and well, having just released version 4.
Over the years, though, Selenium has received a lot of criticism. Selenium WebDriver is a low-level protocol. It does not handle waiting automatically, leading many folks to unknowingly write flaky scripts. It requires clunky setup since WebDriver executables must be separately installed. Many developers dislike Selenium because coding with it requires a separate workflow or state of mind from the main apps they are developing.
Cypress was the answer to Selenium’s shortcomings. It aimed to be a modern framework with excellent developer experience, and in a few short years, it quickly became the darling test tool for front end developers. Cypress tests run in the browser side-by-side with the app under test. The syntax is super concise. There’s automatic waiting, meaning less flakiness. There’s visual tracing. There’s API calls. It’s nice. And it took a big chomp out of Selenium’s market share.
Cypress isn’t perfect, though. Its browser support is limited to Chromium-based browsers and Firefox. Cypress is also JavaScript-only, which excludes several communities. While Cypress is open source, it does not follow open standards or open governance like Selenium. And, sadly, Cypress’ performance is slow – equivalent tests run more slowly in Cypress than in Selenium.
Enter Playwright, the new open source test framework from Microsoft. Playwright is the spiritual successor to Puppeteer. It boasts the wide browser and language compatibility of Selenium with the refined developer experience of Cypress. It even has a code generator to help write tests. Plus, Playwright is fast – multiple times faster than Selenium or Cypress.
Playwright is still a newcomer, and it doesn’t yet have the footprint of the other tools. Some folks might be wary that it uses browser projects instead of stock browsers. Nevertheless, it’s growing fast, and it could be a major contender for the #1 title. In Applitools’ recent Let The Code Speak code battles, Playwright handily beat out both Selenium and Cypress.
A side-by-side comparison of Selenium, Cypress, and Playwright
Selenium, Cypress, and Playwright are definitely now the “big three” browser automation tools for testing. A respectable fourth mention would be WebdriverIO. WebdriverIO is a JavaScript-based tool that can use WebDriver or debug protocols. It has a large user base, but it is JavaScript-only and not as popular as Cypress. There are other tools, too. Puppeteer is still very popular but used more for web crawling than testing. Protractor, once developed by the Angular team, is now deprecated.
All these are good tools to choose (except Protractor). They can handle any kind of web app that you’re building. If you want to learn more about them, Test Automation University has courses for each.
#2. Component testing
End-to-end testing isn’t the only type of testing a team can or should do. Component testing is on the rise because components are on the rise! Many teams now build shareable component libraries to enforce consistency in their web design and to avoid code duplication. Each component is like a “unit of user interface.” Not only do they make development easier, they also make testing easier.
Component testing is distinct from unit testing. A unit test interacts directly with code. It calls a function or method and verifies its outcomes. Since components are inherently visual, they need to be rendered in the browser for proper testing. They might have multiple behaviors, or they may even trigger API calls. However, they can be tested in isolation from other components, so individually they don’t need full end-to-end tests. That’s why, from a front end perspective, component testing is the new integration testing.
Storybook is a very popular tool for building and testing components in isolation. In Storybook, each component has a set of stories that denote how that component looks and behaves. While developing components, you can render them in the Storybook viewer. You can then manually test the component by interacting with them or changing their settings. Applitools also provides an SDK for automatically running visual tests against a Storybook library.
The Storybook viewer
Cypress is also entering the component testing game. On June 1, 2022, Cypress released version 10, which included component testing support. This is a huge step forward. Before, folks would need to cobble together their own component test framework, usually as an extension of a unit test project or an end-to-end test project. Many solutions just ran automated component tests as Node.js processes without rendering them in a real browser. Now, Cypress makes it natural to exercise component behaviors individually yet visually.
I love this quote from Cypress about their approach to component testing:
When testing anything for the web, we believe that tests should view and interact with the application in the same way that an actual user does. Anything less, and it’s hard to have confidence that your application is doing what it is supposed to.
This quote hits on something big. So many automated tests fail to interact with apps like real users. They hinge on things like IDs, CSS selectors, and XPaths. They make minimal checks like appearance of certain elements or text. Pages could be completely broken, but automated tests could still pass.
#3. Visual testing
We really want the best of both worlds: the simplicity and sensibility of manual testing with the speed and scalability of automated testing. Historically, this has been a painful tradeoff. Most teams struggle to decide what to automate, what to check manually, and what to skip. I think there is tremendous opportunity in bridging the gap. Modern tools should help us automate human-like sensibilities into our tests, not merely fire events on a page.
That’s why visual testing has become indispensable for front end testing. Web apps are visual encounters. Visuals are the DNA of user experience. Functionality alone is insufficient. Users expect to be wowed. As app creators, we need to make sure those vital visuals are tested. Heaven forbid a button goes missing or our CSS goes sideways. And since we live in a world of continuous development and delivery, we need those visual checkpoints happening continuously at scale. Real human eyes are just too slow.
For example, I could have a login page that has an original version (left) and a changed version (right):
Visual comparison between versions of a login page
Visual testing tools alert you to meaningful changes and make it easy to compare them side-by-side. They catch things you might miss. Plus, they run just like any other automated test suite. Visual testing was tough in the past because tools merely did pixel-to-pixel comparisons, which generated lots of noise for small changes and environmental differences. Now, with a tool like Applitools Visual AI, visual comparisons accurately pinpoint the changes that matter.
Test automation needs to check visuals these days. Traditional scripts interact with only the basic bones of the page. You could break the layout and remove all styling like this, and there’s a good chance a traditional automated test would still pass:
The same login page from before, but without any CSS styling
With visual testing techniques, you can also rethink how you approach cross-browser and cross-device testing. Instead of rerunning full tests against every browser configuration you need, you can run them once and then simply re-render the visual snapshots they capture against different browsers to verify the visuals. You can do this even for browsers that the test framework doesn’t natively support! For example, using a platform like Applitools Ultrafast Test Cloud, you could run Cypress tests against Electron in CI and then perform visual checks in the Cloud against Safari and Internet Explorer, among other browsers. This style of cross-platform testing is faster, more reliable, and less expensive than traditional ways.
#4. Performance testing
Functionality isn’t the only aspect of quality that matters. Performance can make or break user experience. Most people expect any given page to load in a second or two. Back in 2016, Google discovered that half of all people leave a site if it takes longer than 3 seconds to load. As an industry, we’ve put in so much work to make the front end faster. Modern techniques like server-side rendering, hydration, and bloat reduction all aim to improve response times. It’s important to test the performance of our pages to make sure the user experience is tight.
Thankfully, performance testing is easier than ever before. There’s no excuse for not testing performance when it is so vital to success. There are many great ways to get started.
The simplest approach is right in your browser. You can profile any site with Chrome DevTools. Just right click the page, select “Inspect,” and switch to the Performance tab. Then start the profiler and start interacting with the page. Chrome DevTools will capture full metrics as a visual time series so you can explore exactly what happens as you interact with the page. You can also flip over to the Network tab to look for any API calls that take too long. If you want to learn more about this type of performance analysis, Test Automation University offers a course entitled Tools and Techniques for Performance and Load Testing by Amber Race. Amber shows how to get the most value out of that Performance tab.
Chrome DevTools Performance tab
Another nifty tool that’s also available in Chrome DevTools is Google Lighthouse. Lighthouse is a website auditor. It scores how well your site performs for performance, accessibility, progressive web apps, SEO, and more. It will also provide recommendations for how to improve your scores right within its reports. You can run Lighthouse from the command line or as a Node module instead of from Chrome DevTools as well.
Google Lighthouse from Chrome DevTools
Using Chrome DevTools manually for one-off checks or exploratory testing is helpful, but regular testing needs automation. One really cool way to automate performance checks is using Playwright, the end-to-end test framework I mentioned earlier. In Playwright, you can create a Chrome DevTools Protocol session and gather all the metrics you want. You can do other cool things with profiling and interception. It’s like a backdoor into the browser. Best of all, you could gather these metrics together with functional testing! One framework can meet the needs of both functional and performance test automation.
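Here is a rough sketch of that idea in Python. It assumes a Chromium-based browser, since CDP sessions are a Chromium feature, and the URL and metric handling are just placeholders.

from playwright.sync_api import sync_playwright

with sync_playwright() as playwright:
    browser = playwright.chromium.launch()
    context = browser.new_context()
    page = context.new_page()
    # Open a CDP session for the page and enable performance tracking
    client = context.new_cdp_session(page)
    client.send("Performance.enable")
    # Run the normal functional steps of the test
    page.goto("https://example.com")
    # Gather the metrics and report or assert on whatever matters to you
    metrics = client.send("Performance.getMetrics")["metrics"]
    for metric in metrics:
        print(f"{metric['name']}: {metric['value']}")
    browser.close()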
#5. Machine learning
There’s another curve ball when testing websites: what about machine learning models? Whenever you shop at an online store, the bottom of almost every product page has a list of recommendations for similar or complementary products. When I searched Amazon for the latest Pokémon video game, for example, Amazon recommended other games and toys.
Recommendation systems like this might be hard-coded for small stores, but large retailers like Amazon and Walmart use machine learning models to back up their recommendations. Models like this are notoriously difficult to test. How do we know if a recommendation is “good” or “bad”? How do I know if folks who like Pokémon would be enticed to buy a Kirby game or a Zelda game? Lousy recommendations are a lost business opportunity. Other models could have more serious consequences, like introducing harmful biases that affect users.
Machine learning models need separate approaches to testing. It might be tempting to skip data validation because it’s harder than basic functional testing, but that’s a risk not worth taking. To do testing right, separate the functional correctness of the frontend from the validity of data given to it. For example, we could provide mocked data for product recommendations so that tests would have consistent outcomes for verifying visuals. Then, we could test the recommendation system apart from the UI to make sure its answers seem correct. Separating these testing concerns makes each type of test more helpful in figuring out bugs. It also makes machine learning models faster to test, since testers or scripts don’t need to navigate a UI just to exercise them.
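As a sketch of that first idea, here is how a Playwright test in Python could stub a recommendations endpoint with canned data; the /api/recommendations route and the JSON shape are made up for illustration.

import json
from playwright.sync_api import sync_playwright

FAKE_RECOMMENDATIONS = [
    {"title": "Kirby game", "price": 59.99},
    {"title": "Zelda game", "price": 59.99},
]

def handle_recommendations(route):
    # Fulfill the request with canned data instead of calling the real model
    route.fulfill(
        status=200,
        content_type="application/json",
        body=json.dumps(FAKE_RECOMMENDATIONS))

with sync_playwright() as playwright:
    browser = playwright.chromium.launch()
    page = browser.new_page()
    page.route("**/api/recommendations*", handle_recommendations)
    page.goto("https://example.com/products/pokemon")
    # ... verify that the recommendation widget renders the canned items ...
    browser.close()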
If you want to learn more about testing machine learning models, Carlos Kidman created an excellent course all about it on Test Automation University named Intro to Testing Machine Learning Models. In his course, Carlos shows how to test models for adversarial attacks, behavioral aspects, and unfair biases.
#6. JavaScript
Now, the next trend I see will probably be controversial to many of you out there: JavaScript isn’t everything. Historically, JavaScript has been the only language for front end web development. As a result, a JavaScript monoculture has developed around the front end ecosystem. There’s nothing inherently wrong with that, but I see that changing in the coming years – and I don’t mean TypeScript.
In recent years, frustrations with single-page applications (SPAs) and client-heavy front ends have spurred a server-side renaissance. In addition to JavaScript frameworks that support SSR, classic server-side projects like Django, Rails, and Laravel are alive and kicking. Folks in those communities do JavaScript when they must, but they love exploring alternatives. For example, HTMX is a framework that provides hypertext directives for many dynamic actions that would otherwise be coded directly in JavaScript. I could use any of those classic web frameworks with HTMX and almost completely avoid JavaScript code. That makes it easier for programmers to make cool things happen on the front end without needing to navigate a foreign ecosystem.
Below is an example snippet of HTML code with HTMX attributes for posting a click and showing the response:
<script src="https://unpkg.com/htmx.org@1.7.0"></script>
<!-- have a button POST a click via AJAX -->
<button hx-post="/clicked" hx-swap="outerHTML">
Click Me
</button>
WebAssembly, or “Wasm,” is also here. WebAssembly is essentially an assembly language for browsers. Code written in higher-level languages can be compiled down to WebAssembly and run in the browser. All major browsers now support WebAssembly to some degree. That means JavaScript no longer holds a monopoly on the browser.
I don’t know if any language will ever dethrone JavaScript in the browser, but I predict that browsers will become multilingual platforms through WebAssembly in the coming years. For example, at PyCon 2022, Anaconda announced PyScript, a framework for running Python code in the browser. Blazor enables C# code to run in-browser. Emscripten compiles C/C++ programs to WebAssembly. Other languages like Ruby and Rust also have WebAssembly support.
Regardless of what happens inside the browser, black-box testing tools and frameworks outside the browser can use any language. Tools like Playwright and Selenium support languages other than JavaScript. That brings many more people to the table. Testers shouldn’t be forced to learn JavaScript just to automate some tests when they already know another language. This is happening today, and I don’t expect it to change.
#7. Autonomous testing
Finally, there is one more trend I want to share, and this one is more about the future than the present: autonomous testing is coming. Ironically, today’s automated testing is still manually-intensive. Someone needs to figure out features, write down the test steps, develop the scripts, and maintain them when they inevitably break. Visual testing makes verification autonomous because assertions don’t need explicit code, but figuring out the right interactions to exercise features is still a hard problem.
I think the next big advancement for testing and automation will be autonomous testing: tools that autonomously look at an app, figure out what tests should be run, and then run those tests automatically. The key to making this work will be machine learning algorithms that can learn the context of the apps they target for testing. Human testers will need to work together with these tools to make them truly effective. For example, one type of tool could be a test recommendation engine that proposes tests for an app, and the human tester could pick the ones to run.
Autonomous testing will greatly simplify testing. It will make developers and testers far more productive. As an industry, we aren’t there yet, but it’s coming, and I think it’s coming soon. I delivered a keynote address on this topic at Future of Testing: Frameworks 2022.
Conclusion
There’s lots of exciting stuff happening in the world of the front end. As I said before, tools and technologies may change, but fundamentals remain the same. Each of these trends is rooted in tried-and-true principles of testing. They remind us that software quality is a multifaceted challenge, and the best strategy is the one that provides the most value for your project.
So, what do you think? Did I hit all the major front end trends? Did I miss anything? Let me know in the comments!
TL;DR: If you want to test your full GitHub Pages site before publishing but don’t want to set up Ruby and Jekyll on your local machine, then:
Commit your doc changes to a new branch.
Push the new branch to GitHub.
Temporarily change the repository’s GitHub Pages publishing source to the new branch.
Reload the GitHub Pages site, and review the changes.
If you have a GitHub repository, did you know that you can create your own documentation site for it within GitHub? Using GitHub Pages, you can write your docs as a set of Markdown pages and then configure your repository to generate and publish a static web site for those pages. All you need to do is configure a publishing source for your repository. Your doc site will go live at:
https://<user>.github.io/<repository>
If this is new to you, then you can learn all about this cool feature from the GitHub docs here: Working with GitHub Pages. I just found out about it myself!
GitHub Pages are great because they make it easy to develop docs and code together as part of the same workflow without needing extra tools. Docs can be written as Markdown files, Liquid templates, or raw assets like HTML and CSS. The docs will be version-controlled for safety and shared from a single source of truth. GitHub Pages also provides free hosting with a decent domain name for the doc site. Clearly, the theme is simplicity.
Unfortunately, I hit one challenge while trying GitHub Pages for the first time: How could I test the doc site before publishing it? A repository using GitHub Pages must be configured with a specific branch and folder (/ (root) or /docs) as the publishing source. As soon as changes are committed to that source, the updated pages go live. However, I want a way to view the doc site in its fullness before committing any changes so I don’t accidentally publish any mistakes.
One way to test pages is to use a Markdown editor. Many IDEs have Markdown editors with preview panes. Even GitHub’s web editor lets you preview Markdown before committing it. Unfortunately, while editor previews may help catch a few typos, they won’t test the full end result of static site generation and deployment. They may also have trouble with links or templates.
GitHub’s docs recommend testing your site locally using Jekyll. Jekyll is a static site generator written in Ruby. GitHub Pages uses Jekyll behind the scenes to turn doc pages into full doc sites. If you want to keep your doc development simple, you can just edit Markdown files and let GitHub do the dirty work. However, if you want to do more hands-on things with your docs like testing site generation, then you need to set up Ruby and Jekyll on your local machine. Thankfully, you don’t need to know any Ruby programming to use Jekyll.
I followed GitHub’s instructions for setting up a GitHub Pages site with Jekyll. I installed Ruby and Jekyll and then created a Jekyll site in the /docs folder of my repository. I verified that I could edit and run my site locally in a branch. However, the setup process felt rather hefty. I’m not a Ruby programmer, so setting up a Ruby environment with a few gems felt like a lot of extra work just to verify that my doc pages looked okay. Plus, I could foresee some developers getting stuck while trying to set up these doc tools, especially if the repository’s main code isn’t a Ruby project. Even if setting up Jekyll locally would be the “right” way to develop and test docs, I still wanted a lighter, faster alternative.
Thankfully, I found a workaround that didn’t require any tools outside of GitHub: Commit doc changes to a branch, push the branch to GitHub, and then temporarily change the repository’s GitHub Pages source to the branch! I originally configured my repository to publish docs from the /docs folder in the main branch. When I changed the publishing source to another branch, it regenerated and refreshed the GitHub Pages site. When I changed it back to main, the site reverted without any issues. Eureka! This is a quick, easy hack for testing changes to docs before merging them. You get to try the full site in the main environment without needing any additional tools or setup.
You can find the GitHub Pages settings under Settings -> Options for any repository, as long as you have administrative rights. For one of my repositories, I changed the publishing source’s branch from main to docs/test. As soon as I selected this change, GitHub Pages republished the repository’s doc site.
Now, I recognize that this solution is truly a hack. Changing the publishing source affects the “live”, “production” version of the site. It effectively does publish the changes, albeit temporarily. If some random reader happens to visit the site during this type of testing, they may see incorrect or even broken pages. I’d recommend changing the publishing source’s branch only for small projects and for short periods of time. Don’t forget to revert the branch once testing is complete, too. If you are working on a larger, more serious project, then I’d recommend doing full setup for local doc development. Local setup would be safer and would probably make it easier to try more advanced tricks, like templates and themes.
TestProject recently released its new OpenSDK, and one of its major features is the inclusion of Python testing support! Since I love using Python for test automation, I couldn’t wait to give it a try. This article is my crash-course tutorial on writing Web UI tests in Python with TestProject.
What is TestProject?
TestProject is a free end-to-end test automation platform for Web, mobile, and API tests. It provides a cloud-based way for teams to build, run, share, and analyze tests. Manual testers can visually build tests for desktop or mobile sites using TestProject’s in-browser recorder and test builder. Automation engineers can use TestProject’s SDKs in Java, C#, and now Python for developing coded test automation solutions, and they can use packages already developed by others in the community through TestProject’s add-ons. Whether manual or automated, TestProject displays all test results in a sleek reporting dashboard with helpful analytics. And all of these features are legitimately free – there’s no tiered model or service plan.
Recently, TestProject announced the official release of its new OpenSDK. This new SDK (“software development kit”) provides a simple, unified interface for running tests with TestProject across multiple platforms and languages (now including Python). Things look exciting for the future of TestProject!
What’s My Interest?
It’s no secret that I love testing with Python. When I heard that TestProject added Python support, I knew I had to give it a try. I never used TestProject before, but I was interested to learn what it could do. Specifically, I wanted to see the value it could bring to reporting automated tests. In the Python space, test automation is slick, but reporting can be rough since frameworks like pytest and unittest are command-line-focused. I also wanted to see if TestProject’s SDK would genuinely help me automate tests or if it would get in my way. Furthermore, I know some great people in the TestProject community, so I figured it was time to jump in myself!
The Python SDK
TestProject’s Python SDK is an open-source project. It was originally developed by Bas Dijkstra, with the support of the TestProject team, and its code is hosted on GitHub. The Python SDK supports Selenium for Web UI automation (which will be the focus for this tutorial) and Appium for Android and iOS UI automation as well!
Since I’d never used TestProject before, let alone this new Python SDK, I wanted to review the code to see how to use it. Thankfully, the README included lots of helpful information and example code. When I looked at the code for TestProject’s BaseDriver, I discovered that it simply extends Selenium WebDriver’s RemoteDriver class. That means all the TestProject WebDrivers use exactly the same API as Python’s Selenium WebDriver implementation. To me, that was a big relief. I know WebDriver’s API very well, so I wouldn’t need to learn anything different in order to use TestProject. It also means that any test automation project can be easily retrofitted to use TestProject’s SDKs – they just need to swap in a new WebDriver object!
Setup Steps
TestProject has a straightforward architecture. Users sign up for free TestProject accounts online. Then, they set up their own machines for running tests. Each testing machine must have the TestProject agent installed and linked to a user’s account. When tests run, agents automatically push results to the TestProject cloud. Users can then log into the TestProject portal to view and analyze results. They can invite team mates to share results, and they can also set up multiple test machines with agents. Users can even integrate TestProject with other tools like Jenkins, qTest, and Sauce Labs. The TestProject docs, especially the ecosystem diagram, explain everything in more detail.
When I did my test drive, I created a TestProject account, installed the agent on my Mac, and ran Python Web UI tests from my Mac. I already had the latest version of Python installed (Python 3.8 at the time of writing this article). I also already had my target browsers installed: Google Chrome and Mozilla Firefox.
Below are the precise steps I followed to set up TestProject:
1. Sign up for an account
TestProject accounts are “free forever.” Use this signup link.
The TestProject signup page
2. Download the TestProject Agent
The signup wizard should direct you to download the TestProject agent. If not, you can always download it from the TestProject dashboard. Be warned, the download package is pretty large – the macOS package was 345 MB. Alternatively, you can fetch the agent as a container image from Docker Hub.
The TestProject agent download page
The TestProject agent contains all the stuff needed to run tests and upload results to the TestProject app in the cloud. You don’t need to install WebDriver executables like ChromeDriver or geckodriver. Once the agent is downloaded, install it on the machine and register the agent with your account. For me, registration happened automatically.
This is what the TestProject agent looks like when running on macOS. You can also close this window to let it run in the background.
3. Find your developer token
You’ll need to use your developer token to connect your automated tests to your account in the TestProject app. The signup wizard should reveal it to you, but you can always find it (and also reset it) on the Integrations page.
The Integrations page. Check here for your developer token. No, you can’t use mine.
4. Install the Python SDK
TestProject’s Python SDK is distributed as a package through PyPI. To install it, simply run pip install testproject-python-sdk at the command line. This package will also install dependencies like selenium and requests.
A Classic Web UI Test
After setting up my Mac to use TestProject, it was time to write some Web UI tests in Python! Since I discovered that TestProject’s WebDriver objects could easily retrofit any existing test automation project, I thought, “What if I try to run my PyCon 2020 tutorial project with TestProject?” For PyCon 2020, I gave an online tutorial about building a Web UI test automation project in Python from the ground up using pytest and Selenium WebDriver. The tutorial includes one test case: a DuckDuckGo web search and verification. I thought it would be easy to integrate with TestProject since I already had the code. Thankfully, it was!
1. Define the test case
The test case covers a simple DuckDuckGo web search. Whenever I automate tests, I always write out the steps in plain language. Good tests follow the Arrange-Act-Assert pattern, and I like to use Gherkin’s Given-When-Then phrasing. Here’s the test case:
Scenario: Basic DuckDuckGo Web Search
Given the DuckDuckGo home page is displayed
When the user searches for "panda"
Then the search result query is "panda"
And the search result links pertain to "panda"
And the search result title contains "panda"
2. Specify automation inputs
Inputs configure how automated tests run. They can be passed into a test automation solution using configuration files. Testers can then easily change input values in the config file without changing code. Automation should read config files once at the start of testing and inject necessary inputs into every test case.
In Python, I like to use JSON for config files. JSON data is simple and hierarchical, and Python includes a module in its standard library named json that can parse a JSON file into a Python dictionary in one line. I also like to put config files either in the project root directory or in the tests directory.
The config file for this project needs these inputs (an example config.json follows the list):
browser is the name of the browser to test (Firefox, Chrome, or Headless Chrome)
implicit_wait is the implicit waiting timeout for the WebDriver instance
testproject_projectname is the project name to use for this test suite in the TestProject app
testproject_token is the developer token
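For reference, a config.json for this project might look something like the following. The values are examples – substitute your own project name and developer token.

{
  "browser": "Chrome",
  "implicit_wait": 10,
  "testproject_projectname": "TestProject Python Demo",
  "testproject_token": "YOUR_DEVELOPER_TOKEN"
}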
3. Read automation inputs
Automation code should read inputs one time before any tests run and then inject inputs into appropriate tests. pytest fixtures make this easy to do.
I created a fixture named config in the tests/conftest.py module to read config.json:
import json
import pytest
@pytest.fixture(scope='session')
def config():
    # Read the file
    with open('config.json') as config_file:
        config = json.load(config_file)
    # Assert values are acceptable
    assert config['browser'] in ['Firefox', 'Chrome', 'Headless Chrome']
    assert isinstance(config['implicit_wait'], int)
    assert config['implicit_wait'] > 0
    assert config['testproject_projectname'] != ""
    assert config['testproject_token'] != ""
    # Return config so it can be used
    return config
Setting the fixture’s scope to “session” means that it will run only one time for the whole test suite. The fixture reads the JSON config file, parses its text into a Python dictionary, and performs basic input validation. Note that Firefox, Chrome, and Headless Chrome will be supported browsers.
4. Set up WebDriver
Each Web UI test should have its own WebDriver instance so that it remains independent from other tests. Once again, pytest fixtures make setup easy.
The browser fixture in tests/conftest.py initializes the appropriate TestProject WebDriver type based on inputs returned by the config fixture:
from selenium.webdriver import ChromeOptions
from src.testproject.sdk.drivers import webdriver
@pytest.fixture
def browser(config):
    # Initialize shared arguments
    kwargs = {
        'projectname': config['testproject_projectname'],
        'token': config['testproject_token']
    }
    # Initialize the TestProject WebDriver instance
    if config['browser'] == 'Firefox':
        b = webdriver.Firefox(**kwargs)
    elif config['browser'] == 'Chrome':
        b = webdriver.Chrome(**kwargs)
    elif config['browser'] == 'Headless Chrome':
        opts = ChromeOptions()
        opts.add_argument('headless')
        b = webdriver.Chrome(chrome_options=opts, **kwargs)
    else:
        raise Exception(f'Browser "{config["browser"]}" is not supported')
    # Make its calls wait for elements to appear
    b.implicitly_wait(config['implicit_wait'])
    # Return the WebDriver instance for the setup
    yield b
    # Quit the WebDriver instance for the cleanup
    b.quit()
This was the only section of code I needed to change to make my PyCon 2020 tutorial project work with TestProject. I had to change the WebDriver invocations to use the TestProject classes. I also had to add arguments for the project name and developer token, which come from the config file. (Note: you may alternatively set the developer token as an environment variable.)
5. Create page objects
Automated tests could make direct calls to the WebDriver interface to interact with the browser, but WebDriver calls are typically low-level and wordy. The Page Object Model is a much better design pattern. Page object classes encapsulate WebDriver gorp so that tests can call simpler, more readable methods.
The DuckDuckGo search test interacts with two pages: the search page and the result page. The pages package contains a module for each page. Here’s pages/search.py:
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
class DuckDuckGoSearchPage:

    URL = 'https://www.duckduckgo.com'
    SEARCH_INPUT = (By.ID, 'search_form_input_homepage')

    def __init__(self, browser):
        self.browser = browser

    def load(self):
        self.browser.get(self.URL)

    def search(self, phrase):
        search_input = self.browser.find_element(*self.SEARCH_INPUT)
        search_input.send_keys(phrase + Keys.RETURN)
And here’s pages/result.py:
from selenium.webdriver.common.by import By
class DuckDuckGoResultPage:

    RESULT_LINKS = (By.CSS_SELECTOR, 'a.result__a')
    SEARCH_INPUT = (By.ID, 'search_form_input')

    def __init__(self, browser):
        self.browser = browser

    def result_link_titles(self):
        links = self.browser.find_elements(*self.RESULT_LINKS)
        titles = [link.text for link in links]
        return titles

    def search_input_value(self):
        search_input = self.browser.find_element(*self.SEARCH_INPUT)
        value = search_input.get_attribute('value')
        return value

    def title(self):
        return self.browser.title
Notice that this code uses the “regular” WebDriver interface because TestProject’s WebDriver classes extend the Selenium WebDriver classes.
To make setup easier, I added fixtures to tests/conftest.py to construct each page object, too. They call the browser fixture and inject the WebDriver instance into each page object:
from pages.result import DuckDuckGoResultPage
from pages.search import DuckDuckGoSearchPage
@pytest.fixture
def search_page(browser):
    return DuckDuckGoSearchPage(browser)

@pytest.fixture
def result_page(browser):
    return DuckDuckGoResultPage(browser)
6. Automate the test case
All the automation plumbing is finally in place. Here’s the test case in tests/traditional/test_duckduckgo.py:
import pytest
@pytest.mark.parametrize('phrase', ['panda', 'python', 'polar bear'])
def test_basic_duckduckgo_search(search_page, result_page, phrase):

    # Given the DuckDuckGo home page is displayed
    search_page.load()

    # When the user searches for the phrase
    search_page.search(phrase)

    # Then the search result query is the phrase
    assert phrase == result_page.search_input_value()

    # And the search result links pertain to the phrase
    titles = result_page.result_link_titles()
    matches = [t for t in titles if phrase.lower() in t.lower()]
    assert len(matches) > 0

    # And the search result title contains the phrase
    assert phrase in result_page.title()
I parametrized the test to run it for three different phrases. The test function does not interact with the WebDriver instance directly. Instead, it interacts exclusively with the page objects.
7. Run the tests
The tests run like any other pytest tests: python -m pytest at the command line. If everything is set up correctly, then the tests will run successfully and upload results to the TestProject app.
In the TestProject dashboard, the Reports tab shows all the tests you have run. It also shows the different test projects you have.
Check out those results!
You can also drill into results for individual test case runs. TestProject automatically records the browser type, timestamps, pass-or-fail results, and every WebDriver call. You can also download PDF reports!
Results for an individual test
What if … BDD?
I was delighted to see how easily I could run a traditional pytest suite using TestProject. Then, I thought to myself, “What if I could use a BDD test framework?” I personally love Behavior-Driven Development, and Python has multiple BDD test frameworks. There is no reason why a BDD test framework wouldn’t work with TestProject!
So, I rewrote the DuckDuckGo search test as a feature file with step definitions using pytest-bdd. The BDD-style test uses the same fixtures and page objects as the traditional test.
Here’s the Gherkin scenario in tests/bdd/features/duckduckgo.feature:
Feature: DuckDuckGo
As a Web surfer,
I want to search for websites using plain-language phrases,
So that I can learn more about the world around me.
Scenario Outline: Basic DuckDuckGo Web Search
Given the DuckDuckGo home page is displayed
When the user searches for "<phrase>"
Then the search result query is "<phrase>"
And the search result links pertain to "<phrase>"
And the search result title contains "<phrase>"
Examples:
| phrase |
| panda |
| python |
| polar bear |
And here’s the step definition module in tests/bdd/step_defs/test_duckduckgo_bdd.py:
from pytest_bdd import scenarios, given, when, then, parsers
from selenium.webdriver.common.keys import Keys
scenarios('../features/duckduckgo.feature')
@given('the DuckDuckGo home page is displayed')
def load_duckduckgo(search_page):
    search_page.load()

@when(parsers.parse('the user searches for "{phrase}"'))
@when('the user searches for "<phrase>"')
def search_phrase(search_page, phrase):
    search_page.search(phrase)

@then(parsers.parse('the search result query is "{phrase}"'))
@then('the search result query is "<phrase>"')
def check_search_result_query(result_page, phrase):
    assert phrase == result_page.search_input_value()

@then(parsers.parse('the search result links pertain to "{phrase}"'))
@then('the search result links pertain to "<phrase>"')
def check_search_result_links(result_page, phrase):
    titles = result_page.result_link_titles()
    matches = [t for t in titles if phrase.lower() in t.lower()]
    assert len(matches) > 0

@then(parsers.parse('the search result title contains "{phrase}"'))
@then('the search result title contains "<phrase>"')
def check_search_result_title(result_page, phrase):
    assert phrase in result_page.title()
There’s one more nifty trick I added with pytest-bdd. I added a hook to report each Gherkin step to TestProject with a screenshot! That way, testers can trace each test case step more easily in the TestProject reports. Capturing screenshots also greatly assists test triage when failures arise. This hook is located in tests/conftest.py.
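Here is a sketch of what that hook can look like. It assumes the SDK's report().step() helper for manual step reporting and the browser fixture defined earlier.

def pytest_bdd_after_step(request, feature, scenario, step, step_func, step_func_args):
    # Pull the TestProject WebDriver instance from the 'browser' fixture
    browser = request.getfixturevalue('browser')
    # Report the Gherkin step text to TestProject with a screenshot attached
    browser.report().step(
        description=str(step),
        message=str(step),
        passed=True,
        screenshot=True)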
Since pytest-bdd is just a pytest plugin, its tests run using the same python -m pytest command. TestProject will group these test results into the same project as before, but it will separate the traditional tests from the BDD tests by name. Here’s what the Gherkin steps with screenshots look like:
Custom Gherkin step with screenshot reported in the TestProject app
This is Awesome!
As its name denotes, TestProject is a great platform for handling project-level concerns for testing work: reporting, integrations, and fast feedback. Adding TestProject to an existing automation solution feels seamless, and its sleek user experience gives me what I need as a tester without getting in my way. The one word that keeps coming to mind is “simple” – TestProject simplifies setup and sharing. Its design takes to heart the renowned Python adage, “Simple is better than complex.” As such, TestProject’s new Python SDK is a welcome addition to the Python testing ecosystem.
I look forward to exploring Python support for mobile testing with Appium soon. I also look forward to seeing all the new Python add-ons the community will develop.
On October 4, 2019, I gave a talk entitled Beyond Unit Tests: End-to-End Web UI Testing at PyGotham 2019. Check it out below! I show how to write a concise-yet-complete test solution for Web UI test cases using Python, pytest, and Selenium WebDriver.
Web UI tests with Selenium WebDriver must interact with elements on a Web page. Locating elements can be tricky because expected elements may or may not be on the page. Furthermore, WebDriver might not be able to interact with some elements that exist on the page. That may seem crazy, but let’s understand why.
Web UI interactions universally follow these steps:
Wait for an element to be ready.
Get the element using a locator (ID, CSS selector, XPath, etc.).
Send commands (like clicking or typing) or queries (like getting text) to the element.
Clearly, an element must be “ready” before interactions can happen. As humans, we intuitively define “ready” as, “The page is loaded, and the element is visible.” Automation code is a bit more technical because there are two different ways to define readiness:
Existence: the element exists in the HTML structure of the page.
Appearance: the element exists and it is visible on the page.
Existence can easily be determined by WebDriver’s “find elements” method. The plural “find elements” method will return a list of all elements matching a locator query. If no elements match the locator, then an empty list is returned. The singular “find element” method, on the other hand, will return the first element matching the locator or throw an exception if no elements are found. Thus, the plural version is more convenient to use for checking existence.
Here’s an example existence method in C#:
public bool Exists(IWebDriver driver, By locator) =>
driver.FindElements(locator).Count > 0;
Checking for existence is the most basic level of readiness. If an element doesn’t exist, interactions with it simply cannot happen. However, existence alone may not be sufficient for interactions. Selenium WebDriver requires elements to not only exist but also to be displayed for interactions like sending clicks and scraping text. Existing elements may be scrolled out of view or even deliberately hidden. WebDriver calls to such elements will yield cryptic exceptions. That’s why waiting for appearance is usually the better readiness condition.
Here’s an example appearance method in C#:
// Assume that the locator targets one element, not multiple
public bool Appears(IWebDriver driver, By locator) =>
Exists(driver, locator) && driver.FindElement(locator).Displayed;
Existence must be checked first, or else the “Displayed” call will throw an exception whenever existence is false.
Putting it all together, here’s what a button click interaction could look like in C#:
// Assume this is a method in a Page Object class
// Assume that "Driver" is the WebDriver instance
public void ClickThatButton()
{
    var button = By.Id("that-button");
    var wait = new WebDriverWait(Driver, new System.TimeSpan(0, 0, 15));
    wait.Until((driver) => Appears(driver, button));
    Driver.FindElement(button).Click();
}
It’s good practice to make explicit waits before locating and using elements. It’s also good practice to get fresh elements for every interaction call in order to avoid pesky stale element exceptions. Calls like these should be placed in Page Object methods or Screenplay Pattern tasks and questions so that interactions are safe and thorough.
Appearance may not always be the right choice. There may be times when a test should check if an element doesn’t exist or if an element exists but is hidden. Just think before you code.
If you do any Web UI test automation (like with Selenium WebDriver), then you probably spend a large chunk of your test development time finding elements on a page, like buttons, inputs, and divs. Finding the right elements, however, can be challenging, especially when they lack unique IDs or class names. This guide will show you how to locate any Web element like a pro.
What are Web elements?
A Web element is an individual entity rendered on a Web page. Everything a user sees on a Web page (and even some things they don’t see) are elements: title headers, okay buttons, input fields, text areas, and more. Elements are specified in HTML by tag name, attributes, and contents. They may also have child elements, such as a table containing rows. CSS may be applied to elements to style them with colors, sizes, position, etc. Programming languages typically access Web elements as nodes in the Document Object Model (DOM).
What are Web element locators?
Web elements and locators are two different things. A Web element locator is an object that finds and returns Web elements on a page using a given query. In short, locators find elements.
Why are locators needed? As human users, we interact with Web pages visually: We look, scroll, click, and type through a browser. However, test automation interacts with Web pages programmatically: it needs a coded way to find and manipulate those same elements. Traditional automation won’t “look” at the page like a human* – it will search the DOM instead.
(*Newer automation technologies enable visual testing, which will be discussed later in this article.)
Selenium WebDriver separates the concerns of element location and interaction. WebDriver calls for these two concerns are frequently written back-to-back:
// WebDriver example: typing a search phrase at www.google.com
// This code is written in C#, but the calls are the same in any language
// First, element location
IWebElement searchField = driver.FindElement(By.Name("q"));
// Second, element interaction
searchField.SendKeys("panda");
WebDriver provides several locator query types through its “By” class: ID, name, class name, tag name, link text, partial link text, CSS selector, and XPath.
Locators may also return multiple elements, or none at all! For example:
// Get the list of results from a Google search
// Using "FindElements" will return a list of all elements found in order
// Using "FindElement" would return the first element found (or throw an exception if no elements were found)
IList<IWebElement> results = driver.FindElements(By.CssSelector("div.r"));
results.Count.Should().BeGreaterThan(0);
Large test frameworks often use design patterns for structuring locators and interactions. The Page Object Model organizes locators and action methods together in classes by page or component. However, I strongly recommend the Screenplay Pattern over page objects because its pieces are more reusable and scalable. Whatever the pattern, locators are needed.
How do I find elements?
Elements can be a hassle to find when writing locators for test automation. To simplify my workflow, I use Google Chrome’s Developer Tools side-by-side with my IDE. Why choose Chrome?
Chrome’s DevTools are really easy to use and provide rich info.
To inspect any Web page in Chrome, simply right-click anywhere on the page and select “Inspect.”
Voila! DevTools will open. For finding Web elements, we want to use the Elements tab.
Visually pinpointing an element is easy. Click the “select” tool in the upper-left corner of the DevTools pane. (It looks like a square with a cursor on it.) The icon should turn blue.
Then, move the cursor to the desired element on the page. You will see each element highlighted in different colors as the mouse moves over. The corresponding HTML source code in the Elements tab will simultaneously be highlighted, too. Nice! Click on the desired element to set the highlighting so that it won’t disappear when you move the cursor elsewhere.
From here, you can check out the element’s tag, classes, attributes, contents, parents, and children.
How do I write good locators?
Finding the element is half the battle. Forming a unique locator query is the other half. If a locator is too broad, then it could return false positives. However, if a locator is too specific, then it could be susceptible to break whenever the DOM changes, and it could also be difficult for others to read. The best philosophy is this: Write the simplest locator query that uniquely identifies the target element(s).
My locator query type order-of-preference is:
ID (if unique)
Name (if unique)
Class name
CSS Selector
XPath without text or indexing
Link text / partial link text
XPath with text and/or indexing
Unique IDs, names, and class names make locators super easy to write: queries are short and don’t need extra anchors. Always encourage developers on the team to use unique identifiers like class names for all elements. However, many elements do not have them, which means locators must fall back on more complicated CSS selectors and XPaths (*shiver*). Whenever this happens, here’s some advice:
Use parents as anchors if they have unique identifiers.
CSS selector example: “#some-list > li”
XPath example: “//ul[@id='some-list']/li”
Avoid XPaths that use text or indexing if possible.
Bad example: “//div[3]//span[text()='hello']”
Those tend to be the most brittle checks.
Use the “contains” function when checking for classes in XPath.
Example: “//div[contains(@class, 'some-class')]”
Elements frequently have more than one class.
“contains” will check a substring instead of the full class string.
However, be careful because “some-class2” would be matched!
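One way to guard against that (a sketch, not part of the original advice) is to pad the class attribute with spaces so that only whole class names match:
// Matches "some-class" as a whole class name, but not "some-class2"
driver.FindElement(By.XPath(
    "//div[contains(concat(' ', normalize-space(@class), ' '), ' some-class ')]"));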
Always test locators, too. Syntax errors and false positives happen frequently. Chrome DevTools makes testing locators easy. Simply hit Ctrl-F on the Elements tab and then paste the locator query into the finder field. DevTools will highlight all the matching elements in order. Spiffy!
Sometimes, when I can’t figure out why a locator isn’t working for a test case, I’ll do the following:
Run the test case with debugging from my IDE.
Set a breakpoint on the locator.
Wait for the test case to stop at the breakpoint.
Enter DevTools on the active Chrome window.
Check the DOM and test the locators on the live page.
What if my tests are flaky?
Web UI testing is roundly criticized for being “flaky” because tests often crash for unexpected reasons. However, much of the unreliability blamed on Web UI testing (and often on Selenium WebDriver itself) stems from the fact that all Web interactions inherently pose race conditions. The automation and the browser execute independently, so interactions must be synchronized with page state. Otherwise, WebDriver will throw exceptions for timeouts, stale elements, and elements not found. Many times, these issues happen intermittently, so they can be difficult to trace and resolve.
The best way to avoid race conditions is this: Always wait for an element to exist before interacting with it. This may seem basic, but it’s easy to overlook. Selenium WebDriver packages all offer some sort of WebDriverWait object that will force the driver to wait for a given condition to be true before proceeding. The easiest way to check if an element exists is to check if the list of elements returned by a FindElements (plural) call is non-empty. Adding another call for each interaction may feel burdensome, but design patterns within well-designed frameworks (like the Screenplay Pattern) can make these checks happen automatically.
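As a rough sketch, a small helper like the hypothetical one below can bake the existence check into every click (the name and timeout are illustrative):
// Hypothetical helper: wait for at least one matching element, then click it
public static void ClickWhenReady(IWebDriver driver, By locator, int timeoutSeconds = 15)
{
    var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
    // Proceed only once the locator returns a non-empty list of elements
    wait.Until(d => d.FindElements(locator).Count > 0);
    driver.FindElement(locator).Click();
}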
Another good practice is this: Always fetch fresh elements. Sometimes, automation will first get some elements and then use a second query to get more elements. Or, in the case of the Page Object Factory (which should never be used because, bluntly, its design is terrible), elements are fetched once when the page object is constructed and referenced thereafter. No matter which way, the longer a Web element object exists, the more prone it is to become stale and cause exceptions. I’ve seen elements turn stale inexplicably even when they still seem to be on the page, too. Always get an element in the moment when it is needed. That way, it can’t go stale!
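Here is a quick illustration of the difference (the locator is hypothetical):
// Risky: holding onto an element object across page updates can make it go stale
IWebElement cached = driver.FindElement(By.Id("refresh-button"));
cached.Click();
cached.Click();  // May throw StaleElementReferenceException if the DOM re-rendered

// Safer: fetch a fresh element at the moment each interaction happens
driver.FindElement(By.Id("refresh-button")).Click();
driver.FindElement(By.Id("refresh-button")).Click();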
Some teams have never automated any tests and can’t check themselves before they wreck themselves. Others have thousands of tests that are flaky, duplicative, and slow. What do we do? Well, I gave a talk about this problem at a few Python conferences. The example code in the talk was written in Python, but the principles apply to any language.
Cypress.io is an up-and-coming Web test automation framework. It is open source and written entirely in JavaScript. Unlike Selenium WebDriver tests that work outside the browser, Cypress works directly inside the browser. It enables developers to write front-end tests entirely in JavaScript, directly accessing everything within the browser. As a result, tests run much more quickly and reliably than Selenium-based tests.
Cypress also provides several handy features:
A sleek dashboard with automatic reloads for Test-Driven Development
Easy debugging
Network traffic control for validation and mocking
Automatic screenshots and videos
Cypress was clearly developed with developers in mind. It enables rapid test development with rapid feedback. The Cypress Test Runner is free, while the Cypress Dashboard Service (for better reporting and CI) requires a paid license.
How Do I Start Using Cypress?
I won’t post examples or instructions for using Cypress here. Please refer to the Cypress documentation to get started. Make sure your machine is set up for JavaScript development.
Will Cypress Replace WebDriver?
TL;DR: No.
Cypress has its niche. It is ideal for small teams whose stacks are exclusively JavaScript and whose developers are responsible for all testing. However, WebDriver still has key advantages.
While Selenium WebDriver supports nearly all major browsers, Cypress currently supports only one browser: Google Chrome. That’s a major limitation. Web apps do not work the same across browsers. Many industries (especially banking and finance) put strict controls on browser types and versions, too.
Cypress is JavaScript only. Its website proudly touts its JavaScript purity like a badge of honor. However, that has downsides. First, all testing must happen inside the bubble of the browser, which makes parallel testing and system interactions much more difficult. Second, testers must essentially be developers, which may not work well for all teams. Third, other programming languages that may offer advantages for testing (like Python) cannot be used. Selenium WebDriver, on the other hand, has multiple language bindings and lets tests live outside the browser.
Within the JavaScript ecosystem, Cypress is not the only all-in-one end-to-end framework. Protractor is more mature, more customizable, and easier to parallelize. It wraps Selenium WebDriver calls to make them simpler and safer, much like Cypress aims to do with its own API.
The WebDriver standard is a W3C Recommendation. What does this mean? All major browsers have a vested interest in implementing the standard. Selenium is simply the most popular implementation of that standard. It’s not going away. Cypress, however, is just a cool project driven by commercial intent.
JavaScript is taking over the world. It was the most popular language on GitHub in 2017. JavaScript-only stacks like MEAN and MERN are increasingly popular. The demand for a complete JavaScript-only test framework like Cypress is further evidence of that trend.
“Bundled” test frameworks are becoming popular. Historically, a test framework simply provided test structure, basic fixtures, and maybe an assertion library (like JUnit). Then, extra test packages became popular (like Selenium WebDriver, REST APIs, mocking, logging, etc.). Now, new frameworks like Cypress and Protractor aim to provide pre-canned recipes of all these pieces to simplify the setup.
Many new test frameworks will likely be developer-centric. There is a trend in the software industry (especially with Agile) of eliminating traditional tester roles and putting testing work onto developers. The role of the “Software Engineer in Test” – a developer who builds test systems – is also on the rise. Test automation tools and frameworks will need to provide good developer experience (DX) to survive. Cypress is poised to ride that wave.
WebDriver is not perfect. Cypress was developed in large part to address WebDriver’s shortcomings, namely the slowness, difficulty, and unreliability (though unreliability is often a result of poor implementation). Many developers don’t like to use Selenium WebDriver, and so there will be a constant itch to make something better. Cypress isn’t there yet, but it might get there one day.
Selenium WebDriver is the most popular open source package for Web UI test automation. It allows tests to interact directly with a web page in a live browser. However, using Selenium WebDriver can be very frustrating because basic interactions often lack robustness, causing intermittent errors for tests.
The Basics
One such vulnerable interaction is clicking elements on a page. Clicking is probably the most common interaction for tests. In C#, a basic click would look like this:
webDriver.FindElement(By.Id("my-id")).Click();
This is the easy and standard way to click elements using Selenium WebDriver. However, it will work only if the targeted element exists and is visible on the page. Otherwise, the WebDriver will throw exceptions. This is when programmers pull their hair out.
Waiting for Existence
To avoid race conditions, interactions should not happen until the target element exists on the page. Even split-second loading times can break automation. The best practice is to use explicit waits before interactions with a reasonable timeout value, like this:
// Wait up to 15 seconds for the element to exist before clicking it
const int timeoutSeconds = 15;
var ts = new TimeSpan(0, 0, timeoutSeconds);
var wait = new WebDriverWait(webDriver, ts);
// Proceed once at least one matching element has been found
wait.Until((driver) => driver.FindElements(By.Id("my-id")).Count > 0);
webDriver.FindElement(By.Id("my-id")).Click();
Other Preconditions
Sometimes, Web elements won’t appear without first triggering something else. Even if the element exists on the page, WebDriver cannot click it until it is made visible. Always look for the proper way to make that element available for clicking. Click on any parent panels or expanders first. Scroll if necessary. Make sure the state of the system permits the element to be clickable.
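For example, here is a rough sketch of expanding a hypothetical accordion section before clicking a link inside it (the IDs are illustrative):
// Expand the parent section first so the child link becomes visible
webDriver.FindElement(By.Id("billing-section-header")).Click();
// Wait for the child link to appear, then click it
var wait = new WebDriverWait(webDriver, TimeSpan.FromSeconds(15));
wait.Until(d => d.FindElements(By.Id("edit-billing-link")).Count > 0);
webDriver.FindElement(By.Id("edit-billing-link")).Click();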
If the element is scrolled out of view, move to the element before clicking it:
new Actions(webDriver)
.MoveToElement(webDriver.FindElement(By.Id("my-id")))
.Click()
.Perform();
Last Ditch Efforts
Nevertheless, there are times when clickable elements just don’t cooperate. They just can’t seem to be made visible. When all else fails, drop directly into JavaScript:
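A minimal sketch using WebDriver’s IJavaScriptExecutor (reusing the “my-id” element from the earlier examples) looks like this:
// Fall back to a raw JavaScript click as a last resort
var element = webDriver.FindElement(By.Id("my-id"));
((IJavaScriptExecutor)webDriver).ExecuteScript("arguments[0].click();", element);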
Do this only when absolutely necessary. It is a best practice to use Selenium WebDriver methods because they make automated interaction behave more like a real user than raw JavaScript calls. Make sure to give good reasons in code comments whenever doing this, too.
Final Advice
This article was written specifically for clicks, but its advice can be applied to other sorts of interactions, too. Just be smart about waits and preconditions.
Note: Code examples on this page are written in C#, but calls are similar for other languages supported by Selenium WebDriver.