How Do I Know My Tests Add Value?

Software testing is a huge effort, especially for automation. Teams can pour significant time, money, and resources into testing – or choose not to. People literally make careers out of it. That investment ought to be worthwhile – we shouldn’t test for the sake of testing.

So, therein lies the million-dollar question: How do we know that our tests add meaningful value?

Or, more bluntly: How do we know that testing isn’t a waste of time?

That’s easy: bugs!

The stock answer goes something like this: We know tests add value when they find bugs! So, let’s track the number of bugs we find.

That answer is wrong, despite its good intentions. Bug count is a terrible metric for judging the value of tests.

What do you mean bug counts aren’t good?

I know that sounds blasphemous. Let’s unpack it. Finding bugs is a good thing, and tests certainly should find bugs in the features they cover. But, the premise that the value of testing lies exclusively in the bugs found is wrong. Here’s why:

  1. The main value of testing is fast feedback. Testing serves two purposes: (1) validating goodness and (2) identifying badness. Passing tests are validated goodness. Failing tests, meaning uncovered bugs, are identified badness. Both types of feedback add value to the development process. Developers can proceed confidently with code changes when trustworthy tests are passing, and management can assess lower risk. Unfortunately, bug counts don’t measure that type of goodness.
  2. Good testing might actually reduce bug count. Testing means accountability for development. Developers must think more carefully about design. They can also run tests locally before committing changes. They could even do Test-Driven Development. Better practices could prevent many bugs from ever happening.
  3. Tracking bug count can drive bad behavior. If a high bug discovery rate looks good (or, worse, is tied to quotas), then testers will strive to post numbers. If they don’t find critical bugs, they will open bug reports for nitpicks and trivialities. The extra effort they spend to report inconsequential problems may not be of value to the business – wasting their time and the developers’ time all for the sake of metrics.
  4. Bugs are usually rare. Unless a team is dysfunctional, the product usually works as expected. Hundreds of test runs may not yield a single bug. That’s a wonderful thing if the tests have good coverage. Those tests still add value. Saying they don’t belittles the whole testing effort.

Then what metrics should we use?

Bugs happen unpredictably, and exhaustive testing is impossible. Metrics should instead focus on the return on investment for testing efforts. Here are a few:

  1. Time-to-bug-discovery. Rather than track bug counts, track the time until each bug is discovered. This metric genuinely measures the feedback loop for test results. Make sure to track the severity of each bug, too. For example, if high-severity bugs are not caught until production, then the tests don’t have enough coverage. Teams should strive for the shortest time possible – fast feedback means lower development costs. This metric also encourages teams to follow the Testing Pyramid.
  2. Coverage. Coverage is the degree to which tests exercise product behavior. Higher coverage means more feedback and greater chances of identifying badness. Most unit test frameworks can use code coverage tools to verify paths through code. Feature coverage requires extra process or instrumentation. Tests should avoid duplicate coverage, too.
  3. Test failure proportions. Tests fail for a variety of reasons. Ideally, tests should fail only when they discover bugs. However, tests may also fail for other reasons: unexpected feature changes, environment instability, or even test automation bugs. Non-bug failures disrupt the feedback loop: they force a team to fix testing problems rather than product problems, and they might cause engineers to devalue the whole testing effort. Tracking failure proportions will reveal what problems inhibit tests from delivering their top value, as sketched below.
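As a minimal sketch (with hypothetical triage labels), failure proportions are straightforward to compute once each failed run is tagged with a root cause:

```python
# Minimal sketch: compute test failure proportions from triaged results.
# The failure labels below are hypothetical examples.
from collections import Counter

failure_reasons = [
    "product bug", "environment instability", "automation bug",
    "environment instability", "product bug", "environment instability",
]

counts = Counter(failure_reasons)
total = sum(counts.values())
for reason, count in counts.most_common():
    print(f"{reason}: {count}/{total} ({count / total:.0%})")
```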


Gherkin Syntax Highlighting in Visual Studio Code

Visual Studio Code is an incredible code editor that’s on the rise. It offers the power of an IDE with the speed and simplicity of a lightweight text editor, similar to Sublime, Atom, and Notepad++. If you’re a BDD addict, then VS Code is a great choice for writing Gherkin features, too! There are a number of extensions for Gherkin. Which one is the best? Below is my recommendation.

TL;DR

Install both: Snippets and Syntax Highlight for Gherkin (Cucumber) (extension #2 below) and Gherkin step autocomplete (extension #4 below).

Extension #1

VS Code has a few free extensions to support Gherkin. The first one I tried was Cucumber (Gherkin) Full Support. This one had the highest number of installs. When I started writing feature files, it provided snippets for each section and syntax colors. The documentation said it could also provide step suggestions (meaning, I type “Given” and it shows me all available Given steps) and navigation to step definition code, but those features appeared to work only for JavaScript projects, so I didn’t try them myself. That left me with no step suggestions. The indentation looked off, too. Not perfect. I wanted a better extension.


Extension #2

The second one I tried was Snippets and Syntax Highlight for Gherkin (Cucumber). It provides colorful syntax highlighting and a few three-letter snippets for Gherkin keywords. When I typed “fea”, a full template for a Feature section appeared with user story stubs (“In order to ___, As a ___, I want ___”). Nice! Good practice. The “sce” snippet did the same thing for the Scenario section with Given, When, and Then steps. Each section was indented nicely, too. The only downside was the lack of a snippet for Examples tables. Nevertheless, tables were still highlighted. But again, no step suggestions.
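For reference, here is roughly the Gherkin that those snippet templates expand to (the blanks are placeholders to fill in; the exact template may differ by extension version):

```gherkin
Feature: ___
  In order to ___
  As a ___
  I want ___

  Scenario: ___
    Given ___
    When ___
    Then ___
```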


Extension #3

The third extension I tried was Feature Syntax Highlight and Snippets (Cucumber). It was very similar to the previous extension, but it used different colors. The snippet shortcuts were also not as intuitive – they used the letter “f” for feature followed by the first letter of the section. For example, “ff” was a Feature section, and “fs” was a Scenario section. Unfortunately, this extension did not provide step suggestions. Comments and example table rows did not get highlighted, either. Personally, I preferred the previous extension’s color scheme.


Extension #4

The fourth extension I tried was Gherkin step autocomplete. This one promised step suggestions! However, I had some trouble setting it up. When I enabled the extension by itself, feature files did not show any syntax highlighting, and the steps had no suggestions. What? Lame. What the README doesn’t say is that it relies on a separate extension for feature file support. So, I enabled extension #2 together with this one. Then, I had to move my feature file into a project-root-level directory named “features.” (This path could be customized in the extension’s settings, but “features” is the default.) And, voila! I got pretty colors and step suggestions.


But Wait, There’s More!

There were even more extensions for Gherkin. I was happy with #2 and #4, so I didn’t try the others, which also didn’t have as many installations. If anyone finds goodness in the others, please post in the comments!

PyOhio 2018 Reflections

PyOhio 2018 was a free Python conference hosted at Ohio State University in Columbus, OH from July 28-29. I had the pleasure of not only attending but also speaking at PyOhio, and my company, PrecisionLender, graciously covered my travel expenses. I had a great time. Here’s my retrospective on the conference.

My Talk

The main reason I went to PyOhio was because I was honored to be a speaker. When I was at an Instagram dinner at PyCon 2018, I met a few conference organizers who encouraged me to propose talks at other Python conferences. On a whim the next morning, I spitballed an idea for a talk about building a test automation solution from the ground up in Python. After talking with a number of people, I realized how test automation is such a struggle everywhere. I took inspiration from Ying Li’s keynote and crafted a story about how Amanda the Panda, a Bamboozle employee, becomes a test automation champion. And, BOOM! My talk proposal was accepted for PyOhio and PyGotham! The video recording of my talk, “Egad! How Do We Start Writing (Better) Tests?”, is available online.

Arrival

Good news: Raleigh and Columbus have direct flights. Bad news: they are either early-morning or late-night direct flights. So, I left Raleigh on Friday morning before the conference and spent the day in Columbus. Surprisingly, the security line at RDU wrapped around 2/3 of the Terminal 2 perimeter, but I still boarded the flight on time. Once I landed in Columbus, I took the COTA AirConnect bus downtown for the low price of $2.75.

My goal for Friday was personal development. I rarely get a chance to escape the rigors of everyday life to focus on myself. Personal retreats let me clear my mind, dream big, and begin taking action. And on this day, I started writing my first test automation book – a dream I’ve held for over a year now. I spent a few hours at Wolf’s Ridge Brewery, sampling beers with lunch as I developed a rough outline for my project.

My evening was low-key. I took a nap at my hotel, the Blackwell Inn and Pfahl Conference Center. For dinner, I ate at White Castle for the first time – and it was pretty darn good. After practicing my talk, I got a tiramisu bubble tea from Vivi as a nightcap.

The Conference

PyOhio was a much smaller conference than PyCon. There were fewer vendor tables but nevertheless a wide selection of stellar talks. As a result, the conference felt more intimate and more focused. Perhaps that feeling was due also to the venue: the third floor of the Ohio Union had full rooms with “cozy” hallways. Hats off to the organizers, too – everything ran smoothly and professionally.

As soon as I arrived, I scored my name badge, my swag bag, and my official PyOhio 2018 t-shirt. The opening keynote from Adrienne Lowe, “From Support to Engineering and Beyond: What to Take with You, and What to Leave Behind,” about the highs and lows of trying to make it as a developer, was exceptionally inspiring. Engineers often don’t talk about how hard the job is, especially for newcomers to the industry. Everybody suffers from imposter syndrome. Everybody feels inadequate. Everybody is tempted to quit, even to the point of tears. The vulnerability of hearing others say, “Me, too,” is so relatable and so relieving.

The first talk-talk I attended was Trey Hunner’s “Easier Classes: Python Classes Without All the Cruft.” Trey gave an excellent overview of writing more sophisticated Python classes. TL;DR: upgrade to 3.7 and use dataclasses.

The next talk I attended was Leo Guinan’s “Go with the Flow: Automating Your Workflows with Airflow.” Apache Airflow is a platform for automating workflows. As an automationeer, it struck me as being like a continuous integration system generalized for non-build purposes. The Q&A portion of the talk was lit.

After finding an authentic Chinese restaurant for lunch, my friend Matt arrived! I worked with Matt in the testing space at LexisNexis. He drove all the way from Dayton to see my talk and hang out. We spent the early afternoon catching up, and we went to Hook Hearted Brewing for dinner after the conference because we’re beer buddies. I was so thankful he came to support me – it meant a lot!

My talk was at 3:45pm. Other than discovering my Thunderbolt-to-HDMI adapter was a dud, the talk went very well. I decided to stick to a script for this talk because most of it followed a story, and I’m glad I did. (For my PyCon talk, I chose instead to speak without a script and rely instead on the slides alone.) There were about 30 people in the audience. Many expressed appreciation for my presentation!

The last talk of the day for me was Jace Browning’s “Automated Regression Testing with Splinter and Jupyter.” It was the perfect follow-up to my talk. Whereas mine was mostly high-level, Jace showed implementation and execution. I loved how he compared raw Selenium WebDriver calls to Splinter calls, and I was thrilled to see hands-on test execution using Jupyter. One of the things that makes Python so great for automation is that modules can be called from the interpreter – and Jupyter notebooks make that so easy.

The Second Day

Sunday was a shorter conference day. The opening keynote, Lorena Mesa’s “Now is better than Never: What the Zen of Python can teach us about Data Ethics,” didn’t start until 11:40am. Lorena showed how the aphorisms of the Zen of Python apply to data ethics in a scary, modern world.

I got lunch at Chatime: dan dan noodles (or rather, an imitation thereof) and a matcha latte with grass jelly. Yum! After lunch, I attended Daniel Lindeman’s “Python in Serverless Architectures.” Now I know what the buzzword “serverless” means! I even found out that I had already developed a serverless app using Django and Heroku. There are some really cool ways test automation could take advantage of serverless architectures.

Another one of my favorite talks of the afternoon was Vince Salvino’s “Containers Without the Magic.” Vince broke down how easy containers are to use. It was a great refresher for me.

Open Spaces

At 3:15 on Sunday, I tried something new: I hosted an open space for test automation. “Open spaces” are rooms that can be reserved for a time slot to meet up informally about a common interest. (For example, PyCon had a juggling open space!) At first, nobody showed up to my open space, but after a few minutes, one lady walked in. She had been a software tester for years and wanted to start doing automation. I walked her through as much info as I could before time was up, and she was very grateful for the guidance. Since she was the only person to come, she got focused, one-on-one help – it worked out nicely. (My friend Jason also popped in and helped out; more on him below.)

The After-Party

At conferences, my biggest fear is being awkwardly alone. I want to spend time with good people, both new and familiar. Thankfully, PyOhio didn’t disappoint.

Backstory: At PyCon 2018, I met a guy named Julian who runs PyBites (together with his buddy Bob). We really hit it off, and he invited me to join the PyBites community. They offer great code challenges and a “100 Days of Code” challenge course, as well as a blog about all things Python. Through the PyBites community, I met another guy named Jason who would be at PyOhio 2018 with me. We agreed to meet up for dinner and drinks after the Sunday talks.

(On a side note, I recommend PyBites as a great place to learn new things, hone skills, and meet great people!)

That Sunday night, it just so happened that Adrienne and Trey, two of the other speakers, intersected Jason and me as we were deciding where to go for dinner. The next thing we know (after a hotel pitstop), we’re all walking off together to Eden Burger, a local vegan burger joint. I had a vegan “cheeseburger” with fries and a “milkshake” – and they were genuinely delicious! More than the food, I enjoyed my time with new friends. I was really inspired by the cool things each of them is doing. I guess that’s Python conference magic!

Jason and I hit World of Beer after dinner. After Slack-ing for weeks, it was so good to spend time with this fine gent. We discussed Python, software, our careers, our families, and our dreams. What a perfect way to conclude PyOhio 2018!

Takeaways

There were so many takeaways from PyOhio 2018 for me:

  1. Conferences are phenomenal for professional development. The pulse I get from conferences is electrifying. I walked away from PyOhio galvanized to be an even better software engineer. The talks opened up exciting new ideas. Inspiration for several blog posts sprang forward. The people I met motivated me to try new things. I got so much vigor out of such a short time.
  2. My friends around the globe are awesome. Matt, Jason, Adrienne, Trey, Julian (vicariously), and all the other great people I met at PyOhio made my conference experience so rewarding.
  3. Good values foster wonderful communities. My company, PrecisionLender, has four major values: Be helpful, humble, honest, and human. Those values make my company such a great place to work. I see those same values in the Python community, too. People at PyOhio even asked about these values when they saw them on my PL shirt and my business card. I think that’s partially why Python conferences are always so welcoming and inspiring.
  4. Bigger conferences have more pizzazz, while smaller conferences are more intimate. PyCon 2018 was big, flashy, and awesome. I scored so much swag that I nearly couldn’t fit it all in my suitcase to carry home. PyOhio 2018, on the other hand, focused much more intently on the talks and the people. A perfect example of this was Leo Guinan’s monologue-turned-dialogue on Airflow: it was natural for people to just ask questions. Both types of conferences are good in their own ways.
  5. PyCon 2018 was likely a watershed moment for my career. I cannot reflect on PyOhio 2018 without seeing it as an extension of my PyCon 2018 experience. The only reason I attended PyOhio was because someone at PyCon encouraged me to propose a talk. The reason I met Jason is because I first met Julian. The reason I want to keep speaking is because PyCon went so well for me. The fact that both conferences were hosted in Ohio only two months apart is also rather serendipitous. Like my first trip to China, I think PyCon 2018 will have a lasting impact on my career.

EGAD! How Do We Start Writing (Better) Tests?

Some have never automated tests and can’t check themselves before they wreck themselves. Others have 1000s of tests that are flaky, duplicative, and slow. Wa-do-we-do? Well, I gave a talk about this problem at PyOhio 2018. The language used for example code was Python, but the principles apply to any language. The recording is available online!

The Testing Pyramid

The “Testing Pyramid” is an industry-standard guideline for functional test case development. Love it or hate it, the Pyramid has endured since the mid-2000s because it continues to be practical. So, what is it, and how can it help us write better tests?

Layers

The Testing Pyramid has three classic layers:

  • Unit tests are at the bottom. Unit tests directly interact with product code, meaning they are “white box.” Typically, they exercise functions, methods, and classes. Unit tests should be short, sweet, and focused on one thing/variation. They should not have any external dependencies – mocks/monkey-patching should be used instead (see the sketch after this list).
  • Integration tests are in the middle. Integration tests cover the point where two different things meet. They should be “black box” in that they interact with live instances of the product under test, not code. Service call tests (REST, SOAP, etc.) are examples of integration tests.
  • End-to-end tests are at the top. End-to-end tests cover a path through a system. They could arguably be defined as multi-step integration tests, and they should also be “black box.” Typically, they interact with the product like a real user. Web UI tests are examples of end-to-end tests because they need the full stack beneath them.

All layers are functional tests because they verify that the product works correctly.
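To make the bottom layer concrete, here is a minimal sketch of a unit test that uses pytest’s monkeypatch fixture to stub out an external dependency (the product code and names are hypothetical):

```python
import requests  # hypothetical product dependency


def get_username(user_id):
    """Product code under test (illustrative): calls a live service."""
    response = requests.get(f"https://example.com/api/users/{user_id}")
    return response.json()["name"]


def test_get_username(monkeypatch):
    """Unit test: patch the HTTP call so there are no external dependencies."""
    class FakeResponse:
        def json(self):
            return {"name": "pandy"}

    monkeypatch.setattr(requests, "get", lambda url: FakeResponse())
    assert get_username(42) == "pandy"
```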

Proportions

The Testing Pyramid is triangular for a reason: there should be more tests at the bottom and fewer tests at the top. Why?

  1. Distance from code. Ideally, tests should catch bugs as close to the root cause as possible. Unit tests are the first line of defense. Simple issues like formatting errors, calculation blunders, and null pointers are easy to identify with unit tests but much harder to identify with integration and end-to-end tests.
  2. Execution time. Unit tests are very quick, but end-to-end tests are very slow. Consider the Rule of 1’s for Web apps: a unit test takes ~1 millisecond, a service test takes ~1 second, and a Web UI test takes ~1 minute. If test suites have hundreds to thousands of tests at the upper layers of the Testing Pyramid, then they could take hours to run – for example, 300 Web UI tests at ~1 minute each need ~5 hours serially. An hours-long turnaround time is unacceptable for continuous integration.
  3. Development cost. Tests near the top of the Testing Pyramid are more challenging to write than ones near the bottom because they cover more of the system. They’re longer. They need more tools and packages (like Selenium WebDriver). They have more dependencies.
  4. Reliability. Black box tests are susceptible to race conditions and environmental failures, making them inherently more fragile. Recovery mechanisms take extra engineering.

The total cost of ownership increases when climbing the Testing Pyramid. When deciding the level at which to automate a test (and whether to automate it at all), taking a risk-based strategy to push tests down the Pyramid is better than writing all tests at the top. Each proportionate layer mitigates risk at its optimal return on investment.

Practice

The Testing Pyramid should be a guideline, not a hard rule. Don’t require hard proportions for test counts at each layer. Why not? Arbitrary metrics cause bad practices: a team might skip valuable end-to-end tests or write needless unit tests just to hit numbers. W. Edwards Deming would shudder!

Instead, use loose proportions to foster better retrospectives. Are we covering too many input combos through the Web UI when they could be checked via service tests? Are there unit test coverage gaps? Do we have a pyramid, a diamond, a funnel, a cupcake, or some other wonky shape? Each layer’s test count should be roughly an order of magnitude smaller than the layer beneath it. Large Web apps often have 10K unit tests, 1K service tests, and a few hundred Web UI tests.


Let Me Google That For You

A brief memoir on self-improvement through self-humility.

I learned one of my most important life lessons before my career officially started. From 2007 to 2009, I interned on-and-off for the Rational Business Developer team in IBM’s Software Group. The team was great and let me do everything: development, testing, automation, builds, and even fun Web examples. However, my junior-level experience often made me doubt myself (e.g., imposter syndrome), and there were “known unknowns” by the truckload.

[Image: “A photograph of me as an IBM intern.” (j/k)]

For my own survival, I quickly learned to fearlessly ask questions. This was a big step for a shy guy like me. My teammates were more than happy to help me, which put me at ease. Asking questions was a good thing.

…Until one time, it became a bad thing.

I forget exactly when it happened and what my specific question was, but I do remember the person and the method. I “pinged” (IBM lingo for “send an instant chat message to”) one of the senior developers with a question, and he replied with a link. When I clicked it, the browser loaded Google, moved the cursor, typed my question, and clicked the search button.

He sent me to Let Me Google That For You.

I was horribly embarrassed. It was an introvert’s worst nightmare come true. I never knew about LMGTFY before, and the sarcasm didn’t hit me until after the search was complete. My fledgling confidence was shattered.

[Image: “My face after my first LMGTFY.”]

I couldn’t tell if the guy:

  1. Thought lowly of me
  2. Was annoyed with me
  3. Was trying to coach me
  4. Was having a bad day
  5. Was just a jerk

The reason didn’t matter, though. He made a valid point: the question was Google-able, and I could have searched for an answer myself before asking others. As I picked up the pieces of my confidence, I introspected on how to do better next time. Was it wrong to ask questions? No. It is never wrong to ask questions, and anyone who says otherwise is a jerk. Was it wrong for him to send me to LMGTFY? It certainly was unprofessional and harmful to our team dynamic. But, was there a better way to ask questions? Yes:

When faced with an unknown, pose it as a question. Then, ask yourself the question first before asking others.

I call this the “self-Socratic method.” Whenever I don’t know something, I write a list of questions. Then, I try to answer those questions using my own resources, like Google, Wikipedia, and books. When I answer a question, I cross it off the list. Inevitably, answers lead to more questions, which I add to the list. As the list grows, new questions become increasingly specific because the unknowns become smaller. I also find that I can answer most factual, knowledge-based questions myself. The questions that require me to ask someone are typically wisdom-based or experience-based. For example, the question, “How do I build a Web app?” could reduce to, “Why do you prefer to use Angular over React as the JavaScript framework?” because the World Wide Web already has much to say about the technologies behind itself. At the very least, even if I can’t find much by my own searching, my due diligence shows good faith when asking others.

[Image: “A photograph of me as a professional, self-Socratic software engineer.” (j/k)]

How much time do I spend searching for an answer before asking somebody who knows? This is a popular interview question. When using the self-Socratic method, I’d say (a) when the first two pages of Google results yield nothing, (b) when I spend more than 15 minutes absolutely stuck in code (from the time of the last “epiphany”), or (c) when I hit a wisdom- or experience-based question.

I’ve used the self-Socratic method for everything from learning how Web apps work to learning how to cut my lawn. I learned my lesson the tough way, but I’m glad I learned it early!

Why Python is Great for Test Automation

Python is an incredible programming language. As Dan Callahan said in his PyCon 2018 keynote, “Python is the second best language for anything, and that’s an amazing aspiration.” For test automation, however, I believe it is one of the best choices. Here are ten reasons why:

#1: The Zen of Python

The Zen of Python, as codified in PEP 20, is an ideal guideline for test automation. Test code should be a natural bridge between plain-language test steps and the programming calls to automate them. Tests should be readable and descriptive because they describe the features under test. They should be explicit in what they cover. Simple steps are better than complicated ones. Test code should add minimal extra verbiage to the tests themselves. Python, in its concise elegance, is a powerful bridge from test case to test code.

(Want a shortcut to the Zen of Python? Run “import this” at the Python interpreter.)

#2: pytest

pytest is one of the best test frameworks currently available in any language, not just for Python. It can handle any functional tests: unit, integration, and end-to-end. Test cases are written simply as functions (meaning no side effects as long as globals are avoided) and can take parametrized inputs. Fixtures are a generic, reusable way to handle setup and cleanup operations. Basic “assert” statements have automatic introspection so failure messages print meaningful values. Tests can be filtered when executed. Plugins extend pytest to do code coverage, run tests in parallel, use Gherkin scenarios, and integrate with other frameworks like Django and Flask. Other Python test frameworks are great, but pytest is by far the best-in-show. (Pythonic frameworks always win in Python.)
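Here is a small, hypothetical sketch showing several of those features together – a fixture for setup, parametrized inputs, and a plain assert:

```python
import pytest


@pytest.fixture
def cart():
    # Setup: provide fresh data for each test (cleanup could follow a yield).
    return [1.99, 5.00, 3.50]


@pytest.mark.parametrize("tax_rate, expected", [(0.0, 10.49), (0.1, 11.539)])
def test_cart_total(cart, tax_rate, expected):
    total = sum(cart) * (1 + tax_rate)
    # Plain asserts get automatic introspection on failure.
    assert abs(total - expected) < 0.001
```

Run it with the `pytest` command; the parametrization yields two test cases from one function.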

#3: Packages

For all the woes about the CheeseShop, Python has a rich library of useful packages for testing: pytest, unittest, doctest, tox, logging, paramiko, requests, Selenium WebDriver, Splinter, Hypothesis, and others are available as off-the-shelf ingredients for custom automation recipes. They’re just a “pip install” away. No reinventing wheels here!

#4: Multi-Paradigm

Python is both object-oriented and functional. It lets programmers decide whether functions or classes are better for the needs at hand. This is a major boon for test automation because (a) stateless functions avoid side effects and (b) the simple syntax for those functions makes them readable. pytest itself uses functions for test cases instead of shoehorning them into classes (à la JUnit).
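As an illustration (the names are mine, not from any framework), the same behavior can be written in either style:

```python
# Functional style: a stateless function with no side effects.
def apply_discount(price, rate):
    return round(price * (1 - rate), 2)


# Object-oriented style: a class, for when grouping state makes sense.
class Discounter:
    def __init__(self, rate):
        self.rate = rate

    def apply(self, price):
        return round(price * (1 - self.rate), 2)


assert apply_discount(100.0, 0.25) == Discounter(0.25).apply(100.0) == 75.0
```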

#5: Typing Your Way

Python’s out-of-the-box dynamic duck typing is great for test automation because most feature tests (“above unit”) don’t need to be picky about types. However, when static types are needed, projects like mypy, Pyre, and MonkeyType come to the rescue. Python provides typing both ways!
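A tiny sketch of both styles (illustrative names; the typed version is what a checker like mypy would analyze):

```python
from typing import List


def longest(items):
    # Duck typing: works for any iterable of things with len().
    return max(items, key=len)


def longest_typed(items: List[str]) -> str:
    # Same behavior, but with optional static hints for type checkers.
    return max(items, key=len)


print(longest(["a", "ccc", "bb"]))        # ccc
print(longest_typed(["a", "ccc", "bb"]))  # ccc
```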

#6: IDEs

Good IDE support goes a long way to make a language and its frameworks easy to use. For Python testing, JetBrains PyCharm supports visual testing with pytest, unittest, and doctest out of the box, and its Professional Edition includes support for BDD frameworks (like pytest-bdd, behave, and lettuce) and Web development. For a lighter offering, Visual Studio Code is taking the world by storm. Its Python extensions support all the good stuff: snippets, linting, environments, debugging, testing, and a command line terminal right in the window. Atom, Sublime, PyDev, and Notepad++ also get the job done.

#7: Command Line Workflow

Python and the command line are like peanut butter and jelly – a match made in heaven. The entire test automation workflow can be driven from the command line. Pipenv can manage packages and environments. Every test framework has a console runner to discover and launch tests. There’s no need to “build” test code first because Python is an interpreted language, further simplifying execution. Rich command line support makes testing easy to manage manually, with tools, or as part of build scripts / CI pipelines.

As a bonus, automation modules can be called from the Python REPL interpreter or, even better, a Jupyter notebook. What does this mean? Automation-assisted exploratory testing! Imagine using Python calls to automatically steer a Web app to a point that requires a manual check. Calls can be swapped out, rerun, skipped, or changed on the fly. Python makes it possible.
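As a rough sketch (assuming the `selenium` package is installed and a matching driver is on the PATH; the page and element names are hypothetical), an interactive session might look like this:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")
driver.find_element(By.ID, "username").send_keys("pandy")
driver.find_element(By.ID, "password").send_keys("bamboo")
driver.find_element(By.ID, "login-button").click()
# ...pause here for a manual check, then rerun or tweak calls on the fly...
driver.quit()
```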

#8: Ease of Entry

Python has always been friendly to beginners thanks to its Zen, whether those beginners are programming newbies or expert engineers. This gives Python a big advantage as an automation language choice because tests need to be done quickly and easily. Nobody wants to waste time when the features are in hand and just need to be verified. Plus, many manual software testers (often without programming experience) are now starting to do automation work (by choice or by force) and benefit from Python’s low learning curve.

#9: Strength for Scalability

Even though Python is great for beginners, it’s also no toy language. Python has industrial-grade strength because its design always favors one right way to get a job done. Development can scale thanks to meaningful syntax, good structure, modularity, and a rich ecosystem of tools and packages. Command line versatility enables it to fit into any tool or workflow. The fact that Python may be slower than other languages is not an issue for feature tests because system delays (such as response times for Web pages and REST calls) are orders of magnitude slower than language-level performance hits.

#10: Popularity

Python is one of the most popular programming languages in the world today. It is consistently ranked near the top on TIOBE, Stack Overflow, and GitHub (as well as GitHut). It is a beloved choice for Web developers, infrastructure engineers, data scientists, and test automationeers alike. The Python community also powers it forward. There is no shortage of Python developers, nor is there any dearth of support online. Python is not going away anytime soon. (Python 3, that is.)

Other Languages?

The purpose of this article is to highlight what makes Python great for test automation based on its own merits. Although I strongly believe that Python is one of the best automation languages, other choices like Java, C#, and Ruby are also viable. Check out my article The Best Programming Language for Test Automation for a comparison.

 

This article was posted with the author’s permission on both Automation Panda and PyBites.