
The Airing of Grievances: Test Automation Code

More grievances! Test code ought to be developed with the same high standards as product code, but too often it gets neglected. That neglect bothers me to no end, and its consequences cost teams serious time and money. I got a lot of problems with bad test automation code, and now you’re gonna hear about it!

Copypasta

Code duplication is code cancer. It is particularly rampant in test automation because test steps are often repetitive. That’s no excuse, however, to allow it. Use better programming practices, or face my code review rejections!

Hard-Coded Configuration Data

Automation should be able to run on any environment without hassle. Don’t hard-code URLs, usernames, passwords, and other config-specific values! Read them in as inputs or config files. Nothing is more frustrating than switching environments but – surprise! – not running with the correct values.
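 
As a sketch of the alternative (the environment variable, file name, and JSON keys here are all made up for illustration), read the environment-specific values from a config file chosen at runtime:

    import json
    import os

    def load_config():
        # The config file path comes from an environment variable, so the same
        # suite can point at dev, staging, or prod without any code changes.
        config_path = os.environ.get("TEST_CONFIG", "config.json")
        with open(config_path) as config_file:
            return json.load(config_file)

    # In a test: config = load_config(); base_url = config["base_url"]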

Re-reading Inputs for Every Single Test

Config files and input values should not change during a test suite run. So, don’t repeatedly read them! That’s inefficient. Read them once, and hold them in memory in a centrally-accessible location.
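 
Here’s a minimal sketch with pytest (assuming a JSON config file): a session-scoped fixture loads the file exactly once per suite run and hands the same object to every test.

    import json
    import pytest

    @pytest.fixture(scope="session")
    def config():
        # Loaded once per test-suite run, then shared in memory by every test.
        with open("config.json") as config_file:
            return json.load(config_file)

    def test_homepage_url(config):
        # Uses the cached values; no repeated file I/O here.
        assert config["base_url"].startswith("https://")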

Leaving Temporary Files or Settings

Clean up your mess! Don’t leave temporary files or settings after a test completes. Not doing proper cleanup can harm future tests. Files can also build up and eventually run the system out of storage space. It’s a Boy Scout principle: Leave No Trace!

Incorrect Test Results

Test results need integrity to be trusted. Watch out for false positives and false negatives. If there’s a problem during a test, handle it – don’t ignore it. Don’t skip assertion calls. Business decisions are made based on test results.

Uninformative Failure Messages

Here’s one: “Failed.” That tells me nothing. Why did the test fail? Was something missing from a web page? Was there an unexpected exception? Did a service call return 500? TELL ME! Otherwise, I waste time rerunning and even debugging the test, and if the failure is intermittent, I may not be able to reproduce it at all. Tell me what the problem is right when it happens.
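 
For example (the URL and endpoint are made up), a message on the assertion turns “Failed.” into an actual explanation:

    import requests

    def test_health_endpoint():
        url = "https://example.com/health"   # hypothetical service under test
        response = requests.get(url)

        # Without the message, the report just says the assertion failed.
        # With it, the report says what was expected and what actually happened.
        assert response.status_code == 200, (
            f"GET {url} returned {response.status_code} instead of 200"
        )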

Making Assertion Calls in Automation Support Code

Every automated test case must make assertions to verify goodness or badness of system conditions. But, as a best practice, those assertion calls should be written only in the test case code – not in support code like page objects or framework-level classes. Assertions are test-specific, while support code is generic. Support code should simply give system state, while test cases use assertions to validate system state. If support code has assertion calls, then it is far less reusable and traceable.
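 
Here’s a rough sketch of that split (the page object, locator, and driver fixture are hypothetical, in the style of Selenium WebDriver): support code reports state, and the test case judges it.

    from selenium.webdriver.common.by import By

    class LoginPage:
        """Support code: returns system state, never asserts on it."""

        def __init__(self, driver):
            self.driver = driver

        def error_message(self):
            return self.driver.find_element(By.ID, "error").text

    def test_login_with_bad_password(driver):
        page = LoginPage(driver)
        # The assertion lives in the test case, where it is specific and traceable.
        assert page.error_message() == "Invalid username or password."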

Treating Boolean Methods like Assertions

This is an all-too-common rookie mistake. Boolean methods simply return a true/false value. They do not (or, at least according to the previous grievance, should not) contain assertion calls. However, I’ve seen many code reviews where programmers call a Boolean method but do nothing with the result. Whoops!
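 
The mistake and the fix, side by side (the page object method here is hypothetical):

    def test_widget_appears(page):
        # The mistake: the Boolean result is thrown away, so this line
        # "passes" whether or not the widget ever showed up.
        page.is_widget_displayed()

        # The fix: assert on the returned value, with a helpful message.
        assert page.is_widget_displayed(), "Widget was not displayed on the page"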

Burying Exceptions

Exceptions mean problems. Test automation is meant to expose problems. Don’t blindly catch all exceptions, log a message, and continue on with a test like nothing’s wrong. Let the exception rise! Most modern test frameworks will catch all exceptions at the test case level, automatically report them as failures, and safely move on to the next test. There is no need to catch exceptions yourself if they ought to abort the test, anyway – that adds a lot of unnecessary code. Catch exceptions only if they are recoverable!
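 
As a sketch (the endpoint and the one-retry policy are made up), catch only the specific, recoverable exception and let everything else rise to the framework:

    import logging
    import requests

    def fetch_orders(url):
        try:
            return requests.get(url, timeout=10).json()
        except requests.ConnectionError:
            # Recoverable: the connection blipped, so log it and retry once.
            logging.warning("Connection dropped for %s; retrying once", url)
            return requests.get(url, timeout=10).json()
        # No bare "except Exception" here: anything else rises to the test
        # framework, which reports the failure and moves on to the next test.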

No Automatic Recovery

True automation means no manual intervention. An automation framework should have built-in ways to automatically recover from common problems. For example, if a network connection breaks, automatically attempt to reconnect. If a test fails, retry it. If too many failures happen, either end the run early or pause for some time. And make sure recovery mechanisms are built into the framework, rather than appearing as copypasta throughout the codebase. The purpose of retries (especially for tests) is not to blindly run tests until they turn from red to green, but rather to overcome unexpected interruptions and also to gather more data to better understand failure reasons. Interruptions will happen – handle them when they do. And be sure to log any failures that do happen so that the reasons may be investigated.
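 
One way to build recovery into the framework instead of copypasta is a shared retry helper. Here’s a minimal sketch (the defaults and the usage example are purely illustrative):

    import functools
    import logging
    import time

    def retry(times=3, delay=2, exceptions=(Exception,)):
        """Retry a framework operation, logging every failure for later investigation."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for attempt in range(1, times + 1):
                    try:
                        return func(*args, **kwargs)
                    except exceptions as error:
                        logging.warning("%s failed (attempt %d of %d): %s",
                                        func.__name__, attempt, times, error)
                        if attempt == times:
                            raise
                        time.sleep(delay)
            return wrapper
        return decorator

    # Usage (hypothetical): @retry(times=3, delay=5, exceptions=(ConnectionError,))
    # def connect_to_service(): ...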

No Logging

Please leave a helpful trail of log messages. Tracing through automation code can be difficult after a failure, especially when you’re not the original author. Logging helps everyone quickly get to the root cause of failures. It also really helps to create reproduction scenarios. If I can copy the log from a failing test into a bug report and be done, then AMEN for an easy triage!
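 
A small sketch of what that trail can look like (the page object and its methods are hypothetical):

    import logging

    log = logging.getLogger(__name__)

    def submit_order(page, item):
        # Each step leaves a breadcrumb; after a failure, the log reads like
        # a reproduction scenario that can go straight into a bug report.
        log.info("Adding item to cart: %s", item)
        page.add_to_cart(item)
        log.info("Submitting the order")
        page.submit_order()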

Hard Waits

Ain’t nobody got time for that! Automation needs to be fast, because time is money. Forcing Thread.sleep (or its equivalent in your language of choice) shows either laziness or desperation. It unnecessarily wastes precious runtime. It also creates race conditions: what if the wait time ends before the thing is ready? Always use “smart” waits – actively and repeatedly check the system at short intervals for the thing to be ready, and abort after a healthy timeout value.
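 
In Selenium, that means WebDriverWait with expected conditions; in plain Python, a polling helper like this sketch does the same job (the timeout and interval values are just examples):

    import time

    def wait_until(condition, timeout=30, interval=0.5):
        """Poll a condition at short intervals instead of sleeping blindly."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if condition():
                return
            time.sleep(interval)
        raise TimeoutError(f"Condition not met within {timeout} seconds")

    # Usage (hypothetical page object):
    # wait_until(lambda: results_page.is_loaded(), timeout=15)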

The Airing of Grievances: Test Automation Process

Test automation is a big deal for me. It is my chosen specialty within the broad field of software. When I see things done wrong, or when people just don’t get what it’s about, it really grinds my gears. I got a lot of problems with bad test automation processes, and now you’re gonna hear about it!

Saying “They’re Just Test Scripts”

Test automation is not just a bunch of test scripts: it is a full technology stack that requires design, integration, and expertise. Test automation development is a discipline. Saying it is just a bunch of test scripts is derogatory and demeaning. It devalues the effort test automation requires, which can lead to poor work item sizings and an “us vs. them” attitude between developers and QA.

Not Applying the Same Software Development Best Practices

Test automation is software development, and all the same best practices should thus apply. Write clean, well-designed code. Use version control with code reviews. Add comments and doc. Don’t get lazy because “they’re just test scripts” – wrong attitude!

Lip Service

Don’t say automation is important but then never dedicate time or resources to work on it. Don’t leave automation as a task to complete only if there’s time after manual testing is done. Make automation a priority, or else it will never get done! I once worked on an Agile team where automation framework stories were never included in the sprint because there weren’t “enough points to go around.” So, even though this company hired me explicitly to do test automation, I always got shunted into a manual testing scramble every sprint.

Confusing Test Automation with Deployment Automation

Test automation is the automation of test scenarios (for either functional or performance tests). Deployment automation is the automation of product build distribution and installation in a software environment. They are two different concerns. Cucumber is not Ansible.

Forcing 100% Automation

Some people think that automation will totally eliminate the need for any manual testing. That’s simply not true. Automation and manual testing are complementary. Automation should handle deterministic scenarios with a worthwhile return-on-investment to automate, while manual testing should focus on exploratory testing, user experience (UX), and tests that are too complicated to automate properly. Forcing 100% automation will make teams focus on metrics instead of quality and effectiveness.

Downsizing or Eliminating QA

Test automation doesn’t reduce or eliminate the need for testers. On the contrary, test automation requires even more advanced skills than old-school manual testing. There is still a need for testing roles, whether as a dedicated position or as shared collectively by a bunch of developers. The work done by that testing role just becomes more technical.

Saying Product Code and Test Code Must Use the Same Language

For unit tests, this is true, but for above-unit tests, it is simply false. Any general-purpose programming language can be used to automate black-box tests. For example, Python tests could run against an Angular web app. A team may choose to use the same language for product and test code for simplicity, but it is not mandatory.

Not Classifying Test Types

Not all tests are the same. You can play buzzword bingo with all the different test type names: unit, integration, end-to-end, functional, performance, system, contract, exploratory, stress, limits, longevity, test-to-break, etc. Different tests need different tools or frameworks. Tests should also be written at the appropriate Testing Pyramid level.

Assuming All Tests Are Equal

Again, not all tests are the same, even within the same test type. Tests vary in development time, runtime, and maintenance time. It’s not accurate to compare individuals or teams merely on test numbers.

Not Prioritizing Tests to Automate

There’s never enough time to automate everything. Pick the most important ones to automate first – typically the core, highest-priority features. Don’t dilly-dally on unimportant tests.

Not Running Tests Regularly

Automated tests need to run at least once daily, if not continuously in Continuous Integration. Otherwise, the return-on-investment is just too low to justify the automation work. I once worked on a QA team that would run an automated test only once or twice during a 2-year release! How wasteful.

HP Quality Center / ALM

This tool f*&@$#! sucks. Don’t use it for automated tests. Pocket the money and just develop a good codebase with decent doc in the code, and rely upon other dashboards (like Jenkins or Kibana) for test reporting.

The Airing of Grievances: Version Control

Let the Airing of Grievances series continue: I got a lot of problems with version control misuse, and now you’re gonna hear about it!

Not Using Version Control

You’ve got to be some special kind of stupid to not use a version control system. Software is just too dang fragile to go without protection.

Using Outdated Version Control Systems

Still using CVS like it’s 1999? How about Rational ClearCase? If so, it’s time to upgrade. Git seems to be the go-to standard these days, though Subversion still has a place for projects where centralized control is better than distributed control. My opinion? Just use Git.

Gigantic Commits

Code changes ought to be incremental and small, and they ought to be committed in frequent intervals. However, some people like to make one giant, killer commit at a time with bajillions of lines of changes. Nobody wants to review those pull requests. Break things up into smaller pieces!

No Comments with Commits

Look back in your version control history to see how many messages look like this:

  • .
  • updated
  • fixed
  • done

Really? How is this helpful? Please give a meaningful message. It doesn’t need to be long – a one-liner is fine. But describe what makes the commit significant. Otherwise, tracing through history is dang near impossible!

Never Committing Changes

For whatever reason, some people will check out code, make changes, run locally, and NEVER commit the changes back to the repository. This happened frequently at a previous job with offshore test automation contractors. They would add new tests and fix bugs but never share them back with the team. They’d even email code back and forth, rather than commit! I just can’t even.

Committing Code that Doesn’t Even Compile

[Image: Jackie Chan “WTF” meme]

Never Pushing Changes

In Git, there is a difference between “commit” (which commits the change locally) and “push” (which pushes all local commits to the remote repository). Some people never push. Then, they wonder why they can’t open pull requests. Or, they lose their code after a Blue Screen wipes out their machine. Make it a habit to push at the end of the workday, whether it’s needed or not.

No Branching

Branches are like swim lanes: every contributor (or group) can develop code without interfering with others. They also make concurrent release work possible. Using only one branch doesn’t “simplify” development – it just causes an integration mess. For example, I once worked in an organization where the QA architect insisted that test automation should not use multiple branches and, instead, have if/else conditions for differences between release branches. (I fought tooth-and-nail against it and lost.) Code duplication became rampant. Always adopt a good branching strategy, even for test automation. Gitflow is a good example workflow.

Stale Branches

Stale branches mean old code and merge conflicts. Nobody wants those. Keep your local branches up-to-date.

Not Deleting Feature Branches

In Git, feature branches are meant to have a short lifespan: you create one to develop a new feature or fix a bug or whatever, and then you delete it after the pull request is completed. However, some people don’t delete those old branches. Over time on a team, those old branches really add up and pollute the repository. Delete them as a common courtesy.

Botched Merge Conflict Resolution

Merge conflicts themselves are not the grievance. Nobody likes them, but they are inevitable on any team. However, botched merge conflict resolution is a HUGE grievance. Would you be mad if you spent a lot of time fixing a problem, only to have some other schmuck accidentally undo your code change with an overwrite? Please be careful when merging. If you aren’t sure that your merge will be good, there’s no shame in asking for help.

Not Re-compiling and Re-testing after Merging

What’s worse than messing up a merge? Committing the changes without testing them first! Merges are risky, and mistakes happen – but that’s why it is imperative to make sure everything is still good after a merge. I’ve seen people blindly post pull request updates after resolving merge conflicts, only to have the build fail with compiler errors. Don’t do that.

Granting Everyone Full Repository Permissions

While everyone on the team should contribute, not everyone should contribute in the same ways. If there are no security policies set up, then users will do dangerous things, whether accidentally or deliberately. They could circumvent the review process. They could rename things unexpectedly. One time, I saw a guy delete the remote master branch! (Thank goodness we recovered it quickly.) Put permissions into place before bad things happen.

The Airing of Grievances: Software Development

Let the Airing of Grievances series begin: I got a lot of problems with bad software development practices, and now you’re gonna hear about it!

Duplicate Code

Code duplication is code cancer. It spreads unmaintainable code snippets throughout the code base, making refactoring a nightmare and potentially spreading bugs. Instead, write a helper function. Add parameters to the method signature. Leverage a design pattern. I don’t care if it’s test automation or any other code – Don’t repeat yourself!
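 
For instance (the locators and field IDs are made up), three lines of copypasta in every test become one parameterized helper:

    from selenium.webdriver.common.by import By

    # Before: these three lines were pasted into every single test.
    # After: one helper with parameters, reused everywhere.
    def log_in(driver, username, password):
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()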

Typos

Typos are a hallmark of carelessness. We all make them from time to time, but repeated appearances devalue craftsmanship. Plus, they can be really confusing in code that’s already confusing enough! That’s why I reject code reviews for typos.

Global Non-Constant Variables

Globals are potentially okay if their values are constant. Otherwise, just no. Please no. Absolutely no for parallel or concurrent programming. The side effects! The side effects!

Using Literal Values Instead of Constants

Literal values get buried so easily under lines and lines of code. Hunting down literals when they must be changed can be a terrible pain in the neck. Just put constants in a common place.
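 
For example, a shared constants module (the names and values here are illustrative) gives every literal one home:

    # constants.py - one common place for values that would otherwise be
    # buried as literals all over the codebase.
    DEFAULT_TIMEOUT_SECONDS = 30
    MAX_LOGIN_ATTEMPTS = 3
    SUPPORT_EMAIL = "support@example.com"

    # Elsewhere:
    # from constants import DEFAULT_TIMEOUT_SECONDS
    # wait_until(page_is_loaded, timeout=DEFAULT_TIMEOUT_SECONDS)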

Avoiding Dependency Management

Dependency managers like Maven, NuGet, and pip automatically download and link packages. There’s no need to download packages yourself and struggle to set your PATHs correctly. There’s also no need to copy their open source code directly into your project. I’ve seen it happen! Just use a dependable dependency manager.

Breaking Established Design Patterns

When working within a well-defined framework, it is often better to follow the established design patterns than to hack your own way of doing things into it. Hacking will be difficult and prone to breakage with framework updates. If the framework is deficient in some way, then make it better instead of hacking around it.

No Comments or Documentation

Please write something. Even “self-documenting” code can use some context. Give a one-line comment for every “paragraph” of code to convey intent. Use standard doc formats like Javadoc or docstrings that tie into doc tools. Leave a root-level README file (and write it in Markdown) for the project. Nobody will know your code the way you think they should.
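 
As a small sketch of what “something” can be (the function and URL shape are hypothetical), a standard docstring conveys intent, parameters, and the return value in a format that doc tools can pick up:

    def parse_order_id(url):
        """Extract the numeric order ID from an order-details URL.

        :param url: a full URL such as "https://shop.example.com/orders/12345"
        :return: the order ID as an int
        """
        return int(url.rstrip("/").rsplit("/", 1)[-1])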

Not Using Version Control

With all the risks of bugs and typos in an inherently fragile development process, you would think people would always want code protection, but I guess some just like to live on the edge. A sense of danger must give them a thrill. Or, perhaps they don’t like dwelling on things of the past. That is, until they break something and can’t figure it out, or their machine’s hard drive crashes with no backup. Give that person a Darwin Award.

No Unit Tests

If no version control wasn’t enough, let’s just never test our code and make QA deal with all the problems! Seriously, though, how can you sleep at night without unit tests? If I were the boss, I’d fire developers for not writing unit tests.

Permitting Tests to Fail

Whenever tests fail, people should be on point immediately to open bug reports, find the root cause, and fix the defect. Tests should not be allowed to fail repeatedly day after day without action. At the very least, flag failures in the test report as triaged. Complacency degrades quality.

Blue Balls

Jenkins uses blue balls for passing builds because its creator is Japanese, and in Japan blue traditionally signals “go.” I love Japanese culture, but my passing tests need to be green. Get the Green Balls plugin. Nobody likes blue balls.

Skipping Code Reviews

Ain’t nobody got time for that? Ain’t nobody got time when your code done BROKE cuz nobody caught the problem in review! Take the time for code reviews. Teams should also have a policy for quick turnaround times.

Not Re-testing Code After Changes

People change code all the time. But people don’t always re-test code after changes are made. Why not? That’s how problems happen. I’ve seen people post updated code to a PR that wouldn’t even compile. Please, check yourself before you wreck yourself.

“It Works on My Machine”

That’s bullfeathers, and you know it! Nobody cares if it works on your machine if it doesn’t work on other machines. Do the right thing and go help the person, instead of blowing them off with this lame excuse.

Arrogance

Nobody wants to work with a condescending know-it-all. Don’t be that person.

The Airing of Grievances

It’s an open secret that bad practices are rampant in the software industry. Unreadable code. Perpetually failing tests. Pointless, boring meetings. Buzzword bingo. No discipline is perfect, but as a purist, bad practices really bother me. Some days, I just feel like this:


The struggle is real.

There are times when we all need to vent. Well, here’s mine: it’s the Airing of Grievances! I got a lot of problems with bad practices in the software industry, and now you’re gonna hear about it! So, put up that Festivus pole and serve that meatloaf on lettuce, because the Automation Panda has a whole series about how not to develop software!

 

The Airing of Grievances

  1. Software Development
  2. Version Control
  3. Test Automation Process
  4. Test Automation Code
  5. Selenium WebDriver
  6. Agile
  7. Behavior-Driven Development

 

Use this series as a tongue-in-cheek guide for better practices. If you do, then you just might have a Festivus miracle!


When things work right and it’s not a big deal.

Software Testing Lessons from Luigi’s Mansion

How can lessons from Luigi’s Mansion apply to software testing and automation?

Luigi’s Mansion is a popular Nintendo video game series. It’s basically Ghostbusters in the Super Mario universe: Luigi must use a special vacuum cleaner to rid haunted mansions of the ghosts within. Along the way, Luigi also solves puzzles, collects money, and even rescues a few friends. I played the original Luigi’s Mansion game for the Nintendo GameCube when I was a teenager, and I recently beat the sequel, Luigi’s Mansion: Dark Moon, for the Nintendo 3DS. They were both quite fun! And there are some lessons we can apply from Luigi’s Mansion to software testing and automation.

#1: Exploratory Testing is Good

The mansions are huge – Luigi must explore every nook and cranny (often in the dark) to spook ghosts out of their hiding places. There are also secrets and treasure hiding in plain sight everywhere. Players can easily miss ghosts and gold alike if they don’t take their time to explore the mansions thoroughly. The same is true with testing: engineers can easily miss bugs if they overlook details. Exploratory testing lets engineers freely explore the product under test to uncover quality issues that wouldn’t turn up through rote test procedures.

#2: Expect the Unexpected

Ghosts can pop out from anywhere to scare Luigi. They also can create quite a mess of the mansion – blocking rooms, stealing items, and even locking people into paintings! Software testing is full of unexpected problems, too. Bugs happen. Environments go down. Network connections break. Even test automation code can have bugs. Engineers must be prepared for any emergency regardless of origin. Software development and testing is about solving problems, not about blame-games.

#3: Don’t Give Up!

Getting stuck somewhere in the mansion can be frustrating. Some puzzles are small, while others may span multiple rooms. Sometimes, a player may need to backtrack through every room and vacuum every square inch to uncover a new hint. Determination nevertheless pays off when puzzles get solved. Software engineers must likewise never give up. Failures can be incredibly complex to identify, reproduce, and resolve. Test automation can become its own nightmare, too. However, there is always a solution for those tenacious (or even hardheaded) enough to find it.

 

Want to see what software testing lessons can be learned from other games? Check out Gotta Catch ’em All! for Pokémon!

A Cucumber Recipe

Scenario: Dinnertime
Given I have cucumbers
When I prepare my favorite cucumber dish
Then everyone at the dinner table is happy

We all know “Cucumber” is the buzzword for BDD, but the vast majority of the population knows it only as a vegetable. My wife and I prepare a special Chinese cucumber dish about once a week that’s fresh, tasty, and nutritious. It is very easy to prepare, since it requires no cooking. It may be served chilled or at room temperature, which makes it great for warm weather. As a change of pace for my blog, I’d like to share my cucumber recipe. C’mon, a panda’s gotta eat!

Ingredients

I strongly recommend against substitutions. This recipe is best with authentic ingredients. Specifically, do not substitute the cucumber variety – “regular” cucumbers won’t work. If you cannot find the required ingredients at a regular grocery store, Asian supermarkets will carry them.


These are Persian cucumbers. They are about 6 inches long and 1 to 1.5 inches in diameter.


These are all the ingredients that you need!

Instructions

Prep time: 15 minutes, tops.

The best way to prepare this dish is to use a Chinese cleaver (also called the “Chinese chef knife”). After washing the cucumbers, chop off the ends. Then, one at a time, smash the cucumber with the flat side of the knife. Literally swing your arm as if hammering a nail. The weight of the blade will smash the cucumber into about four strips lengthwise. Beware that seeds and guts may fly out. Then, chop the pieces into one-inch segments. (If you don’t have a Chinese cleaver, you can smash the cucumbers with a rolling pin. Other knives simply don’t have the weight to break up the cucumbers effectively.)


A cucumber, smashed and chopped with the Chinese cleaver.

Combine the cucumber pieces and all of the other ingredients (except the sesame seeds) into a flat-bottomed dish or wide bowl. Stir everything together and let rest. A flat bottom allows the cucumbers to soak up the sauce; regular bowls will leave the cucumbers on top somewhat flavorless. Top with sesame seeds for garnish. This dish may be served after a few minutes of resting, or it may be chilled and served later. (Please note that the texture of the cucumbers is not as good after more than a day in the refrigerator.)


The finished dish, ready to eat! Sesame seeds and garlic powder are sprinkled on top. Also, notice how the glass container spreads the cucumbers out so that they can soak in the sauce.

Fun Facts

 

Bon appetit!

Gotta Catch ’em All!

How can lessons from Pokémon apply to software testing and automation?

It’s no secret that I’m a lifelong Nintendo fanboy, and one of my favorite game franchises is Pokémon. Since Christmas, I have been playing the latest installment in the series, Pokémon Moon. The basic gameplay in all Pokémon games is to capture “pocket monsters” in the wild and train them for competitive battles. The main quest is to become the Pokémon League Champion by defeating the strongest trainers in the land. However, as any child of the ’90s will recall, the other major goal of the game is to catch all species of Pokémon. With 300 unique species in the latest installment, that’s no small feat. I can proudly say that I caught ’em all in Pokémon Moon.

It may seem strange to talk about video games on a professional blog, but I see five major parallels between catching Pokémon and my career in software quality and automation.

#1: As QA, We Gotta Catch ’em All

It is the quality engineer’s job to find and resolve all software problems: bugs, defects, design flaws, bad code check-ins, test failures, environment instabilities, deployment hiccups, and even automation crashes. We get paid to make sure things are good. And if we don’t catch a problem, then we haven’t done our jobs right.

#2: Coverage is Key to Success

One of the reasons to catch new Pokémon is not just to make the game’s Pokémon Professor happy for “scientific research,” but moreover to find stronger monsters to assist with the quest. Likewise, more test coverage means more problems discovered, which assists our quest to guarantee product quality. Our goal should be to achieve as close to complete test coverage as reasonably possible. When necessary, we should use risk-based approaches to smartly minimize test gaps as well. And our tests should be strong enough to legitimately exercise the features under test.

#3: Automating Tests Takes Time

As my trainer passport shows, it took me over 100 hours of gameplay to catch all 300 Pokémon. That’s a serious time investment. Test automation is the same way: it takes time to automate tests properly. It requires a robust, scalable, and extendable framework upon which to build test cases. Test automation is a software product in its own right that requires the same best practices and discipline as the product it tests. Software teams must allocate resources to its development and maintenance. It’s not as simple as just “writing test scripts.” However, when done right, the investment pays off.

#4: Not All Tests are Equal

Test metrics like “X passed and Y failed” or “N% of tests automated” can be very misleading because they do not account for differences between tests. Some tests have more coverage than others. Some require more time to run. Some require more time to automate. For example, 100 tests for feature A may take a day to automate and run in 3 minutes, while 5 tests for feature B may take a full week to automate and run in 3 hours. Yet, feature A may still be more important. All Pokémon are likewise not created equal. Some are simply given to you (like Rowlet, Litten, and Popplio), while others take hours of searching to find (like Castform) or are simply too tough to capture without a hard fight (like the Ultra Beasts). Be mindful of test differences for planning, execution, and reporting.

#5: Never Leave Work Incomplete

As a completionist, I would not consider my Pokémon adventure complete without a full Pokédex. There is great satisfaction in accomplishing the full measure of a goal. The same thing goes for testing and automation: my job is not done until all tests are automated and all lines are green. At times, it may be easy to give up because that “Ultra Beast” just won’t cooperate, but it’s the job to catch it. Always complete the ‘dex; always complete the job.

Pokédex Proof

Here’s the proof that I caught ’em all:

Complete Aloladex

The “Pokédex” is a device that indexes all of the Pokémon captured by a trainer. Mine is 100% complete!

Trainer Stamp

Here’s the stamp in my trainer passport to prove it!

 

If you see any more parallels between Pokémon and QA, please add them to the comments section below!