
10 Gotchas for Automation Code Reviews

Lately, I’ve been doing lots of code reviews. I probably spend about an hour every workday handling reviews for my team, both as a reviewer and as an author. All of the reviews cover end-to-end test automation exclusively: new tests, fixes to existing tests, configuration changes, and framework updates. I adamantly believe that test automation code should undergo the same review scrutiny as the product code it tests, because test automation is a product. Thus, all of the same best practices (like the guides here and here) should be applied. I also look for problems that, anecdotally, seem to appear more frequently in test automation than in other software domains. Below is a countdown of my “Top 10 Gotchas”: the big things I emphasize in test automation code reviews, in addition to the standard review checklist items.

#10: No Proof of Success

“Trust, but verify,” as Ronald Reagan would say. Tests need to run successfully in order to pass review, and proof of success (such as a log or a screenshot) must be attached to the review. In the best case, this means something green (or perhaps blue for Jenkins). However, if the product under test is not ready or has a bug, this could also mean a successful failure with proof that the critical new sections of the code were exercised. Tests should also be run in the appropriate environments, to avoid the “it-ran-fine-on-my-machine” excuse later.

#9: Typos and Bad Formatting

My previous post, Should I Reject a Code Review for Typos?, belabored this point. Typos and bad formatting reflect carelessness, cause frustration, and damage reputation. They are especially bad for Behavior-Driven Development frameworks.

#8: Hard-Coded Values

Hard-coded values often indicate hasty development. Sometimes, they aren’t a big problem, but they can cripple an automation code base’s flexibility. I always ask the following questions when I see a hard-coded value (see the sketch after the list):

  • Should this be a shared constant?
  • Should this be a parameterized value for the method/function/step using it?
  • Should this be passed into the test as an external input (such as from a config file or the command line)?
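
For instance, here is a minimal sketch of pulling a hard-coded URL out of a test and reading it from a config file instead (the file name, key, and browser fixture are hypothetical; the fixture is assumed to yield a Selenium WebDriver):

    # Before: the environment URL is hard-coded inside the test.
    def test_login_page_loads_hardcoded(browser):
        browser.get("https://staging.example.com/login")
        assert "Log In" in browser.title

    # After: the URL comes from an external config file, so the same test
    # can run against any environment without a code change.
    import json

    def load_config(path="config.json"):
        # config.json might contain: {"base_url": "https://staging.example.com"}
        with open(path) as f:
            return json.load(f)

    def test_login_page_loads(browser):
        base_url = load_config()["base_url"]
        browser.get(base_url + "/login")
        assert "Log In" in browser.title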

#7: Incorrect Test Coverage

It is surprisingly common to see an automated test that doesn’t actually cover the intended test steps. A step from the test procedure may be missing, or an assertion may yield a false positive. Sometimes, assertions may not even be performed! When reviewing tests, keep the original test procedure handy, and watch out for missing coverage.
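
One classic false positive (a hypothetical illustration) is an assertion that can never fail because of a stray comma:

    import requests

    def test_get_user_status():
        response = requests.get("https://api.example.com/users/42")

        # False positive: this asserts that status_code is truthy and treats 200
        # as the assertion message, so a 404 or 500 response would still pass.
        assert response.status_code, 200

        # Correct: compare against the value the test procedure actually expects.
        assert response.status_code == 200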

#6: Inadequate Documentation

Documentation is vital for good testing and good maintenance. When a test fails, the documentation it provides (both in the logs it prints and in its own code) significantly assists triage. Automated test cases should read like test procedures. This is one reason why self-documenting behavior-driven test frameworks are so popular. Even without BDD, test automation should be flush with comments and self-documenting identifiers. If I cannot understand a test by skimming its code in a code review, then I ask questions, and when the author provides answers, I ask them to add those answers as comments in the code.
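
As a rough sketch (the page objects, fixtures, and messages here are hypothetical), a test that reads like its procedure and logs each step is far easier to triage when it fails:

    import logging

    log = logging.getLogger(__name__)

    def test_standard_user_can_check_out(browser, standard_user):
        """A standard user adds one product to the cart and completes checkout."""
        log.info("Step 1: Log in as a standard user")
        login_page = LoginPage(browser)
        login_page.log_in(standard_user)

        log.info("Step 2: Add the first product to the cart")
        catalog_page = CatalogPage(browser)
        catalog_page.add_first_product_to_cart()

        log.info("Step 3: Complete checkout and verify the confirmation message")
        checkout_page = catalog_page.go_to_checkout()
        assert checkout_page.confirmation_message() == "Thank you for your order!"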

#5: Poor Code Placement

Automation projects tend to grow fast. Along with new tests, new shared code like page objects and data models is added all the time. Maintaining a good, organized structure is necessary for project scalability and teamwork. Test cases should be organized by feature area. Common code should be abstracted from test cases and put into shared libraries. Framework-level code for things like inputs and logging should be separated from test-level code. If code is put in the wrong place, it could be difficult to find or reuse, and it could create a dependency nightmare. For example, non-web tests should not have a dependency on Selenium WebDriver. Make sure new code is put in the right place.
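
One possible layout (purely illustrative; the directory names are assumptions, not a prescription) that keeps test, shared, and framework code apart:

    tests/
        login/            # test cases grouped by feature area
        checkout/
    pages/                # shared page objects, no test logic
    models/               # shared data models
    framework/
        config.py         # input handling and environment config
        logging_util.py   # logging setup
        web.py            # the only module that imports Selenium WebDriver

With a structure like this, non-web tests never pick up the Selenium dependency, and reviewers can tell at a glance whether new code landed in the right layer.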

#4: Bad Config Changes

Even the most seemingly innocuous configuration tweak can have huge impacts:

  • A username change can cause tests to abort setup.
  • A bad URL can direct a test to the wrong site.
  • Committing local config files to version control can cause other teammates’ local projects to fail to build.
  • Changing test input values may invalidate test runs.
  • One time, I brought down a whole continuous integration pipeline by removing one dependency.

As a general rule, submit any config changes in a separate code review from other changes, and provide a thorough explanation to the reviewers for why the change is needed. Any time I see unusual config changes, I always call them out.

#3: Framework Hacks

A framework is meant to help engineers automate tests, but sometimes it can also be a hindrance. Rather than improve the framework’s design, many engineers will try to hack around it. Sometimes, the framework may already provide the desired feature! I’ve seen this most often with dependency injection: people just don’t know how to use it. Hacks should be avoided because test automation projects need a strong overall design strategy.
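
For example, in a pytest-based project (pytest is just one possibility, not necessarily the framework in question), the fixture mechanism already provides dependency injection, so there is no need to hand-build a WebDriver inside every test:

    import pytest
    from selenium import webdriver

    @pytest.fixture
    def browser():
        # The framework owns WebDriver construction and teardown.
        driver = webdriver.Chrome()
        yield driver
        driver.quit()

    def test_home_page_title(browser):
        # The test simply declares what it needs; pytest injects the fixture.
        browser.get("https://example.com")
        assert "Example" in browser.title

The hack to avoid is each test constructing and managing its own driver, which duplicates setup and teardown and bypasses the framework’s configuration.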

#2: Brittleness

Test automation must be robust enough to handle bumps in the road. However, test logic is not always written to handle slightly unexpected cases. Here are a few examples of brittleness to watch out for in review (a short sketch follows the list):

  • Do test cases have adequate cleanup routines, even when they crash?
  • Are all exceptions handled properly, even unexpected ones?
  • Is Selenium WebDriver always disposed?
  • Will SSH connections be automatically reconnected if dropped?
  • Are XPaths too loose or too strict?
  • Is a REST API response code of 201 just as good as 200?
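
Two hedged sketches of what more robust handling might look like (the URLs and payloads are illustrative):

    import requests
    from selenium import webdriver

    def test_create_order():
        # Accept any success status the service may legitimately return,
        # rather than pinning the test to exactly 200.
        response = requests.post("https://api.example.com/orders", json={"item": 1})
        assert response.status_code in (200, 201)

    def test_checkout_page():
        driver = webdriver.Chrome()
        try:
            driver.get("https://shop.example.com/checkout")
            # ... test steps and assertions ...
        finally:
            # Cleanup runs even if a step above raises, so no browser is leaked.
            driver.quit()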

#1: Duplication

Duplication is the #1 problem in test automation. I wrote a whole article about it: Why is Automation Full of Duplicate Code? Many testing operations are inherently repetitive. To save development time, engineers sometimes just copy-paste code blocks rather than seek out existing methods or add new helpers. It can also be difficult to find reusable parts that meet immediate needs in a large code base. Nevertheless, good code reviews should catch code redundancy and suggest better solutions.
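
As a simple sketch (the element locators and names are hypothetical), the login steps copy-pasted at the top of several tests can be collapsed into one shared helper:

    from selenium.webdriver.common.by import By

    def log_in(driver, username, password):
        """Shared login helper, replacing the same three lines copy-pasted into many tests."""
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()

A reviewer who spots the same block in a new test can then point the author at the helper (or ask for one to be added) instead of approving another copy.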

 

Please let me know in the comments section if there are any other specific things you look for when reviewing test automation code!

Should I Reject a Code Review for Typos?

TL;DR: Yes!

Code reviews are essential to good software development. In a code review, peers read each other’s code and vote to approve or reject the changes before committing them to the main code base. Code reviews provide a platform for constructive feedback, accountability, and even learning opportunities. To make them effective, a team must establish best practices – not only for the code itself, but also for the review process. Good guides can be found here, here, and here for reference. Some rules, such as “no personal attacks” and “focus only on the changes at hand,” are universally agreeable. Other rules, however, can cause controversy.

One such controversial rule is the title of this blog post: Should a code review be rejected for typos? I use the word “typos” here to broadly include any sort of typographical shortcoming: misspellings, incorrect grammar, poor formatting, and even improper spacing. For example, a variable named somehting would be a typo.

There are valid reasons not to reject code reviews over typos. The code itself will still compile and run, so long as the typo’ed identifiers are used consistently. Requiring corrections takes extra time, and in business, time costs money. Authors may also take offense, especially if English (or the language of dialogue) is their second language.

Nevertheless, I strongly believe that yes, code reviews should be rejected for typos. Below are five reasons why:

It corrects carelessness. Typos mean carelessness. Mistakes are bound to happen, but pervasive typos indicate a deeper, systemic problem. Reviews are a measure of accountability between peer engineers to prevent carelessness. Being tough on small things encourages engineers to straighten up on all things.

It prevents future frustration. People expect things to be spelled and formatted the right way. Compiler error messages are often cryptic and may not intuitively point to typos. Imagine trying to call the do_stuff method, only to discover after an hour of hair-pulling, fist-banging, and cursing at the screen that the original method was named do_stuf. Frustration is especially acute when BDD Gherkin steps have typos. Allowing typos to be committed to the code base increases the chances of this type of frustration.

It improves readability. Typos and poor formatting are distracting. They make it harder to read code. For example, I remember once reading a Perl source file in which every single line had an arbitrary number of indent spaces. It was impossible to visually align function bodies, if statements, and loops. I had to reformat it before I could work on it.

It boosts confidence in the code and in the team. Imagine if you saw typos in this blog post. Each typo you find would lower your confidence in my writing skills. The same is true for software: as a reviewer, when I see typos, I lose confidence in the quality of the code and in the author’s skills because I see carelessness. Eliminating typos not only makes code better, but it also makes people think better of the code.

It reinforces high standards. Quality is not limited to functionality. Poorly written code may run correctly, but it will not be maintainable. Upholding high standards in code review will result in overall better code output. Letting small things slip through will, over time, atrophy a code base.

If “reject” sounds like a harsh word, it may be beneficial for a review process to indicate the severity of feedback points. For example, broken code could be “critical”, while typos could be “minor.” Nevertheless, typos should not be committed to the main code base, and thus their code reviews should be rejected.