Teamwork

Purist vs. Pragmatist

There’s often more than one way to solve a problem, and engineers tend to be pretty opinionated about solutions. Whenever I see disagreements in design, I typically notice two competing stances: the purist and the pragmatist. Identifying these approaches helps me understand how others think and fosters healthier team collaboration.

A purist is one who focuses primarily on the correctness of a solution. They typically seek a systematic, comprehensive, and verifiable design. A pragmatist, however, favors practical, expedient solutions. They are okay with a solution so long as it works.

The table below shows how these two mindsets may differ:

Purist | Pragmatist
Focus more on what is correct | Focus more on what is expedient
Spend more effort on design and the “big picture” | Spend more effort on implementation
Very picky in code review | Less picky in code review
Interested more in white-box code quality | Interested more in black-box code quality
Favors strong design patterns, even if they are complicated | Favors simpler design patterns, even if they have less-than-desirable consequences
Prefers to redesign rather than hack | Prefers to hack rather than redesign
Good at handling long-term problems | Good at handling short-term problems
Views software development as an art as well as an engineering practice | Views development primarily as an engineering practice
Aligns well with academia | Aligns well with business
In test automation, better for framework development | In test automation, better for test case development

These descriptions are not absolute: many people fall somewhere between the poles of purist and pragmatist. However, most people tend to exhibit stronger tendencies in one direction.

Personally, I tend to be a purist. If I need to get a job done, I feel ashamed if I cannot afford the time to do it properly. However, I often find myself working with pragmatists. That’s not a bad thing – I recognize the value in each perspective. There is much to learn from both sides!

Winning Support for BDD

Adopting behavior-driven development practices can greatly improve software quality and productivity, but like any big change, it will have opponents along with supporters. I’ve met resistance from all roles: testers, developers, product owners, and managers. And some people can be stubborn. As with any proposal, the best way to win support is not just to describe the benefits but to demonstrate them. Below are five major ways to demonstrate the benefits of BDD.

Make it a Refinement, not an Overhaul

I remember talking with a scrum master one time about challenges his team faced with testing and automation. The user stories his team wrote were a mess: they may or may not have had acceptance criteria, and the product owner would often ask for features to be scrapped or redone after a sprint or two. The team had basically given up on automated testing due to feature flux. Naturally, I proposed BDD to him, suggesting that it could help drive better features through formalization. However, this scrum master balked at the idea: “My team is stretched so thin, there’s no way we can overhaul our process right now.”

Clearly, the team had a serious problem, but they weren’t willing to try any solution deemed too “big.” The scrum master’s perception was that BDD would be a disruptive change that would hurt them more than help them. In cases like this, it is best to present BDD as a refinement of Agile, and not an overhaul of it. Agile says user stories should have acceptance criteria; BDD says acceptance criteria should be formalized. Agile says that the definition of done should include test automation; BDD says automation is a natural extension of the acceptance criteria. There’s nothing in Agile that BDD undoes, and there are shortcomings in Agile that BDD solves.

Write Good Gherkin

There is a big difference between Gherkin and good Gherkin. Anyone can add BDD buzzwords to existing test procedures, but effective BDD requires a paradigm shift. Unfortunately, bad Gherkin can ruin many of the benefits BDD can bring. For example, imperative steps will frustrate product owners, and a mixed point of view will confuse testers. Nothing will ever be truly perfect, but it is important to strive for good Gherkin from the start, especially since the first behavior scenarios will often serve as examples for future ones.
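To illustrate the difference, here is a minimal sketch of the same login behavior written both ways (the wording and steps are illustrative, not from any real project):

```gherkin
Feature: Login

  # Imperative: steps spell out UI mechanics, which bores product owners
  # and bloats scenarios.
  Scenario: Successful login (imperative)
    Given the login page is displayed
    When the user types "pandy" into the username field
    And the user types "P@ssw0rd" into the password field
    And the user clicks the login button
    Then the home page is displayed

  # Declarative: each step states intent, keeping the scenario short and
  # readable for the whole team.
  Scenario: Successful login (declarative)
    Given the login page is displayed
    When the user logs in with valid credentials
    Then the home page is displayed
```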

Start the Automation Snowball

BDD and automation go together like peas and carrots. Not only can test automation shift left (since Gherkin scenarios are both acceptance criteria and tests), but steps can be implemented once and reused by any scenario. When the first BDD scenarios are written, all steps are obviously new. As sprints pass, though, many common steps will likely be reused. I’ve even written new scenarios without adding any new steps!
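To sketch how that reuse works in code, here is a behave-style step definition in Python (the step text, locators, and context attributes are illustrative assumptions, not from any real project):

```python
from behave import given
from selenium.webdriver.common.by import By

# Implemented once, this step can serve every scenario that needs a
# logged-in user, for any username a scenario supplies.
@given('the user "{username}" is logged in')
def step_user_logged_in(context, username):
    # 'context.browser' and 'context.base_url' are assumed to be set up
    # in behave's environment.py hooks.
    context.browser.get(context.base_url + "/login")
    context.browser.find_element(By.ID, "username").send_keys(username)
    context.browser.find_element(By.ID, "password").send_keys("test-password")
    context.browser.find_element(By.ID, "login-button").click()
```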

Test automation is often the last thing to be done for a story, if it’s even reached at all. The inherent step reusability helps BDD automation get done sooner. It may take a while to build up useful, reusable steps in the code base, but they will cause an “automation snowball” once they are there. Imagine telling your team that the test automation is already done once a scenario is written in Gherkin!

Take Baby Steps

Rome wasn’t built in a day, and a mature BDD process won’t be, either. People take time to adjust to new paradigms. Start out slow, and do it right. Train the team to write good Gherkin. Try a few stories in one sprint, rather than taking on the whole backlog. For a product-owner-led approach, start by Gherkinizing acceptance criteria for a sprint or two before attempting any automation. Alternatively, for a test-led approach, build the automation framework first, and then shift scenario writing left to the developers, and then to the product owners, once the snowball gets bigger.

It’s okay if things aren’t perfect at first. Learn the lessons and iterate for improvement. Take baby steps!

Highlight how Everyone Wins

BDD is truly a win/win for everyone. It’s not a way to shuffle responsibilities or push around busywork; it’s a way to make a team more interdependent. Each role in the Three Amigos is empowered to do the right things, with support from the others in lockstep. Consider how BDD process changes help each role work together better:

Role | New Responsibility | Interdependent Benefit
Product Owner | Learn to express requirements in a more formalized, slightly techy way | Better assurance that features will be what they actually want, be working correctly, and be protected against future regressions
Developer | Contribute more to grooming and test planning | Less likely to develop the wrong thing or to be “held up” by testing
Tester | Build and learn a new automation framework | Automation will snowball, allowing them to meet sprint commitments and focus extra time on exploratory testing
Everyone | Another meeting or two | Better communication and fewer problems

 

Nobody on an Agile team can rightly say, “BDD isn’t useful to me.” Software quality is everyone’s responsibility, and BDD is a great way to improve it.

10 Gotchas for Automation Code Reviews

Lately, I’ve been doing lots of code reviews. I probably spend about an hour every work day handling reviews for my team, both as a reviewer and an author. All of the reviews exclusively cover end-to-end test automation: new tests, old fixes, config changes, and framework updates. I adamantly believe that test automation code should undergo the same scrutiny of review as the product code it tests, because test automation is a product. Thus, all of the same best practices (like the guides here and here) should be applied. Furthermore, I also look for problems that, anecdotally, seem to appear more frequently in test automation than in other software domains. Below is a countdown of my “Top 10 Gotchas”. They are the big things I emphasize in test automation code reviews, in addition to the standard review checklist items.

#10: No Proof of Success

“Trust, but verify,” as Ronald Reagan would say. Tests need to run successfully in order to pass review, and proof of success (such as a log or a screenshot) must be attached to the review. In the best case, this means something green (or perhaps blue for Jenkins). However, if the product under test is not ready or has a bug, this could also mean a successful failure, with proof that the critical new sections of the code were exercised. Tests should also be run in the appropriate environments, to avoid the “it-ran-fine-on-my-machine” excuse later.

#9: Typos and Bad Formatting

My previous post, Should I Reject a Code Review for Typos?, belabored this point. Typos and bad formatting reflect carelessness, cause frustration, and damage reputation. They are especially bad for Behavior-Driven Development frameworks.

#8: Hard-Coded Values

Hard-coded values often indicate hasty development. Sometimes, they aren’t a big problem, but they can cripple an automation code base’s flexibility. Whenever I see a hard-coded value, I ask the following questions (a refactoring sketch follows the list):

  • Should this be a shared constant?
  • Should this be a parameterized value for the method/function/step using it?
  • Should this be passed into the test as an external input (such as from a config file or the command line)?
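Those questions often lead to a small refactoring like this sketch (the constant name, environment variable, and default value are all hypothetical):

```python
import os

# Shared constant instead of a magic number scattered through the tests.
DEFAULT_TIMEOUT_SECONDS = 15.0

def get_timeout() -> float:
    # An external input (here, a hypothetical environment variable)
    # overrides the default, so the value can be tuned per environment
    # without a code change.
    return float(os.environ.get("TEST_TIMEOUT_SECONDS", DEFAULT_TIMEOUT_SECONDS))
```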

#7: Incorrect Test Coverage

It is surprisingly common to see an automated test that doesn’t actually cover the intended test steps. A step from the test procedure may be missing, or an assertion may yield a false positive. Sometimes, assertions may not even be performed! When reviewing tests, keep the original test procedure handy, and watch out for missing coverage.

#6: Inadequate Documentation

Documentation is vital for good testing and good maintenance. When a test fails, the documentation it provides (both in the logs it prints and in its own code) significantly assists triage. Automated test cases should read like test procedures. This is one reason why self-documenting behavior-driven test frameworks are so popular. Even without BDD, test automation should be rich with comments and self-documenting identifiers. If I cannot understand a test by skimming its code in a code review, then I ask questions, and when the author provides answers, I ask them to add those answers as comments to the code.
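As a rough sketch of what that looks like, consider a test that logs its intent and explains its assertion (every name here is hypothetical):

```python
import logging

log = logging.getLogger(__name__)

def test_search_returns_results_for_valid_query(search_client):
    # 'search_client' stands in for whatever client the product provides.
    query = "panda"
    log.info("Searching for query: %r", query)
    results = search_client.search(query)
    log.info("Received %d results", len(results))
    # An empty result set for a known-good query would be a product bug.
    assert results, f"No results returned for query {query!r}"
```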

#5: Poor Code Placement

Automation projects tend to grow fast. Along with new tests, new shared code like page objects and data models is added all the time. Maintaining a good, organized structure is necessary for project scalability and teamwork. Test cases should be organized by feature area. Common code should be abstracted from test cases and put into shared libraries. Framework-level code for things like inputs and logging should be separated from test-level code. If code is put in the wrong place, it can be difficult to find or reuse. It can also create a dependency nightmare: for example, non-web tests should not depend on Selenium WebDriver. Make sure new code is put in the right place.
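One possible layout that reflects this separation (the directory names are illustrative, not prescriptive):

```
project/
    framework/      # inputs, config, logging - no test logic
    pages/          # shared Selenium page objects (web tests only)
    models/         # shared data models
    tests/
        login/      # test cases grouped by feature area
        search/
```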

#4: Bad Config Changes

Even the most seemingly innocuous configuration tweak can have huge impacts:

  • A username change can cause tests to abort setup.
  • A bad URL can direct a test to the wrong site.
  • Committing local config files to version control can cause other teammates’ local projects to fail to build.
  • Changing test input values may invalidate test runs.
  • One time, I brought down a whole continuous integration pipeline by removing one dependency.

As a general rule, submit any config changes in a separate code review from other changes, and provide a thorough explanation to the reviewers for why the change is needed. Any time I see unusual config changes, I always call them out.

#3: Framework Hacks

A framework is meant to help engineers automate tests. However, sometimes the framework may also be a hindrance. Rather than improve the framework design, many engineers will try to hack around the framework. Sometimes, the framework may already provide the desired feature! I’ve seen this very commonly with dependency injection – people just don’t know how to use it. Hacks should be avoided because test automation projects need a strong overall design strategy.
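For example, a pytest-based framework already provides dependency injection through fixtures; a minimal sketch (the fixture name and site are illustrative):

```python
import pytest
from selenium import webdriver

# The framework supplies the browser through a fixture, so tests never
# need to hack together their own global WebDriver instance.
@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()  # disposal happens in one place, guaranteed

def test_home_page_title(browser):
    # pytest injects 'browser' by matching the parameter name.
    browser.get("https://example.com")
    assert "Example" in browser.title
```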

#2: Brittleness

Test automation must be robust enough to handle bumps in the road. However, test logic is not always written to handle slightly unexpected cases. Here are a few examples of brittleness to watch out for in review (a sketch covering two of them follows the list):

  • Do test cases have adequate cleanup routines, even when they crash?
  • Are all exceptions handled properly, even unexpected ones?
  • Is Selenium WebDriver always disposed?
  • Will SSH connections be automatically reconnected if dropped?
  • Are XPaths too loose or too strict?
  • Is a REST API response code of 201 just as good as 200?
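Here is a short sketch covering two of those checks, cleanup on crash and status-code tolerance (the endpoint and the 'api_base_url' fixture are hypothetical):

```python
import requests

def test_create_and_delete_record(api_base_url):
    response = requests.post(f"{api_base_url}/records", json={"name": "panda"})
    # Accept any 2xx success code: some services return 201 Created
    # instead of 200 OK for a POST.
    assert 200 <= response.status_code < 300, response.text
    record_id = response.json()["id"]
    try:
        fetched = requests.get(f"{api_base_url}/records/{record_id}")
        assert fetched.ok
    finally:
        # Cleanup runs even if the verification above fails or crashes.
        requests.delete(f"{api_base_url}/records/{record_id}")
```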

#1: Duplication

Duplication is the #1 problem for test automation. I wrote a whole article about it: Why is Automation Full of Duplicate Code? Many testing operations are inherently repetitive. To save development time, engineers sometimes just copy-paste code blocks rather than seek existing methods or add new helpers. Plus, in a large code base, it can be difficult to find reusable parts that meet immediate needs. Nevertheless, good code reviews should catch code redundancy and suggest better solutions.
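As a sketch, a login sequence copy-pasted into dozens of tests can collapse into one shared helper (the locators are illustrative):

```python
from selenium.webdriver.common.by import By

def log_in(driver, username, password):
    """Shared helper: one login implementation for every test to reuse."""
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login-button").click()
```

Catching the copy-paste in review is exactly what prompts refactorings like this.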

 

Please let me know in the comments section if there are any other specific things you look for when reviewing test automation code!

Should I Reject a Code Review for Typos?

TL;DR: Yes!

Code reviews are essential to good software development. In a code review, peers read each other’s code and vote to approve or reject the changes before committing them to the main code base. Code reviews provide a platform for constructive feedback, accountability, and even learning opportunities. To make them effective, a team must establish best practices – not only for the code itself, but also for the review process. Good guides can be found here, here, and here for reference. Some rules, such as “no personal attacks” and “focus only on the changes at hand,” are universally agreeable. Other rules, however, can cause controversy.

One such controversial rule is the title of this blog post: Should a code review be rejected for typos? I use the word “typos” here to broadly include any sort of typographical shortcoming: misspellings, incorrect grammar, poor formatting, and even improper spacing. For example, a variable named somehting would be a typo.

There are valid reasons not to reject code reviews over typos. The code itself will still compile and run, so long as the typo’ed identifiers are used consistently. Requiring corrections takes extra time, which in business costs money. Authors may also take offense, especially if English (or the language of dialogue) is their second language.

Nevertheless, I strongly believe that yes, code reviews should be rejected for typos. Below are five reasons why:

It corrects carelessness. Typos mean carelessness. Mistakes are bound to happen, but pervasive typos indicate a deeper, systemic problem. Reviews are a measure of accountability between peer engineers to prevent carelessness. Being tough on small things encourages engineers to straighten up on all things.

It prevents future frustration. People expect things to be spelled and formatted the right way. Compiler error messages are often cryptic and may not intuitively point to typos. Imagine trying to call the do_stuff method, only to discover, after an hour of hair-pulling, fist-banging, and cursing at the screen, that the original method was named do_stuf. Frustration is especially acute when BDD Gherkin steps have typos. Allowing typos to be committed to the code base increases the chances of this type of frustration.

It improves readability. Typos and poor formatting are distracting. They make it harder to read code. For example, I remember once reading a Perl source file in which every single line had an arbitrary number of indent spaces. It was impossible to visually align function bodies, if statements, and loops. I had to reformat it before I could work on it.

It boosts confidence in the code and in the team. Imagine if you saw typos in this blog post. Each one you found would lower your confidence in my writing skills. The same is true for software: as a reviewer, when I see typos, I lose confidence in the quality of the code and in the author’s skills, because I see carelessness. Eliminating typos not only makes code better, but it also makes people think better of the code.

It reinforces high standards. Quality is not limited to functionality. Poorly written code may run correctly, but it will not be maintainable. Upholding high standards in code review will result in better code overall. Letting small things slip through will, over time, atrophy a code base.

If “reject” sounds like a harsh word, it may be beneficial for a review process to indicate the severity of feedback points. For example, broken code could be “critical”, while typos could be “minor.” Nevertheless, typos should not be committed to the main code base, and thus their code reviews should be rejected.