Author: Andy Knight

I'm a software engineer who specializes in test automation.

Ignoring Files with Git

Git is one of the most popular version control systems (VCS) available, especially thanks to hosting vendors like GitHub. It keeps code safe and shareable. Sometimes, however, certain files should not be shared, like local settings or temporary configs. Git provides a few ways to make sure those files are ignored.

.gitignore

The easiest and most common way to ignore files is to use a gitignore file. Simply create a file named .gitignore in the repository’s root directory. Then, add names and patterns for any files and directories that should not be added to the repository. Use the asterisk (“*”) as a wildcard. For example, “*.class” will ignore all files that have the “.class” extension. Remember to add the .gitignore file to the repository so that it can be shared. As a bonus, Git hosting vendors like GitHub usually provide standard .gitignore templates for popular languages.

Any files covered by the .gitignore file will not be added to the repository. This approach is ideal for local IDE settings like .idea or .vscode, compiler output files like *.class or *.pyc, and test reports. For example, here’s an abridged version of GitHub’s .gitignore template for Java:
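# Compiled class files
*.class

# Log files
*.log

# Package files
*.jar
*.war
*.ear

# Virtual machine crash logs
hs_err_pid*

(The snippet above is abridged from memory; see GitHub’s gitignore repository for the full, current template.)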

skip-worktree

A .gitignore file prevents a file from being added to the repository, but what about ignoring changes to a file that is already tracked? For example, developers may want to safely override settings in a shared config file for local testing. That’s where skip-worktree comes in: it lets a developer “skip” any local changes made to a given file. Changes will not appear under “git status” and thus will not be committed.

Use the following commands:

# Ignore local changes to an existing file
git update-index --skip-worktree path/to/file

# Stop ignoring local changes
git update-index --no-skip-worktree path/to/file

Warning: The skip-worktree setting applies only to the local repository. It is not applied globally! Each developer will need to run the skip-worktree command in their local repository.

assume-unchanged

Another option for ignoring files is assume-unchanged. Like skip-worktree, it makes Git ignore changes to files. However, whereas skip-worktree assumes that the user intends to change the file, assume-unchanged assumes that the user will not change the file. The intention is different. Large projects using slow file systems may gain significant performance improvements by marking unused directories as assume-unchanged. This option also works with other update-index options like --really-refresh.

Use the following commands:

# Assume a file will be unchanged
git update-index --assume-unchanged path/to/file

# Undo that assumption
git update-index --no-assume-unchanged path/to/file

Again, this setting applies only to the local repository – it is not applied globally.
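As a handy check (standard Git behavior, not specific to either setting alone), “git ls-files -v” shows how each tracked file is marked: “S” means skip-worktree, and a lowercase letter means assume-unchanged. On shells with grep:

# List files marked with skip-worktree
git ls-files -v | grep '^S'

# List files marked with assume-unchanged
git ls-files -v | grep '^[a-z]'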

Comparison Table

Which is the best way to ignore files?

  • .gitignore file – Prevents files from being added to the repository. Best for local settings, compiler output, test results, etc. Scope: global (shared through the repository).
  • skip-worktree setting – Prevents local changes to an existing file from being committed. Best for shared files that need local overrides, like config files. Scope: local.
  • assume-unchanged setting – Lets Git skip files that won’t change, as a performance optimization. Best for files and folders that a developer won’t touch. Scope: local.


Book Review: Python Testing with pytest

tl;dr

Title: Python Testing with pytest
Author: Brian Okken (@brianokken)
Publication: 2017 (The Pragmatic Programmers)
Summary: How to use all the features of pytest for Python test automation – “simple, rapid, effective, and scalable.”
Prerequisites: Intermediate-level Python programming.

Summary

Python Testing with pytest is the book on pytest. Brian Okken covers all the ins and outs of the framework. The book is useful both as a tutorial for learning pytest and as a reference for specific framework features. It covers:

  • Getting started with pytest
  • Writing simple tests as functions
  • Writing more interesting tests with assertions, exceptions, and parameters
  • Using all the different execution options
  • Writing fixtures to flexibly separate concerns and reuse code
  • Using built-in fixtures like tmpdir, pytestconfig, and monkeypatch
  • Using configuration files to control execution
  • Integrating pytest with other tools like pdb, tox, and Jenkins
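To give a taste of the style the book teaches, below is a minimal sketch of my own (not an example from the book) showing a plain test function, a fixture, and parameters:

import pytest

@pytest.fixture
def numbers():
    # Fixtures separate setup concerns from test logic
    return [1, 2, 3]

def test_sum(numbers):
    assert sum(numbers) == 6

@pytest.mark.parametrize("value, square", [(2, 4), (3, 9)])
def test_square(value, square):
    assert value ** 2 == square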

Appendices also cover:

  • Using Python virtual environments
  • Installing packages with pip
  • An overview of popular plugins like pytest-xdist and pytest-cov
  • Packaging and distributing Python packages

Praises

This book is a comprehensive guide to pytest. It thoroughly covers the framework’s features and gives pointers to more info elsewhere. Even though pytest has excellent online documentation, I still recommend this book to anyone who wants to become a pytest master. Online docs tend to be fragmented, with each piece limited in scope, whereas books like this one are designed to be read progressively and in order for maximal understanding of the material.

I love how this book is example-driven. Each section follows a simple yet powerful outline: idea → code → output → explanation. Having real code with real output truly cements the point of each mini-lesson. New topics are carefully unfolded so that they build upon previous topics, making the book read like a collection of tutorials. Examples at the end of every chapter challenge the readers to practice what they learn. The formatting of each section also looks great.

The extra info on related topics like pip and virtualenv is also a nice touch. Python pros probably don’t need it, but beginners might get stuck without it.

The rocket ship logo on the cover is also really cool!

Takeaways

pytest is one of the best functional test frameworks currently available in any language. It breaks the clunky xUnit mold, in which class structures were awkwardly superimposed over test cases as if one size fits all. Instead, pytest is lightweight and minimalist because it relies on functions and fixtures. Scope is much easier to manage, code is more reusable, and side effects can more easily be avoided. pytest has taken over Python testing because it is so Pythonic.

Brian’s concise writing style has also inspired me to be more direct in my own writing. I tend to be rather verbose out of my desire to be descriptive. However, fewer words often leave a more powerful impression. They also make the message easier to comprehend. Python is beloved for its concise expressiveness, and as a Pythonista, it would be fitting for me to adopt that trait into my English.

If I had a wish list for a second edition, I’d love to see more info about assertions and other plugins (namely pytest-bdd). I think an appendix with comparisons to other Python test frameworks would also be valuable.

A Warning

I ordered a physical copy of this book directly from Amazon (not a third-party seller). Unfortunately, that copy was missing all the introductory content: the table of contents, the acknowledgements, and the preface. The first page after the front cover was Chapter 1. Befuddled, I reached out to Brian Okken (whom I met in person at PyCon 2018). We suspected that it was either a misprint or a bootleg copy. Either way, we sent the evidence to the publisher, and Amazon graciously exchanged my defective copy for the real deal. Please look out for this sort of problem if you purchase a printed copy of this book for yourself!

If you want to learn more about pytest, please read my article Python Testing 101: pytest.

NuGet Quick Reference

What is NuGet?

NuGet is a package manager for Microsoft .NET. It installs packages and manages dependencies for .NET projects. It is like Maven (Java) or pip (Python). The NuGet Gallery hosts thousands of popular packages like Json.NET, NUnit, and jQuery. If you develop .NET applications (like in C#), then you probably need to use NuGet.

Installing Packages

The easiest way to use NuGet is through Visual Studio, which includes NuGet features by default. Packages are managed per project. Right-click on a project in Solution Explorer and select “Manage NuGet Packages…” to open the project’s package manager page.

  • The Browse tab lets you search and install new packages.
  • The Installed tab shows which packages are installed and lets you uninstall them.
  • The Updates tab lets you update packages to their latest versions.

The NuGet Package Manager page for a project in Visual Studio

When packages are installed or updated, NuGet also pulls any dependencies they require. Visual Studio also records all of a project’s dependencies in a packages.config file. Then, just build and run!
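For reference, a packages.config file looks something like this (the packages and versions below are purely illustrative):

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Newtonsoft.Json" version="12.0.3" targetFramework="net472" />
  <package id="NUnit" version="3.12.0" targetFramework="net472" />
</packages>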

NuGet Configuration

NuGet can be configured using a NuGet.Config file. This file can be placed under a project directory, a solution directory, or a system-wide location. One of the most common settings is the package sources: NuGet uses the public nuget.org repository by default, but others (like private company repos) can also be added. Check the nuget.config reference online for docs on all options. (Package sources can also be configured through Visual Studio under Tools > NuGet Package Manager > Package Manager Settings.)
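For example, a NuGet.Config that adds a private package source alongside nuget.org might look like this (the company feed URL is hypothetical):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="MyCompany" value="https://nuget.mycompany.com/v3/index.json" />
  </packageSources>
</configuration>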

NuGet Package Manager Console

Sometimes, it’s helpful to control NuGet directly through the Package Manager Console. From the menu bar: Tools > NuGet Package Manager > Package Manager Console. For example, when packages get messed up, I’ll run “Update-Package -Reinstall” to reinstall everything. (Right-clicking the solution and selecting “Restore NuGet Packages” never seems to work for me.) Check the help command or the official guide for more info.


The NuGet Package Manager Console in Visual Studio

NuGet CLI

The NuGet CLI nuget.exe provides the full extent of NuGet features, including the ability to make packages. It is more powerful than the Package Manager Console. It must be installed independently – it does not come with Visual Studio. Check the NuGet CLI reference online for full details. The .NET Core CLI dotnet.exe can also be used for managing packages. See the feature comparison for the differences.


The NuGet CLI

Creating a NuGet Package

A NuGet package is basically a ZIP file with a .nupkg extension. It typically contains an assembly DLL and maybe other related files. Creating a NuGet package is pretty easy:

  1. Install the NuGet CLI.
  2. Create a .nuspec file for the project.
  3. Add appropriate settings to the .nuspec file.
  4. Run the “nuget pack” command to create the .nupkg file.
  5. Publish the .nupkg file to the desired destination.

The .nuspec file can be created by running the “nuget spec” command in the project’s directory. The generated <project-name>.nuspec file will contain replacement tokens that will be substituted with values from the project’s AssemblyInfo when the package is built. Make sure to set AssemblyInfo values appropriately for the substitution. The version is especially important, and the automatic version format may be useful for guaranteeing uniqueness. Be sure to add any packages upon which the project depends as dependencies, too. (The .nuspec file can also be created manually.) Refer to the .nuspec reference for full details.

The standard package creation command is “nuget pack <project-name>.nuspec”. However, if the .nuspec file contains replacement tokens, then use “nuget pack <project-name>.csproj” instead. Once the package is created, it can be published publicly to nuget.org or to a private NuGet feed.

Below is an example .nuspec file with replacement tokens, modeled on the default “nuget spec” output (the dependency entry is illustrative):
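<?xml version="1.0"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <title>$title$</title>
    <authors>$author$</authors>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>$description$</description>
    <copyright>$copyright$</copyright>
    <dependencies>
      <dependency id="Newtonsoft.Json" version="12.0.3" />
    </dependencies>
  </metadata>
</package>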


Behavior-Driven Blasphemy

This is my 100th post on Automation Panda! I’m thrilled to see how much this blog has grown and how many people it has helped. For such a monumental occasion, I have chosen to voice a rather controversial opinion about test automation.

Behavior-driven development seems to be the software testing buzzword of the decade. What started as a refinement of test-driven development by developers in Europe and the UK quickly became the big process fad of the 2010s. The Cucumber project (now 10 years old) developed or inspired Gherkin-based test automation frameworks in all the major programming languages. Companies started requiring Given-When-Then format for acceptance criteria and test scenarios. Three Amigos meetings became standard calendar fixtures during sprints. Organizations that once undertook “Agile transformations” now have similar initiatives for BDD. For better or worse, BDD exists and cannot be ignored.

The benefits preached by BDD dogma are better collaboration and automation. However, leaders frequently insist that Gherkin-style test frameworks add value only when paired with practices like Example Mapping. “BDD is a process, not a tool,” is a common mantra. “Otherwise, the Gherkin just gets in the way.” Although I wholeheartedly agree that behavior-driven practices add significant value to the development process, I nevertheless espouse a rather blasphemous opinion:

BDD test automation frameworks are better than traditional frameworks for black box functional testing even when BDD processes are not followed.

What Exactly Are You Saying?

My claim is that behavior-driven test frameworks like Cucumber, SpecFlow, and behave are significantly better than traditional xUnit-style frameworks for testing live features. For example, I would rather use SpecFlow than NUnit for testing a Web app with Selenium WebDriver, whether or not the other two Amigos are with me. The resulting automation code has better structure, readability, and reusability.

I’m not saying that teams shouldn’t do BDD practices, and I’m not saying that the Three Amigos should be separated. Collaboration is key to success, and BDD really helps. Example Mapping is one of the most useful practices a development team can do. I’m also not saying that BDD frameworks should be used for all testing purposes – they are poorly suited for unit testing and for performance testing.

Objection!

I find myself very lonely in this opinion. BDD leaders repeatedly insist that BDD is not about testing and automation.

The most outspoken BDDers (mostly coalescing around the Cucumber community) have largely moved their focus to the collaboration benefits, almost forsaking the automation benefits. (This may not necessarily be true, but it appears that way based on the literature and materials floating on the Web.) That outlook is somewhat disingenuous because the main tools supporting BDD are, in fact, test frameworks.

BDD also has outspoken opponents – it’s love or hate. I’ve personally spoken with several engineers who despise Gherkin-based frameworks. “I can see how it would be valuable when a whole team embraces behavior-driven practices,” many have told me, “but otherwise, the Gherkin layer just gets in the way of automation.” I’ve heard it called “plaster” and “garbage.” Engineers just want to code their tests. And code should always be readable, right?


Testing is an inherently opinionated space. People can never seem to agree on things.

The Bigger Picture

Test automation must be developed regardless of any specific development practices, and its architecture must stand firmly in its own right. Unfortunately, both sides miss the bigger picture:

The best solution for test automation is a domain-specific language.

A domain-specific language (DSL) is a programming language with a purpose. It is designed to handle very specific needs, rather than general-purpose programming. For example:

  • SQL is a DSL for database queries.
  • XPath is a DSL for finding elements in an XML document.
  • YAML is a DSL for object serialization.

Gherkin is also a DSL – for behavior specification.

Domain-specific languages naturally suit test automation due to the clear difference between test cases and test code. Test cases are procedures that exercise product behavior. Anyone can write a test case because test cases are dictated or explained in plain language. Test code, however, is the software implementation of test cases. Test code handles function calls, logging, exceptions, and all those other little programming details that help run tests. A test automation DSL separates those concerns: test cases are written in a special language, and the interpreter handles repetitive, low-level details. Some form of extension mechanism handles product-specific interactions. The purpose of a language is to effectively express intention – and here, the intention is to test the product.

To truly achieve an optimal solution, however, the DSL and its interpreter must be treated as part of the automation software, just like the test cases and extensions. Remember, a language’s interpreter is just another piece of software. The interpreter is part of the separation of concerns and the single responsibility principle. Concerns that would typically be handled by classes and functions in traditional test code should be moved to the interpreter. For example, the interpreter should automatically log every test case step, rather than forcing the author to write explicit logging statements.
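As a toy illustration of that principle (my own sketch, not any production framework), here is a tiny interpreter that reads test steps as plain text, logs each step automatically, and dispatches to registered extension functions:

import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

STEPS = {}  # registry mapping step names to extension functions

def step(name):
    # Decorator that registers product-specific code as a DSL step
    def decorator(func):
        STEPS[name] = func
        return func
    return decorator

@step("open app")
def open_app():
    pass  # product-specific interaction code would go here

@step("check title")
def check_title():
    pass

def run(test_case):
    # The interpreter logs every step so test authors never have to
    for line in test_case.strip().splitlines():
        name = line.strip()
        logging.info("STEP: %s", name)
        STEPS[name]()

run("open app\ncheck title")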

When I worked at NetApp years ago, I implemented a DSL to test platform-level features of our operating system. I called it DS – short for “Design Steps” (from HP ALM) (but also not without an affinity for the Nintendo DS). NetApp’s entire test automation code was developed in Perl at the time, so I implemented the DS interpreter in Perl to reuse existing libraries. DS test cases were typically only a dozen lines long each, and DS expressions could call specially-written Perl modules directly for complete extendability. During the first big release using DS, my team saved countless hours of automation development time as compared to the previous release while delivering a higher number of tests. I also did this before I had ever heard of BDD.

Unfortunately, most teams have neither the time to develop their own testing DSL nor the understanding of compiler theory to build it right. And if they were given such a language, they typically limit themselves to the provided implementation instead of taking ownership to extend the language for their needs.

nintendo-ds-1

The original Nintendo DS. Fun times!

Who Truly Misunderstands Gherkin?

Enter Gherkin: the world’s first major general-purpose, off-the-shelf language for test automation. It is general enough to cover any case through its plain language steps, yet specific enough to standardize tests. Users don’t need to be compiler theory experts – they just make up their own step names and provide the definition code to execute them. Early BDD projects like JBehave and Cucumber packaged an interpreter as a test framework and delivered it to a testing world still stuck on JUnit. The need for a testing DSL was there, whether or not the BDD folks meant to serve it.
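For example, here is a minimal (hypothetical) Gherkin scenario, along with a matching Python step definition as it might be written for behave (the search_page object is an assumed page object, not a real library API):

Feature: Web search
  Scenario: Search for a phrase
    When the user searches for "pandas"
    Then the results page shows results for "pandas"

# steps.py: the definition for the When step above
from behave import when

@when('the user searches for "{phrase}"')
def step_search(context, phrase):
    context.search_page.search(phrase)  # hypothetical page object call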

Cucumber-ites frequently bemoan that their framework is misunderstood by the masses. They shudder to see teams using their framework purely for test automation. However, Cucumber effectively lowered the entry barrier for teams to make their own testing DSLs. Kodak did the same thing for film: they made it cheap and standard so anyone could be a photographer. Not everyone who uses a BDD framework misunderstands its purpose: some (like me) simply see a different value proposition from the one preached by orthodox BDD practitioners. Gherkin fills a need that nobody knew they had. Its popularity validates that claim.

Benefits Apart from Process

Using a BDD framework adds much value to testing and development even without BDD processes. Below are just a handful of benefits:

  1. Focus first on good scenarios. Gherkin forces authors to think before they code.
  2. Faster automation development. Gherkin steps are reusable and parametrizable.
  3. Stronger structure. Engineers know where to put things in the framework.
  4. Test understandability. Anyone can read scenarios because they are written in plain language. Business people can help. New people can pick it up fast.
  5. Test sharing. Feature files can be shared apart from test code, which can be helpful for business partners.
  6. Test similarity. Tests all look the same. Team members can more easily help each other.
  7. Clearer failures. When a scenario fails, reports show exactly what step failed.
  8. Simpler bug reports. Use scenario steps as instructions to reproduce the failure.
  9. 2-phase test reviews. Review the Gherkin first and the test code second, to make sure the test cases are good before effort is spent implementing the wrong things.
  10. BDD enablement. Using a BDD framework opens the door for a team to embrace better behavioral practices in the future.

I’ve written about these advantages on this blog before.

Case Studies

I’m also not the only one who finds value in BDD test frameworks outside of the full BDD process. Below are five case studies.

radish

radish is a Python test framework inspired by Cucumber. Its DSL syntax is a superset of Gherkin that adds preconditions, loops, variables, and expressions. These language additions indicate a bias towards automation because they enable engineers to write tests more programmatically, albeit in a Gherkin-ese way.

Karate

Karate is a test framework with a full DSL based on Gherkin with steps specifically tailored to Web service calls. Although it is implemented in Java, testers do not need to do any Java programming to write complete test cases from day one. Peter Thomas, the creator of Karate, unabashedly declares that Karate does not truly adhere to BDD but nevertheless uses Cucumber for its automation benefits. (Note: Karate is working to move completely off of Cucumber. See GitHub issue #444.)

REST Assured

REST Assured is a Java package for testing REST APIs. Unlike Karate, REST Assured provides a fluent syntax (and not a DSL) for writing service calls directly in Java code. The fluent syntax is based on Gherkin: given() a request spec is created, when() the call is made, then() verify the response. Although REST Assured is not a full testing framework, it nevertheless pulls inspiration from BDD frameworks for order and structure.

Cycle

Cycle is a BDD-focused solution from Tryon Solutions for testing Web, terminal, and desktop apps. Cycle is unique because it provides out-of-the-box steps for all types of supported testing so that no programming experience is required. Testers write feature files using Cycle 2.0’s slick new Electron app. Scenarios are written in CycleScript, a Gherkin-ese language with additions like variables and sub-scenario calls. Steps tend to be imperative, but that’s the tradeoff for not requiring lower-level programming.

Hexawise

Hexawise is a combinatorial testing tool designed to maximize coverage with minimal test counts by smartly joining feature variations. It helps testers write better tests with less redundancy and fewer gaps. Although Hexawise has historically assisted manual testers, it also can generate Gherkin feature files for test variations.

mexican-coast-dried-sea-cucumber

Not all cucumbers are the same. Above is a sea cucumber.

Good Enough?

Gherkin-based test frameworks are not perfect, but they do provide good structure. They gained popularity outside of the pure BDD movement because they genuinely added value to testing and automation. Like any other tool, teams will use them in both good and bad ways. (Trust me, I’ve seen scary Gherkin.)

It’s interesting to see how groups outside the Cucumber diaspora are attempting to solve the limitations of pure Gherkin. Each case study above showed a unique path. Clearly, the test automation problem has not yet been completely solved, but current BDD frameworks are the best off-the-shelf solutions we have until a new software testing movement comes along.

Book Review: Hands-On Enterprise Automation with Python

tl;dr

Title: Hands-On Enterprise Automation with Python (available from Amazon and Packt)
Author: Bassem Aly
Publication: 2018 (Packt Publishing)
Summary: Using Python packages for network, system, and infrastructure automation.
Prerequisites: Intermediate Python programming. Intermediate administrative skills.

Summary

Hands-On Enterprise Automation with Python is an excellent resource for learning how to automate common administrative tasks like running commands, scraping network config, and setting up systems. Python is a natural fit for such tasks with its impressive package library, its easy learning curve, and its concise syntax.

The number of Python tools and modules this book covers is stunning:

  • Developing Python code with PyCharm
  • Managing network devices with paramiko, netmiko, and telnetlib
  • Using regular expressions with re
  • Charting data with matplotlib
  • Templating YAML files with Jinja2
  • Multiprocessing with multiprocessing
  • Running local system commands with subprocess
  • Running remote system commands with fabric
  • Getting system info with platform
  • Sending emails with smtplib
  • System administration with Ansible
  • Interacting with a MySQL database using MySQLdb
  • Storing files in Amazon S3 using boto3
  • Packet sniffing and manipulating with scapy

Each new topic is introduced with background information, setup steps, and example code. Instructions are given for the reader to set up their own test environment and try things out. Later chapters also show how to use modules together to build more powerful automation. Most examples favor Python 2 but can be made compatible with Python 3.
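To show the flavor of those tasks, here is a minimal sketch of my own (not an example from the book) that runs a remote command over SSH with paramiko. The host address and credentials are placeholders:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.0.2.10", username="admin", password="secret")  # placeholder host/creds

# Run a command on the remote machine and print its output
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

client.close()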

Praises

The best thing about this book is how it covers an incredible breadth of topics in such a readable way. Rather than being dry manual pages pulled from a cryptic doc site, each chapter is a tutorial with explanations and real-world code examples. Readers can easily read through the book cover-to-cover or seek topics directly as a reference.

Another great thing is that the author always introduces new concepts before applying them. While intermediate skills with Python and administration are presumed as a prerequisite, the introductory chapters nevertheless show how to set up a full Python development environment with a network lab for testing. Before showing how to use any particular module, the author explains what the problem is and why the module should be used. This makes the material very accessible, especially for non-sysadmins.

Comments

  • The tasks showcased are network-heavy.
  • The setup primarily relies on Linux.
  • Many pages are dedicated to workbench setup.
  • Modules are covered at an introductory level and not in depth.

Takeaways

I thoroughly enjoyed reading Hands-On Enterprise Automation with Python. As a Software Engineer in Test, the word “automation” almost always means “test automation.” Reading this book was a healthy glimpse at other no-less-important types of automation oriented more toward admin tools and scripts. I could definitely leverage many of the modules covered in this book for my own work, too – several of them cross domains.

I really liked the three main reasons the author gave for using Python for automation: it is readable, it has so many libraries, and it has power in its conciseness. These reasons ring true for many applications, especially the point about modules. Since there’s a module for nearly everything, Python programming often simplifies to recipes for using those modules. Arguably, this book is a cookbook full of sysadmin recipes.

Reading this book also made me reminisce about my days working at MaxPoint, where I first learned Python to build a test automation framework. I used many of the same modules covered in this book for the same tasks. I felt comfort in my familiarity and validation in my past efforts.

How Do I Know My Tests Add Value?

Software testing is a huge effort, especially for automation. Teams can spend a lot of time, money, and resources on testing (or not). People literally make careers out of it. That investment ought to be worthwhile – we shouldn’t test for the sake of testing.

So, therein lies the million-dollar question: How do we know that our tests add meaningful value?

Or, more bluntly: How do we know that testing isn’t a waste of time?

That’s easy: bugs!

The stock answer goes something like this: We know tests add value when they find bugs! So, let’s track the number of bugs we find.

That answer is wrong, despite its good intentions. Bug count is a terrible metric for judging the value of tests.

What do you mean bug counts aren’t good?

I know that sounds blasphemous. Let’s unpack it. Finding bugs is a good thing, and tests certainly should find bugs in the features they cover. But, the premise that the value of testing lies exclusively in the bugs found is wrong. Here’s why:

  1. The main value of testing is fast feedback. Testing serves two purposes: (1) validating goodness and (2) identifying badness. Passing tests are validated goodness. Failing tests, meaning uncovered bugs, are identified badness. Both types of feedback add value to the development process. Developers can proceed confidently with code changes when trustworthy tests are passing, and management can assess lower risk. Unfortunately, bug counts don’t measure that type of goodness.
  2. Good testing might actually reduce bug count. Testing means accountability for development. Developers must think more carefully about design. They can also run tests locally before committing changes. They could even do Test-Driven Development. Better practices could prevent many bugs from ever happening.
  3. Tracking bug count can drive bad behavior. When a high bug discovery rate looks good (or, worse, is tied to quotas), testers will strive to post numbers. If they don’t find critical bugs, they will open bug reports for nitpicks and trivialities. The extra effort spent reporting inconsequential problems may not be of value to the business – wasting their time and the developers’ time all for the sake of metrics.
  4. Bugs are usually rare. Unless a team is dysfunctional, the product usually works as expected. Hundreds of test runs may not yield a single bug. That’s a wonderful thing if the tests have good coverage. Those tests still add value. Saying they don’t belittles the whole testing effort.

Then what metrics should we use?

Bugs happen arbitrarily, and unlimited testing is not possible. Metrics should focus on the return-on-investment for testing efforts. Here are a few:

  1. Time-to-bug-discovery. Rather than track bug counts, track the time until each bug is discovered. This metric genuinely measures the feedback loop for test results. Make sure to track the severity of each bug, too. For example, if high-severity bugs are not caught until production, then the tests don’t have enough coverage. Teams should strive for the shortest time possible – fast feedback means lower development costs. This metric also encourages teams to follow the Testing Pyramid.
  2. Coverage. Coverage is the degree to which tests exercise product behavior. Higher coverage means more feedback and greater chances of identifying badness. Most unit test frameworks can use code coverage tools to verify paths through code. Feature coverage requires extra process or instrumentation. Tests should avoid duplicate coverage, too.
  3. Test failure proportions. Tests fail for a variety of reasons. Ideally, tests should fail only when they discover bugs. However, tests may also fail for other reasons: unexpected feature changes, environment instability, or even test automation bugs. Non-bug failures disrupt the feedback loop: they force a team to fix testing problems rather than product problems, and they might cause engineers to devalue the whole testing effort. Tracking failure proportions will reveal what problems inhibit tests from delivering their top value.
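As a toy sketch of that third metric (my own illustration, not from any particular tool), failure proportions can be tracked by classifying each failure and reporting percentages:

from collections import Counter

# Each entry is the classified reason for one test failure (sample data)
failures = [
    "product bug", "environment", "automation bug",
    "product bug", "environment", "environment",
]

counts = Counter(failures)
total = sum(counts.values())
for reason, count in counts.most_common():
    print(f"{reason}: {count / total:.0%}")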


Gherkin Syntax Highlighting in Visual Studio Code

Visual Studio Code is an incredible code editor that’s on the rise. It offers the power of an IDE with the speed and simplicity of a lightweight text editor, similar to Sublime, Atom, and Notepad++. If you’re a BDD addict, then VS Code is a great choice for writing Gherkin features, too! There are a number of extensions for Gherkin. Which one is the best? Below is my recommendation.

TL;DR

Install both Snippets and Syntax Highlight for Gherkin (Cucumber) and Gherkin step autocomplete (extensions #2 and #4 below).

Extension #1

VS Code has a few free extensions to support Gherkin. The first one I tried was Cucumber (Gherkin) Full Support. This one had the highest number of installs. When I started writing feature files, it provided snippets for each section and syntax colors. The documentation said it could also provide step suggestions (meaning, I type “Given” and it shows me all available Given steps) and navigation to step definition code, but since those features looked like they only worked for JavaScript, I didn’t try them myself. That left me with no step suggestions. The indentation looked off, too. Not perfect. I wanted a better extension.


Extension #2

The second one I tried was Snippets and Syntax Highlight for Gherkin (Cucumber). It provides colorful syntax highlighting and a few three-letter snippets for Gherkin keywords. When I typed “fea”, a full template for a Feature section appeared with user story stubs (“In order to ___, As a ___, I want ___”). Nice! Good practice. The “sce” snippet did the same thing for the Scenario section with Given, When, and Then steps. Each section was indented nicely, too. The only downside was the lack of a snippet for Examples tables. Nevertheless, tables were still highlighted. But again, no step suggestions.
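Based on that description, the “fea” and “sce” snippets expand to a skeleton roughly like the one below (paraphrased from memory, so the exact template text may differ):

Feature: feature name
  In order to ...
  As a ...
  I want ...

  Scenario: scenario name
    Given ...
    When ...
    Then ...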


Extension #3

The third extension I tried was Feature Syntax Highlight and Snippets (Cucumber). It was very similar to the previous extension, but it used different colors. The snippet shortcuts were also not as intuitive – they used the letter “f” for feature followed by the first letter of the section. For example, “ff” was a Feature section, and “fs” was a Scenario section. Unfortunately, this extension did not provide step suggestions. Comments and example table rows did not get highlighted, either. Personally, I preferred the previous extension’s color scheme.


Extension #4

The fourth extension I tried was Gherkin step autocomplete. This one promised step suggestions! However, I had some trouble setting it up. When I enabled the extension by itself, feature files did not show any syntax highlighting, and the steps had no suggestions. What? Lame. What the README doesn’t say is that it relies on a separate extension for feature file support. So, I enabled extension #2 together with this one. Then, I had to move my feature file into a project-root-level directory named “features.” (This path could be customized in the extension’s settings, but “features” is the default.) And, voila! I got pretty colors and step suggestions.


But Wait, There’s More!

There were even more extensions for Gherkin. I was happy with #2 and #4, so I didn’t try others. The others also didn’t have as many installations. If anyone finds goodness out of others, please post in the comments!