
BDD 101: Unit, Integration, and End-to-End Tests

There are many types of software tests. BDD practices can be incorporated into all aspects of testing, but BDD frameworks are not meant to handle all test types. Behavior scenarios are inherently functional tests – they verify that the product under test works correctly. While instrumentation for performance metrics could be added, BDD frameworks are not intended for performance testing. This post focuses on how BDD automation fits into the Testing Pyramid. Please read BDD 101: Manual Testing for manual test considerations.

The Testing Pyramid

The Testing Pyramid is a functional test development approach that divides tests into three layers: unit, integration, and end-to-end.

  • Unit tests are white-box tests that verify individual “units” of code, such as functions, methods, and classes. They should be written in the same language as the product under test, and they should be stored in the same repository. They often run as part of the build to indicate immediate success or failure.
  • Integration tests are black-box tests that verify integration points between system components work correctly. The product under test should be active and deployed to a test environment. Service tests are often integration-level tests.
  • End-to-end tests are black-box tests that cover execution paths through a system. They could be seen as multi-step integration tests. Web UI tests are often end-to-end-level tests.

Below is a visual representation of the Testing Pyramid:

[Figure: The Testing Pyramid]

From bottom to top, the tests increase in complexity: unit tests are the simplest and run very fast, while end-to-end tests require lots of setup, logic, and execution time. Ideally, there should be more tests at the bottom and fewer tests at the top. Test coverage is easier to implement and isolate at lower levels, so fewer high-investment, more-fragile tests need to be written at the top. Pushing tests down the pyramid can also mean wider coverage with less execution time.

Behavior-Driven Unit Testing

BDD test frameworks are not meant for writing unit tests. Unit tests are meant to be low-level, program-y tests for individual functions and methods. Writing Gherkin for unit tests is doable, but it is overkill. It is much better to use established unit test frameworks like JUnit, NUnit, and pytest.

Nevertheless, behavior-driven practices still apply to unit tests. Each unit test should focus on one main thing: a single call, an individual variation, a specific input combo – in short, a behavior. Furthermore, in the software process, feature-level behavior specs draw a clear dividing line between unit and above-unit tests. The developer of a feature is often responsible for its unit tests, while a separate engineer is responsible for integration and end-to-end tests, for the sake of accountability. Behavior specs carry a gentleman’s agreement that unit tests will be completed separately.
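
To illustrate, below is a minimal sketch of behavior-focused unit tests using pytest. The add function under test is a hypothetical illustration:

# test_calc.py
# Each test covers exactly one behavior: a single call with a specific input combo.

def add(a, b):
    return a + b

def test_add_two_positive_numbers():
    assert add(1, 2) == 3

def test_add_a_negative_number():
    assert add(1, -2) == -1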

Integration and End-to-End Testing

BDD test frameworks shine at the integration and end-to-end testing levels. Behavior specs expressively and concisely capture test case intent. Steps can be written at either integration or end-to-end levels. Service tests can be written as behavior specs, as in Karate. End-to-end tests are essentially multi-step integration tests. Note how a seemingly basic web interaction is truly a large end-to-end test:

Given a user is logged into the social media site
When the user writes a new post
Then the user's home feed displays the new post
And all friends' home feeds display the new post

Making a simple social media post involves web UI interaction, backend service calls, and database updates all in real time. That’s a full pathway through the system. The automated step definitions may choose to cover these layers implicitly or explicitly, but they are nevertheless covered.

Lengthy End-to-End Tests

Terms often mean different things to different people. When many people say “end-to-end tests,” what they really mean are lengthy procedure-driven tests: tests that cover multiple behaviors in sequence. That makes BDD purists shudder because it goes against the cardinal rule of BDD: one scenario, one behavior. BDD frameworks can certainly handle lengthy end-to-end tests, but careful consideration should be given to whether and how they should be done.

There are five main ways to handle lengthy end-to-end scenarios in BDD:

  1. Don’t bother. If BDD is done right, then every individual behavior would already be comprehensively covered by scenarios. Each scenario should cover all equivalence classes of inputs and outputs. Thus, lengthy end-to-end scenarios would primarily be duplicate test coverage. Rather than waste the development effort, skip lengthy end-to-end scenario automation as a small test risk, and compensate with manual and exploratory testing.
  2. Combine existing scenarios into new ones. Each When-Then pair represents an individual behavior. Steps from existing scenarios could be smashed together with very little refactoring. This violates good Gherkin rules and could result in very lengthy scenarios, but it would be the most pragmatic (though least pure) way to reuse steps for large end-to-end scenarios. Most BDD frameworks don’t enforce step type order, and if they do, steps could be re-typed to work.
  3. Embed assertions in Given and When steps. This strategy avoids duplicate When-Then pairs and ensures validations are still performed. Each step along the way is validated for correctness with explicit Gherkin text (see the sketch after this list). However, it may require a number of new steps.
  4. Treat the sequence of behaviors as a unique, separate behavior. This is the best way to think about lengthy end-to-end scenarios because it reinforces behavior-driven thinking. A lengthy scenario adds value only if it can be justified as a uniquely separate behavior. The scenario should then be written to highlight this uniqueness. Otherwise, it’s not a scenario worth having. These scenarios will often be very declarative and high-level.
  5. Ditch the BDD framework and write them purely in the automation programming. Gherkin is meant for collaboration about behaviors, while lengthy end-to-end tests are meant exclusively for intense QA work. Biz roles will write behavior specs but will never write end-to-end tests. Forcing behavior specification on lengthy end-to-end scenarios can inhibit their development. A better practice could be coexistence: acceptance tests could be written with Gherkin, while lengthy end-to-end tests could be written in raw programming. Automation for both test sets could still nevertheless share the same automation code base – they could share the same support modules and even step definition methods.
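
As an example of strategy #3, below is a minimal sketch of a When step with an embedded assertion, written with behave (Python) and Selenium WebDriver. The step wording, element locator, and confirmation text are hypothetical illustrations:

# posting_steps.py
from behave import when
from selenium.webdriver.common.by import By

@when('the user writes a new post')
def step_write_post(context):
    # Perform the action (hypothetical locator on the context's WebDriver)
    post_box = context.driver.find_element(By.ID, "new-post")
    post_box.send_keys("Hello, world!")
    post_box.submit()
    # Embedded assertion: validate this step before the scenario continues
    assert "Your post was published" in context.driver.page_source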

Pick the approach that best meets the team’s needs.

Django Settings for Different Environments

The Django settings module is one of the most important files in a Django web project. It contains all of the configuration for the project, both standard and custom. Django settings are really nice because they are written in Python instead of a text file format, meaning they can be set using code instead of literal values.

Settings must often use different values for different environments. The DEBUG setting is a perfect example: It should always be True in a development environment to help debug problems, but it should never be True in a production environment to avoid security holes. Another good example is the DATABASES setting: Development and test environments should not use production data. This article covers two good ways to handle environment-specific settings.

Multiple Settings Modules

The simplest way to handle environment-specific settings is to create a separate settings module for each environment. Different settings values would be assigned in each module. For example, instead of just one mysite.settings module, there could be:

mysite
`-- mysite
    |-- __init__.py
    |-- settings_dev.py
    |-- settings_prod.py
    `-- settings_test.py

For the DEBUG setting, mysite.settings_dev and mysite.settings_test would contain:

DEBUG = True

And mysite.settings_prod would contain:

DEBUG = False

Then, set the DJANGO_SETTINGS_MODULE environment variable to the name of the desired settings module. The default value is mysite.settings, where “mysite” is the name of the project. Make sure to set this variable wherever the Django site is run. Also make sure that the settings module is available in PYTHONPATH.
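
For reference, Django's default manage.py scaffold sets a fallback value that an exported DJANGO_SETTINGS_MODULE environment variable will override. A minimal sketch, assuming a dev settings module:

# manage.py (excerpt)
import os

# An exported DJANGO_SETTINGS_MODULE takes precedence over this fallback
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings_dev")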

More details on this approach are given on the Django settings page.

Using Environment Variables

One problem with multiple settings modules is that many settings won’t need to be different between environments. Duplicating these settings then violates the DRY principle (“don’t repeat yourself”). A more advanced approach for handling environment-specific settings is to use custom environment variables as Django inputs. Remember, the settings module is written in Python, so values can be set using calls and conditions. One settings module can be written to handle all environments.

Add a function like this to read environment variables:

# Imports
import os
from django.core.exceptions import ImproperlyConfigured

# Function
def read_env_var(name, default=None):
    value = os.environ.get(name, default)
    if value is None:
        raise ImproperlyConfigured("The %s value must be provided as an env variable" % name)
    return value

Then, use it to read environment variables in the settings module:

# Read the secret key directly
# This is a required value
# If the env variable is not found, the site will not launch
SECRET_KEY = read_env_var("SECRET_KEY")

# Read the debug setting
# Default the value to False
# Environment variables are strings, so the value must be converted to a Boolean
DEBUG = read_env_var("DEBUG", "False") == "True"

To avoid a proliferation of required environment variables, one variable could be used to specify the target environment like this:

# Read the target environment
TARGET_ENV = read_env_var("TARGET_ENV")

# Set the debug setting to True for every environment except production
DEBUG = (TARGET_ENV != "prod")

# Set database config for the chosen environment
if TARGET_ENV == "dev":
    DATABASES = { ... }
elif TARGET_ENV == "prod":
    DATABASES = { ... }
elif TARGET_ENV == "test":
    DATABASES = { ... }

Managing environment variables can be pesky. A good way to manage them is using shell scripts. If the Django site will be deployed to Heroku, variables should be saved as config vars.

Conclusion

These are the two primary ways I recommend to handle different settings for different environments in a Django project. Personally, I prefer the second approach of using one settings module with environment variable inputs. Just make sure to reference all settings from the settings module (“from django.conf import settings”) instead of directly referencing environment variables!

BDD 101: Test Data

How should test data be handled in a behavior-driven test framework? This is a common question I hear from teams working on BDD test automation. A better question to ask first is, What is test data? This article will explain different types of test data and provide best practices for handling each. The strategies covered here can be applied to any BDD test framework.

Types of Test Data

Personally, I hate the phrase “test data” because its meaning is so ambiguous. For functional test automation, there are three primary types of test data:

  1. Test Case Values. These are the input and expected output values for test cases. For example, when testing calculator addition “1 + 2 = 3”, “1” and “2” would be input values, and “3” would be the expected output value. Input values are often parameterized for reusability, and output values are used in assertions.
  2. Configuration Data. Config data represents the system or environment in which the tests run. Changes in config data should allow the same test procedure to run in different environments without making any other changes to the automation code. For example, a calculator service with an addition endpoint may be available in three different environments: development, test, and production. Three sets of config data would be needed to specify URLs and authentication in each environment (the config data), but 1 + 2 should always equal 3 in any environment (the test case values).
  3. Ready State. Some tests require initial state to be ready within a system. “Ready” state could be user accounts, database tables, app settings, or even cluster data. If testing makes any changes, then the data must be reverted to the ready state.

Each type of test data has different techniques for handling it.

Test Case Values

There are four main ways to specify test case values in BDD frameworks, ranging from basic to complex.

In The Specs

The most basic way to specify test case values is directly within the behavior scenarios themselves! The Gherkin language makes it easy – test case values can be written into the plain language of a step, as step parameters, or in Examples tables. Consider the following example:

Scenario Outline: Simple Google searches
  Given a web browser is on the Google page
  When the search phrase "<phrase>" is entered
  Then results for "<phrase>" are shown
  
  Examples: Animals
    | phrase   |
    | panda    |
    | elephant |
    | rhino    |

The test case value used is the search phrase. The When and Then steps both have a parameter for this phrase, which will use three different values provided by the Examples table. It is perfectly suitable to put these test case values directly into the scenario because the values are small and descriptive.

Furthermore, notice how specific result values are not specified for the Then step. Values like “Panda Express” or “Elephant man” are not hard-coded. The step wording presumes that the step definition will have some sort of programmed mechanism for checking that result links relate to the search phrase (likely through regular expression matching).

Key-Value Lookup

Direct specification is great for small sets of simple values, but one size does not fit all needs. Key-value lookups are appropriate when test data is lengthier. For example, I’ve often seen steps like this:

Given the user navigates to "http://www.somewebsite.com/long/path/to/the/profile/page"

URLs, hexadecimal numbers, XML blocks, and comma-separated lists are all the usual suspects. While it is not incorrect to put these values directly into a step parameter, something like this would be more readable:

Given the user navigates to the "profile" page

Or even:

Given the user navigates to their profile page

The automation would store URLs in a lookup table so that these new steps could easily fetch the URL for the profile page by name. These steps are also more declarative than imperative and better resist changes in the underlying environment.
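
Below is a minimal sketch of such a lookup, written with behave (Python). The page names, URLs, and browser attribute are hypothetical illustrations:

# navigation_steps.py
from behave import given

PAGE_URLS = {
    "profile": "http://www.somewebsite.com/long/path/to/the/profile/page",
    "home":    "http://www.somewebsite.com/home",
}

@given('the user navigates to the "{page_name}" page')
def step_navigate_to_page(context, page_name):
    # Fail loudly if the page name is not in the lookup table
    url = PAGE_URLS[page_name]
    context.browser.get(url)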

Another way to use key-value lookup is to refer to a set of values by one name. Consider the following scenario for entering an address:

Scenario Outline: Address entry
  Given the profile edit page is displayed
  When the user sets the street address to "<street>"
  And the user sets the second address line to "<second>"  
  And the user sets the city to "<city>"
  And the user sets the state to "<state>"
  And the user sets the zipcode to "<zipcode>"
  And the user sets the country to "<country>"
  And the user clicks the save button
  Then ...

  Examples: Addresses
    | street | second | city | state | zipcode | country |
    ...

An address has a lot of fields. Specifying each in the scenario makes it very imperative and long. Furthermore, if the scenario is an outline, the Examples table can easily extend far to the right, off the page. This, again, is not readable. This scenario would be better written like this:

Scenario Outline: Address entry
  Given the profile edit page is displayed
  When the user enters the "<address-type>" address
  And the user clicks the save button
  Then ...

  Examples: Addresses
    | address-type |
    | basic        |
    | two-line     |
    | foreign      |

Rather than specifying all the values for different addresses, this scenario names the classifications of addresses. The step definition can be written to link the name of the address class to the desired values.
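
A minimal sketch of that linkage follows, with hypothetical address values and a hypothetical profile page object:

# address_steps.py
from behave import when

ADDRESSES = {
    "basic": {
        "street": "123 Main St", "second": "", "city": "Springfield",
        "state": "IL", "zipcode": "62704", "country": "USA",
    },
    "two-line": {
        "street": "456 Oak Ave", "second": "Apt 7B", "city": "Portland",
        "state": "OR", "zipcode": "97201", "country": "USA",
    },
}

@when('the user enters the "{address_type}" address')
def step_enter_address(context, address_type):
    # The profile page object is a hypothetical helper on the context
    context.profile_page.enter_address(ADDRESSES[address_type])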

Data Files

Sometimes, test case values should be stored in data files apart from the specs or the automation code. Reasons could be:

  • The data is simply too large to reasonably write into Gherkin or into code.
  • The data files may be generated by another tool or process.
  • The values are different between environments or other circumstances.
  • The values must be selected or switched at runtime (without re-compiling code).
  • The files themselves are used as payloads (ex: REST request bodies or file upload).

Scenario steps can refer to data files using the key-value lookup mechanisms described above. Lightweight, text-based file formats like CSV, XML, or JSON work the best. They can be parsed easily and efficiently, and changes to them can easily be diff’ed. Microsoft Excel files are not recommended because they have extra bloat and cannot be easily diff’ed line-by-line. Custom text file formats are also not recommended because custom parsing is an extra automation asset requiring unnecessary development and maintenance. Personally, I like using JSON because its syntax is concise and its parsing tools seem to be the simplest in most programming languages.
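
As a sketch, a step definition could load values from a JSON data file like this (the file path and structure are hypothetical):

# data_loader.py
import json

def load_address(address_type):
    # Read the full lookup table from a version-controlled data file
    with open("data/addresses.json") as f:
        addresses = json.load(f)
    return addresses[address_type]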

External Sources

An external dependency exists when the data for test case values exists outside of the automation code base. For example, test case values could reside in a database instead of a CSV file, or they could be fetched from a REST service instead of a JSON file. This would be appropriate if the data is too large to manage as a set of files or if the data is constantly changing.

As a word of caution, external sources should be used only if absolutely necessary:

  1. External sources introduce an additional point of failure. If that database or service goes down, then the test automation cannot run.
  2. External sources degrade performance. It is slower to get data from a network connection than from a local machine.
  3. Test case values are harder to audit. When they are in the specs, the code, or data files, history is tracked by version control, and any changes are easy to identify in code reviews.
  4. Test case values may be unpredictable. The automation code base does not control the values. Bad values can fail tests.

External sources can be very useful, if not necessary, for performance / stress / load / limits testing, but they are not necessary for the vast majority of functional testing. It may be convenient to mock external sources with either a mocking framework like Mockito or a dummy service.

Configuration Data

Config data pertains to the test environments, not the test cases. Test automation should never contain hard-coded values for config data like URLs, usernames, or passwords. Rather, test automation should read config data when it launches tests and make references to the required values. This should be done in Before hooks and not in Gherkin steps. In this way, automated tests can run on any configuration, such as the different test environments a product passes through before release to production.

Config data can be stored in data files or accessed through some other dependency. (Read the previous section for pros and cons of those approaches.) The config to use should be somehow dynamically selectable when tests run. For example, the path to the config file to use could be provided as a command line argument to the test launch command.
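
For example, with pytest, a command line option could select the config file, and a session-scoped fixture could read it once for all tests. A minimal sketch, with a hypothetical option name and file layout:

# conftest.py
import json
import pytest

def pytest_addoption(parser):
    parser.addoption("--config", default="config/test.json",
                     help="path to the environment config file")

@pytest.fixture(scope="session")
def config(request):
    # Read the config file once per test session
    path = request.config.getoption("--config")
    with open(path) as f:
        return json.load(f)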

Config data can be used to select test values to use at runtime. For example, different environments may need different test value data files. Conversely, scenario tagging can control what parts of config data should be used. For example, a tag could specify a username to use for the scenario, and a Before hook could use that username to fetch the right password from the config data.

For efficiency, only the necessary config data should be accessed or read into memory. In many cases, fetching the config data should also be done once globally, rather than before each test case.

Ready State

All scenarios have a starting point, and often, that starting point involves data. Setup operations must bring the system into the ready state, and cleanup operations must return the system to the ready state. Test data should leave no trace – temporary files should be deleted and records should be reverted. Otherwise, disk space may run out or duplicate records may fail tests. Maintaining the ready state between tests is necessary for true test independence.

During the Test Run

Simple setup and cleanup operations may be done directly within the automation. For example, when testing CRUD operations, records must be created before they can be retrieved, updated, or deleted. Setup would create a record, and cleanup would guarantee the record’s deletion. If the setup is appropriate to mention as part of the behavior, then it should be written as Given steps. This is true of CRUD operations: “Given a record has been created, When it is deleted, …”. If multiple scenarios share this same setup, then those Given steps should be put into a Background section.

However, sometimes setup details are not pertinent to the behavior at hand. For example, perhaps fresh authentication tokens must be generated for those CRUD calls. Those operations should be handled in Before hooks. The automation will take care of it, while the Gherkin steps can focus exclusively on the behavior.

No matter what, After hooks must do cleanup. It is incorrect to write final Then steps to do cleanup. Then steps should verify outcomes, not take more actions. Plus, the final Then steps will not be run if the test has a failure and aborts!
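
A minimal sketch of guaranteed cleanup using a behave After hook (the record list and deletion helper are hypothetical):

# environment.py
def delete_record(record_id):
    # Hypothetical helper: call the product's API to delete a record
    ...

def after_scenario(context, scenario):
    # Runs even when the scenario fails, unlike a final Then step
    for record_id in getattr(context, "created_records", []):
        delete_record(record_id)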

External Preparation

Some data simply takes too long to set up fresh for each test launch. Consider complicated user accounts or machine learning data: these are things that can be created outside of the test automation. The automation can simply presume that they exist as a precondition. These types of data require tool automation to prepare. Tool automation could involve a set of scripts to load a database, make a bunch of service calls, or navigate through a web portal to update settings. Automating this type of setup outside of the test automation enables engineers to more easily replicate it across different environments. Then, tests can run in much less time because the data is already there.

However, this external preparation must be carefully maintained. If any damage is done to the data, then test case independence is lost. For example, deleting a user account without replacing it means that subsequent test runs cannot log in! Along with setup tools, it is important to create maintenance tools to audit the data and make repairs or updates.

Advice for Any Approach

Use the minimal amount of test data necessary to test the functionality of the product under test. More test data requires more time to develop and manage. As a corollary, use the simplest approach that can pragmatically handle the test data. Avoid external dependencies as much as possible.

To minimize test data, remember that BDD is specification by example: scenarios should use descriptive values. Furthermore, variations should be reduced to input equivalence classes. For example, in the first scenario example on this page, it would probably be sufficient to test only one of those three animals, because the other two animals would not exhibit any different searching behavior.

Finally, be cautioned against randomization in test data. Functional tests are meant to be deterministic – they must always pass or fail consistently, or else test results will not be reliable. (Not only could this drive a tester crazy, but it would also break a continuous integration system.) Using equivalence classes is the better way to cover different types of inputs. Use a unique number counting mechanism whenever values must be unique.
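
For example, a simple counter can make values unique while keeping tests deterministic (the naming scheme here is a hypothetical illustration):

# unique_values.py
import itertools

_counter = itertools.count(1)

def unique_username(prefix="testuser"):
    # Each call returns a new, predictable, unique value
    return "%s_%d" % (prefix, next(_counter))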

10 Gotchas for Automation Code Reviews

Lately, I’ve been doing lots of code reviews. I probably spend about an hour every work day handling reviews for my team, both as a reviewer and an author. All of the reviews exclusively cover end-to-end test automation: new tests, old fixes, config changes, and framework updates. I adamantly believe that test automation code should undergo the same scrutiny of review as the product code it tests, because test automation is a product. Thus, all of the same best practices (like the guides here and here) should be applied. Furthermore, I also look for problems that, anecdotally, seem to appear more frequently in test automation than in other software domains. Below is a countdown of my “Top 10 Gotchas”. They are the big things I emphasize in test automation code reviews, in addition to the standard review checklist items.

#10: No Proof of Success

“Trust, but verify,” as Ronald Reagan would say. Tests need to run successfully in order to pass review, and proof of success (such as a log or a screenshot) must be attached to the review. In the best case, this means something green (or perhaps blue for Jenkins). However, if the product under test is not ready or has a bug, this could also mean a successful failure with proof that the critical new sections of the code were exercised. Tests should also be run in the appropriate environments, to avoid the “it-ran-fine-on-my-machine” excuse later.

#9: Typos and Bad Formatting

My previous post, Should I Reject a Code Review for Typos?, belabored this point. Typos and bad formatting reflect carelessness, cause frustration, and damage reputation. They are especially bad for Behavior-Driven Development frameworks.

#8: Hard-Coded Values

Hard-coded values often indicate hasty development. Sometimes, they aren’t a big problem, but they can cripple an automation code base’s flexibility. I always ask the following questions when I see a hard-coded value:

  • Should this be a shared constant?
  • Should this be a parameterized value for the method/function/step using it?
  • Should this be passed into the test as an external input (such as from a config file or the command line)?
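
As an illustration, here is a hypothetical before-and-after sketch for a hard-coded timeout value:

import os

# Before: a hard-coded value buried in the test code
TIMEOUT_SECONDS = 30

# After: a shared constant that an external input may override
DEFAULT_TIMEOUT_SECONDS = int(os.environ.get("TEST_TIMEOUT_SECONDS", "30"))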

#7: Incorrect Test Coverage

It is surprisingly common to see an automated test that doesn’t actually cover the intended test steps. A step from the test procedure may be missing, or an assertion may yield a false positive. Sometimes, assertions may not even be performed! When reviewing tests, keep the original test procedure handy, and watch out for missing coverage.

#6: Inadequate Documentation

Documentation is vital for good testing and good maintenance. When a test fails, the doc it provides (both in the logs it prints and in its very own code) significantly assists triage. Automated test cases should read like test procedures. This is one reason why self-documenting behavior-driven test frameworks are so popular. Even without BDD, test automation should be flush with comments and self-documenting identifiers. If I cannot understand a test by skimming its code in a code review, then I ask questions, and when the author provides answers, I ask them to add their answers as comments to the code.

#5: Poor Code Placement

Automation projects tend to grow fast. Along with new tests, new shared code like page objects and data models is added all the time. Maintaining a good, organized structure is necessary for project scalability and teamwork. Test cases should be organized by feature area. Common code should be abstracted from test cases and put into shared libraries. Framework-level code for things like inputs and logging should be separated from test-level code. If code is put in the wrong place, it could be difficult to find or reuse. It could also create a dependency nightmare. For example, non-web tests should not have a dependency on Selenium WebDriver. Make sure new code is put in the right place.

#4: Bad Config Changes

Even the most seemingly innocuous configuration tweak can have huge impacts:

  • A username change can cause tests to abort setup.
  • A bad URL can direct a test to the wrong site.
  • Committing local config files to version control can cause other teammates’ local projects to fail to build.
  • Changing test input values may invalidate test runs.
  • One time, I brought down a whole continuous integration pipeline by removing one dependency.

As a general rule, submit any config changes in a separate code review from other changes, and provide a thorough explanation to the reviewers for why the change is needed. Any time I see unusual config changes, I always call them out.

#3: Framework Hacks

A framework is meant to help engineers automate tests. However, sometimes the framework may also be a hindrance. Rather than improve the framework design, many engineers will try to hack around the framework. Sometimes, the framework may already provide the desired feature! I’ve seen this very commonly with dependency injection – people just don’t know how to use it. Hacks should be avoided because test automation projects need a strong overall design strategy.

#2: Brittleness

Test automation must be robust enough to handle bumps in the road. However, test logic is not always written to handle slightly unexpected cases. Here are a few examples of brittleness to watch out for in review:

  • Do test cases have adequate cleanup routines, even when they crash?
  • Are all exceptions handled properly, even unexpected ones?
  • Is Selenium WebDriver always disposed? (See the sketch after this list.)
  • Will SSH connections be automatically reconnected if dropped?
  • Are XPaths too loose or too strict?
  • Is a REST API response code of 201 just as good as 200?
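
For instance, a pytest fixture can guarantee WebDriver disposal even when a test fails. A minimal sketch:

# conftest.py
import pytest
from selenium import webdriver

@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    # Teardown runs even if the test fails
    driver.quit()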

#1: Redundancy

Redundancy is the #1 problem for test automation. I wrote a whole article about it: Why is Automation Full of Duplicate Code? Many testing operations are inherently repetitive. Engineers sometimes just copy-paste code blocks, rather than seek existing methods or add new helpers, to save development time. Plus, it can be difficult to find reusable parts that meet immediate needs in a large code base. Nevertheless, good code reviews should catch code redundancy and suggest better solutions.

Please let me know in the comments section if there are any other specific things you look for when reviewing test automation code!

Should I Reject a Code Review for Typos?

TL;DR: Yes!

Code reviews are essential to good software development. In a code review, peers read each other’s code and vote to approve or reject the changes before committing them to the main code base. Code reviews provide a platform for constructive feedback, accountability, and even learning opportunities. To make them effective, a team must establish best practices – not only for the code itself, but also for the review process. Good guides can be found here, here, and here for reference. Some rules, such as “no personal attacks” and “focus only on the changes at hand,” are universally agreeable. Other rules, however, can cause controversy.

One such controversial rule is the title of this blog post: Should a code review be rejected for typos? I use the word “typos” here to broadly include any sort of typographical shortcoming: misspellings, incorrect grammar, poor formatting, and even improper spacing. For example, a variable named somehting would be a typo.

There are valid reasons not to reject code reviews over typos. The code itself will still compile and run, so long as the use of typo’ed identifiers is consistent. Requiring corrections takes extra time, which in business costs more money. Authors may also take offense, especially if English (or the language of dialogue) is their second language.

Nevertheless, I strongly believe that yes, code reviews should be rejected for typos. Below are five reasons why:

  1. It corrects carelessness. Typos mean carelessness. Mistakes are bound to happen, but pervasive typos indicate a deeper, systemic problem. Reviews are a measure of accountability between peer engineers to prevent carelessness. Being tough on small things encourages engineers to straighten up on all things.
  2. It prevents future frustration. People expect things to be spelled and formatted the right way. Compiler error messages are often cryptic and may not intuitively point to typos. Imagine trying to call the do_stuff method, only to discover after an hour of hair-pulling, fist-banging, and cursing at the screen that the original method was named do_stuf. Frustration is especially acute when BDD Gherkin steps have typos. Allowing typos to be committed to the code base increases the chances of this type of frustration.
  3. It improves readability. Typos and poor formatting are distracting. They make it harder to read code. For example, I remember once reading a Perl source file in which every single line had an arbitrary number of indent spaces. It was impossible to visually align function bodies, if statements, and loops. I had to reformat it before I could work on it.
  4. It boosts confidence in the code and in the team. Imagine if you saw typos in this blog post. Each typo you find would lower your confidence in my writing skills. The same is true for software: as a reviewer, when I see typos, I lose confidence in the quality of the code and in the author’s skills because I see carelessness. Eliminating typos not only makes code better, but it also makes people think better of the code.
  5. It reinforces high standards. Quality is not limited to functionality. Poorly-written code may run correctly, but it will not be maintainable. Upholding high standards in code review will result in overall better code output. Letting small things slip through will, over time, atrophy a code base.

If “reject” sounds like a harsh word, it may be beneficial for a review process to indicate the severity of feedback points. For example, broken code could be “critical”, while typos could be “minor.” Nevertheless, typos should not be committed to the main code base, and thus their code reviews should be rejected.

Static or Singleton

Let’s face it: there are times in object-oriented programming when we need to share something somewhat globally. When I say that, many purists will scream, “But global variables are evil and should never be used!” Hence, I used the word “somewhat” – attributes (variables) and behaviors (methods) can be class members, allowing them to be used wherever their class is in scope. Things like constants and stateless utility methods are perfect candidates to be class members; Java’s Math class is a perfect example. Many engineers use the word “static” interchangeably with “class member” because languages like Java and C# use the static keyword to denote class members.

Class Membership Limitations

While going “static” is great for one-off constants and stateless methods, it is a poor solution for managing global state. Class methods require class variables to manage state. Class variables are mutable (unlike constants which are immutable) and require extra precautions for handling. And as class variables, the state is divorced from many benefits offered by the object-oriented marriage between attributes and behaviors. A much better solution for global state is the singleton pattern.

The Singleton Pattern

The singleton pattern is one of the best known, and most controversial, design patterns in object-oriented programming. A singleton class is restricted to constructing only one instance of an object so that one instance may be shared somewhat globally. Love it or hate it, the singleton pattern is quite useful when applied appropriately – factories, state machines, and even test automation often employ singletons.

For reference, below is a thread-safe singleton class written in Java, courtesy of Wikipedia. The single instance may be accessed anywhere by simply calling Singleton.getInstance(). Any attributes or behaviors are added as instance members (without “static”).

public final class Singleton {
    private static volatile Singleton instance = null;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized(Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

Singleton Benefits

The singleton pattern is a much better solution for managing global state for several reasons.

  • True Object Orientation: Singleton objects may be treated as any other object (like POJOs). Classes themselves are not plain-old objects, thus limiting their usability.
  • Inheritance: Singleton classes may employ inheritance to add or modify attributes and behaviors. Class members cannot be manipulated through inheritance.
  • Lazy Initialization: Singleton classes can easily use lazy initialization to avoid constructing the instance until it is first used. Lazy initialization for class variables is possible but often more difficult. Many times, class variables are simply initialized when declared, which would unnecessarily bloat memory if the variables are never used.
  • Object Pooling: The singleton pattern is essentially a special case of the object pool pattern, in which the pool size is one. Thus, a singleton class could easily be updated to handle a pool of objects instead of just a single one.
  • Cleaner Implementation: Singleton state is nicely encapsulated. Singleton classes provide a sensible central place for shared stuff. Other classes don’t have the “static” clutter. Object references need not be passed around.

Anecdotally, engineers who use design patterns (like singleton) have a better grasp on good object-oriented principles, and they tend to make better, more thoughtful software design decisions. Adding static members to a class tends to be a hasty decision made for expediency – it will work, but it may not be the best practice.

TL;DR

  • Use class members for one-off constants and stateless methods.
  • Use singletons for managing global, mutable state.

The Dot-Dot-Dot

Warning: This post has nothing to do specifically with software. It is rather a personal musing over communication styles.

Throughout my years in the professional work environment, I’ve noticed a trend that bothers me: the inappropriate use of the ellipsis in textual communication. For example:

Hi Andy… Automated tests from the ABC build job are failing this morning… I don’t know why… Please do the needful… Thanks…

Dot-dot-dot… What meaning did the author intend to convey?

  • Did their finger get stuck on the keyboard?
  • Did they intend to use a period or a comma?
  • Do they want to textually capture pauses between their phrases?
  • Am I supposed to assume something that they haven’t said?
  • Are they giving me a specific task to do or simply speculating?
  • Did they lose their train of thought?
  • Do they doubt their words?
  • Are they half asleep?
  • Are they wishy-washy?
  • Are they being passive-aggressive?
  • Are they complaining to themselves?
  • Are they upset?
  • Are they in a bad mood?
  • Are they mad at me?
  • Am I in trouble?

I know I’m not the only one standing on this soap box – a friend recently triggered me by tweeting about the same problem. Blame my minor in creative writing.


This is not merely a minor nuance. Ellipsis abuse causes ambiguity, doubt, and stress. It can cause uncertainty in office relationships. Terse textual communication is already crude, and every jot and tittle implies meaning, whether intended or accidental.

In professional environments, always strive for clear, concise, and direct communication. Good communication skills are more than just a resume tagline. We should all pay close attention to how we write. Be on point, not on three points.