
Can Performance Tests Be Unit Tests?

A friend recently asked me this question (albeit with some rephrasing):

Can a unit test be a performance test? For example, can a unit test wait for an action to complete and validate that the time it took is below a preset threshold?

I cringed when I heard this question, not only because the suggested practice is a poor one, but also because it reflects common misunderstandings about types of testing.

QA Buzzword Bingo

The root of this misunderstanding is the lack of standard definitions for types of tests. Every company where I’ve worked has defined test types differently. Individuals often play fast and loose with the buzzwords, especially when new hires bring different buzzwords from their previous companies. Here are examples of some of those buzzwords:

  • Unit testing
  • Integration testing
  • End-to-end testing
  • Functional testing
  • System testing
  • Performance testing
  • Regression testing
  • Test-’til-it-breaks
  • Measurements / benchmarks / metrics
  • Continuous integration testing

And here are some games of buzzword bingo gone wrong:

  • Trying to separate “systemic” tests from “system” tests.
  • Claiming that “unit” tests should interact with a live web page.
  • Separating “regression” tests from other test types.

Before any meaningful discussions about testing can happen, everyone must agree to a common and explicit set of testing type definitions. For example, this could be a glossary on a team wiki page. Whenever I have discussions with others on this topic, I always seek to establish definitions first.

What defines a unit test?

Here is my definition:

A unit test is a functional, white box test that verifies the correctness of a single unit of software code. It is functional in that it gives a deterministic pass-or-fail result. It is white box in that the test code directly calls the product source code under test. The unit is typically a function or method, and there should be separate unit tests for each equivalence class of inputs.

Unit tests should focus on one thing, and they are typically short – both in lines of code and in execution time. Unit tests become extremely useful when they are automated. Every major programming language has unit test frameworks; popular examples include JUnit, xUnit.net, and pytest. These frameworks often integrate with code coverage tools, too.
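
As a minimal sketch of what this looks like in practice, here are a few pytest-style unit tests against a hypothetical is_palindrome function. In a real project the function would be imported from the product code rather than defined in the test module, with one test per equivalence class of inputs:

```python
# test_palindrome.py -- a minimal pytest sketch; is_palindrome is a hypothetical
# stand-in for product code (normally imported, not defined in the test module).

def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards (case-insensitive)."""
    normalized = text.lower()
    return normalized == normalized[::-1]

def test_palindrome_word():
    # Equivalence class: palindromic input
    assert is_palindrome("Racecar")

def test_non_palindrome_word():
    # Equivalence class: non-palindromic input
    assert not is_palindrome("panda")

def test_empty_string():
    # Boundary case: empty input
    assert is_palindrome("")
```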

In continuous integration, automated unit tests can be run automatically every time a new build is made to indicate if the build is good or bad. That’s why unit tests must be deterministic – they must yield consistent results in order to trust build status and expedite failure triage. For example, if a build was green at 10am but turned red at 11am, then, so long as the tests were deterministic, it is reasonable to deduce that a defective change was committed to the code line between 10-11am. Good build status indicates that the build is okay to deploy to a test environment and then hopefully to production.

(As a side note, I’ve heard arguments that unit tests can be black box, but I disagree. Even if a black box test covers only one “unit”, it is still at least an integration test because it covers the connection between the actual product and some caller (script, web browser, etc.).)

What defines a performance test?

Again, here’s my definition:

A performance test is a black box test that measures aspects of a controlled system. It is black box in that it should test a real, live, deployed product. (Even if the build under test has special instrumentation, the performance test itself should still be run in a black box fashion.) The aspects to measure must be pre-determined, and the system under test must be controlled in order to achieve consistent measurements.

Performance tests are not functional tests:

  • Functional tests answer if a thing works.
  • Performance tests answer how efficiently a thing works.

Rather than yield pass-or-fail results, performance tests yield measurements. These measurements could track things as general as CPU or memory usage, or they could track specific product features like response times. Once measurements are gathered, data analysis should evaluate how good they are. This often means comparing them to other measurements, which could come from older releases or from runs under different environment controls.
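
As a rough sketch of the difference, a performance check might gather timing samples against a deployed endpoint and report statistics for later comparison, rather than asserting pass or fail. (The endpoint URL and sample count here are hypothetical.)

```python
# measure_response_times.py -- a rough sketch; the endpoint and sample count are hypothetical.
import statistics
import time
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # a deployed, controlled system under test
SAMPLES = 20

def measure_response_times(url: str, samples: int) -> list:
    """Return response times in seconds for repeated requests to the endpoint."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    times = measure_response_times(TARGET_URL, SAMPLES)
    # Report measurements for analysis and comparison (e.g., against an older release),
    # rather than reducing them to a single pass-or-fail verdict.
    print(f"min={min(times):.4f}s  median={statistics.median(times):.4f}s  max={max(times):.4f}s")
```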

Performance testing is challenging to set up and measure properly. While unit tests will run the same in any environment, performance tests are inherently sensitive to the environment. For example, an enterprise cloud server will likely have better response times than a 7-year-old MacBook.

Why should performance tests not be unit tests?

Returning to the original question, it is theoretically possible to frame a performance test as a functional test by validating a specific measurement against a preset threshold. However, there are 3 main reasons why a unit test should not be a performance test:

  1. Performance checks in unit tests make the build process more vulnerable to environmental issues. Bad measurements from environment issues could cause unit tests to fail for reasons unrelated to code correctness. Any unit test failure will block a build, trigger triage, and stall progress. This means time and money. The build process must not be interrupted by environment problems.
  2. Proper performance tests require lots of setup beyond basic unit test support. Unit tests should be short and sweet, and unit testing frameworks don’t have the tools needed to take good measurements. Unit tests are also rarely run in tightly controlled environments. It would take a lot of work to properly put performance checks into a unit test.
  3. Unit tests are white box, but performance tests are black box. Unit test results determine if a build is good or bad. Thus, they run before a build is labeled “ready.” Performance tests need to interact with an actual build. Thus, they run after a build is ready and deployed. This is a natural progression: run the most basic tests first, and then run the more advanced tests. It separates these two test types.
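
To make the original question concrete, here is a sketch of the kind of timing assertion being discouraged, with a hypothetical process_order function standing in for the product code. Note how an overloaded build agent alone could turn this test red:

```python
import time

def process_order(order_id: int) -> None:
    """Hypothetical stand-in for the product action under test."""
    time.sleep(0.05)  # simulate work

def test_process_order_is_fast():  # an anti-pattern, per the reasons above
    start = time.perf_counter()
    process_order(42)
    elapsed = time.perf_counter() - start
    # A slow build agent or noisy neighbor can fail this check
    # even when the code under test is perfectly correct.
    assert elapsed < 0.1
```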

These points are based on the explicit definitions provided above. Note that I am not saying that performance testing should not be done, but rather that performance checks should not be part of unit testing. Unit testing and performance testing should be categorically separate types of testing.

The Best Programming Language for Test Automation

Which programming languages are best for writing test automation? There are several choices – just look at this list on Wikipedia and this cool decision graph for choosing languages. While this topic can quickly devolve into a spat over personal tastes, I do believe there are objective reasons why some languages are better for automating test cases than others.

Dividing Test Layers

First of all, unit tests should always be written in the same language as the product under test. Otherwise, they would definitionally no longer be unit tests! Unit tests are white box and need direct access to the product source code. This allows them to cover functions, methods, and classes.

The question at hand pertains more to higher-layer functional tests. These tests fall into many (potentially overlapping) categories: integration, end-to-end, system, acceptance, regression, and even performance. Since they are all typically black box, higher-layer tests do not necessarily need to be written in the same language as the product under test.

My Choices

Personally, I think Python and Java are today’s best languages for test automation. Python, in particular, is wonderful because its conciseness lets the programmer expressively capture the essence of the test case. Java has a rich platform of tools and packages, and continuous integration with Java is easy with Maven/Gradle/ANT and Jenkins. I’ve heard that Ruby is another good choice for reasons similar to Python, but I have not used it myself. JavaScript is good for pure web app testing (à la Protractor) but not so good for general purposes.

On the other hand, languages like C, C++, C#, and Perl are less suitable for test automation. C and C++ are very low-level and lack robust frameworks. Although C# as a language is similar to Java, it lives in the Microsoft bubble: .NET development tools are not as friendly or as free, and command line operations are painful. Perl simply does not provide the consistency and structure for scalable and self-documenting code. Functional languages like Lisp and Haskell are also poor choices for test automation because step-by-step test case procedures do not translate well into them. They may be useful, however, for some lower-level data testing.

8 Criteria for Evaluation

There are eight major points to consider when evaluating any language for automation. These criteria specifically assess the language from a perspective of purity and usability, not necessarily from a perspective of immediate project needs.

  1. Usability. A good automation language is fairly high-level and should handle rote tasks like memory management. A low learning curve is preferable, and fast development speed matters when deadlines loom.
  2. Elegance. The process of translating test case procedures into code must be easy and clear. Test code should also be concise and self-documenting for maintainability.
  3. Available Test Frameworks. Frameworks provide basic needs such as assertions, setup/cleanup, logging, and reporting. Examples include Cucumber and xUnit (see the pytest sketch after this list).
  4. Available Packages. It is better to use off-the-shelf packages for common operations, such as web drivers (Selenium), HTTP requests, and SSH.
  5. Powerful Command Line. A good CLI makes launching tests easy. This is critical for continuous integration, where tests cannot be launched manually.
  6. Easy Build Integration. Build automation should launch tests and report results. Difficult integration is a DevOps nightmare.
  7. IDE Support. Because Notepad and vim just don’t cut it for big projects.
  8. Industry Adoption. Support is good. If the language remains popular, then frameworks and packages will be maintained well.
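
As a small illustration of criteria 2 and 3, here is a pytest sketch, with hypothetical names, that uses a fixture for setup and cleanup and a plain assertion for the check:

```python
# test_cart.py -- a small pytest sketch; all names are hypothetical.
import pytest

def add_item(items: list, name: str, price: float) -> float:
    """Stand-in for product code; adds an item and returns the running total."""
    items.append((name, price))
    return sum(p for _, p in items)

@pytest.fixture
def cart():
    items = []        # setup: a fresh cart for each test
    yield items
    items.clear()     # cleanup: runs after the test, even if it fails

def test_total_updates_when_item_added(cart):
    total = add_item(cart, "bamboo", 3.50)
    assert total == 3.50  # concise assertion with rich failure output from pytest
```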

Below, I rated each point for a few popular languages:

Criterion                  Python    Java      C#        C/C++     Perl
Usability                  awesome   good      good      terrible  poor
Elegance                   awesome   good      good      poor      poor
Available Test Frameworks  awesome   awesome   good      okay      poor
Available Packages         awesome   awesome   good      good      good
Powerful Command Line      awesome   good      terrible  poor      okay
Easy Build Integration     good      awesome   poor      poor      poor
IDE Support                good      awesome   okay      okay      terrible
Industry Adoption          awesome   awesome   good      terrible  terrible

Conclusion

I won’t shy away from my preference for Python and Java, but I recognize that they may not be the right choice for all situations. For example, we use C# at my current job because our app is written in C# and management wants developers and QA to be on the same page.

Now, a truly nifty idea would be to create a domain-specific language for test automation, but that must be a topic for another post.

10 Things You Lose Without Automation

Automation has a lot of potential to improve software development. Unfortunately, though, automation is often seen as a luxury. Deadlines in the real world are unforgiving, and since test code isn’t product code, automation tasks are given lower priority and dunked into the black hole of the backlog. Some might argue that this is okay because it is lean or because a new project is just getting started. Once, I even heard it quipped that the first ones cut during a layoff are the automation folks. And it is true that automation requires a nontrivial resource investment.

However, I want to turn the tables. Instead of thinking about automation in terms of the opportunity, think about automation in terms of the opportunity cost. What happens if you don’t automate your tests from the get-go?  There are 10 major things you lose:

#1: Man Hours

Automated tests will run automatically. Manual tests must be run manually. That’s true by definition. If you run a test only once, then automation has no return on investment. But if you run a test more than once, automation saves a tester from repeating the same steps. Plus, it’s easy: push the button and wait for results. Automated tests almost always run faster than manual tests, too. Considering that time is money and engineer salaries aren’t cheap, man hours are a clear opportunity cost.

#2: Coverage

Automated tests can achieve greater coverage than manual tests, particularly for regression testing. As product development progresses, the sheer number of test cases increases. For example, in Agile, new tests will be created every sprint. Older tests must be run periodically to verify that new features don’t break existing features. If regression tests are manual, then testers must burn hours grinding through the same tests repeatedly. Often, for expediency, this means skipping some tests – not out of laziness, but as part of a risk-based approach: weaker coverage and a higher risk of missing bugs are accepted for the sake of shorter testing time. If those regression tests were automated, there would be no reason to shrink coverage, because they would be easy to run.

#3: Consistency

People make mistakes. It’s human nature – nobody’s perfect. And manual tests are prone to human error because humans run them. I remember how nervous I felt running manual on-call system checks at MaxPoint for the first time, afraid that I would miss a problem that could bring down a million-dollar bidding system.  Automated scripts run the same way every time.

#4: Protection

Continuous integration (CI) protects code against defects by building and testing every code change in real time. A CI system will automatically trigger tests all the time. Tests not running in CI (like manual tests) are effectively dead. At NetApp, failing code changes would immediately be kicked out of the code line, making automated tests act like a vaccine against bugs. On the other hand, I remember a project at MaxPoint that was riddled with bugs and perpetually delayed. When I asked the developers to see their unit tests, they said they never wrote unit tests because “it wasn’t a requirement.”

#5: Delivery Time

Continuous delivery (CD) is the natural extension of continuous integration, in which software products can automatically be delivered (and potentially even deployed) as the final step in a CI pipeline. This is how big companies like Google, Facebook, and Netflix can deliver so rapidly. No automation means no CD.

#6: Results and Metrics

Non-engineers (managers, product owners, scrum masters, oh my!) love to ask questions about tests.  “Are we red or green?” “How many tests do we have for this feature?” “What’s our coverage?” “How often do we run the tests?” Automated tests simply yield more accurate and more comprehensive results. Automation can also generate test reports, so engineers don’t need to waste time drafting emails or updating wiki pages.

#7: Accountability

Numbers don’t lie. Scripts don’t lie. Engineers typically don’t lie, but… results from manual tests can have a fudge factor, or a mistake in reporting, or any other sort of inconsistency. Inaccurate results may lead to poor business decisions. Automated results tell it like it is.

#8: Creativity

Manual testing can devolve into repetitive, menial labor: just follow steps 1-10 again and again and again. It would be much more effective for manual testers to focus on exploratory testing rather than deterministic testing. While automated tests can cover the fixed, repetitive test scenarios, exploratory testing lets testers find creative ways to uncover defects and judge how well a product actually works. Lack of automation ties up human capital.

#9: Peace of Mind

Are you sure that your product is “good”? Can you run enough tests to make sure? I learned the value of peace of mind while I was still in college. In my compiler theory course, I had to develop a simple programming language and build a compiler for it. Every week, we had to add new language features: arithmetic, strings, arrays, functions, etc. And every week, I wrote a slew of mini-programs to test grammar updates to my new language. By the time the project was complete, I had 1000+ automated test cases running through JUnit with 100% coverage, and the entire suite took a mere few minutes to run. And there were many late nights when the tests caught bugs in my language right away before committing code. There was no way I could have passed that class without my automated tests.

#10: Quality

The ultimate purpose of test automation is product quality. Having automation doesn’t necessarily mean product quality is good, but not having automation severely limits how quality can be pursued. Anecdotally, I’ve seen much better code quality come out of projects that have good test automation than ones without it. If I were a product owner, I know what I would want.