A friend recently asked me this question (albeit with some rephrasing):
Can a unit test be a performance test? For example, can a unit test wait for an action to complete and validate that the time it took is below a preset threshold?
I cringed when I heard this question, not only because the practice it describes is poor, but also because it reflects common misunderstandings about types of testing.
QA Buzzword Bingo
The root of this misunderstanding is the lack of standard definitions for types of tests. Every company where I’ve worked has defined test types differently. Individuals often play fast and loose with buzzword bingo, especially when new hires bring different buzzwords from previous jobs. Here are examples of some of those buzzwords:
- Unit testing
- Integration testing
- End-to-end testing
- Functional testing
- System testing
- Performance testing
- Regression testing
- Measurements / benchmarks / metrics
- Continuous integration testing
And here are some games of buzzword bingo gone wrong:
- Trying to separate “systemic” tests from “system” tests.
- Claiming that “unit” tests should interact with a live web page.
- Separating “regression” tests from other test types.
Before any meaningful discussions about testing can happen, everyone must agree to a common and explicit set of testing type definitions. For example, this could be a glossary on a team wiki page. Whenever I have discussions with others on this topic, I always seek to establish definitions first.
What defines a unit test?
Here is my definition:
A unit test is a functional, white box test that verifies the correctness of a single unit of software code. It is functional in that it gives a deterministic pass-or-fail result. It is white box in that the test code directly calls the product source code under test. The unit is typically a function or method, and there should be separate unit tests for each equivalence class of inputs.
Unit tests should focus on one thing, and they are typically short – both in lines of code and in execution time. Unit tests become extremely useful when they are automated. Every major programming language has unit test frameworks. Some popular examples include JUnit, xUnit.net, and pytest. These frameworks often integrate with code coverage, too.
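To make the definition concrete, here is a minimal pytest-style sketch. The `discount_price` function is hypothetical, and the tests use only plain asserts so the example stays self-contained. Each equivalence class of inputs gets its own short, deterministic, pass-or-fail test.

```python
# A hypothetical product function under test.
def discount_price(price, percent):
    """Return price reduced by percent; reject invalid inputs."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid input")
    return round(price * (100 - percent) / 100, 2)


# One unit test per equivalence class of inputs.
def test_typical_discount():
    assert discount_price(200.0, 25) == 150.0


def test_zero_discount():
    assert discount_price(80.0, 0) == 80.0


def test_full_discount():
    assert discount_price(80.0, 100) == 0.0


def test_negative_price_rejected():
    try:
        discount_price(-1.0, 10)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Each test calls the product code directly (white box), checks one behavior, and yields the same result on every run, which is exactly what a build pipeline needs.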
In continuous integration, automated unit tests can be run automatically every time a new build is made to indicate if the build is good or bad. That’s why unit tests must be deterministic – they must yield consistent results in order to trust build status and expedite failure triage. For example, if a build was green at 10am but turned red at 11am, then, so long as the tests were deterministic, it is reasonable to deduce that a defective change was committed to the code line between 10am and 11am. Good build status indicates that the build is okay to deploy to a test environment and then hopefully to production.
(As a side note, I’ve heard arguments that unit tests can be black box, but I disagree. Even if a black box test covers only one “unit”, it is still at least an integration test because it covers the connection between the actual product and some caller (script, web browser, etc.).)
What defines a performance test?
Again, here’s my definition:
A performance test is a test that measures aspects of a controlled system. It is white box if it tests code directly, such as profiling individual functions or methods. It is black box if it tests a real, live, deployed product. Typically, when people talk about testing software performance, they mean black box style testing. The aspects to measure must be pre-determined, and the system under test must be controlled in order to achieve consistent measurements.
Performance tests are not functional tests:
- Functional tests answer if a thing works.
- Performance tests answer how efficiently a thing works.
Rather than yield pass-or-fail results, performance tests yield measurements. These measurements could track things as general as CPU or memory usage, or they could track specific product features like response times. Once measurements are gathered, data analysis should evaluate the goodness of the measurements. This often means comparison to other measurements, which could be from older releases or with other environment controls.
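As an illustration of measurements rather than verdicts, here is a minimal timing sketch in Python. The workload is a stand-in for a real product feature, and the harness returns summary statistics for later analysis instead of a pass/fail result.

```python
import statistics
import time


def measure(func, runs=5):
    """Time several runs of func and return summary measurements,
    not a pass/fail verdict."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        samples.append(time.perf_counter() - start)
    return {
        "min_s": min(samples),
        "median_s": statistics.median(samples),
        "max_s": max(samples),
    }


# A stand-in workload representing some product operation to measure.
def workload():
    sum(i * i for i in range(100_000))


results = measure(workload)
# These numbers feed into analysis and trending, e.g. comparison
# against measurements from a prior release.
```

Note that the numbers themselves say nothing about “good” or “bad” – that judgment belongs to a separate analysis step with context about the environment and prior releases.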
Performance testing is challenging to set up and measure properly. While unit tests will run the same in any environment, performance tests are inherently sensitive to the environment. For example, an enterprise cloud server will likely have better response time than a 7-year-old MacBook.
Why should performance tests not be unit tests?
Returning to the original question, it is theoretically possible to frame a performance test as a functional test by validating a specific measurement against a preset threshold. However, there are 3 main reasons why a unit test should not be a performance test:
- Performance checks in unit tests make the build process more vulnerable to environmental issues. Bad measurements caused by the environment could make unit tests fail for reasons unrelated to code correctness. Any unit test failure blocks a build, triggers triage, and stalls progress. That costs time and money. The build process must not be interrupted by environment problems.
- Proper performance tests require lots of setup beyond basic unit test support. Unit tests should be short and sweet, and unit testing frameworks don’t have the tools needed to take good measurements. Unit tests also often do not run in tightly controlled environments. It would take a lot of work to properly put performance checks into a unit test.
- Performance tests yield metrics that should not be shoehorned into a binary pass/fail status. Performance data is complex, rich with information, and often volatile. Teams should analyze performance data, especially over time.
These points are based on the explicit definitions provided above. Note that I am not saying that performance testing should not be done, but rather that performance checks should not be part of unit testing. Unit testing and performance testing should be categorically separate types of testing.
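One way to honor this separation, sketched below with hypothetical metric names, is to have performance runs record measurements for trend analysis and leave any budget comparison to a separate analysis step, rather than asserting a threshold inside a build-blocking unit test.

```python
import json
import time


def record_measurement(name, seconds, store):
    """Append a measurement record instead of failing a build on a threshold."""
    store.append({"metric": name, "seconds": seconds, "ts": time.time()})


def analyze(store, name, budget_s):
    """Separate analysis step: flag regressions without blocking the build."""
    samples = [r["seconds"] for r in store if r["metric"] == name]
    latest = samples[-1]
    return {"latest_s": latest, "over_budget": latest > budget_s}


# Hypothetical measurements from two performance runs.
store = []
record_measurement("login_response", 0.42, store)
record_measurement("login_response", 0.47, store)

# Analysis happens out of band, with its own budget and trending.
report = analyze(store, "login_response", budget_s=0.5)
print(json.dumps(report))
```

The build stays green either way; the analysis step is where a team decides whether a measurement trend warrants action.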
Very good article! Thanks a lot!
Not sure if I agree with your definition of Performance Test – “A performance test is a black box test….”. A Performance Test is any Test that evaluates or investigates Performance aspect of the code, application or system. Compare this to functional test, which is any test that evaluates or investigates functional aspect of the code, application or system. As such, it can be both white and black box tests – you could write Performance tests at unit (function) level. Multi-threaded execution of a function is probably one of the best use cases of evaluative tests w.r.t cost/effort vs value. You can check for response times, resource leaks, thread blocks, race conditions, etc. Just a thought.
Hi Sandeep! I read your comment and then reread my article. (It’s been almost three years since I wrote it.) I believe you are correct to say that a performance test could be white box. I will update my article. Thanks!
Hi Andy – coming across your article again as part of some research into continuous performance testing as part of CI/CD. While I agree that unit testing and performance testing are certainly separate, I do believe that certain units of code can/should be tested for performance (as Sandeep suggests in his comment). As you suggest, these tests are not deterministic in the same way functional unit tests are, and therefore shouldn’t be allowed to fail the build in most cases. However, I believe there is value in the data that is captured – especially when trended over builds. Of course there needs to be some discretion here as there is no value in measuring every class/method in this way.
Great article! This is a great example of why teams need to get on the same page with respect to definitions of test types!