
Quality Metrics 101: Test Quality

New to the series? Start from the beginning!

Test quality metrics make sure that testing efforts are worthwhile. Though “testing” and “quality” may be synonymous as organizational titles, testing is only one method of enforcing quality. In software, it just happens to be the most effective one. Testing is expensive, though, because it slows down time-to-market. Some people even devalue testing work because it doesn’t add new features to a product. Below are aspects of test quality to consider measuring to prove and even increase the value of testing efforts.


Coverage

Quality Aspect: How much functionality is covered by tests?
Desired State: High – More coverage means less risk. Note that 100% complete coverage is impossible.
Metrics: Coverage may be measured for both manual and automated tests. However, automated test coverage is usually more important because automated tests are meant to be defensive without gaps.

Code Coverage – Code coverage tools check what paths of code are actually exercised by automated tests. While they cannot tell if tests are good or bad, they are great for exposing gaps in coverage. Unit test code coverage is easy because most frameworks have plugins, but above-unit code coverage requires instrumented builds. Look for tools that track more than just lines of code. Target 90%+ coverage. Add new tests to cover any major gaps.
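To illustrate, here is a minimal sketch of gating a build on unit test coverage with Python’s coverage.py package (pip install coverage). The package name “myapp”, the “tests” directory, and the 90% threshold are assumptions for the example, not a prescription:

import unittest

import coverage

# Measure branch coverage, not just line coverage, for the "myapp" package.
cov = coverage.Coverage(branch=True, source=["myapp"])
cov.start()

# Run the unit test suite while measurement is active.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()

# report() prints a per-file table and returns the total coverage percentage.
total = cov.report(show_missing=True)
if total < 90.0:
    raise SystemExit("Coverage %.1f%% is below the 90%% target" % total)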

Feature Coverage – Feature coverage is a manual way to score features on test coverage based on planning and review. For this metric to be successful, a team must consistently specify features well; otherwise, this metric will give useless data. Gherkin scenarios are a great way to do this – for example, each scenario can be marked as untested, manual, or automated. Feature coverage is unscientific, but it can give a better picture of the functionality actually covered (instead of just the raw lines of code covered).
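For example, coverage status could be tracked with tags on each Gherkin scenario (the tag names below are just one possible convention):

@automated
Scenario: Successful login
  Given the login page is displayed
  When the user submits valid credentials
  Then the home page is displayed

@manual
Scenario: Password reset email
  Given a registered user who forgot their password
  When the user requests a password reset
  Then a reset link is sent to the user's email address

Scoring features then becomes a matter of counting tags during planning and review.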

Automation Debt – Technical debt increases when tests are not automated and thus lack coverage. Teams are often unable to automate all tests originally planned, and test automation is frequently jettisoned from the Definition of Done. Or, a project may not start automating tests until a large chunk of the project is already complete. The best way to track automation debt is to create a backlog for incomplete automation work. Backlog tasks can be sized, prioritized, and planned according to whatever development process is used (Scrum, Kanban, etc.). Appropriate process metrics can then be used to understand the magnitude of the work and, thus, the lack of automated test coverage.

Warning: Test case count, test length, and test code line count are terrible metrics for coverage because they encourage largeness rather than uniqueness. The goal of testing is to have the greatest coverage with the lowest risk for the least work. Anybody can blindly write tests or variations that add no meaningful value.


Reliability

Quality Aspect: Do automated tests consistently reach completion? And how trustworthy are the results?
Desired State: High – Reliability means less time for failure triage or (horrors) reruns.
Metrics: Failure Reasons – Track the failure reason for each test case run. Ideally, tests should fail only when they discover product bugs. However, tests may also fail when:

  • an acceptable product change caused an automation error because tests were not updated, indicating poor communication or careless updates
  • an environmental change or interruption caused an automation error, indicating deployment or sysadmin problems
  • the automation code itself has a bug

Remember, “successful” test runs either pass with appropriate coverage or fail due to product bugs. “Unsuccessful” test runs fail or crash for reasons other than product bugs. Aim to minimize unsuccessful test runs. Never hack a test just to get it passing – always work to fix the problems behind test failures.
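As a sketch of how failure reasons might be tallied, consider this hypothetical Python snippet (the triage data shape is an assumption):

from collections import Counter
from enum import Enum

class Reason(Enum):
    PASSED = "passed"
    PRODUCT_BUG = "product bug"  # a failure, but a "successful" run
    TEST_NOT_UPDATED = "test not updated for product change"
    ENVIRONMENT = "environment change or interruption"
    AUTOMATION_BUG = "bug in the automation code"

def summarize(triaged_runs):
    """triaged_runs: list of (test name, Reason) pairs from triage."""
    counts = Counter(reason for _, reason in triaged_runs)
    successful = counts[Reason.PASSED] + counts[Reason.PRODUCT_BUG]
    print("Successful runs: %d of %d" % (successful, len(triaged_runs)))
    for reason in Reason:
        print("  %s: %d" % (reason.value, counts[reason]))

Trending these counts over time shows whether reliability is improving or degrading.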


Speed

Quality Aspect: How much time do test runs take?
Desired State: Fast – Tests should complete in the shortest time possible.
Metrics: Test Case Execution Time – Test case execution times indicate the efficiency of the automation code. Track the start-to-end execution time for every individual test case run. Then, analyze the data using common sense. For example, outliers may be inefficient tests that need tuning or should be removed altogether. It may be wise to separate test runs by result type or coverage area. Historical data can also be used as a baseline to determine performance impacts when making cross-cutting automation changes.
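For instance, outliers could be flagged with a simple statistical check (a minimal sketch; the data shape is an assumption):

import statistics

def slow_outliers(durations, factor=3.0):
    """durations: dict of test name -> execution time in seconds.
    Returns tests more than `factor` standard deviations above the mean."""
    mean = statistics.mean(durations.values())
    stdev = statistics.pstdev(durations.values())
    return {name: secs for name, secs in durations.items()
            if secs > mean + factor * stdev}

Any test this function returns is a candidate for tuning or removal.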

Test Suite Execution Time – Test suites are sets of test cases, but their execution times are not merely the sum of their tests’ times. A test suite run may include environmental setup, deployment, parallel execution, reporting, and other things. The purpose of tracking test suite execution time is to determine the start-to-end time of the suite in total, because that indicates the speed of feedback and, in CI, delivery. Tracking test suite execution time will also reveal the effect of adding more test cases to the suite, which then factors into the risk-based decisions of including or excluding tests.

Test Pyramid Balance – The Test Pyramid separates tests between unit (bottom), integration (middle), and end-to-end (top) layers. Ideally, there should be more tests at the bottom than at the top. Why? Higher-level tests are more expensive – they take more time to develop, they are more time consuming to triage, and they have slower execution times. Consider the “Rule of 1’s”: a unit test takes ~1ms, an integration test takes ~1s, and an end-to-end test takes ~1m. When scaled to thousands of tests with continuous integration, end-to-end tests simply take too much time. Tracking the proportion of tests at each layer will give a rough picture of the balance. There’s no perfect ratio between layers, but make sure that the tests form a pyramid and not a cupcake, hourglass, or ice cream cone. Rebalance test efforts as appropriate.
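A quick sanity check of pyramid balance, together with the “Rule of 1’s” arithmetic, might look like this (the counts are hypothetical):

# Hypothetical test counts per Test Pyramid layer.
counts = {"unit": 2000, "integration": 400, "end-to-end": 50}
total = sum(counts.values())
for layer, n in counts.items():
    print("%s: %d tests (%.0f%%)" % (layer, n, 100.0 * n / total))

# Rule of 1's: ~1ms per unit test, ~1s per integration test, ~1m per end-to-end test.
est_seconds = counts["unit"] * 0.001 + counts["integration"] * 1 + counts["end-to-end"] * 60
print("Serial execution estimate: %.1f minutes" % (est_seconds / 60.0))

Even with only 50 end-to-end tests, the top layer dominates the serial execution estimate – which is why the pyramid must stay bottom-heavy.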


Return on Investment

Quality Aspect: Do the tests add greater value than their cost?
Desired State: High – Tests need to be worth the effort. Don’t test for the sake of testing!
Metrics: Return on investment can never be measured objectively in terms of hard dollars. The true cost of bugs can never be fully known: if a bug is caught early, the potential cost to fix it later can only be estimated. The intangible value of protecting brand reputation may be more important than the tangible value of money saved by finding specific bugs. Better quality practices might prevent developers from causing bugs that would otherwise have happened – and there’s no good way to measure that.

Instead, return on investment is better measured by a collection of metrics that validate both code line protection and defect discovery. Use a weighted scorecard to get a more holistic view of ROI. Scorecards can be filled with estimates when planning tests, and then with actual values to measure the degree of success. Note that some aspects of ROI may be too difficult to measure accurately – in those cases, a LOW-MID-HIGH grading scale may be best. Other metrics may feel like micromanagement if tracked too closely. A sketch of such a scorecard follows the list below.

  • Priority – Assign each test a priority for its coverage importance. Core functionalities should have the highest priority, while fringe functionalities should have the lowest priority. Focus on high-priority tests. Another way to look at importance is risk, or the chances that bugs will escape if explicit testing for a feature is not done.
  • Test Execution Frequency – Track how many times tests are actually run. Higher frequency is better. Tests that are rarely run should either be included in more regular runs or removed/archived. This could easily be tracked by a test management tool or database.
  • Coverage Uniqueness – Duplicate test coverage wastes resources. Unfortunately, this one is difficult to measure. Tools for code coverage or static analysis might help. Manual review, however, is typically a better approach.
  • Development Cost and Maintenance Cost – Track how much effort it takes to make and keep tests, including man-hours and resources. Lower costs are better, of course. Planning tools may help with this.
  • Bug Discovery – Track bugs discovered in terms of severity and when and how they were caught. Ideally, the number of bugs caught by customers after a release (meaning, not caught by tests during development) should be minimal, and their severity should be low. Bug tracking tools should easily provide this data. Be warned, though, that the raw bug count is a poor metric. Consider this question: Is a high bug count good or bad? Trick question – during a release, it indicates good test quality but poor product quality; after a release, it indicates all-around poor quality. What matters is that a minimal number of bugs happen at all, and that most of those bugs are caught and fixed before a release. Plus, keep in mind that bugs happen by accident. Finally, focusing exclusively on bug count to determine test value ignores the positive side of testing – that passing tests give confidence that features work correctly.
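Here is a minimal sketch of such a weighted scorecard using LOW-MID-HIGH grades (the factors and weights are illustrative assumptions, not a standard):

GRADES = {"LOW": 1, "MID": 2, "HIGH": 3}

WEIGHTS = {  # illustrative weights - tune them for the team
    "priority": 3,
    "execution frequency": 2,
    "coverage uniqueness": 2,
    "cost efficiency": 1,
    "bug discovery": 3,
}

def roi_score(grades):
    """grades: dict of factor name -> "LOW" | "MID" | "HIGH"."""
    earned = sum(WEIGHTS[f] * GRADES[g] for f, g in grades.items())
    possible = sum(w * GRADES["HIGH"] for w in WEIGHTS.values())
    return earned / possible  # from 1/3 (all LOW) up to 1.0 (all HIGH)

Scores computed from estimates during planning can later be compared against scores from actual values to judge how well the investment paid off.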

Quality Metrics 101: The Good, The Bad, and The Ugly

metric – [me-trik] – (noun) a standard for measuring or evaluating something

(Courtesy of dictionary.com)

When developing software, metrics can be a good way to track progress and evaluate quality. Managers typically love them because they provide insights that could otherwise be hard to see. Come on, who doesn’t love pretty charts with rainbow colors? However, gathering metrics is not easy, especially for quality. Some metrics are downright useless, and others encourage bad behavior when used improperly. It is far better to focus on the aspects of quality that matter most than to blindly promulgate numbers. This article will cover quality metrics in depth, giving guidance on which quality aspects matter most and how they can be measured.

What are Quality Metrics?

Quality is the degree of a feature’s excellence. Quality metrics attempt to impartially measure a feature’s excellence. The word “attempt” is notable – quality is inherently relative, and metrics can sometimes be subjective. Take pizza as an example: How would the quality of a pizza be measured? One method could be to analyze the freshness and nutritional value of the ingredients, but Pizza Hut notoriously fought Papa John’s Pizza over the assertion that better ingredients make better pizza. Another method could be to analyze the cooking process, like bake time or the order of toppings, but that would be better for identifying carelessness than quality. The delivery process could also be considered, like Domino’s delivery robots, but that evaluates customer service and not the pizza itself. Ultimately, what matters are the taste and the visual appearance, which are totally subjective to the consumer. Surveys are unreliable. Taste tests have limited selection. Appearance is an art, not a science. Each of these metrics gives a glimpse into quality but does not fully reveal what actually makes a “good” pizza. Together, though, they provide a reasonable picture when the desired metrics are gathered well.

(Pictured: a Tony Pepperoni pizza coupon from Rochester, NY.)

Is that really high-quality pizza? Well, what aspects of quality are we measuring? We won’t get a perfect picture of quality from metrics, but we can get a rough idea. Software quality metrics work the same way.

Software Quality

In software, there are three primary types of quality metrics:

  1. Test Quality
    • How effective are tests at enforcing high quality standards?
    • Examples: code coverage, test failure reasons.
  2. Process Quality
    • How effective are processes at delivering good features?
    • Examples: time to fix broken builds, time to discover bugs.
  3. Product Quality
    • How good is the software product?
    • Examples: test failure rate, up-time, customer satisfaction.

The main purpose of software quality metrics is to validate successes and find areas for improvement in the development process. Metrics expose problems like gaps in coverage or slow feedback loops so that a team knows what to improve. They are meant to be informative but not punitive – they should simply report accurate data. Don’t shoot the messenger! For example, if the test failure rate is high, fix the bugs instead of blaming each other.

However, be warned by W. Edwards Deming’s red bead experiment: Quality cannot be inspected into a product – it must be built in from the beginning! Metrics alone cannot solve problems – they can merely expose them. It is up to the development team to effect the proper change based on what metrics reveal. Awareness is useless without action. And action should ultimately lead to better features, faster delivery, and higher profits.

Choosing Quality Metrics

Metrics are nothing but tools to improve aspects of quality. Not every job needs the full toolbox! Always pick the quality aspect first, and then find the right measuring stick. Don’t just pick metrics that others say are good. For example, if build stability is the quality aspect deemed important (and it should be), then the metric to track it could be the average time to fix a build after it breaks.
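For example, that build stability metric could be computed from CI timestamps like this (a minimal Python sketch; the incident data shape is an assumption):

from datetime import datetime, timedelta

def mean_time_to_fix(incidents):
    """incidents: list of (broken_at, fixed_at) datetime pairs for broken builds."""
    downtimes = [fixed - broken for broken, fixed in incidents]
    return sum(downtimes, timedelta()) / len(downtimes)

# Example: a build broken at 9:00 and fixed at 10:30 yields 1.5 hours.
print(mean_time_to_fix([(datetime(2018, 1, 2, 9, 0), datetime(2018, 1, 2, 10, 30))]))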

The best process for choosing quality metrics is:

  1. Identify a quality aspect that adds value.
  2. Decide if the aspect is worth measuring.
  3. Determine the desired state for that aspect.
  4. Derive the best way to measure progress toward the desired state impartially.
  5. Implement the metric gathering, storage, and analysis.
  6. Revisit the metric periodically to assert its value.
  7. Stop gathering the metric when it ceases to provide value.

Keep in mind that metrics have a cost: they must be gathered, stored, and analyzed. That’s why it’s important to pick the quality aspects that matter most.

This Series

The articles in this series will cover each of the quality metric types in detail. Each will list major quality aspects with meaningful metrics to track them and advice on how to use them. Remember, metrics should be constructive and not destructive.

 


 

Django Projects in Visual Studio Code

Visual Studio Code is a free source code editor developed by Microsoft. It feels much more lightweight than traditional IDEs, yet its extensions make it versatile enough to handle just about any type of development work, including Python and the Django web framework. This guide shows how to use Visual Studio Code for Django projects.

Installation

Make sure the latest version of Visual Studio Code is installed. Then, install the following (free) extensions:

Reload Visual Studio Code after installation.


Editing Code

The VS Code Python editor is really first-class. The syntax highlighting is on point, and the shortcuts are mostly what you’d expect from an IDE. Django template files also show syntax highlighting. The Explorer, which shows the project directory structure on the left, may be toggled on and off using the top-left file icon. Check out Python with Visual Studio Code for more features.


Virtual Environments

Virtual environments with venv or virtualenv make it easy to manage Python versions and packages locally rather than globally (system-wide). A common best practice is to create a virtual environment for each Python project and install only the packages the project needs via pip. Different environments make it possible to develop projects with different version requirements on the same machine.
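For example, creating and activating a project-local environment typically looks like this (the environment name “venv” is arbitrary):

$ python -m venv venv
$ source venv/bin/activate      # on Windows: venv\Scripts\activate
$ pip install django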

Visual Studio Code allows users to configure Python environments. Navigate to File > Preferences > Settings and set the python.pythonPath setting to the path of the desired Python executable. Set it as a Workspace Setting instead of a User Setting if the virtual environment will be specific to the project.
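For example, a Workspace Setting pointing at a project-local virtual environment might look like the sketch below (the path assumes a venv named “venv” on Linux/macOS):

{
    "python.pythonPath": "${workspaceFolder}/venv/bin/python"
}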


Python virtual environment setup is shown as a Workspace Setting. The terminal window shows the creation and activation of the virtual environment, too.

Helpful Settings

Visual Studio Code settings can be configured to automatically lint and format code, which is especially helpful for Python. As shown on Ruddra’s Blog, install the following packages:

$ pip install pep8
$ pip install autopep8
$ pip install pylint

And then add the following settings:

{
    "team.showWelcomeMessage": false,
    "editor.formatOnSave": true,
    "python.linting.pep8Enabled": true,
    "python.linting.pylintPath": "/path/to/pylint",
    "python.linting.pylintArgs": [
        "--load-plugins",
        "pylint_django"
    ],
    "python.linting.pylintEnabled": true
}

Editor settings may also be language-specific. For example, to limit automatic formatting to Python files only:

{
    "[python]": {
        "editor.formatOnSave": true
    }
}

Make sure to set the pylintPath setting to the actual pylint path on your machine. Keep in mind that these settings are optional.


Full settings for automatically formatting and linting the Python code.

Running Django Commands

Django development relies heavily on its command-line utility. Django commands can be run from a system terminal, but Visual Studio Code provides an Integrated Terminal within the app. The Integrated Terminal is convenient because it opens right at the project’s root directory. Plus, it’s in the same window as the code. The terminal can be opened from View > Integrated Terminal or with the “Ctrl+`” shortcut.


Running Django commands from within the editor is delightfully convenient.

Debugging

Debugging is another area where Visual Studio Code’s Django support shines. The extensions already provide a launch configuration for debugging Django apps! As a bonus, it should already use the Python path given by the python.pythonPath setting (for virtual environments). Simply switch to the Debug view and run the “Django” configuration. The config can be edited if necessary. Then, set breakpoints at the desired lines of code. The debugger will stop at any breakpoint the running Django app hits while the user interacts with the site.
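For reference, the generated entry typically looks something like the following sketch (values may differ by extension version, so treat these as assumptions):

{
    "name": "Django",
    "type": "python",
    "request": "launch",
    "program": "${workspaceFolder}/manage.py",
    "args": [
        "runserver",
        "--noreload"
    ]
}

The “--noreload” argument matters because Django’s auto-reloader spawns a child process that would otherwise escape the debugger.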


The Django extensions provide a default debug launch config. Simply set breakpoints and then run the “Django” config to debug!

Version Control

Version control in Visual Studio Code is simple and seamless. Git has become the dominant tool in the industry, but VS Code supports other tools as well. The Source Control view shows all changes and provides options for all actions (like commits, pushes, and pulls). Clicking changed files also opens a diff. For Git, there’s no need to use the command line!


The Source Control view with a diff for a changed file.

Visual Studio Code creates a hidden “.vscode” directory in the project root directory for settings and launch configurations. Typically, these settings are specific to a user’s preferences and should be kept to the local workspace only. Remember to exclude them from the Git repository by adding the “.vscode” directory to the .gitignore file.
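The entry itself is a single line in the .gitignore file:

.vscode/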


.gitignore setting for the .vscode directory

Editor Comparisons

JetBrains PyCharm is one of the most popular Python IDEs available today. Its Python and Django development features are top-notch: full code completion, template linking and debugging, a manage.py console, and more. PyCharm also includes support for other Python web frameworks, JavaScript frameworks, and database connections. Django features, however, are available only in the (paid) licensed Professional Edition. It is possible to develop Django apps in the free Community Edition, as detailed in Django Projects in PyCharm Community Edition, but the missing features are a significant limitation. Plus, being a full IDE, PyCharm can feel heavy with its load time and myriad of options.

PyCharm is one of the best overall Python IDEs/editors, but there are other good ones out there. PyDev is an Eclipse-based IDE that provides Django support for free. Sublime Text and Atom also have plugins for Django. Visual Studio Code is nevertheless a viable option. It feels fast and simple yet powerful. Here’s my recommended decision table:

  • Do you already have a PyCharm license? → Just use PyCharm Professional Edition.
  • Will you work on a large-scale Django project? → Strongly consider buying the license.
  • Do you need something fast and simple with basic Django support for free? → Use Visual Studio Code, Atom, or Sublime Text.
  • Do you really want a full IDE for free? → Pick PyDev if you like Eclipse, or follow the guide for Django Projects in PyCharm Community Edition.

 

[Update on 9/30/2018: Check out the official VS Code guide here: Use Django in Visual Studio Code.]

Starting a Django Project in an Existing Directory

Django is a wonderful Python web framework, and its command line utility is indispensable when developing Django sites. However, the command to start new projects is a bit tricky. The official tutorial shows the basic case – how to start a new project from scratch using the command:

$ django-admin startproject [projectname]

This command will create a new directory using the given project name and generate the basic Django files within it. However, project names have strict rules: they may contain only letters, numbers, and underscores. So, the following project name would fail:

$ django-admin startproject my-new-django-project
CommandError: 'my-new-django-project' is not a valid project name.
Please make sure the name is a valid identifier.

Another problem is initializing a new Django project inside an existing directory:

$ mkdir myproject
$ django-admin startproject myproject
CommandError: '/path/to/myproject' already exists

These two problems commonly happen when using Git (or other source control systems). The repository may already exist, and its name may have illegal project name characters. The project could be created as a sub-directory within the repository root, but this is not ideal.

Thankfully, there’s a simple solution. The “django-admin startproject” command takes an optional argument after the project name for the project path. This argument sidesteps both problems. The project root directory and the Django project file directory can have different names. The example below shows how to change into the desired root directory and start the project from within it using “.”:

$ cd my-django-git
$ django-admin startproject myproject .
$ ls
manage.py myproject

This can be a stumbling block because it is not documented in Django’s official tutorial. The “django-admin help startproject” command does document the optional directory argument but does not explain when this option is useful. Hopefully, this article makes its use case more intuitive!

The Spark: What Makes Coders Great

I first started programming back in 2002. In fact, I stumbled into it unintentionally. My high school, Parkville High School, required all students in its Magnet Program for Mathematics, Science, and Computer Science to have a TI-83 Plus graphing calculator. As an incoming freshman, I saw a graphing calculator as a big luxury – I had spent my entire 8th grade Algebra I class without one, completing all problems by hand. Embarrassingly, it took me ten minutes to figure out how to turn it off the first time! I presumed that my new calculator would be used exclusively for math classes, but in the first few weeks of my computer class, we started programming the calculator. I’m sure we wrote some sort of basic “Hello World” print program, but we quickly moved on to programming math formulas.

It blew my mind.

At the time, I didn’t know what “computer programming” was. In fact, I didn’t even like computers very much. And yet there I was, thirteen years old, telling a calculator how to automatically solve my math problems for me. It was one of the greatest thrills I had ever experienced. I could command a machine to do cool things and make my life easier. I quickly started writing programs outside of class for every math formula I could find – areas, volumes, circumferences, the Quadratic Formula, the Pythagorean Theorem – and I shared them with my classmates. Then, I moved on to calculator games. At the end of the year, our class started programming simple graphics and animations in C++, for which I made a fireworks show.

Something just clicked for me and coding. It all made sense. I could solve any problem. I could make any feature. I could teach myself how to do anything. And doing it brought on this wonderful euphoria – the “coder’s high” – even stronger than the feelings of Christmas morning or playing video games. For me, coding was practically addictive.

When my mom told me that there were well-paying careers in software, I never looked back. I took my first Java programming course as a sophomore (which I still consider my “mother language”) and then AP Computer Science AB as a junior (which was the first year it was offered in Java; I scored a 5/5). I went to RIT for college, where I graduated with a combined BS/MS in Computer Science in 2010. The rest is my professional history.

There were many times along the way that I doubted my path. There were times in high school that my code simply wouldn’t compile or run and I had no idea why (in the dark days before Stack Overflow). There were times at RIT when I felt like the most computer-illiterate student in my sink-or-swim program, and I even considered switching to a math major. There were times when the corporate grind was so tough I considered dropping back into academia. But, every time I doubted myself, I remembered what inspired me to pursue software at the beginning – the spark. The click. The it factor. The undeniable tenacity in my soul to solve real-world problems with elegance and efficiency through the sheer power of logical processes. I’m convinced that one of God’s greatest gifts to me has been the software spark. Though tempted, I have never wavered in my vocational clarity.

I’m not the only one who’s experienced the “spark,” either. In fact, it has consistently been my litmus test for identifying truly great coders. Many people have recounted nearly identical stories to me of how they first got into software – they were hooked at “Hello World.” I’ve heard people say things like, “I didn’t want to do it at first, but I discovered it was the coolest thing ever!” or, “What I love is that you can do anything!” or, “It seemed so basic and almost stupid, but it was so awesome!” Conversely, I’ve seen those who lack the spark struggle tremendously with computing and software. And it’s not a matter of grit or intelligence – these are often smart, hard-working people who just lack that X-factor.

Now, please do not misunderstand me by thinking that it’s impossible for those without the spark to be successful in software. I’m not trying to be elitist or condescending. Rather, based on my experience, I’ve seen the spark to be the single greatest determining factor in what makes someone naturally talented at programming. Furthermore, having the spark doesn’t make the journey easy. A career in software still requires grit and elbow grease. The challenges are tough. Having the spark simply makes it worthwhile.

If you have the spark, you’ll be able to overcome any software obstacle. I encourage you to go do awesome things. And never, ever give up!

 

This post is dedicated to my parents, who always supported my software aspirations from the very beginning.

 

Missing Error Messages with Angular Testing

Logs are an essential part of test automation – they leave a trace of execution that is indispensable when backtracking through failures. Missing logs can make it much, much harder to figure out problems in the code. Recently, I hit this problem while writing unit tests for an Angular project: neither the console nor Google Chrome’s debugger showed any helpful error messages! Thankfully, there was a pretty easy solution. This article will explain the problem and the solution.

Update (January 18, 2018):

After further research, it appears that this problem was fixed in the @angular/cli 1.3.x release. I updated to 1.3.2, removed the “--sourcemaps=false” option, and verified that the error messages are printed. Furthermore, the source mapping is correct – the errors map to the correct line and column in the source files!

If you are stuck using a version prior to 1.3.x, then use the workaround detailed below. Otherwise, upgrade the package and avoid the problem altogether!

TL;DR

Disable source maps when running Angular tests:

$ ng test --sourcemaps=false

Angular Project Setup

This article presumes the standard Angular 4 project setup, as automatically generated by the “ng new” command. Jasmine unit tests are written in “*.spec.ts” files and run with Karma using Google Chrome as the browser.

The Problem

The Angular testing utilities provide great support for isolating and exercising parts of Angular code for unit testing. However, programmers need to use them properly, or else they won’t work. When I tried writing some unit tests for ngrx, I quickly hit dependency problems. However, it took me hours to figure them out because the console output was not helpful – all it would print was “ERROR”.


As a newbie, I had no idea what went wrong. I tried debugging with Chrome, but the error message I got there was cryptic and not much more helpful.


The Solution

After googling for a while, I discovered that there is a bug with source maps in the Angular CLI (Issue #7296). The workaround is to add the “--sourcemaps=false” option to the “ng test” command. If the package.json file contains a “test” script that calls “ng test”, the option may be added there.
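For example, the relevant script entry would look like this minimal package.json sketch:

{
  "scripts": {
    "test": "ng test --sourcemaps=false"
  }
}

With the workaround in place, the console prints error messages.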


Errors also appear on the Karma page in the browser.


One side effect of this workaround, however, is that the line and column numbers don’t line up correctly with the TypeScript files. I presume that they map to the compiled JavaScript files instead. Nevertheless, error messages with wrong line numbers are better than no error messages at all. There may be a way to fix the source mapping, but that’s a problem for another day. Hopefully, the Angular team will fix this “feature” for us.

Now, time to go fix those test errors!

Debugging Angular Apps through Visual Studio Code

Angular is a great front-end framework for web apps. Visual Studio Code is a great source code editor. Their powers combined let you not only develop Angular app code but also debug it through the editor! VS Code debugging even works for TypeScript.

The Basic Guide

To set up debugging, simply follow the steps in the Debugging Angular section of the official Using Angular in VS Code guide. (This guide is really helpful for other VS Code Angular topics, too.) The basic steps are:

  1. Make sure VS Code, Google Chrome, and all the Angular parts are already installed.
  2. Install the Debugger for Chrome extension in VS Code.
  3. Create a launch.json config file (by clicking the gear icon in the Debug view).
  4. Set an appropriate config spec in the .vscode/launch.json file (example below).
  5. Set breakpoints in the editor.
  6. Launch the Angular app separately from the debugger (such as by running “ng serve” from the command line).
  7. Run the VS Code debugger “launch” job against the app (by clicking the green arrow in the Debug view).

The launch.json file should look like this, with values changed to reflect your environment:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "chrome",
            "request": "launch",
            "name": "Launch Chrome against localhost",
            "url": "http://localhost:4200",
            "webRoot": "${workspaceFolder}"
        },
        {
            "type": "chrome",
            "request": "attach",
            "name": "Attach to Chrome",
            "port": 9222,
            "webRoot": "${workspaceFolder}"
        }
    ]
}

Note that the app must already be running before the debugger is launched! (This point is not entirely clear in the official guide.) The debugger will launch the Google Chrome browser and load the URL provided in the launch.json config. Whenever execution hits a breakpoint, it will stop and let you step through the code in VS Code.

The original guide provides screenshots that illustrate these steps; follow it for more precise instructions.

Browser Options

Microsoft publishes the Debugger for Chrome and Debugger for Edge extensions for this sort of debugging. It looks like other non-Microsoft VS Code extensions are available for Firefox, PhantomJS, and Safari on iOS, but the launch.json config looks different.

Debugger Config and Source Control

Typically, it’s a best practice to avoid committing user-specific config files to source control. One user’s settings could conflict with another’s, potentially breaking workspaces. Personally, I would caution against submitting anything in the .vscode directory to source control unless (a) everyone on the team uses VS Code exclusively for the project and (b) the config file entries are usable by everyone on the team.

Jenkins Declarative Pipeline Resources

This post is intended to be a quick personal reference for Jenkins Pipelines so I don’t forget things I learned or lose links to valuable info. Feel free to recommend additional resources!

Today, a few of my LexisNexis coworkers and I went to the CloudBees office down the street (since we are both located at NCSU Centennial Campus) for a Jenkins Pipeline workshop. I’ve used Jenkins for a few years now, and I handle my team’s freestyle projects for running .NET/SpecFlow/Selenium automated tests, but the declarative pipeline style for Jenkins jobs is new to me. (I feel so behind the times.) I’m glad I attended the workshop because I learned a few cool things.

Below are links to helpful resources for learning about Jenkins Pipelines:

Pipelines are definitely a major improvement over freestyle projects (a minimal example follows the list below):

  • They make it much easier to chain tasks together.
  • They are written in code (a Groovy-like DSL) and can support advanced logic.
  • They can be managed by source control (like Git).
  • They keep running even when the Jenkins master goes down.
  • Stages can be paused to wait for user input.
  • The DSL can be extended for custom steps.
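Here is the minimal declarative Jenkinsfile sketch promised above (the stage names and steps are placeholders):

pipeline {
    agent any                      // run on any available agent
    stages {
        stage('Build') {
            steps {
                echo 'Building...' // replace with real build steps
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
            }
        }
    }
    post {
        always {
            echo 'Done.'           // runs whether the stages passed or failed
        }
    }
}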

I can’t wait to rewrite my team’s jobs!

The Airing of Grievances: Agile

Agile has essentially replaced the Waterfall model as the “right” software development methodology. It’s a really great process when it’s done right, but people ruin it when they do it wrong. And, oh, how badly it can go wrong. I got a lot of problems with bad Agile practices, and now you’re gonna hear about it!

Breaking the Rules

Agile is a lot like the board game Monopoly. The rules are long and complicated, but they are designed to make the game efficient. However, for some reason, everyone insists on making up their own rules, rather than following the official instructions. For example, players won’t auction a property when someone lands on it and declines to buy it, or they will build houses before securing a monopoly. Then, as a result, the game drags on forever and loses its fun. In Agile, every organization seems to want to do things its own special way (as many of these grievances describe), and it almost never goes well. The rules are not meant to be broken, and if they are, there will be consequences.

Going Rogue

Agile is meant to keep people focused on the most important tasks. Much time is spent planning and pivoting to stay on top of priorities. Team members should not deviate from committed work. Don’t go rogue! Don’t work on uncommitted tasks! If something is absolutely pressing, then talk with the scrum master to change the commitments.

Teams that are Too Big

How big is your Agile team? If the answer has more than one digit, then the team is too darn big. The ideal size is 5-9 people because communication becomes too hard with more. Large teams just don’t scale – it’s the law of diminishing returns.

Long Meetings

Nobody wants to be stuck in a long, boring meeting. While there are many Agile ceremonies (planning, grooming, stand-up, review, and retrospective), their meetings are meant to be efficient and productive. Stand-ups should be 15 minutes tops – nobody should ever need to give more than three sentences for their status, and nobody really wants to hear anything longer anyway! People should come prepared for planning and grooming so they don’t literally take all day. Demos should be short and sweet. Keep things moving!

Putting People on More Than One Team

Nobody should be cursed to provide deliverables for more than one Agile team. That’s not fair to the individual, who must pull double duty in meetings, nor is it fair to the teams, who don’t have a dedicated resource for their work. This applies to every role: developer, tester, product owner, and scrum master. It also burns people out very quickly.

Too Many Top Priorities

I was once part of an Agile team where the product owner issued about a dozen “top priorities.” For. Every. Sprint. Our team had no clue what was really important.

Agonizing Over Story Points

Story points are meant to be sizing estimates for velocity. They don’t need to be perfectly accurate. They shouldn’t track hours. Don’t pick big fights over them. Don’t go back and change values. Don’t twist planning poker into a political gambit. PLEASE!

Missing User Story Descriptions

The user story is the primary work artifact. It tells how a new feature should work from the perspective of the user… or, at least it should. If your user story contains just one line (like saying “Build the profile page”), then you just might be doing it wrong. Write user stories in the “As a ___, I want ___, so that ___” format, and provide extra descriptions to help the team understand what the story covers. Non-descriptive stories lead to poorly developed features.

Missing Acceptance Criteria

How do we know when a story is complete? If there are no acceptance criteria, we don’t! Testers also won’t know what to check. Please write helpful acceptance criteria. A bullet list is fine, and Gherkin would be even better.

Not Including Testing and Automation in the Definition of Done

No. No. No. No. No. No. NO! A story is not complete if it is not tested. It must not be accepted without tests passing and automated. Otherwise, be prepared for an avalanche of technical debt as bugs pile up and the team can’t keep up. The premise of Agile is to deliver small, working features in iterations. Testing must be included! Don’t create separate stories for testing. Don’t push it off to the next sprint. If a team cannot get testing done, then perhaps it should increase story point sizings to include testing and/or commit to less work during a sprint.

Blaming QA for Incomplete Stories

I once heard a developer say bluntly to my automation team, “QA is the bottleneck.” Don’t shoot the messenger! Tests fail because the product under test has problems. Many times, testers don’t even receive builds until very late in the sprint. When stories don’t get done, don’t start a blame game – it’s the whole team’s fault. Try shifting left (perhaps by using BDD) or committing to less work per sprint.

Ignoring Technical Debt

Technical debt is the cost of the consequences of poor development decisions. Examples may include: using single-threading when multi-threading is needed, avoiding design patterns, and putting off work like building up a test automation framework. Product owners don’t seem to like tech debt tasks because they don’t deliver new features. Unfortunately, tech debt will often cripple a team’s ability to deliver new features – pay now or pay later. Don’t ignore tech debt!

Confusing Agile with “Short Waterfall”

Agile is meant to be a process paradigm shift. It is not meant to be a condensed version of the Waterfall model. Sprints should be short. Responsibilities should be shared. Teams should be self-empowered. Break down silos and become truly Agile!

Using “Agile” and “Lean” Interchangeably

The Lean Startup is a methodology for starting a new business using minimal overhead and reacting quickly to lessons learned. It involves using Agile for product development, but it encompasses so much more than just Agile. Don’t use the terms interchangeably! Get on point with your buzzword bingo game.

Misusing the Term “Continuous Integration”

A nightly build is not CI. A weekly regression run is not CI. Manually-triggered tests are not CI. Manual deployments are not CI. Hand-written test reports are not CI. Don’t lie to yourself – CI is continuous integration, and everything must be automatic.

Forcing Scrum When Kanban May Be Better

Scrum is probably the most widely used Agile process, to the point where most people presume “Agile” means “Scrum.” However, Scrum is not appropriate for all teams. Kanban is a much better process when work items must be done “just in time” – like tech support tickets, build deployments, system maintenance, or emergency recoveries. Good candidates for Kanban are IT help desks and DevOps teams. I’ve used Kanban on automation tools/frameworks teams very successfully. Don’t shoehorn everyone into Scrum.

Hanging Agile Manifesto Posters on the Wall

What are you, Communist?

Complaining about Agile

Complaining doesn’t make it better! Honestly, in my experience, the worst complainers are old-school people who just don’t like change. Then, problems become a self-fulfilling prophecy. Or, they try to break rules and then gripe when things don’t work. If your complaint is about Agile in general, then go take a long, hard look in the mirror. However, if you find a problem in how your team is doing Agile, then bring it up during the retrospective – that’s Agile’s auto-correct mechanism. Complaining for complaint’s sake drags everybody down.

The Airing of Grievances: Version Control

Let the Airing of Grievances series continue: I got a lot of problems with version control misuse, and now you’re gonna hear about it!

Not Using Version Control

You’ve got to be some special kind of stupid to not use a version control system. Software is just too dang fragile to go without protection.

Using Outdated Version Control Systems

Still using CVS like it’s 1999? How about Rational ClearCase? If so, it’s time to upgrade. Git seems to be the go-to standard these days, though Subversion still has a place for projects where centralized control is better than distributed control. My opinion? Just use Git.

Gigantic Commits

Code changes ought to be incremental and small, and they ought to be committed at frequent intervals. However, some people like to make one giant, killer commit with bajillions of lines of changes. Nobody wants to review those pull requests. Break things up into smaller pieces!

No Comments with Commits

Look back in your version control history to see how many messages look like this:

  • .
  • updated
  • fixed
  • done

Really? How is this helpful? Please give a meaningful message. It doesn’t need to be long – a one-liner is fine. But describe what makes the commit significant. Otherwise, tracing through history is dang near impossible!

Never Committing Changes

For whatever reason, some people will check out code, make changes, run locally, and NEVER commit the changes back to the repository. This happened frequently at a previous job with offshore test automation contractors. They would add new tests and fix bugs but never share them back with the team. They’d even email code back and forth, rather than commit! I just can’t even.

Committing Code that Doesn’t Even Compile

(Pictured: the Jackie Chan “WTF” meme.)

Never Pushing Changes

In Git, there is a difference between “commit” (which commits the change locally) and “push” (which pushes all local commits to the remote repository). Some people never push. Then, they wonder why they can’t open pull requests. Or, they lose their code after a Blue Screen wipes out their machine. Make it a habit to push at the end of the workday, whether it’s needed or not.

No Branching

Branches are like swim lanes: every contributor (or group) can develop code without interfering with others. They also make concurrent release work possible. Using only one branch doesn’t “simplify” development – it just causes an integration mess. For example, I once worked in an organization where the QA architect insisted that test automation should not use multiple branches and, instead, have if/else conditions for differences between release branches. (I fought tooth-and-nail against it and lost.) Code duplication became rampant. Always adopt a good branching strategy, even for test automation. Gitflow is a good example workflow.

Stale Branches

Stale branches mean old code and merge conflicts. Nobody wants those. Keep your local branches up-to-date.

Not Deleting Feature Branches

In Git, feature branches are meant to have a short lifespan: you create one to develop a new feature or fix a bug or whatever, and then you delete it after the pull request is completed. However, some people don’t delete those old branches. Over time on a team, those old branches really add up and pollute the repository. Delete them as a common courtesy.

Botched Merge Conflict Resolution

Merge conflicts themselves are not the grievance. Nobody likes them, but they are inevitable on any team. However, botched merge conflict resolution is a HUGE grievance. Would you be mad if you spent a lot of time fixing a problem, only to have some other schmuck accidentally undo your code change with an overwrite? Please be careful when merging. If you aren’t sure that your merge will be good, there’s no shame in asking for help.

Not Re-compiling and Re-testing after Merging

What’s worse than messing up a merge? Committing the changes without testing them first! Merges are risky, and mistakes happen – but that’s why it is imperative to make sure everything is still good after a merge. I’ve seen people blindly post pull request updates after resolving merge conflicts, only to have the build fail with compiler warnings. Don’t do that.

Granting Everyone Full Repository Permissions

While everyone on the team should contribute, not everyone should contribute in the same ways. If there are no security policies set up, then users will do dangerous things, whether accidentally or deliberately. They could circumvent the review process. They could rename things unexpectedly. One time, I saw a guy delete the remote master branch! (Thank goodness we recovered it quickly.) Put permissions into place before bad things happen.