Quality Metrics 101: Test Quality

New to the series? Start from the beginning!

Test quality metrics make sure that testing efforts are worthwhile. Though “testing” and “quality” may be synonymous as organizational titles, testing is only one method of enforcing quality. In software, it just happens to be the most effective one. Testing is expensive, though, because it slows down time-to-market. Some people even devalue testing work because it doesn’t add new features to a product. Below are aspects of test quality to consider measuring to prove and even increase the value of testing efforts.


Coverage

Quality Aspect: How much functionality is covered by tests?
Desired State: High – more coverage means less risk. Note that 100% complete coverage is impossible.
Metrics: Coverage may be measured for both manual and automated tests. However, automated test coverage is usually more important because automated tests are meant to be defensive without gaps.

Code Coverage – Code coverage tools check what paths of code are actually exercised by automated tests. While they cannot tell if tests are good or bad, they are great for exposing gaps in coverage. Unit test code coverage is easy because most frameworks have plugins, but above-unit code coverage requires instrumented builds. Look for tools that track more than just lines of code. Target 90%+ coverage. Add new tests to cover any major gaps.
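
For a concrete starting point, here is a minimal sketch of code coverage measurement, assuming a pytest-based Python project with the pytest-cov plugin ("myapp" is a hypothetical package name):

$ pip install pytest pytest-cov
$ pytest --cov=myapp --cov-branch --cov-report=term-missing

The --cov-branch flag tracks branches rather than just lines, and the term-missing report lists exactly which lines lack coverage – a ready-made to-do list for new tests.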

Feature Coverage – Feature coverage is a manual way to score features on test coverage based on planning and review. For this metric to be successful, a team must consistently specify features well; otherwise, this metric will give useless data. Gherkin scenarios are a great way to do this – for example, each scenario can be marked as untested, manual, or automated. Feature coverage is unscientific, but it can give a better picture of functionalities actually covered (instead of just the raw lines of code covered).
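
To illustrate, scenario markings could be tallied automatically. The sketch below is a rough example that assumes Gherkin feature files under a "features" directory and hypothetical @untested, @manual, and @automated tags (the tag names are illustrative, not a standard):

# feature_coverage.py - rough sketch for scoring feature coverage.
import re
from collections import Counter
from pathlib import Path

TAGS = ("untested", "manual", "automated")

def tally_scenarios(feature_dir):
    """Count scenarios by coverage tag across all .feature files."""
    counts = Counter()
    for feature in Path(feature_dir).rglob("*.feature"):
        text = feature.read_text()
        # Match a tag on the line directly above each Scenario declaration.
        for match in re.finditer(r"@(\w+)\s*\n\s*Scenario", text):
            tag = match.group(1).lower()
            if tag in TAGS:
                counts[tag] += 1
    return counts

if __name__ == "__main__":
    counts = tally_scenarios("features")
    total = sum(counts.values()) or 1
    for tag in TAGS:
        print(f"{tag}: {counts[tag]} ({100 * counts[tag] / total:.1f}%)")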

Automation Debt – Technical debt increases when tests are not automated and thus lack coverage. Teams are often unable to automate all tests originally planned, and test automation is frequently jettisoned from the Definition of Done. Or, a project may not start automating tests until a large chunk of the project is already complete. The best way to track automation debt is to create a backlog for incomplete automation work. Backlog tasks can be sized, prioritized, and planned according to whatever development process is used (Scrum, Kanban, etc.). Appropriate process metrics can then be used to understand the magnitude of the work and, thus, the lack of automated test coverage.

Warning: Test case count, test length, and test code line count are terrible metrics for coverage because they encourage largeness rather than uniqueness. The goal of testing is to have the greatest coverage with the lowest risk for the least work. Anybody can blindly write tests or variations that add no meaningful value.


Reliability

Quality Aspect: Do automated tests consistently reach completion? And how trustworthy are the results?
Desired State: High – reliability means less time for failure triage or (horrors) reruns.
Metrics: Failure Reasons – Track the failure reason for each test case run. Ideally, tests should fail only when they discover product bugs. However, tests may also fail when:

  • an acceptable product change caused an automation error because tests were not updated, indicating poor communication or careless updates
  • an environmental change or interruption caused an automation error, indicating deployment or sysadmin problems
  • the automation code itself has a bug

Remember, “successful” test runs either pass with appropriate coverage or fail due to product bugs. “Unsuccessful” test runs fail or crash for reasons other than product bugs. Aim to minimize unsuccessful test runs. Never hack a test just to get it passing – always work to fix the problems behind test failures.
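
One lightweight way to track failure reasons is to classify every result and watch the unsuccessful-run rate over time. The sketch below is illustrative only; the category names are assumptions based on the list above:

# failure_reasons.py - minimal sketch for classifying test results.
from collections import Counter
from enum import Enum

class Reason(Enum):
    PASSED = "passed with appropriate coverage"
    PRODUCT_BUG = "failed due to a product bug"
    STALE_TEST = "automation error from an unhandled product change"
    ENVIRONMENT = "automation error from an environmental change or interruption"
    AUTOMATION_BUG = "bug in the automation code itself"

# "Successful" runs either pass or catch real product bugs.
SUCCESSFUL = {Reason.PASSED, Reason.PRODUCT_BUG}

def summarize(results):
    """Tally results and compute the unsuccessful-run rate."""
    counts = Counter(results)
    unsuccessful = sum(n for r, n in counts.items() if r not in SUCCESSFUL)
    total = sum(counts.values())
    return counts, (unsuccessful / total if total else 0.0)

# Example: made-up results from a nightly run of 100 tests.
counts, rate = summarize([Reason.PASSED] * 95 + [Reason.PRODUCT_BUG] * 2
                         + [Reason.ENVIRONMENT] * 3)
print(f"unsuccessful run rate: {rate:.1%}")  # 3.0%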


Speed

Quality Aspect: How much time do test runs take?
Desired State: Fast – tests should complete in the shortest time possible.
Metrics: Test Case Execution Time – Test case execution times indicate the efficiency of the automation code. Track the start-to-end execution time for every individual test case run. Then, analyze the data using common sense. For example, outliers may be inefficient tests that need tuning or should be removed altogether. It may be wise to separate test runs by result type or coverage area. Historical data can also be used as a baseline to determine performance impacts when making cross-cutting automation changes.
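
For example, a simple outlier check can flag tests that run far longer than their peers. This sketch assumes per-test durations (in seconds) have already been collected; the test names and numbers are made up:

# execution_times.py - quick sketch for flagging outlier test times.
import statistics

def find_outliers(durations, sigmas=2.0):
    """Return tests whose duration exceeds mean + sigmas * stdev."""
    times = list(durations.values())
    mean = statistics.mean(times)
    stdev = statistics.stdev(times) if len(times) > 1 else 0.0
    cutoff = mean + sigmas * stdev
    return {name: t for name, t in durations.items() if t > cutoff}

durations = {
    "test_login": 1.2, "test_search": 0.9, "test_cart": 1.4,
    "test_profile": 1.1, "test_logout": 0.8, "test_home": 1.0,
    "test_full_report": 9.8,
}
print(find_outliers(durations))  # {'test_full_report': 9.8}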

Test Suite Execution Time – Test suites are sets of test cases, but their execution times are not merely the sum of their tests’ times. A test suite run may include environmental setup, deployment, parallel execution, reporting, and other things. The purpose of tracking test suite execution time is to determine the start-to-end time of the suite in total, because that indicates the speed of feedback and, in CI, delivery. Tracking test suite execution time will also reveal the effect of adding more test cases to the suite, which then factors into the risk-based decisions of including or excluding tests.

Test Pyramid Balance – The Test Pyramid separates tests between unit (bottom), integration (middle), and end-to-end (top) layers. Ideally, there should be more tests at the bottom than at the top. Why? Higher-level tests are more expensive – they take more time to develop, they are more time consuming to triage, and they have slower execution times. Consider the “Rule of 1’s”: a unit test takes ~1ms, an integration test takes ~1s, and an end-to-end test takes ~1m. When scaled to thousands of tests with continuous integration, end-to-end tests simply take too much time. Tracking the proportion of tests at each layer will give a rough picture of the balance. There’s no perfect ratio between layers, but make sure that the tests form a pyramid and not a cupcake, hourglass, or ice cream cone. Rebalance test efforts as appropriate.
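
The arithmetic behind the "Rule of 1's" makes the imbalance plain. Here is a back-of-the-envelope calculation for a hypothetical 10,000-test suite run serially:

# Serial execution time for 10,000 tests at each Test Pyramid layer,
# using the "Rule of 1's" estimates (~1ms, ~1s, ~1m per test).
TESTS = 10_000
print(f"unit:        {TESTS * 0.001:.0f} seconds")    # 10 seconds
print(f"integration: {TESTS * 1 / 3600:.1f} hours")   # 2.8 hours
print(f"end-to-end:  {TESTS * 60 / 3600:.0f} hours")  # 167 hours (~1 week)

Parallel execution shrinks the absolute numbers but not the ratios, which is why the bulk of the tests belongs at the bottom of the pyramid.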


Return on Investment

Quality Aspect: Do the tests add greater value than their cost?
Desired State: High – tests need to be worth the effort. Don't test for the sake of testing!
Metrics: Measuring return on investment in terms of hard dollars is objectively impossible. The true cost of bugs can never be fully known: if a bug is caught early, the potential cost to fix it later can merely be estimated. The intangible value of protecting brand reputation may be more important than the tangible value of money saved by finding specific bugs. Better quality practices might prevent developers from causing bugs that would have otherwise happened – and there’s no good way to measure that.

Instead, return on investment is better measured by a collection of metrics that validate both code line protection and defect discovery. Use a weighted scorecard to get a more holistic view of ROI – a minimal scorecard sketch follows the list below. Scorecards can be used with estimates for planning tests, as well as plugged in with actual values to measure the degree of success. Note that some aspects of ROI may be too difficult to measure accurately – in those cases, a LOW-MID-HIGH grading scale may be best. Others may seem like micromanagement.

  • Priority – Assign each test a priority for its coverage importance. Core functionalities should have the highest priority, while fringe functionalities should have the lowest priority. Focus on high-priority tests. Another way to look at importance is risk, or the chances that bugs will escape if explicit testing for a feature is not done.
  • Test Execution Frequency – Track how many times tests are actually run. Higher frequency is better. Tests that are rarely run should either be included in more regular runs or removed/archived. This could easily be tracked by a test management tool or database.
  • Coverage Uniqueness – Duplicate test coverage wastes resources. Unfortunately, this one is difficult to measure. Tools for code coverage or static analysis might help. Manual review, however, is typically a better approach.
  • Development Cost and Maintenance Cost – Track how much effort it takes to make and keep tests, including man-hours and resources. Lower costs are better, of course. Planning tools may help with this.
  • Bug Discovery – Track bugs discovered in terms of severity and when and how they were caught. Ideally, the number of bugs caught by customers after a release (meaning, not caught by tests during development) should be minimal, and their severity should be low. Bug tracking tools should easily provide this data. Be warned, though, that the raw bug count is a poor metric. Consider this question: Is a high bug count good or bad? Trick question – during a release, it indicates good test quality but poor product quality; after a release, it indicates all-around poor quality. What matters is that a minimal number of bugs happen at all, and that most of those bugs are caught and fixed before a release. Plus, keep in mind that bugs happen by accident. Finally, focusing exclusively on bug count to determine test value ignores the positive side of testing – that passing tests give confidence that features work correctly.
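
To make the scorecard idea concrete, here is a minimal sketch. The aspect names, weights, and LOW-MID-HIGH grades are illustrative assumptions, not a standard:

# scorecard.py - minimal weighted-scorecard sketch for test ROI.
GRADES = {"LOW": 1, "MID": 2, "HIGH": 3}

WEIGHTS = {
    "priority": 0.3,
    "execution_frequency": 0.2,
    "coverage_uniqueness": 0.2,
    "cost": 0.15,           # grade inversely: LOW cost earns a HIGH grade
    "bug_discovery": 0.15,
}

def score(grades):
    """Compute a 0-1 ROI score from LOW/MID/HIGH grades per aspect."""
    total = sum(WEIGHTS[a] * GRADES[g] for a, g in grades.items())
    return total / (3 * sum(WEIGHTS.values()))

example = {"priority": "HIGH", "execution_frequency": "HIGH",
           "coverage_uniqueness": "MID", "cost": "MID",
           "bug_discovery": "LOW"}
print(f"ROI score: {score(example):.2f}")  # 0.78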

Quality Metrics 101: The Good, The Bad, and The Ugly

metric – [me-trik] – (noun) a standard for measuring or evaluating something

(Courtesy of dictionary.com)

When developing software, metrics can be a good way to track progress and evaluate quality. Managers typically love them because they provide insights that could otherwise be hard to see. Come on, who doesn’t love pretty charts with rainbow colors? However, gathering metrics is not easy, especially for quality. Some metrics are downright useless, and others encourage bad behavior when used improperly. It is far better to focus on the aspects of quality that matter most than to blindly promulgate numbers. This article will cover quality metrics in depth, giving guidance on what quality aspects matter most and how they can be measured.

What are Quality Metrics?

Quality is the degree of a feature’s excellence. Quality metrics attempt to impartially measure a feature’s excellence. The word “attempt” is notable – quality is inherently relative, and metrics can sometimes be subjective. Take pizza as an example: How would the quality of a pizza be measured? One method could be to analyze the freshness and nutritional value of the ingredients, but Pizza Hut notoriously fought Papa John’s Pizza over the assertion that better ingredients make better pizza. Another method could be to analyze the cooking process, like bake time or the order of toppings, but that would be better for identifying carelessness than quality. The delivery process could also be considered, like Domino’s delivery robots, but that evaluates customer service and not the pizza itself. Ultimately, what matters are the taste and the visual appearance, which are totally subjective to the consumer. Surveys are unreliable. Taste tests have limited selection. Appearance is an art, not a science. Each of these metrics gives a glimpse into quality but does not fully reveal what actually makes a “good” pizza. Together, though, they provide a reasonable picture when the desired metrics are gathered well.

[Image: a Tony Pepperoni pizza coupon from Rochester, NY]

Is that really high quality pizza? Well, what aspects of quality are we measuring? We won’t get a perfect picture of quality from metrics, but we can get a rough idea. Software quality metrics work the same way.

Software Quality

In software, there are three primary types of quality metrics:

  1. Test Quality
    • How effective are tests at enforcing high quality standards?
    • Examples: code coverage, test failure reasons.
  2. Process Quality
    • How effective are processes at delivering good features?
    • Examples: time to fix broken builds, time to discover bugs.
  3. Product Quality
    • How good is the software product?
    • Examples: test failure rate, up-time, customer satisfaction.

The main purpose of software quality metrics is to validate successes and find areas for improvement in the development process. Metrics expose problems like gaps in coverage or slow feedback loops so that a team knows what to improve. They are meant to be informative but not punitive – they should simply report accurate data. Don’t shoot the messenger! For example, if the test failure rate is high, fix the bugs instead of blaming each other.

However, be warned by W. Edwards Deming’s red bead experiment: Quality cannot be inspected into a product – it must be built in from the beginning! Metrics alone cannot solve problems – they can merely expose them. It is up to the development team to effect the proper change based on what metrics reveal. Awareness is useless without action. And action should ultimately lead to better features, faster delivery, and higher profits.

Choosing Quality Metrics

Metrics are nothing but tools to improve aspects of quality. Not every job needs the full toolbox! Always pick the quality aspect first, and then find the right measuring stick. Don’t just pick some metrics that others say are good. For example, if build stability is the quality aspect that is deemed important (and it should be), then the metric to track it could be the average time to fix a build after it is broken.
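
That build stability metric, for instance, reduces to simple arithmetic once break and fix times are recorded. A minimal sketch with made-up timestamps:

# Mean time to fix a broken build, from (broken_at, fixed_at) pairs.
from datetime import datetime, timedelta

def mean_time_to_fix(incidents):
    deltas = [fixed - broken for broken, fixed in incidents]
    return sum(deltas, timedelta()) / len(deltas)

incidents = [
    (datetime(2018, 3, 1, 9, 0), datetime(2018, 3, 1, 9, 45)),
    (datetime(2018, 3, 2, 14, 0), datetime(2018, 3, 2, 16, 30)),
]
print(mean_time_to_fix(incidents))  # 1:37:30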

The best process for choosing quality metrics is:

  1. Identify a quality aspect that adds value.
  2. Decide if the aspect is worth measuring.
  3. Determine the desired state for that aspect.
  4. Derive the best way to measure progress toward the desired state impartially.
  5. Implement the metric gathering, storage, and analysis.
  6. Revisit the metric periodically to assert its value.
  7. Stop gathering the metric when it ceases to provide value.

Keep in mind that metrics have a cost: they must be gathered, stored, and analyzed. That’s why it’s important to pick the quality aspects that matter most.

This Series

The articles in this series will cover each of the quality metric types in detail. Each will list major quality aspects with meaningful metrics to track them and advice on how to use them. Remember, metrics should be constructive and not destructive.

 


 

Andy’s Latest Opportunity

While most of my posts are technical, this one is a personal update:

I have accepted a fantastic new role as a Software Engineer in Test at PrecisionLender! I will be the company’s technical leader for testing and automation: building a strategy, setting up frameworks, writing tests, running tests in a CI/CD pipeline, and educating others. It’s the perfect role for me, and together, we will do great things. PrecisionLender is a very collaborative company that builds a software platform to help banks make smarter loans. They have about a hundred employees right now, and they’re growing. Their Raleigh office is located very close to my home.

With this announcement also comes the bittersweet news of my departure from LexisNexis. After almost a year and a half, it is time to say goodbye. I want to make it very clear that I am not leaving LexisNexis because I am unhappy, but rather to pursue a great new opportunity that providentially found me. My role as a Senior Software Test Engineer at LexisNexis has truly been the greatest opportunity of my career so far. I became a technical leader on one of the strongest test automation teams in Raleigh. I led the development of test frameworks that were shared across the whole company, in addition to writing countless test cases. I did internal consulting with groups across the globe to teach them how to be better testers and automationeers. I even earned the nickname “Reverend BDD” for the many impassioned training sessions I delivered. I grew tremendously in my own professional software skills. I learned from my mistakes along the way with the grace of others. And I found many great, new friends, with whom I will surely miss working. I specifically want to thank my manager, Kalen Howell, Sr., and my team lead, Jeff Wolf, for trusting me to tackle big problems and valuing my expertise. Working for LexisNexis has been a privilege.

My last day at LexisNexis will be Tuesday, April 3. My wife and I will then take a short vacation to Charleston, SC, and I will start my new position at PrecisionLender on Tuesday, April 10. Other than that, I will continue to write this Automation Panda blog and help my wife with her businesses as needed. I will also deliver a talk at PyCon 2018 in Cleveland, Ohio this May entitled, “Behavior-Driven Python.” Be sure to check it out! Connect with me on LinkedIn and Twitter, too.

I am resolute in my career path to continue pursuing testing and automation. Vocationally, we as creatures made in God’s image ought to seek to glorify Him through our creative work. As software engineers, our form of work emulates the creativeness of our Creator. Much in the way that God spoke creation into being, we likewise speak software into being, albeit in a microcosm. The whole discipline of computer science is itself rooted in language, in instruction. The instructions we issue, and the very systems we construct, reflect the logical, rational, orderly nature of God’s creation. Furthermore, as testers, we likewise recognize man’s fallen nature and our need for correction. The systems we implement will never be perfect because we are not equal to God. In testing, we simultaneously assert the wonders of creativity as well as our need for redemption in Christ – both to the glory of the Good Lord. This is what motivates me to pursue test automation. I thank God for this opportunity. Soli Deo Gloria.


 

BDD Example Mapping

The two major goals of Behavior-Driven Development are better collaboration and automation. Even when the Three Amigos actually get together, collaboration can be tough. Where do we start? What scenarios should we write? What examples should be included?

Well, the Cucumber folks have a practice called “Example Mapping” to make it easier. All you need is a pack of index cards and a big table!

  1. Write the story under discussion on a yellow card at the top of the table.
  2. Write a rule for each known acceptance criterion on a blue card under the story.
  3. Write each example for a rule on a green card.
  4. Write each open question on a red card on the side to discuss later.

Keep writing cards until the team is satisfied with the story. This process provides clear, fast feedback for stories. A team can quickly see if a story is too big or needs further refinement. Engineers can easily turn example cards into Gherkin scenarios.
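
To illustrate that last step, the sketch below turns a mapped story into a Gherkin feature skeleton. The story, rule, and example cards here are entirely made up:

# example_map.py - tiny sketch: Example Mapping cards to Gherkin skeleton.
story = "Search for products"  # yellow card
rules = {  # blue cards mapped to their green example cards
    "Results match the search terms": [
        "searching 'shoes' returns only shoe products",
        "searching gibberish returns no results",
    ],
}

print(f"Feature: {story}")
for rule, examples in rules.items():
    print(f"\n  # Rule: {rule}")
    for example in examples:
        print(f"  Scenario: {example}")
        print("    # TODO: write Given/When/Then steps")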

Rather than duplicate documentation here, please read Matt Wynne’s seminal post on the practice, Introducing Example Mapping.

Also, watch Cucumber’s webinar recording about Example Mapping.

Tutoring: A Lifelong Impact

On Saturday, February 17, 2018, I delivered the keynote address at RIT TutorCon 2018 at the Rochester Institute of Technology in Rochester, NY. I was a student tutor at RIT from 2007-2010. The Academic Support Center asked me to speak about my experiences. Below is the transcript of my speech.

It’s good to be back in Ra-cha-cha! Happy Presidents’ Day weekend, and also Happy Chinese New Year! Let me get a good look at our tutors: If you are a tutor, please stand up.

[Wait for tutors to stand up.]

Great! It’s awesome to see so many of you here today. Is anyone in Computer Science?

Now, remain standing if you have been a tutor for at least one year.

[Wait for people to sit down.]

Not bad. What about two years?

[Wait for more people to sit down.]

Three? [Wait.] Four? [Wait.] Five? [Wait.]

What about ten years? Ten years of tutoring? [Give anyone who remains standing a round of applause, and then ask them to sit down.]

Ten years is a long time! A lot can happen; a lot can change. Here’s a question for you today, though: Will your tutoring make an impact in ten years? [Repeat the question for emphasis.]

Ten years ago, I was one of you. I was in my second year at RIT studying computer science, and I worked for the Academic Support Center and TRIO as a tutor for math, physics, and basically anything that was needed. I would have been sitting in your chair if we had these fancy tutoring conferences back then. Things were quite different a decade ago. Let me drop some knowledge bombs on you for the world in February 2008:

  • We were still on the iPhone 1. iPads did not exist yet.
  • Barack Obama was still seen as a surprise challenger to Hillary Clinton in the 2008 Democratic primaries.
  • The Great Recession was looming but had not yet hit.
  • The Summer Olympics were going to be held in Beijing, China. (Michael Phelps & Usain Bolt)
  • Lady Gaga had not yet released her debut album.

Now, let me contextualize this for RIT:

  • Bill Destler was still in his first year as university president.
  • RIT was still on the quarter system.
  • Park Point was being built.
  • The Simon Center (a.k.a. the “Toilet Bowl”) was being built.
  • The main drop-in study center was the “Math Lab” in Building 1, not Bates.

One thing that looks like it hasn’t changed, though, is Gracie’s. [Assume the audience will laugh.]

By the way, have they knocked down Riverknoll yet? I lived at 232 Kimball Drive. [Assume the audience will laugh or somehow respond.]

A lot happens in ten years. But, will your tutoring have an impact in ten years? Will the tutoring you do today benefit your students years from now? It should.

As college students, life is typically fast-paced. You have classes, you have papers, you have projects; quarters – excuse me, semesters – fly by; and it’s all over after about four years. And, for you, tutoring is just a part of that overall experience. It’s just a part-time job. As we saw earlier, most of you will spend only a few years tutoring before entering your career fields. Personally, I haven’t done any tutoring since 2010. It’s tempting to think that the time you spent tutoring doesn’t matter. So what if you help people finish their homework problems a few times a week? Students come and go anyway. It’s no big deal, right?

Well, if you’re here today at this tutoring conference, I’m pretty sure that tutoring is a big deal to you. You know it’s important. I’d be willing to bet that many of you would do tutoring even if you didn’t get paid – although, the pay is certainly deserved! I want you all to understand that what you do as a tutor will impact your students and will also impact you for the rest of your lives. Tutoring is a vector: I want you to see the line and not just the dot.

Your students come with a myriad of different circumstances. Some are just looking for a healthy environment for doing their homework. Maybe they’re stuck on a tricky physics brain-buster. Others struggle. Some really struggle – and may be one more failure away from academic suspension. But all students have one thing in common: they come to you because they want to do better. Whoever they are, they look to you as tutors to help them succeed. And every question you answer – or rather, every guiding question you turn back to them – puts them further down their paths to success. Today’s practice problems become tomorrow’s degrees. With you, they’ll learn not just the course material but, more importantly, they will learn how to learn. They will learn what questions to ask themselves. They will learn how to find answers using their resources. They will learn to teach themselves. Plutarch once said, “The mind is not a vessel to be filled but a fire to be ignited.”

With my perspective of the line, I want to give you three big ways you can make your tutoring today leave an impact for a lifetime.

First, own your role. As tutors, you have a very unique role with your students: you are peers; you are not professors. That’s a big difference! Professors are experts in their fields with years of experience and dozens of publications. You, as tutors, are students yourselves, just a few more years ahead. You can relate to your students on much more common ground. You’ve taken the same courses. You’ve taken the same tests. You’ve probably even done the very same problems. One of the tutoring tricks is to always work with a student at their level – if they sit at the table, you sit; if they stand at the board, you stand; and unless you’re making a really good example, don’t stand on the table! The equal-level principle also applies to your role as a peer tutor. There’s camaraderie. There’s energy. There’s less embarrassment to ask “stupid” questions. There’s a sense that they can do it because you can do it. So own your role as a peer tutor.

Second, focus on the student and not the problem. The problem is the dot; the student is the line. Tutors aren’t there to solve the world’s problems! Nobody comes to a tutoring center to watch a tutor show off with how much they know or how fast they can solve problems. “Look at how smart I am” – NO! Let’s be real, here: the solution to any given practice problem doesn’t really matter. What does matter is how the student learned to handle problems. Did they make an attempt? Did they look at their formulas? Did they write out their work? Did they persevere when they got stuck? Let me ask you a question: Do you think that I remember specific details to any homework assignments from ten years ago? [Wait for audience response.] Nope! But, I remember that a derivative is a rate of change. And, if I had to solve a derivative again, I’d know exactly where to look in my books to figure it out. That’s how you want your students to be in ten years. Cultivate your students to become independent.

Third, build camaraderie. Your students are already your peers – make them your friends. I don’t have any fancy statistics to share, but I know anecdotally that most students become “repeat customers.” You’ll see them again, and again, and again. Whether intended or not, you will forge relationships with your students. As your tutoring shifts become part of your everyday life, so, too, do the students who show up. Treat every single one of them the way you’d want to be treated. Work to form good relationships. Work to form trust. Be honest when you don’t know something. And furthermore, build camaraderie with your fellow tutors as well! Tutors are a team – each one brings fresh eyes and unique expertise. My specialty? Discrete math and differential equations – what a combo! We, as tutors, are trained in common techniques and share the common burdens to help our students. It’s almost like we have a special, unspoken club. I still keep up with my students and my tutors. I dined with a former student on top of the Space Needle. I partied with another on New Year’s Eve. I’m attending another student’s wedding this summer. A fellow tutor came to mine. So build camaraderie with your students and your fellow tutors.

As I close, I’d like to remind you that you are all in tutoring together. For some of you, this might just be the best job you ever have. I challenge all of you today to make your tutoring count: for now, for ten years from now, and for a lifetime. Tutors don’t make bad students good – tutors make students learn to teach themselves. That is how your tutoring will make a lifelong impact. Thank you.

Django Projects in Visual Studio Code

Visual Studio Code is a free source code editor developed by Microsoft. It feels much more lightweight than traditional IDEs, yet its extensions make it versatile enough to handle just about any type of development work, including Python and the Django web framework. This guide shows how to use Visual Studio Code for Django projects.

Installation

Make sure the latest version of Visual Studio Code is installed. Then, install the following (free) extensions:

  • Python (Microsoft) – core Python support: IntelliSense, linting, debugging, and environment selection
  • Django Template – syntax highlighting for Django template files

Reload Visual Studio Code after installation.


Editing Code

The VS Code Python editor is really first-class. The syntax highlighting is on point, and the shortcuts are mostly what you’d expect from an IDE. Django template files also show syntax highlighting. The Explorer, which shows the project directory structure on the left, may be toggled on and off using the top-left file icon. Check out Python with Visual Studio Code for more features.


Virtual Environments

Virtual environments with venv or virtualenv make it easy to manage Python versions and packages locally rather than globally (system-wide). A common best practice is to create a virtual environment for each Python project and install only the packages the project needs via pip. Different environments make it possible to develop projects with different version requirements on the same machine.
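
For reference, creating and activating a virtual environment looks like this on macOS/Linux (on Windows, run venv\Scripts\activate instead):

$ python -m venv venv
$ source venv/bin/activate
(venv) $ pip install django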

Visual Studio Code allows users to configure Python environments. Navigate to File > Preferences > Settings and set the python.pythonPath setting to the path of the desired Python executable. Set it as a Workspace Setting instead of a User Setting if the virtual environment will be specific to the project.
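
For example, assuming a virtual environment named “venv” in the project root (as created above), the Workspace Setting would look something like this:

{
    "python.pythonPath": "${workspaceFolder}/venv/bin/python"
}

(On Windows, the interpreter lives at venv\Scripts\python.exe.)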


Python virtual environment setup is shown as a Workspace Setting. The terminal window shows the creation and activation of the virtual environment, too.

Helpful Settings

Visual Studio Code settings can be configured to automatically lint and format code, which is especially helpful for Python. As shown on Ruddra’s Blog, install the following packages:

$ pip install pep8
$ pip install autopep8
$ pip install pylint

And then add the following settings:

{
    // Disable the welcome message from the Team extension
    "team.showWelcomeMessage": false,
    // Format files automatically on save (uses autopep8 for Python)
    "editor.formatOnSave": true,
    // Lint Python code with pep8
    "python.linting.pep8Enabled": true,
    // Full path to pylint (use the real path, e.g., inside the venv)
    "python.linting.pylintPath": "/path/to/pylint",
    // Teach pylint about Django idioms (requires the pylint-django package)
    "python.linting.pylintArgs": [
        "--load-plugins",
        "pylint_django"
    ],
    // Lint Python code with pylint
    "python.linting.pylintEnabled": true
}

Editor settings may also be language-specific. For example, to limit automatic formatting to Python files only:

{
    "[python]": {
        "editor.formatOnSave": true
    }
}

Make sure to set the pylintPath setting to the real path value. Keep in mind that these settings are optional.


Full settings for automatically formatting and linting the Python code.

Running Django Commands

Django development relies heavily on its command-line utility. Django commands can be run from a system terminal, but Visual Studio Code provides an Integrated Terminal within the app. The Integrated Terminal is convenient because it opens right to the project’s root directory. Plus, it’s in the same window as the code. The terminal can be opened from View > Integrated Terminal or using the “Ctrl-`” shortcut.
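
For example, typical Django commands run from the Integrated Terminal:

$ python manage.py makemigrations
$ python manage.py migrate
$ python manage.py test
$ python manage.py runserver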


Running Django commands from within the editor is delightfully convenient.

Debugging

Debugging is another way Visual Studio Code’s Django support shines. The extensions already provide the launch configuration for debugging Django apps! As a bonus, it should already be set to use the Python path given by the python.pythonPath setting (for virtual environments). Simply switch to the Debug view and run the Django configuration. The config can be edited if necessary. Then, set breakpoints at the desired lines of code. The debugger will stop at any breakpoints as the Django app runs while the user interacts with the site.
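
For reference, the generated “Django” entry in launch.json looks roughly like the abridged sketch below. Exact fields vary by extension version, so treat this as illustrative:

{
    "name": "Django",
    "type": "python",
    "request": "launch",
    "program": "${workspaceFolder}/manage.py",
    "args": [
        "runserver",
        "--noreload"
    ]
}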


The Django extensions provide a default debug launch config. Simply set breakpoints and then run the “Django” config to debug!

Version Control

Version control in Visual Studio Code is simple and seamless. Git has become the dominant tool in the industry, but VS Code supports other tools as well. The Source Control view shows all changes and provides options for all actions (like commits, pushes, and pulls). Clicking changed files also opens a diff. For Git, there’s no need to use the command line!


The Source Control view with a diff for a changed file.

Visual Studio Code creates a hidden “.vscode” directory in the project root directory for settings and launch configurations. Typically, these settings are specific to a user’s preferences and should be kept to the local workspace only. Remember to exclude them from the Git repository by adding the “.vscode” directory to the .gitignore file.
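
The .gitignore entry is a single line (plus an optional comment):

# Visual Studio Code workspace settings
.vscode/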


.gitignore setting for the .vscode directory

Editor Comparisons

JetBrains PyCharm is one of the most popular Python IDEs available today. Its Python and Django development features are top-notch: full code completion, template linking and debugging, a manage.py console, and more. PyCharm also includes support for other Python web frameworks, JavaScript frameworks, and database connections. Django features, however, are available only in the (paid) licensed Professional Edition. It is possible to develop Django apps in the free Community Edition, as detailed in Django Projects in PyCharm Community Edition, but the missing features are a significant limitation. Plus, being a full IDE, PyCharm can feel heavy with its load time and myriad of options.

PyCharm is one of the best overall Python IDEs/editors, but there are other good ones out there. PyDev is an Eclipse-based IDE that provides Django support for free. Sublime Text and Atom also have plugins for Django. Visual Studio Code is nevertheless a viable option. It feels fast and simple yet powerful. Here’s my recommended decision table:

What’s Going On → What You Should Do

  • Do you already have a PyCharm license? → Just use PyCharm Professional Edition.
  • Will you work on a large-scale Django project? → Strongly consider buying the license.
  • Do you need something fast, simple, and with basic Django support for free? → Use Visual Studio Code, Atom, or Sublime Text.
  • Do you really want to stick to a full IDE for free? → Pick PyDev if you like Eclipse, or follow the guide for Django Projects in PyCharm Community Edition.

Starting a Django Project in an Existing Directory

Django is a wonderful Python web framework, and its command line utility is indispensable when developing Django sites. However, the command to start new projects is a bit tricky. The official tutorial shows the basic case – how to start a new project from scratch using the command:

$ django-admin startproject [projectname]

This command will create a new directory using the given project name and generate the basic Django files within it. However, project names have strict rules: they may contain only letters, numbers, and underscores. So, the following project name would fail:

$ django-admin startproject my-new-django-project
CommandError: 'my-new-django-project' is not a valid project name.
Please make sure the name is a valid identifier.

Another problem is initializing a new Django project inside an existing directory:

$ mkdir myproject
$ django-admin startproject myproject
CommandError: '/path/to/myproject' already exists

These two problems commonly happen when using Git (or other source control systems). The repository may already exist, and its name may have illegal project name characters. The project could be created as a sub-directory within the repository root, but this is not ideal.

Thankfully, there’s a simple solution. The “django-admin startproject” command takes an optional argument after the project name for the project path. This argument sidesteps both problems. The project root directory and the Django project file directory can have different names. The example below shows how to change into the desired root directory and start the project from within it using “.”:

$ cd my-django-git
$ django-admin startproject myproject .
$ ls
manage.py myproject

This can be a stumbling block because it is not documented in Django’s official tutorial. The “django-admin help startproject” command does document the optional directory argument but does not explain when this option is useful. Hopefully, this article makes its use case more intuitive!