Mentoring Software Testers

Mentoring is important in any field, but it’s especially critical for software testing. I’ve been blessed with good mentors throughout my life, and I’ve also been honored to serve as a mentor for other software testers. In this article, I’ll explain what mentoring is and how to practice it within the context of software testing.

What is Mentoring?

Mentoring is a one-on-one relationship in which a more experienced individual guides a less experienced one.

  • It is explicit in that the two individuals formally agree to the relationship.
  • It is intentional in that both individuals want to participate.
  • It is long-term in that the relationship is ongoing.
  • It is purposeful in that the relationship has a clear goal or development objective.
  • It is meaningful in that growth happens for both individuals.

Mentoring is more than merely answering questions or doing code reviews. It is an intentional relationship for learning and growth.

Why Software Testing Mentoring Matters

Software testing is a specialty within software engineering. People enter software testing roles in various ways, like these:

  • A new college graduate lands their first job as a QA engineer.
  • A manual tester with 20 years of experience transitions into an automation role.
  • A developer assumes more testing responsibilities for their team.
  • A coding bootcamp graduate decides to change careers.

There’s no single path to entering software testing. Personally, I graduated college with a Computer Science degree and found my way into testing through internships.

Unfortunately, there’s also no “universal” training program for software testing. Universities rarely offer programs in software testing – at best, they introduce unit test frameworks and bug report formats. Most coding bootcamps focus on Web development or data science. Certifications like ISTQB can feel arcane and antiquated (and, to be honest, I don’t hold any myself).

The best ways we have for self-training are communities, conferences, and online resources. For example, I’m a member of my local testing community, the Triangle Software Quality Association (TSQA). TSQA hosts meetups monthly and a conference every other year in North Carolina. Through these events, TSQA welcomes everyone interested in software testing to learn, share, and network. I also recommend free courses from Test Automation University, and I frequently share blogs and articles from other software testing leaders.

Nevertheless, while these resources are deeply valuable, they can be overwhelming for a newbie who literally does not know where to begin. An experienced mentor provides guidance. They can introduce a newcomer to communities and events. They can recommend specific resources and answer questions in a safe space. Mentors can also provide encouragement, motivation, and accountability – things that online resources cannot provide.

How to Do Mentoring

I’ve had the pleasure of mentoring multiple individuals throughout my career. I mentor no more than a few individuals at a time so that I can give each mentee the attention they deserve. Mentoring relationships typically start in one of these ways:

  1. Someone asks me to be their mentor.
  2. A manager or team leader arranges a mentoring relationship.
  3. As a team leader, I initiate a mentoring relationship with a new team member (because that’s a team leader’s job).

Almost all of my mentoring relationships have existed within company walls, where mentoring becomes one of my job responsibilities. I recommend forming mentoring relationships within company walls so that both individuals can dedicate time, availability, and shared knowledge to each other. However, that may not always be possible (if management does not prioritize professional development) or beneficial (if the work environment is toxic).

My teammates and me at TSQA 2020!

From the start, I like to be explicit about the mentoring relationship with the mentee. I learn what they want to get out of mentoring. I make them a top priority for my time and attention, but I also expect them to reciprocate. I don’t enter mentoring relationships if the other person doesn’t want one.

Then, I create what I call a “growth plan” for the mentee. The growth plan is a tailored training program for the mentee to accomplish their learning objectives. I set a schedule with the following types of activities:

  • One-on-one or small-group teaching sessions
    • The best format for making big points that stick
    • Gives individual care and attention to the mentee
    • Provides a safe space for questions
  • Reading assignments
    • Helpful for independently learning specific topics
    • May be blogs, articles, or documentation
    • Allows the mentee to complete it at their own pace
  • Online training courses
    • Example: Test Automation University
    • Provides comprehensive, self-paced instruction
    • However, may not be 100% pertinent to the learning objectives
  • Pair programming or code review sessions
    • Hands-on time for mentor and mentee to work together
    • Allows learning by example and by osmosis
    • However, can be mentally draining, so use sparingly
  • Independent work items
    • Real, actual work items that the team must complete
    • Makes the mentee feel like they are making valuable contributions even while still learning
    • “Practice makes perfect”

These activities should be structured to fit all learning styles and build upon each other. For example, if I am mentoring someone about how to do Behavior-Driven Development, I would probably schedule the following activities:

  1. A “Welcome to BDD” whiteboard session with me
  2. Reading my BDD 101 series
  3. Watching a video about Example Mapping
  4. A small group activity for doing Example Mapping on a new story
  5. A work item to write Gherkin scenarios for that mapped story
  6. A review session for those scenarios
  7. A “BDD test frameworks” deep dive session with me
  8. A work item to automate the Gherkin scenarios they wrote
  9. A review session for those automated tests
  10. Another work item for writing and automating a new round of tests

Any learning objective can be mapped to a growth plan like this. Make sure each step is reasonable, well-defined, and builds upon the previous steps.

I would like to give one warning about mentoring for test automation. If a mentee wants to learn test automation, then they must first learn programming. Historically, software testing was a manual process, and most testers didn’t need to know how to code. Now, automation is indispensable for organizations that want to move fast without breaking things. Most software testing jobs require some level of automation skills. Many employers are even forcing their manual testers to pick up automation. However, good automation skills are rooted in good programming skills. Someone can’t learn “just enough Java” and then expect to be a successful automation engineer – they must, in effect, first become a developer.

Characteristics of Good Mentoring

Mentoring requires both individuals to commit time and effort to the relationship.

To be a good mentor:

  • Be helpful – provide valuable guidance that the mentee needs
  • Be prepared – know what you should know, and be ready to share it
  • Be approachable – never be “too busy” to talk
  • Be humble – reveal your limits, and admit what you don’t know
  • Be patient – newbies can be slow

To be a good mentee:

  • Seek long-term growth, not just answers to today’s questions
  • Come prepared in mind and materials
  • Ask thoughtful questions and record the answers
  • Practice what you learn
  • Express appreciation for your mentor – it’s often a thankless job

Take Your Time

Mentoring may take a lot of time, but good mentoring bears good fruit. Mentees will produce higher-quality work. They’ll get things done faster. They’ll also have more confidence in themselves. Mentors will feel greater satisfaction in their own work, too. The whole team wins.

The alternative would waste a lot more time. Without good mentoring, newcomers will be forced to sink or swim. They won’t be able to finish tasks as quickly, and their work will more likely have problems. They will feel stressed and doubtful, too. Forcing people to tough things out is a very inefficient learning process, and it can also devolve into forms of hazing in unhealthy work environments. Anytime someone says there isn’t enough time for mentoring, I reply that there’s even less time for fixing poor-quality work. In the case of software testing, the lack of mentoring could cause bugs to escape to production!

I encourage leaders, managers, and senior engineers to make mentoring happen as part of their job responsibilities. Dedicate time for it. Facilitate it. Normalize it. Be intentional. Furthermore, I encourage leaders to be force multipliers: mentor others to mentor others. Time is always tight, so make the most of it.

More Advice?

I hope this article is helpful! Do you have any thoughts, advice, or questions about mentoring, specifically in the field of software testing? I’d love to hear them, so drop a comment below.

I wrote this article as a follow-up to an “Ask Me Anything” session on July 15, 2020 with Tristan Lombard and the Testim Community. Tristan published the full AMA transcript in a Medium article. Many thanks to them for the opportunity!

Test-Driving TestProject’s New Python SDK

TestProject recently released its new OpenSDK, and one of its major features is the inclusion of Python testing support! Since I love using Python for test automation, I couldn’t wait to give it a try. This article is my crash-course tutorial on writing Web UI tests in Python with TestProject.

What is TestProject?

TestProject is a free end-to-end test automation platform for Web, mobile, and API tests. It provides a cloud-based way for teams to build, run, share, and analyze tests. Manual testers can visually build tests for desktop or mobile sites using TestProject’s in-browser recorder and test builder. Automation engineers can use TestProject’s SDKs in Java, C#, and now Python for developing coded test automation solutions, and they can use packages already developed by others in the community through TestProject’s add-ons. Whether manual or automated, TestProject displays all test results in a sleek reporting dashboard with helpful analytics. And all of these features are legitimately free – there’s no tiered model or service plan.

Recently, TestProject announced the official release of its new OpenSDK. This new SDK (“software development kit”) provides a simple, unified interface for running tests with TestProject across multiple platforms and languages (now including Python). Things look exciting for the future of TestProject!

What’s My Interest?

It’s no secret that I love testing with Python. When I heard that TestProject added Python support, I knew I had to give it a try. I had never used TestProject before, but I was interested to learn what it could do. Specifically, I wanted to see the value it could bring to reporting automated tests. In the Python space, test automation is slick, but reporting can be rough since frameworks like pytest and unittest are command-line-focused. I also wanted to see if TestProject’s SDK would genuinely help me automate tests or if it would get in my way. Furthermore, I know some great people in the TestProject community, so I figured it was time to jump in myself!

The Python SDK

TestProject’s Python SDK is an open-source project. It was originally developed by Bas Dijkstra, with the support of the TestProject team, and its code is hosted on GitHub. The Python SDK supports Selenium for Web UI automation (which will be the focus for this tutorial) and Appium for Android and iOS UI automation as well!

Since I’d never used TestProject before, let alone this new Python SDK, I wanted to review the code to see how to use it. Thankfully, the README included lots of helpful information and example code. When I looked at the code for TestProject’s BaseDriver, I discovered that it simply extends Selenium WebDriver’s remote WebDriver class. That means all the TestProject WebDrivers use exactly the same API as Python’s Selenium WebDriver implementation. To me, that was a big relief. I know WebDriver’s API very well, so I wouldn’t need to learn anything different in order to use TestProject. It also means that any test automation project can be easily retrofitted to use TestProject’s SDKs – they just need to swap in a new WebDriver object!
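
For example, here’s a minimal sketch of such a retrofit. It uses the projectname and token keyword arguments covered later in this tutorial; the values shown are placeholders:

# Before: plain Selenium WebDriver
#   from selenium import webdriver
#   driver = webdriver.Chrome()

# After: TestProject's WebDriver with the same API
from src.testproject.sdk.drivers import webdriver

driver = webdriver.Chrome(projectname='My Project', token='MY_DEV_TOKEN')
driver.get('https://www.duckduckgo.com')
driver.quit()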

Setup Steps

TestProject has a straightforward architecture. Users sign up for free TestProject accounts online. Then, they set up their own machines for running tests. Each testing machine must have the TestProject agent installed and linked to a user’s account. When tests run, agents automatically push results to the TestProject cloud. Users can then log into the TestProject portal to view and analyze results. They can invite teammates to share results, and they can also set up multiple test machines with agents. Users can even integrate TestProject with other tools like Jenkins, qTest, and Sauce Labs. The TestProject docs, especially the ecosystem diagram, explain everything in more detail.

When I did my test drive, I created a TestProject account, installed the agent on my Mac, and ran Python Web UI tests from my Mac. I already had the latest version of Python installed (Python 3.8 at the time of writing this article). I also already had my target browsers installed: Google Chrome and Mozilla Firefox.

Below are the precise steps I followed to set up TestProject:

1. Sign up for an account

TestProject accounts are “free forever.” Use this signup link.

The TestProject signup page

2. Download the TestProject Agent

The signup wizard should direct you to download the TestProject agent. If not, you can always download it from the TestProject dashboard. Be warned, the download package is pretty large – the macOS package was 345 MB. Alternatively, you can fetch the agent as a container image from Docker Hub.

The TestProject agent download page

The TestProject agent contains all the stuff needed to run tests and upload results to the TestProject app in the cloud. You don’t need to install WebDriver executables like ChromeDriver or geckodriver. Once the agent is downloaded, install it on the machine and register the agent with your account. For me, registration happened automatically.

This is what the TestProject agent looks like when running on macOS. You can also close this window to let it run in the background.

3. Find your developer token

You’ll need to use your developer token to connect your automated tests to your account in the TestProject app. The signup wizard should reveal it to you, but you can always find it (and also reset it) on the Integrations page.

The Integrations page. Check here for your developer token. No, you can’t use mine.

4. Install the Python SDK

TestProject’s Python SDK is distributed as a package through PyPI. To install it, simply run pip install testproject-python-sdk at the command line. This package will also install dependencies like selenium and requests.

A Classic Web UI Test

After setting up my Mac to use TestProject, it was time to write some Web UI tests in Python! Since I discovered that TestProject’s WebDriver objects could easily retrofit any existing test automation project, I thought, “What if I try to run my PyCon 2020 tutorial project with TestProject?” For PyCon 2020, I gave an online tutorial about building a Web UI test automation project in Python from the ground up using pytest and Selenium WebDriver. The tutorial includes one test case: a DuckDuckGo web search and verification. I thought it would be easy to integrate with TestProject since I already had the code. Thankfully, it was!

Below, I’ll walk through my code. You can check out my example project repository from GitHub at AndyLPK247/testproject-python-sdk-example. My code will be a bit more advanced than the examples shown in the Python SDK’s README or in Bas Dijkstra’s tutorial article because it uses the Page Object Model and pytest fixtures. Make sure to pip install pytest, too.

1. Write the test steps

The test case covers a simple DuckDuckGo web search. Whenever I automate tests, I always write out the steps in plain language. Good tests follow the Arrange-Act-Assert pattern, and I like to use Gherkin’s Given-When-Then phrasing. Here’s the test case:

Scenario: Basic DuckDuckGo Web Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "panda"
    Then the search result query is "panda"
    And the search result links pertain to "panda"
    And the search result title contains "panda"

2. Specify automation inputs

Inputs configure how automated tests run. They can be passed into a test automation solution using configuration files. Testers can then easily change input values in the config file without changing code. Automation should read config files once at the start of testing and inject necessary inputs into every test case.

In Python, I like to use JSON for config files. JSON data is simple and hierarchical, and Python includes a module in its standard library named json that can parse a JSON file into a Python dictionary in one line. I also like to put config files either in the project root directory or in the tests directory.

Here’s the contents of config.json for this test:

{
  "browser": "Chrome",
  "implicit_wait": 10,
  "testproject_projectname": "TestProject Python SDK Example",
  "testproject_token": ""
}
  • browser is the name of the browser to test
  • implicit_wait is the implicit waiting timeout for the WebDriver instance
  • testproject_projectname is the project name to use for this test suite in the TestProject app
  • testproject_token is the developer token

3. Read automation inputs

Automation code should read inputs one time before any tests run and then inject inputs into appropriate tests. pytest fixtures make this easy to do.

I created a fixture named config in the tests/conftest.py module to read config.json:

import json
import pytest


@pytest.fixture(scope='session')
def config():

  # Read the file
  with open('config.json') as config_file:
    config = json.load(config_file)
  
  # Assert values are acceptable
  assert config['browser'] in ['Firefox', 'Chrome', 'Headless Chrome']
  assert isinstance(config['implicit_wait'], int)
  assert config['implicit_wait'] > 0
  assert config['testproject_projectname'] != ""
  assert config['testproject_token'] != ""

  # Return config so it can be used
  return config

Setting the fixture’s scope to “session” means that it will run only one time for the whole test suite. The fixture reads the JSON config file, parses its text into a Python dictionary, and performs basic input validation. Note that Firefox, Chrome, and Headless Chrome will be supported browsers.

4. Set up WebDriver

Each Web UI test should have its own WebDriver instance so that it remains independent from other tests. Once again, pytest fixtures make setup easy.

The browser fixture in tests/conftest.py initializes the appropriate TestProject WebDriver type based on inputs returned by the config fixture:

from selenium.webdriver import ChromeOptions
from src.testproject.sdk.drivers import webdriver


@pytest.fixture
def browser(config):

  # Initialize shared arguments
  kwargs = {
    'projectname': config['testproject_projectname'],
    'token': config['testproject_token']
  }

  # Initialize the TestProject WebDriver instance
  if config['browser'] == 'Firefox':
    b = webdriver.Firefox(**kwargs)
  elif config['browser'] == 'Chrome':
    b = webdriver.Chrome(**kwargs)
  elif config['browser'] == 'Headless Chrome':
    opts = ChromeOptions()
    opts.add_argument('headless')
    b = webdriver.Chrome(chrome_options=opts, **kwargs)
  else:
    raise Exception(f'Browser "{config["browser"]}" is not supported')

  # Make its calls wait for elements to appear
  b.implicitly_wait(config['implicit_wait'])

  # Return the WebDriver instance for the setup
  yield b

  # Quit the WebDriver instance for the cleanup
  b.quit()

This was the only section of code I needed to change to make my PyCon 2020 tutorial project work with TestProject. I had to change the WebDriver invocations to use the TestProject classes. I also had to add arguments for the project name and developer token, which come from the config file. (Note: you may alternatively set the developer token as an environment variable.)

5. Create page objects

Automated tests could make direct calls to the WebDriver interface to interact with the browser, but WebDriver calls are typically low-level and wordy. The Page Object Model is a much better design pattern. Page object classes encapsulate WebDriver gorp so that tests can call simpler, more readable methods.

The DuckDuckGo search test interacts with two pages: the search page and the result page. The pages package contains a module for each page. Here’s pages/search.py:

from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


class DuckDuckGoSearchPage:

  URL = 'https://www.duckduckgo.com'

  SEARCH_INPUT = (By.ID, 'search_form_input_homepage')

  def __init__(self, browser):
    self.browser = browser

  def load(self):
    self.browser.get(self.URL)

  def search(self, phrase):
    search_input = self.browser.find_element(*self.SEARCH_INPUT)
    search_input.send_keys(phrase + Keys.RETURN)

And here’s pages/result.py:

from selenium.webdriver.common.by import By

class DuckDuckGoResultPage:
  
  RESULT_LINKS = (By.CSS_SELECTOR, 'a.result__a')
  SEARCH_INPUT = (By.ID, 'search_form_input')

  def __init__(self, browser):
    self.browser = browser

  def result_link_titles(self):
    links = self.browser.find_elements(*self.RESULT_LINKS)
    titles = [link.text for link in links]
    return titles
  
  def search_input_value(self):
    search_input = self.browser.find_element(*self.SEARCH_INPUT)
    value = search_input.get_attribute('value')
    return value

  def title(self):
    return self.browser.title

Notice that this code uses the “regular” WebDriver interface because TestProject’s WebDriver classes extend the Selenium WebDriver classes.

To make setup easier, I added fixtures to tests/conftest.py to construct each page object, too. They call the browser fixture and inject the WebDriver instance into each page object:

from pages.result import DuckDuckGoResultPage
from pages.search import DuckDuckGoSearchPage


@pytest.fixture
def search_page(browser):
  return DuckDuckGoSearchPage(browser)


@pytest.fixture
def result_page(browser):
  return DuckDuckGoResultPage(browser)

6. Automate the test case

All the automation plumbing is finally in place. Here’s the test case in tests/traditional/test_duckduckgo.py:

import pytest


@pytest.mark.parametrize('phrase', ['panda', 'python', 'polar bear'])
def test_basic_duckduckgo_search(search_page, result_page, phrase):
  
  # Given the DuckDuckGo home page is displayed
  search_page.load()

  # When the user searches for the phrase
  search_page.search(phrase)

  # Then the search result query is the phrase
  assert phrase == result_page.search_input_value()
  
  # And the search result links pertain to the phrase
  titles = result_page.result_link_titles()
  matches = [t for t in titles if phrase.lower() in t.lower()]
  assert len(matches) > 0

  # And the search result title contains the phrase
  assert phrase in result_page.title()

I parametrized the test to run it for three different phrases. The test function does not interact with the WebDriver instance directly. Instead, it interacts exclusively with the page objects.

7. Run the tests

The tests run like any other pytest tests: python -m pytest at the command line. If everything is set up correctly, then the tests will run successfully and upload results to the TestProject app.

In the TestProject dashboard, the Reports tab shows all the tests you have run. It also shows the different test projects you have.

Check out those results!

You can also drill into results for individual test case runs. TestProject automatically records the browser type, timestamps, pass-or-fail results, and every WebDriver call. You can also download PDF reports!

Results for an individual test

What if … BDD?

I was delighted to see how easily I could run a traditional pytest suite using TestProject. Then, I thought to myself, “What if I could use a BDD test framework?” I personally love Behavior-Driven Development, and Python has multiple BDD test frameworks. There is no reason why a BDD test framework wouldn’t work with TestProject!

So, I rewrote the DuckDuckGo search test as a feature file with step definitions using pytest-bdd. The BDD-style test uses the same fixtures and page objects as the traditional test.

Here’s the Gherkin scenario in tests/bdd/features/duckduckgo.feature:

Feature: DuckDuckGo
  As a Web surfer,
  I want to search for websites using plain-language phrases,
  So that I can learn more about the world around me.


  Scenario Outline: Basic DuckDuckGo Web Search
    Given the DuckDuckGo home page is displayed
    When the user searches for "<phrase>"
    Then the search result query is "<phrase>"
    And the search result links pertain to "<phrase>"
    And the search result title contains "<phrase>"

    Examples:
      | phrase     |
      | panda      |
      | python     |
      | polar bear |

And here’s the step definition module in tests/bdd/step_defs/test_duckduckgo_bdd.py:

from pytest_bdd import scenarios, given, when, then, parsers
from selenium.webdriver.common.keys import Keys


scenarios('../features/duckduckgo.feature')


@given('the DuckDuckGo home page is displayed')
def load_duckduckgo(search_page):
  search_page.load()


@when(parsers.parse('the user searches for "{phrase}"'))
@when('the user searches for "<phrase>"')
def search_phrase(search_page, phrase):
  search_page.search(phrase)


@then(parsers.parse('the search result query is "{phrase}"'))
@then('the search result query is "<phrase>"')
def check_search_result_query(result_page, phrase):
  assert phrase == result_page.search_input_value()


@then(parsers.parse('the search result links pertain to "{phrase}"'))
@then('the search result links pertain to "<phrase>"')
def check_search_result_links(result_page, phrase):
  titles = result_page.result_link_titles()
  matches = [t for t in titles if phrase.lower() in t.lower()]
  assert len(matches) > 0


@then(parsers.parse('the search result title contains "{phrase}"'))
@then('the search result title contains "<phrase>"')
def check_search_result_title(result_page, phrase):
  assert phrase in result_page.title()

There’s one more nifty trick I added with pytest-bdd. I added a hook to report each Gherkin step to TestProject with a screenshot! That way, testers can trace each test case step more easily in the TestProject reports. Capturing screenshots also greatly assists test triage when failures arise. This hook is located in tests/conftest.py:

def pytest_bdd_after_step(request, feature, scenario, step, step_func):
  browser = request.getfixturevalue('browser')
  browser.report().step(description=str(step), message=str(step), passed=True, screenshot=True)

Since pytest-bdd is just a pytest plugin, its tests run using the same python -m pytest command. TestProject will group these test results into the same project as before, but it will separate the traditional tests from the BDD tests by name. Here’s what the Gherkin steps with screenshots look like:

Custom Gherkin step with screenshot reported in the TestProject app

This is Awesome!

As its name denotes, TestProject is a great platform for handling project-level concerns for testing work: reporting, integrations, and fast feedback. Adding TestProject to an existing automation solution feels seamless, and its sleek user experience gives me what I need as a tester without getting in my way. The one word that keeps coming to mind is “simple” – TestProject simplifies setup and sharing. Its design takes to heart the renowned Python adage, “Simple is better than complex.” As such, TestProject’s new Python SDK is a welcome addition to the Python testing ecosystem.

I look forward to exploring Python support for mobile testing with Appium soon. I also look forward to seeing all the new Python add-ons the community will develop.

12 Traits of Highly Effective Tests

Writing effective tests is hard. Tests that are flaky, confusing, or slow are effectively useless because they do more harm than good. The Arrange-Act-Assert pattern gives good structure, but what other characteristics should test cases have? Here are 12 traits for highly effective tests.

#1. Understandable

At its core, a test is just a step-by-step procedure. It exercises a behavior and verifies the outcome. In a sense, tests are living specifications – they detail exactly how a feature should function. Everyone should be able to intuitively understand how a test works. Follow conventions like Arrange-Act-Assert or Given-When-Then. Seek conciseness without vagueness. Avoid walls of text.

If you find yourself struggling to write a test in plain language, then you should review the design for the feature under test. If you can’t explain it, then how will others know how to use it?

#2. Unique

Each test case in a suite should cover a unique behavior. Don’t Repeat Yourself – repetitive tests with few differences bear a heavy cost to maintain and execute without delivering much additional value. If a test can cover multiple inputs, then focus on one variation per equivalence class.

For example, equivalence classes for the absolute value function could be a positive number, a negative number, and zero. There’s little need to cover multiple negative numbers because the absolute value function performs the same operation on all negatives.
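
As a quick pytest sketch, one parametrized test can cover exactly one input per equivalence class:

import pytest


# One input per equivalence class: positive, negative, and zero
@pytest.mark.parametrize('value, expected', [(5, 5), (-5, 5), (0, 0)])
def test_abs_equivalence_classes(value, expected):
  assert abs(value) == expected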

#3. Individual

Test one thing at a time. Tests that each focus on one main behavior are easier to formulate and automate. They naturally become understandable and maintainable. When a test covering only one behavior fails, then its failure reason is straightforward to deduce.

Any time you want to combine multiple behaviors into one test, consider separating them into different tests. Make a clear distinction between “arrange” and “act” steps. Write atomic tests as much as possible. Avoid writing “world tours,” too. I’ve seen repositories where tests are a hundred steps long and meander through an application like Mr. Toad’s Wild Ride.

#4. Independent

Each test should be independent of all other tests. That means testers should be able to run each test as a standalone unit. Each test should have appropriate setup and cleanup routines to do no harm and leave no trace. Set up new resources for each test. Automated tests should use patterns like dependency injection instead of global variables. If one test fails, others should still run successfully. Test case independence is the cornerstone for scalable, parallelizable tests.

Modern test automation frameworks strongly support test independence. However, folks who are new to automation frequently presume interdependence – they think the end of one test is the starting point for the next one in the source code file. Don’t write tests like that! Write your tests as if each one could run on its own, or as if the suite’s test order could be randomized.
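
Here’s a minimal pytest sketch of that principle: each test receives its own fresh resource from a fixture instead of sharing module-level state (the Account class is hypothetical):

import pytest


class Account:
  """A hypothetical resource under test."""

  def __init__(self):
    self.balance = 0

  def deposit(self, amount):
    self.balance += amount


@pytest.fixture
def account():
  # Function scope: a brand-new Account is created for every test
  return Account()


def test_deposit_adds_to_balance(account):
  account.deposit(10)
  assert account.balance == 10


def test_new_account_starts_empty(account):
  # Passes no matter the test order, because nothing is shared
  assert account.balance == 0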

#5. Repeatable

Testing tends to be a repetitive activity. Test suites need to run continuously to provide fast feedback as development progresses. Every time they run, they must yield deterministic results because teams expect consistency.

Unfortunately, manual tests are not very repeatable. They require lots of time to run, and human testers may not run them exactly the same way each iteration. Test automation enables tests to be truly repeatable. Tests can be automated once and run repeatedly and continuously. Automated scripts always run the same way, too.

#6. Reliable

Tests must run successfully to completion, whether they return PASS or FAIL results. “Flaky” tests – tests that occasionally fail for arbitrary reasons – waste time and create doubt. If a test cannot run reliably, then how can its results be trusted? And why would a team invest so much time developing tests if they don’t run well?

You shouldn’t need to rerun tests to get good results. If tests fail intermittently, find out why. Correct any automation errors. Tune automation timeouts. Scale infrastructure to the appropriate sizes. Prioritize test stability over speed. And don’t overlook any wonky bugs that could be lurking in the product under test!

#7. Efficient

Providing fast feedback is testing’s main purpose. Fast feedback helps teams catch issues early and keep developing safely. Fast tests enable fast feedback. Slow tests cause slow feedback. They force teams to limit coverage. They waste time and money, and they increase the risk that bugs do more damage.

Optimize tests to be as efficient as possible without jeopardizing stability. Don’t include unnecessary steps. Use smart waits instead of hard sleeps. Write atomic tests that cover individual behaviors. For example, use APIs instead of UIs to prep data. Set up tests to run in parallel. Run tests as part of Continuous Integration pipelines so that they deliver results immediately.
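
For example, here’s what a smart wait looks like with Selenium WebDriver in Python. This sketch assumes browser is an active WebDriver instance; the locator and timeout are illustrative:

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Bad: a hard sleep always burns the full 10 seconds
# time.sleep(10)

# Better: a smart wait returns as soon as the element appears
wait = WebDriverWait(browser, timeout=10)
search_input = wait.until(
  EC.visibility_of_element_located((By.ID, 'search_form_input')))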

#8. Organized

An effective test has a clear identity:

  • Purpose: Why run this test?
  • Coverage: What behavior or feature does this test cover?
  • Level: Should this test be a unit, integration, or end-to-end test?

Identity informs placement and type. Make sure tests belong to appropriate suites. For example, tests that interact with Web UIs via Selenium WebDriver do not belong in unit test suites. Group related tests together using subdirectories and/or tags.
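
In pytest, for instance, markers can serve as those tags. This sketch assumes a WebDriver browser fixture like the one shown earlier; the marker names are illustrative:

import pytest


@pytest.mark.unit
def test_absolute_value():
  assert abs(-5) == 5


@pytest.mark.e2e
def test_duckduckgo_home_page_loads(browser):
  browser.get('https://www.duckduckgo.com')
  assert 'DuckDuckGo' in browser.title

Register the marker names in a config file like pytest.ini, and then run one group at a time with a filter like python -m pytest -m unit.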

#9. Reportable

Functional tests yield PASS or FAIL results with logs, screenshots, and other artifacts. Large suites yield lots of results. Reports should present results in a readable, searchable format. They should make failures stand out with colors and error messages. They should also include other helpful information like duration times and pass rates. Unit test reports should include code coverage, too.

Publish test reports to public dashboards so everyone can see them. Most Continuous Integration servers like Jenkins include some sort of test reporting mechanism. Furthermore, capture metrics like test result histories and duration times in data formats instead of textual reports so they can be analyzed for trends.

#10. Maintainable

Tests are inherently fragile because they depend upon the features they cover. If features change, then tests probably break. Furthermore, automated tests are susceptible to code duplication because they frequently repeat similar steps. Code duplication is code cancer – it copies problems throughout a code base.

Fragility and duplication cause a nightmare for maintainability. To mitigate the maintenance burden, develop tests using the same practices as developing products. Don’t Repeat Yourself. Simple is better than complex. Do test reviews. For automation, follow good design principles like separating concerns and building solution layers. Make tests easy to update in the future!

#11. Trustworthy

A test is “successful” if it runs to completion and yields a correct PASS or FAIL result. The veracity of the outcome matters. Tests that report false failures make teams waste time doing unnecessary triage. Tests that report false passing results give a false sense of security and let bugs go undetected. Both failure modes ultimately cause teams to mistrust the tests.

Unfortunately, I’ve seen quite a few untrustworthy tests before. Sometimes, test assertions don’t check the right things, or they might be missing entirely! I’ve also seen tests for which the title does not match the behavior under test. These problems tend to go unnoticed in large test suites, too. Make sure every single test is trustworthy. Review new tests carefully, and take time to improve existing tests whenever problems are discovered.

#12. Valuable

Testing takes a lot of work. It takes time away from developing new things. Therefore, testing must be worth the effort. Since covering every single behavior is impossible, teams should apply a risk-based strategy to determine which behaviors pose the most risk if they fail and then prioritize testing for those behaviors.

If you are unsure if a test is genuinely valuable, ask this question: If the test fails, will the team take action to fix the defect? If the answer is yes, then the test is very valuable. If the answer is no, then look for other, more important behaviors to cover with tests.

Any more traits?

These dozen traits certainly make tests highly effective. However, this list is not necessarily complete. Do you have any more traits to add to the list? Do you agree or disagree with the traits I’ve given? Let me know by retweeting and commenting on my tweet below!

(Note: I changed #8 from “Leveled Appropriately” to “Organized” to be more concise. The tweet is older than the article.)

Arrange-Act-Assert: A Pattern for Writing Good Tests

A test is a procedure that exercises a behavior to determine if the behavior functions correctly. There are several different kinds of tests, like unit tests, integration tests, or end-to-end tests, but all functional tests do the same basic thing: they try something and report PASS or FAIL.

Testing provides an empirical feedback loop for development. That’s how testing keeps us safe. With tests, we know when things break. Without tests, coding can be dangerous. We don’t want to deploy big ol’ bugs!

So, if we intend to spend time writing tests, how can we write good tests? There’s a simple but powerful pattern I like to follow: Arrange-Act-Assert.

The Pattern

Arrange-Act-Assert is a great way to structure test cases. It prescribes an order of operations:

  1. Arrange inputs and targets. Arrange steps should set up the test case. Does the test require any objects or special settings? Does it need to prep a database? Does it need to log into a web app? Handle all of these operations at the start of the test.
  2. Act on the target behavior. Act steps should cover the main thing to be tested. This could be calling a function or method, calling a REST API, or interacting with a web page. Keep actions focused on the target behavior.
  3. Assert expected outcomes. Act steps should elicit some sort of response. Assert steps verify the goodness or badness of that response. Sometimes, assertions are as simple as checking numeric or string values. Other times, they may require checking multiple facets of a system. Assertions will ultimately determine if the test passes or fails.

Behavior-Driven Development follows the Arrange-Act-Assert pattern by another name: Given-When-Then. The Gherkin language uses Given-When-Then steps to specify behaviors in scenarios. Given-When-Then is essentially the same formula as Arrange-Act-Assert.

Every major programming language has at least one test framework. Frameworks like JUnit, NUnit, Cucumber, and (my favorite) pytest enable you, as the programmer, to automate tests, execute suites, and report results. However, the framework itself doesn’t make a test case “good” or “bad.” You, as the tester, must know how to write good tests!

Let’s look at how to apply the Arrange-Act-Assert pattern in Python code. I’ll use pytest for demonstration.

Unit Testing

Here’s a basic unit test for Python’s absolute value function:

def test_abs_for_a_negative_number():

  # Arrange
  negative = -5
  
  # Act
  answer = abs(negative)
  
  # Assert
  assert answer == 5

This test may seem trivial, but we can use it to illustrate our pattern. I like to write comments denoting each phase of the test case as well.

  1. The Arrange step creates a variable named “negative” for testing.
  2. The Act step calls the “abs” function using the “negative” variable and stores the returned value in a variable named “answer.”
  3. The Assert step verifies that “answer” is a positive value.

Feature Testing

Let’s kick it up a notch with a more complicated test. This next example tests the DuckDuckGo Instant Answer API using the requests package:

import requests

def test_duckduckgo_instant_answer_api_search():

  # Arrange
  url = 'https://api.duckduckgo.com/?q=python+programming&format=json'
  
  # Act
  response = requests.get(url)
  body = response.json()
  
  # Assert
  assert response.status_code == 200
  assert 'Python' in body['AbstractText']

We can clearly see that the Arrange-Act-Assert pattern works for feature tests as well as unit tests.

  1. The Arrange step forms the endpoint URL for searching for “Python Programming.” Notice the base URL and the query parameters.
  2. The Act steps call the API using the URL using “requests” and then parse the response’s body from JSON into a Python dictionary.
  3. The Assert steps then verify that the HTTP status code was 200, meaning “OK” or “success,” and that the word “Python” appears somewhere in the response’s abstract text.

Arrange-Act-Assert also works for other types of feature tests, like Web UI and mobile tests.
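
Here’s a sketch of the pattern in a Web UI test using Selenium WebDriver. The DuckDuckGo locator is illustrative, and in a real suite, the browser setup and quit call would move into a fixture:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


def test_duckduckgo_web_search():

  # Arrange: launch the browser and load the search page
  browser = webdriver.Chrome()
  browser.implicitly_wait(10)
  browser.get('https://www.duckduckgo.com')

  # Act: search for a phrase
  search_input = browser.find_element(By.ID, 'search_form_input_homepage')
  search_input.send_keys('panda' + Keys.RETURN)

  # Assert: the result page title contains the phrase
  assert 'panda' in browser.title

  browser.quit()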

More Advice

Arrange-Act-Assert is powerful because it is simple. It forces tests to focus on independent, individual behaviors. It separates setup actions from the main actions. It requires tests to make verifications and not merely run through motions. Notice how the pattern is not Arrange-Act-Assert-Act-Assert – subsequent actions and assertions belong in separate tests! Arrange-Act-Assert is a great pattern to follow for writing good functional tests.

Using Multiple Test Frameworks Simultaneously

Someone recently asked me the following question, which I’ve paraphrased for better context:

Is it good practice to use multiple test frameworks simultaneously? For example, I’m working on a Python project. I want to do BDD with behave for feature testing, but pytest would be better for unit testing. Can I use both? If so, how should I structure my project(s)?

The short answer: Yes, you should use the right frameworks for the right needs. Using more than one test framework is typically not difficult to set up. Let’s dig into this.

The F-word

I despise the F-word – “framework.” Within the test automation space, people use the word “framework” to refer to different things. Is the “framework” only the test package like pytest or JUnit? Does it include the tests themselves? Does it refer to automation tools like Selenium WebDriver?

For clarity, I prefer to use two different terms: “framework” and “solution.” A test framework is a software package that lets programmers write tests as methods or functions, run the tests, and report the results. A test solution is a software implementation for a testing problem. It typically includes frameworks, tools, and test cases. To me, a framework is narrow, but a solution is comprehensive.

The original question used the word “framework,” but I think it should be answered in terms of solutions. There are two potential solutions at hand: one for unit tests written with pytest, and another for feature tests written with behave.

One Size Does Not Fit All

Always use the right tools or frameworks for the right needs. Unit tests and feature tests are fundamentally different. Unit tests directly access internal functions and methods in product code, whereas feature tests interact with live versions of the product as an external user or caller. Thus, they need different kinds of testing solutions, which most likely will require different tools and frameworks.

For example, behave is a BDD framework for Python. Programmers write test cases in plain-language Gherkin with step definitions as Python functions. Gherkin test cases are intuitively readable and understandable, which makes them great for testing high-level behaviors like interacting with a Web page. However, BDD frameworks add complexity that hampers unit test development. Unit tests are inherently “code-y” and low-level because they directly call product code. The pytest framework would be a better choice for unit testing. Conversely, feature tests could be written using raw pytest, but behave provides a more natural structure for describing features. Hence, separate solutions for different test types would be ideal.
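
For illustration, here’s a minimal behave step definition sketch. It assumes a WebDriver instance has been attached to the context (typically in environment.py):

# features/steps/search_steps.py
from behave import given, then


@given('the DuckDuckGo home page is displayed')
def step_load_home_page(context):
  context.browser.get('https://www.duckduckgo.com')


@then('the page title contains "{phrase}"')
def step_check_title(context, phrase):
  assert phrase in context.browser.title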

Same or Separate Repositories?

If more than one test solution is appropriate for a given software project, the next question is where to put the test code. Should all test code go into the same repository as the product code, or should they go into separate repositories? Unfortunately, there is no universally correct answer. Here are some factors to consider.

Unit tests should always be located in the same repository as the product code they test. Unit tests directly depend upon the product code. They must be written in the same language. Any time the product code is refactored, unit tests must be updated.

Feature tests can be placed in the same repository or a separate repository. I recommend putting feature tests in the same repository as product code if feature tests are written in the same language as the product code and if all the product code under test is located in the same repository. That way, tests are version-controlled together with the product under test. Otherwise, I recommend putting feature tests in their own separate repository. Mixed language repositories can be confusing to maintain, and version control must be handled differently with multi-repository products.

Same Repository Structure

One test solution in one repository is easy to set up, but multiple test solutions in one repository can be tricky. Thankfully, it’s not impossible. Project structure ultimately depends upon the language. Regardless of language, I recommend separating concerns. A repository should have clearly separate spaces (e.g., subdirectories) for product code and test code. Test code should be further divided by test types and then coverage areas. Testers should be able to run specific tests using convenient filters.

Here are ways to handle multiple test solutions in a few different languages:

  • In Python, project structure is fairly flexible. Conventionally, all tests belong under a top-level directory named “tests.” Subdirectories may be added thereunder, such as “unit” and “feature” (see the layout sketch after this list). Frameworks like pytest and behave can take search paths so they run the proper tests. Furthermore, if using pytest-bdd instead of behave, pytest can use markers/tags instead of search paths for filtering tests.
  • In .NET (like C#), the terms “project” and “solution” have special meanings. A .NET project is a collection of code that is built into one artifact. A .NET solution is a collection of projects that interrelate. Typically, the best practice in .NET would be to create a separate project for each test type/suite within the same .NET solution. I have personally set up a .NET solution that included separate projects for NUnit unit tests and SpecFlow feature tests.
  • In Java, project structure depends upon the project’s build automation tool. Most Java projects seem to use Maven or Gradle. In Maven’s Standard Directory Layout, tests belong under “src/test”. Different test types can be placed under separate packages there. The POM might need some extra configuration to run tests at different build phases.
  • In JavaScript, test placement depends heavily upon the project type. For example, Angular creates separate directories for unit tests using Jasmine and end-to-end tests using Protractor when initializing a new project.
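
For instance, a Python repository hosting both solutions might use a hypothetical layout like this:

my-project/
├── my_product/              # product code (package name is hypothetical)
├── tests/
│   ├── unit/                # pytest unit tests
│   └── feature/             # behave feature tests
│       ├── duckduckgo.feature
│       └── steps/
│           └── search_steps.py
├── config.json
└── requirements.txt

Each framework can then target its own path: python -m pytest tests/unit runs the unit tests, while behave tests/feature runs the feature tests.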

Do What’s Best

Different test tools and frameworks meet different needs. No single one can solve all problems. Make sure to use the right tools for the problems at hand. Don’t force yourself to use the wrong thing simply because it is already used elsewhere.

Grace Hopper Bug

Writing Good Bug Reports

Bugs, bugs, bugs! Talking about software development is impossible without also talking about bugs. At first, the term “bug” may seem like strange slang for “defect.” Are there creepy-crawlies running about our code and computers? Not usually – but sometimes, yes! In 1947, Grace Hopper found a dead moth stuck in a relay in Harvard’s Mark II computer, and her “bug” report (pictured above) joked about finding a real bug behind a computer defect. Even though inventors like Thomas Edison had used the term “bug” to describe technological glitches for years beforehand, Grace Hopper’s bug cemented the terminology for computers and software.

Bugs happen. Why? Nobody is perfect, and therefore, no software is perfect. Building high-quality software requires good designs to resist bugs, good implementations to avoid bugs, and good feedback to report bugs when they inevitably appear. This article covers best practices for writing good bug reports when bugs do happen.

What is a bug “report”?

A “bug” is a defect, plain and simple. The term refers specifically to an issue in the software. However, a bug report (or ticket) is a written record detailing the defect. Bug reports are typically written in a project management tool like Jira. The bug and its report are two separate entities. Certainly, undetected bugs can exist in a software product without having associated reports.

When should a bug report be written?

A bug report should be written whenever a new problem that appears to be a defect is discovered. Problems can be discovered during testing activities like automated test runs or exploratory manual testing. They can also be discovered while developing new features. In the worst case, customers will find problems and submit complaints!

However, notice how I used the term “problem” and not “defect.” All problems need solutions, but not all problems are truly defects. Sometimes, the user who reported the problem doesn’t know how a feature should work. Other times, the environment in which the problem occurred is improperly configured. The team member who first discovered the problem or received the customer complaint should initially do a light investigation to make sure the problem looks like a genuine software defect. Initial investigation should be done expediently while context is fresh.

If the problem appears to be a real defect and not a misunderstanding or misconfiguration, then the investigator should search existing bug reports for the same issue. Someone else on the team might have recently discovered the same issue or a similar issue. Bugs also can reappear even after being “fixed.” Adding information to existing reports is typically a better practice than creating duplicative reports.

What if the problem is unclear? Whenever I’m not sure if a problem is a bug or another type of issue, I ask others on my team for their thoughts. I try to ask questions like, “Does this look right? What could cause this behavior? Did I do something incorrectly?” Blindly opening bug reports for every problem is akin to “the boy who cried wolf” – it can desensitize a team to warnings of real, important bugs. Doing just a bit of investigation shows good intentions and, in many cases, spares the team extra work later. Nevertheless, when in doubt, creating a report is better than not creating a report. A little churn from false positives is better than risking real problems.

Why should bug reports be written?

Whenever a real bug is discovered, a team should write a report for it. Simply talking about the bug might seem like an easier, faster approach, especially for smaller teams, but the act of writing a report for the bug is important for the integrity of the software development process. A written record is an artifact that requires resolution:

  • A report provides a feedback loop to developers.
  • A report contains all bug information in a single source.
  • A report can be tracked in a project management tool.
  • A report can be sized and prioritized with development work.
  • A report records work history for the bug.

Bug reports help make bug fixes part of the development process. They bring attention to bugs, and they cannot be ignored or overlooked easily.

What goes into a bug report?

Regardless of tool or team process, good bug reports contain the following information:

  • Problem Summary
    • A brief, one-line description of the defect
    • Clearly states what is defective
    • Should be used as the title of the report
  • Report Identifier
    • A unique identifier for the bug report
    • Typically generated automatically by the management tool (like Jira)
  • Full Description
    • A longer description of the problem
    • Explain any relevant information
    • Use clear, plain language
  • Steps to Reproduce
    • A clear procedure for manually reproducing the failure
    • Could be steps from a failing test case
    • Include actual vs. expected results
  • Occurrences
    • Cases when the defect does and does not appear
    • Share product version, code branch, environment name, system configuration, etc.
    • Does the defect appear consistently or intermittently?
  • Artifacts
    • Attach logs, screenshots, files, links, etc.
  • Impact
    • How does the defect affect the customer?
    • Does the defect block any development work?
    • What test cases will fail due to this defect?
  • Root Cause Analysis
    • If known, explain why the defect happened
    • If unknown, offer possible reasons for the defect
    • Warning: clearly denote proof vs. speculation!
  • Triage
    • Assign an owner if possible
    • Assign a severity or priority based on guidelines and common sense
    • Assign a deadline if applicable
    • Assign any other information the team needs

I use this list as a template whenever I write bug reports. For example, in a Jira bug ticket, I’ll make each item a heading in the ticket’s “Description” field. Sometimes, I might skip sections if I don’t have the information. However, I typically don’t open a bug report until I have most of this information for the defect.
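
For illustration, a brief report following this template might look like the example below. All details are hypothetical:

Problem Summary: Login button does nothing when the username contains a space

Full Description: On the login page, clicking “Log In” has no effect if the username field contains a space character. No error message appears.

Steps to Reproduce:
  1. Navigate to the login page.
  2. Enter “jane doe” as the username and any valid password.
  3. Click “Log In”.
  Expected: an error message for the invalid username. Actual: nothing happens.

Occurrences: Reproduced consistently on version 2.3.1 in the staging environment with both Chrome and Firefox.

Artifacts: Browser console log attached.

Impact: Blocks the automated login tests; customers whose usernames contain spaces cannot log in.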

How should bug reports be handled?

One word: professionally. Handle bug reports professionally. What does that mean?

Provide as much information as possible in bug reports. Bug reports are a form of communication and record. Saying little more than, “It dun broke,” doesn’t help anyone fix the problem. Provide useful, accurate information so that others who didn’t discover the bug have enough context to help.

Triage bugs expediently. When you uncover a problem, investigate it. When you need a second opinion, ask for it. When someone sends a bug report to you or your team, triage it, fix it, and reply to the person who reported it. Don’t ignore problems, and don’t let them fester.

Treat bug reports as unfolding stories. Bugs are usually unexpected, tricky surprises. The information in a bug report can be incomplete or even incorrect because it represents best-guess theories about the defect. The report artifact should be treated as a living document. Information can be added or updated as work proceeds. Team members should be gracious to each other regarding available information.

Do not shame or be shamed. Bugs happen. Even the best developers make mistakes. A mature, healthy team should faithfully report bugs, quickly resolve them, and take steps to avoid similar problems in the future. Developers should not stigmatize bugs or try to censor bug counts. Testers should not brag about the number of bugs they find. Language used in bug reports should focus on software, not people. Gossiping and public shaming over bugs should not happen. Any shame associated with bugs can drive a team to do bad practices. Any recurring issues should be addressed with individuals directly or with the help of management.

Good bug reports matter

Writing bug reports well is vital for team collaboration. Organized, accurate information can save hours of time wasted on fruitless attempts to reproduce issues or attempt fixes. Give these practices a try the next time you discover a bug!

PyCon 2020 is Cancelled: Here’s Why It Hurts and What We Can Do

PyCon 2020 was cancelled today due to COVID-19. You can read the official announcement here: https://pycon.blogspot.com/2020/03/pycon-us-2020-in-pittsburgh.html. I completely support the Python Software Foundation in this decision, and I can’t imagine how difficult these past few weeks must have been for them.

Although the news of PyCon’s cancellation is not surprising, it is nevertheless devastating for members of the Python community. Here’s why it hurts, from the perspective of a full-hearted Pythonista, and here’s what we can do about it.

Python is Community-Driven

A frequent adage among Pythonistas is, “Come for the language, and stay for the community.” I can personally attest to this statement. When I started using Python in earnest in 2015, I loved how clean, concise, and powerful the language was. I didn’t fully engage with the Python community until PyCon 2018, but once I did, I never left, because I felt like I had become part of something greater than just a bunch of coders.

Python is community-driven. There’s no major company behind the Python language, like Oracle for Java or Microsoft for .NET. Pythonistas keep Python going, whether that’s by being a Python language core developer, a third-party package developer, a conference organizer, a meetup volunteer, a corporate sponsor, or just a coder using Python for projects. And the people in the community are awesome. We help each other. We support each other. We eschew arrogance, champion diversity, and practice inclusivity.

PyCon US is the biggest worldwide Python event of the year. Thousands of Pythonistas from around the world join together for days of tutorials, talks, and sprints. This is the only time that many of us get to see each other in person. It’s also the only way some of us would have ever met each other. I think of my good mate Julian, co-founder of PyBites. We met by chance at PyCon 2018 in the “hallway track” (meaning, just walking around and meeting people), and we hit it off right away. Julian lives in Australia, while I live in the United States. We probably would never have met outside of PyCon. Since then, we’ve done video chats together and promoted each other’s blogs. We spent a lot of time together at PyCon 2019 and hoped to have another blast at PyCon 2020. We even had plans for a bunch of us content developers to get together to party. Unfortunately, that time together must be postponed until 2021.

There are several other individuals I could name in addition to Julian. I’m sure there are many other groups of friends throughout the community who look to PyCon as their time to meet. Losing that opportunity is heartbreaking.

PyCon is a Spectacle

PyCon itself is not just a conference – it’s a spectacle. PyCon is the community’s annual celebration of creativity, innovation, and progress. Talks showcase exciting projects. Tutorials share deep expertise on critical subjects. Sprints put rocket boosters underneath open source projects to get work done. Online recordings become mainstay resources for the topics they cover. Becoming a speaker at PyCon is truly a badge of honor. Sponsors shower attendees with more swag than a carry-on bag can hold: t-shirts, socks, stickers, yo-yos, autographed books, iPads, water bottles, gloves, Pokémon cards; the list goes on and on. Many sponsors even host after-parties with dinner and drinks. All of these activities combined make PyCon much more than just another conference or event.

PyCon is truly a time to shine. Anyone who attends PyCon catches the magic in the air. There’s an undeniable buzz. And the anticipation leading up to PyCon can be unbearable. It’s like when kids go to Disney World. I can’t tell you how many friends I’ve encouraged to go to PyCon.

At PyCon 2020, we had much to celebrate. Python is now one of the world’s most popular programming languages, according to several surveys. Python 2 reached end-of-life on January 1. There are more Python projects and resources than ever before. 2020 is also the start of a new decade. We can still celebrate these milestones, just not en masse at PyCon this year.

PyCon Supports the PSF

The Python Software Foundation (PSF) is the non-profit organization that supports the Python language and community. Here’s what they do, copied directly from their website:

The Python Software Foundation (PSF) is a 501(c)(3) non-profit corporation that holds the intellectual property rights behind the Python programming language. We manage the open source licensing for Python version 2.1 and later and own and protect the trademarks associated with Python. We also run the North American PyCon conference annually, support other Python conferences around the world, and fund Python related development with our grants program and by funding special projects.

PyCon is a major source of revenue for the PSF. I don’t know the ins and outs of the numbers, but I do know that cancelling PyCon will financially hurt the PSF, which will then affect what the PSF can do to carry out its mission. That’s no bueno.

What Can We Do?

Python is community-driven, and we are not powerless. Here are some things we can do as Pythonistas in light of PyCon 2020’s cancellation:

Support the Python Software Foundation. Openly and publicly thank the PSF for everything they have done. Offer heartfelt sympathy for the incredibly tough decisions they’ve had to make in recent weeks. They unquestionably made the right decision here.

Join the Python Software Foundation. Anyone can become a member. There are varying levels of membership and commitment. Check out the PSF Membership FAQ for more info. Donations will greatly help the PSF through this tough time.

Engage your local Python community. The Python community is worldwide. Look for a local meetup. Attend regional Python conferences if possible – PyCon isn’t the only Python conference! The closest ones to where I live are PyGotham, PyOhio, and PyTennessee, and I’m helping to re-launch PyCarolinas.

Engage the online Python community. Even though many of us are practicing social distancing due to COVID-19, we can still keep in touch through the Internet. Support each other. Be intentional with good communication. Personally, I started using the hashtag #PythonStrong on Twitter.

Stay safe. COVID-19 is serious. Wherever you are, be smart and do the right things. For folks like me in the United States, that means social distancing right now.

Attend PyCon 2021. Since PyCon 2020 is cancelled, we ought to make PyCon 2021 the best PyCon ever.

Python Strong

PyCon 2020’s cancellation is devastating but necessary. We shall overcome. Stay safe out there, Pythonistas. Stay #PythonStrong!

PyCarolinas 2020 Update

Hello World! I’d like to give an update on the PyCarolinas 2020 conference, since we’ve been quiet for quite some time. I’ll share what we’ve done and a new vision for where we want to go, especially given current world events.

How We Got Started

Calvin Spealman founded “PyCarolinas” in 2012 with the first (and so far only) conference at UNC Chapel Hill. The only other Python conference held in the Carolinas since then was PyData Carolinas 2016, hosted by IBM. Despite the many talented Pythonistas in both North and South Carolina, various factors prevented the return of either conference.

I first encountered the Python community when I delivered my first conference talk ever at PyData Carolinas 2016. However, I really became engaged at PyCon 2018, an experience that forever changed my life. I started speaking at several Python conferences around North America. The people I met became dear friends, and the ideas I learned inspired me to be a better Pythonista. However, one thing disappointed me: my home state didn’t have a regional Python conference. I wanted to bring the awesomeness of a Python conference to my backyard so that others could join the fun.

During PyCon 2019, I shared this idea with some friends, and every one of them said that we should make it happen. A few of us attended an open space for conference organizers to swap ideas. Dustin Ingram then invited me to give a call-to-action for a PyCarolinas 2020 conference on the big stage during the “parade of worldwide conferences.” Immediately thereafter, I held an open space for PyCarolinas to kick things off, and dozens of people attended. We created a Slack room, launched a newsletter, and started a Twitter storm. I got in touch with Calvin so we could formally plan things together. Calvin secured a venue at the Red Hat Annex for June 20-21. We even created a logo and took Code of Conduct training at the invitation of our wonderful PyGotham friends. Things were looking bright.

Well, What Happened?

Life happened. From November until now, I had to handle several personal and family matters, in addition to holidays, my full-time job, and various commitments. You can read the full story here. Calvin also had things come up. As a result, PyCarolinas progress was minimal. We brought together a community, but we failed to meet critical milestones. I take responsibility for those failures.

There’s also a new monkey wrench in our plans: COVID-19. The coronavirus is starting to spread throughout the United States, and North Carolina is already in a state of emergency due to multiple local cases. Other conferences like SXSW 2020 and E3 have been cancelled. At the time of writing this article, PyCon 2020 might be cancelled or postponed. We just don’t know how things will be by June. We would hate to put a lot of work into PyCarolinas 2020 if it could be cancelled due to COVID-19, especially when we are already behind schedule.

A New Way Forward

Personally, I was ready to give up and recommend that we cancel PyCarolinas 2020. Then, while returning from PyTennessee 2020, I had a stroke of inspiration:

What if we made PyCarolinas 2020 an “unconference”?

Traditional conferences take a lot of top-down planning and hard commitments, which is not something we can or should do right now. Unconferences, on the other hand, are participant-driven. Their organization is lean: provide a space for people to gather, collaborate, and cross-pollinate.

Here’s the new vision I’d like to cast for a PyCarolinas 2020 Unconference:

  • One day only: Saturday, June 20 at Red Hat Annex
  • Lightning talks only: no lengthy CFP; informal sign-ups beforehand
  • Maybe a keynote speaker
  • Open collaboration spaces in the other rooms
  • Set aside time for organizers to seriously plan PyCarolinas 2021
  • Limit sponsorships for simplicity
  • Uphold the Python community’s Code of Conduct
  • Offer only about 100 tickets to keep the event small
  • Encourage local and regional attendance
  • Empower the Python community to be awesome!

Furthermore, I would like to offer tickets for FREE! Free tickets would allow anyone to come, and they would also help us as organizers avoid the hassle of money changing hands, bank accounts, and legal entities for this event.

Red Hat has already graciously provided a venue for free. I’d like to find a sponsor to provide pizza and soft drinks for lunch. If possible, I’d also like to find a sponsor to print some stickers.

By keeping this event lean, we win both ways. If COVID-19 is no longer an issue by June, then we get to lead a truly unique type of Python regional conference. If COVID-19 is still a problem, then we can easily postpone the event without much loss or pain.

The Next Steps

I’ve shared this idea with a few friends (including Calvin), and everyone so far agrees that this is a good path forward. In the coming days, we will share this plan to make sure the community agrees. Then, if this is the way, we can launch a very simple website, set up ticketing, and find someone to sponsor pizza.

Personally, I feel good about this idea. It’s a big relief to downsize. Deep down, I know I can trust the Python community to make this unique type of conference a hit.

If you want to help us take the next steps, sign up for our newsletter and join our Slack room. With this new inspiration and energy, I’ll do my best to stay on top of things.

East Meets West When Translating Django Apps

你好！我叫安迪。 (“Hello! My name is Andy.”)

Don’t understand my Chinese? Don’t feel bad – I don’t know much Mandarin, either! My wife and her mom are from China. When I developed a Django app to help my wife run her small business, I needed to translate the whole site into Chinese so that Mama could use it, too. Thankfully, Django’s translation framework is top-notch.

“East Meets West When Translating Django Apps” is the talk I gave about translating my family’s Django app between English and Chinese. For me, this talk had all the feels. I shared my family’s story as a backdrop. I showed Python code for each step in the translation workflow. I shared the lessons I learned along the way. And I spoke a deeper truth – that translations should bring us all together.
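For a flavor of that workflow, here is a minimal sketch of Django’s built-in translation hooks. The view and string are my own hypothetical examples, not code from the actual app:

```python
# views.py – marking a string for translation with Django's gettext.
from django.http import HttpResponse
from django.utils.translation import gettext as _

def greeting(request):
    # gettext() returns the translation for the currently active language.
    # LocaleMiddleware picks the language per request; settings need
    # USE_I18N = True and LANGUAGES entries such as ("zh-hans", "Chinese").
    return HttpResponse(_("Hello! My name is Andy."))
```

From there, `django-admin makemessages -l zh_Hans` extracts the marked strings into a .po file for translators, and `django-admin compilemessages` compiles the finished translations into the .mo files Django uses at runtime.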

I gave this talk at a few conferences. It was the opener for PyCascades 2020. I also delivered it in person at PyTennessee 2020 and online for PyCon 2020.

Here’s the PyCon recording, which is probably the “definitive” version:

Here’s the PyCascades recording:

Here are the slides from my talk:

Check out my article, Django Admin Translations, to learn how to translate the admin site, too.

4 Rules for Writing Good Gherkin

In Behavior-Driven Development, Gherkin is a specification language for defining behaviors. Gherkin scenarios can easily be automated into test cases. There is a lot to say about writing good Gherkin, but to keep things simple, I like to focus on four major rules (with a short example after the list):

  1. Gherkin’s Golden Rule
  2. The Cardinal Rule of BDD
  3. The Unique Example Rule
  4. The Good Grammar Rule
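
Here is a short scenario of my own to illustrate these rules; the product and step wording are hypothetical, not taken from the TechBeacon article:

```gherkin
# One When-Then pair covers exactly one behavior, the steps read
# plainly for newcomers, and the example value is concrete and unique.
Feature: Web search

  Scenario: Simple web search
    Given the DuckDuckGo home page is displayed
    When the user searches for "panda"
    Then the results page shows links related to "panda"
```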

Check out my TechBeacon article to learn about these rules in depth!