Boa Constrictor is the .NET Screenplay Pattern. It helps you make better interactions for better automation! Its primary use case is Web UI and REST API test automation, but it can be used to automate any kind of interactions. The Screenplay Pattern is much more scalable for development and execution than the Page Object Model.
The Boa Constrictor maintainers and I strongly support open source software. That’s why we participated in Hacktoberfest 2021. In fact, this was the second Hacktoberfest we did. We launched Boa Constrictor as an open source project a year ago during Hacktoberfest 2020! We love sharing our code with the community and inspiring others to get involved. To encourage participation this year, we added the “hacktoberfest” label to open issues, and we offered cool stickers to anyone who contributed.
Boa Constrictor: The .NET Screenplay Pattern Sticker Medallion
Hacktoberfest 2021 was a tremendous success for Boa Constrictor. Even though the project is small, we received several contributions. Here’s a summary of all the new stuff we added to Boa Constrictor:
Updated WebDriver interactions to use Selenium WebDriver 4.0
Implemented asynchronous programming for Tasks and Questions
Extended the Wait Task to wait for multiple Questions using AND and OR logic
Standardized ToString methods for all WebDriver interactions
Automated unit tests for WebDriver Questions
Wrote new user guides for test framework integrations and interaction patterns
Made small refinements to the doc site
Created GitHub templates for issues and pull requests
Replaced the symbols NuGet package with embedded debugging
Added the README to the NuGet package
Added Shields to the README
Restructured projects for docs, logos, and talk
During Hacktoberfest 2021, we made a series of four releases because we believe in lean development that puts new features in the hands of developers ASAP. The final capstone release was version 2.0.0: a culmination of all Hacktoberfest work! Here’s a view of the Boa Constrictor NuGet package with its new README (Shields included):
The Boa Constrictor NuGet package with the new README and Shields
If you like project stats, then here’s a breakdown of the contributions by numbers:
11 total contributors (5 submitting more than one pull request)
41 pull requests closed
151 commits made
Over 10K new lines of code
GitHub’s Code Frequency graph for Boa Constrictor shown below illustrates how much activity the project had during Hacktoberfest 2021. Notice the huge green and red spikes on the right side of the chart corresponding to the month of October 2021. That’s a lot of activity!
The GitHub Code Frequency Graph for Boa Constrictor
Furthermore, every member of my Test Engineering & Architecture (TEA) team at Q2 completed four pull requests for Hacktoberfest, thus earning our prizes and our bragging rights. For the three others on the team, this was their first Hacktoberfest, and Boa Constrictor was their first open source project. We all joined together to make Boa Constrictor better for everyone. I’m very proud of each of them individually and of our team as a whole.
Personally, I gained more experience as an open source project maintainer. I brainstormed ideas with my team, assigned work to volunteers, and provided reviews for pull requests. I also had to handle slightly awkward situations, like politely turning down pull requests that could not be accepted. Thankfully, the project had very little spam, but we did have many potential contributors request to work on issues but then essentially disappear after being assigned. That made me appreciate the folks who did complete their pull requests even more.
Overall, Hacktoberfest 2021 was a great success for Boa Constrictor. We added several new features, docs, and quality-of-life improvements to the project. We also got people excited about open source contributions. Many thanks to Digital Ocean, Appwrite, Intel, and DeepSource for sponsoring Hacktoberfest 2021. Also, special thanks to Digital Ocean for featuring Boa Constrictor in their Hacktoberfest kickoff event. Keep on hacking!
TL;DR: If you want to test your full GitHub Pages site before publishing but don’t want to set up Ruby and Jekyll on your local machine, then:
Commit your doc changes to a new branch.
Push the new branch to GitHub.
Temporarily change the repository’s GitHub Pages publishing source to the new branch.
Reload the GitHub Pages site, and review the changes.
If you have a GitHub repository, did you know that you can create your own documentation site for it within GitHub? Using GitHub Pages, you can write your docs as a set of Markdown pages and then configure your repository to generate and publish a static web site for those pages. All you need to do is configure a publishing source for your repository. Your doc site will go live at:
https://<user>.github.io/<repository>
If this is new to you, then you can learn all about it from the GitHub docs: Working with GitHub Pages. I just found out about this cool feature myself!
GitHub Pages are great because they make it easy to develop docs and code together as part of the same workflow without needing extra tools. Docs can be written as Markdown files, Liquid templates, or raw assets like HTML and CSS. The docs will be version-controlled for safety and shared from a single source of truth. GitHub Pages also provides free hosting with a decent domain name for the doc site. Clearly, the theme is simplicity.
Unfortunately, I hit one challenge while trying GitHub Pages for the first time: How could I test the doc site before publishing it? A repository using GitHub Pages must be configured with a specific branch and folder (/ (root) or /docs) as the publishing source. As soon as changes are committed to that source, the updated pages go live. However, I want a way to view the doc site in its fullness before committing any changes so I don’t accidentally publish any mistakes.
One way to test pages is to use a Markdown editor. Many IDEs have Markdown editors with preview panes. Even GitHub’s web editor lets you preview Markdown before committing it. Unfortunately, while editor previews may help catch a few typos, they won’t test the full end result of static site generation and deployment. They may also have trouble with links or templates.
GitHub’s docs recommend testing your site locally using Jekyll. Jekyll is a static site generator written in Ruby. GitHub Pages uses Jekyll behind the scenes to turn doc pages into full doc sites. If you want to keep your doc development simple, you can just edit Markdown files and let GitHub do the dirty work. However, if you want to do more hands-on things with your docs like testing site generation, then you need to set up Ruby and Jekyll on your local machine. Thankfully, you don’t need to know any Ruby programming to use Jekyll.
I followed GitHub’s instructions for setting up a GitHub Pages site with Jekyll. I installed Ruby and Jekyll and then created a Jekyll site in the /docs folder of my repository. I verified that I could edit and run my site locally in a branch. However, the setup process felt rather hefty. I’m not a Ruby programmer, so setting up a Ruby environment with a few gems felt like a lot of extra work just to verify that my doc pages looked okay. Plus, I could foresee some developers getting stuck while trying to set up these doc tools, especially if the repository’s main code isn’t a Ruby project. Even if setting up Jekyll locally would be the “right” way to develop and test docs, I still wanted a lighter, faster alternative.
Thankfully, I found a workaround that didn’t require any tools outside of GitHub: Commit doc changes to a branch, push the branch to GitHub, and then temporarily change the repository’s GitHub Pages source to the branch! I originally configured my repository to publish docs from the /docs folder in the main branch. When I changed the publishing source to another branch, it regenerated and refreshed the GitHub Pages site. When I changed it back to main, the site reverted without any issues. Eureka! This is a quick, easy hack for testing changes to docs before merging them. You get to try the full site in the main environment without needing any additional tools or setup.
Above is a screenshot of the GitHub Pages settings for one of my repositories. You can find these settings under Settings -> Options for any repository, as long as you have administrative rights. In this screenshot, you can see how I changed the publishing source’s branch from main to docs/test. As soon as I selected this change, GitHub Pages republished the repository’s doc site.
Now, I recognize that this solution is truly a hack. Changing the publishing source affects the “live”, “production” version of the site. It effectively does publish the changes, albeit temporarily. If some random reader happens to visit the site during this type of testing, they may see incorrect or even broken pages. I’d recommend changing the publishing source’s branch only for small projects and for short periods of time. Don’t forget to revert the branch once testing is complete, too. If you are working on a larger, more serious project, then I’d recommend doing full setup for local doc development. Local setup would be safer and would probably make it easier to try more advanced tricks, like templates and themes.
Have you ever seen those “@” tags on top of Python functions and classes? Those are decorators – functions that wrap around other functions. Confusing? At first, but they’re easy with practice. Useful? Very!
Hello, PyTexas 2020! It’s Pandy Knight here. I’m the Automation Panda, and I’m a big Python fan, just like y’all.
Have you ever seen those “@” tags on top of Python functions? Maybe you’ve seen them on top of methods and classes, too. Those are decorators, one of Python’s niftiest language features. Decorators are essentially wrappers – they wrap additional code around existing definitions. When used right, they can clean up your code better than OxiClean! Let’s learn how to use them.
[Slide]
So, here’s a regular old “hello world” function. When we run it, …
[Slide]
…It prints “Hello World!” Nothing fancy here.
[Slide]
Now, let’s take that function…
[Slide]
…And BAM! Add a decorator. Using this “@” sign, we just added a decorator named “tracer” to “hello_world”. So, what is this decorator?
[Slide]
“Tracer” is just another function. But, it’s special because it takes in another function as an argument!
[Slide]
Since “tracer” decorates “hello_world”, the “hello_world” function is passed into “tracer” as an argument. Wow!
So, what’s inside “tracer”?
[Slide]
This decorator has an inner function named “wrapper”. Can you even do that? With Python, yes, you can! The “wrapper” function prints “Entering”, calls the function originally passed into the decorator, and then prints “Exiting”.
[Slide]
When “tracer” decorates “hello_world”, that means “hello_world” will be wrapped by “Entering” and “Exiting” print statements.
[Slides]
Finally, the decorator returns the new “wrapper” function. Any time the decorated function is called, it will effectively be replaced by this new wrapper function.
[Slides]
So, when we call “hello_world”, the trace statements are now printed, too. Wow! That’s amazing. That’s how decorators work.
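Pieced together from the slides described above, the whole example presumably looks something like this (a sketch, since the slide code itself isn’t reproduced in this transcript):

```python
def tracer(func):
    # Inner function that wraps the decorated function
    def wrapper():
        print("Entering")
        func()          # call the function passed into the decorator
        print("Exiting")
    return wrapper

@tracer
def hello_world():
    print("Hello World!")

hello_world()
# Entering
# Hello World!
# Exiting
```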
[Slide] Decorators [Slide] wrap [Slide] functions [Slide] around [Slide] functions!
[Slide]
Think about them like candy bars. The decorator is like the foil wrapper, and the decorated function is like the chocolate inside.
[Slide]
But how is this even possible? That decorator code looks confusing!
[Slide]
Decorators are possible because, in Python, functions are objects. In fancy language, we say functions are “first-class” values. Since functions are just objects, …
[Slide]
…We can pass them into other functions as arguments, …
[Slide]
…define new functions inside existing functions, …
[Slide]
…and return a function from a function.
[Slide]
This is all part of a paradigm called “Functional Programming.” Python supports functional programming because functions can be treated like objects. That’s awesome!
[Slide]
So, using functions as objects, decorators change how functions are called.
[Slide]
Decorators create an “outer” decorator function around an “inner” decorated function. Remember, the outer function is like the foil wrapper, and the inner function is like the chocolate.
[Slide]
Creating an outer function lets you add new code around the inner function. Some people call this “advice.” You can add advice before or after the inner function. You could even skip the inner function!
[Slide]
The best part is, decorators can be applied to any function. They make sharing code easy so you don’t repeat yourself!
[Slide]
Decorators are reminiscent of a paradigm called “Aspect-Oriented Programming,” in which code can be cleverly inserted before and after points of execution. Neat!
[Slide]
So remember, decorators wrap functions around functions, like candy bars!
[Slide]
Hold on, now! We have a problem in that Python code!
[Slide]
If the “wrapper” function effectively replaces “hello_world”, then what identity does “hello_world” report?
[Slide]
Its name is “wrapper”…
[Slide]
And its help is also “wrapper”! That’s not right!
[Slide]
Never fear! There’s an easy solution. The “functools” module provides a decorator named “wraps”. Put “functools.wraps” on the “wrapper” function and pass in the inner function object, and decorated functions once again show the right identity. That’s awesome.
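Applied to the tracer from before, the fix presumably looks like this (a sketch; the docstring is illustrative):

```python
import functools

def tracer(func):
    @functools.wraps(func)  # copy func's name, docstring, etc. onto wrapper
    def wrapper():
        print("Entering")
        func()
        print("Exiting")
    return wrapper

@tracer
def hello_world():
    """Say hello."""
    print("Hello World!")

print(hello_world.__name__)  # hello_world
print(hello_world.__doc__)   # Say hello.
```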
[Slide]
But wait, there’s another problem!
[Slide]
How do decorators work with inputs and outputs? What if we decorate a function with parameters and a return value?
[Slide]
If we try to use the current “tracer”, …
[Slide]
…We get an error! Arguments broke it!
[Slide]
We can fix it! First, add “star args” and “star-star k-w-args” to the “wrapper” function’s parameters, and then pass them through to the inner function. This will make sure all arguments go through the decorator into the decorated function.
[Slide]
Then, capture the inner function’s return value and return it from the “wrapper” function. This makes sure return values also pass through. If the inner function has no return value, don’t worry – the decorator will pass through a “None” value.
[Slide]
When we call the function with the updated “tracer”, …
[Slide]
…we see tracing is now successful again!
[Slide]
When we check the return value, …
[Slide]
…it’s exactly what we expect. It works!
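The updated tracer, with arguments and return values passed through, might look like this (the greet function is a hypothetical stand-in for the slide’s example):

```python
import functools

def tracer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("Entering")
        result = func(*args, **kwargs)  # pass all arguments through
        print("Exiting")
        return result                   # pass the return value through
    return wrapper

@tracer
def greet(name):
    return f"Hello, {name}!"

print(greet("PyTexas"))  # Hello, PyTexas!
```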
[Slide]
Wow, that’s awesome!
[Slide]
But wait, there’s more!
[Slide]
You can write a decorator to call a function twice!
[Slide]
Start with the decorator template…
[Slide]
…and call the inner function twice! Return the final return value for continuity.
[Slide]
BAM! It works!
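A sketch of that call-twice decorator, built from the template:

```python
import functools

def call_twice(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)         # first call
        return func(*args, **kwargs)  # second call; return its value for continuity
    return wrapper

@call_twice
def hello_world():
    print("Hello World!")

hello_world()
# Hello World!
# Hello World!
```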
[Slide]
But wait, there’s more!
[Slide]
You can write a timer decorator!
[Slide]
Start with the template, …
[Slide]
…call the inner function, …
[Slide]
…and surround it with timestamps using the “time” module!
[Slide]
BAM! Now you can time any function!
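The timer decorator might look like this (the printed message format is illustrative):

```python
import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()            # timestamp before the call
        result = func(*args, **kwargs)
        end = time.time()              # timestamp after the call
        print(f"{func.__name__} took {end - start} seconds")
        return result
    return wrapper

@timer
def hello_world():
    print("Hello World!")

hello_world()
```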
[Slide]
But wait, there’s more!
[Slide]
You can also add more than one decorator to a function! This is called “nesting”. Order matters. Decorators are executed in order of closeness to the inner function. So, in this case, …
[Slide]
…”call_twice” is applied first, and then “timer” is applied.
[Slide]
If these decorators are reversed, …
[Slide]
…then each inner function call is timed. Cool!
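Nesting the two decorators from before might look like this sketch. Decorators apply bottom-up, so here call_twice wraps hello_world first, and timer times the pair of calls; swapping the two lines would time each call individually:

```python
import functools
import time

def call_twice(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)
        return func(*args, **kwargs)
    return wrapper

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.time() - start} seconds")
        return result
    return wrapper

# call_twice is applied first (closest to the function), then timer
@timer
@call_twice
def hello_world():
    print("Hello World!")

hello_world()
```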
[Slide]
But wait, there’s more!
[Slide]
You can scrub and validate function arguments! Check out these two simple math functions.
[Slide]
Create a decorator to scrub and validate inputs as integers.
[Slide]
Add the wrapper function, and make sure it has positional args.
[Slide]
Then, cast all args as ints before passing them into the inner function.
[Slide]
Now, when calling those math functions, all numbers are integers! Using non-numeric inputs also raises a ValueError!
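A sketch of that argument-scrubbing decorator (the names scrub_ints, add, and subtract are hypothetical; the transcript doesn’t name them):

```python
import functools

def scrub_ints(func):
    @functools.wraps(func)
    def wrapper(*args):
        # Cast every positional argument to int; non-numeric
        # values raise ValueError here, before the math runs.
        ints = [int(arg) for arg in args]
        return func(*ints)
    return wrapper

@scrub_ints
def add(a, b):
    return a + b

@scrub_ints
def subtract(a, b):
    return a - b

print(add("1", 2.5))      # 3 (both arguments cast to int)
print(subtract(10, "4"))  # 6
```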
[Slide]
But wait, there’s more!
[Slide]
You can create decorators with parameters! Here’s a decorator that will repeat a function 5 times.
[Slide]
The “repeat” function is a little different. Instead of taking in the inner function object, it takes in the parameter, which is the number of times to repeat the inner function.
[Slide]
Inside, there’s a “repeat_decorator” function that has a parameter for the inner function. The “repeat” function returns the “repeat_decorator” function.
[Slide]
Inside “repeat_decorator” is the “wrapper” function. It uses “functools.wraps” and passes through all arguments. “repeat_decorator” returns “wrapper”.
[Slide]
Finally, “wrapper” contains the logic for calling the inner function multiple times, according to the “repeat” decorator’s parameter value.
[Slide]
Now, “hello_world” runs 5 times. Nifty!
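Putting those three layers together, the parameterized decorator presumably looks like this sketch:

```python
import functools

def repeat(times):
    # Outermost layer: takes the decorator's parameter, not a function
    def repeat_decorator(func):
        # Middle layer: takes the inner function, like a normal decorator
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Innermost layer: call the inner function "times" times
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return repeat_decorator

@repeat(5)
def hello_world():
    print("Hello World!")

hello_world()  # prints "Hello World!" five times
```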
[Slide]
But wait, there’s more!
[Slide]
Decorators can be used to save state! Here’s a decorator that will count the number of times a function is called.
[Slide]
“count_calls” has the standard decorator structure.
[Slide]
Outside the wrapper, a “count” attribute is initialized to 0. This attribute is added to the wrapper function object.
[Slide]
Inside the wrapper, the count is incremented before calling the inner function. The “count” value will persist across multiple calls.
[Slide]
Initially, the “hello_world” count value is 0.
[Slide]
After two calls, the count value goes up! Awesome!
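The call-counting decorator might look like this sketch:

```python
import functools

def count_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.count += 1          # state saved on the wrapper function object
        return func(*args, **kwargs)
    wrapper.count = 0               # initialized once, outside the wrapper
    return wrapper

@count_calls
def hello_world():
    print("Hello World!")

print(hello_world.count)  # 0
hello_world()
hello_world()
print(hello_world.count)  # 2
```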
[Slide]
But wait, there’s more!
[Slide]
Decorators can be used in classes! Here, the “timer” decorator is applied to this “hello” method.
[Slide]
As long as parameters and return values are set up correctly, decorators can be applied equally to functions and methods.
[Slide]
Decorators can also be applied directly to classes!
[Slide]
When a decorator is applied to a class, it wraps the constructor.
[Slide]
Note that it does not wrap each method in the class.
[Slide]
Since decorators can wrap classes and methods in addition to functions, it would technically be more correct to say that decorators wrap callables around callables!
[Slide]
So all that’s great, but can decorators be tested? Good code must arguably be testable code. Well, today’s your lucky day, because yes, you can test decorators!
[Slide]
Testing decorators can be a challenge. We should always try to test the code we write, but decorators can be tricky. Here’s some advice:
[Slide]
First, separate tests for decorator functions from decorated functions. For decorator functions, focus on intended outcomes. Try to focus on the “wrapper” instead of the “inner” function. Remember, decorators can be applied to any callable, so cover the parts that make decorators unique. Decorated functions should have their own separate unit tests.
[Slide]
Second, apply decorators to “fake” functions used only for testing. These functions can be simple or mocked. That way, unit tests won’t have dependencies on existing functions that could change. Tests will also be simpler if they use slimmed-down decorated functions.
[Slide]
Third, make sure decorators have test coverage for every possible way they could be used. Cover decorator parameters, decorated function arguments, and return values. Make sure the “name” and “help” are correct. Check any side effects like saved state. Try them on methods and classes as well as functions. With decorators, most failures happen due to overlooked edge cases.
[Slide]
Let’s look at a few short decorator tests. We’ll use the “count_calls” decorator from earlier.
There are two decorated functions to use for testing. The first one is a “no operation” function that does nothing. It has no parameters or returns. The second one is a function that takes in one argument and returns it. Both are very simple, but they represent two equivalence classes of decoratable functions.
[Slide]
The test cases will verify outcomes of using the decorator. For “count_calls”, that means tests will focus on the “count” attribute added to decorated functions.
The first test case verifies that the initial count value for any function is zero.
[Slide]
The second test calls a function three times and verifies that count is three.
[Slide]
The third test exercises the “same” function to make sure arguments and return values work correctly. It calls the “same” function, asserts the return value, and asserts the count value.
This collection of tests is by no means complete. It simply shows how to start writing tests for decorators. It also shows that you don’t need to overthink unit tests for decorators. Simple is better than complex!
[Slide]
Up to this point, we’ve covered how to write your own decorators. However, Python has several decorators available in the language and in various modules that you can use, absolutely free!
[Slide]
Decorators like “classmethod”, “staticmethod”, and “property” can apply to methods in a class. Frameworks like Flask and pytest have even more decorators. Let’s take a closer look.
[Slide]
Let’s start by comparing the “classmethod” and “staticmethod” decorators. We’ll revisit the “Greeter” class we saw before.
[Slide]
The “classmethod” decorator will turn any method into a “class” method instead of an “instance” method. That means this “hello” method here can be called directly from the class itself instead of from an object of the class. This decorator will pass a reference to the class into the method so the method has some context of the class. Here, the reference is named “c-l-s”, and the method uses it to get the class’s name. The method can be called using “Greeter.hello”. Wow!
[Slide]
The “staticmethod” decorator works almost the same as the “classmethod” decorator, except that it does not pass a reference to the class into the method.
[Slide]
Notice how the method parameters are empty – no “class” and no “self”. Methods are still called from the class, like here with “Greeter.goodbye”. You could say that “staticmethod” is just a simpler version of “classmethod”.
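The Greeter class with both decorators might look like this (the method bodies are illustrative; the transcript only says hello uses the class’s name):

```python
class Greeter:

    @classmethod
    def hello(cls):
        # Called from the class itself; receives the class as "cls"
        return f"Hello from {cls.__name__}!"

    @staticmethod
    def goodbye():
        # Also called from the class, but receives no class reference
        return "Goodbye!"

print(Greeter.hello())    # Hello from Greeter!
print(Greeter.goodbye())  # Goodbye!
```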
[Slide]
Next, let’s take a look at the “property” decorator. To show how to use it, we’ll create a class called “Accumulator” to keep count of a tally.
[Slide]
Accumulator’s “init” method initializes a “count” attribute to 0.
[Slide]
An “add” method adds an amount to the count. So far, nothing unusual.
[Slide]
Now, let’s add a property. This “count” method has the “property” decorator on it. This means that “count” will be callable as an attribute instead of a method, meaning that it won’t need parentheses. It is effectively a “getter”. The calls to “count” in the “init” and “add” methods will actually call this property instead of a raw variable.
Inside the “count” property, the method returns an attribute named “underscore-count”. The underscore means that this variable should be private. However, this class hasn’t set that variable yet.
[Slide]
That variable is set in the “setter” method. Setters are optional for properties. Here, the setter validates that the value to set is not negative. If the value is good, then it sets “underscore-count”. If the value is negative, then it raises a ValueError.
“underscore-count” is handled internally, while “count” is handled publicly as the property. The getter and setter controls added by the “property” decorator let you control how the property is handled. In this class, the setter protects the property against bad values!
[Slide]
So, let’s use this class. When an Accumulator object is constructed, its initial count is 0.
[Slide]
After adding an amount to the object, its count goes up.
[Slide]
Its count can be directly set to non-negative values. Attempting to set the count directly to a negative value raises an exception, as expected. Protection like that is great!
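Assembled from the description above, the Accumulator class presumably looks like this sketch (the error message wording is mine):

```python
class Accumulator:

    def __init__(self):
        self.count = 0          # goes through the setter below

    def add(self, amount):
        self.count += amount    # uses the getter, then the setter

    @property
    def count(self):
        return self._count      # "_count" is the private backing attribute

    @count.setter
    def count(self, value):
        # The setter protects the property against bad values
        if value < 0:
            raise ValueError(f"count cannot be negative: {value}")
        self._count = value

acc = Accumulator()
print(acc.count)   # 0
acc.add(4)
print(acc.count)   # 4
acc.count = 10     # direct set works for non-negative values
try:
    acc.count = -1
except ValueError as error:
    print(error)   # count cannot be negative: -1
```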
[Slide]
Python packages also frequently contain decorators. For example, Flask is a very popular Web “micro-framework” that enables you to write Web APIs with very little Python code.
[Slide]
Here’s an example “Hello World” Flask app taken directly from the Flask docs online. It imports the “flask” module, creates the app, and defines a single endpoint at the root path that returns the string, “Hello, World!” Flask’s “app.route” decorator can turn any function into a Web API endpoint. That’s awesome!
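That minimal app, matching Flask’s own quickstart example, looks like this:

```python
from flask import Flask

# Create the Flask app
app = Flask(__name__)

# app.route turns this function into a Web API endpoint at the root path
@app.route("/")
def hello_world():
    return "Hello, World!"
```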
[Slide]
Another popular Python package with decorators is pytest, Python’s most popular test framework.
[Slide]
One of pytest’s best features is the ability to parametrize test functions to run for multiple input combinations. Test parameters empower data driven testing for wider test coverage!
[Slide]
To show how this works, we’ll use a simple test for basic arithmetic: “test addition”. It asserts that a plus b equals c.
[Slide]
The values for a, b, and c must come from a list of tuples. For example, 1 plus 2 equals 3, and so forth.
[Slide]
The “pytest.mark.parametrize” decorator connects the list of test values to the test function. It runs the test once for each tuple in the list, and it injects the tuple values into the test case as function arguments. This test case would run four times. Test parameters are a great way to rerun test logic without repeating test code.
[Slide]
So, act now, before it’s too late!
[Slide]
When should you use decorators in your Python code?
[Slide]
Use decorators for aspects.
[Slide]
An aspect is a special cross-cutting concern: something that happens in many parts of the code and frequently requires repetitive calls.
Think about something like logging. If you want to add logging statements to different parts of the code, then you need to write multiple logging calls in all those places. Logging itself is one concern, but it cross-cuts the whole code base. One solution for logging could be to use decorators, much like we saw earlier with the “tracer” decorator.
[Slide]
Good use cases for decorators include logging, profiling, input validation, retries, and registries. These are things that typically require lots of extra calls inserted in duplicative ways. Ask yourself this:
[Slide]
Should the code wrap something else? If yes, then you have a good candidate for a decorator.
[Slide]
However, decorators aren’t good for all circumstances. You should avoid decorators for “main” behaviors, because those should probably be put directly in the body of the decorated function. Avoid logic that’s complicated or has heavy conditionals, too, because simple is better than complex. You should also try to avoid completely side-stepping the decorated function – that could confuse people!
[Slide]
Ask yourself this: is the code you want to write the wrapper or the candy bar itself? Wrappers make good decorators, but candy bars do not.
[Slide]
I hope you’ve found this infomercial about decorators useful! If you want to learn more, …
[Slide]
…check out this Real Python tutorial by Geir Arne Hjelle named “Primer on Python Decorators”. It covers everything I showed here, plus more.
[Slide]
Thank you very much for listening! Again, my name is Pandy Knight – the Automation Panda and a bona fide Pythonista. Please read my blog at AutomationPanda.com, and follow me on Twitter @AutomationPanda. I’d love to hear what y’all end up doing with decorators!
Don’t understand my Chinese? Don’t feel bad – I don’t know much Mandarin, either! My wife and her mom are from China. When I developed a Django app to help my wife run her small business, I needed to translate the whole site into Chinese so that Mama could use it, too. Thankfully, Django’s translation framework is top-notch.
“East Meets West When Translating Django Apps” is the talk I gave about translating my family’s Django app between English and Chinese. For me, this talk had all the feels. I shared my family’s story as a backdrop. I showed Python code for each step in the translation workflow. I gave advice on my lessons learned. And I spoke truth to power – that translations should bring us all together.
This article will show you the best way to handle “main” functions in Python.
Python is like a scripting language in that all top-level statements in a Python “module” (a .py file) are executed whenever the file is loaded. Modules don’t need a “main” function. Consider a module named stuff.py with the following code:
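The original listings for the two modules aren’t reproduced here. Based on the output and the line numbers referenced in the surrounding text, they presumably match the STUFF and MORE_STUFF strings in this sketch, which writes both modules to a temporary directory and runs more_stuff.py so the whole thing fits in one runnable file:

```python
import pathlib
import subprocess
import sys
import tempfile

# stuff.py: defines print_stuff, then calls it at line 4
STUFF = '''\
def print_stuff():
    print("stuff happened!")

print_stuff()
'''

# more_stuff.py: imports stuff, calls print_stuff at line 3
MORE_STUFF = '''\
import stuff

stuff.print_stuff()
print("more stuff!")
'''

with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "stuff.py").write_text(STUFF)
    pathlib.Path(d, "more_stuff.py").write_text(MORE_STUFF)
    result = subprocess.run(
        [sys.executable, "more_stuff.py"],
        cwd=d, capture_output=True, text=True)
    print(result.stdout, end="")
```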
At first glance, we may expect to see two lines printed. However, running more_stuff actually prints three lines:
$ python more_stuff.py
stuff happened!
stuff happened!
more stuff!
Why did “stuff happened!” get printed twice? Well, when “import stuff” was called, the stuff module was loaded. Whenever a module is loaded, all of its code is executed. The print_stuff function was called at line 4 in the stuff module. Then, it was called again at line 3 in the more_stuff module.
So, how can we avoid this problem? Simple: check the module’s __name__. The __name__ variable (pronounced “dunder name”) is dynamically set to the module’s name. If the module is the main entry point, then __name__ will be set to “__main__”. Otherwise, if the module is simply imported, then it will be set to the module’s filename without the “.py” extension.
Let’s rewrite our modules. Here’s stuff:
def print_stuff():
    print("stuff happened!")

if __name__ == '__main__':
    print_stuff()
And here’s more_stuff:
import stuff

if __name__ == '__main__':
    stuff.print_stuff()
    print("more stuff!")
If we rerun more_stuff, then the line “stuff happened!” will print only once:
$ python more_stuff.py
stuff happened!
more stuff!
As a best programming practice, Python modules should not contain any directly executed statements. They should contain only functions, classes, and variable initializations. Anything to be executed as a “main” body should be done after a check for if __name__ == '__main__'. That way, no rogue calls are made when modules are imported by other modules. The conditional check for __name__ also makes the “main” body clear to the reader.
Some people still like to have a “main” function. That’s cool. Just do it like this:
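The pattern presumably looks like this (the body of main is a placeholder):

```python
def main():
    # Put the "main" body here
    print("Hello, World!")

if __name__ == '__main__':
    main()
```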
Python is such a popular language for good reason: Its principles are strong. However, if Python is “the second-best language for everything”… that means the first-best is often chosen instead. Oh no! How can Pythonistas survive a project or workplace without our favorite language?
Personally, even though I love Python, I don’t use it daily at my full time job. Nevertheless, Pythonic thinking guides my whole approach to software. I will talk about how the things that make Python great can be applied to non-Python places in three primary ways:
Principles from the Zen of Python
Projects that partially use Python
People who build strong, healthy community
Check out my talk, Surviving Without Python, from PyOhio 2019! It was one of the most meaningful talks I’ve ever given.
Warning: This article contains strong opinions that might not be suitable for all audiences. Reader discretion is advised.
It’s Monday morning. After an all-too-short weekend and rush hour traffic, you finally arrive at the office. You throw your bag down at your desk, run to the break room, and queue up for coffee. As the next pot is brewing, you check your phone. It’s 8:44am… now 8:45am, and DING! A meeting reminder appears:
Sprint Planning – 9am to 3pm.
What’s your visceral reaction?
I can’t tell you mine, because I won’t put profanity on my blog.
Real Talk
In the capital-A Agile Scrum process, sprint planning is the kick-off meeting for the next iteration. The whole team comes together to talk about features, size work items with points, and commit to deliverables for the next “sprint” (typically 2 weeks long). Idealistically, team members collaborate freely as they learn about product needs and give valued input.
Let’s have some real talk, though: sprint planning sucks. Maybe that’s a harsh word, but, if you’re reading this article, then it caught your attention. Personally, my sprint planning experiences have been lousy. Why? Am I just bellyaching, or are there some serious underlying problems?
Sprint planning is a huge time commitment. 9am to 3pm is not an exaggeration. Sprint planning meetings are typically half-day to full-day affairs. Most people can’t stay focused on one thing for that long. Plus, when a sprint is only two weeks long, one hour is a big chunk of time, let alone 3, or 6, or a whole day. The longer the meeting, the higher the opportunity cost, and the deeper the boredom.
Collaboration is a farce. Planning meetings typically devolve into one “leader” (like a scrum master, product owner, or manager) pulling teeth to get info for a pre-determined list of stories. Only two people, the leader and the story-owner, end up talking, while everyone else just stares at their laptops until it’s their turn. Discussions typically don’t follow any routine beyond, “What’s the acceptance criteria?” and, “Does this look right?” with an interloper occasionally chiming in. Each team member typically gets only a few minutes of value out of an hours-long ordeal. That’s an inefficient use of everyone’s time.
No real planning actually happens. These meetings ought to be called “guessing” meetings, instead. Story point sizes are literally made up. Do they measure time or complexity? No, they really just measure groupthink. Teams even play a game called planning poker that subliminally encourages bluffing. Then, point totals are used to guess how much work can be done during the sprint. When the guess turns out to be wrong at the end of the sprint (and it always does), the team berates itself in retro for letting points slip. Every. Time.
Does It Spark Joy?
I’ve long wondered to myself if sprint planning is a good concept just implemented poorly, or if it’s conceptually flawed at its root. I’m pretty sure it’s just flawed. The meetings don’t facilitate efficient collaboration relative to their time commitments, and estimates are based on poor models. Retros can’t fix that. And gut reactions don’t lie.
So, what should we do? Should we Konmari our planning meetings to see if they spark joy? Should we get rid of our ceremonies and start over? Is this an indictment of the whole Agile Scrum process? But then, how will we know what to do, and when things can get done?
I think we can evolve our Agile process with more effective practices than sprint planning. And I don’t think that evolution would be terribly drastic.
Behavior-Driven Planning
What we really want out of a planning meeting is planning, not pulling and not predicting. Planning is the time to figure out what will be done and how it will be done. The size of the work should be based on the size of the blueprint. Enter Example Mapping.
Example Mapping is a Behavior-Driven Development practice for clarifying and confirming stories. The process is straightforward:
Write the story on a yellow card.
Write each rule that the story must satisfy on a blue card.
Illustrate each rule with examples written on green cards.
Got stuck on a question? Write it on a red card and move on.
One story should take about 20-30 minutes to map. The whole team can participate, or the team can split up into small groups to divide-and-conquer. Rules become acceptance criteria, examples become test cases, and questions become spikes.
Here’s a good walkthrough of Example Mapping.
What about story size? That’s easy – count the cards. How many cards does a story have? That’s a rough size for the work to be done based on the blueprint, not bluffing. More cards = more complexity. It’s objective. No games. Frankly, it can’t be any worse than made-up point values.
This is real planning: a blueprint with a course of action.
So, rather than doing traditional sprint planning meetings, try doing Example Mapping sessions. Actually plan the stories, and use card counts for point sizes. Decisions about priority and commitments can happen between rounds of story mapping, too. The Scrum process can otherwise remain the same.
If you want to evolve further, you could eliminate the time boxes of sprints in favor of Kanban. Two-week work item boundaries can arbitrarily fall in the middle of progress, which is not only disruptive to workflow but can also encourage bad responses (like cramming to get things done or shaming for not being complete). Kanban treats work items as a continuous flow of prioritized work fed to a team in bite-sized pieces. When a new story comes up, it can have its own Example Mapping “planning” meeting. Now, Kanban is not for everyone, but it is popular among post-Agile practitioners. What’s important is to find what works for your team.
Rant Over
I know I expressed strong, controversial opinions in this article. And I also recognize that I’m arguing against bad examples of Agile Scrum. Nevertheless, I believe my points are fair: planning itself is not a waste of time, but the way many teams plan their sprints uses time inefficiently and sets poor expectations. There are better ways to do planning – let’s give them a try!
Software frameworks are great because they apply the principle of Separation of Concerns. A framework’s tools and code handle a specific need in a standard way for developers to write other code more easily. For example:
Web frameworks support receiving requests and sending responses.
Test frameworks include test case structure, runners, and reporting mechanisms.
Logging frameworks control how messages are gathered and stored.
Dependency injection frameworks create and manage object instances.
Recently, a question hit me: How far should a framework go to separate concerns? Should a framework try to do everything all-in-one, or should it behave more like a library that focuses on doing one thing well?
Let’s look at Python Web frameworks as an example. Django, the “Web framework for perfectionists with deadlines,” provides everything a developer could want out of the box. Flask, on the other hand, is a “microframework” that prides itself on minimalism: any extras must be handled by extensions or other packages. The differences between the two become clear when comparing some of their features:
| Feature | Django | Flask |
| --- | --- | --- |
| HTTP Requests and Routing | Included | Werkzeug (bundled) |
| Templates | Included | Jinja2 (bundled) |
| Forms | Included | None (Flask-WTF) |
| Object-Relational Mapping (ORM) | Included | None (SQLAlchemy) |
| Security | Included | None (Flask-Security) |
| Localization | Included | None (Flask-Babel) |
| Admin Interface | Included | None |
Clearly, Django is all-in-one, while Flask is piece-by-piece. To make a serious Flask app, developers must pull in many extra pieces. Many other frameworks have similar rivalries:
JavaScript testing: Jasmine vs. Mocha
JavaScript development: Angular vs. React
Java BDD testing: Serenity vs. Cucumber-JVM
I think each approach has its merits. All-in-one frameworks are more convenient to use, especially for beginners. For developers who are new to a domain or just need to get something up fast, all-in-ones are the better choice. They come with all units already integrated together, and they often have good documentation. However, developing an all-in-one framework takes much more work because it covers multiple concerns. Developers may also feel shoehorned into the framework’s way of doing things. All-in-ones typically dictate what they believe to be the “best” solution.
Piece-by-piece frameworks require more expertise but offer greater flexibility. Developers can pick and choose the pieces they need, and they can change the packages used by the solution more easily. Found a better ORM? Not a problem. Need to localize the site in Chinese? Add it! Solutions can avoid excess weight and stay nimble for the future. The big challenge is successful integration. Furthermore, a library or framework for a singular concern tends to solve the concern in better ways simply because project contributors give it exclusive focus. The more I learn about a space, the more I lean towards a piece-by-piece approach.
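To make the contrast concrete, here is a rough sketch of what setting up each stack looks like from the command line. The Flask extension names come from the feature comparison above; treat the exact PyPI package names as assumptions, since packaging details change over time.

```shell
# All-in-one: a single install covers routing, templates, ORM, forms, and admin
pip install django

# Piece-by-piece: start minimal, then add only what the app needs
pip install flask
pip install flask-wtf      # forms
pip install sqlalchemy     # object-relational mapping
pip install flask-babel    # localization
```

The second approach means more decisions up front, but every dependency in the final app is one the team chose deliberately.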
As always, pick frameworks based on the needs at hand. For example, I like to use Django to make websites for my wife’s small businesses because the admin interface is just so convenient for her, even though I could get away with Flask. However, I’ll probably pick Mocha (piece-by-piece) over Jasmine (all-in-one) whenever I return to JavaScript testing.
The Python community, like many groups, has its own language – and I don’t mean just Python. There are many words and phrases thrown around that may confuse people new to Python. I originally shared some terms in my article, Which Version of Python Should I Use?, but below are some more of those colloquialisms for quick reference:
Git is one of the most popular version control systems (VCS) available, especially thanks to hosting vendors like GitHub. It keeps code safe and shareable. Sometimes, however, certain files should not be shared, like local settings or temporary configs. Git provides a few ways to make sure those files are ignored.
.gitignore
The easiest and most common way to ignore files is to use a gitignore file. Simply create a file named .gitignore in the repository’s root directory. Then, add names and patterns for any files and directories that should not be added to the repository. Use the asterisk (“*”) as a wildcard. For example, “*.class” will ignore all files that have the “.class” extension. Remember to add the .gitignore file to the repository so that it can be shared. As a bonus, Git hosting vendors like GitHub usually provide standard .gitignore templates for popular languages.
Any files covered by the .gitignore file will not be added to the repository. This approach is ideal for local IDE settings like .idea or .vscode, compiler output files like *.class or *.pyc, and test reports. For example, here’s GitHub’s .gitignore template for Java:
An excerpt from that template:
# Compiled class files
*.class
# Log files
*.log
# Package files
*.jar
*.war
*.ear
# JVM crash logs
hs_err_pid*
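To verify that a pattern actually matches, Git can report exactly which rule ignores a given path. A minimal sketch in a throwaway repository (the repo and file names here are made up for illustration):

```shell
# Create a scratch repo with one ignore rule
mkdir scratch-repo && cd scratch-repo
git init -q
echo '*.class' > .gitignore

# Ask Git which rule (file, line number, pattern) ignores a path
git check-ignore -v Main.class
# Prints something like: .gitignore:1:*.class	Main.class
```

This is handy for debugging large or inherited .gitignore files, because it points to the exact line responsible.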
As a best practice, .gitignore should be committed to the repository, which means all team members will share the same set of ignored files. However, some files should be ignored locally and not globally. Those files could be added to .gitignore, but large .gitignore files become cryptic and more likely to break other people’s setups. Thankfully, Git provides a local-only solution: the .git/info/exclude file (under the repository’s hidden .git directory). Simply open it with a text editor and add new entries using the same file pattern format as .gitignore.
# Append a new file to ignore locally
echo "my_private_file" >> .git/info/exclude
skip-worktree
A .gitignore file prevents a file from being added to a repository, but what about preventing changes from being committed to an existing file? For example, developers may want to safely override settings in a shared config file for local testing. That’s where skip-worktree comes in: it allows a developer to “skip” any local changes made to a given file. Changes will not appear under “git status” and thus will not be committed.
Use the following commands:
# Ignore local changes to an existing file
git update-index --skip-worktree path/to/file
# Stop ignoring local changes
git update-index --no-skip-worktree path/to/file
Warning: The skip-worktree setting applies only to the local repository. It is not applied globally! Each developer will need to run the skip-worktree command in their local repository.
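Because the setting is local and easy to forget, it helps to be able to list which files are currently skipped. One way is `git ls-files -v`, which tags each tracked file with a status letter; skip-worktree files show up with an `S`:

```shell
# List tracked files with status tags; 'S' marks skip-worktree files
git ls-files -v | grep '^S'
```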
assume-unchanged
Another option for ignoring files is assume-unchanged. Like skip-worktree, it makes Git ignore changes to files. However, whereas skip-worktree assumes that the user intends to change the file, assume-unchanged assumes that the user will not change the file. The intention is different. Large projects using slow file systems may gain significant performance optimizations by marking unused directories as assume-unchanged. This option also works with other update-index options like really-refresh.
Use the following commands:
# Assume a file will be unchanged
git update-index --assume-unchanged path/to/file
# Undo that assumption
git update-index --no-assume-unchanged path/to/file
Again, this setting applies only to the local repository – it is not applied globally.
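As with skip-worktree, `git ls-files -v` reveals which files are affected; assume-unchanged files are tagged with a lowercase letter (for example, `h` instead of `H`):

```shell
# Lowercase status tags mark files assumed unchanged
git ls-files -v | grep '^[a-z]'
```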
Comparison Table
Which is the best way to ignore files?
| Method | Description | Best Use Cases | Scope |
| --- | --- | --- | --- |
| .gitignore file | Prevents files from being added to the repository. | Local settings, compiler output, test results, etc. | Global |
| .git/info/exclude file | Prevents local files from being added to the repository. | Local settings, compiler output, test results, etc. | Local |
| skip-worktree setting | Prevents local changes from being committed to an existing file. | Shared files that will have local overwrites, like config files. | Local |
| assume-unchanged setting | Allows Git to skip files that won’t be changed for performance optimization. | Unused directories in large projects on slow file systems. | Local |