Making Great Waves: 8 Software Testing Convictions

The Great Wave Off Kanagawa.

Katsushika Hokusai, 1830.

It is one of the most recognizable works of art in the world. It is so famous, it has an emoji: 🌊.

The Great Wave Off Kanagawa is a Japanese woodblock print. It is not a painting or a drawing but a print. In Japanese, the term for this type of art is ukiyo-e, which means “pictures of the floating world.” Ukiyo-e prints first appeared around the 1660s and did not decline in popularity until the Meiji Restoration two centuries later. While most artists focused on people as their subjects, later masters like Hokusai captured landscapes and scenes of nature. Here, in The Great Wave, we see a giant wave, full of energy and ferocity, crashing down onto three fast boats attempting to transport live fish to market. Its vibrant blue water and stark white peaks contrast against a yellowish-gray sky. In the distance is Mount Fuji, the highest mountain in Japan, yet it is dwarfed in perspective by the waves. In fact, the water spray from the waves appears to fall over Mount Fuji like snow. If you didn’t look closely, you might presume that Mount Fuji is just the crest of another wave.

The Great Wave is absolutely stunning. It is arguably Hokusai’s finest work. The colors and the lines reflect boldness. The claws of the wave impart vitality. The men on the boat show submission and possibly fear. The spray from the wave reveals delicacy and attention to detail. Personally, I love ukiyo-e prints like this. I travel the world to see them in person. The quality, creativity, and craftsmanship they exhibit inspire me to instill the highest quality possible into my own work.

As software quality professionals, there are several lessons we can learn from ukiyo-e masters like Hokusai. Testing is an art as much as it is engineering. We can take cues from these prolific artists in how we approach quality in our own work. In this article, I will share how we can make our own “Great Waves” using 8 software testing convictions inspired by ukiyo-e prints like The Great Wave. Let’s begin!

Conviction #1: Focus on behavior

Although we hold these Japanese woodblock prints in high regard today, they were seen as anything but fancy centuries ago in Japan. Ukiyo-e was “low” art for the common people, whereas paintings on silk scrolls were considered “high” art for the upper classes.

Folks would buy these prints from local merchants for slightly more than the cost of a bowl of noodles – about $5 to $10 in today’s US dollars – and they would use these prints to decorate their homes. By comparison, a print of The Great Wave sold at auction for $1.11 million in September 2020.

These prints weren’t very large, either. The Great Wave measures 10 inches tall by 15 inches wide, and most prints were of similar size. That made them convenient to buy at the market, carry home, and display on the wall. To understand how the Japanese people treated these prints in their day, think about the decorations in your home that you bought at stores like Home Goods and Target. You probably have some screen prints or posters on your walls.

Since the target consumers for ukiyo-e prints were ordinary people with working-class budgets, the prints needed to be affordable, popular, and recognizable. When Hokusai published The Great Wave, it wasn’t a standalone piece. It was the first print in a series named Thirty-six Views of Mount Fuji. Below are three other prints from that series. The central feature in each print is Mount Fuji, which would be instantly recognizable to any Japanese person. The various views would also be relatable.

Fine Wind, Clear Morning shows nice weather against the slopes of the mountain with a powerful contrast of colors.
Thunderstorm Beneath the Summit depicts Mount Fuji from a nearly identical profile, but with lightning striking the lower slopes of the mountain amidst a far darker palette.
Kajikazawa in Kai Province depicts two fishermen with Mount Fuji in the background.

The features of these prints made them valuable. Anyone could find a favorite print or two out of a series of 36. They made art accessible. They were inexpensive yet impressive, artsy yet approachable. Artists like Hokusai knew what people wanted, and they delivered the goods.

This isn’t any different from software development. Features add value for the users. For example, if you’re developing a banking app, folks better be able to log in securely and view their latest transactions. If those features are broken or unintuitive, folks might as well move their accounts to other banks! We, as the developers and testers, are like the ukiyo-e artists: we need to know what our customers need. We need to make products that they not only want, but they also enjoy.

Features add value. However, I would use a better word to describe this aspect of a product: behavior. Behavior is the way one acts or conducts oneself. In software, we define behaviors in terms of inputs and responses. For example, login is a behavior: you enter valid credentials, and you expect to gain access. You gave inputs, the app did something, and you got the result.

My conviction on software testing AND development is that if you focus on good software behaviors, then everything else falls into place. When you plan development work, you prioritize the most important behaviors. When you test the features, you cover the most important behaviors. When users get your new product, they gain value from those features, and hopefully you make that money, just like Hokusai did.

This is why I strongly believe in the value of Behavior-Driven Development, or BDD for short. As a set of pragmatic practices, BDD helps you and your team stay focused on the things that matter. BDD involves activities like Three Amigos collaboration, Example Mapping, and writing Gherkin. When you focus on behavior – not on shiny new tech, or story points, or some other distractions – you win big.
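
To make this concrete, here is a minimal sketch of a login behavior written as a Cucumber.js step definition, with its Gherkin scenario shown in the comment. The app helper and its methods are hypothetical stand-ins for your own automation code:

// Gherkin scenario (features/login.feature):
//   Scenario: Successful login
//     Given the login page is displayed
//     When the user enters valid credentials
//     Then the user sees their account dashboard

const { Given, When, Then } = require('@cucumber/cucumber');
const assert = require('assert');
const app = require('./support/app'); // hypothetical helper wrapping browser automation

Given('the login page is displayed', async function () {
  await app.openLoginPage();
});

When('the user enters valid credentials', async function () {
  await app.logIn('demo-user', 'demo-pass');
});

Then('the user sees their account dashboard', async function () {
  assert.ok(await app.isDashboardVisible());
});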

Conviction #2: Prioritize on risk

Ukiyo-e artists depicted more than just views of Mount Fuji. In fact, landscape scenes became popular only during the late period of woodblock printing – the 1830s to the 1860s. Before then, artists focused primarily on people: geisha, courtesans, sumo wrestlers, kabuki actors, and legendary figures. These were all characters from the “floating world,” a world of pleasure and hedonism apart from the dreary everyday life of feudal Japan.

Here is a renowned print of a kabuki actor by Sharaku, printed in 1794:

Kabuki Actor Ōtani Oniji III as Yakko Edobei in the Play The Colored Reins of a Loving Wife
Tōshūsai Sharaku, 1794

Sharaku was active only for one year, but he produced some of the most expressive portraits seen during ukiyo-e’s peak period. A yakko was a samurai’s henchman. In this portrait, we see Edobei ready for dirty deeds, with a stark grimace on his face and hands pulsing with anger.

Why would artists like Sharaku print faces like these? Because they would sell. Remember, ukiyo-e was not high-class art. It was a business. Artists would make a series of prints and sell them on the streets of Edo (now Tokyo). They needed to make prints that people wanted to buy. If they picked lousy or boring subjects, their prints wouldn’t sell. No soba noodles for them! So, what subjects did they choose? Celebrities. Actors. “Female beauties.” And some content that was not safe for work, like Hokusai’s The Dream of the Fisherman’s Wife. (Seriously, that link is not safe for work. Click it at your own risk.)

Artists prioritized their work based on business risk. They chose subjects that would be easy to sell. They pursued value. As testers, we should also prioritize test coverage based on risk.

I know there’s a popular slogan saying, “Test all the things!”, but that’s just impossible. It’s like saying, “Print all the pictures!” Modern apps are too complex to attempt any sort of “complete” or “100%” coverage. Instead, we should focus our testing efforts on the most important behaviors, the ones that would cause the most problems if they broke. Testing is ultimately a risk-mitigating activity. We do testing to mitigate the risk of problems introduced during development.

So, what does a risk-based testing strategy look like? Well, start by covering the most valuable behaviors. You can call them the MVBs. These are behaviors that are core to your app. If they break, then it’s game over. No soba noodles. For example, if you can’t log in, you’re done-zo. The MVBs should be tested before every release. They are non-negotiable test coverage. If your team doesn’t have enough resources to run these tests, then get more resources.

In addition to the MVBs, cover areas that were changed since the previous release. For example, if your banking app just added mobile deposits, then you should test mobile deposits. Things break where developers make changes. Also, look at testing different layers and aspects of the product. Not every test should be a web UI test. Add unit tests to pinpoint failures in the code. Add API tests to catch problems at the service layer. Consider aspects like security, accessibility, and visuals.

When planning these tests, try to keep them fast and atomic, covering individual behaviors instead of long workflows. Shorter tests are more reliable and leave room for more coverage. If you have resources for coverage beyond the MVBs and areas of change, keep adding coverage for the next most valuable behaviors until you either run out of time or the coverage isn’t worth the time.

Overall, ask yourself this when weighing risks: How painful would it be if a particular behavior failed? Would it ruin a user’s experience, or would they barely notice?
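
To make prioritization executable, you can tag MVB tests so they always run before a release. Here is a small sketch using Playwright Test; the @mvb tag convention is just an assumption, not a standard:

const { test, expect } = require('@playwright/test');

// Tag the most valuable behaviors so release pipelines can select them.
test('login with valid credentials @mvb', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.fill('#username', 'demo-user');
  await page.fill('#password', 'demo-pass');
  await page.click('#login-button');
  await expect(page).toHaveURL(/dashboard/);
});

A release pipeline could then run only the non-negotiable coverage with npx playwright test --grep @mvb.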

Conviction #3: Automate

The copy of The Great Wave shown at the top of this article is located at the Metropolitan Museum of Art in New York City. However, that’s not the only version. When ukiyo-e artists produced their prints, they kept printing copies until the woodblocks wore out! Remember, these weren’t precious paintings for the rich, they were posters for the commoners. One set of woodblocks could print thousands of impressions of popular designs for the masses. It’s estimated that there were five to eight thousand original impressions of The Great Wave, but nobody knows for sure. To this day, only a few hundred have survived. And much to my own frustration, museums that have copies do not put them on public display because the pieces are so fragile.

Here are different copies of The Great Wave from different museums:

Print production had to be efficient and smooth. Remember, this was a business. Publishers would make more money if they could print more impressions from the same set of woodblocks. They’d gain more renown if their prints maintained high quality throughout the lifetime of the blocks. And the faster they could get their prints to market, the sooner they could get paid and enjoy all the soba noodles.

What can we learn from this? Automate! That’s our third conviction. Automation is a force multiplier. If Hokusai spent all his time manually laboring over one copy of The Great Wave, then we probably wouldn’t be talking about it today. But because woodblock printing was a whole process, he produced thousands of copies for everyone to enjoy. I wouldn’t call the woodblock printing process fully “automated” because it had several tedious steps with manual labor, but in Edo period Japan, it was about as automated as you could get.

Compare this to testing. If we run a test manually, we cover the target behavior one time. That’s it: lots of labor for one instance. However, if we automate that test, we can run it thousands of times. It can deliver value again and again. That’s the difference between a painting and a print.

So, how should we go about test automation? First, you should define your goals. What do you hope to achieve with automation? Do you want to speed up your testing cycles? Are you looking to widen your test coverage? Perhaps you want to empower Continuous Delivery through Continuous Testing? Carefully defining your goals from the start will help you make good decisions in your test automation strategy.

When you start automating tests, treat it like full software development. You aren’t just writing a bunch of scripts, you are developing a software system. Follow recommended practices. Use design patterns. Do code reviews. Fix bugs quickly. These principles apply whether you are using coded or codeless tools.

Another trap to avoid is delaying test automation. So many times, I’ve heard teams struggle to automate their tests because they schedule automation work as their lowest priority. They wish they could develop automation, but they just never have the time. Instead, they grind through testing their MVBs manually just to get the job done. My advice is to flip that attitude right-side up. Automate first, not last. Instead of planning a few tests to automate if there’s time, plan to automate first and cover anything that couldn’t be automated with manual testing.

Furthermore, integrate automated tests into the team’s Continuous Integration system as soon as possible. Automated tests that aren’t running are dead to me. Get them running automatically in CI so they can deliver value. Running them nightly or even weekly can be a good start, as long as they run on a continuous cadence.

Finally, learn good practices. Test automation technologies are ever-evolving. It seems like new tools and frameworks hit the market all the time. If you’re new to automation or you want to catch up with the latest trends, then take time to learn. One of the best resources I can recommend is Test Automation University. TAU has about 70 courses on everything you can imagine, taught by the best instructors in the world, and it’s 100% FREE!

Now, you might be thinking, “Andy, come on, you know everything can’t be automated!” And that’s true. There are times when human intervention adds value. We see this in ukiyo-e prints, too. Here is Plum Garden at Kameido by Utagawa Hiroshige, Hokusai’s main rival. Notice the gradient colors of green and red in the background:

Plum Garden at Kameido
Utagawa Hiroshige, 1857

Printers added these gradients using a technique called bokashi, in which they would apply layers of ink to the woodblocks by hand. Sometimes, they would even paint layers directly on the prints. In these cases, the “automation” of the printing process was insufficient, and humans needed to manually intervene.

It’s always good to have humans test-drive software. Automation is great for functional verification, but it can’t validate user experience. Exploratory testing is an awesome complement to automated testing because it mitigates different risks.

Nevertheless, automation can now do things it never could before. As I said before, I work at Applitools, where we specialize in automated visual testing. Take a look at these two prints of Matsumoto Hoji’s Frog from Meika Gafu. Notice anything different between the two?

Two different versions of Matsumoto Hoji’s Frog.

If we use Visual AI to compare these two prints, it will quickly identify the main difference:

Applitools Visual AI identifying visual differences (highlighted in magenta) between two prints.

The signature block is in a different location! Minor differences like tiny pixel offsets are ignored, while major differences are highlighted. If you apply this style of visual testing to your web and mobile apps, you could catch a ton of visual bugs before they cause problems for your users. Modern test automation can do some really cool tricks!

Conviction #4: Shift left and right

Mokuhanga, or woodblock printing, was a huge process with multiple steps. Artists like Hokusai and Hiroshige did not print their artwork themselves. In fact, printing required multiple roles to be successful: a publisher, an artist, a carver, and a printer.

  1. The publisher essentially ran the process. They commissioned, financed, and distributed prints. They would even collaborate with artists on print design to keep them up with the latest trends.
  2. The artist designed the patterns for the prints. They would sketch the patterns on washi paper and give instructions to the carver and printer on how to properly produce the prints.
  3. The carver would chisel the artist’s pattern into a set of wooden printing blocks. Each layer of ink would have its own block. Carvers typically used a smooth, hard wood like cherry.
  4. The printer used the artist’s patterns and carver’s woodblocks to actually make the prints. They would coat the blocks in appropriately-colored water-based inks and then press paper onto the blocks.

Quality had to be considered at every step in the process, not just at the end. If the artist was not clear about colors, then the printer might make a mistake. If the carver cut a groove too deep, then ink might not adhere to the paper as intended. If the printer misaligned a page during printing, then they’d need to throw it away – wasting time, supplies, and woodblock life – or risk tarnishing everyone’s reputation with a misprint. Hokusai was noted for his stringent quality standards for carvers and printers.

The words of W. Edwards Deming ring true:

Inspection does not improve the quality, nor guarantee quality. Inspection is too late. The quality, good or bad, is already in the product. As Harold F. Dodge said, “You cannot inspect quality into a product.”

W. Edwards Deming

This is just like software development. We can substitute the word “testing” for “inspection” in Deming’s quote. Testers don’t exclusively “own” quality. Every role – business, development, and testing – has a responsibility for high-caliber work. If a product owner doesn’t understand what the customer needs, or a developer skips code reviews, or if a tester neglects an important feature, then software quality will suffer.

How do we engage the whole team in quality work? Shift left and right.

Most testers are probably familiar with the term shift left. It means, start doing testing work earlier in the development process. Don’t wait until developers are “done” and throw their code “over the fence” to be tested. Run tests continuously during development. Automate tests in-sprint. Adopt test-driven and behavior-driven practices. Require unit tests. Add test implementation to the “Definition of Done.”

But what about shift right? This is a newer phrase, but not necessarily a newer practice. Shift right means, continue to monitor software quality during and after releases. Build observability into apps. Monitor apps for bugs, failures, and poor performance. Do canary deployments to see how systems respond to updates. Perform chaos testing to see how resilient environments are to outages. Issue different UIs to user groups as part of A/B testing to find out what’s most effective. And feed everything you learn back into development a la “shift left.”

The DevOps Infinity Loop
(Source: https://www.atlassian.com/devops)

The famous DevOps infinity loop shows how “shift left” and “shift right” are really all part of the same flow. If you start in the middle where the paths cross, you can see arrows pointing leftward for feedback, planning, and building. Then, they push rightward with continuous integration, deployment, monitoring, and operations. We can (and should) take all the quality measures we said before as we spin through this loop perpetually. When we plan, we should build quality in with good design and feedback from the field. When we develop, we should do testing together with coding. As we deploy, automated safety checks should give thumbs-up or thumbs-down. Post-deployment, we continue to watch, learn, and adjust.

Conviction #5: Give fast feedback

The acronym CI/CD is ubiquitous in our industry, but I feel like it’s missing something important: “CT”, or Continuous Testing. CI and CD are great for pushing code fast, but without testing, they could be pushing garbage. Testing does not improve quality directly, but continuously revealing quality helps teams find and resolve issues fast because it demands a response. Continuous Testing keeps the DevOps infinity loop safe.

Fast feedback is critical. The sooner and faster teams discover problems, the less pain those problems will cause. Think about it: if a developer is notified that their code change caused a failure within a minute, they can immediately flip back to their code, which is probably still open in an editor. If they find out within an hour, they’ll still have their code fresh in their mind. Within a day, it’ll still be familiar. A week or more later? Fuggedaboutit! Heaven forbid the problem goes undetected until a customer hits it.

Continuous testing enables fast feedback. Automation enables continuous testing. Test automation that isn’t running continuously is worthless because it provides no feedback.

Japanese woodblock printers also relied on fast feedback. If they noticed anything wrong with the prints as they pressed them, they could scrap the misprint and move on. However, since they were meticulous about quality, misprints were rare. Nevertheless, each print was unique because each impression was done manually. The amount, placement, and hue of ink could vary slightly from print to print. Over time, the woodblocks themselves wore down, too.

Here, you can see differences in the title cartouche between different prints of The Great Wave:

Differences in the title cartouche between two prints of The Great Wave.
(Source: https://blog.britishmuseum.org/the-great-wave-spot-the-difference/)

On the left, the outline around the title is solid, whereas on the right, the outline has breaks. This is because the keyblock had very fine ridges for printing outlines, which suffered the most from wear and tear during repeated impressions. Furthermore, if you look very closely, you can see that the Japanese characters appear bolder on the right than the left. The printer must have used more ink or pressed the title harder for the impression on the right.

Printers would need to spot these issues quickly so they could either correct their action for future prints or warn the publisher that the woodblocks were wearing down. If the print was popular, the publisher could commission a carver to carve new woodblocks to keep production going.

Conviction #6: Go lean

As I’ve said many times now, woodblock printing was a business. Ukiyo-e was commercial art, and competition was fierce. By the 1840s, production peaked with about 250 different publishers. Artists like Hokusai and Hiroshige were rivals. While today we recognize famous prints like The Great Wave, countless other prints were also made.

Publishers competed in a rat race for the best talent and the best prints. They had to be savvy. They had to build good reputations. They needed to respond to market demands for subject material. For example, Kitagawa Utamaro was famous for prints of “female beauties.”

Two Beauties with Bamboo
Kitagawa Utamaro, 1795

Ukiyo-e artists also took inspiration from each other. If one artist made a popular design, then other artists would copy their style. Here is a print from Hiroshige’s series, Thirty-six Views of Mount Fuji. That’s right, Hokusai’s biggest rival made his own series of 36 prints about Mount Fuji, and he also made his own version of The Great Wave. If you can’t beat ‘em, join ‘em!

The Sea off Satta in Suruga Province
Utagawa Hiroshige, 1858

Publishers also had to innovate. Oftentimes, after a print had been in production for a while, they would instruct the printer to change the color scheme. Here are two versions of Hokusai’s Kajikazawa in Kai Province, from Thirty-six Views of Mount Fuji:

The print on the left is an early impression. The only colors used were shades of blue. This was Hokusai’s original artistic intention. However, later prints, like the one on the right, added different colors to the palette. The fishermen now wear red coats. The land has a bokashi green-yellow gradient. The sky incorporates orange tones to contrast the blue. Publishers changed up the colors to squeeze more money out of existing designs without needing to pay artists for new work or carvers for new woodblocks.

However, sometimes when doing this, artistic quality was lost. Compare the fine detail in the land between these two prints. In the early impression, you can see dark blue shading used to pronounce the shadows on the side of the rocks, giving them height and depth, and making the fisherman appear high above the water. However, in the later impression, the green strip of land has almost no shading, making it appear flat and less prominent.

Ukiyo-e publishers would have completely agreed with today’s lean business model. Seek first and foremost to deliver value to your customers. Learn what they want. Try some designs, and if they fail, pivot to something else. When you find what works, get a full end-to-end process in place, and then continuously improve as you go. Respond quickly to changes.

Going lean is very important for software testing, too. Testing is engineering, and it has serious business value. At the same time, testing activities never seem to have as many resources as they should. Testers must be scrappy to deliver valuable quality feedback using the resources they have.

When I think about software testing going lean, I’m not implying that testers should skip tests or skimp on coverage. Rather, I’m saying that world-class systems and processes cannot be built overnight. The most important thing a team can do is build basic end-to-end feedback loops from the start, especially for test automation.

The Quality Feedback Loop

So many times, I’ve seen teams skew their test automation strategy entirely towards implementation. They spend weeks and weeks developing suites of automated tests before they set up any form of Continuous Testing. Instead of triggering tests as part of Continuous Integration, folks must manually push buttons or run commands to make them start. Other folks on the team see results sporadically, if ever. When testers open bug reports, developers might feel surprised.

I recommend teams set up Continuous Testing with feedback loops from the start. As soon as you automate your first test, move on to running it from CI with notifications of results before automating your second test. Close the feedback loop. Start delivering results immediately. As you find hotspots, add more coverage. Talk with developers about the kinds of results they find most valuable. Then, grow your suite once you demonstrate its value. Increase the throughput. Turn those sidewalks into highways. Continue to iteratively improve upon the system as you go. Don’t waste time on tests that don’t matter or dashboards that nobody reads. Going lean means allocating your resources to the most valuable activities. What you’ll find is that success will snowball!

Conviction #7: Open up

Once you have a good thing going, whether it’s woodblock printing or software testing, how can you take it to the next level? Open up! Innovation stalls when you end up staring at your own belly button for too long. Outside influences inspire new creativity.

Ukiyo-e prints had a profound impact on Western art. After Japan opened up to the rest of the world in the mid-1800s, Europeans became fascinated by Japanese art, and European artists began incorporating Japanese styles and subjects into their work. This phenomenon became known as Japonisme. Here, Claude Monet, famous for his impressionist paintings, painted a picture of his wife wearing a kimono with fans adorning the wall behind her:

La Japonaise
Claude Monet, 1876

Vincent van Gogh in particular loved Japanese woodblock prints. He painted his own versions of different prints. Here, we see Hiroshige’s Plum Garden at Kameido side-by-side with Van Gogh’s Flowering Plum Orchard (after Hiroshige):

Van Gogh was drawn to the bold lines and vibrant colors of ukiyo-e prints. There is even speculation that The Great Wave inspired the design of The Starry Night, arguably Van Gogh’s most famous painting:

Notice how the shapes of the waves mirror the shapes of the swirls in the sky. Notice also how deep shades of blue contrast yellows in each. Ukiyo-e prints served as great inspiration for what became known as Modern art in the West.

Influence was also bidirectional. Not only did Japan influence the West, but the West influenced Japan! One thing common to all of the prints in Thirty-six Views of Mount Fuji is the extensive use of blue ink. Prussian blue pigment had recently come to Japan from Europe, and Hokusai’s publisher wanted to make extensive use of the new color to make the prints stand out. Indeed, they did. To this day, Hokusai is renowned for popularizing the deep shades of Prussian blue in ukiyo-e prints.

It’s important in any line of work to be open to new ideas. If Hokusai had not been willing to experiment with new pigments, then we wouldn’t have pieces like The Great Wave.

That’s why I’m a huge proponent of Open Testing. What if we open our tests like we open our source? There are so many great advantages to open source software: helping folks learn, helping folks develop better software, and helping folks become better maintainers. If we become more open in our testing, we can improve the quality of our testing work, and thus also the quality of the software products we are building. Open testing involves many things: building open source test frameworks, getting developers involved in testing, and even publicly sharing test cases and results.

Conviction #8: Show empathy

In this article, we’ve seen lots of great artwork, and we’ve learned lots of valuable lessons from it. I think ukiyo-e prints remain popular today because their subject matter focuses on the beauty of the world. Artists strived to make pieces of the “floating world” tangible for the common people.

Ukiyo-e prints revealed the supple humanity of the Japanese people, like in this print by Utagawa Kunisada:

Twilight Snowfall at Ueno
Utagawa Kunisada, 1850

They revealed the serene beauty of nature in harmony with civilization, like in these prints from Hiroshige’s One Hundred Famous Views of Edo:

Prints from One Hundred Famous Views of Edo
Utagawa Hiroshige, 1856-1858

Ukiyo-e prints also revealed ordinary people living out their lives, like this print from Hokusai’s Thirty-six Views of Mount Fuji:

Fuji View Field in Owari Province
Katsushika Hokusai, 1830

Art is compelling. And software, like art, is meant for people. Show empathy. Care about your customers. Remember, as a tester, you are advocating for your users. Try to help solve their problems. Do things that matter for them. Build things that actually bring them value. Be thoughtful, mindful, and humble. Don’t be a jerk.

The Golden Conviction

These eight convictions are things I’ve learned the hard way throughout my career:

  1. Focus on behavior
  2. Prioritize on risk
  3. Automate
  4. Shift left and right
  5. Give fast feedback
  6. Go lean
  7. Open up
  8. Show empathy

I live and breathe these convictions every day. Whether you are making woodblock prints or running test cases, these principles can help you do your best work.

If I could sum up these eight convictions in one line, it would be this: Be excellent in all things. If you test software, then you are both an artist and an engineer. You have a craft. Do it with excellence.

1974 VW Karmann Ghia Convertible

7 Major Trends in Front End Web Testing

This article is based on my opening keynote address for Front End Test Fest 2022.

In the featured image for this article, you see a beautiful front end. It’s probably not the kind of “front end” you expected. It’s the front end of a 1974 Volkswagen Karmann Ghia. The Karmann Ghia was known as the “poor man’s Porsche.” It’s a very special car. It was actually a collaboration project between Wilhelm Karmann, a German automobile manufacturer, and Carrozzeria Ghia, an Italian automobile designer. Ghia designed the body as a work of art, and Karmann put it on the tried-and-true platform of the classic Volkswagen Beetle. When the Volkswagen executives saw it, they couldn’t say no to mass production.

The Karmann Ghia is a perfect symbol of the state of web development today. We strive to make beautiful front ends with reliable platforms supporting them on the back end. Collaboration from both sides is key to success, but what people remember most is the experience they have with your apps. My mom drove a Karmann Ghia like this when she was a teenager, and to this day she still talks about the good times she had with it.

Good quality, design, and experience are indispensable aspects of front ends – whether for classic cars or for the Web. In this article, I’ll share seven major trends I see in front end web testing. While there’s a lot of cool new things happening, I want y’all to keep in mind one main thing: tools and technologies may change, but the fundamentals of testing remain the same. Testing is interaction plus verification. Tests reveal the truth about our code and our features. We do testing as part of development to gather fast feedback for fixes and improvements. All the trends I will share today are rooted in these principles. With good testing, you can make sure your apps will look visually perfect, just like… you know.

#1. End-to-end testing

Here’s our first trend: End-to-end testing has become a three-way battle. For clarity, when I say “end-to-end” testing, I mean black-box test automation that interacts with a live web app in an active browser.

Historically, Selenium has been the most popular tool for browser automation. The project has been around for over a decade, and the WebDriver protocol is a W3C standard. It is open source, open standards, and open governance. Selenium WebDriver has bindings for C#, Java, JavaScript, Ruby, PHP, and Python. The project also includes Selenium IDE, a record-and-playback tool, and Selenium Grid, a scalable cluster for cross-browser testing. Selenium is alive and well, having just released version 4.

Over the years, though, Selenium has received a lot of criticism. Selenium WebDriver is a low-level protocol. It does not handle waiting automatically, leading many folks to unknowingly write flaky scripts. It requires clunky setup since WebDriver executables must be separately installed. Many developers dislike Selenium because coding with it requires a separate workflow or state of mind from the main apps they are developing.

Cypress was the answer to Selenium’s shortcomings. It aimed to be a modern framework with excellent developer experience, and in a few short years, it quickly became the darling test tool for front end developers. Cypress tests run in the browser side-by-side with the app under test. The syntax is super concise. There’s automatic waiting, meaning less flakiness. There’s visual tracing. There’s API calls. It’s nice. And it took a big chomp out of Selenium’s market share.

Cypress isn’t perfect, though. Its browser support is limited to Chromium-based browsers and Firefox. Cypress is also JavaScript-only, which excludes several communities. While Cypress is open source, it does not follow open standards or open governance like Selenium. And, sadly, Cypress’ performance is slow – equivalent tests run slower than Selenium.

Enter Playwright, the new open source test framework from Microsoft. Playwright is the spiritual successor to Puppeteer. It boasts the wide browser and language compatibility of Selenium with the refined developer experience of Cypress. It even has a code generator to help write tests. Plus, Playwright is fast – multiple times faster than Selenium or Cypress.

Playwright is still a newcomer, and it doesn’t yet have the footprint of the other tools. Some folks might be cautious that it uses browser projects instead of stock browsers. Nevertheless, it’s growing fast, and it could be a major contender for the #1 title. In Applitools’ recent Let The Code Speak code battles, Playwright handily beat out both Selenium and Cypress.

A side-by-side comparison of Selenium, Cypress, and Playwright

Selenium, Cypress, and Playwright are definitely now the “big three” browser automation tools for testing. A respectable fourth mention would be WebdriverIO. WebdriverIO is a JavaScript-based tool that can use WebDriver or debug protocols. It has a very large user base, but it is JavaScript-only, and it is not as big as Cypress. There are other tools, too. Puppeteer is still very popular but used more for web crawling than testing. Protractor, once developed by the Angular team, is now deprecated.

All these are good tools to choose (except Protractor). They can handle any kind of web app that you’re building. If you want to learn more about them, Test Automation University has courses for each.
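
If you are curious how these frameworks feel in practice, here is a minimal Playwright test in JavaScript; the site and selectors are hypothetical:

const { test, expect } = require('@playwright/test');

test('user can search the docs', async ({ page }) => {
  // Navigate to the app under test.
  await page.goto('https://example.com');
  // Interact with the page like a real user would.
  await page.fill('#search-input', 'component testing');
  await page.click('#search-button');
  // Verify the outcome: testing is interaction plus verification.
  await expect(page.locator('.search-result').first()).toBeVisible();
});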

#2. Component testing

End-to-end testing isn’t the only type of testing a team can or should do. Component testing is on the rise because components are on the rise! Many teams now build shareable component libraries to enforce consistency in their web design and to avoid code duplication. Each component is like a “unit of user interface.” Not only do they make development easier, they also make testing easier.

Component testing is distinct from unit testing. A unit test interacts directly with code. It calls a function or method and verifies its outcomes. Since components are inherently visual, they need to be rendered in the browser for proper testing. They might have multiple behaviors, or they may even trigger API calls. However, they can be tested in isolation from other components, so individually, they don’t need full end-to-end tests. That’s why, from a front end perspective, component testing is the new integration testing.

Storybook is a very popular tool for building and testing components in isolation. In Storybook, each component has a set of stories that denote how that component looks and behaves. While developing components, you can render them in the Storybook viewer. You can then manually test a component by interacting with it or changing its settings. Applitools also provides an SDK for automatically running visual tests against a Storybook library.

The Storybook viewer

Cypress is also entering the component testing game. On June 1, 2022, Cypress released version 10, which included component testing support. This is a huge step forward. Before, folks would need to cobble together their own component test framework, usually as an extension of a unit test project or an end-to-end test project. Many solutions just ran automated component tests purely as Node.js processes without any browser component. Now, Cypress makes it natural to exercise component behaviors individually yet visually.

I love this quote from Cypress about their approach to component testing:

When testing anything for the web, we believe that tests should view and interact with the application in the same way that an actual user does. Anything less, and it’s hard to have confidence that your application is doing what it is supposed to.

https://www.cypress.io/blog/2022/06/01/cypress-10-release/

This quote hits on something big. So many automated tests fail to interact with apps like real users. They hinge on things like IDs, CSS selectors, and XPaths. They make minimal checks like appearance of certain elements or text. Pages could be completely broken, but automated tests could still pass.
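
Here is a rough sketch of what a Cypress component test can look like for a React component, assuming a hypothetical Button component and the cy.mount command that Cypress scaffolding sets up:

import React from 'react';
import Button from './Button';

describe('<Button />', () => {
  it('renders its label and handles clicks', () => {
    const onClick = cy.stub().as('onClick');
    // Mount the component in isolation, rendered in a real browser.
    cy.mount(<Button onClick={onClick}>Click Me</Button>);
    // Interact with the component the way a user would.
    cy.contains('button', 'Click Me').click();
    cy.get('@onClick').should('have.been.calledOnce');
  });
});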

#3. Visual testing

We really want the best of both worlds: the simplicity and sensibility of manual testing with the speed and scalability of automated testing. Historically, this has been a painful tradeoff. Most teams struggle to decide what to automate, what to check manually, and what to skip. I think there is tremendous opportunity in bridging the gap. Modern tools should help us automate human-like sensibilities into our tests, not merely fire events on a page.

That’s why visual testing has become indispensable for front end testing. Web apps are visual encounters. Visuals are the DNA of user experience. Functionality alone is insufficient. Users expect to be wowed. As app creators, we need to make sure those vital visuals are tested. Heaven forbid a button goes missing or our CSS goes sideways. And since we live in a world of continuous development and delivery, we need those visual checkpoints happening continuously at scale. Real human eyes are just too slow.

For example, I could have a login page that has an original version (left) and a changed version (right):

Visual comparison between versions of a login page

Visual testing tools alert you to meaningful changes and make it easy to compare them side-by-side. They catch things you might miss. Plus, they run just like any other automated test suite. Visual testing was tough in the past because tools merely did pixel-to-pixel comparisons, which generated lots of noise for small changes and environmental differences. Now, with a tool like Applitools Visual AI, visual comparisons accurately pinpoint the changes that matter.
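
As a hedged sketch, here is the general shape of a visual check using the Applitools Eyes SDK for Playwright; assume the @applitools/eyes-playwright package and a Playwright page object:

const { Eyes, Target } = require('@applitools/eyes-playwright');

async function checkLoginPage(page) {
  const eyes = new Eyes();
  // Open a visual test for this app and test name.
  await eyes.open(page, 'Demo Bank', 'Login Page');
  // Capture a full-page snapshot; Visual AI compares it to the baseline.
  await eyes.check('Login window', Target.window().fully());
  // Close the test; meaningful differences are reported as failures.
  await eyes.close();
}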

Test automation needs to check visuals these days. Traditional scripts interact with only the basic bones of the page. You could break the layout and remove all styling like this, and there’s a good chance a traditional automated test would still pass:

The same login page from before, but without any CSS styling

With visual testing techniques, you can also rethink how you approach cross-browser and cross-device testing. Instead of rerunning full tests against every browser configuration you need, you can run them once and then simply re-render the visual snapshots they capture against different browsers to verify the visuals. You can do this even for browsers that the test framework doesn’t natively support! For example, using a platform like Applitools Ultrafast Test Cloud, you could run Cypress tests against Electron in CI and then perform visual checks in the Cloud against Safari and Internet Explorer, among other browsers. This style of cross-platform testing is faster, more reliable, and less expensive than traditional ways.

#4. Performance testing

Functionality isn’t the only aspect of quality that matters. Performance can make or break user experience. Most people expect any given page to load in a second or two. Back in 2016, Google discovered that half of all people leave a site if it takes longer than 3 seconds to load. As an industry, we’ve put in so much work to make the front end faster. Modern techniques like server-side rendering, hydration, and bloat reduction all aim to improve response times. It’s important to test the performance of our pages to make sure the user experience is tight.

Thankfully, performance testing is easier than ever before. There’s no excuse for not testing performance when it is so vital to success. There are many great ways to get started.

The simplest approach is right in your browser. You can profile any site with Chrome DevTools. Just right click the page, select “Inspect,” and switch to the Performance tab. Then start the profiler and start interacting with the page. Chrome DevTools will capture full metrics as a visual time series so you can explore exactly what happens as you interact with the page. You can also flip over to the Network tab to look for any API calls that take too long. If you want to learn more about this type of performance analysis, Test Automation University offers a course entitled Tools and Techniques for Performance and Load Testing by Amber Race. Amber shows how to get the most value out of that Performance tab.

Chrome DevTools Performance tab

Another nifty tool that’s also available in Chrome DevTools is Google Lighthouse. Lighthouse is a website auditor. It scores how well your site performs for performance, accessibility, progressive web apps, SEO, and more. It will also provide recommendations for how to improve your scores right within its reports. You can run Lighthouse from the command line or as a Node module instead of from Chrome DevTools as well.

Google Lighthouse from Chrome DevTools
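
Lighthouse can also run programmatically as a Node module, which makes it easy to automate audits. Here is an illustrative sketch assuming the lighthouse and chrome-launcher packages:

const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  // Launch headless Chrome for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = { output: 'html', onlyCategories: ['performance'], port: chrome.port };
  // Audit the page and save the HTML report.
  const result = await lighthouse('https://example.com', options);
  fs.writeFileSync('lighthouse-report.html', result.report);
  console.log('Performance score:', result.lhr.categories.performance.score * 100);
  await chrome.kill();
})();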

Using Chrome DevTools manually for one-off checks or exploratory testing is helpful, but regular testing needs automation. One really cool way to automate performance checks is using Playwright, the end-to-end test framework I mentioned earlier. In Playwright, you can create a Chrome DevTools Protocol session and gather all the metrics you want. You can do other cool things with profiling and interception. It’s like a backdoor into the browser. Best of all, you could gather these metrics together with functional testing! One framework can meet the needs of both functional and performance test automation.

John Hill is a trailblazer in this space. He’s currently doing this as part of the Open MCT project. He’s the one who showed me how to automate performance tests with Playwright! If you want to learn more, check out this talk he gave recently on performance testing with Playwright, as well as his js-perf-toolkit project on GitHub.

Below is an example snippet I copied from js-perf-toolkit showing how to gather performance metrics using Playwright:

// Create a Chrome DevTools Protocol session and enable performance metrics.
const client = await page.context().newCDPSession(page);
await client.send('Performance.enable');

// Interact with the page like a real user.
await page.goto('https://www.google.com/');
await page.click('[aria-label="Search"]');
await page.fill('[aria-label="Search"]', 'playwright');

// Submit the search and wait for the results page to load.
await Promise.all([
    page.waitForNavigation(),
    page.press('[aria-label="Search"]', 'Enter')
]);

// Gather the performance metrics captured by the CDP session.
let perfMetrics = await client.send('Performance.getMetrics');
console.log(perfMetrics.metrics);

#5. Machine learning models

There’s another curve ball when testing websites: what about machine learning models? Whenever you shop at an online store, the bottom of almost every product page has a list of recommendations for similar or complementary products. For example, when I searched Amazon for the latest Pokémon video game, Amazon recommended other games and toys:

Recommendation systems like this might be hard-coded for small stores, but large retailers like Amazon and Walmart use machine learning models to back up their recommendations. Models like this are notoriously difficult to test. How do we know if a recommendation is “good” or “bad”? How do I know if folks who like Pokémon would be enticed to buy a Kirby game or a Zelda game? Lousy recommendations are a lost business opportunity. Other models could have more serious consequences, like introducing harmful biases that affect users.

Machine learning models need separate approaches to testing. It might be tempting to skip data validation because it’s harder than basic functional testing, but that’s a risk not worth taking. To do testing right, separate the functional correctness of the frontend from the validity of data given to it. For example, we could provide mocked data for product recommendations so that tests would have consistent outcomes for verifying visuals. Then, we could test the recommendation system apart from the UI to make sure its answers seem correct. Separating these testing concerns makes each type of test more helpful in figuring out bugs. It also makes machine learning models faster to test, since testers or scripts don’t need to navigate a UI just to exercise them.
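
Here is a hedged sketch of that separation using Playwright’s network interception to mock the recommendations API; the route and payload are hypothetical:

const { test, expect } = require('@playwright/test');

test('product page renders recommendations', async ({ page }) => {
  // Mock the recommendations API so the UI test gets consistent data.
  await page.route('**/api/recommendations*', (route) =>
    route.fulfill({
      contentType: 'application/json',
      body: JSON.stringify([
        { id: 1, name: 'Kirby and the Forgotten Land' },
        { id: 2, name: 'The Legend of Zelda: Breath of the Wild' },
      ]),
    })
  );
  await page.goto('https://example.com/products/pokemon-legends');
  // Verify the UI renders whatever data it receives, independent of the model.
  await expect(page.locator('.recommendation')).toHaveCount(2);
});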

If you want to learn more about testing machine learning models, Carlos Kidman created an excellent course all about it on Test Automation University named Intro to Testing Machine Learning Models. In his course, Carlos shows how to test models for adversarial attacks, behavioral aspects, and unfair biases.

#6. JavaScript

Now, the next trend I see will probably be controversial to many of you out there: JavaScript isn’t everything. Historically, JavaScript has been the only language for front end web development. As a result, a JavaScript monoculture has developed around the front end ecosystem. There’s nothing inherently wrong with that, but I see that changing in the coming years – and I don’t mean TypeScript.

In recent years, frustrations with single-page applications (SPAs) and client-heavy front ends have spurred a server-side renaissance. In addition to JavaScript frameworks that support SSR, classic server-side projects like Django, Rails, and Laravel are alive and kicking. Folks in those communities do JavaScript when they must, but they love exploring alternatives. For example, HTMX is a framework that provides hypertext directives for many dynamic actions that would otherwise be coded directly in JavaScript. I could use any of those classic web frameworks with HTMX and almost completely avoid JavaScript code. That makes it easier for programmers to make cool things happen on the front end without needing to navigate a foreign ecosystem.

Below is an example snippet of HTML code with HTMX attributes for posting a click and showing the response:

  <script src="https://unpkg.com/htmx.org@1.7.0"></script>
  <!-- have a button POST a click via AJAX -->
  <button hx-post="/clicked" hx-swap="outerHTML">
    Click Me
  </button>

WebAssembly, or “Wasm,” is also here. WebAssembly is essentially an assembly language for browsers. Code written in higher-level languages can be compiled down into WebAssembly and run in the browser. All major browsers now support WebAssembly to some degree. That means JavaScript no longer holds a monopoly on the browser.

I don’t know if any language will ever dethrone JavaScript in the browser, but I predict that browsers will become multilingual platforms through WebAssembly in the coming years. For example, at PyCon 2022, Anaconda announced PyScript, a framework for running Python code in the browser. Blazor enables C# code to run in-browser. Emscripten compiles C/C++ programs to WebAssembly. Other languages like Ruby and Rust also have WebAssembly support.

Regardless of what happens inside the browser, black-box testing tools and frameworks outside the browser can use any language. Tools like Playwright and Selenium support languages other than JavaScript. That brings many more people to the table. Testers shouldn’t be forced to learn JavaScript just to automate some tests when they already know another language. This is happening today, and I don’t expect it to change.

#7. Autonomous testing

Finally, there is one more trend I want to share, and this one is more about the future than the present: autonomous testing is coming. Ironically, today’s automated testing is still manually-intensive. Someone needs to figure out features, write down the test steps, develop the scripts, and maintain them when they inevitably break. Visual testing makes verification autonomous because assertions don’t need explicit code, but figuring out the right interactions to exercise features is still a hard problem.

I think the next big advancement for testing and automation will be autonomous testing: tools that autonomously look at an app, figure out what tests should be run, and then run those tests automatically. The key to making this work will be machine learning algorithms that can learn the context of the apps they target for testing. Human testers will need to work together with these tools to make them truly effective. For example, one type of tool could be a test recommendation engine that proposes tests for an app, and the human tester could pick the ones to run.

Autonomous testing will greatly simplify testing. It will make developers and testers far more productive. As an industry, we aren’t there yet, but it’s coming, and I think it’s coming soon. I delivered a keynote address on this topic at Future of Testing: Frameworks 2022:

Conclusion

There’s lots of exciting stuff happening in the world of the front end. As I said before, tools and technologies may change, but fundamentals remain the same. Each of these trends is rooted in tried-and-true principles of testing. They remind us that software quality is a multifaceted challenge, and the best strategy is the one that provides the most value for your project.

So, what do you think? Did I hit all the major front end trends? Did I miss anything? Let me know in the comments!

Boa Constrictor’s Awesome Hacktoberfest 2021

Boa Constrictor is the .NET Screenplay Pattern. It helps you make better interactions for better automation! Its primary use case is Web UI and REST API test automation, but it can be used to automate any kind of interactions. The Screenplay Pattern is much more scalable for development and execution than the Page Object Model.

The Boa Constrictor maintainers and I strongly support open source software. That’s why we participated in Hacktoberfest 2021. In fact, this was the second Hacktoberfest we did. We launched Boa Constrictor as an open source project a year ago during Hacktoberfest 2020! We love sharing our code with the community and inspiring others to get involved. To encourage participation this year, we added the “hacktoberfest” label to open issues, and we offered cool stickers to anyone who contributed.

Boa Constrictor sticker medallion: “Boa Constrictor: The .NET Screenplay Pattern”

Hacktoberfest 2021 was a tremendous success for Boa Constrictor. Even though the project is small, we received several contributions. Here’s a summary of all the new stuff we added to Boa Constrictor:

  • Updated WebDriver interactions to use Selenium WebDriver 4.0
  • Implemented asynchronous programming for Tasks and Questions
  • Extended the Wait Task to wait for multiple Questions using AND and OR logic
  • Standardized ToString methods for all WebDriver interactions
  • Automated unit tests for WebDriver Questions
  • Wrote new user guides for test framework integrations and interaction patterns
  • Made small refinements to the doc site
  • Created GitHub templates for issues and pull requests
  • Replaced the symbols NuGet package with embedded debugging
  • Added the README to the NuGet package
  • Added Shields to the README
  • Restructured projects for docs, logos, and talk

During Hacktoberfest 2021, we made a series of four releases because we believe in lean development that puts new features in the hands of developers ASAP. The final capstone release was version 2.0.0: a culmination of all Hacktoberfest work! Here’s a view of the Boa Constrictor NuGet package with its new README (Shields included):

The Boa Constrictor NuGet package with the new README and Shields

If you like project stats, then here’s a breakdown of the contributions by numbers:

  • 11 total contributors (5 submitting more than one pull request)
  • 41 pull requests closed
  • 151 commits made
  • Over 10K new lines of code

GitHub’s Code Frequency graph for Boa Constrictor shown below illustrates how much activity the project had during Hacktoberfest 2021. Notice the huge green and red spikes on the right side of the chart corresponding to the month of October 2021. That’s a lot of activity!

The GitHub Code Frequency Graph for Boa Constrictor

Furthermore, every member of my Test Engineering & Architecture (TEA) team at Q2 completed four pull requests for Hacktoberfest, thus earning our prizes and our bragging rights. For the three others on the team, this was their first Hacktoberfest, and Boa Constrictor was their first open source project. We all joined together to make Boa Constrictor better for everyone. I’m very proud of each of them individually and of our team as a whole.

Personally, I gained more experience as an open source project maintainer. I brainstormed ideas with my team, assigned work to volunteers, and provided reviews for pull requests. I also had to handle slightly awkward situations, like politely turning down pull requests that could not be accepted. Thankfully, the project had very little spam, but we did have many potential contributors request to work on issues but then essentially disappear after being assigned. That made me appreciate the folks who did complete their pull requests even more.

Overall, Hacktoberfest 2021 was a great success for Boa Constrictor. We added several new features, docs, and quality-of-life improvements to the project. We also got people excited about open source contributions. Many thanks to Digital Ocean, Appwrite, Intel, and DeepSource for sponsoring Hacktoberfest 2021. Also, special thanks to Digital Ocean for featuring Boa Constrictor in their Hacktoberfest kickoff event. Keep on hacking!

Testing GitHub Pages without Local Jekyll Setup

TL;DR: If you want to test your full GitHub Pages site before publishing but don’t want to set up Ruby and Jekyll on your local machine, then:

  1. Commit your doc changes to a new branch.
  2. Push the new branch to GitHub.
  3. Temporarily change the repository’s GitHub Pages publishing source to the new branch.
  4. Reload the GitHub Pages site, and review the changes.
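For reference, the first two steps are ordinary git commands. Here’s a minimal sketch, assuming your docs live in the /docs folder and using docs/test purely as an example branch name:

$ git checkout -b docs/test
$ git add docs/
$ git commit -m "Update docs"
$ git push -u origin docs/test

Steps 3 and 4 happen in the GitHub web UI, as described below.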

If you have a GitHub repository, did you know that you can create your own documentation site for it within GitHub? Using GitHub Pages, you can write your docs as a set of Markdown pages, and GitHub will generate and publish a static website from them. All you need to do is configure a publishing source for your repository. Your doc site will go live at:

https://<user>.github.io/<repository>

If this is new to you, then you can learn all about it from the GitHub docs here: Working with GitHub Pages. I only recently discovered the feature myself!

GitHub Pages are great because they make it easy to develop docs and code together as part of the same workflow without needing extra tools. Docs can be written as Markdown files, Liquid templates, or raw assets like HTML and CSS. The docs will be version-controlled for safety and shared from a single source of truth. GitHub Pages also provides free hosting with a decent domain name for the doc site. Clearly, the theme is simplicity.

Unfortunately, I hit one challenge while trying GitHub Pages for the first time: How could I test the doc site before publishing it? A repository using GitHub Pages must be configured with a specific branch and folder (/ (root) or /docs) as the publishing source. As soon as changes are committed to that source, the updated pages go live. However, I want a way to view the doc site in its fullness before committing any changes so I don’t accidentally publish any mistakes.

One way to test pages is to use a Markdown editor. Many IDEs have Markdown editors with preview panes. Even GitHub’s web editor lets you preview Markdown before committing it. Unfortunately, while editor previews may help catch a few typos, they won’t test the full end result of static site generation and deployment. They may also have trouble with links or templates.

GitHub’s docs recommend testing your site locally using Jekyll. Jekyll is a static site generator written in Ruby. GitHub Pages uses Jekyll behind the scenes to turn doc pages into full doc sites. If you want to keep your doc development simple, you can just edit Markdown files and let GitHub do the dirty work. However, if you want to do more hands-on things with your docs like testing site generation, then you need to set up Ruby and Jekyll on your local machine. Thankfully, you don’t need to know any Ruby programming to use Jekyll.

I followed GitHub’s instructions for setting up a GitHub Pages site with Jekyll. I installed Ruby and Jekyll and then created a Jekyll site in the /docs folder of my repository. I verified that I could edit and run my site locally in a branch. However, the setup process felt rather hefty. I’m not a Ruby programmer, so setting up a Ruby environment with a few gems felt like a lot of extra work just to verify that my doc pages looked okay. Plus, I could foresee some developers getting stuck while trying to set up these doc tools, especially if the repository’s main code isn’t a Ruby project. Even if setting up Jekyll locally would be the “right” way to develop and test docs, I still wanted a lighter, faster alternative.

Thankfully, I found a workaround that didn’t require any tools outside of GitHub: Commit doc changes to a branch, push the branch to GitHub, and then temporarily change the repository’s GitHub Pages source to the branch! I originally configured my repository to publish docs from the /docs folder in the main branch. When I changed the publishing source to another branch, it regenerated and refreshed the GitHub Pages site. When I changed it back to main, the site reverted without any issues. Eureka! This is a quick, easy hack for testing changes to docs before merging them. You get to try the full site in the main environment without needing any additional tools or setup.

Above is a screenshot of the GitHub Pages settings for one of my repositories. You can find these settings under Settings -> Options for any repository, as long as you have the administrative rights. In this screenshot, you can see how I changed the publishing source’s branch from main to docs/test. As soon as I selected this change, GitHub Pages republished the repository’s doc site.

Now, I recognize that this solution is truly a hack. Changing the publishing source affects the “live”, “production” version of the site. It effectively does publish the changes, albeit temporarily. If some random reader happens to visit the site during this type of testing, they may see incorrect or even broken pages. I’d recommend changing the publishing source’s branch only for small projects and for short periods of time. Don’t forget to revert the branch once testing is complete, too. If you are working on a larger, more serious project, then I’d recommend doing full setup for local doc development. Local setup would be safer and would probably make it easier to try more advanced tricks, like templates and themes.

How Python Decorators Function

Have you ever seen those “@” tags on top of Python functions and classes? Those are decorators – functions that wrap around other functions. Confusing? At first, but they’re easy with practice. Useful? Very!

The Talk

I delivered a talk on decorators at PyGotham TV 2020, PyCon India 2020, PyTexas 2020, and DevNation Day 2021. Here’s the recording from PyTexas:

And here’s the recording from PyCon India:

Of course, I mentioned testing in this talk, too:

“Is it even a Pandy Knight talk if testing is not talked about?”

And here’s the Conf42: Python 2022 recording:

The Transcript

[Camera]

Hello, PyTexas 2020! It’s Pandy Knight here. I’m the Automation Panda, and I’m a big Python fan, just like y’all.

Have you ever seen those “@” tags on top of Python functions? Maybe you’ve seen them on top of methods and classes, too. Those are decorators, one of Python’s niftiest language features. Decorators are essentially wrappers – they wrap additional code around existing definitions. When used right, they can clean up your code better than OxiClean! Let’s learn how to use them.

[Slide]

So, here’s a regular old “hello world” function. When we run it, …

[Slide]

…It prints “Hello World!” Nothing fancy here.

[Slide]

Now, let’s take that function…

[Slide]

…And BAM! Add a decorator. Using this “@” sign, we just added a decorator named “tracer” to “hello_world”. So, what is this decorator?

[Slide]

“Tracer” is just another function. But, it’s special because it takes in another function as an argument!

[Slide]

Since “tracer” decorates “hello_world”, the “hello_world” function is passed into “tracer” as an argument. Wow!

So, what’s inside “tracer”?

[Slide]

This decorator has an inner function named “wrapper”. Can you even do that? With Python, yes, you can! The “wrapper” function prints “Entering”, calls the function originally passed into the decorator, and then prints “Exiting”.

[Slide]

When “tracer” decorates “hello_world”, that means “hello_world” will be wrapped by “Entering” and “Exiting” print statements.

[Slides]

Finally, the decorator returns the new “wrapper” function. Any time the decorated function is called, it will effectively be replaced by this new wrapper function. 

[Slides]

So, when we call “hello_world”, the trace statements are now printed, too. Wow! That’s amazing. That’s how decorators work.
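[The slide code isn’t reproduced in this transcript, so here’s a minimal sketch of the “tracer” decorator as described; the exact slide code may have differed:]

def tracer(func):
    def wrapper():
        print("Entering")
        func()
        print("Exiting")
    return wrapper

@tracer
def hello_world():
    print("Hello World!")

hello_world()
# Output:
# Entering
# Hello World!
# Exiting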

[Slide] Decorators [Slide] wrap [Slide] functions [Slide] around [Slide] functions!

[Slide]

Think about them like candy bars. The decorator is like the foil wrapper, and the decorated function is like the chocolate inside.

[Slide]

But how is this even possible? That decorator code looks confusing!

[Slide]

Decorators are possible because, in Python, functions are objects. In fancy language, we say functions are “first-class” values. Since functions are just objects, …

[Slide]

…We can pass them into other functions as arguments, …

[Slide]

…define new functions inside existing functions, …

[Slide]

…and return a function from a function.

[Slide]

This is all part of a paradigm called “Functional Programming.” Python supports functional programming because functions can be treated like objects. That’s awesome!

[Slide]

So, using functions as objects, decorators change how functions are called.

[Slide]

Decorators create an “outer” decorator function around an “inner” decorated function. Remember, the outer function is like the foil wrapper, and the inner function is like the chocolate.

[Slide]

Creating an outer function lets you add new code around the inner function. Some people call this “advice.” You can add advice before or after the inner function. You could even skip the inner function!

[Slide]

The best part is, decorators can be applied to any function. They make sharing code easy so you don’t repeat yourself!

[Slide]

Decorators are reminiscent of a paradigm called “Aspect-Oriented Programming,” in which code can be cleverly inserted before and after points of execution. Neat!

[Slide]

So remember, decorators wrap functions around functions, like candy bars!

[Slide]

Hold on, now! We have a problem in that Python code!

[Slide]

If the “wrapper” function effectively replaces “hello_world”, then what identity does “hello_world” report?

[Slide]

Its name is “wrapper”…

[Slide]

And its help is also “wrapper”! That’s not right!

[Slide]

Never fear! There’s an easy solution. The “functools” module provides a decorator named “wraps”. Put “functools.wraps” on the “wrapper” function and pass in the inner function object, and decorated functions once again show the right identity. That’s awesome.
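[A sketch of that fix, building on the tracer sketch above; the docstring is added just to show what help() reports:]

import functools

def tracer(func):
    @functools.wraps(func)  # preserves the inner function's name, docstring, etc.
    def wrapper():
        print("Entering")
        func()
        print("Exiting")
    return wrapper

@tracer
def hello_world():
    """Prints a friendly greeting."""
    print("Hello World!")

print(hello_world.__name__)  # hello_world, not wrapper
help(hello_world)            # shows the original docstring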

[Slide]

But wait, there’s another problem!

[Slide]

How do decorators work with inputs and outputs? What if we decorate a function with parameters and a return value?

[Slide]

If we try to use the current “tracer”, …

[Slide]

…We get an error! Arguments broke it!

[Slide]

We can fix it! First, add “star args” and “star-star k-w-args” to the “wrapper” function’s parameters, and then pass them through to the inner function. This will make sure all arguments go through the decorator into the decorated function.

[Slide]

Then, capture the inner function’s return value and return it from the “wrapper” function. This makes sure return values also pass through. If the inner function has no return value, don’t worry – the decorator will pass through a “None” value.

[Slide]

When we call the function with the updated “tracer”, …

[Slide]

…we see tracing is now successful again!

[Slide]

When we check the return value, …

[Slide]

…it’s exactly what we expect. It works!
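[A sketch of the argument-and-return-value handling just described; the “greet” function is a stand-in, not necessarily the talk’s actual example:]

import functools

def tracer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("Entering")
        result = func(*args, **kwargs)  # pass all arguments through
        print("Exiting")
        return result                   # pass the return value through
    return wrapper

@tracer
def greet(name):
    return f"Hello, {name}!"

message = greet("PyTexas")  # prints Entering / Exiting
print(message)              # Hello, PyTexas!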

[Slide]

Wow, that’s awesome!

[Slide]

But wait, there’s more!

[Slide]

You can write a decorator to call a function twice!

[Slide]

Start with the decorator template…

[Slide]

…and call the inner function twice! Return the final return value for continuity.

[Slide]

BAM! It works!
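[A sketch of a call-twice decorator matching the description:]

import functools

def call_twice(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)
        return func(*args, **kwargs)  # return the final call's value for continuity
    return wrapper

@call_twice
def hello_world():
    print("Hello World!")

hello_world()  # prints "Hello World!" twice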

[Slide]

But wait, there’s more!

[Slide]

You can write a timer decorator!

[Slide]

Start with the template, …

[Slide]

…call the inner function, …

[Slide]

…and surround it with timestamps using the “time” module!

[Slide]

BAM! Now you can time any function!
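[A sketch of a timer decorator; exactly how the slides printed the elapsed time is a guess:]

import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"Elapsed: {time.time() - start:.6f} seconds")
        return result
    return wrapper

@timer
def hello_world():
    print("Hello World!")

hello_world()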

[Slide]

But wait, there’s more!

[Slide]

You can also add more than one decorator to a function! This is called “nesting”. Order matters. Decorators are executed in order of closeness to the inner function. So, in this case, …

[Slide]

…”call_twice” is applied first, and then “timer” is applied.

[Slide]

If these decorators are reversed, …

[Slide]

…then each inner function call is timed. Cool!
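[Nesting, using the timer and call_twice sketches above; hello_again is a stand-in name:]

@timer
@call_twice
def hello_world():
    print("Hello World!")

hello_world()   # call_twice applies first: one elapsed time covers both calls

@call_twice
@timer
def hello_again():
    print("Hello World!")

hello_again()   # timer applies first: each of the two calls is timed separately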

[Slide]

But wait, there’s more!

[Slide]

You can scrub and validate function arguments! Check out these two simple math functions.

[Slide]

Create a decorator to scrub and validate inputs as integers.

[Slide]

Add the wrapper function, and make sure it has positional args.

[Slide]

Then, cast all args as ints before passing them into the inner function.

[Slide]

Now, when calling those math functions, all numbers are integers! Using non-numeric inputs also raises a ValueError!
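[A sketch of an argument-scrubbing decorator; the names integers_only, add, and multiply are my stand-ins for whatever the slides used:]

import functools

def integers_only(func):
    @functools.wraps(func)
    def wrapper(*args):
        int_args = [int(a) for a in args]  # int() raises ValueError for non-numeric input
        return func(*int_args)
    return wrapper

@integers_only
def add(a, b):
    return a + b

@integers_only
def multiply(a, b):
    return a * b

print(add("1", 2.9))     # 3 – both args scrubbed to ints
print(multiply(3, "4"))  # 12
add("oops", 1)           # raises ValueError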

[Slide]

But wait, there’s more!

[Slide]

You can create decorators with parameters! Here’s a decorator that will repeat a function 5 times.

[Slide]

The “repeat” function is a little different. Instead of taking in the inner function object, it takes in the parameter, which is the number of times to repeat the inner function.

[Slide]

Inside, there’s a “repeat_decorator” function that has a parameter for the inner function. The “repeat” function returns the “repeat_decorator” function.

[Slide]

Inside “repeat_decorator” is the “wrapper” function. It uses “functools.wraps” and passes through all arguments. “repeat_decorator” returns “wrapper”.

[Slide]

Finally, “wrapper” contains the logic for calling the inner function multiple times, according to the “repeat” decorator’s parameter value.

[Slide]

Now, “hello_world” runs 5 times. Nifty!
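[A sketch of the repeat decorator with the structure just described:]

import functools

def repeat(times):
    def repeat_decorator(func):           # takes in the inner function
        @functools.wraps(func)
        def wrapper(*args, **kwargs):     # passes through all arguments
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return repeat_decorator

@repeat(5)
def hello_world():
    print("Hello World!")

hello_world()  # prints "Hello World!" five times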

[Slide]

But wait, there’s more!

[Slide]

Decorators can be used to save state! Here’s a decorator that will count the number of times a function is called.

[Slide]

“count_calls” has the standard decorator structure.

[Slide]

Outside the wrapper, a “count” attribute is initialized to 0. This attribute is added to the wrapper function object.

[Slide]

Inside the wrapper, the count is incremented before calling the inner function. The “count” value will persist across multiple calls.

[Slide]

Initially, the “hello_world” count value is 0.

[Slide]

After two calls, the count value goes up! Awesome!
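[A sketch of the count_calls decorator as described:]

import functools

def count_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.count += 1              # increment before calling the inner function
        return func(*args, **kwargs)
    wrapper.count = 0                   # attribute initialized outside the wrapper
    return wrapper

@count_calls
def hello_world():
    print("Hello World!")

print(hello_world.count)  # 0
hello_world()
hello_world()
print(hello_world.count)  # 2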

[Slide]

But wait, there’s more!

[Slide]

Decorators can be used in classes! Here, the “timer” decorator is applied to this “hello” method.

[Slide]

As long as parameters and return values are set up correctly, decorators can be applied equally to functions and methods.

[Slide]

Decorators can also be applied directly to classes!

[Slide]

When a decorator is applied to a class, it wraps the constructor.

[Slide]

Note that it does not wrap each method in the class.

[Slide]

Since decorators can wrap classes and methods in addition to functions, it would technically be more correct to say that decorators wrap callables around callables!

[Slide]

So all that’s great, but can decorators be tested? Good code must arguably be testable code. Well, today’s your lucky day, because yes, you can test decorators!

[Slide]

Testing decorators can be a challenge. We should always try to test the code we write, but decorators can be tricky. Here’s some advice:

[Slide]

First, separate tests for decorator functions from decorated functions. For decorator functions, focus on intended outcomes. Try to focus on the “wrapper” instead of the “inner” function. Remember, decorators can be applied to any callable, so cover the parts that make decorators unique. Decorated functions should have their own separate unit tests.

[Slide]

Second, apply decorators to “fake” functions used only for testing. These functions can be simple or mocked. That way, unit tests won’t have dependencies on existing functions that could change. Tests will also be simpler if they use slimmed-down decorated functions.

[Slide]

Third, make sure decorators have test coverage for every possible way they could be used. Cover decorator parameters, decorated function arguments, and return values. Make sure the “name” and “help” are correct. Check any side effects like saved state. Try them on methods and classes as well as functions. With decorators, most failures happen due to overlooked edge cases.

[Slide]

Let’s look at a few short decorator tests. We’ll use the “count_calls” decorator from earlier.

There are two decorated functions to use for testing. The first one is a “no operation” function that does nothing. It has no parameters or returns. The second one is a function that takes in one argument and returns it. Both are very simple, but they represent two equivalence classes of decoratable functions.

[Slide]

The test cases will verify outcomes of using the decorator. For “count_calls”, that means tests will focus on the “count” attribute added to decorated functions.

The first test case verifies that the initial count value for any function is zero.

[Slide]

The second test calls a function three times and verifies that count is three.

[Slide]

The third test exercises the “same” function to make sure arguments and return values work correctly. It calls the “same” function, asserts the return value, and asserts the count value.

This collection of tests is by no means complete. It simply shows how to start writing tests for decorators. It also shows that you don’t need to overthink unit tests for decorators. Simple is better than complex!
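[A pytest-style sketch of those tests, assuming the count_calls sketch above is in scope; the test names are mine:]

@count_calls
def noop():
    pass  # no parameters, no return value

@count_calls
def same(value):
    return value  # takes one argument and returns it

def test_initial_count_is_zero():
    assert noop.count == 0

def test_count_after_three_calls():
    for _ in range(3):
        noop()
    assert noop.count == 3

def test_same_passes_arguments_and_counts():
    assert same("hi") == "hi"
    assert same.count == 1

# Note: these tests share the decorated functions' state,
# so they rely on running in definition order.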

[Slide]

Up to this point, we’ve covered how to write your own decorators. However, Python has several decorators available in the language and in various modules that you can use, absolutely free!

[Slide]

Decorators like “classmethod”, “staticmethod”, and “property” can apply to methods in a class. Frameworks like Flask and pytest have even more decorators. Let’s take a closer look.

[Slide]

Let’s start by comparing the “classmethod” and “staticmethod” decorators. We’ll revisit the “Greeter” class we saw before.

[Slide]

The “classmethod” decorator will turn any method into a “class” method instead of an “instance” method. That means this “hello” method here can be called directly from the class itself instead of from an object of the class. This decorator will pass a reference to the class into the method so the method has some context of the class. Here, the reference is named “c-l-s”, and the method uses it to get the class’s name. The method can be called using “Greeter.hello”. Wow!

[Slide]

The “staticmethod” decorator works almost the same as the “classmethod” decorator, except that it does not pass a reference to the class into the method.

[Slide]

Notice how the method parameters are empty – no “class” and no “self”. Methods are still called from the class, like here with “Greeter.goodbye”. You could say that “staticmethod” is just a simpler version of “classmethod”.
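[A sketch of the Greeter class reconstructed from the description; the return strings are guesses:]

class Greeter:

    @classmethod
    def hello(cls):
        # cls is a reference to the class itself
        return f"Hello from {cls.__name__}!"

    @staticmethod
    def goodbye():
        # no class or instance reference is passed in
        return "Goodbye!"

print(Greeter.hello())    # Hello from Greeter!
print(Greeter.goodbye())  # Goodbye!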

[Slide]

Next, let’s take a look at the “property” decorator. To show how to use it, we’ll create a class called “Accumulator” to keep count of a tally.

[Slide]

Accumulator’s “init” method initializes a “count” attribute to 0.

[Slide]

An “add” method adds an amount to the count. So far, nothing unusual.

[Slide]

Now, let’s add a property. This “count” method has the “property” decorator on it. That means “count” will be accessed like an attribute instead of called like a method – no parentheses needed. It is effectively a “getter”. The references to “count” in the “init” and “add” methods will actually go through this property instead of a raw variable.

Inside the “count” property, the method returns an attribute named “underscore-count”. The underscore means that this variable should be private. However, this class hasn’t set that variable yet.

[Slide]

That variable is set in the “setter” method. Setters are optional for properties. Here, the setter validates that the value to set is not negative. If the value is good, then it sets “underscore-count”. If the value is negative, then it raises a ValueError.

“underscore-count” is handled internally, while “count” is handled publicly as the property. The getter and setter controls added by the “property” decorator let you control how the property is handled. In this class, the setter protects the property against bad values!

[Slide]

So, let’s use this class. When an Accumulator object is constructed, its initial count is 0.

[Slide]

After adding an amount to the object, its count goes up.

[Slide]

Its count can be directly set to non-negative values. Attempting to set the count directly to a negative value raises an exception, as expected. Protection like that is great!
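[A sketch of the Accumulator class as described:]

class Accumulator:

    def __init__(self):
        self.count = 0          # goes through the property setter below

    def add(self, amount):
        self.count += amount    # getter, then setter

    @property
    def count(self):
        return self._count      # "private" backing variable

    @count.setter
    def count(self, value):
        if value < 0:
            raise ValueError("count cannot be negative")
        self._count = value

accum = Accumulator()
print(accum.count)  # 0
accum.add(5)
print(accum.count)  # 5
accum.count = 10    # direct set works for non-negative values
accum.count = -1    # raises ValueError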

[Slide]

Python packages also frequently contain decorators. For example, Flask is a very popular Web “micro-framework” that enables you to write Web APIs with very little Python code.

[Slide]

Here’s an example “Hello World” Flask app taken directly from the Flask docs online. It imports the “flask” module, creates the app, and defines a single endpoint at the root path that returns the string, “Hello, World!” Flask’s “app.route” decorator can turn any function into a Web API endpoint. That’s awesome!
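[The app in question is essentially the Flask quickstart example:]

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "Hello, World!"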

[Slide]

Another popular Python package with decorators is pytest, Python’s most popular test framework.

[Slide]

One of pytest’s best features is the ability to parametrize test functions to run for multiple input combinations. Test parameters empower data-driven testing for wider test coverage!

[Slide]

To show how this works, we’ll use a simple test for basic arithmetic: “test addition”. It asserts that a plus b equals c.

[Slide]

The values for a, b, and c must come from a list of tuples. For example, 1 plus 2 equals 3, and so forth.

[Slide]

The “pytest.mark.parametrize” decorator connects the list of test values to the test function. It runs the test once for each tuple in the list, and it injects the tuple values into the test case as function arguments. This test case would run four times. Test parameters are a great way to rerun test logic without repeating test code.
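[A sketch of the parametrized test; only the first tuple comes from the talk, and the other three are illustrative:]

import pytest

@pytest.mark.parametrize(
    "a, b, c",
    [
        (1, 2, 3),
        (2, 2, 4),
        (0, 5, 5),
        (-1, 1, 0),
    ],
)
def test_addition(a, b, c):
    assert a + b == c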

[Slide]

So, act now, before it’s too late!

[Slide]

When should you use decorators in your Python code?

[Slide]

Use decorators for aspects.

[Slide]

An aspect is a special cross-cutting concern: something that happens in many parts of the code and frequently requires repetitive calls.

Think about something like logging. If you want to add logging statements to different parts of the code, then you need to write multiple logging calls in all those places. Logging itself is one concern, but it cross-cuts the whole code base. One solution for logging could be to use decorators, much like we saw earlier with the “tracer” decorator.

[Slide]

Good use cases for decorators include logging, profiling, input validation, retries, and registries. These are things that typically require lots of extra calls inserted in duplicative ways. Ask yourself this:

[Slide]

Should the code wrap something else? If yes, then you have a good candidate for a decorator.

[Slide]

However, decorators aren’t good for all circumstances. You should avoid decorators for “main” behaviors, because those should probably be put directly in the body of the decorated function. Avoid logic that’s complicated or has heavy conditionals, too, because simple is better than complex. You should also try to avoid completely side-stepping the decorated function – that could confuse people!

[Slide]

Ask yourself this: is the code you want to write the wrapper or the candy bar itself? Wrappers make good decorators, but candy bars do not.

[Slide]

I hope you’ve found this infomercial about decorators useful! If you want to learn more, …

[Slide]

…check out this Real Python tutorial by Geir Arne Hjelle named “Primer on Python Decorators”. It covers everything I showed here, plus more.

[Slide]

Thank you very much for listening! Again, my name is Pandy Knight – the Automation Panda and a bona fide Pythonista. Please read my blog at AutomationPanda.com, and follow me on Twitter @AutomationPanda. I’d love to hear what y’all end up doing with decorators!

East Meets West When Translating Django Apps

你好!我叫安迪。 

Don’t understand my Chinese? Don’t feel bad – I don’t know much Mandarin, either! My wife and her mom are from China. When I developed a Django app to help my wife run her small business, I needed to translate the whole site into Chinese so that Mama could use it, too. Thankfully, Django’s translation framework is top-notch.

“East Meets West When Translating Django Apps” is the talk I gave about translating my family’s Django app between English and Chinese. For me, this talk had all the feels. I shared my family’s story as a backdrop. I showed Python code for each step in the translation workflow. I gave advice on my lessons learned. And I spoke truth to power – that translations should bring us all together.

I gave this talk at a few conferences. It was the opener for PyCascades 2020. I also delivered it in person at PyTennessee 2020 and online for PyCon 2020.

Here’s the PyCon recording, which is probably the “definitive” version:

Here’s the PyCascades recording:

Here are the slides from my talk:

Check out my article, Django Admin Translations, to learn how to translate the admin site, too.

Python Program Main Function

This article will show you the best way to handle “main” functions in Python.

Python is like a scripting language in that all lines in a Python “module” (a .py file) are executed whenever the file is run. Modules don’t need a “main” function. Consider a module named stuff.py with the following code:

def print_stuff():
  print("stuff happened!")

print_stuff()

This is the output when it is run:

$ python stuff.py
stuff happened!

The print_stuff function was called as a regular line of code, not in any function. When the module ran, this line was executed.

This will cause a problem, though, if stuff is imported by another module. Consider a second module named more_stuff.py:

import stuff

stuff.print_stuff()
print("more stuff!")

At first glance, we may expect to see two lines printed. However, running more_stuff actually prints three lines:

$ python more_stuff.py
stuff happened!
stuff happened!
more stuff!

Why did “stuff happened!” get printed twice? Well, when “import stuff” was called, the stuff module was loaded. Whenever a module is loaded, all of its code is executed. The print_stuff function was called at line 4 in the stuff module. Then, it was called again at line 3 in the more_stuff module.

So, how can we avoid this problem? Simple: check the module’s __name__. The __name__ variable (pronounced “dunder name”) is dynamically set to the module’s name. If the module is the main entry point, then __name__ will be set to “__main__”. Otherwise, if the module is simply imported, then it will be set to the module’s filename without the “.py” extension.

Let’s rewrite our modules. Here’s stuff:

def print_stuff():
  print("stuff happened!")

if __name__ == '__main__':
  print_stuff()

And here’s more_stuff:

import stuff

if __name__ == '__main__':
  stuff.print_stuff()
  print("more stuff!")

If we rerun more_stuff, then the line “stuff happened!” will print only once:

$ python more_stuff.py
stuff happened!
more stuff!

As a best programming practice, Python modules should not contain any directly-called lines. They should contain only functions, classes, and variable initializations. Anything to be executed as a “main” body should be done after a check for “if __name__ == '__main__'”. That way, no rogue calls are made when modules are imported by other modules. The conditional check for __name__ also makes the “main” body clear to the reader.

Some people still like to have a “main” function. That’s cool. Just do it like this:

import stuff

def main():
  stuff.print_stuff()
  print("more stuff!")

if __name__ == '__main__':
  main()

For more information, read this Stack Overflow question:
What does if __name__ == “__main__”: do?

Surviving Without Python


Python is such a popular language for good reason: Its principles are strong. However, if Python is “the second-best language for everything”… that means the first-best is often chosen instead. Oh no! How can Pythonistas survive a project or workplace without our favorite language?

Personally, even though I love Python, I don’t use it daily at my full time job. Nevertheless, Pythonic thinking guides my whole approach to software. I will talk about how the things that make Python great can be applied to non-Python places in three primary ways:

  1. Principles from the Zen of Python
  2. Projects that partially use Python
  3. People who build strong, healthy community

Check out my talk, Surviving Without Python, from PyOhio 2019! It was one of the most meaningful talks I’ve ever given.

Sprint Planning Sucks. Can It Be Fixed?

Warning: This article contains strong opinions that might not be suitable for all audiences. Reader discretion is advised.

It’s Monday morning. After an all-too-short weekend and rush hour traffic, you finally arrive at the office. You throw your bag down at your desk, run to the break room, and queue up for coffee. As the next pot is brewing, you check your phone. It’s 8:44am… now 8:45am, and DING! A meeting reminder appears:

Sprint Planning – 9am to 3pm.

.

What’s your visceral reaction?

.

I can’t tell you mine, because I won’t put profanity on my blog.

Real Talk

In the capital-A Agile Scrum process, sprint planning is the kick-off meeting for the next iteration. The whole team comes together to talk about features, size work items with points, and commit to deliverables for the next “sprint” (typically 2 weeks long). Ideally, team members collaborate freely as they learn about product needs and give valued input.

Let’s have some real talk, though: sprint planning sucks. Maybe that’s a harsh word, but, if you’re reading this article, then it caught your attention. Personally, my sprint planning experiences have been lousy. Why? Am I just bellyaching, or are there some serious underlying problems?

Sprint planning is a huge time commitment. 9am to 3pm is not an exaggeration. Sprint planning meetings are typically half-day to full-day affairs. Most people can’t stay focused on one thing for that long. Plus, when a sprint is only two weeks long, one hour is a big chunk of time, let alone 3, or 6, or a whole day. The longer the meeting, the higher the opportunity cost, and the deeper the boredom.

Collaboration is a farce. Planning meetings typically devolve into one “leader” (like a scrum master, product owner, or manager) pulling teeth to get info for a pre-determined list of stories. Only two people, the leader and the story-owner, end up talking, while everyone else just stares at their laptops until it’s their turn. Discussions typically don’t follow any routine beyond, “What’s the acceptance criteria?” and, “Does this look right?” with an interloper occasionally chiming in. Each team member typically gets only a few minutes of value out of an hours-long ordeal. That’s an inefficient use of everyone’s time.

No real planning actually happens. These meetings ought to be called “guessing” meetings, instead. Story point sizes are literally made up. Do they measure time or complexity? No, they really just measure groupthink. Teams even play a game called planning poker that subliminally encourages bluffing. Then, point totals are used to guess how much work can be done during the sprint. When the guess turns out to be wrong at the end of the sprint (and it always does), the team berates itself in retro for letting points slip. Every. Time.

Does It Spark Joy?

I’ve long wondered to myself if sprint planning is a good concept just implemented poorly, or if it’s conceptually flawed at its root. I’m pretty sure it’s just flawed. The meetings don’t facilitate efficient collaboration relative to their time commitments, and estimates are based on poor models. Retros can’t fix that. And gut reactions don’t lie.

So, what should we do? Should we Konmari our planning meetings to see if they spark joy? Should we get rid of our ceremonies and start over? Is this an indictment of the whole Agile Scrum process? But then, how will we know what to do, and when things can get done?

I think we can evolve our Agile process with more effective practices than sprint planning. And I don’t think that evolution would be terribly drastic.

Behavior-Driven Planning

What we really want out of a planning meeting is planning, not pulling and not predicting. Planning is the time to figure out what will be done and how it will be done. The size of the work should be based on the size of the blueprint. Enter Example Mapping.

Example Mapping is a Behavior-Driven Development practice for clarifying and confirming stories. The process is straightforward:

  1. Write the story on a yellow card.
  2. Write each rule that the story must satisfy on a blue card.
  3. Illustrate each rule with examples written on green cards.
  4. Got stuck on a question? Write it on a red card and move on.

One story should take about 20-30 minutes to map. The whole team can participate, or the team can split up into small groups to divide-and-conquer. Rules become acceptance criteria, examples become test cases, and questions become spikes.

Here’s a good walkthrough of Example Mapping.

What about story size? That’s easy – count the cards. How many cards does a story have? That’s a rough size for the work to be done based on the blueprint, not bluffing. More cards = more complexity. It’s objective. No games. Frankly, it can’t be any worse than made-up point values.

This is real planning: a blueprint with a course of action.

So, rather than doing traditional sprint planning meetings, try doing Example Mapping sessions. Actually plan the stories, and use card counts for point sizes. Decisions about priority and commitments can happen between rounds of story mapping, too. The Scrum process can otherwise remain the same.

If you want to evolve further, you could eliminate the time boxes of sprints in favor of Kanban. Two-week work item boundaries can arbitrarily fall in the middle of progress, which is not only disruptive to workflow but can also encourage bad responses (like cramming to get things done or shaming for not being complete). Kanban treats work items as a continuous flow of prioritized work fed to a team in bite-sized pieces. When a new story comes up, it can have its own Example Mapping “planning” meeting. Now, Kanban is not for everyone, but it is popular among post-Agile practitioners. What’s important is to find what works for your team.

Rant Over

I know I expressed strong, controversial opinions in this article. And I also recognize that I’m arguing against bad examples of Agile Scrum. Nevertheless, I believe my points are fair: planning itself is not a waste of time, but the way many teams plan their sprints uses time inefficiently and sets poor expectations. There are better ways to do planning – let’s give them a try!

Frameworks: All-in-One or Piece-by-Piece?

Software frameworks are great because they apply the principle of Separation of Concerns. A framework’s tools and code handle a specific need in a standard way for developers to write other code more easily. For example:

  • Web frameworks support receiving requests and sending responses.
  • Test frameworks include test case structure, runners, and reporting mechanisms.
  • Logging frameworks control how messages are gathered and stored.
  • Dependency injection frameworks create and manage object instances.

Recently, a question hit me: How far should a framework go to separate concerns? Should a framework try to do everything all-in-one, or should it behave more like a library that focuses on doing one thing well?

Let’s look at Python Web frameworks as an example. Django, the “Web framework for perfectionists with deadlines,” provides everything a developer could want out of the box. Flask, on the other hand, is a “microframework” that prides itself on minimalism: any extras must be handled by extensions or other packages. The differences between the two become clear when comparing some of their features:

Feature                          | Django   | Flask
---------------------------------|----------|----------------------
HTTP Requests and Routing        | Included | Werkzeug (bundled)
Templates                        | Included | Jinja2 (bundled)
Forms                            | Included | None (Flask-WTF)
Object-Relational Mapping (ORM)  | Included | None (SQLAlchemy)
Security                         | Included | None (Flask-Security)
Localization                     | Included | None (Flask-Babel)
Admin Interface                  | Included | None

Clearly, Django is all-in-one, while Flask is piece-by-piece. To make a serious Flask app, developers must pull in many extra pieces. Many other framework pairs show a similar contrast:

  • JavaScript testing: Jasmine vs. Mocha
  • JavaScript development: Angular vs. React
  • Java BDD testing: Serenity vs. Cucumber-JVM

I think each approach has its merits. All-in-one frameworks are more convenient to use, especially for beginners. For developers who are new to a domain or just need to get something up fast, all-in-ones are the better choice. They come with all units already integrated together, and they often have good documentation. However, developing an all-in-one framework takes much more work because it covers multiple concerns. Developers may also feel shoehorned into the framework’s way of doing things. All-in-ones typically dictate what they believe to be the “best” solution.

Piece-by-piece frameworks require more expertise but offer greater flexibility. Developers can pick and choose the pieces they need, and they can change the packages used by the solution more easily. Found a better ORM? Not a problem. Need to localize the site in Chinese? Add it! Solutions can avoid excess weight and stay nimble for the future. The big challenge is successful integration. Furthermore, a library or framework for a singular concern tends to solve the concern in better ways simply because project contributors give it exclusive focus. The more I learn about a space, the more I lean towards a piece-by-piece approach.

As always, pick frameworks based on the needs at hand. For example, I like to use Django to make websites for my wife’s small businesses because the admin interface is just so convenient for her, even though I could get away with Flask. However, I’ll probably pick Mocha (piece-by-piece) over Jasmine (all-in-one) whenever I return to JavaScript testing.