Acceptance Test Driven Development (ATDD) Revisited

A fresh look at the practice of ATDD with the hindsight of 16+ years.


Once upon a time, a very, very long time ago (circa 2008), I spoke a lot about Acceptance Test Driven Development (ATDD). I practiced it on my own projects, taught others how to do it, and presented about it at conferences. I also wrote a paper, as Mark Levison reminded me over on Mastodon. He asked me to make the paper available on my website. I’m grateful to him, and have done so. You can find it here.

However, I have learned a lot in the 16 years since I wrote that paper. Rereading it, I cringe just a little bit. At the time I wrote the paper, I 100% believed in the practice as I described it. I followed it on my own projects. But after watching organization after organization struggle to adopt the practice, I had to admit that, at least as I described it in the paper, ATDD was too heavyweight to be practical for most real-world teams.

The practice I described involved whole-team synchronous collaboration using tools that bridged natural language requirements and automation code. Each of those things added overhead. Together? Too many stumbling blocks. Product managers lost patience with defining requirements at that level of granularity. Having so many people collaborate on defining the acceptance tests felt draining rather than energizing. The tooling had to bridge between imprecise natural language and code. Developers lost patience with the tooling. Maintaining the tests proved challenging, so too often they fell into disrepair and were abandoned. And QA still struggled to find their place in the whole process, even though one of the goals was to leverage people with testing skills at the very beginning, in defining requirements.

Eventually I had to admit to myself that, even for my own projects, the benefits didn’t justify the overhead. So I took the good parts and jettisoned the rest. I saw that other teams that succeeded with something like ATDD had done the same.

So instead of doing ATDD the way I described it in 2008, here’s what I recommend now.

Begin with the End in Mind

At the core of ATDD is the simple, powerful idea of using concrete examples to express expectations about the end result. I’ve seen teams use Given / When / Then to express acceptance criteria in user stories. The successful teams do this in a lightweight way: just a few examples, not too many, and with no attempt to hook those examples up to automation.
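As a purely hypothetical illustration (the domain and numbers are invented for this post), a story about a free-shipping threshold might carry an acceptance criterion like:

    Given a cart totaling $45
    When the customer adds a $10 item
    Then the order qualifies for free shipping

One or two examples like this are usually enough to make the expected behavior concrete without turning the story into a specification.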

Define Requirements for People, Not Computers

Tests and requirements are two sides of the same coin. Both represent expectations. So I wanted to believe that requirements could become executable tests. The problem is that human language is full of ambiguity and nuance, while automation code, being code, demands precision. Bridging the two is incredibly difficult.

It turns out the value in ATDD was not from making natural language requirements do double duty as executable tests. It’s just as valuable to separate concerns, even if that means duplicating the examples in both the user stories and automation code.
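Continuing the hypothetical free-shipping example, the same expectation might simply appear again in the test suite as an ordinary developer-facing test, with no natural-language tooling in between. This is only a minimal sketch in Python; the Cart class and its methods are invented for illustration:

    # Hypothetical sketch: the same expectation as the story above,
    # written directly as test code rather than generated from the story text.
    class Cart:
        FREE_SHIPPING_THRESHOLD = 50  # invented business rule for the example

        def __init__(self):
            self.total = 0

        def add_item(self, price):
            self.total += price

        def qualifies_for_free_shipping(self):
            return self.total >= self.FREE_SHIPPING_THRESHOLD

    def test_adding_item_crosses_free_shipping_threshold():
        cart = Cart()
        cart.add_item(45)  # Given a cart totaling $45
        cart.add_item(10)  # When the customer adds a $10 item
        assert cart.qualifies_for_free_shipping()  # Then: free shipping applies

The example now lives in two places, the story and the test, and keeping those two copies in sync by hand is the duplication the paragraph above accepts in exchange for dropping the bridging tooling.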

Just Enough Collaboration

Some product managers struggle to write clear requirements with concrete examples. Other product managers do so naturally. Some want help. Others want to be alone in their heads to think things through. For that matter, it’s not always the product manager who writes the user stories. In some organizations the product manager sets a high-level vision, and it’s up to an engineering manager or lead to slice that larger vision into incremental stories.

Trying to have everyone on the team participate in defining the acceptance tests up front proved too cumbersome in most contexts. Every organization is a little different. The key to success is not including everyone, but rather involving the smallest subset of the team with the skills and interest needed to illustrate requirements with examples. That might be just one person.

Reflect and Adapt

The neat, tidy ATDD cycle I wrote about in 2008 isn’t wrong. It just turned out not to be practical for most organizations. The intentions are still valid:

  1. The whole team achieves a shared understanding of the expected behavior of the system
  2. The code base includes automated tests that detect if the system no longer conforms to expectations

That’s it: shared understanding and automated verification. So simple to say; so hard to do. What’s the lightest-weight way to achieve those outcomes, given your context?

The advice above is what I have seen work in practice. It may or may not work in your organization. Ask yourself what “good” looks like in your context, how close or far you are from that goal, and whether the investment is worth the benefit. Experiment, then ask yourself those questions again. Keep experimenting until you find the right balance.

Along the way, optimize for learning.

Contexts change. People move around. New tools become available. What worked well a few years ago might not work as well now. Conversely, what didn’t work before might work now. Ongoing success isn’t about finding the one true way to do things. It’s about learning how to learn. If your organization is really good at reflecting and adapting, everything else will follow.
