In today's fast-paced software development environment, it's common to hear the term Test-Driven Development (TDD). Discussions about its benefits, as well as its drawbacks, are frequent in the software development community.

Some say that TDD is an “unrealistic, ineffective morality campaign for self-loathing and shaming” and others that it's “just a tool to help us design faster using refactoring” [2].

"Bad programmers have all the answers. Good testers have all the questions."

Gil Zilberfeld

But Test-Driven Development is not the new kid in town.

The earliest widely known reference comes from 1957 in D.D. McCracken’s “Digital Computer Programming: The First General Introduction in Book Form, Stressing Actual Work with Computers”. Although TDD was not widely adopted in the few years that followed, IBM ran a project for NASA in the 1960s where developers were “writing early unit-tests to micro-increments” [3].

Modern Test-Driven Development

On a warm summer evening in 1989, Kent Beck developed the first known TDD framework, SUnit, for Smalltalk. TDD was then born or, more accurately, rediscovered. Shortly after, the Agile and TDD movements started to encourage programmers to write automated unit tests, adding testing to the development discipline.

At first, as with many new things, it seemed to just "add time to the day job". Many of us didn't quite understand the benefits of the practice and remained sceptical and critical of TDD, seeing it as unnecessary time and effort.

But, before we proceed further, let's step back and consider: what is Test-Driven Development?

In a nutshell, TDD is the act of writing an automated test before writing a feature.

As an example, let's say Bob needs to develop a new feature for his great new social network idea. Bob starts by writing an automated test; as long as the feature is not implemented correctly, that test will fail (i.e. red light). So, Bob writes the minimum amount of code needed to make the test pass (i.e. green light).

Then, once he has the green light, Bob cleans up the code (i.e. he refactors it) and makes sure the feature is still implemented correctly by re-running the tests. Free of duplication, the code and its automated tests are easy for others to maintain.
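
To make this concrete, here is one turn of that loop as a minimal Python sketch. The `unique_username` function and its behaviour are hypothetical, invented purely for illustration:

```python
import unittest

# Red: this test is written first. Running it before unique_username
# exists fails with a NameError -- that's the red light.
class UniqueUsernameTest(unittest.TestCase):
    def test_appends_suffix_on_collision(self):
        taken = {"bob", "bob2"}
        self.assertEqual(unique_username("Bob", taken), "bob3")

# Green: the minimum implementation that makes the test pass.
def unique_username(display_name, taken):
    candidate = display_name.lower().replace(" ", "_")
    username, suffix = candidate, 1
    while username in taken:
        suffix += 1
        username = f"{candidate}{suffix}"
    return username

# Refactor: with the test green, rename things and remove duplication,
# then re-run the test to confirm the feature still works.

if __name__ == "__main__":
    unittest.main()
```

Notice that the test alone pins down the behaviour before a single line of production code exists.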

This cycle is also known as the "red/green/refactor" mantra, where red means failing tests and green means passing tests. Robert Cecil Martin, one of the leaders of the Agile movement (and familiar to many as Uncle Bob), notes in “Clean Code” that writing tests before writing production code is just the tip of the iceberg.

"The only way to make the deadline - the only way to go fast - is to keep the code as clean as possible at all times."

Uncle Bob, Clean Code

The benefits in action

To highlight the real benefits of TDD, we'll do an exercise.

Let's imagine that we're going to start a new software application without Test-Driven Development. We'll implement an app for users to manage their blog.

Let’s start by implementing our first feature: allow users to register in order to create their blog. Once the code is complete, we need to manually test it to see if everything is working smoothly.

When we're happy with the first feature, we can start coding feature number two: allow users to add blog posts. And, when that code is all done, we test the second feature manually to see if it's working correctly. Once we're happy, we can move to the next feature, right?

But... how do we know that the new code didn't break the user registration process (first feature)?

We can test the first feature manually and see if it's still working. Does this mean that every time we add a new feature, we need to manually execute N+1 tests?

The answer is yes.

The result is that manually testing every feature before each release is so expensive that projects without TDD are very prone to regressions. In software development terms, a regression happens when a feature that worked in a previous release turns out to be broken in a later one.
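
This is exactly where automated tests earn their keep: instead of N+1 manual checks, the whole suite re-runs in seconds after every change. Here is a minimal sketch for our hypothetical blog app; the `BlogApp` class and its methods are invented for illustration, and a real app would of course sit behind a web framework and a database:

```python
import unittest

# A toy, in-memory stand-in for the blog application.
class BlogApp:
    def __init__(self):
        self.users = set()
        self.posts = {}

    def register(self, username):  # feature 1: user registration
        if username in self.users:
            raise ValueError("username already taken")
        self.users.add(username)
        self.posts[username] = []

    def add_post(self, username, title):  # feature 2: blog posts
        if username not in self.users:
            raise KeyError("unknown user")
        self.posts[username].append(title)

# One test per behaviour: whenever feature N+1 lands, re-running this
# file re-checks features 1..N as well, so a regression in registration
# is caught the moment the posting code breaks it.
class BlogAppTests(unittest.TestCase):
    def setUp(self):
        self.app = BlogApp()

    def test_user_can_register(self):
        self.app.register("bob")
        self.assertIn("bob", self.app.users)

    def test_duplicate_username_is_rejected(self):
        self.app.register("bob")
        with self.assertRaises(ValueError):
            self.app.register("bob")

    def test_registered_user_can_add_post(self):
        self.app.register("bob")
        self.app.add_post("bob", "Hello, world")
        self.assertEqual(self.app.posts["bob"], ["Hello, world"])

if __name__ == "__main__":
    unittest.main()
```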

The conclusion is that projects where TDD is applied usually have fewer bugs than projects with no automated tests. But automated tests are not free.

On the face of it, developing a feature with automated tests can cost 20% to 50% more than developing it without tests. However, as the complexity of the software grows, it becomes increasingly difficult to manually test every single feature.

After only a few features, the time invested in writing automated tests already pays off.

In the software development community, it is now widely accepted that projects with automated tests are less buggy than projects without automated tests.

It is also widely accepted that the overhead of writing a test pays off by preventing bugs from reaching the production environment. Fewer bugs mean a better product, and that usually translates into happier users. Not to mention happier developers, who can spend their time making the product better instead of constantly fixing bugs.

How much testing?

But how much testing is enough? Do we need to write automated tests for every single feature?

The answer is: it depends.

In TDD there is no formal test plan, which makes it an informal process. As a result, how much of the code should be covered by tests is heavily debated.

"'How to test?' is a question that cannot be answered in general. 'When to test?' however, does have a general answer: as early and as often as possible."

Bjarne Stroustrup,
The C++ Programming Language

We observed that pushing test coverage above 80% doesn't bring much extra benefit. So few bugs occurred once we applied the 80% rule that writing even more tests had only marginal benefits.
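
If you want to hold a similar line, the threshold is easy to automate. As a sketch, in Python the coverage.py package can run the suite and fail the build when total coverage dips below 80%; the `myapp` package and `tests` directory here are placeholders for your own project, and the same check is often wired into CI with `coverage report --fail-under=80` instead:

```python
import unittest

import coverage  # third-party: the coverage.py package

# Measure only our own package, not the standard library or the tests.
cov = coverage.Coverage(source=["myapp"])
cov.start()

# Discover and run the whole test suite under coverage measurement.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
total = cov.report()  # prints the per-file table, returns the total %
if total < 80:
    raise SystemExit(f"Coverage {total:.1f}% is below the 80% target")
```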

To put it in real-world terms, let's take an example.

Recently, on a project for a large platform, deadline pressure forced us to let automated test coverage drop to 60% of the code for a couple of sprints.

As new features were released with lower coverage, we started to see more regressions. After completing two sprints at 60% code coverage, we decided to spend an entire sprint just fixing bugs and putting test coverage back on track.

Had coverage not dropped, we would have been delayed by only half a sprint instead of a full one. With two-week sprints, that's a whole week we could have saved just by keeping our test coverage up in the first place.

Imagine how it feels to spend that time just sweeping up after yourself instead of building bigger and better things.

The takeaway

Our rule of thumb is to always write automated tests for the most complex features. We only write automated tests for simple features if coverage would otherwise fall below 80%. The result is that we spend less time bug-fixing, and fewer issues are raised by end users.

So, to summarise, at Imaginary Cloud we love TDD and 80% code coverage is our magic number.

The more complex the feature, the higher it is on our testing priority list.

At Imaginary Cloud, we have a team of software development experts. If you think you could use some help with your digital project, drop us a line!
