A beginner’s personal journey into automated tests

Like many engineers, I landed in coding kind of by accident. I programmed alone and unsupervised for a decade, and my work was mostly guided by what felt like the right way of doing things and what allowed me to evolve in exciting ways, so I eagerly followed that path. Back then my workflow consisted pretty much of:

  1. Detecting a need;
  2. Outlining my approach;
  3. Implementing.

This approach looked pretty natural to me. But here are some issues I would run into all the time:

  1. How detailed should I be in my outline?
    Sometimes it felt like I was wasting way too much time methodically planning every step of my implementation. Other times I didn’t plan much, and as a result I felt like I was just winging it when it came to writing code. I never found a happy in-between.

  2. Where should I begin my implementation?
    And what should I do next? I would usually pick a random node (either near the root or at an extremity of my implementation tree) and start from there, then proceed to implement related sections of the code.

  3. Where am I?!
    This is inevitably where I ended up. The code would keep growing, its complexity would exceed my brain’s capacity to know where everything was, and I would end up lost in a sea of code. Have I already implemented this bit? Does it work yet, or do I need to implement something else before I can run this? Testing as I went along was usually a messy nightmare.

It looked like I was doomed to work like this forever, at the limit of my brain’s RAM and constantly scared to break my code.

However, on a fine summer day I was presented with a new concept: tests. I had an idea of what “testing your code” meant, and to me it meant executing your application and trying to break it manually. But what if there were an automated way to run those checks? This is when I stumbled upon the idea of “automated tests”. I would write a testing suite, providing it with certain input values and an expected output. The testing suite would then run my code with the given input and check whether the result matched the expected output. That way, if any change I made to the code introduced bugs or unexpected behaviour, I would be promptly informed that a test had failed.

Here is a quick example:

describe "simple sum" do
    it "adds 1 and 2 to get 3" do
        expect(simple_sum(1, 2)).to eq(3)
    end
end

The code above is written in RSpec (a tool created to test programs in Ruby), but we’re not interested in the details of how it works. All we need to know is that it runs the function simple_sum (defined elsewhere) with the inputs 1 and 2 and checks that the result equals 3.
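The post never shows simple_sum itself; a minimal definition the spec above could run against might look like this (the function body here is my assumption, since the real one is defined elsewhere):

```ruby
# Hypothetical implementation of simple_sum, assumed by the spec above.
def simple_sum(a, b)
  a + b
end

simple_sum(1, 2)  # → 3
```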

If the tests we wrote are good tests, then we can always know whether the code is behaving as intended. And if given careful thought, tests can be written to cover pretty much the entire application. This is quite liberating: no longer needing to manually test my application or ponder post hoc what long chain of unlikely scenarios might cause it to break.

Subsequently, I learned about TDD (test-driven development) which introduced me to a counterintuitive notion of writing tests. To my beginner’s brain it was obvious you should write your tests after having written your code, since there is nothing to test before you write the code!

But TDD’s high priests swore the right approach was to first write the tests and then write the code that made the tests pass. Ludicrous!

So, I gave it a go.

The example below implements a test suite for a temperature unit converter, capable of working with Celsius, Kelvin, and Fahrenheit. Initially I had to overcome some weird sort of mental block, since my brain refused to write a test for imaginary code. But eventually I managed to get it done, and this is what I came up with:

describe "temperature unit converter" do

    context "temperature in Celsius" do
        let(:zero_degrees_celsius) { Temperature.new value: 0, scale: 'C' }

        it "converts to Fahrenheit" do
            expect(zero_degrees_celsius.to_fahrenheit).to eq(32)
        end

        it "converts to Kelvin" do
            expect(zero_degrees_celsius.to_kelvin).to eq(273.15)
        end
    end

    context "temperature in Fahrenheit" do
        let(:thirty_two_fahrenheit) { Temperature.new value: 32, scale: 'F' }

        it "converts to Celsius" do
            expect(thirty_two_fahrenheit.to_celsius).to eq(0)
        end

        it "converts to Kelvin" do
            expect(thirty_two_fahrenheit.to_kelvin).to eq(273.15)
        end
    end

    context "temperature in Kelvin" do
        let(:zero_kelvin) { Temperature.new value: 0, scale: 'K' }

        it "converts to Celsius" do
            expect(zero_kelvin.to_celsius).to eq(-273.15)
        end

        it "converts to Fahrenheit" do
            expect(zero_kelvin.to_fahrenheit).to eq(-459.67)
        end
    end
end

I'm creating a temperature object for each temperature scale and performing all of the desired unit conversions on it. It may look like we’re done, but we’re missing some important scenarios. First of all, we’re not testing the method “new”, which creates a new temperature object. What if the user tries to create a temperature but picks an invalid unit (i.e., neither “C”, “F”, nor “K”)? What if the temperature value chosen by the user is invalid (that is, below absolute zero)? These issues came to my attention without much effort as I looked at the tests above, making this feel like the right place to be asking such questions. Not a single line of code written yet, but I already knew my program better than I did before.
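In RSpec those missing scenarios would become specs like expect { Temperature.new(value: 0, scale: 'X') }.to raise_error(ArgumentError). Here is a sketch of the validating constructor such specs would drive out; the constant name, error class, and messages are my assumptions, not part of the original suite:

```ruby
# Lowest physically possible value in each supported scale (absolute zero).
ABSOLUTE_ZERO = { 'C' => -273.15, 'F' => -459.67, 'K' => 0.0 }.freeze

class Temperature
  attr_reader :value, :scale

  def initialize(value:, scale:)
    # Reject units other than 'C', 'F', and 'K'.
    raise ArgumentError, "invalid scale: #{scale}" unless ABSOLUTE_ZERO.key?(scale)
    # Reject temperatures below absolute zero.
    raise ArgumentError, "below absolute zero" if value < ABSOLUTE_ZERO[scale]
    @value = value
    @scale = scale
  end
end
```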

With hindsight I can now tell apart two different kinds of actions I performed when coding: conceiving the program and writing the code. Conceiving is where deep thinking takes place, and it begins by understanding the task at hand. This means looking at the problem you’re trying to solve from different angles even before you begin pondering how to solve it, in a kind of creativity-led exploration.

Writing code, on the other hand, is a lot more mechanical. Once you know what needs to happen in terms of behavior it’s just a matter of consulting documentation and translating ideas into a programming language’s particular syntax. How much do you really need to think to write code that appends exclamation! points! to! every! string! element! in! a! list!? Not much, I’d say!
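Indeed, once the behavior is settled, that exclamation-point chore is a single mechanical line of Ruby:

```ruby
# Append an exclamation point to every string element in a list.
words = ["append", "exclamation", "points"]
shouted = words.map { |w| w + "!" }
# shouted == ["append!", "exclamation!", "points!"]
```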

Notice how these tasks are fundamentally different, and thus require your brain to operate in very different ways. And here’s the insight TDD gave me: shifting constantly between these two modes is ineffective. If you write a bunch of low-level methods and afterwards you need to stop and ponder the whole structure of your code, your brain will not enjoy the experience. At least mine doesn't.

Shifting constantly makes you bad at both tasks. Your conceptual thinking is shallow because your understanding is limited, since your immediate concern is writing code. And your code writing is poor because you’re not able to focus on the task at hand since the back of your mind is running its conceptual thinking module in parallel all the time. It would be great, therefore, if you could segregate those two modes of brain operation – and that is what TDD offered me.

By writing tests first I found that I was able to perform all the conceptual-level thinking upfront, using the framework of an automated test suite to develop the understanding of the problem I would eventually solve. Once I’m done creating tests I’m then free to simply write the code. It’s like a part of my brain finishes its shift and delegates the following task to a different part. And each of those parts is happy to perform its own specialized kind of work, reducing the amount of stress each of them is under – and in turn providing me with a more pleasant coding experience.
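To illustrate that hand-off, here is one way the Temperature class behind the suite above might come out once the "writing" shift begins. This is a sketch under my own assumptions (converting through Celsius internally, validating the scale in the constructor), not the post's actual implementation:

```ruby
class Temperature
  SCALES = %w[C F K].freeze

  attr_reader :value, :scale

  def initialize(value:, scale:)
    raise ArgumentError, "invalid scale: #{scale}" unless SCALES.include?(scale)
    @value = value.to_f
    @scale = scale
  end

  # Normalize to Celsius first so each conversion is defined only once.
  def to_celsius
    case scale
    when 'C' then value
    when 'F' then (value - 32) * 5.0 / 9.0
    when 'K' then value - 273.15
    end
  end

  def to_fahrenheit
    to_celsius * 9.0 / 5.0 + 32
  end

  def to_kelvin
    to_celsius + 273.15
  end
end
```

Running the test suite against a class like this would confirm (or refute) that the "conceiving" work done earlier was sound, without any manual poking.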

Additionally, once I’m done writing the code I have a test suite that I can go back to and execute to make sure that what I’ve implemented performs accordingly. No need to test manually or stare endlessly at the code. Tests, of course, are not only an aid to streamlining the thought process; they’re also a very practical tool.

Perhaps in my eagerness to share my excitement I’ve made writing tests first sound like a perfect, bulletproof approach, or a cheap ticket to becoming a flawless programmer. That is certainly not the case. What often happens (especially with novices like me!) is that midway through implementation you’ll realize your test suite is not comprehensive, usually because you were oblivious to aspects of the tools you’re working with or because you didn’t understand the problem in the first place. This can mean shifting back to conceptual thinking, and perhaps even throwing away much of what you have written. But this is not in vain. With every failed implementation, every narrow view of a problem, every conceptualization gone awry, you will be learning. Your next effort is sure to be more successful than the previous one.

By using the right framework to organize thought and make the best use of these different brain modes, you are able to build on every past failure and grow as a programmer. TDD, therefore, lays down a path for improving yourself as someone capable of understanding and solving problems.

So that is the perspective TDD has given me, though I have barely begun to understand it. I hope you’ll have as much fun with it as I’ve been having lately. Cheers!
