Better TDD (with respect to Go)

TL;DR — Test the Behaviour not the Implementation

The key takeaways when working with TDD are:

  1. Write tests against the observable behaviours, which in Go means the package’s public API
  2. Do not write tests for the implementation; those would start failing the moment we refactored the implementation
  3. Write the implementation as quickly as possible. It can even be a C&P from Stack Overflow! Just get the test to pass ASAP!
  4. REFACTOR! Deduplicate code; use design patterns; remove code smells; make it maintainable; don’t write any new tests.
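To make the first two points concrete, here is a minimal sketch (the type and method names are my own assumptions, not from any real package) contrasting a behavioural check against an implementation check:

```go
package main

import "fmt"

// Shortener is a hypothetical URI shortener used throughout this sketch.
type Shortener struct {
	urls map[string]string // implementation detail: tests should not inspect this
}

func NewShortener() *Shortener { return &Shortener{urls: map[string]string{}} }

// Shorten stores the URI and returns a key for it.
func (s *Shortener) Shorten(uri string) string {
	key := fmt.Sprintf("k%d", len(s.urls)+1)
	s.urls[key] = uri
	return key
}

// Resolve returns the original URI for a key, and whether it was found.
func (s *Shortener) Resolve(key string) (string, bool) {
	uri, ok := s.urls[key]
	return uri, ok
}

func main() {
	s := NewShortener()
	key := s.Shorten("https://example.com")

	// Behavioural test: shorten-then-resolve round-trips via the public
	// API, so it survives any internal refactor.
	uri, ok := s.Resolve(key)
	fmt.Println("behaviour ok:", ok && uri == "https://example.com")

	// Implementation test (what NOT to do): reaches into the map, so it
	// breaks the moment we swap the map for a database, even though the
	// observable behaviour is unchanged.
	_, inMap := s.urls["k1"]
	fmt.Println("implementation check:", inMap)
}
```

In a real project the behavioural test would live in an external test package, which makes it impossible to peek at unexported fields in the first place.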

What I used to do

I was first introduced to testing and TDD through Clean Code by Robert C. Martin (aka Uncle Bob). It was great! I was more confident in my code, needed less manual testing, and was confident in the tests themselves too. This is software engineering as it should be, and I was so satisfied with my new-found skill that I proudly plastered it all over my CV.

What I’ve Learned

I was recently involved in a colleague’s PR, where several discussions ensued over when and how to write unit tests. I soon realised that what I thought was the correct way to test wasn’t in line with my colleagues’ way.

Dave’s talk on testing in Go clarified all the questions I had around TDD, testing and mocks. A step-by-step guide on how the code can evolve with the correct use of unit testing can be found here —

Write a test that fails

Write the minimum amount of code needed for the test to fail. It’s up to you whether that means the code doesn’t compile, or it compiles but the test actually fails.
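As a sketch of this step (using hypothetical names, and a plain check function standing in for a `testing.T` test), the stub below is just enough to compile, and the “test” fails against it, which is exactly what we want at this stage:

```go
package main

import "fmt"

// Shorten is a stub: just enough for the test to compile, not to pass.
func Shorten(uri string) string { return "" }

// checkShorten plays the role of the failing unit test; it returns an
// error describing the failure.
func checkShorten() error {
	if got := Shorten("https://example.com/some/long/path"); got == "" {
		return fmt.Errorf("Shorten returned an empty key")
	}
	return nil
}

func main() {
	if err := checkShorten(); err != nil {
		fmt.Println("FAIL:", err) // red first: the test must fail before we implement
	}
}
```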

Write the minimum amount of code to get it to pass

You need to be the duct-tape programmer here; write it like they would. Speed is key, so if you want, copy and paste it from Stack Overflow, another service, or a scrapbook of useful code. We want the test to pass ASAP so that we can move on to the next, very important step.
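Continuing the sketch, this is duct-tape quality on purpose (package-level globals, a naive counter; again, all names are my own assumptions). It is the quickest thing that turns the test green:

```go
package main

import "fmt"

// Quick and dirty: package-level state, no locking, no validation.
// Good enough to pass the test; the refactor step will clean it up.
var (
	urls = map[string]string{}
	n    int
)

// Shorten stores the URI under a sequential key.
func Shorten(uri string) string {
	n++
	key := fmt.Sprintf("k%d", n)
	urls[key] = uri
	return key
}

// Resolve looks the key back up.
func Resolve(key string) (string, bool) {
	uri, ok := urls[key]
	return uri, ok
}

func main() {
	key := Shorten("https://example.com")
	uri, ok := Resolve(key)
	fmt.Println(key != "" && ok && uri == "https://example.com")
}
```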


Refactor

Yes, this is very important. This is where I was going wrong all this time: I forgot about the refactor step. If I hadn’t, then maybe I wouldn’t be writing this blog post.

Refactoring this way means:

  1. not testing the implementation detail
  2. not leaving tech debt from the get-go
  3. not adding additional observable behaviour — run the coverage tool to see whether any refactored code isn’t being covered. If there are new behaviours, try to remove them or add new tests to cover them
  4. allowing other developers to easily refactor the code without affecting the observable behaviours.
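Here is how the duct-tape version from earlier might look after a refactor (a sketch with assumed names): the globals are gathered into a type and a mutex makes it safe for concurrent use, while the behavioural test runs unchanged against it:

```go
package main

import (
	"fmt"
	"sync"
)

// Shortener replaces the package-level globals of the quick-and-dirty
// version. The mutex makes it safe for concurrent callers, without adding
// any new observable behaviour.
type Shortener struct {
	mu   sync.Mutex
	urls map[string]string
	next int
}

func NewShortener() *Shortener { return &Shortener{urls: map[string]string{}} }

// Shorten stores the URI under a sequential key, safely under the lock.
func (s *Shortener) Shorten(uri string) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.next++
	key := fmt.Sprintf("k%d", s.next)
	s.urls[key] = uri
	return key
}

// Resolve looks the key back up, also under the lock.
func (s *Shortener) Resolve(key string) (string, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	uri, ok := s.urls[key]
	return uri, ok
}

func main() {
	s := NewShortener()
	key := s.Shorten("https://example.com")
	uri, ok := s.Resolve(key)
	fmt.Println(ok && uri == "https://example.com")
}
```

The test only ever called the public API, so nothing about it had to change — which is the whole point of point 4 above.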


Mocks

When you’re testing the observable behaviours of your package, don’t mock internal dependencies. Mocking heavy fixtures (such as calls to databases, gRPC and HTTP services) is fine. We can do this by creating a thin layer between our code and the implementation details of the fixture.
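One way that thin layer can look (a sketch; the interface and names are my own assumptions): the shortener depends only on a small `store` interface, so production wires in a database-backed implementation while tests pass an in-memory fake instead of mocking the driver itself.

```go
package main

import "fmt"

// store is the thin layer between our code and the heavy fixture (a real
// database in production).
type store interface {
	Save(key, uri string) error
	Load(key string) (string, error)
}

// memStore is a lightweight in-memory fake used in tests.
type memStore struct{ m map[string]string }

func newMemStore() *memStore { return &memStore{m: map[string]string{}} }

func (s *memStore) Save(key, uri string) error { s.m[key] = uri; return nil }

func (s *memStore) Load(key string) (string, error) {
	uri, ok := s.m[key]
	if !ok {
		return "", fmt.Errorf("key %q not found", key)
	}
	return uri, nil
}

// Shortener depends only on the store interface, never on a concrete
// database package.
type Shortener struct {
	db   store
	next int
}

func (s *Shortener) Shorten(uri string) (string, error) {
	s.next++
	key := fmt.Sprintf("k%d", s.next)
	return key, s.db.Save(key, uri)
}

func main() {
	s := &Shortener{db: newMemStore()}
	key, err := s.Shorten("https://example.com")
	fmt.Println(key, err)
}
```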

Final thoughts

I now understand why we wrote the test as we did for the PR in question. What was the PR? It was basically an ETL: collating data from other services and sending it off elsewhere to be saved.
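A pipeline like that can be tested in exactly the behavioural style above. This sketch (every name is an assumption, not the real PR’s code) puts the service calls behind small interfaces, drives the whole pipeline with fakes, and asserts only on what reaches the sink:

```go
package main

import "fmt"

// source and sink are the boundaries where the real gRPC/HTTP services
// and storage would sit in production.
type source interface{ Fetch() ([]string, error) }
type sink interface{ Send(records []string) error }

// fakeSource and fakeSink are the test doubles for those boundaries.
type fakeSource struct{ records []string }

func (f fakeSource) Fetch() ([]string, error) { return f.records, nil }

type fakeSink struct{ got []string }

func (f *fakeSink) Send(records []string) error {
	f.got = append(f.got, records...)
	return nil
}

// Run collates records from all sources and sends them to the sink.
func Run(srcs []source, out sink) error {
	var all []string
	for _, s := range srcs {
		recs, err := s.Fetch()
		if err != nil {
			return err
		}
		all = append(all, recs...)
	}
	return out.Send(all)
}

func main() {
	out := &fakeSink{}
	err := Run([]source{fakeSource{[]string{"a"}}, fakeSource{[]string{"b"}}}, out)
	fmt.Println(err == nil && len(out.got) == 2)
}
```

The test never cares how `Run` collates internally, only that everything fetched ends up at the sink — so the collation logic is free to change shape.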

Code example

I’ve created a very simple service that shortens a URI, following the steps outlined above —

Useful Commands

CMD                                        DEFINITION
go test ./... -v                           Run all the tests
go test ./... -coverprofile cp.out         Coverage
go tool cover -html=cp.out                 Visualise coverage as HTML
go test -run=XXX -bench=. -benchtime=10s   Run benchmark tests
go test -run=TestURLShortner/Race          Only run the Race test

Further Reading

Test-Driven Development: By Example by Kent Beck
