When Writing Unit Tests, Don't Use Mocks

May 01, 2018


Note: This is our latest technical engineering post, written by principal engineer Seth Ammons. Special thanks to Sam Nguyen, Kane Kim, Elmer Thomas, and Kevin Gillette for peer reviewing this post. For more posts like this, check out our technical blog roll.

I really enjoy writing tests for my code, especially unit tests. The sense of confidence it gives me is great. Picking up something I've not worked on in a long time and being able to run the unit and integration tests gives me the knowledge that I can ruthlessly refactor if needed and, as long as my tests have good and meaningful coverage and continue to pass, I will still have functional software afterward.

Unit tests guide code design and allow us to quickly verify that failure modes and logic-flows work as intended. With that, I want to write about something perhaps a bit more controversial: when writing unit tests, don't use mocks.

Let's get some definitions on the table

What is the difference between unit and integration tests? What do I mean by mocks, and what should you use instead? This post is focused on working in Go, and so my slant on these words is in the context of Go.

When I say unit tests, I'm referring to those tests that ensure proper error handling and guide design of the system by testing small units of code. By unit, we could be referring to a whole package, an interface, or an individual method.

Integration testing is where you actually interact with dependent systems and/or libraries. When I say "mocks," I am specifically referring to the term "Mock Object," which is where we "replace domain code with dummy implementations that both emulate real functionality and enforce assertions about the behavior of our code [1]" (emphasis mine).

Stated a little shorter: mocks assert behavior, like:
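For example, a hand-rolled mock records how it was called so the test can assert on that behavior. This is a minimal sketch (the `mockStore` type and its method are hypothetical, not from the code discussed below):

```go
package main

import "fmt"

// mockStore is a hand-rolled mock: it records how it was called so the
// test can assert on behavior, not just on results.
type mockStore struct {
	calls []string // every key passed to Get
}

func (m *mockStore) Get(key string) (string, error) {
	m.calls = append(m.calls, key)
	return "stubbed-value", nil
}

func main() {
	m := &mockStore{}
	m.Get("user:42")

	// Mock-style assertions: the test verifies *how* the code under
	// test interacted with the dependency.
	if len(m.calls) != 1 || m.calls[0] != "user:42" {
		panic("expected exactly one Get call with key user:42")
	}
	fmt.Println("mock behavior assertions passed")
}
```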


I advocate for "Fakes rather than Mocks."

A fake is a kind of test double that may contain business behavior [2]. Fakes are merely structs that fit an interface and are a form of dependency injection where we control the behavior. The major benefit of fakes is that they decrease coupling in code, where mocks increase coupling, and coupling makes refactoring harder [3].

In this post, I intend to demonstrate that fakes provide flexibility and allow for easy testing and refactoring. They reduce dependencies compared to mocks, and are easy to maintain.

Let's dive in with an example that is a bit more advanced than "testing a sum function" as you might see in a typical post of this nature. However, I need to give you some context so you can more easily understand the code that follows in this post.

At SendGrid, one of our systems has traditionally had files on the local file system, but due to the need for higher availability and better throughput, we are moving these files over to S3.

We have an application that needs to be able to read these files, and we opted for an application that can run in one of two modes, "local" or "remote," depending on configuration. A caveat that is elided in many of the code samples is that in the case of a remote failure, we fall back to reading the file locally.

With that out of the way, this application has a package getter. We need to ensure that package getter can get files either from the remote filesystem or the local filesystem.

Naive Approach: just call library and system level calls

The naive approach is that our implementing package will call getter.New(...), passing it the information needed to set up either remote or local file getting, and will get back a Getter. The caller can then invoke MyGetter.GetFile(...) with the parameters needed to locate the remote or local file.

This gives us our basic structure. When we create the new Getter, we initialize parameters that are needed for any potential remote file fetching (an access key and secret), and we also pass in some values that originate in our application configuration, such as useRemoteFS, which tells the code to try the remote file system.

We need to provide some basic functionality. Check out the naive code here [4]; below is a reduced version. Note: this is not a finished example, and we are going to be refactoring things.

The basic idea here is that if we are configured to read from the remote file system and we get remote file system details (host, bucket, and key), then we should attempt to read from the remote file system. After we have confidence in the system reading remotely, we will shift all file reading out to the remote file system and remove references to reading from the local file system.

This code is not very unit-test friendly; note that to verify how it works, we actually need to hit not only the local file system, but the remote file system too. Now, we could just do an integration test, setting up some Docker magic to stand up an S3 instance and verify the happy path in the code.

Having only integration testing is less than ideal though as unit tests help us design more robust software by easily testing alternate code and failure paths. We should save integration tests for larger "does it really work" kinds of tests. For now, let's focus on the unit tests.

How can we make this code more unit testable? There are two schools of thought. One is to use a mock generator (such as https://github.com/vektra/mockery or https://github.com/golang/mock) that generates boilerplate mock code for use in tests.

You could go this route and generate the filesystem calls and the Minio client calls. Or maybe you want to avoid a dependency, so you generate your mocks by hand. It turns out that mocking the Minio client is not straightforward because you have a concretely typed client that returns a concretely typed object.

I say that there is a better way than mocking. If we restructure our code to be more testable, we don't need additional imports for mocks and related cruft, and there is no need to know additional testing DSLs to confidently test the interfaces. We can set up our code to not be overly coupled, and the testing code will just be normal Go code using Go's interfaces. Let's do it!


Interface Approach: Greater abstraction, easier testing

What is it that we need to test? This is where some new Gophers get things wrong. I've seen folks understand the value of leveraging interfaces, but feel they need interfaces that match the concrete implementation of the package they are using.

They might see we have a Minio client, so they might begin by making interfaces that match ALL the methods and uses of the Minio client (or any other s3 client). They forget the Go Proverb [5][6] of "The bigger the interface, the weaker the abstraction."

We don't need to test against the Minio client. We need to test that we can get files remotely or locally (and verify some failure paths, such as remote failures). Let's refactor that initial approach and pull the Minio client out into a remote getter. While we are doing that, let's do the same to our code for local file reading, and make a local getter. Here are the basic interfaces, and we will have a type implementing each:
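A sketch of what those interfaces and the refactored Getter might look like; the names are illustrative and may differ from the full code in [7]:

```go
package main

import "fmt"

// remoteFetcher gets a file from the remote filesystem (S3).
type remoteFetcher interface {
	Fetch(host, bucket, key string) ([]byte, error)
}

// localFetcher gets a file from the local filesystem.
type localFetcher interface {
	Fetch(path string) ([]byte, error)
}

// Getter now depends only on two small interfaces, so tests can swap
// in any implementation they like.
type Getter struct {
	useRemoteFS bool
	remote      remoteFetcher
	local       localFetcher
}

func (g *Getter) GetFile(host, bucket, key, path string) ([]byte, error) {
	if g.useRemoteFS && host != "" && bucket != "" && key != "" {
		if b, err := g.remote.Fetch(host, bucket, key); err == nil {
			return b, nil
		}
		// remote failure: fall back to the local filesystem
	}
	return g.local.Fetch(path)
}

// localFetcherFunc adapts a plain function to the localFetcher
// interface, in the style of http.HandlerFunc.
type localFetcherFunc func(path string) ([]byte, error)

func (f localFetcherFunc) Fetch(path string) ([]byte, error) { return f(path) }

func main() {
	g := &Getter{
		local: localFetcherFunc(func(p string) ([]byte, error) {
			return []byte("contents of " + p), nil
		}),
	}
	b, _ := g.GetFile("", "", "", "a.txt")
	fmt.Println(string(b)) // prints "contents of a.txt"
}
```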

This new, refactored code is much more unit testable because we take interfaces as parameters on the Getter struct and we can change out the concrete types for fakes. Instead of mocking OS calls or needing a full mocking of the Minio client or large interfaces, we just need two simple fakes: fakeLocalFetcher and fakeRemoteFetcher.

These fakes have some properties on them that let us specify what they return. We will be able to return the file data or any error we like and we can verify that the calling GetFile method handles the data and errors as we intended.

With this in mind, the heart of the tests becomes:

With this basic structure, we can wrap it all up in table driven tests [8]. Each case in the table of tests will either be testing for local or remote file access. We will be able to inject an error at either remote or local file access. We can verify propagated errors, that the file contents are passed up, and that expected log entries are present.

I went ahead and included all potential test cases and permutations in the one table driven test available here [9] (you may note that some method signatures are a bit different—it allows us to do things like inject a logger and assert against log statements).

Nifty, eh? We have full control of how we want GetFile to behave, and we can assert against the results. We've designed our code to be unit-test friendly and can now verify success and error paths implemented in the GetFile method.

The code is loosely coupled and refactoring in the future should be a breeze. We did this by writing plain ol' Go code that any developer familiar with Go should be able to understand and extend when needed.



Mocks: what about nitty-gritty implementation details?

What would the mocks buy us that we don't get in the proposed solution? A great question showcasing a benefit to a traditional mock could be, "how do you know you called the s3 client with the correct parameters? With mocks, I can ensure that I passed the key value to the key parameter, and not the bucket parameter."

This is a valid concern and it should be covered under a test somewhere. The testing approach that I advocate here does not verify that you called the Minio client with the bucket and key parameters in the right order.

A great quote I recently read said, "Mocking introduces assumptions, which introduces risk [10]". You are assuming the client library is implemented right, you are assuming all boundaries are solid, you are assuming you know how the library actually behaves.

Mocking the library only mocks assumptions and makes your tests more brittle and subject to change when you update the code (which is what Martin Fowler concluded in Mocks Aren't Stubs [3]). When the rubber meets the road, we are going to have to verify that we are actually using the Minio client correctly and this means integration tests (these might live in a Docker setup or a testing environment). Because we will have both unit and integration tests, there is no need for a unit test to cover the exact implementation as the integration test will cover that.

In our example, unit tests guide our code design and allow us to quickly test that errors and logic flows work as designed, doing exactly what they need to do. Some feel this is not enough unit test coverage; they worry about the points above. Some will insist on Russian doll style interfaces, where one interface returns another interface that returns another interface, maybe like the following:
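Such a nesting might look something like this; every name here is hypothetical, invented only to illustrate the style being criticized:

```go
package main

import "io"

// A hypothetical "Russian doll" of wrappers around the Minio client:
// each interface exists mostly to hand back the next wrapper.
type s3ClientWrapper interface {
	GetObject(bucket, key string) (objectWrapper, error)
}

type objectWrapper interface {
	io.ReadCloser
	Stat() (objectInfoWrapper, error)
}

type objectInfoWrapper interface {
	Size() int64
}

func main() {}
```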

And then they might pull out each part of the Minio client into each wrapper and then use a mock generator (adding dependencies to builds and tests, increasing assumptions, and making things more brittle). At the end, the mockist will be able to say something like:

myClientMock.ExpectsCall("GetObject").Returns(mockObject).NumberOfCalls(1).WithArgs(key, bucket) – and that is if you can recall the correct incantation for this specific DSL.

This would be a lot of extra abstraction tied directly to the implementation choice of using the Minio client. This will cause brittle tests for when we find out we need to change our assumptions about the client, or need a different client entirely.

This adds to end-to-end code development time now and in the future, adds to code complexity, reduces readability, potentially increases dependencies on mock generators, and gives us the dubious additional value of knowing whether we mixed up the bucket and key parameters, which we would have discovered in integration testing anyway.

As more and more objects get introduced, the coupling gets tighter and tighter. We might have made a logger mock and later we start having a metrics mock. Before you know it, you are adding a log entry or a new metric and you just broke umpteen tests that did not expect an additional metric to come through.

The last time I was bitten by this in Go, the mocking framework would not even tell me which test or file was failing: it panicked and died a horrible death because it came across a new metric (finding where to alter the mock behavior required binary searching the tests by commenting them out). Can mocks add value? Sure. Is it worth the cost? In most cases, I'm not convinced.


Interfaces: simplicity and unit testing for the win

We've shown that we can guide design and ensure proper code and error paths are followed with simple use of interfaces in Go. By writing simple fakes that adhere to the interfaces, we can see that we do not need mocks, mocking frameworks, or mock generators to create code designed for testing. We've also noted that unit testing is not everything, and you must write integration tests to ensure that systems are properly integrated with one another.

I hope to get a post on some neat ways to run integration tests in the future; stay tuned!


1: Endo-Testing: Unit Testing with Mock Objects (2000): see the introduction for the definition of a mocked object.
2: The Little Mocker: see the part on fakes, specifically, "a Fake has business behavior. You can drive a fake to behave in different ways by giving it different data."
3: Mocks Aren't Stubs: see the section "So should I be a classicist or a mockist?" Martin Fowler states, "I don't see any compelling benefits for mockist TDD, and am concerned about the consequences of coupling tests to implementation."
4: Naive Approach: a simplified version of the code. See [7].
5: https://go-proverbs.github.io/: the list of Go Proverbs, with links to talks.
6: https://www.youtube.com/watch?v=PAAkCSZUG1c&t=5m17s: direct link to the talk by Rob Pike regarding interface size and abstraction.
7: Full version of the demo code: you can clone the repo and run `go test`.
8: Table driven tests: a testing strategy for organizing test code to reduce duplication.
9: Tests for the full version of the demo code. You can run them with `go test`.
10: Questions To Ask Yourself When Writing Tests by Michal Charemza: mocking introduces assumptions, and assumptions introduce risk.
