Writing Tests
- Automated testing significantly improves code quality.
Understanding the Basics of Testing
- Go's testing support has two parts: libraries and tooling.
- The `testing` package in the standard library provides the types and functions for writing tests.
- The `go test` tool bundled with Go runs your tests and generates reports.
- Tests in Go are placed in the same directory as the package they test, so they can access unexported variables and functions.
// sample_code/adder/adder.go
func addNumbers(x, y int) int {
	return x + x // deliberate bug (should be x + y) so the test below fails and demonstrates failure reporting
}
// adder_test.go
func Test_addNumbers(t *testing.T) {
res := addNumbers(2, 3)
if res != 5 {
t.Error("incorrect result: expected 5, got", res)
}
// notice nothing is returned in tests
}
- To write tests for `foo.go`, create `foo_test.go` in the same package.
- Test functions start with the word `Test` and take a single parameter of type `*testing.T`.
- For testing an unexported function, the convention is to name the test `Test_<func_name>`.
- `go test` runs the test functions in the current directory.
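- `go test ./...` runs the tests in the current directory and all of its subdirectories.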
Reporting Test Failures
- There are multiple methods on `*testing.T` for reporting test failures:
* `Error`: builds a failure message out of a description string; the test is marked as failed but keeps running
* `Errorf`: adds support for `Printf`-style formatting
* `Fatal` & `Fatalf`: like `Error` and `Errorf`, but halt the current test function as soon as they are encountered (the remaining test functions still run)
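For example, the earlier check written with `Errorf`:
t.Errorf("incorrect result: expected %d, got %d", 5, res)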
Setting Up and Tearing Down
var testTime time.Time
// TestMain is used to set up shared state before any tests run
func TestMain(m *testing.M) {
fmt.Println("Set up stuff for tests here")
testTime = time.Now()
exitVal := m.Run()
fmt.Println("Clean up stuff after tests here")
os.Exit(exitVal)
}
func TestFirst(t *testing.T) {
fmt.Println("TestFirst uses stuff set up in TestMain", testTime)
}
func TestSecond(t *testing.T) {
fmt.Println("TestSecond also uses stuff set up in TestMain", testTime)
}
* `TestMain` is useful in two common scenarios:
* setting up data in an external repository, such as a database
* when the code being tested depends on package-level variable initialization
* Note: `TestMain` is invoked once, not before and after each individual test.
* The `Cleanup` method on `*testing.T` cleans up temporary resources created for a single test; it's most useful in helper functions, and for simple test cases you could use `defer` instead:
// createFile is a helper function called from multiple tests
func createFile(t *testing.T) (_ string, err error) {
f, err := os.Create("tempFile")
if err != nil {
return "", err
}
defer func() {
err = errors.Join(err, f.Close())
}()
// write some data to f
t.Cleanup(func() {
os.Remove(f.Name())
})
return f.Name(), nil
}
func TestFileProcessing(t *testing.T) {
fName, err := createFile(t)
if err != nil {
t.Fatal(err)
}
// do testing, don't worry about cleanup
}
* For temporary files, a better approach is the `TempDir` method on `*testing.T`, which creates a test-scoped temporary directory and registers its cleanup automatically, as sketched below.
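A minimal sketch (assuming the usual `os`, `path/filepath`, and `testing` imports; the file name and contents are illustrative). No `Cleanup` registration is needed because the directory and everything in it are removed when the test completes:
func TestFileProcessingTempDir(t *testing.T) {
	// t.TempDir creates a fresh directory that the framework
	// deletes automatically after the test finishes
	fName := filepath.Join(t.TempDir(), "tempFile")
	if err := os.WriteFile(fName, []byte("some data"), 0o600); err != nil {
		t.Fatal(err)
	}
	// use fName in the test; no cleanup code required
}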
Testing with Environment Variables
- Environment variables are commonly used to configure applications.
- `t.Setenv()` registers a value for an environment variable for the duration of the test. NOTE: it uses `Cleanup` to restore the previous value when the test exits.
// assume ProcessEnvVars is a function that processes environment variables
// and returns a struct with an OutputFormat field
func TestEnvVarProcess(t *testing.T) {
t.Setenv("OUTPUT_FORMAT", "JSON")
cfg := ProcessEnvVars()
if cfg.OutputFormat != "JSON" {
t.Error("OutputFormat not set correctly")
}
// value of OUTPUT_FORMAT is reset when the test function exits
}
Storing Sample Test Data
- Create a subdirectory named `testdata` to hold sample files; Go reserves this directory name for this purpose.
- When reading files from this directory, use a relative file reference: `go test` uses the package directory as the current working directory.
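A minimal sketch, assuming a hypothetical testdata/sample.txt:
func TestWithSampleData(t *testing.T) {
	// the relative path works because go test runs with the
	// package directory as the current working directory
	data, err := os.ReadFile("testdata/sample.txt")
	if err != nil {
		t.Fatal(err)
	}
	_ = data // process the sample data here
}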
Caching Test Results
- NOTE: Go caches test results by default; tests are rerun only if something changes in the package or in its `testdata` directory.
- To force tests to always run, pass the `-count=1` flag to `go test` (for example, `go test -count=1 ./...`).
Testing Your Public API
- By default, tests live in the same package as the code under test, which allows testing both exported and unexported identifiers.
- To write tests against only the public API of your package, Go supports the special package name `packagename_test` for test files.
// file: sample_code/pubadder/adder.go
package pubadder

func AddNumbers(x, y int) int {
	return x + y
}
// file: adder_public_test.go
package pubadder_test
import (
"github.com/learning-go-book-2e/ch15/sample_code/pubadder"
"testing"
)
func TestAddNumbers(t *testing.T) {
result := pubadder.AddNumbers(2, 3)
if result != 5 {
t.Error("incorrect result: expected 5, got", result)
}
}
Using go-cmp to Compare Test Results
- `reflect.DeepEqual` has traditionally been used to compare structs, maps, and slices.
- Google released a third-party module called `go-cmp` that performs comparisons and returns a detailed description of what doesn't match.
- Import `github.com/google/go-cmp/cmp` in your codebase; you can then use the `cmp.Diff` function to compare results:
expected := Person{Name: "Dennis", Age: 37}
result := CreatePerson("Dennis", 37)
if diff := cmp.Diff(expected, result); diff != "" {
t.Error(diff)
}
- To use a custom comparator, define one with `cmp.Comparer` and pass it to `Diff`:
comparer := cmp.Comparer(func(x, y Person) bool {
return x.Name == y.Name && x.Age == y.Age
})
// then pass the comparer to Diff
if diff := cmp.Diff(expected, result, comparer); diff != "" {
t.Error(diff)
}
Running Table Tests
- A lot of testing logic is repetitive: setting up supporting functions and similar sets of data.
- Table tests, run as subtests via `t.Run`, are the idiomatic Go pattern for removing this repetition.
func TestDoMath(t *testing.T) {
result, err := DoMath(2, 2, "+")
if result != 4 {
t.Error("Should have been 4, got", result)
}
if err != nil {
t.Error("Should have been nil error, got", err)
}
result2, err2 := DoMath(2, 2, "-")
if result2 != 0 {
t.Error("Should have been 0, got", result2)
}
if err2 != nil {
t.Error("Should have been nil error, got", err2)
}
// and so on...
}
- The above can be simplified using a test table:
data := []struct {
name string
num1 int
num2 int
op string
expected int
errMsg string
}{
{"addition", 2, 2, "+", 4, ""},
{"subtraction", 2, 2, "-", 0, ""},
{"multiplication", 2, 2, "*", 4, ""},
{"division", 2, 2, "/", 1, ""},
{"bad_division", 2, 0, "/", 0, `division by zero`},
}
for _, d := range data {
t.Run(d.name, func(t *testing.T) {
result, err := DoMath(d.num1, d.num2, d.op)
if result != d.expected {
t.Errorf("Expected %d, got %d", d.expected, result)
}
var errMsg string
if err != nil {
errMsg = err.Error()
}
if errMsg != d.errMsg {
t.Errorf("Expected error message `%s`, got `%s`",
d.errMsg, errMsg)
}
})
}
Running Tests Concurrently
- By default, unit tests run sequentially.
- Test cases that are independent of one another can be run in parallel by calling `t.Parallel()`:
func TestMyCode(t *testing.T) {
t.Parallel()
// rest of test goes here
}
- NOTE: In Go 1.21 and earlier, a reference to the loop variable `d` is shared by all parallel subtests, so they all see the same value (Go 1.22 changed `for` loops to create a new variable per iteration). On those versions, `go vet` detects this scenario, reporting `loop variable d captured by func literal` on each line where `d` is captured.
func TestParallelTable(t *testing.T) {
data := []struct {
name string
input int
output int
}{
{"a", 10, 20},
{"b", 30, 40},
{"c", 50, 60},
}
for _, d := range data {
		// fix for Go 1.21 and earlier: uncomment to shadow the loop variable
		// d := d
t.Run(d.name, func(t *testing.T) {
t.Parallel()
fmt.Println(d.input, d.output)
out := toTest(d.input)
if out != d.output {
t.Error("didn't match", out, d.output)
}
})
}
}
Checking Your Code Coverage
- Although code coverage surfaces obvious gaps, 100% coverage doesn't guarantee that there are no bugs in your code.
- Adding the `-cover` flag to `go test` calculates coverage information and includes a summary in the test output.
- `-coverprofile`: saves the coverage information to a file:
go test -v -cover -coverprofile=c.out
- For an HTML report:
go tool cover -html=c.out
Fuzzing
- One of the most important lessons every developer learns eventually is that all data is suspect. No matter how well specified a data format is, you will eventually have to process input that doesn't match what you expect.
- Even with 100% coverage and well-written test cases, it's impossible to think of everything. Supplementing test cases with generated data helps you catch unexpected errors.
- Fuzzing: a technique for generating random data and submitting it to code to see whether it properly handles unexpected input.
- Developers provide a seed corpus, which serves as the initial input that the fuzzer mutates to generate bad input.
- Example code (note that `ParseData` naively trusts the count on the first line, a weakness fuzzing will expose):
func ParseData(r io.Reader) ([]string, error) {
s := bufio.NewScanner(r)
if !s.Scan() {
return nil, errors.New("empty")
}
countStr := s.Text()
count, err := strconv.Atoi(countStr)
if err != nil {
return nil, err
}
out := make([]string, 0, count)
for i := 0; i < count; i++ {
hasLine := s.Scan()
if !hasLine {
return nil, errors.New("too few lines")
}
line := s.Text()
out = append(out, line)
}
return out, nil
}
- Unit tests for the above function achieve 100% coverage:
func TestParseData(t *testing.T) {
data := []struct {
name string
in []byte
out []string
errMsg string
}{
{
name: "simple",
in: []byte("3\nhello\ngoodbye\ngreetings\n"),
out: []string{"hello", "goodbye", "greetings"},
errMsg: "",
},
{
name: "empty_error",
in: []byte(""),
out: nil,
errMsg: "empty",
},
{
name: "zero",
in: []byte("0\n"),
out: []string{},
errMsg: "",
},
{
name: "number_error",
in: []byte("asdf\nhello\ngoodbye\ngreetings\n"),
out: nil,
errMsg: `strconv.Atoi: parsing "asdf": invalid syntax`,
},
{
name: "line_count_error",
in: []byte("4\nhello\ngoodbye\ngreetings\n"),
out: nil,
errMsg: "too few lines",
},
}
for _, d := range data {
t.Run(d.name, func(t *testing.T) {
r := bytes.NewReader(d.in)
out, err := ParseData(r)
var errMsg string
if err != nil {
errMsg = err.Error()
}
if diff := cmp.Diff(d.out, out); diff != "" {
t.Error(diff)
}
if diff := cmp.Diff(d.errMsg, errMsg); diff != "" {
t.Error(diff)
}
})
}
}
- Writing a fuzz test
func FuzzParseData(f *testing.F) {
testcases := [][]byte{
[]byte("3\nhello\ngoodbye\ngreetings\n"),
[]byte("0\n"),
}
for _, tc := range testcases {
f.Add(tc) // seed corpus
}
f.Fuzz(func(t *testing.T, in []byte) {
r := bytes.NewReader(in)
out, err := ParseData(r)
if err != nil {
t.Skip("handled error")
}
roundTrip := ToData(out)
rtr := bytes.NewReader(roundTrip)
out2, err := ParseData(rtr)
if diff := cmp.Diff(out, out2); diff != "" {
t.Error(diff)
}
})
}
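`ToData` (used above for the round trip) isn't shown here; a minimal sketch consistent with `ParseData`'s count-then-lines format might look like this:
// hypothetical sketch: serialize a slice back into the wire format
func ToData(data []string) []byte {
	var b bytes.Buffer
	fmt.Fprintf(&b, "%d\n", len(data))
	for _, v := range data {
		fmt.Fprintln(&b, v)
	}
	return b.Bytes()
}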
go test -fuzz=FuzzParseData
- To find out where your fuzz test failed, look in testdata/fuzz/TESTNAME: the fuzzer writes each failing test case there, adding a new entry to the seed corpus.
- The new seed corpus entry in that file now becomes a new unit test, one that was automatically generated by the fuzzer. It runs any time `go test` runs the `FuzzParseData` function, and acts as a regression test once you fix your bug.
Using Benchmarks
- Go has built-in support for benchmarking
func FileLen(f string, bufsize int) (int, error) {
file, err := os.Open(f)
if err != nil {
return 0, err
}
defer file.Close()
count := 0
for {
buf := make([]byte, bufsize)
num, err := file.Read(buf)
count += num
if err != nil {
break
}
}
return count, nil
}
- The above function counts the number of characters in a file.
- `*testing.B` includes all benchmarking functionality; this type embeds all the functionality of a `*testing.T` as well as additional support for benchmarking.
- Every Go benchmark has a loop that iterates from 0 to `b.N`:
var blackhole int // prevents the compiler from optimizing away the call to FileLen, which would ruin the benchmark
func BenchmarkFileLen1(b *testing.B) {
for i := 0; i < b.N; i++ {
result, err := FileLen("testdata/data.txt", 1)
if err != nil {
b.Fatal(err)
}
blackhole = result
}
}
- Running benchmarks:
go test -bench=.
- Add the `-benchmem` flag to include memory allocation information in the benchmark output:
func BenchmarkFileLen(b *testing.B) {
for _, v := range []int{1, 10, 100, 1000, 10000, 100000} {
b.Run(fmt.Sprintf("FileLen-%d", v), func(b *testing.B) {
for i := 0; i < b.N; i++ {
result, err := FileLen("testdata/data.txt", v)
if err != nil {
b.Fatal(err)
}
blackhole = result
}
})
}
}
BenchmarkFileLen/FileLen-1-12 25 47828842 ns/op 65342 B/op 65208 allocs/op
BenchmarkFileLen/FileLen-10-12 230 5136839 ns/op 104488 B/op 6525 allocs/op
BenchmarkFileLen/FileLen-100-12 2246 509619 ns/op 73384 B/op 657 allocs/op
BenchmarkFileLen/FileLen-1000-12 16491 71281 ns/op 68744 B/op 70 allocs/op
BenchmarkFileLen/FileLen-10000-12 42468 26600 ns/op 82056 B/op 11 allocs/op
BenchmarkFileLen/FileLen-100000-12 36700 30473 ns/op 213128 B/op 5 allocs/op
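- In this output, the -12 suffix on each benchmark name is the value of GOMAXPROCS; the columns are the number of iterations, nanoseconds per operation, and (from `-benchmem`) bytes allocated per operation and allocations per operation.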
Using Stubs in Go
- Go allows you to abstract function calls in two ways: defining a function type and defining an interface.
- These abstractions help you write not only modular production code but also unit tests.
// Processor depends on the MathSolver interface
type Processor struct {
	Solver MathSolver
}
type MathSolver interface {
	Resolve(ctx context.Context, expression string) (float64, error)
}
// ProcessExpression reads an expression from an io.Reader and returns the calculated value
func (p Processor) ProcessExpression(ctx context.Context, r io.Reader)
(float64, error) {
	// readToNewLine is a helper (not shown) that reads a single line from r
	curExpression, err := readToNewLine(r)
if err != nil {
return 0, err
}
if len(curExpression) == 0 {
return 0, errors.New("no expression to read")
}
answer, err := p.Solver.Resolve(ctx, curExpression)
return answer, err
}
- A stub implementation and test for the above code:
type MathSolverStub struct{}
func (ms MathSolverStub) Resolve(ctx context.Context, expr string)
(float64, error) {
switch expr {
case "2 + 2 * 10":
return 22, nil
case "( 2 + 2 ) * 10":
return 40, nil
case "( 2 + 2 * 10":
return 0, errors.New("invalid expression: ( 2 + 2 * 10")
}
return 0, nil
}
func TestProcessorProcessExpression(t *testing.T) {
p := Processor{MathSolverStub{}}
in := strings.NewReader(`2 + 2 * 10
( 2 + 2 ) * 10
( 2 + 2 * 10`)
data := []float64{22, 40, 0}
hasErr := []bool{false, false, true}
for i, d := range data {
result, err := p.ProcessExpression(context.Background(), in)
if err != nil && !hasErr[i] {
t.Error(err)
}
if result != d {
t.Errorf("Expected result %f, got %f", d, result)
}
}
}
- If an interface is too large, you don't have to stub every method; implement only the methods that are required in the context of the current test.
- In scenarios where many tests need different subsets of methods, a better solution is to create a stub struct that proxies method calls to function fields, as sketched below.
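A minimal sketch of that pattern, using a hypothetical `Entities` interface with hypothetical `User` and `Account` types; each test assigns only the function fields it needs:
// User and Account are hypothetical domain types for this sketch
type User struct{ ID string }
type Account struct{ ID string }

type Entities interface {
	GetUser(id string) (User, error)
	GetAccount(id string) (Account, error)
}

// EntitiesStub proxies every method call to a function field
type EntitiesStub struct {
	getUser    func(id string) (User, error)
	getAccount func(id string) (Account, error)
}

func (es EntitiesStub) GetUser(id string) (User, error) {
	return es.getUser(id)
}

func (es EntitiesStub) GetAccount(id string) (Account, error) {
	return es.getAccount(id)
}
A test that only needs GetUser can construct EntitiesStub{getUser: func(id string) (User, error) { return User{ID: id}, nil }} and leave getAccount nil.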
Using httptest
- It can be difficult to write tests for functions that call an HTTP service.
- Traditionally, this turned into an integration test that required bringing up a mock instance of the service the function calls.
- The `net/http/httptest` package makes it easier to stub HTTP services:
type RemoteSolver struct {
MathServerURL string
Client *http.Client
}
func (rs RemoteSolver) Resolve(ctx context.Context, expression string)
(float64, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodGet,
rs.MathServerURL+"?expression="+url.QueryEscape(expression),
nil)
if err != nil {
return 0, err
}
resp, err := rs.Client.Do(req)
if err != nil {
return 0, err
}
defer resp.Body.Close()
contents, err := io.ReadAll(resp.Body)
if err != nil {
return 0, err
}
if resp.StatusCode != http.StatusOK {
return 0, errors.New(string(contents))
}
result, err := strconv.ParseFloat(string(contents), 64)
if err != nil {
return 0, err
}
return result, nil
}
- Writing a test case for this:
// info stores the expected input and response for the stub server
type info struct {
expression string
code int
body string
}
var io info // note: this shadows the io package name; fine here since package io isn't used directly
// setup fake remote server
server := httptest.NewServer(
http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
expression := req.URL.Query().Get("expression")
if expression != io.expression {
rw.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(rw, "expected expression '%s', got '%s'",
io.expression, expression)
return
}
rw.WriteHeader(io.code)
rw.Write([]byte(io.body))
}))
defer server.Close()
rs := RemoteSolver{
MathServerURL: server.URL,
Client: server.Client(),
}
// writing actual test cases
data := []struct {
	name   string
	io     info
	result float64
	errMsg string
}{
	{"case1", info{"2 + 2 * 10", http.StatusOK, "22"}, 22, ""},
	// remaining cases
}
for _, d := range data {
t.Run(d.name, func(t *testing.T) {
io = d.io
result, err := rs.Resolve(context.Background(), d.io.expression)
if result != d.result {
t.Errorf("io `%f`, got `%f`", d.result, result)
}
var errMsg string
if err != nil {
errMsg = err.Error()
}
if errMsg != d.errMsg {
t.Errorf("io error `%s`, got `%s`", d.errMsg, errMsg)
}
})
}
- Integration Tests and Build Tags
* Integration tests validate the interaction between your application and external services.
* They are run less frequently than unit tests because they depend on external systems and take longer to execute.
* Build tags (e.g., //go:build integration) control when certain test files are compiled and run; see the sketch after this list.
* Run integration tests using:
go test -tags integration -v ./...
* Some developers prefer environment variables over build tags for better discoverability (pairing them with t.Skip and an explanation).
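A sketch of such a file header; the //go:build integration line must come before the package clause, separated from it by a blank line:
//go:build integration

package pubadder_test

// this file is compiled and run only when the build tag is supplied:
//   go test -tags integration -v ./...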
- Using the -short Flag
* The `-short` flag (`go test -short`) is used to skip slow tests.
* Example usage:
if testing.Short() {
t.Skip("skipping test in short mode.")
}
* Limitation: it only differentiates between "short" and "all" tests, whereas build tags allow grouping tests by dependencies.
Finding Concurrency Problems with the Data Race Detector
- Data races occur when multiple goroutines access the same variable without proper synchronization.
- Go's race detector can identify such issues; enable it by passing the `-race` flag to `go test`.
- Example of a race condition (a fuller runnable sketch follows the snippet):
var counter int
go func() { counter++ }()
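A fuller, runnable sketch of the same problem (a hypothetical getCounter wrapped in a test, assuming sync and testing imports); go test -race reports the unsynchronized writes:
func getCounter() int {
	var counter int
	var wg sync.WaitGroup
	wg.Add(5)
	for i := 0; i < 5; i++ {
		go func() {
			for j := 0; j < 1000; j++ {
				counter++ // unsynchronized write shared by five goroutines
			}
			wg.Done()
		}()
	}
	wg.Wait()
	return counter
}

func TestGetCounter(t *testing.T) {
	counter := getCounter()
	// this assertion fails intermittently because of the race;
	// -race pinpoints the root cause instead
	if counter != 5000 {
		t.Error("unexpected counter:", counter)
	}
}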
- Avoid "fixing" races with arbitrary sleep calls; use proper mutexes or channels instead.
- Race detection slows execution roughly 10x, so it's not enabled by default for production builds.