I’ve recently started writing more Go code, and naturally one of the first things I encountered were
channels. The official tutorial introduces the concepts, but in many ways isn’t much more than a series of “Hello World” examples. Searching Amazon for “golang concurrency”, I came across Katherine Cox-Buday’s Concurrency in Go: Tools and Techniques for Developers.
It’s a slim volume, weighing in at 223 pages, and contains six chapters:
- An Introduction to Concurrency
- Modeling Your Code: Communicating Sequential Processes
- Go’s Concurrency Building Blocks
- Concurrency Patterns in Go
- Concurrency at Scale
- Goroutines and the Go Runtime
In this post, I’ll review the first three chapters. In a follow-up post, I’ll review the final three chapters.
The first chapter starts by defining the term
concurrency. As she does throughout the book, Cox-Buday favors the practical over the theoretical, and defines it as a “process that occurs simultaneously with one or more processes. It is also usually implied that all of these processes are making progress at about the same time.” 
From here, Cox-Buday spends the remainder of the chapter covering the key concepts of concurrency. She covers race conditions, atomicity, memory access synchronization, deadlocks, livelocks, and starvation. Each topic is discussed in the following format:
- definition of the concept
- introduction of a code snippet that exhibits a concurrency-related problem
- revision of the code to illustrate how to solve the problem correctly
- discussion of the revised code and the techniques and concepts used
- summary of the concept
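To give a flavor of that definition-problem-fix progression, here is a minimal sketch of my own (not taken from the book): a shared counter that would be a data race if incremented from many goroutines without synchronization, made deterministic with a `sync.Mutex`.

```go
package main

import (
	"fmt"
	"sync"
)

// increment launches n goroutines that each add 1 to a shared counter.
// Guarding the counter with a sync.Mutex makes the result deterministic;
// without the lock, the unsynchronized writes would be a data race.
func increment(n int) int {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(increment(1000)) // always 1000 with the mutex in place
}
```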
This chapter sets up the reader with a solid foundation to dive into the remaining chapters.
Cox-Buday starts this chapter by exploring the difference between concurrency and parallelism. Reading the chapter’s first paragraph, I thought she was splitting hairs. But then she puts forth this statement:
Concurrency is a property of the code; parallelism is a property of the running program.
To explain this statement, she gives the example of writing code “with the intent that two chunks of the program will run in parallel.” She then asks: what will happen if she runs this code on a machine with only a single core? As she points out, the code running on a single-core machine may appear to be running in parallel, but in fact is running sequentially, thanks to CPU context switching that happens faster than we can perceive.
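You can simulate the single-core scenario yourself. The sketch below (my own, not from the book) uses `runtime.GOMAXPROCS(1)` to restrict the program to one logical processor: the code is still concurrent, and both goroutines still make progress, but nothing can execute in parallel.

```go
package main

import (
	"fmt"
	"runtime"
)

// run launches two goroutines while restricted to a single logical
// processor. Only one goroutine executes at any instant, yet both make
// progress: the scheduler interleaves them. The code is concurrent;
// whether it runs in parallel depends on the processors available.
func run() []string {
	runtime.GOMAXPROCS(1) // behave like a single-core machine

	done := make(chan string, 2)
	for _, name := range []string{"first", "second"} {
		go func(n string) {
			done <- n // each goroutine reports completion
		}(name)
	}
	return []string{<-done, <-done} // both finish despite one processor
}

func main() {
	fmt.Println(run())
}
```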
Building on the theme introduced in the first chapter–that concurrency is hard–Cox-Buday moves from the realm of the process to the OS level. She argues that as we move down the stack, the abstractions we use to write concurrent code become more important, but that most concurrent code is written “at the highest levels of abstraction: OS threads.” According to her, prior to Go, programmers wanting to write concurrent code modeled their programs in terms of “threads and synchronize the access to the memory between them.” She goes on to talk about Tony Hoare’s “Communicating Sequential Processes”, a paper published in 1978. She spends several paragraphs explaining the core concepts of CSP, namely inputs and outputs and the communication between them, and how they laid the groundwork for Go’s goroutines and channels.
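CSP’s input and output primitives map naturally onto Go’s channels. As a rough illustration of my own (assumptions: a trivial producer/consumer pair, not an example from the book), here are two “processes” that share no memory and interact only by communicating:

```go
package main

import "fmt"

// collect wires a producer and a consumer together in CSP style: the two
// goroutines share no memory and interact only through the channel.
func collect() []int {
	results := make(chan int)

	// Producer process: outputs the squares of 1..3 on the channel.
	go func() {
		for i := 1; i <= 3; i++ {
			results <- i * i
		}
		close(results)
	}()

	// Consumer process: inputs values until the producer closes the channel.
	var got []int
	for v := range results {
		got = append(got, v)
	}
	return got
}

func main() {
	fmt.Println(collect()) // [1 4 9]
}
```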
This chapter provides an overview of the features that Go provides for concurrent programming. Cox-Buday starts with
goroutines as the foundation of Go’s concurrency story. She talks through several examples, from basic to more complex, including discussion of what’s happening in the Go runtime. She compares goroutines to coroutines, which are “concurrent subroutines that are nonpreemptive,” and explains how goroutines can be considered a “special class of coroutine.” She covers how the runtime maps goroutines onto green threads using an M:N scheduler and how Go’s model follows the fork-join model of concurrency.
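The fork-join model is easy to see in a few lines of Go. A minimal sketch (mine, not the book’s): the parent goroutine “forks” children with the `go` keyword and “joins” on them with a `sync.WaitGroup` before reading their results.

```go
package main

import (
	"fmt"
	"sync"
)

// squares forks one goroutine per input and joins on all of them with a
// WaitGroup before returning the results they produced.
func squares(nums []int) []int {
	out := make([]int, len(nums))
	var wg sync.WaitGroup
	for i, n := range nums {
		wg.Add(1)
		go func(i, n int) { // fork point: work proceeds concurrently
			defer wg.Done()
			out[i] = n * n // each goroutine writes a distinct index, so no race
		}(i, n)
	}
	wg.Wait() // join point: block until every child has finished
	return out
}

func main() {
	fmt.Println(squares([]int{1, 2, 3})) // [1 4 9]
}
```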
Despite the technical nature of the chapter, Cox-Buday manages to keep the reader engaged by writing in a style that is reminiscent of the best teachers. She brings the reader along on an exploration of the technical topics, explaining bits and pieces, asking questions of the reader, pointing out surprises.
The chapter continues, covering Go’s
sync package, which provides primitives to handle concurrency using more traditional methods (e.g. mutexes and pools). It concludes with an in-depth discussion of goroutines themselves and Go’s
select statement. Thus, Cox-Buday sets the stage for the next chapter.
The first three chapters of Concurrency in Go provide an overview of Go’s concurrency model and introduce the language’s primitives. If you were to stop after the third chapter, you would be in a good spot. But you would also be short-changing yourself, because the final three chapters build on the first three, allowing you to take your concurrency game to the next level. Stay tuned for the review of the final three chapters.