Goroutines and Channels: Go's Concurrency Model
Go's concurrency primitives are deceptively simple to write and surprisingly easy to misuse. This guide covers how the runtime scheduler actually manages goroutines, and why channels enforce communication discipline that shared memory cannot.
Why goroutines are not threads
A goroutine is a function executing concurrently with other goroutines in the same address space. The Go runtime multiplexes goroutines onto OS threads — a model called M:N scheduling — which means thousands of goroutines can exist simultaneously with far less overhead than an equivalent number of OS threads.
The runtime scheduler is cooperative-preemptive. Since Go 1.14, goroutines can be preempted asynchronously at safe points, so a tight CPU-bound loop no longer starves the scheduler. Before 1.14, preemption happened only at function-call boundaries, and a loop with no calls needed an explicit runtime.Gosched() to yield.
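To make the overhead claim concrete, here is a minimal sketch that spawns ten thousand goroutines; each starts with a small stack that grows on demand, so this is routine rather than exotic. The squaring workload is illustrative, not from the guide:

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		var wg sync.WaitGroup
		results := make([]int, 10000)
		// Ten thousand goroutines: cheap for the runtime scheduler,
		// prohibitive for an equivalent number of OS threads.
		for i := 0; i < 10000; i++ {
			wg.Add(1)
			go func(i int) {
				defer wg.Done()
				results[i] = i * i // each goroutine writes only its own slot: no data race
			}(i)
		}
		wg.Wait()
		fmt.Println(results[9999]) // prints 99980001
	}
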
Channel semantics from first principles
Channels are typed conduits. A send blocks until a receiver is ready; a receive blocks until a sender has a value. This blocking behaviour is the point — it synchronises goroutines without explicit locks.
func producer(ch chan<- int) {
	for i := 0; i < 5; i++ {
		ch <- i
	}
	close(ch)
}

func consumer(ch <-chan int) {
	for v := range ch {
		fmt.Println(v)
	}
}
Directional channel types (chan<- send-only, <-chan receive-only) are enforced at compile time. Passing a bidirectional channel where a directional type is expected converts it implicitly; the reverse is illegal. This is a lightweight way to document and enforce data flow direction in your API.
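The two functions above wire together in a few lines; the main function below is an illustrative harness, showing the implicit conversion from a bidirectional channel at each call site:

	package main

	import "fmt"

	func producer(ch chan<- int) {
		for i := 0; i < 5; i++ {
			ch <- i
		}
		close(ch)
	}

	func consumer(ch <-chan int) {
		for v := range ch {
			fmt.Println(v)
		}
	}

	func main() {
		ch := make(chan int) // bidirectional; converts implicitly to chan<- and <-chan
		go producer(ch)
		consumer(ch) // ranges until producer closes the channel
	}

Because consumer ranges over the channel, the close in producer is what terminates the loop; forgetting it would deadlock main.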
The select statement
The select statement is to channels what a switch is to values. It waits until one of its cases can proceed and then executes that case. If multiple cases are simultaneously ready, Go selects one at random — a deliberate choice to avoid starvation patterns.
select {
case msg := <-ch1:
	fmt.Println("from ch1:", msg)
case msg := <-ch2:
	fmt.Println("from ch2:", msg)
case <-time.After(1 * time.Second):
	fmt.Println("timeout")
}
The default case makes a select non-blocking. If no other case is ready, the default fires immediately. Use this carefully — a tight loop with a default case is a busy-wait and will peg a CPU.
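A short self-contained sketch of both non-blocking forms; the channel and values are made up for illustration:

	package main

	import "fmt"

	func main() {
		ch := make(chan int, 1)

		// Non-blocking receive: the channel is empty, so default fires.
		select {
		case v := <-ch:
			fmt.Println("received", v)
		default:
			fmt.Println("no value ready")
		}

		// Non-blocking send: the buffer has room, so the send case runs.
		select {
		case ch <- 42:
			fmt.Println("sent 42")
		default:
			fmt.Println("buffer full")
		}
	}
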
The happens-before guarantee
Go's memory model defines happens-before relationships that determine when one goroutine's writes are guaranteed visible to another's reads. A send on an unbuffered channel happens before the corresponding receive completes. The close of a channel happens before a receive that returns because the channel is closed. These guarantees are what make channel-based synchronisation correct by construction.
Shared memory without synchronisation has no such guarantees. Two goroutines reading and writing the same variable without a lock or channel can observe each other's writes in any order — or not at all. The race detector (go test -race) exists because this class of bug is common and silent.
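The close guarantee is what makes the classic done-channel idiom race-free; a minimal sketch (msg and done are illustrative names):

	package main

	import "fmt"

	var msg string

	func main() {
		done := make(chan struct{})
		go func() {
			msg = "hello" // (1) write
			close(done)   // (2) close: happens before the receive below
		}()
		<-done          // (3) receive from the closed channel
		fmt.Println(msg) // guaranteed to observe (1); prints "hello"
	}

Without the channel, the read of msg in main would race with the write; with it, (1) happens before (2) by program order and (2) happens before (3) by the memory model.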
Buffered channels and backpressure
A buffered channel decouples sender and receiver up to a capacity. Sends block only when the buffer is full; receives block only when it is empty. Buffered channels are useful for limiting concurrency — a channel of size N is a semaphore that permits N concurrent operations.
sem := make(chan struct{}, 10) // allow 10 concurrent workers
for _, item := range items {
	sem <- struct{}{} // acquire a slot; blocks while 10 workers are running
	go func(item Item) {
		defer func() { <-sem }() // release the slot
		process(item)
	}(item)
}
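Note that the loop above returns while up to N workers are still running. One way to wait for them is to drain the semaphore: fill it to capacity, so each send succeeds only after a worker has released its slot. A runnable sketch with a trivial stand-in workload (the counter is illustrative):

	package main

	import (
		"fmt"
		"sync/atomic"
	)

	func main() {
		const limit = 10
		sem := make(chan struct{}, limit)
		var done int64

		for i := 0; i < 100; i++ {
			sem <- struct{}{} // acquire
			go func() {
				defer func() { <-sem }() // release
				atomic.AddInt64(&done, 1)
			}()
		}

		// Drain: after cap(sem) successful sends, every worker has
		// released its slot, so all 100 have finished.
		for i := 0; i < cap(sem); i++ {
			sem <- struct{}{}
		}
		fmt.Println(atomic.LoadInt64(&done)) // prints 100
	}

A sync.WaitGroup achieves the same thing and is often clearer; the drain trick is worth knowing because it needs no second synchronisation object.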