Concurrency is not Parallelism by Rob Pike
Concurrency is not Parallelism by Rob Pike (slides)
Concurrency
Concurrency is about dealing with a lot of things at once. It is about structure: it provides a way to structure a solution to a problem that may (or may not) be parallelizable.
Parallelism
Parallelism is about doing a lot of things at once. It is about execution: actually carrying out multiple computations simultaneously.
Concurrent composition of processes
- breaking problems into independent parts (sub-problems) that can run in parallel
- or interleave execution
- on single/multi-core systems
- managed by scheduling.
- tasks APPEAR to run simultaneously
- sharing resources or communicating via defined interfaces/channels.
Concurrent composition of better-managed sub-problems runs faster: we can add more pieces (gophers, in Pike's analogy) to the picture, and therefore do more work, and still run faster.
Parallelism can be derived from a better concurrent expression of the problem. This modularisation can make execution more efficient even on single-core systems, and the independent pieces can be replicated across multiple cores (if available), achieving parallelism through independent execution.
Goroutines
A function running independently in the same address space as other goroutines.
- a bit like threads, but much cheaper: goroutines are multiplexed dynamically onto OS threads (the runtime takes care of scheduling, blocking, etc.)
- when a goroutine blocks (e.g. on I/O), only that goroutine waits; no other goroutine is blocked
f("hello", "world") // f runs; we wait
go f("hello", "world") // f starts running
g() // does not wait for f to return
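Because `go f(...)` returns immediately, the caller may finish before `f` ever runs. A minimal sketch of waiting for a launched goroutine with `sync.WaitGroup` (the `greet` function is a hypothetical stand-in for `f`):

```go
package main

import (
	"fmt"
	"sync"
)

// greet is a hypothetical stand-in for f: it just joins its arguments.
func greet(a, b string) string {
	return a + ", " + b
}

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		fmt.Println(greet("hello", "world")) // runs concurrently with main
	}()
	wg.Wait() // without this, main may exit before the goroutine runs
}
```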
Channels
Typed conduits that allow synchronization and information exchange between goroutines.
timerChan := make(chan time.Time)
go func() {
	time.Sleep(deltaT)
	timerChan <- time.Now() // send time on timerChan
}()
// Do something else; when ready, receive.
// Receive will block until timerChan delivers.
// Value sent is other goroutine's completion time.
completedAt := <-timerChan
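A self-contained version of this timer pattern, with the snippet wrapped in a function and a concrete value assumed for `deltaT`:

```go
package main

import (
	"fmt"
	"time"
)

// waitFor sleeps for d in a goroutine and delivers the completion
// time on a channel; the receive blocks until the timer fires.
func waitFor(d time.Duration) time.Time {
	timerChan := make(chan time.Time)
	go func() {
		time.Sleep(d)
		timerChan <- time.Now() // send time on timerChan
	}()
	return <-timerChan // blocks until the goroutine sends
}

func main() {
	start := time.Now()
	completedAt := waitFor(10 * time.Millisecond) // deltaT assumed to be 10ms
	fmt.Println("timer fired after", completedAt.Sub(start))
}
```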
Select
Like a switch statement, except the decision is based on the ability to communicate: whichever channel is ready to proceed is chosen (related to Dijkstra's guarded commands). #guarded_commands
select {
case v := <-ch1:
	fmt.Println("channel 1 sends", v)
case v := <-ch2:
	fmt.Println("channel 2 sends", v)
default: // optional
	fmt.Println("neither channel was ready")
}
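A runnable sketch of the same select, factored into a helper so the three outcomes can be observed (buffered channels are used here purely so the demo sends don't block):

```go
package main

import "fmt"

// firstReady reports which of two channels delivered a value first,
// falling back to the default case when neither is ready.
func firstReady(ch1, ch2 <-chan int) string {
	select {
	case v := <-ch1:
		return fmt.Sprint("channel 1 sends ", v)
	case v := <-ch2:
		return fmt.Sprint("channel 2 sends ", v)
	default: // optional: taken when no channel is ready
		return "neither channel was ready"
	}
}

func main() {
	ch1, ch2 := make(chan int, 1), make(chan int, 1)
	fmt.Println(firstReady(ch1, ch2)) // nothing sent yet

	ch2 <- 42
	fmt.Println(firstReady(ch1, ch2)) // ch2 is ready now
}
```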
Some Examples
Launching daemons
- copy items from input to output channel
- for-range clause: run until the channel is closed
go func() { // runs in the background
	for val := range input {
		output <- val
	}
}()
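The same daemon pattern made self-contained; the `copyChan` name and the closing of `output` are additions for the sake of a complete program (the source snippet leaves channel shutdown out of scope):

```go
package main

import "fmt"

// copyChan launches a background goroutine that copies values from
// input to output, closing output when input is closed.
func copyChan(input <-chan int, output chan<- int) {
	go func() { // runs in the background
		for val := range input { // loops until input is closed
			output <- val
		}
		close(output)
	}()
}

func main() {
	input, output := make(chan int), make(chan int)
	copyChan(input, output)

	go func() {
		for i := 1; i <= 3; i++ {
			input <- i
		}
		close(input) // closing input lets the range loop finish
	}()

	for v := range output {
		fmt.Println("copied", v)
	}
}
```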
Simple Load Balancer
// A unit of work
type Work struct {
	x, y, z int
}
// A worker task
func worker(in <-chan *Work, out chan<- *Work) {
	for w := range in {
		w.z = w.x * w.y
		Sleep(w.z) // simulate doing the work (slide pseudocode)
		out <- w
	}
}
// The runner
func Run() {
	in, out := make(chan *Work), make(chan *Work)
	for i := 0; i < NumWorkers; i++ {
		go worker(in, out)
	}
	go sendLotsOfWork(in)
	receiveLotsOfResults(out)
}
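A complete, runnable sketch of this balancer. The pool size, the job count, and the bodies of `sendLotsOfWork`/`receiveLotsOfResults` are assumptions filled in for illustration, and `time.Sleep` stands in for the slides' `Sleep(w.z)`:

```go
package main

import (
	"fmt"
	"time"
)

const numWorkers = 4 // assumed pool size; the slides call this NumWorkers
const numJobs = 20   // assumed number of work items

// Work is a unit of work: inputs x, y and result z.
type Work struct {
	x, y, z int
}

// worker multiplies x*y, simulates load, and sends the result on.
func worker(in <-chan *Work, out chan<- *Work) {
	for w := range in {
		w.z = w.x * w.y
		time.Sleep(time.Duration(w.z) * time.Microsecond) // stand-in for Sleep(w.z)
		out <- w
	}
}

// Run wires up the pool: whichever worker is idle picks up the next
// job from `in`, so load balances across workers by structure alone.
func Run() int {
	in, out := make(chan *Work), make(chan *Work)
	for i := 0; i < numWorkers; i++ {
		go worker(in, out)
	}
	go func() { // plays the role of sendLotsOfWork
		for i := 1; i <= numJobs; i++ {
			in <- &Work{x: i, y: i}
		}
		close(in)
	}()
	sum := 0 // plays the role of receiveLotsOfResults
	for i := 0; i < numJobs; i++ {
		sum += (<-out).z
	}
	return sum
}

func main() {
	fmt.Println("sum of results:", Run())
}
```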
This load balancer is implicitly parallel and scalable by design: adding workers scales it up, and no mutexes or other locking is needed, because synchronization comes from the structure itself.
Refer to the slides for more examples.
Conclusion
Concurrency enables parallelism, but parallelism is not the end goal of concurrency. The goal is to structure things so that you can, where possible, employ parallelism to do a better job.