sync.Pool in Go: When It Actually Helps, and When It Quietly Hurts
sync.Pool is the Go feature most likely to be used incorrectly. A working engineer's guide to when pooling buffers actually saves GC pressure, when it just adds complexity, and the benchmark methodology that tells the difference.
sync.Pool is one of those Go features that shows up prominently in “how to write fast Go” blog posts and then gets applied to everything. The result is a codebase sprinkled with pools that don’t help and sometimes hurt. Most Go code I review does not need sync.Pool. The code that does need it often uses it wrong.
This is a working engineer’s take on when pooling actually helps, when it’s wasted effort, and the specific traps it creates.
tl;dr —
sync.Pool is a GC-pressure reducer for workloads that allocate large-ish, short-lived objects at high frequency. It is not a general-purpose optimization. The cases where it clearly helps: per-request buffers in HTTP handlers, encoder/decoder instances, JSON buffers, protocol frame buffers. The cases where it hurts or is wasted: small objects, infrequent allocations, long-lived state, and any code that forgets to reset pooled items. Benchmark before and after — always.
What sync.Pool Actually Does
sync.Pool is a free-list for objects that the GC can clear. You Get() an object (fresh or recycled). You use it. You Put() it back. The runtime tries to give you a recycled one next time, but reserves the right to drop the whole pool on GC.
Key properties:
- The GC empties pools. Since Go 1.13 items survive one collection cycle in a victim cache, but they can vanish on the next. This is crucial: pools are not a long-term cache — they’re a hint to the runtime that “if you’re going to collect these, wait a moment in case they get reused first.”
- Per-P (per scheduler processor) local storage. Most Get()/Put() calls hit a processor-local pool with no contention, so scaling across cores is nearly free.
- No guarantees. A Get() might return a fresh object. A Put() might be discarded if the pool is full or the GC just fired.
This design is exactly right for “reusable scratch space.” It’s wrong for “cached resources I need to stay around” (use a real cache instead).
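The lifecycle can be sketched in a few lines. This is an illustrative example, not from any particular codebase — the `borrow`/`release` names and the pool itself are assumptions made for the sketch:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// scratchPool hands out *bytes.Buffer scratch space. New runs only
// when the pool has nothing recycled to give back.
var scratchPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// borrow gets a buffer; it may be fresh or recycled, so reset it
// before trusting its contents.
func borrow() *bytes.Buffer {
	buf := scratchPool.Get().(*bytes.Buffer)
	buf.Reset()
	return buf
}

// release returns a buffer; the runtime may keep or drop it.
func release(buf *bytes.Buffer) {
	scratchPool.Put(buf)
}

func main() {
	buf := borrow()
	buf.WriteString("scratch")
	fmt.Println(buf.String()) // prints "scratch"
	release(buf)

	// The next Get *may* return the same buffer, but nothing is
	// guaranteed: a GC cycle in between could have cleared the pool.
	again := borrow()
	fmt.Println(again.Len()) // 0 — reset on borrow, always
}
```

Note that `borrow` resets unconditionally: since Get can return either a fresh or a recycled object, code that depends on which one it got is already broken.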
Should You Pool This?
flowchart TD
Start([Considering sync.Pool?]) --> Q1{Have you
benchmarked
-benchmem?}
Q1 -->|No| Skip1[Benchmark first.
Most code doesn't need this.]
Q1 -->|Yes| Q2{Object size
> 1 KB?}
Q2 -->|No · small object| Skip2[Pool overhead exceeds
alloc cost. Use 'new'.]
Q2 -->|Yes| Q3{Allocations
frequent?
1000s/sec}
Q3 -->|No · rare| Skip3[GC handles this fine.
Skip.]
Q3 -->|Yes| Q4{Short-lived
and easily
reset?}
Q4 -->|No · long-lived| Skip4[Use a real cache
or resource pool.]
Q4 -->|Yes| Use[Use sync.Pool.
Always Reset on Get and Put.]
classDef skip fill:#fed7d7,stroke:#c53030
classDef use fill:#f0fff4,stroke:#2f855a
class Skip1,Skip2,Skip3,Skip4 skip
class Use use
Most paths in real code exit this flow long before hitting “use”. That’s correct.
When Pooling Helps: Per-Request Buffers
Canonical case. An HTTP handler serializes a response to a buffer, writes the buffer, moves on. The next request does the same thing. Without pooling, the GC collects the buffer every request. With pooling, the buffer is reused:
var bufferPool = sync.Pool{
New: func() interface{} {
return bytes.NewBuffer(make([]byte, 0, 4096))
},
}
func handler(w http.ResponseWriter, r *http.Request) {
buf := bufferPool.Get().(*bytes.Buffer)
defer func() {
buf.Reset()
bufferPool.Put(buf)
}()
writeResponse(buf, r)
w.Write(buf.Bytes())
}
Under realistic load (thousands of requests per second), this typically reduces allocation pressure by 20-40% and measurably lowers GC pause times. The exact number depends on your allocation pattern, but the principle holds: large, frequent, short-lived allocations are exactly what pooling is for.
What makes this the canonical case:
- Buffers are big enough (4 KB initial) that the allocation actually matters.
- They’re frequent — thousands per second.
- Short-lived — used within one request.
- Easy to reset — buf.Reset() clears it cleanly.
- Same shape every time.
When you see a request-scoped buffer that fits all five, pooling almost always pays.
When Pooling Is Wasted Effort
Small objects. Pooling a 24-byte struct with three fields is almost never worth it. The pool’s own overhead (per-P lookup, interface boxing) is larger than the allocation. Benchmark to confirm — you’ll see allocs/op go down but ns/op stay the same or go up.
// Not worth it:
type Small struct { a, b, c int }
var smallPool = sync.Pool{New: func() interface{} { return &Small{} }}
// Just use new(Small) or &Small{}
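You can see the “wasted effort” signal for yourself without writing a _test.go file: testing.Benchmark can drive both variants from a main function. A sketch — absolute numbers vary by machine, and the `benchDirect`/`benchPooled` names are made up for this example:

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

type Small struct{ a, b, c int }

var smallPool = sync.Pool{New: func() interface{} { return &Small{} }}

// sink forces each value to escape to the heap so the compiler
// can't optimize the allocation away.
var sink *Small

// benchDirect allocates a fresh Small every iteration.
func benchDirect() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			sink = &Small{a: i}
		}
	})
}

// benchPooled recycles Smalls through the pool, paying the per-P
// lookup and reset on every iteration instead.
func benchPooled() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			s := smallPool.Get().(*Small)
			s.a, s.b, s.c = i, 0, 0 // manual "reset"
			sink = s
			smallPool.Put(s)
		}
	})
}

func main() {
	d, p := benchDirect(), benchPooled()
	fmt.Printf("direct: %s %s\n", d, d.MemString())
	fmt.Printf("pooled: %s %s\n", p, p.MemString())
}
```

Expect allocs/op to drop, and ns/op to stay flat or rise — the pool saved a 24-byte allocation and charged you the bookkeeping for it.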
Infrequent allocations. If your code path runs once an hour, pooling saves nothing meaningful. The GC handles a handful of allocations just fine.
Long-lived state. Connection objects, database handles, caches. These shouldn’t be in sync.Pool — they should be in a proper cache or connection pool (like *sql.DB, which internally manages connections without sync.Pool).
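The contrast is worth making concrete. A real resource pool is bounded, never has its contents silently discarded by the GC, and applies backpressure when exhausted — none of which sync.Pool provides. A minimal channel-based sketch (the `connPool` type and `conn` stand-in are hypothetical; production pools also need health checks and shutdown):

```go
package main

import (
	"errors"
	"fmt"
)

// conn stands in for an expensive, long-lived resource (a real one
// would be a network connection with health checks).
type conn struct{ id int }

// connPool is a bounded pool: capacity is fixed and items are never
// dropped behind your back — the properties sync.Pool deliberately
// does not offer.
type connPool struct{ ch chan *conn }

func newConnPool(size int) *connPool {
	p := &connPool{ch: make(chan *conn, size)}
	for i := 0; i < size; i++ {
		p.ch <- &conn{id: i}
	}
	return p
}

// Acquire blocks until a connection is free — backpressure instead
// of unbounded allocation.
func (p *connPool) Acquire() *conn { return <-p.ch }

// Release returns a connection, erroring on a double release.
func (p *connPool) Release(c *conn) error {
	select {
	case p.ch <- c:
		return nil
	default:
		return errors.New("release to full pool")
	}
}

func main() {
	pool := newConnPool(2)
	c := pool.Acquire()
	fmt.Println("got conn", c.id)
	pool.Release(c)
}
```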
Anything you can’t reliably reset. If an object has state that needs to be “returned to zero,” and you can forget to zero it, you’re one typo away from data leaking between requests.
The Reset Trap
The single most dangerous mistake with sync.Pool: forgetting to reset the object before putting it back, or reusing it before clearing whatever was in it.
// Wrong:
buf := pool.Get().(*bytes.Buffer)
buf.Write(responseData) // might not start empty
w.Write(buf.Bytes())
pool.Put(buf) // buf still has data; next caller might see it
// Right:
buf := pool.Get().(*bytes.Buffer)
buf.Reset() // ← explicit
buf.Write(responseData)
w.Write(buf.Bytes())
buf.Reset()
pool.Put(buf)
This has caused real production incidents. Pooled buffers across request handlers have leaked bearer tokens, user PII, and password reset codes when a reset was missed. The runtime doesn’t help — there’s no “enforce reset” mechanism. You have to do it.
Habits that reduce the risk:
- Always pair Get with a deferred Reset + Put at the top of the function.
- Reset at both ends (on Get and on Put) — paranoid but effective.
- For byte slices, shrink before return: buf.Reset() on a bytes.Buffer resets length but keeps capacity — that’s usually what you want. For a raw []byte, use buf[:0].
- Make your New function return a pre-reset object. Don’t assume a pooled object is always “fresh.”
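One way to bake these habits in is a pair of tiny helpers that own the type assertion and reset at both ends, so no call site can forget. A sketch — `getBuf`/`putBuf` are names invented for this example:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return bytes.NewBuffer(make([]byte, 0, 4096)) },
}

// getBuf hides the type assertion and resets on Get, so callers can
// never observe a previous request's bytes.
func getBuf() *bytes.Buffer {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	return buf
}

// putBuf resets on Put as well — redundant with getBuf, but it means
// one forgotten reset at a single call site can't leak data.
func putBuf(buf *bytes.Buffer) {
	buf.Reset()
	bufPool.Put(buf)
}

func main() {
	buf := getBuf()
	defer putBuf(buf)
	buf.WriteString("per-request scratch")
	fmt.Println(buf.Len())
}
```

The double reset costs almost nothing (Reset on an empty buffer is a few field writes) and turns a class of data-leak bugs into dead code.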
The Alloc Benchmark Methodology
The only honest way to know if pooling is helping is go test -bench=. -benchmem. Here’s what a useful benchmark looks like:
func BenchmarkWithoutPool(b *testing.B) {
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
buf := bytes.NewBuffer(make([]byte, 0, 4096))
writeResponse(buf, exampleRequest)
_ = buf.Bytes()
}
}
func BenchmarkWithPool(b *testing.B) {
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
buf := bufferPool.Get().(*bytes.Buffer)
buf.Reset()
writeResponse(buf, exampleRequest)
_ = buf.Bytes()
bufferPool.Put(buf)
}
}
Run:
$ go test -bench=. -benchmem
BenchmarkWithoutPool-10 200000 8431 ns/op 4352 B/op 3 allocs/op
BenchmarkWithPool-10 500000 3214 ns/op 128 B/op 1 allocs/op
Look for two things:
- allocs/op drops significantly (here: 3 → 1).
- ns/op drops or stays flat (here: 8431 → 3214).
If allocs/op drops but ns/op goes up, pooling is adding overhead without saving enough GC pressure to justify itself. That’s the “wasted effort” signal.
The benchmark alone isn’t enough, though — you also need production evidence. pprof heap profiles before and after deployment should show reduced allocation. If the prod numbers don’t match the benchmark, you’re measuring the wrong thing.
A Pattern That Actually Works: Scoped Pools
One pattern I’ve found useful: scope the pool to the type of work it serves. Don’t have one giant pool that everything pulls from.
// JSON response buffer pool
var jsonBufPool = sync.Pool{
New: func() interface{} { return bytes.NewBuffer(make([]byte, 0, 4096)) },
}
// Protocol frame buffer pool (different typical size)
var frameBufPool = sync.Pool{
New: func() interface{} { return bytes.NewBuffer(make([]byte, 0, 64*1024)) },
}
Why separate pools matter: if you have one shared pool, you might Get() a 64KB buffer when you needed a 4KB one and waste memory. Or worse, you might Get() a 4KB one for a 64KB job and grow it (defeating pooling’s purpose).
Separate pools stay close to their intended sizes. Each pool’s items are homogeneous. The New function’s initial capacity reflects the typical workload.
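Wrapping each pool in its own accessors keeps call sites from mixing sizes. The size cap on Put below is a common guard, not something from the pools above — it stops one pathological response from pinning an oversized buffer in the pool forever (`getJSONBuf`/`putJSONBuf` are illustrative names):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

var jsonBufPool = sync.Pool{
	New: func() interface{} { return bytes.NewBuffer(make([]byte, 0, 4096)) },
}

// getJSONBuf serves only 4 KB-class JSON buffers; a 64 KB frame
// buffer can never wander in and bloat this pool.
func getJSONBuf() *bytes.Buffer {
	buf := jsonBufPool.Get().(*bytes.Buffer)
	buf.Reset()
	return buf
}

// putJSONBuf drops buffers that grew far beyond the pool's target
// size, letting the GC take them instead of pooling the bloat.
func putJSONBuf(buf *bytes.Buffer) {
	if buf.Cap() > 64*1024 {
		return
	}
	jsonBufPool.Put(buf)
}

func main() {
	buf := getJSONBuf()
	defer putJSONBuf(buf)
	buf.WriteString(`{"ok":true}`)
	fmt.Println(buf.Cap() >= 4096) // true — New pre-sized it
}
```

The same accessor pair, with a different New capacity and cap threshold, would wrap the frame-buffer pool.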
The Big Thing sync.Pool Isn’t
sync.Pool is not a replacement for bounded resource pools (database connections, HTTP clients, goroutine worker pools). Those need explicit lifecycle management, health checks, and non-discardable state. Use a real pool library for them.
sync.Pool is also not a cache. A cache holds items you want to find again. sync.Pool holds items you might reuse if one’s convenient, and discards them otherwise. Different primitive for a different problem.
What Actually Matters
Most Go code is fast enough without pooling. Before adding sync.Pool to your hot path, ask:
- Have I actually benchmarked this with
-benchmem? - Are the objects I’d pool both large and frequent?
- Can I reliably reset them?
- Is GC pressure in pprof profiles actually a problem?
If any answer is no, skip the pool. The simpler code is almost always the better code.
The cases where pooling pays are real but narrower than internet wisdom suggests. Per-request buffers, protocol frame buffers, encoder/decoder state, crypto scratch space. Beyond that, the pool usually adds more lines of code than it saves nanoseconds — and each of those lines is one more place where a missing Reset() can leak bytes between requests.
Measure. Then decide.
Related
- Go’s Concurrency Is About Structure, Not Speed — the bigger principle: Go optimizes for correct structure, not raw speed.
- Testing Real-World Go Backends Isn’t What Many People Think — how to actually benchmark and prove a pool helps.