---
title: Use Connection Pooling for All Applications
impact: CRITICAL
impactDescription: Handle 10-100x more concurrent users
tags: connection-pooling, pgbouncer, performance, scalability
---

# Use Connection Pooling for All Applications

Postgres connections are expensive: each one is a dedicated server process that uses roughly 1-3 MB of RAM. Without pooling, applications open far more connections than the server can handle and exhaust the connection limit under load.
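
A quick way to see how close you are to that limit is to compare live backends against the configured `max_connections` ceiling (100 by default). A minimal check:

```sql
-- Compare live backends against the configured connection ceiling
select
  (select count(*) from pg_stat_activity) as current_connections,
  current_setting('max_connections')::int as max_connections;
```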

**Incorrect (new connection per request):**

```sql
-- Each request creates a new connection
-- Application code: db.connect() per request
-- Result: 500 concurrent users = 500 connections = crashed database

-- Check current connections
select count(*) from pg_stat_activity;  -- 487 connections!
```
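
When the count climbs like this, it helps to see who is holding the slots. A diagnostic sketch (rows for other roles' sessions may show NULLs unless you have `pg_monitor`); a pile of `idle` backends from application servers is the classic sign of missing pooling:

```sql
-- Break down backends by application and state
select application_name, state, count(*)
from pg_stat_activity
where datname = current_database()
group by application_name, state
order by count(*) desc;
```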

**Correct (connection pooling):**

```sql
-- Use a pooler like PgBouncer between app and database
-- The application connects to the pooler; the pooler maintains a small,
-- reusable pool of real connections to Postgres

-- Configure pool_size based on: (CPU cores * 2) + spindle_count
-- Example for 4 cores and 2 spindles: pool_size = 10

-- Result: 500 concurrent users share 10 actual connections
select count(*) from pg_stat_activity;  -- 10 connections
```
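
You can confirm the pooler is actually in the path from the database side: every backend should originate from the pooler's address, and the total should hover around `pool_size` even under load (assumes the pooler runs at its own host or address; a local pooler connected over a Unix socket shows a NULL `client_addr`):

```sql
-- With pooling in place, backends share the pooler's client address
select client_addr, count(*)
from pg_stat_activity
where datname = current_database()
group by client_addr;
```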

**Pool modes:**

- **Transaction mode**: connection returned after each transaction (best for most apps)
- **Session mode**: connection held for the entire session (needed for prepared statements and temp tables; see the sketch after this list)
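
The session-mode caveat exists because some features create per-backend state. A minimal illustration (the `users` table and object names here are hypothetical) of statements that break under transaction-mode pooling:

```sql
-- Both objects are session-scoped: they exist only in the backend that
-- created them. Under transaction-mode pooling, a later transaction may be
-- routed to a different backend where they don't exist.
prepare get_user (int) as select * from users where id = $1;
create temp table import_staging (id int, payload text);

-- If this runs in a transaction routed to another backend, it fails
-- because that backend has no prepared statement named get_user.
execute get_user(42);
```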

Reference: Connection Pooling