feat: supabase skill with db and realtime references

Adds the supabase agent skill with comprehensive references for:
- Database: schema design, RLS policies, migrations, indexing, query optimization, security
- Realtime: channels, broadcast, presence, postgres changes, auth setup, error handling
Pedro Rodrigues
2026-02-10 18:14:49 +00:00
parent 760460c221
commit f58047c45c
33 changed files with 2776 additions and 0 deletions

GETTING_STARTED.md Normal file

@@ -0,0 +1,69 @@
# Getting Started
Contributor guide for adding content to the Supabase Agent Skills.
## Quick Start
1. Create a reference file in `skills/supabase/references/`
2. Use `skills/supabase/references/_template.md` as your starting point
3. Update `skills/supabase/SKILL.md` to reference your new file
4. Run `npm run build && npm run check`
## Creating Reference Files
```bash
# Main topic
skills/supabase/references/{feature}.md
# Sub-topics (optional)
skills/supabase/references/{feature}/{subtopic}.md
```
**Examples:**
- `references/auth.md` - Authentication overview
- `references/auth/nextjs.md` - Auth setup for Next.js
- `references/storage.md` - Storage overview
## Writing Guidelines
Follow the [Agent Skills Open Standard](https://agentskills.io/) best practices:
1. **Concise is key** - Only include what Claude doesn't already know
2. **Show, don't tell** - Prefer code examples over explanations
3. **Progressive disclosure** - Keep SKILL.md lean, put details in reference files
4. **Concrete examples** - Include runnable code with real values
5. **Common mistakes first** - Help agents avoid pitfalls
**Good example** (~50 tokens):
```typescript
// Get user session
const { data: { session } } = await supabase.auth.getSession();
```
**Avoid** (~150 tokens):
```markdown
Sessions are a way to track authenticated users. When a user logs in,
a session is created. You can get the current session using the
getSession method which returns a promise...
```
## Update SKILL.md
Add your reference to the resources table:
```markdown
| Area | Resource | When to Use |
| ------------ | ----------------------- | ------------------------------ |
| Your Feature | `references/feature.md` | Brief description of use cases |
```
## Validate
```bash
npm run validate -- supabase # Check files
npm run build -- supabase # Generate AGENTS.md
npm run check # Format and lint
```

skills/supabase/AGENTS.md Normal file

@@ -0,0 +1,69 @@
# supabase
> **Note:** `CLAUDE.md` is a symlink to this file.
## Overview
Guides and best practices for working with Supabase. Covers getting started, Auth, Database, Storage, Edge Functions, Realtime, supabase-js SDK, CLI, and MCP integration. Use for any Supabase-related questions.
## Structure
```
supabase/
SKILL.md # Main skill file - read this first
AGENTS.md # This navigation guide
CLAUDE.md # Symlink to AGENTS.md
references/ # Detailed reference files
```
## Usage
1. Read `SKILL.md` for the main skill instructions
2. Browse `references/` for detailed documentation on specific topics
3. Reference files are loaded on-demand - read only what you need
## Reference Categories
| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Database | CRITICAL | `db-` |
| 2 | Realtime | MEDIUM-HIGH | `realtime-` |
Reference files are named `{prefix}-{topic}.md` (e.g., `db-perf-indexes.md`).
## Available References
**Database** (`db-`):
- `references/db-conn-pooling.md`
- `references/db-migrations-diff.md`
- `references/db-migrations-idempotent.md`
- `references/db-migrations-testing.md`
- `references/db-perf-indexes.md`
- `references/db-perf-query-optimization.md`
- `references/db-rls-common-mistakes.md`
- `references/db-rls-mandatory.md`
- `references/db-rls-performance.md`
- `references/db-rls-policy-types.md`
- `references/db-rls-views.md`
- `references/db-schema-auth-fk.md`
- `references/db-schema-extensions.md`
- `references/db-schema-jsonb.md`
- `references/db-schema-realtime.md`
- `references/db-schema-timestamps.md`
- `references/db-security-functions.md`
- `references/db-security-service-role.md`
**Realtime** (`realtime-`):
- `references/realtime-broadcast-basics.md`
- `references/realtime-broadcast-database.md`
- `references/realtime-patterns-cleanup.md`
- `references/realtime-patterns-debugging.md`
- `references/realtime-patterns-errors.md`
- `references/realtime-postgres-changes.md`
- `references/realtime-presence-tracking.md`
- `references/realtime-setup-auth.md`
- `references/realtime-setup-channels.md`
---
*27 reference files across 2 categories*

skills/supabase/CLAUDE.md Symbolic link

@@ -0,0 +1 @@
AGENTS.md

skills/supabase/SKILL.md Normal file

@@ -0,0 +1,55 @@
---
name: supabase
description: Guides and best practices for working with Supabase. Covers getting started, Auth, Database, Storage, Edge Functions, Realtime, supabase-js SDK, CLI, and MCP integration. Use for any Supabase-related questions.
license: MIT
metadata:
author: supabase
version: '1.0.0'
organization: Supabase
date: January 2026
abstract: Comprehensive Supabase development guide for building applications with Supabase services. Contains guides covering Auth, Database, Storage, Edge Functions, Realtime, client libraries, CLI, and tooling. Each reference includes setup instructions, code examples, common mistakes, and integration patterns.
---
# Supabase
Supabase is an open source Firebase alternative that provides a Postgres database, authentication, instant APIs, edge functions, realtime subscriptions, and storage. It's fully compatible with Postgres and works with any language, framework, or ORM.
## Supabase Documentation
Always reference the Supabase documentation before making Supabase-related claims. The documentation is the source of truth for all Supabase-related information.
You can use `curl` to fetch any documentation page as markdown:
**Documentation:**
```bash
# Fetch any doc page as markdown
curl -H "Accept: text/markdown" https://supabase.com/docs/<path>
```
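The same request can be issued from code; a minimal sketch (the helper name and the doc path in the usage comment are illustrative, not part of the Supabase SDK):

```typescript
// Build the URL and headers for fetching a docs page as markdown,
// mirroring the curl command above. Hypothetical helper for illustration.
function docsRequest(path: string): { url: string; headers: Record<string, string> } {
  return {
    // Strip a leading slash so callers can pass either form of the path
    url: `https://supabase.com/docs/${path.replace(/^\//, "")}`,
    headers: { Accept: "text/markdown" },
  };
}

// const { url, headers } = docsRequest("guides/database/overview");
// const markdown = await (await fetch(url, { headers })).text();
```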
## Overview of Resources
Reference the appropriate resource file based on the user's needs:
### Database
| Area | Resource | When to Use |
| ------------------ | ------------------------------- | ---------------------------------------------- |
| RLS Security | `references/db-rls-*.md` | Row Level Security policies, common mistakes |
| Connection Pooling | `references/db-conn-pooling.md` | Transaction vs Session mode, port 6543 vs 5432 |
| Schema Design | `references/db-schema-*.md` | auth.users FKs, timestamps, JSONB, extensions |
| Migrations | `references/db-migrations-*.md` | CLI workflows, idempotent patterns, db diff |
| Performance | `references/db-perf-*.md` | Indexes (BRIN, GIN), query optimization |
| Security | `references/db-security-*.md` | Service role key, security_definer functions |
### Realtime
| Area | Resource | When to Use |
| ---------------- | ------------------------------------ | ----------------------------------------------- |
| Channel Setup | `references/realtime-setup-*.md` | Creating channels, naming conventions, auth |
| Broadcast | `references/realtime-broadcast-*.md` | Client messaging, database-triggered broadcasts |
| Presence | `references/realtime-presence-*.md` | User online status, shared state tracking |
| Postgres Changes | `references/realtime-postgres-*.md` | Database change listeners (prefer Broadcast) |
| Patterns | `references/realtime-patterns-*.md` | Cleanup, error handling, React integration |
**CLI Usage:** Always use `npx supabase` instead of `supabase` for version consistency across team members.


@@ -0,0 +1,16 @@
# Section Definitions
Reference files are grouped by prefix. Claude loads specific files based on user
queries.
---
## 1. Database (db)
**Impact:** CRITICAL
**Description:** Row Level Security policies, connection pooling, schema design patterns, migrations, performance optimization, and security functions for Supabase Postgres.
## 2. Realtime (realtime)
**Impact:** MEDIUM-HIGH
**Description:** Channel setup, Broadcast messaging, Presence tracking, Postgres Changes listeners, cleanup patterns, error handling, and debugging.


@@ -0,0 +1,46 @@
---
title: Action-Oriented Title
tags: relevant, keywords
---
# Feature Name
One-sentence description of what this does and when to use it.
## Quick Start
```typescript
// Minimal working example with real code
import { createClient } from "@supabase/supabase-js";
const supabase = createClient(url, key);
// Core operation
const { data, error } = await supabase.from("table").select("*");
```
## Common Patterns
### Pattern Name
```typescript
// Concrete example - prefer this over explanations
const { data } = await supabase.from("users").select("id, email").eq("active", true);
```
## Common Mistakes
**Mistake**: Brief description of what goes wrong.
```typescript
// Incorrect
const data = await supabase.from("users").select(); // Missing error handling
// Correct
const { data, error } = await supabase.from("users").select("*");
if (error) throw error;
```
## Related
- [subtopic.md](subtopic.md) - For advanced X patterns
- [Docs](https://supabase.com/docs/guides/feature) - Official guide


@@ -0,0 +1,106 @@
---
title: Use Correct Connection Pooling Mode
impact: CRITICAL
impactDescription: Prevents connection exhaustion and enables 10-100x scalability
tags: connection-pooling, supavisor, transaction-mode, session-mode
---
## Use Correct Connection Pooling Mode
Supabase provides Supavisor for connection pooling. Choose the right mode based
on your application type.
## Transaction Mode (Port 6543)
Best for: Serverless functions, edge computing, stateless APIs.
```bash
# Transaction mode connection string
postgres://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres
```
**Limitations:**
- No prepared statements
- No SET commands
- No LISTEN/NOTIFY
- No temp tables
```javascript
// Prisma - disable prepared statements
const prisma = new PrismaClient({
datasources: {
db: {
url: process.env.DATABASE_URL + "?pgbouncer=true",
},
},
});
```
## Session Mode (Port 5432)
Best for: Long-running servers, apps needing prepared statements.
```bash
# Session mode (via pooler for IPv4)
postgres://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:5432/postgres
```
## Direct Connection (Port 5432)
Best for: Migrations, admin tasks, persistent servers.
```bash
# Direct connection (IPv6 only unless IPv4 add-on enabled)
postgres://postgres.[ref]:[password]@db.[ref].supabase.co:5432/postgres
```
## Common Mistakes
**Incorrect:**
```javascript
// Serverless with session mode - exhausts connections
const pool = new Pool({
connectionString: "...pooler.supabase.com:5432/postgres",
max: 20, // Too many connections per instance!
});
```
**Correct:**
```javascript
// Serverless with transaction mode
const pool = new Pool({
connectionString: "...pooler.supabase.com:6543/postgres",
max: 1, // Single connection per serverless instance
});
```
**Incorrect:**
```bash
# Transaction mode with prepared statements
DATABASE_URL="...pooler.supabase.com:6543/postgres"
# Error: prepared statement already exists
```
**Correct:**
```bash
# Add pgbouncer=true to disable prepared statements
DATABASE_URL="...pooler.supabase.com:6543/postgres?pgbouncer=true"
```
## Connection Limits by Compute Size
| Compute | Direct Connections | Pooler Clients |
| ------- | ------------------ | -------------- |
| Nano | 60 | 200 |
| Small | 90 | 400 |
| Medium | 120 | 600 |
| Large | 160 | 800 |
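The mode choice above can be sketched as a small config helper (a hypothetical illustration; the pool size for long-running servers, `max: 10`, is an assumption, not a Supabase recommendation):

```typescript
// Pick a connection mode by runtime, following the guidance above:
// serverless -> transaction mode (6543), servers -> session mode via the
// pooler (5432), migrations/admin -> direct connection (5432).
type Runtime = "serverless" | "server" | "migration";

interface PoolChoice {
  port: number;
  max: number;
  note: string;
}

function poolChoice(runtime: Runtime): PoolChoice {
  switch (runtime) {
    case "serverless":
      // One connection per instance; add ?pgbouncer=true when the
      // client uses prepared statements (e.g. Prisma)
      return { port: 6543, max: 1, note: "transaction mode" };
    case "server":
      // Prepared statements are allowed in session mode
      return { port: 5432, max: 10, note: "session mode via pooler" };
    case "migration":
      return { port: 5432, max: 1, note: "direct connection" };
  }
}
```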
## Related
- [Docs](https://supabase.com/docs/guides/database/connecting-to-postgres)


@@ -0,0 +1,97 @@
---
title: Use npx supabase db diff for Dashboard Changes
impact: HIGH
impactDescription: Captures manual changes into version-controlled migrations
tags: migrations, supabase-cli, db-diff, dashboard
---
## Use npx supabase db diff for Dashboard Changes
When making schema changes via Dashboard, use `npx supabase db diff` to generate
migration files for version control.
**Incorrect:**
```sql
-- Making changes in Dashboard without capturing them
-- Changes exist in remote but not in version control
-- Team members can't reproduce the database state
```
**Correct:**
```bash
# After making Dashboard changes, generate migration
npx supabase db diff -f add_profiles_table
# Review and test
npx supabase db reset
# Commit to version control
git add supabase/migrations/
git commit -m "Add profiles table migration"
```
## Workflow
1. Make changes in Supabase Dashboard (create tables, add columns, etc.)
2. Generate migration from diff:
```bash
npx supabase db diff -f add_profiles_table
```
3. Review generated migration in `supabase/migrations/`
4. Test locally:
```bash
npx supabase db reset
```
5. Commit migration to version control
## Diff Against Local Database
```bash
# Start local Supabase
npx supabase start
# Make changes via Dashboard or SQL
# Generate diff
npx supabase db diff -f my_changes
```
## Diff Against Remote Database
```bash
# Link to remote project
npx supabase link --project-ref your-project-ref
# Pull remote schema and generate diff
npx supabase db diff --linked -f sync_remote_changes
```
## What diff Captures
- Tables and columns
- Indexes
- Constraints
- Functions and triggers
- RLS policies
- Extensions
## What diff Does NOT Capture
- DML (INSERT, UPDATE, DELETE)
- View ownership changes
- Materialized views
- Partitions
- Comments
For these, write manual migrations.
## Related
- [db-migrations-idempotent.md](db-migrations-idempotent.md)
- [db-migrations-testing.md](db-migrations-testing.md)
- [Docs](https://supabase.com/docs/guides/deployment/database-migrations)


@@ -0,0 +1,90 @@
---
title: Write Idempotent Migrations
impact: HIGH
impactDescription: Safe to run multiple times, prevents migration failures
tags: migrations, idempotent, supabase-cli
---
## Write Idempotent Migrations
Migrations should be safe to run multiple times without errors. Use
`IF NOT EXISTS` and `IF EXISTS` clauses.
**Incorrect:**
```sql
-- Fails on second run: "relation already exists"
create table users (
id uuid primary key,
email text not null
);
create index idx_users_email on users(email);
```
**Correct:**
```sql
-- Safe to run multiple times
create table if not exists users (
id uuid primary key,
email text not null
);
create index if not exists idx_users_email on users(email);
```
## Idempotent Column Additions
```sql
-- Add column only if it doesn't exist
do $$
begin
if not exists (
select 1 from information_schema.columns
where table_name = 'users' and column_name = 'phone'
) then
alter table users add column phone text;
end if;
end $$;
```
## Idempotent Drops
```sql
-- Safe drops
drop table if exists old_table;
drop index if exists old_index;
drop function if exists old_function();
```
## Idempotent Policies
```sql
-- Drop and recreate to update policy
drop policy if exists "Users see own data" on users;
create policy "Users see own data" on users
for select to authenticated
using ((select auth.uid()) = id);
```
## Migration File Naming
Migrations in `supabase/migrations/` are named with timestamps:
```
20240315120000_create_users.sql
20240315130000_add_profiles.sql
```
Create new migration:
```bash
npx supabase migration new create_users
```
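The timestamp prefix follows `YYYYMMDDHHMMSS`; a sketch of generating it by hand (illustrative only — in practice `npx supabase migration new` creates the prefix for you, and UTC is an assumption here):

```typescript
// Build a migration filename matching the <timestamp>_<name>.sql
// pattern shown above.
function migrationFilename(name: string, now: Date = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const ts = [
    now.getUTCFullYear(),
    pad(now.getUTCMonth() + 1),
    pad(now.getUTCDate()),
    pad(now.getUTCHours()),
    pad(now.getUTCMinutes()),
    pad(now.getUTCSeconds()),
  ].join("");
  return `${ts}_${name}.sql`;
}

// migrationFilename("create_users", new Date(Date.UTC(2024, 2, 15, 12, 0, 0)))
// → "20240315120000_create_users.sql"
```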
## Related
- [db-migrations-testing.md](db-migrations-testing.md)
- [Docs](https://supabase.com/docs/guides/deployment/database-migrations)


@@ -0,0 +1,116 @@
---
title: Test Migrations with supabase db reset
impact: MEDIUM-HIGH
impactDescription: Catch migration errors before production deployment
tags: migrations, testing, supabase-cli, local-development
---
## Test Migrations with supabase db reset
Always test migrations locally before deploying to production. Use
`npx supabase db reset` to verify migrations run cleanly from scratch.
**Incorrect:**
```bash
# Deploying directly without testing
npx supabase db push # Migration fails in production!
```
**Correct:**
```bash
# Test migrations locally first
npx supabase db reset # Runs all migrations from scratch
# Verify success, then deploy
npx supabase db push
```
## Testing Workflow
```bash
# Start local Supabase
npx supabase start
# Reset database and run all migrations
npx supabase db reset
# Verify tables and data
npx supabase inspect db table-sizes
```
## What db reset Does
1. Drops the local database
2. Creates a fresh database
3. Runs all migrations in order
4. Runs `supabase/seed.sql` if present
## Seed Data for Testing
Create `supabase/seed.sql` for test data:
```sql
-- supabase/seed.sql
-- Runs after migrations on db reset
-- Use ON CONFLICT for idempotency
insert into categories (name)
values ('Action'), ('Comedy'), ('Drama')
on conflict (name) do nothing;
-- Test users (only in local development!)
insert into profiles (id, username)
values ('00000000-0000-0000-0000-000000000001', 'testuser')
on conflict (id) do nothing;
```
## Test Specific Migration
```bash
# Apply all pending migrations
npx supabase migration up
# Check migration status
npx supabase migration list
```
## Repair Failed Migration
If a migration partially fails:
```bash
# Fix the migration file
# Then repair the migration history
npx supabase migration repair --status applied 20240315120000
```
## Inspect Database State
```bash
# View tables
npx supabase inspect db table-sizes
# View indexes
npx supabase inspect db index-usage
# View cache hit rate
npx supabase inspect db cache-hit
```
## CI/CD Integration
```yaml
# GitHub Actions example
- name: Test migrations
run: |
npx supabase start
npx supabase db reset
npx supabase test db # Run pgTAP tests
```
## Related
- [db-migrations-idempotent.md](db-migrations-idempotent.md)
- [Docs](https://supabase.com/docs/guides/local-development/overview)


@@ -0,0 +1,114 @@
---
title: Choose the Right Index Type
impact: CRITICAL
impactDescription: 10-1000x query performance improvements with proper indexing
tags: indexes, performance, btree, brin, gin, partial
---
## Choose the Right Index Type
Supabase uses PostgreSQL indexes. Choose the right type for your query patterns.
## B-Tree (Default)
Best for: Equality, range queries, sorting.
```sql
-- Equality and range queries
create index idx_users_email on users(email);
create index idx_orders_created on orders(created_at);
-- Composite index for multi-column queries
create index idx_orders_user_status on orders(user_id, status);
```
## BRIN (Block Range Index)
Best for: Large tables with naturally ordered data (timestamps, sequential IDs).
10x+ smaller than B-tree.
```sql
-- Perfect for append-only timestamp columns
create index idx_logs_created on logs using brin(created_at);
create index idx_events_id on events using brin(id);
```
**When to use:** Tables with millions of rows where data is inserted in order.
## GIN (Generalized Inverted Index)
Best for: JSONB, arrays, full-text search.
```sql
-- JSONB containment queries
create index idx_users_metadata on users using gin(metadata);
-- Full-text search
create index idx_posts_search on posts using gin(to_tsvector('english', title || ' ' || content));
-- Array containment
create index idx_tags on posts using gin(tags);
```
## Partial Index
Best for: Queries that filter on specific values.
```sql
-- Only index active users (smaller, faster)
create index idx_active_users on users(email)
where status = 'active';
-- Only index unprocessed orders
create index idx_pending_orders on orders(created_at)
where processed = false;
```
**Requirement:** Query WHERE clause must match index condition.
## Common Mistakes
**Incorrect:**
```sql
-- Over-indexing: slows writes, wastes space
create index idx_users_1 on users(email);
create index idx_users_2 on users(email, name);
create index idx_users_3 on users(name, email);
create index idx_users_4 on users(name);
```
**Correct:**
```sql
-- Minimal indexes based on actual queries
create index idx_users_email on users(email); -- For login
create index idx_users_name on users(name); -- For search
```
## Verify Index Usage
```sql
-- Check if query uses index
explain analyze
select * from users where email = 'test@example.com';
-- Find unused indexes
select * from pg_stat_user_indexes
where idx_scan = 0 and indexrelname not like '%_pkey';
```
## Concurrently Create Indexes
For production tables, avoid locking:
```sql
-- Doesn't block writes
create index concurrently idx_users_email on users(email);
```
## Related
- [db-rls-performance.md](db-rls-performance.md)
- [db-schema-jsonb.md](db-schema-jsonb.md)
- [Docs](https://supabase.com/docs/guides/database/postgres/indexes)


@@ -0,0 +1,149 @@
---
title: Optimize Queries for PostgREST
impact: HIGH
impactDescription: Faster API responses and reduced database load
tags: postgrest, queries, performance, optimization, supabase-js
---
## Optimize Queries for PostgREST
Supabase uses PostgREST to generate REST APIs. Optimize queries for better
performance.
## Select Only Needed Columns
**Incorrect:**
```javascript
// Fetches all columns including large text/blobs
const { data } = await supabase.from("posts").select("*");
```
**Correct:**
```javascript
// Only fetch needed columns
const { data } = await supabase.from("posts").select("id, title, author_id");
```
## Use Explicit Filters
Explicit filters help the query planner, even with RLS.
**Incorrect:**
```javascript
// Relies only on RLS - query planner has less info
const { data } = await supabase.from("posts").select("*");
```
**Correct:**
```javascript
// Explicit filter improves query plan
const { data } = await supabase
.from("posts")
.select("*")
.eq("author_id", userId);
```
## Always Paginate
**Incorrect:**
```javascript
// Could return thousands of rows
const { data } = await supabase.from("posts").select("*");
```
**Correct:**
```javascript
// Paginate results
const { data } = await supabase
.from("posts")
.select("*")
.range(0, 19) // First 20 rows
.order("created_at", { ascending: false });
```
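`.range()` takes inclusive, zero-based indexes, which makes page math easy to get off by one; a small helper (hypothetical, for illustration):

```typescript
// Convert a 1-based page number and page size into the inclusive,
// zero-based [from, to] pair that supabase-js .range() expects.
function pageRange(page: number, pageSize: number): { from: number; to: number } {
  const from = (page - 1) * pageSize;
  return { from, to: from + pageSize - 1 };
}

// pageRange(1, 20) → { from: 0, to: 19 }, matching .range(0, 19) above
// pageRange(3, 20) → { from: 40, to: 59 }
```

Pass the result straight through as `.range(from, to)`; each page returns at most `pageSize` rows.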
## Efficient Joins
**Incorrect:**
```javascript
// N+1: One query per post for author
const { data: posts } = await supabase.from("posts").select("*");
for (const post of posts) {
const { data: author } = await supabase
.from("users")
.select("*")
.eq("id", post.author_id)
.single();
}
```
**Correct:**
```javascript
// Single query with embedded join
const { data } = await supabase.from("posts").select(`
id,
title,
author:users (
id,
name,
avatar_url
)
`);
```
## Use count Option Efficiently
**Incorrect:**
```javascript
// Counts ALL rows (slow on large tables)
const { count } = await supabase
.from("posts")
.select("*", { count: "exact", head: true });
```
**Correct:**
```javascript
// Estimated count (fast)
const { count } = await supabase
.from("posts")
.select("*", { count: "estimated", head: true });
// Or planned count (uses query planner estimate)
const { count } = await supabase
.from("posts")
.select("*", { count: "planned", head: true });
```
## Debug Query Performance
```javascript
// Get query execution plan
const { data } = await supabase
.from("posts")
.select("*")
.eq("author_id", userId)
.explain({ analyze: true, verbose: true });
console.log(data); // Shows execution plan
```
Enable explain in database:
```sql
alter role authenticator set pgrst.db_plan_enabled to true;
notify pgrst, 'reload config';
```
## Related
- [db-perf-indexes.md](db-perf-indexes.md)
- [Docs](https://supabase.com/docs/guides/database/query-optimization)


@@ -0,0 +1,97 @@
---
title: Avoid Common RLS Policy Mistakes
impact: CRITICAL
impactDescription: Prevents security vulnerabilities and unintended data exposure
tags: rls, security, auth.uid, policies, common-mistakes
---
## Avoid Common RLS Policy Mistakes
## 1. Missing TO Clause
Without `TO`, policies apply to all roles including `anon`.
**Incorrect:**
```sql
-- Runs for both anon and authenticated users
create policy "Users see own data" on profiles
using (auth.uid() = user_id);
```
**Correct:**
```sql
-- Only runs for authenticated users
create policy "Users see own data" on profiles
to authenticated
using (auth.uid() = user_id);
```
## 2. Using user_metadata for Authorization
Users can modify their own `user_metadata`. Use `app_metadata` instead.
**Incorrect:**
```sql
-- DANGEROUS: users can set their own role!
using ((auth.jwt() -> 'user_metadata' ->> 'role') = 'admin')
```
**Correct:**
```sql
-- app_metadata cannot be modified by users
using ((auth.jwt() -> 'app_metadata' ->> 'role') = 'admin')
```
## 3. Not Checking NULL auth.uid()
For unauthenticated users, `auth.uid()` returns NULL.
**Incorrect:**
```sql
-- auth.uid() is NULL for anon; NULL = user_id yields NULL (not true), which hides rows but is confusing
using (auth.uid() = user_id)
```
**Correct:**
```sql
-- Explicit NULL check
using (auth.uid() is not null and auth.uid() = user_id)
```
## 4. Missing SELECT Policy for UPDATE
UPDATE operations require a SELECT policy to find rows to update.
**Incorrect:**
```sql
-- UPDATE silently fails - no rows found
create policy "Users can update" on profiles
for update to authenticated
using (auth.uid() = user_id);
```
**Correct:**
```sql
-- Need both SELECT and UPDATE policies
create policy "Users can view" on profiles
for select to authenticated
using (auth.uid() = user_id);
create policy "Users can update" on profiles
for update to authenticated
using (auth.uid() = user_id)
with check (auth.uid() = user_id);
```
## Related
- [db-rls-mandatory.md](db-rls-mandatory.md)
- [Docs](https://supabase.com/docs/guides/database/postgres/row-level-security)


@@ -0,0 +1,50 @@
---
title: Enable RLS on All Exposed Schemas
impact: CRITICAL
impactDescription: Prevents unauthorized data access at the database level
tags: rls, security, auth, policies
---
## Enable RLS on All Exposed Schemas
RLS must be enabled on every table in exposed schemas (default: `public`). Without
RLS, any user with the anon key can read and write all data.
**Incorrect:**
```sql
-- Table without RLS - anyone can read/write everything
create table profiles (
id uuid primary key,
user_id uuid,
bio text
);
```
**Correct:**
```sql
create table profiles (
id uuid primary key,
user_id uuid references auth.users(id) on delete cascade,
bio text
);
-- Enable RLS
alter table profiles enable row level security;
-- Create policy
create policy "Users can view own profile"
on profiles for select
to authenticated
using (auth.uid() = user_id);
```
Tables created via Dashboard have RLS enabled by default. Tables created via SQL
require manual enablement. Supabase sends daily warnings for tables without RLS.
**Note:** Service role key bypasses ALL RLS policies. Never expose it to browsers.
## Related
- [Docs](https://supabase.com/docs/guides/database/postgres/row-level-security)


@@ -0,0 +1,108 @@
---
title: Optimize RLS Policy Performance
impact: CRITICAL
impactDescription: Achieve 100x-99,000x query performance improvements
tags: rls, performance, optimization, indexes, auth.uid
---
## Optimize RLS Policy Performance
RLS policies run on every row access. Unoptimized policies cause severe
performance degradation.
## 1. Wrap auth.uid() in SELECT (94-99% improvement)
**Incorrect:**
```sql
-- auth.uid() called for every row
create policy "Users see own data" on profiles
to authenticated
using (auth.uid() = user_id);
```
**Correct:**
```sql
-- Cached once per statement via initPlan
create policy "Users see own data" on profiles
to authenticated
using ((select auth.uid()) = user_id);
```
## 2. Add Indexes on Policy Columns (99% improvement)
**Incorrect:**
```sql
-- Full table scan for every query
create policy "Users see own data" on profiles
using ((select auth.uid()) = user_id);
-- No index on user_id
```
**Correct:**
```sql
create policy "Users see own data" on profiles
using ((select auth.uid()) = user_id);
-- Add index on filtered column
create index idx_profiles_user_id on profiles(user_id);
```
## 3. Use Explicit Filters in Queries (94% improvement)
**Incorrect:**
```javascript
// Relies only on implicit RLS filter
const { data } = await supabase.from("profiles").select("*");
```
**Correct:**
```javascript
// Add explicit filter - helps query planner
const { data } = await supabase
.from("profiles")
.select("*")
.eq("user_id", userId);
```
## 4. Use Security Definer Functions for Joins
**Incorrect:**
```sql
-- Join in policy - executed per row
using (
user_id in (
select user_id from team_members
where team_id = teams.id -- joins!
)
)
```
**Correct:**
```sql
-- Function in private schema
create function private.user_team_ids()
returns setof uuid
language sql
security definer
stable
as $$
select team_id from team_members
where user_id = (select auth.uid())
$$;
-- Policy uses cached function result
using (team_id in (select private.user_team_ids()))
```
## Related
- [db-security-functions.md](db-security-functions.md)
- [Supabase RLS Performance Guide](https://github.com/orgs/supabase/discussions/14576)


@@ -0,0 +1,81 @@
---
title: Use RESTRICTIVE vs PERMISSIVE Policies
impact: MEDIUM-HIGH
impactDescription: Controls policy combination logic to prevent unintended access
tags: rls, policies, permissive, restrictive
---
## Use RESTRICTIVE vs PERMISSIVE Policies
Supabase RLS supports two policy types with different combination logic.
## PERMISSIVE (Default)
Multiple permissive policies combine with OR logic. If ANY policy passes, access
is granted.
```sql
-- User can access if they own it OR are an admin
create policy "Owner access" on documents
for select to authenticated
using (owner_id = (select auth.uid()));
create policy "Admin access" on documents
for select to authenticated
using ((select auth.jwt() -> 'app_metadata' ->> 'role') = 'admin');
```
## RESTRICTIVE
Restrictive policies combine with AND logic. ALL restrictive policies must pass.
**Use Case: Enforce MFA for sensitive operations**
```sql
-- Base access policy (permissive)
create policy "Users can view own data" on sensitive_data
for select to authenticated
using (user_id = (select auth.uid()));
-- MFA requirement (restrictive) - MUST also pass
create policy "Require MFA" on sensitive_data
as restrictive
for select to authenticated
using ((select auth.jwt() ->> 'aal') = 'aal2');
```
**Use Case: Block OAuth client access**
```sql
-- Allow direct session access
create policy "Direct access only" on payment_methods
as restrictive
for all to authenticated
using ((select auth.jwt() ->> 'client_id') is null);
```
## Common Mistake
**Incorrect:**
```sql
-- Intended as additional requirement, but PERMISSIVE means OR
create policy "Require MFA" on sensitive_data
for select to authenticated
using ((select auth.jwt() ->> 'aal') = 'aal2');
```
**Correct:**
```sql
-- AS RESTRICTIVE makes it an AND requirement
create policy "Require MFA" on sensitive_data
as restrictive
for select to authenticated
using ((select auth.jwt() ->> 'aal') = 'aal2');
```
## Related
- [db-rls-common-mistakes.md](db-rls-common-mistakes.md)
- [Docs](https://supabase.com/docs/guides/database/postgres/row-level-security)


@@ -0,0 +1,65 @@
---
title: Use security_invoker for Views with RLS
impact: HIGH
impactDescription: Ensures views respect RLS policies instead of bypassing them
tags: rls, views, security_invoker, security
---
## Use security_invoker for Views with RLS
By default, views run as the view owner (security definer), bypassing RLS on
underlying tables.
**Incorrect:**
```sql
-- View bypasses RLS - exposes all data!
create view public_profiles as
select id, username, avatar_url
from profiles;
```
**Correct (Postgres 15+):**
```sql
-- View respects RLS of querying user
create view public_profiles
with (security_invoker = true)
as
select id, username, avatar_url
from profiles;
```
**Correct (Older Postgres):**
```sql
-- Option 1: Revoke direct access from API roles
revoke all on public_profiles from anon, authenticated;
-- Option 2: Create view in unexposed schema
create schema private;
create view private.profiles_view as
select * from profiles;
```
## When to Use security_definer
Keep the default definer behavior (`security_invoker` omitted or set to `false`) when the view
intentionally aggregates or filters data that users shouldn't access directly:
```sql
-- Intentionally exposes limited public data
create view leaderboard as
select username, score
from profiles
order by score desc
limit 100;
-- Grant read access
grant select on leaderboard to anon;
```
## Related
- [rls-mandatory.md](rls-mandatory.md)
- [Docs](https://supabase.com/docs/guides/database/postgres/row-level-security)
---
title: Add CASCADE to auth.users Foreign Keys
impact: HIGH
impactDescription: Prevents orphaned records and user deletion failures
tags: foreign-keys, auth.users, cascade, schema-design
---
## Add CASCADE to auth.users Foreign Keys
When referencing `auth.users`, always specify `ON DELETE CASCADE`. Without it,
deleting users fails with foreign key violations.
**Incorrect:**
```sql
-- User deletion fails: "foreign key violation"
create table profiles (
id uuid primary key references auth.users(id),
username text,
avatar_url text
);
```
**Correct:**
```sql
-- Profile deleted automatically when user is deleted
create table profiles (
id uuid primary key references auth.users(id) on delete cascade,
username text,
avatar_url text
);
```
## Alternative: SET NULL for Optional Relationships
Use `ON DELETE SET NULL` when the record should persist without the user:
```sql
create table comments (
id bigint primary key generated always as identity,
author_id uuid references auth.users(id) on delete set null,
content text not null,
created_at timestamptz default now()
);
-- Comment remains with author_id = NULL after user deletion
```
## Auto-Create Profile on Signup
```sql
create or replace function public.handle_new_user()
returns trigger
language plpgsql
security definer
set search_path = ''
as $$
begin
insert into public.profiles (id, email, full_name)
values (
new.id,
new.email,
new.raw_user_meta_data ->> 'full_name'
);
return new;
end;
$$;
create trigger on_auth_user_created
after insert on auth.users
for each row execute function public.handle_new_user();
```
**Important:** Use `security definer` and `set search_path = ''` for trigger
functions on `auth.users`; the internal auth role that fires the trigger has no
privileges on the `public` schema.
## Related
- [security-functions.md](security-functions.md)
- [Docs](https://supabase.com/docs/guides/database/postgres/cascade-deletes)
---
title: Install Extensions in extensions Schema
impact: MEDIUM
impactDescription: Keeps public schema clean and simplifies migrations
tags: extensions, schema-design, best-practices
---
## Install Extensions in extensions Schema
Install PostgreSQL extensions in the `extensions` schema to keep the `public`
schema clean and avoid conflicts with application tables.
**Incorrect:**
```sql
-- Installs in public schema by default
create extension pg_trgm;
create extension vector;
```
**Correct:**
```sql
-- Install in extensions schema
create extension if not exists pg_trgm with schema extensions;
create extension if not exists vector with schema extensions;
-- Reference with schema prefix
create index idx_name_trgm on users
using gin(name extensions.gin_trgm_ops);
```
## Common Supabase Extensions
```sql
-- Vector similarity search (AI embeddings)
create extension if not exists vector with schema extensions;
-- Scheduled jobs (pg_cron requires pg_catalog, not extensions)
create extension if not exists pg_cron with schema pg_catalog;
-- HTTP requests from database
create extension if not exists pg_net with schema extensions;
-- Full-text search improvements
create extension if not exists pg_trgm with schema extensions;
-- Geospatial data
create extension if not exists postgis with schema extensions;
-- UUID generation (enabled by default)
create extension if not exists "uuid-ossp" with schema extensions;
```
## Check Available Extensions
```sql
-- List available extensions
select * from pg_available_extensions;
-- List installed extensions
select * from pg_extension;
```
## Using Extensions
```sql
-- pgvector example
create table documents (
id bigint primary key generated always as identity,
content text,
embedding vector(1536) -- OpenAI ada-002 dimensions
);
create index on documents using ivfflat (embedding vector_cosine_ops);
```
## Related
- [Docs](https://supabase.com/docs/guides/database/extensions)
---
title: Use Structured Columns Over JSONB When Possible
impact: MEDIUM
impactDescription: Improves query performance, type safety, and data integrity
tags: jsonb, json, schema-design, performance
---
## Use Structured Columns Over JSONB When Possible
JSONB is flexible but should not replace proper schema design. Use structured
columns for known fields, JSONB for truly dynamic data.
**Incorrect:**
```sql
-- Everything in JSONB - loses type safety and performance
create table users (
id uuid primary key,
data jsonb -- contains email, name, role, etc.
);
-- Querying is verbose and slow without indexes
select data ->> 'email' from users
where data ->> 'role' = 'admin';
```
**Correct:**
```sql
-- Structured columns for known fields
create table users (
id uuid primary key,
email text not null,
name text,
role text check (role in ('admin', 'user', 'guest')),
-- JSONB only for truly flexible data
preferences jsonb default '{}'
);
-- Fast, type-safe queries
select email from users where role = 'admin';
```
## When JSONB is Appropriate
- Webhook payloads
- User-defined fields
- API responses to cache
- Rapid prototyping (migrate to columns later)
## Indexing JSONB
```sql
-- GIN index for containment queries
create index idx_users_preferences on users using gin(preferences);
-- Query using containment operator
select * from users
where preferences @> '{"theme": "dark"}';
```
## Validate JSONB with pg_jsonschema
```sql
create extension if not exists pg_jsonschema with schema extensions;
alter table users
add constraint check_preferences check (
jsonb_matches_schema(
'{
"type": "object",
"properties": {
"theme": {"type": "string", "enum": ["light", "dark"]},
"notifications": {"type": "boolean"}
}
}',
preferences
)
);
```
## Querying JSONB
```javascript
// supabase-js
const { data } = await supabase
.from("users")
.select("email, preferences->theme")
.eq("preferences->>notifications", "true");
```
## Related
- [perf-indexes.md](perf-indexes.md)
- [Docs](https://supabase.com/docs/guides/database/json)
---
title: Realtime Requires Primary Keys
impact: MEDIUM-HIGH
impactDescription: Prevents Realtime subscription failures and data sync issues
tags: realtime, primary-keys, subscriptions
---
## Realtime Requires Primary Keys
Supabase Realtime uses primary keys to track row changes. Tables without primary
keys cannot be subscribed to.
**Incorrect:**
```sql
-- No primary key - Realtime subscriptions will fail
create table messages (
user_id uuid,
content text,
created_at timestamptz default now()
);
```
**Correct:**
```sql
create table messages (
id bigint primary key generated always as identity,
user_id uuid references auth.users(id) on delete cascade,
content text not null,
created_at timestamptz default now()
);
```
## Enable Realtime for a Table
**Via SQL:**
```sql
-- Add table to realtime publication
alter publication supabase_realtime add table messages;
```
**Via Dashboard:**
Database > Publications > supabase_realtime > Add table
## Realtime with RLS
RLS policies apply to Realtime subscriptions. Users only receive changes they
have access to.
```sql
-- Policy applies to realtime
create policy "Users see own messages" on messages
for select to authenticated
using (user_id = (select auth.uid()));
```
```javascript
// Subscribe with RLS filtering
const channel = supabase
.channel("messages")
.on(
"postgres_changes",
{ event: "*", schema: "public", table: "messages" },
(payload) => console.log(payload)
)
.subscribe();
```
## Performance Considerations
- Add indexes on columns used in Realtime filters
- Keep RLS policies simple for subscribed tables
- Monitor "Realtime Private Channel RLS Execution Time" in Dashboard
## Replica Identity
By default, only the primary key is sent in UPDATE/DELETE payloads. To receive
all columns:
```sql
-- Send all columns in change events (increases bandwidth)
alter table messages replica identity full;
```
## Related
- [rls-mandatory.md](rls-mandatory.md)
- [Docs](https://supabase.com/docs/guides/realtime)
---
title: Always Use timestamptz Not timestamp
impact: MEDIUM-HIGH
impactDescription: Prevents timezone-related bugs and data inconsistencies
tags: timestamps, timestamptz, timezone, data-types
---
## Always Use timestamptz Not timestamp
Use `timestamptz` (timestamp with time zone) instead of `timestamp`. The latter
loses timezone information, causing bugs when users are in different timezones.
**Incorrect:**
```sql
create table events (
id bigint primary key generated always as identity,
name text not null,
-- Stores time without timezone context
created_at timestamp default now(),
starts_at timestamp
);
```
**Correct:**
```sql
create table events (
id bigint primary key generated always as identity,
name text not null,
-- Stores time in UTC, converts on retrieval
created_at timestamptz default now(),
starts_at timestamptz
);
```
## How timestamptz Works
- Stores time in UTC internally
- Converts to/from session timezone automatically
- `now()` returns current time in session timezone, stored as UTC
```sql
-- Insert with timezone
insert into events (name, starts_at)
values ('Launch', '2024-03-15 10:00:00-05'); -- EST
-- Retrieved in UTC by default in Supabase
select starts_at from events;
-- 2024-03-15 15:00:00+00
```
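On the client, `timestamptz` values arrive as ISO 8601 strings with a UTC offset and parse directly into JavaScript `Date` objects. A small sketch (the literal value is illustrative):

```javascript
// timestamptz arrives from the API as an ISO 8601 string in UTC
const startsAt = new Date('2024-03-15T15:00:00+00:00')

// One unambiguous instant, regardless of the viewer's timezone
console.log(startsAt.toISOString()) // 2024-03-15T15:00:00.000Z

// Convert for display in the user's local timezone
const local = startsAt.toLocaleString()
```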
## Auto-Update updated_at Column
```sql
create table posts (
id bigint primary key generated always as identity,
title text not null,
created_at timestamptz default now(),
updated_at timestamptz default now()
);
-- Trigger to auto-update
create or replace function update_updated_at()
returns trigger as $$
begin
new.updated_at = now();
return new;
end;
$$ language plpgsql;
create trigger posts_updated_at
before update on posts
for each row execute function update_updated_at();
```
## Related
- [Docs](https://supabase.com/docs/guides/database/tables)
---
title: Use security_definer Functions in Private Schema
impact: HIGH
impactDescription: Controlled privilege escalation without exposing service role
tags: functions, security_definer, security, private-schema
---
## Use security_definer Functions in Private Schema
`security definer` functions run with the privileges of the function owner, not
the caller. Place them in a private schema to prevent direct API access.
**Incorrect:**
```sql
-- DANGEROUS: Exposed via API, can be called directly
create function public.get_all_users()
returns setof auth.users
language sql
security definer
as $$
select * from auth.users; -- Bypasses RLS!
$$;
```
**Correct:**
```sql
-- Create private schema (not exposed to API)
create schema if not exists private;
-- Function in private schema
create function private.get_all_users()
returns setof auth.users
language sql
security definer
set search_path = '' -- Prevent search_path injection
as $$
select * from auth.users;
$$;
-- Wrapper in public schema with access control
create function public.get_user_count()
returns bigint
language sql
security invoker -- Runs as caller
as $$
select count(*) from private.get_all_users()
where (select auth.jwt() -> 'app_metadata' ->> 'role') = 'admin';
$$;
```
## Common Use Cases
### 1. Admin Operations
```sql
create function private.admin_delete_user(target_user_id uuid)
returns void
language plpgsql
security definer
set search_path = ''
as $$
begin
-- Verify caller is admin
if (select auth.jwt() -> 'app_metadata' ->> 'role') != 'admin' then
raise exception 'Unauthorized';
end if;
delete from auth.users where id = target_user_id;
end;
$$;
```
### 2. Cross-User Data Access
```sql
-- Function returns team IDs the current user belongs to
create function private.user_teams()
returns setof uuid
language sql
security definer
stable
set search_path = ''
as $$
select team_id from public.team_members
where user_id = (select auth.uid());
$$;
-- RLS policy uses cached function result (no per-row join)
create policy "Team members see team data" on team_data
for select to authenticated
using (team_id in (select private.user_teams()));
```
## Security Best Practices
1. **Always set search_path = ''** - Prevents search_path injection attacks
2. **Validate caller permissions** - Don't assume caller is authorized
3. **Keep functions minimal** - Only expose necessary operations
4. **Log sensitive operations** - Audit trail for admin actions
```sql
create function private.sensitive_operation()
returns void
language plpgsql
security definer
set search_path = ''
as $$
begin
-- Log the operation
insert into audit_log (user_id, action, timestamp)
values ((select auth.uid()), 'sensitive_operation', now());
-- Perform operation
-- ...
end;
$$;
```
## Related
- [security-service-role.md](security-service-role.md)
- [rls-performance.md](rls-performance.md)
- [Docs](https://supabase.com/docs/guides/database/functions)
---
title: Never Expose Service Role Key to Browser
impact: CRITICAL
impactDescription: Prevents complete database compromise and data breach
tags: service-role, security, api-keys, anon-key
---
## Never Expose Service Role Key to Browser
The service role key bypasses ALL Row Level Security. Exposing it gives complete
database access to anyone.
**Incorrect:**
```javascript
// NEVER do this - service key in frontend code!
const supabase = createClient(
"https://xxx.supabase.co",
"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." // service_role key
);
```
**Correct:**
```javascript
// Browser: Use anon key (respects RLS)
const supabase = createClient(
"https://xxx.supabase.co",
"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." // anon key
);
```
## When to Use Service Role Key
Only in server-side code that users cannot access:
```javascript
// Edge Function or backend server
import { createClient } from "@supabase/supabase-js";
const supabaseAdmin = createClient(
process.env.SUPABASE_URL,
process.env.SUPABASE_SERVICE_ROLE_KEY // Only in secure backend
);
// Bypass RLS for admin operations
const { data } = await supabaseAdmin.from("users").select("*");
```
## Environment Variables
```bash
# .env.local (never commit to git!)
NEXT_PUBLIC_SUPABASE_URL=https://xxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJ... # Safe to expose
SUPABASE_SERVICE_ROLE_KEY=eyJ... # NEVER prefix with NEXT_PUBLIC_
```
## API Key Types
Supabase provides 4 key types:
| Type | Format | Privileges |
|------|--------|-----------|
| Publishable key | `sb_publishable_...` | Low — safe to expose in browsers/apps |
| Secret key | `sb_secret_...` | Elevated — bypasses RLS, backend only |
| `anon` (legacy) | JWT | Same as publishable |
| `service_role` (legacy) | JWT | Same as secret key |
The publishable and secret keys are replacing the legacy JWT-based keys. To verify which legacy key you have, decode the JWT at [jwt.io](https://jwt.io) and check the `role` claim: `anon` or `service_role`.
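Decoding the JWT payload locally also works, and avoids pasting a key into a website. A sketch using Node's `base64url` decoding; the token below is fabricated for illustration, not a real key:

```javascript
// Decode the middle (payload) segment of a legacy JWT to inspect its role claim
function jwtRole(token) {
  const payload = token.split('.')[1]
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8')).role
}

// Fabricated token with payload {"role":"anon"} — real keys are fully signed JWTs
const fake = ['header', Buffer.from('{"role":"anon"}').toString('base64url'), 'sig'].join('.')
console.log(jwtRole(fake)) // 'anon'
```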
## If Service Key is Exposed
1. Immediately rotate keys in Dashboard > Settings > API Keys
2. Review database for unauthorized changes
3. Check logs for suspicious activity
4. Update all backend services with new key
## Alternative: Security Definer Functions
Instead of service role, use security definer functions for specific elevated
operations:
```sql
-- Runs with function owner's privileges
create function admin_get_user_count()
returns bigint
language sql
security definer
set search_path = ''
as $$
select count(*) from auth.users;
$$;
-- Grant to authenticated users
grant execute on function admin_get_user_count to authenticated;
```
## Related
- [security-functions.md](security-functions.md)
- [rls-mandatory.md](rls-mandatory.md)
- [Docs](https://supabase.com/docs/guides/api/api-keys)
---
title: Send and Receive Broadcast Messages
impact: HIGH
impactDescription: Core pattern for real-time client-to-client messaging
tags: realtime, broadcast, send, receive, subscribe
---
## Send and Receive Broadcast Messages
Broadcast enables low-latency pub/sub messaging between clients. Prefer Broadcast over Postgres Changes when an application must scale to many concurrent subscribers.
## Subscribe to Broadcast Events
```javascript
const channel = supabase.channel('room:123:messages', {
config: { private: true },
})
channel
.on('broadcast', { event: 'message_created' }, (payload) => {
console.log('New message:', payload.payload)
})
.on('broadcast', { event: '*' }, (payload) => {
// Listen to all events on this channel
})
.subscribe((status) => {
if (status === 'SUBSCRIBED') {
console.log('Connected!')
}
})
```
## Send Messages
**After subscribing (WebSocket - lower latency):**
```javascript
channel.send({
type: 'broadcast',
event: 'message_created',
payload: { text: 'Hello!', user_id: '123' },
})
```
**Before subscribing or one-off (HTTP):**
```javascript
await channel.httpSend('message_created', { text: 'Hello!' })
```
## Receive Own Messages
By default, senders don't receive their own broadcasts.
**Incorrect:**
```javascript
// Won't receive own messages
const channel = supabase.channel('room:123')
```
**Correct:**
```javascript
// Enable self-receive when needed (e.g., optimistic UI confirmation)
const channel = supabase.channel('room:123', {
config: {
broadcast: { self: true },
},
})
```
## Get Server Acknowledgment
```javascript
const channel = supabase.channel('room:123', {
config: {
broadcast: { ack: true },
},
})
// Returns 'ok' when server confirms receipt
const status = await channel.send({
type: 'broadcast',
event: 'message_created',
payload: { text: 'Hello!' },
})
```
## Related
- [broadcast-database.md](broadcast-database.md)
- [patterns-cleanup.md](patterns-cleanup.md)
- [Docs](https://supabase.com/docs/guides/realtime/broadcast)
---
title: Broadcast from Database Triggers
impact: CRITICAL
impactDescription: Scalable pattern for notifying clients of database changes
tags: realtime, broadcast, database, triggers, realtime.send, realtime.broadcast_changes
---
## Broadcast from Database Triggers
Use database triggers with `realtime.broadcast_changes()` instead of `postgres_changes` for better scalability. This avoids per-subscriber RLS checks.
## realtime.broadcast_changes()
Broadcasts database changes in a standard format.
```sql
create or replace function room_messages_broadcast()
returns trigger
security definer
language plpgsql
as $$
begin
perform realtime.broadcast_changes(
'room:' || coalesce(new.room_id, old.room_id)::text, -- topic
tg_op, -- event (INSERT/UPDATE/DELETE)
tg_op, -- operation
tg_table_name, -- table
tg_table_schema, -- schema
new, -- new record
old -- old record
);
return coalesce(new, old);
end;
$$;
create trigger messages_broadcast_trigger
after insert or update or delete on messages
for each row execute function room_messages_broadcast();
```
**Client subscription:**
```javascript
const channel = supabase
.channel('room:123', { config: { private: true } })
.on('broadcast', { event: 'INSERT' }, (payload) => console.log('Insert:', payload))
.on('broadcast', { event: 'UPDATE' }, (payload) => console.log('Update:', payload))
.on('broadcast', { event: 'DELETE' }, (payload) => console.log('Delete:', payload))
.subscribe()
```
## realtime.send()
Sends custom payloads without table binding.
```sql
select realtime.send(
jsonb_build_object('message', 'Custom notification'), -- payload
'notification_sent', -- event
'user:456:notifications', -- topic
true -- private (true = requires auth)
);
```
## Public vs Private Mismatch
**Incorrect:**
```sql
-- Database sends to public channel
select realtime.send('{}', 'event', 'topic', false); -- private = false
```
```javascript
// Client expects private channel - won't receive message
const channel = supabase.channel('topic', { config: { private: true } })
```
**Correct:**
```sql
-- Database sends to private channel
select realtime.send('{}', 'event', 'topic', true); -- private = true
```
```javascript
// Client matches
const channel = supabase.channel('topic', { config: { private: true } })
```
## Related
- [setup-auth.md](setup-auth.md)
- [broadcast-basics.md](broadcast-basics.md)
- [Docs](https://supabase.com/docs/guides/realtime/broadcast)
---
title: Clean Up Channels to Prevent Memory Leaks
impact: CRITICAL
impactDescription: Prevents memory leaks and connection quota exhaustion
tags: realtime, cleanup, react, lifecycle, removeChannel
---
## Clean Up Channels to Prevent Memory Leaks
Always remove channels when components unmount or subscriptions are no longer needed.
## React Pattern
**Incorrect:**
```javascript
function ChatRoom({ roomId }) {
useEffect(() => {
const channel = supabase.channel(`room:${roomId}`)
channel.on('broadcast', { event: 'message' }, handleMessage).subscribe()
// Missing cleanup - channel persists after unmount
}, [roomId])
}
```
**Correct:**
```javascript
function ChatRoom({ roomId }) {
const channelRef = useRef(null)
useEffect(() => {
// Prevent duplicate subscriptions
if (channelRef.current?.state === 'subscribed') return
const channel = supabase.channel(`room:${roomId}:messages`, {
config: { private: true },
})
channelRef.current = channel
channel
.on('broadcast', { event: 'message_created' }, handleMessage)
.subscribe()
return () => {
if (channelRef.current) {
supabase.removeChannel(channelRef.current)
channelRef.current = null
}
}
}, [roomId])
}
```
## Channel Lifecycle Methods
```javascript
// Remove specific channel
supabase.removeChannel(channel)
// Remove all channels (e.g., on logout)
supabase.removeAllChannels()
// Get active channels
const channels = supabase.getChannels()
```
## Check Channel State Before Subscribing
```javascript
// Prevent duplicate subscriptions
if (channel.state === 'subscribed') {
return
}
channel.subscribe()
```
## Connection Quotas
| Plan | Max Connections | Channels per Connection |
|------|-----------------|------------------------|
| Free | 200 | 100 |
| Pro | 500 | 100 |
| Team | 10,000 | 100 |
Leaked channels count against quotas even when inactive.
Pay-as-you-go customers can edit these limits in [Realtime Settings](https://supabase.com/dashboard/project/_/realtime/settings).
## Related
- [patterns-errors.md](patterns-errors.md)
- [setup-channels.md](setup-channels.md)
- [Docs](https://supabase.com/docs/guides/realtime/quotas)
---
title: Debug Realtime Connections
impact: MEDIUM
impactDescription: Enables visibility into connection and message flow issues
tags: realtime, debugging, logging, troubleshooting
---
## Debug Realtime Connections
Use logging to diagnose connection issues, message flow, and performance problems.
## Client-Side Logging
**Incorrect:**
```javascript
// No logging - no visibility into issues
const supabase = createClient(url, key)
```
**Correct:**
Enable client-side logging with a custom logger function:
```javascript
const supabase = createClient(url, key, {
realtime: {
logger: (kind, msg, data) => {
console.log(`[${kind}] ${msg}`, data)
},
},
})
```
Log message types:
- `push` - Messages sent to server
- `receive` - Messages received from server
- `transport` - Connection events (connect, disconnect, heartbeat)
- `error` - Error events
- `worker` - Web Worker events
## Server-Side Log Level
Configure Realtime server log verbosity via client params:
```javascript
const supabase = createClient(url, key, {
realtime: {
params: {
log_level: 'info', // 'debug' | 'info' | 'warn' | 'error'
},
},
})
```
This affects the verbosity of logs from the Realtime server, not client-side logs.
## Filtering Logs for Debugging
Filter logs to focus on specific events:
```javascript
const supabase = createClient(url, key, {
realtime: {
logger: (kind, msg, data) => {
// Only log push/receive for subscription debugging
if (kind === 'push' || kind === 'receive') {
console.log(`[${kind}] ${msg}`, data)
}
},
},
})
```
## Related
- [patterns-errors.md](patterns-errors.md)
- [Docs](https://supabase.com/docs/guides/troubleshooting/realtime-debugging-with-logger)
---
title: Handle Realtime Errors and Connection Issues
impact: HIGH
impactDescription: Enables graceful handling of connection failures
tags: realtime, errors, subscribe, status, reconnection
---
## Handle Realtime Errors and Connection Issues
Handle subscription status and errors to provide reliable user experiences.
## Subscription Status Handling
**Incorrect:**
```javascript
// Ignoring subscription status - no visibility into connection issues
channel.subscribe()
```
**Correct:**
```javascript
channel.subscribe((status, err) => {
switch (status) {
case 'SUBSCRIBED':
console.log('Connected!')
break
case 'CHANNEL_ERROR':
console.error('Channel error:', err)
// Client retries automatically
break
case 'TIMED_OUT':
console.error('Connection timed out')
break
case 'CLOSED':
console.log('Channel closed')
break
}
})
```
## Common Error Codes
| Error | Cause | Solution |
|-------|-------|----------|
| `too_many_connections` | Connection limit exceeded | Clean up unused channels, upgrade plan |
| `too_many_joins` | Channel join rate exceeded | Reduce join frequency |
| `ConnectionRateLimitReached` | Max connections reached | Upgrade plan |
| `DatabaseLackOfConnections` | No available DB connections | Increase compute size |
| `TenantNotFound` | Invalid project reference | Verify project URL |
## Automatic Reconnection
Supabase handles reconnection automatically with exponential backoff. No manual re-subscribe is needed.
## Client-Side Logging
Enable client-side logging to debug connection issues:
```javascript
const supabase = createClient(url, key, {
realtime: {
logger: (kind, msg, data) => {
console.log(`[${kind}] ${msg}`, data)
},
},
})
```
Log message types include `push`, `receive`, `transport`, `error`, and `worker`.
## Silent Disconnections in Background
WebSocket connections can disconnect when apps are backgrounded (mobile, inactive tabs). Supabase reconnects automatically. Re-track presence after reconnection if needed:
```javascript
channel.subscribe((status) => {
if (status === 'SUBSCRIBED') {
// Re-track presence after reconnection
channel.track({ user_id: userId, online_at: new Date().toISOString() })
}
})
```
## Authorization Errors
Private channel authorization fails when:
- User not authenticated
- Missing RLS policies on `realtime.messages`
- Token expired
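When the cause is an expired token, pushing a fresh token to open channels fixes it; `supabase.realtime.setAuth()` is the documented call for this. The expiry helper below is an illustrative sketch, and the commented handler assumes a standard supabase-js client:

```javascript
// Returns true when a JWT `exp` claim (unix seconds) has passed
function isTokenExpired(exp, nowMs = Date.now()) {
  return exp * 1000 <= nowMs
}

// After a token refresh, push the fresh token to open channels so
// private-channel authorization keeps working (supabase client assumed):
// supabase.auth.onAuthStateChange((event, session) => {
//   if (event === 'TOKEN_REFRESHED' && session) {
//     supabase.realtime.setAuth(session.access_token)
//   }
// })
```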
## Related
- [patterns-cleanup.md](patterns-cleanup.md)
- [setup-auth.md](setup-auth.md)
- [Docs](https://supabase.com/docs/guides/realtime/troubleshooting)
---
title: Listen to Database Changes with Postgres Changes
impact: MEDIUM
impactDescription: Simple database change listeners with scaling limitations
tags: realtime, postgres_changes, database, subscribe, publication
---
## Listen to Database Changes with Postgres Changes
Postgres Changes streams database changes via logical replication. Note: **Broadcast is recommended for applications that demand higher scalability**.
## When to Use Postgres Changes
- Quick prototyping and development
- Low user counts (< 100 concurrent subscribers per table)
- When simplicity is more important than scale
## Basic Setup
**1. Add table to publication:**
```sql
alter publication supabase_realtime add table messages;
```
**2. Subscribe to changes:**
```javascript
const channel = supabase
.channel('db-changes')
.on(
'postgres_changes',
{
event: 'INSERT', // 'INSERT' | 'UPDATE' | 'DELETE' | '*'
schema: 'public',
table: 'messages',
},
(payload) => console.log('New row:', payload.new)
)
.subscribe()
```
## Filter Syntax
```javascript
.on('postgres_changes', {
event: '*',
schema: 'public',
table: 'messages',
filter: 'room_id=eq.123', // Only changes where room_id = 123
}, callback)
```
| Filter | Example |
|--------|---------|
| `eq` | `id=eq.1` |
| `neq` | `status=neq.deleted` |
| `lt`, `lte` | `age=lt.65` |
| `gt`, `gte` | `quantity=gt.10` |
| `in` | `name=in.(red,blue,yellow)` (max 100 values) |
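These filters are plain strings, so a small helper keeps them consistent across subscriptions. An illustrative sketch; the function names are not part of the Supabase API:

```javascript
// Compose postgres_changes filter strings like 'room_id=eq.123'
function filterEq(column, value) {
  return `${column}=eq.${value}`
}

// 'in' filters take a parenthesized list (max 100 values)
function filterIn(column, values) {
  return `${column}=in.(${values.join(',')})`
}

console.log(filterEq('room_id', 123))          // 'room_id=eq.123'
console.log(filterIn('name', ['red', 'blue'])) // 'name=in.(red,blue)'
```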
## Receive Old Records on UPDATE/DELETE
By default, only `new` records are sent.
**Incorrect:**
```sql
-- Only new record available in payload
alter publication supabase_realtime add table messages;
```
**Correct:**
```sql
-- Enable old record in payload
alter table messages replica identity full;
alter publication supabase_realtime add table messages;
```
## Scaling Limitation
Each change triggers RLS checks for every subscriber:
```text
100 subscribers = 100 database reads per change
```
For high-traffic tables, migrate to [broadcast-database.md](broadcast-database.md).
## DELETE Events Not Filterable
Filters don't work on DELETE events due to how Postgres logical replication works.
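Since server-side filters are ignored for DELETE, one workaround is to subscribe unfiltered and discard irrelevant deletes in the callback. A sketch assuming the supabase-js v2 payload shape (`eventType`, `old`); the filtered column only appears in `payload.old` when it is part of the replica identity:

```javascript
// DELETE payloads carry the old row (with replica identity full),
// so filter client-side instead of in the subscription options
function isRelevantDelete(payload, roomId) {
  return payload.eventType === 'DELETE' && payload.old?.room_id === roomId
}

const sample = { eventType: 'DELETE', old: { room_id: '123' } }
console.log(isRelevantDelete(sample, '123')) // true
```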
## Related
- [broadcast-database.md](broadcast-database.md)
- [patterns-cleanup.md](patterns-cleanup.md)
- [Docs](https://supabase.com/docs/guides/realtime/postgres-changes)
---
title: Track User Presence and Online Status
impact: MEDIUM
impactDescription: Enables features like online indicators and typing status
tags: realtime, presence, track, online, state
---
## Track User Presence and Online Status
Presence synchronizes shared state between users. Use sparingly due to computational overhead.
## Track Presence
```javascript
const channel = supabase.channel('room:123', {
config: { private: true },
})
channel
.on('presence', { event: 'sync' }, () => {
const state = channel.presenceState()
console.log('Online users:', Object.keys(state))
})
.on('presence', { event: 'join' }, ({ key, newPresences }) => {
console.log('User joined:', key, newPresences)
})
.on('presence', { event: 'leave' }, ({ key, leftPresences }) => {
console.log('User left:', key, leftPresences)
})
.subscribe(async (status) => {
if (status === 'SUBSCRIBED') {
await channel.track({
user_id: 'user-123',
online_at: new Date().toISOString(),
})
}
})
```
## Get Current State
```javascript
const state = channel.presenceState()
// Returns: { "key1": [{ user_id: "123" }], "key2": [{ user_id: "456" }] }
```
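`presenceState()` returns an object keyed by presence key, each holding an array of metas, so deriving a user list is plain object manipulation. A sketch with illustrative data:

```javascript
// Flatten presenceState() output into a deduplicated list of user IDs
function onlineUserIds(state) {
  const ids = Object.values(state).flatMap((metas) => metas.map((m) => m.user_id))
  return [...new Set(ids)]
}

const state = {
  key1: [{ user_id: '123' }],
  key2: [{ user_id: '456' }, { user_id: '123' }],
}
console.log(onlineUserIds(state)) // ['123', '456']
```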
## Stop Tracking
```javascript
await channel.untrack()
```
## Custom Presence Key
By default, presence uses a UUIDv1 key. Override for user-specific tracking.
**Incorrect:**
```javascript
// Each browser tab gets separate presence entry
const channel = supabase.channel('room:123')
```
**Correct:**
```javascript
// Same user shows once across tabs
const channel = supabase.channel('room:123', {
config: {
presence: { key: `user:${userId}` },
},
})
```
## Quotas
| Plan | Presence Messages/Second |
|------|-------------------------|
| Free | 20 |
| Pro | 50 |
| Team/Enterprise | 1,000 |
Pay-as-you-go customers can edit these limits in [Realtime Settings](https://supabase.com/dashboard/project/_/realtime/settings).
## Related
- [setup-channels.md](setup-channels.md)
- [patterns-cleanup.md](patterns-cleanup.md)
- [Docs](https://supabase.com/docs/guides/realtime/presence)
---
title: Configure Private Channels with Authentication
impact: CRITICAL
impactDescription: Prevents unauthorized access to real-time messages
tags: realtime, auth, private, rls, security, setAuth
---
## Configure Private Channels with Authentication
Always use private channels in production. Public channels allow any client to subscribe.
## Enable Private Channels
**Incorrect:**
```javascript
// Public channel - anyone can subscribe
const channel = supabase.channel('room:123:messages')
```
**Correct:**
```javascript
// Private channel requires authentication
const channel = supabase.channel('room:123:messages', {
config: { private: true },
})
```
## RLS Policies on realtime.messages
Private channels require RLS policies on the `realtime.messages` table.
**Read access (subscribe to channel):**
```sql
create policy "authenticated_users_can_receive"
on realtime.messages for select
to authenticated
using (true);
```
**Write access (send to channel):**
```sql
create policy "authenticated_users_can_send"
on realtime.messages for insert
to authenticated
with check (true);
```
**Topic-specific access:**
```sql
-- Only room members can receive messages
create policy "room_members_can_read"
on realtime.messages for select
to authenticated
using (
extension in ('broadcast', 'presence')
and exists (
select 1 from room_members
where user_id = (select auth.uid())
and room_id = split_part(realtime.topic(), ':', 2)::uuid
)
);
```
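`realtime.topic()` returns the topic the client joined, so with `room:123:messages` naming, `split_part(..., ':', 2)` pulls out the room id. A JavaScript mirror of that extraction, for illustration only (`topicField` is not a Supabase API):

```javascript
// Mirrors Postgres split_part(text, delimiter, n): 1-indexed field
// selection, returning '' when n is out of range.
function topicField(topic, n) {
  return topic.split(':')[n - 1] ?? ''
}

// topicField('room:123:messages', 2) is the room id the policy casts to uuid.
```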
## Index RLS Policy Columns
The RLS policies on `realtime.messages` run on every channel join; missing indexes on the columns they filter slow joins significantly.
```sql
create index idx_room_members_user_room
on room_members(user_id, room_id);
```
## Related
- [setup-channels.md](setup-channels.md)
- [broadcast-database.md](broadcast-database.md)
- [Docs](https://supabase.com/docs/guides/realtime/authorization)
---
title: Create and Configure Realtime Channels
impact: HIGH
impactDescription: Proper channel setup enables reliable real-time communication
tags: realtime, channels, configuration, topics, naming
---
## Create and Configure Realtime Channels
Channels are rooms where clients communicate. Use consistent naming and appropriate configuration.
## Topic Naming Convention
Use `scope:id:entity` format for predictable, filterable topics.
**Incorrect:**
```javascript
// Generic names make filtering impossible
const channel = supabase.channel('messages')
const channel = supabase.channel('room1')
```
**Correct:**
```javascript
// Structured naming enables topic-based RLS policies
const channel = supabase.channel('room:123:messages')
const channel = supabase.channel('user:456:notifications')
const channel = supabase.channel('game:789:moves')
```
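A small helper keeps the `scope:id:entity` convention consistent across a codebase — `topicFor` is a hypothetical helper, not part of supabase-js:

```javascript
// Build a 'scope:id:entity' topic, failing fast on parts that would
// break the format (empty, or containing the ':' separator).
function topicFor(scope, id, entity) {
  const parts = [scope, String(id), entity]
  if (parts.some((p) => !p || p.includes(':'))) {
    throw new Error(`invalid topic part in: ${parts.join(' / ')}`)
  }
  return parts.join(':')
}

// const channel = supabase.channel(topicFor('room', roomId, 'messages'))
```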
## Channel Configuration Options
```javascript
const channel = supabase.channel('room:123:messages', {
config: {
private: true, // Require authentication (recommended)
broadcast: {
self: true, // Receive own messages
ack: true, // Get server acknowledgment
},
presence: {
key: 'user-session-id', // Custom presence key (default: UUIDv1)
},
},
})
```
## Event Naming
Use snake_case for event names.
**Incorrect:**
```javascript
channel.send({ type: 'broadcast', event: 'newMessage', payload: {} })
```
**Correct:**
```javascript
channel.send({ type: 'broadcast', event: 'message_created', payload: {} })
channel.send({ type: 'broadcast', event: 'user_joined', payload: {} })
```
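A guard like the following can enforce the convention at send time (`assertEventName` is illustrative, not a Supabase API):

```javascript
// snake_case: one or more lowercase words separated by single underscores.
const SNAKE_CASE = /^[a-z]+(_[a-z]+)*$/

function assertEventName(event) {
  if (!SNAKE_CASE.test(event)) {
    throw new Error(`event names should be snake_case, got "${event}"`)
  }
  return event
}

// channel.send({ type: 'broadcast', event: assertEventName('message_created'), payload: {} })
```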
## Related
- [setup-auth.md](setup-auth.md)
- [Docs](https://supabase.com/docs/guides/realtime/concepts)