feature: storage agent references (#38)

* rebase and housekeeping

* fix: storage reference accuracy after docs review

Fix RLS permission mappings, CDN cache behavior, file management limits,
image transform descriptions, and S3 upload API signatures based on
official Supabase documentation audit.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix offset

* fix move and copy instructions

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Pedro Rodrigues
2026-02-13 14:53:48 +00:00
committed by Pedro Rodrigues
parent ac585342f3
commit af492edaf4
10 changed files with 827 additions and 3 deletions


@@ -28,6 +28,7 @@ supabase/
|----------|----------|--------|--------|
| 1 | Database | CRITICAL | `db-` |
| 2 | Realtime | MEDIUM-HIGH | `realtime-` |
| 3 | Storage | HIGH | `storage-` |

Reference files are named `{prefix}-{topic}.md` (e.g., `query-missing-indexes.md`).
@@ -64,6 +65,15 @@ Reference files are named `{prefix}-{topic}.md` (e.g., `query-missing-indexes.md
- `references/realtime-setup-auth.md`
- `references/realtime-setup-channels.md`
**Storage** (`storage-`):
- `references/storage-access-control.md`
- `references/storage-cdn-caching.md`
- `references/storage-download-urls.md`
- `references/storage-ops-file-management.md`
- `references/storage-transform-images.md`
- `references/storage-upload-resumable.md`
- `references/storage-upload-standard.md`
---
*34 reference files across 3 categories*


@@ -14,8 +14,6 @@ metadata:
Supabase is an open source Firebase alternative that provides a Postgres database, authentication, instant APIs, edge functions, realtime subscriptions, and storage. It's fully compatible with Postgres and provides several language SDKs, including supabase-js and supabase-py.
```
## Overview of Resources
Reference the appropriate resource file based on the user's needs:
@@ -41,6 +39,18 @@ Reference the appropriate resource file based on the user's needs:
| Postgres Changes | `references/realtime-postgres-*.md` | Database change listeners (prefer Broadcast) |
| Patterns | `references/realtime-patterns-*.md` | Cleanup, error handling, React integration |
### Storage
| Area | Resource | When to Use |
| ---------------- | ------------------------------------------- | ---------------------------------------------- |
| Access Control | `references/storage-access-control.md` | Bucket policies, RLS for storage |
| Standard Upload | `references/storage-upload-standard.md` | File uploads up to 5GB |
| Resumable Upload | `references/storage-upload-resumable.md` | Large file uploads with TUS protocol |
| Downloads | `references/storage-download-urls.md` | Public URLs, signed URLs, download methods |
| Transformations | `references/storage-transform-images.md` | Image resize, crop, format conversion |
| CDN & Caching | `references/storage-cdn-caching.md` | Cache control, Smart CDN, stale content |
| File Operations | `references/storage-ops-file-management.md` | Move, copy, delete, list files |
**CLI Usage:** Always use `npx supabase` instead of `supabase` for version consistency across team members.
## Supabase Documentation


@@ -14,3 +14,8 @@ queries.
**Impact:** MEDIUM-HIGH
**Description:** Channel setup, Broadcast messaging, Presence tracking, Postgres Changes listeners, cleanup patterns, error handling, and debugging.
## 3. Storage (storage)
**Impact:** HIGH
**Description:** File uploads (standard and resumable), downloads, signed URLs, image transformations, CDN caching, access control with RLS policies, and file management operations.


@@ -0,0 +1,122 @@
---
title: Configure Storage Access Control
impact: CRITICAL
impactDescription: Prevents unauthorized file access and upload failures
tags: storage, buckets, public, private, rls, policies, security
---
## Configure Storage Access Control
Storage access combines bucket visibility settings with RLS policies on
`storage.objects`. Understanding both is essential.
## Public vs Private Buckets
"Public" ONLY affects unauthenticated downloads. All other operations require
RLS policies.
| Operation | Public Bucket | Private Bucket |
|-----------|---------------|----------------|
| Download | No auth needed | Signed URL or auth header |
| Upload | RLS required | RLS required |
| Update | RLS required | RLS required |
| Delete | RLS required | RLS required |
**Incorrect assumption:**
```javascript
// "Public bucket means anyone can upload" - WRONG
await supabase.storage.from('public-bucket').upload('file.txt', file);
// Error: new row violates row-level security policy
```
## Bucket Configuration
```sql
insert into storage.buckets (id, name, public, file_size_limit, allowed_mime_types)
values (
  'avatars',
  'avatars',
  true,
  5242880, -- 5MB
  array['image/jpeg', 'image/png', 'image/webp']
);
```
## Storage Helper Functions
Use these in RLS policy expressions:
```sql
storage.filename(name) -- 'folder/file.jpg' -> 'file.jpg'
storage.foldername(name) -- 'user/docs/f.pdf' -> ['user', 'docs']
storage.extension(name) -- 'file.jpg' -> 'jpg'
```
## Common RLS Patterns
### User Folder Isolation
```sql
create policy "User folder access"
on storage.objects for all to authenticated
using (
  bucket_id = 'user-files' and
  (storage.foldername(name))[1] = auth.uid()::text
)
with check (
  bucket_id = 'user-files' and
  (storage.foldername(name))[1] = auth.uid()::text
);
```
### Owner-Based Access
```sql
create policy "Owner access"
on storage.objects for all to authenticated
using (owner_id = (select auth.uid()::text))
with check (owner_id = (select auth.uid()::text));
```
### File Type Restriction
```sql
create policy "Images only"
on storage.objects for insert to authenticated
with check (
  bucket_id = 'images' and
  storage.extension(name) in ('jpg', 'jpeg', 'png', 'webp', 'gif')
);
```
### Public Read, Authenticated Write
```sql
create policy "Public read"
on storage.objects for select to public
using (bucket_id = 'public-assets');

create policy "Auth write"
on storage.objects for insert to authenticated
with check (bucket_id = 'public-assets');
```
## SDK Method to RLS Operation
| SDK Method | SQL Operation |
|------------|---------------|
| upload | INSERT |
| upload (upsert) | SELECT + INSERT + UPDATE |
| download | SELECT |
| list | SELECT |
| remove | DELETE |
| move | SELECT + UPDATE |
| copy | SELECT + INSERT |
| copy (upsert) | SELECT + INSERT + UPDATE |
## Related
- [db/rls-common-mistakes.md](../db/rls-common-mistakes.md) - General RLS pitfalls
- [db/rls-policy-types.md](../db/rls-policy-types.md) - PERMISSIVE vs RESTRICTIVE
- [Docs](https://supabase.com/docs/guides/storage/security/access-control)


@@ -0,0 +1,91 @@
---
title: Understand CDN Caching and Stale Content
impact: HIGH
impactDescription: Prevents serving outdated files after updates
tags: storage, cdn, caching, cache-control, stale-content, smart-cdn
---
## Understand CDN Caching and Stale Content
All plans include CDN caching. Smart CDN (Pro+) automatically invalidates the
CDN cache when files change (up to 60s propagation). Without Smart CDN, the CDN
evicts based on regional request activity only.
With Smart CDN (Pro+), `cacheControl` controls **browser** cache only — the CDN
cache is managed automatically. Without Smart CDN, `cacheControl` influences both
browser and CDN cache behavior.
## Smart CDN Behavior (Pro+)
- Automatically invalidates cache when files change
- Propagation delay: up to 60 seconds
- No manual cache purging available
- Bypass cache with query-string versioning: append `?version=1` to the URL,
then increment (`?version=2`) when content changes
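Since there is no manual purge, the query-string versioning step above can be sketched as a small helper (hypothetical, not part of the Supabase SDK):

```javascript
// Hypothetical helper: set or bump a cache-busting `version` query parameter
// on a public storage URL so clients fetch it as a distinct object.
function versionedUrl(publicUrl, version) {
  const url = new URL(publicUrl);
  url.searchParams.set('version', String(version));
  return url.toString();
}

const base = 'https://xxx.supabase.co/storage/v1/object/public/assets/logo.png';
versionedUrl(base, 1); // '...logo.png?version=1'
versionedUrl(base, 2); // bump after the file changes
```

Store the current version alongside the file reference (e.g. in the database) and increment it whenever the file is overwritten.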
## Setting Cache Control
```javascript
await supabase.storage
  .from('assets')
  .upload('logo.png', file, {
    cacheControl: '3600' // 1 hour in seconds
  });
```
## Stale Content Problem
**Incorrect - Overwriting files:**
```javascript
// With Smart CDN: stale for up to 60s. Without: stale until CDN evicts.
await supabase.storage
  .from('avatars')
  .upload('avatar.jpg', newFile, { upsert: true });
```
**Correct - Upload to unique paths:**
```javascript
const filename = `avatar-${Date.now()}.jpg`;
await supabase.storage
  .from('avatars')
  .upload(`user123/${filename}`, newFile);

// Update database reference
await supabase
  .from('profiles')
  .update({ avatar_path: `user123/${filename}` })
  .eq('id', 'user123');

// Delete old file
await supabase.storage.from('avatars').remove([oldPath]);
```
## Cache-Control Guidelines
| Asset Type | Duration | Reasoning |
|------------|----------|-----------|
| User avatars | 3600 (1h) | Changes occasionally |
| Static assets | 31536000 (1y) | Use versioned filenames |
| Documents | 0 | Always fresh |
| Public images | 86400 (1d) | Balance freshness/performance |
## Debugging Cache
Check response headers:
```bash
curl -I "https://<ref>.supabase.co/storage/v1/object/public/bucket/file.jpg"
```
- `Cache-Control`: Configured TTL
- `Age`: Seconds since cached
- `cf-cache-status`: HIT, MISS, STALE, REVALIDATED, UPDATING, EXPIRED, BYPASS,
or DYNAMIC (HIT/STALE/REVALIDATED/UPDATING = cache hit)
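The header interpretation above can be captured in a tiny helper for log or monitoring tooling (hypothetical, not part of any SDK):

```javascript
// Hypothetical helper: true when a cf-cache-status value means the response
// was served from the CDN cache (per the list above).
function isCacheHit(cfCacheStatus) {
  return ['HIT', 'STALE', 'REVALIDATED', 'UPDATING'].includes(cfCacheStatus);
}

isCacheHit('HIT');  // true
isCacheHit('MISS'); // false
```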
## Related
- [upload-standard.md](upload-standard.md) - Upload options
- [Docs](https://supabase.com/docs/guides/storage/cdn/smart-cdn)


@@ -0,0 +1,99 @@
---
title: Choose the Right Download Method
impact: MEDIUM
impactDescription: Ensures correct file access for public and private content
tags: storage, download, signed-url, public-url, getPublicUrl
---
## Choose the Right Download Method
Select the method based on bucket visibility and use case.
## Public URLs (Public Buckets)
**Incorrect:**
```javascript
// Using signed URL for public bucket wastes an API call
const { data, error } = await supabase.storage
  .from('public-bucket')
  .createSignedUrl('image.jpg', 3600);
```
**Correct:**
```javascript
// getPublicUrl is instant - no API call needed for public buckets
const { data } = supabase.storage
  .from('public-bucket')
  .getPublicUrl('folder/image.jpg');
```
**Note:** Returns URL even if file doesn't exist. Does not verify existence.
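Because `getPublicUrl` never checks the object, existence has to be verified separately, for example against a folder listing. A sketch (the `fileExists` helper is hypothetical; `list()` is subject to RLS and its default 100-item page size):

```javascript
// Hypothetical helper: check a list() result page for an exact file name.
// Entries with an `id` are files; entries without one are folder placeholders.
function fileExists(listItems, name) {
  return (listItems ?? []).some(item => item.id && item.name === name);
}

// Usage sketch:
// const { data } = await supabase.storage.from('public-bucket').list('folder');
// if (fileExists(data, 'image.jpg')) { /* safe to hand out the public URL */ }

fileExists([{ id: 'abc', name: 'image.jpg' }], 'image.jpg'); // true
fileExists(null, 'image.jpg'); // false
```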
## Signed URLs (Private Buckets)
```javascript
// Time-limited access URL
const { data, error } = await supabase.storage
  .from('private-bucket')
  .createSignedUrl('document.pdf', 3600); // Expires in 1 hour
```
### Multiple Signed URLs
```javascript
const { data, error } = await supabase.storage
  .from('bucket')
  .createSignedUrls(['file1.pdf', 'file2.pdf'], 3600);
```
## Download as Blob
```javascript
// Download file content directly
const { data, error } = await supabase.storage
  .from('bucket')
  .download('file.pdf');
// data is a Blob
const url = URL.createObjectURL(data);
```
## Force Download vs Render
```javascript
// Force browser download (not render in tab)
const { data } = supabase.storage
  .from('bucket')
  .getPublicUrl('file.pdf', { download: true });

// Custom download filename
const { data: named } = supabase.storage
  .from('bucket')
  .getPublicUrl('file.pdf', { download: 'report-2024.pdf' });
```
## With Image Transformations
```javascript
const { data } = supabase.storage
  .from('images')
  .getPublicUrl('photo.jpg', {
    transform: { width: 200, height: 200, resize: 'cover' }
  });
```
## Method Comparison
| Method | Auth Required | Best For |
|--------|---------------|----------|
| `getPublicUrl` | No (public buckets) | Static assets, avatars |
| `createSignedUrl` | Yes (to create) | Temporary access, private files |
| `download` | Per RLS | Server-side processing |
## Related
- [access-control.md](access-control.md) - Public vs private buckets
- [transform-images.md](transform-images.md) - Image transformations
- [Docs](https://supabase.com/docs/guides/storage/serving/downloads)


@@ -0,0 +1,149 @@
---
title: Manage Files Through the API
impact: MEDIUM
impactDescription: Prevents orphaned files and billing issues
tags: storage, delete, move, copy, list, operations
---
## Manage Files Through the API
Always use SDK methods for file operations. Never modify `storage.objects`
directly via SQL.
## Critical: Never Delete via SQL
**Incorrect - Creates orphaned files:**
```sql
-- NEVER do this! Deletes metadata but file remains on disk
-- The orphaned file continues to consume storage
DELETE FROM storage.objects WHERE name = 'file.jpg';
```
**Correct - Use SDK:**
```javascript
// Deletes both metadata and actual file
await supabase.storage.from('bucket').remove(['file.jpg']);
```
## Delete Files
Limit: 1,000 objects per `remove()` call.
```javascript
// Single or multiple files (max 1,000 per call)
await supabase.storage.from('bucket').remove([
  'folder/file1.jpg',
  'folder/file2.jpg'
]);
```
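For more than 1,000 paths, one approach is to batch the `remove()` calls (the `chunk` helper below is a hypothetical sketch):

```javascript
// Hypothetical helper: split a path list into batches of at most `size`,
// matching the 1,000-object limit per remove() call.
function chunk(paths, size = 1000) {
  const batches = [];
  for (let i = 0; i < paths.length; i += size) {
    batches.push(paths.slice(i, i + size));
  }
  return batches;
}

// Usage sketch:
// for (const batch of chunk(allPaths)) {
//   await supabase.storage.from('bucket').remove(batch);
// }
```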
### Delete Folder Contents
This pattern only handles top-level files. For nested subfolders, recurse into
each subfolder. The default `list()` limit is 100 — paginate for larger folders.
```javascript
async function deleteFolderContents(bucket, folder) {
  const limit = 100;
  while (true) {
    const { data: items, error } = await supabase.storage
      .from(bucket)
      .list(folder, { limit, offset: 0 });
    if (error) throw error; // Don't exit silently on a failed listing
    if (!items?.length) break;
    const files = items.filter(item => item.id);    // files have ids
    const folders = items.filter(item => !item.id); // folders don't
    // Recurse into subfolders
    for (const sub of folders) {
      await deleteFolderContents(bucket, `${folder}/${sub.name}`);
    }
    // Delete files (max 1,000 per call)
    if (files.length) {
      await supabase.storage
        .from(bucket)
        .remove(files.map(f => `${folder}/${f.name}`));
    }
  }
}
```
## Move Files
Max file size: 5GB.
```javascript
await supabase.storage
  .from('bucket')
  .move('old/path/file.jpg', 'new/path/file.jpg');
```
Requires SELECT on source and UPDATE on destination via RLS.
## Copy Files
Max file size: 5GB.
```javascript
await supabase.storage
  .from('bucket')
  .copy('source/file.jpg', 'destination/file.jpg');
```
Requires SELECT on source and INSERT on destination via RLS. With upsert,
additionally requires UPDATE.
## List Files
```javascript
const { data, error } = await supabase.storage
  .from('bucket')
  .list('folder', {
    limit: 100,
    offset: 0,
    sortBy: { column: 'name', order: 'asc' },
    search: 'report' // Filter by name prefix
  });
```
### Paginate All Files
```javascript
async function listAllFiles(bucket, folder) {
  const files = [];
  let offset = 0;
  const limit = 100;
  while (true) {
    const { data } = await supabase.storage
      .from(bucket)
      .list(folder, { limit, offset });
    if (!data?.length) break;
    files.push(...data);
    offset += limit;
  }
  return files;
}
```
## File Info
```javascript
const { data, error } = await supabase.storage
  .from('bucket')
  .info('path/to/file.jpg');

// Returns: id, name, size, metadata, created_at, updated_at
```
## Related
- [access-control.md](access-control.md) - RLS for operations
- [Docs](https://supabase.com/docs/guides/storage/management/delete-objects)


@@ -0,0 +1,117 @@
---
title: Transform Images On-the-Fly
impact: MEDIUM
impactDescription: Reduces bandwidth with server-side image transformations
tags: storage, images, transform, resize, webp, optimization
---
## Transform Images On-the-Fly
Supabase transforms images at request time. Results are cached at the CDN.
Available on Pro plan and above.
## Basic Transformation
```javascript
const { data } = supabase.storage
  .from('images')
  .getPublicUrl('photo.jpg', {
    transform: {
      width: 400,
      height: 300,
      resize: 'cover',
      quality: 80
    }
  });
```
## Resize Modes
| Mode | Behavior |
|------|----------|
| `cover` | Crop to fill dimensions (default) |
| `contain` | Fit within dimensions, keep aspect ratio |
| `fill` | Stretch to fill dimensions |
## Transformation Limits
| Limit | Value |
|-------|-------|
| Max dimension | 2500px |
| Max file size | 25MB |
| Max resolution | 50 megapixels |
**Incorrect:**
```javascript
// Exceeds 2500px limit - will not apply transformation
transform: { width: 3000, height: 3000 }
```
**Correct:**
```javascript
// Within limits - transformation applied
transform: { width: 2500, height: 2500 }
```
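If requested dimensions can come from user input, clamping them client-side avoids silently unapplied transforms. A hypothetical helper based on the 2500px limit above:

```javascript
// Hypothetical helper: clamp requested dimensions to the 2500px maximum
// so the transformation is always applied.
const MAX_DIMENSION = 2500;

function clampTransform({ width, height }) {
  return {
    width: Math.min(width, MAX_DIMENSION),
    height: Math.min(height, MAX_DIMENSION),
  };
}

clampTransform({ width: 3000, height: 1200 }); // { width: 2500, height: 1200 }
```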
## WebP Auto-Optimization
Without explicit format, Supabase serves WebP to supporting browsers:
```javascript
// Browser receives WebP if supported
transform: { width: 400 } // No format = auto WebP
```
To keep original format:
```javascript
transform: { width: 400, format: 'origin' }
```
## Direct URL Parameters
```
https://xxx.supabase.co/storage/v1/render/image/public/bucket/image.jpg
?width=400&height=300&resize=cover&quality=80
```
## Next.js Image Loader
```javascript
// next.config.js
module.exports = {
  images: {
    loader: 'custom',
    loaderFile: './supabase-loader.js',
  },
};

// supabase-loader.js
export default function supabaseLoader({ src, width, quality }) {
  return `${process.env.NEXT_PUBLIC_SUPABASE_URL}/storage/v1/render/image/public/${src}?width=${width}&quality=${quality || 75}`;
}
```
```jsx
<Image src="bucket/photo.jpg" width={400} height={300} alt="Photo" />
```
## Responsive Images
```javascript
const sizes = [320, 640, 1280];
const srcset = sizes.map(w => {
  const { data } = supabase.storage
    .from('images')
    .getPublicUrl('photo.jpg', { transform: { width: w } });
  return `${data.publicUrl} ${w}w`;
}).join(', ');
```
## Related
- [download-urls.md](download-urls.md) - URL generation methods
- [cdn-caching.md](cdn-caching.md) - Transformation caching
- [Docs](https://supabase.com/docs/guides/storage/serving/image-transformations)


@@ -0,0 +1,125 @@
---
title: Use Resumable Uploads for Large Files
impact: HIGH
impactDescription: Enables reliable upload of large files with progress and resume
tags: storage, upload, large-files, tus, resumable, multipart
---
## Use Resumable Uploads for Large Files
For files larger than 6MB, use TUS resumable uploads or S3 multipart uploads.
For optimal performance when uploading large files, use the direct storage
hostname (`https://<ref>.storage.supabase.co`) instead of `https://<ref>.supabase.co`.
## TUS Resumable Upload
```javascript
import * as tus from 'tus-js-client';

const { data: { session } } = await supabase.auth.getSession();

const upload = new tus.Upload(file, {
  endpoint: `https://${projectRef}.storage.supabase.co/storage/v1/upload/resumable`,
  retryDelays: [0, 3000, 5000, 10000, 20000],
  headers: {
    authorization: `Bearer ${session.access_token}`,
    'x-upsert': 'true' // Optional: overwrite existing
  },
  uploadDataDuringCreation: true,
  removeFingerprintOnSuccess: true,
  metadata: {
    bucketName: 'videos',
    objectName: 'folder/video.mp4',
    contentType: 'video/mp4',
    cacheControl: '3600'
  },
  chunkSize: 6 * 1024 * 1024, // Must be 6MB for Supabase
  onError: (error) => console.error('Failed:', error),
  onProgress: (bytesUploaded, bytesTotal) => {
    console.log(`${((bytesUploaded / bytesTotal) * 100).toFixed(1)}%`);
  },
  onSuccess: () => console.log('Complete')
});

upload.start();
```
## Resume Interrupted Upload
```javascript
// Check for previous uploads
const previousUploads = await upload.findPreviousUploads();
if (previousUploads.length > 0) {
  upload.resumeFromPreviousUpload(previousUploads[0]);
}
upload.start();
```
## S3 Multipart Upload
For server-side uploads or S3-compatible tooling:
```javascript
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';
const s3 = new S3Client({
  region: '<your-project-region>',
  endpoint: `https://${projectRef}.storage.supabase.co/storage/v1/s3`,
  credentials: {
    accessKeyId: process.env.STORAGE_ACCESS_KEY,
    secretAccessKey: process.env.STORAGE_SECRET_KEY
  },
  forcePathStyle: true
});

const upload = new Upload({
  client: s3,
  params: {
    Bucket: 'bucket-name',
    Key: 'path/to/file.zip',
    Body: fileStream,
    ContentType: 'application/zip'
  }
});

upload.on('httpUploadProgress', (progress) => {
  console.log(`${progress.loaded}/${progress.total}`);
});

await upload.done();
```
## When to Use Each Method
| Method | Best For |
|--------|----------|
| Standard | < 6MB, simple uploads |
| TUS | > 6MB, browser uploads, unreliable networks |
| S3 Multipart | Server-side, very large files |
Max file sizes vary by plan. See
[Docs](https://supabase.com/docs/guides/storage/uploads/file-limits) for current
limits.
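The table above can be expressed as a simple chooser (hypothetical helper; thresholds follow the 6MB guidance in this file):

```javascript
const SIX_MB = 6 * 1024 * 1024;

// Hypothetical helper: pick an upload method from file size and context.
function chooseUploadMethod(sizeBytes, { serverSide = false } = {}) {
  if (sizeBytes <= SIX_MB) return 'standard';
  return serverSide ? 's3-multipart' : 'tus';
}

chooseUploadMethod(1024 * 1024);                        // 'standard'
chooseUploadMethod(100 * SIX_MB);                       // 'tus'
chooseUploadMethod(100 * SIX_MB, { serverSide: true }); // 's3-multipart'
```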
## TUS Configuration Notes
**Incorrect:**
```javascript
// Wrong chunk size - will fail
chunkSize: 10 * 1024 * 1024 // 10MB - not supported
```
**Correct:**
```javascript
// Supabase requires exactly 6MB chunks
chunkSize: 6 * 1024 * 1024 // 6MB - required
```
- Chunk size must be exactly 6MB for Supabase
- Upload URLs valid for 24 hours
- Use direct storage URL: `https://{ref}.storage.supabase.co/storage/v1/upload/resumable`
## Related
- [upload-standard.md](upload-standard.md) - Small file uploads
- [Docs](https://supabase.com/docs/guides/storage/uploads/resumable-uploads)


@@ -0,0 +1,96 @@
---
title: Use Standard Uploads for Small Files
impact: HIGH
impactDescription: Ensures reliable uploads for files under 6MB
tags: storage, upload, small-files, upsert, signed-upload
---
## Use Standard Uploads for Small Files
Standard upload works best for files up to 6MB. For larger files, use resumable
uploads.
## Basic Upload
```javascript
const { data, error } = await supabase.storage
  .from('bucket-name')
  .upload('folder/file.jpg', file, {
    cacheControl: '3600',
    upsert: false // Fail if exists (default)
  });
```
## Upsert Behavior
```javascript
// Replace existing file
await supabase.storage
  .from('bucket-name')
  .upload('folder/file.jpg', file, { upsert: true });
```
**Warning:** With Smart CDN (Pro+), upsert can serve stale content for up to 60
seconds while the cache invalidates. Without Smart CDN, stale content persists
until CDN eviction. Consider unique paths instead.
## Concurrent Upload Conflicts
Without `upsert: true`, first client to complete wins. Others get
`400 Asset Already Exists`.
**Incorrect:**
```javascript
// Same filename causes conflicts in concurrent uploads
await supabase.storage.from('uploads').upload('avatar.jpg', file);
// Error: Asset Already Exists (if another upload completed first)
```
**Correct:**
```javascript
// Unique filenames prevent conflicts
const filename = `${Date.now()}-${crypto.randomUUID()}.jpg`;
await supabase.storage.from('uploads').upload(filename, file);
```
## Upload with Metadata
```javascript
await supabase.storage
  .from('documents')
  .upload('report.pdf', file, {
    contentType: 'application/pdf',
    cacheControl: '86400',
    metadata: { uploadedBy: user.id, version: '1.0' }
  });
```
## Signed Upload URLs
Allow direct client uploads without exposing credentials:
```javascript
// Server: Generate signed URL
const { data, error } = await supabase.storage
  .from('uploads')
  .createSignedUploadUrl('folder/file.jpg');

// Client: Upload directly using token
await supabase.storage
  .from('uploads')
  .uploadToSignedUrl('folder/file.jpg', data.token, file);
```
## Size Limits
File size limits vary by plan. See
[Docs](https://supabase.com/docs/guides/storage/uploads/file-limits) for current
limits. Use resumable uploads for files > 6MB.
## Related
- [upload-resumable.md](upload-resumable.md) - Large file uploads
- [cdn-caching.md](cdn-caching.md) - Cache invalidation
- [Docs](https://supabase.com/docs/guides/storage/uploads/standard-uploads)