S3-Compatible Storage
Gallery supports using S3-compatible object storage (such as AWS S3, MinIO, Cloudflare R2, Backblaze B2, or Wasabi) as the storage backend for new uploads. This is useful for scaling storage independently of the server, leveraging cloud durability, or integrating with existing infrastructure.
If you have existing files on disk, you can migrate them to S3 using the built-in Storage Migration tool.
How It Works
When S3 storage is enabled:
- New uploads (photos, videos, thumbnails, transcoded videos, profile images) are written to your S3 bucket.
- Existing files on disk continue to be served from disk — both backends run simultaneously.
- The Storage Template determines the S3 object key at upload time.
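The template-to-key step can be sketched as a simple substitution. This is an illustrative sketch only — the placeholder names (`{{userId}}`, `{{y}}`, `{{MM}}`, `{{filename}}`) and the function name are hypothetical, not Gallery's actual template variables:

```python
from datetime import datetime

def render_storage_key(template: str, user_id: str, filename: str, taken_at: datetime) -> str:
    """Render a relative S3 object key from a storage template.

    Placeholder names are illustrative, not Gallery's real template syntax.
    """
    values = {
        "{{userId}}": user_id,
        "{{y}}": f"{taken_at.year:04d}",
        "{{MM}}": f"{taken_at.month:02d}",
        "{{filename}}": filename,
    }
    key = template
    for placeholder, value in values.items():
        key = key.replace(placeholder, value)
    # S3 object keys are relative: strip any leading slash.
    return key.lstrip("/")

key = render_storage_key(
    "library/{{userId}}/{{y}}/{{MM}}/{{filename}}",
    user_id="user-1",
    filename="beach.jpg",
    taken_at=datetime(2024, 7, 14),
)
print(key)  # library/user-1/2024/07/beach.jpg
```

The important property is that the result is a relative key — as described below, relative paths are what mark a file as S3-backed.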
Gallery supports two modes for serving files from S3:
| Mode | Behavior |
|---|---|
| redirect | Returns a temporary presigned URL. The client downloads directly from S3. Recommended when the browser can reach the S3 endpoint. |
| proxy | The Gallery server streams the file from S3 to the client. Use only when S3 is not directly reachable by browsers. |
The recent direct-media delivery change makes redirect the normal S3 mode for browser-reachable buckets. Before switching an existing deployment from proxy to redirect, apply bucket CORS for your Gallery origins; otherwise canvas-based features can fail.
For most deployments, use redirect. Only use proxy when browsers cannot reach your S3 endpoint directly.
Environment Variables
All S3 variables are set on the immich-server container.
| Variable | Description | Default | Required |
|---|---|---|---|
| IMMICH_STORAGE_BACKEND | Storage backend for new uploads (disk or s3) | disk | Yes (set to s3) |
| IMMICH_S3_BUCKET | S3 bucket name | — | Yes |
| IMMICH_S3_REGION | AWS region (or region of your S3-compatible provider) | us-east-1 | No |
| IMMICH_S3_ENDPOINT | Custom endpoint URL for S3-compatible services (e.g. MinIO, R2) | — | No*1 |
| IMMICH_S3_ACCESS_KEY_ID | Access key ID | — | No*2 |
| IMMICH_S3_SECRET_ACCESS_KEY | Secret access key | — | No*2 |
| IMMICH_S3_PRESIGNED_URL_EXPIRY | Presigned URL expiration time in seconds (only relevant for redirect mode) | 3600 | No |
| IMMICH_S3_SERVE_MODE | How to serve S3 assets: redirect for normal deployments; proxy when browsers cannot reach S3 directly | redirect | No |
*1: Required for non-AWS S3-compatible services (MinIO, R2, B2, etc.). Omit for AWS S3.
*2: If omitted, the AWS SDK falls back to IAM role credentials (e.g. EC2 instance roles, ECS task roles, IRSA on EKS). For non-AWS services, these are typically required.
Setup Guide
1. Create an S3 Bucket
AWS S3
- Open the AWS S3 Console and click Create bucket.
- Choose a bucket name (e.g. my-gallery-storage) and region.
- Leave "Block all public access" enabled — Gallery uses presigned URLs or proxying, so the bucket does not need to be public.
- Create the bucket.
- Create an IAM user (or use an existing one) with programmatic access. Attach a policy granting access to your bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket"],
"Resource": ["arn:aws:s3:::my-gallery-storage", "arn:aws:s3:::my-gallery-storage/*"]
}
]
}
- Note the Access Key ID and Secret Access Key.
MinIO
- Install and start MinIO (or add it to your Docker Compose stack).
- Open the MinIO Console and create a bucket (e.g. gallery).
- Create an access key pair from the MinIO Console or CLI.
- Note the endpoint URL (e.g. http://minio:9000 if running in the same Docker network, or http://<host-ip>:9000 if external).
Cloudflare R2
- In the Cloudflare dashboard, go to R2 Object Storage and create a bucket.
- Under Manage R2 API Tokens, create a token with read/write access to your bucket.
- Note the Account ID from your Cloudflare dashboard. Your S3 endpoint will be https://<account-id>.r2.cloudflarestorage.com.
- Note the Access Key ID and Secret Access Key from the API token.
2. Configure Environment Variables
Add the S3 variables to your .env file:
IMMICH_STORAGE_BACKEND=s3
IMMICH_S3_BUCKET=my-gallery-storage
IMMICH_S3_REGION=us-east-1
IMMICH_S3_ACCESS_KEY_ID=your-access-key
IMMICH_S3_SECRET_ACCESS_KEY=your-secret-key
For S3-compatible services, also set the endpoint:
IMMICH_S3_ENDPOINT=https://your-s3-endpoint.example.com
IMMICH_S3_SERVE_MODE=redirect
3. Choose a Serve Mode
Pick the mode that fits your setup:
- redirect (default, recommended) — Use this unless you have a hard network constraint. Gallery authorizes the API request and returns a short-lived presigned URL, so media bytes flow directly from S3 to the browser.
- proxy — Fallback mode for private-network S3 endpoints. Gallery streams every media byte through the API process, so it costs more server resources and is not recommended for large scrolling grids.
IMMICH_S3_SERVE_MODE=proxy
If you change an existing deployment from proxy to redirect, treat bucket CORS as part of the same rollout.
4. Configure CORS For Redirect Mode
Redirect mode keeps the bucket private, but browsers still need CORS headers when Gallery loads S3 media directly. Without bucket CORS, normal image viewing may appear to work while editing, face crops, video thumbnails, copy-to-clipboard, or browser canvas operations fail with a CORS error.
Apply bucket CORS before you enable IMMICH_S3_SERVE_MODE=redirect on an existing instance.
CORS does not make the bucket public. It only tells browsers which Gallery origins may read responses from valid presigned URLs. Keep normal bucket public access disabled unless your provider requires a different setup.
Pick the Correct Origins
An origin is only the scheme, host, and optional port. It must not include a path or trailing slash.
Use every browser URL that people use to open Gallery:
- https://gallery.example.com for production;
- https://photos.example.com if you also expose Gallery on another hostname;
- http://localhost:2283 for local Docker testing;
- http://localhost:3000 only if you run the web dev server.
Do not put API paths, album paths, or S3 bucket URLs in AllowedOrigins.
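The "origin only" rule can be checked mechanically. This hypothetical helper (not part of Gallery) shows what counts as a valid `AllowedOrigins` entry:

```python
from urllib.parse import urlsplit

def is_valid_cors_origin(value: str) -> bool:
    """Check that a string is a bare origin: scheme://host[:port],
    with no path, query, fragment, or trailing slash."""
    parts = urlsplit(value)
    return (
        parts.scheme in ("http", "https")
        and bool(parts.hostname)
        and parts.path == ""        # rejects a trailing "/" and any path
        and not parts.query
        and not parts.fragment
    )

assert is_valid_cors_origin("https://gallery.example.com")
assert is_valid_cors_origin("http://localhost:2283")
assert not is_valid_cors_origin("https://gallery.example.com/")        # trailing slash
assert not is_valid_cors_origin("https://gallery.example.com/photos")  # path
assert not is_valid_cors_origin("gallery.example.com")                 # missing scheme
```

Running candidate entries through a check like this before saving the bucket policy catches the trailing-slash and path mistakes listed later in this guide.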
For redirect mode, the S3 endpoint in IMMICH_S3_ENDPOINT must also be reachable by the browser. If users open Gallery over HTTPS, use an HTTPS S3 endpoint or custom domain; browsers can block http:// media from an HTTPS page as mixed content.
Recommended S3 CORS Policy
For AWS S3 and most S3-compatible providers, use this policy and replace the origins with your real Gallery origins:
{
"CORSRules": [
{
"AllowedOrigins": ["https://gallery.example.com", "http://localhost:3000", "http://localhost:2283"],
"AllowedMethods": ["GET", "HEAD"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["Accept-Ranges", "Content-Length", "Content-Range", "Content-Type", "ETag"],
"MaxAgeSeconds": 3600
}
]
}
This allows browser reads of presigned objects from Gallery. GET loads media, HEAD allows metadata checks when a provider or tool uses them, AllowedHeaders covers preflight headers, and ExposeHeaders lets Gallery and browser media features read range, size, type, and cache validation headers.
Do not use "*" for production origins. Gallery media requests use anonymous CORS today, but explicit origins are safer and avoid surprises if credentialed browser requests are introduced later.
Apply the Policy on AWS S3
In the AWS Console:
- Open the S3 bucket.
- Go to Permissions.
- Find Cross-origin resource sharing (CORS) and choose Edit.
- Paste the JSON policy above.
- Save changes.
Or use the AWS CLI:
aws s3api put-bucket-cors \
--bucket my-gallery-storage \
--cors-configuration '{"CORSRules":[{"AllowedOrigins":["https://gallery.example.com","http://localhost:3000","http://localhost:2283"],"AllowedMethods":["GET","HEAD"],"AllowedHeaders":["*"],"ExposeHeaders":["Accept-Ranges","Content-Length","Content-Range","Content-Type","ETag"],"MaxAgeSeconds":3600}]}'
If you prefer a file, save the policy as cors.json and run:
aws s3api put-bucket-cors \
--bucket my-gallery-storage \
--cors-configuration file://cors.json
Apply the Policy on S3-Compatible Providers
For MinIO, Wasabi, Backblaze B2, and other providers that accept AWS S3 API calls, use the same put-bucket-cors command with your endpoint:
aws s3api put-bucket-cors \
--endpoint-url https://your-s3-endpoint.example.com \
--bucket my-gallery-storage \
--cors-configuration file://cors.json
If your provider has a bucket CORS UI instead of an AWS-compatible CLI, enter the same origins, methods, headers, exposed headers, and max age there.
For MinIO in the same Docker Compose network, you normally keep IMMICH_S3_SERVE_MODE=proxy because browsers cannot reach http://minio:9000. Only configure CORS and use redirect when the endpoint in IMMICH_S3_ENDPOINT is reachable from the browser, such as https://minio.example.com.
If you use a CDN or custom domain in front of your S3 provider, make sure it forwards the browser's Origin request header to S3 or applies an equivalent CORS response-header policy. Purge the CDN cache after changing CORS so old responses without CORS headers do not linger.
Apply the Policy on Cloudflare R2
Cloudflare R2 accepts CORS from the bucket settings page:
- Open R2 Object Storage in the Cloudflare dashboard.
- Select the bucket.
- Open Settings.
- Under CORS Policy, choose Add CORS policy.
- Use the JSON tab and paste this R2 policy, replacing the origins:
[
{
"AllowedOrigins": ["https://gallery.example.com", "http://localhost:2283"],
"AllowedMethods": ["GET", "HEAD"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["Accept-Ranges", "Content-Length", "Content-Range", "Content-Type", "ETag"],
"MaxAgeSeconds": 3600
}
]
You can also use Wrangler:
{
"rules": [
{
"allowed": {
"origins": ["https://gallery.example.com", "http://localhost:2283"],
"methods": ["GET", "HEAD"],
"headers": ["*"]
},
"exposeHeaders": ["Accept-Ranges", "Content-Length", "Content-Range", "Content-Type", "ETag"],
"maxAgeSeconds": 3600
}
]
}
npx wrangler r2 bucket cors set my-gallery-storage --file cors.json
npx wrangler r2 bucket cors list my-gallery-storage
If you serve R2 through a custom domain or CDN, purge that cache after changing CORS so old responses without CORS headers do not linger.
Verify CORS
After saving the policy, test from the same browser origin you configured:
- Restart Gallery if you changed IMMICH_S3_SERVE_MODE.
- Open Gallery from the exact origin in AllowedOrigins.
- Open a photo or video that is stored on S3.
- Open browser developer tools and check the media request after Gallery redirects to S3.
- Confirm the S3 response includes access-control-allow-origin with your Gallery origin.
- Try editing an image, viewing face crops, copying an image to the clipboard, and playing a video.
You can also test with curl by sending an Origin header. Replace the URL with a fresh presigned S3 URL copied from the browser network panel. Use GET, not HEAD, because presigned URLs are method-specific:
curl -sS -D - -o /dev/null \
-H 'Origin: https://gallery.example.com' \
'https://my-gallery-storage.s3.eu-west-1.amazonaws.com/path/to/object?...'
The response should include access-control-allow-origin: https://gallery.example.com. A request without an Origin header may not show CORS headers, even when the policy is correct.
Common CORS Mistakes
- AllowedOrigins contains https://gallery.example.com/ with a trailing slash. Use https://gallery.example.com.
- AllowedOrigins contains a path such as https://gallery.example.com/photos. Use only the origin.
- The user opens Gallery through a different hostname than the one in the policy.
- HEAD is missing from AllowedMethods.
- A CDN or custom domain does not forward the Origin request header, overrides CORS response headers, or cached the old response before CORS was configured.
- IMMICH_S3_ENDPOINT points at an internal Docker hostname such as http://minio:9000 while IMMICH_S3_SERVE_MODE=redirect; browsers outside Docker cannot reach that endpoint. Use proxy or expose S3 on a browser-reachable hostname.
- Gallery is opened over HTTPS but IMMICH_S3_ENDPOINT uses plain HTTP. Use an HTTPS endpoint for redirect mode.
5. Restart Gallery
Recreate the containers to apply the new environment variables:
docker compose up -d
New uploads will now be stored in your S3 bucket. Existing files on disk will continue to be served normally.
For an existing deployment switching from proxy to redirect, the safe order is:
- Apply bucket CORS.
- Set IMMICH_S3_SERVE_MODE=redirect.
- Recreate the Gallery containers.
- Verify thumbnails, editing, face crops, copy-to-clipboard, and video playback from a browser.
Example Configurations
AWS S3
IMMICH_STORAGE_BACKEND=s3
IMMICH_S3_BUCKET=my-gallery-storage
IMMICH_S3_REGION=eu-west-1
IMMICH_S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
IMMICH_S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
MinIO (Docker Compose)
IMMICH_STORAGE_BACKEND=s3
IMMICH_S3_BUCKET=gallery
IMMICH_S3_ENDPOINT=http://minio:9000
IMMICH_S3_ACCESS_KEY_ID=minioadmin
IMMICH_S3_SECRET_ACCESS_KEY=minioadmin
IMMICH_S3_SERVE_MODE=proxy
When MinIO runs in the same Docker Compose stack, use the service name (e.g. http://minio:9000) as the endpoint.
Set IMMICH_S3_SERVE_MODE=proxy since clients cannot reach the internal Docker network directly.
Cloudflare R2
IMMICH_STORAGE_BACKEND=s3
IMMICH_S3_BUCKET=my-gallery-storage
IMMICH_S3_ENDPOINT=https://abc123.r2.cloudflarestorage.com
IMMICH_S3_ACCESS_KEY_ID=your-r2-access-key
IMMICH_S3_SECRET_ACCESS_KEY=your-r2-secret-key
FAQ
Can I migrate existing files from disk to S3? Yes! Use the built-in Storage Migration tool. It supports bidirectional migration, is resumable and idempotent, and includes rollback support.
Do I need to make my S3 bucket public?
No. Gallery uses presigned URLs (in redirect mode) or proxies the files through the server (in proxy mode). The bucket should remain private.
What happens if I switch back to disk storage? Files already stored in S3 will continue to be served from S3. Only new uploads will go to disk. Both backends are always active.
Can I use IAM roles instead of access keys?
Yes. If you omit IMMICH_S3_ACCESS_KEY_ID and IMMICH_S3_SECRET_ACCESS_KEY, the AWS SDK will use the standard credential chain (environment variables, IAM roles, instance metadata, etc.).
Technical Implementation
Storage Abstraction
S3 support is built on a StorageBackend interface that both the disk and S3 backends implement:
StorageBackend interface
├── put(key, source)
├── get(key) → stream
├── exists(key)
├── delete(key)
├── getServeStrategy(key) → file | redirect | stream
└── downloadToTemp(key) → tempPath + cleanup
┌───────────────────┐ ┌───────────────────┐
│ DiskStorageBackend│ │ S3StorageBackend │
├───────────────────┤ ├───────────────────┤
│ Reads/writes to │ │ AWS SDK v3 │
│ local filesystem │ │ @aws-sdk/client-s3 │
│ │ │ Multipart uploads │
│ getServeStrategy: │ │ │
│ → { type: file } │ │ getServeStrategy: │
└───────────────────┘ │ redirect mode: │
│ → presigned URL │
│ proxy mode: │
│ → S3 stream │
└───────────────────┘
The StorageService manages both backends as static singletons and routes operations based on the file path format.
Dual Backend Routing
The key insight is that the file path format determines the backend:
- Absolute paths (e.g., /usr/src/app/upload/library/user/file.jpg) — legacy disk files, routed to the disk backend.
- Relative paths (e.g., library/user/file.jpg) — S3 files, routed to the S3 backend.
This means no database schema changes were needed. Existing originalPath, path, thumbnailPath, and profileImagePath columns store either format, and the resolveBackendForKey() function dispatches to the correct backend at runtime. Both backends are always active — the IMMICH_STORAGE_BACKEND setting only controls where new writes go.
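The routing rule is small enough to express directly. This sketch mirrors only the documented rule (absolute path means disk, relative path means S3); the real resolveBackendForKey() lives in the Gallery server and this is not its actual code:

```python
import os.path

DISK, S3 = "disk", "s3"

def resolve_backend_for_key(path: str) -> str:
    """Sketch of the routing rule: absolute paths are legacy disk files,
    relative paths are S3 object keys."""
    return DISK if os.path.isabs(path) else S3

assert resolve_backend_for_key("/usr/src/app/upload/library/user/file.jpg") == DISK
assert resolve_backend_for_key("library/user/file.jpg") == S3
```

Because the distinction is encoded in the stored path itself, mixed deployments need no per-asset backend column: every read resolves its backend at runtime from the value already in the database.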
Serve Modes
When a client requests an asset, BaseService.serveFromBackend() asks the resolved backend for a serve strategy and returns one of three response types:
| Backend | Mode | Response | Client Behavior |
|---|---|---|---|
| Disk | — | ImmichFileResponse | Express sends the local file directly |
| S3 | redirect | ImmichRedirectResponse | HTTP 302 to a presigned URL; client fetches from S3 |
| S3 | proxy | ImmichStreamResponse | Server streams S3 data through to the client |
Presigned URLs expire after IMMICH_S3_PRESIGNED_URL_EXPIRY seconds (default 3600). Gallery sends Cache-Control: private, no-cache, no-transform on redirect responses so browsers do not reuse an expired 302. The S3 backend signs content type and filename response overrides when they are available, so inline display and explicit downloads behave consistently after the browser follows the redirect.
The S3 backend uses forcePathStyle: true when a custom endpoint is configured, which is required for MinIO, DigitalOcean Spaces, and similar providers.
Upload Flow
When S3 is the write backend, uploads follow this path:
- File is uploaded to a local temp directory (standard NestJS multipart handling).
- The storage template generates a relative key for the S3 object.
- The file is uploaded to S3 using the @aws-sdk/lib-storage Upload class (supports automatic multipart for large files).
- The database is updated with the relative path.
- A cleanup job deletes the local temp file.
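The steps above can be sketched end to end with a stubbed backend. Everything here (FakeS3Backend, handle_upload, the in-memory dict standing in for the database) is illustrative, not Gallery code:

```python
import os
import tempfile

class FakeS3Backend:
    """Stand-in for the S3 backend; records uploaded keys in memory."""
    def __init__(self):
        self.objects = {}

    def put(self, key: str, source_path: str) -> None:
        with open(source_path, "rb") as f:
            self.objects[key] = f.read()

def handle_upload(backend, db: dict, asset_id: str, temp_path: str, key: str) -> None:
    """Sketch of the documented flow: push the temp file to S3, record
    the relative key in the database, then delete the temp file."""
    backend.put(key, temp_path)   # step 3: upload (multipart in the real backend)
    db[asset_id] = key            # step 4: store the relative path
    os.remove(temp_path)          # step 5: cleanup job removes the temp file

backend, db = FakeS3Backend(), {}
with tempfile.NamedTemporaryFile(delete=False, suffix=".jpg") as tmp:
    tmp.write(b"jpeg-bytes")      # step 1: multipart handler wrote a temp file

handle_upload(backend, db, "asset-1", tmp.name, "library/user-1/2024/07/beach.jpg")
assert db["asset-1"] == "library/user-1/2024/07/beach.jpg"
assert backend.objects[db["asset-1"]] == b"jpeg-bytes"
assert not os.path.exists(tmp.name)  # temp file is gone after cleanup
```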
Profile images (both user-uploaded and OAuth-synced) follow the same pattern: the file is written to disk first, then uploaded to S3 if the write backend is S3, and the local temp file is cleaned up.
For operations that require filesystem access (ffmpeg transcoding, exiftool metadata extraction), the S3 backend provides a downloadToTemp() method that streams the object to a local temp file and returns a cleanup function.
Archive Downloads
Album, selection, and shared-link archive downloads work with both disk and S3-backed assets. For S3 assets, Gallery opens object streams lazily and serializes ZIP entry appends so large archives do not exhaust the S3 connection pool. This is most visible in proxy mode or when downloading many S3-only assets through the server.
Cleanup Behavior
Deleting a user removes that user's storage prefix from the active backend. On S3, Gallery lists and deletes all objects under the user's prefix; on disk, it removes the matching directory tree. The cleanup is idempotent, so rerunning a failed delete is safe.
Sidecar copy operations also respect the target asset's backend. Copying XMP metadata from one asset to another downloads the source sidecar to a temporary local file when needed, then writes the target sidecar either to disk or to the relative S3 key for that asset.