S3-Compatible Storage

Gallery supports using S3-compatible object storage (such as AWS S3, MinIO, Cloudflare R2, Backblaze B2, or Wasabi) as the storage backend for new uploads. This is useful for scaling storage independently of the server, leveraging cloud durability, or integrating with existing infrastructure.

tip

If you have existing files on disk, you can migrate them to S3 using the built-in Storage Migration tool.

How It Works

When S3 storage is enabled:

  • New uploads (photos, videos, thumbnails, transcoded videos, profile images) are written to your S3 bucket.
  • Existing files on disk continue to be served from disk — both backends run simultaneously.
  • The Storage Template determines the S3 object key at upload time.
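The key generation in the last bullet can be sketched roughly as follows. This is illustrative only: the function name, template syntax, and variable names are placeholders, not Gallery's actual storage-template engine.

```typescript
// Illustrative sketch: render a storage-template string such as
// "library/{userId}/{year}/{filename}" into the relative key that
// becomes the S3 object key at upload time.
function renderStorageTemplate(
  template: string,
  vars: Record<string, string>,
): string {
  // Replace each {placeholder} with the matching variable, or '' if missing.
  return template.replace(/\{(\w+)\}/g, (_, name: string) => vars[name] ?? '');
}
```

For example, `renderStorageTemplate('library/{userId}/{year}/{filename}', { userId: 'u1', year: '2024', filename: 'img.jpg' })` produces the relative key `library/u1/2024/img.jpg`.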

Gallery supports two modes for serving files from S3:

| Mode | Behavior |
| --- | --- |
| redirect | Returns a temporary presigned URL. The client downloads directly from S3. Recommended when the browser can reach the S3 endpoint. |
| proxy | The Gallery server streams the file from S3 to the client. Use only when S3 is not directly reachable by browsers. |
info

The recent direct-media delivery change makes redirect the normal S3 mode for browser-reachable buckets. Before switching an existing deployment from proxy to redirect, apply bucket CORS for your Gallery origins; otherwise canvas-based features can fail.

For most deployments, use redirect. Only use proxy when browsers cannot reach your S3 endpoint directly.

Environment Variables

All S3 variables are set on the immich-server container.

| Variable | Description | Default | Required |
| --- | --- | --- | --- |
| IMMICH_STORAGE_BACKEND | Storage backend for new uploads (disk or s3) | disk | Yes (set to s3) |
| IMMICH_S3_BUCKET | S3 bucket name | | Yes |
| IMMICH_S3_REGION | AWS region (or region of your S3-compatible provider) | us-east-1 | No |
| IMMICH_S3_ENDPOINT | Custom endpoint URL for S3-compatible services (e.g. MinIO, R2) | | No*1 |
| IMMICH_S3_ACCESS_KEY_ID | Access key ID | | No*2 |
| IMMICH_S3_SECRET_ACCESS_KEY | Secret access key | | No*2 |
| IMMICH_S3_PRESIGNED_URL_EXPIRY | Presigned URL expiration time in seconds (only relevant for redirect mode) | 3600 | No |
| IMMICH_S3_SERVE_MODE | How to serve S3 assets: use redirect for normal deployments; proxy is the fallback when browsers cannot reach S3 directly | redirect | No |

*1: Required for non-AWS S3-compatible services (MinIO, R2, B2, etc.). Omit for AWS S3.

*2: If omitted, the AWS SDK falls back to IAM role credentials (e.g. EC2 instance roles, ECS task roles, IRSA on EKS). For non-AWS services, these are typically required.

Setup Guide

1. Create an S3 Bucket

AWS S3
  1. Open the AWS S3 Console and click Create bucket.
  2. Choose a bucket name (e.g. my-gallery-storage) and region.
  3. Leave "Block all public access" enabled — Gallery uses presigned URLs or proxying, so the bucket does not need to be public.
  4. Create the bucket.
  5. Create an IAM user (or use an existing one) with programmatic access. Attach a policy granting access to your bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-gallery-storage", "arn:aws:s3:::my-gallery-storage/*"]
    }
  ]
}
  6. Note the Access Key ID and Secret Access Key.
MinIO
  1. Install and start MinIO (or add it to your Docker Compose stack).
  2. Open the MinIO Console and create a bucket (e.g. gallery).
  3. Create an access key pair from the MinIO Console or CLI.
  4. Note the endpoint URL (e.g. http://minio:9000 if running in the same Docker network, or http://<host-ip>:9000 if external).
Cloudflare R2
  1. In the Cloudflare dashboard, go to R2 Object Storage and create a bucket.
  2. Under Manage R2 API Tokens, create a token with read/write access to your bucket.
  3. Note the Account ID from your Cloudflare dashboard. Your S3 endpoint will be https://<account-id>.r2.cloudflarestorage.com.
  4. Note the Access Key ID and Secret Access Key from the API token.

2. Configure Environment Variables

Add the S3 variables to your .env file:

.env
IMMICH_STORAGE_BACKEND=s3
IMMICH_S3_BUCKET=my-gallery-storage
IMMICH_S3_REGION=us-east-1
IMMICH_S3_ACCESS_KEY_ID=your-access-key
IMMICH_S3_SECRET_ACCESS_KEY=your-secret-key

For S3-compatible services, also set the endpoint:

.env
IMMICH_S3_ENDPOINT=https://your-s3-endpoint.example.com
IMMICH_S3_SERVE_MODE=redirect

3. Choose a Serve Mode

Pick the mode that fits your setup:

  • redirect (default, recommended) — Use this unless you have a hard network constraint. Gallery authorizes the API request and returns a short-lived presigned URL, so media bytes flow directly from S3 to the browser.
  • proxy — Fallback mode for private-network S3 endpoints. Gallery streams every media byte through the API process, so it costs more server resources and is not the recommended mode for large scrolling grids.
.env
IMMICH_S3_SERVE_MODE=proxy

If you change an existing deployment from proxy to redirect, treat bucket CORS as part of the same rollout.

4. Configure CORS For Redirect Mode

Redirect mode keeps the bucket private, but browsers still need CORS headers when Gallery loads S3 media directly. Without bucket CORS, normal image viewing may appear to work while editing, face crops, video thumbnails, copy-to-clipboard, or browser canvas operations fail with a CORS error.

Apply bucket CORS before you enable IMMICH_S3_SERVE_MODE=redirect on an existing instance.

note

CORS does not make the bucket public. It only tells browsers which Gallery origins may read responses from valid presigned URLs. Keep normal bucket public access disabled unless your provider requires a different setup.

Pick the Correct Origins

An origin is only the scheme, host, and optional port. It must not include a path or trailing slash.

Use every browser URL that people use to open Gallery:

  • https://gallery.example.com for production;
  • https://photos.example.com if you also expose Gallery on another hostname;
  • http://localhost:2283 for local Docker testing;
  • http://localhost:3000 only if you run the web dev server.

Do not put API paths, album paths, or S3 bucket URLs in AllowedOrigins.
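The origin rule can be checked mechanically. This hypothetical helper (assuming a Node or browser environment, where URL is global) encodes it: a string is a bare origin exactly when URL parses it and echoes it back unchanged as its .origin.

```typescript
// A value is a valid CORS origin only if URL round-trips it exactly:
// any path, trailing slash, query, or fragment makes .origin differ.
function isBareOrigin(value: string): boolean {
  try {
    return new URL(value).origin === value;
  } catch {
    return false; // not parseable as a URL at all
  }
}
```

For example, `isBareOrigin('http://localhost:2283')` is true, while `isBareOrigin('https://gallery.example.com/')` and `isBareOrigin('https://gallery.example.com/photos')` are both false.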

For redirect mode, the S3 endpoint in IMMICH_S3_ENDPOINT must also be reachable by the browser. If users open Gallery over HTTPS, use an HTTPS S3 endpoint or custom domain; browsers can block http:// media from an HTTPS page as mixed content.

For AWS S3 and most S3-compatible providers, use this policy and replace the origins with your real Gallery origins:

{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://gallery.example.com", "http://localhost:3000", "http://localhost:2283"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["Accept-Ranges", "Content-Length", "Content-Range", "Content-Type", "ETag"],
      "MaxAgeSeconds": 3600
    }
  ]
}

This allows browser reads of presigned objects from Gallery. GET loads media, HEAD allows metadata checks when a provider or tool uses them, AllowedHeaders covers preflight headers, and ExposeHeaders lets Gallery and browser media features read range, size, type, and cache validation headers.

Do not use "*" for production origins. Gallery media requests use anonymous CORS today, but explicit origins are safer and avoid surprises if credentialed browser requests are introduced later.

Apply the Policy on AWS S3

In the AWS Console:

  1. Open the S3 bucket.
  2. Go to Permissions.
  3. Find Cross-origin resource sharing (CORS) and choose Edit.
  4. Paste the JSON policy above.
  5. Save changes.

Or use the AWS CLI:

aws s3api put-bucket-cors \
  --bucket my-gallery-storage \
  --cors-configuration '{"CORSRules":[{"AllowedOrigins":["https://gallery.example.com","http://localhost:3000","http://localhost:2283"],"AllowedMethods":["GET","HEAD"],"AllowedHeaders":["*"],"ExposeHeaders":["Accept-Ranges","Content-Length","Content-Range","Content-Type","ETag"],"MaxAgeSeconds":3600}]}'

If you prefer a file, save the policy as cors.json and run:

aws s3api put-bucket-cors \
  --bucket my-gallery-storage \
  --cors-configuration file://cors.json

Apply the Policy on S3-Compatible Providers

For MinIO, Wasabi, Backblaze B2, and other providers that accept AWS S3 API calls, use the same put-bucket-cors command with your endpoint:

aws s3api put-bucket-cors \
  --endpoint-url https://your-s3-endpoint.example.com \
  --bucket my-gallery-storage \
  --cors-configuration file://cors.json

If your provider has a bucket CORS UI instead of an AWS-compatible CLI, enter the same origins, methods, headers, exposed headers, and max age there.

For MinIO in the same Docker Compose network, you normally keep IMMICH_S3_SERVE_MODE=proxy because browsers cannot reach http://minio:9000. Only configure CORS and use redirect when the endpoint in IMMICH_S3_ENDPOINT is reachable from the browser, such as https://minio.example.com.

If you use a CDN or custom domain in front of your S3 provider, make sure it forwards the browser's Origin request header to S3 or applies an equivalent CORS response-header policy. Purge the CDN cache after changing CORS so old responses without CORS headers do not linger.

Apply the Policy on Cloudflare R2

Cloudflare R2 accepts CORS from the bucket settings page:

  1. Open R2 Object Storage in the Cloudflare dashboard.
  2. Select the bucket.
  3. Open Settings.
  4. Under CORS Policy, choose Add CORS policy.
  5. Use the JSON tab and paste this R2 policy, replacing the origins:
[
  {
    "AllowedOrigins": ["https://gallery.example.com", "http://localhost:2283"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["Accept-Ranges", "Content-Length", "Content-Range", "Content-Type", "ETag"],
    "MaxAgeSeconds": 3600
  }
]

You can also use Wrangler:

cors.json
{
  "rules": [
    {
      "allowed": {
        "origins": ["https://gallery.example.com", "http://localhost:2283"],
        "methods": ["GET", "HEAD"],
        "headers": ["*"]
      },
      "exposeHeaders": ["Accept-Ranges", "Content-Length", "Content-Range", "Content-Type", "ETag"],
      "maxAgeSeconds": 3600
    }
  ]
}
npx wrangler r2 bucket cors set my-gallery-storage --file cors.json
npx wrangler r2 bucket cors list my-gallery-storage

If you serve R2 through a custom domain or CDN, purge that cache after changing CORS so old responses without CORS headers do not linger.

Verify CORS

After saving the policy, test from the same browser origin you configured:

  1. Restart Gallery if you changed IMMICH_S3_SERVE_MODE.
  2. Open Gallery from the exact origin in AllowedOrigins.
  3. Open a photo or video that is stored on S3.
  4. Open browser developer tools and check the media request after Gallery redirects to S3.
  5. Confirm the S3 response includes access-control-allow-origin with your Gallery origin.
  6. Try editing an image, viewing face crops, copying an image to the clipboard, and playing a video.

You can also test with curl by sending an Origin header. Replace the URL with a fresh presigned S3 URL copied from the browser network panel. Use GET, not HEAD, because presigned URLs are method-specific:

curl -sS -D - -o /dev/null \
  -H 'Origin: https://gallery.example.com' \
  'https://my-gallery-storage.s3.eu-west-1.amazonaws.com/path/to/object?...'

The response should include access-control-allow-origin: https://gallery.example.com. A request without an Origin header may not show CORS headers, even when the policy is correct.

Common CORS Mistakes

  • AllowedOrigins contains https://gallery.example.com/ with a trailing slash. Use https://gallery.example.com.
  • AllowedOrigins contains a path such as https://gallery.example.com/photos. Use only the origin.
  • The user opens Gallery through a different hostname than the one in the policy.
  • HEAD is missing from AllowedMethods.
  • A CDN or custom domain does not forward the Origin request header, overrides CORS response headers, or cached the old response before CORS was configured.
  • IMMICH_S3_ENDPOINT points at an internal Docker hostname such as http://minio:9000 while IMMICH_S3_SERVE_MODE=redirect; browsers outside Docker cannot reach that endpoint. Use proxy or expose S3 on a browser-reachable hostname.
  • Gallery is opened over HTTPS but IMMICH_S3_ENDPOINT uses plain HTTP. Use an HTTPS endpoint for redirect mode.

Recreate the containers to apply the new environment variables:

docker compose up -d

New uploads will now be stored in your S3 bucket. Existing files on disk will continue to be served normally.

For an existing deployment switching from proxy to redirect, the safe order is:

  1. Apply bucket CORS.
  2. Set IMMICH_S3_SERVE_MODE=redirect.
  3. Recreate the Gallery containers.
  4. Verify thumbnails, editing, face crops, copy-to-clipboard, and video playback from a browser.

Example Configurations

AWS S3

.env
IMMICH_STORAGE_BACKEND=s3
IMMICH_S3_BUCKET=my-gallery-storage
IMMICH_S3_REGION=eu-west-1
IMMICH_S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
IMMICH_S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

MinIO (Docker Compose)

.env
IMMICH_STORAGE_BACKEND=s3
IMMICH_S3_BUCKET=gallery
IMMICH_S3_ENDPOINT=http://minio:9000
IMMICH_S3_ACCESS_KEY_ID=minioadmin
IMMICH_S3_SECRET_ACCESS_KEY=minioadmin
IMMICH_S3_SERVE_MODE=proxy
tip

When MinIO runs in the same Docker Compose stack, use the service name (e.g. http://minio:9000) as the endpoint. Set IMMICH_S3_SERVE_MODE=proxy since clients cannot reach the internal Docker network directly.

Cloudflare R2

.env
IMMICH_STORAGE_BACKEND=s3
IMMICH_S3_BUCKET=my-gallery-storage
IMMICH_S3_ENDPOINT=https://abc123.r2.cloudflarestorage.com
IMMICH_S3_ACCESS_KEY_ID=your-r2-access-key
IMMICH_S3_SECRET_ACCESS_KEY=your-r2-secret-key

FAQ

Can I migrate existing files from disk to S3? Yes! Use the built-in Storage Migration tool. It supports bidirectional migration, is resumable and idempotent, and includes rollback support.

Do I need to make my S3 bucket public? No. Gallery uses presigned URLs (in redirect mode) or proxies the files through the server (in proxy mode). The bucket should remain private.

What happens if I switch back to disk storage? Files already stored in S3 will continue to be served from S3. Only new uploads will go to disk. Both backends are always active.

Can I use IAM roles instead of access keys? Yes. If you omit IMMICH_S3_ACCESS_KEY_ID and IMMICH_S3_SECRET_ACCESS_KEY, the AWS SDK will use the standard credential chain (environment variables, IAM roles, instance metadata, etc.).

Technical Implementation

Storage Abstraction

S3 support is built on a StorageBackend interface that both the disk and S3 backends implement:

StorageBackend interface
├── put(key, source)
├── get(key) → stream
├── exists(key)
├── delete(key)
├── getServeStrategy(key) → file | redirect | stream
└── downloadToTemp(key) → tempPath + cleanup

┌────────────────────┐   ┌────────────────────┐
│ DiskStorageBackend │   │ S3StorageBackend   │
├────────────────────┤   ├────────────────────┤
│ Reads/writes to    │   │ AWS SDK v3         │
│ local filesystem   │   │ @aws-sdk/client-s3 │
│                    │   │ Multipart uploads  │
│ getServeStrategy:  │   │                    │
│ → { type: file }   │   │ getServeStrategy:  │
└────────────────────┘   │ redirect mode:     │
                         │ → presigned URL    │
                         │ proxy mode:        │
                         │ → S3 stream        │
                         └────────────────────┘

The StorageService manages both backends as static singletons and routes operations based on the file path format.

Dual Backend Routing

The key insight is that the file path format determines the backend:

  • Absolute paths (e.g., /usr/src/app/upload/library/user/file.jpg) — legacy disk files, routed to the disk backend.
  • Relative paths (e.g., library/user/file.jpg) — S3 files, routed to the S3 backend.

This means no database schema changes were needed. Existing originalPath, path, thumbnailPath, and profileImagePath columns store either format, and the resolveBackendForKey() function dispatches to the correct backend at runtime. Both backends are always active — the IMMICH_STORAGE_BACKEND setting only controls where new writes go.
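The routing rule itself is tiny. Here is a hypothetical sketch of it; the real resolveBackendForKey() lives in Gallery's StorageService and may differ in naming and detail.

```typescript
type BackendKind = 'disk' | 's3';

// Illustrative version of the dispatch rule described above:
// absolute paths are legacy disk files, relative keys are S3 objects.
function resolveBackend(key: string): BackendKind {
  return key.startsWith('/') ? 'disk' : 's3';
}
```

With the examples above, `resolveBackend('/usr/src/app/upload/library/user/file.jpg')` yields `'disk'` and `resolveBackend('library/user/file.jpg')` yields `'s3'`.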

Serve Modes

When a client requests an asset, BaseService.serveFromBackend() asks the resolved backend for a serve strategy and returns one of three response types:

| Backend | Mode | Response | Client Behavior |
| --- | --- | --- | --- |
| Disk | | ImmichFileResponse | Express sends the local file directly |
| S3 | redirect | ImmichRedirectResponse | HTTP 302 to a presigned URL; client fetches from S3 |
| S3 | proxy | ImmichStreamResponse | Server streams S3 data through to the client |

Presigned URLs expire after IMMICH_S3_PRESIGNED_URL_EXPIRY seconds (default 3600). Gallery sends Cache-Control: private, no-cache, no-transform on redirect responses so browsers do not reuse an expired 302. The S3 backend signs content type and filename response overrides when they are available, so inline display and explicit downloads behave consistently after the browser follows the redirect.

The S3 backend uses forcePathStyle: true when a custom endpoint is configured, which is required for MinIO, DigitalOcean Spaces, and similar providers.
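The three response types map to a small strategy union. This sketch uses illustrative type and function names, not Gallery's actual identifiers, and stubs the presigner as a callback rather than calling the AWS SDK directly.

```typescript
// The three serve strategies a backend can return.
type ServeStrategy =
  | { type: 'file'; path: string }     // disk: Express sends the local file
  | { type: 'redirect'; url: string }  // S3 redirect: 302 to a presigned URL
  | { type: 'stream'; key: string };   // S3 proxy: server streams from S3

// Hypothetical S3 dispatch: presign() would wrap something like
// getSignedUrl() from @aws-sdk/s3-request-presigner in a real backend.
function s3ServeStrategy(
  key: string,
  mode: 'redirect' | 'proxy',
  presign: (key: string) => string,
): ServeStrategy {
  return mode === 'redirect'
    ? { type: 'redirect', url: presign(key) }
    : { type: 'stream', key };
}
```

The disk backend always returns the file variant, so the request handler only needs one switch over ServeStrategy regardless of where the asset lives.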

Upload Flow

When S3 is the write backend, uploads follow this path:

  1. File is uploaded to a local temp directory (standard NestJS multipart handling).
  2. The storage template generates a relative key for the S3 object.
  3. The file is uploaded to S3 using the @aws-sdk/lib-storage Upload class (supports automatic multipart for large files).
  4. The database is updated with the relative path.
  5. A cleanup job deletes the local temp file.

Profile images (both user-uploaded and OAuth-synced) follow the same pattern: the file is written to disk first, then uploaded to S3 if the write backend is S3, and the local temp file is cleaned up.

For operations that require filesystem access (ffmpeg transcoding, exiftool metadata extraction), the S3 backend provides a downloadToTemp() method that streams the object to a local temp file and returns a cleanup function.
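The downloadToTemp() contract can be sketched as follows. This is a minimal model, not Gallery's implementation: the S3 fetch is stubbed as a callback that returns a Buffer, whereas the real backend streams the object to disk instead of buffering it.

```typescript
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Materialize object bytes in a private temp directory and hand back a
// cleanup function the caller runs once ffmpeg/exiftool is done with it.
function downloadToTemp(
  key: string,
  fetchBytes: (key: string) => Buffer, // stand-in for the S3 GetObject stream
): { tempPath: string; cleanup: () => void } {
  const dir = mkdtempSync(join(tmpdir(), 'gallery-s3-'));
  const tempPath = join(dir, 'object');
  writeFileSync(tempPath, fetchBytes(key));
  return {
    tempPath,
    cleanup: () => rmSync(dir, { recursive: true, force: true }),
  };
}
```

Returning the cleanup alongside the path keeps temp-file lifetime explicit at the call site, which is why a leaked download cannot outlive the tool invocation that needed it.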

Archive Downloads

Album, selection, and shared-link archive downloads work with both disk and S3-backed assets. For S3 assets, Gallery opens object streams lazily and serializes ZIP entry appends so large archives do not exhaust the S3 connection pool. This is most visible in proxy mode or when downloading many S3-only assets through the server.

Cleanup Behavior

Deleting a user removes that user's storage prefix from the active backend. On S3, Gallery lists and deletes all objects under the user's prefix; on disk, it removes the matching directory tree. The cleanup is idempotent, so rerunning a failed delete is safe.
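The idempotence property can be shown with a toy model over an in-memory key set; the real S3 path pages through the bucket listing and deletes in batches, but the retry-safety argument is the same.

```typescript
// Toy model of prefix cleanup: delete every key under a user's prefix.
// A rerun finds nothing left to delete, so retrying a failed run is safe.
function deletePrefix(keys: Set<string>, prefix: string): number {
  let removed = 0;
  for (const key of [...keys]) {
    if (key.startsWith(prefix)) {
      keys.delete(key);
      removed++;
    }
  }
  return removed;
}
```

Calling deletePrefix twice with the same prefix removes objects only on the first pass and is a no-op on the second, while keys under other prefixes are untouched.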

Sidecar copy operations also respect the target asset's backend. Copying XMP metadata from one asset to another downloads the source sidecar to a temporary local file when needed, then writes the target sidecar either to disk or to the relative S3 key for that asset.