# Docker Deployment
Deploy Capyshop in production using Docker Compose with the pre-built container image and PostgreSQL with pgvector.
## Prerequisites
- Docker Engine 20+
- Docker Compose v2+
## 1. Create a Docker Compose File

Create a `docker-compose.yml` in your deployment directory:
```yaml
services:
  postgres:
    image: pgvector/pgvector:pg17
    restart: always
    environment:
      POSTGRES_USER: capyshop
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: capyshop
    volumes:
      - postgres_data:/var/lib/postgresql/data

  app:
    image: capyshop/capyshop:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://capyshop:changeme@postgres:5432/capyshop
      - BETTER_AUTH_SECRET=<generate-with-openssl-rand-base64-32>
      - MASTER_SECRET=<generate-with-openssl-rand-base64-32>
      - TRUSTED_ORIGINS=https://mystore.com
      - BASE_URL=https://mystore.com
    volumes:
      - ./data:/app/data
    depends_on:
      - postgres

volumes:
  postgres_data:
```

Replace the placeholder values:
| Variable | Description |
|---|---|
| `BETTER_AUTH_SECRET` | Secret key for session signing. Generate with `openssl rand -base64 32`. |
| `MASTER_SECRET` | Application master secret for encryption. Generate with `openssl rand -base64 32`. |
| `POSTGRES_USER` | PostgreSQL username (must match in both `postgres` and `DATABASE_URL`). |
| `POSTGRES_PASSWORD` | PostgreSQL password (must match in both `postgres` and `DATABASE_URL`). |
| `POSTGRES_DB` | PostgreSQL database name (must match in both `postgres` and `DATABASE_URL`). |
| `TRUSTED_ORIGINS` | Comma-separated list of trusted origins for CSRF protection and OAuth redirects (e.g., `https://mystore.com`). |
| `BASE_URL` | Public URL of the store, used in emails, SEO, and sitemaps (e.g., `https://mystore.com`). |
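To produce the two secrets before filling in the file, something like the following works; each value is 32 random bytes, base64-encoded:

```shell
# Generate strong random secrets for BETTER_AUTH_SECRET and MASTER_SECRET.
# 32 bytes of randomness, base64-encoded, yields a 44-character string.
BETTER_AUTH_SECRET=$(openssl rand -base64 32)
MASTER_SECRET=$(openssl rand -base64 32)

echo "BETTER_AUTH_SECRET=$BETTER_AUTH_SECRET"
echo "MASTER_SECRET=$MASTER_SECRET"
```

Paste the printed values in place of the placeholders in `docker-compose.yml`.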
### Optional: SMTP Configuration via Environment Variables
By default, SMTP credentials are managed through Settings → Email in the admin panel. If you prefer to configure SMTP at deploy time instead, you can set the following environment variables. When set, they override the admin-panel values and the corresponding fields in the Email settings form become disabled.
```yaml
environment:
  # ...other vars
  - SMTP_HOST=smtp.sendgrid.net
  - SMTP_PORT=587
  - SMTP_USER=apikey
  - SMTP_PASSWORD=your-smtp-password
```

| Variable | Description |
|---|---|
| `SMTP_HOST` | Mail server hostname (e.g., `smtp.sendgrid.net`). |
| `SMTP_PORT` | Mail server port. Usually `587` for TLS or `465` for SSL. Must be an integer in [1, 65535]. |
| `SMTP_USER` | SMTP login username. |
| `SMTP_PASSWORD` | SMTP login password. Stored in plaintext in the environment — never passed through the database encryption layer. |

Each variable is independent. If only `SMTP_HOST` is set, the other three SMTP fields remain editable in the admin UI and are read from the database.
### Optional: S3 Asset Storage

By default, uploaded images and other assets are stored on the local disk under `data/files/`. To push uploads to an S3-compatible bucket and serve them from a CDN, set the storage mode to `s3` and configure the bucket credentials.
```yaml
environment:
  # ...other vars
  - ASSETS_STORAGE_MODE=s3
  - ASSETS_S3_ENDPOINT=https://s3.amazonaws.com
  - ASSETS_S3_REGION=us-east-1
  - ASSETS_S3_BUCKET=my-store-assets
  - ASSETS_S3_ACCESS_KEY_ID=AKIA...
  - ASSETS_S3_SECRET_ACCESS_KEY=...
  - ASSETS_PUBLIC_BASE_URL=https://cdn.mystore.com
  - ASSETS_MAX_BYTES=10gb
```

| Variable | Description |
|---|---|
| `ASSETS_STORAGE_MODE` | `local` (default) or `s3`. When `s3`, the five `ASSETS_S3_*` vars and `ASSETS_PUBLIC_BASE_URL` are required. |
| `ASSETS_S3_ENDPOINT` | S3 API endpoint (e.g. `https://s3.amazonaws.com`, or your provider's URL for Cloudflare R2, Backblaze B2, MinIO, etc.). |
| `ASSETS_S3_REGION` | Bucket region (e.g. `us-east-1`, `auto` for R2). |
| `ASSETS_S3_BUCKET` | Bucket name. |
| `ASSETS_S3_ACCESS_KEY_ID` | Access key with read/write permissions on the bucket. |
| `ASSETS_S3_SECRET_ACCESS_KEY` | Secret access key. |
| `ASSETS_PUBLIC_BASE_URL` | Public base URL the storefront uses to load assets (e.g. your CDN domain pointed at the bucket). |
| `ASSETS_MAX_BYTES` | Optional cumulative storage cap across all files. Accepts `10gb`, `500mb`, or a raw byte count. Applies in both `local` and `s3` modes. Per-upload caps (5 MB image / 50 MB video) are unchanged. |
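For intuition about how the accepted `ASSETS_MAX_BYTES` formats map to byte counts, here is a hypothetical helper; whether Capyshop interprets `gb`/`mb` as 1024-based is an assumption, so treat this as a sketch, not the application's actual parser:

```shell
# Hypothetical helper illustrating the documented ASSETS_MAX_BYTES formats.
# Assumption: "gb"/"mb" suffixes are 1024-based; the real parser may differ.
to_bytes() {
  case "$1" in
    *gb) echo $(( ${1%gb} * 1024 * 1024 * 1024 )) ;;
    *mb) echo $(( ${1%mb} * 1024 * 1024 )) ;;
    *)   echo "$1" ;;   # a raw byte count passes through unchanged
  esac
}

to_bytes 10gb      # 10 GiB expressed in bytes
to_bytes 500mb
to_bytes 1048576
```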
When switching an existing store from `local` to `s3`, run the bundled migration script once inside a deployed container to upload the existing files to the bucket and pre-generate WebP variants:
```bash
docker exec -it <container> node build/scripts/migrate-assets-to-s3.mjs
```

The script is idempotent — re-running it is safe, and it skips work that's already been done. If your existing originals live on the host (e.g. `/docker/<store>/app_data/files/`), bind-mount that path to `/app/data/files` inside the container so the script can pick them up; otherwise it will fall back to whatever is already in the bucket.
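As one sketch of that bind mount, assuming a hypothetical host path of `/docker/mystore/app_data/files/`, the `app` service could temporarily gain a second mount before the script is run:

```yaml
services:
  app:
    volumes:
      - ./data:/app/data
      # Temporary mount so migrate-assets-to-s3.mjs can read the old originals.
      # The host path here is an example; use wherever your files actually live.
      - /docker/mystore/app_data/files:/app/data/files
```

The mount can be removed again once the migration has completed.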
## 2. Start the Application

```bash
docker compose up -d
```

This starts two services:
- `postgres` — PostgreSQL 17 with pgvector (`pgvector/pgvector:pg17`), listening on port `5432`.
- `app` — The application container (`capyshop/capyshop:latest`), exposed on port `3000`.
On startup, the application container automatically runs database migrations (`prisma migrate deploy`) before starting the server.
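If the app ever races the database on first boot (the `ECONNREFUSED` case under Troubleshooting below), one option is to gate startup on a Postgres healthcheck. A sketch using Compose's `depends_on` condition form:

```yaml
services:
  postgres:
    # ...existing config
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U capyshop -d capyshop"]
      interval: 5s
      timeout: 5s
      retries: 10

  app:
    # ...existing config
    depends_on:
      postgres:
        condition: service_healthy
```

With this in place, Compose only starts `app` once `pg_isready` reports the database as accepting connections.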
## 3. Verify

```bash
# Check both containers are running
docker compose ps

# Check application logs
docker compose logs app
```

The application should be accessible at `http://<your-host>:3000`.
## Reverse Proxy
For production deployments with HTTPS, place a reverse proxy (such as Traefik, Caddy, or nginx) in front of the application. The reverse proxy handles TLS termination and forwards traffic to port 3000.
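As one sketch, a minimal nginx server block might look like the following, assuming nginx runs on the same host and certificates are already provisioned (the certificate paths and domain are examples):

```nginx
server {
    listen 443 ssl;
    server_name mystore.com;

    # Example paths; point these at your actual certificate files
    ssl_certificate     /etc/letsencrypt/live/mystore.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mystore.com/privkey.pem;

    location / {
        # Forward traffic to the app container published on port 3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```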
### Enabling Gzip Compression with Traefik
If you use Traefik as your reverse proxy, enable gzip compression to reduce response sizes and improve page load speed — a factor in search-engine ranking. Add the following labels to the app service in your docker-compose.yml:
```yaml
labels:
  - "traefik.http.middlewares.compress.compress.minresponsebodybytes=256"
  - "traefik.http.routers.STORE_NAME-app.middlewares=compress"
```

Replace `STORE_NAME` with the name of your store's Traefik router. The first label defines a `compress` middleware that gzips every response larger than 256 bytes. The second label attaches that middleware to your store's router.
## Volumes

| Volume / Mount | Purpose |
|---|---|
| `postgres_data` | Persists PostgreSQL data across container restarts. |
| `./data:/app/data` | Persists uploaded files and application data across restarts. |
## Building from Source
If you prefer to build the image yourself instead of using the pre-built one:
```bash
docker build -t capyshop .
```

The Dockerfile uses a multi-stage build:
- Installs all dependencies and builds the application.
- Copies only production dependencies and the build output into the final image.
- Runs `prisma migrate deploy`, then starts the server on port `3000`.
To use your custom image, update the `image` field in your `docker-compose.yml`, or run standalone:
```bash
docker run -p 3000:3000 \
  -e DATABASE_URL=postgresql://user:pass@host:5432/db \
  -e BETTER_AUTH_SECRET=your-secret \
  -e MASTER_SECRET=your-secret \
  capyshop
```

## Troubleshooting
| Error | Cause | Fix |
|---|---|---|
| `app` exits immediately | Missing or invalid environment variables | Check `docker compose logs app` and verify all required environment variables |
| `ECONNREFUSED` to postgres | App started before database was ready | Restart the app: `docker compose restart app` |
| `P3009` (failed migrations) | A previous migration left dirty state | Run `DROP TABLE IF EXISTS _prisma_migrations CASCADE;` in the database, then restart |
| `type "vector" does not exist` | pgvector extension not created | The `pgvector/pgvector:pg17` image includes it, but run `CREATE EXTENSION IF NOT EXISTS vector;` if using a different image |
| Port 3000 already in use | Another process is using the port | Stop the conflicting process or change the port mapping in `docker-compose.yml` |
| Uploaded files lost after restart | `./data` volume not mounted | Ensure the `volumes` section includes `./data:/app/data` |