Galley — self-hosted preview environments
Open a PR, get an isolated environment with your Postgres, your Redis, your services. Push a commit, it rebuilds. Close the PR, it's gone. Runs on a box you own.
Why this exists
Six PRs share one staging. The reviewer is looking at whichever branch deployed last, and the database state is whatever the last person left it in.
Each PR gets its own subdomain, its own containers, its own database. Webhook closes, environment goes away.
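The per-PR subdomain follows a simple pattern. A minimal sketch of how such a hostname could be derived, assuming the naming convention is pr-<number>-<repo> under the preview domain (inferred from the pr-42-myrepo.preview.yourco.dev example on this page, not an official spec):

```shell
# Hypothetical sketch of the per-PR hostname scheme; the exact rule is an
# assumption based on the pr-42-myrepo.preview.yourco.dev example.
pr_host() {
  pr="$1"; repo="$2"; domain="$3"
  # Lowercase the repo name so it stays a valid DNS label.
  printf 'pr-%s-%s.%s\n' "$pr" "$(printf '%s' "$repo" | tr 'A-Z' 'a-z')" "$domain"
}

pr_host 42 myrepo preview.yourco.dev   # pr-42-myrepo.preview.yourco.dev
```

Because every preview lives under one wildcard domain, tearing an environment down needs no DNS cleanup.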
How it's different
01 · Whole stack
Not just the frontend. If it runs in Docker, Galley runs it. The same Postgres, Redis, queue, and worker images you boot in dev come up in the preview, on an isolated network, named exactly the way your code expects.
02 · Self-hosted
One docker compose up on a box you own. Source, secrets, and database snapshots stay on your network. No telemetry, no license server, no phone-home.
03 · Real config
Galley reads galley.yml or your existing docker-compose.yml. Build via your own Dockerfile or fall back to language autodetect; both run in unprivileged sandboxes. Services find each other by name on the env network.
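Name-based discovery on an isolated per-preview network is standard Docker behaviour on user-defined networks, where the embedded DNS resolves container names. A rough illustration of the isolation model (names are illustrative, this requires a running Docker daemon, and it is not Galley's internal code):

```shell
# Illustrative only: per-preview isolation via a user-defined Docker network.
# Docker's embedded DNS resolves container names on such a network, which is
# why a connection string can say "postgres" instead of an IP address.
docker network create pr-42-myrepo

docker run -d --name postgres --network pr-42-myrepo \
  -e POSTGRES_USER=app -e POSTGRES_PASSWORD=pw -e POSTGRES_DB=app \
  postgres:16

# Any container on the same network reaches it by name:
docker run --rm --network pr-42-myrepo busybox nslookup postgres

# Containers on other networks (other PRs) cannot see it at all.
```

Two previews can therefore both have a container named postgres without colliding.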
The config
One galley.yml per repo. Already have a docker-compose.yml? Galley parses that too. TTL, preview domain, and access policy belong on the project, not in the repo, so they live in the dashboard.
version: 1
services:
  web:
    kind: web
    build:
      path: ./web
    expose: 3000
    depends_on: [api]
    env:
      API_URL: http://api:3001
  api:
    kind: api
    build:
      path: ./api
    expose: 3001
    depends_on: [postgres, cache]
    env:
      DATABASE_URL: postgres://app:pw@postgres:5432/app
      REDIS_URL: redis://cache:6379
  postgres:
    kind: database
    image: postgres:16
    expose: 5432
    env:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: pw
      POSTGRES_DB: app
  cache:
    kind: cache
    image: redis:7
    expose: 6379

What ships today
No "coming soon" footnotes. If it's listed, it's in the binary.
· GitHub App or PAT. HMAC-verified, replay log, one-click reingest.
· pr-42-myrepo.preview.yourco.dev. No per-PR DNS records.
· web, api, worker, database, cache, queue, other. Drives routing + health.
· Containers find each other by galley.yml name on the env network.
· Dockerfiles and language autodetect, both in unprivileged sandboxes.
· Multiple services build at once, capped to keep host load sane.
· Per-service stdout/stderr/build over a single live connection per deploy.
· Every lifecycle event arrives the moment it lands.
· One sticky comment per PR with state, URL, TTL. Updated in place.
· Full-page captures per viewport, grouped per service in the dashboard.
· Lifecycle notifications with a test-fire button on every channel.
· Public, basic auth, IP allowlist. Plus a bypass token for CI calls.
· AES-256-GCM at rest, master key supplied at install, never escrowed.
· Every admin action and auth event, filterable by actor.
· Scoped tokens for CI: projects:read, deployments:write, admin.
· owner, admin, developer, viewer. Gates manage and deploy.
· Keep a preview past TTL, push the timer out, rebuild without a new commit.
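For context on the HMAC-verified webhooks above: GitHub signs each delivery with an X-Hub-Signature-256 header, an HMAC-SHA256 of the raw request body keyed with the webhook secret. A minimal sketch of recomputing that signature (secret and payload are made-up values; this shows GitHub's general scheme, not Galley's internal code):

```shell
# Recompute a GitHub-style webhook signature for comparison with the
# X-Hub-Signature-256 header. Secret and payload are illustrative.
secret='s3cret'
payload='{"action":"opened","number":42}'

expected="sha256=$(printf '%s' "$payload" \
  | openssl dgst -sha256 -hmac "$secret" \
  | awk '{print $NF}')"

echo "$expected"
```

In real code the comparison against the header should be constant-time; shelling out to openssl per request is purely for illustration.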
Install
One compose file pulls every control-plane service as a published image. Point a wildcard DNS record at the host and you have previews.
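The wildcard can be a single zone entry. An illustrative BIND-style record, assuming the preview domain from the URL example above and a placeholder address:

```
*.preview.yourco.dev.  300  IN  A  203.0.113.10
```

Any name under preview.yourco.dev, such as pr-42-myrepo.preview.yourco.dev, then resolves to the Galley host with no per-PR records.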
curl -fsSL https://galley.sh/install/docker-compose.yml -o docker-compose.yml
echo "GALLEY_MASTER_KEY=$(openssl rand -hex 32)" > .env
docker compose up -d

# On a separate host, after generating a token in Admin → Agents → New agent:
sudo docker create --name x galleysh/agent:v1
sudo docker cp x:/usr/local/bin/galley-agent /usr/local/bin/
sudo docker rm x
sudo systemctl enable --now galley-agent

Full walk-through with DNS, TLS, and the master key in the quick start docs ↗.