Features
What's in the binary.
The full list of what works today. Anything labelled roadmap isn't there yet.
From webhook to teardown.
GitHub fires pull_request.opened. Galley queues a
build, the agent claims it, your services come up on the env
network, the proxy publishes a route. Push another commit, the
new build supersedes the old one. Close the PR, the
environment is removed.
States: pending → building → running → hibernating →
terminated. The dashboard pins the current state and
gives three controls: rebuild (force a fresh
build on the same commit), extend TTL (push the
timeout out), and pin (skip auto-teardown until
you unpin).
Every state change updates a single sticky comment on the PR with the current URL and TTL. No comment spam.
GitHub, today.
GitHub App is the recommended path: install on the org, pick repos, the App's installation token is what Galley uses to clone and post comments. PAT-based connections are supported for solo accounts and forks.
Webhook deliveries are HMAC-verified, logged, and replayable from the dashboard. Test-fire a delivery before you trust the endpoint. GitLab, Gitea, and Forgejo are on the roadmap; they're not in v1.
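Verification follows GitHub's documented X-Hub-Signature-256 scheme: HMAC-SHA256 over the raw request body, hex-encoded, compared in constant time. A minimal sketch of that check (not Galley's actual implementation):

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Check a GitHub webhook delivery against its HMAC-SHA256 signature.

    GitHub sends the signature in the X-Hub-Signature-256 header as
    "sha256=<hexdigest>", where the digest is HMAC-SHA256 over the raw body.
    """
    expected = "sha256=" + hmac.new(
        secret.encode(), payload, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature_header)

body = b'{"action": "opened"}'
sig = "sha256=" + hmac.new(b"s3cret", body, hashlib.sha256).hexdigest()
print(verify_signature("s3cret", body, sig))  # True
print(verify_signature("wrong", body, sig))   # False
```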
galley.yml, or your compose file.
The native format is galley.yml — a small subset
of compose with Galley-specific extras (kind,
expose, health, ephemeral).
If your repo already has docker-compose.yml,
Galley parses that and warns about the bits it ignores
(volumes, ports, network mode).
Service-level env: is interpolated at deploy time:
${GALLEY_PREVIEW_HOST_api} resolves to the
api service's preview hostname; project-level
secrets resolve to their decrypted values; sibling services
are reachable as http://api:3001 on the env
network.
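A sketch of what such a file might look like. Service names, ports, and the exact field shapes are illustrative, not the documented schema; kind, expose, health, and the interpolation forms come from the text above:

```yaml
# Illustrative galley.yml; check the docs for the real schema.
services:
  web:
    kind: web
    build: ./web
    expose: 3000
    env:
      API_URL: https://${GALLEY_PREVIEW_HOST_api}  # api's preview hostname
  api:
    kind: api
    build: ./api
    expose: 3001
    health: /healthz
    env:
      SESSION_SECRET: ${SESSION_SECRET}  # resolved from project secrets
```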
Tell Galley what each service is.
kind drives routing and lifecycle behaviour:
- web: public route on the bare preview domain, post-deploy screenshots.
- api: public route on a subdomain, no screenshots.
- worker: runs, no inbound network, no route.
- database, cache, queue: internal-only data plane.
- other: anything that doesn't fit. No defaults applied.
When detection has to guess (no kind:, no
Dockerfile, only a common image name to go on), Galley picks
the most likely kind and warns. Override in galley.yml.
Service-to-service by name.
Each preview gets a private Docker network. Containers join it
with a network alias matching their galley.yml
name, so web finds api at
http://api:3001 regardless of how the actual
container is named under the hood.
External traffic comes in on a wildcard subdomain
(pr-<n>-<repo>.preview.yourco.dev) with
TLS via Let's Encrypt DNS-01 if you've configured a DNS
provider. Bring-your-own cert is supported too.
Two paths, both rootless.
If your service has a Dockerfile, Galley builds it in a disposable, unprivileged sandbox. No daemon socket, no privileged mode, the worktree mounted read-only.
If it doesn't, Galley falls back to language autodetect — inspect the repo, pick a base image, build without a Dockerfile. Covers Node, Go, Python, Ruby, Rust, JVM, .NET, PHP, and a long tail of less-common stacks.
Multiple services build in parallel up to a host-aware cap. A new commit on a PR cancels any in-flight build for that environment before queueing the new one. Build logs stream live; you can watch a failure happen and copy the line.
Whatever you put in the compose file.
Galley doesn't have opinions about your database. Put a
Postgres image (or MySQL, Mongo, whatever) in
galley.yml with the env vars and seed scripts you
already use in dev, and Galley boots a clean instance per
preview. Tear it down on PR close.
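For example, a throwaway Postgres per preview might be declared like this. The field shapes are illustrative; the POSTGRES_* variables are the standard postgres image's env vars:

```yaml
# Hypothetical database service in galley.yml.
services:
  db:
    kind: database        # internal-only: no route, reachable as db:5432 on the env network
    image: postgres:16
    env:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${DB_PASSWORD}  # resolved from project secrets at deploy time
```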
That gets you isolated state per PR. What it doesn't get you is realistic state — production-shape data with PII removed and references intact. Snapshot orchestration, lazy cloning, and migration auto-detection are real problems we don't solve today; you handle them yourself with whatever tooling you already have. We may build native support later. We don't yet.
Logs and events, live.
Build output, runtime stdout, and runtime stderr from every container in the preview stream over a single SSE connection per deployment. The dashboard tags each line with the service and stream and renders roughly 500 lines in memory at a time, with history backfill on first load.
Lifecycle events (build started, image built, service healthy, preview URL assigned) ride a separate SSE channel and feed the deployment timeline. The full event log persists in Postgres for audit.
Container resource graphs and runtime log persistence beyond the in-flight chunks are roadmap. The agent emits structured JSON; pipe it wherever your existing log/metric pipeline lives.
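Consuming that stream is ordinary SSE parsing. A minimal sketch; the event payload shape below is an assumption consistent with the per-line service/stream tagging described above, not the documented format:

```python
def parse_sse(stream: str):
    """Parse a Server-Sent Events stream into (event, data) tuples.

    Minimal sketch: real SSE also handles id:, retry:, comment lines,
    and reconnection.
    """
    events = []
    event_type, data_lines = "message", []
    for line in stream.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":
            # Blank line terminates an event.
            if data_lines:
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
    return events

sample = (
    "event: log\n"
    'data: {"service":"api","stream":"stdout","line":"listening on :3001"}\n'
    "\n"
)
print(parse_sse(sample))
```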
Things non-engineers can use.
Screenshots. When a kind: web
service goes healthy, a headless browser loads the preview
URL at each configured viewport and uploads the PNGs to the
control plane's blob store. Linked from the PR comment and
grouped per service in the deployment detail.
Notifications. Per-project Slack, Discord, or generic-webhook hooks for the events you care about (deploy ready, build failed, etc.). Each hook has a test-fire button — you find out it's misconfigured before the team is waiting on it.
Project-scoped roles.
Four roles per project: owner (delete, member management), admin (edit env vars, secrets, settings; full deploy ops), developer (rebuild, terminate, read everything), viewer (read-only). Instance admins bypass project membership entirely.
Invites. One-time links scoped to a project + role. Expire on first use or on a deadline.
API tokens. Scopes are
projects:read|write,
deployments:read|write, admin. Each
token shows last-used timestamp; revoke from the dashboard.
Designed for CI; not a user identity.
SSO (OIDC/SAML) is roadmap.
Everything in the dashboard, also over HTTP.
The dashboard is a thin client over a JSON API. Anything you can click, you can curl: list projects, trigger a rebuild, pin an environment, rotate a webhook secret. Bearer-token authenticated, scopes enforced, audit-logged.
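A sketch of what that looks like from a script or CI job. The endpoint paths and host are illustrative, not documented routes:

```shell
# Hypothetical routes; see the published API docs for the real ones.
curl -s -H "Authorization: Bearer $GALLEY_TOKEN" \
  https://galley.yourco.dev/api/projects          # list projects (projects:read)

curl -s -X POST -H "Authorization: Bearer $GALLEY_TOKEN" \
  https://galley.yourco.dev/api/deployments/42/rebuild  # rebuild (deployments:write)
```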
Tokens are scoped at creation
(projects:read|write,
deployments:read|write, admin) so
CI gets only what it needs. A published OpenAPI spec and a
GitHub Action wrapper for "wait until preview is healthy,
then run e2e" are roadmap.
Encrypted secrets, audit log.
Secrets are AES-256-GCM enveloped with a master key supplied at install time. The key never goes in the database, never phones home, has no recovery mode — if you lose it, the ciphertext is rubble. Back it up.
Every admin action — login, project edit, secret rotation, env var write, agent registration — lands in the audit log with actor, target, IP, and timestamp. Filterable by actor.
See /security for the threat model and what Galley explicitly doesn't protect against.
One compose file.
Galley ships as Docker images. The control plane and a
colocated agent come up together with one compose project on
one host: server, message bus, Postgres, ingress proxy.
Required env vars (master key, preview domain, DNS provider
token) go in .env; the server runs migrations
on boot.
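An illustrative .env; the variable names here are hypothetical, and the install docs have the real ones:

```shell
# Hypothetical variable names.
GALLEY_MASTER_KEY=...              # generated once at install; back it up
GALLEY_PREVIEW_DOMAIN=preview.yourco.dev
GALLEY_DNS_PROVIDER_TOKEN=...      # for Let's Encrypt DNS-01
```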
v1 is single-host. A multi-host topology — control plane on a small box, an agent fleet elsewhere — is on the roadmap. In practice, a single host runs many concurrent previews without breaking a sweat.
Topology, DNS, TLS, master key, backups — see the docs.
Try it.