
A build turns your source code into something Brimble can run. This page covers what runs during a build, in what order, and what the builder does between “you pushed a commit” and “your service is live.”

What runs your build

Brimble runs every build on its remote builder fleet, a pool of ephemeral build machines kept warm with a persistent layer cache. The fleet is separate from where your code eventually runs, and the machine that runs your build is fresh every time. Three things matter to you here:
  • Builders run in the same region as your project. Brimble keeps builder pools in every region. When you trigger a deploy, the dispatcher picks a healthy, least-loaded builder from the pool in your project’s region. That keeps the source code, the layer cache, and the destination host close together, so push and launch phases don’t pay a cross-region round-trip.
  • Builders are ephemeral. A build machine doesn’t survive the build. State written to the local filesystem during a build is gone the moment the build ends.
  • The cache is persistent. Even though the machine is fresh, the layer cache attached to your project survives. The next build pulls cached layers from object storage and skips the work it already did.
That combination of regional placement, ephemeral machines, and a persistent cache means cold builds get a clean machine every time (no surprise state from previous runs), while warm builds reuse layers and finish in a fraction of the time.

The build pipeline

Each deployment moves through these stages, and each one shows up as a section in the streaming logs.

1. Clone

The builder fetches your commit from GitHub, GitLab, or Bitbucket using the credentials Brimble has on file for your account. Submodules are cloned recursively. The clone is shallow, fetching only the commit being built and a small history. If your repo is private, the builder authenticates using the Git provider integration you authorized when connecting the repo. No SSH keys or tokens of your own are needed.

2. Detect

The build machine inspects the repo to decide which builder to use. Brimble has two builders plus straight Docker, and which one runs depends on what’s in the repo:
  • Dockerfile at the project root → Docker (BuildKit). The builder uses your Dockerfile directly. You’re in full control of the image; Brimble doesn’t inspect or rewrite it.
  • A backend stack (Node, Python, Ruby, Go, Java, PHP, Rust, Elixir, and similar) → Railpack. The builder uses Railpack to detect the framework, materialize the toolchain, run install and build, and produce a runnable container image without you writing a Dockerfile. See Frameworks supported for the full list of stacks Railpack handles.
  • A static-site or frontend framework (Vite, Astro, Next.js export, Hugo, SvelteKit static, Remix static, Gatsby, Docusaurus, MkDocs, Eleventy, and similar) → Brimble’s frontend builder. This is a purpose-built builder for frontend apps. It runs your install and build commands, captures the output directory, and ships the assets to the edge. No container is produced for static sites; the artifact is plain files.
The detected stack, version, and chosen builder are logged at the top of the build output. If the auto-detection picks the wrong thing, override the framework, install command, build command, or start command under Configuration.

3. Inject secrets

Before install or build runs, the builder pulls your environment variables from Vault and exposes them to the build environment. Brimble uses HashiCorp Vault as the secret store: every value you set under Environment is encrypted at rest, scoped to your project, and only readable by a build that has been issued a short-lived Vault token for that project. The dashboard, the API, and the builder never persist plaintext secrets to disk. Once injected, the variables are available to every command that runs from this point forward: install scripts, build scripts, your code, and any pre-start command. Only the variables set in the active environment (Production, Preview, etc.) are loaded. For BuildKit and Railpack builds, secrets are passed as build secrets (--secret), not as plain --build-arg, so they don’t end up baked into image layers.
Secrets stay encrypted at rest and out of image layers. Vault holds plaintext only in memory, the builder reads secrets with a short-lived token scoped to your project, and BuildKit’s --secret mount keeps them out of layer history. No path writes plaintext secrets to disk on the build host or into the image you ship.
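To make that concrete, here is what the build-secret mechanism looks like at the Dockerfile level. This is standard BuildKit syntax, not Brimble-specific code, and the variable name is just an example; Brimble wires the equivalent up for you:

# A build secret is mounted for a single RUN step and never written to a layer.
# BuildKit exposes it at /run/secrets/<id> by default:
RUN --mount=type=secret,id=DATABASE_URL \
    DATABASE_URL="$(cat /run/secrets/DATABASE_URL)" npm run build
# A build arg, by contrast, is recorded in the image history:
# ARG DATABASE_URL    <- avoid this for secrets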

4. Install

The builder runs your install command (npm ci, pip install -r requirements.txt, bundle install, go mod download, whatever the framework needs). The default install command is auto-detected; override it under Configuration. If a layer cache exists for this project and the lockfile hasn’t changed since the last successful build, the builder restores the cached install output instead of running the install fresh. A warm cache typically takes the install phase from a minute or two down to a few seconds. See Build cache below.
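As a mental model (an illustration, not a Brimble command): the cached install output is keyed on the lockfile’s content, so you can predict a cache hit by checking whether the lockfile changed since the last successful build:

# Unchanged hash since the last successful build -> the install layer is restored
sha256sum package-lock.json   # or yarn.lock, poetry.lock, Gemfile.lock, go.sum, ...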

5. Build

The builder runs your build command (npm run build, mix compile, cargo build --release, and so on). For Dockerfile-based projects, this is where the image is built layer by layer using BuildKit. For Railpack-based builds, Railpack assembles a base image, applies the language toolchain, copies your source, runs your build command, and produces a runnable container. For static sites, this is the only phase that produces output; no container is built afterwards.

6. Push

The builder pushes the finished artifact (a container image, or a directory of static assets) to Brimble’s internal storage. For container images, the builder pushes only the layers that don’t already exist; unchanged base layers don’t transfer. While the push happens, the new layers are added to the project’s persistent cache so the next build can reuse them.

7. Launch

What happens here splits by service type.

Static sites: no container, no orchestration. For a static site, the build phase ends with the asset bundle being uploaded to Brimble’s globally distributed object storage for static artifacts. There’s no container, no Nomad scheduling, no health probe. As soon as the upload completes, Brimble’s gateway can serve the new assets directly from object storage on the next request, and Cloudflare’s cache for your project’s hostnames is purged so users see the new version immediately. See Static sites for the full flow and Edge caching below for what gets cached and where.

Web services, MCP servers, workers, and databases: launch on the cluster. The builder hands the artifact off to the orchestration layer, which schedules the container in your project’s region. Brimble uses Nomad for scheduling and Consul for service discovery and health checking. For each deployment, the orchestration layer:
  1. Allocates a slot on a host with capacity for your project’s CPU, memory, and storage.
  2. Pulls the image.
  3. If you’ve set a pre-start command, runs it first in a small Alpine task with your project’s environment variables. The main task only starts if pre-start exits cleanly. See Pre-start command for the full behavior.
  4. Starts the main container with your start command and the runtime environment variables.
  5. Registers the service in Consul with health checks defined by your healthCheckPath.

8. Health check

For web services and MCP servers, Brimble sends a GET to your healthCheckPath (default /). For workers, the check is process liveness. For static sites, there’s no probe; the artifact is served directly from the edge as soon as the upload completes. Once the check passes, the edge flips traffic to the new deployment. The previous deployment continues running for a short drain window so in-flight requests can finish, then it’s stopped. If the health check never passes, the deployment is marked degraded and traffic stays on the previous deployment. After repeated failures the deployment moves to failed.
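You can approximate the probe locally before deploying. A minimal sketch, assuming your app reads the PORT environment variable and that a successful HTTP response counts as passing; the port and start command here are hypothetical:

# Start the app the way Brimble would, then probe the health path:
PORT=3000 npm start &
sleep 2
curl -fsS http://localhost:3000/ >/dev/null && echo "health check would pass"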

Recovery and restart

Once a deployment is live, three policies decide what happens when something goes wrong with your running container. You don’t configure them; they apply to every project automatically. The point is to keep your service available without you having to do anything.

Restart in place

If your container exits, Brimble restarts it on the same host:
  • Up to 10 attempts in any 5-minute window.
  • A 20-second delay between attempts, so a fast crash loop doesn’t burn the host.
  • After 10 attempts in a window, Brimble waits out the rest of the window and starts counting again. The policy never permanently “gives up”; it backs off and keeps trying.
A clean exit (status 0) and a crash exit are treated the same: the container is restarted. If your start command is supposed to run forever and it exits, that counts as a failure.
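As a worked example of those limits, consider a container that crashes the moment it starts:

# t=0s     attempt 1 starts and crashes immediately
# t=20s    attempt 2 (20-second delay between attempts)
# ...
# t=180s   attempt 10 -> the 10-attempt budget for this 5-minute window is spent
# t=300s   the window ends; counting resets and restarts resume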

Reschedule onto another host

If restarts in place keep failing (the host can’t run the container at all, or it’s hitting a host-local issue), Brimble reschedules the allocation onto a different host in the same region:
  • The first reschedule attempt happens 30 seconds after the in-place restarts give up.
  • Subsequent attempts back off exponentially (30 s → 1 min → 2 min → 4 min → …).
  • The backoff is capped at 60 minutes, even after many failures.
  • There is no attempt limit; Brimble keeps trying to find a healthy host indefinitely. If capacity comes back, the workload comes back.
In practice, you’ll see a deploy that was passing health checks suddenly start “moving” hosts in the events feed if a host is degrading; that’s the reschedule policy at work.
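Spelled out, the delays double until they hit the cap:

# attempt:   1     2     3     4     5     6      7      8+
# delay:     30 s  1 m   2 m   4 m   8 m   16 m   32 m   60 m (capped)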

Host disconnect (lost allocations)

If the Nomad cluster can’t reach the host running your container (network partition, hung host, control plane can’t talk to it), the host is considered “lost” after 15 minutes. What happens next depends on the service type:
  • Web service, MCP server, database. Brimble starts a replacement allocation on a healthy host immediately. If the original host comes back online later, the replacement keeps running and the original is stopped, so traffic goes to one canonical instance.
  • Worker. Brimble does not start a replacement. The original allocation is kept and Brimble waits for it to come back. This avoids double-processing of jobs and queue work: if the original host was alive and merely unreachable, running a replacement alongside it would process the same job twice. If the original host is truly gone for good, the worker stays unscheduled until the operations layer marks the host dead; then the worker is rescheduled per the reschedule policy.

Putting it together

For the things you can configure (autoscaling, replicas, regions), see Scaling.

Edge caching

Brimble sits behind Cloudflare. Every public request to a *.brimble.app URL or a verified custom domain hits Cloudflare’s global edge first, and only the requests Cloudflare can’t serve from cache are tunneled into Brimble’s gateway. That’s where most of the perceived speed of a Brimble-hosted app comes from, especially for static sites. What gets cached, and what doesn’t, is decided by the Cache-Control headers Brimble’s gateway sets on the response:
Response type → Cache-Control
  • HTML documents → no-store, no-cache, must-revalidate, max-age=0, s-maxage=0
  • Hashed asset paths (/assets/, /_assets/, /static/) → public, max-age=31536000, immutable
  • Other immutable extensions (.js, .css, fonts, images) → public, max-age=31536000, immutable
  • All other static files → public, max-age=3600, must-revalidate
  • Dynamic responses from your container → the headers your code returns
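You can verify these headers on a live deployment with any HTTP client (the hostname and asset path below are placeholders):

curl -sI https://myapp.brimble.app/assets/app.7a3b9c.js | grep -i '^cache-control'
# cache-control: public, max-age=31536000, immutable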
Two practical consequences:
  • HTML is always fresh. Browsers and Cloudflare both treat HTML as uncacheable, so a deploy is visible to users on their next page load. There’s no stale HTML pinning users to an old bundle.
  • Hashed assets cache for a year. Vite, Next.js, Astro, and the rest of the modern frontend toolchain emit content-hashed filenames (app.7a3b9c.js). Once an asset is fetched, Cloudflare and the user’s browser cache it for a year. New deploys produce new hashes, and the new HTML references the new hashes, so cache invalidation happens “for free.”
When a deploy goes active, Brimble explicitly purges Cloudflare’s cache for the project’s hostnames. That guarantees the first request after a deploy fetches fresh HTML from the gateway rather than a stale copy from Cloudflare’s edge. You don’t need to touch Cloudflare to make this happen; it’s part of the deploy pipeline. For static sites, this combination of globally distributed origin storage and Cloudflare’s edge cache means a returning visitor’s request is usually answered from the Cloudflare data center closest to them without ever reaching Brimble.

Pre-start command

A pre-start command is an optional one-shot script that runs after the artifact is pulled but before your main container starts. It’s the right place for work that has to happen on every deploy and has to finish before the app accepts traffic: running database migrations, seeding a schema, warming a cache, or fetching a config blob from elsewhere. Configure it in either of two places:
  • New project flow: when you deploy from a Docker image, the Pre-start command field appears under Runtime Settings.
  • Project Configuration tab: the PreStart command field is available for any project regardless of source type (Git or Docker image).

Where it runs

Pre-start runs as a Nomad lifecycle task with the prestart hook, in its own short-lived Alpine container scheduled on the same host as your main task. The command is invoked via sh -c, so you can chain it with &&, pipe it, or wrap it in any shell construct Alpine’s sh understands. The pre-start task has:
  • Your project’s environment variables. All Vault-backed secrets and plain env vars set under Environment are loaded the same way the main container loads them. So DATABASE_URL, REDIS_URL, etc. work exactly as they do in your app.
  • Your volume mount, if the project has one. Useful for seeding a persistent disk before the app reads from it.
  • A small resource budget. The pre-start task is capped at 128 MB of memory and a fraction of a vCPU regardless of your project’s tier. It’s meant for short bootstrap work, not heavy lifting.
  • The same isolation as your main container (gVisor in production, identical capability and ulimit posture).
The pre-start task does not have your main container’s image, your build artifacts, or anything you wrote during the build phase. If you need the migration tool from your image, install it inline in the pre-start command (for example, apk add --no-cache postgresql-client && psql ...) or run the migration through your app, not through pre-start.

When it succeeds and fails

  • Exit 0: Nomad starts your main container. Health checks proceed as usual.
  • Non-zero exit: Nomad marks the deployment failed. Your main container never starts and traffic stays on the previous deployment.
The pre-start command’s stdout and stderr are streamed into the deployment logs alongside your main task, so you can tail them while a deploy runs and see exactly what failed.

Examples

# Run database migrations before serving requests (Node + Prisma)
npx prisma migrate deploy

# Install a CLI in the pre-start container, then use it
apk add --no-cache postgresql-client && \
  psql "$DATABASE_URL" -c "SELECT 1"

# Bootstrap a config file onto the persistent disk on first deploy
[ -f /data/config.json ] || curl -sSL "$CONFIG_URL" -o /data/config.json

When not to use pre-start

  • Long-running setup. Pre-start blocks the main container start. Anything that takes more than a minute or two will trip your deploy’s start timeout. Move long work into a one-shot Brimble worker or a cron service instead.
  • Build-time work. If the work doesn’t need production secrets or production network access, do it during the Build phase. Build output is cached; pre-start runs every deploy.
  • State that should survive a deploy. Pre-start runs every time. If you only want something to happen once per environment, gate it on a marker file on the persistent disk or on a row in your database.
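A minimal sketch of the marker-file pattern from the last point, reusing the apk/psql approach from the examples above (the paths and the SQL statement are hypothetical):

# Run one-time bootstrap work only if the marker file doesn't exist yet:
[ -f /data/.bootstrapped ] || {
  apk add --no-cache postgresql-client &&
  psql "$DATABASE_URL" -c "CREATE EXTENSION IF NOT EXISTS pgcrypto" &&
  touch /data/.bootstrapped
}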

Build cache

Brimble’s build cache is a content-addressed layer cache attached to your project. It’s the part that makes warm builds fast. How it works in practice:
  • Layer-level reuse. For Docker and Railpack builds, the cache works at the layer level. If the inputs to a layer haven’t changed (your lockfile, base image, and the commands run before it), the layer is fetched from cache instead of rebuilt.
  • Persistent across builds. Layers live in durable object storage tied to your project. Any builder in the fleet can pull them, so caching works even when builds land on different machines.
  • Parallel cache uploads. As a build runs, completed layers are uploaded in the background. By the time the build finishes, the next build’s cache is already warm.
  • Fresh machine, warm cache. Every build runs on an ephemeral builder, but the cache survives. You don’t get cross-build pollution from filesystem state, and you don’t pay for cold dependencies on every push.

What’s cached

  • Install output for backend builds (node_modules, ~/.cache/pip, vendor/bundle, ~/.cargo, and similar), keyed on your lockfile.
  • Image layers for Dockerfile and Railpack builds, keyed on each layer’s input hash.
  • Static-builder intermediate output, keyed on the source manifest.

What’s not cached

  • Anything you write to a path outside the standard install or build directories.
  • Output that depends on environment variables not declared as build inputs (changing one of these silently bypasses the cache).
  • The builder’s working directory itself; it’s wiped between builds.

Cache lifetime

A project’s build cache clears automatically after 15 days of inactivity. If you don’t deploy a project for 15 days, the next deploy is a cold build. Active projects keep their cache indefinitely.
A cold build after a quiet period is normal, not a regression. If you redeploy a project that hasn’t been touched in over 15 days and the build takes much longer than usual, the cache was reaped. The next deploy will be warm again.

Clearing the cache manually

If a stale cache is producing weird results, clear it:
  1. Open Configuration for the project.
  2. Toggle Enable Build Cache on Redeploy off, save, and redeploy.
  3. The next build runs cold. Toggle the cache back on once it succeeds; subsequent builds will re-warm.

The build environment

Inside the builder, you have:
  • Your repo at the project’s root directory, or the rootDir you configured for monorepos.
  • Standard package managers for the detected language: npm, yarn, pnpm, bun, pip, poetry, bundler, go, cargo, composer, mvn, gradle, mix.
  • All environment variables for the active environment.
  • Network access for fetching dependencies from public registries.
  • Plenty of memory and CPU for typical app builds. Large frontend bundles (multi-GB Next.js builds, big Webpack bundles) work without special config; for outliers you can bump build memory under Configuration.
You don’t have:
  • Access to your production runtime container or its filesystem.
  • Persistent state between builds beyond the cache.
  • The ability to spawn long-running processes that outlive the build.
  • Access to other tenants’ builds or runtimes; every build runs on an isolated builder.

Watch paths

For monorepos, configure Watch paths under Configuration to scope which file changes trigger a build. For example, apps/web/** means pushes that only modify other apps in the monorepo don’t redeploy this project. If watch paths aren’t set, every push to the tracked branch builds.
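For instance, a web app in a monorepo might watch its own directory plus any shared packages it depends on (paths are illustrative):

# Watch paths (Configuration → Watch paths):
# apps/web/**       rebuild when anything under the app itself changes
# packages/ui/**    also rebuild when a shared package this app uses changes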

Build minutes and concurrency

Each plan grants:
  • A monthly build-minute allowance.
  • A number of concurrent build slots.
Build minutes are clock time from the start of clone to the end of push. If you exhaust your monthly allowance:
  • Free plan: new builds queue until the cycle resets.
  • Paid plans: overage bills at the per-minute rate from Plans and pricing and rolls into the next invoice. You can also top up; see Build minutes.
Concurrent slots determine how many builds can run at the same time. Above your slot count, builds queue (status pending) until a slot frees up. Cancel a queued or in-progress build from Deployments to free a slot.
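As a quick worked example (numbers are illustrative):

# 30 deploys in a month, each taking ~4 minutes from the start of clone to the
# end of push, consume 30 × 4 = 120 build minutes of the monthly allowance.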

Custom Dockerfiles

If you put a Dockerfile at the project root, Brimble uses it directly:
  • The image is built with BuildKit.
  • Multi-stage Dockerfiles work; only the final stage’s image is deployed.
  • The container must listen on the port given by the PORT environment variable and bind to 0.0.0.0.
  • The image runs as the user defined in the Dockerfile. Use a non-root user where possible.
Your Dockerfile has full control. Anything Docker can do, you can do. Brimble doesn’t inspect or modify the Dockerfile.
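A minimal sketch of a multi-stage Dockerfile that satisfies that checklist (the base image, commands, and file names are illustrative):

# Build stage: compile the app with dev dependencies available.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: only this image is deployed.
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app ./
USER node                  # non-root user, as recommended
CMD ["node", "server.js"]  # server.js must read $PORT and bind to 0.0.0.0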

Status updates back to the dashboard

The builder streams status back to Brimble’s API in real time as it works. Each phase emits start and finish events. The dashboard’s logs drawer subscribes to that stream so you see logs land within hundreds of milliseconds of the builder producing them. When a deployment finishes (success or failure), the builder publishes a final event. That event triggers:
  • The status badge update in the dashboard.
  • The webhook delivery for deployment.success or deployment.failed, if you’ve configured webhooks.
  • The flip of edge traffic to the new deployment.
  • The old deployment’s drain and stop.

Why builds fail

The build pipeline has well-defined failure modes. See Build failures for the catalog and how to debug each one.
