Attach durable storage to a project so files survive restarts and redeployments. Without a persistent disk, anything your service writes to the filesystem is lost on every redeploy. The new container starts from a fresh image. Use a persistent disk for:
- SQLite databases (small apps, single-instance).
- File uploads stored to disk before being moved to object storage.
- Cache or state that’s expensive to rebuild on cold start.
- Self-hosted tools that need a data directory (Plausible, Umami, n8n, and similar).
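For the SQLite case above, a minimal sketch of pointing the database under the mount path. `DATABASE_PATH` is a hypothetical variable name (use whatever your app actually reads), and `/data` stands in for whichever mount path you configure:

```shell
# Keep the SQLite file under the mount path so it survives redeploys.
# DATABASE_PATH is a hypothetical variable; use whatever your app reads.
MOUNT=/data
[ -d "$MOUNT" ] || MOUNT="$(mktemp -d)"   # fallback when trying this locally
export DATABASE_PATH="$MOUNT/app.db"

# The app would normally create this file itself on first write.
: > "$DATABASE_PATH"
echo "SQLite database lives at $DATABASE_PATH"
```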
Prerequisites
- A project on a paid plan that includes persistent disks.
- The project should be a single-container service. Persistent disks aren’t shared across containers; if you scale beyond one, only one container holds the data.
Enable a persistent disk
- Open the project.
- Go to Configuration.
- Scroll to Persistent disk and toggle it on.
- Set:
  - Mount path, the path inside the container to mount the disk at, for example /data.
  - Size, picked from the dropdown.
- Save.
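After saving, you can confirm the mount from a shell inside the running container. A sketch, assuming /data as the mount path (the fallback line is only there so the commands can be tried outside the container):

```shell
# Check the disk is mounted and writable at the configured path.
MOUNT=/data
[ -d "$MOUNT" ] || MOUNT="$(mktemp -d)"   # fallback outside the container

# Inside the container, the disk appears as its own filesystem here.
df -h "$MOUNT"

# Any file written under the mount path survives the next redeploy.
touch "$MOUNT/persist-check" && echo "mount OK"
```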

Disk sizes and pricing
Disks are available in 10 GB steps from 10 GB up to 150 GB. The default is 10 GB. Storage bills at $0.25 per GB per month at the base rate. Some regions carry a small multiplier on top of the base rate; the exact monthly cost is shown next to each size in the dropdown.

| Size | Monthly cost (base rate) |
|---|---|
| 10 GB | $2.50 |
| 20 GB | $5.00 |
| 50 GB | $12.50 |
| 100 GB | $25.00 |
| 150 GB | $37.50 |
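The table follows directly from the base rate. A quick sanity check of the arithmetic (regional multipliers, where they apply, are added on top):

```shell
# Monthly cost at the base rate: size in GB x $0.25.
size_gb=50
cost=$(awk -v g="$size_gb" 'BEGIN { printf "%.2f", g * 0.25 }')
echo "A ${size_gb} GB disk costs \$${cost}/month at the base rate."
```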
Use the disk
Anything your service writes to the mount path persists across deploys, restarts, and resizes.

Resize a disk

Disks can grow but not shrink.

- Open Configuration → Persistent disk.
- Pick a larger size.
- Save.
Limits and constraints
- Size caps at 150 GB per project on standard plans. For larger volumes, contact support.
- One disk per project. You can’t mount multiple persistent disks at different paths.
- One container per disk. A persistent disk is a local volume, not a network share. Don’t enable autoscaling on a project that depends on a persistent disk for state; only one container will see the data.
- Region-bound. A persistent disk lives in the project’s region. Moving the project to a different region requires a fresh disk.
- Backups are your responsibility. Brimble persists the disk across deployments and host moves but doesn’t snapshot it. For data that must survive disaster, copy critical files to object storage on a schedule.
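One way to handle the backup responsibility is a scheduled job that archives the disk and ships the archive to object storage. A sketch, assuming a tar-based snapshot; the upload command and bucket name are placeholders for whatever object-storage tooling your image includes:

```shell
# Archive the disk contents; run this on a schedule (cron, or a worker).
SRC=/data
[ -d "$SRC" ] || SRC="$(mktemp -d)"       # fallback outside the container
STAMP="$(date +%Y%m%d-%H%M%S)"
ARCHIVE="/tmp/disk-backup-$STAMP.tar.gz"

tar -czf "$ARCHIVE" -C "$SRC" .
echo "wrote $ARCHIVE"

# Then upload with your object-storage CLI, for example (placeholder bucket):
# aws s3 cp "$ARCHIVE" "s3://my-backups/myapp/$(basename "$ARCHIVE")"
```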
When not to use a persistent disk
A persistent disk is the wrong choice when:

- You need scale-out. Multiple containers reading and writing the same dataset want a managed database or object storage, not a local volume.
- Your data must be backed up automatically. Use a managed database (PostgreSQL, MongoDB, etc.); Brimble snapshots those.
- Your data is large. Beyond a hundred-ish GB, object storage with a small metadata DB scales better.
Troubleshooting
Mount path doesn’t exist after deploy. The toggle might be off. Re-check that Configuration → Persistent disk is enabled and the path is what you set.

Files disappear on deploy anyway. Files written to a path outside the mount don’t persist. Confirm your code writes under the configured mount path (for example /data, not /var/data).
“Permission denied” writing to the mount. Some images run as a non-root user that doesn’t own the mount. In your Dockerfile, ensure the user has access, for example RUN mkdir -p /data && chown myuser:myuser /data.
Resize didn’t take effect. Resizes apply on the next deployment, not in place. Click Redeploy.
Next steps
- Deploy a database, for state that needs scale-out, backups, and queryability.
- Builds, for how the runtime container is built and started.