Every Brimble workspace gets a private, encrypted network connecting its services. A web service can call its database, a worker can pull from a Redis queue, and one HTTP service can call another, all without leaving Brimble’s infrastructure, going through Cloudflare, or appearing on the public internet. It’s faster, doesn’t count against bandwidth, and stays inside the trust boundary you already established when you signed in.
This page covers how to use the private network, how it’s built underneath, and the rules about what’s allowed to talk to what.
When to use it
Use the private network when both endpoints are Brimble-hosted and live in the same workspace and region:
- A web service calls its database: connect via the database’s private endpoint.
- A worker pulls jobs from a Redis queue: same thing, connect via the queue’s private endpoint.
- One service calls another service’s HTTP API: use the internal hostname instead of <name>.brimble.app.
Don’t use it for third-party APIs (Stripe, Twilio, OpenAI), cross-region calls, or cross-workspace calls; see What’s allowed below.
The internal hostname
Every project deployed in cluster mode is reachable from inside Brimble’s network at:
<project-slug>.service.brimble.internal
Where <project-slug> is the lowercase, dash-separated form of the project name. The slug appears in the project’s URL in the dashboard.
A few things this hostname does not support:
- Resolution from outside Brimble. Your laptop can’t curl it. It’s only valid from inside another Brimble container in the same workspace.
- Cross-workspace lookups. Internal addresses are workspace-scoped: a personal project can’t reach a team workspace’s project, and vice versa.
- Cross-region lookups. Internal addresses are region-scoped. A service in fra1 can’t resolve a sibling project’s internal hostname in nyc1. Use the public endpoint for cross-region calls.
- TLS verification with public CAs. The internal connection doesn’t go through Cloudflare or Brimble’s public edge; it’s plain HTTP over an encrypted underlay (see How it’s built).
How databases expose their internal endpoint
When you provision a managed database, Brimble auto-injects four environment variables into the database’s project:
| Variable | Value |
|---|---|
| CONNECTION_STRING | A full URI using the public load-balancer hostname. Use this when connecting from outside the database’s region or from outside Brimble. |
| SERVICE_HOST | The public hostname of the database’s load balancer. |
| SERVICE_PORT | The port the engine listens on. |
| PRIVATE_SERVICE_HOST | The internal hostname: <db-project-slug>.service.brimble.internal. Use this from any service in the same workspace and region for the fastest path. |
These are marked as system variables on the database project: visible to you, but managed by Brimble rather than by you.
Connect a service to its database privately
The clean way to wire a web service to a database in the same workspace is via environment variable references:
In your web service, set:
```
DATABASE_HOST     = {{@my-postgres.PRIVATE_SERVICE_HOST}}
DATABASE_PORT     = {{@my-postgres.SERVICE_PORT}}
DATABASE_USER     = {{@my-postgres.DB_USER}}
DATABASE_PASSWORD = {{@my-postgres.DB_PASSWORD}}
DATABASE_NAME     = {{@my-postgres.DB_NAME}}
```
Replace my-postgres with your database project’s slug. Brimble resolves the references at deploy time, and your container sees the actual host, port, and credentials.
If you’d rather pass a single URL, build it from the parts:
```
DATABASE_URL = postgres://{{@my-postgres.DB_USER}}:{{@my-postgres.DB_PASSWORD}}@{{@my-postgres.PRIVATE_SERVICE_HOST}}:{{@my-postgres.SERVICE_PORT}}/{{@my-postgres.DB_NAME}}
```
Connections opened to PRIVATE_SERVICE_HOST stay on Brimble’s internal network: no Cloudflare, no public IPs, lower latency, and no bandwidth charge.
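To show how a container consumes the resolved value, here’s a minimal sketch in Node.js using the node-postgres (pg) client. It assumes the DATABASE_URL variable built above and a pg dependency in your project; the items table and query are placeholders.

```js
// db.js — a minimal sketch, assuming DATABASE_URL was built from the
// references above and resolves to the database's internal hostname.
const { Pool } = require("pg");

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  // The internal hostname has no public certificate; the path is already
  // encrypted at the WireGuard layer, so leave TLS off for this connection.
  ssl: false,
});

// Placeholder query against a hypothetical "items" table.
async function latestItems() {
  const { rows } = await pool.query(
    "SELECT * FROM items ORDER BY id DESC LIMIT 10"
  );
  return rows;
}

module.exports = { pool, latestItems };
```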
Service-to-service HTTP
Two services in the same workspace can call each other internally:
```js
// Project A calling Project B's HTTP API
const apiBase = `http://project-b.service.brimble.internal`;
const res = await fetch(`${apiBase}/v1/items`);
```
Use http://, not https://, for internal hostnames. The internal network is already encrypted at the WireGuard layer, so there’s no TLS termination on *.service.brimble.internal. A client that tries https:// will fail to handshake.
A few things to know:
- The connection is plain HTTP. As noted above, there’s no TLS on the internal hostname; the underlying network is already encrypted (see How it’s built).
- The port is your service’s runtime port. If you don’t know the port (Brimble assigns it), set the consumer’s env var via a reference: API_PORT = {{@project-b.SERVICE_PORT}} (see the sketch after this list).
- Internal traffic doesn’t pass through the public gateway, so Brimble’s request headers (x-brimble-id, x-brimble-project-version, x-forwarded-proto) are not added; your destination service sees the raw request from your origin service.
- Internal traffic doesn’t appear in the destination project’s request logs (which are populated by the public gateway).
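Putting the port reference together with the internal hostname, a minimal sketch of the consumer side. It assumes API_PORT is set via {{@project-b.SERVICE_PORT}} and that project-b is the destination’s slug; the /v1/items path is a placeholder.

```js
// client.js — a sketch: build the internal base URL from the referenced port.
// Plain http, since there is no TLS termination on *.service.brimble.internal.
const base = `http://project-b.service.brimble.internal:${process.env.API_PORT}`;

async function listItems() {
  const res = await fetch(`${base}/v1/items`);
  if (!res.ok) {
    throw new Error(`project-b responded ${res.status}`);
  }
  return res.json();
}
```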
How it’s built
Three layers cooperate to make <project>.service.brimble.internal resolve and route correctly:
1. Encrypted underlay (WireGuard)
Every host in a Brimble region is connected to every other host in that region by a WireGuard mesh. WireGuard is a kernel-level VPN that encrypts every packet between hosts with modern cryptography (Curve25519, ChaCha20-Poly1305, BLAKE2s). When your container on host A sends a packet to a peer on host B, that packet rides the WireGuard mesh, not the public internet.
You don’t configure WireGuard, you don’t see it in the dashboard, and you can’t address it directly. It’s the substrate. From your container’s perspective, the other service is just a hostname; from the operator’s perspective, the path between two hosts is encrypted regardless of which rack or data center they’re in.
2. Service discovery (Consul)
When a deployment goes live, Brimble registers the running container with Consul, HashiCorp’s service registry. The registration carries the project’s stable name, the host it landed on, the port it bound to, and a health check.
Consul also provides DNS for the cluster. When your container resolves payments.service.brimble.internal, the lookup hits Consul’s DNS, which returns the address of a healthy instance of the payments service. As deployments roll, instances move, or scale-out adds replicas, Consul keeps the resolution accurate; you don’t change anything in your code.
If a project has no healthy instances (paused, unhealthy, mid-deploy), the DNS lookup fails fast. Your client sees a connection error, not a hung request.
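Because the lookup fails fast, a caller can treat a resolution failure during a rolling deploy as a transient error and retry briefly. A sketch of that pattern (the retry policy here is an assumption, not a Brimble requirement; it relies on the global fetch available in Node 18+):

```js
// A sketch: retry an internal call a few times when DNS has no healthy
// instance yet (for example, mid-deploy). ENOTFOUND surfaces as a rejected fetch.
async function fetchWithRetry(url, attempts = 3, delayMs = 500) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fetch(url);
    } catch (err) {
      if (i === attempts - 1) throw err;
      // Linear backoff between attempts.
      await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
}

// Usage (hypothetical slug):
// const res = await fetchWithRetry("http://payments.service.brimble.internal/health");
```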
3. Service-to-service authorization (Consul Intentions)
Just because a service is on the network doesn’t mean every other service can talk to it. Brimble uses Consul Intentions, Consul’s allow/deny policy layer, to gate which services can call which.
When you create a project, Brimble automatically creates allow intentions between that project and every other active project in the same workspace and same project environment (Production, Preview, etc.). When you delete or disable a project, its intentions are torn down. You don’t manage this; the dashboard does it for you on every project lifecycle event.
Practically, that means:
- A new Production web service in your team workspace can immediately reach the Production database, the Production worker, the Production cache, and any other Production project in the same workspace, with no extra config.
- A Preview environment is its own intention scope. Preview services can talk to other Preview services but not to Production services in the same workspace.
- A project in another workspace has no intentions to your projects. Even if DNS resolution somehow surfaced an address, the call is refused at the policy layer.
What’s allowed
| Source → Destination | Scope | Allowed |
|---|---|---|
| Project A (Production) → Project B (Production), same workspace, same region | Same workspace and environment | Allowed automatically |
| Project A (Preview) → Project B (Preview), same workspace, same region | Same workspace and environment | Allowed automatically |
| Project A (Production) → Project B (Preview), same workspace | Different environment | Denied at the intention layer |
| Project A → Project B in a different workspace | Different workspace | Denied; use the public endpoint |
| Project A in fra1 → Project B in nyc1 | Different region | Internal DNS won’t resolve; use the public endpoint |
| Project A → a third-party API (Stripe, etc.) | Public internet | Use the public URL; this isn’t internal traffic |
| Your laptop → payments.service.brimble.internal | External client | Internal DNS doesn’t resolve outside the cluster |
When to use internal vs public
| Scenario | Use |
|---|---|
| Service in workspace A calls a database in workspace A | Internal: PRIVATE_SERVICE_HOST |
| Service calls another service in the same workspace and environment | Internal: <slug>.service.brimble.internal |
| Service calls a third-party API (Stripe, Twilio, etc.) | Public: that API’s URL |
| Your laptop connects to a Brimble database for admin work | Public: SERVICE_HOST and the public CONNECTION_STRING |
| Service in one workspace needs to call a service in a different workspace | Public: internal addresses don’t cross workspace boundaries |
| Production service in fra1 calls a database in nyc1 | Public: the internal network is region-scoped; cross-region must go through public endpoints |
| Production service calls a Preview database | Public: intentions are environment-scoped |
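A pattern that falls out of this table: prefer the private host when the code runs on Brimble, and fall back to the public host for local or admin use. A sketch under the assumption that you reference both hosts into the service; the DATABASE_PRIVATE_HOST and DATABASE_PUBLIC_HOST names are placeholders, not variables Brimble injects.

```js
// A sketch: prefer the internal hostname when it is set (deployed on Brimble),
// otherwise fall back to the public host for local development or admin work.
const dbHost =
  process.env.DATABASE_PRIVATE_HOST || // e.g. {{@my-postgres.PRIVATE_SERVICE_HOST}}
  process.env.DATABASE_PUBLIC_HOST;    // e.g. {{@my-postgres.SERVICE_HOST}}

const dbPort = process.env.DATABASE_PORT; // e.g. {{@my-postgres.SERVICE_PORT}}

console.log(`connecting to ${dbHost}:${dbPort}`);
```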
Latency, bandwidth, and cost
- Latency. A round-trip on the internal network is dominated by host-to-host hop latency (single-digit milliseconds inside a region). It’s consistently lower than the public path, which adds Cloudflare → Brimble gateway → service.
- Bandwidth. Internal traffic doesn’t count against your project’s public bandwidth allowance.
- Cost. No egress charge applies to internal calls. The data never leaves Brimble’s network.
Troubleshooting
getaddrinfo ENOTFOUND <slug>.service.brimble.internal
- The destination project has no healthy instance right now (it’s paused, mid-deploy, or unhealthy). Check the destination’s deployment status.
- The slug is wrong; double-check the project’s URL in the dashboard for the canonical slug.
- The destination is in a different region; see the cross-region row above.
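To confirm whether the hostname resolves from inside your container (as opposed to from your laptop), a quick check with Node’s built-in dns module, run from the deployed service, is a reasonable first step. A sketch; the payments slug is a placeholder.

```js
// diagnose.js — a sketch: check whether the internal hostname resolves from
// inside a deployed container. ENOTFOUND here means no healthy instance is
// registered under that slug in this region right now.
const dns = require("node:dns").promises;

dns.lookup("payments.service.brimble.internal")
  .then(({ address }) => console.log("resolves to", address))
  .catch((err) => console.error("lookup failed:", err.code));
```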
Connection refused, or hangs and times out.
- The destination service isn’t listening on the port you expect. For databases, use SERVICE_PORT. For your own services, confirm the runtime port.
- The destination is in a different environment (Production vs Preview). Intentions don’t span environments.
- The destination is in a different workspace; use the public endpoint instead.
Worked yesterday, fails today.
- The destination project was deleted or its name changed. Project deletion tears down intentions and the DNS record for it.
- The destination scaled to zero or became unhealthy. Internal DNS only returns healthy instances.
Next steps