

The edge is returning 502 for your project. The deployment is up and Brimble’s edge can reach it, but it can’t get a usable response from your container. This page lists every cause we see and how to confirm each one.

Quick triage

Run:
curl -I https://<project-name>.brimble.app
A 502 Bad Gateway means the edge reached your service but didn’t get a valid HTTP response within the timeout: either the container isn’t listening, the listener is on the wrong address, or the service is hung.

Cause 1: Service isn’t listening on $PORT

The most common cause: your app listens on a hardcoded port instead of the one Brimble assigns. Confirm: Look in your startup code for app.listen(3000), or a Dockerfile with EXPOSE 3000 and no env var read. Fix:
// Node
const port = Number(process.env.PORT) || 3000;
app.listen(port, "0.0.0.0", () => console.log(`listening on ${port}`));
# Python (FastAPI/uvicorn)
import os, uvicorn
uvicorn.run(app, host="0.0.0.0", port=int(os.environ["PORT"]))
// Go
http.ListenAndServe(":"+os.Getenv("PORT"), nil)
The || 3000 fallback only matters locally; in production, PORT is always set.

Cause 2: Service listening on the wrong host

Your app listens on 127.0.0.1 or localhost only. Brimble’s edge can’t reach localhost-only listeners. Confirm: Look for app.listen(port, "127.0.0.1", ...) or --host 127.0.0.1 in your start command. Fix: Bind to 0.0.0.0 instead:
app.listen(port, "0.0.0.0", () => console.log(`listening on ${port}`));
uvicorn.run(app, host="0.0.0.0", port=port)
# Common --host flag CLIs
gunicorn --bind 0.0.0.0:$PORT app:app
rails server -b 0.0.0.0 -p $PORT

Cause 3: Service crashed after startup

The deployment passed health checks initially, but the process died later. The container restarts, but for the seconds between crash and restart, requests return 502. Confirm: Open Logs for the project. Look for stack traces, process exited with code 1, or repeated startup messages. Fix: Find and fix the crash. Add an unhandled-rejection handler so silent crashes show up in logs:
process.on("unhandledRejection", (err) => console.error("unhandled:", err));
process.on("uncaughtException", (err) => { console.error("uncaught:", err); process.exit(1); });
If nothing in the logs explains the crash, check the runtime memory profile. Out-of-memory kills are a common silent cause: a container that hits its memory limit gets killed by the runtime without a clean exit.
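Out-of-memory kills rarely leave a stack trace, so a periodic memory log is often the only evidence in Logs. A minimal sketch, assuming the memory limit is known (the 512 MB figure here is illustrative, not a Brimble value):

```javascript
// Periodic memory log so OOM kills leave a trail; limit and interval are illustrative.
const LIMIT_MB = 512;

function memorySnapshot() {
  const { rss, heapUsed } = process.memoryUsage();
  return {
    rssMb: Math.round(rss / 1024 / 1024),
    heapMb: Math.round(heapUsed / 1024 / 1024),
  };
}

setInterval(() => {
  const m = memorySnapshot();
  if (m.rssMb > LIMIT_MB * 0.9) {
    console.warn(`memory pressure: rss=${m.rssMb}MB of ~${LIMIT_MB}MB limit`);
  } else {
    console.log(`memory: rss=${m.rssMb}MB heap=${m.heapMb}MB`);
  }
}, 30_000).unref(); // unref so the timer never keeps a dying process alive
```

If the last log line before a restart shows rss climbing toward the limit, you have your cause.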

Cause 4: Service is slow

Your app is up but takes longer than the edge timeout to respond. The edge gives up and returns 502. Confirm: Slow endpoints return 502 even when they eventually finish processing. Watch your service’s response times in Observability. Fix: Make the slow endpoint faster, or move work to a background job and return a 202 immediately. The edge timeout is generous (30+ seconds) but not unlimited. Long-polling, SSE, and WebSocket connections aren’t subject to the request timeout, only ordinary HTTP requests are.

Cause 5: Deployment is restarting

If a deployment is unhealthy, Brimble restarts the container. Mid-restart, requests return 502. Confirm: Status badge on the project shows Degraded or In progress. Logs show the start command running multiple times in a short window. Fix: Diagnose why the container can’t stay up. See Cause 3.

Cause 6: WebSocket or HTTP/2 issue

Your service speaks WebSocket or gRPC, and the upgrade isn’t being handled. Confirm: HTTP requests work, but WebSocket or gRPC clients see 502. Fix:
  • For WebSocket, make sure your server actually handles Upgrade: websocket. The path needs to match what your code listens for (/ws/*, /socket.io/*, etc.).
  • For gRPC, the edge needs Content-Type: application/grpc*. If your client sends a different content type, the edge won’t route as gRPC.

Cause 7: Health check passes, app is broken

The health check returned 200, but / (or whatever path the user is hitting) returns 502 because of a runtime error elsewhere in the code. Confirm: curl https://<project>.brimble.app/healthz returns 200; curl https://<project>.brimble.app/ returns 502. Fix: This is a real application bug, not an infrastructure issue. Check logs for the request that failed and fix the underlying error.
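You can narrow the gap between "health check passes" and "app works" by making the health endpoint exercise dependencies instead of just reporting that the process is up. A sketch, where the check names and probes are illustrative:

```javascript
// Run each dependency probe; any failure turns the health check into a 503.
async function healthz(checks) {
  const results = {};
  let healthy = true;
  for (const [name, check] of Object.entries(checks)) {
    try {
      await check();
      results[name] = "ok";
    } catch (err) {
      results[name] = err.message;
      healthy = false;
    }
  }
  return { status: healthy ? 200 : 503, body: results };
}
```

Wire this to /healthz with one probe per dependency (a SELECT 1 against the database, a PING to the cache), so a broken dependency shows up in the health check rather than only on user-facing routes.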

Cause 8: Database is down or unreachable

Your service starts, but every request that touches the database hangs or errors. If your app handles the error and returns 500, you’ll see 500. If your app crashes on the unhandled error, you’ll see 502. Confirm: Check the database project’s status. If it’s degraded, provisioning, or unreachable, fix that before debugging the service. Fix: Make sure the database is healthy. Verify DATABASE_URL is correct (no typos, includes credentials, points to the right region). For best performance, the service and database should be in the same region.
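To keep a down database in the "clean 500" bucket instead of the "crash, then 502" bucket, wrap database calls with a timeout and an error path. A sketch, assuming a generic async queryDatabase function standing in for your driver’s query call:

```javascript
// Race the query against a timeout so a hung database can't hang the request.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

async function handleUsers(queryDatabase, timeoutMs = 2000) {
  try {
    const rows = await withTimeout(queryDatabase("SELECT id, name FROM users"), timeoutMs);
    return { status: 200, body: rows };
  } catch (err) {
    // A down database becomes a logged 500, not an unhandled rejection.
    console.error("db error:", err.message);
    return { status: 500, body: { error: "database unavailable" } };
  }
}
```

The timeout matters as much as the catch: without it, a hung connection pool holds requests open until the edge gives up and returns 502 anyway.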

Cause 9: Env var change without redeploy

You added or changed an env var in the dashboard, but the running container has the old values. If a downstream call now fails because of the missing/wrong var, the user sees 502. Confirm: Open the deployment that’s currently active. Check it was created after you changed env vars. Fix: Click Redeploy to apply env changes.
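One way to make stale env vars visible: log a redacted snapshot of the vars the app expects at boot, so the running container can be compared against the dashboard. The variable names here are illustrative; use your own:

```javascript
// Report presence and length only, so secrets never land in the logs.
const EXPECTED = ["PORT", "DATABASE_URL", "API_BASE_URL"];

function envReport(env = process.env) {
  return EXPECTED.map((name) => ({
    name,
    present: name in env,
    length: env[name] ? env[name].length : 0,
  }));
}

console.table(envReport());
```

If a var you just changed in the dashboard shows the old length (or is missing) in the boot log, the container predates the change and needs a redeploy.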

Cause 10: Rate limit at the edge

Cloudflare is rate-limiting traffic from your client IP at the edge. Throttled requests return 429, not 502, but if you mistake one for the other you’ll chase the wrong cause. Confirm:
curl -I https://<project>.brimble.app
If you see HTTP/2 429, it’s a rate limit, not a service problem. If HTTP/2 502, keep going.

Diagnostic checklist

When you see 502, verify in this order:
  1. Did the latest deployment go to Active? (Not Degraded, not Failed.)
  2. Are you reading process.env.PORT and binding to 0.0.0.0?
  3. Has the container crashed since deployment? Check Logs.
  4. Does the health check path return 2xx?
  5. Did you change env vars without redeploying?
  6. Is the database (if any) healthy and reachable from the service’s region?
If all six are clean and you still see 502, open a ticket with the project name and the timestamp of a 502 response.


Last modified on May 10, 2026