Cross-subdomain SSO, CSP, and other plumbing
Not every Hall shipment is a new screen. Some are the unglamorous scaffolding that makes the visible surface work. This post documents a handful of them because the ethos is "build in public" — the plumbing counts.
Single sign-on across *.our.one
One session, shared across every Our.one product. Sign in on our.one and visit hall.our.one and you’re already signed in. Sign out from one and you’re out of everything. The mechanism: the Auth.js session cookie is scoped to .our.one (not hall.our.one only), and every app shares the same AUTH_SECRET and the same Neon sessions table. Each app independently verifies the cookie and finds the same session row.
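A minimal sketch of that cookie scoping; the cookie name and config shape are assumptions in Auth.js style, not Hall's actual code:

```typescript
// Hypothetical shared-session cookie config. The one line that makes
// cross-subdomain SSO work is the apex-scoped `domain`.
const sharedSessionCookie = {
  name: "__Secure-authjs.session-token", // assumed Auth.js default name
  options: {
    domain: ".our.one", // apex scope: hall.our.one, feed.our.one, etc. all see it
    path: "/",
    httpOnly: true,
    secure: true,
    sameSite: "lax" as const,
  },
};
```

In Auth.js this shape would sit under the `cookies.sessionToken` key of each app's auth config; the other two prerequisites named above are the shared AUTH_SECRET and the shared Neon sessions table.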
Future flagships at feed.our.one or notes.our.one inherit the flow for free. The founder asked the right question before we shipped — "how do we share auth?" — and the cookie-domain answer scales to the whole portfolio with zero per-repo SSO work.
One OAuth App per provider, forever
GitHub’s OAuth Apps allow only one callback URL per app. The naive pattern, one OAuth App per provider per subdomain, balloons to 30+ OAuth Apps for a ten-flagship, three-provider portfolio. No.
Instead: all OAuth Apps live on our.one. Hall’s "Sign in with GitHub" button redirects to our.one/cross-signin?provider=github&callbackUrl=https://hall.our.one/inside. our.one runs the OAuth dance, the resulting cookie lands on .our.one (shared), and the user gets redirected back to Hall already signed in. Three OAuth Apps total, ever — no matter how many flagships launch.
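The redirect Hall's sign-in button performs can be sketched as a URL builder; the function name is illustrative, and only the our.one/cross-signin route and its two query params come from the flow described above:

```typescript
// Build the cross-signin URL that hands the OAuth dance off to our.one.
// After our.one completes the flow, the session cookie lands on .our.one
// and the user is bounced back to callbackUrl already signed in.
function crossSigninUrl(provider: string, callbackUrl: string): string {
  const url = new URL("https://our.one/cross-signin");
  url.searchParams.set("provider", provider);
  url.searchParams.set("callbackUrl", callbackUrl); // URL-encoded automatically
  return url.toString();
}

// crossSigninUrl("github", "https://hall.our.one/inside")
// → "https://our.one/cross-signin?provider=github&callbackUrl=https%3A%2F%2Fhall.our.one%2Finside"
```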
CSP — the one-line fix that looked like ten bugs
Two browser errors appeared after deploy: Google OAuth refused to load, and magic-link sign-in 500’d. Different-looking symptoms, same cause: the Content Security Policy we inherited from our.one had form-action 'self', which blocks Auth.js from POSTing users out to accounts.google.com, github.com, or www.linkedin.com, and also blocks our SSO proxy from submitting forms cross-subdomain.
Fix: add the three OAuth provider origins plus https://*.our.one to the form-action directive. A one-line change in src/proxy.ts in both repos. Deployed, verified.
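As a sketch, the widened directive might read like this; the exact header assembly in src/proxy.ts is an assumption, only the origins come from the fix described above:

```typescript
// Hypothetical form-action source list after the fix. 'self' alone blocked
// Auth.js's POSTs to the OAuth providers and the cross-subdomain SSO proxy.
const formActionSources = [
  "'self'",
  "https://*.our.one",           // SSO proxy form posts across subdomains
  "https://accounts.google.com", // Google OAuth
  "https://github.com",          // GitHub OAuth
  "https://www.linkedin.com",    // LinkedIn OAuth
];

const formActionDirective = `form-action ${formActionSources.join(" ")}`;
```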
Defensive rendering + health endpoint
During one Vercel deploy the DATABASE_URL was stale. The homepage 500’d with a generic "Server Components render error," and the minified production build hid the detail. Two fixes:
- Every server call on the homepage (auth session lookup, posts query) is wrapped in try/catch. If the DB is unreachable, the page still renders with an amber banner instead of a white 500.
- /api/health surfaces the names of missing required env vars (safe to expose; names aren’t secrets). DB error messages stay redacted in production because connection strings leak through them.
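The try/catch wrapping in the first bullet can be sketched as a tiny generic helper; `safeLoad` is an illustrative name, not Hall's actual code:

```typescript
type LoadResult<T> = { data: T | null; dbDown: boolean };

// Wrap any server-side fetch (session lookup, posts query) so a DB outage
// degrades to a renderable "dbDown" state instead of a white 500.
async function safeLoad<T>(fetcher: () => Promise<T>): Promise<LoadResult<T>> {
  try {
    return { data: await fetcher(), dbDown: false };
  } catch {
    // Swallow the error: the page renders with an amber banner instead.
    return { data: null, dbDown: true };
  }
}
```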
The banner on the homepage tells readers exactly where to look (/api/health for diagnostics); the health endpoint tells operators exactly what to fix. Two minutes of work, multi-hour debugging cycle saved.
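A sketch of the health check's env-var report; the required-variable list and function name are assumptions, not Hall's actual route code:

```typescript
// Report which required env vars are missing, by *name* only. Names are safe
// to expose; values and raw DB error strings are not, so neither is returned.
const REQUIRED_ENV = ["DATABASE_URL", "AUTH_SECRET"]; // assumed list

function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV.filter((name) => !env[name]);
}
```

In the route handler this would run against process.env and be serialized into the /api/health JSON body.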
What this post is really about
When you watch how a product gets built, the interesting parts aren’t always the features. They’re the decisions: what to factor out, when to swap backends, which kind of complexity is worth eating now versus kicking down the road. Hall’s promise is that the whole decision trail is visible. This post is one day of it.