Why this project exists
Most event organisers who want a branded app face a bad trade-off. A bespoke build costs tens of thousands of pounds and locks them to an agency. Off-the-shelf event apps don't carry the brand's identity and treat attendees as someone else's users.
I wanted to close that gap. One shared platform, many fully branded event apps, without forcing each brand into the same template or each build into a bespoke fork.
I'm building it end-to-end: native apps, admin web, API, infra, observability, commercial strategy. It's my first time running the full product stack myself, not just the engineering.
The problem
Running multiple branded apps on shared infrastructure is full of sharp edges:
- Tenant boundaries leak easily — a single unscoped query can surface one brand's data inside another
- Every brand wants to feel bespoke, but each one-off fork multiplies the cost of every future change
- Event data rarely lives inside your own system — ticketing, CRM, and publishing platforms all own pieces of it
- Push notifications need to reach the right attendees at the right moment, not the entire audience of every brand
- Attendee accounts span years, devices, and events — deletion, provider linking, and session invalidation all have to hold across every surface
- Solo-founder time is the scarcest resource, so every abstraction has to earn its keep
Architecture at a glance
The platform is organised around a strict multi-tenant hierarchy. Every row in the database, every API call, and every screen in the admin and mobile apps knows which layer it belongs to. Brand variation is controlled through themes, assets, and feature flags — not through forked code.
Organisation → Brand → Event
A Turborepo monorepo keeps the platform coherent: native apps, admin, marketing, and API share types, an API client, and domain logic through internal packages. The API runs on Hono + tRPC + Kysely against PostgreSQL, deployed to a Hetzner VPS orchestrated by Coolify with GHCR-hosted images and GitHub Actions driving CI/CD.
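The claim that "every row knows which layer it belongs to" comes down to making unscoped reads inexpressible. A minimal sketch of the idea, assuming hypothetical names (`TenantContext`, `scoped`) rather than the platform's actual Kysely helpers:

```typescript
// Illustrative sketch: TenantContext, TenantRow, and scoped() are hypothetical
// names, not the platform's real API. The point is the shape: a read cannot be
// written at the call site without a brand filter.
type TenantContext = { organisationId: string; brandId: string; eventId?: string };
type TenantRow = { brandId: string };

// Every read goes through scoped(), which injects the brand filter itself,
// so a single forgotten WHERE clause can't leak one brand's data to another.
function scoped<T extends TenantRow>(ctx: TenantContext, rows: T[]): T[] {
  return rows.filter((r) => r.brandId === ctx.brandId);
}

type EventRow = TenantRow & { title: string };

const events: EventRow[] = [
  { brandId: "brand_a", title: "Launch night" },
  { brandId: "brand_b", title: "Gallery opening" },
];

// Only brand_a's rows are visible in brand_a's context.
const visible = scoped({ organisationId: "org_1", brandId: "brand_a" }, events);
```

In a real query builder the same move is usually a wrapper that pre-applies the tenant predicate before handing the builder to feature code.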
What I built
- A branded mobile app shell (React Native + Expo) with multi-tenant screen routing, visual and compact layout variants per brand, events hub with pricing, gallery carousels, notifications feed, and a full account settings surface
- An admin dashboard for organisations, brands, events, galleries, members, Eventbrite OAuth, scheduled push campaigns with live reach estimation, per-brand attendee lists, and per-brand and platform-wide analytics
- An API service with tenant-aware access, admin tRPC routes, public analytics ingestion, auto-migrations on startup, and a periodic scheduled-deletion sweep
- An installation-based push notification system — 4 categories, scheduling, audience targeting v1, preference opt-outs, OS-permission detection, delivery metrics and open tracking
- An Eventbrite integration: OAuth connection, bulk import, FAQ sync, automatic webhook registration, and a background scheduler that keeps ticket price and availability up to date for linked upcoming events
- A full attendee account lifecycle: email/password, Google and Apple sign-in, provider link/unlink, password change, and 30-day soft-delete with email cancellation — enforced across every auth path with 410 ACCOUNT_SCHEDULED_FOR_DELETION termination
- A background image pipeline for event galleries: originals stored immediately, display and thumbnail WebPs generated by an in-process worker, with real-time processing state surfaced in the admin
- A fire-and-forget analytics pipeline with automatic screen-view and lifecycle events from mobile, feeding per-brand and platform-wide Recharts dashboards
- Production infrastructure and observability: Hetzner + Coolify orchestration, GHCR-hosted images, GitHub Actions deploys, GlitchTip error tracking, Uptime Kuma monitoring, Discord alerts, and a first adversarial security audit that generated its own backlog of follow-up tasks
FriendFinder — research & simulation
Alongside the platform, I'm running active R&D on FriendFinder: a proximity discovery feature that lets friends find each other at large events when GPS is unreliable and the mobile network is saturated.
It targets 70,000-person events and combines BLE, mesh relaying, and intermittent server sync with an explicit privacy model. Because you can't field-test something like this at scale, I built a deterministic simulation harness for the underlying protocol and wrote out nine architecture, privacy, and feasibility specs so that design decisions, gates, and trade-offs are legible to anyone picking it up later.
The structure is designed to be extractable as a reusable library after the platform ships.
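"Deterministic simulation" here means every random choice is driven by a seeded PRNG, so any run is reproducible from its seed. A toy sketch of that property (the mulberry32 generator and the random-walk contact model are assumptions for illustration, not FriendFinder's actual protocol code):

```typescript
// mulberry32: a small, well-known seeded PRNG. Same seed, same stream.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Two attendees random-walk on a grid; a "BLE contact" is logged whenever
// their Manhattan distance drops within range. Because the only source of
// randomness is the seeded PRNG, the contact log is identical on every run
// with the same seed — which is what makes regressions in the protocol
// reproducible without a 70,000-person field test.
function simulate(seed: number, steps: number, range = 2): number[] {
  const rand = mulberry32(seed);
  const pos = [ [0, 0], [10, 10] ];
  const contacts: number[] = [];
  for (let t = 0; t < steps; t++) {
    for (const p of pos) {
      p[0] += rand() < 0.5 ? -1 : 1;
      p[1] += rand() < 0.5 ? -1 : 1;
    }
    const d = Math.abs(pos[0][0] - pos[1][0]) + Math.abs(pos[0][1] - pos[1][1]);
    if (d <= range) contacts.push(t);
  }
  return contacts;
}
```

A real harness layers message loss, clock skew, and relay behaviour on top, but the discipline is the same: no call to `Math.random()` or wall-clock time anywhere inside the simulated world.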
Current status
Marketing, API, and admin services are live in production. The ThisChord app is in TestFlight, and a second prospect is in active commercial conversations. All user-facing features needed for alpha are code-complete; the remaining gates for a broader TestFlight release are a new native build, E2E smoke tests against deployed services, and a handful of design and brand-asset items pending on the pilot's side.
Why it matters
Running a full product platform as one engineer used to be impractical. With modern tooling it's genuinely possible, and PulseEvent is where I'm finding out how far that goes — native apps, admin, API, infra, observability, and commercial work, all owned end-to-end.
It's also the kind of platform I'd rather work on: shared infrastructure that improves every tenant at once, brand variation kept narrow and deliberate, and the whole thing legible enough that another engineer could pick it up without a long handover.
That's the bet — that a small team can run a real multi-tenant product without it falling apart or flattening into sameness.
What comes next
Near-term focus is on getting the platform into the hands of more real brands and hardening what's already live:
- Shipping a fresh native TestFlight build and re-enabling OTA updates in production
- Closing the first broader TestFlight round with the pilot, then opening to the second prospect
- Expanding push audience targeting (tier and ticket-based) and analytics coverage
- Merging announcements and notifications into a single messaging surface
- Taking FriendFinder through its Phase 1 gates and on-device pilots
- Stripe-backed native ticketing and a web checkout as a longer-term vertical