Our platform runs 12 customer-facing SSPs across 13 brands in four regions (UK, US, Canada, Australia/NZ), powered by a single shared API. We've already invested in tracking — PostHog is deployed widely across our front-end properties — but the deeper layers (server-side events, warehouse, unified experimentation) are where the gaps remain.
| Tool | What It Does |
|---|---|
| PostHog JS | Client-side product analytics & session replay. Deployed across all 14 brochureware projects and the majority of SSP quote flow apps. Each brand has its own PostHog project. |
| GTM + Google Analytics | Tag management and web analytics. Running on all sites — brochureware and SSPs. Our most widely deployed analytics stack, handling marketing tags, traffic metrics, conversion tracking, and third-party pixels. |
| Mida.so | A/B testing and experimentation. Runs as a separate tool, not integrated with PostHog analytics. |
| HubSpot Embed | Form tracking, chat widget, marketing attribution on select SSPs. |
| Tapfiliate | Affiliate referral tracking on select brands. |
| CSP + MySQL reporting | 30+ custom reports in the Customer Service Portal (policy lifecycle, financials, claims, renewals, channel performance, customer demographics) powered by KoolReport and the policy_movement fact table. Our primary operational reporting layer. |
This isn't a greenfield deployment — we already have the foundations. The goal is to consolidate what works, extend it consistently across all brands, and add the missing layers (server-side events, warehouse, unified experimentation).
| Capability | Current State | Status |
|---|---|---|
| Error tracking | Sentry | Covered |
| APM / performance | Inspector APM | Covered |
| Payments | Stripe + GoCardless | Covered |
| CRM | HubSpot | Covered |
| Product analytics (brochureware) | PostHog on all 14 front-end sites | Covered |
| Product analytics (SSP quote flows) | PostHog on majority of SSPs (ETA, Sundays UK, and others) — coverage expanding | Partial |
| Session replay | PostHog on brochureware + multiple SSPs | Partial |
| A/B testing & experimentation | Mida.so (standalone, not connected to analytics) | Partial |
| Server-side event tracking | — | Missing |
| Cross-brand funnel analysis | — | Missing |
| Centralised data warehouse | — | Missing |
The policy_movement table is a 160+ column denormalised fact table with dedicated dw_row_timestamp and dw_reporting_date columns — it was designed for reporting consumption. The CSP has 30+ custom report views powered by KoolReport, covering policy lifecycle, financials, customer demographics, claims, channel performance, and more. There are 40+ dedicated reporting API endpoints.
Analytics + experimentation + session replay + warehouse/data layer in a single product.
| Product | Orientation | Pricing Model | A/B Testing | Built-in Warehouse | Open Source |
|---|---|---|---|---|---|
| PostHog | Developer / technical teams | Usage-based, transparent | Yes | Yes | Yes |
| Statsig | Experimentation-heavy teams | Per-event or per-experiment | Best-in-class | No | No |
| Amplitude | Enterprise, non-technical PMs | Per MTU, opaque at scale | Yes | Reads yours | No |
PostHog is the generalist. Statsig is stronger if experimentation is the primary focus (used by OpenAI, Notion, Figma). Amplitude is the enterprise incumbent — powerful but expensive, designed for product managers rather than engineers.
Strong analytics, but you'd still need separate solutions for experimentation and warehousing.
| Product | Strength | Gap for TTB |
|---|---|---|
| Mixpanel | Funnel visualisation, journey mapping. 1M events free. | A/B testing only just relaunched (late 2025). No warehouse. |
| Heap | Zero-instrumentation autocapture. | No feature flags, no A/B testing, no warehouse. Expensive at scale. Acquired by Contentsquare. |
These are page-view counters, not product analytics. They would leave every gap still open.
| Product | Use Case | Why Not for TTB |
|---|---|---|
| Plausible | Lightweight, GDPR-friendly traffic metrics | No funnels, no product analytics, no experiments |
| Matomo | Self-hosted Google Analytics replacement (PHP) | Web analytics only. No experimentation framework |
| Umami | Minimal 1kb script, zero cookies | Page-level metrics only. No user journeys, no experiments |
| Approach | What It Is |
|---|---|
| BigQuery / Snowflake / Redshift | Raw warehouse. Pipe data in, write SQL, connect BI tools. |
| dbt + Metabase / Looker / Superset | Transform layer + dashboarding on top of a warehouse. |
Our team has the technical depth to do it. The shared API is a single instrumentation point. We already run MySQL, Redis, queues. We could conceivably add an analytics_events table, emit events from the API, add a JS tracker to SSPs, and build dashboards in Metabase.
Writing AnalyticsEvent::create([...]) at key points is straightforward. Building funnel analysis with arbitrary step ordering, time windows, property breakdowns, statistical significance, retention curves, user path visualisation, and cohort comparison — that's years of product development. PostHog has ~200 engineers working on this full-time.
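To make that concrete: even the simplest funnel primitive, ordered steps completed within a time window per user, takes real code before you get anywhere near breakdowns, significance testing, or retention. A toy sketch in JavaScript (the event shape and step names are assumptions, not our schema):

```javascript
// Toy funnel: count users who completed the given steps in order,
// each within `windowMs` of their first step. Illustrative only —
// real funnel engines also handle breakdowns, exclusions, re-entry,
// sampling, and statistical significance.
function funnelConversion(events, steps, windowMs) {
  const byUser = new Map();
  for (const e of events) {
    if (!byUser.has(e.userId)) byUser.set(e.userId, []);
    byUser.get(e.userId).push(e);
  }
  let entered = 0;
  let converted = 0;
  for (const userEvents of byUser.values()) {
    userEvents.sort((a, b) => a.ts - b.ts);
    let stepIdx = 0;
    let startTs = null;
    for (const e of userEvents) {
      if (e.name !== steps[stepIdx]) continue;
      if (stepIdx === 0) { startTs = e.ts; entered++; }
      if (e.ts - startTs > windowMs) break; // fell outside the window
      stepIdx++;
      if (stepIdx === steps.length) { converted++; break; }
    }
  }
  return { entered, converted, rate: entered ? converted / entered : 0 };
}
```

And this still ignores re-entry, step exclusions, and per-property breakdowns — each of which multiplies the work again.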
A/B testing isn't "show version A to half the users." It requires sequential testing, sample size calculations, confidence intervals, guardrail metrics, mutual exclusion between experiments, and proper randomisation. Getting this wrong gives false positives that lead to bad product decisions. Building it correctly in-house is a multi-month project that then needs ongoing maintenance.
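As one example of the statistical machinery involved, the standard two-proportion sample-size formula tells you how many users each variant needs before a test can conclude. A sketch (the conversion rates are made-up figures, and real platforms layer sequential testing on top of this):

```javascript
// Required sample size per variant for detecting a lift from p1 to p2,
// two-sided alpha = 0.05 (z = 1.96), 80% power (z = 0.84), using the
// normal approximation. Illustrative only.
function sampleSizePerVariant(p1, p2, zAlpha = 1.96, zBeta = 0.84) {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  const effect = p2 - p1;
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (effect * effect));
}

// e.g. lifting quote completion from 10% to 12% (made-up figures)
const n = sampleSizePerVariant(0.10, 0.12);
```

Several thousand users per variant, per experiment — which is why mutual exclusion and sample management matter on a 13-brand platform with uneven traffic.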
Recording, compressing, storing, and playing back DOM snapshots with console logs and network requests is an entire product category. The storage, indexing, and playback infrastructure alone is substantial.
The real power isn't "we can see events" — it's joining Stripe subscription data, quote flow behaviour, HubSpot lead source, and policy data in a single query. Building sync jobs, schema mappings, and a query engine across disparate sources is exactly the problem these tools exist to solve.
Engineering time is better spent on the insurance product — quote flows, underwriting logic, new brands, new markets. Every week spent building analytics infrastructure is a week not spent on what differentiates TTB. Analytics tooling is a solved problem. Our multi-brand insurance platform is not.
Having run these tools side by side, a clear ranking emerges:
| Rank | Product | Rationale |
|---|---|---|
| 1 | PostHog | Already partially deployed and validated by the team. Covers every remaining gap (server-side events, warehouse, unified experimentation), integrates with our stack (Stripe, HubSpot, MySQL), transparent pricing, open source exit path. First-party cookieless tracking and server-side options give it longevity as browser privacy restrictions tighten. |
| 2 | Statsig | Consider if experimentation becomes the primary focus. Better A/B testing, but no built-in warehouse — would need pairing with BigQuery or similar. |
| 3 | Amplitude | Only if the team shifts toward non-technical product managers driving analytics. More expensive, less transparent pricing, but polished UI for business users. |
| 4 | Build our own | Only if regulatory constraints rule out all vendors. Opportunity cost does not justify it otherwise. |
Each SSP sends events to PostHog with a brand property, so cross-brand funnels, comparisons, and dashboards reduce to filtering on that single property.
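On the instrumentation side this is one shared property per event. A minimal sketch (the brand and event values are illustrative; in the SSPs the resulting name and properties are what get passed to posthog.capture):

```javascript
// Site-level config, set once per SSP build (values illustrative)
const SITE = { brand: 'sundays-au', region: 'AU' };

// Enrich every event with brand/region before handing it to
// posthog.capture(name, props) — a sketch, not the real SDK call.
function enrich(name, props = {}) {
  return { name, props: { ...props, brand: SITE.brand, region: SITE.region } };
}

const ev = enrich('quote_started', { channel: 'organic' });
// In the SSP: posthog.capture(ev.name, ev.props);
```

Because the enrichment lives in one helper, no individual capture call can forget the brand property — which is the whole basis of cross-brand analysis in a single project.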
PostHog has a pre-built Stripe connector that syncs charges, customers, invoices, subscriptions, and balance transactions. Once connected, it auto-generates a revenue analytics dashboard with MRR, churn, LTV, ARPU, and growth rate.
By joining Stripe data with behavioural events on customer email/ID, we get the full picture: user visits SSP → starts quote → selects covers → binds policy → pays monthly — in a single queryable view.
Sentry tells us what errored. Session replay shows us what the user was doing when it happened. Useful for debugging quote flow UX issues that don't throw errors — confused users, rage clicks, form abandonment.
PostHog has pre-built connectors for both HubSpot and MySQL. We can join CRM lead data with policy data and behavioural events in a single SQL query — e.g., identifying which marketing campaigns drive the highest-LTV customers, or which HubSpot lifecycle stages correlate with actual policy binds.
PostHog JS is already running across all 14 brochureware sites and the majority of SSP quote flow apps (including ETA, Sundays UK, Sundays AU, and others) with session replay enabled. Each brand has its own PostHog project. This gives us broad front-end coverage. The gap isn't deployment — it's consistency of instrumentation.
PostHog is already deployed on most SSPs, but the instrumentation needs standardising. Key additions:
- brand property across all sites for cross-brand analysis

Use the PostHog PHP SDK from the shared api-v3 Lumen service. Since all brands funnel through a single API, this is one integration point for server-side events, which are unaffected by ad blockers and browser privacy restrictions.
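Under the hood, server-side capture is a POST to PostHog's /capture endpoint, which the PHP SDK wraps. A sketch of the payload api-v3 might emit on inception, written in JavaScript for illustration (the envelope fields api_key, event, distinct_id, and properties are PostHog's; the policy fields and values are assumptions):

```javascript
// Build a PostHog capture payload for a server-side policy event.
// The PHP SDK's capture() wraps the same structure; values here
// are illustrative.
function policyEventPayload(apiKey, policy) {
  return {
    api_key: apiKey,
    event: 'policy.incepted',
    distinct_id: policy.customerEmail, // shared identity with client events
    properties: {
      brand: policy.brand,
      channel: policy.channel,
      record_type: 'NE',
      receivable_premium: policy.receivablePremium,
      underwriter: policy.underwriter,
    },
  };
}

const payload = policyEventPayload('<key>', {
  customerEmail: 'jo@example.com',
  brand: 'sundays-au',
  channel: 'partner',
  receivablePremium: 42.5,
  underwriter: 'Hollard',
});
// POSTed to the PostHog /capture/ endpoint (or sent via the SDK)
```

Keying distinct_id on the same customer email the client-side flow identifies with is what lets server events land on the same person profile as the quote-flow events.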
Connect Stripe, HubSpot, and MySQL via PostHog's managed source connectors (UI configuration, no code).
Two views of how PostHog fits into the TTB stack: what we'd build now, and where it could evolve if the data needs grow. These are meant to make the setup tangible — what talks to what, and where the data ends up.
This is the architecture we'd build in the rollout described in section 14. PostHog becomes the central analytics layer, pulling data from our existing systems and connecting it with user behaviour.
Every event carries brand and region properties for segmentation. Warehouse connectors are configured via UI — no code, no ETL pipeline, no separate BI tool. The CSP reports and existing MySQL reporting continue unchanged. PostHog adds the behavioural layer on top.
If TTB's data needs grow — data science, ML, regulatory reporting, multi-department BI — the setup can evolve without replacing PostHog. PostHog stays as the product analytics layer; a dedicated warehouse handles the heavier workloads.
The architecture only works if the right events are flowing. Below is an example event model for TTB, based on patterns already present in the codebase — the tracking parameters the SSPs capture, the PolicyEvent lifecycle in api-v3, the policy_movement fact table, the reward/referral system, and the Activity audit log. The event names below are illustrative — the exact naming convention should be agreed by the team before implementation.
| Event name (example) | Source | Based on | Key properties | Enables |
|---|---|---|---|---|
| Quote flow (client-side) — based on existing SSP tracking patterns | | | | |
| quote_started | Client | Already captured: CaptureTrackingParams middleware stores channel, partner, ref_id, and UTM params to session on first visit. Currently flows to the API at bind time but isn't sent to PostHog. | brand, channel, partner, utm_source, utm_medium, utm_campaign, ref_id | Top-of-funnel measurement, channel & partner attribution |
| quote_step_completed | Client | SSPs already have defined step sequences (e.g. Sundays AU 4x4: Step1Vehicle → Step2Driver → … → Step6Summary). The QuoteLineStep model in api-v3 defines step order, type, and headers per product. | step_name, step_number, brand, product | Funnel analysis, drop-off identification per step & per brand |
| cover_option_selected | Client | SSPs present cover options and add-ons via QuoteLineSection within each step. Users select/deselect options that affect premium calculation. | option_name, option_value, brand, selected | Product mix analysis, upsell optimisation |
| quote_completed | Client | User reaches the summary/payment step. The SSP sends tracking data (channel, partner, utmSource) to /api/quote/calculate at this point. | brand, quoted_premium, payment_frequency, channel | Conversion funnel, premium vs. conversion correlation |
| quote_abandoned | Client | Inferred when session ends without reaching bind. Quote state is persisted in localStorage (e.g. sundays4x4_quote), so the last completed step is known. | last_step, brand, channel | Abandonment analysis, retargeting triggers |
| Policy lifecycle (server-side) — based on existing PolicyEvent enum & webhook dispatching in api-v3 | | | | |
| policy.incepted | Server | Already dispatched: api-v3 fires policy.incepted and policy.incepted.{channel} webhooks. The CreatePolicyMovement job writes a NE record with full premium, fee, and commission breakdown. | brand, product, record_type (NE), receivable_premium, channel, sales_channel, underwriter, broker | Revenue attribution, quote-to-bind conversion, underwriter reporting |
| policy.renewed | Server | Already tracked: PolicyEvent::Renewed creates an RE movement record. The job preserves orig_inception_date across renewals for tenure calculation. | brand, record_type (RE), renewal_premium, orig_inception_date, underwriter | Retention analysis, renewal rate by cohort, premium change tracking |
| policy.endorsed | Server | Already dispatched: policy.endorsed webhook fires on MTAs. PolicyEvent::Endorsed creates an EN movement with endorsementStartDate and premium delta. | brand, record_type (EN), reason, premium_delta, eff_start_date | MTA analysis, premium impact tracking |
| policy.canceled | Server | Already dispatched: policy.canceled webhook. PolicyEvent::Canceled creates a CA movement. cancel_fee and reason are recorded. | brand, record_type (CA), reason, cancel_fee, tenure | Churn analysis, cancellation reason tracking |
| policy.lapsed | Server | Already tracked: PolicyEvent::Lapsed creates an LA movement, with trans_date set to the policy endDate. | brand, record_type (LA), tenure, expiry_date | Involuntary churn, payment failure correlation |
| payment.failed | Server | Already logged: Stripe charge failures are captured in the Activity audit log with payment_request and payment_response JSON. The PolicyPaymentsController report queries these. | brand, failure_reason, provider (Stripe/GoCardless) | Payment health, dunning optimisation |
| reward.awarded | Server | Already tracked: RewardEvent model records referral rewards with statuses attached → awarded / rejected / failed. Links to ReferringCustomerId and Policy. | brand, reward_name, reward_type, amount, referring_customer | Referral programme performance, reward ROI |
| Warehouse joins (synced data) | | | | |
| Stripe charges | Warehouse | Stripe PaymentIntents are already created with metadata including frequency and source (e.g. sundays-au-4x4-quote). PostHog's Stripe connector can sync these. | Amount, status, frequency, source, customer_id | MRR, LTV, ARPU, revenue dashboards |
| HubSpot contacts | Warehouse | HubSpot deals are already created via the CreateHubspotDeal job on policy inception. PostHog has a pre-built HubSpot connector. | Lead source, lifecycle stage, deal value, campaign | Marketing attribution, campaign ROI |
| MySQL policy_movement | Warehouse | The policy_movement fact table already holds 160+ denormalised columns per transaction — premiums, fees, commissions, GST, benefits, underwriter splits. PostHog has a pre-built MySQL connector. | Full premium breakdown, underwriter (RSE/Tokio Marine/Hollard), commission, tax | Actuarial reporting, premium breakdowns, loss ratios, underwriter reconciliation |
Most of this data already exists today — in the SSPs, the api-v3 policy_movement table, or external systems like Stripe and HubSpot. What's missing is the connection. PostHog becomes the layer that joins them: the client events give you the user journey, the server events give you the business outcome, and the warehouse data gives you the financial context. Joined on a shared identity (customer email or distinct_id), you can answer questions like: "Of users who came via partner X on Sundays AU and completed the quote flow, what percentage bound a policy, what's their average premium, and how do their renewal rates compare?"
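The identity join deserves a note. posthog-js assigns an anonymous distinct_id until posthog.identify(email) is called, at which point prior anonymous events merge into the identified person. The sketch below models that merge so the mechanics are visible (an illustration of the behaviour, not the SDK's implementation):

```javascript
// Minimal model of identity stitching: events captured under an
// anonymous id before identify() are re-attributed to the identified
// distinct_id, so client and server events join on one person.
function stitch(events, anonId, identifiedId) {
  return events.map((e) =>
    e.distinctId === anonId ? { ...e, distinctId: identifiedId } : e
  );
}

const events = [
  { name: 'quote_started', distinctId: 'anon-123' },       // client-side
  { name: 'policy.incepted', distinctId: 'jo@example.com' }, // server-side
];
const joined = stitch(events, 'anon-123', 'jo@example.com');
// Both events now share one distinct_id and appear in the same funnel
```

In practice the only work on our side is calling posthog.identify with the customer email once the quote flow captures it, and keying server-side events on the same email.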
We currently have 14 separate PostHog projects (one per brand). As we extend into SSP quote flows, server-side events, and the warehouse layer, it's worth deciding on the right project structure. There are several models, each with trade-offs.
| Model | Structure | Pros | Cons |
|---|---|---|---|
| A. Single project (Recommended) | All brands in one PostHog project, differentiated by brand and region properties | Cross-brand funnels and comparisons are trivial — everything in one place, filter by property. Dashboards, experiments, and warehouse queries only need to be built once. User access controls can restrict visibility per team member if needed. Least operational overhead by far. | Data residency is mixed (AU/UK/US/CA in one pot) — only a concern if there's a hard regulatory requirement. Single project = single billing pool. |
| B. Per-brand projects (Current) | Separate project per brand (14 today) | Clean data isolation. Separate replay streams and event volumes per brand. Familiar — it's what we already have. | Cross-brand analysis requires exporting data to the warehouse and joining there. Can't natively build a funnel spanning two projects. Every dashboard, experiment, and instrumentation change must be repeated per project — significant operational overhead at 14 projects. |
| C. Regional groupings | One project per region (e.g., UK brands, US brands, AU/NZ, Canada) | Data residency stays clean per region. Intra-region cross-brand comparison works natively. | Still multiplies work — every instrumentation change, dashboard, or experiment needs repeating per region (4–5 times). Cross-region analysis still goes through the warehouse. |
| D. Separate organisations per region | Distinct PostHog organisations, each with their own hosting region (US or EU) | Strongest isolation. Separate billing, admin, and data residency per org. PostHog Cloud lets you choose US or EU hosting per org. | Most overhead to manage. Cross-org analysis entirely via warehouse/data pipelines. Only justified by hard regulatory requirements. |
A single-project approach doesn't mean everyone sees everything. PostHog supports role-based access and dashboard-level permissions, so views can be scoped per brand or per team.
This means a brand manager for Sundays UK could have a dashboard scoped to their brand without seeing ETA or BikeInsure data, while leadership has a cross-brand overview — all within the same project.
If the architecture ever needs to split (e.g., a regulatory requirement forces data isolation for a specific region), data can be exported and ported between projects, so consolidating now doesn't foreclose a split later.
The recommendation is model A: a single project, using brand and region properties to segment data, and access controls to manage visibility. The existing per-brand brochureware projects can continue as-is or be gradually consolidated — there's no urgency to migrate them. If a hard data residency requirement emerges for a specific region, that region can be split into its own project or organisation at that point.
| Consideration | Detail | Severity |
|---|---|---|
| GoCardless not supported | No pre-built connector. Push GoCardless data to MySQL first and sync that table, or export to S3/CSV as a custom source. | Medium |
| HubSpot sync is full-table only | No incremental sync. If our HubSpot dataset is large, this drives up row-sync costs every cycle. | Medium |
| Stripe incremental is append-only | New records sync, but updates to existing records (e.g., subscription modifications from endorsements) require a full resync. | Medium |
| Data residency | PostHog Cloud is US or EU hosted. With AU/NZ brands, check data handling requirements. Self-hosting available if needed. | Medium |
| Volume & cost estimation | 13 brands × quote flow events × session replay = meaningful volume. See the calculator below for detailed projections. | Plan for |
| No seat-based charges | Unlimited users, up to 6 projects, no per-API-call charges. Only usage-based billing on events, replays, and synced rows. | Positive |
This document recommends PostHog — and that recommendation stands. But it's important to be honest about what PostHog's data warehouse is and isn't, because the framing matters. The pitch isn't "use PostHog as your data warehouse." The pitch is: PostHog gives you one place to see your product data, your revenue data, and your customer data together — without building a separate warehouse stack.
Under the hood, PostHog's data warehouse is a single-tenant DuckDB instance per customer, backed by Delta Lake for storage and Arrow Flight SQL for data transfer. It's built for analytical queries against synced business data — not for general-purpose warehousing at scale.
| Aspect | PostHog Warehouse | Dedicated Warehouses (Redshift / BigQuery / Snowflake) |
|---|---|---|
| Architecture | Single-tenant DuckDB per customer | Massively parallel (MPP), multi-node clusters or serverless |
| Primary strength | Joining product analytics with business data in one UI | Petabyte-scale analytics, complex ETL, ML workloads |
| Data modelling | Views + materialized models in HogQL. PostHog recommends bringing dbt for advanced needs — their tooling is "not quite ready for data engineers" | Mature dbt integration, full SQL dialects, stored procedures |
| BI tooling | Built-in dashboards and SQL IDE. PostHog acknowledges their BI tooling is "not quite ready for data analysts" and suggests bringing tools like Hex | Native connectors to Looker, Tableau, Metabase, Mode, etc. |
| Concurrency | Designed for product team queries, not high-concurrency reporting | Purpose-built for many concurrent analysts and dashboards |
| Maturity | Managed DuckDB warehouse is in beta (waitlist access) | Production-hardened for years, enterprise SLAs |
This isn't a criticism — it's the correct framing. PostHog is building a product analytics platform with warehouse capabilities, not a warehouse that happens to do analytics. The value for TTB is the unified view, not raw warehouse power.
PostHog's analytics engine runs on ClickHouse, which is fast for typical product analytics queries. But at higher volumes there are documented pain points, so query performance should be validated during the pilot rather than assumed.
PostHog's experimentation feature works, and it's significantly better than having Mida.so disconnected from analytics. But feedback from users who've run experiments at scale suggests it lags dedicated experimentation platforms such as Statsig in statistical depth and tooling.
PostHog is built for engineers, and that's both its strength and a potential friction point: non-technical team members may need support from the dev team, at least initially.
PostHog's pricing is transparent and usage-based — that's genuinely better than Amplitude's opaque enterprise quotes. But usage-based pricing has its own risk: costs scale directly with event, replay, and synced-row volume, so traffic growth translates into bill growth unless volumes are monitored.
The policy_movement fact table, 30+ CSP report views, and 40+ reporting API endpoints give us strong operational intelligence on policies, premiums, claims, and financials. That's not going away, and PostHog isn't replacing it.
TTB operates across four regulatory environments (UK FCA, Australian APRA, US state regulators, Canadian provincial regulators). Any analytics platform we adopt needs to be assessed against what those regulators actually require — and what they don't. This section is a realistic look at where PostHog fits and where it doesn't.
Insurance regulators don't prescribe which analytics tools you use. They care about outcomes: can you demonstrate that you're treating customers fairly, retaining data appropriately, protecting personal information, and producing accurate reports when asked? The specific requirements vary by jurisdiction:
| Requirement | UK (FCA) | Australia (APRA / ASIC) | Relevant to PostHog? |
|---|---|---|---|
| Data retention | Policy records must be retained for minimum periods (typically 5–7 years post-expiry). FCA expects firms to maintain adequate records of customer interactions and transactions. | APRA requires quarterly and annual reporting. Records must be retained for the period specified in prudential standards (typically 7 years). | Partial — PostHog retains data for 7 years on paid plans. But regulatory records should live in the policy database (MySQL), not in an analytics tool. |
| Audit trail | FCA requires firms to demonstrate fair customer outcomes with evidence. SMCR requires clear accountability chains. | APRA CPS 230 (operational resilience) requires documented operational risk management and business continuity. CPS 234 requires information security audit trails. | Complementary — PostHog session replays and event logs can supplement audit evidence (e.g., proving what a customer saw during a quote), but they shouldn't be the primary audit record. The API's activity table is the source of truth. |
| Regulatory reporting | FCA annual compliance reports, product governance reports, claims data returns. | APRA GRS forms (quarterly/annual): policy counts, premiums, claims, reinsurance. Due 20 business days (quarterly) or 3 months (annual) after period end. | Not PostHog's job — Regulatory returns require certified financial data from the policy admin system. The CSP/MySQL policy_movement table and existing reporting endpoints are the right source for these. |
| Customer outcome monitoring | FCA Consumer Duty requires firms to monitor and evidence that products deliver fair value and good outcomes across the customer lifecycle. | ASIC product intervention powers require monitoring of product performance and customer outcomes. | Strong fit — This is where PostHog shines. Funnel analysis showing where customers drop off, session replays revealing confusing UX, and A/B tests proving which quote flow produces better outcomes — this is exactly the evidence Consumer Duty expects. |
| Data protection / privacy | UK GDPR. ICO audit framework covers collection, use, storage, sharing, retention, disposal. | Privacy Act 1988 (APPs). APRA CPS 234 for information security. Data (Use and Access) Act 2025. | Manageable — PostHog supports cookieless tracking, EU hosting, and data deletion APIs for GDPR compliance. Personal data in session replays needs attention — PostHog offers autocapture masking, but it should be configured carefully for insurance forms that collect sensitive data. |
| Third-party risk | FCA expects firms to manage outsourcing and third-party risk, including cloud providers. | APRA CPS 230 explicitly covers service provider risk. CPS 234 extends to third-party vendors processing APRA-regulated data. | Due diligence needed — PostHog Cloud is hosted on AWS (US or EU). For APRA-regulated entities, a formal third-party risk assessment of PostHog as a service provider would be prudent. PostHog offers SOC 2 Type II compliance, GDPR DPA, and self-hosting as a fallback. |
Regulatory records belong in the policy database; the MySQL policy_movement table serves this purpose today. Audit evidence belongs in the API's activity table, where it has full JSON detail and timestamps, and is under our direct control.

A common concern with adopting any analytics platform: does this require a dedicated hire? The short answer is no — not initially. But it's worth mapping out who does what.
By using PostHog Cloud (not self-hosting), the entire infrastructure layer is managed: hosting, scaling, upgrades, backups, and the underlying ClickHouse cluster are PostHog's responsibility, not ours.
This eliminates the most common reason teams hire a dedicated analytics engineer — keeping the infrastructure alive.
| Task | Effort | Who | When |
|---|---|---|---|
| Standardise JS instrumentation across all SSPs | Low | Internal dev | One-off setup |
| Define event taxonomy (quote steps, cover options, payments) | Medium | Internal dev + product | One-off, then evolves |
| PHP SDK integration in api-v3 (server-side events) | Medium | Internal dev | One-off setup |
| Connect warehouse sources (Stripe, HubSpot, MySQL) | Low | Internal dev | UI config, no code |
| Build dashboards and funnels | Low | Anyone on the team | Ongoing |
| Set up and run A/B experiments | Medium | Dev + product | Ongoing |
| Warehouse SQL queries and joins | Medium | Anyone with SQL | Ongoing |
PostHog has invested heavily in lowering the bar for non-technical users, including AI-assisted querying and point-and-click insight and dashboard building.
| Scenario | Hire? | Rationale |
|---|---|---|
| Pilot phase (1–2 brands) | No | Internal devs handle instrumentation. Product team explores the data. PostHog AI assists with queries. This is the validation phase. |
| Full rollout (all brands) | Maybe | Depends on how much value the data generates. If the team is building dashboards, running experiments, and making decisions from the data, a dedicated growth/analytics person could accelerate the return — but this is a business hire, not an infrastructure hire. |
| Advanced warehouse usage | Maybe | If cross-source SQL joins, custom data models, and complex attribution become central to the business, a data analyst or analytics engineer role starts to justify itself. But this is a sign of success, not a prerequisite. |
PostHog offers official SDKs for the languages in our stack (posthog-js on the front end, a PHP SDK for the API), managed source connectors for the warehouse layer, and detailed documentation.
For our stack specifically (Laravel/Livewire + Lumen API), the integration patterns are well-documented and straightforward.
We're not starting from zero. PostHog is already live across all brochureware sites and the majority of SSPs. The rollout builds on this foundation by standardising what exists and adding the missing layers.
Start with one or two pilot brands: add a consistent brand property, standardise the event taxonomy for quote flow steps, and connect Stripe as the first warehouse source. Validate that we can see the full funnel — quote start → cover selection → payment → bind — in a single view.
Then roll out the same taxonomy and brand property from step 1 across all SSP deployments. PostHog is already installed on most SSPs (including Sundays AU) — the work here is ensuring consistent instrumentation depth, not net-new deployment. GTM stays in place alongside PostHog.
The estimate below models a representative traffic scenario across our 13 brands. Costs are calculated from published pricing as of March 2026.
| PostHog Component | Volume | Free Tier | Billable | Rate | Cost |
|---|---|---|---|---|---|
| Product Analytics | 200,000 | 1,000,000 | 0 | $0.00005/event | $0 |
| Session Replay | 500 | 5,000 | 0 | $0.005/recording | $0 |
| Feature Flags | 200,000 | 1,000,000 | 0 | $0.0001/request | $0 |
| Data Pipelines (rows synced) | 500,000 | 1,000,000 | 0 | $0.000015/row | $0 |
| Total | | | | | $0 |
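The arithmetic behind the table is plain usage-based billing: only volume above the monthly free tier is billed, at the per-unit rate. A sketch using the figures above (the rates are copied from this estimate and should be re-checked against current PostHog pricing before being relied on):

```javascript
// Usage-based cost: only volume above the free tier is billed.
function componentCost(volume, freeTier, rate) {
  const billable = Math.max(0, volume - freeTier);
  return billable * rate;
}

// Figures from the estimate table (March 2026 pricing snapshot)
const total =
  componentCost(200000, 1000000, 0.00005) +  // product analytics events
  componentCost(500, 5000, 0.005) +          // session replay recordings
  componentCost(200000, 1000000, 0.0001) +   // feature flag requests
  componentCost(500000, 1000000, 0.000015);  // data pipeline rows synced
// total is $0 while every component stays inside its free tier
```

The same function shows the cost cliff once a free tier is exceeded: at 2M events, for example, the 1M billable events cost about $50/month at the listed rate.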