Growth channels we can scale responsibly
We treat each channel as a controllable input with measurable outputs and clear guardrails.
Influencer partnerships
Accountable, relationship-based acquisition with unique attribution per partner and clear expectations.
Paid campaigns
Budgeted, time-boxed campaigns with cost caps, structured experiments, and continuous measurement.
Organic & referrals
Long-term compounding growth via content, partnerships, and referral incentives that are easy to audit.
Measurement & attribution
Early-stage accuracy matters more than fancy dashboards. We prioritise clear attribution and auditability, then automate once the signal is strong.
| What we track | Why it matters | How it’s used |
|---|---|---|
| Unique attribution (codes / links / UTMs) | Connects acquisition activity to real outcomes. | Compare channels and partners on conversions, not impressions. |
| Conversion quality (activation, retention) | Prevents “cheap signups” from looking like success. | Prioritise channels that produce durable, low-risk users. |
| Cost discipline (caps, budgets, test sizes) | Avoids uncontrolled spend and unpredictable CAC. | Scale only when CAC and behaviour signals meet thresholds. |
| Audit notes (agreements, outcomes) | Institutional memory for repeatability and diligence. | Supports investor review and internal decision-making. |
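The attribution row above can be sketched as a small aggregation: activated signups are grouped by their attribution tag so channels and partners are compared on conversions rather than impressions. The function name, field names, and sample events are illustrative assumptions, not a production schema.

```python
# Illustrative sketch: compare channels on conversions, not impressions.
# Field names ("utm_source", "activated") and sample data are assumptions.
from collections import defaultdict

def conversions_by_channel(events):
    """Aggregate signups and activations per attribution tag (code / link / UTM)."""
    totals = defaultdict(lambda: {"signups": 0, "activated": 0})
    for e in events:
        tag = e.get("utm_source", "unknown")
        totals[tag]["signups"] += 1
        if e.get("activated"):
            totals[tag]["activated"] += 1
    return dict(totals)

events = [
    {"utm_source": "partner_a", "activated": True},
    {"utm_source": "partner_a", "activated": False},
    {"utm_source": "partner_b", "activated": True},
]
print(conversions_by_channel(events))
```

A report like this makes "cheap signups" visible immediately: a partner with many signups but few activations ranks below one with fewer, higher-quality conversions.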
Outcome over vanity
We focus on signups that become active users, then measure retention and paid conversion as we scale. Reach alone does not drive decisions.
Controlled experiments
New channels begin as small tests with clear success criteria. We only increase spend when results are stable and repeatable.
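The gating rule above — increase spend only when results are stable and repeatable — can be sketched as a simple threshold check across repeated test runs. The CAC and retention thresholds, field names, and minimum run count are illustrative assumptions, not actual policy values.

```python
# Hypothetical scaling gate: thresholds and field names are assumptions.
def ready_to_scale(test_results, max_cac=25.0, min_retention=0.30, min_runs=3):
    """Increase spend only when every repeated test clears both thresholds."""
    if len(test_results) < min_runs:
        return False  # not enough repeated tests to call the result stable
    return all(
        r["cac"] <= max_cac and r["retention_30d"] >= min_retention
        for r in test_results
    )

runs = [
    {"cac": 18.0, "retention_30d": 0.42},
    {"cac": 21.5, "retention_30d": 0.38},
    {"cac": 19.2, "retention_30d": 0.35},
]
print(ready_to_scale(runs))  # True under these illustrative numbers
```

Requiring *every* run to pass, rather than the average, is a deliberately conservative choice: one outlier run is enough to keep spend capped until the signal stabilises.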
Safeguards & abuse prevention
Burner and privacy tools attract misuse. We design for that reality with layered safeguards and escalation paths.
Behavioural signals
We monitor for anomalous usage patterns (rate spikes, repeated failures, unusual messaging profiles).
Tiered controls
Limits and permissions can vary by plan/region and adapt as risk profiles evolve.
Human review loop
We prioritise a review workflow for edge cases before relying on heavy automation.
Flag → throttle → verify → temporary freeze → permanent action. The goal is risk reduction with clear, auditable decision-making.
We focus on operational risk signals and abuse patterns rather than invasive content inspection. Controls are designed to be proportionate and privacy-respecting.
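The flag → throttle → verify → temporary freeze → permanent action ladder above can be sketched as an ordered state machine in which every escalation writes an auditable entry. The state names, log fields, and reason strings are illustrative assumptions.

```python
# Minimal sketch of the escalation ladder as an ordered, auditable
# state machine. Step names and log fields are illustrative assumptions.
from datetime import datetime, timezone

LADDER = ["flag", "throttle", "verify", "temporary_freeze", "permanent_action"]

def escalate(current, audit_log, reason):
    """Move one step up the ladder and record an auditable entry."""
    idx = LADDER.index(current)
    nxt = LADDER[min(idx + 1, len(LADDER) - 1)]  # top step is terminal
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "from": current,
        "to": nxt,
        "reason": reason,
    })
    return nxt

log = []
state = escalate("flag", log, "rate spike on new account")
print(state)  # throttle
```

Because each transition is a single recorded step, a reviewer can reconstruct exactly why an account moved from flagged to frozen, which is the "clear, auditable decision-making" the ladder is meant to provide.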
Internal governance & auditability
Sustainability requires controls: who can provision, what actions are logged, and how decisions can be reviewed.
Role-based admin access
Operational actions are restricted and structured to reduce mistakes and misuse.
Action logging
Key administrative actions are logged to support review and accountability.
Policy-driven evolution
Controls can tighten or loosen as we learn, while maintaining a consistent audit trail.
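The role-based access and action-logging controls above can be sketched as a permission check where every attempt, allowed or denied, is logged. The roles, permission sets, and log fields are illustrative assumptions, not the real access model.

```python
# Hedged sketch of role-based admin actions with logging.
# Roles, permissions, and log fields are assumptions for illustration only.
import json
from datetime import datetime, timezone

PERMISSIONS = {
    "support": {"view_account"},
    "ops": {"view_account", "throttle", "freeze"},
    "admin": {"view_account", "throttle", "freeze", "provision"},
}

def perform(actor, role, action, target, log):
    """Run an admin action only if the role permits it; log either way."""
    allowed = action in PERMISSIONS.get(role, set())
    log.append(json.dumps({
        "at": datetime.now(timezone.utc).isoformat(),
        "actor": actor, "role": role, "action": action,
        "target": target, "allowed": allowed,
    }))
    return allowed

log = []
perform("alice", "support", "freeze", "acct_123", log)  # denied, still logged
```

Logging denied attempts as well as successful ones is the detail that matters for diligence: the audit trail shows not just what happened, but what was prevented.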
We design the operating model to remain viable as volume grows, and to reduce the risk of platform bans, compliance escalations, and reputational damage.
FAQ
Short answers to the questions stakeholders most commonly ask.
Is this page aimed at end users?
Not primarily. Most end users won’t care about internal measurement frameworks or safeguard ladders. This page is intended to communicate responsibility and operational maturity to investors, partners, and stakeholders.
Do you “monitor everything” users do?
No. We focus on proportionate operational signals and abuse patterns to manage risk. Privacy remains central to the product. Safeguards are designed to be measured, auditable, and minimised to what’s necessary.
Do you use AI for abuse detection?
We’re building a structured framework that can support anomaly detection over time. Early stages prioritise accurate attribution, human review for edge cases, and conservative controls. Automation can be introduced once the signal is reliable.
How do you measure whether a growth channel is working?
We look at conversion quality (activation and retention), cost discipline (test sizes and budgets), and downstream outcomes (paid conversion where applicable). Reach and impressions are not treated as success metrics.