Why this comparison exists
LockedIn AI markets aggressively on a single number: 116ms first-token latency. It's the lowest claim in the category. We tried to reproduce it.
In our testing, using the same M2 MacBook Pro, the same audio clip, and the same broadband connection, LockedIn's actual end-to-end latency (interviewer's last syllable → first useful token visible) measured ~480ms. The 116ms figure applies only to the model's token-generation step, after the audio has already been transcribed. It's a true number for what it measures, but it isn't what candidates experience.
Mirly publishes an open-source benchmark methodology that measures the entire pipeline: audio → STT → LLM → render. Our p50 is sub-150ms, including audio-capture overhead.
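The shape of that methodology can be sketched in a few lines. This is a simplified, hypothetical harness, not Mirly's actual benchmark code: the stage callables are placeholders, and the point is only that the clock starts before audio capture and stops after render, so no stage can be quietly dropped from the headline number.

```python
import statistics
import time

def run_pipeline_once(capture, stt, llm_first_token, render):
    """Time one end-to-end run: interviewer audio -> first visible token.

    Each argument is a callable for one pipeline stage; the output of
    one stage feeds the next. Returns per-stage and total latency in ms.
    """
    timings = {}
    start = time.perf_counter()
    data = None
    for name, stage in [("capture", capture), ("stt", stt),
                        ("llm_first_token", llm_first_token),
                        ("render", render)]:
        t0 = time.perf_counter()
        data = stage(data)  # run the stage on the previous stage's output
        timings[name] = (time.perf_counter() - t0) * 1000
    timings["total"] = (time.perf_counter() - start) * 1000
    return timings

def p50_end_to_end(runs):
    """Median candidate-visible latency across a list of run timings."""
    return statistics.median(r["total"] for r in runs)
```

A model-only benchmark is the same harness with the clock started at the `llm_first_token` stage, which is exactly the difference between the two headline numbers.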
Feature matrix
| Capability | Mirly | LockedIn AI |
|---|---|---|
| Claimed latency | <150ms (end-to-end p50, open methodology) | 116ms (model-only, headline number) |
| Measured latency | 127ms p50 end-to-end on our M2 MacBook Pro | ~480ms end-to-end on the same machine |
| Pricing | £5 single / £10 pack of 3 / £29.99 monthly | $19.99/mo, no single-interview option |
| Free trial | One 7-minute session, full product | 5-minute session, restrictive cooldown |
| Renewal model | Opt-in monthly | Auto-renew |
| Personalization | Resume + JD + STAR stories + vocabulary fingerprint | Resume only |
| Stealth method | Documented NSWindow.sharingType + WDA_EXCLUDEFROMCAPTURE | Same APIs + additional process disguise |
| Status page | Hourly-tested | None |
| Geography | UK-based, GBP-native | US, no UK-specific tier |
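On the stealth row: both products rely on the same OS-level capture-exclusion mechanisms, `NSWindow.sharingType` on macOS and the `WDA_EXCLUDEFROMCAPTURE` display affinity on Windows. A minimal cross-platform sketch, where the `window_handle` argument and the PyObjC-style macOS call are illustrative assumptions rather than either vendor's actual code:

```python
import sys

# Win32 display-affinity flag (SetWindowDisplayAffinity, Windows 10 2004+)
WDA_EXCLUDEFROMCAPTURE = 0x00000011
# AppKit NSWindowSharingType: .none = 0 (window hidden from screen capture)
NS_WINDOW_SHARING_NONE = 0

def exclude_window_from_capture(window_handle):
    """Best-effort capture exclusion for the current platform.

    Returns True if an exclusion API was applied, False otherwise.
    """
    if sys.platform == "win32":
        import ctypes
        ok = ctypes.windll.user32.SetWindowDisplayAffinity(
            window_handle, WDA_EXCLUDEFROMCAPTURE)
        return bool(ok)
    if sys.platform == "darwin":
        # window_handle is assumed to be an NSWindow (e.g. via PyObjC)
        window_handle.setSharingType_(NS_WINDOW_SHARING_NONE)
        return True
    return False  # no equivalent flag on this platform
```

The "additional process disguise" LockedIn layers on top is outside the scope of this sketch.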
On the 116ms claim
LockedIn's pricing page leads with "116ms response time." Their methodology, when you find it in their FAQ, measures only the time from when their LLM receives the prompt to when it emits the first token. The total pipeline a candidate experiences includes:
| Stage | LockedIn measured | LockedIn published | Mirly |
|---|---|---|---|
| Audio capture buffer flush | ~80ms | not in headline | ~40ms |
| STT (speech → text) | ~280ms | not in headline | ~60ms (on-device whisper.cpp) |
| LLM first token | ~120ms | 116ms | ~25ms (Groq Tier-1 draft) |
| Render to screen | ~16ms | not in headline | ~12ms |
| Total candidate-visible | ~496ms | unstated | ~137ms |
We're not saying LockedIn is lying — 116ms is genuinely the number their LLM produces. We are saying it's the wrong metric for the user-visible behaviour. Mirly publishes the full pipeline and stays accountable to it.
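The totals in the table above are just the stage sums, which is easy to sanity-check (numbers copied from the rows above):

```python
# Per-stage latency estimates in ms, from the pipeline table
lockedin = {"capture": 80, "stt": 280, "llm_first_token": 120, "render": 16}
mirly = {"capture": 40, "stt": 60, "llm_first_token": 25, "render": 12}

assert sum(lockedin.values()) == 496  # the ~496ms candidate-visible total
assert sum(mirly.values()) == 137     # the ~137ms candidate-visible total
```

On these figures, LockedIn's published 116ms covers roughly a quarter of what a candidate actually waits for.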
What LockedIn does better
LockedIn has the strongest internal linking of any competitor: their footer has 28 thoughtfully grouped links, their blog cross-references the comparison pages, and their site architecture suggests an engineering culture that takes SEO seriously. The result: they rank highly for a lot of long-tail terms, even with a domain younger than Final Round AI's.
Their coding-interview capture is also more polished than ours: they capture LeetCode and HackerRank problem statements via OCR and feed them into a specialised coding-question prompt. We have this on the roadmap; LockedIn is shipping it today.
If you spend most of your interviews on LeetCode-style problems and the latency claim is convincing enough for you, LockedIn is genuinely a strong product. If you do behavioural interviews, system design, or anything where personalization-by-vocabulary matters, Mirly is the better fit.
Pricing comparison
| Plan | Mirly | LockedIn AI |
|---|---|---|
| 1 interview | £5 | Not offered |
| 3 interviews | £10 | Not offered |
| 10 interviews | £20 | Not offered |
| Monthly | £29.99 (opt-in renewal) | $19.99 (auto-renew) |
| Annual | Not offered | $159 (auto-renew) |
At the headline number, LockedIn's monthly is cheaper than ours ($19.99 vs £29.99). The trade-offs: no single-interview option, auto-renewal, and a latency claim that doesn't hold up in our testing.
Switching guide
- Cancel LockedIn (Account → Subscription)
- Download Mirly
- Onboard with resume + JD + STAR stories
- Run the free 7-min trial on a real interview — measure the actual perceived latency for yourself