Context
London is moving into a new phase of facial recognition in policing: a pilot where officers can use handheld tools to check faces against watchlists during interactions. Supporters say this could reduce wrongful arrests, speed up identity checks, and help find people who are wanted or missing. Critics warn it may normalize face scanning in public spaces, expand stop-and-search in practice, and deepen trust issues if oversight, accuracy, and safeguards are not strong enough.
The issue is not simply whether the technology is good or bad. It’s about what is acceptable in day-to-day life, what rules constrain use, and who is accountable when mistakes happen. This proposal is a structured way to decide what London should ask for next: a pause, a tightly limited pilot, or a broader expansion tied to clear conditions.
What is being decided
Two things. First, the policy direction for operator-initiated facial recognition checks. Second, the guardrails London should insist on, whichever direction is chosen. The goal is a decision that is understandable, enforceable, and reviewable over time.
Key questions
- When should face scanning be allowed, and when should it be off-limits?
- What level of independent oversight is needed, and with what powers?
- How should accuracy, bias, and error-handling be tested and reported?
- What happens to data for non-matches, and how is that audited?
Use comments to suggest concrete limits, for example “only for violent-crime watchlists”, “no use for low-level identity checks”, “public reporting every month”, or “independent approval before scaling”.