"It is deeply disturbing and undemocratic that police are using a technology that is nearly entirely inaccurate", Ms. Carlo said.
Adding real-time facial recognition to our surveillance state's already worryingly militaristic arsenal would fundamentally change policing in the United Kingdom, and indeed the health of our democracy.
Dancers at a Caribbean carnival in a west London street, peaceful protesters at a lawful demonstration against an arms fair, and citizens and veterans paying their respects to war dead on Remembrance Sunday - these people have all been targeted by the police's new authoritarian surveillance tool invading our public spaces: automated facial recognition.
Police have been rolling out the software at major events such as sporting fixtures and music concerts, including a Liam Gallagher concert and international rugby matches, aiming to identify wanted criminals and people on watch lists. But when a system is asked to pick a handful of wanted faces out of a crowd of thousands, it may not only miss its targets but also flag large numbers of innocent people, producing a high false positive rate.
Automated facial recognition (AFR) technology used by London's Metropolitan Police is designed to find persons of interest within large groups of people by comparing the biometrics of attendees caught on camera with records already stored on law enforcement databases.
However, the Met Police claimed that the 95% false positive figure is misleading because there is human intervention after the system flags a match.
Through 50 freedom of information requests, Big Brother Watch (BBW) discovered that, on average, a staggering 95% of the facial recognition system's "matches" were actually wrongly identifying innocent people. Addressing false positives, the force said: "We do not consider these as false positive matches because additional checks and balances are in place to confirm identification following system alerts".
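Figures like these are less surprising than they first appear: when the people on a watch list are a tiny fraction of the crowd being scanned, even a reasonably accurate matcher will generate mostly false alerts. The sketch below illustrates that base-rate arithmetic with purely hypothetical numbers (crowd size, watch-list size and error rates are illustrative assumptions, not figures from the BBW report):

```python
# Illustrative base-rate sketch: why a nominally accurate face matcher
# can still produce mostly false matches when scanning a large crowd.
# All numbers below are hypothetical assumptions, not report figures.

crowd = 100_000       # faces scanned at an event (assumed)
on_watch_list = 20    # people in the crowd who are actually wanted (assumed)
tpr = 0.90            # true positive rate: chance a wanted face is flagged (assumed)
fpr = 0.001           # false positive rate per innocent face scanned (assumed)

true_alerts = on_watch_list * tpr               # expected correct alerts
false_alerts = (crowd - on_watch_list) * fpr    # expected alerts on innocent people
share_false = false_alerts / (true_alerts + false_alerts)

print(f"Expected true alerts:  {true_alerts:.0f}")
print(f"Expected false alerts: {false_alerts:.0f}")
print(f"Share of alerts that are wrong: {share_false:.0%}")
```

With these assumed numbers, roughly 85% of all alerts would point at innocent people, even though only one face in a thousand is misflagged.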
IT Pro has approached both the Met and South Wales Police for comment. A Met Police spokesperson said that all alerts against its watch list are deleted after 30 days, and that faces which do not generate an alert are deleted immediately.
"Officers can quickly establish if the person has been correctly or incorrectly matched by traditional policing methods, either by looking at the person or through a brief conversation", a spokesperson said.
South Wales Police not only misidentified 2,400 innocent people with facial recognition, it also stored these people's biometric data, without their knowledge, for a year.
Police records suggest the technology is grossly unreliable, however, and authorities who continue using AFR risk potentially violating British privacy laws, according to Big Brother Watch, a nonprofit civil liberties group that released the report.
Underlying the concerns about the poor accuracy of the kit are complaints about a lack of clear oversight - an issue that has been raised by a number of activists, politicians and independent commissioners in related areas.
Biometrics Commissioner Paul Wiles also called for more regulation around the use of such systems: "In terms of governance, technical development and deployment is running ahead of legislation and these new biometrics urgently need a legislative framework, as already exists for DNA and fingerprints".
Further details are expected in the long-awaited biometrics strategy, which is slated to appear in June. The force said the images were only stored as part of an academic evaluation for UCL, and not for any policing objective.
The group had little patience with this, stating in the report that the government should provide funding for administrative staff to deal with the problem: one person per force, employed for a full year at £35,000, would come to a total of £1.5m, it said.