Ukraine's defense ministry announced that it is now using facial recognition technology from an American startup to combat misinformation, identify the dead, and expose Russian assailants. The technology, which works like a search engine for faces, aggregating images from millions of social media users across the open web, has previously stirred controversy over privacy complaints.
After the war broke out, the American artificial intelligence company Clearview reached out to Ukraine's government and offered its services free of charge. This week the collaboration became official, and Clearview's facial recognition technology is now reportedly being used for security purposes, such as vetting people of interest at checkpoints.
Clearview claims to have amassed a database of over 10 billion photos posted publicly on the internet, scraped from sites like Facebook, Instagram, Flickr, and Getty Images. Its AI then matches faces against this database to provide rapid recognition. The tool also includes enhancement features that clean up low-resolution photos and can even generate younger or older renderings of a face, which could be matched against childhood photos. Many of the American startup's clients are in law enforcement, where it has proven an invaluable policing tool. The Federal Bureau of Investigation, Immigration and Customs Enforcement, and the Fish and Wildlife Service are among the dozen or so U.S. agencies known to have used Clearview so far.
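Clearview has not published the details of its matching pipeline, but face search engines of this kind are generally understood to convert each face into a numerical embedding and compare embeddings rather than raw pixels. The sketch below is a minimal illustration of that general approach, using cosine similarity over a hypothetical embedding database; the function names, the 512-dimensional embedding size, and the random toy data are assumptions for illustration, not Clearview's actual implementation.

```python
import numpy as np

# Illustrative only: a real system would compute embeddings with a trained
# face-recognition model. Here we assume 512-dimensional vectors already exist.
EMBEDDING_DIM = 512

def cosine_similarity(probe: np.ndarray, database: np.ndarray) -> np.ndarray:
    """Cosine similarity between one probe vector and each row of a database matrix."""
    probe = probe / np.linalg.norm(probe)
    database = database / np.linalg.norm(database, axis=1, keepdims=True)
    return database @ probe

def search_faces(probe: np.ndarray, database: np.ndarray, top_k: int = 5):
    """Return the indices and scores of the top_k most similar database entries."""
    scores = cosine_similarity(probe, database)
    best = np.argsort(scores)[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in best]

# Toy example: 1,000 random "enrolled" faces and one noisy probe of entry 42.
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, EMBEDDING_DIM))
probe = database[42] + 0.05 * rng.normal(size=EMBEDDING_DIM)
print(search_faces(probe, database))  # entry 42 should rank first
```

In practice, a database of billions of photos would need an approximate nearest-neighbor index rather than the brute-force comparison shown here, but the principle of ranking candidates by embedding similarity is the same.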
Two billion photos in this massive profiling database were scraped from VKontakte, Russia's largest social media network. Using the tool, Ukrainian military officials can scan a person's face and, if that person has a VKontakte profile, obtain an identification within seconds. The tool may be particularly useful for identifying fallen soldiers far more quickly than fingerprint matching. Facial recognition appears to work even when there is facial damage, although a U.S. Department of Energy report found that the technology's effectiveness drops sharply for decomposing bodies.
The same tech could be used to identify covert Russian operatives posing as Ukrainian civilians, help Ukraine debunk false social media posts spreading war propaganda, and help reunite refugees separated from their families.
But although very powerful, Clearview's facial recognition could be a double-edged sword, one that could lead to avoidable tragedies. The technology doesn't always return a perfect match, which could lead to misidentification at checkpoints, potentially claiming innocent lives. Clearview claims a 99% accuracy rate, but this can't be independently verified and is likely a gross overstatement. Double-checking against other sources of intelligence would be needed to catch false positives, but in the fog of war that may be an unrealistic expectation. In its defense, Clearview says that its users in Ukraine have received training and must enter a case number and a reason before every search.
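Clearview has not described how that gating or any match thresholding is implemented, so the snippet below is only a hypothetical sketch of how a deployment might refuse unlogged queries and treat borderline similarity scores as inconclusive; the SearchRequest fields and the 0.85 threshold are assumed values chosen for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical audit record; Clearview's actual logging format is not public.
@dataclass
class SearchRequest:
    case_number: str
    reason: str
    operator_id: str

# Assumed threshold: raising it reduces false positives at the cost of missed matches.
MATCH_THRESHOLD = 0.85

def gated_search(request: SearchRequest, similarity_score: float) -> str:
    """Reject queries without audit metadata and flag near-threshold scores as inconclusive."""
    if not request.case_number or not request.reason:
        raise ValueError("A case number and a reason are required before every search.")
    audit_line = f"{datetime.now(timezone.utc).isoformat()} {request.case_number} {request.operator_id}"
    print("audit:", audit_line)  # stand-in for writing to a persistent audit log
    if similarity_score >= MATCH_THRESHOLD:
        return "possible match - corroborate with independent intelligence"
    return "no reliable match"

print(gated_search(SearchRequest("UA-2022-0417", "checkpoint vetting", "op-07"), 0.91))
```

The threshold in this sketch is the crux of the accuracy problem described above: set it too low and false positives slip through; set it too high and genuine matches are missed, which is why independent corroboration matters either way.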
Critics have also voiced concerns that Clearview is performing mass surveillance, some of it possibly illegal. Canada and France have found that the kind of online photo scraping Clearview employs breaches their privacy laws, and the UK and Australia have deemed the practice unlawful as well. In the U.S., Clearview is battling a number of lawsuits that could soon force the company to switch gears. For now, however, its technology is being used in battered Ukraine.