The Clearview AI Problem

Clearview AI claims to hold a database of three billion images, along with associated personal data, scraped from Facebook, YouTube, Venmo and other websites. There are no national boundaries when it comes to these images: Clearview AI may hold images of you and your family, as well as of citizens across Europe, Asia, Africa and the Americas.

Clearview AI has created an app that allows law enforcement agencies and security companies (for the time being) to upload an image of a ‘person of interest’ and have it compared against the entire Clearview database of images and metadata for the purpose of identification.
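For technically minded readers, identification systems of this general kind typically work by converting each face into a fixed-length numerical ‘embedding’ and then running a nearest-neighbour search over the enrolled database. The Python sketch below is a minimal illustration of that general technique only: the embed function is a hypothetical stand-in for a real face-embedding model, and nothing here reflects Clearview AI’s actual, undisclosed implementation.

    import numpy as np

    def embed(image_bytes):
        # Hypothetical stand-in for a real face-embedding model (e.g. a CNN
        # that maps a face image to a 128-dimensional vector). Here we derive
        # a deterministic pseudo-embedding from the raw bytes purely so the
        # sketch runs end to end.
        rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
        vector = rng.standard_normal(128)
        # Unit-normalise so a dot product equals cosine similarity.
        return vector / np.linalg.norm(vector)

    def identify(probe, database, threshold=0.6):
        # Compare the probe embedding against every enrolled embedding and
        # return the best match above the acceptance threshold, if any.
        best_name, best_score = None, threshold
        for name, enrolled in database.items():
            score = float(probe @ enrolled)  # cosine similarity
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    # Usage: enrol two identities, then query with a probe image.
    database = {"alice": embed(b"alice.jpg"), "bob": embed(b"bob.jpg")}
    print(identify(embed(b"alice.jpg"), database))  # -> "alice"

Note that the acceptance threshold governs the trade-off between false matches and missed matches, which is precisely where the questions of proportionality and oversight discussed below begin to bite.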

Even though the app was launched only in the United States, the company’s methods of operation and its quick adoption by law enforcement agencies have sent chills down the spines of communities worldwide. News outlets in Australia were also quick to report on the story and issue dire warnings about what the app means for Australians: the invasion and loss of privacy.

The publicity surrounding Clearview AI has been immense. Public policy and privacy experts now regularly debate the applications and implications of facial recognition technology. This is a conversation worth having, even though the tone of much coverage leans towards negative impacts and connotations (without suggesting balance or any solutions), because technology does change the way we interact with society and therefore the very fabric of our day-to-day lives.

Clearview AI has elevated the discussion surrounding facial recognition into the stratosphere because of its clear lack of ethics and transparency, as demonstrated through its data mining of three billion images. As Kashmir Hill of The New York Times put it, “Clearview AI have gone where Governments, Google and Facebook have been scared to go”, and with good reason.

Without any public or political scrutiny, more than 600 law enforcement agencies in the U.S. signed up to the app in 2019. Clearview AI’s investors claim that the technology and its database have incredible crime-fighting capabilities, and that all criticism is therefore moot, outweighed by the ‘benefits’1. American law enforcement is enamoured with the app: Indiana State Police identified a man involved in a shooting (filmed by a bystander) within 20 minutes, while Florida police identified a woman who stole a vacuum cleaner and two grills from a hardware store.

Securitisation and the Surveillance State

Securitisation is the notion that an issue can be construed as a threat through a speech act; that is, threats can be socially constructed. Such threats range from traditional military threats, such as troop movements, to non-traditional and non-conventional threats such as migration, economic sanctions and environmental degradation (climate change). A securitising speech act follows a specific rhetorical structure: survival, urgency, and threat and defence2.

There are three steps in the securitisation speech act:

  • A claim that the referent object (the object of reference) is existentially threatened (survival)
  • A demand for extraordinary measures to counter the threat (urgency)
  • Convincing an audience (usually the public) that rule-breaking behaviour is justified (threat and defence)3.

Facial Recognition and Securitisation

The benefits of using facial recognition to help solve crimes are obvious. The issue is not with solving crime, or even with facial recognition technology itself, but rather with the potential securitisation of all crime through the mining of personal and sensitive data. Clearview AI, and law enforcement’s unrestricted access to and use of this data, is a present-day example4.

The lack of any oversight, or of ethical and legal boundaries, allows law enforcement to use citizens’ personal and sensitive data for everything from minor infringements, such as traffic violations (speeding, broken tail lights and the like), to the most serious crimes, such as murder. There is no proportionality, let alone justification, between the crime itself and the methods used to collect intelligence on the offender.

In security and surveillance discourse there is a constant battle between two values: the security of a way of life on one hand, and the security of life itself on the other. As Sun-ha Hong of the Surveillance Studies School of the Massachusetts Institute of Technology so eloquently states:

“… both sides of the debate agree that some form of security is necessary for our societies to function well and ‘freely’. The security of way of life asks: how will you protect your ability to live a ‘free’ life, ensure your rights and privacy alongside liberal values of freedom of expression and individual property (including one’s own body and its information)? The security of life alternatively posits and asks: how will you protect your body and property from physical harm and theft?

Traditionally the latter position and argument trumps all arguments to uphold the sanctity of life, however, we know from security discourse pertaining to risk that the latter cannot be totally achieved unless it is prioritised above every other value and good in our society at all times, and unless we resign ourselves to martial law, allowing it to govern every aspect of our lives”5.

In the battle between the security of a way of life and the security of life itself, balance can be achieved. The securitisation of an issue as an existential ‘threat’ is not a given, nor inherent to the issue; it requires public acceptance (particularly in a liberal democracy). This article is not advocating that law enforcement bodies be denied the use of, or access to, facial recognition technology. Every new technology has its positives and negatives, as well as its risks. Trade-offs and risks can be managed and minimised so that advances in technology are utilised in the best ways possible.

The Potential for Law Enforcement to ‘Securitise’ Crime

If we securitise crime and give law enforcement extended powers to use and view citizens’ personal and sensitive data on any occasion and for any reason, the focus moves from crime prevention in society (a fundamental principle of modern policing) to reactive policing. Countering all criminal risk becomes a necessity, permitting far-reaching intelligence methods by law enforcement. Criminality is then expected and becomes the new ‘norm’. In this version of reality, we all become potential criminals and potential risks to others.

I have no doubt that law enforcement agencies will argue that current levels of crime in the United States must be met with more powerful intelligence-gathering tools. They would facilitate this through Clearview AI’s unethical, and potentially illegal, sourcing of images and metadata to clear criminal cases and to protect society from its own threat of criminality. Should government bodies begin to question and set legal boundaries on the use of the Clearview AI app, law enforcement would enact the speech act, although the three steps of securitisation would not be chronological in this instance.

Law enforcement agencies and government bodies ordinarily operate within strict boundaries on when and how personal data can be collected, used and managed, with oversight bodies and accountability measures implemented at the various data-sharing junctures. Clearview AI has bypassed this ‘government red tape’ by going where no government or company has gone before and, in some cases, beyond the bounds of law. The core problem, therefore, is not facial recognition technology itself, but law enforcement’s implementation and use of Clearview AI’s app (without oversight or transparency) and the potential to securitise their function in combating crime. The ethical and legal questions regarding Clearview AI’s own database and operations are another matter altogether.

Facial recognition is a powerful tool in the hands of law enforcement, and it should be utilised to benefit society. A balance between the security of a way of life and the security of life itself can be achieved, even with facial recognition in the hands of law enforcement. It cannot be achieved in the way Clearview AI’s app has been adopted. Without regulation, oversight of law enforcement and ethical standards (including for Clearview AI itself), we have begun the process of securitising crime in society, giving way to unbridled intelligence and surveillance tactics for law enforcement to use.

Do we allow the securitisation of all crime, including infringements that society agrees do not constitute criminality (traffic infringements, property damage or petty theft such as train fare evasion), and accept the extent of the information law enforcement holds on our lives thanks to the Clearview AI app and database? That question is left to the public to determine, for better or for worse.

References

  1. One such investor is David Scalzo, founder of Kirenaga Partners, who dismisses concerns about Clearview making the internet searchable by face, calling it a valuable crime-solving tool: “I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.” Kashmir Hill, The New York Times, ‘The Secretive Company That Might End Privacy as We Know It’, published 18 January 2020, https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.
  2. To read more about the securitisation process and the Copenhagen School of security studies, see Rens van Munster’s Oxford Bibliographies introduction, published June 2012: https://oxfordindex.oup.com/view/10.1093/obo/9780199743292-0091.
  3. Ibid.
  4. When securitisation is achieved, governing bodies, agencies and organisations are permitted to use extraordinary measures: measures and powers that would otherwise not be allowed or considered legitimate. Access to and use of Clearview AI’s app, and therefore its database, without any restrictions whatsoever can be argued to constitute such ‘extraordinary measures’.
  5. Sun-ha Hong, ‘Criticising Surveillance and Surveillance Critique: Why Privacy and Humanism are Necessary but Insufficient’, Surveillance & Society 15 (2): 187-203, 2017, https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/critic


Sylvia Jastkowiak
Senior Business Analyst, Business Technology Architecture
sylvia.jastkowiak@nec.com.au