Mistrust & Surveillance

Introduction

Trust is a primary constituent of the relational dynamic of most surveillance systems, and issues of trust are often the catalyst for implementing one. In the three case studies that follow, we identify and analyze the underlying issues, such as absence of consent, unchecked law enforcement use, profitability, and ethics, that prevail in the realm of mistrust and surveillance.

Case Studies

Clearview AI

The unchecked use of facial recognition by law enforcement

Cambridge Analytica

Profiting from personal data and gaps in privacy law

Amazon Ring

Consent & mishandling of data

Assessment & Mitigation

  • Self-regulation

Various companies have begun to self-regulate by setting up their own "AI ethics" initiatives, which range from academic research (as in the case of Google-owned DeepMind's Ethics & Society division) to formulating guidelines and convening external oversight panels.


  • Algorithmic bias

Systemic bias against protected classes can lead to collective, disparate impacts, which may form the basis for legally cognizable harms such as denial of credit, online racial profiling, or mass surveillance. MIT researcher Joy Buolamwini found that the algorithms powering three commercially available facial recognition systems were failing to recognize darker-skinned complexions. In response to her facial-analysis findings, both IBM and Microsoft committed to improving the accuracy of their recognition software for darker-skinned faces. A sketch of the kind of subgroup audit behind such findings appears after this list.


  • Big tech regulation policies and toolkits

Big tech firms such as Amazon, IBM, and Microsoft have placed restrictions on sales of facial recognition tools and called for federal regulation. They have also adopted ethical AI principles and published toolkits for responsible use.
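
To make the subgroup-audit idea referenced above concrete, here is a minimal Python sketch in the spirit of Buolamwini's methodology: compare misclassification rates across demographic groups. The function name, group labels, and sample records are illustrative assumptions, not data or code from any cited study.

    # Sketch of a subgroup accuracy audit: compare misclassification rates
    # across demographic groups. All records below are made-up placeholders.
    from collections import defaultdict

    def error_rates_by_group(records):
        """records: iterable of (group, true_label, predicted_label) tuples.
        Returns a dict mapping each group to its misclassification rate."""
        totals = defaultdict(int)
        errors = defaultdict(int)
        for group, truth, prediction in records:
            totals[group] += 1
            if prediction != truth:
                errors[group] += 1
        return {g: errors[g] / totals[g] for g in totals}

    # Hypothetical audit sample: (skin-type group, true label, predicted label)
    sample = [
        ("lighter", "male", "male"), ("lighter", "female", "female"),
        ("lighter", "male", "male"), ("lighter", "female", "female"),
        ("darker", "female", "male"), ("darker", "female", "female"),
        ("darker", "male", "male"), ("darker", "female", "male"),
    ]

    for group, rate in sorted(error_rates_by_group(sample).items()):
        print(f"{group}: {rate:.0%} error rate")
    # A wide gap between groups is exactly the disparity such audits expose.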


The Way Forward

Robust federal legislation

The US has no specific federal legislation regulating the use of facial recognition technology in surveillance. Without clear policies, we are operating under a "just trust us" model, a standard of accountability we accept almost nowhere else in society.

Policymakers and the tech-literate must work together to outline the ethical use of facial recognition technologies and develop a legal framework to hold companies accountable.

Working to eradicate algorithmic bias

Studies have shown that algorithmic bias can result in wrongful arrests and discrimination against women and ethnic minorities.

New laws and regulations should require commercial facial recognition products to be tested by third parties for accuracy and unfair bias, as sketched below.
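
Here is a minimal Python sketch of the kind of pass/fail gate a third-party audit might apply: the worst-served group's error rate must clear a ceiling, and the gap between the best- and worst-served groups must stay small. The thresholds and function name are illustrative assumptions, not values from any proposed regulation.

    # Hypothetical audit gate: fail any system whose worst-group error rate
    # or cross-group error gap exceeds the (assumed) regulatory thresholds.
    def passes_audit(error_rates, max_error=0.10, max_gap=0.05):
        """error_rates: dict mapping group name -> error rate in [0, 1]."""
        worst = max(error_rates.values())
        gap = worst - min(error_rates.values())
        return worst <= max_error and gap <= max_gap

    print(passes_audit({"lighter": 0.01, "darker": 0.02}))  # True: accurate and even
    print(passes_audit({"lighter": 0.01, "darker": 0.35}))  # False: large disparity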

Requiring transparency

Currently, one in four police departments in the US has access to facial recognition tools, but the public has little knowledge of when or how the technology is used.

There may be benefits to law enforcement's use of facial recognition, but without transparency or accountability, the public's mistrust will only intensify.