Big Brother 2.0: How AI-driven policing threatens civil liberties

In the pursuit of security, how much freedom are the people willing to surrender to the hands of government?

George Orwell's dystopian vision of an omnipresent surveillance state in the novel 1984 once seemed like exaggerated fiction. Today, the rapid integration of artificial intelligence into predictive policing has brought us alarmingly close to Orwell's nightmare, albeit with algorithmic surveillance rather than telescreens as the means of control.

The National Capital Territory (NCT) Budget 2025-26 notes that the Delhi government has installed 280,000 cameras and is gearing up to install 50,000 more across the NCT. The Public Works Department (PWD), responding to a query raised in the Delhi Legislative Assembly, disclosed that 32,000 of these cameras are non-functional. A press release of the Ministry of Law and Justice dated February 25, 2025 highlights the integration of Artificial Intelligence (AI) into crime detection, surveillance and criminal investigation. As per reports, the Andhra Pradesh Police is set to deploy AI tools in all police stations across the State.

Proponents argue that intelligent, technology-driven predictive policing and facial recognition technology bolster public safety; yet the unchecked expansion of technological surveillance poses a profound threat to civil liberties, procedural due process and the foundational ethos of democratic governance. AI-driven policing, marketed as a tool for police efficiency and crime prevention, operates with minimal transparency, accountability or legal safeguards. According to a 2021 study by the Internet Freedom Foundation (IFF), more than 40% of the AI systems used in government operations in India have documented biases, yet none has been audited or rectified.

The rise of AI in law enforcement

The integration of AI in policing is not merely an incremental upgrade; rather, it is a paradigm shift in governance. Intelligent policing encompasses a range of technologies, including Predictive Policing Algorithms, Facial Recognition Technology (FRT), Automated License Plate Readers (ALPRs), Social Media Monitoring Tools, etc.

India is aggressively deploying facial recognition across jurisdictions, from the Automated Facial Recognition System (AFRS) to Punjab's PAIS (Punjab Artificial Intelligence System), scanning millions of faces without the consent of those being scanned. The trouble with facial recognition systems is their uneven accuracy: studies have recorded error rates as low as 0.8 per cent for light-skinned men and as high as 34.7 per cent for dark-skinned women.

Machine learning tools trained on historical data risk perpetuating and amplifying systemic biases, disproportionately ensnaring marginalised communities. Automated surveillance with minimal transparency grants the government unprecedented power to track, analyse and anticipate individual behaviour. The deep penetration of intelligent technology controlled by the government blurs the line between public safety and pervasive social control. With the expansion of AI-driven policing, the questions arise: Who governs the algorithms that govern us? And in the pursuit of security, how much freedom are the people willing to surrender to the hands of government?

Constitutional and legal safeguards against intrusive surveillance

India's adoption of AI-driven policing technologies presents a constitutional dilemma: balancing the people's security against their civil liberties. The Supreme Court in Justice KS Puttaswamy v. Union of India (2017) recognised the right to privacy as an intrinsic component of the right to life and personal liberty. The rights-based approach embodied in the Indian Constitution ensures the fulfilment of human rights in all aspects of social and economic progress.

In Puttaswamy, the Court established a three-fold test for state surveillance. The test includes legality (clear statutory backing), legitimate state aim (proportionality) and procedural safeguards (against arbitrariness). The rationale for the test is that action infringing the right to privacy must be authorised by law (legality), the state must have a valid and justifiable reason for undertaking the action (national security, prevention of crime, or social welfare) and the action must be proportionate to the legitimate aim. Essentially, the triple test ensures that state action must not be excessive, arbitrary or unreasonable in impacting the right to privacy of an individual.

On the one hand, AI policing tools such as automated facial recognition and social media monitoring operate in legal ambiguity; on the other, the Digital Personal Data Protection Act, 2023 is yet to be operationalised. Together, these gaps create a vacuum in which AI-driven intrusions escape judicial oversight.

In People’s Union for Civil Liberties (PUCL) v. Union of India (1997), the Supreme Court of India mandated that telephonic surveillance requires prior approval from the Home Secretary with a time-bound review mechanism and destruction of irrelevant data. However, AI-driven policing circumvents these established safeguards.

Facial recognition systems scan crowds without warrants, predictive policing algorithms label individuals as "potential offenders" without following due process, and social media monitoring tools bypass judicial scrutiny under anti-terror laws. This also contradicts the principles laid down in Maneka Gandhi v. Union of India (1978), where it was held that "any state action depriving liberty must be fair, just, and reasonable".

The Supreme Court struck down Section 66A of the Information Technology Act, 2000 in Shreya Singhal v. Union of India (2015). Yet the spirit of Section 66A survives under other provisions of the IT Act, 2000 and the Unlawful Activities (Prevention) Act, 1967 (UAPA).

Mass data harvesting and indiscriminate surveillance chill dissent. They also conflict with Article 19(1)(a) and the Supreme Court's ruling in Anuradha Bhasin v. Union of India (2020), which held that "indiscriminate state surveillance stifles democratic discourse."

Global comparisons and lessons

Article 5 of the EU AI Act (2024) prohibits real-time remote biometric identification in publicly accessible spaces for law enforcement, permitting it only in narrow circumstances such as terrorist threats and searches for victims of serious crimes. The United States lacks a federal law, but San Francisco banned facial recognition technology in 2019, citing racial bias; Boston, Cambridge, Springfield and Portland have followed. New York City passed the Public Oversight of Surveillance Technology (POST) Act (2020), requiring police to publicly disclose their surveillance tools.

The path forward: Regulation and accountability

India must enact a moratorium on unregulated AI policing tools, strengthen judicial oversight, ensure algorithmic transparency and combat bias (data, algorithmic and human). At present, there is a legal vacuum in Indian technology law: India has no specific framework governing AI. Regulation often follows innovation, but disruptive technologies require precautionary regulation to tackle unforeseen situations.

The use of AI in policing rings an alarm bell. The concerns raised by this novel technological integration touch human rights, equality and civil liberties, and in a state of techno-legal ambiguity and vacuum, they are legitimate. It also strikes at the very fabric of the social contract, under which people surrendered 'certain but limited' rights in exchange for state-ensured protection of life and property. The hands of the state, equipped with disruptive technology, threaten the right to privacy and human dignity, which is inseparable from the constitutionally guaranteed right to life.

AI-driven policing in India, if left unchecked, threatens to erode democracy under the pretext of security. India needs to adopt ex-ante regulation rather than post-hoc remedies, on the model of the EU AI Act (2024). Without urgent techno-legal reforms, India risks becoming a surveillance panopticon in which every citizen is perpetually monitored by untiring machine agents and civil liberties buckle under the onslaught of algorithmic efficiency. This is the time to ensure accountability, before Big Brother 2.0 becomes irreversible.

Pranjal Chaturvedi and Ruchika Kumari are Doctoral Research Fellows at Bennett University (Times of India Group).

Bar and Bench - Indian Legal news
www.barandbench.com