Author: Sri Gugan S., Government Law College Salem affiliated to The Tamil Nadu Dr. Ambedkar Law University Chennai, Tamil Nadu
ABSTRACT
The journey from the first telephone patent in 1876 to the modern smartphone was not a single event but a continuous process. Similarly, Artificial Intelligence (AI) is expected to bring a complete revolution across various sectors, especially in the realm of individual privacy. As we move toward a digital world, it is important to note that India’s population stands at 1.46 billion, and according to the Internet and Mobile Association of India (IAMAI), the country is projected to exceed 900 million internet users by 2025.
The evolution of privacy in India has a rich history, shaped by several landmark judgments of the Supreme Court delivered in response to the changing dimensions of society. The Digital Personal Data Protection Act, 2023 has been enacted by the Indian Parliament. However, when compared with international privacy laws, India still lags behind in implementing comprehensive frameworks.
Although India has laws like the DPDP Act, 2023, and the IT Act, 2000, these do not sufficiently address the challenges posed by rapidly advancing AI technologies. There is a risk that such technologies may be misused by individuals and companies. The Government of India has allocated INR 20,000 crores in the Union Budget 2025–2026 for AI research and development.
However, in my view, a specialized legal framework must be implemented to regulate AI applications and protect individual privacy. In the digital era, where privacy is recognized as a fundamental right, the need for strong and specific legislation is critical. As we move toward a more connected world, it is essential to analyze these developments and push for robust legal mechanisms to protect citizens’ rights.
Keywords:
Artificial Intelligence, Privacy, Digital India, DPDP Act 2023, IT Act 2000, Fundamental Rights, Internet Users, AI Regulation, Indian Law, Supreme Court Judgments.
EVOLUTION OF TECHNOLOGY AND PRIVACY
As new technologies have emerged and evolved, concerns about privacy have grown alongside them. When Alexander Graham Bell patented the telephone in 1876, communication leapt beyond letters into the private realm of voice. Each upgrade—from rotary dials to flip-phones and, finally, to sensor-packed smartphones—has narrowed the distance between home life and public networks. Artificial intelligence is tracing the same arc. Rule-based expert systems were the clunky desk sets of the 1980s; today’s self-learning models act like smartphones that silently compute, store, and transmit intimate details. As capability expands, so do the risks to personal autonomy.
Indian constitutional law has evolved in step with these shifts. In Kharak Singh v. State of U.P. the Supreme Court condemned nocturnal police surveillance but stopped short of naming privacy a fundamental right. Govind v. State of M.P. advanced the idea, holding that Article 21’s guarantee of life and personal liberty could encompass “zones of privacy,” yet allowed intrusions justified by a compelling state interest. These two judgments planted doctrinal seeds that would bear decisive fruit four decades later.
That harvest came in Justice K.S. Puttaswamy (Retd.) v. Union of India. A nine-judge bench of the Supreme Court unanimously ruled that privacy is a core part of dignity, freedom, and personal choice, and declared it a fundamental right under Article 21 of the Constitution. The Court also recognized the growing impact of the digital age and cautioned that our constitutional values must continue to be protected, even in a world shaped by algorithms and technology.
The 2019–20 internet blackout in Jammu & Kashmir quickly put this idea to the test. In Anuradha Bhasin v. Union of India, the Supreme Court ruled that internet access is now integral to free expression, and any shutdown must be justified by strict standards of necessity and proportionality. Although not solely a privacy action, Bhasin applied Puttaswamy’s standards to real-time data flows, anchoring digital liberties in constitutional soil.
Policy makers followed the judiciary’s lead. The Srikrishna Committee Report drafted a comprehensive data-protection blueprint centred on user consent, accountability and an independent regulator. Its recommendations informed the Digital Personal Data Protection Act, 2023, yet the committee flagged looming AI hazards—opaque decision-making, bias and mass profiling—that existing laws scarcely confront.
From Kharak Singh’s night-time knock to Puttaswamy’s resounding affirmation, the trajectory is unmistakable: as technology grows more intrusive, the law must grow more protective. Smartphones and AI systems are no longer mere conveniences; they have become portals through which states and corporations can observe, predict and influence citizens. The next frontier—generative AI embedded in everyday appliances—will sharpen this tension. Legislators therefore face a twin obligation: spur innovation that can lift millions while erecting guardrails strong enough to prevent perpetual surveillance and algorithmic discrimination. India’s constitutional narrative points the way; the challenge is to walk it swiftly and wisely.
THE RISE OF ARTIFICIAL INTELLIGENCE IN THE DIGITAL ERA
Artificial Intelligence (AI) has become one of the most transformative forces of the 21st century. From simplifying daily tasks through virtual assistants to revolutionizing industries like healthcare, finance, and law, AI’s rapid growth is reshaping society. In the digital era, machines are no longer just tools—they are becoming decision-makers, often without human intervention. This shift brings not only innovation but also significant challenges, especially in safeguarding personal data and privacy.
India, with a population of 1.46 billion, is emerging as one of the largest digital markets in the world. The Internet and Mobile Association of India (IAMAI) projects that the country will exceed 900 million internet users by 2025. As more individuals come online, personal data becomes increasingly vulnerable to exploitation. AI systems thrive on data, and in the absence of strict regulatory frameworks, sensitive information may be collected, processed, and used in ways that infringe on privacy rights.
Recognizing the importance of AI, the Government of India has allocated INR 20,000 crores in the Union Budget 2025–2026 specifically for AI research and development. This investment signals India’s ambition to become a global leader in AI innovation. However, while the financial support is commendable, it must be matched with robust legal measures to ensure ethical use of technology. Without clear and enforceable safeguards, the very tools meant to improve human life could instead erode basic rights.
As India strides toward a digital future, the urgency to balance AI growth with privacy protection cannot be ignored. The journey ahead demands not only technological progress but also legal foresight to protect individual freedom and dignity in the age of intelligent machines.
EXISTING PRIVACY LAWS IN INDIA AND THE GAPS IN THE AI ERA
India has made notable progress in recognizing and regulating data privacy in recent years. A significant development came with the enactment of the Digital Personal Data Protection (DPDP) Act, 2023, which aims to establish rules for handling digital personal data and ensure that individuals’ information is processed lawfully. The DPDP Act is centered on the principles of consent, purpose limitation, and data minimization. It introduces concepts like “data fiduciaries” and “data principals” to define the responsibilities of data collectors and the rights of individuals.
Another important statute is the Information Technology Act, 2000, which governs electronic communication and data security. It contains provisions related to cybercrime and the protection of sensitive personal data under its rules, such as the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. Additionally, the IT Rules, 2021, under Rule 4(4), mandate that significant social media intermediaries must ensure automated tools do not undermine user rights, especially with regard to content moderation and misinformation. However, these laws primarily focus on data collection and online content, and do not directly address the complexities brought by AI systems.
One of the key gaps in India’s current legal framework is the lack of detailed regulation for AI-generated data processing and automated decision-making. AI tools often operate in opaque ways, using vast datasets to make predictions or decisions without meaningful human oversight.
This raises concerns over bias, profiling, and violation of the expectation of privacy, especially when individuals are unaware that their data is being used in AI training models.
In comparison, international frameworks such as the EU’s General Data Protection Regulation (GDPR) and the EU AI Act impose clear obligations on data controllers using AI, including transparency, explainability, and risk assessments. India, however, does not yet have a dedicated law to regulate AI. While the DPDP Act and IT Act offer partial protection, they fall short of addressing AI-specific threats like algorithmic discrimination, deepfake misuse, or autonomous surveillance.
Therefore, there is a grave need for a special law to deal with AI in the digital era, especially in the context of privacy. As India moves rapidly into the digital age, the law must evolve to uphold the constitutional guarantee of privacy, especially in the age of intelligent machines.
KEY SUGGESTIONS
Recognize AI as a Distinct Regulatory Category: Artificial Intelligence is not just another form of software. It functions independently, learns from data, and makes decisions that can deeply affect individuals. Therefore, it must be treated as a distinct category in law, separate from traditional IT regulations.
Mandate Transparency and Explainability: AI systems should be legally required to explain how and why a particular decision was made, especially in sensitive areas like finance, healthcare, or law enforcement.
Prevent Data Misuse and Profiling: AI technologies today can collect and analyse huge amounts of personal information. If not properly controlled, this data can be used to build detailed profiles about people—often without their knowledge. This raises serious privacy concerns. To protect individuals, there should be a clear and specific law that prevents companies or anyone else from using personal data without permission. It is important that people’s information is handled in a fair, honest, and lawful way.
Set Ethical Standards for AI Development: There should be legal provisions requiring AI developers to follow ethical design principles such as fairness, non-discrimination, and safety. Algorithms that promote bias or cause harm should be penalised under the law.
Establish an Independent AI Regulatory Authority: A dedicated body should be created to monitor AI deployment across sectors, enforce compliance, and investigate misuse. This body must work independently of commercial and political interests to ensure fairness.
Ensure User Consent and Control: Users should have clear rights to give or withdraw consent when their data is used in AI systems. Consent must be informed and not buried in complex terms and conditions.
Protect Against Automated Decision-Making Harms: AI systems used for automated decisions, such as job screening or loan approvals, must be subject to legal review. Individuals should have the right to challenge unfair or incorrect decisions made by machines.
Create Legal Liability for AI Misuse: There must be clear legal accountability when AI causes harm—whether physical, reputational, or financial. Developers, deployers, or data controllers should be held responsible if their AI tools violate privacy or rights.
CONCLUSION
As we stand at the edge of a new technological era driven by Artificial Intelligence, it is clear that innovation alone cannot define our progress. While AI holds the promise of transforming industries, improving efficiency, and addressing complex challenges, it also raises serious concerns about privacy, accountability, and individual autonomy. In a country like India, where digital growth is accelerating at an unprecedented rate, the absence of a strong, AI-specific legal framework could result in significant risks to personal freedoms.
The challenge today is to ensure that this right is not overshadowed by technological advancement. Laws like the DPDP Act, 2023 and the IT Act, 2000 are important starting points, but they must evolve to address the unique threats posed by AI systems—such as mass surveillance, deepfake manipulation, algorithmic bias, and non-consensual data processing. The way forward lies in creating a balanced framework that encourages innovation while placing necessary limits to prevent misuse. This includes transparent AI development, ethical use of data, regulatory oversight, and meaningful user rights.
In conclusion, a future driven by AI must also be guided by trust, fairness, and human dignity. By introducing dedicated AI laws and strengthening privacy protections, India can not only embrace the digital revolution but also ensure that every citizen is safeguarded in this journey.
FAQS
Q1: What is the projected number of internet users in India by 2025?
A: India is expected to have over 900 million internet users by 2025, making it one of the largest online populations globally. This growth is being driven by increased smartphone penetration, affordable data plans, and widespread digital awareness.
Q2: Are India’s current privacy laws sufficient to address AI risks in a growing internet landscape?
A: While India has the DPDP Act, 2023, and IT Act, 2000, these laws do not specifically regulate AI-based data processing. As AI tools become widespread in regional apps and services, existing frameworks may fall short in addressing challenges like algorithmic bias, automated decision-making, and non-transparent data collection.
Q3: What role does consent play in AI-driven digital services for India’s regional users?
A: Consent is crucial, especially when users interact with AI unknowingly. Many regional users may lack the digital literacy needed to understand privacy policies or opt-out mechanisms. Without clear, multilingual, and accessible consent frameworks, AI systems may operate without genuine user approval, violating the principle of informed consent.
Q4: What steps can India take to protect privacy as AI becomes more integrated into digital services?
A: India should consider enacting a dedicated AI regulation that addresses transparency, accountability, and user rights. Strengthening digital literacy, enforcing ethical AI practices, and establishing a regulatory body to monitor AI-driven platforms will be crucial, especially as regional users become the majority online.
