RISE OF AI IN BHARAT: BOON OR TIME BOMB?

Author: Gautam Tomar, Bharati Vidyapeeth University

ABSTRACT

Artificial Intelligence, or AI as we call it, has swept across the world like a tsunami, forcing us to redefine how our society thinks, works, and even functions. In India, or Bharat, AI promised innovation, efficiency, and socio-economic development, but did it really deliver those things, or did it instead hand us ethical dilemmas, privacy nightmares, and job displacement? This article will help you evaluate whether AI is carrying our dear Bharat towards Viksit Bharat (Developed India) or has become a ticking time bomb for all of us.
Keywords: Bharat, AI, Law, Liability, AI-Generated

TO THE POINT

THE RISE OF AI - The rise of AI, especially in India, was truly inevitable. Our country now ranks among the top nations in AI adoption, helped along by government initiatives such as Digital India and the IndiaAI Mission.
LEGAL FRAMEWORK - Although AI is entirely new to our legal system, lawmakers have tried to control and regulate it through existing statutes such as the IT Act, 2000, and constitutional provisions are now being read to cover this new-age technology. The judiciary still has much work to do on AI-based liability and the creation of new IP rights, so that criminal proceedings can hold wrongdoers accountable for their acts.
PROOF IS IN THE CODE - AI in Bharat is growing exponentially, yet regulation lags.

THE PROOF

By the Numbers: A Snapshot of India’s AI Surge
AI’s Economic Contribution: By 2035, artificial intelligence is projected to add an impressive $957 billion to India’s GDP (NASSCOM).
Government Backing: The IndiaAI Mission has been allocated a budget of ₹10,371 crore in the Union Budget 2024–25—a serious investment into AI infrastructure.
Startup Ecosystem: India is now home to over 1,300 AI-driven startups as of 2025, signaling rapid innovation across sectors.
Workforce Shift: An estimated 69% of jobs in India could be automated, reshaping the employment landscape (Oxford Economics).

Real-World Applications: AI on the ground
Agriculture: Startups such as Fasal and CropIn use artificial intelligence to optimise crop productivity and resource use.
Healthcare: Organisations such as Niramai are at the forefront of combining AI with thermal scanning to detect breast cancer in its early stages. This approach is non-invasive, inexpensive, and particularly useful in locations where access to regular medical care is limited.
Policing and Surveillance: Delhi Police’s AI-powered facial recognition systems have prompted severe privacy concerns.
Education: Initiatives such as AI-for-All (headed by CBSE) bring AI tutors into classrooms, making education more dynamic and individualised.

LEGAL JARGON & FRAMEWORK


1. Lex Lata (The Law as It Exists):

Information Technology Act, 2000 (IT Act) – Lacks express AI provisions but used indirectly for cybersecurity, data protection, and digital offenses.

Indian Penal Code, 1860 (IPC) – Silent on AI; attribution of mens rea (criminal intent) in AI-created acts remains unaddressed.

Copyright Act, 1957 – Doesn’t recognize AI as an author; AI-generated content’s IP status is disputed.

Constitution of India – Articles 21 (Right to Privacy, post Puttaswamy case), 19(1)(a) (Freedom of Expression) are increasingly tested by AI’s encroachments.

2. Lex Ferenda (The Law That Should Exist):

Currently, India does not have a comprehensive data protection law, although the Digital Personal Data Protection Act, 2023 represents a positive step forward.
There is an immediate need for AI-specific legislation, akin to the EU’s AI Act, to cover:
Risk classification of AI systems,
Liability in autonomous decision-making,
Ethical AI frameworks,
Sandboxing for innovation,
Algorithmic transparency and bias mitigation.
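To make the first item above concrete: risk classification simply means sorting AI systems into tiers with escalating legal obligations. The sketch below is purely illustrative; the four tiers mirror the EU AI Act’s structure, but the use-case mapping and the precautionary default are hypothetical assumptions, not the Act’s actual annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Four risk tiers, modelled on the EU AI Act's structure."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: audits, transparency, human oversight"
    LIMITED = "transparency duties, e.g. disclosing AI-generated content"
    MINIMAL = "largely unregulated"

# Hypothetical mapping of use-cases to tiers, loosely inspired by the
# EU AI Act -- an Indian statute would define its own lists.
USE_CASE_TIERS = {
    "social_scoring_by_state": RiskTier.UNACCEPTABLE,
    "facial_recognition_policing": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "medical_diagnosis": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a use-case; unknown systems default
    to HIGH as a precautionary (assumed) policy choice."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("spam_filter").name)  # MINIMAL
```

The point of such a scheme is that a regulator need not evaluate every model individually: the statute fixes the tiers, and obligations follow automatically from the classification.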

CASE LAWS

1. Justice K. S. Puttaswamy (Retd.) v. Union of India (2017) 10 SCC 1


In a pivotal ruling, the Supreme Court of India recognized the Right to Privacy as a fundamental right under Article 21 of the Constitution, which guarantees the right to life and liberty. Although this case focused on the Aadhaar biometric system, its ramifications are far-reaching, particularly concerning digital technologies and AI. The Court underscored the significance of informational privacy, dignity, and individual freedom in today’s digital landscape.
Relevance to AI: AI systems that engage in profiling, behavioral prediction, or mass surveillance could potentially infringe upon this constitutionally protected right. For instance, facial recognition technologies used in public spaces, or algorithms collecting and analyzing personal data without consent, may face constitutional scrutiny. The case provides a strong legal foundation for challenging AI deployments that compromise individual privacy and reinforces the necessity for robust data protection laws in AI governance.

2. Shreya Singhal v. Union of India (2015) 5 SCC 1

This historic judgment struck down Section 66A of the Information Technology Act, 2000, which criminalised sending “offensive” messages over communication services, as unconstitutional. The Court found the provision vague and overbroad, with a chilling effect on free speech, thus violating Article 19(1)(a) (freedom of speech and expression). It reaffirmed the importance of free expression in the digital age and limited the state’s ability to prohibit online content.
Relevance to AI: With the growing use of AI-based content moderation tools—from social media platforms to news aggregators—this ruling is highly relevant. AI systems trained to detect and remove harmful or illegal content must be designed carefully to avoid arbitrary or excessive takedowns, which could infringe on users’ speech rights. Platforms using AI for automated moderation must align with constitutional standards, ensuring transparency, fairness, and due process in how content is flagged or removed.

3. Google India Pvt. Ltd. v. Visaka Industries (2020) SCC Online Del 1189

The Delhi High Court looked at the IT Act’s liability for intermediaries when defamatory content appeared on websites like Google. The Court explained that although Section 79 of the IT Act grants general protection to intermediaries, this immunity is conditional; if they become aware of illegal content, they must use “due diligence” and abide by removal orders.
Relevance to AI: Many digital intermediaries now use AI-driven content filtering and moderation systems. While these tools can help manage vast volumes of data, they must be carefully calibrated to avoid overreach. If an AI system censors content in an arbitrary or biased manner, it may expose the intermediary to legal liability. This case highlights the fine balance platforms must strike between automated enforcement and manual oversight, ensuring that AI interventions are lawful, proportionate, and in compliance with due process norms.

4. Riley v. California, 573 U.S. 373 (2014) [U.S. Jurisdiction, cited in Indian courts]

Indian courts have cited the US Supreme Court’s Riley v. California decision to illustrate the evolving nature of digital privacy. In Riley, the Court ruled that a warrant was required before law enforcement could search a person’s smartphone for digital content. The ruling stressed that, since cellphones hold vast amounts of personal information, they warrant greater privacy protection than traditional search-and-seizure doctrine affords physical objects.
Relevance to AI: In India, as AI becomes embedded in predictive policing, data mining, and surveillance tech, this precedent serves as persuasive authority for requiring strict safeguards against intrusive AI systems. Facial recognition software, predictive crime algorithms, and smart city surveillance tools often collect and analyze sensitive personal data. Riley is frequently invoked in Indian privacy arguments to advocate for stronger judicial oversight and consent frameworks when deploying AI tools that touch upon fundamental rights.

5. State of Maharashtra v. Praful B. Desai (2003) 4 SCC 601

This case concerned the admissibility of electronic and digital evidence in Indian courts. The Supreme Court upheld the use of video conferencing as a valid way to record witness testimony, highlighting the need for Indian law to adapt to technological advancements. The Court took the pragmatic position that the manner in which evidence is recorded should not compromise the substantive fairness of the trial process.
Relevance to AI: As AI-generated evidence (like facial recognition matches, predictive risk assessments, or voice pattern analysis) begins appearing in criminal and civil proceedings, the principles from Praful Desai become especially relevant. While the door has been opened for digital tools in the courtroom, the standards of reliability, transparency, and fairness must still be met. AI outputs can be prone to bias or lack explainability, and unless courts develop clear admissibility criteria for such evidence, there’s a risk of misuse or miscarriage of justice.


CONCLUSION: BETWEEN SHIV AND SHAITAN

In Bharat, artificial intelligence represents a civilisational paradigm shift, not merely a technical advancement. We face a future in which the code we write will determine whether our society becomes more powerful and equal, or devolves into a dystopia marked by bias, surveillance, and a breakdown in accountability.
On the one hand, AI has the capacity to change people’s lives, especially in a country as diverse and populated as India. Two AI-enabled agriculture systems, CropIn and Fasal, are assisting farmers with crop failure prevention, weather pattern prediction, and irrigation optimisation. Diagnostic tools like Niramai are employing thermal imaging to identify breast cancer early in underdeveloped regions with a shortage of medical professionals. As part of the CBSE’s AI-for-All initiative, AI tutors are helping millions of students in rural schools close their learning gaps. This is democratisation, not merely innovation. AI can help India leapfrog development stages by reaching the last mile and removing legacy hurdles.
But on the other hand lurks the Shaitan, cloaked in lines of unchecked code. Without adequate legal guardrails, AI can also become a tool of oppression. Facial recognition systems used by police without consent or oversight risk turning public spaces into zones of surveillance, chilling democratic freedoms. Job automation may lead to widespread unemployment, especially among low-skilled workers who lack the digital literacy to adapt. Worse still, algorithmic biases—trained on historical inequities—could deepen existing divides. AI systems, left to their own devices, can unknowingly replicate caste, gender, and class prejudices, reinforcing them in decisions about credit, hiring, or policing.
The root of this duality is not the technology itself—but the absence of robust regulation and ethical oversight. India’s current legal ecosystem is ill-equipped to grapple with questions like: Who is liable when an autonomous vehicle kills someone? Does a machine-generated painting have copyright protection? Can AI testimony be admissible in court? We’re trying to stretch laws from the 19th and 20th centuries—like the IPC, IT Act, and Copyright Act—over the contours of a 21st-century reality. It’s like using a lantern to map a black hole.
Judicial responses so far have been promising but reactive. Landmark cases like Puttaswamy, Shreya Singhal, and Google v. Visaka provide useful constitutional signposts, especially around privacy, free speech, and intermediary liability. But they’re not AI-specific, and much of the judiciary is still catching up to the complexities AI brings. Precedents from foreign jurisdictions, like Riley v. California, are increasingly cited—but without a consistent jurisprudential framework in India, reliance on persuasive precedent alone is insufficient.
The way forward must be multi-pronged and urgent. First, India needs AI-specific legislation akin to the EU’s AI Act—focused on risk classification, explainability, accountability, and transparency. The Digital Personal Data Protection Act, 2023 is a start but lacks teeth in regulating autonomous systems. Second, we need a sandbox approach that encourages innovation while enforcing oversight, especially in high-risk sectors like health, law enforcement, and finance. Third, there must be algorithmic audits and impact assessments to detect and correct bias—because even machines are not immune to prejudice.
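One concrete form the algorithmic audits recommended above can take is the “four-fifths rule” borrowed from US employment-discrimination practice: compare the rate of favourable outcomes an algorithm produces for different groups, and flag the system when the lower rate falls below 80% of the higher one. The sketch below is a minimal illustration; the loan-approval data and the 0.8 threshold are illustrative assumptions, not figures from any real Indian deployment.

```python
def selection_rate(decisions):
    """Fraction of favourable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a common red flag for bias (the 'four-fifths rule')."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi > 0 else 1.0

# Illustrative data: 1 = loan approved, 0 = denied, for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% approval rate
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% approval rate

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio = {ratio:.2f}")  # 0.38 -> flags possible bias
```

A statutory audit regime would of course go further, examining training data, feature choices, and explainability, but even this one-line ratio shows that bias detection is measurable and enforceable, not merely aspirational.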
But perhaps most importantly, the ethical DNA of AI must reflect the constitutional values of liberty, equality, and dignity. These are not optional accessories; they must be encoded into AI development, deployment, and regulation. India must nurture a generation of “law + tech” professionals who understand that building responsible AI isn’t just about better engineering—it’s about ethical engineering.
To borrow from mythology once more: AI is neither Shiv nor Shaitan by birth—it is both and neither, shaped by the intent and regulation of its creators. The time to act is now. We can no longer afford to treat AI as a tech issue alone; it is a constitutional issue, a human rights issue, and a governance issue.
The choice before Bharat is stark. Either we become Viksit Bharat by embedding ethics and legality into our AI journey, or we become the world’s cautionary tale—fast, flashy, and fatally flawed. Let’s choose wisely. Let’s code consciously. Let’s regulate fearlessly.

FAQS


Q: Does AI actually contribute to Bharat’s transformation into “Viksit Bharat”?
A: Yes, in a variety of ways. AI-powered startups are transforming access and service delivery in education, healthcare, and agriculture. However, if not sufficiently regulated, AI also has the potential to worsen inequality and undermine human liberties. The verdict, therefore, is beneficial, but with caution.

Q: Even though AI is being adopted globally, why is India falling behind in terms of regulation?
A: Because the majority of our legal framework predates digital technologies. Laws like the IT Act and IPC were never designed for autonomous technology or algorithmic decision-making. Despite positive advancements such as the Digital Personal Data Protection Act, 2023, legislation that specifically tackles AI is still desperately needed.

Q: Are there real examples of AI misuse in India?
A: Definitely. Facial recognition systems used by law enforcement without public consent have triggered privacy debates. AI in content moderation has led to arbitrary take-downs, threatening free speech. And algorithmic bias could reinforce caste, gender, or economic inequality if unchecked.

Q: What role is the Indian judiciary playing in AI governance?
A: So far, the judiciary has been reactive rather than proactive. Landmark cases like Puttaswamy (privacy), Shreya Singhal (free speech), and Visaka Industries (intermediary liability) provide guiding principles, but they weren’t tailored for AI. We’re still waiting on AI-specific jurisprudence.

Q: How does AI affect constitutional rights in India?
A: Profoundly. AI threatens the Right to Privacy (Art. 21) and the Freedom of Speech (Art. 19(1)(a)), and can even undermine equality (Art. 14) if algorithms reinforce societal biases. Constitutional values must be baked into AI frameworks, not bolted on as an afterthought.
