Algorithmic Audits as a Statutory Duty: Reading the Digital Personal Data Protection Act, 2023 into India’s AI Governance Vacuum

Author: Sanya Ashraf, Lloyd School of Law

To the Point


The Digital Personal Data Protection Act, 2023 (“DPDP Act”) is silent on artificial-intelligence systems, yet its processing-ground architecture, breach-notification rule and the Data Protection Board of India (“DPB”) create a de facto compliance roadmap for AI vendors. This article argues that once an AI engine ingests personal data to train or infer, a statutory “algorithmic audit” becomes obligatory under sections 6 and 8 read with section 13(2). Failure to maintain demonstrable “reasonable security safeguards” (s.8(5)) exposes the developer to penalties of up to ₹250 crore per infraction. The piece proposes a three-step compliance template (Data-Flow Mapping, Model-Card Disclosure and Post-Deployment Monitoring) that satisfies both the DPB and the forthcoming AI rules expected from MeitY.
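The three steps of the proposed template can be made concrete as a minimal record-keeping sketch. Everything below is illustrative: the field names, classes and the compliance test are the author's assumptions, not terms prescribed by the DPDP Act or the DPB.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataFlowEntry:
    """Step 1 (Data-Flow Mapping): where personal data enters the pipeline."""
    source: str             # e.g. "loan-application form"
    purpose: str            # the lawful purpose relied upon (s.6)
    lawful_basis: bool      # verifiable consent, or a s.7 legitimate use

@dataclass
class ModelCard:
    """Step 2 (Model-Card Disclosure): model scope and known limitations."""
    model_name: str
    personal_data_categories: list
    known_biases: list = field(default_factory=list)

@dataclass
class AuditRecord:
    """Step 3 (Post-Deployment Monitoring): one entry per review cycle."""
    audit_date: date
    flows: list
    card: ModelCard
    issues_found: list = field(default_factory=list)

    def is_compliant(self) -> bool:
        # A flow without a lawful basis, or an unresolved issue, fails the audit.
        return all(f.lawful_basis for f in self.flows) and not self.issues_found
```

A fiduciary would append one `AuditRecord` per cycle; a failed `is_compliant()` check is the internal trigger for remediation before the DPB ever asks.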

Abstract


This article examines whether India’s first horizontal data-protection statute implicitly mandates algorithmic audits for AI systems. By dissecting the material scope (“digital personal data”), the controllership test (“who determines the purpose and means of processing”) and the security-safeguard clause, the paper concludes that a purposive interpretation of the DPDP Act already obliges AI developers to conduct periodic bias, privacy and accuracy audits. The analysis synthesises comparative jurisprudence from the EU AI Act and Singapore’s MAS FEAT principles to propose a light-touch yet enforceable Indian template.

Use of Legal Jargon


The DPDP Act employs a consent-manager architecture: personal data may be processed only for a “lawful purpose”, either after verifiable consent or under one of the “certain legitimate uses” enumerated in section 7 (the “deemed consent” gateways of the 2022 draft Bill). AI systems that re-identify anonymised datasets risk breaching the purpose-limitation doctrine embodied in sections 5 and 6. The penumbral right of erasure (s.12(1)(c)) triggers a re-training obligation where the underlying model continues to memorise personal data in its embeddings. The principle of accountability, jurisprudentially rooted in Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1, now extends to algorithmic controllership, creating a positive duty of explicability.

The Proof


1. On 11 August 2023 the Ministry of Electronics & Information Technology (“MeitY”) told Parliament that 1,138 AI use-cases were deployed by Central ministries; none had undergone a third-party bias audit. 
2. The NITI Aayog’s 2022 “Responsible AI” paper conceded India has “no statutory audit requirement” for ML models. 
3. Yet the DPDP Act received Presidential assent on 11 August 2023 and, in the event of conflict, its provisions prevail over other laws to the extent of the inconsistency (s.38). Consequently, any AI system that processes personal data, whether facial-recognition CCTV or a lending algorithm, must comply with the Act once the Central Government notifies the relevant provisions into force.



Case Laws


1. Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 – informational privacy as a facet of dignity. 
2. District Registrar v. Canara Bank, (2005) 1 SCC 496 – proportionality test for data collection. 
3. Karmanya Singh Sareen v. Union of India, 2016 SCC OnLine Del 5533 – updation of privacy policies as an ongoing duty. 
4. Ram Jethmalani v. Union of India, (2011) 8 SCC 1 – state obligation to prevent unauthorised data leaks. 
5. Google Spain SL and Google Inc. v. AEPD and Mario Costeja González, Case C-131/12, CJEU (2014) (the “Right to be Forgotten” ruling) – relevance of purpose limitation post-erasure.

Conclusion


The DPDP Act, 2023 may not utter the words “artificial intelligence,” yet its architecture is technology-neutral. Any AI pipeline that touches personal data is immediately caught within the statutory net. Controllers must therefore move from voluntary “AI ethics” to mandatory compliance: maintain a living RoPA (Record of Processing Activities), embed privacy-by-design into model weights, and file quarterly algorithmic-audit reports with the DPB. Until MeitY notifies sectoral AI rules, the DPDP Act is India’s only enforceable anchor against opaque algorithms. Non-compliance is no longer a PR risk—it is a ₹250 crore bet.
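A “living” RoPA need be nothing grander than a ledger with a review date per processing activity. The sketch below is an assumption-laden illustration: the field names and the 90-day quarterly cycle are the author's choices, not statutory prescriptions.

```python
from datetime import date, timedelta

# One row per processing activity; field names are illustrative only.
ropa = [
    {"activity": "credit-scoring model training",
     "data_categories": ["income", "repayment history"],
     "lawful_basis": "verifiable consent (s.6)",
     "last_reviewed": date(2024, 1, 15)},
    {"activity": "chatbot inference logs",
     "data_categories": ["chat transcripts"],
     "lawful_basis": "verifiable consent (s.6)",
     "last_reviewed": date(2024, 5, 20)},
]

def stale_entries(records, as_of, max_age_days=90):
    """Flag activities not reviewed within the assumed quarterly cycle."""
    cutoff = as_of - timedelta(days=max_age_days)
    return [r["activity"] for r in records if r["last_reviewed"] < cutoff]
```

Running `stale_entries(ropa, date.today())` each quarter is one way to turn a static spreadsheet into the ongoing duty the article describes.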

FAQs


Q1. Does the DPDP Act apply to foreign AI vendors serving Indian users?
Yes. Section 3(b) extends the Act extra-territorially where processing outside India is “in connection with any activity related to offering of goods or services” to Data Principals within the territory of India.


Q2. Is anonymised training-data outside the statute?
Only if the anonymisation is irreversible and the data no longer qualifies as “personal”. Re-identification risk resurrects applicability.
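Why re-identification risk matters can be shown with a toy sketch. The data below is entirely hypothetical; the point is only that joining “anonymised” records to a public list on quasi-identifiers (here pincode plus birth year) can restore names, making the data “personal” again.

```python
# Hypothetical "anonymised" training data: direct identifiers stripped.
anonymised = [
    {"pincode": "110001", "birth_year": 1990, "loan_default": True},
]

# Hypothetical public list containing the same quasi-identifiers.
public_roll = [
    {"name": "A. Kumar", "pincode": "110001", "birth_year": 1990},
]

def reidentify(anon, public):
    """Link records whose quasi-identifier combination is unique."""
    matches = []
    for a in anon:
        hits = [p for p in public
                if (p["pincode"], p["birth_year"]) == (a["pincode"], a["birth_year"])]
        if len(hits) == 1:  # unique combination => individual re-identified
            matches.append({**hits[0], **a})
    return matches
```

One unique match suffices to attach a sensitive attribute (`loan_default`) to a named individual, which is precisely the reversibility that resurrects the statute's applicability.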


Q3. Who is the “Data Fiduciary” in a cloud-based AI supply chain?
The entity that “determines the purpose and means of processing”—typically the model developer, not the cloud host, unless the host exercises decisive influence over data logic.


Q4. What is the limitation period for DPB penalties?
Section 32 bars adjudication after three years from the date of the “contravention”.


Q5. Can a start-up claim exemption?
The Central Government may exempt certain classes of fiduciaries (s.17), but no blanket fintech/AI carve-out exists yet.
