Author: Sanjith Gurikar, PES University, Bengaluru
To the Point
AI-driven credit scoring promises increased efficiency, speed, and accuracy in evaluating borrower risk. However, it also opens a Pandora's box of algorithmic opacity, data discrimination, and regulatory lag. While traditional credit systems relied on financial history and repayment behaviour, AI systems process diverse, non-traditional data, ranging from social media activity to GPS data, to create credit profiles. This poses significant risks to privacy, equality, and due process. As India rapidly digitizes its financial infrastructure, legal clarity around transparency, accountability, and bias mitigation in AI credit scoring systems becomes imperative.
Use of Legal Jargon
In the context of AI in lending, Automated Decision-Making (ADM) refers to systems that autonomously determine creditworthiness without human intervention. These systems are increasingly under regulatory scrutiny for their fairness, especially in jurisdictions that emphasize due process. AI credit scoring also involves profiling—where algorithms evaluate behavioural and demographic data to assign scores that influence lending decisions. The concept of financial inclusion—enshrined in policy frameworks across the Global South—underlines the potential of AI to empower underserved populations. However, this must be balanced with individual rights such as the Right to Explanation, an emerging legal principle under the EU’s GDPR, mandating disclosure of the logic behind automated decisions. Similarly, the Non-Discrimination Principle under Article 14 of the Indian Constitution is pivotal in evaluating whether these systems perpetuate biased outcomes, particularly against historically marginalized communities.
The Proof
FinTech companies like ZestMoney, CASHe, and KreditBee, alongside legacy players such as SBI's YONO, have adopted AI-powered credit scoring models that use big data analytics and machine learning to assess risk, especially for borrowers with limited credit history. These models often draw on alternative data, including smartphone usage, payment behaviours, and even location data, to generate insights. These systems are typically black-box algorithms: their internal workings are proprietary and not open to scrutiny by users or even regulators. This lack of transparency exacerbates the risk of bias, as AI tools can inadvertently replicate historical inequalities, disfavouring certain demographics or regions. Moreover, the consumer impact is deeply concerning: borrowers are seldom informed that AI systems have assessed them, and they lack the means to appeal or question those assessments. Such unilateral decision-making undermines the very foundation of due process.
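To make the black-box concern concrete, the kind of alternative-data scoring described above can be sketched in a few lines of Python. Everything here is invented for illustration: the features, weights, and approval cut-off do not reflect any actual lender's model. The point is that even a trivially simple score is opaque to the borrower, who never sees the weights or the threshold.

```python
import math

# Hypothetical "alternative data" features for one applicant.
# None of these are traditional credit-bureau variables.
applicant = {
    "avg_monthly_upi_txns": 42,      # payment behaviour signal
    "phone_storage_free_pct": 0.15,  # smartphone usage signal
    "night_location_changes": 3,     # coarse GPS-derived signal
}

# Invented weights. In a deployed system these are learned from
# data and are exactly the part that stays hidden from borrowers.
weights = {
    "avg_monthly_upi_txns": 0.04,
    "phone_storage_free_pct": 1.2,
    "night_location_changes": -0.5,
}
intercept = -1.0

def credit_score(features, weights, intercept):
    """Logistic score in [0, 1]; higher means lower predicted risk."""
    z = intercept + sum(weights[k] * features[k] for k in weights)
    return 1 / (1 + math.exp(-z))

score = credit_score(applicant, weights, intercept)
approved = score >= 0.5  # invented approval cut-off
print(f"score={score:.3f}, approved={approved}")
```

A borrower denied by this sketch would have no way to learn which of the three signals tipped the outcome, which is precisely the explainability gap the article describes.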
Case Laws / Regulatory Insights
India currently lacks binding judicial precedents specifically addressing the legality or fairness of AI credit scoring. However, recent regulatory efforts and comparative international norms are informative.
RBI Guidelines (2022 & 2023): The Reserve Bank of India has taken initial steps by issuing directives that promote fair lending practices, consent-based data usage, and auditability of algorithms in digital lending. These principles reflect a risk-sensitive approach to financial innovation but stop short of mandating specific safeguards for ADM systems.
MeitY’s Draft Digital India Bill (2023): The bill explicitly acknowledges algorithmic harm and proposes risk-based AI regulation, including bias audits, human oversight, and “data protection by design.” If enacted, it could provide a framework for regulating high-risk AI systems like credit scoring.
EU’s AI Act & GDPR: Internationally, Article 22 of the General Data Protection Regulation (GDPR) gives individuals the right not to be subject to decisions based solely on automated processing that significantly affect them, unless safeguards such as human oversight are in place. The EU’s AI Act classifies AI systems used for credit scoring as “high risk,” mandating transparency, accountability, and robust redress mechanisms.
Indian Constitutional Law: In Justice K.S. Puttaswamy v. Union of India, the Supreme Court affirmed that the right to privacy under Article 21 includes informational autonomy. Additionally, if a state-owned or state-authorized body employs AI in credit decisions, it must comply with Article 14, ensuring fairness and non-arbitrariness in its functioning.
Critique
The use of AI in credit scoring currently operates within a regulatory grey zone in India. While regulators have acknowledged the risks, comprehensive safeguards remain absent. One of the primary criticisms is the lack of transparency. Consumers are often unaware of what personal data is harvested, how it is processed, and why they were approved or denied credit. There is also a data protection vacuum. The Digital Personal Data Protection Act, 2023, while promising, is yet to be operationalized with clear rules on algorithmic accountability. As a result, user consent is often obtained through blanket clauses, with minimal control over how data is reused or shared.
Another critical flaw is the reinforcement of bias. If AI systems are trained on datasets reflecting societal prejudice, such as caste, gender, or regional inequality, they risk reproducing those patterns. Even seemingly neutral data points (e.g., PIN codes or browsing history) may act as proxy variables for protected characteristics, thereby undermining the Non-Discrimination Principle. Further, there is a growing fear of a chilling effect on dissent. The potential use of social media behaviour or public expressions in assessing creditworthiness could stifle free speech, particularly among younger or politically vocal users. This sets a dangerous precedent where financial access is tethered to ideological conformity. Finally, AI-based denials often result in a denial of due process. Borrowers are rarely informed of the reasons behind rejection, nor do they have access to a mechanism to challenge or rectify such outcomes. This undermines their right to an effective remedy and to meaningful participation in decisions affecting their economic life.
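The bias audits discussed above need not be elaborate. One widely used screening metric is the disparate impact ratio (the "four-fifths rule" from US employment law): the approval rate of the least-favoured group divided by that of the most-favoured group, with values below 0.8 flagging possible discrimination. The sketch below uses invented approval counts purely to show how cheaply such a check can run once lenders record group-level outcomes.

```python
def disparate_impact_ratio(outcomes):
    """outcomes: {group: (approved, total)}.
    Returns (min approval rate / max approval rate, per-group rates)."""
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values()), rates

# Invented numbers for illustration only.
outcomes = {
    "group_A": (80, 100),  # 80% approval rate
    "group_B": (55, 100),  # 55% approval rate
}

ratio, rates = disparate_impact_ratio(outcomes)
flagged = ratio < 0.8  # four-fifths rule threshold
print(f"ratio={ratio:.2f}, audit flag={flagged}")
```

A ratio this far below 0.8 would not prove discrimination on its own, but it is exactly the kind of trigger a regulator could use to demand a deeper review of a model's training data and proxy variables.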
Conclusion
AI-based credit scoring is rapidly transforming India’s financial landscape, offering powerful tools for evaluating borrower risk with speed, scale, and adaptability. It has the potential to revolutionize financial inclusion by enabling access to credit for “new-to-credit” individuals—those who previously fell outside the formal banking system due to a lack of credit history. FinTech platforms and traditional banks alike now harness the power of big data, machine learning, and alternative data sources to create dynamic credit profiles that were inconceivable a decade ago.
However, this innovation does not come without costs. The use of AI in such critical decision-making contexts raises fundamental concerns around individual rights, algorithmic transparency, and accountability. AI systems, by their very nature, tend to operate as black boxes—opaque in logic, difficult to audit, and often inaccessible to those they impact. As these systems increasingly determine who gets access to credit—and who doesn’t—it becomes crucial to ensure that the decisions they generate are not only efficient but also fair, explainable, and contestable.
India presently lacks a comprehensive legal framework to govern the use of AI in credit scoring. While the Reserve Bank of India (RBI) has issued guidelines promoting responsible digital lending and fair data practices, these directives are largely procedural and do not address the substantive rights of individuals subjected to Automated Decision-Making (ADM). The Ministry of Electronics and Information Technology (MeitY) has recognized the risks of algorithmic bias in its Digital India Bill (2023), and the upcoming Data Protection Board under the Digital Personal Data Protection Act, 2023 is expected to provide some oversight. But the regulatory architecture remains fragmented and insufficiently focused on AI-specific harms.
To ensure that AI-based credit scoring serves as a tool of empowerment rather than discrimination, India must adopt a rights-based, transparency-first regulatory approach. This means embedding core legal principles—such as non-discrimination, data minimization, consent, fairness, and the right to explanation—into every layer of the AI deployment pipeline. Regulators must require bias mitigation protocols to audit and correct for caste, gender, regional, or socio-economic prejudices embedded in training data. They must enforce mandatory disclosures to inform borrowers when they have been assessed by an AI system and on what basis. Further, institutions must establish robust grievance redress mechanisms and ensure the presence of a human-in-the-loop for high-stakes decisions that affect access to essential financial services.
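The human-in-the-loop requirement proposed above can be operationalised as a simple routing rule: the system may auto-approve above a confidence cut-off, but every adverse or borderline outcome is queued for human review rather than auto-denied. A minimal sketch, with an invented threshold:

```python
def route_decision(score, approve_at=0.70):
    """Route an AI credit score. Only clear approvals pass through
    automatically; adverse and borderline cases always go to a
    human reviewer instead of being auto-denied."""
    return "auto_approve" if score >= approve_at else "human_review"

# Example routing for three hypothetical applicants.
for s in (0.85, 0.50, 0.10):
    print(f"score={s:.2f} -> {route_decision(s)}")
```

The design choice is deliberately asymmetric: automation speeds up the benign outcome, while the high-stakes denial, the decision most in need of due process, is never made without human oversight.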
If left unregulated, AI-based credit scoring threatens to create a digital underclass, where individuals are silently scored, sorted, and excluded based on invisible algorithms they cannot understand or challenge. On the other hand, if governed with foresight and grounded in constitutional values, AI can become a transformative force—one that deepens inclusion, reduces bias, and builds public trust in India’s evolving digital finance ecosystem.
Ultimately, the question is not whether we use AI in credit scoring, but how we use it—and for whose benefit. The law must lead, not follow, in this technological revolution.
FAQs
Q1: What is AI credit scoring?
AI credit scoring uses artificial intelligence to evaluate loan eligibility using both traditional financial data and alternative non-financial data such as app usage, GPS location, and online behaviour.
Q2: Is AI scoring regulated in India?
Only partially. While the RBI has issued broad guidelines for fair digital lending and data use, India lacks a dedicated legal framework specifically regulating AI-driven credit scoring models.
Q3: Can I know how my AI score was calculated?
In most cases, no. These systems are proprietary, and India does not yet provide a Right to Explanation, unlike jurisdictions such as the EU under the GDPR.
Q4: What legal rights do I have if I’m denied a loan by an AI system?
Currently, your rights are limited. While you can raise a grievance with the lender, there is no statutory mechanism to appeal or challenge algorithmic decisions.
Q5: What reforms are needed?
Reforms must include:
Mandatory transparency in AI models;
Regular bias audits;
Human-in-the-loop controls for high-impact decisions;
Grievance redressal mechanisms; and
Alignment with global best practices like the EU’s AI Act and GDPR.