
Algorithmic Bias and Discrimination in the Digital Age: A Legal and Ethical Inquiry

Author: Benantonio Rodrigues, LLM (Law and Technology), School of Law, Dayananda Sagar University, Bengaluru

1. Introduction

Many of the dynamic changes occurring in society today stem from algorithmic AI and AI-driven content generation. The capacity of these systems for decision-making, policy review, and problem-solving has made human work easier, but it has come at the cost of fairness, through discrimination and bias.

In the present digital age, algorithmic AI systems influence mainstream areas such as CIBIL credit scores, healthcare, and criminal sentencing, and the biased decisions or outputs of these algorithms endanger equality, justice, and good conscience.

This paper undertakes a legal and ethical inquiry into algorithmic bias and discrimination in the digital age.

It aims to:

(a) identify the causes of algorithmic bias, 

(b) examine existing global legal frameworks addressing such bias, 

(c) discuss the ethical challenges of algorithmic fairness, and 

(d) propose ways forward and suggestions for reconciling technological innovation with human rights obligations.

2. Understanding Algorithmic Bias

2.1 Definition and Nature of Algorithmic Bias

Algorithmic bias refers to systematic and repeatable errors in computer systems that generate unfair outcomes, privileging one group over another (Mehrabi et al., 2021). Bias can emerge at multiple stages: during data collection, model design, or deployment. Machine learning systems learn patterns from existing data, and if that data reflects social inequalities such as gender or racial disparities, algorithms can replicate and amplify those inequities, resulting in discrimination and bias.

2.2 Types and Sources of Bias

Data Bias: The most common form of bias stems from incomplete or unrepresentative datasets.

For example, if a facial recognition system is trained predominantly on images of lighter-skinned faces, it may perform poorly on darker skin tones (Buolamwini & Gebru, 2018).

Design Bias: Bias can also arise from human decisions during model construction and design: what data to include, which features to weigh, and which metrics to optimize.

Societal Bias: Algorithms may inadvertently replicate societal stereotypes embedded in language or behavior. A well-known example is Google’s ad-targeting algorithm, which once displayed high-paying job ads more often to men than to women (Datta, Tschantz, & Datta, 2015).
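The data-bias mechanism described above can be made concrete with a toy sketch. In this illustration (all scores, labels, and group sizes are invented for demonstration), a single decision threshold is fitted on a pool in which group A outnumbers group B nine to one; the fitted cut-off suits group A perfectly and fails half of group B:

```python
# Toy data-bias sketch (all numbers invented): a threshold classifier
# fitted on a skewed pool inherits the majority group's decision boundary.

# (score, true_label) pairs; label 1 = positive class.
# Group A is over-represented 9:1 in the training pool.
group_a = [(0.8, 1), (0.75, 1), (0.7, 1), (0.65, 0), (0.3, 0), (0.2, 0)] * 9
group_b = [(0.5, 1), (0.45, 1), (0.4, 1), (0.35, 0), (0.3, 0), (0.25, 0)]

def accuracy(threshold, data):
    """Predict 1 when score >= threshold; return the fraction correct."""
    return sum((s >= threshold) == bool(y) for s, y in data) / len(data)

def fit_threshold(data):
    """Choose the candidate cut-off that maximises overall accuracy."""
    return max(sorted({s for s, _ in data}), key=lambda t: accuracy(t, data))

t = fit_threshold(group_a + group_b)  # fit dominated by group A's examples
# With these invented numbers: t fits group A exactly (accuracy 1.0)
# while misclassifying every positive case in group B (accuracy 0.5).
```

The point of the sketch is only structural: the optimisation objective is indifferent to which group absorbs the errors, so the under-represented group does.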

2.3 Real-World Illustrations

A striking example is the COMPAS algorithm, used in the United States criminal justice system to predict defendants’ risk of recidivism. ProPublica’s investigation revealed that the algorithm was far more likely to misclassify Black defendants as high-risk than White defendants (Angwin et al., 2016). 

Similarly, Amazon’s AI recruitment tool, designed to screen résumés, was found to downgrade applications from women because the historical hiring data on which it was trained reflected gender imbalances in the tech industry (Dastin, 2018).

These cases reveal that algorithmic systems may seem neutral but are not.

3. Legal Framework and Human Rights Perspective

3.1 International Human Rights Law

Algorithmic bias intersects with the international legal principles of equality and non-discrimination enshrined in human rights law. Article 7 of the Universal Declaration of Human Rights (UDHR) (1948) and Article 26 of the International Covenant on Civil and Political Rights (ICCPR) guarantee equality before the law and protection against discrimination. Article 14 of the European Convention on Human Rights (ECHR) reinforces this guarantee.

When algorithmic systems produce biased decisions, they violate core human rights principles and the fundamental rights of individuals. The UN Guiding Principles on Business and Human Rights (2011) extend these obligations to private actors, requiring corporations to respect human rights in their technological designs and operations.

3.2 The European Union’s Regulatory Approach

The European Union (EU) has taken a leading role in regulating algorithmic bias and discrimination. The General Data Protection Regulation (GDPR) (2016) guarantees individuals rights against automated decision-making that significantly affects them (Article 22). The EU Artificial Intelligence Act (2024) further classifies AI systems by risk level and imposes strict obligations on “high-risk” AI, especially systems used in employment, credit scoring, and law enforcement.

The Act mandates that such systems be transparent, explainable, and non-discriminatory, with compulsory algorithmic audits, analogous to environmental audits, and human oversight mechanisms (European Commission, 2021). This marks a major step towards operationalizing fairness in AI systems through enforceable regulatory standards.
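One statistic an algorithmic audit can compute is the disparate-impact ratio: the lower group’s selection rate divided by the higher group’s. In US employment practice, a ratio below 0.8 (the “four-fifths rule”) is a conventional warning sign. The sketch below is a hedged illustration; the decision logs and group names are entirely invented:

```python
# Hedged audit sketch (all decision logs invented): computing the
# disparate-impact ratio of a screening model's recorded outputs.

def selection_rate(decisions):
    """Share of positive outcomes (1 = selected, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_x, group_y):
    """Ratio of the lower selection rate to the higher one (0..1)."""
    rx, ry = selection_rate(group_x), selection_rate(group_y)
    return min(rx, ry) / max(rx, ry)

# hypothetical audit log of a screening model's outputs
group_men   = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 8/10 selected
group_women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # 4/10 selected

ratio = disparate_impact(group_men, group_women)  # 0.4 / 0.8 = 0.5
# 0.5 < 0.8, so this model would be flagged for review under the
# four-fifths heuristic.
```

An audit of this kind only detects a disparity; interpreting it (and deciding whether it is legally justified) remains a human, legal judgment.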

3.3 The United States’ Sectoral and Ethical Framework

Unlike the EU’s comprehensive model of AI regulation, the United States follows a sector-specific approach, addressing algorithmic bias through existing anti-discrimination laws such as the Civil Rights Act of 1964, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, supplemented by the White House Blueprint for an AI Bill of Rights (2022).

However, the U.S. legal framework largely depends on self-regulation and voluntary compliance, leading to inconsistent implementation. Scholars have criticized this approach for failing to provide sufficient accountability or liability for algorithmic harms (Crawford & Paglen, 2021).

3.4 The Indian Legal Landscape

India’s legal stance on algorithmic bias is still evolving. The Constitution of India guarantees equality before the law (Article 14), prohibits discrimination (Article 15), and ensures the right to life and personal liberty (Article 21). Read together, these provisions establish fairness as a constitutional obligation (Basu, 2020).

The Digital Personal Data Protection (DPDP) Act (2023) embeds the principles of purpose limitation, data minimization, and consent-based processing, and focuses on protecting the personal data of Data Principals from harm. However, it contains no explicit provisions on algorithmic discrimination or automated decision-making. India’s National Strategy for Artificial Intelligence (NITI Aayog, 2018) champions “AI for All,” but regulatory safeguards remain limited.

4. Ethical Dimensions of Algorithmic Fairness

4.1 Principles of AI Ethics

Ethical frameworks worldwide converge on three central principles: transparency, accountability, and fairness.

The OECD Principles on Artificial Intelligence (2019) and the UNESCO Recommendation on the Ethics of AI (2021) emphasize that AI should respect human rights, diversity, and inclusion.

For algorithms to be transparent, they must be explainable and comprehensible. The question of liability for algorithmic errors remains unanswered, and biased or discriminatory outcomes fall hardest on the most vulnerable groups of society.

4.2 The Fairness–Accuracy Trade-Off

On purely ethical grounds, the pursuit of algorithmic fairness often collides with efficiency and accuracy. Algorithms trained to maximize accuracy may overlook social equity considerations. For instance, adjusting a predictive policing (“PredPol”) algorithm for fairness can mean sacrificing some predictive precision, raising tensions between justice and utility (Kleinberg et al., 2018).
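The trade-off can be made concrete with a toy comparison (all labels and predictions are invented). Rule 1 matches each group’s ground truth exactly and is maximally accurate, but flags group A three times as often as group B; Rule 2 equalises the groups’ positive-prediction rates (demographic parity) at the cost of some accuracy:

```python
# Invented illustration of the fairness-accuracy trade-off: two decision
# rules scored on overall accuracy and on the demographic-parity gap
# (the difference between groups' positive-prediction rates).

def positive_rate(preds):
    return sum(preds) / len(preds)

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

labels_a, labels_b = [1, 1, 1, 0, 0], [1, 0, 0, 0, 0]  # ground truth

# Rule 1: matches ground truth exactly -- perfectly accurate,
# but flags group A at 0.6 vs group B at 0.2.
r1_a, r1_b = [1, 1, 1, 0, 0], [1, 0, 0, 0, 0]

# Rule 2: equal positive rates (0.4 in both groups) -- zero parity gap,
# but one error in each group.
r2_a, r2_b = [1, 1, 0, 0, 0], [1, 1, 0, 0, 0]

acc1 = accuracy(r1_a + r1_b, labels_a + labels_b)      # 1.0
gap1 = abs(positive_rate(r1_a) - positive_rate(r1_b))  # approx. 0.4
acc2 = accuracy(r2_a + r2_b, labels_a + labels_b)      # 0.8
gap2 = abs(positive_rate(r2_a) - positive_rate(r2_b))  # 0.0
```

No rule achieves both perfect accuracy and a zero parity gap here, which is the tension Kleinberg et al. formalize: which metric should yield is a normative choice, not a technical one.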

4.3 Corporate Ethical Accountability

Technology corporations have taken voluntary measures such as forming AI ethics boards, publishing fairness guidelines, conducting AI audits, and issuing transparency reports. However, critics contend that these mechanisms often amount to “ethics washing,” deflecting binding regulation (Metcalf, Moss, & boyd, 2019).

5. Challenges in Regulating Algorithmic Bias

5.1 The “Black Box” Problem

Advanced AI systems, especially those using deep learning, operate as opaque “black boxes.” Their decision-making processes are not easily interpretable, even to the developers who designed them. This opacity poses serious challenges to transparency, due process, and accountability (Burrell, 2016).

If individuals cannot understand or contest algorithmic decisions affecting their rights, legal remedies become ineffective.

5.2 Jurisdictional Complexity in Cyberspace

Algorithms operate across borders, making it difficult to determine jurisdiction or the applicable law. For instance, a U.S.-based company’s algorithm deployed in Europe may fall under both the GDPR and U.S. federal law, leading to conflicts of norms and enforcement gaps. 

5.3 Absence of Uniform Global Standards

Despite various initiatives, policies, and regulations, no binding international convention on algorithmic governance exists. Developing nations face additional challenges, including limited technical capacity to audit algorithms, low digital literacy, and poverty.

6. The Way Forward

6.1 Legal Reforms and Algorithmic Accountability

Legislation establishing algorithmic accountability should be enacted. The EU’s AI Act offers an exemplary precedent, attaching legal liability to a risk-based classification of AI systems.

6.2 Human-in-the-Loop Systems

Another safeguard is human review of AI-generated outcomes. Human-in-the-loop systems cannot prevent every error, but they substantially reduce the proportion of serious mistakes.
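A minimal sketch of the idea follows. The threshold, labels, and cases are all hypothetical assumptions: the point is only that low-confidence automated outputs are diverted to a human reviewer instead of being applied directly.

```python
# Minimal human-in-the-loop routing sketch (threshold and cases invented).

REVIEW_THRESHOLD = 0.85  # assumed policy choice, not a prescribed value

def route(prediction, confidence):
    """Apply the model's output automatically only when it is confident;
    otherwise queue the case for human review."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", None)

# hypothetical model outputs with confidence scores
cases = [("approve", 0.97), ("reject", 0.60), ("reject", 0.91), ("approve", 0.40)]
routed = [route(p, c) for p, c in cases]
# here, two decisions are applied automatically and two go to a reviewer
```

Where to set the threshold is itself a legal and ethical judgment: a lower threshold automates more decisions, while a higher one sends more borderline cases to a human.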

6.3 Interdisciplinary Collaboration

Regulating algorithmic bias requires synergy across disciplines, including computer science, public policy, and cyber law.

6.4 Strengthening Global Cooperation 

International cooperation and coordination among countries is essential to regulating AI effectively.

7. Conclusion

Algorithmic bias is one of the most serious human rights issues of the present digital age. People have become increasingly reliant on algorithmic systems to allocate resources, identify risks, and administer justice. As the stakes of fairness, accuracy, and accountability grow, a proper legal framework governing algorithmic bias becomes necessary.

This inquiry has shown that algorithmic discrimination stems not only from flawed data but also from broader societal and institutional frameworks. While international human rights law and new AI regulations lay the groundwork for remedies, major gaps remain, particularly in implementation and enforcement.

Finally, maintaining justice in algorithmic decision-making is not purely a technical task; it is a moral, ethical, and legal imperative. The pursuit of innovation and technology cannot come at the expense of equality and human dignity.

References 

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica.

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732.

Basu, P. (2020). Algorithmic governance and constitutionalism: Equality and bias in automated decision-making. Indian Journal of Law and Technology, 16(2), 45–67.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.

Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12.

Crawford, K., & Paglen, T. (2021). Excavating AI: The politics of images in machine learning training sets. AI & Society, 36(4), 1101–1120.

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.

Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. Proceedings on Privacy Enhancing Technologies, 2015(1), 92–112.

European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). COM(2021) 206 final.

Kleinberg, J., Ludwig, J., Mullainathan, S., & Rambachan, A. (2018). Algorithmic fairness. AEA Papers and Proceedings, 108, 22–27.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35.

Metcalf, J., Moss, E., & boyd, d. (2019). Owning ethics: Corporate logics, silicon valley, and the institutionalization of ethics. Social Research: An International Quarterly, 86(2), 449–476.

NITI Aayog. (2018). National Strategy for Artificial Intelligence – AI for All. Government of India.

Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14.

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization.

United Nations. (2011). Guiding principles on business and human rights. United Nations Human Rights Office.
