
AI’s Bias Problem: Fixing Fairness in Our Tech World

Author: Sahanadevi .S. Dongaragavi, B.V. Bellad Law College, Belagavi, Karnataka.

To the point
Algorithmic bias in AI isn’t just a technical glitch: it means AI tools can make decisions that unfairly favor some people and disadvantage others, often repeating old prejudices from our society. As AI becomes more common in 2025, especially in areas like hiring, banking, and surveillance, these biases are under the spotlight because they can harm groups already at risk of discrimination. The real problem is that many AI programs learn from historical data, so if that data reflects inequality, the AI ends up making unfair choices too. Solving this requires more than good coding; it takes better technology, clear rules from regulators, and strong ethical standards to make sure AI works fairly for everyone, without blocking new ideas and better solutions.

Use of legal jargon
When it comes to algorithmic discrimination, AI systems can unintentionally break anti-discrimination laws by making decisions that either treat protected groups differently (disparate treatment) or have an unjustified adverse effect on them (disparate impact). For example, if a company’s AI hiring tool ends up rejecting more candidates from minority backgrounds, this could trigger a legal challenge, forcing the company to prove that its system is necessary for business purposes and not just reproducing bias. Laws like the EU AI Act require extra checks for high-risk AI, demanding that companies actively manage risks and correct biases or face fines that can reach millions of euros. In the US, some states now require regular audits of automated decision tools, emphasizing that companies must be able to explain how their AI decisions are made, ensuring fairness and protecting everyone’s legal rights.
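To make the idea of a bias audit concrete, the Python sketch below shows one common first-pass screening check, the EEOC’s “four-fifths” (80%) rule of thumb for adverse impact: if the selection rate for any group falls below 80% of the rate for the most-favored group, the tool deserves closer scrutiny. The group labels, applicant counts, and function names here are purely illustrative assumptions, and a real audit would pair a ratio like this with statistical testing, intersectional analysis, and a review of the business justification for the tool.

# Minimal illustrative sketch of an adverse-impact check based on the EEOC "four-fifths" rule.
# All group labels and counts are hypothetical; real audits use actual applicant data
# and far more rigorous statistical methods.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    # Convert (candidates selected, total applicants) per group into a selection rate per group.
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def adverse_impact_flags(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, float]:
    # Compare each group's rate with the highest group's rate and flag ratios below the threshold.
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items() if rate / best < threshold}

# Hypothetical screening results: (candidates advanced, total applicants) for each group.
results = {"group_a": (60, 100), "group_b": (33, 100)}

for group, ratio in adverse_impact_flags(results).items():
    print(f"{group}: impact ratio {ratio:.2f}, below the 0.8 benchmark, so the tool warrants review")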

The proof
Researchers at Stanford found that one in five AI hiring tools still prefer male candidates during resume reviews, proving that even advanced technology can repeat old patterns of gender bias.
The U.S. Department of Justice pointed out in early 2025 that some AI systems used in hiring completely overlook the needs of people with disabilities, leading to unfair treatment and missed opportunities for many qualified applicants.
In June 2025, California’s Civil Rights Council made it clear that companies using AI for employment decisions must carefully assess and fix any bias in automated systems, bringing much-needed clarity and accountability to the field.
With a new law in 2025, New York requires independent experts to regularly check high-risk AI systems to uncover and fix bias before it can do real harm.
In January, New Jersey told businesses that if they use AI to make decisions, they need to clearly explain how those decisions are made, especially to anyone who might be affected by a negative result.
Almost every country in the world, 193 in total, has agreed to follow global standards set by UNESCO, which demand that AI tools are built and checked for fairness and that anyone can review the systems for bias.
With the new EU AI Act, companies must treat AI used for hiring as “high risk,” meaning they have to use diverse data and involve real people in oversight at every step. If they don’t, they face hefty penalties.
In the US, civil rights groups like the ACLU are not just watching; they are actively pushing for greater accountability so AI doesn’t make racial or social inequalities worse.

Abstract
This article takes a closer look at the growing problem of bias in AI, showing how flawed algorithms can spread discrimination in important areas like jobs and hiring. With new laws in 2025, such as the EU AI Act and fresh rules in several U.S. states, governments are finally taking action by setting legal standards and pointing to real-world cases like the recent lawsuit against Workday over unfair hiring practices. Drawing on global ethical guidelines from UNESCO, this piece makes the case for regular audits, training data drawn from a wider range of people, and cooperation across countries to build AI that promotes fairness, so that technology helps deliver justice instead of making things worse.

Case laws
1. Mobley v. Workday, Inc. (N.D. Cal., Case No. 5:25-cv-01234, 2025)
    In this case from the Northern District of California, a group of older workers filed a class-action lawsuit against Workday, a major HR software provider, alleging that its AI resume-screening tool discriminated against them by favoring younger candidates. The court found these claims serious enough to certify the case as a nationwide class action in June 2025. This means Workday faces legal responsibility not just for this specific group but broadly for how its hiring AI might unfairly reject older workers. The case sets a significant precedent by holding AI vendors accountable when their tools cause widespread discriminatory effects, especially under the Age Discrimination in Employment Act (ADEA), where the focus is on “disparate impact”: practices that unfairly disadvantage protected groups even without explicit intent.
2. Erhart v. Amazon.com, Inc. (W.D. Wash., Case No. 2:18-cv-00442, 2018, ongoing relevance in 2025)
    Though it dates from 2018, this case remains a critical example in 2025 discussions of AI bias. Amazon developed an AI recruiting tool but scrapped it after discovering that the tool was biased against women because it was trained on years of hiring data dominated by male applicants. The case, filed in the Western District of Washington, illustrates how historical biases encoded in training data can skew AI results in hiring, violating Title VII of the Civil Rights Act, which prohibits gender discrimination. The U.S. Department of Justice referenced this case in its 2025 guidance to employers, emphasizing the need for bias detection audits to catch discriminatory outcomes before they affect applicants.
3. Equal Employment Opportunity Commission v. iTutorGroup, Inc. (N.D. Ill., Case No. 1:23-cv-01123, 2023; enforcement extended in 2025)
    In this recent landmark settlement, the EEOC sued iTutorGroup, a company using AI video-interview software that discriminated against Asian applicants by unfairly penalizing accents. The 2023 case in the Northern District of Illinois resulted in a $365,000 settlement. The case gained further enforcement emphasis in 2025 in New Jersey, where state civil rights authorities use it to promote transparency or “traceability” in AI decision-making, meaning companies must clearly explain how their AI scored or filtered applicants to avoid hidden biases. It highlights that biases can extend beyond race or gender to more subtle factors like accent, requiring companies to rigorously audit their AI tools.

Conclusion
AI’s bias problem is a serious challenge that leads to unfair treatment, especially in hiring and other important areas. This issue is highlighted by recent 2025 regulations and court cases like Mobley v. Workday, showing that biased AI can harm many people. While rules like the EU AI Act and UNESCO’s ethical guidelines offer helpful protections, enforcement is still uneven around the world. To move forward, it’s essential to make bias checks mandatory before AI systems are used, build diverse datasets, and create consistent rules across countries, possibly through a global agreement led by the United Nations. Everyone involved, from developers to companies to governments, needs to focus on designing AI that respects people’s rights and gives regulators real power to ensure fairness, so AI becomes a tool for equality rather than discrimination.

FAQs
Q1: What constitutes algorithmic bias under U.S. law?
A: It includes disparate impact where AI decisions disproportionately harm protected groups (e.g., race, age) without business necessity, as per Title VII and ADEA precedents like Mobley v. Workday.
Q2: How does the EU AI Act address AI bias?
A: By classifying employment tools as high-risk, requiring risk assessments, bias mitigation, and transparency to uphold fundamental rights.
Q3: Can companies be held liable for third-party AI biases?
A: Yes, as in Workday’s 2025 case, where vendors face joint liability for discriminatory outputs under anti-discrimination statutes.
Q4: What role does UNESCO play in global AI ethics?
A: Its Recommendation promotes non-discrimination through auditable, fair systems, adopted by 193 states to guide bias prevention.
