Artificial Intelligence and Human Rights: A Global Perspective

Author: Supriya, KIIT University, Bhubaneswar

Introduction

Artificial Intelligence (AI) has revolutionized various industries and continues to shape economies, governance and society at large. However, as AI technologies develop, they raise critical concerns about human rights on the international stage. AI can enhance human life, increase efficiency and solve complex problems, but it also poses risks to privacy, freedom, equality and democracy.

The impact of AI on human rights spans numerous areas: privacy and data protection, algorithmic discrimination, freedom of speech and employment rights. International organizations, human rights groups and governments are recognizing the need to create rules and regulations to address concerns about AI. The challenge is to develop AI systems that support human rights while reducing their harmful effects. This article looks at how AI affects human rights and the role of international organizations in ensuring AI is used responsibly.

1). AI and the Right to Privacy

AI systems, especially those that use big data and machine learning, usually depend on large amounts of personal information. This raises concerns regarding the right to privacy, as enshrined in Article 12 of the Universal Declaration of Human Rights (UDHR). AI technologies like facial recognition, surveillance cameras and predictive tools can be intrusive, allowing large-scale monitoring and the gathering of private, sensitive information.

a. Facial recognition technology and surveillance

Facial recognition is one of the most talked-about AI technologies when it comes to privacy. Governments and companies increasingly use it for identification, security and policing. However, this technology can invade people’s privacy by letting authorities track them without their permission or knowledge. In countries with weaker democratic protections, facial recognition can be used to silence protests or unfairly target minority groups.

b. Data privacy and algorithmic processing

AI systems run on data, often collected from people without their knowledge or consent. Predictive algorithms that analyse people’s behaviour, preferences and personal details can turn privacy into a product. For example, social media platforms use AI to gather personal data to show targeted ads or even shape political opinions. This raises concerns about how private data is collected and used.

The GDPR (General Data Protection Regulation) sets a global example for data protection by giving people control over how their data is used. However, many countries, especially developing ones, don’t have similar rules, leaving people vulnerable to privacy issues. The United Nations Special Rapporteur on Privacy has called for global standards to protect personal data in this age of AI.

2). Algorithmic Bias and Discrimination

AI systems, while very useful, can be biased depending on the data they are trained with. A major issue with AI is algorithmic bias, especially in areas that affect people’s rights, like criminal justice, hiring and healthcare. These biases can worsen existing social inequalities.

a. Bias in the criminal justice system

AI tools are increasingly used in criminal justice, for example to predict crime and inform sentencing. However, studies show that these tools can be unfair to marginalized groups, causing issues such as racial profiling. For example, COMPAS, an AI tool used in the U.S. to predict whether a person will reoffend, has been found to exhibit racial bias against African Americans.
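To see how this kind of bias arises mechanically, consider a deliberately simplified toy model (the data and groups below are entirely hypothetical, not drawn from COMPAS). If one group has historically been policed and recorded more heavily, a model trained on those records will score that group as "riskier" even when underlying behaviour is the same:

```python
# Toy illustration with invented data: a model trained on skewed
# historical records reproduces the skew in its predictions.

def train_rate_model(records):
    """Learn a per-group 're-arrest' rate from (group, rearrested) records."""
    counts, positives = {}, {}
    for group, rearrested in records:
        counts[group] = counts.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(rearrested)
    return {g: positives[g] / counts[g] for g in counts}

# Hypothetical history shaped by over-policing of group "B": identical
# underlying behaviour, but group B's incidents were recorded more often.
history = ([("A", True)] * 10 + [("A", False)] * 90
           + [("B", True)] * 30 + [("B", False)] * 70)

model = train_rate_model(history)
print(model)  # {'A': 0.1, 'B': 0.3} -- group B is scored as "riskier"
```

The model is statistically "accurate" with respect to its training data, yet discriminatory in effect: the bias lives in the data, not in any explicit rule, which is why transparency and auditing requirements are central to the regulatory responses discussed below.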

Bias in criminal justice systems violates the right to equality and can affect the fairness of trials, as stated in the Universal Declaration of Human Rights (UDHR). Because of this, international organizations like the Council of Europe are calling for AI systems to be transparent and accountable, ensuring they respect human rights.

b. Discrimination in hiring and employment

Companies are using AI for recruiting, but these systems have been shown to reflect biases present in the data they are trained on, such as gender and racial biases. For example, an AI hiring tool developed by Amazon was found to be unfair toward female applicants because it learned from data in a male-dominated industry.

Such discrimination can limit a person’s right to work (Article 23, UDHR) and prevent equal access to job opportunities. Human rights groups are calling for rules to make sure AI in hiring is fair, without discrimination, and involves human oversight in decision-making.

3). AI and Freedom of Expression

AI technologies are increasingly being used to monitor and manage online content, which raises concerns about freedom of expression. Social media platforms use algorithms to detect harmful content, remove hate speech, and flag false information. However, this can sometimes lead to unintentional censorship. AI’s role in content moderation has grown, especially during major political events like elections and protests, where platforms are under pressure to stop the spread of disinformation.

a. AI in content moderation

AI helps identify harmful content online, but it can also mistakenly block legitimate speech. For example, during the COVID-19 pandemic, social media platforms used AI to remove posts related to the virus. Unfortunately, many important posts with useful information were wrongly removed. This raised concerns about people’s right to access information (Article 19, Universal Declaration of Human Rights).

Additionally, platforms like Facebook and Twitter use AI to find and remove hate speech. However, these algorithms sometimes lack the ability to understand cultural and situational context, leading to the removal of content that should not have been censored. Because of this, experts believe that AI moderation should be combined with human oversight to protect freedom of expression.
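A minimal sketch can show why context-blind moderation misfires. The filter below (the banned terms and example posts are hypothetical, and real platform systems are far more sophisticated) flags any post containing a listed word, so a health warning that merely mentions a banned term is removed alongside genuinely harmful content:

```python
# Toy keyword-based moderator: it matches words, not meaning, so
# counter-speech and health information get flagged too.

BANNED = {"cure", "hoax"}  # hypothetical flagged terms

def flag(post: str) -> bool:
    """Return True if the post contains any banned term (context-blind)."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BANNED)

# Legitimate health advice is flagged because it quotes a banned word:
print(flag("Drinking bleach is NOT a cure, please see a doctor"))  # True
# while an unrelated post passes:
print(flag("Vaccines are safe and effective"))  # False
```

The false positive here is exactly the failure mode described above: the algorithm cannot distinguish promoting misinformation from debunking it, which is the core argument for keeping humans in the moderation loop.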

b. Political manipulation and disinformation

AI technologies like bots and deepfakes have created worries about the spread of fake information, especially during elections or political events. These technologies can be used to influence voters, change public opinion, and even harm democratic systems. For example, AI-generated fake news has been shared widely on social media, often with the goal of creating political instability.

In response, the European Commission has taken steps to limit the role of AI in spreading false information. Laws like the Digital Services Act (DSA) and the AI Act are designed to regulate online platforms and ensure they are transparent about how their algorithms make decisions, especially those that impact democracy.

4). AI and Labour Rights

The increasing use of AI and automation in the workplace has sparked worries about job loss and how the future of work will look. While AI can improve efficiency and create new jobs, it also threatens labour rights, especially for low-skilled workers. Discussions about AI’s impact on jobs often focus on the right to work and fair wages (Article 23, Universal Declaration of Human Rights).

a. Automation and Job Loss

AI is replacing human workers in industries like manufacturing and services. For instance, robots and AI machines are now doing tasks once performed by factory workers. This has raised concerns about whether governments and companies are ready to handle the social and economic challenges that come with AI-driven automation.

Some people believe AI will create new job opportunities, but many experts argue that without retraining and upskilling programs, many workers could be left behind. The International Labour Organization (ILO) has stressed the need for a global system to protect workers’ rights in a world where AI and automation are taking over more jobs.

b. Gig Economy and AI

The gig economy, which includes platforms like Uber and Deliveroo, is largely driven by AI. These platforms use AI algorithms to manage gig workers, deciding things like pay rates, available jobs, and driving routes. However, gig workers are usually classified as independent contractors, meaning they don’t have access to key protections like health benefits, paid time off, or job security.

Because of this, labour organizations and human rights groups are calling for changes to ensure that AI-powered gig economy platforms treat workers fairly and offer better working conditions.

5). International Initiatives to Regulate AI

As AI continues to advance, international organizations are working to create rules and guidelines that encourage responsible AI use while protecting human rights.

a. United Nations (UN) and Human Rights

The United Nations (UN) has recognized the growing need to address how AI impacts human rights. The UN Special Rapporteur on Privacy has stressed the importance of safeguarding data privacy in AI systems, as personal data can easily be misused. The UN Human Rights Council has also discussed the potential harm AI could cause to rights such as freedom of expression and non-discrimination.

In 2021, UNESCO (the United Nations Educational, Scientific and Cultural Organization) adopted a significant Recommendation on the Ethics of Artificial Intelligence. This global framework promotes the ethical development of AI, focusing on key principles like transparency, accountability and human rights. It sets guidelines for governments and companies to ensure AI technologies are used in a way that respects human dignity.

b. European Union and the AI Act

The European Union (EU) has been a leader in creating laws to regulate AI in a way that upholds human rights. The AI Act, which is still being developed, aims to create a solid legal framework for AI use within the EU. The Act classifies AI systems into different levels of risk, ranging from low to high. For example, AI applications in law enforcement or healthcare, which can directly affect people’s rights and freedoms, will face stricter rules and oversight to make sure they don’t violate human rights laws.

This initiative reflects the EU’s broader efforts to ensure AI technologies are used responsibly and that they support rather than harm society.

Conclusion

International efforts to regulate AI show that people are realizing how much AI affects human rights. However, these efforts are still in the early stages. Since AI is used worldwide, countries and organizations need to cooperate across borders. They must work together to create common rules that both protect human rights and allow for technological progress.

AI has the power to greatly improve society, but without strong rules that focus on human rights, it could make inequality worse, invade privacy, and even threaten democratic values. By making sure AI is developed with ethics, transparency, and accountability in mind, the global community can ensure AI benefits people rather than harming important values.

Frequently asked questions

  • How does AI impact privacy?

AI impacts privacy by collecting and analyzing vast amounts of personal data, often without people’s knowledge. It can track online behaviour, preferences, and locations, raising concerns about how this data is used. AI technologies like facial recognition and surveillance further threaten individual privacy in public spaces.

  • Can AI systems be biased, and how does this affect human rights?

Yes, AI systems can be biased if trained on data that reflects social prejudices. This bias can lead to discrimination in areas like hiring, criminal justice, and healthcare, violating human rights such as equality and fair treatment. Biased AI can worsen existing social inequalities.

  • What is the relationship between AI and labour rights? 

AI and automation are changing the job market, with many fearing job losses, especially in low-skilled sectors. There is a growing call for reskilling and upskilling programs to ensure that workers’ rights to employment and fair wages are protected.
