Author: Aakash Rastogi, Symbiosis Law School, Nagpur
INTRODUCTION
Artificial intelligence (AI) has surged in popularity after a period of relative dormancy. It is now actively shaping our daily lives, influencing the content we see on social media, risk assessments of criminal defendants, creditworthiness decisions in financial institutions, and even route optimization in navigation apps.
These AI systems function as powerful analytical tools. Trained on vast datasets, they actively identify patterns and make predictions at speeds far exceeding human capabilities. While the potential impact of AI is undeniable – some even call it the “new electricity” – its rapid development demands a critical examination of the ethical and social concerns it raises.
This article examines India's current AI policy, given the heightened significance of the limitations associated with data-driven decision-making in that jurisdiction, and explores the inherent risks of AI-based decisions, particularly within the Indian context.
SECTOR-SPECIFIC CHALLENGES OF AI-DRIVEN DECISIONS
The development of AI in India has been hampered by a fragmented policy process. No single regulatory body, ministry, or department has been explicitly tasked with comprehending the legal implications and opportunities presented by AI. Instead, efforts have been largely ad hoc, lacking coordination or clear relationships between parallel initiatives. While individual reports and operational concerns may be addressed over time, several systemic shortcomings demand immediate attention.
At the Global Technology Summit (GTS) 2023, India’s AI strategy dominated discussions. India’s ministerial representative highlighted the necessity for policy enablers and regulatory frameworks. Industry leaders showcased a use-case-driven AI strategy, and global policymakers underscored the significance of India’s governance model as a benchmark for other nations.
Regarding national security, India should consider the implications of AI initiatives akin to the USA's Project Maven. Google's once-lauded "don't be evil" principle has morphed into a legal and ethical quagmire: originally seen as an aspirational ideal, it is now criticized as an unenforceable standard. Both employees and external parties have invoked it to judge Google's actions, as seen in employee protests against the company's involvement in Project Maven, which cited potential violations of international law. These protests ultimately led to Google's withdrawal from the project.
India can learn from the USA’s Project Maven, where ethical concerns over AI use led to Google’s withdrawal. This highlights the necessity for clear ethical standards and robust regulations in national security AI projects.
Further, as journalism and media constitute the fourth pillar of democracy, the decision to adopt automated journalism using AI must consider its impact on democratic institutions. AI has the potential to influence public opinion, as demonstrated by the Cambridge Analytica scandal surrounding the 2016 US presidential election. This incident highlighted how AI-driven data manipulation can affect election outcomes and democratic processes, underscoring the need for careful consideration and regulation to prevent misuse and ensure the integrity of democratic institutions.
Automated journalism involves using software or algorithms to automatically generate news stories, without human intervention after the initial programming. It encompasses both AI-enabled methods and semi-autonomous tools that combine human expertise with machine capabilities. For instance, the 2016 Panama Papers investigation exemplified human-machine collaboration, where the International Consortium of Investigative Journalists (ICIJ) employed OCR algorithms to analyze 11.5 million leaked documents, a task that would have been challenging without algorithmic assistance.
Algorithms are making strides in cognitive labor related to rule- and knowledge-based tasks, leading to new opportunities for expanding the scale and quality of investigations. While some of this technology fully automates tasks, freeing up time for other activities, other advances work symbiotically with human tasks. For example, they facilitate identifying entities and interpreting complex relationships between banks, lawyers, shell companies, and certificate holders, enhancing investigative efforts similar to the Panama Papers.
AI in journalism can influence democracy, as seen with Cambridge Analytica. While it enhances investigations like the Panama Papers, strict regulation is needed to protect democratic integrity.
AI has revolutionized the finance industry, enhancing efficiency, accuracy, and customer experience. However, the integration of AI into financial systems also brings forth several challenges. One of the primary challenges is ensuring regulatory compliance. Financial institutions must navigate complex regulatory landscapes while integrating AI technologies. Regulators, such as the Financial Conduct Authority (FCA) and the Securities and Exchange Commission (SEC), impose stringent guidelines to safeguard against financial misconduct. AI systems must be designed to comply with these regulations, which is often complicated due to the rapid evolution of both AI technologies and financial regulations.
The United Kingdom government, together with the FCA and the Bank of England, has published several documents outlining its approach to AI and the steps that can be taken, including five principles to guide the regulation of AI.
Principles given for the Regulation of AI in the UK:
Safety, Security, and Robustness
Appropriate Transparency and Explainability
Fairness
Accountability and Governance
Contestability and Redress
The complex algorithms used in AI in the finance sector are often difficult to understand, and the 'black box' problem is a prime example: algorithms such as deep learning neural networks often operate as black boxes, making it challenging for stakeholders to understand how decisions are made. This lack of transparency hinders risk assessment, model validation, and regulatory scrutiny, leading to concerns about algorithmic bias and unintended results. The United States was among the first countries to foresee this threat: under the Biden administration, the Consumer Financial Protection Bureau (CFPB) in particular took a more aggressive approach to oversight of banking practices, including the use of AI in lending.
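To make the fair-lending concern concrete, the sketch below computes a simple disparate impact ratio, the "four-fifths" screen often cited in US fair-lending analysis, over a model's approval outcomes. The approval counts are hypothetical and illustrative only; this is not any regulator's prescribed methodology.

```python
# Illustrative disparate-impact check on a credit model's approval outcomes.
# All numbers are hypothetical; the four-fifths (0.8) threshold is the
# conventional screening rule of thumb, not a legal determination.

def disparate_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of the lower group's approval rate to the higher group's."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes for two demographic groups
ratio = disparate_impact_ratio(approved_a=300, total_a=500,   # 60% approved
                               approved_b=180, total_b=500)   # 36% approved

print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths threshold
    print("Potential adverse impact: review the model for bias.")
```

A ratio below 0.8 does not prove discrimination, but it is the kind of automated screen a compliance team could run routinely over a lending model's outputs.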
For AI to work effectively, it needs vast amounts of users' personal financial data, which often includes sensitive information about individuals and businesses. Ensuring data privacy and security is paramount, as breaches can lead to significant financial losses and reputational damage. Financial institutions must implement robust cybersecurity measures and adhere to data protection regulations, such as the General Data Protection Regulation (GDPR) in Europe, to mitigate these risks.
Moreover, algorithmic bias in this sector is a critical issue that can lead to unfair and discriminatory outcomes. AI systems trained on historical data may inherit the biases embedded in it; if not addressed, these biases can be perpetuated and even amplified. For example, biased credit scoring algorithms can disproportionately impact certain demographic groups, leading to unfair lending practices. Financial institutions must implement measures to detect and mitigate algorithmic bias to ensure fair treatment of all customers.
To resolve these issues and make AI in finance more ethical, financial institutions must ensure that their AI systems are transparent and that their decision-making processes can be audited. To do so, they should implement fairness and bias-mitigation techniques. They can also adopt explainable AI (XAI) techniques, which make the decision-making processes of AI systems more understandable to humans. This helps build trust and allows stakeholders to assess whether AI decisions are fair and justifiable.
CONCLUSION
Given the rapid and opaque nature of AI development, preemptive and multidisciplinary deliberation is essential. Policies should be informed by diverse perspectives to ensure that AI's profound and often irreversible impacts are responsibly managed. The tradition of deploying technology first and considering its effects later will not suffice for AI.
FAQS
1. What are the issues India faces in controlling AI systems?
India's regulation of AI is fragmented, lacking centralized authority, coordination, and a focus on ethics. This leaves risk areas such as algorithmic bias, data privacy breaches, and related ethical concerns inadequately addressed.
2. What risks does AI entail for journalism?
AI enables automated reporting and enhances investigations, as in the Panama Papers. Yet it can also undermine democratic values through manipulation of public opinion, as the Cambridge Analytica scandal showed. This creates a critical need for strict regulatory frameworks that protect democratic integrity.
3. What are the principles guiding AI regulation in finance in the UK?
The UK has established five key principles that should govern AI regulation in finance:
Safety, Security, and Robustness
Appropriate Transparency and Explainability
Fairness
Accountability and Governance
Contestability and Redress
These principles aim to ensure AI systems are safe, transparent, and fair, while minimizing algorithmic bias, "black box" decision-making, and breaches of data privacy.
4. How can financial institutions address the issue of algorithmic bias of AI?
Financial institutions can address algorithmic bias through transparency, training models on representative and bias-checked datasets, and auditing AI decisions regularly to ensure fair treatment and adherence to ethical and regulatory standards.
5. What must India focus upon while formulating an AI policy framework?
India must integrate ethical, legal, and technical insights into its AI policies, focusing on data privacy, bias, and regulatory compliance. Learning from global examples, it should adopt a proactive, multidisciplinary approach.
