AI and Machine Learning’s Impact on Girls’ Safety

Author: Gauri Singh (Lloyd Law College)

To the Point
Girls’ safety has changed dramatically as a result of artificial intelligence (AI) and machine learning (ML), which have made predictive and preventive solutions possible. AI-powered safety applications use ML to rate areas on illumination, crowd density, crime data, and CCTV presence, helping girls choose safer routes and steer clear of dangerous spots. To improve commuter safety, ride-hailing services incorporate ML algorithms that detect unusual routes, identify sudden stops, and automatically notify emergency contacts. AI also moderates online platforms by spotting sexually inappropriate, abusive, or harassing content, reducing cyberbullying and the risks girls face online. Law enforcement agencies employ AI to identify crime hotspots and implement proactive patrolling, while AI-based self-defence training applications customise modules to build girls’ readiness and confidence in real-life scenarios.
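To make the route-scoring idea concrete, the sketch below combines the factors mentioned above into a single safety score per route segment. It is a minimal illustration only: the feature weights, the Segment fields, and the maximin route choice are assumptions for demonstration, not any real app’s method; a deployed system would learn such weights from labelled incident data.

```python
# Hypothetical sketch of a route-safety scorer: combines illumination,
# crowd density, crime history, and CCTV coverage into a single score.
# Feature names and weights are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Segment:
    illumination: float   # 0 (dark) .. 1 (well lit)
    crowd_density: float  # 0 (empty) .. 1 (busy)
    crime_rate: float     # normalised incidents per month, 0 .. 1
    cctv_coverage: float  # fraction of segment covered, 0 .. 1

WEIGHTS = {"illumination": 0.3, "crowd_density": 0.2,
           "crime_rate": -0.4, "cctv_coverage": 0.1}

def score_segment(s: Segment) -> float:
    """Higher score = safer segment (crime rate counts against safety)."""
    return (WEIGHTS["illumination"] * s.illumination
            + WEIGHTS["crowd_density"] * s.crowd_density
            + WEIGHTS["crime_rate"] * s.crime_rate
            + WEIGHTS["cctv_coverage"] * s.cctv_coverage)

def safest_route(routes: dict[str, list[Segment]]) -> str:
    """Pick the route whose worst segment is least unsafe (maximin)."""
    return max(routes, key=lambda name: min(map(score_segment, routes[name])))

if __name__ == "__main__":
    routes = {
        "main_road": [Segment(0.9, 0.8, 0.2, 0.7), Segment(0.8, 0.6, 0.3, 0.5)],
        "shortcut":  [Segment(0.3, 0.1, 0.7, 0.0), Segment(0.6, 0.2, 0.5, 0.1)],
    }
    print(safest_route(routes))  # -> "main_road"
```

Scoring the worst segment rather than the route average reflects the intuition that a single dark, high-crime stretch can make an otherwise pleasant route unsafe.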
The adverse effects, however, are just as noteworthy. Girls, particularly those from minority communities, are more likely to be misidentified by AI facial recognition systems because of gender and racial biases, which can lead to improper surveillance or criminalisation. Gender-based violence persists online because the ML algorithms used for content moderation frequently miss subtle forms of abuse, such as sexist humour or indirect threats. AI-powered surveillance may also violate girls’ privacy and autonomy, particularly in patriarchal societies where monitoring can be abused for control rather than safety. If not properly secured, the data gathered by AI-enabled safety apps exposes girls to exploitation, targeted advertising, and surveillance. For these technologies to protect girls rather than harm them, gender sensitivity, ethical use, and data privacy must be given top priority in the design, implementation, and regulation of AI and ML.
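The moderation gap described above is easy to see in miniature: a naive keyword filter catches only exact matches, while indirect or coded abuse passes untouched. The blocklist tokens and messages below are illustrative assumptions, not any platform’s real filter.

```python
# Minimal sketch of why naive keyword-based moderation misses subtle abuse.
# The blocklist and example messages are illustrative placeholders only.
BLOCKLIST = {"slur1", "slur2", "threat"}  # placeholder tokens

def naive_flag(message: str) -> bool:
    """Flag a message only if it contains an exact blocklisted token."""
    return any(tok in message.lower().split() for tok in BLOCKLIST)

messages = [
    "you are a threat",                        # caught: exact token match
    "girls like you should know their place",  # missed: sexist, no banned token
    "w0uld be a shame if we knew your route",  # missed: coded/indirect threat
]
for m in messages:
    print(naive_flag(m), "-", m)
```

Modern moderation systems use learned classifiers rather than keyword lists, but the same failure mode recurs whenever the training data lacks examples of indirect, coded, or culturally specific abuse.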

Abstract
Artificial intelligence (AI) and machine learning (ML) have reshaped the landscape of girls’ safety, offering potential solutions while raising significant ethical questions. AI-powered technologies improve safety through predictive policing, behavioural analysis, and surveillance systems that help detect, prevent, and respond to crimes such as human trafficking, sexual harassment, and kidnapping. Machine learning algorithms in safety apps offer real-time tracking, emergency alerts, and intelligent routing around dangerous areas, empowering girls to move freely and safely through public spaces. To create safer online spaces, social media companies use AI-based moderation to filter offensive content, identify cyberbullying, and ban perpetrators of gender-based violence. Girls who have experienced abuse or trauma can also obtain mental health support through AI chatbots and virtual counselling tools, which lowers stigma and barriers to seeking help.
However, AI and ML technologies raise serious concerns about gender bias, surveillance ethics, and data privacy. Facial recognition algorithms have demonstrated higher error rates for women, particularly those from marginalised communities, which can result in misidentification, wrongful targeting, and rights violations. Because the machine learning models used in content moderation frequently lack gender and cultural sensitivity, they fail to identify subtle or indirect forms of abuse, allowing harm to continue. Furthermore, rather than safeguarding girls, AI-based monitoring can be abused to track their movements under patriarchal control, endangering their independence and self-determination. If the data collected by AI-powered safety apps is not adequately protected, it can be used for targeted harassment, stalking, or commercial abuse, putting girls at further risk. Consequently, while AI and ML have enormous potential to improve girls’ safety, their implementation calls for gender-sensitive approaches, inclusive datasets, stringent privacy laws, and ethical frameworks to guarantee that these technologies protect rather than exclude them.

Use of Legal Jargon
Under constitutional and legal frameworks, the use of AI and ML technologies for the protection of girls raises important questions of data privacy, informed consent, proportionality, and non-discrimination. AI-powered surveillance and facial recognition systems frequently result in arbitrary profiling and violate the privacy rights protected by Article 21 of the Indian Constitution, while the lack of algorithmic transparency breaches the principle of procedural fairness. When biased AI models disproportionately misidentify or target girls from minority groups, that amounts to indirect discrimination and offends the doctrine of reasonable classification under Article 14. Furthermore, failure to adopt privacy by design and the absence of data protection impact assessments (DPIAs) violate the new requirements under the Digital Personal Data Protection Act, 2023. Deploying behavioural analysis and predictive policing without due process safeguards raises concerns of abuse of power, mala fide exercise of authority, and breach of natural justice. These technologies must also guarantee safety without producing collateral harm, in accordance with the AI ethics principles of beneficence and non-maleficence. To ensure that AI and ML applications adhere to constitutional protections and preserve the substantive equality, dignity, and autonomy of girls, regulatory frameworks must mandate algorithmic accountability, gender-sensitive impact assessments, data minimisation, and purpose limitation.
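As a rough illustration of data minimisation and purpose limitation in practice, the sketch below strips a safety-app record down to only the fields an emergency alert actually needs before anything is stored or shared. The record fields and the minimise_for_alert helper are hypothetical, not drawn from the Act’s text or from any specific app.

```python
# Hypothetical sketch of data minimisation and purpose limitation:
# only the fields strictly needed for an emergency alert are retained.
# Field names and the helper below are illustrative assumptions.
from datetime import datetime, timezone

FULL_RECORD = {
    "user_id": "u-1029",
    "name": "<full name>",            # identity not needed to route an alert
    "phone_contacts": ["<contact>"],  # collected for another purpose; must not leak
    "lat": 28.6139,
    "lon": 77.2090,
    "battery": 0.41,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Purpose: emergency alert only. Anything outside this set is excluded.
ALERT_FIELDS = {"user_id", "lat", "lon", "timestamp"}

def minimise_for_alert(record: dict) -> dict:
    """Return a copy holding only the fields the alert purpose requires."""
    return {k: v for k, v in record.items() if k in ALERT_FIELDS}

alert_payload = minimise_for_alert(FULL_RECORD)
assert "phone_contacts" not in alert_payload  # purpose limitation enforced
print(alert_payload)
```

The legal principle and the code mirror each other: data collected for one purpose (contact syncing) must not travel with a payload serving a different purpose (an alert), regardless of how convenient the extra fields might be.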

The Proof
Empirical research and practical applications demonstrate how AI and machine learning affect girls’ safety. For example, according to a report by the International Telecommunication Union (ITU), a UN agency, AI-based predictive policing tools have helped law enforcement in countries such as Kenya and India locate missing children and trafficking routes more quickly, demonstrating their usefulness in combating crimes against girls. Under Microsoft’s AI for Good programmes, facial recognition and behavioural analysis systems have reportedly identified suspicious activity around school zones, averting possible harassment incidents. In India, safety apps such as “Raksha” and “Himmat Plus” incorporate AI-enabled location sharing and emergency alerts, which have been shown to speed up police response times in distress situations involving girls. These practical applications show how AI and ML strengthen safety infrastructures, give girls greater confidence in navigating public spaces, and provide law enforcement with effective, data-driven support.
However, a number of studies demonstrate that, because AI and ML systems lack ethical safeguards and carry inherent biases, they also reinforce and amplify girls’ vulnerabilities. MIT Media Lab research documented systemic algorithmic discrimination, showing that facial recognition algorithms from large tech corporations had error rates as high as 34% for darker-skinned women but less than 1% for lighter-skinned men. Human Rights Watch reports have documented the abuse of AI-powered surveillance in countries such as China, where excessive monitoring of girls’ movements violates their freedom of movement and autonomy. Data breaches from safety apps such as Safe City have exposed girls’ private location and identity information, demonstrating that weak data protection puts users at risk rather than shielding them. Furthermore, as a 2022 Amnesty International study demonstrated, AI-based content moderation tools frequently miss subtle sexist threats or coded insults, allowing gender-based violence to continue online. This evidence shows that although AI and ML can significantly improve girls’ safety, their current deployment without gender-sensitive design, robust data privacy regulations, and algorithmic accountability frameworks poses serious risks that require immediate attention.
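The disparity reported by the MIT Media Lab study is the kind of gap a simple per-group error audit can surface. The sketch below is a minimal illustration, assuming predictions and ground-truth labels tagged with a demographic group; it is not the study’s actual methodology, and the toy data is invented.

```python
# Minimal sketch of a per-group bias audit for a classifier: compute the
# error rate separately for each demographic group. The groups, labels,
# and toy data below are invented for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data in (group, true, predicted) form
sample = [
    ("darker_female", "F", "M"), ("darker_female", "F", "F"),
    ("darker_female", "F", "M"), ("lighter_male", "M", "M"),
    ("lighter_male", "M", "M"),  ("lighter_male", "M", "M"),
]
for group, rate in error_rates_by_group(sample).items():
    print(f"{group}: {rate:.0%} error rate")
# A large gap between groups is the red flag auditors look for
# before a system is cleared for deployment.
```

Aggregate accuracy can look excellent while one group bears nearly all the errors, which is why audits must always disaggregate by group rather than report a single headline figure.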

Case laws
While there are no Supreme Court rulings in India that specifically address AI and girls’ safety, constitutional jurisprudence provides the foundation for governing the use of AI to safeguard girls’ rights. In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), the Supreme Court held that the right to privacy, including informational privacy and decisional autonomy, is a fundamental right under Article 21. This ruling matters for AI-driven surveillance systems and safety apps that gather girls’ personal information without robust consent or privacy protections. Similarly, in Anuradha Bhasin v. Union of India (2020), the Court stressed proportionality and necessity in restrictions affecting fundamental rights, suggesting that AI surveillance and data processing must pass the tests of legality, necessity, and proportionality to avoid arbitrary violations of girls’ liberties. In Shreya Singhal v. Union of India (2015), the Court struck down Section 66A of the IT Act as an overbroad restriction on freedom of speech, reaffirming that AI-based content moderation cannot operate arbitrarily or opaquely to unjustly censor girls’ online expression.
In Carpenter v. United States (2018), a decision directly relevant to privacy concerns around AI-based tracking apps intended for girls’ safety, the US Supreme Court ruled that obtaining historical cell-site location data constitutes a search under the Fourth Amendment and requires a warrant. The lawsuit in ACLU v. Clearview AI (2020) argued that Clearview AI’s facial recognition and scraping practices violated biometric privacy laws, establishing that data protection rules must be followed when AI facial recognition is used for public safety. European case law, notably Digital Rights Ireland Ltd v. Minister for Communications (2014), declared indiscriminate data retention incompatible with fundamental rights, holding that AI-era data gathering must satisfy necessity and proportionality. Together, these cases demonstrate that, despite their benefits, AI applications for girls’ safety must be regulated to safeguard privacy, gender equality, and constitutional rights, which calls for judicial review and statutory oversight to ensure ethical implementation.

Conclusion
AI and ML offer powerful tools for protecting girls, yet their use raises several moral, legal, and societal concerns. Embedded racial and gender biases in algorithms reinforce prejudice, while unregulated surveillance systems endanger privacy and liberty, particularly in patriarchal societies. Data breaches and poor content filtering put girls at even greater risk instead of protecting them. To ensure that new technologies do not exacerbate pre-existing vulnerabilities, it is crucial to incorporate gender-sensitive AI design, robust data protection frameworks, algorithmic transparency, and stringent regulatory oversight. Only then can the potential of AI and ML as instruments for empowering and protecting girls with equality, freedom, and dignity be fully realised.


FAQs
Q1. How does AI enhance the safety of girls in public areas?
Artificial intelligence (AI) enhances girls’ safety through real-time surveillance, facial recognition, behavioural analysis for early threat detection, and safety apps that offer location monitoring, emergency notifications, and predictive safe routing, thereby lowering the risks of harassment, kidnapping, and human trafficking.

Q2. What dangers can employing AI pose to the safety of girls?
Major risks include the use of surveillance to control rather than protect girls, privacy breaches caused by data leaks, racial and gender bias in AI models that leads to misidentification, and the failure of content moderation algorithms to detect subtle gender-based abuse.

Q3. Does India have any legislation governing the safe use of AI?
India currently has no AI-specific legislation; however, the use of AI is governed by sectoral IT and cyber laws, the constitutional privacy protections recognised in Puttaswamy, and the Digital Personal Data Protection Act, 2023, all of which demand ethical, privacy-compliant deployment.

Q4. Can girls’ data be misused by AI-based safety apps?
Indeed. If data is not safeguarded by encryption, purpose limitation, and consent frameworks, it can be misused for surveillance, targeted marketing, or stalking without the subject’s knowledge.

Q5. How can we make sure AI properly protects girls?
The development of AI technologies requires gender-sensitive datasets, stringent data privacy and protection regulations, algorithmic accountability frameworks, frequent bias audits, and transparent usage guidelines that put girls’ autonomy and dignity first.
