HOW AI IS CHANGING THE WORLD AROUND US

AUTHOR: ANSHIKA KUMARI, a student of SHRI RAMSWAROOP MEMORIAL UNIVERSITY

TO THE POINT

1. Human Identity & Psychology

• AI influences self-perception, agency, and decision-making.

2. Healthcare & Privacy

• AI improves health analytics but raises privacy concerns.

3. Labor Market & Economy

• Automates jobs, shifts skill demands, and boosts productivity.

4. Education

• Enables personalized learning and AI-assisted teaching.

5. Entrepreneurship

• Enhances decision-making with AI-powered insights.

6. Urban Planning & Environment

• AI tools monitor sustainability (e.g., Google’s Environmental Insights Explorer).

7. Finance

• AI helps manage savings, investments, and risk.

8. Small Business Tech Adoption

• Managerial digital literacy influences AI uptake in SMEs.

USE OF LEGAL JARGON

1. Algorithmic Governance & Human Autonomy

AI systems are reconfiguring the locus of agency and challenging legal definitions of personhood and accountability, particularly in the context of algorithmic decision-making and data profiling.

2. Data Privacy & Regulatory Compliance

AI-enabled health analytics necessitate robust privacy-preserving frameworks to comply with HIPAA and GDPR standards, especially in relation to biometric and geospatial data integration.

3. Labor Displacement & Regulatory Safeguards

The automation of labour markets prompts re-evaluation of labour law protections and workforce retraining policies under ILO guidelines.

4. Educational Equity & Right to Access

The deployment of AI in pedagogy implicates questions of equitable access under the right to education, particularly in marginalized communities.

5. Entrepreneurial Due Diligence & Market Intelligence

AI tools are increasingly material to business intelligence practices and compliance with fiduciary duties in cross-border commercial operations.

6. Environmental Compliance & Smart Regulation

AI tools such as Google’s Environmental Insights Explorer support techno-legal frameworks for monitoring carbon footprints in compliance with Paris Agreement objectives.

7. Financial Technologies & Regulatory Sandboxes

AI-powered fintech innovations demand legal scrutiny regarding systemic risk, KYC/AML compliance, and central bank oversight.

8. SMEs, Institutional Frameworks & Digital Governance

The adoption of AI by SMEs is shaped by national institutional arrangements and digital infrastructure laws, influencing equitable market participation.

THE PROOF

1. Algorithmic Mediation of Identity

• The article coins the term “Algorithmic Self”, defined as an identity co-constructed by feedback from AI systems.

• It argues that AI platforms (e.g., Spotify, predictive chatbots) do not merely reflect behaviour but actively construct user identity, infringing upon traditional legal notions of autonomy and authorship of self.

2. Subversion of Cognitive and Emotional Autonomy

• AI systems (e.g., mood-tracking apps, language models) are substituting introspection, which has legal and psychological consequences regarding competency, consent, and mental agency.

3. Delegation of Decision-Making

• The article presents evidence of users ceding emotional and behavioural decision-making to AI, leading to “preference reinforcement” and potential manipulation of volition—a concept relevant to data protection laws (e.g., GDPR’s profiling restrictions).

4. Ethical Framework Violations

• Raises concerns that AI-enabled narratives are shaped by opaque commercial logics and lack transparency, violating informed consent, fairness, and non-maleficence principles in legal-ethical theory.

5. Narrative Ownership and Data Sovereignty

• Critiques how algorithms (like Spotify Wrapped or Instagram “Highlights”) define identity narratives, prompting questions about who owns the digital version of “you”—a data subject rights issue under international digital law.

ABSTRACT 

Artificial Intelligence (AI) has transitioned from a passive computational tool to an active participant in shaping individual identity and psychological experience. This article introduces the concept of the “Algorithmic Self”—a digitally co-constructed identity wherein AI systems mediate introspection, self-perception, and emotional regulation. Through platforms such as personalized recommendation engines, wearable mood-trackers, and sentiment-aware chatbots, individuals increasingly experience themselves through algorithmic feedback rather than organic reflection. This shift reconfigures classical notions of autonomy, authenticity, and agency. Drawing on frameworks from surveillance capitalism and posthumanist theory, the paper critically examines how AI alters the practice of self-knowing, replacing introspective practices with data-driven narratives. The implications are both ethical and existential: as AI shapes what users see, feel, and remember, it challenges fundamental assumptions about authorship of self, volition, and cognitive sovereignty. The article calls for digital literacy, ethical design, and legal safeguards to preserve the integrity of personal identity in an age of algorithmic mediation.

CASE LAWS

1. López Ribalda v. Spain (ECHR, 2019)

• Court: European Court of Human Rights

• Issue: Use of covert video surveillance of employees in the workplace.

• Finding: The Chamber (2018) held that monitoring employees without notice violated Article 8 (right to privacy) of the ECHR; the Grand Chamber (2019) ultimately found no violation on the particular facts, but stressed that covert workplace surveillance requires proportionality and procedural safeguards.


2. CJEU – Case C-311/18: Schrems II (2020)

• Court: Court of Justice of the European Union

• Finding: Declared the EU-U.S. Privacy Shield invalid, holding that large-scale U.S. surveillance programs could infringe on data subjects’ rights and left them without effective legal remedies.


3. Facebook, Inc. Consumer Privacy User Profile Litigation (US, ongoing)

• Jurisdiction: U.S. Federal Court (MDL)

• Allegation: Facebook used AI to profile users for psychological targeting (Cambridge Analytica), violating consumer protection and privacy rights.

• Impact: Highlights how algorithmic manipulation of personal data can erode autonomous consent.

CONCLUSION

Artificial Intelligence is fundamentally reshaping the legal landscape by challenging established notions of personal autonomy, identity, and agency. Courts and policymakers are increasingly recognizing that algorithmic systems—through profiling, predictive analytics, and decision automation—can erode individual self-determination and blur the boundaries between human and machine authorship. Legal cases such as Schrems II and López Ribalda v. Spain show courts beginning to grapple with these questions. Simultaneously, psychological research on the “Algorithmic Self” reveals that AI does not merely process behaviour; it co-constructs identity, influencing how individuals think, feel, and act. These developments demand a robust legal framework that emphasizes transparency, accountability, and human-centric design. The law must evolve to ensure that while AI advances, it does not diminish the dignity, freedom, or moral agency that underpin democratic legal systems. In the age of intelligent machines, protecting the juridical self is not optional; it is imperative.

FAQ

1. How does AI affect legal autonomy?

AI can undermine autonomy when individuals are subject to automated decisions (e.g., surveillance, credit scoring) without understanding or consent. Courts emphasize the need for transparency, human oversight, and recourse mechanisms (e.g., R (Bridges) v. South Wales Police, GDPR Art. 22).

2. What is the “Algorithmic Self”?

It’s a concept describing how AI systems shape personal identity by mediating introspection and behavior—e.g., Spotify Wrapped or emotion-tracking apps. The self becomes co-authored by AI, raising concerns about authenticity and agency.

3. Are there real legal cases supporting this concern?

Yes. Cases like Schrems II, López Ribalda v. Spain, and the Facebook User Profile Litigation show how AI-driven systems can breach data rights, manipulate user behavior, or infringe on due process.

4. What rights protect people from AI overreach?

Key protections include:

• GDPR (EU) – Right to explanation, objection to profiling

• ECHR Art. 8 – Right to privacy

• Constitutional due process (US/Canada)

• EU AI Act – Prohibits certain unacceptable-risk AI practices

5. What legal reforms are underway?

• EU AI Act (2024) – First binding regulation on AI risk categories

• OECD AI Principles – Global ethical baseline

• Proposals in Canada (Bill C-27) and U.S. AI Bills

6. Does AI have legal personhood?

No. Current law treats AI as a tool, not a legal actor. But debates are emerging on liability, especially in autonomous systems (e.g., self-driving cars, predictive policing).

7. What is “algorithmic transparency”?

It refers to the ability to understand, audit, and explain how an AI system makes decisions. It is crucial for ensuring legal accountability and safeguarding rights.

8. What is the main legal risk of AI on identity?

AI may construct and reinforce digital identities through behavior prediction and profiling, which can lead to discrimination, emotional manipulation, and erosion of self-authorship.

9. What should be done to protect the self in the age of AI?

Laws must enforce AI literacy, explainability, human review, and design ethics. Courts and lawmakers must ensure AI systems support, not replace, individual autonomy and legal agency.
