The influence of big tech companies on political discourse


Author: Sahil Shukla, Chandigarh University

To The Point


The emergence of the internet and the subsequent growth of large technology corporations have permanently changed the landscape of political discourse. Platforms such as Meta (Facebook, Instagram, WhatsApp), Google (Search, YouTube), X (formerly Twitter), and TikTok have displaced traditional media outlets as the main sources of political information, discussion, and mobilization, assuming much of the influence those outlets once held. This shift has made information more accessible and enabled previously unheard-of levels of civic participation. At the same time, however, it has concentrated enormous power in the hands of a small number of private companies, raising serious concerns about their role as arbiters of free speech and their influence on democratic processes. The sheer size of their user bases gives these companies an unmatched capacity to shape narratives, sway public opinion, and even affect election results. This influence is not merely passive; it is actively exercised through complex algorithms, data-driven targeting, and frequently opaque content moderation guidelines. Despite its apparent openness, the digital agora is highly mediated, with significant ramifications for the integrity and health of political discourse.
Big tech shapes political discourse through several key mechanisms, each of which raises its own challenges:
Content Moderation and Censorship: The Arbiters of Speech
Large tech platforms are not impartial conduits of information. They actively engage in content moderation, determining what speech is acceptable and what is not. This authority to delete, demote, or amplify content directly affects the visibility and reach of political messages. Although the stated goal of these policies is to curb hate speech, disinformation, and incitement to violence, they are frequently criticized for their lack of transparency, inconsistency, and potential for bias. The legal environment surrounding content moderation is complicated, particularly in the United States. Under Section 230 of the Communications Decency Act of 1996, online platforms enjoy broad immunity from liability for content created by third parties. This provision, often cited as a pillar of the open internet, also gives platforms wide discretion to moderate content without fear of civil liability. Supporters maintain that such immunity is necessary to promote free expression online, while critics argue that it discourages rigorous and even-handed moderation. The ongoing debate over Section 230 highlights the tension between shielding platforms and holding them answerable for the content they host.
Algorithmic Amplification and Filter Bubbles: Shaping Perception
To maximize user engagement, big tech companies’ algorithms prioritize content that is controversial, emotionally charged, or consistent with a user’s preexisting beliefs. Users are thus largely exposed to information that reinforces their own opinions, confined in “filter bubbles” or “echo chambers” that deepen polarization and erode the capacity for thoughtful political discourse. The opacity of these algorithms makes their exact influence on political discourse difficult to measure, but because sensational or false content often generates higher engagement, there is evidence that they aid the spread of misinformation and disinformation. By selectively amplifying some narratives over others, they can subtly but significantly shape public perception and political outcomes.
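To make the mechanism concrete, the following sketch shows, in simplified Python, how an engagement-maximizing ranking rule can systematically favor emotionally charged posts and posts aligned with a user’s existing views. The scoring weights, field names, and data are purely hypothetical illustrations, not any real platform’s ranking system.

```python
# Illustrative sketch only: hypothetical weights and fields,
# not any real platform's ranking system.

def engagement_score(post, user):
    """Score a post for a user by predicted engagement.

    Engagement-optimized feeds tend to reward two things:
    emotional intensity (outrage, sensationalism) and agreement
    with the user's existing views, which together produce
    amplification and filter bubbles.
    """
    emotional_intensity = post["emotional_intensity"]       # 0.0 to 1.0
    alignment = 1.0 - abs(post["stance"] - user["stance"])   # closer stance => higher score
    predicted_clicks = post["historical_click_rate"]
    # Hypothetical weighting: controversy and agreement dominate;
    # accuracy is not a factor in the score at all.
    return 0.5 * emotional_intensity + 0.3 * alignment + 0.2 * predicted_clicks


def build_feed(posts, user, limit=10):
    """Return the top posts for a user, sorted purely by predicted engagement."""
    return sorted(posts, key=lambda p: engagement_score(p, user), reverse=True)[:limit]


if __name__ == "__main__":
    user = {"stance": 0.9}  # strongly aligned with one political position
    posts = [
        {"id": "measured-report", "stance": 0.5, "emotional_intensity": 0.2, "historical_click_rate": 0.3},
        {"id": "outrage-post",    "stance": 0.9, "emotional_intensity": 0.9, "historical_click_rate": 0.6},
        {"id": "opposing-view",   "stance": 0.1, "emotional_intensity": 0.3, "historical_click_rate": 0.4},
    ]
    for post in build_feed(posts, user, limit=3):
        print(post["id"], round(engagement_score(post, user), 2))
    # The emotionally charged, stance-aligned post ranks first and the
    # opposing view last, illustrating how such a rule narrows what a user sees.
```

Under these assumed weights, nothing in the score rewards accuracy or viewpoint diversity, which is precisely the concern raised above.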


Use Of Legal Jargon
Intermediary Liability: The legal accountability of online platforms for content posted by their users.
Safe Harbor: A legal provision that, under specific conditions, shields a party from liability. Section 230 grants platforms such a “safe harbor.”
Censorship: The suppression or outright prohibition of content (books, films, news, and so on) deemed offensive, politically unacceptable, or a security risk. In the big tech context, this takes the form of platform-led content removal.
Freedom of Speech and Expression: A fundamental human right, frequently protected by constitutional law (for example, Article 19(1)(a) of the Indian Constitution and the First Amendment of the U.S. Constitution), allowing people to express their thoughts and beliefs without fear of government reprisal.
Algorithmic Bias: Systematic, reproducible errors in a computer system’s output that produce unfair results, such as elevating some political viewpoints or devaluing others.
Data Fiduciary: An entity that determines how and why personal data is processed. Large tech companies are treated as data fiduciaries under numerous privacy laws, such as India’s proposed Digital Personal Data Protection Bill.
Right to Informational Self-Determination: An individual’s right to decide how their personal data is gathered, used, and shared.
Data Profiling: The automated processing of personal data to evaluate specific characteristics of an individual.
Surveillance Capitalism: A term coined by Shoshana Zuboff for an economic system that treats human experience as free raw material for covert commercial practices of extraction, prediction, and sale.
Informed Consent: A legal and ethical principle requiring that individuals be fully informed about, and agree to, the collection and use of their personal data.


The Proof
A growing body of evidence supports these theoretical concerns about the impact of big tech:
Electoral Interference: Numerous reports, including those concerning the 2016 U.S. presidential election and the Brexit referendum, have shown how foreign actors used social media platforms to disseminate misinformation, sow discord, and influence voter behavior. The speed and scale at which such campaigns can operate on these platforms pose a serious threat to the integrity of democracy.
Extremism and Polarization: Research shows a link between heavy social media use and heightened political polarization. Extreme opinions amplified by algorithms can radicalize individuals and erode the kind of consensus that is essential for productive political discourse.
Suppression of Dissent: In some authoritarian regimes, governments have pressured big tech companies to remove or censor content, stifling legitimate political dissent. This demonstrates how easily these platforms can be turned into instruments of state control.
Impact on Traditional Media: Big tech’s financial dominance over news distribution has hollowed out traditional journalism, producing news deserts and a decline in investigative reporting, both of which undermine essential elements of a robust political discourse.


Abstract
This article examines the significant and growing impact of major technology companies on political discourse. It argues that the enormous power of these organizations, derived from their command of data aggregation, digital infrastructure, and algorithmic curation, radically alters how political ideas are created, shared, and heard. We examine the mechanisms underlying this influence, including content moderation policies, algorithmic amplification, targeted advertising, and data privacy practices. The discussion covers pertinent legal frameworks and well-known case law that highlight the difficulty of balancing free speech, platform responsibility, and democratic integrity in the digital age. The article concludes by emphasizing the urgent need for robust regulatory intervention and greater digital literacy to reduce the scope for manipulation and safeguard the health of democratic discourse.


Case Laws
Shreya Singhal v. Union of India (2015): Although not directly concerned with big tech’s platform policies, this landmark Indian Supreme Court case struck down Section 66A of the Information Technology Act, 2000, which criminalized “offensive” online content. The decision reaffirmed that restrictions on online speech must be narrowly tailored and grounded in law, and it influenced how platforms are expected to moderate content in India. It also underscored the importance of the proportionality principle when limiting fundamental rights.
Data Privacy Laws (such as the GDPR, CCPA, and India’s DPDP Bill): The General Data Protection Regulation (GDPR) in the European Union, the California Consumer Privacy Act (CCPA) in the United States, and the proposed Digital Personal Data Protection Bill in India all aim to give individuals greater control over their personal data. Although their primary focus is privacy, these laws indirectly affect political targeting and microtargeting because they restrict the processing of data for political purposes and require explicit consent for data collection and use. The EU’s ongoing efforts to strengthen GDPR enforcement against large tech firms exemplify the global push for stronger data governance.
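As a rough illustration of what such consent requirements mean in practice, the sketch below models a consent check that must pass before personal data may be used for political ad targeting. The class names, purpose labels, and records are hypothetical and greatly simplified; real compliance under the GDPR, CCPA, or DPDP framework involves much more (lawful basis, purpose limitation, withdrawal of consent, audit trails).

```python
# Simplified, hypothetical sketch of consent-gated processing; not legal advice
# and not a real compliance implementation under the GDPR, CCPA, or DPDP Bill.

from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    """Purposes a person has explicitly consented to, e.g. {"service", "political_ads"}."""
    user_id: str
    consented_purposes: set = field(default_factory=set)


class ConsentRegistry:
    def __init__(self):
        self._records = {}

    def record_consent(self, user_id: str, purpose: str):
        rec = self._records.setdefault(user_id, ConsentRecord(user_id))
        rec.consented_purposes.add(purpose)

    def has_consent(self, user_id: str, purpose: str) -> bool:
        rec = self._records.get(user_id)
        return rec is not None and purpose in rec.consented_purposes


def build_political_audience(user_ids, registry: ConsentRegistry):
    """Only users who explicitly opted in to political-ad processing are targetable."""
    return [uid for uid in user_ids if registry.has_consent(uid, "political_ads")]


if __name__ == "__main__":
    registry = ConsentRegistry()
    registry.record_consent("alice", "service")       # consented to the service only
    registry.record_consent("bob", "political_ads")   # explicitly opted in to political ads
    print(build_political_audience(["alice", "bob"], registry))  # ['bob']
```

The design point this sketch illustrates is purpose limitation: consent given for one purpose (using the service) does not carry over to another (political microtargeting), so the targetable audience shrinks to those who explicitly opted in.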


Conclusion


The influence of big tech companies on political discourse poses a serious challenge to democracies. Their control over information flows, opaque algorithms, and massive data collection creates an environment conducive to manipulation and polarization. Addressing this complex problem requires a multifaceted strategy that balances the demands of free speech, platform responsibility, and democratic integrity. Strong regulatory frameworks are urgently needed to guarantee transparency in data practices, algorithmic design, and content moderation. This entails imposing strict data privacy protections, encouraging interoperability to promote competition, and requiring platforms to maintain clear accountability mechanisms. To prevent anti-competitive behavior and ensure a more level playing field, governments should adopt “ex-ante” regulations, which act before harm occurs, rather than relying solely on “ex-post” measures that respond only after the damage is done. Promoting digital literacy among the public is equally crucial: enabling people to critically assess online information, recognize deceptive practices, and understand algorithmic curation builds more resilient and better-informed citizens. Scholarly research on the long-term effects of platformization and algorithmic governance should also be encouraged and supported. The ultimate objective must be to reclaim the digital public square from the unchecked power of a few private organizations. Governments, academics, civil society, and the tech sector itself must work together globally to create a new paradigm that puts democratic principles and the continuation of political dialogue ahead of unbridled power and profit maximization. Our collective ability to navigate the digital frontier responsibly and ethically may well determine the future of democratic societies.

FAQs


• Can regulations governing data privacy, such as GDPR, actually stop big tech from engaging in political microtargeting?
Data privacy laws such as the GDPR (Europe’s General Data Protection Regulation) seek to give individuals greater control over their personal information by requiring express consent before it is collected and used. They can increase transparency and limit the kinds of data available for political microtargeting, but stopping the practice entirely depends on strict enforcement and ongoing adaptation to new data exploitation techniques.

• What is the indirect effect of antitrust laws on the political influence of big tech?
Antitrust laws aim to preserve fair competition and prevent monopolies. Although they are not specifically directed at political speech, antitrust actions that break up or regulate dominant tech firms may reduce their enormous market power. The result could be a more competitive and fragmented digital environment that encourages a wider variety of platforms and decentralizes information flows, thereby diluting the concentrated political power currently held by a small number of companies.
