
Deepfake Law: Legal Frameworks and Global Responses


Author: Swosti Singh Napit; Symbiosis Law School, Pune


To the Point


An AI-generated video of Ukrainian President Volodymyr Zelenskiy apparently telling his soldiers to lay down their weapons went viral on social media, yet every word was synthetic. In another notorious episode, millions viewed nonconsensual deepfake pornography of Taylor Swift, with her face superimposed on another person's body. These examples show the power of deepfakes: hyper-realistic synthetic media in which artificial intelligence imperceptibly alters faces and voices, or fabricates entire scenes from scratch. The technology has advanced to the point where bad actors can generate convincing fake speeches, fake celebrity endorsements, and even fake evidence. Legitimate uses of deepfakes in entertainment and education are expanding, but so is their malicious use in political manipulation, financial fraud, and personal harassment. The consequences are serious: eroding trust in media, damaged reputations, and a growing inability to tell truth from fiction in the digital environment. Experts warn the problem will only escalate as the technology becomes cheaper and more accessible.


Abstract


Deepfakes are among the most pressing issues of the digital era, blurring the boundary between fact and fiction in unsettling ways. With fast-improving AI used to generate synthetic media, almost anything is possible, including political disinformation, fraud, nonconsensual intimate imagery (NCII), and identity theft. Despite the rising damage, legal systems around the world remain fragmented and are failing to keep pace with the technology. Although some jurisdictions have begun to regulate malicious deepfakes, such as the EU (through the AI Act and the Digital Services Act), the U.S. (with state bans on deepfake pornography and proposed federal bills), and South Korea (with strong criminal offences), enforcement gaps and international differences persist. The absence of international standards amplifies the risks, giving bad actors room to exploit lax regulation. This paper reviews the legal and ethical considerations of deepfakes, surveys regulatory initiatives, and argues for prompt and concerted action to defend truth, privacy, and democracy in the era of synthetic deceit.


Legal Issues Raised by Deepfakes


Deepfakes are generating significant legal issues, forcing legislators to reconsider old regulations and develop new ones. The major issues are:


A. Infringements of Privacy and Consent
Deepfake technology is exploited to produce fake nude videos and other explicit material without a person's consent, with women and celebrities the usual targets. Some countries, such as South Korea, have begun jailing offenders, but in many places little has been done to stop the practice.


B. Defamation and Disinformation
Fake audio clips or videos can destroy a reputation or even influence elections. For instance, a fabricated audio recording of a candidate went viral shortly before Slovakia's 2023 election. The central question: can AI-generated lies be punished under existing defamation rules, or do we need new ones?


C. Fraud and Financial Crime
Scammers now use deepfake voice clones to impersonate CEOs or loved ones and induce victims to transfer funds. Is this merely identity theft, or an emerging category of crime? The legislation has not fully caught up.


D. Intellectual Property and Publicity Rights
AI can resurrect deceased actors in films, but who owns the rights to their digital likenesses? Some argue deepfakes should be fair game; others say it should be the studio's or family's choice to permit (and be paid for) such uses.
Criminals and other abusers will always outpace the legal system, and deepfakes fill the gaps that result.


Global legal responses


1. USA
The U.S. has a patchwork of state-level legislation, such as prohibitions on deepfake pornography and election interference passed in California and Texas, among others, but no single federal law has yet been enacted. Proposed laws such as the DEEPFAKES Accountability Act would require the labelling of synthetic media and penalise its harmful uses, but progress has been slow.
2. European Union
The EU AI Act will require providers to disclose when content is AI-generated, and the GDPR can be used to limit the use of biometric data in deepfakes made without permission. These regulations emphasise transparency, though enforcement remains difficult.
3. China
China has some of the most rigid regulations: AI-generated content must bear a watermark, and offences involving deepfakes carry severe punishment. The government also leans on its social credit system to discourage misuse, though some see privacy costs in this approach.
4. South Korea
South Korea is at the forefront in criminalising deepfake pornography and jailing perpetrators. Still, experts disagree on whether these laws are sufficient, as new cases continually emerge. Similar laws are being enacted in other countries, including India and Australia.
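The labelling obligations above (China's watermark rule and the EU AI Act's disclosure requirement) can be pictured as attaching a machine-readable provenance manifest to synthetic media. The sketch below is purely illustrative and assumes a hypothetical JSON format of my own devising; real regimes, and industry standards such as C2PA, define their own binding formats.

```python
import hashlib
import json

def make_provenance_label(media_bytes: bytes, generator: str) -> str:
    """Build a simple provenance manifest for a piece of synthetic media.

    Illustrative sketch only: the field names and JSON layout here are
    hypothetical, not any regulator's actual labelling format.
    """
    manifest = {
        "ai_generated": True,  # explicit disclosure flag
        "generator": generator,  # hypothetical tool name
        # Hash ties the label to the exact content it describes
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

label = make_provenance_label(b"fake-video-bytes", "example-model")
print(json.loads(label)["ai_generated"])  # prints True
```

The point of hashing the media bytes is that the label cannot simply be copied onto different content without detection, which is the property any disclosure regime ultimately needs.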


Case Study


UK Company Loses $243,000 to an AI-Generated Voice Scam
In March 2019, a UK-based energy company was hit by a deepfake audio scam and lost $243,000 to criminals. According to a report by Trend Micro, fraudsters used artificial intelligence to clone the voice of the company's CEO so convincingly that the imitation went undetected.

The fraud began with an AI-generated synthetic voice that imitated the CEO's tone, accent, and speaking habits and instructed a director to forward money to a Hungarian supplier. Believing the call was a legitimate business request, the director issued the payment.
The fraud was caught only after the money had been sent. By then, the funds had been passed through various accounts in different countries, making recovery impossible. The case revealed how deepfake voice technology can abuse human trust to commit high-value financial crime without tripping security controls.
Hong Kong Company Loses $25 Million in Deepfake CFO Fraud
In early 2024, a sophisticated deepfake scam hit a multinational firm in Hong Kong, costing it roughly $25 million. According to CNN, scammers used deepfake technology to impersonate the company's Chief Financial Officer during a video conference with a member of the firm's finance staff.
The scheme combined deepfake video and audio with spoofed email messages. The targeted employee joined a call in which a seemingly genuine CFO demanded an immediate transfer of large sums to conclude a supposedly critical business deal. Well-crafted emails that mirrored the company's real activities lent the fraud further credibility.
The AI-generated likeness was frighteningly convincing, matching the real CFO in every respect. Faced with this apparently authentic call and the supporting email trail, the employee saw no reason to question the transaction's legitimacy.
By the time the company discovered the fraud, the stolen money had already been scattered across a number of offshore accounts and could not be traced. The case shows how sophisticated AI-enabled financial crime has become, using social engineering to circumvent even strict corporate security arrangements.
The Hong Kong case is one of the largest known financial losses from deepfake fraud to date, underscoring the need to strengthen verification procedures in corporate financial dealings.
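One practical lesson from both cases is that high-value payment instructions arriving over spoofable channels (phone, email, video calls) should be held until confirmed through an independent channel, such as a callback on a known-good number. A minimal sketch of such a rule, with hypothetical names and a hypothetical threshold rather than any real compliance standard:

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount_usd: float
    channel: str  # how the instruction arrived, e.g. "phone", "email", "video_call"
    confirmed_out_of_band: bool = False  # e.g. callback on a known-good number

def requires_hold(req: TransferRequest, threshold: float = 10_000.0) -> bool:
    """Hold transfers that exceed a threshold and arrived over a channel a
    deepfake can spoof, unless independently confirmed.

    Illustrative sketch only; the channel list and threshold are assumptions.
    """
    spoofable = req.channel in {"phone", "email", "video_call"}
    return req.amount_usd >= threshold and spoofable and not req.confirmed_out_of_band

print(requires_hold(TransferRequest(243_000, "phone")))        # prints True
print(requires_hold(TransferRequest(243_000, "phone", True)))  # prints False
```

The design point is that the check keys on the channel, not on how convincing the caller sounds: a sufficiently good deepfake defeats human judgment, so the control must be procedural.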


Conclusion


Deepfake technology poses a growing threat that requires immediate legal intervention, especially after recent sophisticated fraud schemes such as the $25 million Hong Kong case. Although measures such as China's watermarking regulations and South Korea's criminal penalties are already in place, international coordination is needed to address AI-based deception. Governments must balance innovation and protection through sound verification systems, cross-border collaboration, and flexible regulation. Without clear action, deepfakes could severely undermine digital communication, financial transactions, and democracy itself; legislative bodies should act before synthetic media outruns our capacity to regulate it.


FAQs


Which nations have passed specific laws to punish deepfakes?
Several countries already have such laws: China (requiring deepfakes to be watermarked when shared online), South Korea (criminalising deepfake pornography), the U.S. (state-level prohibitions on malicious deepfakes), and the EU (transparency requirements under the AI Act).
Can victims of deepfake fraud or defamation seek redress under existing laws?
Certain existing laws (e.g. defamation, fraud, privacy) may apply, but many do not adequately cover AI-generated content, prompting calls for laws that specifically target deepfakes.
What does the EU's AI Act introduce concerning deepfakes?
The AI Act requires AI-generated content, including deepfakes, to be clearly labelled as such, and prohibits certain high-risk manipulative AI practices outright.
What are the most serious problems in enforcing deepfake laws?
The major challenges are jurisdiction (crimes committed from abroad), the pace at which generation outstrips detection, and the difficulty of restricting harm without curtailing free speech.
Are tech platforms liable for hosting deepfake material?
Liability varies by jurisdiction. The EU's DSA obliges platforms to address harmful content, while in the U.S. Section 230 frequently shields them, although reform is under discussion.
