In William Golding’s famous novel, Lord of the Flies, Jack emphasises the importance of following rules and establishing a system of governance among the boys. He says, “We’re not savages. We’re English, and the English are best at everything. So, we’ve got to do the right things.” The novel tells the story of a group of boys stranded on an uninhabited island and their failed attempt at self-governance; in the end, the boys become nothing short of savages. The book illustrates the significance of rules and the consequences of their absence. If nothing else, it is a lesson for governments and regulators: self-regulation, or no regulation at all, can be disastrous. This lesson is particularly relevant in the context of Artificial Intelligence (AI) regulation in India.
The lack of proper regulation creates avenues for individuals, firms and even non-state actors to misuse AI. Legal ambiguity, coupled with a lack of accountability and oversight, is a recipe for disaster. The policy vacuum around deepfakes is a perfect archetype of this problem. Deepfakes “leverage powerful techniques from machine learning (ML) and artificial intelligence (AI) to manipulate or generate visual and audio content with a high potential to deceive”. Many of us have likely encountered the highly convincing deepfake of Tom Cruise that seemed more like Tom Cruise than the real Tom Cruise.
Issues with deepfakes
While appreciating the technology, we should be aware of the serious issues with deepfakes. First, since they are compelling, deepfake videos can be used to spread misinformation and propaganda, seriously compromising the public’s ability to distinguish fact from fiction. Second, there is a history of deepfakes being used to depict people in compromising and embarrassing situations. For instance, there is no dearth of deepfake pornographic material featuring celebrities. Such photos and videos amount not only to an invasion of the privacy of the people purportedly in them, but also to harassment. As the technology advances, making such videos will only become easier. Third, deepfakes have been used for financial fraud. Recently, scammers used AI-powered software to trick the CEO of a U.K. energy company over the phone into believing that he was speaking with the head of its German parent company. The audio deepfake convincingly mimicked the voice of the CEO’s boss, including his German accent. As a result, the CEO transferred €220,000 to what he thought was a supplier.
Creating tensions in the neighbourhood
There are three areas in which deepfakes can become a lethal tool in the hands of India’s unfriendly neighbours and non-state actors seeking to create tensions in the country.
Deepfakes can be used to influence elections. Taiwan’s cabinet recently approved amendments to its election laws to punish the sharing of deepfake videos or images, prompted by a growing concern that China is spreading false information to sway public opinion and manipulate election outcomes. Similar interference could occur in India’s upcoming general elections. Ironically, China is one of the few countries to have introduced regulations prohibiting deepfakes deemed harmful to national security or the economy. The rules, which came into effect on January 10, 2023, apply to content creators who alter facial and voice data.
Deepfakes can also be used to carry out espionage. Doctored videos can be used to blackmail government and defence officials into divulging state secrets. In 2019, the Associated Press identified a LinkedIn profile under the name Katie Jones, connected to influential individuals in Washington, D.C., as a likely front for AI-enabled espionage.
In March 2022, Ukrainian President Volodymyr Zelensky revealed that a video posted on social media, in which he appeared to instruct Ukrainian soldiers to surrender to Russian forces, was a deepfake. Similarly, in India, deepfakes could be used to produce inflammatory material, such as videos purporting to show the armed forces or the police committing ‘crimes’ in conflict-prone areas. Such deepfakes could be used to radicalise populations, recruit terrorists, or incite violence.
As the technology matures further, deepfakes could also enable individuals to deny the authenticity of genuine content, particularly if it shows them engaging in inappropriate or criminal behaviour, by claiming that it is a deepfake. Professors Danielle Keats Citron and Robert Chesney call this the ‘Liar’s Dividend’: as awareness and prevalence of deepfakes grow, it becomes easier for wrongdoers to dismiss authentic evidence as fabricated.
Need for legislation
Currently, only a few provisions of the Indian Penal Code (IPC) and the Information Technology Act, 2000 can potentially be invoked to deal with the malicious use of deepfakes. Section 500 of the IPC punishes defamation. Sections 67 and 67A of the Information Technology Act punish the publication and transmission of obscene and sexually explicit material in electronic form. The Representation of the People Act, 1951, prohibits the creation or distribution of false or misleading information about candidates or political parties during an election period. But these are not enough. The Election Commission of India requires registered political parties and candidates to obtain pre-approval for all political advertisements on electronic media, including television and social media, to help ensure their accuracy and fairness. However, these rules do not address the dangers posed by deepfake content.
There is often a lag between the emergence of new technologies and the enactment of laws to address the issues and challenges they create. India’s legal framework is insufficient to adequately address the various issues arising from AI algorithms. The Union government should introduce separate legislation regulating the nefarious use of deepfakes, as well as the broader subject of AI. Such legislation should not hamper innovation in AI, but it should recognise that deepfake technology may be used in the commission of criminal acts and include provisions to address such use. The proposed Digital India Bill could also address this issue. We cannot always rely on self-regulation. At least, that is what Lord of the Flies has taught us.
Bibek Debroy is Chairman of the Economic Advisory Council to the Prime Minister; Aditya Sinha is Additional Private Secretary (Policy and Research), Economic Advisory Council to the Prime Minister