The Delhi High Court on Wednesday urged the Union Ministry of Electronics and Information Technology (MeitY) to consider enacting a law to regulate the proliferation of harmful artificial intelligence technologies, such as deepfakes, on the internet. The court expressed concerns about these technologies, emphasizing that they could become a “menace in society.”
A bench consisting of Acting Chief Justice Manmohan and Justice Tushar Rao Gedela addressed Additional Solicitor General (ASG) Chetan Sharma, who represented the government. The court highlighted that deepfake technology is not just an issue in India but a global challenge. “Everything that you are seeing or hearing is fake. It can’t be,” the bench remarked, stressing the urgency of regulatory action.
The court’s remarks came in response to a plea seeking the formulation of guidelines to regulate AI and deepfake technologies. The plea, filed by Chaitanya Rohilla, called for the identification and blocking of websites providing access to deepfake AI, underscoring the inadequacy of existing laws in addressing these emerging threats.
Global Concerns and Legislative Gaps
Deepfakes, which utilize AI to create hyper-realistic but fake images and videos, have raised significant concerns worldwide. These technologies have the potential to spread misinformation, manipulate public opinion, and infringe upon individual privacy rights. The Delhi High Court referenced legislation in some U.S. states as a benchmark, suggesting that it is time for the Indian Parliament to take similar steps. “You are the government. We as an institution have some limitations. You’ll have to do something. You’ll have to start thinking about it; it’s going to be a serious menace in society,” the bench told the government representative.
The court’s observations reflect a growing recognition of the potential harms posed by AI and deepfake technologies. While technological advancements have provided significant benefits, they have also introduced risks that existing legal frameworks may not adequately address. The plea by Rohilla pointed out the gaps in current legislation, such as the Digital Personal Data Protection Act, 2023, and called for the court’s intervention to protect the fundamental rights guaranteed by the Indian Constitution.
Government’s Response and Legal Framework
In response to the court’s concerns, the Centre’s counsel had previously submitted that the government is aware of the issue and is working on it. In February, the Centre filed a detailed affidavit outlining the existing legal and regulatory mechanisms under the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, which aim to address issues related to AI and deepfake technologies.
The government’s 23-page affidavit also highlighted various advisories issued to intermediaries and platforms to ensure compliance with legal provisions concerning the misuse of AI and deepfake technologies. Despite these measures, the petitioner argued, and the court observed, that these laws are insufficient in the face of rapidly advancing technology.
During Wednesday’s hearing, advocate Manohar Lal, representing the petitioner, emphasized that the problem of deepfake technology has grown since the plea was first filed last year. Acknowledging the spread of deepfakes as a growing issue, ASG Sharma stated that combating AI-generated fakes requires a counter-technology approach.
Looking Ahead
Considering the arguments, the court directed Rohilla’s counsel to file, within two weeks, an additional affidavit with suggestions on how to tackle the misuse of deepfake technology. The court set October 8 as the next date for the hearing.
The Delhi High Court’s call for a dedicated law to regulate AI and deepfake technologies marks a significant step towards addressing the legal and ethical challenges posed by these advancements. As the court and government continue to explore possible solutions, the debate underscores the need for a balanced approach that harnesses the benefits of technology while safeguarding against its potential harms.