Bengaluru: India has proposed new regulations requiring social media platforms and artificial intelligence (AI) companies to clearly label AI-generated or manipulated content, in a move aimed at tackling the growing threat of deepfakes and misinformation.
The draft rules, introduced by the Ministry of Electronics and Information Technology (MeitY), seek to update the country’s IT framework to address the rapid rise of generative AI tools that can create realistic fake videos, audio, and images. Under the proposal, online platforms would be required to identify and label AI-generated content, and users would have to declare whether material they upload was created or altered using AI.
Officials said the initiative is designed to prevent AI misuse that could lead to user harm, impersonation, and the spread of false information, especially during elections. “The misuse of generative AI has become a growing concern, with deepfakes increasingly used to mislead the public,” the ministry said in a statement.
India, home to one of the world’s largest internet user bases, has seen a surge in manipulated online content that has at times fueled social unrest and political controversy. Experts say deepfakes pose particular risks in a country as large and diverse as India, where misinformation can spread rapidly across languages and regions.
The proposed rules are expected to undergo public consultation before being finalized. If approved, they would make India one of the first major nations to implement comprehensive labelling requirements for AI-generated content.
While digital rights advocates have welcomed the government’s focus on transparency, they caution that enforcement of the rules must avoid overreach and protect free expression. The government has said it aims to strike a balance between encouraging AI innovation and protecting citizens from harm caused by its misuse.