‘Deepfakes’, which involve the creation of highly realistic content (images, video, audio) by harnessing the power of artificial intelligence (“AI”), raise important concerns related to misinformation, identity theft, fraud, privacy infringement and electoral democracy – including in recent incidents in India involving media personalities and politicians. However, deepfakes also offer promising possibilities across various fields and business applications, including personalized marketing, virtual training simulations and operational efficiency.
To date, India does not have a specific law regulating deepfakes or AI. However, certain provisions of the Information Technology Act, 2000 and its corresponding rules (together, the “IT Act”) may be invoked by appropriate authorities in this regard, including with respect to potential misuse and related penalties. In addition, new legislation – such as the proposed Digital India Act and the recently enacted Digital Personal Data Protection Act, 2023 – which together are poised to overhaul the IT Act in its entirety, may introduce bespoke rules regulating AI and deepfakes in India.
As organizations navigate this transformative techno-legal landscape, the responsible use of deepfake technology – through a combined adoption of ethical frameworks, transparent policies, security measures, technical collaborations and awareness campaigns – is necessary to ensure a positive impact on the business ecosystem.
