The EU’s New Law on Artificial Intelligence: Global Implications

Following ‘trilogue’ negotiations among the major institutions of the EU, agreement on a proposed regulation on artificial intelligence (“AI”) was reached in Brussels a few months ago; the text may be approved, published, and subsequently enter into force later this year. This is the world’s first comprehensive law on AI (the “AI Act”). Under the current draft, the AI Act should apply two years after its entry into force, likely from the second quarter of 2026.
The broad focus of this new law is a risk-based approach, calibrated to an AI system’s capacity to cause harm. Compared to prior legislative proposals, the current agreement adds rules on high-impact general-purpose AI models that could pose systemic risk in the future, as well as on high-risk AI systems. The AI Act may set a global standard for AI regulation in other jurisdictions, much as the EU’s General Data Protection Regulation (“GDPR”) did for personal information. Moreover, like the GDPR, one of the most significant effects of the AI Act will be its extraterritorial scope, imposing obligations on non-EU businesses as well.


Can Deepfakes be Leveraged Responsibly?

‘Deepfakes’, which involve the creation of highly realistic content (images, video, audio) by harnessing the power of artificial intelligence (“AI”), raise important concerns related to misinformation, identity theft, fraud, privacy infringement and electoral democracy – including as recently witnessed in India via incidents involving media personalities and politicians. However, deepfakes also promise exciting possibilities in various fields and business applications, including for personalized marketing, virtual training simulations and operational efficiency.
To date, India does not have a specific law regulating deepfakes or AI. However, certain provisions under the Information Technology Act, 2000 and its corresponding rules (together, the “IT Act”) may be invoked by appropriate authorities in this regard, including with respect to potential misuse and related penalties. In addition, new legislation – such as the proposed Digital India Act and the recently published Digital Personal Data Protection Act, 2023 – which, taken together, remains poised to overhaul the IT Act in its entirety, may introduce bespoke rules regulating AI and deepfakes in India.
As organizations navigate this transformative techno-legal landscape, the responsible use of deepfake technology – through a combined adoption of ethical frameworks, transparent policies, security measures, technical collaborations and awareness campaigns – is necessary to ensure a positive impact on the business ecosystem.