India’s Concerns About DeepSeek and Possible Regulatory Responses

Large language models (“LLMs”) such as those behind DeepSeek, OpenAI’s ChatGPT, and xAI’s Grok have faced significant regulatory attention in recent times. In particular, DeepSeek’s LLMs and artificial intelligence (“AI”)-based chatbots have been prohibited, restricted, and/or extensively reviewed by several countries, including on account of concerns related to privacy and national security.
While the Government of India (“Government”) is currently monitoring the use of DeepSeek by Indian users, it may adopt regulatory measures under existing provisions of the Information Technology Act, 2000 (“IT Act”) and its rules, as necessary. Such provisions include those related to: (i) blocking public access on account of risks to the security or sovereignty of India (under section 69A of the IT Act), subject to specified procedures and safeguards; and (ii) ‘safe harbor’ and intermediary liability (under section 79 of the IT Act), subject to due diligence and other obligations in respect of hosting third-party information.
Further, the Government has certain powers under the Digital Personal Data Protection Act, 2023 (“DPDP Act”) and its rules, the provisions of which are yet to be notified but are expected to come into force soon. Such powers include restricting cross-border data flows/transfers and requiring data localization in certain circumstances.


data minimization

Navigating Data Minimization Requirements under India’s DPDP Act

While the provisions of India’s Digital Personal Data Protection Act, 2023 (“DPDP Act”) and its rules are yet to be notified, organizations need to plan ahead and prepare for a new set of compliance obligations. In large part, the DPDP Act follows global regulatory templates such as the EU’s GDPR and embodies similar overarching principles, including data minimization and purpose limitation. These principles, as reflected in the DPDP Act, will translate into specific obligations and practices related to data collection, processing, sharing and storage, especially in the context of Big Data analytics, including through the use of artificial intelligence and machine learning techniques.
This note analyzes the principle of data minimization under the DPDP Act and its interface with other laws (including with respect to consumer protection), and discusses potential learnings from other jurisdictions, including for the purpose of implementing this principle at an operational level.
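By way of illustration only, the following Python sketch shows one operational approach to data minimization: retaining, for each declared purpose of processing, only the data fields needed for that purpose before a record is stored. The purposes and field names used here are hypothetical assumptions and are not prescribed by the DPDP Act.

```python
# Illustrative only: a minimal sketch of purpose-based field filtering,
# one operational way to apply data minimization before data is stored.
# The purposes and field names below are hypothetical assumptions, not
# requirements prescribed by the DPDP Act.

PURPOSE_FIELD_MAP = {
    # Keep only the fields strictly needed for each declared purpose.
    "order_fulfilment": {"name", "shipping_address", "phone"},
    "newsletter": {"email"},
}


def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of `record` containing only fields permitted for `purpose`."""
    allowed = PURPOSE_FIELD_MAP.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}


if __name__ == "__main__":
    raw = {
        "name": "A. Sharma",
        "email": "a.sharma@example.com",
        "phone": "+91-9000000000",
        "shipping_address": "Mumbai",
        "date_of_birth": "1990-01-01",  # not needed for either purpose below
    }
    print(minimize(raw, "newsletter"))        # {'email': 'a.sharma@example.com'}
    print(minimize(raw, "order_fulfilment"))  # name, shipping_address, phone only
```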


Digital Personal Data Protection Rules

Draft Digital Personal Data Protection Rules, 2025

A long-anticipated draft of the Digital Personal Data Protection Rules, 2025 (“Draft Rules”) was released by the Central Government (“Government”) on January 3, 2025 for public consultation and comments, along with an explanatory note on the contents of the Draft Rules. Once brought into effect, these rules will enable implementation of the Digital Personal Data Protection Act, 2023 (the “DPDP Act” or the “Act”), which was published in the Official Gazette on August 11, 2023 but is not yet in force. The consultation process on the Draft Rules will continue until February 18, 2025. The rules under the DPDP Act are proposed to be implemented in a staggered manner.
To recap, the DPDP Act lays down the law for processing of digital personal data (any data in digital form about an individual who is identifiable by or in relation to such data) in a manner that recognizes both the rights of individuals to protect their personal data and the need to process such data for lawful purposes and for connected or incidental matters. For an overview of the provisions of the DPDP Act, please see our notes here and here.
This note analyzes certain key aspects introduced or further clarified under the Draft Rules.


AI legal challenges

Addressing Legal Challenges on AI Development and Use

The recent lawsuit by Asian News International against OpenAI in the Delhi High Court mirrors global trends involving allegations that large language models (“LLMs”) are being trained on copyrighted material without authorization or licenses, leading to copyright infringement. To balance innovation with compliance, artificial intelligence (“AI”) developers in India must take proactive measures to navigate the complex interplay of copyright, data protection and liability issues. By securing licensing agreements, clarifying the scope of ‘fair use’ under copyright law, offering indemnities to users, and preparing for court-directed compliance actions, AI developers can mitigate risks and build legally compliant AI systems.


investing in ai

Investing in AI in India (Part 2): Tracking the Regulatory Landscape

Prospective investors in Indian artificial intelligence (“AI”) companies should familiarize themselves with the Indian government’s initiatives in AI regulation and the direction of future regulation. This note, the second of a multi-part series on investing in the Indian AI sector, outlines some of the key regulatory developments relating to AI in the country. However, it is important to keep in mind that India’s approach to AI governance may change in the future, given the rapidly evolving nature of the technology as well as the country’s dynamic regulatory trajectory, including with respect to data, intermediary liability, digital technologies, telecommunications and digital competition, as discussed in this note.


new data protection law

The Implications of India’s New Data Protection Law on Internal Investigations

Employers in India may need to carry out internal investigations in relation to a wide range of issues and situations. In the case of Indian subsidiaries of multinational corporations, investigations may be carried out to satisfy compliance requirements under laws applicable to the parent entity, such as the US Foreign Corrupt Practices Act of 1977 or the UK Bribery Act 2010.
In the course of such internal investigations, large amounts of personal data related to accused persons and other relevant individuals may need to be processed by the employer – either by itself or through its advisors and agents. Accordingly, an informed assessment of the rights of such individuals, as well as the obligations of the employer and its advisors/agents, becomes crucial from the perspective of applicable data protection law.
This note specifically discusses the processing of personal data in the context of internal investigations, including with respect to allegations or suspicions of economic and criminal offences. While necessary rules under the Digital Personal Data Protection Act, 2023 are yet to be notified, provisions of this new law, as published in August 2023, indicate key considerations for employers (each of which is likely to be treated as a “data fiduciary”), including with respect to consent, legitimate use and potential exemptions.


AI in India

Investing in AI in India (Part 1): Key Considerations

While investments in the AI sector in India present significant opportunities, they also carry a unique set of risks within an evolving legal and regulatory landscape.
Before making an investment decision, investors should consider IP issues, data-related rights and compliance, any industry-specific concerns, the then-applicable regulatory framework, as well as potential developments in AI regulation. In addition, investors should evaluate operational and contractual arrangements, undertake technical due diligence, and assess potential liabilities and risks. Such risks include product and professional liability, algorithmic bias and discrimination, cybersecurity and data breaches, and market and reputational risks, along with concerns related to transparency and explainability.
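By way of illustration only, the following Python sketch shows one narrow check that a technical due diligence exercise might include when assessing algorithmic bias: measuring the demographic parity difference, i.e., the gap in positive-outcome rates across groups, for a model’s predictions. The predictions and group labels below are hypothetical sample data, not drawn from any actual system.

```python
# Illustrative only: a toy check for one facet of algorithmic bias
# (demographic parity difference) that technical due diligence might include.
# The predictions and group labels below are hypothetical sample data.

from collections import defaultdict


def demographic_parity_difference(predictions, groups):
    """Difference between the highest and lowest positive-prediction rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 0, 1, 0]
    grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
    # Group A positive rate = 0.75, group B = 0.25, so the difference is 0.5.
    print(demographic_parity_difference(preds, grps))
```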
 


data protection regime in India

India’s New Data Protection Regime: Tracking Updates and Preparing for Compliance

The Digital Personal Data Protection Act, 2023 (the “DPDP Act”), published in India’s Official Gazette last year, is a new law regulating the collection, storage, use and processing of personal data. The DPDP Act will take effect from the date(s) notified by the Indian Government, and different dates may be notified for different provisions. Further, several provisions of the DPDP Act require specific rules which are yet to be notified.
According to a recent statement by the new Union Minister of Electronics and Information Technology, the new rules are in advanced stages of drafting and are expected to be released for industry-wide consultation in the near future. Given that both the rules and the provisions of the DPDP Act are likely to be notified over the next few months, all entities should check whether, and to what extent, the DPDP Act applies to them and their operations.
For the purpose of preparing for, and complying with, obligations under the DPDP Act, it would be advisable for all organizations to undertake data mapping exercises and data audits, inter alia, to facilitate the identification of ‘personal’ information within mixed or legacy databases and/or organizational datasets.
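By way of illustration only, the following Python sketch shows how a data mapping exercise might begin to flag columns in a legacy dataset that likely contain personal information, using simple pattern matching for e-mail addresses and Indian mobile numbers. The patterns and sample records are simplified assumptions; an actual exercise would involve broader detection logic and manual review.

```python
# Illustrative only: a minimal sketch of how a data mapping exercise might
# flag columns that likely contain personal data in a legacy dataset.
# The regexes and sample data are simplified assumptions; real exercises
# would use broader detection logic and manual review.

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "indian_phone": re.compile(r"(\+91[\s-]?)?[6-9]\d{9}"),
}


def flag_personal_data_columns(rows: list[dict]) -> dict[str, set[str]]:
    """Map column name -> set of PII pattern names detected in that column."""
    flagged: dict[str, set[str]] = {}
    for row in rows:
        for column, value in row.items():
            for label, pattern in PII_PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    flagged.setdefault(column, set()).add(label)
    return flagged


if __name__ == "__main__":
    legacy_rows = [
        {"id": "1001", "contact": "ravi@example.com", "notes": "call after 6 pm"},
        {"id": "1002", "contact": "+91 9876543210", "notes": "prefers email"},
    ]
    print(flag_personal_data_columns(legacy_rows))
    # {'contact': {'email', 'indian_phone'}}
```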


EU’s New Law on Artificial Intelligence

The EU’s New Law on Artificial Intelligence: Global Implications

Pursuant to ‘trilogue’ negotiations among the major institutions of the EU, an agreement on a proposed regulation with respect to artificial intelligence (“AI”) was reached in Brussels a few months ago; the text may be approved and published, and subsequently enter into force, later this year. This is the world’s first comprehensive law on AI (the “AI Act”). According to the current draft, the AI Act should apply two years after its entry into force, likely from the second quarter of 2026.
The broad focus of this new law is a risk-based approach, calibrated to an AI system’s capacity to cause harm. Compared to prior legislative proposals, additional elements of the current agreement include rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems. The AI Act may set a global standard for AI regulation in other jurisdictions, just as the EU’s General Data Protection Regulation (“GDPR”) did with respect to personal information. Moreover, similar to the GDPR, one of the most important effects of the AI Act will be its extraterritorial scope, which will impose obligations on non-EU businesses as well.



Can Deepfakes be Leveraged Responsibly?

‘Deepfakes’, which involve the creation of highly realistic content (images, video, audio) by harnessing the power of artificial intelligence (“AI”), raise important concerns related to misinformation, identity theft, fraud, privacy infringement and electoral democracy – including as recently witnessed in India via incidents involving media personalities and politicians. However, deepfakes also promise exciting possibilities in various fields and business applications, including for personalized marketing, virtual training simulations and operational efficiency.
As of date, India does not have a specific law regulating deepfakes or AI. However, certain provisions of the Information Technology Act, 2000 and its corresponding rules (together, the “IT Act”) may be invoked by appropriate authorities in this regard, including with respect to potential misuse and related penalties. In addition, new legislation, such as the proposed Digital India Act and the recently published Digital Personal Data Protection Act, 2023 (which, taken together, are poised to overhaul the IT Act in its entirety), may introduce bespoke rules regulating AI and deepfakes in India.
As organizations navigate this transformative techno-legal landscape, the responsible use of deepfake technology – including through a combined adoption of ethical frameworks, transparent policies, security measures, technical collaborations and awareness campaigns – is necessary to ensure a positive impact on the business ecosystem.
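By way of illustration only, the following Python sketch shows one possible transparency measure: generating a simple disclosure manifest that records that a media file is AI-generated and ties the disclosure to the file through a SHA-256 hash. The file name and field names are hypothetical, and production-grade approaches would typically rely on established content-credential or watermarking standards rather than this sketch.

```python
# Illustrative only: a simple disclosure manifest recording that a media file
# is AI-generated, bound to the file via a SHA-256 hash. File and field names
# are hypothetical; real deployments would use established content-credential
# or watermarking standards rather than this sketch.

import hashlib
import json
from datetime import datetime, timezone


def build_disclosure_manifest(media_path: str, generator: str) -> str:
    """Return a JSON disclosure manifest for an AI-generated media file."""
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    manifest = {
        "ai_generated": True,
        "generator": generator,
        "sha256": digest,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, indent=2)


if __name__ == "__main__":
    # Create a placeholder file so the example runs end-to-end;
    # "campaign_video.mp4" is a hypothetical name used only for illustration.
    with open("campaign_video.mp4", "wb") as f:
        f.write(b"placeholder bytes standing in for generated media")
    print(build_disclosure_manifest("campaign_video.mp4", "in-house-avatar-model"))
```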