India's digital personal data protection regime

India’s Digital Personal Data Protection Regime Takes Effect

On November 13, 2025, the Government of India notified the Digital Personal Data Protection Rules, 2025 (“Rules”) under the Digital Personal Data Protection Act, 2023 (“DPDP Act”). These rules follow the draft Digital Personal Data Protection Rules, 2025, which were released for public consultation and comments in January 2025.
The provisions of the DPDP Act and the Rules will come into force in three phases – with phase 1 provisions (relating to constitution of the Data Protection Board of India and other procedural provisions) becoming effective from November 14, 2025; phase 2 provisions (relating to consent managers) becoming effective in November 2026; and phase 3 provisions (substantive provisions) becoming effective in May 2027.
The Rules provide clarity on various aspects of the DPDP Act, including on consent notices, notification requirements in case of a personal data breach, conditions for registration and obligations of Consent Managers, verifiable parental consent for processing children’s data, additional obligations of Significant Data Fiduciaries, new data retention requirements for all data fiduciaries, reasonable security safeguards, and cross-border data transfers.
This note provides an overview of the key provisions under the Rules. Organizations should plan ahead and prepare for compliance under the DPDP Act and Rules.


AI framework in the Indian financial sector

A Framework for Using AI in the Indian Financial Sector

On August 13, 2025, a committee constituted by the Reserve Bank of India (“RBI”) released its report with respect to a proposed framework for the responsible and ethical enablement (FREE) of artificial intelligence (AI) models in the Indian financial sector (such report, “FREE AI Report”), pursuant to principles of transparency, responsibility, and ethical deployment. The proposed framework may require RBI-regulated entities, including banks, to undertake significant investments and operational changes, including with respect to new governance structures and capacity-building measures.
In general, the FREE AI Report provides an overview of potential compliance obligations which might be imposed on RBI-regulated entities through future AI-related regulation. In that regard, the report proposes a sector-specific approach based on amendments to existing regulations and new AI-specific rules. Accordingly, a principles-based framework has been recommended for developing new regulations to guide the development, deployment, and governance of AI. The report also recommends including AI regulation within the scope of certain existing RBI master directions, including on cybersecurity, digital lending, customer service, fraud detection, information technology (“IT”) governance, and outsourcing of IT services.
This note provides a broad overview of the FREE AI Report and discusses the ways in which RBI-regulated entities could act upon its recommendations, including for the purpose of preparing for future AI regulation. AI governance requirements may involve the design and implementation of measures and strategies to ensure that AI models deployed in the financial sector are safe, reliable, and trustworthy, including for enhancing customer confidence and trust, and facilitating greater integration of AI models in finance.


AI regulations in India

The Impact of India’s Data-Related Laws and Policies on AI Development and Deployment

The rise of Artificial Intelligence (“AI”) and Machine Learning (“ML”) promises both opportunities and risks. To address such risks while leveraging the power of AI/ML for new areas of growth, stakeholders need to remain attentive to an evolving regulatory landscape. Unlike the European Union, India is yet to enact an overarching law on AI. Nevertheless, AI developers, deployers, investors, and other relevant entities in the AI supply chain must stay informed about existing and emergent regulatory initiatives across several industries, sectors, and legal regimes. Given India’s ongoing policy and legislative efforts to govern AI, particularly to address deployment-related concerns and potential harms, regulatory outcomes are likely to emerge soon, even if in a fragmented fashion.
Since AI model training relies heavily on data, India’s fast-developing data protection framework warrants special attention. Balancing compliance with innovation will remain crucial for organizations as they aim to thrive under India’s regulatory ecosystem on digital data and AI.


India’s Concerns About DeepSeek and Possible Regulatory Responses

Large language models (“LLMs”) associated with DeepSeek, OpenAI’s ChatGPT, and xAI’s Grok have faced significant regulatory attention in recent times. In particular, DeepSeek’s LLMs and artificial intelligence (“AI”)-based chatbots have been prohibited, restricted, and/or extensively reviewed by several countries, including because of concerns related to privacy and national security.
While the Government of India (“Government”) is currently monitoring the use of DeepSeek by Indian users, it may adopt regulatory measures under existing provisions of the Information Technology Act, 2000 (“IT Act”) and its rules, as necessary. Such provisions include those related to: (i) blocking public access on account of risks to the security or sovereignty of India (under section 69A of the IT Act), subject to specified procedures and safeguards; and (ii) ‘safe harbor’ and intermediary liability (under section 79 of the IT Act), subject to due diligence and other obligations in respect of hosting third-party information.
Further, the Government has certain powers under the Digital Personal Data Protection Act, 2023 (“DPDP Act”) and its rules, the provisions of which are yet to be notified but are expected to come into force soon. Such powers include restricting cross-border data flows/transfers and requiring data localization in certain circumstances.


data minimization

Navigating Data Minimization Requirements under India’s DPDP Act

While the provisions of India’s Digital Personal Data Protection Act, 2023 (“DPDP Act”) and its rules are yet to be notified, organizations need to prepare for a new set of compliance obligations and plan ahead. In large part, the DPDP Act follows global regulatory templates like the EU’s GDPR and embodies similar overarching principles, such as data minimization and purpose limitation. These principles, as reflected in the DPDP Act, will translate into specific obligations and practices related to data collection, processing, sharing, and storage, especially in the context of Big Data analytics – including through the use of artificial intelligence and machine learning techniques.
This note analyzes the principle of data minimization under the DPDP Act, its interface with other laws (including with respect to consumer protection), and discusses potential learnings from other jurisdictions, including for the purpose of implementing such principle at an operational level.


Digital Personal Data Protection Rules

Draft Digital Personal Data Protection Rules, 2025

A long-anticipated draft of the Digital Personal Data Protection Rules, 2025 (“Draft Rules”) was released by the Central Government (“Government”) on January 3, 2025 for public consultation and comments, along with an explanatory note on the contents of the Draft Rules. Once brought into effect, these rules will enable implementation of the Digital Personal Data Protection Act, 2023 (the “DPDP Act” or the “Act”), which was published in the Official Gazette on August 11, 2023, although it is not yet in force. The consultation process on the Draft Rules will continue until February 18, 2025. The rules under the DPDP Act are proposed to be implemented in a staggered manner.
To recap, the DPDP Act lays down the law for processing of digital personal data (any data in digital form about an individual who is identifiable by or in relation to such data) in a manner that recognizes both the rights of individuals to protect their personal data and the need to process such data for lawful purposes and for connected or incidental matters. For an overview of the provisions of the DPDP Act, please see our notes here and here.
This note analyzes certain key aspects introduced or further clarified under the Draft Rules.


AI legal challenges

Addressing Legal Challenges on AI Development and Use

The recent lawsuit by Asian News International against OpenAI in the Delhi High Court mirrors global trends involving allegations that large language models (“LLMs”) are being trained on copyrighted material without authorization or licenses, leading to copyright infringement. For the purpose of balancing innovation with compliance, artificial intelligence (“AI”) developers in India must take proactive measures to navigate the complex interplay of copyright, data protection, and liability issues. By securing licensing agreements, clarifying the scope of ‘fair use’ under copyright law, offering indemnities to users, and preparing for court-directed compliance actions, AI developers can mitigate risks and build legally compliant AI systems.


investing in AI

Investing in AI in India (Part 2): Tracking the Regulatory Landscape

Prospective investors in Indian artificial intelligence (“AI”) companies should familiarize themselves with the Indian government’s initiatives in AI regulation and the direction of future regulation. This note, the second of a multi-part series on investing in the Indian AI sector, outlines some of the key developments in AI in the country. However, it is important to keep in mind that India’s approach to AI governance may change in the future, given the rapidly evolving nature of technology as well as the country’s dynamic regulatory trajectory, including with respect to data, intermediary liability, digital technologies, telecommunications, and digital competition, as discussed in this note.


new data protection law

The Implications of India’s New Data Protection Law on Internal Investigations

Internal investigations may need to be carried out in India by employers in relation to a wide range of issues and/or situations. In the case of Indian subsidiaries of multinational corporations (“MNCs”), investigations may be carried out to satisfy compliance requirements under law(s) applicable to the parent entity, such as the US Foreign Corrupt Practices Act of 1977 or the UK’s Bribery Act 2010.
In the course of such internal investigations, large amounts of personal data related to accused persons and other relevant individuals may need to be processed by the employer – either by itself or through its advisors and agents. Accordingly, an informed assessment of the rights of such individuals, as well as the obligations of the employer and its advisors/agents, becomes crucial from the perspective of applicable data protection law.
This note specifically discusses the processing of personal data in the context of internal investigations, including with respect to allegations or suspicions of economic and criminal offences. While necessary rules under the Digital Personal Data Protection Act, 2023 are yet to be notified, provisions of this new law, as published in August 2023, indicate key considerations for employers (each of which is likely to be treated as a “data fiduciary”), including with respect to consent, legitimate use and potential exemptions.


AI in India

Investing in AI in India (Part 1): Key Considerations

While investments in the AI sector in India present significant opportunities, they also present a unique set of risks within an evolving legal and regulatory landscape.
Before making an investment decision, investors should consider IP issues, data-related rights and compliance, any industry-specific concerns, and the then-applicable regulatory framework, as well as potential developments in AI regulation. In addition, investors should evaluate operational and contractual arrangements, undertake technical due diligence, and assess potential liabilities and risks. Such risks include product and professional liability, algorithmic bias and discrimination, cybersecurity and data breaches, and market and reputational risks, along with concerns related to transparency and explainability.