On the question of regulating artificial intelligence (“AI”), India’s position remains an evolving one – even while the country closely tracks, and accounts for, global developments. For a discussion on the challenges and considerations with respect to regulating AI in India, please see our previous note here.
In May last year, India’s Ministry of Electronics and Information Technology (“MeitY”) clarified that the government would regulate high-risk AI to the extent of protecting users from harm – possibly through dedicated rules under a separate chapter in the proposed Digital India Act – although a bespoke law on AI may not be passed just yet.
In response to a recent parliamentary question on whether the Indian government was contemplating any plan or policy on the use of AI and the global rules surrounding it, MeitY acknowledged on December 6, 2023 that a framework for catalyzing the growth of emerging technologies – including AI – was necessary in order to: (i) put guardrails in place for safe and ethical use; (ii) ensure that trustworthy and safe AI is available to all digital users; (iii) avoid adverse uses of AI technology; and (iv) operationalize the government’s continued expectation that AI will act as a ‘kinetic enabler’ of India’s growing digital economy, especially in connection with innovation.
Furthermore, since AI raises ethical issues and risks stemming from concerns such as (i) potential bias and discrimination in decision-making, (ii) privacy violations, (iii) lack of transparency, and (iv) continued uncertainty around liability attribution in the event of harm, various central and state government departments and agencies have now initiated efforts to (a) standardize the responsible development and use of AI, and (b) promote the adoption of best practices in this regard. However, MeitY also acknowledged that India’s current strategy on AI is inadequate to address such concerns.
At present, it appears that MeitY is keen to collaborate with like-minded democracies to negotiate a concrete global agreement on AI regulation – along with a supranational institutional framework based on international consensus. In terms of timelines, as recently as mid-December 2023, MeitY was reportedly hopeful that such a global agreement might be reached within the next six to nine months, given the current pace of AI deployment.
The EU’s AI Act
It is likely that MeitY’s re-articulated positions on AI governance are influenced by global regulatory developments, including those in the European Union (“EU”) – where a high-level political agreement on a proposed law to regulate AI (the “AI Act”) was reached recently, pursuant to ‘trilogue’ negotiations among three of the EU’s major decision-making institutions.
The AI Act contains the world’s first harmonized rules for AI systems, based on the level of risk such systems pose to safety, livelihoods and rights. For a discussion on the implications of the AI Act – including in respect of non-EU (e.g., Indian) businesses, on account of the law’s likely effects on international markets – please see here.
Similar to the effect of the EU’s General Data Protection Regulation (“GDPR”) on data protection regimes worldwide, the AI Act may provide a template for a specific legislative trajectory in other countries. Further, the final provisions of the AI Act are likely to introduce significant consequences for companies in non-EU countries as well – including on account of the law’s extraterritorial application.
MeitY had constituted seven expert groups for the purpose of deliberating upon the core goals and design of India’s AI program. On October 13, 2023, these expert groups submitted the first edition of a formal report pursuant to such mandate (the “AI Report”).
The rest of this note is divided into two parts: Part I provides a broad overview of past, as well as recent, developments on AI regulation in India. Subsequently, Part II discusses the AI Report.
As Chair of the Global Partnership on Artificial Intelligence (“GPAI”), India hosted a three-day summit (the “GPAI Summit”) in New Delhi between December 12 and 14, 2023. The GPAI Summit witnessed a ministerial declaration (the “New Delhi Declaration”) built on the consensus reached among 29 GPAI members – a group that, as of December 2023, comprised the world’s ten largest national economies by GDP (counting the EU, but not China) – in respect of advancing safe, secure and trustworthy AI. In this regard, the GPAI members also reaffirmed a commitment to implement AI responsibly and sustainably by developing regulations, policies, standards and other initiatives within their respective jurisdictions.
Earlier in December 2023, New Delhi also witnessed the eighth edition of the Global Technology Summit (“GTS”) – an annual event co-hosted by India’s Ministry of External Affairs for the purpose of deliberating upon the changing nature of technology and geopolitics, along with a new framework to address stakeholder concerns without hindering technological progress. Among other things, the 2023 edition of the GTS delved into (i) various use-cases of AI in light of the evolving regulatory landscape, and (ii) issues such as skilling and innovation.
NPAI, INDIAai and the Gen AI Report
India launched a national program on AI (“NPAI”) to harness the potential benefits of AI and other transformative technologies, with a national AI portal called ‘INDIAai’ acting as a centralized content repository. Acknowledging developments and projections with respect to generative AI (“Gen AI”) – such as those contained in a McKinsey report – and pursuant to discussions on Gen AI (including its legal challenges and ways to mitigate harm), a report focusing on the economic impacts and other important consequences of Gen AI in India was published in May 2023 (the “Gen AI Report”).
Consistent with MeitY’s general stance, the Gen AI Report maintains that all Gen AI regulations should, at a minimum, protect individuals against harm. Such harms may include violations of privacy and breaches of data protection rights; discrimination with respect to accessing services; as well as exposure to false and/or misleading news/information. Further, second-order harms may include violations of intellectual property rights (“IPR”).
Pursuant to a press release dated July 20, 2023, the Telecom Regulatory Authority of India (“TRAI”) announced a set of recommendations on leveraging AI and ‘Big Data’ in the telecommunications sector.
In light of global regulatory developments involving AI – including in the EU, the US, the UK, Saudi Arabia, Australia, Singapore, as well as the Organization for Economic Co-operation and Development (“OECD”) – TRAI’s recommendations suggest, among other things, that: (i) a unified regulatory body should be established for the purpose of creating a framework that enables transparency and accountability with respect to the use of AI; (ii) high-risk use-cases that directly impact humans should be regulated through legally binding obligations; and (iii) in the interest of leveraging the benefits of AI and machine learning (“ML”), a balanced approach must be adopted in order to address both risks and opportunities, such that existing concerns can be dealt with through appropriate compliance mechanisms, while the immense potential of AI/ML technologies can be harnessed via enabling regulations.
The Responsible AI Report
Pursuant to a March 2018 taskforce report and a discussion paper about India’s national AI strategy released later that year, NITI Aayog (under the Government of India) had deliberated upon the advisability of regulating AI. In a February 2021 approach document, NITI Aayog had proposed certain overarching principles for the development of ‘responsible AI’ (the “Responsible AI Report”).
As of last year, MeitY appeared intent on remaining focused on harnessing the benefits of AI, as the government continues to champion its ‘Digital India’ drive. In addition, the government has undertaken various other developmental initiatives involving AI, including in terms of skilling and capacity-building; health, defence and defence products; agriculture; as well as international cooperation.
The Indian AI Program
The Indian AI program aims to address certain perceived deficiencies in the country’s AI ecosystem with respect to a set of focus areas, including: computing infrastructure; data; AI financing (e.g., through a design-linked incentive scheme for offering financial incentives and infrastructural support to domestic companies, start-ups and MSMEs); research and innovation; skilling; and the institutional capacity for data.
Apart from promoting the development of AI chips in partnership with a modified ‘Semicon India’ program (which aims to develop a sustainable semiconductor and display ecosystem), the Indian AI program will comprise several other components, such as the India Datasets Platform (“IDP”) – a large collection of anonymized datasets to be used by Indian researchers to train multi-parameter models.
The AI Report
The AI Report is intended to serve as a roadmap for the development of India’s AI ecosystem, including in terms of its intersection with:
- governance (e.g., improved decision-making, efficiency and transparency);
- intellectual property;
- hardware and software infrastructure related to computation (e.g., the ‘India AI Compute Platform’ – a public-private partnership to create capacity for graphics processing units (“GPUs”)); along with
- ethics (e.g., the responsible deployment of AI systems).
In addition, the AI Report contemplates centers of excellence, as well as an institutional framework to govern the collection, management, processing and storage of data.
The IDP aims to leverage data to fuel the development and capabilities of AI in the country for the purpose of enabling better insights, superior predictions, and more intelligent decision-making. Accordingly, the IDP aims to provide a foundation for dataset sharing, analysis, collaboration and monetization among dataset providers and consumers. Built on open-source architecture, the IDP is – at its core – a unified and interoperable national exchange platform for stakeholders to upload, browse through, and consume datasets, metadata, user-created data artefacts and application programming interfaces (“APIs”) in a safe and standardized manner.
A separate regulatory framework may be established for the purpose of governing the IDP, including in terms of data privacy, information security and IPRs.
Referring to a survey conducted by McKinsey in December 2021 (which had shown that almost three out of five organizations were using AI in at least one business function), the AI Report also seeks to identify how AI can serve business – including through use cases across marketing, sales, customer services and other processes.
India’s ongoing initiatives with respect to AI governance, while adopting a ‘light touch’ approach, continue to be influenced by global developments – such as the risk-based approach embodied in the EU’s AI Act.
Importantly, the AI Report also refers to artificially created ‘synthetic’ data for the satisfaction of various unmet needs related to the development of AI models, including in respect of data privacy limitations.
According to a recent report by Gartner, 60% of all data used for the development of AI will be artificially generated by 2024, and synthetic data may completely replace ‘real’ data in AI models by 2030. Indeed, the synthetic data market is growing quickly, with several start-ups offering synthetic data generation tools, platforms and services.
However, synthetic data may also create harms and risks – including in respect of ‘deepfakes’, as recently witnessed in India. Our next few notes will (i) analyze the advantages and concerns related to the use of synthetic data, and (ii) discuss underlying issues with respect to regulating deepfakes.
Nevertheless, synthetic data may alter the competitive dynamics among digital businesses through its effects on data sharing. For example, if the data collected by an entity does not produce a comparative advantage, there is a higher probability that such entity will share that data with others. This dynamic may be especially relevant in the context of the IDP. Further, the use of synthetic data may reduce the collection of unnecessary personal information and promote data minimization – as contemplated in the EU’s GDPR, as well as in India’s Digital Personal Data Protection Act, 2023.
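To make the privacy-minimization point above concrete, the sketch below shows – purely as an illustration, and not as a depiction of any tool referenced in the AI Report or offered on the IDP – how a data curator might share statistically similar synthetic values instead of real personal records. All names and figures here are hypothetical, and the simple marginal-distribution approach is a stand-in for the far richer joint-distribution and differential-privacy techniques used by production synthetic-data tools.

```python
import random
import statistics

def synthesize_ages(real_ages, n, seed=42):
    """Generate n synthetic ages by sampling from a normal distribution
    fitted to the real data's mean and standard deviation.

    Illustrative only: real synthetic-data generators model joint
    distributions across many fields and add formal privacy guarantees.
    """
    mu = statistics.mean(real_ages)
    sigma = statistics.stdev(real_ages)
    rng = random.Random(seed)  # seeded for reproducibility
    return [max(0, round(rng.gauss(mu, sigma))) for _ in range(n)]

# Hypothetical records: the real values never leave the curator;
# only the synthetic values would be shared with downstream users.
real_ages = [23, 31, 35, 29, 42, 38, 27, 33]
synthetic_ages = synthesize_ages(real_ages, n=5)
```

Because the shared output preserves aggregate statistics rather than individual records, an approach along these lines supports the data-minimization principle contemplated in the GDPR and India’s Digital Personal Data Protection Act, 2023.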
This insight has been authored by Dr. Deborshi Barat (Counsel), who can be reached at email@example.com for any questions. This insight is intended only as a general discussion of issues and is not intended as a solicitation of work. It should not be regarded as legal advice, and no legal or business decision should be based on its content.