Before any major global technology summit, participating states and private actors undertake extensive preparatory work. Domestic consultations are held, issues are debated across government departments, and think tanks and other stakeholders are engaged. By the time delegations arrive at the venue, national strategies and red lines are usually well defined. This caution reflects the fact that global technology governance directly affects national security, economic competitiveness, industrial policy, and societal values. As a result, summits typically become arenas where pre-formulated national interests are projected, negotiated, and occasionally contested, rather than spaces for spontaneous consensus-building.

This pattern has been particularly evident in recent discussions on artificial intelligence (AI), especially since the release of ChatGPT, a generative AI chatbot developed by OpenAI, in November 2022. Across regional and international AI-related meetings, it has become clear that neither state actors nor industry leaders arrive without settled positions. Consensus, where it emerges, is largely the product of prior alignment rather than on-the-spot deliberation.

Well before the first global AI Safety Summit at Bletchley Park in the United Kingdom in 2023, several states and leading industrial players had already articulated their domestic AI strategies. The European Union entered the discussions anchored firmly to its draft AI Act, prioritising binding regulation and legal accountability. In contrast, the United States articulated a vision centred on voluntary commitments and industry-led innovation, reflecting its discomfort with what it saw as over-regulation and its concern for protecting domestic industry interests. Unsurprisingly, the outcome of the Bletchley Park summit mirrored these pre-existing positions. While debates were substantive, the final conclusions were generic, emphasising cooperation and risk mitigation while avoiding hard regulatory obligations.

India will host the fourth summit in this series, the AI Impact Summit, in New Delhi from February 19 to 20, 2026. Positions already taken by states and private actors with significant stakes in shaping global AI architectures will strongly influence the summit’s outcome. It is therefore important to review where key players currently stand on AI governance to assess whether the February 2026 meeting will merely formalise a predetermined consensus or whether India will still have scope to disrupt an already-written script.

The major state actors in the AI domain remain the United States, China, and a handful of European countries. European states typically adopt a collective approach under the EU framework, with the United Kingdom remaining a notable exception.

The EU formally introduced comprehensive AI legislation with the adoption of the EU AI Act on June 13, 2024. The Act establishes a risk-based classification system: AI systems posing “unacceptable risk,” such as social scoring or manipulative AI, are banned outright; high-risk systems are subject to strict regulatory requirements; limited-risk systems are primarily governed through transparency obligations; and minimal-risk systems are largely unregulated. Notably, most AI systems currently deployed in the EU fall into this minimal-risk category.
Compliance obligations are focused mainly on providers of high-risk AI systems. The Act also places obligations on providers of general-purpose AI (GPAI) models, requiring them to supply technical documentation, comply with copyright rules, and publish summaries of training data sources. While legally binding, the framework is complex and technically demanding, particularly for large-scale model developers.

Perhaps recognising these challenges, the EU introduced a GPAI Code of Practice on July 10, 2025. Developed through a multi-stakeholder process and framed as a voluntary compliance tool, the Code is intended to support practical and consistent implementation of the AI Act while encouraging innovation. While the EU presents this as an implementation aid, it also functions as a soft-law mechanism that offers model providers flexibility in demonstrating alignment with regulatory requirements.

Although the Code does not replace the AI Act’s legally binding obligations, it effectively shifts significant interpretive and operational discretion to industry actors. This allows major companies to influence how compliance standards are shaped in practice, potentially enabling them to set de facto global norms. In this sense, the EU’s approach reflects a hybrid strategy: a strong commitment to binding regulation, moderated by soft-law instruments designed to ease industry concerns and build broader acceptance.

At the Paris AI Action Summit in February 2025, participating countries were invited to sign a non-binding “Pledge for a Trustworthy AI in the World of Work.” Sixty countries, including Canada, China, France, and India, signed the declaration. The United States and the United Kingdom did not. There is little indication that either country will reverse this position at the New Delhi summit.

The US approach to AI governance has remained distinct from the EU’s risk-centric framework. In 2022, the US Office of Science and Technology Policy released a “Blueprint for an AI Bill of Rights,” outlining five principles aimed at protecting the rights of the American public in the age of automated systems. These principles focus on issues such as algorithmic discrimination, data privacy, and human alternatives to automated decision-making, but they do not constitute binding regulation.

In July 2025, the US announced “America’s AI Action Plan,” shaped by three core priorities: placing US workers at the centre of AI development and deployment; ensuring AI systems are free from ideological bias; and preventing the misuse or theft of advanced technologies while monitoring emerging risks. The Plan emphasises incentives for businesses and reflects a broader preference for innovation-friendly, non-binding rules. Compared to the EU’s regulatory approach, the US framework remains flexible and industry-oriented, driven in part by concerns that excessive regulation could stifle innovation.

China, meanwhile, unveiled its Action Plan for Global AI Governance on July 26, 2025. Rather than proposing a legalistic treaty framework, the Chinese initiative promotes global cooperation through 13 voluntary actions focused on coordination, standards alignment, and shared risk management. The Plan envisions broad participation and hints at the creation of a global AI cooperation organisation. China’s approach reflects its ambition to play a leading role in shaping international norms while emphasising a strong role for the state in AI governance.
Taken together, these positions reveal clear divergences. The EU seeks a legally binding framework but pragmatically relies on soft-law instruments to build consensus. China prioritises international cooperation and state-led governance, while the US treats AI primarily as a commercial and innovation-driven domain, favouring voluntary and flexible rules. At the strategic level, US AI policymaking is also shaped by concerns over China’s rapid advances in the field.

India’s expectations for the 2026 AI Impact Summit are more consensus-oriented. As host, India aims to promote a global framework that balances safety, inclusion, and growth while amplifying Global South perspectives. India’s approach emphasises ethical deployment, equitable access, and socio-economic impact, with governance framed around principles, guidelines, and national strategies rather than binding regulation.

Like most countries, India is unlikely to push for legally binding AI commitments at the summit. Instead, its objective is likely to be procedural and political: securing a broadly endorsed, non-binding declaration that signals international convergence on AI governance norms. From New Delhi’s perspective, success will be measured less by the legal force of the outcome document than by the breadth of participation, particularly among major states and private-sector actors.

Attention will inevitably focus on the position taken by the United States. The Trump administration has shown increasing reluctance to formally participate in multilateral mechanisms, withdrawing from several UN-sponsored processes and other international agreements, including treaty frameworks linked to the India-led International Solar Alliance. US dissatisfaction with India’s close ties with Russia and the lack of progress on a mutually agreeable tariff regime may further limit Washington’s willingness to take a proactive stance at the New Delhi summit.

India must also engage constructively with industry stakeholders. In recent years, the country has attracted major investment commitments from global technology firms, with US companies alone accounting for $67.5 billion. Google plans to build a $15 billion AI data centre in Visakhapatnam; Microsoft has committed $17.5 billion to cloud and AI infrastructure; Amazon has pledged $35 billion through 2030; and Tata Electronics and Intel have announced a $14 billion semiconductor venture.

Such investors may expect policy accommodation, particularly on data governance and regulatory certainty. However, India must ensure that its data protection regime is not compromised. Greater clarity is needed on interoperable data-protection frameworks covering cross-border data flows, foreign cloud services, AI model deployment, and intellectual property protection.

Balancing public regulation with private investment will be delicate. Major AI innovators may resist restrictions that limit commercial interests, particularly around data governance, which remains central to AI development. India must also avoid uncritically adopting AI frameworks designed by the US or EU, as doing so could marginalise the concerns of developing economies with limited technological capacity.

As India seeks to reconcile the priorities of major powers, emerging economies, and private actors, its diplomatic skills will be tested. Given these complexities, forging a mutually acceptable AI governance framework at the February 2026 summit will be challenging.
Ultimately, effective AI governance must balance innovation with the protection of individual rights, public security, and legitimate industry interests.

Ajey Lele is a researcher and the author of the book Institutions That Shaped Modern India: ISRO.

This piece was first published on The India Cable – a premium newsletter from The Wire – and has been updated and republished here.