By mid-2025, the technological world had reached a firm realisation that powerful (monolithic) foundational models like those from Google (Gemini), Anthropic (Claude), OpenAI (GPT) and Meta (Llama) meant little by themselves anymore. Between Qwen and DeepSeek, Chinese AI powerhouses were releasing a new model every few days. The big shift in perspective began with the question, “How powerfully can these models be applied to real-world use-cases?” One example is the visible thematic change in Google’s messaging from “we have the most powerful model” to “we are now applying our models to science research, to biological research” and so on.

Within the larger technological arc of artificial intelligence (AI) as a paradigm, the generative AI boom marked an epoch in which AI went from an experimental technology, largely utilised by enterprises, to a widespread, individualised application-level experience – especially as LLM-driven machine ‘intelligence’ began replicating basic human thought: goal-oriented and systematic in nature. The next wave of applications, namely agentic AI, displays a form of machine ‘intelligence’ that reflects how humans work, with increasing autonomy. The likes of Manus – recently acquired by Meta – and Claude’s Cowork mark a proliferation of this epoch. Experts point to large action models (LAMs) paving the way for ‘embodied AI’ next: systems grounded in physical presence and real-world experience, reflecting how humans act. However, OpenAI’s own data reveals that people largely use its offering as a personal assistant for simple tasks, suggesting that AI is primarily being deployed in areas where substitutes and other tools already exist. The users of these forms of AI are mostly in “rich countries” where affordable alternatives are available, lending a sense of redundancy, if not outright futility, and leaving the bulk of spending on AI’s application in these countries to businesses.
In their case too, however, a 2025 MIT report found that 95% of the surveyed businesses were unable to realise financial returns from their AI investments. Reflecting on the growing global apprehensions over an AI bubble bursting, Professor Bhaskar Chakravorti, dean of global business at Tufts University’s Fletcher School of Law and Diplomacy, and one of the most vocal critics of the excesses of the incredible AI boom of the last few years, observes that “while a bubble makes it hard for competitors to step away from the treadmill of continuous acceleration, a bust creates conditions that favor resource efficiency, sustainability, and prioritisation of AI that produces actual value and does so with more guardrails in place.”

This has allowed for the propagation of a concept called ‘Small-AI’: a form of AI built upon narrower datasets, with models trained for a specific purpose. It is typically driven by local context, and the problems at hand, to serve needs across verticals like agriculture, healthcare, education, finance, and governance. This rendition of AI factors in environmental constraints like poor infrastructure, low productivity, and poor human conditions. It is essentially designed for small inputs driving a big impact on people’s everyday lives and wellbeing. Three approaches have come to the fore as central to developing and deploying Small-AI:

Tactical – distributed (hybrid/edge) infrastructure. While large-scale AI computing infrastructure remains essential to advanced AI development, it has also given rise to a global “compute divide”. Last year’s ‘DeepSeek moment’ sent shockwaves across the global AI ecosystem for threatening to disrupt this incumbent ‘Big-AI’ strategy.

Technical – open-source principles for model ownership.
When access to core pillars of a transformational technology like AI feels uncertain, given the current environment of intense global economic and military contestation, it is natural for countries to look for alternatives and build their own capacity. The upside to this constraint could manifest as a more transparent, collaborative and diversified approach to technology development and deployment, like open-source AI, and healthier global competition.

Strategic – multinational partnerships for advanced AI R&D. Across core pillars like compute, talent, and data, this approach mirrors past projects like CERN, Airbus and the Human Genome Project that have fostered responsible technological innovation and governance globally, essentially offering a more equitable and sustainable alternative to compute monopolies, talent drain and entrenched geopolitical leverage.

The 2026 AI Impact Summit comes to India as the fourth iteration of an event known as a global pulse-check on this critical technology and its future. The event began with the theme of ‘safety’ in the UK in 2023, moved to a focus on the importance of AI (innovation, inclusion and advancement) in Seoul in 2024, and then to (sovereign) ‘action’ in Paris in 2025. With India becoming the first nation in the ‘Global South’ to host this important conversation, at a disruptive stage in world affairs, there is hope that a societal lens can be brought to the opportunities and the risks that this technology brings. Beyond the seemingly zero-sum great-power competition between the US and China to determine the course of AI proliferation and regulation, emerging powers like Brazil, Indonesia, South Africa and India can represent the needs of the global majority in areas like the democratisation of AI models, supply chain resilience, capacity building and technical standards.
With many of the world’s leading thinkers and practitioners on AI expected to participate, not to mention the numerous heads of state due to attend, the summit, themed ‘People, Planet, and Progress’, offers an opportunity for a timely review. Renowned Stanford professor Fei-Fei Li, co-director of the university’s Institute for Human-Centered Artificial Intelligence (HAI) and one of the keynote speakers at the upcoming summit’s Research Symposium, aptly advises, “As capabilities advance at an astonishing speed, opinions about AI polarise: some see it as limitless promise and a bright future, while others view it as an imminent potential catastrophe. This highlights the urgent need for deep understanding and balanced thinking about a technology that has become central to our lives and the trajectory of civilisation”.

The India AI Impact Summit 2026 is being held from February 16-20 at Bharat Mandapam, New Delhi.

Rahul Batra is ex-Google and works at the intersection of technology and geopolitics.