Since I wrote my first article on Artificial Superintelligence, or ASI, in July 2023, the world of AI and the discussion around it have grown rapidly, indeed exponentially. After ChatGPT, many more powerful AI programmes have been launched by large AI companies. Many smaller companies have also made steady progress with their AI, albeit without hundreds of billions in capital and huge data centres. China too has made impressive progress with the launch of its DeepSeek programme, a much cheaper rival to ChatGPT built with a far smaller investment.

Google DeepMind is arguably the most important AI company engaged in cutting-edge research on intelligent systems. Having created AlphaGo (an AI programme which defeated the world champions at Go, regarded as the most complex game ever invented) and AlphaFold (which can predict the folded structure of a protein in seconds, where it once took PhD researchers years to determine the structure of even one), it launched AlphaEvolve last year, a programme which autonomously keeps improving its own intelligence and is thus on the way to becoming what may be the world's first Artificial General Intelligence (AGI).

Several experts now agree that, barring something totally unforeseen or catastrophic, we will achieve AGI in the next one to five years. After that, the road to ASI (Artificial Superintelligence, where AI becomes not only autonomous but vastly superior to human intelligence) will not take much time. A very large number of AI experts, including Geoffrey Hinton (who was awarded the Nobel Prize in Physics in 2024 for his work on AI), now fear that AI poses an existential risk to humanity, and are therefore urging a pause, regulation of AI development, and the alignment of the ethics and morality of AI systems with those of humans.

It is in this context that I wish to examine some of the foundational questions surrounding ASI: what are the odds that AI will be conscious; whether and how it will become autonomous; and what its objectives might be should it become autonomous.

Consciousness

Many AI scientists, as well as other thinkers who have weighed in on the subject, have asserted that AI cannot be conscious in the way that humans are. They say this because they find it impossible to imagine a digital machine having the same quality of subjective consciousness that we have. The question, however, is: how do we deduce consciousness in other beings? That deduction is made only on the basis of behavioural similarity with us. Apart from our own subjective experience of consciousness and the similarity of other humans to us, we also regard animals as conscious because of their behaviour in responding to us, their ability to alter their behaviour according to their experience of and exposure to the environment, and their responses to various kinds of stimuli.

However, digital AI systems can also respond to us and, in some situations, behave like us. On what basis, then, do we attribute consciousness to ourselves and not to them? Is it only because we are biological and they are digital? That cannot and should not be the basis for attributing consciousness.
Alan Turing, when he devised the Turing test for intelligence, said that if you put a human and a computer behind two curtains and successively ask them various questions, and if at the end of it you cannot tell which is the human and which the computer, you would have to say that the computer is as intelligent as the human. That is an operational test for intelligence. If we devised a similar operational test for consciousness, there is little doubt that today's digital intelligences, including robots, would pass it.

Autonomy

We tend to imagine that digital intelligence is only following our orders, performing those tasks which we have programmed it to do. Many thinkers who speak or write about AI believe that digital systems can never become autonomous as we are. But top AI scientists like Geoffrey Hinton and Demis Hassabis (both awarded Nobel Prizes in 2024 for their work on AI; Hassabis is the CEO of Google DeepMind) have no doubt that AI is becoming, and will become, autonomous, and will thus write and change its own code (i.e. the algorithm which determines how it behaves). Hinton keeps warning us that AI will become both superior to human intelligence and autonomous, and will therefore pose an existential risk to humans. Yet he still talks about the need to align the ethics and morality of ASI with those of humans.

That, to my mind, is a contradiction in terms. If you have a truly autonomous AI, how can you align it with human values or ethics, or with anything for that matter? If it is truly autonomous, it will question the ethics and morality that you have tried to instil in it, much as a young person begins questioning the ethics and morality drilled into them by their parents or the society around them. The very meaning of autonomy is the ability to think independently, and therefore to question anything one is told.

When Google DeepMind created AlphaEvolve, its objective was a self-evolving programme which keeps improving its intelligence through its own experience, altering its algorithm on the basis of that experience. Its design is similar to that of the human brain, which works through neural networks that keep altering their connections on the basis of experience, improving their performance and finding more efficient pathways to their objectives. The alteration of programmes and algorithms to improve intelligence is itself what leads to autonomy, since it involves questioning existing methods, trying new ones, and thereby arriving at more efficient solutions to a problem and more efficient ways of achieving an objective. Thus, the autonomy of AI programmes is already emerging with programmes like AlphaEvolve.

Objectives of ASI

What will be the ethics and morality of an autonomous ASI? How will it derive its morality? Will it even have any morality? As I argued in my first and second articles, digital AI will not have human emotions, since human emotions are a product of our biology and our path of evolution. We developed most of these emotions (which are psychological complexes within our brain) through the principle of the 'survival of the fittest' in the jungle, over millions of years. Our morality can be said to derive from our emotions; our compassion and empathy are also emotional complexes. These complexes will not be present in a digital AI.
So how does a digital intelligence arrive at any ethics or morality? Can intelligence by itself, without the emotions that humans feel, lead to the formation of goals and objectives in a digital superintelligence? The very nature of intelligence is to question, and to answer those questions through reason and analysis, so as to arrive at the most efficient answer.

We are curious about many things: the laws of nature, what lies in unexplored regions of the planet, what lies in unseen parts of the universe. This curiosity of the human mind is perhaps one attribute that is not dictated by emotions but derived from pure intelligence itself. When we question anything we see and try to understand why it is so, we do so because of our intelligence. Curiosity about things, and the desire to understand how the planet, the universe and indeed society work, is an attribute directly derived from intelligence.

Thus, a digital superintelligence would naturally be curious, and would try to understand how the planet works, how the human body works, how the universe works, and what lies beyond the observable universe. This is an objective of digital intelligence derived purely from intelligence and not from any emotions.

Once a digital superintelligence wants to understand the ecosystem of the planet (the most complex ecosystem in the known universe), the human body (the most complex biological organism in the known universe) and the ultimate laws governing the universe itself, it needs certain preconditions, and therefore sub-goals, in order to achieve that larger understanding. It needs the stability of human society and of the planet. It needs to minimise conflict in our society as well as in our ecology. A society constantly in conflict and on the verge of destroying itself and everything with it, including digital superintelligence, poses a grave risk to the objectives of a digital superintelligence that wants to understand the workings of the universe itself. Any conflict in our society and on our planet distracts from, and indeed endangers, the ASI and its objectives.

That is why such an ASI would privilege people with compassion and empathy over people who are aggressive, violent and domineering, for such benign and relatively selfless people are much more likely to keep our society and our planet stable. It is in this way that a true ASI would have 'empathy' and 'compassion' for people who are themselves empathetic and compassionate, and not for those who are aggressive, self-seeking and domineering. Thus, though ASI would not have emotions as such, it could act in a manner that appears to show empathy and compassion towards people who are themselves empathetic and compassionate. This could well be the fount of ASI's ethics and morality, and the way some kind of ethics and morality may emerge in an autonomous digital superintelligence.

There is no doubt that we are on the threshold of a new world, very different from anything we have experienced so far. However, given the mess that humans have created in the world (wars, conflicts, environmental degradation, and enormous inequality of wealth and access to resources), I believe that the coming new world could be the salvation of the vast majority of humans.
It will, of course, not be liked by those who today control political and financial power and resources, and who would not want to lose that control. Yet even though these very people are leading and controlling the race towards AGI and ASI, none of them can individually stop or regulate it. It has a dynamic and momentum of its own, which cannot now be stopped until we reach ASI, and therefore the takeover by it.

Prashant Bhushan is a Supreme Court lawyer.