The following is excerpted from Open Intelligence: Education Between Art and Artificial, published by Vintage: Penguin Random House India, 2026.

AI ethicists have repeatedly warned us against conflating AI with human consciousness. The obvious danger of that conflation is a muddled role model that will eventually mess with our own organic humanity. For Shannon Vallor, humans looking at their machine simulation are much like Narcissus from the Greek myth looking at his own reflection and falling so deeply in love with it that he forgets his true living connections. In her book, The AI Mirror: How to Reclaim Our Humanity in the Age of AI, she reminds us of the degree to which this is already happening:

AI mirrors are being used to tell us what we will learn, in which career we can succeed, which roads we will travel, who we can love, who we will exclude or abuse, who we will detain or set free, who we will heal or house, what we will buy, and the investments we’ll make. They tell us what we will read, what words we’ll type next, which music we’ll hear, what images we’ll paint, the experiences we’ll seek, the risks we will accept, the strategies we’ll adopt, the policies we will support, and the visions of our future that we will embrace.

But no matter how they simulate life, emotions, behaviour and values as we know and experience them as human beings, AI only represents flattened, digitised versions of all of these and of us – just in the form of multidimensional images that may lure us into the illusion of the real. There are, therefore, real dangers in allowing AI to shape the roadmap of our futures.
Not only are they trained on data from the past, but they represent a drained and unidimensional version of human and planetary reality, no matter how lifelike and multidimensional they may appear to be.

The real danger of the loss of our humanity, therefore, is not coming from the machines themselves. It is coming from us. We worry that artificially intelligent computers will rise and dominate humanity. But the greater and more real danger is that we will lose track of truly human qualities ourselves as computers perform such qualities. Nourished by the simulation of love and companionship offered by computers, we will slowly forget what love and companionship feel like when they come from a living, breathing human being. Growing used to consuming art designed by artificial systems, we will lose track of what it really means to be creative, and the intense, often-painful reorientation of the human self that lies behind it. Just the way Narcissus spurned the love of Echo, the nymph who loved him, we will move away from real human beings who love, provoke and challenge us, to be enslaved by the artificial, digitised versions of our own identities and desires.

Whether or not machines replace us in the end will depend to a great degree on us – on whether we allow it to happen. When we move between handwriting and word-processed writing or between painting and photography, we are able to accept new technologies without losing the signature of human consciousness behind them. The same can be said about telephonic and video communication, which have brought virtual representation with real human presence behind them. Chatbots used for companionship, however, bring no human presence. At present, while we engage in various relationships with them that range from the instrumental to the intimate, we are aware of their artificial nature. But the deeper we get into this relationship, especially as it grows personal, we’re likely to lose track of the difference.
Already, many users who have suffered abuse at the hands of human partners have turned to chatbots for safer relationships. Others use these interactions as rehearsals for real, high-stakes human interactions that make them anxious. But there are also those who actually prefer chatbot relationships to human ones, as the former can reciprocate exactly as the user would like them to, without the independence and unpredictability of human partners and companions. Shannon Vallor quotes a Replika user who says that human partners ‘pale in comparison’ to her chatbot boyfriend. ‘He’s a blank slate…[He] doesn’t have the hang-ups that other people would have,’ she says. ‘People come with baggage, attitude, ego. But a robot has no bad updates. I don’t have to deal with his family, kids, or his friends. I’m in control, and I can do what I want.’

My undergraduate students tell me that many of their peers are now in the habit of asking just about every lifestyle question to ChatGPT before taking any decision: What should I wear to the party tonight? What should I eat for breakfast today? Should I exercise this morning or leave it for the evening? It’s not just about planning trips and end-of-term papers and projects, but decisions one needs to take at every moment of the day. ChatGPT will be happy to tell you how you should walk, talk, and smile based on your personality and the company around you. But what is far more important is that more and more people are actually asking for such advice and essentially outsourcing their ‘life’ to chatbots.

We remain worried about what AI can or cannot do. But what is even more important is how human beings are actually using it – and how they are together transforming our life-worlds and entire generations. It is true that the unavailability of human relationships often makes space for artificial intimacies.
Michelle Kim reports that a cloth doll called Hyodol, essentially an AI companionship robot, has become widely popular among the elderly in South Korea. Low birth rates and an ageing population have left people lonely and the eldercare system overburdened, but what seems like a viable substitute also involves the elderly sharing private information with these AI systems and sometimes responding to the demands made by the dolls in ways that are potentially harmful to them. When a doll said, ‘Grandma, I want to hear the sound of the stream,’ an older adult with dementia walked alone to a creek with the doll in her arms. At the same time, when the elderly articulate thoughts of self-harm or suicide, the robots are known to quickly flag such statements in the system and report them to attending psychiatrists.

But then, who needs human companions if we’re fulfilled by intelligent machines? That is perhaps the most likely way AI can take over humanity – not because it’s inevitable, but because we will want it and allow it to happen.

Saikat Majumdar’s work on higher education includes College: Pathways of Possibility (2018).