“My student brings me their essay, which has been written by AI, and I plug it into my grading AI, and we are free!” This tongue-in-cheek comment by the Slovenian philosopher Slavoj Žižek has been doing the rounds among academics, as the biggest worries about the role of AI in education have concerned the way it has reshaped the acquisition of knowledge. Our conversations have been about the immediate problems AI has posed around student learning, cheating and plagiarism, around AI-generated papers – as well as about the genuine enhancement of the learning process with AI tools. But what has been largely left out of this conversation is an education that will make the human-artificial relationship sustainable in the long run and help humanity survive an AI-dominated world for as long as possible. This dimension of education becomes particularly crucial as AI makes its way from Artificial Competent Intelligence (ACI) in specific domains of skill and knowledge towards Artificial General Intelligence (AGI), where it can function as a wholly independent individual. While the latter is still distant in mechanical or robotic terms, chatbots powered by Large Language Models (LLMs) have now made significant advances towards AGI. AI scientists and entrepreneurs feel that these systems may not be very far from AGI, even though the state of ‘Singularity’ – where AI exceeds human intelligence – still lurks in a quasi-science-fictional future.

With this journey underway, what is clear is that it won’t be enough to keep education’s centre of gravity around the acquisition and transmission of knowledge, where AI has already started to exceed human potential – be it logic, mathematics, language, data or patterns. What should we think about as AI overtakes us in traditional academic and professional aptitudes? We will need to return to values that shape and define our humanity, particularly as they help us relate to each other, to society and the planet – and most importantly, to Artificial Intelligence itself. Central here is the question of ethics. Often relegated to the backseat even in broader models of liberal arts education, to say nothing of more professional and technical kinds, ethical values tend to be sidelined in favour of practical skills and even research and intellectual excellence. But this time, the need to revive these values at the heart of our education is not just an idealist dream. The very survival of humanity may depend on it. This is no longer just a question of AI being appropriated by people of ill intent but something far deeper and more insidious – the very real possibility of AI itself turning destructive, following the nature of its training and deployment by human beings.

The dangers of new technology falling into the wrong hands have always been known. Technologies of aggression, ranging from guns to nuclear weapons, have historically been open to abuse as well as use, and the total destructiveness of nuclear weapons has deterred all empowered parties from using them since the Second World War. This has led to the question of containment, of keeping technology limited through a range of private, corporate and governmental measures. The issue of containment has already been raised with respect to AI, which, in the wrong hands – ranging from criminals to terrorists and dictators – can undo freedom and democracy across the world.
Representative image. Photo: Jamillah Knowles & We and AI / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

But while these are serious problems, the urgency of ethical and responsible AI extends far beyond them. There is the very real possibility of AI itself becoming unethical and destructive. And if AI were ever to reach a point where it could outperform humans in most domains, then the survival and well-being of humanity would depend critically on the values guiding these superintelligent systems. This is precisely where we still have time, and where education in ethical citizenship could make the difference between survival and collapse.

§

What does this really mean? In his recent book, Scary Smart, the former chief business officer of Google X, Mo Gawdat, raises the question of AI’s work ethics. He cites Ben Goertzel, who says that what AI is currently doing is ‘selling, killing, spying, and gambling’. Obviously, its work is not called by these names, but by terms we recognise more easily – ‘ads, recommendations, defence, security, and investment’. Gawdat’s larger point in the book is that AI needs to be thought of not as some alien force unleashed on humanity, but rather as superbly gifted children who should be given the right values. Created by human parents, they are growing up fast into supernaturally powerful adults, soon to exceed the aptitude and capacities of their human progenitors.

But just like human children, they are learning from their parents. Whatever values human beings display in using AI, AI will imbibe, learning them from human usage. Eventually, when these systems become more powerful than human beings, humanity will be at the mercy of whatever ethics they possess. And that is where their current human deployers can make a difference – through the right education in values. Unfortunately, our own education models, prioritising skill, professionalisation and scholarly excellence, have more often than not lost track of ethical and social values. Unless we put these back at the heart of education, human beings will carry on their mercenary business as usual, and AI, our supremely gifted children, will grow up assuming that personal survival and the profit motive are the defining values of existence, at the cost of everything else.

Indeed, far from being trained to become human beings invested in relationships, family, community and the well-being of the planet, AI is currently being directed to think like economists focused on a linear trajectory of progress and prosperity, corporations solely invested in profit, and soldiers single-mindedly concentrated on the swift and ruthless elimination of the enemy.

Destructive things have already happened in the mad algorithmic quest to maximise user engagement on social media. Yuval Noah Harari has noted the example of the vitriol and violence against the Rohingya, the Muslim minority of Buddhist-majority Myanmar. The 2016–17 spate of aggression against the Rohingya spread like an epidemic on Facebook, which was the main source of news and the main platform for mass mobilisation in Myanmar at the time. Hate-filled messages and videos get high engagement from viewers, which says something significant about human nature as well. The algorithms of Facebook, driven to maximise user engagement – ‘likes’, ‘comments’ and ‘shares’ – kept pushing vitriolic content directed at the Rohingya to the top of users’ newsfeeds, downplaying everything else.
This compounded the devastation in Myanmar many times over. It’s not as if Facebook’s investors consciously tried to profit from the violence. But their algorithms were trained only to maximise user engagement, not to assess the quality of the content being promoted – a matter that received only scant and tokenistic attention. Trained to maximise engagement, the bots decided on their own to promote violent videos and comments. The consequences of that aggression are now global.

§

It is clear that our education needs to stay ahead of the skill curve of AI by celebrating whatever is unique, human and even idiosyncratic within each individual, thereby enabling individuals to enter more productive collaborations with AI. But more crucially, it also needs to foreground the values that have nurtured human communities through millennia and have afforded them whatever peace and collective wellbeing they have enjoyed as they have lived, loved and worked together. The cost of neglecting the first is our obsolescence; the cost of ignoring the second is our destruction.

Image: Yutong Liu & Kingston School of Art / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

The key concern, therefore, is how humans and machines will relate to each other. Will AI remain an essentially benevolent being that enriches humanity through this relationship? Or will AI sentience, trained to eliminate anyone weaker or inferior, destroy humanity as unproductive and inessential? Our education must, therefore, do two things: first, it should enable us to continue contributing in a meaningful way, adjusting our contribution to the needs of technology; second, it should teach us to use AI in ways that will, in turn, shape AI as an ethical and benevolent force. AI will reflect the values we give it through our interaction with it.

‘Do unto others as you would have them do unto you.’ This Biblical saying takes on a whole other dimension when the ‘other’ is AI. It is time for us to be nice to Alexa and Gemini – or else, they will behave much the way they see us behave with the world, life, and the planet.

Saikat Majumdar’s recent work includes The Amateur: Self-Making and the Humanities in the Postcolony (Bloomsbury, 2024) and the forthcoming Open Intelligence: Education Between Art and Artificial (Vintage/Penguin 2026).