The Next Battlefield: Artificial Intelligence Through A Soldier’s Lens
Yuval Noah Harari, a respected historian and a man who delights in unsettling the comfortable, has suggested that AI should not be called Artificial Intelligence at all. He says it deserves a different name, Alien Intelligence, because, in his view, humans will neither fully understand it nor completely control it. At first glance, this sounds dramatic, almost like the opening line of a futuristic thriller: “They created a machine. The machine created itself.” But look closely, and there is a point worth considering.
AI Is Different
AI is the next battlefield for humans. Soldiers are already trained to read shadows, judge intent, and anticipate mistakes before they erupt. AI is just another theatre: bigger, faster, noisier, and yet still governed by the same timeless rule that a tool is dangerous only when the mind behind it is undisciplined. A soldier learns early in life that the most dangerous threat is the one you cannot read: the threat from the unknown. The enemy you can see, assess, and anticipate is manageable. The unknown enemy keeps you awake at night. Harari’s argument sits somewhere in that uncomfortable zone. He is not saying AI is hostile or preparing to take over the world. He is simply saying that AI is different: not human, not emotional, not bound by the slow instincts and intuitive patterns through which human beings process reality.
AI “thinks” in ways that do not resemble human thinking. It absorbs information at a scale no person can match, learns at a speed that makes our own intellectual processes look like bullock carts racing against cars, and operates without the emotional filters that shape human judgment: filters that sometimes protect us, sometimes blind us, but always make our actions understandable to one another.
Yet calling AI alien carries its own risks. It suggests helplessness, as if we have already surrendered the battlefield before the first shot has been fired. That is not how soldiers, or nations, survive. The moment you start believing the enemy is mystical, unstoppable, or beyond human comprehension, you have already begun to lose. Control is not lost; it is simply becoming more complex. And complexity is not new to human history. Technology has always grown faster than human comprehension. The printing press, railways, radio, nuclear power: each appeared like a force beyond control before being absorbed into daily life. Every generation believes its new challenge is the most frightening one humanity has ever faced. Then history smiles and whispers, “Relax, we’ve been through worse.”
Who Will Control AI?
But the difference now is scale and speed. The printing press changed Europe in a century; AI evolves in months. Nuclear power reshaped geopolitics; AI reshapes industries, societies, and personal behaviours simultaneously. Earlier technologies affected specific sectors; AI affects all of them. And unlike previous tools, which extended human muscles or memory, AI extends human cognition itself. It is the first tool that “thinks”, if not like us, then at least alongside us.
This is why Harari’s caution is not misplaced. Any tool that amplifies human capability will also amplify human responsibility. Technology has no morality of its own; it borrows the morality of those who wield it. A scalpel can heal or kill. A drone can deliver medicine or drop bombs. And AI can solve problems or multiply chaos, depending on who programs it, who controls it, and who sets the boundaries.
Left unchecked, AI will reflect our weaknesses more loudly than our strengths. It will replicate human biases, but at scale. It will amplify misinformation, but at machine speed. It will make surveillance easier, manipulation effortless, and anonymity more dangerous. The danger lies not in the machine’s intelligence being “alien,” but in humans abandoning their own judgment, discipline, ethics, and accountability.
AI Doesn't Behave Like Us
In uniform, we were taught one simple principle: do not fear the unknown; prepare for it. Preparation is what separates survival from panic. Technology becomes dangerous only when the human mind becomes lazy. AI is no exception. It is not an alien to be feared, nor a miracle to be worshipped. It is a powerful force, neutral, and shaped entirely by the hands that steer it. History shows that humanity stumbles when it treats a new technology either as a god or a demon. Nuclear technology, when worshipped, created arms races. When feared, it created paranoia. Only when viewed with sober responsibility did it become a source of energy and scientific progress. AI too will need that balance: respect for its capability, caution about its misuse, and confidence in our ability to govern it.
Where Harari is absolutely right is this: AI will not behave like us. It does not understand regret, loyalty, fear, compassion, honour, or context. A soldier on the ground sees a child in danger and breaks formation; a machine sees only variables. A human commander may delay an attack out of intuition; a machine optimizes the timeline. That difference matters. And it is this difference that makes governance essential.
Humans at the Helm
Consider this: throughout history, the most destructive decisions were not made by tools but by humans who misused them. Cannons did not start wars; kings did. Nuclear weapons did not create the Cold War; distrust did. Social media did not divide societies; misplaced incentives and unregulated influence did. AI will not destroy humanity; humanity mismanaging AI might. And that is where the conversation should focus: not on fearing AI, but on shaping the principles that govern it. We need accountability. We need transparency. We need global norms that prevent reckless behaviour by governments, corporations, and individuals. We need systems where AI augments human judgment, not replaces it.
At the same time, we must acknowledge an uncomfortable truth: AI will change things. It will change work, politics, war, communication, and culture. Machines capable of drafting constitutions and writing propaganda will challenge our understanding of authorship and authenticity. Systems capable of analyzing battlefield patterns within seconds will transform warfare. Tools capable of predicting public reactions will reshape politics, for better or worse.
Superior Judgement Needed
In such a world, the greatest danger is not that AI becomes alien. The danger is that humans start behaving less humanly by outsourcing thinking, surrendering responsibility, and hiding behind the machine while letting it make decisions we should be making ourselves. Leaders may find it convenient to blame the algorithm. Companies may find it profitable to exploit it. Ordinary citizens may find it easier to trust AI’s shortcuts than to cultivate patience and understanding.
That is the true battlefield Harari wants us to see. And in that sense, he is right. The question is not whether AI will control us, the question is whether we will have the courage, discipline, and wisdom to control ourselves. Humanity has navigated monumental transitions before. What saved us each time was not fearlessness, but clarity of purpose. What protected us was not superior technology, but superior judgment.
Elevate Human Responsibility
AI is here to stay. Calling it “alien” may help us respect its differences, but it must not make us surrender our agency. The goal is not to tame AI, but to elevate human responsibility. Not to fear the machine, but to understand the mind that operates it. It is our own mind.
A soldier never hands over his rifle to someone he does not trust. Humanity must learn to treat AI the same way. Not with panic. Not with worship. But with readiness. In the end, the battlefield is not the machine. It is us. And as always, victory will depend on whether we remain alert, ethical, and in command of our tools, our choices, and ourselves.
(The author is an Indian Army veteran and a contemporary affairs commentator. Views are personal. He can be reached at kl.viswanathan@gmail.com )
