As Nepal Goes to the Polls, Deepfakes and AI Manipulation Undermine Democracy
The smartphone that freed a generation is now being used against it. The platforms that carried the protest are now carrying the smears. The digital spaces where young Nepalis found their political voice are today flooded with manipulated images, fake audio, and AI-generated lies targeting the very candidates their movement made possible. The weapon and the wound are the same object.
Nepal has seen this before. During the “Gen Z” uprising of September 2025, fake news distorted the movement in real time. Today, with elections looming, the same playbook is back, this time more sophisticated, more targeted, and with higher stakes.
In late January 2026, a manipulated photograph of Sobita Gautam began circulating across Nepali social media.
Gautam is a candidate of the Rastriya Swatantra Party (RSP), a leading political party founded in 2022, contesting Chitwan Constituency No. 3.
The fake image carried a simple, damaging label: “American agent”. The photo used in it actually shows a public youth policy dialogue event held on 4 July 2025 in Kathmandu, co-organised by the US Embassy Youth Council and Youth Innovation Lab. Gautam was present as an invited speaker to discuss parliamentary processes.
Gautam, the youngest directly elected member of the House of Representatives in the last election, held in 2022, and one of the prominent figures who made the September uprising possible, was being falsely projected as a foreign puppet working against Nepal's interests.
By the time the fake image was debunked, it had already travelled far, faster than the truth. For Gautam, it was a personal attack. For Nepal, it was a warning.
But the 30-year-old politician was not the only victim.
In this election, no candidate was too big or too small to target.
Across party lines, high-profile candidates faced a coordinated wave of disinformation built on familiar tactics. Fabricated videos distorted speeches to inflame ethnic tensions or invent populist promises. AI-edited images exploited cultural and moral sensitivities, while old protest footage was repackaged as evidence of local rejection. Fake social media accounts impersonated candidates to circulate damaging claims.
Whether targeting Gen Z figures or established leaders, the methods varied in format but shared one objective: to erode voter trust and destabilise reputations ahead of the 5 March poll.
To understand what is happening to Nepal's election, one has to understand what happened on its streets just months ago.
When the government blocked 26 social media platforms in September 2025, it enraged Nepal's young people rather than silencing them. Within days, hundreds of thousands were on the streets. Not summoned by a party, not led by a union, but organised through Discord servers, TikTok videos, and WhatsApp threads running on VPNs.
The movement had no manifesto and no single leader. It had a hashtag: #Nepokids. The videos of politicians' children flaunting extraordinary wealth while one in seven Nepalis work abroad just to survive had already lit the fuse. The social media ban simply set it off.
After the Gen Z protest, Prime Minister KP Oli had to resign, parliament was dissolved, and an interim setup was put in place, under which Nepal headed toward fresh elections. The September uprising created space for a new kind of candidate: younger, independent, untethered from the old parties. People like Sobita Gautam in Chitwan, who claim to represent exactly what the movement demanded, have never been well represented in Nepali politics.
Weaponisation of AI
But here is where the story turns: the same digital spaces where young Nepalis found their political voice are now flooded with manipulated images, fake audio, and AI-generated lies targeting the very candidates their movement made possible.
"Discord, TikTok, Instagram gave us speed and reach that no traditional organising structure could have matched," said development sector professional Rohit Dahal. "That was real. That mattered."
Then he watched the same platforms be "weaponised against the very people they are supposed to empower."
Working in Kathmandu with digital communications and youth outreach, the 29-year-old has watched the movement with both his heart and his professional eye. The deaths of September 2025 still haunt him. "These were my peers," he told Sapan News. "Some of them were 17. Some... did not come home."
He never trusted social media blindly. "What I trusted was the people behind it. The platform is just a tool. And right now, someone is holding that tool with a very different purpose than the young people who stood there in September. That contrast is hard to sit with."
Asked what he would do if a fake about a candidate he believed in went viral, Dahal did not reach for outrage. He reached for the method.
"The first thing I would do is not share it. The impulse in a high-stakes moment is to react and that reaction is often exactly what the misinformation is designed to trigger. Sharing a fake video to deny it still gives it oxygen."
“Verify first, trace the origin, and call it out with evidence rather than emotion,” he said.
But he was honest about the limits. "Misinformation spreads faster than corrections, and that is a structural problem with how these platforms are built. But every person who pauses and refuses to share something they are not sure about is a small break in the chain. That is not idealism; that is how information ecosystems actually shift."
Filtering Fake From Facts
Hikmat Acharya, a fact-checker and researcher at TechPana, a media platform that focuses on digital trends, has spent months on the frontline of Nepal's disinformation wave. When a suspicious video lands on his desk, his first instinct is not to reach for a tool, but to look carefully.
"Hands, feet, face, eyes, skin, if something looks unnatural at first glance, you can already sense something is wrong," he told Sapan News.
AI detection tools like Hive Moderation and Google SynthID help confirm what his gut suspects.
Of everything he has fact-checked ahead of this election, the Swarnim Wagle deepfake stands out: a fabricated audio clip placed Wagle, a senior RSP leader, in a private conversation with Indian Prime Minister Narendra Modi.
"AI tools are still immature in the Nepali language but highly proficient in Hindi and English," Acharya said. "In Wagle's case, the fake audio was deliberately spread in Hindi and English to exploit exactly that gap."
But it is the AI-manipulated photographs that worry Acharya most. "Real photos fed into AI to create fake scenes: leaders being hit with shoes, smeared with soot, and chased away. The general public easily believes such images and forms opinions based on them. Since photo content gets more reach, I think these are the most dangerous."
On the question of coordination and foreign connection, Acharya's team has found fake Facebook accounts operated from India, probably by Nepalis working there, he suspects. In general, it is mostly "ordinary people behind these accounts" sharing content for monetisation. He stops short of pointing to a structured operation.
"Even though such accounts are caught every time we fact-check, the content continues to spread."
Prakash Neupane, Deputy Spokesperson of the Election Commission of Nepal, told Sapan News that the Commission actively monitors fake online information, with daily morning discussions to assess and categorise each case.
The Central Code of Conduct Monitoring Committee chaired by a senior Commissioner oversees three internal units that handle the issue: the Integrity Unit, the Press Unit, and the Voter Education Unit. The committee reviews misleading content and refers it to the Cyber Bureau for legal action.
The Commission can act under the Election Offences and Punishment Act 2073 (Nepali calendar), the Electronic Transactions Act, the Cyber Law provisions, and the Election Code of Conduct.
Beyond TikTok, the Commission invited representatives from other major platforms for discussions. They came, expressed commitment to assist, and left; there were no written agreements or further actions, Neupane said.
Does the Commission have a dedicated mechanism to counter AI-generated content specifically? "The Election Commission alone cannot do everything," Neupane responded. District-level monitoring committees and security bodies are active, and some cases have been taken up or escalated to the centre, he added.
What he said next cut through the official language. "The Commission is feeling somewhat isolated. The media and fact-checkers need to provide their support as well."
Acharya suggested that the solution is not primarily technological or legal but human. "Much of the misleading content that spreads can be identified just by using common sense," he said.
"Why would a party chairman wear clothes bearing an opposition party's symbol during an election? A simple reverse image search finds the real photo. But confirmation bias makes people believe what they want to believe."
His priority is public awareness, followed by moral education. "There is a tendency to share misleading content just for money without understanding its consequences," he said. Better technology and stricter laws matter, he added, but without an informed public, neither will be enough.
(The author is an independent journalist based in Siliguri, India. He covers a diverse array of topics, including the environment, science, tech, marginalised communities, climate change, food, and farming. By special arrangement with Sapan)
