AI-Generated Videos: Assam’s Political Battlefield and India’s Democratic Crossroads

These videos, which are almost impossible to distinguish from real footage, are being weaponized to attack opponents, spread fear, and polarize communities.

PratidinTime News Desk
Alankar Kaushik
Assam’s politics has entered a new and worrying phase. Alongside longstanding tensions over land, identity, and governance, a powerful new tool is being used to manipulate opinion and create division: AI-generated fake videos, or deepfakes. These videos, which are almost impossible to distinguish from real footage, are being weaponized to attack opponents, spread fear, and polarize communities. What is happening in Assam is not just a local problem. It reflects a global challenge that threatens democratic values across the world.

Smearing the Opposition: The Pakistan Narrative

The first video, widely shared across BJP networks, portrays Gaurav Gogoi allegedly conferring with a Pakistani intelligence official on national security matters. Assam’s Chief Minister Himanta Biswa Sarma amplified the narrative, alleging that Gogoi’s family maintains cross-border links and that the Congress leader even visited Pakistan at the invitation of the Inter-Services Intelligence (ISI). By hinting at a referral to the National Investigation Agency (NIA), Sarma sought to cloak the allegations with institutional gravitas.

Gogoi swiftly denounced the video as AI-generated propaganda, accusing the ruling party of using synthetic media to distract from governance failures. His response underscored the deeper danger: AI’s power to fabricate reality threatens not just individual reputations but public trust in institutions. When manufactured content masquerades as truth, citizens are left without reliable markers of authenticity.

Corporate Greed, Tribal Land, and Political Deflection

The second AI-generated video tells a different but equally troubling story. It shows Sarma allegedly agreeing to sell 3,000 bighas (over 81 million square feet) of tribal land to the Adani Group under pressure from Prime Minister Narendra Modi. The Congress seized upon the video to stoke outrage, framing it as a stark example of corporate capture and betrayal of indigenous rights. The incident reflects a growing trend: AI’s ability to craft narratives that exploit entrenched anxieties around identity, land rights, and corporate interests. In both cases, parties weaponize AI not to inform but to inflame emotions, feeding identity politics and deepening mistrust.

The Double-Edged Sword of Synthetic Media

The deployment of AI-generated content in Assam’s political discourse mirrors global challenges. AI promises unprecedented efficiencies in communication and storytelling but carries equally unprecedented risks. Its ability to blur fact and fiction erodes the pillars of informed debate and reasoned discourse.

In fragile regions like Assam, where ethnic tensions, historical grievances, and socio-economic disparities already dominate political discourse, misinformation can act as an accelerant. Fake videos that resonate with existing fears or prejudices spread like wildfire, hardening opinions and shaping voting behaviour long before fact-checks catch up.

The technological sophistication behind these videos compounds the danger. Unlike crude doctored images, AI-generated deepfakes reproduce subtle facial expressions, voice modulation, and contextual cues that make debunking difficult, especially for lay audiences. The result is a crisis of credibility where even genuine information may be treated with suspicion.

Trust Under Siege

At the heart of this phenomenon lies the erosion of trust. Democracies depend on informed consent: on citizens being able to distinguish truth from fiction. When political actors weaponize AI to manipulate public perception, they fracture that trust, encouraging cynicism, disengagement, and polarization.

The virality of such content only intensifies the challenge. A video can reach millions within hours, long before fact-checking mechanisms can intervene. By then, narratives have entrenched themselves, and corrections often fall on deaf ears. This time-lag between circulation and verification is precisely what malicious actors exploit.

Moreover, synthetic videos are not limited to political manipulation. They threaten the credibility of journalism, public health messaging, and civic discourse at large. Once citizens are conditioned to doubt every piece of media, fertile ground emerges for extremism, conspiracy theories, and misinformation.

A Global Problem: Deepfakes Are Not Just an Assam Issue

India’s 2024 General Election

AI-generated deepfakes were extensively used during India’s 2024 general elections, arguably the largest electoral exercise in the world. Videos and audio clips circulated that discredited rivals and manipulated voter sentiment. In some cases, deepfakes resurrected deceased political figures like M. Karunanidhi and J. Jayalalithaa to endorse candidates, raising serious ethical and legal concerns (incidentdatabase.ai). The Deepfakes Analysis Unit (DAU), part of the Misinformation Combat Alliance, created a WhatsApp tipline where citizens could report suspicious videos. Despite this, studies indicated that over 75% of Indians were exposed to deepfakes, and nearly one in four believed them to be real (blackbird.ai).

Indonesia’s 2024 Presidential Election

In Indonesia’s 2024 elections, deepfakes played a similarly disruptive role. A fabricated video showed the late President Suharto endorsing a political party, exploiting respect for a deceased leader. Audio deepfakes misrepresented conversations involving presidential candidate Anies Baswedan, sowing distrust among voters. These incidents sparked public debate about the need for ethical regulations and safeguards to prevent AI-generated misinformation.


United States and Slovakia: Deepfakes Shake Democracies

In the United States, deepfake videos have been used to falsely depict political opponents in scandalous scenarios, manipulating emotions and intensifying partisan divides.

In Slovakia, a deepfake audio recording purportedly captured a conversation about election rigging between a journalist and a political leader. The scandal caused public unrest and highlighted how synthetic media can disrupt democratic processes.

A global analysis revealed that 38 countries have reported election-related deepfakes, affecting nearly 3.8 billion people (surfshark.com).

Regulation: Necessary but Insufficient

Addressing this emerging threat demands more than technological fixes; it requires robust regulatory frameworks and ethical commitments from all stakeholders. Platforms that host user-generated content must take responsibility for identifying manipulated material before it spreads. Automated detection systems, clearer labelling protocols, and stricter enforcement mechanisms are essential.

Simultaneously, policies that penalize deliberate disinformation campaigns must be paired with incentives for transparency and accountability. Without repercussions, political actors will continue to weaponize synthetic media unchecked.

Yet, regulation alone cannot safeguard democracy. Political parties must commit to ethical standards and eschew manipulative tactics, even when expedient. Public discourse thrives when all actors play by shared norms; once truth becomes negotiable, democracy’s foundation begins to crumble.


The Role of Digital Literacy

Empowering citizens to critically assess content is equally critical. Digital literacy campaigns must be integrated into educational curricula, public outreach initiatives, and media platforms. Schools and universities must train students to question sources, understand AI’s capabilities, and cross-verify information before sharing.

Fact-checking bodies need greater investment and support to respond swiftly to misinformation. Public-private partnerships can create shared frameworks for monitoring synthetic media, while cross-party initiatives can build trust and encourage cooperative responses.

Most importantly, citizens must be equipped with the tools to interrogate content themselves. Asking where a video originated, who stands to gain from its spread, and whether independent verification exists are essential steps in breaking the cycle of manipulation.

Defending Democracy in the Age of Deepfakes

The AI-generated videos targeting Gaurav Gogoi and Himanta Biswa Sarma are not anomalies; they are warning signals. They expose how technology, unchecked by ethics or regulation, can be turned against democratic principles, undermining dialogue, trust, and informed choice.

Assam’s experience is not a parochial concern; it is a national cautionary tale. As AI tools become more accessible and powerful, the potential for misuse will only grow. Unless India develops comprehensive safeguards (regulatory, educational, and ethical), the integrity of its political processes is at risk.

We stand at a critical juncture. The spread of synthetic media threatens to redefine elections, reshape political narratives, and corrode institutions designed to protect democratic values. Action cannot be deferred. Policymakers must craft forward-looking regulations, political parties must adopt ethical frameworks, and citizens must be empowered with the literacy to discern truth from manipulation.

In a time when “seeing is believing” no longer holds, protecting the integrity of political discourse is as urgent as ever. Assam’s political battlefield may be the testing ground, but the lessons apply to all of India. The stakes are nothing less than the health and resilience of democracy itself.
