AI: A new key player in Armenian politics
On June 25, 2025, Archbishop Bagrat Galstanyan, senior Armenian clergyman and leading opposition activist, was arrested on charges of terrorism and plotting a coup. Among the supporting pieces of evidence were voice recordings in which a voice resembling the Archbishop’s was heard discussing plans to bomb Parliament. As a CivilNet journalist noted, “There is no way of verifying the authenticity of the recordings. Authorities have not specified who made them, when or where.”
Around the same time that the recordings were released, Prime Minister Nikol Pashinyan’s spokesperson, Nazeli Baghdasaryan, announced that a deepfake audio clip mimicking Pashinyan’s voice was circulating online. The clip suggested that the June 25 raids and arrests were conducted under Pashinyan’s direct orders and that the evidence used to charge the suspects—including the Archbishop’s audio recordings—was artificially generated.
The recordings attributed to the Archbishop spread rapidly, and two opposing camps quickly emerged. One believed the recordings were authentic and blamed the Archbishop, while the other claimed they had been manipulated—perhaps even AI-generated—and accused the government of fabrication.
My intention with this article is not to determine the authenticity of these recordings, nor to debate which was AI-generated, partially altered or entirely real. Rather, these events have made me reflect on the broader dangers of AI in Armenian politics, especially given the widespread lack of media literacy among our population. As AI technologies evolve rapidly, the risks they pose grow increasingly alarming.
I often witness this issue firsthand. Close relatives—especially older family members—frequently send me images and videos of dramatic events, such as floods, earthquakes or crowds fleeing in panic. Time and again, I find myself explaining that what they are seeing is not always real, but often AI-generated. It is striking how many people—regardless of age—struggle to distinguish between authentic and artificial content. This constant stream of misinformation highlights a growing challenge: the urgent need for digital literacy and critical thinking in an era where AI can so convincingly blur the line between reality and fabrication.
If left unaddressed, this issue could lead to a serious crisis in Armenian politics. Anyone can create AI-generated recordings that mimic real-life events or personas. AI can now convincingly clone real voices, generate highly realistic images and produce fabricated videos that appear completely authentic. Lip-syncing, facial expressions—every detail—can now be replicated with frightening accuracy. In countries with rollercoaster-like political climates marked by polarization and manipulation, the result could be chaos that is difficult—if not impossible—to contain.
Several non-state actors are already taking steps to meet this challenge by promoting media literacy and AI education. For instance, reArmenia offers free webinars that introduce a wide range of practical AI tools—covering everything from composing songs and writing poems to automating workflows and generating designs. These initiatives aim to equip the Armenian public with essential AI knowledge and skills, empowering individuals to better adapt to both local and global market changes. For those interested in exploring specific tools or processes in greater depth, reArmenia offers comprehensive paid courses, though the fee may present a barrier for individuals seeking advanced training.
This is where state involvement becomes essential. Government-led initiatives do not need to teach citizens how to create logos or songs with AI, but they should educate the public about what AI is, what it can do, how it is used and how to stay protected from manipulation. The threat of AI is not limited to foreign actors—it can also come from within. Public education on the benefits and harms of AI is no longer an option but a necessity. This is especially true for a war-torn region like ours, where weapons are increasingly digital as much as they are physical.
Even the most tech-savvy individuals sometimes find it difficult to distinguish between real and AI-generated content. The technology grows smarter, cloning a voice gets easier and fabricating footage becomes trivial, while making sense of politics only gets harder.
I am not sure if we will ever be able to fully distinguish fact from fake in this new digital era—but one thing is certain: we must make the effort to learn, and to educate others. We must not lose trust, but we should take information with a grain of salt. AI is a double-edged sword. Whether it helps or harms depends on how we use it—and on how prepared we are to defend ourselves against its darker potentials.