How Malicious AI Networks Will Threaten Nigeria's 2027 Elections
Editor’s note: In this piece, Wale Bakare, digital rights advocate, looks at how AI networks could meddle with Nigeria’s 2027 elections. He breaks down the threats, possible schemes, and why staying ahead matters.
Nigeria is no stranger to electoral controversy. From ballot box snatching in the South West to voter intimidation in the North West, the country has battled threats to its democratic process for decades. But as the 2027 general elections approach, a new and far more sophisticated danger is emerging, one that does not carry a gun or stuff a ballot box, but operates silently through the screens of millions of Nigerians. That danger is the malicious use of artificial intelligence swarms, and the country is dangerously unprepared for it.

An AI swarm, in the context of information warfare, refers to a coordinated network of automated AI systems, including bots, deepfake generators, algorithmic amplifiers, and fake account clusters, working in unison to manipulate public opinion at scale. Unlike a single piece of misinformation shared on a WhatsApp group, an AI swarm can produce thousands of fabricated stories, doctored videos, and synthetic voices within minutes, flooding digital platforms before fact checkers or election monitors can respond. For a country where over 100 million people access the internet, and social media platforms like X, formerly Twitter, Instagram, Facebook, TikTok, and WhatsApp serve as primary news sources for many citizens, the threat could not be more serious.

The role AI will play by 2027
Consider what happened during the 2023 presidential election. Misinformation about results, candidates, and voting processes spread virally across social media platforms. Fake screenshots of result sheets circulated before the Independent National Electoral Commission (INEC) had declared official figures. Inflammatory audio messages attributed to political leaders turned out to be fabricated. These incidents happened in an era before generative AI became widely accessible. By 2027, the technology will be dramatically more powerful, cheaper, and easier to use.
Malicious actors, whether foreign governments, domestic political interests, or well-funded criminal networks, will be able to deploy AI systems capable of generating convincing fake videos of presidential candidates making inflammatory statements. They can create thousands of fake citizen accounts that appear geographically distributed across Kano, Lagos, Enugu, and Abuja, all posting coordinated content designed to depress voter turnout in specific states or to inflame ethnic and religious tensions. They can simulate breaking news alerts from outlets that visually mimic the websites of trusted Nigerian media organisations. The goal is not necessarily to install a particular winner but to create enough confusion, outrage, and distrust to delegitimise the entire process.

Nigeria's demographic profile makes it particularly vulnerable to this kind of attack. The country has a very young, digitally active population, with a significant portion of voters under the age of 35. Young Nigerians are often the first to encounter and share viral content, and they do so at great speed. Research shows that false information spreads faster and further on social media than accurate reporting, because it is typically more emotionally charged. An AI swarm engineered to exploit ethnic loyalties, religious anxieties, and economic frustrations in a country as diverse as Nigeria has enormous potential for damage.
How AI can generate targeted disinformation in Nigeria
The language dimension compounds the problem significantly. Nigeria has over 500 languages and three dominant regional tongues: Hausa, Yoruba, and Igbo. AI systems trained on these languages are now capable of generating convincing text and audio content in each of them. An AI swarm could disseminate regionally tailored disinformation simultaneously, with Hausa language content stoking tension in Kaduna while Yoruba language content fuels unrest in Ibadan, all from a single coordinated command. This kind of operation would have required enormous human resources just a few years ago. Today, a small, technically skilled group with adequate funding could execute it.

There is also the specific threat of deepfake technology. Already, manipulated videos have been used in Nigerian political campaigns to embarrass opponents and distort public records. By 2027, AI models will be capable of producing video and audio content so realistic that even trained observers will struggle to detect it without specialised tools. Imagine a fabricated video of a leading presidential candidate allegedly confessing to corruption or making ethnically divisive remarks, released 48 hours before election day. The speed at which such content could go viral, combined with the limited time available to debunk it, could irreversibly damage a campaign and suppress voting among targeted groups.
Nigeria's institutions are not adequately prepared for this reality. INEC has made commendable strides in introducing the Bimodal Voter Accreditation System and the INEC Result Viewing portal to improve transparency. But these are infrastructure solutions to logistics problems. The AI swarm threat is an information warfare problem, requiring entirely different responses. Nigeria currently lacks a national AI governance framework, a dedicated election cybersecurity unit, or a coordinated rapid response system for electoral disinformation. The National Information Technology Development Agency (NITDA) and the National Cybersecurity Coordination Centre (NCCC) will need to significantly expand their mandates and capabilities before 2027.
The case for AI-assisted fact-checking
Civil society organisations and media institutions also have a critical role to play. Fact-checking platforms like Dubawa, AFP Fact Check Nigeria, and Peoples Gazette have done important work in recent election cycles. But their capacity is dwarfed by the speed and scale at which AI swarms operate. What is needed is investment in AI-assisted fact-checking tools, mandatory pre-election media literacy campaigns, and formal partnerships between technology companies and Nigerian civil society groups. The federal government should engage platforms like Meta and Google to establish dedicated Nigerian election integrity desks that can respond rapidly to coordinated inauthentic behaviour during the campaign season.
Political parties themselves must also be held accountable. Local political actors with access to resources and technical expertise are already exploring the use of AI for campaign purposes. The line between legitimate political communication and malicious manipulation must be clearly defined in Nigerian electoral law, with specific provisions addressing the use of synthetic media, AI-generated content, and coordinated inauthentic online behaviour. INEC should require all registered political parties to sign binding codes of conduct covering digital campaign ethics.
The 2027 elections will take place in a Nigeria still navigating deep economic anxiety, significant security challenges, and widespread public distrust in institutions. These conditions are precisely what malicious AI swarms are designed to exploit. The country that emerged from the controversies of 2023 with its democratic institutions intact, if strained, cannot afford to enter 2027 without a clear strategy for confronting the AI threat. The question is not whether malicious actors will attempt to weaponise artificial intelligence against Nigeria's democracy. The question is whether Nigeria will be ready.
Wale Bakare is a digital rights and digital inclusion advocate, and co-founder/director of partnership and sustainability at Webfala Digital Skills for All Initiative.
Disclaimer: The views and opinions expressed here are those of the author and do not necessarily reflect the official policy or position of Legit.ng.
Source: Legit.ng