Dozens of short videos circulating on social media in the run-up to the country’s 13th national parliamentary election feature what appear to be ordinary citizens making voting pledges, offering political endorsements, and criticising rival parties. Investigators say many of the “people” in these clips are not real, but synthetic characters created or edited using generative AI.
A January 2026 investigation by the factchecking and media research organisation Dismislab reported that a Facebook page called Uttorbongo Television had posted 35 videos it assessed to be AI-generated or AI-edited, presenting lifelike characters who introduce themselves as voters and urge others to back the “balancing scales” symbol used by Bangladesh Jamaat-e-Islami, a leading contestant in the upcoming polls.
The findings come as Bangladesh approaches polling on February 12 and amid widening international concern about the use of generative AI to fabricate testimony, imitate speech, and scale persuasive messaging at low cost.
Dismislab’s report traces Uttorbongo Television’s page history and describes it as a repurposed account. It was created on October 30, 2021, under the name Human Help, renamed the same day to Help Mission, changed again on December 30, 2021, to Hum Bolo, and later became Uttorbongo Television on July 7, 2022. The page’s transparency information listed seven administrators operating from Bangladesh, and it had more than 90,000 followers at the time of Dismislab’s reporting.
According to Dismislab, the page’s AI-style political uploads began on December 15, 2025, shortly after the election schedule was announced on December 12. A separate report by the Bangla daily Prothom Alo later said the page’s follower count had climbed further and described additional AI-labelled political clips spreading across platforms, including Facebook, TikTok, and YouTube.
The Dismislab analysis describes a repeated format: a brief “street interview” with an elderly woman, a fruit seller, a person presented as disabled, or a person presented as a Hindu voter, speaking directly to the camera with minimal context. The clips typically converge on the same message: support Jamaat-e-Islami and vote for the scales.
One of the earliest examples depicts an elderly woman saying she would vote for Jamaat and not for the “boat” symbol associated with the Awami League. Dismislab reported that the clip attracted strong engagement and that comments suggested many viewers treated it as authentic.
Dismislab also found content that went beyond endorsements and moved into criticism and alleged wrongdoing by opponents. It said the page’s most viewed video was a 28-second clip of a fruit seller criticising the Bangladesh Nationalist Party (BNP), which, at the time of reporting, had reached more than 8 million views, with hundreds of thousands of reactions and tens of thousands of shares.
The organisation additionally documented clips aimed at individual political figures, including a video attacking Tarique Rahman, BNP’s acting chair, which it said drew millions of views and prompted some viewers to condemn him, while others suggested the video appeared AI-generated.
Earlier, several factcheckers, including Dismislab, also flagged a different strand of AI-made content circulating online. In one widely shared clip, a synthetic video purporting to show Zaima Rahman, daughter of BNP acting chair Tarique Rahman, claimed she would send BDT 20,000 to viewers via bKash, a mobile financial services provider, and urged people to leave their mobile numbers in the comments. Researchers said many users did so in the apparent hope of receiving money.
Dismislab reported that it assessed the videos using a combination of visual scrutiny and automated detection tools. Across multiple clips, it described irregularities often associated with synthetic media: facial textures that appear unusually smooth, shifting skin folds that change shape mid-speech, and background Bangla lettering that resembles script but does not form meaningful words. It also noted eye behaviour that appeared unnatural in some videos, including an apparent absence of blinking, and it pointed to mismatches between speech and mouth movement, suggesting altered lip-sync. In addition, Dismislab highlighted distortions in hands or objects and background anomalies that can emerge in AI-generated imagery.
The organisation said it uploaded content to Google’s SynthID detector, reporting that the system flagged a digital watermark and indicated that the audio and video had been created or edited using AI. SynthID is a Google DeepMind watermarking approach that embeds imperceptible signatures into certain AI-generated media and can help identify content generated or modified with some of Google’s tools, though researchers note that watermarking is not universal and cannot, by itself, determine whether a claim in a video is true.
Dismislab also cited results from DeepFake-o-meter, an open platform developed at the University at Buffalo that aggregates multiple detection methods and returns probabilistic assessments.
One of the most sensitive examples in Dismislab’s report involves a clip posted on January 10 in which a woman, presented as disabled, claims BNP leaders took money from her in exchange for arranging a disability allowance card and then threatened her when she demanded a refund. Dismislab reported substantial engagement with the video.
The digital investigative outlet The Dissent separately factchecked the video and reported that it was AI-generated and appeared to use the image of Rikta, a garment worker injured in the 2013 Rana Plaza disaster, to construct the synthetic speaker. Prothom Alo also reported that the woman in a similar clip closely resembled the same Rana Plaza survivor and said its own factchecking found the video was AI-created.
Dismislab said the content aimed not only to depict generic enthusiasm but also to present support for Jamaat among groups whose political preferences are often treated as significant.
In one clip posted on January 8, a man presented as a Hindu voter is shown saying “we Hindus” will vote for the scales and urging rejection of the BNP. Dismislab reported that the video reached more than 1.6 million views and drew extensive supportive commentary, while its analysis highlighted artefacts such as inconsistent forehead wrinkles, finger distortion, and lip-sync issues.
In a televised address on December 12, Bangladesh’s chief election commissioner, AMM Nasir Uddin, warned about misinformation spreading on social media and highlighted the growing use of AI to generate false or misleading content. He urged the public not to share unverified claims and indicated that action could be taken under existing laws.
The 2025 code of conduct for political parties and candidates contains provisions intended to restrict AI misuse. Dismislab highlighted language that, in effect, prohibits using AI with dishonest intent in election-related matters and bars the use of AI to mislead voters or to defame candidates or individuals by creating, publishing, promoting, or sharing false and harmful content.