The sources of and response to scams using AI and fake news websites during Canada's 2025 federal election
Authors: Mathieu Lavigne, Alexei Sisulu Abrahams, Tess Corkery, David Hobson, and Aengus Bridgman, Media Ecosystem Observatory.
Key takeaways:
A network of Facebook pages, supported by an underlying network of websites hosted around the world, continues to advertise disinformative and deepfaked political content to Canadians during the 2025 electoral period.
These Facebook pages have international links through self-declared locations and through IP addresses of linked web pages. They are built, however, to deliver distinctly Canadian content.
Although Meta has slightly accelerated its moderation efforts, it has taken down only about half of the identified fraudulent pages, and content, once removed, often quickly reappears in a very similar form. Tests using relatively simple AI demonstrate that these ads can be readily identified as political, suggesting Meta is neglecting automated detection methods that are well within its capabilities.
Introduction
Last week, we reported on a substantial network of Facebook pages that have been sponsoring ads during this election period containing misleading content pertaining to Canadian politics. Thanks to our efforts and those of other research partners, this malicious activity has drawn attention from domestic and international news, prompting a response from Meta. In the days since, we have deepened our investigation into this network, while also assessing the efficacy of Meta’s efforts.
Specifically, we ask:
Are these pages part of a much larger global manipulation network?
Do these Facebook pages, or the underlying domain network to which they link, share commonalities that offer a clue as to their origin?
What has been the efficacy of Meta’s response?
Are these pages part of a much larger global manipulation network?
As of last week, our team had encountered more than 40 Facebook pages sponsoring misleading political ads in Canada. Since then, that number has roughly doubled to 83. This increase may be partly explained by the fact that Meta now appears to be somewhat quicker at taking pages down than it was at the beginning of the election campaign, making it more likely that, over time, we encounter ads posted by new pages rather than by old ones.
Image 1. Examples of Facebook pages publishing fraudulent ads masquerading as legitimate news sources during the 2025 Canadian federal election
Our team has been discovering these Facebook pages simply by scrolling Facebook with a Canadian internet address, and indeed, as we reported last week, one in four Canadians has encountered these fake ads in this fashion. That said, Facebook feeds are algorithmically curated, so our method of discovery is not representative. This raises the question of whether the network is of a similar order of magnitude to what we have encountered (tens or hundreds of pages), or whether it may be the tip of a much larger iceberg.
A recent report published by Reset documents several vast networks numbering an estimated 3.8 million Facebook pages worldwide whose behavior and characteristics bear a passing resemblance to the 83 pages we have discovered. In particular, pages discovered by the Reset investigation have very few followers, post minimal content, have generic names and profile images, and most of the time lie dormant, ready to be spun into action when needed. The pages discovered in our investigation share all of these characteristics. If they do indeed belong to one of the networks unravelled by Reset, then the scale of the operation may be far larger than we have heretofore discovered, the potential for electoral and political manipulation far greater, and the scope of the requisite response far more robust.
As of this time, however, the evidence is inconclusive. The networks found by Reset consist of Facebook pages following predictable naming conventions. One network, dubbed “Botiful”, consists of pages with names like “Comley ds4”, “Lithe fg8”, or “Fine wid5”, which appear to consist of an adjective followed by a short alphanumeric sequence. Another, dubbed “Filthy jewel”, consists of pages whose names pair an implausible adjective with a noun, such as “filthy jewel”, “proud current”, “acidic birth”, and so on. Another set of pages has lengthy, “three-phrase username combinations” such as “Innovative IdeasCooking ChronicesArt & Design”. None of the 83 pages identified in our investigation match these naming conventions.
We do, however, observe some similarities with the branding strategies described in the Reset report (e.g., "ABCDCD online shop" network), which suggest the use of automated or semi-automated approaches to efficiently generate and manage these pages within a broader network. These methods include:
Naming many pages using similar combinations of the same words: Podcast in Canada, 25 Canada POD, 25k Podcast CA, 25k Podcast, Podcast 25, CAA Podcast New
Reusing the same profile picture across multiple pages (see examples in Image 2)
Multiple pages publishing identical ads (see examples in Image 8)
Multiple pages posting links to the same fake news article (with or without linking the same URL).
Taken together, however, these similarities are not sufficient to conclude that this network dovetails with the networks discovered by Reset. It may well be that the same actors are at work here but have become more sophisticated and shed the predictable naming conventions that previously helped researchers identify them. On the other hand, it may be that entirely different actors are exploiting the same loopholes in Facebook’s ad market. Further monitoring will be required to reach a more definitive answer.
Do these Facebook pages, or the underlying domain network to which they link, share commonalities that offer a clue as to their origin?
We gathered information about the Facebook pages that have not been taken down to better understand the types of pages used, their dates of creation, and their locations.
Figure 1 shows that a large proportion of the pages were created in 2025, the majority of them after the election began. This partly explains why three quarters of the pages we studied have fewer than 10 followers.
Using the location of Facebook pages to assess the origin of ads has limitations, as this information is self-reported by advertisers and not verified by Meta. While Meta does verify the identity of advertisers running ads related to social issues, elections, or politics, most of the fraudulent ads were not self-reported as political and were therefore not subject to the same scrutiny. We can still see interesting patterns in the information advertisers did (or did not) provide. First, nearly 40% of advertisers did not disclose any location. Second, none of the advertisers self-identified as being based in Canada. Among those who did report a location, the most frequently listed country was the United States, followed by Ukraine and Vietnam.
Figure 1. Creation year and location of Facebook pages
Apart from a handful of pages categorized as podcasts or news, the page categories appear largely random, ranging from restaurants and music to clothing, travel, churches, and sports. Regarding the branding and visual identity of the pages, we observe a mix of strategies. While some pages seem completely unrelated (e.g., Xtreme Pizza, College Enter, Millenium Gaming) or feature fake user identities (e.g., Edith Williams, Richard McCoy, Jeff Nicholes), a majority of the pages have a name related to Canada (a combination of Canada, Maple, or North with another word like Podcast, News, Insider, Current, etc.) or a profile picture displaying a Canadian flag. In line with what the Reset Tech report describes in other countries, we see overlapping profile pictures across pages with different names. Profile photos appear to be taken from existing websites. For example, many pages use the same image that appears on Canadian government websites discussing Canada’s relationship with the United States, as shown in Image 2.
Image 2. Inauthentic Facebook pages using an image from Canadian government websites as their profile pictures
If clicked, many of the sponsored ads encountered on Facebook would lead viewers off Facebook to a separate domain. For each domain linked from fake articles circulating between April 22 and April 24, we checked the IP addresses of the corresponding web servers. The majority were served out of California, with 11 out of 14 domains served from San Francisco and Los Angeles via Cloudflare. The rest of the domains were served out of Frankfurt, Germany, and Roosendaal, Netherlands.
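To illustrate this resolution step, a minimal Python sketch is shown below. The domain names and output file are placeholders, and mapping the resulting addresses to cities would rely on a separate IP geolocation database (e.g., MaxMind GeoLite2); this is a simplified illustration rather than our exact tooling.

```python
# Minimal sketch (not our exact tooling): resolve each ad-linked domain and record
# the IP addresses of its web servers for later geolocation.
import csv
import socket

domains = ["example-ad-domain.xyz", "another-ad-domain.com"]  # placeholder domain list

with open("domain_ips.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["domain", "ip"])
    for domain in domains:
        try:
            # getaddrinfo returns every A/AAAA record currently advertised for the host
            infos = socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)
            ips = sorted({info[4][0] for info in infos})
        except socket.gaierror:
            ips = []  # domain no longer resolves (e.g., already taken down)
        for ip in ips:
            writer.writerow([domain, ip])

# Each recorded IP can then be looked up in a geolocation/ASN database such as
# MaxMind GeoLite2 to identify the hosting provider and city (e.g., Cloudflare PoPs).
```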
We found in several cases that domains served different content depending on whether or not the website visitor’s internet address was Canadian. For example, Image 3 displays a comparison of web page screenshots obtained by visiting the same URL – mannerorthodox[DOT]xyz/mMtBmqhF – with different internet addresses.
Image 3. (Top) Screenshot of a web browser showing an innocuous news article about a music performance. This is the content shown to visitors of the URL mannerorthodox[DOT]xyz/mMtBmqhF with an internet address from anywhere in the world except Canada.
(Bottom) Screenshot of a web browser showing a fake news article about NDP leader Jagmeet Singh. This is the content shown to visitors to the URL mannerorthodox[DOT]xyz/mMtBmqhF with a Canadian internet address.
Web visitors with a Canadian internet address were served a misleading news article about an election candidate. Visitors from non-Canadian internet addresses encountered an anodyne news article from 2015 about a musical performance. We tried visiting the same URL while VPNing through the United States, Mexico, Argentina, France, Croatia, and Hong Kong, but in each case encountered the exact same musical performance article.
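This kind of geographic cloaking can also be checked programmatically. The sketch below is a simplified illustration of the test we performed manually with VPNs: it fetches the same URL through exit points in different countries (the proxy endpoints shown are placeholders) and compares fingerprints of the returned pages.

```python
# Simplified sketch of the cloaking test (proxy endpoints are placeholders): fetch the
# same URL from different countries and compare fingerprints of the returned pages.
import hashlib
import requests

URL = "https://mannerorthodox.xyz/mMtBmqhF"  # defanged as mannerorthodox[DOT]xyz above

exit_points = {
    "canada": "http://ca-exit.example:8080",          # placeholder proxy/VPN endpoints
    "united_states": "http://us-exit.example:8080",
    "france": "http://fr-exit.example:8080",
}

for country, proxy in exit_points.items():
    resp = requests.get(URL, proxies={"http": proxy, "https": proxy}, timeout=30)
    fingerprint = hashlib.sha256(resp.content).hexdigest()[:12]
    print(f"{country}: HTTP {resp.status_code}, body fingerprint {fingerprint}")

# Identical fingerprints from every non-Canadian exit, and a different one from the
# Canadian exit, indicate the server tailors its content to visitor geography.
# (Dynamic page elements can vary between fetches, so in practice a visual or
# text-level comparison is more reliable than an exact hash.)
```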
Image 4. Website visitors to truenorthcapitai[dot]com see fake political content (top) if they have a Canadian internet address, or an innocuous blog (bottom) if they have a non-Canadian address (screenshots taken on April 25, 2025).
When visited from an American IP address, the majority of sites listed financial services such as debt relief and investment coaching, as well as fashion articles, e-commerce items (joggers, watches), and appointments for hair and nail services.
Most of these pages had email sign-up forms, blog subscription prompts, and clickable menu tabs. Most of the pages viewed from an American IP shared identical structural templates, with only two presenting more sophisticated content about financial services.
From a Canadian IP address, these same pages contained fake CBC articles, usually with a headline mentioning either Mark Carney, Pierre Poilievre, or Jagmeet Singh. Pages viewed from a Canadian IP address contained links to third-party sites where users could sign up or register for the financial services platform advertised in the fake CBC article, such as the fake investment platform “Rise Deltix”:
[MISLEADING CONTENT]
Jagmeet Singh: "I don't have another business, but I started making money on the Rise Deltix platform. A year ago, I invested just C$350 and quickly increased my investment. Now, I live off the daily income from this platform.”
If the intention behind this campaign were simply to maximize engagement, then it would make sense to let the content vary in a country- or region-specific way. For example, visitors from a Mexican internet address might have encountered a financial services scam advertised in Spanish, and visitors from France a scam worded in French. Instead, the only relevant criterion seemed to be whether or not the visitor was from Canada. This raises concerns that the campaign may be singling out Canada specifically.
What has been the efficacy of Meta’s response?
Since last week, the activity of this network has been covered by both domestic and international news outlets. In response, Meta has said that it is “against our policies to run ads that try to scam or impersonate people or brands” and that it will “continue to invest in new technologies”; yet its efforts to curb these ads have been limited. X, another platform where these ads have widely circulated, has not commented.
We have investigated Meta’s response on several dimensions. Firstly, has Meta taken down the Facebook pages that sponsor these disinformative ads? Of the 83 pages we have discovered so far, approximately half were still up at the time of writing. Naturally, the proportion of pages removed is lower in the morning (e.g., 41% on April 24), when new pages start posting fraudulent ads, than at the end of the day (e.g., 55% on April 24), when many of these new pages have been reported to Meta and taken down.
Image 5. Screenshots of false political advertisements – “Poilievre Exposed - Carney Drops the Facts” and “With just days to go, Mark Carney delivered a searing takedown of Poilievre - a moment that could define the election” – published on April 25, 2025, by Digest 24 and Dumich1.
Aside from taking down the pages, has Meta taken steps to rid Facebook of the false advertisements themselves? After all, if one Facebook page is suspended, another could be used to post the same disinformative content. Our investigation has shown that several of the false advertisements contained deepfaked images and videos of Canadian politicians. This content was being served from Facebook’s content delivery network (CDN), a vast global infrastructure dedicated to hosting all the billions of images and videos uploaded to Facebook by its user base.
To rid its CDN of this false content, Meta could take several steps, each more effective and comprehensive than the last. Firstly, at a bare minimum, it could take down the precise copy of misleading content uploaded by the offending page. But this would not protect Canadians from the possibility of the same page re-uploading that content, or from another page uploading the same content.
Secondly, and slightly more robustly, Meta could calculate the hash of the image or video – a string of letters, numbers, and symbols that uniquely identifies the file – and censor any image or video on its servers matching that hash. This kind of hashing has previously been used to track the evolution of images as they become memes, among other things. While this is a more robust approach, a manipulative actor could introduce minor tweaks to the image or video (even changing just a single pixel) and in so doing generate a completely different hash that evades Meta’s content moderation efforts. Indeed, as shown below, we found that slightly altered versions (zoomed, with wide-screen bands) of the same deepfakes of Mark Carney were republished over multiple days, including deepfakes that Meta indicated it had taken down.
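The sketch below, a minimal illustration rather than Meta’s implementation, shows why exact-hash matching is so brittle: changing a single pixel of a flagged image (the file name is a placeholder) yields a completely different cryptographic hash.

```python
# Minimal sketch, not Meta's implementation: a cryptographic hash changes entirely
# when a single pixel is altered, which is why exact-hash blocklists are easy to evade.
import hashlib
import io

from PIL import Image  # pip install pillow

def sha256_of_image(img: Image.Image) -> str:
    """Return the SHA-256 hex digest of the image's encoded PNG bytes."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return hashlib.sha256(buf.getvalue()).hexdigest()

original = Image.open("flagged_ad_frame.png").convert("RGB")  # placeholder file name
tweaked = original.copy()
tweaked.putpixel((0, 0), (0, 0, 0))  # alter a single pixel

print(sha256_of_image(original))
print(sha256_of_image(tweaked))  # a completely different digest: an exact-match
                                 # blocklist keyed on the first hash would miss it
```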
Image 6. Screenshots of minor edits to videos to evade hash-based content moderation. The ads, encountered on April 10, April 11, April 13, and April 16, include the same video masquerading as a CBC News broadcast and featuring a deepfake of Mark Carney promoting a fraudulent investment platform.
Image 7. Screenshots of minor edits to videos to evade hash-based content moderation. The ads, encountered on April 10, April 11, and April 22, include the same video masquerading as a CTV News broadcast and featuring a deepfake of Mark Carney promoting a fraudulent investment platform.
This same limitation applies to images. Our analysis reveals that advertisers reused a small number of headlines and slightly altered versions of the same visuals throughout the election campaign, showing that the current approach has been insufficient to prevent lookalike ads from circulating on the platform.
Image 8. Screenshots of visuals reused across ads throughout the election campaign
Thirdly, and most robustly, Meta could use artificial intelligence to assess the similarity between any image or video uploaded to its CDN and any image or video flagged as problematic. But as we reported in our update last week, although these ads are clearly political in nature, they are self-reported by the sponsoring pages as non-political, and thus escape scrutiny and fall outside some of Meta’s transparency commitments to the public.
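One way such similarity matching could work is sketched below for illustration only (file names and the similarity threshold are placeholders, and this is not a description of Meta’s actual systems): embed images with an off-the-shelf vision model and compare new uploads against embeddings of previously flagged content.

```python
# Illustrative sketch of similarity-based matching (not Meta's actual system): embed a
# newly uploaded ad image and compare it against an image already flagged as fraudulent.
from PIL import Image
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("clip-ViT-B-32")  # small, widely available CLIP model

# Placeholder file names: a previously flagged deepfake frame and a new upload
flagged = model.encode(Image.open("flagged_deepfake_frame.png"))
upload = model.encode(Image.open("new_ad_upload.png"))

similarity = util.cos_sim(flagged, upload).item()
print(f"cosine similarity: {similarity:.3f}")

# Unlike a cryptographic hash, the embedding of a zoomed, letterboxed, or single-pixel
# edit of the same frame still scores close to 1.0, so a tuned threshold can catch
# lookalike ads that exact-hash matching misses.
if similarity > 0.9:  # illustrative threshold; a production system would tune this
    print("candidate match: route for review or removal")
```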
But this raises an obvious question: why does Meta rely on the sponsoring pages to self-report whether they are of a political nature or not? Off-the-shelf technologies now exist to easily and accurately identify whether an ad contains political content. In less than an hour, our researchers coded a script that assessed 90 captured screenshots to identify whether they included likenesses of politicians or political content. We labeled each screenshot as political or not, and then passed the images three times through a consumer-grade vision model (Llava:7b). With 2/3 agreement, this model – capable of running on a small consumer laptop GPU – accurately identified 94% of ads as containing political content (missing only 3 positive cases) and 90% of ads as containing the likenesses of politicians (including ones where the image had been doctored to include injuries, for example), missing only 7 positive cases. Put simply, the technology to quickly and easily identify ads that likely concern politics or impersonate political figures is widely available and computationally trivial. It is unconscionable that platforms are taking a reactive approach instead of a proactive one given the vast technical and computational resources at their disposal.
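A simplified version of that script is sketched below; the prompt wording, directory name, and other details are illustrative, and it assumes a local Ollama server with the llava:7b model pulled.

```python
# Simplified sketch of our classification script (prompt wording and paths are
# illustrative): each screenshot is shown to llava:7b three times, and a 2-of-3
# majority vote decides whether it is labelled political.
from pathlib import Path

import ollama  # pip install ollama; assumes a local Ollama server with llava:7b pulled

PROMPT = (
    "Does this image contain political content, such as a politician, a political "
    "party, or election-related claims? Answer only YES or NO."
)

def is_political(image_path: Path, runs: int = 3) -> bool:
    """Return True if a majority of model runs label the screenshot as political."""
    votes = 0
    for _ in range(runs):
        result = ollama.generate(model="llava:7b", prompt=PROMPT,
                                 images=[str(image_path)])
        if result["response"].strip().upper().startswith("YES"):
            votes += 1
    return votes * 2 > runs  # majority vote (2 of 3 by default)

for screenshot in sorted(Path("captured_ads").glob("*.png")):
    label = "political" if is_political(screenshot) else "not political"
    print(f"{screenshot.name}: {label}")
```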
For media inquiries, please contact Isabelle Corriveau at isabelle.corriveau2@mcgill.ca.