You are being lied to.
A quick article on disinformation
It’s never been easier to lie online, and governments across the world are using that to manipulate you.
That’s the news from NBC today, which reports that the social media analysis company Graphika has analyzed a number of “online influence operations,” including several affiliated with the governments of Russia and China. The examples include “Doppelganger,” which is allegedly tied to the Kremlin and used AI to make fake news sites, and “Spamouflage,” the infamous Chinese disinfo campaign that used AI-generated news influencers to spread fake news. The researchers at Graphika found that these campaigns are using generative AI to push their agendas within the United States and other Western countries. This is a full-fledged propaganda push, and what’s absolutely astonishing about it is not just how effective it is… but how effective it is despite being so low-effort.
“The findings run counter to what many researchers had anticipated with the growing sophistication of generative AI,” Kevin Collier writes, referring to “[a]rtificial intelligence that mimics human speech, writing and images in pictures and videos. The technology has rapidly become more advanced in recent years, and some experts warned that propagandists working on behalf of authoritarian countries would embrace high-quality, convincing synthetic content designed to deceive even the most discerning people in democratic societies. Resoundingly, though, the Graphika researchers found that the AI content created by those established campaigns is low-quality ‘slop,’ ranging from unconvincing synthetic news reporters in YouTube videos to clunky translations or fake news websites that accidentally include AI prompts in headlines.”
You might think you’re not an easy mark for AI-generated content, but you might be wrong. A recent study at Cornell University examined how well people could identify AI-generated images and news statements. The study paired 80 participants with an AI chatbot that fed them headline-image pairings drawn from various news sources. With the chatbot’s assistance, participants were able to identify what was AI 90% of the time.
However, that number plummeted to just 60% when the participants were shown the pairings without the AI chatbot. In other words, roughly 40% of the time, people were shown something created by AI… and they had no idea whether it was AI or not. This is further backed up by a more neutral study that examined literature produced from 2021 to 2024 on AI’s relationship to disinformation. The study’s ultimate conclusion was that generative AI was “ambivalent… neither inherently beneficial nor harmful.”
Yet in reaching that conclusion, the authors also acknowledged that AI “enables an unprecedented capacity to generate synthetic text, images, audio, and video that are increasingly difficult to distinguish from authentic content. This raises significant risks for democratic processes, scientific credibility, and public trust, particularly when these tools are used to manipulate, fabricate, or strategically distort information,” and that AI “facilitates the dissemination of disinformation by making it more targeted, personalized, and scalable. The combination of synthetic content and online platform recommendation algorithms amplifies the reach of false narratives—often beyond the control of traditional oversight mechanisms.”
It’s really interesting to acknowledge all of this and still arrive at a place of neutrality, especially when you consider that those “traditional oversight mechanisms” have been weakened, and in some cases completely eradicated, in the United States. Meta removed its fact-checkers from Facebook and Instagram less than two weeks before Donald Trump’s inauguration. Elon Musk replaced traditional fact-checkers on Twitter with “community notes” and an AI chatbot that has at times denied the Holocaust and threatened to sexually assault its users. Even news organizations like Voice of America have been effectively neutered by Trump, furthering the administration’s obsessive fight against the truth.

And in case any of that isn’t alarming enough for you: over 50% of participants in a UVU study thought deepfakes were real or “likely” real. We are all easy marks for AI disinformation, which means it’s more important now than ever to question everything we see online and to search for the original source of everything. If you see a “crazy fact” on social media, Google it. Find out where it came from. A crazy image of Trump as the pope? Take to other platforms to find out where it originated and whether it’s real. Ironically enough, you can even turn to tools like ChatGPT to ask whether the information is corroborated or even real.
Do I recommend it? No. But in a world in which everybody has a vested interest in lying to you, it’s worth seeing how many people are lying to you, why they’re lying to you, and where the lie even originates.
