AI-Generated Fake News on Rise: Misinformation Superspreaders


Artificial intelligence is automating the production of fake news, fueling a boom in web content that resembles genuine articles but spreads misleading information about elections, conflicts, and natural disasters.

According to NewsGuard, an organization that analyzes disinformation, the number of websites publishing AI-created bogus content has surged by more than 1,000% since May, rising from 49 to over 600.

Historically, propaganda campaigns depended on legions of low-wage workers or carefully coordinated intelligence agencies to build sites that appeared real. AI, however, makes it possible for almost anybody, whether a spy agency or a teenager in a basement, to develop these channels, producing content that can be difficult to distinguish from genuine news.

According to a NewsGuard investigation, one AI-generated piece told a fabricated story about Benjamin Netanyahu’s psychiatrist, claiming that he had died and left behind a note implying Netanyahu’s involvement. Although the doctor appears to be imaginary, the allegation was aired on an Iranian TV show and recirculated on media sites in Arabic, English, and Indonesian, as well as by users on TikTok, Reddit, and Instagram.

How to avoid being misled by AI images on social media

The growing churn of polarizing and false news can make it nearly impossible to determine what is factual, harming political candidates, military leaders, and humanitarian efforts. According to misinformation specialists, the rapid rise of these sites is especially concerning in the run-up to the 2024 elections.

“Some of these sites are generating hundreds, if not thousands, of articles per day,” said Jack Brewster, a NewsGuard researcher who led the inquiry. “This is why we call it the next great misinformation superspreader.”

Generative artificial intelligence has ushered in a new era in which chatbots, image generators, and voice cloners can create material that appears to be made by humans.

Well-dressed AI-generated news anchors are spouting pro-Chinese propaganda, amplified by Beijing-friendly bot networks. Days before the election in Slovakia, politicians discovered their voices had been cloned to say controversial things they never said. A growing number of websites with generic names like iBusiness Day or Ireland Top News are producing phony news in dozens of languages, from Arabic to Thai.

Websites have the potential to deceive readers

The website Global Village Space, which published the item about Netanyahu’s putative psychiatrist, is saturated with material on a wide range of pressing issues. There are articles on US sanctions against Russian weapons suppliers, Saudi Aramco’s involvement in Pakistan, and the United States’ increasingly strained relationship with China.

Essays on the site are also authored by a Middle East think tank specialist, a Harvard-educated lawyer, and the site’s main executive, Moeed Pirzada, a Pakistani television news anchor. (Pirzada did not reply to an interview request. Two contributors confirmed that their work had appeared on Global Village Space.)

However, according to Brewster, AI-generated pieces are slotted in between these conventional stories, such as the one about Netanyahu’s psychiatrist, which was relabeled as “satire” after NewsGuard contacted the site during its investigation. According to NewsGuard, the report appears to be based on a satirical piece published in June 2010 that made similar allegations concerning the death of an Israeli doctor.

You are most likely disseminating false information. Here’s how to put a stop to it

According to the media watchdog Poynter, pink-slime journalism sites, named after the processed meat byproduct, frequently spring up in small communities where local news outlets have vanished, publishing pieces that serve the financiers backing the operation.

However, according to Blevins, those older strategies required far more resources than artificial intelligence does. “The danger is the scope and scale with AI … especially when paired with more sophisticated algorithms,” he said. “It’s an information war on a scale we haven’t seen before.”
