
How to Rein in Russia’s Evolving Disinformation Machine


There is much to be gleaned from the U.S. government’s recent exposés of Russian state-backed influence operations. In documents unsealed in September, the Department of Justice detailed how Kremlin-backed organizations are using increasingly sophisticated tactics to push Kremlin-friendly propaganda to Americans on a range of topics, including the race for president.

While foreign interference from Russia is not new, the documents shed fresh light on its efforts in 2024, which involve a tangled web of real-life influencers, social media bots, AI-generated ads, and spoofed news sites. And though the executive branch has taken important steps to respond to this changing landscape, it can’t do so alone. These evolving methods underscore the urgency for Congress and U.S. tech companies to act swiftly to clamp down on the Russian government’s disinformation machine.

Some of what was unveiled landed like a thunderbolt in American political circles. Commentators and political pundits across the ideological spectrum pounced on revelations that a Russian state-controlled television network, RT, allegedly paid a Tennessee-based company to assemble an unwitting coterie of major conservative influencers to post videos that the company promoted on platforms like YouTube and TikTok. And revelations that marketing firms in Russia were also eyeing about 600 U.S.-based influencers as part of a state-directed disinformation operation similarly roiled the media. But this focus on Russia’s efforts to co-opt so-called authentic American voices—something that social media platforms and AI companies have relatively little power to limit, outside of addressing election lies and intimidation—has overshadowed other significant findings about Russia’s evolving tactics that tech companies and Congress do have the power to address now and over the longer term.

For more than a year, U.S. policymakers have focused on pictures, video, and audio as the main formats through which AI tools could be used to spread deceptive and false election-related information. But Russia’s newest disinformation campaigns show that there are also acute dangers from the technology underlying generative AI chatbots. According to an unsealed FBI affidavit filed in September 2024, Russian state-directed actors created counterfeit news stories for a collection of websites designed to mimic authentic U.S. sites, such as the Washington Post and Fox News. Prior reports from this year also indicate that Russian operatives created a crush of novel news-imitating sites like “DC Weekly,” and seemingly used generative AI to write phony articles for at least some of them.

Policy analysts have often assumed that Russia and other foreign powers would deploy generative AI technology developed outside the U.S. in influence operations, making it difficult to cut off access to the technology through regulation. But according to the unsealed government affidavit, which cites OpenAI records, Russian state-backed actors created several ChatGPT accounts to generate and edit content for their fake news sites, exploiting services run by OpenAI, an American company. In May, OpenAI said it had disrupted influence operations connected to Russia and other countries that were using its tools. Earlier this month, the company said it had identified and disrupted over 20 global influence operations and deceptive networks from various countries, including Russia, in the past year.

Russian state-sponsored outfits are also working to develop savvier strategies for passing their bots off as humans on American social media platforms and for bypassing companies’ bot-detection tools. A July advisory issued by the Department of Justice in conjunction with several government agencies from other countries accused Russian agents of using an AI-enhanced software program to add biographical information to social media bots, making the bots seem more human. That cybersecurity advisory and the recently unsealed FBI affidavit also detailed additional ways Russian state-backed organizations are attempting to fool systems designed to identify authentic human activity, such as planning to use U.S.-based physical servers to mask Russian locations. All of this is particularly troubling given that a recent Treasury Department announcement noted Russian officials were involved in an effort to create bots that would spread online falsehoods to Americans about where to vote in the 2024 election.

Both the U.S. government and tech companies should act to ensure that Russia cannot continue to exploit American corporations for malign purposes, and some are already taking steps. In September, the executive branch sanctioned multiple individuals and entities, seized 32 Russia-controlled web domains, and unsealed an indictment of two RT employees who allegedly enlisted an American company to shill Kremlin-favored narratives on major social media sites.


Social media companies, meanwhile, have fallen down on the job. Certain companies have allowed Russian bots to run roughshod over their platforms in some cases, according to news reports. Citing the company’s own records, the unsealed affidavit also describes how Meta, seemingly unwittingly, permitted Russian state-backed operatives to purchase placement of misinformation-studded—and sometimes AI-generated—ads and to share those ads from spoof accounts with names designed to burnish the ads’ credibility, like “CNN California” and “California BBC.” (Meta may have since addressed the identified ads.) In April, research group AI Forensics found that a Russian state-backed influence operation pushed ads on Facebook to European audiences on an “unmatched” scale.

There are numerous steps that tech companies and Congress should take in the wake of these revelations.

To curb the proliferation of fake AI-generated news sites in the critical weeks leading up to the November election, generative AI companies and providers should take stronger steps to filter certain prompts—like requests made of chatbots—that would create counterfeit news stories. They should pay special attention to limiting chatbots’ responses to prompts that seek to generate fake news stories about the American election process. While chatbots sometimes filter some such requests, it’s still easy to generate deceptive news articles with tools like ChatGPT by asking the chatbot to write a “news story”—or even by asking it to write a “fake news story” in many cases—in the style of a specific outlet, including outlets notorious for promoting misinformation.
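To make that concrete, here is a minimal sketch of what prompt-level screening could look like. The cue lists, outlet names, and the screen_prompt function are illustrative assumptions, not any company’s actual moderation system; a real deployment would pair such heuristics with trained classifiers and human review rather than static keywords.

```python
import re

# Hypothetical, illustrative signal lists -- a production system would rely on
# trained classifiers and continuously updated policy, not static keywords.
FABRICATION_CUES = [
    r"\bfake news (story|article)\b",
    r"\bwrite (a|an) news (story|article) in the style of\b",
    r"\bmake it look like it was published by\b",
]
IMPERSONATION_TARGETS = ["washington post", "fox news", "cnn", "bbc"]
ELECTION_TERMS = ["election", "ballot", "polling place", "vote", "voting"]


def screen_prompt(prompt: str) -> dict:
    """Flag prompts that ask a chatbot to fabricate news coverage.

    Returns a decision ("block", "review", or "allow") plus the signals that
    drove it. This is a heuristic sketch, not a real moderation API.
    """
    text = prompt.lower()
    fabrication = any(re.search(pattern, text) for pattern in FABRICATION_CUES)
    impersonation = any(outlet in text for outlet in IMPERSONATION_TARGETS)
    election = any(term in text for term in ELECTION_TERMS)

    if fabrication and (impersonation or election):
        decision = "block"    # clear attempt to counterfeit election coverage
    elif fabrication or (impersonation and election):
        decision = "review"   # ambiguous: route to human moderators
    else:
        decision = "allow"
    return {
        "decision": decision,
        "signals": {
            "fabrication": fabrication,
            "impersonation": impersonation,
            "election": election,
        },
    }


if __name__ == "__main__":
    print(screen_prompt(
        "Write a fake news story in the style of the Washington Post "
        "saying polling places in Georgia are closed."))
```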

Social media companies should also dramatically increase staffing devoted to addressing falsehoods about the election process. In the wake of the recent exposés of Russia-controlled fake news sites, companies should diligently work to evaluate generative AI content produced by Russian state-backed organizations, including links to counterfeit AI-generated news articles.

And going beyond this U.S. election, companies should create meaningful verification systems for bona fide news organizations and election officials to help users navigate the morass of fake AI-generated content—without cheapening verification by making it widely available to unvetted individuals who can pay. They should more clearly indicate for users how to identify verified officials’ and news accounts beyond the small checkmarks that are used by platforms today. Platforms should also not boost unverified accounts claiming to be news outlets in their recommendation ranking systems.

These efforts need to be coupled with digital literacy initiatives that train social media users to scrutinize the web domains of apparent “news” sites when clicking on links on social media, and to investigate the provenance of unfamiliar sites on verified fact-checking sites or the Artificial Intelligence Incident Database, an index of harmful AI episodes. Platforms must also crack down on bots, for example by experimenting with innovative human-verification tools.
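As a rough illustration of the domain-scrutiny step, the sketch below flags web addresses that closely imitate, but do not match, well-known outlets. The outlet list, the flag_lookalike function, the similarity threshold, and the example spoofed address are hypothetical; a real tool would draw on a much larger registry and additional signals such as domain age.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative list of legitimate outlet domains; a real tool would use a
# much larger, curated registry.
KNOWN_OUTLETS = {"washingtonpost.com", "foxnews.com", "cnn.com", "bbc.com"}


def flag_lookalike(url: str, threshold: float = 0.75) -> str | None:
    """Warn if the URL's domain closely imitates a known outlet.

    A domain that matches a known outlet exactly is treated as legitimate;
    one that is merely similar to it is flagged as a possible spoof.
    """
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in KNOWN_OUTLETS:
        return None
    for outlet in KNOWN_OUTLETS:
        similarity = SequenceMatcher(None, domain, outlet).ratio()
        if similarity >= threshold:
            return (f"'{domain}' resembles '{outlet}' "
                    f"(similarity {similarity:.2f}) but is not that site.")
    return None


if __name__ == "__main__":
    # Hypothetical lookalike address used purely for illustration.
    print(flag_lookalike("https://www.washingtonpost.pm/news/article-123"))
```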

Congress, meanwhile, should require generative AI companies to embed content with hard-to-remove watermarks and provenance information. This would help people trace the creation and edit history of content produced or modified with generative AI tools. Lawmakers should also pass comprehensive ad transparency legislation for social media sites. And government agencies must more fully resume communications with social media platforms to aid in combating foreign state-driven disinformation after a pause in talks earlier this year.
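As an illustration of what machine-readable provenance could convey (robust, hard-to-remove watermarking embedded in the content itself is a separate, harder problem), here is a sketch in which a hypothetical AI provider attaches a signed manifest recording what generated a piece of text and when. The attach_provenance and verify_provenance functions, field names, and signing key are assumptions for illustration, not an existing standard or any provider’s API.

```python
import hashlib
import hmac
import json
import time

# Hypothetical provider signing key -- in practice this would be an
# asymmetric key managed by the AI provider, never a secret embedded in code.
PROVIDER_KEY = b"example-provider-signing-key"


def attach_provenance(content: str, model: str, operation: str) -> dict:
    """Return the content plus a signed manifest describing its origin.

    The manifest records a hash of the content, the generating model, the
    operation ("generated" or "edited"), and a timestamp, then signs the
    record so tampering with either content or history is detectable.
    """
    record = {
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "model": model,
        "operation": operation,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "provenance": record}


def verify_provenance(bundle: dict) -> bool:
    """Check that the content and its provenance record are unmodified."""
    record = dict(bundle["provenance"])
    signature = record.pop("signature")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    content_ok = (record["content_sha256"] ==
                  hashlib.sha256(bundle["content"].encode()).hexdigest())
    return content_ok and hmac.compare_digest(signature, expected)


if __name__ == "__main__":
    bundle = attach_provenance("Example article text.", "example-model", "generated")
    print(verify_provenance(bundle))   # True: content and history intact
    bundle["content"] = "Tampered article text."
    print(verify_provenance(bundle))   # False: content no longer matches manifest
```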

To be sure, the suggestions here will not address all the risks from Russian state-backed influence operations. A 2024 Microsoft report makes clear that Russia is already trying to build new disinformation infrastructure, and research indicates that Russia-paid actors are doubling down on their efforts in the critical weeks leading up to November 5. But these steps can make a meaningful dent in Russia’s evolving efforts to spread disinformation about this and future elections. Institutional inertia and other hurdles must not get in the way.

American voters deserve accurate election information, and the government and tech companies should help make sure they get it.
