With help from AI, Russia has now supercharged its malign influence campaign.
The artificial-intelligence company OpenAI announced Friday that it disrupted a covert Iranian campaign that used its ChatGPT tool to create social media posts and long-form articles aimed at influencing American voters' views of candidates in both parties, spiced up with remarks about fashion and beauty to look more authentic. The disruption was based in part on an Aug. 9 warning from Microsoft's Threat Analysis Center that "Iranian actors have recently laid the groundwork for influence operations aimed at US audiences."
If this sounds like a repeat of the 2016 presidential campaign, with foreign nations trying to interfere in U.S. democracy, it is. And Iran is not alone. Russia has also been heavily engaged, and both the scale and sophistication of its efforts have grown immensely, thanks to AI.
As details emerge about Iran’s efforts, consider this: Last year on social media, “Sue Williamson” posted a video of Russian President Vladimir Putin declaring that the war in Ukraine is not a territorial conflict or a matter of geopolitical balance but, rather, the “principles on which the New World Order will be based.” Although Ms. Williamson’s account included a photo of her smiling, she did not exist. According to the Justice Department, she was a bot, a digital warrior for Russia, created using generative AI to sow discord in the United States and elsewhere.
According to court documents filed by the FBI this summer, "Sue Williamson" was one of 968 bots the Russians created on the social media platform X. Assembled with covert AI software known as Meliorator, the bots appear authentic and can be swiftly reprogrammed to respond to world events. Though past malign influence campaigns on the internet required painstaking human trial and error, Russia has now supercharged the process, spreading disinformation at high speed and on an industrial scale.
Details exposed by the FBI link the effort to the Kremlin. The bot farm was organized by the deputy editor of RT, the Russian state-owned television propaganda network, with help from the Federal Security Service, or FSB, the successor to the Soviet KGB. The bot farm used software that programmed "souls" and "thoughts" for the bot personalities to make them appear real. Next, the Russians obtained and controlled domain names from a U.S.-based provider, allowing them to create email addresses used to register hundreds of fictitious social media accounts. The accounts presented their influence efforts in a folksy "over the back fence" style.
Director of National Intelligence Avril Haines told Congress May 15 that China, Russia and Iran are the leading threats, but Russia stands out as the most active. A June report by Beatriz Saab for the National Endowment for Democracy warned that AI is “reducing the cost, time, and effort required by authoritarian actors to both mass-produce and disseminate manipulative content with the aim of smearing opponents and promoting allies, exacerbating divisions in democratic societies.” In its annual report in October, Freedom House found that over the previous year, “the new technology was utilized in at least 16 countries to sow doubt, smear opponents, or influence public debate.”
The United States and other open societies must not be complacent. The latest Russian campaign was caught, fortunately, by law enforcement and intelligence agencies of the United States, Canada and the Netherlands. The domains were seized on grounds of possible money laundering. But it is likely there are many other still-undetected influence campaigns, and it is impossible to catch or stop them all with existing statutes. After all, open societies earn the designation by allowing free-flowing expression and debate.
Emily Harding of the Center for Strategic and International Studies points out that the U.S. government "is largely dependent on industry to keep the bot farms away." Some platforms are making an effort, but others seem ineffective or are not even trying. The U.S. government's effort to fight foreign influence campaigns remains underfunded and understaffed, and Congress is threatening to abolish the State Department's Global Engagement Center.
Not all bots or disinformation actually work. OpenAI said the latest Iranian effort was probably a dud and did not reach many people. But a flood of new malign influence campaigns might go undetected and could sway unsuspecting voters. How to fight back? One way is for the platforms to turn the power of AI against the disinformation tidal wave, using the technology to spot and expose the campaigns. Congress ought to fund and upgrade programs that warn citizens against getting duped. And everyone should remain alert for more strangers named Sue peddling propaganda from a guy named Vladimir. He's for real.
By the Editorial Board of The Washington Post
August 19, 2024