OpenAI Report Reveals AI Tools in Global Disinformation Campaigns
Key Insights
- OpenAI exposes covert campaigns by state actors using AI for geopolitical influence and public opinion manipulation.
- Russia and China are among the state actors that used OpenAI technology to generate social media posts and articles for influence operations.
- Despite advanced AI tools, disinformation campaigns by Russia, China, Iran, and Israel struggled to gain significant traction or impact.
OpenAI has revealed that it identified and disrupted five covert online campaigns orchestrated using its generative artificial intelligence technologies. According to a recent report by OpenAI, the campaigns were run by state actors and private entities from Russia, China, Iran, and Israel. These efforts aimed to manipulate public opinion and influence geopolitical events through the generation of social media posts, article translation and editing, headline creation, and the debugging of computer programs.
The OpenAI report indicates that state actors and private companies in Russia, China, Iran, and Israel utilized OpenAI's technology in these campaigns. The operations leveraged generative AI tools to support political campaigns and sway public sentiment in geopolitical conflicts. This marks the first instance of a major AI company openly acknowledging the use of its tools in such deceptive online activities.
Ben Nimmo, a principal investigator at OpenAI, stated that the company aimed to shed light on the actual use of generative AI in online deception following widespread speculation. According to the report, despite the technological capabilities, the campaigns struggled to build significant audiences or achieve substantial impact.
Specific Campaigns and Their Methods
One of the Russian campaigns, Doppelganger, used OpenAI’s technology to create anti-Ukraine comments in multiple languages, including English, French, German, Italian, and Polish. These comments were posted on social media platforms like X (formerly Twitter).
Additionally, the tools were used to translate and edit articles favoring Russia in the Ukraine conflict into English and French, and to convert these articles into Facebook posts.
Another Russian campaign targeted individuals in Ukraine, Moldova, the Baltic States, and the United States via Telegram. This effort used AI to generate comments in Russian and English regarding the war in Ukraine and other political issues. OpenAI tools also helped debug computer code designed to post information to Telegram automatically. Despite these efforts, the campaigns received minimal engagement and were often unsophisticated, with some posts displaying obvious signs of AI generation.
The Chinese campaign, Spamouflage, used OpenAI’s technology to debug code, seek advice on social media analysis, and research current events. The tools also generated social media posts criticizing individuals opposed to the Chinese government.
Other Campaigns Unveiled
The Iranian campaign, linked to the International Union of Virtual Media, utilized OpenAI tools to produce and translate long-form articles and headlines. These articles aimed to spread pro-Iranian, anti-Israeli, and anti-U.S. sentiments on various websites.
The Israeli campaign, referred to as Zero Zeno by OpenAI, was managed by a firm involved in political campaigns. This campaign used OpenAI technology to create fictional personas and biographies for use on social media platforms in Israel, Canada, and the United States. These personas posted anti-Islamic messages and other politically charged content.
Future Implications of Generative AI in Disinformation
While OpenAI’s tools have been used to make these campaigns more efficient, the anticipated surge in convincing disinformation has not yet occurred, according to the report. The findings suggest that some of the most significant concerns about AI-enabled influence operations and disinformation have not yet come to fruition.
Graham Brookie, senior director of the Atlantic Council's Digital Forensic Research Lab, noted that the landscape of online disinformation could evolve as generative AI technology advances. OpenAI has recently announced the training of a new flagship AI model, promising enhanced capabilities.
The revelation from OpenAI also brings attention to legal and ethical considerations surrounding the use of AI in disinformation campaigns. The New York Times has filed a lawsuit against OpenAI and its partner Microsoft, alleging copyright infringement related to AI systems. This legal action underscores the growing concerns about the misuse of AI technology in manipulating information and the potential need for regulatory measures.