Iranian Influence Operation Using ChatGPT Disrupted by OpenAI

[Image: ChatGPT crossed out on an Iranian flag background]
OpenAI has uncovered and disrupted a covert Iranian influence operation that utilised ChatGPT to generate fake news stories and social media comments aimed at swaying the upcoming U.S. presidential election.


The operation, identified as Storm-2035, was found to be spreading disinformation on a range of topics, including the Israel-Hamas conflict and U.S. politics.


Key Takeaways

  • OpenAI deactivated a cluster of ChatGPT accounts linked to an Iranian disinformation campaign.
  • The operation, known as Storm-2035, aimed to influence the U.S. presidential election and other divisive issues.
  • The AI-generated content did not achieve significant engagement.

Discovery and Action

OpenAI identified and banned several ChatGPT accounts that were part of the Iranian influence operation. The accounts were used to generate long-form articles and social media comments on topics such as the U.S. presidential election, the Israel-Hamas conflict, and Israel's presence at the Olympic Games. The content was then disseminated through social media accounts and websites posing as news outlets.


Limited Impact

Despite the sophisticated use of AI, the operation did not achieve meaningful audience engagement. Most of the social media posts received few or no likes, shares, or comments. Similarly, the long-form articles did not gain traction on social media platforms.


Broader Context

The discovery comes in the wake of a report by the Microsoft Threat Analysis Center, which highlighted the activities of Storm-2035. The group was found to be using a range of online tactics to meddle in the U.S. presidential election, including creating fake news websites and social media accounts to amplify polarising messages.


Ongoing Efforts

OpenAI has shared its findings with government, campaign, and industry stakeholders to help disrupt further attempts at foreign influence. The company remains committed to preventing the misuse of its AI tools and continues to monitor for any violations of its policies.


Conclusion

While the Iranian influence operation did not achieve significant engagement, the incident underscores the potential for AI tools like ChatGPT to be misused for disinformation campaigns. OpenAI's swift action in identifying and banning the accounts involved highlights the importance of vigilance in the face of such threats.

