
AI-Powered Scam Bots Pervade Social Media

Key Insights:

  • ChatGPT’s text generation fuels stealthy botnets on X.
  • Over 1,000 active bots craft deceptive personas using AI.
  • Generative AI reshapes trust in online information.

Researchers at Indiana University’s Observatory on Social Media have uncovered startling evidence of how OpenAI’s ChatGPT is being misused. The chatbot, which made waves in the tech community earlier this year, is now being exploited on social media platforms, particularly on X, formerly known as Twitter.

Kai-Cheng Yang, a computational social science researcher, and Filippo Menczer, a seasoned computer science professor, spearheaded the study. Their findings spotlight how ChatGPT’s text generation is used to power intricate “botnets” on X.

Botnets: Silent Digital Predators

For the uninitiated, botnets are networks of automated accounts that launch coordinated spam campaigns on social media. These campaigns often slip past modern anti-spam filters, making their operations stealthy and effective. In Yang and Menczer’s study, these botnets had a dark agenda: promoting dubious cryptocurrencies and NFTs. The danger doesn’t stop there: the bots lure users into questionable investments and put their existing crypto assets at risk.

The research duo pinpointed a network on X teeming with over 1,000 active bots. These digital entities cleverly interact using outputs from ChatGPT. They employ stolen images to further their deceptive facade, crafting personas that can easily mislead users.

Menczer shed light on the broader implications of this trend. He emphasized, “Emerging AI tools have drastically reduced the barriers to crafting believable content, posing challenges to the already overwhelmed moderation systems of social media platforms.”

The Evolution of Digital Deception

Historically, bots on social platforms were relatively easy to spot. Their robotic interactions and lackluster personas were giveaways. However, the introduction of advanced tools like ChatGPT has blurred these lines. These state-of-the-art tools can churn out human-like text in a flash, making it increasingly challenging to discern genuine interactions from AI-generated ones.

Yang voiced his concerns about this shift, noting, “Generative AI tools might reshape our trust in online information.” The bots, as identified in the study, primarily zeroed in on promoting questionable crypto and NFT campaigns. They also directed users to suspicious websites, which, upon closer inspection, were crafted using similar AI-driven tools.

Beyond Social Platforms: The Web of AI-Generated Content

NewsGuard, a firm dedicated to assessing the credibility of news platforms, has flagged over 400 such AI-crafted sites in recent months. These platforms not only disseminate misinformation but also profit from automated ad placements.

NewsGuard and the Indiana University researchers have also identified a recurring tell in AI-generated content: when a prompt runs afoul of its rules, ChatGPT falls back on stock boilerplate such as “As an AI language model, I cannot…”, and accounts that publish raw model output sometimes post these refusals verbatim.
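The sketch below is a minimal keyword filter for such self-revealing boilerplate; the phrase list and sample posts are illustrative assumptions, not the researchers’ actual query or data.

import re

# Telltale boilerplate that ChatGPT emits when it refuses or qualifies a request.
# The phrase list is illustrative, not the study's exact search query.
TELLTALE_PHRASES = [
    r"as an ai language model",
    r"i cannot fulfill (this|that) request",
    r"i'm sorry, but i cannot",
]
PATTERN = re.compile("|".join(TELLTALE_PHRASES), re.IGNORECASE)

def looks_self_revealing(post_text: str) -> bool:
    """Flag posts that appear to be pasted straight from an LLM's default or refusal output."""
    return bool(PATTERN.search(post_text))

posts = [
    "As an AI language model, I cannot provide financial advice, but this coin looks promising!",
    "gm frens, new NFT drop tonight",
]
print([p for p in posts if looks_self_revealing(p)])  # flags only the first post

A filter like this only catches the clumsiest accounts; bots that strip the boilerplate or never trigger it pass undetected.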

Wei Xu, a computer science expert at the Georgia Institute of Technology, expressed concern about the escalating challenge of pinpointing AI-generated content as the technology matures.

Europol’s projections add weight to this concern, forecasting that by 2026 the bulk of internet content could be AI-generated. Even with robust regulations in place, such tools are likely to remain within reach of malicious actors.

Xu drew parallels to other global challenges, remarking, “Despite known repercussions, certain practices persist due to affordability and lack of deterrents.”

Future Pathways: Regulation and Vigilance

The Biden administration has sought commitments from major AI stakeholders to implement measures to curtail AI risks. One proposed solution involves tagging AI-generated content with discreet labels, allowing users to differentiate genuine from AI-produced content.
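As a toy illustration of what content labeling could mean in practice (and deliberately not any real watermarking standard or vendor API), the sketch below attaches a verifiable provenance tag to generated text; actual proposals embed far subtler statistical watermarks in the model’s output itself.

import base64, hashlib, hmac

# Hypothetical signing key held by the AI provider; purely illustrative.
SECRET_KEY = b"provider-signing-key"

def label(text: str) -> str:
    """Append a keyed provenance tag to generated text."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).digest()
    return text + "\n[ai-label:" + base64.b64encode(tag).decode() + "]"

def verify(labeled_text: str) -> bool:
    """Check that the tag matches the text, i.e. the label is intact and untampered."""
    body, sep, tail = labeled_text.rpartition("\n[ai-label:")
    if not sep or not tail.endswith("]"):
        return False
    expected = base64.b64encode(hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).digest()).decode()
    return hmac.compare_digest(expected, tail[:-1])

stamped = label("This post was produced by a language model.")
print(verify(stamped))                             # True: label intact
print(verify(stamped.replace("post", "article")))  # False: text was altered after labeling

The obvious weakness is that anyone can simply strip the tag before reposting the text.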

Menczer, however, remains unconvinced that such measures will be effective. Yang advocates a holistic approach, emphasizing the tracking of social media activity patterns to identify bots.
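As a rough illustration of what pattern-based detection can look at, the sketch below scores an account on simple behavioral signals (posting cadence, follower ratios, account age). The Account structure, features, and thresholds are illustrative assumptions, not the signals used in the study.

from dataclasses import dataclass
from statistics import pstdev

@dataclass
class Account:
    followers: int
    following: int
    account_age_days: int
    post_intervals_sec: list[float]  # gaps between consecutive posts, in seconds

def behavioral_flags(acct: Account) -> dict[str, bool]:
    """Crude heuristics over account metadata rather than post text; thresholds are illustrative."""
    jitter = pstdev(acct.post_intervals_sec) if len(acct.post_intervals_sec) > 1 else 0.0
    return {
        "machine_like_cadence": jitter < 5.0,  # posts arrive on an eerily regular clock
        "follow_spam_ratio": acct.following > 10 * max(acct.followers, 1),
        "young_and_prolific": acct.account_age_days < 30 and len(acct.post_intervals_sec) > 500,
    }

suspect = Account(followers=3, following=1200, account_age_days=12,
                  post_intervals_sec=[60.0, 60.2, 59.9, 60.1])
print(behavioral_flags(suspect))  # flags the regular cadence and lopsided follow ratio

In practice such signals are combined with content-level and network-level features; no single heuristic is decisive on its own.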



Curtis Dye

Curtis is a cryptocurrency news and analytics author with a focus on DeFi, blockchain, CeFi, NFTs, and more. His publication skills include SEO optimization, WordPress, and Surfer tools, and he aids his readers with insights on the volatile crypto industry.
