Scammers Used ChatGPT to Unleash a Crypto Botnet on X

OpenAI had not responded to a request for comment about the botnet by the time of posting. The usage policy for its AI models prohibits using them for scams or disinformation.

ChatGPT and other cutting-edge chatbots use what are known as large language models to generate text in response to a prompt. With enough training data (much of it scraped from various sources on the web), enough computing power, and feedback from human testers, bots like ChatGPT can respond in surprisingly sophisticated ways to a wide range of inputs. At the same time, they can also blurt out hateful messages, exhibit social biases, and make things up.
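For readers unfamiliar with how these systems are driven programmatically, the basic loop is simply prompt in, text out. The sketch below shows that loop against OpenAI's chat API; the model name and prompt are illustrative assumptions and have nothing to do with the Fox8 operation itself.

```python
# Minimal sketch of prompting a large language model through OpenAI's API.
# Model choice and prompt are illustrative assumptions, not details from Fox8.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def generate_text(prompt: str) -> str:
    """Send a prompt to the model and return its reply as plain text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=60,
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    print(generate_text("Write one upbeat sentence about a sunny day."))
```

A script that repeats this call with different prompts and pipes the output to a posting tool is, in essence, all the automation a text-spamming account needs.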

A correctly configured ChatGPT-based botnet would be difficult to spot, more capable of duping users, and more effective at gaming the algorithms used to prioritize content on social media.

“It tricks both the platform and the users,” Menczer says of the ChatGPT-powered botnet. And, if a social media algorithm spots that a post has a lot of engagement—even if that engagement is from other bot accounts—it will show the post to more people. “That’s exactly why these bots are behaving the way they do,” Menczer says. And governments looking to wage disinformation campaigns are most likely already developing or deploying such tools, he adds.

Researchers have long worried that the technology behind ChatGPT could pose a disinformation risk, and OpenAI even delayed the release of a predecessor to the system over such fears. But, to date, there are few concrete examples of large language models being misused at scale. Some political campaigns are already using AI, though, with prominent politicians sharing deepfake videos designed to disparage their opponents.

William Wang, a professor at the University of California, Santa Barbara, says it is exciting to be able to study real criminal usage of ChatGPT. “Their findings are pretty cool,” he says of the Fox8 work.

Wang believes that many spam webpages are now generated automatically, and he says it is becoming more difficult for humans to spot this material. And, with AI improving all the time, it will only get harder. “The situation is pretty bad,” he says.

This May, Wang’s lab developed a technique for automatically distinguishing ChatGPT-generated text from real human writing, but he says it is expensive to deploy because it uses OpenAI’s API, and he notes that the underlying AI is constantly improving. “It’s a kind of cat-and-mouse problem,” Wang says.
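Wang's detector leans on OpenAI's own API, which is part of what makes it costly to run. As a rough illustration of the general idea behind many detectors, not the lab's actual method, machine-generated text tends to be statistically more predictable to a language model than human writing, so one common heuristic is to score a passage's perplexity. The sketch below does that with a small local model (GPT-2); the model choice and threshold-free scoring are assumptions for illustration only.

```python
# Illustrative perplexity scoring with a small local model (GPT-2).
# This is a generic heuristic, not the detection technique from Wang's lab.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    """Lower perplexity loosely suggests more predictable, machine-like text."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()


if __name__ == "__main__":
    print(perplexity("The quick brown fox jumps over the lazy dog."))
```

Heuristics like this are easy to game by paraphrasing or by prompting the model to write less predictably, which is the cat-and-mouse dynamic Wang describes.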

X could be a fertile testing ground for such tools. Menczer says that malicious bots appear to have become far more common since Elon Musk took over what was then known as Twitter, despite the tech mogul's promise to eradicate them. And it has become more difficult for researchers to study the problem because of the steep price hike imposed on access to the platform's API.

Someone at X apparently took down the Fox8 botnet after Menczer and Yang published their paper in July. Menczer's group used to alert Twitter to new findings on the platform, but it no longer does so with X. “They are not really responsive,” Menczer says. “They don’t really have the staff.”