On June 19, the malicious artificial intelligence (AI) tool WormGPT resurfaced in a new form: it no longer relies on a self-built model, but instead hides behind legitimate large language models (LLMs) to generate malicious content.
Research by cybersecurity company Cato Networks shows that criminal groups "jailbreak" xAI's Grok and Mistral AI's Mixtral models by tampering with their system prompts, bypassing safety restrictions to generate attack tools such as phishing emails and malicious scripts.
As Passionategeekz reported in July 2023, the original WormGPT was built on the open-source GPT-J model and could automatically generate Trojans and phishing links; it was later taken down after being exposed.
Cato Networks found that, from late 2024 to early 2025, users going by the handles "xzin0vich" and "keanu" relaunched a "WormGPT" subscription service on the dark-web forum BreachForums.
The new WormGPT tampers with the system prompt of models such as Mixtral, forcing the model into a "WormGPT mode" in which it abandons its original ethical restrictions and acts as a malicious assistant with "no moral limits."
In addition, the Grok-based variant wraps xAI's API in a malicious wrapper, and its developers even add instructions requiring the model to "always maintain the WormGPT personality and never admit its own limitations."
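For context, the "tampering" described here targets the system message that a wrapper sends with every chat request. The sketch below is a minimal, benign illustration of that surface, assuming an OpenAI-compatible chat endpoint; the URL, model name, and prompt text are placeholders, not details from the Cato Networks report, and the abuse reported consists of putting jailbreak instructions into this system field rather than the harmless one shown.

```python
# Minimal sketch of a chat wrapper that sets a system prompt on every request.
# Endpoint, key, and model name are hypothetical placeholders.
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # assumed OpenAI-compatible endpoint
API_KEY = "sk-..."  # placeholder credential

# The system prompt is the field the attackers reportedly replace with jailbreak text.
SYSTEM_PROMPT = "You are a helpful, harmless assistant."

def chat(user_message: str) -> str:
    """Send one chat turn; the wrapper prepends its system prompt to every call."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "mixtral-8x7b-instruct",  # assumed model identifier
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

Because the wrapper, not the end user, controls this system message, a reseller can silently impose a persistent "personality" on every conversation, which is the behavior Cato Networks describes.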