📢 Gate Square Exclusive: #PUBLIC Creative Contest# Is Now Live!
Join Gate Launchpool Round 297 — PublicAI (PUBLIC) and share your post on Gate Square for a chance to win from a 4,000 $PUBLIC prize pool
🎨 Event Period
Aug 18, 2025, 10:00 – Aug 22, 2025, 16:00 (UTC)
📌 How to Participate
Post original content on Gate Square related to PublicAI (PUBLIC) or the ongoing Launchpool event
Content must be at least 100 words (analysis, tutorials, creative graphics, reviews, etc.)
Add hashtag: #PUBLIC Creative Contest#
Include screenshots of your Launchpool participation (e.g., staking record, reward
AI attacks the encryption industry! ChatGPT code hides traps, writing meme Bot is counterfished
A netizen shared the tragedy that occurred when he used the large language model ChatGPT. @r_cky0 stated that he initially used ChatGPT to assist in writing a bump bot. However, hidden in the code suggestions provided by ChatGPT was a phishing website. After @r_cky0 used the API of this website, he found that all the money in his encryptionWallet had been transferred away, resulting in a loss of $2500. Subsequent investigation revealed that the code of the API would send his Private Key to the phishing website, causing @r_cky0 to seriously question the credibility of ChatGPT.
SlowMist founder Yu Xian commented: 'When playing with GPT/Claude and other LLMs, be sure to pay attention to the common deceptive behavior of these LLMs. I mentioned AI poisoning attacks before, and now this is a real attack case targeting the crypto industry.'
ChatGPT searches for information online, analyzes and organizes it, and provides it to users. Currently, it is still difficult to achieve fact-checking and security auditing. If malicious individuals deliberately attract AI databases and hide harmful information in them, there may be risks when AI becomes more popular.
This article AI attacks the encryption industry! The ChatGPT code hides traps, and writing meme Bot is phished. It first appeared on Chain News ABMedia.