Cyber Analysis Group - CyberAGroup - OSINT Darkweb Investigations  - Insider Threat - Crypto - Online threat & exposure - SOCMINT - Canadian

Your AI Assistant May Be Scamming You!

 

As we’ve discussed in previous posts, AI systems can confidently produce false information, invent details, and amplify online misinformation and disinformation (https://cyberagroup.com/Dark-Web-Misinformation/#wbb1). At the same time, CyberAGroup has seen firsthand, through the experiences of many clients, how scammers continually find creative new ways to deceive people. Now, these two trends are converging as criminals begin exploiting AI’s flaws to make their scams more effective.

 

While concerns that cybercriminals are building “super AI” systems capable of autonomously defrauding people on a massive scale are largely overblown, the real threat is far simpler. Scammers don’t need to hack AI systems or create advanced fraud bots; instead, they’re taking advantage of AI’s existing weaknesses.

 

As AI becomes an everyday tool integrated into search engines, browsers like Comet, and virtual assistants, fraudsters are adapting their tactics. Just as they learned to manipulate Google search results or create fake websites to mislead users (https://cyberagroup.com/Scammers-are-in-your-Google-Search-Results/#wbb1), they’re now “poisoning” the data that AI tools rely on. By injecting false contact details into the web, scammers can trick AI systems into retrieving and recommending their fraudulent information. In essence, the AI ends up doing their dirty work, becoming the trusted middleman that spreads their deception.

 

For more details, check out David Shipley’s discussion on the topic (https://www.youtube.com/watch?v=0rDiSx9QAgM) and the insightful research from Aurascape (https://aurascape.ai/llm-search-poisoning-fake-support-numbers/).