AI chatbots have become a popular way to search the web, providing direct answers to questions rather than lists of links. However, these tools sometimes give inaccurate information, and that creates a security risk. Cybersecurity researchers now warn that attackers are exploiting chatbots' tendency to suggest incorrect or hallucinated web addresses to carry out AI phishing attacks.
When users ask AI tools for login pages, especially for banking and tech platforms, they may receive incorrect links. Clicking them can send users to fake websites designed to steal personal information and login credentials.
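To make the risk concrete, here is a minimal Python sketch of the check a cautious user or client-side tool could run before trusting a chatbot-supplied link: reduce the URL to its hostname and accept it only if it belongs to a domain already known to be official. The allowlist and the example URLs below are illustrative assumptions, not real recommendations.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the official domain for each service you use,
# taken from your own bookmarks or statements -- never from the chatbot itself.
OFFICIAL_DOMAINS = {"wellsfargo.com", "paypal.com", "microsoft.com"}

def is_trusted_login_link(url: str) -> bool:
    """Accept a URL only if its host is an official domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_trusted_login_link("https://connect.secure.wellsfargo.com/auth"))  # True: real subdomain
print(is_trusted_login_link("https://wellsfargo.secure-login.net/auth"))    # False: the actual
                                                                             # domain is secure-login.net
```

Matching on the full host suffix, rather than just looking for the brand name anywhere in the URL, is the key design choice: it rejects deceptive hosts like wellsfargo.secure-login.net, where the brand appears only as a subdomain label.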
Researchers at Netcraft recently tested the GPT-4.1 family of models, which power Microsoft’s Bing AI and the Perplexity AI search engine. Of the 131 unique links the models returned when asked for login pages, only about two-thirds were correct. Roughly 30% pointed to unregistered or inactive domains, and another 5% led to unrelated websites. In other words, more than a third of the responses linked to pages not owned by the actual companies, increasing the risk that users end up on fake or unsafe sites.
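The unregistered domains are what make these numbers dangerous: an attacker can simply buy a name a chatbot keeps inventing and wait for victims to arrive. A rough sketch of that classification step follows, using DNS resolution as a crude stand-in for registration status (an assumption on my part; a real study would also consult WHOIS/RDAP records, and the second hostname below is invented for illustration).

```python
import socket

def resolves(host: str) -> bool:
    """Crude liveness probe: does the hostname resolve in DNS at all?
    Resolution failure is only a proxy for 'unregistered or inactive'."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Hostnames like the second one are exactly what an attacker can register
# after a chatbot starts recommending them. (.example is a reserved TLD.)
for host in ("wellsfargo.com", "login-portal-wellsfargo.example"):
    status = "resolves" if resolves(host) else "does not resolve (unregistered or inactive)"
    print(f"{host}: {status}")
```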
In one real-world example, a user asked Perplexity AI for the Wells Fargo login page and received a phishing page hosted on Google Sites as the top result. The fake site closely mimicked the bank's real design and prompted visitors to enter personal information, showing how even trusted AI platforms can inadvertently steer users to fraudulent websites.
To protect against AI phishing attacks, never blindly trust links from AI chat responses: double-check domain names for authenticity, use two-factor authentication wherever possible, and report suspicious AI-generated links. Keeping your browser updated and running strong antivirus software adds further protection.
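For the "double-check domain names" step, a simple similarity test catches many lookalike domains. This is a hedged sketch: the 0.75 threshold, the example URLs, and the use of difflib.SequenceMatcher are illustrative choices, not a production phishing detector.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

def classify(url: str, genuine: str = "wellsfargo.com") -> str:
    """Three-way verdict: official, lookalike, or simply not the right domain."""
    host = (urlparse(url).hostname or "").lower()
    if host == genuine or host.endswith("." + genuine):
        return "official"
    # Close-but-not-identical hostnames are the classic typosquatting signal.
    if SequenceMatcher(None, host, genuine).ratio() >= 0.75:
        return "lookalike -- possible typosquat, do not enter credentials"
    return "not the official domain -- do not enter credentials"

for url in (
    "https://www.wellsfargo.com/login",
    "https://wellsfargo-login.com/",             # invented lookalike
    "https://sites.google.com/view/wellsfargo",  # hosted-page pattern from the incident above
):
    print(classify(url), "<-", url)
```

Note that a phishing page hosted on a legitimate platform, like the Google Sites example above, is not a typosquat at all: it fails the similarity test but still isn't the official domain, which is why an allowlist check like the earlier sketch should run first.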
As attackers evolve their tactics to target AI models, it’s crucial to verify anything a chatbot tells you before acting on it. Taking these proactive steps to safeguard personal information and digital assets greatly reduces the risk of falling victim to AI phishing attacks.