Experts warn that AI chatbots such as ChatGPT are undermining a traditional defense against phishing emails: spotting poor spelling and grammar. Phishing emails, which trick recipients into divulging personal information or downloading malicious software, have historically been riddled with such errors. Chatbots can now correct them, helping cybercriminals slip past spam filters and deceive human readers.

Data from cybersecurity firm Darktrace indicates that phishing emails are increasingly generated by bots, allowing criminals to write longer, more sophisticated messages that are harder to detect. The rise in linguistically complex phishing emails suggests that attackers are using large language models like ChatGPT, which make effective social engineering and highly believable spear-phishing emails easier to craft.

Europol has also highlighted the risks posed by AI chatbots, including fraud, disinformation, and cybercrime, noting that such models can help malicious actors understand and carry out a range of crimes. Google has launched its own chatbot, Bard, and both Google and OpenAI have policies prohibiting the use of their AI models for deceptive or fraudulent activities.
https://www.theguardian.com/technology/2023/mar/29/ai-chatbots-making-it-harder-to-spot-phishing-emails-say-experts