Large Language Models (LLMs) have become adept at tasks that require specific content generation, such as responding to a customer complaint or summarizing an article. However, with the proliferation of open-source AI tools, cyber attackers and scammers can now craft deceptive phishing or impersonation emails using models like LLaMA or GPT4All.
Consider a scenario where a non-German-speaking cybercriminal targets a company in Frankfurt. In the past, constructing a persuasive phishing email in German would have been a time-consuming challenge: the criminal would have had to either obtain German email templates or hire German-speaking accomplices. The accessibility and popularity of open-source AI tools now make it simple to create and test impersonation tactics at a global scale. Consequently, the market for such cybercrime is no longer localized.
What's even more alarming is that attackers can prompt models like ChatGPT to mimic someone's writing style, generating content with an identical tone. This makes counterfeit emails even harder to spot. Imagine receiving a phishing email that mirrors your friend's writing style and asks for assistance. How could the average person resist the urge to click the embedded link?
These developments underscore the urgency of enhanced cybersecurity measures. As we embrace the benefits of AI technology, we must also be vigilant about its potential misuse and safeguard against the associated risks.
https://twitter.com/itsPaulAi/status/1662450775165894657
Although AI tools like LLMs can be used for malicious activities, they can also be employed to strengthen security. That is why we developed emailGPT, and one of its uses is detecting phishing emails.
Here is an example of using emailGPT to analyse a phishing email: simply forward the email and you will receive a reply in your mailbox. No software installation is required, it works on any mobile phone or PC, and the whole process is automated.
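As a rough illustration of how such a forward-and-reply loop can be wired up, here is a minimal Python sketch assuming a standard IMAP/SMTP mailbox. The host names, credentials and the analyse_email() placeholder are assumptions made for this example, not the actual emailGPT service code.

```python
# Hypothetical sketch of an automated "forward it, get a reply" loop.
# Hosts, credentials and analyse_email() are placeholders for illustration only.
import imaplib
import smtplib
import email
from email.message import EmailMessage

IMAP_HOST, SMTP_HOST = "imap.example.com", "smtp.example.com"
USER, PASSWORD = "scanner@example.com", "app-password"

def analyse_email(raw_bytes: bytes) -> str:
    """Stand-in for the phishing analysis (linguistic + network reputation)."""
    return "Verdict: suspicious. The message requests credentials via an unknown domain."

def poll_once() -> None:
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")          # newly forwarded emails
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        raw = msg_data[0][1]
        sender = email.message_from_bytes(raw)["From"]

        # Analyse the forwarded message and reply to the person who forwarded it.
        reply = EmailMessage()
        reply["To"], reply["From"] = sender, USER
        reply["Subject"] = "Phishing analysis result"
        reply.set_content(analyse_email(raw))
        with smtplib.SMTP_SSL(SMTP_HOST) as smtp:
            smtp.login(USER, PASSWORD)
            smtp.send_message(reply)
    imap.logout()

if __name__ == "__main__":
    poll_once()
```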
emailGPT detects phishing emails along two dimensions (an illustrative sketch of both follows the list):
Linguistic: ChatGPT examines the text of the email, assesses the sender's intent and flags suspicious requests.
Network reputation: Over the past two years we have collected over 10K phishing emails from users in the EU/US/HK/SG and analysed over 20K phishing domains. With this dataset we trained a LightGBM model to identify high-risk domains and links. Network reputation alone is 90-93% accurate.
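To make the two signals concrete, here is a hedged Python sketch. The prompt wording, the model name, the feature layout and the file name phishing_domains.txt are assumptions for illustration, not the production emailGPT internals.

```python
# Illustrative sketch of the two scoring dimensions; not the production emailGPT code.
import lightgbm as lgb
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def linguistic_score(email_text: str) -> float:
    """Ask the LLM how suspicious the sender's intent looks, as a number in [0, 1]."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": ("You analyse emails for phishing. Reply with a single number "
                         "between 0 and 1: the probability this email is phishing.")},
            {"role": "user", "content": email_text},
        ],
    )
    return float(resp.choices[0].message.content.strip())

def reputation_score(url_features: np.ndarray) -> float:
    """Score a link with a LightGBM model trained on known phishing domains."""
    booster = lgb.Booster(model_file="phishing_domains.txt")  # hypothetical model file
    return float(booster.predict(url_features.reshape(1, -1))[0])
```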
By combining the linguistic analysis capability of ChatGPT with the network reputation score from our LightGBM model, we help users identify impersonation emails.
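One simple way to fuse the two scores is sketched below; the weighting and thresholds are placeholders rather than emailGPT's tuned values.

```python
# Minimal fusion rule for the two signals; weights and thresholds are placeholders.
def combine(linguistic: float, reputation: float,
            weight: float = 0.5, threshold: float = 0.7) -> str:
    blended = weight * linguistic + (1 - weight) * reputation
    # A very strong signal from either dimension alone is also enough to flag the email.
    if blended >= threshold or max(linguistic, reputation) >= 0.9:
        return "Likely phishing - do not click any links."
    return "No strong phishing indicators found."

# e.g. persuasive wording but an unfamiliar, low-reputation domain
print(combine(linguistic=0.85, reputation=0.4))
```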
We offer emailGPT free of charge for personal use. Reach out to us at info@aipedals.com for instructions. We would also welcome your feedback on how to improve the product and make it more user-friendly.