LLMs fueling a “genAI criminal revolution” according to Netcraft report
- by nlqip
However, there can be clues in the email or on the site. Netcraft said that sometimes threat actors accidentally include large language model (LLM) outputs in the fraudulent emails. For example, a phishing email it encountered, claiming to contain a link to a file transfer of family photos, also included the phrase, “Certainly! Here are 50 more phrases for a family photo.”
“We might theorize that threat actors, using ChatGPT to generate the email body text, mistakenly included the introduction line in their randomizer,” Netcraft said. “This case suggests a combination of both genAI and traditional techniques.”
Telltale evidence still shows which phishing emails are fake
Another phishing email it viewed would have been credible — had it not been for its opening sentence, a leaked LLM introduction line: “Certainly, here’s your message translated into professional English.” And a fake investment website touting the phony company’s advantages looked good, except for the headline saying, “Certainly! Here are six key strengths of Cleveland Invest Company.”
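Because these leaked introduction lines follow predictable patterns, a defender could flag them with simple pattern matching. The sketch below is illustrative only — the phrase list is an assumption based on the examples quoted above, not a list published by Netcraft:

```python
import re

# Illustrative list of common LLM "introduction line" boilerplate
# that sometimes leaks into AI-generated phishing content.
# (Assumed examples, not taken from the Netcraft report.)
LLM_TELLS = [
    r"certainly[!,]? here (?:are|is)",
    r"here(?:'s| is) your message translated",
    r"as an ai language model",
    r"sure[!,]? here (?:are|is)",
]

_pattern = re.compile("|".join(LLM_TELLS), re.IGNORECASE)

def find_llm_tells(text: str) -> list[str]:
    """Return any leaked LLM boilerplate phrases found in the text."""
    return _pattern.findall(text)

email = ("Certainly! Here are 50 more phrases for a family photo. "
         "Click the link below to view your photos.")
print(find_llm_tells(email))  # → ['Certainly! Here are']
```

A real filter would combine signals like this with sender reputation and URL analysis; a phrase list alone is easy for attackers to evade once they notice the leak.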