SECURITY NEWS

AI Phishing: Staying Safe in a New World
Q1 2025 Newsletter
The Hong Kong Attack
Since generative AI tools became widely available in 2022, AI has helped improve productivity worldwide. Unfortunately, cybercriminals are also leveraging these capabilities to deploy advanced phishing attacks, and the implications are concerning.
For instance, in early 2024, a finance employee at the Hong Kong office of a multinational company received an email that appeared to come from the company’s CFO in the UK. The email described a confidential transaction that required quick action. Although the employee initially suspected a phishing attack, his doubts disappeared after he joined a video call mentioned in the email.
On the call, the CFO and the other participants looked and sounded just like the colleagues he knew. In reality, both the email and the video call were fake: cybercriminals had downloaded publicly available footage of the CFO and other employees, then used AI to generate deepfake video and audio that looked and sounded convincingly real.
The attackers convinced the employee to make 15 large transfers to five different accounts in Hong Kong. The scam resulted in a loss of HK$200 million, equivalent to around US$25 million at the time.
More Realistic Phishing Emails
AI is also helping cybercriminals craft far more convincing phishing emails. Traditional phishing often relied on generic, poorly worded messages that vigilant users and spam filters could easily flag. AI-driven phishing, however, uses machine learning to enhance the quality, context, and personalization of these fraudulent messages, increasing their success rate.
One notable advancement is the use of natural language processing (NLP), which enables AI to mimic human writing styles and tailor messages to individual recipients. By analyzing vast amounts of data, AI can generate emails that are contextually relevant and aligned with the target’s interests, job, or recent activities. This targeted personalization makes it difficult for both individuals and automated systems to distinguish legitimate communications from malicious ones.
Best Practices for Avoiding Attacks
But here’s the good news: as concerning as AI phishing attacks are, following a few best practices can help you stay safe.
Embrace Skepticism
No matter how real a communication looks or sounds, be skeptical of all unexpected calls and messages, especially those that ask you to perform a sensitive action such as transferring money, sharing confidential data, or clicking a link or attachment.
Independently Verify Authenticity
Remember this rule: when a sensitive request is unexpected and you cannot tell whether it is genuine, always independently verify its authenticity before following any instructions. Do so either by meeting the supposed sender in person or by using contact information you already know to be valid. Following this simple rule can help you avoid the vast majority of phishing attacks.
Verify Suspicious Video Conferences
Most video conferences pose no risk because they are expected or planned in advance. As a general rule, avoid joining random or unexpected video conferences without first confirming their authenticity.
If you do find yourself in a video conference that seems suspicious, contact the supposed participants through other channels to confirm the authenticity of the meeting. Additionally, actively participate and ask questions that only legitimate participants would be able to answer accurately.
Be Alert on Social Media
In your personal life, be careful on social media and dating sites. Review strangers’ profiles critically to verify their identity, and limit what you share with them until you have met in person. Finally, remember that “investment” and “get-rich-quick” offers from strangers are almost always red flags that you’re being targeted for a scam.