August 6 marks a unique day in the cybersecurity calendar, as it’s the birthday of Kevin Mitnick, arguably the most well-known social engineer of all time, a fellow Knowster, and a well-respected cybersecurity expert.
His legacy is more relevant than ever, especially as social engineering continues to evolve, with cybercriminals now leveraging AI.
From Phone Phreaking to Deepfakes
Social engineering is no longer just about convincing someone to hand over credentials on a phone call. It has become an attack on all the senses, driven by a range of psychological tactics. What used to be a game of trust and manipulation now includes GenAI-based impersonation, synthetic voices, and deepfake videos that target users and are more convincing than ever.
How AI Is Supercharging Social Engineering
Attackers are no longer hand-writing their phishing emails or throwing together sloppy spoof websites. They have leveled up their attack vectors: AI-generated phishing lures written in flawless grammar and tailored to specific targets, deepfake audio and video that convincingly impersonate executives or family members, and synthetic identities combined with breached data to bypass identity verification. Reconnaissance is increasingly automated as well, with attackers leveraging AI-powered OSINT to gather details from social media, corporate bios, and public databases.
According to KnowBe4’s latest threat trends report, polymorphic phishing campaigns powered by AI are already bypassing traditional email security systems. These attacks aren’t just smarter; they’re harder to detect, replicate, or filter out.
What You Can Do
Whether you’re leading a security program, training your team, or advising executives, family members, or friends, keep one thing in mind: scammers are after money, and they don’t care about the mental or psychological anguish they cause along the way.
Organizations can strengthen their defenses against AI-powered social engineering by taking several practical steps:

First, refresh phishing simulations regularly using AI-generated content based on current attack styles, including video-based lures and real-world scenarios.

Second, include deepfake awareness in training programs. If employees can’t recognize synthetic audio or video, they’re vulnerable, and they need to learn how to verify messages through multiple channels.

Third, match adversaries by using AI proactively. Tools like AI-powered training platforms, anomaly detection, and behavioral risk monitoring can help close the gap.

Finally, build a resilient security culture. Awareness shouldn’t be treated as a checkbox activity. Instead, foster an environment where employees feel a sense of ownership over security and are encouraged to speak up when something doesn’t feel right.
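To make the anomaly detection idea concrete, here is a minimal sketch of the kind of baseline-deviation check a behavioral risk monitoring tool might perform. This is an illustrative toy, not a production detector: the sample data, the z-score approach, and the threshold are all assumptions for the example.

```python
import statistics

def anomaly_score(history, value):
    """Z-score of a new observation against a user's historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return 0.0  # no variation in the baseline, nothing to compare against
    return abs(value - mean) / stdev

def is_suspicious(history, value, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from baseline."""
    return anomaly_score(history, value) > threshold

# Hypothetical example: hours of day at which a user typically logs in.
typical_login_hours = [9, 9, 10, 8, 9, 10, 9]

print(is_suspicious(typical_login_hours, 9))   # a normal morning login
print(is_suspicious(typical_login_hours, 3))   # a 3 a.m. login stands out
```

Real platforms track many more signals (location, device, message patterns) and use far richer models, but the principle is the same: establish what normal looks like for each user, then escalate when behavior drifts far from it.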
A Final Thought
Kevin Mitnick turned social engineering from an underground tactic into a global conversation. He showed us how curiosity and creativity without guardrails could become a serious threat.
As we remember his legacy, we also face a future he likely saw coming: one where machines mimic humans, and trust is harder to earn than ever.
Are your users ready? Because no matter how advanced the tech gets, the real frontline is your users. They are not the weakest link; they are your strongest asset. Train them like it.