Nov 25, 2024

How Your SIM Card Stops AI-Powered Fraud and Cybercrime

Carlos DaSilva, CPO at Unibeam

AI has changed the way we all work and play. It can make tasks easier and experiences more personalized. However, AI has also opened up many new avenues for fraudsters and criminals, creating fresh dangers that cyber-defenders need to combat. For example, AI enables:

  • Deepfake Fraud
    AI-generated videos or voices can make people seem to say or do things they never did.
  • AI-Powered Phishing
    AI can create emails or messages that are so personal and convincing they can fool even careful readers.
  • Identity Theft with AI Images
    AI can make realistic photos of nonexistent people, which scammers use to create fake identities or to impersonate real people.
  • Synthetic Identity Fraud
    Fraudsters can use AI to blend real and fake data to create “synthetic” identities that they can use to open accounts.
  • Investment and Ad Fraud
    AI can be used to spread fake investment advice and encourage people to click on fake ads.

AI-based fraud is no longer just a concept that is tested in research labs – it’s here. And it is affecting real people and real businesses every day.

For example, AI-driven voice cloning scams have become a real threat. In a recent case, a finance employee was scammed out of $25 million in a deepfake incident. The worker believed he was on a video call with the company’s CFO and other colleagues, but all participants were AI-generated fakes. Even though he was suspicious at first, the worker was reassured by familiar faces on the call – and authorized the payment.

Similarly, synthetic identity fraud (where criminals use AI to create entirely fake personas) is surging in the US auto lending industry. In 2023, there was a 98% rise in synthetic identity fraud attempts and $7.9 billion in losses. An analysis of 180 million loan applications showed that misrepresented income, fake identities, and credit washing now account for 75% of the risk to lenders.

AI Lowers the Barriers to Entry for Cybercrime

The key issue is that AI has lowered the barriers to entry for cybercriminals. It has made it easier than ever not only to commit cybercrimes and fraud, but to profit from them.

With the power of AI in their hands, criminals find cybercrime more accessible and affordable than ever, with almost no technical barriers. Tools like “FraudGPT” (available on the dark web for $500 or less) can automate social engineering attacks, including phishing and voice spoofing. These AI-powered tools generate realistic emails, voices, and images that make scams more believable and increase the chances of successful (and profitable) fraud.

Another rising trend is “Phishing-as-a-Service” (PhaaS) and “Malware-as-a-Service” (MaaS) platforms. These services allow non-technical users to launch sophisticated attacks at scale. PhaaS platforms can create customized phishing emails that appear legitimate. And MaaS provides adaptive malware that’s harder to detect, making traditional cybersecurity measures less effective.

As AI-powered cybercrime grows, security researchers face urgent questions about how to identify and counter these sophisticated attacks. And with deepfakes becoming more common, a major challenge is how to authenticate users effectively. How can we ensure that AI-generated images, voices, and videos cannot bypass verification systems? Researchers are exploring methods to detect AI-altered content. They’re looking for ways to evaluate security systems for potential AI exploits. And they are also developing new authentication solutions that can protect against both synthetic identities and deepfake technology.

Software Vs. Software – The Best Defense?

Security professionals who want to protect their organizations against AI-powered fraud and cybercrime face many challenges. One key problem is that many businesses still rely on outdated security methods, like passwords and SMS-based one-time password (OTP) authentication.

Passwords are often weak or reused across applications, making them easy to guess or steal. And SMS-based multi-factor authentication (MFA) can be compromised through interception or SIM-swap attacks. The fact is that most authentication today is done by software-based solutions. And the question is: is software-based security the best defense against sophisticated, software-driven threats like AI-enhanced phishing or deepfake scams? The answer to that question is why many organizations are turning to hardware-based authentication.

Apple, for example, now offers “Security Keys for Apple Account” as an additional layer of defense. Hardware USB keys like these are also commonly required for access to highly sensitive services, such as company IT admin servers. These hardware-based keys offer extra protection against phishing and social engineering. This reflects a shift toward solutions that go beyond software to protect against modern, AI-driven cybercrime. The market recognizes that software-only defenses may no longer be enough.

Hardware Vs. Software – Stronger Against AI-Powered Cybercrime

Hardware-based authentication offers several important advantages for security professionals concerned about AI-powered cybercrime and fraud.

First, it is highly effective against phishing attacks. Security keys generate unique cryptographic codes that are difficult for hackers to steal or spoof – unlike passwords or SMS-based MFA, which can be intercepted. And with phishing-resistant MFA, like Apple’s Security Keys, only the individual with the physical device can access an account.
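
To make that phishing resistance concrete, here is a minimal TypeScript sketch of the standard WebAuthn browser flow that hardware keys such as Apple’s Security Keys plug into. The `navigator.credentials.get` call is the real browser API; the `/auth/challenge` and `/auth/verify` endpoints are illustrative assumptions, not any particular vendor’s implementation.

```ts
// Minimal WebAuthn login sketch: the browser asks the security key to sign
// a server-issued challenge. The private key never leaves the hardware, and
// the signed response is bound to the site's origin - which is what defeats
// phishing pages hosted on look-alike domains.

// Hypothetical helper: fetch a fresh single-use challenge from the backend.
async function getChallengeFromServer(): Promise<Uint8Array> {
  const res = await fetch("/auth/challenge"); // assumed endpoint
  return new Uint8Array(await res.arrayBuffer());
}

async function loginWithSecurityKey(): Promise<void> {
  const challenge = await getChallengeFromServer();

  // Standard WebAuthn call: the hardware key signs the challenge.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge,                     // random nonce, used once
      timeout: 60_000,
      userVerification: "preferred", // PIN / biometric on the key, if enrolled
    },
  })) as PublicKeyCredential;

  // Return the signed assertion for server-side signature verification.
  await fetch("/auth/verify", {
    method: "POST",
    body: (assertion.response as AuthenticatorAssertionResponse).signature,
  });
}
```

Because the signature is cryptographically bound to the legitimate site, a phishing page cannot collect a reusable credential – the property that passwords and SMS codes lack.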

Another key benefit of hardware-based authentication is user-friendliness. Despite its powerful security, it is easy to use: it simplifies the login process, which encourages users to adopt it.

Unfortunately, while the login step itself is easy, security keys are inconvenient as an overall process for the end user, who must carry the USB key at all times and bears the responsibility of not forgetting or losing it. And beyond the cost of the physical keys themselves, IT organizations bear a new overhead cost to manage the keys, ship them, and keep track of lost or stolen ones.

SIM-based user authentication (like Unibeam’s solution) is one of the most secure hardware-based options, offering both superior security and ease of use compared to methods like security keys, dongles, or wearables. SIM-based authentication is difficult to bypass, since it links authentication directly to a user’s phone number, mobile device, and SIM card. This makes it nearly impossible for cybercriminals to spoof, stopping AI-powered identity fraud in its tracks.

What’s more, SIM-based authentication is extremely convenient. Users only need their mobile device – something most people carry with them at all times. There’s no need for additional security hardware like USB keys or smart cards, which can easily be lost or forgotten. It also simplifies and reduces the cost of management for IT teams, who no longer need to store, track, ship, or cancel hardware keys assigned to employees.
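
For a rough sense of why this is hard to spoof, here is a server-side TypeScript sketch of a SIM-bound challenge-response check. This post does not describe Unibeam’s actual protocol, so the sketch only assumes the general principle: a SIM applet signs a one-time nonce with a private key provisioned in the SIM, and the server verifies that signature against the public key enrolled for the phone number. All names here are hypothetical.

```ts
// Sketch of a SIM-bound challenge-response check (assumed design, not
// Unibeam's published protocol). Only the physical SIM holding the enrolled
// private key can produce a valid signature over the server's nonce - an
// AI-generated voice, face, or email cannot.
import { createVerify, randomBytes } from "node:crypto";

// Hypothetical registries: enrolled SIM public keys (PEM) and pending nonces.
const simPublicKeys = new Map<string, string>();
const pendingNonces = new Map<string, Buffer>();

// Step 1: issue a single-use nonce for the phone number being verified.
// In practice it would reach the SIM applet over the operator channel.
export function issueNonce(phoneNumber: string): Buffer {
  const nonce = randomBytes(32);
  pendingNonces.set(phoneNumber, nonce);
  return nonce;
}

// Step 2: verify the signature the SIM applet produced over the nonce.
export function verifySimSignature(phoneNumber: string, signature: Buffer): boolean {
  const nonce = pendingNonces.get(phoneNumber);
  const publicKeyPem = simPublicKeys.get(phoneNumber);
  if (!nonce || !publicKeyPem) return false; // unknown number or no challenge
  pendingNonces.delete(phoneNumber);         // enforce single use

  const verifier = createVerify("SHA256");
  verifier.update(nonce);
  return verifier.verify(publicKeyPem, signature);
}
```

From the IT team’s perspective, the “hardware” in this scheme is the SIM already in every employee’s phone, which is why there is nothing extra to ship, track, or revoke.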

The Bottom Line

As AI makes cybercrime easier for criminals and more dangerous for end users, the need for stronger security has never been greater. Hardware-based protections, like SIM-based authentication, offer a powerful defense against AI-driven scams. Unlike traditional software-only defenses, SIM-based security leverages the most secure part of the mobile device – the nearly unhackable SIM card – to verify identity. This makes it much harder for fraudsters to fake.

AI-powered scams are here to stay. By moving toward more reliable, more user-friendly hardware-based authentication methods, businesses can stay a step ahead of cybercriminals.
