Move over ‘Phishing’, it’s the dawn of ‘Vishing’
Cybersecurity analysts have long predicted that identity fraud would begin to play a larger role in cybercrime. The day has come when they can truly say, 'I told you so.'
Until now, service providers have largely been dealing with spam emails purporting to come from banks and clients. Clients, for their part, have been on the receiving end of impostors posing as service providers, armed with a few personal details to appear more legitimate and gain access to private data and bank details.
As these fraudsters' methods have become both more professional and more automated, we have found ways to counter them: we have tightened up on security and shown our clients how to distinguish genuine service providers from the more obvious fakes.
Whilst we were busy handling the basics, AI got involved.
AI, or artificial intelligence, is phenomenal.
We have benefitted from virtual personal assistants, digital translators and a multitude of other voice-driven technologies that let us use innovation to its fullest. Both Google and Microsoft have chatbots that can make pretty impressive phone calls, interacting with clients without getting frustrated and actually getting results.
Culturally, we can breathe a collective sigh of relief that soon there will be smart home assistants for everyone, able to call your insurance company, wait on hold and renew your policies.
Thank the Lord.
AI's dark side.
At the same time, fraudsters have been developing their own voice-driven technology: voice fraud.
Voice fraud refers to any call in which someone pretends to be someone they are not. Recent reports show that voice fraud calls have increased by 350% over the past few years, to the point that one in every 638 phone calls is an attempt at voice fraud.
This enables a wide range of fraudulent activity, including illegitimate sales calls in which the caller tries to get the victim to hand over sensitive information such as bank card details.
With AI-driven voice fraud, known as vishing, it's a little more advanced.
Vishers use social engineering tactics to get victims to reveal sensitive, restricted information. This is not to be mistaken for a robocall. A robocall prompts you to take further action, like pressing a button or dialling in your social security number, whereas vishing is interactive.
A vishing call uses an algorithm-driven computer to interact with you in the quest for more information.
In the first known example of a deepfake audio scam actually working, financial scammers created an AI impersonation of a CEO's voice and convinced the company's finance department to transfer almost a quarter of a million dollars to their bank account.
The global company, which has chosen to remain anonymous, was first attacked in March, when the CEO of an energy company believed he was talking to his superior, the CEO of the parent company in Germany. The caller's German accent and voice pattern imitated his boss's well enough that he followed the 'urgent' request and transferred the funds to a Hungarian supplier.
When the fraudsters called back straight afterwards demanding another transfer, he grew suspicious and refused.
Cybersecurity advisors are saying that this event is the first of many. David Thomas, CEO of Evident, commented on Threatpost that we are 'seeing more artificial intelligence-based identity fraud than ever before'.
Identity verification now has to go to the next level. While AI is a perfect tool for automating procedures and helping companies uncover unusual events, it can be used just as effectively for criminal purposes.
Multi-factor authentication and facial recognition still give us the upper hand in verification procedures. Now that the gates of vishing have opened, we should demand that anyone with the ability to transfer data or funds within a company follow the most rigorous security procedures available.
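For readers curious what one of those safeguards actually looks like under the hood, here is a minimal sketch of the time-based one-time password (TOTP) check that underpins most authenticator apps. It follows the published RFC 6238/4226 algorithm; the function names and the example secret are illustrative, and this is a sketch for understanding, not production code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32, submitted, window=1, step=30):
    """Check a submitted code, tolerating one time step of clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * step), submitted)
        for i in range(-window, window + 1)
    )
```

Because the code is derived from a shared secret and the current time, a visher who tricks someone into reading out a six-digit code has only seconds before it expires, which is precisely why this kind of second factor blunts voice-based social engineering.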