So what precautions should we take to reduce the likelihood of this occurring again?
"Criminals used artificial intelligence-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking."
Would the Ronald Reagan approach of "trust but verify" actually work in these circumstances?
So there you are, listening to the CEO of your organisation, who has demanded that you send money to a particular place within a certain time frame. It could be framed as a medical emergency, a ransom demand, or some other crisis, such as someone having forgotten to renew the licence on a vital piece of equipment or service.
How would you verify that the person on the other end is actually who they state they are?
1) Ask them key questions to which only you and the CEO would know the correct responses?
2) Put the mobile or desk phone down, and ring back on a known, verified number belonging to that person?
3) If you have your wits about you at the time, record the session and then compare it with a known, verified recording?
4) Could you take a SHA-256 hash of the recording and compare it with a hash of a known sample - would this be sufficient to prove that it really was the original CEO?
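On point (4), it is worth noting that a cryptographic hash like SHA-256 can only prove that two files are bit-for-bit identical; two separate recordings of the same person saying the same words will never hash to the same value, because any difference in even a single byte produces a completely different digest (the avalanche effect). A minimal Python sketch, using hypothetical byte strings standing in for audio data, illustrates why:

```python
import hashlib

# Two hypothetical "recordings" of the same phrase: identical audio
# except for a single differing byte (e.g. faint background noise).
recording_a = b"\x00\x10\x20\x30" * 1000
recording_b = b"\x00\x10\x20\x31" + b"\x00\x10\x20\x30" * 999  # one byte differs

digest_a = hashlib.sha256(recording_a).hexdigest()
digest_b = hashlib.sha256(recording_b).hexdigest()

# Despite the recordings being 99.97% identical, the digests share
# no meaningful similarity - so a hash match can never be expected
# from two independent captures of a live voice.
print(digest_a == digest_b)  # False
```

So a hash comparison could at best prove that an attacker replayed an exact copy of a file you already hold; it cannot verify that a live voice belongs to the CEO.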
This is more of a security awareness theme, but the potential for damage is very high given the number of high-profile whaling successes.
On the basis that Augmented Intelligence (better known as Artificial Intelligence) can reportedly detect up to 187 different nuances of a human voice, consider this: if a recording of your CEO is captured by some means and then played back through an attacking AI solution or service, would you immediately pay up or carry out the task? And would you behave differently if you had been prepared for such an attack?
Thoughts, contention, or "it won't happen to me"?
More news was released today with further facts: https://thenextweb.com/security/2019/09/02/fraudsters-deepfake-ceos-voice-to-trick-manager-into-tran...