AI voice imitation is the latest fraud lure


KUWAIT CITY, May 13: Information technology and security experts have warned of a new scam that has begun to spread, one that exploits voice-cloning artificial intelligence programs. They said the fraud works as follows: the fraudster calls the victim with a single goal, recording a sample of the victim's voice, and then ends the call. The fraudster then calls one of the victim's friends or relatives and, using the AI-cloned voice, asks them to transfer money, reports Al-Rai daily.

In this context, cybersecurity expert Engineer Saleh Al-Shammari said, “There is a direct relationship between technological development and cyber-attacks.” He added, “The latest technique hackers around the world have adopted is the use of artificial intelligence to imitate voices. In this method, the fraudster obtains a sample of the victim’s voice, either from a recording or during a call, after luring the victim into talking long enough to capture the full range of his speech sounds and tones. The cloned voice is then used to bait relatives or acquaintances into being swindled.”

Al-Shammari stressed “the need to be cautious about these methods and to always verify incoming calls to confirm whether the caller is actually a real person they know or an artificial voice generated by AI.” In addition, information technology expert Qusay Al-Shatti said, “Electronic fraud methods are evolving alongside technical and technological means and the new possibilities they offer.

Some use this technology for harmful and abusive ends, such as electronic fraud to steal funds and accounts and reap illegal financial gains. Previously, text messages were sent from mobile phone numbers claiming that the recipient’s bank card had been suspended and that a call was required, during which the card data would be collected from the individual. The pattern then shifted to calls with a pre-recorded message designed to deceive the recipient into believing it came from a financial institution. Now we have reached the stage of cloning voices.”

In addition, security and strategic expert Khaled Al-Sallal said, “The danger of artificial intelligence systems lies in their ability to carry out deep falsification of voices, using ‘deepfake’ software programs that have become widely available. Through these, they can easily imitate someone’s voice with a degree of accuracy that can deceive humans and smart devices alike.” He cited the example of the ‘SV2TTS’ application, which needs only a five-second sample to produce an audio clip that passably imitates anyone’s voice. Al-Sallal added, “Microsoft has developed another program, the ‘VALL-E’ application, an accurate tool for anyone looking to create a voice that sounds more natural than any other application can produce, from an audio recording of no more than three seconds.”
