
When AI becomes a weapon: the ravages of Deepfakes

We have already expressed our concerns about this kind of advance in artificial intelligence on our platforms. Current deepfake technologies make it possible to create fake photos, videos and audio from existing samples, such as interviews, films or photos found on the web. A good example of this type of technology is Microsoft's VALL-E artificial intelligence, which can imitate any voice from a sample of just three seconds.


Why are we worried about deepfake technology?


Because these technologies allow hackers to refine their fraud techniques, and many victims have already fallen into their nets. No one is spared: from governments and businesses to ordinary households, hackers make no distinction.


Internet identity theft (deepfake)

Here are a few cases of deepfake scams that have made the headlines:


Romance fraud

Romance fraud involves impersonating a real person by stealing their photos and videos in order to lure victims on dating sites or social networks. It's a technique well known to cybercriminals, who have already ruined many a lovestruck victim. Now, however, they can use AI to create an entirely new, highly realistic persona, which makes it even easier to deceive their targets.

After impersonating someone or inventing a persona from scratch, the scammers play the distress card: a precarious financial situation, the illness of a loved one, or the cost of travelling to join the person of their heart's desire. Smitten victims don't hesitate to empty their bank accounts to come to their aid.


Scams involving family members supposedly in danger or in an emergency

A mother thought she was really talking to her son on the phone, asking for help. After a few exchanges, another person took over, claiming to be the son's lawyer and explaining that the son had caused a serious road accident because he was texting while driving. "He told them they needed to provide $9,800 to have their son released from custody on bail." The parents immediately withdrew the money, and someone came to their house to collect it. It wasn't until a second request for money that they sensed something was wrong and decided to call their son directly, which revealed the deception.


Hackers comb social networks to identify family ties between individuals. They then clone people's voices, faces and personal information to make themselves more convincing and claim more victims. Caught up in the urgency, families often fail to step back and really assess the situation, especially when they think they are hearing the voice of a loved one.


CEO fraud (president fraud) with deepfake technology

In the same vein, but in the professional world, we find AI-enhanced CEO fraud, which recently claimed a victim at a large Hong Kong-based company.


As a reminder, CEO fraud is a social engineering technique that involves impersonating the head of a company or someone in a position of high responsibility within an organization. The main aim is to persuade an employee to transfer money, sensitive data or access, often under the pretext of a fictitious emergency.


The unfortunate employee was initially skeptical about such a large transfer request, but he was reassured during a videoconference when he saw that, in addition to his manager, many of his colleagues were present. He then transferred $25 million before becoming suspicious and checking with one of the colleagues who was supposedly at the meeting. That's when the fraud came to light: the attackers had spent months capturing the faces and voices of everyone present in the call. A very scary story.


Fraud for the sake of disinformation


Another malicious use of deepfakes is disinformation. By creating deepfakes of well-known figures such as celebrities, political leaders or journalists, cybercriminals can spread false information on a global scale.

You've surely seen the photos of the Pope in a puffy down jacket or of Donald Trump being arrested by the police. If not, here's an article with the photos in question.

The images are amusing, but also frightening, because many Internet users spread this misleading content across the web. Fortunately, deepfakes are not yet at the height of their powers, and a few flaws are still visible, notably in the hands, though that didn't stop Internet users from believing the images were real.



None of these scams is new in itself; they have simply become more formidable thanks to various artificial intelligence techniques.

Be even more wary from now on: always verify through known phone numbers, or agree on code words that only you and your loved ones know. Beware: with social networks, a lot of information you might think is secret is in fact very easy for a cybercriminal to obtain.


Here's a video from 2016 that aimed to raise awareness about protecting personal information on the Internet. In it, a man posed as a psychic while a team of computer scientists behind the scenes fed him data they had found online about the person who had just sat down.

It was already chilling at the time, but imagine it now...


*English video with French subtitles


What's more, it's going to be more important than ever to think critically, as misinformation abounds and spreads rapidly thanks to social networks!


Protect your accounts, don't overshare, be skeptical of urgent messages, especially when money is involved, whether at work or in your private life, and stay informed about new cyberthreats to keep yourself safe.
