Scammed out of 4.3 million yuan in a 10-minute AI video call: a real fraud case shocks the internet. Officials: the success rate of AI fraud is close to 100%


Hayo News
May 23rd, 2023

Hard to guard against: 4.3 million yuan, scammed away by AI in just 10 minutes!

This is a real fraud case that has shocked the entire internet over the past two days.

According to a release from the Baotou police, the owner of a company received a WeChat video call from a "friend". Since the caller's face and voice seemed to confirm it really was him, the owner transferred the money without a second thought.

It was only when he later asked the friend, who knew nothing about it, that he realized a scammer had used DeepFake technology to clone his friend's face and voice.

As soon as the news broke, it shot to the top of trending searches. Netizens reacted one after another: This has gone too far! I don't dare answer calls anymore.

Others questioned: Is AI really that easy to train? Surely that requires a lot of personal data.

Yet however outrageously improbable this may seem, relevant statistics indicate that once a scam using this new AI technology lands, its success rate is close to 100%.

After all, even the livestream sellers "Yang Mi" and "Dilraba Dilmurat", and the "Stefanie Sun" and "JJ Lin" singing on Bilibili, are not the real people.

△Image source: Douyin @娱乐日洞社; the livestream room is suspected of using AI face-swapping to sell goods as "Yang Mi"

Scammed out of 4.3 million yuan by AI in 10 minutes

According to the WeChat official account Ping An Baotou, one month earlier, Mr. Guo, the legal representative of a technology company in Fuzhou, had suddenly received a WeChat video call from a friend.

During the chat, this "friend" said he needed a 4.3-million-yuan deposit for a bid in another city, and that the money had to move between corporate accounts, so he wanted to route the payment through the account of Guo's company.

After laying out this backstory, the "friend" asked Guo for his bank account number, then sent over a screenshot of a bank transfer slip, telling Guo the money had already been sent to his account.

Because of the video call and all this "hard evidence", Guo harbored little suspicion; he did not even check whether the money had actually arrived.

A few minutes later, Guo wired the money out in two payments and messaged his friend to report: "It's all taken care of."

The friend, however, slowly typed back a single question mark.

Fortunately, Guo reacted quickly and called the police at once. With the police and the bank working together, more than 3.3 million yuan of the defrauded funds was intercepted in just 10 minutes.

Some netizens remarked that AI is becoming the scammer's new tool of choice.

Others joked: I have no money, so no one can scam me. (Wait, when even the "friend" can be faked, having money is no protection either [doge])

At the core of this case are two technologies: AI face-swapping and speech synthesis.

AI face-swapping is already familiar to the public; these days even a flat 2D photo can be animated to move its mouth. According to an earlier report in Xinhua Daily Telegraph, synthesizing one such dynamic video costs as little as 2 to 10 yuan.

A suspect in that case said at the time that "customers" often buy hundreds or thousands at once, leaving enormous room for profit.

For the more accurate, real-time face-swapping needed on large livestream platforms, a full set of real-time video face-swap models sells for 35,000 yuan, advertised as running with no lag and no glitches.

As for speech synthesis, its results keep getting more lifelike, and new models and open-source projects are appearing all the time.

Not long ago, Microsoft's new model VALL·E set the academic world abuzz: from just 3 seconds of audio it can clone anyone's voice, even reproducing the ambient background sound.

And Bark, a tool with speech-synthesis capabilities, once topped GitHub's trending list. Besides producing voices close to a real person's, it can add background noise and highly realistic laughter, sighs and sobs.

On social networks of every kind, beginner tutorials keep pouring out.

Combine all this with a virtual camera, and things get even harder to detect.

With a single piece of software, any video file can be piped into a video call.
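The mechanics behind such software are simple: decode frames from a chosen video file and push them to a virtual camera device at a steady frame rate, so calling apps treat the feed as live webcam input. Below is a minimal sketch of just the pacing loop; the actual decoding and device I/O (which real tools typically do with libraries such as OpenCV and pyvirtualcam) are replaced here by caller-supplied stand-ins, so the helper names are illustrative, not any specific product's API:

```python
import time
from typing import Callable, Iterable, Any


def stream_frames(frames: Iterable[Any], send: Callable[[Any], None],
                  fps: float = 30.0) -> int:
    """Push pre-recorded frames to a sink at a fixed frame rate.

    `frames` stands in for a video decoder's output (e.g. OpenCV's
    VideoCapture.read); `send` stands in for writing to a virtual
    camera device (e.g. pyvirtualcam's Camera.send).
    Returns the number of frames delivered.
    """
    interval = 1.0 / fps              # seconds between frames
    deadline = time.monotonic()       # absolute schedule avoids drift
    sent = 0
    for frame in frames:
        send(frame)                   # the call software sees "live" input
        sent += 1
        deadline += interval
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)         # pace output so playback looks natural
    return sent


if __name__ == "__main__":
    # Stub frames standing in for decoded video; the list plays the
    # role of the remote party's view of the "webcam".
    received = []
    stream_frames([f"frame-{i}" for i in range(3)], received.append, fps=60.0)
    print(received)
```

Scheduling against an absolute deadline (rather than sleeping a fixed interval after each frame) keeps long streams from gradually slowing down, which is one reason such fake feeds can look indistinguishable from a real camera.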

△Image source: Weibo @dumb

Once the call connects, the other party sees nothing of the operator's actions, such as playing or pausing the clip; they see only the video itself playing. "Pick up, and what you see is a beautiful woman":

△Image source: Weibo @dumb

In this way, not only can the video fed through the virtual camera be swapped out at will; even the manner of speaking can be supplied live by a real person behind the scenes:

△Image source: Weibo @dumb

The falling technical threshold of these core technologies is exactly what gives criminals their opening.

New AI scams succeed at a rate close to 100%

In fact, the new breed of online fraud powered by AI is not limited to this one playbook.

Both in China and abroad, AI face-swapping fraud cases abound: at the small end, cheating victims out of modest sums through fake online shopping or paid "order-brushing" side gigs; at the large end, impersonating customer-service or investment staff to obtain bank card numbers and passwords outright, then draining accounts of large sums.

In China, the Nanjing police reported an earlier case in which a victim was defrauded of 3,000 yuan via an AI-faked QQ video.

The victim, Xiao Li, said that her college classmate Xiao Wang had asked to borrow 3,800 yuan over QQ, claiming his cousin had been hospitalized and he urgently needed the money.

When Xiao Li questioned whether it was really him, Xiao Wang promptly sent her a 4-to-5-second QQ video clip: it was not only set against a hospital backdrop, he even said hello in it.

That dispelled Xiao Li's doubts, and she transferred 3,000 yuan, only to find afterwards that the other party had deleted and blocked her, and that the video had been forged with AI.

Official accounts including Beijing Anti-Fraud and the Wuhan Anti-Telecom-and-Online-Fraud Center have now warned how serious these new AI scams are, some even stating that "the success rate of the fraud is close to 100%".

Nor do these scams appear only in China: voice scams abroad are just as full of tricks.

One approach uses AI-synthesized voice to trick victims into wiring money over the phone.

According to Gizmodo, a fraud case involving as much as £220,000 (equivalent to about 1.924 million yuan) took place in the United Kingdom.

The CEO of a local energy company had his voice "deepfaked" without his knowledge. The scammer then used the cloned voice over the phone to have £220,000 wired to a Hungarian account.

By the CEO's account, he was startled when he heard the AI-synthesized voice: it imitated not only his usual tone of speech but even some of his verbal habits, down to something like a "subtle German accent".

The other approach uses synthesized voices to impersonate relatives and friends.

According to a report by NBC15, Jennifer DeStefano, a mother in the United States, recently received a scam call from a "kidnapper" who claimed to have abducted her 15-year-old daughter and demanded a 1-million-dollar ransom.

From the other end of the line came her daughter's "cries for help"; not just the voice but even the sobbing sounded exactly like her. Fortunately, her husband confirmed in time that their daughter was safe, and the scam failed.

And fraud is not the whole story. With AI in the mix, "Yang Mi" and "Dilraba" themselves sparked heated discussion just today.

It turns out this is a new "money-making scheme" dreamed up by merchants: during livestreams, they use AI face-swapping and similar technologies to "deepfake" the faces of celebrities such as Yang Mi, Dilraba and Angelababy, so that viewers mistakenly believe the stars themselves are selling the goods, thereby boosting the stream's traffic.

At present, however, such acts cannot be judged outright as infringement. According to 21st Century Business Herald, Zhao Zhanping, a lawyer at Beijing Yunjia Law Firm, said:

The platform bears no direct liability for infringement committed by the merchants on it; whether it bears contributory liability depends mainly on whether the platform knew or should have known of the merchants' infringement.

In practice, though, even with complaints from users and rights holders, it is generally difficult to establish whether a platform knew or should have known.

Clearly, at a moment when AI technology is spreading faster than ever, the relevant laws still need further refinement.

One More Thing

Just last night, the person behind the recently viral "AI Stefanie Sun", that is, the singer Stefanie Sun herself, came out to respond.

She posted an article in English titled "My AI", which her team also released in a full Chinese translation.

As for netizens' comments after reading it, they ran along the lines of "the old ginger is still the spiciest":

△Image source: WeChat @Southern Metropolis Daily

Reference link:

[1] https://mp.weixin.qq.com/s/Ije3MyQxN-kkw4jCE5Wieg

[2] https://mp.weixin.qq.com/s/kcbNlaFe_-ebYNlfoGcIoA

[3] https://gizmodo.com/deepfake-ai-scammer-money-wiring-china-1850461160

[4] https://www.nbc15.com/2023/04/10/ive-got-your-daughter-mom-warns-terrifying-ai-voice-cloning-scam-that-faked-kidnapping/

[5] https://weibo.com/1796087453/JEaCeaOK6

[6] https://weibo.com/1420862042/4904325194975650

Reprinted from 量子位; authors: 杨净, 萧箫

