Playing ‘whack-a-mole’ with Meta over my fraudulent avatars - FT中文网

Playing ‘whack-a-mole’ with Meta over my fraudulent avatars

How is it possible that a company with such huge resources, including artificial intelligence tools, cannot deal with this?
Examples of deepfake avatars of Martin Wolf promoted in adverts on Facebook and Instagram

I have an alter ego or, as it is now known on the internet, an avatar. My avatar looks like me and sounds at least a bit like me. He pops up constantly on Facebook and Instagram. Colleagues who understand social media far better than I do have tried to kill this avatar. But so far at least they have failed.

Why are we so determined to terminate this plausible-seeming version of myself? Because he is a fraud — a “deepfake”. Worse, he is also literally a fraud: he tries to get people to join an investment group that I am allegedly leading. Somebody has designed him to cheat people, by exploiting new technology, my name and reputation and that of the FT. He must die. But can we get him killed?

I was first introduced to my avatar on March 11 2025. A former colleague brought his existence to my attention and I brought him at once to that of experts at the FT.

It turned out that he was in an advertisement on Instagram for a WhatsApp group supposedly run by me. That means Meta, which owns both platforms, was indirectly making money from the fraud. This was a shock. Someone was running a financial fraud in my name. It was just as bad that Meta was profiting from it.

My expert colleague contacted Meta and, after a little “to-ing and fro-ing”, managed to get the offending adverts taken down. Alas, that was far from the end of the affair. In subsequent weeks a number of other people, some of whom I knew personally and others who knew who I was, brought further posts to my attention. On each occasion, after being notified, Meta told us that the post had been taken down. Furthermore, I have also recently been enrolled in a new Meta system that uses facial recognition technology to identify and remove such scams.

In all, we felt that we were getting on top of this evil. Yes, it had been a bit like “whack-a-mole”, but the number of molehills we were seeing seemed to be low and falling. This has since turned out to be wrong. After examining the relevant data, another expert colleague recently told me there were at least three different deepfake videos and multiple Photoshopped images running in over 1,700 advertisements, with slight variations, across Facebook and Instagram. The data, from Meta’s Ad Library, shows the ads reached over 970,000 users in the EU alone — where regulations require tech platforms to report such figures.

“Since the ads are all in English, this likely represents only part of their overall reach,” my colleague noted. Presumably many more UK accounts saw them as well.

These ads were purchased by ten fake accounts, with new ones appearing after some were banned. This is like fighting the Hydra!

That is not all. There is a painful difference, I find, between knowing that social media platforms are being used to defraud people and being made an unwitting part of such a scam myself. This has been quite a shock. So how, I wonder, is it possible that a company like Meta with its huge resources, including artificial intelligence tools, cannot identify and take down such frauds automatically, particularly when informed of their existence? Is it really that hard or are they not trying, as Sarah Wynn-Williams suggests in her excellent book Careless People?

We have been in touch with officials at the Department for Culture, Media and Sport, who directed us towards Meta’s ad policies, which state that “ads must not promote products, services, schemes or offers using identified deceptive or misleading practices, including those meant to scam people out of money or personal information”. Similarly, the Online Safety Act requires platforms to protect users from fraud.

A spokesperson for Meta itself said: “It’s against our policies to impersonate public figures and we have removed and disabled the ads, accounts, and pages that were shared with us.”

Meta said in self-exculpation that “scammers are relentless and continuously evolve their tactics to try to evade detection, which is why we’re constantly developing new ways to make it harder for scammers to deceive others — including using facial recognition technology.” Yet I find it hard to believe that Meta, with its vast resources, could not do better. It should simply not be disseminating such frauds.

In the meantime, beware. I never offer investment advice. If you see such an advertisement, it is a scam. If you have been the victim of this scam, please share your experience with the FT at visual.investigations@ft.com. We need to get all the ads taken down, and to know whether Meta is getting on top of this problem.

Above all, this sort of fraud has to stop. If Meta cannot do it, who will?

martin.wolf@ft.com

Follow Martin Wolf with myFT and on X

Copyright notice: The copyright of this article belongs to FT中文网. No organisation or individual may reproduce, copy or otherwise use all or part of this article without permission; infringement will be pursued.
