
How to prevent AI from provoking the next financial crisis

New systems have benefits for markets, but risks to stability must be managed

Amid talk of job cuts due to artificial intelligence, Gary Gensler thinks robots will actually create more work for financial watchdogs. The US Securities and Exchange Commission chair describes an AI-driven financial crisis within a decade as “nearly unavoidable” without regulatory intervention. The immediate risk is a fresh financial crash rather than a robot takeover.

Gensler’s critics argue that the risks posed by AI are not novel, and have existed for decades. But the nature of these systems, created by a handful of hugely powerful tech companies, requires a new approach beyond siloed regulation. Machines may make finance more efficient, but could do just as much to trigger the next crisis.

Among the risks Gensler pinpoints is “herding”, in which multiple parties make similar decisions. Such behaviour has played out countless times: the stampede of financial institutions into packages of subprime mortgages sowed the seeds of the 2008 financial crisis. The growing reliance on AI models produced by a few tech companies increases that risk. The opaque nature of these systems also makes it difficult for regulators and institutions to assess which data sets they rely on.

Another danger lies in the paradox of explainability, noted by Gensler in a paper he co-wrote in 2020 as an MIT academic. If AI predictions could be easily understood, simpler systems could be used instead. It is their ability to produce new insights based on learning that makes them valuable. But it also hampers accountability and transparency; a lending model based on historical data could produce, say, racially biased results, but identifying this would take post facto investigation.

Reliance on AI also entrenches power in the hands of technology companies, which are increasingly making inroads into finance but are not subject to strict oversight. There are parallels with the world of cloud computing in finance. In the west, the triumvirate of Amazon, Microsoft and Google provides services to the biggest lenders. This concentration raises competition concerns, and affords those companies at least the theoretical ability to move markets in the direction of their choice. It also generates systemic risk: an outage at Amazon Web Services in 2021 affected companies ranging from the maker of the Roomba robot vacuum to the dating app Tinder. An issue with a trading algorithm could trigger a market crash.

Watchdogs have pushed back against the awkward nexus of technology and finance in the past, as with Meta’s digital currency, Diem, formerly known as Libra. But mitigating the risks from AI requires either expanding the perimeter of financial regulation or pushing authorities across different sectors to collaborate far more effectively. Given the potential for AI to affect every industry, that co-operation should be broad. The history of credit default swaps and collateralised debt obligations shows how dangerous “siloed” thinking can be.

The authorities will also need to take a leaf from the book of those convinced that AI is going to conquer the world, and focus on structural challenges rather than individual cases. The SEC itself proposed a rule in July addressing possible conflicts of interest in predictive data analytics, but it was focused on individual models used by broker-dealers and investment advisers. Regulators should scrutinise the underlying systems as much as specific cases.

Neo-Luddism is not warranted; AI is not inherently negative for financial services. It can be used to speed up the delivery of credit, support better trading or combat fraud. That regulators are engaging with the technology is also welcome: further adoption could accelerate data analysis and develop institutional understanding. AI can be a friend to finance, if the watchmen have the right tools to keep it on the rails.
