Opinion | Artificial Intelligence

AI companies are just companies

As we leap into a new technological age, the old rules of capitalism still apply
AI enthusiasts wave off the notion that the technology will lead to mass unemployment. A lot of people once drove horse-drawn carts and made buggy whips, they say. Losing those jobs to automobiles didn’t lead to breadlines; on the contrary.

Doomers respond that, in the case of AI, we’re not the drivers; we’re the horses. The optimists’ retort, that horses’ lives got better as they went from work animals to luxury items, is no help. Have a look at what happened to the equine population in the first half of the 20th century.

Whatever AI’s ultimate impact on unemployment, this back-and-forth highlights the idea that AI is unlike all the technologies that went before, with greater complexity, greater upsides and greater risks — for labour, cyber security, national defence, mental health and so on. So those controlling it have special responsibilities. Everyone in the AI industry acknowledges this. It is expressed in OpenAI’s “Model Spec” guidelines and in papers on the topic by Anthropic CEO Dario Amodei, which lay down what AI companies will allow their models to do.

But AI companies and their models will follow one rule before all others: they will seek to maximise returns for their shareholders, up to the limits set by law. When the law of profit conflicts with a company’s internal principles, profit will win every time.

This is not to be regretted. It is the outcome our system of corporate capitalism was intended to create. It has made us free and prosperous by encouraging risk-taking and creativity. And, in most cases, the profit motive and the common good line up beautifully. But as we leap into a new technological age, the old rules of capitalism still apply. Corporations only manage or pay for the economic externalities they create when they are forced to.

The amounts of money AI has attracted are staggering. The Big Tech “hyperscalers” plan to invest more than $600bn in the space this year alone. AI start-ups raised $73bn in the first quarter of 2025. OpenAI raised $122bn in a single funding round last month. The capital comes from people who demand a high return, and who know the industry will soon need more capital to buy computing power. This ensures that excessively cautious executives will be pushed aside, and sets up an arms race in which prioritising safety opens the way to technological irrelevance.

Amodei argues that there is a tension between building AI systems that won’t “autonomously threaten humanity” and staying ahead of authoritarian nations (or is it nation?) that might use such systems against us.

Before that tension comes into play, though, AI company CEOs will have to balance safety and competition. If Amodei or OpenAI’s Sam Altman strike that balance in a way that displeases their investors, they will be sacked. The industry’s sensitivity to revenue growth expectations is extreme. This week, The Wall Street Journal reported that OpenAI had missed internal sales and user targets. The story moved the whole of the Nasdaq, and OpenAI quickly released a statement calling it “clickbait”.

When Amodei says that he is “focused day and night on how to steer us away from [AI’s] negative outcomes and towards the positive ones”, I’m sure he is sincere. I’m also sure that, from the point of view of how the conflict between AI profit and safety plays out, his words are just noise. The relevant incentive structures don’t care what he is focused on.

This simple observation — that some of the risks created by AI can only be managed by citizens, not companies — leaves hard questions about how to regulate it. Figuring it out will be messy. Some AI companies’ fears about unintended consequences will be realised.

What might good regulation look like? Horse anecdotes aside, it should not try to protect specific job categories, which always ends in paying people to be unproductive. It should match specific regulatory tools to specific harms — physical, digital, psychological, financial — rather than taking the form of a monolithic law. On the liability side, it should take seriously the example of how other useful but inherently dangerous products, such as explosives, are treated, and it should rethink how agency law applies to non-human agents. It should emphasise liability rather than companies’ duty to warn. Investors’ skin needs to be in the safety game.

At the outset, though, the key is to reject any suggestion that this product is different, and somehow too complicated for citizens to have a say in. AI is new; capitalism is not.

Copyright notice: the copyright of this article belongs to FT中文网. Without permission, no organisation or individual may reproduce, copy, or otherwise use all or part of this article; violations will be pursued.
