The Pentagon-Anthropic dispute is a test of control - FT中文网
Should private companies be able to set boundaries around the AI systems we integrate into our lives?
The writer is a senior fellow at the Foundation for American Innovation and was lead staff writer of the Trump administration’s AI Action Plan.

On March 4, the US Department of Defense took an unprecedented step against an American company: designating the frontier AI start-up Anthropic a “supply chain risk”. Typically, this designation is applied to technology from foreign-adversary countries. In this instance, it was invoked over a contract dispute.

The conflict, which was largely blocked by a judge in California last week, centred on the question of where control over AI should rest. Neither side had the answer quite right.

Trump administration officials sought to renegotiate the terms of the Pentagon’s contract to use Anthropic’s Claude — the only large language model certified for use in classified US military contexts — not because they intended to violate the company’s red line on lethal autonomous weapons and mass surveillance, they say, but because they believe only US law should limit the military’s use of technology.

The principle is reasonable enough. But, as Judge Rita Lin stated, the proposed punishment of Anthropic was “arbitrary and capricious”. The correct solution would have been to cancel the contract and pass laws concerning the government’s use of AI systems.

The ruling is not a final decision and will probably be appealed by the Trump administration. But the issue raises a broader set of challenges that governments and citizens will grapple with for decades to come: where, precisely, should the locus of control over powerful AI systems rest? Should private companies be able to set ethical boundaries for the AI systems that may one day underpin our lives?

Some liken advanced AI to nuclear weapons and conclude that no technology so powerful should rest in private hands. But there are crucial differences between the two. Early iterations of the atomic bomb did not provide the consumer and commercial benefits that many derive from today’s AI.

The notion of a government passing laws that dictate the moral, ethical and philosophical values of AI systems therefore appears as a stark violation of the principle of free speech that underlies democratic nations. The prospect of nationalisation of AI labs — the logical endpoint of the “nuclear weapons” analogy — seems like a profound and radical act of tyranny.

But herein lies the central challenge of AI governance: the “nuclear weapons” analogists may not be correct, but they are right to be concerned. Advanced AI systems really do pose serious risks to national security. Frontier models from the biggest US AI companies are classified, by the companies’ own admission, as posing high risks of enabling cyber attacks and assisting in the creation of bioweapons, for example. And as the US defence department’s own usage makes clear, AI — not some future version of the technology but the systems we have today — can assist in creating lethal outcomes.

The good news is that advanced capitalist societies have dealt with this sort of challenge before. Our civilisations rest upon successive generations of foundational technologies — the printing press, banking, the automobile, electricity and the computer itself. All of these are technologies without which it is hard to imagine a modern military, and thus they are essential for national security. Yet none, at least in the US, has been nationalised. Instead, these technologies and industries are overseen by political, legal, regulatory and technical institutions — often a hybrid of public and private bodies.

Erecting something similar for AI is perilous. The Pentagon’s dispute with Anthropic shows that even an administration that brands itself as pro-AI can easily veer into regulatory over-reach. The chances of erring in a way that stifles innovation are high. Time, in this AI race, is a resource even more scarce than computing power.

Copyright notice: This article is the copyright of FT中文网. No organisation or individual may reproduce, copy or otherwise use all or part of this article without permission; infringement will be pursued.
