
How to legislate for AI in an age of uncertainty

We need laws that only kick in once we know the impact of the technology

The writer is professor of law at Penn State

We don’t know the future, nor how artificial intelligence will affect it. Some believe AI will lead to explosive economic growth; others are convinced it will immiserate all but a select few; still others aver that its economic impact will be marginal.

So how do legislators do their job under such uncertainty? Currently they don’t. Through inaction they have abdicated their responsibility to promote the health, safety and welfare of voters, adopting a wait-and-see approach. If they delay too long, the technology may already have harmed society and generated new billionaires ready to capture future regulatory processes. Yet regulating too early also has risks, inadvertently hampering innovation.

There is a better way to proceed, one that allows us to respond proactively to uncertainty. We need adaptive AI laws that detail how to react to each possible future harm or benefit but that don’t kick in until we see how AI is transforming society. Such laws would be passed now and take effect automatically when benchmarks are met, hedging social risks and distributing benefits.

Adaptive AI laws could take many forms and borrow tools from elsewhere, such as the decision trees used in machine learning. For example, politicians could now pass one set of laws that takes effect if job losses mount, triggering policies such as supplemental unemployment benefits and increased taxes on the rich. Another set could be triggered by job growth, bringing improved sick leave and fewer corporate subsidies. Benefits and taxes could rise on sliding scales tied to job losses or income inequality. Some responses, such as instituting a universal wage, could activate under numerous scenarios, whether economic inequality grew too severe or economic growth exploded, as the sketch below illustrates.
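
To make the decision-tree analogy concrete, the sketch below shows how an adaptive law’s dormant provisions might be encoded as benchmark-policy pairs evaluated against official statistics. It is only an illustration of the idea: every indicator name, threshold and policy label in it is hypothetical, invented for the example rather than drawn from this piece or any statute.

```python
# A minimal sketch of trigger-based adaptive legislation.
# All indicators, thresholds and policies below are hypothetical
# examples, not proposals from the article.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Trigger:
    description: str                   # the scenario the legislature anticipated
    condition: Callable[[dict], bool]  # benchmark, checked against official data
    policy: str                        # the dormant provision it activates


TRIGGERS = [
    Trigger("job losses mount",
            lambda s: s["unemployment_rate_pct"] >= 8.0,
            "supplemental unemployment benefits; higher taxes on the rich"),
    Trigger("job growth surges",
            lambda s: s["employment_growth_pct"] >= 3.0,
            "improved sick leave; fewer corporate subsidies"),
    Trigger("inequality or growth becomes extreme",
            lambda s: s["gini"] >= 0.50 or s["gdp_growth_pct"] >= 10.0,
            "universal wage"),
]


def active_policies(stats: dict) -> list[str]:
    """Return the provisions whose benchmarks the latest data meet."""
    return [t.policy for t in TRIGGERS if t.condition(stats)]


# Example run with made-up government statistics for one year.
print(active_policies({"unemployment_rate_pct": 9.1,
                       "employment_growth_pct": -1.2,
                       "gini": 0.52,
                       "gdp_growth_pct": 1.0}))
```

A real statute would spell out its triggers in legal language and name the official data series that feed them; the point of the sketch is only that each dormant provision pairs a measurable benchmark with a prescribed response.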

Such adaptive AI legislation could be applied to a range of other fields. It was initially unknown how harmful social media would be for children. If it emerges that AI has similarly negative effects, adaptive laws could restrict children’s access to it. If AI instead advances mental health goals, more public resources could be allocated. 

It’s easy to assume that AI will enhance our learning and expertise, yet a recent study showed that oncologists were 20 per cent worse at detecting precancerous growths on their own after having relied on AI as a detection aid. Adaptive AI regulation could grapple with the uncertain effects of AI on education across disciplines and ages. 

Many benchmarks for triggering the activation of an adaptive law could draw on reliable governmental data, such as income distributions, educational attainment and lifespans. Triggers for other topics, such as mental health, would be harder to measure and would benefit from bipartisan legislative guidance and monitoring by designated agencies. If partisan disagreement surfaces about whether a triggering event has occurred, courts can fulfil their established role as legal interpreters.

Adaptive AI laws would provide three main benefits. First, they would empower lawmakers to act now to avoid future problems, to be proactive instead of reactive. This would occur without sacrificing flexibility, because legislators could always change the adaptive regulation in response to technological developments or social change. Second, the structure of adaptive regulation would encourage lawmakers to think more deeply about the different possible paths that AI could take, which one hopes would lead to better policies. Third, adaptive AI regulations would provide a more stable regulatory framework for AI labs, creating legal clarity by informing labs how laws will automatically change depending on their actions. 

This wouldn’t be the first time that legislatures adopted laws that only activate under certain scenarios. For example, states have passed trigger laws contingent on developments related to abortion, Medicaid and rent control.

AI labs have already made voluntary if-then commitments, pledging to enhance safety measures once their models reach particular capabilities. Yet these commitments are nonbinding, not universal, and touch only on safety considerations, not on how AI will affect society more broadly.

We can reasonably imagine the different effects AI might have on society, but we can’t predict which path it will take. Adaptive AI laws would allow us to think through how to regulate the technology now, without delaying until it’s too late and without hurting innovation by having regulations kick in immediately. To manage the potential AI revolution, law needs one of its own.
