The writer is professor of law at Penn State
We don’t know the future, nor how artificial intelligence will affect it. Some believe AI will lead to explosive economic growth, others are convinced it will immiserate all but a select few, and still others aver that its economic impact will be marginal.
So how do legislators do their job under such uncertainty? Currently they don’t. Through inaction they have abdicated their responsibility to promote the health, safety and welfare of voters, adopting a wait-and-see approach. If they delay too long, the new technology could already have harmed society and generated new billionaires ready to capture future regulatory processes. Yet regulating too early carries its own risks, chief among them inadvertently hampering innovation.
There is a better way to proceed, one that allows us to respond proactively to uncertainty. We need adaptive AI laws that detail how to react to each possible future harm or benefit but that don’t kick in until we see how AI is actually transforming society. Such adaptive AI laws would be passed now and take effect automatically when benchmarks are met, hedging social risks and distributing benefits.
Adaptive AI laws could take many forms and borrow tools from elsewhere, such as the decision trees used in machine learning. For example, politicians could pass today one set of laws that takes effect if job losses mount, triggering policies such as supplemental unemployment benefits and increased taxes on the rich. Another set could be triggered by job growth, activating improved sick leave and fewer corporate subsidies. Benefits and taxes could rise on sliding scales tied to job losses or income inequality. Some responses, such as instituting a universal wage, could activate under numerous scenarios, whether economic inequality became severe enough or economic growth exploded.
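To make the decision-tree analogy concrete, here is a minimal sketch in Python of how such statutory branches might be encoded. Every indicator, threshold and provision below is hypothetical, chosen purely for illustration; a real statute would set these numbers through the political process.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Indicators:
    """Hypothetical benchmarks a designated agency might report annually."""
    job_loss_rate: float    # share of jobs lost year-over-year, e.g. 0.04 = 4%
    job_growth_rate: float  # share of jobs gained year-over-year
    gini: float             # income inequality, 0 (equal) to 1 (unequal)
    gdp_growth: float       # annual real GDP growth, e.g. 0.10 = 10%

@dataclass
class TriggerLaw:
    """One branch of the statute: a condition and the provisions it activates."""
    name: str
    condition: Callable[[Indicators], bool]
    provisions: list[str]

# Illustrative thresholds only -- real values would be set by legislators.
ADAPTIVE_AI_ACT = [
    TriggerLaw(
        "job-loss branch",
        lambda i: i.job_loss_rate > 0.05,
        ["supplemental unemployment benefits", "higher top marginal tax rates"],
    ),
    TriggerLaw(
        "job-growth branch",
        lambda i: i.job_growth_rate > 0.03,
        ["expanded sick leave", "reduced corporate subsidies"],
    ),
    TriggerLaw(
        "universal wage branch",  # activates under more than one scenario
        lambda i: i.gini > 0.55 or i.gdp_growth > 0.10,
        ["universal wage"],
    ),
]

def provisions_in_force(indicators: Indicators) -> list[str]:
    """Walk every branch and collect provisions whose triggers are met."""
    active = []
    for law in ADAPTIVE_AI_ACT:
        if law.condition(indicators):
            active.extend(law.provisions)
    return active

# Example: a year of heavy job losses and rising inequality.
print(provisions_in_force(Indicators(
    job_loss_rate=0.07, job_growth_rate=0.0, gini=0.58, gdp_growth=0.01)))
# -> ['supplemental unemployment benefits', 'higher top marginal tax rates',
#     'universal wage']
```

Note that each branch is evaluated independently, so several provisions can be in force at once, mirroring how different adaptive laws could overlap as conditions change.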
Such adaptive AI legislation could be applied to a range of other fields. It was initially unknown how harmful social media would be for children. If it emerges that AI has similarly negative effects, adaptive laws could restrict children’s access to it. If AI instead advances mental health goals, more public resources could be allocated.
It’s easy to assume that AI will enhance our learning and expertise, yet a recent study showed that endoscopists were 20 per cent worse at detecting precancerous growths on their own after having relied on AI as a detection aid. Adaptive AI regulation could grapple with the uncertain effects of AI on education across disciplines and ages.
Many benchmarks for triggering an adaptive law could draw on reliable government data, such as income distributions, educational attainment and lifespans. Triggers for other areas, such as mental health, would be harder to measure and would benefit from bipartisan legislative guidance and monitoring by designated agencies. If partisan disagreement surfaces over whether a triggering event has occurred, courts could fulfil their established role as interpreters of the law.
Adaptive AI laws would provide three main benefits. First, they would empower lawmakers to act now to avoid future problems, to be proactive instead of reactive. This would occur without sacrificing flexibility, because legislators could always amend the adaptive rules in response to technological developments or social change. Second, the structure of adaptive regulation would encourage lawmakers to think more deeply about the different possible paths that AI could take, which one hopes would lead to better policies. Third, adaptive AI laws would give AI labs a more stable regulatory framework, creating legal clarity by telling labs in advance how the rules will change depending on their actions.
This wouldn’t be the first time that legislatures have adopted laws that activate only under certain scenarios. For example, states have passed trigger laws contingent on developments related to abortion, Medicaid and rent control.
AI labs have already made voluntary if-then commitments, pledging to enhance safety measures once their models reach particular capabilities. Yet these commitments are non-binding, far from universal and touch only on safety, not on how AI will affect society more broadly.
We can reasonably imagine the different effects AI might have on society, but we can’t predict which path it will take. Adaptive AI laws would let us think through how to regulate the technology now, without delaying until it’s too late and without hurting innovation by having the rules kick in immediately. To manage the potential AI revolution, law needs one of its own.