{"text":[[{"start":5.35,"text":"AI enthusiasts wave off the notion that the technology will lead to mass unemployment. A lot of people once drove horse-drawn carts and made buggy whips, they say. Losing those jobs to automobiles didn’t lead to breadlines; on the contrary."}],[{"start":20.85,"text":"Doomers respond that, in the case of AI, we’re not the drivers; we’re the horses. The optimists’ retort, that horses’ lives got better as they went from work animals to luxury items, is no help. Have a look at what happened to the equine population in the first half of the 20th century."}],[{"start":38.85,"text":"Whatever AI’s ultimate impact on unemployment, this back-and-forth highlights the idea that AI is unlike all the technologies that went before, with greater complexity, greater upsides and greater risks — for labour, cyber security, national defence, mental health and so on. So those controlling it have special responsibilities. Everyone in the AI industry acknowledges this. It is expressed in OpenAI’s “Model Spec” guidelines and papers on the topic by Anthropic CEO Dario Amodei, which lay down guidelines about what AI companies will allow their models to do. "}],[{"start":77.9,"text":"But AI companies and their models will follow one rule before all others: they will seek to maximise returns for their shareholders, up to the limits set by law. When the law of profit conflicts with the company’s internal principles, profit will win every time."}],[{"start":94.30000000000001,"text":"This is not to be regretted. It is the outcome our system of corporate capitalism was intended to create. It has made us free and prosperous by encouraging risk-taking and creativity. And, in most cases, the profit motive and the common good line up beautifully. But as we leap into a new technological age, the old rules of capitalism still apply. Corporations only manage or pay for the economic externalities they create when they are forced to. "}],[{"start":122.30000000000001,"text":"The amounts of money AI has attracted are staggering. The Big Tech “hyperscalers” plan to invest more than $600bn in the space this year alone. AI start-ups raised $73bn in the first quarter of 2025. OpenAI raised $122bn in a single funding round last month. The capital comes from people who demand a high return, and who know the industry will soon need more capital to buy computing power. This ensures that excessively cautious executives will be pushed aside, and sets up an arms race in which prioritising safety will open the way for technological irrelevance."}],[{"start":157.5,"text":"Amodei argues that there is a tension between building AI systems that won’t “autonomously threaten humanity” and staying ahead of authoritarian nations (or is it nation?) that might use such systems against us. "}],[{"start":169.6,"text":"Before that tension comes into play, though, AI company CEOs will have to balance safety and competition. If Amodei or OpenAI’s Sam Altman strike that balance in a way that displeases their investors, they will be sacked. The industry’s sensitivity to revenue growth expectations is extreme. This week, The Wall Street Journal reported that OpenAI had missed internal sales and user targets. The story moved the whole of the Nasdaq, and OpenAI quickly released a statement calling it “clickbait”."}],[{"start":200.45,"text":"When Amodei says that he is “focused day and night on how to steer us away from [AI’s] negative outcomes and towards the positive ones”, I’m sure he is sincere. 
I’m also sure that, when it comes to how the conflict between profit and safety plays out, his words are just noise. The relevant incentive structures don’t care what he is focused on.

This simple observation (that some of the risks created by AI can be managed only by citizens, not companies) leaves hard questions about how to regulate it. Figuring it out will be messy. Some AI companies’ fears about unintended consequences will be realised.

What might good regulation look like? Horse anecdotes aside, it should not try to protect specific job categories, which always ends in paying people to be unproductive. It should match specific regulatory tools to specific harms (physical, digital, psychological, financial) rather than taking the form of a single monolithic law. On the liability side, it should take seriously the example of how other useful but inherently dangerous products, such as explosives, are treated, and it should rethink how agency law applies to non-human agents. It should emphasise liability rather than companies’ duty to warn. Investors’ skin needs to be in the safety game.

At the outset, though, the key is to reject any suggestion that this product is different, somehow too complicated for citizens to have a say in. AI is new; capitalism is not.