How the internet can rebuild trust - FT中文网

Algorithms and generative AI models that decide what billions of users see should be transparent
The writer is co-founder of Wikipedia and author of ‘The Seven Rules of Trust’

When I founded Wikipedia in 2001, pioneers of the internet were excited by its promise to give the world access to truth and connection.

Two decades later, that optimism has curdled into cynicism. We scroll through feeds serving up news we no longer believe, interact with bots we cannot identify and brace for the next synthetic scandal created by fake images from artificial intelligence.

Before the web can move forward, it must remember how it earned trust in the first place.

The defining difference between web 1.0 and the platforms that dominate today is not technological sophistication but moral architecture. Early online communities were transparent about process and purpose. They exposed how information was created, corrected and shared. That visibility generated accountability. People could see how the system worked and participate in fixing its mistakes. Trust emerged not from perfection (there was still plenty of online trolling, flame wars and toxicity), but from openness.

Today’s digital landscape reverses that logic. Recommendation algorithms and generative AI models decide what billions of users see, yet their workings remain opaque. When platforms insist their systems are too complex to explain, users are asked to substitute faith for understanding.

AI intensifies the problem. Large language models can produce fluent paragraphs and convincing deepfakes. The tools that promised to democratise knowledge now threaten to make knowledge unrecognisable. If everything can be fabricated, the distinction between truth and illusion becomes a matter of persuasion.

Re-establishing trust in this environment requires more than fact-checking or content moderation. It requires structural transparency. Every platform that mediates information should make provenance visible: where data originated, how it was processed, and what uncertainty surrounds it. Think of it as nutritional labelling for information. Without it, citizens cannot make informed judgments and democracies cannot function.

Equally important is independence. As AI companies fight for dominance, the temptation to embed bias — commercial, political or cultural — into training data will be immense. Guardrails must ensure the entities curating public knowledge are accountable to the public, not just investors.

And we must revive civility too. Some of the best early online spaces relied on norms that valued reasoned argument over insult. They were imperfect but self-correcting because participants felt a duty to the collective project. Today’s social platforms monetise outrage. Restoring trust means designing systems that reward good-faith discourse — through visibility algorithms, community-based moderation, or friction that forces reflection before reposting.

Governments have a role to play, but regulation alone cannot rebuild trust. Trust has to be earned in practice. Platforms should disclose not only how their algorithms work but also when they fail. AI developers should publish dataset sources and error rates.

The challenge of our time is not that information is scarce but that authenticity is. Important aspects of the early internet succeeded because people could trace what they read to another human being, even if the other human being was operating behind a pseudonym. The new internet must restore that chain of custody.

We are entering an era when machines can mimic any voice and invent any image. If we want truth to survive that onslaught, we must embed transparency, independence and empathy into the digital architecture itself. The early days of the web showed it could be done. The question is whether we still have the will to do it again.

Copyright notice: This article is copyrighted by FT中文网. Without permission, no organisation or individual may reproduce, copy or otherwise use this article in whole or in part; infringement will be pursued.
