Artificial intelligence has rapidly embedded itself in daily life. At home, large language models are used to plan holidays, draft greetings card sentiments and diagnose ailments. At work, they are writing emails and conducting analysis. In schools and universities, students are using chatbots to research and write essays. There are now reportedly around 700mn weekly active users of OpenAI’s ChatGPT globally. AI hype got another boost this week as blockbuster earnings reports from Microsoft and Meta drove the companies to record valuations and included further hefty commitments to invest in the technology. Google also began rolling out “AI mode” on its search platform in the UK.
The potential benefits of widespread AI use are enormous. By speeding up routine tasks, the technology can free up leisure time or allow busy people to devote themselves to more involved activities. Its ability to process vast amounts of data also means it can accelerate research and development, and expand human knowledge. It has already enabled significant progress in brain mapping and mathematical reasoning.
However, the explosion of instant, easy-access, AI-driven answers has potential downsides, too. A particular concern is “cognitive offloading”: the idea that frequently outsourcing mental tasks to smart technology can cause our memory and problem-solving skills to atrophy. One example is the “Google effect”: research has already found that individuals can come to depend on search engines as a source of knowledge rather than remembering details for themselves. The risk with powerful AI chatbots is that, if overused, having the bulk of our writing, analysis and creative tasks done for us may mean we engage in less reasoning over time.
The nascent studies into AI and human cognition are not without their flaws. But some do echo these concerns. Research published by MIT’s Media Lab in June, which divided 54 participants into groups, found that those who used large language models to write essays “consistently underperformed” against those who did not “at neural, linguistic and behavioural levels”. Over several months these users also grew lazier, often lifting wads of AI-generated text verbatim. Another academic study, published in January and drawing on interviews with 666 participants, found “a significant negative correlation between frequent AI tool usage and critical thinking abilities”.
Further research is needed to help everyone better understand the effects of AI. But it is still worth heeding the warning signs. After all, the harms from otherwise positive technological advances, such as the internet and social media, have often revealed themselves only over time. And given our proclivity for simple answers and solutions (also known as “cognitive miserliness”), a few guardrails could be put in place to optimise AI use.
One priority must be to protect critical thinking in education. Widespread access to fast information raises the premium on our ability to question and evaluate AI outputs; teaching should strengthen these skills. Second, as AI coaches suggest, users should be encouraged to see the technology as an assistant, not an omniscient bot. It is, after all, not free from “hallucinations”, misinformation or bias. Such awareness is especially important as AI is increasingly used for things such as political advice and therapy. Finally, developers could in some cases nudge AI models to respond with questions and options, encouraging users to do more of their own deliberative thinking.
AI is at its most powerful when it is a collaborator, not a crutch. To avoid over-dependence, it seems best to be a discerning user of chatbots, rather than a passive consumer.