Many argue that using AI for repetitive tasks saves time and frees up cognitive space for creativity or so-called “higher-level” thinking. However, using AI for everyday tasks disengages people from the process and instead incentivizes them to do less.
“It’s like you now have someone working for you to get rid of the clutter,” said Luke Hebert, a third-year PhD student in cell and molecular biology. “Now you can think ‘higher level’ and let it do a lot of the grunt work.”
Hebert’s use of AI is commonplace among STEM students and is encouraged by research advisors.
“I’m using a large language model (LLM) to either generate ideas for code (or) sometimes generate drafts for code,” Hebert said. “(To) look over code that I’ve written or tweaked, troubleshoot errors if they’re too complicated or I don’t have time.”
Using technology to reduce the cognitive effort required to complete a task, like writing code, is called cognitive offloading. This behavior has been observed for all of recorded history.
In 370 BCE, Plato recorded Socrates’s warning about the invention of writing: “This discovery of yours will create forgetfulness in the learners’ souls because they will not use their memories; they will trust to the external written characters and not remember of themselves.”
This sentiment is echoed today, nearly two and a half millennia later.
“In a way, certainly (using AI) has made me forget. If I had to sit down and take an exam and write Python script or something, with no access to the internet, no access to large language models, I would do poorer now,” Hebert said about coding.
Is downscaled memory worth the boost to productivity? Even among researchers in similar fields, the answer changes based on the task, the goal and the process.
“A big part of your job as a graduate student is to learn and get better at things, and the only way (to do that) is by doing them repeatedly,” first-year graduate student Ira Zibbu said. “When you offload that work to the LLM, it robs you of that opportunity to practice.”
While it’s a good sign that our scientists-in-training are aware of the impact of generative AI on learning, that awareness may not extend to, or even be encouraged in, other sectors. For many workers, AI is no longer optional.
Sixty-seven percent of companies worldwide have adopted generative AI to create content, and over half of US adults use models like ChatGPT in their professional and personal lives.
If AI delegation allows people to complete the same goal with less engagement in the process, then it’s likely not to lead to an increase in “higher order thinking,” but rather to less thinking overall. This follows the law of least effort, in which people prefer to work less for the same reward.
Personally, I’ve experienced the law of least effort when it comes to using AI, as I’m sure many users have. Tools like SciSpace and ResearchRabbit have made it incredibly easy to find research papers without digging through library catalogs or reading every new journal issue. However, because I have a tool that can immediately spit out resources, I am less engaged in the literature search. I don’t have to invest as much time and effort as the scientists before me did.
The argument that LLMs will boost human potential by taking on mundane and procedural tasks, freeing humans to focus on more interesting and fulfilling work, is idealistic and ignores our innate predilection to do less. The tradeoff is not always obvious, but every AI user should weigh the cost to personal ability against the gain in efficiency when deciding what to delegate.
Kate Windsor is a PhD student in molecular biology from Austin, Texas.