Probably the biggest challenge facing faculty in the classroom today — and there are many — is generative AI. I’m sure you’ve seen discussion of it everywhere. It’s invading every app, device and experience whether consumers want it or not.
But it’s particularly front of mind this week in education, because MIT scientists released a first-of-its-kind study that looked at how large-language-model (LLM, which is more accurate but less sexy than “AI”) tools like ChatGPT affect students’ brains when they use them to help write essays.
The study shows what any English instructor in the world would have told you for free if they could get you to stand still long enough: using ChatGPT turns student brains into pudding. Sure, the MIT paper calls it “accumulation of cognitive debt,” but that’s what they mean. Your brain on ChatGPT is no longer really braining.
Students attempting to avoid doing the work they’ve been assigned in the classroom is nothing new, of course. Academic dishonesty has been around as long as, or maybe longer than, the academy itself.
My previous role at a four-year college in Missouri included work as the primary enforcer of academic integrity. In practice, that meant I spent a lot of time meeting with students who had committed various violations of the academic code of conduct. Some violations were clever (tiny, tiny handwriting on the inside of the label of a water bottle). Some were…not so clever (copy-paste from Wikipedia without changing the color of the links, let alone taking them out). In most cases, my role was to remind the guilty students that it would have been less effort and headache to just do the work — or admit that they couldn’t do the work and ask for help.
Nothing has changed since those years-ago conversations, except that any friction between the student and dishonesty has been completely eliminated. ChatGPT and other LLMs are actually marketed as tools that can help students with homework. But any marketing that suggests they can help students with learning should be considered a lie.
As with any other disruptive tool that’s come along in the past 100 years — the calculator, the internet, Wikipedia, etc. — LLMs are garbage-in, garbage-out machines. It just seems to me that the garbage-out is stinkier with this machine.
Of course there are legitimate uses for LLM tools, and we have an obligation to teach our students how to use them professionally and ethically so they’re prepared to use them in the workplace.
But you can imagine how disheartening it is as an English or Communications Studies instructor to read paper after paper that’s obviously not the work of your students. There’s the casual disregard for the value of the subject to which the faculty member has dedicated their professional life, of course, and the implied insult that the faculty member is too dumb to notice or too lazy to care about blatant cheating.
But I find, in conversation with my faculty, that the thing that bothers them the most is the way the cheating students are cheating themselves. “What’s the point of going to college if you’re just going to cheat your way through it?” is a common question.
And there’s more to unpack in answering that question than I maybe have space for in this column (or a year of columns). But I do think it’s very much worth thinking about what it means for so many students to completely devalue their own learning, or see college as purely a means to an end and not a worthwhile experience in its own right. Who benefits when students — or people in general — outsource their thinking so readily?
One thing’s for certain: it isn’t the students.