It’s well known that given the chance, people will nearly always look for a shortcut in making a decision, in order to avoid the more laborious effort required to think things through from scratch. The psychology discipline has various terms for this: “cognitive miserliness,” “cognitive economy,” “satisficing,” “mental inertia,” reliance on “heuristics,” and others. It’s a near-universal practice.
This sounds pretty bad, but we’ll cut our fellow humans a break by recognizing that it’s likely a survival mechanism. In most instances, a “good enough” decision is good enough for whatever situation is confronting an organism, and in many such situations, seconds count. So you’ll probably do better to conserve your resources (like “time and energy available for thinking”) than to risk overthinking every decision. Optimality can be overrated.
Every instructor of intro computer science has probably noticed a similar trend among their students. Computer programming requires choosing from various algorithmic building blocks that a programming language (like Java or Python) provides, and assembling them in a clever way to solve a problem. It’s hard and creative work, since no two problems are exactly alike, and since the building blocks interact in non-trivial ways.
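To make the “building blocks” idea concrete, here is a minimal sketch of my own (a toy problem in Python, not drawn from any particular course): a loop, a conditional, and an accumulator variable assembled into a complete, if tiny, solution.

```python
# A toy illustration of assembling basic building blocks (a loop, a conditional,
# and a variable that accumulates an answer) into a solution for one small problem:
# finding the longest word in a sentence.

def longest_word(sentence: str) -> str:
    longest = ""
    for word in sentence.split():      # building block: iterate over a sequence
        if len(word) > len(longest):   # building block: compare and decide
            longest = word             # building block: remember the best so far
    return longest

print(longest_word("programming is hard and creative work"))  # prints "programming"
```

Nothing here is exotic; the craft lies in knowing that these particular pieces, arranged in this particular way, solve the problem at hand.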
Now an intro CS student who pays attention in class has invariably seen a number of example programs that solve various problems. They’re then given a homework assignment. Ideally, they’ll use the principles they’ve been taught — which the examples helped to illustrate — to succeed in writing their own, fresh solution to their homework. That homework problem will of course bear some resemblance to the example problems they’ve studied, just like a child’s (and even an adult’s) sentences bear resemblance to the previous sentences they’ve heard. But the student’s problem-solving thought process, ideally, will not be “let’s find an example solution from class and see if I can munge it into a solution to this homework.” After all, language speakers don’t say “let’s start with some existing sentence I’ve written down, and try to substitute a word here and there to arrive at the sentence I want to say now.”
The student, however, is tempted to do exactly that. “Cognitive miserliness,” or whatever you want to call it, whispers in their ear: “Hasn’t your professor shown you working programs that do something similar to what this homework calls for? Wouldn’t it be quicker to just copy from one of those and see if you can tweak it slightly and get it to work?”
From lots of experience teaching this material, I can tell you that in most cases, it’s actually not quicker to copy-and-modify an example, even though the student expects it will be. Students tend to underestimate how intrusive the necessary changes will be (and how long it will take to test them) and to overestimate how hard it would be to write the program from first principles.
But this is an aside. The main problem is that even if it were quicker and easier to copy-and-modify, most of the necessary learning does not take place when the student takes this approach. The student may produce a working program, but strangely enough, will not have acquired the skills necessary to produce working programs in general.
To see this, return to the language analogy. Imagine trying to learn (say) Japanese this way. Every time your homework tells you to write a Japanese sentence (expressing something specific like “I always go running alone before dark”), your procedure is to thumb through your list of example sentences looking for the one that most closely resembles what you want to say, make a few changes to its vocabulary, tense, supportive clauses, etc., and write down that modified sentence.
You might get a decent grade on the homework. But now try embarking on some actual Japanese conversation. Oof.
Now as bad as this transmogrify-an-example approach has been for learning to code, things have gotten immeasurably worse since the rise of ChatGPT-like tools. The issue with using AI assistants is the same as with the copy-and-modify-an-example strategy: the temptation to solve the problem without actually solving the problem. Put differently, it’s the false promise that one can get from A to B without actually learning what cognitive steps are required to get there, by offloading the thinking-it-out part to something else (a pre-written example, or an AI tool).
The seductive message is that ChatGPT will be able to write the program for you…or at least give you a start…or at least suggest things…or at least tell you when you make a mistake…or something. Students get the message (whether or not it was ever explicitly stated) that coaxing ChatGPT into producing the program is the “right way” to go about generating code. It’s the quicker and easier path. It would be dumb, students reason, to try to write the program yourself from scratch, no? I mean, who does that? Isn’t that exactly what ChatGPT is for?
And as if it couldn’t get any more tempting, new programming environments (like Replit, a tool used in some of our Data Science courses at UMW) present a bright, loud, front-and-center AI “co-pilot” feature designed to drag the unsuspecting beginner’s eyeballs right to it. My daughter, a DS major who has used Replit for a course, tells me that you actually have to deliberately search the interface a bit to find out how to manually disable the AI. 99% of students will not do that. They will be (mis)led into thinking that their path to programming success lies through that mysterious and centrally-located AI box.
All this is leading to a generation (or more) of students who produce programs they literally do not understand, because they didn’t even write them. They have no idea how the code actually works, only that it does seem to work for the scenario they’re immediately facing (their specific homework problem). But to maintain that code? Adapt it? Debug it for a previously-unseen test case? Perceive its limitations and opportunities for improvement? These are all hopeless tasks.
At this point you may be tempted to say, “well, if the goal is to produce working programs, and ChatGPT lets you do that more quickly, does it matter whether the programmer understands them?” The fallacy here stems from confusing long-term and short-term thinking.
Suppose your goal were to run a marathon. As everyone knows, no matter how many books or video tutorials or personalized trainers or diet supplements you use, you aren’t going to make any progress towards this goal without doing the hard work of actually running, day in and day out. All the tools in the world cannot by themselves get you in shape for a marathon, because getting in shape necessarily means exercising your body.
Giving a novice programmer ChatGPT is, at best, like giving a beginning marathon runner a car. “Look,” you say, “why go through the pain of running 26 miles? In fact, why run at all? With this new contraption, you can just press a couple of pedals and you’ll get to your destination 26 miles away in no time!” Sure. And if the goal is merely to cover 26 miles, a car is a far more efficient way of getting there. It comes down to what you’re trying to achieve. You can drive all day, but you will never get in shape.
The relevant long-term question is: what should our society’s goals and expectations be for the tech-savvy portion of the population? Those trained in computer science are the gatekeepers of technology, who have made tremendous contributions to the rest of the human race. They bear the burden of maintaining, updating, fixing, improving, and transforming this giant storehouse of inventions, as well as imagining what will be possible tomorrow. What base of knowledge and skills do we want this guild of technocrats to have? The ability to comprehend the technical details of these creations and reason critically about them? Or the ability to press buttons on an opaque dashboard, hoping by chance and brute force to produce something incomprehensible that, based on cursory experimentation, seems to satisfy an immediate need?
I do think that there’s a place for the AI co-pilot in the programming workforce. In the hands of a proficient technologist, who not only knows what questions to ask but can separate the tool’s nutty answers from its genuinely helpful advice, it promises to increase productivity. To return to the language example, this is like equipping a competent Japanese speaker with Google Translate (or even a Japanese-English dictionary). They have the base of skills necessary to properly use it, and it will help them venture outside their vocabulary and cut the occasional corner.
But at the learning level, AI assistants are an unmitigated disaster. I’ve seen many confused students struggling with their programming assignments, and it’s obvious at a glance that they haven’t even thought about how to algorithmically solve the problem before them. They’re wrestling with a vehicle breakdown that arose because they were trying to skip the 26 miles entirely. I don’t lay most of the blame on them: they honestly haven’t grasped what programming is. They think the right thing to do is to feed the AI assistant prompts and try to coax it into giving the right answers. Even if they succeed, they will have failed.
— S