When you teach the same course multiple times, you get used to its rhythms and its tics: which types of questions will trip students up, which topics are boring, and at what point in the semester students tend to check out. But this past spring, the normal patterns of CS 186 were completely disrupted.
From my first lecture, something seemed different about the course. Office-hour queues were at an all-time low, I was fielding far fewer questions than normal, and attendance at my lectures was as sparse as I had ever seen. Meanwhile, student traffic on our class forum had plummeted. In other words, nobody seemed to have any questions or need any help with anything, no matter how complex it was.
But people kept getting perfect scores on their coding assignments anyway.
Maybe I just had an unusually brilliant batch of self-sufficient students. But that didn't square with the (handwritten) exam grades, which came in at an all-time low, with the average score 15% below normal.
Instead, after talking to my students, I confirmed something far worse: Many of them were using AI tools like ChatGPT to finish their assignments. That let them complete the homework, but they weren't actually learning what the assignments were designed to teach. So when the exam came around and the chatbots were unavailable, they found themselves out of their depth.