Estimated reading time: 4 minutes, 5 seconds
Whenever someone says, “I used ChatGPT to…” I’m now irked and annoyed. I’m like, “Really? You didn’t care enough to make the effort to…”
It started with the US senator who used ChatGPT to write a speech. Really? Really? You really don’t care enough to write actual words about an actual cause you care about? I know, I know, they have speech writers, but still. I assume those human speech writers actually understand the issues and the stance of the senator.
It happened again with two recent events. Someone used ChatGPT to write part of the commencement address they were giving. And I’m like, “Really? You couldn’t take a few minutes of your time to think of how you might want to inspire young people who are about to graduate and go out into the world? Is this how we inspire them?” And then again, even more so, when I read the news of how the Michigan State University DEI office [correction, I was told: It was not the Michigan State University DEI office that did this. It was the Peabody EDI Office at Vanderbilt University] wrote a message in response to the recent shootings using ChatGPT. Now that was low. Like really low. You’re the goddamn DEI office. You’re the folks who are supposed to care about diversity, not perform it. This is maybe the lowest level of performing equity and care.
Here’s what I’m thinking. When someone says they’ll let the machine do it because “they don’t have time”, that does not literally mean they don’t have time. It means that time is a limited resource, and “don’t have time” means that this thing is “not a priority”. I’ll delegate to a machine because “I don’t care”. It does not deserve my personal attention; it does not deserve the time it takes to stop doing other things in order to do this thing.
Whenever I hear of AI tools that will write an email for you, I’m thinking, I understand why we don’t care about a lot of our email, but I assume that behind any email we read/write are promises to do actual work beyond what is written. So don’t we need to stop and think about that first?
I think I totally get why I’m OK with automating something like images, because art is not my thing, and I sometimes just need something quick that I don’t need an hour of photo editing to do. It’s not going to be perfect, and I “don’t care”. But it’s not going to be as meaningful as what a thoughtful graphic designer would give me, if I was trying to get an image about something I deeply care about.
Whenever I hear a faculty member ask if they can use AI to grade student work (and people have been doing that for a long time, btw; this is not a new development), I’m thinking, “Really? Is what you ask your students to do so unimportant to you that you don’t care to see it, give feedback on it, and assess it?” When someone asks me this, I want to ask, “Don’t you want to change your assessments to ones you would care to read and assess?” I know this is not always possible, so I rarely say it aloud.
And so, I truly do think that if we end up in a situation where students are constantly taking shortcuts using AI, it’s just an expression of “I don’t care enough about this to spend time on it” – and we should do better in our education so that students care enough to want to do things that are hopefully authentic, meaningful, and relevant to THEM.
It’s like, if I’m traveling to a country for two days and I don’t have time to learn the language, yes, please, give me some translation AI! But if I’m considering marrying someone who has a different native language that I don’t speak and I want to connect with them deeply, I’m not going to automate that, I’m going to want to learn that language.
We don’t automate what we care about. So I want to ask you: what do you care about? What do we hope our students will care about? Whatever the answers are – don’t automate those. Before we start automating a thing, we should ask ourselves: do we care about that?
And on another note, I have been using ChatGPT less and less (only during workshops where I’m showing people like faculty what it’s capable of, because they need to know, or to test something). Because every time I use it, I feel like it is a sign that I care less about the people in the global South whose mental health was harmed by the process of making this AI appear more ethical to us as end users.
Feature image made by me on Canva.