On the Ethics of Using AI in Survey Responses

Estimated reading time: 2 minutes, 22 seconds

I wanted to reflect on the ethics of using AI in survey responses. I was recently analyzing a survey where about 1% of the responses to a particular question looked like they’d been written with AI. How do I know this? Most responses were 1–3 sentences, around 1–5 lines long, whereas these were, like, 15 or 20 lines long. They were overly structured in a way that was both overly comprehensive (on a question that asked people to offer their opinion on priorities) and at the same time overly generic, with one of those AI-looking concluding paragraphs to boot. It is remotely possible that some people really took the time to think of every possible response to the question and put them all in numbered form, but I think that likelihood is reaaaaaaaaally remote. My almost-13-year-old could tell it was AI right away.

And I was annoyed at this. More annoyed than when students use it inappropriately, because these were adults, educators, people who were pretty AI literate, using it in a way I consider inappropriate. It didn’t look like they had written a rough draft themselves, in their own language, and then asked AI to improve it. It looked too comprehensive for that.

And then I realized why I was annoyed by this. I felt like, when we asked this question, we wanted your opinion, and if you don’t have an opinion, please just leave the question blank or say “no opinion” (do we need to explicitly say that in future surveys?). It’s a survey. There are no high stakes here, no need to appear to have an answer to everything. But more than anything, if I wanted to know what ChatGPT had to say about that, I would have asked ChatGPT, I really wouldn’t have needed to do a survey. ChatGPT gives a possible answer to the question, but it does not give your opinion. In the extreme case of everyone using ChatGPT to answer, what kind of data would we end up with as an answer to our survey questions? A compilation of ChatGPT upon ChatGPT? So basically, some kind of jumble of what an average person would say, over and over and over again, when what we actually want in a survey is what each individual would think and say?

So, I mean, I’ll take the responses and analyze them and assume that, at least, the person agrees with what ChatGPT answered, but I mean, really? Really?

What do you think? What could I be missing here? Obviously, if this keeps happening, I’d have to question the quality of my survey questions: are they really so not worth answering? But also, why would people do that? It takes more time than leaving it blank, you know?

Image of a clipboard from chinspec on Pixabay

One thought on “On the Ethics of Using AI in Survey Responses”

  1. I agree that we should exclude any AI-generated survey responses during the data cleanup process. We could add a note to each survey asking responders to skip questions instead of submitting AI responses, but that creates a diplomatically awkward situation. We’d be asking colleagues for a favor while attaching conditions — as if it were oddly necessary to tell the one dishonest neighbor that we don’t need their fake favor. I don’t see a good solution here. I’m annoyed by the unprofessional killjoys who create such situations.

    PS: I once received a very long email in which a dear colleague gave me 20+ suggestions for improving a faculty training program, each of which would need far more time and resources than the program could afford. In a follow-up conversation on the same topic, the “writer” showed no clue about what he had “written,” and that made me lose half of the trust I had in him. In the past two years, I have received writing, ranging from thank-you notes to journal article drafts, that was clearly written by AI. And every single case made me want to vomit, not only because the content didn’t make much sense but because I found the ethical irresponsibility of the sender baffling. If the writer of an email took 60 seconds to tell me how I should invest hundreds of hours in a program, then I am not going to spend a minute of my own time reading or responding. In fact, I didn’t respond to the lengthy thank-you note either. I am not stupid, and I have more self-respect than to engage with people who are dishonest enough to copy-paste machine vomit. It is one thing to “use” AI tools to “improve” one’s writing, but to “generate” content without considering all the implications, exactly as if the sender had written that email themselves, is offensive to me. I will only read/respond to what feels like AI-assisted writing if, in the big picture of the interpersonal or professional communication/relationship, the writer takes intellectual, emotional, interpersonal/social, and ethical/professional responsibility for their words exactly as when they write them themselves. Here’s what I wrote recently along the same lines (in case you find time to skim through it): https://myrepublica.nagariknetwork.com/news/three-thresholds-for-ai-use/
