If some of us are talking about transparency and encouraging students to disclose how they have used AI in their writing process, it makes sense that we develop a practice for doing so and agree upon it, right? Rebecca Hogue, in a comment on my last post, wondered if we are anthropomorphizing the AI by doing so, so here is my suggestion.
The author becomes a hybrid co-author. So if it were me, it would be an in-text citation like (Bali & ChatGPT, Year), or (Bali & Sudowrite, Year). And the end citation in the references would be: Bali & ChatGPT (2023, Jan 26). Prompt: Give me 10 titles for a workshop on gender in education. chat.openai.com
What do you think?
I also think I want students to reflect on their process a bit more, not just cite it. And get smarter at working with the hybrid brain/writer concept.
Funnily enough, Twitter separately just announced co-tweeting, which I think inspired this co-authoring-with-AI citation approach.
Thoughts?
Header photo of pug dog with wings from Pixabay.com by Sarah Richter
UPDATE
So I posted and tagged some people on Twitter to get their feedback on this, and some of the tweets are now appearing as comments directly on the blog, but here is the Tweet anyway in case some of them don’t end up here directly:
My take on this now is as follows:
- I am convinced that we should not cite ChatGPT as a co-author, based on the Nature article as well as the opinions of many people that ChatGPT is not a sentient being and has no intent. It also does not cite its own sources, though that alone is not a reason not to cite it, imho;
- I am convinced that acknowledgment alone is not enough, because people may copy/paste entire sentences or paragraphs they did not themselves write, and in general when we do that, we cite our “sources”;
- Something like “(Bali, generated by ChatGPT)” or “(Bali via ChatGPT)” seems plausible to me, with the citation at the end showing the prompt we used to get this text. Unfortunately, the text we get is neither replicable nor does it have a stable URL people can find again, which is problematic. But I guess (apologies for anthropomorphizing… humanifying) it’s like when you talk to a person who is knowledgeable about something, then meet them again the next day, ask the same question, and get a slightly different answer. You can cite an informal exchange or conversation that no one else will be able to find online or in print, but it existed, you know? A “conversation” or exchange happened. Of sorts. You took words or ideas from it that were NOT originally yours. You used them. You gotta indicate where you got them.
- Content vs. Process. The analogies to spell checker or Photoshop as tools and such are not relevant here. This is actual text/content you got as output from this tool that was not your original text or ideas, not a tool that helped your process only. You did not write this, you should cite this. If you paraphrase it after being inspired by it, you should still acknowledge it, imho. Unless it’s only very very loosely influenced by it.
I fully agree with Laura, Alan, and others that the act of narrating our work will become more important, especially in education, to help students think through their own contributions in the midst of AI use. Even back when we were mostly just using AI via Google search, I would often ask students what search terms they used in which search engine and how they chose what to input into the system.
My daughter, at 11, already realizes this: if she wants information, it’s better to use a search engine and follow the source, check its credibility first, then decide what to use, because ChatGPT may fabricate information. However, I think students may still go to it to write things up. If someone had written it on the web before 2021, the likelihood ChatGPT can talk about it and sound reasonably intelligent is high, not because ChatGPT is intelligent, but because all it is really doing is synthesizing and paraphrasing things already written by humans. And that’s not really that hard when you think about it, right?
The problem with all of this, for anyone who’s saying “I’m not worried about this,” is that the way ChatGPT writes sounds remarkably like a B or B- student with good English but a very superficial understanding of a topic that is new to them; if a freshman student at my university wrote something, it would look remarkably similar to ChatGPT output. So of course there is the direction of trying to make our assessments more authentic and meaningful and all those things that make assessments GOOD… but we need to acknowledge that:
a. This is more difficult for some teachers who are less agile/innovative/aware than others
b. Students will probably use the AI no matter what and we need to have conversations around the extent to which this can be done in ways that promote learning versus harm it… right? These conversations need to include the learners themselves.
(and I just realized this is an entire new blogpost, so I’m actually going to copy/paste this as a new post)
Tweeted already, but consider the journal Nature’s position that it’s problematic (or dangerous) to give credit to something that is opaque about its own sources: https://www.nature.com/articles/d41586-023-00191-1
“First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility…. Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.”
I cannot see using ChatGPT’s output wholesale enough to warrant giving it credit.
Looking more broadly, I agree with my colleague Jonathan Portiz’s assertion (https://podcast.oeglobal.org/2022/11/16/voices-43/):
“I think that giving credit to an AI, is like Ansel Adams giving credit to his camera for the copyright for his photography, or Norman Rockwell giving credit to his brush. It’s a tool, a pretty impressive tool, but just a tool”
Or if I create a mashup in Photoshop, do I give it authoring credit? If I use a beat generator for a drum track in a mix I do in Garage Band, which software gets co-author credit? If I write my paper using some kind of outlining software or something that provides alternative wording suggestion…
Citation is not the only means of being transparent. If an artificial intelligence tool played a major part in the writing, I would include that in a note/acknowledgement section.
But there is no absolute answer, and I certainly am not going to ask ChatGPT for its opinion, because it does not have one, nor does it have any stake in what it spits out. It’s merely a highly advanced autocompletion engine.
I’ll say what I said on Twitter… I don’t think the AI is a tool the same way a camera or Photoshop is. Cameras and Photoshop don’t create content; they allow you to capture and edit your content. But the AI wrote some text. You used some of it as is. There’s generation there. Not sentient or responsible generation of content, but it is not content you made. So what do we call it? Maybe not citation. Anna Mills suggested maybe citing it as “(Bali, generated through ChatGPT)”. I am good with that
(Acknowledgement at the end isn’t enough… need to highlight sections. Could be done subtly, I guess)
to me, the question of citation practice is always about what is HELPFUL to readers.
so I could see citing the tool in a bibliography, I guess, and just leaving it at that
…just IMO my feeling is that all this AI stuff is distracting from existing, persistent, more important problems…
(The article is lovely and clear – it is the accountability angle that caught me)
“attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”
It’s still more than process, though, is my sticking point here
I think it is both distracting and… revealing the underlying problems.
For us as #facdev, so much of what we’ve been recommending to improve learning value of assessments would naturally be difficult for AI to pretend to do – but it’s also more work for teachers
likewise with other help: if they get a friend or relative to proofread with them, or copy edit…my students would often credit each other just saying thanks to so-and-so for ideas they got from feedback, etc.
writing is NOT a solitary thing, and should not be. we all need help!
Thank you! This is such an important distinction. To me a “source” implies some person stands behind the words. An author has to have intent and understanding.
I do think that labeling any AI-generated text as such is key. However, to me that’s different from citation of a source. We need a new format and language for it. I wouldn’t use the language of “author,” even hybrid author, since that to me implies intent and understanding.
Curious what you think @MiaZamoraPhD @EnglishOER @rjhogue @cogdog @andrewdempsey @JasminaNajjar @OnlineCrsLady @HAKretschmer @hwrightly @yasser_tammer
That’s interesting–it suggests that the human author is the first origin. I’m not sure the human prompt makes that human responsible for the output… What about “generated by ChatGPT”?
Yeah… (Bali, generated by ChatGPT) I am good w that
I remain conflicted on this, and agree with Alan’s exploration of the reasons why giving credit to a data scraping machine that won’t cite its own sources is problematic. And yet, Maha, your inquiry into a way forward makes sense, too.
Apologies Yasser that I didn’t add alt text to my ChatGPT screenshot. Would you advise adding the whole text from the chat extract, or just an indication of the subject matter?
I have previously asked ChatGPT for its view on citation of AI-generated text, and this is what it replied.
If the allowed character count for alt text gives you the space to add the whole text, that would be fantastic.
I can’t add the whole text, but here’s most of it. Sorry again, and thanks for the advice!
I agree 100% that acknowledgement is key, but I would need a lot of convincing that acknowledgement side by side with the author is the way to go. As a poor speller I utterly depend on spellcheck, but I’d no more acknowledge that AI than the chair I sat in to type something.
It feels a bit close to personifying it, to me. Maybe (Shaw, via ChatGPT, 2023) or something. Would be tempted then to include the query/convo as a contextual appendix…
Where I use citations in text it is to acknowledge the ideas, research, opinions and thoughts of others. LLMs can’t do any of those things so I still think it’s an elevation. I am drawn to the caveat and reparative statement idea mentioned in @PeterBryantHE ‘s recent post
I’ve used ChatGPT only a few times and it feels rather like Wikipedia without the references. Output is better targeted than Wiki but vetted less well, and never attributed. We don’t cite Wiki because it is a non-creative source. We should not cite ChatGPT either
I agree, but think that, in the short term at least, it’ll be prudent to encourage openness and discussion about what tools have been used, perhaps via acknowledgement. But again, do we ask for acknowledgement of Grammarly use? It will be fascinating to see what practices evolve
I’m not sure why it would be a co-author…
it could be a source, sure, especially if someone quotes from it directly rather than paraphrasing, just as with Wikipedia.
but sources aren’t authors. sources contribute to a work, and are credited via citation & bibliography
YES, that’s just how I feel about it. 🙂
Sudowrite has NO understanding of what I’m doing.
and what I usually take away from it is more an idea or maybe a phrase; that’s why it feels like a thesaurus to me. but way more rich than traditional thesaurus (& I like thesauruses!)
I like the angle of opacity about its own sources but am not sure that alone is a reason not to cite something (or acknowledge it in some clear way?)
(The pug pic was sooooo for you, Alan, you know the story – and I cited it as you like, too, because I was on my phone haha)
I’m a few weeks behind, I think, in thinking about this! I think what prompted me today is my semester starts next week and I am giving a local workshop Sunday… so I wanna have ideas for how to cite
Oh, interesting! I do sometimes credit photo editing tools when I edit a photo! In my post before this one I credit Canva and a person on Pixabay. But I realize now I can’t get ChatGPT to replicate what it says on someone else’s computer, right? But it’s not just a process tool
But what if you take the text from it verbatim? Also, ideas. We credit sources for ideas as well as actual text.
If you create an image with a visual AI, wouldn’t you credit it? Even when using Canva recently, I credited it because it wasn’t just a crop; I overlaid Canva elements
To me, a “source” is… literally this is where I got this text, idea, image. The reality is, I copied this text from an interaction w ChatGPT. It’s something more than an acknowledgement because it wasn’t just a process use (like editing an image) but actual content?
So what I am saying is, I hear you it is not a person with intent behind the thinking that produced the words. But if you take the ideas or words from it, it is not a “process tool”, it is a tool that gave you actual content and the acknowledgement needs to have some other form.
I get you on the thesaurus. My daughter and I were saying something similar. ChatGPT just paraphrases from the data it was trained on… but it may or may not be accurate about anything. I like its writing style, but someone who likes writing would rephrase in their own words
Well that was the labeling part. So “Bali via ChatGPT” as a way to label it? Do we wanna know which AI?
EXACTLY. which is why to me @biblioracle‘s book is so essential: he talks about better writing experiences for students but he ALSO talks about adjunctification and the admin forces that are refusing to commit the resources (time, money, etc.) that we need
amazon.com/Why-They-Cant-…
Just giving a personal opinion.
I am torn over whether we should account for the efforts of ChatGPT or not. If the intent is to develop a sense of coherence in a piece of writing, I think it can help to some extent. In research, it’s a bit difficult to consider sources suggested by an AI tool.
But it contributed to the work produced by the student, so why don’t we consider it a co-author in a research project or a piece of writing in general? As a student, I think it would decrease the controversy around the use, or misuse, of such tools when developing a writing process.
@Bali_Maha Alt text, please.
With this being said, can we consider it for now as a hybrid author? This would be until we find a new model for documenting AI tools.
Let me take the argument to another level: imagine that I had used Elicit; would you still acknowledge the tool’s help? So in that case: Bali via Elicit.