On Citing Our Hybrid Brain/Writing #chatGPT #openAI


If some of us are talking about transparency and encouraging students to disclose how they have used AI in their writing process, it makes sense that we develop an agreed-upon practice for doing so. Rebecca Hogue, in a comment on my last post, wondered if we are anthropomorphizing the AI by doing so, so here is my suggestion.

The author is a hybrid co-author. So if it were me, the in-text citation would be something like (Bali & ChatGPT, YEAR) or (Bali & Sudowrite, YEAR), and the reference-list entry would be: Bali & ChatGPT (2023, Jan 26). Prompt: Give me 10 titles for a workshop on gender in education. ChatGPT.open.ai

What do you think?

I also think I want students to reflect on their process a bit more, not just cite it. And get smarter at working with the hybrid brain/writer concept.

Funnily enough, Twitter separately just announced co-tweeting, which I think is what inspired my co-authoring-with-AI citation approach.


Header photo of pug dog with wings from Pixabay.com by Sarah Richter


So I posted and tagged some people on Twitter to get their feedback on this, and some of the tweets are now appearing as comments directly on the blog, but here is the Tweet anyway in case some of them don’t end up here directly:

My take on this now is as follows:

  1. I am convinced that we should not cite ChatGPT as a co-author, based on the Nature article as well as the opinions of many people that ChatGPT is not a sentient or intentional being. It does not cite its own sources either, but that, imho, is not a reason not to cite it.
  2. I am convinced that acknowledgment alone is not enough, because people may copy/paste entire sentences or paragraphs they did not themselves write, and in general when we do that, we cite our “sources”;
  3. Something like “(Bali, generated by ChatGPT)” or “(Bali via ChatGPT)” seems plausible to me, with the citation at the end showing the prompt we used to get this text. Unfortunately, the text we get is not replicable, nor does it have a stable URL people can find again, which is problematic. But I guess (apologies for anthropomorphizing… humanifying) it’s like when you talk to a person who is knowledgeable about something and then you meet them again the next day, ask the same question, and get a slightly different answer. You can cite an informal exchange or conversation that no one else will be able to find online or in print, but it existed, you know? A “conversation” or exchange happened. Of sorts. You took words or ideas from it that were NOT originally yours. You used them. You gotta indicate where you got them.
  4. Content vs. Process. The analogies to spell checker or Photoshop as tools and such are not relevant here. This is actual text/content you got as output from this tool that was not your original text or ideas, not a tool that helped your process only. You did not write this, you should cite this. If you paraphrase it after being inspired by it, you should still acknowledge it, imho. Unless it’s only very very loosely influenced by it.
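To make the convention in the list above concrete, here is a minimal Python sketch of a citation formatter. This is purely my own illustration of the proposed format, not an established citation standard; the class name, fields, and the example values (drawn from the post's own sample prompt) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AICitation:
    """One AI-assisted passage: who prompted, which tool, when, and with what prompt."""
    author: str
    tool: str
    year: int
    date: str    # e.g. "2023, Jan 26"
    prompt: str
    url: str

    def in_text(self) -> str:
        # In-text form suggested in the post: "(Bali, generated by ChatGPT, 2023)"
        return f"({self.author}, generated by {self.tool}, {self.year})"

    def reference(self) -> str:
        # Reference-list form records the prompt, since the output itself
        # is not replicable and has no stable URL to point readers to.
        return f"{self.author} & {self.tool} ({self.date}). Prompt: {self.prompt}. {self.url}"

c = AICitation(
    author="Bali", tool="ChatGPT", year=2023, date="2023, Jan 26",
    prompt="Give me 10 titles for a workshop on gender in education",
    url="ChatGPT.open.ai",
)
print(c.in_text())
print(c.reference())
```

The design choice here mirrors point 3: because the generated text cannot be retrieved again, the prompt and date stand in for the usual locator information.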

I fully agree with Laura, Alan and others that the act of narrating our work will become more important, especially in education, to help students think through their own contributions in the midst of AI use. Even when we were mostly just using AI via Google search, I would often ask students what search terms they used, in which search engine, and how they chose what to input into the system.

My daughter, at 11, already realizes this: if she wants information, it’s better to use a search engine and follow the source, check its credibility first, then decide what to use, because ChatGPT may fabricate information. However, I think students may still go to it to write things up. If someone had written it on the web before 2021, the likelihood ChatGPT can talk about it and sound reasonably intelligent is high, not because ChatGPT is intelligent, but because all it is really doing is synthesizing and paraphrasing things already written by humans. And that’s not really that hard when you think about it, right?

The problem with all of this, for anyone who’s saying “I’m not worried about this”, is that the way ChatGPT does this sounds remarkably like a B or B- student writing in good English but with a very superficial understanding of a topic that is new to them. So if a freshman student at my university wrote something, it would look remarkably similar to ChatGPT output. Of course there is the direction of trying to make our assessments more authentic and meaningful and all those things that make assessments GOOD… but we need to acknowledge that:

a. This is more difficult for some teachers who are less agile/innovative/aware than others

b. Students will probably use the AI no matter what and we need to have conversations around the extent to which this can be done in ways that promote learning versus harm it… right? These conversations need to include the learners themselves.

(and I just realized this is an entire new blogpost, so I’m actually going to copy/paste this as a new post)

39 thoughts on “On Citing Our Hybrid Brain/Writing #chatGPT #openAI”

  1. Tweeted already, but consider the journal Nature’s position, that it’s problematic (or dangerous) to give credit to something that is opaque about its own sources https://www.nature.com/articles/d41586-023-00191-1
    “First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility…. Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.”
    I cannot see using the output of ChatGPT writing wholesale enough to give credit.
    Looking more broadly, I agree with my colleague Jonathan Poritz’s assertion (https://podcast.oeglobal.org/2022/11/16/voices-43/)
    “I think that giving credit to an AI, is like Ansel Adams giving credit to his camera for the copyright for his photography, or Norman Rockwell giving credit to his brush. It’s a tool, a pretty impressive tool, but just a tool”
    Or if I create a mashup in Photoshop, do I give it authoring credit? If I use a beat generator for a drum track in a mix I do in Garage Band, which software gets co-author credit? If I write my paper using some kind of outlining software or something that provides alternative wording suggestion…
    Citation is not the only means to be transparent. If an Artificial Intelligence tool played a major part in the writing, I would include that in a note/acknowledgement section.
    But there is no absolute answer, and I certainly am not going to ask ChatGPT for its opinion because it does not have one, nor does it have any stake in what it spits out. It’s merely a highly advanced autocompletion engine.

    1. I’ll say what I said on Twitter… I don’t think the AI is a tool the same way a camera or Photoshop is. Cameras and Photoshop don’t create content; they allow you to capture and edit your content. But the AI wrote some text. You used some of it as is. There’s generation there. Not sentient or responsible generation of content, but it is not content you made. So what do we call it? Maybe not citation. Anna Mills suggested maybe citing it as “(Bali, generated through ChatGPT)”. I am good with that

  2. to me the question of citation practice is always about what is HELPFUL to readers.
    so I could see citing the tool in a bibliography, I guess, and just leaving it at that
    …just IMO my feeling is that all this AI stuff is distracting from existing, persistent, more important problems…

  3. likewise with other help: if they get a friend or relative to proofread with them, or copy edit…my students would often credit each other just saying thanks to so-and-so for ideas they got from feedback, etc.
    writing is NOT a solitary thing, and should not be. we all need help!

  4. I remain conflicted on this, and agree with Alan’s exploration of the reasons why giving credit to a data scraping machine that won’t cite its own sources is problematic. And yet, Maha, your inquiry into a way forward makes sense, too.

  5. Apologies Yasser that I didn’t add alt text to my ChatGPT screenshot. Would you advise adding the whole text from the chat extract, or just an indication of the subject matter?

  6. I agree 100% that acknowledgement is key but I would need a lot of convincing that acknowledgement side by side with the author is the way to go. As a poor speller I utterly depend on spellcheck, but I’d no more acknowledge that AI than the chair I sat in to type something.

  7. It feels a bit close to personifying it, to me. Maybe (Shaw, via ChatGPT, 2023) or something. Would be tempted then to include the query/convo as a contextual appendix…

  8. Where I use citations in text it is to acknowledge the ideas, research, opinions and thoughts of others. LLMs can’t do any of those things so I still think it’s an elevation. I am drawn to the caveat and reparative statement idea mentioned in @PeterBryantHE ‘s recent post

  9. I’ve used ChatGPT only a few times and it feels rather like Wikipedia without the references. Output is better targeted than Wiki but vetted less well, and never attributed. We don’t cite Wiki because it is a non-creative source. We should not cite ChatGPT either

  10. I agree, but think, in short term at least, it’ll be prudent to encourage openness and discussion about what tools have been used though perhaps via acknowledgement but, again, do we ask for acknowledgement of grammarly use? will be fascinating to see what practices evolve

  11. I’m not sure why it would be a co-author…
    it could be a source, sure, especially if someone quotes from it directly, instead of paraphrasing like paraphrasing from Wikipedia.
    but sources aren’t authors. sources contribute to a work, and are credited via citation & bibliography

  12. YES, that’s just how I feel about it. 🙂

    Sudowrite has NO understanding of what I’m doing.

    and what I usually take away from it is more an idea or maybe a phrase; that’s why it feels like a thesaurus to me. but way more rich than traditional thesaurus (& I like thesauruses!)

  13. Oh, interesting! I do sometimes credit photo editing tools when I edit a photo! In my post before this one I do credit canva and person on pixabay. But I realize now I can’t get chatGPT to replicate what it says on someone else’s computer, right? But it’s not just a process tool

  14. So what I am saying is, I hear you it is not a person with intent behind the thinking that produced the words. But if you take the ideas or words from it, it is not a “process tool”, it is a tool that gave you actual content and the acknowledgement needs to have some other form.

  15. EXACTLY. which is why to me @biblioracle‘s book is so essential: he talks about better writing experiences for students but he ALSO talks about adjunctification and the admin forces that are refusing to commit the resources (time, money, etc.) that we need

  16. But it contributed to the work produced by the student, so why don’t we consider it as a co-author in a research project or in writing in general? As a student, I think it would decrease the controversy around the use, or misuse, of such tools when developing a writing process.
