So in case you missed it, the wonderful Sarah Elaine Eaton recently blogged about the 6 Tenets of the Postplagiarism Era, which she first mentioned in the final chapter of Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity (2021). I now realize that book is available as an eBook in my university library, so I'll get to reading it soon! For now, I'm responding to the blogpost, which she writes expands on that thinking. I know that in a blogpost, Sarah will not have said everything she's thinking about all these tenets, so I am not claiming that anything I say here is something she has not thought of.
Plagiarism vs Postplagiarism
Just to backtrack for a second, I did look at her book to find how she defines plagiarism and postplagiarism. I think this is really important because a tool like Turnitin.com tends to give students the impression that plagiarism is JUST about copy/pasting text without paraphrasing it, when plagiarism is really about attributing ideas that aren't our own, not just words/phrases/sentences, right? Even the notion of judging plagiarism by counting the number of words/phrases copied without quotes/citation is debunked in Eaton's book (chapter 2).
Eaton suggests that our understandings of plagiarism are complex, and some have defined it as “theft” of ideas/words of others, but some have resisted this framing. She talks about how it is a moral issue, a policy issue, and a teaching and learning issue.
“it is important to expand our ideas about what constitutes plagiarism so that we can ultimately help our students focus on learning in ethical ways in changing contexts.” (Eaton, 2021, p. 22).
In the final chapter, proposing a postplagiarism era, Eaton (2021, p. 222) writes:
“I imagine that global citizens of a postplagiarism world might reject binary notions entirely, shaking their heads at the futility of arguments about good versus bad students or pedagogy versus policing. Those who insist on focusing on dualities might show themselves as less intellectually capable than those who can tolerate the chaos and embrace the complexity of whatever it means to integrate, to become whole, as a learner and as a person.”
(I read the paragraph above after I had written everything that comes after it, but I think it certainly captures the kind of complexity I’m proposing in response even to the tenets Eaton is proposing, and which I’m sure she has contemplated as well).
I’ll relay her 6 tenets and respond to each in turn:
1. Hybrid Human-AI Writing Will Become Normal
Eaton writes that this will become so prevalent and normalized that “trying to determine where the human ends and where the artificial intelligence begins is pointless and futile,” and I somewhat agree, but somewhat don’t. I agree, first of all, that every thought and idea we have and express is influenced by interaction with others – whether they be human or non-human others. Even before we were prompting an AI text generator to write things for us, we were using search engines to help us find things, and often being influenced by that output without necessarily taking it verbatim.
I know that when I’ve cited Twitter conversations as influencing my thinking (such as here) some have been pleasantly surprised. I think that kind of practice is about the spirit of attribution/respect for others’ work, even when the “source” is not usually considered “academic” – because I feel that otherwise, there would be exploitation, right?
What concerns me about the use of AI text generators is that they will produce original combinations of words, not “lifted” or “stolen” verbatim from somewhere, BUT those text generators got those ideas from the loads of data they’ve learned from, and they’re not attributing where it all came from.
Part of me is more concerned about where the AI text generator got its ideas. I know how complex machine learning is, and that asking for this kind of transparency or explainability in a machine-learning type of AI is an almost impossible ask, if not completely impossible (though some tools, like Perplexity, do offer references for whatever they produce). That matters for verification, but also as a form of respect for the sources that influence us.
Another part of me feels like it’s important for students’ metacognition that they remain aware of how they’ve integrated AI into their thinking, and this is why I suggest an approach of “transparency“, not just of students disclosing where they got some ideas/text from in the sense of attribution, but also of reflecting on where they used AI in their process and why that was helpful or how it needed tweaking, etc. Perhaps in a future where this is common practice, the self-reflection may become less important, or at least as learners get older (maybe they’ll start doing it in school but by the time they’re in college it is second nature?).
2. Human Creativity is Enhanced
Eaton suggests in her blogpost that “Human creativity is enhanced, not threatened by artificial intelligence. Humans can be inspired and inspire others. Humans may even be inspired by artificial intelligence, but our ability to imagine, inspire, and create remains boundless and inexhaustible.”
I’ll agree with that, and I think the authors of The Manifesto for Teaching Online would agree.
However, I think it is worth considering the ways in which AI might limit our creativity as well as enhance it. I don’t think it threatens it, though that is an impulse many have. I wonder what good analogies for this might be, and I guess drawing is a good one here.
As a young person, I used to draw really well on paper. I’m not as good at doing computer graphics work, but tools like Canva can help me. Being able to use an AI like Midjourney or DALL·E – to give it text and get an image of something I imagine, instead of having to do it myself – is really cool, really useful, and much quicker than it would have ever taken me to do it myself. I don’t think it’s better than what a graphic design person could get out of it, or possibly even do without it, but if I’m doing something small, like getting a header image for a blogpost, why not, right?
I know there are musicians who merge their own creativity with music-generating AI – or use AI in some way to help them along in the process.
I do think that, historically, education systems have often managed to stifle human imagination and creativity, and that there are many human and social threats to human flourishing regardless of AI, so AI is not the only or even the main threat here.
What concerns me is the way overreliance on AI might lead to some laziness about trying to do something yourself first, and how the first output from an AI, e.g. an outline it suggests for a paper, might limit you or stop you from taking longer to think something through and coming up with things the AI would not have thought of, because they weren’t in its training set, you know? And I’m concerned about how culturally biased the training sets must be, and how that might skew the outputs toward certain dominant cultural norms.
But yes, so is much of education, textbooks, teaching, scholarship, research, etc. So in that sense, the AI is just a mirror of all of that, only faster to access, and easier to use.
Which brings me to my next point, which is this: I think we need to consider the value of taking our time with things in order to think deeply about them. One of the things I don’t like about online “recommendation engines” is that I lose the value of getting lost in the stacks and stumbling upon things that interest me unexpectedly, or of taking the time to develop judgment over what I might want to read and find.
But again – I would NOT replace my library’s online catalog or Google Scholar with the library card system that required me to run all over the library to find a book, and that took me hours (OK, minutes when I got better at it) even to decide I wanted it. Would tools like typeset.io, which summarize research articles for us and answer our questions about them, start limiting our capacity to read deeply and critically, or would they help us read more things faster, in order to do more complex things with them? I think we should consider both of those dimensions and intentionally choose how we approach this.
3. Language Barriers Disappear
This one is truly the most contentious one I’ve seen so far.
Look, of course I looooove translation tools as much as anyone, and I know people who’ve managed to get along with others in different languages through them. They’re amazing and one of the most awesome AI tools out there. That Zoom can now do the auto-transcribe-plus-translate magic trick is just phenomenal.
Oh yes! Imagine all the intercultural multilinguistic conversations we can have!
Oh no! Imagine all the horrible misunderstandings from linguistic nuance that will happen!
Now to be fair, even human translation exerts violence, carries a layer of interpretation, and is not neutral… the danger here is that machine translation may be treated as neutral, when we don’t even know how exactly it’s doing what it does. The key caution is that we still need to remember what was translated by the machine, and that the interpretation may be inaccurate, incorrectly nuanced, or politically non-neutral. And we need to recognize the situations where translation must be done responsibly and where machine translation will not work. Please don’t use it for the Palestinian-Israeli conflict, for example, where human translation has already caused so much strife.
I worry also that overreliance on translation allows us to continue producing content in predominantly Western/European languages and expecting the rest of the world to translate (I mean, that’s already the case), rather than nurturing other languages – and I am concerned about what gets lost about how thinking is influenced by language and culture when we focus on translating WORDS, not THOUGHTS and CULTURES. Does that make sense?
4. Humans can Relinquish Control, but not Responsibility
Sarah writes in the blogpost that “Although humans can relinquish control [to AI], they do not relinquish responsibility for what is written.” She’s talking about the producers of AI tools as well as the end users being responsible for how they use the output, verify it, etc. I’m not sure about this one. Is using a tool equivalent to “relinquishing control”? Is searching via Google or using a calculator a form of “relinquishing control”? Is using a car instead of walking “relinquishing control”? I’m still thinking this over.

I guess we trust the car to work as we ordered it to, and then we control what we can in order to make it do what we want. With calculators, we assume they will calculate correctly based on correct input from us. But neither of these is like an AI text generator, because these tools are like physics/math: they should produce predictable outputs, and the same thing should happen each time you give them the same input. It’s easy(ish) to figure out when something is going wrong – if you use the steering wheel to go left and the car goes right, you’ll know right away something is wrong. Text-generating tools don’t just automate process; they automate content. They are capable of producing new and original content each time, and we of course steer them with the prompts we write, don’t we?

I don’t know, though, that that is relinquishing control any more than we relinquish control by Google-searching something. Of course, we know that Google search is not neutral – there is an algorithm behind it – but this does not always impact how we search. For example, I don’t always use DuckDuckGo. I actually usually don’t, because in many ways Google’s algorithm “works” for me. I think if we’re constantly aware of this, it’s not relinquishing control, but I also think there are unconscious ways this works on us, such that we’re not aware of what kinds of control we’re relinquishing. I’m still thinking about this one.
5. Attribution Remains Important
I agree with Sarah’s blogpost that “Humans learn in community with one another, even when they are learning alone.” I wonder how this fits with #1 above, and whether/how she imagines AI being cited when she suggests that hybrid writing won’t differentiate where the human ends and the AI/computer text begins. I think the blurring of lines does not necessarily mean their non-existence… making that nuance explicit, at least in the short term, is helpful.
6. Historical Definitions of Plagiarism No Longer Apply
This statement I mostly agree with: “Historical definitions of plagiarism will not be rewritten because of artificial intelligence; they will be transcended. Policy definitions can – and must – adapt.”
I actually think that very specific language is probably needed in policies, but the more specific the language, the more important it will be to constantly update it. For example, our local policy tends to emphasize not taking text or ideas from another human being without attribution, but this doesn’t cover AI, at least not in the letter of the policy. Obviously what they really meant to say was “any work that is not your own words or ideas,” but they had not factored QuillBot and ChatGPT into that equation.
I do think (and Sarah expands on this in her book) that the teaching of academic integrity needs to spend much more time on the WHY of citation and referencing – the ethics of acknowledging how others have influenced us, and how that is both respectful to them AND a sign of the credibility of our own work – rather than on the MECHANICS of citation. Tools like Turnitin.com overemphasize the mechanics of ONE type of plagiarism and give a backseat to the real reasons citation matters, and to the ways of referencing ethically.
What do you think?