Someone recently wrote about how ChatGPT and similar generative AI are not really “edtech” – they were not developed for education – and that’s, of course, true. I’m not entirely sure why they were developed in the first place, but I guess advances in natural language processing and generation are, in the broad sense, useful, in the way that automated translation is broadly useful.
I’m trying to unpack something. I have a very strong negative reaction to most AI use in education (see my blogging of an unpublished paper on AI, based on research I had done and written about in 2019 – was the paper not accepted because I took a critical view?): learning analytics, tracking student progress for “retention purposes”, algorithmic proctoring, automated grading, emotion tracking, adaptive learning, etc. In all of those cases, AI is being applied “to” the learner in ways that are paternalistic at best, and violent, biased, criminalizing, dehumanizing, and surveillance-based at worst.
So why do I not feel that way about ChatGPT and generative AI in general? I can see value in other uses of AI, like automated translation (though I worry about its widespread use, because it can cause misunderstandings and is of course biased, in that it works better between Latin-based languages than between, say, Arabic and English) and voice recognition – because I have rheumatoid arthritis and it helped me type when my fingers couldn’t anymore, and I know it helps many other people for whom typing is a burden. I am concerned about facial recognition because of all the biases we know are baked into it. AI in search is of course helpful, but it also reproduces and exacerbates bias, and it collects data and violates privacy the way Google does.
But back to my point: why does ChatGPT not evoke as negative a reaction in me as the other AI in education? I think it’s because it’s technology the learner can choose to do whatever they want with – rather than technology that teachers/institutions do TO and ABOUT the learner. In a recent Future Trends Forum session led by Bryan Alexander (video will appear here eventually), Ruben Puentedura kept bringing up agency – and this is where I think the lines around ChatGPT (and generative AI of any kind) are blurred. Of all the AIs out there, it appears to give learners some agency, and it can be empowering in that sense if it helps, for example, a non-native speaker of English or a weak reader better understand a reading, or helps someone with ADHD get started writing when they’re having a block, or continue writing when their brain wanders (maybe) – but of course it can insidiously limit agency by directing our thinking and writing in a particular direction based on the datasets it was trained on – so that we all tend to write in the same ways.
I also wonder if educators’ kneejerk impulse to “control” students and protect “academic integrity” made me want to step back and see things differently. It made me question a lot of things, including the reductionist view of “writing as learning”. I learn a lot and my thoughts develop as I write, so I get that, but a lot of learning happens outside of the writing, and the writing is a representation of that learning.
I will say that my experiment in promoting “trust and transparency” with students has been a learning journey for me. I’ve learned that students use AI in ways different from what I had expected, and of course they don’t all use it in the same way, nor do they have similar attitudes to it. Some are very concerned by it and scared of it; some think of it really casually, as if it has simply integrated into their way of life and is just “there”; others are fascinated by it; and still others are unimpressed. Faculty reactions vary as well, but most who have looked at it a bit more closely are realizing they can intuitively see when a student has used it (or at least used it poorly) to create generic papers that don’t answer a prompt very well… or they have figured out where students could use it to support their productivity.
There is still so much critical AI literacy needed, and so much potential harm (plus the existing harms from older AI that we know about) to consider, but I don’t think ChatGPT and the like are the most harmful among them. I do think we need to tread carefully, with eyes wide open, rather than turn our backs, but treading carefully does not mean we can’t experiment with something like this. I actually feel that, as an educational developer, I have a responsibility to experiment with it, learn how students are using it, and learn how it works, so I can support other educators. I need to try as many of the tools as I possibly can to see their limits. You can never learn all the tools – but if some might be useful, we need to know; if some might be harmful, we need to know. We need to know because students know, or will know, and this is the world they’re in and the world they’re going into. It’s not about an inevitability of AI taking over everything – I don’t believe in that – but some AI is already everywhere, and this is just a little bit more of it everywhere. And we need to know. And keep digging and asking questions.
Featured image: a cat with one blue eye and one green eye, by Alexa from Pixabay.