I’ve been thinking, writing, speaking, workshopping, conversing and experimenting around AI for 4 months straight now, and I’m mentally preparing for my next “phase” of writing, speaking and workshopping as my thinking and experimenting also evolve… and because it’s Ramadan and lots of vacations (spring break, Easter, Eid coming up), the conversations are less intense than usual. And I haven’t seen my students in a while because of vacation (I miss them!).
I’m writing this post because my brain is about to explode from the amount of stuff in my head about this that I have not written down, and I feel like I need to write it to help me think more clearly. Woah, AI can’t help me with THAT 🙂 nor do I want it to 🙂
First off, in terms of writing, I’m in the process of writing four different papers with four different groups of people on topics that either center around or touch upon AI, and working with local colleagues to develop workshops to help us all finish this semester and plan for next semester. And I’m thinking about my own class! And I’m helping write some AI guidelines within our institution. And I’m helping test AI detectors (they are so bad).
I won’t reveal everything, but I do want to sort of headline some of the major ideas here, because I think I could use these in future talks I give:
- The importance of Open Educational Practices in supporting us to learn about and respond to unexpected changes impacting education
- What does it mean to respond to AI using an approach that helps build trust and belonging rather than mistrust and suspicion?
- How have those of us who support other educators in our institutions been responding to unexpected and complex shifts such as the advent of AI tools? In complex ways!
- How do we create educational development around AI with Intentionally Equitable Hospitality? Recognizing that there is no one-size-fits-all answer to almost any question related to AI, and also that there are many systemic inequalities amongst educators and amongst students to consider
- How do we support each other to create meaningful assessments that learners will be motivated and interested in working on and struggling with because they care, because they have relevance beyond our classes, so that, even if they use AI, they don’t rely completely on AI because they want to learn about the thing
- How do we harness AI where it makes sense, to promote productivity and future-possible professional skills, while remaining critical of its output not only due to inaccuracies, but also due to the biased datasets it’s been trained on and its very specific writing style that can mess with individual “voice”?
- How do we resist a culture of “cat and mouse” focused on detecting AI and punishing students who use it? Not just because of how bad AI detectors are (they’re pretty unreliable, every last one of them), but because we can instead work with students to build trust and transparency around AI use — not just in terms of citing output but openness about processes, so that we can all learn together
- What does useful transparency in use of AI look like in education? Where might transparency/citation not be needed, where might it be necessary, to what extent is it useful to use it, and what might it look like for different contexts?
- How can we hold multiple perspectives on AI all together at the same time, remain open to different perspectives, without losing our values in the midst of the hype the media is pushing on us?
- Most importantly, I think: how do we NOT let the AI conversation derail us from our mission and purpose as educators and educational institutions? Calls to “go back to paper exams” are understandable if our purpose is to escape AI use, but my kneejerk reaction is that they make no sense if our purpose is to offer authentic assessments that prepare learners for the world out there, where the internet and AI and everything else exist. Then again, they might make sense for basic skills courses, but not across the board. These conversations need to be very nuanced, and even I need to remember to avoid “shaming” in these conversations, because context is so important here.
As always with creating talks around this kind of thing, it’s difficult when you’re doing it online and you don’t know the audience, and haven’t had a chance to chat with them the night before or the morning of your talk. What is their experience with AI? What is their attitude towards AI? My keynotes are always interactive, so I give room pretty early on for people to share some of where they’re at on these things, via polling usually, or chat, but that’s not a replacement for deep conversation, of course.
I don’t know that this post has helped me think through what I want to do going forward on anything specific 🙂 but it’s given me a couple of other ideas for my class, so I’ll write that one out separately (and yes, I’m procrastinating from grading some student work right now!)
I’m curious what kind of questions around AI are uppermost on *your* mind?
Featured image is a banner I created from a DALL·E-generated image with the prompt: “brain exploding rembrandt style blue and purple colors as part of a human head”