So of course every day I change my mind about how I’ll introduce AI in class beyond our first class. I shared a bit in a recent post about starting with simple visual AI like QuickDraw and Autodraw, and I submitted this idea in more detail to this wonderful open project by Chrissi Nerantzi and others – 101 Creative Ways to Use AI in the Classroom. Look for the slide deck link and contribute your ideas! Or join us Tuesday, Feb 7 at 5pm GMT (7pm Cairo, 9am PT, 12pm ET) for an Equity Unbound workshop with Anna Mills and me, where we’ll brainstorm and possibly contribute – register here.
Anyway, I got three good ideas today for things to do in class:
- From Bonni Stachowiak, I got the idea of having students “narrate their process via screenshare” (she does this with students using Mike Caulfield’s SIFT model for fact-checking; I’ll use it for students to narrate their process of using AI in their writing, adapting prompts, etc. I’ve been doing this with Anna Mills and my boss already, and I think it would help students learn to surface process over product and learn aloud). For a split second I thought this might be like self-surveillance (and Foucault would be mad), but I think it’s more about cultivating self-awareness (and most of us would agree that’s good?);
- From Lance Eaton: he gives his students his own ChatGPT login credentials to help preserve their privacy. I may do the same.
- Of course I want to share with my students the horror of the exploitation of Kenyan workers in the “process” of creating an ethical AI. The product of “ethical AI” that OpenAI strove towards ended up doing horrible damage to Kenyan workers’ mental health, and also greatly underpaying them, but it’s the mental health part that is most horrific. I wonder who should be held accountable for this? And what other horror stories lie behind much of the tech we consume and use happily (and honestly, also food – think fair trade – and clothes – think Nike sweatshops)? How disgusted and horrified we feel when we know this, and yet we don’t stop using it. But it’s not really about whether we use it – it’s about what we can do to prevent this from continuing to happen, not just by OpenAI to workers in Kenya but by all companies worldwide to any humans anywhere in the world (and to any harm to other life on the planet, including animals and plants and even the inanimate earth that nourishes us all). Turns out there’s been a lawsuit against Meta and Sama since last year (article on TechCrunch) and a recent update that Sama is shutting down operations after this lawsuit.
I’m going to repeat this strange tidbit because it is really strange. Sama is the name of the “ethical AI” company in Kenya, and “sama” is the Twitter handle of Sam Altman (CEO of OpenAI). The founder of Sama is a different person, but the name coincidence is weird. Sama means sky in Arabic, and the person who founded Sama had an Arabic name, Leila Janah (she died in 2020, the internet tells me – the company’s About page history is inspiring – so I don’t know what happened, where things went wrong). This is maybe such an important story about intention versus reality. So often someone starts out with good intentions, and I’ll believe they are authentic in the intentions they state. But what happens in the process that can go wrong?
The product/process distinction is essential here. It is not enough that OpenAI created ChatGPT to be polite and avoid swearing and mean words; that’s not what makes AI ethical. The process of training the AI must also be ethical. And it was not, at least for one part of it.
Similarly, I think Lance’s suggestion to give our students access to our own ChatGPT accounts is a great idea because we then do not put their privacy at risk. In the process of teaching them to be critical of AI, we can keep protecting at least their privacy without sacrificing their learning. I do think anyone who teaches digital literacies cannot avoid exposing students to some AI that has something from OpenAI in it (even if not ChatGPT).
What are other folks doing? I hope you’ll contribute to the 101 slide deck, and I hope you’ll join us on Tuesday inshallah!