On Equity and Trust and AI

Estimated reading time: 9 minutes, 15 seconds

So next week, Anna Mills & I are giving a workshop on Critical AI literacy (info and registration here), and in my department at work, we are also planning a heck of a lot of conversations, workshops, tip sheets, etc. And of course I am always attending sessions, reading articles, reflecting on my own teaching, talking to people, etc. So here are new thoughts on several things.

Critical AI Literacy

For a quick overview of critical issues in AI, read the post by Autumm Caines. And read this post by Leon Furze on AI ethics more broadly. I am planning to discuss both of these in class early on inshallah. Leon’s has an infographic that can be a good starting point. Autumm’s includes some good learning activities you can try.

The other element of AI literacy is, if you use it in class, helping students develop the judgment to critique its outputs and get better at prompting it to do better, and to recognize where they may not wanna use the AI, because they may lose something important and fundamental to their learning. For ideas on two approaches, watch this video (I think her name is Sherri Gibson) and read this blogpost by Mark Marino.

Equity Issues in AI

So Stephen Downes wondered if my latest blogpost missed the point of the problems with AI by focusing on what teachers can do that AI can't. I don't agree with him that, just because many teachers aren't caring or good role models, AI can maybe do better. It just means we need to invest more time in helping teachers improve, and in improving the conditions that may be obstacles to them doing better: large class sizes, heavy workloads, poor working conditions, etc. But Stephen is right that equity issues in AI access and use are important, so I'll turn to those here.

But to segue into that, I was in one of Bryan Alexander’s FTTE sessions today, with guest Maria Anderson. I got a hint that a global perspective was welcome, so I raised my hand and shared that:

  • People were always saying that ChatGPT was available for free to anyone with an internet connection. Let me tell you, folks, it is blocked in Egypt, Saudi Arabia, and God knows where else. It's something to wonder about. To get it here, I have to use a VPN, a phone number in a country where it is not blocked (in order to verify my account), and an incognito window. Many people don't have the digital literacies or connections to do that, though TikTok and Instagram videos will help folks get virtual phone numbers and such. I don't understand this decision, as it seems to stem from OpenAI (the company behind ChatGPT)? But anyway. We do, however, have free, regular access to other AI writing tools. Also, ChatGPT now has a subscription option that privileges people who pay $20/month, in that it will work faster and not be "down" for paying folks. Ugh for non-paying folks, for whom $20 is a huge amount.
  • It sucks at Arabic. It writes like a 6- or 7-year-old, or a bad translation AI. So it was clearly not trained on enough Arabic.
  • Its political correctness is both endearing and concerning. I once asked it to write a feminist interpretation of a Quranic verse, and it refused, saying the Quran is a sacred text for Muslims and it cannot get into that and offend Muslims. On the one hand, I found this really adorable, and I felt the intention not to harm a religious group was lovely. On the other hand, I realized later, after talking about freedom of speech issues, that I actually hadn't made an offensive request, like asking it to say bad things about the Quran. But I'm kind of OK with it being all like, "I'm not trained to talk about religion" or "I don't wanna offend anyone". I'm just not sure if I will always agree with the lines it has been trained to draw, you know?
  • You can ask it to write in a non-native speaker voice. And it does it brilliantly.

I asked Maria about the epistemic injustice in this tool, and she made a good point about how the internet as a whole is skewed towards white, male, Anglo content, as are spaces like Reddit and Wikipedia. So the AI developers would have to be really intentional about training it with more diverse data, rather than data representative of internet content that is already skewed, unrepresentative of world populations, and a place where minorities show up even less than they do in real life. Of course, having it blocked in some countries and not fluent in some languages doesn't help with the basic step of access, but then there is also the epistemic injustice it perpetuates: if you can train AI not to engage with the Quran so as not to offend anyone, you can probably train it to read more diverse texts and POVs by more diverse people. I'll call this one a missed opportunity for ChatGPT in terms of getting testing data from the countries it is blocked in.
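To make that skew concrete, here is a toy illustration of my own (not from the FTTE session, and the "corpus" here is entirely invented): a model can only echo the distribution of whatever it was trained on, so if 90% of the training text comes from one perspective, roughly 90% of what it generates will too.

```python
import random
from collections import Counter

# Invented toy "training corpus": 9 of 10 documents come from one
# perspective. Any model trained on this inherits the imbalance.
corpus = ["perspective_A"] * 9 + ["perspective_B"] * 1

# A deliberately trivial "model": it generates outputs in proportion
# to how often each perspective appeared in training.
training_counts = Counter(corpus)

def generate(n_outputs=1000):
    """Sample n outputs with the same distribution as the training data."""
    labels = list(training_counts)
    weights = [training_counts[label] for label in labels]
    return Counter(random.choices(labels, weights=weights, k=n_outputs))

print(generate())
# Roughly Counter({'perspective_A': 900, 'perspective_B': 100}):
# generating more outputs never surfaces the minority perspective more,
# which is the point about needing intentionally diverse training data.
```

Real language models are vastly more complex than this, but the underlying issue Maria described is the same: skewed in, skewed out.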

Now I have not talked about this in FTTE with Bryan and Maria, but perhaps Stephen was referring more to my points related to labor? I am unsure. My current thinking on equity in AI is multi-pronged:

  • Who has access to it, literally, to learn or to teach?
  • Who has the digital literacies to work with it and produce something good/useful, in ways that are useful to humans?
  • Which data needs to be used for a more culturally diverse AI that can represent minoritized and marginalized ways of knowing? If we know it hasn't been trained with diverse enough data, then how might its training bias limit our thinking and what we can do with it?
  • Whose labor has been exploited along the way? What are we gonna do about it?
  • What about the privacy of our own data? Are we helping train the AI each time we "play" with it? What other risks of data collection and privacy violations are we facing? Have we read the terms and conditions and privacy policies of all (any?) of the tools we use? What are the pros and cons of taking the risks of using those tools?
  • What systemic issues are we avoiding solving by bringing in AI? Systemic issues like large class sizes that make it difficult for teachers to read student work carefully, for example, and give substantial feedback to enable students to improve.
  • Intellectual property rights violations are also an ethical issue: some people's openly shared content was used to train AI that they are then asked to pay to use, and even worse, the AI makes it harder for those people to make money from their work because it can now mimic it (like art).

I Don't Ask If I Trust My Students, I Ask If My Students Trust Me

The statement above is mine, but inspired by Tim Fawns (in conversation with Dave Cormier) and Ameena Payne & Jan McArthur's recent article on careful feedback. Tim said something like, "if we don't trust our students, what does this say about us?" And Ameena and Jan's article focuses on care in feedback and the line between trust and vulnerability. Our students make themselves vulnerable when they trust us… we have the power to harm them in many of the moments when they put their trust in us.

This morning, I had my first class. I told students this course would discuss AI a lot, because it is a digital literacies class. I asked who had used ChatGPT. Not a good move for the first day of class. Only one student admitted to it (also, possibly many don't use a VPN – see the earlier section on equity). We talked a bit more about it, and then I said, "of course everyone knows Quillbot," raised my hand, and added, "my 11-year-old knows Quillbot," and everyone raised their hands.

I explained my policy of transparency and disclosure about AI use, and that we would learn to navigate it critically in class, but that it was not banned or discouraged. Just that they needed to use it reasonably for things that can help them express *their* thoughts and ideas better, or give them some help when stuck, rather than replace their whole thinking process.

My point, though, is that these students don't know me well yet. Some have cousins and friends who took my course and recommended it/me, but some have no idea. Why would they trust me right off the bat about their AI use? Also, they are used to *not* being trusted by their teachers, and they don't know yet that my default is to trust them. So my role here is to build trust gradually. It will take time, more time for some than others. It will take some one-on-one time. And community building. But if we start by gaining our students' trust, they are more likely to let us know what they're doing and why.

I imagine this semester that if I give an assignment that AI can do well, I can gain my students' trust enough that they will tell me that, and show me I need to change my prompt. I imagine they can be my partners in critically evaluating the output of the AI and recognizing when to use it for which purpose, and when it might actually limit their creative or critical thinking. I think listening to students on this is key. They are not us.

A participatory and transparent approach to critically approaching AI in class requires a foundation of trust.

My plan for the semester? So far!

I have decided to use visual AI as my "inroad" into gaining that trust, so the students don't need to worry too much. I suspect that unless you are an artist or graphic designer, using visual AI seems like a good opportunity: it helps you get a visual for something you wouldn't be able to make yourself. It's not a typical assignment in any course, but it would help students avoid picking up any image from Google, which may be copyrighted. But there are other ethical issues at play with visual AI that we can discuss as an inroad to the wider discussion on AI ethics (using Autumm and Leon's articles), before we get back into writing generators. I will invite a colleague to guest speak on visual AI and its ethical issues inshallah.

My plan for next class is to play Quick, Draw! with students to show them the idea of how pattern-recognition machine learning works, then show AutoDraw (the flip side of Quick, Draw!), then you.com imagine (an image generator with several models built in). Then I will ask them to do a Cover Story for the course, using images they generate from AutoDraw and you.com imagine.
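For anyone curious what "pattern recognition" means under the hood, here is a minimal sketch of my own (the features and labels are invented for illustration; this is not how Quick, Draw! is actually implemented): the game essentially compares your drawing to patterns learned from millions of labeled drawings, and the simplest version of that idea is a nearest-neighbor guess.

```python
import math

# Invented toy "training data": each known drawing is reduced to two
# made-up features, (number of strokes, roundness from 0 to 1), plus
# the label a human gave it. Quick, Draw! learns richer features from
# millions of real drawings; the matching intuition is the same.
labeled_drawings = [
    ((1, 0.95), "circle"),
    ((4, 0.10), "square"),
    ((3, 0.15), "triangle"),
    ((9, 0.40), "sun"),
]

def guess(features):
    """Return the label of the closest known drawing (1-nearest-neighbor)."""
    _, label = min(
        labeled_drawings,
        key=lambda item: math.dist(item[0], features),
    )
    return label

# One smooth, very round stroke: the nearest known pattern is "circle".
print(guess((1, 0.90)))  # -> circle
```

The real system recognizes your half-finished cat because it has seen the statistical shape of thousands of cats, which is exactly the intuition I want students to walk away with.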

What are your thoughts?

Featured image of a water droplet on a printed page or book. Image by Janeke88 from Pixabay.

8 thoughts on “On Equity and Trust and AI”

  1. Every time I have tried ChatGPT, or examined someone's example posted online, it has been terrible. I just tell my students to avoid it, because it feels abusive to tell them to use something that is so bad. It amazes me that people will read these really garbage outputs and then post them online as "awesome" examples. I just don't get what has happened to so many people with regard to all this.

    1. I feel like it writes like a B-minus level student at my university. So it is awesome in that it sounds human and addresses the prompt, not that it is an A+ or professional writer. If that makes sense? Can you not imagine a high schooler or freshman writing like that? Vague but related to the point? If you give it detailed prompts, it does better.

      1. The highest any of the ones I have gotten would get is maybe a 40 – there is just no way to get it to address the prompt. I have tried. It is just total garbage in my field.

        1. I don't know, Matt. I once gave it a more detailed prompt: I basically asked it to explain how to implement online learning using three theories (cognitive-behaviorist, social constructivist, and connectivist) and to give an example of each, and it did well. I think I even asked it to give examples of designs outside these three types.

          Not graduate student level. It writes like a high schooler or first-year student. It sounds like my students. It wrote a relevant response, and regenerating gave new examples.

  2. ChatGPT is not banned in Malaysia, I guess. I can access it, and I recently came across newspaper articles about it. But in my field (dental education), where much of the assessment is still pretty old school, we have f2f exams (coursework is more in that form) and the rest is psychomotor skills. I think we are losing out on the creative element of AI, although it is pervasive when it comes to haptic simulation and those sorts of things. But I think it goes back to Tim and Dave's video you shared: it's about how we design the assessments. Let's not blame the car (it will always be faulty) or even the driver (it's human), but check with the manufacturer, the roads, and other things that may play a role in the experience and outcome of driving.
