What Can We Refuse in AI?


I wrote this post to reflect on my own practices and philosophies of what I feel I must refuse, and what I feel I cannot refuse. I am curious about others' practices, and I wonder whether I left any out.

  1. I cannot stop learning about the latest generative AI tools, because students mostly have access to them, and as a faculty developer and public scholar I am responsible for helping other educators navigate this uncertain landscape. I also teach digital literacies, so I am responsible for supporting students to navigate it too.
  2. I cannot stop researching or speaking about AI, because I bring a perspective that not many bring to the table and I need to continue on my journey to explore and promote critical AI literacy.
  3. I refuse to ban AI use completely, and I also refuse to give up on hoping I will be able to create authentic assessments that students find relevant enough to choose not to use AI inappropriately. I pledge to focus on developing students' AI literacies, especially in terms of potential bias and hallucinations, so that they may develop the judgment of when to use AI and when not to.

What are some things I can refuse?

  1. I refuse to listen to the hype around AI, wherever on the spectrum it may be. I’ll respond to the AI we have now and the AI we have had historically, without getting hysterical about the future until it comes.
  2. I refuse to believe that humans and educators have no agency over how AI will be used in their field in the future. We can always have a say; it just needs to be grounded in the reality of what is possible with what we have. It may require systemic changes in how we do things, but these need to be done with intentional adaptation towards our goals, not as knee-jerk responses to tech.
  3. I refuse to use, or support the use of, AI for policing learners and mistrusting them – Turnitin.com, algorithmic proctoring, AI detectors. First, because such teaching potentially fosters mistrust, and second, because such tools may reproduce bias and inequality (e.g. neuronormativity in AI proctoring, and bias against non-native speakers in AI detectors).
  4. I refuse to use AI for administrative educational uses such as learning analytics, or for important teaching elements such as giving feedback to students. Students are whole human beings, and machine-readable data about them does not represent the whole of who they are. Such uses have the potential to reproduce biases previously perpetuated by humans, including me, without accountability for them. These are decisions we need to remain accountable for, and ones we need to accompany with socioemotional support.
  5. I refuse to ever take AI output as given or as unquestionably correct. I pledge to remain vigilant and keep testing for hallucination and bias. I also refuse to overuse AI simply for fun (rather than for research or educational purposes), so as to reduce the harm to the environment.
  6. I refuse to think there are simple solutions to the epistemic bias and inequality in AI, but I pledge to support all those who truly try to subvert this.

What can/do you refuse?

Featured image: In generating the image via DALLE-3, I could not use the word refuse or refusal. Apparently it violates their rules. I hadn't expected that, but I can imagine why. So instead I tried several things and ended up with this one, "hands or fingers indicating stop or no," and it produced the weirdest set of images. A few do gesture no, but the others are just so, so weird that it felt like a good show of AI going whacko. I noticed the diverse skin colors, though.

2 thoughts on “What Can We Refuse in AI?”

  1. Maha, I’ve been reading your blog posts over the years, and with the advent of AI, I moved the blog alert to my inbox so I will notice them better. I found this post memorably good and thought-provoking and useful. Every point you make is worth embracing by any educator, and #6 of the second, refusal list is particularly striking to me. AI data sets are going to continue to represent a minority of societies, cultures, and epistemologies; AI algorithms are going to continue to reflect rhetorical thought patterns of dominant societies that create and control the systems; and the AI market is going to advance the interest of the rich and powerful and those who have influence even within the small number of countries/societies whose geopolitical interests have already colonized and marginalized and often erased the epistemologies of others. So, as you say, there are no simple solutions to the epistemic bias, and all we can do is to try to subvert the inherent bias through our discourse, through fostering critical AI literacy that has a strong global DEI dimension, and through public scholarship (thank god there are scholars like you who shoulder this burden at a scale that is humbling for the rest of us).

    You ask what I refuse: I refuse to say nothing when a computer scientist tells me, a writing scholar, that “AI writes just fine,” because I don’t tell him that “AI does computer science work just fine.” I also refuse to say nothing when a medical doctor says that “AI does a great job of diagnosing patients” in front of me as a potential patient because I know she hasn’t looked under the hood, as no one can truly look under the hood of these systems, by definition/design.

    1. So great to hear from you, Shyam, and I love your comments. I will never forget the ways you helped me develop and hone my critical perspective early on in my career – and I'm glad you're finding my writing relevant. I hope that I continue to hit the appropriate notes re epistemic bias, because darn it, other people are often like, “epistemic bias, yeah yeah,” then go on as usual with the inevitability-of-AI BS!
