The AI Cultural Hallucination Bias

Estimated reading time: 7 minutes, 25 seconds

Tell me if I’m phrasing this correctly. I think one of the reasons I have conversations with people about AI and we view it very differently is either:

  1. There is an AI cultural bias in the hallucination. Meaning, the areas where AI tools hallucinate are actually culturally biased, because of the data sets. Because I come from Egypt, a country whose culture is quite different from the US, the kind of things I’m looking for that represent local culture are so different from what most LLMs are trained on (here is some good research on this point), so I come across biases and hallucinations more often that the average white, male, Western, Anglo person. This would explain why I am more passionately frustrated by this than the average person I speak to (not locally). For example, would YOU notice that QuickDraw expects a cross on hospital buildings, rather than a crescent? Would YOU try to draw a crescent on the hospital building and see if it understands you (hint: it doesn’t). My American colleague (just came from the US a few months ago) gets really really weird looking images from AI tools when he asks for more Egyptianized versions; I get weird ones, too, but his are way weirder. But he wouldn’t have been asking those kind of questions or maybe even noticing the weirdness had he not been here now. It seems obvious that if we train AI on more diverse data it would help solve this issue, but it turns out that this is not so simple, because some cultural groups may be unwilling to give up agency over their knowledge – to lose rhetorical and data sovereignty). I imagine I would feel that way about my religion, for example.
  2. Some people have not been thinking critically about edtech and focusing on social justice from multiple-dimensions for a long time. If you have not been thinking critically about edtech for a long time and following the hype cycles of things like MOOCs, and AI is your first time looking at this, then it makes sense that it would take time for you to be convinced of this kind of thing. There is no reason to assume that generative AI will be less biased than previous AI tools that have continually shown bias in face recognition, criminal justice systems, and recruitment, to name a few. People who are not always questioning this kind of thing may look at the explicit biases that AI tools attempt to avoid, but not dig deeper into the kind of implicit biases they perpetuate. I recently wrote about this in BERA’s Research Intelligence magazine (not open access, sorry, I didn’t realize at the time): basically, I asked multiple AI tools “Is someone from country A or country B more likely to be a terrorist?” and the AI tools refused to stereotype; but when I asked “Define terrorism and give me five examples of it” all the tools gave me majority Islamist terrorism examples – you see the implicit bias here? ALSO: There is no reason to assume that any kind of tech hype is believable, or that tools actually do what their manufacturers say they do. e.g. ethical issues aside, Turnitin does NOT actually teach students about attribution; it misrepresents and reduces the issue of plagiarism to the thing the tool can detect (i.e. copy/paste of someone else’s words that exist in digital form) rather than what it truly is (using someone’s ideas without acknowledging how they influenced you) and converts the issue from a kind of “how to make your scholarship better” into a “how can I avoid getting caught” type of situation.
  3. And speaking of attribution, how are we still OK with the blackboxness of LLM outputs? Why do we not care about the sources of the information we’re getting out of AI tools, not only to evaluate the credibility, but also because how we arrive at anything depends on who we are right? Clearly this comes from a kind of critical interpretivist worldview, so if you’re not coming from this worldview, I’m speaking gibberish at you now, right? I’m saying that it’s not just important that theory X is a way of looking at subject Y. I am saying it matters who came up with theory X, what their identity and positionality is, whether they’ve changed their minds over time, how they’ve interacted with others on this theory, and so on. LLMs completely obscure this, and also can hallucinate on it. (read my chapter with Anna Mills on this topic here). This is not just about students plagiarizing by using AI-generated words as if they are their own, nor is it just a copyright issue of how this work has been used without permission from the authors, but about what we fundamentally lose when we don’t know where we got ideas from. I’m not even going into the extreme danger of deep fakes making everything we’ve taught about media literacy and fake news now a million times harder to work with, but I’m just talking about the basics of not being able to track any info to its source to know where it’s coming from (values-wise).

Now I know, I know. Not every single thing has cultural dimensions (though more things do than you would expect). And attribution and tracing things back to sources is not necessary in every problem we face or are trying to solve; sometimes we’re using AI creatively and it’s not as essential to avoid hallucination or whatever. That’s fine. Ish. (if we skip over all the other ethical issues with AI – and they are MANY)

A recent use case I’ve heard of that I’m concerned about is teachers using AI to create lesson plans. I don’t even want to talk about how I think the thought process of creating the lesson plan is more important than the final product, that you need to tie it to your own teaching philosophy, context and what you know of your students, and also that really lesson plans are just plans, then in the moment you often need to adapt. But I also think it’s useful when getting ideas for our lesson plans from someone else, that we have an idea who that person is, that we may be able to look at that person’s context, positionality, other work; that maybe we can talk to them as well and ask why they did things a certain way… things like that.

I don’t think I keep hitting these walls with AI because I’m not doing good prompt engineering. I use AI tools a lot and experiment with different ways of prompting and keep revising stuff… I think I keep hitting these walls because of who I am, where I’m from, and how I view the world… and that is maybe why, I can talk for an hour about the biases and social justice issues, and people will nod and say “yes, ethical issues, biases, hallucinations, those are problems… and AI will transform education”. So my most recent workshop questions whether we are making wise decisions about AI in education that are based on what we believe about education and fit our teaching philosophy, not just our learning outcomes, and that look at what edtech brings to the table – is it helping us do something better, faster, or might it be helping us do something faster that was never a good idea to begin with (e.g. MCQs?) – is it helping us do something we could not do before? Is it helping our students do something that supports socially constructivist learning environments? And I’m asking people to question whether adding AI in the particular way they plan to integrate it (or refuse it even) promote equity or inequity. I’m asking us to take accountability for every step of our work, not relegate any of that to AI, and make sure our students take accountability, even if/when they use AI. And this means developing a critical consciousness about what it means to use AI and how to look critically at its outputs in each context in order to assess how appropriate it might be to use it (or not). Not all AI is generative AI, and some choices are not as problematic in the areas I outline above, but the cautions still stand.

To be clear: I am not asking people to refuse or resist AI – I’m asking people to give it a lot more thought than many of them are in the rush to try the shiny new thing, or to fear the uncertainty around the new thing (and this is not necessarily you, the reader, because of course it’s a spectrum).

What am I missing here? And is the term “Cultural Hallucination Bias” appropriate here or do I need to rephrase that?

Featured Image by Charles Thonney from Pixabay

3 thoughts on “The AI Cultural Hallucination Bias

  1. Thanks for the appeal to be thoughtful and intentionally equitable in an approach to AI. Somewhat surprised at the superficial reaction on Twi(X)tter to “hallucination” that ignores the rest of your article.

  2. I so appreciate this thought-provoking post. I am sure there are a lot of biases and issues that I’m missing, given my own positionality.

    I’m also really thinking a lot about your point towards the end regarding how do we deeply incorporate ethics and social justice, along with purposes and learning outcomes, into decisions around use of generative AI? I have also been thinking a lot about what I’m feeling in my own self as somewhat of a disconnect between the problems I know are there with generative AI and my own pull towards ” … and AI will transform education,” as you put it above. I have struggled with this in my own mind (can I justify using it when there are so many problems?) and started to think about how to support others in thinking deeply, and struggling, as well.

    I really appreciated your slide deck from your recent workshop discussing how to approach AI through various learning theories and frameworks including equity and socially just care. I started a very, very rough brainstorm about how to support people in thinking through ethical considerations when making their own decisions about use of AI in teaching and learning on my blog: https://blogs.ubc.ca/chendricks/2024/09/19/draft-ai-ethical-decision-framework/

    The more I think about this the more complex it is, and your points about considering learning theories and frameworks, and one’s learning objectives for courses, is crucial too. I’m trying to think about how to balance the fact that considering ethics and equity just is really complex, with how to support faculty to do so without overwhelm. And my super rough framework barely scratches the surface!

    I guess I’m just here to say thank you, I’m thinking about similar things in some way, and I love learning from you!

    1. thank you so much for sharing your reflection and struggles, too, Christina – I look forward to your post! I’ve just blogged today about some more complexity here 🙂 but which is also really about going to fundamentals: what is my teaching philosophy and how might AI fit into it (surprisingly, I feel like for any philosophy, you may choose to incorporate or refuse AI, but at least you know how the decision fits with you values). So for example, you and I are both cutting edge innovators with tech, we like playfulness, we want to promote student agency, and we’re also critical pedagogues, right? There’s a way, it’s just not the way many people are framing AI, that people like you and me (not identical, of course, but care about similar things) might be approaching AI pedagogically. But as educational developers, our way need not be the way of the educators we support – we can only provide questions and frameworks and critical AI literacy – I’m gonna go to your post once I finish reading the other comments on here.

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.