Blogging an Unpublished Paper: South African & Egyptian Academic Developers’ Perceptions of AI in Education Part 9: Reactions to QuickDraw & Discussions of Bias & Ethical Issues

Estimated reading time: 8 minutes, 18 seconds

In case you missed my first post about this, I am blogging an unpublished paper as a series of posts over several days. You can read that post to understand the story and the reasoning behind this. Comments on each post are welcome. This is the ninth post and the last one from the findings section, focusing on participants’ reactions to playing QuickDraw and discussing potential bias and ethical issues in AI. I strongly recommend you play QuickDraw yourself a couple of times before you read this!

Findings

Reactions to QuickDraw

This exercise of demonstrating the learning of a neural network in a concrete way, by asking participants to play Google’s QuickDraw, seemed eye-opening. It was a good break from the interview questions and a fun game to play, but it also opened up interesting conversations: how the game chose which words/items to draw, how those words were interpreted by the player (my interview participant), how other people doodled the same thing, and how the AI then started to recognize patterns in order to identify what people were doodling. In terms of seeing the bias in the system, I asked participants if they could imagine biases on their own, or if they wanted me to give them ideas. Among those who recognized bias on their own was SAU5, who said, unprompted: “cultural context”.

For those who didn’t recognize the potential for bias, I explained some of my own conclusions and gave examples where the prompts themselves were culturally biased (e.g. “bat” can be the animal or the baseball bat, but most non-Americans would not think of a baseball bat – SAU4 mentioned the same with cricket: people from countries where cricket is played as a sport would think of the sport, whereas others would think of the insect) and other examples where the dominant culture of the people drawing would influence what the AI considered a “correct” drawing (e.g. angels were expected to have halos and wings, as in the Western Christian representation, whereas someone from a Muslim or atheist cultural background would not represent them as such; the game also expects hospitals to be represented with a cross sign on the building, but Jewish and Muslim hospitals would have a Star of David or a crescent instead).

Examples of things that happened during the interviews: the app understood “moon” as being either a crescent or a circle (with or without craters) (AUC1); a participant got slightly confused about drawing a hockey stick AND a hockey puck in the same set (AUC5), since this is not a very familiar sport; a participant was surprised to be asked to draw an angel because of a non-religious upbringing (SAU5); and another participant struggled to draw a pig (AUC2), probably because Muslims in Egypt would rarely draw pigs, as they are not culturally prevalent there.

AUC2 said, “I don’t think it’s bias, I think it’s not intelligent enough”, but after a few of the examples mentioned above could see how the data set can bias the tool.

SAU2 had thoughts again around themes she had mentioned before: “appropriation and commodification and what gets privileged… an extension of Western Eurocentric paradigms”.

SAU5 felt that in education, it “becomes the responsibility of the teacher to mediate use of AI in the classroom” and that “critical engagement with machines is essential to how people learn”.

This activity was meant to help participants understand machine learning better, and to see how it can reproduce bias from the way it is designed or from the type of data it is fed – a dynamic sketched in the toy example below. This helped set the stage for the next question about ethical issues in AI in education.
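As a concrete (and deliberately oversimplified) illustration of how a skewed data set biases a tool – this is not QuickDraw’s actual model, and the symbols, labels, and proportions below are entirely hypothetical – here is a toy classifier that “learns” by counting which label most often accompanies each doodle feature:

```python
# Toy illustration of data-set bias (NOT QuickDraw's real model).
# Symbols stand in for doodle features; all names and proportions
# here are made up for the sake of the example.
from collections import Counter

# Training data: (symbol drawn on the building, intended label).
# Most contributors mark hospitals with a cross, so crescents and
# stars are barely represented as hospital markers.
training = (
    [("cross", "hospital")] * 95
    + [("crescent", "hospital")] * 3
    + [("star", "hospital")] * 2
    + [("crescent", "moon")] * 100   # crescents mostly appear as moons
    + [("star", "star")] * 100       # stars mostly appear as stars
)

# "Learn" by counting: for each symbol, which label co-occurs most often?
counts = {}
for symbol, label in training:
    counts.setdefault(symbol, Counter())[label] += 1

def predict(symbol):
    """Return the label most often seen with this symbol."""
    return counts[symbol].most_common(1)[0][0]

print(predict("cross"))     # -> hospital (matches the dominant convention)
print(predict("crescent"))  # -> moon (a crescent-marked hospital is misread)
```

Because crescents overwhelmingly appear as moons in the training data, a crescent-marked hospital gets misread as a moon. The tool is not “unintelligent”, as AUC2 first suspected – it has simply learned the dominant culture’s drawing conventions.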

Reactions to AI After Discussing Potential Bias & Ethical Issues 

At first, I asked participants if they felt there were potential ethical issues in the use of AI in general, then to discuss how this influenced their views on AI in education. Some participants already had views on both issues and answered on their own. Where a couple of participants did not have particular thoughts on this, or wanted to hear mine, I mentioned the examples I listed in the methodology section. The point of sharing these was that my main interest was in how participants viewed ethical issues in AI in education once they became aware of these other issues.

The major ethical concerns participants had throughout the interview related to:

  • AI replacing the human teacher and reducing human presence for learners
  • Data collection and privacy
  • Relying on statistics and data uncritically
  • Biases reproduced in AI – particularly towards Northern/Western dominant knowledge, or standardization of knowledge
  • Policies to protect people (these came mostly from people who have lived in the West or South Africa – those at AUC who had not had this exposure did not mention it)

Several participants (e.g. SAU3, AUC4) found AI helpful, particularly in areas where it can improve efficiency, such as student recruitment and admissions, and felt that we should start with areas where humans are not able to cope, or where a machine might be able to do the work better or faster.

AUC2 had not thought of ethical issues before, but once we discussed them, she expected them to be “highly problematic…because a student can get in trouble, because of their race or origin or whatever, so you have to be careful really think before you use it, check first, see if it is biased or not, if it will cause your students a problem or not, any kind of inequality, you have to make sure as teacher they’re all happy”.

AUC3, AUC5 and SAU5 felt AI could be an additional tool, used with human judgment, to help alert a teacher in a (large) class, but that some things require personal relationships with students and context in order to understand – this is the “human touch that we need”.

AUC3 felt that human judgment is always needed on top of any tool, emphasizing the importance of connections, reflections and “fairness”, because “human statistics are often problematic, missing contextual cues and superficial”, not accurately representing what is happening behind the numbers. There was always the need for a “human factor” and “human touch”.

SAU4 mentioned “ethical issues across the board”, and that one way to address them was to include stakeholders across the board in the design and testing of systems. SAU3 suggested “being aware of risk but not avoiding [AI]”, as avoidance would “hold back potential of our regions” (meaning the global South and emerging economies). She believed that “technology is reinforcing inequality – unless you embrace it, you can reproduce inequality”. She also mentioned the bigger picture of the hegemonic system reinforcing domination, including the culture and context of programmers biasing the systems, and suggested a social project “to be producing computer scientists who are from different backgrounds… Break the stranglehold of tech and white males”. The availability of data in more Western contexts also reproduces Western privilege, as that data is used to train systems. One other thing she mentioned was the importance of using AI to solve problems – finding areas where AI can do something faster or better than what currently exists. SAU4 agreed, especially for offering targeted, differentiated support, but asked: “at one point does the benefit of giving so-called targeted support make [learners] into a kind of object or data point rather than a human being who has more to bring in than just their marks and socioeconomic background and whether their parents went to university… how do we make sure it does not overwhelm the kind of humanness a person needs from the system?”

SAU1 believed that some of the ethical issues related to bias in AI would eventually be solved, simply because the creators want to make their platforms better, and fixing these biases – such as bias in facial recognition – is “in the best interests of the companies” and would make their tools better. They were more concerned about “who is investing in AI?”, because so much more comes from private companies than governments, so “the research agenda is implicitly or explicitly determined by private companies or corporates”, which is extremely dangerous without regulatory frameworks.

SAU4 believed that ethical use of tools was about putting the appropriate policies in place – and that we should not be “luddites” about this; she was optimistic about the potential of using AI to support learning. SAU1 agreed: “we [educators and researchers] need to construct counterstories [to those coming from private companies], we have no choice” but to find ways to use AI usefully and not follow the dominant narratives.

AUC4 said “everything that has a use, has a misuse” and mentioned the importance of working on policies and regulations. Participants from South Africa focused on policies and regulations as well. SAU1 felt regulatory frameworks are “not designed to be agile” and have not kept up with AI. SAU2 suggested AI should not be used “without regulation or oversight” and that this should be done bottom-up, from universities up to the country level, because starting at a higher level would be inequitable given the diversity of resources. She suggested it be done via a “long process of participation and consultation”.

That’s it for now – how was your experience with QuickDraw? This part of the paper is where I think peer reviewers thought I was too biased. Also, because my paper title was “potential of AI in education” whereas I guess the paper is mostly trying to show the “limitations” as perceived by educational developers… that’s why I changed the title here. Titles make a difference.

Header image is just a screenshot of QuickDraw’s front page 🙂 I was too lazy to try to find an image that represented ethics or bias or something. Could have done a doodling one…
