In case you missed my first post discussing this process: I am blogging an unpublished paper as a series over several days. You can read that post for the story and the reasoning behind this. Comments on each post are welcome. This is the third post, covering the research methodology and positionality. I have already published Part 1: abstract/references and Part 2: introduction & literature.
This research follows a critical-interpretivist paradigm, emphasizing the researcher as research instrument and foregrounding participants' interpretations and perceptions in understanding their own reality. My own preconceptions as a researcher were my starting point, and my understanding developed as I reflected on and reinterpreted it through the process of interviewing others (Gadamer, 1975, cited in Usher, 1996). I see myself as a bricoleur, and I am aware of how my own identity and context influence the ways people respond to me in the interviews (Denzin & Lincoln, 2005); some participants have heard me speak or read my work before. Everyone I interviewed was someone who had interacted with me in person and online. Although my colleagues at AUC have worked with me more closely, some of my South African participants have known me longer and have also interacted with me closely.
This research did not aim to understand how AI was used in education in the two African institutions involved, but rather to ask:
- How do teachers and academic developers on the ground use AI, and how do they perceive other potential uses?
- Which problems does AI solve or what opportunities does AI offer teachers? (given particular applications of plagiarism detection, automated essay grading, teacher bots, speech recognition and automated translation)
- When made aware of potential bias in AI solutions, how does this influence educators' attitudes towards AI and its uses in education?
Selection of participants was necessarily a voluntary convenience sample because of time constraints. All participants are from either my home department (the Center for Learning and Teaching at the American University in Cairo, Egypt) or its equivalent at a leading research-intensive South African university, which I will hereafter refer to simply as SAU to keep the university's name confidential. I recruited volunteers by emailing both groups, and I interviewed all those who were willing and able to participate within a short time frame of two weeks (during the winter semester in South Africa, and at the end of summer and the start of the fall semester in Egypt).
I followed a semi-structured interview format, which took participants through four phases as follows:
- Participants were introduced to what AI and machine learning were, and asked about their general attitudes towards them, and whether they were aware of any uses in their everyday lives and in their teaching.
- Participants were then asked to comment on five specific AI-based educational applications, including describing their experiences with each, and elaborating on the benefits and potential costs/problems with each:
- Plagiarism detection such as Turnitin.com
- Teacher bots that can replace teaching assistants in online learning
- Automated essay grading as is done in some standardized testing
- Voice recognition software
- Automated translation
- In order to help participants see how biases can develop with machine learning, I used an activity. Participants were asked if they had played QuickDraw (http://quickdraw.withgoogle.com), a game Google developed that uses pattern recognition (a form of machine learning) to learn how humans doodle certain pictures. If they had never played it before, participants were asked to play one round, which involved drawing, in 20 seconds, a doodle representing the word the game showed them (one round consists of six such words). They were then prompted to look at the results. When they got one “wrong”, we looked together at what Google considered “right” based on previous players’ doodles, and we talked about how pattern recognition worked. I then asked participants whether they could see the ways bias can be built into such algorithms: in the way they ask questions (e.g. the particular words chosen to ask people to draw, and the potential for cultural misunderstanding and confusion: is a bat an animal or a baseball bat? Is camouflage the way the color of a butterfly blends in with flowers, or an army uniform?) and in the way they collect data (the possibility of male or Anglo dominance in the data collected, such that Western/Anglo conventions dominate, e.g. angels drawn with halos and wings, or hospitals with crosses). I captured participant reactions to this. Participants were then informed of other instances of bias in machine learning, such as racial bias in facial recognition (Buolamwini, 2016), gender and race bias in Google search (Noble, 2018), bias in the US justice system (O’Neil, 2016), and how YouTube recommendations surface increasingly extremist content (Tufekci, 2016; 2018).
- Participants were then asked whether ethical issues in AI influence their attitudes towards uses of AI in education, and where they imagined it would be beneficial or harmful.
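For readers curious how the bias dynamic in the QuickDraw activity arises mechanically, here is a toy Python sketch (emphatically not Google's actual model; the labels, feature names, and data are invented for illustration) in which the majority drawing convention in crowd-sourced training data becomes the recognizer's notion of a "correct" doodle:

```python
# Toy sketch of bias in a pattern recognizer: doodles are reduced to
# feature sets, and a new doodle is scored by its overlap with the
# features that appear in a majority of the training doodles.
from collections import Counter

# Hypothetical crowd-sourced training doodles for the prompt "hospital".
# Most contributors drew the Western convention: a building with a cross.
training = {
    "hospital": [
        {"building", "cross"},
        {"building", "cross", "door"},
        {"building", "cross", "windows"},
        {"building", "crescent"},  # one non-Western drawing
    ],
}

def score(label, doodle_features):
    """Fraction of the doodle's features that are 'typical' for this label."""
    counts = Counter(f for doodle in training[label] for f in doodle)
    n = len(training[label])
    # A feature counts as "typical" only if a majority of training
    # doodles contain it, so minority conventions are filtered out.
    typical = {f for f, c in counts.items() if c / n > 0.5}
    return len(doodle_features & typical) / len(doodle_features)

# A doodle following the majority (cross) convention scores perfectly...
print(score("hospital", {"building", "cross"}))      # 1.0
# ...while an equally valid crescent-marked hospital scores half,
# simply because the crowd that supplied the data mostly drew crosses.
print(score("hospital", {"building", "crescent"}))   # 0.5
```

The point of the sketch is that nothing in the code is malicious: the skew comes entirely from who contributed the training data, which mirrors what participants observed when their own doodles were judged against previous players'.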
In order to analyze the data within a short timeframe, I took notes during the interviews themselves, highlighting parts that seemed “quotable”, and then listened to each interview once more and filled in gaps in my notes. I then looked at each interview question and at all participant responses to it. Where there were common themes, I mentioned those, and where one participant said something different, I mentioned that as well. I picked out quotes from parts where participants made particularly eloquent or detailed statements that helped get their point across.
I share my positionality as a researcher and my purpose as a critical educator here in order to be explicit with readers about how my background influenced my research approach and interpretation of the data.
My undergraduate degree was in computer science, and my undergraduate thesis centered on neural networks. I am therefore intimately familiar with how machine learning works, inasmuch as I understand how to design neural networks, train and test them, and optimize them with genetic algorithms. Since graduation, however, I have not done any programming beyond very basic small code, and I have since completed a Master's and a PhD in education. I have been at the Center for Learning and Teaching since 2003, and the bulk of my work involves supporting others in their teaching. However, I also have my own scholarship and teaching, including in the area of digital literacies. In my course on digital literacies, I take a critical feminist perspective (Author, redacted, 2019), part of which involves looking critically at advances in Artificial Intelligence and its pros and cons, particularly from a social justice perspective. I teach students about empathy, bias, and equity, and when we learn about AI, we analyze it from an ethical standpoint. It is my contention (see Author, redacted, 2017) that some uses of AI in education are driven by the IT industry rather than by the pedagogical needs of teachers or the best interests of students, and I wanted to see what other educators' attitudes would be at first, then to understand whether learning about ethical issues in AI outside education might modify their perspectives on the potential of AI in education.

In selecting quotes to include in this article, I struggled to represent the variety of voices and opinions without sacrificing readability. Where several participants agreed on a perspective I could easily paraphrase, I did not quote. Occasionally, a radical or minority view was so well expressed by a participant that I felt compelled to quote parts of it.
I was concerned that quotes agreeing with my own viewpoint may have ended up quoted more often, but I was also aware of how my own perspective on the issues grew more complex as I conducted the interviews. I felt this during the interviews themselves, while relistening to them to take more accurate notes, and while putting the ideas from the different interviews together to form the findings section.
That’s it for now! Let me know what you think of the methodology!