In case you missed my first post discussing this process: I am blogging an unpublished paper as a series of posts over several days. You can read that post for the backstory and my reasons for doing this. Comments on each post are welcome. This is the tenth and last post, covering the discussion, limitations and further research, and conclusion.
Artificial Intelligence as a technology has been developing, as have its applications in education and in the mainstream. According to Holmes, Bialik and Fadel (2019, citing Spohrer's IBM diagram),
AI is now able to perform pattern recognition, video recognition, memory and some reasoning work, but is expected in the near future to take on social, relationship-building roles, and later, mentoring and coaching roles. Interviewees in my study recognized the potential usefulness and some of the limitations of current AI, but they did not foresee AI becoming good at the relationship-building and coaching side. They see this as the purview of human teachers, and it does not seem to be a problem they wish for AI to solve. SAU1 and SAU3 specifically suggested that the way forward for ethical AI should stem from existing problems in education: areas where AI can do better than humans, or where it can take menial tasks away from humans. I would contend that relationship-building and coaching are amongst the most rewarding of human behaviors, and I do not imagine teachers need help with them, except perhaps in extreme cases of mental or physical illness where human contact might (temporarily) be harmful (although even then, I don’t imagine this is the best direction to go, because you can still put a human being on the other side of the screen).
Some participants seemed to recognize that AI was on the verge of reaching fluency in things like automated translation, and seemed eager to see how this would go, but others were skeptical of AI ever becoming good enough to detect nuances and contexts in language without creating potentially risky misunderstandings. Several of them felt that a second layer of human touch or moderation was needed in all language-based AI, such as automated essay grading, translation and speech recognition, but several of them seemed comfortable with using the AI to support teachers as a first step.
There were common threads across participant responses to the different questions asked. Most participants had extensive firsthand experience with Turnitin.com, some had experience with automated translation and voice recognition, but most had no firsthand experience of automated essay grading or teacher bots.
Positive reactions to tools like Turnitin.com and teacher bots centered on the efficiency and productivity gains, while negative attitudes towards the two tools differed slightly. With Turnitin, participants felt human judgment was needed to catch mistakes by the system. With teacher bots, some participants worried about correctness and felt a human should supervise, while others thought there would still be a need for a second teaching assistant to cover questions and needs more complex than what a bot could handle. Only one person brought up the affective element humans bring to online teaching.
It was interesting that some SAU participants, and one non-Egyptian AUC participant, brought up regulation, whereas Egyptian participants did not. The Egyptian context is one where regulations are, in general, rarely enforced, except perhaps when they are in the interests of national security. Maybe (I don’t know for sure) regulations are more widely respected in South Africa, and perhaps there is a history of successful attempts at democratic processes for enacting and enforcing them. It is interesting that the South African tech entrepreneur Elon Musk has been quoted (in Holmes et al., 2019, p. 8) as saying:
I have exposure to the very cutting-edge AI, and I think people should be really concerned about it… AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late. —Elon Musk
Although I analyzed my interviews with grounded coding, I thought it would be interesting to apply the SAMR framework (see Puentedura, 2015), which distinguishes uses of technology in education that are for enhancement (S=Substitution, A=Augmentation) from those that are for transformation (M=Modification, R=Redefinition). Applying the framework to the interview responses, I found that the majority of respondents saw AI as either substituting for or slightly augmenting teaching, particularly in menial, boring tasks, but no one saw it as reimagining learning processes in more radical ways. A few did imagine it transforming the admissions process to solve existing difficult problems (SAU3, AUC4), and one person (SAU2) felt that using AI might provide a “mirror” for us to rethink what teaching means. At the moment, it is unclear how much of these attitudes, which relegate AI to the more menial tasks and lean towards keeping humans as moderators or second eyes, relate to the quality of existing AI, and how much relates to deeply held beliefs about what teaching is or should be, and about which elements require human intervention on ethical and moral grounds, regardless of the efficacy of the technology. Exploring this would require further investigation.
Limitations and Recommendations for Further Research
One limitation of this study is the small number of participants, drawn from only two elite universities that do not represent the entire populations of Egypt and South Africa, and the fact that the majority were women. This occurred because the majority of staff at both centers were women to begin with, and because of the short time frame I had between gaining ethics approval and the deadline to submit the paper: these were the people who agreed to be interviewed and gave me 30-60 minutes of their time on short notice.
There was also an imbalance: the Egyptian group included several language teachers and more junior people with master’s degrees, while the South African group had more diversity of backgrounds and some more senior participants. The restriction to two specific institutions, and to participants who were all academic developers of some kind, was intentional, to create some commonality within the group interviewed, since participants otherwise differed in the country and culture in which they worked, and in contextual factors such as class size (the South African institution had larger classes of 500 or more, whereas the Egyptian institution mostly had smaller classes of 20-40) and the type of university (a research university in South Africa versus a liberal arts institution in Egypt). I intentionally limited participants to these two institutions in order to see how the specific institutional contexts influenced their perceptions of AI, and it did, for example, with regard to automated essay grading.
Further study might include academics who mainly teach in the disciplines instead of just academic developers. It would be interesting to see how academics in computer science and engineering perceive AI versus academics who teach social sciences, humanities, business, or arts. It would also be interesting to include a few more institutions from the two countries in order to see which attitudes seem common within a country, versus which ones seem common within certain institution types (e.g. elite, or research-based versus less well-resourced institutions).
To summarize, participants’ attitudes to AI were positive where they believed that AI could perform some aspects of education reasonably well while saving time and effort for humans, where it would remove the more menial tasks from humans, or where it might allow things to be done with large data or at scale that were not previously possible with limited human intellectual resources.
“Highly-publicized projects like Sophia convince us that true AI — human-like and perhaps even conscious — is right around the corner. But in reality, we’re not even close… The true state of AI research has fallen far behind the technological fairy tales we’ve been led to believe. And if we don’t treat AI with a healthier dose of realism and skepticism, the field may be stuck in this rut forever.” Dan Robitzski (2018, n.p.)
It seems that several of those interviewed already had low expectations of what AI could offer, given its limitations, particularly in areas involving language (e.g. automated essay grading, automated translation) without human intervention. The more negative attitudes often stemmed from the belief that “AI cannot do this thing well,” that it “cannot do it as well as a human,” or that it could not be trusted to work without human supervision.
On the other hand, reservations about AI varied, from concerns that AI could not do things as well as a human could (something which could, theoretically, “improve” in the future), to concerns about inherent biases in the system (which may or may not have solutions in the near future) and the need to ensure fairness to students. Several participants raised concerns about the hegemony of Western/Northern knowledge being reproduced via AI; one way suggested to address this is for educators and researchers from the global South to get on board and start to have a role in shaping the future of AI, rather than resisting it.
There were also concerns about data privacy: who collected the data, and what they were allowed to do with it. A final concern was the importance of the human touch in education and the impact of this affective interaction on learning, which cannot and should not be replaced by the rationality of a machine. I will end with this quote from Audrey Watters:
“We should ask what happens when we remove care from education…. What happens to thinking and writing when… the whole educational process is offloaded to the machines—to “intelligent tutoring systems,” “adaptive learning systems,” or whatever the latest description may be? What sorts of signals are we sending students?” —Audrey Watters (2015, n.p.)
- Availability of data and material: IRB (ethics) approval requires the data to be kept confidential and not made public. If the journal requires data to be shared, note that I did not get informed consent for this prior to conducting the interviews; could I get consent after the fact if necessary?
- Funding: N/A; no funding was received.
The author would like to thank all participants in the research for their contribution. The author also extends special thanks to Paul Prinsloo with whom she had brainstormed some of the ideas of this paper before conducting the research.
That’s it, that is all, thanks for coming along for the ride!
Header photo from Pixabay – I wonder what birds say to each other when they discuss? I wonder why they call it a discussion section when really it is just the author(s) talking to themselves… shouldn’t it be called analysis or something?