Blogging an Unpublished Paper: South African & Egyptian Academic Developers’ Perceptions of AI in Education Part 2: Introduction & Literature

In case you missed my first post about this, I am blogging an unpublished paper as a series of posts over several days. You can read that post for the story and the reasoning behind this; comments on each post are welcome. This is the second post, covering the introduction and literature review (remember, this was first written in 2019). The previous post had the abstract and references, so go back there for the references if you’re interested!

Introduction

The potential uses of Artificial Intelligence, and particularly Machine Learning, in education have been gaining traction in the media. We often hear the perspectives of the designers, often computer scientists or corporations, and more recently we hear from groups working towards a more ethical AI, often at large research institutions like MIT, Harvard and Stanford. But how do educators actually use AI, and how do they perceive other potential uses? There is a gap in the literature on educators’ experiences on the ground, which would show whether AI indeed solves problems or creates opportunities for enhancing teaching and learning. This article focuses on three applications of machine learning in higher education (plagiarism-detection systems such as Turnitin.com, automated grading, and teacher bots) and two others, speech recognition and automated translation. Faculty/academic developers at two African universities were interviewed about their actual experiences with Turnitin.com and its benefits, costs and challenges, from both a pedagogical and a logistical perspective. They were also asked about their attitudes and perceptions towards the potential of teacher bots, automated grading, speech recognition and automated translation, and what these mean for the future of the teaching profession. Finally, they were asked explicitly about ethical issues in AI and how these influence their perception of the future of AI in education.

Why the academic developer perspective from African universities?

This article focuses on an African perspective because some applications of AI have huge potential for global South or less-wealthy countries, but also raise ethical concerns that are exacerbated for these same countries. I chose two elite institutions in Egypt and South Africa so that I could find academic developers in relatively well-resourced institutions who would have had the opportunity to experience some of the AI applications first hand, or to have seen them used by others. Because they are mainly academic developers rather than lecturers, they are likely to be more up to date on the latest trends in education and educational technology than your average university lecturer. I know this because both of the centers involved have a digital education focus in some of their projects.

Literature on Artificial Intelligence and Machine Learning in Education

The Manifesto for Teaching Online (Bayne et al., 2016) by the University of Edinburgh includes the items:

  • Algorithms and analytics re-code education: pay attention!
  • A routine of plagiarism detection structures-in distrust.
  • Online courses are prone to cultures of surveillance. Visibility is a pedagogical and ethical issue.
  • Automation need not impoverish education: we welcome our new robot colleagues.

I chose the above items because they are all relevant to the topic of AI in education: first, because AI uses algorithms; second, because plagiarism detection is one of the most prevalent forms of machine learning in education, one which educators seem to use frequently; third, because large-scale data collection is part of what makes machine learning possible, and so the surveillance risk increases; and finally, because they speak of automation and robots. It is interesting how the manifesto warns us to pay attention to how algorithms and analytics influence education, how plagiarism detection fosters mistrust, and how online courses are prone to surveillance, but then writes about automation as something with positive potential. This manifesto is interesting in how it encourages us to question utopian and technopositivist views of digital education while at the same time discouraging extremely dystopian perspectives. It also encourages nuance, and this is why I do not discuss AI in education in a general way, but rather discuss specific applications for specific populations.

Literature on AI Uses in Education

Rather than review all that has been written about AI in education in a general way, I will focus on the particular applications I raised in my interviews with educators, in order to make the conversation more concrete and get their reactions to these specific applications: plagiarism detection such as Turnitin.com, teacher bots in online courses, automated essay grading, speech recognition, and automated translation. My own literature review is not exhaustive, but touches upon some of what has been written about these applications. I intentionally did not ask about adaptive learning and learning analytics because, even though these could potentially use machine learning, my understanding from speaking to people who work on them is that they often do not.

My department on campus administers Turnitin.com, the leading plagiarism-detection tool. I used to give workshops to faculty members at my institution who use it in their teaching, showing them the nuances of interpreting the reports and discussing with them ways to design their use of it so it has pedagogical value for students, rather than being used as a technical, punitive tool. However, I have not used it in a long time, mainly because I feel that I can design assignments that don’t leave much room for plagiarism, and I feel that I should teach my students the purpose behind citation and the value of connecting their work with others’ work while offering their original contributions, and so I don’t feel the need to use a tool like Turnitin to “catch” them. I have spoken about the importance of educators questioning the ethos of a tool like Turnitin and what kind of values it relays to students (Groom & Bali, 2016).

Morris & Stommel (2017) have written critically about this tool. Their main concern is that this private company profits off of students’ intellectual property, without their permission or consent, as their universities force them to use it. They also critique the software for harming the relationship between teachers and students, turning learners into “potential plagiarizers” and cultivating a culture of mistrust. Morris & Stommel (2017, n.p.) argue that this tool is “actually handicapping teachers from exercising their pedagogical agency”. It does so by taking the complex pedagogical process of teaching students to reference other people’s words and ideas responsibly and reducing it to the technical act of detecting poor paraphrasing, while ignoring the important values behind ethical citation practices and ignoring all other forms of plagiarism. Students start to believe that “avoiding plagiarism” means avoiding having Turnitin.com detect their text as “similar” to other text it can find, which all educators know is only a small portion of what plagiarism is.

On the other hand, I understand the dilemma of teachers who have to teach very large classes of over 100 students, or who have to mark a large number of essays per semester across three or four classes. Although these teachers may be trying hard to teach ethical citation practices, Turnitin can be a useful tool for showing students quickly how their paraphrasing needs to be improved. Doing so sentence by sentence for every individual student may simply be impossible in the limited time the (probably overloaded) teacher has.

Another application of AI in education that seems to stem from the dilemma of grading students’ writing in large classes is automated grading of writing, which started in 1966 and has improved over the years as developments in automated language processing allow for feedback on content and not just automated scoring (Li, Link, & Hegelheimer, 2015). The study by Li et al (2015) assessed the efficacy of a particular tool, Criterion, in enhancing the writing of ESL students from first draft to final draft, examining instructor and student perspectives. Such tools are able to offer feedback on linguistic (e.g. grammar) and rhetorical (e.g. organization of content) dimensions, but it remains controversial whether such feedback enhances students’ writing or focuses it on technical dimensions, and whether alignment with human graders is a sufficient criterion of quality (Li et al, 2015). Warschauer and Grimes (2008) compared different tools for automated grading in four schools in the US, and found that the feedback from them led mainly to word- and sentence-level revisions but little more. It is important to recognize that automated grading of writing is based on the assumption that WCF (written corrective feedback) is useful for improving L2 writing (see Li et al, 2015), and that applying standard categories may offer more consistency in feedback than human graders. However, it is unclear whether writing for a machine influences the way students write in the first place.

Another application meant to make better use of teachers’ time is teacher bots in online courses. Bayne’s (2015) work on teacher bots recognizes the promises of efficiency made by the idea of teacher bots in digital education, and that educational institutions have at times embraced and at other times resisted them. She claims that attitudes towards teacher bots are rooted in humanistic thinking, and Bayne’s article, like the experiment with a teacher bot in EDCMOOC (the University of Edinburgh’s E-learning and Digital Cultures MOOC), emphasizes that “our response to automation need not be an uncritical re-statement of the centrality of humanism for education” (Bayne, 2015, p. 465). Rather than being uncritical in our approach to bots, Bayne suggests the main question we need to grapple with is “How can we continue to value teaching within an algorithmic culture defined by the new potentials of computation and digital data?” (Bayne, 2015, p. 465). In the EDCMOOC experiment, learners knew they were dealing with a bot, but I have heard, in a conference presentation a few years ago (OEB?), of experiments where students were not explicitly told that one of their Teaching Assistants was a bot.

Two other AI applications that depend on natural language processing are speech recognition and automated translation. These are not educational tools per se, but they have potential uses in education, as we shall see in the findings of this study. I chose these five uses of AI in education because:

  1. Plagiarism detection is something many academics have used as students, researchers or teachers, and plagiarism is an important topic at my institution in Egypt and around the world.
  2. While I did not expect participants to have experience with teacher bots and automated essay grading, I expected them to have a preliminary perspective on how they would feel about the idea of them.
  3. I included speech recognition and automated translation because I expected that participants would probably have used them in one way or another in their daily lives, whether personal or professional, and might know of students who had used them.

That’s it for now – wanna suggest more things I could have included in the literature review? Leave them in the comments! I honestly can’t believe I don’t have Ben Williamson referenced here, because I know I had been reading his work for a long time! Many good things came later, especially with reference to facial recognition, proctoring, emotion AI, and automatic writing/paraphrasing.

Featured image from Pixabay

3 thoughts on “Blogging an Unpublished Paper: South African & Egyptian Academic Developers’ Perceptions of AI in Education Part 2: Introduction & Literature”

  1. Jim Stauffer says:

    Your account of how you taught faculty to use Turnitin as a pedagogical tool rather than punitive policing really intrigued me. I would like to hear more about that, because I think it’s important to offer people alternatives in how they use some of these surveillance tools rather than just put them on the defensive about it.
    Whenever I now hear about AI, my mind goes to the “Ethics Analytics and the Duty of Care” MOOC [https://ethics.mooc.ca/course_outline.htm] Stephen Downes offered in the fall of 2021. Not sure how it fits the lit review for your narrower look at how academic developers used AI, but it is a very thorough examination of how machine learning works and the ethical mechanisms we use in making decisions about how we use AI, including how we use it in education.

    1. Maha Bali says:

      I remember hearing about it but not being able to engage. Will look again!

  2. Reviewer #2 says:

    In a “Reviewer voice,” I might gently suggest that the literature review and list of references both indicate a novice level of knowledge. The references from the educational literature—the only ones in your references that are pre-2010—are from general or introductory sources. None of your references on AI in education pre-date 2010, which suggests a need for more historical context. In your list of sixteen references, you cite two popular-readership books (Noble and O’Neil) and two refereed articles, neither from journals in areas related directly to your research question. Beyond those sources, the rest are blog posts, TED talks, and other forms of knowledge that are perhaps suited to discussion in a popular article, but are not scholarly. In some cases—for example, the Morris & Stommel blog post—this results in erroneous claims or arguments, based upon those authors’ mistaken or incomplete readings of their sources. One advantage of citing scholarly sources is that lists of references allow one to directly trace where ideas come from, and thereby reduce the propagation of error. As a reviewer, I often skip first to a submission’s list of references, so I have an idea where the author’s coming from. In this case, I would already be suspecting that the submission’s claims will be somewhat vague and perhaps unaware of the substantial broader context and history of the argument—which means it will likely restate claims and arguments that have been made elsewhere without awareness of the counterclaims and counterarguments.
