In case you missed my first post, I am blogging an unpublished paper as a series over several days. You can read that post for the story and the reasoning behind this. Comments on each post are welcome. This is the sixth post, covering Attitudes Towards Teacher Bots, part of the Findings section. Part 1 covered the abstract/references, Part 2 the intro/lit review, Part 3 the methodology/positionality, Part 4 the Findings: General Attitudes Towards AI, and Part 5 Attitudes Towards Turnitin.
Attitudes Towards Teacher Bots (e.g. in Online Courses)
The majority of participants welcomed teacher bots in online courses as a way to save humans the time of answering routine and menial questions, and to save the cost of one teaching assistant, but all agreed there still needed to be a human answering questions alongside the bot. One (AUC2) mentioned the importance of the teacher supervising the bot, in case it made mistakes or gave incorrect information. Another (AUC1) suggested a second TA to answer more complex questions, and AUC3 felt a human was needed for the more complex and affective roles, such as the “mentoring” a teacher or teaching assistant performs online; the main tasks were not the routine answering of questions, but the more difficult work of supporting and motivating students, “to trust and build a connection… seek engagement”, all of which required a “personal touch”. One important point SAU2 made reads like a response to this view, and reminds me of Bayne’s (2015) writing about bots:
“People talk of the fourth industrial revolution as depersonalizing teaching and learning. [But] I am not convinced. You can have depersonalized learning with a teacher in front of you… depersonalization is not attached to machines only.” (SAU2).
SAU2 had no experience with teacher bots, but her first reaction was “I’m open to that, to learning and working with machines”. She felt one of the benefits of technology was “a form of ease in terms of communication, opens up opportunities to communicate”, and that using it might help us rethink how we learn and teach, serving as a “mirror” to help us do so.
SAU3 suggested that teacher bots would be “of limited use” but still “not unuseful”, as online learning sometimes needed 24/7 support that humans would be unable to maintain. However, we should “continually guard against” using them to completely replace teaching assistants, for example as a cost-saving measure. SAU1 and AUC1 also expressed concern that policy makers might misunderstand the capacity of teacher bots, “start to believe that this can replace the real part of teaching” (SAU1), and cut budgets for teaching assistants (AUC1).
SAU4 said their institution was considering piloting a bot but had not done so yet. They had used one before (for nudging) within a MOOC platform, but had no control over how it worked. Here, she highlighted “the difference between that… [is] taking something from another entity, you don’t know what the programming or algorithm is… what is really important for units like ours is to develop that expertise to do our own or specify, even if you get somebody else to build it, specify how you might build it.” So control over the tool seemed important, even when building such tools was not the core of what they do. When asked why they wanted to try a bot, she said innovation was part of what defined their department, so they should always be willing to try new things like these.
It seemed that, for the most part, participants agreed with the Manifesto for Teaching Online that “automation need not impoverish education”.
AUC4 had a different perspective and was “not convinced [a teacher bot is] providing value”, as it would only be able to answer questions “semantically”, without knowledge of context. Since these bots were expected to be used in MOOCs and the like, they felt, the majority of learners would already be autonomous, self-motivated, non-novice learners, and thus less likely to need such a bot. They also asked whether the bot would have access to student data, and what the ethical implications of that would be. SAU1 also raised the concern about data collection and ownership, but could see potential value to students in teacher bots.
SAU5 was opposed to teacher bots for a different reason. She cited Paul Ricoeur’s idea that “we need to suffer to progress”, and argued that students need to learn to look for answers before getting an immediate, easy answer from a bot or a human. She talked about the slow process of interpreting data while doing one’s PhD, and the need to move away from “instant learning… real time… right now”, as this may not really promote good learning. She noted the irony of discussing “deep learning for machines but no deep learning for humans any more”.
SAU1 mentioned how “doing things appropriately and developing a nuanced approach is much more effort and people want generic solutions”, but she would welcome the use of teacher bots if done in a nuanced way.
That’s it for now! What’s the latest on teacher bots? I’m not keeping track! What came up during the pandemic? What about emotion AI?
Featured image from Pixabay (though I like the little robot students that the teacher bot is teaching! I mean, we talk about “representation” all the time, of seeing teachers and leaders who look like us, and if you’re going to have teacher bots, then they probably need student bots to learn from them!). This one is more along the lines of what you’d expect (not a metaphor), but it reminded me of something funny my kid told me yesterday. When I’m not logged into Pixabay, the Captcha asks me to confirm I am not a robot, and my kid said, “are they sure the robot can’t just tick that?” She has a point… some day someone could invent a bot to do that, and even answer those weird image questions, because image processing/recognition is a thing (flawed, but it exists, and will ultimately improve someday, no?).