I’ve been having a lot of private conversations around proctoring and surveillance, and we’ve been having a lot of conversations on my campus about academic integrity. Thankfully, in many of these conversations, there are people other than me who bring up the importance of centering care and pedagogy and equity in all of this, and some who see the big picture of developing ethical citizens and not just “catching cheaters” – treating the cause, modifying our own systems before blaming students for subverting them. There have also been several incidents on Twitter of invited speakers refusing to speak at conferences sponsored by AI proctoring companies, and more recently educators refusing to participate in Twitter chats by a company that is problematic in many ways but also partners with an AI proctoring tool [I dislike naming them because it’s advertising, and it’s not my main point]. This blogpost builds on several things: my blogpost after #OLCInnovate about changing the conversation around cheating, my blogpost ahead of the AgainstSurveillance event, and the panel below.
I was also recently on a panel at the CPDP conference, moderated by John Davisson (thanks Bonnie Stewart for recommending me for this panel) along with Meg Foulkes, Sofie van Londen and Lydia X.Z. Brown. Here is the recording in case you’re interested:
In any case, in a recent private conversation, I responded with something that someone encouraged me to blog about. I’ll just provide a small background to this, using stories I’ve heard – and tie it to a story in the video above.
First of all, when we say that AI proctoring has the potential to harm some students, this does not mean that other students will object to it. But why do we even develop a technology that occasionally harms some students? Shouldn’t education have a similar mantra to healthcare, “do no harm”? (Cory Doctorow makes this point in his contribution in the Against Surveillance panel – and you can still go there and contribute to support Ian Linkletter).
Second, for some students, there are circumstances that require them to undergo proctoring, because some accreditation agency or institution requires they take a proctored exam to qualify for a certain certification. Some students have no option to do this exam in person, especially with COVID, so if you tell them proctoring is their only option, they will take it. This does not mean you have their consent; it means they have complied, they’ve been coerced. They may not even be objecting inside, because all their lives they’ve had to deal with things like this. It’s kind of like… as a Muslim, when I get selected for a “random check” at the airport (almost every other time I travel without my family to a Western country)… I can choose to notice it for what it is: not random; or I can just take it in stride, because all my life it’s been happening to me, so I don’t notice it as something different. Actually, one time the white Western woman next to me (a friend) was chosen for a random check, and I REALLY noticed it! I actually felt it was someone trying to tell me “see, I see you there with the headscarf, but I chose to randomly select HER. So random, right?”. I think I laughed out loud when it happened. ANYWAY.
Third, in the panel video above, Meg Foulkes says she refused to sit for a bar exam in the UK because they used AI proctoring, and she advocated against it. I love that she took that stance and that she advocated against it. I also notice, very clearly, that this is an excellent way to use one’s privilege. But it cannot be done without privilege. She could afford to postpone her exam. She could afford to speak up. People who are most harmed by this kind of technology are less likely to have the privilege to refuse and object. People with privilege, who are less likely to be harmed, have a responsibility to refuse, so that it is not always a “minor problem for the 1% or 5% or 10%” that for some reason policymakers don’t take seriously enough.
So here are the points I made in a private convo that someone suggested I blog:
Whenever someone (especially a marginalized student/person) “accepts” proctoring without objecting to it:
A. Do they know the potential harms of online proctoring, to themselves and to others?
B. Are they given an alternative to online proctoring that is within their reach and equally valid, like an alternative assessment that tests a similar level of knowledge, which they’ve practiced before? [Because let’s face it, suddenly doing a difficult-to-cheat authentic assessment or open-book take-home exam… students need practice before they would choose that over an MCQ.]
C. Have they understood that their own acceptance of proctoring could harm others less privileged than themselves?
D. Do they recognize the long-term impact of normalizing surveillance on humans, beyond this one COURSE and into LIFE in the world?
Perhaps if they had all the information from A-D, they could make an informed decision, beyond their internalized and ideological oppression [and this probably takes years to undo, not just writing a blogpost about it or having a 10-minute conversation on it]. Because remember that they’ve internalized, if they’re a minority, that life will always be harder for them anyway, that they will have to make more sacrifices than others to reach the same goals. Society as a whole has an ideology that not everyone needs to have access to the same education (at certain levels, depending on the society) or the same opportunities to work (again, some social systems are much better at this, but the elements of gender and race and disability and neurodivergence and such are almost always there at one level or another).
It is important to recognize that for B to happen, systemic change needs to happen; at the very least, institutions should be investing in humans, such as what the University of Michigan-Dearborn did by paying human graders to support their faculty in developing authentic assessments, which take a lot of time to grade. Other investments include faculty developers’ time to support faculty in designing these kinds of assessments, and rewarding faculty for doing them… and then hopefully, you’ve spent your money wisely on human solutions that promote equity and care, and you recognize you don’t need to pay for technology to oppress and surveil your students. Getting over the ideological oppression of even using exams for assessment, with all the inequalities they bring — but that’s another post.
I know it’s not that simple, but that is what is on my mind today.
I want to thank Autumm Caines especially for always keeping me thinking about this and helping me notice what I may overlook… and the entire Continuity with Care group for being a space for all of us to think and question and learn.