I’ve been having a lot of private conversations around proctoring and surveillance, and we’ve been having a lot of conversations on my campus about academic integrity. Thankfully, in many of these conversations on my campus, there are people other than me who bring up the importance of centering care and pedagogy and equity in all of this, and some who see the big picture of developing ethical citizens and not just “catching cheaters” – treating the cause, modifying our own systems before blaming students for subverting them. There have also been several incidents on Twitter of invited speakers refusing to speak at conferences sponsored by AI proctoring companies, and more recently educators refusing to participate in Twitter chats by a company that is problematic in many ways but also partners with an AI proctoring tool [I dislike naming them because it’s advertising, and it’s not my main point]. This blogpost builds on several things: my blogpost after #OLCInnovate about changing the conversation around cheating, my blogpost ahead of the AgainstSurveillance event, and the panel below.
I was also recently on a panel at the CPDP conference, moderated by John Davisson (thanks Bon Stewart for recommending me for this panel) along with Meg Foulkes, Sofie van Londen and Lydia X.Z. Brown. Here is the recording in case you’re interested:
In any case, in a recent private conversation, I responded with something that someone encouraged me to blog about. I’ll just provide a bit of background to this, using stories I’ve heard – and tie it to a story in the video above.
First of all, when we say that AI proctoring has the potential to harm some students, this does not mean that those students will object to it. But why do we even develop a technology that occasionally harms some students? Shouldn’t education have a similar mantra to healthcare, “do no harm”? (Cory Doctorow makes this point in his contribution to the Against Surveillance panel – and you can still go there and contribute to support Ian Linkletter).
Second, for some students, there are circumstances that require them to undergo proctoring, because some accreditation agency or institution requires they take a proctored exam to qualify for a certain certification. Some students have no option to do this exam in person, especially with COVID, so if you tell them proctoring is their only option, they will take it. This does not mean you have their consent; it means they have complied, they’ve been coerced. They may not even be objecting inside, because all their lives they’ve had to deal with things like this. It’s kind of like… as a Muslim, when I get selected for a “random check” at the airport (almost every other time I travel without my family to a Western country)… I can choose to notice it for what it is: not random; or I can just take it in stride, because all my life it’s been happening to me, so I don’t notice it as something different. Actually, one time the white Western woman next to me (a friend) was chosen for a random check, and I REALLY noticed it! I actually felt it was someone trying to tell me “see, I see you there with the headscarf, but I chose to randomly select HER. So random, right?” I think I laughed out loud when it happened. ANYWAY.
Third, in the panel video above, Meg Foulkes says she refused to sit for a bar exam in the UK because they used AI proctoring, and she advocated against it. I love that she took that stance. I also notice, very clearly, that this is an excellent way to use one’s privilege. But it cannot be done without privilege. She could afford to postpone her exam. She could afford to speak up. People who are most harmed by this kind of technology are less likely to have the privilege to refuse and object. People with privilege, who are less likely to be harmed, have a responsibility to refuse, so that it is not always a “minor problem for the 1% or 5% or 10%” that for some reason policymakers don’t take seriously enough.
So here are the points I made in a private convo that someone suggested I blog about:
Whenever someone (especially a marginalized student/person) “accepts” proctoring without objecting to it:
A. Do they know the potential harms of online proctoring, to themselves and to others?
B. Are they given an alternative to online proctoring that is within their reach and equally valid, like an alternative assessment that tests a similar level of knowledge, which they’ve practiced before? [Because let’s face it, suddenly doing a difficult-to-cheat authentic assessment or open-book take-home exam… students need practice before they would choose that over an MCQ]
C. Have they understood that their own acceptance of proctoring could harm others less privileged than themselves?
D. Do they recognize the long-term impact of normalizing surveillance on humans beyond this one COURSE and into LIFE in the world…
Perhaps if they had all the information from A–D, then they could make an informed decision, beyond their internalized and ideological oppression [and this probably takes years to undo, not just writing a blogpost about it or having a 10-minute conversation on it]. Because remember that if they’re a minority, they’ve internalized that life will always be harder for them anyway, that they will have to make more sacrifices than others to reach the same goals. Society as a whole has an ideology that not everyone needs to have access to the same education (at certain levels, depending on the society) or the same opportunities to work (again, more socially oriented systems are much better at this, but the elements of gender and race and disability and neurodivergence and such are almost always there at one level or another).
It is important to recognize that for B to be possible, systemic change needs to happen; at the very least, institutions should be investing in humans, such as what the University of Michigan-Dearborn did by paying human graders to support their faculty in developing authentic assessments, which take a lot of time to grade. Other investments include faculty developers’ time to support faculty in designing these kinds of assessments, and rewarding faculty for doing them… and then hopefully, you’ve spent your money wisely on human solutions that promote equity and care, and you recognize you don’t need to pay for technology to oppress and surveil your students. Then there’s getting over the ideological oppression of even using exams for assessment at all, with all the inequalities they bring – but that’s another post.
I know it’s not that simple, but that is what is on my mind today.
I want to thank Autumm Caines especially for always keeping me thinking about this and helping me notice what I may overlook… and the entire Continuity with Care group for being a space for all of us to think and question and learn.
Really interesting read! For me, this is a two-pronged issue, with concerns on each prong.
On one hand, these proctoring programs ask for pretty powerful permissions on my personal computer, monitoring the windows I have open, the keystrokes I make, and collecting information from my camera. I am always extremely apprehensive about letting programs (especially ones from new pop-up companies) have these levels of access to my personal computer, and furthermore, given how little attention some people pay to their devices, these programs will often never be uninstalled after the required examination is complete. From an IT security standpoint, this is a nightmare of a backdoor, and it is honestly only a matter of time before a bad actor gains access to this from the corporation side, or else the corporations themselves decide to supplement their income stream with data-mining à la Facebook and Google.
On the other hand, IT’S NOT EFFECTIVE. Students are endlessly clever, and I have heard of workarounds on proctoring software ranging from virtual machines that isolate the proctoring software in a sandbox, to notes hidden on the wall, in a mask, or behind the keyboard, to more inventive solutions like casting the screen into another room and having an accomplice relay the correct answers back to the test taker via covert methods. Asking for extraordinary permissions from the public to deliver insecure protection smacks of the ‘security theater’ that has run rampant across the USA since 2001. As you rightly bring up, this becomes normalized, adding surveillance to yet another aspect of our lives. And once the idea of being watched by your computer becomes normal, it has the potential to morph into something far more insidious, being abused by corporations or governments.
On both prongs, this smacks of the most disappointing kind of problem solving I’ve seen play out over the course of my life, whereby complex, multi-faceted problems like cheating or extremism are addressed via heavy-handed technological interventions that invariably don’t even begin to address the root issue, and have a whole host of spin-off negative consequences. I’m really proud to know someone who is using her voice to fight back against this.
Exactly!!! Computer scientists at my institution (I know you are at AUC, too, but for anyone who reads) have banned that software in their department. It’s basically spyware!!! And of course it isn’t effective. Neither at stopping zealous cheaters, nor, of course, at making students become honest ppl with integrity. So well said!!! You should do a podcast about this 🙂
Dustin, the peer-reviewed research appears on the second tab in the library Autumm and I (and others) have been building over the last few months. Lots to explore in addition to peer-reviewed research. And feel free to add other items as you find them! docs.google.com/spreadsheets/d…
And this is the latest article which contains links to some stuff that is probably in the sheet. by @autumm @sarahesilverman et al quod.lib.umich.edu/t/tia/17063888…
Hiya, yes, empirical research that might validate concerns is what is being sought. Thank you!
Occasionally, just occasionally, you wish people would just use their common sense and not ask for empirical research when it’s not needed. Esp when it’s something affecting minority groups and studies can be designed to skew it 1/3
2/3 like if 10% of your students have mental health challenges or physical disabilities and 20% are dark skinned and you did a study… the majority of students might be “fine” and their normative behavior doesn’t trigger false claims of cheating.
3/3 but the psychological impact of it on many students is something you can see when you just talk to any of them.
an important tension Shea and I have been discussing. working within conservative systems to change massive financial/technical investments will require that decision-makers read evidence presented according to a particular formal discourse – hence the need for empirical research
If it hasn’t been done in a way that will convince policy makers, I guess maybe we all here can figure out a quick way to do it? As I wrote recently, there’s an element of students internalizing their own oppression, which probably means the trauma is underreported
I have a Twitter thread from last year where educators voted surveillance tech as THE technology they would like to uninvent 🙃 does that count???
OMG YES IAN!!! THAT!!!
Why is research/evidence not required before policy makers buy this tech? That’s the right question. This is what Mourid Barghouti meant when he said ppl start a convo with “secondly”. Start from the beginning!
#AgainstSurveillance
And if you want to do the research would you even be able to?
The help documentation is apparently beyond reproach without litigation. Can you imagine what a full-on research study would result in?
Of course yes, western/conventional/empirical notions of evidence must be challenged and critiqued. And we have the conceptual tools & traditions to do so. And yet, as you know, people spending millions to direct school/system tech aren’t reading that lit/care much. I’m cynical.
It would be incredibly difficult to design a study. The software itself wouldn’t pass research ethics approval, especially if you’re studying emotional harm. Participants would need an opt out students don’t get.
Maybe they’re the wrong audience? If peer-reviewed research is the currency of the land in academia, then it seems the faculty who deploy surveillance tech might buy that evidence and make changes to their practice.
Perhaps @Procteario can open some eyes. Every one of those students has a story. washingtonpost.com/technology/202…
I really think it is almost impossible.
That’s the one place I am able to locate some hope – faculty assembly/shared governance processes that can inform such administrative/financial decisions.
The researchers would also be at personal risk of a lawsuit.
Yeah, that closing paragraph from a student: “Half of my brain is so concentrated on not messing up or doing anything wrong or making the wrong move…and the other half is trying to actually work on the exam…How far is this going to go?”
I think so too
If you are looking for such evidence and not satisfied with the stories coming out of journalism, and if you’re having trouble finding that “empirical” research… it’s maybe because such a study is not possible without great personal risk.
Yep. It’s quite the thought experiment
Not like a randomized controlled trial
But maybe self-report would still count as evidence here.
Yeah, just terrifying. And again, I can’t help but draw a parallel with human-subjects research – an IRB would never approve a study that presented this level of student risk. And yet universities open the door to tech in an educational context that jettisons reduction of harm.
But then the company could claim that as an abuse of their terms
Might be easier then to just do the faculty governance thing and change the culture (I’m kidding of course it’s not easy at all)
Which brings me back to what “counts” as evidence. What abt an ethnographic approach? How might depth interviews surface students’ experiences of surveillance tech? (I suppose faculty would need to be moved by their accounts. Plus their accounts already exist in the journalism.)
So yeah. Recap for me is:
1. We don’t need a study to tell us this is harmful. We know it is. We can see it in front of us and that is empirical enough.
2. Such a study is not possible ethically or without great risk.
right. “where is the evidence” too often is admin speak for “I don’t want to.”
You would have to let them opt out. You would also have to be very clear about the benefit that could potentially come from the research.
how many traumatized students have to be out there before it’s actionable? Where is the world where one traumatized student is already too many?
But even if you can get that far I think you would be putting yourself at risk legally
academics who want a peer-reviewed study before they stop engaging in practices that are already harming students…I can’t even finish that sentence.
And besides, what’s a single peer-reviewed article going to do to a pedagogy that’s taken root over a lifetime? So on to other ways to resist, refuse, and abolish the technology.
well and to the points that @Autumm and her colleagues made in their latest article, support academics in ways great and small in changing the ways they approach assessment in their classes.
And because we don’t have evidence of the sort that some are looking for, the pushback is that there is none and therefore it’s fine to use…serious question though, how do we ethically do this kind of research a priori if we suspect it will cause harm?
You can’t!!
Who decides what’s “evidence”? I’m being socialized into academia right now, & I’m grateful for a community w an expansive view of teaching/learning/knowing. But what of gatekeepers? Reviewers? Journal editors? How might they be pushed to reconsider what “counts” as evidence?
I wonder if @LydiaXZBrown has research on impact of proctoring on neurodivergent students?
Absolutely, Maha. I’m thinking about students’ experiences documented in the journalism, much of which students wrote. How much of that ends up in peer-reviewed research, and if it does, is it minimized or amplified? Now I’m thinking we need a tab with quotes from students…
Why wasn’t this research needed prior to harm being done? Why is it needed prior to stopping the harm? This is backwards.
not suggesting this is my personal preference, not ignoring the tremendous value (& pain) that is documented in the growing volume of journalism, and not dismissing the incredible labor that will go into producing such research… and, empirical/peer reviewed evidence is currency
You can see Foucault and the Panopticon in one of the quotes by a student: “it’s that whole Big Brother scenario, you sort of forget they’re watching you after a little bit. . . . It’s a really simple straightforward process.” 😱
I hope the answer is no?