On Proctoring & Internalized Oppression #AgainstSurveillance

Estimated reading time: 5 minutes, 59 seconds

I’ve been having a lot of private conversations around proctoring and surveillance, and we’ve been having a lot of conversations on my campus about academic integrity. Thankfully, in many of these conversations, there are people other than me who bring up the importance of centering care and pedagogy and equity in all of this, and some who see the big picture of developing ethical citizens rather than just “catching cheaters” – treating the cause and modifying our own systems before blaming students for subverting them. There have also been several instances on Twitter of invited speakers refusing to speak at conferences sponsored by AI proctoring companies, and more recently of educators refusing to participate in Twitter chats by a company that is problematic in many ways but also partners with an AI proctoring tool [I dislike naming them because it’s advertising, and it’s not my main point]. This blogpost builds on several things: my blogpost after #OLCInnovate about changing the conversation around cheating, my blogpost ahead of the AgainstSurveillance event, and the panel below.

I was also recently on a panel at the CPDP conference, moderated by John Davisson (thanks, Bon Stewart, for recommending me for this panel), along with Meg Foulkes, Sofie van Londen and Lydia X.Z. Brown. Here is the recording in case you’re interested:

Recording of the CPDP Panel I was on recently: STUDENT PRIVACY AT RISK UNDER COVID-19: ONLINE TEST PROCTORING BRINGS AI AND SURVEILLANCE INTO STUDENTS’ HOMES

In any case, in a recent private conversation, I responded with something that someone encouraged me to blog about. I’ll just provide a small background to this, using stories I’ve heard – and tie it to a story in the video above.

First of all, when we say that AI proctoring has the potential to harm some students, this does not mean that the students it does not harm will object to it. But why do we even develop a technology that occasionally harms some students? Shouldn’t education have a mantra similar to healthcare’s “do no harm”? (Cory Doctorow makes this point in his contribution to the Against Surveillance panel – and you can still go there and contribute to support Ian Linkletter).

Second, for some students, there are circumstances that require them to undergo proctoring, because some accreditation agency or institution requires they take a proctored exam to qualify for a certain certification. Some students have no option to do this exam in person, especially with COVID, so if you tell them proctoring is their only option, they will take it. This does not mean you have their consent; it means they have complied, they’ve been coerced. They may not even be objecting inside, because all their lives they’ve had to deal with things like this. It’s kind of like… as a Muslim, when I get selected for a “random check” at the airport (almost every other time I travel without my family to a Western country)… I can choose to notice it for what it is: not random; or I can just take it in stride, because all my life it’s been happening to me, so I don’t notice it as something different. Actually, one time the white Western woman next to me (a friend) was chosen for a random check, and I REALLY noticed it! I actually felt it was someone trying to tell me, “see, I see you there with the headscarf, but I chose to randomly select HER. So random, right?”. I think I laughed out loud when it happened. ANYWAY.

Third, in the panel video above, Meg Foulkes says she refused to sit for a bar exam in the UK because they used AI proctoring, and she advocated against it. I love that she took that stance. I also notice, very clearly, that this is an excellent way to use one’s privilege – but it cannot be done without privilege. She could afford to postpone her exam. She could afford to speak up. People who are most harmed by this kind of technology are less likely to have the privilege to refuse and object. People with privilege, who are less likely to be harmed, have a responsibility to refuse, so that it does not remain a “minor problem for the 1% or 5% or 10%” that for some reason policymakers don’t take seriously enough.

So here are the points I made in a private convo that someone suggested I blog:

Whenever someone (especially a marginalized student/person) “accepts” proctoring without objecting to it:

A. Do they know the potential harms of online proctoring, to themselves and to others?

B. Are they given an alternative to online proctoring that is within their reach and equally valid, like an alternative assessment that tests a similar level of knowledge, and which they’ve practiced before? [Because let’s face it, when suddenly facing a difficult-to-cheat authentic assessment or open-book take-home exam… students need practice before they would choose that over an MCQ.]

C. Have they understood that their own acceptance of proctoring could harm others less privileged than themselves?

D. Do they recognize the long-term impact of normalizing surveillance on humans, beyond this one COURSE and into LIFE in the world?

Perhaps if they had all the information from A-D, then they could make an informed decision, beyond their internalized and ideological oppression [and this probably takes years to undo, not just writing a blogpost about it or having a 10-minute conversation about it]. Because remember: if they’re a minority, they’ve internalized that life will always be harder for them anyway, that they will have to make more sacrifices than others to reach the same goals. Society as a whole has an ideology that not everyone needs to have access to the same education (at certain levels, depending on the society) or the same opportunities to work (again, some social systems are much better at this than others, but the elements of gender and race and disability and neurodivergence and such are almost always there at one level or another).

It is important to recognize that for B to happen, systemic change needs to happen; at the very least, institutions should be investing in humans, as the University of Michigan-Dearborn did by paying human graders to support their faculty in developing authentic assessments, which take a lot of time to grade. Other investments include faculty developers’ time to support faculty in designing these kinds of assessments, and rewarding faculty for doing them… and then, hopefully, you’ve spent your money wisely on human solutions that promote equity and care, and you recognize you don’t need to pay for technology to oppress and surveil your students. Then there is getting over the ideological oppression of even using exams for assessment at all, with all the inequalities they bring – but that’s another post.

I know it’s not that simple, but that is what is on my mind today.

I want to thank Autumm Caines especially for always keeping me thinking about this and helping me notice what I may overlook… and the entire Continuity with Care group for being a space for all of us to think and question and learn.

47 thoughts on “On Proctoring & Internalized Oppression #AgainstSurveillance”

  1. Really interesting read! For me, this is a two-pronged issue, and each prong is concerning.

    On one hand, these proctoring programs ask for pretty powerful permissions on my personal computer: monitoring the windows I have open and the keystrokes I make, and collecting information from my camera. I am always extremely apprehensive about letting programs (especially ones from new pop-up companies) have these levels of access to my personal computer, and furthermore, given how little attention some people pay to their devices, these programs will often not be uninstalled after the required examination is complete. From an IT security standpoint, this is a nightmare of a backdoor, and it is honestly only a matter of time before a bad actor gains access to this from the corporation side, or else the corporations themselves decide to supplement their income stream with data-mining à la Facebook and Google.

    On the other hand, IT’S NOT EFFECTIVE. Students are endlessly clever, and I have heard of workarounds for proctoring software ranging from virtual machines that isolate the proctoring software in a sandbox, to notes on the wall, in a mask, or behind the keyboard, to more inventive solutions like casting the screen into another room and having an accomplice relay the correct answers back to the test taker via covert methods. Asking the public for extraordinary permissions in order to deliver insecure protection smacks of the ‘security theater’ that has run rampant across the USA since 2001. As you rightly bring up, this becomes normalized, adding surveillance to yet another aspect of our lives. And once the idea of being watched by your computer becomes normal, it has the potential to morph into something far more insidious, abused by corporations or governments.

    On both prongs, this smacks of the most disappointing kind of problem-solving I’ve seen play out over the course of my life, whereby complex, multi-faceted problems like cheating or extremism are addressed via heavy-handed technological interventions that invariably don’t even begin to address the root issue, and bring a whole host of spin-off negative consequences. I’m really proud to know someone who is using her voice to fight back against this.

    1. Exactly!!! Computer scientists at my institution (I know you are at AUC too, but for anyone who reads this) have banned that software in their department. It’s basically spyware!!! And of course it isn’t effective – neither at stopping zealous cheaters, nor, of course, at making students become honest people with integrity. So well said!!! You should do a podcast about this 🙂

  2. Dustin, the peer-reviewed research appears on the second tab in the library Autumm and I (and others) have been building over the last few months. Lots to explore in addition to peer-reviewed research. And feel free to add other items as you find them! docs.google.com/spreadsheets/d…

  3. An important tension Shea and I have been discussing: working within conservative systems to change massive financial/technical investments will require that decision-makers read evidence presented according to a particular formal discourse – hence the need for empirical research.

  4. If it hasn’t been done in a way that will convince policymakers, I guess maybe we all here can figure out a quick way to do it? As I wrote recently, there’s an element of students internalizing their own oppression, which probably means the trauma is underreported.

  5. Of course, yes, western/conventional/empirical notions of evidence must be challenged and critiqued. And we have the conceptual tools & traditions to do so. And yet, as you know, people spending millions to direct school/system tech aren’t reading that lit / don’t care much. I’m cynical.

  6. It would be incredibly difficult to design a study. The software itself wouldn’t pass research ethics approval, especially if you’re studying emotional harm. Participants would need an opt-out that students don’t get.

  7. Maybe they’re the wrong audience? If peer-reviewed research is the currency of the land in academia, then it seems the faculty who deploy surveillance tech might buy that evidence and make changes to their practice.

  8. Yeah, that closing paragraph from a student: “Half of my brain is so concentrated on not messing up or doing anything wrong or making the wrong move…and the other half is trying to actually work on the exam…How far is this going to go?”

  9. I think so too

    If you are looking for such evidence and are not satisfied with the stories coming out of journalism, if you’re having trouble finding that “empirical” research… it’s maybe because such a study is not possible without great personal risk.

  10. Yeah, just terrifying. And again, I can’t help but draw a parallel with human-subjects research – an IRB would never approve a study that presented this level of student risk. And yet universities open the door to tech in an educational context that jettisons reduction of harm.

  11. Which brings me back to what “counts” as evidence. What about an ethnographic approach? How might in-depth interviews surface students’ experiences of surveillance tech? (I suppose faculty would need to be moved by their accounts. Plus, their accounts already exist in the journalism.)

  12. And besides, what’s a single peer-reviewed article going to do to a pedagogy that’s taken root over a lifetime? So, on to other ways to resist, refuse, and abolish the technology.

  13. And because we don’t have evidence of the sort that some are looking for, the pushback is that there is none and therefore it’s fine to use… Serious question, though: how do we ethically do this kind of research a priori if we suspect it will cause harm?

  14. Who decides what’s “evidence”? I’m being socialized into academia right now, and I’m grateful for a community with an expansive view of teaching/learning/knowing. But what of gatekeepers? Reviewers? Journal editors? How might they be pushed to reconsider what “counts” as evidence?

  15. Absolutely, Maha. I’m thinking about students’ experiences documented in the journalism, much of which students wrote. How much of that ends up in peer-reviewed research, and if it does, is it minimized or amplified? Now I’m thinking we need a tab with quotes from students…

  16. Not suggesting this is my personal preference, not ignoring the tremendous value (& pain) that is documented in the growing volume of journalism, and not dismissing the incredible labor that will go into producing such research… and yet, empirical/peer-reviewed evidence is currency.

