Note: experimenting with the “add to calendar” button… I think I fixed the glitches.
#FOEcast with Maha Bali
The other side of student empowerment
So, last summer, Bryan Alexander asked me to write a piece on student empowerment in a digital world.
That thing just got published, among some other cool things, and we’re doing Zoom synchronous sessions to discuss them.
Mine is June 24 inshallah, 3pm ET / 7pm UTC / 9pm Cairo – to join, just use https://tinyurl.com/FOEcastvid to get Zoom going.
My thing (as well as others’) can be downloaded from here:
But I wanted to re-post it here for two reasons.
- I wanna invite folks to annotate via Hypothes.is, and it’s easier on HTML than PDF
- I wanna sort of update some of what’s in it, adding some post commentary. Because, man, it’s 10 months later. Of course my thinking has changed on these things!
So here is my original piece (I will be adding commentary as I go, marking anything added today).
Title: Student Empowerment in a Digital Future – Maha Bali
Added today: that title is too vague. HOW ABOUT… The other side of student empowerment in a digital world
Quite often, the rhetoric surrounding technology use in education makes hyperbolic claims of democratization and liberation, when the reality can be quite the opposite. As Seymour Papert wrote back in 1980, “In many schools today, the phrase ‘computer-aided instruction’ means making the computer teach the child. One might say the computer is being used to program the child. In my vision, the child programs the computer.” Yet Papert’s vision is rarely realized in practice: computers and technology are seldom used to empower learners. To demonstrate this, I will highlight how recent advances in adaptive learning, learning analytics, and some pedagogies claiming to empower learners sometimes do the opposite, or fall short of achieving their objective. I will also share one approach to thinking about student empowerment as a provocation for thinking further about how to do so online. In the backdrop of my thinking on this is the notion of “equity literacy”, defined by Gorski as
“the skills and dispositions that enable us to recognize, respond to and redress (i.e., correct for) conditions that deny some students access to the educational opportunities enjoyed by their peers. Equity literacy also describes the skills and dispositions that allow us to create and sustain equitable and just learning environments for all families and students”.
In a recent classroom discussion, my students thought that automated grading would be a good idea because, they believed, besides providing quick feedback to large numbers of learners, it would reduce bias in instructor grading. Those two problems, bias and the difficulty of grading papers for large numbers of learners, were what automated grading would solve. However, they then encountered Safiya Noble’s work. She writes in Algorithms of Oppression:
While we often think of terms such as “big data” and “algorithms” as being benign, neutral, or objective, they are anything but. The people who make these decisions hold all types of values, many of which openly promote racism, sexism, and false notions of meritocracy, which is well documented in studies of Silicon Valley and other tech corridors. (Kindle location 171)
Machine learning algorithms simply learn patterns from the data they are fed, data produced by humans with biases, and selected by the designers and trainers of the algorithms. An automatic grader learns a particular teacher’s way of grading a particular assignment, and it needs a large amount of data to learn from before it can eventually grade without intervention from that teacher. Apart from reproducing that teacher’s biases, the algorithm may not know how to deal with a student who uses a unique or more creative style. It also learned from a particular assignment graded a particular way, so if the instructor made important changes to the requirements, it may not adapt adequately. More important for me, though, is how students are disempowered by writing something that will never be read by human eyes. If the purpose of writing is to communicate with other human beings, educators should be encouraging learners to write for more authentic contexts and purposes, not for a machine. A machine may be able to detect errors or suggest improvements, but it cannot provide human encouragement to motivate learners, give them career advice, or share experiences and stories.
In this sense, while the problem of large classes could be tackled by an automatic grader, a possible human alternative would have been hiring teaching assistants to aid in grading. Any instructor biases, whether related to writing style or to race or gender, for example, would be replicated by an algorithm, and any creativity or deviation from the norm is likely to be punished by it, thus promoting conformity. A human alternative here is having multiple graders, or, if that is impractical, communicating explicit quality criteria to students. Most importantly, in my view, automated grading dehumanizes the writing process, divorcing it from its actual purpose of communicating with other humans, and it emphasizes grading writing rather than focusing on its quality.
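To make this concrete, here is a minimal, hypothetical sketch (all function names and training data are invented for illustration, and this resembles no real autograding product) of a bag-of-words “grader” that learns per-word score associations from one teacher’s past grades. Whatever patterns or biases shaped those grades are exactly what the model reproduces, and an essay using vocabulary the model has never seen scores zero, regardless of its quality:

```python
# Toy, hypothetical sketch -- not any real autograding system.
from collections import defaultdict

def train_grader(graded_essays):
    """Learn the average grade associated with each word in one
    teacher's historically graded essays."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for text, grade in graded_essays:
        for word in set(text.lower().split()):
            totals[word] += grade
            counts[word] += 1
    return {word: totals[word] / counts[word] for word in totals}

def auto_grade(weights, text):
    """Score an essay as the mean learned weight of its words.
    Words never seen in training contribute 0, so unfamiliar
    (e.g. creative) vocabulary drags the score down."""
    words = text.lower().split()
    return sum(weights.get(word, 0.0) for word in words) / len(words)

# Invented training data: whatever patterns or biases produced these
# grades are exactly what the model learns to reproduce.
training = [
    ("the experiment was rigorous and the method was sound", 95),
    ("the method was rigorous", 90),
    ("the essay was short", 60),
]
weights = train_grader(training)

# An essay reusing the teacher's favoured vocabulary scores high:
print(auto_grade(weights, "the method was rigorous and sound"))
# A creative essay with entirely fresh vocabulary scores 0.0,
# however good it may actually be:
print(auto_grade(weights, "luminous prose interrogating methodology"))
```

Real systems use far more sophisticated models, but the failure mode sketched here is the same one described above: the grader can only echo the grading history it was trained on, and punishes deviation from it.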
There is insufficient space here to critique learning analytics fully. But let me say briefly that while they are often touted as helping administrators and instructors predict which learners may drop out or fail, so they can take action to support them, they in fact reduce human beings to a set of numbers collected from observable online behaviors, rather than seeing each human being as an individual and believing in their agency to see the patterns in their own data and plan their own course of action to achieve their goals. It would be worth asking ourselves how we might decolonize learning analytics, as Paul Prinsloo has written.
Added today: I did recently critique them here, and the insightful Matt Crosslin replied here. See also Ruben Puentedura’s contribution to #FOEcast. I also like the work of Dan McQuillan on this. Adding tweets:
"I want to warn of the possibility that algorithmic prediction will lead to the production of thoughtlessness, as characterised by Hannah Arendt." @danmcquillan
I was just thinking of this the other day but didn't have proper terminology for it! https://t.co/u6MhtOT11Q
— ℳąhą Bąℓi, PhD مها بالي 🏵 (@Bali_Maha) June 9, 2019
"We need to think collectively about ways out of this mess,
learning from and with each other rather than relying on machine learning.
countering thoughtlessness with practices of collective care."
Thanks @danmcquillan #AI @DataJusticeLab #DesignJustice https://t.co/K8tWYtK8Ug
— ℳąhą Bąℓi, PhD مها بالي 🏵 (@Bali_Maha) June 9, 2019
Also, Nagla Rizk’s recent talk on AI and Inequality in the Middle East
Domain of One’s Own as Partial Empowerment
On the other hand, the Domain of One’s Own project more directly aims for student empowerment, agency, and voice. As Martha Burtis of UMW says, for her institution, like many others, educational technology started with the LMS (Learning Management System). Students having their own personal domain on the web affords them two important things: their data remains theirs for as long as they want it, rather than getting lost after a course disappears from the LMS or a commercial website closes down; and no commercial platform will monetize their data or keep it after they decide to remove it. However, Domain of One’s Own is only a partial philosophical and technical solution. It can empower in some but not all contexts.
In 2016, I wrote a brief blogpost entitled “I don’t own my domain, I rent it #DoOO” which generated a lot of discussion.
Martha Burtis commented, “people deserve to have spaces on the Web over which they have as much control as we can give them. They deserve to own their data, to take it with them when they need to, and to delete it when they want to”.
She adds “Domain of One’s Own is about empowerment, but empowerment comes with responsibility. (Sort of like tenure.) It’s on us to backup our data (in case our host goes under), to update our applications (so our sites continue to work), to create and remember strong passwords (so our accounts don’t get hacked). Learning these lessons is part of becoming a capable digital citizen. So we still host with a company and we register domains through a domain registrar (because there is no other option)”.
However, the idea of domain ownership is inaccurate. First of all, while we do not place our web presence in the hands of a commercial provider that will exploit it, we do place it with a hosting provider (because most people do not have the technical skill to run their own physical server), and we trust them to protect our data and not, for example, dredge up backups of things we had deleted. We also pay them an annual fee, so it is not complete ownership: if we stop paying, we lose that hosting. Similarly, domain names are not owned forever. We pay to renew them annually, and we lose access to them if we do not. This is a dangerous prospect, because if a university pays for a student’s domain while they are in college, and the student cannot continue paying for the upkeep of their website after they graduate, they lose that space, left with only a backup they cannot use. That does not look like ownership.
The issue of inability to pay is acute here in Egypt for several reasons. Many young people, and even older people, do not have credit cards. Additionally, credit card providers sometimes restrict where we can use our cards online: I was recently unable to pay to renew a software subscription, and when I paid for an online course, my credit card got blocked. These things happen.
Moreover, DoOO presumes a particular approach to empowerment centered on voice, and especially public voice. For populations that face heavy surveillance, or are extremely vulnerable, having a public voice is not always empowering, but potentially threatening, and safety in closed and discontinuous online or offline spaces may be more appropriate (Tanya Dorey-Elias gives examples of abuse victims, Robin Derosa gives the example of someone in the witness protection program, and I highlight the risk of imprisonment and torture in autocratic regimes for political bloggers). DoOO does not prevent the use of closed, private or anonymous spaces, of course, but this highlights the partiality of its empowerment potential.
The freedom dimension is also partial. As then-student Andrew Rikard wrote, it is inaccurate to call a student’s domain their own if it will get graded by instructors. It then becomes a way of reproducing institutional power. It is also inaccurate to believe that having a space of one’s own affords freedom of expression, because there are multiple factors outside the web that limit freedom.
To be fair, almost all projects intended for empowerment will be partial: they solve a dimension of a problem for a portion of people. For example, Virtually Connecting empowers those who cannot attend conferences (for financial, social, logistical, health, political or other reasons – such as graduate students) to have a voice in academic conversations and it subverts their marginality. However, it reaffirms the importance of conferences in academia, which in itself is problematic, and it only helps those who have the infrastructure and digital literacy to use web-based video conferencing – and because it uses a particular technology (for now), countries where this technology is blocked cannot access the live or recorded videos. Nor is this an empowering space for people with certain disabilities or extreme introverts. It is a partial solution for a big segment of the population, but not all, and it addresses some forms of marginality, but not completely.
Added today: this way of looking at empowerment as partial and contextual, I have learned, can be well served by Nancy Fraser’s framework for social justice, as applied by Cheryl Hodgkinson-Williams and Henry Trotter here.
It is important, when considering the potential benefits of any educational technology, to ask ourselves (extending from a previous post of mine):
- Which educational problem does it solve? What previously non-existent opportunity does it create? Are these equitably distributed among stakeholders of different groups?
- What human solutions to this problem exist? Why aren’t we investing in those?
- Whom can it benefit? Whom can it harm? In what ways might this tool disproportionately harm less privileged learners and societies? In what ways might it reproduce inequality?
- How participatory has the process been? How much have actual teachers and learners on the ground, especially minorities, been involved in or consulted on the design, implementation, and assessment of these tools and pedagogies? What can we do to ensure full participation in agenda-setting, decision-making, implementation, and evaluation?
When we fail to ask these questions, we risk letting hyperbolic discourses (even with good intentions) deceive us into thinking we are doing the work of empowerment and liberation.
Added today: see Sasha Costanza-Chock on design justice
Finally, a link roundup:
- A link recently shared by @Tweetinchar: Human AI at Stanford.
- A book on Data Feminism, I think shared by Paul Prinsloo.
Join me June 24 inshallah, 3pm ET / 7pm UTC / 9pm Cairo – just use https://tinyurl.com/FOEcastvid to get Zoom going.