I could make a slippery slope argument about where learning/predictive analytics could take us, but I won’t; I’ll just stick to the facts.
So this tiny incident happened today.
Facebook wouldn't allow me to post my student voices article because its algorithm thought it might be "abusive" pic.twitter.com/ayQ51hAbgB
— ℳąhą Bąℓi مها بالي (@Bali_Maha) December 1, 2016
My Twitter friends responded with outrage. And we remembered worse abuses by Facebook.
The back story? I wrote a Prof Hacker article about the importance of listening to student voices at conferences. I tried to post it from my phone’s Facebook app. I got an error message, and behind it Facebook told me it had prevented me from publishing what it deemed to be “abusive” content. And now the burden was ON ME to prove it wasn’t, by sending it to Facebook and explaining.
Well F#@k U Facebook (now THAT you can label abusive content)
But really. This is minor compared to
- Alec Couros’s stolen identity
- Alan Levine’s catfishing ordeal, where Facebook wouldn’t believe him
- Tressie McMillan Cottom getting blocked from her own Facebook account as a fake
I mean, really. Believing the lies and denying the truths? Oh wait. But Facebook is also the fake news promoter, so it’s all within the same ethos.
What concerns me is not the particular decisions Facebook makes (though there’s probably something fishy going on there) but the mere fact that Facebook trusts its own obscure (and clearly often just plain WRONG) algorithms over human judgment. And the burden to disprove the algorithm lies on humans.
There is nothing more ridiculous than this. Than building machines. Building algorithms. And letting them rule with such authority that we trust them above human judgment.
An analogy with medicine is in order: some medical diagnosis has become so reliant on machines that the doctor’s own clinical sense is dismissed. Or even the patient’s own experience.
I will leave it there.