To Love But to Hate AI…

Estimated reading time: 5 minutes, 10 seconds

I recently published an article on Prof Hacker highlighting some of my pedagogical objections to AI, adaptive learning, and learning analytics, all of which may (but don't necessarily) involve machine learning or some form of AI.

I’ve got two things on my mind right now, following up on this. 

First, I found a lot of really good writing/research around those topics AFTER I published the article. I’ll share that later.

Second, just because I object to the use of AI and such in those contexts doesn’t mean I reject them in ALL contexts. For example, Andy Nobes recently published an article reflecting on and quoting some Francophone Sub-Saharan African articles that take a decolonizing perspective on Open Access. The articles are brilliant, and written in French. Andy doesn’t read French, but he says Google Translate did a GREAT job. That’s brilliant AI (and brilliant Andy, thank you!). Apparently, Google Translate also does well on Spanish and German. Let me tell you, though, Google Translate does much, much worse for Arabic.

On the very positive side, this allows us, for the most part, to read content in another language that was previously inaccessible (or at least less conveniently accessible) to us. The Twitter app also has a simple translate option that uses Bing. So now I’m going to be more intentional about reading material online that comes in unfamiliar languages. Another positive AI-type thing is YouTube’s automatic closed captioning, which works brilliantly for native English accents. Not so for less familiar English accents that aren’t American or British, apparently.

So the downside of all the positives is this: which languages and accents become more accessible because of this AI, and which do not? And what does this mean for a decolonizing agenda (yes, I looped this back to Andy’s article rounding up those awesome articles)? It means that languages with common Latin/Anglo-Saxon roots (arguably easier to translate into each other, but also arguably belonging to the economically and culturally dominant parts of the world) become accessible to each other and to the people who speak/read them…while languages that are less so…remain less accessible. I mean, obviously, Arabic is among the languages that get a lot of work done on them (there are lots of speakers of it in the world and online), but it’s also more work to make it work well. I have been thinking recently that lots of the work related to developing literacy in English, for example, does not apply easily to Arabic because of the diglossia and that problematic discord between written and spoken language, which makes it more complex to teach reading and writing the language to its own native speakers.

And back to YouTube: of course it’s natural that native accents are easier to do auto closed captioning for. There’s a larger dataset of them, and a relatively stable set of sounds to look for – versus non-native accents, which are probably too diverse to include fully. Having said that, I don’t know how they train their algorithms, or if there might be a way to help the algorithm by specifically training it on Nigerian-English accents (arguably native, just not white native) or on Arabic-English accents (not native, but for non-fluent speakers, carrying some particular quirks like pronouncing “this” as “zis” and such – keeping in mind that other non-native accents would also make that “zis” sound, like French). So I don’t know where I’m at with that. But I keep thinking about how decisions are made when designing these solutions and what kind of information becomes more accessible to whom…
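(A quick aside for the technically curious: below is a rough sketch of the kind of informal check I have in mind when I say the quality is uneven. It is only my illustration, not anything Andy or Google actually did. The idea is to round-trip an English sentence through a translation API into each language and back, then eyeball how much of the meaning survives. The sentence, the language list, and the choice of the google-cloud-translate Python client are all my own assumptions, and you would need your own API credentials for it to run.)

# A rough sketch of a round-trip translation check: English -> target -> English.
# Assumes the google-cloud-translate (v2) client library and configured credentials;
# the sentence and language codes are made-up examples, purely for illustration.
from google.cloud import translate_v2 as translate

client = translate.Client()
sentence = "Open access should not simply reproduce existing hierarchies of knowledge."

for lang in ["fr", "es", "de", "ar"]:  # French, Spanish, German, Arabic
    # Translate into the target language, then back into English.
    forward = client.translate(sentence, source_language="en", target_language=lang)
    back = client.translate(forward["translatedText"], source_language=lang, target_language="en")
    print(lang, "->", back["translatedText"])

If the Arabic round trip comes back noticeably more garbled than the French, Spanish, or German ones, that’s the asymmetry I’m talking about, with the caveat that a round trip is only a crude proxy for translation quality.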

The other thing is a TV show on Nickelodeon called “I Am Frankie”. It’s not the first show about human-like robots ever. We’ve had those for years. I remember one I used to watch growing up, and I know there are soooo many I don’t even need to remind you of them 🙂 I would argue, though, that more of what’s in it is now on the cusp of being imaginable. Natural language processing exists and works well in many contexts. We know much more about human brains and machine learning. So Frankie isn’t a totally unimaginable sci-fi thing. Just look at the robot Saudi Arabia just gave citizenship to!

But this one is about a teen android and a variety of villains who, episode after episode, try to get their hands on her or reveal her secret (she poses as a high school student). I’ll leave lots of this aside and say I am most uncomfortable with the emotional part of it. I just read Audrey’s piece about the new behaviorism in edtech that uses social-emotional learning, and that whole approach has been making me uncomfortable. What I’m most uncomfortable about in this Frankie thing is that she has emotions, or something like them. She develops feelings for this guy. His sister is her best friend and knows she is a robot. The sister sees nothing wrong with encouraging this relationship even though she knows Frankie is a robot. When the guy she likes finds out, he’s angry for a while and then he’s SORRY?!??? He’s sorry he got mad at her for hiding that she was a robot?!? What are they trying to encourage here? The viewer is almost brainwashed into loving Frankie (who is, of course, played by a human actor) and dismissing all the villains trying to get their hands on her in order to weaponize her for evil purposes. I mean, REALLY? We’re going down that route of maybe it’s OK for a human and a robot to fall in love (OK, I know it’s not the first time ever this concept has been put in front of us, not by a long shot, but it matters to me more now because this kind of human-like robot is much closer to being a reality than it used to be).

I need to stop now.

Here are some of those AI resources I was talking about:

From a conference (organized by the Berkman Klein Center?) – Charting a Roadmap to Ensure AI Benefits All

Gdoc copy (because Medium is blocked here. Thanks, Laura G!)

Elana Zeide peer-reviewed article

AI and disability

Why women desperately need to design AI

More soon

One thought on “To Love But to Hate AI…”

  1. After reading so much science fiction about robots, I’m a bit nervous about AI. Of course, the Replicants in the movie Blade Runner were intended for war and off-world mining. I listened to the CBC radio clip and will read a bit more before I comment.
    From Science magazine:

    What can machine learning do? Workforce implications
    Erik Brynjolfsson and Tom Mitchell
    Science, 22 Dec 2017: Vol. 358, Issue 6370, pp. 1530-1534
    DOI: 10.1126/science.aap8062
    http://science.sciencemag.org/content/358/6370/1530.full
