On Automation and Accountability (a Tennis Interlude)


I’ve been reflecting a lot recently on what cannot be a new topic, but one that often goes ignored in conversations about generative AI: human accountability in the face of automation. The problem with bias in generative AI isn’t that AI is necessarily more biased than humans (it’s likely reproducing human bias, though in some instances it may be amplifying it). The issue is that humans who are biased can be held accountable for their actions, whereas humans using technology tend to feel they can lay the blame on the tech and claim innocence. And, like, who are you going to sue, punish, or even reform when the tech goes wrong, you know?

So I have two examples from the tennis world where technology/automation and accountability came to a head. In both cases, a bad judgment by a human occurred. In both cases, technology might be expected to reduce or circumvent human error. In one case, the technology was available but failed. In the other, the technology is not available as per the rules of the game BUT is literally available to everyone outside the game. My point in all of this is that using tech should not reduce the importance of human judgment, whether on top of tech, instead of tech, or AFTER the tech has spoken. Let me explain.

An out is an out

The first case is a Taylor Fritz match where automatic line calling has replaced human line judges. I have no idea whether this tech uses AI, a straightforward and highly accurate sensor in the ground, or visual sensing that can be thrown off by shadows or the placement of a player’s feet. In any case, the ball was out, the automatic line calling did not beep, the player (who in olden times could have challenged the call and stopped the point) kept playing, and the umpire (who used to be capable of calling an out by sight) did not stop play until several points later. Then we see the umpire calling up the line-calling system to make a judgment, or some such thing, and a player (Taylor Fritz) saying the ball was “a foot out”, to which the umpire replies, “yes, I saw it out”. Taylor asks the umpire why he didn’t call it out, and the umpire says he could not do that (I assume because automatic line calling was in place); the umpire then asks Taylor why he didn’t stop the point, and Taylor says, “there’s automatic line calling, why would I stop the point?”. Here, watch:

And I am thinking, first of all, what is wrong with you people, ceding sovereignty to technology when you both saw it out? Yes, human eyes are fallible, but clearly so is this tech. You’ve reduced the number of human eyes, and reduced the agency of the remaining humans. (Side note: someone once told me that in Pickleball, the players themselves call the outs, no umpires involved, and disputes get raised and resolved between them. That’s just so cool, and also true of most amateur sport, I am guessing, but I love how it promotes community ethics.)

What is worse is what happens next. The umpire deems the ball out (I am guessing the tech corrected itself after he “asked it”) and rules that the point be replayed. At which point Taylor rightfully asks why they need to replay the point: the ball was out, so he should just get the point. And this is where the human judgment (of this umpire) totally fails. I understand that the technology can fail, and that maybe some stupid rule makes you question your own eyes in deference to this amazing technology. I get it. But I don’t get making a poor judgment call just because the tech called the out late. An out is an out. What else is it?

Should Tennis Have Instant Replay?

This one was a very confusing ball at match point between Felix Auger-Aliassime and Jack Draper. Something weird happened with the ball, and the umpire declared it game, set and match to Draper, but Felix objected, explaining that he saw the shot and that it was an illegitimate one. Here, take a look:

Now, some people blame the player (Draper) for not coming out and conceding the point, because they say that even if he wasn’t looking, he would have felt his racket hit the ball twice: once off the frame into the ground, and a second (illegitimate) time over the net. Others say it’s not his call, it’s the umpire’s call. But many say it is possible the umpire simply didn’t see it, he is only human, and we need slow-motion instant replay technology like in football/soccer. Others say this has not necessarily improved fairness in football (I am not sure, but I have seen instances where it was fair).

My point here is… first, why do people observing this want to exonerate the humans from judging, and why hand that power to technology? In the absence of this tech (which also isn’t foolproof), shouldn’t humans use good judgment and, honestly in this case, just replay a point that looked suspicious at a very important moment in the match? Wouldn’t that have been fairer? I don’t blame the umpire for not seeing it, but I do blame him for not suggesting a replay. As in, no one gets the point, and the players start the point over (not tech instant replay, just replaying the point).

What is My Point Here?

My point here is that human accountability is needed in these situations. This instinct to blame tech and exonerate humans makes me both uncomfortable and concerned that people are acting, or starting to act, stupidly (sorry) because the tech is there and they have lost their agency to act reasonably. When technology glitches or is absent, we cannot be helpless and blameless. And that’s also the case when technology works.


2 thoughts on “On Automation and Accountability (a Tennis Interlude)”

  1. Thank you for this really helpful blog post! The tennis examples help illustrate the point very well. I’m also thinking of an example where Air Canada (one of the airlines here) tried to pass off responsibility when their bot told someone something incorrect and refused to honour what the bot said. That didn’t work; a court ruled they are responsible and have to adhere to what the bot told the person: https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416

    We have talked about responsibility and accountability while using AI at our institution, and our new guidelines on AI in teaching & learning include discussion of this: https://genai.ubc.ca/guidance/teaching-learning-guidelines/

    We also heard from folks that in addition to individual responsibility for uses of generative AI, the institution should bear some responsibility for helping people to understand not only what their responsibilities are, but how to uphold them (e.g., helping with AI literacy, understanding ethical and privacy issues, etc.). I found this a really interesting point that it shouldn’t all be on individuals to find resources to teach themselves (though individual efforts to learn are still important!).
