This past Sunday, I watched the awesome Wimbledon Finals, and I couldn’t help but notice the number of times that Hawk-Eye, the computerized “line-calling” system, overruled the human line judges, even overruling the Umpire on one particularly important point. The sports commentators repeatedly alluded to “trouble” with the Hawk-Eye system, so today I looked it up. Sure enough, according to some reports:
Tennis players are split on the technology; Wimbledon champion Roger Federer has described Hawk-Eye as “nonsense”.
Now researchers at Cardiff University’s School of Social Science have challenged whether Hawk-Eye can always be right. In a paper entitled “You cannot be serious! Public Understanding of Technology with special reference to Hawk-Eye”, the researchers claim errors made by the machine can be greater than 3.6 millimetres – the average error stated by its makers.
Led by Professor Harry Collins and Dr Robert Evans, the team argue such devices could cause viewers to overestimate the ability of technological devices to resolve disagreement among humans.
I really like this last point. I remember long arguments with family members (we were big tennis watchers when I was a kid) about whether a ball was in or out, and the screaming fits of John McEnroe protesting the Umpire. But now, when Hawk-Eye makes a call, the debate ends. After all, the cold, unemotional computer says the ball was out. What, you can see better than the advanced robot? The robot is named “Hawk-Eye” for goodness’ sake; you know a hawk can see better than you.
I’ve written about the blind faith we place in machines before. The thing is, the influence of machines in our daily lives is growing steadily. There are questions about the breathalyzer tests administered to potentially drunk drivers. There are, of course, voting machines. And, most of the time, by human instinct, we trust the machines over the humans.
We get a sense of the potentially enormous mistake we might be making only when a machine wrongs us directly: when a voting machine selects the second candidate though we know we touched the first, when a breathalyzer test says we’re drunk though we know we’re not, and, for a professional tennis player like Federer, when Hawk-Eye says the ball was in when he saw it land out.
Now, the researchers aren’t saying that Hawk-Eye is corrupt. Rather, it’s more like a blood test: there’s a margin of error. For Hawk-Eye, that margin of error is thought to be 3-4mm, meaning that, with high probability, the machine isn’t erring by more than a few millimetres. But the results are presented to the public as if they were exact. So, when Hawk-Eye calls it “in” by 1mm, the decision should probably revert to the human. The Cardiff researchers suggest using a different interface for the public, one that shows the margin of error and that encourages the Umpire to make the final call.
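To make the researchers’ suggestion concrete, here is a minimal sketch of that decision rule in Python. This is my own illustration, not Hawk-Eye’s actual logic; the function name, the sign convention, and the use of the makers’ 3.6mm figure as the threshold are all assumptions on my part.

```python
# Hypothetical illustration of a margin-of-error decision rule
# (not Hawk-Eye's real algorithm): if the measured distance of
# the ball from the line is smaller than the stated error margin,
# the machine cannot be sure, so it defers to the Umpire.

ERROR_MARGIN_MM = 3.6  # average error stated by Hawk-Eye's makers


def line_call(distance_from_line_mm: float) -> str:
    """Positive distance means the ball was measured inside the
    line; negative means outside. Calls closer to the line than
    the error margin are handed back to the human."""
    if abs(distance_from_line_mm) <= ERROR_MARGIN_MM:
        return "umpire's call"  # within the margin: too close to trust
    return "in" if distance_from_line_mm > 0 else "out"


print(line_call(10.0))  # well inside the line: machine can call it
print(line_call(1.0))   # the 1mm case from above: defer to the human
print(line_call(-8.0))  # well outside: machine can call it
```

Under this interface, the 1mm-in call that the current system presents as certain would instead come back as “umpire’s call”, which is exactly the shift in presentation the Cardiff paper is after.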
Whether we worry about inexact measurements or potentially buggy/corrupt machines, we need to be thinking much more closely about the influence that computerized decision-making is having on our lives. We are becoming less and less autonomous, and our blind faith in the unemotional machines needs to be constantly challenged. Finding a way to use machines to help us make decisions while retaining trust and autonomy, now that’s an interesting long-term research challenge.