“This is a matter of how we prioritize the money that we spend […] Where does a lot of that money end up, anyways? […] Sometimes these dollars, they go to projects having little or nothing to do with the public good. Things like fruit fly research in Paris, France.”
Sarah Palin, US Vice-Presidential Candidate, October 2008.
A couple of weeks ago at the Chaos Communication Congress, Ulrich Wiesner mocked secure voting research. For rhetorical effect, he attempted to apply nascent techniques, like ThreeBallot and Punchscan, to real-world German elections with dozens of candidates and races, finding that a voter would have to fill out an absurd amount of information for a single ballot: more than 1,800 checkmarks.
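For a sense of where a number like that comes from: in ThreeBallot, every candidate row must receive exactly one mark across the voter’s three ballots, plus one extra mark for each vote actually cast, so the workload grows linearly with the total number of candidate rows on the ballot. Here’s a minimal Python sketch of that arithmetic; the ballot parameters are invented for illustration and are not Wiesner’s actual figures.

```python
# ThreeBallot mark-count arithmetic: each candidate row gets exactly one
# mark across the voter's three ballots, plus one extra mark per vote cast.
# The election parameters below are hypothetical, purely for illustration.

def threeballot_marks(candidates_in_race, votes_cast=1):
    """Marks one voter must make, across all three ballots, for one race."""
    return candidates_in_race + votes_cast

# A made-up complex ballot: 20 races, each with 30 candidates and one vote.
races = [(30, 1)] * 20
total = sum(threeballot_marks(c, v) for c, v in races)
print(total)  # 620 -- and German-style ballots with cumulative voting,
              # where voters cast several votes per race, grow far larger
```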
Wiesner concludes two things:
- open-audit voting is an interesting academic problem that should stay in academia, and
- paper ballots and hand-counting provide “full transparency” and have never shown any significant issues.
The second proposition, that paper ballots and hand-counting are fully transparent, is laughable. There is repeated historical evidence of ballot-box stuffing, ballot destruction, and other manipulations that should lead an objective observer to at least consider the potential security issues of paper-ballot-only elections. To ignore them entirely, to sweep them under the rug, betrays a very poor understanding of the history of voting security.
But more importantly, I’m disturbed to see a crowd of people laughing and clapping at the idea that certain research should “stay in academia,” with its clear implication that real progress never comes from an academic setting. This is an increasingly popular viewpoint in some circles, notably in the applied security world, where it has become all too common to hear: “okay, in theory this is useful, but this is the real world, buddy, and in the real world, your fancy equations and models just don’t fly.”
Of course there are failures in academia, large ones, and of course the failures are much more numerous than the successes. But ThreeBallot and Punchscan are not failures. They are prototypes, pedagogical tools, the first steps in a marathon of innovation and exploration. Already, Punchscan has led to Scantegrity, a much more practical system that probably didn’t make Wiesner’s list because it would have been quite a bit harder to mock.
So Wiesner’s attack is in bad faith because the systems he explores were never meant for the elections he tries to apply them to, and he knows it. ThreeBallot, in particular, was from the start described as a pedagogical tool, an exploration of paper-based crypto voting, not a finished product ready to run some particularly complex German election. If you read the original paper [PDF], that’s eminently clear in the first few lines of the abstract.
That’s how research begins and marches forward: with simpler systems that address a simpler problem. You don’t mess with the human genome right away, or even with a mouse genome; you start with the fruit fly and work your way up. If you try to apply fruit-fly research directly to a mammalian model, the result is indeed laughable, but the joke is typically on the person who over-interprets the results. The fruit-fly researcher never told you it would work on humans, did he?
It’s by starting on fruit flies and quickly iterating from there that research accomplishes what it does best: ground-breaking new paradigms like open-audit voting. Mocking academic research by illustrating the gap between prototypes and real-world applications reflects poorly on the person making the comparison, not on the researchers. And there’s a particularly strong irony in mocking a system designed by Ronald Rivest, whose early work on RSA was quite impractical 30 years ago but is now implemented in every web browser, even those on cell phones.
There’s plenty of work left to bring open-audit voting into practice. It’s unfortunate that one of the largest obstacles is not so much the uninformed public, but rather some partially informed folks who mock research for not solving the whole problem in one fell swoop. A little knowledge can be a dangerous thing, indeed.
UPDATE: forgot to mention that I found Wiesner’s talk through the Scantegrity blog.
Comments
6 responses to “On Bad-Faith Mocking of Academic Research”
well said
I was at Wiesner’s talk and I don’t think he was “mocking” cryptographic voting research. In fact I think the opposite — I interpreted his talk to convey that he had a sincere hope that such systems could be made to work. However, we’re not nearly there yet, and it’s important for cryptographers (and others) to remember that.
I have written what I hope is a simple introduction to the topic here.
Jonathan: I have to disagree with you, having watched the video. Putting up slides that actually count the number of marks required by ThreeBallot for a complex election is disingenuous, and yes, it is mocking. It’s also disingenuous not to point out more recent work, like Punchscan, which does not have nearly the problems implied in the talk.