I’m at EVT 2007, the USENIX/ACCURATE Electronic Voting Technology Workshop. Because I flew in on the red-eye, I had to miss the first session: three talks describing attacks on Nedap, Diebold, and Hart systems. I hear they were quite interesting.
The second session (the first I attended) started with Rice University’s “Casting Votes in the Auditorium,” which describes a way to use a local network within the precinct to provide more trustworthy event logs: each voting machine records the events announced by every other voting machine in the precinct (but not the vote content, of course). I like this idea, even though people get nervous at the idea of a network at the precinct. It will be good to prototype this system and see how easy it is to set up and run appropriately.
Ping from UC Berkeley presented an extension of his previously presented pVote to support accessibility. pVote pre-renders the voting interface to cut down the size of the voting machine software. This is cool stuff, especially because it modernizes the approach to building voting machines. That said, my preference remains solidly in the camp that gives voters some ability to verify, rather than just election officials. Ping made some interesting points on the chains of trust we implicitly place in software compilers, themselves produced by other compilers (cf. Ken Thompson’s “Reflections on Trusting Trust,” 1984).
Dermot from University College Dublin talked about verification of vote counting using formal assertions in the Java Modeling Language (JML). I’m not a huge formal methods expert, but if any application can use these techniques, it’s voting software. Dermot made an interesting point about the requirement for “shuffling,” which is a bit of an odd property to verify in software. Interestingly, he falls back on the same definition the Universal Composability folks use: the output order is always the same (effectively, lexicographically sorted).
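That definition turns an awkward requirement into a checkable assertion: instead of verifying anything about a particular permutation, you require the output to be in a fixed canonical order, which by construction carries no information about the input order. A minimal sketch of the idea (function names are mine, not from the paper):

```python
def canonical_order(votes):
    """Canonical (lexicographic) presentation of a multiset of votes.

    If the published order is always the sorted order, it reveals
    nothing about the order in which the votes arrived.
    """
    return sorted(votes)

def is_valid_shuffle(inputs, outputs):
    """The output must be the same multiset as the input, in canonical order."""
    return list(outputs) == canonical_order(inputs)
```

Any two input orderings of the same votes yield the identical output, which is exactly the property the verifier asserts.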
Over lunch, I chatted with Avi Rubin and California Secretary of State Debra Bowen. That’s right, Secretary Bowen didn’t just show up at this conference, she brought staff and is staying all day. I suspect that her work in California is going to radically change the landscape of election equipment. That’s the power of a strong politician in an influential state like California.
Auditing and Transparency
Joe Hall kicked off the post-lunch session with a discussion of election contracts and how they may prevent proper oversight. This is dry stuff, but it is likely incredibly important. He pointed out specific clauses in vendor contracts that prevent any analysis of the equipment and software. Some contracts even declare “unit pricing” to be trade secret, which, as Joe points out, is in conflict with normal government public budget reviews. Funny thing: the restrictions are so strict that the contracts then specifically carve out “permission for the voter to use the equipment for voting.” And of course, the contracts themselves are often considered confidential.
Raluca Popa, a sophomore from MIT, presented a formula for performing useful statistical audits. In other words, if you’re auditing some of the precincts to achieve a certain level of confidence that the election was not corrupted, how many should you audit? And, more interestingly, can you simplify the formula so it can be performed with a handheld calculator? The answer is yes, and the formula is quite simple. The conclusion is that California’s fixed level of auditing is completely wrong: the smaller the victory margin, the more precincts should be audited. California’s fixed 1% auditing provides decent confidence only if the margin of victory is 20%. Interestingly, the Holt Bill’s step-function auditing is not too horrible, although this new formula is far more accurate.
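The underlying computation is simple enough to restate from first principles. The sketch below is my own brute-force version, not the paper’s closed-form formula: it finds the smallest audit size by computing the exact probability of missing every corrupted precinct when sampling without replacement. The margin enters because a smaller margin means fewer corrupted precincts suffice to change the outcome (roughly margin × N / 2, if a corrupted precinct can flip all of its votes).

```python
from math import comb

def min_audit_size(num_precincts, num_bad, confidence):
    """Smallest number of precincts to audit, sampling uniformly without
    replacement, so that if at least `num_bad` precincts are corrupted,
    the audit catches at least one with probability >= `confidence`."""
    for n in range(num_precincts + 1):
        # Probability that an audit of n precincts misses every bad one;
        # math.comb returns 0 once n exceeds the clean-precinct count.
        p_miss = comb(num_precincts - num_bad, n) / comb(num_precincts, n)
        if 1 - p_miss >= confidence:
            return n
    return num_precincts
```

For example, with 100 precincts of which 2 are corrupted, 95% confidence requires auditing 78 precincts; a fixed 1% audit rate is nowhere near the right shape for close races.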
Joseph Calandrino presented “Machine Assisted Auditing,” which took up the now-common theme of auditing individual ballots vs. whole precincts. The idea is to have voting machines do a little bit of extra work to enable ballot-based auditing. I had a bit of talk fatigue (oy, jetlag), so I didn’t catch the whole presentation, but it’s worth checking out the full paper.
Stephen Goggin from Rice presented an analysis of the auditability provided by Voter-Verified Paper Audit Trails (VVPATs), as opposed to the human-verification aspect of VVPAT (which will be examined in a future paper). How error-prone and how costly is a VVPAT recount? Interesting findings: auditors are biased by the trends of votes they’ve already counted, so lopsided elections yield significantly more auditing errors. The error rate is 1.3% per candidate. And the political stratification of voters by geography means that the lopsided situation will happen often, and the error rate will be high. The human auditors used in this analysis talked about being in a “trance” while counting, commenting that the process was tedious and error-prone. This is potentially bad news for VVPAT auditing, although to folks who have looked at the low accuracy of hand counts, this is not entirely surprising. “Human counting is not without error.” David Wagner pointed out that the counting process used in this study may not be representative: oftentimes multiple auditors count the same vote simultaneously, precisely to counter this trend-based error rate.
Ryan Gardner from Johns Hopkins kicked off the afternoon session by exploring how a voting official might verify that a voting machine runs the correct software. One might use attestation to prove the state of the software, e.g. using a trusted hash. Ryan specifically mentioned Pioneer, a software-based attestation scheme that depends on timing differences, blowing up the number of iterations so the timing difference becomes human-noticeable. Then, using a poll-worker-initiated challenge-response, the voting machine proves that it’s running the right code. The first attempt required too large a multiplier to make the timing difference practical. And, in fact, with increasing computing power, it seems that this won’t be possible…
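As a toy illustration of the challenge-response idea (this is my sketch, not Pioneer itself, which relies on carefully engineered self-checksumming code rather than an off-the-shelf hash): the poll worker sends a fresh nonce, the machine iterates a checksum over its code image, and the verifier checks both the answer and how long it took. A tampered machine must either return a wrong digest or spend noticeably longer simulating the correct one.

```python
import hashlib

def attest(code_image, nonce, iterations):
    # Toy stand-in for Pioneer's checksum loop: fold the verifier's nonce
    # into an iterated hash of the (claimed) code image. The iteration
    # count is the "multiplier" that stretches the runtime gap between
    # honest and dishonest machines into human-noticeable territory.
    digest = nonce
    for _ in range(iterations):
        digest = hashlib.sha256(digest + code_image).digest()
    return digest

def verify(expected_image, nonce, iterations, response, elapsed, time_budget):
    # The verifier knows what the code image should be, so it can
    # recompute the expected digest and also enforce a deadline.
    expected = attest(expected_image, nonce, iterations)
    return response == expected and elapsed <= time_budget
```

The catch the talk identified is real even in this toy: as machines get faster, the honest computation gets cheap enough that the attacker’s overhead stops being noticeable to a human.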
Andrew from UConn presented attacks against Diebold opscan machines. He points out that there’s no way to verify the ballot layout – the correspondence between the X-Y coordinate of the bubble and the candidate name. He also points out that the machine can be put into “diagnostic mode” without a password, using just a special boot sequence. The supervisor PIN is easily recovered by reading the memory card; with supervisor access, an attacker can disable the printer, change the communication endpoint, and erase the memory card. Andrew also mentioned attacks against Diebold touch-screen machines: the parameters that drive the display are separate from those used to record results, so the two can be made to disagree and attacks can be easily carried out.
Candice Hoke from Cleveland State University talked about the architectural problems with GEMS, the Diebold election management software. She specifically mentioned issues with database normalization, or rather the lack thereof in GEMS: data duplication, lack of primary keys, etc. Diebold uses the MS Jet DB engine, which is advertised as “not appropriate for absolute data integrity.” Candice noted that vendors who would adopt higher database design standards face higher costs and delays. I asked whether we might not be better off with performance-based regulations, rather than specific prescriptions like “you must use second normal form.”
I chaired the last session, which only means I introduced the speakers.
Josh began with an introduction to verifiable voting, introducing the concept of cryptographic verification. When he distinguished “cast as intended” from “counted as cast,” some audience members didn’t quite approve… there’s disagreement here on what verification truly means (I agree with Josh on this). He mentioned VoComp to point out that there seems to be a dilemma between verification and usability: can we make a verifiable system look identical to a DRE? Josh says yes, but with some attention to detail. He points out that “cast as intended” is not nearly as well achieved as “counted as cast.” He then discussed the details of his simplified ballot casting protocol, and the potential complications (chain voting, etc.). In response to a question from Ed Felten on the privacy of the scheme, Josh pointed out that privacy can never be guaranteed, but that the goal is to guarantee integrity and encourage privacy.
Amnon Ta-Shma presented an approach to cryptographic voting that does not reveal the plaintext of the vote to the voting machine, yet remains “bare-handed.” He provided some background on Chaum, Neff, and Ryan’s schemes. He then explained the conflict between preparing a ballot in the booth (privacy) and preparing a ballot at home (coercion). Amnon concurs with Josh that privacy cannot be fully guaranteed, only made more likely. His scheme involves the voter bringing an encrypted ballot for each candidate, and having the booth re-encrypt the one he wants. That way, the booth never learns the plaintext (privacy), and the voter can’t predict the final ciphertext (no coercion). There were numerous questions about whether it’s workable to use cryptography in the first place when voters may not be very tech savvy.
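The key primitive here is re-encryption: anyone who knows only the public key can transform a ciphertext into a fresh, unlinkable encryption of the same plaintext without ever seeing that plaintext. A minimal ElGamal sketch with toy parameters (my illustration of the primitive, not Amnon’s actual protocol; real systems use much larger, carefully chosen groups):

```python
import random

# Toy parameters -- never use sizes like this in practice.
p = 467           # modulus
g = 2             # generator
x = 71            # private key, held by the election trustees
h = pow(g, x, p)  # public key

def encrypt(m, r=None):
    # Voter prepares this at home: (g^r, m * h^r) mod p.
    r = random.randrange(1, p - 1) if r is None else r
    return pow(g, r, p), (m * pow(h, r, p)) % p

def reencrypt(c, s=None):
    # The booth re-randomizes using only the public key: the plaintext
    # never appears, yet the output ciphertext is fresh and unpredictable
    # to the voter (who therefore cannot prove it to a coercer).
    s = random.randrange(1, p - 1) if s is None else s
    c1, c2 = c
    return (c1 * pow(g, s, p)) % p, (c2 * pow(h, s, p)) % p

def decrypt(c):
    c1, c2 = c
    return (c2 * pow(c1, -x, p)) % p  # c2 / c1^x mod p
```

Multiplying in g^s and h^s shifts the randomness from r to r + s, so decryption still recovers m, while the booth has touched only public values.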
Ron Rivest wrapped it all up with a description of three schemes that provide end-to-end verification without cryptography: ThreeBallot, VAV, and Twin. In his usual super clear style, he really nailed down the principles of end-to-end elections, and produced three pedagogical examples of how they might be achieved without cryptography.
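ThreeBallot is simple enough to simulate. Each voter fills out three ballots: every candidate gets a mark on exactly one of the three, except the chosen candidate, who gets marks on exactly two; the voter keeps a copy of one ballot as a receipt. Since every voter contributes exactly one “background” mark per candidate, the tally is just the total marks minus the number of voters. A sketch of the counting logic (mine, glossing over the receipt checks and the checker machine):

```python
import random

def three_ballot_vote(num_candidates, choice):
    """Return three ballots encoding a vote for `choice`.

    Every candidate is marked on exactly one of the three ballots,
    except the chosen candidate, who is marked on exactly two.
    """
    ballots = [[0] * num_candidates for _ in range(3)]
    for cand in range(num_candidates):
        marks = 2 if cand == choice else 1
        for b in random.sample(range(3), marks):
            ballots[b][cand] = 1
    return ballots

def tally(cast_ballots, num_candidates, num_voters):
    """Sum the marks, then subtract each voter's one background mark."""
    totals = [0] * num_candidates
    for ballot in cast_ballots:
        for cand in range(num_candidates):
            totals[cand] += ballot[cand]
    return [t - num_voters for t in totals]
```

No single ballot reveals the vote, yet any voter can later check that the ballot on their receipt appears unmodified on the public bulletin board, which is what gives the scheme its end-to-end flavor without cryptography.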
Overall, a fantastic day with lots of high-quality talks. EVT is shaping up to be the de facto conference for voting developments. I remain a little bit disheartened by the continuing gap between the crypto and applied security crowds. The crypto folks (me included) need to do a better job pitching this stuff, especially now that there’s an opening to improve the technology in places like California.