Responding to Ronald

In response to my recent post regarding open-audit voting, Ronald Crane expresses a number of doubts regarding cryptographic auditing of elections, concluding “I don’t see that crypto voting solves much.” I am responding in detail here because Ronald is deeply misinformed. There are certainly points regarding open-audit techniques that merit in-depth discussion, but the points Ronald brings up are precisely those where cryptographic auditing shines, and it’s important to correct these misunderstandings quickly.

First, they are not proof against many presentation attacks (e.g., dropping candidates from the ballot, rearranging the ballot, modifying the headers between races, modulating the sensitivity of the touch-screen to make it more difficult to select certain candidates…)

Let’s assume for a second that this is true. It turns out that every current voting system, be it opscan, touch-screen, punchcard, etc…, is vulnerable to these kinds of issues. Ronald may be using this argument to promote hand-counted paper ballots. Sadly, hand-counted paper ballots are completely unworkable in large precincts, and they’ve been shown time and time again to be extremely unreliable because humans simply don’t count things very well.

But even if you’re not convinced and you cheer for hand-counted paper ballots, it turns out that Ronald’s claim is actually not true to begin with: open-audit voting systems can do better than anyone else on this front. Consider Benaloh’s latest “Simple Cryptographic Voting”, where he proposes splitting the ballot preparation machine from the ballot casting machine. This is a bit like the machines that help you mark an opscan ballot if you need audiovisual help, except in Benaloh’s approach, the machines help you prepare an encrypted ballot. The beauty of the Benaloh scheme is that you don’t need to authenticate the user at the time of ballot preparation, only at ballot casting time. In other words, you can let the ballot preparation machines be used by voters multiple times if they so choose, e.g. if they’re not happy about their previous experience for whatever reason. Even more interesting, you can weave auditors into the mix, letting them use the ballot preparation machines as much as they want during election day. Think of it: ACLU reps, party reps, all intermingled with voters, testing the ballot preparation machines live, flagging any calibration issue, presentation issue, etc…. It’s extremely powerful, especially since it doesn’t require any extra work from the voter.

In other words, open-audit voting systems have such flexibility that they can, in fact, be far superior to other voting systems regarding ballot presentation issues and machine input biases.

nor against delay- or denial-of-service attacks.

All voting systems are vulnerable to delay and denial of service… except with cryptographic auditing, you actually know who got denied: Alice’s encrypted vote doesn’t show up on the bulletin board, even though she has a receipt and there is a record of her voting. This is yet another area where cryptographic auditing shines: while you can never prevent denial-of-service, with cryptographic auditing, you can detect and remedy it.

Ronald: you really need to check out the schemes in detail, in particular the part about the public bulletin board, which lets you know exactly whose votes are being counted. There’s no “black box” in open-audit voting, it’s all out in the open, which means that any such process attack is visible to all observers.
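To make the bulletin-board check concrete, here is a toy sketch in Python. This is a hypothetical minimal model, not any particular scheme: the receipt format, the `check_receipt` helper, and the byte strings are all illustrative, and real schemes post encrypted ballots with accompanying zero-knowledge proofs. The inclusion check, though, is the same idea: anyone holding a receipt can verify that the corresponding ballot is on the public board.

```python
import hashlib

def receipt_for(encrypted_ballot: bytes) -> str:
    # In this toy model, a receipt is just a cryptographic hash of the
    # posted ciphertext.
    return hashlib.sha256(encrypted_ballot).hexdigest()

def check_receipt(board: dict, voter: str, receipt: str) -> bool:
    # Anyone can verify that the voter's ballot appears on the public
    # bulletin board and matches the receipt in hand. A missing or
    # mismatched entry is publicly visible evidence of a dropped vote.
    posted = board.get(voter)
    return posted is not None and receipt_for(posted) == receipt

# The bytes below stand in for a real encrypted ballot.
board = {"Alice": b"encrypted-ballot-bytes"}
alice_receipt = receipt_for(board["Alice"])

print(check_receipt(board, "Alice", alice_receipt))  # True: her vote is posted
print(check_receipt(board, "Bob", alice_receipt))    # False: Bob's vote is missing
```

The point of the sketch is that the check requires no trust in the machine or the officials: the board is public and the receipt is in the voter’s hand.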

Second, though they might (or might not) be proof against vote-flipping attacks, they are not proof against vote-cancellation attacks. In such an attack, the attacker programs the machine to generate a corrupted electronic record of her vote, along with a matching cryptographic receipt.

Okay stop. This is very very wrong. The whole point of the cryptographic receipt is that this exact situation cannot happen. Ever. Not in a million years. No one can fake this proof. That’s mathematically proven. And I don’t mean “it’s safe as long as factoring large numbers is hard.” I mean it’s safe as long as you agree that it’s incredibly unlikely that I’m going to win the lottery every day for the next 10 years.

So, if a machine fakes a proof, and the receipt is checked, it will get caught. Always. It then takes a very small percentage of receipt auditing to catch a cheating machine, and a cheating machine is immediately investigated forensically. Since we can easily trace who cast a vote on that machine, all of those votes’ proofs can be checked, and those voters whose proofs don’t check out can actually revote.
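The “very small percentage of receipt auditing” claim is easy to quantify. Assuming audits are independent and each receipt is checked with probability equal to the audit rate, a machine that fakes k proofs escapes detection only if none of those k receipts is ever checked:

```python
def escape_probability(cheated_ballots: int, audit_rate: float) -> float:
    # A cheating machine escapes only if none of the ballots it faked
    # has its receipt checked; each check independently catches it.
    return (1 - audit_rate) ** cheated_ballots

# Even a modest 5% audit rate makes cheating at scale a losing bet:
for k in (10, 50, 100):
    print(k, escape_probability(k, 0.05))
```

With a 5% audit rate, faking 100 ballots leaves the machine well under a 1% chance of going unnoticed, and the chance shrinks exponentially as the cheating grows.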

Yes, that’s right, an open-audit voting system actually lets you detect what went wrong, and have only those people whose votes were miscaptured revote.

When the votes are tallied, the corrupt record will either not decrypt to anything sensible, or will decrypt, but will contain a bad signature (depending on the crypto scheme). Now it doesn’t matter whether the voter checks the tally, since both her electronic record and her receipt are corrupt.

Again, in crypto voting schemes, this is impossible. Ronald seems to think the receipt is like a Fedex tracking number, where the back-end database might simply not have a record for your receipt. This is incorrect. Your receipt proves that your vote was correctly captured, no matter how the machine is programmed. You don’t have to trust the program, the proof is in your interaction with the machine.

There are some schemes in which it is indeed possible that a vote would end up “corrupt” if the machine is cheating. But again, if that happens, it’s fully visible to all, and fully traceable back to the machine in question. Such a corrupt vote can only result from machine error, not the voter’s error, so it’s absolutely not a vote cancellation attack. If such a corrupt vote is detected, heads will roll, people will go to jail, and the few people affected will easily be able to revote. The big point to remember here is: open-audit means anyone can audit. Mistakes cannot hide, errors are attributed to the guilty party, and recovery is very doable.

Now let’s assume that the attacker corrupted enough records to theoretically flip the election. What do the officials do? Write it off as a “glitch” and certify the election, as is all too common with existing e-voting systems? Order a forensic investigation that concludes long after the fact, long after the attacker’s program has erased itself, and long after the election has been certified? Order a re-vote?

So again, the presumption that the attacker can corrupt votes is simply false. But even if, somehow, the attacker manages to do this, and there do appear to be corrupted votes at the end of the day, then what? Well, again, with cryptographic voting schemes, everything is traceable. You can trace those corrupt votes back to the name of the voter (with agreement of all trustees of course, not everyone can do this), figure out which machine they voted on, begin a serious forensic investigation, and recover only those votes that need recovering.

The Take-Away Message

Ronald’s message implies that there are certain voting systems which never suffer from the problems he mentions: corrupt votes, evil voting machines, stuffed ballot boxes, destroyed ballot boxes, etc…. This is obviously false. All voting systems can be attacked. The difference is that when an open-audit system is attacked, everyone sees it, the attack can be traced and localized, the guilty people get caught for sure and the problem can be remedied with the most minimal intervention. In any other system, you’re lucky to even detect the problem, and, even if you do, what can you do? 300 extra paper ballots in the ballot box? How can you recover? Impossible.

Cryptographic auditing is not about having trusted machines produce extra signatures that can be checked only if all goes well. It’s about having untrusted machines be forced to prove mathematically that they did the right thing. No software needs to be trusted. The beauty of the latest schemes, such as Benaloh’s, is that the checking can be done entirely by political parties and activist organizations: voters don’t need to do anything more than vote, get a receipt, and either check it themselves or hand it to a trusted helper who can check it for them.
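The “untrusted machines forced to prove they did the right thing” idea can be sketched with a toy cast-or-audit interaction in the spirit of Benaloh’s scheme. Everything here is illustrative: a hash commitment stands in for real probabilistic encryption, and the class and method names are made up for this sketch. The key property is that the machine must commit to the ciphertext before it learns whether it will be audited, so it cannot cheat selectively.

```python
import hashlib
import secrets

def encrypt(vote: str, randomness: bytes) -> str:
    # Stand-in for real probabilistic encryption: a hash commitment
    # binding the machine to (vote, randomness).
    return hashlib.sha256(vote.encode() + randomness).hexdigest()

class BallotMachine:
    def prepare(self, vote: str) -> str:
        # The machine commits first; only afterwards does the voter or
        # auditor decide whether to cast this ballot or audit it.
        self.vote = vote
        self.randomness = secrets.token_bytes(16)
        return encrypt(vote, self.randomness)

    def open_for_audit(self):
        # On "audit", the machine must reveal the plaintext and the
        # randomness; anyone can re-encrypt and compare.
        return self.vote, self.randomness

machine = BallotMachine()
commitment = machine.prepare("candidate A")
vote, r = machine.open_for_audit()
assert encrypt(vote, r) == commitment  # audit passes only if the machine was honest
```

A machine that lied about the vote inside the commitment fails this check the moment anyone audits it, which is why no software needs to be trusted: honesty is checked, not assumed.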

Ronald’s points show a deep level of misinformation, which is unfortunate. I’m going to continue to work on clarifying the message regarding open-audit voting with cryptography. There are valid issues to address regarding open-audit voting techniques (are they usable enough? Are they deployable? What laws would have to change?), but the points Ronald raises are incorrect.

22 thoughts on “Responding to Ronald”

  1. First, your point about presentation attacks is incorrect. They are a far greater vulnerability for systems in which a computational device individually presents a ballot than in hand-filled paper systems. It is true that one might attack the latter with intentionally-misprinted ballots, however, that attack is easily discovered by random statistical sampling of the ballots. And once the ballots are checked, it is difficult for them to be changed, unlike the ballot presentations on computational devices.

    Second, splitting the ballot-presentation machine from the ballot-casting machine is a form of “independent dual verification,” and is orthogonal to the use of crypto-voting, which is what I was critiquing. I have been unable to find a copy of the Benaloh paper, but the ability to audit the presentation sounds interesting. One question is whether this approach could permit a voter to vote multiple times if, for example, she acquired an extra token.

    Third, you mischaracterized delay- and denial-of-service attacks. These attacks aim to cause vote loss by increasing polling-place wait times. The votes lost are not recoverable, because voters leave the polls in frustration before casting them.

    Fourth, hand-filled paper ballot systems are far less vulnerable to DoS attacks than are e-voting systems; one need only ensure an adequate supply of ballots, markers, and ballot boxes.

    Fifth, in the vote cancellation attack I propose, I am *not* positing that an attacker “fakes” a receipt so as to indicate a vote different from the one the voter wished to cast. I am positing that she causes the machine to print a corrupted receipt, along with a corresponding corrupted electronic record. Since neither captures the voter’s selections, this attack effectively cancels her vote.

    Sixth, you assume that “if such a corrupt vote is detected, heads will roll, people will go to jail, and the few people affected will easily be able to revote.” To date, we have seen that “heads” almost never “roll” when election-related problems are discovered, no matter how egregious they are. Forensic investigations of irregularities take months to conduct, when they are conducted at all, and (when they review source code) often do not attempt to determine whether the source was properly built into an executable, nor whether the corresponding executable really was installed in the machines on election day. [1] You are correct that it is possible for voters affected by a vote cancellation attack to determine that they had been affected. But for them to re-vote raises a variety of legal, fairness, and privacy issues. The fairness issue is especially important. Imagine that key states had used crypto systems in 1992, and that someone had waged a vote cancellation attack against Perot voters. I do not doubt that many of those voters, seeing how their intended votes had contributed to a Clinton win, would have “re-voted” for Bush I, quite possibly changing the election’s outcome. Ditto a hypothetical attack on Nader voters in Florida in 2000.

    Seventh, nothing in my note “implies that there are certain voting systems which never suffer from the problems he mentions: corrupt votes, evil voting machines, stuffed ballot boxes, destroyed ballot boxes, etc.” That is a strawman.

    Eighth, you give short shrift to the possibility of intensive public chain-of-custody supervision of hand-filled paper ballot systems, saying, “300 extra paper ballots in the ballot box? How can you recover? Impossible.” No, it’s not impossible, either to prevent or (sometimes) to recover from (e.g., by use of mark-style analysis to determine that one person marked multiple ballots). Also, nothing about crypto systems prevents crooked officials from stuffing the (virtual) ballot box. Indeed, what prevents an attacker from programming the machines to stuff the ballot box?

    Ninth, “No software needs to be trusted” overstates the case, as I have just illustrated. Further, computational voting systems are more vulnerable to wholesale attacks than are non-computational systems, because their software comes from a single or a small number of sources.

    Tenth, “Sadly, hand-counted paper ballots are completely unworkable in large precincts, and they’ve been shown time and time again to be extremely unreliable because, sadly, humans don’t count things very well” again overstates the case. Many nations (e.g., Canada) hand count their ballots successfully and with a small fraction of the problems we have encountered with e-voting systems. But machine tabulation probably can be sufficiently secure and transparent when it is accompanied by appropriate random and directed hand audits.

    Lastly, please try to minimize the personal characterizations, e.g., “Ronald’s points show a deep level of misinformation.” They add only heat, not light, to the debate.

    [1] See, e.g., the report on Sarasota’s undervote problem, http://election.dos.state.fl.us/pdf/FinalAudRepSAIT.pdf at p.18: “We assume that the firmware image provided to us was compiled correctly from the source code provided to us. We also assume that the firmware image provided to us was the firmware image that was actually executed by the iVotronic machines on Election Day. These assumptions imply that the executable software executed by the iVotronic systems during the election matched the source code we examined.”

  3. Ronald,

    When I say you are misinformed, it’s not a personal attack, it’s a judgment I make based on the information you’re spreading, which is simply incorrect. In particular, you seem to use arguments against today’s DREs to attack cryptographic auditing. These arguments do not apply. I sincerely urge you to read up on cryptographic auditing some more. In the meantime, I’ll attempt to correct your mistakes.

    In the interest of keeping things short, I’ll focus first on the points you’re making regarding cryptographic voting.

    1) regarding presentation attacks: cryptographic voting systems cannot be easily attacked by changing the presentation after the fact. It’s far easier to modify paper ballots (by, for example, destruction and replacement) than it is to modify a cryptographic ballot that is publicly available for all to see, where any change will be visible to all parties, all activist organizations, etc…

    2) If the voting machine prints a bad receipt, then it will get caught as long as some voters check their receipt. You’re already willing to do random sampling of ballots, and this sampling I’m describing is, in fact, easier, since voters can do this auditing themselves or hand their receipt to the ACLU who can do it for them. Unlike the ballot sampling you propose, you don’t have to count on election officials to do the sampling correctly (see the HBO Bev Harris documentary on election officials providing auditors with a “random sample”.)

    3) In a cryptographic voting system, ballot stuffing is extremely difficult to achieve without detection and recovery. All votes have to be assigned to an actual voter name, and this assignment is visible to all auditors: “Ben Adida: 0x372v8c8sdf”. If someone stuffs the ballot box, the extra ballots are highly likely to be detected and removed from the count. In a non-crypto system, once the ballot box is stuffed, there’s no recovery: you can never know which ballots are good and which are stuffed.

    4) In a cryptographic system, you don’t need to check that the right software is installed. This is an important point, and the major distinction with current DREs (and the Sarasota problem you mention). The software produces a mathematical proof that it produced the right result.

    5) As a continuation of this point: as you state, normal DREs are vulnerable to wholesale attacks because software is single-sourced. This does not apply to crypto auditing, because you don’t need to trust that single-source software. Each encrypted vote is checked independently.

    In other words, crypto auditing means you don’t trust the voting software. At all. This is the single point you don’t seem to take into consideration.

    Some other side points:

    – Denial of Service attacks: they happen all the time with paper ballots. Boston ran out of ballots in the last election and people were turned away. Each system has its own DoS issues, and no system really has the upper hand here.

    – hand counting of paper ballots: it happens only in countries where ballots are short. In the US, ballots are much longer, and “making piles” for counting is basically impossible.
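Point (3) above, every posted ballot publicly tied to a voter name, amounts to a cross-check between the bulletin board and the poll book, which any auditor can run. A minimal sketch (the names, the helper function, and the hex values are all illustrative):

```python
def find_stuffed_ballots(board: dict, poll_book: set) -> list:
    # Any ballot posted under a name that never signed in at the polls
    # is visible to every auditor and can be removed from the count.
    return [name for name in board if name not in poll_book]

# The board maps voter names to their posted encrypted ballots.
board = {
    "Ben Adida": "0x372v8c8sdf",
    "Jane Doe": "0x9a1f...",
    "Ghost Voter": "0xdead...",
}
poll_book = {"Ben Adida", "Jane Doe"}

print(find_stuffed_ballots(board, poll_book))  # ['Ghost Voter']
```

This is exactly the recovery property missing from a stuffed physical ballot box: the illegitimate ballots are individually identifiable, so they can be excised without touching anyone else’s vote.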

  5. First, your rebuttal on presentation attacks might apply to the IDV system you cited (I need to study it further. Can you provide the Benaloh paper?). It does not apply to crypto systems in which a single machine handles everything, since the machine is alone with a single voter and its presentation cannot reasonably be audited without invading the voter’s privacy (except, perhaps, by parallel testing).

    Second, Re: (2), (4), and (5), you are misunderstanding the ballot cancellation attack. And I am not arguing that the attack will not be detected. Rather, I am arguing that detecting it doesn’t help you very much, since the attack (waged by the software that you argue we don’t need to trust) replaces the targeted ballots, and the corresponding receipts, with random data. Yes, you can know that the attack occurred, and voters can learn that they were targeted. But can you reasonably do anything to fix the problem? I suspect not. Re-voting has the serious drawback that I already described.

    Third, “Unlike the ballot sampling you propose, you don’t have to count on election officials to do the sampling correctly (see the HBO Bev Harris documentary on election officials providing auditors with a “random sample”.)” incorrectly describes my position on auditing. I do not propose to trust the officials, but to make the entire process subject to direct citizen supervision, including the calculation of the number of precincts to audit, the choice of precincts, and the actual hand audits.

    Fourth, “In a cryptographic voting system, ballot stuffing is extremely difficult to achieve without detection and recovery. All votes have to be assigned to an actual voter name, and this assignment is visible to all auditors: “Ben Adida: 0x372v8c8sdf”” raises two issues: what prevents officials from “voting” ballots for registered voters who did not appear at the polls? And what prevents the auditors from learning how each voter voted?

    Fifth, “Denial of Service attacks: they happen all the time with paper ballots. Boston ran out of ballots in the last election and people were turned away. Each system has its own DoS issues, and no system really has the upper hand here” is overdrawn. With any computerized voting system, a single or small number of attackers can wage a DoS attack against potentially the entire nation at once. A similar attack against paper ballot systems must be waged precinct-by-precinct, and thus requires a far larger set of attackers.

    Sixth, ballot length does indeed affect hand counts’ practicality, which is the primary reason I am willing to accept opscan tabulation (when appropriately audited and publicly supervised).

  7. Also re: the Benaloh IDV scheme, its resistance to presentation attacks vanishes if there is any way for the ballot preparation machines to distinguish an actual voter from an auditor. If, for example, an actual voter receives a token (which she then takes to the ballot casting machine to cast her vote) but a (non-voting) auditor does not, then an attacker can program the preparation machines to cheat only when the token is present.

  9. Ronald,

    Since you agree that hand counting is impractical (and I would add imprecise), and since you then propose optical scanning machines, it seems fairly clear that you can’t then compare cryptographic auditing to the elusive ideal of hand-counted paper ballots, an ideal which doesn’t really exist anyways. Let’s compare to optical scan, then, since that system is acceptable to you.

    The main thread of my response is this: cryptographic auditing is about error detection and recovery, not about error prevention. Errors can rarely be prevented in any system. Cryptographic auditing lets you recover, and no other approach really comes close.

    About presentation issues: a machine that cheats in the way you suggest runs an incredibly high risk of being caught, especially if you take the usual precaution of giving voters access to sample ballots. That’s already the law, to prevent just this kind of attack on any system. Note also that, if a voter is unhappy in any way with his voting experience, cryptographic systems let that person revote easily, maybe even using another machine. In general, you’re now focusing on a class of problems where voters are so disinformed about the election, they don’t even know what they’re voting for. This is an extremely narrow focus. Sure, we should always consider these issues, and that’s why we always continue to research new ways to cast ballots, e.g. the Benaloh scheme. Crypto voting is not perfect, but the issue you bring up is far from disastrous and, more importantly, is already being addressed by various defenses. The threats to chain-of-custody voting via optical scan are far worse and far more generalizable to any set of voters.

    Regarding IDV, I think you’re confusing the discussion with this topic: IDV is really about the whole voting system and already includes crypto voting systems in general (see the definition). The ballot casting portion of the Benaloh scheme is not really about IDV, it’s a lot more than that: it’s parallel testing of the ballot casting portion on serious steroids: you’re not just testing similar machines, you’re testing the actual machines by interwoven auditing. You could conceivably do something similar with the machines that mark optical scan ballots for the disabled… except in the case of the Benaloh approach, because it’s cryptographic auditing, once the ballot is encrypted, there is no risk of “bad scanning” or other equipment misreading further down the chain of custody: the ballot is verified all the way to the tally.

    The Benaloh scheme can be found on the EVT 2006 proceedings site. And regarding your point as to whether the ballot casting machine knows whether it’s dealing with an auditor or voter…. you must not think very highly of cryptographers if you think that issue wasn’t considered! The Benaloh scheme definitely points that out, mentions shielding and simplifying the machine so it can have no surreptitious triggers, etc…

    Regarding re-voting, I’m glad you bring up this point, because it’s the first subjective issue we’ve hit: will we ask certain voters to re-vote if we determine their ballots were corrupted? I believe that, since open-audit can pinpoint exactly the voters who get disenfranchised, and since we can replace their votes and not affect the rest of the tally, then yes, this recovery is workable. Just because current systems don’t allow for any recovery doesn’t mean we should eschew recovery in the future. That’s the main advantage of cryptographic auditing: recovery becomes a reasonable avenue! I can see how you would disagree with this, but I’m very pessimistic about the voting systems on which you then fall back, where recovery is never an option, and the execution must then be perfect. It’s bad engineering to expect a system to work when error detection is difficult and error correction is not available.

    There’s an additional detail which I think I need to point out to explain why I think re-voting is usually trivial: in cryptographic auditing, errors in the receipts can, in almost every case, be detected immediately. In other words, a 9am voter could detect a problem in her receipt at 9:05am, and the machine would immediately be quarantined to determine what went wrong, while the voter would be directed to another machine to vote. The damage would be extremely limited and, as always, traceable and fixable.

    You ask “what prevents officials from ‘voting’ ballots for registered voters who did not appear at the polls?” Well, that’s exactly the same issue we have today: a malicious precinct can stuff the ballot box. The difference is, if stuffing is detected (which happens when the stuffing is egregious), cryptographic voting can recover: just remove the bad ballots and re-tally. Typical voting is stuck, because you can’t distinguish good from fraudulent ballots once they’re cast. Again, the issue here is one of recoverability: cryptographic voting doesn’t magically prevent all attacks, it just lets you detect and recover.
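    A toy illustration of that recovery step (the names, placeholder ciphertexts, and data structures are invented; no real protocol is shown): because each bulletin-board entry carries a voter name, auditors can cross-check it against the sign-in log, discard unmatched ballots, and re-tally the rest.

```python
# Hypothetical public bulletin board: (voter name, encrypted ballot) pairs.
sign_in_log = {"Ben Adida", "Alice Voter"}
bulletin_board = [
    ("Ben Adida", "enc:aaa"),
    ("Alice Voter", "enc:bbb"),
    ("Ghost Voter", "enc:ccc"),   # stuffed: this name never signed in
]

# Detection: any ballot with no matching sign-in is flagged.
stuffed = [(n, c) for n, c in bulletin_board if n not in sign_in_log]
# Recovery: re-tally only the valid ballots; the rest of the tally is unaffected.
valid = [(n, c) for n, c in bulletin_board if n in sign_in_log]
```

    An opaque ballot box offers no analogue of this check, since paper ballots cannot be matched back to sign-ins without destroying the secret ballot.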

    You ask “what prevents auditors from learning how each voter voted?” Did you really mean “auditors” or rather “election trustees?” Auditors, like the ACLU, the League of Women Voters, and yours truly on my home computer, don’t know how people voted because the votes on the public bulletin board are encrypted. So, when it says “Ben Adida 0x372v8c8sdf” on the bulletin board, everyone sees “Ben Adida”, but the rest is encrypted and doesn’t reveal my vote. So the next question is: who can decrypt it? This is achieved through threshold decryption: all trustees have to get together to agree to decrypt a value, and they will never do that for identified votes. You’d need all the trustees to collude to figure out how you voted… and if that happened, all current election schemes would be non-private, too (hidden cameras in the voting booth set up by officials, scanners that actually report the individual ballots in the scanned order to the officials after the fact, etc…) In fact, cryptographic voting tends to be more private, because once you leave that booth, there’s no chance to peek at your ballot as you’re scanning it: it’s encrypted.
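    To make the threshold idea concrete, here is a toy sketch (tiny parameters, additive key shares standing in for a full threshold scheme, no zero-knowledge proofs, and definitely not a deployed protocol): a bulletin-board ciphertext decrypts only when every trustee contributes a share.

```python
import random

p = 2**61 - 1   # a Mersenne prime; toy-sized, NOT a cryptographically safe choice
g = 3           # fixed public group element

# Each of three trustees holds a private key share; the full key never
# exists in any one place.
shares = [random.randrange(2, p - 1) for _ in range(3)]
h = pow(g, sum(shares) % (p - 1), p)   # joint public key for the election

def encrypt(m):
    """ElGamal-style encryption of an encoded ballot m under the joint key."""
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(h, r, p)) % p

def partial_decrypt(c1, share):
    """One trustee's contribution -- useless on its own."""
    return pow(c1, share, p)

ballot = 42                 # stand-in for an encoded vote
c1, c2 = encrypt(ballot)

# Only the product of *all* partial decryptions cancels the mask h^r:
combined = 1
for s in shares:
    combined = combined * partial_decrypt(c1, s) % p
recovered = (c2 * pow(combined, -1, p)) % p
```

    With any strict subset of shares, the mask does not cancel and the ballot stays hidden, which is the sense in which full trustee collusion is required.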

    Regarding the DoS attack: I think you’re going down a fringe path here. If machines break down at a whole bunch of precincts, do you really think the election won’t be extended, re-run, or called off? This would be a major visible scandal, not something that can be hidden from the public. This is very different from an attack that is hard to detect (like, say, ballot stuffing an optical scanner), so the retail vs. wholesale issue you bring up doesn’t really compute like it does for stealthy attacks. In addition, note that a whole bunch of precincts ran out of ballots in Boston this past election, not just a handful. It was a centralized problem with the election commission, not a localized one.

    Interestingly, at one point, you say the following:

    I do not propose to trust the officials, but to make the entire process subject to direct citizen supervision, including the calculation of the number of precincts to audit, the choice of precincts, and the actual hand audits.

    That’s exactly the goal of cryptographic auditing: to give auditing power to every citizen. Except the approach you’re proposing is workable only in small precincts: do you really expect Boston election officials to open up City Hall to all observers on election night? That’s completely unrealistic. Cryptographic auditing is about giving power to citizens in a real deployable, scalable way, taking into consideration that we can’t all be there on election night, not even close. So, at a high level, you and I share the same goal. I hope we can at least agree on that. I also hope this gives you a better perspective on cryptographic voting.


  11. From the top, I do not “agree that hand counting is impractical” in every case. There are probably some cases (e.g., the California gubernatorial recall election) in which that is so, and many in which it is not. Hand counts in the precincts of origin should be the default, because of all tabulation systems, ordinary citizens understand them best and, thus, can supervise them most effectively. [1]

    About presentation issues: a machine that cheats in the way you suggest runs an incredibly high risk of being caught, especially if you take the usual precaution of giving voters access to sample ballots. That’s already the law, to prevent just this kind of attack on any system.

    I disagree. Sample ballots are demonstrably not very useful at helping voters detect problems in the actual ballot presentation. For example, the Sarasota sample ballot includes a nice bold heading for the U.S. representative contest, clearly separating it from the other races. http://www.srqelections.com/SampleBallots/sample%20ballot%20general%202006.pdf . The DREs Sarasota used de-emphasized the separation, and many people are now attributing the massive undervote in that race (~13%) to that factor. http://www.heraldtribune.com/apps/pbcs.dll/article?AID=/20061210/NEWS/612100869/-1/NEWS0521 . Also consider that most elections are decided by margins smaller (often much smaller) than that. Indeed, Sarasota itself was decided by ~0.16%. So, whatever one thinks happened there, the event shows that a presentation defect can affect large numbers of votes. Nothing prevents a presentation attack from selectively creating such a defect. Thus, presentation attacks are not, as you say, “an extremely narrow focus” that affects only “voters [who] are so disinformed about the election, they don’t even know what they’re voting for.”

    This leads us to procedural attacks, of which presentation attacks are really a subset. The more complex the voting process becomes, the easier it is for an attacker to mislead voters about the appropriate procedure. For example, the security of the VHTI crypto-voting protocol (http://www.votehere.net/vhti/documentation/vsv-2.0.3638.pdf ) depends upon the relative inability of the machine to guess the voter’s choice of c (see s.4.2.1). However, the machine need not guess it at all if the voter is not intelligent, well-informed about appropriate procedure, unrushed (!), and equipped with enough gumption to question deviations from appropriate procedure. For example, the machine could simply solicit the voter’s choice, choose the candidate i that it wants, choose c itself, print out the entire receipt at once, record the corresponding Bv containing its choice, and print “Thanks for voting!” Then, when the voter’s auditor verified that the voter’s “vote” made it into the tally, there would be no mismatch, and the voter’d think everything was just fine and dandy.

    This is likely to work because the voter has not the faintest idea why the choice of c or the order of her interactions with the machine is important — and neither do the pollworkers or the elections officials.

    Probably the only semi-reliable way to deter this attack is parallel testing, which puts us pretty much right back where we are with plain old DREs: retail procedural defenses against wholesale attacks.

    Now maybe Benaloh offers some protection here; I have to read his paper in detail. But I hope you see that attackers can rather easily wage “social engineering” attacks against voting systems whose procedures fundamentally mismatch voters’ intuitions about voting.

    As for whether I don’t “think very highly of cryptographers if [I] think that issue [vote-creation machines distinguishing between actual voters and auditors] wasn’t considered!,” I try to question everything, rather than relying on reputations. If more people had questioned things when vendors started pushing e-voting systems, we’d be a lot better off now than we are. Along those lines, I certainly will read the Benaloh paper in detail.

    But just off the bat, I have found an attack on Benaloh that can substantially reduce auditors’ effectiveness at finding presentation attacks. In this attack, the attacker programs the “vote creation device” to increment a counter on the encrypted ballot media each time a person inserts it into the device. She also programs the vote casting machine to zero this counter. When a person inserts the token into the vote creation device, it first checks the counter’s value, then increments it. If the original value was zero, it wages a presentation attack with probability p. If it’s nonzero, it does not wage the attack. Since an auditor unaware of this attack is likely to use the same media to audit multiple creation devices (or the same device more than once), this attack greatly reduces the auditor’s probability of detecting a presentation attack. Quantitatively, the probability of detection (Pd) for a single auditor who conducts n tests using the same media is 1-(1-p)^n, but the counter-check attack reduces this probability to simply p, irrespective of n. If p=0.03 and n=7 (for example), this attack reduces Pd from 0.19 to 0.03. It might be that Pd=0.03 is low enough for officials to dismiss a discovery as “voter error,” “a glitch,” or even “malicious monkeywrenching by the auditor” (please don’t forget how defensive officials often get about their voting systems).
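    The arithmetic in the paragraph above can be checked directly (variable names are invented; the probabilities are those given in the text):

```python
# Detection probability for n independent audits of a machine that cheats
# with probability p on each use: Pd = 1 - (1 - p)^n. The counter-check
# attack leaves only the first insertion live, collapsing Pd to p.
p_cheat, n_tests = 0.03, 7
pd_independent = 1 - (1 - p_cheat) ** n_tests   # about 0.19
pd_counter_attack = p_cheat                     # 0.03, regardless of n
```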

    This attack can be improved by adding a social-engineering element, which is to attack the presentation only when the polls are busier than a certain threshold. The threshold is calculated to be such that pollworkers would be likely to limit the number of auditors some time before it is reached — for the laudable purpose of ensuring that actual voters aren’t unreasonably delayed by competition for creation devices. The threshold would be expressed in terms of voting rate, which the casting devices easily could be programmed to measure.

    I think that the Benaloh auditing-for-presentation-attack scheme probably cannot realistically be implemented in an effective manner. The entire process is too unlike what voters and pollworkers expect, and thus creates new opportunities for social-engineering attacks that voters and officials are not primed to recognize. This is also, I think, true of the VHTI voter-challenge process, as I noted above. [2]

    Per instant discovery of corrupted ballots and instant re-voting, this approach implicitly requires networking between a publicly-accessible verification system and the casting machines in each polling place. And since the casting machines and the creation (presentation) machines share media, this approach also effectively networks the creation machines, as well as the voter-registration verification machines. Networking raises the possibility of attacks involving realtime monitoring of, and interference with, vote creation and casting. It also creates an opportunity for non-insiders to exploit backdoors or bugs to wage attacks remotely.

    These attacks make me very leery of networking any machines involved in election administration, particularly when the polls are open — though malware potentially can be injected at any time and remain latent indefinitely.

    Regarding DoS attacks, they are not “fringe” if they’re conducted with discretion. That means not shutting down polling places, but introducing enough delays at the right times to cause sufficient vote loss to accomplish the attacker’s goals. The “right times” are easily determined by measuring voting rates. And the delays can be introduced at several points in the voting procedure, such as following the insertion of the token, between screens, when printing, etc., so as to substantially lengthen the required voting time without raising undue suspicion. This attack also takes advantage of the intuition among those inexperienced in computers that machines that are being used a lot should be slower than machines that aren’t. (Do ordinary voters understand how low a voting machine’s CPU’s duty cycle actually is?)
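    A sketch of that rate-triggered slowdown (the threshold and delay values are invented purely for illustration): delay is injected only above a turnout threshold, so lost votes track peak turnout while the machine merely looks busy.

```python
# Hypothetical slowdown logic for a compromised voting machine: behave
# normally when the polls are quiet, stretch each interaction when busy.
def extra_delay_seconds(votes_in_last_hour, threshold=40, delay=20):
    return delay if votes_in_last_hour > threshold else 0

quiet = extra_delay_seconds(15)   # off-peak: no delay, nothing to notice
busy = extra_delay_seconds(75)    # peak turnout: every step slows down
```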

    Another possible kind of DoS attack is against computerized voter-registration verification machines: “Oh, you’re a felon! You can’t vote!” Benaloh contemplates the use of such machines (s.4.1), and they’re already used with certain DRE systems, e.g., http://phx.corporate-ir.net/phoenix.zhtml?c=106584&p=irol-newsArticle&ID=774350&highlight .

    Now on to box stuffing: it need not be “egregious” to be successful. On the contrary, I think most potential attackers understand that blatant attacks are more likely to be caught than subtle ones. You do appear to be correct that it might be easier to remedy a box stuffing attack with a crypto voting system than with a standard paper one. That’s good, because it’s also easier to wage one. In both cases the attacker must falsify the polling place’s sign-in log, but with the paper system she must also physically steal, mark, and stuff individual paper ballots. In the crypto case, she need only get some time alone with a machine. Worse, if the crypto system includes a voter-registration verification system, it might be susceptible to wholesale box-stuffing attacks.

    Conclusion
    While I can agree that crypto-voting aims “to give auditing power to every citizen,” it also substantially complicates election administration and voting, which creates new opportunities for attack, both of the technical variety and of the social-engineering variety. Unlike most attacks on paper systems, many of these can be waged wholesale [3], and most (all?) of them are beyond the ken of most voters, pollworkers, and elections officials. Thus, while crypto-voting can add certain verifiability features to elections, it is not a “holy grail” [4] and should not be represented as such. Recall how cautiously cryptographers approach the use of new ciphers. We should treat crypto-voting more cautiously than that, because its use affects not just the security of some early-adopters’ data, but potentially the health of the entire body politic.

    —————————-
    [1] I do think that central-count procedures (e.g., the one you call “completely unrealistic”) are significantly riskier than precinct-count procedures, because the chain of custody is longer and thus easier to break.

    [2] More broadly, the possibility of these kinds of attacks (and of ones that I and others have not yet discovered) makes me generally unhappy about leaving a voter alone with a programmable machine. This is one of the main reasons I favor hand-filled paper ballots. BTW, the issue of hand vs. machine tabulation is largely orthogonal to the issue of paper presentation and hand filling vs. machine presentation and filling.

    [3] Wholesale attacks are worse than retail attacks not just because they greatly multiply an attacker’s reach, but because they will tend to push all jurisdictions’ results in a single direction. Retail attacks, being conducted by many attackers of varying motivations, have some tendency to cancel each other with respect to cross-jurisdictional races (e.g., gubernatorial, Presidential, etc. races).

    [4] See, e.g., https://benlog.com/articles/2007/03/08/on-fully-informed-decisions-and-the-role-of-academics/ .

  12. From the top, I do not “agree that hand counting is impractical” in every case. There are probably some cases (e.g., the California gubernatorial recall election) in which that is so, and many in which it is not. Hand counts in the precincts of origin should be the default, because of all tabulation systems, ordinary citizens understand them best and, thus, can supervise them most effectively. [1]

    About presentation issues: a machine that cheats in the way you suggest runs an incredibly high risk of being caught, especially if you take the usual precaution of giving voters access to sample ballots. That’s already the law, to prevent just this kind of attack on any system.

    I disagree. Sample ballots are demonstrably not very useful at helping voters detect problems in the actual ballot presentation. For example, the Sarasota sample ballot includes a nice bold heading for the U.S. representative contest, clearly separating it from the other races. http://www.srqelections.com/SampleBallots/sample%20ballot%20general%202006.pdf . The DREs Sarasota used de-emphasized the separation, and many people are now attributing the massive undervote in that race (~13%) to that factor. http://www.heraldtribune.com/apps/pbcs.dll/article?AID=/20061210/NEWS/612100869/-1/NEWS0521 . Also consider that most elections are decided by margins smaller (often much smaller) than that. Indeed, Sarasota itself was decided by ~0.16%. So, whatever one thinks happened there, the event shows that a presentation defect can affect large numbers of votes. Nothing prevents a presentation attack from selectively creating such a defect. Thus, presentation attacks are not, as you say, “an extremely narrow focus” that affects only “voters [who] are so disinformed about the election, they don’t even know what they’re voting for.”

    This leads us to procedural attacks, of which presentation attacks are really a subset. The more complex the voting process becomes, the easier it is for an attacker to mislead voters about the appropriate procedure. For example, the security of the VHTI crypto-voting protocol (http://www.votehere.net/vhti/documentation/vsv-2.0.3638.pdf ) depends upon the relative inability of the machine to guess the voter’s choice of c (see s.4.2.1). However, the machine need not guess it at all if the voter is not intelligent, well-informed about appropriate procedure, unrushed (!), and equipped with enough gumption to question deviations from appropriate procedure. For example, the machine could simply solicit the voter’s choice, choose the candidate i that it wants, choose c itself, print out the entire receipt at once, record the corresponding Bv containing its choice, and print “Thanks for voting!” Then, when the voter’s auditor verified that the voter’s “vote” made it into the tally, there would be no mismatch, and the voter’d think everything was just fine and dandy.

    This is likely to work because the voter has not the faintest idea why the choice of c or the order of her interactions with the machine is important — and neither do the pollworkers or the elections officials.

    Probably the only semi-reliable way to deter this attack is parallel testing, which puts us pretty much right back where we are with plain old DREs: retail procedural defenses against wholesale attacks.

    Now maybe Benaloh offers some protection here; I have to read his paper in detail. But I hope you see that attackers can rather easily wage “social engineering” attacks against voting systems whose procedures fundamentally mismatch voters’ intuitions about voting.

    As for whether I don’t “think very highly of cryptographers if [I] think that issue [vote-creation machines distinguishing between actual voters and auditors] wasn’t considered!,” I try to question everything, rather than relying on reputations. If more people had questioned things when vendors started pushing e-voting systems, we’d be a lot better off now than we are. Along those lines, I certainly will read the Benaloh paper in detail.

    But just off the bat, I have found an attack on Benaloh that can substantially reduce auditors’ effectiveness at finding presentation attacks. In this attack, the attacker programs the “vote creation device” to increment a counter on the encrypted ballot media each time a person inserts it into the device. She also programs the vote casting machine to zero this counter. When a person inserts the token into the vote creation device, it first checks the counter’s value, then increments it. If the original value was zero, it wages a presentation attack with probability p. If it’s nonzero, it does not wage the attack. Since an auditor unaware of this attack is likely to use the same media to audit multiple creation devices (or the same device more than once), this attack greatly reduces the auditor’s probability of detecting a presentation attack. Quantitatively, the probability of detection (Pd) for a single auditor who conducts n tests using the same media is 1-(1-p)^n, but the counter-check attack reduces this probability to simply p, irrespective of n. If p=0.03 and n=7 (for example), this attack reduces Pd from 0.19 to 0.03. It might be that Pd=0.03 is low enough for officials to dismiss a discovery as “voter error,” “a glitch,” or even “malicious monkeywrenching by the auditor” (please don’t forget how defensive officials often get about their voting systems).

    This attack can be improved by adding a social-engineering element: attack the presentation only when the polls are busier than a certain threshold. The threshold is chosen so that pollworkers would likely limit the number of auditors some time before it is reached, for the laudable purpose of ensuring that actual voters aren't unreasonably delayed by competition for creation devices. The threshold would be expressed in terms of voting rate, which the casting devices could easily be programmed to measure.
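    The gist of the rate trigger can be sketched in a few lines (purely illustrative; all names are mine and nothing here comes from any actual voting system):

```python
from collections import deque

class RateTrigger:
    """Sliding-window estimate of the voting rate; the hypothetical
    attack fires only while the rate exceeds a threshold."""

    def __init__(self, threshold_per_hour: float, window_s: float = 3600.0):
        self.threshold = threshold_per_hour
        self.window = window_s
        self.casts = deque()  # timestamps (seconds) of recent votes

    def record_cast(self, t: float) -> None:
        self.casts.append(t)
        while self.casts and t - self.casts[0] > self.window:
            self.casts.popleft()

    def busy(self, t: float) -> bool:
        # Drop stale timestamps, then scale the count to votes/hour.
        while self.casts and t - self.casts[0] > self.window:
            self.casts.popleft()
        rate = len(self.casts) * 3600.0 / self.window
        return rate > self.threshold
```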

    I think that the Benaloh auditing-for-presentation-attack scheme probably cannot realistically be implemented in an effective manner. The entire process is too unlike what voters and pollworkers expect, and thus creates new opportunities for social-engineering attacks that voters and officials are not primed to recognize. This is also, I think, true of the VHTI voter-challenge process, as I noted above. [2]

    As for instant discovery of corrupted ballots and instant re-voting, this approach implicitly requires networking between a publicly accessible verification system and the casting machines in each polling place. And since the casting machines and the creation (presentation) machines share media, this approach also effectively networks the creation machines, as well as the voter-registration verification machines. Networking raises the possibility of attacks involving realtime monitoring of, and interference with, vote creation and casting. It also creates an opportunity for non-insiders to exploit backdoors or bugs to wage attacks remotely.

    These attacks make me very leery of networking any machines involved in election administration, particularly when the polls are open — though malware potentially can be injected at any time and remain latent indefinitely.

    Regarding DoS attacks, they are not “fringe” if they’re conducted with discretion. That means not shutting down polling places, but introducing enough delays at the right times to cause sufficient vote loss to accomplish the attacker’s goals. The “right times” are easily determined by measuring voting rates. And the delays can be introduced at several points in the voting procedure, such as following the insertion of the token, between screens, when printing, etc., so as to substantially lengthen the required voting time without raising undue suspicion. This attack also exploits the intuition, common among those inexperienced with computers, that machines being used heavily should be slower than machines that aren’t. (Do ordinary voters understand how low a voting machine’s CPU duty cycle actually is?)

    Another possible kind of DoS attack is against computerized voter-registration verification machines: “Oh, you’re a felon! You can’t vote!” Benaloh contemplates the use of such machines (s.4.1), and they’re already used with certain DRE systems, e.g., http://phx.corporate-ir.net/phoenix.zhtml?c=106584&p=irol-newsArticle&ID=774350&highlight .

    Now on to box stuffing: it need not be “egregious” to be successful. On the contrary, I think most potential attackers understand that blatant attacks are more likely to be caught than subtle ones. You do appear to be correct that it might be easier to remedy a box stuffing attack with a crypto voting system than with a standard paper one. That’s good, because it’s also easier to wage one. In both cases the attacker must falsify the polling place’s sign-in log, but with the paper system she must also physically steal, mark, and stuff individual paper ballots. In the crypto case, she need only get some time alone with a machine. Worse, if the crypto system includes a voter-registration verification system, it might be susceptible to wholesale box-stuffing attacks.

    Conclusion
    While I can agree that crypto-voting aims “to give auditing power to every citizen,” it also substantially complicates election administration and voting, which creates new opportunities for attack, both of the technical variety and of the social-engineering variety. Unlike most attacks on paper systems, many of these can be waged wholesale [3], and most (all?) of them are beyond the ken of most voters, pollworkers, and elections officials. Thus, while crypto-voting can add certain verifiability features to elections, it is not a “holy grail” [4] and should not be represented as such. Recall how cautiously cryptographers approach the use of new ciphers. We should treat crypto-voting more cautiously than that, because its use affects not just the security of some early-adopters’ data, but potentially the health of the entire body politic.

    —————————-
    [1] I do think that central-count procedures (e.g., the one you call “completely unrealistic”) are significantly riskier than precinct-count procedures, because the chain of custody is longer and thus easier to break.

    [2] More broadly, the possibility of these kinds of attacks (and of ones that I and others have not yet discovered) makes me generally unhappy about leaving a voter alone with a programmable machine. This is one of the main reasons I favor hand-filled paper ballots. BTW, the issue of hand vs. machine tabulation is largely orthogonal to the issue of paper presentation and hand filling vs. machine presentation and filling.

    [3] Wholesale attacks are worse than retail attacks not just because they greatly multiply an attacker’s reach, but because they will tend to push all jurisdictions’ results in a single direction. Retail attacks, being conducted by many attackers of varying motivations, have some tendency to cancel each other with respect to cross-jurisdictional races (e.g., gubernatorial, Presidential, etc. races).

    [4] See, e.g., https://benlog.com/articles/2007/03/08/on-fully-informed-decisions-and-the-role-of-academics/ .

  13. Ronald,

    This latest message from you makes me realize that you intend to go to any length to make your point. You’ve made up what you think the Benaloh scheme is, after admitting you haven’t read it in detail, and then (surprise!) you come up with an attack against this made-up scheme. I’m not going to convince you, so I will simply stop here.

    For others who remain intrigued about cryptographic auditing, I am putting the finishing touches on a white paper that explains the process in greater detail. I will post it as soon as it’s ready.

  15. Ben, Ronald, aren’t blogs wonderful? They allow the public to read such intelligent discourse between two sophisticated writers. Thank you for the surprisingly interesting intro to cryptographic auditing.

    I think the white paper is very much needed by your new reading public (I found your site through Avi Rubin’s post on his Congressional testimony, and this is the first I’ve read about crypto auditing).

    I look forward to reading the paper, and Ronald’s comments.

  17. Mr. Adida, please show us how my presentation attack fails to do what I’ve claimed. If I’ve missed something critical about Benaloh’s scheme that is relevant to that attack, please point it out. The exchange will certainly benefit the debate about Benaloh in particular and crypto-voting in general.

  19. Ben and Ronald, you seem to be focusing only on a specific cryptographic voting scheme (for both problems and advantages). There are a very large number of proposed voting schemes that allow cryptographic auditing. Some cryptographic voting schemes do not need ANY computers for voting (e.g., Punchscan). Clearly, many of the shortcomings Ronald raises are not relevant in this case.

    In some cases “standard” paper ballots can be used in addition to the cryptographic receipts in order to provide a simple recovery mechanism if tampering is detected. This might allow us to create a “best of both worlds” type system.

  21. Hi crypto voter,

    I certainly didn’t mean to focus on one system as the only approach: I like to use the Benaloh system in my presentations because it’s particularly simple to explain and requires no additional voter overhead. Punchscan, of course, is a great example of another end-to-end verifiable system, with its own advantages and disadvantages, but with the overarching benefit that you can truly verify your vote.

    I’m glad you brought this up, actually, because it’s important to note that cryptographic auditing is a family of systems, not just a single approach. Different settings may call for different cryptographic auditing techniques, of course, and the point to focus on is the ability to truly verify that your vote counted.

Comments are closed.