Voting Security Cheatsheet [2016 Edition]

It’s voting season! Which means everyone is asking questions like:

  • wait, why can’t I vote online?
  • how hard can voting really be?
  • shouldn’t this all be open-source?
  • isn’t it just as easy to hack paper voting as electronic voting?
  • is Russia hacking our voting machines?
  • why do we even need voting machines when other countries count by hand?
  • maybe there’s enough time to fix things before November 8th?
  • hasn’t the blockchain solved voting already?

For your convenience, I have compiled this handy election technology & security cheat-sheet.

  1. you can’t vote online for good reason. (a) We don’t know how to make sure the device you use to vote has correctly captured your voting intent – it might have been compromised such that when you vote for Alice, it votes for Bob instead. (b) Though we know of a number of techniques to tally electronic votes in a publicly verifiable way that also preserves individual privacy, we are far from deploying these at scale. Reason (a) on its own, however, is reason enough not to vote online.
  2. getting voting right is really hard. Since everyone has a stake in the outcome, you can’t outsource the trust to any one person or organization. You have to preserve the privacy of individual votes even against the wishes of the voter herself, otherwise voters can be coerced, and yes coercion has been a concern throughout history and remains a concern today, in 2016, in the US. And you have to provide some process that everyone, even the loser, can trust. In other words, you need a process auditable by everyone, without placing much trust in any given person or organization, while deleting critical information (who voted for what).
  3. open-source doesn’t solve the problem. Yes, it would be cool if voting machines used only open-source software. But how would you know the software that was audited is the same as the software running on the machine? Doesn’t solve the problem.
  4. paper ballots collected and tallied at each precinct are vastly more secure. It’s quite difficult to corrupt a distributed counting process, where every precinct publishes its results and keeps paper records for recounts, all while being disconnected from the Internet. Massachusetts does this well. California does this less well as paper ballots are transported before they’re counted, thus leaving more opportunities for foul play, though it’s still pretty tricky to attack at scale. What matters in an election is scalability of attacks.
  5. yes, voting machines can be hacked. Usually it takes an in-person attack as these machines aren’t networked, but apparently some are and that’s just crazy. This is why you probably want paper records of all votes, and why optical scan voting machines are best, since they start and end with paper. But again, to hack voting machines requires being at the precinct, which isn’t scalable. Except of course if the machines are on the network, and again that’s just insane.
  6. you can’t count ballots by hand in the US because we vote for a dozen offices and ballot initiatives. If we just voted for one thing, e.g. President, then counting by hand would be highly preferable and plenty fast: just make piles. You could even weigh the piles to count them quickly. The process for counting up a dozen or more questions on paper by hand simply doesn’t work at scale. This part is sometimes hard to believe, but it is the real issue, and the central reason why we have voting machines.
  7. the Blockchain doesn’t solve voting. At best it solves one part of the voting process, which isn’t even the hardest part. Combining vote privacy and tally verifiability is the hardest part, and Blockchain doesn’t solve that.
  8. it’s way too late to change anything for November 8th. The process for certifying new voting machines / processes takes years. If you want to make things better, start now for 2020.

What John McCain could say

[This is … hopeful fiction]

My fellow Americans,

When I ran for President in 2008, in the last stretch of the campaign, a woman at one of my rallies stood up and expressed fears about Obama because “he’s an Arab.” I could have stoked those fears, and many Republicans wanted me to. Instead, I chose to answer “no, Ma’am, he’s a decent family man, a citizen, that I just happen to have disagreements with on fundamental issues.” I chose decency over easy political gain and demagoguery. (Ignore for a moment the implication that “Arab” and “decent family man” are opposites.)

At some point we must all remember that we are Americans above all. That many of our brothers and sisters are Americans and Muslims, and that, thanks to our Constitution, there’s no conflict in saying “American” and “Muslim” in the same sentence. Captain Humayun Khan demonstrated the power of our Constitution with his ultimate sacrifice for his country. For our country, because he and I belong to the same amazing country that doesn’t discriminate on the basis of your gender, race, background, or sexual orientation.

So let me get to the point. I have many disagreements with Hillary Clinton. I despise many of her policy proposals. But she is a decent woman with a long track record of helping her fellow Americans, even when I believe the type of help she’s providing is misguided.

Donald Trump is anything but decent. He is incapable of showing respect to anyone who doesn’t support him. He cannot see the humanity in others, because there is barely any humanity in him.

So today, my fellow Americans, I choose to place country above party. I revoke my endorsement of Donald Trump, and I urge you all to vote for Hillary Clinton. I don’t agree with everything she says, but she is a good person with a good heart and the drive to make America better. Her opponent is unfit for duty, unfit for political service, and unfit for American leadership.

-John McCain.

On Apple and the FBI

If you pay attention to tech/policy stories, then surely you know about the Apple/FBI situation. Though this story has been broadly covered, I don’t think we’re having the right debate. And the right debate is, of course, very subtle. So here goes my attempt to nail that subtlety.

What’s Going On?

  • The FBI wants access to a particular criminal/terrorist’s iPhone. They have a warrant.
  • The iPhone is locked, and if the FBI tries a few bad PIN codes, the phone will erase its data as a defense mechanism. Also, iPhones are programmed to slow down password attempts after a few bad guesses, which means that, even if the auto-erase feature were not activated, it would take the FBI years to laboriously try enough PIN codes.
  • Changing the iPhone’s behavior – say to allow as many PIN code attempts as fast as possible – is doable via a software update, but iPhones are programmed such that they accept only software updates blessed by Apple.
  • The FBI wants to compel Apple to program and bless this new behavior so they can software-update the phone and go guess the PIN code quickly and without self-destruct.
  • The FBI is happy with a very narrow solution: the updated behavior can be hard-coded to function only with that particular iPhone, and the FBI is willing to never touch that new iPhone operating system. They’re content with having Apple effectively extract the data for them.

Some say the FBI could find other avenues

Is this the only way the FBI can get at this data? Is this data even that valuable? It’s a bit dubious, in my opinion. The FBI already has iCloud backups straight from Apple servers, phone call metadata and texts from Verizon, etc. Is there really some key data on the device left to discover? Doubtful.

Also, hardware-security experts are arguing that, given a few hundred thousand dollars, the FBI could find a way to bypass the iPhone’s restriction that a software update has to be blessed by Apple. This seems possible, though I can imagine how it might be difficult for the FBI to develop that specific expertise urgently.

All in all, I’d say it’s pretty clear the FBI doesn’t strictly need Apple to comply. What’s probably happening is that the FBI is using this as a test case for the general principle that they should be able to compel tech companies to assist in police investigations. And that’s pretty smart, because it’s a pretty good test case: Apple obviously wants to help prevent terrorist attacks, so they’re left making the slippery-slope argument in the face of an FBI investigation of a known terrorist. Well done, FBI, well done.

So this is a backdoor? That bad guys can use, too?

This is where I break with other privacy advocates. It’s a significant overstatement to claim that the FBI’s request could provide them with the technical means to penetrate other iPhones. I call BS when Tim Cook says:

In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession.

The FBI has explicitly stated that they’d be happy with Apple performing this software update without ever shipping the software to the FBI, and, as an additional constraint, with Apple tailoring the update so it functions only on that one iPhone in particular.

There’s a key difference here between this FBI request – access to a single device in physical custody with a warrant – and prior demands from FBI/NSA – access to any encrypted channel, with or without physical custody of a device. The latter requires engineering all encrypted channels to provide law-enforcement access and is so complex that it’s almost guaranteed to create new security holes, especially with respect to foreign governments aiming for broad surveillance. The former is doable, should Apple want to engineer this capability into their phones. Not completely without risk – in particular when devices are confiscated at customs and such – but much more doable.

So … slippery slope or not?

Technically speaking, I don’t think so. Apple granting this request will not technically enable the FBI to get into other phones.

But legally speaking? I’m a little bit out of my depth here, but from everything I’m reading, I’d say there seems to be a clear legal slippery slope risk. If Apple can be compelled to program and bless code that weakens the phone’s security, then maybe courts will force Apple to help in other ways. Update a criminal’s phone remotely, maybe, because that criminal is on the run? Or wholesale give the FBI the capability to perform software updates themselves? Which would then amount to the remote built-in backdoor and the introduction of unacceptable security risks for everyone.

So why are technologists all worked up?

So technologists are all worked up. I’m pretty worked up. This is a big deal. I’m on Apple’s side, but not for Apple’s stated reasons. We’re not dealing with a universal backdoor request, and we’re misleading the public if we say that.

The three reasons why this is a big deal are:

  1. there is that legal slippery slope, see above.
  2. starting with the PATRIOT Act, the US government seems to be increasingly in the business of bypassing due process. National Security Letters, for example. What if the FBI’s next request to Apple is done in secret, with a gag order so Apple can’t talk about it? What if the FBI’s next request is for the all-out ability to update any phone with any software they choose, without looping Apple in ever again? Is this our one and only chance to stop this behavior before it goes dark?
  3. foreign governments making the same requests without due process because they have no such thing. Yeah. Oy. What do we do about them? Can Apple really be in the position of deciding which governments have reasonable due process?

What happens next?

Legally speaking, I have no idea, but I worry the FBI will win this one.

So, technically speaking, I think what happens next is that Apple begins to engineer phones such that they can no longer assist the FBI, even if compelled by court order. Here’s my specific bet: right now Apple can update a phone’s entire software stack to reconfigure a particular phone’s behavior, including number of PIN tries and delays – the most secure parts of the phone. I bet Apple will move towards making the most sensitive parts of that stack updatable only in very specific conditions:

  1. wipe user data, or
  2. keep user data only if the phone is successfully unlocked first.

The interesting question will be whether Apple will be legally allowed to engineer their phones this way. This will be such a fascinating and critically important discussion.

And we, technologists, fans of civil liberties and freedom, privacy advocates, we should find more subtle arguments than calling everything a backdoor and, by the transitive property of backdoor evilness, calling every law enforcement action evil. Yes, law enforcement has broken the public’s trust time and time again. Yes, the FBI is clearly playing this one to set a precedent. And yes, we should be incredibly thankful that Apple and others are standing up for user security.

Yet we have important and real issues to confront. How does law enforcement evolve in the age of universal unbreakable encryption? What should be the law-enforcement role of third-party organizations, when those third parties have access to our most intimate secrets? If we do choose, as a people, to compel third parties to assist law enforcement when served with a warrant, I hope we also couple that with the extension of Fourth Amendment protections to data we choose to store with those third parties.

This isn’t as simple as “backdoors!” And it isn’t as simple as “terrorism!” Like Tim Cook said, I’m glad we’re having this debate in public. I hope it stays in public.

Letter to My Two Sons – November 13th, 2015

[this is a little bit raw… on purpose.]

My sons,

You are just 6 and 3, and so you don’t know what happened tonight. A group of suicide bombers killed 150 people in Paris, your father’s hometown. The feeling in my gut today is much like the one I felt on that Tuesday in September 2001, as I tried to get to my office in TriBeCa, shell-shocked people on the street walking past me, thousands of dead in the rubble. Profound sadness, deep anger, frustration, and powerlessness. And this nagging feeling that one of the victims could, under slightly different circumstances, have been me or… you.

That day in 2001, I got to the office just a few blocks north of the towers, just an hour or two after they’d collapsed. I logged into one of our web servers, found an unused IP address (that’s how we did it back then, kids), and built a manual list of “people I know are safe in NYC” (a poor man’s Facebook Safety Check). I frantically emailed friends and built up the list. The URL went around to a few dozen people. A few friends and friends of friends found each other and, hopefully, a small measure of relief. In retrospect, I realize I was coping by doing the only thing I knew how to do: contribute a small positive on a day of pure horror. I don’t mean to praise myself, I simply did what all decent people did that day: help any way I knew how. I knew HTML and web servers, and so that’s what I did.

Much will be written about today, November 13th 2015. Extremists on the right will embrace confirmation bias and recommend closing borders, arming the public, and generally distrusting brown people. Extremists on the left will also embrace confirmation bias and lay the blame entirely on the West’s foreign policy.

To be honest, I don’t really know what to think. Well, no, that’s not quite true: I think those extremists on the right (including many presidential candidates today) are idiots, maniacs, and shouldn’t be allowed within spitting distance of the seat of power. They stoke the fires of retaliation and intolerance, feeding on fear to push their agenda, the furthest thing from democracy and freedom. So yeah, I guess on some level, I do know what to think.

That said… might it help to fight at the source those who committed these awful acts so they don’t get the chance to do it again? Maybe. On the flip side, did we do things that others saw as acts of aggression, for which they then retaliated? Maybe that’s part of it. Are there suicidal/homicidal maniacs who will use anything as an excuse to hurt innocents? Probably. I don’t really know for sure.

So what do we do?

If there is one thing I hope to teach you, it is this: you will not always be safe. It kills me to say this, because I am biologically wired to protect you, and yet… You shouldn’t live your life seeking safety at all costs. You shouldn’t compromise your own freedom because madmen took lives, even if it’s dozens, hundreds or thousands. You shouldn’t compromise your own freedom the second, third, and fourth time something terrible happens, either.

What you can do is choose to be one of those people who help. One of those people who make the world better, in small or big ways. You will live through many more terror attacks, stupid governments, unnecessary wars. The human condition is, in many ways, heartbreaking. You cannot make the heartbreak go away. But you can choose to be a positive force. You can choose to be a helper. Even if it’s something as small as writing a bit of HTML by hand on a warm Tuesday in September, tears streaming down your face, because it’s the only thing you know how to do and because maybe, maybe, it will help one person.

the responsibility we have as software engineers

I had the chance to chat this week with the very awesome Kate Heddleston who mentioned that she’s been thinking a lot about the ethics of being a software engineer, something she just spoke about at PyCon Sweden. It brought me back to a post I wrote a few years ago, where I said:

There’s this continued and surprisingly widespread delusion that technology is somehow neutral, that moral decisions are for other people to make. But that’s just not true. Lessig taught me (and a generation of other technologists) that Code is Law

[…]

In 2008, the world turned against bankers, because many profited by exploiting their expertise in a rapidly accelerating field (financial instruments) over others’ ignorance of even basic concepts (adjustable-rate mortgages). How long before we software engineers find our profession in a similar position? How long will we shield ourselves from the responsibility we have, as experts in the field much like experts in any other field, to guide others to make the best decision for them?

Well, I think that time has come.

Everyone uses software, very few people understand it. What seems obvious to a small elite group is completely opaque to the majority of the world. This gap is incredibly hard for us, the software engineering elite, to see. A few examples:

  • The Radiolab Podcast did a wonderful piece – Trust Engineers – where they explored the case of Facebook running experiments on its newsfeed. For non-engineers, there’s an incredible feeling of breached trust upon realizing that a set of flesh-and-blood humans have that much control over the algorithm that feeds them daily information. (And, for that matter, to most researchers used to interacting with an IRB, there’s complete shock at what Facebook did.) For most engineers, including a number of very good and ethical people at Facebook, it’s surprising that this is even an issue.
  • A couple of years ago, a friend of a friend – who happens to be a world-renowned physician and research scientist – asked me: “Ben, can the system administrators at work read my email? Even if they don’t have my password?” The answer is yes and yes. This is obvious to us engineers, so much so that we don’t even think twice about it. To a non-engineer, even an incredibly smart person, this is absolutely non-obvious.
  • A close friend, another very smart person, was discussing something with his young child recently, and I overheard “if you don’t know, ask the computer, the computer knows and it’s always right.” Where do I begin?

We, software engineers, have superpowers most people don’t remotely understand. The trust society places in us is growing so rapidly that the only thing that looks even remotely similar is the trust placed in doctors. Except, most people have a pretty good idea of the trust they’re placing in their doctor, while they have almost no idea that every time they install an app, enter some personal data, or share a private thought in a private electronic conversation, they’re trusting a set of software engineers who have very little in the form of ethical guidelines.

Where’s our Hippocratic Oath, our “First, Do No Harm?”

I try very hard to think about this in my own work, and I try to share this sense of duty with every engineer I mentor and interact with. Still, I don’t have a good answer to the core question. Yet it feels increasingly urgent and important for us to figure this out.

ben@clever

This week, I joined Clever as VP Engineering. Clever makes K-12 education software vastly more efficient and effective by simplifying how students and teachers log in. It’s this simple: imagine if you could give teachers and students 10-15 minutes back in every single class. That’s 30-40% more time for actual teaching and learning. That’s what Clever does today, with much more in the works.

I’m incredibly excited about this new adventure, and I want to gush a bit.

Priorities

My priorities in work are:

  1. people
  2. mission
  3. product

People – strong contributors who know how to work in teams that accomplish more than the sum of their parts – are my top priority. A mission and a product, no matter how good, survive contact with the real world only if backed by strong, honest team players.

A Mission – a clear goal to make a positive, socially beneficial dent in the universe – is my close second priority. Products come and go, pivots happen, but a strong mission gives an organization an invariant, a true north when the storm hits.

And finally, a strong Product. Because once you have great people, and once you have a clear and stable mission, you still need a compelling product to deliver that mission to the market. The product is the last mile of your impact on the world.

Clever

Clever meets all three of my priorities in spades.

I am blown away by the quality of people at Clever, starting with co-founders Tyler, Dan, and Rafael, and including every engineer, business-development partner, school experience advocate, recruiter, etc. Clever team members have oodles of education experience, with ex-teachers, ex-DOE, and ex-school-technologists joining hands to build the next-generation education platform. And it’s not just individual quality and skills, as Clever has also built a strong team culture, one that mirrors the values of the education products we want to see. Every team member is always a student, and Clever is a group effort.

The Clever mission is clear and deeply impactful: to save teachers and students time, all the while protecting student privacy and preserving data access controls enforced by schools.

And finally, the Clever product is catching on like wildfire. There remains a mountain of work because there’s a mountain of opportunity to make education software better. But the market is already speaking, and Clever has struck a very clear chord.

Join Us

We’re a growing team of 20-ish engineers, passionate about applying technology to making K-12 education far more effective. We’re committed to building a diverse team, because teams with a variety of life experiences build better products, and because darnit it’s the right thing to do.

Do you like working on code that makes the world a better place? Do you want to learn from and teach your teammates every day? Do you want to work on hard technical problems not just because they’re hard, but because they’re hard and impactful?

Clever’s a unique place and a unique opportunity. Use your powers for good. Send me a note.

(your) information wants to be free – obamacare edition

My friends over at EFF just revealed that Healthcare.gov is sending personal data to dozens of tracking sites:

It’s especially troubling that the U.S. government is sending personal information to commercial companies on a website that’s touted as the place for people to obtain health care coverage. Even more troubling is the potential for companies like Doubleclick, Google, Twitter, Yahoo, and others to associate this data with a person’s actual identity.

The referenced AP story uses even more damning language:

The government’s health insurance website is quietly sending consumers’ personal data to private companies that specialize in advertising and analyzing Internet data for performance and marketing, The Associated Press has learned.

Sounds pretty bad, right? Except it’s almost certainly not what it sounds like. It’s almost certainly a simple mistake.

How could this be a mistake, you ask? Here’s what almost certainly happened:

  1. Someone at Healthcare.gov wanted to analyze patterns of usage of the site. This is often done to optimize sites for better usage. So they added a tracker to their page for MixPanel, for Optimizely, for Google Analytics, and a couple of other sites that help you understand how people use your site. In all likelihood, different departments added different trackers, each for their own purposes, almost certainly with good intentions of making the web site more usable.
  2. Meanwhile, someone else responsible for social media of HealthCare.gov added a “Tweet This” button, and someone else added a YouTube video. Once again, these come in the form of widgets, often snippets of JavaScript code, that load resources from their respective home base.
  3. Separately, someone built the web form that lets you enter basic information about yourself so you can find a health plan. That information is, in large part, fairly personal: your age, your zip code, whether or not you smoke, etc. And for some reason, almost certainly completely random, they used a web form with a method of GET.
  4. Here’s the first mildly technical point. When you submit a GET form, the data in the form is appended to the URL, like so:
    https://healthcare.gov/results?zip=12345&gender=male&parent=1&pregnant=1&...

    Not a big deal, since that data is going to Healthcare.gov anyways.

  5. And now for the second mildly technical point. For tracking purposes, trackers often blindly copy the current URL and send it to their home base, so that the trackers can tell you users spent 5s on this page, then 10s on that page, etc. In addition, when your browser requests an embedded YouTube video, or an embedded tracker, it sends the current URL as part of the request in a so-called Referer field (the HTTP header’s famously misspelled name).
  6. Put those two technical points together, and boom: a web site that collects personal information with GET forms and uses third-party tracking widgets tends to send form data to those third parties.
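
To make points 4 and 5 concrete, here is a minimal sketch of what a typical third-party tracking snippet does once it is embedded on the results page. The endpoint and parameter names below are hypothetical; the point is simply that the full results URL, GET-form data included, leaves the page, both in the request the snippet makes and in the Referer header the browser attaches on its own.

    // Hypothetical tracking snippet (TypeScript/JavaScript) running on the results page,
    // where window.location.href looks something like:
    //   https://healthcare.gov/results?zip=12345&gender=male&parent=1&pregnant=1
    // The snippet copies that URL into a beacon request to the tracker's server;
    // the browser also attaches the same page URL as the Referer header of that request.
    const beacon = new Image();
    beacon.src =
      "https://tracker.example.com/collect" +
      "?page=" + encodeURIComponent(window.location.href) +
      "&ref=" + encodeURIComponent(document.referrer);
    // Setting src is enough to fire the request; nothing needs to be added to the page.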

This is extremely common. Many web sites with sufficiently large engineering teams have no idea how many trackers they’ve embedded. It’s typical for a web site to move from one site analysis tool to another and to forget to remove the first tracking widget in the process. When the Wall Street Journal reported on these issues a couple of years ago with their fantastic What They Know series, they forgot to mention that their own page has a half-dozen trackers embedded.

I’ve said it before, and I’ll say it again: unfortunately, your information wants to be free. My favorite analogy remains:

when building a skyscraper, workers are constantly fighting gravity. One moment of inattention, and a steel beam can fall from the 50th floor, turning a small oversight into a tragedy. The same goes for software systems and data breaches. The natural state of data is to be copied, logged, transmitted, stored, and stored again. It takes constant fighting and vigilance to prevent that breach. It takes privacy and security engineering.

So, am I letting Healthcare.gov off the hook? Not at all: they should have done their due diligence and performed a more thorough privacy audit. And using GET forms is particularly sloppy, since it leads to data sprayed all over the place in logs, referrers, etc.

But was this a deliberate attempt at sharing private data with private companies? Not a chance. The press should do a better job of reporting this stuff. And, to my wonderful friends at EFF, this is a gentle nudge to say: so should you. It’s important to differentiate between negligence and malice, and not to spread fear, uncertainty, and doubt, even on issues we care about.

The good news is that HealthCare.gov has already responded by (a) reducing their number of trackers significantly and (b) submitting form data using XMLHttpRequest or POST. The bad news is how many people now actually believe that this was intentional, conspiratorial data selling. If that had been Healthcare.gov’s intention, there were far sneakier ways to do it without getting caught so easily.

Oh, and if you want to understand more about trackers and block them as you surf the web, try the very excellent Ghostery extension for your browser.

managing photos and videos

This holiday, I finally spent time digging into how I manage photos and videos. With 2 young kids and some remote family and friends, this requires a good bit of thinking and planning. I know I’m not the only one, so I figured documenting where I landed might be useful to others.

I started with Dave Liggat’s Robust Photo Workflow, and found much of it resonates with my needs. Here’s where I landed:

  1. I take photos with a DSLR and two phones. My wife takes photos with her phone. We both take videos with our phones. We use Dropbox/Carousel auto-upload, which works just fine on both iOS and Android. For the DSLR, I manually load photos over USB.
  2. All photos and videos are now available on my desktop Mac (via USB or Dropbox). When I’m ready to review/edit photos, I drag and drop the batch into an all-photos/ directory I keep within my Dropbox.
  3. Hazel automatically categorizes photos and videos into subdirectories of the form 2015/01/. It’s really kind of awesome.
  4. all-photos and all-videos are thus simple date-classified folders of all of my photos and videos. They’re backed up locally using Time Machine. They’re backed up to the network using Dropbox. I can imagine eventually snapshotting this to Amazon S3/Glacier, but right now that doesn’t feel too urgent.
  5. I use Lightroom5 as an editor only, so if I blow away my Lightroom proprietary catalog info, it’s not that big a deal. To do this, I tell Lightroom to look at photos in all-photos without moving/copying them. After I’ve added a bunch of photos to the all-photos directory by drag-and-drop, I tell Lightroom to synchronize its catalog with the source folder, which takes a few seconds and gives me a screen with my latest imported photos and videos. I can then edit photos, reject them if they’re bad, and write back JPG/XMP data to each photo’s originating directory using Lightroom export. Dropbox backs those up automatically. To remove bad photos (blurry, etc.), I flag them as “rejected” in Lightroom using the X key, and when I’m done I command-delete, which gives me the option of removing the files from disk, too. I do this only for clear rejects, and it makes my mild OCD happy since I know I am not keeping totally useless files around, and the overhead of deleting photos is low. I could also delete photos easily using the Dropbox UI, which is pretty good, and then re-synchronize in Lightroom.
  6. I can then use Carousel (or Dropbox) on any mobile device to browse through all of my photos. It’s surprisingly good at dealing with large photo libraries (I have 20K) and large photos (I have a bunch of 13MP photos). As in, really, really good, even on a puny phone. Better than anything else I’ve seen.
  7. I’ve been using Flickr for years for private photo sharing, and Lightroom is really good at exporting to Flickr. That said, at this point I’m thinking of moving to Dropbox/Carousel based sharing. I can easily bundle photos & videos into albums on Dropbox, whereas videos are still limited on Flickr. Carousel conversations around a few photos are great with family. The only bummer is that Carousel and Dropbox have some mutually exclusive features: albums on Dropbox, conversations on Carousel. I suspect Dropbox will fix that in the next year.
  8. What I’d love to see:
    • unified photo features in Dropbox and Carousel
    • export Dropbox albums as directories of symlinks in my Dropbox folder, and export Carousel conversations in some other file-based way, too.
    • Lightroom export compatibility with Dropbox/Carousel albums.
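
Step 3 above leans on Hazel, which is macOS-only. For the curious, here is a rough, hypothetical equivalent in TypeScript (Node) of what that rule amounts to: file everything in all-photos/ into YYYY/MM subfolders by modification date. It is only a sketch of the idea, not what Hazel actually runs, and unlike Hazel it does not look at EXIF capture dates.

    // Rough stand-in for the Hazel rule: move each file in all-photos/ into a
    // YYYY/MM subfolder based on its modification time. (Hazel can also key off
    // EXIF capture dates; this sketch keeps it simple and uses mtime.)
    import { promises as fs } from "fs";
    import * as path from "path";

    async function sortIntoDateFolders(srcDir: string): Promise<void> {
      for (const name of await fs.readdir(srcDir)) {
        const file = path.join(srcDir, name);
        const stat = await fs.stat(file);
        if (!stat.isFile()) continue; // skip the year folders already created

        const d = stat.mtime;
        const destDir = path.join(
          srcDir,
          String(d.getFullYear()),
          String(d.getMonth() + 1).padStart(2, "0") // e.g. all-photos/2015/01
        );
        await fs.mkdir(destDir, { recursive: true });
        await fs.rename(file, path.join(destDir, name));
      }
    }

    sortIntoDateFolders("all-photos").catch(console.error);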

I’m super happy with this new process: one funnel, easy, low overhead, and a very solid long-term photo storage solution. I’m only relying on RAW/JPG files and directories of said files to be readable for the long term, and that seems pretty safe. Lightroom is awesome, but I could replace it with a different tool if I needed to.

One more thing: if you’re going to use Dropbox to store all of your photos, make sure you pick a strong password and set up 2-factor authentication.

Power & Accountability

So there’s this hot new app called Secret. The app is really clever: it prompts you to share secrets, and it sends those secrets to your social circle. It doesn’t identify you directly to your friends. Instead, it tells readers that this secret was written by one of their friends without identifying which one. The popularity of the app appears to be off the charts, with significant venture-capital investment in a short period of time. There are amazing stories of people seeking out emotional support on Secret, and awful stories of bullying that have caused significant uproar. Secret has recently released features aimed at curbing bullying.

My sense is that the commentary to date is missing the mark. There’s talk of the danger of anonymous speech. Even the founders of Secret talk about their app like it’s anonymous speech:

“Anonymity is a really powerful thing, and with that power comes great responsibility. Figuring out these issues is the key to our long-term success, but it’s a hard, hard problem and we are doing the best we can.”

And this is certainly true: we’ve known for a while that anonymous speech can reveal the worst in people. But that’s not what we’re dealing with here. Posts on Secret are not anonymous. Posts on Secret are guaranteed to be authored by one of your friends. That guarantee is enabled and relayed by the Secret platform. That’s a very different beast than anonymity.

In general, if you seek good behavior, Power and Accountability need to be connected: the more Power you give someone, the more you hold them Accountable. Anonymity can be dangerous because it removes Accountability. That said, anonymity also removes some Power: if you’re not signing your name to your statement, it carries less weight. With Secret, Accountability is absent, just like with anonymous speech, but the power of identified speech remains in full force. That leads to amazing positive experiences: people can share thoughts of suicide with friends who can help, all under the cloak of group-anonymity that is both protecting and empowering. And it leads to disastrous power granted to bullies attacking their victims with the full force of speaking with authority – the bully is one of their friends! – while carrying zero accountability. That kind of power is likely to produce more bullies, too.

This is so much more potent than anonymity. And if this fascinating experiment is to do more good than harm, it will need to seriously push the envelope on systems for Accountability that are on par with the power Secret grants.

Here’s a free idea, straight out of crypto land. In cryptographic protocols that combine a need for good behavior with privacy/anonymity protections, there is often a trigger where bad behavior removes the anonymity shield. What if Secret revealed the identity of those users found to be in repeated violation of a code of good behavior? Would the threat of potential shame keep people in line, leaving the good uses intact while disincentivizing the destructive ones?