When Arvind writes something, I tend to wait until I have a quiet moment to read it, because it usually packs a particularly high signal-to-noise ratio. His latest post, In Silicon Valley, Great Power but No Responsibility, is awesome:
We’re at a unique time in history in terms of technologists having so much direct power. There’s just something about the picture of an engineer in Silicon Valley pushing a feature live at the end of a week, and then heading out for some beer, while people halfway around the world wake up and start using the feature and trusting their lives to it. It gives you pause.
So true. I’ve been thinking about this issue a lot recently, especially as technologists in the Valley enjoy exceptional financial and career health, while the rest of the country, and sometimes even the other half of our own cities, suffers through a long and deep recession.
Here’s one story that blew my mind a few months ago. Facebook (and I don’t mean to pick on Facebook, they just happen to have a lot of data) introduced a feature that shows you photos from your past you haven’t seen in a while. Except, that turned out to include a lot of photos of ex-boyfriends and ex-girlfriends, and people complained. But here’s the thing: Facebook photos often contain tags of the people present in the photo. And you’ve told Facebook about your relationships over time (and even if you didn’t, they can probably guess from your joint social-network activity). So what did Facebook do? They computed the graph of ex-relationships, and they ensured that you are no longer proactively shown photos of your exes. They did this in a matter of days. Think about that one again: in a matter of days, they figured out all the romantic relationships that ever occurred between their 600M+ users. The power of that knowledge is staggering, and if what I hear about Facebook is correct, that power is in just about every Facebook engineer’s hands.
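To make the mechanism concrete, here’s a minimal sketch of that kind of filter. This is entirely hypothetical code of my own (the function names, data shapes, and the idea of a per-user relationship history are all assumptions, not anything Facebook has published), but it shows how little machinery the feature actually needs once you have the relationship data:

```python
# Hypothetical sketch -- not Facebook's actual code or data model.
# Given a user's relationship history, build the set of exes and filter
# "photos from your past" so none featuring an ex is surfaced.

def ex_partners(relationship_history):
    """relationship_history: list of (partner_id, started, ended) tuples;
    ended is None for a current relationship."""
    return {partner for partner, started, ended in relationship_history
            if ended is not None}

def filter_memories(photos, relationship_history):
    """photos: list of dicts, each with a 'tagged' set of user ids."""
    exes = ex_partners(relationship_history)
    return [p for p in photos if not (p["tagged"] & exes)]

history = [("alice", 2008, 2010), ("bob", 2011, None)]
photos = [
    {"id": 1, "tagged": {"alice", "carol"}},  # features an ex -> filtered
    {"id": 2, "tagged": {"bob"}},             # current partner -> kept
    {"id": 3, "tagged": {"dave"}},            # no exes tagged -> kept
]
print([p["id"] for p in filter_memories(photos, history)])  # -> [2, 3]
```

The unsettling part isn’t the code, which is trivial; it’s that the data to run it over hundreds of millions of users already sits in one place.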
Here’s another story. I used to lecture MIT Undergraduates about web security. My approach was basically: (a) hack a few of the student project web sites, then (b) hack a few public web sites to make the students understand how widespread the problems are. In late 2003, I showed students how to buy movie tickets for free (the price of the ticket was held in a hidden variable in a web form… duh). I ended my lecture with “but just because you can do this, doesn’t mean you should. Please don’t do this.” Over the years, I’ve received a few emails from former students to the tune of “hey Ben, you gave an awesome lecture, I still remember how a bunch of us went out to see Matrix 3 for free that weekend!”
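For readers who didn’t sit through that lecture, the bug class is worth spelling out. Below is a minimal sketch, in hypothetical server-side code of my own invention (not the actual ticketing site), of what it means for a price to live in a hidden form field: the client controls every field it submits, so trusting a client-supplied price lets an attacker simply resubmit the form with price=0.

```python
# Hypothetical sketch of the hidden-field price bug described above.

TICKET_PRICE = 10.00  # authoritative price, known to the server

def charge(card, amount):
    return amount  # stand-in for a real payment-gateway call

def checkout_vulnerable(form):
    # BUG: 'price' came from a hidden form field the client fully controls.
    return charge(form["card"], float(form["price"]))

def checkout_fixed(form):
    # FIX: ignore client-supplied totals; look the price up server-side.
    return charge(form["card"], TICKET_PRICE)

# An attacker edits the hidden field before submitting:
evil_form = {"card": "4111-xxxx", "price": "0.00", "movie": "Matrix 3"}
print(checkout_vulnerable(evil_form))  # -> 0.0  (a free ticket)
print(checkout_fixed(evil_form))       # -> 10.0
```

The fix is the general rule: anything that arrives from the browser is attacker input, and prices, quantities, and permissions must be recomputed on the server.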
I shudder to think about what happens when you put those two stories together. While the earliest hackers may have had a particularly well-developed ethical sense, I suspect that our profession’s average ethics don’t nearly measure up to the incredible power we have gained so precipitously over the last 15 years.
And then there’s the additional point Arvind makes, which I’ve observed directly too:
I often hear a willful disdain for moral issues. Anything that’s technically feasible is seen as fair game and those who raise objections are seen as incompetent outsiders trying to rain on the parade of techno-utopia.
Yes! There’s this continued and surprisingly widespread delusion that technology is somehow neutral, that moral decisions are for other people to make. But that’s just not true. Lessig taught me (and a generation of other technologists) that Code is Law, or as I prefer to think about it, that Code defines the Laws of Physics on the Internet. Laws of Physics are only free of moral value if they are truly natural. When they are artificial, they become deeply intertwined with morals, because the technologists choose which artificial worlds to create, which defaults to set, which way gravity pulls you. Too often, artificial gravity tends to pull users in the direction that makes the providing company the most money.
A parting thought. In 2008, the world turned against bankers, because many profited by exploiting their expertise in a rapidly accelerating field (financial instruments) over others’ ignorance of even basic concepts (adjustable-rate mortgages). How long before we software engineers find our profession in a similar position? How long will we shield ourselves from the responsibility we have, as experts in our field much like experts in any other, to guide others toward the decisions that are best for them?