no user is an island

US government agencies appear to be engaged in large-scale Internet surveillance, using secret court orders to force major Internet companies to provide assistance. The extent of this assistance is a topic of debate. What’s clear, though, is that the process itself is opaque: it’s impossible to know how broad or inappropriate the surveillance may be.

OK, so what do we do about it?

told you so, never shoulda trusted the Cloud

Some folks see this as vindication: we never should have trusted the Cloud. Only trust yourself, generate your own keypairs, encrypt all traffic, host your own email, etc. Servers are evil and should be considered leaky stupid passthroughs for fully encrypted data.

First, this is naive. If government agencies believe they have the authority to monitor all Internet traffic, would they hesitate to create viruses that infect and monitor endpoints? Would they hesitate to force software and hardware vendors to build secret backdoors into their products? It is the engineer’s mistake to believe that Law Enforcement will stop cleanly at technical abstraction layers. If the goal is total surveillance, the financial means immense, the arm-twisting strength unlimited, the oversight nonexistent… what would you do in their position?

Second, if, like me, you agree that technology experts have a duty to build solutions that matter to laypeople, it’s also irresponsible. None of these paranoid solutions are accessible to laypeople. Can you imagine Grandpa with his fingerprint-activated USB-key holding his RSA-2048-bit secret key and surfing the Web via Tor proclaiming “not me, I will fight the man!” Yeah. (And if you’re thinking “no Grandpa, not RSA! Elliptic curves!” well, thank you for making my point for me.)

So enough with this la-la land of users as fortified islands communicating via torpedo-proof-ciphertext-carrying submarines. People engage with others by way of intermediaries they trust, for that is the basis of all human interaction and commerce since the dawn of time. Let us build systems, both technical and legal, that start there.

protect user data wherever it lives

We can build systems that start with respect for the user and her data, wherever it lives. On Facebook servers, on Google servers, on self-hosted servers, on private computers. Encrypted or not encrypted. We can and should use cryptography to secure channels from those who would disrespect user data, reduce data collection to that which is useful, and generally build defense in depth against bad actors. We should stop wasting time on systems that impose the resulting complexity on users. Government access to user data should follow a clear, transparent process that is consistent wherever the data happens to be stored, however it happens to be encrypted.

Let’s build that system together. Not by barricading ourselves on our lonely islands of encryption and onion-routing. But by building the legal and technical framework we need to respect users and their data. Mozilla and Google have started. I’m hopeful many more will join.

a hopeful note about PRISM

You know what? I’m feeling optimistic suddenly. Mere hours ago, all of us tech/policy geeks lost our marbles over PRISM. And in the last hour, we’ve gotten two of the most strongly worded surveillance rebuttals I’ve ever seen from major Internet companies.

Here’s Google’s CEO Larry Page:

we provide user data to governments only in accordance with the law. Our legal team reviews each and every request, and frequently pushes back when requests are overly broad or don’t follow the correct process. Press reports that suggest that Google is providing open-ended access to our users’ data are false, period. Until this week’s reports, we had never heard of the broad type of order that Verizon received—an order that appears to have required them to hand over millions of users’ call records. We were very surprised to learn that such broad orders exist. Any suggestion that Google is disclosing information about our users’ Internet activity on such a scale is completely false.

And here’s Mark Zuckerberg of Facebook:

Facebook is not and has never been part of any program to give the US or any other government direct access to our servers. We have never received a blanket request or court order from any government agency asking for information or metadata in bulk, like the one Verizon reportedly received. And if we did, we would fight it aggressively. We hadn’t even heard of PRISM before yesterday.

Both companies emphasize transparency around government data requests as a critical component of moving forward. I couldn’t agree more. We need to know about every legal process in place that gives government access to private user data.


Could PRISM mark a tech-world epiphany that users care about privacy? I hope so. It certainly seems that major PR departments think so. Unequivocally worded responses from major Internet CEOs within 24 hours mean they care. This is a good thing.

retreat is the wrong reaction

I’ve heard folks argue that PRISM means we need to bet it all on end-to-end encryption. I think that’s wrong, because that doesn’t fulfill users’ needs. But even putting that aside: if you believe the government is willing to penetrate professionally managed corporate servers without company permission or legal clarity, do you sincerely believe the government wouldn’t also penetrate your personal computer and steal the data before you encrypt it?

Services and data aggregation play a critical role in providing users the features they need to share, discover, and grow. They’re not going away. Don’t expect PRISM to herald the era of end-to-end encryption and dumb servers. Those will continue to play only a limited role for very specific use cases.

What we need is (1) companies that deeply respect users, and (2) legal processes that protect user data wherever it lives. I think we’re seeing the beginning of (1). Now, Obama, over to you for (2).

what happens when we forget who should own the data: PRISM

Heard about PRISM? Supposedly, the NSA has direct access to servers at major Internet companies. This has happened before, e.g. when Sprint provided law enforcement a simple data portal they could use at any time. They used it 8 million times in a year. That said, the scale of this new claim is a bit staggering. If the NSA has access to these 9 companies’ data, it has access to every American citizen’s complete life.

what’s really happening?

I think we don’t know yet what’s happening.

I’m dubious that NSA has direct access to servers at Google, Facebook, Apple, etc. Those companies have strongly denied the claim, and I have trouble believing this happened on a large scale for years without someone at those companies leaking the information.

Might NSA be tapping all network traffic? Yeah, that’s probable. Might NSA have the facility to decrypt the encrypted traffic? For targeted searches, yeah, I believe that. For broad-scale searching across all traffic? I’m not so sure. It could be happening, but that would be tremendous, hard-to-fathom news.

I could be wrong here. Companies might be cooperating and lying about it. NSA might be eons ahead of what we expect in terms of computing capability and cryptographic breakthroughs. This is just my gut instinct.

is this okay?

So, let’s assume it is happening. Is it okay? Hell no it isn’t. There is no doubt in my mind that user data, whether stored in a lockbox in my home or on a server in Oregon, should first and foremost belong to me, and be covered by the same Constitutional protections as my home and private belongings. It is high time for the law to catch up, for a digital due process. Blanket surveillance and warrantless capture or seizure of private data are unacceptable, and should be revolting to anyone who cares about freedom and democracy.

lessons for technologists

I deeply believe that one should first look at one’s own actions before blaming others. And I think we, technologists, have some blame to shoulder.

We’ve let our guard down when it comes to user data ownership. We’ve made it increasingly acceptable to collect user data and make decisions about how best to use it without involving the user much. We’ve often allowed the definition of “using data for the user’s benefit” to loosen.

In other words, where user data ownership in the cloud was murky to begin with, we’ve made it murkier.

Unlike some of my colleagues, I don’t believe we can simply forgo the Cloud or use end-to-end encryption. Encryption cannot be layered on without consequences. You cannot provide the value that users want without some centralization of data and services.

But we can take a stronger stance against companies that abuse users’ trust and treat the data as their own rather than the user’s. We can set an example. We can state clearly that when we collect data, we do it with care, we do it for a clear purpose, and we allow the user to leave as easily as possible, removing traces of their data as best we can.

We can set the example that the user’s data, whatever server it’s on, belongs, by principle, to the user. And then we can and should ask our government to live up to the same standard.

getting web sites to adopt a new identity system

My team at Mozilla works on Persona, an easy and secure web login solution. Persona delivers to web sites and apps just the right information for a meaningful login: an email address of the user’s choice. Persona is one of Mozilla’s first forays “up the stack” into web services.
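To make “just the right information” concrete, here is a rough sketch of the two halves of a Persona login. The client-side calls follow the `navigator.id` API; the endpoint path, reply shape, and helper below are illustrative assumptions, so check the Persona documentation for the exact contract:

```javascript
// Client side (browser): the user agent mediates login and hands the page a
// signed "assertion"; the site never sees a password. Sketched as comments
// because it only runs in a browser with Persona support or the shim:
//
//   navigator.id.watch({
//     onlogin: function (assertion) { /* POST the assertion to your server */ },
//     onlogout: function () { /* clear the local session */ }
//   });
//   // call navigator.id.request() when the user clicks your sign-in button
//
// Server side: forward the assertion to a verifier, which answers with JSON
// roughly like { status: "okay", email: "...", audience: "https://yoursite" }.
// Checking that reply is essentially all the identity logic a site needs:
function checkVerifierReply(reply, expectedAudience) {
  if (reply.status !== "okay") return null;
  if (reply.audience !== expectedAudience) return null; // reject assertions minted for other sites
  return reply.email; // the one piece of identity data Persona delivers
}
```

Note that the email address is the entire payload: an address of the user’s choice, and nothing else.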

Typically, at Mozilla, we improve the Web by way of Firefox, our major lever with hundreds of millions of users. Take asm.js, Firefox’s awesome new JavaScript optimization technology, which lets you run 60-frames-per-second games in your web browser. It’s such a great thing that Chrome is fast-following. Of course, Chrome also innovates by deploying features first, and Firefox often fast-follows. Standardization ensues. The Web wins.

With Identity, we’ve taken a different approach: out of the gate, Persona works on all modern browsers, desktop and mobile, and some not-so-modern browsers like IE8 and Android 2.2 stock. We’re not simply building open specs for others to build against: we are putting in the time and effort to make Persona work everywhere. We even have iOS and Android native SDKs in the works.

Why would we do such a thing? Aren’t we helping to improve our competitors’ platforms instead of improving our own? That reasoning, though tempting, is misguided. Here’s why.

working on all modern platforms is table-stakes

We talk about Persona to Web developers all the time. We almost always get the following two questions:

  1. does this work in other browsers?
  2. does this work on mobile?

These questions are actually all-or-nothing: either Persona works on other browsers and on mobile, or, developers tell us, they won’t adopt it. To date, we have not found a single web site that would deploy a Firefox-only authentication system. Some web sites have adopted Persona, only to back out once they built an iOS app and couldn’t use Persona effectively (we’re actively fixing that.) So, grand theories aside, we’re targeting all platforms because web sites simply won’t adopt Persona otherwise. After all, Facebook Connect works everywhere.

When you think about it, is that actually so different from the asm.js strategy? asm.js is much faster on Firefox, but it works on Chrome and any other JavaScript engine, too. Heck, even Google’s DART, a brand new language they want to see browsers adopt, comes with a DART-to-JavaScript-compiler so it works on all other browsers out of the gate. These are not after-thoughts. These are not small investments. asm.js was designed as a proper subset of JavaScript. The DART-to-JS compiler is a freaking compiler, built just so non-Chrome browsers can run DART.
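That subset property is easy to see in code. Here is a toy module of my own (not from the asm.js spec): because asm.js is plain JavaScript, this runs in any engine, and an asm.js-aware engine like Firefox’s additionally recognizes the “use asm” pragma and compiles it ahead of time.

```javascript
// A toy asm.js-style module. The |0 coercions are ordinary JavaScript, but
// to an asm.js-aware engine they double as static int32 type annotations.
function AddModule(stdlib, foreign, heap) {
  "use asm";
  function add(x, y) {
    x = x | 0;            // parameter x: 32-bit integer
    y = y | 0;            // parameter y: 32-bit integer
    return (x + y) | 0;   // return value: 32-bit integer
  }
  return { add: add };
}

var mod = AddModule({}, {}, new ArrayBuffer(0x10000));
// mod.add(2, 3) === 5, in every JavaScript engine
```

Same code, every browser; the speedup is a bonus where the engine cooperates. That is the shape of the strategy.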

When appealing to web developers to make a significant investment — rewriting code, building against a new authentication system — cross-browser and cross-device functionality from day 1 is table-stakes. The alternative is not reduced adoption, it’s zero adoption.

priming users is the winning hand

The similarities between Identity and purely functional improvements like asm.js stop when it comes to users. The reason web sites choose Facebook Connect is not just because it works, but because 1 billion users are primed with accounts and ready to log in. Same goes for Google+ and Twitter logins.

Persona doesn’t have a gigantic userbase to start from. That sucks. The good news is that, unlike other identity systems, we don’t want to create a huge siloed userbase. What we want is a protocol and a user experience that make Web logins better. We want users to choose their identity. We’re happy to bridge to existing userbases to help them do just that!

So, bridging is what we’re doing. You’ve seen it already with Yahoo Identity Bridging in Persona Beta 2. More is coming. With each bridge, hundreds of millions of additional users are primed to log in with Persona. That’s powerful. And it’s a major reason why sites are adopting Persona.

Working everywhere is table-stakes. Priming users so they’re ready to log in with just a couple of clicks, that’s the winning hand.

beautiful native user-agent implementations sweeten the pot

Meanwhile, the Persona protocol is specifically tailored to be mediated by the user’s browser. Long-term, we think this will be a fantastic asset for the Persona login experience. Beautiful, device-specific UIs. Universal logout buttons. Innovation in trusted UIs. And lots of other tricks we haven’t even thought of yet. We’re doing just that kind of innovation on Firefox OS with a built-in trusted UI for Persona.

But let’s be clear: that’s not an adoption strategy. An optimized Firefox UI for Persona will not affect web-site adoption because it does nothing to reduce login friction. In a while, once Persona is widespread with hundreds of thousands of web sites supporting it, and users are actively logging in with Persona on many devices and browsers, Firefox’s optimized Persona UI will be a competitive advantage that other browsers will feel pressure to match. Until then, web site adoption is the only thing that matters.

now you know our priorities

Wherever it makes sense, we’re implementing Firefox-specific Persona UIs. However, when it comes to an adoption strategy, we know from our customers that this won’t help. What will help is:

  1. Persona working everywhere
  2. As many users as possible primed to log in

Those are our priorities.

We know this is different for Mozilla. But it’s quite common for folks implementing Services. What you’re seeing here is Mozilla adapting as it applies its strongly held principle of user sovereignty up the stack and into the cloud.

Identity Systems: white labeling is a no-go

There’s a new blog post with some criticism of Mozilla Persona, the easy and secure web login solution that my team works on. The great thing about working in the open at Mozilla is that we get this kind of criticism openly, and we respond to it openly, too.

The author’s central complaint is that the Persona brand is visible to the user:

It [Persona] needs white-labeling. I know that branding drives adoption, but showing the Persona name on the login box at all is too much; it needs to be transparent for the user. Most of the visits to any website are first-time visits, which means the user is seeing your site/brand for the first time. Introducing another brand at the sign-up point is a confusing distraction to the user.

The author compares Persona to Stripe, the payment company with a super-easy-to-use JavaScript API, which lets a web site display a payment form with no trace of the Stripe brand, and all the hard credit-card processing work is left to the Stripe service.

This is an interesting point, but unfortunately it is wrong for an Identity solution. Consider if Persona were fully white-labeled, integrated into the web site’s own pages, with no trace of the Persona system visible to the user. What happens then? Two possibilities:

  1. no user state is shared between sites: users create a new account on every site that uses Persona. The site doesn’t have to do the hard work of password storage, it can let Persona handle this. There’s no benefit to the user: every web site looks independent from the others, with its own account and password. And while this is incrementally better than having web sites store passwords themselves, that increment is quite small: web sites tend to use federated authentication solutions if they can lower the friction of users signing up. If users still have to create accounts everywhere, friction is high, and the benefit to the web site is small.
  2. user state is shared between sites: users don’t have to create new accounts at every web site, they can use their existing single Persona account, but now they have no branding whatsoever to indicate this. So, are users supposed to type in the same Persona password on every site they see? Are they supposed to feel good about seeing their list of identities embedded within a brand new site they’ve never seen before, with no indication of why that data is already there? This is a recipe for disastrous phishing and a deeply jarring user experience.

So what about Stripe? With Stripe, the user retypes their credit-card number at every web site they visit. That makes sense because the hard part of payment processing for web sites isn’t so much the prompting for a credit card, it’s the actual payment processing in the backend. And, frankly, it would be quite jarring if you saw your credit card number just show up on a brand new web site you’ve never visited before.

But identity is different. The hard part is not the backend processing, it’s getting the user to sign up in the first place, and for that you really want the user to not have to create yet another account. Plus, if you’re going to surface the user’s identity across sites, then you *have* to give them an indication of the system that’s helping them do that so they know what password to type in and why their data is already there. And that’s Persona. Built to provide clear benefits to users and sites.

By the way, though we need some consistent Persona branding to make a successful user experience, we don’t need the Persona brand to be overbearing. Already, with Persona, web sites can add a prominent logo of their choosing to the Persona login screen. And we’re working on new approaches that would give sites even more control over the branding, while giving users just the hint they need to understand that this is the same login system they trust everywhere else. Check it out.

so what if torture works?

I’ve seen most of Zero Dark Thirty, the movie that claims to tell the story of the search for and killing of Bin Laden. It’s a pretty gruesome film, with clear implications that torture led to information that led us to Bin Laden. There are fierce debates about whether that fact – that torture led us to Bin Laden – is true or not. Almost every time torture is discussed, the discussion quickly shifts to one side saying “see, it’s effective!” and the other saying “it doesn’t even work!”

Here’s a simple question I don’t hear asked all that often: who cares if it works? It’s simply wrong. If “it works!” is our only criterion, then forget the Rule of Law. Many lives would be saved if police could willy-nilly enter anyone’s home and search their belongings, because some of those folks are probably murderers that we can’t catch if we strictly follow the rules. We could have captured all of Bin Laden’s extended family and tortured them, publicly threatening him to surrender. That might have worked. Especially if we did it to his kids. Especially the young ones.

What is wrong with the world when we even consider this most extreme version of the ends justifying the means?

Torture is wrong. Period. Even if it works.

Firefox is the unlocked browser

Anil Dash is a man after my own heart in his latest post, The Case for User Agent Extremism. Please go read this awesome post:

One of my favorite aspects of the infrastructure of the web is that the way we refer to web browsers in a technical context: User Agents. Divorced from its geeky context, the simple phrase seems to be laden with social, even political, implications.

The idea captured in the phrase “user agent” is a powerful one, that this software we run on our computers or our phones acts with agency on behalf of us as users, doing our bidding and following our wishes. But as the web evolves, we’re in fundamental tension with that history and legacy, because the powerful companies that today exert overwhelming control over the web are going to try to make web browsers less an agent of users and more a user-driven agent of those corporations. This is especially true for Google Chrome, Microsoft Internet Explorer and Apple Safari, though Mozilla’s Firefox may be headed down this path as well.

So so right… except for the misinformed inclusion of Firefox in that list. Anil: Firefox is the User Agent you’re looking for. Here’s why.

user agency

Two years ago, I joined Mozilla because Mozillians are constantly working to strengthen the User Agent:

In a few days, I’ll be joining Mozilla.


[I want] to work on making the browser a true user agent working on behalf of the user. Mozilla folks are not only strongly aligned with that point of view, they’ve already done quite a bit to make it happen.

browser extensions

Like Anil, I believe browser add-ons/extensions/user-scripts are critical for user freedom, as I wrote more than two years ago, before I even joined Mozilla:

Browser extensions, or add-ons, can help address this issue [of user freedom]. They can modify the behavior of specific web sites by making the browser defend user control and privacy more aggressively: they can block ads, block flash, block cookies for certain domains, add extra links for convenience (i.e. direct links to Flickr’s original resolution), etc.. Browser extensions empower users to actively defend their freedom and privacy, to push back on the more egregious actions of certain web publishers.


Again, like Anil, I saw, in that same blog post, the threat of mobile:

Except in the mobile space. Think about the iPhone browser. Apple disallows web browsers other than Safari, and there is no way to create browser extensions for Safari mobile. When you use Safari on an iPhone, you are using a browser that behaves exactly like all other iPhone Safaris, without exception. And that means that, as web publishers discover improved ways to track you, you continue to lose privacy and control over your data as you surf the Web.

This situation is getting worse: the iPad has the same limitations as the iPhone. Technically, other browsers can be installed on Android, but for all intents and purposes, it seems the built-in browser is the dominant one. Simplified computing is the norm, with single isolated applications, never applications that can modify the behavior of other applications. Thus, no browser extensions, and only one way to surf the web.

so Firefox?

To Anil’s concerns:

  • Firefox Sync, which lets you share bookmarks, passwords, tabs, etc. across devices, is entirely open-source, including the server infrastructure, and if you don’t want Mozilla involved, you can change your Firefox settings to point to a Sync server of your choosing, including one you run on your own using our open-source code. PICL (Profile in the Cloud), the next-generation Sync that my team is working on, will make it even easier for you to choose your own PICL server. We offer a sane default so things work out of the box, but no required centralization, unlike other vendors.
  • Mozilla Persona, our Web Identity solution, works today on any major browser (not just Firefox), and is fully decentralized: you can choose any identity provider you want today. This stands in stark contrast to competing solutions that tie browsers to vendor-specific accounts. Persona is the identity solution that respects users.
  • Firefox for Android is the only major mobile browser that supports add-ons. Anil, if you want “cloud-to-butt”, you can have it on Firefox for Android. You can also have AdBlock Plus. Try that on any other mobile browser.
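The first point is concrete enough to sketch. In the Sync of that era, pointing desktop Firefox at a self-hosted server was a single preference change (the pref name below is my recollection from Sync 1.x and should be verified against current docs; the URL is a placeholder):

```
// in about:config, after standing up your own server from the open-source code:
services.sync.serverURL = "https://sync.example.org/"
```

Sane default, zero required centralization.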

the unlocked browser

Anil argues that we should talk about unlocked browsers. I love it. Let’s do that. Here’s my bet, Anil: write down your criteria for the ideal unlocked browser. I bet you’ll find that Firefox, on desktop, on mobile, and in all of the services Mozilla is offering as attachments, is exactly what you’re looking for.

the Web is the Platform, and the User is the User

Mid-2007, I wrote two blog posts — get over it, the web is the platform and the web is the platform [part 2] — that turned out to be quite right on one front, and so incredibly wrong on another.

Let’s start with where I was right:

Apps will be written using HTML and JavaScript. [...] The Web is the Platform. The Web is the Platform. It’s going to start to sink in fast.


Imagine if there’s a way to have your web application say: “please go pick a contact from your address book, then post that contact’s information back to this URL”. Or if the web application can actually prepare a complete email on your behalf, image attachments included (oh the security issues….), and have you just confirm that, yes, you really want to send that email (the web app definitely can’t do that without confirmation)?


[We could] begin to define some JavaScript APIs, much like Google Gears for offline data storage, that enables this kind of private-public mashup. It would be fantastically interesting, because the security issues are mind boggling, but the resulting features are worth it. And it would spearhead some standards body to look into this issue more closely.

Whatever happens, though, the web is the platform. If you’re not writing apps in cross-browser-compliant HTML+JavaScript, the clock is ticking.

And in my followup post:

Add incremental features in JavaScript. First an offline functionality package, like Google Gears, so applications can work offline. Then, an interface to access the user’s stored photos. Over time, a way for web applications to communicate with one another.


Then there’s one tweak that could make a huge difference. Let a web application add itself to the dashboard.

Where did I go wrong? I thought this innovation was going to be unleashed by Apple with their introduction of the iPhone.

In my defense, if you read between the lines of the iPhone announcements back in 2007, it’s possible that Apple actually meant to do this. But then they didn’t, and they released an Objective C API, and a single closed app store, and locked down payments, and disallowed competition with their own apps, … So much for the Web.

It’s only fitting that the organization that is making this happen is my employer, Mozilla, with Firefox OS. Don’t get me wrong, I’m not taking credit for Firefox OS: there is a whole team of amazing leaders, engineers, product managers, product marketers, and generally rockstars making that happen. But it’s nice to see that this vision from six years ago is now reality.

So, the Web is the platform. HTML and JavaScript are the engines.

What about data? What about services? It’s time we redesign those. They, too, need to be freed from central points of control and silos. Data & Services need to be re-architected around the user. I should get to choose which organization I want to trust and which of my existing accounts I want to use to log into a new site/app/service. I should be able to access my data, know who else is touching it, and move it around as I please. I should be able to like/share any piece of content from any publisher I read onto any social network I choose. Amazing new apps should have easy access to any data the user wishes to give them, so that new ideas can emerge, powered by consenting users’ data, at the speed of the Web.

That, by the way, is the mission of my team, Mozilla Identity, and those are the guiding principles of our Persona login service and our upcoming project codenamed PICL. And of course we’ll continue to build those principles and those technologies into the Firefox OS phone (Persona is already in there.)

The Web is the Platform. And the User is the User. I’m quite sure Mozilla is the organization made to deliver both.


I heard about Aaron Swartz in 2000, when he won the ArsDigita prize. I met him for the first time in early summer 2002, when my little open-source webdev company, OpenForce, joined the Creative Commons team to build the CC web site. That’s also when I met Matt Haughey, whose words helped trigger a bunch of memories in me about Aaron.

Most hackers inevitably meet a younger, better, smarter version of themselves. For me, it happened probably earlier than for most, and it was the day I met Aaron. I was 25, an MIT graduate with a near-perfect GPA, leading my second startup. He was 15 and in high school, self-taught. And he wasn’t just better. He was unbelievable. He mastered complex software topics with an ease that made me look awkward. He got visibly impatient with me. It made me a little bit jealous.

Aaron came to New York City once during the Creative Commons site buildout. He came by our offices, introduced us to the future: newsreaders and RSS. I took him and his father out to lunch for a fancy burger in Soho. He was unimpressed and hated the food. And he said it.

Aaron was brutally honest like that. A few weeks into our work on Creative Commons, Aaron published an update on his web page: “OpenForce… bad, slow, expensive.” I barely slept that night. Here I was, trying to impress one of my idols (Larry Lessig), and this guy was just killing me. I was afraid he was right. I was also mesmerized by his ability to be so direct, so honest, because the truth was all that mattered, consequences be damned.

Over time, I got to understand that none of this technical criticism was personal. Aaron’s standards of openness, expertise, and focus were just incredibly high. By the time we launched Creative Commons, we were on reasonably good terms. We became friendly. We were never very close, and the friendship didn’t get deeply personal. We talked about web frameworks, our mutual passion for a few years (before he moved on to bigger things.) We discussed Creative Commons, semantic web, etc. We chatted over email a bit every few months. We saw each other at a couple of Creative Commons events. I think the last time I saw him in person was in 2006.

One day, he mentioned me in a blog post and called me his friend. I was honored. A year or so ago, we chatted again when he became interested in the work I’m doing at Mozilla. He was excited about what we were doing and started telling people. I was honored, again.

When I heard about Aaron’s JSTOR shenanigans, I reacted much like many others: okay, that was stupid, he’ll get slapped on the wrist and we’ll all move on, because where’s the harm, really? I was pretty busy with my own life and growing family. I didn’t think about his situation very much. I didn’t realize how serious it was getting. I missed how MIT played a role in his predicament (something that makes me incredibly angry right now.) I basically gave to his legal defense fund and moved on.

I wish I’d at least reached out. Said something. Maybe given him an opportunity to vent. It’s unlikely it would have made any difference: he was surrounded by people who knew him far better and who loved him. But you know, maybe. I’m sorry, Aaron, that I didn’t reach out. I’m sorry that the world failed you in the most insane way, that you were put in an unfathomable situation for doing something that harmed no one. That I didn’t appreciate you nearly as much as I could have while you were around.

I spent some time today digging up old photos, and finally found ones I took on the day Creative Commons launched, in December 2002. Here are a few of Aaron: one with Matt Haughey, one with Glenn Brown and Larry Lessig, one interview with Dan Gillmor, and one on stage describing Creative Commons technology.

Matt Haughey and Aaron Swartz at CC Launch – December 2002

Aaron Swartz, Glenn Brown, Larry Lessig, preparing CC Launch – December 2002

Aaron Swartz interviewed by Dan Gillmor at CC Launch – December 2002

Aaron Swartz speaking at CC Launch – December 2002

I know there was one photo of Aaron, Glenn Brown, and me in front of the Supreme Court on the day of the Eldred argument, November 2002, but I can’t find it. I’ll keep looking.

Hugs to Aaron’s family and friends, we lost someone big.

The Onus is on Scientists – Shame on the AAAS

The American Association for the Advancement of Science (AAAS) has just come out against California’s Proposition 37, which would mandate the labeling of genetically-modified foods. In my opinion, the AAAS has failed its duty as promoters of Good Science.

The question is not whether genetically-modified foods are safe. I see the benefits, and I see the downsides (especially as a security guy, since food safety testing is, in my opinion, very poorly done), and the debate will rage on for a long time. But whether genetically-modified foods are safe is not the issue. The issue is whether consumers have a right to know what food they eat. There should be no debate here. Of course people have a right to know. And what better way to hear the people’s voice than to vote on this issue? The AAAS should be pro-labeling. If the AAAS believes that genetically-modified foods are, in fact, safer, as they claim in their statement, then they can make that point and rally the troops to explain to consumers that they should specifically seek out the GM-labeled foods. But withholding knowledge? Are you kidding me?

The world would be better off if people behaved according to scientific consensus. I wouldn’t have to worry about sending my kids to a school where up to 10% of kids might not be vaccinated, for example. But does that mean we should force parents to vaccinate their children? Of course not.

The onus is on scientists to make their case. Paternalism has no place in science. People have a right to know. The AAAS Board should be ashamed.