In mid-2007, I wrote two blog posts, “get over it, the web is the platform” and “the web is the platform [part 2]”, that turned out to be quite right on one front, and so incredibly wrong on another.
Let’s start with where I was right:
Apps will be written using HTML and JavaScript. […] The Web is the Platform. The Web is the Platform. It’s going to start to sink in fast.
[…]
Imagine if there’s a way to have your web application say: “please go pick a contact from your address book, then post that contact’s information back to this URL”. Or if the web application can actually prepare a complete email on your behalf, image attachments included (oh the security issues….), and have you just confirm that, yes, you really want to send that email (the web app definitely can’t do that without confirmation)?
[…]
[We could] begin to define some JavaScript APIs, much like Google Gears for offline data storage, that enables this kind of private-public mashup. It would be fantastically interesting, because the security issues are mind boggling, but the resulting features are worth it. And it would spearhead some standards body to look into this issue more closely.
Whatever happens, though, the web is the platform. If you’re not writing apps in cross-browser-compliant HTML+JavaScript, the clock is ticking.
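To make the quoted idea concrete, here is a sketch of what such an API could look like from the page’s side. Every name in it (pickContact, composeEmail) is hypothetical, invented purely for illustration; no browser exposes these calls.

```javascript
// Hypothetical API: neither navigator.pickContact nor navigator.composeEmail
// exists in any browser. The point is the shape of the interaction: the page
// asks, the browser shows its own UI, and private data only flows back after
// the user explicitly agrees.

// Please go pick a contact from your address book, then post that
// contact's information back to this URL.
navigator.pickContact({ fields: ["name", "email"] }, function (contact) {
  // The page only ever sees the one contact the user chose in the picker.
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/contacts/import");
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.send(JSON.stringify(contact));
});

// Prepare a complete email on the user's behalf, attachments included,
// and have them confirm that they really want to send it.
navigator.composeEmail({
  to: "friend@example.com",
  subject: "Photos from last weekend",
  attachments: ["/photos/123.jpg"]
  // Sending would only happen after the user confirms in browser chrome,
  // never silently from page script.
});
```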
And in my followup post:
Add incremental features in JavaScript. First an offline functionality package, like Google Gears, so applications can work offline. Then, an interface to access the user’s stored photos. Over time, a way for web applications to communicate with one another.
[…]
Then there’s one tweak that could make a huge difference. Let a web application add itself to the dashboard.
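Of the features in that list, the Gears-style offline storage is the one that did become a plain web standard fairly quickly: localStorage gives a page a small persistent key-value store it can read and write with no network at all. A minimal sketch (the draft object is just an example payload):

```javascript
// Gears-style offline storage as it eventually shipped: localStorage is a
// persistent key-value store the page can use whether or not it is online.
function saveDraft(draft) {
  localStorage.setItem("draft", JSON.stringify(draft));
}

function loadDraft() {
  var stored = localStorage.getItem("draft");
  return stored ? JSON.parse(stored) : null;
}

saveDraft({ to: "friend@example.com", body: "Written while offline" });
console.log(loadDraft());
```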
Where did I go wrong? I thought this innovation was going to be unleashed by Apple with their introduction of the iPhone.
In my defense, if you read between the lines of the iPhone announcements back in 2007, it’s possible that Apple actually meant to do this. But then they didn’t, and they released an Objective-C API, and a single closed app store, and locked down payments, and disallowed competition with their own apps, … So much for the Web.
It’s only fitting that the organization that is making this happen is my employer, Mozilla, with Firefox OS. Don’t get me wrong, I’m not taking credit for Firefox OS: there is a whole team of amazing leaders, engineers, product managers, product marketers, and all-around rockstars making that happen. But it’s nice to see that this vision from six years ago is now reality.
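And that 2007 “add itself to the dashboard” tweak is, more or less, what installing an app on Firefox OS looks like: the site publishes a manifest and asks the platform to install it. A rough sketch using the mozApps install API; the manifest URL is a placeholder, and the API shape is as documented for Open Web Apps, so check the current docs before relying on it.

```javascript
// Ask the platform to install this site as an app on the home screen.
// The manifest URL below is a placeholder; navigator.mozApps is the
// Open Web Apps install API exposed by Firefox OS.
document.getElementById("install").onclick = function () {
  if (!navigator.mozApps) {
    alert("Open Web Apps are not supported in this browser.");
    return;
  }
  var request = navigator.mozApps.install("https://example.com/manifest.webapp");
  request.onsuccess = function () {
    // The app now shows up on the dashboard alongside everything else.
    console.log("Installed: " + request.result.manifest.name);
  };
  request.onerror = function () {
    console.log("Install failed: " + request.error.name);
  };
};
```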
So, the Web is the platform. HTML and JavaScript are the engines.
What about data? What about services? It’s time we redesign those. They, too, need to be freed from central points of control and silos. Data & Services need to be re-architected around the user. I should get to choose which organization I want to trust and which of my existing accounts I want to use to log into a new site/app/service. I should be able to access my data, know who else is touching it, and move it around as I please. I should be able to like/share any piece of content from any publisher I read onto any social network I choose. Amazing new apps should have easy access to any data the user wishes to give them, so that new ideas can emerge, powered by consenting users’ data, at the speed of the Web.
That, by the way, is the mission of my team, Mozilla Identity, and those are the guiding principles of our Persona login service and our upcoming project codenamed PICL. And of course we’ll continue to build those principles and those technologies into the Firefox OS phone (Persona is already in there.)
The Web is the Platform. And the User is the User. I’m quite sure Mozilla is the organization made to deliver both.
Comments
I suspect you’ve seen it, but have you looked into a social network called Diaspora? It’s exactly the type of distributed service that you’re describing – you can choose who to trust, or you can truly own your data.
I’m increasingly uncomfortable with the creeping scope of the BrowserID/Persona/Identity services project. Initially it was, essentially, just authentication, but standardized and decentralized. Happy to see that, experimented with it, anxiously waiting for the day when the Mozilla-hosted parts can be retired on account of it having “won” and all other sites being their own IdPs/verifiers. With PICL, though, it appears to be moving back towards centralization of data on a single host, kept there by continually extending the API surface without actually sticking to a standard. Kinda like how GCC wanted to continually shift their API to prevent external plugins, I guess?
Mook,
PICL and Persona are not tied together. You can use Persona and never touch PICL. In addition, PICL is built to be decentralized too. So there’s no creep: users pick and choose the features they want, and at no point are they forced (or even encouraged) to share data they don’t want to share.
Now to your “anxiously waiting for the day when the Mozilla-hosted parts can be retired.” Though we’ll be happy to do that when every domain is an IdP, it’s quite unrealistic to expect that to happen anytime soon. There are no successful fully-decentralized systems today beyond simple experiments. Even email is not nearly as distributed as it used to be (80% of the world is on 3 providers.) So the goal here is to make sure that the system can be decentralized, but let’s be realistic: Mozilla infrastructure will likely be necessary for a long time.
In my opinion, you’re using the wrong (impossible) measure of success. As long as a user *can* pick a domain that is an IdP, as long as there is no forced centralization, the user and the Web are winning.
(Sorry, it looks like disqus ate my comment… here’s a second try)
Ah, I hadn’t realized Persona and PICL were that separate – perhaps I didn’t read enough documentation, but that wasn’t clear to me (especially since it’s done by the same people using the same mailing lists). It might be useful to emphasize that more in the future.
As for decentralization: alright, let’s assume the fallback will always be needed, then. At what point would it at least be possible to complete a login without the use of Mozilla servers? That would of course require the verifier and the IdP (plus whatever scripts run on the page) to be stabilized. Which of course means getting an API freeze. (Note that you didn’t mention verifiers – just having my own IdP isn’t enough.) Basically, the ability for the same code to 1) be used on the web, and 2) be used on an internal network. At this point it just looks like Persona development has been dropped on the floor 😦
Of course, my desire to be independent of Mozilla is the same reason I am not that interested in PICL – I want to be sure that Mozilla doesn’t have the opportunity to have a problem. I trust you as a whole, and as individuals, but the umbrella is just too big these days 🙂
Hi Mook,
Right now, as you point out, you can set up your own IdP. It’s worth emphasizing that this means that, *right now*, you can complete a login without using Mozilla as part of the trust path. I think that’s pretty awesome 🙂
As you point out, we then need to stabilize the data formats to enable other implementations of the client logic and verifiers to flourish. I’ll admit that this has not been our *top* priority because there is so much we are still doing to improve the flow and understand incremental use cases.
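For readers following along, the client logic in question is already small. Here is a minimal relying-party sketch against the shipped navigator.id API; the /auth/login and /auth/logout routes are placeholders for the site’s own endpoints, and the page is assumed to load the include.js shim from login.persona.org (or to run in a browser with native support).

```javascript
// Minimal relying-party wiring for Persona's navigator.id API.
// /auth/login and /auth/logout are placeholder endpoints on the site's own
// server; the server is where the assertion actually gets verified.
navigator.id.watch({
  loggedInUser: null, // or the email address the site believes is signed in
  onlogin: function (assertion) {
    // Hand the assertion to the server, which checks it with a verifier.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/auth/login");
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify({ assertion: assertion }));
  },
  onlogout: function () {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/auth/logout");
    xhr.send();
  }
});

// The login dialog is triggered from a user action, e.g. a sign-in button.
document.getElementById("signin").onclick = function () {
  navigator.id.request();
};
```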
What you can see from the mailing list discussions, however, is that we’re designing any new feature with eventual full decentralization in mind (and a path to get there). But understand that, the moment we do cut the cord, we significantly lose the ability to innovate.
The plan for decentralization is there, the architecture is there, but for the sake of actually moving the Web to a fully decentralized system *eventually*, we need to not hurry too much, or we’ll screw it up.
And again, regarding PICL: just like with Persona, this will NOT be centralized in architecture. Yes, Mozilla will provide a sensible default, but you will be able to use your own servers (starting with our code) if you prefer.
I cannot stress enough how sensible defaults and good user experience – which can only be achieved through iteration, measurement, A/B testing, etc. – are *necessary* for success. If you really want to eventually achieve decentralized identity, you have to start with centralized scaffolding, always provide a good default, and take the time to learn from that phase. The goal is not to quickly move away from this scaffolding; the goal is to learn from it and move away from it once it’s safe to do so.
I think I understand some nervousness around this approach: you might be worried that, if we stay centralized for too long, we might cement a centralized approach. In the abstract, I get that. But if you look carefully at what we’re doing in practice, I think you’ll see that we’re safe from that risk, because we’ve designed the APIs, the shims, and the trust rules such that decentralization can happen at any time, transparently, for every aspect of our centralized scaffolding. And critically, parts of the world can go decentralized without everyone else having to follow.
(Sorry, it looks like disqus ate my comment _again_. Ugh.)
Actually, some of what I was saying was based on my experiences doing my own IdP (the one in the browser using NSS). I see that there are now three branches of id-specs that look like they may be useful (dev, prod, beta1), only one of which (the last) mentions “iat”. And that’s really what it’s about – the specs don’t match reality, and there’s not really been any movement on making it actually work. I understand (and agree with) wanting to make sure the APIs and protocols are good before freezing them, but that’s not what I’m asking; I just want to make sure that the decentralized side is being considered at all, instead of just ignored.
Of course, looking at the mail today, it seems like some of it might just have been things being done on a private list. I had not expected that to be the case – previous interactions made things seem like what was done was in the open, and that (given that the same people were working on PICL instead) things on the Persona side were just backburnered. Glad to know that things are still progressing, if invisibly…
As for logging in without using Mozilla as part of the trust path at all… that’s not quite true. The verifier still needs to be the persona.org one, even if you don’t care about fallback behaviour – at least, I hadn’t seen anything recommending otherwise. This is sensible, of course; the protocol hasn’t been finalized to a point that could support it. But until that point, trying to not use Mozilla-hosted things for the flow is just not feasible.
I guess what I’m afraid of is two things: one, that everything becomes Yet Another Centralized Service; and two, that PICL is the New Shiny and Persona just kinda dies a slow death because it’s never specced to the point of being decentralizable. It’s not that things need to be frozen now so that people can go make their own implementations, it’s that there should be visible action to ensure that things are actually progressing towards that future.
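For context on the verifier step under discussion: as things stand, it is a single HTTPS POST to Mozilla’s hosted service. A minimal server-side sketch in Node.js follows, with the endpoint and field names as documented for the hosted verifier; the surrounding wiring is illustrative only.

```javascript
// Verify a Persona assertion with the hosted verifier: POST the assertion
// and the expected audience, and trust the login only if status is "okay".
var https = require("https");
var querystring = require("querystring");

function verifyAssertion(assertion, audience, callback) {
  var body = querystring.stringify({ assertion: assertion, audience: audience });
  var req = https.request({
    hostname: "verifier.login.persona.org",
    path: "/verify",
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      "Content-Length": Buffer.byteLength(body)
    }
  }, function (res) {
    var data = "";
    res.on("data", function (chunk) { data += chunk; });
    res.on("end", function () {
      var result = JSON.parse(data);
      // On success the response includes the verified email and audience.
      callback(result.status === "okay" ? result.email : null);
    });
  });
  req.write(body);
  req.end();
}
```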
Hi Mook,
Let me try to clear up some of your worries:
– nothing of importance is going on on the private list, only scheduling and coordination. Sometimes something slips in there, and we immediately correct and put it on the public list.
– there is no chance of the Persona service getting less attention, because it is a critical component for Firefox OS and for PICL (though again, PICL builds on Persona, not the other way around.)
– yes, we are delayed in bringing the spec up to speed. We’re working on fixing that soon. Certainly, we intend for that spec to be up-to-date before we come out of Beta.
– no, we are not building another centralized service. Everything about the design of Persona to date should make that abundantly clear, especially given how much effort we put into it and how much easier it would be if we just centralized it.
I hope that puts your mind at ease.