Facebook finally tackle the problem of forced invites, a behavioral characteristic of some (bad) applications that repeatedly prompt users to invite other users after installing an application or performing some action within the application.
Long overdue: the continued abuse of both the invite and notification APIs has significantly devalued the service and reduced its utility. Many high-tech folks I’ve talked to recently have already given up on Facebook (retreating to Twitter), but perhaps if Facebook continue improving the spam controls they might be tempted back.
It would seem the initial “grab as many users as possible” focus that some applications adopted in order to boost valuations may now (thankfully) be coming to an end.
$15bn-Facebook are also quite cavalier about making changes midweek – they seem to do ‘pushes’ to their production servers that they frequently have to back out or subsequently apologise for. Some of the posts to their platform status feed read like a car crash.
Both could learn a thing or two from mission-critical SaaS service providers like Salesforce, who figured out a long time ago that trust is the key to building and operating a successful online service (though in Facebook’s case other issues may be eroding end-user trust).
Salesforce place so much emphasis on trust that they put considerable effort into building and naming an operations website called http://trust.salesforce.com/. They use it to announce things like scheduled maintenance well in advance. Oh, and Salesforce scheduled maintenance is always done at weekends.
Doing silly things like taking your service offline for a day in the middle of the week, when most of your users will be online, erodes trust.
I lean towards agreeing with Joe. I like the idea behind APML, in that I can see the need for a standard format for publishing attention information. However, I don’t think it is credible to attempt to specify how to safely distribute and share attention data (which has privacy and trust issues written all over it) using a simple annotated schema document and a PHP-based example parser implementation.

Problems:
The core APML XML schema is woefully underspecified: the meaning of the various elements and attributes is not clearly defined at all. To illustrate what I mean, compare and contrast the Atom Syndication Format specification with the APML specification.
There is no protocol specification describing how to access and update APML documents. Again, referring to a specification like the Atom Publishing Protocol highlights the need for one.
At this stage, I think anyone producing specifications that essentially contain lists of data for consumption by arbitrary web-based clients should seriously consider extending the Atom (or RSS) syndication formats rather than rolling their own schema (see the sketch below). They should also consider adapting AtomPub for publishing this type of content (e.g. as Google did for the OpenSocial APIs).
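To make that concrete, here’s a rough sketch, in Python using only the standard library, of attention data riding inside an ordinary Atom feed as a namespaced extension. The attention namespace and element names are invented for illustration – they are not part of any published specification.

import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ATTN = "http://example.org/ns/attention"  # hypothetical extension namespace

ET.register_namespace("", ATOM)
ET.register_namespace("attn", ATTN)

# An ordinary Atom feed; any feed reader can parse this far.
feed = ET.Element(f"{{{ATOM}}}feed")
ET.SubElement(feed, f"{{{ATOM}}}title").text = "My attention profile"
ET.SubElement(feed, f"{{{ATOM}}}updated").text = "2008-01-15T10:00:00Z"

entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}id").text = "tag:example.org,2008:interest/web-standards"
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Interest: web standards"

# The attention weighting travels as a namespaced extension element,
# so Atom-aware clients that don't understand it simply skip it.
concept = ET.SubElement(entry, f"{{{ATTN}}}concept")
concept.set("key", "web standards")
concept.set("value", "0.85")  # relative attention score, 0..1

print(ET.tostring(feed, encoding="unicode"))

Clients that understand the extension get the attention data; everything else still sees a valid feed, which is exactly the interoperability argument for building on Atom.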
The APML specification seems to have been developed in a sandbox. I know it is listed alongside OpenID, microformats and others at (what is now the buzz site of this week) DataPortability.org, but it does not build on or integrate with any of these specifications (worth noting that Chris Saad is behind both efforts). There are no requirements, recommendations or guidelines on how to use APML within the context of existing and pending web infrastructure. Sure, I could build an APML service and then stick it behind an OAuth-protected login system, but then I may end up with a service that nobody can use with their existing client libraries.
That last point actually brings me to something else I’ve been meaning to blog about – DataPortability.org. First off, I think this website is a commendable effort at raising the profile of the specifications it is linking to. However, and this is a big however, I have seen this happen before many, many times in enterprise-software-land (I’m looking at you, CORBA and J2EE vendors!). Consortia of big vendors would get together to discuss reference designs or ‘profiles’ for combining various specifications into an “industry recommended solution” that each company’s website would then vaguely reference in order to claim ‘compliance’.
However, in the absence of free and open reference implementations for these solutions (and I mean an integrated reference implementation here), true compliance and interoperability will never really be achieved. There is simply no carrot there for the vendors to work hard at achieving this, and no stick large enough for the community to beat them with to make them play fairer. So I’m afraid that for now I’m not drinking the kool-aid regarding Google and Facebook joining DataPortability.org. Sorry, but given both Facebook’s and Google’s recent track records I can only assume that this is mostly a PR exercise.
any spreadsheet, database, physical document, server, network, or other repository of information, whether centralized or distributed.
Pretty all-encompassing, eh! I’ll bet there are more than a few Facebook applications that are actively breaking this term of service. (Aside: the 24-hour restriction can be avoided if, and only if, the application explicitly asks the user to opt in – see section 2.A.6. I wonder whether the OutSync tool that Dare uses does that?)
I am of course picking bones here – I could go on, but enough said about the minutiae of Facebook legal mumbo-jumbo. A much bigger and much more important question is: how did we end up in a situation whereby we need to take personal data out of social networks? The answer, of course, is that we allow multiple web services and social networks to indefinitely store overlapping subsets of our personal data as they see fit.
Let me put it another way – what would happen if we inverted the location of your personal data? What if social networks had to (periodically) contact your identity provider to get your personal information and social graph? Then this type of problem would not exist and everyone would have far greater data and service portability.
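To make the inverted model concrete, here is a hypothetical sketch in Python. Every URL, header and field name below is invented for illustration – no such protocol exists today, which is rather the point.

import json
import urllib.request

def fetch_profile(identity_url, access_token):
    """Pull the user's current profile and social graph from their
    identity provider instead of keeping a stale local copy."""
    request = urllib.request.Request(
        identity_url + "/profile",
        headers={"X-Access-Token": access_token},  # hypothetical auth header
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# A social network would call this on each visit (or when a short-lived
# cache expires), so a change made at the identity provider shows up
# everywhere automatically:
# profile = fetch_profile("https://id.example.org/users/alice", token)
# render_page(profile["name"], profile["friends"])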
However, there are several large barriers to this happening:
We don’t yet have an established global identity scheme for storing the critical personal and social graph information that social network websites need to operate. OpenID and OAuth provide the low-level plumbing for such a scheme, but a higher-level standardized portable personal information protocol is required to allow third parties to find out more about a user given their OpenID.
Assuming the above existed, it would be impossibly difficult for 99% of internet users to manage, use or understand unless it (their identity service) was managed on their behalf by the organization they work for or by their broadband provider. I was initially going to say ‘was built into their OS’, but nowadays people use multiple computers that have no fixed public internet address, so that’s not even close to an option.
No large social network will ever willingly volunteer to support this. Legislation/Regulation will be required to force the existing social networks to evolve onto this identity model.
The last point is probably the biggest barrier, and is likely the reason why no big player is expending significant effort on developing standards for user-owned identity profiles. Given the relative lack of voice that average internet users, or even groups of users, now have (Scoble aside), legislation and/or regulation is IMHO the only way to compel the incumbents to change how the whole social networking model operates.
The word ‘open’ has been abused terribly in recent months (I’m looking at you, OpenSocial, and you, AT&T/Verizon), but the recently completed OpenID 2.0 and OAuth Core 1.0 specifications are truly open. They really should be on the radar of every self-respecting web developer who works on websites/APIs that require authentication (OpenID) and authorization/access control (OAuth). Both are integral to any hope we have of evolving the existing world wide web into a truly open social network (or the giant global graph, as timbl now calls it).
That said, minimal OpenID implementations won’t solve all authentication headaches. Phishing is a problem, so I suspect OpenID-enabled sites will need to employ whitelists of providers, as Tim and Dare highlighted a while back.
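To give a flavour of what OAuth Core 1.0 actually pins down, here is a minimal sketch of its HMAC-SHA1 request signing in Python; the consumer key, secret and request URL are placeholders.

import base64
import hashlib
import hmac
import time
import urllib.parse
import uuid

def percent_encode(s):
    # OAuth 1.0 mandates RFC 3986 encoding, with "~" left unescaped.
    return urllib.parse.quote(s, safe="~")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    # 1. Normalize the parameters: encode, sort, and join as k=v pairs.
    pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. Build the signature base string: METHOD&URL&PARAMS.
    base_string = "&".join([method.upper(), percent_encode(url), percent_encode(param_string)])
    # 3. The signing key is the consumer secret plus the token secret.
    key = percent_encode(consumer_secret) + "&" + percent_encode(token_secret)
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()

params = {
    "oauth_consumer_key": "my-consumer-key",  # placeholder
    "oauth_nonce": uuid.uuid4().hex,
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_version": "1.0",
}
params["oauth_signature"] = sign_request(
    "GET", "http://example.com/api/photos", params, "my-consumer-secret"
)

The fiddly, byte-for-byte nature of steps 1–3 is exactly why having a final, common specification (and shared libraries for it) matters.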
Now we (the web community, that is) need two things to happen.
We need the big online identity silos like Google, Yahoo!, Microsoft Live, Facebook and MySpace – the sites whose login page average web users trust – to step up to the plate and act as OpenID providers.
We need the big API sites like Google Maps/Charts/Base/…, Microsoft Live, Yahoo!/Flickr, Facebook to start working on enabling OAuth access to their APIs.
Note the overlap in the two lists above – yep, those guys own this part of the web. Which of them will be brave enough to move first? With final specifications in hand there are no excuses, so please go forth and implement, and let’s end this www account/data-access hell we all live in.
Perhaps this is past history now that they have implemented the great off switch, but it helps explain why organizations like MoveOn got so riled. (Btw, here’s a game: find the link in the Facebook/Zuckerberg blog post.)
I hinted that authentication was a concern at the end of my previous post on the topic.
From the OpenSocial documentation I can only surmise that the sole authentication mechanism prescribed by OpenSocial is the Google Authentication APIs. The application user gets redirected to Google Login, and once that’s done the application gets a token that it uses when calling the OpenSocial APIs. This implies that every OpenSocial application user has to have a Google Account.
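In outline, the flow looks like this (a sketch based on Google’s AuthSub documentation of the time; the next and scope values are placeholders):

import urllib.parse
import urllib.request

# Step 1: send the user to Google's login page. Google redirects them
# back to the `next` URL with a single-use token in the query string.
auth_url = "https://www.google.com/accounts/AuthSubRequest?" + urllib.parse.urlencode({
    "next": "http://myapp.example.com/authsub-return",  # placeholder callback
    "scope": "http://sandbox.orkut.com/social/",        # placeholder scope
    "session": "1",  # ask for a token that can be upgraded to a session token
})

# Step 2: use the token on subsequent API calls via the AuthSub header.
def call_api(url, token):
    request = urllib.request.Request(
        url, headers={"Authorization": f'AuthSub token="{token}"'}
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

Note that nothing in this flow is container-neutral: the token is issued by, and validated against, Google.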
There are no references to open alternatives such as OAuth or OpenID in the documentation. It’s worth bearing in mind that the existing documents are very Orkut-centric, so perhaps they are focusing too much on explaining how it works within the Google/Orkut world, but I can’t find any alternative info. This really doesn’t seem very Open!
I’m wondering how this will work for applications that are deployed into Ning or MySpace. Does the application have to detect that it is not in Google-land and use the local container’s authentication/delegating-authority mechanism? Or does it continue to authenticate against Google Accounts? Am I missing something obvious here?
Here things get fuzzier. One of the main things I wanted to find out was how I might host an OpenSocial application on our servers, in the same way that we host some of the Facebook applications we have developed for clients. (Facebook authentication is described here, for those who are curious about where I’m coming from.)
The docs only talk about Google Gadgets. There isn’t any reference to a callback API that the OpenSocial container calls when it wants the content for the application – this concept doesn’t seem to exist (or at least is not documented). The AuthSubRequest API does take a next parameter, but is that it?
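For contrast, here is roughly the Facebook callback model I’m referring to: the container POSTs a set of fb_sig_* parameters to your server, your server verifies them against your application secret and returns the page markup. A minimal sketch of the verification step, following the signature scheme in Facebook’s platform docs of the time:

import hashlib

def verify_fb_sig(post_params, app_secret):
    """Check Facebook's signature over the fb_sig_* POST parameters."""
    sig = post_params.get("fb_sig", "")
    # Strip the fb_sig_ prefix, sort by key, and concatenate k=v pairs.
    payload = "".join(
        f"{key[len('fb_sig_'):]}={value}"
        for key, value in sorted(post_params.items())
        if key.startswith("fb_sig_")
    )
    expected = hashlib.md5((payload + app_secret).encode()).hexdigest()
    return sig == expected

It is this kind of container-to-server callback that I can’t find an OpenSocial equivalent for.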
OpenSocial is built upon Google Gadget technology, so you can build a great, viral social app with little to no serving costs.
Uh-oh. Why create a dependency between an open social network API and a proprietary widget platform? I can see that some OpenSocial applications might be built as Google Gadgets but what if I want to create an OpenSocial application that isn’t?
It’s an admirable and ambitious project, and it kicks Facebook right in the FBML-tender-spot, but as ever the devil will be in the detail. There seem to be some concerns about how truly portable applications would be, but I’m far more interested in the portability of user data. I’m looking forward to reviewing the API details when they are published; hopefully they will also incorporate some of the emerging de-facto standards like OpenID and OAuth.
Facebook’s next major move will be interesting. Some of their key application developers (Rock You, Slide) will of course divert resources to work on OpenSocial based applications.
At the Facebook Developer Garage in London a few weeks ago, Facebook’s VP of Product Development (Chamath Palihapitiya) mentioned they had 5.2 million UK users, so I suspect the real total is somewhere in between, but that growth rate is phenomenal. Almost as interesting were some other stats Palihapitiya presented – IIRC he mentioned a 50% daily return rate, with each user burning through over 1,500 page views a month…