Not as crazy as it sounds: when you are designing something new, you could do worse than to start by working backwards. Werner Vogels recently wrote about how Amazon uses this process to help their small teams produce services that customers (internal or external) want:
The product definition process works backwards in the following way: we start by writing the documents we’ll need at launch (the press release and the faq) and then work towards documents that are closer to the implementation.
The Working Backwards product definition process is all about fleshing out the concept and achieving clarity of thought about what we will ultimately go off and build.
With the ever-growing trend towards online applications and services, software architects need to be more aware than ever of the challenges in building platforms to host these types of applications. Successful sites in this space (Craigslist, Flickr, Salesforce, etc.) all have one common problem to cope with: how do you maintain availability while dealing with exponential audience growth?
Two excellent pieces offer real insight into the experiences of those who have hyper-succeeded in the past:
- Inside MySpace.com, by Baseline Magazine, is a great read about how MySpace scaled their architecture from zero to over 26 million user accounts, serving over 40 billion pages a month (isn’t that figure just incredible!).
- Database War Stories is a series of posts by Tim O’Reilly, interviewing folks from Second Life, Memeorandum, Craigslist and more. (The rest of the posts are linked to at the bottom of the first post.)
One common theme in many of these stories: periodically these guys are faced with the stark reality that incremental improvements to existing infrastructure will not sustain the current business model. It is testimony to the folks in charge that they trust their geeks enough to bet the company repeatedly on new architectures.
It is a high-risk world and there are many that fall by the wayside but the rewards for the brave are there for all to see.
A very naive entrepreneur (or a joke) on GetACoder.com:
So I’m posting for a rather large project. I need someone to program me a new OS (Operasting System) that looks different than Ms Windows XP etc. but has the same style. It does not need to run on a mac but all the other PCs. It’s supposed to have a stylish look with clear edges etc. And ITS NOT SUPPOSED TO BE JUST A REDESIGNED WINDOWS as I’m going to sell that operating system later on. It’s going to be called BlueOrb.
The mix of responses is great. Some are hilarious (“My blue orbs will beat your blue orbs”) but are others really taking this seriously (“Well I already designed and OS for Computational chemistry research group…”)?
The serious responses may be extreme examples, but their mere presence shows why even attempting to rent coders for anything more than a you-absolutely-couldn’t-possibly-screw-this-up-if-you-tried project is just A Bad Idea.
There used to be an age-old restriction in Eclipse whereby two projects couldn’t be imported into the same workspace if the project locations overlapped in the file system. So if you had a project hierarchy where one project’s folder (call it ‘inner’) lived inside another project’s folder (‘top’), then you couldn’t have a workspace containing both projects.
I say ‘age-old’, because (obviously) somewhere along the line this workspace restriction was removed, maybe at the behest of WTP – I’m not sure, I can’t find the related Bugzilla entry (UPDATED: John Arthorne has indirectly pointed me to it – Bugzilla 44967). I only noticed last night when I was just about to blog about how nice it would be to be able to do this. It would appear I am very stuck in my ways – until now I’ve always used separate workspaces to work on my top-level projects, and that doesn’t sit too well with trying to use Mylar.
By top-level projects I mean simple projects (with no builders/natures) that contain the Ant/PDE build scripts and multiple sub-folders, each containing a set of projects (plugin or plain Java) that represent the modular components of our products. These projects are usually big – whole CVS modules in fact – but by synchronizing them with our CVS repository we can easily detect build script changes that might require us to do a little more than re-build each plugin project in our workspaces in order to pick up all of the latest changes.
There is one trick to using this arrangement. Always make sure that you put at least one non-project folder in between your top level project and the contained projects. That way you can check out the top level project, and then point the Eclipse Import “Existing Projects…” wizard at the intermediate folder and it can auto-import all projects found in sub-folders. If you point it at the top level project it will do nothing, thinking you already have that project in your workspace.
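To make the trick concrete, here is a hypothetical layout (all names made up) with one intermediate, non-project folder sitting between the top-level project and the contained projects:

```
top-level-project/          <- simple project: .project + Ant/PDE build scripts
    .project
    build.xml
    plugins/                <- intermediate folder, NOT itself a project
        com.example.core/   <- contained plugin project
            .project
        com.example.ui/     <- contained plugin project
            .project
```

Pointing the “Existing Projects…” wizard at plugins/ auto-imports both contained projects; pointing it at top-level-project/ would find nothing, because that project is already in the workspace.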
Anyway, this is all great news for anyone who wants to use Mylar with a project arrangement like this, since it means that if you need to tweak build scripts as well as plugin source code (which I very frequently do), then you can do it all in one workspace and Mylar will keep track of everything you are doing. This is great for building change sets, especially for committing changes back to CVS!
There is one caveat, though, but only if you check your projects out into your workspace folder – and I think a lot of people do this. If you subsequently use the New Project wizard to try to create a “contained” project (like the ‘inner’ one illustrated above) then you will hit a problem. When you deselect the “Use default location” check box for the second (contained) project, the wizard will flag an error claiming that the “Project contents cannot be inside workspace directory”. I have no idea why it does this, since the suggested default is in the workspace folder, but it’d be nice if the platform guys could fix it. It must be a trivial fix – I’m guessing it’s doing an indiscriminate “location.startsWith()” validation check on the string or something (see Bugzilla 165336). (UPDATE 2: it looks like the UI bug is fixed for 3.3 – see Bugzilla 147727)
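My guess at the validation can be sketched in a few lines. This is purely a hypothetical illustration of the suspected behaviour, not the actual wizard code: a blanket prefix check rejects any location under the workspace root, including the now-legal nested project locations.

```java
public class PrefixCheckDemo {
    // Hypothetical version of the suspected check: reject any project
    // location that merely starts with the workspace root path.
    static boolean rejected(String workspaceRoot, String projectLocation) {
        return projectLocation.startsWith(workspaceRoot);
    }

    public static void main(String[] args) {
        String ws = "/home/me/workspace";
        // A project nested inside a checked-out top-level project:
        // legal now that overlapping locations are allowed, but the
        // indiscriminate prefix test still flags it as an error.
        System.out.println(rejected(ws, ws + "/top/inner"));    // true
        // A location outside the workspace passes the check.
        System.out.println(rejected(ws, "/home/me/elsewhere")); // false
    }
}
```

A proper fix would presumably have to distinguish “directly inside the workspace root” from “inside some already-overlapping project folder”, which is why a bare prefix test misfires.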
A workaround is to create the project elsewhere, delete it from the workspace (but not its contents), move it on the filesystem and then re-import it into the workspace…
To complement the HTTP EFS plugin I created a while back, I have created a simple “New Link” wizard extension that can be used to create linked resources in projects. Manually adding linkedResources elements to the Eclipse project descriptor (.project) file is no longer required!
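For reference, this is roughly the entry the wizard now writes for you – a linkedResources element in the .project file (the resource name and target URL here are made up):

```xml
<!-- excerpt from a project's .project file; type 1 = file, 2 = folder -->
<linkedResources>
    <link>
        <name>remote-notes.txt</name>
        <type>1</type>
        <locationURI>http://example.com/docs/remote-notes.txt</locationURI>
    </link>
</linkedResources>
```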
The wizard page implementation class extends org.eclipse.ui.dialogs.WizardNewFileCreationPage (the standard New File wizard page) and just adds an extra control at the bottom to prompt the user for a target URI location. Thanks to the platform UI folks for designing that page class in such an extensible way.
Plugin source and binary available at Cape Clear Developer.
So ApacheCon Europe ’06 has been on in the Burlington for the past few days (anyone wandering around Donnybrook/Leeson Street in the mornings/evenings may have noticed a “geek increase”). It was a bit smaller than I expected, but the quality of the content was superb if one bears in mind that these guys are doing this stuff part time (well, some of them!). The sessions I attended were all SOA/OSGi-centric and all were worth attending from an educational point of view. A very partial summary:
- The OSGi/JSR#291/JSR#277 issue is still very much alive, although some efforts are being made to try to achieve some degree of compatibility by using a common subset of Manifest headers. However, differences apparently already exist in the drafts (such as the version range specifier format used). It is pretty sad to see such open (or should I say JCP-closed) disregard for interoperability between two overlapping JSRs. Both are due for delivery in Dolphin (Java 7); perhaps by then the whole Java platform will have been rendered unusable by the plethora of other overlapping JSRs and we (the users) will all have moved on to coding in Ruby.
Actually, by then, it may not matter – OSGi seems to be gaining considerable traction. Tuesday had a whole afternoon of talks and discussion from Richard Hall (Felix), Peter Kriens (OSGi Alliance/aQute), Marcel Offermans (Luminis) and a round-table discussion involving folks from the Apache Maven, Felix, Harmony and Directory Server projects. It seems some efforts are being made to try to get Apache Jakarta projects to OSGi-fy their Manifests (via capabilities built into Maven).
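For the curious, the contested territory is headers like these – a minimal, hypothetical OSGi bundle manifest showing the kind of header subset (and OSGi-style version ranges) under discussion:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.greeter
Bundle-Version: 1.0.0
Export-Package: com.example.greeter.api;version="1.0.0"
Import-Package: org.osgi.framework;version="[1.3.0,2.0.0)"
```

The "[1.3.0,2.0.0)" interval syntax (inclusive floor, exclusive ceiling) is the OSGi version range format – reportedly one of the places where the JSR-277 draft already diverges.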
- Woden looks good. It’ll be the WSDL2.0 processor that all Java folks end up using (if you need to parse WSDL2.0 that is!). It replaces WSDL4J which is being put out to pasture (it must be, I logged a bug against it over a year ago and still no feedback!)
- Axis 2 apparently has WSDL2.0 HTTP binding support with some restrictions (messages must be “IRI style” compatible). I’m not sure if one even needs to declare an HTTP binding in the WSDL; it seems to be automatically available for all services that have SOAP endpoints.
- Apache Synapse looks nice, in a cute kind of way. I’m not sure I’d call it an ESB, since it is really just a very thin routing framework that runs on top of Axis 2 – it doesn’t have any management interface per se, and the overview might have been a bit forward-looking with regard to the capabilities of the current implementation. Synapse looks more like an endpoint intermediary that one might use to implement specific tasks: XPath/RegExp-based routing, binding conversion, logging, endpoint authentication & authorization, and transformation (XSLT, E4X, POJO-based). I’m not sure you’d host your service implementations on it.
- Apache Tuscany work is ongoing, and I bet it will be for some time yet! I’m not convinced by their claim that it will simplify the development of business solutions, if only because it is based on SCA, which is turning into an absolute beast of a set of specifications – think “one spec to rule them all” big, and with big specifications comes complexity, not simplification. From an implementation point of view, a huge problem is that the SCA specifications have neither a reference implementation nor a compatibility test suite, and according to Simon Nash and Jeremy Boynes (both of IBM) there are no current plans to develop either. It also seems curious that the SCA specifications are not being developed under the auspices of an open body like OASIS, W3C or the OMG – why not? Some would say the overall approach seems incredibly vulnerable to repeating the mistakes of CORBA. It will also be interesting to see if any convincing response is given to Ron Ten-Hove’s recent critique of SCA.
- They’re giving it away for free but they only ship some of the source for the toolkit. It would of course be better if they shipped all of the source for the toolkit itself when integrating that much open source software (Eclipse/Tomcat/Mozilla/Rhino) into a free toolkit. Quid pro quo and all that.
See the overview for all the gory details.
The Web 2.0 bubble continues to expand, and with the hype comes the inevitable need for thought leadership. The daily arrival of “new, exciting and revolutionary mashups/services” against the background noise of the perpetual “but what the hell IS Web 2.0?” questions from newbies is prompting the thought leaders in the arena to throw in their tuppence.
However, I got a bit worried when I read Thinking in Web 2.0: Sixteen Ways by Dion Hinchcliffe. Aside from having a real problem with #3 on Dion’s list (which is a fantasy), I couldn’t quite put my finger on why this list seemed valueless until I read Russell Beattie’s wtf 2.0? followup (great post title!). Dion’s list not only never touches on the business side of things, it never even mentions the whole reason why someone would want to provide a service – to make money. Update: There is some to-ing and fro-ing between the two in updates to their original posts, but wtf 2.0 is still the more important post to read.
Besides, I’m not sure you can teach people how to formulate a good idea from a list of 16 rules for a technology domain that at worst defies definition and at best can only be defined using diagrams that contain 30-40 components.
Last year, podcasting was all the rage, destined to destroy big media. I don’t know about you but back in the real world, I still get most of my content from big media.
It would appear that the Eclipse Platform – or actually the new-ish Equinox (OSGi runtime) project – has finally decided to eat its own dog food and address “the poor practice of not versioning Eclipse” for their 3.2 release.
If you are skimming the Bugzilla thread, comment #50 is the most important response, I think – #25, #27, #30, #32 and #40 are also good contributions.
They have published a versioning best practices document that is incredibly complicated but (given the length of the bugzilla thread) it would seem a lot of thought has gone into finding a system that works for both maintenance and forward dev branches.
About time, if you ask me; the lack of strict version specification in the Eclipse platform has led everyone downstream to believe it was OK to do the same, leading to an almighty mess and in most cases making it impossible to use the update manager to reliably install multiple unrelated features into the same Eclipse installation.
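Under the new guidelines a downstream plugin can finally express a meaningful dependency – something like this hypothetical Require-Bundle manifest entry, which accepts service and minor updates but rules out a breaking major version change:

```
Require-Bundle: org.eclipse.core.runtime;bundle-version="[3.2.0,4.0.0)"
```

The four-part Eclipse scheme (major.minor.service.qualifier) is what makes a range like this trustworthy: a service increment signals bug fixes only, a minor increment signals compatible API additions, and a major increment signals breaking change.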
Now they’ll just have to get all those other projects to follow their lead…