Monday, December 12, 2011

Cool Innovations by WSO2

I've long held that the WSO2 suite of middleware products is one of the industry's best-kept secrets. But just what is it that makes these products so special?

Well, if the mere fact that there exists a functionally comprehensive, 100% Open Source middleware suite isn't remarkable enough, there are plenty of other reasons why IT practitioners should take a good look at WSO2's offerings.

For a few months now, I have had the opportunity to work closely with the company's engineers and play with their products, using them to build (demo) distributed systems. This is what I have found.

There is significant innovation here that is really cool. There may be even more that I haven't discovered yet.

1. OSGi bundles, functionality "features", and the fluid definition of a "product"

(WSO2 has no rigid products, although their brochures list about 12. The truth is that they have hundreds of capability bundles that can be combined with a common core to create tailored products at will.)

2. A "Middleware Anywhere" architecture that spans cloud and terrestrial servers

(When cloud-native features like multi-tenancy and elasticity are baked into the common core of the middleware product suite, there is no need for applications to be written differently for cloud and terrestrial deployments.)

3. Not just an ESB - the right tool for every job

(As I explain in "Practical SOA for the Solution Architect", there are three core technology components required for SOA - the Service Container, the Broker and the Process Coordinator. They do different things and are not substitutes for one another. There are also eight supporting aspects at the technology layer. The ESB, being just the Broker component, cannot perform all of these functions. WSO2 has products corresponding to all of them.)

4. Making a federated ESB an economically viable architecture

(Economics forces many organisations to deploy their expensive ESB product in a centralised, hub-and-spokes architecture, which leads to a single point of failure and a performance bottleneck. But Brokers are best deployed in a federated manner close to service provider and service consumer endpoints. WSO2's affordable pricing model makes it easy for organisations to do the right thing, architecturally speaking.)

5. The "Server Role" concept - enabling a logical treatment of SOA topology

(Architects don't think in terms of products but in terms of functional capability. One would rather earmark an artifact for deployment to a "mainframe proxy" and another to a "customer complaints process node" than to an "ESB" or a "Business Process Server" instance. Tagging both servers and artifacts with a user-defined "Server Role" makes it possible to speak in these convenient abstractions.)
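
As best I can recall, this lives in repository/conf/carbon.xml - treat the snippet below as a sketch rather than gospel, and note that the role names are my own inventions:

<!-- repository/conf/carbon.xml (illustrative role names) -->
<ServerRoles>
    <Role>MainframeProxy</Role>
    <Role>CustomerComplaintsProcessNode</Role>
</ServerRoles>

Artifacts tagged with a matching role then get deployed only to servers that declare that role.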

6. The CAR file as a version snapshot of a working and tested distributed system

(In distributed systems, upgrading the version of software X can break its interoperability with software Y. That's why version change in distributed systems is such a nightmare involving expensive regression testing and a higher probability of outage after an upgrade. But what if a related set of changes can be tested together, certified as interoperable, labelled as a single version, and deployed to multiple systems through a common mechanism? We've just described the Carbon Archive.)

7. The CAR file and Carbon Studio as a unifier of diverse developer skillsets

(Developing distributed systems is challenging from a skills perspective. Writing business logic in Java and exposing it as a web service requires different skills from writing data transformation in XSLT, which is again different from specifying process logic in WS-BPEL. And then there are specialised languages to codify business rules, etc. WSO2 has a single IDE to support all these diverse developer needs - Carbon Studio. It also provides a single package that can hold all these types of artifacts - the Carbon Archive.)

8. The Admin console and support for configuration over coding

(Not every artifact needs to be "developed" through code. Every WSO2 server product has an Admin console that looks largely the same across products, yet is tailored to the peculiarities of the artifacts deployed on that server. Following an 80-20 rule, the bulk of the (simple) artifacts that need to be deployed on a server can be created through configuration using the admin console.)

9. Port offsets and the ability to run multiple servers on the same machine

(Owing to their common core, all WSO2 server products listen on the same ports - 9763 (HTTP) and 9443 (HTTPS). Obviously, port conflicts will result when attempting to run two servers (or even two instances of the same server product) on a single machine. But with a simple configuration change (a "port offset"), a server can be nudged away from its default port to a non-conflicting one. A port offset of 1 will have a server listening on ports 9764 (HTTP) and 9444 (HTTPS), for example.)
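
If memory serves (I'm assuming the Carbon 3.x configuration layout here), the offset is a single element in repository/conf/carbon.xml:

<!-- Shift every listener port up by 1: HTTP 9763 becomes 9764, HTTPS 9443 becomes 9444 -->
<Ports>
    <Offset>1</Offset>
</Ports>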

10. The dark horse - Mashup Server and server-side JavaScript

(Node.js has refocused industry attention on server-side JavaScript and the power that brings. But Node.js has no support for E4X! If you're doing XML manipulation of data from multiple sources, which is what mashups often entail, WSO2's Mashup Server is worth a serious look.)
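
To see why that matters, here's a minimal sketch, from memory, of the kind of XML handling E4X makes trivial (I'm ignoring the Mashup Server's own service conventions and just showing raw E4X):

// XML is a first-class type in E4X - no parser, no DOM boilerplate
var order = <order>
              <item price="29.99">Widget</item>
            </order>;

var price = Number(order.item.@price);        // navigate with dots
order.item.@price = (price * 1.1).toFixed(2); // mutate attributes in place

Try doing that in three lines against the DOM API.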

11. Carbon Studio - A light-touch IDE for middleware developers

(The best GUIs are those built on top of non-visual scripting. Every artifact used by the WSO2 development process is text-based, including the build process that relies on Maven. Command-line junkies can coexist peacefully with those who prefer a GUI, because the IDE imposes no additional requirements on developers. It's purely an option, not an obligation. You can even use another IDE like IntelliJ IDEA with no loss of capability.)

12. Governance Registry - for control both mundane and subtle

(Governance or plain management? People often use the former term when they mean the latter. In any case, whatever artifacts you need to hold in support of either function, the WSO2 registry and repository tool is simple, flexible and powerful enough to support you. The registry is embedded inside every server product as well as being available as a standalone server, so the ways in which one can store and share configuration settings and policy files are pleasurably versatile.)

If you work with middleware, you should be seriously checking out the offerings of WSO2.

Friday, December 09, 2011

Nutshell Definitions

When conducting the Practical SOA workshops in different Australian cities last month, I felt the need to explain a few concepts to my audience in very simple and memorable terms. I realised I had already been doing this for myself for a long time: I try to distil a concept into a single word, or at most a short phrase, in order to understand it. This has been very useful to me in evaluating the merits of technologies and comparing them with others. Call these my trade secrets, which I'm now sharing with you :-).

OK, so here are some of my definitions of popular terms, each in a nutshell:

Service-Oriented Architecture (SOA):
1. In one word, dependencies. More precisely, it is the science of analysing and managing dependencies - making implicit dependencies explicit and eliminating unnecessary dependencies between systems. That's what it's all about.
2. Alternative definition: Lego-isation of the Enterprise, i.e., refactoring the various application silos in an enterprise into reusable building blocks based on their core functions.

Middleware:
That which converts organisational silos into Lego blocks.

Integration:
This is a tricky one. When you point your browser at a website and the page loads, no one thinks of it as integration. For something to be recognised as "integration", it seems it cannot afford to appear effortless! For this reason, I steer clear of trying to define integration in terms of technology. The best integration is seamless and not seen as such. It's an art that involves dependency management (see the definition of SOA above). Minimalism is a virtue here, and appropriate data design is an unacknowledged part of integration.

Governance, as opposed to plain old Management:
Governance has suddenly become a very fashionable word, and not just in IT. "Corporate governance" is a phrase parroted by commentators who often mean just management. So what's the difference?

In a nutshell,
Governance is about doing the right thing, the "what", the "goal".
Management is about doing things right, the "how", the "task".

Cloud computing as opposed to conventional "terrestrial" computing:
The leasing of IT capability as opposed to ownership. "IT capability" is a very loose term, and its nature varies based on the type of cloud (below). Leasing has several benefits over ownership - no upfront costs, and pay-as-you-go scalability that is easy on startup operations, i.e., practically limitless capacity without having to provision it upfront.

Infrastructure as a Service (IaaS):
Leasing infrastructure (storage, compute power and networking), and owning all applications above it.

Platform as a Service (PaaS):
Leasing infrastructure as well as some application frameworks and common utilities, and owning all applications above it. The frameworks and utilities make it easier to build the applications above.

Software as a Service (SaaS):
Leasing all the application functionality required, and owning nothing.

Virtualisation:
Imagine separating your "mind" from your "brain". We assume exactly one mind per brain and vice-versa. But what if you can have multiple minds within a single brain, a kind of benign schizophrenia? Or more eerily, if a mind can span multiple brains and think more powerful thoughts as a result? That's virtualisation - turning a one-to-one relationship between mind and brain into a many-to-many one.

Cloud computing as opposed to virtualised servers in "terrestrial" computing:
In virtualisation, you still own the brains. In cloud computing, you lease them.

Private cloud:
Leasing brains to yourself to run minds on. This may make sense in certain cases, like with transfer pricing between business units of the same enterprise. But because you haven't rid yourself of the responsibilities of ownership, it may not be as attractive as a public cloud. It can still make sense, though, because sometimes legislation prevents minds from running on strange brains, and you may want to test the concept of mind-brain separation yourself before trusting someone else's brains with the sensitive job of running (and therefore knowing) your minds.


Sunday, December 04, 2011

Building RESTful applications using the WSO2 platform

Someone taking a casual look at the WSO2 middleware platform might be forgiven for thinking it is exclusively about SOAP and WS-*. But there is in fact strong support for building RESTful applications on this platform using the JAX-RS API, and Prabath Siriwardena (one of WSO2's experts on Identity Management) has blogged about the recommended component architecture to achieve this. There is also a WSO2 workshop on this topic conducted by Asanka Abeysinghe, who has many years of experience on the customer side of the fence and understands both the vendor (technology) and customer perspectives. The workshop is on Dec 8th and should be worth attending for people living close to Palo Alto.
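
For the curious, a JAX-RS resource class looks something like the sketch below. This is the generic JAX-RS idiom rather than anything WSO2-specific, and the class name, path and payload are my own illustrative inventions:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// The annotations map HTTP requests onto plain Java methods:
// GET /customers/42 ends up invoking getCustomer("42").
@Path("/customers")
public class CustomerResource {

    @GET
    @Path("/{id}")
    @Produces("application/xml")
    public String getCustomer(@PathParam("id") String id) {
        return "<customer><id>" + id + "</id></customer>";
    }
}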

An interesting sidelight: In Practical SOA for the Solution Architect, I re-introduce the practitioner to SOA principles by talking about three core technology components - the Service Container, the Broker and the Process Coordinator. According to this view of SOA, a service can be exposed through any of these components and a service consumer will be none the wiser as to the nature of its implementation. In other words, the Practical SOA approach does not make any prescriptions about which component should be the consumer-facing one. All of them are equally valid candidates, and the only criterion for choosing one over the other is the nature of the service enablement mechanism (bespoke, brokered or orchestrated).

In Prabath's architecture diagram below, the reader will notice that runtime clients must all consume services through the ESB (the Broker), even though these services are hosted on the App Server (the Service Container). Why can't services be exposed directly from the App Server where they are hosted?

Prabath explains that using a Broker (ESB) instance as the front-end for services is recommended practice because the ESB can provide security features like authentication and authorisation, as well as throttling capabilities to guard against Denial of Service (DoS) attacks.

So in the real world, we may often need to front-end Service Containers and Process Coordinators with an instance of the Broker that is dedicated to providing these security and traffic shaping features. This could (and should!) be a different instance of the Broker from those used to mediate access to backend systems. Such an architecture will work well because ESBs are better deployed in a federated topology than in a centralised hub-and-spokes fashion. [The unnatural hub-and-spokes topology for ESB deployment, which the high cost of most commercial ESBs forces on customer organisations, then results in a performance bottleneck and a single point of failure. Fortunately, the more favourable economics of WSO2's Commercial Open Source model makes it feasible for customer organisations to implement a more flexible federated architecture for the ESB.]

Tuesday, November 29, 2011

PaaS by Lineage

If you've ever wondered about what "Platform as a Service" (PaaS) really means, then you may find this analysis of mine useful.

The traditional NIST model of Cloud Computing shows three layers from a consumer's perspective:

Infrastructure as a Service refers to (shared and scalable) infrastructural capabilities such as compute power, storage and networking, as provided by a cloud vendor. The attributes "shared" (or "multi-tenanted") and "scalable" (or "elastic") distinguish cloud solutions from "terrestrial" alternatives. Amazon's EC2 (Elastic Compute Cloud) and S3 (Simple Storage Service) are examples of IaaS.

Software as a Service refers to applications that you don't have to install on your own computers but can consume through your browser. For an individual, Gmail is the best example of SaaS, while companies can relate to SalesForce.com.

But PaaS has always been a bit of a mystery. What is a "platform", exactly? This is a rather nebulous (pardon the cloudy adjective) area between IaaS and SaaS, and appears to be defined by what the other two are not. The market segment is also still in flux and yet to mature, and there are many players here, each staking out a piece of turf and attempting to define the segment to its own advantage.

I have a high-level, vendor-neutral view of this. [Disclosure: I'm currently working for one of the PaaS vendors, WSO2.]

My preferred categorisation of the PaaS landscape is by lineage. In other words, where did the various PaaS offerings evolve from? I think this is a useful way to look at PaaS because it indicates the traditional strengths of a vendor and therefore where their PaaS offering is likely to be stronger than its competitors. Potential consumers who are looking for a particular emphasis in their PaaS solution will know which vendors are likely to meet their requirements better.

In short, I think PaaS offerings have evolved from one of three directions.

IaaS vendors like VMware (with their vCloud IaaS) have added DevOps capability to their traditional strength and are targeting organisations that want to develop and run generic applications on the cloud. Their version of PaaS is Cloud Foundry.

SaaS vendors like SalesForce.com have made their application more generic and supportive of customisation. Organisations that want to build employee-oriented applications can do so through configuration rather than coding. Their version of PaaS, which is more oriented towards a vertical segment (employee-oriented applications), is called Force.com.

A third direction from which PaaS has evolved is from traditional "terrestrial" middleware (also known as Integration or SOA products). WSO2, which has a full stack of SOA middleware products collectively referred to as Carbon, has added cloud-native features (multi-tenancy, elasticity, etc.) to its suite of products to turn them into yet another version of horizontal PaaS called Stratos. Their DevOps tools have also been upgraded to be able to deploy applications to the cloud just like they do to terrestrial servers.

The diagram below, which loosely follows the NIST layering, illustrates my analysis. There may be more versions of PaaS depending on where a vendor has traditionally been based, and I will update this model as more such examples appear.


Friday, November 25, 2011

Glamorous Tech and Workhorse Tech

IT people are biased towards the new, the cool and the "sexy". Perhaps Node.js and Cloud Computing fall into this category. Glamorous technology is very attractive, because it promises to be "the next big thing", but it's usually not usable today. Like Benjamin Franklin's proverbial new-born baby, it's full of potential, but does nothing for us in the here and now.

Contrast this with "workhorse technology", which is mature and relatively boring, and used to build working real-world systems that make or save money today. Not many people get excited about workhorse technology, but the majority of them work with it anyway, because it pays the bills.

I want to talk about one particular category of workhorse technology that I'm discovering can be quite glamorous as well. I have gradually become more familiar with it over the last few months of working with WSO2 - writing the "Practical SOA for the Solution Architect" white paper, conducting a webinar and then hitting the road to conduct workshops in three Australian cities. I've realised that a lightweight SOA methodology combined with a lightweight SOA product suite can be a very effective workhorse technology that is usable today and saves real dollars. In addition, it's actually pretty cool, because of how quickly and easily it can help practitioners integrate diverse and distributed systems into an end-to-end business solution.

In a nutshell, in the white paper, webinar and workshops, I evangelise the message that SOA is not an esoteric and complex black art but simple commonsense that can be readily applied. I talk about the technology layer of SOA, of course, but also cover the equally important data layer that is often neglected and that contributes to the "tight coupling" that plagues so many solution designs and prevents them from realising the benefits of SOA. I talk about three core technology components to use (the Service Container, the Broker and the Process Coordinator) and when to use them, the inefficiencies that result from using the wrong tool for the job, and the dangers of treating the Broker as a singleton, centralised component and deploying it in a hub-and-spokes architecture rather than a federated one. I also cover four simple Data layer principles (make implicit dependencies explicit, remove unnecessary dependencies, loosely couple internal domain data with externally-visible message data, and settle on an intermediate granularity for domain data model(s) rather than a single overarching Canonical Data Model for the entire enterprise).

To drive home these concepts, we work through the real-world example of a well-known banking process, i.e., opening an account. Using the lightweight Practical SOA methodology, participants are encouraged to try their hand at producing an outline solution design to a described requirement in about 15 minutes, using a few standard Lego-style components from the SOA technology layer. Then we demonstrate how that solution design can actually be implemented, substituting each of the conceptual Lego blocks with an actual platform product that hosts the corresponding logic, and get the entire system to work.

Here's a view of the conceptual building blocks that form the outline solution design, once the lightweight SOA methodology is applied to the business problem:

In the next step, the conceptual components are replaced by actual WSO2 SOA products that perform each of those functions, and the physical version of the above diagram looks like this:


The Customer Master Database, the Mainframe and the Card System are all mock objects. A Data Services Server exposes the Customer Master as a set of CRUD services. A Broker (ESB) component exposes the mainframe as a set of Account services, and a second Broker exposes the Card System as a set of Card services. Since the mainframe can usually only be accessed over IBM MQ in real life, we simulate that through a JMS connection over ActiveMQ. A Process Coordinator (Business Process Server) coordinates all these services into a BPEL process that performs the account-opening business function.
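
For a flavour of that simulation, here is roughly what a JMS client dropping an account-opening request onto ActiveMQ looks like. A from-memory sketch: the broker URL is ActiveMQ's default, and the queue name and payload are mine. (Ideally the connection factory would be looked up via JNDI rather than instantiated directly as here, so that swapping in IBM MQ later is configuration, not code.)

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class MainframeClient {
    public static void main(String[] args) throws Exception {
        // ActiveMQ stands in for IBM MQ; the JMS API is the same either way
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("AccountOpenRequests"));
        producer.send(session.createTextMessage("<openAccount><name>Jane</name></openAccount>"));
        connection.close();
    }
}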

It was gratifying to see that participants at all the workshops we conducted were visibly impressed by this demonstration. In the space of 45 minutes, we had moved from a problem, through a conceptual solution design, to a working implementation. And since we had their active participation through the design exercise, they had an emotional investment in the solution and were able to appreciate it all the more. [To be fair, the demo was developed earlier over 6 person-days of effort, and in the workshop, we stepped quickly through the development by copying code that was written earlier.]

Traditional SOA vendors and the big-name analysts have done the industry a disservice by complicating SOA and scaring off practitioners. In our workshops, we've shown that SOA is just commonsense and relies on just a few simple principles that practitioners can readily apply. Once they know the right tools to use in the solution, it's a simple matter to gather those required components and hook them together to create the end-to-end solution.

This is obviously workhorse technology, because it's mature enough to solve bread-and-butter business problems today. But the lightweight methodology and product suite also make it glamorous.

I hope this attracts more practitioners towards the practice of lightweight SOA and the use of simple and cost-effective products like the WSO2 product suite.

Friday, November 18, 2011

Enterprise Shared Services and the Cloud

I've worked in the area of Enterprise Shared Services (or Enterprise Utilities) for many years, so when InfoQ approached me asking if I would be interested in writing an article on cloud computing, this was one of the angles I thought of. Of course, I'm also working with WSO2 at present, and WSO2 has a distinct type of PaaS (Platform as a Service) offering called Stratos. PaaS usually either evolves up from IaaS (Infrastructure as a Service) with the addition of support for DevOps (the development-operations continuum) or evolves down from SaaS (Software as a Service) with the addition of customisation support for the software application. Stratos is unique because it has evolved "sideways" from enterprise middleware with the addition of cloud-native features. The 12 SOA products of WSO2 are all available as cloud-native middleware on the Stratos PaaS.

In any case, since InfoQ wanted vendor-neutral content, I couldn't write about Stratos (which I will write about in some context because I find it fascinating). So I fell back to my old favourite - Enterprise Shared Services.

The long and short of it is that when we factor in Enterprise Shared Services, the old monikers of SaaS and PaaS are no longer enough. We have to deal with "vertical" and "horizontal" variants of these, and the distinction is important because in an organisational context, they have unique characteristics around how they are requisitioned, evaluated for feasibility, funded and charged back.

I'll let you read about it here.

Friday, October 28, 2011

A Good Primer on NoSQL


NoSQL databases are all the rage, but the array of choices before us is bewildering. I must confess I'm still confused about the features and differences between BigTable, GAE DataStore, GemFire, SimpleDB, SQLFire, CouchDB, MongoDB, RavenDB, Redis, Cassandra, Riak, HBase, Neo4j and so many other names that I have only recently begun to hear about. I'm sure many others would be in the same situation.

I was therefore happy to see that my colleague at WSO2, Dr Srinath Perera, has analysed the NoSQL landscape in depth, zeroed in on the characteristics of NoSQL databases that really matter, and summarised this for our common understanding in an InfoQ article. It provides a simple overview of the choices that designers and developers have today - choices that go beyond the traditional relational databases we're familiar with.

I've often wondered about why NoSQL should be so popular in the first place. Srinath explains:

A few years ago, most systems were small and relational databases could handle [their requirements] without any trouble. Therefore, the storage choices for architects and programmers were simple. However, the size and scale of these systems have grown significantly over the last few years. High tech companies like Amazon and Google faced the challenge of scale before others. They soon observed that relational databases could not scale to handle those use cases.
In other words, this wave of demanding new requirements has probably not hit most of us yet, but with the jump in the number of connected devices (smartphones, tablets and the coming "Internet of Things"), applications dealing with huge volumes of data are not going to be as rare as in the past. And when we say "huge", we're not even talking Gigabytes anymore; it's Terabytes and larger. As we learnt from Godzilla, size does matter. And drastic situations call for drastic measures - hence the NoSQL revolution.

Srinath refers to Eric Brewer's CAP theorem, which states that a distributed system can only have two of the three properties - Consistency, Availability, and Partition Tolerance. The NoSQL databases aim to break through the limitations imposed on traditional relational databases by loosening the fundamental principles on which these have been based, dropping one or more constraints as appropriate, to obtain a desired behaviour.

Depending on the constraints dropped, the resulting solution falls into one of several new categories:

  • Local memory
  • Distributed cache
  • Column Family Storage
  • Document storage
  • Name-value pairs
  • Graph DB
  • Service Registry
  • Tuple Space
...in addition to the traditional filesystems, relational databases and message queues that are familiar to IT practitioners today.

Perhaps the most important contribution of Srinath's article is his distilling of the four primary characteristics that are important from a usage point of view - data structure, the level of scalability required, the nature of data retrieval and the level of consistency required. He then puts these characteristics together in various combinations to show which of the above-listed categories of data store would be the most appropriate solution to use.

He's certainly succeeded in demystifying NoSQL for me, although I suspect I'll need to go back and read the article a few times till I've fully internalised the concepts in it. This is an overview article that I'd recommend to anyone trying to make sense of NoSQL and wanting to decide on the appropriate product category that would be right for their needs.

I can see the demand for a follow-up article from Srinath drilling down into each of these data storage categories and providing recommendations about actual products (Cassandra, Redis, CouchDB, etc.). The sands shift more rapidly in the product space, but the choice of product is also the more practically urgent decision for a developer or architect to make. So while such an article might need frequent updating, its advice would be more immediately actionable than this one's, which provides the necessary initial understanding of the NoSQL landscape.

Wednesday, October 19, 2011

Strange Creature on the Mozilla Firefox Download Page

I've never seen this creature before (circled in red). Do you know who or what it is? It looks friendly enough, but it also reminds me of the Morlocks in H.G. Wells's The Time Machine (shudder).


Tuesday, October 18, 2011

I Hate HatEoAS

For something that's supposed to be THE defining characteristic of REST, it could have done with better naming.

I would have been happy with the term HatEoAS if it had stood for "Hypermedia as the Envelope of Application State" rather than "Hypermedia as the Engine of Application State".

An engine actively drives things. A process engine, for example, is well named because it drives a process.

A constraint doesn't drive anything. It constrains. It provides an envelope around the range of possibilities.

And so they really should have called this an envelope rather than an engine of application state.

There, I've said it. Because the expansion of HatEoAS has been driving me up the wall.

Fortunately, it's being referred to as "Hypermedia Constraint" now, which is both more elegant and more accurate.

Sunday, October 16, 2011

Oneiric Ocelot Not Quite the Stuff of Dreams

One of the nicest things about Linux distributions like Ubuntu is that you don't have to spend the night standing in a queue just to get the latest version of an operating system on the day of its release.

Unlike some other (closed) systems where supply is deliberately constrained to create an impression of even greater demand, Ubuntu is upfront and relaxed about new versions.


I sat down at my Linux desktop yesterday and was pleasantly surprised to see a popup informing me that the next version of Ubuntu Linux (version 11.10 a.k.a. "Oneiric Ocelot") was now available and would I like to upgrade?

With a smile of anticipation, I clicked Yes, and the upgrade began. My ADSL link showed a steady bandwidth of around 400 kBps throughout. An uneventful hour and a half later, I rebooted into Oneiric Ocelot. That was the kind of experience I've got used to with Ubuntu over so many online upgrades.

Most of the time, upgrades to Ubuntu are boringly anticlimactic. That's a good thing, by the way, because users hate surprises, and there are really very few nice surprises possible on an upgrade.

The dictionary says "oneiric" means "pertaining to dreams", but there was something almost nightmarish about this upgrade - even worse than the upgrade to 11.04 ("Natty Narwhal").

Someone at Canonical has taken it into their head that a completely re-imagined user interface would be a good thing. The same someone has also arrogantly assumed that there's no need to give users a choice when changing a fundamental aspect of the user interface that they will use all the time.

My unpleasant surprise after both upgrades was the horror they call the Unity Desktop. I tried to be fair to them. I believe I gave Unity an hour of my time both times. In the end, I gave up. I just hated it. Sorry Canonical, I recognise you're trying, but Unity really doesn't work for me. From the number of similar comments I read on the web, I'm hardly alone.

What was far worse than installing Unity by default (without asking me if I wanted it) was not providing me a quick way to get back to the default Gnome desktop of earlier versions. I was actually forced to download the Gnome desktop, then re-login to select it as my default desktop.

1. Why didn't I have the choice to say no to Unity at the time of the upgrade?
2. Why wasn't it a straightforward option to return to the "classic" desktop?

For a distribution that is supposed to be the friendliest desktop Linux, this is a very poor showing indeed.

My other major whinges are that the "Show desktop" button on the taskbar has disappeared, as has the "System" menu on the menu bar. I now have to minimise every window manually, and have no way to set several preferences. Cosmetically as well, the taskbar at the bottom and the menu bar at the top now sport a ghastly dark grey colour, and the desktop theme that I used to use has disappeared from the list of options. Since I've forgotten what it was called, I don't think I can get it back.

This upgrade experience has been anything but oneiric. I feel like I've been mauled by an ocelot.

Friday, October 14, 2011

Practical SOA for the Solution Architect

This is a story that has had a fairly long history, so stay with me till the end.

I've worked as an architect in the shared services space for almost a decade now, at some of Australia's biggest, richest and technologically diverse financial services organisations. During that time, I first heard about Service-Oriented Architecture (SOA), learnt what it was about, bought into its philosophy and attempted to implement it at work.

And while I have seen a few successful examples of SOA projects (mostly individual services, truth be told), by and large, I did not see SOA having an impact at all at these large and reputed organisations. Many hundreds of thousands of dollars were spent on acquiring SOA tools from vendors as reputed as IBM and TIBCO, and many millions more were spent on integration projects using these tools, but somehow, the results failed to live up to the SOA promise. [I guess organisations need a terrorising CEO like Amazon's Jeff Bezos to achieve the benefits of SOA. Read the fascinating story of how an online bookstore built a platform that it now rents out to others.]

For a while, I lost faith. Influenced by a few cynical colleagues, I too began to think SOA was marketing hype and nothing more. But then, as I continued to work in the shared services domain and had the opportunity to review more solution designs, I had an epiphany. Most of the designs I was seeing used SOA products (usually an ESB) or were implemented as Web Services, yet I could see they were still tightly-coupled at the level of the application design, i.e., the data. Also, it was very common for solution designers to use the wrong tool for the job, simply because it was the one they were most familiar with. They even seemed to lack a conceptual ability to tell which tool would be right for a given requirement.

With that epiphany, I revisited my understanding of SOA. I realised that solution architects needed to be educated to produce SOA-compliant application designs, otherwise all the investment made by their organisations in SOA tool suites would be a waste. Worse still, SOA itself would be unfairly blamed for the waste of resources, when it in fact remains the best hope to reduce waste, improve business agility and reduce operational risk.

I've been tossing this idea around in my head for at least a couple of years now, the idea that a lightweight method is required to get solution architects up to speed with the required concepts quickly. Unfortunately, most solution architects would bristle if it was suggested to them that they don't really understand SOA. "Can't you see we're using an ESB?" would be the defensive response. That the response is a non sequitur would be lost on them. One can use an ESB and still come up with a design that is not SOA-compliant. How can we educate solution architects when they don't know what they don't know?

Fortuitously, I had a chat about this in August this year with Sanjiva Weerawarana, the CEO of the innovative Open Source middleware company, WSO2. Our needs meshed perfectly. WSO2 has a full suite of middleware products based on SOA concepts and Web Service standards. They are compact rather than bloated, fairly straightforward to install and use, and fully Open Source under the Apache Software Licence, and WSO2 offers a very attractively priced support model as well as other professional services around them. For such an innovative range of value-for-money products, the level of awareness in the customer community has been surprisingly low. Sanjiva has been trying to raise industry awareness of his company's products for some time now. For too long WSO2's pitch had been aimed at developers (techies talking to techies), but for a real breakthrough, they needed to target decision-makers and decision-influencers. They were looking for a way to reach the solution architect with a compelling message.

And here I was, trying to educate the same solution architect about SOA using a new, lightweight approach. So Sanjiva and I came to an understanding. I would do a paid consulting assignment with WSO2 for a few months and turn out a few white papers to present their offerings to an audience that was higher up the corporate food chain than the developers. In return (in addition to helping me pay the bills), they would provide me a vehicle to popularise some of my ideas on SOA, especially the lightweight methodology I came to call "Practical SOA". I guess this approach is the good cop to Jeff Bezos's bad cop :-).

The first of those white papers ("Practical SOA for the Solution Architect") has now been completed and is available on WSO2's website. It's my audacious hope that a Solution Architect can read it in half an hour and be immediately effective on their project thanks to a simpler and more powerful mental model of SOA. A short summary of the paper is available for you to skim through, but I would encourage you to download the full paper (a free registration is required), since it has much more detailed descriptions, extensive explanations for the final conclusions and a couple of industry examples to drive home the concepts. So please have a read and, if you like it, recommend it to all your architect- and designer-type friends. (At my own request, my name doesn't appear on this document because I don't want to dilute the appeal of the method by causing it to look like a mere individual's opinion. WSO2's cachet is better than mine!)

A second white paper that is currently in the works will describe the full suite of WSO2's products and how they map to the framework established in the first. Stay tuned for that too. The first white paper will equip you with concepts. The second will equip you with know-how about a comprehensive set of reasonably-priced tools. Together, they should provide a customer organisation with excellent value for money and the long sought-for return on their investment in SOA.

Thursday, October 13, 2011

Steve Jobs and Dennis Ritchie

This hasn't been a great week to be a computer pioneer. Dennis Ritchie has now followed Steve Jobs off the stage and into that great big computer industry in the sky. Two giants in a single week.

Steve Jobs


Dennis Ritchie

I guess, unlike most people, I have been touched by Ritchie much more than by Jobs. I have never owned a Mac or an iPhone, and regretted buying an iPod as soon as I realised there was no way to bypass the iTunes straitjacket. iPod generation 4 worked with the gtkPod application on my Ubuntu desktop, but generation 5 corrected that shocking oversight, and the Apple empire, with a sigh of relief, regained its pristine purity, shutting out the great unwashed once more. That ended my dalliance with Jobs and his closed system. I don't fancy handcuffs even when they're haute mode.

I feel sorry about the death of Steve Jobs the human being. I have no sympathy for the worldview that he represented, of closed systems, slimy lawyers and patent lawsuits.

Dennis Ritchie, of course, was the polar opposite of Jobs. Those who have read Asimov's science fiction trilogy Foundation may remember that there were two Foundations, a well-known one at the periphery of the Galactic empire, and the other, a secret one, located "at the opposite end of the galaxy". Many characters in the novel tried searching for the Second Foundation along the opposite edge of the galaxy where they thought it would be, but its actual location was right at the centre of the empire! The term "opposite" was meant in a sociological sense, not a physical one.

And in true Foundation-esque fashion, Ritchie's contribution to mankind, while in a sense the opposite of Jobs's, was not a rival closed system but an open one. Along with Ken Thompson, he wrote the most open operating system of its time - Unix.

Ken Thompson

The popular web article "The Last Dinosaur and The Tarpits of Doom" has a matchless passage describing the world at the time of Unix's birth.

In 1970, primitive proprietary operating systems bestrode the landscape like mighty dinosaurs: Prime's PrimeOS, DEC's RSTS, RT-11, etc. (with VAX/VMS soon to come), IBM's innumerable offerings, CDC's Scope and of course dominating the scientific workstation market, Apollo's Domain.

Who would then have dared to predict the fall of such giants?

What force could topple such entrenched operating systems, backed by massive industry investment, hacker culture and customer loyalty?

Today, of course, we all know the answer:

In 1975 Bell Labs released Unix.

  • Unix had no support from its creator, AT&T: Buy the magtape and don't call us. (AT&T was legally barred from entering the operating system market.)
  • Unix had no support from any existing vendor: None had the slightest interest in backing, supporting or developing an alternative to its proprietary operating systems offerings.
  • Unix had zero customer base: Nobody had ever heard of it, nobody was requesting it.
  • Unix had zero marketing: Nobody had any reason to spend money building mindshare for it.

A one-sided competition?

Decidedly: Unix wiped all workstation competition off the map in less than fifteen years.

On April 12, 1989, HP bought up Apollo at a fire-sale price, putting out of its misery the last remaining proprietary operating system vendor in the workstation world, and the workstation proprietary OS era was over: Unix was left alone in the workstation market.

In fifteen years, a [magnetic] tape and an idea had effectively destroyed all opposition: Every workstation vendor was either supporting Unix or out of business.

Let me add one more point to that. At the heart of Apple's operating systems is a version of Unix (BSD Unix). Steve Jobs's business empire took a freely available operating system, layered a user-friendly graphical interface over it, and without a word of thanks, proceeded to build a proprietary edifice that was as closed as its enabling technology was open.

So thank you, Dennis Ritchie, for giving us today's Mac.

Ritchie was an inventor second to none. People today forget one of the main reasons Unix is considered "open". Before Unix, an operating system was written for a specific processor chip, in the assembly language corresponding to that chip. One of the key factors that made Unix open was the fact that it could be ported to any chip at all. More than 90% of an operating system's logic is in fact independent of the underlying hardware architecture. Less than 10% is specific to the chip. That's why only very low-level code in Unix is written in assembly language. That's the only part that needs to be re-written when porting Unix to a different processor architecture.

Once the operating system was liberated from its ties to hardware, any hardware manufacturer could port Unix to their computers. That's the openness that destroyed the proprietary dinosaurs and created the world we see today. We have Thompson's and Ritchie's genius to thank for that. In the next generation, Linux proceeded to wipe out proprietary Unix variants to take over the server room.

Today, Google's Android has Linux at its core. So now Ritchie's invention has taken over the server, a significant part of the desktop (through the Mac) and an increasingly dominant part of the smartphone and tablet markets (through Android and Apple's iOS). Not bad for a simple and open operating system!

Now we know that 90% of Unix is written in a higher-level language, and therein hangs another tale. At the time Thompson and Ritchie wrote Unix, there was no suitable high-level language to write an operating system in. Such a language had to offer the higher-level constructs of modern, structured, procedural programming languages, yet it also had to provide sufficient control over low-level constructs like memory addresses and file structures. This was a challenge that might have stumped other people and caused them to compromise in some way. Not Ritchie. Necessity for him was the mother of invention - the invention of the C programming language. Ritchie designed C and wrote its first compiler; together with Brian Kernighan, he later wrote the definitive book on the language. It is astonishing how little C has had to change from their version to the present day: the standardised version of the language, ANSI C, is largely the same as the original, with just minor changes. Now that's vision for you.


Brian Kernighan

C inspired C++, Java, JavaScript, Perl, C# and a whole bunch of other languages. Any language with curly braces and semicolons owes an intellectual debt to Ritchie and Kernighan.

The laptop that I'm composing this on runs Ubuntu Linux, another variant of Unix. Most of Linux is written in C. I'm probably not fully aware of the extent to which I owe Ritchie a debt of gratitude, as the one common factor in the creation of both Unix and C.

By the way, if you think Unix has an ugly user interface because of its command line, there are two rebuttals to that argument. The trivial one is that modern Unix variants like Linux have very sophisticated and friendly user interfaces indeed. The deeper rebuttal is that there is beauty and power in the Unix command line that MacOS has eagerly embraced as an offering to the "power user".

User Interface experts Don Gentner and Jakob Nielsen write in their classic paper The Anti-Mac Interface:
The see-and-point principle states that users interact with the computer by pointing at the objects they can see on the screen. It's as if we have thrown away a million years of evolution, lost our facility with expressive language, and been reduced to pointing at objects in the immediate environment. Mouse buttons and modifier keys give us a vocabulary equivalent to a few different grunts. We have lost all the power of language, and can no longer talk about objects that are not immediately visible (all files more than one week old), objects that don't exist yet (future messages from my boss), or unknown objects (any guides to restaurants in Boston).
Like they said. As an advocate of the power of the Unix command line, I rest my case.

Unix is such a unique phenomenon in the world of computing that noted academic Prof Martin Vermeer believes it should be treated as a basic element of literacy, alongside the three Rs.

And so a tumultuous week has gone by, and the computer industry mourns its two luminaries. Among computer pioneers, Steve Jobs was the shiny user interface, slick and popular. Ritchie was the kernel, unseen and unknown to the masses, yet the workhorse that made everything else possible, including the user interface. He may be less widely mourned, but he is mourned no less deeply.

And I like to think Ritchie rushed after Jobs to make sure the Pearly Gates stayed open to all!

Wednesday, October 12, 2011

Google's Aimless Darting

Search engine vendors should be in the clarification business, not the muddying business, which makes it all the more frustrating to read the news about Dart, Google's new web programming language intended to replace JavaScript.

My only reaction is - WHY???

Yes, JavaScript is not exactly a perfect language. It has warts, huge ones. That's been known for a long time. Is that sufficient reason to throw the whole language overboard and try and popularise a new one? Especially after CoffeeScript has already done the job? JavaScript has good parts as well as bad, and I'm not talking about just its technical aspects. Its ubiquity is a major strength. Replacing it is an exercise with very doubtful prospects. I don't think even Google can pull it off.

CoffeeScript has taken the right approach, in my opinion. CoffeeScript is JavaScript with lead shielding over the reactor core. And that was all that was required. CoffeeScript on the server side uses Node.js to run scripts anyway, so one can write server-side code in CoffeeScript and run it with "coffee" instead of "node", and the ugliness and danger of JavaScript can be neatly sidestepped. Even on the client side, a single line such as

<script type="text/JavaScript" src="coffeescript.js"></script>

will allow you to write the rest of your client-side code in CoffeeScript, because coffeescript.js is a minified library that compiles CoffeeScript in the browser on the fly. Your application code will look like this:

<script type="text/CoffeeScript">
# CoffeeScript code
</script>
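
As a taste of that lead shielding, consider this trivial fragment (my own example):

<script type="text/coffeescript">
  # 'is' compiles to JavaScript's strict ===; sloppy == isn't even expressible
  square = (x) -> x * x
  alert "nine!" if square(3) is 9
</script>

The generated JavaScript uses === throughout, declares its vars for you, and wraps everything in a closure so nothing leaks into the global namespace. The reactor stays shielded.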

What is Dart going to do for us beyond that, functionality-wise? Technically, Dart won't even replace JavaScript, because it will compile to JavaScript. Does that sound familiar? That's just what CoffeeScript does as well! Was it so hard for Google to get behind CoffeeScript? Some NIH (Not Invented Here) at work, I think.

Now the frustrating thing is that Dart will waste at least some developer mindshare and bandwidth when the world should have been just getting on with the job - using CoffeeScript plus jQuery on the client side, and CoffeeScript plus Node.js on the server side.

What a waste!

Monday, October 10, 2011

Life above the Service Tier (Change of Links)

Google Groups no longer allows file uploads. Worse, they went back on a commitment to keep the files that were already uploaded accessible. Now all the files I uploaded to the "wisdomofganesh" group have to find alternate homes (if I still have a local copy, that is), and I have to update all the links from my blog. Totally uncool of Google. I'm very annoyed.

Update 15/11/2012: I've changed again from mesfichiers.org to slideshare, since that seems to be the new place to upload shareable stuff.

Anyway, since the following docs seem to be the most frequently accessed, here are the new links:



Amazon Cloud Drive Tech Talk - Sydney, 10-10-2011

I attended the Amazon Cloud Drive Tech Talk today in Sydney. This was held on the 39th floor of the Citigroup building at 2 Park Street in the CBD. The views from the window are awesome, by the way.

That's Park Street stretching away towards King's Cross


Those red slugs are the new Metro double-buses

I reached the venue just after 1800, when registration was scheduled to begin. There were eats, bottles of beer and canned carbonated drinks in the room, but none suitable for a vegetarian teetotaller trying to lose a few calories, so I drank a glass of water instead and waited for the proceedings to begin.

The event kicked off promptly at 1830. John Scott of the Android Australia User Group made a short speech introducing Piragash Velummylum of Amazon, who had flown in from the US.


Then Piragash took the stage and spoke. I must say the Amazon Cloud Drive turned out to be more Amazon Recruitment Drive than anything else! Piragash and a colleague Brad (who spoke for half a minute) made no secret of the fact that they were in town to recruit developers for their Seattle office. The tech talk was like a campus recruitment talk - just enough data to pique the interest of developers wanting to work on cool technology.


The unofficial motto of the Amazon Cloud Drive service is "anything digital, securely stored, accessible anywhere". Piragash spoke a bit about Amazon CEO Jeff Bezos, whom he painted as a likeable nerd who listens to everybody and does the right thing. [Piragash must have a weird sense of humour. Just read this piece and the comments that follow. I wouldn't want to work for Jeff Bezos in a hundred years.]

Amazon Cloud Drive is meant to address three customer pain points:
  • Multiple music downloads
  • Moving files from one store to another
  • Data loss
He said something about Amazon MP3, which seems to be similar to Apple's iTunes (Piragash sidestepped a question from the audience about a comparison with iTunes, saying they had a policy of not talking about competitors, but mentioned that the Amazon version allowed upload of customer content). I think the story might have begun with Amazon MP3, which catered to music files. Then Amazon Cloud Drive came along, which was more general-purpose and catered to videos and other kinds of files as well. That's what I gathered.

He emphasised a few times that Amazon Cloud Drive is for customer content, not purchased content (also called studio content or catalog content). It's like Dropbox, I guess. It's said to be free, but there was no mention of storage limits. Unlike Dropbox, they don't do de-duplication of files (yet), and certainly not across users. They would need to sort out licensing before they did that sort of thing, and he made a statement to the effect that Amazon is DMCA-compliant.

They obviously leverage off their other technologies, i.e., AWS (Amazon Web Services). They currently use S3 (Simple Storage Service), not EC2 (Elastic Compute Cloud), but they may use the latter in future if required.

Amazon Cloud Drive has three layers. Storage is provided by S3. They've defined a hierarchical file system on top of that (which has long been a demand of S3 users). Finally, they have metadata on top that defines relationships and queries, making the file system much more useful.

These are some of the operations supported at each of those levels:

S3: Upload, download
Filesystem: Copy, move, delete, list, recycle, restore, create
Metadata: Select, get, put, delete (Aha, REST! But Select seems to have replaced Post. Puzzling.)

Piragash also talked about some of their technical challenges.

Scaling was the major one (and continues to be a major focus of research and innovation). The principles Amazon follows to ensure scalability are:

1. "Don't get in the way", i.e., let AWS and Akamai do the job they know best, don't interpose Amazon Cloud Drive between those systems and the user. Rather, allow Amazon Cloud Drive to be proxied by CDNs (Content Delivery Networks) like Akamai.

2. "Be flexible", i.e., be forward-thinking and don't prevent services like allowing a zipped set of files to be downloaded.

Security is another major concern. Content is meant to be private and personal. As part of their PaaS offering, Amazon provides an Identity and Access Management (IAM) system, and Amazon Cloud Drive uses IAM and AWS to control the generation of time-bound tokens. Delegated S3 access has an extra security token in the URL that expires after a certain period, so people can't pass content URLs around and have them accessible indefinitely to anyone who gets the URL. Can customers share their content with others, or is this purely private? Piragash's response: "Stay tuned".
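
That expiring-URL trick is available to any AWS developer as S3 "pre-signed" URLs, so here's a hedged sketch using the AWS SDK for Java (the bucket, key and credentials are placeholders):

import java.util.Date;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;

public class ExpiringLink {
    public static void main(String[] args) {
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
        // Anyone holding this URL can fetch the object, but only for the next hour
        Date expiry = new Date(System.currentTimeMillis() + 60 * 60 * 1000L);
        System.out.println(s3.generatePresignedUrl("my-bucket", "photos/cat.jpg", expiry));
    }
}

Once the expiry passes, the signature no longer validates - presumably the same behaviour Piragash was describing for Cloud Drive's delegated access.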

Then Piragash briefly talked about the Kindle Fire, due for release in about a month. Incidentally, Kindle for iPad is said to use HTML5 throughout and to look like a native app.

There's also a new business called Amazon Fresh, currently only rolled out to the Seattle area. Order your groceries online at midnight, and have them delivered to your door by 6 am!

There was a little talk about version synchronisation strategies and rules for conflict resolution. They support multiple schemes for different domains. Sometimes they use automated algorithms, and at other times they let the customer resolve conflicts.

Asked about an API for Cloud Drive, Piragash would only say, "Stay tuned".

He mentioned that it was important to weed out "phantom requirements" and to concentrate on solving the customer's real problems.

A member of the audience remarked that Australia really needed an edge server located here to reduce latency. Piragash merely smiled acknowledgement. Someone else asked about the number of servers used by Amazon, but Piragash could not talk about that.

That was the end of the talk, and Piragash invited people to stay and talk to him and his colleagues. He called for CVs and said they'd be in Sydney the whole week, recruiting people to be based out of Seattle.

So if you like rain throughout the year and don't mind sharing a city with Microsoft, do apply to Amazon for a job.

[What was even more interesting than the technology talk was a nugget of cultural trivia that I've written about on my other blog.]

Thursday, October 06, 2011

Aakash - The Sky's the Limit

Android tablet prices went down even further with the announcement of the Indian-made tablet "Aakash" (Sanskrit for "sky", pronounced "aakaash"). The price to students (admittedly with a government subsidy) is projected to be $35. Even without the subsidy, the retail price should still be a groundbreaking $60. The aim is to bridge the "digital divide" and allow less affluent sections of society to participate in the digital economy.

I like free markets, but given the tendency for cartels to form in most market segments (negating the freedom that true liquidity would deliver) and the distortions of patent law that entrench the power of large corporations, I also think governments need to step in from time to time to ensure equity. Untrammelled capitalism isn't going to bridge the digital divide. There has to be a deus ex machina that kicks in at critical junctures.

Stepping back a bit to take a historical view, the primary difference between the 19th and 20th centuries was not in scientific knowledge or even technological invention (because many 20th century inventions were known even in the 19th), but in the mass production and mass consumption of such technology. In analogous fashion, I can see the potential for India to become a technology "enabler" for the world's poorer half (or two-thirds), democratising technology usage across a geographical span rather than a historical one.

I remember the dire warnings a decade ago about an AIDS epidemic that was poised to devastate Africa, and the utterly shameful behaviour of the big Western pharma companies in refusing to lower the prices of AIDS medication to save millions of lives. They tried to use patent law to block any attempt by other parties to provide cheaper medication. If the intellectual wherewithal had been entirely lacking in the Third World, they might well have got away with it. But Indian pharma companies were able to produce AIDS medication and, more importantly, produce it at a much lower price point that African countries could afford, and the Third World as a whole managed to vote its way around the IP regime that the US (and other Western countries) had pressured everyone else into adopting. A humanitarian disaster was averted that would have dwarfed the Holocaust, and India had a large (though largely unsung) role in averting it.

More recently, Tata Motors established a radically lower price point for cars ($3000). The Nano is a revolutionary breakthrough, and it isn't a toy either. If a car can survive on Indian roads, it will positively thrive anywhere else ;-). The Nano is going through a few teething problems right now, but I have a sense that in a decade or two, it will be one of the world's iconic car brands, perhaps the most ubiquitous. The Nano could change the image of car ownership as an indication of wealth.

And now, with Aakash, India is once again bringing technology within the reach of ordinary people, at Third World prices. The term "reasonably priced" means something very different in Western countries. Even middle class people in Third World countries cannot afford these "reasonably priced" products. In Marxist terminology, as long as the "means of production" were concentrated in Western hands, there wasn't much the rest of the world could do about it. They either paid those prices (exorbitant by their standards) or simply did without. Now they have a choice. Incongruous as it may seem, India is swooping in, clad in shining armour, to save the day.

I do have some reservations, though. Indian ingenuity has never been in doubt. What's in doubt is India's institutional ability to follow through, to execute, to deliver. India has always been a muddle-through country rather than a reliably-achieve country. Even the Nano is a case in point: the political shenanigans that preceded its launch very nearly canned the project. If I were someone senior in the Indian government, I would not look at this as just a private sector enterprise that is none of the government's business. I would see it as a national enterprise through which India has a chance to put its stamp on the world and change it for the better, and I would provide Tata Motors with all the support needed to manufacture and sell the Nano in volume, worldwide.
[I'm not going to argue about whether an invention that serves to consume more fossil fuels is going to change the world for the better. The environmentalists don't seem to have an easy answer to the developmental issue of stagnation-versus-pollution either.]

Again on the topic of the Indian character, I remember my student days at IIT Madras when a new Siemens mainframe computer was delivered to the institute. This was in the mid-eighties. The box was too big to go up the stairs of the computer centre. Many students and professors watched as the workmen, mostly uneducated, rigged up a makeshift pulley and, with the help of ropes, winched the box up the side of the building, while others leaned over the parapet wall on the first floor and hauled it in. Mission accomplished with a minimum of technology (and no insurance!).

Watching this, one of my professors who had done his PhD in the US and worked there for a while, remarked wisely, "We Indians are great at improvisation. The danger is that we will be satisfied with our ability to improvise, and fail to develop real systems."

I see no evidence that India has improved significantly on the systems front. Flashes of Indian brilliance, like the Nano and the Aakash, will remain just flashes in the pan unless the country learns to be more disciplined about delivery. Virtually every developed country emphasises delivery discipline. Only when India masters that can the world look forward to a steady stream of dirt-cheap high technology that will change billions, not just millions, of lives.

Wednesday, October 05, 2011

Meeting of Android Australia User Group - Sydney 05/10/2011

I attended my first meeting of the Android Australia User Group - Sydney this evening.

The organiser, John Scott, started with some general announcements, which should be of interest to many.

- There is an Amazon Cloud Drive Developer Tech Talk in Sydney on Monday October 10, 2011.
- Google Developer Day will be on November 8, 2011 in Sydney.

Those interested should register for these events. They're both free, as far as I know.

John then kicked off the first talk of the evening, a brief overview of Motorola's IDE for Android application development - MOTODEV Studio. According to John, this IDE improves on the Google Android plugin for Eclipse (the de facto standard for Android development) with a few extra features, such as generating boilerplate code to make the developer's life easier, and graphical tools for managing SQLite databases. I found a couple of other discussions on MOTODEV Studio vs Eclipse here and here, but they seem somewhat dated.

The second talk was by Gianpaolo De Biase, a developer with AppCast. Gianpaolo talked about some real-life applications that his company has built (one of which is JustStartWalking, an app for an initiative of the same name from the Chiropractors' Association of Australia). He also discussed two important architectural decisions that his company made when developing these products.

1. When dealing with local databases, Gianpaolo recommends using an Object-Relational Mapping (ORM) tool rather than the raw SQLite API provided by Google. The tool he used is OrmLite, which is Open Source like Hibernate but more lightweight, and adequate for the simpler data structures needed for local storage on mobile devices. (A sketch of the idea appears just after this list.)

2. When making REST calls to remote servers, he recommends using the Spring Android module with its REST Template in preference to raw HttpClient. He did encounter some bugs with Spring Android in the areas of cookie management and SSL certificates, but believes the product is rapidly maturing.
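
Here is a minimal sketch of the OrmLite approach from point 1, assuming a hypothetical Note entity (defined in full in the next sketch). The helper and Dao classes are OrmLite's own; everything else is my invention, not AppCast's actual code.

    import java.sql.SQLException;

    import com.j256.ormlite.android.apptools.OrmLiteSqliteOpenHelper;
    import com.j256.ormlite.dao.Dao;

    public class NoteStore {
        private final OrmLiteSqliteOpenHelper helper;

        public NoteStore(OrmLiteSqliteOpenHelper helper) {
            this.helper = helper;
        }

        // No hand-written SQL or ContentValues, unlike the raw SQLite API.
        public void save(Note note) throws SQLException {
            Dao<Note, Long> dao = helper.getDao(Note.class);
            dao.createOrUpdate(note);
        }

        public Note load(long id) throws SQLException {
            Dao<Note, Long> dao = helper.getDao(Note.class);
            return dao.queryForId(id);
        }
    }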

As a side benefit of both OrmLite and Spring Android being annotation-based, Gianpaolo was able to have a single set of domain objects, each carrying two sets of annotations: one for persistence and one for interfacing with REST services. The sketch just below shows what such a doubly-annotated object might look like. [I'm a bit suspicious of the latter set of annotations, since the design seems to combine domain objects and message documents into a common entity, a form of tight coupling I've long warned against.]
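
Here is what such a doubly-annotated domain object might look like, with a Spring Android RestTemplate call thrown in for point 2. The entity, field names and endpoint URL are all hypothetical, and I've assumed Jackson as the JSON mapper, since that's what Spring Android commonly uses.

    import org.codehaus.jackson.annotate.JsonProperty;
    import org.springframework.web.client.RestTemplate;

    import com.j256.ormlite.field.DatabaseField;
    import com.j256.ormlite.table.DatabaseTable;

    @DatabaseTable(tableName = "notes")   // read by OrmLite
    public class Note {
        @DatabaseField(id = true)         // primary key in local SQLite
        @JsonProperty("id")               // the same field's name on the wire
        long id;

        @DatabaseField
        @JsonProperty("body")
        String body;

        Note() { }  // OrmLite and Jackson both need a no-arg constructor

        // Fetch a Note over REST; the URL is invented for illustration.
        static Note fetch(long id) {
            return new RestTemplate().getForObject(
                    "https://api.example.com/notes/{id}", Note.class, id);
        }
    }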

After a short break, we had our third talk of the evening by James Zaki, a freelance developer with goCatch. This was a high-level description of the goCatch app, which brings together cab drivers and prospective passengers. There was general satisfaction around the room with this cartel-breaking app, since the taxi companies and CabCharge engage in significant rent-seeking behaviour at the expense of both drivers and passengers; goCatch allows the two to bypass the middlemen. I had reservations about one aspect of the goCatch design as James described it: its statefulness, which led to problems synchronising the state held on devices with that on the server. Perhaps a suitable set of idempotent messages could solve the problem (a sketch of what I mean follows below). I didn't have time to discuss this offline with James.
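
To make the idempotence suggestion concrete, here is a sketch (emphatically not goCatch's actual design): each state change carries the full desired state plus a unique request id, so a retry after a dropped connection converges on the same result instead of being applied twice.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class BookingSync {
        // requestId -> booking state already applied
        private final Map<String, String> applied = new ConcurrentHashMap<String, String>();

        // Safe to call any number of times with the same requestId; the
        // device can simply resend after a dropped connection.
        public void apply(String requestId, String fullBookingState) {
            applied.putIfAbsent(requestId, fullBookingState);
        }
    }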

Our fourth and final talk of the evening was by Darren Younger, CTO of IPScape, in whose offices the meeting was held. IPScape is a provider of cloud-based contact centre solutions catering to both voice and web. One of their interesting applications allows mobile device users to make phone calls not through the device's native telephony capabilities, but through the IPScape app. The server then initiates regular (teleconference-style) phone contact with both the caller and the receiver. The advantage of this is that the server can record the call. Many financial service providers are required by law to record all customer conversations, and it is easier for them to use this app rather than approach the telcos for a voice recording service. A developer API may be coming in a few months.

I also met another attendee, Nanik Tolaram, an amateur Android enthusiast with his own Android-related website.

I picked up a few useful tidbits of information over the course of the evening.

Samsung sent a couple of people to the meeting. They seem keen to understand the size and strength of the Android developer community. Samsung wants to carve out a unique niche even within the Android ecosystem and they have their own app store separate from the generic Android one.

DroidDraw is a visual design tool for Android user interfaces.

Google has a cloud-to-device messaging API (C2DM).

developer.android.com is Google's portal for Android developers.

Balsamiq is a commercial tool to mock up UIs, including mobile UIs. Pencil seems to be a good Open Source equivalent, and Lumzy is a free one.

Two of the presenters talked about their negative experiences with outsourcing. Although the work went to countries as varied as Israel, India and Singapore, some problems were common, caused partly by distance and partly by the seeming cultural inability of some developers to look beyond the literal specification and grasp the higher abstraction an application is trying to implement. Errors in the documentation of some specs were implemented literally, even though the resulting features patently made no sense. Outsourcing sites like Freelancer.com seem very cost-effective, but the elapsed time to obtain a working solution negates those benefits. Examples: $200 and 4.5 months to develop one app, $800 and 9 months for another. The moral of the story seems to be to hire good local developers, so that communication problems are reduced and results are achieved quickly.

The Android Australia User Group is a good place for developers to hang out. The organisations that some of the speakers represented are looking for developers, and this may be a good way to get introduced.

Saturday, October 01, 2011

Daylight Saving Time - Ubuntu Passes the Test

Sixty seconds after 0159 on October 2nd, New South Wales, Australia mysteriously loses an hour to Daylight Saving Time, and the clock jumps straight to 0300.

I was awake to record what our computers did. My wife's Windows 7 machine showed 0300 all right. I would have been surprised if it hadn't. Microsoft needs to show some degree of competence for all that market share!

What was personally gratifying to me was that my Ubuntu Linux machines also neatly skipped the hour and showed 0300.
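
For the curious, the same skipped hour can be seen from Java's timezone database. A small sketch; the expected output (assuming current tz data) is in the comments.

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    public class DstCheck {
        public static void main(String[] args) throws Exception {
            SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm");
            utc.setTimeZone(TimeZone.getTimeZone("UTC"));

            SimpleDateFormat sydney = new SimpleDateFormat("yyyy-MM-dd HH:mm z");
            sydney.setTimeZone(TimeZone.getTimeZone("Australia/Sydney"));

            // 2011-10-01 15:59 UTC is 01:59 local time in Sydney (UTC+10)...
            Date before = utc.parse("2011-10-01 15:59");
            Date after = new Date(before.getTime() + 60 * 1000);

            System.out.println(sydney.format(before)); // 2011-10-02 01:59
            System.out.println(sydney.format(after));  // 2011-10-02 03:00, one minute later
        }
    }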

I'm going to bed now with a smile.

Friday, September 30, 2011

Facebook's Secret Sauce

I would say there have been Three Ages of the Internet.

The first was the Internet era proper, where only the US government and universities (and maybe a few other organisations) had it. There was email of course, and really cool stuff like Archie and Gopher (whatever they were) that you could access on text-based terminals.

Then there was the Web, which was nice and graphical and coincidentally came out at roughly the same time as the extremely popular Windows 95. The Web through Windows made the Internet something for regular folks, not just geeks. There was surfing and search, pleasurable as well as useful. The Web soon gobbled up email too (webmail changed the user-facing protocol from SMTP and POP to HTTP). And blogs became a way for the little guy to communicate his own views to the world instead of just consuming the output of established media. That was the first step towards mass participation.

Then there came Facebook. More than Picasa and on par with Skype, Facebook has suddenly made the Internet a must-have for everyone. I'm betting Facebook and Skype have driven Internet usage to new highs, both in terms of bandwidth consumed and in terms of market penetration. And along the way, Facebook has sucked the oxygen out of blogs. Ask me. I should know.

That deserves the title of 'third generation Internet'.

Skype's appeal is easily understood. It's the videophone of science fiction that the Telco monopolies never gave us. (Thanks, guys.) But what is it with Facebook? If it's just a place where friends hang out and exchange news and funny stories, would it really have become all that big?

I've been thinking a lot about this, because I'm a latecomer to Facebook, having resisted it for a while. Now I find I'm thinking of it as 'Wastehook', an addictive way to spend time that I later regret. What made Facebook so addictive to a person who resisted it so much to start with?

Yes, it's cool that we can keep tabs on all the people we've ever known. But there's more to it than that.

I think the one thing of absolute genius that Facebook has pioneered is the 'friends of friends' concept. 'Friends of friends' gives you visibility just beyond the horizon of your circle of contacts. You get to know of people who are somewhat interesting because of the friends you share. Sometimes, they turn out to be people you know too! Bonus points for the thrill in such cases.

'Friends of friends' are people our friends like and trust, and since we like and trust our friends, their friends are people we're already favourably disposed towards. We don't mind reading the things they say. Sometimes, they say witty and wise things. Facebook is a lot more interesting because of these people. It's not just our boring and predictable friends. It's these people, unknown but not quite irrelevant, who bring a bit of variety to the experience. It's somewhat interesting when a friend puts up photos of themselves, but a lot more interesting to read what their friends have to say about it, even if we don't know many of them. We can join in the conversation and politely add to what they're saying, and nobody minds. A nice polite party with decent people we could be friends with. That's Facebook.

'Friends of friends' prevents Facebook from becoming a stale backwater, a stagnant pond, an eddy in a stream. It's not quite the wide blue ocean, though. It's just a safe little harbour. 'Friends of friends' expands our circle just enough to make Facebook an interesting place to hang out, but not so much that it becomes overwhelming or irrelevant.

I'd say that's Facebook's secret sauce.

[What about Google+? Well, Google+ pioneered categories of friends (Circles), but Facebook quickly neutralised that advantage. An authoritative source informs me that Google+ has a rough equivalent to 'friends of friends' based on followers and following, so the effect could be similar. But my prize for pioneering the next generation of the Internet goes to Facebook in any case, not Google+.]