Wednesday, December 25, 2013

Dependency-Oriented Thinking - Documents Released!

I know I haven't been active on this blog for almost 6 months, but I haven't been idle. Ever since Rahul Singh and I conducted our first workshop on "Dead Simple SOA" in Sydney last year, I have been working on the feedback we received, namely that we needed to split "Analysis & Design" from "Governance & Management" because these two topics are of interest to two different audiences.

Splitting out just Governance & Management from the original document "Slicing the Gordian Knot of SOA Governance" was the easy part. I realised that I didn't have much material on Analysis and Design, so I had to write that from scratch. That task ultimately took the better part of a year.

Well, that task is now complete, and both documents are ready to download from SlideShare:


I have believed for a while now that the core concept behind SOA ought to be "dependencies". I could explain more here, but I'm exhausted after the marathon effort of the last few months. Besides, you'll find a very detailed argument in Volume 1.

Please do download these documents and have a read. More importantly, give me your feedback. My email address is in the "About the Author" section in both documents.

Tuesday, July 02, 2013

Dependency Principles

I've been thinking of SOA as "Dependency-Oriented Thinking" for so long that it struck me I should be able to postulate "dependency principles" that can be applied at each of the BAIT layers. As it turned out, I was fortunate enough to have some actual case studies against which to verify my proposed set, and since these cases had been chosen to be representative of different layers, they provided fairly comprehensive coverage.

The lesson from a post-mortem of each of those case studies was the same: one or more dependency principles had been ignored or violated, resulting in the problem being faced. The corollary is that an organisation that scrupulously adheres to these principles will end up achieving SOA.

So, without further ado, here are the dependency principles (updated on 25/12/2013):

Business Layer Principles

1. Traceability – Enforce core governance; ensure that everything that is done makes sense with respect to the organisation's Vision
2. Minimalism – Challenge assumptions, reject unwarranted constraints, do no more than required, reuse functional building blocks
3. Domain Insight – Understand the true nature of the business; don't be misled by superficialities or conventional wisdom; re-imagine the business from first principles
4. Business Process Coordination Style – Choose a coordination style (orchestration or choreography) based on whether tight control is possible and/or necessary. This influences the choice of technology standard (SOAP or REST), but may also be influenced by a prior choice of standard.

Application Layer Principles

5. High Cohesion (“Belonging”) – What belongs together should go together, with minimal links between distinct systems; question instances of a single logical function split across multiple systems, or multiple logical functions combined within a single system. Group operations that share a Domain Data Model and business logic into Products, those that share an Interface Data Model into Services.
6. Decoupling of Interfaces from Internals – Ensure that no external system develops dependencies on the internal aspects of a system; create an interface to isolate the systems from each other.
7. “Goldilocks” Signatures (Stability versus Precision) – Identify multiple concurrent flavours of each logical operation; establish a way to express the business intent common to all of them; make sure that the interface doesn't change for minor variations in logic but does change with the business intent.
8. Shared Semantics (Negotiation of Intent and Content) – In choreographed processes, ensure that the service provider and service consumer understand the meaning of every Operation as well as the mechanics of how each Operation should be invoked.

Information (Data) Layer Principles

9. Decoupling of Interface Data from Internal Data – Distinguish the Interface Data Model from the Domain Data Model; use this to guide the grouping of Operations into both Products and Services (a small sketch follows this list)
10. Isolation of Context from Content – Separate the core data elements of the Interface Data Model from its qualifying elements; establish a separate Type Hierarchy for Context; use Context to categorise Variants and minimise Versions
11. Low External Coupling (“Need to Know”) – Expose the minimum possible set of data to service consumers; make the data as generic as possible while still expressing the business intent (i.e., conforming to the “Goldilocks” signature)
12. Type Hierarchy – Create a data type hierarchy for both Content and Context elements; use this to expose multiple Variants as a single logical operation
13. Identity Association – Ensure that entity identifiers do not leak out through the interface; provide a mapping between external and internal identifiers
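
To make principles 9 and 13 a little more concrete, here is a minimal Python sketch (my own illustration; every class and field name below is invented for the purpose) of a Domain Data Model kept separate from an Interface Data Model, with internal identifiers mapped to opaque external ones at the boundary:

import uuid
from dataclasses import dataclass

@dataclass
class CustomerRecord:          # Domain Data Model: internal, never exposed
    db_id: int                 # internal surrogate key - must not leak out
    full_name: str
    credit_limit: float        # internal detail that consumers don't need

@dataclass
class CustomerSummary:         # Interface Data Model: what goes "over the wire"
    customer_ref: str          # opaque external identifier
    full_name: str

_external_by_internal = {}     # Identity Association: internal id -> external id

def to_interface(record: CustomerRecord) -> CustomerSummary:
    ext = _external_by_internal.setdefault(record.db_id, str(uuid.uuid4()))
    return CustomerSummary(customer_ref=ext, full_name=record.full_name)

A consumer that only ever sees a CustomerSummary has no dependency on the internal key or on fields it doesn't need, which is the "need to know" principle at work.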

Technology Layer Principles

14. Extraneous Constraints – Avoid introducing fresh dependencies between unrelated components in the process of implementing business logic
15. Logic Bundling – Avoid combining unrelated pieces of data or logic
16. State (“Stickiness”) – Avoid tight and unwarranted associations between instances of data/logic and physical components
17. Topology Hotspots – Avoid associating physical components together in specific layouts and hard-wired connections unless warranted
18. Late Binding – Delay unavoidable dependencies till the last responsible juncture

When you put these principles together with the entities at each layer, they form a simple, mutually reinforcing set of techniques, and the complete approach is what I call "Dependency-Oriented Thinking". This is an extract from the document "Dependency-Oriented Thinking: Volume 1 - Analysis and Design":


There is a companion document on Governance and Management. Feel free to download both documents:

Dependency-Oriented Thinking: Volume 1 - Analysis and Design
Dependency-Oriented Thinking: Volume 2 - Governance and Management

Thursday, June 06, 2013

Red Hat's Competition Is From...Amazon


I was talking to a colleague about Red Hat yesterday, and it struck me that a new competitor has emerged that threatens Red Hat's business model in a fundamental way.

Red Hat has two main lines of business, Linux and JBoss. Yes, they also have a cloud platform called OpenShift, but I'm guessing it's not yet at the scale of the other two.

Both Linux and JBoss are Open Source, so the revenues to Red Hat are through support subscriptions. Put very bluntly, Red Hat's business model is based on fear, rather like that of insurance companies. Customer organisations are generally afraid to run unsupported software in their data centres, so they will gladly pay the insurance premiums to ensure that someone stands behind the software they run.

But consider the subtle change in the model of reliability that comes about when we move to a cloud platform like Amazon's. Reliability is no longer achieved by keeping individual servers running and getting them back online when there's a problem. The new model of reliability is simply to spin up new instances to replace failed servers. We can create images (AMIs) of our entire software stack, and spin up new instances either in response to increased volumes or to replace instances that have died. This happens automatically, as part of the "elasticity" that cloud platforms deliver.
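
As a rough sketch of that model (my own illustration, not Amazon's documentation; the region, AMI ID and instance type are placeholders, and boto3 is simply my choice of client library), replacing a dead server amounts to launching a fresh instance from a pre-baked image:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # the region is a placeholder

# Launch a replacement instance from a pre-baked image of the entire stack.
# The AMI ID and instance type below are hypothetical placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])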

Mind you, Amazon itself suffers in adoption right now because of its own version of the fear factor.  The thought of placing one's business-critical data and processes in the cloud keeps many CIOs awake at night. But that fear is gradually easing as more and more organisations are seen to be migrating to that cloud platform with no adverse effects.

When Amazon becomes the norm for production infrastructure, the requirement for supported versions of operating systems may well diminish. The main difference between Red Hat Enterprise Linux and CentOS (a RHEL clone) is that the former comes with paid support. If the reliability problem can be addressed automatically through elasticity, then CentOS will do just as well.

That's why I think Red Hat needs to be afraid of Amazon.

Monday, May 27, 2013

Resources Are Four-Dimensional


The term ROA (Resource-Oriented Architecture) is misleading. It should ideally stand for "Resource/Representation-Oriented Architecture", even though that's quite a mouthful.

I've found in my discussions with people who work on REST-based systems that lots of them are very fuzzy about the notions of "resource" and "representation", even though these are the fundamental concepts underlying REST (and my forthcoming messaging-oriented variant, ROMA).

Let me try and explain this. Let's say I found a space capsule that had fallen out of the sky, and by deciphering the alien glyphs on its surface, I understood that it contained a 4-dimensional object from the planet Royf. Unfortunately, I couldn't take the object out, because ours is a 3-dimensional world and such an operation is impossible. However, I found that it was possible to get the capsule to "project" a 3-dimensional image of the object it contained, so I could "see" it in a way that made sense to my limited mind. I found that I could also ask for the object to be manipulated in 3-dimensional terms. I knew, of course, that the object itself was 4-dimensional and so my instructions had to be somehow translated into terms that made sense in the 4D world. But I found to my satisfaction that the 3D image that resulted from my instructions was exactly what I wanted.

I realised then that my interactions with the space capsule were RESTian. The 4-dimensional object was the resource, an unseeable thing that I had no way of even comprehending and which was therefore mercifully shielded from my vision. What I could ask for (through a GET) was a 3D "representation" of the object, and this was something I could understand. I could also manipulate the object in several ways. I could show the capsule other 3D objects and say, "Change its shape to resemble this", or "Make its colour more like this", and it would happen! Obviously, the objects I was holding up were not the same as the object inside the capsule. They were representations of what I wanted the object to look like, when I saw it in 3D terms.

That's really what REST is. The only aspect of the resource itself that we can directly deal with is its name, or URI. The actual resource is completely unseen, indeed unseeable.  Everything that is actually seen is a representation, whether it's a representation of what the object is like right now, or a representation of what we want the object to be like. Everything that goes "over the wire" in REST is a representation.
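
A minimal HTTP sketch of the same idea (the URI and the fields are invented purely for illustration): the resource itself never travels, only representations do.

GET /capsule/object-42
Accept: application/json

200 OK
{ "shape": "cube", "colour": "blue" }      (a representation of the object's current state)

PUT /capsule/object-42
{ "shape": "sphere", "colour": "red" }     (a representation of the state we want the object to take)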

Nerds can readily understand what a 3D projection is

[Note also that REST is the very opposite of "Distributed Objects", although some industry personalities continue to insist that REST is DO! (JJ, I'm talking to you.) Distributed Objects tries to create the illusion that remote objects are local, letting you grasp them as if with virtual reality gloves. REST imposes the opposite discipline: even local objects, like the one inside the space capsule, should be treated as remote, never grasped directly, only dealt with indirectly through representations. Distributed Objects works well when it works and fails horribly when it doesn't. REST always works.]

Hopefully, this should set to REST some of the confusion around resources and representations.

Monday, May 20, 2013

SOA As Dependency-Oriented Thinking - One Diagram That Explains It All


I've been talking and writing about SOA as "Dependency-Oriented Thinking" for a while now, and have even conducted a couple of workshops on the theme. The feedback after the interactive sessions has always been positive, and it surprises me that such a simple concept is not already widely known.

I'm in the process of splitting (slicing?) my white paper "Slicing the Gordian Knot of SOA Governance" into two, one dealing with SOA ("Dependency-Oriented Thinking" or DOT) and the other dealing with Governance and Management ("Dependency-Oriented Governance and Management" or DOGMa).

Partway through the DOT document, I realised that one of the diagrams in it explains the entire approach at a glance.

Here it is (updated after the release of the Dependency-Oriented Thinking documents). Click to expand.



This is of course the BAIT model of an organisation, with a specific focus on Dependencies. BAIT refers to Business, Application, Information (Data) and Technology, the four "layers" through which we can trace the journey of application logic from business intent to implementation.

[Basic definitions: SOA is the science of analysing and managing dependencies between systems, and "managing dependencies" means eliminating needless dependencies and formalising legitimate dependencies into readily-understood contracts.]

At the Business layer, the focus on dependencies forces us to rationalise processes and make them leaner. Business processes need to be traceable back to the organisation's vision (its idea of Utopia), its mission (its strategy to bring about that Utopia) and the broad functions it needs to have in place to execute that strategy (Product Management, Engineering, Marketing, Sales, etc.). Within each function, there will need to be a set of processes, each made up of process steps. This is where potential reuse of business logic is first identified.

At the end of this phase, we know the basic process steps (operations) required, and how to string them together into processes that run the business. But we can't just have these operations floating around in an organisational "soup". We need to organise them better.

At the Application layer, we try to group operations. Note that the Business Layer has already defined the run-time grouping of operations into Processes. At the application layer, we need to group them more statically. Which operations belong together and which do not? That's the dependency question that needs to be asked at this layer.

The answer, though, is to be found only in the Information layer below, because operations only "belong" together if they share a data model. As it turns out, there are two groups of data models: those on the "outside" and those on the "inside". The data models on the "inside" of any system are also known as "domain data models", and these are never visible to other systems. In contrast, a data model on the "outside" of a system, known as an "interface data model", is always exposed and shared with other systems. In SOA, data on the outside is at least an order of magnitude more important than data on the inside, because it affects the integration of systems with one another, whereas data on the inside is only seen by a single system.

Version churn is a potential problem at the Information Layer, because changing business requirements could result in changed interfaces. With a well-designed type hierarchy that only exposes generic super-types, the interface data model can remain stable even as newer implementations pop up to handle specialised sub-types. Most changes to interfaces are then compatible with older clients, and incompatible changes are minimised.
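
As a small illustration of that point (my own sketch; the class and operation names are invented), an operation signature expressed against a generic super-type lets new specialised sub-types arrive as compatible changes rather than as new versions of the interface:

from abc import ABC

class Payment(ABC):                  # generic super-type exposed at the interface
    pass

class CardPayment(Payment):          # an existing variant
    pass

class DirectDebitPayment(Payment):   # a later variant: the signature below is untouched
    pass

def accept_payment(payment: Payment) -> str:
    # The interface stays stable; only the implementation learns about new sub-types.
    return "accepted " + type(payment).__name__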

Once we have our data models, we can go back up one level to the Application layer and start to group our operations in two different ways, depending on whether they share an internal (domain) data model or an interface data model. Operations sharing a domain data model form Products. Operations sharing an interface data model form Services. (And that's where the "Service" in "Service-Oriented Architecture" comes from.) Products are "black boxes" meant to be used as standalone applications. Services are "glass boxes" with no other function than to loosely bundle together related operations.

Finally, we have to implement our Services. The description and deployment bundles that are used need not correspond one-to-one with the Services themselves. They should in general be smaller, so that the volatility (rate of change) of any single operation does not needlessly impact others merely because they share a description bundle (e.g., a WSDL file) or a deployment bundle (e.g., a JAR file). If we also pay attention to the right types of components to use to host logic, transform and adapt data, and coordinate logic, we will be implementing business intent in the most efficient and agile way possible.

This, in fact, is all there is to SOA. This is Dependency-Oriented Thinking in practice.

The white paper will explain all this in much greater detail and with examples, but this post is an early appetiser. [Update: you can fall to and help yourselves, since the main course is now ready. Grab the two documents below.]


Dependency-Oriented Thinking: Volume 1 - Analysis and Design
Dependency-Oriented Thinking: Volume 2 - Governance and Management

Thursday, May 16, 2013

50 Data Principles For Loosely-Coupled Identity Management


It's been a while since our eBook on Loosely-Coupled IAM (Identity and Access Management) came out. In it, my co-author Umesh Rajbhandari and I had described a radically simpler and more elegant architecture for a corporate identity management system, an architecture we called LIMA (Lightweight/Low-cost/Loosely-coupled Identity Management Architecture).

Looking at developments since then, it looks like that book isn't going to be my last word on the subject.

IAM has quickly moved from within the confines of a corporate firewall to encompass players over the web. New technology standards have emerged that are in general more lightweight and scalable than anything the corporation has seen before. The "cloud" has infected IAM like everything else, and it appears that IAM in the age of the cloud is a completely different beast.

And yet, some things have remained the same.

I saw this for myself when reviewing the SCIM specification. This is a provisioning API that is meant to work across generic domains, not just "on the cloud". It's part of the OAuth 2.0 family of specifications, and OAuth 2.0 is an excellent, layerable protocol that can be applied as a cross-cutting concern to protect other APIs. SCIM too is OAuth 2.0-protected, but that's probably where the elegance ends.

The biggest problem with SCIM is its clumsy data model, which then impacts the intuitiveness and friendliness of its API. I critiqued SCIM on InfoQ, and in response to a "put up or shut up" challenge from some of the members of the SCIM working group, I began working on an Internet Draft to propose a new distributed computing protocol, no less. That's a separate piece of work that should see the light of day in a couple of months.

In the meantime, I began to work on IAM at another organisation, a telco this time. My experiences with IAM at a bank, an insurance company and then a telco had by then given me a much better understanding of Identity as a concept, and I began to see that many pervasive ideas about Identity were either limiting or just plain wrong. Funnily enough, most of these poor ideas had more to do with the Identity data model than with technology. I also observed that practitioners tended to focus more on the "sexy" technology bits of IAM and less on the "boring" data bits, and that explained to me, very convincingly, why systems were so clumsy.

I then consciously began to set down some data-specific tips and recommendations that I saw being ignored or violated. The irony is that it doesn't cost much to follow these tips. All it costs is a change of mindset, but perhaps that's too high a price to pay for many! In dollar terms, the business benefits of IAM can be had for a song. Expensive technology is simply not required.

So that's the lesson I learnt once more, and the lesson I want to share. No matter what changes we think are occurring in technology, the fundamental concepts of Identity have not changed. The data model underlying Identity has not changed. Collectively, we have a very poor understanding of this data model and how we need to design our systems to work with this data model.

So here are 50 data principles for you, the architect of your organisation's Identity Management solution. I hope these will be useful.

The presentation on Slideshare:
http://slidesha.re/14uo3YY

The document hosted on mesfichiers.org:
http://atarj9.mesfichiers.org/en/

Friday, May 03, 2013

"What Are The Drawbacks Of REST?"


It seems to be the season for me to post comments in response to provocative topics on LinkedIn.

A few days ago, Pradeep Bhat posed the question, "What Are The Drawbacks Of REST?" on the REST Architects LinkedIn Group. As before, I had an opinion on this too, which I reproduce below:

"I wouldn't say REST has "drawbacks" as such. It does what it says on the tin, and does that very well. But remember that the only implementation of the REST architecture uses the HTTP protocol. We can surely think of a future RESTian implementation that uses another transport protocol, and that is where some improvements could be made. 

1. HTTP is a synchronous, request/response protocol. This means the protocol does not inherently support server-initiated notifications (peer-to-peer), which are often required. That's why callbacks in RESTian applications require the use of application-level design patterns like Webhooks. Now that we have a bidirectional transport protocol in the form of WebSockets, perhaps the industry should be looking at layering a new application protocol on top of it that follows RESTian principles. 

2. The much-reviled WS-* suite of protocols has at least one very elegant feature. These are all end-to-end protocols layered on top of the core SOAP+WS-Addressing "messaging" capability. They resemble the TCP/IP stack, in which the basic protocol is IP, which only knows how to route packets. SOAP messages with WS-Addressing headers are analogous to IP packets. In the TCP/IP world, end-to-end reliability is implemented through TCP over IP, and the SOAP world's analogue is WS-ReliableMessaging headers in SOAP messages. In the TCP/IP stack, IPSec is the end-to-end security protocol (not TLS, which is point-to-point). The SOAP equivalent is WS-SecureConversation. Such Qualities of Service (QoS - reliability, security, transactions) can be specified by policy declaration (the WS-Policy framework), and SOAP endpoints can apply them like an "aspect" to regular SOAP traffic. 

The REST world has nothing like this. Yes, an argument could be made that idempotence at the application level is a better form of reliability than automated timeouts and retries at the transport level. Similarly, we could argue that an application-level Try-Confirm/Cancel pattern is better than distributed transactions. But what remains is security. WS-SecureConversation with WS-Security is routable, unlike SSL/TLS, which is the only security mechanism in REST. With WS-Sec*, messages can also be partially encrypted, leaving some content in the clear to aid in content-based routing or switching. This is something REST does not have an elegant equivalent for. SSL is point-to-point, cannot be inspected by proxies and violates RESTian principles. It is just tolerated. 

The reason behind REST's inability to support such QoS in general is that all of these require *conversation state* to be maintained. Statefulness has known drawbacks (i.e., impacts to scalability and failure recovery), but with the advent of NoSQL datastores like Redis that claim constant-time, i.e., O(1), performance, it may be possible to delegate conversation state from memory to this datastore and thereby support shared sessions for multiple nodes for the purposes of QoS alone. I don't mean to use this for application-level session objects like shopping carts. If nodes can routinely use shared NoSQL datastores to maintain sessions, then the argument against statefulness weakens, and Qualities of Service can be more readily supported *as part of the protocol*. In RESTian terms, we can have a "uniform interface" for QoS.

3. While REST postulates a "limited" set of verbs, HTTP's verbs are too few! 

POST (add to a resource collection), PUT (replace in toto), PATCH (partially update), DELETE (remove from accessibility) and GET (retrieve a representation). These are actually not sufficient, and they are frequently overloaded, resulting in ambiguity. 

I would postulate a more finely-defined set of verbs if defining a RESTian application protocol over a new peer-to-peer transport: 

INCLUDE (add to a resource collection and return a server-determined URI), PLACE (add to a resource collection with client-specified URI), REPLACE (in toto), FORCE (PLACE or REPLACE), AMEND (partial update, a container verb specifying one or more other verbs to specify operations on a resource subset), MERGE (populate parts of the resource with the supplied representation), RETIRE (a better word than DELETE) and SOLICIT (a GET replacement that is also a container verb, to tell the responding peer what to do to the initiator's own resource(s), because this is a peer-to-peer world now). Think of GET as a SOLICIT-POST to understand the peer-to-peer model. We also need a verb of last resort, a catch-all verb, APPLY, which caters to conditions not covered by any of the others. 

4. HTTP combines application-level and transport-level status codes (e.g., 304 Not Modified and 400 Bad Request vs 407 Proxy Authentication Required and 502 Bad Gateway). The next implementation of REST on another transport should design for a cleaner separation between the application protocol and the transport protocol. HTTP does double-duty and the results are often a trifle inelegant. 

So that's what I think could be done as an improvement to REST-over-HTTP. Apply the principles (which are very good) to a more capable peer-to-peer transport protocol, and design the combination more elegantly."
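
Coming back to the Redis idea in point 2 above, here's a minimal sketch of what I have in mind (my own illustration using the redis-py client; the key prefix and the 30-minute expiry are arbitrary). Conversation state lives in a shared store rather than in any one node's memory, so QoS concerns like reliable delivery or a secure conversation no longer pin a client to a single server:

import json
from typing import Optional

import redis

store = redis.Redis(host="localhost", port=6379)    # shared by every service node

def save_conversation(conversation_id: str, state: dict) -> None:
    # Any node can persist the QoS-related state of an in-flight conversation.
    store.setex("conv:" + conversation_id, 1800, json.dumps(state))

def load_conversation(conversation_id: str) -> Optional[dict]:
    # Any other node can pick the conversation up where it was left off.
    raw = store.get("conv:" + conversation_id)
    return json.loads(raw) if raw else None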

I'm in the process of writing an Internet Draft for a new application protocol that can be bound to any transport (Pub/Sub, Asynchronous Point-to-Point or Synchronous Request/Response). The protocol is part of a new distributed computing architecture that I call ROMA (Resource/Representation-Oriented Messaging Architecture) and covers not just the data model and message API but also higher levels (QoS, description and process coordination). It's been 5 years in the making and has reached 170 pages so far. It may take another couple of months to get to a reviewable state. Stay tuned.

Tuesday, April 30, 2013

"Can Anyone Explain SOA In Simple Terms?"


A few days ago, David Diamond posed a deceptively simple question on one of the LinkedIn Group sites (SOA SIG) - "Can Anyone Explain SOA In Simple Terms?"

The barrage of widely varying responses that followed was, in a way, an eloquent answer to that question!

I've had my own take on SOA for quite a while now, so this gave me the opportunity to validate my model against what other practitioners had to say. And I must say this: I'm more convinced than ever that the industry is horribly confused about SOA. There are those whose understanding of SOA is at a purely technology level (including some who profess to understand that SOA is not just about technology). And there are others who may well understand SOA, but whose explanations are couched in so much jargon that they're really hard to follow.

In hindsight, David Diamond could not have asked a more insightful question.

Well, this is my blog, so just as history is written by the victors, the one correct answer to David's question is to be found here :-).

Here's what I wrote (put together from more than one post that I made to that topic):

My initial one-paragraph answer: "SOA is the science of analysing and managing dependencies between systems. That means the elimination of unnecessary dependencies and the formalisation of legitimate dependencies into readily understood contracts. The more dependencies there are between systems, the less agile an organisation is (because of the number of related systems that have to change when one of them changes), the higher its operating costs (because of all the unnecessary extra work on coupled systems) and the higher its operational risk (because of the number of things that could break when something changes). Dependencies exist at all of the BAIT layers - Business, Applications, Information (Data) and Technology. That's why a technology-only view of SOA does not solve an organisation's real problems. SOA should have been called DOT instead ("Dependency-Oriented Thinking")."


After a few days of reading other responses and feeling dissatisfied, I posted again:


"Many of the comments here emphasise reuse as part of the *definition* of SOA. Is reuse a core feature of SOA or just a side-benefit? If the latter, what are SOA's defining features (which is what the original question was about)? Also, while we use the word "services" a lot, how do we define the term?

Let me try and address these two points.

SOA is an organising principle for the enterprise, and the fundamental skill an architect requires in order to apply it is the ability to see dependencies between systems: to eliminate the ones that shouldn't exist, and to formalise the legitimate ones into "contracts" maintained in a single place and covering all the dependencies between two systems. This approach greatly reduces the cost of change, improves the speed with which changes are made (agility) and reduces the risk of making changes, all because the number of dependencies (aspects of an interaction affected by a change) is now smaller, one can tell at a glance what they are, and there are no surprises because there are no dependencies outside what is documented in the contract. This is not limited to technology interactions. One can apply this thinking to the design of business processes just as naturally.

When we look through a dependency lens at an organisation, our tasks are quite distinct at its four layers (Business, Applications, Information (Data) and Technology).

At the Business layer, it is more of a BPR (Business Process Re-engineering) exercise, because we end up rationalising processes when we weed out unnecessary dependencies. When we finish, we have a traceability matrix linking the following:

Vision (Our idea of Utopia)
Mission (Our strategy to bring about that Utopia)
Functions (The main groups of activities we need to be doing as part of that strategy)
Processes (The detailed and related sequences of steps comprising each function)
Process Steps (The basic building blocks of these processes)

[At the business layer, we will come across some *potential* reuse when we look at the definition of some of the Process Steps (operations) we arrive at. Only further analysis at the Information layer will tell us if reuse is actually possible or these are independent operations.]

The Application layer is all about grouping "related" operations, and the dependency principle used is that of "cohesion and coupling". In other words, we need to determine which process steps belong together and which do not. This cannot be done independently but must involve the Information (data) layer as well. [That's why architectural frameworks like TOGAF combine the two into a single step (Phase C)].

The Information layer looks at data dependencies (shared models) and classifies data into two groups - "data on the outside" and "data on the inside". "Data on the inside" is the set of internal domain models for operations that other operations do not need to see. "Data on the outside" is what goes "over the wire" between operations.

When we apply the dependency principle of cohesion and coupling to the combined Application and Information layers, we have two ways of grouping operations together. Operations that share a domain model ("data on the inside") coalesce into Applications that are called Products. Operations that share an interface data model ("data on the outside") coalesce into Applications that are called Services. So this is where Services fit into SOA - as a bundle of related operations sharing an interface data model.

The Technology layer deals with "implementation". As others have pointed out as well, implementation need not have anything to do with SOAP, ESBs, etc. We need distinct components to host implementations of exposed operations (Service Containers), to mediate interactions (Brokers) and to coordinate operations (Process Coordinators). Other components merely support these (Rules engines, registries, monitoring tools).

This is SOA :-)."


I would have posted more, but I exceeded the word count for the site, so I had to post my thoughts about the Technology layer separately:


"I must add that when viewed through a dependency lens, the Technology layer often introduces artificial dependencies of its own. There is a reason why many people prefer REST to SOAP. It's because WSDL is a dependency hotspot. Think about it. If a WSDL file describes just one service, and that service comprises 5 operations, each with an input document and an output document, then the version of the WSDL file has a dependency on the version of 10 document schemas. If any one of them changes, the WSDL will have to change! That's why we have so much version churn in organisations.

In addition, because we don't build explicit interface data models with type hierarchies, our operation interfaces are too rigid and low-level, requiring a fresh *version* whenever a new *variant* is to be supported.

A second major dependency introduced by the technology layer is through the ESB, or more correctly, through incorrect use of the ESB. The dependency principle at the Technology layer is to use the right tool for the job and to use it the right way. If we use the ESB to host business logic, we are making it perform the role of a Service Container. If we use the ESB to orchestrate a process, we are making it perform the role of a Process Coordinator. Both of these mistakes create dependencies that reduce performance and increase the cost of change. 

The other ESB-related mistake is its deployment in a hub-and-spoke architecture. The ESB then becomes both a performance bottleneck and a single point of failure - both symptoms of a needless topological dependency created at the Technology layer. IT organisations often ask for funds to buy an ESB because they want to "do SOA", then implement it in a topology that creates dependencies and thereby violates SOA principles. What an irony! 

So one of the reasons why SOA has acquired a bad name is that its practice often introduces dependencies at the Technology layer even as it tries to reduce dependencies at the Business, Application and Information layers. Worse, because organisations are often too technology-focused, they don't do enough of the dependency-reduction at these higher layers and their net effect is to introduce new, technology-related dependencies to an existing set of business processes and data structures. The net effect of SOA on such organisations is then entirely negative.

I'm in the process of writing a white paper on "Dependency-Oriented Thinking" based on my experiences with SOA in large organisations. Stay tuned :-)."


Well, this represents my current thinking about SOA in a nutshell (a fairly large nutshell, I'll grant). The coming white paper on Dependency-Oriented Thinking will elaborate on these points. The workshops on "Dead Simple SOA" that I've been conducting through my company (Eigner Pty Ltd) along with my colleague Rahul Singh, address these very topics.

Monday, April 29, 2013

JEM (JSON with Embedded Metadata) - A Simpler Alternative to JSON Schema?


I've long been a supporter of the JSON Schema initiative, and I was also happy to see developments like Orderly, which offered a simpler and less verbose format than JSON Schema. But Orderly introduces its own format, which necessitates conversion to JSON Schema before it can be used. Both approaches are unsatisfactory in their own way. One is too verbose and the other needs translation.

All of this made me wonder if we aren't approaching the problem the wrong way. JSON Schema is a conscious effort to replicate in the JSON world the descriptive capability that XML Schema brings to XML. But is this the best way to go about it?

I would like descriptive metadata about documents to be capable of being embedded inside the document itself, rather like annotations in Java programs. Indeed, this metadata should be capable of forming a "scaffold" around the data that then allows the data itself to be stripped out, leaving behind a template or schema for other data instances.

So I'm proposing something that I think is a whole lot simpler. It does require one fundamental naming convention to be followed, and that is this:

Any attribute name that begins with an underscore is metadata. Everything else is data.

Let's take this simple JSON document:


We can embed metadata about this document in two different ways. Click diagram to expand.


I'm calling the first style "Metadata Markup", where the data elements of the JSON document retain their primacy, and the metadata around them is secondary and serves to add more detail to these data elements. One can readily see that "_value" is now just one of the possible attributes of an element, and many more such attributes can therefore be added at will.

I call the second style "Metadata Description", where the primary elements are metadata, and any data elements (whether keys or values) are modelled as the values of metadata elements. Note that describing a document as an array (a nested array in the general case) rather than as a dictionary (or nested dictionary) of elements allows the default order of the elements to be retained. This is quite useful when this format is used to publish data for human consumption.
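
Purely as an illustration of how I read those two styles (the only attribute name below that comes from the proposal itself is "_value"; "_type", "_key" and the sample data are my own inventions), a document with a single "name" element might be rendered as follows.

Metadata Markup (data elements remain primary):

{
  "name": {
    "_value": "Jane Citizen",
    "_type": "string"
  }
}

Metadata Description (metadata elements are primary; keys and values appear only as values):

[
  { "_key": "name", "_type": "string", "_value": "Jane Citizen" }
]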

The first style, Metadata Markup, is more suitable for document instances, because a lot of detailed meta-information can accompany a document and can be hidden or stripped out at will. It is easy for a recipient to distinguish data from metadata because of the leading underscore naming convention. There is no need to pre-negotiate a dictionary of metadata elements. (Click to expand.)



The second style, Metadata Description, is more suitable for schemas, because in this format, all elements pertaining to instance data (both keys and values) are just values. If only the values representing keys are retained, we get a "scaffold" structure describing the document, and more metadata elements representing constraints can be added, turning it into a schema definition. (Click to expand.)


Obviously, this system will not work for everyone. I'm sure there are JSON documents out there that have underscores for regular data (HAL?), so adoption of this convention won't be feasible in such domains. But if a significant subset of the JSON-using crowd finds value in this approach, they're more than welcome to adopt it.

Tuesday, March 26, 2013

The Happy Confluence of IAM, SOA and Cloud


Someone pointed me to this Gartner blog post on IAM, and I was once again reminded why Gartner doesn't get it (or, when they do, they get it long after everyone else).

The Gartner analyst in his presentation makes a big deal of the fact that LDAP, being a hierarchical data structure, is incapable of modelling the various complex relationships between entities in an IAM system. This is one of the reasons he believes we need to "kill IAM in order to save it". But is this limitation in traditional IAM systems really new? I'm no fan of LDAP, and it has been known in IAM circles for at least 5 years that LDAP directories are suited for nothing other than the storage of authentication credentials (login names and passwords)! Everything else should go into a relational database, which is much better at modelling complex relationships. A meaning-free identifier links an LDAP entry with its corresponding record in the relational database. I describe this hybrid design in a fair amount of detail in my book "Identity Management on a Shoestring". And this wasn't even my original idea. It was one of the pieces of advice my team got from a consultant (Stan Levine) that my employer hired to review our IAM plans.
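
For what it's worth, here's a minimal sketch of that hybrid design (my own illustration; the table, the column names and the ldap_bind stub are invented): the directory holds nothing but credentials keyed by a meaning-free GUID, and everything else, including the rich relationships, lives in a relational table keyed by the same GUID.

import sqlite3

db = sqlite3.connect("iam.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS user_profile ("
    " guid TEXT PRIMARY KEY,"        # the meaning-free identifier shared with the LDAP entry
    " display_name TEXT,"
    " department TEXT,"
    " manager_guid TEXT)"            # complex relationships live here, not in LDAP
)

def ldap_bind(guid: str, password: str) -> bool:
    # Stub: a real implementation would attempt an LDAP bind as uid=<guid>
    # with the supplied password and return True on success.
    return False

def authenticate_and_load(guid: str, password: str):
    if not ldap_bind(guid, password):              # LDAP does authentication only
        return None
    return db.execute(                             # the RDBMS does everything else
        "SELECT display_name, department, manager_guid FROM user_profile WHERE guid = ?",
        (guid,),
    ).fetchone()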

Seriously, where has Gartner been?

Another big point made by the Gartner analyst was that IAM should not be "apart from" the rest of an organisation's systems but become "a part of" them. Joining the dots with my cynical knowledge of where Gartner tends to go with this kind of argument, I can see them making the case for big vendors that do everything including IAM. The cash registers at SAP, Oracle and Salesforce.com must have started ringing already, since Gartner has given those vendors' product strategies their all-important blessing.

Um, no. If there's anything we've learnt in the last few years (especially from SOA thinking), it's the great benefits that are gained from loose coupling. IAM should neither be "apart from" (decoupled) nor "a part of" (tightly coupled) with respect to an organisation's other, business-related systems. IAM needs to be loosely-coupled with respect to them.

What does this mean in practical terms? It means IAM needs to be a cross-cutting concern that can be transparently layered onto business systems to enforce access policies, but without disrupting those systems with IAM-related logic.

That's really what the latest IAM technology, OAuth 2, brings to the table. But the Gartner analyst, while dwelling at length on how great OAuth is, completely fails to identify its true contribution.

Eve Maler of Forrester says it much better in her presentations. She defines OAuth as a way to delegate authorisation, and positions it as a way to protect APIs. Can you see the confluence of IAM, SOA and the Cloud in that simple characterisation?

Let's take those two aspects one by one and have a closer look.

OAuth as a way to delegate authorisation:
The traditional model of authorisation works like this. There is an entity that owns as well as physically controls access to a resource. When a client requests access to that resource, the owning entity does three things:

1. Authenticates the client (i.e., establishes that they are who they claim to be)
2. Checks the authorisation of the authenticated client to access the resource (i.e., acts as a Policy Decision Point)
3. Allows (or denies) the client access to the resource (i.e., acts as a Policy Enforcement Point)

What OAuth does is recognise that the Policy Decision Point and the Policy Enforcement Point may be two very different organisational entities, not just two systems within the same organisational entity. The PDP role is typically performed by the owner of the resource. The PEP role is performed by the custodian of the resource. The owner need not be the custodian.

Under the OAuth model, there is a three-way handshake between the owner of a resource, the custodian of the resource and a client. Three separate trust relationships are established between the three pairs of entities in this model, and authentication is obviously required in setting these up (owner-to-client, owner-to-custodian and client-to-custodian-through-owner). Once the owner's permission to access the resource for a certain window of time is recorded in the form of an access token that the client stores, the owner's presence is no longer required when such access takes place. The custodian is able to verify the token and allow access in accordance with the owner's wishes even in the owner's absence. This is delegated authorisation.
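
Here is a deliberately simplified sketch of the custodian's side of that arrangement (my own illustration of the idea of delegated authorisation, not of the OAuth 2.0 wire protocol; all names and values are invented). The custodian merely checks the token that was recorded when the owner granted access; the owner does not need to be present.

import time

# Grants recorded during the three-way handshake, when the owner gave permission.
# Each access token carries the resource it covers, the allowed action and an expiry.
issued_tokens = {
    "token-abc123": {"resource": "/photos/42", "scope": "read", "expires": time.time() + 3600},
}

def allow_access(token: str, resource: str, action: str) -> bool:
    # Policy Enforcement Point: honour the owner's delegated grant in the owner's absence.
    grant = issued_tokens.get(token)
    if grant is None:
        return False                         # unknown token: deny
    if time.time() > grant["expires"]:
        return False                         # the owner's window of access has lapsed
    return grant["resource"] == resource and grant["scope"] == action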

And since the resource doesn't even know it's being protected, this is loose coupling. IAM is neither "apart from" nor "a part of" the business system with OAuth.

OAuth as a way to protect APIs:
The delegated authorisation model can be used to protect resources that are not just "things" but also "actions". In other words, OAuth can be used to control who can invoke what logic, and do so in a delegated manner: owners of business logic can grant clients access to invoke it, and custodians that host such business logic can validate the access tokens presented by clients and allow or deny access in accordance with the wishes of the owners.

Now why does this development in the IAM world bring it into confluence with the SOA and cloud worlds?

The SOA bit is easy to understand. We did mention that an API is a form of resource. If all business logic can be reduced to service operations exposed through endpoints, then these form an API. Endpoints can be protected by OAuth as we saw, so OAuth can be an effective security mechanism for SOA.

The cloud bit isn't hard to understand either. If business logic can be abstracted behind APIs, then does it matter where that logic sits? Bingo - cloud! The cloud also forces separation of owner and custodian roles, with the cloud platform performing the role of custodian, and the cloud customer performing the role of resource owner or API owner. With OAuth as the authorisation mechanism, the cloud model becomes viable from an access control perspective as well.

So that's really what OAuth signifies. It's not just a development in IAM. It has profound implications for SOA security and the viability of the cloud model.

Watch for Gartner to break this news to their clients in 3 to 5 years' time...

(Meanwhile, someone at Gartner or elsewhere ought to tell that analyst that "staid" is not spelled "stayed". This presentation has irritated me on so many levels - spiritually, ecumenically, grammatically, as Captain Jack Sparrow said.)

Tuesday, March 12, 2013

How to Implement An Atomic "Get And Set" Operation In REST


This question came up yesterday at work, and it's probably a common requirement.

You need to retrieve the value of a record (if it exists), or else create it with a default value. An example would be when you're mapping identifiers between an external domain and your own. If the external domain is passing in a reference to an existing entity in your domain, you need to look up the local identifier for that entity. If the entity doesn't yet exist in your domain, you need to create (i.e., auto-provision) it and insert a record in the mapping table associating the two identifiers. The two operations have to be atomic, because you can't allow two processes to both check for the existence of the mapping record, find that it doesn't exist, and then each create a new entity instance. Only one of the processes should win the race.

(Let's ignore for a moment the possibility that you can rely on a uniqueness constraint in a relational database to prevent this situation from occurring. We're talking about a general pattern here.)

Normally, you would be tempted to create an atomic operation called "Get or Create". But if this is to be a RESTian service operation, there is no verb that combines the effects of GET and POST, nor would it be advisable to invent one, because it would in effect be a GET with side-effects - never a good idea.

One solution is as follows (and there could be others):

Step 1:

GET /records/{external-id}

If a record exists, you receive a "200 OK" status and the mapping record containing the internal ID.

Body:
{
  "external-id" :  ...
  "internal-id" :  ...
}

If the record does not exist, you get a "404 Not found" and a one-time URI in the "Location" header.

Location: /newrecords/84c5d65a-2198-42eb-8537-b16f58733791

(The server will also use the header "Cache-control: no-cache" to ensure that intermediate proxies do not cache this time-sensitive response but defer to the origin server on every request.)

Step 2 (Required only if you receive a "404 Not found"):

2a) Generate an internal ID.

2b) Create a new entity with this internal ID and also create a mapping record that associates this internal ID with the external ID passed in. This can be done with a single POST to the one-time URI.

POST /newrecords/84c5d65a-2198-42eb-8537-b16f58733791

Body:
{
  "external-id" :  ...
  "internal-id" :  ... (what you just generated)
  "other-entity-attributes" : ...
}

The implementation of the POST will create a new local entity instance as well as insert a new record in the mapping table - in one atomic operation (which is easy enough to ensure on the server side).

If you win the race, you receive a "201 Created" and the mapping record as a confirmation.

Body:
{
  "external-id" :  ...
  "internal-id" :  ... (what you generated)
}

If you lose the race, you receive a "409 Conflict" and the mapping record that was created by the previous (successful) process.

Body:
{
  "external-id" :  ...
  "internal-id" :  ... (what the winning process generated)
}

Either way, the local system now has an entity instance with a local (internal) identifier, and a mapping from the external domain's identifier to it. Subsequent GETs will return this mapping along with a "200 OK". The operation is guaranteed to be consistent, without having to rely on an atomic "Get or Create" verb.

One could quibble that a GET that fails to retrieve a representation of a resource does have a side-effect - the creation of a one-time URI with the value "84c5d65a-2198-42eb-8537-b16f58733791" being inserted somewhere. This is strictly true, but the operation is idempotent, which mitigates its impact. The next process to do an unsuccessful GET on the same value must receive the same one-time URI.

It's a bit of work on the server side, but it results in an elegant RESTian solution.
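
For the curious, here's a minimal in-memory sketch of that server-side work (my own illustration using Flask; the paths follow the post, but the storage, the process-wide lock and the ID handling are simplifications, and a real implementation would lean on its datastore's transactions instead):

import threading
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)
lock = threading.Lock()
mappings = {}        # external-id -> internal-id
pending = {}         # one-time token -> external-id

@app.route("/records/<external_id>", methods=["GET"])
def get_mapping(external_id):
    with lock:
        internal_id = mappings.get(external_id)
        if internal_id:
            return jsonify({"external-id": external_id, "internal-id": internal_id}), 200
        # Re-issue the same one-time URI if one was already handed out for this ID.
        token = next((t for t, e in pending.items() if e == external_id), None)
        if token is None:
            token = str(uuid.uuid4())
            pending[token] = external_id
    headers = {"Location": "/newrecords/" + token, "Cache-Control": "no-cache"}
    return jsonify({"error": "not found"}), 404, headers

@app.route("/newrecords/<token>", methods=["POST"])
def create_mapping(token):
    body = request.get_json()
    with lock:
        external_id = pending.pop(token, None)
        if external_id is None or external_id in mappings:
            # Lost the race (or used a stale token): hand back the winner's mapping.
            ext = external_id or body["external-id"]
            return jsonify({"external-id": ext, "internal-id": mappings.get(ext)}), 409
        # Creating the local entity and the mapping record happens atomically here.
        mappings[external_id] = body["internal-id"]
    return jsonify({"external-id": external_id, "internal-id": body["internal-id"]}), 201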