mark nottingham

HTTP API Complexity

Monday, 25 June 2012

HTTP APIs

@dret: if your scenario is homogeneous and models are harmonized across participants, #REST is of limited utility for you.

Erik’s tweet just now reminded me of something I’d been wanting to write about for a while: complexity in HTTP APIs, and its effect on how you use the protocol.

One Client, One Server

The simplest HTTP API is one where you have a known client and server talking to each other; they may be managed by separate people or the same team, but regardless, the people developing them can communicate about the API. This is often called “integration” by the people doing it, and a ready out-of-band communication channel (whether it be through management, a bug queue, or the pub) means that the server and client can coordinate their actions. HTTP is just a way to get bits on the wire.

As such, the big value for this audience is in reusing existing tools and knowledge; their people know how to deal with HTTP, and their language of choice has a library for it, so they use it. The more sophisticated ones might even use caching.
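
For concreteness, here’s a minimal sketch of that baseline, assuming Python’s requests library and a made-up URL; the conditional re-fetch at the end is the “more sophisticated” caching step:

    import requests

    # Plain "get bits on the wire" usage: fetch a resource and parse it.
    resp = requests.get("https://api.example.com/widgets/42")
    resp.raise_for_status()
    widget = resp.json()

    # The slightly more sophisticated step: revalidate with the ETag the
    # server handed back, so an unchanged resource costs a 304, not a body.
    etag = resp.headers.get("ETag")
    if etag:
        check = requests.get(
            "https://api.example.com/widgets/42",
            headers={"If-None-Match": etag},
        )
        if check.status_code == 304:
            pass  # cached copy is still fresh; keep using `widget`
        else:
            widget = check.json()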

However, these folks aren’t going to see much value from the more subtle bits of REST, such as HATEOAS.

Many Clients, One Server

A very different kind of HTTP API is one where there are many clients. This is common on the Web; consider Twitter’s API, or Google’s, or Amazon’s. The relationship now is of a single service being consumed by a larger – and likely unknown – group of clients. As such, the service dictates reality, but is also constrained by its past actions, because any incompatible change will – potentially – break some clients. Again, REST provides some latent value here, because there does need to be some de-coupling of the client and server, but there’s still an out-of-band channel for figuring out what’s going on, announcing changes, and so on. That’s a major escape valve, so they might not see the value of using things like linking to achieve evolvability.
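
As a rough sketch of what that linking looks like in practice – hypothetical URLs and response shape, Python’s requests library assumed:

    import requests

    # Fetch an entry-point resource; the server includes links to related
    # resources in the representation itself. Hypothetical shape:
    # {"balance": 100, "links": {"orders": "https://api.example.com/accounts/17/orders"}}
    account = requests.get("https://api.example.com/accounts/17").json()

    # The client follows the advertised link instead of hard-coding the URL,
    # so the server can restructure its URL space without breaking clients.
    orders = requests.get(account["links"]["orders"]).json()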

Many Clients, Many Servers

The next level is developing an API that can not only be consumed by many different clients (with their own unique concerns and history), but also deployed by many different servers, using different implementations, and with many possible extensions. OpenStack’s API has many of these concerns; although it has a single(ish) implementation, it is being deployed in lots of very different ways. Another example is any of the emerging standard cloud APIs (whether they’ll succeed is another blog entry, of course). Writing this kind of API is hard; your only channel for coordination is the specification itself, and once it’s out in the world, you can only add to it, you can’t take it back. You have to be able to accommodate lots of unforeseen uses, deployment scenarios, extensions, and bad implementations.
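
One small illustration of that posture, on the client side: treat anything the specification doesn’t guarantee as ignorable, so extensions and newer revisions don’t break you. (Hypothetical resource and field names; Python’s requests library assumed.)

    import requests

    KNOWN_STATUSES = {"ACTIVE", "BUILDING", "ERROR"}

    server = requests.get("https://cloud.example.com/v2/servers/abc123").json()

    # Read only what the specification guarantees; silently ignore fields
    # added by extensions or newer revisions of the spec.
    status = server.get("status", "UNKNOWN")
    if status not in KNOWN_STATUSES:
        # A value this client doesn't know about: degrade gracefully
        # instead of rejecting the whole response.
        status = "UNKNOWN"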

These concerns are very similar to the ones the Web faced when it was set up, and they’re reflected in HTTP, and therefore in REST. Truth be told, they’re common to most Internet protocols, as designed by the IETF. REST, as embodied in HTTP, just has an advantage in terms of simplicity, tool support, mindshare and capability for many (not all) purposes today.


3 Comments

Jan Algermissen said:

Hi Mark,

good classification of scenarios.

I sort of ‘oppose’ your assertion that “one server, one client” scenarios aren’t going to see much value from applying the more subtle bits of REST. Typically, integration scenarios suffer a lot more from coupling than the apparent ease of out-of-band coordination suggests. And this is where virtually all enterprises suffer. And lose tons of money in the long run - IMO (‘O’ standing for ‘observation’ here :-)

First, coupling easily leaks into source code and library repositories, often making federated change of system components harder than the ease of coordination suggests (“Hey we updated the version of log4j in our API-JAR, if you want to use this new JAR, go upgrade your log4j lib first” … brrzzz).

Then, deploying new component software versions on parallel instances means that, during the deployment process, some consumers will see instances running different versions. It is not easy to deal with this, and you usually have to roll your own solution (whereas HTTP brings solutions out of the box).
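
(One mechanism HTTP offers here is content negotiation: the client names the representation versions it can handle, and whichever instance answers says which one it served. A rough sketch, with a hypothetical versioned media type and Python’s requests library:)

    import requests

    # Ask for version 2 of a (hypothetical) media type, but accept version 1
    # as a fallback, so old and new instances can coexist during the deploy.
    resp = requests.get(
        "https://api.example.com/orders/99",
        headers={
            "Accept": "application/vnd.example.order+json;version=2, "
                      "application/vnd.example.order+json;version=1;q=0.5"
        },
    )
    resp.raise_for_status()

    served_v2 = "version=2" in resp.headers.get("Content-Type", "")
    order = resp.json()  # parse according to whichever version was served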

Hot deployment of new component software typically requires running business transactions to be terminated, or at least coordination with the consumers to have the transactions suspended. Again, HTTP addresses these issues out of the box, eliminating the need to coordinate deployment with the consumer.
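
(Another thing HTTP gives you for free is explicit back-pressure on stateless, retryable requests: an instance that is being cycled can answer 503 with Retry-After, and clients of safe/idempotent requests simply wait and try again, with no coordination needed. A sketch, assuming a hypothetical URL, the delta-seconds form of Retry-After, and Python’s requests library:)

    import time
    import requests

    def get_with_retry(url, attempts=3):
        """GET an idempotent resource, backing off when an instance is
        mid-deployment and answering 503 Service Unavailable."""
        for _ in range(attempts):
            resp = requests.get(url)
            if resp.status_code == 503:
                # Assumes Retry-After is given in seconds, not as a date.
                time.sleep(int(resp.headers.get("Retry-After", "1")))
                continue
            resp.raise_for_status()
            return resp
        raise RuntimeError("service still unavailable after retries")

    report = get_with_retry("https://internal.example.com/reports/latest")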

The bottom line is that, in my experience, the effects of coupling surface in integration scenarios at the development and deployment level, and I think enterprise integration can benefit a lot more from the hypermedia constraint than is commonly understood.

Especially - speaking from the context of my current client - when the frequency of adding new features to the system is deliberately very high, and the mere act of communicating about those new features at the functional level already consumes most of the people’s working time.

It seems like heaven when you imagine you could just go and upgrade a system component in isolation, because the system owners agreed on global contracts that allow federated change (aka ‘media types’) as the only contracts on top of a ubiquitous Web Architecture.

Oh well … we are getting there, but slowly :-)

Jan

Wednesday, June 27 2012 at 5:50 AM

Mike Schinkel said:

Excellent post Mark, and I really like how you classify the different concerns; I’ve looked for such a classification in the past but never found one. This will be useful for framing my thoughts on projects in the future. I’ve been a bit vocal about how I’ve seen HATEOAS as a bit like the Emperor’s new clothes, but your “MANY CLIENTS, MANY SERVERS” scenario really drives home a great example of where the pain of implementing HATEOAS is much less than the pain of not implementing it.

And to balance out Jan’s evangelical zeal :-), I think that while Jan is right about the benefits of HATEOAS for “MANY CLIENTS, ONE SERVER”, he said nothing about the costs of implementing HATEOAS in those scenarios. When your clients can be many, and many of them are at their skill and/or budget limits anyway, requiring the added initial complexity of HATEOAS (complex because there are no standards, and thus no standard libraries to enable it) can significantly retard adoption and increase API support costs, as well as increase the costs to the clients for their initial implementations. Solve those problems, and I expect a lot more “MANY CLIENTS, ONE SERVER” scenarios will use HATEOAS, because the benefits would then outweigh the costs.

Wednesday, July 4 2012 at 2:17 AM