Google API shift

Google kills their Search API. So what?

I learned the news through David Megginson’s Quoderat, under the title Beginning of the end for open web data APIs?, but I don’t agree with his analysis, even though it is shared by all the other posts I have read on the subject.

David writes: « The replacement, Google AJAX API, forces you to hand over part of your web page to Google so that Google can display the search box and show the results the way they want (with a few token user configuration options), just as people do with Google AdSense ads or YouTube videos », which leads him to conclude that the whole of open web data and mash-ups [could be] on the losing side.

This is not what I understand when I read the Google AJAX Search API Documentation.

The « Hello, World » of the Google AJAX Search API does use a method in which you hand over to Google a node in your page where they include the markup for their search results, but there is more to their API than that.
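For reference, the basic method looks roughly like the sketch below, reconstructed from memory of the documentation (YOUR-KEY stands in for an actual API key; treat this as an untested sketch rather than copy-and-paste code):

    <script src="http://www.google.com/uds/api?file=uds.js&v=1.0&key=YOUR-KEY"
            type="text/javascript"></script>
    <script type="text/javascript">
      function onLoad() {
        // Build a search control, attach a web searcher to it, and hand
        // Google a node of the page where they draw the box and results.
        var searchControl = new GSearchControl();
        searchControl.addSearcher(new GwebSearch());
        searchControl.draw(document.getElementById("searchcontrol"));
      }
      GSearch.setOnLoadCallback(onLoad);
    </script>

    <!-- The node you hand over to Google: -->
    <div id="searchcontrol">Loading...</div>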

If you are not happy with this basic method, you can use Search Control Callbacks to get search results delivered to your own JavaScript methods and do whatever you want with them.
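Again from memory of the documentation, this second method looks something like the following; myRenderResult is a hypothetical function of your own:

    // Use a raw searcher rather than the canned search control: results
    // are delivered to your own callback instead of being drawn by Google.
    var searcher = new GwebSearch();
    searcher.setSearchCompleteCallback(searcher, function() {
      for (var i = 0; i < searcher.results.length; i++) {
        // Each result exposes fields such as title, url and content,
        // which you are then free to format, filter or mash up.
        myRenderResult(searcher.results[i]);
      }
    });
    searcher.execute("XML");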

What’s the difference from the SOAP search API, then?

The difference is twofold:

  • You trade an API that needs to be used server side for an API that needs to be used client side. Because of the same origin policy, the SOAP API has to be called from your own server, which acts as a proxy for the browser; by contrast, the new Ajax API is designed to be used directly in the browser (see the sketch after this list). It would be interesting to test whether you can use this API in a server side JavaScript interpreter, but that is obviously not Google’s main target!
  • You trade a SOAP API, which is platform and language independent, for a JavaScript API. From a developer’s perspective, once you accept that the API is used client side, that doesn’t make a lot of difference; if anything, most developers will probably be happy to use an API which is simpler than a SOAP client API.
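To make the first point concrete, here is an illustrative sketch of what the same origin policy allows and forbids (the SOAP endpoint URL is the one Google documented; everything else is illustration, not production code):

    // A page served from example.org cannot POST a SOAP envelope straight
    // to Google: the same origin policy blocks the cross domain request.
    var request = new XMLHttpRequest();
    request.open("POST", "http://api.google.com/search/beta2"); // blocked

    // Hence the proxy: the page calls its own server, which performs the
    // SOAP exchange and relays the results.
    request.open("POST", "/my-search-proxy?q=XML"); // same origin, allowed

    // The Ajax API sidesteps the issue because it is loaded through a
    // <script> tag, which the same origin policy does not restrict.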

When you think about it, this isn’t so much the end of mashups as a shift from server side mashups to client side mashups.

This Ajax Search API appears to be using the same concepts and principles as the Google Maps API, and it’s weird to see people who consider the Google Maps API the best invention since sliced bread also consider the Google AJAX Search API evil.

Client side mashups are generally easier to implement, since they do not rely on any piece of software installed on your server. The benefit of server side mashups, however, is that they can include the mashed-up content directly in the HTML pages they serve, making them good web citizens which are both accessible and crawlable.
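The contrast shows in the markup a crawler actually receives; an illustrative sketch:

    <!-- Server side mashup: the results are baked into the page, so a
         crawler (or a browser without JavaScript) sees real content. -->
    <ul>
      <li><a href="http://www.example.org/">A search result title</a></li>
    </ul>

    <!-- Client side mashup: the crawler sees only an empty placeholder
         that JavaScript will fill in after the page has loaded. -->
    <div id="searchcontrol"></div>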

I don’t regret the SOAP API (SOAP is almost as evil as an API can be), but what I do regret is that Google doesn’t publish both an Ajax API, to make client side mashups easy, and a REST API which their Ajax API would itself use and which could also be used server side.
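To illustrate, the endpoint below is pure speculation on my part (the URL, parameters and callback name are invented), but it shows how a single REST interface could serve both sides:

    // Server side: a plain HTTP GET returning XML or JSON, usable from
    // any language.
    //   GET http://www.google.com/search/rest?q=XML&output=json

    // Client side: the very same endpoint, loaded through a dynamically
    // inserted <script> tag so that the same origin policy doesn't apply.
    function showResults(response) {
      // response.results would carry the same title/url/content fields
      // that the Ajax API delivers to its callbacks.
    }
    var script = document.createElement("script");
    script.src = "http://www.google.com/search/rest?q=XML&callback=showResults";
    document.getElementsByTagName("head")[0].appendChild(script);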

7 thoughts on “Google API shift”

  1. Frederik,

    If I don’t publish more often on this blog, it’s because I don’t want to speak about things I don’t know :)!

    I published this post in reaction to the many other posts claiming that this move is the end of mashups because Google takes control over the way search results are presented in your pages.

    My point is that this is not true and that the real difference is a shift from server side to client side mashups.

    I appreciate the difference between a SOAP service and a JavaScript API, but in this context, where the focus is on client side Web programming, which is very largely dominated by JavaScript, the difference will be very small for most developers.

    Eric

  2. Eric,

    I absolutely disagree with you. How can you say that it makes no difference? It makes me wonder if you even know what you’re talking about!

    For sure, for the average web programmer, the difference may be slight. He will probably even find it easier to use a client-side JavaScript API to « embed » a search facility in his pages.

    But from a more architectural point of view it makes a lot of difference! The main point of the SOAP protocol is that it makes services accessible from almost any piece of software, not just a web client. The other point of SOAP is that it is standardized, which guarantees the independence of server and client implementations.

    You suggest that calling the API from a server-side JavaScript engine could do the trick, but this would just be a very ugly hack, which would not be acceptable for millions of good reasons.

    So it is just pointless to compare a SOAP service and a proprietary client/server architecture (with a JavaScript client API).

    However, I agree with your conclusion: both may be useful, and one could be based on the other (whether the other is REST or SOAP).

My question was rather: why can’t they use HTTP(S) to connect to proprietary data services?

    If they really wanted to do so, it would be easier for them to authenticate their API and use a standard protocol that is already implemented in all the browsers than to create a new network protocol, which would be complex if not impossible to implement across the different browsers, don’t you think?

    As for Microsoft, they may be tempted to create their own client/server protocol, but they too need to make it work in other browsers if they want it to spread across the Web.

    The browser wars are what have kept the Web open, and I don’t see what would change that in the near future!

Why would they want to connect the client directly to proprietary data services? The answer is almost in the question. Since data is the real fuel (some say the « Intel Inside ») of the web, it unfortunately makes sense to me that companies who run data services would like to control how their data is used. But no data lock-in of course, since Google is set on not doing evil, isn’t it?

    As to how they would do that, a virtual machine should be good enough. Microsoft has already put a .NET VM into their SQL Server 2005. Wouldn’t the new Java Virtual Machine be a good candidate for such a system?

  5. Xavier,

    I haven’t double checked, but I had assumed that behind the scenes they are still using HTTP(S) and a serialization format such as XML or JSON.

    If they were bypassing HTTP, I agree that it would be more worrying.

    On the other hand, why and how would they do that? I would think that client side JavaScript lacks the low level networking functions that would make it possible, and that HTTP(S) is really the most sensible option.

    Eric

I agree when you say that this isn’t so much the end of mashups as a shift from server side mashups to client side mashups, but I do think this is indeed where the worries might lie.

    Microsoft recently made the same kind of shift with its Ajax API, and one could wonder whether this kind of move wouldn’t eventually lead to bypassing the HTTP/CGI layer altogether, thereby connecting the client directly to the data server, where the data could also be preprocessed in all kinds of useful ways before retrieval.

    For instance, it should become possible to retrieve web pages directly from SQL Server 2005, built from any internal and external data, since the server itself includes a full .NET virtual machine. It will be interesting to watch Google’s moves in this respect.
