A couple of things we got wrong ten years ago

I started both designing web pages and learning Java roughly ten years ago, back in 1996.

The first Web server I ever used was a Netscape server. It came with built-in server side JavaScript, and we were convinced that JavaScript would be the language of choice for developing server side Web applications.

Around the same period, I attended my first Java training. The instructor explained to us that the really cool thing about Java was its virtual machine, which could run anywhere, and that, for this reason, Java would become the obvious choice for client side Web development.

Ten years later, we must admit that we got that completely wrong: Java is mostly used server side and JavaScript is mostly used client side!

Will that remain true in the future?

I would be surprised if Java grew client side, but wouldn’t be surprised if JavaScript made a comeback server side.

Technically speaking, JavaScript is a good language, very comparable to scripting languages such as Python, Perl or Ruby, and the fact that it is used client side for increasingly complex functions should justify using it server side too.

There are good reasons to use the same language client and server side:

  • Developers don’t have to learn different languages to work client and server side.
  • It is easier to move functions from the server to the client or vice versa.
  • Functions can be duplicated client and server side.

Ruby on Rails and the Google Web Toolkit translate their source languages into JavaScript to solve similar issues; wouldn't it be much easier if we could use the same language client and server side?
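
To make the idea more concrete, here is a minimal sketch, assuming a hypothetical shared script and a server side interpreter such as Mozilla's Rhino; the function and element names are made up for the example, and escaping is omitted to keep it short:

    // render.js -- hypothetical shared module: the same function can be
    // loaded by the browser and by a server side JavaScript interpreter
    // (Rhino, for instance).
    function renderComment(comment) {
      return '<div class="comment">' +
             '<span class="author">' + comment.author + '</span>' +
             '<p>' + comment.text + '</p>' +
             '</div>';
    }

    // Client side, the same function updates a page fragment after an Ajax
    // call (assumes an element with id "comments" and a fetched comment object).
    function appendComment(comment) {
      var container = document.getElementById("comments");
      container.innerHTML += renderComment(comment);
    }

    // Server side, renderComment(comment) would be called while building the
    // initial page, so the client receives a full document, not a placeholder.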

The duplication of functions is a point that I find really important.

Web 2.0 applications need to remain good Web citizens and serve full pages to clients rather than HTML placeholders for Ajax applications.

If you want to do that while keeping the fluidity of Ajax, you end up implementing the same functions server side to build the initial page and client side to update page fragments.

In the first chapter of Professional Web 2.0 Programming, I show how you can use the same XSLT transformation client and server side to achieve this goal. However, there is strong cultural pushback from many developers against using XSLT, and server side JavaScript would be a better alternative for them.
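
As a rough illustration of the approach (this is not the book's code; the stylesheet name and the element id are made up), the same articles.xsl stylesheet that the server applies to build the initial page can be re-applied in the browser, for instance with Mozilla's XSLTProcessor, to refresh a fragment:

    // Re-apply in the browser the stylesheet the server already used
    // to produce the initial page (Mozilla XSLTProcessor API).
    function refreshArticles(xmlDoc, xslDoc) {
      var processor = new XSLTProcessor();
      processor.importStylesheet(xslDoc);
      var fragment = processor.transformToFragment(xmlDoc, document);
      var target = document.getElementById("articles"); // hypothetical placeholder
      while (target.firstChild) {
        target.removeChild(target.firstChild);           // drop the server-rendered markup
      }
      target.appendChild(fragment);
    }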

What should the ideal JavaScript framework look like?

There are already several JavaScript frameworks around; unfortunately, all those that I have found follow the same templating principles as PHP or ASP.

For me, the killer JavaScript framework would be modeled after Ruby on Rails or Pylons.

Tell me if you find one!

Google API shift

Google kills their Search API. So what?

I learned the news through David Megginson's Quoderat, under the title Beginning of the end for open web data APIs?, but I don't agree with his analysis, even though it is shared by all the other posts I have read on the subject.

David writes: "The replacement, Google AJAX API, forces you to hand over part of your web page to Google so that Google can display the search box and show the results the way they want (with a few token user configuration options), just as people do with Google AdSense ads or YouTube videos", which, for him, justifies the conclusion that the whole of open web data and mash-ups could end up "on the losing side".

This is not what I understand when I read the Google AJAX Search API Documentation.

The « Hello, World » of the Google AJAX Search API does use a method in which you hand over to Google a node in your page where they include the markup for their search results, but there is more to their API than that.

If you are not happy with this basic method, you can use Search Control Callbacks to get search results delivered to your own JavaScript methods and do whatever you want with them.
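
To give an idea of what this looks like, here is a rough sketch of the callback approach; the GwebSearch class and setSearchCompleteCallback method names are taken from the documentation as I remember it and should be double-checked, and the "my-results" element is made up for the example:

    // A "raw searcher": Google delivers the results to our own callback,
    // and we produce the markup ourselves instead of handing over a node.
    var searcher = new GwebSearch();

    searcher.setSearchCompleteCallback(null, function () {
      var list = document.getElementById("my-results");
      list.innerHTML = "";
      for (var i = 0; i < searcher.results.length; i++) {
        var result = searcher.results[i];
        var item = document.createElement("li");
        item.innerHTML = '<a href="' + result.url + '">' + result.title + '</a>';
        list.appendChild(item);
      }
    });

    searcher.execute("web 2.0");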

What’s the difference with the SOAP search API, then?

The difference is twofold:

  • You trade an API that needs to be used server side for an API that needs to be used client side. Because of the same origin policy, the SOAP API has to be used from your own server acting as a proxy (see the sketch after this list). By contrast, the new Ajax API is designed to be used directly in the browser. It would be interesting to test whether this API can be used in a server side JavaScript interpreter, but that is obviously not Google's main target!
  • You trade a SOAP API, which is platform and language independent, for a JavaScript API. From a developer's perspective, if you accept the fact that the API is used client side, that doesn't make much of a difference. On the contrary, most will probably be happy to use an API that is simpler than a SOAP client API.
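
For comparison, here is roughly what the SOAP approach forces you to do client side: because of the same origin policy, the browser can only call a proxy hosted on your own server, and that proxy is the piece that actually talks SOAP to Google. The /search-proxy URL and the XML response it returns are hypothetical:

    // Hypothetical client side of a server side mashup: the browser calls a
    // same-origin proxy; the proxy (not shown here) calls Google's SOAP API.
    function searchViaProxy(query, handleResults) {
      var request = new XMLHttpRequest();
      request.open("GET", "/search-proxy?q=" + encodeURIComponent(query), true);
      request.onreadystatechange = function () {
        if (request.readyState === 4 && request.status === 200) {
          handleResults(request.responseXML); // assume the proxy returns XML
        }
      };
      request.send(null);
    }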

When you think about it, this isn't so much the end of mashups as a shift from server side mashups to client side mashups.

This Ajax Search API appears to use the same concepts and principles as the Google Maps API, and it's weird to see people who consider the Google Maps API the best invention since sliced bread also consider the Google AJAX Search API evil.

Client side mashups are generally easier to implement since they do not rely on any piece of software installed on your server; the benefit of server side mashups, however, is that they can include content in the HTML pages that they serve, making them good web citizens that are accessible and crawlable.

I won't miss the SOAP API (SOAP is about as evil as an API can get), but what I do regret is that Google isn't publishing both an Ajax API to make client side mashups easy and a REST API that would be used by their Ajax API and could also be used server side.

Why XML Experts Should Care About Web 2.0

Here is the talk I had prepared for the Web 2.0 panel at the XML 2006 conference. It was a very interactive panel, and even though I didn't pronounce exactly the same sentences, the message is the same.

I had proposed a whole session titled “Why XML Experts Should Care About Web 2.0”. I tried to shrink this 45 minute presentation to fit within a 5 minute slot, but that didn't really work. Instead of presenting the result of this hopeless exercise, I will use a well known metaphor. Of course, metaphors do not prove anything, but they are great for quickly illustrating a point, and that's what I need.

Bamboo stems can reach 40 meters in height with diameters of up to 30 cm, and some species can grow more than one meter per day. Despite that, they are so strong that in Asia they are used to build scaffolding for skyscrapers. This performance is due to the tube-like structure of the stems, reinforced by their nodes.

It recently occurred to me that IT (and probably science in general) progresses like bamboo, alternating periods of fast innovation with periods of consolidation. It is interesting to note that the prominent actors in these phases are often different. Consolidation builds on prior experience and is good work for established experts. On the other hand, expertise often tends to censor new ideas, and it can seriously limit the ability to innovate.

This theory is well illustrated by the history of the World Wide Web. 

In the eighties and early nineties, hypertext experts were held back by the complexity of their models, and a new phase of innovation began with the invention of HTTP and HTML.

The consolidation phase was launched ten years ago by Jon Bosak when he said “You have to put SGML on the web. HTML just won’t work for the kinds of things we’ve been doing in industry.”

Within five years, this consolidation phase had grown to a stage where the XML stack is so heavy that it looks like legacy. Its development has almost stalled, and a new innovation phase was badly needed.

Those of you who know me know me as an XML expert, and like many XML experts, the crazy hype that is obscuring Web 2.0 kept me away for a long time.

I started looking at what's behind the hype a year ago. Having done so, I am happy to report that Web 2.0 could be the next innovation phase.

A good indication is that XML experts predict that Web 2.0 will fail for the same reasons hypertext experts predicted that HTML would fail: Web 2.0 is messy, overly simplistic, not designed well enough, …

If Web 2.0 is the next innovation phase, what should we do? 

We can contribute, actively follow the growth of the phenomenon, and provide guidance, but we should avoid being too directive for the moment.

My first personal contribution is my book “Professional Web 2.0 Programming”. This book is for anyone wanting to catch the Web 2.0 wagon. It's also a set of reminders and guidance, but we've tried to be as open as possible and, for instance, we have covered not only XML but also its alternatives (including controversial technologies such as JSON).

If we stay ready, our turn will come again when the next consolidation phase starts.

That consolidation phase will eventually put XML on the Web, just as XML has (at least partially) put SGML on the Web.

Will XML on the Web still be XML? Maybe not: SGML on the Web is no longer SGML, so why should XML necessarily survive the next iteration? Anyway, does that really matter?

Our Web 2.0 book appears to be tough to classify

I arrived in Boston yesterday evening to take part in the XML 2006 conference.

Today, I spent most of my time walking around the city, and I couldn't resist entering the first bookshop I found to check whether they had our new Web 2.0 book.

This bookshop happened to be Borders, 10 School Street, and it took me a while to find the book because it was neither with the other books about the Web nor with other usual suspects such as books about Ajax, but rather sitting with my XML Schema book and HTML 4 For Dummies (I haven't figured out why that other book was there either), among a bunch of books about XSLT.

Our book is probably difficult to classify because it covers a lot of subjects, but, even though I have been involved in it, it is certainly not a book about XML and should rather be shelved as a book about the Web!

Professional Web 2.0 programming for real

I have received my personal copies of our Web 2.0 book and they look really good.

I really like the kind foreword from Caterina Fake, co-founder of Flickr, especially when she says that “this book is very much about how, through technology, you can capture and delight your users”. This should be the tagline of our book!

She goes on to add: “Web 2.0 is really a developer's paradise!”, and that's really how we felt while we wrote the book.

I am also impressed by the ground we've covered while keeping the book relatively short. In the outline that I sent to the publishers to sell my book idea, I wrote that my goal wasn't to write a Web 2.0 bible, and I had set the prospective page count at 450. If you don't take the index into account, we are very close to our target with our 492 pages.

The real challenge was to use this limited space to cover an incredibly large landscape: Web 2.0 is about using a dozen different technologies together. Your reviews will tell, but I think that we have been quite successful in selecting the most important things you need to know to combine these technologies into successful Web 2.0 applications.

I had the chance to give a talk about Web 2.0 at sparklingPoint yesterday evening and had a copy with me to circulate after my talk. The audience included several Web 2.0 developers, and they spent more time than I had expected glancing through the book.

Their comments were positive, and they appreciated in particular the fact that we have a full chapter about HTTP, a fundamental building block of the Web that is misunderstood by too many developers.

Now that the book is available, like Caterina Fake, I look forward to seeing the results the readers of this book will bring into being!

What do you expect from XMLfr?

After discussing it with the editorial team, I have sent a long message titled “Qu'attendez-vous de XMLfr?” (What do you expect from XMLfr?) to the xml-tech mailing list.

In it, I briefly describe the evolution of the site since its creation in early 2000 and my plans to bring a bit more dynamism back to it.

Don't hesitate to take part in the debate and to tell me, whether on the list, by individual email or as a comment on this post, what you expect from XMLfr.

Next year at ATHENS

I gave my Web 2.0 tutorial at ATHENS 2006 yesterday afternoon, and it was the first time I have had the opportunity to teach « real » students.

A few of them were really sleepy but the organizers had kindly warned me that their Parisian nights were pretty busy and that it was to be expected…

Most of the others, on the contrary, looked interested, and the audience was much friendlier and more participative than the typical audience at professional conferences.

They did ask a lot of questions and warmly applauded me at the end of my talk.

The content of the tutorial is heavily technical, with a lot of code snippets and HTTP traces. Its duration (three hours) is adequate, and I think it was well received, even though there are at least two points that could be improved:

  • I was surprised to see that, in slide show mode, OpenOffice Impress didn't show any pointer (of course, that hadn't been the case when I rehearsed on my own PC). My presentation includes a lot of links, and I was not able to click on them since I couldn't tell when they were selected! That was quite disturbing, and I had to switch into edit mode to get the pointer back each time I wanted to follow a link. During the break, I eventually discovered that there is a slide show option to make the mouse pointer visible, and that made my life much easier during the second part. That's something I need to remember for my next presentations!
  • I need to add some diagrams to visualize the exchanges between the browser and the server. There are many of them in my sample Web 2.0 application. Showing the HTTP traces is useful but some diagrams would help to understand the sequencing of actions and exchanges.

I asked the students to use my email address to send me their feedback, and I hope they won't hesitate to do so.

The organizers, Jacques Prévost and Didier Courtaud, seemed pleased enough to invite me to take part in the next edition, and I should be involved in the ATHENS program again next year.

Newspapers 2.0

Ifra has published a new special edition of their magazine, newspaper techniques, dedicated to Web 2.0. Ifra presents itself as “the world's leading association for newspaper and media publishing”, and this special edition shows the level of interest in these new technologies from the newspaper industry.

I had the pleasure of contributing to this edition a paper giving some tips for transforming a 1.0 site into a Web 2.0 one, and the table of contents also includes an interview with Tim O'Reilly, a general and a more technical introduction, two case studies, a brainstorming session with newspaper suppliers, and a glossary.

This special edition is a good introduction to Web 2.0 that should be useful beyond the newspaper and media publishing communities.

It can currently be downloaded free of charge as a Flash document on the Ifra newspaper techniques ePaper web site.