Web 2.0, professional… and Fun!

"Phew!" said Danny Ayers. "Relieved," said Erik Bruchez. Our upcoming Web 2.0 book is written, and it has been hard work but also fun.

This book is a long story, almost as long as my interest in Web 2.0…

A long-time Web and XML expert, I was kept away from Web 2.0 for a while by the marketing around it, and I didn’t bother to look behind the smoke until December 2005, when the business networking initiative sparklingPoint Networking invited me to present my analysis of Web 2.0.

I have published my presentation both in English on my blog and in French on XMLfr. The French version rapidly became the most-read article on XMLfr and one of the reference definitions of Web 2.0 in French.

These documents go through the so-called social and technical layers of Web 2.0 and note that, yes, Web 2.0 is nothing but a new term to designate the Web as it was meant to be. Does that make the word pointless? I don’t think so. Nobody can deny that the Web is changing, and finding a word to name this new Web is most useful.

One of the things that really struck me was the number of technologies involved in Web 2.0 applications. Web 2.0 is anything but a coherent platform! There is nothing wrong with using a whole set of technologies, except that you need to keep sight of the big picture. This goes against another tendency, which is to specialize people and resources.

Most of the technologies that constitute the Web 2.0 technology stack, from (X)HTML to databases through CSS, JavaScript, and HTTP, are getting more complex every day, and most of the available books focus on only one of these technologies. This means not only that you need to buy a full bookshelf if you want to cover the Web 2.0 stack, but also that most of these books hardly cover how these technologies can be used together.

Professional Web 2.0 Programming was born from this analysis and tries to give the big picture by presenting the whole Web 2.0 stack.

Erik kindly wrote that this book “owes everything to Eric van der Vlist, who provided the vision, outline, and much more over those last few months.” I’d rather say that I am solely responsible for the inconsistencies and omissions that you might find in the outline, and the writers (including me) are responsible for the good things that you’ll find in their chapters!

This book also owes a lot to the friends who helped me write the outline before I submitted it to publishers, to Jim Minatel, our Senior Acquisitions Editor, who believed in the project from the very beginning, to Sara Shlaer, our editor, and to Micah Dubinko, our tech reviewer.

Both the book and the writing experience turned out to be quite different from what I had expected.

When I first submitted my outline to several publishers, I had planned to write this book, like my previous books, alone and over a period of twelve to eighteen months. While Jim Minatel was quite happy with my original outline, he was less than thrilled by my schedule! WROX has a lot of experience with multi-authored books, and he gently convinced me to build a team to shorten the delays.

The team was constituted as a mix of authors I knew and authors who had already written for WROX, and this was a good decision: a more homogeneous team would probably not have been able to provide the diversity of points of view that you will find in this book.

The authors were spread across Silicon Valley, the UK, Italy, Switzerland, and France, and I had anticipated that communication could be an issue. To facilitate the writing, I set up a bunch of Internet-based goodies, including a mailing list, an IRC channel, a wiki, a Subversion repository to share our prose and code samples, and an external web site.

This proved to work very efficiently, and Jim wrote: “What I really love about working with this gang of authors […] is how relentless they are about communicating and collaborating with each other.”

The mailing list was widely used throughout the project and has been critical to keeping the whole team in contact. The wiki was very helpful for finalizing the outline. One of my regrets is that we didn’t use it to edit the whole book… My previous book (RELAX NG) was edited on a wiki and transformed into DocBook before publication, and I found that very handy. I am pretty confident that we could have used the same method to edit this book, but that would have broken too many of the WROX policies, so we didn’t go that way. The Subversion repository was used by most writers more as a backup than as a true repository, but here again this was not aligned with the WROX policies. We used IRC only once, to resolve a controversial issue with the outline.

The writing itself went very smoothly, with everyone doing their best to stay on schedule. Joe Fawcett wrote in his blog: “One of the great things about writing is how much you learn. It’s easy to pick up a passing knowledge of a subject, but when you have to write about it and provide working code examples, then you really need to burrow down and learn.” I couldn’t agree more with his statement and have been surprised again to see how much you learn by writing!

The main difficulty of this book is that since its goal is to give the “big picture,” it needs to be coherent across chapters, which is always difficult for multi-authored books. Also, since the schedule was very tight, we couldn’t afford to spend too much time building a very detailed outline. Keeping things coherent has been the job of Sara and Micah, and they’ve done a good job checking cross-references, redundancies, and other inconsistencies.

The other difficulty is that the target audience is “professional developers with no prior experience of Web 2.0,” and deciding what the prerequisites for this book should be was quite subjective. On one hand, we would like this book to be a central place where people can find most of what they need to write Web 2.0 applications. On the other hand, we couldn’t afford to introduce each of the technologies from scratch. We’ve done our best to guess what most of you already know about web technologies at large, but the result may sometimes seem arbitrary. For instance, we’ve presented HTTP from scratch because we think this is an area where there are still a lot of misconceptions, but we’ve assumed that our readers are already somewhat familiar with HTML. Your reviews will tell us whether we need to change this in future editions!

Even though I knew that Sara and Micah were carefully tracking inconsistencies, I was rather anxious to know what the book would look like as a whole, and as soon as I had enough chapters written and some extra time, I started reviewing it as a whole. I am rather happy with the result and think that this should be a useful resource for web developers. The only regret I have is about the number of programming languages we’ve covered.

On the server side, there is no single programming language of choice for writing Web 2.0 applications, and my intention was that the book should not only be as agnostic as possible but also provide examples using as many different programming languages as possible. I think that the book can be considered agnostic, but the examples don’t use as many languages as I would have liked: Java, C#, and PHP represent most of the examples, and Python, Perl, or Ruby users may feel frustrated.

Still, I really believe that they can easily understand the examples in this book. A great way to follow the explanations given in the book is, of course, to try the examples. An even healthier exercise is to translate these examples into your favorite programming language. If you do so, I strongly encourage you to post these translations on the forum dedicated to this book on the WROX web site.

Writing this book has been fun, and I hope that reading it will be an enjoyable experience too!


Client-side XSLT brings life to static HTML pages and microformats

I am making all kinds of tests for the multimedia chapter of our upcoming Web 2.0 book, and as is often the case when I am writing, this is sparking a number of strange ideas.

I was exploring the similarities between playlists, podcasts and SMIL animation when it occurred to me that it might be interesting to see what can be done with microformats.

Although the rel-enclosure proposal still needs some polishing (for instance, it mentions that Atom requires a length on enclosures but does not define a way to express this length), the result would be something like:

      <div class="hfeed">
         <h1>SVG en quinze points</h1>
         <div class="hentry">
            <h2 class="hentry-title">
               <a
                  href="http://xmlfr.org/documentations/articles/i040130-0001/01%20-%20C'est%20base%20sur%20XML.mp3"
                  rel="bookmark" title="...">C'est basé sur XML</a>
            </h2>
            <p class="hentry-content">By <address class="vcard author fn">Antoine Quint</address> -
                  <abbr class="updated" title="2004-01-30T00:00:00">2004-01-30T00:00:00</abbr>
            </p>
            <p>[<a
                  href="http://xmlfr.org/documentations/articles/i040130-0001/01%20-%20C'est%20base%20sur%20XML.mp3"
                  rel="enclosure">download</a>] (<span class="htype">audio/mpeg</span>, <span
                  class="hLength">231469</span> bytes).</p>
         </div>
 .
 .
 .
      </div>        

[hatom.xhtml]

I am not a microformats expert, and I was surprised to see that this document is actually much harder to write than the corresponding Atom document. It probably contains lots of errors; if you spot one of them, please report it as a comment.
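For comparison, here is my own sketch of the corresponding Atom entry, reusing the values above (the choice of the enclosure URL as the atom:id is a hypothetical one, not part of the original page); it is arguably simpler to get right:

```xml
<entry xmlns="http://www.w3.org/2005/Atom">
   <id>http://xmlfr.org/documentations/articles/i040130-0001/01%20-%20C'est%20base%20sur%20XML.mp3</id>
   <title>C'est basé sur XML</title>
   <author>
      <name>Antoine Quint</name>
   </author>
   <updated>2004-01-30T00:00:00Z</updated>
   <link rel="enclosure" type="audio/mpeg" length="231469"
      href="http://xmlfr.org/documentations/articles/i040130-0001/01%20-%20C'est%20base%20sur%20XML.mp3"/>
</entry>
```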

This is nice, but probably not what users would expect from a Web 2.0 application. For one thing, this page is static and lacks all the bells and whistles of a Web 2.0 application. For instance, we might want to use one of the techniques described by Mark Huckvale to play the audio in the web page itself.

For this, we would need to modify the document, and entries could become:

                 <div class="hentry">
                        <h2 class="hentry-title">
                              <a
                                    href="http://xmlfr.org/documentations/articles/i040130-0001/01%20-%20C'est%20base%20sur%20XML.mp3"
                                    rel="bookmark" title="...">C'est basé sur XML</a>
                        </h2>
                        <p class="hentry-content">By
                              <address class="vcard author fn">Antoine Quint</address> - <abbr
                                    class="updated" title="2004-01-30T00:00:00"
                              >2004-01-30T00:00:00</abbr>
                        </p>
                        <p>[<a
                                    href="javascript:play(&#34;http://xmlfr.org/documentations/articles/i040130-0001/01%20-%20C'est%20base%20sur%20XML.mp3&#34;);"
                                    rel="enclosure">play</a>] (<span class="htype"
                              >audio/mpeg</span>, <span class="hLength">231469</span> bytes).</p>
                  </div>
            

[hatom-decorated.xhtml]

This is not very different, but the links with rel="enclosure" have been replaced by a call to a JavaScript function, and this is enough to lose the semantics of the microformat, since we obfuscate the enclosure’s URL.

We thus have a situation where the document that we want to serve is different from the document that we want to display client side, and that’s a typical use case for client-side XSLT.

The trick is to write a simple transformation that makes the static page dynamic:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml" xmlns:x="http://www.w3.org/1999/xhtml" version="1.0"
    exclude-result-prefixes="x">
    <xsl:output method="xml" encoding="UTF-8" indent="yes" cdata-section-elements="x:style x:script"/>
    <xsl:strip-space elements="*"/>
    <xsl:preserve-space elements="x:script x:style"/>
    <xsl:template match="@*|node()">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
    </xsl:template>

    <xsl:template match="x:head">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
            <style type="text/css"><![CDATA[

#player {
    padding: 10px;
    background-color: gray;
    position:fixed;
    top: 20px;
    right:10px
}

                    ]]></style>
            <script type="text/javascript"><![CDATA[

function play(surl) {
  document.getElementById("player").innerHTML=
    '<embed src="'+surl+'" hidden="false" autostart="true" loop="false"/>';
}

                ]]></script>
        </xsl:copy>
    </xsl:template>

    <xsl:template match="x:body">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
            <div id="player">A media player<br/>will pop-up here.</div>
        </xsl:copy>
    </xsl:template>

    <xsl:template match="x:a[@rel='enclosure']/@href">
        <xsl:attribute name="href">
            <xsl:text>javascript:play("</xsl:text>
            <xsl:value-of select="."/>
            <xsl:text>");</xsl:text>
        </xsl:attribute>
    </xsl:template>

    <xsl:template match="x:a[@rel='enclosure']/text()">
        <xsl:text>play</xsl:text>
    </xsl:template>

</xsl:stylesheet>

            

[decorateMf.xsl]

And add an xml-stylesheet processing instruction to the static (microformat) page:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="decorateMf.xsl" type="text/xsl"?>
<html xmlns="http://www.w3.org/1999/xhtml">
.
.
.
</html>
            

This works fine for me (GNU/Linux Ubuntu, Firefox 1.5), and the mplayer plug-in nicely pops up in the player div when I click one of the “play” links, but it would require a bit of polishing to work in other browsers:

  • The page crashes Opera 9.0 (I have filed a bug report and have been contacted by their tech support, who are already working on the issue).
  • The XSLT output method needs to be changed to HTML to work in Internet Explorer (otherwise the result is displayed as an XML document). Furthermore, IE inserts the embed element as text in the player div, and you might need to use a proper DOM method to insert the embed element as a DOM node.
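To sketch that last point, here is a hedged variant of the play() function from the stylesheet above (untested in IE, so consider it a starting point rather than a fix): it builds the embed element with DOM methods instead of assigning an HTML string to innerHTML, so the browser inserts a real element node.

```javascript
// A sketch of a more portable play(): instead of assigning an HTML
// string to innerHTML, it builds the embed element with DOM methods,
// so browsers like IE insert a real element node rather than text.
function play(surl) {
  var player = document.getElementById("player");
  // Clear any previous player content.
  while (player.firstChild) {
    player.removeChild(player.firstChild);
  }
  var embed = document.createElement("embed");
  embed.setAttribute("src", surl);
  embed.setAttribute("hidden", "false");
  embed.setAttribute("autostart", "true");
  embed.setAttribute("loop", "false");
  player.appendChild(embed);
}
```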

[Try it!]

There are probably a number of other (easier?) solutions for the specific problem I have solved here. However, this is an interesting pattern to apply in situations where you want to serve a clean document that needs to be altered to display nicely in a browser.

XSLT has sometimes been described as a “semantic firewall” that removes the semantics of XML documents and keeps only their presentation. I like to think of this technique as a semantic “anti-firewall” or “tunnel” that keeps the semantics of XML documents intact until the very last stage before they hit the browser’s rendering engine…


Web 2.0 the book

One of the reasons I have been too busy to blog these days is the project to write a comprehensive book about Web 2.0 technologies.

If Web 2.0 is about using the Web as a platform, this platform is far from homogeneous. On the contrary, it is made of a number of very different pieces of technology, from CSS to web server configuration through XML, JavaScript, server-side programming, HTML, …

I believe that integrating these technologies is one of the main challenges for Web 2.0 developers, and I am always surprised, if not frightened, to see that people tend to get more and more specialized. Too many CSS gurus do not know the first thing about XML, too many XML gurus don’t know how to spell HTTP, and too many Java programmers don’t want to know JavaScript. And no, knowing everything about Ajax isn’t enough to write a Web 2.0 application.

In defense of these hyper-specialists, I have also found that most of the available resources, both online and in print, are even more heavily specialized than their authors, and that even if you could read a book on each of these technologies, you’d find it difficult to get the big picture and understand how they can be used together.

The goal of this book is to fill that gap and to be a useful resource for all the Web 2.0 developers who do not want to stay in their highly specialized domain, as well as for project managers who need to grasp the Web 2.0 big picture.

This is an ambitious project on which I started to work in December 2005.

The first phase was to define the book outline with the helpful contribution of many friends.

The second was to find a publisher. O’Reilly, the publisher of my two previous books, happens to be one of the co-inventors of the term “Web 2.0,” and that makes them very nervous about Web 2.0 book projects.

Jim Minatel from Wiley was immediately convinced by the outline, and the book will be published in the Wrox Professional Series.

I had initially planned to write the book all by myself, but it would have taken me at least one year to complete the work, and Jim wasn’t thrilled by the idea of waiting until 2007 to get this book in print.

The third step was to find the team to write the book, and the lucky authors are:

Micah Dubinko is tech editing the book and Sara Shlaer is our Development Editor.

We then had to split the work between authors. The exercise was easier than expected. Being in a position to arbitrate the choice, I found it fair to pick the chapters left over by the other authors, which leaves me with chapters that will require a lot of research. This is fine, since I like learning new things when I write, but it also means more hard work.

This is my first co-authored book, and I think that one of the challenges of such books is to keep the whole content coherent. This is especially true for a book whose goal is to give “the big picture” and to explain how different technologies play together.

To facilitate communication between authors, I have set up a series of internal resources (wiki, mailing list, Subversion repository). It’s still too early to say whether this will really help, but the first results are encouraging.

More recently, I have also set up a public site (http://web2.0thebook.org/) that presents the book and aggregates relevant content. I hope that all these resources will help us to feel and act as a team rather than a set of individual authors.

The “real” work has finally started, and we now have the first versions of our first chapters progressing through the Wiley review system.

It’s interesting to see the differences between the processes and rules of different publishers. To me, a book was a book, and I hadn’t anticipated so many differences, not only in the tools being used but also in the style guidelines.

The first chapter I wrote is about Web Services, and it has been a good opportunity to revisit the analysis I had done in 2004 for the ZDNet Web Services Convention [papers (in French)].

From a Web 2.0 developer’s perspective, I think that the main point is to publish Web Services that are perfectly integrated into the Web architecture, and that means being as RESTful as possible.

I have been happy to see that WSDL 2.0 appears to be making some progress in its support of REST services, even though it’s still not perfect. I posted a mail with some of my findings to the Web Services Description Working Group comment list, and they have split these comments into three issues on their official issue list ([CR052] [CR053] [CR054]).

I hope they can take these issues into account, even if that means updating my chapter!

Some resources I have found most helpful while I was writing this chapter are:

It’s been fun so far and I look forward to seeing this book “for real”.


Liberalism has no future

Last night I went to a conference-debate where Guillaume Duval presented his book entitled “le libéralisme n’a pas d’avenir” (liberalism has no future).

It’s a book I had started to leaf through, and its thesis is that liberalism carries within itself the seeds of certain failure, for several reasons:

  • To develop, liberalism (which Duval calls the “market system”) needs an infrastructure that is generally provided by the state (the “non-market” system). Liberalism can therefore only emerge and sustain itself on the foundations of a non-liberal system.
  • The market pushes companies toward concentration (Duval shows, figures in hand, that competition is a source of costs), and this logic tends to create new monopolies that run against the very principles of liberalism.

The thesis is interesting and credible. Guillaume Duval is a convincing speaker who has a gift for finding figures that speak for themselves and for linking his theses to current events known to everyone. On the other hand, I found the debate less successful: Duval often seemed to fall back on the demonstrations from his book rather than trying to really understand the questions.

I am far from a specialist on the subject, but I wonder whether the main reason liberalism may not have the future some predict for it is rather that its main promoters seem to have stopped believing in it and applying it (assuming they ever really did).

From many angles, the Bush administration seems the least liberal the United States has seen in a long time. We saw it during the steel crisis, and we see today the protectionist temptations to shield the software industry from outsourcing. Moreover, the (current) defense budget and the (announced) space research budget constitute a notable expansion of the “non-market” sector, and the central bank of the United States seems more concerned with the financial health of American companies than with respect for the rules of the international market.

To the point that one may wonder whether “old Europe” is not the only one naively playing the card of pure, hard-line, dogmatic liberalism, at the risk of scoring an own goal!


Rouge Brésil

I have just finished Rouge Brésil by Jean-Christophe Rufin, a book whose style, characters, and themes reminded me of the Robert Merle novels that marked my adolescence.

Its characters are not always very nuanced, but it is a captivating read, and there is no reason to deprive yourself of it, all the more so since it gives us the chance to discover a little-known episode of the discovery of the Americas, along with a reflection on cannibalism at the opposite end from Jules Verne’s!
