Using XSLTUnit with XSLT 2.0

The latest version of XSLTUnit (v0.2) is from January 2002 and people sometimes ask me if the project is dead and when XSLTUnit will support XSLT 2.0.

The short answer is that you can already use XSLTUnit with XSLT 2.0.

The project is not dead, and if I haven’t published any new version it’s just because XSLTUnit meets my needs and nobody has ever asked me for an update.

I use it a lot and, just this afternoon, came across a new opportunity to use it to add unit tests to a function that I needed to debug for the Owark project.

Following up on ideas for a web service that creates page archives, I was writing an XSLT transformation that analyses Heritrix crawl logs to determine what needs to be packaged into the archives, and one of the touchy functions is the one that creates user-friendly local names that remain unique within the scope of an archive.

My first naive attempt didn’t survive real-world tests and the results were rather disappointing.

Using these results as a test suite was an obvious idea…

To do so, I used log.xml, a real crawl log converted to XML, as my source document and local-names.xml, the result of the transformation, as a reference.
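
To make the setup easier to follow, here is a minimal sketch of the shape these two documents appear to have, inferred from the XPath expressions used in the test suite below (the sample values are hypothetical and the real files obviously contain more entries and more fields):

<!-- log.xml: one entry per crawled resource (shape inferred, hypothetical values) -->
<log>
  <entry>
    <uri>https://blog.eric.van-der-vlist.com/</uri>
    <!-- other fields from the Heritrix crawl log -->
  </entry>
</log>

<!-- local-names.xml: the expected local name for each resource (shape inferred) -->
<index>
  <resource>
    <uri>https://blog.eric.van-der-vlist.com/</uri>
    <local-name>index.html</local-name>
  </resource>
</index>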

The XSLTUnit transformation that exploits these documents, local-names.xsl, is very simple and is a nice example of what you can do with this framework and of how you can use it with XSLT 2.0.

An XSLTUnit test suite is an XSLT transformation that imports both the transformation to test and xsltunit.xsl:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:exsl="http://exslt.org/common" extension-element-prefixes="exsl" xmlns:xsltu="http://xsltunit.org/0/"
  xmlns:owk="http://owark.org/xslt/" exclude-result-prefixes="exsl">
  <xsl:import href="../actions/resource-index.xslt"/>
  <xsl:import href="xsltunit.xsl"/>

.../...

</xsl:stylesheet>

Here, the test suite and the XSLT transformation under test are both XSLT 2.0 and, with Saxon, they work just fine with xsltunit.xsl, which is XSLT 1.0.

An XSLTUnit test case is composed of comparisons, and a test case checking that the value of the owk:local-name() function on a specific log entry is equal to « index.html » could be:

      <xsltu:test id="index">
        <xsl:call-template name="xsltu:assertEqual">
          <xsl:with-param name="id" select="'index'"/>
          <xsl:with-param name="nodes1">
            <name>
              <xsl:value-of select="owk:local-name(/log/entry[uri='https://blog.eric.van-der-vlist.com/'])"/>
            </name>
          </xsl:with-param>
          <xsl:with-param name="nodes2">
            <name>index.html</name>
          </xsl:with-param>
        </xsl:call-template>
      </xsltu:test>

This crawl log includes 292 resources and you wouldn’t want to copy and paste this snippet 291 times… No problem: you can just use some XSLT power and write:

 <?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:exsl="http://exslt.org/common" extension-element-prefixes="exsl" xmlns:xsltu="http://xsltunit.org/0/"
  xmlns:owk="http://owark.org/xslt/" exclude-result-prefixes="exsl">
  <xsl:import href="../actions/resource-index.xslt"/>
  <xsl:import href="xsltunit.xsl"/>
  <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
  <xsl:variable name="local-names" select="doc('local-names.xml')/index"/>
  <xsl:key name="log-by-uri" match="/log/entry" use="uri"/>
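  <!-- Assumption: $source, used in the key() call below, is defined by the
       imported resource-index.xslt and refers to the crawl log document, so
       that key() searches log.xml rather than the local-names.xml document
       that provides the for-each context. -->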
  <xsl:template match="/">
    <xsltu:tests>
      <xsl:for-each select="$local-names/resource">
        <xsltu:test id="{uri}">
          <xsl:call-template name="xsltu:assertEqual">
            <xsl:with-param name="id" select="uri"/>
            <xsl:with-param name="nodes1">
              <local-name>
                <xsl:value-of select="owk:unique-local-name(key('log-by-uri', current()/uri, $source ))"/>
              </local-name>
            </xsl:with-param>
            <xsl:with-param name="nodes2">
              <xsl:copy-of select="local-name"/>
            </xsl:with-param>
          </xsl:call-template>
        </xsltu:test>
      </xsl:for-each>
    </xsltu:tests>
  </xsl:template>
</xsl:stylesheet>

When you’ve done so, you can edit « local-names.xml », which contains the expected values, so that they look more like what you want, run the test suite to detect the differences, update your transformation accordingly and iterate.

The shock of the photos

Don’t look for any reference to ecology or the environment in François Hollande’s official campaign clip: there is none.

The closest you will get are seven fleeting sequences, 5 seconds in total, when he talks about France:

  • A tractor ploughing a huge field
  • A man pushing a beef carcass in a slaughterhouse
  • Another man pushing a cart of perfectly calibrated apples in a huge hall, probably a hypermarket
  • A lab technician at work
  • An Airbus A380
  • A TGV
  • A viaduct, probably the Millau viaduct

They go by very fast and I captured them so that they can be looked at more easily:

Why choose these images?

Why run them at a subliminal pace?

Where is the change?

Judge for yourself: these seven shots appear between the 45th and the 50th second:

Official campaign clip of François Hollande, by francoishollande

Marvelous clouds, no bullshit

– Eh! What do you love then, extraordinary stranger?
– I love the clouds… the clouds that pass by… over there… over there… the marvelous clouds!

Charles Baudelaire, Petits poèmes en prose

Clouds are elusive and fascinating.

They are incredibly varied and, from cirrus to stratus, they sit at very different altitudes.

The very notion of a cloud is a matter of point of view: if you have ever hiked in the mountains, you will have noticed that you only need to climb for the clouds to become fog.

The cloud computing metaphor has its limits, like any metaphor, but it shares these two characteristics.

The notion of cloud computing is a matter of point of view: for the « end » user, any web site or application is in the cloud, since they only very exceptionally know which machine hosts it. For the administrator of the web site, on the other hand, the site is only « in the cloud » if it is hosted on a virtual machine!

Cloud computing can also sit at very different heights, between the stratus that is the virtual machine you administer like a real machine and the cirrus that are software as a service, not forgetting the cumulonimbus that aim to fill every slot.

I let myself be tempted by a very small stratus and have just migrated the sites I administer to virtual machines at Gandi.

Why a stratus? In the same way that I like baking my own bread or repairing my own roof, I like installing and administering the computing tools I use. While IT specialists tend to become more and more specialized, I see this as a way of keeping a minimum of general computing culture! I therefore prefer to administer a machine (virtual or not) rather than use software as a service.

Why a small stratus? Because I have always preferred small structures to big groups!

Why Gandi? I am grateful to Gandi for getting me out of the monopolistic claws of Network Solutions, dropping the price of a domain from $70 to €12 along the way! For more than 10 years I have appreciated this company’s service, culture and « No Bullshit » slogan.

So I have migrated my three dediboxes to Gandi virtual servers.

Don’t be misled by the numbers: the €14.99 dedibox dedicated server is much more powerful than the €12 Gandi server share, and my bill at Gandi is higher than it was at dedibox.

Why this migration?

After somewhat difficult beginnings (my first dediboxes froze very frequently), dediboxes have become very reliable, but they are still physical machines that age, and Online renews them roughly every three years. This means reinstalling your servers every three years.

Likewise, these machines are not scalable and there is no question of adding memory, storage space or CPU if you need more.

Virtual servers, on the contrary, are virtually eternal: unless Gandi makes a handling error, there is no risk that a virtual machine will « age ».

They are also very flexible: you can very simply add memory, computing power, storage space or network interfaces. Most of these operations can even be done on the fly, without rebooting the server.

After a few weeks, what is the verdict?

I had only had the opportunity to appreciate Gandi’s support for domain registrations. I can now tell you that it is just as responsive for hosting. When you contact support through the web interface, a checkbox lets you indicate that your server is down, which allows support to treat your case as a priority.

During the first three weeks after my first server went into production, disk access froze three times, for two to three hours each time. Gandi support quickly reported the problem, indicating that « disk access was slowed down ». For my part, I would rather have said « frozen » than slowed down (No Bullshit!): the virtual machine was completely blocked, with services (HTTP, SSH, SMTP, IMAP, …) no longer responding at all.

Gandi seems to have identified and fixed the problem, and since then everything has been working very well.

Disk write performance is often a bit weak, around 60 MB/s, but tonight it looks fine:

vdv@community:/tmp$ dd if=/dev/zero of=test.file bs=1024k count=512
512+0 enregistrements lus
512+0 enregistrements écrits
536870912 octets (537 MB) copiés, 5,01527 s, 107 MB/s

So far everything is going well, and the two criticisms I could make are administrative.

The administration interface on the site is pleasant to use, but input errors are too rarely documented: most of the time the offending field is flagged, but no error message explains how to correct it. In the most complex cases this ends up as a message to technical support, which is a lot of wasted time for the support team as well as for the user.

Gandi lets you acquire resources without any commitment and this scheme is very flexible, but each acquisition results in a separate invoice (the same goes for domain renewals if you enable automatic renewal). I therefore end up with dozens of small invoices, some for only a few euros, which will be a nightmare to process! Why not offer to group these amounts into a single monthly invoice?

I am now looking forward to taking advantage of the flexibility this scheme gives me.

That could happen with the next upgrade of the operating system I use (Ubuntu).

For my servers, I prefer to use the LTS (Long Term Support) versions, which are published every two years. The difference between two versions is significant and upgrades are often painful.

I haven’t yet looked in detail at how to do it, but I intend to « clone » my servers’ virtual machines and perform the upgrades on the clones while leaving the originals in service. This should let me do the upgrade and test it without interrupting the service.

To be continued…

Putting websemantique.org to sleep

For several years now, most of the « contributions » on the websemantique.org site have come from spammers! The information it contains is no longer fresh and the site reflects neither the vitality nor the state of the art of the semantic web.

I have therefore decided (with the agreement of the founding team) to redirect all the pages of the site to the « semantic web planet », which I continue to appreciate as a monitoring tool.

I will likewise redirect the pages of http://smob.websemantique.org/ (which has not been used since 2009) to the planet.

Thank you all for your contributions!

More musings on XDM 3.0

Note: these are musings and should not be taken as a concrete proposal!

Balisage’s tag line is « there is nothing so practical as a good theory ». I love Balisage, but my brain doesn’t really work like that and my tag line would rather be « there is nothing as theoretical as a good exercise ».

χίμαιραλ (chimeral) is the kind of exercise I needed to get into XDM 3.0 (and probably even 2.0 that I had been using without really thinking about it) and I’d like to share the musings that this exercise has inspired.

For XDM, the most generic piece of information is called an « item ».

XDM 2.0 introduced a distinction between two kinds of items:

  • Nodes, which come directly from the XML infoset and borrow some properties from the PSVI.
  • Atomic values, which are « pure » values living their own lives independently of their occurrences in XML documents as text or attribute node values.

Coming straight from the XML infoset, nodes are rather concrete, and you can still smell the (electronic) ink of their tags… Each node is unique and lives in its own context: when you write:

<?xml version="1.0" encoding="UTF-8"?>
<root>
  <foo>5</foo>
  <foo>5</foo>
</root>

The two <foo/> elements are two different nodes. They may look identical and have the same content, but they are still two different nodes, just as identical twins are different persons.

Their value, by contrast, is the same value: a 5 is a 5, and whether it’s the value of the first or of the second <foo/> (or of a @bar attribute) doesn’t make any difference.
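
A quick sketch, using the small document above, shows the difference: the node identity operator tells the two elements apart, while a value comparison on their (untyped) content does not.

/root/foo[1] is /root/foo[2]   (: false() : two distinct nodes :)
/root/foo[1] = /root/foo[2]    (: true() : both atomize to the value 5 :)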

The fact that values are shared between the places where they are used is very common among programming languages. In Fortran IV, the first programming language I ever used, these values were not well write-protected (you could manage to assign the value 6 to the constant 5), leading to dreadful bugs, and I remember once taking up the challenge of writing a program that used values as variables!

XDM 2.0 also introduced the notion of « sequences ». Sequences are really special: they are not considered items of the model but are just a kind of invisible bag used to package items together, and they are useful because you can store them in variables and use them as parameters.

Sequences have some interesting magic…

They disappear by themselves when they pack only one item and there is no way to differentiate a sequence of one item from the item itself. XDM has invented a perfectly biodegradable bag!

They also disappear when you put them within another sequence, and you can’t have sequences of sequences. You’d better think twice if you want to separate apples from oranges (or meat from cheese) before packaging them in a sequence!

You may have heard that XPath/XSLT 2.0 abolished the difference between node sets and result tree fragments. That’s true, but not always in the way one would have expected!

Assuming that your input document is the one mentioned above, the equivalent of an XSLT 1.0 node set could be:

<xsl:variable name="nodeset" select="/root/foo"/>

And the equivalent of an XSLT 1.0 result tree fragment would be:

<xsl:variable name="rtf">
  <xsl:copy-of select="/root/foo"/>
</xsl:variable>

Of course, the nodes in $rtf are copies of the original nodes and $nodeset = $rtf will return false().

But why does deep-equal($nodeset,$rtf) also return false()? And why does count($nodeset) return 2 when count($rtf) returns 1?

Forget what you’ve been told about there no longer being any difference between node sets and result tree fragments: these two variables are two different beasts…

$nodeset is a sequence composed of the two elements <foo/> from the input document (the elements themselves, not their copies) while $rtf is a document node with two <foo/> child elements copied from the input document.

Both are sequences of nodes, but that’s all they have in common!
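
To make the difference concrete, here is a small sketch, assuming the two variable definitions above; the comparisons start behaving as you would expect once you select the copied <foo/> children of $rtf rather than $rtf itself:

count($nodeset)                  (: 2 : the two original elements :)
count($rtf)                      (: 1 : a single document node :)
count($rtf/foo)                  (: 2 : its two copied children :)
deep-equal($nodeset, $rtf/foo)   (: true() : same names, same content :)
$nodeset[1] is $rtf/foo[1]       (: false() : the copies have their own identity :)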

XDM 3.0 adds a third type of item: functions, which become first-class objects, turning XPath/XQuery/XSLT into functional languages.

Michael Kay’s proposal adds a fourth item type: maps (and arrays, considered as maps with integer keys). Maps being a brand new type of item, the choice is open: they could have been made similar to XML nodes, but the current proposal is to treat them as pure values.

In my previous post I have explained the consequences of this important design decision and I’d like to take a step backward and analyze the differences between values and nodes.

Even before discovering Fortran IV, I studied mathematics and geometry, and I was fascinated by the dual approach to solving problems in geometry using either Euclidean vectors or points and segments.

In the current proposal, a map is like a vector: you can reuse it as many times as you like in other map entries and it will remain unique. A node, on the contrary, is like a segment: it has an identity, is bound to a parent, and if you want to reuse it you need to copy and paste it.

In geometry, vectors and segments are both useful and complementary. I can understand that there is a need for both nodes and values, but why should maps always be values and nodes always be « concrete » nodes? Both are useful data structures; why should they be treated so differently?

I understand that there are (at least) two use cases for maps: to support JSON and to provide lightweight data structures. These two use cases look very different to me. By trying to meet them both with a single feature, won’t we miss both points? Why should lightweight structures be limited to maps, and why should maps always be lightweight?

In geometry the processes by which you create a vector from a segment or a segment from a vector by pinning one of its extremities are well known. Can’t we define similar processes to transform a « concrete » data structure (either node tree or map) into a pure value and vice versa?

This is not so uncommon for programmers: values can also be seen as classes (with class properties) and concrete structures as instantiations of these classes.

Now, what about chimeras mixing pure values and concrete nodes?

A handy feature of sequences and maps as currently proposed is that they can include nodes. This « inclusion » is done « by reference » and the nodes keep their identity and context. Maps using this feature are « pure values » with references to concrete nodes.

When a map entry is a node, the node keeps its context within a document and its parent remains its parent in this document. How can you do that if you want to also represent the reverse relation between the node and the map entry in which it has been added?

One option could be to define a mechanism similar to symbolic links on Linux: the link is a kind of shortcut and when you follow it you can’t always guarantee that you’ll be able to come back in the same directory. Adapted to the XDM, we wouldn’t store the parent/child relation between map entries and nodes. This would be a limitation, but we wouldn’t meet this limitation when dealing with maps de-serialized from JSON objects (JSON objects do not contain XML nodes).

Another option could be to « decorate » or « re-instantiate » the node so that the new instance has two parents, probably not with the same kind of parent/child relation but with a new kind that could be followed with a different axis. This decoration would add new context information to a tree and would be very similar to the process by which values are turned into concrete nodes. Now, would that be a practical thing to do? What about adding a map containing a node as a new entry in another map? Don’t we end up with nodes that can have an indefinite number of ancestors?

I hope that these musings can be helpful, but I should probably stick to my role of XDM user rather than giving suggestions!

To summarize: I think that we need both lightweight map structures and the full set of XPath axes on maps de-serialized from JSON objects.

Having only lightweight map structures means that users (and probably other specs and technologies) will have to continue to define custom mappings between JSON and XML to perform serious work.

XDM Maps should be first class citizens

Note: This issue has been submitted to the W3C as #16118.

The XPath/XQuery/XSLT 3.0 Data model distinguishes three types of information items:

  • Nodes that directly relate to the XML Infoset with some information borrowed from the PSVI.
  • Functions
  • Atomic values.

Michael Kay has recently proposed to add maps as a fourth item type derived from functions.

The main motivation for this addition is to support JSON objects, which can be considered a subset of map items.

However, in the current proposal map items are treated very differently from XML nodes and this has deep practical consequences.

Take for instance the following simple JSON sample borrowed from Wikipedia:

{
     "firstName": "John",
     "lastName" : "Smith",
     "age"      : 25,
     "address"  :
     {
         "streetAddress": "21 2nd Street",
         "city"         : "New York",
         "state"        : "NY",
         "postalCode"   : "10021"
     },
     "phoneNumber":
     [
         {
           "type"  : "home",
           "number": "212 555-1234"
         },
         {
           "type"  : "fax",
           "number": "646 555-4567"
         }
     ]
 }

To get the postalCode from an equivalent structure expressed as XML and stored in the variable $person, one would just use the following XPath expression: $person/address/postalCode.

When the same structure is expressed in JSON and parsed into an XDM map, XPath axes can no longer be used (their purpose is to traverse documents, i.e. nodes) and we need to use map functions: map:get(map:get($person, 'address'), 'postalCode').

That’s not as bad as it sounds, because maps can be invoked as functions and this can be rewritten as $person('address')('postalCode'), but this gives a first idea of the deep differences between maps and nodes, and things would become worse if I wanted to get the postal code of persons whose first name is « John »…
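
To give an idea, here is a sketch of that query written with the map functions used above, assuming a hypothetical variable $persons bound to a sequence of such person maps; with an equivalent XML structure, a single path expression like $persons/person[firstName = 'John']/address/postalCode would do the same job:

for $person in $persons
return
  if (map:get($person, 'firstName') = 'John')
  then map:get(map:get($person, 'address'), 'postalCode')
  else ()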

Another important difference is that node items are the only ones that have a context or an identity.

When I write <foo><bar>5</bar></foo><bat><bar>5</bar></bat> each of the two bar elements happens to have the same name and value, but they are considered two different elements, and even the two text nodes that are their children are two different text nodes.

When I write foo: {bar: 5}, bat: {bar: 5} the two bar entries are actually the same thing and can’t be distinguished.

This difference is important because it means that XPath axes as we know them for nodes could never be implemented on maps: if an entry in a map can’t be distinguished from an identical entry elsewhere in another map, there is no hope of being able to determine its parent, for instance.

Now, why is it important to be able to define axes on maps and map entries?

I think it is important for XSLT and XQuery users to be able to traverse maps the way they traverse XML fragments (with the same level of flexibility and with syntaxes kept as close as possible). And yes, that means being able to apply templates over maps and to update maps using XQuery Update…

But I also think that this will be important to other technologies that rely on XPath such as (to name those I know best) XForms, pipeline languages (XPL, XProc, …) and Schematron.

Being able to use XForms to edit JSON objects is an obvious need that XForms 2.0 is trying to address through a « hack » that was presented at XML Prague 2012.

In the longer term we can hope that XForms will abandon this hack and rely on XDM maps, but XForms relies a lot on the notions of nodes and axes. XForms binds controls to instance nodes, and the semantics of such bindings would be quite different if they were applied to XDM map entries as currently proposed.

XML pipeline languages are also good candidates to support JSON objects. Both XPL and XProc have features to loop over document fragments and choose actions depending on the results of XPath expressions and again the semantics of these features would be affected if they had to support XDM maps as currently proposed.

Schematron could be a nice answer to the issue of validating JSON objects. Schematron relies on XPath at two different levels: its rules are defined as XPath expressions (where it is often very convenient to be able to use XPath axes such as ancestor) and its processing model is defined in terms of traversing a tree. Again, an update of Schematron to support maps would be more difficult if maps are not similar to XML nodes.

Given the place of JSON on the web, I think that it is really important to support maps and the question we have to face is: « do we want a minimal level of support that may require hard work from developers and other standards to support JSON or do we want to make it as painless as possible for them? ».

And obviously, my preference is the latter: if we add maps to the XDM, we need to give them full citizenship from the beginning!

Note: The fact that map entries are unordered (and they need to be, because the properties of JSON objects are unordered) is less of an issue to me. We already have two node types (namespace nodes and attributes) whose relative order is « stable but implementation-dependent ».

 

Introducing χίμαιραλ (chimeral), the Chimera Language

In the presentation I gave at XML Prague 2012 (see my paper), one of my conclusions was that the XML data model extended by the XPath/XQuery/XSLT Working Group to embrace other data models such as JSON was an important foundation of the whole XML ecosystem.

In her amazing keynote, Jeni Tennison warned us against chimeras, “ugly, foolish or impossible fantasies”, and I thought it would be useful to check to what extent the XPath/XQuery/XSLT 3.0 data model (aka XDM 3.0) deserves to be called a chimera.

The foundation of this data model is the XML infoset, but it also borrows information items from the Post Schema Validation Infoset (the [in]famous PSVI) and adds its own abstract items such as sequences and, new in 3.0, functions and maps (needed to represent JSON objects).

I started to think more seriously about this, doing some research and writing a proposal for Balisage, and my plan was to wait until the conference before publishing anything.

One of the things I planned to present is a simple XML serialization format for the XDM. My initial motivation for proposing such a format was to have a visualization of the XDM: I find it difficult to picture it if its instances stay purely abstract and can’t be serialized and deserialized.

Working on this, I soon discovered that this serialization can have other concrete benefits: the items that have been recently added to the XDM, such as maps and even sequences, are not treated as first-class citizens by XPath/XQuery/XSLT, and the data model can be easier to traverse through its serialization!

When, for instance, you have a complex map imported from JSON by the brand new parse-json() function, you can’t easily apply templates to the map items and sub-items. With an XML serialization, of course, that becomes trivial to do.

If such a serialization can be useful, there is no reason to wait until Balisage in August to discuss it and I’d like to introduce the very first version of  χίμαιραλ (chimeral), the Chimera Language.

The URL itself http://χίμαιραλ.com is a chimera composed of letters from two different alphabets and merging concepts from two different civilizations!

This first version is not complete. It already supports rather complex cases, but I need to think more about how to deal with maps or sequences of nodes such as namespace nodes or attributes.

So far I am really impressed by XPath 3.0, but also surprised by many limitations in terms of reflection:

  • No built-in function to determine the basic type of an item (node, attribute, sequence, map, function, …).
  • The dm:node-kind() accessor to determine the kind of a node is abstract and XPath 3.0 does not expose it.
  • The behavior of the exslt:object-type() function is surprising.

I may have missed something, but in practice I have found it quite difficult, given a variable, to browse its data model.

The other aspect that I don’t like in XPath/XQuery/XSLT 3.0 is the lack of homogeneity in the way the different types of items are manipulated. This strengthens the feeling that we have a real chimera!

In XSLT, for instance, I’d like to be able to apply templates and match items in the same way for any item type. Unfortunately, the features that are needed to do so (node tests, axes, …) are reserved for XML nodes. I can’t define a template that matches a map (nor a sequence, by the way), and I can’t apply templates over map items…

It may be too late for version 3.0, but I really think that we should incorporate these recent additions to make them first class citizens!

Going forward, we could reconsider the way these items mix and match. Currently you can have sequences of maps, functions, nodes and atomic values, and maps whose values are sequences, functions, nodes and atomic values, but nodes are only composed of other nodes. Even if the XML syntax doesn’t support this, I would really like to see more symmetry and be able to add sequences and maps within nodes!

In other words, I think that it would be much more coherent to treat maps and sequences like nodes…

Note: The χίμαιραλ website is currently « read only », but comments are very welcome on this blog or by mail.

 

XML Prague 2012: The web would be so cool without the web developers

Note: XML Prague is also a very interesting pre-conference day, a traditional dinner, posters, sponsor announcements, meals, coffee breaks, discussions and walks that I have not covered in this article for lack of time.

When I was a child, I used to say that I felt Dutch when I was in France and French when I was in the Netherlands. It was nice to feel slightly different and I liked to analyze the differences between Dutch people, who seemed to me more adult and civilized, and French people, who seemed to me more spontaneous and fierce.

I found this old feeling of being torn between two different cultures again, very strongly, this weekend at XML Prague. Of course, it was no longer between French and Dutch but between the XML and Web communities.

The conference also reminded me of the old joke about the Parisian visiting Corsica and saying « Corsica would be so cool without Corsicans! », and for me the tag line could have been « the web would be so cool without web developers! ».

Jeni Tennison’s amazing opening keynote was of course more subtle than that!

She started by acknowledging that the web was split into no less than four different major formats: HTML, JSON, XML and RDF.

Her presentation was a set of clever considerations on how we can deal with these different formats and cultures, concluding that we should accept the fact that « the web is varied, complex, dynamic, beautiful ».

I then gave my talk « XML, the eX Markup Language » (see also my paper on this blog), where I analyzed the reasons for the failure of XML to become the one major web format and gave my view on where XML should be heading.

While Jeni had explained why « chimera are usually ugly, foolish or impossible fantasies », my conclusion was that we should focus on the data model and extend or bridge it to embrace JSON and HTML, as the XPath 3.0 data model is proposing to do.

I am still thinking so, but what is such a data model if not a chimera? Is it ugly, foolish or impossible then? There is a lot to think about beyond what Hans-Jürgen Rennau and David Lee have proposed at Balisage 2011 and I think I’ll submit a proposal at Balisage 2012 on this topic!

Robin Berjon and Norman Walsh then tried to bridge the gap with their presentation « XML and HTML Cross-Pollination: A Bridge Too Far? », an interesting talk where they tried to show how interesting ideas could be shared between these two communities: « forget about angle brackets, look at the ideas ». This gave a nice list of things that do work in the browser (did you know that you can run JavaScript against any XML document?) and fascinating new ideas such as JS-SLT, a JavaScript transformation library, or CSS-Schema, a CSS-based schema assertion language.

Anne van Kesteren had chosen a provocative title for his talk: « What XML can learn from HTML; also known as XML5 ». Working for Opera, Anne was probably the only real representative of the web community at this conference. Under that title, his presentation was an advocacy for relaxing the strictness of the XML parsing rules and defining an error recovery mechanism in XML like the one that exists in HTML5.

His talk was followed by a panel discussion on HTML/XML convergence, and this subject of error recovery monopolized the whole panel! Some of the panelists (Anne van Kesteren, Robin Berjon and myself) were less hostile, but the audience unanimously rejected the idea of changing anything in the well-formedness rules of the XML recommendation.

Speaking of errors may be part of the problem: errors have a bad connotation and if a syntactical construct is allowed by the error recovery mechanism with a well defined meaning, why should we still consider it an error?

However, a consensus was found to admit that it could be useful to specify an error recovery mechanism that could be used when applications need to read non well formed XML documents that may be found on the wide web. This consensus has led to the creation of the W3C XML Error Recovery Community Group.

The reaction of the room, which refused to even consider a discussion on what XML well-formedness means, seems rather irrational to me. Michael Sperberg-McQueen reinforced this feeling in his closing keynote when he pleaded for defining this as « a separate add-on rule rather than as a spec that changes the fundamental rules of XML ».

What can be so fundamental about the definition of XML well-formedness? These reactions made me feel as if we were discussing kashrut rules rather than parsing rules, and the debate often looked more religious than technical!

The next talk, XProc: Beyond application/xml by Vojtěch Toman, was again about bridging technologies but was less controversial, probably because the technologies to bridge with were not seen as XML competitors.

Taking a look at the workarounds used by XML pipelines to support non-XML data (either encoding the data or storing it out of the pipeline), Vojtěch proposed to extend the data model flowing in the pipelines to directly support non-XML content. That kind of proposal looks so obvious and simple that you wonder why it hasn’t been done before!

George Bina came next to present Understanding NVDL – the Anatomy of an Open Source XProc/XSLT implementation of NVDL. NVDL is a cool technology to bridge different schema languages and greatly facilitates the validation of compound XML documents.

Next was Jonathan Robie, presenting JSONiq: XQuery for JSON, JSON for XQuery. JSONiq is both a syntax and a set of extensions to query JSON documents in an XQuery flavor that looks like JSON. Both the syntax and the extensions look elegant and clever.

The room was usually very quiet during the talks, waiting for the Q&A sessions at the end of each talk to ask questions or give comments, but as soon as Jonathan displayed the first example, Anne van Kesteren couldn’t help gasping: « what? arrays are not zero based! »

Having put JSON clothes on top of an XPath data model, JSONiq has a base index equal to one for its arrays while JavaScript and most programming languages use zero for their base indexes.

Proposing zero based arrays inside a JSONic syntax to web developers is like wearing a kippah to visit an orthodox Jew and bring him baked ham: if you want to be kosher you need to be fully kosher!

Norman Walsh came back on stage to present Corona: Managing and querying XML and JSON via REST, a project to « expose the core MarkLogic functionality—the important things developers need— as a set of services callable from other languages » in a format-agnostic way (XML and JSON can be used interchangeably).

The last talk of this first day was given by Steven Pemberton, Treating JSON as a subset of XML: Using XForms to read and submit JSON. After a short introduction to XForms, Steven explained how the W3C XForms Working Group is considering supporting JSON in XForms 2.0.

While Steven was speaking, Michael Kay tweeted what many of us were thinking: « Oh dear, yet another JSON-to-XML mapping coming… ». Unfortunately, until JSON finds its way into the XML data model, every application that wants to expose JSON to XML tools has to propose a mapping!

The first sessions of the second day were devoted to Jonathan Robie and Michael Kay presenting What’s New in XPath/XSLT/XQuery 3.0 and XML Schema 1.1.

A lot of good things indeed! XML Schema 1.1 in particular will correct the biggest limitations of XML Schema 1.0 and borrow some features from Schematron, making XML Schema an almost decent schema language!

But the biggest news is for XPath/XSLT/XQuery 3.0, which brings impressive new features that will turn these languages into fully functional programming languages. And of course new types in the data model to support the JSON data model.

One of these new features is annotations, and Adam Retter gave a good illustration of how they can be used in his talk RESTful XQuery – Standardised XQuery 3.0 Annotations for REST. XQuery being used to power web applications, these annotations can define how stored queries are associated with HTTP requests, and Adam proposes to standardize them to ensure interoperability between implementations.

For those of us whose heads were not spinning yet, Alain Couthures came to explain how he is Compiling XQuery code into Javascript instructions using XSLT 1.0 for his XSLTForms implementation. If we can use XSLT 1.0 to compile XQuery into JavaScript, what are the next steps? XSLT 2.0?

After lunch, Evan Lenz came to present Carrot, « an appetizing hybrid of XQuery and XSLT », which was first presented at Balisage 2011. This hybrid is not a chimera but a nice compromise for those of us who can’t really decide whether they prefer XSLT or XQuery: Carrot extends the non-XML syntax of XQuery to expose the templating system of XSLT.

It can be seen as yet another non-XML syntax for XSLT or as a templating extension for XQuery, and it borrows the best features of both languages!

Speaking of defining templates in XQuery, John Snelson came next to present Transform.XQ: A Transformation Library for XQuery 3.0. Taking advantage of the functional programming features of XQuery 3.0, Transform.XQ is an XQuery library that implements templates in XQuery. These templates are not exactly similar to XSLT templates (the priority system is different) but, as in XSLT, you’ll find template definitions, apply-templates methods, modes, priorities and other goodies.

Java had not been mentioned yet, and Charles Foster came to propose Building Bridges from Java to XQuery. Based on the XQuery API for Java (XQJ), these bridges rely on Java annotations to map Java classes to XQuery stored queries, and of course POJOs are also mapped to XML to provide a very sleek integration.

The last talk was a use case by Lorenzo Bossi presenting A Wiki-based System for Schema and Data Evolution, providing a good summary of the kind of problems you face when you need to update schemas and corpora of documents.

Everyone was then holding their breath waiting for Michael Sperberg-McQueen’s closing keynote, which was brilliant as usual, almost impossible to summarize, and should be watched on video!

Michael chose to use John Amos Comenius as an introduction for his keynote. Comenius was the last bishop of the Unity of the Brethren and became a religious refugee. That gave Michael an opportunity to call for tolerance and diversity in document formats as in real life. Comenius was also one of the earliest champions of universal education, and in his final conclusion Michael pointed out that structured markup languages are the new champions of this noble goal.

Of course, there was much more than that in his keynote, Michael taking care to mention each presentation, but this focus on Comenius confirmed my sense of a religious feeling toward XML.

I agree with most of what Michael said in his keynote, except maybe when he seems to deny that XML adoption can be considered disappointing. When he says that the original goal of XML, to be able to use SGML on the web, has been achieved because he, Michael Sperberg-McQueen, can use XML on his web sites, that’s true of course, but was the goal really to allow SGML experts to use SGML on the web?

It’s difficult for me to dissent because he is the one who was involved in XML at a time when I had never even heard of SGML, but I would still argue that SGML was already usable on the web by SGML experts, and that I don’t understand the motivation for the simplification that gave birth to XML if it was not to lower the barrier to entry so that web developers could use XML.

The consequences of this simplification have been very heavy: the whole stack of XML technologies had to be reinvented, and SGML experts lost a lot of time before these technologies could be considered to be at the same level as before. And even now, some features of SGML that were stripped out could be very useful for experts on the web, for instance DTDs powerful enough to describe wiki syntaxes.

Similarly, when discussing my talk with Liam Quin during lunch, he said that he had always thought that XHTML would never replace HTML. I have no reason to contradict Liam, but the vision of the W3C Markup Activity was clearly to « Deliver the Web of the Future Today: Recasting HTML in XML », as can be seen in this archive.

It’s not pleasant to admit that we’ve failed, but replacing HTML with XHTML so that XML would become dominant in the browser was clearly the official vision of the W3C, shared by a lot of us, and this vision has failed!

We need to acknowledge that we’ve lost this battle and make peace with the web developers that have won…

Curiously, there seems to be much less aggressiveness toward JSON than toward HTML5 in the XML community, as shown by the number of efforts to bridge XML and JSON. Can we explain this by the fact that many XML purists considered data-oriented XML less interesting and noble than document-oriented XML?

Anyway, the key point is that a very strong ecosystem has been created, with an innovative, motivated and almost religious community and a technology stack which is both modern and mature.

XML, the eX Markup Language?

Note: this article is a copy of the paper that I have presented at XML Prague 2012.

Abstract

Revisiting the question that was the tag line of XML Prague last year: « XML as new lingua franca for the Web. Why did it never happen? », Eric tries to answer other questions such as: « where is XML going? » or « is XML declining, becoming an eX Markup Language? ».

XML as new lingua franca for the Web. Why did it never happen?

This was the tagline of XML Prague 2011, but the question wasn’t really answered last year and I’ll start this talk by giving my view on it.

Flashback

February 1998 is a looong time ago, a date from another century, and for those of you who were not born or don’t remember, here is a small summary of what happened in February 1998:

(A summary of the events of February 1998, excerpted from Wikipedia, appeared here.)

While the Iraq disarmament crisis was raging, the World Wide Web Consortium waited until the third day of the Winter Olympics held in Nagano to make the following announcement:

Advancing its mission to lead the Web to its full potential, the World Wide Web Consortium (W3C) today announced the release of the XML 1.0 specification as a W3C Recommendation. XML 1.0 is the W3C’s first Recommendation for the Extensible Markup Language, a system for defining, validating, and sharing document formats on the Web
W3C Press Release (February 1998)

People curious enough to click on the second link of the announcement could easily double check that beyond the marketing bias XML was something to be used over the Internet:

The design goals for XML are:

  1. XML shall be straightforwardly usable over the Internet.
  2. XML shall support a wide variety of applications.
  3. XML shall be compatible with SGML.
  4. It shall be easy to write programs which process XML documents.
  5. The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
  6. XML documents should be human-legible and reasonably clear.
  7. The XML design should be prepared quickly.
  8. The design of XML shall be formal and concise.
  9. XML documents shall be easy to create.
  10. Terseness in XML markup is of minimal importance.
W3C Recommendation (February 1998)

And the point was reinforced by the man who had led the « Web SGML » initiative and is often referred to as the father of XML:

XML arose from the recognition that key components of the original web infrastructure — HTML tagging, simple hypertext linking, and hardcoded presentation — would not scale up to meet the future needs of the web. This awareness started with people like me who were involved in industrial-strength electronic publishing before the web came into existence.
Jon Bosak

This has often been summarized saying that XML is about « putting SGML on the Web ».

Among the design goals, the second one (« XML shall support a wide variety of applications ») has been especially successful, and by the end of 1998 Liora Alschuler reported that the motivations of the different players pushing XML forward were very diverse:

The big-gun database vendors, IBM and Oracle, see XML as a pathway into and out of their data management tools. The big-gun browser vendors, Netscape and Microsoft, see XML as the e-commerce everywhere technology. The big-gun book and document publishers, for all media, are seeing a new influx of tools, integrators, and interest but the direction XML publishing will take is less well-defined and more contingent on linking and style specs still in the hands of the W3C.
Liora Alschuler for XML.com (December 1998)

One thing these « big-gun » players pushing XML in different directions did achieve was to develop an incredible hype that rapidly covered everything, and by 2001 the situation had become hardly bearable:

Stop the XML hype, I want to get off. As editor of XML.com, I welcome the massive success XML has had. But things prized by the XML community — openness and interoperability — are getting swallowed up in a blaze of marketing hype. Is this the price of success, or something we can avoid?
Edd Dumbill (March 2001)

Marketers behind the hype being who they were, the image of XML that they promoted was so shiny that the XML gurus didn’t recognize their own technology and tried to fight against the hype:

I’ve spent years learning XML / I like XML / This is why www.XmlSuck.com is here
PaulT (January 2001)

The attraction was high and people rushed to participate in the W3C working groups:

Working Group size – so many people means it is difficult to gain consensus, or even know everyone’s face. Conference calls are difficult.
Mark Nottingham, about the SOAP W3C WG (May 2000)

Huge working groups with people pushing in different directions are not the best recipe for publishing high-quality standards and, even though XML itself was already baked, the perception of XML depends on the full « stack »:

This is a huge responsibility for the Schema Working Group since it means that the defects of W3C XML Schema will be perceived by most as defects of XML.
Eric van der Vlist on xml-dev (April 2001)

The hype was so huge that XML geeks rapidly thought that they had won the war and that XML was everywhere:

XML is now as important for the Web as HTML was to the foundation of the Web. XML is everywhere.
connet.us (February 2001)

Why this hype? My guess is that the IT industry had such a desperate need for a data interchange format that any candidate could have been adopted at that time, and that XML happened to be the one that crossed the radar screen at the right moment:

When the wind is strong enough, even flatirons can fly.
Anonymous (February 2012)

The W3C now had to maintain:

  • XML, an SGML subset
  • HTML, an SGML application that did not match the XML subset

Technically speaking, the thing to do was to refactor HTML to meet the XML requirements. Given the perceived success of XML, it seemed obvious that everyone would jump on the XML bandwagon and be eager to adopt XHTML.

Unfortunately from a web developer perspective the benefits of XHTML 1.0 were not that obvious:

The problem with XHTML is: a) it’s different enough from HTML to create new compatibility problems; b) it’s not different enough from HTML to bring significant advantages.
Eric van der Vlist on XHTML-DEV (May 2000)

It is fair to say that Microsoft had been promoting XML since the beginning:

XML, XML, Everywhere. There’s no avoiding XML in the .NET world. XML isn’t just used in Web applications, it’s at the heart of the way data is stored, manipulated, and exchanged in .NET systems.
Rob Macdonald for MSDN (February 2001)

However, despite their strong commitment to XML, Microsoft had frozen new developments on Internet Explorer. The browser has never been updated to support the XHTML media type, meaning that the few web sites using XHTML had to serve their pages as HTML!

By 2001, the landscape was set:

  • XML had become a dominant buzzword giving a false impression that it had been widely adopted
  • Under the hood, many developers were deeply upset by this hype even among the XML community
  • Serving XHTML web pages as such was not an option for most web sites

The landscape was set, but the hype was still high and XML was still gaining traction as a data interchange format.

In the meantime, another hype was growing…

Wikipedia has tracked the origin of the term Web 2.0 back to 1999:

The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. …/… Ironically, the defining trait of Web 2.0 will be that it won’t have any visible characteristics at all. The Web will be identified only by its underlying DNA structure – TCP/IP (the protocol that controls how files are transported across the Internet); HTTP (the protocol that rules the communication between computers on the Web), and URLs (a method for identifying files).

…/…

The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens.

Darcy DiNucci (1999)

The term became widely known with the first Web 2.0 conferences in 2003 and 2004 and XML was an important piece of the Web 2.0 puzzle through Ajax (Asynchronous JavaScript and XML), coined and defined by Jesse James Garrett in 2005 as:

Ajax isn’t a technology. It’s really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

Jesse James Garrett (February 2005)

This definition shows how, back in 2005, some of us still thought that XML could dominate the Web and be used both to exchange documents (in XHTML) and data.

Unfortunately, this vision, defended by the W3C, was rapidly torpedoed by Ian Hickson and Douglas Crockford.

Founded in 1994 for that purpose, the W3C had been the place where HTML was standardized. Among other things, it had been the place where the antagonists of the first browser war could meet and discuss on neutral ground.

By 2004, Netscape had disappeared, Microsoft had frozen the development of their browser and browser innovation had moved into the hands of new players: Mozilla, Apple/Safari and Opera, which was starting to gain traction.

Complaining that the W3C did not meet their requirements and that HTML needed to be updated urgently to meet the requirements of what would soon be known as Web 2.0, they decided to fork the development of HTML:

Software developers are increasingly using the Internet as a software platform, with Web browsers serving as front ends for server-based services. Existing W3C technologies — including HTML, CSS and the DOM — are used, together with other technologies such as JavaScript, to build user interfaces for these Web-based applications. However, the aforementioned technologies were not developed with Web Applications in mind, and these systems often have to rely on poorly documented behaviors. Furthermore, the next generation of Web Applications will add new requirements to the development environment — requirements these technologies are not prepared to fulfill alone. The new technologies being developed by the W3C and IETF can contribute to Web Applications, but these are often designed to address other needs and only consider Web Applications in a peripheral way. The Web Hypertext Applications Technology working group therefore intends to address the need for one coherent development environment for Web Applications. To this end, the working group will create technical specifications that are intended for implementation in mass-market Web browsers, in particular Safari, Mozilla, and Opera.
WHATWG (June 2004)

The W3C was facing a simple choice: either push XHTML recommendations that would never be implemented in any browser, or ditch XHTML and ask the WHATWG to come back and continue their work toward HTML5 as a W3C Working Group. The latter option was eventually chosen and HTML work resumed within the W3C in 2007.

JSON had been around since 2001. It took a few years of Douglas Crockford’s energy to popularize this JavaScript subset but, around 2005, JSON rapidly became a technology of choice as a « Fat-Free Alternative to XML » in Ajax applications.

There is no direct link between HTML5 and JSON but the reaction against XML, its hype and its perceived complexity is a strong motivation in both cases.

Why?

A number of reasons can be found for this failure:

  • Bad timing between the XML and HTML specifications (see Adam Retter’s presentation at XML Amsterdam 2011).
  • Lack of quality of some XML recommendations (XML Namespaces, XML Schema, …).
  • Lack of pedagogy to explain why XML is the nicest technology on earth.
  • Dumbness of web developers who do not use XML.

There is some truth in all these explanations, but the main reason is that from the beginning we (the XML crowd) have been arrogant and overconfident, and have made a significant design error.

When we read this quote:

XML arose from the recognition that key components of the original web infrastructure — HTML tagging, simple hypertext linking, and hardcoded presentation — would not scale up to meet the future needs of the web. This awareness started with people like me who were involved in industrial-strength electronic publishing before the web came into existence.
Jon Bosak

We all understand what Jon Bosak meant and we probably all agree that HTML is limited and that something more extensible makes our lives easier, but we must also admit that we have been proven wrong and that HTML has been enough to scale up to the amazing applications we see today.

Of course, the timing was wrong and everything would have been easier if Tim Berners-Lee had come up with a first version of HTML that was a well-formed XML document, but on the other hand the web had to exist before we could put SGML on the web, and there had to be a prior technology.

In 1998 it was already clear that HTML was widespread, and the decision to create XML as an SGML subset that would be incompatible with HTML was a bad one:

  • Technically speaking, because it meant that millions of existing pages would not be well-formed XML (« the first Google index in 1998 already had 26 million pages »).
  • Tactically speaking because that could be understood as « what you’ve done so far was crappy, now you must do what we tell you to do ».

To avoid this deadly risk, the first design goal of XML should have been that existing valid HTML documents be well-formed XML documents. The result might have been a more complex format and specification, but the risk of creating a gap between the XML and HTML communities would have been minimized.

Another reason to explain this failure is that XML is about extensibility. This is both its main strength and weakness: extensibility comes at a price and XML is more complex than domain specific languages.

Remove the need for extensibility and XML will always lose against DSLs; we’ve seen a number of examples in the past:

  • RELAX NG compact syntax
  • JSON
  • HTML
  • N3
  • CSS

Is it time to refactor XML? Converge or convert?

Hmmm… It’s time to address the questions asked this year by XML Prague!

We’ve failed to establish XML as the format to use on the web, but we’ve succeeded in creating a strong toolbox which is very powerful for powering websites and exchanging information.

I don’t know if it’s to compensate for the ecosystems that we are destroying on our planet, but one of the current buzzwords among developers is « ecosystem »: dominant programming languages such as Java and JavaScript are becoming « ecosystems » that you can use to run a number of applications that may be written in other programming languages.

What we’ve built with XML during the past 14 years is a very strong ecosystem.

The XML ecosystem is based on an (almost) universal data model that can not only represent well formed XML documents but also HTML5 documents and (with an impedance mismatch that may be reduced in future versions) JSON objects.

Note: Notable exceptions that cannot be represented by the XML data model include overlapping structures and graphs.

On top of this data model, we have a unique toolbox that includes:

  • transformation and query languages
  • schema languages
  • processing (pipeline) languages
  • databases
  • web forms
  • APIs for traditional programming languages
  • signature and encryption standards
  • a text based serialization syntax
  • binary serialization syntaxes

We can truly say that what’s important in XML is not the syntax but that:

Angle Brackets Are a Way of Life
Planet XMLHack

Rather than fighting fights that we’ve already lost we need to develop our ecosystem.

The number one priority is to make sure that our data model embraces the web that is taking shape (which means HTML5 and JSON) as efficiently as possible. Rather than converge or convert, we must embrace; the actual syntax is not that important after all!

To grow our ecosystem, we could also consider embracing more data models, such as graphs (RDF), name/value pairs (NOSQL), relations (SQL), overlaps (LMNL).

I am more skeptical about refactoring XML at that stage.

It’s always interesting to think about what could be done better, but refactoring a technology as widespread as XML is tough and needs either to be backward compatible or to provide a huge benefit to compensate for the incompatibilities.

Will we see a proposal that will prove me wrong during the conference?

The time of utopia

I haven’t watched the film yet, but I have just finished reading Les Sentiers de l’Utopie, a book-film by Isabelle Fremeaux and John Jordan.

I really enjoyed this stroll through some of the high places of utopia in Europe, and reading it reinforces my feeling that, to overcome the crisis we are living through, the only realistic solutions are those that get labeled utopian!

Back in 2008, reading the motions submitted for the PS congress in Reims, I was surprised to notice that the only motion that seemed to take the full measure of the crisis and propose proportionate solutions was motion « F » from the Utopia movement.

Utopia has a bad press: the utopias that people tried to apply on a large scale in the twentieth century all ended tragically.

If today utopias are the only ones offering realistic solutions, it is because we are at a turning point where, to save our civilization, we must profoundly change our habits to bring about more equity, something that is usually described as utopian.

Les Sentiers de l’Utopie shows us places that are experimenting, each in its own way, with different modes of interaction. They are not dogmas to be followed blindly but so many laboratories whose discoveries will be precious for finding more general solutions.

To end this post, two images gleaned yesterday evening in a Paris that was about to freeze for the first time this winter, as I was walking to Christian Vélot’s talk on mutated plants at the town hall of the second arrondissement:

  • an open, heated café terrace, where the outdoors is heated so that a few customers can drink their beer outside rather than inside.
  • a few meters away, a homeless man curled up against a backpack, getting ready for what will be a freezing night.

A pointless waste of non-renewable resources that the poorest are completely deprived of: how better to illustrate the crisis into which we are sinking?