Connected objects

It can hardly have escaped your notice that Google is interested in connected objects; in no particular order:

Reading these articles may lead you to think that these connected objects, controlled by Google or another giant of the same kind, will inevitably find their way into our homes.

Yet even if they arrive at our doorstep delivered by Amazon’s drones, it will be the result of our own choice.

It is too easy to lament the power of Google (feel free to replace Google here with Amazon, Microsoft, Apple, Facebook, Carrefour, Intermarché, Leclerc, Casino or your favorite pet peeve) while forgetting that we have a choice and that we are the ones who give these brands their power.

We are free to use (or not to use) the Google search engine, Gmail, Google+, AdSense and Google Analytics on our sites, Google Maps to get around, Google Drive to share our documents, Android and its ecosystem on our phones, tablets and computers, and soon Google’s connected objects, but let’s not complain afterwards that Google knows too much about us.

Personally, I boycott these services (and these brands) as much as possible, but I also happen to use them from time to time, so I won’t set myself up as a moralizer…

Let’s simply remember that alternatives almost always exist, that we always have a choice, and let’s try to exercise our choices responsibly!


An e-nov project

From:     Valerie P <valeriep@outlook.fr>
To:     vdv@dyomedea.com
Subject:     information about our project
Date:     Tue, 14 Jan 2014 03:24:19 +0100

Hello,

I found your contact details on the Internet.
We are looking for a service provider and may wish to consult you about a project.
Please set up a brief phone appointment at the time of your choice (Tuesday or Wednesday)
for further information.

Best regards,
Valerie P
Arcadev

A prospect who writes to me at 3:24 in the morning must have a particularly urgent project, so I hasten to reply.

Most of my clients contact me this way and I have no particular reason to be suspicious, but I like to know who I am dealing with, so I search for “arcadev“. I don’t find much apart from Valérie P’s LinkedIn and Viadeo profiles. Curiously, although her LinkedIn profile shows up in the search results, there is no mention of arcadev in it; according to that profile she is a “Data Center Sales Specialist at Cisco”. No trace of arcadev on societe.com either.

Someone who writes emails at 3 a.m. probably doesn’t get up very early, so I try to find a reasonable time for a phone appointment:

From:     Eric van der Vlist <vdv@dyomedea.com>
To:     Valerie P <valeriep@outlook.fr>
Subject:     Re: information about our project
Date:     Tue, 14 Jan 2014 07:45:17 +0100

Hello,

On Tuesday, January 14, 2014 at 03:24 +0100, Valerie P wrote:
> Hello,
>
> I found your contact details on the Internet.
> We are looking for a service provider and may wish to
> consult you about a project.
> Please set up a brief phone appointment at the time of your
> choice (Tuesday or Wednesday)

Would 11:00 this morning suit you?

Best regards,

Eric van der Vlist

> for further information.
>
> Best regards,
> Valerie P
> Arcadev

The answer comes a little later:

From:     E-Nov Développement <valeriep@outlook.fr>
Reply-to:     <contact@e-prog.fr>
To:     Eric van der Vlist <vdv@dyomedea.com>
Subject:     RE: information about our project
Date:     Tue, 14 Jan 2014 10:18:01 +0100

Thank you, understood, noted.
What number should I reach you on?

Regards,
Valérie P
Arcadev

That gives me a little more information. The company E-Nov Développement is known to societe.com, but also to the search engines:

I therefore approach the phone appointment with some caution. Valérie P scores a point by telling me that she is sending me an email describing the project (“cool, a serious prospect who sends me concrete material so that I can understand their needs”).

From:     E-prog vp <contact@cybix.fr>
To:     vdv@dyomedea.com, DEMO <contact@e-prog.fr>
Subject:     Project
Date:     Tue, 14 Jan 2014 12:21:13 +0100

Hello,

To view our project, please click on the following links

————————-Project consultation ————————-
http://e-prog.fr/z_4647/iph/ok2.asp?tra=http://www.dyomedea.com

————————-Link 2 ————————-
http://e-prog.fr/z_4647/iph/ok2.asp?tra=http://gossard.cybix.fr

————————-Our website ————————-
http://e-prog.fr

Regards,
01 49 76 96 26
contact@e-prog.fr

E-Nov
Siret : 40500275900034

Message secured by Kapersky antivirus

Disappointment in the end: instead of requirements, the first link points to a crude montage that overlays a stock video on a copy of my site dyomedea.com, in the hope of convincing me to use their services.

On her profile, Valérie P claims a “Master in European Business, International Marketing” from ESCP-EAP. She probably knows what she is doing, but I find it hard to believe that she can achieve a decent conversion rate when she lands phone appointments on the strength of deceptive emails!

People often say that the Internet is a jungle, a lawless zone. It is true that without the Internet I would never have received this email. On the other hand, I could have received the same kind of message by postal mail, and without the Internet I would not have been able to gather the information about the company E-Nov Développement…

As for the “lawless” part, E-Nov Développement has already been convicted at least once, and it may not be the last time!

From:     Eric van der Vlist <vdv@dyomedea.com>
To:     abuse@outlook.fr
Subject:     [Fwd: information about our project]
Date:     Tue, 14 Jan 2014 16:40:13 +0100

Hello,

I would like to report the use of an @outlook.fr email address to send
unsolicited messages (after contacting this person, it turns out that
she was hoping to sell me a “personalized” video for my website).

It also turns out that this person works for the company “E NOV
DEVELOPPEMENT (E-NOV DEV)” [1], which has already been convicted for
using a Hotmail address to send such messages [2].

[1]http://www.societe.com/societe/e-nov-developpement-405002759.html
[2]http://www.prodimarques.com/documents/gratuit/59/apercus-de-la-jurisprudence-recente.php

Regards,

Eric van der Vlist


Running my own identity server

Context and motivation

I have been a happy user of Janrain‘s OpenID provider, myOpenID, since May 2007 and didn’t feel any urgency to change until their announcement that the service would be closing on February 1, 2014:

Janrain, Inc. | 519 SW 3rd Ave, Suite 600, Portland OR 97204 | 888.563.3082 | janrain.com <http://www.janrain.com>
Hello,

I wanted to reach out personally to let you know that we have made the decision to end of life the myOpenID <https://www.myopenid.com/> service. myOpenID will be turned off on February 1, 2014.

In 2006 Janrain created myOpenID to fulfill our vision to make registration and login easier on the web for people. Since that time, social networks and email providers such as Facebook, Google, Twitter, LinkedIn and Yahoo! have embraced open identity standards. And now, billions of people who have created accounts with these services can use their identities to easily register and login to sites across the web in the way myOpenID was intended.

By 2009 it had become obvious that the vast majority of consumers would prefer to utilize an existing identity from a recognized provider rather than create their own myOpenID account. As a result, our business focus changed to address this desire, and we introduced social login technology. While the technology is slightly different from where we were in 2006, I’m confident that we are still delivering on our initial promise – that people should take control of their online identity and are empowered to carry those identities with them as they navigate the web.

For those of you who still actively use myOpenID, I can understand your disappointment to hear this news and apologize if this causes you any inconvenience. To reduce this inconvenience, we are delaying the end of life of the service until February 1, 2014 to give you time to begin using other identities on those sites where you use myOpenID today.

Speaking on behalf of Janrain, I truly appreciate your past support of myOpenID.

Sincerely,
Larry

Larry Drebes, CEO, Janrain, Inc. <http://bit.ly/cKKudR>

I am running a number of low-profile websites such as owark.org, xformsunit.org or even this blog, for which OpenID makes sense not only because it’s convenient to log into these sites with a single identity (and password) but also because I haven’t taken the trouble to protect them with SSL, and HTTPS authentication on an identity server is safer than HTTP authentication on these sites.

On the other hand, I do not trust “recognized providers” such as “Facebook, Google, Twitter, LinkedIn and Yahoo!” and certainly do not want them to handle my identity.

The only sensible alternative appeared to be to run my own identity server, but which one?

My own identity server

The OpenID wiki gives a list of identity servers, but many of them seem more or less abandoned, some of the links even returning 404 errors. I chose to install SimpleID, which is enough for my needs and is still actively developed.

Its installation, following its Getting Started guide, is straightforward, and I soon had an identity server for my identity “http://eric.van-der-vlist.com/“. The next step was to update the links on that page that delegate the identity so that they point to my new identity server instead of myOpenID:

  <link rel="openid.server" href="https://eudyptes.dyomedea.com/openid/" />
  <link rel="openid.delegate" href="http://eric.van-der-vlist.com/" />
  <link rel="openid2.local_id" href="http://eric.van-der-vlist.com/" />
  <link rel="openid2.provider" href="https://eudyptes.dyomedea.com/openid/" />

Working around a mod_gnutls bug on localhost

At that stage I was expecting to be able to log into my websites using OpenID. That did work for owark.org and xformsunit.org, but not from this blog!

Trying to log into this blog left a rather cryptic message in Apache’s error log:

CURL error (35): error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol, referer: http://eric.van-der-vlist.com/blog/wp-admin/users.php?page=your_openids

The same error was reported when trying to access the identity server using cURL and even OpenSSL:

vdv@corp:~$ openssl s_client -debug -connect eudyptes.dyomedea.com:443
CONNECTED(00000003)
write to 0x6aa5a0 [0x6aa620] (226 bytes => 226 (0xE2))
0000 - 16 03 01 00 dd 01 00 00-d9 03 02 52 2b 09 1e 75   ...........R+..u
0010 - 8b 8a 35 91 0e ba 6a 08-56 c6 34 a9 d8 78 d3 e8   ..5...j.V.4..x..
0020 - 70 cc 92 36 60 d2 41 32-f1 e8 0f 00 00 66 c0 14   p..6`.A2.....f..
0030 - c0 0a c0 22 c0 21 00 39-00 38 00 88 00 87 c0 0f   ...".!.9.8......
0040 - c0 05 00 35 00 84 c0 12-c0 08 c0 1c c0 1b 00 16   ...5............
0050 - 00 13 c0 0d c0 03 00 0a-c0 13 c0 09 c0 1f c0 1e   ................
0060 - 00 33 00 32 00 9a 00 99-00 45 00 44 c0 0e c0 04   .3.2.....E.D....
0070 - 00 2f 00 96 00 41 c0 11-c0 07 c0 0c c0 02 00 05   ./...A..........
0080 - 00 04 00 15 00 12 00 09-00 14 00 11 00 08 00 06   ................
0090 - 00 03 00 ff 02 01 00 00-49 00 0b 00 04 03 00 01   ........I.......
00a0 - 02 00 0a 00 34 00 32 00-0e 00 0d 00 19 00 0b 00   ....4.2.........
00b0 - 0c 00 18 00 09 00 0a 00-16 00 17 00 08 00 06 00   ................
00c0 - 07 00 14 00 15 00 04 00-05 00 12 00 13 00 01 00   ................
00d0 - 02 00 03 00 0f 00 10 00-11 00 23 00 00 00 0f 00   ..........#.....
00e0 - 01 01                                             ..
read from 0x6aa5a0 [0x6afb80] (7 bytes => 7 (0x7))
0000 - 3c 21 44 4f 43 54 59                              <!DOCTY
140708692399776:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:749:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 226 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

Of course, the same commands worked perfectly on the servers hosting owark.org and xformsunit.org, and I was puzzled because these servers run the same versions of the same software with very similar configurations.

The main difference is that my blog runs on the same server as the identity server. Looking closely at the output of the openssl command, I noticed that the server was returning plain text where encrypted content was expected. Knowing that the server uses mod_gnutls to serve its https content (this is needed to support wildcards in SSL certificates), I was soon able to find a bug, reported in September 2011, which has been fixed but never ported into the Debian or Ubuntu packages: mod_gnutls doesn’t encrypt the traffic when the source and destination IP addresses are identical.

Since the fix is not easily available, I had to find a workaround… How could I trick the server into seeing a source address different from the destination address?

With my current configuration, both addresses were 95.142.167.137, the address of eudyptes.dyomedea.com. What if one of these addresses could become 127.0.0.1?

These addresses can easily become 127.0.0.1: you just need to say so in /etc/hosts:

127.0.0.1       localhost eudyptes.dyomedea.com

Of course at that stage, both addresses are equal to 127.0.0.1 instead of 95.142.167.137. They are still equal and that doesn’t fix anything.

The trick is then to update the Apache configuration so that it doesn’t listen on 127.0.0.1:443 anymore:

    Listen 95.142.167.137:443

We can then redirect 127.0.0.1:443 to 95.142.167.137:443. To do so we could use iptables, but we don’t need the full power of that tool and may prefer the simplicity of a command such as redir:

sudo redir --laddr=127.0.0.1 --lport=443 --caddr=95.142.167.137 --cport=443 --transproxy

This redirection changes the destination address to 95.142.167.137 without updating the source address, which remains 127.0.0.1. The addresses now being different, mod_gnutls does encrypt the traffic and the identity server becomes available on the local machine.
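
As a sanity check (a minimal sketch of the kind of test I mean, nothing more), the earlier openssl probe can be repeated on the server itself; with the /etc/hosts entry, the Listen directive and the redir command in place, the handshake should now complete and report a certificate and a cipher instead of plain text:

# Same probe as above, run locally on the server once the workaround is active:
openssl s_client -connect eudyptes.dyomedea.com:443 < /dev/null 2>/dev/null | grep -E 'subject=|Cipher is'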

Other tweaks

Note that if you’re using WordPress and its OpenID plugin, you may have trouble getting OpenID login to work with the excellent Better WP Security plugin and will have to disable the “Hide Backend” and “Prevent long URL strings” options.


Marvelous clouds without bullshit

– Well then! What do you love, extraordinary stranger?
– I love the clouds… the clouds drifting by… over there… over there… the marvelous clouds!

Charles Baudelaire, Petits poèmes en prose

Clouds are elusive and fascinating.

They are incredibly varied and, from cirrus to stratus, they sit at very different altitudes.

The very notion of a cloud is a matter of point of view: if you have ever hiked in the mountains, you will have noticed that you only need to climb for the clouds to turn into fog.

The cloud computing metaphor has its limits, like any metaphor, but it shares these two characteristics.

The notion of cloud computing is a matter of point of view: for the “end” user, any website or web application is in the cloud, since only in exceptional cases do they know which machine hosts it. For the administrator of the website, on the other hand, the site is “in the cloud” only if it is hosted on a virtual machine!

Cloud computing can also sit at very different heights, between the stratus of the virtual machine that you administer like a physical one and the cirrus of software as a service, not forgetting the cumulonimbus clouds that aim to fill every niche.

I let myself be tempted by a very small stratus and have just migrated the sites I administer to virtual machines at Gandi.

Why a stratus? In the same way that I like baking my own bread or repairing my own roof, I like installing and administering the software tools I use. At a time when IT professionals tend to become more and more specialized, I see it as a way of keeping up a minimum of general computing culture! So I prefer administering a machine (virtual or not) to using software as a service.

Why a small stratus? Because I have always preferred small structures to large groups!

Why Gandi? I am grateful to Gandi for getting me out of the monopolistic clutches of Network Solutions, dropping the price of a domain from $70 to €12 in the process! For more than 10 years I have appreciated this company’s service, its culture and its “No Bullshit” slogan.

So I migrated my three Dediboxes to Gandi virtual servers.

Don’t be misled by the figures: the €14.99 Dedibox dedicated server is far more powerful than a €12 Gandi server share, and my bill at Gandi is higher than it was at Dedibox.

Why this migration?

After a somewhat rough start (my first Dediboxes froze very frequently), Dediboxes have become very reliable, but they are still physical machines that age, and Online renews them roughly every three years. That means you have to reinstall your servers every three years.

Likewise, these machines are not upgradeable: there is no question of adding memory, storage space or CPU if you need it.

Virtual servers, on the contrary, are virtually eternal: as long as Gandi doesn’t make an operating mistake, there is no risk of a virtual machine “growing old”.

They are also very flexible: you can very easily add memory, computing power, storage space or network interfaces. Most of these operations can even be done on the fly, without restarting the server.

After a few weeks, what’s the verdict?

So far I had only had the opportunity to appreciate Gandi’s support for domain registrations. I can now tell you that it is just as responsive for hosting. When you contact support through the web interface, a checkbox lets you indicate that your server is down, which allows support to treat your case as a priority.

During the first three weeks after my first server went into production, disk access froze three times, for two to three hours each time. Gandi’s support quickly reported the problem, stating that “disk access was slowed down”. For my part, I would have said “frozen” rather than slowed down (No Bullshit!): the virtual machine was completely stuck, with its services (HTTP, SSH, SMTP, IMAP, …) not responding at all.

Gandi seems to have identified and fixed the problem, and everything has been working very well since then.

Disk write performance is often a little weak, around 60 MB/s, but tonight it looks decent:

vdv@community:/tmp$ dd if=/dev/zero of=test.file bs=1024k count=512
512+0 enregistrements lus
512+0 enregistrements écrits
536870912 octets (537 MB) copiés, 5,01527 s, 107 MB/s
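
As an aside, a plain dd run like this one can be flattered by the Linux page cache; a variant along the following lines (GNU dd options, a rough sanity check rather than a real benchmark) forces the data to actually reach the disk before the rate is reported:

# conv=fdatasync makes dd flush to disk before printing the transfer rate,
# so the figure is closer to the real write throughput of the volume:
dd if=/dev/zero of=test.file bs=1024k count=512 conv=fdatasync
rm test.file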

So for the moment everything is going well, and the two complaints I could make are administrative.

The administration interface on the site is pleasant to use, but input errors are too rarely documented: most of the time the offending field is highlighted, but no error message explains how to fix it. In the most complex cases this ends up as a message to technical support, which is a lot of wasted time for both support and the user.

Gandi lets you acquire resources with no commitment at all, and this formula is very flexible, but each acquisition results in a separate invoice (the same goes for domain renewals if you enable automatic renewal). So I end up with dozens of small invoices, some for only a few euros, which are going to be a nightmare to process! Why not offer to group these amounts into a single monthly invoice?

I am now looking forward to taking advantage of the flexibility this formula offers me.

That could happen with the next version upgrade of the operating system I use (Ubuntu).

For my servers, I prefer to use the LTS (Long Term Support) releases, which are published every two years. The difference between two releases is significant and upgrades are often painful.

I haven’t looked into the details yet, but I plan to “clone” my servers’ virtual machines and perform the upgrades on the clones while leaving the originals in service. That should let me do the upgrade and test it without interrupting service.
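
I haven’t settled on the exact procedure, but the outline would look something like this (a rough sketch, assuming the clone is created through Gandi’s administration interface and follows the standard Ubuntu upgrade path):

# On the cloned virtual machine, while the original keeps serving traffic:
sudo apt-get update && sudo apt-get dist-upgrade   # bring the clone fully up to date
sudo do-release-upgrade                            # then move to the next LTS release
# ...check HTTP, SSH, SMTP and IMAP on the clone before swapping it with the original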

To be continued…


XML Prague 2012: The web would be so cool without the web developers

Note: XML Prague is also a very interesting pre-conference day, a traditional dinner, posters, sponsor announcements, meals, coffee breaks, discussions and walks that I have not covered in this article for lack of time.


When I was a child, I used to say that I felt Dutch when I was in France and French when I was in the Netherlands. It was nice to feel slightly different, and I liked to analyze the differences between the Dutch, who seemed more adult and civilized, and the French, who seemed to me more spontaneous and fierce.

I felt this old sensation of being torn between two different cultures very strongly this weekend at XML Prague. Of course, it was no longer between French and Dutch but between the XML and Web communities.

The conference also reminded me of the old joke about the Parisian visiting Corsica and saying “Corsica would be so cool without the Corsicans!”; for me the tag line could have been “the web would be so cool without the web developers!”.

Jeni Tennison’s amazing opening keynote was of course more subtle than that!

She started by acknowledging that the web was split into no fewer than four different major formats: HTML, JSON, XML and RDF.

Her presentation was a set of clever considerations on how we can deal with these different formats and cultures, concluding that we should accept the fact that “the web is varied, complex, dynamic, beautiful”.

I then gave my talk “XML, the eX Markup Language” (also available as a paper on this blog), in which I analyzed the reasons why XML failed to become the one major web format and gave my view on where XML should be heading.

While Jeni had explained why “chimeras are usually ugly, foolish or impossible fantasies”, my conclusion was that we should focus on the data model and extend or bridge it to embrace JSON and HTML, as the XPath 3.0 data model proposes to do.

I still think so, but what is such a data model if not a chimera? Is it ugly, foolish or impossible, then? There is a lot to think about beyond what Hans-Jürgen Rennau and David Lee proposed at Balisage 2011, and I think I’ll submit a proposal on this topic for Balisage 2012!

Robin Berjon and Norman Walsh then tried to bridge the gap with their presentation “XML and HTML Cross-Pollination: A Bridge Too Far?“, an interesting talk in which they tried to show how ideas could be shared between the two communities: “forget about angle brackets, look at the ideas”. This gave a nice list of things that do work in the browser (did you know that you can run JavaScript against any XML document?) and fascinating new ideas such as JS-SLT, a JavaScript transformation library, or CSS-Schema, a CSS-based schema assertion language.

Anne van Kesteren had chosen a provocative title for his talk: “What XML can learn from HTML; also known as XML5“. Working for Opera, Anne was probably the only real representative of the web community at the conference. Under that title, his presentation advocated relaxing the strictness of the XML parsing rules and defining an error recovery mechanism in XML like the one that exists in HTML5.

His talk was followed by a panel discussion on HTML/XML convergence, and this subject of error recovery monopolized the whole panel! Some of the panelists (Anne van Kesteren, Robin Berjon and myself) were less hostile, but the audience unanimously rejected the idea of changing anything in the well-formedness rules of the XML recommendation.

Speaking of errors may be part of the problem: errors have a bad connotation, and if a syntactical construct is allowed by the error recovery mechanism with a well-defined meaning, why should we still consider it an error?

However, a consensus emerged that it could be useful to specify an error recovery mechanism for applications that need to read non-well-formed XML documents found on the wider web. This consensus led to the creation of the W3C XML Error Recovery Community Group.

The reaction of the room, which refused to even consider a discussion of what XML well-formedness means, seems rather irrational to me. Michael Sperberg-McQueen reinforced this feeling in his closing keynote when he pleaded for defining this as “a separate add-on rule rather than as a spec that changes the fundamental rules of XML”.

What can be so fundamental about the definition of XML well-formedness? These reactions made me feel as if we were discussing kashrut rules rather than parsing rules, and the debate often looked more religious than technical!

The next talk, XProc: Beyond application/xml by Vojtěch Toman, was again about bridging technologies but was less controversial, probably because the technologies to bridge with were not seen as XML competitors.

Looking at the workarounds used by XML pipelines to support non-XML data (either encoding the data or storing it outside the pipeline), Vojtěch proposed extending the data model flowing through the pipelines to directly support non-XML content. That kind of proposal looks so obvious and simple that you wonder why it hadn’t been done before!

George Bina came next to present Understanding NVDL – the Anatomy of an Open Source XProc/XSLT implementation of NVDL. NVDL is a cool technology for bridging different schema languages and it greatly facilitates the validation of compound XML documents.

Next was Jonathan Robie, presenting JSONiq: XQuery for JSON, JSON for XQuery. JSONiq is both a syntax and a set of extensions to query JSON documents in an XQuery flavor that looks like JSON. Both the syntax and the extensions look elegant and clever.

The room was usually very quiet during the talks, waiting for the Q&A sessions at the end to ask questions or give comments, but as soon as Jonathan displayed the first example, Anne van Kesteren couldn’t help gasping: “what? arrays are not zero based!”

Having put JSON clothes on top of an XPath data model, JSONiq uses one as the base index for its arrays, while JavaScript and most programming languages use zero.

Proposing one-based arrays inside a JSONic syntax to web developers is like wearing a kippah to visit an Orthodox Jew and bringing him baked ham: if you want to be kosher you need to be fully kosher!

Norman Walsh came back on stage to present Corona: Managing and querying XML and JSON via REST, a project to “expose the core MarkLogic functionality—the important things developers need—as a set of services callable from other languages” in a format-agnostic way (XML and JSON can be used interchangeably).

The last talk of this first day was given by Steven Pemberton, Treating JSON as a subset of XML: Using XForms to read and submit JSON. After a short introduction to XForms, Steven explained how the W3C XForms Working Group is considering supporting JSON in XForms 2.0.

While Steven was speaking, Michael Kay tweeted what many of us were thinking: “Oh dear, yet another JSON-to-XML mapping coming…“. Unfortunately, until JSON finds its way into the XML Data Model, every application that wants to expose JSON to XML tools has to propose a mapping!

The first sessions of the second day were devoted to Jonathan Robie and Michael Kay presenting What’s New in XPath/XSLT/XQuery 3.0 and XML Schema 1.1.

A lot of good things indeed! XML Schema 1.1 in particular will correct the biggest limitations of XML Schema 1.0 and borrow some features from Schematron, making XML Schema an almost decent schema language!

But the biggest news is XPath/XSLT/XQuery 3.0, which brings impressive new features that will turn these languages into fully functional programming languages, and of course new types in the data model to support the JSON data model.

One of these new features is annotations, and Adam Retter gave a good illustration of how they can be used in his talk RESTful XQuery – Standardised XQuery 3.0 Annotations for REST. Since XQuery is used to power web applications, these annotations can define how stored queries are associated with HTTP requests, and Adam proposes to standardize them to ensure interoperability between implementations.

For those of us whose heads were not spinning yet, Alain Couthures came to explain how he is Compiling XQuery code into Javascript instructions using XSLT 1.0 for his XSLTForms implementation. If we can use XSLT 1.0 to compile XQuery into JavaScript, what is the next step? XSLT 2.0?

After lunch, Evan Lenz came to present Carrot, “an appetizing hybrid of XQuery and XSLT” first presented at Balisage 2011. This hybrid is not a chimera but a nice compromise for those of us who can’t really decide whether they prefer XSLT or XQuery: Carrot extends the non-XML syntax of XQuery to expose the templating system of XSLT.

It can be seen as both yet another non-XML syntax for XSLT and a templating extension for XQuery, and it borrows the best features from both languages!

Speaking of defining templates in XQuery, John Snelson came next to present Transform.XQ: A Transformation Library for XQuery 3.0. Taking advantage of the functional programming features of XQuery 3.0, Transform.XQ is an XQuery library that implements templates in XQuery. These templates are not exactly the same as XSLT templates (the priority system is different), but as in XSLT you’ll find template definitions, apply-templates methods, modes, priorities and other goodies.

Java had not been mentioned yet, and Charles Foster came to propose Building Bridges from Java to XQuery. Based on the XQuery API for Java (XQJ), these bridges rely on Java annotations to map Java classes to XQuery stored queries, and of course POJOs are also mapped to XML to provide a very sleek integration.

The last talk was a use case by Lorenzo Bossi presenting A Wiki-based System for Schema and Data Evolution, providing a good summary of the kind of problems you face when you need to update schemas and corpora of documents.

Everyone then held their breath waiting for Michael Sperberg-McQueen’s closing keynote, which was brilliant as usual, almost impossible to summarize, and should be watched on video!

Michael chose John Amos Comenius as the introduction to his keynote. Comenius was the last bishop of the Unity of the Brethren and became a religious refugee, which gave Michael an opportunity to call for tolerance and diversity in document formats as in real life. Comenius was also one of the earliest champions of universal education, and in his final conclusion Michael pointed out that structured markup languages are the new champions of this noble goal.

Of course, there was much more than that in his keynote, Michael taking care to mention each presentation, but this focus on Comenius confirmed my sense of a religious feeling toward XML.

I agree with most of what Michael said in his keynote, except perhaps when he seems to deny that XML adoption can be considered disappointing. When he says that the original goal of XML, being able to use SGML on the web, has been achieved because he, Michael Sperberg-McQueen, can use XML on his websites, that is true of course, but was the goal really to allow SGML experts to use SGML on the web?

It’s difficult for me to dissent because he is the one who was involved in XML at a time when I had never even heard of SGML, but I would still argue that SGML was already usable on the web by SGML experts, and I don’t understand the motivation for the simplification that gave birth to XML if it was not to lower the price of entry so that web developers could use XML.

The consequences of this simplification have been very heavy: the whole stack of XML technologies had to be reinvented, and SGML experts lost a lot of time before these technologies could be considered to be at the same level as before. And even now, some features of SGML that were stripped out could be very useful for experts on the web, for instance DTDs powerful enough to describe wiki syntaxes.

Similarly, when I was discussing my talk with Liam Quin over lunch, he said that he had always thought XHTML would never replace HTML. I have no reason to contradict Liam, but the vision of the W3C Markup Activity was clearly to “Deliver the Web of the Future Today: Recasting HTML in XML”, as can be seen in this archive.

It’s not pleasant to admit that we’ve failed, but replacing HTML with XHTML so that XML would become dominant in the browser was clearly the official vision of the W3C, shared by many of us, and this vision has failed!

We need to acknowledge that we’ve lost this battle and make peace with the web developers who have won…

Curiously, there seems to be much less aggressiveness toward JSON than toward HTML5 in the XML community, as shown by the number of efforts to bridge XML and JSON. Can we explain this by the fact that many XML purists considered data-oriented XML less interesting and noble than document-oriented XML?

Anyway, the key point is that a very strong ecosystem has been created, with an innovative, motivated and almost religious community and a technology stack that is both modern and mature.


XML, the eX Markup Language?

Note: this article is a copy of the paper that I presented at XML Prague 2012.

Abstract

Revisiting the question that was the tag line of XML Prague last year, “XML as new lingua franca for the Web. Why did it never happen?”, Eric tries to answer other questions such as “where is XML going?” and “is XML declining, becoming an eX Markup Language?”.

XML as new lingua franca for the Web. Why did it never happen?

This was the tagline of XML Prague 2011, but the question wasn’t really answered last year, so I’ll start this talk by giving my view on it.

Flashback

February 1998 is a looong time ago, a date from another century, and for those of you who were not born yet or don’t remember, here is a small summary of what happened in February 1998:

[Summary of February 1998 events, Wikipedia]

While the Iraq disarmament crisis was raging, the World Wide Web Consortium waited until the third day of the Winter Olympics held in Nagano to make the following announcement:

Advancing its mission to lead the Web to its full potential, the World Wide Web Consortium (W3C) today announced the release of the XML 1.0 specification as a W3C Recommendation. XML 1.0 is the W3C’s first Recommendation for the Extensible Markup Language, a system for defining, validating, and sharing document formats on the Web
W3C Press Release (February 1998)

People curious enough to click on the second link of the announcement could easily double-check that, beyond the marketing bias, XML was something to be used over the Internet:

The design goals for XML are:

  1. XML shall be straightforwardly usable over the Internet.
  2. XML shall support a wide variety of applications.
  3. XML shall be compatible with SGML.
  4. It shall be easy to write programs which process XML documents.
  5. The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
  6. XML documents should be human-legible and reasonably clear.
  7. The XML design should be prepared quickly.
  8. The design of XML shall be formal and concise.
  9. XML documents shall be easy to create.
  10. Terseness in XML markup is of minimal importance.
W3C Recommendation (February 1998)

And the point was reinforced by the man who had led the “Web SGML” initiative and is often referred to as the father of XML:

XML arose from the recognition that key components of the original web infrastructure — HTML tagging, simple hypertext linking, and hardcoded presentation — would not scale up to meet the future needs of the web. This awareness started with people like me who were involved in industrial-strength electronic publishing before the web came into existence.
Jon Bosak

This has often been summarized by saying that XML is about “putting SGML on the Web”.

Among the design goals, the second one (“XML shall support a wide variety of applications”) has been especially successful, and by the end of 1998 Liora Alschuler reported that the motivations of the different players pushing XML forward were very diverse:

The big-gun database vendors, IBM and Oracle, see XML as a pathway into and out of their data management tools. The big-gun browser vendors, Netscape and Microsoft, see XML as the e-commerce everywhere technology. The big-gun book and document publishers, for all media, are seeing a new influx of tools, integrators, and interest but the direction XML publishing will take is less well-defined and more contingent on linking and style specs still in the hands of the W3C.
Liora Alschuler for XML.com (December 1998)

One thing these “big-gun” players pushing XML in different directions did achieve was to develop an incredible hype that rapidly covered everything, and by 2001 the situation had become hardly bearable:

Stop the XML hype, I want to get off
As editor of XML.com, I welcome the massive success XML has had. But things prized by the XML community — openness and interoperability — are getting swallowed up in a blaze of marketing hype. Is this the price of success, or something we can avoid?
Edd Dumbill (March 2001)

The marketers behind the hype being who they were, the image of XML they promoted was so shiny that the XML gurus didn’t recognize their own technology and tried to fight the hype:

I’ve spent years learning XML / I like XML / This is why www.XmlSuck.com is here
PaulT (January 2001)

The attraction was high and people rushed to participate in the W3C working groups:

Working Group size – so many people means it is difficult to gain consensus, or even know everyone’s face. Conference calls are difficult.
Mark Nottingham, about the SOAP W3C WG (May 2000)

Huge working groups with people pushing in different directions are not the best recipe for publishing high-quality standards, and even though XML itself was already baked, the perception of XML depends on the full “stack”:

This is a huge responsibility for the Schema Working Group since it means that the defects of W3C XML Schema will be perceived by most as defects of XML.
Eric van der Vlist on xml-dev (April 2001)

The hype was so huge that XML geeks rapidly thought that they had won the war and that XML was everywhere:

XML is now as important for the Web as HTML was to the foundation of the Web. XML is everywhere.
connet.us (February 2001)

Why this hype? My guess is that the IT industry had such a desperate need for a data interchange format that any candidate could have been adopted at that time, and that XML happened to be the one that crossed the radar screen at the right moment:

When the wind is strong enough, even flatirons can fly.
Anonymous (February 2012)

The W3C now had to maintain:

  • XML, an SGML subset
  • HTML, an SGML application that did not match the XML subset

Technically speaking, the thing to do was to refactor HTML to meet the XML requirements. Given the perceived success of XML, it seemed obvious that everyone would jump on the XML bandwagon and be eager to adopt XHTML.

Unfortunately, from a web developer’s perspective the benefits of XHTML 1.0 were not that obvious:

The problem with XHTML is:
a) it’s different enough from HTML to create new compatibility problems.
b) it’s not different enough from HTML to bring significant advantages.
Eric van der Vlist on XHTML-DEV (May 2000)

It is fair to say that Microsoft had been promoting XML since the beginning:

XML, XML, Everywhere
There’s no avoiding XML in the .NET world. XML isn’t just used in Web applications, it’s at the heart of the way data is stored, manipulated, and exchanged in .NET systems.
Rob Macdonald for MSDN (February 2001)

However, despite their strong commitment to XML, Microsoft had frozen new development on Internet Explorer. The browser was never updated to support the XHTML media type, meaning that the few websites using XHTML had to serve their pages as HTML!

By 2001, the landscape was set:

  • XML had become a dominant buzzword, giving a false impression that it had been widely adopted
  • Under the hood, many developers, even within the XML community, were deeply upset by the hype
  • Serving XHTML web pages as such was not an option for most websites

The landscape was set, but the hype was still high and XML was still gaining traction as a data interchange format.

In the meantime, another hype was growing…

Wikipedia traces the origin of the term Web 2.0 back to 1999:

The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come.

…/…

Ironically, the defining trait of Web 2.0 will be that it won’t have any visible characteristics at all. The Web will be identified only by its underlying DNA structure – TCP/IP (the protocol that controls how files are transported across the Internet); HTTP (the protocol that rules the communication between computers on the Web), and URLs (a method for identifying files).

…/…

The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens.

Darcy DiNucci (1999)

The term became widely known with the first Web 2.0 conferences in 2003 and 2004, and XML was an important piece of the Web 2.0 puzzle through Ajax (Asynchronous JavaScript and XML), coined and defined by Jesse James Garrett in 2005 as:

Ajax isn’t a technology. It’s really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

Jesse James Garrett (February 2005)

This definition shows how, back in 2005, some of us still thought that XML could dominate the Web and be used both to exchange documents (in XHTML) and data.

Unfortunately, this vision, defended by the W3C, was rapidly torpedoed by Ian Hickson and Douglas Crockford.

Founded in 1994 for that purpose, the W3C had been the place where HTML was standardized. Among other things, it had been the place where the antagonists of the first browser war could meet and talk on neutral ground.

By 2004, Netscape had disappeared, Microsoft had frozen the development of their browser, and browser innovation had moved into the hands of new players: Mozilla, Apple/Safari and Opera, which was starting to gain traction.

Complaining that the W3C did not meet their requirements and that HTML urgently needed to be updated for what would soon be known as Web 2.0, they decided to fork the development of HTML:

Software developers are increasingly using the Internet as a software platform, with Web browsers serving as front ends for server-based services. Existing W3C technologies — including HTML, CSS and the DOM — are used, together with other technologies such as JavaScript, to build user interfaces for these Web-based applications.

However, the aforementioned technologies were not developed with Web Applications in mind, and these systems often have to rely on poorly documented behaviors. Furthermore, the next generation of Web Applications will add new requirements to the development environment — requirements these technologies are not prepared to fulfill alone. The new technologies being developed by the W3C and IETF can contribute to Web Applications, but these are often designed to address other needs and only consider Web Applications in a peripheral way.

The Web Hypertext Applications Technology working group therefore intends to address the need for one coherent development environment for Web Applications. To this end, the working group will create technical specifications that are intended for implementation in mass-market Web browsers, in particular Safari, Mozilla, and Opera.
WHATWG (June 2004)

The W3C was faced with a simple choice: either push XHTML recommendations that would never be implemented in any browser, or ditch XHTML and ask the WHATWG to come back and continue their work toward HTML5 as a W3C Working Group. The latter option was eventually chosen and HTML work resumed within the W3C in 2007.

JSON had been around since 2001. It took a few years of Douglas Crockford’s energy to popularize this JavaScript subset, but around 2005 JSON rapidly became a technology of choice as a “Fat-Free Alternative to XML” in Ajax applications.

There is no direct link between HTML5 and JSON, but the reaction against XML, its hype and its perceived complexity is a strong motivation in both cases.

Why?

A number of reasons can be found for this failure:

  • Bad timing between the XML and HTML specifications (see Adam Retter’s presentation at XML Amsterdam 2011).
  • Lack of quality of some XML recommendations (XML Namespaces, XML Schema, …).
  • Lack of pedagogy to explain why XML is the nicest technology on earth.
  • Dumbness of web developers who do not use XML.

There is some truth in all these explanations, but the main reason is that from the beginning we (the XML crowd) were arrogant and overconfident, and we made a significant design error.

When we read this quote:

XML arose from the recognition that key components of the original web infrastructure — HTML tagging, simple hypertext linking, and hardcoded presentation — would not scale up to meet the future needs of the web. This awareness started with people like me who were involved in industrial-strength electronic publishing before the web came into existence.
Jon Bosak

We all understand what Jon Bosak meant and we probably all agree that HTML is limited and that something more extensible makes our lives easier, but we must also admit that we have been proven wrong and that HTML has been enough to scale up to the amazing applications we see today.

Of course, the timing was wrong, and everything would have been easier if Tim Berners-Lee had come up with a first version of HTML that was a well-formed XML document, but on the other hand the web had to exist before we could put SGML on it, and there had to be a prior technology.

In 1998 it was already clear that HTML was widespread, and the decision to create XML as an SGML subset that would be incompatible with HTML was a bad one:

  • Technically speaking, because it meant that millions of existing pages would be non-well-formed XML (“the first Google index in 1998 already had 26 million pages“); a small illustration follows this list.
  • Tactically speaking, because it could be understood as “what you’ve done so far was crappy, now you must do what we tell you to do”.
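
To make the incompatibility concrete, here is a hedged illustration (a made-up three-line fragment checked with xmllint, which is assumed to be available) of markup that the HTML browsers of the time accepted happily but that no XML parser can treat as well-formed:

cat > page.html <<'EOF'
<p>A paragraph left open
<img src=logo.png alt=Logo>
<br>
EOF
xmllint --noout page.html
# xmllint rejects the fragment (unquoted attribute values, elements that are
# never closed): each of these is a well-formedness error in XML.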

To avoid this deadly risk, the first design goal of XML should have been that existing valid HTML documents be well-formed XML documents. The result might have been a more complex format and specification, but the risk of creating a gap between the XML and HTML communities would have been minimized.

Another reason for this failure is that XML is about extensibility. This is both its main strength and its main weakness: extensibility comes at a price, and XML is more complex than domain-specific languages.

Remove the need for extensibility and XML will always lose against DSLs; we’ve seen a number of examples in the past:

  • RELAX NG compact syntax
  • JSON
  • HTML
  • N3
  • CSS

Is it time to refactor XML? Converge or convert?

Hmmm… It’s time to address the questions asked this year by XML Prague!

We’ve failed to establish XML as the format to use on the web, but we’ve succeeded in creating a strong toolbox that is very powerful for powering websites and exchanging information.

I don’t know if it’s to compensate for the ecosystems we are destroying on our planet, but one of the current buzzwords among developers is “ecosystem”: dominant programming languages such as Java and JavaScript are becoming “ecosystems” that can run applications written in a number of other programming languages.

What we’ve built with XML during the past 14 years is a very strong ecosystem.

The XML ecosystem is based on an (almost) universal data model that can represent not only well-formed XML documents but also HTML5 documents and (with an impedance mismatch that may be reduced in future versions) JSON objects.

Note: Notable exceptions that cannot be represented by the XML data model include overlapping structures and graphs.

On top of this data model, we have a unique toolbox that includes:

  • transformation and query languages
  • schema languages
  • processing (pipeline) languages
  • databases
  • web forms
  • APIs for traditional programming languages
  • signature and encryption standards
  • a text based serialization syntax
  • binary serialization syntaxes

We can truly say that what’s important in XML is not the syntax but that:

Angle Brackets Are a Way of Life
Planet XMLHack

Rather than fighting fights we’ve already lost, we need to develop our ecosystem.

The number one priority is to make sure that our data model embraces the web that is taking shape (which means HTML5 and JSON) as efficiently as possible. Rather than converge or convert, we must embrace; the actual syntax is not that important after all!

To grow our ecosystem, we could also consider embracing more data models, such as graphs (RDF), name/value pairs (NOSQL), relations (SQL), overlaps (LMNL).

I am more skeptical about refactoring XML at this stage.

It’s always interesting to think about what could be done better, but refactoring a technology as widespread as XML is tough and needs either to be backward compatible or to provide a huge benefit to compensate for the incompatibilities.

Will we see a proposal that will prove me wrong during the conference?


Dear oreilly.com, cool URIs don’t change, please!

That shouldn't happen!

Dear oreilly.com, I hope you won’t mind me making a little buzz with your name, but it’s for owark, the Open Web Archive, a project that was launched at OSCON 2011 and could make good use of some additional visibility.

Owark is currently implemented as a WordPress plugin that runs on three of my websites and replaces broken links with links to local archives.

As I was presenting owark today during a workshop at Paris Web 2011, I noticed that http://oreilly.com/catalog/9780596529321/index.html was one of these broken links and used this example to demonstrate the usefulness of this project.

Now that I have made my point, explaining to my attendees that even websites as geeky as oreilly.com could not be trusted to avoid linkrot, can I suggest that you add redirections from these old URLs to the new ones?

Of course, I know that you know that “cool URIs don’t change” and I have noticed that you are already redirecting http://oreilly.com/catalog/9780596529321/ (without the trailing “index.html”) to its new location but that’s not enough ;) …

Thanks,

Eric

PS: owark needs your help. If you’re interested, please leave a comment (if you log in using an OpenID you’ll get the bonus of not being moderated), drop me a mail at vdv@dyomedea.com or contact me on Identica, Twitter, Skype, Yahoo or MSN where my pseudo is “evlist”.


The biography I won’t have time to go into at Paris Web

Since my workshop on web page archiving only lasts thirty minutes, I won’t have time to go into my biography in detail.

So here is what I would have liked to tell the Paris Web attendees.

I heard about the web for the first time in 1993 or 1994, while I was working in pre-sales at Sybase.

The sales rep who handled the CNRS account had asked me to meet a contact who had a particular request for us.

At the time we were very proud of TDS, our client/server protocol, and this customer wanted to ask us whether we could support a new protocol that researchers were starting to use in their labs to share information.

Without much hope, I nevertheless checked with engineering, who replied that no, there was no question of supporting this HTTP protocol, which seemed to them far too rudimentary to be usable for client/server work…

Despite this first setback, the web quickly slipped in through the back door at Sybase, and two years later I was writing my first dynamic web pages in Perl on our intranet to expose data from the support database that the off-the-shelf software we were using didn’t display the way we wanted. The following year, dropping Perl for Java, I wrote a small web application to manage my second-level support team…

The habit had stuck: I had caught the web bug, and I have never lost it.

When I founded Dyomedea in 1999, my first project was to create Du côté de…, a neighborhood website whose commercial failure became obvious a year later.

It was my first big web project: a dynamic site written in PHP on top of a PostgreSQL database (coming from Sybase, using a database that didn’t handle transactions seemed out of the question to me…), updated regularly and generating several thousand pages…

It is this project that really taught me the basics of web development and let me discover technologies such as CSS, XML and XSLT.

Enthused by these technologies, I launched XMLfr in early 2000 and didn’t hesitate to reach out to the nascent XML community, in France and abroad, to get it involved.

The response was astonishing, and I quickly became an influential member of that community: a contributor to xmlhack and xml.com, an author of O’Reilly books, and a speaker at a (too) large number of international conferences.

XMLfr, which was an important source of popularization in the early 2000s, also remains to this day my biggest website, built entirely in XML as a showcase for these technologies.

The ebbing of the XML wave has in no way tempered my enthusiasm for this technology, whose toolbox has now reached maturity and which I use daily.

A user and promoter of open source software (to the point of running an open source OS on my MacBook) and of open data, I have diversified my commitments: I have become a beekeeper and an orchardist (the happy owner of an orchard of several hectares, certified organic and under the Nature & Progrès label) and a partner in an organic food store.

Open exchanges of seeds and of scions of traditional varieties thus complement the exchange of ideas, data and software: they are all part of the same logic!

Another point of convergence between these two “hats”: it was while maintaining the Retour à la Terre website that I was once again confronted with the problem of link rot, picked up an old idea that had been nagging me back when I was actively maintaining XMLfr, and started the owark project that I will be presenting at Paris Web.
