I hesitate to put any kind of definition around the versioning of the web. That the internet world feels compelled to quantify the differences between the so-called Web 1.0 and Web 2.0 is silly at best. Still, there is no doubt that there is a vast difference between the web as we knew it in, say, 1999 and the web we know in 2009.
Objectively speaking, the first generation of the internet was built around a premise of “read only”. It was not, of course, termed that at the time, but the technology did not exist to support anything else. People used the internet to read the news, check weather forecasts and catch up on sports scores. Blogs didn’t exist. Facebook and Twitter were, at most, thoughts in their founders’ minds, and likely not even that yet. Who knew that a time would come when the most interactive thing on the web would not be shopping and ecommerce?
Somewhere in the middle of this decade, the web took a more interactive turn. Tim O’Reilly began calling it Web 2.0 to mark the clear-cut difference between a “read only” web and a “read/write” web. Social networks and blogs gave internet users a chance to participate in its creation by generating content. Eventually, content generation expanded from the written word to video, podcasts and microcontent.
On the cusp of the web’s next generation, there is a movement toward metadata, that is, granular information that aids discoverability on the web. APIs allow developers to take content from, say, YouTube or Twitter and repurpose it into something usable in other forms by humans, applications and mobile devices. It is, in essence, a “read/write/execute” version of the web, and we are already beginning to see it emerge.
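To make the “read/write/execute” idea a bit more concrete, here is a minimal sketch in Python. The JSON payload, field names and URLs below are all made up for illustration; they merely stand in for the kind of structured data a video or microblogging API might return. The point is the pattern: pull structured content from someone else’s service, filter it by metadata, and re-emit it in a new form.

```python
import json

# Hypothetical API response -- the shape, fields and URLs are
# invented here, not taken from any real YouTube or Twitter API.
api_response = json.dumps({
    "items": [
        {"title": "Keynote highlights", "url": "http://example.com/v/1",
         "tags": ["gov20", "keynote"]},
        {"title": "Panel discussion", "url": "http://example.com/v/2",
         "tags": ["gov20", "panel"]},
    ]
})

def repurpose(raw_json, tag):
    """Filter items by a metadata tag and re-emit them as a simple
    HTML list -- one small example of 'executing' remote content."""
    items = json.loads(raw_json)["items"]
    links = ['<li><a href="{0}">{1}</a></li>'.format(i["url"], i["title"])
             for i in items if tag in i["tags"]]
    return "<ul>\n" + "\n".join(links) + "\n</ul>"

print(repurpose(api_response, "panel"))
```

The same filtered list could just as easily be re-emitted as RSS, a widget, or a mobile view; the metadata (the tags) is what makes the content discoverable and reusable downstream.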
Ari Herzog, a longtime reader of this blog as well as a longtime opponent of mine, wrote a post declaring the Government 2.0-ish aspects of Europe’s EU site a win over the United States. See his post for his rationale.
He certainly makes a good point with his premise after the jump: