I hesitate to put any kind of definition around the versioning of the web. The fact that the internet world feels compelled to quantify the differences between the so-called Web 1.0 and Web 2.0 is silly at best. However, there is no doubt that there is a vast difference between the web as it was known in, say, 1999 and the web we know in 2009.
Objectively speaking, the first generation of the internet was built around a premise of “Read only”. It was not, of course, termed that, but the technology did not exist to support anything else. People used the internet to read the news, find weather forecasts and catch up on sports scores. Blogs didn’t exist. Facebook and Twitter were, at most, thoughts in their founders’ minds, and probably not even that. Who knew a time would come when the most interactive thing on the web would not be shopping and ecommerce?
Somewhere in the middle of this decade, the web took on a more interactive character. Tim O’Reilly began calling it Web 2.0 to mark the clear-cut difference between a “read only” web and a “read/write” web. Social networks and blogs gave users of the internet a chance to participate in building it by generating content. Eventually, content generation expanded from the written word to video, podcasts and microcontent.
On the cusp of the web’s next generation, there is a movement toward metadata, that is, granular information that aids discoverability on the web. APIs allow developers to take content from, say, YouTube or Twitter and repurpose it into something usable in other forms by humans, applications and mobile devices. It is, in essence, a “read/write/execute” version of the web, and we are already beginning to see it take shape.
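To make the “execute” part concrete, here is a minimal sketch in Python of what repurposing looks like: pull content from a public API and reshape it for some other context. The feed URL, the field names and the assumption that the API returns a JSON list are all illustrative, not any particular service’s real endpoint.

```python
import json
import urllib.request

# Hypothetical endpoint standing in for any public JSON API
# (a Twitter search feed, a YouTube data feed, etc.).
FEED_URL = "https://example.com/api/recent.json"

def fetch_items(url):
    """Read raw content from a public API (assumed to return a JSON list)."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def repurpose(items):
    """Reshape the raw feed into a trimmed-down form that a mobile app,
    widget or other site could display on its own terms."""
    return [{"title": i.get("title", ""), "link": i.get("link", "")} for i in items]

if __name__ == "__main__":
    for item in repurpose(fetch_items(FEED_URL)):
        print(item["title"], "->", item["link"])
```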
Ari Herzog, a longtime reader of this blog as well as a longtime opponent of mine, wrote a post declaring the Government 2.0-ish aspect of the EU’s site a win over the United States’. See his post for his rationale.
He certainly makes a good point with his premise after the jump:
If I must, you can see information in multiple columns, and the data makes sense. It’s logically organized, providing intuitive links for wherever you might need to go for further environmental information in any of the EU member nations. Whereas the US list is, well, a list. How boring!
The problem, of course, is that the battle of websites is a tired one and caters to a “Read only” view of the web. It assumes that users engage content on the website only (which is true at this point) and that they will continue to do so in the future. It assumes that the future of the government-oriented portion of the web, also known as “Government 2.0” to denote a next-generation approach to internet media and government participation, is a “read only” approach.
This argument is short-sighted. While I agree that a usable, UI-focused approach to a government website is important, it does not address the larger hurdles faced in the government community. It does not consider that, for example, NOAA might want to allow constituents to engage its data in a mobile or iPhone app, or that DefenseLINK, the official website of the Defense Department, might want to make its official data accessible to all other DoD websites via RSS or another API.
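To sketch what the DefenseLINK example might look like on the consuming side, here is a hedged illustration in Python: parse an agency RSS feed (using the third-party feedparser library; the feed URL is a made-up placeholder) and hand back plain data that any other DoD site, widget or mobile app could reuse.

```python
import feedparser  # third-party library: pip install feedparser

# Placeholder URL standing in for any agency feed, e.g. a DefenseLINK
# news feed or a NOAA forecast feed.
FEED_URL = "https://www.example.gov/news/rss.xml"

def latest_entries(url, limit=5):
    """Parse an RSS/Atom feed and return its newest entries as plain dicts,
    ready for another website, widget or mobile app to render."""
    feed = feedparser.parse(url)
    return [
        {
            "title": entry.get("title", ""),
            "link": entry.get("link", ""),
            "published": entry.get("published", ""),
        }
        for entry in feed.entries[:limit]
    ]

if __name__ == "__main__":
    for entry in latest_entries(FEED_URL):
        print(entry["published"], entry["title"], entry["link"])
```

The point is not the particular library; it is that once the data is exposed as a feed or an API, the browser becomes just one of many consumers.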
The next generation of the web, and of government participation on the web, is not about pixels and content presentation for humans using Internet Explorer! That is certainly an aspect, but it will not translate into anything that could be billed as Government 2.0. The assumption, the premise, and therefore the jumping-off point in terms of thinking, should consistently be “How do we provide the most data to our constituents?” (where a constituent might be internal, or a machine, computer or web app, and not even a U.S. citizen).
Peter Corbett wrote a post here several months ago about building apps to meet the needs of government and its constituency. He alluded to his Apps for Democracy project, which opened up vast amounts of District of Columbia data for developers to build real-life solutions with. The Sunlight Foundation has a similar project called Apps for America. Without a doubt, every developer who built an app as part of those projects considered the web beyond the browser. That… is what the keystone of Government 2.0 will be.
An understanding of, at minimum, the “read/write” web is necessary. Better yet is a firm grasp of the “read/write/execute” web, where data discoverability is ubiquitous via microformats (read and subscribe to Chris Messina and the fine work of the DiSo Project for more on ubiquity, discoverability and findability on the web) and mobile devices.
Of course, this would also require the technology community to get off their asses and actually innovate, but I digress.