(the compact version of this article is cross-posted at ZDNet)
Sir Tim Berners-Lee blogged again. This time he coined another new term---Giant Global Graph. Sir Tim uses GGG to describe the Internet at a new abstraction layer, one that differs from both the Net layer and the Web layer. Quite a few technology blogs immediately reported the news over this Thanksgiving weekend. I am afraid, however, that few of them really told readers the deeper meaning of this new GGG. To me, it is a signal from the father of the World Wide Web: the Web (or the information on the Internet) has started to be reorganized from the traditional publisher-oriented structure into a new viewer-oriented structure. This claim from Sir Tim Berners-Lee matches my previous predictions about web evolution well.
Why another layer?
We need to ask a question---why do we need another layer of abstraction of the Internet? The answer: because the previous abstractions are no longer sufficient to foster the newest stage of Internet evolution. According to Sir Tim, we previously had two typical abstractions of the Internet. The first layer of abstraction is called the Net, in which the Internet is a network of computers. The second layer of abstraction is called the Web, in which the Internet is a network of documents. Beyond these two abstractions, Sir Tim now declares a third layer of abstraction named the Graph, in which the Internet is a network of individual social graphs.
We are all familiar with the Net layer of the Internet, which Sir Tim also calls the International Information Infrastructure (III). Whenever we buy a new computer and connect it online, this computer automatically becomes a part of the III. Through this computer, humans can access information stored on all the other computers within the III. Simultaneously, the information stored on this new computer becomes generally accessible to all the other computers within the III. This abstraction layer is particularly useful when we discuss information transmission protocols on the Internet.
The Web layer of the Internet is often called the World Wide Web (WWW). "It isn't the computers, but the documents which are interesting." Most of the time human users only care about the information itself, not about which computers the information is physically stored on. Whenever somebody uploads a piece of information online, this information automatically becomes a part of the WWW. In general, a piece of information keeps its unaltered meaning independent of whether it is physically stored on computer A or computer B. This abstraction layer is particularly useful when we discuss general information manipulation on the Internet.
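The separation between the two layers can be sketched as a toy data model. Everything below (the class-less dictionaries, the sample host names and URI) is hypothetical and purely illustrative: the point is that a document keeps one stable Web-layer identity no matter which Net-layer computer happens to store it.

```python
# Toy illustration of the Net layer vs. the Web layer (all names hypothetical).

# Net layer: a network of computers, each identified by an address.
hosts = {
    "computer-a.example": {"/docs/essay.html": "<html>GGG essay</html>"},
    "computer-b.example": {},
}

# Web layer: a network of documents, each identified by a URI.
# Which host physically stores a document is an implementation detail.
web_index = {"http://example.org/essay": ("computer-a.example", "/docs/essay.html")}

def fetch(uri):
    """Resolve a Web-layer URI down to the Net layer and return the content."""
    host, path = web_index[uri]
    return hosts[host][path]

# Moving the document to another computer changes only the Net layer;
# the document's Web-layer identity (its URI) and its meaning are unchanged.
hosts["computer-b.example"]["/mirror/essay.html"] = \
    hosts["computer-a.example"].pop("/docs/essay.html")
web_index["http://example.org/essay"] = ("computer-b.example", "/mirror/essay.html")

print(fetch("http://example.org/essay"))  # the same document, served from a new host
```

The `fetch` call returns the same content before and after the move, which is exactly the independence the Web abstraction provides.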
Are these two abstractions enough for us to explore the full potential of the Internet? Sir Tim answers no, and I agree. Internet evolution continuously brings us new challenges. As I pointed out in my series on web evolution, the primary contradiction on the Web is always the contradiction between the unbounded quantitative accumulation of web resources and the limited resource-operating mechanisms available at any given time. We continuously require newer web-resource-operating mechanisms to resolve this primary contradiction at a new level. These newer mechanisms, in turn, are reflections of newer abstraction layers of the Internet. For Web 2.0 in particular, this primary contradiction shows up as the continuously increasing amount of individually tagged information and the lack of ability to organize it coherently. The concept of the social graph helps resolve this contradiction.
Both Brad Fitzpatrick and Alex Iskold presented the same observation: every individual web user expects an organized social graph of the web information in which they are interested. Independently, I made a similar argument using a different term: web space. At the current stage of web evolution, web users are going to integrate the web information of interest they have explored into a personal cyberspace---a web space. Inside each web space, information is organized as a social graph from the perspective of the owner of that web space. This is the connection between web spaces under my interpretation and social graphs under the interpretation of Brad and Alex. Note that the web-space interpretation reveals another implicit but important aspect: the primary role of a web-space owner is that of a web viewer rather than a web publisher.
The emergence of this new Graph abstraction of the Internet signals that the Web (or information on the Internet) is now evolving from a publisher-oriented structure to a viewer-oriented structure. At the Web layer, every web page organizes information according to the view of its publishers. Web viewers generally have no control over how web information is organized. So the Web layer rests on a publisher-oriented structure. At the newly proposed Graph layer, every social graph organizes information according to the view of its graph owner, who is primarily a web viewer. In general, web publishers have little impact on how these social graphs are composed. "It's not the documents, it is the things they are about which are important." Who will answer what "the things they are about" are? The viewers, not the publishers. This is why information organization at the Graph layer becomes viewer-oriented. The composition of all viewer-oriented social graphs becomes a giant graph at the global scale that is equivalent to the World Wide Web (but seen from a different perspective); this giant composition is thus the Giant Global Graph (GGG).
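This composition can be sketched in a few lines. The resources, viewers, and edge labels below are invented for illustration: each viewer draws their own labeled links between documents, and the GGG is simply the union of all these personal graphs---no publisher decides how the links are drawn.

```python
# Each viewer owns a small graph: edges between web resources, labeled
# from that viewer's own perspective (all data here is hypothetical).
alice_graph = {
    ("http://a.example/photo", "http://b.example/blog-post"): "same trip",
}
bob_graph = {
    ("http://b.example/blog-post", "http://c.example/review"): "both about Paris",
}

def compose(*graphs):
    """The Giant Global Graph as the union of individual viewer graphs."""
    ggg = {}
    for graph in graphs:
        for edge, label in graph.items():
            # The same edge may carry different labels from different viewers.
            ggg.setdefault(edge, []).append(label)
    return ggg

ggg = compose(alice_graph, bob_graph)
# The same document (b.example/blog-post) participates in two viewers'
# graphs, connected to different things for different reasons.
```

The design choice worth noticing is that the global graph is derived from the personal ones, not the other way around: the viewer-side organization is primary.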
Turning from the publisher-oriented web to the viewer-oriented web is a fascinating transformation. From the perspective of web evolution, the core of this transformation is the upgrade of web spaces.
- On Web 1.0, web spaces were homepages. Homepages typically represented the publishers' view. So Web 1.0 was a publisher-oriented web.
- On Web 2.0, web spaces become individual accounts. Web 2.0 is in transition from the publisher-oriented web to the viewer-oriented web, and individual accounts are representative units of this transition. Within an account, web viewers collect resources of interest and store them in the account, so these accounts have significant viewer-oriented aspects. On the other hand, these accounts are isolated within individual web sites, which are typical information organizations built upon the publisher-oriented view. Therefore, the accounts on these sites inevitably also have significant publisher-oriented aspects. This mixture of the two views causes more problems than benefits: users find it difficult to organize information across the boundaries of web sites.
- On the future Web 3.0, web spaces will become primarily viewer-oriented. In contrast to Web-2.0 accounts, Web-3.0 spaces (or graphs) are going to be collections of web resources from various web sites, organized essentially according to the view of web viewers. Web-3.0 spaces will become viewer-side home-spaces, in contrast to the publisher-side home-pages of Web 1.0.
This vision of a viewer-oriented web is exciting. But is anything still missing from this vision? If the things we have discussed so far were all we needed, Twine would already be the example of our ultimate solution toward Web 3.0. But I have also argued that Twine (or at least the current Twine beta) is at most Web 2.5. There is still a missing piece in this vision.
The missing piece is the character of proactivity. In my web evolution article, I emphasized that the implementation of proactivity is a key to the next transition in web evolution. Unlike publishers, who have full control over whether and what they publish, viewers have control over neither question. Therefore, a successful viewer-oriented information organization must be equipped with certain proactive mechanisms so that viewers can continuously update their social graphs with newly uploaded web information. Just as the implementation of activity (such as RSS) was a key to the success of Web 2.0, the implementation of proactivity will be a key to the success of Web 3.0, or the Semantic Web, or the newly proposed Giant Global Graph.
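As a concrete contrast, RSS-style "activity" is pull-based: the viewer's software must repeatedly poll a publisher's feed and work out what is new. The feed content and the `seen_links` bookkeeping below are invented for illustration; a proactive Web-3.0 mechanism would instead push qualifying items into the viewer's graph as they appear, without the viewer polling at all.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed (hypothetical content).
FEED = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Old post</title><link>http://example.org/old</link></item>
  <item><title>New post on GGG</title><link>http://example.org/ggg</link></item>
</channel></rss>"""

def poll(feed_xml, seen_links):
    """Pull-based 'activity': the viewer polls the feed and filters for new items."""
    root = ET.fromstring(feed_xml)
    new_items = []
    for item in root.iter("item"):
        link = item.findtext("link")
        if link not in seen_links:
            new_items.append((item.findtext("title"), link))
            seen_links.add(link)
    return new_items

seen = {"http://example.org/old"}
print(poll(FEED, seen))  # only the item the viewer has not seen before
```

Note that all the initiative here still rests with the viewer's polling loop; the publisher merely leaves the feed lying around. Proactivity would invert this, which is precisely what the current mechanisms lack.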