This post highlights what appeared on Thinking Space in 2007, month by month. I am grateful to all the readers of Thinking Space and wish you a merry Christmas and a happy new year!
January 28, 2007, Web 2.0 panel at the World Economic Forum
How will Web 2.0 and the emerging social networks affect world business? The annual World Economic Forum at Davos organized a panel of five outstanding Web business leaders to address this issue at the beginning of 2007. The talks, however, showed that the executives from traditional big companies such as Microsoft and NIKE were less attuned to the new technologies than the executives from new-age companies such as YouTube and Flickr. In short, both Bill and Mark were talking in languages other than Web 2.0. In their view, the Web 2.0 phenomenon was certainly less important than their own imagined visions of the future. How much web evolution really impacts world business was severely underestimated.
In the end, the speech by Viviane is worth re-emphasizing. As the Web grows more and more mature, who is going to govern the virtual world? This question may gradually become a severe issue as web evolution goes further. Will there be conflicts between the virtual-world governments and the real-world governments? I do not think that in 2008 we will immediately see this type of conflict. But the traditional means of national borders have already started to diminish while the new means of digital borders are forming; these changes are slow but inevitable.
February 18, 2007, The Two-Year Birthday of AJAX
Few technologies have affected the Web as much as AJAX has. AJAX is more than a technology; it is a philosophy. What AJAX really does is decompose Web content into smaller portable pieces that can be uploaded and updated independently. AJAX enables the dynamic recomposition of pieces of Web content from varied sources. Hence it significantly improves the reuse of information on the Web.
The prevalence of AJAX causes the fragmentation of the Web. The flip side of this phenomenon, however, is the question of how we may defragment these small pieces of information and reorganize them from end users' perspectives. This defragmentation issue is the next critical challenge for Web information management. Twine is an example of a service that has started to address it. I expect to see more proposals for solving this defragmentation issue in 2008.
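To make the idea concrete, here is a toy sketch (in Python rather than browser JavaScript; the class and names are my own invention, not any real framework) of a page composed from independently updatable fragments, which is the essence of what AJAX brings to a front end:

```python
# A page recomposed from independently updatable fragments,
# mimicking how an AJAX front end patches one region at a time.

class FragmentPage:
    def __init__(self):
        self.fragments = {}  # fragment id -> current content

    def update(self, fragment_id, content):
        # Only the named fragment changes; all others are untouched.
        self.fragments[fragment_id] = content

    def render(self):
        # Recompose the full page from its current pieces.
        return "\n".join(self.fragments[k] for k in sorted(self.fragments))

page = FragmentPage()
page.update("header", "<h1>Thinking Space</h1>")
page.update("feed", "<ul><li>old item</li></ul>")
before = page.render()
page.update("feed", "<ul><li>new item</li></ul>")  # one piece refreshed
after = page.render()
```

Updating the "feed" fragment leaves every other fragment alone, which is exactly the portability that makes Web content reusable, and also what fragments it.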
March 23, 2007, Will the Semantic Web fail? Or not?
Whether the Semantic Web is going to succeed is always debatable. There are many supporters of the Semantic Web, and there are nearly as many opponents. Will the Semantic Web come true? The answer partially depends on whether Semantic Web researchers can humbly learn from the success of Web 2.0. The general public might not welcome the Semantic Web if its research stays inside the ivory tower. Practices such as Microformats are good examples of Semantic Web research reaching out to ordinary web users. But there are still too few examples of this type. For instance, will the new W3C RDFa proposal be too complicated again? We do not know yet. Hopefully this time the W3C will focus more on simple solutions that are feasible for ordinary users rather than on the sound and complete solutions that academic researchers favor. By comparison: if our real human society is far from perfect in reasoning and inference, why must we have theoretically perfect plans to build a virtual world?
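To illustrate why Microformats feel approachable: agreeing on a handful of property names is enough to turn ordinary markup into machine-readable statements. A deliberately minimal sketch (Python; the subject URI, vocabulary, and helper function are hypothetical illustrations, not any W3C API):

```python
# Lift simple microformat-style (property, value) pairs into
# RDF-like subject-predicate-object triples, without heavy tooling.

def to_triples(subject, properties):
    """Map each property of a subject to an (s, p, o) triple."""
    return [(subject, prop, value) for prop, value in properties.items()]

# hCard-like properties someone might embed in a page.
card = {"fn": "Jane Doe", "org": "Example Lab", "url": "http://example.org"}
triples = to_triples("http://example.org/people/jane", card)
```

The point is not the code but the low barrier: a few shared labels already give machines something to reason over, with no ontology editor in sight.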
April 18, 2007, A new web battle is announced
Google is expanding rapidly. Google has replaced Yahoo as the leading Web search engine. Google has already become the largest producer of Web-2.0 products. Google is competing against Microsoft to be the leading online document editor. Google is fighting against Facebook to be the leading social network through the OpenSocial initiative. More recently, Google has started another battle, against Wikipedia, to be the leading online knowledge aggregator with the announcement of Google Knol. Can Google succeed simultaneously in all of these fields? Are Google's plans too ambitious to be successful?
The age of Google is about to pass; this is my prediction after watching all these ambitious plans issued by Google. Google has started losing its momentum of originality. Instead, Google is now repeating the "successful" path of many traditional big companies, i.e., dominating the market by defeating opponents not with new technological achievements but with its superior financial resources. This strategy has proved successful in many fields. It is not, however, a winning strategy in the web industry, because the World Wide Web itself is evolving. When the Web evolves, Web technologies evolve. Any company that stops evolving will be left behind. The history that once happened to Yahoo may happen to Google in the future. The age of Google will pass with the passing of Web 2.0.
May 8, 2007, Web Search, is Google the ultimate monster?
Google is beatable, but it is not going to be defeated by another Google-style solution. When I predict that the age of Google is about to pass, I mean a new revolution in Web technologies. Google thinks of itself as the God of the World Wide Web, and indeed many Web users accept this interpretation (because we have no better choices at present). But history has already told us that fake gods like Google cannot stay forever. We humans abandoned most of our fake gods as soon as public education became prevalent. Similarly, this history will repeat itself in the virtual world of the Web. The fake God of the virtual world (Google) will step down when the education of Web machine agents prevails. Hakia will not threaten Google if it continues following the Google strategy by positioning itself as a more powerful fake God on the Web.
In addition to this short summary, I have a preliminary funding request. I will graduate next year, and I am currently looking for an assistant professor position. If I get an offer, I will start a new research project on next-generation Web search that goes beyond the current Google-style search strategy. In fact, I have already written the project proposal. If you are a reader responsible for finding and funding new research projects with future potential, I would be more than happy to discuss my project with you. I can be contacted through email@example.com. The philosophy underneath my new web search strategy can be read here.
June 29, 2007, Epistemological extension to ontologies: a key to realizing the Semantic Web?
The application of epistemology to the Semantic Web is less explored than it should be. We need ontologies to enhance collaboration and agreement. We also need epistemologies to emphasize individuality and privacy. I expect more research on this topic in 2008.
July 31, 2007, What does tagging contribute to web evolution? | An introduction to web threads
There are many ways to describe web evolution. One unique expression is the transformation from the node-driven web to the thread-driven web. Web thread is a new term I have proposed. In short, a web thread is a connection that links multiple web nodes to a fixed inbound point. I observed that the Web is not only syntactically connected by human-specified links, but also semantically connected by latent threads, each of which expresses a fixed meaning. Straightforward evidence of the existence of web threads is Web-2.0 tags. On Web 2.0, resources are automatically interconnected when individual human users assign them the same tag. By weaving these tags together, we obtain an interconnected network of all web pages.
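The weaving described above can be sketched in a few lines. The following toy code (Python; all names are my own invention) builds the tag-to-resource threads and then derives the latent page-to-page connections they imply:

```python
from collections import defaultdict
from itertools import combinations

# Each tag acts as a "web thread": it latently connects every
# resource that individual users have labeled with it.

def weave_threads(tagged):
    """tagged: {resource: set of tags} -> {tag: set of resources}."""
    threads = defaultdict(set)
    for resource, tags in tagged.items():
        for tag in tags:
            threads[tag].add(resource)
    return threads

def connected_pairs(threads):
    """Pairs of resources joined by at least one shared tag."""
    pairs = set()
    for resources in threads.values():
        for a, b in combinations(sorted(resources), 2):
            pairs.add((a, b))
    return pairs

tagged = {
    "page1": {"semweb", "ajax"},
    "page2": {"semweb"},
    "page3": {"ajax", "web2.0"},
}
threads = weave_threads(tagged)
pairs = connected_pairs(threads)
```

No author ever linked page1 to page2, yet the shared "semweb" tag connects them; that is the semantic connection a web thread expresses.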
The existence of web threads is an interesting phenomenon that lacks insightful research at present. On one side, web threads are part of the implicit web because they are generally latent at this moment. On the other side, by proactively revealing web threads and explicitly weaving them, we might produce more comprehensive social graphs for individual web users. This new concept thus may contribute significantly to the vision of the Giant Global Graph. I will publish more research on this concept in 2008. By the way, a broader discussion of web links and web threads can be found here.
August 24, 2007, Mapping between Web Evolution and Human Growth, A View of Web Evolution, series No. 4
The World Wide Web is evolving. But why does the Web evolve, and how does it evolve? Few answers have been given. My view of web evolution is the first systematic study that directly addresses these questions through a theoretical exploration.
This view of web evolution stands upon an analogical comparison between web evolution and human growth. I argue that the two processes are not only similar in their common evolutionary patterns, but also literally simulate each other in all major aspects. At present, the simulation happens mainly in one direction, from the real world to the virtual world. In the future, however, we are going to see more evidence of simulation in the reverse direction, i.e., from the virtual world to the real world.
The virtual world represented by the Web is nothing but a reflection of our human society. Due to the limits of web technologies, however, we are not able to simulate our society completely, in every aspect, in this virtual world. In particular, we are not able to simulate well all the activities of individual humans on the Web. We can, however, simulate individuals at a certain level within any specific evolutionary stage. This continuous upgrading of the simulation of individuals on the Web represents the mainstream of web evolution.
This theory of web evolution has been published for half a year, and I have received many requests to discuss this vision. I hope this study will bring more attention to the fascinating field of web evolution research.
September 16, 2007, A Simple Picture of Web Evolution
The simple picture of web evolution expresses a straightforward timeline. The Web is evolving from a read-or-write web to a read/write web, and eventually it may become a read/write/request web. The implementation of the "Request" operation would be a fundamental next step toward the next-generation Web.
October 7, 2007, What is Web 2.0? | The Path towards Next Generation, Series No.1
What is the next generation Web? This is a grand question for all Web researchers at this moment. We might see critical breakthroughs in answering this question in 2008.
At present, the advance of Web 2.0 has already slowed down. The progress of web evolution has reached another stable period of quantitative expansion after the exciting qualitative transition from 1.0 to 2.0. The seed of the next transition is now growing underground.
In order to figure out the path toward the next generation Web, we need to know the present and where the present came from. In the first post of this series, "towards the next generation," I summarized the various definitions of Web 2.0. In the following installments of this series, I will continue discussing my vision of the path toward Web 3.0. I am sorry about the slow progress of this series; I will try to post installments more frequently in the coming year.
November 23, 2007, Multi-layer Abstractions: World Wide Web or Giant Global Graph or Others
Giant Global Graph is a new concept. Although Tim Berners-Lee proposed it intuitively for freely deploying personal social networks onto the Web, my view of the intent of this concept goes beyond that intuition. In general, I believe the proposal of this concept is the first sign of a great transition: the organization of web information is transforming from a publisher-oriented point of view to a viewer-oriented point of view.
The impact of this transformation could be greater than we imagine. Most importantly, this transformation shows that the Web may automatically reorganize its information without a human-controlled organization such as the W3C or Google. The World Wide Web is a self-organizing system. This observation is essential to the understanding of web evolution.
December 3, 2007, Collectivism on the Web
The implementation of collectivism has been the landmark of Web 2.0. But do we know how many types of collectivism we may implement on the Web? This last selected article, from December 2007, summarized a few typical implementations of collectivism on the Web. Some of them (such as collective intelligence) are well known, while others (such as collective responsibility and collective identity) are less known to the public. I expect to watch more creative implementations of collectivism in 2008.
Saturday, December 22, 2007
Monday, December 10, 2007
This is my newest article in Semantic Report. In this article I present my thoughts on web evolution in the business realm. The truth is that as the Web evolves, most businesses on the Web must evolve simultaneously simply to survive. It may be a little surprising, but Web business is indeed one of the riskiest business categories in the world, precisely because of web evolution. New businesses always have plenty of chances, while old businesses often struggle to keep up with the pace of web evolution.
In this article, I describe how the success of a hard-core web business (i.e., a company that cannot survive without the Web) depends on the implementation of three things: bringing its customers chances to make money, bringing its customers entertainment, or helping its customers be recognized and remembered. A successful implementation of any of these three philosophies can lead to a successful company. But a great company always engages at least two of them. If a company successfully implements all three, it becomes unbeatable.
On the other hand, the progress of web evolution provides richer and richer ways to implement these three philosophies. If a company does not frequently upgrade its implementation as web evolution progresses, it may easily be swept out of the market by new competitors that directly adopt the newest implementation methods. This is the exciting but also the cruel side of web business evolution.
You may check out the full article in the December 2007 edition of the Semantic Report.
Monday, December 03, 2007
Collectivism emphasizes human interdependence and the importance of the collective. As probably the greatest collective project of mankind in history, the World Wide Web engages in enormous practices of collectivism. In this article, we take a brief look at several typical examples of these engagements.
Collective intelligence is the most well-known engagement of collectivism on the World Wide Web. In particular, Web 2.0 advocates have declared "harnessing collective intelligence" to be the touchstone of the Web 2.0 revolution. By definition, collective intelligence is a form of intelligence that emerges from the collaboration and competition of many individuals. If this definition feels a little puzzling, here is an alternative explanation that is imprecise but much easier to understand: informally, collective intelligence on the Web is the collection of user-generated "intelligence."
A keen reader may immediately notice an interesting comparison: is there any difference between user-generated "intelligence" and user-generated "content" (or user-generated "data")? On Web 2.0, we have mentioned user-generated content (UGC) almost as many times as collective intelligence. In many people's minds, UGC virtually equals collective intelligence. But the actual meanings of "intelligence" and "content" or "data" are very different. The intent of "intelligence" is much richer than that of "content/data." Tim O'Reilly also briefly mentioned this distinction in one of his earlier posts about harnessing collective intelligence.
Content/data is a type of intelligence, but at the low end. Jean Piaget, the Swiss philosopher and pioneer of constructivist epistemology, had a compact description of intelligence: "Intelligence is what you use when you don't know what to do." Content/data provides shallow and unrefined information for people to use; it is often too crude to be used efficiently. Keeping user-generated intelligence at the level of content/data is not enough. This is a problem.
I foresee that the degree of complexity (as well as the degree of efficient usage) of collective intelligence on the Web will evolve with the Web. For example, by tagging content with formal labels defined by ontologies, user-generated content/data would evolve into user-generated knowledge. This is exactly what the vision of the Semantic Web wants to bring us. Moreover, by augmenting formally labeled content with external logic routines, user-generated knowledge would evolve into user-generated wisdom. By encoding mechanisms of proactiveness into machine computation, user-generated wisdom might evolve into user-generated creativity. By engaging user-generated content/data, knowledge, wisdom, and creativity together, we might eventually get user-generated personality, through which human evolution reaches a new stage of being artificially immortal. Is this a long way off? Yes, there is a long way to go. Is it an impossible dream? No, it is not. The practice of collective intelligence is converting our society into a virtual world simultaneously at the level of individuals and the level of collective groups.
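The step from user-generated content to user-generated knowledge can be hinted at with a toy example. Below, a single external "logic routine" (a transitivity rule over is_a labels) derives a statement nobody tagged explicitly; the vocabulary and facts are invented purely for illustration:

```python
# Tagged content expressed as triples, plus one external "logic
# routine" (a transitivity rule) that derives knowledge not
# stated directly by any user.

facts = {
    ("ajax", "is_a", "web_technology"),
    ("web_technology", "is_a", "technology"),
}

def apply_transitivity(facts):
    """Forward-chain the is_a relation until no new triples appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(derived):
            for (c, p2, d) in list(derived):
                if p1 == p2 == "is_a" and b == c \
                        and (a, "is_a", d) not in derived:
                    derived.add((a, "is_a", d))
                    changed = True
    return derived

knowledge = apply_transitivity(facts)
```

The derived triple ("ajax" is a "technology") was contributed by no individual user; it emerged from formal labels plus a logic routine, which is the content-to-knowledge upgrade described above.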
Collective intelligence is not the only practice of collectivism on the Web. Another key practice of collectivism on the Web is the implementation of collective behavior.
Collective behavior is very different from collective intelligence. All types of collective intelligence are static and thus can easily be presented explicitly. In comparison, collective behaviors are dynamic, and it is difficult to present them explicitly. As a result, collective behaviors are much harder to use than collective intelligence on the Web, even though the amount of collective behavior is much greater than the amount of collective intelligence. The reason for this difference in amount is trivial: every piece of collective intelligence on the Web must be related to at least one human behavior (the action that posted the information online), and most of the time it is associated with many human behaviors, such as reading and writing. With such a large pool of collective behaviors, it is surprising that so few efforts have been made so far to manage and utilize it.
Fortunately, Web researchers have started to pay attention to collective behaviors. The recent proposal of the implicit web is a typical example. The implicit web is a network that defragments every piece of implicitness on the explicit web. The majority of the implicitness on the Web actually belongs to collective behaviors.
Collective intelligence is a popular concept, and the discussion of collective behavior is not rare either. But the remaining practices of collectivism on the Web that I am going to discuss are indeed uncommon. Many readers may never have heard of them before. Yet all these practices are, without exception, important and valuable for the evolution of the World Wide Web. The first one I introduce is collective responsibility.
Collective responsibility is a concept, or doctrine, according to which individuals are held responsible for other people's actions if they tolerate, ignore, or harbor them, without actively collaborating in those actions. This concept is particularly important to the study of Internet security.
In the age of Web 2.0 and afterwards, with the deeper and wider implementation of collectivism, security is no longer a solo issue. As a result, innocence may no longer be simply an individual matter. We must start to consider collective responsibility, i.e., some people may have to be punished not for their own guilt but because they did not actively prohibit wrongdoing that happened regularly in the communities they participate in. This issue is going to be very debatable and exciting.
A collective identity refers to individuals' sense of belonging to a group.
Identity is a tough issue on the Web. Normally, a web user has different identities on different sites. These varied identities, however, cause serious problems when people try to organize their information of interest across the boundaries of web sites. To address this problem, web researchers have launched the OpenID project, which allows users to use a single ID over the entire Web.
But OpenID, even if it became a standard across the Web, would not be the end of the Web identity issue. Just as individual persons have particular roles in real life, individual identities on the Web must gain their particular social roles in virtual life. The identification of these roles is particularly important when we start to manipulate human-generated information on the Web, i.e., collective intelligence, collective behaviors, etc. Only when humans or machines can identify the social roles of the producers or owners of information can they properly manipulate that information. Research on collective identity will focus on the identification of the social roles of individual identities.
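As a rough illustration of the identity-resolution problem underneath OpenID, the following sketch (Python; the class and account names are invented) links a person's scattered per-site accounts into one collective identity using a simple union-find structure:

```python
# Sketch: resolving one user's scattered per-site accounts into a
# single identity, the kind of problem OpenID aims to address.

class IdentityResolver:
    def __init__(self):
        self.parent = {}  # account -> parent account (union-find)

    def _find(self, account):
        self.parent.setdefault(account, account)
        while self.parent[account] != account:
            # Path compression: point at the grandparent as we walk up.
            self.parent[account] = self.parent[self.parent[account]]
            account = self.parent[account]
        return account

    def link(self, a, b):
        # Declare that two site accounts belong to the same person.
        self.parent[self._find(a)] = self._find(b)

    def same_person(self, a, b):
        return self._find(a) == self._find(b)

r = IdentityResolver()
r.link("flickr:jane", "delicious:jdoe")
r.link("delicious:jdoe", "blogger:janedoe")
```

Note that this only merges identifiers; the harder question raised above, assigning social roles to the merged identity, remains open.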
Collective identities are identities of identities. The study of this issue is another exciting and unexplored field that may attract much attention in the future.
Collective consciousness refers to the shared beliefs and moral attitudes which operate as a unifying force within society. In other words, on the Web collective consciousness is about machine morality, because human consciousness on the Web is handled by machines. Machine morality is not a sci-fi term; the issue is real. Machine morality is the reflection of human morality in the virtual world.
The implementation of collective consciousness is closely related to all the previously mentioned collective factors. Human consciousness is materialized on the Web as static intelligence and dynamic behaviors. Moreover, collective identities assign social roles to these materialized consciousnesses. The integrity of these materialized consciousnesses is closely related to the level of collective responsibility maintained in the meantime. The combination of all these issues composes the intent of machine morality.
Collective effervescence is a perceived energy formed by a gathering of people as might be experienced at a sporting event, a carnival, a rave, or a riot. This energy can cause people to act differently than in their everyday life.
Collective effervescence is the emotion web site owners want to create. Collective effervescence can be captured in one word: hype! It is the ultimate goal of implementing collectivism on the Web. At the same time, how much an implementation of collectivism successfully brings collective effervescence to a web site is the fundamental standard by which we can measure the quality of implementations of collectivism. This concept encloses the entire set of collective factors and lifts the evaluation of collectivism into the computational realm.
We have discussed several examples of how we may engage practices of collectivism on the Web. Certainly there could be many other possible practices beyond this article. But one thing is certain: collectivism is a crucial phenomenon on the evolving Web, and the study of collectivism on the Web is going to be a critical issue for Web Science.
Wednesday, November 28, 2007
Blink is another best-selling book by Malcolm Gladwell, after his influential The Tipping Point. Blink is about the unconscious mind of human beings. In the book, Gladwell argues that a decision made by a well-trained unconscious is often better than an alternative decision made through thorough deliberation. The reason: the well-trained unconscious (or so-called "thin slicing") catches only the very core of the problem, while thorough deliberation often wanders into unessential branches that bury the core. This is "the power of thinking without thinking," as the subtitle of the book puts it.
This observation about the importance of "thin slicing" shows an embarrassing side of collective intelligence: if there is a conflict between a decision made on a collective basis and an alternative decision made by the instinct of a few top experts, which one should we trust? Web-2.0 experience asks us to vote for the first decision, but Gladwell's book tells us that most of the time it is the second that is more trustworthy. Which one would you pick in reality?
This is a vague question that may not have an absolute answer in general. But at least it shows that collective intelligence is not a panacea. An opinion from a domain expert and an opinion from a layman certainly should be weighted differently when we use both to make a decision. Sometimes, as Blink tells us, the instinct of a very few experts is far more correct than a collective decision.
So is the YouBeTheVC competition a really serious event? Maybe it is just another American Idol show. Think about it: would Larry Page and Sergey Brin (or Mark Zuckerberg) have attended this kind of idol show when they had the blueprint of Google (or Facebook) in mind? I doubt it. A distinctive idea more often comes from a blink than from a collective vote.
Friday, November 23, 2007
(the compact version of this article is cross-posted at ZDNet)
Sir Tim Berners-Lee has blogged again. This time he coined another new term: Giant Global Graph. Sir Tim uses GGG to describe the Internet at a new layer of abstraction, different from either the Net layer or the Web layer. Quite a few technology blogs immediately reported this news over the Thanksgiving weekend. I am afraid, however, that few of them really told readers the deeper meaning of this new GGG. To me, this is a signal from the father of the World Wide Web: the Web (or the information on the Internet) has started to be reorganized from the traditional publisher-oriented structure to a new viewer-oriented structure. This claim from Sir Tim Berners-Lee matches my previous predictions about web evolution well.
Why another layer?
We need to look at a question: why do we need another layer of abstraction of the Internet? The answer: because the previous abstractions are no longer sufficient to foster the newest evolution of the Internet. According to Sir Tim, we previously had two typical abstraction layers of the Internet. The first is called the Net, in which the Internet is a network of computers. The second is called the Web, in which the Internet is a network of documents. After these two abstractions, Sir Tim now declares a third layer of abstraction named the Graph, in which the Internet is a network of individual social graphs.
We are all familiar with the Net layer of the Internet, which Sir Tim also calls the International Information Infrastructure (III). Whenever we buy a new computer and connect it online, it automatically becomes part of the III. Through this computer, humans can access information stored on all the other computers within the III. Simultaneously, the information stored on this new computer becomes generally accessible to all the other computers within the III. This abstraction layer is particularly useful when we discuss information transfer protocols on the Internet.
The Web layer of the Internet is often called the World Wide Web (WWW). "It isn't the computers, but the documents which are interesting." Most of the time, human users care only about the information itself, not about which computers the information is physically stored on. Whenever somebody uploads a piece of information, it automatically becomes part of the WWW. In general, a piece of information holds its unaltered meaning independently of whether it is physically stored on computer A or computer B. This abstraction layer is particularly useful when we discuss general information manipulation on the Internet.
Are these two abstractions enough for us to explore the full potential of the Internet? Sir Tim answers no, and I agree. The evolution of the Internet continuously brings us new challenges. As I pointed out in my series on web evolution, the primary contradiction on the Web is always between the unbounded quantitative accumulation of web resources and the limited resource-operating mechanisms of the moment. We continuously require newer mechanisms for operating on web resources to resolve this primary contradiction at a new level. These newer mechanisms, however, are reflections of newer abstraction layers of the Internet. For Web 2.0 in particular, this primary contradiction appears as the continuously increasing amount of individually tagged information and the lack of ability to organize it coherently. The concept of the social graph helps resolve this contradiction.
Both Brad Fitzpatrick and Alex Iskold presented the same observation: every individual web user expects to have an organized social graph of the web information in which they are interested. Independently, I had made another presentation of the same idea; the term I used was web space. At the current stage of web evolution, web users are going to look for ways to integrate the web information of interest they have explored into a personal cyberspace, a web space. Inside each web space, information is organized as a social graph based on the perspective of the owner of that space. This is the connection between web spaces under my interpretation and social graphs under the interpretation of Brad and Alex. Note that this web-space interpretation reveals another implicit but important aspect: the primary role of a web-space owner is that of a web viewer instead of a web publisher.
The emergence of this new Graph abstraction of the Internet tells us that the Web (or information on the Internet) is now evolving from a publisher-oriented structure to a viewer-oriented structure. At the Web layer, every web page shows an organization of information based on the view of its publishers. Web viewers generally have no control over how web information is organized, so the Web layer rests on a publisher-oriented structure. At the newly proposed Graph layer, every social graph shows an organization of information based on the view of its graph owner, who is primarily a web viewer. In general, web publishers have little impact on how these social graphs are composed. "It's not the documents, it is the things they are about which are important." Who is going to answer what "the things they are about" are? The viewers, not the publishers. This is why information organization at the Graph layer becomes viewer-oriented. The composition of all viewer-oriented social graphs becomes a giant graph at global scale that is equivalent to the World Wide Web (but seen from a different view); this giant composition is the Giant Global Graph (GGG).
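The composition claim can be pictured with a trivial sketch (Python; the data and function are invented for illustration): if each viewer maintains a small personal graph of edges between the resources they care about, the GGG is simply the union of all those personal graphs.

```python
# Each viewer keeps a personal, viewer-oriented graph: a set of
# edges between web resources. The Giant Global Graph is the
# union of all such personal graphs at global scale.

def compose_ggg(personal_graphs):
    """Union the edge sets of all viewer-oriented graphs."""
    giant = set()
    for edges in personal_graphs:
        giant |= edges
    return giant

alice = {("page1", "page2"), ("page2", "page3")}
bob = {("page2", "page3"), ("page3", "page4")}
ggg = compose_ggg([alice, bob])
```

Edges shared by many viewers appear only once in the composition, yet every viewer's perspective is preserved in their own graph; the giant graph emerges bottom-up rather than being published top-down.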
Turning from the publisher-oriented web to the viewer-oriented web is a fascinating transformation. From the viewpoint of web evolution, the core of this transformation is the upgrade of web spaces.
- On Web 1.0, web spaces were homepages. Homepages typically represented the publishers' view. So Web 1.0 was a publisher-oriented web.
- On Web 2.0, web spaces become individual accounts. Web 2.0 is in transition from the publisher-oriented web to the viewer-oriented web, and individual accounts are the representative units of this transition. Within an account, web viewers collect resources of interest and store them. So these accounts have significant viewer-oriented aspects. On the other hand, the accounts are isolated in various web sites, which are typical information organizations built upon the publisher-oriented view; therefore, individual accounts on these sites inevitably also have significant publisher-oriented aspects. This mixture of the two views causes more problems than benefits: users find it difficult to organize information across the boundaries of web sites.
- On the future Web 3.0, web spaces will become primarily viewer-oriented. In contrast to Web-2.0 accounts, Web-3.0 spaces (or graphs) are going to be collections of web resources from various web sites, organized essentially according to the view of web viewers. Web-3.0 spaces will become viewer-side home-spaces, in contrast to the publisher-side home-pages of Web 1.0.
This vision of a viewer-oriented web is exciting. But is anything still missing from it? If what we have discussed so far were all we needed, Twine would already be the example of our ultimate solution toward Web 3.0. But I have also argued that Twine (or at least the current Twine Beta) is at most Web 2.5. There is still a missing piece in this vision.
The missing piece is the character of proactivity. In my web evolution article, I emphasized that the implementation of proactivity is a key to the next transition in web evolution. Unlike publishers, who have full control over whether and what they publish, viewers have control over neither question. Therefore, a successful viewer-oriented information organization must be equipped with proactive mechanisms so that viewers can continuously update their social graphs with newly uploaded web information. Just as the implementation of activity (such as RSS) was a key to the success of Web 2.0, the implementation of proactivity will be a key to the success of Web 3.0, or the Semantic Web, or the newly proposed Giant Global Graph.
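To make the idea of proactivity a little more concrete, here is a toy sketch of what such a mechanism might look like. All names, the feed format, and the classification rule are my own hypothetical illustrations, not any real system's API: a viewer's web space polls its subscribed sources (the way an RSS-style poller might) and folds new items into the viewer-oriented graph without the viewer having to act.

```python
# Toy sketch of a proactive, viewer-side mechanism: the viewer's web space
# periodically polls subscribed sources and folds new items into a personal
# graph. All names here are hypothetical illustrations, not a real API.
from dataclasses import dataclass, field

@dataclass
class WebSpace:
    owner: str
    # viewer-oriented graph: topic -> set of item URLs collected under it
    graph: dict = field(default_factory=dict)
    subscriptions: list = field(default_factory=list)

    def classify(self, item):
        # stand-in for any viewer-defined rule (tags, keywords, ...)
        return item.get("topic", "misc")

    def refresh(self, fetch):
        # 'fetch' returns new items for a source, like an RSS poll would
        for source in self.subscriptions:
            for item in fetch(source):
                topic = self.classify(item)
                self.graph.setdefault(topic, set()).add(item["url"])

# Simulated fetcher standing in for a real RSS/Atom poll
def fake_fetch(source):
    return [{"url": source + "/post1", "topic": "semantic-web"}]

space = WebSpace(owner="viewer")
space.subscriptions = ["http://example.org/blog-a", "http://example.org/blog-b"]
space.refresh(fake_fetch)
print(space.graph)
```

The point of the sketch is only the direction of control: the space acts on the viewer's behalf, rather than waiting for the viewer to visit each publisher's site.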
Monday, November 19, 2007
ThinkerNet is the main component of Internet Evolution, a new site focused on exploring the future of the Internet. ThinkerNet is an interactive blog forum written by industry mavens, futurists, authors, entertainers, and other famous Internet figures. At ThinkerNet you can find bloggers such as Craig Newmark, the founder of Craigslist.com, Philip Rosedale, founder and CEO of Linden Lab, and many other impressive names in any discussion of the future of the Internet. ThinkerNet is a place where exceptional thinkers on the Web engage.
I was invited to join this extraordinary group of thinkers. I regard the invitation as an honor and recommend ThinkerNet to the readers of Thinking Space. Many articles at the site are worth reading and thinking about. Check them out and enjoy!
Friday, November 16, 2007
I had a talk with Paul Miller about the Semantic Web and web evolution. The first half of the talk is about myself and my current PhD research; the second half discusses my visionary view of web evolution and the Semantic Web. In general, we had an easy and pleasant talk, except that Skype dropped the call four or five times in the middle. So please forgive us if you hear a few breaks in the connection.
This talk is part of the Talking with Talis series, in which Paul has talked with web evangelists such as Danny Ayers, Nova Spivack, Thomas Vander Wal, and many more.
Thursday, November 15, 2007
In my newest Semantic Focus article, I casually introduced ontology mapping and its solutions by discussing an interesting issue---The Curse of Knowledge. The following is a quote from the article:
"Since the Curse of Knowledge is a major reason for ontology mapping on the Semantic Web, we may try to solve this problem by breaking the Curse of Knowledge. By breaking this curse, we may solve the problem of ontology mapping in reality more easily than trying to exploit the complex algorithms of computer science."
Check out the complete article at Semantic Focus if you are interested.
Monday, November 12, 2007
I have set up a Chinese version of Thinking Space (思维空间) dedicated to Chinese readers. Although my present plan for 思维空间 is to host an official Chinese translation of Thinking Space (Google's translation is far from satisfactory), sooner or later I may start to post special articles particularly for Chinese readers. So, if you are Chinese or prefer reading in Chinese, take note of this new Chinese version of Thinking Space. I hope to see your comments in this new space!
BTW: Any translated post will have a "read the story also in Chinese" label underneath the title.
Tuesday, November 06, 2007
(This article is cross-posted at ZDNet's Web 2.0 Explorer.)
(read the article also in Chinese, translated by the author)
Implicit web is a new concept coined in 2007. With the first Defrag conference happening right now, a discussion of this new term is timely.
Generally, the concept of the implicit web calls attention to a fact: besides all the explicit data, services, and links, the Web engages with much more implicit information, such as which data users have browsed, which services users have invoked, and which links users have clicked. This type of information is usually too tedious to be human readable. So, inevitably, it is only implicitly stored (if stored at all) on the Web. The implicit web intends to describe a network of this implicit information.
Implicit information is everywhere. Implicit information on the Web is about the things to which human web users have paid attention. For example, it is about which web pages are frequently read, how often they are read, and who reads them. It is also about which services are frequently invoked, how often they are invoked, and who invokes them. Considering the number of web users and how many actions each of them performs daily on the Web, the amount of implicit information must be astonishing. Implicit information co-exists with every web page, every web service, and every web link. In short, a great amount of implicitness co-exists with every little piece of explicitness on the Web.
Implicit does not mean insignificant or unimportant. On the contrary, implicit web information is often valuable and even crucial in various situations. For example, implicit click rates can help editors decide which news stories are the most popular and thus belong on the front page. Similarly, the same type of implicit click rates can help salespeople decide which products are in greatest demand so that they can plan the next supply run.
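In code terms, the editors' use of click rates is just an aggregation over an implicit event log. A minimal illustration (the event data is invented for the example):

```python
# An implicit event log: each entry is (user, page) for one click.
# This is exactly the kind of information that is rarely published
# explicitly, yet is valuable once aggregated.
from collections import Counter

clicks = [
    ("alice", "/news/web-2.0-summit"),
    ("bob",   "/news/web-2.0-summit"),
    ("carol", "/news/local-weather"),
    ("bob",   "/news/web-2.0-summit"),
]

# Aggregate click counts per page; the most-clicked pages are
# candidates for the front page.
popularity = Counter(page for _, page in clicks)
front_page = [page for page, _ in popularity.most_common(2)]
print(front_page)  # the summit story outranks the weather story
```

The same aggregation over purchase events, rather than clicks, gives the salespeople's demand ranking.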
Many companies have already started to collect implicit information and benefit from it. Alex Iskold has written a compact introduction to how some companies have utilized implicit information in their products. One well-known example is Amazon.com, which always lists related buyer recommendations with each item it sells online. Many readers must be familiar with the label "Customers Who Bought This Item Also Bought." More importantly, many web users do care about the content underneath this label. This is a typical example of how implicit web information helps.
Amazon is far from the only company that benefits from implicit information. In fact, nowadays almost every website that sells something, from baby toys to cars, has a back-end mechanism for analyzing its traffic (a typical form of implicit information) and adjusting its sales plan based on the analysis. Implicitness is indeed everywhere.
Implicitness is everywhere, but it is fragmented everywhere. Implicit information on the Web is not connected. This is a problem.
So far, implicit web information has generally been stored separately, typically by individual companies. For example, both Gap.com and jcrew.com store their own visitor history, but neither shares it with the other, although we may imagine that this information could connect well, since both companies sell apparel and accessories. One might argue that Gap and J. Crew are competitors. So let us switch the pair to Banana Republic and Victoria's Secret. The products of these two companies complement (rather than compete with) each other. But still the implicit information remains isolated, even though both sides could benefit from connecting it. Readers can find many more examples of this type.
If sharing implicit information among big companies is still questionable (because these big boys hardly believe they could get help from their little sisters), this type of sharing is much more critical for small websites. There are numerous individual sites that cannot make good use of their own implicit information because they are too small. At the same time, however, there is no effective way for them to share and find helpful implicit information, though everybody knows that plenty of it exists on the Web.
All these discussions lead to one demand: we need the implicit web, which is not there yet. The goal of the implicit web is to defragment all the fragments of implicitness (which is where the Defrag conference gets its name). But how can we actually connect all the different types of implicitness on the Web into a coherent implicit web? This is a grand challenge to the newly formed community of implicit web research. We do not have a clear answer yet.
Whatever the answer is, however, the solution must go beyond web links. The implicit web engages with complex types of semantics. The amount of information on the implicit web is gigantic. The implicit web is also highly dynamic. The traditional model of the web link is too simple, too shallow, and too static to deal with all these challenges at once. We need big, creative thoughts on how to store and link all the implicitness.
The greatest potential problem for the implicit web is privacy. For companies, some implicit information may be too confidential to be shared. For individuals, some implicit information may be too private to be made public. We need innovative methods of privacy control on the implicit web.
The Implicit Web in a Nutshell
In summary, I briefly list my beliefs about the implicit web.
1. The implicit web is a network that defragments every piece of implicitness on the explicit web, i.e. the World Wide Web as we generally know it.
2. If the explicit web reveals the static side of human knowledge through posted data, services, and links, the implicit web reveals the dynamic side of human knowledge by recording how users access those data, services, and links.
3. The explicit web engages collective human intelligence. The implicit web engages collective human behavior.
4. The implicit web is not part of the Semantic Web, but the two are closely related. If the Semantic Web constructs a conceptual model of the World Wide Web, the implicit web constructs a behavioral model of it.
Tuesday, October 30, 2007
Web 2.0 is a web of platform; this is a commonly accepted viewpoint. But what does a platform mean to particular companies? The answers are not necessarily the same.
For example, do the platform of Facebook and the platform of Yahoo (if it is ever built) mean the same thing? In a recent report from Bits, a blog hosted by The New York Times, Jerry Yang explained Yahoo's interpretation of a platform: "A business that has a set of standards that allows a set of companies to participate and find benefit from it," he said. This is, however, exactly what the Facebook Platform is: "Facebook Platform is a set of APIs and tools that provides a way for external applications to access Facebook content on behalf of Facebook users." So did Yang suggest that every platform is the same?
What is the difference between Yahoo and Facebook? Yahoo produces web resources by itself, while the present Facebook does not. Facebook is only a social playground: it produces little unique data of its own, only service functions, and relies very much on user-contributed data to survive. Yahoo, however, is a major web-resource manufacturer. Yahoo delivers a great deal of unique consumable data to the public every day. Yahoo can survive without user-contributed data, though it can certainly live better when effectively engaged with it. This is the difference between Yahoo and Facebook at present.
Facebook knows its strengths and weaknesses. So the strategy of its platform is to be fully open and to maximize user contribution. This policy both facilitates the usage of its main products (services) and minimizes its main shortcoming (the lack of a data production line of its own).
Yahoo could simply clone this successful policy, as Yang suggested. But it would be a pity if Yahoo did not adjust the policy to its own strengths. As we have discussed, Yahoo produces a lot of data. Yahoo could transform its data production line into user-accessible services. Through this transformation, Yahoo would be not only a social-network platform as Facebook is, but also a data production platform that Facebook is not. Jeff Jarvis suggested that Yahoo should "turn absolutely every — every — piece of Yahoo into a widget any of us could export and use on our own sites." This is the direction Yahoo really should pursue.
Although the web of platform is a general concept, individual platforms differ. Every company should design its own unique platform based on its own strengths and weaknesses. Simply cloning others may lead to a tremendous waste of resources. This Yahoo-versus-Facebook case study is an example.
Tuesday, October 23, 2007
The 2007 Web 2.0 Summit has come to its end. Richard MacManus at Read/WriteWeb summarized the conference by saying that it was a success but lacked a focused theme. To illustrate, he listed a timeline of Web 2.0 as follows.
* Oct '04: Web 2.0 is Born
* Oct '05: Web 2.0 Tips (a.k.a. "cautious optimism and cynical buzz")
* Nov '06: Web 2.0 Matures
* Apr '07: Web 2.0 Goes Mainstream
I, however, added a complement to this timeline.
* Oct '07: Web 2.0 Starts to Flourish
Impression of Web 2.0 Summit 2007
Web 2.0 is starting to flourish! This is the most important signal delivered by the Web 2.0 Summit 2007. The technologies are ready, and it is time to exploit human creativity in applying them. This is what the conference tells the world.
That Web 2.0 is starting to flourish explains why the conference seemed short of a focused theme. Web 2.0 is now going everywhere. Different people are thinking about how to apply this vision to their own professional realms and make profits from the hype. Hence the diversity of topics, and hence the difficulty of focusing on a single theme.
But isn't the lack of a theme itself also a theme? Yes, it is. Diversity dominates this conference. People all talk about themselves; nobody has extra energy to spare for a theme common to everyone. This is the sign of flourishing.
Current Status from the View of Web Evolution
So what does this current status mean? I want to share my view based on my vision of web evolution. I have three predictions based on the observation of this Web 2.0 Summit.
1. We are now at the stage of rapid quantitative accumulation of Web-2.0 resources.
The core technologies of Web 2.0 are mature. The remaining work is to maximize the usage of these technologies. From traditional big boys such as Microsoft to numerous small startups, everyone is trying their own way to dig gold from the Web 2.0 hype. In the next few years, we are going to see a tremendous increase in the quantity of Web-2.0 resources on the Web. Now is the best time to produce revenue from Web-2.0 products.
2. The preparation of transition to Web 3.0 has begun.
I emphasize that it is the preparation, not the transition itself. At present, online Web-2.0 resources are still too few in both quantity and diversity to really trigger the next transition. As we know, sufficient quantitative accumulation is the prerequisite of a qualitative transition. General philosophical theory tells us that a qualitative transition can never happen without sufficient quantitative accumulation, and we are certainly not there yet.
This observation explains why Twine is still only a Web-2.0, or at most a Web-2.5, product rather than a true Web-3.0 product. In some sense, Twine is like a prematurely born baby: the environment is not yet ready for its healthy growth. But Twine is a sign that Web 3.0 is ahead.
3. A Web-2.0 bubble is unavoidable, but probably it is also necessary.
To accelerate the emergence of Web 3.0, we need more Web-2.0 companies (rather than more Web-3.0 startups) at present. It sounds contradictory. But remember that no Web-3.0 company can exist before the world of Web 2.0 has flourished enough. Since no one knows how much flourishing is enough, only over-flourishing can tell us that it has been enough. And with over-flourishing, we get a bubble. This is the dilemma.
Web 2.0 stands on the flourishing world of Web 1.0, which flourished so much that it produced a bubble. Similarly, Web 3.0 must stand on a flourishing world of Web 2.0; there is no other way to make Web 3.0 happen. In this sense, a Web-2.0 bubble is not only unavoidable but also necessary. To survive the coming bubble, however, any ambitious Web-2.0 company must prepare its own shift from Web 2.0 to Web 3.0 even while it is still focused on producing Web-2.0 products.
This is my most recent post at Semantic Focus. In this post I shared my view of a web of data. The following are selected quotes from the article.
A web of data is a network of data whose local characters are specified by metadata and global characters are specified by hyperdata.
A web thread is a reference to a named web location. Unlike a web link, a web thread connects an arbitrary number of objects at the same time. Where a web link is unidirectional, a web thread is omnidirectional. Data in a thread is automatically connected to all other data in the same thread. All the data connected by the same web thread mutually supplement each other in semantics.
With web threads, do we still need web links in a web of data? The answer is yes. Web threads cannot completely replace web links. Web links have their irreplaceable semantics.
But isn't "incorrectness" a synonym of creativeness? If we want to engage collective intelligence in a web of data, allowing and encouraging subjective (and biased) assignment of web links is fundamental to explore human creativity.
In summary, a web of agents is what ordinary users can see about the Semantic Web at the front end, while a web of data is what professional developers understand to be the essence of the Semantic Web at the back end. These two presentations tell a common story from two different sides.
If you are interested in my interpretation about "a web of data," check out the full story at Semantic Focus.
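To make the quoted distinction between web threads and web links a little more concrete, here is a toy model of my own (an illustrative sketch, not taken from the article): a link is a directed pair, while a thread is a named set, so each member of a thread is implicitly connected to every other member.

```python
# Toy model of the thread/link distinction (my own illustrative sketch).
# A web link is a directed pair; a web thread is a named set (a hyperedge):
# every member of a thread is implicitly connected to every other member.
links = set()        # {(source, target)}
threads = {}         # {thread_name: set of members}

def add_link(src, dst):
    links.add((src, dst))

def add_to_thread(name, datum):
    threads.setdefault(name, set()).add(datum)

def thread_neighbors(name, datum):
    # omnidirectional: one datum reaches all other data in the same thread
    return threads.get(name, set()) - {datum}

add_link("pageA", "pageB")                      # one pairwise connection
for d in ("pageA", "pageB", "pageC", "pageD"):  # one thread, six pairings
    add_to_thread("semantic-web", d)

print(thread_neighbors("semantic-web", "pageA"))
```

The design point the sketch illustrates: adding a fifth datum to the thread connects it to the four existing members at no extra cost, whereas the link model would require four new directed pairs.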
Sunday, October 21, 2007
One big piece of news flying around the blogosphere this past weekend was the first public announcement of Twine. Richard MacManus at Read/WriteWeb asked whether it is the first mainstream Semantic Web application.
I have not yet had the chance to test the beta myself, and I know that many of my readers have not either at this moment. By analyzing the information released in various blogs, I would like to share my first impression of Twine. I may post a follow-up later, once I get to know Twine better. Unless otherwise mentioned, the figures used in this post are from the references I cite at the end.
Twine in a nutshell
This is my one-sentence impression on what Twine is after reading all the referenced posts.
Twine produces a personalized knowledge network for every user by allowing them to find, share, and organize information from people they trust.
As usual, I unfold this sentence so that we may peek at the core of Twine.
Twine produces knowledge networks. This is the main goal of Twine. A knowledge network versus a normal social network is What-You-Know versus Who-You-Know. Peter Rip gave a fairly good explanation of this distinction.
The knowledge networks produced by Twine are personalized. This clause actually carries two layers of meaning. Read literally, it says that Twine leverages the management of personal knowledge and improves the usage of knowledge for individual users. Read more deeply, it suggests that the knowledge management inside Twine can hardly cross the boundaries of individual knowledge networks at the semantic level. In fact, "personalization" is a comparatively weak term in the realm of knowledge management, because globalization is much harder than personalization. But certainly this claim from Twine is reasonable and understandable. (It would be less believable if Twine claimed that it could effectively manage knowledge across all the knowledge networks.) Indeed, I am already very much impressed by the claim Twine has made.
A knowledge network in Twine allows users to find, share, and organize information. The keyword in this clause is users: humans find, share, and organize (with help from machines) rather than machines doing so. It shows that we are still halfway to the real Semantic Web.
Information in a knowledge network comes from people who are trusted by the owner of the network. Obviously, the quality of any knowledge network is related to the quality of its content, and the quality of content is in turn related to whether the information providers are trustworthy. Recently, Paul Miller and I had a talk, and we both agreed that the trust issue is fundamental to any form of network on the future Web. Obviously Twine has already addressed this issue for its knowledge networks. How has Twine actually modeled and implemented trust? That is an interesting question waiting to be revealed.
Impressions from Released Screen-shots
Now let us look at two screen-shots and get a closer feel for Twine.
The first screen-shot shows a standard front page of a Twine. The design is familiar from other Web 2.0 sites. The page contains various imports, which can be seen as widget components. On the right side, there are standard tags and a list of friends. In general, this screen-shot hardly reveals why Twine is more than another Web 2.0 site.
I am a little disappointed by this front-page design. The most important shortcoming is the lack of new thinking in the design. It is hard to convince me that this site is the new-generation product it is advertised to be.
The second screen-shot reveals something new. Notably, it shows an automated annotation mechanism behind the scenes. It seems that Radar's semantic engine can automatically annotate new imports based on existing user-specified tags. Annotated data are stored in RDF files, as Twine advertises. The interface does not reveal whether there is an underlying ontology management mechanism that may automatically upgrade taxonomies based on users' activities. From a pragmatic point of view, I guess that there might be small pre-constructed ontologies or taxonomies (e.g. learned from Wikipedia) in Radar's semantic engine. Based on user-specified tags, the engine can automatically (or semi-automatically) select proper taxonomies for users. These taxonomies then become the seeds for further annotation and query requests.
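Since the paragraph above is pure speculation on my part, the following sketch is speculative too: a toy of how user tags might select a pre-built taxonomy whose terms then annotate new imports. Nothing here describes Radar Networks' actual engine; all names and data are invented.

```python
# Purely speculative toy of the guessed pipeline: user tags select a small
# pre-built taxonomy, which then (semi-)automatically annotates new imports.
# This does NOT describe Radar Networks' real engine; everything is invented.
taxonomies = {
    "semantic-web": {"rdf", "ontology", "sparql", "annotation"},
    "cooking":      {"recipe", "oven", "ingredient"},
}

def select_taxonomy(user_tags):
    # pick the taxonomy that overlaps the user's tags the most
    return max(taxonomies, key=lambda t: len(taxonomies[t] & set(user_tags)))

def annotate(text, taxonomy):
    # naive term matching stands in for real semantic annotation
    words = set(text.lower().split())
    return sorted(words & taxonomies[taxonomy])

seed = select_taxonomy(["rdf", "ontology"])
print(seed, annotate("an ontology stored as rdf files", seed))
```

In the imagined pipeline, the annotations produced this way would then be serialized to RDF and used to seed later queries.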
This screen-shot demonstrates that Twine is beginning to distinguish itself from other Web-2.0 products. The integration of semantic-web technologies brings new elements to the design and further enriches the user experience of web information management.
From the two screen-shots, we have seen the use of novel semantic web technologies in Twine. The main problem, however, is that Twine seems to lay the techniques together only mechanically. What is the philosophy underneath these techniques, and what kind of revolution can these improvements bring to the world? Unfortunately, Twine does not provide a clear answer. As a result, "Twine looks like it's just del.icio.us 2.0," to quote Tim O'Reilly's comment on his own post about Twine. This is also exactly my feeling after carefully reading all the discussions about Twine so far.
Semantics behind Twine
Which philosophy does Twine want to bring to the world? This is the grand question to Radar Networks and Nova Spivack.
What I can see is that Twine is still aiming to leverage a web of platform. Certainly this goal is timely and exciting at the moment. But if Twine stops at the web of platform, it is not (and will not be) the Web-3.0 product it is advertised to be. Twine is an excellent Web-2.0 product; or perhaps we could call it a Web-2.5 product, because it clearly distinguishes itself from many other Web-2.0 products. But unfortunately it is not a Web-3.0 product, because Twine so far does not bring us revolutionary thoughts. Web 3.0 is more than a plain layout of new technologies; it must be a revolutionary layout of new technologies. A revolutionary layout means bringing a new philosophy to the world, and Twine falls short of this ultimate goal.
To understand revolution, let us compare the current Twine to Google when it arose, and we can see what Twine currently lacks. The greatness of Google is not its PageRank algorithm. Although the algorithm is a magnificent contribution to the world, Google's real achievement is that it changed the philosophy of the Web: Google redefined itself as the central hub of a social network of users of Google products, instead of defining itself as a traditional entry portal to the Web. This upgrade of philosophy lifted Google from a 1.0 company to a leading 2.0 company. This is what is called revolution. So far Twine has not shown a sign of this type of revolution. By the way, I am not sure whether Yahoo has really understood this revolution even now.
If Radar Networks would welcome my comments, I would suggest changing the name "Twine" to "Twin". Check a dictionary if you are curious about the two words. Email me if you really want to know my opinion, which is hard to explain in a few sentences and is outside the focus of this post. (Certainly I do not insist on literally changing the name. But they had better change the philosophy underneath the name if their goal is really Web 3.0.)
Twine is an exciting product. Although this Twine beta is not yet a Web-3.0 product, it is already one of the greatest Web-2.0 products to date. Moreover, we must not neglect that Twine still has huge room to grow before it leaves beta. Twine has the potential to grow into a real Web-3.0 product. The question is what ultimate philosophy Nova Spivack and his peers are preparing to bring to the world. Let us be optimistic about the future of Twine.
- Twine official website, Twine Introduction
- Nova Spivack (founder of Radar Networks), What a Week!
- Danny Ayers, Radar Networks decloak: Twine
- Nicholas Carr, Twine: a social network with brains
- Dan Farber, Radar Networks weaves semantic Twine
- Martin LaMonica, Radar Networks' Twine: Semantic Web meets information overload
- Brad Linder, Twine: A social network built on the semantic web dls interview
- Richard MacManus, Twine: The First Mainstream Semantic Web App?
- Paul Miller, Web 2.0 Summit - tying it all together with Twine
- Chris Morrison, Radar’s Twine: A semantic complement to Google
- Tim O'Reilly, Web2Summit: Radar Networks Unwinds twine.com
- Shelly Powers, Semantic to Go
- Peter Rip, Initial Experience with Twine
- Julie Sloane, Radar Networks To Unveil Its Semantic Web App, Twine
Many other related discussions can be found here.
Friday, October 12, 2007
Not a social network, but a business networking tool. This is what LinkedIn CEO Dan Nye told The New York Times recently. "Not a social network!" Is there anything wrong with the statement? Actually, nothing is wrong. What LinkedIn really wants to express (but is shy to say) is that LinkedIn aims to be a social network devoted to elites rather than plebeians.
By contrast, Facebook's policy of an open platform (or open API) is a manifesto of devotion to the general public, i.e. the plebeians. Anyone can get a free place for their dreams of social networking. The realization of those dreams may be inelegant and poorly maintained. But an open platform rewards creative and diligent minds, even when they are short of money and their plans are short of consideration.
Plebeians do not care much about security. As a matter of fact, plebeians are often more willing than the rich elites to try unknown applications. Why? If one does not own much at the beginning, how much can he lose in the end?
Certainly the noble elites think of things in other ways. They own much, and thus they worry more. Noble elites care much about confidentiality. They want to be safer, i.e. more closed to themselves, even within a "social" network.
Well, I guess LinkedIn has properly found its customers. However, isn't being noble also equivalent to being solitary and short of choices? So LinkedIn will be.
Tuesday, October 09, 2007
(last updated, June 10th, 2008)
Many people agree on Web evolution, but few take it seriously. As a term, "Web evolution" is commonly used. But few people have thoughtfully studied its principles, i.e. why and how the Web evolves. Even after the initiative of Web Science, Web evolution, supposedly a major branch of Web Science, still lacks considerable attention. For example, Wikipedia, the most popular online encyclopedia, has no entry on Web evolution even now (last checked June 10th, 2008). We need to change this situation.
A Brief History
One of the early attempts at formalizing the concept of evolution on the Web was made by Tim Berners-Lee, the father of the World Wide Web. In 1998, he explained the importance of the evolvability of Web technology: in short, we need to preserve space within Web technologies so that they can be continuously upgraded to accommodate new requirements. According to Tim, "evolvability" is one of the two fundamental goals of all W3C technologies (the other being "interoperability"). Berners-Lee also emphasized that the key evolutionary issues at the time were language evolution and data evolution. Within the context of his discussion, the term "evolvable" was actually closer in meaning to "extensible" than to "evolutionary".
A more recent discussion of Web evolution took place at the panel "Meaning on the Web: Evolution or Intelligent Design?" in Edinburgh, United Kingdom, during the WWW-2006 conference. The panel invited five well-known web researchers: Ron Brachman, Dan Connolly, Rohit Khare, Frank Smadja, and Frank van Harmelen. The panel description read as follows.
"should meaning on the Web be evolutionary, driven organically through the bottom-up human assignment of tags? Or does it need to be carefully crafted and managed by a higher authority, using structured representations with defined semantics?"
The evolution of meaning specifications on the Web is a central issue of Web evolution; and this issue is particularly critical to the vision of Semantic Web. But this panel still did not touch the very core of Web evolution, i.e. what the essential driving force of web evolution is and how this force really drives the Web forward.
Very recently, at WWW 2008, we finally had a workshop, organized by the WSRI, that focused solely on the study of Web evolution. Although it is a big step forward, most of the accepted papers in the workshop still focus on describing various phenomena of Web technology evolution rather than digging into the fundamental reasons that drive the progress of Web evolution and how these reasons may drive the Web forward in the future.
Formal Study of Web Evolution
To the best of my knowledge, the article "Evolution of World Wide Web, a historical view and analogical study" is the first attempt to explain the essence of Web evolution on the ground of a theoretical study. The first draft of Part 1 was posted on January 12, 2007, and the first draft of Part 2 on April 27, 2007; Part 3 is still in progress. Part 1 describes an analogical comparison between the growth of the World Wide Web and the growth of humans. Part 2 makes a scientific abstraction of the analogy discussed in Part 1 and condenses a view of Web evolution into two postulates and seven corollaries. Furthermore, in Part 2 we also apply the newly abstracted theory of Web evolution to predict the path towards the next-generation Web (or Web 3.0, as some call it).
We have put a great deal of effort into writing and revising these articles. But it is simply too broad and sophisticated a project to perfect in a short time. Hence, in the meantime, I have authored a compact series about Web evolution in ten installments here at Thinking Space (the full list of posts is attached at the end of this post). This series is more up to date than the original article.
Brief Summary of the Web Evolution Theory
If the World Wide Web does evolve, we believe that the progress of Web evolution must obey the general law of Transformation of Quantity into Quality, a law common to any evolutionary process in the world. In the particular case of Web evolution, this law manifests as a spiral advancement consisting of unceasing quantitative accumulation of Web resources and successive qualitative stage transitions. Whenever the quantity of Web resources reaches a level at which it becomes too large to be efficiently handled by the Web resource operating mechanism of the time, the Web demands an upgrade of that operating mechanism (a qualitative transition) to ensure the continuity of Web evolution. After the qualitative transition is done, the Web starts a new round of quantitative accumulation of Web resources at a higher level. The transition from Web 1.0 to Web 2.0 is a typical example of this theory.
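The spiral described above can be sketched as a toy simulation. This is my own illustration, not part of the original article: all names and numbers (growth rate, capacity multiplier) are arbitrary assumptions, chosen only to show how steady quantitative accumulation, bounded by the capacity of the current operating mechanism, periodically triggers a qualitative stage transition.

```python
# Toy sketch of "Transformation of Quantity into Quality" on the Web.
# Resources accumulate every year; when they exceed what the current
# operating mechanism can handle, a stage transition (e.g. 1.0 -> 2.0)
# upgrades the mechanism, and accumulation resumes at a higher level.
# All parameter values here are illustrative assumptions.

def simulate_web_evolution(years: int, growth_rate: float = 1.5,
                           initial_capacity: float = 100.0) -> list:
    """Return the stage number reached after each simulated year."""
    resources = 10.0             # accumulated Web resources (arbitrary units)
    capacity = initial_capacity  # what the current mechanism operates efficiently
    stage = 1                    # start at Web 1.0
    stages = []
    for _ in range(years):
        resources *= growth_rate     # unceasing quantitative accumulation
        if resources > capacity:     # too much for the current mechanism
            stage += 1               # qualitative transition to the next stage
            capacity *= 10           # upgraded mechanism handles far more
        stages.append(stage)
    return stages

print(simulate_web_evolution(15))
```

The stage number never decreases, and each transition raises the ceiling for the next round of accumulation, which is the spiral shape the theory describes.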
Although the general law of Transformation of Quantity into Quality explains the path of Web evolution, it does not explain the reasons behind the unceasing quantitative accumulation of Web resources. In other words, why does this accumulation happen and never stop? The answer is related to the human aspect of the World Wide Web. In the end, the World Wide Web is a project produced by humans, contributed to by humans, and serving humans.
The fundamental power behind unceasing human contribution to the Web lies in human nature---the desire to be known while alive and to be remembered after death. Humans are social creatures. The invention of the World Wide Web helps satisfy this deep concern of humanity. This fulfillment is the fundamental momentum that drives the accumulation of resources on the Web.
This theory of Web evolution is not flawless. Many of its arguments are debatable and open to amendment. The main purpose of this work is to bring the world a fresh vision of Web evolution. In fact, Web evolution is not just about the Web; it is ultimately about all humans and our society.
A View of Web Evolution
1. In the Beginning …
2. Three Evolutionary Elements
3. Two Postulates
4. Web Evolution and Human Growth
5. Evolutionary Stage
6. Qualities of Evolutionary Stages
7. Trigger of Transition
8. Beginning of a Stage Transition
9. Essence of Web Evolution
10. Completion of a Stage Transition
Monday, October 08, 2007
Allow me to do a little bit of advertising. (I very rarely do so.) Planet Semantic Focus is now open to the public. Planet Semantic Focus is an automated aggregator that delivers the most up-to-date news and discussions, primarily from various semantic-web-oriented blogs. It is a nice place to check if you do not have the time or patience to wander through all these sites. With this important addition, SemanticFocus grows ever closer to being a central hub of digested semantic-web information. James Simmons has done terrific work building up the site. Great work, James!
Sunday, October 07, 2007
(Revised October 19, 2007)
This new series is the follow-up to A View of Web Evolution. In the previous series, we studied a theory of web evolution. In this new series, we apply that theory to predict the next-generation web. Since this new generation follows Web 2.0, I adopt the name "Web 3.0" for it. So this series may also be called The Path towards Web 3.0.
To know a path, the first thing we need to find is its starting point. The starting point of the path toward the next-generation web is the current web, well known as Web 2.0. So in this first installment we begin with the question: what is Web 2.0?
Various Expressions of Web 2.0
There have been dozens of expressions of what Web 2.0 is. I picked five representative ones that complement each other. By analyzing these five expressions, I am going to derive a new definition of Web 2.0 from which our path towards the next generation starts.
Expression from Tim O'Reilly
Tim O'Reilly, one of the co-inventors of the term "Web 2.0," gave a substantial explanation of Web 2.0. I have summarized his five-page explanation into one compact definition.
Web 2.0 is a platform web that enriches user experiences by harnessing collective intelligence, encouraging explicit declaration of ownership over data, and prompting the conversion from software products to services.
This definition seems complicated, but it is not. Let's unfold its meaning clause by clause.
1. A platform web is a web of platform. Components in a platform web are portable: users can freely plug these portable web components into, and unplug them from, arbitrary web spaces.
2. Since users can freely plug their favorite web components into (and unplug them from) their own web spaces, a platform web enriches user experiences on the Web.
Enriching user experiences is a general long-term goal of web evolution. The typical methods used to enrich user experiences, however, vary across evolutionary stages. At the current stage, three typical methods enable this web of platform: collective intelligence, explicit ownership over web resources, and SaaS (Software as a Service). These methods are the identifiers of Web 2.0.
3. Originally, the World Wide Web allowed everybody to contribute and everybody to enjoy contributions from everybody else. A web of platform upgrades this philosophy by making every contribution not only enjoyable but also freely portable to everybody else. Through this augmentation, web intelligence becomes collective.
4. When web components are freely portable, the clarification of their ownership becomes a crucial issue. A web of platform requires an explicit mechanism for declaring ownership over web resources so that they will not be mixed up during deployment.
5. In particular, traditional software products are not suitable for a web of platform because they are generally not portable. The conversion from software products to services prompts the growth of a platform web.
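The clauses above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of my own (none of these class or function names come from O'Reilly's text): portable components carry an explicit ownership declaration, and users can freely plug them into, and unplug them from, arbitrary web spaces.

```python
# Hypothetical sketch of a "web of platform": portable components with
# explicit ownership, freely pluggable into arbitrary web spaces.
# All names are illustrative, not from any real API.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Component:
    name: str    # e.g. a weather widget or a photo feed
    owner: str   # explicit ownership declaration (clause 4)

@dataclass
class WebSpace:
    title: str
    components: list = field(default_factory=list)

    def plug(self, component: Component) -> None:
        self.components.append(component)   # freely add a portable piece

    def unplug(self, component: Component) -> None:
        self.components.remove(component)   # ...or remove it again

    def render(self) -> str:
        # Ownership stays attached to each piece, so contributions
        # are never "mixed up" during deployment (clause 4).
        return "\n".join(f"{c.name} (by {c.owner})" for c in self.components)

# The same component is portable across different spaces (clauses 1-3).
widget = Component(name="photo-feed", owner="alice")
home = WebSpace(title="My homepage")
blog = WebSpace(title="My blog")
home.plug(widget)
blog.plug(widget)
print(home.render())   # photo-feed (by alice)
```

The design point is that ownership travels with the component itself rather than with the space it is plugged into, which is what makes free portability safe.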
Expression from Joining Dots
Joining Dots, a research consultancy in the UK, published its vision of Web 2.0. The people at Joining Dots did not try to define Web 2.0 in general but expressed it in their own way.
Web 2.0 is the joining dot of digital natives, internet economics, and the Read/Write Web.
This compact expression contains three clauses.
Web 2.0 connects people, and the connected people are digitalized and become native to the Web. This observation is illuminating: real humans become part of Web 2.0. In comparison, real humans were foreigners (not natives) to the Web before Web 2.0. Humans visited Web 1.0; humans live on Web 2.0.
Web 2.0 represents special economic opportunities. The theory of the Long Tail is a commercial variant of collective intelligence. With the participation of "native web people," the execution of the Long Tail becomes not only practical but also profitable.
Web 2.0 is a Read/Write Web. Blogs and wikis (primarily for reading and writing) prompt the transformation of web users from foreigners to natives on the Web. This presentation of the Read/Write Web is a precise description of the Web 2.0 front end. Comparatively, the presentation of the platform web in Tim O'Reilly's expression is a description of the Web 2.0 back end.
Expression from Andy Budd
Andy Budd, an internationally renowned user experience designer and web standards expert, has his understanding of Web 2.0.
"Web 2.0 isn't a thing... It's a state of mind."
This sounds even weirder. What Andy emphasizes is that Web 2.0 is a change of mindset. The essence of WWW never changes; what does change regularly is how we understand the Web. Before Web 2.0, we commonly thought of the Web as a document delivery system. With Web 2.0, we start to see the Web as an application platform. This updated view gives the Web a new life.
Although it is short, Andy's expression helps us better understand web evolution. Web evolution does not change the essence of WWW, i.e. an interlinked system that prompts human communication. But with continuously upgraded views, WWW may mean different things to us in different periods. Looking for a revolutionary yet pragmatic view is the key to investigating the unknown future of the World Wide Web. This insight is a guiding principle when we foresee Web 3.0.
Expression from Nicholas Carr
Nicholas Carr, an acclaimed business writer and speaker whose work centers on strategy, innovation, and technology, offered a striking vision: "Web 2.0 is amoral."
"From the start, the World Wide Web has been a vessel of quasi-religious longing," Nicholas said. He also quoted from Kevin Kelly's marvelous article We Are the Web: "because of the ease of creation and dissemination, online culture is the culture." In the end, Nicholas concluded that Web 2.0 is "what it is, not what we wish it would be."
These expressions are enlightening. As "a vessel of quasi-religious longing," the Web is a quasi-religious existence. The growth of WWW is thus beyond judgments of right or wrong, moral or immoral, profitable or unprofitable, favorable or unfavorable to individuals. Based on Nicholas's expressions, the emergence and growth of Web 2.0 is due to objective principles that reflect the amoral will of the general public. Web 2.0 did not emerge because we wished there to be a Web 2.0; it emerged only because it was time for it to emerge. Web evolution is independent of human will.
Expression from Dion Hinchcliffe
Dion Hinchcliffe, a well-known evangelist of Web 2.0, SOA, and Enterprise 2.0, gave another outstanding explanation of how we got Web 2.0. His expression supplements Nicholas's.
"Web 2.0 is what happened while we were waiting for the Semantic Web."
In reality, no one had expected Web 2.0 before it suddenly boomed. What did people originally expect? The answer was the Semantic Web. The blueprint of the Semantic Web was drawn several years before the name "Web 2.0" was coined. Many web evangelists had looked for a web that machines could understand. Very few had thought of a web in which humans were digitalized. Most web evangelists back in the pre-2.0 age, and even a few top-tier web professionals at present, consider connecting humans on the Web a false question. Hadn't we already connected humans in the original World Wide Web? they asked. They have confused "the connection to the Web" with "the connection on the Web."
Dion's observation is revealing. We did not expect the emergence of Web 2.0 at all. The emergence of blogging seemed a minor improvement in online communication; the spread of web services seemed a standard advance in leveraging web applications; the rise of tagging seemed little more than semantic sugar; and the invention of AJAX seemed nothing but one of a hundred newly invented technologies. Few people saw how all these little things might constitute a grand new version of the World Wide Web.
While few people were aware of it, the transition to Web 2.0 had already started. Web 2.0 simply came to the world as an uninvited guest. But this unexpected guest received one of the most magnificent welcome parties ever. This is the astonishing story of Web 2.0.
My Expression of Web 2.0
Web 2.0 signals that the World Wide Web has evolved to the second major stage in its history of evolution. Web 2.0 is a new, amoral view of the World Wide Web that digitalizes human participation through collective intelligence, explicit ownership over web resources, and portable web services. At the front end, Web 2.0 is a Read/Write Web. At the back end, Web 2.0 is a web of platform.
This expression summarizes the five Web 2.0 expressions we just reviewed, together with my own explanations. In the following I explain its clauses one by one.
Web 2.0 is a major stage in web evolution. This claim is directly based on the view of web evolution. I do not view "2.0" as a pure marketing term. On the contrary, it is a precise declaration that the World Wide Web has come to the second major stage in its history.
Web 2.0 is a new view of the World Wide Web rather than a new World Wide Web. We have not built a new Web; we are still on the only Web. But we do have a new vision of the Web. This clause is an alternative statement of Andy Budd's expression.
Web 2.0 is an amoral new view. Web 2.0 is not a human-designed plan. This view of Web 2.0 exists independent of individual prejudice. Web 2.0 would take its current form regardless of human efforts to either promote or block it. Humans may accelerate or decelerate its progress, but there is no way to stop or divert it. This clause is a reassessment of the expressions from Nicholas Carr and Dion Hinchcliffe.
Humans participate in Web 2.0 as digitalized natives. Humans on Web 2.0 are connected within the Web instead of merely to the Web. The transformation of humans from foreigners to natives of the Web is a primary sign of Web 2.0. This clause is based on the expression from Joining Dots.
This immigration process (humans turning from foreigners into natives of the Web) is only at its beginning on Web 2.0. On Web 2.0, the digitalization of humans is still far from its ultimate form; the transformation is at its unnoticed start, and few people have really recognized its revolutionary value. Further web evolution will gradually show how this transformation impacts the world. This clause is a composition of the view of web evolution and the expression from Joining Dots.
This engagement of human participation on Web 2.0 is achieved by prompting collective intelligence, explicit ownership over web resources, and portable web services. Tim O'Reilly's expression explains which human characteristics are digitalized on Web 2.0 and how. At the individual level, Web 2.0 digitalizes ownership (i.e. the self, the foundation of the individual) and service (i.e. action, the foundation of being alive). Mashup is an improvement on actions, which we will specifically discuss in later installments. At the community level, Web 2.0 digitalizes the aggregation of individual contributions, which is collective intelligence. These three typical achievements constitute the foundation of a digital society. A social network aggregating digitalized living selves is the essence of Web 2.0.
Web 2.0 has two basic presentations, one for professional web developers and one for ordinary web users. To developers, the back end of Web 2.0 is a web of platform. This presentation tells how developers can contribute to Web 2.0 and manipulate Web 2.0 resources. Typical technology-side advances such as mashup and SaaS build upon this presentation.
To ordinary users, the front end of Web 2.0 is a Read/Write Web. This presentation tells how ordinary users play with Web 2.0 and how business people venture into it. Typical non-technology-side advances such as social networks and the Long Tail build upon this presentation.
Web 2.0 is more than a commercial slogan "to signify that the web was roaring back after the dot com bust!" Although the term may have begun with that sole purpose, it ended up describing a grand picture far beyond a commercial trick. From the various expressions of Web 2.0, we have seen that Web 2.0 is a major stage upgrade in the history of web evolution. After a long period of accumulation, the dot-com bust ultimately triggered Web 2.0; Tim O'Reilly and the other web evangelists happened to be the first to catch and name this big moment.
In this first installment, I present a new expression of Web 2.0 by integrating several previous expressions with my own view of web evolution. Under this new expression, Web 2.0 is a revolutionary new vision of the World Wide Web. From the Web 2.0 age onward, the World Wide Web becomes not only a great human-conducted project but also a digital society of real people. Web 2.0 is the starting point of the path towards the fascinating next-generation web.