The greatest contribution of search engines is that they let machines turn the information on the Internet into clues people can follow. But knowing which keywords appear in a web page only meets people's need to find and browse pages. That is why, not long after Tim Berners-Lee proposed the WWW, he began advocating the Semantic Web. Why? Because machines cannot understand the content on the Internet. His ideal was that everyone, when making a web page or building a database, would express its content semantically, in a format machines can understand; the whole Internet would then become a well-structured knowledge base. As an ideal this is attractive, because both scientists and machines like things to be orderly. What Berners-Lee cares about is the data on the Internet, and whether that data can be reused again and again by other Internet applications. An example shows the appeal of standardized data. There is a product called LiberyLink: once installed, while you browse Amazon it automatically tells you whether the book you are looking at can be found in your local library, what its call number is, and so on. Because a book has a unified standard number and title, two different Internet services (Amazon and a local library catalog) can share data and offer users a brand-new service.
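The shared book number is the whole trick. Here is a minimal sketch of the idea; all names, fields, and data below are invented for illustration, and this is not LiberyLink's actual implementation:

```python
# Because a standard book number is a shared identifier, two unrelated
# services (an online store page and a local library catalog) can be
# joined without any coordination between them. Data is made up.

amazon_page = {"title": "Weaving the Web", "isbn": "978-0-06-251587-2"}

local_library = {
    "978-0-06-251587-2": {"call_number": "TK5105.888 .B47", "available": True},
}

def library_lookup(store_page, catalog):
    """Use the book number on the store page as the join key into the catalog."""
    return catalog.get(store_page["isbn"])

record = library_lookup(amazon_page, local_library)
if record:
    print(f"In your library: call number {record['call_number']}")
```

Neither side had to know the other exists; the standard identifier alone makes the two databases composable.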
However, after the Semantic Web was proposed, few people responded. Why? Because expecting the producer of every web page to provide so much extra information just so that machines can understand it is too much to ask; it amounts to people working for machines, which runs against human nature: people are lazy whenever they can be. Just look at Google's success. Google's PageRank technology uses the link relationships between web pages as the basis for ranking results, and so borrows, in effect, the judgment of page producers. Consider that the number of page producers is far smaller than the number of pure visitors; yet with this one innovation, harnessing part of the producers' effort, Google pushed itself to the peak of the Internet.
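The idea behind PageRank can be sketched in a few lines: treat every link a page producer creates as a vote, and let rank flow along those votes until it stabilizes. The four-page link graph below is invented for illustration; real PageRank also handles pages with no outgoing links and runs at web scale.

```python
# Minimal power-iteration sketch of link-based ranking.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:             # each link passes on rank
                new[target] += share
        rank = new
    return rank

# Every page here links somewhere; page "a" is linked to most often.
links = {
    "a": ["b"],
    "b": ["a"],
    "c": ["a"],
    "d": ["a", "b"],
}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "a" collects the most link weight
```

No page producer did any extra work for the machine; the ranking simply reuses the links they were already making for human readers, which is exactly the "laziness-compatible" design the Semantic Web lacked.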
Therefore, the next step for the Internet is to get everyone busy: the whole population weaves the web, and the power of software and machines then makes this information easier to find and browse for the people who need it. If WEB 1.0 was a data-centric network, then WEB 2.0, I think, is a people-centric Internet. We can see this viewpoint in some recent WEB 2.0 products.
Blog: users weave the web by publishing new knowledge and linking to other users' content, so that the content organizes itself naturally.
RSS: user-generated content is automatically distributed and subscribed to.
Podcast: personal audio/video publishing and subscription.
SNS: blogs plus links between people.
Wiki: users build a great encyclopedia together.
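RSS is what makes that automatic distribution possible: a post is published once in a fixed XML format, and any subscriber's software can read it without knowing anything about the publishing site. A minimal sketch using Python's standard library; the feed content below is invented:

```python
# A subscriber-side reader for a minimal RSS 2.0 feed: pull out every
# item's title and link. The feed itself is a made-up example.
import xml.etree.ElementTree as ET

feed_xml = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>My Blog</title>
    <link>http://example.com/</link>
    <item>
      <title>What is WEB 2.0?</title>
      <link>http://example.com/web20</link>
      <pubDate>Sat, 01 Oct 2005 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed_xml)
items = [
    (item.findtext("title"), item.findtext("link"))
    for item in root.iter("item")
]
print(items)  # [('What is WEB 2.0?', 'http://example.com/web20')]
```

The blogger writes in whatever tool is convenient; the fixed `title`/`link`/`pubDate` structure is what lets aggregators subscribe automatically.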
From the perspective of knowledge production, the task of WEB 1.0 was to put human knowledge onto the Internet through the power of commerce. The task of WEB 2.0 is to organize that knowledge organically through the power of every user's browsing and collaboration, deepening it in the process and striking new sparks of thought.
From the perspective of content producers, in WEB 1.0 commercial companies moved content onto the web; in WEB 2.0 users move new content onto the web themselves, simply and casually, through blogs and podcasts.
In terms of interactivity, WEB 1.0 was mainly site-to-user; WEB 2.0 is mainly peer-to-peer among users.
From a technical point of view, the web is moving toward the client and working ever more efficiently; take, for example, the Ajax techniques used in GoogleMAP/Gmail.
We can see users playing an ever more important role on the Internet: they contribute content, spread content, and provide the links and browsing paths among those contents. In SNS, content is organized with the user at its core. WEB 2.0 is a user-centered Internet.
So what distinguishes WEB 2.0, in this sense, from Tim Berners-Lee's Semantic Web? The Semantic Web starts from the premise that data should be regular and machine-reusable; it proposes semantic content-publishing tools, trying to make the Internet more orderly through rules and technical standards. Search engines such as Google, without the Semantic Web, strive to provide as many clues into the Internet as possible. WEB 2.0 encourages users to publish content in whatever way is most convenient (blog/podcast), yet it indexes this seemingly messy content through links that are either made spontaneously by users (blogs) or generated automatically around people by the system (SNS). Because these clues come from the users themselves, they fit users' experience better. The Internet is gradually shifting from an organization and reading mode centered on keywords to one that follows a user's personal portal (SNS) or personal train of thought (blog/RSS) as the clue. WEB 2.0 also emphasizes collaboration among users; Wiki is a typical example. Seen this way, the Internet is becoming more orderly, and every user is contributing: either content, or order for the content.
There will be much more discussion of the next-generation Internet, but one thing is certain: WEB 2.0 is a web threaded around people. Its business model is to provide tools that make it easier for users to weave the web, to encourage them to contribute content, and then, from the traces users leave on the Internet, to organize browsing clues and offer related services, creating new value for users and for the whole Internet.
I have answered too many questions already, so don't get me wrong; I simply think that by answering more questions, you come to understand better.