
By now even casual web users are familiar with the term “Web 2.0”, probably because the most popular brands of the Web 2.0 revolution (YouTube, Flickr, Facebook) attract large numbers of novice and first-time web surfers.
Now there is a new revolution on the horizon, one that lacks the glitz and glamor of Web 2.0. This revolution is less about the human user, and more about the machine user.
Welcome To The Semantic Web, Where Machines Do All The Work
Imagine if you didn’t have to dig through Craigslist, eBay, and Google separately for the best deals on antique soup spoons. Now imagine a way for web developers to aggregate all of those sites with little effort, offering users a single point of reference for antique soup spoons.
Welcome to The Semantic Web, where machines do all the work. Continue reading for a preview of the revolution.
The term semantic web was coined by Sir Timothy John Berners-Lee, inventor of the World Wide Web and author of the first web browser. In a nutshell, the semantic web is an augmentation of the tools and languages that websites are built with, with the goal of making site content easily understood by software agents.
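To make that concrete, here is a minimal sketch of what “describing content for machines” can look like, using Python’s rdflib library to publish a product listing as RDF triples. The vocabulary, URLs, and listing data are hypothetical, invented purely for illustration.

```python
# A minimal sketch of describing a product as machine-readable RDF triples
# with the rdflib library. The namespace, property names, and item data are
# illustrative assumptions, not a real vocabulary.
from rdflib import Graph, Literal, Namespace, URIRef

SHOP = Namespace("http://example.com/vocab/")  # hypothetical vocabulary

g = Graph()
spoon = URIRef("http://example.com/items/antique-soup-spoon-42")

# Each triple is a (subject, predicate, object) statement a software
# agent can read directly, without guessing at page layout.
g.add((spoon, SHOP.category, Literal("antique soup spoon")))
g.add((spoon, SHOP.priceUSD, Literal(18.50)))
g.add((spoon, SHOP.seller, Literal("Example Antiques")))

print(g.serialize(format="turtle"))
```

The point is not the particular syntax but the shift in audience: instead of burying facts in prose and markup meant for human eyes, the site states them as data a program can consume.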
Google Has Some Of The Best Computer Scientists In The World Working On This Very Problem
Right now websites are constructed with only a human audience in mind. The navigation, graphics, and content are organized in a way that makes sense to people, not computer software. Writing programs that can read websites and absorb their information requires incredibly sophisticated engineering. Google has some of the best computer scientists in the world working on this very problem and has spent billions trying to get computers to understand the web’s content.
Although it has made great strides, Google’s search engine still can’t distill accurate and complete data sets from specific user queries. For example, search for “Core 2 Duo 15.4 Wide Screen Laptop < $1000” in Google’s product search and results come back from fewer than twenty sources. Although the results are all relevant, they represent only a tiny fraction of the real data set. The same search on Google’s main search engine returns over a million results. Quickly scanning these reveals hundreds of matches the product search missed, along with quite a bit of irrelevant information.
A person using Google could sift through all of the results and compile the desired data, but that would take enormous effort. The semantic web aims to shift that work to software by giving web developers new tools to describe their content. When content is enabled for the semantic web, smart software agents will be able to read it accurately and make intelligent use of the data.
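As a rough sketch of what that shift could look like, the snippet below runs the laptop search as a precise SPARQL query over semantic data instead of a keyword guess. The listings, names, and vocabulary are made up for illustration; only the rdflib query mechanism is real.

```python
# A hedged sketch of the kind of query a software agent could run once
# retailers publish machine-readable listings. The vocabulary and the toy
# data are hypothetical; the query syntax is standard SPARQL via rdflib.
from rdflib import Graph, Literal, Namespace, URIRef

SHOP = Namespace("http://example.com/vocab/")
g = Graph()

# Toy listings standing in for data aggregated from many retail sites.
for name, price in [("Laptop A", 949.0), ("Laptop B", 1099.0), ("Laptop C", 879.0)]:
    item = URIRef("http://example.com/items/" + name.replace(" ", "-"))
    g.add((item, SHOP.name, Literal(name)))
    g.add((item, SHOP.priceUSD, Literal(price)))

# The "< $1000" part of the query, expressed as an exact numeric filter
# rather than a string of keywords.
results = g.query("""
    PREFIX shop: <http://example.com/vocab/>
    SELECT ?name ?price WHERE {
        ?item shop:name ?name ;
              shop:priceUSD ?price .
        FILTER (?price < 1000)
    }
""")
for name, price in results:
    print(name, price)
```

Because the price is a typed value rather than text on a page, the filter returns a complete answer over whatever data the agent has gathered, with nothing irrelevant mixed in.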
Someday Computers Will Be Better At Searching The Web Than People
The core idea of the semantic web is that, like humans, intelligent software should be able to search and make sense of the web’s vast amount of data. This may sound a little far-fetched, but inroads are already being made toward making the semantic web a reality. Web 2.0 has popularized many of the technologies that are key to the semantic web: XML, RSS, and web services are commonly used in Web 2.0 applications and form the basis for some proposed semantic web standards.
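RSS is a good small-scale illustration of the principle: because a feed is structured XML, a few lines of code can pull out exactly the fields a program needs. The sketch below uses only Python’s standard library, with a toy feed inlined for the example; a real agent would fetch the feed from a URL.

```python
# Parsing an RSS feed with the standard library: the structured tags mean a
# program can extract titles and links directly, with no screen-scraping.
# The feed content here is a made-up example.
import xml.etree.ElementTree as ET

rss = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Deals</title>
  <item><title>Antique soup spoon</title><link>http://example.com/1</link></item>
  <item><title>Widescreen laptop</title><link>http://example.com/2</link></item>
</channel></rss>"""

root = ET.fromstring(rss)
for item in root.iter("item"):
    # Every item is guaranteed a title and a link, so there is no guessing.
    print(item.findtext("title"), "->", item.findtext("link"))
```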
Someday computers will be better at searching the web than people. When that day comes, people can spend their energy acting on the information rather than searching for it.