When it comes to internet search engines, the top two are without a doubt Google and Yahoo!.
Although the two are fierce competitors, they share more common bonds than some people might realize. Both were created by students at Stanford University. Yahoo! was created in January of 1994 by two Stanford graduate students, Jerry Yang and David Filo. The pair originally called their creation "Jerry's Guide to the World Wide Web" but later changed the name to Yahoo!, commemorating the word Jonathan Swift coined in his classic novel Gulliver's Travels to describe creatures that were rude, unsophisticated, and uncouth.

Four years after Yang and Filo created Yahoo! and introduced it to the world (by which time it had become an internet giant), two other Stanford University students, Larry Page and Sergey Brin, created their own search engine, Google, as a research project; the date was September 7, 1998. Google started out as the search engine used on Stanford University's website before the company went public on August 19, 2004. By the end of 2006, Google was the leading internet search engine, with over 50.8% of the market.
By the time it was a year old, Yahoo! had received over a million hits, and the sheer number of people who had found and were using Yahoo! prompted its creators to incorporate their creation in May of 1995. Yahoo! went public on April 12, 1996, selling 2.6 million shares.
Google's progress was a little slower than Yahoo!'s. Shortly after creating Google, Page and Brin registered the domain google.com on September 17, 1997, while the search engine itself continued to run on Stanford University's website. Approximately one year after registering the domain, the pair decided to incorporate their research project. Finally, on August 19, 2004, Google had its very first public offering. Google is currently the favorite internet search engine.
After its meteoric climb to glory, Yahoo!'s creators and shareholders were confident that they were holding onto a gold mine. They didn't predict the bursting of the dot-com bubble in the early 2000s. Yahoo! survived the crisis, but the value of Yahoo! stock dropped to $8.11, an all-time low.
Yahoo! ranks the websites and web pages registered with its search engine using results compiled and indexed by a web crawler. In addition to the rankings compiled by the web crawler, webmasters can, for a fee, purchase a submission to Yahoo!'s human-compiled directory. The annual fee is about three hundred dollars. The theory is that the listing humans provide will influence web crawlers into giving the website a higher ranking.
Google credits its success and popularity to the program it uses to search and rank web pages, a program it calls PageRank. Because Google worries that webmasters will use abusive techniques to garner higher rankings for their sites, it keeps the inner workings of PageRank a closely guarded secret. Google does acknowledge that PageRank runs on a link analysis algorithm. PageRank differed from earlier search engine ranking techniques because it graded each page based on the number and quality of the links pointing to it.
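The basic idea behind that link analysis was described publicly in Page and Brin's original research paper, so it can be sketched in a few lines of Python. The sketch below is only an illustration of that simplified, published form of the calculation (each page shares its score among the pages it links to, moderated by a damping factor), not Google's actual production system; the pages, the link structure, and the choice of 50 iterations are invented for the example.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {page: 1.0 / len(pages) for page in pages}

    for _ in range(iterations):
        # Every page starts with a small base score, then collects "votes".
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * rank[page] / len(targets)  # split this page's vote
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy example: pages A and B both link to C, so C ends up ranked highest.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))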
Yahoo! quickly grew fond of offering the webmasters who subscribed to its search engine the opportunity to purchase something called paid inclusion. In exchange for a fee, Yahoo! guaranteed that the web pages would be ranked. What Yahoo! didn't guarantee was what kind of ranking the web pages would receive; it refused to promise that the web pages would appear in the first two pages of a search.
Google uses a pay-per-click method to charge advertisers. Each time an advertiser's link is clicked, Google charges the advertiser's account fifty cents.
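Billing under that model is simple to picture: an advertiser's cost is the number of clicks multiplied by the per-click charge. The short Python snippet below assumes the flat fifty-cent rate mentioned above purely for illustration; in practice, per-click prices vary by keyword and advertiser.

COST_PER_CLICK = 0.50  # flat rate assumed from the description above

def ad_spend(clicks: int) -> float:
    """Total charged to an advertiser's account for a given number of clicks."""
    return clicks * COST_PER_CLICK

print(ad_spend(1200))  # 1200 clicks comes to 600.0 dollars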
Just a little over ten years ago, if people needed information they were forced to go to the local library and spend hours entombed amongst shelves of books. Now that the internet is available in almost every home, finding information is easier than ever before. When someone needs information, all they have to do is boot up their computer and type their request into a search engine.
A search engine is an information retrieval system that is designed to help find information stored on a computer system.
In 1990 the very first search engine was created by students at McGill University in Montreal. The search engine was called Archie, and it was invented to index FTP archives, allowing people to quickly access specific files. FTP (short for File Transfer Protocol) is used to transfer data from one computer to another over the internet, or through a network that supports the TCP/IP protocol. In its early days Archie contacted a list of FTP archives approximately once a month with a request for a listing. Once Archie received a listing, it was stored in local files and could be searched using the UNIX grep command. Archie began as a local tool, but as the kinks got worked out and it became more efficient it grew into a network-wide resource. Archie users could reach its services through a variety of methods, including e-mail queries, telnetting directly to a server, and eventually World Wide Web interfaces. Archie indexed only the names of computer files.
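To picture how that worked, here is a rough Python sketch of an Archie-style lookup: directory listings previously collected from FTP archive sites are kept as plain text, and a query simply scans them for matching file names, much as the original tool did with grep. The site names, listing contents, and example file name below are all made up for illustration.

import re

# Hypothetical stored listings: one block of file paths per FTP site.
listings = {
    "ftp.example.edu": "pub/gnu/emacs-18.59.tar.gz\npub/images/xv-3.10a.tar.gz\n",
    "archive.example.ca": "mirrors/x11/xv-3.10a.tar.gz\nmirrors/tex/dvips.tar.Z\n",
}

def search_listings(pattern: str) -> list[tuple[str, str]]:
    """Return (site, path) pairs whose file path matches the pattern."""
    regex = re.compile(pattern)
    hits = []
    for site, listing in listings.items():
        for path in listing.splitlines():
            if regex.search(path):
                hits.append((site, path))
    return hits

# Which archives carry a copy of xv-3.10a.tar.gz?
print(search_listings(r"xv-3\.10a\.tar\.gz"))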
In 1991, a student at the University of Minnesota created a search engine that indexed plain text files. The program was named Gopher after the University of Minnesota's mascot.
In 1993 a student at MIT created Wandex, the first Web search engine.
Today, search engines match a user's keyword query with a list of potential websites that might have the information the user is looking for. The search engine does this by using a piece of software called a crawler to probe web pages that match the user's keywords. Once the crawler has identified web pages that may be what the user is looking for, the search engine uses a variety of statistical techniques to establish each page's importance. Most search engines establish the importance of hits based on the frequency and distribution of words. Once the search engine has finished searching web pages, it provides a list of websites to the user.
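As a toy illustration of that frequency-based scoring, the Python sketch below ranks a couple of already-crawled pages by how often the query terms appear in their text. Real engines weigh many more signals than raw word counts; the page addresses and contents here are invented.

from collections import Counter

def score(page_text: str, query: str) -> int:
    """Count how many times the query terms occur in the page text."""
    words = Counter(page_text.lower().split())
    return sum(words[term] for term in query.lower().split())

pages = {
    "example.com/espresso": "espresso espresso machine reviews and espresso tips",
    "example.com/tea": "green tea brewing guide with one espresso mention",
}

query = "espresso machine"
ranked = sorted(pages, key=lambda url: score(pages[url], query), reverse=True)
for url in ranked:
    print(score(pages[url], query), url)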
Today, when internet users type a word into a search engine, they are given a list of websites that might be able to provide them with the information they seek. The typical search engine provides ten potential hits per page, and the average internet user never looks farther than the second page the search engine provides. Webmasters constantly find themselves forced to use new methods of search engine optimization to be highly ranked by the search engines.
In 2000, a study by Lawrence and Giles suggested that internet search engines were able to index only sixteen percent of all available web pages.