Web Crawlers
Web crawlers, also known as web spiders or internet bots, are programs that browse the web in an automated manner for the purpose of indexing content. Crawlers can examine all sorts of things, such as page content, the links on a page, broken links, sitemaps, and HTML code validation.
Search engines like Google, Bing, and Yahoo use crawlers to properly index downloaded pages so that users can find them faster and more efficiently when searching. Without crawlers, there would be nothing to tell the search engines that your website has new and fresh content; sitemaps can also play a part in that process. So web crawlers are, for the most part, a good thing. However, there can also be issues with scheduling and load, since a crawler might be constantly polling your site, and this is where a robots.txt file comes into play. This file can help control the crawl traffic and ensure that it doesn’t overwhelm your server.
Web crawlers identify themselves to a web server by using the User-agent field in an HTTP request, and each crawler has its own unique identifier. Most of the time you will need to examine your web server access logs to see web crawler traffic.
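As a rough illustration, the following Python sketch tallies requests from a few well-known crawler User-agent strings; it assumes the combined log format and a log path that will differ on your server:

import re
from collections import Counter

# Substrings that well-known crawlers include in their User-agent header
CRAWLER_TOKENS = ["Googlebot", "bingbot", "Slurp", "DuckDuckBot", "Baiduspider", "YandexBot"]

counts = Counter()
with open("/var/log/apache2/access.log") as log:  # assumed path; adjust for your setup
    for line in log:
        # In the combined log format the User-agent is the last quoted field
        match = re.search(r'"([^"]*)"\s*$', line)
        if not match:
            continue
        user_agent = match.group(1)
        for token in CRAWLER_TOKENS:
            if token in user_agent:
                counts[token] += 1

for token, count in counts.most_common():
    print(f"{token}: {count} requests")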
Robots.txt
By placing a robots.txt file at the root of your web server you can define rules, such as Allow or Disallow, that well-behaved web crawlers are expected to follow. You can apply generic rules that apply to all bots, or get more granular and target a specific crawler’s User-agent string.
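As a simple sketch, a robots.txt along these lines blocks a couple of directories for everyone, adds stricter rules for one named crawler, and points at the sitemap (the paths, the Bingbot example, and the sitemap URL are placeholders, and Crawl-delay is a non-standard directive that only some crawlers honor):

# Rules that apply to every crawler
User-agent: *
Disallow: /admin/
Disallow: /tmp/

# More granular rules for one specific crawler
User-agent: Bingbot
Crawl-delay: 10
Disallow: /search/

Sitemap: https://www.example.com/sitemap.xml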
There are hundreds of web crawlers and bots scouring the internet, but below is a list of popular web crawlers and bots that we have collected based on the ones we see on a regular basis within our web server logs.
Googlebot: Googlebot is Google’s web crawling bot (sometimes also called a “spider”). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. Googlebot’s crawl process begins with a list of webpage URLs, generated from previous crawl processes and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links (SRC and HREF) on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
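To make that crawl loop concrete, here is a minimal, hypothetical Python sketch of the same idea: fetch a page, pull out its HREF and SRC links, and queue new URLs for later crawling. It is a toy illustration rather than how Googlebot actually works, and the seed URL is a placeholder:

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href and src attribute values from a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def crawl(seed_url, max_pages=10):
    queue = deque([seed_url])   # URLs waiting to be fetched
    seen = {seed_url}           # URLs already discovered
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except Exception:
            continue            # a real crawler would record the dead link here
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen

# Example usage with a placeholder seed URL:
# print(crawl("https://www.example.com/"))

A real crawler would add politeness on top of this loop: checking robots.txt, rate-limiting requests per host, and recording dead links so the index can be updated.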
"Google+"Another one you might see popup is Google+. When a user shares a URL on Google+ or an app writes an app activity, Google+ attempts to fetch the content and create a snippet to provide a summary of the linked content. This service is different than the Googlebot that crawls and indexes your site. These requests do not honor robots.txt or other crawl mechanisms because this is a user-initiated request.
"Baiduspider " Baiduspider is a robot of Baidu Chinese search engine. Baidu (Chinese: 百度; pinyin: Bǎidù) is the leading Chinese search engine for websites, audio files, and images.
"MSN Bot/Bingbot" This is a web-crawling robot (type of Internet bot), deployed by Microsoft to supply Bing (search engine). It collects documents from the web to build a searchable index for the Bing (search engine).
"Slurp Bot"Yahoo Search results come from the Yahoo web crawler Slurp and Bing’s web crawler, as a lot of Yahoo is now powered by Bing. Sites should allow Yahoo Slurp access in order to appear in Yahoo Mobile Search results.dditionally, Slurp does the following:
Collects content from partner sites for inclusion within sites like Yahoo News, Yahoo Finance and Yahoo Sports.
Accesses pages from sites across the web to confirm accuracy and improve Yahoo’s personalized content for its users.
"Yandex Bot" Yandex bot is Yandex’s search engine’s crawler. Yandex is a Russian Internet company which operates the largest search engine in Russia with about 60% market share in that country. Yandex ranked as the fifth largest search engine worldwide with more than 150 million searches per day as of April 2012 and more than 25.5 million visitors.
"Soso Spider " Soso.com is a Chinese search engine owned by Tencent Holdings Limited, which is well known for its other creation QQ. Soso.com is ranked as the 36th most visited website in the world and the 13th most visited website in China, according to Alexa Internet. On an average, Soso.com gets 21,064,490 page views everyday.
"DuckDuckBot"DuckDuckBot is the Web crawler for DuckDuckGo, a search engine that has become quite popular lately as it is known for privacy and not tracking you. It now handles over 12 million queries per day. DuckDuckGo gets its results from over four hundred sources. These include hundreds of vertical sources delivering niche Instant Answers, DuckDuckBot (their crawler) and crowd-sourced sites (Wikipedia). They also have more traditional links in the search results, which they source from Yahoo!, Yandex and Bing.
"Baiduspider"Baiduspider is the official name of the Chinese Baidu search engine’s web crawling spider. It crawls web pages and returns updates to the Baidu index. Baidu is the leading Chinese search engine that takes an 80% share of the overall search engine market of China Mainland.
"Sogou Spider"Sogou Spider is the web crawler for Sogou.com, a leading Chinese search engine that was launched in 2004. it has a rank of 103 in Alexa’s internet rankings. Note: The Sogou web spider does not respect the robots.txt internet standard, and is therefore banned from many web sites because of excessive crawling.
"Exabot"Exabot is a web crawler for Exalead, which is a search engine based out of France. It was founded in 2000 and now has more than 16 billion pages currently indexed.
"Facebook External Hit"Facebook allows its users to send links to interesting web content to other Facebook users. Part of how this works on the Facebook system involves the temporary display of certain images or details related to the web content, such as the title of the webpage or the embed tag of a video.
"Alexa Crawler"Ia_archiver is the web crawler for Amazon’s Alexa internet rankings. As you probably know they collect information to show rankings for both local and international sites.
"Google Feedfetcher" Used by Google to grab RSS or Atom feeds when users choose to add them to their Google homepage or Google Reader. Feedfetcher collects and periodically refreshes these user-initiated feeds, but does not index them in Blog Search or Google’s other search services (feeds appear in the search results only if they’ve been crawled by Googlebot).