
How Google Crawls the Web

SEO systems | Google | Friday December 26 2008

Google Crawls, Indexes, and Serves the Web… When you sit down at your computer and enter a Google search such as “SEO Systems”, you’re almost instantly presented with a list of results from all over the web. How does Google find websites and web pages matching your query, and determine the order of search results?

In the simplest terms, think of searching the web as looking in a very large book with an impressive index telling you exactly where everything can be found. When you perform a Google search, Google’s programs check their index to determine the most relevant search results to be returned (“served”) to you.

There are three key processes to search results:

Crawling
Indexing
Serving

Does Google know about your site? Can we find it?

Crawling is the process by which the Googlebot discovers new and updated pages to be added to the Google index.

Google uses a huge set of computers to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.

Google’s crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites, it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
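In miniature, that discovery loop is a breadth-first traversal: start from a seed list, fetch each page, extract its links, and queue any URL not yet seen. Here is a minimal sketch of the idea in Python; the `fetch` callback, the `LinkParser` class, and the `max_pages` cap are illustrative assumptions, not anything Googlebot actually exposes.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Hypothetical link extractor: collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, fetch, max_pages=100):
    """Breadth-first crawl: begin with seed URLs (e.g. from a Sitemap),
    follow discovered links, and skip pages already seen.
    `fetch(url)` is assumed to return the page's HTML as a string."""
    frontier = deque(seed_urls)       # list of pages to crawl
    seen = set(seed_urls)             # URLs already queued or fetched
    pages = {}                        # url -> fetched HTML
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        html = fetch(url)
        pages[url] = html
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)   # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return pages
```

A real crawler adds politeness delays, robots.txt checks, and revisit scheduling on top of this skeleton, but the frontier-plus-seen-set core is the same.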

Google doesn’t accept payment to crawl a site more frequently, and we keep the search side of our business separate from our revenue-generating AdWords service.

Can Google index your site?

Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, we process information included in key content tags and attributes, such as Title tags and ALT attributes. The Googlebot can process many, but not all, content types. For example, Google cannot process the content of some rich media files or dynamic pages.
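An index of “all the words it sees and their location on each page” is classically an inverted index: a map from each word to the documents and positions where it occurs. A minimal sketch, assuming the crawled pages are already plain text in a `{url: text}` dict (a simplification of what Googlebot actually extracts):

```python
import re
from collections import defaultdict

def build_index(pages):
    """Build a positional inverted index: each lowercased word maps to a
    list of (url, word_position) pairs where it appears."""
    index = defaultdict(list)
    for url, text in pages.items():
        for position, word in enumerate(re.findall(r"\w+", text.lower())):
            index[word].append((url, position))
    return index
```

Serving a query then reduces to looking up each query word in this map and intersecting or ranking the resulting page lists.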

Does the site have good and useful content that is relevant to the user’s search?

When a user enters a query, our machines search the index for matching pages and return the results we believe are the most relevant to the user. Relevancy is determined by over 200 factors, one of which is the PageRank for a given page. PageRank is the measure of the importance of a page based on the incoming links from other pages. In simple terms, each link to a page on your site from another site adds to your site’s PageRank. Not all links are equal: Google works hard to improve the user experience by identifying spam links and other practices that negatively impact search results. The best types of links are those that are given based on the quality of your content.
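The intuition that “each link to a page adds to its PageRank” is captured by the classic textbook recurrence: a page distributes its score evenly across its outgoing links, damped by a constant. This power-iteration sketch is the published academic formula, not Google’s production system (which weighs over 200 factors); the damping value and iteration count are conventional choices, and the link graph is assumed to be closed.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank sketch. `links` maps each page to the
    list of pages it links to; every linked page must appear as a key."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start uniform
    for _ in range(iterations):
        # Baseline share every page receives regardless of links.
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling page: contributes nothing here
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank
```

On a tiny graph where two pages link to “a” but only one links to “c”, “a” ends up with the higher score, matching the article’s point that incoming links raise importance.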

In order for your site to rank well in search results pages, it’s important to make sure that Google can crawl and index your site correctly; hence the importance of search engine optimization.

SEO systems
