SEO Junkies' Guide: Indexing vs. Crawling

Understanding how search engines work is crucial to understanding how search engine optimisation (SEO) and rankings can be improved. The two main processes search engines use to gather information on keywords and web pages are indexing and crawling. In this guide, the SEO experts at SEO Junkies examine both terms and what they mean for SEO.

How Do Indexing and Crawling Work?

SEO Junkies have realised how important knowing the difference between these two processes is for improving SEO.

What Is Indexing?

Indexing has two different definitions, which can make it a little confusing to understand. One is the colloquial meaning of the word "indexed". This is essentially when a website shows up on Google or other search engines. What it means is that the site's pages have been added to a search engine's database (indexed) and therefore appear in search results.

Indexing in its technical sense is something different. Pages are indexed by a search engine during the crawling process: once a webpage has been crawled, relevant information such as keywords, phrases and their locations is collated by the search engine. When a user searches, the search engine's algorithms look those terms up in its database, like the index in a book, in order to locate matching pages and display them in the search results.
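To make the book-index analogy concrete, here is a minimal sketch of an inverted index in Python. The page URLs and text are invented for illustration; a real search engine's index is vastly larger and factors in far more signals.

```python
# A toy inverted index: map each word to the pages (and word positions)
# where it appears, so a keyword lookup is a direct read rather than a
# scan of every page. URLs and text are made up for illustration.
from collections import defaultdict

pages = {
    "https://example.com/": "seo guide to crawling and indexing",
    "https://example.com/blog": "how search engines index keywords",
}

index = defaultdict(list)
for url, text in pages.items():
    for position, word in enumerate(text.split()):
        index[word].append((url, position))

# Look a keyword up "like the index in a book":
print(index["indexing"])  # [('https://example.com/', 5)]
```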

What Is Crawling?

Crawling is the process by which search engines classify and catalogue website data, which is then indexed. This is done through the use of web crawlers, or robots.

Crawlers begin their process with a list of web addresses from previous crawls, and with a website's sitemap, to determine which pages to visit. Web crawlers will then follow a site's internal links in order to find data and keywords to index.
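As a rough illustration of that process, here is a minimal breadth-first crawler sketch in Python. The fetch_links() helper is a hypothetical stand-in for the downloading and link-parsing a real crawler such as Googlebot would do.

```python
# A toy breadth-first crawler: start from known URLs (previous crawls,
# a sitemap), visit each page once, and queue up its internal links.
from collections import deque

def fetch_links(url):
    # Hypothetical placeholder: a real crawler would download the page
    # here and parse its <a href="..."> tags for internal links.
    return []

def crawl(seed_urls):
    queue = deque(seed_urls)
    seen = set(seed_urls)          # avoid crawling the same page twice
    while queue:
        url = queue.popleft()
        # ...collect keywords and phrases here and hand them to the indexer...
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)

crawl(["https://example.com/"])
```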

Website owners can specify which parts of their websites can and can't be crawled through the use of a robots.txt file. Essentially, this regulates a web crawler's access to a site and determines which parts of the site can be indexed.
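As an illustration, the short sketch below uses Python's standard urllib.robotparser to check URLs against an example robots.txt; the rules, URLs and user agent name are invented.

```python
# How a well-behaved crawler consults robots.txt before fetching a URL.
import urllib.robotparser

# An illustrative robots.txt (the real file lives at a site's root,
# e.g. https://example.com/robots.txt):
rules = """\
User-agent: *
Disallow: /private/
""".splitlines()

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# A compliant crawler checks every URL before requesting it:
print(rp.can_fetch("MyCrawler", "https://example.com/private/page"))  # False
print(rp.can_fetch("MyCrawler", "https://example.com/public/page"))   # True
```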

However, robots.txt files are advisory rather than enforceable. Google, for example, obeys their crawl directives but can still index a disallowed URL if other pages link to it, while less scrupulous crawlers ignore the file entirely and may even deliberately target the directories you have disallowed.

Understand Indexing and Crawling with SEO Junkies

Indexing and crawling can be confusing. Luckily, the SEO experts at SEO Junkies are available to help.

SEO Junkies are experts with extensive knowledge of best-practice SEO. As such, they have a deep understanding of how search engines work, their processes and how these can affect a website's search engine optimisation.

To learn more about SEO Junkies’ services, or to talk to one of their SEO experts, visit their website today, give them a call on 0845 373 0595 or email them at .

