How search engines crawl and index websites: A beginner's guide

If you've ever wondered how search engines like Google and Bing deliver relevant results almost instantly after you type in a query, you're not alone. The process involves a combination of "crawling" and "indexing" — two fundamental activities that search engines perform to keep their vast databases up to date with the latest content on the web. For anyone new to the world of websites, understanding these processes is crucial, especially if you're looking to optimise your site for search engine visibility.

Click here to browse free training events on search engines and optimising your website. 

What is crawling?

Crawling is the process by which search engines discover new and updated pages on the web. This task is performed by automated programs known as "crawlers" or "spiders." The best-known crawler is Google's "Googlebot", which is, in essence, an automated version of the Chrome web browser many of you are using right now, just without a human at the controls.

How does crawling work?

1. Starting Point: Crawlers start their journey from a list of known URLs. These could be websites that are already indexed or URLs provided by webmasters through tools like Google Search Console. This is one of the reasons an XML sitemap is useful, as it provides search engines with a list of URLs that they should find when crawling your site.

2. Following Links: As crawlers browse the web, they follow links on each page they visit, discovering new pages along the way. In reality, the URLs they find in links are added to a crawl queue – crawlers don't necessarily visit them immediately.

3. Fetching Content: Once a crawler lands on a page, it fetches the content to analyse it. The crawler reads the HTML, examines the structure, and notes the text, images, and metadata. This is a two-stage process: the page is then added to a rendering queue, where all the files that make up the web page are "rendered" into the visual page we see in our browsers. Rendering takes more time and processing power, so the text and images from a page are processed first for speed, and the "whole page" is then rendered to assess its usability, quality of presentation and so on.

4. Storing Information: The information from the page is then stored and used in the indexing process.
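
To make these steps more concrete, here is a deliberately simplified sketch of a crawl loop, written in Python purely as an illustration. It is not how Googlebot works internally (real crawlers add politeness rules, robots.txt checks, prioritisation and rendering), but it shows the basic cycle described above: start from known URLs, fetch a page, collect its links, and queue them for later. The example.com seed URL and the max_pages limit are placeholders.

    # A deliberately simplified crawl loop (illustration only, not Googlebot).
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collects the href of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed_urls, max_pages=10):
        queue = deque(seed_urls)    # 1. starting point: a list of known URLs
        seen = set(seed_urls)
        fetched = 0
        while queue and fetched < max_pages:
            url = queue.popleft()
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
            except OSError:
                continue            # skip pages that fail to fetch
            fetched += 1
            collector = LinkCollector()
            collector.feed(html)    # 3. fetch and parse the page content
            for href in collector.links:
                absolute = urljoin(url, href)
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)   # 2. queue newly discovered links
            print("crawled:", url)           # 4. hand the page off for indexing

    crawl(["https://example.com/"])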


What is indexing?

Indexing is the next step after crawling. It involves analysing and storing the content fetched by the crawlers in the search engine’s database. Think of it as creating a massive library catalogue that search engines can refer to when answering a user's query.

Key elements of indexing

- Content Analysis: The search engine parses the content to understand what the page is about. This includes analysing the text, images, metadata, and even the page’s structure. Search engines don’t index every page they crawl – they decide whether the page is useful, unique or important enough to warrant being stored.

- Storing in the Index: The content is then organised and stored in a massive database called the "index." This index is what search engines refer to when a user types in a query.

- Ranking Signals: During indexing, search engines also assess how relevant the content is to various search queries. Factors such as title tags, headings, the main body text of the page, page load speed, mobile-friendliness, and inbound links play a crucial role.
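
To make the library catalogue analogy concrete, here is a toy version of an index in Python: it simply maps each word to the pages it appears on, so a query for a word can instantly return the matching pages. Real search indexes store vastly more (word positions, link data, quality signals), and the pages and text below are invented purely for the example.

    from collections import defaultdict

    # Invented example pages and text, purely for illustration.
    pages = {
        "https://example.com/sourdough": "how to bake sourdough bread at home",
        "https://example.com/jam": "easy strawberry jam recipe for beginners",
    }

    # Build the "catalogue": each word points to the set of pages containing it.
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)

    print(index["bread"])   # {'https://example.com/sourdough'}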

Technical issues that can affect crawling and indexing

Not all websites are treated equally by search engines. Certain technical issues can make a site appear low-quality, potentially harming its search visibility.

1. Broken Links: Links that lead to non-existent pages (404 errors) can frustrate both users and crawlers, signalling poor site maintenance.

2. Duplicate Content: Having the same content on multiple pages (either on the same site, or on different websites) can confuse search engines about which page to rank, potentially leading to lower rankings.

3. Slow Page Load Times: Search engines prefer sites that load quickly. Slow-loading pages may not only rank lower but might also be partially crawled or ignored.

4. Non-Mobile-Friendly Design: With the majority of searches now happening on mobile devices, a site that isn’t mobile-friendly could be penalised in mobile search results.

5. Robots.txt Misconfiguration: A poorly configured “robots.txt” file can accidentally block important pages from being crawled, leading to gaps in indexing.

6. Meta Robots and Canonical Tag Misconfigurations: The "robots" meta tag can stop a page being indexed, while the canonical tag tells search engines which version of a page is the original. Both are useful tools, but used incorrectly they can cause important pages to be ignored by search engines.
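
For reference, this is roughly what those two tags look like in a page's HTML. The URL is a placeholder, and whether a page needs either tag depends entirely on the situation; both normally sit inside the page's <head> section.

    <!-- Tells search engines not to add this page to their index -->
    <meta name="robots" content="noindex">

    <!-- Tells search engines which URL is the preferred ("canonical") version of this page -->
    <link rel="canonical" href="https://www.example.com/preferred-page/">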


Google Search Console: A webmaster’s best friend

Google Search Console (GSC) is a free tool that provides valuable insights into how Google crawls and indexes your site. It offers a range of features that help you monitor your site’s performance in Google Search and diagnose crawling and indexing issues.

Pictured: an example of Google Search Console's Performance report, showing traffic and keywords from Google organic search.

Key features of Google Search Console

- Performance Reports: View detailed reports on how your site performs in search, including clicks, impressions, and the average position for your queries. This is one of the primary uses of Search Console, as it’s the only source of accurate data about the keywords people search for when they find your site on Google (and which pages Google sends them to).

- Coverage Reports: See which pages are successfully indexed, and identify any errors or issues that might be preventing indexing. This is extremely useful for understanding why certain pages aren't indexed, or whether technical issues with the site are holding them back.

- Sitemap Submission: Submit an XML sitemap to help Google understand the structure of your site and discover all your pages more efficiently (a minimal example sitemap is shown after this list).

- URL Inspection: Check specific URLs to see if they've been crawled and indexed, and view any errors that may have occurred.

- Links Reports: View a list of sites that are linking to yours, including the anchor text and pages where the links appear (and which pages they link to on your site).
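
For reference, the Sitemap Submission feature above expects a standard XML sitemap, and a minimal one looks something like this (the URLs and date are placeholders; most content management systems and SEO plugins will generate the real file for you automatically):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2024-01-15</lastmod>
      </url>
      <url>
        <loc>https://www.example.com/services/</loc>
      </url>
    </urlset>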


Why you might want to block certain pages from search engines

Not every page on your website needs to be indexed by search engines. In some cases, you may want to block certain pages to maintain a high-quality site index or protect sensitive information.

Common reasons to block pages

- Duplicate Content: Blocking duplicate pages ensures that search engines focus on the original, canonical content, preventing ranking issues.

- Private or Confidential Information: Pages containing sensitive information (like user account pages or internal documents) should be blocked to protect privacy.

- Low-Quality or Thin Content: Pages with little or no valuable content can drag down the overall perceived quality of your site, so it’s best to keep them out of the index.

- Testing or Staging Sites: Pages that are under development or are part of a testing environment should be blocked from search engines to avoid them being indexed prematurely.

How to block pages

- robots.txt: This file tells search engine crawlers which pages or sections of your site should not be crawled (see the example after this list). Be careful with this, as blocking essential pages could hurt your SEO. You can view your robots.txt file at https://[yourdomain]/robots.txt

- Meta Tags: The “robots noindex” meta tag can be used on individual pages to prevent them from being indexed, without blocking crawlers entirely. If you need this, you can usually find the option in the SEO settings section when editing a page on your website.

- Password Protection: For highly sensitive pages, you can use password protection to keep them private and inaccessible to crawlers.
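
As an illustration of the robots.txt approach, the short file below asks all crawlers to stay out of two placeholder sections of a site (the paths are invented; yours will differ). One caveat worth knowing: robots.txt stops pages being crawled rather than indexed, so a blocked page can occasionally still appear in search results if other sites link to it. The "noindex" meta tag shown earlier is the more reliable way to keep a crawlable page out of the index.

    # Example robots.txt (the paths are placeholders)
    User-agent: *
    Disallow: /staging/
    Disallow: /account/

    Sitemap: https://www.example.com/sitemap.xml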


Further learning

This article was written to accompany a related webinar. Click here to view all our upcoming free webinars for businesses.

About the author: Ian Lockwood has run successful digital marketing and web development agencies for two decades, whilst delivering training & consultancy to over 1000 businesses. Ian is an expert in SEO, PPC, CRO and Analytics.
