Sam Marsden
SEO & Content Manager
In this guide we’re going to provide you with an introduction to how search engines work. This will cover the processes of crawling and indexing as well as concepts such as crawl budget and PageRank.
Search engines work by crawling hundreds of billions of pages using their own web crawlers. These web crawlers are commonly referred to as search engine bots or spiders. A search engine navigates the web by downloading web pages and following links on these pages to discover new pages that have been made available.
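The download-and-follow-links loop described above is essentially a breadth-first graph traversal. As a rough sketch only (not how any real search engine is implemented), here is a toy crawler in Python; the `PAGES` dictionary and its URLs are invented stand-ins for the live web, where a real crawler would download each page and parse its links:

```python
from collections import deque

# Toy "web": each URL maps to the list of links found on that page.
# In a real crawler, a fetch function would download and parse the page.
PAGES = {
    "https://example.com/": ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": [],
}

def crawl(seed, fetch_links):
    """Breadth-first crawl: visit a page, queue its unseen links, repeat."""
    seen = {seed}
    frontier = deque([seed])
    order = []
    while frontier:
        url = frontier.popleft()
        order.append(url)
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

print(crawl("https://example.com/", lambda u: PAGES.get(u, [])))
# → ['https://example.com/', 'https://example.com/a', 'https://example.com/b']
```

The `seen` set is what keeps the crawler from revisiting pages, and the frontier queue is where real-world concerns like crawl budget come in: a search engine only dequeues a limited number of URLs per site in a given period.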
The search engine index
Webpages that have been discovered by the search engine are added into a data structure called an index.
The index includes all the discovered URLs along with a number of key signals about the content of each URL, such as:
- The keywords discovered within the page’s content – what topics does the page cover?
- The type of content that is being crawled (often identified via structured data markup, such as Schema.org) – what is included on the page?
- The freshness of the page – how recently was it updated?
- The previous user engagement of the page and/or domain – how do people interact with the page?
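At its simplest, the keyword signal above can be stored as an inverted index: a mapping from each keyword to the set of URLs that contain it. The following is a minimal illustration only, with made-up documents; production indexes also store positions, signal weights, and far more metadata:

```python
from collections import defaultdict

def build_index(docs):
    """Map each keyword to the set of URLs whose content contains it."""
    index = defaultdict(set)
    for url, text in docs.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

docs = {
    "https://example.com/coffee": "best coffee beans",
    "https://example.com/tea": "best green tea",
}
index = build_index(docs)
print(sorted(index["best"]))
# → ['https://example.com/coffee', 'https://example.com/tea']
```

Looking a word up in this structure is a single dictionary access, which is why a search engine can identify candidate pages for a query without scanning every document it has crawled.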
What is the aim of a search engine algorithm?
The aim of the search engine algorithm is to present a relevant set of high-quality search results that will fulfill the user’s query/question as quickly as possible.
The user then selects an option from the list of search results, and this action, along with their subsequent activity, feeds back into the algorithm's future learning, which can affect search engine rankings going forward.
What happens when a search is performed?
When a search query is entered into a search engine by a user, all of the pages which are deemed to be relevant are identified from the index and an algorithm is used to hierarchically rank the relevant pages into a set of results.
The algorithms used to rank the most relevant results differ for each search engine. For example, a page that ranks highly for a search query in Google may not rank highly for the same query in Bing.
In addition to the search query, search engines use other relevant data to return results, including:
- Location – Some search queries are location-dependent e.g. ‘cafes near me’ or ‘movie times’.
- Language detected – Search engines will return results in the language of the user, if it can be detected.
- Previous search history – Search engines will return different results for a query depending on what the user has previously searched for.
- Device – A different set of results may be returned based on the device from which the query was made.
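To make the ranking step concrete, here is a deliberately simplified scoring sketch; the signal names, weights, and pages are all invented for illustration, and real ranking algorithms combine hundreds of signals in proprietary ways:

```python
def score(page, query_terms):
    """Toy relevance score: keyword overlap, with a small freshness boost."""
    overlap = len(query_terms & page["keywords"])
    return overlap + 0.5 * page["freshness"]  # freshness is 0.0-1.0

pages = [
    {"url": "/old", "keywords": {"coffee", "beans"}, "freshness": 0.1},
    {"url": "/new", "keywords": {"coffee", "beans"}, "freshness": 0.9},
]
query = {"coffee", "beans"}
ranked = sorted(pages, key=lambda p: score(p, query), reverse=True)
print([p["url"] for p in ranked])
# → ['/new', '/old']
```

Both pages match the query equally here, so the freshness signal breaks the tie. Swapping in different weights, or different signals such as location or device, is exactly why the same query can return different orderings on different search engines.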
Why might a page not be indexed?
There are a number of circumstances where a URL will not be indexed by a search engine. This may be due to:
- Robots.txt file exclusions – a file that tells search engines which parts of your site they shouldn’t crawl.
- Directives on the webpage telling search engines not to index that page (noindex tag) or pointing them to another, preferred version of the page (canonical tag).
- Search engine algorithms judging the page to be low quality, to have thin content, or to contain duplicate content.
- The URL returning an error page (e.g. a 404 Not Found HTTP response code).
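The robots.txt exclusion mentioned above can be checked programmatically. Python's standard library ships a parser for exactly this; the rules and URLs below are made up for illustration, and a real check would fetch the file from the site's `/robots.txt` path:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body directly (normally fetched from /robots.txt).
rules = """
User-agent: *
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/public/page"))   # True
```

Note that robots.txt only blocks crawling, not indexing as such: a disallowed URL can still appear in an index if other pages link to it, which is why the noindex directive exists as a separate control.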
Next Chapter: Search Engine Crawling