That's because Google uses crawling. Crawlers like Googlebot visit web pages and follow the links on those pages, bringing data about each page back to Google's servers. The crawl process begins with a list of web addresses from past crawls and from sitemaps provided by website owners. With the robots.txt file, site owners can choose not to be crawled by Googlebot, or they can give instructions about how pages on their sites should be processed. This process is known as crawling: the crawler scans every word on a page, its location, the page's metadata, and its links. It runs continuously and takes a lot of computational power.
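To make the idea concrete, here is a minimal, hypothetical crawler sketch in Python (standard library only; the seed URL and page limit are just illustrative values). A real crawler like Googlebot is vastly more sophisticated, but the basic loop of "fetch a page, respect robots.txt, store the content, follow the links" looks roughly like this:

```python
# A minimal crawler sketch, using only Python's standard library.
# The seed URL and crawl limit are arbitrary values chosen for illustration.
from collections import deque
from html.parser import HTMLParser
from urllib import robotparser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    """Breadth-first crawl starting from `seed`, honouring robots.txt."""
    queue, seen, pages = deque([seed]), {seed}, {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()

        # Check robots.txt before fetching, as polite crawlers do.
        rp = robotparser.RobotFileParser()
        rp.set_url(urljoin(url, "/robots.txt"))
        try:
            rp.read()
            if not rp.can_fetch("*", url):
                continue
        except OSError:
            pass  # robots.txt unreachable; proceed cautiously

        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue

        pages[url] = html  # bring the page data "back to the server" for indexing later

        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if urlparse(absolute).scheme in ("http", "https") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages

if __name__ == "__main__":
    fetched = crawl("https://example.com")
    print(f"Fetched {len(fetched)} pages")
```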
Now that Google has all this data it mined, the next step is to organize it by indexing those web pages. Google's algorithms index the data based on many different factors and build an index of it, much like the index in your textbook. That's why it's easier to find a chapter by looking in the index instead of reading the whole book: the index tells you the location of that chapter without you having to read the book.
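A common data structure for this (an assumption about the general technique, not a description of Google's actual indexing pipeline) is an inverted index: a map from each word to the pages that contain it. Here is a minimal sketch that builds one from the pages a crawler fetched:

```python
# A minimal inverted-index sketch: map each word to the set of pages it appears on.
import re
from collections import defaultdict

def build_index(pages):
    """pages: {url: text}. Returns {word: set of urls containing that word}."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index

def search(index, query):
    """Return pages containing every word in the query (a simple AND search)."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

if __name__ == "__main__":
    # Illustrative documents standing in for crawled pages.
    pages = {
        "https://example.com/a": "How search engines crawl and index the web",
        "https://example.com/b": "A textbook index lists the location of each chapter",
    }
    idx = build_index(pages)
    print(search(idx, "index"))        # both pages
    print(search(idx, "crawl index"))  # only page a
```

Answering a query then becomes a lookup in the index rather than a scan of every page, which is exactly the textbook-index analogy above.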
You can enable something similar on your own computer, in the file indexing settings. The indexing service keeps scanning your computer in the background for new files and for files that have been renamed, edited, or deleted. So, in a way, it has already searched everything before you even make a search query. It does use some of your computer's resources, but modern computers handle a workload like this easily, and enabling indexing for search is generally recommended.
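As a rough illustration of what such a local index looks like under the hood, here is a hypothetical sketch that walks a folder once and builds a filename index, so later lookups never have to rescan the disk (a real OS indexer also watches for changes in the background and indexes file contents, not just names):

```python
# A hypothetical sketch of a local file index: scan once, then answer
# filename queries from the in-memory index instead of rescanning the disk.
import os
from collections import defaultdict

def build_file_index(root):
    """Map each lowercase filename word to the full paths containing it."""
    index = defaultdict(set)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            for word in name.lower().replace(".", " ").replace("_", " ").split():
                index[word].add(path)
    return index

def find(index, term):
    """Instant lookup: no disk access needed once the index is built."""
    return sorted(index.get(term.lower(), set()))

if __name__ == "__main__":
    idx = build_file_index(os.path.expanduser("~/Documents"))  # illustrative path
    print(find(idx, "report"))
```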