Daniel Waisberg of Google released a really nice explanatory video on how crawl budget works, covering crawl demand and crawl rate. He also spent most of the video doing a deep dive into the new Crawl Stats report in Search Console.
Here is the video embed:
Daniel explained how Google crawls the web and noted that Google does not have unlimited resources to do so, so it needs to prioritize what it crawls and when. Note, this topic is mostly relevant for larger websites, not really sites with a few thousand pages or fewer. Gary Illyes of Google wrote this topic up in detail back in 2017; I don't think there is anything new here, but having it in video format is nice.
Crawl Demand is how much of your content Google wants to crawl. It is affected by URLs Google has not crawled before and by Google's estimate of how often content changes on the URLs it already knows about.
Crawl Rate is the maximum number of concurrent connections a crawler may use to crawl your site. Google recalculates this number periodically. If your server and site can handle it, Google can crawl faster; if not, Google will crawl slower so as not to overload your server or site.
Crawl Budget is calculated by combining the crawl rate and crawl demand; it is the number of URLs Google can and wants to crawl on your site.
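To make the relationship between the three concepts concrete, here is a minimal sketch in Python. Google does not publish an exact formula, so this is purely illustrative: the function name and parameters (`crawl_budget`, `crawl_capacity`, `crawl_demand`) are hypothetical, and the `min()` logic just captures the idea that Google crawls no more than your server can handle and no more than it actually wants to crawl.

```python
# Illustrative sketch only -- crawl budget is not a public formula.
# The names crawl_capacity and crawl_demand are hypothetical stand-ins
# for Google's internal crawl rate limit and crawl demand signals.
def crawl_budget(crawl_capacity: int, crawl_demand: int) -> int:
    """Budget is bounded both by what the server can handle (capacity)
    and by how many URLs Google wants to crawl (demand)."""
    return min(crawl_capacity, crawl_demand)

# Fast, healthy server but little new or changing content: demand is the limit.
print(crawl_budget(crawl_capacity=10_000, crawl_demand=2_500))   # 2500

# Lots of fresh content but a slow server: capacity is the limit.
print(crawl_budget(crawl_capacity=1_200, crawl_demand=50_000))   # 1200
```

This is why, as Daniel explains, a faster server can raise the ceiling on crawling, but only if Google actually wants to crawl more of your content.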
At 3 minutes into the video, Daniel digs into the Google Search Console Crawl Stats report. This report shows you how often Google crawls your site and the responses it gets from your server. We covered its launch in November 2020. The report shows your site's general availability, the average page response time, and the number of requests made by Google.
Forum discussion at Twitter.