Crawl budget is a vital SEO concept that often gets overlooked.
There are so many tasks and issues an SEO expert has to keep in mind that it’s often put on the back burner.
In short, crawl budget can, and should, be optimized.
In this article, you will learn:
- How to improve your crawl budget.
- How crawl budget as a concept has changed over the last couple of years.
What Is Crawl Budget?
So for those of us who’ve had so much to think/worry/sweat about that we forgot what crawl budget even means, here’s a quick recap.
Crawl budget is simply the frequency with which search engines' crawlers (i.e., spiders and bots) go over the pages of your domain.
That frequency is conceptualized as a tentative balance between Googlebot’s attempts to not overcrowd your server and Google’s overall desire to crawl your domain.
Crawl budget optimization is just a series of steps that you can take specifically to up the rate at which search engines’ bots visit your pages.
The more often they visit, the quicker your updated pages make it into the index.
Consequently, your optimization efforts will take less time to take hold and start affecting your rankings.
With that wording, it certainly sounds like the most important thing we all should be doing every second, right?
Well, not entirely.
Why Is Crawl Budget Optimization Neglected?
To answer that question, you only need to take a look at this official blog post by Google.
As Google explains plainly, crawling by itself is not a ranking factor.
So that alone is enough to stop certain SEO professionals from even thinking about crawl budget.
To many of us, “not a ranking factor” is equated to “not my problem.”
I disagree with that wholeheartedly.
But even setting that aside, there are Google's Gary Illyes' comments. He has stated outright that, sure, for a huge website of millions and millions of pages, crawl budget management makes sense.
But if you run a modestly sized domain, then you don't actually have to concern yourself too much with crawl budget. (He in fact added that if you really do have millions and millions of pages, you should consider cutting some content, which would be beneficial for your domain in general.)
But, as we all know, SEO is not at all a game of changing one big factor and getting the results.
SEO is very much a process of making small, incremental changes, taking care of dozens of metrics.
Our job, in a big way, is about making sure that thousands of tiny little things are as optimized as possible.
In addition, although crawl budget is not a ranking factor by itself, as Google's John Mueller points out, keeping your site easy to crawl is good for conversions and for overall website health.
With all that said, I feel it’s important to make sure that nothing on your website is actively hurting your crawl budget.
How to Optimize Your Crawl Budget Today
Some optimization tasks are still heavy-duty, while the importance of others has changed so dramatically that they're no longer relevant at all.
You still need to pay attention to what I call the “usual suspects” of website health.
1. Allow Crawling of Your Important Pages in Robots.txt
This is a no-brainer, and a natural first and most important step.
Managing robots.txt can be done by hand, or using a website auditor tool.
I prefer to use a tool whenever possible. This is one of the instances where a tool is simply more convenient and effective.
Simply adding your robots.txt to the tool of your choice will allow you to allow or block crawling of any page of your domain in seconds. Then you upload the edited document and voila!
Obviously, pretty much anybody can do it by hand. But from personal experience I know that with a really large website, where frequent calibrations might be needed, it's just so much easier to let a tool help you out.
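Whether you edit robots.txt by hand or through a tool, it's worth sanity-checking that your important pages are actually allowed before uploading. Here is a minimal sketch using Python's standard `urllib.robotparser`; the rules and URLs below are hypothetical examples, not a recommended configuration.

```python
# Sanity-check a robots.txt draft before uploading it:
# confirm important pages are crawlable and blocked sections stay blocked.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for illustration only.
rules = """
User-agent: *
Disallow: /admin/
Disallow: /cart/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Important page: should be crawlable.
print(parser.can_fetch("*", "https://example.com/products/widget"))
# Internal page: should be blocked.
print(parser.can_fetch("*", "https://example.com/admin/login"))
```

Running this kind of check against every template URL on your site takes seconds and catches an accidental `Disallow` before a crawler ever sees it.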
2. Watch Out for Redirect Chains
This is a common-sense approach to website health.
Ideally, you would be able to avoid having even a single redirect chain on your entire domain.
Honestly, it’s an impossible task for a really large website – 301 and 302 redirects are bound to appear.
But a bunch of those, chained together, definitely hurts your crawl limit, to the point where the search engine's crawler might simply stop crawling without getting to the page you need indexed.
One or two redirects here and there might not damage you much, but it’s something that everybody needs to take good care of nevertheless.
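If you have a crawl export that maps each redirecting URL to its target, spotting chains is straightforward to script. Below is a minimal sketch; the `redirects` mapping and its URLs are hypothetical, and a "chain" here means two or more hops.

```python
# Detect redirect chains in a URL -> redirect-target mapping
# (e.g., exported from a site crawler). URLs are hypothetical.
redirects = {
    "/old-home": "/home",
    "/home": "/new-home",     # /old-home -> /home -> /new-home is a chain
    "/promo-2019": "/promo",  # a single hop, not a chain
}

def follow(url, redirects, limit=10):
    """Return the full hop sequence starting at `url`."""
    path = [url]
    while url in redirects and len(path) <= limit:
        url = redirects[url]
        if url in path:  # guard against redirect loops
            break
        path.append(url)
    return path

# Any starting URL whose sequence has more than two entries is a chain.
chains = {
    url: follow(url, redirects)
    for url in redirects
    if len(follow(url, redirects)) > 2
}
print(chains)
```

Fixing a chain usually just means pointing the first URL straight at the final destination, so each legacy URL costs the crawler only one hop.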
3. Use HTML Whenever Possible
While Googlebot has gotten better at crawling and rendering JavaScript, other search engines aren't quite there yet.
Because of that, my personal standpoint is, whenever possible, you should stick to HTML.
That way, you're not hurting your chances with any crawler.