Search engine optimization, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.
But nearly every website will have pages that you don't want included in this exploration.
In a best-case scenario, these pages are doing nothing to actively drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.
We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it's a plain text file that lives in your website's root and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags carry instructions for specific pages.
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
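To make this concrete, here is a minimal sketch of how such a meta robots tag appears in a page's HTML (the directive values shown are examples; you would pick whichever combination fits your page):

```html
<!-- Placed inside the page's <head>; asks crawlers not to index
     this page, but still to follow the links it contains -->
<meta name="robots" content="noindex, follow">
```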
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it can control indexing for an entire page, as well as for specific elements on that page.
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
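For example, a server response carrying the header might look like the following (an illustrative response for a hypothetical PDF URL; the status line and other headers are just for context):

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```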
When Should You Use The X-Robots-Tag?
According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."
While the meta robots tag and the X-Robots-Tag can express the same directives, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are crawled and indexed.
- You want to serve directives site-wide rather than at the page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to use a comma-separated list of directives.
Maybe you don't want a certain page to be cached, and you also want it to be unavailable after a certain date. You can use a combination of the "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
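Such a combination can be expressed as a single comma-separated header. A sketch (the date is an arbitrary example; Google accepts several common date/time formats for unavailable_after):

```http
X-Robots-Tag: noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST
```

After the given date, search engines that honor the directive should stop showing the page in results, and noarchive prevents a cached copy from being served in the meantime.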
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as to apply directives on a larger, global level.
To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet to explain:
Crawler directives:
- Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on a site search engine bots are and are not allowed to crawl.

Indexer directives:
- Meta robots tag – allows you to specify, and prevent search engines from showing, particular pages of a site in search results.
- Nofollow – allows you to specify links that should not pass on authority or PageRank.
- X-Robots-Tag – allows you to control how specified file types are indexed.
Where Do You Put The X-Robots-Tag?
Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to the Apache configuration or to an .htaccess file.
The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we want search engines not to index .pdf file types. On Apache servers, the configuration would look something like the below:
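A minimal sketch of such a rule, assuming mod_headers is enabled, placed in the Apache configuration or an .htaccess file:

```apache
# Attach the header to every response whose filename ends in .pdf
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```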
In Nginx, it would look like the below:

location ~* \.pdf$ {
    add_header X-Robots-Tag "noindex, nofollow";
}
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
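Using Apache again, a sketch of such a rule might look like this (the extension list is illustrative; extend it to whatever image formats your site serves):

```apache
# Match common image extensions case-insensitively and mark them noindex
<FilesMatch "(?i)\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```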
Please note that understanding how these directives work, and the impact they have on one another, is crucial.
For example, what happens if both an X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked from crawling by robots.txt, then any indexing and serving directives cannot be discovered and will not be followed.
So, if directives are to be followed, the URLs containing them cannot be disallowed from crawling.
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way is to install a browser extension that will show you X-Robots-Tag information for the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
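If you prefer the command line, you can also inspect response headers directly with curl; this is a generic sketch, with example.com standing in for your own URL:

```shell
# -s silences progress output, -I fetches headers only;
# grep filters for the X-Robots-Tag header (case-insensitively)
curl -sI https://example.com/whitepaper.pdf | grep -i "x-robots-tag"
```

If the header is set, you'll see a line such as `x-robots-tag: noindex, nofollow`; no output means no X-Robots-Tag was returned for that URL.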
Another method, which scales well for pinpointing issues on sites with millions of pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022
Using X-Robots-Tags On Your Site
Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.
Just be aware: it's not without its risks. It is very easy to make a mistake and deindex your entire site.
That said, if you're reading this piece, you're likely not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your toolbox.