A new robots.txt report has arrived in Google Search Console

With the new robots.txt report, Google has added more robots.txt-related functionality to Search Console. If you open Search Console and go to Settings, you will now see a new tool above Crawl stats; it completely replaces the old robots.txt tester tool.




Alongside this change, Google has also expanded its robots.txt documentation with more explanations and guidance, so that, together with the new report, you can get a clearer understanding of your robots.txt status and its impact on your website. The documentation also spells out how Google behaves under different conditions, which is worth understanding in detail.

The new robots.txt report in Google Search Console offers the following:

The robots.txt status of all hosts covered by your property, including subdomains and versions of the address with and without www.

The last time Google's crawlers fetched robots.txt.

Any warnings and errors related to the robots.txt file, along with issues that its configuration causes on the site.

A status summary for the robots.txt file, including its size, the directives it contains, and its availability.

With this update, the status of pages checked under Page indexing in the URL inspection tool is now accompanied by information from the robots.txt report.

The new robots.txt report contains several pieces of information, the most important of which are:

File path – the path where Google found the robots.txt file within the last 30 days.

Fetch status – the result Google got the last time it fetched robots.txt. The possible values are Not fetched - Not found (404), Not fetched - Any other reason, and Fetched (a sketch of how to check this yourself follows this list).

Checked on – the last time Google crawled the file.

Size – the size of the robots.txt file.

Issues – the problems and errors found in the robots.txt file; Google recommends using the robots.txt validator tools available on the web to investigate them.
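If you want to see which of these fetch statuses your own site would currently produce, a plain HTTP request is enough. The Python sketch below (the example.com address is just a placeholder) fetches /robots.txt and maps the outcome onto the three statuses listed above; it illustrates the idea and is not the mechanism the report itself uses.

import urllib.error
import urllib.request

def robots_fetch_status(site):
    """Fetch a site's robots.txt and classify the result roughly the way
    the Search Console report does."""
    url = site.rstrip("/") + "/robots.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        return f"Fetched ({len(body)} bytes)"
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return "Not fetched - Not found (404)"
        return f"Not fetched - Any other reason (HTTP {err.code})"
    except urllib.error.URLError as err:
        return f"Not fetched - Any other reason ({err.reason})"

print(robots_fetch_status("https://www.example.com"))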

Google says that if you have made changes to robots.txt that it needs to pick up immediately, you can submit a manual recrawl request for such urgent cases.

If a robots.txt file exists but Google cannot fetch it for any reason, Google stops crawling the site for the first 12 hours while continuing to try to fetch the robots.txt file.

If Google still fails to fetch a new version of the robots.txt file, then for the next 30 days it will use the last good version of robots.txt it had previously, while continuing to try to fetch the new version.

If the errors are still not resolved after 30 days:

If the site is generally accessible, Google will act as if the robots.txt file does not exist.

If the site itself has availability problems, Google stops crawling it completely and periodically sends requests for the robots.txt file. This applies to every kind of site, whether it is about music, news, entertainment, SEO, or anything else (the sketch after this list sums up the decision logic).
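The rules above amount to a small piece of decision logic. The Python sketch below is only a simplified model of the behavior Google describes; the function and parameter names are invented for illustration and do not correspond to anything in Google's systems.

from datetime import datetime, timedelta

def crawl_decision(failing_since, cached_copy, site_reachable, now=None):
    """Simplified model of the fallback behavior described above."""
    now = now or datetime.now()
    outage = now - failing_since

    if outage <= timedelta(hours=12):
        return "Pause crawling the site; keep retrying robots.txt."
    if outage <= timedelta(days=30) and cached_copy is not None:
        return "Crawl using the last good cached robots.txt; keep retrying."
    if site_reachable:
        return "Act as if no robots.txt exists."
    return "Stop crawling entirely; periodically re-request robots.txt."

# Example: the fetch has been failing for three days and a cached copy exists.
print(crawl_decision(datetime.now() - timedelta(days=3),
                     cached_copy="User-agent: *\nDisallow: /private/",
                     site_reachable=True))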

If Google finds the robots.txt file and can fetch it, it reads the file line by line:

If a line contains errors or cannot be matched to any robots.txt rule, that line is skipped.

If there are no valid directives in the file, Google treats it as an empty robots.txt file, in other words, as if no rules had been specified for the site (the short example below shows the practical effect).
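To make the last two points concrete, here is a short sketch using Python's standard-library robots.txt parser. It is not Google's parser, but it likewise reads the file line by line and ignores lines it cannot interpret, so a file with no valid directives ends up blocking nothing. The example.com URLs and the rules shown are placeholders.

import urllib.robotparser

def allowed(robots_txt, url, agent="*"):
    """Parse robots.txt text and ask whether the given URL may be fetched."""
    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

# One valid group: /private/ is blocked, everything else is allowed.
rules = "User-agent: *\nDisallow: /private/"
print(allowed(rules, "https://example.com/private/page"))    # False
print(allowed(rules, "https://example.com/public/page"))     # True

# A file with no valid directives behaves like an empty robots.txt: nothing is blocked.
garbage = "this is not a directive\n<<<>>>"
print(allowed(garbage, "https://example.com/private/page"))  # True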
