The analysis covers all key aspects of the file:
Typical issues we detect:
A single wrong character in robots.txt can stop Googlebot from crawling your site entirely. Always verify the file after site changes and when configuring a CMS or new hosting environment.
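One way to sanity-check a file after changes is Python's standard-library robots.txt parser. This is a minimal sketch; the rules and URLs are placeholders, not output from Enterno.io:

```python
import urllib.robotparser

# A sample robots.txt, inlined here for illustration.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Ask whether a given crawler may fetch a given URL.
print(parser.can_fetch("Googlebot", "https://example.com/"))         # True
print(parser.can_fetch("Googlebot", "https://example.com/admin/x"))  # False
```

In practice you would point `set_url()` at your live `/robots.txt` and call `read()` instead of parsing an inline string.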
Enter your site's URL into the form above. Enterno.io will automatically fetch the file from /robots.txt and analyze its contents.

Always at the root of the domain, e.g. https://example.com/robots.txt. It should return the file directly with HTTP 200, without requiring authentication; avoid redirect chains, since not every crawler follows them.
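The "always at the root" rule can be expressed in a few lines: whatever page URL you start from, the robots.txt location depends only on the scheme and host. A small sketch (the helper name `robots_url` is ours, not part of any library):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    # robots.txt always lives at the root of scheme + host,
    # regardless of which page URL you start from.
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("https://example.com/blog/post?id=1"))
# https://example.com/robots.txt
```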
The `Disallow: /` directive blocks all compliant crawlers from crawling the entire site. It's a critical error if it ends up in production — a common mistake when copying settings from a development or staging environment.
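This is the exact pattern to watch for (a generic illustration, not taken from any specific site):

```
# Blocks every compliant crawler from the whole site —
# acceptable on a staging host, a critical error in production.
User-agent: *
Disallow: /
```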
Yes. Without one, search engines fall back to default crawl behavior, so the site still gets crawled — but you lose control over it. At minimum, include a Sitemap directive with the full absolute URL of your sitemap; it significantly improves correct indexing.
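A minimal robots.txt along these lines is often enough (the domain and the disallowed path are placeholders — adjust them for your site):

```
User-agent: *
Disallow: /admin/

# Sitemap takes a full absolute URL, not a relative path.
Sitemap: https://example.com/sitemap.xml
```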