How to use
Use this robots.txt checker in under a minute:
- Paste your robots.txt into the input.
- Enter a User-agent (e.g., Googlebot) and a URL or path (e.g., https://example.com/private/page?x=1 or /private/page).
- Click Check URL to see Allowed/Blocked, the matched group, and the winning rule.
- Click Analyze to list groups, rules, sitemaps, and warnings.
FAQ
Does this robots checker fetch my live robots.txt?
No. It runs fully offline in your browser—paste the content you want to test.
What part of a URL is tested against robots.txt rules?
The tool tests path + query (e.g., /page?x=1). If you paste a full URL, the domain is ignored.
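For reference, here is a minimal sketch of that normalization; the function name toTestPath is made up for illustration, and it assumes the standard URL API available in modern browsers:

```ts
// Reduce user input to the path + query string that robots.txt rules are tested against.
// Full URLs are parsed with the URL API; bare paths are passed through.
function toTestPath(input: string): string {
  try {
    const url = new URL(input);        // throws if input is not an absolute URL
    return url.pathname + url.search;  // drop scheme, host, and fragment
  } catch {
    return input.startsWith("/") ? input : "/" + input;
  }
}

console.log(toTestPath("https://example.com/private/page?x=1")); // "/private/page?x=1"
console.log(toTestPath("/page?x=1"));                            // "/page?x=1"
```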
How does it choose which User-agent group applies?
It selects the matching group with the most specific (longest) user-agent token; * is the fallback.
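A rough sketch of one way to implement that selection, assuming groups are already parsed and that a token matches when it appears in the bot name; the Group shape and selectGroup name are illustrative, not the tool's actual internals:

```ts
interface Group {
  userAgent: string; // token from the User-agent: line, e.g. "googlebot" or "*"
  rules: string[];   // raw Allow/Disallow lines for this group
}

// Pick the group whose user-agent token is the longest one contained in the bot name;
// fall back to the "*" group when nothing more specific matches.
function selectGroup(groups: Group[], botName: string): Group | undefined {
  const bot = botName.toLowerCase();
  let best: Group | undefined;
  for (const g of groups) {
    const token = g.userAgent.toLowerCase();
    if (token !== "*" && bot.includes(token)) {
      if (!best || token.length > best.userAgent.length) best = g;
    }
  }
  return best ?? groups.find((g) => g.userAgent === "*");
}
```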
How are Allow and Disallow conflicts resolved?
The longest matching pattern wins; if there’s a tie, Allow wins over Disallow.
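A sketch of that precedence logic, assuming rules have already been flattened into { type, pattern } pairs; the simple prefix matcher here ignores wildcards, which are covered in the next answer:

```ts
type Rule = { type: "allow" | "disallow"; pattern: string };

// Decide Allowed/Blocked for a path: among all rules whose pattern matches,
// the longest pattern wins; on a tie in length, Allow beats Disallow.
function isAllowed(rules: Rule[], path: string): boolean {
  let winner: Rule | undefined;
  for (const rule of rules) {
    if (!matches(rule.pattern, path)) continue;
    if (
      !winner ||
      rule.pattern.length > winner.pattern.length ||
      (rule.pattern.length === winner.pattern.length &&
        rule.type === "allow" &&
        winner.type === "disallow")
    ) {
      winner = rule;
    }
  }
  return !winner || winner.type === "allow"; // no matching rule means allowed
}

// Plain prefix match; wildcard and end-anchor handling is sketched separately below.
function matches(pattern: string, path: string): boolean {
  return path.startsWith(pattern);
}
```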
Are wildcards (*) and end anchors ($) supported?
Yes. * matches any characters, and $ anchors the match to the end of the tested path.
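One common way to support both is to compile each pattern into a regular expression, escaping every other character literally; this sketch assumes that approach and is not necessarily how the tool implements it internally:

```ts
// Turn a robots.txt pattern into a RegExp: "*" becomes ".*", a trailing "$"
// anchors the end of the path, and all other characters are escaped.
function patternToRegExp(pattern: string): RegExp {
  const anchored = pattern.endsWith("$");
  const body = (anchored ? pattern.slice(0, -1) : pattern)
    .split("*")
    .map((part) => part.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")) // escape regex metacharacters
    .join(".*");
  return new RegExp("^" + body + (anchored ? "$" : ""));
}

console.log(patternToRegExp("/private/*.html$").test("/private/page.html")); // true
console.log(patternToRegExp("/private/*.html$").test("/private/page.htm"));  // false
```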
Is this a full robots.txt validator for every crawler?
No—different bots can interpret edge cases differently. This tool covers common behavior and highlights suspicious lines as warnings.
Can I extract Sitemap URLs from robots.txt here?
Yes. The output lists any Sitemap: directives it finds.
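Extraction amounts to a case-insensitive line scan; a minimal sketch:

```ts
// Collect the URLs from every "Sitemap:" line, ignoring case and surrounding whitespace.
function extractSitemaps(robotsTxt: string): string[] {
  return robotsTxt
    .split(/\r?\n/)
    .map((line) => line.match(/^\s*sitemap\s*:\s*(\S+)/i))
    .filter((m): m is RegExpMatchArray => m !== null)
    .map((m) => m[1]);
}

const example = "User-agent: *\nDisallow: /private/\nSitemap: https://example.com/sitemap.xml";
console.log(extractSitemaps(example)); // ["https://example.com/sitemap.xml"]
```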