Robots Checker

Check and debug robots.txt rules without any network calls. Paste your file, pick a user-agent, and enter a URL (or path) to see whether it is allowed or blocked, along with the winning rule and quick syntax warnings.

Category: SEO · URL: /tools/robots-checker.html

Offline checker: no fetch, no crawling. Matching supports * wildcards and $ end anchors (common Google-style behavior).
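
To make the matching model concrete, here is a minimal TypeScript sketch of how a Google-style matcher can translate a robots.txt pattern into a regular expression. The function name `robotsPatternToRegExp` is hypothetical and illustrative, not this tool's actual code.

```ts
// Sketch of Google-style robots.txt pattern matching (hypothetical helper).
// "*" matches any run of characters; a trailing "$" anchors the match to the
// end of the tested string; otherwise rules match as a prefix of the path.
function robotsPatternToRegExp(pattern: string): RegExp {
  const anchored = pattern.endsWith("$");
  const body = anchored ? pattern.slice(0, -1) : pattern;
  // Escape regex metacharacters, then turn "*" back into ".*".
  const escaped = body
    .replace(/[.+?^${}()|[\]\\]/g, "\\$&")
    .replace(/\*/g, ".*");
  return new RegExp("^" + escaped + (anchored ? "$" : ""));
}

robotsPatternToRegExp("/private*").test("/private/page"); // true
robotsPatternToRegExp("/*.pdf$").test("/report.pdf?x=1"); // false: "$" anchors to the very end of the tested string
```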

Privacy: runs locally in your browser. No uploads, no tracking scripts.

How to use

Use this robots.txt checker in under a minute; a worked example follows the steps:

  1. Paste your robots.txt into the input.
  2. Enter a User-agent (e.g., Googlebot) and a URL or path (e.g., https://example.com/private/page?x=1 or /private/page).
  3. Click Check URL to see Allowed/Blocked, the matched group, and the winning rule.
  4. Click Analyze to list groups, rules, sitemaps, and warnings.
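
For example, under the matching rules described on this page, a check might look like this (hypothetical inputs and the predicted outcome, not captured tool output):

```ts
// Hypothetical worked example: inputs you might paste into the tool, plus the
// result the matching rules on this page predict.
const robotsTxt = `
User-agent: Googlebot
Disallow: /private/
Allow: /private/public-page

User-agent: *
Disallow: /tmp/

Sitemap: https://example.com/sitemap.xml
`;

const userAgent = "Googlebot";
const url = "https://example.com/private/page?x=1";

// Predicted outcome:
//   tested path   -> /private/page?x=1   (the domain is ignored)
//   matched group -> "User-agent: Googlebot" (more specific than "*")
//   winning rule  -> "Disallow: /private/" (longest matching pattern)
//   verdict       -> Blocked
```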

FAQ

Does this robots checker fetch my live robots.txt?

No. It runs fully offline in your browser—paste the content you want to test.

What part of a URL is tested against robots.txt rules?

The tool tests path + query (e.g., /page?x=1). If you paste a full URL, the domain is ignored.
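
In code terms, the tested string can be derived with the standard `URL` API, roughly like this (the helper name is made up):

```ts
// Sketch: reduce a full URL to the path + query string that robots.txt rules
// are tested against; the origin (scheme and domain) is dropped.
function testedPath(input: string): string {
  if (input.startsWith("/")) return input; // already a path like "/page?x=1"
  const u = new URL(input);
  return u.pathname + u.search; // e.g. "/page" + "?x=1"
}

testedPath("https://example.com/page?x=1"); // "/page?x=1"
testedPath("/private/page");                // "/private/page"
```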

How does it choose which User-agent group applies?

It selects the matching group with the most specific (longest) user-agent token; * is the fallback.
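
A minimal sketch of that selection logic, assuming case-insensitive comparison of the group token against the crawler name (this tool's exact normalization may differ):

```ts
// Sketch: pick the group whose User-agent token is the longest match for the
// crawler name; "*" is used only when no named token matches. Hypothetical.
function selectGroup(groupTokens: string[], crawler: string): string | null {
  const name = crawler.toLowerCase();
  let best: string | null = null;
  for (const token of groupTokens) {
    const t = token.toLowerCase();
    if (t !== "*" && name.includes(t) && (best === null || t.length > best.length)) {
      best = t;
    }
  }
  return best ?? (groupTokens.includes("*") ? "*" : null);
}

selectGroup(["*", "googlebot", "googlebot-image"], "Googlebot-Image"); // "googlebot-image"
selectGroup(["*", "googlebot"], "Bingbot");                            // "*"
```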

How are Allow and Disallow conflicts resolved?

The longest matching pattern wins; if there’s a tie, Allow wins over Disallow.
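
As a sketch, here is how that precedence can be resolved. This version uses plain prefix matching for brevity (a real matcher would plug in wildcard/`$` handling), and all names are hypothetical:

```ts
type Rule = { type: "allow" | "disallow"; pattern: string };

// Sketch: resolve Allow/Disallow for a path under longest-match-wins, with
// Allow breaking exact-length ties. Prefix matching only, for brevity.
function verdict(rules: Rule[], path: string): "allowed" | "blocked" {
  let winner: Rule | null = null;
  for (const r of rules) {
    if (!path.startsWith(r.pattern)) continue;
    if (
      winner === null ||
      r.pattern.length > winner.pattern.length ||
      (r.pattern.length === winner.pattern.length && r.type === "allow")
    ) {
      winner = r;
    }
  }
  return winner?.type === "disallow" ? "blocked" : "allowed"; // no match => allowed
}

verdict(
  [
    { type: "disallow", pattern: "/private/" },
    { type: "allow", pattern: "/private/public-page" },
  ],
  "/private/public-page"
); // "allowed": the Allow pattern is longer than the Disallow pattern
```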

Are wildcards (*) and end anchors ($) supported?

Yes. * matches any characters, and $ anchors the match to the end of the tested path.
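
Concretely, under those semantics (reusing the hypothetical `robotsPatternToRegExp` sketch from earlier on this page):

```ts
// Assuming the hypothetical robotsPatternToRegExp sketch defined above:
robotsPatternToRegExp("/fish*").test("/fishing/lake");      // true: "*" matches any run of characters
robotsPatternToRegExp("/*.php$").test("/index.php");        // true
robotsPatternToRegExp("/*.php$").test("/index.php?page=1"); // false: "$" requires the match to end there
```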

Is this a full robots.txt validator for every crawler?

No—different bots can interpret edge cases differently. This tool covers common behavior and highlights suspicious lines as warnings.

Can I extract Sitemap URLs from robots.txt here?

Yes. The output lists any Sitemap: directives it finds.
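
Extraction amounts to a simple line scan, roughly like this sketch (hypothetical helper; the `Sitemap:` field name is matched case-insensitively):

```ts
// Sketch: collect Sitemap directives from robots.txt text. The directive may
// appear anywhere in the file, outside any User-agent group.
function extractSitemaps(robotsTxt: string): string[] {
  const out: string[] = [];
  for (const raw of robotsTxt.split(/\r?\n/)) {
    const line = raw.split("#")[0].trim(); // drop trailing comments
    const m = /^sitemap\s*:\s*(.+)$/i.exec(line);
    if (m) out.push(m[1].trim());
  }
  return out;
}

extractSitemaps("User-agent: *\nSitemap: https://example.com/sitemap.xml");
// ["https://example.com/sitemap.xml"]
```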