Robots Checker Online

Check whether a URL/path is allowed or blocked by a robots.txt file. Paste the file content, set a user-agent, enter a URL or path, and this tool will show the winning rule (Allow/Disallow) plus basic syntax warnings — all offline in your browser.

Category: SEO · URL: /tools/robots-checker-online.html
Privacy: runs locally in your browser. No uploads, no tracking scripts.

How to use

Use this robots.txt checker to test a single URL/path against your rules.

  1. Paste your robots.txt content into the box.
  2. Enter the user-agent you want to simulate (example: Googlebot or *).
  3. Enter a full URL (recommended) or a path like /products?ref=ad.
  4. Click Check to see ALLOWED/BLOCKED and the winning rule (a parsing sketch follows these steps).
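
For the curious, here is a minimal TypeScript sketch of the first stage behind the Check button: parsing the pasted robots.txt into user-agent groups. The Rule/Group shapes and the parseRobots name are illustrative assumptions, not this tool's actual internals.

```ts
// Illustrative sketch only: simplified robots.txt parsing into groups.
// Real parsers handle more edge cases (BOM, very long lines, odd records).

type Rule = { type: "allow" | "disallow"; pattern: string };
type Group = { agents: string[]; rules: Rule[] };

function parseRobots(text: string): Group[] {
  const groups: Group[] = [];
  let current: Group | null = null;
  for (const raw of text.split(/\r?\n/)) {
    const line = raw.replace(/#.*$/, "").trim(); // strip comments
    const m = line.match(/^([A-Za-z-]+)\s*:\s*(.*)$/);
    if (!m) continue;
    const key = m[1].toLowerCase();
    const value = m[2].trim();
    if (key === "user-agent") {
      // Consecutive User-agent lines share one rule group.
      if (current === null || current.rules.length > 0) {
        current = { agents: [], rules: [] };
        groups.push(current);
      }
      current.agents.push(value.toLowerCase());
    } else if ((key === "allow" || key === "disallow") && current) {
      // An empty Disallow: matches nothing, so it is safe to skip here.
      if (value !== "") current.rules.push({ type: key, pattern: value });
    }
  }
  return groups;
}

// parseRobots("User-agent: *\nDisallow: /private/")
// => [{ agents: ["*"], rules: [{ type: "disallow", pattern: "/private/" }] }]
```

The FAQ below sketches the remaining stages: choosing a group, resolving Allow vs Disallow, and matching wildcard patterns.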
What you can use it for:

  - Confirm whether a specific URL is blocked for Googlebot or another crawler
  - Debug why an important page is not being crawled
  - Verify Allow vs Disallow precedence and which rule wins
  - Test wildcard (*) and end-anchor ($) patterns against a URL
  - Spot common robots.txt formatting issues and misplaced directives
  - Extract Sitemap URLs declared in robots.txt (see the sketch after this list)
  - Compare how different user-agents match different groups
  - Quickly sanity-check robots.txt before deploying
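
A hedged sketch of the sitemap extraction mentioned above. Sitemap lines are group-independent, so a simple line scan suffices; the extractSitemaps name is made up for illustration.

```ts
// Pull Sitemap URLs out of robots.txt text. Case-insensitive, one per line.
function extractSitemaps(text: string): string[] {
  const urls: string[] = [];
  for (const raw of text.split(/\r?\n/)) {
    const m = raw.match(/^\s*sitemap\s*:\s*(\S+)/i);
    if (m) urls.push(m[1]);
  }
  return urls;
}

// extractSitemaps("Sitemap: https://example.com/sitemap.xml")
// => ["https://example.com/sitemap.xml"]
```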

FAQ

Is this robots checker really online?

The page is online, but the checking runs offline in your browser — it doesn’t fetch or call any URLs.

Can I paste a full URL instead of a path?

Yes. The tool extracts pathname + query (for example /page?x=1) and matches rules against that.
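
In code terms, the standard URL API (available in browsers and Node 18+) exposes exactly those two pieces. A sketch, with an assumed fallback for inputs that are already paths (the toPathForMatching name is illustrative):

```ts
// Reduce a full URL to the "pathname + query" string that robots rules
// are matched against.
function toPathForMatching(input: string): string {
  try {
    const u = new URL(input);
    return u.pathname + u.search; // e.g. "/page?x=1"
  } catch {
    // Not an absolute URL: treat the input as a path already.
    return input.startsWith("/") ? input : "/" + input;
  }
}

// toPathForMatching("https://example.com/page?x=1") => "/page?x=1"
// toPathForMatching("/products?ref=ad")             => "/products?ref=ad"
```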

How does it choose which User-agent group applies?

It picks the matching group with the most specific user-agent token (longest match). If tied, the first one in the file wins.
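
A sketch of that selection rule, assuming a case-insensitive prefix test between the group token and the crawler name (real matchers differ in exactly how tokens are compared; the pickGroup name and Group shape are illustrative):

```ts
type Group = { agents: string[]; rules: unknown[] };

// Pick the group whose user-agent token most specifically matches the
// crawler. "*" matches anything but counts as the least specific token.
function pickGroup(groups: Group[], userAgent: string): Group | null {
  const ua = userAgent.toLowerCase();
  let best: Group | null = null;
  let bestLen = -1;
  for (const g of groups) {
    for (const token of g.agents) {
      const t = token.toLowerCase();
      const matches = t === "*" || ua.startsWith(t);
      const len = t === "*" ? 0 : t.length;
      if (matches && len > bestLen) { // strict ">" keeps the first group on ties
        best = g;
        bestLen = len;
      }
    }
  }
  return best;
}

// With groups for "*", "googlebot", and "googlebot-image", the crawler
// "Googlebot-Image" selects the "googlebot-image" group (longest match).
```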

If both Allow and Disallow match, which one wins?

The most specific pattern wins (longest path pattern). If specificity ties, Allow wins.
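
Sketched in code, with matchesPath assumed to implement the wildcard matching described in the next answer:

```ts
type Rule = { type: "allow" | "disallow"; pattern: string };

// Among the rules in the chosen group that match the path, prefer the
// longest pattern; on a length tie, Allow beats Disallow. No matching
// rule at all means the path is allowed by default.
function decide(
  rules: Rule[],
  path: string,
  matchesPath: (pattern: string, path: string) => boolean
): "ALLOWED" | "BLOCKED" {
  let winner: Rule | null = null;
  for (const r of rules) {
    if (!matchesPath(r.pattern, path)) continue;
    const longer = winner === null || r.pattern.length > winner.pattern.length;
    const tieAllow =
      winner !== null &&
      r.pattern.length === winner.pattern.length &&
      r.type === "allow" &&
      winner.type === "disallow";
    if (longer || tieAllow) winner = r;
  }
  return winner === null || winner.type === "allow" ? "ALLOWED" : "BLOCKED";
}

// Rules: Disallow /private/, Allow /private/ok. Path "/private/ok" matches
// both, but the Allow pattern is longer, so the result is "ALLOWED".
```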

Does it support wildcards (*) and the $ end anchor?

Yes: * matches any sequence of characters, and a trailing $ anchors the pattern to the end of the URL/path.
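
One common implementation (an illustrative sketch, not necessarily how this tool does it) compiles the pattern into an anchored regular expression:

```ts
// Compile a robots.txt path pattern: "*" becomes ".*", a trailing "$"
// anchors the end, everything else is matched literally. Robots patterns
// are always anchored at the start of the path.
function matchesPath(pattern: string, path: string): boolean {
  const anchored = pattern.endsWith("$");
  const body = anchored ? pattern.slice(0, -1) : pattern;
  const escaped = body
    .split("*")
    .map((part) => part.replace(/[.+?^${}()|[\]\\]/g, "\\$&"))
    .join(".*");
  return new RegExp("^" + escaped + (anchored ? "$" : "")).test(path);
}

// matchesPath("/private/*", "/private/x")  => true
// matchesPath("/*.pdf$", "/docs/file.pdf") => true
// matchesPath("/*.pdf$", "/file.pdf?x=1")  => false (the $ anchor fails)
```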

What does an empty Disallow mean?

Disallow: with an empty value means nothing is blocked for that group (effectively allow all).

Does robots.txt block indexing?

Robots rules control crawling, not indexing: blocking a URL does not guarantee it stays out of search results, since it can still be indexed if it is discovered through links elsewhere.