
ai bot access checker.

see if gptbot, claudebot, perplexitybot, and 10+ other ai bots can read your store

> worked example

Paste https://www.shopify.com/ and click 'run check'. The tool fetches /robots.txt, parses every User-agent group, and checks all 18 bots: GPTBot, ClaudeBot, PerplexityBot, Google-Extended, CCBot, and more. The result matrix shows that Shopify allows Googlebot but disallows GPTBot and CCBot in its robots.txt. A live HTTP probe then confirms GPTBot actually gets a 200, meaning the robots.txt disallow is a policy request, not a technical block. The full robots.txt is shown in a collapsible block.
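The live-probe step above can be sketched in stdlib Python. This is a minimal illustration, not the tool's actual code: `probe` and `is_accessible` are hypothetical names, and the sketch relies on urllib's default redirect-following rather than the exact 3-redirect cap described below.

```python
import urllib.error
import urllib.request

def is_accessible(status: int) -> bool:
    # The tool's rule: any non-4xx status counts as accessible.
    # A 403 is a real block; a 200 (or even a 5xx) is not treated as one.
    return not (400 <= status < 500)

def probe(url: str, bot_user_agent: str, timeout: float = 10.0) -> tuple[int, bool]:
    """GET url while presenting a bot's User-Agent string.

    urllib follows redirects by default (its internal cap is higher than
    the 3 redirects described in this document); the 10 s timeout mirrors
    the text.  Returns (final status code, accessible?).
    """
    req = urllib.request.Request(url, headers={"User-Agent": bot_user_agent})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code  # a 4xx/5xx response still yields a classifiable status
    return status, is_accessible(status)
```

A `probe(url, "GPTBot")` call that comes back with a 200 reproduces the Shopify observation above: disallowed in robots.txt, yet technically reachable.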

takeaway: a robots.txt disallow is a request, not a wall. Some bots honour it; others do not. The HTTP probe tells you what bots actually see.

> when operators reach for this

  • Shopify store owners checking whether GPTBot, ClaudeBot, and PerplexityBot can crawl their product pages for AI-powered search and training.
  • Ecommerce SEO leads auditing a site migration to confirm no new robots.txt rule accidentally blocks AI crawlers and reduces LLM visibility.
  • CMOs worried about AI discoverability who need a board-ready screenshot of exactly which AI systems are or are not indexing the brand.
  • Agencies auditing client sites for AI-readiness as part of a technical SEO or LLM-visibility report.
  • Headless-commerce engineers deploying a new frontend and verifying that the generated robots.txt matches the intended bot policy before going live.

> the calculation

  • robots.txt precedence: longest-match Allow/Disallow wins; * is a wildcard; empty Disallow means allow all
  • http check: GET url with bot User-Agent, follow up to 3 redirects, 10s timeout, non-4xx = accessible
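The longest-match precedence rule can be sketched as follows. This is a minimal illustration, not the tool's parser: it assumes User-agent group selection has already happened, uses RFC 9309 wildcard semantics (* matches any run of characters, $ anchors the end), and `robots_verdict` is a hypothetical name.

```python
import re

def matches(path: str, pattern: str) -> bool:
    # Translate a robots.txt pattern into a regex anchored at the path start.
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"  # $ anchors the end of the path
    return re.match(regex, path) is not None

def robots_verdict(path: str, rules: list[tuple[str, str]]) -> bool:
    """Apply longest-match precedence within one User-agent group.

    rules is a list of ("allow" | "disallow", pattern) pairs.  The longest
    matching pattern wins; on a tie, Allow wins; an empty Disallow (or no
    matching rule at all) means the path is allowed.
    """
    best_len = -1
    allowed = True  # default when nothing matches
    for directive, pattern in rules:
        if not pattern:  # "Disallow:" with no value restricts nothing
            continue
        if matches(path, pattern) and (
            len(pattern) > best_len
            or (len(pattern) == best_len and directive == "allow")
        ):
            best_len = len(pattern)
            allowed = directive == "allow"
    return allowed
```

For example, with rules `[("disallow", "/"), ("allow", "/products")]`, the path `/products/shoe` is allowed because the 9-character `/products` outranks the 1-character `/`, while `/cart` is disallowed.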

> related calculators: ai & llm visibility