
robots.txt generator.

shopify / woocommerce presets with sitemap + crawl-delay


> robots.txt preview

User-agent: *
Disallow: /admin
Disallow: /cart
Disallow: /orders
Disallow: /checkouts
Disallow: /carts
Disallow: /account
Disallow: /policies/


Sitemap: https://yourstore.com/sitemap.xml

> worked example

Select the Shopify preset: the disallow paths pre-fill with /admin, /cart, /orders, /checkouts, /carts, /account, and /policies/. Set the store base URL to https://yourstore.com, and the sitemap field auto-completes to https://yourstore.com/sitemap.xml. The user-agent defaults to All (*). The live preview renders a valid robots.txt, and two buttons let you copy it or download it as robots.txt. Switching to the WooCommerce preset swaps in /wp-admin/, /cart/, /my-account/, and /checkout/ disallows.
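The preset-to-output flow above can be sketched in a few lines. This is an illustrative model, not the tool's actual code: the preset names and paths mirror the worked example, and the function name generate_robots is an assumption.

```python
# Hypothetical sketch of preset-driven robots.txt generation.
# Paths come from the Shopify and WooCommerce presets described above.

PRESETS = {
    "shopify": ["/admin", "/cart", "/orders", "/checkouts",
                "/carts", "/account", "/policies/"],
    "woocommerce": ["/wp-admin/", "/cart/", "/my-account/", "/checkout/"],
}

def generate_robots(preset: str, base_url: str, user_agent: str = "*") -> str:
    """Build a robots.txt string from a preset and a store base URL."""
    lines = [f"User-agent: {user_agent}"]
    lines += [f"Disallow: {path}" for path in PRESETS[preset]]
    # The sitemap URL auto-completes from the base URL, as in the example.
    lines += ["", f"Sitemap: {base_url.rstrip('/')}/sitemap.xml"]
    return "\n".join(lines) + "\n"

print(generate_robots("shopify", "https://yourstore.com"))
```

The output matches the preview block: one User-agent line, one Disallow per preset path, then a blank line and the Sitemap declaration.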

takeaway: always keep /admin and /checkout disallowed so Googlebot doesn't waste crawl budget on auth-gated or session-specific pages that return thin content.

> when operators reach for this

  • Shopify merchants going live on a new domain who need a correctly formatted robots.txt in under a minute.
  • WooCommerce site owners ensuring wp-admin, checkout, and cart pages are blocked from all crawlers.
  • SEO agencies generating a baseline robots.txt for a client site before a technical audit.
  • Devs targeting specific bots (GPTBot, Bingbot) with per-agent rules while keeping Googlebot unrestricted.
  • Store owners adding a crawl-delay to reduce bot load during a sale or high-traffic period.
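A per-agent setup like the fourth and fifth bullets describe might look like the fragment below. The specific paths and delay value are illustrative assumptions, not preset output; note that crawlers obey only the most specific matching User-agent group, and Googlebot ignores Crawl-delay.

```
User-agent: GPTBot
Disallow: /

User-agent: Bingbot
Crawl-delay: 5
Disallow: /cart

User-agent: Googlebot
Disallow: /admin
Disallow: /checkout

Sitemap: https://yourstore.com/sitemap.xml
```

Here GPTBot is blocked from the whole site, Bingbot is throttled, and Googlebot keeps crawling everything except the auth-gated paths.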

> the calculation

  • user-agent block: User-agent: * (or a specific bot name)
  • disallow rule: Disallow: /path/ blocks the path and all sub-paths
  • allow rule: Allow: /path/ overrides a Disallow for the same prefix
  • crawl-delay: Crawl-delay: N, seconds between requests (Googlebot ignores this)
  • sitemap declaration: Sitemap: https://domain.com/sitemap.xml, a hint visible to all crawlers
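The Allow/Disallow interplay above follows a longest-match rule: the rule with the most specific (longest) matching path prefix wins, and Allow wins a tie. A minimal sketch, assuming prefix-only matching (real parsers also handle * and $ wildcards); the function name is_allowed is hypothetical.

```python
# Longest-match precedence for robots.txt rules, simplified to
# plain prefix matching (no * or $ wildcard support).

def is_allowed(path: str, rules: list[tuple[str, str]]) -> bool:
    """rules is a list of ("allow" | "disallow", prefix) pairs."""
    best_len, allowed = -1, True  # no matching rule means crawling is allowed
    for kind, prefix in rules:
        if path.startswith(prefix):
            length = len(prefix)
            # A longer match wins outright; Allow wins an exact-length tie.
            if length > best_len or (length == best_len and kind == "allow"):
                best_len, allowed = length, (kind == "allow")
    return allowed

rules = [("disallow", "/admin"), ("allow", "/admin/public/")]
print(is_allowed("/admin/settings", rules))    # False: only the Disallow matches
print(is_allowed("/admin/public/faq", rules))  # True: the longer Allow wins
```

This is why an Allow: /admin/public/ line can carve an exception out of a broader Disallow: /admin block.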
