Create customized robots.txt files to control search engine crawlers and improve your website's SEO
Enter your website's primary domain (including https://)
Recommended: 5-10 seconds to prevent server overload
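For example, a 10-second delay applied to all crawlers might look like this (the value is illustrative; not all search engines honor Crawl-delay):
User-agent: *
Crawl-delay: 10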
Comma-separated list of URL parameters to block
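For example, entering "ref, sessionid" could produce rules such as the following (parameter names are illustrative; Clean-param is only recognized by some crawlers, such as Yandex):
User-agent: *
Disallow: /*?ref=
Disallow: /*?sessionid=
Clean-param: ref&sessionid /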
Define specific rules for different search engine crawlers
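For example, separate rule groups for individual crawlers might look like this (crawler names and paths are illustrative):
User-agent: Googlebot
Disallow: /drafts/

User-agent: Bingbot
Crawl-delay: 15
Disallow: /archive/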
Add your XML sitemap URLs to help search engines discover your content
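For example (URLs are illustrative):
Sitemap: https://www.example.com/sitemap.xml
Sitemap: https://www.example.com/sitemap-news.xml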
Example: Request-rate: 1/10s # 1 page every 10 seconds
A robots.txt file is a plain text file, placed at the root of your site, that tells search engine crawlers which pages or files they can or cannot request. It is used primarily to manage crawler traffic to your site.
User-agent: *
Disallow: /private/
Disallow: /tmp/
Allow: /public/

Sitemap: https://www.example.com/sitemap.xml
User-agent: *
Disallow: /checkout/
Disallow: /cart/
Disallow: /account/
Disallow: /search?
Allow: /search?q=*
Crawl-delay: 10
Clean-param: ref /products/

Sitemap: https://www.example.com/sitemap.xml
Sitemap: https://www.example.com/sitemap-products.xml
User-agent: *
Disallow: /uploads/temp/
Disallow: /admin/
Disallow: /config/
Allow: /media/
Crawl-delay: 5

User-agent: Googlebot-Image
Allow: /images/
Disallow: /images/private/

User-agent: Bingbot
Crawl-delay: 15

Sitemap: https://www.example.com/sitemap.xml
Sitemap: https://www.example.com/sitemap-videos.xml