Robots.txt Generator
Control how search engines and web crawlers access your website. Create rules for specific bots, block sensitive directories, and manage AI crawler access.
Example output (robots.txt):

```text
User-agent: *
Allow: /
Sitemap: https://example.com/sitemap.xml
```

How to Use
1. Copy the generated robots.txt content
2. Create a file named robots.txt
3. Place it in your website's root directory
4. Verify it loads at yourdomain.com/robots.txt (see the verification sketch below)
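For step 4, you can verify the deployed file programmatically instead of eyeballing it in a browser. Below is a minimal sketch using Python's standard-library urllib.robotparser; "yourdomain.com" and the /admin/ path are placeholders to replace with your own site and a path you care about:

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain: replace "yourdomain.com" with your actual site.
parser = RobotFileParser()
parser.set_url("https://yourdomain.com/robots.txt")
parser.read()  # fetches and parses the live file over HTTP

# Ask whether a given crawler may fetch a given path under the live rules.
print(parser.can_fetch("Googlebot", "https://yourdomain.com/admin/"))
print(parser.can_fetch("*", "https://yourdomain.com/"))
```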
Crawler Control
Set precise rules for Google, Bing, and other search engines. Block or allow specific paths per bot.
AI Bot Blocking
Block AI training crawlers like GPTBot, CCBot, and others from scraping your content.
Quick Presets
One-click presets for common setups: allow all, block all, WordPress, or AI blocker.
Common Robots.txt Rules
Allow Everything
```text
User-agent: *
Allow: /
```
Block Everything
```text
User-agent: *
Disallow: /
```
Block Admin Area
```text
User-agent: *
Disallow: /admin/
Disallow: /private/
```
Block AI Crawlers
```text
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```
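Rules like these can be sanity-checked locally before you deploy anything. The sketch below uses Python's standard-library urllib.robotparser to confirm that the AI-crawler rules above block GPTBot and CCBot while leaving ordinary crawlers unaffected; the example.com URLs and the added catch-all group are illustrative assumptions:

```python
from urllib.robotparser import RobotFileParser

# The "Block AI Crawlers" rules from above, plus a default group
# so every other bot stays allowed (an assumption for this demo).
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())  # parse rules directly, no HTTP fetch

print(parser.can_fetch("GPTBot", "https://example.com/post"))     # False
print(parser.can_fetch("CCBot", "https://example.com/post"))      # False
print(parser.can_fetch("Googlebot", "https://example.com/post"))  # True
```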
Important Notes
- ⚠️ Not security: robots.txt is a suggestion, not enforcement. Don't rely on it to hide sensitive data.
- Place robots.txt in your site's root directory (e.g., example.com/robots.txt); crawlers look for it only at that location.
- Changes may take time to take effect, since crawlers cache the file (often for up to 24 hours).
- Use Google Search Console to test your robots.txt.
- Disallow doesn't remove pages from search results; use a noindex directive for that (e.g., `<meta name="robots" content="noindex">`). Note that crawlers can only see noindex on pages they're allowed to crawl.