Robots.txt Tester
Inspect how lucena.com.ec controls crawler access, including blocked paths, sitemap references, and AI crawler rules.
Robots.txt Status
Present
Score: 92/100 · Strong
Robots.txt Content Preview
User-agent: *
Disallow: /wp-content/uploads/wc-logs/
Disallow: /wp-content/uploads/woocommerce_transient_files/
Disallow: /wp-content/uploads/woocommerce_uploads/
Disallow: /*?add-to-cart=
Disallow: /*?*add-to-cart=
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

# START YOAST BLOCK
# ---------------------------
User-agent: *
Disallow:

Sitemap: https://lucena.com.ec/sitemap_index.xml
# ---------------------------
# END YOAST BLOCK
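To spot-check how these rules are applied, the live file can be tested against a few representative paths with Python's standard-library `urllib.robotparser`. This is a minimal sketch, not part of the report's own tooling; note that `urllib.robotparser` evaluates rules by simple prefix match in file order and does not implement wildcard (`*`) patterns or longest-match precedence, so its verdict on the `add-to-cart` rules and the `admin-ajax.php` Allow may differ from Googlebot's interpretation.

```python
from urllib.robotparser import RobotFileParser

# Fetch the live robots.txt and test a few unambiguous paths.
parser = RobotFileParser("https://lucena.com.ec/robots.txt")
parser.read()

for url in (
    "https://lucena.com.ec/",                                  # expected: allowed
    "https://lucena.com.ec/wp-admin/",                         # expected: blocked
    "https://lucena.com.ec/wp-content/uploads/wc-logs/x.log",  # expected: blocked
):
    verdict = "allowed" if parser.can_fetch("*", url) else "blocked"
    print(f"{verdict:8} {url}")
```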
User-agent Rules
| User-agent(s) | Allowed paths | Disallowed paths |
|---|---|---|
| * | /wp-admin/admin-ajax.php | /wp-content/uploads/wc-logs/, /wp-content/uploads/woocommerce_transient_files/, /wp-content/uploads/woocommerce_uploads/, /*?add-to-cart=, /*?*add-to-cart=, /wp-admin/ |
| * | No explicit Allow rules. | No explicit Disallow rules. |
Blocked and Allowed Paths
| Blocked paths | /wp-content/uploads/wc-logs/, /wp-content/uploads/woocommerce_transient_files/, /wp-content/uploads/woocommerce_uploads/, /*?add-to-cart=, /*?*add-to-cart=, /wp-admin/ |
|---|---|
| Allowed paths | /wp-admin/admin-ajax.php |
| Crawl-delay | No Crawl-delay directive detected. |
Sitemaps Detected
- https://lucena.com.ec/sitemap_index.xml
AI Crawler Policy
No explicit blocks were detected for common AI crawlers (GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended).
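If the site owner wants to state that policy explicitly rather than rely on the default allow, directives along the following lines could be appended to robots.txt. This is an illustrative sketch only: which bots to allow or block is a policy choice, and the groups below simply reuse the crawler tokens named above.

```
# Illustrative only: explicitly disallow selected AI crawlers.
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

To explicitly allow these crawlers instead, replace `Disallow: /` with an empty `Disallow:` under each group.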
Issues Found
- Blocking CSS or JS may prevent search engines from rendering pages correctly.
Recommendations
- Document your AI crawler policy explicitly in robots.txt so future bots know how to treat your content.
- Ensure important pages, CSS, and JavaScript assets are crawlable so search engines can fully render your site (a quick check is sketched below).
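One way to check the last point is to list the CSS/JS assets referenced by a page and test each against robots.txt. The sketch below is a rough illustration under stated assumptions: it only inspects the homepage, uses a naive regex rather than a DOM parser, and inherits `urllib.robotparser`'s limited wildcard handling noted earlier.

```python
import re
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.robotparser import RobotFileParser

# Assumption: the homepage is representative; a real audit would cover every template.
site = "https://lucena.com.ec/"

robots = RobotFileParser(urljoin(site, "/robots.txt"))
robots.read()

html = urlopen(site).read().decode("utf-8", errors="replace")
# Naive extraction of linked .css/.js assets (including versioned URLs).
assets = set(re.findall(r'(?:href|src)="([^"]+\.(?:css|js)[^"]*)"', html))

for asset in sorted(assets):
    url = urljoin(site, asset)
    verdict = "crawlable" if robots.can_fetch("Googlebot", url) else "blocked"
    print(f"{verdict:9} {url}")
```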