## Good faith agents
Agents that have declared support for robots2.txt or have been verified as compliant through community testing.
| Agent | Operator | Category | Status | Verified |
|---|---|---|---|---|
| Gemini | Google | ai-assistant | Declared compliant | 2026-04-08 |
| ClaudeBot | Anthropic | ai-assistant | Pending verification | — |
## Non-compliant agents
Agents observed accessing honeypot paths or ignoring robots2.txt directives. Evidence is required for all entries.
| Agent | Violation | Evidence | Date | Status |
|---|---|---|---|---|
| *No violations reported yet. Let's keep it that way.* | | | | |
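As a sketch of how honeypot evidence might be gathered, assuming a server access log in Combined Log Format and a hypothetical `/honeypot/` path prefix (real deployments choose their own secret paths):

```python
import re

# Hypothetical honeypot prefix -- illustrative only.
HONEYPOT_PREFIX = "/honeypot/"

# Minimal Combined Log Format matcher: remote host, request path, user agent.
LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] '
    r'"(?:GET|POST|HEAD) (\S+) [^"]*" \d+ \S+ "[^"]*" "([^"]*)"'
)

def honeypot_hits(log_lines):
    """Return (ip, path, user_agent) tuples for requests to honeypot paths."""
    hits = []
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group(2).startswith(HONEYPOT_PREFIX):
            hits.append((m.group(1), m.group(2), m.group(3)))
    return hits
```

A report would pair tuples like these with the raw log lines they came from, so reviewers can verify the evidence independently.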
## Report a violation
If you've observed an AI agent ignoring robots2.txt directives or accessing honeypot paths, report it here. All reports require evidence.
## How verification works
An agent can be verified as compliant in three ways:
1. **Self-declaration** — the agent's operator publicly states that they respect robots2.txt. This is the lowest bar and is listed as "Declared compliant."
2. **Community testing** — site owners with honeypots confirm that the agent respects Disallow directives over a sustained period. Listed as "Community verified."
3. **Formal audit** — the agent's operator provides technical documentation showing how their crawler parses and respects robots2.txt. Listed as "Audit verified." This is the gold standard.
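The community-testing step can be sketched in code. Assuming robots2.txt reuses robots.txt syntax (so Python's standard `urllib.robotparser` can parse it), a site owner can replay an agent's observed request paths against the published rules; any disallowed path that was fetched anyway is a candidate violation:

```python
from urllib.robotparser import RobotFileParser

# Illustrative ruleset; assumes robots2.txt follows robots.txt syntax.
rules = RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def respected(user_agent, requested_paths):
    """True if every observed request was allowed for this agent."""
    return all(rules.can_fetch(user_agent, path) for path in requested_paths)
```

This is a one-off check, not a formal audit; "Community verified" status would require clean results like this over a sustained observation period.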
Non-compliant entries require evidence. We don't name and shame without proof. Reports are reviewed before publication. Agents can be removed from the non-compliant list if they demonstrate corrective action.