Google just added a crawler that ignores robots.txt


Google quietly added a new crawler: Google-Agent.

It’s not for indexing. It’s for Project Mariner, Google’s AI agent that browses the web and performs actions on behalf of users: booking flights, filling forms, navigating checkout flows.

Two things stand out.

First, it ignores robots.txt. Google classifies it as “user-triggered” — a human asked the agent to visit the page — so standard crawl rules don’t apply, and you can’t block it via robots.txt.
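For illustration, rules like the ones below (a hypothetical site’s robots.txt) would stop Googlebot, but per Google’s user-triggered classification they have no effect on Google-Agent — even a rule naming it directly:

```
# Blocks the indexing crawler, but not Google-Agent:
User-agent: Googlebot
Disallow: /checkout/

# Even an explicit rule is ignored for user-triggered fetches:
User-agent: Google-Agent
Disallow: /
```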

Second, Google is experimenting with web-bot-auth, a cryptographic protocol where bots prove their identity by signing requests instead of using spoofable user-agent strings. Cloudflare proposed the IETF draft. Google is testing it on Google-Agent using the https://agent.bot.goog identity.
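The core idea is easy to sketch. Note the caveats: the actual web-bot-auth draft uses asymmetric Ed25519 signatures over HTTP message components, while the stdlib sketch below substitutes HMAC (a shared-key scheme) purely to show why a signed request can be verified but a claimed identity string can be freely spoofed. The function names, header names, and key here are illustrative, not the draft’s wire format.

```python
import hashlib
import hmac

# Illustrative only: web-bot-auth uses asymmetric (Ed25519) signatures,
# verified against a published public key. HMAC stands in here so the
# sketch runs on the standard library alone.
BOT_KEY = b"secret-key-known-only-to-the-real-bot"

def sign_request(method: str, path: str, agent_id: str) -> dict:
    """Bot side: build headers carrying the claimed identity plus a
    signature over the request target and that identity."""
    payload = f"{method} {path} {agent_id}".encode()
    sig = hmac.new(BOT_KEY, payload, hashlib.sha256).hexdigest()
    return {"Signature-Agent": agent_id, "Signature": sig}

def verify_request(method: str, path: str, headers: dict) -> bool:
    """Server side: recompute the signature over what was actually
    requested; a spoofed identity or tampered request fails."""
    payload = f"{method} {path} {headers['Signature-Agent']}".encode()
    expected = hmac.new(BOT_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["Signature"])

# A genuinely signed request verifies...
good = sign_request("GET", "/checkout", "https://agent.bot.goog")
print(verify_request("GET", "/checkout", good))   # True

# ...while merely asserting the identity (as a User-Agent string does) fails.
spoofed = {"Signature-Agent": "https://agent.bot.goog", "Signature": "0" * 64}
print(verify_request("GET", "/checkout", spoofed))  # False
```

The design point is that the identity claim is bound to the request by cryptography, so a scraper can copy the `Signature-Agent` header but cannot produce a valid `Signature` without the key.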

If this standard takes off, it replaces user-agent-based bot identification entirely. That’s a big deal for anyone tracking AI crawlers (like my Markdown for Agents experiment).

The web is shifting from “bots that read” to “bots that act.” Google-Agent is the first major signal of that shift.