Links


Standardising AI crawler consent

The IETF is working on building blocks that let websites declare whether crawlers may use their content for AI training:

Right now, AI vendors use a confusing array of non-standard signals in the robots.txt file (defined by RFC 9309) and elsewhere to guide their crawling and training decisions. As a result, authors and publishers lose confidence that their preferences will be adhered to, and resort to measures like blocking crawlers' IP addresses.

(From: IETF | IETF setting standards for AI preferences)
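For a sense of what those ad-hoc signals look like today, here is a minimal sketch of a robots.txt that opts out of a few AI training crawlers by user agent, checked with Python's standard urllib.robotparser. GPTBot, Google-Extended and CCBot are real published crawler tokens; the example.com URL and the policy itself are illustrative assumptions, not a recommendation.

```python
import urllib.robotparser

# Hypothetical robots.txt: block some AI training crawlers, allow everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant AI crawler would see it is not allowed to fetch this page,
# while an ordinary crawler still is.
print(rp.can_fetch("GPTBot", "https://example.com/article"))     # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

Note that this per-crawler blocklist only expresses "don't fetch", not "don't train on what you already have", which is part of what the IETF effort aims to make explicit.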