Google Suggests Robots.txt File Updates for 'Emerging AI' Use Cases

by EditorDavid from Slashdot on (#6CTRR)
For a "vibrant content ecosystem," Google's VP of Trust says web publishers need "choice and control over their content, and opportunities to derive value from participating in the web ecosystem." (Does this mean Google wants to buy the right to scrape your content?) In a blog post, Google's VP of trust starts by saying that unfortunately, "existing web publisher controls" like your robots.txt file (a community-developed web standard) came from nearly 30 years ago, "before new AI and research use cases..." We believe it's time for the web and AI communities to explore additional machine-readable means for web publisher choice and control for emerging AI and research use cases. Today, we're kicking off a public discussion, inviting members of the web and AI communities to weigh in on approaches to complementary protocols. We'd like a broad range of voices from across web publishers, civil society, academia and more fields from around the world to join the discussion, and we will be convening those interested in participating over the coming months. They're announcing an "AI web publisher controls" mailing list (which you can sign up for at the bottom of Google's blog post). Am I missing something? It seems like this should be as easy as adding a syntax for opting in, like AI-ok: * Thanks to Slashdot reader terrorubic for sharing the article.

