"Do Not Train" Meta Tags: The Robots.txt of AI – Will Anyone Respect Them?

4 points by alissa_v 2 days ago

I've been noticing more creators and platforms quietly adding things like <meta name="robots" content="noai"> to their pages - kind of like robots.txt, but for LLMs. For those unfamiliar, robots.txt is a standard file websites use to tell search engines which pages they shouldn't crawl. These new "noai" tags serve a similar purpose, but target AI training crawlers instead of search indexing.
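
For concreteness, here's roughly what the two mechanisms look like side by side. The user-agent tokens are the ones the respective vendors document at the time of writing - check their docs before relying on this:

    # robots.txt - asks specific AI crawlers to skip the whole site
    User-agent: GPTBot        # OpenAI
    Disallow: /

    User-agent: CCBot         # Common Crawl
    Disallow: /

    <!-- per-page HTML meta tag, as popularized by DeviantArt -->
    <meta name="robots" content="noai, noimageai">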

Some examples of platforms implementing these opt-out mechanisms:

- Sketchfab now offers creators an option to block AI training in their account settings

- DeviantArt pioneered these tags as part of their content protection approach

- ArtStation added both meta tags and updated their Terms of Service

- Shutterstock created a compensation model for contributors whose images are used in AI training

But here's where things get concerning - there's growing evidence these tags are being treated as optional suggestions rather than firm boundaries:

- Several creators have reported these tags simply being ignored. For instance, a discussion on DeviantArt (https://www.deviantart.com/lumaris/journal/NoAI-meta-tag-is-NOT-honored-by-DA-941468316) documents cases where the tags weren't honored, with references to GitHub conversations showing implementation issues

- In a GitHub pull request for an image dataset tool (https://github.com/rom1504/img2dataset/pull/218), the developers made respecting these tags opt-in rather than the default, which one commenter described as having "gutted it so that we can wash our hands of responsibility without actually respecting anyone's wishes" (see the sketch after this list for what honoring the tag actually involves)

- Raptive, a company that implements these tags for the sites it serves, admits in its support FAQ that the tags "are not yet an industry standard, and we cannot guarantee that any or all bots will respect them" (https://help.raptive.com/hc/en-us/articles/13764527993755-NoAI-Meta-Tag-FAQs)

- A proposal to the HTML standards body (https://github.com/whatwg/html/issues/9334) acknowledges these tags don't enforce consent and compliance "might not happen short of robust regulation"
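
For what it's worth, "respecting the tag" is mechanically trivial - which is part of why the opt-in decision above drew criticism. Here's a minimal sketch (Python standard library only, not img2dataset's actual code) of detecting the opt-out directives in a fetched page:

    from html.parser import HTMLParser

    class NoAIMetaParser(HTMLParser):
        """Flags pages carrying <meta name="robots" content="noai, ..."> directives."""
        def __init__(self):
            super().__init__()
            self.opted_out = False

        def handle_starttag(self, tag, attrs):
            if tag != "meta":
                return
            a = dict(attrs)
            if (a.get("name") or "").lower() != "robots":
                return
            tokens = {t.strip() for t in (a.get("content") or "").lower().split(",")}
            if tokens & {"noai", "noimageai"}:
                self.opted_out = True

    page = '<html><head><meta name="robots" content="noai, noimageai"></head></html>'
    parser = NoAIMetaParser()
    parser.feed(page)
    if parser.opted_out:
        print("Page opts out of AI training - skip it.")

A scraper that wanted to honor the signal by default would run a check like this on every fetched page before adding it to a dataset.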

Some creators have become so cynical that one prominent artist, David Revoy, announced they're abandoning tags like #NoAI because "the damage has already been done" and they "can't remove [their] art one by one from their database." (https://www.davidrevoy.com/article977/artificial-inteligence-why-i-ll-not-hashtag-my-art-humanart-humanmade-or-noai)

This raises several practical questions:

- Will this actually work in practice without enforcement mechanisms?

- Could it be legally enforceable down the line?

- Has anyone successfully used these tags to prevent unauthorized training?

Beyond the technical implementation, I think this points to a broader conversation about creator consent in the AI era. Is this more symbolic - a signal that people want some version of "AI consent" for the open web? Or could it evolve into an actual standard with teeth?

I'm curious if folks here have added something like this to their own websites or content. Have you implemented any technical measures to detect if your content is being used for training anyway? And for those working in AI: what's your take on respecting these kinds of opt-out signals?

Would love to hear what others think.

nicbou a day ago

They already started with the assumption of consent, crawled the web with disregard for resource use, and still provide no mechanism to revoke permission. This is the culture around AI. A quiet little tag that says "please don't do that" won't do much.

These companies are already behaving like jerks. Do you think they will become more polite once they control how we access information? With investors breathing down their necks?

Ukv a day ago

Of the signals used to indicate crawling is prohibited, robots.txt is probably the most effective; OpenAI, Google, Anthropic, Meta, and CommonCrawl all claim to respect it. That often provokes a response of "well they're lying", but I've yet to actually find any cases of the IPs they use for crawling accessing content prohibited by robots.txt.
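
To make that concrete, here's the rough shape of the check I mean - standard library only; the log path, combined-log-format regex, and agent list are illustrative, not anyone's production setup. A real audit would also verify the source IPs against each vendor's published ranges, since user-agent strings can be spoofed:

    import re
    import urllib.robotparser

    AI_AGENTS = ["GPTBot", "CCBot", "ClaudeBot"]  # OpenAI, Common Crawl, Anthropic

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the site's live robots.txt

    # Apache/nginx combined log format:
    # IP - - [time] "METHOD /path HTTP/1.1" status size "referer" "user-agent"
    LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]*\] "\S+ (\S+) [^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

    with open("access.log") as log:
        for raw in log:
            m = LINE.match(raw)
            if not m:
                continue
            ip, path, ua = m.groups()
            for agent in AI_AGENTS:
                if agent in ua and not rp.can_fetch(agent, path):
                    print(f"{agent} ({ip}) fetched disallowed path: {path}")

Anyone skeptical of the vendors' claims can run a check like this against their own access logs.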

Newly proposed standards will probably take a while to catch on, if they ever do.

Not a lawyer, but I believe such measures could in theory become legally enforceable in the US without any new legislation: if the fair use defense fails but an implied license defense succeeds (implied license being the reason you can cache/rehost copies of webpages that don't carry a <noarchive> meta tag, as in Field v. Google, Inc.), then an explicit opt-out tag would be exactly what defeats that implied license.

zzo38computer a day ago

I do not want others to scrape my files from my server for the purpose of training LLMs, but if they acquire a copy by other means, or already have one for other reasons, then they can do what they want with it.

I do not care much about attribution; what I care about more is that they do not claim additional restrictions in their terms of use when they copy my stuff and use it.

abhisek 2 days ago

I am not sure how this is any different from open source code being embedded in commercial applications. It’s really like a self-accelerating loop.

At least for OSS, usage defines value. When an OSS project is popular, enterprises notice it and begin to use it in their commercial applications.

  • alissa_v a day ago

    I agree with your point about usage defining value in OSS - popular projects gain recognition, contributions, and opportunities through their adoption in commercial applications.

    The critical difference, though, is consent. OSS creators explicitly choose licenses permitting commercial use - they opt in to sharing their work. Many content creators never made such a choice for AI training.

    The current AI training paradigm doesn't even have a true opt-out model - it simply assumes everything is available. The noAI tags are attempting to create an opt-out mechanism where none previously existed. Without enforcement or standards adoption, though, these signals don't seem to have the same weight as established open source licenses.

    There's also a significant difference in attribution. OSS creators receive clear attribution even when their work is used commercially. For creators whose work trains AI models, their contribution is blended and anonymized with no recognition pathway.

    The core question is whether creating this opt-out approach is sufficient, or if AI training should move toward an opt-in model more similar to how open source licensing works.

BobbyTables2 2 days ago

No

  • alissa_v a day ago

    Haha fair enough! Any particular reason why you think they won't be respected?