This is all so *very* wrong. #21
Replies: 13 comments 12 replies
-
(Originally posted in #14) My thanks to you, Stephan, for framing the above so clearly. I’m a front-end designer, so my sense of what is possible in terms of meaningful protection* is slender but, for what it’s worth, here follows my sense of the probabilities.

Reality check

Even for content behind a login, the prospects of protection against AI theft are not much better than for open content. See, for example, what is happening to The Guardian’s login UI. They’ve made a Faustian pact with OpenAI, presumably to reduce one major vector of attack. But that’s mere mitigation by an organisation with significant resources. The only way this ends less than awfully is if governments find they have to respond to a mass rejection of AI, and that ain’t happening soon for 2 reasons:

The irony that this is being discussed on a platform owned by one of the most deluded major players in the race to create bull***t generators that are built on theft engines is not lost on me.

*And even the supposed defences –

**WEIRD = Western, Educated, Industrial, Rich, ‘Democratic’

This just gets worse. Let the realism of that frame what we do instead.
-
Thanks, this is more or less everything I wanted to say. I still have a proposal: there are ways to check whether your work is in the corpus of the "big crawl" and hence has already been used for the training of LLMs. My blog, licensed under a CC license, is part of that. I bet I am far from the only one. The CC organization should weigh the legal options, for example a class-action lawsuit, to help those artists, rather than opening the doors for those wanting to exploit creative works en masse.
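For anyone who wants to run that check themselves, here is a minimal sketch against the public Common Crawl URL index at index.commoncrawl.org. The crawl label and domain in it are placeholders, and appearing in the index only shows a page was crawled, not which models were actually trained on it:

```python
# Rough check of whether pages from a site appear in a given Common Crawl
# snapshot (and are therefore likely present in common LLM training corpora).
# The crawl label and domain below are placeholders; the list of available
# crawls is published at https://index.commoncrawl.org/collinfo.json
import json
import urllib.error
import urllib.parse
import urllib.request

CRAWL = "CC-MAIN-2024-33"        # example crawl label, not necessarily current
SITE = "example-blog.org/*"      # hypothetical domain: substitute your own

params = urllib.parse.urlencode({"url": SITE, "output": "json"})
query = f"https://index.commoncrawl.org/{CRAWL}-index?{params}"
request = urllib.request.Request(query, headers={"User-Agent": "cc-corpus-check"})

try:
    with urllib.request.urlopen(request) as response:
        # The index returns one JSON record per captured page.
        for line in response.read().decode("utf-8").splitlines():
            record = json.loads(line)
            print(record.get("timestamp"), record.get("status"), record.get("url"))
except urllib.error.HTTPError as err:
    # A 404 here means no captures were found for this query in this crawl.
    print(f"No captures found (HTTP {err.code}).")
```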
-
Where do I begin to express my awe and shock... I have been a Wikipedia editor since 2006. I have thousands and thousands of edits, and maybe hundreds of new articles written in the Spanish Wikipedia. That is thousands of hours of work devoted to making it easy for other humans to access knowledge. So yes, I'm not just mad at the prospect of having my work surrendered to greedy corporations whose sole purpose is to usher in workers' misery and genocide while making a profit from our work. I'm so enraged I can barely type anymore. No. Not in a thousand years. Nope. I'd rather see all my work disappear in a catastrophic fire than give it to any slop generator. Also: why? Why does everything have to include the cursed parrot that is these ill-engineered LLMs?
-
Very well expressed. I had been hoping for Creative Commons no-AI versions of CC BY and CC0, and after all that waiting, getting this instead is like a slap in the face to those of us who believed in these licenses and this movement.
-
From #28
-
Copying my comment from the now-closed issue: The point of CC was to let creators share with other creators without fear of exploitation. Generative AI is the ultimate exploiter. CC need only look at the libraries offering free materials -- from digitized treasures to open-access scholarly work -- whose services are being DDoSed by selfish, exploitative AI crawlers. (Don't say "they don't realize," either. They do. They're using proxies because they know they'd be blocked if they didn't.) CC's duty is clear, and I do not understand why CC has not realized that duty. Not answering this clear call to duty is already damaging CC's status and mindshare (and brand, if I may be horribly corporate for a moment). How am I supposed to teach this hypocrisy and cruelty to creators in my classrooms, when previously I was so proud to rep CC and use it on my own work? Shame on CC. Do better.
-
Adding my voice here to say thank you to steph for wording this so well. I agree with everything - this is a horrible and disappointing decision from CC and I hope to see it reversed ASAP.
-
When choosing a CC license I have options to choose from: Am I okay with commercial usage? Do adaptations need to be shared under the same license? Am I okay with adaptations of my work at all? The easy fix for all of this would be adding an option -NA, similar to -NC: No AI. But that would also mean that we need to hold AI companies accountable. Where is the attribution? Do they respect -NC or -ND? Because the answer is that AI companies ignore every license agreement they are presented with. Worse: they openly say, "If we had to pay for using all this content, we would have to close the company and file for bankruptcy." Therefore, no matter what signals we send, they will ignore them and take. If we don't step in and stop this, then everyone else will take, too. Not just for their AI, but to release our music under their name on Bandcamp, and our art under their name on whatever platform sells art. Because who will stop them? And at that point the CC license, which started as a good thing, will be worthless. So step in. Or go under.
-
What makes this whole signals situation so infuriating is that there seems to be a disconnect between those who are shaping Creative Commons policy and those of us who have actually used CC licenses – lived them, believed in them, built with them. It's as if the organization doesn't know who the community is.

When I began licensing my music under Creative Commons back in 2004, for a couple of years there was a real sense of hope. A vibrant free music scene was blossoming. Platforms like Jamendo were alive with discovery; here in Germany there were free music charts and even festivals; real audiences! There were collaborations with people we would never have met otherwise. For a short time, we reached hundreds of listeners who were actively seeking something outside the mainstream — something idealistic, something new. That spirit was quickly crushed under the rise of social media monopolies and streaming platforms. These days, I am lucky if 30 people listen to a new release that took me a year to make. And yet I kept using CC licenses; not for fame or profit, simply out of principle. Because I believe in open culture and sharing, and I always believed that CC licenses were – apart from the legal framework they represent – a way to signal these beliefs to like-minded individuals. Until today.

To publish under a Creative Commons license is to be an idealist. Watching the organization that represented that spirit (or at least I thought it did) roll out AI-friendly "signals", essentially building tools to better tell tech companies that it's OK to exploit our work, leaves a bitter taste indeed. AI is the antithesis of the ideals that drove the CC movement. It doesn't build culture, it cannibalizes it. The response from many CC users makes it clear: we are painfully aware of this. Yet the organization's presentation of these "signals" – as if they're the best thing since sliced bread – suggests that they aren't. That's why this feels so deeply disappointing to me.

For years, I have carried the torch of Creative Commons through obscurity and indifference (it speaks volumes that the major German IT news portal heise.de reported on this, and one day later not a single reader comment had been posted), often defending it to people who dismissed it outright. And now, as AI threatens the arts and culture as a whole, we're told to welcome it with open arms.

I understand that Creative Commons is just a licensing framework. I don't expect the organization to champion my art, to recognize the value of my work, or to offer thanks — that's never been the point. But if CC becomes a label that, through initiatives like these "signals", effectively tells the world, "These are the artists who are fine with being exploited", then something has gone terribly wrong. In that case, it's no longer a banner I can stand under in good faith. And I am seriously reconsidering whether to release my forthcoming album under a CC license at all. This isn't some grand gesture or a threat – I know the world will move on without blinking. But if this is the direction CC is heading, I simply can't keep lending it my trust, my name, or my work.
-
Exactly.
-
Adding my support to this objection. The technologies behind LLMs and "AI"-based image processing may have legitimate and even beneficial uses. However, the way I see them currently being implemented is thoroughly harmful, and not just when it comes to the acquisition of training data. Among other things, this technology currently causes an absurd waste of energy, clean water, and other resources, wreaking ecological havoc. It also causes considerable harm in the realm of labor, and social harm through the building of massive new datacenters. It creates plagiarism nightmares not just for artists and creatives, but also in education. Its outputs are notoriously unreliable and contribute to the already rampant spread of misinformation, in addition to notoriously amplifying bias. "AI" currently makes creative careers that include publishing on the Internet considerably less viable; I have suffered a total collapse of my copywriting business since LLMs became popular. I am furious at the blatant disregard for the harms of AI (not just regarding the acquisition of training data) that I see in the current discourse. I ask you, Creative Commons, to take a more critical stance. Give us tools to fight back.
-
Thank you for your feedback. We understand your frustration about scraping and the desire for stronger fences to protect human creativity from being exploited. This is certainly one way we’re seeing many creators react to the extractive nature of this new technology, rightly wanting measures to block unwanted AI uses. CC’s mission supports open access to knowledge and creativity. We do worry, however, that putting content behind digital walls could limit people’s ability to freely access, share, and build upon works, putting openness at risk. It also forecloses the potential benefits of AI training for purposes like research and other public-interest uses. That said, fences (mechanisms for blocking AI use) are already being developed by others, including the IETF AI Usage Preferences that the CC signals proposal is integrated with, and other mechanisms like Cloudflare’s approach. You can see more examples in this discussion. We expect many users will choose to fence their content, and we continue to encourage content holders to choose the best approach for themselves. Over the last three years, we’ve heard from so many who want something more open than a fence. They want a practical way to let others, including AI, know how they want their works used. That is why we are working towards CC signals as a way for creators to communicate their terms of use.
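To make the distinction concrete: a fence at its simplest is a set of robots.txt rules addressed to known AI crawlers. The sketch below is only an illustration of that plain robots.txt convention, not the IETF AI Usage Preferences vocabulary or Cloudflare's mechanism; the user-agent tokens shown (GPTBot, CCBot, Google-Extended) are real crawler names, the list is deliberately incomplete, and honoring such rules is voluntary on the crawler's side.

```python
# A minimal sketch of a robots.txt "fence": rules asking known AI training
# crawlers not to fetch anything, evaluated the way a well-behaved crawler would.
# The user-agent tokens are real crawler names, but the list is incomplete,
# and nothing technically forces a crawler to honor these rules.
from urllib import robotparser

FENCE = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(FENCE.splitlines())

# Well-behaved crawlers consult these rules before fetching; others simply won't.
for agent in ("GPTBot", "CCBot", "Google-Extended", "SomeOtherAgent"):
    allowed = parser.can_fetch(agent, "/my-album/track01.mp3")
    print(f"{agent}: fetching allowed = {allowed}")
```

Rules like these express a request rather than a technical barrier; network-level approaches such as Cloudflare's add enforcement on top of the request.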
-
There's also the issue of all the contributions made to the entire Wikipedia by all its contributors over all these years. How many of those editors will gladly give away their work, for free, to corporations that don't seem to care one iota for our environment, our jobs, our well-being, the genocide in Gaza, or even our own web servers (which they drain like leeches)? I don't want any of my contributions to be used to train a cursed parrot that, in the best case, will serve to swell the already pornographic wealth of billionaires and, in the worst case, will power some weaponized bot that worsens the current genocides and helps further the rampant fascism of certain countries. Nope.
-
(copied from #14)
Problem
This whole idea is completely and utterly wrong.
Description
Inviting AI scrapers to negotiate licensing terms for CC works is like a flock of sheep holding a summit to draft ethical guidelines for how wolves may best enjoy mutton — debating whether rosemary or thyme would best complement their sacrifice, graciously seasoned, of course, with consent paperwork attached. What we need is a fence, not a recipe book.
AI (and especially LLMs and generative AI) is a cultural parasite. It gorges itself on the life’s work of gifted artists, writers, musicians, and creators — scraping their creations without consent, credit, or compensation, regardless of what "signals" you might dream up. Because the AI bros won't care. They already don't care. They are going to laugh at your "signals".
Decades of human expression, painstakingly crafted with skill and soul, are reduced to fuel for machines that spit out lifeless imitations on demand.
What once took years of mastery, struggle, and personal vision can now be faked in seconds by anyone with a prompt and an internet connection. The result? A flood of synthetic content that completely devalues the artists' work and will eventually drown out every shred of originality still left.
It’s not innovation — it’s intellectual theft disguised as progress. This isn't democratizing creativity; it's automating appropriation. And it’s turning the arts from a field of human depth into a cheap buffet for algorithms and opportunists.
Alternatives
What we need is a way to fight this menace. The Creative Commons Organization should invest all the time that went into this fake in figuring out ways to protect our works from AI scrapers instead.
Postscript
I’ve been releasing my music under Creative Commons for almost two decades now — out of belief in open culture, in sharing, in building something bigger than myself. And what did I get? Silence. Indifference. Seldom any meaningful recognition or support. Now, suddenly, AI profiteers come crawling in, looting and scavenging — and they get to cash out while I watch art created with all of my heart's blood being turned into soulless sludge?
This isn’t just unfair — it’s a gut punch. It’s a betrayal of everything Creative Commons was supposed to stand for. I didn’t open my work to be strip-mined by algorithms and sold back to me as synthetic knockoffs. Enough is enough.