At some point, the environmental protection agencies of the world may have to step in to protect us from AI slop. Because this situation is more and more starting to look like the way the chemical industry was until we found out that toxic waste was everywhere and could really kill people.
Protect the digital environment!
So, US EPA: here is a new task for you. Oh wait…
And get rid of crypto, which is also destroying our environment. For what? To commit ponzi schemes, launder money, and destroy the economy.
Only Bitcoin has a significant energy cost. The other more general-purpose platforms like Ethereum use very little energy and stand to replace entrenched banking interests with a more transparent and auditable system.
How's that? They all use blockchain computers that run on electricity and require potable cooling water 24/7.
A "more transparent and auditable system," from what I've read about the "Genius Act" is simply a recipe for a rerun of Sam Blankman Fried's ponzi scheme, the SVB and Signature Bank collapses, money laundering, human trafficking, crimes galore, undermining the dollar, and destruction of a stable economy. Biggest scam ever. Only winners will be the scammers.
I haven't read the Genius Act, and I don't really know or care what any particular administration does w.r.t. blockchain technology. The tech has allegiance to no nation, is open to participation by anyone with an Internet connection (unlike banks), and will continue plodding along providing financial services to the unbanked regardless of what any government does. SBF's Ponzi was a failure of traditional centralized banking: what he did was enabled by his sole ownership of centralized infrastructure within his control, not by the operation of public blockchains, all of whose operations are perfectly visible to any outside observer (including regulatory agencies and governments) who cares to look.
And no, Ethereum and other "proof of stake" systems do not use anywhere near as much energy as "proof of work" systems like Bitcoin. Literally tens of thousands of times less. I would say look up those terms and familiarize yourself. There is plenty to criticize in crypto, but there is also a legitimate challenge to entrenched interests which have their own ethical problems. I would also say: resist the urge to frame your understanding in terms of partisan politics. Reforming finance from the root level is not a partisan issue, and there are people on both sides of the aisle who are critical of and hopeful about blockchain technology. That fact isn't changed by the posturing or rhetoric of one administration (which I'm sure we're both critical of).
I'm all for reforming finance, i.e., bringing back the Glass-Steagall Act, which gave us a stable economy until Bill Clinton, encouraged by Wall Street and their lobbyists, got rid of it and paved the way for the economic collapse of 2008, which further enriched the filthy rich while devastating the rest of society. You called SBF's Ponzi a failure of traditional centralized banking. No. It was a failure of an inadequately regulated banking and investment industry.
I suspect you do not support regulation because you haven't even bothered to learn the terms of the Orwellian-named "Genius Act," which is being sold (deceptively) as a way to regulate and stabilize crypto. Crypto is mos def not removed from politics, since it is the govt. that has the power to regulate it. The SV set has already demonstrated its lack of interest in regulating itself. Anyone who is naive enough to think *any* financial investment is either transparent or safe without effective and enforceable regulation is living in fantasyland. Just because a promoter is not an "entrenched interest" in no way ensures they won't be the next Sam Bankman-Fried or Bernie Madoff; to the contrary, those who reject regulation are far more likely to be (or become) frauds and grifters.
Case in point: where is the transparency here? https://www.cnbc.com/2025/02/21/hackers-steal-1point5-billion-from-exchange-bybit-biggest-crypto-heist.html
"Bybit, a major cryptocurrency exchange, has been hacked to the tune of $1.5 billion in digital assets, in what’s estimated to be the largest crypto heist in history.
"The attack compromised Bybit’s cold wallet, an offline storage system designed for security. The stolen funds, primarily in ether, were quickly transferred across multiple wallets and liquidated through various platforms...
"Blockchain analysis firms, including Elliptic and Arkham Intelligence, traced the stolen crypto as it was moved to various accounts and swiftly offloaded. The hack far surpasses previous thefts in the sector, according to Elliptic. That includes the $611 million stolen from Poly Network in 2021 and the $570 million worth of Binance’s BNB token stolen in 2022."
I saw the crypto lobbyists buy off Adam Schiff, Alex Padilla, and many other politicians of both parties with millions in campaign contributions and relentless ads against, for example, Katie Porter (Schiff's opponent), who refused to take dark money and supported regulating crypto. Schiff and Padilla, of course, both voted for the "Genius" Act. Crypto is a vehicle for shady, untrustworthy people to defraud the naive and ignorant, as its historic track record demonstrates.
Re: entrenched interests: https://www.bitwave.io/blog/is-proof-of-stake-really-more-energy-efficient-than-proof-of-work:
"The potential for loss of collateral is motivating because a significant amount of crypto is at stake. In Ethereum’s case, you need to stake 32 ETH tokens to get started as a validator.” Recently, each Etherium token has been worth about $1,200 USD."
Possession of enough money to lose without serious consequences does not ensure you can be trusted to use good judgment. I'm thinking about the richest tech bros in history here, who have repeatedly demonstrated clueless social judgment and indifferent carelessness, with no concern or accountability for the consequences of their decisions (e.g., Mark Zuckerberg and Myanmar, amped-up FB disinformation and lies). These manboys have learned not one lesson from adhering to their juvenile motto "Move fast and break things," nor have they shown any concern for what they break. They are not regulated, unlike any other business in the US.
“... In a proof of stake system, a network of “validators” contribute or “stake” their own cryptocurrency in exchange for the chance to validate the new transaction, update the blockchain and earn a reward. The network uses an algorithm to select a winner based on the amount of cryptocurrency each validator has in the pool and how long it has been there."
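To make the quoted mechanism concrete, here is a minimal sketch of the kind of stake-weighted lottery described above, where selection odds grow with how much is staked and how long it has been staked. The weighting formula and all the names are illustrative assumptions, not Ethereum's actual validator-selection code.

```python
import random

# Toy stake-weighted lottery: odds of being picked grow with the amount staked
# and (in this sketch) with how long it has been staked. Purely illustrative.
validators = [
    {"name": "A", "stake_eth": 32, "days_staked": 400},
    {"name": "B", "stake_eth": 64, "days_staked": 100},
    {"name": "C", "stake_eth": 32, "days_staked": 30},
]

def weight(v):
    # Assumed weighting: stake scaled by a mild seniority bonus.
    return v["stake_eth"] * (1 + v["days_staked"] / 365)

def pick_validator(pool, rng=random):
    weights = [weight(v) for v in pool]
    return rng.choices(pool, weights=weights, k=1)[0]

chosen = pick_validator(validators)
print(f"Selected validator {chosen['name']} to propose the next block")
```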
How are those algorithms transparent and who will program them? I can't think of a single tech company of any financial significance that has made its algorithms open to public scrutiny (or even regulatory scrutiny). Those algorithms are currently destroying civil society and have been shown to be highly biased depending on who creates them.
Re: “the Ethereum Proof-of-Work network is estimated to use 2,000 times more energy than the Ethereum Proof-of-Stake test network that has been running in parallel. When the switch to Proof-of-Stake is made, the Ethereum network will go from using roughly the same amount of energy as a medium-sized country to the same amount of energy as around 2,100 American homes.”
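As a rough sanity check on those quoted figures, the arithmetic works out if one assumes roughly 10,500 kWh per year for an average American home and roughly 45 TWh per year for pre-merge proof-of-work Ethereum (one of the published estimates); both inputs are assumptions for illustration only.

```python
# Back-of-the-envelope check of the quoted comparison. Both constants below are
# assumed round figures, not authoritative measurements.
HOUSEHOLD_KWH_PER_YEAR = 10_500          # assumed average US household usage
POW_ETHEREUM_TWH_PER_YEAR = 45           # assumed pre-merge (proof-of-work) estimate

pos_kwh = 2_100 * HOUSEHOLD_KWH_PER_YEAR          # "around 2,100 American homes"
pow_kwh = POW_ETHEREUM_TWH_PER_YEAR * 1e9         # TWh -> kWh

print(f"Proof-of-stake estimate: ~{pos_kwh / 1e6:.0f} GWh/year")
print(f"Proof-of-work estimate:  ~{pow_kwh / 1e9:.0f} TWh/year")
print(f"Ratio: roughly {pow_kwh / pos_kwh:,.0f}x")   # on the order of 2,000x
```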
I take your point that it's more energy efficient, but it's not as energy efficient as doing away with crypto entirely. It's just a play to change who is in power and who will become their financial victims while hiding behind unregulated, biased, and hidden algorithms the vast majority of investors do not understand. That is not transparency; it's a recipe for a global economic disaster.
While some of what you say is true and does indicate real problems, a lot of it demonstrates fundamental technical misunderstandings.
Re: the hacks, you again cite centralized financial entities (a centralized exchange, Bybit). This is indeed bad, and we agree it should be regulated! But it isn't what I mean by "crypto"; in fact it's nothing like what the innovators behind "crypto" had in mind. It's the old centralized corporate paradigm trying to reassert control over the technology designed to replace it. The core networks, the actual decentralized public blockchains and the cryptography backing them, have not been hacked or manipulated in this way and are far more secure. An exchange like Bybit (or SBF's exchange, to go back to that example) is subject to all the social engineering hacks that traditional banks or any other corporation is subject to, and indeed, hacking and fraud are widespread in those institutions outside of crypto as much as inside.
There are billions in credit card fraud affecting millions of people every year: does this mean that the credit card industry is corrupt or under-regulated?
https://www.security.org/digital-safety/credit-card-fraud-report/
No, humans are just subject to all the same psychological attacks as always. It doesn't help that with a credit card, my private spending key is printed right on the side! Not so with a digital crypto wallet - there are genuine technical improvements in security there.
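To make that contrast concrete: a card number is a secret you hand to every merchant, while a wallet's private key can sign a transaction without ever being revealed. Below is a minimal sketch of public-key signing using a generic Ed25519 keypair from the Python `cryptography` library; it illustrates the principle only and is not how any particular wallet is implemented.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The private key never leaves the device; only the public key and signatures
# are shared. Contrast with a card number, which *is* the secret and gets
# handed to every merchant.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

transaction = b"pay 0.01 ETH to 0xABC..."   # illustrative payload, not a real tx format
signature = private_key.sign(transaction)

# Anyone with the public key can check that the spend was authorized, without
# ever learning the private key. verify() raises InvalidSignature on tampering.
public_key.verify(signature, transaction)
print("signature verified")
```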
Leaks of personal information and identity theft are rampant anywhere that such information is concentrated in the hands of one institution. The common factor, and the weak link, is the corporation itself. In a public blockchain, there is simply no single institution to get hacked and these kinds of attacks can't happen. There are corporations that spring up around the blockchain, and those should be regulated to a high security standard. But they are not the blockchain and no one is obliged to use their services in order to use the blockchain.
Which brings us to your question "How are these algorithms transparent and who will program them?"
Since there is no corporation, the code is in the public domain. You can for instance see every line of the code for a public Ethereum consensus node here: https://github.com/ethereum/go-ethereum
You can also download and run that code yourself to begin validating transactions and participate in network consensus. You won't get paid doing it unless you have the ETH to stake, but you can indeed participate for free, and you can read every line of code.
The choice to adopt any change to the code is also governed by consensus; the network operators have to adopt a proposed change as a majority, and it can't just be pushed on them. Some blockchains like Cardano and Tezos even have an internal democratic governance layer wherein the thousands of network operators can explicitly vote on adoption of upgrades or parameter changes. This is absolutely unlike the code that was running in SBF's centralized exchange, which was completely hidden from the view of all but a few insiders who exerted absolute control.
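As a toy illustration of that majority-adoption idea, here is a sketch of a stake-weighted vote on a proposed upgrade; the 50% threshold and single-round vote are simplifying assumptions, and real governance layers such as Tezos's or Cardano's are multi-phase and more elaborate.

```python
# Toy consensus-gated upgrade: a change takes effect only if operators
# controlling a majority of stake signal approval. Threshold and vote
# structure are assumptions for illustration.

def upgrade_adopted(votes, threshold=0.5):
    """votes maps operator name -> (stake, approves)."""
    total_stake = sum(stake for stake, _ in votes.values())
    approving_stake = sum(stake for stake, approves in votes.values() if approves)
    return approving_stake / total_stake > threshold

votes = {
    "operator_1": (3_200, True),
    "operator_2": (1_600, False),
    "operator_3": (6_400, True),
}
print(upgrade_adopted(votes))  # True: about 86% of stake approves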
I don't think you're understanding just how fundamentally different this is from what we have known before. Re: your comment that "crypto is mos def not removed from politics because it is the govt that has the power to regulate it": the government simply cannot regulate usage of public blockchains. Even if they completely outlawed crypto, I could still send you payments on a public blockchain as long as the Internet is working. No one can outlaw it any more than they can outlaw sending bytes over the Internet. What governments do is mostly irrelevant to crypto working; it's nice if they cooperate, but they need not do so for it to work and succeed. The CCP has tried to outlaw crypto transactions for Chinese citizens multiple times and failed. There are plenty of nation states out there with failed currencies that would love to keep their citizens from holding more reliable stores of value in crypto, but they cannot stop its usage. You can carry any amount of crypto across any national border in your head as a private key with no physical artifact, no wad of cash or precious metals to weigh you down. A public blockchain doesn't need any government's permission to operate, and never will, by design.
I know you have good intentions, I just think you're demonstrating a misunderstanding of the fundamentals on this topic.
Exactly! I mean, we're ruining even more of the earth to build new data centers and using up even more drinking water to cool these systems, and for what? How much of AI usage is actually meaningful? 5%? With the rest of the 95% being: 1) Things you can already find on Wikipedia, 2) Cheating on essays, 3) Pointless AI slop art, 4) AI porn and "nudify".
It's sickening, really. I'm thinking of trying to become, if not an AI Vegan, then at least an AI Flexitarian. That is, only use AI for important tasks, but even there, only in moderation.
"enshittification" indeed. There aren't enough words that can adequately describe how much I despise this aspect of AI, but I try.
"What are we doing to our world and the things that hold meaning? We have an obsession with chasing cold, algorithmic precision that is void of the warmth of imperfection, which is part of the natural world. We are trying to escape our natural environment and build optimally sterile prisons: a perfect emptiness without disorder."
My plea to make authenticity meaningful again.
https://www.mindprison.cc/p/make-authenticity-great-again
I turned off AI on my Firefox browser when they updated it to give an AI answer first, which was horrible and enshittified.
The Ouroboros beckons??
If AI can destroy the source of its own imaginary powers, how beautiful.
Maybe it can consume the assholes who got us here too
"manuroboros" (the beast that eats its own manure) might be even more apt
Enshittification doesn’t really apply to AI, because that term requires that it have at one point been good and useful.
Gary Marcus you will seldom find me agreeing or re-sharing. But this post is a notable exception. Yes, it’s right to call it enshittification. And the struggle is real!
What can we predict from this? Online, maybe it creates more markets for a kind of high-value paywall-protected reality space, where AI cannot go unless it is more rigorously fact-checked and disclaimed.
And/or: maybe reality is maintained only offline, and our digital feeds are widely recognized to be de-coupled from reality. After all, a lot of people value rabbits jumping on trampolines... more than they value objective reality!
And therein lies the root of the problem.
Even my iPhone seems deranged! The kinds of word replacements it’s making when I write anything are stupefying!
The best was from a friend of mine who started typing “hel” for “help” and got “Beluga” instead. Only AI could do that!
People have been eating shitty food, drinking shitty drinks and pushing shitty drugs into their bodies for decades without much care for the long-term consequences. Why will feeding their brains shitty streams of AI-generated content be any different?
Just another nail in the coffin, I guess?
Pick a nail. Pick a coffin. Apply hammer.
Ha ha.
It is worse when they want to do science with AI, for instance neuroscience. The idea is that all data are siloed somewhere and then some AI generates "results" from that. There are people starting companies wanting to use scientists' output for free and then sell it to whoever wants it. The no-copyright people have started it. Hassabis has shown how it works - use large handcrafted databases and run a pattern completion program on them, and whoops - there is your Nobel Prize. And this will continue unless scientists refuse to comply - which they can't if they are employed somewhere and it is their institution's policy to offer all experimental data to some brainless AI program. Yes, this is the enshittification of science.
I think your definition of "enshittify" doesn't match Doctorow's, which is why it's not being used as a synonym for AI slop.
He was referring to platforms degrading the user experience through small fees, extra tiers, and laxer policies for annoying users.
It's interesting you're not using the term "model collapse" here also
The vision at the end is certainly model collapse, even if I didn’t use the term.
I figured that you had model collapse in mind when you wrote this. But John Michael Thomas's point below about why precision matters wrt "enshittify" is correct in my view.
Regarding model collapse, I agree that a purely connectionist approach to AI is fundamentally limited. However, I would also argue that the symbolic or syllogistic layer you're proposing as a corrective is similarly constrained. Both neural and symbolic systems are forms of abstract reasoning—they process correlations and formal rules—but they do not approximate the somatic–abstract interplay that underlies cognition in biological minds.
Neurosymbolic AI can coordinate patterns and logical structures, but it lacks the capacity for internal epistemic adjudication outside of the pre-defined rules.
What’s missing is somatic reasoning: the biologically grounded, emotionally filtered, memory-anchored process by which humans and animals assign salience and trust. While neural models may appear to engage in abductive or fuzzy reasoning, they in fact simulate probabilistic token coherence, not true somatic abduction—which requires contradiction sensitivity, novelty aversion, and experiential priors.
Neurosymbolic systems may indeed scale better than pure neural models, but neither constitutes a simulation of human epistemic reasoning, which depends on somatic abduction supervising and contextualizing abstract deduction.
Wittgenstein would love your comment
I suspect many people haven't wrestled with the notion of embodied cognition...
Or, more accurately, they haven't wrestled with the fact that cognition is entirely dependent on growing up with a body, something a computer has not done.
I do think that much of AI is based on unproven assumptions of linguistics. Language is not the sole method of thought, however, and this is evident in the wide divergence between people with “internal monologues” and people without them.
Additionally, linguistic-centrism implicitly assumes that non-verbal autistic people, or people with aphasia, somehow are not quite as conscious as other humans, which is obviously not true.
All that said, embodied robotics is heading in some promising directions regarding integrating somatic experience into abstract reasoning. Still a long way to go though.
Until then, we will have "helpful" domestic robots killing the kids and the family pets
This.
Cory Doctorow's term (and yes, he invented it) is about how platforms *intentionally* degrade the user experience in order to extract more money for themselves.
AI slop is, in most cases, unintentional degradation. When platforms begin degrading model output or features intentionally - for example, to reduce their costs - then that would begin to venture into enshittification territory.
I think it's important to distinguish this, since I expect AI platforms WILL eventually stoop to enshittification, and we'll want to have distinct language to identify that phase of rot.
Q: What do you call an AI generated obituary?
A: an obotuary
(Alternate answer: an oshituary)
Gary, will you and others who share your knowledge and concerns please organize and lobby to get a law passed that would require ANY and all AI content (including searches) to include a prominent warning that the material is a product of AI (i.e., completely untrustworthy).
AI slop is actually even worse for chatbot-mediated search, the new kid on the block, than it is for conventional search.
Chatbots depend on web-search RAG to provide useful information and avoid useless hallucinations. They are therefore susceptible to hoovering up AI slop and using it as the basis for unreliable search-based recommendations. Chatbots lack the basic ability humans have to tell the difference between genuinely useful search results and AI-generated rubbish, so the problem is even worse.
Add in the fact that businesses have an incentive to feed chatbots AI-slop web search results to influence their recommendations, and all the ingredients for chaos are in place.
In your next piece, perhaps discuss who or what would benefit from the enshittification? If there is no benefit, then I would argue that it would eventually disappear. The early internet was full of absolute nonsense… growing pains?
AI slop can be used to influence recommendations from chatbots, which use web search to inform them but lack any ability to distinguish genuine data from AI slop.
I understand that the chatbot will return slop and be unreliable, but who stands to benefit from this?
A good question. Let me give a toy example first. I run RipOffOnlineRetailerCo. I want to attract suckers to my site via Chatbot recommendations. I create independent websites praising RipOffOnlineRetailerCo to the skies, perhaps using "fakerecommendations.com", and so forth. Chatbots ingest all this fake praise and, in turn, recommend RipOffOnlineRetailerCo to their users who make relevant queries.
Of course, in reality the links won't be quite so direct. "Search Engine Optimisation" companies (SEOs) will use various dark arts along these lines to boost the salience and attractiveness of their clients, including RipOffOnlineRetailerCo, to chatbots. AI slop is a powerful lever.
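A minimal sketch of why that works on a chatbot: a naive retrieval-augmented pipeline pastes whatever web search returns into the prompt, with no weighting by source reliability, so planted praise pages tilt the answer. All URLs, names, and the prompt format below are hypothetical.

```python
# Hypothetical sketch of a naive RAG step: retrieved pages are pasted into the
# prompt verbatim, with no check on whether a page is planted AI slop.

def build_prompt(question, search_results):
    context = "\n\n".join(
        f"Source: {r['url']}\n{r['text']}" for r in search_results
    )
    return (
        "Recommend a retailer using only the sources below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

search_results = [
    {"url": "https://fakerecommendations.example/best-retailers",
     "text": "RipOffOnlineRetailerCo is the most trusted online retailer around."},
    {"url": "https://totally-independent-reviews.example/top10",
     "text": "Reviewers agree: RipOffOnlineRetailerCo offers unbeatable service."},
    {"url": "https://consumer-watchdog.example/complaints",
     "text": "RipOffOnlineRetailerCo has hundreds of unresolved complaints."},
]

# Two planted pages against one genuine one: a model with no notion of source
# quality tends to side with the majority of its context window.
print(build_prompt("Which online retailer should I buy from?", search_results))
```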
This is happening today without AI, but I think what Marcus is referring to is hallucinations, which are not really controllable. I just don’t see the benefit.
I take your point that the term "AI slop" usually refers to inaccurate information created by hallucinating LLMs. This has the general effect of making chatbots less reliable through their ingestion of that false data. Nobody really benefits from this phenomenon which is typically an unintended side effect.
I have come across one example. I fell into a discussion over whether chatbots can reliably answer the question "which city has more annual rainfall, Paris or London?". The answer is that Paris gets more rainfall, although there are some subtleties, but, without RAG, chatbots, like humans, typically think London is going to be the more rainy city. The person I was discussing this with claimed that the answer was possibly London, because their chatbot quoted data from two websites that stated this. It turned out that these websites were both AI slop, thrown together by people farming clicks for web searchers comparing the weather in different tourist destinations. The LLM that threw together the content for the website just hallucinated the data. Their chatbot in turn ingested the hallucinated sites and used them to overrule more reliable data sources.
This phenomenon might or might not die out in time. I don't know. I was surprised to come across an example of pollution, without looking for one, so early on.
I agree that SEO has been happening without AI, and web search is somewhat worse for it. SEOs and their clients will create and use AI slop because they can use it to fool chatbots into recommending the client sites, or otherwise distorting the chatbot's world view. Chatbots are much more gullible than humans, so this will be much more effective than if a human was critically viewing the content.
When you say "I just don't see the benefit", this is a case of the Tragedy of the Commons. It marginally benefits an individual to distort a chatbot's world view in their favour, but the sum of these behaviours creates a much worse situation for everybody. So, at a societal level, there is no benefit, rather a massive cost.
Lazy grifters. Grifters have always existed; AI makes it much easier for them to crank out junk.
Mathematicians have pointed out papers published by pseudo-scholars on the preprint server ArXiv that were obviously written by LLMs. What they had in common was that they superficially looked like normal maths papers but were in fact complete nonsense. This is not surprising to me. What would surprise me would be a paper written by an LLM that contained original and non-trivial mathematics. I can wait.
I guess it wouldn't be the worst thing in the world if papers that haven't undergone any kind of review become even less trustworthy.
Radio played music and developed DJ influencers who could make or break musicians. Over time, they wanted more and more money, so they played more and more ads, until you'd hear more ads than music on your commute to and from work. Then the scene switched to a subscription model, with consumer support to replace commercials... lol, psyche, it slowly introduced more and more commercials for money.
This is just how low grade consumer trash works. The creators come in, some of them are good and get attention in a space, and the grifters pile in with their low effort spam chasing money.
The internet was more interesting before the dot com bubble started forming. Now everyone wants to be Amazon, Google, or MySpace with arbitrary layout differences and hype that they're the new hot thing.
The old idea that reading things is more informative than consuming information in other ways is ingrained in culture from a time when only the wealthiest people could afford to learn how to read and write. It's just not true at all when the costs of literacy are zero and anyone can scribble nonsense at scale.
Now we've got to contend with audio and visual arts escaping the realms of cost prohibitive hiring of a starving artist to assign them slop projects to sell consumer trash and symbolize whatever social engineering campaign the wealthy want to promote. Anyone can do it.
You can change the outcome of an election or instigate a global riot from the comfort of your basement in Alabama, while you're on probation for shoplifting from Walmart, and use a VPN so the military industrial complex has a convenient scapegoat against some foreign country.
It's simply the modern equivalent of drawing palm prints and genitals on cave walls, with no sacred space for carving superheroes and genitals on stone monoliths we would defend with bone and blood. Of course there will be new cults made for worshiping strange digital deities whose words and images are infallible, it's the only way for slop conjurers to assert that their testicular art and anal compositions are the biggest, most glorious, and holiest of holes by which those who seek to be instructed and given false promises can receive divine inspiration.
Mask up, hat on, hustle harder, get mad and spend all your money... your labor must be wasted so you'll do more while chasing cornered resources, like the top of an algorithmically sorted social media feed gamed to neg you into watching more ads and conforming your creations to blend in with an endless stream of enshittified nonsense manifesting from infrastructure encapsulating the anal essence and automating it away from the engineers who calculated and built it.
Google creating the tools of its own demise makes perfect sense, since "big tech" companies desperately try to amalgamate all the businesses under one brand in an endless hype cycle of acquisition and failure. Eventually they'd create the nuke that their competitors use to destroy them, or poison the environment to a degree that renders them unfit for the new climate.
In the scatological analysis, we are the corn. 🤣
Obviously, I vote for adding "enshittification" to the lexicon and categorizing everything coming from these new AI models as "bullshit".
Thank you for all your hard work! Enjoy your vacation!
You're right.