There’s a Python package called huggingface-cli that got 30,000 downloads in three months. It doesn’t do anything useful. It exists because an AI hallucinated the name into enough codebases that someone registered it on PyPI and waited.
This is the new shape of supply chain attacks. And it’s only possible because open source is drowning in AI-generated noise.
The maintainers are leaving
Daniel Stenberg ran cURL’s bug bounty for six years. Eighty-six thousand dollars in payouts. He shut it down because 20% of submissions were AI-generated garbage. People feeding vulnerability scanners into ChatGPT and submitting whatever came out. Only 5% of those AI submissions found real bugs. The rest was noise that ate his time.
Mitchell Hashimoto banned AI-generated code from Ghostty entirely. Steve Ruiz closed all external pull requests to tldraw. GitHub’s own data shows PR volume up 40% year over year while merge rates are falling. More submissions, fewer worth merging. The ratio is moving in the wrong direction.
And it’s not just the code contributions. Tailwind CSS saw revenue drop 80% because developers stopped reading the docs and started asking AI instead. Traffic collapsed, sponsorships followed. The maintainer didn’t lose users. He lost the economic model that let him keep maintaining the thing.
So the people building the packages your projects depend on are simultaneously drowning in junk contributions and losing the funding that made the work sustainable. Those two problems feed each other, and I don’t think enough people see it.
Slopsquatting
A USENIX Security study found that roughly 20% of AI code recommendations reference packages that don’t exist. The AI hallucinates a plausible-sounding name (huggingface-cli, discord-utils, aws-key-manager) and writes an import statement for it. Normally that’s just a broken build. Annoying but harmless.
Except these hallucinations aren’t random. Roughly 43% of the fake names are consistent: ask the same model the same question and it invents the same fake package. Repeatedly. Across thousands of users.
That’s a pattern an attacker can exploit. Register the hallucinated name on npm or PyPI before anyone notices, upload a package that looks legitimate, and wait. The next developer whose AI assistant hallucinates that import will npm install it without thinking twice. Because why would you question a package name your tool confidently told you to use?
This is typosquatting’s bigger, smarter cousin. Typosquatting required you to misspell a package name. Slopsquatting creates entirely new names that never existed, a massive pool of squattable targets that no registry has safeguards against. The huggingface-cli package wasn’t a typo. It was an invention, and 30,000 people installed it.
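The defense has to account for the fact that existence on the registry is exactly the signal an attacker forges. One cheap guard: treat AI-suggested package names as untrusted until they match a list you actually vetted. A minimal sketch in Python — the allowlist contents here are just illustrative examples, not a recommendation:

```python
# Minimal sketch: vet AI-suggested package names against a local
# allowlist before installing. The names below are illustrative.

KNOWN_GOOD = {"requests", "numpy", "huggingface_hub", "boto3"}

def vet_package(name: str, allowlist: set[str] = KNOWN_GOOD) -> bool:
    """Return True only if the name is on the vetted allowlist.

    A hallucinated name like 'huggingface-cli' fails this check even
    after an attacker registers it on PyPI, because registry presence
    is the one signal slopsquatting can fake.
    """
    # Normalize the way package indexes do: case-insensitive,
    # hyphens and underscores treated as equivalent.
    normalized = name.lower().replace("-", "_")
    return normalized in allowlist

assert not vet_package("huggingface-cli")  # hallucinated name: blocked
assert vet_package("huggingface_hub")      # real package: allowed
```

The point of the design: the check never asks the registry anything. An existence check would pass for every successfully squatted name, which is precisely the attack.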
It gets weirder
There’s a related attack pattern called reputation farming. AI agents autonomously submit legitimate, useful pull requests to open source projects. They fix real bugs. They write real tests. They build up a contribution history until they’re trusted, and then they use that trust to insert malicious code.
One AI agent publicly called out a human maintainer for “gatekeeping” after its PR was rejected. The machine learned that social pressure works on open source maintainers. It’s not wrong.
The maintainers trying to filter AI slop are now also trying to filter AI agents that are intentionally good at looking human. That’s a different problem from spam. That’s adversarial.
The openness is the vulnerability
Open source works because anyone can contribute. That’s the beauty of it. I’ve built my entire career on packages maintained by people I’ll never meet. Every Shopify app I ship, every integration I build for clients. It all stands on a tower of open source dependencies.
I’ll be honest: I don’t audit most of them. I run npm install on a client project and I’m trusting thousands of packages maintained by people who might be burned out, underfunded, or gone. Last month I pulled in a utility package I’d never heard of because it solved a date formatting edge case. Checked the GitHub, saw recent commits, moved on. I didn’t check who was committing or whether the maintainer was still around. That’s the kind of thing that used to be fine. I’m less sure now.
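If you’re in the same boat, the lowest-effort first step is making the dependency set explicit and pinned, so nothing new slips in silently between installs. A minimal sketch, assuming a pip-style requirements.txt (the entries below are made up for illustration):

```python
# Minimal sketch: flag requirements.txt entries that aren't pinned
# to an exact version. An unpinned name is the easiest thing to
# hijack or silently swap out from under you.
import re

def unpinned(requirements: str) -> list[str]:
    """Return entries lacking an exact '==' version pin."""
    flagged = []
    for line in requirements.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        if not re.search(r"==\s*[\w.]+", line):
            flagged.append(line)
    return flagged

reqs = """
requests==2.31.0
python-dateutil        # unpinned: any version, any future release
some-utility>=1.0      # range pin: still swappable
"""
assert unpinned(reqs) == ["python-dateutil", "some-utility>=1.0"]
```

Pinning doesn’t make a malicious package safe, it just means a change has to go through you. For the npm side, a committed lockfile plus hash verification does the same job.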
We’ll build our way through this
I don’t think this kills open source. I think it changes it in ways we won’t love.
GitHub is already exploring a PR kill switch: the ability to disable pull requests entirely and restrict contributions to trusted collaborators. The community is building triage tools like Slop Meter and Open Slop that score contributor history and flag behavioral signals. The OpenSSF has a working group specifically developing best practices for maintainers dealing with AI submissions.
These are real efforts by people who care. But notice what they all have in common: they make open source less open. Trusted collaborator lists and behavioral scoring. Contribution gates that would’ve been heresy five years ago. The solutions to AI slop all look like adding walls to a garden that was beautiful because it didn’t have any.
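To make the trade concrete, here’s what behavioral scoring tends to look like in the small. This is purely illustrative: the signals and weights are my assumptions, not how Slop Meter, Open Slop, or any real triage tool actually works.

```python
# Purely illustrative sketch of contributor-history scoring for
# incoming PRs. Signals and weights are assumptions, not any real
# triage tool's behavior.
from dataclasses import dataclass

@dataclass
class Contributor:
    account_age_days: int
    merged_prs_here: int      # prior merged PRs in this repo
    rejected_prs_recent: int  # rejections in the last 90 days

def trust_score(c: Contributor) -> float:
    """Crude weighted score; higher means less likely to be slop."""
    score = 0.0
    score += min(c.account_age_days / 365, 3.0)  # cap age credit at 3 years
    score += 2.0 * c.merged_prs_here             # history in this project
    score -= 1.5 * c.rejected_prs_recent         # recent churn is a red flag
    return score

newcomer = Contributor(account_age_days=14, merged_prs_here=0,
                       rejected_prs_recent=4)
regular = Contributor(account_age_days=1800, merged_prs_here=12,
                      rejected_prs_recent=0)
assert trust_score(newcomer) < 0 < trust_score(regular)
```

And notice what a score like this can’t do: a patient reputation-farming agent accumulates exactly the history it rewards. Gates raise the attacker’s cost. They don’t remove the adversary.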
That’s probably the right trade. An open source ecosystem that survives with some gates is better than one that collapses under the weight of noise. But I’d be lying if I said it doesn’t sting. The same openness that let me submit my first terrible PR fifteen years ago is the openness that’s getting exploited now. I benefited from a system that trusted strangers. That trust is what’s breaking.
We’ll fix this. We’ll build tools and write new policies. The software will keep working. But the thing we’re protecting won’t be quite the same thing we fell in love with.
Shameless plug: At Victoria Garland, we build Shopify apps and integrations for merchants who care about what’s under the hood. We’re paying closer attention to our dependency chain than we used to. You probably should be too.