17mar2026
SmallWeb is a great idea: a manually curated collection of personal projects and blogs. It’s exactly what I’ve been missing. The design and philosophy prioritize people over rigid structure.
Every layer of review makes you 10x slower
The result might be slop, but if the slop is 100x cheaper, then it only needs to deliver 1% of the value per unit and it's still a fair trade. And if your value per unit is even a mere 2% of what it used to be, you’ve doubled your returns! Amazing.
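The break-even arithmetic here is easy to sanity-check. A minimal sketch in Python (the function name and the numbers are illustrative, chosen to match the 100x-cheaper claim above):

```python
# Illustrative numbers only: "slop" is 100x cheaper per unit than hand-written work.
def value_per_dollar(value_per_unit: float, cost_per_unit: float) -> float:
    """How much value each dollar spent buys."""
    return value_per_unit / cost_per_unit

# Baseline: 1 unit of value at a cost of 100.
baseline = value_per_dollar(value_per_unit=1.0, cost_per_unit=100.0)

# Slop at 1% of the value, 1% of the cost: break-even.
break_even = value_per_dollar(value_per_unit=0.01, cost_per_unit=1.0)

# Slop at 2% of the value, 1% of the cost: doubled returns.
doubled = value_per_dollar(value_per_unit=0.02, cost_per_unit=1.0)

print(break_even / baseline)  # 1.0 — same value per dollar
print(doubled / baseline)     # 2.0 — twice the value per dollar
```

The whole argument, of course, hinges on the cost ratio actually staying at 100x once you account for cleanup, which is exactly what the rest of this post disputes.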
There are some pretty dumb assumptions underlying that theory; you can imagine them for yourself. Suffice it to say that this produces what I will call the AI Developer’s Descent Into Madness:
- Whoa, I produced this prototype so fast! I have super powers!
- This prototype is getting buggy. I’ll tell the AI to fix the bugs.
- Hmm, every change now causes as many new bugs as it fixes.
- Aha! But if I have an AI agent also review the code, it can find its own bugs!
- Wait, why am I personally passing data back and forth between agents?
- I need an agent framework.
- I can have my agent write an agent framework!
- Return to step 1.
It’s actually alarming how many friends and respected peers I’ve lost to this cycle already.
There is so much to say about this.
First of all, consider the claim that “if the slop is 100x cheaper, then it only needs to deliver 1% of the value.” The problem is that it is extremely hard to distinguish real slop from good code without additional review. And once you introduce those reviews, you largely eliminate the speed advantage the whole argument relies on.
Framing the problem as a trade-off between speed and quality is misleading.
In reality, we are not replacing a slow process with a fast one. We are replacing one slow process with another slow process — just with additional layers of AI agents in between, consuming tokens and increasing costs along the way.
