

Are you talking about Anubis? Because you're very clearly wrong.
And now that I think about it, regardless of which approach you were talking about, that's some impressive arrogance to assume that everyone involved other than you was a complete idiot.
ETA:
Ahh, looking at your post history, I see you misunderstand why scrapers use a common user agent, and are confused about what a general increase in cost-per-page means to people who do bulk scraping.
Bruh, when I said "you misunderstand why scrapers use a common user agent" I didn't require further proof.
Requests that follow an obvious bulk-scraper pattern, with user agents that are almost certainly not regular humans, are trivially easy to handle using decades-old techniques, which is why scrapers will not start using curl user agents.
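To make that concrete, here's a rough sketch of the kind of decades-old filtering I mean. The function name, the patterns, and the rate threshold are all made up for illustration, not a real blocklist:

    import re

    # Tools that announce themselves honestly are trivial to match.
    NON_BROWSER_UA = re.compile(
        r"curl|wget|python-requests|go-http-client|scrapy",
        re.IGNORECASE,
    )

    def should_challenge(user_agent: str, requests_per_minute: int) -> bool:
        """Flag requests that are almost certainly not a human in a browser."""
        if not user_agent or NON_BROWSER_UA.search(user_agent):
            return True
        # Even a "real browser" UA hammering hundreds of pages a minute
        # is following a bulk-scraper pattern, whatever it claims to be.
        return requests_per_minute > 120  # threshold is a made-up example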
See, the thing about blocking AI scraping is that you can actually see it work by looking at the logs. I'm guessing you don't run any sites that get much traffic, or you'd be able to see this too. Its efficacy is obvious.
Sure, scrapers could start keeping extra state or brute-forcing hashes, but at the scale they're working at that becomes painfully expensive, and the effort required to raise the challenge difficulty is minimal if it becomes apparent that scrapers are getting through. Which will be very obvious if it happens.
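For anyone following along, here's roughly how the economics work, assuming an Anubis-style SHA-256 proof of work where the answer must hash to a digest with some number of leading zero hex digits (the real scheme may differ in detail):

    import hashlib
    from itertools import count

    def solve(challenge: str, difficulty: int) -> int:
        """Client side: brute-force a nonce, ~16**difficulty hashes on average."""
        target = "0" * difficulty
        for nonce in count():
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce

    def verify(challenge: str, nonce: int, difficulty: int) -> bool:
        """Server side: a single hash, no matter the difficulty."""
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * difficulty)

The asymmetry is the whole point: a real visitor pays the solve cost once per challenge and barely notices, a scraper refetching millions of pages pays it millions of times, and bumping the difficulty by one hex digit multiplies the solver's expected work by about 16x while verification stays a single hash.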
Presumably you haven't had much experience with AI scrapers. They're not a "one run and done" type of thing, especially for sites with frequently changing content, like this one.
I don't want to seem rude, but you appear to be speaking from a position of considerable ignorance, dismissing the work of people who actually have skin in the game and have demonstrated effective techniques for dealing with the problem. Maybe a little more research on the issue would help.