cross-posted from [email protected]
In light of this news, we need a browser that looks like a search engine crawler.
This would equalise things: websites could no longer give preferential treatment to crawlers while giving lousy treatment to the rest of us.
My question is: assuming a browser could mimic all of a crawler's headers, would that be sufficient? Or do paywalls take the IP address into account? And if so, would it work to subscribe to Google Cloud just to get an IP address in Google's ranges and browse from there?
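The header part is trivial, for what it's worth. Here's a minimal Python sketch (the URL is made up, and `crawler_request` is just an illustrative helper) that builds a request presenting Googlebot's published User-Agent string, which is the first thing most paywalls check:

```python
import urllib.request

# Googlebot's documented desktop User-Agent string.
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def crawler_request(url: str) -> urllib.request.Request:
    # Build a request that presents the crawler User-Agent
    # instead of the browser's own.
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})

req = crawler_request("https://example.com/article")
print(req.get_header("User-agent"))
```

Note that sites which actually care can see through this: Google's own documentation tells site owners to verify Googlebot by doing a reverse DNS lookup on the connecting IP and checking it resolves under googlebot.com or google.com, so against a site that does that, spoofed headers alone won't be enough, and Google Cloud customer IPs are in different ranges than Googlebot's.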


You mean a browser with AI features built in? Sounds useful!
Do you mean that a browser cannot fool a web server without using AI? Can you explain in more detail how the web service detects a client’s use of AI?