While I can see the upside of being able to identify bots, I don't think WEI is the right way to do it, and Google definitely isn't the right company to be handling it.
Besides, how do you tell the difference between a good bot and a bad bot? Search engine web crawlers, for example, are inherently good, so they should still be able to operate. But if it's easy to register a good bot with WEI, it's just as easy to register a bad bot. And if it's hard to register a good bot, then you're effectively gatekeeping the automated part of the internet (which might actually be Google's intention).
Yeah, even if the hardware can perfectly attest that it isn't running any botting software, nothing stops someone from spinning up a farm of these machines and driving them all from a central server. It's impossible to determine whether your user is a bot.
I was thinking the same thing about Google wanting their bots to be the only ones allowed to crawl and index the internet.
A bot that only reads your website is good; one that posts things or otherwise changes your database, less so.