- cross-posted to:
- [email protected]
A new study from the Columbia Journalism Review showed that AI search engines and chatbots, such as OpenAI’s ChatGPT Search, Perplexity, DeepSeek Search, Microsoft Copilot, Grok, and Google’s Gemini, are simply wrong far too often.
I would be interested to see a comparison between AI search and classic search: “How often does it find what I am looking for?” versus “How often is that info confidently wrong?”
Classic search is so garbage these days, I can never find what I am looking for.
Classic search is mostly wrong because of AI spam, and secondarily because of the SEO practices Google encouraged, which made the services worse across the board, FTR. So “AI fails to solve a problem AI made” is another version of this headline.
No, search has been garbage for more than 10 years now, since years before any usable AI model existed. AI articles just made it worse.
SEO and the relentless profit optimization of search engines, without regard for UX, is what is to blame in my book.
Fair point IMO. Sure, AI output should be double-checked, but does it give bad info more often than the top search results do? And how much of it is wrong because of AI black magic versus because the top results it based its output on were themselves bad?
What I like about using AI for search is that even when it’s wrong, it at least points me in the right direction.
Whereas if I use a modern search engine, I find nothing.
I remember when search engines would actually give you what you were looking for. Those were the days. I would take that over AI search any time.
Back when you could actually get useful exact matches, and semantic matches too.
nowadays search engines are more like:
“if page contains one of the stem words in the query, then return the page as top result; ignore all other words in the query or what order they are in”
Someone at Google probably got 1,000,000 billion dollars for this innovative “optimization.”
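For anyone wondering what that “optimization” looks like in practice, here is a tongue-in-cheek sketch of the matching behavior described above: return a page as a hit if it contains *any* stemmed query word, ignoring the rest of the query and word order entirely. Everything here (the toy stemmer, the function names) is illustrative, not any real search engine’s code.

```python
def crude_stem(word: str) -> str:
    """Toy stemmer: strip a few common English suffixes."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word


def naive_match(query: str, page_text: str) -> bool:
    """True if the page contains at least one stemmed query word.

    All other query words, and their order, are ignored, which is
    exactly the behavior the comment above is mocking.
    """
    query_stems = {crude_stem(w.lower()) for w in query.split()}
    page_stems = {crude_stem(w.lower()) for w in page_text.split()}
    return bool(query_stems & page_stems)


# A page about running shoes "matches" a query about marathon training,
# because "training" and "train" share the stem "train":
print(naive_match("marathon training plans", "best running shoes train hard"))  # True
```

With single-shared-stem matching like this, almost any long page “matches” almost any query, which is why the top results feel unrelated to what you actually typed.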