- cross-posted to:
  - [email protected]
  - [email protected]
The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.
Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.
Even if we somehow managed to make AI 100% accurate, it still wouldn’t actually be factual. AI will never be factual.
If you think about what an LLM actually is, it’s basically nothing more than someone making a tar file, one that just takes a lot of time, energy, and user input to untar again. But what ends up in it still depends on the maker of the tar file. For example, a Zuckerberg will put different data into it than a Bernie Sanders would. So the LLM will always output data resembling the views of whoever made it, political or otherwise, and you would need to use every AI there is to see a truly factual answer (a toy sketch below makes this concrete).
So, TL;DR: even if you use an LLM, you’d still need to use every LLM there is to get an answer that’s at least close to factual. You’re therefore no better off than just using SearXNG with a good adblocker and blocking the search results from all the clickbait, AI-generated slop sites.
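To make the analogy concrete, here’s a minimal sketch, assuming a toy word-bigram model in place of a real LLM (the two one-line corpora are invented for illustration): the “archive” is nothing but counts over whatever the maker fed in, so generation can only ever replay that corpus’s framing.

```python
import random
from collections import defaultdict, Counter

def train(text):
    """Count word-bigram statistics. The 'tar file' is just these counts,
    and they contain whatever the maker's corpus contained."""
    model = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, start, n=8, seed=0):
    """Sample a short continuation from the counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break
        words, weights = zip(*nxt.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# Two hypothetical corpora standing in for two makers' training sets.
corpus_a = "taxes are theft and markets are freedom so taxes are waste"
corpus_b = "taxes are solidarity and markets are power so taxes are fair"

print(generate(train(corpus_a), "taxes"))  # can only echo corpus A's framing
print(generate(train(corpus_b), "taxes"))  # can only echo corpus B's framing
```

Same prompt, same code, different archive contents, different answer: the bias is baked in at packing time.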
Yup. These “AI” machines are not much more than glorified pattern recognition software. They are hallucination machines that sometimes get things right by accident.
Comparing them to .tar or .zip files is an interesting way of thinking about how the “training process” is nothing more than adjusting the machine so that it copies the training data (backpropagation). Since training works in such a way that the machine’s definition of success is how well it copies the training data, the whole thing really is closer to lossy compression of the training set than to anything like understanding.
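To spell that out, here’s a toy sketch, assuming plain NumPy and a one-matrix bigram model instead of a transformer (the training string is made up), but trained the same way LLMs are: gradient descent on cross-entropy, i.e. a score for how faithfully the model reproduces the training text.

```python
import numpy as np

# Toy next-character model trained by backpropagation on cross-entropy.
# The loss literally measures: "how well do I copy the training text?"

text = "the cat sat on the mat. the cat sat on the mat. "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# Training pairs: each character is asked to predict the next one.
xs = np.array([idx[c] for c in text[:-1]])
ys = np.array([idx[c] for c in text[1:]])

W = np.zeros((V, V))  # logits of P(next char | current char): the whole "model"
lr = 0.5
for step in range(200):
    logits = W[xs]                                    # (N, V)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                 # softmax
    loss = -np.log(p[np.arange(len(ys)), ys]).mean()  # cross-entropy
    grad = p
    grad[np.arange(len(ys)), ys] -= 1                 # d(loss)/d(logits)
    np.add.at(W, xs, -lr * grad / len(xs))            # the "backprop" update
print(f"final loss: {loss:.3f}")  # low loss == the matrix has copied the text
```

Drive the loss low enough and the weights have effectively memorized the training data, which is exactly the compression picture above.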
Don’t give me the credit; I just once saw a video about how you could theoretically use an LLM as a compression algorithm for password- (or in this case prompt-) protected files. Like, if you made that work, you could literally stop someone (like the Feds) from cracking your file.
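I can’t vouch for that video, but here’s a minimal sketch of the idea under one set of assumptions: any predictive model can drive a compressor, and if its predictions are seeded with a secret prompt, the output only decodes correctly with that exact prompt. A tiny adaptive character model plus zlib stands in for a real LLM and entropy coder; the secret and the message are made up for the demo, and this illustrates the mechanism only, not real encryption.

```python
import zlib
from collections import defaultdict, Counter

# Printable ASCII plus newline; anything else would crash this toy.
ALPHABET = [chr(i) for i in range(32, 127)] + ["\n"]

class TinyModel:
    """Order-2 adaptive character model standing in for an LLM.
    The secret prompt seeds its statistics, so encoder and decoder
    must share the exact same prompt to stay in sync."""
    def __init__(self, prompt):
        self.counts = defaultdict(Counter)
        padded = "  " + prompt
        for i in range(2, len(padded)):
            self.counts[padded[i - 2:i]][padded[i]] += 1
        self.ctx = padded[-2:]

    def ranked(self):
        # Alphabet sorted most-probable-first for the current context.
        c = self.counts[self.ctx]
        return sorted(ALPHABET, key=lambda ch: (-c[ch], ch))

    def push(self, ch):
        self.counts[self.ctx][ch] += 1
        self.ctx = (self.ctx + ch)[-2:]

def compress(text, prompt):
    model = TinyModel(prompt)
    ranks = bytearray()
    for ch in text:
        ranks.append(model.ranked().index(ch))  # well-predicted chars get small ranks
        model.push(ch)
    return zlib.compress(bytes(ranks))          # runs of small ranks pack tightly

def decompress(blob, prompt):
    model = TinyModel(prompt)
    out = []
    for r in zlib.decompress(blob):
        ch = model.ranked()[r]  # wrong prompt -> wrong stats -> wrong chars
        out.append(ch)
        model.push(ch)
    return "".join(out)

secret = "correct horse battery staple"
blob = compress("meet me at the usual place at noon\n", secret)
print(decompress(blob, secret))       # round-trips with the right prompt
print(decompress(blob, "wrong key"))  # drifts into garbage without it
```

The file is useless without the model-plus-prompt pair that produced it, which is the whole trick: the decompressor itself is the key. Real versions of this idea use a strong language model with an arithmetic coder instead of this rank-plus-zlib hack, but the principle is the same.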