AI is a fucking curse upon humanity. The tiny morsels of good it can do are FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.
Damn this AI, posting and doing all this mayhem all by itself on poor unsuspecting humans…
Yes. Fuck the owners and fuck their machine guns.
Today's “AI” is just machine learning code. It's been around for decades and does a lot of good. It's most often used for predictive analytics: facilitating patient flow in healthcare and making sense of large volumes of data quickly to assist providers, case managers, and social workers. It's also used in other industries that receive little attention.
Even some large language models can do good; it's the shitty people who use them for shitty purposes that ruin it.
Sure, I know what it is and what it's good for, I just don't think the juice is worth the squeeze. The companies developing AI HAVE to shove it everywhere to make it financially viable, and doing that is destructive to our entire civilization. The theft of folks' work, the scamming, the deep fakes, the social media propaganda bots, the climate-raping energy consumption, the loss of skill and knowledge, the enshittification of writing and the arts; the list goes on and on. It's a dead end that humanity will regret pursuing if we survive this century. The fact that we get a paltry handful of positives is cold comfort for our ruin.
This statement tells me you don't understand how many industries are using machine learning and how many lives it saves.
That's great. We can schedule it like heroin, for professional use only, then.
They are just harmless fireworks. They are even useful for warning ships at sea of dangerous tides.
I disagree. It may seem that way if that's all you look at, and/or if you buy the BS coming from the LLM hype machine, but IMO it's really no different from the leap to the internet or to search engines. Yes, we open ourselves up to a ton of misinformation, a shifting job market, etc., but we also get a suite of interesting tools that'll shake themselves out over the coming years and help improve productivity.
It's a big change, for sure, but it's one we'll navigate, probably in much the same way we've navigated other challenges, like scams involving spoofed webpages or fake calls. We'll figure out who to trust and how to verify that we're getting the right info from them.
LLMs are not like the birth of the internet. LLMs are more like what came after, when marketing took over the roadmap. We had AI before LLMs, and it delivered high-quality search results. Now we have search powered by LLMs, and the quality is dramatically lower.
Sure, and we had an internet before the World Wide Web (ARPANET). But that wasn't hugely influential until it expanded into what's now the Internet, which in turn evolved into the World Wide Web after 20-ish years. Each step was a pretty monumental change, and each built on concepts from before.
LLMs are no different. Yes they’re built on older tech, but that doesn’t change the fact that they’re a monumental shift from what we had before.
Let's look at access to information and misinformation. The progression went something like this:
1. Physical encyclopedias, newspapers, etc.
2. Digital, offline encyclopedias and physical newspapers
3. Online encyclopedias and news
4. SEO and the rise of blog/news spam - misinformation is intentional or negligent
5. Early AI tools - misinformation from hallucinations is largely accidental
6. Misinformation in AI tools becomes intentional
We’re in the transition from 5 to 6, which is similar to the transition from 3 to 4. I’m old enough to have seen each of these transitions.
The way people interact with the world is fundamentally different now than it was before LLMs came out, just like the transition from offline to online computing. And just as people navigated the transition to SEO nonsense, people need to navigate the transition to LLM nonsense. It's quite literally a paradigm shift.
Enshittification is a paradigm shift, but not one we associate with the birth of the internet.
On to your list. Why does misinformation appear after the birth of the internet? Was yellow journalism just a historical outlier?
What you're witnessing is the “Red Queen hypothesis”. LLMs have revolutionized the scam industry, and step 7 will be an AI arms race against and with misinformation.
Why does misinformation appear after the birth of the internet?
It certainly existed before. Physical encyclopedias and newspapers weren’t perfect, as they frequently followed the propaganda line.
My point is that a lot of people seem to assume that “the internet” is somewhat trustworthy, which is a bit bizarre. I guess the fallacy is that untrustworthy things won't get attention, when in reality things get attention because they're popular, by some definition of “popular” (i.e., what a lot of users want to see, what the platform wants users to see, etc.).
Red Queen hypothesis
Well yeah, every technological innovation will be used for good and ill. The Internet gave a lot of people a voice who didn’t have it before, and sometimes that was good (really helpful communities) and sometimes that was bad (scam sites, misinformation, etc).
My point is that AI is a massive step. It can massively increase certain types of productivity, and it can also massively increase the effectiveness of scams and misinformation. Whichever way you look at it, it’s immensely impactful.