- cross-posted to:
- [email protected]
I’ve been opting out of the internet at large. It started a few years back with disconnecting from social media, but as AI pollutes everything and enshittification ruins the rest, I’m just kinda done with it all.
I’ve been enjoying Lemmy, and I’m hoping Lemmy doesn’t turn into an AI circle jerk.
As a result, I’ve been building out my server with services I want to use and control, so I’m not trapped by enshittification or inundated with AI where I don’t want it.
With all that being said, I know AI is here to stay. My biggest problem with AI is these companies gleefully gobbling up our data, our art, our words, our creativity, using it to feed and train their models and to make billions, while we get nothing. It would be one thing if all these generative models were open source and freely available for everyone to use and benefit from, but that’s not the case. I know there are open source models, but the big ones are all paywalled and in many ways being weaponized against us.
At some point I hope generative AI becomes a boon to society. Right now, I’m too cynical to believe it will. I feel like it’s just going to make things worse for the majority of people.
Ollama is actually pretty decent at stuff now, and comparable in speed to ChatGPT on a sort-of-busy day. I’m enjoying having a constant rubber duck to bounce ideas off.
That’s cool. I haven’t looked at any local/foss llms or other generators, largely because I don’t have a use case for them.
If your concern is that we’re “not getting anything” in exchange for the training data AI trainers have gleaned from your postings, then those open-source AIs are what you should be taking a look at. IMO they’re well worth the trade.
Agree. When I feel like playing and/or have a use case for myself I’ll be looking at open source ai.
It’s worth playing around with. This is a good one, which packages all the basics including RAG.
I’ve been playing with a locally installed instance of big-AGI. I really like the UI, but it’s missing the RAG part. I’m also cobbling my own together, for fun and not profit, to try to stay relevant in these hard times. LangChain is some wild stuff.
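For anyone wondering what the RAG part actually does: the idea is just “retrieve the documents most relevant to the question, then stuff them into the prompt before sending it to the LLM.” Here’s a minimal, self-contained sketch in plain Python — the bag-of-words “embedding,” the scoring, and all the function names are my own illustration, not LangChain’s (or any library’s) actual API, which uses real vector embeddings and a proper vector store instead:

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Real RAG pipelines use dense vectors from an embedding model.
    return Counter(w.strip(".,?!").lower() for w in text.split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    # Return the k documents most similar to the query.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Prepend the retrieved context to the question sent to the LLM.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ollama runs LLMs locally on your own hardware.",
    "RAG retrieves relevant documents and adds them to the prompt.",
    "The GPL is a copyleft software license.",
]
print(build_prompt("How does RAG add documents to the prompt?", docs))
```

The resulting prompt string is what you’d hand to your local model; swapping the toy scoring for real embeddings and a vector database is the part frameworks like LangChain package up for you.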
Thank you. Gonna save the link for when I have a use case and/or want to play around.
I’m starting to think that we need to see AI research the same way we see biological weapons research: a visit from a SEAL team or a cruise missile for any identified laboratory. Smash the disks, burn all the printouts!
Okay, this is hyperbolic and unrealistic, but I agree with this lion-maned YouTuber - we are really not ready.
AI as a tech is game changing, but it practically demands at least UBI (and probably some form of socialism) as a prerequisite. We, meanwhile, are still electing conservative governments! The same arseholes that will label the legions of unemployed artists, actors, musicians, coders, admin assistants etc etc as lazy and cut their benefits.
Does anyone truly believe that a tech that can replace half of human jobs is going to create happy outcomes in today’s society? Or will it just make tech-bros and scammers richer, and virtually everyone else poorer?
I haven’t clicked on the video but since you’ve said “Lion-maned YouTuber” I’m going to guess it’s Kyle Hill.
Science Thor himself
AI or no AI, the solution needs to be social restructuring. People underestimate how much society can actively change, because the current system is a self-sustaining set of bubbles that have naturally grown resilient to perturbations.
The few people who actually care to solve the world’s problems are figuring out how our current systems inevitably fail, and how to avoid these outcomes.
However, the best bet for restructuring would be a distributed intelligent agent system. I could get into recent papers on confirmation bias, and the confabulatory nature of thought, on the personal level, group level, and society level.
Turns out we are too good at going with the flow, even when the structure we are standing on is built over highly entrenched vestigial confabulations that no longer help.
Words, concepts, and meanings change heavily depending on the model interpreting them. The more divergent, the more difficulty in bridging this communication gap.
A distributed intelligent system could not only enable a complete social restructuring with autonomy and altruism both guaranteed, but also provide an overarching connection between the different models at every scale, capable of properly interpreting the different views and conveying them more accurately than we could ever manage with model projection and the empathy barrier.
Any changes bound to happen will happen eventually, sooner or later, wanted or not.
We have a word for it in the dictionary: “evolution”.
Eh, I’d say a better word is “progress.” Or maybe “technological progress.” Evolution is the change in gene frequency in a population over time.
Agree, otherwise. It’s like trying to tell people not to make things go boom using saltpeter, charcoal, and sulfur. Luddites will always want to burn down the textile mill.
I 100% agree the genie is out of the bottle. People who want to walk back this change are not dealing with reality. AI and robotics are so valuable I very much doubt there’s even any point in talking about slowing it down. All that’s left now is to figure out how to use the good and deal with the bad - likely on a timeline of months to maybe one or two years.
I’m personally waiting for legal cases to do with the use of AI trained on code, and whether the licenses apply to it.
If they don’t, our GPL becomes almost useless because it can be laundered, but at the same time we can begin using AI trained on code whose license terms we don’t necessarily abide by (maybe even decomps, I don’t know how it’ll go). Fight fire with fire and all. So I’d maybe look into that.
If they do, then I’ll probably still use it, but mainly with permissively licensed code and code also under the GPL (as I use the GPL).
And in both cases, they’d be local models, not “cLoUd” models run by the likes of M$
Until then, I’m not touching it.
That timeline of dealing with the bad looks incredibly optimistic. I imagine new issues will likely be regularly cropping up as well which we’ll also have to address.
I agree. I’m talking about how quickly we’re going to have strategies in place to deal with it, not how quickly we’ll have it all figured out. My guess is we have at best a year before it’s a huge issue, and I agree with your take that figuring out human vs. AI content etc. is going to be an ongoing thing. Perhaps until AI gets so good it ceases to matter as much, because it will be functionally the same.
And while it’s probably true that “we’re not ready”, we’re never going to become ready until the tech actually arrives and forces us to do that.
This is Kyle Hill’s video on the predicted impact of AI-generated content on the internet, especially as it becomes more difficult to tell machine from human over text and video. He relays that experts say within a year huge portions of online content will be AI-generated. What do you guys think? Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc?
I didn’t get past the part where he started talking about the dark forest theory as if it “solved” the Fermi paradox. The Fermi paradox is an observation; the dark forest “theory” isn’t actually a theory — it’s considered a hypothesis. I was willing to sit down for the 15-minute video. Why blow your credibility in the first sentences?
Unfortunately the Dark Forest thing is super popular right now, so it gets the clicks.
Which is rather annoying, IMO, because as Fermi Paradox solutions go it’s riddled with holes and implausibilities. But it’s scary, and so people latch on to it easily.
I generate AI content (some of which is art) for fun, so I am not against it in theory. I just don’t so far find much enjoyment consuming AI content made by others. So far the vast majority of it is mediocre, which seems like a natural consequence of lowering the barriers to entry.
The Sora demo, for example, is very compelling technologically, but it didn’t impress me at all as something that would replace creative work, so much as provide a tool to get it done differently.
As AI content becomes more prevalent, I will continue to further disengage with that content and prefer authentic human experiences, to the extent that AI content continues to feel mostly soulless and vacuous.
Do you care that you may soon be having discussions/arguments with chatbots more often than not on popular platforms like Reddit, X, YouTube, etc?
I wouldn’t mind it as much if these chatbots weren’t being used for nefarious purposes, like mass data collection, tracking, influencing, and privacy violations. Other than that, if it walks like a human, talks like a human, and we are convinced it’s a human, is there anything wrong with that? It might as well be human. This is going to become a bigger and bigger question as we get closer to AGI. An AGI isn’t going to suddenly “wake up” and become self-aware one day; all these systems are slowly inching towards it. There’s not going to be a clean line between “just a program mimicking a human” and “a fully self-aware entity”. It’s up to us to determine that, and there are no hard rules for it, because it falls under the philosophical “problem of other minds”.