
The trick is to talk to real actual human beings and not people terminally online enough to know about Lemmy.
Find a lefty book club and you’ll find reasonable people.
Just a guy shilling for gun ownership, tech privacy, and trans rights.
I’m open to chats on Mastodon: https://hachyderm.io/
My blog: thinkstoomuch.net
My email: [email protected]
Always looking for penpals!
My favorite lefty take to hit a capitalist/libertarian shill with is that I don’t really think a communist/socialist project like the Soviet Union is the future. And honestly, you’d be hard-pressed to find someone who does want that.
It’s becoming a pretty common take these days that capitalism is fine IF human and environmental needs are met first.
Ollama and all that runs on it; it’s just the firewall rules and opening it up to my network that are the issue.
I cannot get ufw, iptables, or anything like that working on it, so I usually just SSH into the PC and do a CLI-only interaction, which is mostly fine.
I want to use OpenWebUI so I can feed it notes and books as context, but that needs the API, which isn’t open on my network.
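For anyone in the same boat, a quick sanity check is to poke the API from another machine on the LAN and see whether the problem is the bind address or the firewall. Just a sketch, not my exact setup: the hostname is a placeholder, and it assumes Ollama’s default port 11434 (by default the service only listens on localhost, so it usually needs something like OLLAMA_HOST=0.0.0.0 plus a firewall rule for that port before this succeeds).

```python
# Sketch only: check from another machine on the LAN whether the Ollama API
# is reachable. "homelab-box" is a placeholder hostname.
import requests

OLLAMA_URL = "http://homelab-box:11434"

try:
    r = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    r.raise_for_status()
    models = [m["name"] for m in r.json().get("models", [])]
    print("API reachable, models available:", models)
except requests.RequestException as exc:
    print("API not reachable from here:", exc)
```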
Put some damn fan service into Law and Order: SVU and I might start watching!
Am I right, fellas!?
I was thinking about that now that I have Mac Minis on the mind. I might even just set a Mac Mini on top, next to the modem.
Ollama + Gemma/DeepSeek is a great start. I have only run AI on my AMD 6600 XT, and that wasn’t great; everything I know says AMD is fine for gaming AI tasks these days but not really for LLM or gen AI tasks.
An RTX 3060 12GB is the easiest and best self-hosted option in my opinion: new for <$300, and used for even less. However, I was running with a GeForce 1660 Ti for a while, and that’s <$100.
A Mac is a very funny and objectively correct option.
I think I’m going to have a harder time fitting a threadripper in my 10 inch rack than I am getting any GPU in there.
I do already have a NAS. It’s in another box in my office.
I was considering replacing the Pis with a JBOD and passing that through to one of my boxes via USB and virtualizing something. I compromised by putting 2TB SATA SSDs in each box to use for database stuff and then backing that up to the spinning rust in the other room.
How do I do that? Good question. I take suggestions.
With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It’s much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window, provided by more VRAM or a web-based AI, is cool and useful, but I haven’t found the need for that yet in my use case.
As you may have guessed, I can’t fit a 3060 in this rack; that’s in a different server that houses my NAS. I have done AI on my 2018 EPYC server CPU and it’s just not usable. Even with 109GB of RAM, not usable. Even clustered, I wouldn’t try running anything on these machines. They are for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run an AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards, and that was ass compared to my 3060.
But for my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or to do some sort of writing task. I either get it to take my notes on a topic and make an outline that makes sense, which I then fill in, or I feed it finished writings and ask for grammatical or tone fixes. That’s fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB of VRAM is plenty if I wanna feed it like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven’t noticed a difference in quality between my local LLM and the web-based stuff.
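To make the “feed it a finished draft, ask for a copy-edit pass” part concrete, here’s a rough sketch against a local Ollama instance’s HTTP API. The host, model tag, and file path are placeholders, not my exact setup:

```python
# Sketch: send a finished draft to a local model and ask for a copy-edit pass.
# Host, model tag, and file path are placeholders.
from pathlib import Path
import requests

draft = Path("draft.md").read_text()

prompt = (
    "Fix grammar and awkward phrasing in the text below. "
    "Keep the author's tone and don't add new content.\n\n" + draft
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "gemma2", "prompt": prompt, "stream": False},
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["response"])
```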
That’s fair and justified. I have the label maker right now in my hands. I can fix this at any moment and yet I choose not to.
I’m the man feeding orphans to the orphan-crushing machine. I can stop this at any moment.
Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.
I’m a huge fan of this all-in-one idea that is upgradable.
These are M715q ThinkCentres with a Ryzen 5 PRO 2400GE.
Not a lot of thought really went into the rack choice. I wanted something smaller and more powerful than the several OptiPlexes I had.
I also decided I didn’t want storage to happen here anymore, because I am stupid and only knew how to pass through disks for TrueNAS, so I had 4 TrueNAS servers on my network and I hated it.
This was just what I wanted at a price I was good with, like $120. There’s a 3D-printable version, but I wasn’t interested in that. I do want to 3D print racks, and I want to make my own custom ones for the Pis to save space.
But that setup is way cheaper if you have a printer and some patience.
Not much. As much as I like LLMs, I don’t trust them for more than rubber duck duty.
Eventually I want to have a “Copilot at Home” setup where I can feed it a notes database and whatever manuals and books I’ve read, so it can draw from those when I ask it questions.
The problem is that my best GPU is my gaming GPU, a 5060 Ti, and it’s in a Bazzite gaming PC, so it’s hard to get AI out of it because of Bazzite’s “no, I won’t let you break your computer” philosophy, which is why I went with it in the first place. My second-best GPU is a 3060 12GB, which is really good, but if I made a dedicated AI server, I’d want it to be better than my current server.
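The rough shape of that “Copilot at Home” idea is just retrieval plus a local model: embed the notes once, find the chunks closest to a question, and stuff them into the prompt. A very hand-wavy sketch against Ollama’s HTTP API; the paths, model tags, and chunking are all placeholder choices, not a finished design:

```python
# Hand-wavy "Copilot at Home" sketch: naive retrieval over a folder of notes,
# then a grounded answer from a local model via Ollama's HTTP API. Paths,
# model tags, and the chunking are placeholder choices, not a finished design.
from pathlib import Path
import requests

OLLAMA = "http://localhost:11434"
EMBED_MODEL = "nomic-embed-text"  # example embedding model from the Ollama library
CHAT_MODEL = "gemma2"             # example generation model

def embed(text: str) -> list[float]:
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": EMBED_MODEL, "prompt": text}, timeout=60)
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

# Embed every paragraph of every note once (a real setup would cache these).
chunks: list[tuple[str, list[float]]] = []
for path in Path("notes").glob("**/*.md"):
    for para in path.read_text().split("\n\n"):
        if para.strip():
            chunks.append((para, embed(para)))

question = "What did my notes say about backing up the database SSDs?"
q_vec = embed(question)
top = sorted(chunks, key=lambda c: cosine(q_vec, c[1]), reverse=True)[:3]

context = "\n\n".join(para for para, _ in top)
prompt = f"Answer using only these notes:\n\n{context}\n\nQuestion: {question}"
resp = requests.post(f"{OLLAMA}/api/generate",
                     json={"model": CHAT_MODEL, "prompt": prompt, "stream": False},
                     timeout=300)
resp.raise_for_status()
print(resp.json()["response"])
```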
Rough. It was a great handle. Fortunately he still pops up when you search sexuallobster
Gone but not forgotten!
I just built a mini rack with 3 ThinkCentre Tiny PCs I bought for $175 (USD) on eBay. All work great.
SINCE WHEN DID SEXUAL LOBSTER CHANGE HIS NAME TO GREASY TALES
Alternative
*looks inside*
The absolute least educated read you’ve ever seen
I’ve only seen the episode with Toby Turner in it and it has made me a worse person.