• ɐɥO · 113 points · 5 months ago

    What the fuck is an “AI PC”?

    • Sabata11792 · 65 points · 5 months ago

      They get to put on a sticker that inflates the price by $600, then fill it with spyware.

    • Kairos · 55 points · 5 months ago

      It means “VC money now 🥺🥺”

        • @[email protected] · 16 points · 5 months ago

          Not VC, more like hedge funds and institutional investors. But yes, all public companies work for higher share prices first and everything else second. I’ve seen a public US company spend more than a million USD to save 300k, just so it could put out nice articles about how it kept its promises to shareholders.

    • @[email protected] · 15 points · 5 months ago

      If I have to deal with Blockchain cloud computing IoT bullshit as a software engineer, I want everyone else to feel my buzzword pain in the tech they use.

      • Deceptichum · 11 points · 5 months ago (edited)

        Nope.

        They’re just regular PCs with an NPU. These are consumer products they’re trying to push, like how they added the Copilot key to keyboards.

  • @[email protected] · 80 points · 5 months ago

    Thus, Windows will again be instrumental in driving growth for the minimum memory capacity acceptable in new PCs.

    I love that the primary driver towards more powerful hardware is Windows just bloating itself bigger and bigger. It’s a grift in its own way: consumers are subsidizing the requirements for Microsoft’s idiotic data processing. And MSFT is not alone in this; Google doing away with cookies also conveniently shifts most ad processing from its servers into Chrome (while killing its competition).

    • @[email protected] · 9 points · 5 months ago

      Google doing away with cookies also conveniently shifts away most ad processing from their servers into Chrome (while killing their competition).

      OOTL, what’s going on here? Distributed processing like Folding@Home, but for serving ads to make Google more money?

      • @[email protected] · 16 points · 5 months ago

        They called it Federated Learning of Cohorts at one point. Instead of you sending raw activity data to Google’s servers and them running their models there, the model runs in Chrome and only sends back the ad-targeting groups you belong to. All in the name of privacy, of course.
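
        A toy sketch of the idea (not Google’s actual implementation, though the FLoC trial was built around a similar SimHash scheme): the browser hashes your history locally, and only the short cohort ID ever leaves the machine.

        import hashlib

        def simhash(features, bits=16):
            # Locality-sensitive hash: similar histories land in similar cohorts.
            counts = [0] * bits
            for f in features:
                h = int.from_bytes(hashlib.sha256(f.encode()).digest()[:8], "big")
                for i in range(bits):
                    counts[i] += 1 if (h >> i) & 1 else -1
            return sum(1 << i for i, c in enumerate(counts) if c > 0)

        history = ["news.example", "cooking.example", "bikes.example"]
        cohort_id = simhash(history)  # computed locally, in the browser
        print("sent with ad requests:", cohort_id)  # the raw history never leaves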

  • @[email protected] · 44 points · 5 months ago (edited)

    Microsoft is desperate to regain the power it had in the 2000s and is scrambling to find that killer app. At least this time they’re not just copying Apple’s homework.

    • @[email protected] · 7 points · 5 months ago (edited)

      They either force it on everyone or bundle it in the enterprise package that businesses already pay for and then raise the price.

      It never works, but maybe this time it will. I mean it won’t… But maybe.

      • @[email protected] · 8 points · 5 months ago (edited)

        And maybe that’s why it isn’t working. They try too hard to persuade or force you, giving people icky feelings from the get-go… and they try too little to just make a product that people actually want.

  • @[email protected] · 43 points · 5 months ago

    At least it should result in fewer laptops being made with ridiculously small amounts of non-upgradable RAM.

    Requiring a large amount of compute power for AI is just stupid though. It will probably come in the form of some sort of dedicated AI accelerator that’s not usable for general-purpose computing.

    • Lee Duna (OP) · 22 points · 5 months ago

      And remember that your data and telemetry are sent to Microsoft’s servers to train Copilot AI. You may also need to subscribe to get some advanced AI features.

      • DontMakeMoreBabies · 8 points · 5 months ago

        And that’s when I’ll start using Linux as my daily driver.

        Honestly, installing Ubuntu is almost idiot-proof at this point.

        • Lee Duna (OP) · 5 points · 5 months ago (edited)

          I do agree with you; the obstacle is that many applications are not available on Linux, or they’re not as powerful as on Windows. For me it’s MS Excel: many of my office clients use VBA in their Excel spreadsheets to do calculations.

          • @[email protected] · 4 points · 5 months ago (edited)

            At least we might finally have a viable replacement for Photoshop soon. GIMP is getting non-destructive editing (NDE), Krita might get a foreground extraction tool at some point, and Pixellator might get better tools, though its NDE department is solid. The thing is, all of them are missing something, but I’m betting on GIMP after CMYK_Student’s arrival in GIMP development.

            I tried adding foreground selection based on guided selection, but I was unable to fix the noise in the in-between selections, and I was unable to build Krita. We would have Krita with foreground selection if it weren’t for that.

          • @msage · 2 points · 5 months ago

            I know that you are speaking truth, yet it still hurts

    • @[email protected] · 33 points · 5 months ago

      Yeah, and solder it onto the board while you’re at it! Who ever needs to upgrade or perform maintenance anyway?

      • @[email protected] · -22 points · 5 months ago

        They do make the most of it though. Soldered RAM can be much faster than socketed RAM, which is why GPUs do it too.

        • @[email protected] · 19 points · 5 months ago

          My knowledge of electrical engineering has not shown that solder increases performance. Do you have some more information on this?

          • @[email protected] · 4 points · 5 months ago (edited)

            Solder doesn’t increase performance (the memory is soldered to something regardless, either the main board or an expansion board), but shorter physical distances mean lower latency and less power to transmit the same data. LPDDR4/5X are designed to take advantage of this additional efficiency.
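
            Rough numbers to put that in perspective (assuming ~65 ps/cm signal propagation on FR4): a ~10 cm trace out to a SO-DIMM slot costs about 0.65 ns of flight time each way, versus about 0.13 ns for ~2 cm to a soldered LPDDR package. The flight time itself is small next to DRAM access latency; the bigger win is that the short, connector-free path has much cleaner signal integrity, which is what lets LPDDR clock higher at lower drive voltages.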

          • @[email protected] · 2 points · 5 months ago

            It seems rational that the less metal mass in a connection, the faster that connection can charge and discharge. Physical sockets require a lot more mass just to ensure solid contact.

          • @[email protected] · 2 points · 5 months ago

            Well, that too, but that’s not particularly common on laptops or GPUs. Even in Apple silicon it’s not the same die, but it is the same package.

          • @[email protected] · 1 point · 5 months ago

            Shorter physical distance means less latency and lower power. Some memory types like LPDDR4X are built with assumptions that only apply to soldered RAM.

  • DumbAceDragon · 31 points · 5 months ago

    “Wanna see me fill entire landfills with e-waste due to bullshit minimum requirements?”

    “Wanna see me do it again?”

    • archomrade [he/him] · 7 points · 5 months ago

      All I can think of:

      Hi kids, do you like violence? Wanna see me stick nine-inch nails through each one of my eyelids? Wanna copy me and do exactly like I did? Try 'cid and get fucked up worse than my life is?

  • ANON · 22 points · 5 months ago (edited)

    Everyone here is all but praising Microsoft, when in fact you can just buy any PC you like with 16 gigs of RAM, without the additional AI spyware and (I assume) cost.

    • @[email protected] · 6 points · 5 months ago

      My work laptop came with only 8 GB and can’t be upgraded. The next model up was twice the price.

      • ANON · 1 point · 5 months ago (edited)

        And you think Microsoft will give you a 16-gig laptop at an 8-gig price? Also, why is it twice the price if the only improvement is the RAM? I doubt it is.

    • @[email protected] · 1 point · 5 months ago

      I’m not seeing anyone here praising Microsoft; actually the opposite. Who’s praising Microsoft?

    • Shurimal · 15 points · 5 months ago

      Unless it’s locally hosted, doesn’t scan every single file on my storage, and doesn’t send everything I do with it to the manufacturer’s servers.

    • @[email protected] · 6 points · 5 months ago

      Personally I really want it, but only locally run AI like Llama or whatever it’s called.

      • @[email protected] · 4 points · 5 months ago

        Do it, it’s easy and fun and you’ll learn about the actual capabilities of the tech. I started a week ago and I’m a convert on the utility of local AI. I had to go back to Reddit for it, but r/localllama has tons of good info. You can actually run useful models at a conversational pace.

        This whole thread is silly because VRAM is what you need. I’m running some pretty good coding and general-knowledge models on a 12GB Radeon, and almost none of my 32GB of system RAM is used, lol. Either Microsoft is out of touch or it’s hiding an amazing new algorithm.

        Running in system RAM works, but processing on the regular CPU is painfully slow, over 10x slower.

        • @[email protected] · 1 point · 5 months ago

          Just downloaded GPT4All and LM Studio or whatever. I’m learning slowly, but there’s a lot of jargon. I only have a 4GB RX 5500 and I’m not sure how to get it to run on my GPU. I think I really just need to upgrade my PC though. I have 16GB of RAM but an i5-6500. Shit be slow.

          • @[email protected] · 2 points · 5 months ago

            Start off with the TinyLlama model; it’s under 1GB. It will even run on a Raspberry Pi, so on real PCs it rips, even on the CPU. You need a “quantized” model; they are distributed as GGUF files.

            I would recommend 5-bit quantization. The fewer the bits, the stupider the model, to put it simply, and TinyLlama is already pretty stupid. But it’s still impressive for what it is, and you can learn the jargon, which is the hard part.

            The fastest software to run the model is llama.cpp, a rewrite from Python to C++. Use -ngl <number> to offload layers from the CPU to the GPU.

            Not sure what system you’re using; most AI development is done on Linux, so if you’re on Windows I can’t guarantee anything will work.

            Right now I’m working on a voice assistant for my house that can read all my MQTT data and give status reports. It’s neat when you get it running, and fun to tweak with prompts to see what it can do. TinyLlama can’t reliably handle MQTT and JSON, but slightly smarter models can with ease.
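
            If you’d rather drive it from a script than the CLI, here’s a minimal sketch using the llama-cpp-python bindings (assuming you’ve installed them with pip install llama-cpp-python and downloaded a quantized TinyLlama GGUF):

            from llama_cpp import Llama

            llm = Llama(
                model_path="models/tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf",  # your GGUF here
                n_ctx=2048,       # context window size in tokens
                n_gpu_layers=24,  # layers offloaded to the GPU; 0 = pure CPU
            )

            out = llm("Q: What does MQTT stand for?\nA:", max_tokens=64, stop=["\n"])
            print(out["choices"][0]["text"])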

          • @[email protected] · 1 point · 5 months ago (edited)

            OK, I walked over to my PC to give you a working command line for llama.cpp. If you want it to run on your GPU, you need to make sure it is compiled with support for hipBLAS/ROCm, which is AMD’s equivalent of CUDA.

            ./main -ngl 24 -m models/tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
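
            # Flag meanings, in case you want to tweak (per llama.cpp’s main example):
            # -ngl 24               offload 24 layers to the GPU (0 = pure CPU)
            # -m <path>             the quantized GGUF model to load
            # -c 2048               context window size in tokens
            # --temp 0.7            sampling temperature (higher = more random)
            # --repeat_penalty 1.1  discourage repeating itself
            # -n -1                 no cap on tokens generated
            # -i -ins               interactive, instruction-following chat mode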

            This will put it into interactive mode so you can try to chat with it. Running on my GPU it cranks out almost 160 tokens per second, which is way faster than anyone can type. On CPU (-ngl 0) it manages 90, which is still fast. TinyLlama is not a great chatter and should be treated more as a prediction or answer engine, e.g.:

            >Write a paragraph about borscht.
            Borscht is a traditional Russian soup made with beetroot, potatoes, and a variety of spices. It is often served during the winter months in Russia, Ukraine, Belarus, and other Soviet-style countries. Borscht is similar to borscht in Poland, but has different ingredients and a slightly different preparation method. In Poland, beets are boiled until they become tender, then blended with potatoes and vegetable broth to create the soup. In Russia, beetroot is removed from the pot before cooking and replaced with other vegetables such as carrots, celery, and onions. The resulting mixture is then simmered until it is thickened, creating a hearty and flavorful soup. Borscht is usually served cold or at room temperature, and can be accompanied by sour cream, slices of crusty bread, or grilled meats such as kebabs.

            It does know a surprising amount, considering it would fit on a CD-ROM.

  • @[email protected] · 18 points · 5 months ago (edited)

    They have been making a massive, slow effort for a long time now to finally get end users to migrate to Linux (and I’m a lifelong Windows guy).

  • HidingCat · 14 points · 5 months ago

    Great, so it’ll take AI to make 16GB the minimum.

    I still shudder that machines are still being sold with 8GB of RAM; that’s just barely enough.

    • @[email protected] · 3 points · 5 months ago (edited)

      It’s honestly crazy to think that we used to say the same about 4GB only 5-7 years ago…

      And the same about 2GB a measly 10 years ago…

      5 years ago I thought 32GB was great. Now I regularly cap out and start hitting the page file doing my normal day-to-day work on 48GB. It’s crazy.

  • @[email protected] · 13 points · 5 months ago

    AI PC sounds like something that will be artificially personal more than anything else.

  • @[email protected] · 11 points · 5 months ago

    Makes sense; 16GB is sort of the new “normal”, although 8GB is still quite enough for everyday casual use. “AI PC” is a marketing term, just like “AI” itself.

  • @[email protected] · 9 points · 5 months ago

    Opening Excel and Outlook on a Win11 PC brings you to almost 16GB of memory used. I don’t know how anybody is still selling computers with 8GB of RAM.

    • @[email protected] · 9 points · 5 months ago

      That doesn’t work even as hyperbole. I literally just opened an Excel spreadsheet with 51192 rows (I had Outlook already open) and those two programs still only take 417 MB of RAM combined. Meanwhile Firefox is at 2.5 GB.

      Yes, my total RAM currently in use is 13.8 GB, but I have 64 GB installed, and you should know that generally the more RAM you have, the more of it gets utilized by the system (this is true for all modern OSes, not just Windows). That’s a good thing, because it means better performance: you can cache more things in RAM that would otherwise need to be read from disk. Unused RAM is wasted RAM.

      So even if one computer uses 16 GB of RAM for some relatively simple tasks, it doesn’t necessarily mean the same tasks wouldn’t run, or would grind to a halt, on a system with less RAM.
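
      If you want to see that split on your own machine, here’s a minimal sketch (assuming Python with the psutil package installed): “used” includes a lot of reclaimable cache, so a high number doesn’t mean a smaller machine would choke.

      import psutil

      vm = psutil.virtual_memory()
      print(f"total:     {vm.total / 2**30:.1f} GiB")
      print(f"used:      {vm.used / 2**30:.1f} GiB")
      print(f"available: {vm.available / 2**30:.1f} GiB")  # what apps could still claim
      if hasattr(vm, "cached"):  # Linux exposes the page cache explicitly
          print(f"cached:    {vm.cached / 2**30:.1f} GiB")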

      • @[email protected] · 0 points · 5 months ago

        Leave them open for a week or so while using them consistently.

        The memory usage will change drastically.

    • @[email protected] · 6 points · 5 months ago

      Uh… No, it doesn’t. 8GB is definitely tight these days, but for simple word processing, email, and spreadsheet usage it still works fine.

    • Liz · 5 points · 5 months ago

      Why in the hell do those programs take up so much space?

      • Kogasa · 3 points · 5 months ago

        Usually, caching. They can and do use less RAM if you have less free, at the cost of slower performance.