The rushed launch of Apple Intelligence was a debacle, reminding Apple it should focus on readiness rather than quickly appeasing shareholders.

  • Optional@lemmy.world
    ↑84 ↓12 · 3 days ago

    Well . . . yeah. All generative AI is awful. It’s a scam wrapped in hype surrounded by an insult.

    • macstainless@discuss.tchncs.deOP
      ↑22 ↓2 · 3 days ago

      💯 The best way it was described to me was “it’s a parlor trick.” And I’ve started to use that phrasing ever since.

      • TheFunkyMonk@lemmy.world
        ↑14 · 3 days ago

        I do too, as someone who understands what it actually is, what it’s useful for, and what its limitations are. The issue is every company shoving it down users’ throats as future AGI/something LLMs will never achieve.

        • doodledup@lemmy.world
          ↑3 · 1 day ago

          I feel like most of the idiots who say AI is trash use it as a Google alternative, which is not what it’s for.

    • markovs_gun@lemmy.world
      ↑3 ↓3 · 2 days ago

      I wouldn’t say that. It’s a tool like anything else. You don’t say a hammer is useless because it’s really bad at driving screws no matter how much your terrible coworker keeps insisting that she just hits the screws in with the hammer and it’s fine. I learned programming very quickly with ChatGPT and I use LLMs all the time for help with programming. They’re also good for proofreading, learning new languages, and a few other things. The hype is exaggerated but these things are quite useful when used correctly.

      • Optional@lemmy.world
        ↑3 ↓1 · 2 days ago

        Username checks out! :D Yeah, it has some narrow use cases which aren’t the worst thing ever. If it wasn’t destroying the entire tech industry and to some extent the global economy with utter lies and deceit it might be kind of okay sometimes.

    • macstainless@discuss.tchncs.deOP
      ↑25 · 3 days ago

      Right? Instead of “I can show you some web results on your phone” every time it’s now “Would you like to ask ChatGPT?” every time. Barf.

    • Rentlar@lemmy.ca
      ↑7 · 3 days ago

      Next version of Apple Intelligence: Rollback to Siri, but with “I Am Genius.” added to the end of every answer.

  • scarabic@lemmy.world
    ↑5 · 2 days ago

    quickly

    That’s the problem. It wasn’t quick. If it had been released quickly and been a failure, that would be one thing. But to hype it and hype it and pre-sell it into new devices for 9 months only THEN to release a failure… now that’s fucked up. Apple hardware has been crushing it for years. Software is a mess. Services couldn’t piss themselves if their pants were on fire.

  • B0rax@feddit.org
    ↑3 ↓5 · 3 days ago

    Is this just an opinion post or am I missing something?

    Apple Intelligence does have some useful things, like showing a summary of each mail or message.

    But I will not argue here. Nothing is as black and white as you make it sound.

  • Convict45@lemmy.world
    ↑4 ↓15 · 3 days ago

    Guess I was hallucinating when I asked it to turn a rambling, amateurishly written game account into a saga in the style of heroic poetry…

    and it got it. About 85%, with only a few clunkers that needed edits.

  • PunkRockSportsFan@fanaticus.social
    ↑10 ↓70 · 3 days ago

    Apple is terrible. The AI is doing what it’s supposed to: spying on its users for its real masters.

    Ditch your Apple products before they get you sent to El Salvador

    • Lung@lemmy.world
      ↑60 ↓2 · 3 days ago

      Well, for once I have to stand up for Apple. What makes them different in the AI space is that the inference actually happens on-device and is very privacy-focused. Probably why it sucks.

      • Cyberflunk@lemmy.world
        ↑13 · 3 days ago

        Nailed it. I’ve tried taking notification contexts and testing how hard the task actually is. Their foundation model is, I think, a 4-bit quantized, 3-billion-parameter model.

        So I loaded up Llama, Phi, and picollm to run some unscientific tests. Honestly they had way better results than I expected. Phi and Llama both handled notification summaries (I modeled the context window myself, nothing official) and performed great. I have no idea wtf AFM is doing, but it’s awful.
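
        For scale, a quick sketch of the weight-memory arithmetic (purely illustrative; the 3B / 4-bit figures are the commenter’s guess, not an official spec):

```python
def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate storage for model weights in GB (1 GB = 1e9 bytes).

    Weights only: real runtimes also need KV cache and activation memory.
    """
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A ~3B-parameter model quantized to 4 bits fits in phone-class memory:
print(weight_memory_gb(3, 4))    # 1.5 GB

# An 8B open model at full 16-bit precision, for comparison:
print(weight_memory_gb(8, 16))   # 16.0 GB
```

        Which is roughly why on-device models are so small, and why small models struggle.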

      • PunkRockSportsFan@fanaticus.social
        ↑4 ↓23 · 3 days ago

        It sucks for a lot of reasons, but mostly because AI is always a “black box” (DeepSeek being the exception) with “magic proprietary code.” You think “Tim Apple” isn’t working with the Trump admin to ID people for El Salvador?

        • T156@lemmy.world
          ↑3 · edited · 3 days ago

          Yes, but in this case, you can see what the model is doing, and it is running on your actual computer. Whereas a lot of LLM providers tend to run their models on their own server farms today, partly because it’s prohibitively expensive to run a big model on your machine (Deepseek’s famous R1 model needs at least a hundred GBs of VRAM, or about 20 GPUs) and partly so that they have more control over the thing.
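
          Back-of-the-envelope on that VRAM figure (illustrative GPU sizes, not anything official):

```python
import math

def gpus_needed(params_billions: float, bits_per_weight: float,
                vram_gb_per_gpu: float) -> int:
    """Minimum GPU count to hold the weights alone (ignores KV cache and activations)."""
    weight_gb = params_billions * bits_per_weight / 8  # the 1e9 params and 1e9 bytes/GB cancel
    return math.ceil(weight_gb / vram_gb_per_gpu)

# DeepSeek-R1 has ~671B total parameters. Even quantized to 8 bits,
# the weights alone are ~671 GB:
print(gpus_needed(671, 8, 24))   # 24 GB consumer cards -> 28
print(gpus_needed(671, 8, 80))   # 80 GB datacenter cards -> 9
```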

          AI isn’t a black box in the sense that it is a mystery machine that could do anything. It’s a black box in the sense that we don’t know exactly which particular weight or tensor is responsible for what, though we have a fairly good general idea of what goes on.

          It’s like a brain in that sense. We don’t know which exact nerve circuits do what, but we have a fairly good general idea of how brains work. We don’t assume that when we talk to someone, they’re transmitting everything we say to a hivemind, because brains can’t do that.