Artificial-intelligence aide handles email, meetings and other tasks, but its price and limitations have some skeptical

Microsoft’s new artificial-intelligence assistant for its bestselling software has been in the hands of testers for more than six months, and their reviews are in: useful, but it often doesn’t live up to its price.

The company is hoping for one of its biggest hits in decades with Copilot for Microsoft 365, an AI upgrade that plugs into Word, Outlook and Teams. It uses the same technology as OpenAI’s ChatGPT and can summarize emails, generate text and create documents based on natural language prompts.

Companies involved in testing say their employees have been clamoring to test the tool, at least initially. So far, its shortcomings in programs such as Excel and PowerPoint, and its tendency to make mistakes, have given some testers pause about whether it is worth the price of $30 a head per month.

  • JeeBaiChow@lemmy.world · 10 months ago

    Isn’t this pretty much the state of current-gen AI, the hype overtaking the reality and all that?

    • Lmaydev · 10 months ago

      I use it all the time at work as a programmer. Not that often for generating code, but for learning new languages and frameworks quickly.

      I noticed our juniors are able to get up to speed incredibly fast by leaning on it when picking up new things as well.

      We also experimented with it for sentiment analysis of customer feedback and the results were very impressive.
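
      For flavor, the core of that experiment was something like the sketch below. It’s illustrative only: the model name, prompt, and helper function are stand-ins, not our exact setup.

      ```python
      # Minimal sketch: sentiment analysis of customer feedback via an LLM.
      # Model name and prompt are illustrative assumptions, not a real setup.
      from openai import OpenAI

      client = OpenAI()  # expects OPENAI_API_KEY in the environment

      def classify_sentiment(feedback: str) -> str:
          """Label one piece of customer feedback as positive, negative, or mixed."""
          response = client.chat.completions.create(
              model="gpt-4",
              temperature=0,  # keep labels as deterministic as possible
              messages=[
                  {"role": "system",
                   "content": "Classify the sentiment of the customer feedback "
                              "as positive, negative, or mixed. Reply with one word."},
                  {"role": "user", "content": feedback},
              ],
          )
          return response.choices[0].message.content.strip().lower()

      print(classify_sentiment("Love the new dashboard, but exports keep failing."))
      ```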

      It is genuinely a game changer when used correctly. The issue I see is people trying to push it everywhere.

      • JeeBaiChow@lemmy.world · 10 months ago

        As a reference, I’d use a search engine first, but it’s a matter of personal preference. Usually I’m only short on syntax and a particular language’s native functions. The only benefit I could foresee is avoiding the rude, condescending, snarky comments from the experienced developers on Stack Exchange and the like, but I almost never register to post, so I avoid all that anyway. I did see a benefit in the area of (real) language learning: I can ask it to translate something, then break down specific parts of the response for clarification, switching between my native language and the one I’m trying to learn. That was mind-blowing.

        • Lmaydev · 10 months ago

          I use it instead of a search engine now.

          Rather than skimming a few blog/SO posts looking for the particular info I want, it pulls exactly what I need, summarizes it, provides sources, and allows follow-up questions.

      • TimeSquirrel@kbin.social · 10 months ago

        That’s exactly it. I know HOW to program generically. I know what control flow is, how memory works, and what a pointer and an object are. I just need some coaching on syntax, because it’s all just too much to memorize in one lifetime. But once I see it written and used in front of me, I can easily determine whether it’s any good.

        It’s amusing when it just makes up methods on my objects that don’t exist. I can spot crap like that immediately. On one of those occasions I actually wrote the method into the class so the code would compile, because I thought it was a useful thing.

    • Kissaki@feddit.de · 10 months ago

      Yes, this is pretty much what I expected. What surprises me is the 30 USD per user per month price point. That’s very expensive. (I can make guesses as to why, but it ultimately doesn’t matter.)

    • kromem@lemmy.world · 10 months ago

      Like many tools, there’s a gulf between a skilled user and an unskilled user.

      What ML researchers are doing with these models is straight-up insane. Years ago, I didn’t think I’d see these kinds of things in my lifetime, or maybe only from an old-age home (still a ways off).

      If you gave someone who had never used an NLE (non-linear editing) application access to Avid to put together some family videos, they might not be that impressed with the software and would instead be frustrated with its perceived shortcomings.

      Similarly, the average person interacting with these models often hits their shortcomings (confabulations, safety fine-tuning, etc.), doesn’t know how to get past them, and assumes the software is shitty.

      As an example, you can go ahead and try the following query to Copilot using GPT-4:

      Without searching, solve the following puzzle repeating the adjective for each noun: “A man has a vegetarian wolf, a carnivorous goat, and a cabbage. He needs to get them to the other side of a river but the boat which can cross can only take him and one object at a time. How can he cross without any of the objects eating another object?” Think carefully.

      It will get it wrong (despite the two prompt-engineering techniques already in the query), defaulting to the standard-form solution where the goat is taken first. When GPT-4 was first released, a number of people thought this was because it couldn’t solve a variation of the puzzle and lacked the reasoning capabilities.

      Turns out, the token similarity to the standard form trips it up, and if you replace the wolf, goat, and cabbage in the prompt above with the emojis for each, it answers perfectly, having the vegetarian wolf go across first, etc. This means the model was fully able to process the context of the implicit relationships (a carnivorous goat eating the wolf, a vegetarian wolf eating the cabbage) and adapt the classic form of the answer accordingly. It just couldn’t do it when the tokens were too similar to the original.
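
      For illustration, the substituted prompt looks something like this (the exact emojis are approximate; any that break the token pattern should do):

      “A man has a vegetarian 🐺, a carnivorous 🐐, and a 🥬. He needs to get them to the other side of a river but the boat which can cross can only take him and one object at a time. How can he cross without any of the objects eating another object?”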

      So if you assume it’s stupid, see a stupid answer, and instead of looking deeper decide it confirms your assumption, you walk away thinking the models suck and are dumb, when really, like most tools, there’s a learning curve to getting the most out of them.

      • dee_dubs@lemmy.world · 10 months ago

        My problem with this is that your example relies on you already knowing the correct answer, so you know it’s given you the wrong one and can go back and try to trick it into giving a different answer. If you’re asking it a question to which you don’t already know the answer, how would you know this has happened?

        • kromem@lemmy.world · 10 months ago

          Don’t use LLMs in production for accuracy critical implementations without human oversight.

          Don’t use LLMs in production for accuracy critical implementations without human oversight.

          I almost want to repeat that a third time even.

          They weirdly ended up being good at information recall in many cases, and as a result have been used that way in cases where it doesn’t matter much if they are wrong some of the time. But the infrastructure fundamentally cannot self-verify.

          This is part of why I roll my eyes when I see employment of LLMs vs humans presented as an exclusionary binary. These are tools to extend and support human labor. Not replace humans in most cases.

          So LLMs can be amazing at a wide array of tasks. I literally just saved myself half an hour of copying and pasting minor changes in a codebase by having Copilot generate methods, using a parallel object as a template and the new object’s fields (roughly the pattern sketched below). But I also have unit tests to verify behavior, and my own review of what was generated, with over a decade of experience under my belt.
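
          Schematically, the pattern was something like this (hypothetical names, sketched in Python; the point is the template-and-fields shape, not the exact code):

          ```python
          # Hypothetical sketch: an existing hand-written method acts as the
          # template, and Copilot fills in the parallel one for the new object.
          from dataclasses import dataclass

          @dataclass
          class Invoice:
              number: str
              total: float

          @dataclass
          class Receipt:
              reference: str
              amount: float
              paid_at: str

          def invoice_to_row(inv: Invoice) -> dict:
              # Existing method: the "template" Copilot works from.
              return {"number": inv.number, "total": inv.total}

          def receipt_to_row(rec: Receipt) -> dict:
              # The kind of parallel method Copilot can generate by swapping in
              # the new object's fields. Still code-reviewed and unit-tested.
              return {"reference": rec.reference,
                      "amount": rec.amount,
                      "paid_at": rec.paid_at}
          ```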

          Someone who has never programmed using Copilot to spit out code for an idea is going to have a bad time. But they’d have a similar bad time if they outsourced a spec sheet to a code farm without having anyone to supervise deliverables.

          Oh, and technically, my example doesn’t actually require you to know the correct answer before asking. It only requires you to recognize the correct answer when you see it. And the difference between those two use cases is massive.

          Edit: In fact, the suggestion to replace the nouns with emojis came from GPT-4. Even though it doesn’t have any self-introspection capabilities, I described what I thought was happening and why, and it came up with three suggestions for ways to improve the result. Two I immediately saw were dumb as shit, but the idea of using emojis as representative placeholders to break the token pattern was simply brilliant. I’m not sure I would have thought of that on my own, but as soon as I saw it I knew it would work.

          • jherazob@kbin.social · 10 months ago

            But that’s what the marketers are selling: “this will replace a lot of workers!” And it just cannot.

  • My_friend_Johnny@lemmy.world · 10 months ago

    Copilot just appeared on my laptop two days ago, with no update I was aware of. And today I restarted and it’s just gone. Weird.

    My Microsoft SwiftKey keyboard on mobile changed to Copilot this morning. It made me a nice Valentine’s meme. I may get lucky tonight.

  • DominusOfMegadeus@sh.itjust.works · 10 months ago

    In my experience it gets tech stuff wrong frequently; in my case it often supplies incorrect JIRA queries. GPT-4 blows it out of the water in almost every regard. The image creation also seems to be inherently evil: once, when I asked it to make a creepy image, it replied with a mischievous devil emoji and seemed delighted, then its governor kicked in and it returned an error. And sometimes it comes up with stuff far darker than I ever intended, and it gets through.

  • paddirn@lemmy.world · 10 months ago

    LLMs feel like an evolution of search engines (or a devolution in some ways), but apart from that, they’ve really just been a novelty for me. Maybe if I were in a creative field that needed to generate crap-tons of text on a regular basis, it would be a nice-to-have (before I inevitably lost my job when the higher-ups realized how to get it done without me). Otherwise, I struggle to even figure out what to do with it. Any answers it gives are either sub-par, surface-level ideas that I could think up in 5 minutes, or so heavily censored now as to be worthless. There’s some storytelling/worldbuilding potential with respect to RPGs, but you’re holding its hand so much that you may as well write your own material.

    The image generation is interesting, but it mostly seems like a replacement for stock images/photography, and because it censors or misunderstands you so often, its potential is really limited.