• noodlejetski@lemm.ee · 6 months ago

    > Their real power is their ability to understand language and context.

    …they do exactly none of that.

    • breakingcups@lemmy.world · 6 months ago

      No, but they approximate it, which is fine for most of the use cases the person you’re responding to described.

      • FarceOfWill@infosec.pub · 6 months ago

        They’re really, really bad at context. The main failure case isn’t making things up; it’s that text or imagery in one part of the result doesn’t fit with text or imagery in another part, because they can’t even maintain context across their own replies.

        See images with three hands, or where bowstrings mysteriously vanish, etc.

        • FierySpectre@lemmy.world · 6 months ago

          New models are actually really good at context: the amount of input they can take has exploded (fairly) recently, so you can give them whole datasets or books as context and ask questions about them.
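
          A minimal sketch of that pattern, assuming the OpenAI Python SDK; the model name, file path, and question are placeholders rather than anything from this thread, and any sufficiently long-context model would do:

          ```python
          # Long-context sketch: load an entire book as plain text and ask a
          # question about it in a single request. "book.txt" and "gpt-4o"
          # are illustrative assumptions, not details from the discussion.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          with open("book.txt", encoding="utf-8") as f:
              book = f.read()  # the whole document becomes part of the prompt

          response = client.chat.completions.create(
              model="gpt-4o",  # assumes a model with a large context window
              messages=[
                  {"role": "system",
                   "content": "Answer using only the provided text."},
                  {"role": "user",
                   "content": f"{book}\n\nQuestion: who is the narrator?"},
              ],
          )
          print(response.choices[0].message.content)
          ```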

    • Lmaydev · 6 months ago

      They do it much better than anything you can hard-code currently.