• JackbyDev
    22 · 1 month ago

    Alt text: a beautiful girl on a dock at sunset with some fugly hands and broken ass fingees

    • Adalast@lemmy.world
      11 · 1 month ago

      Honestly, auto-generating text descriptions for visually impaired people is probably one of the few potential good uses for LLM + CLIP. Being able to have a brief but accurate description without relying on some jackass to have written it is a bonefied good thing. It isn’t even eliminating anyone’s job, since the jackass doesn’t always do it in the first place.

      • AWildMimicAppears@lemmy.dbzer0.com
        2 · 1 month ago

        I am so sorry, and I agree with your point, but I really had a good laugh at my mental image of a bonefied good thing :-)

        If you already knew, or it was autocorrect, just ignore me; if not, it’s bona fide :-)

      • SGforce@lemmy.ca
        1 · 1 month ago

        The models that do that now are very capable but aren’t tuned properly IMO. They are overly flowery and sickly positive even when describing something plain. Prompting them to be more succinct just makes them cut themselves off and leave out important things. But I can totally see that improving soon.

        • DillyDaily@lemmy.world
          1 · 1 month ago

          Unfortunately, the models have been trained on biased data.

          I’ve run some of my own photos through various “lens”-style description generators as an experiment, and knowing the full context of the image makes the generated descriptions all the more hilarious.

          Sometimes the model tries to extrapolate context; for example, it will randomly decide to describe an older woman as a “mother” if there is also a child in the photo, even when a human eye could tell from context that it’s more likely a teacher and a student. There’s a lot a human can do that a bot can’t, including having the common sense to use appropriate language when describing people.

          Image descriptions will always be flawed because the focus of the image is always filtered through the description writer. It’s impossible to remove all bias. For example, because of who I am as a person, it would never occur to me to even look at someone’s eyes in a portrait, let alone write what colour they are in the image description. But for someone else, eyes may be super important: they always notice eyes, even subconsciously, so they make sure to note them in their description.