• Flaqueman@sh.itjust.works
      1 month ago

      When reading, you start at the top left, go right, then drop down one line and read again from left to right. If you carefully follow these rules, you’ll see that this sign says “AI wrote this sign”. There is no separation between the group of words on the left of each line and the ones on the right, hence no real reason to think it says “AI this wrote sign”. Now, go back to your drama podcast.

    • j4k3@lemmy.world
      1 month ago

      No offense, I’m pro union, but this sign is really anti union and hurtful in the big picture. There are subtle ways one might highlight the limitations of present LLMs, but I don’t think they would be well stated on a sign. This image says “I don’t understand this tool or technology at all.”

      AI doesn’t think or say anything. The response to any prompt is a reflection of the person’s prompt. The output is a simulacrum of the individual, in the shape of the data and alignment present within the model. You get back what you put in, with the caveat of your understanding of how the model loader code works and the limitations present.

      There are a ton of compromises made in all readily available public AI tools. If you’ve only ever used proprietary AI, you’ve likely never seen the scope of these. The underlying complexity and potential is tremendous when models are professionally tailored to a task. The primary library used in all publicly available tools is Transformers. Read the first page of its documentation: it plainly states that it is an incomplete example codebase that prioritizes easy modification above all other concerns. This is at the core of everything you can access. The companies targeting AI applications in entertainment are not using the Transformers library, or they are making massive compromises for general use by a technically incompetent public. If the true complexity of these systems were the default interface, no one would be using LLMs.

      Under the model loader code, these are enormous regular expressions used to parse text and convert it into floating point statistics. All of human language is now a high probability statistical math problem with a solution that can be found quickly with a bit of guidance. Saying “AI wrote this” is effectively saying “I made a math problem,” just like complaining about the output is like saying, “I don’t understand how to use a complex math tool.”
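      That text-to-numbers step can be sketched as a toy example. The regex and vocabulary below are made up for illustration; real tokenizers (like GPT-2’s BPE) use a large learned vocabulary and merge rules, but the principle is the same: a regex splits text into chunks, and a lookup table turns each chunk into an integer the math operates on.

      ```python
      import re

      # Hypothetical toy vocabulary; real models learn tens of thousands of entries.
      vocab = {"AI": 0, "wrote": 1, "this": 2, "sign": 3, "<unk>": 4}

      # A regex pre-tokenizer, loosely in the spirit of GPT-2's pattern:
      # split text into word chunks and punctuation chunks.
      pattern = re.compile(r"\w+|[^\w\s]")

      def tokenize(text):
          """Map text to integer token IDs: language as a math problem."""
          return [vocab.get(tok, vocab["<unk>"]) for tok in pattern.findall(text)]

      print(tokenize("AI wrote this sign"))  # [0, 1, 2, 3]
      ```

      From here a model sees only those integers (and the floating point vectors they index), never the words themselves.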

      Really, pushing against a tool like this is shooting oneself in the foot. It is also a failure to assess the primary issue in play. The real problem is not the tool; it is that present culture chooses to decay and extract wealth when efficiencies improve, instead of using the opportunity to grow. You don’t reduce the skilled workforce or extract the wealth. Intelligent humans would use the opportunity to add value, innovate, and expand in new ways. Cultures that invest in themselves will be successful in the long term. Those that extract wealth stagnate and decay into insignificant obscurity, as do those that fight a new and more efficient path instead of working within it.