A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

  • @[email protected]
    2
    3 months ago

    How is guided pattern recognition different from imagination (and therefore intelligence), though?

    • @[email protected]
      6
      3 months ago

      There are a lot of other layers in brains that are missing in machine learning. These models don’t form world models, don’t have an understanding of facts, and have no means of ensuring consistency, to start with.

      • @[email protected]
        2
        3 months ago

        I mean, if we consider just the reconstruction process used in digital photos, it feels like current AI models are already very accurate and wouldn’t be improved by much even if we made them closer to real “intelligence”.

        The point is that reconstruction itself can’t reliably produce missing details, not that a “properly intelligent” mind would be any better at it than current AI.
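
        To make that concrete, here’s a minimal sketch (assuming numpy as the only dependency; the two “images” are invented toy data): two very different originals collapse to the identical low-res image, so no reconstruction, however intelligent, can tell which one was real.

        ```python
        # Minimal sketch with invented toy data: downscaling is many-to-one,
        # so the low-res image alone cannot identify its original.
        import numpy as np

        def downscale_2x(img):
            """Average each 2x2 block into one pixel (a simple box filter)."""
            h, w = img.shape
            return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

        # Two very different 4x4 "originals"...
        checker = np.tile(np.array([[0.0, 200.0], [200.0, 0.0]]), (2, 2))
        flat = np.full((4, 4), 100.0)

        # ...produce the exact same 2x2 "sensor" image.
        print(downscale_2x(checker))  # [[100. 100.] [100. 100.]]
        print(downscale_2x(flat))     # [[100. 100.] [100. 100.]]
        # Any "enhancement" must pick among infinitely many candidates;
        # the fine detail was destroyed, not hidden.
        ```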

      • @[email protected]
        2
        3 months ago

        They absolutely do contain a model of the universe which their answers must conform to. When an LLM hallucinates, it is creating a new answer which fits its internal model.

        • @[email protected]
          1
          3 months ago

          Statistical associations are not equivalent to a world model, especially because they’re not deterministic and don’t even try to prevent giving out conflicting answers. It models only the use of language.
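
          To illustrate “modeling only the use of language” with a deliberately tiny sketch (a toy bigram model over an invented corpus; real LLMs are enormously more capable, but the same point applies): it stores which word tends to follow which, and nothing in it can notice that its own continuations contradict each other.

          ```python
          # Toy bigram model, invented corpus: pure word-to-word statistics.
          import random
          from collections import defaultdict

          corpus = ("the moon is made of rock . "
                    "the moon is made of cheese .").split()

          follows = defaultdict(list)
          for prev, nxt in zip(corpus, corpus[1:]):
              follows[prev].append(nxt)

          def generate(start, steps=8):
              out = [start]
              for _ in range(steps):
                  out.append(random.choice(follows[out[-1]]))
              return " ".join(out)

          print(generate("the"))
          # The model asserts "rock" or "cheese" with equal confidence: it
          # has word statistics, not facts, and no mechanism for consistency.
          ```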

          • @[email protected]
            1
            3 months ago

            It models only the use of language

            This phrase, so casually deployed, is doing some seriously heavy lifting. Language is by no means a trivial thing for a computer to meaningfully interpret, and the fact that LLMs do it so well is way more impressive than a casual observer might think.

            If you look at earlier procedural attempts to interpret language programmatically, you will see that time and again the developers got stopped in their tracks, because in order to understand a sentence you need to understand the universe - or at least a particular corner of it. For example, given the sentence “The stolen painting was found by a tree”, you need to know that a tree is the kind of thing that marks a place but can’t find anything, or you can’t rule out the absurd reading in which the tree did the finding.
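
            A hedged sketch of that trap (the animacy lexicon below is hand-invented, and it is exactly the fragment of world knowledge that procedural systems could never write down at scale):

            ```python
            # Toy disambiguator: "was found by X" has two readings, and
            # only world knowledge about X can choose between them.
            ANIMATE = {"detective", "dog", "neighbor"}   # things that can find
            INANIMATE = {"tree", "fence", "river"}       # things that mark a place

            def interpret(by_object):
                if by_object in ANIMATE:
                    return f"agent reading: the {by_object} did the finding"
                if by_object in INANIMATE:
                    return f"locative reading: the painting lay near the {by_object}"
                return "stuck: no idea what kind of thing this is"

            print(interpret("tree"))       # locative - a tree can't find things
            print(interpret("detective"))  # agent - a detective can
            print(interpret("drone"))      # unknown word -> the parser gives up
            ```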

            You can’t really use language *unless* you have a model of the universe.

            • @[email protected]
              1
              3 months ago

              But it doesn’t model the actual universe; it models rumor mills.

              Today’s LLM is the versificator machine of *1984*. It cares not for truth; it cares for distracting you.

              • @[email protected]
                1
                3 months ago

                They are remarkably useful. Of course there are dangers relating to how they are used, but sticking your head in the sand and pretending they are useless accomplishes nothing.

    • @[email protected]
      1
      3 months ago

      Your comment is a good reason why these tools have no place in the courtroom: the things you describe are imagination.

      They’re image generation tools that will generate a new, unrelated image that happens to look similar to the source image. They don’t reconstruct anything and they have no understanding of what the image contains. All they know is which colors the pixels in the output would most probably have, given the pixels in the input.

      It’s no different from giving a description of a scene to an author, asking them to come up with any event that might have happened in such a location and then trying to use the resulting short story to convict someone.
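
      A caricature of that “probable colors” process (all data below is invented, and real enhancers are far more sophisticated, but the failure mode is the same): the output detail is looked up from training statistics rather than recovered from the photo.

      ```python
      # Caricature with invented data: "enhancement" as a lookup of which
      # high-res patch usually accompanies a given low-res pixel.
      import numpy as np

      rng = np.random.default_rng(0)

      # Fake "training set": low-res pixel values paired with 2x2 patches
      # of randomly invented high-res detail.
      train_lo = rng.integers(0, 256, size=500).astype(float)
      train_hi = train_lo[:, None] + rng.normal(0.0, 25.0, size=(500, 4))

      def enhance(pixel):
          """Return the training patch whose source pixel best matches the input."""
          i = np.abs(train_lo - pixel).argmin()
          return train_hi[i].reshape(2, 2)

      print(enhance(128.0))
      # The detail comes from the training set, not the scene. The model
      # cannot know what the photographed surface really looked like; it
      # only knows what such a pixel usually comes from.
      ```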

      • @[email protected]
        4
        3 months ago

        They don’t reconstruct anything and they have no understanding of what the image contains.

        With enough training they, in fact, will have some understanding. But that still leaves us with the “enhance meme” problem, a.k.a. the limited resolution of the original data. There is no way to discover what exactly was hidden between the visible pixels, only to approximate it. So yes, you are correct, I just described it a bit differently.
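
        “Only to approximate” fits in a few lines (a sketch with an invented 1-D signal standing in for image detail): interpolation produces a smooth, plausible reconstruction that is simply wrong wherever the real detail fell between the samples.

        ```python
        # Sketch with an invented signal: interpolation yields a plausible
        # guess between samples, not the detail that was actually there.
        import numpy as np

        x = np.linspace(0.0, 1.0, 101)
        real = np.sin(40.0 * x)               # fine detail in the "scene"
        x_lo, samples = x[::10], real[::10]   # what the sensor recorded

        guess = np.interp(x, x_lo, samples)   # the "enhanced" reconstruction
        print(f"max error: {np.abs(guess - real).max():.2f}")  # large
        # Everything between the samples is an estimate; the true values
        # were never recorded and can only be approximated.
        ```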

        • @[email protected]
          1
          3 months ago

          they, in fact, will have some understanding

          These models have spontaneously acquired a concept of things like perspective, scale and lighting, which you can argue is already an understanding of 3D space.

          What they do not have (and IMO won’t ever have) is consciousness. The fact we have created machines that have understanding of the universe without consciousness is very interesting to me. It’s very illuminating on the subject of what consciousness is, by providing a new example of what it is not.

          • @[email protected]
            0
            3 months ago

            I think AI doesn’t need consciousness to be able to say what is in the picture, or to guess what else specific details could contain.