A judge in Washington state has blocked "AI-enhanced" video evidence from being submitted in a triple murder trial. And that's a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

  • emptyother

    How long until we get upscalers of various sorts built into tech that shouldn't have them, for bandwidth reduction, storage compression, or cost savings? Can we trust what we capture with a digital camera, when companies replace a low-quality image of the moon with a professionally taken picture at capture time? Can sports replays be trusted when the ball is upscaled inside the judges' screens? Cheap security cams with "enhanced night vision" might get somebody jailed.

    I love the AI tech. But its future worries me.

    • Jimmycakes@lemmy.world

      It will wild out for the foreseeable future, until the masses stop falling for the gimmicks. Then, once the bullshit AI stops making money, it will be reserved for the actual use cases where it's beneficial.

    • GenderNeutralBro@lemmy.sdf.org

      AI-based video codecs are on the way. That isn't necessarily a bad thing, because they could be designed to be lossless, or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such, which is a good thing for film and TV but a bad thing for, say, security cameras.

      The devil’s in the details and “AI” is way too broad a term. There are a lot of ways this could be implemented.

      • jeeva@lemmy.world

        I don’t think loss is what people are worried about, really - more injecting details that fit the training data but don’t exist in the source.

        Given the hoopla Hollywood and directors made about frame-interpolation, do you think generated frames will be any better/more popular?

        • GenderNeutralBro@lemmy.sdf.org

          In the context of video encoding, any manufactured/hallucinated detail would count as “loss”. Loss is anything that’s not in the original source. The loss you see in e.g. MPEG4 video usually looks like squiggly lines, blocky noise, or smearing. But if an AI encoder inserts a bear on a tricycle in the background, that would also be a lossy compression artifact in context.
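
          To make the "hallucinated detail is loss" point concrete, here's a toy sketch (not any real codec, just a bare fidelity metric): to mean-squared error, detail invented by an encoder counts against reconstruction quality exactly like blur or blocking does.

```python
# Toy sketch: "loss" is any deviation from the source, so detail an
# AI encoder invents scores as error just like blur or blocking does.
import numpy as np

rng = np.random.default_rng(0)
source = rng.random((64, 64))        # stand-in for an original frame

blurry = source + rng.normal(0.0, 0.05, source.shape)  # classic artifacts

hallucinated = source.copy()
hallucinated[20:30, 20:30] = 1.0     # invented detail not in the source

def mse(a, b):
    return float(np.mean((a - b) ** 2))

print(f"MSE of blurry frame:       {mse(source, blurry):.4f}")
print(f"MSE of hallucinated frame: {mse(source, hallucinated):.4f}")
# Both are nonzero: to a fidelity metric, invented detail is just loss.
```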

          As for frame interpolation, it could definitely be better, because the current algorithms out there are not good. It will likely not be more popular, though, since this is generally viewed as an artistic matter rather than a technical one. For example, a lot of people hated the high frame rate in the Hobbit films even though it was a naturally high frame rate, filmed with high-frame-rate cameras, not the product of a kind-of-shitty algorithm applied after the fact.

      • DarkenLM@kbin.social

        I don't think AI codecs will be anything revolutionary. There are plenty of lossless codecs already, but if you want more detail, you'll need a better physical sensor, and I doubt there's anything that can be done to get around that (that actually represents what exists, rather than a hallucination).

        • foggenbooty@lemmy.world

          It's an interesting thought experiment, but we don't actually see what really exists; our brains are essentially doing their own AI vision, filling in things we don't actually perceive: movement while we're blinking, objects and colors in our peripheral vision, the state of objects when our eyes dart around, etc.

          The difference is we can't go back frame by frame and analyze these "hallucinations", since they're not recorded. I think AI-enhanced video will actually bring us closer to what humans see, even if some of the data doesn't "exist", but the article is correct that it should never be used as evidence.

        • Natanael@slrpnk.net

          I think there's a possibility for long-format video of stable scenes to use ML for higher compression ratios, by deriving a video-specific model of the objects in the frame and then describing their movements (essentially reducing the actual frames to wireframe models instead of image frames, then painting them in from the model).

          But that's a very specific approach that probably only works well for certain types of video content (think animated stuff).
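
          A deliberately cartoonish, runnable sketch of that idea (a single moving square stands in for a modeled object): the file stores the scene description once plus a pose per frame, and the decoder repaints full frames from it.

```python
# Cartoon version of model-based video compression: store the scene
# model once, store only per-frame object poses, repaint at decode time.
import numpy as np

H = W = 64
SIZE = 8                                     # the scene "model": one square
poses = [(5, 5), (5, 12), (6, 19), (8, 26)]  # (y, x) pose per frame

def render(pose):
    """Decoder: paint a full frame from the model plus one pose."""
    frame = np.zeros((H, W))
    y, x = pose
    frame[y:y + SIZE, x:x + SIZE] = 1.0
    return frame

frames = [render(p) for p in poses]          # reconstructed video

raw_floats = len(poses) * H * W              # storing every pixel of every frame
model_floats = 1 + 2 * len(poses)            # SIZE plus (y, x) per frame
print(f"raw: {raw_floats} values, model-based: {model_floats} values")
```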

        • GenderNeutralBro@lemmy.sdf.org

          There are plenty of lossless codecs already

          It remains to be seen, of course, but I expect to be able to get lossless (or nearly-lossless) video at a much lower bitrate, at the expense of a much larger and more compute/memory-intensive codec.

          The way I see it working is that the codec would include a general-purpose model, and video files would be encoded for that model + a file-level plugin model (like a LoRA) that’s fitted for that specific video.
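
          As a rough sketch of how that might be wired up (the numbers are made up and no such codec exists yet): the codec ships the big shared weights, and each video file carries only small low-rank factors that specialize them, LoRA-style.

```python
# Hypothetical "shared base model + per-video plugin" codec, LoRA-style:
# the decoder ships W once; each video file carries only the small
# low-rank factors (A, B) that specialize it: W_video = W + A @ B.
import numpy as np

d, rank = 1024, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))      # general-purpose model, shipped with the codec
A = rng.normal(size=(d, rank))   # per-video adapter, stored in the file
B = rng.normal(size=(rank, d))

W_video = W + A @ B              # model specialized to this one video

print(f"base layer: {W.size} params, per-video adapter: {A.size + B.size} params")
# Here the adapter is 64x smaller than the layer it specializes.
```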

        • Hexarei

          Nvidia's RTX Video upscaling is trying to be just that: DLSS, but run on a video stream instead of a game rendered on your own hardware. They've posited the idea of game streaming at lower bitrates just so you can upscale it locally, which to me sounds like complete garbage.

      • Buelldozer@lemmy.today

        AI-based video codecs are on the way.

        Arguably already here.

        Look at this description of Samsung's mobile AI for their S24 phones and newer tablets:

        AI-powered image and video editing

        Galaxy AI also features various image and video editing features. If you have an image that is not level (horizontally or vertically) with respect to the object, scene, or subject, you can correct its angle without losing other parts of the image. The blank parts of that angle-corrected image are filled in with generative-AI content that fits the scene best. You can also erase objects or subjects in an image. Another feature lets you select an object/subject in an image and change its position, angle, or size.

        It can also turn normal videos into slow-motion videos. While a video is playing, you hold the screen for the portion of the video that you want converted to slow motion, and AI will generate frames and insert them between the real frames to create the slow-motion effect.

    • MudMan@fedia.io

      Not all of those are the same thing. AI upscaling for compression in online video may not be any worse than "dumb" compression in terms of loss of data or detail, but you don't want to treat a simple upscale of an image as photographic evidence in a trial. Sports replays and Hawk-Eye technology don't really rely on upscaling; we now have ways to track things in an enclosed volume that are demonstrably more precise than a human ref looking at them. Whether that's better or worse for the game's pace and excitement is a different question.
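
      For a sense of the math such tracking rests on, here's a simplified least-squares triangulation; real systems fuse many calibrated cameras and noisy detections, but the core is just finding the 3D point closest to all the sight lines.

```python
# Simplified core of multi-camera ball tracking: each camera contributes
# a ray, and least squares finds the 3D point closest to all of them.
import numpy as np

def triangulate(centers, directions):
    """Least-squares intersection of rays (center + t * direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the ray's normal plane
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

ball = np.array([1.0, 2.0, 0.5])                  # ground-truth position
cams = [np.array([0.0, 0.0, 3.0]),
        np.array([5.0, 0.0, 3.0]),
        np.array([0.0, 5.0, 3.0])]
rays = [ball - c for c in cams]                   # ideal, noise-free sightings
print(triangulate(cams, rays))                    # -> [1.  2.  0.5]
```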

      The thing is, ML tech isn't a single thing. The tech itself can be used very rigorously; pretty much every scientific study you see these days uses ML to compile or process images or data, and that's not a problem if done correctly. The issue is that everybody assumes "generative AI" chatbots, upscalers, and image processors are all that ML is, and people keep trying to apply those things directly in the dumbest possible way, thinking it's basically magic.

      I’m not particularly afraid of “AI tech”, but I sure am increasingly annoyed at the stupidity and greed of some of the people peddling it, criticising it and using it.

    • elephantium@lemmy.world

      Cheap security cams with “enhanced night vision” might get somebody jailed.

      Might? We've been arresting the wrong people based on shitty facial recognition for at least five years now. This article has examples from 2019.

      On one hand, the potential of this type of technology is impressive. OTOH, the failures are super disturbing.

    • Dojan@lemmy.world

      Probably not far. Nvidia has had machine-learning-enhanced upscaling of video games for years at this point, and now they've also implemented similar tech for frame interpolation. The rendered output might be 720p at 20 FPS, but it will be presented at 1080p and 60 FPS.

      It's not a stretch to assume you could apply similar tech elsewhere. Non-ML, yet still decently sophisticated, frame interpolation and upscaling have been around for ages.
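
      The dumbest possible non-ML version of "720p at 20 FPS in, 1080p at 60 FPS out" looks something like this sketch (nearest-neighbor upscaling plus blended in-between frames); DLSS-style tech swaps both steps for learned models fed with engine data, but the bookkeeping is the same: most output pixels are synthesized, not rendered.

```python
# Naive "present more than was rendered": nearest-neighbor upscaling
# plus linearly blended in-between frames.
import numpy as np

def upscale(frame, factor=2):
    # Integer factor for simplicity; 720p -> 1080p is really 1.5x.
    return np.kron(frame, np.ones((factor, factor)))

def interpolate(a, b, steps=2):
    # 20 FPS -> 60 FPS means two generated frames per real gap.
    return [a + (b - a) * (i / (steps + 1)) for i in range(1, steps + 1)]

rendered = [np.random.default_rng(i).random((4, 4)) for i in range(3)]

shown = []
for a, b in zip(rendered, rendered[1:]):
    shown.append(upscale(a))
    shown.extend(upscale(f) for f in interpolate(a, b))
shown.append(upscale(rendered[-1]))

print(f"rendered {len(rendered)} frames, presented {len(shown)}")
```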

      • MrPoopbutt@lemmy.world

        Nvidia's game upscaling has access to game data, plus training data generated from gameplay, to produce footage that is appealing to the gamer's eye and not necessarily accurate. Security (or other) cameras don't have access to this extra data, and the use case for video in courts is to be accurate, not pleasing.

        Your comparison is apples to oranges.
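
        A sketch of the gap (the field names are illustrative, though motion vectors and depth from the engine really are inputs to DLSS-style pipelines): the game hands its upscaler ground truth that a camera pipeline would have to guess from pixels alone.

```python
# Illustrative only: what a game engine can hand its upscaler versus
# what a security-camera pipeline actually has to work with.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GameUpscalerInput:
    color_frame: bytes
    motion_vectors: bytes            # exact per-pixel motion, from the engine
    depth_buffer: bytes              # exact geometry, from the engine

@dataclass
class CameraUpscalerInput:
    color_frame: bytes
    motion_vectors: Optional[bytes] = None  # estimated from pixels, if at all
    depth_buffer: Optional[bytes] = None    # guessed, if at all
```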

        • Dojan@lemmy.world

          No, I think you misunderstood what I'm trying to say. We already have tech that uses machine learning to upscale stuff in real time, but I'm not claiming that it's accurate on things like court videos. I don't think we'll ever get to a point where it can be accurate as evidence, because by the very nature of the tech it's making up detail, not enhancing it. You can't enhance what isn't there. It's not turning nothing into accurate data; it's guessing based on its input and what it's been trained on.

          Prime example right here: this is the objectively best version of Alice in Wonderland, produced by the BBC in 1999 and released on VHS. As far as I can tell, there was never a high-quality version available. Someone used machine learning to upscale it, and overall it looks great, but there are scenes (such as the one linked) where you can clearly see the flaws. Tina Majorino has no face, because in the original data there wasn't enough detail to discern a face.

          Now, we could obviously train a model to recognise "criminal activity" (stabbing, shooting, what have you), but then you end up with models that mistake one thing for another, like scratching your temple being read as driving while on the phone. And if, instead of detecting something, the model's job is to fill in missing data, we have a recipe for disaster.

          Any evidence that has had machine learning involved should be treated with at least as much scrutiny as a forensic sketch, which, while it can be useful in an investigation, generally doesn't carry much weight as evidence. That said, a forensic sketch is created through collaboration between an artist and a witness, so there is intent behind it. Machine-generated artwork lacks intent; you can tweak the parameters until it generates roughly what you want, but it's honestly better to just hire an artist and get exactly what you want.

        • Buelldozer@lemmy.today

          Security (or other) cameras don’t have access to this extra data

          Samsung's AI on their latest phones and tablets does EXACTLY what @[email protected] is describing. It will literally create data, including parts of scenes and even full frames, in order to make video look better.

          So while a true security camera may not be able to do it, there are now widely available consumer products that WILL. You're also forgetting that even security-camera footage can be processed through software, so footage from those isn't immune to AI fiddling either.

    • Bread@sh.itjust.works

      The real question is: could we ever really trust photographs before AI? Image manipulation has been a thing since long before the digital camera and Photoshop. What makes the images we see actually real? Cameras have been miscapturing image data for as long as they have existed. Do the light levels in a photo match what was actually there according to the human eye? Usually not. So what makes a photo real?

      • emptyother

        They can. But there's a reasonable level of trust that a security feed has been kept secure and not tampered with by the owner, if he doesn't have a motive. But what if not even the owner knows that somewhere in their tech chain (maybe the camera, maybe the screen, maybe the storage device, maybe all three) the image was "improved"? No evidence of tampering. We'll have the police blaming Count Rugen for a bank robbery he didn't do, but the camera clearly shows a six-fingered man!