Artists have finally had enough of Meta’s predatory AI policies, but Meta’s loss is Cara’s gain. An artist-run, anti-AI social platform, Cara has grown from 40,000 to 650,000 users within the last week, catapulting it to the top of the App Store charts.

Instagram is a necessity for many artists, who use the platform to promote their work and solicit paying clients. But Meta is using public posts to train its generative AI systems, and only European users can opt out, since they’re protected by GDPR laws. Generative AI has become so front-and-center on Meta’s apps that artists have reached their breaking point.

  • tyler · 7 months ago

    I’ve heard the claim that the “big guys are the only ones that will profit from AI regulation”, but I haven’t ever heard an actual argument as to why.

    And in my mind the biggest issues with AI image generation have nothing to do with using it as a tool for artists. That’s perfectly fine. But what it is doing is making it infinitely easier to spread enormous amounts of completely unidentifiable misinformation, now that it comes paired with indistinguishable text-to-speech and video generation.

    The barrier is no longer “you need to be an artist”. It’s “you need to have an internet connection”.

    • IHeartBadCode@kbin.run · 7 months ago

      Ah. No problem. So the notion behind “the big guys are the ones that stand to profit from AI regulation” is that regulation curtails activity in a general sense. However, many of the offices that create regulation defer to industry experts for guidance on regulatory processes, or have former industry experts appointed onto regulatory committees (a good example of the latter is Ajit Pai and his removal of net neutrality).

      AI regulation at the federal level has mostly centered on “trusted” AI generation, as you mentioned:

      But what it is doing is making it infinitely easier to spread enormous amounts of completely unidentifiable misinformation, now that it comes paired with indistinguishable text-to-speech and video generation

      And the talk has been to have the industry itself add checks along the way (much like how the music industry polices itself, or how the airline industry has mostly policed itself). So this would leave companies like Adobe and Disney to largely dictate what count as “trusted” platforms for AI generation: platforms on which they ensure, via content moderation and software control, that only “trusted” AI makes it out into the wild.

      Regulation can then take the shape of social media platforms being required to enforce rules on AI posts, source-code distributors like GitHub being required to enforce distribution prohibitions, and so on.

      This takes the tools for any AI out of the hands of the public and places them all in the hands of Adobe, Disney, Universal, and so on. Thus, if you want to use AI you must use one of their tools, whose TOS may in turn say that you cannot use their product to compete with their products. Basically establishing a monopoly.

      This happens a lot in regulatory processes, which is why things like the RIAA, the MPAA, Boeing, and so on are so massive and seemingly unbreakable. Their positions aren’t enshrined in law, but regulatory processes create a de facto monopoly in a market that becomes difficult to enter, because newcomers fear they can’t compete.

      The big guys, being the industry leaders, would be the first to get a crack at writing the rules that the regulatory body debates in a hearing. In addition to that expert phase, the regulatory process also includes a public comment period, which would allow the public to raise concerns about the expert-submitted recommendations. But as demonstrated during the public comment period on removing the net neutrality rules regulating ISPs, the FCC decided that the comments were “fake” and only heard a small “selected” percentage of them.

      Side note: in a regulatory hearing, every accepted public comment must be debated, and the rationale for the conclusion of that debate must be entered into the record. This is why Ajit Pai suspended the comments on NN: the FCC didn’t want to enter justifications into the record that could later be brought up in a court case.

      The barrier is no longer “you need to be an artist”. It’s “you need to have an internet connection”

      And yeah, that might be a reason worth locking AI out of the hands of the public forever. But it doesn’t address the “AI taking jobs” argument; it just means that small startups will never be able to create jobs with AI. So if the debate is “AI shouldn’t take our jobs, let’s regulate it”, regulation will only make things worse in the end (sort of how AWS has mostly dominated Internet services, which everyone started noticing wasn’t ideal around 2019-2021, when Twitter started kicking people off its service and anyone wanting to build the next Twitter was limited to what Amazon would and would not accept).

      So that’s the argument, and there are pros and cons to each side. But we have to be pretty careful about which way we go, because once we pick a direction it’s pretty difficult to change course; corporations are incredibly good at adapting. I distinctly remember streaming services being the “breath of fresh air from cable” right up until they weren’t. And now, with hard media becoming harder to purchase (not impossible, mind you), we’ve sort of entrenched streaming. Case in point: I love Pokémon Concierge, and it is not available for purchase on DVD or anything similar (at least not a non-bootleg version), so if I ever want to watch it again I need Netflix.

      And do note, I’m not saying we shouldn’t have regulation on AI. What I am saying is that there’s a lot to consider in AI regulation, and the public needs some unified ideas about it going into the regulatory body’s public comment period, to ensure that small businesses that want to use AI are still allowed to. Otherwise the expert phase will dominate and AI will be out of the public’s hands for quite some time. We’re just now getting around to reversing the removal of net neutrality that started back in 2017, and companies have used the time from 2017 to today to form business alliances (the Disney + Hulu Verizon deal, for example) that will be hard to compete with for some time.

      • molave@reddthat.com · 7 months ago

        I’m very wary of the measures that could pass if some of the anti-AI-art people get their way. I know how messy and difficult handling fair-use material on YouTube can be; there would be more of that on more platforms.

        I agree unregulated AI is problematic. At the same time, I’m cynical about what the actual measures would look like.

        • IHeartBadCode@kbin.run · 7 months ago

          I agree unregulated AI is problematic. At the same time, I’m cynical about what the actual measures would look like.

          OMG, thank you, this is the correct take.