Yup. And it isn’t even just artists. Disabled people who aren’t creatives on a professional level object to it as well. It’s an unpleasant form of ableism, trying to pander on the backs of those poor, sad disabled people.
But it is all a spectrum of technologies, when applied properly.
The “properly” part is the bottom panel of the posted comic, imo. The various generative models aren’t actually about helping people, and they aren’t about expanding human creativity. They’re about trying to cash in on a growing technology.
That doesn’t mean that AI can’t be a good thing. It just means that it’s a bad thing in the way it exists now, or at least in the form that’s being shoved down the public’s throat.
Had the big ones not stolen the training data, were they not being used to leverage corporate goals over humans, they could be a very useful thing.
AI still has the problems of spam (propaganda being the most dangerous variant of it), disinformation, and impersonating real artists. These could be fixed if every AI image/video had a watermark, but I don’t think that could be enforced well enough to completely eliminate these issues.
Those specific flaws are down to the same issue though. The training data was flawed enough, in large part due to being stolen wholesale, that it skews the matter towards counterfeits being easier. I would agree that in the absence of legislation, no for-profit business based on AI will ever tag its output. It could be an easier task for non-profit and/or open-source models though. Definitely something that needs addressing.
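For what it’s worth, the tagging part is technically trivial for an open-source model: PNG already supports embedded text metadata (tEXt chunks), so a generator could stamp its output at save time. A minimal stdlib sketch — the `ai-generated` keyword here is my own made-up example, not any actual standard:

```python
import struct
import zlib


def add_text_chunk(png_bytes: bytes, keyword: str, value: str) -> bytes:
    """Insert a PNG tEXt chunk (keyword NUL value) right after the IHDR chunk."""
    data = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    chunk = struct.pack(">I", len(data)) + b"tEXt" + data
    # Chunk CRC covers the type bytes plus the data.
    chunk += struct.pack(">I", zlib.crc32(b"tEXt" + data))
    # PNG layout: 8-byte signature, then chunks; IHDR is always first and is
    # 25 bytes total (4 length + 4 type + 13 data + 4 CRC).
    ihdr_end = 8 + 25
    return png_bytes[:ihdr_end] + chunk + png_bytes[ihdr_end:]


# Usage: tag the raw bytes of a generated image before writing them out.
# (Fake signature + IHDR-sized placeholder stands in for a real file here.)
png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 25
tagged = add_text_chunk(png, "ai-generated", "true")
```

The hard part was never the mechanism — it’s that nothing forces a for-profit generator to call something like this, and nothing stops a bad actor from stripping the chunk back out.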
I’m not sure what you mean by spam being a direct problem of AI. Are you saying that it’s easier to generate propaganda, and thus allow it to be spammed?
As near as I can tell, the propaganda farms were doing quite well spreading misinformation and disinformation before AI. Spamming it too, when that was useful to their goals.
Propaganda is more of a problem with text generation than image generation, but both can be used to change people’s opinions much more easily than before.
But that’s not what’s happening with AI “art”. That’s what’s being attempted with other technologies.
I have seen a lot of disabled artists complain about being used in pro-AI arguments.
As far as I know, the Twitter AI tags its images.