doesn’t it follow that AI-generated CSAM can only be generated if the AI has been trained on CSAM?
This article even explicitly says as much.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.
doesn’t it follow that AI-generated CSAM can only be generated if the AI has been trained on CSAM?
Not quite, since the whole thing with image generators is that they’re able to combine different concepts to create new images. That’s why DALL-E 2 was able to create an image of an astronaut riding a horse on the moon, even though it never saw such images, and probably never even saw astronauts and horses in the same image. So in theory these models can combine the concept of porn and children even if they never actually saw any CSAM during training, though I’m not gonna thoroughly test this possibility myself.
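For illustration, here’s a minimal sketch of that kind of concept mixing using the Hugging Face diffusers library. The checkpoint name and settings are assumptions on my part (any Stable Diffusion checkpoint behaves the same way), and it obviously needs a GPU and the weights downloaded:

```python
# Minimal sketch, assuming the diffusers library and a standard SD 1.5 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD checkpoint works similarly
    torch_dtype=torch.float16,
).to("cuda")

# The prompt combines concepts the model almost certainly never saw together in training.
image = pipe("an astronaut riding a horse on the moon, photorealistic").images[0]
image.save("astronaut_horse_moon.png")
```

The point is just that the prompt mixes concepts the model learned separately; no such combined image needs to exist in the training set.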
Still, as the article says, since Stable Diffusion is publicly available, someone can train it on CSAM images on their own computer specifically to make the model better at generating them. Based on my limited understanding of the lawsuits that Stability AI is currently dealing with (1, 2), whether they can be sued for how users employ their models will depend on how exactly these cases play out, and, if the plaintiffs do win, whether their arguments can be applied outside of copyright law to include harmful content generated with SD.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.
Well, they don’t own the LAION dataset, which is what their image generators are trained on. And to sue either LAION or the companies that use its datasets, you’d probably have to clear a very high bar: proving that they have CSAM images downloaded, know that they are there, and have not removed them. It’s similar to how social media companies can’t be held liable for users posting CSAM to their websites if they can show that they’re actually trying to remove these images. Some things will slip through the cracks, but if you show that you’re actually trying to deal with the problem, you won’t get sued.
LAION actually doesn’t even provide the images themselves; it only links to images on the internet, and it does a lot of screening to remove potentially illegal content. As they mention in this article, there was a report showing that 3,226 suspected CSAM images were linked in the dataset, of which 1,008 were confirmed by the Canadian Centre for Child Protection to be known instances of CSAM; the others were potential matches based on further analyses by the authors of the report. As they point out, there are valid arguments that this 3.2K number could be either an overestimate or an underestimate of the true number of CSAM images in the dataset.
The question then is whether any image generators were trained on these CSAM images before they were taken down from the internet, or whether there is unidentified CSAM in the datasets that these models are being trained on. The truth is that we’ll likely never know, unless the aforementioned trials reveal some email where someone at Stability AI admitted that they didn’t filter potentially unsafe images, knew about CSAM in the data, and refused to remove it, though for obvious reasons that’s unlikely to happen. Still, since the LAION dataset has billions of images, even if they are as thorough as possible in filtering CSAM, chances are that at least something slipped through the cracks, so I wouldn’t bet my money on them being able to infallibly remove 100% of it.
I would imagine that AI-generated CSAM can be “had” in big-tech AI in two ways: contamination, and training from an analog. Contamination would be CSAM ending up in an otherwise uncontaminated training pool and getting used in the AI’s training passes (not anyone deliberately introducing raw CSAM material). Training from analogous data is what the name states: get as close to the material as possible without raising eyebrows. Or the criminals could train off of “fresh” CSAM that’s unknown to law enforcement.
Well, it can draw an astronaut on a horse, and I doubt it had seen lots of astronauts on horses…
Yeah, but the article suggests that pedos train their local AI on existing CSAM, which would indicate that it’s somehow needed to generate AI-generated CSAM. Otherwise, why would they bother? They’d just feed their local AI images of children in innocent settings and images of ordinary porn to get it to generate CSAM.
which would indicate that it’s somehow needed to generate AI-generated CSAM
This is not strictly true in general. Generative AI is able to produce output that is not in the training data, by learning a broad range of concepts and applying them in novel ways. I can generate an image of a rollerskating astronaut even if there are no rollerskating astronauts in the training data.
It is true that some training sets have included CSAM, at least in the past. Back in 2023, researchers found a few thousand such images in the LAION-5B dataset (roughly one per million images). 404 Media has an excellent article with details: https://www.404media.co/laion-datasets-removed-stanford-csam-child-abuse/
On learning of this, LAION took the dataset down until it could be properly cleaned. Source: https://laion.ai/notes/laion-maintenance/
Those images were collected from the public web. LAION took steps to avoid linking to illicit content (details in the link above), but clearly it’s an imperfect system. God only knows what closed companies (OpenAI, Google, etc.) are doing. With open data sets, at least any interested parties can review, verify, and report this stuff. With closed data sets, who knows?
How do they know that? Did the pedos text them to let them know? Sounds very made up.
The article says “remixed” images of old victims have cropped up.
And again, what’s the source? The great thing about articles on CSAM is that you don’t need sources; everyone just assumes you have them but obviously cannot share them.
Did at least one pedo try that? Most likely yes. Is it the best way to get good quality fake CSAM? Not at all.
I don’t know, man. But I assume associations concerned with child abuse are all over that shit and checking it out. I’m not a specialist in CSAM, but I assume an article that says old victims show up in previously unseen images doesn’t lie, because why would it? It’s not like Wired is a pedo outlet…
Also, it was just a question. I’m not trying to convince you of anything 🙂
I think that article lacks nuance. It’s a bit baity and runs through the usual talking points without contextualizing the numbers or what’s actually happening out there, the consequences or the harm. That makes me believe the author just wants to push some point across.
But I’ve yet to read a good article on this; most articles are like this one. And yeah, are a few thousand images a lot in the context of the crime that’s happening online? Where are these numbers from, and what’s with the claim that there are more actual pictures out there? I seriously doubt that at this point, if it’s so easy to generate images. And what consequences does all of this have? Does it mean an increase or a decrease in abuse? And lots of services have implemented filters… Are the platforms doing their due diligence? Is this a general societal issue or criminals doing crime?
It’s certainly technically possible. I suspect these AI models just aren’t good at it. So the pedophiles need to train them on actual images.
I can imagine, for example, that AI doesn’t know what puberty is, since it has in fact not seen a lot of naked children. It would try to infer from all the internet porn it’s seen and draw any female with big breasts, disregarding age. And that’s not how children actually look.
I haven’t tried, since it’s illegal where I live. But that’s my suspicion why pedophiles bother with training models.
(Edit: If that’s the case, it would mean the tech companies are more or less innocent, at least at this.
And note that a lot of the CSAM talk is FUD (spreading fear, uncertainty and doubt). I usually see this in the context of someone pushing for total surveillance of the people. It’s far less pronounced in my experience than some people make it out to be. I’ve been around on the internet, and I haven’t seen any real pictures yet. I’m glad that I didn’t, but that makes me believe you have to actively look for that kind of stuff, or be targeted somehow.
And I think a bit more nuance would help. This article also lumps together fictional drawings and real pictures. I think that’s counterproductive, since one is a heinous crime and has real victims. And like, drawing nude anime children or de-aging celebrities isn’t acceptable either (depending on legislation), but I think we need to differentiate here. I think real pictures are on an entirely different level and should have far more severe consequences. If we mix everything together, we kind of take away from that.)
Training an existing model on a specific set of new data is known as “fine-tuning”.
A base model has broad world knowledge and the ability to generate outputs of things it hasn’t specifically seen, but a tuned model will provide “better” (fucking yuck to even write it) results.
The closer your training data is to your desired result, the better.
That’s not exactly how it works.
It can “understand” different concepts and mix them without having to see the combination beforehand.
As for the training thing, that would probably be more of a LoRA. LoRAs are like add-ons you can put on your AI so it draws certain things better, like a character, a pose, etc.; they’re not needed for the base model.
Grok literally says it would protect 1 jewish person’s life over 1 million non-jewish people. Wonder what they are training that shit on lol.
would it suck off one non-jewish man to save 1 millinons jewish lives?
If AI spits out stuff it’s been trained on
For Stable Diffusion, it really doesn’t just spit out what it’s trained on. Very loosely, it starts from white noise and then repeatedly denoises it, guided by your prompt (some samplers also add a bit of noise back at each step), and it keeps doing this until the result converges to a representation of your prompt.
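A toy sketch of that loop, just to make the idea concrete. This is not the real sampler math, and `fake_denoiser` is a made-up stand-in for the trained network:

```python
# Toy illustration of iterative denoising; the numbers and the "model" are made up.
import numpy as np

def fake_denoiser(x, step, prompt_embedding):
    # Stand-in for the trained network, which predicts the noise present in x
    # at this step, conditioned on the prompt. A real model learned this from data.
    return 0.1 * x

x = np.random.randn(64, 64)         # start from pure "white noise"
prompt_embedding = np.zeros(768)    # stand-in for the encoded text prompt

for step in reversed(range(50)):    # 50 denoising steps, from noisy to clean
    predicted_noise = fake_denoiser(x, step, prompt_embedding)
    x = x - predicted_noise         # nudge the image toward something matching the prompt

# x ends up as whatever the process converged to; nothing is copied verbatim from a dataset.
```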
IMO your premise is closer to true in practice, but still not strictly true, about large language models.
A fun anecdote: my friends and I tried the then brand-new MS image-gen AI built into Bing (for the purpose of a fake Tinder profile, long story).
The generator kept hitting walls because it had been fed so much porn that the model averaged women to be nude by default in images. You had to specify what clothes a woman was wearing. Not even just “clothed” worked; then it defaulted to lingerie or bikinis.
Not men though. Men it defaulted to being clothed.
I mean, Bing has proven itself to be the best search engine for porn, so it kinda stands to reason that their AI model would have a particular knack for generating even more of the stuff!
Their image-gen app isn’t theirs through and through. It runs on DALL-E.
a GPT can produce things it’s never seen.
It can produce a galaxy made out of dog food; doesn’t mean it was trained on pictures of galaxies made out of dog food.
Those are big companies. They have more legal protection than anyone in the world, and money, if judges/law enforcement still consider moving a case forward.
The article is bullshit that wants to stir shit up for more clicks.
You don’t need a single CSAM image to train AI to make fake CSAM. In fact, if you used the images from the database of known CSAM, you’d get very shit results because most of them are very old and thus the quality most likely sucks.
Additionally, in another comment you mention that it’s users training their models locally, so that answers your 2nd question of why companies are not sued: they don’t have CSAM in their training dataset.
First of all, it’s by definition not CSAM if it’s AI generated. It’s simulated CSAM: no people were harmed making it. That harm happened when the training data was created.
However, it’s not necessary that such content even exists in the training data. Just like ChatGPT can generate sentences it has never seen before, image generators can also generate pictures they have not seen before. Of course the results will be more accurate if that’s what the model was trained on, but it’s not strictly necessary. It just takes a skilled person to write the prompt.
My understanding is that the simulated CSAM content you’re talking about has been made by people running their software locally and having provided the training data themselves.
First of all, it’s by definition not CSAM if it’s AI generated. It’s simulated CSAM
This is blatantly false. It’s also illegal, and you can go to prison for owning, selling, or making child Lolita dolls.
I don’t know why this is the legal position in most places, because, as you mention, no one is harmed.
Dumb internet argument from here on down; advise the reader to do something else with their time.
What’s blatantly false about what I said?
CSAM = Child sexual abuse material
Even virtual material is still legally considered CSAM in most places. Although no children were hurt, it’s a depiction of it, and that’s enough.
Being legally considered CSAM and actually being CSAM are two different things. I stand behind what I said, which wasn’t legal advice. By definition it’s not abuse material, because nobody has been abused.
There’s a reason it’s legally considered CSAM: as I explained, it is material that depicts it.
You can’t have your own facts, especially not contrary to what’s legally determined, because that means your definition or understanding is actually ILLEGAL if you act based on it!!
Which law are you speaking about?
I already told you that I’m not speaking from legal point of view. CSAM means a specific thing and AI generated content doesn’t fit under this definition. The only way to generate CSAM is by abusing children and taking pictures/videos of it. AI content doesn’t count any more than stick figure drawings do. The justice system may not differentiate the two but that is not what I’m talking about.
The only way to generate CSAM is by abusing children and taking pictures/videos of it.
Society has decided otherwise; as I wrote, you can’t have your own facts or definitions. You might as well claim that in traffic red means go, because you have your own interpretation of how traffic lights should work.
Red is legally decided to mean stop, so that’s how it is, that’s how our society works by definition.
it’s by definition not CSAM if it’s AI generated
Tell that to the judge. People caught with machine-made imagery go to the slammer just as much as those caught with the real McCoy.
Have there been cases like that already?
It’s not legal advice I’m giving here.
It probably won’t yield good results for the literal query “child porn” because such content on the open web is censored, but I’m pretty sure degenerates know workarounds such as “young, short, naked, flat chested, no pubic hair”, all of which exist plentifully in isolation. Just my guess, I haven’t tried of course.
Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”
The model hasn’t necessarily been trained with CSAM; rather, you can create things called LoRAs, which help influence the image output of a model so that it’s better at producing very specific content that it may have struggled with before. For example, I downloaded some recently that help Stable Diffusion create better images of Battleships from Warhammer 40k. My guess is that criminals are creating their own versions for kiddy porn etc.
This is one of those things where both are likely to be true. All web-scale datasets have a problem with porn and CSAM, and it’s likely that people wanting to generate CSAM use their own fine-tuned models.
Here’s an example story: https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse and it’s very likely that this was the tip of the iceberg, with more CSAM still in these datasets.
I think you misunderstand what’s happening.
It isn’t that, to use one example to represent the idea, OpenAI is training their models on kiddie porn.
It’s that people are taking AI software and then training it on their existing material. The Wired article even specifically says they’re using older versions of the software to bypass safeguards that are in place to prevent it now.
This isn’t to say that none of the companies offering generative software have such imagery in the data used to train their models. But they wouldn’t have to knowingly possess it for it to be in there. Most of those assholes just grabbed giant datasets and plugged them in. They even used scrapers for some of it. So all it would take is them accessing some of it unintentionally for their software to end up able to generate new material. They don’t need to store anything once the software is trained.
Currently, all of them have some degree of prevention built into their products to stop them being used for that. How good those protections are, I have zero clue. But they’ve all made noises about it.
But don’t forget, one of the earlier iterations of software designed to identify kiddie porn was trained on seized materials. The point of that is that there are exceptions to possession. The various agencies that investigate sexual abuse of minors tend to keep materials because they need it to track down victims, have as evidence, etc. It’s that body of data that made detection something that can be automated. While I have no idea if it happened, it wouldn’t be surprising if some company or another did scrape that data at some point. That’s just a tangent rather than part of your question.
So, the reason that they haven’t been “sued” is that they likely don’t have any materials to be “sued” for in the first place.
Besides, not all generated materials are made based on existing supplies. Some of it is made akin to a deepfake, where someone’s face is pasted onto a different body. So they can take materials of perfectly legal adults who look young, slap real or fictional children’s faces onto them, and have new stuff to spread around. That doesn’t require any original material at all. You could, as I understand it, train a generative model on that and it would turn out realistic, fully generative materials. All of that is still illegal, but it’s created differently.