As I understand it, there are currently no real guidelines for this, even though AI is a big topic in FOSS right now.
In my opinion, AI can be quite dangerous for free software and that’s why we really need to discuss how we can address this issue. Here are some of the reasons for that:
- Poor quality, insecure code: AI still produces hard-to-maintain code that is often severely insecure. Even if you check every line carefully, there is a good chance you'll overlook something, because you never fully understand code you haven't written yourself
- Licensing issues: AI often reproduces code from its training material, which could be incompatible with that code's license
- Legal trouble: The copyright status of AI-generated code is not yet settled, so having AI code in your codebase could be a big legal risk
- Ethics: AI systematically exploits the work of all open source contributors for the profit of big companies. As part of the free software movement, we should reject this more openly
My idea for this policy is that we should definitely require AI-generated code to be marked as such (you have to disclose in your commits if you used AI). I think we should also ban entirely AI-generated PRs, because they produce more work for the maintainers than they actually help with anything. What I'm not quite sure about yet is how we handle the case where someone used AI just as autocomplete but wrote most of the code themselves. You should probably have to disclose that too, but do you think we should ban something like that?
Looking forward to hearing what you think about this!


Unfortunately we can’t tell the difference between the two.
How do you plan on verifying without creating a No True Scotsman problem?
There is no way to verify it, though sometimes it is quite obvious. It's not really about eliminating it completely (although I wish I could). It's more about taking a clear stance and maybe deterring a few people who think they can "help" with AI. Maybe we could ban people when it's obvious. Although we should always strive to maintain a welcoming culture and not create too much trouble for the people who don't use AI.
I don't think AI is advanced enough yet that we can't tell. Sure, it's problematic, and it often comes down to circumstantial evidence.
But I regularly spot posts/comments here that start with an affirmation, follow the rule of three, contain em-dashes, and tend to be unnecessarily verbose for the amount of information in them. It's not too hard to spot, and a good tell-tale sign of ChatGPT usage.
Similarly with PRs that contain a lot of bullet lists and emojis. And then Claude has its own style; you'll spot it after you've gone through enough AI code. It also sometimes states the obvious, like adding a comment explaining that the main function is the main function. Or it's the other extreme and there are no comments at all. And sometimes you'll see nonsensical code, or code that doesn't follow the project structure, or other issues due to limitations in the process. And IMO it tends to be very generous with changing a lot of stuff around and adding new dependencies to a project…
So I wouldn't say "we can't tell the difference". But it ain't easy, and it's time-consuming, because AI output looks legit at first glance. That's the intent behind it.
(Edit: and there’s other funny creative ways to tell.)