From the replies:
In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere messes up, the company and the authorities should, at least in theory, be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus, how often it's confidently incorrect is well known and well documented. To use it in the pharmaceutical industry would be teetering on gross negligence and asking for trouble.
Also suppose that you use it in such a way that it helps your company profit immensely and, uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that it was not infringing on the other company's IP, but you don't have that here. What if someone gets hurt? Do you really want to make the case that you just gave ChatGPT a list of results and it gave a recommended dosage for your drug? Probably not. When you validate your SOPs, are they going to include "consult ChatGPT" as a step? If so, then you need to make sure that OpenAI holds its program to the same documentation standards and certifications that you hold yours to, and I don't think they want to tangle with the FDA at the moment.
There's just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.
And a good sneer:
With a few years and a couple billion dollars of investment, it'll be unreliable much faster.
"If you don't know the subject, you can't tell if the summary is good" is a basic lesson that so many people refuse to learn.