Lessons from the FDA's First AI Warning Letter
Imagine telling an FDA inspector, "I didn't know that was required because my AI didn't tell me." That was exactly the case for Purolea Cosmetics Lab when the FDA inspected its drug manufacturing facility on October 28-30, 2025. The facility subsequently received a warning letter on April 2, 2026 (WL 320-26-58) citing, among other GMP violations, "Inappropriate Use of Artificial Intelligence in Pharmaceutical Manufacturing." This isn't about just one facility; it's the first time the FDA has officially cited the misuse of AI in a warning letter. Let's take a moment to unpack this warning letter.
The firm used AI agents to draft critical documents such as drug specifications, SOPs, and master production records without human review. This violates 21 CFR 211.22(c), which establishes the fundamental responsibilities of the quality unit, here specifically the quality unit's responsibility to approve or reject all procedures or specifications impacting the identity, strength, quality, and purity of the drug product.
When investigators found a lack of foundational process validation (21 CFR 211.100), the firm claimed it was unaware of the requirement because the AI never surfaced it. Ignorance of the law is not a defense. Clearly, this firm was relying too heavily on AI tools and failing to keep the "expert-in-the-loop." The FDA and EMA published the "Guiding Principles of Good AI Practice in Drug Development" in January 2026 to lay a foundation for developing good practices that address the unique nature of AI tools and their use in drug development and throughout the product lifecycle. The guidance document cites 10 guiding principles:
1. Human-centric by design
2. Risk-based approach
3. Adherence to standards
4. Clear context of use
5. Multidisciplinary expertise
6. Data governance and documentation
7. Model design and development practices
8. Risk-based performance assessment
9. Life cycle management
10. Clear, essential information
You can read more about each of these 10 guiding principles in Guiding Principles of Good AI Practice in Drug Development (January 2026).
There are important lessons in this warning letter for the pharmaceutical and medical device industries. AI cannot replace the Quality Unit; this is not a task that can be outsourced to an AI agent. Every automated output affecting product quality must have a human "expert-in-the-loop." In the medical device industry, with the QMSR now in effect, AI-generated design history files or risk assessments must meet strict human oversight standards. To manage AI model updates without the need for new submissions, medical device manufacturers should document predetermined change control plans (PCCPs), as outlined in the FDA guidance document Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions.
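To make the "expert-in-the-loop" expectation concrete, here is a minimal sketch, in Python, of a release gate that blocks any AI-drafted GMP document until a named quality unit reviewer has recorded an approve/reject decision. Everything here (AIDraftDocument, release_for_use, the reviewer ID) is a hypothetical illustration, not part of any vendor product or regulatory text.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class ReviewStatus(Enum):
    DRAFT = "draft"          # AI output, not yet reviewed
    APPROVED = "approved"    # signed off by the quality unit
    REJECTED = "rejected"


@dataclass
class AIDraftDocument:
    """An AI-generated GMP document awaiting human review (illustrative)."""
    doc_id: str
    doc_type: str            # e.g. "SOP", "specification", "master production record"
    content: str
    status: ReviewStatus = ReviewStatus.DRAFT
    reviewed_by: Optional[str] = None
    reviewed_at: Optional[datetime] = None

    def quality_unit_review(self, reviewer: str, approve: bool) -> None:
        """Record the quality unit's approve/reject decision (cf. 21 CFR 211.22(c))."""
        self.status = ReviewStatus.APPROVED if approve else ReviewStatus.REJECTED
        self.reviewed_by = reviewer
        self.reviewed_at = datetime.now(timezone.utc)


def release_for_use(doc: AIDraftDocument) -> None:
    """Refuse to release any document that lacks documented human approval."""
    if doc.status is not ReviewStatus.APPROVED:
        raise PermissionError(
            f"{doc.doc_id}: AI-generated {doc.doc_type} cannot be released "
            "without quality unit approval."
        )
    print(f"{doc.doc_id} released (approved by {doc.reviewed_by}).")


# Usage: release succeeds only after a named reviewer approves the draft.
sop = AIDraftDocument("SOP-014", "SOP", "...AI-drafted text...")
sop.quality_unit_review(reviewer="j.smith (QA)", approve=True)
release_for_use(sop)
```

The design point is simple: the approval is a recorded, attributable event, and the release path fails closed if it is missing.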
In the current regulatory landscape, regulators expect documented proof of who reviewed the AI's work and how it was verified. It's important for companies to integrate AI systems into the QMS with a focus on vulnerability management and algorithm transparency. AI integration should not be a bolt-on for weak processes; that will only amplify existing weaknesses. Instead, the transition to AI should be implemented in strategic steps: first assure that all relevant QMS data is digital and that the quality data is clean, structured, and validated. AI systems must comply with 21 CFR Part 11 and EU Annex 11. There are several AI-ready enterprise platforms on the market now, such as MasterControl GxPAssist AI, AmpleLogic, and IQVIA SmartSolve. The most effective approach to AI integration uses AI to automate high-volume documentation, predict deviations before they occur, and streamline risk management. Implementing AI systems should also include appropriate staff training, both on how to use the AI tools and how to verify their output, critiquing it for GxP errors.
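As a rough illustration of what "documented proof of who reviewed the AI's work" can look like inside a system, the sketch below builds a time-stamped, hash-chained audit trail entry in the spirit of Part 11 expectations for secure, computer-generated records. This is an assumption-laden sketch for illustration only, not a validated Part 11 implementation; all field names are invented.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import List


def audit_entry(user_id: str, action: str, record_id: str,
                meaning: str, prev_hash: str) -> dict:
    """Build one time-stamped audit trail entry. Each entry is chained to
    the previous entry's hash so after-the-fact edits are detectable."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "user_id": user_id,        # who acted
        "action": action,          # what they did, e.g. "reviewed_ai_output"
        "record_id": record_id,    # which AI-generated record
        "meaning": meaning,        # what the review signifies
        "prev_hash": prev_hash,    # link to the prior entry
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry


trail: List[dict] = []
trail.append(audit_entry(
    user_id="j.smith",
    action="reviewed_ai_output",
    record_id="SPEC-2026-007",
    meaning="Verified AI-drafted specification against the approved method",
    prev_hash=trail[-1]["hash"] if trail else "GENESIS",
))
```

Chaining each entry to the hash of the one before it is one way to make the trail tamper-evident, which supports the "documented proof" a regulator will ask to see.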
AI is a powerful tool for efficiency, but at the end of the day, regulators hold humans accountable. The first FDA warning letter citing AI-related failures focused on deficiencies in the "expert-in-the-loop" model, where unclear roles, inadequate oversight, and insufficient documentation allowed unsafe outputs to reach the market. To deploy AI within the Quality Management System, companies should subject AI to design validation, formalize human-in-the-loop procedures, document performance monitoring and periodic revalidation, assure robust change control for model retraining or updates, and train the staff who handle AI. Organizations that take these proactive steps when integrating AI into the QMS can significantly reduce regulatory risk and better avoid FDA warning letters.
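To illustrate the change control point, here is a hypothetical sketch of a model change record that blocks deployment of a retrained model until revalidation is complete and, for changes outside the PCCP's scope, a documented quality approval exists. The field names and logic are illustrative assumptions, not a prescribed FDA workflow.

```python
from dataclasses import dataclass


@dataclass
class ModelChangeRecord:
    """Hypothetical change-control record for an AI model update."""
    model_name: str
    old_version: str
    new_version: str
    change_description: str
    within_pccp_scope: bool       # inside the predetermined change control plan?
    revalidation_complete: bool   # performance re-assessed against acceptance criteria?
    approved_by: str = ""         # quality approval for out-of-scope changes


def authorize_deployment(change: ModelChangeRecord) -> bool:
    """Block deployment of a retrained model until change control is satisfied."""
    if not change.revalidation_complete:
        raise RuntimeError(
            f"{change.model_name} {change.new_version}: revalidation "
            "incomplete; deployment blocked."
        )
    if not change.within_pccp_scope and not change.approved_by:
        raise RuntimeError(
            "Change falls outside the PCCP scope: documented quality approval "
            "(and possibly a new regulatory submission) is required."
        )
    return True
```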