
Agentic AI Comes to the FDA: A Closer Look at the Rollout

The FDA’s agentic AI tools can plan and execute multi-step tasks with human oversight, helping streamline review validation, inspections and post-market surveillance.

The FDA has introduced new agentic AI capabilities for its workforce, marking a significant step in the agency’s long-term effort to integrate advanced digital tools into regulatory operations. Agentic AI refers to systems that can plan tasks, make decisions and carry out multi-step actions with human oversight.

These capabilities allow staff to build and manage complex AI workflows, drawing on multiple models to support tasks that require planning, reasoning and multi-step execution. The tool is voluntary and designed with safeguards such as human oversight and protections for sensitive data.

FDA Commissioner Dr. Marty Makary said the rollout reflects a broader effort to give reviewers, scientists and investigators stronger digital support.

The deployment builds on the FDA’s earlier launch of Elsa, a large language model (LLM)-based internal tool introduced in May 2025. More than 70% of staff now use Elsa, prompting ongoing updates that improve its integration across scientific, administrative and regulatory workflows.

With agentic AI, FDA teams can now support tasks such as meeting management, review validation, pre-market assessments, post-market surveillance, inspection activities and other compliance functions.

The agency is also launching a two-month Agentic AI Challenge, which will invite staff to design solutions for presentation at the FDA’s Scientific Computing Day in January 2026.

The tools operate within a secure GovCloud environment and do not learn from staff inputs or any data submitted by the regulated industry, a measure designed to protect the confidentiality of regulatory submissions.


XTALKS WEBINAR: Mastering Inspection Readiness for FDA’s AI Tool Elsa

Live and On-Demand: Monday, December 15, 2025, at 1pm EST (10am PST)

Register for this free webinar to learn how inspection readiness can evolve to meet the demands of AI-enabled regulatory oversight.


Other AI-Related Actions the FDA Took in 2025

The agentic AI deployment follows several other digital-focused initiatives the FDA advanced this year. The agency continued its participation in the HHS AI Task Force, which is developing cross-agency guidelines on responsible AI use in healthcare, including considerations for safety, cybersecurity and real-world performance monitoring.

In 2024, the FDA finalized guidance on Predetermined Change Control Plans (PCCPs) for AI-enabled device software. A PCCP is a pre-agreed plan that outlines the types of model updates a manufacturer may make after a device is on the market, provided the changes remain within defined safety and performance limits. The guidance gives companies a clearer framework for carrying out these planned modifications while maintaining the device’s safety and effectiveness.

Earlier Regulatory AI Milestones Over the Years

The FDA’s recent AI expansion builds on several earlier milestones. In 2021, the agency published its first AI/ML-Based Software as a Medical Device Action Plan, which outlined priorities such as transparency, real-world performance monitoring and a more predictable path for adaptive machine learning tools.

By August 2024, FDA activity related to AI-enabled products had grown substantially. A JAMA Health Forum analysis identified approximately 950 AI/ML-enabled medical devices cleared or approved across clinical areas, including radiology, cardiology and neurology, with hundreds authorized in 2023 and 2024.

Global Context: Other Regulators Are Also Moving on AI

The FDA’s internal AI modernization parallels similar activity among other major regulators. In Europe, the European Medicines Agency (EMA) and the Heads of Medicines Agencies released final guidance on the use of AI across the medicinal product lifecycle. Taking effect in 2025, it outlines responsible AI practices in drug development, clinical trials and post-market safety monitoring, with an emphasis on transparency and human oversight.

The UK’s MHRA is advancing its Software and AI as a Medical Device roadmap, which includes expectations for adaptive algorithms and expanded sandbox programs that let companies test emerging technologies in a controlled environment with input from regulators.

In Canada, federal, provincial and territorial partners endorsed the Pan-Canadian AI for Health Guiding Principles, a shared framework that emphasizes equity, privacy, safety, transparency and Indigenous-led governance in the adoption of AI across the health system.

Agentic AI Adoption Could Be Inevitable

McKinsey research suggests that agentic AI could evolve from a supporting tool into something closer to a workflow partner, and groups like EY have noted that agentic AI brings a different level of autonomy than earlier AI tools.

How quickly those possibilities materialize will likely depend on adoption and real-world performance, and organizations may need stronger oversight and clearer governance as these systems take on more complex tasks.


If you want your company to be featured on Xtalks.com, please email [email protected].