
EMA and FDA guiding principles for AI in drug development: A review

23 March 2026
Chris Holder and Saara Leino

In January 2026, the European Medicines Agency (EMA) and the United States Food and Drug Administration (FDA) jointly published ten guiding principles for the use of artificial intelligence (AI) in drug development.

The joint publication is notable not merely as a regulatory document but as a signal of transatlantic intent at a time when political and regulatory approaches in Europe and the UK are drifting apart faster than ever. For those involved in the drug development industry in the UK (regulators, pharmaceutical companies, and medicine developers), an understanding of these principles is essential when using AI to develop products intended for the UK, EU, or US markets.

The ten guiding principles

  1. Human-centric by design: AI technologies must align with ethical and human‑centric values.
  2. Risk-based approach: AI development follows a context-driven risk-based methodology, including proportionate validation and oversight.
  3. Adherence to standards: AI systems are expected to comply with relevant legal, ethical, technical, scientific and regulatory standards, including GxP.
  4. Clear context of use: The purpose, role, and scope of the AI tool must be well‑defined.
  5. Multidisciplinary expertise: AI development should incorporate expertise across technical, scientific, clinical, and regulatory domains.
  6. Data governance and documentation: Data provenance, processing, and analytical decisions must be traceable, verifiable, and aligned with GxP expectations.
  7. Model design and development practices: AI systems should be built using best practices that promote transparency, interpretability, reliability, and robustness.
  8. Risk-based performance assessment: Performance assessments must evaluate the full human–AI system using appropriate, fit‑for‑purpose data.
  9. Life-cycle management: Continuous monitoring and periodic reassessment are required to address issues such as data drift.
  10. Clear, essential information: Information about context of use, performance, limitations, and updates must be communicated in plain, accessible language.

The never‑ending question: What is AI?

Unlike the OECD and the EU AI Act, both of which define an “AI system”, the EMA–FDA publication defines artificial intelligence itself. Its definition covers system‑level technologies used to generate or analyse evidence across nonclinical, clinical, post‑marketing, and manufacturing phases.

This definition is notably broad. It does not require autonomy or decision‑making capability, which means many software and hardware tools used in drug development could fall within scope. That is an important consideration if these principles begin to appear in contractual obligations.

The English law context

From an English law perspective, these principles matter for several reasons:

1. Regulatory alignment

Although the UK is no longer bound by EMA rules, the MHRA continues to emphasise international alignment in digital health and AI.

2. Contractual implications

Given the complex web of contracts between sponsors, contract research organisations (CROs), data processors, and AI vendors, these principles will increasingly inform risk allocation, documentation requirements, and performance obligations.

3. Data protection

The principles’ focus on data integrity and transparency aligns with UK GDPR expectations around lawful processing, accuracy, automated decision‑making, and the handling of clinical data.

Why this matters in practice

Although the EMA–FDA principles are high‑level and non‑binding, they already influence how organisations design, validate, and oversee AI in drug development.

  • Clinical trial recruitment: AI tools used to identify eligible patients must show data provenance, bias controls, and human oversight, or risk delays to trial approval.
  • Safety monitoring: Machine‑learning models used in signal detection must be explainable and continuously validated, particularly where outputs affect patient safety or product labelling.
  • Manufacturing quality: AI that informs batch quality or predicts equipment issues must be well‑documented and validated, or manufacturers risk compliance findings.
  • Supplier assurances: Sponsors increasingly expect CROs and AI vendors to evidence data governance, model validation, and quality frameworks, creating competitive pressure across the supply chain.

Ultimately, given the broad nature of these principles, organisations deploying AI would do well to align their operations with them. Regulatory compliance is an obvious benefit, but alignment also signals maturity in global AI governance and a willingness to keep pace with AI development without losing sight of safety and responsibility.

Liability and risk management

The guiding principles are highly relevant to liability considerations under English law. Whether a claim arises under the Consumer Protection Act 1987 or in negligence, organisations must be able to demonstrate that reasonable care was taken where AI contributes to decisions such as clinical trial design or safety signal detection.

Human oversight, interpretability, and robust documentation can help evidence reasonable care, but they are not sufficient on their own. Organisations must integrate regulatory, contractual, technical, and operational safeguards to mitigate risk.

Data protection obligations under the UK GDPR and the Data Protection Act 2018 also intersect directly with the principles, especially where AI processes personal or special category clinical data.

Commercial and strategic considerations

The principles also have strategic significance:

  • Organisations frequently seek approvals across multiple jurisdictions, and early alignment with the EMA–FDA principles may reduce regulatory friction.
  • Investors and partners increasingly scrutinise AI governance when evaluating life sciences organisations.
  • Demonstrating compliance with widely recognised principles strengthens credibility with regulators, partners, and customers.

Comparisons to the EU AI Act

Although the influence of the EU AI Act can be sensed in the EMA–FDA principles, the two frameworks are not interchangeable. The EMA–FDA principles touch on similar themes (transparency, interpretability, life‑cycle management, and contextual risk) but they do so at a high, non‑binding level.

In contrast, the EU AI Act introduces prescriptive, enforceable compliance requirements. The EMA–FDA document remains guidance rather than a regulatory framework.

It is also important to recognise that the US federal approach continues to lean towards lighter‑touch or non‑regulatory oversight of AI, meaning the EMA–FDA alignment should not be mistaken for convergence with EU law.

What UK life sciences organisations should do now

The EMA–FDA principles represent a meaningful shift in how AI governance is being shaped at an international level, and life sciences organisations should treat them as an active compliance and commercial consideration — not simply a future-facing policy document.

In practical terms, organisations operating in the UK, EU, or US drug development space should now be taking three concrete steps:

  1. Audit existing AI use against the ten principles: Pay particular attention to data governance, model documentation, and human oversight, especially where AI outputs inform clinical decisions or regulatory submissions.
  2. Review contractual frameworks: Ensure that agreements with CROs, AI vendors, and data processors allocate risk in line with the documentation and validation standards the principles expect. Gaps here create legal and reputational exposure.
  3. Prepare for regulatory scrutiny: The MHRA and other authorities are increasingly aligning with international AI governance norms, even where those norms remain non-binding for now.

Browne Jacobson's life sciences and technology teams advise pharmaceutical companies, medicine developers, and AI vendors on AI governance, regulatory compliance, data protection, and contract risk across UK, EU, and US markets. If you would like to discuss how the EMA–FDA principles apply to your organisation's AI strategy, please get in touch.

Contact

Chris Holder

Partner

chris.holder@brownejacobson.com

+44 (0)330 045 1455

Saara Leino

Professional Development Lawyer

saara.leino@brownejacobson.com

+44 (0)330 045 1289
