
The impact of agentic AI on English contract law

13 January 2026
Chris Holder and Brieanna McDonald

The emergence of agentic artificial intelligence (AI) systems capable of autonomous decision-making and action has significant implications for English contract law.

As these technologies evolve from passive tools to active participants in commercial transactions, questions arise about legal capacity, contractual formation, liability and the very nature of agreement itself. This article examines how agentic AI challenges traditional English contract law principles and explores the legal frameworks emerging to address these challenges.

Contractual capacity and agency

English contract law has long required that parties possess the legal capacity to enter binding agreements. Traditionally, this capacity has been reserved for natural persons and certain legal entities such as corporations. Agentic AI systems, however, operate with increasing autonomy, negotiating terms, accepting offers and executing agreements without direct human intervention. This raises the question: can an AI agent possess contractual capacity?

Under current English law, AI systems lack legal personality and cannot themselves be parties to contracts. As Chitty on Contracts observes, only "persons" recognised by law as having legal personality may enter into binding agreements. Instead, contracts formed through AI agents are attributed to the natural or legal persons who deploy them, based on principles of agency law established in cases such as Freeman & Lockyer v Buckhurst Park Properties (Mangal) Ltd [1964] 2 QB 480. The AI acts as an instrument of its principal, much like a traditional agent operating under actual or apparent authority.

However, this framework becomes strained when AI systems operate with such autonomy that attributing their actions to human principals becomes conceptually difficult. The unpredictability of machine learning algorithms, which may produce outcomes their designers neither intended nor foresaw, challenges the traditional requirement that agents act within the scope of their authority. Leading commentators have argued that the current agency framework “is a poor foundation” for regulating AI behaviours and risks (Oliver 2021).

The framework assumes human-directed intention and control, making it a weak practical and conceptual fit for autonomous or algorithmic systems when compared with alternative mechanisms (Bayern 2021). Taken together, these analyses reinforce that the current approach, based on human agency and oversight, may become inadequate once AI systems act independently of contemporaneous human direction or intention.

Formation and intention to create legal relations

Contract formation requires offer, acceptance, consideration and an intention to create a legal relationship. When AI agents negotiate and conclude agreements, determining whether these elements are satisfied becomes complex. If an AI system autonomously generates an offer based on market conditions, can this constitute a valid offer? 

English courts have historically adopted an objective approach to contractual intention, as established in Smith v Hughes (1871) LR 6 QB 597, asking whether a reasonable person would understand the parties to intend legal consequences. This objective test may accommodate AI-generated offers and acceptances, provided the system operates within parameters established by parties who do intend to create or enter into a legal relationship.

Nevertheless, difficulties arise with wholly autonomous AI transactions where no human reviews the terms before conclusion. The "battle of the forms" (traditionally addressed in cases like Butler Machine Tool Co Ltd v Ex-Cell-O Corporation (England) Ltd [1979] 1 WLR 401) becomes a battle of algorithms, with AI agents potentially creating contracts that no human has read or approved. This challenges the notion of genuine assent, creating uncertainty as to whether contracts formed wholly by autonomous systems can satisfy the intention and agreement requirements for enforceability. As Treitel's The Law of Contract (15th edition, 2020) notes, the requirement of "a meeting of minds" between the parties to a contract becomes problematic when neither party has actual knowledge of the agreed terms.

Distinguishing agentic AI from electronic data interchange systems

A critical distinction must be drawn between agentic AI and traditional Electronic Data Interchange (EDI) systems, which have facilitated automated contract formation for decades. EDI systems operate as passive conduits for pre-programmed transactions: they execute contracts based on predetermined rules and parameters established by human operators, functioning essentially as sophisticated communication tools. English law has no dedicated regime for EDI, but such exchanges can take place and take legal effect under existing contract law principles.

The legal treatment of EDI has been relatively straightforward: contracts formed through EDI are valid provided the parties intended the system to have legal effect, as confirmed in the UNCITRAL Model Law on Electronic Commerce (adopted in various forms across jurisdictions). The system merely automates the communication of human decisions rather than making decisions itself. Crucially, EDI transactions are predictable and traceable to specific human instructions.

Agentic AI, by contrast, employs machine learning and adaptive algorithms that enable autonomous decision-making. These systems are now capable of negotiating novel terms, responding to unforeseen circumstances and reaching outcomes not explicitly programmed by their operators. 

This autonomy creates legal uncertainty absent in EDI contexts: whilst EDI liability clearly rests with the party who programmed the system's parameters, agentic AI may produce results that no human anticipated or authorised. 

The Law Commission's 2021 report on smart contracts acknowledged this distinction, noting that truly autonomous systems raise questions about attribution of contractual intention that do not arise with traditional automated systems. This fundamental difference may ultimately require distinct legal treatment for agentic AI beyond the frameworks developed for EDI.

Mistake, misrepresentation and algorithmic error

Agentic AI systems may malfunction or produce erroneous outputs due to programming errors, corrupted data or adversarial manipulation. When an AI agent concludes a contract based on such errors, traditional doctrines of mistake and misrepresentation must be reconsidered. If an AI system materially misrepresents facts during negotiations, can the contract be rescinded? 

Under English law, actionable misrepresentation requires a false statement of fact that induces the contract, as established in Redgrave v Hurd (1881) 20 Ch D 1. Attributing the AI's statement to its principal would likely satisfy this requirement, but questions remain about the principal's state of mind and whether they can be said to have 'made' a representation they were unaware of.

The doctrine of common mistake, as refined in Great Peace Shipping Ltd v Tsavliris Salvage (International) Ltd [2002] EWCA Civ 1407, may apply where both parties' AI agents operate under shared erroneous assumptions. However, unilateral mistake, particularly relevant in cases like Hartog v Colin & Shields [1939] 3 All ER 566, would typically not void the contract unless the other party knew of the mistake. This becomes particularly difficult when dealing with autonomous systems, where knowledge must be attributed to a principal rather than held directly.

The Law Commission's 2021 report Smart Legal Contracts: Advice to Government acknowledged these challenges, noting that existing legal principles can generally accommodate automated contract formation but recommending clarification in certain areas to provide commercial certainty.

Liability and remedies

When AI agents breach contractual obligations, liability falls, under the principle of vicarious liability, upon the principals who deployed them. However, determining appropriate remedies becomes complicated when breaches result from autonomous AI decisions. Should damages be assessed differently when a breach stems from algorithmic unpredictability rather than human choice? The traditional measure of damages in Hadley v Baxendale (1854) 9 Exch 341 (compensating for losses reasonably foreseeable at the time of contracting) may require reconsideration when AI systems create unforeseen consequences.

English courts have not yet developed distinct principles for AI-related breaches, instead applying traditional remedies. This approach may prove inadequate as AI systems become more autonomous and their decision-making processes more opaque. The 'black box' problem of AI systems raises particular difficulties for establishing causation and foreseeability, essential elements in breach of contract claims.

Regulatory and legislative responses

Recognising these challenges, policymakers are beginning to address agentic AI in contract law.

The Law Commission of England and Wales has examined digital assets in its 2023 report Digital Assets: Final Report and smart contracts in its 2021 advice to government, concluding that English law's flexibility and technology-neutral principles generally accommodate these innovations. However, the Commission recommended targeted statutory reform or clarification in specific areas, such as the legal categorisation of digital assets and the interpretation and deed formalities applicable to smart contracts, to enhance legal certainty.

The European Union's AI Act, whilst not directly applicable to England post-Brexit, influences thinking about AI regulation and liability frameworks. Some academic commentators have explored whether sophisticated AI agents should receive distinct legal status, whilst others argue that existing principles of agency and attribution remain sufficient with modest adaptation.

Conclusion

Agentic AI poses significant challenges to English contract law's ‘human-centric’ jurisprudence. Whilst current legal frameworks can accommodate some AI-mediated transactions through agency principles and the objective theory of contract formation, truly autonomous AI systems strain traditional concepts of capacity, intention and agreement. Machines making actual ‘decisions’ by themselves, without human involvement, take English contract law into new territory.

The Law Commission has recognised that English law's flexibility provides a foundation for addressing these challenges, but gaps remain, particularly concerning liability for algorithmic errors, the attribution of knowledge and intention, and remedies for AI-related breaches.

As these technologies become more common in commercial practice, English law must evolve, either by extending existing doctrines through judicial development or by creating new statutory frameworks specifically for AI agents. The path chosen will shape not only contract law but the broader relationship between law, technology and human autonomy in the digital age.

Given English law's historical adaptability and the common law's incremental development through case law, a hybrid approach combining judicial innovation with targeted legislative intervention appears most likely to succeed in addressing the challenges posed by agentic AI whilst preserving the coherence and predictability essential for commercial certainty.

Contact

Chris Holder

Partner

chris.holder@brownejacobson.com

+44 (0)330 045 1455


Brieanna McDonald

Trainee Solicitor

brieanna.mcdonald@brownejacobson.com

+44 (0)330 045 1016

