The increased use of artificial intelligence (AI) is revolutionising the way businesses operate and is having a disruptive impact in sectors that have traditionally been slow to modernise. In the legal sector, for example, AI is being used to streamline tasks such as legal research and bundling, and in some jurisdictions it is being used to assist judges in reaching decisions. The benefits of investing in AI are clear: it can increase productivity, add new capabilities and scalability, and be tailored to create industry-specific value. Notwithstanding these benefits, the use of AI in technology projects can lead to complex and at times unexpected disputes. Technology projects are typically governed by bespoke agreements that evolve as the project progresses. As a result, when a project runs into difficulties it can be challenging to ascertain what has gone wrong and to pinpoint the role AI may have played in causing the issue(s) giving rise to the dispute. This article sets out some common AI-related issues in technology project disputes and offers guidance on how to prevent them.
Machine learning (a type of AI whereby algorithms and statistical models are used to analyse and draw inferences from patterns in data) is being utilised in a greater variety of technology projects. These systems use neural networks and require significant technical resources to run and train, and to generate decisions, predictions and analogies. Legacy systems and standard graphics processing units are unlikely to be able to keep pace with the growing demand for AI tools and services, increasing the need for customers and suppliers who want to use AI to invest in specially designed AI hardware, such as Graphcore’s recently released “Bow” AI chip, which can speed up performance for machine learning tasks by 40%. The need for specialist hardware can increase the costs associated with the use of AI and can also increase the likelihood that a dispute will arise, as there are more opportunities for something to go wrong at the manufacturing, shipping and installation stages. This can very easily result in delays to the achievement of key project milestones, which could in turn trigger remedies such as delay payments and, in the most extreme cases, termination of the contract.
Technology projects will often involve multiple stakeholders. A consequence of the additional steps required to implement AI solutions is that it can be difficult to identify who is responsible for failures and delays, whether it be the supplier, customer, a third party or a combination. Where the parties’ roles and responsibilities are not well defined from the outset, a whole range of disputes can be triggered. These will typically be breach of contract disputes relating to the agreement in place between the supplier and the customer, and the various contracts which may be in place with third parties who are contracted to manufacture, ship and install specially designed AI hardware.
Whilst the mapping process used by AI systems is automated, the design, the initial training data and the further data that is passed through the system to train the model are controlled by human operators. As a result, humans have visibility over the data that the AI tool starts with (the input), as well as the results that the tool produces (the output). What happens in between is often referred to as the “black box problem”: once the input data has been added, it can be very difficult to alter the underlying AI system, and there will often be no transparency or explanation as to how or why the model produced a specific output.
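By way of illustration only, the short Python sketch below (using numpy and an entirely hypothetical toy dataset) shows this in miniature: the inputs and outputs are fully visible, but the only “explanation” available for any given prediction is two matrices of learned numbers.

```python
# A deliberately tiny, hypothetical example of the "black box" problem:
# a two-layer neural network trained on a toy dataset with numpy.
# The input and output are fully visible, but the learned weights in
# between offer no human-readable explanation for any given prediction.
import numpy as np

rng = np.random.default_rng(0)

# Visible input: four examples with two features each (the XOR problem).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for one small hidden layer.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Simple full-batch gradient descent training loop.
for _ in range(10_000):
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)
    delta_out = (output - y) * output * (1 - output)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ delta_out
    W1 -= X.T @ delta_hidden

# Visible output: predictions should approach the expected 0, 1, 1, 0 ...
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
# ... but the only "explanation" is the two matrices of learned numbers.
print(W1)
print(W2)
```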
This lack of transparency can create complex liability issues, especially as the main AI tools and services being deployed in technology projects at present are autonomous enough to function without much human intervention. It is therefore far more difficult to determine who bears responsibility when loss occurs as a direct result of an AI system failing; the failure may result from a technical malfunction or from human error in the way in which the model was designed and/or trained.
Human involvement in the creation of the AI tool or service means that there is a strong possibility that human bias will be reflected in the input data, the training data, and in the outputs that the AI system produces. Algorithm bias can arise in a number of different settings and can result in various forms of discrimination. In a financial services setting, bias in AI systems could lead to errors in asset valuations, which could in turn lead to missed investment opportunities. There have also been several high-profile examples of discrimination arising from the use of facial recognition technology, where the accuracy of the technology varies with characteristics such as gender and ethnicity, and from the use of social scoring systems.
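As a purely illustrative sketch (the data, feature names and figures below are all invented), the following Python example shows how bias in historic decisions can be reproduced by a model trained on them, even where no one intends the model to discriminate.

```python
# Hypothetical illustration of algorithm bias: a model trained on biased
# historic decisions learns to reproduce that bias in its own outputs.
# All data, feature names and figures are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# One legitimate feature (income, in thousands) and one protected
# characteristic (group), which ought to be irrelevant to the decision.
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)

# Historic decisions were biased: applicants in group 1 were sometimes
# refused even when their income alone would have justified approval.
historic_approved = (income > 45) & ~((group == 1) & (rng.random(n) < 0.4))

features = np.column_stack([income, group])
model = LogisticRegression().fit(features, historic_approved)

# The trained model reproduces the historic bias in its new "decisions".
new_decisions = model.predict(features)
for g in (0, 1):
    rate = new_decisions[group == g].mean()
    print(f"approval rate for group {g}: {rate:.2f}")
```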
It is crucial that customers and suppliers are clear about the primary objectives of the project, and the ways in which AI will be used to achieve these objectives. Contracting parties should ensure that there is a comprehensive agreement in place to govern the project and to provide certainty in the event that something goes wrong, in particular in relation to any limitations of liability. At a minimum, the agreement should include detailed provisions covering confidentiality, liability, data protection, and IP. In the absence of dedicated AI legislation these areas are, at present, free to be negotiated. It should be noted, however, that contractual standards are emerging: IP clauses, for instance, will invariably provide that the supplier owns the IP in the AI, whilst the customer has a licence to use it.
Issues related to specially designed AI hardware can be mitigated by ensuring that all parties are clear about their respective roles and about who bears the risk. The agreement should also set out the governance structure that will underpin the project and make clear how third-party relationships and dependencies will be managed and by whom.
The use of AI means that typical liability frameworks may not be suitable. Parties contracting to use an AI tool or service in a technology project should ensure that, from the outset, the agreement includes AI-specific warranties, indemnities and limitation provisions. These terms should be tailored to the specific context in which the AI tool or service will be deployed: the more general the term, the harder it will be to determine whether a particular standard has been met and to establish liability. Common service standards, such as reasonable skill and care, are unlikely to be sufficient where AI is involved because the output is too dissociated from the initial human input (the black box problem referred to earlier). Contracting parties will therefore need to draft novel warranties stipulating, for example, that the AI tool should behave in the same way as a suitably capable and experienced human exercising reasonable skill and care in providing the service. Likewise, a buyer of an AI solution may wish to include warranties that the dataset used to train the AI is not discriminatory and does not breach data protection principles, and that the AI will not infringe third-party IP rights. Limitation of liability and indemnity provisions should be given close scrutiny at the contract negotiation stage, to ensure that one party does not bear disproportionate liability in the event of a catastrophic failure of the AI giving rise to significant third-party claims. These issues are highly specific to the use to which the AI will be put, and contracting parties should therefore engage with their stakeholders, consultants, lawyers and other experts to help them navigate this complex area.
Contracting parties also need to be aware of the risks associated with algorithm bias and should look to establish internal guidelines, processes and controls that minimise and address the discrimination and data privacy risks that may result from biased AI systems. An effective internal governance framework can assist in identifying any unjustifiable biases from initial data sets and may also flag any discriminatory outputs.
Whilst there is general agreement that AI regulation is required, there is not, as yet, a broad consensus on how this should be achieved.
On 21 April 2021, the European Commission published the Draft AI Regulation. Instead of opting for a blanket regulation covering all AI systems, the EU chose to adopt a risk-based approach which differentiates between unacceptable-risk, high-risk and low-risk uses of AI. The level of regulatory intervention varies depending on which category of risk the AI tool or service falls into. Both customers and suppliers should familiarise themselves with the Draft Regulation and determine whether they are using, or intend to use, AI which carries an unacceptable risk (in which case use of the AI tool or service will be prohibited), or whether they are likely to be affected by the requirements for the provision of high-risk AI systems. Failing to account for the Draft Regulation could result in penalties for non-compliance of up to 6% of global annual turnover or EUR 30 million, whichever is greater.
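As a simple worked illustration of the “whichever is greater” mechanism (the turnover figures below are hypothetical):

```python
# Hypothetical illustration of the Draft Regulation's maximum penalty:
# the greater of 6% of global annual turnover and EUR 30 million.
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    return max(0.06 * global_annual_turnover_eur, 30_000_000)

# Turnover of EUR 200m: 6% is EUR 12m, so the EUR 30m figure applies.
print(max_penalty_eur(200_000_000))    # 30000000.0
# Turnover of EUR 1bn: 6% is EUR 60m, which exceeds EUR 30m.
print(max_penalty_eur(1_000_000_000))  # 60000000.0
```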
The territorial scope of the Draft Regulation centres on whether the impact of the AI system occurs within the EU – not on the location of the AI provider or user.
On 18 July 2022, the UK Government published a policy paper titled ‘Establishing a pro-innovation approach to regulating AI’. This paper builds on the UK’s National AI Strategy and sets out the Government’s proposals on the future regulation of AI in the UK. The UK approach to AI regulation seeks to demonstrate the Government’s pro-innovation regulatory position post-Brexit, as it aims to be proportionate, light-touch and forward-looking, with the hope that this will drive innovation and investment. Whilst the EU’s Draft AI Regulation categorises AI use by risk level, the UK is not seeking to group specific uses of AI, instead choosing to rely on six "core principles" that will require AI developers and users to: ensure that AI is used safely; ensure that AI is technically secure and functions as designed; make sure that AI is appropriately transparent and explainable; consider fairness; identify a legal person to be responsible for the AI; and clarify routes to redress or contestability.
Moreover, in contrast to the centralised approach adopted by the EU, the UK’s proposal will allow different regulators, such as Ofcom, the Competition and Markets Authority (CMA) and the Information Commissioner’s Office (ICO), to interpret and implement the six core principles, and to adopt a tailored approach to the growing use of AI in a range of sectors.
Developers and suppliers of AI systems should also be aware of the Digital Regulation Cooperation Forum’s (DRCF) recently published paper on algorithmic processing (which includes AI applications, such as those powered by machine learning). The paper notes that whilst algorithmic processing has the potential to deliver significant benefits, it also poses several risks if not managed responsibly. To manage these potential harms, the DRCF suggests that a more hands-on, cooperative intervention strategy could be achieved through the increased use of regulatory sandboxes, such as the Financial Conduct Authority’s (FCA) Digital Sandbox. Where this strategy proves ineffective, the paper suggests that regulators can resort to using their powers to take enforcement action against those who have not complied with the regulations and have caused harm.
Parties contracting to use an AI tool or service in a technology project should therefore endeavour to keep abreast of the latest AI regulatory developments to ensure that they understand their roles and responsibilities, and are aware of the level of regulatory intervention that they might face.
Customers and suppliers who recognise and mitigate the potential risks of AI use in a technology project, by taking the steps outlined above, are likely to be the ones who reap the many benefits that AI offers. Those who fail to take early action to protect themselves contractually against AI risks could easily find themselves in a technology dispute over issues similar to the ones identified in this article, as well as in more project-specific disputes.
Likewise, it is essential that parties consider the various proposals for AI regulation outlined above at the early stages of the project, to determine the applicable regulatory framework and to assess whether contractual changes are needed to account for developing regulation of AI.
If you need advice in relation to the issues discussed in this article, please contact Sophie Ashcroft.