Businesses are increasingly adopting AI tools to carry out functions traditionally performed by humans and/or non-AI tech. Construction companies are no different and are using AI tools across many parts of their businesses to increase productivity and add new proficiencies in areas such as their construction, consultation, operation and development services (e.g. site and staff management, reporting, procurement controls, project management processes, utilities management, and contract generation, review and analysis). Notwithstanding these benefits, the use of AI can lead to complex and at times unexpected disputes.
This article focuses on construction projects, which are typically governed by bespoke and multifaceted agreements that evolve as the project progresses. As a result, when construction projects run into difficulties, it can be challenging to ascertain which party is responsible for what has gone wrong, and to pinpoint the role AI may have played in causing the issue(s) giving rise to the dispute.
This article sets out the potential disputes that could arise due to AI use in construction projects and offers guidance on how to manage AI-related risks.
AI disputes in construction projects
Construction projects invariably involve participation from stakeholders at all points of the supply chain (construction developers, contractors, subcontractors, architects, purchasers, tenants, funders, etc.). Within the context of the overall project, there will be different contractual arrangements in place that govern the various relationships: for example, developers will have contracts in place with contractors, who in turn will have contracts in place with subcontractors and architects. The use of AI adds another layer to this contractual matrix and introduces an additional party to the chain – namely the AI developer.
The question of who is liable when construction projects go wrong is ultimately a factual one, but the introduction of AI makes answering this question far harder, due to the inherent complexity of AI tools. The difficulty of assessing who is responsible for the failure of a construction project that uses AI means that a whole range of time-consuming and expensive disputes can be triggered. For instance, did the project ultimately fail because of the data used by the AI developer to train the AI tool (the answer is likely to differ depending on whether that data was provided to the AI developer by the construction developer or by contractors)? Was it because the training methodology used by the AI developer was flawed or inadequate? Or was the failure due to an underlying issue with the hardware used in the tool itself? Moving higher up the chain, architects could be held liable for a failure to exercise sufficient oversight over the outputs that the AI tool produces before submitting designs to contractors, who in turn could be held liable by the construction developers for failing to properly implement the designs. It is also possible that different stakeholders will use one or more AI tools for their own workstreams on the project, or that some or all of the stakeholders will provide inputs or contributions to the operation of the different AI tools on the project. Accordingly, AI use increases the number of potential disputes associated with a single project, but it is important to bear in mind that the nature of these disputes largely depends on the project itself, what specifically goes wrong, and what is stated in the various liability and risk provisions of the relevant contracts.
How to manage risk in construction projects that use AI
It is crucial that the various stakeholders are clear about the primary objectives of the project, and the ways in which AI will be used to achieve these objectives. Contracting parties should ensure that there are comprehensive agreements in place at each stage of the supply chain to provide certainty in the event that something goes wrong, in particular in relation to any warranties and limitations of liability.
It is essential that the roles, responsibilities, expectations and risks of the various parties are contractualised and defined in detail from the outset of the project, so as to apportion liability throughout the chain with maximum clarity. This includes drafting very detailed and bespoke specifications for the AI tool, along with the related customer service requirements (e.g. Statements of Work) and any corresponding supplier solution responses that must be met by the relevant parties, all of which should be contractually tied to clearly identified timelines in a comprehensive implementation/project plan.
In the event that a construction project runs into difficulty, then it is likely that there will be a cascade of claims, with stakeholders seeking to recover their losses from the party next in the contractual chain. The claims will typically be for breach of contract arising from a failure to provide services in accordance with the express terms of the contract (e.g. by missing contractual milestones, or by the AI producing results which do not meet the contractual specifications for the AI tool itself or the operational use requirements for that AI tool), and/or with reasonable care and skill. Such claims may give rise to damages, termination rights and/or other contractual remedies specified in the contracts.
The use of AI means that typical liability frameworks may not be suitable. Parties contracting to use an AI tool in a construction project should ensure that, from the outset, the agreement includes AI-specific warranties, indemnities and limitation provisions. These terms should be tailored to the specific context in which the AI tool will be deployed and should be based on standards that are clearly measurable. This will likely involve drafting warranties which, whilst based on common service standards such as reasonable care and skill, respond to the fact that an AI tool is being utilised. Examples of such warranties include that: the AI tool should behave in the same way as a suitably capable and experienced human exercising reasonable skill and care in providing the service; its outputs will be monitored and reviewed by a suitably qualified human; and the AI developer uses a suitably diverse team to design and develop the AI software. Where the AI's outputs are subject to human oversight, the scope of such obligations should be clearly drafted, including identifying the required skills/experience of the individuals concerned, the nature of any training required, and the processes to be followed in testing, monitoring and reviewing the AI's outputs (e.g. testing and analysis of the AI outputs to be carried out on a monthly basis for the duration of the project). Record-keeping in relation to the review of decisions made by AI solutions can also be helpful in managing the risk associated with their use.
Notwithstanding the merits-based challenges in establishing the causes of an AI tool's failure, given the complexity of the tool's development, it is also important for parties to recognise and mitigate the risk that the AI developer (often a start-up) might not have sufficient assets available, or sufficient insurance coverage, to satisfy a claim.
The contractual issues referred to above are highly specific to the use to which the AI will be put, and so contracting parties should engage with their stakeholders, consultants, lawyers and other experts to help them navigate this complex and evolving area.