The rapid development of artificial intelligence (AI) technology leads us to question whether the English civil liability rules of contractual and extra-contractual liability for loss and damage caused by AI are fit for purpose.
Failure or disruption of AI systems can lead to catastrophic risks and liabilities. The UK government’s policy paper on AI states that legal liability for AI outcomes must always rest with “an identified or identifiable legal person – whether corporate or natural”. This position resonates with the Law Commission’s recommendation that liability for self-driving vehicles should shift from the driver to the manufacturer and software developer. Hence, the allocation of risk and liability within the AI chain is important to establish liability in the event of a dispute.
Contractual liability
The central provisions of contract law are flexible enough to provide a workable framework of liability. However, contractual rules that predate the development of AI technology do not provide sufficient protection for users of AI, particularly because they lack mandatory consumer protection rules, implied terms, and minimum service standards suited to AI.
Contractual liability limitations and AI
The principle of freedom of contract in English law allows the parties to a contract to agree or exclude liabilities for breach as they see fit. That principle is partly limited in B2B contracts, and most particularly in B2C contracts, by the imposition of minimum implied terms of fitness for purpose and satisfactory quality, which in B2C contracts are mandatory and cannot be excluded. Agreed contractual terms, if broken, give the party suffering the breach a remedy in an action in damages for the foreseeable loss suffered, or for specific performance of the obligations breached. Contract law can extend specific warranties relating to performance and liability for defects in AI products and services to the contracting parties or to other classes of users, operators, or persons adversely affected by defective AI.
Generally, a claim in contract is limited to claims brought by the named parties. However, the persons entitled to bring claims can be extended to named persons or classes of persons under the Contracts (Rights of Third Parties) Act 1999; through the principles of vicarious liability; or by the common law extension of voluntary assumption of liability, as illustrated by Volvo’s announcement that it would accept liability for harm caused by its cars when operated autonomously.
From a consumer perspective it is comforting that B2C contracts contain mandatory rules imposing minimum implied terms that a product will be fit for purpose and of satisfactory quality, and that a service will be delivered with reasonable care and skill. However, as AI opens new frontiers for automated services in areas such as financial services, robotic process automation, and AI as a service, the terms that the law will imply in these sectors have not caught up with technology that moves at a much faster pace than the law. Unfortunately, the UK does not yet have any detailed plans to implement legislation to fill this liability gap.
Extra-contractual liability
This liability exists in three parts: liability in the law of negligence; product liability; and liability for breach of statutory duty. Negligence is the liability that stems from conduct failing to conform to a required standard. Product liability provides for the liability of manufacturers of defective products and of those in the supply chain. Civil claims for breach of product safety law give consumers, and others suffering special losses, the right to claim for losses arising from breach of regulations intended to protect consumers or to promote safe products.
Negligence limitations and AI
The UK, like most civil law jurisdictions, including France and Germany, imposes extra-contractual liability in negligence for conduct that fails to conform to a required standard. Under the English law of negligence, where a claimant can show (a) the existence of a duty of care, (b) a breach of that duty, and (c) that the breach caused foreseeable consequential damage that is not too remote, the claim is likely to succeed.
While in some ways the law of negligence can be adapted to the liability issues raised by developing AI technology, it also has limitations. First, the standard of care is the human standard that would apply to the average reasonable person in the same situation; AI has no human comparator. Second, the foreseeability of damage in human terms has no direct comparator in AI terms. Third, claims for pure economic loss are generally not recoverable in the law of negligence.
Product liability law limitations and AI
The European Union adopted the 1985 Product Liability Directive (PLD), which was transposed into UK law by the Consumer Protection Act 1987. Under the PLD, producers of products and other intermediaries are strictly liable to persons who are injured, or whose personal property is damaged, by a product, if it can be proved that the product is defective. A product is considered defective when it does not provide the safety that a person is entitled to expect, taking into account: (a) the presentation of the product; and (b) the use to which it could reasonably be put at the time the product was put into circulation. “Products” are defined as “any goods or electricity”, including products integrated into other products “whether as a component part, raw materials or otherwise”.
There are several significant limitations on the PLD remedies available to persons injured, or whose property is damaged, by AI:
It is doubtful that the definition of “product” extends to software and AI unless the software or AI is embedded in hardware forming part of the product. It is far more likely that AI will be excluded from the product liability regime in most cases because it has more of the characteristics of a service than of a product: AI generates custom-made output based on individual input from a user.
PLD claims are built on the assumption that a product does not continue to change in unpredictable ways once it has been manufactured. Liability is specifically excluded if the defect did not exist when the product was put into circulation, or where the state of scientific and technical knowledge at the time the product was put into circulation was not such as to enable the defect to be discovered.
The PLD regime is not universal: it does not apply to defective services, and, in the UK, it is not linked as closely as it should be to a rapidly evolving product safety regime that takes account of developments in AI technology.
In PLD claims the claimant must prove that the product was defective when it was put on the market, which, in the case of liability arising from AI, can be very complicated. With the two main types of AI system, artificial neural networks and probabilistic Bayesian networks, it is very hard to determine how or why a machine learning system made a particular decision. Without disclosure of the detailed workings of the AI system, and without any rebuttable presumption of liability where there is a clear link between the AI and the loss and damage, this can set an impossibly high barrier to bringing a claim.
The limitations described above mean the PLD is not fit for purpose in providing adequate legal protection to those affected by AI outcomes.
Civil claims for breach of product safety law and AI
As noted above, civil claims for breach of product safety law give consumers, and others suffering special losses, the right to claim for losses arising from breach of regulations that protect consumers or promote safe products. The EU has substantial regulations in place to ensure consumer protection and product safety and to prevent harm to businesses and individuals. English law provides only a limited and restricted right to bring civil claims in damages for breaches of those regulatory laws. Claims can be brought only where the regulatory law does not provide penalties for breach or other means of enforcement, and even then the right is limited to a small class of potential claimants.
Summary
The civil law entitlement to claim damages for loss suffered as a result of a breach of product safety regulations is very limited, as are the remedies available to persons injured, or whose property is damaged, by AI.
Not all product safety laws permit civil enforcement.
They generally do not apply to the provision of defective services.
The UK product safety regime is falling behind the development of technology, particularly in relation to AI. Following Brexit, the UK has no plans to adopt any of the forthcoming changes in product and service safety law in relation to AI or other digital products or services. This contrasts with the approach of the EU, which is progressing with its Artificial Intelligence Act, the proposed EU Machinery Regulation, the proposed EU General Product Safety Regulation, the EU Digital Markets Act and the EU Digital Services Act.
Takeaways
The takeaways from this description of civil liability law and artificial intelligence are:
The UK regime is not fit for purpose for AI liability claims in contract, negligence, product liability or product safety.
Post-Brexit, the UK has become a rule-taker, not a rule-maker, in its biggest export market. The lack of development of UK law to deal with AI liabilities means UK businesses will have to comply with one set of looser standards in their relatively small home market and stricter standards in their relatively larger EU export market. The reverse will be the case for EU exporters to the UK, putting EU exporters at a comparative advantage.
Contracts should address the risk and cost allocation for defective AI products or services to minimise AI providers’ or users’ exposure to liability.
Next articles in this series
In our upcoming article, we will discuss how the EU has taken the lead on AI liability with its proposed laws on product liability fit for the AI age. We will also discuss the measures that businesses within the AI industry need to take to manage AI liability risk in accordance with the proposed EU product liability regime.
This article is based on European Product Liabilities (Butterworths) edited by Patrick Kelly and Rebecca Attree.
If you have any questions or require advice, please reach out to Paddy Kelly or Carmen Yong in our Corporate & Commercial Department.