AI - where does the liability lie?
Emma Stevens, a dispute resolution specialist at law firm Coffin Mew, explains who should be responsible when AI goes wrong
Artificial intelligence (AI) is transforming the world we live in; by using algorithms to analyse information, recognise patterns and draw insights from data, AI can work faster, and in some tasks more accurately, than humans.
AI and robotics are becoming increasingly incorporated into our daily lives and the potential for associated claims is also increasing.
Sectors that have embraced AI include law, construction and banking, and there is no sign that this trend will abate any time soon.
Professional and medical services:
There is no doubt that AI is already transforming the legal profession, and the same is true of many other professional and medical services.
One example of developments in the legal arena is the "Do Not Pay" consumer law service created by 20-year-old student Joshua Browder. The robot lawyer service advertises itself as assisting with legal disputes in over 1,000 different areas, including fighting rogue landlords over repair works and security deposits, challenging parking fines and reporting credit card fraud. It appears that, by responding to a series of questions from the "Do Not Pay" chat bot, consumers can access a range of services, from drafted letters to advice on a variety of issues.
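At its simplest, a chat bot of this kind is a scripted question flow that slots the user's answers into a template document. The Python sketch below is a hypothetical illustration of that pattern; the questions and letter wording are invented for the example and are not the "Do Not Pay" service's actual implementation.

```python
# A minimal sketch of a rules-based intake chat flow. The dispute type,
# questions and letter template are hypothetical illustrations, not the
# "Do Not Pay" service's actual implementation.

QUESTIONS = {
    "parking": [
        "Where was the vehicle parked?",
        "What reason was given on the ticket?",
        "Why do you believe the fine is unfair?",
    ],
}

def draft_letter(dispute_type: str) -> str:
    """Ask a fixed series of questions, then assemble a draft letter."""
    answers = [input(q + " ") for q in QUESTIONS[dispute_type]]
    return (
        "Dear Sir or Madam,\n"
        f"I wish to challenge a parking fine issued at {answers[0]}. "
        f"The stated reason was: {answers[1]}. "
        f"I contest the fine because {answers[2]}.\n"
        "Yours faithfully"
    )

if __name__ == "__main__":
    print(draft_letter("parking"))
```

The point of the sketch is that no legal judgement happens inside the code: the system simply routes answers into a template, which is precisely why questions of liability for the output become awkward.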
Whilst the benefits of integrating AI into professional services in this way seem clear, in providing lower-cost access to routine services, difficulties are likely to arise when it comes to establishing who is liable for the professional advice given.
In the medical arena there are examples of AI systems, such as IBM Watson, being developed to assist clinicians with a range of services, including assessing test results and potential treatment options. The system's knowledge is derived from assessment of vast quantities of pre-existing data sets.
Traditionally, in both the professional and medical services sectors, where an error causes loss or damage to a client or patient there is usually a clear potential defendant to the claim: the party which made the error. In the case of an AI system, the position is potentially less straightforward.
If an AI system gives advice or recommends treatment which is incorrect, and which causes loss or damage, there are a number of potential defendants to the claim. Depending upon the facts and the contractual arrangements in place, liability could in theory rest with the manufacturer, the owner, the operator, the producer of the data sets from which the system's knowledge derives and/or any third party who has amended the original program.
Technology in construction:
Technology is similarly transforming the construction industry. Some construction companies are now harnessing virtual reality to conduct walkthroughs with clients, simulating the end product. Balfour Beatty has predicted that drones will replace humans in the construction sites of the future, by constantly scanning and gathering data to identify and prevent problems before they occur.
Another area of development is prefabrication: the process of pre-assembling components of a structure in a factory and transporting the assembled components to the construction site.
Some prefabrication systems now allow anyone involved in a project to monitor the progress in manufacturing and establish a potential timeframe for completion. The system is similar to that used in online shopping whereby a buyer can track his or her parcel and monitor arrival times. These developments assist greatly with planning what materials are needed and when, reducing delays often associated with construction projects.
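As a rough illustration, such a tracker needs little more than an ordered list of production stages and a simple completion estimate. The Python sketch below is hypothetical; the stage names and the fixed days-per-stage estimate are assumptions for the example, not any vendor's actual system.

```python
# A minimal sketch of parcel-style progress tracking for prefabricated
# components. The stages and the days-per-stage estimate are illustrative
# assumptions, not a real manufacturer's workflow.

from datetime import date, timedelta

STAGES = ["ordered", "in_fabrication", "quality_check", "in_transit", "delivered"]

class Component:
    def __init__(self, name: str, days_per_stage: int = 3):
        self.name = name
        self.stage = 0
        self.days_per_stage = days_per_stage

    def advance(self) -> None:
        """Move the component to the next production stage."""
        self.stage = min(self.stage + 1, len(STAGES) - 1)

    def status(self) -> str:
        return STAGES[self.stage]

    def estimated_delivery(self) -> date:
        """Estimate arrival from the number of stages still to complete."""
        remaining = len(STAGES) - 1 - self.stage
        return date.today() + timedelta(days=remaining * self.days_per_stage)

wall_panel = Component("wall panel A3")
wall_panel.advance()  # now "in_fabrication"
print(wall_panel.name, wall_panel.status(), wall_panel.estimated_delivery())
```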
Such advances in technology are clearly innovative and efficient but, if humans were to be replaced in the assessment of construction site safety and progress, there is likely to be a question as to who is liable if construction does not go as planned or an incident occurs. Would that liability rest with the manufacturer of the AI system, its operator or the owner of the site? As with the examples of AI in professional services above, who bears ultimate liability will unfortunately depend on the facts and the contractual arrangements in place in each case.
Robotics in banking:
In the banking sector, robotics are also reported to have been particularly beneficial: they reduce human error and help ensure regulatory compliance. Many banking tasks can be automated, for instance mortgage approval or the processing of credit card orders; these types of automation are often referred to as bots.
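At their simplest, such bots encode a bank's lending rules as explicit, auditable checks. The Python sketch below is a hypothetical illustration; the affordability thresholds are invented for the example and are not any bank's actual lending criteria.

```python
# A minimal sketch of a rules-based pre-approval check of the kind a
# mortgage bot might run. The 4.5x loan-to-income and 40 per cent
# debt-to-income thresholds are hypothetical, not real lending criteria.

def affordability_check(income: float, loan: float, existing_debt: float) -> bool:
    """Apply simple, auditable rules; marginal cases go to a human underwriter."""
    loan_to_income = loan / income
    debt_to_income = existing_debt / income
    return loan_to_income <= 4.5 and debt_to_income <= 0.4

print(affordability_check(income=40_000, loan=150_000, existing_debt=5_000))  # True
```

Because the rules are explicit, every decision the bot makes can be traced back to a threshold, which is part of the appeal of such automation in a heavily regulated sector.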
BNY Mellon is investing heavily in bots and uses them to handle tasks such as transferring funds. The bank has reported that the use of bots has led to an overall saving of around $300,000 and has significantly reduced processing times.
Developments in this sector demonstrate that robots can do more than process transactions: they can process data, handle back-office work and improve the customer experience.
The Bank of Tokyo has in-branch robots which can communicate in 19 different languages, and Citibank and others reportedly now use robots to detect fraudulent activity by reviewing customers' spending history. These uses clearly reduce staffing costs and improve customer accessibility, but the systems still have limitations and it seems likely that errors will still require human intervention to resolve.
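One common way to review spending history automatically is to flag transactions that fall far outside a customer's usual pattern. The Python sketch below is a hypothetical illustration of that idea; the z-score approach and the threshold are assumptions for the example, not a description of Citibank's actual system.

```python
# A minimal sketch of spending-history anomaly detection. The z-score
# method and the threshold of 3 are illustrative assumptions, not a
# real bank's fraud model.

from statistics import mean, stdev

def is_suspicious(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction that sits far outside the customer's usual spend."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [12.50, 8.99, 45.00, 22.10, 15.75, 9.30]
print(is_suspicious(history, 18.00))   # False: in line with past spending
print(is_suspicious(history, 950.00))  # True: far outside the usual range
```

A false flag here merely inconveniences a customer, while a missed one lets fraud through, which is why, as noted above, errors still require human intervention to resolve.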
The future:
The arguments regarding liability in the event of error or incident are beginning to expand. As developments continue, and the use of AI becomes more mainstream, there will increasingly be cases which call into question who has liability for the systems in use.
The majority of the existing legislation and case law on liability and the duty of care in negligence significantly pre-dates the ongoing robotics revolution. The legal system clearly has a lot of ground to cover before it can effectively regulate such advances, and the existing law will need to be adapted to situations in which the role and impact of AI and robotics did not previously need to be considered.
Businesses would be sensible to keep abreast of technological advances in the sectors in which they operate, to ensure that their contracts are clear regarding liability (both generally and for AI specifically) and that, where appropriate, they have adequate insurance in place for any systems used.
Emma Stevens is Associate Solicitor - Dispute Resolution, at law firm Coffin Mew