Why Courts Need ‘Explainable AI’ When Self-Driving Cars Crash

The first serious accident involving a self-driving car in Australia happened in March this year. A pedestrian suffered life-threatening injuries when he was hit by a Tesla Model 3, which the driver said was in “autopilot” mode.

In the United States, the traffic safety regulator is investigating a series of accidents in which Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

A Tesla Model 3 collides with a stationary emergency response vehicle in the United States. NBC/YouTube

The decision-making processes of “self-driving” cars are often opaque and unpredictable (even for their manufacturers), so it can be difficult to determine who should be held responsible for incidents like these. However, the growing field of “explainable AI” may help provide answers.

Who is liable in the event of a self-driving car accident?

Although self-driving cars are new, they are still machines made and sold by manufacturers. When they cause damage, we need to ask whether the manufacturer (or software developer) has met its safety responsibilities.

Modern negligence law comes from the famous Donoghue v Stevenson case, in which a woman discovered a decomposed snail in her bottle of ginger beer. The manufacturer was found negligent, not because it was expected to directly predict or control the behaviour of snails, but because its bottling process was unsafe.

By this logic, manufacturers and developers of AI-based systems like self-driving cars may not be able to predict and control everything the “self-driving” system does, but they can take steps to reduce the risks. If their risk management, testing, auditing and monitoring practices are not good enough, they should be held accountable.

What is sufficient risk management?

The tough question will be “How much care and how much risk management is enough?” In complex software, it is impossible to test for every possible bug in advance. How will developers and manufacturers know when to stop?

Fortunately, courts, regulators, and technical standards bodies have experience setting standards of care and accountability for risky but worthwhile activities.

These standards could be very demanding. The European Union’s draft AI regulation, for example, requires that risks be reduced “as far as possible”, without regard to cost. Or they may be more like Australia’s negligence law, which permits less stringent management for risks that are less likely or less severe, or where risk management would reduce the overall benefit of the risky activity.

Legal cases will be complicated by the opacity of AI

Once we have a clear standard for risk, we need a way to enforce it. One approach could be to give a regulator the power to impose sanctions (as the Australian Competition and Consumer Commission, the ACCC, does in competition cases, for example).

People harmed by AI systems should also be able to take legal action. In cases involving self-driving cars, lawsuits against manufacturers will be particularly important.

However, for such lawsuits to be effective, courts will need to understand in detail the processes and technical parameters of AI systems.

Manufacturers often prefer not to reveal these details for commercial reasons. But courts already have procedures for balancing commercial interests with an appropriate amount of disclosure to facilitate litigation.

A greater challenge arises when AI systems themselves are opaque “black boxes”. For example, Tesla’s Autopilot feature relies on “deep neural networks”, a popular type of AI system in which even the developers can never be entirely sure how or why it arrives at a given result.

Explainable AI to the rescue?

Opening the black box of modern AI systems is the focus of a new wave of computer science and humanities research: the so-called “explainable AI” movement.

The goal is to help developers and end users understand how AI systems make decisions, either by modifying the way the systems are built or by generating explanations after the fact.

In a classic example, an AI system mistakenly classifies an image of a husky as a wolf. An “explainable AI” method reveals that the system focused on the snow in the background of the image, rather than the animal in the foreground.

(Right) An image of a husky against a snowy background. (Left) An “explainable AI” method shows which parts of the image the AI system focused on when classifying the image as a wolf.
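To make this concrete, below is a minimal sketch of how a post-hoc explanation of this kind can be generated with the open-source LIME library, one widely used “explainable AI” method. The classify_batch function and the blank image array are hypothetical stand-ins for a real deep-learning classifier and the photo being explained; an actual analysis would wrap the real model under scrutiny.

# Minimal sketch of a post-hoc "explainable AI" method using the LIME library
# (https://github.com/marcotcr/lime). The classifier below is a placeholder so
# the sketch runs end to end; in practice it would wrap the real model.
import numpy as np
from lime import lime_image

def classify_batch(images: np.ndarray) -> np.ndarray:
    """Hypothetical classifier: returns probabilities for [husky, wolf]."""
    rng = np.random.default_rng(0)          # placeholder logic, not a real model
    probs = rng.random((len(images), 2))
    return probs / probs.sum(axis=1, keepdims=True)

image = np.zeros((224, 224, 3), dtype=np.float64)  # stand-in for the husky photo

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classify_batch,      # the model whose decision we want to explain
    top_labels=2,
    hide_color=0,
    num_samples=1000)    # perturbed copies of the image used to probe the model

# Highlight the image regions that most pushed the model towards its top label,
# e.g. the snowy background rather than the animal itself.
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False)

The key idea is that the explanation is produced by observing how the model’s output changes as parts of the input are hidden, which does not require access to the model’s internal workings.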

How this might be used in a lawsuit will depend on a variety of factors, including the specific AI technology and the damage done. One of the main concerns will be the degree of access the injured party has to the AI system.

The Trivago case

Our new research analyzing a major recent Australian court case provides encouraging insight into what that might look like.

In April 2022, the Federal Court fined global hotel reservation company Trivago $44.7 million for misleading customers about hotel room rates on its website and in television advertising, after a case brought by competition watchdog the ACCC. A critical question was how Trivago’s complex ranking algorithm chose the highest-ranked offer for hotel rooms.

The Federal Court established rules for the discovery of evidence with safeguards to protect Trivago’s intellectual property, and the ACCC and Trivago called expert witnesses to provide evidence explaining how Trivago’s AI system works.

Even without full access to Trivago’s system, the ACCC’s expert witness was able to produce compelling evidence that the system’s behaviour was inconsistent with Trivago’s claim to offer customers the “best price”.

This shows how technical experts and lawyers can work together to overcome the opacity of AI in court cases. However, the process requires close collaboration and deep technical expertise, and will likely be expensive.

Regulators can take steps now to streamline things in the future, like requiring AI companies to properly document their systems.

The road ahead

Vehicles with varying degrees of automation are becoming more common, and fully autonomous taxis and buses are being tested in Australia and overseas.

Keeping our roads as safe as possible will require close collaboration between artificial intelligence and legal experts, and regulators, manufacturers, insurers and users will all have a role to play.

This article by Aaron J. Snoswell, Postdoctoral Fellow, Computational Law & AI Accountability, Queensland University of Technology; Henry Fraser, Researcher in Law, Liability and Data Science, Queensland University of Technology, and Rhyle Simcock, PhD Candidate, Queensland University of Technology is republished from The Conversation under a Creative Commons License. Read the original article.
