Back in 2011, I wrote about how tort liability would apply to self-driving cars. As I wrote then, it made the most sense to go right back to the case that first adopted the “crashworthiness doctrine,” Larsen v. General Motors Corporation, 391 F.2d 495 (8th Cir. 1968), which held: “We perceive of no sound reason, either in logic or experience, nor any command in precedent, why the manufacturer should not be held to a reasonable duty of care in the design of its vehicle consonant with the state of the art to minimize the effect of accidents. The manufacturers are not insurers but should be held to a standard of reasonable care in design to provide a reasonably safe vehicle in which to travel.” Liability for autonomous cars shouldn’t be any different: if an autonomous car causes a crash, the manufacturer will be liable if it did not use “reasonable care” in designing, programming, and testing the car.

Via Jason Kottke, I saw a recent TED video that raised a whole bunch of ethical dilemmas arising from self-driving cars. Namely, the talk raised the possibility that autonomous vehicles might find themselves in situations where they could “choose” — depending on their programming — to take actions that value certain lives over others.

The primary hypothetical in the video imagines an autonomous car driving behind a truck on the interstate. The car is “boxed in” by an SUV on the left and a motorcycle on the right. Suddenly, the truck loses some of its cargo, leaving the car with three options:

  • jamming the brakes (and likely hitting the cargo),
  • swerving left into the SUV (thereby endangering the occupants of both the autonomous car and the SUV), or
  • swerving right into the motorcycle (likely the safest course for the occupants of the autonomous car, but a horribly dangerous choice for the motorcyclist).

Running into the motorcycle might be the “safest” choice for the occupants of the car, but it’s also probably the choice most likely to kill someone, namely the motorcyclist. (For more about the dilemma, check out this page on the TED site, which links out to several other articles.)

This hypothetical isn’t absurd — consider this incredible mishap from last week, where the driver thankfully suffered only a scratch — but it does have some flawed elements. In the hypothetical, the autonomous vehicle has already erred by putting itself in an unsafe position, apparently following the truck too closely and letting itself get boxed in. If the computer were following the “two second rule” (or, better, the “three second rule”), it would have time to slow the vehicle to the point where an impact with the cargo would be unlikely to pose a grave threat to the passengers. Moreover, the swerve maneuver attempted by the computer is technically known as a “lane toss,” and one of the key concepts taught at driving schools is to keep the spaces next to your vehicle open so that you can perform a “lane toss” at a moment’s notice.
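
The following-distance point is easy to check with back-of-the-envelope arithmetic. This is a purely illustrative sketch with numbers I am assuming (a 65 mph travel speed, a 0.25-second machine reaction time, hard braking at 7 m/s², and cargo that stays roughly where it lands) — none of these figures come from the talk:

```python
def impact_speed_mps(speed_mps, gap_seconds, reaction_s=0.25, decel=7.0):
    """Speed remaining (m/s) when the car reaches the fallen cargo.

    Assumes the cargo stops where it lands, the car brakes at a constant
    `decel` after a `reaction_s` delay, and the initial gap to the cargo
    equals `speed_mps * gap_seconds` (the two- or three-second rule).
    """
    gap_m = speed_mps * gap_seconds              # initial distance to the cargo
    braking_m = gap_m - speed_mps * reaction_s   # distance left once brakes engage
    if braking_m <= 0:
        return speed_mps                         # no room to shed any speed
    remaining_sq = speed_mps ** 2 - 2 * decel * braking_m
    return max(0.0, remaining_sq) ** 0.5

highway = 29.0  # ~65 mph expressed in m/s
for gap in (2.0, 3.0):
    print(f"{gap:.0f}-second gap: impact at {impact_speed_mps(highway, gap):.1f} m/s")
```

On these assumptions, a three-second gap lets the car stop completely before reaching the cargo, and even a two-second gap cuts the impact speed from ~29 m/s to roughly 11 m/s — which is why a properly spaced computer has time to brake rather than face the stark swerve-or-not choice.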

With those caveats in mind, it’s still worth considering the ethical dilemma the hypothetical creates. Perhaps we have to start with the assumption that a poor human driver created the mess in the first place, and that the car’s “crash avoidance” system has suddenly activated.

It’d be nice if we could just apply Isaac Asimov’s Laws of Robotics, like, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But that doesn’t help our poor computer friend: it has no way to discern whether it is better to allow the human beings in its own vehicle to come to harm, or to reduce that likelihood by increasing the likelihood of injuring other human beings.

So let’s consider what the real law would say about this hypothetical.

Regardless of the reasons, intentionally ramming another vehicle — either the SUV or the motorcycle — means disregarding the safety of the people in that vehicle, and is thus arguably reckless endangerment. The Model Penal Code says a person acts “recklessly” when he “consciously disregards a substantial and unjustifiable risk that the material element exists or will result from his conduct,” and defines “reckless endangerment” as “recklessly engag[ing] in conduct which places or may place another person in danger of death or serious bodily injury.” As the Code further explains, “The risk must be of such a nature and degree that, considering the nature and purpose of the actor’s conduct and the circumstances known to him, its disregard involves a gross deviation from the standard of conduct that a law-abiding person would observe in the actor’s situation.”

Is programming a vehicle to value its own passengers’ safety over the safety of others “a gross deviation from the standard of conduct that a law-abiding person would observe in the actor’s situation”?

Before you reach an answer, bear in mind that this question is typically resolved on a case-by-case basis by juries. It’s not the sort of thing we can create general rules from: the definition of “reckless” is the general rule.

But we’re not done. Another doctrine applies here as well — the defense of necessity:

[T]he defense of necessity, or choice of evils, traditionally covered the situation where physical forces beyond the actor’s control rendered illegal conduct the lesser of two evils. Thus, where … A destroyed [a] dike in order to protect more valuable property from flooding, A could claim a defense of necessity. … [The defense was] designed to spare a person from punishment if he acted “under threats or conditions that a person of ordinary firmness would have been unable to resist,” or if he reasonably believed that criminal action “was necessary to avoid a harm more serious than that sought to be prevented by the statute defining the offense.” … Under any definition of these defenses one principle remains constant: if there was a reasonable, legal alternative to violating the law, “a chance both to refuse to do the criminal act and also to avoid the threatened harm,” the defenses will fail.

United States v. Bailey, 444 U.S. 394, 410 (1980).

If you want to know more about how the defense of necessity works in practice, compare Allen v. State, 123 P.3d 1106 (Alaska Ct. App. 2005) with Guthrie v. State, No. A-10145, 2009 WL 1424447 (Alaska Ct. App. May 20, 2009). Both involve defendants charged with driving without a license. In Allen, the defendant contended that his mother needed immediate medical attention and that he was driving to a place with a phone so he could call for help; the court held he was entitled to present the defense to the jury. In Guthrie, the defendant contended he was driving to the closest pharmacy to get Tylenol for his daughter, who had been in the emergency room the day before with strep throat and a fever of 105; the court held that was not the sort of “immediate” threat of harm that permits the defense to go to the jury.

And therein lies the problem: the only way to know whether the defense of necessity is available is to know whether running into the cargo was a “reasonable” alternative to swerving into the SUV or the motorcycle.

In the hypothetical, swerving into the SUV and swerving into the motorcycle both involve “disregarding” the safety of someone else who wasn’t in danger, and there is simply no way to program a computer to know, ahead of time, whether a particular instance of that decision would be judged “reasonable” by the community, thereby allowing the defense of necessity. There are too many variables and uncertainties involved, and too many differences of opinion, to create an iron-clad rule governing what a person or computer should do. One of the great virtues of juries — specifically large, diverse juries — is that their “deliberation provides an excellent opportunity for the jury members to influence one another on the meaning of facts and the value judgments implicit within them.” (See this article, pages 423-424.)

Personally, I think that to even begin answering the hypothetical, we would need a very good sense of just how dangerous the cargo was when it fell off the truck. For anything less than, say, an utterly massive piece of steel or concrete, I think it is unlikely a jury would find it “necessary” for the car to swerve. Moreover, given that the computer would (understandably) be judged with the benefit of hindsight, the jury would have to be convinced that there was no better alternative available, like hitting the brakes and then swerving. The situations in which a person must truly imperil another person to protect themselves are few and far between, even on the highway.

That is to say, if, as Kottke wonders, a company made “a car that places the security of the owner above all else,” and someone bought that car, I think both the company and the driver would find themselves on the wrong end of a reckless endangerment prosecution and verdict.

Jason Kottke recognizes that this hypothetical presents a situation akin to “the trolley problem” from philosophy, which forces people to make a stark choice between evils. The law does its best not to answer these problems in advance with a simplified “correct” answer, but rather to present them to jurors, so that they, on behalf of the community, can reach an answer in each particular situation. To me, these legal rules — forged from hundreds of years of real-life experience with similar life-or-death dilemmas — demonstrate the sheer difficulty of coming up with any general rule for self-driving cars beyond the basic one: the car should protect its occupants, but not at the expense of anyone else.

I think that’s the rule the law would require, and it’s also the rule most artificial intelligence experts would suggest. As Ben Goertzel told io9 not long ago, “Very few [artificial intelligence] researchers believe that it would be possible to engineer [artificial intelligence] systems that could be guaranteed totally safe.” Given that inherent limitation, it seems prudent to keep artificial intelligence systems bound by the same laws that govern human beings. That’s the position advocated by the Engineering and Physical Sciences Research Council: “Robots should be designed and operated to comply with existing law…”