October 14, 2021, Kitchener, Ontario
Posted by: Robert Deutschmann, Personal Injury Lawyer
The promise of self-driving cars is a tempting one. Cars are available now that can park themselves, back up, and warn of unsafe lane changes. Some, like Teslas, even have limited self-driving capability. The question remains, though: are they safe for the road?
The National Academies of Sciences published a comprehensive analysis of the question in their latest edition. I’ve attached it here because it is very well written and an excellent analysis of the question.
From a safety perspective, proponents argue that self-driving cars will eliminate the 90% of accidents due to human error, save energy and resources, revolutionize cities, and allow more flexible ownership models. They also concede that work remains to be done on the ethical side of programming, the legislative side (laws, liability, insurance), fail-safes for system failure, hacking and security issues, and unforeseen scenarios.
Enjoy the read and the excellent analysis.
Driverless Motor Vehicles: Not Yet Ready for Prime Time
Perspectives | September 30, 2021
Christopher A. Hart is founder of Hart Solutions LLC and former chair of the National Transportation Safety Board (2015–17).
According to safety experts, more than 90% of motor vehicle crashes involve driver error, and many believe that replacing drivers with automation could significantly reduce the number of crashes. Well-considered automation could compensate for human susceptibilities such as fatigue, distraction, and impairment that contribute to crashes, and thus greatly reduce the loss of nearly 40,000 lives every year on US streets and highways.
It is worth noting that aviation has been developing automation for decades. But airliners will continue to have pilots for the foreseeable future because automation designers do not have satisfactory answers for two crucial questions:
What if the automation fails? And, much more likely,
What if the automation encounters circumstances that were not anticipated by the designers?
The same questions apply to automation in road vehicles. Hence, despite the substantial potential lifesaving improvements of removing drivers from cars, driverless cars probably will not achieve public acceptance for widespread use until car automation designers can answer those two questions. This article addresses those two foundational issues as well as several other automation challenges for creating driverless vehicles.
Challenges and Lessons from Aviation Automation Experience
Developing autonomous (driverless) vehicles (AVs) is much more challenging and will take much longer than many AV advocates anticipated as recently as a few years ago. To provide context, commercial aviation has been automating airplanes for decades, in a less complex and more structured environment than streets and highways, but automation will not be replacing pilots in the foreseeable future.
Aviation experience has demonstrated that automation concerns include inadequate consideration of “human factors” in designing automation, automation failure, and automation in situations not anticipated by the designer. The AV industry would do well to learn from the aviation automation experience and avoid similar mistakes, especially given that serious problems with AV automation have already attracted, and are likely to continue to attract, widespread publicity. Otherwise, a skeptical public could delay acceptance of AV automation, prolonging the loss of more than 100 lives a day in the United States alone.
Inadequate Consideration of Human Factors
The most significant challenges in aviation automation have resulted from inadequate consideration of human-automation interactions, primarily involving pilots. That history has limited applicability here because this article is about driverless vehicles in which there is no operator, but there are relevant concerns.
Mixing automation with humans is much more difficult than designing for a wholly automated environment. The AV industry faces several human factors challenges concerning humans who are not in the vehicle—drivers of other vehicles, motorcyclists, bicyclists, and pedestrians. But the industry has very little experience to draw from to effectively consider humans who are not in the vehicle. Early indications are that automation that does not perform well amid humans will probably be disfavored by the general public.
Automation Failure
Safety experts learned long ago not to predict that “This ship can’t sink.” Although the reliability of automation is generally improving, any system that is designed, built, and maintained by humans will fail sooner or later. A major design challenge is to ensure that automation failures will not endanger the lives of the vehicle occupants or others.
Two common AV industry concepts for handling automation failure are to pull off to the side of the road or to stop in the lane of travel, but these may not always be the safest options; a human driver can assess the situation in the moment and come up with a safer course of action. Accidents caused by driverless vehicles also elicit more negative public reactions than those caused by human error; only time will tell whether that attitude will change.
Automation in Unanticipated Circumstances
Aviation experience has demonstrated that problems are much more likely to result when automation encounters unanticipated circumstances than from outright failure. Other than the recent tragic crashes of the Boeing 737 MAX—which arguably resulted not from failure but from inadequate design that enabled the system to activate based on information from only one sensor—no US commercial aviation accident has been found to have resulted from automation failure. However, in several commercial aviation accidents automation encountered circumstances that were not anticipated by the automation designers. Lessons from these accidents can apply to AVs.
The unplanned airliner landing on the Hudson River in 2009 is an example of an automation encounter with unanticipated circumstances. When both engines were seriously damaged after ingesting birds, the pilots were faced with at least three situations for which they had never been trained: gliding an airliner, landing without power, and landing on water. Before they hit the water, they sought to “flare” (pull the nose up to generate more lift just before touchdown) in order to reduce the vertical impact speed. Unbeknownst to the pilots, the airplane’s automation system included a feature designed to dampen an undesirable flight characteristic known as the phugoid, and the phugoid damper prevented the pilots from pulling the nose up as far as they desired. Consequently, they hit the water harder than anticipated, which damaged the aircraft enough to allow water to enter and resulted in the only significant injury from the accident. The designers of the phugoid damper could hardly have been expected to anticipate that the airplane would ever land on water.
Recovery from situations that are created when automation encounters unanticipated circumstances can be very challenging, and often can occur only, if at all, with human intervention. In a driverless car, equipped with only a screen for passenger input about destination—but no controls for acceleration, braking, or steering—there will be no opportunity for human intervention. The car itself must ensure that unanticipated circumstances will not result in damage or harm to the vehicle occupants or others. It must “fail safe.”
Exacerbating this complexity is the fact that there are 4 million miles of roadway in the United States, with unique features that are ever changing. Programming for this magnitude of potential circumstances is virtually impossible.
Concerns Specific to Automobile Automation
Because the ground environment is much more complex and variable than the aviation environment, AV makers will have to tackle several other problems.
In aviation, simulators are capable of extensive testing of automation, often sufficient to obviate the need for real-world testing. But for road vehicles, the current state of development for lab testing of AV automation cannot substitute for real-world testing. Similarly, because the very large variety of street challenges cannot be satisfactorily replicated on a test track, such testing is insufficient.
For these reasons, AVs will not be safe and reliable on the streets until they are tested and proven on the streets. Unfortunately, street testing of driverless vehicles will lead to crashes sooner or later, even after responsible AV manufacturers have proven the vehicles’ reliability both in the lab and on the test track.
One important problem with street testing is that automation history has demonstrated that humans are not good monitors, in part because they become complacent in the presence of very reliable systems. Moreover, a human monitor, no matter how vigilant, cannot keep his or her eyes on the road 100% of the time, sometimes for good driving reasons such as reading street signs or looking around before changing lanes.
It was foreseeable that in AV street testing sooner or later a person or thing would be in front of a vehicle at the very instant that a human monitor happened not to be looking at the road. This occurred in March 2018 in Tempe, AZ, when an Uber that was being street tested with a monitor/driver killed a woman who was walking her bicycle across the street. That was the first fatality from a street test of a vehicle that was designed to be driverless.
The Tempe fatality illustrates a problem that is inherent to the development of driverless cars, and more street testing fatalities are very possible. Given that the public is already skeptical about AVs, it is important to educate the public that, although there is a possibility of fatalities during the development process, the safety benefits of automation, if done properly, can be significant. In some ways, it is analogous to the challenge of very rare negative side effects of COVID-19 vaccines.
Software Updates and Cyber Concerns
Periodic updates of AV software will probably be typical, as well as communications from AVs to the vehicle manufacturers. And cyber protection updates will be necessary to address ever-evolving cyber invasion protocols. This creates the challenge of ensuring that the addition of the new software, including cyber protection software, does not create unintended consequences.
Ethical Issues
Automation in cars is associated with ethical issues. For example, if a car encounters a vehicle coming from the opposite direction in the same lane while there are pedestrians on the sidewalk, the AV will need to have been programmed to decide whether to swerve onto the sidewalk, saving the AV occupants but harming the pedestrians, or to crash into the oncoming vehicle in order to save the pedestrians. Problems of this type will probably be rare, but the AV industry needs to develop a supportable and transparent way to address them.
Federal Leadership
Federal leadership on AVs is important for several reasons.
Uniform minimum safety standards will help minimize uncertainty in AV manufacturing.
Developing public confidence in the safety of street testing will probably be more difficult if there is a patchwork quilt of testing and approval standards across the country.
Given that the worldwide car industry is exploring AVs, the federal government must be involved to promote international AV harmonization.
The federal government needs to determine a national framework for the ethical issues discussed above, both as a general matter and to more effectively engage in efforts to ensure international harmonization.
The potential benefits of driverless vehicles are significant. Removing the driver greatly reduces or eliminates human error as well as risks due to human fatigue, distraction, and impairment. These factors account for the loss of nearly 40,000 lives every year in the United States. But major challenges must be addressed before driverless motor vehicles will be ready and accepted by the public for widespread use.
Notes
1. NHTSA Traffic Safety Facts, February 2015.
2. This article is limited to driverless vehicles and does not address automation that assists drivers.
3. J.D. Power 2020 Q3 Mobility Confidence Index Study.
4. National Highway Traffic Safety Administration. 2018. A Framework for Automated Driving System Testable Cases and Scenarios. Washington.
5. It is not known whether there may have been automation failures that did not result in accidents because the pilots became aware of the failures in time to respond appropriately.
6. National Transportation Safety Board. 2010. Loss of Thrust in Both Engines After Encountering a Flock of Birds and Subsequent Ditching on the Hudson River, US Airways Flight 1549, Airbus A320-214, N106US, Weehawken, New Jersey, January 15, 2009. Aircraft Accident Report NTSB/AAR-10/03. Washington.
7. Parasuraman R, Manzey DH. 2010. Complacency and bias in human use of automation: An attentional integration. Human Factors 52(3):381–410.
8. NTSB Report HAR 19-03, PB2019-101402. When the Tempe crash occurred, Uber voluntarily terminated all its AV street testing. By way of full disclosure, the author was engaged as a consultant to help Uber resume street testing.