June 04, 2019, Kitchener, Ontario
Posted by: Robert Deutschmann, Personal Injury Lawyer
The death of a driver whose Tesla Model X was operating in autonomous mode highlights a serious issue that we will need to resolve as autonomous cars become more common.
A previous death, in which a pedestrian was killed by an Uber vehicle operating in autonomous mode, revealed that the systems engineers had disabled manual emergency braking as a way of eliminating dangerous situations in which the human brakes while the car continues to drive and steer.
This same sort of systems error may have played a part in the death of the Tesla driver. An automated emergency braking system should be robust enough to stop the car when its sensors ‘see’ an object it is about to hit, but it should also be capable of assessing when emergency braking is not required. A tremendous amount of AI capability will be required to balance that decision.
Humans make this decision instantaneously, based on instinct, experience and driving ability. There is debris or an animal on the road – do you jam on the brakes and get hit by the car behind? Do you swerve suddenly and risk losing control? Do you hit the object? This largely unconscious process often leads to errors that cause car crashes and personal injury.
Given that the ‘decision-making’ process of autonomous cars is suspect where emergency braking is concerned, it may be beneficial to have both manual and automated emergency braking. This would, however, require that the human passenger/driver pay attention to the road and to what is going on around the car. Studies during testing periods have shown that humans in autonomous cars have an incredibly difficult time maintaining focus on the road when they are not actually driving.
So how do we balance a robust, AI-aided automated braking system with a simpler manual one?
The lawsuit against Tesla claims that the “2017 Model X was designed, built, and introduced into the stream of commerce without having been equipped with an effective automatic emergency braking system.” The claim also asserts that “Notwithstanding the fact the Tesla Model X vehicle was marketed and sold as a state-of-the-art automobile, the vehicle was without safe and effective automatic emergency braking safety features that were operable on the date of this collision.” Clearly this case will begin to shed light on the engineering and programming decisions made behind closed doors.
As I’ve written on my blog in the past, there has been very little public dialogue about who is making the ‘moral’ decisions programmed into autonomous vehicles, or how those decisions are made. For example, is the life of a child on the road worth more than the lives of the vehicle’s passengers? We have not been privy to who is making these decisions, and no regulatory framework has been established by governments to aid in the process.
There are several levels of autonomous vehicles at the moment – some allow the human to turn the automated emergency braking system off, and some do not. Where the human has turned the system off, how will we determine whether the human’s action or inaction caused a crash?
Lawsuits in the near future will begin to examine the ramifications of this ‘co-sharing’ of driving responsibility between the human and the technology. Thought will also have to be given to how car manufacturers can truly educate human drivers about the implications of turning some autonomous capabilities on and off at will.