Although few people have seen an autonomous car operating on the streets of Connecticut, most agree that such vehicles are coming in the near future. However, a computer science professor working in the field believes that errors in how these vehicles are programmed could make them unsafe.
In a recent statement, the professor argued that programming an autonomous vehicle to drive like a human is a mistake. Because humans are error-prone, he considers it unsafe to model vehicle behavior on human driving habits. He is quoted as saying, "They are learning from human drivers, all of whom are fallible, and the autonomous cars are in turn mirroring our unsafe driving behaviors."
He cites a recent fatal accident in Arizona in which a driverless vehicle struck and killed a pedestrian. In that instance, the pedestrian entered the road in a darkened area and was hit by the autonomous vehicle. He believes the vehicle was programmed to scan the roadway and proceed if nothing was detected in its path. In his view, the vehicle should instead have been programmed to be able to stop within the range of its headlights.
According to the professor, driverless vehicles should be programmed to the capabilities of the technology, not to the capabilities of the average driver. As another example, he believes a vehicle should be programmed to apply the brakes within milliseconds of detecting an object in its path, far faster than the normal reaction time of a human driver.
He raises a point that may soon concern personal injury attorneys: will there be a claim against a manufacturer for failing to program a vehicle to the limits of its technological capabilities? As autonomous vehicles evolve, such questions are sure to be the subject of future litigation arising from collisions involving driverless vehicles.