By Ian Hildebrand
Self-driving cars have been all over the news lately. Elon Musk and Tesla have responded to the first, perhaps inevitable, death of a driver using Autopilot by adding a slew of safety upgrades.
The federal government has thrown in its lot with autonomous driving, creating a range of guidelines designed to keep product quality and user safety at the forefront of companies’ commercial policies while encouraging manufacturers to pursue an innovation that the Director of the National Economic Council, Jeffrey Zients, hopes “will save time, money and lives.”
On top of all of this is Uber’s foray into the driverless future, with driverless cab services on a test run in Pittsburgh.
Self-driving Ford Fusions have been picking up passengers with nothing but a tablet display for choosing a destination and an engineer inside to take over in case of emergency.
The question is whether we should trust them. Can we ride in self-driving cars without worry?
The public has mixed opinions on vehicles with autopilot technology. Some can’t bring themselves to ever trust a car to pilot itself. Others are okay with some levels of autopilot, like crash-avoidance technology, but anything beyond that stretches the limits of their comfort zone.
Many people are afraid that a car on autopilot will make a mistake: that it won’t see something, or that it will think it sees something that isn’t there, leading to an accident, as in the Tesla Autopilot crash that led to the death of Joshua Brown.
This is a legitimate fear in any vehicle, but here’s the catch: human drivers make mistakes like these all the time.
This isn’t anyone’s fault in particular. We aren’t always aware of our surroundings when we drive, whether we’re driving with friends and family or making an early-morning or late-night commute home.
Autopilot systems don’t have that problem. According to The Oregonian, the Ford Fusions Uber was testing were equipped with everything from “seven traffic-light detecting cameras to a radar system that detects different weather conditions to 20 spinning lasers that generate a continuous, 360-degree 3-D map of the surrounding environment.”
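To picture what that kind of continuous 360-degree awareness looks like in software, here’s a minimal sketch of folding one sweep of spinning-laser range readings into a coarse obstacle map. Every name and number below is an illustrative assumption, not a description of Uber’s actual system:

```python
import math

# Illustrative sketch only: fold one 360-degree lidar sweep into a coarse
# occupancy grid. Real systems fuse lidar, radar, and cameras continuously.

GRID_SIZE = 100      # 100 x 100 cells
CELL_METERS = 0.5    # each cell covers 0.5 m, so ~25 m in every direction

def sweep_to_grid(sweep):
    """sweep: list of (angle_degrees, range_meters) laser returns."""
    grid = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
    cx = cy = GRID_SIZE // 2                       # the car sits at the center
    for angle_deg, rng in sweep:
        theta = math.radians(angle_deg)
        x = cx + int((rng * math.cos(theta)) / CELL_METERS)
        y = cy + int((rng * math.sin(theta)) / CELL_METERS)
        if 0 <= x < GRID_SIZE and 0 <= y < GRID_SIZE:
            grid[y][x] = 1                         # mark an obstacle
    return grid

# Hypothetical sweep: an obstacle roughly 10 m away, off to one side.
grid = sweep_to_grid([(350 + i, 10.0) for i in range(20)])
```

The point of the exercise: a system like this refreshes its entire picture of the world many times per second, in every direction at once, which no human neck and pair of eyes can match.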
Even if the technology still needs polishing, this driving system is at least twice as aware of its surroundings at any given moment as I am at any point in the day!
Moreover, the reaction time of a computer system is always going to be faster than a person’s reaction time.
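To put rough numbers on that, here’s a back-of-the-envelope comparison. The 1.5-second human figure is a commonly cited ballpark for perception-reaction time, and the 0.1-second computer figure is my own generous assumption, not a measurement from any particular system:

```python
# Rough distance traveled before braking even begins, at highway speed.
# 1.5 s is a commonly cited human perception-reaction time; 0.1 s is an
# assumed figure for an automated system.

speed_mph = 65
speed_mps = speed_mph * 0.44704    # miles per hour -> meters per second

for driver, reaction_s in [("human", 1.5), ("computer", 0.1)]:
    distance = speed_mps * reaction_s
    print(f"{driver}: {distance:.1f} m traveled before the brakes engage")

# human: 43.6 m traveled before the brakes engage
# computer: 2.9 m traveled before the brakes engage
```

Under those assumptions, the human driver covers roughly ten car lengths before touching the brakes; the computer covers less than one.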
Even the issue of cybersecurity for self-driving cars, while a very real concern, probably doesn’t pose a measurable threat to the average driver – unless you happen to make an enemy of a very skilled computer hacker.
Chris Valasek of IOActive, a cybersecurity firm, told Fortune that anyone looking to hack a car on autopilot will “spend a lot of time, money and effort, and have a very special skill set.”
My view is this: if it comes down to choosing between a human driver – who may be impossible to hack, but can certainly be distracted, injured, or fall asleep at the wheel – and an automated driver built and optimized specifically to keep its passenger safe, I’d pick the latter any day of the week.
I’d have no problem taking a self-driving Uber myself. Any way you look at it, it at least saves you the trouble of socializing with a driver – the greatest of all evils in this world.