We have talked a lot about self-driving cars. Why? Because first and foremost we work in the arena of motor insurance. We can see that self-driving cars will radically alter the insurance and motor industries, a process that has already begun and will quickly grow in scale over the next few years.
The reforms could arrive sooner than expected, especially given that Volvo plans to test its own AI technology in 2017. The suddenness of the changes will have profound effects on society as a whole and will undoubtedly raise numerous debates on the extent to which we can trust technology.
I’m not suggesting we will have a real-life Skynet situation on our hands, but rather a future where we allow technology to manage more and more of life’s everyday challenges. I fear a world closer to WALL-E than to Terminator.
AI technology will only ever have the conscience of its creators, which is something you may fear after Microsoft shot itself in the foot with the creation of ‘Tay’, an AI system designed to get smarter the more it interacted with Twitter users. It proved that artificial intelligence is far easier to corrupt than actual intelligence.
Self-driving cars will inevitably have questions raised over their ethics. No matter how smart self-driving technology gets, motor accidents will never be completely eliminated; there are simply too many external factors which could arise.
Could Driverless Cars Cause More Harm Than Good?
AI software will be more efficient at avoiding accidents than humans. An inability to be distracted from the task at hand, uncompromising awareness and split-second decision-making are all reasons why computers will be superior drivers to humans. But questions arise when an accident is unavoidable and the AI must decide on the best course of action for damage limitation, an argument raised by WIRED magazine.
Picture this: a driverless car is behind a motorbike, there is traffic on the opposite side of the road and pedestrians to the right. For whatever reason, the motorbike in front skids to a halt and there is no time for the driverless car to stop behind it. So what would the self-driving car decide to do? It certainly wouldn’t veer towards the pedestrians, as that would cause the most injury. Nor would it want to hit the motorcyclist, who could suffer just as much injury as a pedestrian. The result in this case would be for the driverless car to veer towards the opposite side of the road.
So what would be the problem with self-driving cars, you may be asking? Well, if the driverless car always aimed for the safest option, then it may influence people to purposely make unsafe decisions. For example, if the car had to choose between hitting a motorcyclist with a helmet or one without, the car would go for the safer option: hitting the helmeted motorcyclist. The driverless car would be punishing the more responsible of the two riders.
Another factor the driverless car may take into account would be the safety rating of each vehicle. Say the self-driving car had to decide between hitting a brand-new five-star safety-rated vehicle or a really old banger. The driverless car would pick the new vehicle every time. This may unintentionally influence people, and hence manufacturers, to avoid these safer cars.
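The perverse incentive described above can be made concrete with a toy sketch: a naive "minimise expected injury" rule, scored with entirely made-up numbers. The function name and injury scores here are hypothetical illustrations, not any real manufacturer's control logic.

```python
# Toy illustration of a naive damage-limitation rule. The names and
# expected-injury scores are invented for illustration only.

def choose_target(options):
    """Pick the collision option with the lowest expected-injury score."""
    return min(options, key=lambda o: o["expected_injury"])

options = [
    {"name": "motorcyclist with helmet",    "expected_injury": 0.4},
    {"name": "motorcyclist without helmet", "expected_injury": 0.9},
]

# The naive rule targets the helmeted rider -- punishing the safer choice.
print(choose_target(options)["name"])  # motorcyclist with helmet
```

Under this rule, the more responsibly a road user behaves (helmet worn, safer car bought), the more likely the algorithm is to select them, which is exactly the incentive problem the scenario highlights.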
Other Unforeseen Issues?
Could your vehicle be stolen without you in it? Quite possibly, yes. Although manufacturers would do everything in their power to ensure security and prevent the issue becoming a genuine threat, it could still be possible. A vehicle’s controls could be hacked into remotely; in fact, they already have been.
Other issues with the transition to driverless cars would be the restrictions on who could operate them. Could you get into a driverless car by yourself without a licence? Could a child travel in one unsupervised? Further down the line, would you even be allowed to control a car yourself if all others are driven by AI? These are all questions which will probably be raised in the future.
Innovation For The Better
Don’t get me wrong, I am an avid supporter of the development of driverless cars, which to my surprise puts me in the minority. Personally, I cannot wait for the day when I can sit back, read a book, have something to eat or get some shut-eye as my vehicle takes me exactly where I want to go.
Traffic will be a problem of the past, as computers will set the speed and distance to ease the flow of vehicles. Lorries in particular will no longer clog the motorways, as they could engage in truck platooning: a computer-controlled technique in which lorries travel one behind the other in extremely close formation to gain aerodynamic efficiency.
Above all other reasons, I would like to see a city with transport links as good as those in the film Minority Report.
The Question of Law
Possibly the biggest impact that driverless vehicles could have on our lives and our future is that they will open up the debate on the extent to which we can trust technology – and to what extent AI technology can be held accountable for its own actions.
Moving back to the issue of unavoidable accidents and damage limitation, the role of AI reminds me of the book and film adaptation of I, Robot, in which Will Smith’s character develops a distrust of robots. This is because the AI decided to save him rather than a child, as he stood a higher percentage chance of surviving. Driverless cars will also make these on-the-spot judgements, in which human emotion is taken out of the equation.
Science and science fiction are intertwined. Developments in science drive science fiction, just as new science fiction drives the development of science. Films such as Minority Report, WALL-E, Terminator and I, Robot all drive technological advancement. The law behind driverless cars will undoubtedly be at the forefront of how artificial intelligence is judged under human law, the repercussions of which will be felt for decades to come.