Algorithms with “incompatible objectives” will lay the legal groundwork on which tomorrow’s cars are launched. Without that groundwork, the launch of autonomous cars is going to be an entrepreneurial nightmare.
Although the launch of autonomous vehicles is eagerly awaited, the legislation that will let these vehicles ply our streets could be tricky to formulate, because as a species we seem to want to have our cake and eat it too: we are perfectly happy for self-driving cars to sacrifice their passengers rather than harm pedestrians, as long as we are not the ones sitting inside. As if that weren’t enough, according to one survey, we would even prefer that the self-sacrificing car be someone else’s, not our own.
Although we all want autonomous smart cars, enforcing such trade-offs through legislation looks like a royal mess. And if you think that is going to be tough, consider what the engineers face.
To ensure that these cars travel without untoward incidents, computer scientists have to design algorithms with “incompatible objectives.” The algorithm must avoid provoking public outrage while, at the same time, not discouraging prospective buyers. If a car keeps sacrificing itself to save pedestrians, that is good for road safety but a poor selling point; a car that always protects its passengers would sell better but invite outrage.
The question this dilemma raises is tricky because it essentially asks the AI to decide whose life matters more: the lives of the humans in the vehicle or those of the pedestrians. When humans are at the wheel, the decisions that sometimes save both lives come naturally out of instinct, self-preservation, driving experience and judgement. How does one translate these into code?
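To make the “incompatible objectives” concrete, here is a minimal, purely illustrative sketch: a hypothetical cost function that weighs passenger harm against pedestrian harm with a single tuning weight. The function name, the harm scores, and the scenario are all made up for illustration; real systems would not reduce the dilemma to two numbers.

```python
# Hypothetical sketch of incompatible objectives. The weight w encodes
# the trade-off: w near 1 penalises passenger harm most (better for
# sales, worse for public outrage); w near 0 does the opposite.

def choose_action(actions, w=0.5):
    """Pick the action with the lowest weighted expected harm.

    actions: dict mapping an action name to a tuple
             (expected_passenger_harm, expected_pedestrian_harm),
             both on an arbitrary 0-1 scale (made-up numbers).
    w:       trade-off weight between the two harms.
    """
    def cost(harms):
        passenger_harm, pedestrian_harm = harms
        return w * passenger_harm + (1 - w) * pedestrian_harm

    return min(actions, key=lambda a: cost(actions[a]))

# Toy scenario: swerving harms the passenger, braking harms pedestrians.
scenario = {
    "swerve": (0.9, 0.1),   # sacrifices the passenger
    "brake":  (0.1, 0.9),   # endangers pedestrians
}

print(choose_action(scenario, w=0.2))  # pedestrian-favouring tuning -> swerve
print(choose_action(scenario, w=0.8))  # passenger-favouring tuning -> brake
```

The sketch shows why the objectives are incompatible: no single value of `w` satisfies both the public (which wants `w` low) and the buyer (who wants `w` high), and whoever sets it is implicitly answering the question of whose life matters more.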
It also raises legal questions: if someone knowingly buys a model of autonomous car that tends to favour passengers over pedestrians, will the buyer be liable for loss of public life if an accident occurs?
“I do not think concerns about very rare ethical issues of this sort […] should paralyze the really groundbreaking leaps that are being made in this particular domain of technology, policy and conversations in liability, insurance and legal sectors, and consumer acceptance,” said Anuj K. Pradhan, assistant research scientist at UMTRI’s Human Factors Group.
It’s better to thrash out these troubling questions now than later, when fleets of autonomous cars are already on the road. If such questions intrigue and interest you, the debate is well worth following.