As technology advances, we are getting closer and closer to a time when robots will be in charge of whether we live or die. Now, I'm not talking about Terminators or some self-aware AI, as we are a long way away from that kind of technology, but we are getting pretty close to driverless cars becoming mainstream.
There is an implicit trust barrier we will have to overcome before driverless vehicles become the norm, as it will require you to put total faith in a robotic vehicle to get you from point A to point B safely. There are already studies suggesting that driverless systems are safer, and once the balance of driver versus driverless tips toward the robots, we may well see a dramatic drop in deaths caused by vehicle accidents. But there is a large grey area that still needs to be addressed: the decision-making capabilities of driverless vehicles.
For example, say you get into your brand new electric ('cause let's face it, that's where we're heading) driverless car and say, "Take me to the local library." Halfway through your journey, a child runs out in front of your car, which is doing 80 km/h. To your left is a steel light post, and to your right is another car coming in the other direction. In a normal car, the driver would be forced to choose one of three terrible options: brake and hit the child, swerve left into the post, or swerve right into oncoming traffic. But that choice would still be made by a human capable of split-second decisions.
In a driverless car, who makes that choice?
Do we program cars to preserve the lives of pedestrians at the cost of their passengers?
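To see why that question is so uncomfortable, consider what answering it in code might literally look like. The sketch below is purely hypothetical: every name and number in it is made up, and no real self-driving system works this simply. The point is only that somewhere, a programmer would have to write down a weight that ranks pedestrian safety against passenger safety.

```python
# A purely hypothetical sketch of a hard-coded "who do we protect?" rule.
# None of these names come from any real self-driving stack; the risk
# numbers below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str             # e.g. "brake", "swerve_left", "swerve_right"
    pedestrian_risk: float  # estimated probability of harming a pedestrian (0-1)
    passenger_risk: float   # estimated probability of harming the passengers (0-1)

# The uncomfortable part: two numbers deciding whose safety counts more.
PEDESTRIAN_WEIGHT = 1.0
PASSENGER_WEIGHT = 1.0   # lower this and the car sacrifices its passengers first

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the action with the lowest weighted expected harm."""
    return min(
        outcomes,
        key=lambda o: PEDESTRIAN_WEIGHT * o.pedestrian_risk
                      + PASSENGER_WEIGHT * o.passenger_risk,
    )

# The scenario above, with made-up risk estimates:
options = [
    Outcome("brake", pedestrian_risk=0.9, passenger_risk=0.1),         # hit the child
    Outcome("swerve_left", pedestrian_risk=0.0, passenger_risk=0.8),   # steel light post
    Outcome("swerve_right", pedestrian_risk=0.0, passenger_risk=0.6),  # oncoming car
]
print(choose_action(options).action)  # -> "swerve_right"
```

Change PASSENGER_WEIGHT from 1.0 to 0.5 and the car becomes more willing to sacrifice its passengers; raise it to 2.0 and it protects them at the child's expense. That trade-off gets baked in before anyone ever sits in the car.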
These are major philosophical questions that we, as a society, need to answer before we give machines the task of making these decisions for us. What would you choose in the scenario above, and what are your thoughts on giving robots the right to kill us?
Leave your comments below, and don't forget to share this with your friends.
Stay Curious – C.Costigan