In philosophy, there’s an ethical thought experiment called the trolley problem. If you had to push one large person in front of a moving trolley to save a group of people on the tracks, would you? This abstract question has become concrete in the programming of self-driving cars: what should the car do if it’s impossible to avoid hitting everyone?
Researchers from the Toulouse School of Economics set out to learn what the public would decide, and posed a series of questions to online survey-takers, including a scenario in which a car would either kill 10 people and save the driver, or swerve and kill the driver to save the group.
They found that more than 75 percent of respondents supported self-sacrifice of the passenger to save 10 people, and around 50 percent supported self-sacrifice to save just one person. However, respondents didn’t actually expect real cars to be programmed this way; they assumed cars would probably protect the passenger at all costs. (1)
It is indeed a difficult problem. How can you program that into a robot, when you yourself are not fully aware of why you would sacrifice yourself to save other people?
Robots will never understand what cannot be understood.
Robots will never comprehend the incomprehensible.
Robots will never be alive.
There is a godly nature in us, and such decisions should be left to gods (humans) alone. Because only God can choose to die to save a human…