How should the AI in autonomous vehicles be programmed to respond when harm is unavoidable? Should it prioritize the safety of its passengers, prioritize pedestrians, or try to minimize overall harm? (A toy sketch of these three policies follows the questions below.)
What ethical principles should guide the programming of these life-and-death decisions in autonomous vehicles?
Who should be responsible for making these ethical decisions: the engineers, the company, regulators, or society as a whole?
If you were a passenger in an autonomous vehicle, how would you want it to be programmed in such scenarios? What if you were the pedestrian?
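
To make the first question's three alternatives concrete, here is a minimal sketch of how each policy would select among possible maneuvers. Everything in it is hypothetical (the `Outcome` class, the `choose` function, the policy names, the sample numbers), and it bakes in a contested assumption: that harm to different people can be scored on a single comparable scale. No real autonomous-vehicle stack reduces crash response to arithmetic like this.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its estimated consequences (toy model)."""
    name: str
    passenger_harm: float   # estimated harm to occupants, 0.0-1.0 (hypothetical scale)
    pedestrian_harm: float  # estimated harm to people outside, 0.0-1.0 (hypothetical scale)

def choose(outcomes: list[Outcome], policy: str) -> Outcome:
    """Pick a maneuver under one of three toy policies."""
    if policy == "protect_passenger":
        # Prioritize the occupants: ignore everyone else in the comparison.
        return min(outcomes, key=lambda o: o.passenger_harm)
    if policy == "protect_pedestrian":
        # Prioritize people outside the vehicle.
        return min(outcomes, key=lambda o: o.pedestrian_harm)
    # "utilitarian": minimize total expected harm, weighting everyone equally.
    return min(outcomes, key=lambda o: o.passenger_harm + o.pedestrian_harm)

if __name__ == "__main__":
    # Invented numbers, purely for illustration.
    options = [
        Outcome("swerve into barrier", passenger_harm=0.8, pedestrian_harm=0.0),
        Outcome("brake in lane",       passenger_harm=0.1, pedestrian_harm=0.6),
    ]
    for policy in ("protect_passenger", "protect_pedestrian", "utilitarian"):
        print(policy, "->", choose(options, policy).name)
```

Even this toy version surfaces the real issue behind the questions above: someone has to choose the scoring function and the weights, and that choice is an ethical decision, not an engineering one.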

