Researchers at the University of Connecticut have been working with Nao, a toddler-sized robot, to explore whether robots can be programmed to make ethical decisions. Even simple tasks, like reminding someone to take medication, carry ethical consequences: what should the robot do if the person refuses to take the medicine?
Questions to discuss with students:
First, present students with a common ethical dilemma, such as the trolley problem, and have them discuss the possible outcomes and the consequences of each choice.
Do you think robots can be programmed to make similar choices? Would you want them to? Why or why not?