Case:
Case #1: "I'm Afraid"
Questions to consider:
Is current AI technology anywhere near sentience?
How do we tell if something is sentient?
If AI is, or can become, sentient, how does that affect our obligations toward it?
Discussion questions:
What sort of moral consideration, if any, would we owe a sentient AI?
Is it straightforwardly morally wrong to turn off the LaMDA program?
Who should bear primary responsibility toward a sentient AI? The researchers who created it? The corporation or university that funded its creation? The society in which it was created? Someone else? Why?
Links: