Ethics of reinforcement learning?

Since reinforcement learning systems experience reward and punishment in a sense, that raises the question of how complex they can get before it matters how they're treated. Of course we're nowhere near that line, and there's no such thing as cruelty to drones (yet!), but I'm curious what ML researchers think about where the line would be and whether it even exists.

That’s an interesting question.

I'm going to move this thread to the "AI Discussions" forum, since it isn't strictly related to the MLS Course 3 Week 3 materials.