Description
1.)
If an AI were more intelligent than is possible today, we could suppose that it might develop moral reasoning and learn how humans make decisions about ethical problems. But would this suffice for full moral agency, that is, for human-like moral agency?
Note: Your answer should be at least 400-500 words and include at least 2-3 references. No plagiarized or AI-generated content is allowed.
2.)
How do we know whether an AI has morally relevant properties or not? Are we sure about this even in the case of humans?