Artificial Free Will
In this post I want to focus on a common moral objection to determining an AI’s goals: the claim that shaping the values or goals of an artificial intelligence is morally on par with “enslaving” the AI.
A common objection to utilitarianism is that the philosophy is too demanding. For instance, it might seem to require that we donate all our money to those in need or devote every waking hour to helping others. This objection rests on a misunderstanding of human willpower and decision-making.
If there is no God, so the argument goes, there is no objectivity in ethics either. Later in this article I will attempt to specify what exactly “objective ethics” could refer to. First, however, we’ll get God out of the way…