Self-driving cars, autonomous cars, driverless cars – regardless of what you want to call them – are expected to revolutionize the entire automobile industry. They will also transform a host of philosophical puzzles into practical problems.
In this post I want to focus on a common moral objection to specifying an AI’s goals: that determining the values of an artificial intelligence is morally on a par with “enslaving” it.
A common objection to utilitarianism is that the philosophy is too demanding: it seems to imply that we should donate all our money to those in need, or devote every waking hour to helping others. This objection rests on a misunderstanding of human willpower and decision-making.
It’s possible to be mistaken about one’s own values. A common instance is when we think we care about something, while what we truly care about (i.e. on reflection) is something else; the thing we thought we cared about merely happens to correlate, in most typical situations, with what we would care about in all situations.
“They are just animals, not humans!” While such a statement may seem intuitively appealing despite its lack of argumentative substance, it quickly becomes apparent that arguing this way is problematic once the appropriate historical context is laid out.
“A full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old.” – Jeremy Bentham
If there is no God, so the argument goes, there is no objectivity in ethics either. This article will later attempt to specify what exactly “objective ethics” could refer to. First, however, we’ll get God out of the way…