Self-driving cars, autonomous cars, driverless cars – regardless of what you want to call them – are expected to revolutionize the entire automobile industry. They will also transform a host of philosophical puzzles into practical problems.
Suppose there were a machine you could connect your brain to that would simulate the perfect life – everything according to your wishes. You wouldn’t notice that it was all virtual. If you connect, you stay connected for the rest of your life. Would you connect yourself? Why, or why not? And why do philosophers ask questions like this anyway?
In this post I want to focus on a common moral objection to determining an AI’s goals. The objection is that trying to determine the values/goals of an artificial intelligence is morally on par with “enslaving” the AI.
A common objection to utilitarianism is that the philosophy is too demanding. For instance, it might seem to require that we donate all our money to those in need or devote every waking hour to helping others. This claim is based on a misunderstanding of human willpower and decision-making.
Bayes’ Theorem tells us how to rationally assess the probability of a certain statement of interest being true, given some evidence. Insofar as science consists in creating hypotheses, collecting evidence for and against them, and updating our credence in these hypotheses in the face of the collected evidence, Bayes’ Theorem formalizes the very process of doing science.
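The updating process described above can be sketched in a few lines of code. This is a minimal illustration, not from the original text: the hypothesis H, the evidence E, and all the probability values are hypothetical placeholders chosen only to show the mechanics of the theorem.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) via Bayes' Theorem:

    P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|~H) * P(~H)]
    """
    numerator = p_e_given_h * prior
    # Total probability of the evidence under both hypotheses.
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Illustrative numbers: a 50% prior, evidence four times as likely
# if H is true than if it is false.
posterior = bayes_update(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(posterior)  # 0.8
```

Note how the evidence shifts the credence from 0.5 to 0.8 – repeated applications of this update, one per new piece of evidence, are the formal counterpart of accumulating scientific support for a hypothesis.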
It seems that God would let us know why he allowed so much evil if he existed and had good reasons for allowing it. Not doing so might cause unnecessary suffering, doubt, and uncertainty among believers. This can be turned into an argument against God’s existence.
It’s possible to be mistaken about one’s own values. A common instance is thinking we care about one thing, while what we truly care about (i.e., under reflection) is something else – something that merely happens to correlate, in most typical situations, with the thing we would care about in all situations.
The mere possibility of zombies is enough to refute physicalism about the mind. The anti-physicalist, however, cannot simply start with the possibility of zombies as a premise without begging the question against physicalism. How can we assess whether zombies are possible on impartial grounds?
It’s in the interest of agents to achieve their own goals as well as possible. When we implement this in our behavior, we are acting rationally. But what does this mean in an applied setting – how do we act so as to best achieve our goals?
Despite initial plausibility, physicalism about consciousness is a controversial view. It has come under heavy attack from two unlikely opponents in an academic debate, namely ghosts and zombies.
“They are just animals, not humans!” While such a statement may – despite the lack of argumentative substance – seem intuitively appealing, it should immediately become apparent that it is problematic to argue this way once the appropriate historical context is laid out.
What makes the brain really special is not its complicated function. It is what may deserve to be called our biggest scientific surprise: the brain’s performing these various functions is accompanied by an amazing “inner movie”.