Computers make better decisions than humans because they aren’t weighed down by biases, ego, and the need to rationalize decisions after the fact. An economically rational player would make more money on Deal or No Deal than a typical human contestant. We can’t help it: it’s the way we evolved. Everything from shopping to teamwork to the way we elect our leaders is tainted by the stupidity of how we make decisions.
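To make “economically rational” concrete: on Deal or No Deal, the rational baseline is to accept the banker’s offer only when it exceeds the expected value of the unopened cases. A minimal sketch, with made-up amounts and a hypothetical offer:

```python
# Hypothetical remaining case amounts and banker's offer (not real show data).
remaining = [0.01, 100, 1_000, 50_000, 300_000]
offer = 68_000

# The rational player compares the offer to the average of what's left.
expected_value = sum(remaining) / len(remaining)  # = 70220.002
decision = "deal" if offer > expected_value else "no deal"

print(f"expected value: {expected_value:,.2f} -> {decision}")
```

Here the offer is below the expected value, so the rational move is “no deal”, even though most contestants would feel enormous pressure to lock in a sure $68,000.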
Just as external storage can become a form of prosthetic memory, so computers can become prosthetic decision-makers. If we were to make them understand the dilemmas before us, computer assistants could advise us on the economically rational thing to do.
Would we be able to deal with being told we’re wrong so much of the time?
Evolution isn’t the best designer. While the variety of life is astonishing–prompting many to invoke a creator–a study of biology reveals plenty of inefficient compromises. For example, the recurrent laryngeal nerve runs from the brain, down around the heart, and back up to the vocal cords. This is a terrible, wasteful, error-prone design; but because it arose from an arrangement that was efficient in fish (which have no necks), it’s what we’re stuck with.
Here’s a video of a dissection of a giraffe, showing just how inefficient the nerve’s routing is. It’s not for the squeamish.
Our brains, like our bodies, evolved from these kinds of evolutionary compromises. We think we’re smart beings, making rational decisions; in fact, we tend to rationalize after the fact and go with what worked in the past. This made good sense for our ancestors: they shouldn’t sit around deliberating over whether that tiger was going to eat them; they should just run. We reinforce patterns that work, because they’re the ones that keep us alive.
One of the side effects of reinforcing our past beliefs is that we’re reluctant to reconsider things, even on the basis of new information. We play all sorts of mental tricks on ourselves to help us stick to our beliefs in the face of evidence to the contrary: adaptive preference formation, cognitive bias, and so on. All of these are attempts to relieve the discomfort of cognitive dissonance, a disconnect between our belief systems and the real world.
So if you’re answering a questionnaire, your early answers may bias your later ones. Psychologists love this kind of research–often replacing self-interest with candies, on the assumption that everyone has a sweet tooth.
According to recent research, when you ask a test subject something to do with fairness, their answer causes them to “dig in their heels”, reinforcing their later behavior. A classic demonstration of our reluctance to revise a choice is the Monty Hall problem: you’re presented with three doors, with a prize behind one of them. You pick one of the three–then the host opens one of the other two doors to reveal that it’s empty, and asks if you’d like to switch. Switching doubles your odds of winning, yet most people stick with their first pick. This New York Times piece explains the problem, and even provides an online game you can play–though the piece notes that some of this behavior can be explained by pre-existing biases, not just those acquired during the test.
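If the two-thirds advantage of switching feels wrong–and it does to almost everyone–a quick simulation settles it. A minimal sketch of the game:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)    # door hiding the prize
        choice = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the pick nor the prize.
        opened = random.choice(
            [d for d in range(3) if d != choice and d != prize])
        if switch:
            # Move to the one remaining unopened door.
            choice = next(
                d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(f"stick:  {play(switch=False):.3f}")  # ≈ 0.333
print(f"switch: {play(switch=True):.3f}")   # ≈ 0.667
```

Sticking only wins when your first pick was right (one time in three); switching wins every other time. Yet our instinct to defend the original choice is so strong that even this arithmetic rarely convinces people on first exposure.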
Whether it’s prejudice or a learned bias that forces us to reject new evidence, the reality is that humans make decisions badly. The Prisoner’s Dilemma and other experiments show that we often make poor decisions rather than economically or morally rational ones (check out the awesome SMBC explanation of this dilemma).
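The Prisoner’s Dilemma is worth spelling out, because it shows that even flawless individual logic can produce a bad collective outcome. With standard textbook payoffs (the numbers below are illustrative, measured in years of prison, so lower is better):

```python
# Payoffs as (my sentence, their sentence) for (my move, their move).
PAYOFF = {
    ("cooperate", "cooperate"): (1, 1),    # both stay silent: light sentences
    ("cooperate", "defect"):    (10, 0),   # I stay silent, they betray me
    ("defect",    "cooperate"): (0, 10),   # I betray them, they stay silent
    ("defect",    "defect"):    (5, 5),    # both betray: heavy sentences
}

def best_response(their_move):
    """The move that minimizes my own sentence, given the other's move."""
    return min(("cooperate", "defect"),
               key=lambda mine: PAYOFF[(mine, their_move)][0])

# Whatever the other player does, defecting is individually better...
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# ...yet mutual defection (5, 5) leaves both worse off than
# mutual cooperation (1, 1).
```

Each player’s “rational” best response is to defect, so two rational players land on the worst shared outcome. Real humans, meanwhile, often cooperate anyway–irrational by the game’s logic, but sometimes better off for it.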
Can machines help us overcome these inefficiencies? If a computer could guide our decision-making, we might overcome these biases. At the very least, we’d make better decisions on TV gameshows. Proponents would point out the tremendous benefits of rational decision-making: a utopian world where we all solved for maximal utility and minimal suffering. Detractors would see this as a great levelling, turning us all into automatons, bland communists with little incentive to try something new or take a risk, stripped of the biases and preferences that make us individuals.
If computers helped us decide, we’d find out that Dan Ariely is right: we’re largely irrational. But can our self-reinforcing psyches cope with being told we’re fools? Or will we reject the rational correction, retreating into costly self-affirmation and embracing our bad decisions?