Superficiality and emotions…
Human beings are wonderfully complex, so it’s interesting that sci-fi writers love to write about computers developing near-human characteristics (I’m guilty too—see The Golden Years of Virginia Morgan—FYI: this is a free download on Amazon starting tomorrow, June 28, through July 2; also, Odri’s starship in Sing a Samba Galactica is just another member of the crew). But, let’s face it, it’s hard to imagine an AI computer program capable of modeling the emotional ingredients that influence human decision making. (I suppose you could argue that you don’t want emotions influencing the computer’s thinking because they so often get humans in trouble, but that’s another issue.)
Last week I was struck by the stock market’s reaction to Bernanke’s announcement that the Fed was going to halt its stimulus policies and, in particular, let interest rates rise to a self-sustaining, steady-state level. The best way to describe it is as an “oh-my-God” reaction from Wall Street and the rest of the financial world to an abrupt change in the rules of the game. Ignoring the fact that we can’t model these emotional responses (part of the problem), we should still wonder why. Why is it that human beings have knee-jerk emotional reactions to outside stimuli that can send their world into a vortex of disaster?
Hypothesis 1: Our communication capabilities have evolved to the point where every reaction tends to be a knee-jerk reaction. In spite of social media and the internet—more likely, because of them—we cannot dig deep into an issue to find the hidden nuances and connections to other related or influencing events. We live in a perpetual state of information overload: we are bombarded with information—often contradictory—but no longer have time to digest it. We are, in fact, expected to react as fast as computers, but we can’t. Our minds aren’t built that way. As a result, that first knee-jerk reaction is often the only one, as we are forced to move on to other apparent crises.
This leads to superficiality in our decision-making. I suppose in some cases that’s not a bad thing. We often speak of “becoming wound around the axle”—a cliché used in corporate America to describe someone getting bogged down in the details. Another adage, “don’t over-think the problem,” is also prevalent. This extends to supervisors rewarding employees who can stand in front of an audience, spout the official corporate positions, and field questions quickly, men and women who always have the ready answer (all too often incorrect). No, people who over-think the problem aren’t rewarded in corporate America—not even in scientific research venues or the halls of government.
Hypothesis 2: Emotions can lead to egotistical solutions to problems, solutions that support the decision-maker’s agenda but are bad for almost everyone else. One’s emotions taint every opinion one holds. Bigotry, of course, is the worst example. Whether or not someone says he’s a racist, and even if he firmly believes he’s not, the emotion is still there. We all have to fight this, some less than others, depending on childhood upbringing, educational experiences, and events in our lives. We should all analyze our knee-jerk reactions to outside stimuli. Some of us can do it faster than others. Some never can do it—the incurable bigots, for example.
Emotions can cloud the decision-making process to the point where superficiality is superseded by bias. There is a whole class of people with agendas driven by emotions who voraciously search the information-sphere for any study that will support that agenda while ignoring equally relevant studies that don’t. This bias, of course, is just superficiality in a different form, ignoring one side of an issue, and little real logic is applied beyond the binary choice of accepting or excluding each study. This is the most dangerous kind of emotional/superficiality combination, because this class of people thinks they have all the answers due to the time they spend on “their research.” It can be a particularly dangerous form of intellectual masturbation—dangerous to the individual and society at large.
There was a phrase that came out in the last few elections—“I don’t want the government messing up my Medicare.” As we age, we tend to become more emotional. I don’t have any studies to back that up, but I am a keen observer of human nature, and it seems to be true. What these elderly voters were often expressing was their knee-jerk reaction to the fear that certain politicians were going to change something that serves them well. They are satisfied with the status quo and don’t want to change anything. Never mind that Medicare is a government program fraught with inefficiencies and fraud, and that the debate about how to fix its problems should be a reasoned one among the voting public.
Politicians know how to tap into those emotions and thus keep the discussion focused on superficialities. One faction, for example, wants to solve Medicare’s problems with a voucher system. By focusing on superficialities, they hope that concerns about too much government and a desire for a balanced budget will cover up their secret agenda of destroying entitlements. It backfired on them because of the elders’ knee-jerk reaction. In that case, the knee-jerk reaction was a positive one. But knee-jerk reactions can be negative too.
Consider the 2004 election. Candidate Kerry lost Ohio, and the election, to candidate Bush because politicians appealed to the state’s voters’ emotional opposition to gay marriage. This single-issue voting plagues modern elections; politicians use it as a hammer, and the country is often stuck in a rut for years as a result. Considering the trend in the U.S. today, gay marriage is not much of an issue, especially among the young, who have other things to worry about. In fact, most people do. One state’s voters, voting on emotional grounds and cheered on by manipulative politicians, thwarted our chances of doing something about those other things.
Superficiality and emotions are more common now—reasoned consideration of all the nuances, without factoring in our emotional proclivities, seems to be a thing of the past. The stock market reflects this. Now the challenge for the sci-fi writer seems extremely difficult. As much as we would like him to have one (with an on-and-off switch), Data’s emotion chip from Star Trek is not possible (whether Data himself is possible is another interesting question, though androids per se seem more plausible). It’s not a simple matter of injecting randomness. Emotions are not random. Maybe we have to wait until we have computers built from biological components. But isn’t the human mind just such a computer?
Moreover, computers are anything but superficial. How do we write algorithms for our computers so that they ignore certain data because of something akin to emotions? Do we even want to do this? How do we even know what algorithms to write when we understand the human mind so poorly? Between Asimov’s Daneel Olivaw and Star Trek’s Data, we can imagine that one day it might be possible. But is this in the realm of sci-fi or fantasy?
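Just to make the question concrete (and purely as a toy illustration of my own, not anyone’s real design), here is a little sketch in Python of what an “emotion-like” filter on data might look like: a hypothetical agent that discounts evidence contradicting its current belief (the confirmation bias described above) and lurches when a signal surprises it, much as Wall Street did last week. The names and numbers (EmotionalAgent, bias_strength, panic_threshold) are invented for the sketch.

class EmotionalAgent:
    """Toy model of emotion-like filtering: discount evidence that
    contradicts the current belief (confirmation bias) and lurch
    when a signal is too surprising (the knee-jerk reaction)."""

    def __init__(self, belief=0.5, bias_strength=0.8, panic_threshold=0.4):
        self.belief = belief                    # current opinion, on a 0-to-1 scale
        self.bias_strength = bias_strength      # how heavily contrary evidence is discounted
        self.panic_threshold = panic_threshold  # surprise level that triggers a knee-jerk

    def update(self, evidence):
        """Take one piece of evidence (0 to 1) and nudge the belief."""
        surprise = abs(evidence - self.belief)
        if surprise > self.panic_threshold:
            # Knee-jerk: lurch most of the way toward the new signal, no digestion.
            self.belief = 0.3 * self.belief + 0.7 * evidence
        else:
            # Confirmation bias: evidence on "our side" counts fully;
            # contrary evidence is discounted by bias_strength.
            agrees = (evidence > 0.5) == (self.belief > 0.5)
            weight = 1.0 if agrees else (1.0 - self.bias_strength)
            self.belief += 0.1 * weight * (evidence - self.belief)
        return self.belief

agent = EmotionalAgent()
for signal in [0.55, 0.60, 0.10, 0.95]:  # a stream of mixed "studies" or market news
    print(round(agent.update(signal), 3))

A dispassionate program would weigh every signal the same; this one plays favorites and panics, which is roughly what the paragraphs above describe, and it shows how little of the real thing such a crude hack actually captures.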
And so it goes….
[If you enjoyed this post, please support this blog: buy, read, and review some of my books.]